\begin{document} \title{ADS modules} \date{} \author{{\bf \normalsize Adel Alahmadi\footnote{This research was supported by the Deanship of Scientific Research, King Abdulaziz University, Jeddah, project no. 514/130/1432.}, S. K. Jain and Andr\'{e} Leroy} } \maketitle\markboth{\rm A. Alahmadi, S. K. Jain and A. Leroy}{\rm ADS modules} \begin{abstract} We study the class of $ADS$ rings and modules introduced by Fuchs \cite{F}. We give some connections between this notion and classical notions such as injectivity and quasi-continuity. A simple ring $R$ such that $R_R$ is ADS must be either right self-injective or indecomposable as a right $R$-module. Under certain conditions we can construct an ADS hull, unique up to isomorphism. We introduce the concept of completely ADS modules and characterize completely ADS semiperfect right modules as direct sums of semisimple and local modules. \end{abstract} \section{INTRODUCTION} The purpose of this note is to study the class of $ADS$ rings and modules. Fuchs \cite{F} calls a right module $M$ right $ADS$ if for every decomposition $M=S\oplus T$ of $M$ and every complement $T^{\prime}$ of $S$ we have $M=S\oplus T^{\prime}$. Clearly, any ring in which idempotents are central (in particular, any commutative or reduced ring) has the property that $R_{R}$ is $ADS$. Moreover, if $R$ is commutative then every cyclic $R$-module is $ADS$. We note that every right quasi-continuous module (also known as a $\pi$-injective module) is right $ADS$, but not conversely. However, a right ADS module which is also CS is quasi-continuous. We provide equivalent conditions for a module to be $ADS$. A module need not have an $ADS$ hull in the usual sense, but we show that, under some hypotheses, every nonsingular right module possesses a right $ADS$ hull which is unique up to isomorphism. We call a right module $M$ completely $ADS$ if each of its subfactors is $ADS$. 
We characterize completely $ADS$ semiperfect right modules as direct sums of semisimple and local modules. In particular, we give an alternative proof of the characterization of semiperfect $\pi c$-rings (rings whose cyclics are quasi-continuous). \section{Definitions and Notations} Throughout, every module is a right module unless otherwise stated. All rings have identity and all modules are unital. A module $M$ is called \textit{continuous} if it satisfies (C1): each complement in $M$ is a direct summand, and (C2): if a submodule $N$ of $M$ is isomorphic to a direct summand of $M$, then $N$ itself is a direct summand of $M$. A module $M$ is called \textit{quasi-continuous} ($\pi$-injective) if it satisfies (C1) and (C3): the sum of two direct summands of $M$ with zero intersection is again a direct summand of $M$. Equivalently, a module $M$ is quasi-continuous if and only if every projection $\pi_i:N_1\oplus N_2 \longrightarrow N_i$, where $N_i$ ($i=1,2$) are submodules of $M$, can be extended to $M$. For two modules $A$ and $B$, we say that $A$ \textit{is} $B$\textit{-injective} if any homomorphism from a submodule $C$ of $B$ to $A$ can be extended to a homomorphism from $B$ to $A$. We note that if $A$ is $B$-injective and $A$ is contained in $B$, then $A$ is a direct summand of $B$. A module $M$ is called \textit{semiperfect} if each of its homomorphic images has a projective cover. A submodule $N$ of a module $M$ is \textit{small} in $M$ if for any proper submodule $P$ of $M$, $P+N\neq M$; we write $N\ll M$. Let $A$ and $P$ be submodules of a module $M$. Then $P$ is called a supplement of $A$ if it is minimal with the property $A + P = M$. 
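The following elementary example, included here only as an illustration of the notions of complement and of the ADS condition, is a routine verification. Consider the $\mathbb{Z}$-modules
\[
M=\mathbb{Z}/6\mathbb{Z}=2M\oplus 3M \qquad\text{and}\qquad N=\mathbb{Z}\oplus \mathbb{Z}=A\oplus B,\quad A=\mathbb{Z}\oplus 0,\ B=0\oplus \mathbb{Z}.
\]
The only submodules of $M$ are $0$, $2M$, $3M$ and $M$, so the unique complement of $2M$ is $3M$ and $M$ is trivially ADS, in line with the observation above on commutative rings. On the other hand, the submodule $C=\{(n,2n)\mid n\in \mathbb{Z}\}$ of $N$ has zero intersection with $A$ and is maximal with this property: any submodule containing $C$ and some $(x,y)$ with $y\neq 2x$ also contains $2(x,y)-y(1,2)=(2x-y,0)\in A\setminus \{0\}$. Thus $C$ is a complement of $A$ in $N$, yet $A+C=\mathbb{Z}\oplus 2\mathbb{Z}\neq N$, so $N$ is not ADS.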
A module $M$ is \textit{discrete} if it satisfies (D$_{1}$): for every submodule $A$ of $M$ there exists a decomposition $M=M_{1}\oplus M_{2}$ such that $M_{1}\subset A$ and $M_{2}\cap A$ is small in $M$, and (D$_{2}$): if $A$ is a submodule of $M$ such that $M/A$ is isomorphic to a direct summand of $M$, then $A$ is a direct summand of $M$. A module $M$ is called \textit{quasi-discrete} if it satisfies (D$_{1}$) and (D$_{3}$): if $M_1$ and $M_2$ are summands of $M$ and $M=M_1+M_2$, then $M_1\cap M_2$ is a summand of $M$. For any module $M$, $E(M)$ denotes the injective hull of $M$. We recall a useful result of Azumaya: for any two modules $M$ and $N$, if $M$ is $N$-injective then for any $R$-homomorphism $\sigma :E(N) \rightarrow E(M)$ we have $\sigma (N)\subseteq M$. \section{PROPERTIES OF ADS MODULES} We begin with a lemma which is useful in checking the ADS property of a module. It was proved by Burgess and Raphael \cite{BR}; for the sake of completeness, we provide the proof. \begin{lemma} \label{ADS iff relative injectivity} An $R$-module $M$ is ADS if and only if for each decomposition $M=A\oplus B$, $A$ and $B$ are mutually injective. \end{lemma} \begin{proof} Suppose $M$ is $ADS$. We prove that $A$ is $B$-injective. Let $C$ be a submodule of $B$ and let $f:C\rightarrow A$ be an $R$-homomorphism. Set $X=\{c+f(c)\mid c\in C\}$. Then $X\cap A=0$, so $X$ is contained in a complement, say $K$, of $A$. By hypothesis, $M=A\oplus K$. Define an $R$-homomorphism $g:B\rightarrow A$ as the composition of the projection $\pi _{K}:M\rightarrow K$ along $A$ followed by the projection $\pi _{A}:M\rightarrow A$ along $B$, restricted to $B$. Writing an element $c\in C$ as $c=(c+f(c))-f(c)$, we see that $\pi _{A}\pi _{K}=f$ on $C$, and hence $g$ is an extension of $f$. \noindent Conversely, suppose that for each decomposition $M=A\oplus B$, $A$ and $B$ are mutually injective. Let $C$ be a complement of $A$. 
Set $U=B\cap (A\oplus C)$, which is nonzero because $A\oplus C$ is essential in $M$. Let $\pi _{A}$ be the projection of $A\oplus C$ onto $A$ and let $f:U\rightarrow A$ be the restriction of $\pi _{A}$ to $U$. By assumption, $f$ can be extended to $g:B\rightarrow A$. Let $b\in B$ and let $D=(b-g(b))R+C$. We claim $D\cap A=0$. Let $a\in A$ and let $a=br-g(b)r+c$ for some $c\in C$. This gives $br=a+g(b)r-c\in U$ and so $f(br)=a+g(b)r$, because $f$ is the identity on $A$ and $0$ on $C$. This yields $a=0$, proving our claim. Thus $D=C$ and hence $b-g(b)\in C$ for all $b\in B$. Therefore $b=(b-g(b))+g(b)\in C\oplus A$ and so $M=A\oplus B\subseteq C\oplus A$, proving that $M=C\oplus A$.\end{proof} Our next proposition gives equivalent statements as to when a module is ADS, analogous to the characterization of quasi-continuous modules (cf. \cite{GJ}). \begin{proposition} \label{equivalent conditions for ADS} For an $R$-module $M$ the following are equivalent: \begin{enumerate} \item[(i)] $M$ is ADS. \item[(ii)] For any direct summand $S_{1}$ and any submodule $S_{2}$ having zero intersection with $S_{1}$, the projection maps $\pi _{i}:S_{1}\oplus S_{2}\longrightarrow S_{i}$ ($i=1$, $2$) can be extended to endomorphisms (indeed projections) of $M$. \item[(iii)] If $M=M_{1}\oplus M_{2}$, then $M_{1}$ and $M_{2}$ are mutually injective. \item[(iv)] For any decomposition $M=A\oplus B$, the projection $\pi _{B}:M\longrightarrow B$ is an isomorphism when restricted to any complement $C$ of $A$ in $M$. \item[(v)] For any decomposition $M=A\oplus B$ and any $b\in B$, $A$ is $bR$-injective. \item[(vi)] For any direct summand $A\subseteq^{\oplus} M$ and any $c\in M$ such that $A\cap cR=0$, $A$ is $cR$-injective. \end{enumerate} \end{proposition} \begin{proof} (i)$\Rightarrow$(ii) Let ${\hat{S}_{2}}$ be a complement of $S_{1}$ containing $S_{2}$. Then, by the definition of an ADS module, $M=S_{1}\oplus {\hat{S}_{2}}$. 
Hence the canonical projections ${\hat{\pi}_{1}}:S_{1}\oplus {\hat{S}_{2}}\longrightarrow S_{1}$ and ${\hat{\pi}_{2}}:S_{1}\oplus {\hat{S}_{2}}\longrightarrow {\hat{S}_{2}}$ are clearly extensions of $\pi _{1}$ and $\pi _{2}$. \noindent (ii)$\Rightarrow$(i) Let $M=A\oplus B$ and let $C$ be a complement of $A$ in $M$. We must show that $M=A\oplus C$. By hypothesis, the projection $\pi :A\oplus C\longrightarrow C$ can be extended to an endomorphism $f:M\longrightarrow M$. We claim $f(M)\subset C$. Since $A\oplus C$ is essential in $M$, for any $0\neq m\in M$ there exists an essential right ideal $E$ of $R$ such that $0\neq mE\subset A\oplus C$. This gives $f(m)E=\pi (mE)\subset C$. Since $C$ is closed in $M$, this yields $f(m)\in C$, proving our claim. We also remark that $f^{2}=f$, $M=\ker(f)\oplus f(M)$ and $\ker(f)=\{m-f(m)\;|\;m\in M\}$. We now show that $\ker(f)=A$. For any $a\in A$, clearly $a=a-f(a)\in \ker(f)$; hence $A\subset \ker(f)$. Now let $0\neq m-f(m)\in \ker(f)$. There exists $r\in R$ such that $0\neq (m-f(m))r\in A\oplus C$. This implies $f[(m-f(m))r]=f(mr)-f(f(mr))=f(mr)-f(mr)=0$. Since $f$ extends $\pi $, this means that $0\neq (m-f(m))r\in \ker(\pi )=A$. Since $A$ is closed in $M$, we conclude $A=\ker(f)$, completing the proof. \noindent (i)$\Leftrightarrow$(iii) This is Lemma \ref{ADS iff relative injectivity} above. \noindent (i)$\Leftrightarrow$(iv) Let $C$ be a complement of $A$. Then $\ker (\pi _{B}|_{C})=0$. Since $A\oplus C=(A\oplus C)\cap (A\oplus B)=((A\oplus C)\cap B)+A$, we have $\pi_{B}(C)=\pi _{B}(A\oplus C)=\pi _{B}((A\oplus C)\cap B)=(A\oplus C)\cap B$. This gives $\pi _{B}(C)=B$ when $M$ is ADS. On the other hand, if $\pi _{B}(C)=B$ then $M=A\oplus C$; hence $M$ is ADS. \noindent (i)$\Leftrightarrow$(v) This is classical (cf. Proposition 1.4 in \cite{MM}). \noindent (i)$\Rightarrow$(vi) Let $C$ be a complement of $A$ containing $cR$. Since $M$ is ADS, we have $M=A\oplus C$. 
Using $(v)$, this shows that $A$ is $cR$-injective. \noindent (vi)$\Rightarrow$(i) This is clear: if $M=A\oplus B$, then $(vi)$ implies that $A$ is $bR$-injective for all $b\in B$, and Proposition 1.4 in \cite{MM} yields that $A$ is $B$-injective. \end{proof} Let us mention the following necessary condition for a module to be ADS. \begin{corollary} Let $M_R$ be an ADS module. For any direct summand $A\subseteq^{\oplus} M$ and any $(a,c,r)\in A\times M \times R$ such that $cR\cap A=0$ and $r.ann(cr) \subseteq r.ann(a)$, there exists $a'\in A$ such that $a=a'r$. If $R$ is a right PID, the converse is true. \end{corollary} \begin{proof} By Proposition \ref{equivalent conditions for ADS}(vi), we know that $A$ is $cR$-injective. Consider $\varphi\in Hom_R(crR,A)$ defined by $\varphi(cr)=a$. The condition on annihilators guarantees that $\varphi$ is well defined. By relative injectivity, this map can be extended to $\overline{\varphi}:cR\longrightarrow A$, and hence we get $a=\varphi(cr)=\overline{\varphi}(c)r$. We obtain the desired result by setting $a'=\overline{\varphi}(c)$. \noindent If $R$ is a right principal ideal domain, then the submodules of $cR$ are of the form $crR$ for some $r\in R$. The condition mentioned in the statement of the corollary makes it possible to extend any map in $Hom_R(crR,A)$ to a map in $Hom_R(cR,A)$ for any direct summand $A\subseteq ^\oplus M$. Invoking Proposition \ref{equivalent conditions for ADS}(vi), we conclude that $M$ is ADS. \end{proof} It is known that the sum of two closed submodules of a quasi-continuous module is closed \cite{GJ}. We prove that the direct sum of two closed submodules of an ADS module is again closed when one of them is a summand. \begin{proposition} Let $A$ and $B$ be two closed submodules of an ADS module $M$ such that $A$ is a summand and $A\cap B=0$. Then $A\oplus B$ is a closed submodule of $M$. \end{proposition} \begin{proof} Let $C$ be a complement of $A$ containing $B$. 
Since $M$ is ADS, we have $M=A\oplus C$. Let $x=a+c$ be in the closure of $A\oplus B$ in $M$, where $a\in A$ and $c\in C$. Since $a\in A\subseteq A\oplus B$, the element $c=x-a$ also lies in the closure of $A\oplus B$. Hence there exists an essential right ideal $E$ of $R$ such that $cE \subseteq (A\oplus B) \cap C=B$. The fact that $B$ is closed implies $c \in B$. Hence $x\in A\oplus B$, as desired. \end{proof} \begin{remark} Let $A,B$ be closed submodules of an ADS module $M$ such that $A$ is a direct summand of $M$. If $A\cap B$ is a direct summand of $M$, then $A + B$ is closed. Indeed, let $K$ be a complement of $A\cap B$. Since $M$ is ADS, we have $M=(A\cap B)\oplus K$. Hence $A+B=A\oplus (K\cap B)$. The above proposition then yields the result. \end{remark} The proposition that follows gives an interesting property of an ADS module. The original statement is due to Gr\"{a}tzer and Schmidt (cf. Theorem 9.6 in \cite{F}). We first prove the following lemma. \begin{lemma} \label{Decomposition of injective hull goes down} Let $M=B\oplus C$ be a decomposition of $M$ with projections $\beta :M\rightarrow B$ and $\gamma :M\rightarrow C$. Then $M=B\oplus C_1$ if and only if there exists $\theta \in End(M)$ such that $C_1=(\gamma - \beta\theta\gamma)(M)$. \end{lemma} \begin{proof} Suppose that $M=B\oplus C_{1}$ with projections $\beta _{1}$ on $B$ and $\gamma_{1}$ on $C_{1}$. We will show that $\beta _{1}=\beta +\beta \theta \gamma$ and $\gamma _{1}=\gamma -\beta \theta \gamma$ with $\theta =\gamma -\gamma _{1}$. We have $B\subseteq \ker (\theta )$, so $\theta =\theta \beta +\theta \gamma =\theta \gamma $. \noindent Let $m=b+c=b_{1}+c_{1}$, where $b,b_{1}\in B$, $c\in C$, $c_{1}\in C_{1}$. Then $\theta(m) =c-c_{1}=b_{1}-b\in B$. Thus $\beta \theta =\theta $. Hence $\gamma _{1}=\gamma -\theta =\gamma -\beta \theta \gamma $. Also $\beta _{1}=1_{M}-\gamma _{1}=\beta +\gamma -\gamma _{1}=\beta +\beta \theta \gamma $. 
Conversely, if $\beta _{1}$ and $\gamma _{1}$ are defined as above, that is, $\beta _{1}=\beta +\beta \theta \gamma$ and $\gamma _{1}=\gamma -\beta \theta \gamma$ for some $\theta \in End(M)$, then $\beta _{1}+\gamma _{1}=1_{M}$, $\beta _{1}^{2}=\beta _{1}$, $\gamma _{1}^{2}=\gamma _{1}$ and $\beta _{1}\gamma _{1}=\gamma _{1}\beta _{1}=0$. Therefore $M=\beta _{1}M\oplus \gamma _{1}M$. Since $\beta_{1}(M)\subset B$ and $\beta_1(b)=\beta(b)=b$ for $b\in B$, we have $M=B\oplus (\gamma - \beta\theta\gamma)(M)$, as required. \end{proof} Using the same notation as in the previous lemma, we state the following corollary. \begin{corollary} \label{characterization of ADS following Fuchs} A module $M$ is ADS if and only if for any decomposition $M=B\oplus C$ the complements of $B$ in $M$ are exactly the submodules of the form $(\gamma-\beta\theta\gamma)(M)$ for some $\theta\in End(M)$. \end{corollary} \begin{proposition} Let $M=B\oplus C$ be a decomposition of an ADS $R$-module $M$, and let $\beta$ and $\gamma$ be the projections on $B$ and $C$, respectively. Then the intersection $D$ of all the complements of $B$ is the largest fully invariant submodule of $M$ which has zero intersection with $B$. \end{proposition} \begin{proof} Let $\theta \in End(M)$. Then $C_{1}=(\gamma -\beta \theta \gamma )(M)$ is again a complement of $B$. For $c\in D$ we have $(\gamma -\beta \theta \gamma )(c)=c$ and $\gamma c=c$, because $c\in C_{1}\cap C$. Hence $\beta \theta c=0$ and $\theta c\in C$. This holds for all complements $C$, so $\theta c\in D$; thus $D$ is fully invariant in $M$ with $D\cap B=0$. On the other hand, assume $X$ is fully invariant with $X\cap B=0$. Since $M=B\oplus C$, $\pi _{B}(X)\subseteq X$ and $\pi_{C}(X)\subseteq X$, we obtain $X=(X\cap B)\oplus (X\cap C)=X\cap C$. Hence $X\subseteq C$. Since $M$ is ADS, this holds for any complement of $B$ in $M$, and hence $X\subseteq D$. 
\end{proof} It is known that an indecomposable regular ring which is right continuous is right self-injective (cf. Corollary 13.20 in \cite{G}). The following theorem is a generalization of this result for simple rings, without the assumption of regularity. We may add that an indecomposable two-sided continuous regular ring is simple (cf. Corollary 13.26 in \cite{G}). \begin{theorem} Let $R$ be an ADS simple ring. Then either $R_{R}$ is indecomposable or $R$ is a right self-injective regular ring. \end{theorem} \begin{proof} Let $Q$ be the maximal right quotient ring of $R$, which is regular and right self-injective. Since $R$ is right (left) nonsingular, $E(R)=Q$. Suppose $R_R$ is not indecomposable and let $e$ be a nontrivial idempotent. Since $R$ is ADS, $eR$ is $(1-e)R$-injective (cf. Lemma \ref{ADS iff relative injectivity}). Furthermore, since $Hom((1-e)Q,eQ)\cong eQ(1-e)$, we get $eQ(1-e)(1-e)R\subseteq eR$. Because $R$ is simple, $R=R(1-e)R\subseteq Q(1-e)R$. This yields $1\in Q(1-e)R$. Therefore $Q=Q(1-e)R$, and so $eQ=eR$. Similarly $(1-e)Q=(1-e)R$; hence $R=Q$, i.e., $R$ is a right self-injective regular ring. \end{proof} \begin{corollary} A simple regular right continuous ring is right self-injective. \end{corollary} \section{ADS HULLS} We now proceed to construct an $ADS$ hull of a nonsingular module. Burgess and Raphael (cf. \cite{BR}) showed that an example can be constructed of a finite dimensional module over a finite dimensional algebra which has no $ADS$ hull. We show that, under some circumstances, such an ADS hull does exist. \begin{lemma} Suppose $M$ is nonsingular. Then $M$ is ADS if and only if for every decomposition $E(M)=E_{1}\oplus E_{2}$ such that $E_{1}\cap M$ is a direct summand of $M$, we have $M=(E_{1}\cap M)\oplus (E_{2}\cap M)$. \end{lemma} \begin{proof} Suppose $M$ is ADS. We may write $M=(E_{1}\cap M)\oplus K$, where $K$ is a complement of $E_{1}\cap M$. Let $e_{i}:(E_{1}\cap M)\oplus (E_{2}\cap M)\longrightarrow E_{i}\cap M$ be the projection map. 
Then by Proposition \ref{equivalent conditions for ADS}(ii) there exists $e_{i}^{\ast }:M\longrightarrow M$ extending $e_{i}$. Let $\pi _{i}:E_{1}\oplus E_{2}\longrightarrow E_{i}$ be the natural projection. Since $E(M)$ is injective, we can further extend $e_{i}^{\ast }$ to $e_{i}^{\ast \ast }\in End(E(M))$. We claim that $e_{i}^{\ast \ast }$ is an idempotent in $End(E(M))$. Indeed, let $x\neq 0$ be any element of $E(M)$ and let $A$ be an essential right ideal of $R$ such that $0\neq xA\subseteq M$. We have $(e_{i}^{\ast \ast })^{2}(x)A=(e_{i}^{\ast \ast })^{2}(xA)=(e_{i}^{\ast })^{2}(xA)=e_{i}^{\ast }(xA)=e_{i}^{\ast \ast }(xA)=e_{i}^{\ast \ast }(x)A$. Since $M$ is nonsingular, this yields the claim. Now $M\subseteq _{e}E(M)=E_{1}\oplus E_{2}$ implies $E_{1}\cap M\subseteq _{e}E_{1}$. Similarly $E_{2}\cap M\subseteq _{e}E_{2}$, and so $e_{i}^{\ast \ast }=\pi _{i}$ on $(E_{1}\cap M)\oplus (E_{2}\cap M)\subseteq _{e}M\subseteq _{e}E(M)$. Since $M$ is nonsingular, $e_{i}^{\ast \ast }=\pi _{i}$ on $E(M)$. In particular, $e_{i}^{\ast \ast }(E(M))=\pi _{i}(E(M))=E_{i}$ and $\pi _{i}(M)\subseteq M$, so that $M=(\pi _{1}+\pi _{2})(M)\subseteq \pi _{1}(M)\oplus \pi _{2}(M)\subseteq (E_{1}\cap M)\oplus (E_{2}\cap M)$. Conversely, let $M=A\oplus B$ and let $C$ be a complement of $A$. We must show that $M=A\oplus C$. Since $A\oplus C\subseteq _{e}M$, we get $E(M)=E(A)\oplus E(C)$. Since both $A$ and $C$ are closed in $M$, we have $E(A)\cap M=A$ and $E(C)\cap M=C$. Since $A$ is a direct summand of $M$, the hypothesis gives $M=(E(A)\cap M)\oplus (E(C)\cap M)=A\oplus C$, as desired. \end{proof} \begin{theorem} \label{Characterization of ADS via idempotents} Let $M$ be a right $R$-module. Then $M$ is ADS if and only if for every $e=e^2$, $f=f^2\in End(E(M))$ with $eM\subset M$ and $fE(M)= eE(M)$, we have $fM\subset M$. \end{theorem} \begin{proof} Let us prove necessity. We have $(1-f)(E(M))\cap M \subseteq _e(1-f)(E(M))$ and $f(E(M))\cap M\subseteq _e f(E(M))$. 
Thus $((1-f)(E(M))\cap M )\oplus (f(E(M))\cap M )\subseteq _e M$. We claim $f(E(M))\cap M=e(M)$. Note first that $e(E(M)) \cap M= f(E(M))\cap M$. Clearly $eE(M)\cap M \subseteq eM$, and the reverse inclusion is obvious, proving the claim. Let $C=(1-f)(E(M))\cap M$. Then $C\oplus eM \subseteq_e M$. Because $eM$ is closed, $C$ is a complement of $eM$ in $M$ (cf. Lemma 6.32 in Lam's book). Because $M$ is ADS, we have $M=e(M) \oplus C$. Let $g$ be the projection onto $eM$ along $C$, so that $g(M)=e(M)$. Now $g(M)=e(M)\subseteq f(E(M))$. This gives $eM=g(M)=fg(M)=f(eM)$. Since $C$ is contained in $(1-f)(E(M))$, $f(C)=0$. Then $fM =f(C\oplus eM)=eM\subseteq M$. \noindent Conversely, let $M=eM\oplus (1-e)(M)$ and let $C$ be a complement of $e(M)$ in $M$. We want to show $M=e(M)\oplus C$. Now $C\oplus e(M)\subseteq _{e}M$, and so $E(C)\oplus E(eM)=E(M)$; hence $E(C)\oplus eE(M)=E(M)$. Let $f$ be the projection on $eE(M)$ along $E(C)$. We have $f(E(M))=e(E(M))$ and $E(C)=(1-f)(E(M))$. By hypothesis, $f(M)\subseteq M$. Let $m\in M$. Then $m\in M\subseteq E(C)\oplus f(E(M))$, say $m=c+f(m)$ with $c\in E(C)$. Then $c=m-f(m)\in E(C)\cap M=C$, because $C$ is closed. We conclude that $M=C\oplus e(M)$. \end{proof} We recall that any endomorphism $f\in End_R(M)$ of a nonsingular module $M$ can be uniquely extended to an endomorphism $f^*$ of its injective hull $E(M)$; moreover, if $f=f^2$ then $f^*=(f^*)^2$. With this notation we obtain the following corollary. \begin{corollary} Let $M$ be a right nonsingular $R$-module. Then $M$ is ADS if and only if for every $e=e^2\in End(M)$ and $f=f^2\in End(E(M))$ with $fE(M)= e^*E(M)$, we have $fM\subset M$. \end{corollary} We are now ready to show that, under some circumstances, an ADS hull can be constructed for a nonsingular module. For a nonsingular right $R$-module $M$, we continue to let $e^*$ denote the unique extension of $e^2=e\in End(M)$ to the injective hull $E(M)$ of $M$. \begin{theorem} \label{ADS hull} Let $M_R$ be a nonsingular right $R$-module. 
Let $\overline{M}$ denote the intersection of all the ADS submodules of $E(M)$ containing $M$. Suppose that for any $e^2=e\in End(\overline{M})$ and any ADS submodule $N$ of $E(M)$ containing $M$ we have $e^*(N) \subseteq N$. Then $\overline{M}$ is, up to isomorphism, the unique ADS hull of $M$. \end{theorem} \begin{proof} Let $\Omega$ be the set of ADS submodules $N$ of $E(M)$ with $M\subseteq N\subseteq E(M)$. Then $\overline{M}=\bigcap_{N\in \Omega }N$. We claim that $\overline{M}$ is ADS. Clearly $E(\overline{M})=E(M)$. Let $e=e^2\in End_R(\overline{M})$ and $f^2=f\in End(E(M))$ be such that $f(E(M))=e^*(E(M))$. By hypothesis, $e^*(N)\subseteq N$ for every $N\in \Omega $. So, for every $N\in \Omega $, $f(N)\subseteq N$ because $N$ is ADS. Let $x\in \overline{M}$. Then $x\in N$ for every $N\in \Omega $, and hence $f(x)\in N$ for every $N\in \Omega $. Therefore $f(x)\in \bigcap_{N\in \Omega }N=\overline{M}$, that is, $f(\overline{M})\subseteq \overline{M}$, proving our claim. \end{proof} \begin{remarks} Let us remark that the condition stated in the above theorem is, in particular, fulfilled if we consider the ADS hull of a nonsingular ring. Indeed, in this case we consider the ADS rings between $R$ and $Q:=E(R)$, and projections are identified with idempotents of these rings. Of course, these idempotents remain idempotents in overrings. \end{remarks} \section{COMPLETELY $ADS$ MODULES} \begin{theorem} \label{General decomposition} Let $M=\oplus _{i\in I}M_{i}$ be a decomposition of a module $M$ into a direct sum of indecomposable modules $M_{i}$, and suppose $M$ is completely ADS. Then \begin{enumerate} \item[(i)] For every $(i,j)\in I^{2}$, $i\ne j$, $M_{i}$ is $M_{j}$-injective. \item[(ii)] If $(i,j)\in I^{2}$, $i\ne j$, are such that $Hom_{R}(M_{i},M_{j})\neq 0$, then $M_{j}$ is simple. 
\item[(iii)] $M=S\oplus T$, where $S$ is semisimple and $T=\oplus _{j\in J}M_{j}$, for some $J\subseteq I$, is a direct sum of indecomposable modules. Moreover, for any $\theta \in End(M)$ we have $\theta(S)\subset S$ and, for $j\in J$, $\theta(M_{j})\subseteq M_{j}\oplus S$. \end{enumerate} \end{theorem} \begin{proof} Since the ADS property is inherited by direct summands, statement $(i)$ is an obvious consequence of Lemma \ref{ADS iff relative injectivity}. \noindent $(ii)$ For convenience, let us write $i=1$, $j=2$ and suppose that $0\ne \sigma \in Hom_R(M_1,M_2)$. We have $\sigma(M_1)\oplus M_2\oplus \dots\cong M_1/\ker(\sigma)\oplus M_2\oplus \dots =M/\ker(\sigma)$, which is ADS by assumption. Hence $\sigma(M_1)$ is $M_2$-injective and, since $\sigma(M_1) \subseteq M_2$, we get that $\sigma(M_1)$ is a direct summand of $M_2$. But $M_2$ is indecomposable, hence $\sigma(M_1)=M_2$. We conclude that $M_2\oplus M_2\cong\sigma(M_1)\oplus M_2$ is $ADS$. This means that $M_2$ is $M_2$-injective, i.e., $M_2$ is quasi-injective. Let us now show that for any $0\ne m_2\in M_2$, $m_2R=M_2$. Since $\sigma(M_1)=M_2$, there exists $m_1\in M_1$ such that $\sigma(m_1)=m_2$. We remark that $\sigma(m_1R)\oplus M_2=\frac{m_1R}{\ker \sigma\cap m_1R}\oplus M_2=\frac{m_1R\oplus M_2}{\ker\sigma \cap m_1R}$ is a submodule of $\frac{M}{\ker \sigma \cap m_1R}$. Since $M$ is completely ADS, we conclude that $\sigma(m_1R)\oplus M_2$ is ADS. As earlier in this proof, relative injectivity and indecomposability lead to $\sigma(m_1R)=M_2$. Hence $m_2R=M_2$, as desired. $(iii)$ Let $I_1$ consist of those $i\in I$ for which there exists $j\in I$, $j\ne i$, with $Hom_R(M_j,M_i)\ne 0$. We define $S:=\oplus_{i\in I_1}M_i$ and $T:=\oplus_{j\in J}M_j$, where $J:=I\setminus I_1$. Statement $(ii)$ above implies that $M=S \oplus T$, where $S$ is semisimple and $T$ is a direct sum of indecomposable modules. Moreover, if $j\in J$, then for any $i\in I$, $i\ne j$, we have $Hom_R(M_i,M_j)=0$. 
It is clear that, for any $\theta \in End(M)$, we must have $\theta (S)\subset S$. For $j\in J$ and $x\in M_j$, let us write $\theta(x) = y + z$, where $z\in S$ and $y\in T$. Since, for $l\in J$, $l\ne j$, $Hom_R(M_j,M_l)=0$, we have $\pi_l\theta(x)=0$, where $\pi_l: M\rightarrow M_l$ is the natural projection. Thus $\pi_l(y)=0$. This shows that $y\in M_j$, as required. \end{proof} Oshiro's theorem states that any quasi-discrete module is a direct sum of indecomposable modules (cf. Theorem 4.15 in \cite{MM}). Hence the above Theorem \ref{General decomposition} applies to completely ADS quasi-discrete modules. In general, for a quasi-discrete module we have the following theorem. \begin{theorem} Let $M$ be a completely ADS quasi-discrete module. Then $M$ can be written as $M=S\oplus M_1\oplus M_2$, where $S$ is semisimple, $M_1$ is a direct sum of local modules and $M_2$ is equal to its own radical. \end{theorem} \begin{proof} Corollary 4.18 and Proposition 4.17 in \cite{MM} imply that $M=N\oplus M_2$, where $N$ has a small radical and $M_2$ is equal to its own radical. Theorem \ref{General decomposition} applied to $N$ yields the conclusion. \end{proof} We now apply the previous theorem to the case of semiperfect modules. \begin{theorem} \label{semiperfect modules} Let $M$ be a semiperfect module with a completely ADS projective cover $P$. Then $M$ can be written as $M = S\oplus T$, where $S$ is semisimple and $T$ is a direct sum of local modules. Moreover, any partial sum in this decomposition contains a supplement of the remaining terms. \end{theorem} \begin{proof} Clearly $P$ is semiperfect and projective (cf. Theorem 11.1.5 in \cite{K}). Combining the statements in 42.5 in \cite{W} and Corollary 4.54 in \cite{MM}, we get that $P$ is discrete and is a direct sum of local modules. 
The remark preceding the present theorem then implies that we can write $P=S^{\prime }\oplus T^{\prime }$, where $S^{\prime }$ is semisimple and $T^{\prime }$ is a direct sum of indecomposable local modules. Let $\sigma$ be a surjective homomorphism from $P$ to $M$ with small kernel $K$. We thus have $M=\sigma(S^{\prime }) + \sigma(T^{\prime })$. Since homomorphic images of $M$ have projective covers, Lemma 4.40 in \cite{MM} shows that $\sigma(T^{\prime })$ contains a supplement $X$ of $\sigma(S^{\prime })$. In particular, we have $\sigma(S^{\prime }) \cap X\ll X$. Since $\sigma(S^{\prime })$ is semisimple, we conclude that $\sigma(S^{\prime }) \cap X=0$ and hence $M=\sigma(S^{\prime }) \oplus \sigma(T^{\prime })$. Since homomorphic images of a local module are still local, we conclude that the terms appearing in $\sigma (T^{\prime })$ are local modules. The last statement is a direct consequence of Lemma 4.40 in \cite{MM}. \end{proof} Let us mention that local rings which are not uniform provide examples of semiperfect completely ADS modules which are not CS and hence not quasi-continuous. The following theorem characterizes semiperfect $\pi c$-rings, providing a new proof of Theorem 2.4 in \cite{GJ}. \begin{theorem} Let $R$ be a semiperfect ring such that every cyclic module is quasi-continuous. Then $R=\oplus_{i\in I} A_i$, where each $A_i$, $i\in I$, is simple artinian or a valuation ring. \end{theorem} \begin{proof} Since $R$ is semiperfect, $R=B_1\oplus B_2\oplus \dots \oplus B_n$ is a direct sum of indecomposable right ideals. In view of the fact that quasi-continuous modules are ADS, Theorem \ref{General decomposition} gives a decomposition $R=e_1R\oplus e_2R \oplus \dots \oplus e_kR \oplus \dots \oplus e_nR$, where the $e_iR$ are simple right ideals for $1\le i \le k$ and the $e_jR$ are local right ideals for $k < j \le n$. Let $\sigma$ be a homomorphism from $e_sR$ to $e_tR$ for some $1\le s,t \le n$. Then $e_sR/\ker( \sigma)$ embeds in $e_tR$. 
Since $R/\ker( \sigma)$ is quasi-continuous, $e_sR/\ker(\sigma)$ is $e_tR$-injective and hence $e_sR/\ker(\sigma)$ is a direct summand of $e_tR$. This shows that either $e_sR/\ker(\sigma)\cong e_tR$ or $\ker(\sigma)=e_sR$, that is, $\sigma=0$. Since $e_tR$ is projective, if $e_sR/\ker(\sigma)\cong e_tR$ then $\ker(\sigma)$ is a direct summand of $e_sR$, and thus $\ker(\sigma)=0$. In short, if $\sigma\ne 0$ then $e_sR\cong e_tR$, and the latter isomorphism implies that $e_sR$ and $e_tR$ are minimal right ideals (cf. Lemma 2.3 in \cite{GJ}). By grouping the right ideals $e_iR$ according to their isomorphism classes, we get $R=A_1\oplus A_2\oplus \dots \oplus A_l$, $l\le n$, where each $A_i$ is either a simple artinian ring or a local ring. We claim that if $A_i$ is a local ring then it is a valuation ring. We thus have to show that any two nonzero submodules $C,D$ of the ring $A_i$ are comparable. Consider the right submodules $\frac{C}{C\cap D}$ and $\frac{D}{C\cap D}$ of $\frac{A_i}{C\cap D}$. Since $A_i/(C \cap D)$ is local and quasi-continuous, it is uniform; but $C/(C \cap D) \cap D/(C \cap D) =0$. Therefore $C/(C \cap D)=0$ or $D/(C \cap D) = 0$; hence $C$ and $D$ are indeed comparable. \end{proof} Let us conclude this paper with some questions: \begin{enumerate} \item It is known that if $R_R$ and $_RR$ are both CS then $R$ is Dedekind finite. What could be the analogue of this for ADS modules? \item Does a directly finite ADS module have the internal cancellation property? (Cf. Theorem 2.33 in \cite{MM} for the quasi-continuous case.) \item What can be said of a module which is ADS and has the C$_2$ property? \end{enumerate} \centerline{ACKNOWLEDGEMENT} We thank the referee for drawing our attention to a number of typos. \noindent \normalsize Adel Alahmadi\\ \normalsize King Abdulaziz University, Jeddah, Saudi Arabia\\ \normalsize E-mail: [email protected] \\ \normalsize S.K. 
Jain\\ \normalsize King Abdulaziz University, Jeddah, Saudi Arabia\\ \normalsize and Ohio University, Athens, USA \\ \normalsize E-mail: [email protected] \\ \normalsize Andr\'e Leroy\\ \normalsize Universit\'{e} d'Artois, Facult\'{e} Jean Perrin, Lens, France\\ \normalsize and MECAA, King Abdulaziz University, Jeddah, Saudi Arabia\\ \normalsize E-mail: [email protected] \end{document}
\begin{document} \title{\bf\color{black} Exact tail asymptotics for a fluid model driven by the $M/M/c$ queue} \maketitle \begin{abstract} In this paper, we investigate exact tail asymptotics for the stationary distribution of a fluid model driven by the $M/M/c$ queue, which is a two-dimensional queueing system with a discrete phase and a continuous level. We extend the kernel method to study tail asymptotics of its stationary distribution, and a total of three types of exact tail asymptotics is identified from our study and reported in the paper. \vskip 0.2cm \noindent \textbf{Keywords}\ \ fluid queue driven by an $M/M/c$ queue; kernel method; exact tail asymptotics; stationary distribution; asymptotic analysis \vskip 0.2cm \noindent {\bf MSC 2010 Subject Classification} \ \ 60K25, 60J27, 30E15, 05A15. \end{abstract} \section{Introduction} Fluid flows have been widely used for modelling information flows in the performance analysis of packet telecommunication systems. In this area, fluid queues with Markov-modulated input rates have played an important role in recent developments. In such a fluid model, the rate of information change is modulated according to a Markov process evolving in the background. Several references on Markov-modulated fluid queues can be found in the literature, such as \cite{S98, N04, GLR13}. In these studies, the state space $N$ of the modulating Markov process is assumed to be finite, which puts a restriction on applications. In contrast, in this paper we consider an infinite capacity fluid model driven by the $M/M/c$ queue, which is a specific birth-death process. First, we let $Z(t)$ be the state of a continuous-time Markov chain on a countable state space, the background process, at time $t$, and let $X(t)$ be the fluid level in the queue at time $t$. Let $r_{Z(t)}$ denote the rate of change of the fluid level (or the \textit{net input rate}) at time $t$. 
Then, the dynamics of the fluid level $X(t)$ are given by \begin{equation*} \frac{dX(t)}{dt}=\left\{ \begin{array}{ll} r_{Z(t)}, & \mbox{if $X(t)> 0$ or $r_{Z(t)}\geq 0$,} \\ 0, & \mbox{if $X(t)= 0$ and $r_{Z(t)}< 0$.} \end{array} \right. \end{equation*} Fluid queues driven by infinite-state Markov chains have been considered in the past by several authors. For instance, for the stationary distribution of the fluid queue driven by a birth-death process, van Doorn and Scheinhardt \cite{DS97} used orthogonal polynomials to solve an infinite system of differential equations under certain boundary conditions, and provided the same integral expression obtained by Virtamo and Norros in \cite{VN94} and by Adan and Resing in \cite{AR96} for the case driven by the $M/M/1$ queue. Parthasarathy and Vijayashree, in \cite{PV02}, provided expressions, via integral representations of Bessel functions, for the stationary distributions of the buffer occupancy and the buffer content, respectively, for a fluid queue driven by an $M/M/1$ queue. By using the Laplace transform, they obtained a system of differential equations, which led to a continued fraction and the solution for the stationary distribution. In \cite{BS02}, Barbot and Sericola provided an analytic expression for the stationary distribution of the fluid queue driven by an $M/M/1$ queue through the generating function technique. Analysis of the transient distribution of the fluid queue driven by an $M/M/1$ queue was reported by Sericola, Parthasarathy and Vijayashree in \cite{SPV05}. Although the methods for studying the stationary performance measures of the fluid queue driven by the $M/M/1$ queue differ in the above mentioned references, the resulting integral expressions are usually cumbersome and hard to use directly to derive asymptotic properties of the stationary distribution. 
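The reflecting dynamics above are easy to illustrate by simulation. The following sketch is purely illustrative: the parameter values and the specific choice of net input rates (negative in states with idle servers, equal to a constant $r>0$ otherwise) are assumptions made for the example. Between transitions of $Z(t)$ the fluid level moves linearly, so it can be updated exactly at each jump epoch.

```python
import random

def simulate_fluid(lam=1.0, mu=0.5, c=3, r=2.0, T=200.0, seed=42):
    """Simulate (X(t), Z(t)) for a fluid queue modulated by an M/M/c queue.

    Between jumps of Z(t) the net input rate is constant, so X(t) is
    piecewise linear and can be updated exactly; at X = 0 the negative
    rates are cut off, matching dX/dt = 0 when X = 0 and r_Z < 0.
    """
    rng = random.Random(seed)
    rate = lambda i: r if i >= c else i - c   # illustrative net input rates r_i
    z, x, t = 0, 0.0, 0.0
    while t < T:
        birth = lam                  # arrival rate in every state
        death = mu * min(z, c)       # service rate: i*mu (i < c) or c*mu
        dt = rng.expovariate(birth + death)
        rz = rate(z)
        x = x + rz * dt if rz >= 0 else max(0.0, x + rz * dt)
        t += dt
        if rng.random() < birth / (birth + death):
            z += 1                   # arrival
        else:
            z -= 1                   # departure (death > 0 implies z > 0)
    return x, z
```

The returned buffer content is always nonnegative, reflecting the boundary behaviour at $X(t)=0$.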
In this paper, we extend the kernel method to characterize exact tail asymptotics for the stationary distribution of the fluid model driven by an $M/M/c$ queue. The main contributions include: \begin{enumerate} \item An extension of the kernel method. The key idea of the kernel method was proposed by Knuth in \cite{K69} and further developed by Banderier \textit{et al.} in \cite{BBD02}. The method has been recently extended to study the exact tail behaviour for two-dimensional stochastic networks (or random walks with reflective boundaries) for both discrete and continuous random walks in the quarter plane, for example see Li and Zhao~\cite{LZ11} and Dai, Dawson and Zhao~\cite{DDZ15}, and references therein. Compared to other methods, the kernel method, which has been successfully used to study tail behaviour of models with both level and background either discrete or continuous, does not require a determination or characterization of the entire unknown function in order to characterize the exact tail asymptotic properties in stationary distributions. It is worthwhile to point out that the application of the kernel method to the fluid model driven by an $M/M/c$ queue is not straightforward and requires significant efforts, since in this case the level is continuous and the background is discrete. \item An extension of the finding for the tail asymptotic behaviour in the stationary distribution of a fluid queue driven by a Markov chain. We show in Section~\ref{sec:5} that for the fluid model driven by an $M/M/c$ queue, a total of three types of exact tail asymptotic properties exists, in comparison with the finding by Govorun, Latouche and Remiche in \cite{GLR13}, in which they showed that for a fluid model driven by a finite state Markov chain there is only one type of tail asymptotic property. 
This is also an extension of the tail asymptotic behaviour in the stationary distribution of a fluid queue driven by an $M/M/1$ queue, since the tail asymptotic property given in Case (iii) of Theorems~\ref{the-tail-asy-3} and \ref{the-tail-asy-4} does not exist for the case of $c=1$. \end{enumerate} The rest of the paper is organized as follows: In Section~\ref{sec:2}, we describe the fluid model, define the notation and present the system of partial differential equations satisfied by the joint probability distribution function of the buffer level and of the state of the driving process. In this section, we also establish the fundamental equation based on the differential equations. Section~\ref{sec:3} is devoted to the discussion on properties of the branch points in the kernel equation and the analytic continuation of the unknown functions in terms of the kernel method. In Section~\ref{sec:4}, an asymptotic analysis of the two unknown functions is carried out. In Section~\ref{sec:5}, a characterization of the exact tail asymptotics in the stationary distribution of the model is presented. We show that there exist three types of tail asymptotic properties for the boundary, joint, and marginal distributions, respectively. These results are an extension of the single type of behaviour found in \cite{GLR13} for the stationary density of the fluid queue driven by a finite state Markov chain. In Section~\ref{sec:6}, two special cases ($c=1$ and $c=2$) are further considered. Finally, in Section~\ref{sec:7}, we make some concluding remarks to complete the paper. \section{Model description and fundamental equation} \label{sec:2} We consider the fluid model driven by an $M/M/c$ queueing system $\{Z(t), t\geq 0\}$, where $Z(t)$ denotes the queue length of the $M/M/c$ queue at time $t$. It is known that $Z(t)$ is a special birth-death process with the state space $\mathbb{E}=\{0, 1, 2, \ldots \}$. 
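Since $\{Z(t)\}$ is a birth-death process, its stationary distribution is available from detailed balance. The sketch below (parameter values and the truncation level are assumptions made for illustration) computes it numerically; the accompanying checks verify the balance equations directly rather than relying on any closed form.

```python
def mmc_stationary(lam, mu, c, N=200):
    """Stationary distribution of the M/M/c queue length, truncated at N.

    Uses detailed balance lam * xi[i] = mu_{i+1} * xi[i+1], where the
    service rate in state i is min(i, c) * mu; requires lam < c * mu.
    """
    assert lam < c * mu, "stability requires lam < c * mu"
    xi = [1.0]
    for i in range(1, N + 1):
        xi.append(xi[-1] * lam / (mu * min(i, c)))
    total = sum(xi)
    return [p / total for p in xi]

# illustrative parameters (assumptions): lam = 2, mu = 1, c = 3
xi = mmc_stationary(2.0, 1.0, 3)
```

The truncation at a finite $N$ is harmless here because the tail of $\xi$ is geometric with ratio $\lambda/(c\mu)<1$.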
Let $\lambda_{i}$ be the arrival rate and $\mu_{i}$ be the service rate in state $i$ for $\{Z(t), t\geq 0\}$. Then, \[ \lambda_{i}=\lambda> 0 \ \ \mbox{for any} \ \ i\geq 0, \] and with $\mu>0$, \[ \mu_{i}= \left \{ \begin{array}{ll} i\mu, & \mbox{for $0 \leq i\leq c-1$}, \\ c \mu, & \mbox{for $i\geq c$}. \end{array} \right. \] Suppose $\lambda< c\mu$. Then the unique stationary distribution $\xi=(\xi_{i})_{i\in \mathbb{E}}$ of $\{Z(t)\}$ exists, which is given by \begin{equation*} \xi_{i}= \left \{ \begin{array}{ll} \xi_{0} \displaystyle \frac{\rho^{i}}{i!}, & \mbox{for $1 \leq i\leq c$}, \\ \xi_{c} \displaystyle \left (\frac{\rho}{c} \right )^{i-c}, & \mbox{for $i > c$}, \end{array} \right. \end{equation*} where $\xi_{0}=(\sum_{i=0}^{c-1}\frac{\rho^{i}}{ i!}+\frac{\rho^{c}}{(c-1)!(c-\rho)})^{-1}$ and $\rho=\frac{\lambda}{\mu}$. According to \cite{SPV05}, we may regard the fluid in the model driven by the $M/M/c$ queue $\{Z(t), t\geq 0\}$ as a commodity, referred to as \textit{credit}. The credit accumulates in an infinite capacity buffer during the full busy period of the $M/M/c$ queue (i.e., whenever a customer arrives and finds all servers busy) at a positive rate $r_{Z(t)}$, defined as $r_{i}=r> 0$ for any $i\geq c$. The buffer is depleted during the partial busy period of the $M/M/c$ queue (i.e., whenever an arriving customer finds fewer than $c$ customers in the queue) at a negative rate $r_{Z(t)}$. It is reasonable to assume that the negative rate $r_{i}$ increases in $i$. Without loss of generality, we assume that the net input rate is $r_{i}= i-c$ for any $0\leq i\leq c-1$. In order that the stationary distribution of $X(t)$ exists, we shall assume throughout the paper that \[ \sum_{i\in \mathbb{E}}\xi_{i}r_{i}<0, \] which is equivalent to \begin{equation*} (r+1)\lambda < c\mu+ (c\mu-\lambda)\cdot\sum_{i=0}^{c-2}\frac{(c-i)\lambda^{i+1-c}\cdot(c-1)!}{\mu^{i+1-c}\cdot i!}. 
\end{equation*} Now, we denote \[ F_{i}(t,x)= P\{Z(t)=i, X(t)\leq x\} \] for any $t\geq 0,$ $x\geq 0$ and $i \in \mathbb{E}$. It is well known (see e.g. \cite{DS97}) that the joint distribution $F_{i}(t,x)$ satisfies the following partial differential equations: \begin{eqnarray*} \nonumber \frac{\partial F_{0}(t,x)}{\partial t} &=& c\frac{\partial F_{0}(t,x)}{\partial x} -\lambda F_{0}(t,x) +\mu F_{1}(t,x), \\ \label{equ-par-dif-1} \frac{\partial F_{i}(t,x)}{\partial t} &=& (c-i)\frac{\partial F_{i}(t,x)}{\partial x} +\lambda F_{i-1}(t,x)-(\lambda+ i\mu)F_{i}(t,x) +(i+1)\mu F_{i+1}(t,x), \ \ 1\leq i \leq c-1,\\ \label{equ-par-dif-2} \frac{\partial F_{i}(t,x)}{\partial t} &=& -r\frac{\partial F_{i}(t,x)}{\partial x} +\lambda F_{i-1}(t,x)-(\lambda+ c\mu)F_{i}(t,x) +c\mu F_{i+1}(t,x), \ \ i\geq c. \end{eqnarray*} Let $Z$ and $X$ be the stationary states of $Z(t)$ and $X(t)$ respectively. Then, the stationary distribution is given by \[ \Pi_{i}(x)=\lim_{t\rightarrow \infty}F_{i}(t,x) = P\{Z=i, X \leq x\}. \] Define $\pi_{i}(x)=\frac{\partial \Pi_{i}(x)}{\partial x}$ for any $x> 0$ and $\pi_{i}(0)=\lim_{x\rightarrow 0^{+}}\pi_{i}(x)$. From the above partial differential equations, we have the following equations: \begin{align}\label{equ-joi-dis-1} -c\pi_{0}(x) & = \mu\Pi_{1}(x)-\lambda\Pi_{0}(x), \\ \label{equ-joi-dis-2} -(c-i)\pi_{i}(x) &= \lambda\Pi_{i-1}(x)-(\lambda+ i\mu)\Pi_{i}(x) + (i+1)\mu\Pi_{i+1}(x), \quad \mbox{for $1 \leq i \leq c-1$}, \\ \label{equ-joi-dis-3} r \pi_{i}(x) & = \lambda\Pi_{i-1}(x)-(\lambda+ c\mu)\Pi_{i}(x) + c\mu\Pi_{i+1}(x), \quad \mbox{for $i\geq c$}. \end{align} The initial condition of (\ref{equ-joi-dis-1}), (\ref{equ-joi-dis-2}) and (\ref{equ-joi-dis-3}) is given by \[ \Pi_{i}(0)=0, \ \ i\geq c. \] In addition, for any $i\in \mathbb{E}$, we have \[ \Pi_{i}(\infty)=\lim_{x\rightarrow \infty}\Pi_{i}(x)= \xi_{i}. 
\] Let $\phi_{i}(\alpha)$ be the Laplace transform for $\pi_{i}(x)$, i.e., \[ \phi_{i}(\alpha)= \int_{0}^{\infty} \pi_{i}(x)e^{\alpha x}dx. \] For any $i\in \mathbb{E}$, we have \begin{equation*} \int_{0}^{\infty}\Pi_{i}(x)e^{\alpha x}dx= \int_{0}^{\infty}\left [\Pi_{i}(0)+\int_{0^{+}}^{x}\pi_{i}(s)ds \right ]e^{\alpha x}dx=-\frac{1}{\alpha}\Pi_{i}(0)-\frac{1}{\alpha}\phi_{i}(\alpha). \end{equation*} Thus taking the Laplace transforms of $\Pi_{i}(x)$ and $\pi_{i}(x)$ in (\ref{equ-joi-dis-2}) and (\ref{equ-joi-dis-3}), we can get \begin{align} -\phi_{c-1}(\alpha) \nonumber &= -\frac{\lambda}{\alpha} \left [\Pi_{c-2}(0)+\phi_{c-2}(\alpha) \right ]+ \frac{\lambda+(c-1)\mu}{\alpha} \left [\Pi_{c-1}(0)+\phi_{c-1}(\alpha) \right ]- \frac{c\mu}{\alpha}[\Pi_{c}(0)+\phi_{c}(\alpha)], \end{align} and for any $i\geq c$ \begin{align}\nonumber r \phi_{i}(\alpha) &= -\frac{\lambda}{\alpha} \left [\Pi_{i-1}(0)+\phi_{i-1}(\alpha) \right ]+ \frac{\lambda+c\mu}{\alpha} \left [\Pi_{i}(0)+\phi_{i}(\alpha) \right ]- \frac{c\mu}{\alpha}[\Pi_{i+1}(0)+\phi_{i+1}(\alpha)]. \end{align} It then follows that \begin{eqnarray}\label{equ-gene} \nonumber && \sum_{i=c-1}^\infty [-\lambda z^{2}+(-\alpha r+\lambda+c\mu)z-c\mu] \phi_{i}(\alpha)z^i \\ \nonumber &=& \lambda\phi_{c-2}(\alpha)z^{c}+[(\mu-\alpha-\alpha r)z-c\mu]\phi_{c-1}(\alpha) z^{c-1}+ \sum_{i=c-1}^\infty [\lambda z^{2}-(\lambda+ c\mu)z+c\mu]\Pi_{i}(0)z^i \\ \nonumber &&+\lambda\Pi_{c-2}(0)z^c+(\mu z-c\mu)\Pi_{c-1}(0)z^{c-1}. \end{eqnarray} Denote \[ \psi(\alpha, z)=\sum_{i=c-1}^\infty \phi_{i}(\alpha) z^i, \] and \[ \psi(z)=\sum_{i=c-1}^\infty \Pi_{i}(0)z^i. 
\] Then, we can obtain the following fundamental equation, which connects the bivariate unknown function $\psi(\alpha, z)$ to the univariate unknown functions $\phi_{c-2}(\alpha)$, $\phi_{c-1}(\alpha)$ and $\psi(z)$: \begin{equation*} H(\alpha, z)\psi(\alpha, z)= \lambda z^{c}[\phi_{c-2}(\alpha)+ \Pi_{c-2}(0)]+ H_{1}(\alpha, z) \phi_{c-1}(\alpha)+ H_{2}(\alpha, z)\psi(z)+ H_{0}(\alpha, z)\Pi_{c-1}(0), \end{equation*} where \begin{eqnarray*} && H(\alpha, z)= -\lambda z^{2}+(-\alpha r+\lambda +c \mu) z- c\mu, \\ && H_{1}(\alpha, z)=(\mu-\alpha r -\alpha)z^{c}-c\mu z^{c-1}, \\ && H_{2}(\alpha, z)= H_{2}(z)=\lambda z^{2}-\lambda z-c \mu z+ c\mu,\\ && H_{0}(\alpha, z)=H_{0}(z)=\mu z^{c}- c\mu z^{c-1}. \end{eqnarray*} By establishing a relation between $\phi_{c-2}(\alpha)$ and $\phi_{c-1}(\alpha)$, we obtain the following result. \begin{theorem}\label{the-phi-rel} The fundamental equation can be rewritten as \begin{equation}\label{equ-funde} H(\alpha, z)\psi(\alpha, z)= \hat{H}_{1}(\alpha, z)\phi_{c-1}(\alpha)+ H_{2}(z)\psi(z)+ \hat{H}_{0}(\alpha, z), \end{equation} where \begin{eqnarray*} \hat{H}_{1}(\alpha, z) &=& \lambda z^{c}A_{c-2}(\alpha)+ H_{1}(\alpha, z), \\ \hat{H}_{0}(\alpha, z) &=& H_{0}(z)\Pi_{c-1}(0)+\lambda z^{c}\Pi_{c-2}(0)+\lambda z^{c}\sum_{n=0}^{c-2} \left[k_{n}\lambda^{c-2-n}\prod_{m=n}^{c-2}\frac{A_{m}(\alpha)}{(m+1)\mu}\right], \end{eqnarray*} with $k_{0}=\mu\Pi_{1}(0)-\lambda\Pi_{0}(0)$, \[ k_{i}=\lambda\Pi_{i-1}(0)-(\lambda+i\mu)\Pi_{i}(0)+ (i+1)\mu \Pi_{i+1}(0),\ 1\leq i\leq c-2, \] and \[ A_{i}(\alpha)=\frac{(i+1)\mu}{\alpha+\lambda+i\mu-\lambda A_{i-1}(\alpha)}, \ 0\leq i\leq c-2, \ \ A_{-1}(\alpha)=0. 
\] \end{theorem} \proof Taking the Laplace transform for $\Pi_{i}(x)$ and $\pi_{i}(x)$ in (\ref{equ-joi-dis-1}) and (\ref{equ-joi-dis-2}) leads to the following linear equations: \begin{equation*} \left\{ \aligned & (\alpha+\lambda)\phi_{0}(\alpha)-\mu \phi_{1}(\alpha)= k_{0}, \\ & -\lambda\phi_{0}(\alpha)+(\alpha+\lambda+\mu)\phi_{1}(\alpha)-2\mu \phi_{2}(\alpha)= k_{1}, \\ & \vdots \\ & -\lambda\phi_{c-3}(\alpha)+[\alpha+\lambda+(c-2)\mu]\phi_{c-2}(\alpha)= (c-1)\mu\phi_{c-1}(\alpha)+k_{c-2}. \endaligned \right. \end{equation*} Since $A_{0}'(\alpha)=\frac{-\mu}{(\alpha+\lambda)^{2}}< 0$, we may assume, as the inductive hypothesis, that $A_{k-1}'(\alpha)< 0$ for any $\alpha\geq 0$, and then obtain \[ A_{k}'(\alpha)=\frac{-(k+1)\mu[1-\lambda A_{k-1}'(\alpha)]}{[\alpha+\lambda+ k\mu-\lambda A_{k-1}(\alpha)]^{2}}< 0. \] Thus, $A_{i}(\alpha)$ is a decreasing function of $\alpha$ for any $0 \leq i\leq c-2$. For any $\alpha > 0$ and $0 \leq i\leq c-2$, we can obtain that \[ A_{i}(\alpha)< A_{i}(0)=\frac{(i+1)\mu}{\lambda}, \] which implies that $A_{i+1}(\alpha)=\frac{(i+2)\mu}{\alpha+\lambda+(i+1)\mu-\lambda A_{i}(\alpha)}> 0$. Hence $0 < A_{i}(\alpha) < \frac{(i+1)\mu}{\lambda}$ for any $0 \leq i\leq c-2$ and $\alpha > 0$. From the linear equations and the definition of $A_{i}(\alpha)$, we have for any $0 \leq i\leq c-2$, \begin{equation}\label{equ-phi-rel} \phi_{i}(\alpha)= \sum_{n=0}^{i}[k_{n}\lambda^{i-n}\prod_{m=n}^{i}\frac{A_{m}(\alpha)}{(m+1)\mu}]+ A_{i}(\alpha)\phi_{i+1}(\alpha). \end{equation} Substituting (\ref{equ-phi-rel}) with $i=c-2$ into the fundamental equation and collecting the terms involving $\phi_{c-1}(\alpha)$ yields the expressions of $\hat{H}_{1}(\alpha, z)$ and $\hat{H}_{0}(\alpha, z)$. In particular, for the case $c=1$, we have $ \hat{H}_{1}(\alpha, z) = H_{1}(\alpha, z)$ and $\hat{H}_{0}(\alpha, z) = H_{0}(z)\Pi_{0}(0)$. Hence, the theorem is proved. $\Box$ \section{Kernel equation and branch points} \label{sec:3} The tail asymptotic behaviour of the stationary distribution for the fluid queue relies on properties of the kernel function $H(\alpha, z)$, and the functions $ \hat{H}_{1}(\alpha, z)$ and $H_{2}( z)$. 
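Before turning to the kernel, the continued-fraction recursion $A_{i}(\alpha)$ of Theorem \ref{the-phi-rel} can be sanity-checked numerically: given an arbitrary value of $\phi_{c-1}(\alpha)$, the recursion (\ref{equ-phi-rel}) should reproduce a solution of the tridiagonal linear system in the proof. All numerical values below are illustrative assumptions.

```python
def A_coeffs(alpha, lam, mu, c):
    """A_i(alpha) = (i+1)*mu / (alpha + lam + i*mu - lam*A_{i-1}), A_{-1} = 0."""
    A, prev = [], 0.0
    for i in range(c - 1):                      # i = 0, ..., c-2
        prev = (i + 1) * mu / (alpha + lam + i * mu - lam * prev)
        A.append(prev)
    return A

def phi_from_recursion(alpha, lam, mu, c, k, phi_last):
    """Recover phi_0, ..., phi_{c-1} from a given phi_{c-1} via
    phi_i = S_i + A_i * phi_{i+1}, where the product sum S_i of the
    theorem satisfies S_i = A_i*(k_i + lam*S_{i-1}) / ((i+1)*mu)."""
    A = A_coeffs(alpha, lam, mu, c)
    S, s = [], 0.0
    for i in range(c - 1):
        s = A[i] * (k[i] + lam * s) / ((i + 1) * mu)
        S.append(s)
    phi = [0.0] * c
    phi[-1] = phi_last
    for i in range(c - 2, -1, -1):
        phi[i] = S[i] + A[i] * phi[i + 1]
    return phi

# illustrative data: c = 5 servers, arbitrary right-hand sides k_n
c, lam, mu, alpha = 5, 1.3, 0.7, 0.9
k = [0.2, -0.5, 0.8, 0.1]
phi = phi_from_recursion(alpha, lam, mu, c, k, phi_last=0.37)
```

The residuals of the linear equations $(\alpha+\lambda)\phi_{0}-\mu\phi_{1}=k_{0}$ and $-\lambda\phi_{i-1}+(\alpha+\lambda+i\mu)\phi_{i}-(i+1)\mu\phi_{i+1}=k_{i}$ then vanish up to rounding.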
Now, we consider the kernel equation \begin{equation*} H(\alpha, z)= 0, \end{equation*} which can be written as a quadratic form in $z$ as follows \begin{equation}\label{equ-qua-form} H(\alpha, z) = az^{2}+ b(\alpha) z+ d =0, \end{equation} where $a=-\lambda$, $b(\alpha)=-\alpha r+\lambda +c\mu$ and $d=- c\mu$. Let \[ \Delta(\alpha)=b^{2}(\alpha)-4ad \] be the discriminant of the quadratic form in (\ref{equ-qua-form}). In the complex plane $\mathbb{C}$, for each $\alpha$, the two solutions to (\ref{equ-qua-form}) are given by \begin{equation}\label{equ-Z} Z_{\pm}(\alpha)=\frac{-b(\alpha)\pm \sqrt{\Delta(\alpha)}}{2a}. \end{equation} When $\Delta(\alpha)=0$, $\alpha$ is called a branch point of $Z(\alpha)$. Symmetrically, for each $z$, the solution to (\ref{equ-qua-form}) is given by \begin{equation}\label{equ-alpha-1} \alpha(z)=\frac{-\lambda z^{2}+ (\lambda+c\mu)z-c\mu}{zr}. \end{equation} Note that all functions and variables are treated as complex ones throughout the paper. We have the following property on the branch points. \begin{lemma}\label{lem-bran-point} $\Delta(\alpha)$ has two positive zero points $\alpha_{1} = \frac{\left ( \sqrt{c\mu} - \sqrt{\lambda} \right )^2}{r}$ and $\alpha_{2} = \frac{\left ( \sqrt{c\mu} + \sqrt{\lambda} \right )^2}{r}$. Moreover, $\Delta(\alpha)> 0$ in $(-\infty, \alpha_{1})\cup (\alpha_{2}, \infty)$ and $\Delta(\alpha)< 0$ in $(\alpha_{1}, \alpha_{2})$. \end{lemma} For convenience, define the cut plane $\widetilde{\mathbb{C}}_{\alpha}$ by \[ \widetilde{\mathbb{C}}_{\alpha}= \mathbb{C}_\alpha \setminus \{[\alpha_{1},\alpha_{2}]\}. \] In the cut plane $\widetilde{\mathbb{C}}_{\alpha}$, denote the two branches of $Z(\alpha)$ by $Z_{0}(\alpha)$ and $Z_{1}(\alpha)$, where $Z_{0}(\alpha)$ is the one with the smaller modulus and $Z_{1}(\alpha)$ is the one with the larger modulus. 
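Lemma \ref{lem-bran-point}, together with the product relation $Z_{+}(\alpha)Z_{-}(\alpha)=d/a=c\mu/\lambda$ given by Vieta's formulas for (\ref{equ-qua-form}), is straightforward to verify numerically; the parameter values below are assumptions chosen for illustration.

```python
import math

# illustrative parameters (assumptions), satisfying lam < c*mu
lam, c, mu, r = 1.5, 3, 1.0, 2.0

def disc(alpha):
    """Delta(alpha) = b(alpha)^2 - 4ad, a = -lam, b = -alpha*r + lam + c*mu, d = -c*mu."""
    return (-alpha * r + lam + c * mu) ** 2 - 4.0 * c * lam * mu

# the two branch points of the kernel
a1 = (math.sqrt(c * mu) - math.sqrt(lam)) ** 2 / r
a2 = (math.sqrt(c * mu) + math.sqrt(lam)) ** 2 / r

# the two roots Z_+/- at a point alpha0 < a1, where Delta(alpha0) > 0
alpha0 = 0.05
b0 = -alpha0 * r + lam + c * mu
sq = math.sqrt(disc(alpha0))
zp = (-b0 + sq) / (2.0 * (-lam))
zm = (-b0 - sq) / (2.0 * (-lam))
```

Numerically, $\Delta$ vanishes at $\alpha_{1}$ and $\alpha_{2}$, is negative between them and positive outside, and the product $Z_{+}Z_{-}$ equals $c\mu/\lambda$.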
Hence we have \[ Z_{0}(\alpha)=Z_{-}(\alpha) \ \mbox{and} \ Z_{1}(\alpha)=Z_{+}(\alpha) \ \mbox{if} \ \Re(\alpha)> \frac{\lambda+c\mu}{r}, \] \[ Z_{0}(\alpha)=Z_{+}(\alpha) \ \mbox{and} \ Z_{1}(\alpha)=Z_{-}(\alpha) \ \mbox{if} \ \Re(\alpha) \leq \frac{\lambda+c\mu}{r}. \] \begin{lemma}\label{lem-ana-H} The functions $Z_{0}(\alpha)$ and $Z_{1}(\alpha)$ are analytic in $\widetilde{\mathbb{C}}_{\alpha}$. Similarly, $\alpha(z)$ is meromorphic in $\mathbb{C}_z$ and $\alpha(z)$ has two zero points and one pole. \end{lemma} \proof We first give a proof for $Z_{0}(\alpha)$; the proof for $Z_{1}(\alpha)$ can be given in the same fashion. Let $\alpha= a+bi$ with $a, b\in \mathbb{R}$ and $\arg(\alpha)\in (-\pi, \pi]$, and write $\Delta(\alpha)=\Re(\Delta(\alpha)) + \Im(\Delta(\alpha)) i$. We then have \[ \Re(\Delta(\alpha))=R(a, b)=(a^{2}-b^{2})r^{2}-2(\lambda+c\mu)ra+(\lambda-c\mu)^{2}, \] and \[ \Im(\Delta(\alpha))=I(a, b)=2abr^{2}-2(\lambda+c\mu)br. \] Setting $\Im(\Delta(\alpha))=0$, we obtain $a= \frac{\lambda+c\mu}{r}$ or $b=0$. For $b=0$, from Lemma \ref{lem-bran-point}, we know that $R(a, b)\leq 0$ and $I(a, b)=0$ along the curve $\mathcal{C}_{1}=\{\alpha= a+bi: \alpha_{1}\leq a\leq\alpha_{2}, b=0\}$. According to the property of the square root function, if we take $\mathcal{C}_{1}$ as a cut of $\sqrt{\Delta(\alpha)}$, then the function $Z_{0}(\alpha)$ cannot be analytic on the curve $\mathcal{C}_{1}$. Thus, we will consider the analytic property of $Z_{0}(\alpha)$ on the cut plane $\widetilde{\mathbb{C}}_{\alpha}=\mathbb{C}_\alpha \setminus \mathcal{C}_{1}$ in the following. For $a= \frac{\lambda+c\mu}{r}$ and any $b\in \mathbb{R}$, we obtain that \[ R\left (\frac{\lambda+c\mu}{r}, b \right )= R\left (\frac{\lambda+c\mu}{r}, 0 \right )-b^{2}r^{2}< R \left (\frac{\lambda+c\mu}{r}, 0 \right )< 0. 
\] Therefore, along the curve $\mathcal{C}_{2}=\{\alpha= a+bi: a=\frac{\lambda+c\mu}{r}\}$, we have $R(a, b)\leq 0$ and $I(a, b)=0$, which implies that $\sqrt{\Delta(\alpha)}$ (or $-\sqrt{\Delta(\alpha)}$) cannot be analytic on $\mathcal{C}_{2}$. However, from the definition of $Z_{0}(\alpha)$, we have that the branch $Z_{0}(\alpha)=Z_{+}(\alpha)$ is analytic in the domain $\{\alpha\in \widetilde{\mathbb{C}}_{\alpha}: \Re(\alpha) < \frac{\lambda+c\mu}{r}\}$ and $Z_{0}(\alpha)=Z_{-}(\alpha)$ is analytic in the complementary domain of the closure of this set in $\widetilde{\mathbb{C}}_{\alpha}$. From the choice of the square root, we know that the function $Z_{0}(\alpha)$ is continuous on the curve $\mathcal{C}_{2}$, which separates the two domains above. Thus, by Morera's Theorem, we have that the function $Z_{0}(\alpha)$ is analytic in the cut plane $\widetilde{\mathbb{C}}_{\alpha}$. From (\ref{equ-alpha-1}), $\alpha(z)$ is analytic in $\mathbb{C}_z$ except at the pole $z=0$, which implies that $\alpha(z)$ is meromorphic in $\mathbb{C}_z$. It also follows from (\ref{equ-alpha-1}) that $\alpha(z)$ has two zero points. $\Box$ Based on Lemma \ref{lem-ana-H}, we have the analytic continuation of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ and $H_{2}(z)$. \begin{lemma}\label{lem-ana-H1-H2} The function $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ is analytic on $\widetilde{\mathbb{C}}_{\alpha}$ and $H_{2}(z)$ is analytic on $\mathbb{C}_z$. \end{lemma} \proof From Theorem \ref{the-phi-rel}, we have \[ \hat{H}_{1}(\alpha, Z_{0}(\alpha))=[\lambda A_{c-2}(\alpha)+\mu-\alpha r -\alpha]Z_{0}(\alpha)^{c}-c\mu Z_{0}(\alpha)^{c-1}. \] The analytic property is immediate from Lemma \ref{lem-ana-H}. The assertion for $H_{2}(z)$ follows easily from its definition. 
$\Box$ \section{Asymptotic analysis of $\phi_{c-1}(\alpha)$ and $\psi(z)$} \label{sec:4} In order to characterize the exact tail asymptotics for the stationary distribution $\Pi_{i}(x)$, we need to study the asymptotic property of the two unknown functions $\phi_{c-1}(\alpha)$ and $\psi(z)$ at their dominant singularities, respectively. There are three steps in the asymptotic analysis of $\phi_{c-1}(\alpha)$ and $\psi(z)$: (i) analytic continuation of the functions $\phi_{c-1}(\alpha)$ and $\psi(z)$; (ii) singularity analysis of the functions $\phi_{c-1}(\alpha)$ and $\psi(z)$; and (iii) applications of a Tauberian-like theorem. In this section, we give details of the first and second steps; the details of the third step will be given in Appendix A. We first introduce the following lemma, which is a transformation of Pringsheim's theorem for a generating function (see, for example, Dai and Miyazawa \cite{DM11}). \begin{lemma}\label{lem-radiu-conv} Let $g(x)=\int_{0}^{\infty}e^{xt}f(t)dt$ be the moment generating function with real variable $x$. The convergence parameter of $g(x)$ is given by \[ C_{p}(g)=\sup \{x\geq 0: g(x)< \infty\}. \] Then, the complex variable function $g(\alpha)$ is analytic on $\{\alpha\in \mathbb{C}: \Re(\alpha)< C_{p}(g)\}$. \end{lemma} Now, we provide detailed information about the extended generator for the fluid queue, which will be used later to investigate the analytic continuation of $\phi_{c-1}(\alpha)$. Instead of focusing on the case where the modulating process is an $M/M/c$ queue, we will consider a general setting, in which the background process is a general continuous-time Markov chain on a countable (finite or countably infinite) state space with an irreducible and conservative generator $Q=(q_{ij})$. We first recall some related definitions. Let $\Phi_{t}$ be a continuous-time Markov process with a locally compact, separable metric space $X$ and transition function $P^{t}(i,j)$. 
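As a quick numerical illustration of Lemma \ref{lem-radiu-conv}, take the (assumed, purely illustrative) density $f(t)=e^{-\beta t}$: then $g(x)=\int_{0}^{\infty}e^{xt}f(t)\,dt=1/(\beta-x)$ for $x<\beta$ and $g(x)=\infty$ otherwise, so the convergence parameter is $C_{p}(g)=\beta$. A truncated trapezoidal rule reproduces $g$ below the convergence parameter:

```python
import math

def mgf_numeric(x, beta, T=200.0, n=200000):
    """Trapezoidal approximation of int_0^T e^{x*t} * e^{-beta*t} dt.

    For x < beta the truncated integral converges to 1/(beta - x);
    for x >= beta it blows up as T grows, so C_p(g) = beta.
    """
    h = T / n
    total = 0.5 * (1.0 + math.exp((x - beta) * T))
    for i in range(1, n):
        total += math.exp((x - beta) * (i * h))
    return h * total
```

For $\beta=2$, this gives $g(0)\approx 0.5$, $g(1)\approx 1$ and $g(1.9)\approx 10$, matching $1/(\beta-x)$ up to the quadrature and truncation errors.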
We denote by $\mathcal{D}(\mathcal{A})$ the set of all functions $f$, for which there exists a measurable function $g$ such that the process $C_{t}^{f}$, defined by \[ C_{t}^{f}= f(\Phi_{t})-f(\Phi_{0})-\int_{0}^{t}g(\Phi_{s})ds, \] is a local martingale. We write $\mathcal{A}f=g$ and call $\mathcal{A}$ the extended generator of the process $\Phi$. Consider a general fluid model $(X(t), Z(t))$. Define its weakly infinitesimal generator $\mathcal{B}$ of the fluid queue by \[ \mathcal{B}g(x,i)=\lim_{t\rightarrow 0}\frac{E_{(x,i)}[g(X(t), Z(t))]-g(x,i)}{t}. \] Then, we present the following lemma. \begin{lemma}\label{lem-ext-gen} Let $(X(t), Z(t))$ be the general fluid queue with the generator $Q=(q_{ij})$, and let $g(x,i)$ be a function such that $g$ is partially differentiable about $x$ and for any $x\geq 0$, \begin{equation*} \sum_{j\in \mathbb{E}}g(x,j)q_{ij}<\infty. \end{equation*} Moreover, we assume that $\sup_{i\in \mathbb{E}}|r_{i}|<\infty$. (i) For $x> 0,$ $i\in \mathbb{E}$ or $x=0,$ $i\in \mathbb{E^{+}}$, we have \[ \mathcal{B}g(x,i)= r_{i}\frac{d g(x,i)}{d x}+ \sum_{j\in \mathbb{E}}g(x,j)q_{ij}, \] and for $x=0,$ $i\in \mathbb{E^{-}}\cup \mathbb{E^{\circ}}$, we have \[ \mathcal{B}g(0,i)= \sum_{j\in \mathbb{E}}g(0,j)q_{ij}, \] where $\mathbb{E^{+}}=\{i\in \mathbb{E}| r_{i}> 0\}$, $\mathbb{E^{-}}=\{i\in \mathbb{E}| r_{i}< 0\}$ and $\mathbb{E^{\circ}}=\{i\in \mathbb{E}| r_{i}= 0\}$. (ii) If the partial derivative $\frac{d g(x,i)}{d x}$ is continuous in $x$, then $g\in \mathcal{D}(\mathcal{A})$ and $\mathcal{A}g=\mathcal{B}g$. \end{lemma} \proof The proof is similar to that of Lemma 3.1 in \cite{LL18} and we omit the details here. It is worth noting that the phase process in \cite{LL18} is a finite continuous-time Markov chain, which is different from the phase process $\{Z(t)\}$ in this paper. In order to extend the result in \cite{LL18}, we need to impose the assumption that $\sup_{i\in \mathbb{E}}|r_{i}|<\infty$. 
$\Box$ According to Lemma \ref{lem-ext-gen}, we can state the following lemma, which is crucial for the analytic continuation of $\phi_{c-1}(\alpha)$ and $\psi(z)$. \begin{lemma}\label{lem-ana} $\phi_{c-1}(\alpha)$ is analytic on $\{\alpha: \Re(\alpha) < \alpha^{\ast}\}$, where $\alpha^{\ast}=C_{p}(\phi_{c-1})> 0$, and $\psi(z)$ is analytic on the disk $\Gamma_{z^{\ast}}=\{z: |z|< z^{\ast}\}$, where $z^{\ast}=\frac{c\mu}{\lambda}$. Moreover, the following equation is satisfied in the domain $D_{\alpha, z}=\{(\alpha, z): H(\alpha, z)=0 \ \ and \ \ \psi(\alpha, z)< \infty\}$: \begin{equation}\label{equ-ana-rel} \hat{H}_{1}(\alpha, z)\phi_{c-1}(\alpha)+ H_{2}( z)\psi(z)+ \hat{H}_{0}(\alpha, z)=0. \end{equation} \end{lemma} \proof First, we prove that $\alpha^{\ast}> 0$. It follows from Lemma \ref{lem-ext-gen} that the extended generator is given by \[ \mathcal{A}V(x, i) = -e^{\alpha x}z^{i} \left [\lambda+c\mu-\alpha r-\lambda z-\frac{c\mu}{z} \right ], \] for $x\geq 0,$ $i\geq c$, and \[ \mathcal{A}V(x, i) = -e^{\alpha x}z^{i} \left [(c-i)\alpha+\lambda+i\mu -\lambda z-\frac{i\mu}{z} \right ], \] for $x>0,$ $0\leq i\leq c-1$, and \[ \mathcal{A}V(0, i) = -z^{i} \left [\lambda+i\mu-\lambda z-\frac{i\mu}{z} \right ], \] for $0\leq i\leq c-1$. In order to find some constant $s> 0$ such that \[ \mathcal{A}V(x, i) \leq -sV(x, i), \] for $x> 0,$ $i\geq 0$, we need to choose appropriate $\alpha$ and $z$ such that for any $0 \leq i\leq c-1$, \begin{equation*} \left \{ \begin{array}{cc} \lambda+c\mu-\alpha r-\lambda z-\frac{c\mu}{z}>0, \\ (c-i)\alpha+\lambda+i\mu-\lambda z-\frac{i\mu}{z}>0. \end{array}\right. 
\end{equation*} For $\alpha=\frac{c\mu-\lambda}{r}> 0$, we have \[ \frac{\lambda+c\mu-r\alpha-\sqrt{\Delta_{1}}}{2\lambda}< 1,\ \ \frac{\lambda+i\mu+(c-i)\alpha-\sqrt{\Delta_{2}}}{2\lambda}< 1, \] and \[ \frac{\lambda+c\mu-r\alpha+\sqrt{\Delta_{1}}}{2\lambda}> 1,\ \ \frac{\lambda+i\mu+(c-i)\alpha+\sqrt{\Delta_{2}}}{2\lambda}> 1, \] where $\Delta_{1}=(\lambda+c\mu-r\alpha)^{2}-4c\lambda\mu$ and $\Delta_{2}=[\lambda+i\mu+(c-i)\alpha]^{2}-4i\lambda\mu$. Thus, there exists some $z \in B= B_{1}\cap B_{2}\cap (1, \infty)\neq \varnothing $ such that \begin{equation}\label{equ-dri-con} \mathcal{A}V(x, i)\leq -s V(x, i)+ b I_{\mathbb{L}_{0}}, \end{equation} where \begin{eqnarray*} B_{1} &=& \left (\frac{\lambda+c\mu-r\alpha-\sqrt{\Delta_{1}}}{2\lambda}, \frac{\lambda+c\mu-r\alpha+\sqrt{\Delta_{1}}}{2\lambda} \right ), \\ B_{2} &=& \left (\frac{\lambda+i\mu+(c-i)\alpha-\sqrt{\Delta_{2}}}{2\lambda}, \frac{\lambda+i\mu+(c-i)\alpha+\sqrt{\Delta_{2}}}{2\lambda} \right ),\\ s &=& \min \left \{\lambda+c\mu-\alpha r-\lambda z-\frac{c\mu}{z}, \lambda+i\mu+\alpha-\lambda z-\frac{i\mu}{z} \right \}> 0,\\ \mathbb{L}_{0} &=& \{(x,i)| x=0, 0 \leq i \leq c-1\}. \end{eqnarray*} Since the drift condition (\ref{equ-dri-con}) holds, from Theorem 7 in \cite{MT}, we know that \[ \phi_{c-1} \left (\frac{c\mu-\lambda}{r}\right )z^{c-1}< \psi\left (\frac{c\mu-\lambda}{r}, z \right )<\sum_{i=0}^{\infty}\int_{0}^{\infty}\pi_{i}(x)V(x,i)dx <\infty. \] Thus, from Lemma \ref{lem-radiu-conv} we can obtain that $\alpha^{\ast}\geq \frac{c\mu-\lambda}{r}> 0$. For $\psi(z) = \sum_{i=c-1}^{\infty}\Pi_{i}(0)z^{i} $, we have \[ \psi(z) \leq \Pi_{c-1}(0)z^{c-1}+\sum_{i=c}^{\infty}\xi_{i} z^{i} = \Pi_{c-1}(0)z^{c-1}+ \xi_{c}z^{c}\sum_{i=0}^{\infty}(\frac{\lambda}{c\mu})^{i} z^{i}. \] Since $\sum_{i=0}^{\infty}(\frac{\lambda}{c\mu})^{i} z^{i}$ is convergent in $|z|< \frac{c\mu}{\lambda}$, we have that $\psi(z)$ is analytic in the disk $\Gamma_{\frac{c\mu}{\lambda}}$. Now we prove the second assertion. 
From equation (\ref{equ-funde}), we obtain that if both $\phi_{c-1}(\alpha)$ and $\psi(z)$ are finite, then $\psi(\alpha, z)$ is finite as long as $H(\alpha, z)\neq 0$. Assume that $H(\alpha_{0}, z_{0})=0$ for some $\alpha_{0}> 0$ and $1< z_{0}< \frac{c\mu}{\lambda}$, and $\phi_{c-1}(\alpha_{0})<\infty$, $\psi(z_{0})<\infty$. Then, for small enough $\varepsilon> 0$, we have $\psi(\alpha_{0}, z_{0})< \psi(\alpha_{0}, z_{0}+\varepsilon)< \infty$; thus (\ref{equ-ana-rel}) holds for such a pair $(\alpha_{0}, z_{0})$. $\Box$ \begin{remark}\label{rem-dri} In fact, according to \cite{DMT95}, the drift condition (\ref{equ-dri-con}) implies that the fluid model driven by an $M/M/c$ queue is $V$-uniformly ergodic. \end{remark} Now, we present another relationship between $\phi_{c-1}(\alpha)$ and $\psi(z)$, and extend their analytic domains. \begin{lemma}\label{lem-asy-ana-1} (i) $\phi_{c-1}(\alpha)$ can be analytically continued to the domain $D_{\alpha}=\{\alpha\in \widetilde{\mathbb{C}}_{\alpha}: \hat{H}_{1}(\alpha, Z_{0}(\alpha))\neq 0\}\cap\{\alpha\in \widetilde{\mathbb{C}}_{\alpha}: |Z_{0}(\alpha) |< \frac{c\mu}{\lambda}\}$, and \begin{equation}\label{equ-asy-ana-1} \phi_{c-1}(\alpha)=-\frac{H_{2}( Z_{0}(\alpha))\psi(Z_{0}(\alpha))+ \hat{H}_{0}(\alpha, Z_{0}(\alpha))}{ \hat{H}_{1}(\alpha, Z_{0}(\alpha))}. \end{equation} (ii) $\psi(z)$ can be analytically continued to the domain $D_{z}=\{z\in \mathbb{C}: H_{2}(z)\neq 0\}\cap\{z\in \mathbb{C}: \Re(\alpha(z)) < \alpha^{\ast}\}$ and \begin{equation}\label{equ-asy-ana-2} \psi(z) =-\frac{\hat{H}_{1}(\alpha(z), z) \phi_{c-1}(\alpha(z))+\hat{H}_{0}(\alpha(z), z)}{ H_{2}(z)}. \end{equation} \end{lemma} \proof (i) For any $(\alpha, z)$ such that $H(\alpha, z)=0$ and $\psi(\alpha,z)<\infty$, we can get equation (\ref{equ-ana-rel}). Using $z=Z_{0}(\alpha)$ leads to (\ref{equ-asy-ana-1}). 
Then, from Lemma \ref{lem-ana}, we know that the right-hand side of the above equation is analytic except for the points such that $\hat{H}_{1}(\alpha, Z_{0}(\alpha))=0$ or $|Z_{0}(\alpha)|\geq \frac{c\mu}{\lambda}$. Hence, we get the assertion. Similarly, we can prove assertion (ii). $\Box$ Based on the above arguments, we have the following lemma. \begin{lemma}\label{lem-pol-ana} The convergence parameter $\alpha^{\ast}$ satisfies $0< \alpha^{\ast}\leq \alpha_{1}$. If $\alpha^{\ast} < \alpha_{1}$, then $\alpha^{\ast}$ is necessarily a zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$. \end{lemma} \proof From Lemma \ref{lem-asy-ana-1}-(i), we know that $\phi_{c-1}(\alpha)$ is analytic on $D_{\alpha}$ and thus the convergence parameter $\alpha^{\ast}\leq \alpha_{1}$. For the case $\alpha^{\ast}< \alpha_{1}$, we can deduce from Lemma~\ref{lem-asy-ana-1}-(i) that $\alpha^{\ast}$ is either a zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ or a point such that $|Z_{0}(\alpha^*)|\geq \frac{c\mu}{\lambda}$. In the following we prove $|Z_{0}(\alpha)|< \frac{c\mu}{\lambda}$ for $\alpha\in (0, \alpha_{1})$. For $\alpha \leq \alpha_{1}$, we have \[ Z_{0}(\alpha)= Z_{+}(\alpha)=\frac{-\alpha r+\lambda+c\mu - \sqrt{(-\alpha r+\lambda+c\mu)^{2}-4c\lambda\mu}}{2\lambda}, \] which is a strictly increasing function of $\alpha$. Thus, for any $\alpha\in (0, \alpha_{1})$, we have \begin{equation}\label{equ-z} 1=Z_{0}(0) < Z_{0}(\alpha)< Z_{0}(\alpha_{1})= \sqrt{\frac{c\mu}{\lambda}}<\frac{c\mu}{\lambda}. \end{equation} $\Box$ In order to perform the subsequent asymptotic arguments using the technique of complex analysis, we need to make some assumptions, which are collected as follows. \begin{assumption}\label{ass-1} (i) The function $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ has at most one real zero point in $(0, \alpha_{1}]$, denoted by $\tilde{\alpha}$ if such a zero exists. 
(ii) The zero point $\tilde{\alpha}$ satisfies $H_{2}( Z_{0}(\tilde{\alpha}))\psi(Z_{0}(\tilde{\alpha}))+ \hat{H}_{0}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))\neq 0$. (iii) The unique zero $\tilde{\alpha}$ is a zero of multiplicity $k$ of the function $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$, where $k \geq 1$ is an integer. \end{assumption} \begin{remark}\label{rem-ass} (i) For any set of model parameters $c$, $\lambda$ and $\mu$, (i), (ii) and (iii) of Assumption \ref{ass-1} can be easily checked numerically. (ii) In many cases, these assumptions are not necessary. For example, if $c\mu> \lambda(r+1)$, we can derive from the expression of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ that the unique zero point $\tilde{\alpha}$ must be a simple zero. In this case, (iii) of Assumption \ref{ass-1} is redundant. Moreover, we will show in Section~\ref{sec:6} that all (i), (ii) and (iii) of Assumption \ref{ass-1} are redundant for the special cases $c=1$ and $c=2$. In fact, our extensive numerical calculations (for many sets of $\lambda$, $\mu$ and $r$ values) suggest that all (i), (ii) and (iii) are redundant in the general case, but a rigorous proof is still not available at this moment. \end{remark} The next lemma, which follows from Lemma~\ref{lem-asy-ana-1}-(i), provides more details about the convergence parameter $\alpha^{\ast}$. \begin{lemma}\label{lem-sin-ana} Suppose that (i) and (ii) of Assumption \ref{ass-1} hold. Then (i) if the zero point $\tilde{\alpha}$ exists and $\tilde{\alpha}< \alpha_{1}$, we have $\alpha^{\ast}=\tilde{\alpha}$, (ii) if the zero point $\tilde{\alpha}$ exists and $\tilde{\alpha}= \alpha_{1}$, we have $\alpha^{\ast}=\tilde{\alpha}=\alpha_{1}$, (iii) if $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ has no real zero points in $(0, \alpha_{1}]$, we have $\alpha^{\ast}=\alpha_{1}$. 
\end{lemma} Based on the above analysis, we can provide the following tail asymptotic properties for $\phi_{c-1}(\alpha)$ and $\psi(z)$, which are the key to characterizing exact tail asymptotics of the stationary distribution of the fluid queue. \begin{theorem}\label{the-tail-asy-Pi1} Suppose that (i) and (ii) of Assumption \ref{ass-1} hold. For the function $\phi_{c-1}(\alpha)$, a total of three types of asymptotics exist as $\alpha$ approaches $\alpha^{\ast}$, according to the detailed property of $\alpha^{\ast}$ stated in Lemma~\ref{lem-sin-ana}. Case (i) If (i) of Lemma \ref{lem-sin-ana} and (iii) of Assumption \ref{ass-1} hold, then \[ \lim_{\alpha\rightarrow \alpha^{\ast}}(\alpha^{\ast}-\alpha)^{k}\phi_{c-1}(\alpha)= c_{1}, \] where \[ c_{1}=\frac{H_{2}(Z_{0}(\alpha^{\ast}))\psi(Z_{0}(\alpha^{\ast}))+ \hat{H}_{0}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))} {\hat{H}^{(k)}_{1}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))}, \] and $\hat{H}^{(k)}_{1}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))$ denotes the $k$th derivative of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ with respect to $\alpha$, evaluated at $\alpha=\alpha^{\ast}$. Case (ii) If (ii) of Lemma \ref{lem-sin-ana} holds, then \[ \lim_{\alpha\rightarrow \alpha^{\ast}}\sqrt{\alpha^{\ast}-\alpha}\cdot\phi_{c-1}(\alpha)=c_{2}, \] where \[ c_{2}=\frac{2\lambda [H_{2}( Z_{0}(\alpha^{\ast}))\psi(Z_{0}(\alpha^{\ast}))+ \hat{H}_{0}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))]}{\frac{\partial \hat{H}_{1}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))}{\partial Z_{0}(\alpha^{\ast})}\cdot \sqrt{\alpha_{2}-\alpha^{\ast}}}.
\] Case (iii) If (iii) of Lemma \ref{lem-sin-ana} holds, then \[ \lim_{\alpha\rightarrow \alpha^{\ast}}\sqrt{\alpha^{\ast}-\alpha}\cdot\phi_{c-1}'(\alpha)=c_{3}, \] where \[ c_{3}=\frac{\partial L(\alpha, z)}{\partial z}|_{(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))} \frac{\sqrt{\alpha_{2}-\alpha_{1}}}{2\lambda}, \] and $L(\alpha, z)=-\frac{H_{2}(z)\psi(z)+ \hat{H}_{0}(\alpha, z)}{ \hat{H}_{1}(\alpha, z)}.$ \end{theorem} \proof (i) In this case, $\alpha^{\ast}= \tilde{\alpha}$ is a zero of multiplicity $k$ of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$. From (\ref{equ-asy-ana-1}), we have \[ (\tilde{\alpha}-\alpha)^{k}\phi_{c-1}(\alpha)=-\frac{H_{2}(Z_{0}(\alpha))\psi(Z_{0}(\alpha))+ \hat{H}_{0}(\alpha, Z_{0}(\alpha))} {\hat{H}_{1}(\alpha, Z_{0}(\alpha))/(\tilde{\alpha}-\alpha)^{k}}. \] It follows that \begin{equation}\label{equ-alpha} \lim_{\alpha\rightarrow \tilde{\alpha}}(\tilde{\alpha}-\alpha)^{k}\phi_{c-1}(\alpha)= \frac{H_{2}( Z_{0}(\tilde{\alpha}))\psi(Z_{0}(\tilde{\alpha}))+ \hat{H}_{0}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))} {\hat{H}^{(k)}_{1}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))}=c_{1}. \end{equation} Moreover, as stated in Remark \ref{rem-ass}, we can obtain that $c_{1}\neq 0$; similarly, we also have $c_{2}, c_{3} \neq 0$ in the remainder of the proof. (ii) In this case, $\alpha^{\ast}= \tilde{\alpha}= \alpha_{1}$, which implies that $\alpha_{1}$ is not only a zero point of $\Delta(\alpha)$ but also a zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$. Suppose that $\alpha_{1}$ is a zero of multiplicity $m \geq 2$. Then, we have \[ \lambda A_{c-2}'(\alpha_{1})=\frac{3}{2(c-1)\mu}r+\frac{1}{(c-1)\mu}> 0, \] which contradicts the fact that $\lambda A_{c-2}'(\alpha_{1})< 0$. Hence, $\alpha_{1}$ is a simple zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$.
Thus, we have \begin{eqnarray*} \lim_{\alpha\rightarrow \alpha^{\ast}} \sqrt{\alpha^{\ast}-\alpha} \cdot\phi_{c-1}(\alpha) &=& \lim_{\alpha\rightarrow \alpha^{\ast}} -\frac{H_{2}( Z_{0}(\alpha))\psi(Z_{0}(\alpha))+ \hat{H}_{0}(\alpha, Z_{0}(\alpha))}{\hat{H}_{1}(\alpha, Z_{0}(\alpha))/\sqrt{\alpha^{\ast}-\alpha} } \\ &=& \lim_{\alpha\rightarrow \alpha^{\ast}} \frac{H_{2}(Z_{0}(\alpha))\psi(Z_{0}(\alpha))+ \hat{H}_{0}(\alpha, Z_{0}(\alpha))}{\sqrt{\alpha^{\ast}-\alpha}\cdot[\frac{\partial \hat{H}_{1}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))}{\partial \alpha^{\ast}} + Z_{0}'(\alpha^{\ast})\cdot \frac{\partial \hat{H}_{1}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))}{\partial Z_{0}(\alpha^{\ast})}]}, \end{eqnarray*} where \[ \lim_{\alpha\rightarrow \alpha^{\ast}}\sqrt{\alpha^{\ast}-\alpha}\cdot\frac{\partial \hat{H}_{1}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))}{\partial \alpha^{\ast}}=0 \] and \begin{eqnarray*} \lim_{\alpha\rightarrow \alpha^{\ast}}\sqrt{\alpha^{\ast}-\alpha}\,Z_{0}'(\alpha) &=& \lim_{\alpha\rightarrow \alpha^{\ast}}\frac{Z_{0}(\alpha^{\ast})-Z_{0}(\alpha)}{\sqrt{\alpha^{\ast}-\alpha}}\\ &=& \lim_{\alpha\rightarrow \alpha^{\ast}}\left[\frac{b(\alpha^{\ast})-b(\alpha)}{2\lambda\sqrt{\alpha^{\ast}-\alpha}}+ \frac{\sqrt{(\alpha-\alpha^{\ast})(\alpha-\alpha_{2})}}{2\lambda\sqrt{\alpha^{\ast}-\alpha}}\right]\\ &=& \frac{\sqrt{\alpha_{2}-\alpha^{\ast}}}{2\lambda}. \end{eqnarray*} It follows that \[ \lim_{\alpha\rightarrow \alpha^{\ast}}\sqrt{\alpha^{\ast}-\alpha}\cdot\phi_{c-1}(\alpha)=\frac{2\lambda [H_{2}( Z_{0}(\alpha^{\ast}))\psi(Z_{0}(\alpha^{\ast}))+ \hat{H}_{0}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))]}{\frac{\partial \hat{H}_{1}(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))}{\partial Z_{0}(\alpha^{\ast})}\cdot \sqrt{\alpha_{2}-\alpha^{\ast}}}=c_{2}. \] (iii) In this case, $\alpha^{\ast}= \alpha_{1}$. Let \[ L(\alpha, z)=-\frac{H_{2}(z)\psi(z)+ \hat{H}_{0}(\alpha, z)}{ \hat{H}_{1}(\alpha, z)}.
\] From (\ref{equ-asy-ana-1}), we have \begin{eqnarray*} \phi_{c-1}'(\alpha) &=& \frac{\partial L(\alpha, z)}{\partial \alpha}+ \frac{\partial L(\alpha, z)}{\partial z} \cdot Z_{0}'(\alpha). \end{eqnarray*} It follows that \begin{eqnarray*} \lim_{\alpha\rightarrow \alpha^{\ast}}\sqrt{\alpha^{\ast}-\alpha}\cdot\phi_{c-1}'(\alpha) &=& \lim_{\alpha\rightarrow \alpha^{\ast}}\sqrt{\alpha^{\ast}-\alpha}\cdot\left[\frac{\partial L(\alpha, z)}{\partial \alpha}+ \frac{\partial L(\alpha, z)}{\partial z}\cdot Z_{0}'(\alpha)\right]\\ &=& \lim_{\alpha\rightarrow \alpha^{\ast}}\frac{\partial L(\alpha, z)}{\partial z}\cdot \sqrt{\alpha^{\ast}-\alpha}\cdot Z_{0}'(\alpha)\\ &=& \frac{\partial L(\alpha, z)}{\partial z}|_{(\alpha^{\ast}, Z_{0}(\alpha^{\ast}))} \frac{\sqrt{\alpha_{2}-\alpha_{1}}}{2\lambda}= c_{3}. \end{eqnarray*} $\Box$ The asymptotic property for $\psi(z)$ can be stated as follows. \begin{theorem}\label{the-tail-asy-Pi2} For the function $\psi(z)$, we have the following asymptotic property as $z$ approaches $\tilde{z}=\frac{c\mu}{\lambda}$: \[ \lim_{z\rightarrow \tilde{z}}(\tilde{z}-z)\psi(z)= d_{\tilde{z}}, \] where \[ d_{\tilde{z}}= \frac{\hat{H}_{1}(\alpha(\tilde{z}), \tilde{z}) \phi_{c-1}(\alpha(\tilde{z}))+\hat{H}_{0}(\alpha(\tilde{z}), \tilde{z})}{\lambda(\tilde{z}-1)}. \] \end{theorem} \proof From (\ref{equ-asy-ana-2}), we have \begin{eqnarray*} \lim_{z\rightarrow \tilde{z}}(\tilde{z}-z)\psi(z) &=& \lim_{z\rightarrow \tilde{z}} \frac{\hat{H}_{1}(\alpha(z), z) \phi_{c-1}(\alpha(z))+\hat{H}_{0}(\alpha(z), z)}{\lambda(z-1)}\\ &=& \frac{\hat{H}_{1}(\alpha(\tilde{z}), \tilde{z}) \phi_{c-1}(\alpha(\tilde{z}))+\hat{H}_{0}(\alpha(\tilde{z}), \tilde{z})}{\lambda(\tilde{z}-1)}. \end{eqnarray*} Moreover, we can calculate that $\hat{H}_{1}(\alpha(\tilde{z}), \tilde{z})> 0$, which implies that $d_{\tilde{z}}> 0$.
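The transform pair behind the Tauberian-like theorem of Appendix~\ref{app:A} can be confirmed directly: for $f(t)=e^{-x_{0}t}t^{s-1}/\Gamma(s)$ one has $\int_{0}^{\infty}e^{xt}f(t)\,dt=(x_{0}-x)^{-s}$ exactly, which is the model case of Lemma \ref{the-tauber-like-2}. The following sketch checks this numerically; the values $x_{0}=2$, $s=1.5$ and $x=1$ are illustrative test values only, not model quantities.

```python
import math

# Numerical check (illustrative values x0 = 2, s = 1.5, x = 1) of the exact
# transform pair
#   f(t) = exp(-x0*t) * t**(s-1) / Gamma(s),
#   g(x) = int_0^inf exp(x*t) * f(t) dt = (x0 - x)**(-s),
# which underlies the Tauberian-like theorem of Appendix A.
x0, s, x = 2.0, 1.5, 1.0

def f(t):
    return math.exp(-x0 * t) * t ** (s - 1.0) / math.gamma(s)

# composite trapezoidal rule on [0, 50]; the integrand decays like exp(-t)
n, T = 400000, 50.0
h = T / n
total = 0.5 * (f(0.0) + f(T) * math.exp(x * T))
for i in range(1, n):
    t = i * h
    total += f(t) * math.exp(x * t)
integral = h * total
exact = (x0 - x) ** (-s)
assert abs(integral - exact) < 1e-4  # both equal (2 - 1)**(-1.5) = 1
```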
$\Box$ \section{Exact tail asymptotics for $\Pi_{i}(x)$ and $\Pi(x)$} \label{sec:5} Lemma \ref{lem-tail-asy-2} and Lemma \ref{lem-tail-asy-Pi} specify exact tail asymptotic properties for the density function $\pi_{c-1}(x)$ and boundary probabilities $\Pi_{i}(0)$, respectively, which are direct consequences of the detailed asymptotic behavior of $\phi_{c-1}(\alpha)$ and $\psi(z)$, and the Tauberian-like theorem given in Appendix~\ref{app:A}. Moreover, the tail asymptotics for the joint probability $\Pi_{i}(x)$, the density function $\pi_{i}(x)$, the marginal distribution $\Pi(x)=\sum_{i=0}^{\infty}\Pi_{i}(x)$, and the density function $\pi(x)= \frac{d\Pi(x)}{dx}$ for $x> 0$ are also provided in this section. \begin{lemma}\label{lem-tail-asy-2} Suppose that (i) and (ii) of Assumption \ref{ass-1} hold. For the density function $\pi_{c-1}(x)$ of the fluid queue, we have the following tail asymptotic properties for large enough $x$. Case (i) If (i) of Lemma \ref{lem-sin-ana} and (iii) of Assumption \ref{ass-1} hold, then \[ \pi_{c-1}(x)\sim C_{1}e^{-\alpha^{\ast} x}x^{k-1}. \] Case (ii) If (ii) of Lemma \ref{lem-sin-ana} holds, then \[ \pi_{c-1}(x) \sim C_{2} e^{-\alpha^{\ast} x}x^{-\frac{1}{2}}. \] Case (iii) If (iii) of Lemma \ref{lem-sin-ana} holds, then \[ \pi_{c-1}(x) \sim C_{3} e^{-\alpha^{\ast} x}x^{-\frac{3}{2}}, \] where $C_{1}=\frac{c_{1}}{\Gamma(k)}$, $C_{2}=\frac{c_{2}}{\sqrt{\pi}}$, $C_{3}=\frac{-c_{3}}{2\sqrt{\pi}}$ and $c_{i}, i=1, 2, 3$ are defined in Theorem \ref{the-tail-asy-Pi1}. \end{lemma} \begin{lemma}\label{lem-tail-asy-Pi} For the boundary probabilities $\Pi_{i}(0)$ of the fluid queue, we have the following tail asymptotic property for large enough $i$: \[ \Pi_{i}(0)\sim d_{\tilde{z}}\cdot \Big (\frac{1}{\tilde{z}} \Big )^{i+1}, \] where $\tilde{z}=\frac{c\mu}{\lambda}$ and $d_{\tilde{z}}$ is defined in Theorem \ref{the-tail-asy-Pi2}.
\end{lemma} Now we provide details for the exact tail asymptotic characterization of the (general) joint probabilities $\Pi_{i}(x)$ for any $i\geq c-1$. \begin{theorem}\label{the-tail-asy-3} Suppose that (i) and (ii) of Assumption \ref{ass-1} hold. For the joint probabilities $\Pi_{i}(x)$ of the fluid queue, we have the following tail asymptotic properties for any $i \geq c-1$ and large enough $x$. Case (i) If (i) of Lemma \ref{lem-sin-ana} and (iii) of Assumption \ref{ass-1} hold, then \begin{equation}\label{equ-joint-probab} \Pi_{i}(x)- \xi_{c} \Big (\frac{\lambda}{c\mu} \Big )^{i-c} \sim -\frac{C_{1}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{k-1} \Big (\frac{1}{z^{\ast}} \Big )^{i-c}, \end{equation} and \[ \pi_{i}(x) \sim C_{1}e^{-\alpha^{\ast}x}x^{k-1} \Big (\frac{1}{z^{\ast}} \Big )^{i-c}. \] Case (ii) If (ii) of Lemma \ref{lem-sin-ana} holds, then \[ \Pi_{i}(x)- \xi_{c} \Big (\frac{\lambda}{c\mu} \Big )^{i-c} \sim -\frac{C_{2}}{\alpha^{\ast}} e^{-\alpha^{\ast}x}x^{-\frac{1}{2}} \Big (\frac{1}{z^{\ast}} \Big )^{i-c}, \] and \[ \pi_{i}(x) \sim C_{2}e^{-\alpha^{\ast}x}x^{-\frac{1}{2}} \Big (\frac{1}{z^{\ast}} \Big )^{i-c}. \] Case (iii) If (iii) of Lemma \ref{lem-sin-ana} holds, then \[ \Pi_{i}(x)- \xi_{c} \Big (\frac{\lambda}{c\mu} \Big )^{i-c} \sim -\frac{C_{3}}{\alpha^{\ast}} e^{-\alpha^{\ast}x}x^{-\frac{3}{2}} \Big (\frac{1}{z^{\ast}} \Big )^{i-c}, \] and \[ \pi_{i}(x) \sim C_{3}e^{-\alpha^{\ast}x}x^{-\frac{3}{2}} \Big (\frac{1}{z^{\ast}} \Big )^{i-c}, \] where $z^{\ast}=Z_{0}(\alpha^{\ast})$ and $C_{1}$, $C_{2}$, $C_{3}$ are defined in Lemma \ref{lem-tail-asy-2}. \end{theorem} \proof We only prove (i), since (ii) and (iii) can be similarly proved.
For $i=c-1$, we have \[ \lim_{x\rightarrow\infty} \frac{C_{1}e^{-\alpha^{\ast} x}x^{k-1}}{\xi_{c-1}-\Pi_{c-1}(x)} = \lim_{x\rightarrow\infty} \frac{\alpha^{\ast}C_{1}e^{-\alpha^{\ast} x}x^{k-1}-(k-1)C_{1}e^{-\alpha^{\ast} x}x^{k-2}}{\pi_{c-1}(x)} = \alpha^{\ast}, \] where the first equality follows from L'Hospital's rule and the second equality follows from Lemma \ref{lem-tail-asy-2}. Hence, we have $\Pi_{c-1}(x)-\xi_{c-1} \sim -\frac{C_{1}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{k-1}$ as $x\rightarrow \infty$. Now, suppose that (\ref{equ-joint-probab}) holds for $i=m$ with $m \geq c-1$. Then, for $i=m+1$ it follows from (\ref{equ-joi-dis-3}) that \[ c\mu \Pi_{m+1}(x)= -\lambda\Pi_{m-1}(x)+(\lambda+c\mu)\Pi_{m}(x)+r \pi_{m}(x), \] which leads to \begin{eqnarray*} &&\lim_{x\rightarrow\infty}\frac{ \Pi_{m+1}(x)-\xi_{c}(\frac{\lambda}{c\mu})^{m+1-c}}{\frac{C_{1}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{k-1}} \\ &=&\lim_{x\rightarrow\infty} \left [-\frac{\lambda}{c\mu}\cdot\frac{\Pi_{m-1}(x)-\xi_{c}(\frac{\lambda}{c\mu})^{m-c-1}}{\frac{C_{1}}{\alpha^{\ast}} e^{-\alpha^{\ast}x}x^{k-1}} +\frac{\lambda+c\mu}{c\mu}\cdot\frac{ \Pi_{m}(x)-\xi_{c}(\frac{\lambda}{c\mu})^{m-c}}{\frac{C_{1}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{k-1}}+ \frac{r\alpha^{\ast}}{c\mu}\cdot\frac{ \pi_{m}(x)}{C_{1}e^{-\alpha^{\ast}x}x^{k-1}} \right ]\\ &=& -(\frac{1}{z^{\ast}})^{m-c} \left [-\frac{\lambda}{c\mu}z^{\ast}+\frac{\lambda+ c\mu}{c\mu}- \frac{r \alpha^{\ast}}{c\mu } \right ] \\ &=& -(\frac{1}{z^{\ast}})^{m+1-c}, \end{eqnarray*} where the last equality follows from the fact that $H(\alpha^{\ast}, z^{\ast})=0$ and $z^{\ast}=Z_{0}(\alpha^{\ast})$. This completes the proof. $\Box$ \begin{remark}\label{rem-c} According to (\ref{equ-phi-rel}), we can derive tail asymptotic properties of $\phi_{c-2}(\alpha)$ from $\phi_{c-1}(\alpha)$, and thus tail asymptotic properties for $\Pi_{c-2}(x)$.
Similarly, a relationship can be established between $\phi_{i}(\alpha)$ and $\phi_{i-1}(\alpha)$ for any $1 \leq i\leq c-2 $, and thus tail asymptotic properties for the joint probability $\Pi_{i}(x)$ can be obtained for any $0\leq i \leq c-1$. \end{remark} In the following theorem, we provide exact tail asymptotics for the marginal probabilities $\Pi(x)$. \begin{theorem}\label{the-tail-asy-4} Suppose that (i) and (ii) of Assumption \ref{ass-1} hold. For the marginal probabilities $\Pi(x)$ of the fluid queue, we have the following tail asymptotic properties: Case (i) If (i) of Lemma \ref{lem-sin-ana} and (iii) of Assumption \ref{ass-1} hold, then \begin{equation*} \Pi(x)- 1 \sim -\frac{\tilde{C}_{1}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{k-1}, \end{equation*} and \[ \pi(x) \sim \tilde{C}_{1}e^{-\alpha^{\ast}x}x^{k-1}. \] Case (ii) If (ii) of Lemma \ref{lem-sin-ana} holds, then \[ \Pi(x)- 1 \sim -\frac{\tilde{C}_{2}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{-\frac{1}{2}}, \] and \[ \pi(x) \sim \tilde{C}_{2}e^{-\alpha^{\ast}x}x^{-\frac{1}{2}}. \] Case (iii) If (iii) of Lemma \ref{lem-sin-ana} holds, then \[ \Pi(x)- 1 \sim -\frac{\tilde{C}_{3}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{-\frac{3}{2}}, \] and \[ \pi(x) \sim \tilde{C}_{3}e^{-\alpha^{\ast}x}x^{-\frac{3}{2}}, \] where $\tilde{C}_{i}=\left[\frac{\hat{H}_{1}(\alpha^{\ast}, 1)}{ H(\alpha^{\ast}, 1)}+\sum_{k=0}^{c-2}A_{k}(\alpha^{\ast}) A_{k+1}(\alpha^{\ast})\cdots A_{c-2}(\alpha^{\ast})\right]C_{i}$, $i=1, 2, 3$, $C_{1}$, $C_{2}$ and $C_{3}$ are defined in Lemma~\ref{lem-tail-asy-2}, and $A_{i}(\alpha)$ is defined in Theorem~\ref{the-phi-rel}. \end{theorem} \proof Let $z=1$. It follows from (\ref{equ-funde}) that \[ H(\alpha, 1)\psi(\alpha, 1)= \hat{H}_{1}(\alpha, 1)\phi_{c-1}(\alpha)+ H_{2}( 1)\psi(1)+ \hat{H}_{0}(\alpha, 1).
\] Thus, we get \begin{equation}\label{equ-mar-dis} H(\alpha, 1)\int_{0}^{\infty}\sum_{i=c-1}^{\infty} \pi_{i}(x) e^{\alpha x}dx=\hat{H}_{1}(\alpha, 1)\int_{0}^{\infty}\pi_{c-1}(x) e^{\alpha x}dx+ \hat{H}_{0}(\alpha, 1), \end{equation} since $H_2(1)=0$. From (\ref{equ-mar-dis}), we have \[ \int_{0}^{\infty} \sum_{i=c-1}^{\infty} \pi_{i}(x) e^{\alpha x}dx=\frac{\hat{H}_{1}(\alpha, 1)}{ H(\alpha, 1)}\int_{0}^{\infty}\pi_{c-1}(x) e^{\alpha x}dx+\frac{\hat{H}_{0}(\alpha, 1)}{H(\alpha, 1)}. \] Now, we prove \begin{equation}\label{equ-exc-orde} \int_{0}^{\infty} \sum_{i=0}^{\infty} \pi_{i}(x) e^{\alpha x}dx =\int_{0}^{\infty} \pi(x) e^{\alpha x}dx, \end{equation} where $\pi_{i}(x)=\frac{\partial\Pi_{i}(x)}{\partial x}$ and $\pi(x)=\frac{d\Pi(x)}{dx}$ for any $x> 0$. For any fixed $x$, we can obtain \[ \sum_{i=0}^{\infty} \Pi_{i}(x)= P\{X< x\}\leq 1, \] which implies that $\sum_{i=0}^{\infty} \Pi_{i}(x)$ converges for every $x$. From (\ref{equ-joi-dis-3}), we have for $i\geq c$, \[ \pi_{i}(x)=\frac{\lambda}{r}\Pi_{i-1}(x)- \frac{\lambda+ c\mu}{r}\Pi_{i}(x)+ \frac{c\mu}{r} \Pi_{i+1}(x)\leq \frac{\lambda}{r} \xi_{i-1}+\frac{c\mu}{r} \xi_{i+1}. \] Since \[ \sum_{i=c}^{\infty} \frac{\lambda}{r} \xi_{i-1}+ \sum_{i=c}^{\infty} \frac{c\mu}{r} \xi_{i+1} < \frac{\lambda+ c\mu}{r}< \infty, \] according to the Weierstrass criterion, we obtain that $\sum_{i=0}^{\infty} \pi_{i}(x)$ converges uniformly in $x$. Thus, we obtain equation (\ref{equ-exc-orde}). From (\ref{equ-mar-dis}), we have \[ \int_{0}^{\infty} \pi(x) e^{\alpha x}dx=\frac{\hat{H}_{1}(\alpha, 1)}{ H(\alpha, 1)}\phi_{c-1}(\alpha)+\frac{\hat{H}_{0}(\alpha, 1)}{H(\alpha, 1)}+\sum_{i=0}^{c-2}\phi_{i}(\alpha). \] From (\ref{equ-phi-rel}), we can establish the relationship between $\phi_{i}(\alpha)$ and $\phi_{c-1}(\alpha)$ for any $0< i\leq c-2$, and thus \[ \sum_{i=0}^{c-2}\phi_{i}(\alpha)=\left(\sum_{k=0}^{c-2}A_{k}(\alpha)A_{k+1}(\alpha)\cdots A_{c-2}(\alpha)\right)\phi_{c-1}(\alpha)+ H_{c-1}(\alpha).
\] Here $H_{c-1}(\alpha)$ is an analytic function of $\alpha$, which can be determined explicitly by (\ref{equ-phi-rel}). Hence, according to the Tauberian-like theorem and the asymptotic behavior of $\phi_{c-1}(\alpha)$, we can obtain the tail asymptotic properties of $\pi(x)$, and thus those of $\Pi(x)$. $\Box$ \section{Special cases} \label{sec:6} In this section, we consider two important special cases, $c=1$ and $c=2$, for which exact asymptotic properties for the stationary distribution can be obtained without Assumption~\ref{ass-1}. The analysis of these two cases is feasible; however, the arguments for the cases $c\geq 3$ are considerably more complex, since the expression of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ is intractable for any $c\geq 3$. \subsection{Fluid queue driven by $M/M/1$ queue} In this case, the unique zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ can be obtained explicitly as follows. \begin{lemma}\label{lem-asy-ana-6} Let \[ \tilde{\alpha}=\frac{\mu}{r+1}-\lambda. \] Then $\tilde{\alpha}$ is the only possible zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$. Moreover, $\tilde{\alpha}$ must be a simple zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$. \end{lemma} \proof We rationalize $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ by \begin{equation}\label{equ-zero} g(\alpha)=2a\hat{H}_{1}(\alpha, Z_{0}(\alpha))\hat{H}_{1}(\alpha, Z_{1}(\alpha)). \end{equation} Then, it follows from the definition of $\hat{H}_{1}(\alpha, z)$ in Theorem \ref{the-phi-rel} and (\ref{equ-Z}) that \begin{eqnarray*} g(\alpha) &=& -2\lambda Z_{0}(\alpha)Z_{1}(\alpha)[(\mu -\alpha r - \alpha) Z_{0}(\alpha)- \mu][(\mu -\alpha r - \alpha) Z_{1}(\alpha)- \mu] \\ &=& \frac{-2\alpha\mu^{2}}{\lambda}[( r +1)\alpha-\mu+ \lambda(r+1)]. \end{eqnarray*} It is obvious that $\tilde{\alpha}=\frac{\mu}{r+1}-\lambda$ is the only possible nonzero zero point of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$.
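As a sanity check, the two expressions for $g(\alpha)$ displayed above can be compared numerically. The sketch below uses arbitrary sample values $\lambda=1$, $\mu=4$, $r=1$ (chosen so that the roots are real on $(0,\alpha_{1}]$) and evaluates both sides directly from the quadratic kernel roots for $c=1$.

```python
import math

# Numerical verification (arbitrary sample values lam = 1, mu = 4, r = 1)
# of the identity, for c = 1,
#   -2*lam*Z0*Z1*[(mu - a(r+1))*Z0 - mu]*[(mu - a(r+1))*Z1 - mu]
#       = (-2*a*mu^2/lam) * [(r+1)*a - mu + lam*(r+1)],
# where Z0, Z1 are the two roots of lam*z^2 - (-a*r + lam + mu)*z + mu = 0.
lam, mu, r = 1.0, 4.0, 1.0
alpha1 = (math.sqrt(mu) - math.sqrt(lam)) ** 2 / r  # branch point for c = 1

def kernel_roots(a):
    b = -a * r + lam + mu
    d = math.sqrt(b * b - 4.0 * lam * mu)  # real for a <= alpha1
    return (b - d) / (2.0 * lam), (b + d) / (2.0 * lam)

for a in (0.2, 0.5, 0.8, alpha1):
    z0, z1 = kernel_roots(a)
    w = mu - a * (r + 1.0)
    lhs = -2.0 * lam * z0 * z1 * (w * z0 - mu) * (w * z1 - mu)
    rhs = (-2.0 * a * mu ** 2 / lam) * ((r + 1.0) * a - mu + lam * (r + 1.0))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
# for these particular sample values, tilde(alpha) = mu/(r+1) - lam = alpha1
assert abs(mu / (r + 1.0) - lam - alpha1) < 1e-12
```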
$\Box$ \begin{lemma}\label{lem-not-zero-1} The unique zero point $\tilde{\alpha}$ satisfies the following inequality: \[ H_{2}( Z_{0}(\tilde{\alpha}))\psi(Z_{0}(\tilde{\alpha}))+ \hat{H}_{0}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))\neq 0. \] \end{lemma} \proof From the initial condition $\Pi_{i}(0)= 0$ for any $i\geq 1$, we have \[ \psi(Z_{0}(\tilde{\alpha}))=\sum_{i=0}^{\infty}\Pi_{i}(0)Z_{0}^{i}(\tilde{\alpha})=\Pi_{0}(0). \] For $\tilde{\alpha}=\frac{\mu}{r+1}-\lambda$, we have $Z_{0}(\tilde{\alpha})=\min\{1+r, \frac{\mu}{\lambda(1+r)}\}$. Thus, from the definitions of $\hat{H}_{0}(\alpha, z)$ and $H_{2}(z)$ in Theorem \ref{the-phi-rel}, and the fact that $\frac{\mu}{\mu-\lambda Z_{0}(\tilde{\alpha})}\neq 1$, we can obtain that \[ \frac{-\hat{H}_{0}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))}{ H_{2}(Z_{0}(\tilde{\alpha}))}=\frac{\mu\Pi_{0}(0)}{\mu-\lambda Z_{0}(\tilde{\alpha})}\neq \Pi_{0}(0), \] which implies that $H_{2}(Z_{0}(\tilde{\alpha}))\psi(Z_{0}(\tilde{\alpha}))+ \hat{H}_{0}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))\neq 0$. $\Box$ Since the following inequality always holds, \[ \tilde{\alpha}= \frac{\mu}{r+1}-\lambda\leq \alpha_{1}=\frac{(\sqrt{\mu}-\sqrt{\lambda})^{2}}{r}, \] we only have two tail asymptotic properties for the stationary distribution $\Pi_{i}(x)$ and the marginal distribution $\Pi(x)$ of the fluid queue in this case. Here we omit the details and only present the asymptotic property for the marginal distribution. \begin{theorem}\label{the-tail-asy-marginal-6} For the marginal distribution $\Pi(x)$ of the fluid queue, we have the following tail asymptotic properties for large enough $x$: Case (i) If (i) of Lemma \ref{lem-sin-ana} holds (i.e. $\tilde{\alpha}= \frac{\mu}{r+1}-\lambda<\alpha_{1}$), then \[ \Pi(x)- 1 \sim -\frac{(r+1)\tilde{\alpha}c_{1}}{r}e^{-\tilde{\alpha} x}; \] Case (ii) If (ii) of Lemma \ref{lem-sin-ana} holds (i.e.
$\tilde{\alpha}= \frac{\mu}{r+1}-\lambda=\alpha_{1}$), then \[ \Pi(x)- 1 \sim -\frac{(r+1)\tilde{\alpha}c_{2}}{r\sqrt{\pi}} e^{-\tilde{\alpha} x}x^{-\frac{1}{2}}. \] Here $c_{1}$ and $c_{2}$ are defined in Theorem~\ref{the-tail-asy-Pi1}. \end{theorem} \subsection{Fluid queue driven by $M/M/2$ queue} In this case, we can obtain the following lemma. \begin{lemma}\label{lem-asy-ana-6-b} The function $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$ has at most one real zero point in $(0, \alpha_{1}]$. Moreover, this unique zero point, denoted by $\tilde{\alpha}$ if it exists, must be a simple zero point. \end{lemma} \proof Let $g(\alpha)=0$, where $g(\alpha)$ is defined in (\ref{equ-zero}). We can obtain the following equation: \[ (r+1)\alpha^{3}+[3\lambda(r+1)+\mu r]\alpha^{2}+[3\lambda^{2}(r+1)+\mu\lambda r-\lambda\mu-\mu^{2}]\alpha+\lambda^{3}(r+1)-\lambda^{2}\mu-2\lambda\mu^{2}=0. \] Denote the left-hand side of the above equation by $\tilde{g}(\alpha)$. For any $\alpha> 0$, we have \[ \tilde{g}''(\alpha)=6(r+1)\alpha+6\lambda(r+1)+2\mu r> 0, \] which implies that $\tilde{g}(\alpha)$ is a convex function for any $\alpha> 0$. If $\tilde{g}(0)=\lambda^{3}(r+1)-\lambda^{2}\mu-2\lambda\mu^{2} < 0$, we can derive that $\tilde{g}(\alpha)=0$ has only one real solution on $(0, \infty)$, which implies that there exists at most one real solution on $(0, \alpha_{1}]$, denoted by $\tilde{\alpha}$ if it exists. Moreover, according to the property of convex functions, we have $\tilde{g}'(\tilde{\alpha})>0$, which implies that $\tilde{\alpha}$ is a simple zero point. If $\tilde{g}(0) \geq 0$, we have $\tilde{g}'(0)=3\lambda^{2}(r+1)+\mu\lambda r-\lambda\mu-\mu^{2}> 0$ and thus $\tilde{g}(\alpha)=0$ has no real solution on $(0, \infty)$. $\Box$ \begin{lemma}\label{lem-not-zero-2} The unique zero point $\tilde{\alpha}$, if it exists, satisfies the following inequality \[ H_{2}( Z_{0}(\tilde{\alpha}))\psi(Z_{0}(\tilde{\alpha}))+ \hat{H}_{0}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))\neq 0.
\] \end{lemma} \proof From the definitions of $H_{2}(z)$ and $\hat{H}_{0}(\alpha, z)$ in Theorem \ref{the-phi-rel}, for $c=2$, we have \begin{eqnarray}\nonumber &&H_{2}(Z_{0}(\alpha))\psi(Z_{0}(\alpha))+ \hat{H}_{0}(\alpha, Z_{0}(\alpha)) \\ \nonumber &=& (\lambda Z_{0}(\alpha)-2\mu)(Z_{0}(\alpha)-1)\psi(Z_{0}(\alpha))+ \left[\mu Z_{0}(\alpha)^{2}-2\mu Z_{0}(\alpha)+\frac{\lambda \mu Z_{0}^{2}(\alpha)}{\alpha+\lambda}\right]\Pi_{1}(0)+\frac{\lambda \alpha Z_{0}^{2}(\alpha)}{\alpha+\lambda}\Pi_{0}(0)\\ \label{equ-not-zero} &=& Z_{0}(\alpha)^{2} \left[\lambda (Z_{0}(\alpha)-1)\Pi_{1}(0)+\frac{\alpha (\lambda\Pi_{0}(0)-\mu \Pi_{1}(0))}{\alpha+\lambda}\right], \end{eqnarray} where the second equality follows from the fact that $\psi(z)=\Pi_{1}(0)z$. Actually, from (\ref{equ-joi-dis-1}), we can obtain that $\lambda\Pi_{0}(0)-\mu \Pi_{1}(0)\geq 0$. Moreover, from (\ref{equ-z}), we have $Z_{0}(\alpha)> 1$ for any $\alpha \in (0, \alpha_{1}]$. Hence, from (\ref{equ-not-zero}) we can get \[ H_{2}( Z_{0}(\tilde{\alpha}))\psi(Z_{0}(\tilde{\alpha}))+ \hat{H}_{0}(\tilde{\alpha}, Z_{0}(\tilde{\alpha}))> 0. \] $\Box$ From the above lemmas, we can obtain the following theorem. \begin{theorem}\label{the-tail-asy-6} For the marginal probabilities $\Pi(x)$ of the fluid queue, we have the following tail asymptotic properties: Case (i) If (i) of Lemma \ref{lem-sin-ana} holds, then \begin{equation}\label{equ-mar-probab-2} \Pi(x)- 1 \sim -\frac{\tilde{C}_{1}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}; \end{equation} Case (ii) If (ii) of Lemma \ref{lem-sin-ana} holds, then \[ \Pi(x)- 1 \sim -\frac{\tilde{C}_{2}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{-\frac{1}{2}}; \] Case (iii) If (iii) of Lemma \ref{lem-sin-ana} holds, then \[ \Pi(x)- 1 \sim -\frac{\tilde{C}_{3}}{\alpha^{\ast}}e^{-\alpha^{\ast}x}x^{-\frac{3}{2}}. \] Here $\tilde{C}_{i}$, $i=1, 2, 3$, are defined in~Theorem \ref{the-tail-asy-4}. 
\end{theorem} \begin{remark}\label{rem-c2} Compared with the case of $c=1$, a new asymptotic behavior, Case (iii), appears in the case of $c=2$. We now give an example to illustrate that this new asymptotics also exists for the case of $c \geq 3$. For example, let $c=3$ and take $r=10$, $\lambda=20$, $\mu=30$; we obtain four zero points, $4$, $-67$ and $-15\pm 5i$, of $\hat{H}_{1}(\alpha, Z_{0}(\alpha))$. Thus, we have $\tilde{\alpha}=4 > \alpha_{1}=0.5$, which implies that Case (iii) holds. \end{remark} \section{Concluding remarks} \label{sec:7} In this paper, we applied the kernel method to investigate exact tail asymptotic properties of the joint stationary probabilities and the marginal distribution of the fluid queue driven by an $M/M/c$ queue. In contrast to the models studied in \cite{LZ11} and \cite{LTZ13}, for which the tail asymptotic properties are symmetric between the level and the phase since both processes are discrete, the tail asymptotics for $\phi_{c-1}(\alpha)$ and $\psi(z)$ in this paper are asymmetric, since the phase process is discrete while the level process is continuous. There exists a total of three different types of exact tail asymptotics for the stationary probabilities of the fluid queue in this paper. However, we may see in \cite{GLR13} that the stationary probabilities of the fluid queue driven by a finite Markov chain are always exactly geometric, which corresponds to Case (i) of Theorem \ref{the-tail-asy-3}. This implies that the infinite phase space gives rise to new phenomena. In Section 6, we showed that Case (iii) of Theorem \ref{the-tail-asy-3} does not appear in the case of $c=1$, but exists for the case of $c\geq 2$. This implies that the asymptotic behaviour for $c \geq 2$ can be significantly different from that for the case of $c=1$. From the arguments given in the paper, we have seen that Assumption \ref{ass-1} is redundant in the special cases $c=1$ and $c=2$.
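The convexity and sign argument used for $c=2$ in Section~\ref{sec:6} can be spot-checked numerically: the sketch below (over an arbitrary parameter grid) verifies that the leading coefficients of $\tilde{g}$ are positive, so $\tilde{g}''(\alpha)>0$ on $(0,\infty)$, and that $\tilde{g}(0)\geq 0$ indeed forces $\tilde{g}'(0)>0$.

```python
# Numerical spot-check (arbitrary parameter grid) of the sign argument for
# the cubic used in the c = 2 case:
#   g(a) = (r+1)a^3 + [3*lam*(r+1) + mu*r]a^2
#        + [3*lam^2*(r+1) + mu*lam*r - lam*mu - mu^2]a
#        + lam^3*(r+1) - lam^2*mu - 2*lam*mu^2.
def g_coeffs(lam, mu, r):
    return ((r + 1.0),
            3.0 * lam * (r + 1.0) + mu * r,
            3.0 * lam ** 2 * (r + 1.0) + mu * lam * r - lam * mu - mu ** 2,
            lam ** 3 * (r + 1.0) - lam ** 2 * mu - 2.0 * lam * mu ** 2)

for lam in (0.5, 1.0, 2.0, 5.0, 10.0):
    for mu in (0.5, 1.0, 2.0, 5.0):
        for r in (0.1, 1.0, 10.0):
            a3, a2, a1, a0 = g_coeffs(lam, mu, r)
            assert a3 > 0 and a2 > 0  # hence g''(a) = 6*a3*a + 2*a2 > 0 for a > 0
            if a0 >= 0:               # g(0) >= 0  ==>  g'(0) = a1 > 0
                assert a1 > 0
```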
Based on our numerical calculations for a broad range of parameter values, we conjecture that Assumption \ref{ass-1} is redundant for all cases of $c\geq 3$. \appendix \section{Tauberian-like theorem} \label{app:A} Denote \[ \Delta_{1}(\phi, \varepsilon)=\{x: |x|\leq |x_{0}|+\varepsilon, |\arg(x-x_{0})|> \phi, \varepsilon>0, 0<\phi<\frac{\pi}{2}\}. \] Let $f_{n}$ be a sequence of numbers, with the generating function \[ f(x)=\sum_{n\geq 1} f_{n} x^{n}. \] \begin{lemma}\label{lem-tauber-like-1} (Flajolet and Odlyzko 1990) Assume that $f(x)$ is analytic in $\Delta_{1}(\phi, \varepsilon)$ except at $x=x_{0}$ and \[ f(x)\sim K(x_{0}-x)^{s} \ \ \mbox{as}\ \ x\rightarrow x_{0} \ \ \mbox{in} \ \ \Delta_{1}(\phi, \varepsilon). \] Then as $n\rightarrow \infty$, (i) If $s \not\in \{0, 1, 2, \ldots\}$, \[ f_{n}\sim\frac{K }{\Gamma(-s)}n^{-s-1}x_{0}^{-n}, \] where $\Gamma(\cdot)$ is the Gamma function. (ii) If $s$ is a non-negative integer, then \[ f_{n}= o(n^{-s-1}x_{0}^{-n}). \] \end{lemma} For the continuous case, let \[ g(x)=\int_{0}^{\infty}e^{xt}f(t)dt. \] Denote \[ \Delta_{2}(\phi, \varepsilon)=\{x: \Re(x)\leq |x_{0}|+\varepsilon, x\neq x_{0}, \varepsilon>0, |\arg(x-x_{0})|> \phi\}. \] The following lemma was shown as Theorem 2 in \cite{DDZ15}. \begin{lemma}\label{the-tauber-like-2} Assume that $g(x)$ satisfies the following conditions: (i) The left-most singularity of $g(x)$ is $x_{0}$ with $x_{0} > 0$. Furthermore, we assume that as $x\rightarrow x_{0}$, \[ g(x)\sim (x_{0}-x)^{-s} \] for some $s\in \mathbb{C}\backslash \mathbb{Z}_{-}$. (ii) $g(x)$ is analytic on $\Delta_{2}(\phi_{0}, \varepsilon)$ for some $\phi_{0}\in (0, \frac{\pi}{2}]$. (iii) $g(x)$ is bounded on $\Delta_{2}(\phi_{1}, \varepsilon)$ for some $\phi_{1}> 0$. Then, as $t\rightarrow \infty$, \[ f(t)\sim e^{-x_{0} t} \frac{t^{s-1}}{\Gamma(s)}, \] where $\Gamma(\cdot)$ is the Gamma function. \end{lemma} \end{document}
\begin{document} \title{Galois quotients of metric graphs and invariant linear systems} \begin{abstract} For a map $\varphi : \varGamma \rightarrow \varGamma^{\prime}$ between metric graphs and an isometric action on $\varGamma$ by a finite group $K$, $\varphi$ is a {\it $K$-Galois covering} on $\varGamma^{\prime}$ if $\varphi$ is a morphism, the degree of $\varphi$ coincides with the order of $K$, and $K$ induces a transitive action on every fibre. We prove that for a metric graph $\varGamma$ with an isometric action by a finite group $K$, there exists a rational map, from $\varGamma$ to a tropical projective space, which induces a $K$-Galois covering on the image. By using this fact, we also prove that for a hyperelliptic metric graph without one-valent points and with genus at least two, the invariant linear system of the hyperelliptic involution $\iota$ of the canonical linear system (the complete linear system associated to the canonical divisor) induces an $\langle \iota \rangle$-Galois covering on a tree. This is an analogue of the fact that a compact Riemann surface is hyperelliptic if and only if the canonical map, the rational map induced by the canonical linear system, is a double covering of a projective line $\boldsymbol{P}^1$. \end{abstract} {\bf keywords}: metric graph, invariant linear subsystem, rational map, Galois covering, hyperelliptic metric graph, canonical map {\bf 2010 Mathematical Subject Classification}: 14T05, 15A80 \tableofcontents \section{Introduction} Tropical geometry is algebraic geometry over the tropical semifield $\boldsymbol{T} = (\boldsymbol{R} \cup \{ - \infty \}, {\rm max}, +)$. A tropical curve is a one-dimensional object obtained from a compact Riemann surface by a limit operation called tropicalization, and it is realized as a metric graph. In this paper, a metric graph means a finite connected multigraph where each edge is identified with a closed segment of $\boldsymbol{T}$.
Exactly as for a compact Riemann surface, the concepts of a divisor, a rational function, a complete linear system, etc.~are defined on a metric graph. A morphism of metric graphs is a (finite) harmonic map. In tropical geometry, hyperelliptic metric graphs, {\it i.e.}~metric graphs with a special action by a two-element group, were investigated in detail in \cite{Haase=Musiker=Yu}. In this paper, we study a metric graph with an action by a finite group and construct a quotient metric graph in a tropical projective space as the image of a rational map. Note that in this paper, we always suppose that an action by a finite group on a metric graph is isometric. \begin{thm}[Theorem \ref{main theorem2}] \label{main theorem1} Let $\varGamma$ be a metric graph and $K$ a finite group acting on $\varGamma$. Then, there exists a rational map, from $\varGamma$ to a tropical projective space, which induces a $K$-Galois covering on the image. \end{thm} Here, the definition of a $K$-Galois covering of a metric graph is given as follows. \begin{dfn}[Definition \ref{branched $K$-Galois1}] \label{$K$-Galois} {\upshape Assume that a map $\varphi : \varGamma \rightarrow \varGamma^{\prime}$ between metric graphs and an action on $\varGamma$ by $K$ are given. Then, $\varphi$ is a {\it $K$-Galois covering} on $\varGamma^{\prime}$ if $\varphi$ is a morphism of metric graphs, the degree of $\varphi$ coincides with the order of $K$, and the action on $\varGamma$ by $K$ induces a transitive action of $K$ on every fibre. } \end{dfn} For the proof of Theorem \ref{main theorem1}, we use the following results. For a divisor $D$ on a metric graph $\varGamma$, $R(D)$ denotes the set of rational functions corresponding to the complete linear system $|D|$ together with the constant function $- \infty$ on $\varGamma$, {\it i.e.}~$R(D) := \{ f \,|\, f \text{ is a rational function other than } -\infty \text{ and } D + {\rm div}(f) \text{ is effective}\} \cup \{ - \infty \}$.
$R(D)$ becomes a tropical semimodule over $\boldsymbol{T}$ ({\cite[Lemma $4$]{Haase=Musiker=Yu}}). \begin{thm}[{\cite[Theorem $6$]{Haase=Musiker=Yu}}] \label{HMY} $R(D)$ is finitely generated. \end{thm} $|D|$ is also finitely generated since $|D|$ is identified with the projection of $R(D)$. In this paper, we prove the following equivariant version of Theorem \ref{HMY}. \begin{thm}[Remark \ref{$R(D)^K$ is a tropical semimodule.}, Theorem \ref{main theorem'} and Theorem \ref{main theorem''}] \label{$R(D)^K$ is finitely generated1.} Let $\varGamma$ be a metric graph, $K$ a finite group acting on $\varGamma$ and $D$ a $K$-invariant effective divisor on $\varGamma$. Then, the set $R(D)^K$ consisting of all $K$-invariant rational functions in $R(D)$ becomes a tropical semimodule and is finitely generated. \end{thm} We can show that the set $|D|^K$ consisting of all $K$-invariant divisors in $|D|$ is identified with the projection of $R(D)^K$. Thus, the $K$-invariant linear system $|D|^K$ is also finitely generated. Let $\phi_{|D|^K}$ be the rational map, from $\varGamma$ to a tropical projective space, associated to $|D|^K$. Then, the following holds. \begin{thm}[Theorem \ref{If $K$-injective, then $K$-Galois}] \label{Galois} $\phi_{|D|^K}$ induces a $K$-Galois covering on ${\rm Im}(\phi_{|D|^K})$ if and only if $\phi_{|D|^K}$ maps distinct $K$-orbits to distinct points. \end{thm} On each edge of ${\rm Im}(\phi_{|D|^K})$, a natural measure defined by the $\boldsymbol{Z}$-affine structure of the tropical projective space is induced. In the proof of Theorem \ref{Galois}, it is essential to show that $\phi_{|D|^K}$ is a local isometry with respect to the edge lengths defined from this measure.
In general, the rational map defined by finitely many rational functions on a metric graph need not induce a morphism of metric graphs, since the rational map need not be harmonic; Theorem \ref{Galois} states that the rational map induced by a $K$-invariant linear system does induce a morphism of metric graphs provided it satisfies the condition in Theorem \ref{Galois}. Moreover, we show the following theorem. \begin{thm}[Theorem \ref{$K$-ample1}] \label{$K$-ample} There exists a $K$-invariant effective divisor $D$ on a metric graph with an action of a finite group $K$ such that $\phi_{|D|^K}$ maps distinct $K$-orbits to distinct points. \end{thm} In conclusion, we obtain Theorem \ref{main theorem1}. In particular, when the group $K$ is trivial, we have the following corollary. \begin{cor}[Corollary \ref{embedded in a tropical projective space1}] \label{embedded in a tropical projective space} Every metric graph can be embedded in a tropical projective space by a rational map. \end{cor} For the canonical map, which is the rational map induced by the canonical linear system, Haase--Musiker--Yu \cite{Haase=Musiker=Yu} showed the following theorem. \begin{thm}[{\cite[Theorem $49$]{Haase=Musiker=Yu}}] \label{HMY2} A metric graph whose canonical map is not injective is hyperelliptic. \end{thm} In the proof of Theorem \ref{HMY2}, Haase--Musiker--Yu \cite{Haase=Musiker=Yu} explicitly described all hyperelliptic metric graphs satisfying the condition. Moreover, they showed that the converse of Theorem \ref{HMY2} does not hold and posed the problem of finding other characterizations of metric graphs with non-injective canonical maps. As answers to this problem, we give Theorem \ref{application1} and Corollary \ref{application2} as follows. \begin{thm}[Theorem \ref{canonical map1}] \label{application1} Let $\varGamma$ be a metric graph without one-valent points. 
Then, the canonical map induces a morphism which is a double covering on the image if and only if the genus of $\varGamma$ is two. \end{thm} By Theorem \ref{Galois}, Theorem \ref{HMY2} together with its proof, and Theorem \ref{application1}, we have the following. \begin{cor}[Corollary \ref{canonical map2}] \label{application2} Let $\varGamma$ be a metric graph of genus at least three without one-valent points. Then, the canonical map of $\varGamma$ is not injective if and only if the map induced by the canonical map is not harmonic. \end{cor} By using Theorem \ref{main theorem1} and the fact that, for the rational map induced by the complete linear system associated with a divisor of degree two and rank one on a metric graph, the image is a tree and the order of every fibre is one or two ({\cite[Proposition $48$]{Haase=Musiker=Yu}}), we have the following. \begin{thm}[Theorem \ref{canonical map3}] \label{double covering} For a hyperelliptic metric graph of genus at least two without one-valent points, the subsystem of the canonical linear system invariant under the hyperelliptic involution $\iota$ induces a rational map whose image is a tree and which is a $\langle \iota \rangle$-Galois covering on the image. \end{thm} Theorem \ref{double covering} gives an analogue of the classical fact that the canonical map of a hyperelliptic compact Riemann surface is a double covering: for a hyperelliptic metric graph, the analogous statement holds for the rational map induced not by the canonical linear system itself but by its subsystem invariant under the hyperelliptic involution. The length of each edge of ${\rm Im}(\phi_{|D|^K})$ in Theorem \ref{Galois} was not given in \cite{Haase=Musiker=Yu}. We show that this edge length is naturally defined, which enables us to discuss whether a rational map is harmonic or not. In this paper, we recall some basic facts about metric graphs in Section $2$. 
We prove Theorem \ref{main theorem1} and related statements in Section $3$. Metric graphs with edge-multiplicities and harmonic morphisms between them, which we need in Section $3$, are defined in Section $4$. \section{Preliminaries} In this section, we briefly recall some basic facts on tropical algebra (\cite{Akiba},\cite{Katakura}), metric graphs (\cite{Kawaguchi=Yamaki}), divisors on metric graphs (\cite{ABBR1}, \cite{Chan}, \cite{GK}, \cite{Kawaguchi=Yamaki}, \cite{MZ}), harmonic morphisms of metric graphs (\cite{Chan}, \cite{Haase=Musiker=Yu}, \cite{Kageyama}), and chip-firing moves on metric graphs (\cite{Haase=Musiker=Yu}), which we need later. \subsection{Tropical algebra} The set $\boldsymbol{T}:=\boldsymbol{R} \cup \{ - \infty \}$ with the two tropical operations: \begin{center} $a \oplus b := {\rm max}\{ a, b \}$~~~and~~~$a \odot b := a + b$, \end{center} where both $a$ and $b$ are in $\boldsymbol{T}$, becomes a semifield. $\boldsymbol{T}=(\boldsymbol{T}, \oplus, \odot)$ is called the {\it tropical semifield} and $\oplus$ (resp. $\odot$) is called the {\it tropical sum} (resp. the {\it tropical multiplication}). We frequently write $a \oplus b $ and $a \odot b$ as ``$a + b$'' and ``$ab$'', respectively. A vector $\boldsymbol{v} \in \boldsymbol{T}^n$ is {\it primitive} if all coefficients of $\boldsymbol{v}$ are integers and their greatest common divisor is one. For a vector $\boldsymbol{u} \in \boldsymbol{Q}^n$, its length is defined as the number $\lambda$ such that $\boldsymbol{u} = \lambda \boldsymbol{v}$, where $\boldsymbol{v} \in \boldsymbol{Z}^n$ is the primitive vector with the same direction as $\boldsymbol{u}$. For a vector $\boldsymbol{u} = ( u_1,\ldots, u_n ) \in \boldsymbol{T}^n$, we define the length of $\boldsymbol{u}$ as $\infty$ if each $u_i \in \boldsymbol{Q} \cup \{ -\infty \}$ and some $u_j = - \infty$. In each case, we call $\lambda$ or $\infty$ the {\it lattice length} of $\boldsymbol{u}$. 
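As a quick numerical illustration of the tropical operations and the lattice length (a small sketch of ours, not part of the original text; the function names are our own):

```python
from fractions import Fraction
from math import gcd
from functools import reduce

def trop_sum(a, b):
    # tropical sum: a ⊕ b = max(a, b)
    return max(a, b)

def trop_mul(a, b):
    # tropical multiplication: a ⊙ b = a + b
    return a + b

def lattice_length(u):
    # lattice length of a rational vector u: the factor λ ≥ 0 with u = λ·v,
    # where v is the primitive integer vector in the direction of u
    u = [Fraction(x) for x in u]
    denoms = [x.denominator for x in u]
    common = reduce(lambda a, b: a * b // gcd(a, b), denoms, 1)  # lcm of denominators
    ints = [int(x * common) for x in u]
    g = reduce(gcd, (abs(n) for n in ints))
    return Fraction(g, common)

print(trop_sum(3, 5))                  # 5
print(trop_mul(3, 5))                  # 8
print(lattice_length([2, 4, 6]))       # 2, since (2,4,6) = 2·(1,2,3)
print(lattice_length(["1/2", "3/2"]))  # 1/2, since (1/2,3/2) = (1/2)·(1,3)
```

Clearing denominators first and then taking the gcd reproduces the factor $\lambda$ in the definition above.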
For $(x_1, \ldots, x_n) \in \boldsymbol{T}^n$ and $a \in \boldsymbol{T}$, we define a scalar operation in $\boldsymbol{T}^n$ as follows: \[ ``a(x_1, \ldots, x_n)\text{''} := (``ax_1\text{''}, \ldots, ``ax_n\text{''}). \] For $\boldsymbol{x}, \boldsymbol{y} \in \boldsymbol{T}^{n + 1} \setminus \{ (- \infty, \ldots, - \infty) \}$, we define the following relation $\sim$: \begin{center} $\boldsymbol{x} \sim \boldsymbol{y} \iff$ there exists a real number $\lambda$ such that $\boldsymbol{x} =$``$\lambda \boldsymbol{y}$''. \end{center} The relation $\sim$ becomes an equivalence relation. $\boldsymbol{TP}^n := \boldsymbol{T}^{n+1} / \sim$ is called the {\it $n$-dimensional tropical projective space}. Let $\boldsymbol{u} = ( u_1 : \cdots: u_{n+1})$ and $\boldsymbol{v} = ( v_1 : \cdots: v_{n+1})$ be two distinct points on $\boldsymbol{TP}^n (n \ge 2)$. The distance between $\boldsymbol{u}$ and $\boldsymbol{v}$ is defined as ``the lattice length of $((u_1 - u_i) - (v_1 - v_i), \ldots, (u_n - u_i) - (v_n - v_i))$''$ = l \cdot {\rm gcd}((u_1 - u_i) - (v_1 - v_i), \ldots, (u_n - u_i) - (v_n - v_i))$ for some $i$ if all $u_j - u_i ,v_j - v_i$ are rational numbers, where $l$ is a positive rational number such that all $\frac{(u_j - u_i)}{l}$ and $\frac{(v_j - v_i)}{l}$ are integers. The distance between a point and itself on $\boldsymbol{TP}^n$ is defined to be zero. \begin{lemma} \label{well-definedness of length} Let $\boldsymbol{u} = ( u_1 : \cdots: u_{n+1})$ and $\boldsymbol{v} = ( v_1 : \cdots: v_{n+1})$ be two distinct points on $\boldsymbol{TP}^n (n \ge 2)$ such that for some $i$, all $u_j - u_i ,v_j - v_i$ are integers. Then, \begin{eqnarray*} &&{\rm gcd}((u_1 - u_i) - (v_1 - v_i), \ldots, (u_n - u_i) - (v_n - v_i))\\ &=& {\rm gcd}((u_1 - u_k) - (v_1 - v_k), \ldots, (u_n - u_k) - (v_n - v_k)) \end{eqnarray*} holds for any $k$. 
\end{lemma} \begin{proof} Let $l_i :={\rm gcd}((u_1 - u_i) - (v_1 - v_i), \ldots, (u_n - u_i) - (v_n - v_i))$ for each $i$. For any $k$, there are integers $m_k$ and $t_k$ such that $(u_k - u_i) - (v_k - v_i) = l_i \cdot m_k, {\rm gcd}(m_1, \ldots, m_n)=1$, $(u_k - u_j) - (v_k - v_j) = l_j \cdot t_k$ and ${\rm gcd}(t_1, \ldots, t_n)=1$. Since $(u_k - u_i) - (v_k - v_i) = (u_k - v_k) - (u_i - v_i)$, \begin{eqnarray*} l_j &=& {\rm gcd}((u_1 - u_j) - (v_1 - v_j), \ldots, (u_n - u_j) - (v_n - v_j))\\ &=& {\rm gcd}((u_1 - v_1) - (u_j - v_j), \ldots, (u_n - v_n) - (u_j - v_j))\\ &=& {\rm gcd}((u_i - v_i) + l_i \cdot m_1 - (u_j - v_j), \ldots, (u_i - v_i) + l_i \cdot m_n - (u_j - v_j))\\ &=& {\rm gcd}((u_i - u_j) - (v_i - v_j) + l_i \cdot m_1, \ldots, (u_i - u_j) - (v_i - v_j) + l_i \cdot m_n)\\ &=& {\rm gcd}(l_j \cdot t_i + l_i \cdot m_1, \ldots, l_j \cdot t_i + l_i \cdot m_n). \end{eqnarray*} Then $l_j$ must divide $l_i \cdot m_1, \ldots, l_i \cdot m_n$. As ${\rm gcd}(m_1, \ldots, m_n)=1$, $l_j$ divides $l_i$. Thus $l_j \le l_i$. The reverse inequality also holds, since $i$ and $j$ are arbitrary. \end{proof} By Lemma \ref{well-definedness of length}, the above distance between two points of $\boldsymbol{TP}^n$ satisfying the condition is well-defined. A {\it tropical semimodule} over $\boldsymbol{T}$ is defined analogously to a classical module over a ring. Note that a tropical semimodule over $\boldsymbol{T}$ has two tropical operations: the tropical sum $\oplus$ and the tropical scalar multiplication $\odot$. Let $R$ and $R^{\prime}$ be tropical semimodules over $\boldsymbol{T}$. A map $f : R \rightarrow R^{\prime}$ is called a {\it homomorphism} if for any $a, b \in R$ and $\lambda \in \boldsymbol{T}$, $f(a \oplus b) = f(a) \oplus f(b)$ and $f(\lambda \odot a) = \lambda \odot f(a)$ hold. 
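The independence of the base coordinate asserted by Lemma \ref{well-definedness of length} can be checked numerically; the following is a small sketch of ours for integer coordinates (names and data are assumptions for illustration):

```python
from math import gcd
from functools import reduce

def tp_distance(u, v, i):
    # distance in TP^n computed with base coordinate i:
    # gcd of the entries (u_j - u_i) - (v_j - v_i), taken over j != i
    diffs = [(u[j] - u[i]) - (v[j] - v[i]) for j in range(len(u)) if j != i]
    return reduce(gcd, (abs(d) for d in diffs))

u = (0, 2, 6, 3)
v = (0, 4, 2, 7)
# by the lemma, the result does not depend on the chosen base coordinate i
print({tp_distance(u, v, i) for i in range(4)})  # {2}
```

All four choices of base coordinate give the same gcd, as the lemma predicts.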
For a homomorphism $f : R \rightarrow R^{\prime}$ of tropical semimodules, $f$ is an {\it isomorphism} if there exists a homomorphism $f^{\prime} : R^{\prime} \rightarrow R$ of tropical semimodules such that $f^{\prime} \circ f = {\rm id}_R$ and $f \circ f^{\prime} = {\rm id}_{R^{\prime}}$. Then, $f^{\prime}$ is also an isomorphism. Two tropical semimodules $R$ and $R^{\prime}$ are {\it isomorphic} if there exists an isomorphism of tropical semimodules between them. \subsection{Metric graphs} In this paper, a {\it graph} means an unweighted, finite, connected, nonempty multigraph. Note that we allow the existence of loops. For a graph $G$, the sets of vertices and edges are denoted by $V(G)$ and $E(G)$, respectively. The {\it genus} of $G$ is defined by $g(G):=|E(G)|-|V(G)|+1$. The {\it valence} ${\rm val}(v)$ of a vertex $v$ of $G$ is the number of edges emanating from $v$, where we count each loop as two. A vertex $v$ of $G$ is a {\it leaf end} if $v$ has valence one. A {\it leaf edge} is an edge of $G$ adjacent to a leaf end. An {\it edge-weighted graph} $(G, l)$ is the pair of a graph $G$ and a function $l: E(G) \rightarrow {\boldsymbol{R}}_{>0} \cup \{\infty\}$, called a {\it length function}, where $l$ can take the value $\infty$ only on leaf edges. A {\it metric graph} is the underlying $\infty$-metric space of an edge-weighted graph $(G, l)$, where each edge $e$ of $G$ is identified with the closed interval $[0,l(e)]$, and if $l(e)=\infty$, then the leaf end of $e$ must be identified with $\infty$. Such a leaf end identified with $\infty$ is called a {\it point at infinity} and any other point is said to be a {\it finite point}. For the above metric graph $\varGamma$, $(G, l)$ is called a {\it model} of $\varGamma$. There are many possible models for $\varGamma$. We construct a model $(G_{\circ}, l_{\circ})$, called the {\it canonical model} of $\varGamma$, as follows. 
In general, we set $V(G_{\circ}):= \{ x \in \varGamma~|~{\rm val}(x) \neq 2 \}$, where the {\it valence} ${\rm val}(x)$ of $x$ is the number of connected components of $U \setminus \{ x \}$ for any sufficiently small connected neighborhood $U$ of $x$ in $\varGamma$, except in the following two cases. When $\varGamma$ is a circle, we take $V(G_{\circ})$ to be the set consisting of one arbitrary point on $\varGamma$. When $\varGamma$ is the $\infty$-metric space obtained from the graph consisting only of two edges of length $\infty$ and three vertices adjacent to these edges, $V(G_{\circ})$ consists of the two endpoints of $\varGamma$ (these are points at infinity) and any one point on $\varGamma$ as the origin. The connected components of $\varGamma \setminus V(G_{\circ})$ are open intervals, whose lengths determine the length function $l_{\circ}$. If a model $(G, l)$ of $\varGamma$ has no loops, then $(G, l)$ is said to be a {\it loopless model} of $\varGamma$. For a model $(G, l)$ of $\varGamma$, the loopless model for $(G, l)$ is obtained by regarding all midpoints of loops of $G$ as vertices and adding them to the set of vertices of $G$. The loopless model for the canonical model of a metric graph is called the {\it canonical loopless model}. As for terminology, an edge of a metric graph $\varGamma$ means an edge of the underlying graph $G_{\circ}$ of the canonical model $(G_{\circ}, l_{\circ})$. Let $e$ be an edge of $\varGamma$ which is not a loop. We regard $e$ as a closed subset of $\varGamma$, {\it i.e.} including the endpoints $v_1, v_2$ of $e$. The {\it relative interior} of $e$ is $e^{\circ} = e \setminus \{ v_1, v_2 \}$. For a point $x$ on $\varGamma$, a connected component of $U \setminus \{ x \}$ for any sufficiently small connected neighborhood $U$ of $x$ is called a {\it half-edge} of $x$. For a model $(G, l)$ of a metric graph $\varGamma$, we frequently identify a vertex $v$ (resp. 
an edge $e$) of $G$ with the point corresponding to $v$ on $\varGamma$ (resp. the closed subset corresponding to $e$ of $\varGamma$). The {\it genus} $g(\varGamma)$ of a metric graph $\varGamma$ is defined to be its first Betti number; one can check that it is equal to $g(G)$ for any model $(G, l)$ of $\varGamma$. A metric graph of genus zero is called a {\it tree}. \subsection{Divisors on metric graphs} Let $\varGamma$ be a metric graph. An element of the free abelian group ${\rm Div}(\varGamma)$ generated by points on $\varGamma$ is called a {\it divisor} on $\varGamma$. For a divisor $D$ on $\varGamma$, its {\it degree} ${\rm deg}(D)$ is defined as the sum of the coefficients over all points on $\varGamma$. We write the coefficient at $x$ as $D(x)$. A divisor $D$ on $\varGamma$ is said to be {\it effective} if $D(x) \ge 0$ for any $x$ in $\varGamma$. If $D$ is effective, we write simply $D \ge 0$. For an effective divisor $D$ on $\varGamma$, the set of points on $\varGamma$ where the coefficient of $D$ is nonzero is called the {\it support} of $D$ and written as ${\rm supp}(D)$. The {\it canonical divisor} $K_{\varGamma}$ of $\varGamma$ is defined as $K_{\varGamma} := \sum_{x \in \varGamma}({\rm val}(x) - 2) \cdot x$. A {\it rational function} on $\varGamma$ is either the constant function $-\infty$ or a piecewise linear function with integer slopes and with a finite number of pieces, taking the value $\pm \infty$ only at points at infinity. ${\rm Rat}(\varGamma)$ denotes the set of rational functions on $\varGamma$. For a point $x$ on $\varGamma$ and $f$ in ${\rm Rat}(\varGamma)$ which is not the constant $-\infty$, the sum of the outgoing slopes of $f$ at $x$ is denoted by ${\rm ord}_x(f)$. If $x$ is a point at infinity and $f$ is infinite there, we define ${\rm ord}_x(f)$ as the outgoing slope of $f$ on any sufficiently small connected neighborhood of $x$. Note that when $\varGamma$ is a singleton, for any $f$ in ${\rm Rat}(\varGamma)$, we define ${\rm ord}_x(f) := 0$. 
This sum is $0$ for all but finitely many points on $\varGamma$, and thus \[ {\rm div}(f):=\sum_{x \in \varGamma}{\rm ord}_x(f) \cdot x \] is a divisor on $\varGamma$, which is called the {\it principal divisor} defined by $f$. Two divisors $D$ and $E$ on $\varGamma$ are said to be {\it linearly equivalent} if $D-E$ is a principal divisor. We handle the values $\infty$ and $-\infty$ as follows. Let $f, g$ in ${\rm Rat}(\varGamma)$ take the values $\infty$ and $-\infty$, respectively, at a point $x$ at infinity on $\varGamma$. When ${\rm ord}_x(f) + {\rm ord}_x(g)$ is negative, then $(f \odot g)(x) := \infty$. When ${\rm ord}_x(f) + {\rm ord}_x(g)$ is positive, then $(f \odot g)(x) := -\infty$. Note that the constant function $-\infty$ on $\varGamma$ does not determine a principal divisor. For a divisor $D$ on $\varGamma$, the {\it complete linear system} $|D|$ is defined as the set of effective divisors on $\varGamma$ linearly equivalent to $D$. For a divisor $D$ on a metric graph, let $R(D)$ be the set of rational functions $f \not\equiv -\infty$ such that $D + {\rm div}(f)$ is effective, together with the constant function $-\infty$. When ${\rm deg}(D)$ is negative, $|D|$ is empty, and so is $R(D)$. Otherwise, by the argument in Section $3$ of \cite{Haase=Musiker=Yu}, $|D|$ is not empty, and consequently neither is $R(D)$. Hereafter, we treat only divisors of nonnegative degree. \begin{rem}[\cite{Song} and cf. {\cite[Lemma 4]{Haase=Musiker=Yu}}] \label{R(D) is tropical semimodule} \upshape{ $R(D)$ becomes a tropical semimodule over $\boldsymbol{T}$ by extending the above tropical operations to functions, with pointwise sum and product. } \end{rem} For a tropical subsemimodule $M$ of $(\boldsymbol{R} \cup \{ \pm \infty \})^{\varGamma}$ (or of $\boldsymbol{R}^{\varGamma}$), $f$ in $M$ is called an {\it extremal of} $M$ if, whenever $g_1$ and $g_2$ in $M$ satisfy $f = g_1 \oplus g_2$, we have $f = g_1$ or $f = g_2$. 
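As a toy illustration of principal divisors (our own sketch, not from the text): for a piecewise linear function on a single segment, ${\rm ord}_x(f)$ at each breakpoint is the sum of the outgoing slopes, and the total degree of ${\rm div}(f)$ is zero.

```python
def div_on_segment(breakpoints, slopes):
    """Principal divisor of a piecewise linear function on a segment.

    breakpoints: increasing coordinates x_0 < x_1 < ... < x_m (endpoints included)
    slopes: integer slope of f on each piece [x_j, x_{j+1}]
    Returns {x: ord_x(f)} with zero coefficients omitted.
    """
    div = {}
    for j, x in enumerate(breakpoints):
        order = 0
        if j < len(slopes):
            order += slopes[j]        # outgoing slope to the right
        if j > 0:
            order -= slopes[j - 1]    # outgoing slope to the left is minus the incoming slope
        if order != 0:
            div[x] = order
    return div

# f has slope 1 on [0,1] and slope -1 on [1,2]: a "tent" function
d = div_on_segment([0, 1, 2], [1, -1])
print(d)                # {0: 1, 1: -2, 2: 1}
print(sum(d.values()))  # 0: a principal divisor has degree zero
```

The tent function contributes $+1$ at each endpoint and $-2$ at its peak, so the coefficients sum to zero, as they must for any principal divisor.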
\begin{rem}[\cite{Song}] \upshape{ \label{generator is extremal} Any finitely generated tropical subsemimodule $M$ of $R(D) \subset (\boldsymbol{R} \cup \{ \pm \infty \})^{\varGamma}$ is generated by the extremals of $M$. } \end{rem} For a divisor $D$ on a metric graph $\varGamma$, the {\it rank} $r(D)$ of $D$ is defined as the minimum integer $s$ such that the complete linear system associated to $D - E$ is empty for some effective divisor $E$ of degree $s + 1$. A Riemann--Roch theorem for finite loopless graphs was established by Baker--Norine (\cite{Baker=Norine1}). A Riemann--Roch theorem for metric graphs was proven independently by Gathmann--Kerber (\cite{GK}) and by Mikhalkin--Zharkov (\cite{MZ}). \begin{rem}[Riemann--Roch theorem for metric graphs] {\upshape Let $\varGamma$ be a metric graph and $D$ a divisor on $\varGamma$. Then, $r(D) - r(K_{\varGamma} - D) = {\rm deg}(D) + 1 - g(\varGamma)$ holds. } \end{rem} Let $\varGamma$ be a metric graph of genus at least two. $\varGamma$ is {\it hyperelliptic} if there exists a divisor on $\varGamma$ of degree two and rank one. An action on $\varGamma$ by a group of order two whose quotient is a tree is called a {\it hyperelliptic involution} of $\varGamma$. Chan (\cite{Chan}), Amini--Baker--Brugall\'{e}--Rabinoff (\cite{ABBR1}) and Kawaguchi--Yamaki (\cite{Kawaguchi=Yamaki}) investigated hyperelliptic metric graphs. \begin{rem}[{\cite[Theorem 5]{Kawaguchi=Yamaki}}] {\upshape Let $\varGamma$ be a metric graph of genus at least two without one-valent points. Then, the following are equivalent: \begin{itemize} \item[$(1)$] $\varGamma$ is hyperelliptic; \item[$(2)$] $\varGamma$ has a hyperelliptic involution. \end{itemize} Furthermore, a hyperelliptic involution, if it exists, is unique. } \end{rem} \subsection{Harmonic morphisms} Let $\varGamma$ and $\varGamma^{\prime}$ be metric graphs, and let $\varphi : \varGamma \rightarrow \varGamma^{\prime}$ be a continuous map. 
The map $\varphi$ is called a {\it morphism} if there exist a model $(G, l)$ of $\varGamma$ and a model $(G^{\prime}, l^{\prime})$ of $\varGamma^{\prime}$ such that the image of the set of vertices of $G$ under $\varphi$ is contained in the set of vertices of $G^{\prime}$, the inverse image under $\varphi$ of the relative interior of any edge of $G^{\prime}$ is a union of the relative interiors of finitely many edges of $G$, and the restriction of $\varphi$ to any edge $e$ of $G$ is a dilation by some nonnegative integer factor ${\rm deg}_e(\varphi)$. Note that the dilation factor on $e$ with ${\rm deg}_e(\varphi) \ne 0$ is the ratio of the distance between the images of any two finite points $x$ and $y$ on $e$ to the distance between $x$ and $y$. If an edge $e$ is mapped to a vertex of $G^{\prime}$ by $\varphi$, then ${\rm deg}_e(\varphi) = 0$. The morphism $\varphi$ is said to be {\it finite} if ${\rm deg}_e(\varphi) > 0$ for any edge $e$ of $G$. For any half-edge $h$ of any point on $\varGamma$, we define ${\rm deg}_h(\varphi)$ as ${\rm deg}_e(\varphi)$, where $e$ is the edge of $G$ containing $h$. Suppose that $\varGamma^{\prime}$ is not a singleton and let $x$ be a point on $\varGamma$. The morphism $\varphi$ is {\it harmonic at} $x$ if the number \[ {\rm deg}_x(\varphi) := \sum_{x \in h \mapsto h^{\prime}}{\rm deg}_h(\varphi) \] is independent of the choice of half-edge $h^{\prime}$ emanating from $\varphi(x)$, where $h$ runs over the connected components, containing $x$, of the inverse image of $h^{\prime}$ by $\varphi$. The morphism $\varphi$ is {\it harmonic} if it is harmonic at all points on $\varGamma$. One can check that if $\varphi$ is a finite harmonic morphism, then the number \[ {\rm deg}(\varphi) := \sum_{x \mapsto x^{\prime}}{\rm deg}_x(\varphi) \] is independent of the choice of a point $x^{\prime}$ on $\varGamma^{\prime}$, where $x$ runs over the inverse image of $x^{\prime}$ by $\varphi$; it is called the {\it degree} of $\varphi$. 
If $\varGamma^{\prime}$ is a singleton and $\varGamma$ is not a singleton, then for any point $x$ on $\varGamma$ we define ${\rm deg}_x(\varphi)$ as zero, so that we regard $\varphi$ as a harmonic morphism of degree zero. If both $\varGamma$ and $\varGamma^{\prime}$ are singletons, we regard $\varphi$ as a harmonic morphism of arbitrary degree. The collection of metric graphs together with harmonic morphisms between them forms a category. Let $\varphi : \varGamma \rightarrow \varGamma^{\prime}$ be a finite harmonic morphism between metric graphs. The {\it pull-back} of $f^{\prime}$ in ${\rm Rat}(\varGamma^{\prime})$ is the function $\varphi^{\ast}f^{\prime} : \varGamma \rightarrow \boldsymbol{R} ~\cup \{ \pm \infty \}$ defined by $\varphi^{\ast}f^{\prime} := f^{\prime} \circ \varphi$. We define the {\it push-forward homomorphism} on divisors $\varphi_\ast : {\rm Div}(\varGamma) \rightarrow {\rm Div}(\varGamma^{\prime})$ by \[ \varphi_\ast (D) := \sum_{x \in \varGamma}D(x) \cdot \varphi(x). \] The {\it pull-back homomorphism} on divisors $\varphi^{\ast}:{\rm Div}(\varGamma^{\prime}) \rightarrow {\rm Div}(\varGamma)$ is defined to be \[ \varphi^{\ast} (D^{\prime}) := \sum_{x \in \varGamma}{\rm deg}_x(\varphi) \cdot D^{\prime}(\varphi(x)) \cdot x. \] One can check that ${\rm deg}(\varphi_\ast(D)) = {\rm deg}(D)$ and $\varphi^{\ast} ({\rm div}(f^{\prime})) = {\rm div}(\varphi^{\ast} f^{\prime})$ for any divisor $D$ on $\varGamma$ and any $f^{\prime}$ in ${\rm Rat}(\varGamma^{\prime})^{\times}$ (cf. \cite[Proposition 4.2]{Baker=Norine}). 
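To illustrate the push-forward and pull-back of divisors, here is a toy sketch of ours using a combinatorial stand-in for a degree-two cover (the names, the cover and the local degrees are assumptions for illustration, not data from the text):

```python
def push_forward(D, phi):
    # φ_*(D) = Σ D(x)·φ(x), a divisor on the target
    out = {}
    for x, c in D.items():
        y = phi[x]
        out[y] = out.get(y, 0) + c
    return {y: c for y, c in out.items() if c != 0}

def pull_back(Dp, phi, local_deg):
    # φ*(D') = Σ deg_x(φ)·D'(φ(x))·x, a divisor on the source
    out = {}
    for x, y in phi.items():
        c = local_deg[x] * Dp.get(y, 0)
        if c != 0:
            out[x] = c
    return out

# a degree-2 cover of {p, q}: the fibre over p is {a, b}, and the fibre
# over q is a single ramification point r with local degree 2
phi = {"a": "p", "b": "p", "r": "q"}
local_deg = {"a": 1, "b": 1, "r": 2}

D = {"a": 1, "r": 3}
print(push_forward(D, phi))                 # {'p': 1, 'q': 3}: degree is preserved
print(pull_back({"p": 1}, phi, local_deg))  # {'a': 1, 'b': 1}
print(pull_back({"q": 1}, phi, local_deg))  # {'r': 2}
```

In both pull-backs the total coefficient is $2 = \deg(\varphi)$, matching the fact that a pull-back multiplies degrees by the degree of the morphism.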
\subsection{Chip-firing moves} In \cite{Haase=Musiker=Yu}, Haase, Musiker and Yu used the term {\it subgraph} of a metric graph to mean a compact subset of the metric graph with finitely many connected components, and defined the {\it chip firing move} ${\rm CF}(\widetilde{\varGamma_1}, l)$ by a subgraph $\widetilde{\varGamma_1}$ of a metric graph $\widetilde{\varGamma}$ and a positive real number $l$ as the rational function ${\rm CF}(\widetilde{\varGamma_1}, l)(x) := - {\rm min}(l, {\rm dist}(x, \widetilde{\varGamma_1}))$, where ${\rm dist}(x, \widetilde{\varGamma_1})$ is the infimum of the lengths of paths from $x$ to points on $\widetilde{\varGamma_1}$. Using the concept of a {\it weighted chip firing move}, they proved that every rational function on a metric graph is an (ordinary) sum of chip firing moves (plus a constant) ({\cite[Lemma 2]{Haase=Musiker=Yu}}). A weighted chip firing move is a rational function on a metric graph having two disjoint proper subgraphs $\widetilde{\varGamma_1}$ and $\widetilde{\varGamma_2}$ such that the complement of the union of $\widetilde{\varGamma_1}$ and $\widetilde{\varGamma_2}$ in $\widetilde{\varGamma}$ consists only of open line segments, and such that the rational function is constant on $\widetilde{\varGamma_1}$ and $\widetilde{\varGamma_2}$ and linear (smooth) with integer slopes on the complement. A weighted chip firing move is an (ordinary) sum of chip firing moves (plus a constant) ({\cite[Lemma 1]{Haase=Musiker=Yu}}). In the presence of unbounded edges, their definition of chip firing moves needs a slight correction. Let $\varGamma_1$ be a subgraph of a metric graph $\varGamma$ which does not have any connected components consisting only of points at infinity, and let $l$ be a positive real number or infinity. The {\it chip firing move} by $\varGamma_1$ and $l$ is defined as the rational function ${\rm CF}(\varGamma_1, l)(x) := - {\rm min}(l, {\rm dist}(x, \varGamma_1))$. 
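A minimal numerical sketch of a chip firing move, assuming for illustration the segment $[0,3]$ with $\varGamma_1 = \{0\}$ (so that ${\rm dist}(x, \varGamma_1) = x$); the function names are ours:

```python
def chip_firing_move(dist_to_subgraph, l):
    # CF(Γ₁, l)(x) = -min(l, dist(x, Γ₁)); returned as a function of x
    return lambda x: -min(l, dist_to_subgraph(x))

# on the segment [0, 3] with Γ₁ = {0}, the distance to Γ₁ is just x
f = chip_firing_move(lambda x: x, 1.0)
print([f(x) for x in (0.0, 0.5, 1.0, 2.0, 3.0)])  # [0.0, -0.5, -1.0, -1.0, -1.0]
```

The resulting function has slope $-1$ on $[0,1]$ and slope $0$ on $[1,3]$: a piecewise linear function with integer slopes, as required of a rational function.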
\begin{rem}[\cite{Song}] \upshape{ A weighted chip firing move on a metric graph is a linear combination of chip firing moves with integer coefficients (plus a constant). } \end{rem} \begin{rem}[\cite{Song}] \upshape{ Every rational function on a metric graph is a linear combination of chip firing moves with integer coefficients (plus a constant). } \end{rem} A point on $\varGamma$ with valence two is called a {\it smooth} point. We sometimes refer to an effective divisor $D$ on $\varGamma$ as a {\it chip configuration}. We say that a subgraph $\varGamma_1$ of $\varGamma$ can {\it fire on} $D$ if each boundary point of $\varGamma_1$ carries at least as many chips as the number of edges pointing out of $\varGamma_1$ at that point. A set of points on a metric graph $\varGamma$ is said to be a {\it cut set} of $\varGamma$ if the complement of that set in $\varGamma$ is disconnected. \section{Rational maps induced by $|D|^K$} In this section, our main concern is the rational map induced by an invariant linear system on a metric graph with an action by a finite group $K$. We find a condition under which the rational map induces a $K$-Galois covering on the image. \subsection{Generators of $R(D)^K$} In this subsection, for an effective divisor $D$ on a metric graph and a finite group $K$ acting on the metric graph, we give two proofs, different from that in \cite{Song}, of the statement that the set $R(D)^K$ of $K$-invariant elements of $R(D)$ is finitely generated as a tropical semimodule. When $D$ is $K$-invariant, $R(D)^K/\boldsymbol{R}$ is identified with the subset $|D|^K$ of $|D|$ consisting of all $K$-invariant elements of $|D|$, so the $K$-invariant linear system $|D|^K$ is finitely generated by the generating set of $R(D)^K$ modulo tropical scaling (except by $-\infty$). 
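The tropical symmetrization $g = ``\sum_{\sigma \in K} f \circ \sigma\text{''}$ used in Theorem \ref{main theorem''} below is simply a pointwise maximum over the $K$-translates of $f$. Here is a small sketch of ours on a discrete set with a cyclic action (the data are assumptions for illustration):

```python
def symmetrize(f, group_actions):
    # tropical sum over the group: g(x) = max_σ f(σ(x)); g is K-invariant
    return lambda x: max(f(sigma(x)) for sigma in group_actions)

# the cyclic group of order 3 acting on {0, 1, 2} by rotation
K = [lambda x, k=k: (x + k) % 3 for k in range(3)]

f = {0: 5, 1: -2, 2: 7}.get          # an arbitrary function on {0, 1, 2}
g = symmetrize(f, K)
print([g(x) for x in range(3)])      # [7, 7, 7]: constant on the orbit
```

Since every point of $\{0,1,2\}$ lies in a single orbit, the symmetrized function is constant there, i.e.\ invariant under the action.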
\begin{rem}[{\cite[Lemma 6]{Haase=Musiker=Yu}}] \label{finitely generation of R(D)} {\upshape Let $\widetilde{\varGamma}$ be a metric graph, $\widetilde{D}$ a divisor on $\widetilde{\varGamma}$ and $S$ the set of rational functions $f$ in $R(\widetilde{D})$ such that the support of $\widetilde{D} + {\rm div}(f)$ does not contain any cut set of $\widetilde{\varGamma}$ consisting only of smooth points. Then \begin{itemize} \item[$(1)$] $S$ contains all the extremals of $R(\widetilde{D})$, \item[$(2)$] $S$ is finite modulo tropical scaling (except by $-\infty$), and \item[$(3)$] $S$ generates $R(\widetilde{D})$ as a tropical semimodule. \end{itemize} } \end{rem} \begin{rem}[{\cite[Theorem 14]{Haase=Musiker=Yu}}] \label{finitely generation of R(D)2} {\upshape Let $G$ be a model of $\widetilde{\varGamma}$ and let $S_G$ be the set of functions $f \in R(\widetilde{D})$ such that the support of $\widetilde{D}+{\rm div}(f)$ does not contain an interior cut set ({\it i.e.} a cut set consisting of points in the interiors of edges of the model $G$). Then \begin{itemize} \item[$(1)$] $S_G$ contains the set $S$ from Remark \ref{finitely generation of R(D)}, and \item[$(2)$] $S_G$ is finite modulo tropical scaling (except by $-\infty$). \end{itemize} } \end{rem} Though in the above remarks $R(\widetilde{D})$ is assumed to be a subset of $\boldsymbol{R}^{\widetilde{\varGamma}}$, the proofs apply even in the case where $R(\widetilde{D})$ is a subset of $(\boldsymbol{R} \cup \{ \pm \infty \})^{\widetilde{\varGamma}}$, with the preparations in Section 2. Also, since the above remarks emphasize the dependence of $S$ (resp. $S_G$) on $\widetilde{D}$, hereafter we write $S(\widetilde{D})$ (resp. $S_G(\widetilde{D})$) for $S$ (resp. $S_G$). Next, for $R(D)^K$, the following holds. \begin{rem}[\cite{Song}] \upshape{ \label{$R(D)^K$ is a tropical semimodule.} $R(D)^K$ is a tropical semimodule. 
} \end{rem} Note that $R(D + {\rm div}(f))^K = R(D)^K \odot (-f)$ for any $K$-invariant rational function $f$. The following is an extension of {\cite[Lemma 5]{Haase=Musiker=Yu}}. \begin{rem}[\cite{Song}] \label{condition to be extremal} \upshape{ Let $f$ be in ${\rm Rat}(\varGamma)$. Then, $f$ is an extremal of $R(D)^K$ if and only if there do not exist two proper $K$-invariant subgraphs $\varGamma_1$ and $\varGamma_2$ covering $\varGamma$ such that each can fire on $D + {\rm div}(f)$. } \end{rem} If $D$ is $K$-invariant, $R(D)^K/\boldsymbol{R}$ is naturally identified with the subset $|D|^K$ of $|D|$ consisting of all $K$-invariant elements of $|D|$. In fact, let $D$ be a $K$-invariant effective divisor on $\varGamma$. For any $D^{\prime} \in |D|^K$, there exists $f \in R(D)$ such that $D^{\prime}=D + {\rm div}(f)$. Since both $D$ and $D^{\prime}$ are $K$-invariant, $D^{\prime} = D + {\rm div}(f \circ \sigma)$ for any $\sigma \in K$. Thus $0 = {\rm div}(f) - {\rm div}(f \circ \sigma) = {\rm div}(f - f \circ \sigma)$, and by Liouville's theorem there exists $c \in \boldsymbol{R}$ such that $f - f \circ \sigma = c$, {\it i.e.} $f = c \odot f \circ \sigma$. Since the order $k$ of $\sigma$ is finite, iterating gives $f = c \odot \cdots \odot c \odot f$, where $c$ appears $k$ times. As $k$ is not zero, $c$ must be zero. Therefore $f$ is $K$-invariant, and hence $f \in R(D)^K$. Conversely, any $[g] \in R(D)^K / \boldsymbol{R}$ corresponds to the element $D + {\rm div}(g)$ in $|D|^K$. Let $\varGamma$ be a metric graph, $K$ a finite group acting on $\varGamma$ and $D$ an effective divisor on $\varGamma$. In this subsection, we prove in two ways, different from that in Subsection $4.1$, that $R(D)^K$ is finitely generated as a tropical semimodule. In one of them, we reduce arguments about effective divisors to arguments about $K$-invariant divisors in order to use the condition on generators of $R(D)$ found by Haase, Musiker and Yu. 
In the other, we find generators of $R(D)^K$ in an algebraic way. From the latter proof, we know that the number of generators of $R(D)^K$ is not greater than that of $R(D)$. Let $D_1$ be the maximal $K$-invariant part of $D$, {\it i.e.} $D_1 := \sum_{x \in \varGamma}{\rm min}_{x^{\prime} \in Kx}\{D(x^{\prime})\} \cdot x$. By the definition, both $D_1$ and the remaining part $D_2 := D - D_1$ are effective. \begin{lemma} \label{important lemma} $R(D)^K = R(D_1)^K$ holds. \end{lemma} \begin{proof} $D_1 + {\rm div}(f_1) \ge 0$ holds for any element $f_1$ of $R(D_1)^K$. Therefore $D + {\rm div}(f_1) = D_2 + (D_1 + {\rm div}(f_1)) \ge 0$, {\it i.e.} $f_1 \in R(D)^K$. Conversely, for an arbitrary element $f$ of $R(D)^K$, the set of poles of $f$ is $K$-invariant, as $f$ is $K$-invariant, and this set is contained in the support of $D_1$. This means $D_1 + {\rm div}(f) \ge 0$. In fact, if $D_1 + {\rm div}(f) \ge 0$ fails, then there exists a point $x$ on $\varGamma$ whose orbit by $K$ is contained in the set where $D_1 + {\rm div}(f)$ is negative, and at a point $x^{\prime}$ of this orbit where $D(x^{\prime})$ attains its minimum over the orbit we have $D_2(x^{\prime}) = 0$. Therefore $0 > (D_2 + (D_1 + {\rm div}(f)))(x^{\prime}) = (D + {\rm div}(f))(x^{\prime})$, which contradicts $f \in R(D)^K$. \end{proof} By Lemma \ref{important lemma}, we can prove $(1)$ and $(3)$ of the following theorem in the same way as the proof of {\cite[Lemma 6]{Haase=Musiker=Yu}}, and $(2)$ clearly holds by {\cite[Lemma 6]{Haase=Musiker=Yu}}. \begin{thm} \label{main theorem'} In the above situation, the following hold\,: \begin{itemize} \item[$(1)$] $S_G(D_1)^K$ contains all the extremals of $R(D_1)^K$, \item[$(2)$] $S_G(D_1)^K$ is finite modulo tropical scaling (except by $-\infty)$, and \item[$(3)$] $S_G(D_1)^K$ generates $R(D_1)^K$ as a tropical semimodule. \end{itemize} \end{thm} The following theorem is proved in a purely algebraic way, from a stacky point of view. \begin{thm} \label{main theorem''} Let $\varGamma$ be a metric graph, $K$ a finite group acting on $\varGamma$ and $D$ a $K$-invariant effective divisor on $\varGamma$. 
For a minimal generating set $\{ f_1, \ldots, f_n \}$ of $R(D)$, $\{ g_1, \ldots, g_n \}$ is a generating set of $R(D)^K$, where $g_i := ``\sum_{\sigma \in K}f_i \circ \sigma\text{''}$. \end{thm} \begin{proof} For any $i$ and $\tau \in K$, \[ g_i \circ \tau = `` \left( \sum_{\sigma \in K}f_i \circ \sigma \right) \circ \tau\text{''} = ``\sum_{\sigma \in K}f_i \circ (\sigma \circ \tau)\text{''} = ``\sum_{\sigma \in K}f_i \circ \sigma\text{''} = g_i \] and \[ 0 \leq \tau (D + {\rm div}(f_i)) = \tau(D) + {\rm div}(f_i \circ \tau) = D + {\rm div}(f_i \circ \tau) \] hold. Thus, each $g_i$ is in $R(D)^K$. For any element $g=``\sum_{i=1}^{n}a_i f_i\text{''} \in R(D)^K$, \[ g = ``\sum_{\sigma \in K}g \circ \sigma\text{''} = ``\sum_{\sigma \in K} \left( \sum_{i=1}^{n} a_i f_i \right) \circ \sigma\text{''} = ``\sum_{i=1}^{n} a_i \left( \sum_{\sigma \in K} f_i \circ \sigma \right) \text{''} = ``\sum_{i=1}^{n} a_i g_i\text{''}. \] Hence $\{ g_1, \ldots, g_n \}$ generates $R(D)^K$. \end{proof} \begin{cor} Let $\varGamma$ be a metric graph, $K$ a finite group acting on $\varGamma$ and $D$ an effective divisor on $\varGamma$. For a minimal generating set $\{ f_1, \ldots, f_n \}$ of $R(D_1)$, $\{ g_1, \ldots, g_n \}$ is a generating set of $R(D)^K=R(D_1)^K$, where $g_i := ``\sum_{\sigma \in K}f_i \circ \sigma\text{''}$. \end{cor} \begin{rem} {\upshape $\{ g_1, \ldots, g_n \}$ is not always minimal. Using Lemma \ref{condition to be extremal}, we can obtain a minimal generating set by omitting from $\{ g_1 , \ldots, g_n \}$ the elements that are not extremals. } \end{rem} \subsection{Galois covering on metric graphs} Let $\varGamma$ be a metric graph and $K$ a finite group acting on $\varGamma$. We define $V_1(\varGamma)$ as the set of points $x$ on $\varGamma$ such that every neighborhood of $x$ contains a point $y$ whose stabilizer is not equal to that of $x$. \begin{rem}[\cite{Song}] \upshape{ \label{$V_1$ is finite} $V_1(\varGamma)$ is a finite set.
} \end{rem} We set $(G_0, l_0)$ as the canonical loopless model of $\varGamma$. By Lemma \ref{$V_1$ is finite}, we obtain the model $(\widetilde{G_1}, \widetilde{l_1})$ of $\varGamma$ by setting the $K$-orbit of the union of $V(G_0)$ and $V_1(\varGamma)$ as the set of vertices $V(\widetilde{G_1})$. Naturally, $K$ acts on $V(\widetilde{G_1})$ and also on $E(\widetilde{G_1})$. Thus, the sets $V(\widetilde{G^{\prime}})$ and $E(\widetilde{G^{\prime}})$ are defined as the quotient sets of $V(\widetilde{G_1})$ and $E(\widetilde{G_1})$ by $K$, respectively. Let $\widetilde{G^{\prime}}$ be the graph obtained by setting $V(\widetilde{G^{\prime}})$ as the set of vertices and $E(\widetilde{G^{\prime}})$ as the set of edges. Since $\widetilde{G_1}$ is connected, $\widetilde{G^{\prime}}$ is also connected. We obtain the loopless graph $G^{\prime}$ from $\widetilde{G^{\prime}}$ and the loopless model $(G_1, l_1)$ of $\varGamma$ from the inverse image of $V(G^{\prime})$ by the quotient map. Note that $V(G_1)$ contains $V(\widetilde{G_1})$. Since $K$ is a finite group acting on $\varGamma$, the length function $l^{\prime} : E(G^{\prime}) \rightarrow {\boldsymbol{R}}_{> 0} \cup \{ \infty \}$, $[e] \mapsto |K_e| \cdot l_1(e)$ is well-defined, where $[e]$ and $K_e$ mean the equivalence class of $e$ and the stabilizer of $e$, respectively. Let $\varGamma^{\prime}$ be the metric graph obtained from $(G^{\prime}, l^{\prime})$. Then, $\varGamma^{\prime}$ is the quotient metric graph of $\varGamma$ by $K$. For any edge $e$ of $G_1$, by the Orbit-Stabilizer formula, $|K_e|$ is a positive integer. Thus, for $(G_1, l_1)$ and $(G^{\prime}, l^{\prime})$, there exists only one morphism $\pi : \varGamma \rightarrow \varGamma^{\prime}$ that satisfies ${\rm deg}_e(\pi) = |K_e|$ for any edge $e$ of $G_1$. \begin{rem}[\cite{Song}] \upshape{ $\pi$ is a finite harmonic morphism of degree $|K|$.
} \end{rem} Note that $\varGamma$ is a singleton if and only if $\varGamma^{\prime}$ is a singleton. Let $\varphi : \varGamma \rightarrow \varGamma^{\prime}$ be a finite harmonic morphism of metric graphs. We write the isometry transformation group of $\varGamma$ as ${\rm Isom}(\varGamma)$, {\it i.e.} ${\rm Isom}(\varGamma) := \{ \sigma : \varGamma \rightarrow \varGamma \,|\, \sigma \text{ is an isometry} \}$. \begin{dfn} \label{branched $K$-Galois1} {\upshape Assume that a map $\varphi : \varGamma \rightarrow \varGamma^{\prime}$ between metric graphs and an action on $\varGamma$ by $K$ are given. Then, $\varphi$ is a {\it $K$-Galois covering} on $\varGamma^{\prime}$ if $\varphi$ is a harmonic morphism of metric graphs, the degree of $\varphi$ coincides with the order of $K$, and the action on $\varGamma$ by $K$ induces a transitive action of $K$ on every fibre. $K$ is called the {\it Galois group} of $\varphi$. } \end{dfn} If $\varphi$ is a $K$-Galois covering, then $\varphi$ is finite since $K$ transitively acts on every fibre. \begin{rem} {\upshape A $K$-Galois covering can be $K^{\prime}$-Galois for a finite group $K^{\prime}$ which is not conjugate to $K$. } \end{rem} \begin{lemma} \label{There exists an isomorphism.} There exists a finite harmonic morphism of degree one ({\it i.e.} an isomorphism) $\psi$ from the quotient metric graph $\varGamma/K$ to $\varGamma^{\prime}$ which satisfies $\varphi = \psi \circ \pi$, where $\pi : \varGamma \rightarrow \varGamma / K$ is the natural surjection. \end{lemma} \begin{proof} Let $(G, l), (G^{\prime}, l^{\prime})$ and $(G^{\prime \prime}, l^{\prime \prime})$ be models of $\varGamma, \varGamma^{\prime}$ and $\varGamma / K$ corresponding to $\varphi$ and $\pi$, respectively. Let $\psi : \varGamma / K \rightarrow \varGamma^{\prime}$ be a map defined by $[x] \mapsto \varphi(x)$. Since $[x] = Kx \subset \varGamma$ and $\varphi$ is $K$-Galois, $\varphi(Kx) = \varphi(x)$ holds.
Thus $\psi$ is well-defined. By the definition of $\psi$, $\varphi = \psi \circ \pi$ holds. As $\varphi$ is continuous, so is $\psi$. For any edge $e \in E(G)$, since $\varphi$ is $K$-Galois, \[ {\rm deg}(\varphi) = \sum_{\substack{e_1 \subset Ke \\ e_1 \in E(G)}} {\rm deg}_{e_1}(\varphi) = \sum_{\substack{e_1 \subset Ke \\ e_1 \in E(G)}} {\rm deg}_e(\varphi) = |Ke| \cdot {\rm deg}_e(\varphi). \] Therefore, \[ {\rm deg}_e(\varphi) = \frac{{\rm deg}(\varphi)}{|Ke|} = \frac{|K|}{|Ke|} = |K_e|. \] For any edge $[e] \in E(G^{\prime \prime})$, \[ \frac{l^{\prime \prime}([e])}{l^{\prime}(\varphi(e))} = \frac{l(e) \cdot {\rm deg}_e(\pi)}{l(e) \cdot {\rm deg}_e(\varphi)} = \frac{l(e) \cdot |K_e|}{l(e) \cdot |K_e|} = 1. \] Therefore, $\psi$ is a finite harmonic morphism of degree one. \end{proof} Note that $\pi$ in Lemma \ref{There exists an isomorphism.} is a $K$-Galois covering. \subsection{Rational maps induced by $|D|^K$} Several concepts and statements appearing in this subsection are based on Section $7$ of \cite{Haase=Musiker=Yu}. Let $\varGamma$ be a metric graph, $K$ a finite group acting on $\varGamma$ and let $D$ be a $K$-invariant effective divisor on $\varGamma$. For a finite generating set ${\it F}=\{ f_1 , \ldots, f_n \}$ of $R(D)^K$, let $(G, l)$ be the model of $\varGamma$ such that $V(G) := V(G_0) \cup \bigcup_{i=1}^n ({\rm supp}(D + {\rm div}(f_i)))$, where $G_0$ is the underlying graph structure of the canonical loopless model $(G_0, l_0)$ of $\varGamma$. $\phi_{\it F} : \varGamma \rightarrow \boldsymbol{TP}^{n-1}, x \mapsto (f_1 (x): \cdots: f_n (x))$ denotes the {\it rational map} induced by ${\it F}$. \begin{prop} \label{image is a metric graph} ${\rm Im}(\phi_{\it F})$ is a metric graph in $\boldsymbol{TP}^{n-1}$. \end{prop} \begin{proof} If $n=1$, then $\phi_{\it F}$ is a constant map from $\varGamma$ to $\boldsymbol{TP}^0$ and ${\rm Im}(\phi_{\it F})$ is a metric graph. Let us assume that $n \ge 2$.
As $R(D)^K$ contains all constant functions, $\phi_{\it F}$ is well-defined. Since all $f_i$s are piecewise $\boldsymbol{Z}$-affine functions, the image of $\phi_{\it F}$ is a one-dimensional polyhedral complex. Let $e=v_1 v_2$ be an edge of $G$. As each $f_i$ has constant slope on $e$, $\phi_{\it F}(e)$ is a segment or a point in $\boldsymbol{TP}^{n-1}$ and the distance between $\phi_{\it F}(v_1)$ and $\phi_{\it F}(v_2)$ can be measured by the definition of rational functions on metric graphs. Hence ${\rm Im}(\phi_{\it F})$ becomes a metric graph. \end{proof} When does $\phi_{\it F}$ induce a finite harmonic morphism from $\varGamma$ to ${\rm Im}(\phi_{\it F})$? Moreover, when does $\phi_{\it F}$ induce a $K$-Galois covering on ${\rm Im}(\phi_{\it F})$? We consider answers to these questions. \begin{rem} \label{homomorphism remark} {\upshape $R(D)^K$ is isomorphic to $R(D^{\prime})^K$ as a tropical semimodule for any element $D^{\prime}$ of $|D|^K$. In fact, since $D^{\prime}$ is linearly equivalent to $D$ and both are $K$-invariant, there exists $f \in R(D)^K$ such that $D^{\prime} = D + {\rm div}(f)$ (see Subsection $3.1$). $\psi : R(D)^K \rightarrow R(D^{\prime})^K$, $g \mapsto g - f$ and $\psi^{\prime} : R(D^{\prime})^K \rightarrow R(D)^K$, $g \mapsto g + f$ are homomorphisms of tropical semimodules and are inverses of each other. } \end{rem} \begin{dfn} {\upshape Let $\varGamma$ be a metric graph, $K$ a finite group acting on $\varGamma$ and let $D$ be a $K$-invariant effective divisor on $\varGamma$. $D$ is {\it $K$-very ample} if for any elements $x$ and $x^{\prime}$ of $\varGamma$ whose orbits by $K$ differ from each other, there exist $f$ and $f^{\prime}$ in $R(D)^K$ such that $f(x) - f(x^{\prime}) \not = f^{\prime}(x) - f^{\prime}(x^{\prime})$. We call $D$ {\it $K$-ample} if some positive multiple $kD$ is $K$-very ample. When $K$ is trivial, we simply say ``very ample'' or ``ample''.
} \end{dfn} \begin{rem} {\upshape $D$ is $K$-very ample if and only if $D^{\prime}$ is $K$-very ample for any element $D^{\prime}$ of $|D|^K$. In fact, if $D$ is $K$-very ample, for any points $x$ and $x^{\prime}$ on $\varGamma$ whose orbits by $K$ differ from each other, there exist $g$ and $g^{\prime}$ in $R(D)^K$ such that $g(x) - g(x^{\prime}) \not= g^{\prime}(x) - g^{\prime}(x^{\prime})$. Using $\psi$ given in Remark \ref{homomorphism remark}, $(g - f)(x) - (g - f)(x^{\prime}) = (g(x) - g(x^{\prime})) - (f(x) - f(x^{\prime})) \not= (g^{\prime}(x) - g^{\prime}(x^{\prime})) - (f(x) - f(x^{\prime})) = (g^{\prime} - f)(x) - (g^{\prime} - f)(x^{\prime})$. Thus, $D^{\prime}$ is $K$-very ample. The converse is shown in the same way. } \end{rem} \begin{dfn} {\upshape Let ${\it F} = \{ f_1, \ldots, f_n\}$ be a finite generating set of $R(D)^K$. $\phi_{\it F}$ is {\it $K$-injective} if $\phi_{\it F}$ separates different $K$-orbits on $\varGamma$, {\it i.e.} for any $x$ and $x^{\prime}$ in $\varGamma$ whose $K$-orbits differ from each other, $\phi_{\it F} (x) \not = \phi_{\it F} (x^{\prime})$ holds. } \end{dfn} \begin{rem} \label{independent remark} \upshape{ Let ${\it F}_1 = \{ f_1, \ldots, f_n \}$ and ${\it F}_2 = \{g_1, \ldots, g_n \}$ be minimal generating sets of $R(D)^K$. Since both ${\it F}_1$ and ${\it F}_2$ are minimal, each $g_i$ is written as $a_{i} \odot f_i$ for some real number $a_i$, after renumbering if necessary. Thus, we can move ${\rm Im}(\phi_{{\it F}_1})$ to ${\rm Im}(\phi_{{\it F}_2})$ by the translation $(x_1 : \cdots : x_n) \mapsto (x_1 + a_1 : \cdots : x_n + a_n)$. Hence $\phi_{{\it F}_1}$ is $K$-injective if and only if $\phi_{{\it F}_2}$ is $K$-injective. } \end{rem} \begin{lemma} $D$ is $K$-very ample if and only if the rational map associated to any finite generating set is $K$-injective. \end{lemma} \begin{proof} (``if'' part) Let ${\it F} = \{ f_1, \ldots, f_n \}$ be a generating set of $R(D)^K$.
Assume that $\phi_{\it F}$ is $K$-injective, {\it i.e.} for any $Kx \not= Kx^{\prime}$, $\phi_{\it F}(x) \not= \phi_{\it F}(x^{\prime})$. If for any $i$ and $j$, $f_i(x) - f_i(x^{\prime})=f_j(x) - f_j(x^{\prime})=:c$, then \begin{eqnarray*} \phi_{\it F}(x) &=& (f_1(x): \cdots: f_n(x)) = (f_1(x^{\prime}) + c: \cdots: f_n(x^{\prime}) + c)\\ &=& (f_1(x^{\prime}): \cdots: f_n(x^{\prime})) = \phi_{\it F}(x^{\prime}). \end{eqnarray*} This is a contradiction. Thus there exist $i \not= j$ such that $f_i(x) - f_i(x^{\prime}) \not= f_j(x) - f_j(x^{\prime})$. (``only if'' part) Suppose that there exists a finite generating set ${\it F} = \{ f_1, \ldots, f_n \} \subset R(D)^K$ such that $\phi_{\it F}$ is not $K$-injective. There exist distinct points $x$ and $x^{\prime}$ on $\varGamma$ whose $K$-orbits are different from each other and whose images by $\phi_{\it F}$ are the same. Therefore, there exists a real number $c$ such that $f_i(x^{\prime}) + c = f_i(x)$ for any $i$. This means that $f_i(x) - f_i(x^{\prime}) = f_j(x) - f_j(x^{\prime})$ for any $i$ and $j$. Hence for any $f$ and $f^{\prime}$ in $R(D)^K$, $f(x)-f(x^{\prime}) = f^{\prime}(x) - f^{\prime}(x^{\prime})$ since ${\it F}$ generates $R(D)^K$ as a tropical semimodule. \end{proof} If the induced rational map associated to a minimal generating set of $R(D)^K$ is $K$-injective, then $D$ is $K$-very ample since every generating set of $R(D)^K$ contains a minimal generating set of $R(D)^K$ consisting only of extremals of $R(D)^K$ (see \cite{Song}). Therefore, we obtain the following corollary. \begin{cor} $D$ is $K$-very ample if and only if the rational map associated to a minimal generating set of $R(D)^K$ is $K$-injective.
\end{cor} \begin{lemma} \label{slope one lemma} If $\varGamma$ does not consist only of one point and for any point $x$ on $\varGamma$, there exists $f$ in $R(D)^K$ such that the support of $D + {\rm div}(f)$ contains the orbit of $x$ by $K$, then for any edge $e$ of $G$, there exists $f_i$ which has slope one on $e$. \end{lemma} \begin{proof} Suppose that there exists an edge $e=v_1v_2 \in E(G)$ such that no $f_i$ has slope one on $e$. By the assumptions, there exists $f_j$ which has slope at least two on $e$. Exchanging $v_1$ and $v_2$ if necessary, we may assume that $f_j(v_1) > f_j(v_2)$. Let $(f_j)_{ \ge f_j(t)}(x):={\rm max}\{ f_j(x), f_j(t) \}$ for any $t \in \varGamma$. Since both $f_j$ and the constant $f_j(t)$ function are in $R(D)^K$ and $(f_j)_{ \ge f_j(t)}$ is the tropical sum of them, $(f_j)_{ \ge f_j(t)} \in R(D)^K$ holds. $\varGamma_{v_1} := \{ x \in \varGamma \,|\, f_j(x) \ge f_j(v_1) \}$ is $K$-invariant and can fire on $D + {\rm div}((f_j)_{ \ge f_j(v_1)})$. In fact, for any $x \in \varGamma_{v_1}$ and $\sigma \in K$, since $f_j(\sigma(x)) = f_j(x) \ge f_j(v_1)$ holds, $\varGamma_{v_1}$ is $K$-invariant. For any point $t$ on $e$ sufficiently close to $v_1$ and $\varGamma_t := \{ x \in \varGamma \,|\, f_j(x) \ge f_j(t) \}$, $g := (f_j)_{ \ge f_j(t)}-(f_j)_{ \ge f_j(v_1)}$ has a constant nonzero integer slope on the closure of any connected component of $\varGamma_t \setminus {\varGamma_{v_1}}$. Therefore for any point $x$ on the boundary set of $\varGamma_{v_1}$ and any positive number $l$ less than the minimum of lengths of these closures, \begin{eqnarray*} (D + {\rm div}((f_j)_{ \ge f_j(v_1)} + ({\rm CF}(\varGamma_{v_1}, l))))(x) &\ge& (D + {\rm div}((f_j)_{ \ge f_j(v_1)} + g))(x)\\ &=& (D + {\rm div}((f_j)_{\ge f_j(t)}))(x) \ge 0. \end{eqnarray*} Thus $(f_j)_{\ge f_j(v_1)} + {\rm CF}(\varGamma_{v_1}, l)$ is in $R(D)^K$ and has slope one on $[t, v_1] \subset e$. This is a contradiction.
\end{proof} \begin{rem} {\upshape When $K$ is trivial, the condition ``for any point $x$ on $\varGamma$, there exists $f$ in $R(D)^K$ such that the support of $D + {\rm div}(f)$ contains the orbit of $x$ by $K$'' means that the rank $r(D)$ of $D$ is greater than or equal to one. } \end{rem} \begin{lemma} \label{K-orbit lemma} If $\phi_{\it F}$ is $K$-injective, then for any $x \in \varGamma$, there exists $f \in R(D)^K$ such that ${\rm supp}(D + {\rm div}(f)) \supset Kx$. \end{lemma} \begin{proof} If $\varGamma$ consists only of one point $p$, then $D$ must be of the form $kp$ for some positive integer $k \in \boldsymbol{Z}_{>0}$ and $R(D)^K = R(D)$ consists only of constant functions on $\varGamma$. Therefore for any $f \in R(D)^K$, the support of $D + {\rm div}(f)$ coincides with the support of $D$ and it is $\{ p \}$. Let us assume that $\varGamma$ does not consist only of one point. Since $\varGamma$ is connected, $\varGamma$ contains a closed segment. We prove the contrapositive. Suppose that there exists a point $x$ on $\varGamma$ such that for any $f \in R(D)^K$, the support of $D + {\rm div}(f)$ does not contain $Kx$. In particular, for any $i$, the support of $D + {\rm div}((f_i)_{\ge f_i (x)})$ (resp. the support of $D + {\rm div}(f_i)$) does not contain $Kx$, where $ (f_i)_{\ge f_i (x)}(t) := {\rm max}\{ f_i (x), f_i(t) \}$ for any $t \in \varGamma$; it is in $R(D)^K$ as it is the tropical sum of $f_i$ and the constant $f_i (x)$ function. Therefore $(D + {\rm div}((f_i)_{\ge f_i(x)}))(x)=0$ (resp. $(D + {\rm div}(f_i))(x) = 0$) holds. By the definition of $(f_i)_{\ge f_i(x)}$, $D(x) \ge 0$ and $({\rm div}((f_i)_{\ge f_i(x)}))(x) \ge 0$. Thus $D(x) = ({\rm div}((f_i)_{\ge f_i(x)}))(x) = 0$, and then $({\rm div}(f_i))(x)=0$. If $f_i$ is nonconstant around $x$, then there must exist a direction on which $f_i$ has positive slope and another direction on which $f_i$ has negative slope at $x$.
This means that $({\rm div}((f_i)_{\ge f_i(x)}))(x) \ge 1$ and this is a contradiction. Consequently, $f_i$ is locally constant at $x$. As ${\it F}$ is finite, we can choose a connected neighborhood $U_x$ of $x$ such that $\phi_{\it F}(U_x) = \phi_{\it F}(x)$. Since $K$ is finite, $\phi_{\it F}$ is not $K$-injective. \end{proof} \begin{rem} \label{slope zero remark} {\upshape Since $D$ is effective, $R(D)^K$ contains all constant functions on $\varGamma$. Therefore for any edge $e$ of $G$, there exists $f_i$ which has slope zero on $e$.} \end{rem} \begin{rem} \upshape{ In Section $6$, we define metric graphs with edge-multiplicities and harmonic morphisms between them. Hereafter, we use these concepts, so we recommend consulting Section $6$ first. } \end{rem} \begin{thm} \label{If $K$-injective, then $K$-Galois} If $\phi_{\it F}$ is $K$-injective, then $\phi_{\it F}$ induces a $K$-Galois covering on ${\rm Im}(\phi_{\it F})$ with some edge-multiplicities. \end{thm} \begin{proof} If $\varGamma$ is a singleton, then the image of $\phi_{\it F}$ is also a singleton. Since $\phi_{\it F}$ induces a finite harmonic morphism between singletons, it is $K$-Galois. Assume that $\varGamma$ is not a singleton. By Proposition \ref{image is a metric graph}, Lemma \ref{slope one lemma}, Lemma \ref{K-orbit lemma} and Remark \ref{slope zero remark}, $\phi_{\it F}$ is a local isometry. In fact, for any edge $e = v_1 v_2$ of $G$, \begin{eqnarray*} \phi_{\it F}(v_2) &=& (f_1(v_2) : \cdots :f_n(v_2))\\ &=& (f_1(v_1) + s_1 \cdot l(e) : \cdots :f_n(v_1) + s_n \cdot l(e)), \end{eqnarray*} where each $s_i$ is the slope of $f_i$ on $e$ from $v_1$ to $v_2$. Let $j$ be a number such that $f_j$ has slope zero on $e$, {\it i.e.} $s_j = 0$.
Then, the distance between $\phi_{\it F}(v_1)$ and $\phi_{\it F}(v_2)$ is \begin{eqnarray*} &&\text{``the lattice length of }\\ &&((f_1(v_2) - f_j(v_2)) - (f_1(v_1) - f_j(v_1)), \ldots, (f_n(v_2) - f_j(v_2)) - (f_n(v_1) - f_j(v_1)))\text{''}\\ &=& \text{``the lattice length of }(s_1 \cdot l(e), \ldots, s_n \cdot l(e))\text{''}\\ &=& l(e) \cdot {\rm gcd}(s_1, \ldots, s_n) = l(e). \end{eqnarray*} Let $(G_{\circ}, l_{\circ})$ (resp. $(G_{\circ}^{\prime}, l_{\circ}^{\prime})$) be the canonical model of $\varGamma$ (resp. ${\rm Im}(\phi_{\it F})$). We show that we can choose loopless models $(G_1, l_1)$ and $(G^{\prime}, l^{\prime})$ of $\varGamma$ and ${\rm Im}(\phi_{\it F})$ respectively such that $\phi_{\it F}$ induces a $K$-Galois covering on ${\rm Im}(\phi_{\it F})$ with the edge-multiplicities ${\bold 1} : E(G_1) \rightarrow \boldsymbol{Z}_{\ge 0}, e \mapsto 1$, and $m^{\prime} : E(G^{\prime}) \rightarrow \boldsymbol{Z}_{\ge 0}, e^{\prime} \mapsto |K_{e}|$, where $e$ is an edge of $G_1$ whose image by $\phi_{\it F}$ is $e^{\prime}$. For any $x^{\prime} \in V(G_{\circ}^{\prime}) \setminus \phi_{\it F}(V(G_{\circ}))$, since $\phi_{\it F}$ is $K$-injective, there exists a unique orbit $Kx$ in $\varGamma$ whose image by $\phi_{\it F}$ is $x^{\prime}$. The point $x$ is smooth. Let $e$ be the edge of $G_{\circ}$ containing $x$. If no element of $K$ inverts $e$, then $\phi_{\it F}(x)$ is smooth and this is a contradiction. Hence there exists an element of $K$ which inverts $e$. As $x^{\prime}$ is not smooth, by the proof of Lemma \ref{$V_1$ is finite}, $x$ is the midpoint of $e$ and $x^{\prime}$ has valence one. Thus, let $V(G_1) := V(G) \cup \bigcup_{x^{\prime} \in V(G_{\circ}^{\prime}) \setminus \phi_{\it F}(V(G_{\circ}))} \phi_{\it F}^{-1}(x^{\prime})$ and $V(G^{\prime}) := \phi_{\it F}(V(G_1))$. Then $\phi_{\it F}$ induces a finite harmonic morphism from $\varGamma$ to ${\rm Im}(\phi_{\it F})$ of degree $|K|$ with the edge-multiplicities ${\bold 1}$ and $m^{\prime}$.
By the definition of the action of $K$ on $\varGamma$ and by the assumption, the induced finite harmonic morphism is a $K$-Galois covering on ${\rm Im}(\phi_{\it F})$. \end{proof} \begin{cor} If $D$ is $K$-very ample, then $\phi_{\it F}$ induces a $K$-Galois covering on ${\rm Im}(\phi_{\it F})$. \end{cor} \begin{lemma} \label{$K$-Galois is $K$-injective} If $\phi_{\it F}$ induces a $K$-Galois covering (with the edge-multiplicities in Theorem \ref{If $K$-injective, then $K$-Galois}), then $\phi_{\it F}$ is $K$-injective. \end{lemma} \begin{proof} If there exist two $K$-orbits in $\varGamma$ whose images by $\phi_{\it F}$ coincide with each other, then the inverse image of this point by $\phi_{\it F}$ contains at least two $K$-orbits. Thus $K$ does not act transitively on the fibre. \end{proof} \begin{cor} $\phi_{\it F}$ induces a $K$-Galois covering with the edge-multiplicities in Theorem \ref{If $K$-injective, then $K$-Galois} if and only if $\phi_{\it F}$ is $K$-injective. \end{cor} \begin{rem} \label{$K$-Galois is $K$-injective1} \upshape{ By the same proof as that of Lemma \ref{$K$-Galois is $K$-injective}, we have the statement ``Every $K$-Galois covering on a metric graph (with edge-multiplicities) maps distinct $K$-orbits to distinct points.'', which is more general than Lemma \ref{$K$-Galois is $K$-injective}. } \end{rem} We now have an answer to the question ``when does $\phi_{\it F}$ induce a $K$-Galois covering on ${\rm Im}(\phi_{\it F})$?''. Next, we ask whether there exists a divisor whose $K$-invariant linear system induces a $K$-Galois covering. \begin{rem}[{\cite[Corollary $46$]{Haase=Musiker=Yu}}] {\upshape Every divisor of positive degree is ample. } \end{rem} \begin{thm} \label{$K$-ample1} Every effective $K$-invariant divisor of positive degree is $K$-ample. \end{thm} \begin{proof} Let $\pi : \varGamma \rightarrow \varGamma^{\prime} := \varGamma / K$ be the natural surjection. By the construction, $\pi$ is $K$-Galois.
Thus $\pi$ is $K$-injective. Let $x$ and $y$ be points on $\varGamma$ whose $K$-orbits are different from each other and let $x^{\prime}:= \pi(x)$ and $y^{\prime}:=\pi(y)$. Let $D$ be an effective $K$-invariant divisor on $\varGamma$ of positive degree. $\pi_{\ast}(D)$ is ample since ${\rm deg}(\pi_{\ast}(D))={\rm deg}(D) \ge 1$. Therefore there exists a positive integer $k$ such that $k\pi_{\ast}(D)$ is very ample. Let $f_1^{\prime}$ and $f_2^{\prime}$ be in $R(k\pi_{\ast}(D))$ such that $f_1^{\prime}(x^{\prime})-f_1^{\prime}(y^{\prime}) \not= f_2^{\prime}(x^{\prime})-f_2^{\prime}(y^{\prime})$. As $D$ is $K$-invariant and $\pi$ is $K$-injective, \begin{eqnarray*} \pi^{\ast} \left( \pi_{\ast}(D) \right) &=& \pi^{\ast}\left( \sum_{x \in \varGamma}D(x)\cdot \pi(x) \right)\\ &=& \sum_{x \in \varGamma}{\rm deg}_x(\pi) \cdot \left\{ \left( \sum_{y \in \varGamma}D(y)\cdot \pi(y) \right) (\pi(x)) \right\} \cdot x\\ &=& \sum_{x \in \varGamma}{\rm deg}_x(\pi) \cdot \left( \sum_{y \in \pi^{-1}(\pi(x))}D(y) \right) \cdot x\\ &=& \sum_{x \in \varGamma}{\rm deg}_x(\pi) ( |Kx| \cdot D(x) ) \cdot x = \sum_{x \in \varGamma}(|K_x| \cdot |Kx| \cdot D(x))\cdot x\\ &=& \sum_{x \in \varGamma} |K| D(x) \cdot x = |K| D. \end{eqnarray*} Since $k\pi_{\ast}(D) + {\rm div}(f_i^{\prime})$ is effective, \[ \pi^{\ast}(k\pi_{\ast}(D) + {\rm div}(f_i^{\prime})) = k\pi^{\ast}(\pi_{\ast}(D))+\pi^{\ast}({\rm div}(f_i^{\prime})) = k|K|D+{\rm div}(\pi^{\ast}f_i^{\prime}) \] is also effective. This means $\pi^{\ast}f_i^{\prime} \in R(k|K|D)$. As \begin{eqnarray*} \pi^{\ast}f_1^{\prime}(x)-\pi^{\ast}f_1^{\prime}(y) &=& f_1^{\prime}(\pi(x))-f_1^{\prime}(\pi(y)) = f_1^{\prime}(x^{\prime})-f_1^{\prime}(y^{\prime})\\ &\not=& f_2^{\prime}(x^{\prime})-f_2^{\prime}(y^{\prime}) = f_2^{\prime}(\pi(x))-f_2^{\prime}(\pi(y))\\ &=& \pi^{\ast}f_2^{\prime}(x)-\pi^{\ast}f_2^{\prime}(y),\end{eqnarray*} $k|K|D$ is $K$-very ample. \end{proof} Therefore, the answer is ``always''. In conclusion, we have the following theorem. \begin{thm} \label{main theorem2} Let $\varGamma$ be a metric graph and $K$ a finite group acting on $\varGamma$. Then, there exists a rational map, from $\varGamma$ to a tropical projective space, which induces a $K$-Galois covering on the image with edge-multiplicities. \end{thm} Especially when the group $K$ is trivial, we have the following corollary. \begin{cor} \label{embedded in a tropical projective space1} A metric graph is embedded in a tropical projective space by a rational map. \end{cor} \begin{prop} \label{pull-back of rational functions2} If $\phi_{\it F}$ induces a $K$-Galois covering $\phi$, then $\phi^{\ast}({\rm Rat}({\rm Im}(\phi_{\it F}))) = {\rm Rat}(\varGamma)^K$ holds. \end{prop} \begin{proof} For any $f^{\prime} \in {\rm Rat}({\rm Im}(\phi_{\it F}))$, obviously $\phi^{\ast}(f^{\prime}) = f^{\prime} \circ \phi \in {\rm Rat}(\varGamma)^K$ holds. Let $(G, l)$ (resp. $(G^{\prime}, l^{\prime})$) be a model of $\varGamma$ (resp. ${\rm Im}(\phi_{\it F})$) corresponding to $\phi$. Let $f$ be an element of ${\rm Rat}(\varGamma)^K$. Since $\phi_{\it F}$ is $K$-injective, there exists a one-to-one correspondence between the $K$-orbits of $\varGamma$ and the points of ${\rm Im}(\phi_{\it F})$. Let $g(x^{\prime}) := f(\phi^{-1}(x^{\prime}))$ for $x^{\prime} \in {\rm Im}(\phi_{\it F})$; since $f$ is constant on each $K$-orbit, $g$ is well-defined. By the definition of $g$, for any $x \in \varGamma$, $\phi^{\ast}(g) (x) = g \circ \phi (x) = g(\phi(x)) = f(x)$ holds.
Thus, $f = \phi^{\ast}(g) \in \phi^{\ast}({\rm Rat}({\rm Im}(\phi_{\it F})))$. \end{proof} \begin{rem} \label{pull-back of rational functions} Let $\varGamma$ be a metric graph, $K$ a finite group acting on $\varGamma$ and $\varphi : \varGamma \rightarrow \varGamma^{\prime} := \varGamma /K$ the natural surjection. Let $(G_1, l_1)$ (resp. $(G^{\prime}, l^{\prime})$) be the model of $\varGamma$ (resp. $\varGamma^{\prime}$) in Section $3$. ${\rm Rat}(\varGamma)_K$ denotes the set consisting of $K$-invariant rational functions $f$ on $\varGamma$ each of whose slopes on $e$ is a multiple of $|K_e|$, where $e$ is a connected component of $\varGamma \setminus ({\rm supp}({\rm div}(f)) \cup V(G_1))$. Then, $\varphi^{\ast}({\rm Rat}(\varGamma^{\prime})) = {\rm Rat}(\varGamma)_K$. \end{rem} \begin{proof} Let $f^{\prime} \in {\rm Rat}(\varGamma^{\prime})$. By the definition of pull-back of a function, $\varphi^{\ast}(f^{\prime}) = f^{\prime} \circ \varphi \in {\rm Rat}(\varGamma)^K$. Let $e$ be a connected component of $\varGamma \setminus ({\rm supp}({\rm div}(\varphi^{\ast}(f^{\prime}))) \cup V(G_1))$. By the construction of $\varGamma^{\prime}$, $l^{\prime}(\varphi(e)) = l^{\prime}([e]) = |K_e| l(e)$. On $e$, $\varphi^{\ast}(f^{\prime})$ has slope $|K_e|$ times the slope of $f^{\prime}$ on $\varphi(e)$. Therefore, $\varphi^{\ast}(f^{\prime})$ is in ${\rm Rat}(\varGamma)_K$. Let $f$ be an element of ${\rm Rat}(\varGamma)_K$. Let $g$ be the rational function on $\varGamma^{\prime}$ defined by the following $(1)$ and $(2)$. \begin{itemize} \item[$(1)$] Fix a point $x_0$ on $\varGamma$. $g(\varphi(x_0)) := f(x_0)$. \item[$(2)$] For a connected component $e$ of $\varGamma \setminus ({\rm supp}({\rm div}(f)) \cup V(G_1))$, $g$ has the slope $\frac{(\text{the slope of } f \text{ on } e)}{|K_e|}$ on $\varphi(e)$. \end{itemize} Then, $g$ is well-defined and $f = \varphi^{\ast}(g) \in \varphi^{\ast}({\rm Rat}(\varGamma^{\prime}))$. In fact, the following hold.
Let $x_1, x_2$ be any two points on $\varGamma$ and $P_1 = e_{11} \cdots e_{1n_1}$ and $P_2 = e_{21} \cdots e_{2n_2}$ any two paths from $x_1$ to $x_2$. Let $s_{ij}$ be the slope of $f$ on $e_{ij}$. As \[ f(x_2) = f(x_1) + \sum_{j=1}^{n_1} l_1(e_{1j})s_{1j} = f(x_1) + \sum_{j=1}^{n_2} l_1(e_{2j})s_{2j} \] holds, we have \[ \sum_{j=1}^{n_1} l_1(e_{1j})s_{1j} = \sum_{j=1}^{n_2} l_1(e_{2j})s_{2j}. \] Therefore, \[ \sum_{j=1}^{n_1} \frac{ |K_{e_{1j}}| \cdot l_1(e_{1j}) \cdot s_{1j}}{|K_{e_{1j}}|} = \sum_{j=1}^{n_2} \frac{ |K_{e_{2j}}| \cdot l_1(e_{2j}) \cdot s_{2j}}{|K_{e_{2j}}|} \] and then $g$ is well-defined. Let $x_1$ be $x_0$. For any $x_2$, \begin{eqnarray*} f(x_2) &=& f(x_0) + \sum_{j=1}^{n_1} l_1 (e_{1j}) s_{1j}\\ &=& g(\varphi(x_0)) + \sum_{j=1}^{n_1} \frac{ |K_{e_{1j}}| \cdot l_1 (e_{1j}) \cdot s_{1j}}{|K_{e_{1j}}|} = g(\varphi(x_2)). \end{eqnarray*} Then, $f = \varphi^{\ast}(g)$. \end{proof} \subsection{Applications} In \cite{Haase=Musiker=Yu}, Haase, Musiker and Yu pose the problem ``give a characterization of metric graphs whose canonical divisors are not very ample'' (see {\cite[Problem $51$]{Haase=Musiker=Yu}}). In this subsection, we give an answer to this problem and, at the same time, we consider an analogue of the fact that the canonical map of a hyperelliptic compact Riemann surface is a double covering. Let $\varGamma$ be a metric graph and $D$ a divisor on $\varGamma$. $\phi_{|D|}$ denotes the rational map induced by $|D|$, {\it i.e.} for a minimal generating set $\{ f_1, \ldots, f_n \}$ of $R(D)$, $\phi_{|D|} := (f_1 : \cdots : f_n) : \varGamma \rightarrow \boldsymbol{TP}^{n-1}, x \mapsto (f_1(x) : \cdots : f_n(x))$. \begin{rem}[{\cite[Proposition $48$]{Haase=Musiker=Yu}}] \label{hyperelliptic proposition} \upshape{ If ${\rm deg}(D)=2$, then $\phi_{|D|}(\varGamma)$ is a tree. If in addition $r(D)=1$, then the fibre $\phi_{|D|}^{-1}(x)=\{y \in \varGamma \,|\, \phi_{|D|}(y)=x \}$ has size one or two for all $x$ in the image.
} \end{rem} By Remark \ref{hyperelliptic proposition}, we have the following lemma. \begin{lemma} \label{hyperelliptic lemma} Let $\varGamma$ be a hyperelliptic metric graph without one valent points and $D$ a divisor on $\varGamma$ whose degree is two and whose rank is one. Then, the complete linear system $|D|$ is invariant by the hyperelliptic involution $\iota$ and the rational map associated to $|D|$ induces a $\langle \iota \rangle$-Galois covering on a tree. \end{lemma} \begin{proof} Obviously $|D|$ is invariant by $\langle \iota \rangle$. By Remark \ref{hyperelliptic proposition}, ${\rm Im}(\phi_{|D|})$ is a tree. By the proof of Remark \ref{hyperelliptic proposition}, for any point $x$ on a bridge of $\varGamma$, $|\phi_{|D|}^{-1}(\phi_{|D|}(x))| = 1$, and for any point $y$ not on a bridge but on a cycle of $\varGamma$, $|\phi_{|D|}^{-1}(\phi_{|D|}(y))| = 2$ and $\phi_{|D|}^{-1}(\phi_{|D|}(y)) = \{y, \iota(y) \}$. Therefore $\phi_{|D|}$ is $\langle \iota \rangle$-injective. Thus $\phi_{|D|}$ induces a $\langle \iota \rangle$-Galois covering. \end{proof} The {\it canonical map} is the rational map induced by the canonical linear system $|K_{\varGamma}|$ on a metric graph $\varGamma$. \begin{thm} \label{canonical map1} Let $\varGamma$ be a metric graph without one valent points and $\phi_{|K_{\varGamma}|}$ the canonical map of $\varGamma$. Then $\phi_{|K_{\varGamma}|}$ induces a $\boldsymbol{Z} / 2 \boldsymbol{Z}$-Galois covering on the image of $\phi_{|K_{\varGamma}|}$ if and only if the genus $g$ of $\varGamma$ is two. \end{thm} \begin{proof} Since ${\rm deg}(K_{\varGamma}) = 2g - 2$ and $r(K_{\varGamma}) = g-1$ by the Riemann--Roch theorem, when $g = 0$, $\phi_{|K_{\varGamma}|}$ is not induced, and when $g = 1$, $\phi_{|K_{\varGamma}|}$ is a constant map. When $g = 2$, $K_{\varGamma}$ has degree two and rank one, and then by Lemma \ref{hyperelliptic lemma}, $\phi_{|K_{\varGamma}|}$ is a $\boldsymbol{Z} / 2 \boldsymbol{Z}$-Galois covering on a tree.
When $g \ge 3$, if $K_{\varGamma}$ is very ample, then $\phi_{|K_{\varGamma}|}$ is injective and hence not a $\boldsymbol{Z} / 2 \boldsymbol{Z}$-Galois covering; thus we may assume that $K_{\varGamma}$ is not very ample, and then $\varGamma$ must be one of the following two types of hyperelliptic metric graphs by \cite[Theorem 49]{Haase=Musiker=Yu}. (type $1$) $\varGamma$ is a metric graph consisting of two vertices $x, y$ and $g + 1$ multiple edges between them. See Figure \ref{canonicalmap-1}. \begin{figure} \caption{type $1$} \label{canonicalmap-1} \end{figure} The rational functions $f_{i1}, f_{i2}$ and $f_{i3}$ in Figure \ref{canonicalmap-3} are extremals of $R(K_{\varGamma})$ and the rational map $e_i^{\circ} \rightarrow \boldsymbol{TP}^{2}, t \mapsto (f_{i1}(t) : f_{i2}(t) : f_{i3}(t))$ is injective. \begin{figure} \caption{$z_{i1}$ is the midpoint of $e_i$. $z_{i2}$ and $z_{i3}$ are the internally dividing points obtained by internally dividing $e_i$ by $(g-1) : (g-2)$, where $z_{i2}$ is further than $z_{i3}$ from $x$. $f_{i1}, f_{i2}$ and $f_{i3}$ define principal divisors such that $D + {\rm div}(f_{i1}) = (2g - 2)z_{i1}$, $D + {\rm div}(f_{i2}) = x + (2g - 3)z_{i2}$ and $D + {\rm div}(f_{i3}) = y + (2g - 3)z_{i3}$, respectively.} \label{canonicalmap-3} \end{figure} On the other hand, obviously all extremals of $R(K_{\varGamma})$ attain their maximum only at $x$ and $y$ by Lemma \ref{condition to be extremal} (in this case, $K$ is trivial). Hence $\phi_{|K_{\varGamma}|}|_{\varGamma \setminus \{ x, y \}}$ is injective and $\phi_{|K_{\varGamma}|}(x) = \phi_{|K_{\varGamma}|}(y)$. Thus $\phi_{|K_{\varGamma}|}$ is not a $\boldsymbol{Z} / 2 \boldsymbol{Z}$-Galois covering. (type $2$) $\varGamma$ is a metric graph of the form in Figure \ref{canonicalmap-2}. $e_{g + 2}$ and $e_{g + 3}$ have the same length. \begin{figure} \caption{type $2$} \label{canonicalmap-2} \end{figure} Since $K_{\varGamma}$ is linearly equivalent to $D := (g-1)(x+y)$, $\phi_{|K_{\varGamma}|} = \phi_{|D|}$ holds.
As in the proof for type $1$, we have three extremals $f_{i1}, f_{i2}$ and $f_{i3}$ of $R(D)$ which induce an injective rational map on $e_i^{\circ}, i=1, \ldots, g$. The rational functions $h_1, \ldots, h_5$ and $h_6$ in Figure \ref{canonicalmap-4} are extremals of $R(D)$ and the map $(e_{g} \cup e_{g+1}) \setminus \{ p,q \} \rightarrow \boldsymbol{TP}^{5}, t \mapsto (h_1(t) : h_2(t) : h_3(t) : h_4(t) : h_5(t) : h_6(t))$ is injective. The rational functions $h_7$ and $h_8$ in Figure \ref{canonicalmap-5} are extremals of $R(D)$ and the map $(e_{g + 2} \cup e_{g + 3}) \setminus \{ x,y \} \rightarrow \boldsymbol{TP}^{2}, t \mapsto (f_{11}(t) : h_7(t) : h_8(t))$ is injective. In particular, when $g = 3$, see Figure \ref{canonicalmap-7}. Hence $\phi_{|D|}|_{\varGamma \setminus \{ x, y \}}$ is injective. On the other hand, for the same reason, $\phi_{|D|}(x) = \phi_{|D|}(y)$. In conclusion, $\phi_{|D|}$ is not a $\boldsymbol{Z} / 2 \boldsymbol{Z}$-Galois covering. \begin{figure} \caption{$w_1$ (resp.~$w_2$) is the midpoint of $e_g$ (resp.~$e_{g+1}$). $w_3$ and $w_5$ (resp.~$w_4$ and $w_6$) are the internally dividing points obtained by internally dividing $e_g$ (resp.~$e_{g+1}$) by $(g-2) : (g-1)$, where $w_3$ (resp.~$w_4$) is further than $w_5$ (resp.~$w_6$) from $p$ (resp.~$q$). $h_{1}, h_{2}, h_{3}, h_{4}, h_{5}$ and $h_{6}$ define principal divisors such that $D + {\rm div}(h_1) = (2g-2)w_1$, $D + {\rm div}(h_2) = (2g-2)w_2$, $D + {\rm div}(h_3) = p + (2g-3)w_3$, $D + {\rm div}(h_4) = p + (2g-3)w_4$, $D + {\rm div}(h_5) = q + (2g-3)w_5$ and $D + {\rm div}(h_6) = q + (2g-3)w_6$, respectively.} \label{canonicalmap-4} \end{figure} \begin{figure} \caption{(case $g \ge 4$) $z_{11}, w_7$ and $w_8$ are the midpoints of $e_1, e_{g + 2}$ and $e_{g + 3}$, respectively.
$h_{7}$ and $h_{8}$ define principal divisors such that $D + {\rm div}(h_7) = (2g-6)z_{11} + p + 2w_7$ and $D + {\rm div}(h_8) = (2g-6)z_{11} + q + 2w_8$, respectively.} \label{canonicalmap-5} \end{figure} \begin{figure} \caption{(case $g = 3$) $w_7$ and $w_8$ are the midpoints of $e_{g + 2}$ and $e_{g + 3}$, respectively. $h_{7}$ and $h_{8}$ define principal divisors such that $D + {\rm div}(h_7) = p + 2w_7$ and $D + {\rm div}(h_8) = q + 2w_8$, respectively.} \label{canonicalmap-7} \end{figure} \end{proof} \begin{cor} \label{canonical map2} Let $\varGamma$ be a metric graph of genus $\ge 3$ without one valent points. $K_{\varGamma}$ is not very ample if and only if the canonical map is not harmonic. In particular, $\varGamma$ is hyperelliptic and $g({\rm Im}(\phi_{|K_{\varGamma}|}))=g(\varGamma)+1$. \end{cor} \begin{proof} In the proof of Theorem \ref{canonical map1}, we can directly check that the degrees of $\phi_{|K_{\varGamma}|}$ (resp. $\phi_{|D|}$) at $x$ and $y$ are different from the degree of $\phi_{|K_{\varGamma}|}$ (resp. $\phi_{|D|}$) at any other point. \end{proof} Theorem \ref{canonical map1} and Corollary \ref{canonical map2} mean that the analogue of the fact that the canonical map of a hyperelliptic compact Riemann surface is a double covering of the projective line $\boldsymbol{P}^1(\boldsymbol{C})$ with non-zero degree does not hold for metric graphs; instead, we have the following by Lemma \ref{hyperelliptic lemma}. \begin{prop} \label{double covering1} Let $\varGamma$ be a hyperelliptic metric graph of genus at least two without one valent points. Then, an invariant linear subsystem of the hyperelliptic involution $\iota$ of the canonical linear system induces a rational map whose image is a tree and which is a $\langle \iota \rangle$-Galois covering on the image.
\end{prop} \begin{proof} As $\iota$ is an isometry, $K_{\varGamma}$ is of the form $D + E$, where both $D$ and $E$ are effective divisors on $\varGamma$ and ${\rm deg}(D) = 2$, $r(D)=1$. Since $K_{\varGamma}$ and $D$ are invariant by $\iota$, so is $E$. Thus the canonical linear system $|K_{\varGamma}| = |D + E|$ contains the invariant linear subsystem $\Lambda$ of the hyperelliptic involution whose elements are of the form $D_1 + E$, where $D_1$ is effective and linearly equivalent to $D$. Let $R$ be the subsemimodule of $R(K_{\varGamma}) = R(D + E)$ corresponding to $\Lambda$. Then $R = R(D)$. In fact, for any $f \in R$, there exists $D_1 + E \in \Lambda$ such that $(D + E) + {\rm div}(f) = D_1 + E \ge 0$ and $D_1$ is effective. Thus $D + {\rm div}(f) = D_1 \ge 0$, {\it i.e.} $f \in R(D)$. Conversely, for any $g \in R(D)$, there exists $D_1 \in \Lambda^{\prime}$ such that $D + {\rm div}(g) = D_1 \ge 0$, where $\Lambda^{\prime}$ is the linear system corresponding to $R(D)$. Hence $D + E + {\rm div}(g) = D_1 + E \ge 0$ and then $g \in R$. Therefore, by Lemma \ref{hyperelliptic lemma}, $\Lambda$ induces a rational map which is a $\langle \iota \rangle$-Galois covering on a tree. \end{proof} Moreover, we have the following lemma. \begin{lemma} \label{containment relation lemma} Let $\varGamma$ be a metric graph, $K$ a finite group and $D$ a divisor on $\varGamma$. For finitely generated $K$-invariant linear subsystems $\Lambda_1 \subset \Lambda_2 \subset |D|$, let $\phi_{\Lambda_1} = (f_1 : \cdots : f_n)$ (resp. $\phi_{\Lambda_2} = (g_1 : \cdots : g_m)$) be the rational map induced by $\Lambda_1$ (resp. $\Lambda_2$). If $\phi_{\Lambda_1}$ induces a $K$-Galois covering, then $\phi_{\Lambda_2}$ induces a $K$-Galois covering. \end{lemma} \begin{proof} Let $f_i = \text{``}\sum_{j=1}^{m} a_{ij}g_j\text{''}$.
Since $m$ is finite and each $g_i$ is a rational function on a metric graph, we can choose a model $(G, l)$ of $\varGamma$ satisfying the following condition: for any $i$ and edge $e$ of $G$, there exists a number $i_e$ such that $f_i |_e = a_{ii_e} + g_{i_e}$. Let $f^{\prime}_i := f_i - a_{ii_e}$. Then, ${\it F}^{\prime} := \{ f^{\prime}_1, \ldots, f^{\prime}_n \}$ is a minimal generating set of $R_1$, where $R_1$ is the tropical subsemimodule of $R(D)$ corresponding to $\Lambda_1$. As $\phi_{\Lambda_1}$ is $K$-Galois, by Theorem \ref{If $K$-injective, then $K$-Galois}, it is $K$-injective and then $\phi_{{\it F}^{\prime}}$ is also $K$-injective by Remark \ref{independent remark}. By the definition of ${\it F}^{\prime}$, $\phi_{\Lambda_2}|_e = \phi_{{\it F}^{\prime}}|_e$ holds. Hence $\phi_{\Lambda_2}$ is $K$-injective on $e$. Since $e$ is arbitrary, $\phi_{\Lambda_2}$ is $K$-injective. In conclusion, by Theorem \ref{If $K$-injective, then $K$-Galois} again, $\phi_{\Lambda_2}$ induces a $K$-Galois covering. \end{proof} Consequently, by Proposition \ref{double covering1} and Lemma \ref{containment relation lemma}, the following holds. \begin{thm} \label{canonical map3} For a hyperelliptic metric graph of genus at least two without one valent points, the invariant linear system of the hyperelliptic involution $\iota$ of the canonical linear system induces a rational map whose image is a tree and which is a $\langle \iota \rangle$-Galois covering on the image. \end{thm} \section{Metric graphs with edge-multiplicities} In Section $3$, we prove that the rational map induced by $|D|^K$ which is $K$-injective is a finite harmonic morphism (and hence a $K$-Galois covering) of metric graphs with an edge-multiplicity. We define in this section metric graphs with edge-multiplicities and harmonic morphisms between them. Compare Subsections $2.2, 2.3$ and $2.4$. Note that all of these are original definitions of the author and they may need further improvement.
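Before giving the formal definitions, the following elementary example (ours, and not used in the sequel) illustrates the quotient construction of Section $3$ and Remark \ref{pull-back of rational functions} in the simplest case, where all edge stabilizers are trivial. Let $\varGamma = \boldsymbol{R} / 2 \boldsymbol{Z}$ be a circle of circumference two and let $K = \langle \iota \rangle \cong \boldsymbol{Z} / 2 \boldsymbol{Z}$ act on $\varGamma$ by the reflection $\iota(x) = -x$, whose fixed points are $0$ and $1$. The reflection interchanges the two arcs between $0$ and $1$, so every edge stabilizer $K_e$ is trivial and $\varGamma^{\prime} = \varGamma / K$ is a segment of length one. The rational function
\[
f(x) := \min \{ x,\, 2 - x \} \qquad (x \in [0, 2))
\]
is $K$-invariant with slopes $\pm 1$ and hence belongs to ${\rm Rat}(\varGamma)_K$. The function $g$ constructed in the proof of Remark \ref{pull-back of rational functions} is $g(t) = t$ on $\varGamma^{\prime} \cong [0, 1]$, and indeed $f = \varphi^{\ast}(g)$.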
\subsection{Metric graphs with edge-multiplicities} \begin{dfn} {\upshape Let $\varGamma$ be a metric graph, and $(G, l)$ a model of $\varGamma$. We call a function $m : E(G) \rightarrow \boldsymbol{Z}_{>0}$ an {\it edge-multiplicity function} on $G$. $\boldsymbol{1}$ is the edge-multiplicity function assigning multiplicity one to all edges and is called the {\it trivial} edge-multiplicity function. Two triplets $(G, l, m)$ and $(G^{\prime}, l^{\prime}, m^{\prime})$ are said to be {\it isomorphic} if there exists an isomorphism between $G$ and $G^{\prime}$ keeping the length and the multiplicity of each edge. We define ${\rm Isom}_{(G, l)}(\varGamma)$ as the subset of the isometry transformation group ${\rm Isom}(\varGamma)$ of $\varGamma$ each of whose elements keeps the length of each edge of $G$. We set ${\rm Isom}_{(G, l, m)}$ as the subset of ${\rm Isom}_{(G, l)}(\varGamma)$ each of whose elements keeps the multiplicity of each edge of $G$. } \end{dfn} \begin{dfn}[Subdivision of models] {\upshape Let $\varGamma$ be a metric graph, and $(G, l)$, $(G^{\prime}, l^{\prime})$ models of $\varGamma$. $(G, l)$ is said to be a {\it subdivision} of $(G^{\prime}, l^{\prime})$ and written as $(G, l) \succ (G^{\prime}, l^{\prime})$ if $V(G^{\prime})$ is a subset of $V(G)$. } \end{dfn} \begin{dfn} {\upshape Let $\varGamma$ be a metric graph, and $(G, l) \succ (G^{\prime}, l^{\prime})$ models of $\varGamma$. A triplet $(G, l, m)$ is said to be a {\it subdivision} of a triplet $(G^{\prime}, l^{\prime}, m^{\prime})$ and written as $(G, l, m) \succ (G^{\prime}, l^{\prime}, m^{\prime})$ if for any $e^{\prime} \in E(G^{\prime})$ and $e_i \in E(G)$ such that $e^{\prime} = e_1 \sqcup \cdots \sqcup e_n$, $m^{\prime}(e^{\prime})$ divides all $m(e_i)$.
In particular, if $m^{\prime}(e^{\prime})$ and all $m(e_i)$ are equal, then $(G, l, m)$ is said to be a {\it trivial} subdivision of $(G^{\prime}, l^{\prime}, m^{\prime})$ and then $(G^{\prime}, l^{\prime}, m^{\prime})$ is denoted by $(G^{\prime}, l^{\prime}, m)$. } \end{dfn} \begin{dfn} \upshape{ For a quadruplet $(\varGamma, G, l, m)$, the {\it metric graph with an edge-multiplicity}, denoted by $\varGamma_{m}$, is defined as the pair of the metric graph $\varGamma$ and $m$ such that we may choose only models $(G^{\prime}, l^{\prime}) \prec (G, l)$ of $\varGamma$. The word ``a {\it point} $x$ on $\varGamma_{m}$'' means that $x \in \varGamma$. The {\it genus} of $\varGamma_{m}$ is the genus of $\varGamma$. } \end{dfn} \begin{dfn} \upshape{ Let $\varGamma_{m}$ be a metric graph with an edge-multiplicity. ${\rm Div}(\varGamma_{m})$ is defined by ${\rm Div}(\varGamma)$ and an element of ${\rm Div}(\varGamma_{m})$ is called a {\it divisor} on $\varGamma_{m}$. The {\it canonical divisor} on $\varGamma_{m}$ is the canonical divisor on $\varGamma$. We define ${\rm Rat}(\varGamma_{m})$ as ${\rm Rat}(\varGamma)$. We call an element of ${\rm Rat}(\varGamma_{m})$ a {\it rational function} on $\varGamma_{m}$. } \end{dfn} Note that for an edge $e$ of $G$ and $f \in {\rm Rat}(\varGamma_{m})$, $f$ may have several different finite slopes on $e$, since $f$ may have several linear pieces on $e$. For a metric graph with an edge-multiplicity, we use the same terms and notations as for the underlying metric graph. \subsection{Harmonic morphisms with edge-multiplicities} \begin{dfn} \upshape{ Let $\varGamma_m, \varGamma^{\prime}_{m^{\prime}}$ be metric graphs with edge-multiplicities $m, m^{\prime}$, respectively, and $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ a continuous map. The map $\varphi_{m,\, m^{\prime}}$ is called a {\it morphism} if $\varphi_{m,\, m^{\prime}}$ is a morphism of loopless models $(G, l)$ and $(G^{\prime}, l^{\prime})$.
For an edge $e$ of $G$, if $\varphi_{m,\, m^{\prime}}(e)$ is a vertex of $G^{\prime}$, let $m^{\prime}(\varphi_{m,\, m^{\prime}}(e)) := 0$ formally. The morphism $\varphi_{m,\, m^{\prime}}$ is said to be {\it finite} if $\varphi_{m,\, m^{\prime}}$ is finite as a morphism of loopless models. } \end{dfn} \begin{dfn} \upshape{ Let $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ be a morphism of metric graphs with edge-multiplicities. Suppose that $\varGamma^{\prime}_{m^{\prime}}$ is not a singleton and let $x$ be a point on $\varGamma_m$. The morphism $\varphi_{m,\, m^{\prime}}$ is {\it harmonic at} $x$ if for any edge $e_1$ of $G$ adjacent to $x$, $m(e_1)$ divides $m^{\prime}(\varphi_{m,\, m^{\prime}}(e_1))$ and the number \[ {\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}}) := \sum_{x \in h \mapsto h^{\prime}} \frac{m^{\prime}(\varphi_{m,\, m^{\prime}}(e))}{m(e)} \cdot {\rm deg}_h(\varphi_{m,\, m^{\prime}}) \] is independent of the choice of half-edge $h^{\prime}$ emanating from $\varphi_{m,\, m^{\prime}}(x)$, where $h$ is a connected component of the inverse image of $h^{\prime}$ by $\varphi_{m,\, m^{\prime}}$ containing $x$ and $e$ is the edge of $G$ containing $h$. The morphism $\varphi_{m,\, m^{\prime}}$ is {\it harmonic} if it is harmonic at all points on $\varGamma_m$. For a point $x^{\prime}$ on $\varGamma^{\prime}_{m^{\prime}}$, \[ {\rm deg}^{m,\, m^{\prime}}(\varphi_{m,\, m^{\prime}}) := \sum_{x \mapsto x^{\prime}}{\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}}) \] is called the {\it degree} of $\varphi_{m,\, m^{\prime}}$, where $x$ is an element of the inverse image of $x^{\prime}$ by $\varphi_{m,\, m^{\prime}}$. If $\varGamma^{\prime}_{m^{\prime}}$ is a singleton and $\varGamma_m$ is not a singleton, for any point $x$ on $\varGamma_m$, we define ${\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}})$ as zero so that we regard $\varphi_{m,\, m^{\prime}}$ as a harmonic morphism of degree zero.
If both $\varGamma_m$ and $\varGamma^{\prime}_{m^{\prime}}$ are singletons, we regard $\varphi_{m,\, m^{\prime}}$ as a harmonic morphism which may have arbitrary degree. } \end{dfn} \begin{lemma} $\sum_{x \mapsto x^{\prime}}{\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}})$ is independent of the choice of a point $x^{\prime}$ on $\varGamma^{\prime}$. \end{lemma} \begin{proof} It is sufficient to check that the sum is the same for any vertex of $G^{\prime}$. Let $x_1^{\prime}$ and $x_2^{\prime}$ be vertices of $G^{\prime}$ both adjacent to an edge $e^{\prime}$ of $G^{\prime}$. Let $h_1^{\prime}$ be the half-edge of $x_1^{\prime}$ contained in $e^{\prime}$. Then \begin{eqnarray*} \sum_{x_1 \mapsto x_1^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{x_1}(\varphi_{m,\, m^{\prime}}) &=& \sum_{x_1 \mapsto x_1^{\prime}} \left( \sum_{x_1 \in h_1 \mapsto h_1^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{h_1}(\varphi_{m,\, m^{\prime}}) \right)\\ &=& \sum_{x_1 \mapsto x_1^{\prime}} \left( \sum_{x_1 \in e_1 \mapsto e^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{e_1}(\varphi_{m,\, m^{\prime}}) \right)\\ &=& \sum_{e \mapsto e^{\prime}}{\rm deg}^{m,\, m^{\prime}}_e(\varphi_{m,\, m^{\prime}}). \end{eqnarray*} Similarly, \[ \sum_{x_2 \mapsto x_2^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{x_2}(\varphi_{m,\, m^{\prime}}) = \sum_{e \mapsto e^{\prime}}{\rm deg}^{m,\, m^{\prime}}_e(\varphi_{m,\, m^{\prime}}).\qedhere \] \end{proof} The collection of metric graphs with edge-multiplicities together with harmonic morphisms between them forms a category. \begin{dfn} \upshape{ Let $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ be a finite harmonic morphism of metric graphs with edge-multiplicities.
For $f$ in ${\rm Rat}(\varGamma_m)$, the {\it push-forward} of $f$ is the function $(\varphi_{m,\, m^{\prime}})_\ast f: \varGamma^{\prime}_{m^{\prime}} \rightarrow \boldsymbol{R} \cup \{ \pm \infty \}$ defined by \[ (\varphi_{m,\, m^{\prime}})_\ast f(x^{\prime}) := \sum_{\substack{x \in \varGamma_{m} \\ \varphi_{m,\, m^{\prime}}(x) = x^{\prime}}} {\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}}) \cdot f(x). \] The {\it pull-back} of $f^{\prime}$ in ${\rm Rat}(\varGamma^{\prime}_{m^{\prime}})$ is the function $(\varphi_{m,\, m^{\prime}})^{\ast}f^{\prime} : \varGamma_m \rightarrow \boldsymbol{R} \cup \{ \pm \infty \}$ defined by $(\varphi_{m,\, m^{\prime}})^{\ast}f^{\prime} := f^{\prime} \circ \varphi_{m,\, m^{\prime}}$. We define the {\it push-forward homomorphism} on divisors $(\varphi_{m,\, m^{\prime}})_\ast : {\rm Div}(\varGamma_m) \rightarrow {\rm Div}(\varGamma^{\prime}_{m^{\prime}})$ by \[ (\varphi_{m,\, m^{\prime}})_\ast (D) := \sum_{x \in \varGamma_m}D(x) \cdot \varphi_{m,\, m^{\prime}}(x). \] The {\it pull-back homomorphism} on divisors $(\varphi_{m,\, m^{\prime}})^{\ast}:{\rm Div}(\varGamma^{\prime}_{m^{\prime}}) \rightarrow {\rm Div}(\varGamma_m)$ is defined to be \[ (\varphi_{m,\, m^{\prime}})^{\ast} (D^{\prime}) := \sum_{x \in \varGamma_m}{\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}}) \cdot D^{\prime}(\varphi_{m,\, m^{\prime}}(x)) \cdot x. \] } \end{dfn} \begin{rem} We need not assume that $\varphi_{m,\, m^{\prime}}$ is finite to define pull-backs of rational functions and divisors. \end{rem} \begin{prop} For any divisors $D$ on $\varGamma_m$ and $D^{\prime}$ on $\varGamma^{\prime}_{m^{\prime}}$, ${\rm deg}((\varphi_{m,\, m^{\prime}})_{\ast}(D)) = {\rm deg}(D)$ and ${\rm deg}((\varphi_{m,\, m^{\prime}})^{\ast}(D^{\prime})) = {\rm deg}^{m,\, m^{\prime}}(\varphi_{m,\, m^{\prime}}) \cdot {\rm deg}(D^{\prime})$ hold. \end{prop} \begin{proof} The first equation obviously holds. Let $x^{\prime}$ be a point on $\varGamma^{\prime}_{m^{\prime}}$.
Since $\sum_{x \mapsto x^{\prime}}((\varphi_{m,\, m^{\prime}})^{\ast}(D^{\prime}))(x) = \sum_{x \mapsto x^{\prime}}{\rm deg}_x^{m,\, m^{\prime}}(\varphi_{m,\, m^{\prime}}) \cdot D^{\prime}(x^{\prime}) = {\rm deg}^{m,\, m^{\prime}}(\varphi_{m,\, m^{\prime}}) \cdot D^{\prime}(x^{\prime})$, we have the second equation. \end{proof} \begin{dfn} \upshape{ Let $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ be a finite harmonic morphism of metric graphs with edge-multiplicities. For a rational function $f$ on $\varGamma_m$ other than $- \infty$, we define the divisor \[ {\rm div}^{m,\, m^{\prime}}(f) := \sum_{x \in \varGamma_m} \left( \sum_{x \in e \in E(G)} \frac{m^{\prime}(\varphi_{m,\, m^{\prime}}(e))}{m(e)} \cdot \left(\text{the outgoing slope of } f \text{ on } e \text{ at } x \right) \cdot x \right) \] and call it the {\it principal divisor} with edge-multiplicities $m$ and $m^{\prime}$ defined by $f$. } \end{dfn} \begin{prop} For any rational functions $f$ on $\varGamma_m$ and $f^{\prime}$ on $\varGamma^{\prime}_{m^{\prime}}$ both other than $- \infty$, $(\varphi_{m,\, m^{\prime}})_{\ast}({\rm div}^{m,\, m^{\prime}} f ) = {\rm div}((\varphi_{m,\, m^{\prime}})_{\ast}(f))$ and $(\varphi_{m,\, m^{\prime}})^{\ast}({\rm div}(f^{\prime})) = {\rm div}^{m,\, m^{\prime}}((\varphi_{m,\, m^{\prime}})^{\ast}f^{\prime})$ hold. \end{prop} \begin{proof} Let us write $\varphi_{m,\, m^{\prime}}$ simply as $\varphi$. We may break $\varGamma_m$ and $\varGamma^{\prime}_{m^{\prime}}$ into sets $S$ and $S^{\prime}$ of segments along which $f$ and $\varphi_{\ast}f$, respectively, are linear and such that each segment $s \in S$ is mapped linearly to some $s^{\prime} \in S^{\prime}$.
Then at any point $x^{\prime}$ on $\varGamma^{\prime}_{m^{\prime}}$, we have \begin{eqnarray*} \varphi_{\ast}({\rm div}^{m,\, m^{\prime}} (f) ) (x^{\prime}) = \sum_{\substack{ x \in \varGamma_m \\ x \mapsto x^{\prime}}} {\rm div}^{m,\, m^{\prime}}(f) (x) = \sum_{x \in \varphi^{-1}(x^{\prime})} \sum_{s=xy \in S} \frac{m^{\prime}(\varphi(s))}{m(s)} \cdot \frac{f(y) - f(x)}{l(s)} \end{eqnarray*} and \begin{eqnarray*} &&{\rm div}(\varphi_{\ast} f)(x^{\prime})\\ &=& \sum_{s^{\prime} = x^{\prime} y^{\prime} \in S^{\prime}} \frac{(\varphi_{\ast} f) (y^{\prime}) - (\varphi_{\ast} f) (x^{\prime})} {l^{\prime}(s^{\prime})}\\ &=& \sum_{s^{\prime} = x^{\prime} y^{\prime} \in S^{\prime}} \left\{ \sum_{\substack{ y \in \varGamma_m \\ \varphi(y) = y^{\prime}}} \left( \sum_{ y \in s \mapsto s^{\prime}} \frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{l^{\prime}(s^{\prime})}{l(s)} \right) f(y) - \sum_{\substack{ x \in \varGamma_m \\ \varphi(x) = x^{\prime}}} \left( \sum_{ x \in s \mapsto s^{\prime}} \frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{l^{\prime}(s^{\prime})}{l(s)} \right) f(x) \right\} \cdot \frac{1}{ l^{\prime}(s^{\prime})}\\ &=& \sum_{s^{\prime} = x^{\prime} y^{\prime} \in S^{\prime}} \left\{ \sum_{\substack{ y \in \varGamma_m \\ \varphi(y) = y^{\prime}}} \left( \sum_{ y \in s \mapsto s^{\prime}} \frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{1}{l(s)} \right) f(y) - \sum_{\substack{ x \in \varGamma_m \\ \varphi(x) = x^{\prime}}} \left( \sum_{ x \in s \mapsto s^{\prime}} \frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{1}{l(s)} \right) f(x) \right\}\\ &=& \sum_{s^{\prime} = x^{\prime} y^{\prime} \in S^{\prime}} \left\{ \sum_{\substack{ s = xy \in S \\ \varphi(s) = s^{\prime}}} \left( \frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{f(y)}{l(s)} - \frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{f(x)}{l(s)} \right) \right\}\\ &=& \sum_{x \in \varphi^{-1}(x^{\prime})} \sum_{s=xy \in S} \frac{m^{\prime}(\varphi(s))}{m(s)} \cdot \frac{f(y) - f(x)}{l(s)}.
\end{eqnarray*} Let us assume that $\varGamma_m$ and $\varGamma^{\prime}_{m^{\prime}}$ are broken into sets $S$ and $S^{\prime}$ of segments along which $\varphi^{\ast}f^{\prime}$ and $f^{\prime}$, respectively, are linear, satisfying the same conditions as above. Then for any point $x$ on $\varGamma_m$, we have \begin{eqnarray*} \left( \varphi^{\ast}({\rm div}(f^{\prime})) \right)(x) &=& {\rm deg}^{m,\, m^{\prime}}_x (\varphi_{m,\, m^{\prime}}) \cdot ({\rm div}(f^{\prime})(\varphi(x)))\\ &=& {\rm deg}^{m,\, m^{\prime}}_x (\varphi_{m,\, m^{\prime}}) \cdot \left( \sum_{s^{\prime} = \varphi(x) y^{\prime} \in S^{\prime}} \frac{f^{\prime}(y^{\prime}) - f^{\prime}(\varphi(x))} {l^{\prime}(s^{\prime})} \right)\\ &=& \sum_{s^{\prime} = \varphi(x) y^{\prime} \in S^{\prime}} {\rm deg}^{m,\, m^{\prime}}_x (\varphi_{m,\, m^{\prime}}) \cdot \frac{f^{\prime}(y^{\prime}) - f^{\prime}(\varphi(x))} {l^{\prime}(s^{\prime})}\\ &=& \sum_{s^{\prime} = \varphi(x) y^{\prime} \in S^{\prime}} \sum_{s = xy \mapsto s^{\prime}}\frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{l^{\prime}(s^{\prime})}{l(s)} \cdot \frac{f^{\prime}(y^{\prime}) - f^{\prime}(\varphi(x))} {l^{\prime}(s^{\prime})}\\ &=& \sum_{s^{\prime} = \varphi(x) y^{\prime} \in S^{\prime}} \sum_{s = xy \mapsto s^{\prime}}\frac{m^{\prime}(s^{\prime})}{m(s)} \cdot \frac{f^{\prime}(y^{\prime}) - f^{\prime}(\varphi(x))} {l(s)}\\ &=& \sum_{s = xy \in S}\frac{m^{\prime}(\varphi(s))}{m(s)} \cdot \frac{f^{\prime}(\varphi(y)) - f^{\prime}(\varphi(x))} {l(s)}\\ &=& \sum_{s = xy \in S}\frac{m^{\prime}(\varphi(s))}{m(s)} \cdot \frac{(\varphi^{\ast}f^{\prime})(y) - (\varphi^{\ast}f^{\prime})(x)} {l(s)}\\ &=& ({\rm div}^{m,\, m^{\prime}}(\varphi^{\ast}(f^{\prime})))(x). \qedhere \end{eqnarray*} \end{proof} \begin{dfn} \upshape{ Let $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ be a map between metric graphs with edge-multiplicities $m$ and $m^{\prime}$ and let $K$ be a finite group.
$\varphi_{m,\, m^{\prime}}$ is a {\it $K$-Galois covering} on $\varGamma^{\prime}_{m^{\prime}}$ if $\varphi_{m,\, m^{\prime}}$ is a finite harmonic morphism of metric graphs with edge-multiplicities, $|K| = {\rm deg}^{m,\, m^{\prime}}(\varphi_{m,\, m^{\prime}})$ and $K$ acts on transitively on every fibre and $K$ keeps edge-multiplicities. } \end{dfn} \begin{rem} \upshape{ If $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ is $K$-Galois, then there exists a group homomorphism $K \rightarrow {\rm Isom}_{(G, l, m)}$ for a model $(G, l)$ of $\varGamma_m$. } \end{rem} \begin{comment}2019.1.26 \section{Metric graphs with edge-multiplicities} In Section $5$, we prove that the induced rational map by $|D|^K$ which is $K$-injective is a finite harmonic morphism (and then a $K$-Galois covering) of metric graphs with an edge-multiplicity. We define in this section metric graphs with edge-multiplicities and harmonic morphisms between them. Compare Subsections $2.2$ and $2.4$. Note that all of them are original definitions of the author and we may need more improvements. \subsection{Metric graphs with edge-multiplicities} \begin{dfn} {\upshape Let $\varGamma$ be a metric graph, and $(G, l)$ a model of $\varGamma$. We call a function $m : E(G) \rightarrow \boldsymbol{Z}_{>0}$ an {\it edge-multiplicity function} on $G$. ${\bold 1}$ is the edge-multiplicity function assigning multiplicity one to all edges and called a {\it trivial} edge-multiplicity function. Two triplets $(G, l, m)$ and $(G^{\prime}, l^{\prime}, m^{\prime})$ are said to be {\it isomorphic} if there exists an isomorphism between $G$ and $G^{\prime}$ keeping the length and the multiplicity of each edge. We define ${\rm Isom}_{(G, l)}(\varGamma)$ as the subset of the isometry transformation group ${\rm Isom}(\varGamma)$ of $\varGamma$ whose element keeps the length of each edge of $G$. 
We set ${\rm Isom}_{(G, l, m)}$ as the subset of ${\rm Isom}_{(G, l)}(\varGamma)$ whose each element keeps the multiplicity of each edge of $G$. } \end{dfn} \begin{dfn}[Subdivision of models] {\upshape Let $\varGamma$ be a metric graph, and $(G, l)$, $(G^{\prime}, l^{\prime})$ models of $\varGamma$. $(G, l)$ is said to be a {\it subdivision} of $(G^{\prime}, l^{\prime})$ and written as $(G, l) \succ (G^{\prime}, l^{\prime})$ if $V(G^{\prime})$ is a subset of $V(G)$. } \end{dfn} \begin{dfn} {\upshape Let $\varGamma$ be a metric graph, and $(G, l) \succ (G^{\prime}, l^{\prime})$ models of $\varGamma$. A triplet $(G, l, m)$ is said to be a {\it subdivision} of a triplet $(G^{\prime}, l^{\prime}, m^{\prime})$ and written as $(G, l, m) \succ (G^{\prime}, l^{\prime}, m^{\prime})$ if for any $e^{\prime} \in E(G^{\prime})$ and $e_i \in E(G)$ such that $e^{\prime} = e_1 \sqcup \cdots \sqcup e_n$, $m^{\prime}(e^{\prime})$ divides all $m(e_i)$. In particular, if $m^{\prime}(e^{\prime})$ and all $m(e_i)$ equals, then $(G^{\prime}, l^{\prime}, m^{\prime})$ is said to be a {\it trivial} subdivision of $(G, l, m)$ and then $(G^{\prime}, l^{\prime}, m^{\prime})$ is denoted by $(G^{\prime}, l^{\prime}, m)$. } \end{dfn} \begin{dfn} \upshape{ For a quadruplet $(\varGamma, G, l, m)$, the {\it metric graph with an edge-multiplicity}, denoted by $\varGamma_{m}$, is defined by the pair of metric graph $\varGamma$ and $m$ such that we can choose only models $(G^{\prime}, l^{\prime}) \prec (G, l)$ of $\varGamma$. The word ``a {\it point} $x$ on $\varGamma_{m}$'' means that $x \in \varGamma$. The {\it genus} of $\varGamma_{m}$ is the genus of $\varGamma$. } \end{dfn} For a metric graph with an edge-multiplicity, we use same terms and notations for the underlying metric graph. 
\subsection{Harmonic morphisms with edge-multiplicities} \begin{dfn} \upshape{ Let $\varGamma_m, \varGamma^{\prime}_{m^{\prime}}$ be metric graphs with edge-multiplicities $m, m^{\prime}$, respectively, and $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ be a continuous map. The map $\varphi_{m,\, m^{\prime}}$ is called a {\it morphism} if $\varphi_{m,\, m^{\prime}}$ is a morphism as loopless models $(G, l)$ and $(G^{\prime}, l^{\prime})$. For an edge $e$ of $G$, if $\varphi_{m,\, m^{\prime}}(e)$ is a vertex of $G^{\prime}$, let $m^{\prime}(\varphi_{m,\, m^{\prime}}(e)) := 0$ formally. The morphism $\varphi_{m,\, m^{\prime}}$ is said to be {\it finite} if $\varphi_{m,\, m^{\prime}}$ is finite as a morphism of loopless models. } \end{dfn} \begin{dfn} \upshape{ Let $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ be a morphism of metric graphs with edge-multiplicities. Let $\varGamma^{\prime}_{m^{\prime}}$ be not a singleton and $x$ a point on $\varGamma_m$. The morphism $\varphi_{m,\, m^{\prime}}$ is {\it harmonic at} $x$ if the number \[ {\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}}) := \sum_{x \in h \mapsto h^{\prime}} \frac{m^{\prime}(\varphi_{m,\, m^{\prime}}(e))}{m(e)} \cdot {\rm deg}_h(\varphi_{m,\, m^{\prime}}) \] is a nonnegative integer and independent of the choice of half-edge $h^{\prime}$ emanating from $\varphi_{m,\, m^{\prime}}(x)$, where $h$ is a connected component of the inverse image of $h^{\prime}$ by $\varphi_{m,\, m^{\prime}}$ containing $x$ and $e$ is the edge of $G$ containing $h$. The morphism $\varphi_{m,\, m^{\prime}}$ is {\it harmonic} if it is harmonic at all points on $\varGamma_m$. 
For a point $x^{\prime}$ on $\varGamma^{\prime}_{m^{\prime}}$, \[ {\rm deg}^{m,\, m^{\prime}}(\varphi_{m,\, m^{\prime}}) := \sum_{x \mapsto x^{\prime}}{\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}}) \] is said the {\it degree} of $\varphi_{m,\, m^{\prime}}$, where $x$ is an element of the inverse image of $x^{\prime}$ by $\varphi_{m,\, m^{\prime}}$. If $\varGamma^{\prime}_{m^{\prime}}$ is a singleton and $\varGamma_m$ is not a singleton, for any point $x$ on $\varGamma_m$, we define ${\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}})$ as zero so that we regard $\varphi_{m,\, m^{\prime}}$ as a harmonic morphism of degree zero. If both $\varGamma_m$ and $\varGamma^{\prime}_{m^{\prime}}$ are singletons, we regard $\varphi_{m,\, m^{\prime}}$ as a harmonic morphism which can have any number of degree. } \end{dfn} \begin{lemma} $\sum_{x \mapsto x^{\prime}}{\rm deg}^{m,\, m^{\prime}}_x(\varphi_{m,\, m^{\prime}})$is independent of the choice of a point $x^{\prime}$ on $\varGamma^{\prime}$. \end{lemma} \begin{proof} It is sufficient to check that for any vertex of $G^{\prime}$, the sum is same. Let $x_1^{\prime}$ and $x_2^{\prime}$ be vertices of $G^{\prime}$ both adjacent to an edge $e^{\prime}$ of $G^{\prime}$. Let $h_1^{\prime}$ be the half-edge of $x_1^{\prime}$ contained in $e^{\prime}$. Then \begin{eqnarray*} \sum_{x_1 \mapsto x_1^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{x_1}(\varphi_{m,\, m^{\prime}}) &=& \sum_{x_1 \mapsto x_1^{\prime}} \left( \sum_{x_1 \in h_1 \mapsto h_1^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{h_1}(\varphi_{m,\, m^{\prime}}) \right)\\ &=& \sum_{x_1 \mapsto x_1^{\prime}} \left( \sum_{x_1 \in e_1 \mapsto e^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{e_1}(\varphi_{m,\, m^{\prime}}) \right)\\ &=& \sum_{e \mapsto e^{\prime}}{\rm deg}^{m,\, m^{\prime}}_e(\varphi_{m,\, m^{\prime}}). 
\end{eqnarray*} Similarly, \[ \sum_{x_2 \mapsto x_2^{\prime}}{\rm deg}^{m,\, m^{\prime}}_{x_2}(\varphi_{m,\, m^{\prime}}) = \sum_{e \mapsto e^{\prime}}{\rm deg}^{m,\, m^{\prime}}_e(\varphi_{m,\, m^{\prime}}).\qedhere \] \end{proof} The collection of metric graphs with edge-multiplicities together with harmonic morphisms between them forms a category. \begin{dfn} \upshape{ Let $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ be a map between metric graphs with edge-multiplicities $m$ and $m^{\prime}$, and let $K$ be a finite group. The map $\varphi_{m,\, m^{\prime}}$ is a {\it $K$-Galois covering} of $\varGamma^{\prime}_{m^{\prime}}$ if $\varphi_{m,\, m^{\prime}}$ is a finite harmonic morphism of metric graphs with edge-multiplicities, $|K| = {\rm deg}^{m,\, m^{\prime}}(\varphi_{m,\, m^{\prime}})$, $K$ acts transitively on every fibre, and $K$ preserves edge-multiplicities. } \end{dfn} \begin{rem} \upshape{ If $\varphi_{m,\, m^{\prime}} : \varGamma_m \rightarrow \varGamma^{\prime}_{m^{\prime}}$ is $K$-Galois, then there exists a group homomorphism $K \rightarrow {\rm Isom}_{(G, l, m)}$ for a model $(G, l)$ of $\varGamma_m$. } \end{rem} \begin{prb} \upshape{ For metric graphs with edge-multiplicities, give good definitions of rational functions, divisors, their push-forwards and pull-backs by a (finite) harmonic morphism, and other concepts associated to them. } \end{prb} \end{comment} \end{document}
Amount of energy of the Big Bang What is the currently accepted estimated range of the amount of energy of the Big Bang event? In joules at some estimated size, so a temperature may be calculated. For context, I wonder if the temperature of the Big Bang would make its heat radiation wavelength shorter than the Planck length. Sir Cumference Let's start by making some points clear: 1. We don't know what the Big Bang was. Rather, we know that the Universe is expanding. If you extrapolate backwards, you'd expect the Universe to be denser and denser. More specifically, we talk about this as a change in the scale factor $a$, and this gets smaller and smaller as we look further back in time. According to general relativity (our modern theory of gravity), 13.8 billion years ago $a$ should have been $0$; however, you can't have a metric with $a = 0$. Thus, we know that general relativity is necessarily incomplete. It breaks down at the conditions of the early universe, so we currently have no physical model to explain that time. Rather, we know that the early universe expanded, and the Big Bang is the time that perplexes cosmologists. Some theories, like quantum gravity, have emerged in an effort to explain the Big Bang; however, we currently have little understanding of what it actually was. So no, we can't tell you what the energy output of the event was, since we don't know what actually happened. 2. The temperature of the early Universe was high. Our theories break down at the Planck epoch of the Universe. The Planck epoch was the earliest epoch of the Universe and lasted until about $10^{-42}$ seconds after the Big Bang, roughly 20 Planck times (the Planck time is the shortest meaningful unit of time). During this epoch, the entire Universe was at $1.417×10^{32} \; \mathrm{K}$, which is the Planck temperature. 
This is the hottest possible temperature; an object at this temperature will emit photons with wavelengths of a Planck length (you can read more about this in my answer here). The point is that there is no meaningful distance smaller than a Planck length, so the Universe couldn't be hotter than the Planck temperature. John Duffield 5 years ago +1 for answering the question, but don't forget that theories break down, and they are only theories. We don't actually know that the temperature of the very early universe was high. Sir Cumference 5 years ago @JohnDuffield The WMAP observations provide strong evidence that the temperature of the early universe was high, ruling out a cold Big Bang. See Komatsu et al. (2010). I'm sure we'd all agree that the temperature of the early universe was high. But I said _don't forget that theories break down_ and said the _very_ early universe. In fact, I'll go so far as to say this: IMHO the temperature before the big bang was absolute zero. @JohnDuffield Well, we don't really know what the Big Bang was, so I'm a bit confused as to how you derived that. It's to do with what Hawking said: the universe is like a black hole in reverse. PS: I'm not a fan of Hawking, or of Hawking radiation. But I think he was broadly correct when he said the universe is like a black hole in reverse. IMHO pulling away from a black hole through space is something like the universe expanding over time. uhoh 3 years ago Even if 1. and 2. were known, does not knowing *how big* the universe is (total mass or energy) add a third and independent uncertainty? @uhoh Well, that's if we simply equate "energy of the BB" with "energy of the universe"; the former isn't really well defined. As for the latter, you're right that not knowing the universe would prevent us from finding the total energy. 
An infinitely large universe would have infinite energy (assuming the cosmological principle), whereas determining the energy density of a closed universe goes hand in hand with determining its radius of curvature, allowing us to get the total energy. @SirCumference total size of the universe would be *something like* the mass of the matter plus the energy, though I am sure there are better ways to say that. I did not suggest you would equate as you've stated, but I think it's reasonable to expect a larger universe to have a larger energy big bang, no? I still think you need a 3rd uncertainty item for the size of the universe, of which we have no upper limit. @uhoh Well, "Big Bang" more or less refers to a point in time rather than an event, specifically where we get gravitational and temperature singularities. Now that I think about it, I'd expect the energy of the universe would also approach infinity as we get closer to this time. Whereas cosmological redshift decreases the energy of photons, if we went back in time we'd see the energy being higher than today. At the Big Bang (when the scale factor goes to zero), the wavelength of a photon would go to zero as well, so the energy would be infinite. Still, wouldn't a larger universe be a "larger infinite" energy? I can never remember, but are there some flavors of infinity that can be larger than others? @uhoh If you say "infinity" in math, it has (roughly) two distinct meanings based on the context. In set theory, it refers to a quantity of objects, specifically one that's at least as big as the number of integers (there are bigger infinities in that context, which you mentioned). In analysis on the other hand, we mainly just use it to refer to a limit (a sort of unrigorous way of saying the limit diverges). Here we're just considering how energy rises as we consider more and more matter in our universe, so when saying "infinite energy" I mean energy diverges if we make our universe infinite. 
@SirCumference okay well I still think that a 3rd source of uncertainty related to the unknown size of the universe is needed, but that's just me. @uhoh If we're talking about total energy of the universe, then yes. But the energy density of our universe *at the Big Bang* rises to infinity for any size of the universe. big-bang-theory high-energy-astrophysics
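For what it's worth, the back-of-the-envelope version of the original question can be done in a few lines, if one assumes Wien's displacement law can be extrapolated all the way to the Planck temperature (which, as discussed above, our theories cannot guarantee):

```python
# Wien's displacement law: lambda_peak = b / T.
# Extrapolating it to the Planck temperature is an assumption, not
# established physics; the constants below are CODATA-style values.
b        = 2.897771955e-3   # Wien's displacement constant, m*K
T_planck = 1.416784e32      # Planck temperature, K
l_planck = 1.616255e-35     # Planck length, m

lambda_peak = b / T_planck          # ~2.05e-35 m
ratio = lambda_peak / l_planck      # ~1.27
# The peak thermal wavelength at the Planck temperature sits right at
# the Planck length, not far below it.
```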
\begin{document} \baselineskip=0.5cm \title{Quantum coherence and correlations in quantum system} \author{Zhengjun Xi$^{1,\star}$, Yongming Li$^{1}$, Heng Fan$^{2,3}$} \maketitle \begin{affiliations} \item College of Computer Science, Shaanxi Normal University, Xi'an, 710062, P. R. China \item Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing, 100190, P. R. China. \item Collaborative Innovation Center of Quantum Matter, Beijing, P. R. China $^\star$e-mail:[email protected] \end{affiliations} \begin{abstract} \baselineskip=0.5cm Criteria for measures quantifying quantum coherence, a unique property of quantum systems, have recently been proposed. In this paper, we first give an uncertainty-like expression relating the coherence and the entropy of a quantum system. This finding allows us to discuss the relations between entanglement and coherence. Further, we discuss in detail the relations among the coherence, the discord and the deficit in the bipartite quantum system. We show that the one-way quantum deficit is equal to the sum of the quantum discord and the relative entropy of coherence of the measured subsystem. \end{abstract} Quantum coherence arising from quantum superposition plays a central role in quantum mechanics. Quantum coherence is a common necessary condition for both entanglement and other types of quantum correlations, and it is also an important physical resource in quantum computation and quantum information processing. Recently, a rigorous framework to quantify coherence has been proposed\cite{Baumgratz13} (see also early work\cite{Aberg06}). Within such a framework, one can define suitable measures of coherence, including the relative entropy and the $l_1$-norm of coherence\cite{Baumgratz13}, and a measure based on the Wigner-Yanase-Dyson skew information\cite{Girolami14}. 
Quantum coherence has received a lot of attention\cite{Angelo13, Marvian13,Rosario13,Levi14,Marvian14,Lostaglio14,Hai14,Monras14,Karpat14, Aberg14,Xi14}. We know that both quantum coherence and entanglement arise from quantum superposition, but we are not sure of the exact relation between them: is there a quantitative relation between the two? On the other hand, it is well known that entanglement does not account for all nonclassical correlations (or quantum correlations), and that even the correlations of separable states are not completely classical. Quantum discord \cite{Ollivier02, Henderson01} and quantum deficit \cite{Oppenheim02} have been viewed as two possible quantifiers of quantum correlations. There has been much interest in characterizing them and interpreting their applications in quantum information processing\cite{horodeckirmp,Modi12,Chuan12,Streltsov12,Amico-RMP,Cramer-RMP,Cui-Gu,Haldane-PRL,FanLiuPRL, FanWangPRL,FanCuiPRX, Zwolak12,Pirandola14}. In particular, Horodecki \emph{et al.} \cite{Horodecki05} discussed the relationship between the discord and the quantum deficit in the bipartite quantum system. They showed that if only one-way classical communication from one party to the other is allowed, the one-way quantum deficit is an upper bound on the quantum discord defined via local von Neumann measurements on that party. Curiously, up to now, no framework translating between the two has been reported. In other words, is there a clearer quantitative relation between them? In the present work, we resolve the above questions via quantum coherence. We focus only on the entropic form, also called the relative entropy of coherence, which has a physical interpretation and is easily computable\cite{Baumgratz13}. 
Firstly, we derive an uncertainty-like expression which states that the sum of the coherence and the entropy of a quantum system is bounded above by $\log_2 d$, where $d$ is the dimension of the quantum system. As an application, we discuss the relations between entanglement and coherence. Meanwhile, we find that the relative entropy of coherence is super-additive. In the bipartite quantum system, based on the projective measurement onto the basis in which the relative entropy of coherence is quantified, we show that the entropy increase produced by the local projective measurement is equal to the sum of the quantum correlation destroyed by this measurement and the relative entropy of coherence of the measured subsystem. Incoherent states with respect to two different bases are unitarily equivalent, and a given quantum state has the same matrix elements with respect to correspondingly rotated bases. These two facts are the reason we study in detail the explicit expressions of the discord and the deficit in terms of the relative entropy of coherence in the bipartite quantum system. In this way, we can give a clear quantitative relation between the discord and the deficit. \noindent{\large\bf Results} \noindent {\bf Measure of quantum coherence.} Consider a finite-dimensional Hilbert space $\mathcal{H}$ with $d=dim({\mathcal{H}})$. Fixing a basis $\{|i\rangle\}_{i=1}^{d}$ and following the suggestion of Baumgratz \emph{et al.}\cite{Baumgratz13}, let $\mathcal{I}$ be the set of incoherent states, which are of the form \begin{equation} \sigma=\sum_{i=1}^d\sigma_i|i\rangle\langle i|, \end{equation} where $\sigma_i\in[0,1]$ and $\sum_i\sigma_i=1$. Baumgratz \emph{et al.} proposed that any proper measure of coherence $C$ must satisfy the following conditions: \begin{itemize} \item [{(C1)}] $C(\rho)\geq 0$ for all quantum states $\rho$, and $C(\rho)=0$ if and only if $\rho\in \mathcal{I}$. 
\item [{(C2a)}] Monotonicity under incoherent completely positive and trace preserving maps (ICPTP) $\Phi$, i.e., $C(\rho) \geq C(\Phi(\rho))$. \item [{(C2b)}] Monotonicity for the average coherence under subselection based on measurement outcomes: $C(\rho)\geq \sum_n p_n C(\rho_n) $, where $\rho_n=K_n\rho K_n^\dag/p_n$ and $p_n=\mathrm{Tr}(K_n \rho K_n^\dag)$ for all $\{K_n\}$ with $\sum_n K_n^{\dagger}K_n= I$ and $K_n \mathcal{I} K_n^\dagger\subseteq\mathcal{I}$. \item [{(C3)}] Non-increasing under mixing of quantum states (convexity), i.e., $\sum_ip_i C(\rho_i)\geq C(\sum_ip_i\rho_i),$ for any ensemble $\{p_i,\rho_i\}$. \end{itemize} Note that conditions (C2b) and (C3) automatically imply condition (C2a). Condition (C2b) is important as it allows for subselection based on measurement outcomes, a process available in well controlled quantum experiments\cite{Baumgratz13}. It has been shown that the relative entropy and the $l_1$-norm satisfy all the conditions. However, the measure of coherence induced by the squared Hilbert-Schmidt norm satisfies conditions (C1), (C2a), (C3), but not (C2b). Recently, we also found that the measure of coherence induced by the fidelity does not satisfy condition (C2b); an explicit example is presented in Ref.~\cite{Xi14}. In the following, we consider only the relative entropy of coherence. For any quantum state $\rho$ on the Hilbert space $\mathcal{H}$, the relative entropy of coherence\cite{Baumgratz13} is defined as \begin{equation} C_{\mathrm{RE}}(\rho):=\min_{\sigma\in \mathcal{I}}S(\rho||\sigma), \end{equation} where $S(\rho||\sigma)=\mathrm{Tr}(\rho\log_2\rho-\rho\log_2\sigma)$ is the relative entropy\cite{Nielsen}. Using the properties of the relative entropy\cite{Vedralrmp02}, it is quite easy to check that this measure satisfies the above conditions. In particular, there is a closed-form expression that makes it easy to evaluate analytically\cite{Baumgratz13}. 
For the Hilbert space $\mathcal{H}$ with the fixed basis $\{|i\rangle\}_{i=1}^{d}$, we write \begin{equation}\label{eq:rho} \rho=\sum_{i,i^\prime}\rho_{i,i^\prime}|i\rangle\langle i^\prime| \end{equation} and denote $\rho_{\mathrm{diag}}=\sum_i\rho_{ii}|i\rangle\langle i|$. By using the properties of the relative entropy, it is easy to obtain \begin{equation}\label{eq:rec1} C_{\mathrm{RE}}(\rho)=S(\rho_{\mathrm{diag}})-S(\rho), \end{equation} where $S(\rho)=-\mathrm{Tr}\rho\log_2\rho$ is the von Neumann entropy\cite{Nielsen}. We remark that the incoherent state $\rho_{\mathrm{diag}}$ is generated by removing all the off-diagonal elements and keeping the diagonal elements of the density operator $\rho$~(\ref{eq:rho}). This operation is called complete decoherence, or the completely dephasing channel\cite{Horodecki05}; we denote it by \begin{equation}\label{incoherent operation} \rho_{\mathrm{diag}}=\Pi(\rho)=\sum_{i=1}^d\Pi_i\rho \Pi_i, \end{equation} where $\Pi_i=|i\rangle\langle i|$ are one-dimensional projectors and $\sum_i\Pi_i=I_{\mathcal{H}}$, with $I_{\mathcal{H}}$ the identity operator on the Hilbert space $\mathcal{H}$. Thus, the coherence contained in a quantum state is equal to the entropy increase caused by complete decoherence. In addition, some basic properties have been given in Ref.~\cite{Baumgratz13}. For example, we have \begin{equation}\label{rec_bound_1} C_{\mathrm{RE}}(\rho)\leq S(\rho_{\mathrm{diag}})\leq\log_2 d. \end{equation} Note that $C_{\mathrm{RE}}(\rho)=S(\rho_{\mathrm{diag}})$ if and only if the quantum state $\rho$ is a pure state. In particular, if there exist pure states such that $C_{\mathrm{RE}}(\rho)=\log_2 d$, these pure states are called maximally coherent states. Baumgratz \emph{et al.} have defined a maximally coherent state \cite{Baumgratz13}, which takes the form \begin{equation} |\psi\rangle=\frac{1}{\sqrt{d}}\sum_{i=1}^d|i\rangle. 
\end{equation} \noindent {\bf Uncertainty-like relation between the coherence and entanglement.} Interestingly, if one combines Eq.~(\ref{eq:rec1}) with Eq.~(\ref{rec_bound_1}), one obtains a new tight bound on the relative entropy of coherence, \begin{equation} C_{\mathrm{RE}}(\rho)\leq I(\rho), \end{equation} where $I(\rho):=\log_2d-S(\rho)$ is the information function, which has an operational meaning: it is the number of pure qubits one can draw from many copies of the state $\rho$~\cite{Horodecki05}. By a straightforward algebraic calculation, we obtain an interesting ``uncertainty relation" between the coherence and the entropy of a quantum system, namely, \begin{equation}\label{uncertainty_relation} C_{\mathrm{RE}}(\rho)+S(\rho)\leq \log_2d. \end{equation} This shows that the sum of the entropy and the coherence of a quantum system never exceeds a fixed value: the larger $S(\rho)$, the smaller $C_{\mathrm{RE}}(\rho)$. In particular, when $\rho$ is the maximally mixed state, no coherence exists in the quantum system. Conversely, the larger $C_{\mathrm{RE}}(\rho)$, the smaller $S(\rho)$. Thus, if the quantum system is entangled with the outside world, the coherence of the system may decay. Next, we discuss the relations between the coherence and entanglement in the bipartite quantum system. Consider the bipartite quantum system on a composite Hilbert space $\mathcal{H}^{AB}=\mathcal{H}^A\otimes\mathcal{H}^B$; without loss of generality, we henceforth take $d=d_A=d_B$, where $d_A$ and $d_B$ are the dimensions of the quantum systems $A$ and $B$, shared between two parties, Alice and Bob, respectively. Let $\{|i\rangle^A\}_{i=1}^{d}$ and $\{|j\rangle^B\}_{j=1}^{d}$ be the orthogonal bases of the Hilbert spaces $\mathcal{H}^A$ and $\mathcal{H}^B$, respectively. 
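The closed form~(\ref{eq:rec1}) and the uncertainty-like bound~(\ref{uncertainty_relation}) are easy to check numerically. The following is a minimal NumPy sketch (the helper names \verb|entropy| and \verb|coherence_re| are ours, not from the literature), using base-2 logarithms throughout:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log2 rho), from eigenvalues."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]                      # convention: 0 log 0 = 0
    return float(-(ev * np.log2(ev)).sum())

def coherence_re(rho):
    """Relative entropy of coherence: C(rho) = S(rho_diag) - S(rho)."""
    return entropy(np.diag(np.diag(rho).real)) - entropy(rho)

d = 4
# maximally coherent state |psi> = (1/sqrt(d)) sum_i |i>
psi = np.full(d, 1 / np.sqrt(d))
rho_max = np.outer(psi, psi.conj())

# random full-rank density matrix
rng = np.random.default_rng(0)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho).real

c_max = coherence_re(rho_max)               # equals log2 d = 2
bound = coherence_re(rho) + entropy(rho)    # uncertainty-like sum, <= log2 d
```

For the maximally coherent state the coherence saturates the bound at $\log_2 d$, while for any incoherent (diagonal) state it vanishes.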
Assume that a maximally coherent state of the bipartite quantum system is of the form \begin{equation}\label{eq:mcs_bi} |\psi\rangle^{AB}=\frac{1}{d}\sum_{i,j=1}^d|i\rangle^A|j\rangle^B. \end{equation} It is easy to verify that this state is a product state, i.e., \begin{equation}\label{eq:mcs_product} |\psi\rangle^{AB}=\frac{1}{\sqrt{d}}\sum_{i=1}^d|i\rangle^A\otimes \frac{1}{\sqrt{d}}\sum_{i=1}^d|i\rangle^B. \end{equation} However, this is not the only possibility: there is also a class of maximally coherent states which are at the same time maximally entangled. Thus a maximally coherent state may be maximally entangled, or may be a product state. This is because the measure of coherence depends on the choice of basis, whereas entanglement does not. This also implies that two states can both be maximally coherent while their reduced states are entirely different. For a maximally entangled state that is also maximally coherent, the reduced states are completely mixed and hence possess no coherence. We give an example to illustrate these results as follows. \noindent {\bf Example 1} Consider the two-qubit system with the basis $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$; the relative entropy of coherence depends on this basis. Suppose that \begin{equation}\label{eq:mes_max_1} |\psi_{1}\rangle:=\frac{1}{2}(|00\rangle+|01\rangle-|10\rangle+|11\rangle). \end{equation} Obviously, we have $C_{\mathrm{RE}}(|\psi_{1}\rangle)=2$. At the same time, this state can be rewritten as \begin{equation}\label{eq:mes_max_2} |\psi_{1}^\prime\rangle=\frac{1}{\sqrt{2}}(|0\rangle|+\rangle-|1\rangle|-\rangle), \end{equation} where $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and $|-\rangle=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)$ are the maximally coherent states of a one-qubit system. 
From entanglement theory, we easily see that the state~(\ref{eq:mes_max_2}) is also a maximally entangled state. In addition, it is generally known that the Bell states are maximally entangled states; one of them is \begin{equation}\label{eq:mes_bi} |\psi_{2}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle\right). \end{equation} Obviously, it is not a maximally coherent state. We can easily give another maximally coherent state, \begin{equation}\label{eq:mcs_2qubit_1} |\psi_{3}\rangle=\frac{1}{2}(|00\rangle+|01\rangle+|10\rangle+|11\rangle). \end{equation} This state is a product state, which is of the form \begin{equation}\label{eq:mcs_2qubit_2} |\psi_{3}^\prime\rangle=|+\rangle\otimes |+\rangle. \end{equation} Let $|\phi\rangle^{AB}=\sum_i\lambda_i|i\rangle^A|i\rangle^B$ be a bipartite entangled state (its Schmidt number is strictly greater than one) with respect to the basis in which the coherence is quantified. Then the entanglement and the coherence are both equal to the entropy of the subsystem: \begin{equation} E(|\phi\rangle^{AB})=C_{\mathrm{RE}}(|\phi\rangle^{AB})=S(\rho^A). \end{equation} Here the entanglement measure $E$ is any of the distillable entanglement $E_D$\cite{Bennett96}, the relative entropy of entanglement $E_{\mathrm{RE}}$\cite{Vedral97} and the entanglement of formation $E_F$\cite{Wootters98}. These are bounded above by the entropies of the subsystems and satisfy the inequality\cite{Horodecki00} \begin{equation} E_D(\rho^{AB})\leq E_{\mathrm{RE}}(\rho^{AB})\leq E_F(\rho^{AB})\leq \min\{S(\rho^A), S(\rho^B)\}. \end{equation} Substituting this inequality into the uncertainty relation Eq.~(\ref{uncertainty_relation}), we arrive at the following result. \noindent {\bf Theorem 1} Given a quantum state $\rho^{AB}$ on the Hilbert space $\mathcal{H}^{AB}$, we have \begin{equation} E(\rho^{AB})+C_{\mathrm{RE}}(\rho^A)\leq \log_2 d_A. 
\end{equation} This inequality shows that the larger the coherence of a subsystem, the less entanglement there is between the two subsystems. In other words, if the system $A$ is as entangled as it can possibly be with the system $B$, then its own coherence pays for this entangled behavior. By analogy, if one builds a quantum computer, it has to be well isolated in order to retain its quantum coherence (or quantum properties). On the other hand, if one wants to perform quantum information processing using entanglement as a resource, one expects to use a maximally entangled state; in this case no information can be obtained by local operations, for example in superdense coding and teleportation. At the end of this section, we give further new properties of the relative entropy of coherence. Based on the additivity of the von Neumann entropy, the relative entropy of coherence is additive, \begin{equation} C_{\mathrm{RE}}(\rho^{A}\otimes \rho^{B})=C_{\mathrm{RE}}(\rho^{A})+C_{\mathrm{RE}}(\rho^{B}). \end{equation} Using the properties of the relative entropy, one can also show that the relative entropy of coherence is super-additive. Let $\Pi^A$ and $\Pi^B$ be the completely dephasing operations on the subsystems $A$ and $B$, respectively. We denote $\Pi^{AB}=\Pi^A\otimes\Pi^B$; applying it to the quantum state $\rho^{AB}$, we obtain the classical state $\rho_{\mathrm{diag}}^{AB}=\Pi^{AB}(\rho^{AB})$. Since quantum operations never increase the relative entropy, we have \begin{equation} S(\Pi^{AB}(\rho^{AB})||\Pi^{AB}(\rho^A\otimes \rho^B))\leq S(\rho^{AB}||\rho^A\otimes\rho^B). \end{equation} Thus, we obtain the super-additivity inequality of the relative entropy of coherence, \begin{equation} C_{\mathrm{RE}}(\rho^{AB})\geq C_{\mathrm{RE}}(\rho^{A})+C_{\mathrm{RE}}(\rho^{B}). 
\end{equation} Obviously, for the maximally coherent state~(\ref{eq:mcs_bi}), equality holds. From this relation, we know that the coherence contained in the bipartite quantum system is at least the sum of the coherences of the local subsystems. \noindent {\bf Relations between quantum coherence and quantum correlations.} We know that there are two different measures of quantum correlations arising from different physical backgrounds, i.e., quantum discord and quantum deficit. To better understand our results, let us recall the formal definitions of quantum discord and the one-way quantum deficit. For a bipartite quantum state $\rho^{AB}$, quantum discord is originally defined as the difference of two inequivalent expressions for the mutual information via local von Neumann measurements\cite{Ollivier02}, \begin{equation} \delta^{\rightarrow}(\rho^{AB}):=\min_{\{\Pi_i^A\}}(\mathcal{I}(\rho^{AB})-\mathcal{I}(\sum_i\Pi_i^A\rho^{AB}\Pi_i^A)), \end{equation} where the minimum is taken over all local von Neumann measurements on the subsystem $A$. Here $\mathcal{I}(\rho^{XY})=S(\rho^X)+S(\rho^Y)-S(\rho^{XY})$ is the mutual information\cite{Nielsen}. Quantum deficit is originally defined as the difference between the amounts of work extractable from the global system and from the local subsystems\cite{Oppenheim02}. In this paper, we allow only one-way classical communication from $A$ to $B$, realized by von Neumann measurements performed on the local system $A$; the one-way quantum deficit\cite{Horodecki05} is then defined as \begin{equation} \Delta^{\rightarrow}(\rho^{AB}):=\min_{\{\Pi_i^A\}}S(\rho^{AB}||\sum_i\Pi_i^A\rho^{AB}\Pi_i^A), \end{equation} where the minimum is taken over all local von Neumann measurements on the subsystem $A$. Quantum discord and the one-way quantum deficit are nonnegative, and vanish only on classical-quantum states. 
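As a concrete illustration of these two definitions, the following brute-force sketch (our own code, not from the references) evaluates both quantities for a two-qubit Bell state, scanning over real projective measurement bases on $A$, which suffice for this real-valued state:

```python
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def ptrace(rho, keep):
    """Reduced state of a two-qubit density matrix; keep = 0 (A) or 1 (B)."""
    r = rho.reshape(2, 2, 2, 2)
    return r.trace(axis1=1, axis2=3) if keep == 0 else r.trace(axis1=0, axis2=2)

def mutual_info(rho):
    return entropy(ptrace(rho, 0)) + entropy(ptrace(rho, 1)) - entropy(rho)

def dephase_A(rho, U):
    """Projective measurement on A in the basis given by the columns of U."""
    out = np.zeros_like(rho)
    for k in range(2):
        P = np.kron(np.outer(U[:, k], U[:, k].conj()), np.eye(2))
        out += P @ rho @ P
    return out

v = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)
bell = np.outer(v, v)

discord = deficit = np.inf
for th in np.linspace(0, np.pi / 2, 91):
    U = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    post = dephase_A(bell, U)
    discord = min(discord, mutual_info(bell) - mutual_info(post))
    # pinching identity: S(rho || post) = S(post) - S(rho)
    deficit = min(deficit, entropy(post) - entropy(bell))
# for a Bell state both minima equal 1 bit
```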
Horodecki \emph{et al.} have shown that the one-way quantum deficit is an upper bound on quantum discord\cite{Horodecki05}, namely, \begin{equation} \delta^{\rightarrow}(\rho^{AB})\leqslant\Delta^{\rightarrow}(\rho^{AB}). \end{equation} In the following we will present some differences between them. In general, we can always write $\rho^{AB}=\sum_{i,i^\prime,j,j^\prime}\rho_{i,i^\prime,j,j^\prime}|i\rangle^A\langle i^\prime|\otimes |j\rangle^B\langle j^\prime|$ with the fixed basis $\{|i\rangle^A|j\rangle^B\}_{i,j=1}^{d}$ for the bipartite quantum system, where $\rho^A=\sum_{i,i^\prime}\rho_{i,i^\prime}|i\rangle^A\langle i^\prime|$ and $\rho^B=\sum_{j,j^\prime}\rho_{j,j^\prime}|j\rangle^B\langle j^\prime|$ are the reduced density operators (reduced states) of the two parties. To extract the information contained in the state, Alice can perform the measurement $\Pi$~(\ref{incoherent operation}) on her party; the quantum state $\rho^{AB}$ then becomes \begin{equation} \tilde{\rho}^{AB}=\sum_{i,j,j^\prime}\rho_{i,i,j,j^\prime}|i\rangle^A\langle i|\otimes |j\rangle^B\langle j^\prime|, \end{equation} and the reduced state $\rho^{A}$ becomes $\tilde{\rho}^A=\sum_i\rho_{ii}|i\rangle\langle i|$, while the reduced state $\rho^B$ does not change. This shows that the local measurement removes the coherence (the off-diagonal elements) of the reduced state, but it also destroys the quantum correlations between the parties $A$ and $B$. The post-measurement state $\tilde{\rho}^{AB}$ can also be written as \begin{equation}\label{diagonal blocks} \tilde{\rho}^{AB}=\sum_{i}p_i|i\rangle^A\langle i|\otimes \rho^B_i, \end{equation} where $\rho^B_i=\sum_{j,j^\prime}\rho_{i,i,j,j^\prime}|j\rangle^B\langle j^\prime|/\mathrm{Tr}(\sum_{j,j^\prime}\rho_{i,i,j,j^\prime}|j\rangle^B\langle j^\prime|)$ is the remaining state of $B$ after obtaining the outcome $i$ on $A$ with the probability $p_i=\mathrm{Tr}(\sum_{j,j^\prime}\rho_{i,i,j,j^\prime}|j\rangle^B\langle j^\prime|)$. 
It is also easy to check that $p_i=\sum_{j}\rho_{i,i,j,j}=\rho_{ii}$. Via the local measurement $\Pi$, Alice extracts information which can be quantified by the mutual information of the classical-quantum state $\tilde{\rho}^{AB}$, \begin{equation} \mathcal{J}^{\rightarrow}(\rho^{AB}|\Pi):=\mathcal{I}(\tilde{\rho}^{AB}). \end{equation} The quantity $\mathcal{J}^{\rightarrow}(\rho^{AB}|\Pi)$ represents the amount of information gained about the subsystem $B$ by measuring the subsystem $A$. We use the difference of the mutual information before and after the measurement $\Pi$ to characterize the amount of quantum correlation in the quantum state $\rho^{AB}$, \begin{equation} \delta^{\rightarrow}(\rho^{AB}|\Pi)=\mathcal{I}(\rho^{AB})-\mathcal{I}(\tilde{\rho}^{AB}). \end{equation} The quantity $\delta^{\rightarrow}(\rho^{AB}|\Pi)$ is a discord-like quantity. Similarly, we can define a deficit-like quantity (more precisely, a one-way-quantum-deficit-like quantity) $\Delta^{\rightarrow}(\rho^{AB}|\Pi)$ with respect to the local measurement $\Pi$, \begin{equation} \Delta^{\rightarrow}(\rho^{AB}|\Pi)=S(\rho^{AB}||\tilde{\rho}^{AB}). \end{equation} More explicitly, we have $\Delta^{\rightarrow}(\rho^{AB}|\Pi)=\Delta S_{AB}$, where $\Delta S_{AB}=S(\tilde{\rho}^{AB})-S(\rho^{AB})$ is the entropy increase produced by the local measurement on $A$. After some algebraic manipulation, we obtain a first trade-off, \begin{align}\label{eq:trade-off_1} \delta^{\rightarrow}(\rho^{AB}|\Pi)+C_{\mathrm{RE}}(\rho^A)=\Delta^{\rightarrow}(\rho^{AB}|\Pi). \end{align} This shows that the entropy increase produced by the local measurement is equal to the sum of the quantum correlation destroyed by the local measurement and the relative entropy of coherence of the measured subsystem. Note that this trade-off holds only with respect to the fixed local measurement $\Pi$, whereas the discord and the deficit do not depend on a particular local measurement. In the following, we discuss the general case. 
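The measurement-dependent trade-off~(\ref{eq:trade-off_1}) can be verified numerically. Here is a minimal sketch for a random two-qubit state, with $\Pi$ the computational-basis dephasing on $A$ (all variable names are ours):

```python
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

# random full-rank two-qubit density matrix
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# computational-basis dephasing Pi on subsystem A
post = np.zeros_like(rho)
for k in range(2):
    P = np.kron(np.diag(np.eye(2)[k]), np.eye(2))
    post += P @ rho @ P

r = rho.reshape(2, 2, 2, 2)
rA, rB = r.trace(axis1=1, axis2=3), r.trace(axis1=0, axis2=2)
pA = post.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)   # = diagonal part of rA

delta = (entropy(rA) + entropy(rB) - entropy(rho)) \
      - (entropy(pA) + entropy(rB) - entropy(post))     # discord-like delta(rho|Pi)
c_A   = entropy(np.diag(np.diag(rA).real)) - entropy(rA)  # C_RE(rho^A)
Delta = entropy(post) - entropy(rho)                      # deficit-like Delta(rho|Pi)
# delta + c_A equals Delta, up to floating-point error
```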
If one optimizes the discord-like and deficit-like quantities over all rank-one projective measurements, one obtains a second trade-off relation between them. \noindent {\bf Theorem 2} Given a quantum state $\rho^{AB}$ on the Hilbert space $\mathcal{H}^{AB}$, if $\delta^{\rightarrow}(\rho^{AB})>0$, then we have \begin{equation}\label{eq:trade-off_2} \delta^{\rightarrow}(\rho^{AB})+C_{\mathrm{RE}}(\rho^A)=\Delta^{\rightarrow}(\rho^{AB}). \end{equation} The proof is given in the Methods section. This shows that although the measures of quantum correlations are distinct from each other owing to their different backgrounds, this difference does not affect the inherent quantum correlation between the subsystems: the difference is described exactly by the coherence of the measured subsystem. Note that the condition $\delta^{\rightarrow}(\rho^{AB})>0$ is necessary. If not, consider the state $|\psi_3\rangle$ of Example 1: we have $\delta^{\rightarrow}(\rho^{AB})=\Delta^{\rightarrow}(\rho^{AB})=0$, but $C_{\mathrm{RE}}(\rho^A)=1$. After the local measurements, we only obtain the block-diagonal matrix~(\ref{diagonal blocks}). That is, to obtain a completely diagonal matrix with respect to the basis $\{|i\rangle^A|j\rangle^B\}$, we must remove all the off-diagonal elements, keeping only the diagonal elements within each diagonal block. For every block $\rho^B_i$, we can perform an operation similar to~(\ref{incoherent operation}) of the previous section. After performing these operations, it follows that \begin{equation}\label{eq:cc state} \rho_{\mathrm{diag}}^{AB}=\sum_{i,j}\rho_{i,i,j,j}|i\rangle^A\langle i|\otimes |j\rangle^B\langle j|. \end{equation} Obviously, the state $\rho^{AB}$ has the same incoherent state as the classical-quantum state $\tilde{\rho}^{AB}$. 
Based on this fact, and using the approach of the proof of Theorem 2, we obtain the third trade-off relation, \begin{equation}\label{eq:trade-off_3} C_{\mathrm{RE}}(\rho^{AB})-C_{\mathrm{RE}}(\tilde{\rho}^{AB})=\Delta^{\rightarrow}(\rho^{AB}). \end{equation} Intuitively, a local measurement decreases the coherence of a bipartite quantum system. That is, the quantum correlation in the bipartite system equals the amount of coherence lost through the measurement on one of the subsystems. \noindent {\large\bf Discussion} \noindent We have obtained two new properties of the relative entropy of coherence: first, for a given quantum state the relative entropy of coherence does not exceed the information function; second, it is super-additive. From the former we derived an uncertainty-like relation between the coherence and the entropy of a quantum system: the more the coherence, the less the entropy. We also obtained an uncertainty-like relation between entanglement and the coherence of a subsystem: if a system is already as entangled with the outside world as it can possibly be, then its coherence pays for this entangled behavior. For any bipartite quantum system, by performing a completely dephasing operation on one subsystem, we obtained three trade-offs among the relative entropy of coherence and the discord-like and one-way-deficit-like quantum correlations. Our results give a clear quantitative analysis of, and operational connections between, quantum coherence and quantum correlations in bipartite quantum systems. A fascinating open question is whether one can find a relation between the two-way quantum deficit and the relative entropy of coherence. It is also possible that all four concepts, thermodynamics, entanglement, quantum correlations and coherence, can be understood in a unified framework. 
Such progress may further advance quantum information science. \noindent {\large\bf Methods} \noindent {\bf Proof of Theorem 2 in the Main Text.} Before proceeding with the proof, we note that the relative entropy of coherence is invariant under a simultaneous unitary rotation of the state and of the reference basis. For a $d$-dimensional Hilbert space $\mathcal{H}$, take the basis $\{|i\rangle\}_{i=1}^{d}$; in this basis the density operator is given by Eq.~(\ref{eq:rho}). Under a unitary operator $U$, the density operator~(\ref{eq:rho}) becomes \begin{equation}\label{eq:rho_A2} \rho_U:=U\rho U^\dagger=\sum_{i,i^\prime}\rho_{i,i^\prime}U|i\rangle\langle i^\prime|U^\dagger=\sum_{i,i^\prime}\rho_{i,i^\prime}|\varphi_i\rangle\langle \varphi_{i^\prime}|, \end{equation} where $|\varphi_i\rangle=U|i\rangle$ for each $i$. Obviously, the density operators $\rho$ and $\rho_U$ have the same matrix elements in the bases $\{|i\rangle\}_{i=1}^{d}$ and $\{|\varphi_i\rangle\}_{i=1}^{d}$, respectively. Denoting by $C_{\mathrm{RE}}(\rho)$ the coherence measured in the basis $\{|i\rangle\}_{i=1}^{d}$ and by $C_{\mathrm{RE}}(\rho_U)$ the coherence measured in the basis $\{|\varphi_i\rangle\}_{i=1}^{d}$, we obtain \begin{equation} C_{\mathrm{RE}}(\rho)=C_{\mathrm{RE}}(\rho_U). \end{equation} We now begin the proof of Theorem 2. Let $\{|i\rangle^A|j\rangle^B\}$ be an orthogonal basis for the Hilbert space $\mathcal{H}^{AB}$, in which the bipartite quantum state reads \begin{equation} \rho^{AB}=\sum_{i,i^\prime,j,j^\prime}\rho_{i,i^\prime,j,j^\prime}|i\rangle^A\langle i^\prime|\otimes |j\rangle^B\langle j^\prime|. \end{equation} Let $\{\Pi^{\delta}_i\}$ be an optimal projective measurement for the quantum discord $\delta^{\rightarrow}(\rho^{AB})$. This measurement defines a new basis on the Hilbert space $\mathcal{H}^A$; write $\Pi^{\delta}_i=|i\rangle_{\delta}\langle i|$. 
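The unitary-invariance fact above can be checked numerically. A minimal sketch (the single-qubit state and the seeded random unitary are illustrative assumptions), using numpy:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits (zero eigenvalues are skipped)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def coherence(rho, basis):
    """Relative entropy of coherence of rho with respect to the columns of
    `basis` (an orthonormal basis): S(dephased rho) - S(rho)."""
    probs = np.real(np.diag(basis.conj().T @ rho @ basis))
    probs = probs[probs > 1e-12]
    return float(-np.sum(probs * np.log2(probs))) - entropy(rho)

# illustrative qubit state with coherence in the computational basis
rho = np.array([[0.6, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.4]])

# an arbitrary unitary from the QR decomposition of a random complex matrix
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

rho_U = U @ rho @ U.conj().T

c_old = coherence(rho, np.eye(2))   # C_RE(rho) in the basis {|i>}
c_new = coherence(rho_U, U)         # C_RE(rho_U) in the basis {|phi_i> = U|i>}
assert abs(c_old - c_new) < 1e-10   # C_RE(rho) = C_RE(rho_U)
```

Both the diagonal matrix elements and the spectrum are unchanged by the simultaneous rotation, so the two coherences agree exactly.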
Without loss of generality, let $\{|i\rangle^{\delta}|j\rangle\}$ be the basis on the Hilbert space $\mathcal{H}^{AB}$; then there exists a unitary operator $U^A_\delta$ on $A$ such that \begin{align} \rho^{AB}_\delta=&(U^A_\delta\otimes I^B)\rho^{AB}(U^A_\delta\otimes I^B)^\dagger\nonumber\\ =&\sum_{i,i^\prime,j,j^\prime}\rho_{i,i^\prime,j,j^\prime}|i\rangle^A_\delta \langle i^\prime|\otimes |j\rangle^B\langle j^\prime|. \end{align} By the properties of the discord and the deficit \cite{Modi12}, we know \begin{align}\label{app_eq:discord1} \delta^{\rightarrow}(\rho^{AB})=&\delta^{\rightarrow}(\rho^{AB}_\delta)=\delta^{\rightarrow}(\rho^{AB}_\delta|\Pi^\delta),\nonumber\\ \Delta^{\rightarrow}(\rho^{AB})=&\Delta^{\rightarrow}(\rho^{AB}_\delta)\leq \Delta^{\rightarrow}(\rho^{AB}_\delta|\Pi^\delta). \end{align} Using Eq.~(\ref{eq:trade-off_1}) in the basis $\{|i\rangle_{\delta}\}_{i=1}^{d}$, we have \begin{align} \delta^{\rightarrow}(\rho^{AB}_\delta|\Pi^\delta)+C_{\mathrm{RE}}(\rho^A_\delta)=\Delta^{\rightarrow}(\rho^{AB}_\delta|\Pi^\delta). \end{align} Substituting Eqs.~(\ref{app_eq:discord1}) into this relation, we obtain \begin{equation}\label{app_eq:upper_deficit} \delta^{\rightarrow}(\rho^{AB})+C_{\mathrm{RE}}(\rho^A_\delta)\geq\Delta^{\rightarrow}(\rho^{AB}). \end{equation} Similarly, let $\{\Pi^{\Delta}_i\}$ be an optimal projective measurement for the one-way quantum deficit $\Delta^{\rightarrow}(\rho^{AB})$. It defines another basis on the Hilbert space $\mathcal{H}^A$; write $\Pi^{\Delta}_i=|i\rangle_{\Delta}\langle i|$. Let $\{|i\rangle^{\Delta}|j\rangle\}$ be the basis on the Hilbert space $\mathcal{H}^{AB}$; then there exists a unitary operator $U^A_\Delta$ on $A$ such that \begin{align} \rho^{AB}_\Delta=&(U^A_\Delta\otimes I^B)\rho^{AB}(U^A_\Delta\otimes I^B)^\dagger\nonumber\\ =&\sum_{i,i^\prime,j,j^\prime}\rho_{i,i^\prime,j,j^\prime}|i\rangle^A_\Delta \langle i^\prime|\otimes |j\rangle^B\langle j^\prime|. 
\end{align} Naturally, we have the following relations \begin{align}\label{app_eq:discord2} \delta^{\rightarrow}(\rho^{AB})=&\delta^{\rightarrow}(\rho^{AB}_\Delta)\leq\delta^{\rightarrow}(\rho^{AB}_\Delta|\Pi^\Delta),\nonumber\\ \Delta^{\rightarrow}(\rho^{AB})=&\Delta^{\rightarrow}(\rho^{AB}_\Delta)= \Delta^{\rightarrow}(\rho^{AB}_\Delta|\Pi^\Delta). \end{align} Then, in the basis $\{|i\rangle_{\Delta}\}_{i=1}^{d}$, using Eqs.~(\ref{app_eq:discord2}), we have \begin{align}\label{app_eq:upper_deficit2} \Delta^{\rightarrow}(\rho^{AB}) =&\delta^{\rightarrow}(\rho^{AB}_\Delta|\Pi^\Delta)+C_{\mathrm{RE}}(\rho^A_\Delta)\nonumber\\ \geq& \delta^{\rightarrow}(\rho^{AB}_\Delta)+C_{\mathrm{RE}}(\rho^A_\Delta)\nonumber\\ =&\delta^{\rightarrow}(\rho^{AB})+C_{\mathrm{RE}}(\rho^A_\Delta). \end{align} Combining Eq.~(\ref{app_eq:upper_deficit}) with Eq.~(\ref{app_eq:upper_deficit2}), we obtain \begin{equation}\label{app_eq:discord_deficit_coherence} \delta^{\rightarrow}(\rho^{AB})+C_{\mathrm{RE}}(\rho^A_{\delta})\geq \Delta^{\rightarrow}(\rho^{AB}) \geq\delta^{\rightarrow}(\rho^{AB})+C_{\mathrm{RE}}(\rho^A_{\Delta}). \end{equation} By the unitary invariance of the relative entropy of coherence noted above, we have \begin{equation} C_{\mathrm{RE}}(\rho^A_\Delta)=C_{\mathrm{RE}}(\rho^A_\delta)=C_{\mathrm{RE}}(\rho^A). \end{equation} Substituting this relation into Eq.~(\ref{app_eq:discord_deficit_coherence}), we obtain \begin{align} \delta^{\rightarrow}(\rho^{AB})+C_{\mathrm{RE}}(\rho^A)=\Delta^{\rightarrow}(\rho^{AB}). \end{align} This yields the desired result. \noindent {\large \bf References} \noindent \noindent {\large \bf Acknowledgments} \noindent The authors thank Prof. Y. Feng for helpful discussions. Z.J. Xi is supported by NSFC (Grant No. 61303009), the Specialized Research Fund for the Doctoral Program of Higher Education (20130202120002), and the Fundamental Research Funds for the Central Universities (GK201502004). Y.M. Li is supported by NSFC (Grant No. 11271237). H. 
Fan is supported by NSFC (Grant No. 11175248). \noindent {\large \bf Author contributions} \noindent Z.X. contributed the idea. Z.X. and H.F. performed the calculations. Y.L. checked the calculations. Z.X. wrote the main manuscript; Y.L. and H.F. improved it. All authors contributed to the discussion and reviewed the manuscript. \noindent {\large \bf Additional Information} \noindent \textbf{Competing financial interests:} The authors declare no competing financial interests. \end{document}
This article is about Earth's natural satellite. For moons in general, see Natural satellite. For other uses, see Moon (disambiguation). Earth's natural satellite Full moon seen from Earth Earth I Selene (poetic) Cynthia (poetic) Selenian (poetic) Cynthian (poetic) Orbital characteristics Epoch J2000 (356400–370400 km) Semi-major axis 384399 km (0.00257 AU)[1] 0.0549[1] Orbital period 27.321661 d (27 d 7 h 43 min 11.5 s[1]) Synodic period (29 d 12 h 44 min 2.9 s) Average orbital speed 1.022 km/s 5.145° to the ecliptic[2][a] Longitude of ascending node Regressing by one revolution in 18.61 years Argument of perigee Progressing by one revolution in 8.85 years Satellite of Earth[b][3] Mean radius (0.2727 of Earth's)[1][4][5] Equatorial radius (0.2725 of Earth's)[4] Polar radius Flattening 10921 km (equatorial) 3.793×107 km2 (0.074 of Earth's) 2.1958×1010 km3 (0.020 of Earth's)[4] 7.342×1022 kg (0.012300 of Earth's)[1][4][6] Mean density 3.344 g/cm3[1][4] 0.606 × Earth Surface gravity 1.62 m/s2 (0.1654 g)[4] Moment of inertia factor 0.3929±0.0009[7] Escape velocity 2.38 km/s Sidereal rotation period 27.321661 d (synchronous) Equatorial rotation velocity 4.627 m/s Axial tilt 1.5424° to ecliptic 6.687° to orbit plane[2] 24° to Earth's equator [8] North pole right ascension 266.86°[9] North pole declination 65.64°[9] 0.136[10] Surface temp. 100 K 220 K 390 K 150 K 230 K[11] Apparent magnitude −2.5 to −12.9[c] −12.74 (mean full moon)[4] Angular diameter 29.3 to 34.1 arcminutes[4][d] Atmosphere[12] 10−7 Pa (1 picobar) (day) 10−10 Pa (1 femtobar) (night)[e] Composition by volume The Moon is an astronomical body orbiting Earth as its only natural satellite. It is the fifth-largest satellite in the Solar System, and by far[13] the largest among planetary satellites relative to the size of the planet that it orbits (its primary). The Moon is, after Jupiter's satellite Io, the second-densest satellite in the Solar System among those whose densities are known. 
The Moon is thought to have formed about 4.51 billion years ago, not long after Earth. The most widely accepted explanation is that the Moon formed from the debris left over after a giant impact between Earth and a hypothetical Mars-sized body called Theia. New research of Moon rocks, although not rejecting the Theia hypothesis, suggests that the Moon may be older than previously thought.[14] The Moon is in synchronous rotation with Earth, and thus always shows the same side to Earth, the near side. Because of libration, slightly more than half (about 59%) of the total lunar surface can be viewed from Earth.[15] The near side is marked by dark volcanic maria that fill the spaces between the bright ancient crustal highlands and the prominent impact craters. After the Sun, the Moon is the second-brightest celestial object regularly visible in Earth's sky. Its surface is actually dark, although compared to the night sky it appears very bright, with a reflectance just slightly higher than that of worn asphalt. Its gravitational influence produces the ocean tides, body tides, and the slight lengthening of the day. The Moon's average orbital distance is 384,402 km (238,856 mi),[16][17] or 1.28 light-seconds. This is about thirty times the diameter of Earth. The Moon's apparent size in the sky is almost the same as that of the Sun, since the star is about 400 times the lunar distance and diameter. Therefore, the Moon covers the Sun nearly precisely during a total solar eclipse. This matching of apparent visual size will not continue in the far future because the Moon's distance from Earth is gradually increasing. The Moon was first reached by a human-made object in September 1959, when the Soviet Union's Luna 2, an unmanned spacecraft, was intentionally crashed onto the lunar surface. This accomplishment was followed by the first successful soft landing on the Moon by Luna 9 in 1966. 
The United States' NASA Apollo program achieved the only manned lunar missions to date, beginning with the first manned orbital mission by Apollo 8 in 1968, and six manned landings between 1969 and 1972, with the first being Apollo 11 in July 1969. These missions returned lunar rocks which have been used to develop a geological understanding of the Moon's origin, internal structure, and the Moon's later history. Since the 1972 Apollo 17 mission, the Moon has been visited only by unmanned spacecraft. Both the Moon's natural prominence in the earthly sky and its regular cycle of phases as seen from Earth have provided cultural references and influences for human societies and cultures since time immemorial. Such cultural influences can be found in language, lunar calendar systems, art, and mythology. 1 Name and etymology 2 Formation 3 Physical characteristics 3.1 Internal structure 3.2 Surface geology 3.2.1 Volcanic features 3.2.2 Impact craters 3.2.3 Lunar swirls 3.2.4 Presence of water 3.3 Gravitational field 3.4 Magnetic field 3.5 Atmosphere 3.5.1 Dust 3.5.2 Past thicker atmosphere 3.6 Seasons 4 Earth–Moon system 4.1 Orbit 4.2 Relative size 4.3 Appearance from Earth 4.4 Tidal effects 4.5 Eclipses 5 Observation and exploration 5.1 Before spaceflight 5.2 By spacecraft 5.2.1 20th century 5.2.1.1 Soviet missions 5.2.1.2 United States missions 5.2.1.3 1980s–2000 5.2.2 21st century 5.2.3 Planned commercial missions 5.2.4 Human impact 6 Astronomy from the Moon 8 In culture 8.1 Mythology 8.2 Calendar 8.3 Lunacy 10.1 Citations 12.1 Cartographic resources 12.2 Observation tools 12.3 General Name and etymology See also: List of lunar deities The Moon, tinted reddish, during a lunar eclipse During the lunar phases, only portions of the Moon can be observed from Earth The usual English proper name for Earth's natural satellite is simply the Moon, with a capital M.[18][19] The noun moon is derived from Old English mōna, which (like all its Germanic cognates) stems from 
Proto-Germanic *mēnōn,[20] which in turn comes from Proto-Indo-European *mēnsis "month"[21] (from earlier *mēnōt, genitive *mēneses) which may be related to the verb "measure" (of time).[22] Occasionally, the name Luna /ˈluːnə/ is used in scientific writing[23] and especially in science fiction to distinguish our moon from others, while in poetry "Luna" has been used to denote personification of Earth's moon.[24] Cynthia /ˈsɪnθiə/ is another poetic name, though rare, for the Moon personified as a goddess,[25] while Selene /səˈliːniː/ (literally "Moon") is the Greek goddess of the Moon. The usual English adjective pertaining to the Moon is "lunar", derived from the Latin word for the Moon, lūna. The adjective selenian /səliːniən/,[26] derived from the Greek word for the Moon, σελήνη selēnē, and used to describe the Moon as a world rather than as an object in the sky, is rare,[27] while its cognate selenic was originally a rare synonym[28] but now nearly always refers to the chemical element selenium.[29] The Greek word for the Moon does however provide us with the prefix seleno-, as in selenography, the study of the physical features of the Moon, as well as the element name selenium.[30][31] The Greek goddess of the wilderness and the hunt, Artemis, equated with the Roman Diana, one of whose symbols was the Moon and who was often regarded as the goddess of the Moon, was also called Cynthia, from her legendary birthplace on Mount Cynthus.[32] These names – Luna, Cynthia and Selene – are reflected in technical terms for lunar orbits such as apolune, pericynthion and selenocentric. Near side of the Moon Lunar north pole Lunar south pole Main articles: Origin of the Moon, Giant-impact hypothesis, and Circumplanetary disk The Moon formed 4.51 billion years ago,[f] some 60 million years after the origin of the Solar System. 
Several forming mechanisms have been proposed,[33] including the fission of the Moon from Earth's crust through centrifugal force[34] (which would require too great an initial spin of Earth),[35] the gravitational capture of a pre-formed Moon[36] (which would require an unfeasibly extended atmosphere of Earth to dissipate the energy of the passing Moon),[35] and the co-formation of Earth and the Moon together in the primordial accretion disk (which does not explain the depletion of metals in the Moon).[35] These hypotheses also cannot account for the high angular momentum of the Earth–Moon system.[37] Play media The evolution of the Moon and a tour of the Moon The prevailing hypothesis is that the Earth–Moon system formed after an impact of a Mars-sized body (named Theia) with the proto-Earth (giant impact). The impact blasted material into Earth's orbit and then the material accreted and formed the Moon.[38][39] The Moon's far side has a crust that is 50 km (31 mi) thicker than that of the near side. This is thought to be because the Moon fused from two different bodies. This hypothesis, although not perfect, perhaps best explains the evidence. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, do whatever you have to, but make up your mind. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant impact hypothesis emerged as the most consensual theory. Before the conference, there were partisans of the three "traditional" theories, plus a few people who were starting to take the giant impact seriously, and there was a huge apathetic middle who didn't think the debate would ever be resolved. 
Afterward, there were essentially only two groups: the giant impact camp and the agnostics.[40] Giant impacts are thought to have been common in the early Solar System. Computer simulations of giant impacts have produced results that are consistent with the mass of the lunar core and the angular momentum of the Earth–Moon system. These simulations also show that most of the Moon derived from the impactor, rather than the proto-Earth.[41] However, more recent simulations suggest a larger fraction of the Moon derived from the proto-Earth.[42][43][44][45] Other bodies of the inner Solar System such as Mars and Vesta have, according to meteorites from them, very different oxygen and tungsten isotopic compositions compared to Earth. However, Earth and the Moon have nearly identical isotopic compositions. The isotopic equalization of the Earth-Moon system might be explained by the post-impact mixing of the vaporized material that formed the two,[46] although this is debated.[47] The impact released a lot of energy and then the released material re-accreted into the Earth–Moon system. 
This would have melted the outer shell of Earth, and thus formed a magma ocean.[48][49] Similarly, the newly formed Moon would also have been affected and had its own lunar magma ocean; its depth is estimated from about 500 km (300 miles) to 1,737 km (1,079 miles).[48] While the giant impact hypothesis might explain many lines of evidence, some questions are still unresolved, most of which involve the Moon's composition.[50] Oceanus Procellarum ("Ocean of Storms") Ancient rift valleys – rectangular structure (visible – topography – GRAIL gravity gradients) Ancient rift valleys – context Ancient rift valleys – closeup (artist's concept) In 2001, a team at the Carnegie Institute of Washington reported the most precise measurement of the isotopic signatures of lunar rocks.[51] The rocks from the Apollo program had the same isotopic signature as rocks from Earth, differing from almost all other bodies in the Solar System. This observation was unexpected, because most of the material that formed the Moon was thought to come from Theia and it was announced in 2007 that there was less than a 1% chance that Theia and Earth had identical isotopic signatures.[52] Other Apollo lunar samples had in 2012 the same titanium isotopes composition as Earth,[53] which conflicts with what is expected if the Moon formed far from Earth or is derived from Theia. These discrepancies may be explained by variations of the giant impact hypothesis. The Moon is a very slightly scalene ellipsoid due to tidal stretching, with its long axis displaced 30° from facing the Earth (due to gravitational anomalies from impact basins). Its shape is more elongated than current tidal forces can account for. 
This 'fossil bulge' indicates that the Moon solidified when it orbited at half its current distance to the Earth, and that it is now too cold for its shape to adjust to its orbit.[54] Main article: Internal structure of the Moon Lunar surface chemical composition[55] silica SiO2 45.4% 45.5% alumina Al2O3 14.9% 24.0% lime CaO 11.8% 15.9% iron(II) oxide FeO 14.1% 5.9% magnesia MgO 9.2% 7.5% titanium dioxide TiO2 3.9% 0.6% sodium oxide Na2O 0.6% 0.6% The Moon is a differentiated body. It has a geochemically distinct crust, mantle, and core. The Moon has a solid iron-rich inner core with a radius possibly as small as 240 kilometres (150 mi) and a fluid outer core primarily made of liquid iron with a radius of roughly 300 kilometres (190 mi). Around the core is a partially molten boundary layer with a radius of about 500 kilometres (310 mi).[56][57] This structure is thought to have developed through the fractional crystallization of a global magma ocean shortly after the Moon's formation 4.5 billion years ago.[58] Crystallization of this magma ocean would have created a mafic mantle from the precipitation and sinking of the minerals olivine, clinopyroxene, and orthopyroxene; after about three-quarters of the magma ocean had crystallised, lower-density plagioclase minerals could form and float into a crust atop.[59] The final liquids to crystallise would have been initially sandwiched between the crust and mantle, with a high abundance of incompatible and heat-producing elements.[1] Consistent with this perspective, geochemical mapping made from orbit suggests the crust of mostly anorthosite.[12] The Moon rock samples of the flood lavas that erupted onto the surface from partial melting in the mantle confirm the mafic mantle composition, which is more iron-rich than that of Earth.[1] The crust is on average about 50 kilometres (31 mi) thick.[1] The Moon is the second-densest satellite in the Solar System, after Io.[60] However, the inner core of the Moon is small, with 
a radius of about 350 kilometres (220 mi) or less,[1] around 20% of the radius of the Moon. Its composition is not well understood, but is probably metallic iron alloyed with a small amount of sulphur and nickel; analyses of the Moon's time-variable rotation suggest that it is at least partly molten.[61] Surface geology Main articles: Geology of the Moon, Moon rocks, and List of lunar features The Topographic Globe of the Moon Geological features of the Moon (near side / north pole at left, far side / south pole at right) Topography of the Moon STL 3D model of the Moon with 10× elevation exaggeration rendered with data from the Lunar Orbiter Laser Altimeter of the Lunar Reconnaissance Orbiter The topography of the Moon has been measured with laser altimetry and stereo image analysis.[62] Its most visible topographic feature is the giant far-side South Pole–Aitken basin, some 2,240 km (1,390 mi) in diameter, the largest crater on the Moon and the second-largest confirmed impact crater in the Solar System.[63][64] At 13 km (8.1 mi) deep, its floor is the lowest point on the surface of the Moon.[63][65] The highest elevations of the surface are located directly to the northeast, and it has been suggested might have been thickened by the oblique formation impact of the South Pole–Aitken basin.[66] Other large impact basins such as Imbrium, Serenitatis, Crisium, Smythii, and Orientale also possess regionally low elevations and elevated rims.[63] The far side of the lunar surface is on average about 1.9 km (1.2 mi) higher than that of the near side.[1] The discovery of fault scarp cliffs by the Lunar Reconnaissance Orbiter suggest that the Moon has shrunk within the past billion years, by about 90 metres (300 ft).[67] Similar shrinkage features exist on Mercury. A recent study of over 12000 images from the orbiter has observed that Mare Frigoris near the north pole, a vast basin assumed to be geologically dead, has been cracking and shifting. 
Since the Moon doesn't have tectonic plates, its tectonic activity is slow and cracks develop as it loses heat over the years.[68] Volcanic features Main article: Lunar mare Lunar nearside with major maria and craters labeled The dark and relatively featureless lunar plains, clearly seen with the naked eye, are called maria (Latin for "seas"; singular mare), as they were once believed to be filled with water;[69] they are now known to be vast solidified pools of ancient basaltic lava. Although similar to terrestrial basalts, lunar basalts have more iron and no minerals altered by water.[70] The majority of these lavas erupted or flowed into the depressions associated with impact basins. Several geologic provinces containing shield volcanoes and volcanic domes are found within the near side "maria".[71] Evidence of young lunar volcanism Almost all maria are on the near side of the Moon, and cover 31% of the surface of the near side,[72] compared with 2% of the far side.[73] This is thought to be due to a concentration of heat-producing elements under the crust on the near side, seen on geochemical maps obtained by Lunar Prospector's gamma-ray spectrometer, which would have caused the underlying mantle to heat up, partially melt, rise to the surface and erupt.[59][74][75] Most of the Moon's mare basalts erupted during the Imbrian period, 3.0–3.5 billion years ago, although some radiometrically dated samples are as old as 4.2 billion years.[76] Until recently, the youngest eruptions, dated by crater counting, appeared to have been only 1.2 billion years ago.[77] In 2006, a study of Ina, a tiny depression in Lacus Felicitatis, found jagged, relatively dust-free features that, because of the lack of erosion by infalling debris, appeared to be only 2 million years old.[78] Moonquakes and releases of gas also indicate some continued lunar activity.[78] In 2014 NASA announced "widespread evidence of young lunar volcanism" at 70 irregular mare patches identified by the 
Lunar Reconnaissance Orbiter, some less than 50 million years old. This raises the possibility of a much warmer lunar mantle than previously believed, at least on the near side where the deep crust is substantially warmer because of the greater concentration of radioactive elements.[79][80][81][82] Just prior to this, evidence has been presented for 2–10 million years younger basaltic volcanism inside the crater Lowell,[83][84] Orientale basin, located in the transition zone between the near and far sides of the Moon. An initially hotter mantle and/or local enrichment of heat-producing elements in the mantle could be responsible for prolonged activities also on the far side in the Orientale basin.[85][86] The lighter-colored regions of the Moon are called terrae, or more commonly highlands, because they are higher than most maria. They have been radiometrically dated to having formed 4.4 billion years ago, and may represent plagioclase cumulates of the lunar magma ocean.[76][77] In contrast to Earth, no major lunar mountains are believed to have formed as a result of tectonic events.[87] The concentration of maria on the Near Side likely reflects the substantially thicker crust of the highlands of the Far Side, which may have formed in a slow-velocity impact of a second moon of Earth a few tens of millions of years after their formation.[88][89] Impact craters Further information: List of craters on the Moon Lunar crater Daedalus on the Moon's far side The other major geologic process that has affected the Moon's surface is impact cratering,[90] with craters formed when asteroids and comets collide with the lunar surface. 
There are estimated to be roughly 300,000 craters wider than 1 km (0.6 mi) on the Moon's near side alone.[91] The lunar geologic timescale is based on the most prominent impact events, including Nectaris, Imbrium, and Orientale, structures characterized by multiple rings of uplifted material, between hundreds and thousands of kilometers in diameter and associated with a broad apron of ejecta deposits that form a regional stratigraphic horizon.[92] The lack of an atmosphere, weather and recent geological processes mean that many of these craters are well-preserved. Although only a few multi-ring basins have been definitively dated, they are useful for assigning relative ages. Because impact craters accumulate at a nearly constant rate, counting the number of craters per unit area can be used to estimate the age of the surface.[92] The radiometric ages of impact-melted rocks collected during the Apollo missions cluster between 3.8 and 4.1 billion years old: this has been used to propose a Late Heavy Bombardment of impacts.[93] Blanketed on top of the Moon's crust is a highly comminuted (broken into ever smaller particles) and impact gardened surface layer called regolith, formed by impact processes. The finer regolith, the lunar soil of silicon dioxide glass, has a texture resembling snow and a scent resembling spent gunpowder.[94] The regolith of older surfaces is generally thicker than for younger surfaces: it varies in thickness from 10–20 km (6.2–12.4 mi) in the highlands and 3–5 km (1.9–3.1 mi) in the maria.[95] Beneath the finely comminuted regolith layer is the megaregolith, a layer of highly fractured bedrock many kilometers thick.[96] Comparison of high-resolution images obtained by the Lunar Reconnaissance Orbiter has shown a contemporary crater-production rate significantly higher than previously estimated. 
A secondary cratering process caused by distal ejecta is thought to churn the top two centimeters of regolith a hundred times more quickly than previous models suggested – on a timescale of 81,000 years.[97][98] Lunar swirls at Reiner Gamma Lunar swirls Main article: Lunar swirls Lunar swirls are enigmatic features found across the Moon's surface. They are characterized by a high albedo, appear optically immature (i.e. the optical characteristics of a relatively young regolith), and have often a sinuous shape. Their shape is often accentuated by low albedo regions that wind between the bright swirls. Presence of water Main article: Lunar water Liquid water cannot persist on the lunar surface. When exposed to solar radiation, water quickly decomposes through a process known as photodissociation and is lost to space. However, since the 1960s, scientists have hypothesized that water ice may be deposited by impacting comets or possibly produced by the reaction of oxygen-rich lunar rocks, and hydrogen from solar wind, leaving traces of water which could possibly persist in cold, permanently shadowed craters at either pole on the Moon.[99][100] Computer simulations suggest that up to 14,000 km2 (5,400 sq mi) of the surface may be in permanent shadow.[101] The presence of usable quantities of water on the Moon is an important factor in rendering lunar habitation as a cost-effective plan; the alternative of transporting water from Earth would be prohibitively expensive.[102] In years since, signatures of water have been found to exist on the lunar surface.[103] In 1994, the bistatic radar experiment located on the Clementine spacecraft, indicated the existence of small, frozen pockets of water close to the surface. 
However, later radar observations by Arecibo, suggest these findings may rather be rocks ejected from young impact craters.[104] In 1998, the neutron spectrometer on the Lunar Prospector spacecraft showed that high concentrations of hydrogen are present in the first meter of depth in the regolith near the polar regions.[105] Volcanic lava beads, brought back to Earth aboard Apollo 15, showed small amounts of water in their interior.[106] The 2008 Chandrayaan-1 spacecraft has since confirmed the existence of surface water ice, using the on-board Moon Mineralogy Mapper. The spectrometer observed absorption lines common to hydroxyl, in reflected sunlight, providing evidence of large quantities of water ice, on the lunar surface. The spacecraft showed that concentrations may possibly be as high as 1,000 ppm.[107] Using the mapper's reflectance spectra, indirect lighting of areas in shadow confirmed water ice within 20° latitude of both poles in 2018.[108] In 2009, LCROSS sent a 2,300 kg (5,100 lb) impactor into a permanently shadowed polar crater, and detected at least 100 kg (220 lb) of water in a plume of ejected material.[109][110] Another examination of the LCROSS data showed the amount of detected water to be closer to 155 ± 12 kg (342 ± 26 lb).[111] In May 2011, 615–1410 ppm water in melt inclusions in lunar sample 74220 was reported,[112] the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth's upper mantle. Although of considerable selenological interest, this announcement affords little comfort to would-be lunar colonists – the sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to find them with a state-of-the-art ion microprobe instrument. 
Analysis of the findings of the Moon Mineralogy Mapper (M3) revealed in August 2018 for the first time "definitive evidence" for water ice on the lunar surface.[113][114] The data revealed the distinct reflective signatures of water ice, as opposed to dust and other reflective substances.[115] The ice deposits were found at both the North and South Poles, although the ice is more abundant in the South, where water is trapped in permanently shadowed craters and crevices, allowing it to persist as surface ice because it is shielded from the Sun.[113][115] Gravitational field Main article: Gravity of the Moon The gravitational field of the Moon has been measured through tracking the Doppler shift of radio signals emitted by orbiting spacecraft; the GRAIL mission produced a detailed gravity map of the Moon. The main lunar gravity features are mascons, large positive gravitational anomalies associated with some of the giant impact basins, partly caused by the dense mare basaltic lava flows that fill those basins.[116][117] The anomalies greatly influence the orbit of spacecraft about the Moon. There are some puzzles: lava flows by themselves cannot explain all of the gravitational signature, and some mascons exist that are not linked to mare volcanism.[118] Magnetic field Main article: Magnetic field of the Moon The Moon has an external magnetic field of generally less than 0.2 nanoteslas,[119] or less than one hundred thousandth that of Earth.
The Moon does not currently have a global dipolar magnetic field and has only crustal magnetization, likely acquired early in its history when a dynamo was still operating.[120][121] However, early in its history, 4 billion years ago, its magnetic field strength was likely close to that of Earth today.[119] This early dynamo field apparently expired by about one billion years ago, after the lunar core had completely crystallized.[119] Theoretically, some of the remnant magnetization may originate from transient magnetic fields generated during large impacts through the expansion of plasma clouds in an ambient magnetic field. This is supported by the location of the largest crustal magnetizations near the antipodes of the giant impact basins.[122] Atmosphere Main article: Atmosphere of the Moon The lunar atmosphere, sketched by the Apollo 17 astronauts, was later studied by LADEE.[123][124] The Moon has an atmosphere so tenuous as to be nearly vacuum, with a total mass of less than 10 tonnes (9.8 long tons; 11 short tons).[125] The surface pressure of this small mass is around 3 × 10−15 atm (0.3 nPa); it varies with the lunar day.
Its sources include outgassing and sputtering, a product of the bombardment of lunar soil by solar wind ions.[12][126] Elements that have been detected include sodium and potassium, produced by sputtering (also found in the atmospheres of Mercury and Io); helium-4 and neon[127] from the solar wind; and argon-40, radon-222, and polonium-210, outgassed after their creation by radioactive decay within the crust and mantle.[128][129] The absence of such neutral species (atoms or molecules) as oxygen, nitrogen, carbon, hydrogen and magnesium, which are present in the regolith, is not understood.[128] Water vapor has been detected by Chandrayaan-1 and found to vary with latitude, with a maximum at ~60–70 degrees; it is possibly generated from the sublimation of water ice in the regolith.[130] These gases either return into the regolith because of the Moon's gravity or are lost to space, either through solar radiation pressure or, if they are ionized, by being swept away by the solar wind's magnetic field.[128] A permanent, asymmetric dust cloud exists around the Moon, created by small particles from comets. An estimated 5 tons of comet particles strike the Moon's surface every 24 hours, ejecting dust above the surface. The dust stays above the Moon for approximately 10 minutes, taking 5 minutes to rise and 5 minutes to fall. On average, 120 kilograms of dust are present above the Moon, rising to 100 kilometers above the surface. The dust measurements were made by LADEE's Lunar Dust EXperiment (LDEX), between 20 and 100 kilometers above the surface, during a six-month period. LDEX detected an average of one 0.3-micrometer dust particle each minute. Dust particle counts peaked during the Geminid, Quadrantid, Northern Taurid, and Omicron Centaurid meteor showers, when the Earth and Moon pass through comet debris.
The cloud is asymmetric, being denser near the boundary between the Moon's dayside and nightside.[131][132] Past thicker atmosphere In October 2017, NASA scientists at the Marshall Space Flight Center and the Lunar and Planetary Institute in Houston announced their finding, based on studies of Moon magma samples retrieved by the Apollo missions, that the Moon had once possessed a relatively thick atmosphere for a period of 70 million years between 3 and 4 billion years ago. This atmosphere, sourced from gases ejected from lunar volcanic eruptions, was twice the thickness of that of present-day Mars. The ancient lunar atmosphere was eventually stripped away by solar winds and dissipated into space.[133] Seasons The Moon's axial tilt with respect to the ecliptic is only 1.5424°,[134] much less than the 23.44° of Earth. Because of this, the Moon's solar illumination varies much less with season, and topographical details play a crucial role in seasonal effects.[135] From images taken by Clementine in 1994, it appears that four mountainous regions on the rim of the crater Peary at the Moon's north pole may remain illuminated for the entire lunar day, creating peaks of eternal light. No such regions exist at the south pole. Similarly, there are places that remain in permanent shadow at the bottoms of many polar craters,[101] and these "craters of eternal darkness" are extremely cold: Lunar Reconnaissance Orbiter measured the lowest summer temperatures in craters at the southern pole at 35 K (−238 °C; −397 °F)[136] and just 26 K (−247 °C; −413 °F) close to the winter solstice in the north polar crater Hermite.
This is the coldest temperature in the Solar System ever measured by a spacecraft, colder even than the surface of Pluto.[135] Average temperatures of the Moon's surface are reported, but temperatures of different areas will vary greatly depending upon whether they are in sunlight or shadow.[137] Earth–Moon system See also: Other moons of Earth Orbit Main articles: Orbit of the Moon and Lunar theory The Moon makes a complete orbit around Earth with respect to the fixed stars about once every 27.3 days[g] (its sidereal period). However, because Earth is moving in its orbit around the Sun at the same time, it takes slightly longer for the Moon to show the same phase to Earth: about 29.5 days[h] (its synodic period).[72] Unlike most satellites of other planets, the Moon orbits closer to the ecliptic plane than to the planet's equatorial plane. The Moon's orbit is subtly perturbed by the Sun and Earth in many small, complex and interacting ways. For example, the plane of the Moon's orbit gradually rotates once every 18.61 years,[138] which affects other aspects of lunar motion. These follow-on effects are mathematically described by Cassini's laws.[139] The Moon is an exceptionally large natural satellite relative to Earth: its diameter is more than a quarter of Earth's and its mass is 1/81 of Earth's.[72] It is the largest moon in the Solar System relative to the size of its planet,[i] though Charon is larger relative to the dwarf planet Pluto, at 1/9 Pluto's mass.[j][140] The Earth and the Moon's barycentre, their common center of mass, is located 1,700 km (1,100 mi) (about a quarter of Earth's radius) beneath Earth's surface.
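The relationship between the sidereal and synodic periods quoted above follows from subtracting angular rates: the Moon must make up the angle Earth sweeps out around the Sun. A quick numerical check, using commonly cited mean values (assumed here for illustration):

```python
# Sidereal month: the Moon's orbital period relative to the fixed stars.
# Synodic month: time between identical phases, longer because Earth
# also advances around the Sun while the Moon orbits.
SIDEREAL_MONTH = 27.321661  # days (commonly cited mean)
SIDEREAL_YEAR = 365.25636   # days (commonly cited mean)

# Angular rates subtract: 1/synodic = 1/sidereal_month - 1/year
synodic = 1 / (1 / SIDEREAL_MONTH - 1 / SIDEREAL_YEAR)
print(round(synodic, 2))  # ≈ 29.53 days, matching the ~29.5 days quoted
```

The result reproduces the article's "about 29.5 days" from its "about 27.3 days" sidereal figure.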
The Earth revolves around the Earth–Moon barycentre once a sidereal month, with 1/81 the speed of the Moon, or about 12.5 metres (41 ft) per second. This motion is superimposed on the much larger revolution of the Earth around the Sun at a speed of about 30 kilometres (19 mi) per second. The surface area of the Moon is slightly less than the areas of North and South America combined. Appearance from Earth See also: Lunar observation, Lunar phase, Moonlight, and Earthlight (astronomy) The Moon is in synchronous rotation as it orbits Earth; it rotates about its axis in about the same time it takes to orbit Earth. This results in it always keeping nearly the same face turned towards Earth. However, because of the effect of libration, about 59% of the Moon's surface can actually be seen from Earth. The side of the Moon that faces Earth is called the near side, and the opposite side the far side. The far side is often inaccurately called the "dark side", but it is in fact illuminated as often as the near side: once every 29.5 Earth days. During new moon, the near side is dark.[141] The Moon had once rotated at a faster rate, but early in its history its rotation slowed and became tidally locked in this orientation as a result of frictional effects associated with tidal deformations caused by Earth.[142] With time, the energy of rotation of the Moon on its axis was dissipated as heat, until there was no rotation of the Moon relative to Earth. In 2016, planetary scientists, using data collected on the much earlier NASA Lunar Prospector mission, found two hydrogen-rich areas (most likely former water ice) on opposite sides of the Moon.
It is speculated that these patches were the poles of the Moon billions of years ago, before it was tidally locked to Earth.[143] The Moon has an exceptionally low albedo, giving it a reflectance only slightly brighter than that of worn asphalt. Despite this, it is the brightest object in the sky after the Sun.[72][k] This is due partly to the brightness enhancement of the opposition surge; the Moon at quarter phase is only one-tenth as bright, rather than half as bright, as at full moon.[144] Additionally, color constancy in the visual system recalibrates the relations between the colors of an object and its surroundings, and because the surrounding sky is comparatively dark, the sunlit Moon is perceived as a bright object. The edges of the full moon seem as bright as the center, without limb darkening, because of the reflective properties of lunar soil, which retroreflects light more towards the Sun than in other directions. The Moon does appear larger when close to the horizon, but this is a purely psychological effect, known as the moon illusion, first described in the 7th century BC.[145] The full Moon's angular diameter is about 0.52° (on average) in the sky, roughly the same apparent size as the Sun (see § Eclipses). The Moon's highest altitude at culmination varies by its phase and time of year. The full moon is highest in the sky during winter (for each hemisphere). The orientation of the Moon's crescent also depends on the latitude of the viewing location; an observer in the tropics can see a smile-shaped crescent Moon.[146] The Moon is visible for two weeks every 27.3 days at the North and South Poles. Zooplankton in the Arctic use moonlight when the Sun is below the horizon for months on end.[147] The 14 November 2016 supermoon was 356,511 kilometres (221,526 mi) away[148] from the center of Earth, the closest occurrence since 26 January 1948.
It will not be closer until 25 November 2034.[149] The distance between the Moon and Earth varies from around 356,400 km (221,500 mi) at perigee (closest) to 406,700 km (252,700 mi) at apogee (farthest). On 14 November 2016, it was closer to Earth when at full phase than it has been since 1948, 14% closer than its farthest position in apogee.[150] Reported as a "supermoon", this closest point coincided within an hour of a full moon, and it was 30% more luminous than when at its greatest distance, because its angular diameter is 14% greater and 1.14² ≈ 1.30.[151][152][153] At lower light levels, the human perception of reduced brightness as a percentage is given by the following formula:[154][155] perceived reduction % = 100 × √(actual reduction % / 100). When the actual reduction is 1.00/1.30, or about 0.770, the perceived reduction is about 0.877, or 1.00/1.14. This gives a maximum perceived increase of 14% between apogee and perigee moons of the same phase.[156] There has been historical controversy over whether features on the Moon's surface change over time. Today, many of these claims are thought to be illusory, resulting from observation under different lighting conditions, poor astronomical seeing, or inadequate drawings. However, outgassing does occasionally occur and could be responsible for a minor percentage of the reported lunar transient phenomena. Recently, it has been suggested that a roughly 3 km (1.9 mi) diameter region of the lunar surface was modified by a gas release event about a million years ago.[157][158] The Moon's appearance, like the Sun's, can be affected by Earth's atmosphere.
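The perceived-brightness formula above can be evaluated for the supermoon case; a minimal sketch using the cited 1.30 luminosity ratio between perigee and apogee:

```python
import math

def perceived_reduction(actual_pct):
    """Perceived reduction % = 100 * sqrt(actual reduction % / 100)."""
    return 100 * math.sqrt(actual_pct / 100)

# An apogee full moon delivers 1/1.30 the light of a perigee full moon:
actual = 100 * (1.00 / 1.30)             # ≈ 76.9% of perigee brightness
perceived = perceived_reduction(actual)  # ≈ 87.7%, i.e. 1/1.14

print(round(actual, 1), round(perceived, 1))
```

The perceived ratio 100/87.7 ≈ 1.14 reproduces the "maximum perceived increase of 14%" stated in the text.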
Common optical effects are the 22° halo ring, formed when the Moon's light is refracted through the ice crystals of high cirrostratus clouds, and smaller coronal rings when the Moon is seen through thin clouds.[159] The monthly changes in the angle between the direction of sunlight and the view from Earth produce the phases of the Moon. The illuminated fraction of the visible sphere (degree of illumination) is given by (1 − cos e)/2 = sin²(e/2), where e is the elongation (i.e., the angle between the Moon, the observer on Earth, and the Sun). Tidal effects Main articles: Tidal force, Tidal acceleration, Tide, and Theory of tides The gravitational attraction that masses have for one another decreases inversely with the square of the distance between them. As a result, the slightly greater attraction that the Moon has for the side of Earth closest to it, compared with the part of Earth opposite it, produces tidal forces. Tidal forces affect both the Earth's crust and oceans. The most obvious effect of tidal forces is to cause two bulges in the Earth's oceans, one on the side facing the Moon and the other on the side opposite. This results in elevated sea levels called ocean tides.[160] As the Earth spins on its axis, one of the ocean bulges (high tide) is held in place "under" the Moon, while another such tide is opposite. As a result, there are two high tides and two low tides in about 24 hours.[160] Since the Moon orbits the Earth in the same direction as the Earth's rotation, the high tides occur about every 12 hours and 25 minutes; the extra 25 minutes arise because the Earth must rotate slightly further to catch up with the Moon's orbital motion.
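The 12-hour-25-minute interval between high tides quoted above can be checked numerically: Earth must rotate slightly more than one full turn (relative to the Sun) to catch up with the Moon, which advances about 1/29.5 of its orbit per day. A sketch using the mean synodic month (value assumed here):

```python
SOLAR_DAY_H = 24.0
SYNODIC_MONTH_D = 29.530589  # mean days between identical lunar phases

# The "lunar day" (Moon overhead to Moon overhead again) is the solar day
# stretched by the Moon's daily advance along its orbit.
lunar_day_h = SOLAR_DAY_H / (1 - 1 / SYNODIC_MONTH_D)

# Two tidal bulges mean a high tide every half lunar day.
high_tide_interval_h = lunar_day_h / 2
hours = int(high_tide_interval_h)
minutes = (high_tide_interval_h - hours) * 60

print(hours, round(minutes))  # ≈ 12 h 25 min
```

This reproduces the "about every 12 hours and 25 minutes" figure from the text.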
The Sun has the same kind of tidal effect on the Earth, but its tidal forces are only about 40% of the Moon's; the interplay of the Sun's and Moon's tides is responsible for spring and neap tides.[160] If the Earth were a water world (one with no continents), it would produce a tide of only one meter, and that tide would be very predictable, but the ocean tides are greatly modified by other effects: the frictional coupling of water to Earth's rotation through the ocean floors, the inertia of water's movement, ocean basins that grow shallower near land, and the sloshing of water between different ocean basins.[161] As a result, the timing of the tides at most points on the Earth is determined from observations, which theory explains after the fact. While gravitation causes acceleration and movement of the Earth's fluid oceans, gravitational coupling between the Moon and Earth's solid body is mostly elastic and plastic. The result is a further tidal effect of the Moon on the Earth that causes a bulge of the solid portion of the Earth nearest the Moon that acts as a torque in opposition to the Earth's rotation. This "drains" angular momentum and rotational kinetic energy from Earth's spin, slowing the Earth's rotation.[160][162] That angular momentum, lost from the Earth, is transferred to the Moon in a process confusingly known as tidal acceleration, which lifts the Moon into a higher orbit and results in its lower orbital speed about the Earth. Thus the distance between Earth and Moon is increasing, and the Earth's spin is slowing in reaction.[162] Measurements from laser reflectors left during the Apollo missions (lunar ranging experiments) have found that the Moon's distance increases by 38 mm (1.5 in) per year[163] (roughly the rate at which human fingernails grow).[164] Atomic clocks also show that Earth's day lengthens by about 15 microseconds every year,[165] slowly increasing the rate at which UTC is adjusted by leap seconds.
Left to run its course, this tidal drag would continue until the spin of Earth and the orbital period of the Moon matched, creating mutual tidal locking between the two. As a result, the Moon would be suspended in the sky over one meridian, as is already the case with Pluto and its moon Charon. However, the Sun will become a red giant engulfing the Earth–Moon system long before this occurrence.[166][167] In a like manner, the lunar surface experiences tides of around 10 cm (4 in) amplitude over 27 days, with two components: a fixed one due to Earth, because they are in synchronous rotation, and a varying component from the Sun.[162] The Earth-induced component arises from libration, a result of the Moon's orbital eccentricity (if the Moon's orbit were perfectly circular, there would only be solar tides).[162] Libration also changes the angle from which the Moon is seen, allowing a total of about 59% of its surface to be seen from Earth over time.[72] The cumulative effects of stress built up by these tidal forces produces moonquakes. Moonquakes are much less common and weaker than earthquakes, although moonquakes can last for up to an hour – significantly longer than terrestrial quakes – because of the absence of water to damp out the seismic vibrations. The existence of moonquakes was an unexpected discovery from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972.[168] Eclipses Main articles: Solar eclipse, Lunar eclipse, and Eclipse cycle From Earth, the Moon and the Sun appear the same size, as seen in the 1999 solar eclipse, whereas from the STEREO-B spacecraft in an Earth-trailing orbit, the Moon appears much smaller than the Sun.[169] Eclipses only occur when the Sun, Earth, and Moon are all in a straight line (termed "syzygy"). Solar eclipses occur at new moon, when the Moon is between the Sun and Earth. In contrast, lunar eclipses occur at full moon, when Earth is between the Sun and Moon.
The apparent size of the Moon is roughly the same as that of the Sun, with both being viewed at close to one-half a degree wide. The Sun is much larger than the Moon but it is the vastly greater distance that gives it the same apparent size as the much closer and much smaller Moon from the perspective of Earth. The variations in apparent size, due to the non-circular orbits, are nearly the same as well, though occurring in different cycles. This makes possible both total (with the Moon appearing larger than the Sun) and annular (with the Moon appearing smaller than the Sun) solar eclipses.[170] In a total eclipse, the Moon completely covers the disc of the Sun and the solar corona becomes visible to the naked eye. Because the distance between the Moon and Earth is very slowly increasing over time,[160] the angular diameter of the Moon is decreasing. Also, as it evolves toward becoming a red giant, the size of the Sun, and its apparent diameter in the sky, are slowly increasing.[l] The combination of these two changes means that hundreds of millions of years ago, the Moon would always completely cover the Sun on solar eclipses, and no annular eclipses were possible. Likewise, hundreds of millions of years in the future, the Moon will no longer cover the Sun completely, and total solar eclipses will not occur.[171] Because the Moon's orbit around Earth is inclined by about 5.145° (5° 9') to the orbit of Earth around the Sun, eclipses do not occur at every full and new moon. 
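The near-equal apparent sizes described above follow from simple geometry: angular diameter is set by the ratio of physical diameter to distance. A sketch using commonly cited mean diameters and distances (values assumed here for illustration):

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    # Apparent angular width of a sphere seen from a given distance.
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Mean figures (assumed): Moon diameter 3,474.8 km at 384,400 km;
# Sun diameter 1,391,400 km at 149,600,000 km.
moon = angular_diameter_deg(3474.8, 384_400)
sun = angular_diameter_deg(1_391_400, 149_600_000)

print(round(moon, 2), round(sun, 2))  # both ≈ 0.5°
```

Despite the Sun being about 400 times larger than the Moon, it is also about 400 times farther away, so both subtend roughly half a degree.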
For an eclipse to occur, the Moon must be near the intersection of the two orbital planes.[172] The periodicity and recurrence of eclipses of the Sun by the Moon, and of the Moon by Earth, is described by the saros, which has a period of approximately 18 years.[173] Because the Moon is continuously blocking our view of a half-degree-wide circular area of the sky,[m][174] the related phenomenon of occultation occurs when a bright star or planet passes behind the Moon and is occulted: hidden from view. In this way, a solar eclipse is an occultation of the Sun. Because the Moon is comparatively close to Earth, occultations of individual stars are not visible everywhere on the planet, nor at the same time. Because of the precession of the lunar orbit, each year different stars are occulted.[175] Observation and exploration Before spaceflight Main articles: Exploration of the Moon: Before spaceflight, Selenography, and Lunar theory Map of the Moon by Johannes Hevelius from his Selenographia (1647), the first map to include the libration zones A study of the Moon in Robert Hooke's Micrographia, 1665 One of the earliest-discovered possible depictions of the Moon is a 5000-year-old rock carving Orthostat 47 at Knowth, Ireland.[176][177] Understanding of the Moon's cycles was an early development of astronomy: by the 5th century BC, Babylonian astronomers had recorded the 18-year Saros cycle of lunar eclipses,[178] and Indian astronomers had described the Moon's monthly elongation.[179] The Chinese astronomer Shi Shen (fl. 4th century BC) gave instructions for predicting solar and lunar eclipses.[180](p411) Later, the physical form of the Moon and the cause of moonlight became understood. The ancient Greek philosopher Anaxagoras (d. 
428 BC) reasoned that the Sun and Moon were both giant spherical rocks, and that the latter reflected the light of the former.[181][180](p227) Although the Chinese of the Han Dynasty believed the Moon to be energy equated to qi, their 'radiating influence' theory also recognized that the light of the Moon was merely a reflection of the Sun, and Jing Fang (78–37 BC) noted the sphericity of the Moon.[180](pp413–414) In the 2nd century AD, Lucian wrote the novel A True Story, in which the heroes travel to the Moon and meet its inhabitants. In 499 AD, the Indian astronomer Aryabhata mentioned in his Aryabhatiya that reflected sunlight is the cause of the shining of the Moon.[182] The astronomer and physicist Alhazen (965–1039) found that sunlight was not reflected from the Moon like a mirror, but that light was emitted from every part of the Moon's sunlit surface in all directions.[183] Shen Kuo (1031–1095) of the Song dynasty created an allegory equating the waxing and waning of the Moon to a round ball of reflective silver that, when doused with white powder and viewed from the side, would appear to be a crescent.[180](pp415–416) Galileo's sketches of the Moon from Sidereus Nuncius In Aristotle's (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire), and the imperishable stars of aether, an influential philosophy that would dominate for centuries.[184] However, in the 2nd century BC, Seleucus of Seleucia correctly theorized that tides were due to the attraction of the Moon, and that their height depends on the Moon's position relative to the Sun.[185] In the same century, Aristarchus computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. 
These figures were greatly improved by Ptolemy (90–168 AD): his values of a mean distance of 59 times Earth's radius and a diameter of 0.292 Earth diameters were close to the correct values of about 60 and 0.273 respectively.[186] Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System.[187] During the Middle Ages, before the invention of the telescope, the Moon was increasingly recognised as a sphere, though many believed that it was "perfectly smooth".[188] In 1609, Galileo Galilei drew one of the first telescopic drawings of the Moon in his book Sidereus Nuncius and noted that it was not smooth but had mountains and craters. Thomas Harriot had made, but not published, such drawings a few months earlier. Telescopic mapping of the Moon followed: later in the 17th century, the efforts of Giovanni Battista Riccioli and Francesco Maria Grimaldi led to the system of naming of lunar features in use today. The more exact 1834–36 Mappa Selenographica of Wilhelm Beer and Johann Heinrich Mädler, and their associated 1837 book Der Mond, the first trigonometrically accurate study of lunar features, included the heights of more than a thousand mountains, and introduced the study of the Moon at accuracies possible in earthly geography.[189] Lunar craters, first noted by Galileo, were thought to be volcanic until the 1870s proposal of Richard Proctor that they were formed by collisions.[72] This view gained support in 1892 from the experimentation of geologist Grove Karl Gilbert, and from comparative studies from 1920 to the 1940s,[190] leading to the development of lunar stratigraphy, which by the 1950s was becoming a new and growing branch of astrogeology.[72] By spacecraft See also: Robotic exploration of the Moon, List of proposed missions to the Moon, Colonization of the Moon, and List of artificial objects on the Moon Soviet missions Main articles: Luna program and Lunokhod programme First view in
history of the far side of the Moon, taken by Luna 3, 7 October 1959. A model of Soviet Moon rover Lunokhod 1 The Cold War-inspired Space Race between the Soviet Union and the U.S. led to an acceleration of interest in exploration of the Moon. Once launchers had the necessary capabilities, these nations sent unmanned probes on both flyby and impact/lander missions. Spacecraft from the Soviet Union's Luna program were the first to accomplish a number of goals: following three unnamed, failed missions in 1958,[191] the first human-made object to escape Earth's gravity and pass near the Moon was Luna 1; the first human-made object to impact the lunar surface was Luna 2, and the first photographs of the normally occluded far side of the Moon were made by Luna 3, all in 1959. The first spacecraft to perform a successful lunar soft landing was Luna 9 and the first unmanned vehicle to orbit the Moon was Luna 10, both in 1966.[72] Rock and soil samples were brought back to Earth by three Luna sample return missions (Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976), which returned 0.3 kg total.[192] Two pioneering robotic rovers landed on the Moon in 1970 and 1973 as a part of Soviet Lunokhod programme. Luna 24 was the last Soviet mission to the Moon. United States missions Main articles: Apollo program and Moon landing Earthrise (Apollo 8, 1968, taken by William Anders) Moon rock (Apollo 17, 1972) During the late 1950s at the height of the Cold War, the United States Army conducted a classified feasibility study that proposed the construction of a manned military outpost on the Moon called Project Horizon with the potential to conduct a wide range of missions from scientific research to nuclear Earth bombardment. 
The study included the possibility of conducting a lunar-based nuclear test.[193][194] The Air Force, which at the time was in competition with the Army for a leading role in the space program, developed its own similar plan called Lunex.[195][196][193] However, both these proposals were ultimately passed over as the space program was largely transferred from the military to the civilian agency NASA.[196] Following President John F. Kennedy's 1961 commitment to a manned moon landing before the end of the decade, the United States, under NASA leadership, launched a series of unmanned probes to develop an understanding of the lunar surface in preparation for manned missions: the Jet Propulsion Laboratory's Ranger program produced the first close-up pictures; the Lunar Orbiter program produced maps of the entire Moon; the Surveyor program landed its first spacecraft four months after Luna 9. The manned Apollo program was developed in parallel; after a series of unmanned and manned tests of the Apollo spacecraft in Earth orbit, and spurred on by a potential Soviet lunar flight, in 1968 Apollo 8 made the first manned mission to lunar orbit. The subsequent landing of the first humans on the Moon in 1969 is seen by many as the culmination of the Space Race.[197] Neil Armstrong working at the Lunar Module Eagle during Apollo 11 (1969) "That's one small step ..." 
Neil Armstrong became the first person to walk on the Moon as the commander of the American mission Apollo 11 by first setting foot on the Moon at 02:56 UTC on 21 July 1969.[198] An estimated 500 million people worldwide watched the transmission by the Apollo TV camera, the largest television audience for a live broadcast at that time.[199][200] The Apollo missions 11 to 17 (except Apollo 13, which aborted its planned lunar landing) returned 380.05 kilograms (837.87 lb) of lunar rock and soil in 2,196 separate samples.[201] The American Moon landing and return was enabled by considerable technological advances in the early 1960s, in domains such as ablation chemistry, software engineering, and atmospheric re-entry technology, and by highly competent management of the enormous technical undertaking.[202][203] Scientific instrument packages were installed on the lunar surface during all the Apollo landings. Long-lived instrument stations, including heat flow probes, seismometers, and magnetometers, were installed at the Apollo 12, 14, 15, 16, and 17 landing sites. Direct transmission of data to Earth concluded in late 1977 because of budgetary considerations,[204][205] but as the stations' lunar laser ranging corner-cube retroreflector arrays are passive instruments, they are still being used. Ranging to the stations is routinely performed from Earth-based stations with an accuracy of a few centimeters, and data from this experiment are being used to place constraints on the size of the lunar core.[206] 1980s–2000 An artificially colored mosaic constructed from a series of 53 images taken through three spectral filters by Galileo's imaging system as the spacecraft flew over the northern regions of the Moon on 7 December 1992. After the first Moon race there were years of near quietude, but starting in the 1990s many more countries became involved in direct exploration of the Moon.
In 1990, Japan became the third country to place a spacecraft into lunar orbit with its Hiten spacecraft. The spacecraft released a smaller probe, Hagoromo, in lunar orbit, but the transmitter failed, preventing further scientific use of the mission.[207] In 1994, the U.S. sent the joint Defense Department/NASA spacecraft Clementine to lunar orbit. This mission obtained the first near-global topographic map of the Moon, and the first global multispectral images of the lunar surface.[208] This was followed in 1998 by the Lunar Prospector mission, whose instruments indicated the presence of excess hydrogen at the lunar poles, which is likely to have been caused by the presence of water ice in the upper few meters of the regolith within permanently shadowed craters.[209] Chandrayaan-1's NASA Moon Mineralogy Mapper equipment discovered water-rich minerals (light blue) around a small crater from which they had been ejected. The European spacecraft SMART-1, the second ion-propelled spacecraft, was in lunar orbit from 15 November 2004 until its lunar impact on 3 September 2006, and made the first detailed survey of chemical elements on the lunar surface.[210] The ambitious Chinese Lunar Exploration Program began with Chang'e 1, which successfully orbited the Moon from 5 November 2007 until its controlled lunar impact on 1 March 2009.[211] It obtained a full image map of the Moon. Chang'e 2, beginning in October 2010, reached the Moon more quickly, mapped the Moon at a higher resolution over an eight-month period, then left lunar orbit for an extended stay at the Earth–Sun L2 Lagrangian point, before finally performing a flyby of asteroid 4179 Toutatis on 13 December 2012, and then heading off into deep space. On 14 December 2013, Chang'e 3 landed a lunar lander onto the Moon's surface, which in turn deployed a lunar rover, named Yutu (Chinese: 玉兔; literally "Jade Rabbit").
This was the first lunar soft landing since Luna 24 in 1976, and the first lunar rover mission since Lunokhod 2 in 1973. Another rover mission (Chang'e 4) reached the Moon in 2019, becoming the first ever spacecraft to land on the Moon's far side. China intends to follow this up with a sample return mission (Chang'e 5) in 2020.[212] Between 4 October 2007 and 10 June 2009, the Japan Aerospace Exploration Agency's Kaguya (Selene) mission, a lunar orbiter fitted with a high-definition video camera, and two small radio-transmitter satellites, obtained lunar geophysics data and took the first high-definition movies from beyond Earth orbit.[213][214] India's first lunar mission, Chandrayaan-1, orbited from 8 November 2008 until loss of contact on 27 August 2009, creating a high-resolution chemical, mineralogical and photo-geological map of the lunar surface, and confirming the presence of water molecules in lunar soil.[215] The Indian Space Research Organisation planned to launch Chandrayaan-2 in 2013, which would have included a Russian robotic lunar rover.[216][217] However, the failure of Russia's Fobos-Grunt mission delayed this project, and Chandrayaan-2 was launched on 22 July 2019. The lander Vikram attempted to land on the lunar south pole region on 6 September, but the signal was lost at an altitude of 2.1 km (1.3 mi); what happened after that is unknown.

Copernicus's central peaks as observed by the LRO, 2012

The Ina formation, 2009

The U.S. co-launched the Lunar Reconnaissance Orbiter (LRO) and the LCROSS impactor and follow-up observation orbiter on 18 June 2009; LCROSS completed its mission by making a planned and widely observed impact in the crater Cabeus on 9 October 2009,[218] whereas LRO is currently in operation, obtaining precise lunar altimetry and high-resolution imagery. In November 2011, the LRO passed over the large and bright crater Aristarchus.
NASA released photos of the crater on 25 December 2011.[219] Two NASA GRAIL spacecraft began orbiting the Moon around 1 January 2012,[220] on a mission to learn more about the Moon's internal structure. NASA's LADEE probe, designed to study the lunar exosphere, achieved orbit on 6 October 2013. Upcoming lunar missions include Russia's Luna-Glob: an unmanned lander with a set of seismometers, and an orbiter based on its failed Martian Fobos-Grunt mission.[221] Privately funded lunar exploration has been promoted by the Google Lunar X Prize, announced 13 September 2007, which offers US$20 million to anyone who can land a robotic rover on the Moon and meet other specified criteria.[222] Shackleton Energy Company is building a program to establish operations on the south pole of the Moon to harvest water and supply their propellant depots.[223] NASA began to plan to resume manned missions following the call by U.S. President George W. Bush on 14 January 2004 for a manned mission to the Moon by 2019 and the construction of a lunar base by 2024.[224] The Constellation program was funded, and construction and testing began on a manned spacecraft and launch vehicle,[225] and design studies for a lunar base.[226] However, that program was canceled in favor of a manned asteroid landing by 2025 and a manned Mars orbit by 2035.[227] India has also expressed its hope to send a manned mission to the Moon by 2020.[228] On 28 February 2018, SpaceX, Vodafone, Nokia and Audi announced a collaboration to install a 4G wireless communication network on the Moon, with the aim of streaming live footage from the surface to Earth.[229] Recent reports also indicate NASA's intent to send a woman astronaut to the Moon in its planned mid-2020s mission.[230]

Planned commercial missions

In 2007, the X Prize Foundation together with Google launched the Google Lunar X Prize to encourage commercial endeavors to the Moon.
A prize of $20 million was to be awarded to the first private venture to get to the Moon with a robotic lander by the end of March 2018, with additional prizes worth $10 million for further milestones.[231][232] As of August 2016, 16 teams were reportedly participating in the competition.[233] In January 2018 the foundation announced that the prize would go unclaimed as none of the finalist teams would be able to make a launch attempt by the deadline.[234] In August 2016, the US government granted permission to US-based start-up Moon Express to land on the Moon.[235] This marked the first time that a private enterprise was given the right to do so. The decision is regarded as a precedent helping to define regulatory standards for deep-space commercial activity in the future, as thus far companies' operations had been restricted to being on or around Earth.[235] On 29 November 2018 NASA announced that nine commercial companies would compete to win a contract to send small payloads to the Moon in what is known as Commercial Lunar Payload Services. According to NASA administrator Jim Bridenstine, "We are building a domestic American capability to get back and forth to the surface of the moon."[236]

See also: List of artificial objects on the Moon, Space art § Art in space, and Planetary protection § Category V

Remains of human activity, Apollo 17's Lunar Surface Experiments Package

Besides the traces of human activity on the Moon, there have been some intended permanent installations like the Moon Museum art piece, Apollo 11 goodwill messages, Lunar plaque, Fallen Astronaut memorial and other artifacts.

Astronomy from the Moon

A false-color image of Earth in ultraviolet light taken from the surface of the Moon on the Apollo 16 mission.
The day-side reflects a large amount of UV light from the Sun, but the night-side shows faint bands of UV emission from the aurora caused by charged particles.[237]

For many years, the Moon has been recognized as an excellent site for telescopes.[238] It is relatively nearby; astronomical seeing is not a concern; certain craters near the poles are permanently dark and cold, and thus especially useful for infrared telescopes; and radio telescopes on the far side would be shielded from the radio chatter of Earth.[239] The lunar soil, although it poses a problem for any moving parts of telescopes, can be mixed with carbon nanotubes and epoxies and employed in the construction of mirrors up to 50 meters in diameter.[240] A lunar zenith telescope can be made cheaply with an ionic liquid.[241] In April 1972, the Apollo 16 mission recorded various astronomical photos and spectra in ultraviolet with the Far Ultraviolet Camera/Spectrograph.[242]

Main article: Space law

Although Luna landers scattered pennants of the Soviet Union on the Moon, and U.S. flags were symbolically planted at their landing sites by the Apollo astronauts, no nation claims ownership of any part of the Moon's surface.[243] Russia, China, India, and the U.S.
are party to the 1967 Outer Space Treaty,[244] which defines the Moon and all outer space as the "province of all mankind".[243] This treaty also restricts the use of the Moon to peaceful purposes, explicitly banning military installations and weapons of mass destruction.[245] The 1979 Moon Agreement was created to restrict the exploitation of the Moon's resources by any single nation, but as of November 2016, it has been signed and ratified by only 18 nations, none of which engages in self-launched human space exploration or has plans to do so.[246] Although several individuals have made claims to the Moon in whole or in part, none of these are considered credible.[247][248][249]

Luna, the Moon, from a 1550 edition of Guido Bonatti's Liber astronomiae

See also: Moon in fiction and Tourism on the Moon
Further information: Lunar deity, Selene, Luna (goddess), Man in the Moon, and Crescent

Statue of Chandraprabha (meaning "as charming as the moon"), the eighth Tirthankara in Jainism, with the symbol of a crescent moon below it.

Sun and Moon with faces (1493 woodcut)

The contrast between the brighter highlands and the darker maria creates the patterns seen by different cultures as the Man in the Moon, the rabbit and the buffalo, among others. In many prehistoric and ancient cultures, the Moon was personified as a deity or other supernatural phenomenon, and astrological views of the Moon continue to be propagated today.
In Proto-Indo-European religion, the Moon was personified as the male god *Meh₁not.[250] The ancient Sumerians believed that the Moon was the god Nanna,[251][252] who was the father of Inanna, the goddess of the planet Venus,[251][252] and Utu, the god of the sun.[251][252] Nanna was later known as Sîn,[252][251] and was particularly associated with magic and sorcery.[251] In Greco-Roman mythology, the Sun and the Moon are represented as male and female, respectively (Helios/Sol and Selene/Luna);[250] this is a development unique to the eastern Mediterranean[250] and traces of an earlier male moon god in the Greek tradition are preserved in the figure of Menelaus.[250] In Mesopotamian iconography, the crescent was the primary symbol of Nanna-Sîn.[252] In ancient Greek art, the Moon goddess Selene was represented wearing a crescent on her headgear in an arrangement reminiscent of horns.[253][254] The star and crescent arrangement also goes back to the Bronze Age, representing either the Sun and Moon, or the Moon and planet Venus, in combination. It came to represent the goddess Artemis or Hecate, and via the patronage of Hecate came to be used as a symbol of Byzantium. An iconographic tradition of representing Sun and Moon with faces developed in the late medieval period. The splitting of the moon (Arabic: انشقاق القمر‎) is a miracle attributed to Muhammad.[255] A song titled 'Moon Anthem' was released on the occasion of the landing of India's Chandrayaan-2 on the Moon.[256]

Further information: Lunar calendar, Lunisolar calendar, Metonic cycle, Blue moon, and Movable feast

The Moon's regular phases make it a very convenient timepiece, and the periods of its waxing and waning form the basis of many of the oldest calendars. Tally sticks, notched bones dating as far back as 20–30,000 years ago, are believed by some to mark the phases of the Moon.[257][258][259] The ~30-day month is an approximation of the lunar cycle.
The English noun month and its cognates in other Germanic languages stem from Proto-Germanic *mǣnṓth-, which is connected to the above-mentioned Proto-Germanic *mǣnōn, indicating the usage of a lunar calendar among the Germanic peoples (Germanic calendar) prior to the adoption of a solar calendar.[260] The PIE root of moon, *méh₁nōt, derives from the PIE verbal root *meh₁-, "to measure", "indicat[ing] a functional conception of the Moon, i.e. marker of the month" (cf. the English words measure and menstrual),[261][262][263] and echoing the Moon's importance to many ancient cultures in measuring time (see Latin mensis and Ancient Greek μείς (meis) or μήν (mēn), meaning "month").[264][265][266][267] Most historical calendars are lunisolar. The 7th-century Islamic calendar is an exceptional example of a purely lunar calendar. Months are traditionally determined by the visual sighting of the hilal, or earliest crescent moon, over the horizon.[268]

Moonrise, 1884, painting by Stanisław Masłowski (National Museum, Kraków, Gallery of Sukiennice Museum)

Further information: Lunar effect

The Moon has long been associated with insanity and irrationality; the words lunacy and lunatic (popular shortening loony) are derived from the Latin name for the Moon, Luna.
Philosophers Aristotle and Pliny the Elder argued that the full moon induced insanity in susceptible individuals, believing that the brain, which is mostly water, must be affected by the Moon and its power over the tides; however, the Moon's gravity is too slight to affect any single person.[269] Even today, people who believe in a lunar effect claim that admissions to psychiatric hospitals, traffic accidents, homicides or suicides increase during a full moon, but dozens of studies invalidate these claims.[269][270][271][272][273]

^ Between 18.29° and 28.58° to Earth's equator.[1]
^ There are a number of near-Earth asteroids, including 3753 Cruithne, that are co-orbital with Earth: their orbits bring them close to Earth for periods of time but then alter in the long term (Morais et al., 2002). These are quasi-satellites – they are not moons as they do not orbit Earth. For more information, see Other moons of Earth.
^ The maximum value is given based on scaling of the brightness from the value of −12.74 given for an equator to Moon-centre distance of 378 000 km in the NASA factsheet reference to the minimum Earth–Moon distance given there, after the latter is corrected for Earth's equatorial radius of 6 378 km, giving 350 600 km. The minimum value (for a distant new moon) is based on a similar scaling using the maximum Earth–Moon distance of 407 000 km (given in the factsheet) and by calculating the brightness of the earthshine onto such a new moon. The brightness of the earthshine is [ Earth albedo × (Earth radius / Radius of Moon's orbit)² ] relative to the direct solar illumination that occurs for a full moon. (Earth albedo = 0.367; Earth radius = (polar radius × equatorial radius)½ = 6 367 km.)
^ The range of angular-size values given is based on simple scaling of the following values given in the fact sheet reference: at an Earth-equator to Moon-centre distance of 378 000 km, the angular size is 1896 arcseconds.
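The scaling arithmetic in these notes can be checked in a few lines of Python. This is an illustrative sketch, not part of the cited sources; it uses only the figures quoted in the notes (magnitude −12.74 at 378 000 km, corrected minimum distance 350 600 km, maximum distance 407 000 km, Earth albedo 0.367, Earth radius 6 367 km, and 1896 arcseconds at the reference distance), and the variable and function names are introduced here purely for the sketch:

```python
import math

# Figures quoted in the notes (NASA factsheet values as cited there).
M_REF = -12.74      # full-moon apparent magnitude at the reference distance
D_REF = 378_000.0   # Earth-equator to Moon-centre reference distance, km
D_MIN = 350_600.0   # minimum distance after correcting for Earth's radius, km
D_MAX = 407_000.0   # maximum Earth-Moon distance, km
ALBEDO = 0.367      # Earth's albedo
R_EARTH = 6_367.0   # Earth radius (geometric mean of polar/equatorial), km
THETA_REF = 1896.0  # angular size at the reference distance, arcseconds

def scale_magnitude(m_ref: float, d_ref: float, d: float) -> float:
    """Brightness scales as 1/d^2, so magnitude changes by 5*log10(d/d_ref)."""
    return m_ref + 5.0 * math.log10(d / d_ref)

# Brightest full moon (closest approach).
m_full_max = scale_magnitude(M_REF, D_REF, D_MIN)

# Distant new moon lit only by earthshine: relative brightness is
# albedo * (R_earth / d)^2 of the direct solar illumination.
earthshine_ratio = ALBEDO * (R_EARTH / D_MAX) ** 2
m_new = scale_magnitude(M_REF, D_REF, D_MAX) - 2.5 * math.log10(earthshine_ratio)

# Angular size scales as 1/d from the reference value.
theta_max = THETA_REF * D_REF / D_MIN
theta_min = THETA_REF * D_REF / D_MAX

print(f"full moon (closest): {m_full_max:.2f} mag")
print(f"new moon (earthshine only): {m_new:.2f} mag")
print(f"angular size range: {theta_min:.0f}-{theta_max:.0f} arcsec")
```

This reproduces the extremes implied by the notes: roughly −12.9 for the closest full moon, about −2.5 for a distant new moon lit only by earthshine, and an angular size ranging from about 1761 to 2044 arcseconds (about 29.4 to 34.1 arcminutes).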
The same fact sheet gives extreme Earth–Moon distances of 407 000 km and 357 000 km. For the maximum angular size, the minimum distance has to be corrected for Earth's equatorial radius of 6 378 km, giving 350 600 km.
^ Lucey et al. (2006) give 10⁷ particles cm⁻³ by day and 10⁵ particles cm⁻³ by night. Along with equatorial surface temperatures of 390 K by day and 100 K by night, the ideal gas law yields the pressures given in the infobox (rounded to the nearest order of magnitude): 10⁻⁷ Pa by day and 10⁻¹⁰ Pa by night.
^ This age is calculated from isotope dating of lunar zircons.
^ More accurately, the Moon's mean sidereal period (fixed star to fixed star) is 27.321661 days (27 d 07 h 43 min 11.5 s), and its mean tropical orbital period (from equinox to equinox) is 27.321582 days (27 d 07 h 43 min 04.7 s) (Explanatory Supplement to the Astronomical Ephemeris, 1961, at p. 107).
^ More accurately, the Moon's mean synodic period (between mean solar conjunctions) is 29.530589 days (29 d 12 h 44 min 02.9 s) (Explanatory Supplement to the Astronomical Ephemeris, 1961, at p. 107).
^ There is no strong correlation between the sizes of planets and the sizes of their satellites. Larger planets tend to have more satellites, both large and small, than smaller planets.
^ With 27% the diameter and 60% the density of Earth, the Moon has 1.23% of the mass of Earth. The moon Charon is larger relative to its primary Pluto, but Pluto is now considered to be a dwarf planet.
^ The Sun's apparent magnitude is −26.7, while the full moon's apparent magnitude is −12.7.
^ See graph in Sun#Life phases. At present, the diameter of the Sun is increasing at a rate of about five percent per billion years. This is very similar to the rate at which the apparent angular diameter of the Moon is decreasing as it recedes from Earth.
^ On average, the Moon covers an area of 0.21078 square degrees on the night sky.
^ a b c d e f g h i j k l Wieczorek, Mark A.; et al. (2006).
"The constitution and structure of the lunar interior". Reviews in Mineralogy and Geochemistry. 60 (1): 221–364. Bibcode:2006RvMG...60..221W. doi:10.2138/rmg.2006.60.3. ^ a b Lang, Kenneth R. (2011). The Cambridge Guide to the Solar System (2nd ed.). Cambridge University Press. Archived from the original on 1 January 2016. ^ Morais, M.H.M.; Morbidelli, A. (2002). "The Population of Near-Earth Asteroids in Coorbital Motion with the Earth". Icarus. 160 (1): 1–9. Bibcode:2002Icar..160....1M. doi:10.1006/icar.2002.6937. ^ a b c d e f g h i j Williams, Dr. David R. (2 February 2006). "Moon Fact Sheet". NASA/National Space Science Data Center. Archived from the original on 23 March 2010. Retrieved 31 December 2008. ^ Smith, David E.; Zuber, Maria T.; Neumann, Gregory A.; Lemoine, Frank G. (1 January 1997). "Topography of the Moon from the Clementine lidar". Journal of Geophysical Research. 102 (E1): 1601. Bibcode:1997JGR...102.1591S. doi:10.1029/96JE02940. hdl:2060/19980018849. ^ Terry, Paul (2013). Top 10 of Everything. Octopus Publishing Group Ltd. p. 226. ISBN 978-0-600-62887-3. ^ Williams, James G.; Newhall, XX; Dickey, Jean O. (1996). "Lunar moments, tides, orientation, and coordinate frames". Planetary and Space Science. 44 (10): 1077–1080. Bibcode:1996P&SS...44.1077W. doi:10.1016/0032-0633(95)00154-9. ^ Makemson, Maud W. (1971). "Determination of selenographic positions". The Moon. 2 (3): 293–308. Bibcode:1971Moon....2..293M. doi:10.1007/BF00561882. ^ a b Archinal, Brent A.; A'Hearn, Michael F.; Bowell, Edward G.; Conrad, Albert R.; Consolmagno, Guy J.; Courtin, Régis; et al. (2010). "Report of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009" (PDF). Celestial Mechanics and Dynamical Astronomy. 109 (2): 101–135. Bibcode:2011CeMDA.109..101A. doi:10.1007/s10569-010-9320-4. Archived from the original (PDF) on 4 March 2016. Retrieved 24 September 2018. also available "via usgs.gov" (PDF). ^ Matthews, Grant (2008).
"Celestial body irradiance determination from an underfilled satellite radiometer: application to albedo and thermal emission measurements of the Moon using CERES". Applied Optics. 47 (27): 4981–4993. Bibcode:2008ApOpt..47.4981M. doi:10.1364/AO.47.004981. PMID 18806861. ^ A.R. Vasavada; D.A. Paige & S.E. Wood (1999). "Near-Surface Temperatures on Mercury and the Moon and the Stability of Polar Ice Deposits". Icarus. 141 (2): 179–193. Bibcode:1999Icar..141..179V. doi:10.1006/icar.1999.6175. ^ a b c Lucey, Paul; Korotev, Randy L.; et al. (2006). "Understanding the lunar surface and space-Moon interactions". Reviews in Mineralogy and Geochemistry. 60 (1): 83–219. Bibcode:2006RvMG...60...83L. doi:10.2138/rmg.2006.60.2. ^ Lowrie, William (1997). Fundamentals of Geophysics. Cambridge: Cambridge University Press. p. 5. ^ "The Moon is older than scientists thought". Universe Today. ^ Stern, David (30 March 2014). "Libration of the Moon". NASA. Retrieved 11 February 2020. ^ "How far away is the moon?". Space Place. NASA. Archived from the original on 6 October 2016. ^ Scott, Elaine (2016). Our Moon: New discoveries about Earth's closest companion. Houghton Mifflin Harcourt. p. 7. ISBN 978-0-544-75058-6. ^ "Naming Astronomical Objects: Spelling of Names". International Astronomical Union. Archived from the original on 16 December 2008. Retrieved 6 April 2020. ^ "Gazetteer of Planetary Nomenclature: Planetary Nomenclature FAQ". USGS Astrogeology Research Program. Archived from the original on 27 May 2010. Retrieved 6 April 2020. ^ Orel, Vladimir (2003). A Handbook of Germanic Etymology. Brill. ^ Fernando López-Menchero, Late Proto-Indo-European Etymological Lexicon ^ Barnhart, Robert K. (1995). The Barnhart Concise Dictionary of Etymology. Harper Collins. p. 487. ISBN 978-0-06-270084-1. ^ E.g. James A. Hall III (2016) Moons of the Solar System, Springer International ^ "Luna". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. 
(Subscription or UK public library membership required.) ^ "Cynthia". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. (Subscription or UK public library membership required.) ^ "selenian". Merriam-Webster Dictionary. ^ "selenian". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. (Subscription or UK public library membership required.) ^ "selenic". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. (Subscription or UK public library membership required.) ^ "selenic". Merriam-Webster Dictionary. ^ "Oxford English Dictionary: lunar, a. and n." Oxford English Dictionary: Second Edition 1989. Oxford University Press. Retrieved 23 March 2010. ^ σελήνη. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project. ^ Pannen, Imke (2010). When the Bad Bleeds: Mantic Elements in English Renaissance Revenge Tragedy. V&R unipress GmbH. pp. 96–. ISBN 978-3-89971-640-5. Archived from the original on 4 September 2016. ^ Barboni, M.; Boehnke, P.; Keller, C.B.; Kohl, I.E.; Schoene, B.; Young, E.D.; McKeegan, K.D. (2017). "Early formation of the Moon 4.51 billion years ago". Science Advances. 3 (1): e1602365. Bibcode:2017SciA....3E2365B. doi:10.1126/sciadv.1602365. PMC 5226643. PMID 28097222. ^ Binder, A.B. (1974). "On the origin of the Moon by rotational fission". The Moon. 11 (2): 53–76. Bibcode:1974Moon...11...53B. doi:10.1007/BF01877794. ^ a b c Stroud, Rick (2009). The Book of the Moon. Walken and Company. pp. 24–27. ISBN 978-0-8027-1734-4. ^ Mitler, H.E. (1975). "Formation of an iron-poor moon by partial capture, or: Yet another exotic theory of lunar origin". Icarus. 24 (2): 256–268. Bibcode:1975Icar...24..256M. doi:10.1016/0019-1035(75)90102-5. ^ Stevenson, D.J. (1987). "Origin of the moon–The collision hypothesis". Annual Review of Earth and Planetary Sciences. 15 (1): 271–315. Bibcode:1987AREPS..15..271S. doi:10.1146/annurev.ea.15.050187.001415. ^ Taylor, G. 
Jeffrey (31 December 1998). "Origin of the Earth and Moon". Planetary Science Research Discoveries. Hawai'i Institute of Geophysics and Planetology. Archived from the original on 10 June 2010. Retrieved 7 April 2010. ^ "Asteroids Bear Scars of Moon's Violent Formation". 16 April 2015. Archived from the original on 8 October 2016. ^ Dana Mackenzie (21 July 2003). The Big Splat, or How Our Moon Came to Be. John Wiley & Sons. pp. 166–168. ISBN 978-0-471-48073-0. ^ Canup, R.; Asphaug, E. (2001). "Origin of the Moon in a giant impact near the end of Earth's formation". Nature. 412 (6848): 708–712. Bibcode:2001Natur.412..708C. doi:10.1038/35089010. PMID 11507633. ^ "Earth-Asteroid Collision Formed Moon Later Than Thought". National Geographic. 28 October 2010. Archived from the original on 18 April 2009. Retrieved 7 May 2012. ^ Kleine, Thorsten (2008). "2008 Pellas-Ryder Award for Mathieu Touboul" (PDF). Meteoritics and Planetary Science. 43 (S7): A11–A12. Bibcode:2008M&PS...43...11K. doi:10.1111/j.1945-5100.2008.tb00709.x. Archived from the original (PDF) on 27 July 2018. Retrieved 8 April 2020. ^ Touboul, M.; Kleine, T.; Bourdon, B.; Palme, H.; Wieler, R. (2007). "Late formation and prolonged differentiation of the Moon inferred from W isotopes in lunar metals". Nature. 450 (7173): 1206–1209. Bibcode:2007Natur.450.1206T. doi:10.1038/nature06428. PMID 18097403. ^ "Flying Oceans of Magma Help Demystify the Moon's Creation". National Geographic. 8 April 2015. Archived from the original on 9 April 2015. ^ Pahlevan, Kaveh; Stevenson, David J. (2007). "Equilibration in the aftermath of the lunar-forming giant impact". Earth and Planetary Science Letters. 262 (3–4): 438–449. arXiv:1012.5323. Bibcode:2007E&PSL.262..438P. doi:10.1016/j.epsl.2007.07.055. ^ Nield, Ted (2009). "Moonwalk (summary of meeting at Meteoritical Society's 72nd Annual Meeting, Nancy, France)". Geoscientist. Vol. 19. p. 8. Archived from the original on 27 September 2012. ^ a b Warren, P.H. (1985). 
"The magma ocean concept and lunar evolution". Annual Review of Earth and Planetary Sciences. 13 (1): 201–240. Bibcode:1985AREPS..13..201W. doi:10.1146/annurev.ea.13.050185.001221. ^ Tonks, W. Brian; Melosh, H. Jay (1993). "Magma ocean formation due to giant impacts". Journal of Geophysical Research. 98 (E3): 5319–5333. Bibcode:1993JGR....98.5319T. doi:10.1029/92JE02726. ^ Daniel Clery (11 October 2013). "Impact Theory Gets Whacked". Science. 342 (6155): 183–185. Bibcode:2013Sci...342..183C. doi:10.1126/science.342.6155.183. PMID 24115419. ^ Wiechert, U.; et al. (October 2001). "Oxygen Isotopes and the Moon-Forming Giant Impact". Science. 294 (12): 345–348. Bibcode:2001Sci...294..345W. doi:10.1126/science.1063037. PMID 11598294. Archived from the original on 20 April 2009. Retrieved 5 July 2009. ^ Pahlevan, Kaveh; Stevenson, David (October 2007). "Equilibration in the Aftermath of the Lunar-forming Giant Impact". Earth and Planetary Science Letters. 262 (3–4): 438–449. arXiv:1012.5323. Bibcode:2007E&PSL.262..438P. doi:10.1016/j.epsl.2007.07.055. ^ "Titanium Paternity Test Says Earth is the Moon's Only Parent (University of Chicago)". Astrobio.net. 5 April 2012. Retrieved 3 October 2013. ^ Garrick-Bethell; et al. (2014). "The tidal-rotational shape of the Moon and evidence for polar wander" (PDF). Nature. 512 (7513): 181–184. Bibcode:2014Natur.512..181G. doi:10.1038/nature13639. PMID 25079322. ^ Taylor, Stuart R. (1975). Lunar Science: a Post-Apollo View. Oxford: Pergamon Press. p. 64. ISBN 978-0-08-018274-2. ^ Brown, D.; Anderson, J. (6 January 2011). "NASA Research Team Reveals Moon Has Earth-Like Core". NASA. NASA. Archived from the original on 11 January 2012. ^ Weber, R.C.; Lin, P.-Y.; Garnero, E.J.; Williams, Q.; Lognonne, P. (21 January 2011). "Seismic Detection of the Lunar Core" (PDF). Science. 331 (6015): 309–312. Bibcode:2011Sci...331..309W. doi:10.1126/science.1199375. PMID 21212323. Archived from the original (PDF) on 15 October 2015. 
Retrieved 10 April 2017. ^ Nemchin, A.; Timms, N.; Pidgeon, R.; Geisler, T.; Reddy, S.; Meyer, C. (2009). "Timing of crystallization of the lunar magma ocean constrained by the oldest zircon". Nature Geoscience. 2 (2): 133–136. Bibcode:2009NatGe...2..133N. doi:10.1038/ngeo417. hdl:20.500.11937/44375. ^ a b Shearer, Charles K.; et al. (2006). "Thermal and magmatic evolution of the Moon". Reviews in Mineralogy and Geochemistry. 60 (1): 365–518. Bibcode:2006RvMG...60..365S. doi:10.2138/rmg.2006.60.4. ^ Schubert, J. (2004). "Interior composition, structure, and dynamics of the Galilean satellites.". In F. Bagenal; et al. (eds.). Jupiter: The Planet, Satellites, and Magnetosphere. Cambridge University Press. pp. 281–306. ISBN 978-0-521-81808-7. ^ Williams, J.G.; Turyshev, S.G.; Boggs, D.H.; Ratcliff, J.T. (2006). "Lunar laser ranging science: Gravitational physics and lunar interior and geodesy". Advances in Space Research. 37 (1): 67–71. arXiv:gr-qc/0412049. Bibcode:2006AdSpR..37...67W. doi:10.1016/j.asr.2005.05.013. ^ Spudis, Paul D.; Cook, A.; Robinson, M.; Bussey, B.; Fessler, B. (January 1998). "Topography of the South Polar Region from Clementine Stereo Imaging". Workshop on New Views of the Moon: Integrated Remotely Sensed, Geophysical, and Sample Datasets: 69. Bibcode:1998nvmi.conf...69S. ^ a b c Spudis, Paul D.; Reisse, Robert A.; Gillis, Jeffrey J. (1994). "Ancient Multiring Basins on the Moon Revealed by Clementine Laser Altimetry". Science. 266 (5192): 1848–1851. Bibcode:1994Sci...266.1848S. doi:10.1126/science.266.5192.1848. PMID 17737079. ^ Pieters, C.M.; Tompkins, S.; Head, J.W.; Hess, P.C. (1997). "Mineralogy of the Mafic Anomaly in the South Pole‐Aitken Basin: Implications for excavation of the lunar mantle". Geophysical Research Letters. 24 (15): 1903–1906. Bibcode:1997GeoRL..24.1903P. doi:10.1029/97GL01718. hdl:2060/19980018038. ^ Taylor, G.J. (17 July 1998). "The Biggest Hole in the Solar System". Planetary Science Research Discoveries: 20. 
Bibcode:1998psrd.reptE..20T. Archived from the original on 20 August 2007. Retrieved 12 April 2007. ^ Schultz, P.H. (March 1997). "Forming the south-pole Aitken basin – The extreme games". Conference Paper, 28th Annual Lunar and Planetary Science Conference. 28: 1259. Bibcode:1997LPI....28.1259S. ^ "NASA's LRO Reveals 'Incredible Shrinking Moon'". NASA. 19 August 2010. Archived from the original on 21 August 2010. ^ Watters, Thomas R.; Weber, Renee C.; Collins, Geoffrey C.; Howley, Ian J.; Schmerr, Nicholas C.; Johnson, Catherine L. (June 2019). "Shallow seismic activity and young thrust faults on the Moon". Nature Geoscience (published 13 May 2019). 12 (6): 411–417. Bibcode:2019NatGe..12..411W. doi:10.1038/s41561-019-0362-2. ISSN 1752-0894. ^ Wlasuk, Peter (2000). Observing the Moon. Springer. p. 19. ISBN 978-1-85233-193-1. ^ Norman, M. (21 April 2004). "The Oldest Moon Rocks". Planetary Science Research Discoveries. Hawai'i Institute of Geophysics and Planetology. Archived from the original on 18 April 2007. Retrieved 12 April 2007. ^ Wilson, L.; Head, J.W. (2003). "Lunar Gruithuisen and Mairan domes: Rheology and mode of emplacement". Journal of Geophysical Research. 108 (E2): 5012. Bibcode:2003JGRE..108.5012W. CiteSeerX 10.1.1.654.9619. doi:10.1029/2002JE001909. Archived from the original on 12 March 2007. Retrieved 12 April 2007. ^ a b c d e f g h Spudis, P.D. (2004). "Moon". World Book Online Reference Center, NASA. Archived from the original on 3 July 2013. Retrieved 12 April 2007. ^ Gillis, J.J.; Spudis, P.D. (1996). "The Composition and Geologic Setting of Lunar Far Side Maria". Lunar and Planetary Science. 27: 413. Bibcode:1996LPI....27..413G. ^ Lawrence, D.J., et al. (11 August 1998). "Global Elemental Maps of the Moon: The Lunar Prospector Gamma-Ray Spectrometer". Science. 281 (5382): 1484–1489. Bibcode:1998Sci...281.1484L. doi:10.1126/science.281.5382.1484. PMID 9727970. Archived from the original on 16 May 2009. Retrieved 29 August 2009. ^ Taylor, G.J.
(31 August 2000). "A New Moon for the Twenty-First Century". Planetary Science Research Discoveries: 41. Bibcode:2000psrd.reptE..41T. Archived from the original on 1 March 2012. Retrieved 12 April 2007. ^ a b Papike, J.; Ryder, G.; Shearer, C. (1998). "Lunar Samples". Reviews in Mineralogy and Geochemistry. 36: 5.1–5.234. ^ a b Hiesinger, H.; Head, J.W.; Wolf, U.; Jaumann, R.; Neukum, G. (2003). "Ages and stratigraphy of mare basalts in Oceanus Procellarum, Mare Nubium, Mare Cognitum, and Mare Insularum". Journal of Geophysical Research. 108 (E7): 1029. Bibcode:2003JGRE..108.5065H. doi:10.1029/2002JE001985. ^ a b Phil Berardelli (9 November 2006). "Long Live the Moon!". Science. Archived from the original on 18 October 2014. ^ Jason Major (14 October 2014). "Volcanoes Erupted 'Recently' on the Moon". Discovery News. Archived from the original on 16 October 2014. ^ "NASA Mission Finds Widespread Evidence of Young Lunar Volcanism". NASA. 12 October 2014. Archived from the original on 3 January 2015. ^ Eric Hand (12 October 2014). "Recent volcanic eruptions on the moon". Science. Archived from the original on 14 October 2014. ^ Braden, S.E.; Stopar, J.D.; Robinson, M.S.; Lawrence, S.J.; van der Bogert, C.H.; Hiesinger, H. (2014). "Evidence for basaltic volcanism on the Moon within the past 100 million years". Nature Geoscience. 7 (11): 787–791. Bibcode:2014NatGe...7..787B. doi:10.1038/ngeo2252. ^ Srivastava, N.; Gupta, R.P. (2013). "Young viscous flows in the Lowell crater of Orientale basin, Moon: Impact melts or volcanic eruptions?". Planetary and Space Science. 87: 37–45. Bibcode:2013P&SS...87...37S. doi:10.1016/j.pss.2013.09.001. ^ Gupta, R.P.; Srivastava, N.; Tiwari, R.K. (2014). "Evidences of relatively new volcanic flows on the Moon". Current Science. 107 (3): 454–460. ^ Whitten, J.; et al. (2011).
"Lunar mare deposits associated with the Orientale impact basin: New insights into mineralogy, history, mode of emplacement, and relation to Orientale Basin evolution from Moon Mineralogy Mapper (M3) data from Chandrayaan-1". Journal of Geophysical Research. 116: E00G09. Bibcode:2011JGRE..116.0G09W. doi:10.1029/2010JE003736. ^ Cho, Y.; et al. (2012). "Young mare volcanism in the Orientale region contemporary with the Procellarum KREEP Terrane (PKT) volcanism peak period 2 b.y. ago". Geophysical Research Letters. 39 (11): L11203. Bibcode:2012GeoRL..3911203C. doi:10.1029/2012GL051838. ^ Munsell, K. (4 December 2006). "Majestic Mountains". Solar System Exploration. NASA. Archived from the original on 17 September 2008. Retrieved 12 April 2007. ^ Richard Lovett (2011). "Early Earth may have had two moons : Nature News". Nature. doi:10.1038/news.2011.456. Archived from the original on 3 November 2012. Retrieved 1 November 2012. ^ "Was our two-faced moon in a small collision?". Theconversation.edu.au. Archived from the original on 30 January 2013. Retrieved 1 November 2012. ^ Melosh, H. J. (1989). Impact cratering: A geologic process. Oxford University Press. ISBN 978-0-19-504284-9. ^ "Moon Facts". SMART-1. European Space Agency. 2010. Retrieved 12 May 2010. ^ a b Wilhelms, Don (1987). "Relative Ages" (PDF). Geologic History of the Moon. U.S. Geological Survey. Archived (PDF) from the original on 11 June 2010. ^ Hartmann, William K.; Quantin, Cathy; Mangold, Nicolas (2007). "Possible long-term decline in impact rates: 2. Lunar impact-melt data regarding impact history". Icarus. 186 (1): 11–23. Bibcode:2007Icar..186...11H. doi:10.1016/j.icarus.2006.09.009. ^ "The Smell of Moondust". NASA. 30 January 2006. Archived from the original on 8 March 2010. Retrieved 15 March 2010. ^ Heiken, G. (1991). Vaniman, D.; French, B. (eds.). Lunar Sourcebook, a user's guide to the Moon. New York: Cambridge University Press. p. 736. ISBN 978-0-521-33444-0. ^ Rasmussen, K.L.; Warren, P.H. 
(1985). "Megaregolith thickness, heat flow, and the bulk composition of the Moon". Nature. 313 (5998): 121–124. Bibcode:1985Natur.313..121R. doi:10.1038/313121a0. ^ Boyle, Rebecca. "The moon has hundreds more craters than we thought". Archived from the original on 13 October 2016. ^ Speyerer, Emerson J.; Povilaitis, Reinhold Z.; Robinson, Mark S.; Thomas, Peter C.; Wagner, Robert V. (13 October 2016). "Quantifying crater production and regolith overturn on the Moon with temporal imaging". Nature. 538 (7624): 215–218. Bibcode:2016Natur.538..215S. doi:10.1038/nature19829. PMID 27734864. ^ Margot, J.L.; Campbell, D.B.; Jurgens, R.F.; Slade, M.A. (4 June 1999). "Topography of the Lunar Poles from Radar Interferometry: A Survey of Cold Trap Locations" (PDF). Science. 284 (5420): 1658–1660. Bibcode:1999Sci...284.1658M. CiteSeerX 10.1.1.485.312. doi:10.1126/science.284.5420.1658. PMID 10356393. ^ Ward, William R. (1 August 1975). "Past Orientation of the Lunar Spin Axis". Science. 189 (4200): 377–379. Bibcode:1975Sci...189..377W. doi:10.1126/science.189.4200.377. PMID 17840827. ^ a b Martel, L.M.V. (4 June 2003). "The Moon's Dark, Icy Poles". Planetary Science Research Discoveries: 73. Bibcode:2003psrd.reptE..73M. Archived from the original on 1 March 2012. Retrieved 12 April 2007. ^ Seedhouse, Erik (2009). Lunar Outpost: The Challenges of Establishing a Human Settlement on the Moon. Springer-Praxis Books in Space Exploration. Germany: Springer Praxis. p. 136. ISBN 978-0-387-09746-6. ^ Coulter, Dauna (18 March 2010). "The Multiplying Mystery of Moonwater". NASA. Archived from the original on 13 December 2012. Retrieved 28 March 2010. ^ Spudis, P. (6 November 2006). "Ice on the Moon". The Space Review. Archived from the original on 22 February 2007. Retrieved 12 April 2007. ^ Feldman, W.C.; S. Maurice; A.B. Binder; B.L. Barraclough; R.C. Elphic; D.J. Lawrence (1998). 
"Fluxes of Fast and Epithermal Neutrons from Lunar Prospector: Evidence for Water Ice at the Lunar Poles" (PDF). Science. 281 (5382): 1496–1500. Bibcode:1998Sci...281.1496F. doi:10.1126/science.281.5382.1496. PMID 9727973. ^ Saal, Alberto E.; Hauri, Erik H.; Cascio, Mauro L.; van Orman, James A.; Rutherford, Malcolm C.; Cooper, Reid F. (2008). "Volatile content of lunar volcanic glasses and the presence of water in the Moon's interior". Nature. 454 (7201): 192–195. Bibcode:2008Natur.454..192S. doi:10.1038/nature07047. PMID 18615079. ^ Pieters, C.M.; Goswami, J.N.; Clark, R.N.; Annadurai, M.; Boardman, J.; Buratti, B.; Combe, J.-P.; Dyar, M.D.; Green, R.; Head, J.W.; Hibbitts, C.; Hicks, M.; Isaacson, P.; Klima, R.; Kramer, G.; Kumar, S.; Livo, E.; Lundeen, S.; Malaret, E.; McCord, T.; Mustard, J.; Nettles, J.; Petro, N.; Runyon, C.; Staid, M.; Sunshine, J.; Taylor, L.A.; Tompkins, S.; Varanasi, P. (2009). "Character and Spatial Distribution of OH/H2O on the Surface of the Moon Seen by M3 on Chandrayaan-1". Science. 326 (5952): 568–572. Bibcode:2009Sci...326..568P. doi:10.1126/science.1178658. PMID 19779151. ^ Li, Shuai; Lucey, Paul G.; Milliken, Ralph E.; Hayne, Paul O.; Fisher, Elizabeth; Williams, Jean-Pierre; Hurley, Dana M.; Elphic, Richard C. (August 2018). "Direct evidence of surface exposed water ice in the lunar polar regions". Proceedings of the National Academy of Sciences. 115 (36): 8907–8912. Bibcode:2018PNAS..115.8907L. doi:10.1073/pnas.1802345115. PMC 6130389. PMID 30126996. ^ Lakdawalla, Emily (13 November 2009). "LCROSS Lunar Impactor Mission: "Yes, We Found Water!"". The Planetary Society. Archived from the original on 22 January 2010. Retrieved 13 April 2010. ^ Colaprete, A.; Ennico, K.; Wooden, D.; Shirley, M.; Heldmann, J.; Marshall, W.; Sollitt, L.; Asphaug, E.; Korycansky, D.; Schultz, P.; Hermalyn, B.; Galal, K.; Bart, G.D.; Goldstein, D.; Summy, D. (1–5 March 2010). "Water and More: An Overview of LCROSS Impact Results". 
41st Lunar and Planetary Science Conference. 41 (1533): 2335. Bibcode:2010LPI....41.2335C. ^ Colaprete, Anthony; Schultz, Peter; Heldmann, Jennifer; Wooden, Diane; Shirley, Mark; Ennico, Kimberly; Hermalyn, Brendan; Marshall, William; Ricco, Antonio; Elphic, Richard C.; Goldstein, David; Summy, Dustin; Bart, Gwendolyn D.; Asphaug, Erik; Korycansky, Don; Landis, David; Sollitt, Luke (22 October 2010). "Detection of Water in the LCROSS Ejecta Plume". Science. 330 (6003): 463–468. Bibcode:2010Sci...330..463C. doi:10.1126/science.1186986. PMID 20966242. ^ Hauri, Erik; Thomas Weinreich; Albert E. Saal; Malcolm C. Rutherford; James A. Van Orman (26 May 2011). "High Pre-Eruptive Water Contents Preserved in Lunar Melt Inclusions". Science Express. 10 (1126): 213–215. Bibcode:2011Sci...333..213H. doi:10.1126/science.1204626. PMID 21617039. ^ a b Rincon, Paul (21 August 2018). "Water ice 'detected on Moon's surface'". BBC News. Retrieved 21 August 2018. ^ David, Leonard. "Beyond the Shadow of a Doubt, Water Ice Exists on the Moon". Scientific American. Retrieved 21 August 2018. ^ a b "Water Ice Confirmed on the Surface of the Moon for the 1st Time!". Space.com. Retrieved 21 August 2018. ^ Muller, P.; Sjogren, W. (1968). "Mascons: lunar mass concentrations". Science. 161 (3842): 680–684. Bibcode:1968Sci...161..680M. doi:10.1126/science.161.3842.680. PMID 17801458. ^ Richard A. Kerr (12 April 2013). "The Mystery of Our Moon's Gravitational Bumps Solved?". Science. 340 (6129): 138–139. doi:10.1126/science.340.6129.138-a. PMID 23580504. ^ Konopliv, A.; Asmar, S.; Carranza, E.; Sjogren, W.; Yuan, D. (2001). "Recent gravity models as a result of the Lunar Prospector mission" (PDF). Icarus. 50 (1): 1–18. Bibcode:2001Icar..150....1K. CiteSeerX 10.1.1.18.1930. doi:10.1006/icar.2000.6573. Archived from the original (PDF) on 13 November 2004. ^ a b c Mighani, S.; Wang, H.; Shuster, D.L.; Borlina, C.S.; Nichols, C.I.O.; Weiss, B.P. (2020). "The end of the lunar dynamo". 
Science Advances. 6 (1): eaax0883. Bibcode:2020SciA....6..883M. doi:10.1126/sciadv.aax0883. PMC 6938704. PMID 31911941. ^ Garrick-Bethell, Ian; Weiss, iBenjamin P.; Shuster, David L.; Buz, Jennifer (2009). "Early Lunar Magnetism". Science. 323 (5912): 356–359. Bibcode:2009Sci...323..356G. doi:10.1126/science.1166804. PMID 19150839. ^ "Magnetometer / Electron Reflectometer Results". Lunar Prospector (NASA). 2001. Archived from the original on 27 May 2010. Retrieved 17 March 2010. ^ Hood, L.L.; Huang, Z. (1991). "Formation of magnetic anomalies antipodal to lunar impact basins: Two-dimensional model calculations". Journal of Geophysical Research. 96 (B6): 9837–9846. Bibcode:1991JGR....96.9837H. doi:10.1029/91JB00308. ^ "Moon Storms". NASA. 27 September 2013. Archived from the original on 12 September 2013. Retrieved 3 October 2013. ^ Culler, Jessica (16 June 2015). "LADEE - Lunar Atmosphere Dust and Environment Explorer". Archived from the original on 8 April 2015. ^ Globus, Ruth (1977). "Chapter 5, Appendix J: Impact Upon Lunar Atmosphere". In Richard D. Johnson & Charles Holbrow (ed.). Space Settlements: A Design Study. NASA. Archived from the original on 31 May 2010. Retrieved 17 March 2010. ^ Crotts, Arlin P.S. (2008). "Lunar Outgassing, Transient Phenomena and The Return to The Moon, I: Existing Data" (PDF). The Astrophysical Journal. 687 (1): 692–705. arXiv:0706.3949. Bibcode:2008ApJ...687..692C. doi:10.1086/591634. Archived (PDF) from the original on 20 February 2009. ^ Steigerwald, William (17 August 2015). "NASA's LADEE Spacecraft Finds Neon in Lunar Atmosphere". NASA. Retrieved 18 August 2015. ^ a b c Stern, S.A. (1999). "The Lunar atmosphere: History, status, current problems, and context". Reviews of Geophysics. 37 (4): 453–491. Bibcode:1999RvGeo..37..453S. CiteSeerX 10.1.1.21.9994. doi:10.1029/1999RG900005. ^ Lawson, S.; Feldman, W.; Lawrence, D.; Moore, K.; Elphic, R.; Belian, R. (2005). 
"Recent outgassing from the lunar surface: the Lunar Prospector alpha particle spectrometer". Journal of Geophysical Research. 110 (E9): 1029. Bibcode:2005JGRE..11009009L. doi:10.1029/2005JE002433. ^ R. Sridharan; S.M. Ahmed; Tirtha Pratim Dasa; P. Sreelathaa; P. Pradeepkumara; Neha Naika; Gogulapati Supriya (2010). "'Direct' evidence for water (H2O) in the sunlit lunar ambience from CHACE on MIP of Chandrayaan I". Planetary and Space Science. 58 (6): 947–950. Bibcode:2010P&SS...58..947S. doi:10.1016/j.pss.2010.02.013. ^ Drake, Nadia; 17, National Geographic PUBLISHED June (17 June 2015). "Lopsided Cloud of Dust Discovered Around the Moon". National Geographic News. Archived from the original on 19 June 2015. Retrieved 20 June 2015. CS1 maint: numeric names: authors list (link) ^ Horányi, M.; Szalay, J.R.; Kempf, S.; Schmidt, J.; Grün, E.; Srama, R.; Sternovsky, Z. (18 June 2015). "A permanent, asymmetric dust cloud around the Moon". Nature. 522 (7556): 324–326. Bibcode:2015Natur.522..324H. doi:10.1038/nature14479. PMID 26085272. ^ "NASA: The Moon Once Had an Atmosphere That Faded Away". Time. ^ Hamilton, Calvin J.; Hamilton, Rosanna L., The Moon, Views of the Solar System Archived 4 February 2016 at the Wayback Machine, 1995–2011. ^ a b Amos, Jonathan (16 December 2009). "'Coldest place' found on the Moon". BBC News. Retrieved 20 March 2010. ^ "Diviner News". UCLA. 17 September 2009. Archived from the original on 7 March 2010. Retrieved 17 March 2010. ^ Rocheleau, Jake (21 May 2012). "Temperature on the Moon – Surface Temperature of the Moon – PlanetFacts.org". Archived from the original on 27 May 2015. ^ Haigh, I. D.; Eliot, M.; Pattiaratchi, C. (2011). "Global influences of the 18.61 year nodal cycle and 8.85 year cycle of lunar perigee on high tidal levels" (PDF). J. Geophys. Res. 116 (C6): C06025. Bibcode:2011JGRC..116.6025H. doi:10.1029/2010JC006645. CS1 maint: uses authors parameter (link) ^ V V Belet︠s︡kiĭ (2001). Essays on the Motion of Celestial Bodies. 
Birkhäuser. p. 183. ISBN 978-3-7643-5866-2. ^ "Space Topics: Pluto and Charon". The Planetary Society. Archived from the original on 18 February 2012. Retrieved 6 April 2010. ^ Phil Plait. "Dark Side of the Moon". Bad Astronomy: Misconceptions. Archived from the original on 12 April 2010. Retrieved 15 February 2010. ^ Alexander, M.E. (1973). "The Weak Friction Approximation and Tidal Evolution in Close Binary Systems". Astrophysics and Space Science. 23 (2): 459–508. Bibcode:1973Ap&SS..23..459A. doi:10.1007/BF00645172. ^ "Moon used to spin 'on different axis'". BBC News. BBC. 23 March 2016. Archived from the original on 23 March 2016. Retrieved 23 March 2016. ^ Luciuk, Mike. "How Bright is the Moon?". Amateur Astronomers. Archived from the original on 12 March 2010. Retrieved 16 March 2010. ^ Hershenson, Maurice (1989). The Moon illusion. Routledge. p. 5. ISBN 978-0-8058-0121-7. ^ Spekkens, K. (18 October 2002). "Is the Moon seen as a crescent (and not a "boat") all over the world?". Curious About Astronomy. Archived from the original on 16 October 2015. Retrieved 28 September 2015. ^ "Moonlight helps plankton escape predators during Arctic winters". New Scientist. 16 January 2016. Archived from the original on 30 January 2016. ^ ""Super Moon" exceptional. Brightest moon in the sky of Normandy, Monday, November 14 - The Siver Times". 12 November 2016. Archived from the original on 14 November 2016. ^ "Moongazers Delight – Biggest Supermoon in Decades Looms Large Sunday Night". 10 November 2016. Archived from the original on 14 November 2016. ^ "Supermoon November 2016". Space.com. 13 November 2016. Archived from the original on 14 November 2016. Retrieved 14 November 2016. ^ Tony Phillips (16 March 2011). "Super Full Moon". NASA. Archived from the original on 7 May 2012. Retrieved 19 March 2011. ^ Richard K. De Atley (18 March 2011). "Full moon tonight is as close as it gets". The Press-Enterprise. Archived from the original on 22 March 2011. 
Retrieved 19 March 2011. ^ "'Super moon' to reach closest point for almost 20 years". The Guardian. 19 March 2011. Archived from the original on 25 December 2013. Retrieved 19 March 2011. ^ Georgia State University, Dept. of Physics (Astronomy). "Perceived Brightness". Brightnes and Night/Day Sensitivity. Georgia State University. Archived from the original on 21 February 2014. Retrieved 25 January 2014. ^ Lutron. "Measured light vs. perceived light" (PDF). From IES Lighting Handbook 2000, 27-4. Lutron. Archived (PDF) from the original on 5 February 2013. Retrieved 25 January 2014. ^ Walker, John (May 1997). "Inconstant Moon". Earth and Moon Viewer. Fourth paragraph of "How Bright the Moonlight": Fourmilab. Archived from the original on 14 December 2013. Retrieved 23 January 2014. 14% [...] due to the logarithmic response of the human eye. ^ Taylor, G.J. (8 November 2006). "Recent Gas Escape from the Moon". Planetary Science Research Discoveries: 110. Bibcode:2006psrd.reptE.110T. Archived from the original on 4 March 2007. Retrieved 4 April 2007. ^ Schultz, P.H.; Staid, M.I.; Pieters, C.M. (2006). "Lunar activity from recent gas release". Nature. 444 (7116): 184–186. Bibcode:2006Natur.444..184S. doi:10.1038/nature05303. PMID 17093445. ^ "22 Degree Halo: a ring of light 22 degrees from the sun or moon". Department of Atmospheric Sciences, University of Illinois at Urbana–Champaign. Retrieved 13 April 2010. ^ a b c d e Lambeck, K. (1977). "Tidal Dissipation in the Oceans: Astronomical, Geophysical and Oceanographic Consequences". Philosophical Transactions of the Royal Society A. 287 (1347): 545–594. Bibcode:1977RSPTA.287..545L. doi:10.1098/rsta.1977.0159. ^ Le Provost, C.; Bennett, A.F.; Cartwright, D.E. (1995). "Ocean Tides for and from TOPEX/POSEIDON". Science. 267 (5198): 639–642. Bibcode:1995Sci...267..639L. doi:10.1126/science.267.5198.639. PMID 17745840. ^ a b c d Touma, Jihad; Wisdom, Jack (1994). "Evolution of the Earth-Moon system". 
The Astronomical Journal. 108 (5): 1943–1961. Bibcode:1994AJ....108.1943T. doi:10.1086/117209. ^ Chapront, J.; Chapront-Touzé, M.; Francou, G. (2002). "A new determination of lunar orbital parameters, precession constant and tidal acceleration from LLR measurements" (PDF). Astronomy and Astrophysics. 387 (2): 700–709. Bibcode:2002A&A...387..700C. doi:10.1051/0004-6361:20020420. ^ "Why the Moon is getting further away from Earth". BBC News. 1 February 2011. Archived from the original on 25 September 2015. Retrieved 18 September 2015. ^ Ray, R. (15 May 2001). "Ocean Tides and the Earth's Rotation". IERS Special Bureau for Tides. Archived from the original on 27 March 2010. Retrieved 17 March 2010. ^ Murray, C.D.; Dermott, Stanley F. (1999). Solar System Dynamics. Cambridge University Press. p. 184. ISBN 978-0-521-57295-8. ^ Dickinson, Terence (1993). From the Big Bang to Planet X. Camden East, Ontario: Camden House. pp. 79–81. ISBN 978-0-921820-71-0. ^ Latham, Gary; Ewing, Maurice; Dorman, James; Lammlein, David; Press, Frank; Toksőz, Naft; Sutton, George; Duennebier, Fred; Nakamura, Yosio (1972). "Moonquakes and lunar tectonism". Earth, Moon, and Planets. 4 (3–4): 373–382. Bibcode:1972Moon....4..373L. doi:10.1007/BF00562004. ^ Phillips, Tony (12 March 2007). "Stereo Eclipse". Science@NASA. Archived from the original on 10 June 2008. Retrieved 17 March 2010. ^ Espenak, F. (2000). "Solar Eclipses for Beginners". MrEclip]]. Retrieved 17 March 2010. ^ Walker, John (10 July 2004). "Moon near Perigee, Earth near Aphelion". Fourmilab. Archived from the original on 8 December 2013. Retrieved 25 December 2013. ^ Thieman, J.; Keating, S. (2 May 2006). "Eclipse 99, Frequently Asked Questions". NASA. Archived from the original on 11 February 2007. Retrieved 12 April 2007. ^ Espenak, F. "Saros Cycle". NASA. Archived from the original on 24 May 2012. Retrieved 17 March 2010. ^ Guthrie, D.V. (1947). "The Square Degree as a Unit of Celestial Area". Popular Astronomy. Vol. 55. pp. 
200–203. Bibcode:1947PA.....55..200G. ^ "Total Lunar Occultations". Royal Astronomical Society of New Zealand. Archived from the original on 23 February 2010. Retrieved 17 March 2010. ^ "Lunar maps". Retrieved 18 September 2019. ^ "Carved and Drawn Prehistoric Maps of the Cosmos". Space Today. 2006. Archived from the original on 5 March 2012. Retrieved 12 April 2007. ^ Aaboe, A.; Britton, J.P.; Henderson, J.A.; Neugebauer, Otto; Sachs, A.J. (1991). "Saros Cycle Dates and Related Babylonian Astronomical Texts". Transactions of the American Philosophical Society. 81 (6): 1–75. doi:10.2307/1006543. JSTOR 1006543. One comprises what we have called "Saros Cycle Texts", which give the months of eclipse possibilities arranged in consistent cycles of 223 months (or 18 years). ^ Sarma, K.V. (2008). "Astronomy in India". In Helaine Selin (ed.). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Encyclopaedia of the History of Science (2 ed.). Springer. pp. 317–321. Bibcode:2008ehst.book.....S. ISBN 978-1-4020-4559-2. ^ a b c d Needham, Joseph (1986). Science and Civilization in China, Volume III: Mathematics and the Sciences of the Heavens and Earth. Taipei: Caves Books. ISBN 978-0-521-05801-8. ^ O'Connor, J.J.; Robertson, E.F. (February 1999). "Anaxagoras of Clazomenae". University of St Andrews. Archived from the original on 12 January 2012. Retrieved 12 April 2007. ^ Robertson, E.F. (November 2000). "Aryabhata the Elder". Scotland: School of Mathematics and Statistics, University of St Andrews. Archived from the original on 11 July 2015. Retrieved 15 April 2010. ^ A.I. Sabra (2008). "Ibn Al-Haytham, Abū ʿAlī Al-Ḥasan Ibn Al-Ḥasan". Dictionary of Scientific Biography. Detroit: Charles Scribner's Sons. pp. 189–210, at 195. ^ Lewis, C.S. (1964). The Discarded Image. Cambridge: Cambridge University Press. p. 108. ISBN 978-0-521-47735-2. ^ van der Waerden, Bartel Leendert (1987). 
"The Heliocentric System in Greek, Persian and Hindu Astronomy". Annals of the New York Academy of Sciences. 500 (1): 1–569. Bibcode:1987NYASA.500....1A. doi:10.1111/j.1749-6632.1987.tb37193.x. PMID 3296915. ^ Evans, James (1998). The History and Practice of Ancient Astronomy. Oxford & New York: Oxford University Press. pp. 71, 386. ISBN 978-0-19-509539-5. ^ "Discovering How Greeks Computed in 100 B.C." The New York Times. 31 July 2008. Archived from the original on 4 December 2013. Retrieved 9 March 2014. ^ Van Helden, A. (1995). "The Moon". Galileo Project. Archived from the original on 23 June 2004. Retrieved 12 April 2007. ^ Consolmagno, Guy J. (1996). "Astronomy, Science Fiction and Popular Culture: 1277 to 2001 (And beyond)". Leonardo. 29 (2): 127–132. doi:10.2307/1576348. JSTOR 1576348. ^ Hall, R. Cargill (1977). "Appendix A: Lunar Theory Before 1964". NASA History Series. Lunar Impact: A History of Project Ranger. Washington, DC: Scientific and Technical Information Office, NASA. Archived from the original on 10 April 2010. Retrieved 13 April 2010. ^ Zak, Anatoly (2009). "Russia's unmanned missions toward the Moon". Archived from the original on 14 April 2010. Retrieved 20 April 2010. ^ "Rocks and Soils from the Moon". NASA. Archived from the original on 27 May 2010. Retrieved 6 April 2010. ^ a b "Soldiers, Spies and the Moon: Secret U.S. and Soviet Plans from the 1950s and 1960s". The National Security Archive. National Security Archive. Archived from the original on 19 December 2016. Retrieved 1 May 2017. ^ Brumfield, Ben (25 July 2014). "U.S. reveals secret plans for '60s moon base". CNN. Archived from the original on 27 July 2014. Retrieved 26 July 2014. ^ Teitel, Amy (11 November 2013). "LUNEX: Another way to the Moon". Popular Science. Archived from the original on 16 October 2015. ^ a b Logsdon, John (2010). John F. Kennedy and the Race to the Moon. Palgrave Macmillan. ISBN 978-0-230-11010-6. ^ Coren, M. (26 July 2004). 
"'Giant leap' opens world of possibility". CNN. Archived from the original on 20 January 2012. Retrieved 16 March 2010. ^ "Record of Lunar Events, 24 July 1969". Apollo 11 30th anniversary. NASA. Archived from the original on 8 April 2010. Retrieved 13 April 2010. ^ "Manned Space Chronology: Apollo_11". Spaceline.org. Archived from the original on 14 February 2008. Retrieved 6 February 2008. ^ "Apollo Anniversary: Moon Landing "Inspired World"". National Geographic. Archived from the original on 9 February 2008. Retrieved 6 February 2008. ^ Orloff, Richard W. (September 2004) [First published 2000]. "Extravehicular Activity". Apollo by the Numbers: A Statistical Reference. NASA History Division, Office of Policy and Plans. The NASA History Series. Washington, DC: NASA. ISBN 978-0-16-050631-4. LCCN 00061677. NASA SP-2000-4029. Archived from the original on 6 June 2013. Retrieved 1 August 2013. ^ Launius, Roger D. (July 1999). "The Legacy of Project Apollo". NASA History Office]]. Archived from the original on 8 April 2010. Retrieved 13 April 2010. ^ SP-287 What Made Apollo a Success? A series of eight articles reprinted by permission from the March 1970 issue of Astronautics & Aeronautics, a publication of the American Institute of Aeronautics and Astronautics. Washington, DC: Scientific and Technical Information Office, National Aeronautics and Space Administration. 1971. ^ "NASA news release 77-47 page 242" (PDF) (Press release). 1 September 1977. Archived (PDF) from the original on 4 June 2011. Retrieved 16 March 2010. ^ Appleton, James; Radley, Charles; Deans, John; Harvey, Simon; Burt, Paul; Haxell, Michael; Adams, Roy; Spooner N.; Brieske, Wayne (1977). "NASA Turns A Deaf Ear To The Moon". OASI Newsletters Archive. Archived from the original on 10 December 2007. Retrieved 29 August 2007. ^ Dickey, J.; et al. (1994). "Lunar laser ranging: a continuing legacy of the Apollo program". Science. 265 (5171): 482–490. Bibcode:1994Sci...265..482D. 
doi:10.1126/science.265.5171.482. PMID 17781305. ^ "Hiten-Hagomoro". NASA. Archived from the original on 14 June 2011. Retrieved 29 March 2010. ^ "Clementine information". NASA. 1994. Archived from the original on 25 September 2010. Retrieved 29 March 2010. ^ "Lunar Prospector: Neutron Spectrometer". NASA. 2001. Archived from the original on 27 May 2010. Retrieved 29 March 2010. ^ "SMART-1 factsheet". [¹[European Space Agency]]. 26 February 2007. Archived from the original on 23 March 2010. Retrieved 29 March 2010. ^ "China's first lunar probe ends mission". Xinhua. 1 March 2009. Archived from the original on 4 March 2009. Retrieved 29 March 2010. ^ Leonard David (17 March 2015). "China Outlines New Rockets, Space Station and Moon Plans". Space.com. Archived from the original on 1 July 2016. Retrieved 29 June 2016. ^ "KAGUYA Mission Profile". JAXA. Archived from the original on 28 March 2010. Retrieved 13 April 2010. ^ "KAGUYA (SELENE) World's First Image Taking of the Moon by HDTV". Japan Aerospace Exploration Agency (JAXA) and Japan Broadcasting Corporation (NHK). 7 November 2007. Archived from the original on 16 March 2010. Retrieved 13 April 2010. ^ "Mission Sequence". Indian Space Research Organisation. 17 November 2008. Archived from the original on 6 July 2010. Retrieved 13 April 2010. ^ "Indian Space Research Organisation: Future Program". Indian Space Research Organisation. Archived from the original on 25 November 2010. Retrieved 13 April 2010. ^ "India and Russia Sign an Agreement on Chandrayaan-2". Indian Space Research Organisation. 14 November 2007. Archived from the original on 17 December 2007. Retrieved 13 April 2010. ^ "Lunar CRater Observation and Sensing Satellite (LCROSS): Strategy & Astronomer Observation Campaign". NASA. October 2009. Archived from the original on 1 January 2012. Retrieved 13 April 2010. ^ "Giant moon crater revealed in spectacular up-close photos". NBC News. Space.com. 6 January 2012. ^ Chang, Alicia (26 December 2011). 
"Twin probes to circle moon to study gravity field". Phys.org. Associated Press. Retrieved 22 July 2018. ^ Covault, C. (4 June 2006). "Russia Plans Ambitious Robotic Lunar Mission". Aviation Week. Archived from the original on 12 June 2006. Retrieved 12 April 2007. ^ "About the Google Lunar X Prize". X-Prize Foundation. 2010. Archived from the original on 28 February 2010. Retrieved 24 March 2010. ^ Wall, Mike (14 January 2011). "Mining the Moon's Water: Q&A with Shackleton Energy's Bill Stone". Space News. ^ "President Bush Offers New Vision For NASA" (Press release). NASA. 14 December 2004. Archived from the original on 10 May 2007. Retrieved 12 April 2007. ^ "Constellation". NASA. Archived from the original on 12 April 2010. Retrieved 13 April 2010. ^ "NASA Unveils Global Exploration Strategy and Lunar Architecture" (Press release). NASA. 4 December 2006. Archived from the original on 23 August 2007. Retrieved 12 April 2007. ^ NASAtelevision (15 April 2010). "President Obama Pledges Total Commitment to NASA". YouTube. Archived from the original on 28 April 2012. Retrieved 7 May 2012. ^ "India's Space Agency Proposes Manned Spaceflight Program". Space.com. 10 November 2006. Archived from the original on 11 April 2012. Retrieved 23 October 2008. ^ SpaceX to help Vodafone and Nokia install first 4G signal on the Moon | The Week UK ^ "NASA plans to send first woman on Moon by 2024". The Asian Age. 15 May 2019. Retrieved 15 May 2019. ^ Chang, Kenneth (24 January 2017). "For 5 Contest Finalists, a $20 Million Dash to the Moon". The New York Times. ISSN 0362-4331. Archived from the original on 15 July 2017. Retrieved 13 July 2017. ^ Mike Wall (16 August 2017), "Deadline for Google Lunar X Prize Moon Race Extended Through March 2018", space.com, retrieved 25 September 2017 ^ McCarthy, Ciara (3 August 2016). "US startup Moon Express approved to make 2017 lunar mission". The Guardian. ISSN 0261-3077. Archived from the original on 30 July 2017. Retrieved 13 July 2017. 
^ "An Important Update From Google Lunar XPRIZE". Google Lunar XPRIZE. 23 January 2018. Retrieved 12 May 2018. ^ a b "Moon Express Approved for Private Lunar Landing in 2017, a Space First". Space.com. Archived from the original on 12 July 2017. Retrieved 13 July 2017. ^ Chang, Kenneth (29 November 2018). "NASA's Return to the Moon to Start With Private Companies' Spacecraft". The New York Times. The New York Times Company. Retrieved 29 November 2018. ^ "NASA - Ultraviolet Waves". Science.hq.nasa.gov. 27 September 2013. Archived from the original on 17 October 2013. Retrieved 3 October 2013. ^ Takahashi, Yuki (September 1999). "Mission Design for Setting up an Optical Telescope on the Moon". California Institute of Technology. Archived from the original on 6 November 2015. Retrieved 27 March 2011. ^ Chandler, David (15 February 2008). "MIT to lead development of new telescopes on moon". MIT News. Archived from the original on 4 March 2009. Retrieved 27 March 2011. ^ Naeye, Robert (6 April 2008). "NASA Scientists Pioneer Method for Making Giant Lunar Telescopes". Goddard Space Flight Center. Archived from the original on 22 December 2010. Retrieved 27 March 2011. ^ Bell, Trudy (9 October 2008). "Liquid Mirror Telescopes on the Moon". Science News. NASA. Archived from the original on 23 March 2011. Retrieved 27 March 2011. ^ "Far Ultraviolet Camera/Spectrograph". Lpi.usra.edu. Archived from the original on 3 December 2013. Retrieved 3 October 2013. ^ a b "Can any State claim a part of outer space as its own?". United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010. ^ "How many States have signed and ratified the five international treaties governing outer space?". United Nations Office for Outer Space Affairs. 1 January 2006. Archived from the original on 21 April 2010. Retrieved 28 March 2010. ^ "Do the five international treaties regulate military activities in outer space?". 
United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010. ^ "Agreement Governing the Activities of States on the Moon and Other Celestial Bodies". United Nations Office for Outer Space Affairs. Archived from the original on 9 August 2010. Retrieved 28 March 2010. ^ "The treaties control space-related activities of States. What about non-governmental entities active in outer space, like companies and even individuals?". United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010. ^ "Statement by the Board of Directors of the IISL On Claims to Property Rights Regarding The Moon and Other Celestial Bodies (2004)" (PDF). International Institute of Space Law. 2004. Archived (PDF) from the original on 22 December 2009. Retrieved 28 March 2010. ^ "Further Statement by the Board of Directors of the IISL On Claims to Lunar Property Rights (2009)" (PDF). International Institute of Space Law. 22 March 2009. Archived (PDF) from the original on 22 December 2009. Retrieved 28 March 2010. ^ a b c d Dexter, Miriam Robbins (1984). "Proto-Indo-European Sun Maidens and Gods of the Moon". Mankind Quarterly. 25 (1 & 2): 137–144. ^ a b c d e Nemet-Nejat, Karen Rhea (1998), Daily Life in Ancient Mesopotamia, Daily Life, Greenwood, p. 203, ISBN 978-0-313-29497-6 ^ a b c d e Black, Jeremy; Green, Anthony (1992). Gods, Demons and Symbols of Ancient Mesopotamia: An Illustrated Dictionary. The British Museum Press. p. 135. ISBN 978-0-7141-1705-8. CS1 maint: ref=harv (link) ^ Zschietzschmann, W. (2006). Hellas and Rome: The Classical World in Pictures. Whitefish, Montana: Kessinger Publishing. p. 23. ISBN 978-1-4286-5544-7. CS1 maint: ref=harv (link) ^ Cohen, Beth (2006). "Outline as a Special Technique in Black- and Red-figure Vase-painting". The Colors of Clay: Special Techniques in Athenian Vases. Los Angeles: Getty Publications. pp. 178–179. ISBN 978-0-89236-942-3. 
1-To be published in IEEE Xplore, authors must submit their abstracts online through the IEEE PDF eXpress system by June 25, 2018.
4-Upon process completion, the option "Approve for collection" will become available in your account. Press that option if you are satisfied with the final result. The next screen will confirm the status of the paper as "Approved for collection". The process is complete; you can log off.
5-About the IEEE electronic copyright form: after submission of the accepted paper to IEEE in July, authors will receive an email from IEEE to fill in the electronic copyright form for the submitted paper. This will complete the process for submission to Xplore.
Download the IEEE Requirements for PDF Documents V3.2. Abstracts may be no longer than 1 page, including all text, figures, and references. Please note that after the submission deadline the list and the order of the authors cannot be modified and must remain unchanged in the final version of the manuscript. Photonics North requires that each accepted paper be presented by one of the authors in person at the conference site according to the published schedule. Presentation by anyone other than one of the co-authors (proxies, video, or remote cast) is not allowed, unless explicitly approved before the conference by the technical co-chairs. For posters, one author must be present at the poster during the entire duration of the session. Any paper accepted into the technical program but not presented on-site will be withdrawn from the official proceedings archived on IEEE Xplore. The text of the paper should contain discussions on how the paper's contributions are related to prior work in the field. It is important to put new work in context, to give credit to foundational work, and to provide details associated with the previous work that have appeared in the literature. 
This discussion may be a separate, numbered section, or it may appear elsewhere in the body of the manuscript, but it must be present. You should differentiate what is new and how your work expands on or takes a different path from the prior studies. The review process will be performed from the electronic submission of your paper. To ensure that your document is compatible with the review system and the proceedings system, you MUST adhere to the following requirements. Papers must be submitted in Adobe's Portable Document Format (PDF) and must strictly adhere to the IEEE Requirements for PDF Documents v3.2. Monochrome images must be down-sampled at 600 dpi, and grayscale and color images at 300 dpi. Authors will be permitted to submit files weighing up to 10 MB. When submitting your paper, the online submission system will ask you to rename your file with a specific name that will be given to you at that time. Please strictly comply with this instruction. Once again, please note that only PDF files will be accepted. English is the official language of the conference. As a result, all papers must be entirely submitted (and presented) in English. The paper abstract should appear at the top of the left-hand column of text, about 12 mm (0.5") below the title area, and should be no more than 80 mm (3.125") in length. Leave 12 mm (0.5") of space between the end of the abstract and the beginning of the main text. To achieve the best viewing experience for the review process and conference proceedings, we strongly encourage authors to use Times-Roman or Computer Modern fonts. If a font face is used that is not recognized by the submission system, your proposal will not be reproduced correctly. Use a font size that is no smaller than 9 points throughout the paper, including figure captions. In 9-point type, capital letters are 2 mm high. For 9-point type, there should be no more than 3.2 lines/cm (8 lines/inch) vertically. 
This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make the proposal much more readable. Larger type sizes require correspondingly larger vertical spacing. The paper title must appear in boldface letters and should be in ALL CAPITALS. Do not use LaTeX math notation ($x_y$) in the title; the title must be representable in the Unicode character set. Lastly, try to avoid uncommon acronyms in the title. The authors' name(s) and affiliation(s) appear below the title in capital and lower case letters. ICIP does not perform blind reviews, so be sure to include the author list in your submitted paper. Proposals with multiple authors and affiliations may require two or more lines for this information. The order of the authors on the document should exactly match in number and order the authors typed into the online submission form. Questions concerning the paper-submission process should be addressed to [email protected]. Include your paper number(s) and title(s) on all correspondence.
Journal of Wood Science (Official Journal of the Japan Wood Research Society), volume 68, article number 48 (2022)
Relationship between the xylem maturation process based on radial variations in wood properties and radial growth increments of stems in a fast-growing tree species, Liriodendron tulipifera
Ikumi Nezu, Futoshi Ishiguri (ORCID: orcid.org/0000-0002-1870-4060), Jyunichi Ohshima & Shinso Yokota

Promoting wood utilization from fast-growing tree species is one solution to address supply and demand issues relating to wood resources while sequestering carbon dioxide in large quantities. Information on the quality of wood from fast-growing tree species and its relationship with changes in stem size is essential for promoting the establishment of plantations and wood utilization of fast-growing tree species. To explore the relationship between the xylem maturation process and radial growth increments of stems in fast-growing tree species, we examined radial variations in annual ring widths and wood properties in Liriodendron tulipifera in Japan. The cambial ages at which current annual increment and mean annual increment values were greatest were 4.9 years and 7.4 years, respectively. Based on radial variations evaluated by mixed-effects modeling of wood properties, all properties increased or decreased near the pith before becoming stable towards the cambium. Changing ratios of multiple wood properties at 1-year intervals became stable after a cambial age of 9 years. These results point to an ecological strategy in L. tulipifera, in which there is a tradeoff between radial growth increments and wood properties. As part of this strategy, in response to competition among individual trees within a stand, the tree produces a large volume of xylem with lower physical and mechanical properties, allowing it to increase its volume faster than that of the surrounding trees. 
Subsequently, it produces xylem that is more stable, with greater physical and mechanical properties. This wood forms at a slower growth rate compared to the xylem that forms at the time of initial tree growth. Based on the ecological strategy adopted by L. tulipifera, wood that forms before a cambial age of 9 years can be used for utility applications, and wood that forms after a cambial age of 9 years can be used for structural applications. To fill the gap in the supply and demand for wood resources, one solution is to promote the utilization of wood from fast-growing tree species. In addition to resolving supply issues, the use of wood from fast-growing tree plantations can reduce atmospheric carbon dioxide, as the trees can act as a massive carbon sink. Information on the quality of wood from fast-growing tree species is essential for promoting the establishment of plantations and wood utilization of fast-growing tree species. In many cases, wood that forms in the region near the pith exhibits relatively unstable properties, whereas wood located outside of this zone has stable properties [1, 2]. The former is referred to as 'core wood' or 'juvenile wood' and the latter as 'outer wood' or 'mature wood' [1, 2]. The boundary between the core and outer wood is the position where xylem maturation commences. In fast-growing tree species, xylem maturation is commonly estimated from the radial variation of a single anatomical trait [2,3,4,5,6,7,8,9]. However, radial variation patterns can vary among species [4,5,6, 9, 10]. At the species level, physical and mechanical properties, as well as anatomical characteristics, also can vary [4, 6, 9, 10]. For example, Ohbayashi and Shiokura [5] found that anatomical characteristics changed considerably near the pith and then became stable in three tropical fast-growing tree species, Anthocephalus chinensis and Gmelina arborea in the Philippines and Eucalyptus saligna in Brazil. 
They also revealed that the oven-dry density increased slightly from the pith toward the cambium. Thus, the position where these properties become stable differed between the anatomical characteristics and the oven-dry density in these species. In a study on Populus simonii × beijingensis grown in China, the cambial age at which vessel lumen diameter reached a stable value in its radial variation was earlier than that for wood fiber length and vessel element length [9]. Thus, to determine the boundary between the core and outer wood precisely, studies need to focus on radial variations in multiple traits in fast-growing tree species. We recently estimated the boundary position in tropical fast-growing tree species (Acacia mangium, Maesopsis eminii, and Melia azedarach) in Indonesia based on radial variations in anatomical characteristics and physical and mechanical properties [10]. Although the boundary positions differed depending on the property considered, it was possible to estimate the boundary using multiple wood properties combined with mixed-effects modeling and an exponential function with convergence. To our knowledge, this method of determining boundary positions has not been applied to fast-growing tree species in temperate zones. The xylem maturation process in fast-growing tree species in the temperate zone needs to be elucidated to promote the utilization of wood from these trees. Radial growth increments can be represented by a sigmoid model, such as the Gompertz function [11]. Based on the sigmoid model, the current annual increment (CAI), which is the difference in radial growth at the beginning and end of the year, can be estimated. In addition to the CAI, the mean annual increment (MAI) can be calculated by dividing the growth values through time by the number of years. Both the CAI and MAI are used in silviculture. For example, thinning and harvesting are conducted at times of maximum CAI and MAI values, respectively. 
Numerous studies have investigated the relationship between the xylem maturation process and radial growth increments of stems expressed as CAI and MAI in many fast-growing tree species [3, 5, 7, 9, 12, 13]. In previous research, we estimated this relationship in various fast-growing tree species in tropical and boreal zones [14, 15]. In Eucalyptus camaldulensis in Thailand (tropical zone), after 5 years, the basic density and compressive strength values of the wood were stable, and the radial growth increments of stems reached maximum values [14]. However, in Betula platyphylla in Mongolia (boreal zone), an analysis of fiber increments showed that xylem maturation began before the age of 10 years, earlier than the age of maximum radial growth increment [15]. Thus, the relationship between the xylem maturation process and radial growth increments of stems might differ among properties, tree growth conditions, and other factors. To our knowledge, no studies on the xylem maturation process have focused on multiple properties of wood and radial growth increments of stems in fast-growing tree species in a temperate zone. The relationship between the xylem maturation process and radial growth rate should be elucidated in temperate fast-growing tree species belonging to the genera Populus, Eucalyptus, Liriodendron, and Paulownia. Liriodendron tulipifera L. is considered to be among the fastest growing tree species in the temperate zone. Although its wood is used for furniture and structural timber in North America [16], this species is used mainly in roadside and garden plantings in Japan. Recently, the establishment of plantations of fast-growing tree species with short rotations has been considered for improving the profitability of the forestry and wood industry in Japan. At present, the Japanese forestry and wood industry sectors thus focus on fast-growing tree species, including L. tulipifera. Several trial examinations of L. 
tulipifera grown in Japan for furniture and structural lumber are desirable [17, 18]. Previously, we evaluated the quality of dimension lumber (2 by 4 lumber, 38 × 89 mm in cross section) from this species and found that its bending properties were similar to those of Cryptomeria japonica, which is the main species used for structural lumber in Japan [18]. However, the xylem maturation process in L. tulipifera and its relationship to radial growth increments are unknown. In the present study, we investigated radial growth increments and radial variations in anatomical characteristics and physical and mechanical properties in L. tulipifera in Japan. We evaluated radial variations in wood properties using linear or nonlinear mixed-effects models. In addition, we estimated the cambial age at which xylem maturation began based on selected models of radial variations in multiple wood properties. Furthermore, we elucidated the relationship between the xylem maturation process and radial growth increments. We suggest appropriate utilization of L. tulipifera wood based on the ecological strategy of this species. Nine trees were obtained from the nursery of Utsunomiya University, Japan (36°32'N and 139°54'E). The nine trees were regenerated through coppicing after the original stems had been cut once. The trees were planted at 1–2 m intervals. The genetic source was unknown. The stem diameter was measured at 1.3 m above the ground using a diameter tape. After felling the trees, tree height was measured using a tape measure. The mean stem diameter and tree height values were 16.9 cm and 12.2 m, respectively (Table 1) [18]. The number of annual rings at 1.3 m above the ground ranged from 9 to 14 (Table 1), suggesting that the tree age was around 10 years or more [18]. 
Table 1 Statistical values of growth characteristics and wood properties of sampled trees
A disc 2 cm in width was taken at 1.3 m above the ground from each tree for measurements of annual ring width and anatomical characteristics. In addition, logs 50 cm in length were obtained from 0.8 to 1.3 m above the ground for use in bending and compressive tests. Bark-to-bark radial strips containing the pith (5 cm in the tangential direction and 1 cm in the longitudinal direction) were prepared from the discs. The transverse surface of each radial strip was sanded, and a transverse image of the strip was obtained using an image scanner (GT-9300; Epson, Suwa, Japan) at 800 dpi. Annual ring width was measured from pith to bark in two directions using ImageJ software (National Institutes of Health, Bethesda, Maryland, USA). Annual ring width at each cambial age was determined by averaging the annual ring width values obtained from the two directions in a strip. Small stick samples (10 [L] × 1 [R] × 1 [T] mm) and small block samples (10 [L] × 10 [R] × 5 [T] mm) were collected from the radial strips at 1 cm intervals from the pith to the cambium. The stick samples were macerated with Schultze's solution (100 mL of 35% nitric acid containing 6 g of potassium chloride). The macerated samples were placed on a slide glass and mounted with a coverslip and 75% glycerol. The lengths of 50 wood fibers and 30 vessel elements were then measured using a profile projector (V-12B; Nikon, Tokyo, Japan) and a digital caliper (CD-30C; Mitutoyo, Kawasaki, Japan). Transverse sections 20 μm in thickness were prepared from the block samples using a sliding microtome (REM-710; Yamato Kohki, Saitama, Japan). The sections were stained with 1% safranin, dehydrated with graded ethanol, and finally immersed in xylene. The sections were placed on glass slides and mounted using Bioleit (Oken Shoji, Tokyo, Japan) and coverslips. 
Digital images at each radial position were obtained using a microscope (BX-51; Olympus, Tokyo, Japan) equipped with a digital camera (DS-2210; Sato Shouji Inc., Kawasaki, Japan). Using the cross-sectional images at each radial position, vessel frequency, vessel diameter, wood fiber diameter, and wood fiber wall thickness were determined using ImageJ software. The number of vessels in each digital image was counted, and the vessel frequency was calculated by dividing the number of vessels by the area of each transverse sectional image. The diameters of wood fibers and wood fiber lumina were determined by averaging the major and minor radii, respectively. Each entire wood fiber wall region was regarded as a trapezoid, and the wood fiber wall thickness was calculated using the method described by Yoshinaga et al. [19]. At each radial position, 30 vessels and 50 wood fibers were measured. Pith-to-bark radial boards containing the pith (2 cm in width and 50 cm in length) were collected from the logs. After air-drying in a laboratory at 20 ℃ and 65% relative humidity, the boards were planed to a width of 1 cm. Finally, bending strength specimens (160 [L] × 10 [R] × 10 [T] mm) and compressive strength specimens (20 [L] × 10 [R] × 10 [T] mm) were successively prepared from the pith toward the bark of the boards. The static bending test was conducted using a universal testing machine (MSC-5/200-2; Tokyo Testing Machine, Tokyo, Japan). A load was applied to the center of each specimen on the radial surface with a 140 mm span and a 4 mm/min load speed. The load and deflection were recorded using a personal computer. 
The modulus of elasticity (MOE) and modulus of rupture (MOR) were calculated using the following formulae: $$\mathrm{MOE}\ \left(\mathrm{GPa}\right)=\frac{\Delta P\,{l}^{3}}{4\Delta Yb{h}^{3}}\times {10}^{-3}$$ $$\mathrm{MOR}\ \left(\mathrm{MPa}\right)=\frac{3Pl}{2b{h}^{2}}$$ where ΔP (N) is the difference in the load between the 10 and 40% values of the maximum load, l (mm) is the length of the span, ΔY (mm) is the deflection due to ΔP, b (mm) and h (mm) are the width and height of the specimen, and P (N) is the maximum load. After the static bending test, a block (10 [L] × 10 [R] × 10 [T] mm) without any visual defects was prepared from each specimen for measuring the moisture content and air-dry density. A compressive test was conducted using a universal testing machine with a load speed of 0.5 mm/min. Compressive strength parallel to grain was calculated using the following formula: $$\text{Compressive strength}\ \left(\text{MPa}\right)=\frac{P}{A}$$ where P (N) is the maximum load and A (mm2) is the cross-sectional area of the specimen. In the tests, the mean ± standard deviation of the moisture content of the bending test specimens and compressive test specimens was 13.3 ± 0.3% and 10.9 ± 0.2%, respectively. Statistical analysis was conducted using R (Version 4.0.2, [20]). To evaluate radial growth increments and radial variations in wood properties, linear or nonlinear mixed-effects models were developed using the lmer package [21] or the nlme package [22]. The estimated stem diameter (without bark) in relation to cambial age was regarded as twice the value of the cumulative annual ring width at 1.3 m above the ground in each tree. To evaluate radial growth increments, radial variations in estimated stem diameters at 1.3 m above the ground in relation to cambial age were determined using nonlinear mixed-effects models based on the Gompertz function (Table 2). In each model, individual tree was the random effect (Table 2). 
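As a quick numerical sketch of the bending and compression formulas above (Eqs. 1–3), the snippet below computes MOE, MOR, and compressive strength for a specimen of the stated geometry (10 × 10 mm cross section, 140 mm span). The load and deflection readings are hypothetical, not measurements from this study.

```python
def moe_gpa(delta_p, span, delta_y, b, h):
    """Eq. 1: modulus of elasticity (GPa); loads in N, lengths in mm."""
    return delta_p * span ** 3 / (4 * delta_y * b * h ** 3) * 1e-3

def mor_mpa(p_max, span, b, h):
    """Eq. 2: modulus of rupture (MPa)."""
    return 3 * p_max * span / (2 * b * h ** 2)

def compressive_strength_mpa(p_max, area):
    """Eq. 3: compressive strength parallel to grain (MPa)."""
    return p_max / area

# Hypothetical readings for a 10 x 10 mm specimen tested over a 140 mm span
moe = moe_gpa(delta_p=300.0, span=140.0, delta_y=2.5, b=10.0, h=10.0)
mor = mor_mpa(p_max=500.0, span=140.0, b=10.0, h=10.0)
cs = compressive_strength_mpa(p_max=4500.0, area=10.0 * 10.0)
```

Because all inputs are in N and mm, the MOE expression yields MPa before the 10⁻³ factor converts it to GPa, matching the units stated in Eq. 1.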
Among the three models, the most parsimonious model was selected based on the Akaike information criterion (AIC) [23]. In addition, the statistical significance of each fixed-effect parameter was evaluated in the selected model using the lmerTest package [21]. Based on the selected model, the CAI and MAI were calculated using the following formulae: Table 2 Developed models for stem diameter in relation to cambial age and obtained AIC values in each model $$\mathrm{CAI} \left(\mathrm{cm}/\mathrm{y}\right)={a}_{0}{a}_{2}\mathrm{exp}\left(-{e}^{{a}_{1}-{a}_{2}\mathrm{CA}}\right)\times {e}^{{a}_{1}-{a}_{2}\mathrm{CA}}$$ $$\mathrm{MAI} \left(\mathrm{cm}/\mathrm{y}\right)=\frac{{a}_{0}\mathrm{exp}(-{e}^{{a}_{1}-{a}_{2}\mathrm{CA}})}{\mathrm{CA}}$$ where a0, a1, and a2 are the parameters obtained from the selected radial growth model, and CA is the cambial age. The equation of CAI is the first derivative equation of the radial growth model in Table 2. Based on Eqs. 4 and 5, the cambial ages at which CAI and MAI values were greatest were calculated. In addition, the ratio of the variance component of individual trees and residual to the total variance was calculated [24]. To evaluate radial variations in wood properties, linear or nonlinear mixed-effects models were developed based on linear (Models b-1 and b-2), logarithmic (Models c-1 and c-2), or quadratic functions (Models d-1 to d-3), with cambial age as the explanatory variable, wood properties as the response variable, and individual tree as the random effect (Table 3). Among the developed models, the model with the lowest AIC value was considered the most parsimonious model. In the selected model, the significance of the fixed-effect parameters and the ratios of the variance components of the random-effect parameters to the total variance were calculated [24]. 
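The CAI and MAI expressions above (Eqs. 4 and 5) can be explored numerically, as in the following sketch. The Gompertz parameters here are illustrative placeholders (the fitted values are in Table 4), chosen so that the CAI peak falls near the reported 4.9 years.

```python
import math

# Illustrative Gompertz parameters (placeholders, not the fitted values of Table 4)
a0, a1, a2 = 18.0, 1.47, 0.30

def cai(t):
    """Eq. 4: first derivative of D(t) = a0 * exp(-exp(a1 - a2 * t))."""
    return a0 * a2 * math.exp(-math.exp(a1 - a2 * t)) * math.exp(a1 - a2 * t)

def mai(t):
    """Eq. 5: cumulative stem diameter divided by cambial age."""
    return a0 * math.exp(-math.exp(a1 - a2 * t)) / t

# Grid search over cambial ages of 1-30 years in 0.01-year steps
ages = [i / 100 for i in range(100, 3001)]
age_max_cai = max(ages, key=cai)
age_max_mai = max(ages, key=mai)
# Analytically, CAI peaks at the Gompertz inflection point t = a1 / a2,
# and the MAI peak always falls later than the CAI peak
```

With these placeholder parameters the CAI maximum lands at a1/a2 = 4.9 years and the MAI maximum somewhat later, mirroring the ordering reported in the Results.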
Table 3 Developed model for the radial variation of wood properties and AIC values of each model The cambial age at which xylem maturation commenced was determined according to the modified method of Ngadianto et al. [10]. In the present study, the explanatory variable was considered cambial age instead of distance from pith in Ngadianto et al. [10]. Each wood property was estimated at 1-year intervals in the selected model containing only the fixed-effect parameters. The changing ratio of various wood properties at 1-year intervals were calculated as absolute values. An exponential model with a plateau was fitted to the data for changing the ratio of each wood property using the following formula: $$\mathrm{CR}_{1}={a}_{1}{{b}_{1}}^{\mathrm{CA}_{1}}+{c}_{1}$$ where CR1 is the changing ratio of each wood property, CA1 is the cambial age, and a1, b1, and c1 are fixed-effect parameters. c1 is the plateau value in Eq. (6). An exponential model was then fitted to the data for the changing ratio using the following formula: $$\mathrm{CR}_{2}={a}_{2}{{b}_{2}}^{\mathrm{CA}_{2}}$$ where CR2 is the changing ratio of each wood property, CA2 is the cambial age, and a2 and b2 are fixed-effect parameters. When CR2 in Eq. (7) equaled c1 in Eq. (6), CA2 was regarded as the cambial age at which xylem maturation commenced. In the radial growth model, the minimum AIC value was obtained in Model a-1 (Table 2). The p-values of the fixed-effect parameters in the model were all below 0.05 (Table 4). Therefore, the model based on Model a-1 was regarded as the optimum model for explaining radial growth. In addition, high variance component in individual tree (99.1%) in the model (Table 4) suggested that asymptote values of stem diameter differed at the individual tree level. Figure 1 shows the regression curves for the estimated stem diameter, CAI, and MAI in relation to cambial age based on the selected model with only fixed-effect parameters. 
The estimated stem diameter increased from the pith toward the bark, whereas radial growth increments decreased with an increase in cambial age. The cambial ages at which CAI and MAI values were greatest were 4.9 and 7.4 years, respectively (Fig. 1). At the cambial ages with maximum CAI and MAI values, the estimated stem diameter values were 7 and 11 cm, respectively, regardless of individual trees.
Table 4 Estimated values of the fixed and random effects of the selected models for stem diameter in relation to cambial age
Fig. 1 Changes in estimated stem diameter and radial growth rate in relation to cambial age. Note: Circles and solid line in the upper figure represent original values of estimated stem diameter at 1.3 m above the ground and the regression curve based on the fixed-effect parameters of the selected model (Table 4). Solid and dashed lines in the lower figure indicate CAI and MAI based on the selected model (Model a-1 in Table 2) with only fixed-effect parameters. Solid and dashed lines in the vertical direction of both figures indicate the cambial ages showing maximum CAI (4.9 years) and MAI (7.4 years).
Mean values of wood properties in the nine trees are shown in Table 1. Among the seven developed radial variation models, Model c-1 or c-2 was selected for all wood properties (Table 3). The p-values of the fixed-effect parameters (c0 and c1) for each wood property were below 0.05 (Table 5), suggesting that the model based on the logarithmic function was the optimum model for explaining radial variations in these wood properties. Minimum AIC values were obtained for Model c-1 with a random slope for individual trees in wood fiber length, vessel element length, and MOE, whereas Model c-2 with a random intercept for individual trees showed minimum AIC values for vessel frequency, vessel diameter, wood fiber diameter, wood fiber wall thickness, air-dry density, MOR, and compressive strength (Table 3). 
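As a toy check of the selected logarithmic form (the fixed-effect part of Models c-1/c-2, y = c0 + c1·ln(CA)), the sketch below generates synthetic data from known coefficients and recovers them by ordinary least squares on x = ln(CA). The coefficients and data are made up for illustration, not measurements from this study.

```python
import math

# Synthetic "wood property" values following y = c0 + c1 * ln(CA) exactly,
# with made-up coefficients (roughly a fiber-length-like curve in mm)
c0_true, c1_true = 0.72, 0.37
ca = list(range(1, 13))                       # cambial ages 1-12 years
y = [c0_true + c1_true * math.log(t) for t in ca]

# Ordinary least squares after the log transform x = ln(CA)
x = [math.log(t) for t in ca]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
c1_hat = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
c0_hat = my - c1_hat * mx
```

Because the logarithmic model is linear in ln(CA), the fit reduces to simple linear regression; the mixed-effects versions in the paper additionally let c0 (random intercept) or c1 (random slope) vary by individual tree.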
Figure 2 shows the regression curves for the radial variation of all wood properties based on only the fixed-effect parameters of the selected model. All wood properties increased or decreased near the pith and then became stable toward the cambium. Based on the selected models, the highest variance component value for the individual tree was obtained for wood fiber wall thickness (90.8%), followed by wood fiber diameter (88.0%), air-dry density (71.3%), MOR (39.8%), vessel element length (38.5%), vessel diameter (27.7%), vessel frequency (27.6%), compressive strength (26.9%), MOE (11.2%), and wood fiber length (2.6%) (Table 5).

Table 5 Estimated values of the fixed and random effects of the selected models for radial variations of wood properties

Fig. 2 Radial variations of wood properties. Note: Circles and solid curves represent the original values of each wood property and the regression curve based on the fixed-effect parameters of the selected model (Table 5). MOE, modulus of elasticity; MOR, modulus of rupture

The changing ratio of each wood property at 1-year intervals and the regression curve based on the exponential function of all wood properties in relation to cambial age are shown in Fig. 3. As shown by the results, the changing ratio became stable after a cambial age of 9 years (estimated stem diameter = approximately 14 cm) (Table 6).

Fig. 3 Radial variation of the changing ratio of various wood properties at 1-year intervals in relation to cambial age. Circles represent estimated values of the changing ratio at 1-year intervals based on the fixed-effect parameters in the selected model for each wood property. Solid line shows the regression curve of the changing ratio of 10 wood properties \(\left({CR}_{2}=28.251\times {0.696}^{{CA}_{2}}\right)\). Dashed line represents the asymptote value (\({c}_{1}=1.1917\)). The dotted line represents the estimated cambial age at which xylem maturation starts (intersection point between CR2 and c1). 
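Given the fitted values reported in the Fig. 3 caption (a2 = 28.251, b2 = 0.696, plateau c1 = 1.1917), the cambial age at which the changing-ratio curve meets the plateau can be solved in closed form. A minimal check:

```python
import math

# Fitted parameters reported for the pooled changing-ratio curve:
# CR2 = a2 * b2**CA2, with plateau c1 from the plateau model (Eq. 6)
a2, b2, c1 = 28.251, 0.696, 1.1917

# Maturation commences where a2 * b2**CA = c1, i.e.
# CA = log(c1 / a2) / log(b2)
ca_onset = math.log(c1 / a2) / math.log(b2)
print(round(ca_onset, 1))  # 8.7
```

The intersection falls at about 8.7 years, consistent with the reported stabilization "after a cambial age of 9 years".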
Table 6 Relationships between radial growth increment and xylem maturation process in fast-growing tree species

When calculating the cambial ages at which the CAI and MAI values are maximum based on the Gompertz function, the parameters a1 and a2 are used [9]. If a model including random effects in a1 or a2 had been selected, CAI and MAI would be affected by random effects. In the present study, a model with a random effect of the individual tree on the parameter a1 or a2 was not selected in the radial growth model, indicating that genetic factors seem to affect the radial growth increment but not the cambial age showing the maximum radial growth increment of stems. A number of studies have examined radial variations and mean values of several wood properties in L. tulipifera in the United States [16, 25, 27] and Japan [17, 18, 28, 29]. Furukawa et al. [28] found that wood fiber length increased from the pith, while vessel element length slightly increased from the pith in L. tulipifera in Japan. In a study of L. tulipifera in North Carolina, United States, at the age of approximately 50 years, wood fiber length increased from the pith (1.0 mm) toward the bark (1.9 mm) [25]. In another study of L. tulipifera in the United States, Shupe et al. [26] reported that basic density at 1 m above the ground increased toward the bark (0.4 g/cm3 near the pith and 0.5 g/cm3 in the outermost region) at the age of 40 years. The radial variation patterns of cell length and air-dry density obtained in the present study (Fig. 2) were similar to those found in previous studies [26, 28]. Itoh [29] reported that the vessel diameter of L. tulipifera in Japan ranged from 50 to 70 μm. In a study of L. tulipifera in the eastern United States, Uzcategui et al. [27] reported the following ranges of minimum to maximum values for the mechanical properties: 0.36 − 0.60 g/cm3 for air-dry density, 7.07 − 11.5 GPa for MOE, 54.5 − 108.6 MPa for MOR, and 30.4 − 56.3 MPa for compressive strength. 
Similarly, another study on the mechanical properties of L. tulipifera in the United States reported values of 10.90 GPa for MOE, 70.0 MPa for MOR, and 38.2 MPa for compressive strength at 12% moisture content [16]. The mean values for vessel diameter, air-dry density, MOR, and compressive strength of the nine trees in the present study were within the ranges reported in the literature, whereas the value for MOE was relatively lower. The selected model for explaining radial variations in cell length and MOE included the individual tree as a random slope (Model c-1, Table 3). Thus, the rate of change in the aforementioned properties from the pith toward the cambium might differ among individual trees. On the other hand, the model with a random y-value at the first annual ring from the pith for the individual tree was selected for vessel frequency, vessel diameter, wood fiber diameter, wood fiber wall thickness, air-dry density, MOR, and compressive strength. This finding indicates that these wood properties depend on the values at the first annual ring, which differed among individual trees, and that the radial variation pattern itself is not affected by differences among individual trees. Based on these results, we conclude that the effect of the individual tree on radial variation patterns might differ among properties in L. tulipifera. Thus, the cambial age at which xylem maturation commences in this species can be determined using multiple wood properties, regardless of differences among individual trees. The relationship between radial growth increments and the xylem maturation process based on radial variation patterns of single or several wood properties has been reported for several fast-growing tree species in dry [9], tropical [14], and subarctic climates [15] (Table 6). In these studies, wood properties became stable below the age of 10 years at the time of peak CAI and MAI values in L. tulipifera in Japan (Fig. 1), Populus × beijingensis in China [9], and E. 
camaldulensis in Thailand [14], suggesting that xylem maturation occurs in accordance with a decrease in the radial growth rate in these species. Wiemann and Williamson [30] showed that the radial increase in basic density was associated with a shift in the allocation of resources from growth with the production of low-specific-gravity wood to greater structural reinforcement of the trunk (production of denser wood) in three tropical pioneer species (Hampea appendiculata, Heliocarpus appendiculatus, and Ochroma pyramidale) in Costa Rica. Larjavaara and Muller-Landau [31] hypothesized that a large stem of low-density wood could have greater strength at a lower construction cost than a thinner stem of high-density wood. Therefore, several fast-growing tree species, including L. tulipifera, in tropical and temperate zones appear to produce a large volume of xylem with lower physical and mechanical properties to increase their volume faster than that of surrounding trees due to competition among individual trees within a stand. After reaching a certain stem diameter, xylem with stable wood properties forms at a slower growth rate. The trade-off between radial growth increments and wood properties with increasing cambial age might occur due to the ecological strategy of fast-growing tree species. Rungwattana and Hietz [13] stated that stem diameter is an essential parameter to include in studies on the functional ecology of wood for understanding the different ecological strategies of tree species. However, as stem diameter is a one-dimensional trait that serves as an index of tree size, changes in tree height and/or volume should also be considered. Wood properties vary from the pith toward the bark in L. tulipifera due to the ecological strategy of this species. Based on the radial variation models of multiple wood properties, we estimated the cambial age at the time of xylem maturation commencement. Our results suggest that L. 
tulipifera wood can be divided into unstable and stable wood, with unstable wood having lower physical and mechanical properties and stable wood having greater physical and mechanical properties. In general, the latter type of wood is suited to structural applications. Thus, L. tulipifera wood that forms after the commencement of xylem maturation (i.e., a cambial age of 9 years) can be utilized for structural applications, whereas wood that forms before this cambial age can be utilized for utility applications. Exploring the relationship between the xylem maturation process and radial growth increments of stems in other fast-growing tree species in relation to the ecological strategy of the species can promote wood utilization. To elucidate the relationship between the xylem maturation process based on multiple properties and radial growth increments of stems in a fast-growing tree species, radial variations in annual ring width and anatomical, physical, and mechanical properties were determined in L. tulipifera. The maximum current annual increment and mean annual increment were found at 4.9 and 7.4 years, respectively, and the changes in the ratios of multiple wood properties became stable after a cambial age of 9 years, suggesting that xylem maturation in L. tulipifera occurs just after a decrease in the radial growth increments of the stems. Based on our findings, L. tulipifera appears to adopt an ecological strategy whereby it produces a large volume of xylem with lower physical and mechanical properties in the initial stages of growth, followed by the production of xylem with stable wood properties. Based on the ecological strategy adopted by L. tulipifera, wood that forms before a cambial age of 9 years can be used for utility applications, whereas wood that forms after 9 years is more stable, with greater physical and mechanical properties, and can be used for structural applications. 
Understanding the relationship between radial growth increments and wood properties in relation to ecological strategies may promote wood utilization of fast-growing tree species.

Availability of data and materials: The data sets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations: MOE: Modulus of elasticity; MOR: Modulus of rupture; AIC: Akaike information criterion; CAI: Current annual increment; MAI: Mean annual increment

References
1. Zobel BJ, Sprague JR (1998) Juvenile wood in forest trees. Springer-Verlag, Berlin, Heidelberg
2. Lachenbruch B, Moore JR, Evans R (2011) Radial variation in wood structure and function in woody plants, and hypotheses for its occurrence. In: Meinzer FC, Lachenbruch B, Dawson TE (eds) Size- and age-related changes in tree structure and function. Tree physiology, vol 4. Springer, Dordrecht, Heidelberg, London, New York, pp 121–164
3. Watanabe H, Matsumoto T, Hayashi H (1966) Studies on juvenile wood. III. Experiment on stems of poplar, shiinoki and mizunara. Mokuzai Gakkaishi 12:259–265 (in Japanese with English summary)
4. Zobel BJ, van Buijtenen JP (1989) Wood variation: its causes and control. Springer, Berlin, Heidelberg
5. Ohbayashi H, Shiokura T (1990) Wood anatomical characteristics and specific gravity of fast-growing tropical tree species in relation to growth rates. Mokuzai Gakkaishi 36:889–893
6. Huang R, Furukawa I (2000) Horizontal variations of vessel element length and wood fiber length of two kinds of poplars planted in the desert areas of China. Mokuzai Gakkaishi 46:495–502 (in Japanese with English summary)
7. Honjo K, Furukawa I, Sahri MH (2005) Radial variation of fiber length increment in Acacia mangium. IAWA J 26:339–352. https://doi.org/10.1163/22941932-90000119
8. Kojima M, Yamamoto H, Yoshida M, Ojio Y, Okumura K (2009) Maturation property of fast-growing hardwood plantation species: a view of fiber length. For Ecol Manag 257:15–22. https://doi.org/10.1016/j.foreco.2008.08.012
9. Tsuchiya R, Furukawa I (2009) The relationship between the maturation age in the size of tracheary elements and the boundary age between the stages of diameter growth in planted poplars. Mokuzai Gakkaishi 55:129–135 (in Japanese with English summary)
10. Ngadianto A, Ishiguri F, Nezu I, Irawati D, Ohshima J, Yokota S (2022) Determination of boundary between core and outer wood by radial variation modeling in tropical fast-growing tree species. J Sustain For. https://doi.org/10.1080/10549811.2022.2043907
11. Salas-Eljatib C, Mehtätalo L, Gregoire TG (2021) Growth equations in forest research: mathematical basis and model similarities. Curr For Rep 7:230–244. https://doi.org/10.1007/s40725-021-00145-8
12. Hietz P, Rosner S, Hietz-Seifert U, Wright SJ (2017) Wood traits related to size and life history of trees in a Panamanian rainforest. New Phytol 213:170–180. https://doi.org/10.1111/nph.14123
13. Rungwattana K, Hietz P (2018) Radial variation of wood functional traits reflect size-related adaptations of tree mechanics and hydraulics. Funct Ecol 32:260–272. https://doi.org/10.1111/1365-2435.12970
14. Nezu I, Ishiguri F, Aiso H, Diloksumpun S, Ohshima J, Iizuka K, Yokota S (2020) Repeatability of growth characteristics and wood properties for solid wood production from Eucalyptus camaldulensis half-sib families growing in Thailand. Silvae Genet 69:36–43. https://doi.org/10.2478/sg-2020-0006
15. Erdene-ochir T, Ishiguri F, Nezu I, Tumenjargal B, Baasan B, Chultem G, Ohshima J, Yokota S (2021) Modeling of radial variations of wood properties in naturally regenerated trees of Betula platyphylla grown in Selenge, Mongolia. J Wood Sci 67:61. https://doi.org/10.1186/s10086-021-01993-5
16. Forest Products Laboratory (2010) Wood handbook: wood as an engineering material. Department of Agriculture, Forest Service, Forest Products Laboratory, Madison
17. Murata A, Hasegawa R, Kawai M (2020) Elucidation of material and processing characteristics for use of domestic fast-growing tree species (1): proposal of application of domestic fast-growing tree species woods. Rep Gifu Prefect Res Inst Hum Life Technol 22:50–54 (in Japanese)
18. Nezu I, Ishiguri F, Otani N, Kasahara H, Ohshima J, Yokota S (2022) Preliminary experiments on wood quality of 2 by 4 lumber in yellow-poplar (Liriodendron tulipifera) trees grown in Utsunomiya University Campus, Japan. Wood Industry 77:52–57 (in Japanese with English summary)
19. Yoshinaga A, Fujita M, Saiki H (1997) Secondary wall thickening and lignification of oak xylem components during latewood formation. Mokuzai Gakkaishi 43:377–383
20. R Core Team (2020) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna. https://www.R-project.org/. Accessed 1 Aug 2020
21. Bates D, Mächler M, Bolker BM, Walker SC (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67:1–48. https://doi.org/10.48550/arXiv.1406.5823
22. Pinheiro JC, Bates DM (2000) Mixed-effects models in S and S-PLUS. Springer, New York
23. Akaike H (1998) Selected papers of Hirotugu Akaike. In: Parzen E, Tanabe K, Kitagawa G (eds) Springer series in statistics. Springer, New York
24. Nakagawa S, Schielzeth H (2010) Repeatability for Gaussian and non-Gaussian data: a practical guide for biologists. Biol Rev 85:935–956. https://doi.org/10.1111/j.1469-185X.2010.00141.x
25. Taylor FW (1968) Variations in the size and proportions of wood elements in yellow-poplar trees. Wood Sci Technol 2:153–165. https://doi.org/10.1007/BF00350905
26. Shupe TF, Choong ET, Gibson MD (1995) Differences in moisture content and shrinkage between outerwood, middlewood, and corewood of two yellow-poplar trees. For Prod J 45(9):85–90
27. Uzcategui MGC, Seale RD, França FJN (2020) Physical and mechanical properties of hard maple (Acer saccharum) and yellow poplar (Liriodendron tulipifera). For Prod J 70:326–334. https://doi.org/10.13073/FPJ-D-20-00005
28. Furukawa I, Sekoguchi M, Matsuda M, Sakuno T, Kishimoto J (1983) Wood quality of small hardwoods (II): horizontal variations in the length of fibers and vessel elements in seventy-one species of small hardwoods. Hardwood Res 2:103–134 (in Japanese with English summary)
29. Itoh T (1996) Anatomical description of Japanese hardwoods II. Wood Res Tech Notes 32:66–176 (in Japanese)
30. Wiemann MC, Williamson GB (1988) Extreme radial changes in wood specific gravity in some tropical pioneers. Wood Fiber Sci 20:344–349
31. Larjavaara M, Muller-Landau HC (2010) Rethinking the value of high wood density. Funct Ecol 24:701–705. https://doi.org/10.1111/j.1365-2435.2010.01698.x

Acknowledgements: The authors would like to thank Dr. Murzabyek Sarkhad and Mr. Hiroki Hata for assisting with the field sampling and the laboratory experiments.

Author affiliations: School of Agriculture, Utsunomiya University, Utsunomiya, 321-8505, Japan (Ikumi Nezu, Futoshi Ishiguri, Jyunichi Ohshima, Shinso Yokota); United Graduate School of Agricultural Science, Tokyo University of Agriculture and Technology, Fuchu, Tokyo, 183-8509, Japan (Ikumi Nezu)

Author contributions: IN and FI designed the research layout, collected and analyzed data, and drafted the manuscript. All authors discussed the results and conclusions and contributed to writing the final manuscript. Correspondence to Futoshi Ishiguri.

Nezu, I., Ishiguri, F., Ohshima, J. et al. Relationship between the xylem maturation process based on radial variations in wood properties and radial growth increments of stems in a fast-growing tree species, Liriodendron tulipifera. J Wood Sci 68, 48 (2022). https://doi.org/10.1186/s10086-022-02057-y

Keywords: Annual ring width; Anatomical characteristics; Physical and mechanical properties; Core wood; Outer wood
Workshop for Women in Computational Topology (WinCompTop) Poster Session and Reception
Monday, August 15, 2016 - 5:30pm - 6:30pm

Persistent Homology for Pan-Genome Analysis
Brittany Terese Fasy (Montana State University)
Single Nucleotide Polymorphisms (SNPs), Insertions and Deletions (INDELs), and Structural Variations (SVs) are the basis of genetic variation among individuals and populations. Second- and third-generation high-throughput sequencing technologies have fundamentally changed our biological interpretation of genomes and notably have transformed the analysis and characterization of the genome-wide variations present in a population or a particular environment. As a result of this revolution in next-generation sequencing technologies, we now have a large volume of genome sequences of species that represent major phylogenetic clades. Having multiple, independent genomic assemblies from a species presents the opportunity to move away from a single reference per species, incorporating information from species across the phylogenomic range of the species into a pan-genomic reference that can better organize and query the underlying genetic variation. Tools have started to explore multiple genomes in bioinformatics analyses. Several tools have evolved to take advantage of information from multiple, closely related genomes (species, strains/lines) to perform bioinformatics analyses such as variant detection without the bias introduced by using a single reference. In this work, we address challenges and opportunities that arise from pan-genomics using graphical data structures. We consider the problem of computing the persistence of structures representing genomic variation from a graph/path data set. The particular application we are interested in is mining pan-genomic data sets. 
Grid Presentation for Heegaard Floer Homology of a Double Branched Cover
Sayonita Ghosh Hajra (Hamline University)
Heegaard Floer homology is an invariant of a closed oriented 3-manifold. Because of its complex nature, these homology groups are difficult to compute. Stipsicz gave a combinatorial description of a version of Heegaard Floer homology for a double branched cover of S^3. In this poster, we describe the algorithm and present a code. As an example, we compute this homology group for a double branched cover of S^3 branched along the unknot.

Topological Data Analysis at PNNL
Emilie Purvine (Battelle Pacific Northwest National Laboratory)
Over the past three years the Pacific Northwest National Laboratory has been growing a portfolio in Topological Data Analysis and Modeling. This poster will lay out our portfolio in the area with the hope of informing the community of our platform and building collaborations. Our current projects include:
- The use of persistent homology and HodgeRank to discover anomalies in time-evolving systems. For PH we form point clouds using statistics from a dynamic graph and look at when the barcodes of these point clouds differ significantly from that of a baseline point cloud. We use Wasserstein distance and other dissimilarities based on interval statistics. HodgeRank is used to discover rankings of sources and sinks in a directed graph. As the graph evolves these rankings may change, and we consider anomalies to be when any node's rank differs significantly from its baseline rank. In particular we use these techniques to find attacks and instabilities in cyber systems.
- Sheaf theory for use in information integration. We model groups of interacting sensors as a topological space. The data that is returned by the sensors serves as the stalk space to define a sheaf. 
The cohomology of the sheaf identifies global sections - where all sensors are in agreement - and identifying loops in the base space may inform when some sensors need to be retasked. Included in this work is the measurement of local sections where sensors are partially in agreement, representation of uncertainty by relaxing sectional equality to produce approximate sections, and use of category theory to cast all stalks into vector spaces so that integration is more easily defined.
- A computational effort towards a robust scalable software suite for computational topology. We have found useful software in the community, but typically only one piece at a time, e.g. persistence is separate from sheaf theory, which is separate from general homology. We hope to precipitate a community effort towards the development of a suite of topological software tools which can be applied to small and large data sets alike.

Median Shapes
Altansuren Tumurbaatar (Washington State University)
We introduce new ideas for the average of a set of general shapes, which we represent as currents. Using the flat norm to measure the distance between currents, we present a mean and a median shape. In the setting of a finite simplicial complex, we demonstrate that the median shape can be found efficiently by solving a linear program.

Burn Time of a Medial Axis and its Applications
Erin Chambers (St. Louis University)
The medial axis plays a fundamental role in many shape matching and analysis tasks, but it is widely known to be unstable to even small boundary perturbations. Significance measures to analyze and prune the medial axis of 2D and 3D shapes are well studied, but the majority of them in 3D are locally defined and are unable to maintain global topological properties when used for pruning. We introduce a global significance measure called the Burn Time, which generalizes the extended distance function (EDF) introduced in prior work. 
Using this function, we are able to generalize the classical notion of the erosion thickness measure over the medial axes of 2D shapes. We demonstrate the utility of these shape significance measures in extracting clean, shape-revealing and topology-preserving skeletons of 3D shapes, and discuss future directions and applications of this work. This is based on joint work with Tao Ju, David Letscher, Kyle Sykes, and Yajie Yan, which appeared in SIGGRAPH 2016.

Homology of Generalized Generalized Configuration Spaces
Radmila Sazdanović (North Carolina State University)
The configuration space of n distinct points in a manifold is a well-studied object with many applications. Eastwood and Huggett relate the homology of so-called graph configuration spaces to the chromatic polynomial of graphs. We describe a generalization of this approach from graphs to simplicial complexes. This construction yields, for each pair of a simplicial complex and a manifold, a simplicial chromatic polynomial that satisfies a version of the deletion-contraction formula. This is joint work with A. Cooper and V. de Silva.

Analysis and Visualization of ALMA Data Cubes
Bei Wang (The University of Utah)
The availability of large data cubes produced by radio telescopes like the VLA and ALMA is leading to new data analysis challenges, as current visualization tools are ill-prepared for the size and complexity of these data. Our project addresses this problem by using the notion of a contour tree from topological data analysis (TDA). The contour tree provides a mathematically robust technique with fine-grain controls for reducing complexity and removing noise from data. Furthermore, to support scientific discovery, new visualizations are being designed to investigate these data and communicate their structures in a salient way: a process that relies on the direct input of astronomers. 
Joint work with Paul Rosen (USF), Anil Seth (Utah Astronomy), Jeff Kern (NRAO), Betsy Mills (NRAO) and Chris Johnson (Utah).

Rips Filtrations for Quasi-metric Spaces (with Stability Results)
Katharine Turner (École Polytechnique Fédérale de Lausanne (EPFL))
Rips filtrations over a finite metric space and their corresponding persistent homology are prominent methods in Topological Data Analysis used to summarize the shape of data. Crucial to their use is the stability result that says that if $X$ and $Y$ are finite metric spaces, then the (bottleneck) distance between persistence diagrams, barcodes or persistence modules constructed by the Rips filtration is bounded by $2d_{GH}(X,Y)$ (where $d_{GH}$ is the Gromov-Hausdorff distance). Using the asymmetry of the distance function, we construct four different constructions analogous to the Rips filtration that capture different information about the quasi-metric spaces. The first method is a one-parameter family of objects where, for a quasi-metric space $X$ and $a\in [0,1]$, we have a filtration of simplicial complexes $\{\mathcal{R}^a(X)_t\}_{t\in [0,\infty)}$ where $\mathcal{R}^a(X)_t$ is the clique complex containing the edge $[x,y]$ whenever $a\min \{d(x,y), d(y,x) \}+ (1-a)\max \{d(x,y), d(y,x)\}\leq t$. The second method is to construct a filtration $\{\mathcal{R}^{dir}(X)_t\}$ of ordered tuple complexes where the tuple $(x_0, x_1, \ldots, x_p)\in \mathcal{R}^{dir}(X)_t$ if $d(x_i, x_j)\leq t$ for all $i\leq j$. Both of our first two methods agree with the normal Rips filtration when applied to a metric space. The third and fourth methods use the associated filtration of directed graphs $\{D(X)_t\}$ where $x\to y$ is included in $D(X)_t$ when $d(x,y)\leq t$. Our third method builds persistence modules using the connected components of the graphs $D(X)_t$. 
Our fourth method uses the directed graphs $D(X)_t$ to create a filtration of posets (where $x\leq y$ if there is a path from $x$ to $y$) and corresponding persistence modules using poset topology.

Safer Roads Tomorrow Through Analyzing Today's Accidents*
Maia Grudzien (Montana State University)
The Bozeman Daily Chronicle quoted the city's head engineer the week of July 27, 2016 as stating, "Even with property owners paying more to help Bozeman's street grid keep up with growth, the rate of development is out-pacing the city's ability to upgrade increasingly clogged intersections." As infrastructure is strained by the growing population in Bozeman, the state of Montana, and nationwide, it falls more quickly into disrepair. The need for more efficient roadways is creating shorter design timelines, but the safety of the roadways must remain a top priority. This project has been looking at understanding accident-prone areas in Montana cities and towns by collecting data and mapping it throughout the region. Areas are then sorted by factors that could include density, clusters, city regions (i.e., sporting event complexes, shopping centers), etc. The goal of this project is to provide examples to engineers and city planners of safe and accident-prone roads and intersections that can be used to better build much-needed infrastructure.
*This research is funded by NSF CCF grant 1618605 and the Montana State University USP program.

Persistent Homology on Grassmann Manifolds for Analysis of Hyperspectral Movies
Lori Ziegelmeier (Macalester College)
We present an application of persistent homology to the detection of chemical plumes in hyperspectral movies of Long-Wavelength Infrared data, which capture the release of a quantity of chemical into the air. 
Regions of interest within the hyperspectral data cubes are used to produce points on the real Grassmann manifold $G(k, n)$ (whose points parameterize the k-dimensional subspaces of $\mathbb{R}^n$), contrasting our approach with the more standard framework in Euclidean space. An advantage of this approach is that it allows a sequence of time slices in a hyperspectral movie to be collapsed to a sequence of points in such a way that some of the key structure within and between the slices is encoded by the points on the Grassmann manifold. This motivates the search for topological structure, associated with the evolution of the frames of a hyperspectral movie, within the corresponding points on the Grassmann manifold. The proposed framework affords the processing of large data sets while retaining valuable discriminative information. In this paper, we discuss how embedding our data in the Grassmann manifold, together with topological data analysis, captures dynamical events that occur as the chemical plume is released and evolves.
A test of native plant adaptation more than one century after introduction of the invasive Carpobrotus edulis to the NW Iberian Peninsula
Carlos García (ORCID: orcid.org/0000-0002-7485-7002), Josefina G. Campoy & Rubén Retuerto
BMC Ecology and Evolution volume 21, Article number: 69 (2021)

Although the immediate consequences of biological invasions on ecosystems and conservation have been widely studied, the long-term effects remain unclear. Invaders can either cause the extinction of native species or become integrated into the new ecosystems, thus increasing the diversity of these ecosystems and the services that they provide. The final balance of invasions will depend on how the invaders and native plants co-evolve. For a better understanding of such co-evolution, case studies that consider the changes that occur in both invasive and native species long after the introduction of the invader are especially valuable. In this work, we studied the ecological consequences of the more than one-century-old invasion of NW Iberia by the African plant Carpobrotus edulis. We conducted a common garden experiment to compare the reciprocal effects of competition between Carpobrotus plants from the invaded area or from the native African range and two native Iberian plant species (Artemisia crithmifolia and Helichrysum picardii) from populations exposed or unexposed to the invader. Exposure of H. picardii populations to C. edulis increased their capacity to repress the growth of Carpobrotus. The repression specifically affected the Carpobrotus from the invader populations, not those from the African native area. No effects of exposition were detected in the case of A. crithmifolia. C. edulis plants from the invader populations had higher growth than plants from the species' African area of origin. 
We found that adaptive responses of natives to invaders can occur in the long term, but we only found evidence for adaptive responses in one of the two species studied. This might be explained by known differences between the two species in the structure of genetic variance and gene flow between subpopulations. The overall changes observed in the invader Carpobrotus are consistent with adaptation after invasion.

The large-scale alteration of species distributions is one of the most drastic types of disturbance to the biosphere that have occurred during the Anthropocene [1, 2]. Although global biodiversity is being eroded [3, 4], it may be increasing at smaller spatial scales due to the arrival of invasive species [5, 6]. The long-term ecological consequences of these invasions are unclear. While some non-native species can outcompete native species to extinction [7,8,9], others may cause no serious adverse impacts (as is often the case, at least in the short term; see [10, 11]), giving native species the opportunity to co-evolve with the invaders (reviewed in Oduor et al. [12]) and even to develop new mutualisms [13]. In this way, invasive species might eventually become stably integrated in the new ecosystems [14, 15], increasing local biodiversity and reinforcing the services provided by these ecosystems or their resilience to further alteration (reviewed in Chapman et al. [16]; but see Kaiser-Bunbury et al. [17]). Analyses of the evolutionary processes that could result in this final integration of the invasive species in the long term are relatively scarce [18], but short-term experimental findings consistent with evolutionary change in invasive species, leading to divergence in relevant adaptive traits from their source populations [19, 20], are accumulating [21, 22]. Such findings run from increases in invasive ability [23,24,25], to changes in interactions with other species [26, 27], and responses to abiotic factors [28, 29]. 
Invasive species have also been shown to induce short-term evolutionary changes in native species [30,31,32,33,34,35,36]; reviewed in Oduor et al. [12]. However, these short-term changes may be poor guides to predicting the properties of future ecosystems [37, 38], because biological invasions may alter the physical ecosystem and species composition and abundance, favouring the establishment of other invasive species [39, 40] and triggering a cascade of coevolutionary, multi-species processes [18, 41], all of which may take considerable time [37, 42, 43]. Thus, studies of biological invasions are most informative when they consider the possible changes in both the invasive species and native plant communities, and when a long time has elapsed since the introduction [44,45,46,47]. In this study, we explored whether native dune species have evolved adaptive responses as a consequence of the interaction with the invasive South African species Carpobrotus edulis (L.) N.E.Br. (Aizoaceae) (hereafter Carpobrotus), introduced at least one century ago to NW Iberia. In a common garden experiment, we compared the reciprocal effects of competition between Carpobrotus plants from either European (invader) populations or from native African populations and the native Iberian species Artemisia crithmifolia L. (A. campestris L. ssp. maritima (DC.) Arcang.; hereafter Artemisia) and Helichrysum picardii Boiss. & Reuter (Helichrysum serotinum subsp. picardii (Boiss. & Reuter) Galbany, L. Sáez & Benedí; hereafter Helichrysum). These are among the most representative endemic species of secondary or grey dunes, one of the main habitats invaded by Carpobrotus in NW Iberia. The sampled native plants were from populations that had either already been exposed to European Carpobrotus and had therefore had the opportunity to co-evolve with it (exposed populations), or from populations in the same region that had not been exposed to Carpobrotus (unexposed populations).
Considering the time elapsed since the introduction of Carpobrotus to the NW Iberian Peninsula, we hypothesized that it may have genetically changed to adapt to the new conditions. The same long-standing invasion, along with the strong selection pressures exerted by an invader able to establish monodominant stands, may have resulted in natives' adaptations reducing the impact of the invader and easing the future development of a stable, biodiverse community. No Carpobrotus or Artemisia plants died in the competition pots, whereas two exposed and two unexposed Helichrysum plants competing with the African Carpobrotus died, as did three exposed and three unexposed Helichrysum plants competing with the European Carpobrotus. We show in more detail the analyses corresponding to final whole plant dry mass (hereafter "growth") in Fig. 1 and Table 1 for the competition pots, and in Fig. 2, Additional file 1: Table S1 and Additional file 2: Table S2 for the comparisons between competition and single plant pots. The corresponding results for shoot, root and whole plant dry mass were qualitatively similar to those for growth and are shown in Additional file 3: Table S3, Additional file 4: Table S4 and Additional file 5: Table S5. Least square means and residuals in the analysis of plant final dry mass in two-plant pots ("Heli", Helichrysum, and "Arte", Artemisia). a Lsmeans for the Carpobrotus and native species' masses. Vertical lines on the left-hand side and right-hand side of the graph show 95% asymptotic confidence intervals for the lsmeans for pots containing African and European Carpobrotus, respectively. Interval limits outside the comparison areas may lie outside the areas shown in the graphs. Right, summaries of the Table 1 analyses of Carpobrotus and natives' masses. b Top, two-dimensional representation of the lsmeans in (a). All lsmeans correspond to untransformed data and are drawn to the same scale to ease comparisons.
Middle and bottom, bidimensional representations of the residuals in the analysis of Carpobrotus and natives' masses in pots containing African and European Carpobrotus, respectively. The r squared and the significance of the slope in a regression of natives' on Carpobrotus' residuals are shown on the graphs. *, P < 0.05; **, P < 0.01; ***, P < 0.001 Table 1 Analysis of the final dry masses of the native and Carpobrotus plants in the pots containing two plants Least square means for whole plant dry mass in the comparisons of competition and single plant pots. Vertical lines on the left-hand side and right-hand side show 95% asymptotic confidence intervals for the lsmeans. Interval limits outside the comparison areas may lie outside of the areas shown in the graphs. Right, summaries of the Additional file 1: Table S1 analyses of Carpobrotus and native masses. *, P < 0.05; **, P < 0.01; ***, P < 0.001 Exposure of native Iberian species to Carpobrotus Exposure was not significant for either the native or the Carpobrotus plants (Table 1). However, in the case of Carpobrotus growth, this was due to heterogeneity of results across Carpobrotus origin and native species. We found a significant Exposure × Origin of Carpobrotus effect of Helichrysum on Carpobrotus, but not of Artemisia. The African Carpobrotus grew more when competing with the previously exposed Helichrysum: analysis of data from pots containing Helichrysum/African Carpobrotus detected a significant (LRT P = 0.024) and positive effect of Exposure. The difference occurred in the opposite direction in the corresponding analysis for European Carpobrotus (Fig. 1, LRT P = 0.174). This variation resulted in a significant (LRT P = 0.021) Exposure × Origin of Carpobrotus interaction in an analysis restricted to pots containing Helichrysum with either Carpobrotus. Thus, the exposed Helichrysum more strongly suppressed the growth of the European Carpobrotus, with which it had had the opportunity to co-evolve.
The corresponding analysis for Artemisia did not detect any such interaction (LRT P = 0.600), and the difference between native species in this double interaction resulted in a significant triple Exposure × Origin of Carpobrotus × Native species interaction in the full model (i.e., in the joint analysis of all competition pots in the experiment; Fig. 1 and Table 1). Thus, the potentially coevolved Helichrysum had stronger effects on the growth of European Carpobrotus than the potentially coevolved Artemisia. No effects of Exposure or the corresponding interactions were observed in these comparisons of native plants in competition and single plant pots (Fig. 2 and Additional file 1: Table S1). Origin of Carpobrotus The African Carpobrotus grew less than the European Carpobrotus. This was shown by the analysis of the competition pots (Fig. 1 and Table 1), by the competition/single Carpobrotus plant pot comparisons (LRT P = 0.002, Additional file 2: Table S2 and Fig. 2) and also by the comparison of the African and European Carpobrotus grown in single plant pots (LRT P < 0.001). The Origin of Carpobrotus had no significant overall effect on the growth of the competing native plants, due to the heterogeneous growth of these plants. Helichrysum grew relatively more (on average for exposed and unexposed plants) in the presence of the African Carpobrotus than Artemisia (Fig. 1), as indicated by the significant Origin of Carpobrotus × Native species interaction (Table 1). The Artemisia plants grew more than the Helichrysum plants, as seen in the competition pot comparisons (Fig. 1 and Table 1) and confirmed by the comparisons between competition and single plant pots (Fig. 2 and Additional file 1: Table S1) and by the direct comparison of growth of each species in the single plant pots (LRT P < 0.001). The effect on Carpobrotus growth was also significant: the Carpobrotus plants competing with the larger Artemisia grew less than those competing with the smaller Helichrysum (Fig.
1 and Table 1). Both African and European Carpobrotus plants grew more when competing with Helichrysum, the smaller of the two native species studied (see Methods section), and therefore the one expected to generate less competition for resources in the pots. This was consistent with competition limiting plant growth in this experiment. The bidimensional representation of the least square means from the analysis of the competition pots (Fig. 1) would support this view in the case of Helichrysum: the estimated correlation between mean growth of Helichrysum and Carpobrotus in the same pot was negative. This contrasted markedly with the positive sign of the corresponding estimate for Artemisia. As the power to detect four-point correlations is low, these two estimates were not significantly different from zero when tested separately. However, randomizing the allocation of pairs of standardized least square means to the two species resulted in only 144 replicates of 10,000 with larger than observed between-species differences in correlation, showing that the correlations between Carpobrotus and native plants were significantly (P = 0.014) different for the two species considered. The residuals obtained after fitting the analytical model to the competition pots (Fig. 1) showed that some within-pot competition remained after correcting for the effects of Exposure, Origin of Carpobrotus and Native species. The correlations between the residuals of the Carpobrotus and native plant analyses were − 0.553 (P = 0.005, 22 d. f.) for Artemisia and − 0.558 (P = 0.048, 11 d. f.) for Helichrysum. The effect was not homogeneous in Helichrysum (Fig. 1). A separate estimate of this correlation for the competition pots with African Carpobrotus was positive (r = 0.176, P = 0.705, 5 d. f.), and another for the competition pots with European Carpobrotus was negative (r = − 0.883, P = 0.020, 4 d. f.), consistent with stronger competition between Helichrysum and the European Carpobrotus.
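The randomization procedure used for these correlation comparisons (reallocating pairs of values between two groups and counting replicates with a more extreme between-group difference in correlation) can be sketched in Python. This is a generic illustration under our own assumptions, not the authors' actual code; the function name and example data are hypothetical.

```python
import numpy as np

def corr_diff_perm_test(pairs_a, pairs_b, n_perm=10_000, seed=0):
    """Permutation test for a difference in Pearson correlation
    between two groups of (x, y) pairs.

    Pairs are randomly reallocated to the two groups (keeping the
    original group sizes), and the observed |r_a - r_b| is compared
    with the permutation distribution, as in the randomization tests
    described in the text.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(pairs_a, dtype=float)
    b = np.asarray(pairs_b, dtype=float)
    pooled = np.vstack([a, b])
    n_a = len(a)

    def r(m):
        # Pearson correlation between the two columns of m
        return np.corrcoef(m[:, 0], m[:, 1])[0, 1]

    observed = abs(r(a) - r(b))
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        r_a = r(pooled[idx[:n_a]])
        r_b = r(pooled[idx[n_a:]])
        if abs(r_a - r_b) >= observed:
            count += 1
    return count / n_perm  # two-sided permutation P value
```

A small observed count relative to n_perm (e.g., 144 of 10,000) corresponds to a small P value for the between-group difference in correlation.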
A test randomizing the allocation of plant pair means to the two groups (of competition pots with one Helichrysum and one African or European Carpobrotus) detected only 211 of 10,000 replicates with more extreme differences in correlation. While this test was not planned a priori, the difference between the two correlations was remarkable. We found no evidence of such heterogeneity in the Artemisia pots, where the correlations were − 0.716 (P = 0.009, 10 d. f.) and − 0.278 (P = 0.381, 10 d. f.) in the pots with competing African and European Carpobrotus, respectively. The randomization test detected 1985 replicates of 10,000 with more extreme differences in correlation than observed between the two groups of Artemisia pots. There were no significant differences between the residuals' correlations of exposed and unexposed plants in any species (P = 0.142 in a joint randomization test for both native species). The two native species' contrasting correlations with Carpobrotus, both for lsmeans and residuals, suggest that their patterns, and possibly mechanisms, of competition with Carpobrotus were different. The Presence of Carpobrotus significantly depressed the growth of the native plants in the comparison of competition and single plant pots, whereas the Presence of native plants did not significantly affect the growth of Carpobrotus (Fig. 2, Additional file 1: Table S1; this comparison did not use the unexposed natives; see Methods). In the same analysis, the significant interaction (LRT P = 0.044) between Presence of Carpobrotus and Native species again supports increased competition between Helichrysum and Carpobrotus. The total biomass in the competition pots (i.e., the sum of the weights of the native plants and Carpobrotus plants) was greater in the pots containing European Carpobrotus (Origin of Carpobrotus, LRT based on the same model as used for the weights of native Iberian species and Carpobrotus, P < 0.001; Fig.
3), but there were no differences between competition pots containing exposed and unexposed native plants. Least square means for total biomass (g) in the competition pots. The vertical lines show asymptotic 95% confidence intervals. All lsmeans correspond to untransformed data and are drawn to the same scale to ease comparisons The two native species showed different patterns of Exposure × Origin of Carpobrotus interaction, which was consistent with differences in adaptive responses to the invader. We had no evidence of such interaction, and therefore of such a response, in Artemisia. Our experiment compared in a common greenhouse environment plants from two populations of each native species, one exposed and the other unexposed to Carpobrotus. It would be parsimonious to attribute any overall differences between these two populations to the random sampling of each species' interpopulation variation, and this variation could be the result of many processes, such as local abiotic adaptation or genetic drift, besides adaptation to Carpobrotus presence. However, we found no such overall differences (i.e., no significant main effect of Exposure), either for the competitive effect of the native plants on Carpobrotus or for their response to Carpobrotus. The only effects of Exposure were specific to each origin of the competing Carpobrotus, African or invader. They were detected by the significant interaction Exposure × Origin of Carpobrotus, where the strong interaction for Helichrysum prevailed over the small or nonexistent interaction for Artemisia, and by the triple interaction Exposure × Origin of Carpobrotus × Native species, reflecting this heterogeneity between native species in the Exposure × Origin of Carpobrotus interaction. The specificity of these Exposure effects for the two origins of Carpobrotus makes the mere sampling of interpopulation variation a less parsimonious interpretation and clearly suggests adaptation of Helichrysum to the Carpobrotus invasion.
The relative increase in growth of the African Carpobrotus competing with the exposed Helichrysum suggests that this native's response to the European Carpobrotus involves costs that reduce its performance when the invader is absent. In another potentially costly adaptation, fitness of populations of the native Pilea pumila exposed to the invader Alliaria petiolata was maximal in sites with high densities of the invader and minimal in the low-density sites [32], indicating that adaptation to interaction with invasive species may become counterproductive when the invaders are rare or absent. The change observed in the exposed Helichrysum does not fit the predictions of Atwater's model [48] of plant invasion or the observations by Fletcher et al. [49]. According to that model, of the two components of plant competitive ability defined by Miller and Werner [50], namely the ability of an individual plant to suppress competitors and the ability to tolerate them, competition among more than two individuals or species would favour the evolution of tolerance instead of suppression. This is because increased tolerance benefits only the species experiencing it, whereas increased suppression of some competitors would also benefit all non-suppressed species in the competing assemblage. Consequently, the reduction in overall competition for the suppressor would be limited. The situation could be different in our experiment due to the asymmetry of the competition. Carpobrotus is a very successful invader, and it could be difficult for native competitors to completely fill the void left by a suppression of its competitive effects. So the Helichrysum plants could obtain a net benefit by trading competition from Carpobrotus for competition from other natives. However, no significant increase in growth was detected in the exposed Helichrysum competing with the European Carpobrotus.
The observed relative decrease in Carpobrotus growth caused by the exposed Helichrysum was modest and did not prevent the European Carpobrotus from becoming larger than the African plant. The larger mass was consistent with the trend of plants becoming taller and more vigorous when they grow in non-native environments [51] and with the evolution of increased competitive ability hypothesis [52]: plants will show trade-offs in resource allocation to growth, reproduction and defence [53, 54], so that release from the biological enemies in their native environment will enable the invaders to increase their investment in other traits. In that case, it would be remarkable that Carpobrotus had maintained such release for so many years since its introduction. It is possible that the absence of native plants phylogenetically close (see [55]) to Carpobrotus in the NW Iberian Peninsula made it difficult for local phytophagous and pathogenic species to extend the range of exploited plants to encompass the newcomer. The comparison of competition and single plant pots revealed high levels of competition in the pots containing two plants for both native species. Despite this competition, the greater dry mass of the European Carpobrotus was not obtained completely at the expense of the native plants, as the total biomass in the pots containing one native and one European Carpobrotus plant was higher than in those containing one native and one African Carpobrotus plant. This raises the possibility that the primary productivity of plant communities may have increased since the introduction of Carpobrotus. It must be noted, however, that conditions in the greenhouse cannot perfectly reproduce those in the field. For example, we tried to keep all pots optimally watered throughout the experiment, thus excluding root competition for water, which could play some role in the interspecific competition in the field.
Similarly, the regular arrangement of plants in the pots could not fully represent the irregular plant distribution and density observed in the dunes. However, as seen in Fig. 4, distances between plants in the field may be as short as in our experimental pots (uncharacteristically isolated plants were chosen for the pictures of non-exposed plants to improve visibility). Carpobrotus plants and native plants in the locations sampled. Artemisia crithmifolia (a) and Helichrysum picardii (b) plants exposed to Carpobrotus in Praia de Moledo; c and d A. crithmifolia and H. picardii from the populations unexposed to Carpobrotus in Praia do Trece and Praia das Furnas, respectively Two kinds of competitive interactions would have occurred in the pots. First, exploitation or scramble competition, where mineral soil resource availability to competitors is affected through resource depletion, and second, interference or contest competition, where interaction occurs through the production and release into the soil of chemicals that are toxic to other species or inhibit access of other roots to resources (allelopathy). Some previous studies have demonstrated that both mechanisms play a key role driving the competition between Carpobrotus and native species in the field [56]. The correlation between the lsmeans of native and Carpobrotus plants in the same competition pot provided some evidence for interference. Because Artemisia is the larger of the two natives and the one suppressing Carpobrotus growth the most, it would be expected to be involved in stronger resource competition and to show more negative lsmeans correlations with Carpobrotus: pots with large Artemisia plants would be expected to sustain smaller Carpobrotus, and vice versa. But the reverse was observed: the correlation was significantly less negative than that for Helichrysum. This reduced dependency of competition on plant size in Artemisia would be consistent with this species' biology.
While there is some evidence of allelopathic properties in the genus Helichrysum [57], direct comparison of allelopathic activities in plants of the Helichrysum and Artemisia genera [58] revealed a clear advantage in the activity of the latter. Allelopathy could thus explain a depression in Carpobrotus growth that is not dependent on the variation in size of Artemisia, as there could be differences in the regulation of plant growth and allelopathic activity. In fact, trade-offs between growth and production of allelopathic compounds have been found, at least in seaweeds [59]. Some evidence of competition remained after adjusting for the main effects of Exposure and Origin of Carpobrotus and their interactions in the analytical model, as shown by the mainly negative correlations between the residuals for Carpobrotus and native species from the model adjusted to analyse their growth. These correlations were consistently negative for the Artemisia data, thus indicating that, although not dependent on the Exposure or Origin of Carpobrotus considered in that model, competition for limited resources in the pots occurred between Artemisia and Carpobrotus. In any case, the observation that the presence of Carpobrotus generally depressed the growth of Helichrysum more than that of Artemisia in the comparison of competition and single plant pots suggested more intense competition for resources. This could have resulted in stronger selection pressures on Helichrysum populations to adapt to the presence of Carpobrotus. Differences in the competitive impact of Carpobrotus across native species had already been observed in field studies of invaded areas [60, 61]. The difference in the response of the native species to the introduction of Carpobrotus could also be related to the genetic structure of the populations of these species.
Artemisia displays very limited genetic variation, both between and within populations, in the study region, probably due to its ability to disperse over long distances at high rates and to initiate new populations from very small propagules, in a series of founder events [62]. The low variation will limit the potential of exposed subpopulations to adapt to competition from the invasive Carpobrotus. By contrast, Helichrysum italicum has been shown to maintain considerable genetic differences between subpopulations at distances of only tens of kilometres in Sardinia, probably due to limited mobility of pollinating insects [63], and considerable variation within populations, at least in the Western Mediterranean [64]. Similar partial isolation could facilitate the local evolution of resistance in areas of the NW Iberian Peninsula invaded by Carpobrotus. Interestingly, Helichrysum italicum subsp. picardii was the second most abundant native species in a study of 8 sites invaded by Carpobrotus in the sand dune systems of the western coast of Portugal, with an average cover of 6.0%, compared with 6.6% for Corema album and 13.9% for Carpobrotus edulis; the only Artemisia species mentioned (Artemisia campestris ssp. maritima) was in eighth position, with 1.7% [65]. A large population size favours the maintenance of genetic variation, which would also facilitate the evolution of resistance to Carpobrotus. However, it is not clear whether the large populations of Helichrysum in the Carpobrotus-invaded sites are the cause or the consequence of that evolution, as we are not aware of any comparison of Helichrysum abundance in invaded and non-invaded sites. These evolutionary considerations may be useful additions to the list of criteria for assessing the vulnerability of native species and ecosystems to biological invasions, on which to base the assignment of priorities for surveillance and protection interventions [66].
These assessments tend to be based on ecological features (e.g., [67, 68]), but our study suggests that vulnerability to biological invasions may also depend on the genetic structure of populations, the amount of genetic diversity and the gene flow patterns. The same factors could also be important for designing management plans for invasive species [69]. In conclusion, we found that native species can evolve responses that reduce the ecological impact of invasive species, which would facilitate the integration of the latter into the invaded community. This result is consistent with previous studies of ancient introductions of mussel macroparasites (~ 70 years [70]), herbaceous plants (~ 150 years [71]) and trees (~ 170 years [14]). However, these changes may only occur in some native species, possibly depending not only on ecological aspects, but also on evolutionary aspects such as population size and the amounts of genetic variance and gene flow. We propose that consideration of these aspects may be important in analysing the conservation impact of biological invasions. The heterogeneity in native plant responses to invasion might help to explain why Carpobrotus is still having a strong impact across its area of distribution in NW Iberia. Introduction of Carpobrotus edulis Carpobrotus edulis is a succulent perennial plant that has been introduced from its native range in South Africa [72] across all Mediterranean climate regions, including California, Australia and the Mediterranean basin [73]. In Europe, this species has been grown for ornamental purposes since the beginning of the seventeenth century [74], and records of its presence in NW Iberia date back to the eighteenth century [73]. Due to its ability to spread rapidly, forming deep, dense mats, the species has been used to stabilize sand dunes and prevent soil erosion in this area since the early twentieth century, and nowadays naturalized populations of C.
edulis can be found elsewhere in coastal habitats [73], where it may have been co-evolving with native plants for more than 100 years. Its facultative C3-CAM physiology [75], high morphological and ecophysiological plasticity [76,77,78], flexible mating system [79] and intense vegetative clonality [20, 73, 80] enable the plant to tolerate a wide range of ecological conditions. These characteristics, along with high rates of seed dispersal [81], are also important features explaining the effective colonization of dune habitats, where plants compete for space, light, water and nutrients [82], in such a way that C. edulis can reduce the growth, survival and reproduction of some native species [73, and references therein]. Consequently, the release of Carpobrotus in natural environments and protected areas is prohibited in several countries (e.g., Spain, Portugal, United Kingdom, Ireland and Italy), although this taxon is not included in Regulation (EU) no. 1143/2014 [83]. In California, the plant poses a threat to several rare and endangered plant species and is listed as CalEPPC List A-1 and as CDFA-NL (http://www.cal-ipc.org/); on the contrary, it is not declared or considered noxious by any state government authorities in Australia [73]. Plant sampling We collected Carpobrotus plants from sand dune populations in their South African native range (Hawston beach, 34° 23′S, 19° 07′W, Western Cape, South Africa) in mid-January 2015 and in the invaded range (Praia de Moledo, 41º 51′N, 8º 51′W, Caminha, Portugal) in mid-April 2015. The African specimens of Carpobrotus were used as an experimental control, as they share origins with the invasive European Carpobrotus but not its recent adaptive history. The Artemisia crithmifolia and Helichrysum picardii plants exposed to Carpobrotus were collected from Caminha (Portugal) at the same time as the Carpobrotus plants.
The two native species selected in the experiment differ in several respects relevant to their evolutionary responses to biological invasion. Artemisia is a larger plant (see the two species' descriptions in Castroviejo et al. [84]) with more allelopathic activity [58] and lower genetic variation [62, 63] and ground cover in the sampled sand dunes [65]. Artemisia, but not Helichrysum, has rhizomatous structures allowing it to optimize belowground resource uptake and storage, which could increase its competitive ability. The Artemisia and Helichrysum plants from populations unexposed to Carpobrotus were collected in mid-March 2016, in Camariñas (Praia do Trece, [43° 11′N, 9º 10′W], Galicia) and Porto do Son (Praia das Furnas [42° 38′N, 9º 02′W], Galicia), respectively (Fig. 4). The population of Artemisia in Porto do Son and the population of Helichrysum in Camariñas were too small to take a representative sample of both species from one site. Therefore, unexposed Artemisia and Helichrysum plants were collected in Camariñas and Porto do Son, respectively. The distance between these populations is about 76 km and the environmental conditions are quite similar. The monthly mean temperatures registered at the meteorological station of Camariñas ranged from 11 ºC (March 2016) to 18.5 ºC (August 2016) and monthly mean rainfall ranged from 397.1 L/m2 (January 2016) to 1.3 L/m2 (July 2016). The monthly mean temperatures registered at the meteorological station of Ribeira, a place very near Porto do Son, ranged from 10.9 ºC (February 2016) to 21.5 ºC (July 2016) and monthly mean rainfall ranged from 265.7 L/m2 (January 2016) to 4.8 L/m2 (July 2016) (www.meteogalicia.es). No direct measures of the extent of Carpobrotus plant cover are available for the exposed sampling sites, but visual estimates based on pictures taken during the sampling were of about 60–70% in both sites. We sampled the native and invasive Carpobrotus populations intensively.
To obtain a more comprehensive representation of the genetic variability of the species in each area, we selected 36 separate clumps per area. The minimum separation between sampled clumps was 25 m. Carpobrotus forms compact clumps [72], and it is reasonable to assume that each separate clump represents a different genotype. Thus, we would have collected a total of 72 genotypes. From these, we randomly selected the genotypes used in our experiment. Our sampling protocol has been described in detail in Roiloa et al. [20]. Likewise, the populations of the two native species were extensively sampled in order to gather the greatest genetic diversity within each population. The plant taxa nomenclature follows standard Iberian floras [84]. The native species were collected following current Spanish regulations. No specific permissions were required. The invasive C. edulis was collected from natural populations and propagated under permission from the Spanish Ministry of Agriculture, Food, and the Environment, and complied with the Convention on the Trade in Endangered Species of Wild Fauna and Flora. We found no historical records (including records about past eradication campaigns, which mainly began in the twenty-first century in NW Iberia; [73]) of the presence of Carpobrotus in our unexposed locations of Camariñas and Porto do Son. In any case, a previous, undocumented presence of Carpobrotus in these locations (both currently free of Carpobrotus) would imply that the species had become locally extinct, which is unlikely given its invasive nature. Collected plants were washed and maintained in a climate-controlled greenhouse at the University of Santiago de Compostela until the start of the experiment in April 2016. The experiment was carried out in a greenhouse at the University of Santiago de Compostela (Galicia, Spain). We used 5 L plastic pots filled with a growing substrate similar to that in natural conditions, i.e., a 1:1 mixture of potting compost and dune sand.
The environmental conditions were identical for all species, grown under a natural day/night light cycle between April 2016 and April 2017. Monthly global irradiance ranged from 15.3 MJ m−2 day−1 (April 2016) to 21.3 MJ m−2 day−1 (April 2017), although photosynthetically active radiation was reduced by about 12% inside the greenhouse with respect to full sunlight outdoors (measured with a LI-190SA Quantum Sensor, LI-COR, Lincoln, Nebraska, USA). The temperature inside the greenhouse ranged from 15 ℃ to 22 ℃. The plants were watered according to their requirements (once or twice per week) in order to prevent hydric stress. Additionally, to avoid confounding effects of pot position within the greenhouse, these positions were randomized monthly. The basic units in our experiment were competition pots containing one native plant and one Carpobrotus genet. Each genet, obtained from a different donor plant, was composed of the three most apical vegetative ramets (i.e., modules sensu Harper [85]), to guarantee that all of the material was at the same developmental stage. The plant pairs in these competition pots were arranged in a factorial design (Fig. 5) considering the effects of prior exposure of native plants to Carpobrotus (Exposure: exposed/unexposed), origin of Carpobrotus plants (African/European), native Iberian plant species (Artemisia/Helichrysum) and their interactions on the competition between the two plants (six replicate pots per combination of factors: 6 × 2 × 2 × 2 = 48 pots; 96 plants). The two-plant competition pots were complemented with pots containing single plants (six pots for African Carpobrotus, six for European Carpobrotus, and six for each Exposure × Native species combination: 36 pots and plants). Comparisons between single plant and competition pots made it possible to confirm the existence, and measure the intensity, of competition experienced by Carpobrotus and native plants in the competition pots. Experimental set-up.
The experimental units were pots containing African or European (black and grey) Carpobrotus and native Iberian plants previously exposed or not previously exposed (grey and white background) to European Carpobrotus. There were two sets of pots as shown, one for each native species. The plot lines and arrows mark the pots used in the data analyses: continuous line, comparisons of pots containing two plants; fine dashed lines, comparisons between native Iberian plants in pots containing one and two plants; thick dashed lines, comparisons between Carpobrotus plants in pots containing one and two plants; block arrows, comparisons between plants in pots containing one plant. Icons from [86, 87]. Before the start of the experiment, the initial fresh weight of each Carpobrotus, Artemisia and Helichrysum plant was measured to the nearest 0.0001 g (Mettler AJ100, Mettler-Toledo, Greifensee, Switzerland). The plants were grown for twelve months under the experimental conditions and were then harvested, washed, cleaned and dried at 60 °C to constant weight. Each plant was separated into shoots (including leaves and stolons in the case of Carpobrotus) and roots, and the final dry weights of each fraction (total, above-ground and root dry mass) were recorded. All statistical analyses were based on linear models. The data sets were unbalanced, as some plants did not survive up to the end of the experiment, and we used the R [88] functions "glm()" and "drop1()" (package stats) and "emmeans()" (package emmeans) to carry out likelihood ratio tests (LRT) and calculate least square means (hereafter "lsmeans") for the mass data. We complemented the LRTs with more intuitive, Akaike weight-based calculations of the probability of one model being favoured over the other [89].
The difference in AIC between model i and the best model, i.e., that with minimum AIC, is $\Delta_i(\mathrm{AIC}) = \mathrm{AIC}_i - \min \mathrm{AIC}$, and the weight of model i is $$w_i(\mathrm{AIC}) = \frac{\exp\left\{-\tfrac{1}{2}\Delta_i(\mathrm{AIC})\right\}}{\sum_{k=1}^{K}\exp\left\{-\tfrac{1}{2}\Delta_k(\mathrm{AIC})\right\}}$$ where K is the number of models considered. The normalized probability that model 1 is preferred (i.e., it is better in terms of Kullback–Leibler discrepancy; see [90]) over model 2 is $$w_1(\mathrm{AIC}) \,/\, \left(w_1(\mathrm{AIC}) + w_2(\mathrm{AIC})\right)$$ Because the numbers of parameters in the statistical models considered were large relative to sample sizes, we replaced AIC with its small sample version AICc [90] in all calculations shown. Both the analyses of the measures from the native plants and those from the Carpobrotus plants in the competition pots considered the effects of Exposure, Origin of Carpobrotus, Native species and their interactions. This was because, in a competition situation, the characteristics of one plant may affect the plant competing with it. For example, African and European Carpobrotus could have different effects on the native plant in the same pot. For the same reason, both analyses considered as covariables the initial fresh mass of the native and the Carpobrotus plants. The native Iberian species growing with South African Carpobrotus were not considered in the comparisons between the native plants in competition and single plant pots, because this would have yielded heterogeneous and difficult-to-interpret levels for the origin of Carpobrotus factor (European, African and none). The effects considered in these analyses were Exposure to Carpobrotus, Native species and the Presence (presence/absence) of Carpobrotus in the pot.
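The Akaike-weight calculation described above can be made concrete with a short sketch. The paper's analyses were run in R; the following Python translation of the two formulas is purely illustrative, and the AICc values used below are made up.

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-0.5 * delta_i) / sum_k exp(-0.5 * delta_k)."""
    best = min(aics)                           # min AIC over the model set
    deltas = [a - best for a in aics]          # Delta_i(AIC) = AIC_i - min AIC
    terms = [math.exp(-0.5 * d) for d in deltas]
    total = sum(terms)
    return [t / total for t in terms]

def prob_model1_preferred(aic1, aic2):
    """Normalized probability that model 1 is preferred over model 2."""
    w1, w2 = akaike_weights([aic1, aic2])
    return w1 / (w1 + w2)

# Hypothetical AICc values for two competing models
print(round(prob_model1_preferred(100.0, 102.0), 3))  # 0.731
```

An AICc difference of 2 thus translates into roughly a 73% probability of the better model being preferred, which is the kind of intuitive statement the authors complement the LRTs with.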
Only the initial fresh mass of the native plants was used as a covariable here, as only half of the pots (those containing two plants) contained a Carpobrotus plant for which an initial weight was available. Similarly, the Carpobrotus plants growing with non-exposed native plants were not considered in the comparisons between Carpobrotus in competition and single plant pots, to prevent heterogeneous and difficult-to-interpret levels for the Exposure factor (exposed, unexposed and no native plant). Only the initial fresh mass of Carpobrotus was used as a covariable in these analyses, along with Origin of Carpobrotus and Presence of native Iberian plants.

The dataset supporting the conclusions of this article is included within the article (and its Additional file 6).

Ricciardi A. Are modern biological invasions an unprecedented form of global change? Conserv Biol. 2007;21:329–36.
Richardson DM, Allsopp N, D'Antonio CM, Milton SJ, Rejmanek M. Plant invasions - the role of mutualisms. Biol Rev Camb Philos Soc. 2000;75:65–93.
Díaz S, Settele J, Brondízio ES, Ngo HT, Agard J, Arneth A, et al. Pervasive human-driven decline of life on Earth points to the need for transformative change. Science. 2019;366:eaax3100. https://doi.org/10.1126/science.aax3100.
Bongaarts J, Casterline J, Desai S, Hodgson D, MacKellar L. Summary for policymakers of the global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. Popul Dev Rev. 2019;45:680–1. https://doi.org/10.1111/padr.12283.
Thomas CD, Palmer G. Non-native plants add to the British flora without negative consequences for native diversity. Proc Natl Acad Sci U S A. 2015;112:4387–92.
Su G, Logez M, Xu J, Tao S, Villéger S, Brosse S. Human impacts on global freshwater fish biodiversity. Science. 2021;371:835–8.
Bright C. Life out of bounds. New York: WW Norton and Co; 1988.
Ellis BK, Stanford JA, Goodman D, Stafford CP, Gustafson DL, Beauchamp DA, Chess DW, Craft JA, Deleray MA, Hansen BS. Long-term effects of a trophic cascade in a large lake ecosystem. Proc Natl Acad Sci U S A. 2011;108:1070–5.
Fritts TH, Rodda GH. The role of introduced species in the degradation of island ecosystems: a case history of Guam. Annu Rev Ecol Syst. 1998;39:113–40.
Bartley TJ, McCann KS, Bieg C, Cazelles K, Granados M, Guzzo MM, et al. Food web rewiring in a changing world. Nat Ecol Evol. 2019;3:345–54.
Thomas CD. Local diversity stays about the same, regional diversity increases, and global diversity declines. Proc Natl Acad Sci U S A. 2013;110:19187–8.
Oduor A. Evolutionary responses of native plant species to invasive plants: a review. New Phytol. 2013;200:986–92.
Workman RE, Cruzan MB. Common mycelial networks impact competition in an invasive grass. Am J Bot. 2016;103:1041–9.
Schilthuizen M, Pimenta LPS, Lammers Y, Steenbergen PJ, Flohil M, Beveridge NGP, et al. Incorporation of an invasive plant into a native insect herbivore food web. PeerJ. 2016. https://doi.org/10.7717/peerj.1954.
Vizentin-Bugoni J, Tarwater CE, Foster JT, Drake DR, Gleditsch JM, Hruska AM, et al. Structure, spatial dynamics, and stability of novel seed dispersal mutualistic networks in Hawai'i. Science. 2019;364:78–82.
Chapman PM. Benefits of invasive species. Mar Pollut Bull. 2016;107:1–2.
Kaiser-Bunbury C, Mougal J, Whittington AE, Valentin T, Gabriel R, Olesen JM, et al. Ecosystem restoration strengthens pollination network resilience and function. Nature. 2017;542:223–7.
Strauss SY, Lau JA, Carroll SP. Evolutionary responses of natives to introduced species: what do introductions tell us about natural communities? Ecol Lett. 2006;9:357–74.
Alfaro B, Marshall DL. Phenotypic variation of life-history traits in native, invasive, and landrace populations of Brassica tournefortii. Ecol Evol. 2019;9:13127–41.
Roiloa SR, Retuerto R, Campoy JG, Novoa A, Barreiro R.
Division of labor brings greater benefits to clones of Carpobrotus edulis in the non-native range: evidence for rapid adaptive evolution. Front Plant Sci. 2016;7:349. https://doi.org/10.3389/fpls.2016.00349.
Felker-Quinn E, Schweitzer JA, Bailey JK. Meta-analysis reveals evolution in invasive plant species but little support for evolution of increased competitive ability (EICA). Ecol Evol. 2013;3:739–51.
Colautti RI, Lau JA. Contemporary evolution during invasion: evidence for differentiation, natural selection, and local adaptation. Mol Ecol. 2015;24:1999–2017.
Bertelsmeier C, Keller L. Bridgehead effects and role of adaptive evolution in invasive populations. Trends Ecol Evol. 2018;33:527–34.
Cruzan MB. How to make a weed: the saga of the slender false brome invasion in the North American west and lessons for the future. Bioscience. 2019;69:496–507.
Stastny M, Sargent RD. Evidence for rapid evolutionary change in an invasive plant in response to biological control. J Evol Biol. 2017;30:1042–52.
Agrawal AA, Hastings AP, Bradburd GS, Woods EC, Züst T, Harvey JA, et al. Evolution of plant growth and defense in a continental introduction. Am Nat. 2015;186:E1–15.
Stuart YE, Campbell TS, Hohenlohe PA, Reynolds RG, Revell LJ, Losos JB. Rapid evolution of a native species following invasion by a congener. Science. 2014;346:463–6.
Colautti RI, Barret SC. Population divergence along lines of genetic variance and covariance in the invasive plant Lythrum salicaria in eastern North America. Evolution. 2011;65:2514–29.
Ziska LH, Tomecek MB, Valerio M, Thompson JP. Evidence for recent evolution in an invasive species, Microstegium vimineum, Japanese stiltgrass. Weed Res. 2015;55:260–7.
Callaway RM, Maron JL. What have exotic plant invasions taught us over the past 20 years? Trends Ecol Evol. 2016;21:369–74.
Carroll SP, Dingle H, Famula TR, Fox CW. Genetic architecture of adaptive differentiation in evolving host races of the soapberry bug, Jadera haematoloma.
In: Hendry AP, Kinnison MT, eds. Microevolution Rate, Pattern, Process. Contemporary Issues in Genetics and Evolution, vol 8. Dordrecht: Springer; 2001. p. 257–272.
Lankau RA. Coevolution between invasive and native plants driven by chemical competition and soil biota. Proc Natl Acad Sci U S A. 2012;109:11240–5.
Mealor BA, Hild AL. Post-invasion evolution of native plant populations: a test of biological resilience. Oikos. 2007;116:1493–500.
Whitney KD, Gabler CA. Rapid evolution in introduced species, "invasive traits" and recipient communities: challenges for predicting invasive potential. Divers Distrib. 2008;14:569–80.
Callaway RM, Ridenour WM, Laboski T, Weir T, Vivanco JM. Natural selection for resistance to the allelopathic effects of invasive plants. J Ecol. 2005;93:576–83.
Rowe CJ, Leger EA. Competitive seedlings and inherited traits: a test of rapid evolution of Elymus multisetus (big squirreltail) in response to cheatgrass invasion. Evol Appl. 2011;4:485–98.
Crooks JA. Lag times and exotic species: the ecology and management of biological invasions in slow-motion. Ecoscience. 2005;12:316–29.
Simberloff D, Gibbons L. Now you see them now you don't! - population crashes of established introduced species. Biol Inv. 2004;6:161–72.
Green PT, O'Dowd DJ, Abbott KL, Jeffery M, Retallick K, Mac NR. Invasional meltdown: invader–invader mutualism facilitates a secondary invasion. Ecology. 2011;92:1758–68.
Simberloff D, Von Holle B. Positive interactions of nonindigenous species: invasional meltdown? Biol Invasions. 1999;1:21–32.
Thorpe AS, Aschehoug ET, Atwater DZ, Callaway RM. Interactions among plants and evolution. J Ecol. 2011;99:729–40.
Dostál P, Müllerová J, Pyšek P, Pergl J, Klinerová T. The impact of an invasive plant changes over time. Ecol Lett. 2013;16:1277–84.
Iacarella JC, Mankiewicz PS, Ricciardi A. Negative competitive effects of invasive plants change with time since invasion. Ecosphere. 2015;6:art123. https://doi.org/10.1890/ES15-00147.1.
Hawkes C.
Are invaders moving targets? The generality and persistence of advantages in size, reproduction, and enemy release in invasive plant species with time since introduction. Am Nat. 2007;170:832–43.
Kurr M, Davies AJ. Time-since-invasion increases native mesoherbivore feeding rates on the invasive alga, Sargassum muticum (Yendo) Fensholt. J Mar Biol Assoc UK. 2018;98:1935–44.
Moran EV, Alexander JM. Evolutionary responses to global change: lessons from invasive species. Ecol Lett. 2014;17:637–49.
Strayer DL, Eviner VT, Jeschke JM, Pace ML. Understanding the long-term effects of species invasions. Trends Ecol Evol. 2006;21:645–51.
Atwater DZ. Interplay between competition and evolution in invaded and native plant communities. Ph.D. dissertation, University of Montana; 2012.
Fletcher RA, Callaway RM, Atwater DZ. An exotic invasive plant selects for increased competitive tolerance, but not competitive suppression, in a native grass. Oecologia. 2016;181:499–505.
Miller TE, Werner PA. Competitive effects and responses between plant species in a first-year old-field community. Ecology. 1987;68:1201–10.
Crawley MJ. What makes a community invasible? In: Gray AJ, Crawley MJ, Edwards PP, eds. Blackwell Scientific Publications Oxford; 1987. p. 429–53.
Blossey B, Nötzold R. Evolution of increased competitive ability in invasive nonindigenous plants: a hypothesis. J Ecol. 1995;83:887–9.
Bazzaz FA, Chiariello NR, Coley PD, Pitelka LF. Allocating resources to reproduction and defense. Bioscience. 1987;37:58–67.
Coley PD, Bryant JP, Chapin FS. Resource availability and plant antiherbivore defense. Science. 1985;230:895–9.
Strauss SY, Webb CO, Salamin N. Exotic taxa less related to native species are more invasive. Proc Natl Acad Sci U S A. 2006;103:5841–5.
Novoa A, González L, Moravcová L, Pyšek P. Constraints to native plant species establishment in coastal dune communities invaded by Carpobrotus edulis: implications for restoration. Biol Conserv. 2013;164:1–9.
Araniti F, Sorgonà A, Lupini A, Abenavoli M. Screening of Mediterranean wild plant species for allelopathic activity and their use as bio-herbicides. Allelopathy J. 2012;29:107–24.
Mancini E, De Martino L, Marandino A, Scognamiglio MR, De Feo V. Chemical composition and possible in vitro phytotoxic activity of Helichrysum italicum (Roth) Don ssp italicum. Molecules. 2011;16:7725–35.
Rasher DB, Hay ME. Competition induces allelopathy but suppresses growth and anti-herbivore defence in a chemically rich seaweed. Proc R Soc B. 2014;281:20132615. https://doi.org/10.1098/rspb.2013.2615.
Jucker T, Carboni M, Acosta ATR. Going beyond taxonomic diversity: deconstructing biodiversity patterns reveals the true cost of iceplant invasion. Divers Distrib. 2013;19:1566–77.
Vilà M, Tessier M, Suehs CM, Brundu G, Carta L, Galanidis A, et al. Local and regional assessments of the impacts of plant invaders on vegetation structure and soil properties of Mediterranean islands. J Biogeogr. 2006;33:853–61.
García-Fernández A, Vitales D, Pellicer J, Garnatje T, Vallés J. Phylogeographic insights into Artemisia crithmifolia (Asteraceae) reveal several areas of the Iberian Atlantic coast as refugia for genetic diversity. Plant Syst Evol. 2017;303:509–19.
Melito S, Sias A, Petretto G, Chessa M, Pintore G, Porceddu A. Genetic and metabolite diversity of Sardinian populations of Helichrysum italicum. PLoS ONE. 2013;8:e79043. https://doi.org/10.1371/journal.pone.0079043.
Galbany-Casals M, Blanco-Moreno JM, Garcia-Jacas N, Breitwieser I, Smissen RD. Genetic variation in Mediterranean Helichrysum italicum (Asteraceae; Gnaphalieae): do disjunct populations of subsp microphyllum have a common origin? Plant Biol. 2011;13:678–87.
Maltez-Mouro S, Maestre F, Freitas H. Weak effects of the exotic invasive Carpobrotus edulis on the structure and composition of Portuguese sand-dune communities. Biol Inv. 2010;12:2117–30.
Probert AF, Ward DF, Beggs JR, Lin S, Stanley MC.
Conceptual risk framework: integrating ecological risk of introduced species with recipient ecosystems. Bioscience. 2020;70:71–9.
Grainger TH, Levine JM, Gilbert B. The Invasion Criterion: a common currency for ecological research. Trends Ecol Evol. 2019;34:925–35.
Ward D, Morgan F. Modelling the impacts of an invasive species across landscapes: a stepwise approach. PeerJ. 2014;2:e435. https://doi.org/10.7717/peerj.435.
Leger EA, Espeland EK. Coevolution between native and invasive plant competitors: implications for invasive species management. Evol Appl. 2010;3:169–78.
Feis ME, Goedknegt MA, Thieltges DW, Buschbaum C, Wegner KM. Biological invasions and host-parasite coevolutionary trajectories along separate parasite invasion fronts. Zoology. 2016;119:366–74.
Huang F, Lankau R, Peng S. Coexistence via coevolution driven by reduced allelochemical effects and increased tolerance to competition between invasive and native plants. New Phytol. 2018;218:357–69.
Wisura W, Glen HF. The South African species of Carpobrotus (Mesembryanthema–Aizoaceae). Contr Bolus Herb. 1993;15:76–107.
Campoy JG, Acosta ATR, Affre L, Barreiro R, Brundu G, Buisson E, et al. Monographs of invasive plants in Europe: Carpobrotus. Bot Lett. 2018;165:440–75.
Codd LE, Gunn M. Additional biographical notes on plant collectors in Southern Africa. Bothalia. 1985;15:631–54.
Winter K, Holtum JAM. Facultative crassulacean acid metabolism (CAM) plants: powerful tools for unravelling the functional elements of CAM photosynthesis. J Exp Bot. 2014;65:3425–41.
Campoy JG, Roiloa SR, Santiso X, Retuerto R. Ecophysiological differentiation between two invasive species of Carpobrotus competing under different nutrient conditions. Am J Bot. 2019;106:1454–65.
Fenollosa E, Munné-Bosch S, Pintó-Marijuan M. Contrasting phenotypic plasticity in the photoprotective strategies of the invasive species Carpobrotus edulis and the coexisting native species Crithmum maritimum. Physiol Plant. 2017;160:185–200.
Traveset A, Moragues E, Valladares F. Spreading of the invasive Carpobrotus aff acinaciformis in Mediterranean ecosystems: the advantage of performing in different light environments. Appl Veg Sci. 2008;11:45–54.
Suehs C, Affre L, Médail F. Invasion dynamics of two alien Carpobrotus (Aizoaceae) taxa on a Mediterranean island: II reproductive strategies. Heredity. 2014;92:550–6.
Campoy JG, Retuerto R, Roiloa SR. Resource-sharing strategies in ecotypes of the invasive clonal plant Carpobrotus edulis: specialization for abundance or scarcity of resources. J Plant Ecol. 2017;10:681–91.
D'Antonio CM. Seed production and dispersal in the non-native, invasive succulent Carpobrotus edulis (Aizoaceae) in coastal strand communities of Central California. J Appl Ecol. 1990;27:693–702.
D'Antonio CM, Mahall BE. Root profiles and competition between the invasive, exotic perennial, Carpobrotus edulis, and two native shrub species in California coastal scrub. Am J Bot. 1991;78:885–94.
Regulation (EU) No 1143/2014 of the European Parliament and of the Council of 22 October 2014 on the prevention and management of the introduction and spread of invasive alien species.
Benedí C, Buira A, Rico E, Crespo MB, Quintanar A, Aedo C (eds.). S. Castroviejo (coord.). Flora ibérica. Plantas vasculares de la península ibérica e islas Baleares. Vol. XVI (III) Compositae (partim). Madrid: Real Jardín Botánico, CSIC; 2019.
Harper JL. Population biology of plants. London: Academic Press; 1977.
https://icons8.com/icons/set/plant.
https://www.flaticon.com/free-icon/herb_1398234?term=herb&page=1&position=17.
R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2018. https://www.R-project.org/.
Wagenmakers EJ, Farrell S. AIC model selection using Akaike weights. Psychon Bull Rev. 2004;11:192–6.
Burnham KP, Anderson DR. Kullback-Leibler information as a basis for strong inference in ecological studies. Wildlife Res. 2001;28:111–9.
The authors thank Margarita Lema for assistance with field sampling. This research and its publication costs were funded by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (ERDF) (grant Ref. CGL2013-48885-C2-2-R). The funding bodies played no role in the design of the study, the collection, analysis, and interpretation of data, or in writing the manuscript. Any opinion, finding and conclusion or recommendation expressed in this publication is that of the authors, and the funding bodies do not accept liability in this regard.

CIBUS, Campus Sur, Universidade de Santiago, 15782 Santiago de Compostela, Spain.
Department of Functional Biology, Area of Ecology, Faculty of Biology, CRETUS Inst., Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain (Josefina G. Campoy & Rubén Retuerto).

CG: conceptualization, collection of field samples, investigation, methodology, statistical analysis, writing-original draft, writing-review & editing. JGC: collection of field samples and data, investigation, methodology, writing, writing-review & editing. RR: collection of field samples and data, funding acquisition, investigation, methodology, writing, writing-review & editing. All authors read and approved the final manuscript.

Correspondence to Carlos García.

Table S1. Analysis of the final dry masses of the native plants in the comparison of pots containing one plant and two plants. Columns show the levels of the main factors in advantage for final mass or the estimated slopes for the covariables, the Likelihood Ratio Test probability for each model term, the AIC weight, and the normalized probability that the model including that term will be selected. Number of residual degrees of freedom = 27.

Table S2. Analysis of the final dry masses of the Carpobrotus plants in the comparisons of pots with one plant and two plants.
Columns show the levels of the main factors in advantage for the final mass or the estimated slopes for the covariable, the Likelihood Ratio Test probability for each model term, the AIC weight, and the normalized probability that the model including that term will be selected. Number of residual degrees of freedom = 16.

Table S3. Likelihood Ratio test probabilities for mass-related variables in the comparisons of pots containing two plants.

Table S4. Likelihood Ratio test probabilities for mass-related variables for native Iberian species in the comparisons of pots containing one and two plants.

Table S5. Likelihood Ratio test probabilities for mass-related Carpobrotus variables in the comparisons of pots containing one and two Carpobrotus plants.

The full information on sampling locations, experimental treatments, initial and final fresh masses and final root and total dry masses for all native and Carpobrotus plants in the experiment.

García, C., Campoy, J.G. & Retuerto, R. A test of native plant adaptation more than one century after introduction of the invasive Carpobrotus edulis to the NW Iberian Peninsula. BMC Ecol Evo 21, 69 (2021). https://doi.org/10.1186/s12862-021-01785-x

Received: 30 October 2020. Accepted: 01 April 2021.

Keywords: Artemisia crithmifolia, Co-evolutionary changes, Helichrysum picardii
\begin{document} \title{Context-specific independence in graphical log-linear models} \author{Henrik Nyman$^{1, \ast}$, Johan Pensar$^{1}$, Timo Koski$^{3}$, Jukka Corander$^{2}$ \\ $^{1}$Department of Mathematics, \AA bo Akademi University, Finland \\ $^{2}$Department of Mathematics and Statistics, University of Helsinki, Finland \\ $^{3}$Department of Mathematics \\ KTH Royal Institute of Technology, Stockholm, Sweden \\ $^{\ast}$Corresponding author, Email: [email protected]} \date{} \maketitle \begin{abstract} Log-linear models are the popular workhorses of analyzing contingency tables. A log-linear parameterization of an interaction model can be more expressive than a direct parameterization based on probabilities, leading to a powerful way of defining restrictions derived from marginal, conditional and context-specific independence. However, parameter estimation is often simpler under a direct parameterization, provided that the model enjoys certain decomposability properties. Here we introduce a cyclical projection algorithm for obtaining maximum likelihood estimates of log-linear parameters under an arbitrary context-specific graphical log-linear model, which need not satisfy criteria of decomposability. We illustrate that lifting the restriction of decomposability makes the models more expressive, such that additional context-specific independencies embedded in real data can be identified. It is also shown how a context-specific graphical model can correspond to a non-hierarchical log-linear parameterization with a concise interpretation. This observation can pave the way to further development of non-hierarchical log-linear models, which have been largely neglected due to their believed lack of interpretability. \end{abstract} \noindent Keywords: Graphical model; Context-specific interaction model; Log-linear model; Parameter estimation.
\section{Introduction} Log-linear models for contingency tables have enjoyed a wide popularity since their introduction in the 1970s, enabling a comprehensive approach to testing hypotheses of marginal and conditional independence, as well as more detailed global scrutiny of inter-dependencies within a set of discrete variables \citep{Whittaker90, Lauritzen96}. Graphical models have received most of the attention within the class of log-linear models, which is unsurprising given their interpretability and relative ease of model fitting. However, several other dependency structures with a log-linear representation have also been considered, such as hierarchical \citep{Lauritzen96}, pairwise interaction \citep{Whittaker90}, split \citep{Hojsgaard03}, labeled \citep{Corander03a}, and context-specific interaction models \citep{Eriksen99,Hojsgaard04}. Recently, \citet{Nyman14a} introduced a class of stratified graphical models (SGMs), where strata are defined locally in the outcome space such that a specific pair of variables is independent in the context defined by a combination of values of the joint neighbors of the two variables. This is in contrast to ordinary graphical models, where a pair of variables is always considered either conditionally independent or completely dependent given their joint neighbors. The work of \citet{Nyman14a} generalizes the results on labeled graphical models, introduced in \citet{Corander03a}. To be able to obtain an analytical expression for Bayesian model scoring of SGMs, \citet{Nyman14a} restricted their attention to a class of decomposable models under a direct parameterization of the probabilities (rather than a log-linear parameterization), similar to the class of graphical models, for which the majority of model learning approaches have been devised under the assumption that non-chordal graphs are excluded from the search space.
Despite the assumption of decomposability, the resulting model class was shown to be expressive for real data, and \citet{Nyman14b} additionally illustrated that SGMs can lead to more accurate probabilistic classifiers than those based on standard graphical models. Since the assumption of decomposability is generally made for computational convenience, rather than being motivated by data met in real applications, it is desirable to develop theory which enables fitting of context-specific graphical log-linear models irrespective of whether they are decomposable or non-decomposable. Using the general estimation theory from \citet{Csiszar75} and \citet{Rudas98}, we introduce a cyclical projection algorithm which can be used to obtain the maximum likelihood estimate for any context-specific graphical log-linear model. This result is of interest on its own; however, to also illustrate the increased expressiveness of unrestricted context-specific graphical log-linear models for real data, we combine the maximum likelihood estimation with approximate Bayesian model scoring to define a search algorithm for the optimal model for a given data set. We additionally briefly illustrate the fact that some context-specific graphical models are also non-hierarchical log-linear models. This is particularly illuminating, since non-hierarchical log-linear models have generally been avoided due to their believed lack of an apparent interpretation of the parameter restrictions. The remainder of the article is structured as follows. In the next section we define the basic concepts related to graphical and stratified graphical models. In Section 3, the log-linear parameterization of an SGM is defined, leading to a context-specific graphical log-linear model, together with some observations concerning model identifiability. In Section 4, we introduce a projection algorithm which is proven to converge to the maximum likelihood estimate for a context-specific graphical log-linear model.
In Section 5, we devise an approximate Bayesian model optimization algorithm, based on the Bayesian information criterion and a stochastic search over the model space. The algorithm is illustrated by application to real data in Section 6, and the final section provides some remarks and possibilities for future work. \section{Stratified graphical models} \label{secSGM} To enable the presentation of stratified graphical models, some of the central concepts from the theory of graphical models are first introduced \citep{Nyman14a}. For a comprehensive account of the statistical and computational theory of probabilistic graphical models, see \citet{Whittaker90}, \citet{Lauritzen96}, and \citet{Koller09}. It is assumed throughout this article that all considered variables are binary. However, the introduced theory can readily be extended to finite discrete variables. While the terms node and variable are closely related when considering graphical models, we will strive to use the notation $X_{\delta}$ when referring to the variable associated with node $\delta$. Let $G = (\Delta,E)$ be an undirected graph, consisting of a set of nodes $\Delta$ and a set of undirected edges $E\subseteq\{\Delta \times\Delta\}$. Two nodes $\gamma$ and $\delta$ are \textit{adjacent} in a graph if $\{\gamma, \delta\}\in E$, that is, an edge exists between them. A \textit{path} in a graph is a sequence of nodes such that for each successive pair within the sequence the nodes are adjacent. A \textit{cycle} is a path that starts and ends with the same node. A \textit{chord} in a cycle is an edge between two non-consecutive nodes in the cycle. Two sets of nodes $A$ and $B$ are said to be \textit{separated} by a third set of nodes $S$ if every path between nodes in $A$ and nodes in $B$ contains at least one node in $S$. A graph is defined as \textit{chordal} if all cycles found in the graph containing four or more unique nodes contain at least one chord.
For a subset of nodes $A \subseteq \Delta$, $G_{A}=(A,E_{A})$ is a subgraph of $G$, such that the nodes in $G_{A}$ are equal to $A$ and the edge set comprises those edges of the original graph for which both nodes are in $A$, i.e. $E_{A} = \{A \times A\} \cap E$. A graph is defined as \textit{complete} when all pairs of nodes in the graph are adjacent. A \textit{clique} in a graph is a set of nodes $A$ such that the subgraph $G_{A}$ is complete. A \textit{maximal clique} $C$ is a clique for which there exists no set of nodes $C^*$ such that $C \subset C^*$ and $G_{C^*}$ is also complete. The set of maximal cliques in the graph $G$ will be denoted by $\mathcal{C}(G)$. The set of \textit{separators}, $\mathcal{S}(G)$, in the chordal graph $G$ can be obtained through intersections of the maximal cliques of $G$ ordered in terms of a junction tree, see e.g. \citet{Golumbic04}. The outcome space for the variables $X_A$, where $A \subseteq \Delta$, is denoted by $\mathcal{X}_{A}$ and an element in this space by $x_{A} \in \mathcal{X}_{A}$. Given our restriction to binary variables, the cardinality $|\mathcal{X}_{A}|$ of $\mathcal{X}_{A}$ equals $2^{|A|}$. A graphical model is defined by the pair $G=(\Delta,E)$ and the joint distribution $P_{\Delta}$ on the variables $X_{\Delta}$, such that $P_{\Delta}$ fulfills a set of restrictions induced by $G$. If there exists no path between two sets of nodes $A$ and $B$, the two sets of variables $X_A$ and $X_B$ are marginally independent, i.e. $P(X_A, X_B) = P(X_A) P(X_B)$. Similarly, two sets of random variables $X_A$ and $X_B$ are conditionally independent given a third set of variables $X_S$, $P(X_A, X_B \mid X_S) = P(X_A \mid X_S) P(X_B \mid X_S)$, if $S$ separates $A$ and $B$ in $G$.
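As a concrete illustration of the clique notions above (not part of the original text), the following Python sketch enumerates, by brute force, the maximal cliques of a small chordal graph chosen here purely as an example; for larger graphs a dedicated algorithm such as Bron–Kerbosch would be used instead.

```python
from itertools import combinations

# Example chordal graph on nodes {1, 2, 3, 4} (illustrative choice)
nodes = {1, 2, 3, 4}
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (2, 4)]}

def is_complete(subset):
    """A node set is complete if every pair of its nodes is adjacent."""
    return all(frozenset(p) in edges for p in combinations(subset, 2))

# All complete subsets (cliques), then keep only the maximal ones
cliques = [set(s) for r in range(1, len(nodes) + 1)
           for s in combinations(sorted(nodes), r) if is_complete(s)]
maximal = [c for c in cliques if not any(c < d for d in cliques)]

print(sorted(sorted(c) for c in maximal))  # [[1, 2], [2, 3, 4]]
```

The two maximal cliques $\{1,2\}$ and $\{2,3,4\}$ intersect in the separator $\{2\}$, in line with the junction-tree construction of $\mathcal{S}(G)$ mentioned above.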
A statement of conditional independence of two variables $X_{\delta}$ and $X_{\gamma}$ given $X_S$ imposes fairly strong restrictions on the joint distribution, since the condition $P(X_{\delta}, X_{\gamma} \mid X_S) = P(X_{\delta} \mid X_S) P(X_{\gamma} \mid X_S)$ must hold for every joint outcome of the variables $X_S$. The idea common to context-specific independence models is to lift some of these restrictions to achieve more flexibility in terms of model structure. Exactly which restrictions are allowed to be simultaneously lifted varies considerably over the proposed model classes. Consider a graphical model with the complete graph spanning the three nodes $\{1, 2, 3\}$, which specifies that there are no conditional independencies among the variables $X_1$, $X_2$, and $X_3$. However, if the probability $P(X_1=1, X_2=x_2, X_3=x_3)$ factorizes into the product $P(X_1=1) P(X_2=x_2 \mid X_1=1) P(X_3=x_3 \mid X_1=1)$ for all outcomes $x_2 \in \{0,1\}, x_3 \in \{0,1\}$, then a simplification of the joint distribution is hidden beneath the graph. This simplification can be included in the graph by adding a condition or \textit{stratum} to the edge $\{2, 3\}$ specifying where the context-specific independence $X_2 \perp X_3 \mid X_1=1$ of the two variables holds, as illustrated in Figure \ref{SGMs}a. \begin{figure} \caption{Graphical representation of the dependence structures of three variables. In (a) the stratum $X_1=1$ is shown as a condition on the edge $\{2, 3\}$, in (b) the strata $X_1=1$ and $X_2=1$ are shown as conditions on the edges $\{2, 3\}$ and $\{1, 3\}$, respectively, in (c) an ordinary graph with the maximal cliques $\{1, 2\}$ and $\{3\}$.} \label{SGMs} \end{figure} The following is a formal definition of what is intended by a stratum \citep{Nyman14a}. \begin{definition} \label{stratum} Stratum. Let the pair $(G, P_{\Delta})$ be a graphical model, where $G$ is a chordal graph. 
For all $\{\delta,\gamma\} \in E$, let $L_{\{\delta,\gamma\}}$ denote the set of nodes adjacent to both $\delta$ and $\gamma$. For a non-empty $L_{\{\delta,\gamma\}}$, define the stratum of the edge $\{\delta,\gamma\}$ as the subset $\mathcal{L}_{\{\delta,\gamma\}}$ of outcomes $x_{L_{\{\delta,\gamma\}}} \in \mathcal{X}_{L_{\{\delta,\gamma\}}}$ for which $X_{\delta}$ and $X_{\gamma}$ are independent given $X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}$, i.e. $\mathcal{L}_{\{\delta,\gamma\}} = \{ x_{L_{\{\delta,\gamma\}}} \in \mathcal{X}_{L_{\{\delta,\gamma\}}} : X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}} \}$. \end{definition} The requirement that $G$ is chordal is necessary for the definition of a stratum to be generally applicable. Consider the graph in Figure \ref{fig:ND}; note that the graph is not chordal, as it contains the chordless cycle (1, 3, 4, 2, 1). The intended context-specific independence $X_3 \perp X_4 \mid X_5 = 1$ induced by the stratum $\mathcal{L}_{\{3, 4\}} = \{X_5=1\}$ does not hold, as nodes $3$ and $4$ are connected via the path $(3, 1, 2, 4)$, which bypasses the conditioning node $5$. By definition, no such paths are possible in chordal graphs, ensuring that given $x_{L_{\{\delta,\gamma\}}} \in \mathcal{L}_{\{\delta,\gamma\}}$ it will hold that $X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}$. \begin{figure} \caption{Non-chordal graph resulting in the intended context-specific independence $X_3 \perp X_4 \mid X_5 = 1$ not holding.} \label{fig:ND} \end{figure} The idea of context-specific independence generalizes readily to a situation where multiple strata for distinct pairs of variables are considered. Figure \ref{SGMs}b displays the complete graph for three nodes with the edges $\{2, 3\}$ and $\{1, 3\}$ associated with the strata $X_1=1$ and $X_2=1$, respectively. 
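In small examples a stratum in the sense of Definition \ref{stratum} can be read off from the joint distribution by checking the conditional independence for each outcome of the common neighbours. The sketch below is our own illustration (it covers only the simple case where the edge pair and its common neighbours exhaust $\Delta$); it recovers $\mathcal{L}_{\{2,3\}}$ for a three-variable distribution constructed so that $X_2 \perp X_3$ holds only when $X_1 = 1$.

```python
from itertools import product

def stratum(p, delta, gamma, common):
    """Outcomes of the common neighbours for which X_delta and X_gamma
    are conditionally independent; assumes delta, gamma and common
    together cover all variable positions of the joint table p."""
    found = []
    for x_l in product((0, 1), repeat=len(common)):
        # Restrict the joint table to X_common = x_l and renormalize.
        sub = {x: pr for x, pr in p.items()
               if all(x[i] == v for i, v in zip(common, x_l))}
        z = sum(sub.values())
        ok = True
        for x in sub:
            p_d = sum(pr for y, pr in sub.items() if y[delta] == x[delta]) / z
            p_g = sum(pr for y, pr in sub.items() if y[gamma] == x[gamma]) / z
            if abs(sub[x] / z - p_d * p_g) > 1e-9:
                ok = False
        if ok:
            found.append(x_l)
    return found

# X1 at index 0, X2 at index 1, X3 at index 2.  Given X1 = 1 the pair
# (X2, X3) is a product distribution; given X1 = 0 it is not.
p = {}
for x1, x2, x3 in product((0, 1), repeat=3):
    if x1 == 1:
        pr = 0.5 * (0.6 if x2 == 0 else 0.4) * (0.7 if x3 == 0 else 0.3)
    else:
        pr = 0.5 * {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}[(x2, x3)]
    p[(x1, x2, x3)] = pr
```

Running the check on this table returns only the outcome $X_1 = 1$, matching the stratum on the edge $\{2, 3\}$ in Figure \ref{SGMs}a.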
In addition to the context-specific independence statement present in Figure \ref{SGMs}a, here we have the simultaneous restriction that $X_1 \perp X_3 \mid X_2=1$, such that $P(X_1=x_1, X_2=1, X_3=x_3) = P(X_2=1) P(X_1=x_1 \mid X_2=1) P(X_3=x_3 \mid X_2=1)$ for all outcomes $x_1\in \{0,1\}, x_3\in\{0,1\}$. This pair of restrictions does not imply that $P(X_3=x_3) = P(X_3=x_3 \mid X_1=1, X_2=1)$ as would be the case given the graph in Figure \ref{SGMs}c. It does, however, imply that the information contained about $X_3$ in the knowledge that $X_1=1$ and $X_2=1$ must be the same, i.e. $P(X_3=x_3 \mid X_1=1) = P(X_3=x_3 \mid X_2=1) = P(X_3=x_3 \mid X_1=1, X_2=1)$. The following definition \citep{Nyman14a} formalizes this extension of ordinary graphical models. The resulting class of models allows simultaneous context-specific independencies to be represented using a set of strata, partitioning the joint outcome space of the variables $X_{\Delta}$. \begin{definition} Stratified graphical model (SGM). A stratified graphical model is defined by the triple $(G, L, P_{\Delta})$, where $G$ is a chordal graph termed the underlying graph, $L$ equals the joint collection of all strata $\mathcal{L}_{\{\delta,\gamma\}}$ for the edges of $G$, and $P_{\Delta}$ is a joint distribution over the variables $X_{\Delta}$ which factorizes according to the restrictions imposed by $G$ and $L$. \end{definition} The pair $(G, L)$ consisting of the graph $G$ with the stratified edges (edges associated with a stratum) determined by $L$ will be referred to as a stratified graph (SG), usually denoted by $G_L$. To be able to calculate the marginal likelihood of a dataset given an SG and perform model inference, \citet{Nyman14a} specified strict restrictions on the set of stratified edges, limiting the model space to decomposable SGs. In this paper we introduce a method which allows us to remove these restrictions while retaining the ability to perform model inference. 
Using the log-linear parameterization, some new properties for SGMs are also introduced. \section{SGMs and the log-linear parameterization} \label{sec:logLin} In this paper we use two different parameterizations. The first is the standard parameterization used for a categorical distribution, where each parameter $\theta_i$ in a parameter vector $\theta$ denotes the probability of a specific outcome $x_{\Delta}^{(i)} \in \mathcal{X}_{\Delta}$, i.e. $P(X_{\Delta} = x_{\Delta}^{(i)}) = \theta_i$. The second is the log-linear parameterization \citep{Whittaker90, Lauritzen96}, defined by the parameter vector $\phi$. For this parameterization, the joint distribution of the variables $X_{\Delta}$ is defined by \[ \log P(X_{\Delta}=x_{\Delta})=\sum_{A \subseteq \Delta}\phi_{A(x_{A})}, \] where $x_{A}$ denotes the marginal outcome of the variables $X_A$ in the outcome $x_{\Delta}$. For the log-linear parameterization we have the restriction that if $x_{j}=0$ for any $j \in A$, then $\phi_{A(x_{A})}=0$ \citep{Whittaker90}. As we only consider binary variables in this paper, a log-linear parameter will henceforth be denoted using the convention $\phi_{A(x_{A})} = \phi_{A}$. The reason for using the log-linear parameterization is that an SG imposes restrictions on $\phi$ in a more manageable manner than it does on $\theta$. It holds for graphical log-linear models that if the edge $\{\delta,\gamma\}$ is not present in $G$, then all parameters $\phi_{A}$, where $\{\delta ,\gamma\} \subseteq A$, are equal to zero \citep{Whittaker90}. The restrictions imposed on the log-linear parameters by a stratum are also clearly defined. \begin{theorem} \label{th:restrictions} Consider the context-specific independence $X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}$. 
Let $A \subseteq L_{\{\delta,\gamma\}} \cup \{\delta,\gamma\}$ be the set consisting of the pair $\{\delta,\gamma\}$ together with all nodes taking non-zero values in $x_{L_{\{\delta,\gamma\}}}$. The imposed parameter restriction is of the form $\sum_{B} \phi_{B} = 0$, where the sum runs over all $B$ with $\{\delta, \gamma\} \subseteq B \subseteq A$. \end{theorem} \begin{proof} We start by defining the operator $\mathcal{D}(A;B) = \{ A \cup C : C \subseteq B\}$. Let $\Omega = \Delta \setminus \{L_{\{\delta, \gamma\}} \cup \{\delta, \gamma\}\}$ denote the set of nodes not in $L_{\{\delta, \gamma\}}$ or $\{\delta, \gamma\}$. Given that $X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}$ we get that \[ \begin{split} & \frac{P(X_{\delta} = 0 \mid X_{\gamma} = 0, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})}{P(X_{\delta} = 1 \mid X_{\gamma} = 0, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})} = \\ & \frac{P(X_{\delta} = 0 \mid X_{\gamma} = 1, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})}{P(X_{\delta} = 1 \mid X_{\gamma} = 1, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})} \\ \end{split} \] \[ \Longleftrightarrow \] \begin{equation} \label{eq:logLin} \begin{split} & \frac{P(X_{\delta} = 0, X_{\gamma} = 0, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})}{P(X_{\delta} = 1, X_{\gamma} = 0, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})} = \\ & \frac{P(X_{\delta} = 0, X_{\gamma} = 1, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})}{P(X_{\delta} = 1, X_{\gamma} = 1, X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega})}. \end{split} \end{equation} Let $Z$ denote the set of nodes corresponding to variables with non-zero outcome in $x_{L_{\{\delta,\gamma\}}}$ or $x_{\Omega}$. 
Using the log-linear parameterization, equation \eqref{eq:logLin} results in \[ \sum_{a \in \mathcal{D}(\varnothing; Z)} \hspace{-0.3cm} \phi_a \quad - \sum_{a \in \mathcal{D}(\varnothing; \{\delta\} \cup Z)} \hspace{-0.5cm} \phi_a \quad = \sum_{a \in \mathcal{D}(\varnothing; \{\gamma\} \cup Z)} \hspace{-0.5cm} \phi_a \quad - \sum_{a \in \mathcal{D}(\varnothing; \{\delta, \gamma \} \cup Z)} \hspace{-0.7cm} \phi_a \qquad \Rightarrow \] \[ \sum_{a \in \mathcal{D}(\{\delta\}; Z)} \hspace{-0.3cm} \phi_a \quad = \sum_{a \in \mathcal{D}(\{\delta\}; \{\gamma\} \cup Z)} \hspace{-0.3cm} \phi_a \qquad \Rightarrow \qquad \sum_{a \in \mathcal{D}(\{\delta, \gamma \}; Z)} \hspace{-0.3cm} \phi_a \quad = \quad 0. \] However, if a node $\zeta \in \Omega$, it cannot be adjacent to both $\delta$ and $\gamma$, since $L_{\{\delta,\gamma\}}$ contains all such common neighbours. Consequently, any parameter $\phi_A$ such that $\{\delta, \gamma, \zeta\} \subseteq A$ is restricted to zero. Therefore, if $L_{Z}$ denotes the nodes corresponding to the variables with non-zero outcome in $x_{L_{\{\delta,\gamma\}}}$, the restriction induced by the stratum can be written as \[ \sum_{a \in \mathcal{D}(\{\delta, \gamma \}; L_{Z})} \hspace{-0.5cm} \phi_a \quad = \quad 0, \] which corresponds to what is stated in the theorem. \end{proof} \noindent As an example consider the SG in Figure \ref{SGMs}a. The context-specific independence $X_{2} \perp X_{3} \mid X_1=1$ induces the log-linear parameter restriction \[ \begin{split} & \phi_{\varnothing} + \phi_{1} - \phi_{\varnothing} - \phi_{1} - \phi_{2} - \phi_{1,2} = \\ & \phi_{\varnothing} + \phi_{1} + \phi_{3} + \phi_{1,3} - \phi_{\varnothing} - \phi_{1} - \phi_{2} - \phi_{3} - \phi_{1,2} - \phi_{1,3} - \phi_{2,3} - \phi_{1,2,3} \Rightarrow \\ & \phi_{2,3} + \phi_{1,2,3} = 0. \end{split} \] In the definition of a stratum on the edge $\{\delta, \gamma\}$, the variables that determine the stratum correspond to the nodes that are adjacent to both $\delta$ and $\gamma$. 
This is a natural definition rather than an invented restriction. \begin{theorem} \label{th:adjacent} Only the variables corresponding to nodes adjacent to both $\delta$ and $\gamma$ may define a context-specific independence between $X_{\delta}$ and $X_{\gamma}$. \end{theorem} \begin{proof} The proof of this theorem follows from Theorem \ref{th:restrictions}. Assume that a variable $X_{\zeta}$, such that the node $\zeta$ is not adjacent to both $\delta$ and $\gamma$, is included when defining the context-specific independence $X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\zeta} = x_{\zeta}$. If $x_{\zeta} = 0$, we would get the same restriction as by not including $X_{\zeta}$ in the conditioning set, \begin{equation} \label{eq:same1} \sum_{a \in \mathcal{D}(\{\delta, \gamma \}; L_{Z})} \hspace{-0.5cm} \phi_a \quad = \quad 0. \end{equation} If $x_{\zeta} \neq 0$ we get the restriction \begin{equation} \label{eq:same2} \sum_{a \in \mathcal{D}(\{\delta, \gamma \}; L_{Z} \cup \{\zeta\})} \hspace{-0.5cm} \phi_a \quad = \quad 0. \end{equation} Again, we know from the underlying graph that any parameter $\phi_A$ such that $\{\delta, \gamma, \zeta\} \subseteq A$ is restricted to zero, so that \eqref{eq:same1} and \eqref{eq:same2} are equivalent restrictions. \end{proof} \noindent As an example consider the graphs in Figure \ref{fig:logLin}a and \ref{fig:logLin}b. \begin{figure} \caption{Graphs in a) and b) include improper strata, which if allowed would lead to the same parameter restrictions as the graph in c).} \label{fig:logLin} \end{figure} Here $X_3$ determines the stratum of the edge $\{1,2\}$, although nodes $2$ and $3$ are non-adjacent. The underlying graph establishes that $X_2 \perp X_3 \mid X_1$. However, given the proposed stratum, $X_3$ can indirectly affect $X_2$ by determining whether or not $X_1$ and $X_2$ are dependent, which is an obvious contradiction. 
The underlying graph induces the parameter restrictions $\phi_{2, 3} = \phi_{1, 2, 3}=0$. The stratum included in Figure \ref{fig:logLin}a results in the restriction $\phi_{1, 2} = 0$, while the stratum included in Figure \ref{fig:logLin}b results in the restriction $\phi_{1, 2} + \phi_{1, 2, 3} = 0$, i.e. $\phi_{1, 2} = 0$. This means that the graphs in Figure \ref{fig:logLin} would all induce the same restrictions, $\phi_{1, 2} = \phi_{2, 3} = \phi_{1, 2, 3}=0$. Note that this example is a special case of Theorem \ref{th:adjacent} as $L_{\{1, 2\}} = \varnothing$. \citet{Whittaker90} termed a log-linear model hierarchical if, whenever a parameter $\phi_a = 0$, then $\phi_t=0$ for all $t$ such that $a \subseteq t$. \citet[p.~209]{Whittaker90} further states that ``A non-hierarchical model is not necessarily uninteresting; it is just that the focus of interest is something other than independence". This statement does not apply to SGMs, as shown in Theorem \ref{th:whittaker}. \begin{theorem} \label{th:whittaker} Some, but not all, SGMs are non-hierarchical models. \end{theorem} \begin{proof} This theorem can be proved using two simple examples. First, consider an SG containing no strata; the parameter restrictions of this model will equal those of an ordinary graphical log-linear model, which is a hierarchical model. Now consider the SG attained by replacing the stratum $X_{2} \perp X_{3} \mid X_1=1$ in Figure \ref{SGMs}a with $X_{2} \perp X_{3} \mid X_1=0$; this leads to the single parameter restriction $\phi_{2, 3} = 0$. As the parameter $\phi_{1, 2, 3}$ is unrestricted, the model is non-hierarchical. \end{proof} \section{Parameter estimation for log-linear SGMs} Let $\Theta_{G}$ denote the set of distributions satisfying the restrictions imposed by the chordal graph $G$. 
\citet[p.~91]{Lauritzen96} showed that, given an observed distribution $P$, the maximum likelihood (ML) projection to $\Theta_{G}$, resulting in the distribution $\hat{P}$, is obtained by setting \begin{equation} \hat{\theta}_{i}=\hat{P}(x_{\Delta}^{(i)})=\frac{\prod_{C \in \mathcal{C}(G)}P(x_{C}^{(i)})}{\prod_{S \in \mathcal{S}(G)}P(x_{S}^{(i)})}, \ i=1, \ldots, |\mathcal{X}_{\Delta}|. \label{eq:cliSep} \end{equation} Given the following definition of the Kullback-Leibler (KL) divergence \[ D_{KL}(P, \hat{P}) = \sum_{x_{\Delta} \in \mathcal{X}_{\Delta}} \log \left( \frac{P(x_{\Delta})}{\hat{P}(x_{\Delta})} \right) P(x_{\Delta}), \] \citet[p.~238]{Lauritzen96} also showed that the ML projection corresponds to finding the distribution that minimizes $D_{KL}$ in the second argument, i.e. \[ \hat{P}=\arg \min_{Q \in \Theta_{G}} D_{KL}(P, Q). \] We shall later refer to the minimum discrimination information (MDI) projection, resulting in a distribution $\hat{R}$ given a distribution $R$. The MDI projection is also defined through the KL divergence, but in this case as the distribution that minimizes $D_{KL}$ in the first argument, i.e. \[ \hat{R}=\arg \min_{Q \in \Theta_{G}} D_{KL}(Q, R). \] The ML projection for imposing a single context-specific independence on a distribution can also be written in closed form. Consider an outcome $x_{L_{\{\delta,\gamma\}}} \in \mathcal{L}_{\{\delta,\gamma\}}$, which implies the context-specific independence $X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta, \gamma\}}}$. 
Letting $\Omega = \Delta \setminus \{L_{\{\delta, \gamma\}} \cup \{\delta, \gamma\}\}$ denote all nodes not in $L_{\{\delta, \gamma\}}$ or $\{\delta, \gamma\}$, the probability \[ P(X_{\Delta} = x_{\Delta}) = P(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = x_{\delta}, X_{\gamma} = x_{\gamma}) \] for any $x_{\Omega} \in \mathcal{X}_{\Omega}$ can be factorized as \[ \begin{split} & P(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}) P(X_{\delta} = x_{\delta} \mid X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta, \gamma\}}}, X_{\Omega} = x_{\Omega}) \\ & P(X_{\gamma} = x_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}). \end{split} \] Using the following abbreviated notation for $\theta$ (and correspondingly for $\hat{\theta}$) \[ \begin{split} \theta_{0,0} & = P(X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = 0, X_{\gamma} = 0), \\ \theta_{0,1} & = P(X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = 0, X_{\gamma} = 1), \\ \theta_{1,0} & = P(X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = 1, X_{\gamma} = 0), \\ \theta_{1,1} & = P(X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = 1, X_{\gamma} = 1), \end{split} \] we determine the values $\hat{\theta}_{0,0}$, $\hat{\theta}_{0,1}$, $\hat{\theta}_{1,0}$, and $\hat{\theta}_{1,1}$ according to \begin{equation} \begin{split} \hat{\theta}_{0,0} & =(\theta_{0,0}+\theta_{0,1})\cdot(\theta_{0,0}+\theta _{1,0})/(\theta_{0,0}+\theta_{0,1}+\theta_{1,0}+\theta_{1,1}),\\ \hat{\theta}_{0,1} & =(\theta_{0,0}+\theta_{0,1})\cdot(\theta_{0,1}+\theta _{1,1})/(\theta_{0,0}+\theta_{0,1}+\theta_{1,0}+\theta_{1,1}),\\ \hat{\theta}_{1,0} & =(\theta_{1,0}+\theta_{1,1})\cdot(\theta_{0,0}+\theta 
_{1,0})/(\theta_{0,0}+\theta_{0,1}+\theta_{1,0}+\theta_{1,1}),\\ \hat{\theta}_{1,1} & =(\theta_{1,0}+\theta_{1,1})\cdot(\theta_{0,1}+\theta _{1,1})/(\theta_{0,0}+\theta_{0,1}+\theta_{1,0}+\theta_{1,1}). \end{split} \label{eq:theta} \end{equation} A detailed derivation of the projection defined above is given in Appendix A. Repeating the procedure defined in \eqref{eq:theta} for all $x_{\Omega} \in \mathcal{X}_{\Omega}$ will result in the ML projection of $P$ satisfying the context-specific independence $X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta, \gamma\}}}$. \subsection{Maximum likelihood estimation for SGs} By cyclically repeating the projections according to \eqref{eq:cliSep} and \eqref{eq:theta} for all instances found in the set of strata $L$ until convergence is achieved, the resulting parameter vector will be the maximum likelihood estimate that simultaneously satisfies all the restrictions imposed by $G_{L}$. In order to prove this we first need to define the following family of probability distributions. \begin{definition} Let $X_{\delta}$ and $X_{\gamma}$ be two variables in $X_{\Delta}$, $X_A$ a subset of $X_{\Delta \setminus \{\delta, \gamma\}}$ and $X_{\Omega} = X_{\Delta \setminus \{A \cup \{\delta, \gamma\}\}}$. $\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q)$, where $Q$ is an arbitrary probability distribution, is defined as the set of probability distributions for which the following properties hold for all possible values $x_{\delta}$, $x_{\gamma}$ and $x_\Omega$. 
\[ \begin{split} &\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q) = \\ & \{P: P(X_A=x_A, X_{\Omega}=x_{\Omega}) = Q(X_A=x_A, X_{\Omega}=x_{\Omega})\} \ \cap \\ &\{P: P(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}) = Q(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}) \} \ \cap \\ &\{P: P(X_{\gamma}=x_{\gamma} | X_A=x_A, X_{\Omega}=x_{\Omega}) = Q(X_{\gamma}=x_{\gamma} | X_A=x_A, X_{\Omega}=x_{\Omega}) \} \ \cap \\ &\{P: P(X_{\Delta}=y_{\Delta})=Q(X_{\Delta}=y_{\Delta}) \text{, when $y_{\Delta}$ is an outcome where $x_A \neq y_A$} \}. \end{split} \] \end{definition} A set of probability distributions $\mathscr{C}$ is defined as a \textit{linear set} if $P_1 \in \mathscr{C}$ and $P_2 \in \mathscr{C}$ results in $\alpha P_1 + (1 - \alpha)P_2$ also belonging to $\mathscr{C}$ for every real $\alpha$ for which it is a probability distribution \citep{Csiszar75}. \begin{lemma} $\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q)$ constitutes a linear set. \end{lemma} \begin{proof} Let $P_1$ and $P_2$ be two probability distributions in $\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q)$; we then need to prove that $P^*=\alpha P_1 + (1 - \alpha)P_2$ also belongs to $\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q)$. It is trivial to show that $P^*(X_A=x_A, X_{\Omega}=x_{\Omega}) = Q(X_A=x_A, X_{\Omega}=x_{\Omega})$ and that $P^*(X_{\Delta}=y_{\Delta}) = Q(X_{\Delta}=y_{\Delta})$, when $y_{\Delta}$ is an outcome where $x_A \neq y_A$. The non-trivial part consists of showing that $P^*(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}) = Q(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega})$. 
We start from the fact that \[ \begin{split} Q(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}) &= P_1(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}) \\ &= P_2(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}), \end{split} \] which implies that \[ \frac{P_1(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega})}{P_1(X_A=x_A, X_{\Omega}=x_{\Omega})} = \frac{P_2(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega})}{P_2(X_A=x_A, X_{\Omega}=x_{\Omega})}. \] From the definition of $\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q)$ we know that \[ P_1(X_A=x_A, X_{\Omega}=x_{\Omega}) = P_2(X_A=x_A, X_{\Omega}=x_{\Omega}), \] and can therefore deduce that \[ \begin{split} Q(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega}) &= P_1(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega}) \\ &= P_2(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega}). \end{split} \] For $P^*$ this means that \[ \begin{split} &P^*(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}) = \\ &\frac{\alpha P_1(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega}) + (1-\alpha) P_2(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega})} {\alpha P_1(X_A=x_A, X_{\Omega}=x_{\Omega})+ (1-\alpha) P_2(X_A=x_A, X_{\Omega}=x_{\Omega})} = \\ &\frac{\alpha Q(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega}) + (1-\alpha) Q(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega})} {\alpha Q(X_A=x_A, X_{\Omega}=x_{\Omega})+ (1-\alpha) Q(X_A=x_A, X_{\Omega}=x_{\Omega})} = \\ &\frac{Q(X_{\delta}=x_{\delta}, X_A=x_A, X_{\Omega}=x_{\Omega})} {Q(X_A=x_A, X_{\Omega}=x_{\Omega})} = Q(X_{\delta}=x_{\delta} | X_A=x_A, X_{\Omega}=x_{\Omega}). \end{split} \] Of course, the same reasoning can be used to show that $P^*(X_{\gamma}=x_{\gamma} | X_A=x_A, X_{\Omega}=x_{\Omega}) = Q(X_{\gamma}=x_{\gamma} | X_A=x_A, X_{\Omega}=x_{\Omega})$, which concludes the proof. 
\end{proof} \begin{definition} The log-linear model $LL_{\delta, \gamma}(X_A=x_A)$ is defined as the set of probability distributions which satisfy the condition $X_{\delta} \perp X_{\gamma} \mid X_{A} = x_{A}$. \end{definition} It is easy to see that for any probability distribution $Q$, the sets $LL_{\delta, \gamma}(X_A=x_A)$ and $\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q)$ can have at most one common distribution, denoted by $R$. It is also evident, using the same reasoning as in Appendix A, that $R$ is the result of the ML projection of any distribution in $\mathcal{F}_{\delta, \gamma}(X_A=x_A, Q)$ to $LL_{\delta, \gamma}(X_A=x_A)$. We are now ready to prove the main theorem. \begin{theorem} Cyclically projecting the observed distribution, $P_0$, in accordance with the procedures defined in \eqref{eq:cliSep} and \eqref{eq:theta} until convergence is achieved will result in the maximum likelihood estimate, $\hat{P}$, which simultaneously satisfies all the restrictions imposed by a given SG, $G_L=(G, L)$. \end{theorem} \begin{proof} This proof uses the results found in \cite{Rudas98}, with Theorem 2 of that paper being of paramount importance. An essential part of the proof is the so-called Pythagorean identity for discrimination information, see for instance \cite{Rudas98}, which states that if $S$ belongs to a linear set and $R$ is the MDI projection of a distribution $T$ onto this set, then $D_{KL}(S, T) = D_{KL}(S, R) + D_{KL}(R, T)$. Let $m$ denote the number of context-specific independencies in $L$, i.e. the total number of instances in all strata included in $L$. Further, let $P_l$ be the distribution attained when projecting the distribution $P_{l-1}$ according to the $l$th context-specific independence, say $X_{\delta} \perp X_{\gamma} \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}$, in $L$. 
It then holds that $P_l$ is the unique element of \[ LL_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}) \cap \mathcal{F}_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, P_{l-1}). \] $P_l$ is also the MDI projection of any distribution in $LL_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}})$ to $\mathcal{F}_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, P_{l-1})$. \cite{Rudas98} makes this statement without providing any further comment, but as it is not self-evident we have chosen to include a proof. In order to do this we turn to \citet[Theorem 1]{Csiszar03}. This theorem states that for a log-convex set $\mathcal{T}$, which $LL_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}})$ constitutes as it defines an exponential family, the ML projection, denoted by $R$, of an arbitrary distribution $S$ to $\mathcal{T}$ is the unique distribution that satisfies \[ D_{KL}(S, T) \geq \min_{A \in \mathcal{T}} D_{KL}(S, A) + D_{KL}(R, T), \quad T \in \mathcal{T}. \] In our case, as $\hat{P} \in LL_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}})$ and $P_l$ is the ML projection of any distribution $S$ in $\mathcal{F}_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, P_{l-1})$ to $LL_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}})$, it holds that \[ D_{KL}(S, \hat{P}) \geq D_{KL}(S, P_l) + D_{KL}(P_l, \hat{P}) , \ S \in \mathcal{F}_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, P_{l-1}). \] This implies that $D_{KL}(S, \hat{P}) \geq D_{KL}(P_l, \hat{P})$ holds for every $S$ in $\mathcal{F}_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, P_{l-1})$, and that $P_l$ is the MDI projection of any distribution in $LL_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}})$ to $\mathcal{F}_{\delta, \gamma}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, P_{l-1})$. 
Therefore, the Pythagorean identity is applicable and we can conclude that \begin{equation} D_{KL}(P_{l-1}, \hat{P}) = D_{KL}(P_{l-1}, P_l) + D_{KL}(P_{l}, \hat{P}). \label{eq:PINyman} \end{equation} \cite{Rudas98} showed that the Pythagorean identity is also applicable when projecting a distribution onto the set of distributions satisfying the restrictions imposed by a chordal graph. That is, if we let $P_{m+1}$ denote the distribution that results from projecting $P_m$ to $\Theta_{G}$ according to \eqref{eq:cliSep}, we get that \begin{equation} D_{KL}(P_{m}, \hat{P}) = D_{KL}(P_{m}, P_{m+1}) + D_{KL}(P_{m+1}, \hat{P}). \label{eq:PIRudas} \end{equation} Combining \eqref{eq:PINyman} and \eqref{eq:PIRudas} and letting the projection $n+i$ be the same projection as $i$ if $n=k(m+1)$ for some value $k=0,1, \ldots$ results in \[ D_{KL}(P_0, \hat{P}) = \sum_{l=1}^{n} D_{KL}(P_{l-1}, P_{l}) + D_{KL}(P_{n}, \hat{P}) \] for every $n$. The existence of $\hat{P}$ implies that for any $n$ \[ \sum_{l=1}^{n} D_{KL}(P_{l-1}, P_{l}) < \infty, \] which, in turn, implies that $D_{KL}(P_{l-1}, P_l) \rightarrow 0$ as $l \rightarrow \infty$. Just as in \cite{Rudas98}, we can now refer to the compactness argument found in \citet[Theorem 3.2]{Csiszar75} to complete the proof. \end{proof} In practice we need a criterion to determine whether or not the cyclical projections have converged to $\hat{P}$. The criterion that we use terminates the projections once an entire cycle consisting of $m+1$ projections has been completed with the total sum of changes made to $\theta$ being less than a predetermined constant $\epsilon$. Let $\theta_i = (\theta_{i1}, \ldots, \theta_{ik})$ denote the parameter vector after the $i$th projection in the cycle, with $\theta_{0}$ denoting the starting value. The cyclical projections are terminated when \[ \sum_{i=1}^{m+1} \sum_{j=1}^{k} | \theta_{ij} - \theta_{(i-1)j} | < \epsilon. 
\] \section{Bayesian Learning of SGMs} \label{secAlgorithm} Bayesian learning of graphical models has attracted considerable interest, both in the statistical and computer science literature, see e.g. \cite{Madigan94}, \cite{Dellaportas99}, \cite{Giudici99}, \cite{Corander03b}, \cite{Giudici03}, \cite{Koivisto04}, and \cite{Corander08}. The learning algorithms described below belong to the class of non-reversible Metropolis-Hastings algorithms, introduced by \cite{Corander06} and later further generalized and applied to learning of graphical models in \cite{Corander08}. A similar algorithm was also used in \citet{Nyman14a} for decomposable SGMs. To allow for Bayesian learning of SGMs, we use the maximum likelihood estimation technique introduced in the previous section to derive an approximation of the marginal likelihood based on the general result for exponential families due to \cite{Schwarz78}. The approximation utilizes the Bayesian information criterion (BIC) and is written \begin{equation} \log P(\mathbf{X} \mid G_{L})\approx l(\mathbf{X} \mid \hat{\theta},G_{L}) - \frac{\text{dim}(\Theta \mid G_{L})}{2}\log n, \label{MLbic} \end{equation} where $\hat{\theta}$ is the maximum likelihood estimate of the model parameters under the restrictions imposed by $G_{L}$, $l(\mathbf{X} \mid \hat{\theta}, G_{L})$ is the logarithm of the likelihood function corresponding to $\hat{\theta}$, $n$ is the number of observations in the dataset $\mathbf{X}$, and dim$(\Theta \mid G_{L})$ is the maximum number of free parameters in a distribution with the parameter restrictions induced by $G_L$. We denote the right-hand side of \eqref{MLbic} by $\log S(G_L \mid \mathbf{X})$, i.e. $P(\mathbf{X} \mid G_L) \approx S(G_L \mid \mathbf{X})$. The maximum number of free parameters in a distribution with the parameter restrictions induced by an SG can readily be calculated using the log-linear parameterization discussed in Section \ref{sec:logLin}. 
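As a concrete illustration of \eqref{MLbic}, the score $\log S(G_L \mid \mathbf{X})$ can be computed from the fitted cell probabilities and the model dimension. The sketch below is our own illustration (the function and argument names are not from the paper); it assumes the maximum likelihood estimate $\hat{\theta}$ has already been obtained, e.g. by the cyclic projections of the previous section.

```python
import math
from collections import Counter

def log_bic_score(data, theta_hat, dim):
    """log S(G_L | X) = l(X | theta_hat, G_L) - dim(Theta | G_L)/2 * log n,
    where data is a list of observed outcome tuples and theta_hat maps
    each outcome to its fitted probability."""
    n = len(data)
    loglik = sum(c * math.log(theta_hat[x]) for x, c in Counter(data).items())
    return loglik - 0.5 * dim * math.log(n)

# Toy usage: 100 observations of two binary variables, scored under a
# fitted uniform distribution with a hypothetical dimension of 3.
data = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
theta_hat = {x: 0.25 for x in data}
score = log_bic_score(data, theta_hat, dim=3)
```

The same function can then be evaluated for competing stratified graphs, with the score differences driving the acceptance step of the search described next.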
Let $\mathcal{M}$ denote the finite space of states over which the aim is to approximate the posterior distribution. In this paper we will run two separate types of searches. In one search the state space $\mathcal{M}$ will consist of all possible sets of strata for a given chordal graph. In the second search the state space will be the set of chordal graphs combined with the optimal set of strata for that graph. For $M \in \mathcal{M}$, let $Q(\cdot \mid M)$ denote the proposal function used to generate a new candidate state given the current state $M$. Under the generic conditions stated in \citet{Corander08}, the probability with which any particular candidate is picked by $Q(\cdot \mid M)$ need not be explicitly calculated or known, as long as it remains unchanged over all the iterations and the resulting chain satisfies the condition that all states can be reached from any other state in a finite number of steps. To initialize the algorithm, a starting state $M_{0}$ is determined. At iteration $t=1,2,...$ of the non-reversible algorithm, $Q(\cdot \mid M_{t-1})$ is used to generate a candidate state $M^{\ast}$, which is accepted with the probability \begin{equation} \min\left( 1,\frac{P(\mathbf{X} \mid M^{\ast})P(M^{\ast})}{P(\mathbf{X} \mid M_{t-1})P(M_{t-1})}\right), \label{accept} \end{equation} where $P(M)$ is the prior probability assigned to $M$. The term $P(\mathbf{X} \mid M)$ denotes the marginal likelihood of the dataset $\mathbf{X}$ given $M$. If $M^{\ast}$ is accepted, we set $M_{t}=M^{\ast}$, otherwise we set $M_{t}=M_{t-1}$. In contrast to the standard reversible Metropolis-Hastings algorithm, for this non-reversible algorithm the posterior probability $P(M \mid \mathbf{X})$ does not, in general, equal the stationary distribution of the Markov chain. 
Instead, a consistent approximation of $P(M \mid \mathbf{X})$ is obtained by considering the space of distinct states $\mathcal{M}_{t}$ visited by time $t$ such that \[ \hat{P}_t(M \mid \mathbf{X}) = \frac{P(\mathbf{X} \mid M)P(M)}{\sum_{M' \in \mathcal{M}_{t}} P(\mathbf{X} \mid M')P(M')}. \] \citet{Corander08} proved, under rather weak conditions, that this estimator is consistent, i.e. \[ \hat{P}_t(M \mid \mathbf{X})\overset{a.s.}{\rightarrow}P(M \mid \mathbf{X}), \] as $t\rightarrow\infty$. As our main interest will lie in finding the posterior optimal state, i.e. \[ \arg\mathop{\max}_{M\in\mathcal{M}}P(M \mid \mathbf{X}), \] it will suffice to identify \[ \arg\mathop{\max}_{M\in\mathcal{M}}P(\mathbf{X} \mid M)P(M). \] As the marginal likelihood of a dataset is not available for the models considered in this paper, the approximated BIC score is used instead. The main goal of our search algorithm is to identify the stratified graph $G_L^{\text{opt}}$ optimizing $S(G_L \mid \mathbf{X} ) P(G_L)$. Under the assumption that the optimal set of strata is known for each underlying graph, a Markov chain traversing the set of possible underlying graphs will eventually identify $G_L^{\text{opt}}$. Another search may be used in order to identify the optimal set of strata given the underlying graph. The proposal functions used are described in Appendix B. For the experiments conducted in the next section, in order to penalize dense graphs, the following non-uniform prior \citep{Nyman14a} is used \[ P(G_{L}) \propto 2^{- |\Theta_G|}. \] Here, $|\Theta_G|$ denotes the maximum number of free parameters in a distribution satisfying the restrictions imposed by the underlying graph $G$. \section{Illustration of SGM Learning from Data} \label{secRes} In this section, in order to save space, when displaying an SG we write a stratum such as $(X_1=1, X_2=0)$ simply as $(1, 0)$. 
This is possible since, given the graph, it is clear which variables define the context-specific independence when the variables are ordered by their integer labels. The first dataset that we have investigated includes prognostic factors for coronary heart disease and can be found in \citet{Edwards85}. The data consists of 1841 observations of the six variables listed in Table \ref{tab:heart}. \begin{table}[htb] \begin{center} \begin{tabular} [c]{cll} \hline Variable & Meaning & Range \\ \hline $X_1$ & Smoking & No = 0, Yes = 1 \\ $X_2$ & Strenuous mental work & No = 0, Yes = 1 \\ $X_3$ & Strenuous physical work & No = 0, Yes = 1 \\ $X_4$ & Systolic blood pressure $> 140$ & No = 0, Yes = 1 \\ $X_5$ & Ratio of beta and alpha lipoproteins $> 3$ & No = 0, Yes = 1 \\ $X_6$ & Family anamnesis of coronary heart disease & No = 0, Yes = 1 \\ \hline \end{tabular} \end{center} \caption{Variables in coronary heart disease data.} \label{tab:heart} \end{table} In Figure \ref{fig:heart} two different SGs are displayed. The SG in Figure \ref{fig:heart}a is obtained by first conducting a search for the optimal ordinary chordal graph and then identifying the optimal set of strata for that graph. The underlying graph has the score $-6732.84$, while the SG has the score $-6721.67$. Figure \ref{fig:heart}b contains the estimated globally optimal SG, which has the score $-6713.24$. The underlying graph for this SG has the score $-6764.14$. \begin{figure} \caption{Optimal SGs for heart data. In a) the optimal ordinary graph is amended with optimal strata. In b) the globally optimal SG.} \label{fig:heart} \end{figure} The second dataset that we consider is derived from the answers given by 1806 candidates in the Finnish parliament elections of 2011, in a questionnaire issued by the newspaper Helsingin Sanomat \citep{HelsinginSanomat11}. The eight questions considered, represented by eight variables, are given in Appendix C. 
As with the previous dataset, we present in Figure \ref{fig:HS} two different SGs: the SG resulting from first determining the optimal ordinary graph and then finding the optimal set of strata for that graph, and the globally optimal SG. For the globally optimal SG we, instead of displaying the exact strata, give the total number of instances included in the stratum associated with each edge. The score for the underlying graph of Figure \ref{fig:HS}a is $-7177.69$ and for the SG $-7162.78$. The corresponding scores for the graph in Figure \ref{fig:HS}b are $-7245.11$ and $-7139.13$. \begin{figure} \caption{Optimal SGs for parliament election data. In a) the optimal ordinary graph is amended with optimal strata. In b) the globally optimal SG with the number of instances in each stratum listed beside the corresponding edge.} \label{fig:HS} \end{figure} These examples demonstrate that when using Markov networks, variables that would be considered conditionally dependent may in fact be independent in certain contexts. The examples also show that the globally optimal SG contains more edges than the optimal ordinary graph. This can be attributed to the fact that when using dense graphs the set of available parameter restrictions grows, while adding strata to a dense graph can still result in models that induce distributions with few free parameters. A possible method to avoid optimal SGs being very dense, and thus hampering interpretability, is to apply a stronger prior over the model space, further penalizing dense graphs or graphs with many strata, as done in \citet{Pensar14}. In conclusion, these experimental results show that context-specific independencies occur naturally in various datasets and therefore it can be very useful to use graphical models that are able to capture such dependence structures. \section{Discussion} Graphical models, and log-linear models more generally, are useful for many types of multivariate analysis due to their interpretability. 
The context-specific graphical log-linear models discussed here extend the expressiveness of the stratified models considered earlier in \citet{Nyman14a} by removing the restriction concerning overlap of strata. By applying the general estimation theory developed in \cite{Rudas98} and \cite{Csiszar75}, we were able to derive a consistent procedure for estimating the parameters of a context-specific graphical log-linear model based on cyclical projections each corresponding to a specific independence restriction. Two examples with real data illustrated how the relaxation of the model class properties enables additional discovery of context-specific independencies. In future research, it would be interesting to attempt to identify further classes of non-hierarchical restrictions to log-linear parameters, such that interpretability is maintained in the same fashion as for the current context-specific models. \begin{appendix} \section*{Appendix A} Derivation of the parameters in equation \eqref{eq:theta}. \newline\newline We will here give a more detailed explanation of how $\hat{\theta}_{0,0} = \hat{P}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = 0, X_{\gamma} = 0)$ is derived. It is generally possible to use the factorization \begin{gather*} P(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = 0, X_{\gamma} = 0) = \\ P(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}) P(X_{\delta} = 0, X_{\gamma} = 0 \mid X_{L_{\{\delta, \gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}). \end{gather*} When considering a probability distribution where $\delta$ and $\gamma$ can be dependent, it is generally not true that $P(X_{\delta}, X_{\gamma}) = P(X_{\delta}) P(X_{\gamma})$. A standard result, see e.g. 
\cite{Whittaker90}, states that for a distribution where two variables are dependent the ML projection to the set of distributions where the variables are independent is obtained by calculating the product of the marginal probabilities of the two variables. This implies, in our case, creating a new distribution $\hat{P}$ according to \begin{gather*} \hat{P}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}, X_{\delta} = 0, X_{\gamma} = 0) = \\ P(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}) P(X_{\delta} = 0 \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta, \gamma\}}}, X_{\Omega} = x_{\Omega}) \\ P(X_{\gamma} = 0 \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta, \gamma\}}}, X_{\Omega} = x_{\Omega}). \end{gather*} Using the earlier introduced notations this corresponds to setting \begin{gather*} \hat{\theta}_{0,0} = \hat{P}(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega} , X_{\delta} = 0, X_{\gamma} = 0) = \\ P(X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}) P(X_{\delta} = 0 \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta, \gamma\}}}, X_{\Omega} = x_{\Omega}) \\ P(X_{\gamma} = 0 \mid X_{L_{\{\delta,\gamma\}}} = x_{L_{\{\delta,\gamma\}}}, X_{\Omega} = x_{\Omega}) = \\ (\theta_{0,0}+\theta_{0,1}+\theta_{1,0}+\theta_{1,1}) \cdot(\theta_{0,0}+\theta_{0,1}) / (\theta_{0,0}+\theta_{0,1}+\theta_{1,0}+\theta_{1,1}) \cdot\\ (\theta_{0,0}+\theta_{1,0}) / (\theta_{0,0}+\theta_{0,1}+\theta_{1,0}+\theta_{1,1}) =\\ (\theta_{0,0}+\theta_{0,1}) \cdot(\theta_{0,0}+\theta_{1,0}) / (\theta_{0,0}+\theta _{0,1}+\theta_{1,0}+\theta_{1,1}). \end{gather*} The other parameters $\hat{\theta}_{0,1}$, $\hat{\theta}_{1,0}$, and $\hat{\theta}_{1,1}$ can be derived in a similar fashion. \section*{Appendix B} Proposal functions used for model optimization. 
\newline \newline Using the proposal function defined in Algorithm \ref{AlgoStrata} and running a sufficient number of iterations, we are assured of finding the optimal set of strata for any chordal graph. \begin{algorithm} \label{AlgoStrata} Proposal function for finding optimal strata for a chordal graph. \end{algorithm} Let $G$ denote the underlying graph. By $L_A$ we denote all possible instances that can be added to any stratum of $G$. If $L_A$ is empty, no strata may be added to $G$ and the algorithm is terminated. $L$ denotes the current state, with $L$ being empty in the starting state. \begin{enumerate} \item Set the candidate state $L^* = L$. \item Perform one of the following steps. \begin{itemize} \item[2.1.] If $L$ is empty, add a randomly chosen instance from $L_A$ to $L^*$. \item[2.2.] Else if $\{L_A \setminus L\}$ is empty, remove a randomly chosen instance from $L^*$. \item[2.3.] Else, with probability $0.5$, add a randomly chosen instance from $\{L_A \setminus L\}$ to $L^*$. \item[2.4.] Else remove a randomly chosen instance from $L^*$. \end{itemize} \end{enumerate} \noindent Using this proposal function, the optimal set of strata can be found for any underlying graph, and we can proceed to the search for the best underlying graph. The proposal function in Algorithm \ref{AlgoSGM} is used for this task. \begin{algorithm} \label{AlgoSGM} Proposal function used to find the optimal underlying chordal graph. \end{algorithm} \noindent The starting state is set to be the graph containing no edges. Let $G$ denote the current graph, with $G_L = (G, L)$ being the stratified graph with underlying graph $G$ and optimal set of strata $L$. \begin{enumerate} \item Set the candidate state $G^* = G$. \item Randomly choose a pair of nodes $\delta$ and $\gamma$. If the edge $\{\delta, \gamma\}$ is present in $G^*$ remove it, otherwise add the edge $\{\delta, \gamma\}$ to $G^*$. \item While $G^*$ is non-chordal repeat steps 1 and 2. 
\end{enumerate} \noindent The resulting candidate state $G^*$ is used along with the corresponding optimal set of strata $L^*$ to form the stratified graph $G^*_L = (G^*, L^*)$ which is used when calculating the acceptance probability according to \eqref{accept}. \section*{Appendix C} Questions considered in parliament election data. \begin{enumerate} \item Since the mid-1990's the income differences have grown rapidly in Finland. How should we react to this? \\ 0 - The income differences do not need to be narrowed. \\ 1 - The income differences need to be narrowed. \item Should homosexual couples have the same rights to adopt children as heterosexual couples? \\ 0 - Yes. \\ 1 - No. \item Child benefits are paid for each child under the age of 18 living in Finland, independent of the parents' income. What should be done about child benefits? \\ 0 - The income of the parents should not affect the child benefits. \\ 1 - Child benefits should be dependent on parents' income. \item In Finland military service is mandatory for all men. What is your opinion on this? \\ 0 - The current practice should be kept or expanded to also include women. \\ 1 - The military service should be more selective or abandoned altogether. \item Should Finland in its affairs with China and Russia more actively debate issues regarding human rights and the state of democracy in these countries? \\ 0 - Yes. \\ 1 - No. \item Russia has prohibited foreigners from owning land close to the borders. In recent years, Russians have bought thousands of properties in Finland. How should Finland react to this? \\ 0 - Finland should not restrict foreigners from buying property in Finland. \\ 1 - Finland should restrict foreigners' rights to buy property and land in Finland. \item During recent years municipalities have outsourced many services to privately owned companies. What is your opinion on this? \\ 0 - Outsourcing should be used to an even higher extent. 
\\ 1 - Outsourcing should be limited to the current extent or decreased. \item Currently, a system is in place where tax income from more wealthy municipalities is transferred to less wealthy municipalities. In practice this means that municipalities in the Helsinki region transfer money to the other parts of the country. What is your opinion of this system? \\ 0 - The current system is good, or even more money should be transferred. \\ 1 - The Helsinki region should be allowed to keep more of its tax income. \end{enumerate} \end{appendix} \end{document}
\begin{document} \title{Large Deviation Principle for Self-Intersection Local Times for Random Walk in ${\mathbb Z}^d$ with $d\ge 5$.} \author{Amine Asselah \\ Universit\'e Paris-Est\\ [email protected]} \date{} \maketitle \begin{abstract} We obtain a large deviation principle for the self-intersection local times for a symmetric random walk in dimension $d\ge 5$. As an application, we obtain moderate deviations for random walk in random sceneries in Region II of \cite{AC05}. \end{abstract} {\em Keywords and phrases}: self-intersection local times, random walk, random scenery. {\em AMS 2000 subject classification numbers}: 60K35, 82C22, 60J25. {\em Running head}: LDP for self-intersections in $d\ge 5$. \section{Introduction.} \label{sec-intro} We consider an aperiodic symmetric random walk on the lattice ${\mathbb Z}^d$, with $d\ge 5$. More precisely, if $S_n$ is the position of the walk at time $n\in {\mathbb N}$, then $S_{n+1}$ chooses uniformly at random a site of $\acc{z\in {\mathbb Z}^d: |z-S_n|\le 1}$, where for $z=(z_1,\dots,z_d)\in {\mathbb Z}^d$, the $l^1$-norm is $|z|:=|z_1|+\dots+|z_d|$. When $S_0=x$, we denote the law of this walk by $P_x$, and its expectation by $E_x$. We are concerned with estimating the number of trajectories of length $n$ with {\it many} self-intersections, in the large $n$-regime. The self-intersection local times process reads as follows \be{def-SILT} \text{for } n\in {\mathbb N},\qquad B_n=\sum_{0\le i<j< n} \ind\{ S_i=S_j\}. \end{equation} The study of self-intersection local times has a long history in probability theory, as well as in statistical physics. Indeed, a caricature of a polymer would be a random walk self-interacting through short-range forces; a simple model arises as we penalize the simple random walk law with $\exp(\beta B_n)$, where $\beta<0$ corresponds to a weakly self-avoiding walk, and $\beta>0$ corresponds to a self-attracting walk. 
The question is whether there is a transition from collapsed paths to diffusive paths, as we change the parameter $\beta$. We refer to Bolthausen's Saint-Flour notes~\cite{SF-bolt} for references and a discussion of these models. It is useful to represent $B_n$ in terms of local times $\acc{l_{n}(x), x\in {\mathbb Z}^d}$, that is the collection of number of visits of $x$ up to time $n$, as $x$ spans ${\mathbb Z}^d$. We set, for $k<n$, \be{intro.1} l_{[k,n[}(x)=\ind\{S_k=x\}+\dots+\ind\{S_{n-1}=x\}, l_n=l_{[0,n[},\quad\text{and}\quad ||l_n||_2^2=\sum_{z\in {\mathbb Z}^d} l^2_n(z). \end{equation} It is immediate that $||l_n||_2^2=2 B_n+n$. Henceforth, we always consider $||l_n||_2^2$ rather than $B_n$. It turns out useful to think of the self-intersection local times as the square of the $l^2$-norm of an additive and positive process (see Section~\ref{sec-ergodic}). Besides, we will deal with other $q$-norm of $l_n$ (see Proposition~\ref{prop-alpha}), for which there is no counterpart in terms of multiple self-intersections. In dimensions $d\ge 3$, a random walk spends, on the average, a time of the order of one on most visited sites, whose number, up to time $n$, is of order $n$. More precisely, a result of \cite{BS} states \be{intro.3} \frac{1}{n}||l_n||_2^2\stackrel{L^2}{\longrightarrow} \gamma_d=2G_d(0)-1, \quad\text{with}\quad \forall z\in {\mathbb Z}^d,\ G_d(z)=\sum_{n\ge 0} P_0(S_n=z). \end{equation} The next question concerns estimating the probabilities of large deviations from the mean: that is $P_0(||l_n||_2^2-E_0[||l_n||_2^2]\ge n\xi)$ with $\xi>0$. In dimension $d\ge 5$, the speed of the large deviations is $\sqrt n$, and we know from \cite{AC05} that a finite (random) set of sites, say $\D_n$, visited of the order of $\sqrt n$ makes a dominant contribution to produce the excess self-intersection. 
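The identity $||l_n||_2^2=2B_n+n$ above holds pathwise: $\sum_x l_n^2(x)$ counts the ordered pairs $(i,j)$ with $0\le i,j<n$ and $S_i=S_j$, namely the $n$ diagonal pairs plus both orderings of each self-intersection. It can be checked on a short simulated path (a sketch in Python, not part of the paper; the walk moves to a uniformly chosen site of the closed $l^1$-ball of radius $1$, as in the introduction):

```python
import random
from collections import Counter
from itertools import combinations

def simulate_walk(n, d=5, rng=random):
    """Positions S_0, ..., S_{n-1}: at each step move to a site chosen
    uniformly from {z : |z - S_k| <= 1}, i.e. the current site or one of
    its 2d nearest neighbours (this makes the walk aperiodic)."""
    moves = [tuple(0 for _ in range(d))]
    for i in range(d):
        for s in (1, -1):
            e = [0] * d
            e[i] = s
            moves.append(tuple(e))
    pos = tuple(0 for _ in range(d))
    path = [pos]
    for _ in range(n - 1):
        step = rng.choice(moves)
        pos = tuple(p + q for p, q in zip(pos, step))
        path.append(pos)
    return path

def self_intersections(path):
    """B_n = #{0 <= i < j < n : S_i = S_j}."""
    return sum(1 for i, j in combinations(range(len(path)), 2)
               if path[i] == path[j])

def l2_norm_sq(path):
    """||l_n||_2^2 = sum_x l_n(x)^2, with l_n(x) the number of visits to x."""
    return sum(c * c for c in Counter(path).values())

n = 200
path = simulate_walk(n, d=5)
assert l2_norm_sq(path) == 2 * self_intersections(path) + n
```

Since the identity is combinatorial, the final assertion holds for every realization of the walk, in any dimension.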
However, in dimension 3, the correct speed for our large deviations is $n^{1/3}$ (see \cite{A06}), and the excess self-intersection is made up of sites visited at most some power of $\log(n)$ times. It is expected that the walk spends most of its time-period $[0,n]$ on a ball of radius of order $n^{1/3}$. Thus, in this box, sites are visited a time of order unity. The situation is still different in dimension 2. First, $E_0[B_n]$ is of order $n\log(n)$, and a result of Le Gall~\cite{legall} states that $\frac{1}{n}(B_n-E_0[B_n])$ converges in law to a non-gaussian random variable. The large (and moderate) deviations asymptotics obtained recently by Bass, Chen \& Rosen in~\cite{bcr05b} read as follows. There is some positive constant $C_{BCR}$, such that for any sequence $\acc{b_n,n\in {\mathbb N}}$ going to infinity with $\lim_{n\to\infty} \frac{b_n}{n}=0$, we have \be{asympt-bcr} \lim_{n\to\infty} \frac{1}{b_n} \log\pare{P\pare{ B_n-E_0[B_n]\ge b_n n}}=-C_{BCR}. \end{equation} For an LDP in the case of $d=1$, we refer to Chen and Li~\cite{chen-li} (see also Mansmann~\cite{Ma91} for the case of a Brownian motion instead of a random walk). In both $d=2$ and $d=1$, the result is obtained by showing that the local times of the random walk are close to their smoothed counterparts. Finally, we recall a related result of Chen and M\"orters \cite{chen-morters} concerning mutual intersection local times of two independent random walks in infinite time horizon when $d\ge 5$. Let $l_\infty(z)=\lim_{n\to\infty} l_n(z)$, and denote by $\tilde l_\infty$ an independent copy of $l_\infty$. All symbols related to the second walk carry a tilde. We denote the average over both walks by ${\mathbb E}$, and the product law is denoted ${\mathbb P}$. 
The intersection local times of two random walks, in an infinite time horizon, is \[ \bra{l_\infty,\tilde l_\infty}=\sum_{z\in {\mathbb Z}^d} l_\infty(z)\tilde l_\infty(z),\quad \text{and}\quad {\mathbb E}\cro{\bra{l_\infty,\tilde l_\infty}}=\sum_{z\in {\mathbb Z}^d} G_d(z)^2<\infty, \] where Green's function, $G_d$, is square summable in dimension 5 or more. Chen and M\"orters in~\cite{chen-morters} have obtained sharp asymptotics for $\{\langle l_\infty,\tilde l_\infty \rangle\ge t\}$ for $t$ large, in dimension 5 or more, by an elegant asymptotic estimation of the moments, improving on the pioneering work of Khanin, Mazel, Shlosman and Sinai in~\cite{KMSS}. Their method provides a variational formula for the rate functional, and their proof produces (and relies on) a finite volume version. Namely, for any finite subset $\Lambda\subset {\mathbb Z}^d$, \be{CM-finite} \lim_{t\to\infty} \frac{1}{\sqrt t} \log {\mathbb P} \pare{ \bra{\ind_{\Lambda} l_\infty,\tilde l_\infty} \ge t} =-2\I_{CM}(\Lambda),\quad\text{and}\quad \lim_{\Lambda \nearrow {\mathbb Z}^d} \I_{CM}(\Lambda)=\I_{CM}, \end{equation} with \[ \I_{CM}=\inf\acc{ ||h||_2:\ h\ge 0,\ ||h||_2<\infty, \text{and}\ ||U_h||\ge 1}, \] where \be{rate-CM} U_h(f)(x)=\sqrt{e^{h(x)}-1}\sum_{y\in {\mathbb Z}^d} \pare{G_d(x-y)-\delta_x(y)} (f(y) \sqrt{e^{h(y)}-1}), \end{equation} and $\delta_x$ is Kronecker's delta function at $x$. In this paper, we consider self-intersection local-times, and we establish a Large Deviations Principle in $d\ge 5$. \bt{intro-th.1} We assume $d\ge 5$. There is a constant $\I(2)>0$, such that for $\xi>0$ \be{intro.4} \lim_{n\to\infty} \frac{1}{\sqrt{ n}}\log P_0\pare{||l_n||_2^2-E\cro{||l_n||_2^2}\ge n \xi}=-\I(2) \sqrt{\xi}. \end{equation} Moreover, \be{identification} \I(2)=\I_{CM}. 
\end{equation} \end{theorem} \br{rem-2statements} The reason for dividing Theorem~\ref{intro-th.1} into two statements \reff{intro.4} and \reff{identification} is that our proof has two steps: (i) The proof of the existence of the limit in \reff{intro.4}, which relies eventually on a subadditive argument, in spite of an odd scaling; (ii) An identification with the constant of Chen and M\"orters. Also, we establish later the existence of a limit for other $q$-norms of the local-times (see Proposition~\ref{prop-alpha}), for which we have no variational formulas. \end{remark} The identification \reff{identification} relies on the fact that both the excess self-intersection local times and large intersection local times are essentially realized on a finite region. This is explained heuristically in Remark 1 of \cite{chen-morters}, and we provide the following mathematical statement of this latter phenomenon. \bp{prop-CM} Assume dimension is 5 or more. \be{ineq-CM} \limsup_{\epsilon\to 0} \limsup_{t\to\infty} \frac{1}{\sqrt t}\log {\mathbb P}\pare{\sum_{z\in {\mathbb Z}^d} \ind_{\{\min(l_\infty(z),\tilde l_\infty(z))<\epsilon \sqrt t\}} l_\infty(z)\tilde l_\infty(z)>t}=-\infty. \end{equation} \end{proposition} Finally, we present applications of our results to Random Walk in Random Sceneries (RWRS). We first describe RWRS. We consider a field $\{\eta(x),x\in {\mathbb Z}^d\}$ independent of the random walk $\{S_k,k\in {\mathbb N}\}$, and made up of symmetric unimodal i.i.d.\ with law denoted by ${\mathbb Q}$ and tail decay characterized by an exponent $\alpha>1$ and a constant $c_{\alpha}$ with \be{eq-tail.1} \lim_{t\to\infty} \frac{\log {\mathbb Q}\pare{\eta(0)>t}}{t^{\alpha}}=-c_{\alpha}. \end{equation} The RWRS is the process \[ \bra{\eta,l_n}:= \sum_{z\in {\mathbb Z}^d} \eta(z) l_n(z)= \eta(S_0)+\dots+ \eta(S_{n-1}). 
\] We refer to \cite{AC05} for references for RWRS, and for a diagram of the speed of moderate deviations $\acc{\bra{\eta,l_n}> \xi n^\beta}$ with $\xi>0$, in terms of $\alpha>1$ and $\beta>\frac{1}{2}$. In this paper, we concentrate on what has been called in \cite{AC05} Region II: \be{def-RegII} 1< \alpha<\frac{d}{2},\quad\text{and}\quad 1-\frac{1}{\alpha+2}<\beta<1+\frac{1}{\alpha}. \end{equation} In region II, the random walk is expected to visit a few sites often, and it is therefore natural that our LDP allows for better asymptotics in this regime. We set \be{old-defIIbis} \zeta=\beta\frac{\alpha}{\alpha+1}(<1),\qquad \frac{1}{\alpha^*}=1-\frac{1}{\alpha}, \quad\text{and for $\chi>0$}\quad \bar\D_n(\xi):=\acc{z:\ l_n(z)\ge \xi}. \end{equation} In bounding from above the probability of $\acc{ \bra{\eta,l_n}\ge \xi\ n^{\beta}}$, we take exponential moments of $\bra{\eta,l_n}$, and first integrate with respect to the $\eta$-variables. Thus, the behavior of the log-Laplace transform of $\eta$, say $\Gamma(x)=\log E\cro{\exp(x\eta(0))}$, either at zero or at infinity, plays a key r\^ole. This, in turn, explains why we need an LDP for other powers of the local times. For $q\ge 1$, the $q$-norm of a function $\varphi:{\mathbb Z}^d\to{\mathbb R}$ is \[ ||\varphi||_{q}^q:=\sum_{z\in {\mathbb Z}^d} |\varphi(z)|^{q}. \] Before dealing with $\acc{\bra{\eta,l_n}> \xi n^\beta}$, we give estimates for the ${\alpha^*}$-norm of the local-times, for $\alpha^*>\frac{d}{d-2}$. \bp{prop-alpha} Choose $\zeta$ as in \reff{old-defIIbis} with $\alpha,\beta$ in Region II. Choose $\chi$ such that $\zeta> \chi\ge \frac{\zeta}{d/2}$, and any $\xi>0$. There is a positive constant $\I(\alpha^*)$ such that \be{alpha-star} \lim_{n\to\infty} \frac{1}{n^\zeta} \log\pare{ P\pare{ ||\ind_{\bar\D_n(n^\chi)} l_n||_{\alpha^*} \ge \xi n^\zeta}}=- \xi\ \I(\alpha^*). \end{equation} \end{proposition} Our moderate deviations estimate for RWRS is as follows. 
\bt{prop-rwrs} Assume $\alpha,\beta$ are in Region II given in \reff{def-RegII}. With $\zeta$ given in \reff{old-defIIbis}, and any $\xi>0$ \be{rwrs-md} \lim_{n\to\infty} \frac{1}{n^\zeta} \log\pare{ P\pare{\bra{\eta,l_n}\ge \xi n^\beta}}= -c_{\alpha} (\alpha+1) \pare{ \frac{\I(\alpha^*) }{\alpha}}^{\frac{\alpha}{\alpha+1}} \quad \xi^{\frac{\alpha}{\alpha+1}}. \end{equation} \end{theorem} We now wish to outline schematically the main ideas and limitations in our approach. This serves also to describe the organisation of the paper. First, we use a shorthand notation for the centered self-intersection local times process, \be{def-silt} \overline{||l_n||_2^2}=||l_n||_2^2-E_0\cro{||l_n||_2^2}. \end{equation} Theorem~\ref{intro-th.1} relies on the following intermediary result interesting on its own. \bp{prop-sub} Assume $d\ge 5$. There is $\beta>0$, such that for any $\epsilon>0$, there is $\alpha_\epsilon>0$, and $\Lambda_\epsilon$ a finite subset of ${\mathbb Z}^d$, such that for any $\alpha>\alpha_\epsilon$, for any $\Lambda\supset \Lambda_\epsilon$ finite, and $n$ large enough \be{lawler.10} \begin{split} \frac{1}{2} P_0\big(&||\ind_\Lambda l_{\lfloor\alpha\sqrt{n}\rfloor} ||_2^2\ge n\xi (1+\epsilon),\ S_{\lfloor\alpha\sqrt{n}\rfloor}=0\big)\\ &\le P_0(\overline{||l_n||_2^2}\ge n\xi) \le e^{\beta \epsilon \sqrt n} P_0\pare{||\ind_\Lambda l_{\lfloor\alpha\sqrt{n}\rfloor} ||_2^2\ge n\xi (1-\epsilon),\ S_{\lfloor\alpha\sqrt{n}\rfloor}=0}. \end{split} \end{equation} We use $\lfloor x\rfloor$ for the integer part of $x$. \end{proposition} The upper bound for $P_0(\overline{||l_n||_2^2}\ge n\xi)$ in \reff{lawler.10} is the main technical result of the paper. From our previous work in \cite{AC05}, we know that the main contribution to the excess self-intersection comes from level set $\D_n=\{x:l_{n}(x)\sim {\sqrt n}\}$. This is the place where $d\ge 5$ is crucial. Indeed, this latter fact is false in dimension 3 as shown in \cite{A06}, and unknown in $d=4$. 
In Section~\ref{sec-old}, we recall and refine the results of \cite{AC05}. We establish that $\D_n$ is a {\it finite} set. More precisely, for any $\epsilon>0$ and $L$ large enough, there is a constant $C_\epsilon$ such that for $n$ large enough \be{strat.1} P\pare{\overline{||l_n||_2^2}\ge n\xi}\le C_\epsilon\ P\pare{ ||\ind_{\D_n}l_n||_2^2\ge n\ \xi(1-\epsilon),\ |\D_n|<L}. \end{equation} Then, our main objective is to show that the time spent on $\D_n$ is of order $\sqrt n$. However, this is only possible if some control on the diameter of $\D_n$ is first established. This is the main difficulty. Note that $\D_n$ is visited by the random walk within the time-period $[0,n[$, and from \reff{strat.1}, a crude uniform estimate yields \be{strat.2} P\pare{\overline{||l_n||_2^2}\ge n\xi} \le C_\epsilon (2n)^{dL} \sup_{\Lambda\in ]-n,n[^d,|\Lambda|\le L}\!\!\! P\pare{ ||\ind_\Lambda l_n||_2^2\ge n\ \xi(1-\epsilon)}. \end{equation} Now, we can replace the time period $[0,n[$, in the right hand side of \reff{strat.2}, by an infinite interval $[0,\infty)$ since the local time increases with time. Consider $\Lambda_n\subset ]-n,n[^d$ which realizes the supremum in \reff{strat.2}. Next, we construct two maps: a {\it local} map $\T$ in Section~\ref{sec-moveC}, and a {\it global} map $f$ in Section~\ref{sec-moveloop}. A finite number of iterates of $\T$ (at most $L$), say $\T^L$, transforms $\Lambda_n$ into a subset of finite diameter. On the other hand, $f$ maps $\{\D_n=\Lambda_n\}$ into $\{\D_n=\T(\Lambda_n)\}$, allowing us to compare the probabilities of these two events. Thus, the heart of our argument has two ingredients. \begin{itemize} \item A {\it marriage theorem} which is recalled in Section~\ref{sec-marriage}. It is then used to perform {\it global surgery} on the circuits. \item Classical potential estimates of Sections~\ref{sec-cageloop} and \ref{sec-movetrip}. This is the place where the random walk's features enter the play. 
Our estimates rely on basic estimates (Green's function asymptotics, Harnack's inequalities and heat kernel asymptotics), which are known to hold for general symmetric random walks (see \cite{lawler-limic}). Though we have considered the simplest aperiodic symmetric random walk, all our results hold when the basic potential estimates hold. \end{itemize} We then iterate $f$ a finite number of times to reach $\{\D_n=\T^L(\Lambda_n)\}$. To control the cost of this transformation, it is crucial that only a finite number of iterations of $f$ is needed. The construction of $\T$ and $f$ also requires several preliminary steps. \begin{enumerate} \item Section~\ref{sec-cluster} deals with {\it clusters}. In Section~\ref{sec-cluster}, we introduce a partition of $\Lambda_n$ into a collection of nearby points, called {\it clusters}. In Section~\ref{sec-moveC}, we define a map $\T$ acting on {\it clusters}, by translating one {\it cluster} at a time. \item Section~\ref{sec-circuit} deals with {\it circuits}. In Section~\ref{sec-defcir}, we decompose a trajectory in $\{\D_n=\Lambda_n\}$ into all possible {\it circuits}. We introduce the notions of {\it trip} and {\it loop}. \end{enumerate} We show in Proposition~\ref{prop-time.1} that for trajectories in $\{\D_n=\T^L(\Lambda_n)\}$, no time is wasted on lengthy excursions, and the total time needed to visit $\D_n$ is less than $\alpha{\sqrt n}$, for some large $\alpha$. This step also relies on assuming $d\ge 5$. Indeed, we use the fact that, conditioned on returning to the origin, the expected return time is finite in dimension 5 or more. This concludes the outline of the proof of the upper bound in Proposition~\ref{prop-sub}. The lower bound is easy, and is done in Section~\ref{sec-LB}. Assuming Proposition~\ref{prop-sub}, we are in a situation where a certain $l^2$-norm of an additive process is larger than $\sqrt{n\xi}$ over a time-period of $\alpha {\sqrt n}$. 
Section~\ref{sec-subadditive} presents a subadditive argument yielding the existence of the limit in \reff{intro.4}. We identify the limit in Section~\ref{sec-ident}. We prove Proposition~\ref{prop-CM} in Section~\ref{sec-KMSS}. Finally, the proof of Theorem~\ref{prop-rwrs} is given in Section~\ref{sec-rwrs}. We conclude by mentioning two outstanding problems beyond our reach. \begin{itemize} \item Establish a Large Deviations Principle in $d=3$, showing that, during the time-period $[0,n[$, the walk spends most of its time in a ball of radius about $n^{1/3}$. \item In dimension 4, find which level set of the local times gives a dominant contribution to making the self-intersection large. \end{itemize} \section{Preliminaries on Level Sets.}\label{sec-old} In this section, we recall and refine the analysis of \cite{AC05}. The approach of \cite{AC04,AC05} focuses on the contribution of each level set of the local times to the event $\{||l_n||_2^2-E[||l_n||_2^2]>n \xi\}$. This section is essentially a corollary of \cite{AC05}. We first recall Proposition 1.6 of \cite{AC05}. For $\epsilon_0>0$, set \[ \RR_n=\acc{x\in {\mathbb Z}^d: n^{1/2-\epsilon_0}\le l_n(x)\le n^{1/2+\epsilon_0}}. \] Then, for any $\epsilon>0$ \be{18-AC05} \lim_{n \rightarrow \infty} \frac{1}{\sqrt{n}} \log P \pare{ ||\ind_{\RR_n^c} l_n||_2^2-E_0\cro{||l_n||_2^2} \geq n\epsilon\xi} = - \infty \, . \end{equation} Thus, we have for any $0<\epsilon< 1$, and $\xi>0$ \be{level-main1} P\pare{\overline{||l_n||_2^2}\geq n \xi}\le P \pare{ ||\ind_{\RR_n^c} l_n||_2^2-E_0\cro{||l_n||_2^2} \geq n\epsilon\xi}+ P\pare{ ||\ind_{\RR_n} l_n||_2^2 \ge n\xi(1-\epsilon)}. \end{equation} We only need to focus on the second term of the right hand side of \reff{level-main1}, and for simplicity here, we use $\xi>0$ instead of $\xi(1-\epsilon)$. First, we show in Lemma~\ref{level-lem.1} that when asking $\{||l_n||_2^2 \geq E[||l_n||_2^2] +n\xi\}$ with $\xi>0$, we can assume $\{||l_n||_2^2 \le An\}$ for some large $A$. 
Then, in Lemma~\ref{level-lem.2}, we show that the only sites which matter are those whose local times lie within $[\frac{\sqrt{n}}{A}, A\sqrt{n}]$ for some large constant $A$. \bl{level-lem.1} For $A$ positive, there are constants $C,\kappa>0$ such that \be{level.1} P\pare{\overline{||l_n||_2^2}\ge nA}\le C \exp\pare{-\kappa \sqrt{An}}. \end{equation} \end{lemma} \begin{proof} We rely on Proposition 1.6 of \cite{AC05}, and the proof of Lemma 3.1 of \cite{AC05} (with $p=2$ and $\gamma=1$), for the same subdivision $\acc{b_i,i=1,\dots,M}$ of $[1/2-\epsilon,1/2]$, and the same $\acc{y_i}$ such that $\sum y_i\le 1$, but the level sets are here of the form \be{level.2} \D_i=\acc{x\in {\mathbb Z}^d:\ A^{\frac{1}{2}} n^{b_i}\le l_n(x)< A^{\frac{1}{2}} n^{b_{i+1}}}. \end{equation} Using Lemma 2.2 of \cite{AC05}, we obtain the second line of \reff{level.3}, \ba{level.3} P\pare{ \sum_{\cup \D_i} l_n^2(x) \ge nA}&\le&\sum_{i=1}^{M-1} P\pare{ |\D_i|(A^{\frac{1}{2}}n^{b_{i+1}})^2\ge ny_i A}= \sum_{i=1}^{M-1} P\pare{|\D_i|\ge y_i n^{1-2b_{i+1}}}\cr &\le & \sum_{i=1}^{M-1} (n^d)^{n^{1-2b_{i+1}}y_i} \exp\pare{ -\kappa_d A^{\frac{1}{2}} n^{b_{i}+(1-\frac{2}{d}) (1-2b_{i+1})}y_i^{1-\frac{2}{d}}}\cr &\le & \sup_{i\le M}\acc{\C_i(n) \exp\pare{ -\kappa_d A^{\frac{1}{2}} n^{b_{i}+(1-\frac{2}{d}) (1-2b_{i+1})}y_i^{1-\frac{2}{d}}}}, \end{eqnarray} where $\C_i(n):=M (n^d)^{n^{1-2b_{i+1}}y_i}$. The constant $\kappa_d$ is linked with estimating the probability of spending a given time in a given domain $\Lambda$ of prescribed volume; this latter inequality is derived in Lemma 1.2 of \cite{AC04}. We first need $\C_i(n)$ to be negligible, which imposes \be{level.4} n^{1-2b_{i+1}}y_i \log(n^d)\ll A^{\frac{1}{2}} n^{b_{i}+(1-\frac{2}{d})(1-2b_{i+1})}y_i^{1- \frac{2}{d}}. \end{equation} Inequality \reff{level.4} is easily seen to hold when $b_i$ is larger than $1/2-\epsilon$, for $\epsilon$ small. 
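In detail (a one-line check, using that the subdivision points $b_i,b_{i+1}$ lie in $[1/2-\epsilon,1/2]$ and that $y_i\le 1$, whence $y_i\le y_i^{1-\frac{2}{d}}$):

```latex
% exponent comparison behind \reff{level.4}, with b_i,b_{i+1}\in[1/2-\epsilon,1/2]:
1-2b_{i+1}\ \le\ 2\epsilon ,
\qquad\text{while}\qquad
b_{i}+\Big(1-\frac{2}{d}\Big)(1-2b_{i+1})\ \ge\ b_i\ \ge\ \frac{1}{2}-\epsilon .
```

Thus the ratio of the left to the right side of \reff{level.4} is at most $n^{2\epsilon}\log(n^d)/\big(A^{\frac{1}{2}}n^{\frac{1}{2}-\epsilon}\big)$, which vanishes as soon as $\epsilon<1/6$.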
Now, we need that for some $\kappa>0$ \be{level.5} \kappa_dA^{\frac{1}{2}} n^{b_{i}+(1-\frac{2}{d})(1-2b_{i+1})}(y_i)^{1- \frac{2}{d}} \ge 2\kappa A^{\frac{1}{2}}\sqrt{n}. \end{equation} This holds with the choice of $y_i$ as in Lemma 3.1 of \cite{AC05}. We use one $\kappa$ of \reff{level.5} to absorb $\C_i(n)$ in \reff{level.3}, and we are left with a constant $C$ such that \be{level.7} P\pare{ \sum_{\cup \D_i} l_n^2(x) \ge nA}\le C \exp\pare{-\kappa \sqrt{An}}. \end{equation} \end{proof} For any positive reals $A$ and $\xi$, and $k\in {\mathbb N}\cup\{\infty\}$, we define \be{level.15} \D_k(A,\xi):=\acc{x\in {\mathbb Z}^d: \frac{\xi}{A}\le l_k(x)< A\xi}. \end{equation} \bl{level-lem.2} Fix $\xi>0$. For any $M>0$, there is $A>0$ so that \be{level.16} \limsup_{n\to\infty} \frac{1}{\sqrt n} \log\pare{P\pare{ \sum_{\RR_n\backslash\D_n(A,\sqrt n)} l_n^2(x)>n\xi}}\le -M. \end{equation} Also, \be{level.17} P\pare{ | \D_n(A,\sqrt n)|\ge A^3}\le C\exp\pare{-\kappa \sqrt{ An}}. \end{equation} \end{lemma} \begin{proof} We consider an increasing sequence $\acc{a_i,i=1,\dots,N}$ to be chosen later, and form \be{level.8} {\mathcal B}_i=\acc{x:\ \frac{\sqrt n}{a_i}\le l_n(x)< \frac{\sqrt n}{a_{i-1}}}, \end{equation} where $a_0$ will be chosen as a large constant, and $a_N\sim n^{\epsilon}$. In view of Lemma~\ref{level-lem.1}, it is enough to show that the probability of the event $\acc{\sum_{{\mathcal B}_i} l_n^2(x)\ge n\xi}$ is negligible. First, from Lemma~\ref{level-lem.1}, we can restrict attention to $\acc{An\ge \sum_{{\mathcal B}_i} l_n^2(x)\ge n\xi_i}$ for some large constant $A$ and with $\xi=\sum \xi_i$ a decomposition to be chosen later. When considering the sum over $x\in {\mathcal B}_i$, we obtain \be{level.9} \sum_{x\in {\mathcal B}_i} l_n^2(x)\le nA\Longrightarrow |{\mathcal B}_i|\pare{\frac{\sqrt n}{a_i}}^2\le An \Longrightarrow |{\mathcal B}_i|\le a_i^2 A. \end{equation} Similarly, we obtain the lower bound $|{\mathcal B}_i|\ge \xi_i a_{i-1}^2$. 
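Spelled out, the lower bound on $|{\mathcal B}_i|$ comes from the upper cut-off $\sqrt n/a_{i-1}$ in \reff{level.8}, on the event $\acc{\sum_{{\mathcal B}_i} l_n^2(x)\ge n\xi_i}$:

```latex
n\xi_i\ \le\ \sum_{x\in {\mathcal B}_i} l_n^2(x)
\ \le\ |{\mathcal B}_i|\,\pare{\frac{\sqrt n}{a_{i-1}}}^2
\qquad\Longrightarrow\qquad
|{\mathcal B}_i|\ \ge\ \xi_i\, a_{i-1}^2 .
```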
If we call \be{level.10} H_i=\acc{ a_{i-1}^2\xi_i<|{\mathcal B}_i|\le a_i^2 A}, \end{equation} then by Lemma~\ref{level-lem.1}, if we set $l_n({\mathcal B}_i)=\sum_{x\in {\mathcal B}_i} l_n(x)$ \ba{level.11} P\pare{ \sum_{x\in {\mathcal B}_i} l_n^2(x)> n\xi_i} &\le &P\pare{\sum_{x\in {\mathcal B}_i} l_n^2(x)> n A}+P\pare{H_i\cap \acc{l_n({\mathcal B}_i)\ge a_{i-1}\xi_i \sqrt{n}}}\cr &\le & C e^{-\kappa \sqrt{An}} + (n^d)^{a_i^2 A} \exp\pare{- \kappa_d \frac{ a_{i-1}\xi_i\sqrt{n}}{(a_i^2A)^{2/d}}}. \end{eqnarray} Since we assume $a_i\le n^{\epsilon}$, the term $ (n^d)^{a_i^2 A}$ is innocuous. It remains to find, for any large constant $M$, two sequences $\acc{a_i,\xi_i,i=1,\dots,N}$ such that \be{level.12} \kappa_d \frac{ a_{i-1}\xi_i}{(a_i^2A)^{2/d}}=M,\quad\text{and}\quad \sum \xi_i=\xi. \end{equation} Fix an arbitrary $\delta>0$ and set \be{level.13} a_i:=(1+\delta)^i a_0, \quad \xi_i:= \frac{z(\delta)}{(1+\delta)^{\gamma i}} \xi, \quad\text{and}\quad \gamma=1-\frac{4}{d}, \end{equation} where $z(\delta)$ is a normalizing constant ensuring that $\sum \xi_i=\xi$. Using the values \reff{level.13} in \reff{level.12}, we obtain \be{level.14} \frac{\kappa_dz(\delta) \xi}{(1+\delta) A^{2/d}} a_0^{1-4/d} =M. \end{equation} Now, for any constant $M$, we can choose $a_0$ large enough so that none of the levels ${\mathcal B}_i$ contributes. Note also that $N=\min\acc{i:\ a_i\ge n^\epsilon}$. Finally, \reff{level.17} follows from Lemma~\ref{level-lem.1}, once we note that \[ P\pare{|\D_n(A,\sqrt n)|\ge A^3}\le P\pare{||\ind_{\D_n(A,\sqrt n)} l_n||_2^2\ge An}. \] \end{proof} We will need estimates for other powers of the local times. We choose two parameters $(\alpha,\beta)$ satisfying \reff{def-RegII}, and we further define \be{old-defII} \zeta=\beta\frac{\alpha}{\alpha+1},\qquad b=\frac{\beta}{\alpha+1},\qquad \frac{1}{\alpha^*}=1-\frac{1}{\alpha}, \quad\text{and}\quad \bar\D_n(n^b):=\acc{z:\ l_n(z)\ge n^b}. 
\end{equation} When dealing with the $\alpha^*$-norm of $l_n$, we only focus on sites with large local times. Among those sites, we show that finitely many contribute to making the $\alpha^*$-norm of $l_n$ {\it large}. To appreciate the first estimate, similar in spirit and proof to Lemma~\ref{level-lem.2}, recall that $\zeta<1$, $\alpha^*>1$, and $||l_n||_{\alpha^*}\ge ||l_n||_1=n$. \bl{level-lem.3} Choose $\zeta,b$ as in \reff{old-defII} with $\alpha,\beta$ in Region II. For any $\xi>0$, there are constants $C,\kappa>0$ such that \be{level.22} P\pare{ ||\ind_{\bar\D_n(n^b)} l_n||_{\alpha^*} \ge \xi n^\zeta }\le C \exp\pare{-\kappa \xi n^{\zeta}}. \end{equation} Moreover, for any $M>0$, there is $A>0$ such that \be{level.20} \limsup_{n\to\infty} \frac{1}{n^\zeta} \log P\pare{ ||\ind_{\D_n(A,n^\zeta)^c\cap \bar\D_n(n^b)} l_n||_{\alpha^*}>\xi n^{\zeta}}\le -M. \end{equation} Finally, from \reff{level.22}, we have \be{level.21} P\pare{ | \D_n(A,n^\zeta)|\ge A^2}\le P\pare{ ||\ind_{\D_n(A,n^\zeta)} l_n||_{\alpha^*}\ge An^\zeta}\le C\exp\pare{-\kappa An^{\zeta}}. \end{equation} \end{lemma} The proof is similar to that of Lemmas~\ref{level-lem.1} and ~\ref{level-lem.2}, and we omit the details. We point out that Lemma 3.1 of \cite{AC05} has to be used with $p=\alpha^*$ and $\gamma=\alpha^*\zeta$. Also, Proposition 3.3 of \cite{AC05} holds on $\bar\D_n(n^b)$ since the condition $b \frac{d}{2}\ge \zeta$ is fulfilled in Region II. \section{Clusters' Decomposition.}\label{sec-cluster} From Lemma~\ref{level-lem.2}, for any $\epsilon>0$, and $A$ large enough, we have $C_\epsilon>0$ such that for any $\xi>0$ and $n$ large enough \be{circuit.1} P\pare{\overline{||l_n||_2^2}\ge n\xi(1+\epsilon)} \le C_\epsilon P\pare{||\ind_{\D_n(A,\sqrt n)} l_n||_2^2\ge n \xi,\ |\D_n(A,\sqrt n)|\le A^3}. 
\end{equation} Since $\D_n(A,\sqrt n)\subset ]-n,n[^d$, we bound the right hand side of \reff{circuit.1} by a uniform bound \be{circuit.2} \begin{split} P\pare{\overline{||l_n||_2^2}\ge n\xi(1+\epsilon)} \le& C_\epsilon (2n)^{dA^3} \sup_{\Lambda} P\pare{||\ind_{\Lambda} l_n||_2^2\ge n \xi,\D_n(A,\sqrt n)=\Lambda}\\ \le& C_\epsilon (2n)^{dA^3} \sup_{\Lambda} P\pare{||\ind_{\Lambda} l_{\infty}||_2^2\ge n \xi, \Lambda\subset \D_{\infty}(A,\sqrt n)}, \end{split} \end{equation} where in the supremum over $\Lambda$ we assumed that $\Lambda\subset ]-n,n[^d,\ |\Lambda|\le A^3$. Also, in $\D_{\infty}(A,\sqrt n)$ (defined in \reff{level.15}) we may adjust with a larger $A$ if necessary. If we denote by $\Lambda_n$ the finite subset of ${\mathbb Z}^d$ which realizes the last supremum in \reff{circuit.2}, then our starting point, in this section, is the collection $\{\Lambda_n,n\in {\mathbb N}\}$ of finite subsets of ${\mathbb Z}^d$. \subsection{Defining Clusters.}\label{sec-defC} In this section, we partition an arbitrary finite subset of ${\mathbb Z}^d$, say $\Lambda$, into subsets of nearby sites, with the feature that these subsets are far apart. More precisely, this partitioning goes as follows. \bl{lem-cluster} Fix a finite subset $\Lambda$ of ${\mathbb Z}^d$, and an integer $L$. There is a partition of $\Lambda$ whose elements are called $L$-clusters, with the property that two distinct $L$-clusters $\C$ and $\tilde \C$ satisfy \be{clus.3} \text{dist}(\C,\tilde \C):=\inf\acc{|x-y|,\ x\in\C,y\in \tilde \C}\ge 4\max\pare{ \text{diam}(\C), \text{diam}(\tilde\C),L}. \end{equation} Also, there is a positive constant $C(\Lambda)$ which depends only on $|\Lambda|$, such that for any $L$-cluster $\C$ \be{clus.key2} \text{diam}(\C)\le C(\Lambda)\ L. \end{equation} \end{lemma} \br{rem-cluster} If we define an $L$-shell $\S_L(\C)$ around $\C$ by \be{clus.key1} \S_L(\C)=\acc{z\in {\mathbb Z}^d: \ \text{dist}(z,\C)\le \max(L,\text{diam}(\C))},\quad\text{ then } \quad \S_L(\C)\cap \Lambda=\C. 
\end{equation} We deduce from \reff{clus.key2}, and \reff{clus.key1}, that for any $\C$ and any $x,y\in \C$, there is a finite sequence of points $x_0=x,\dots,x_k=y$ (not necessarily in $\Lambda$), such that for $i=1,\dots,k$ \be{clus.key3} |x_i-x_{i-1}|\le L,\quad\text{and}\quad B(x_i,L)\subset \S(\C)\quad (\text{where }B(x_i,L)=\acc{z\in {\mathbb Z}^d:\ |x_i-z|\le L}). \end{equation} \end{remark} \begin{proof} We build clusters by a bootstrap algorithm. At level 0, we define a {\it linking} relation for $x,y\in \Lambda$: $x\overset{0}{\leftrightarrow} y$ if $|x-y|\le 4L$, and an equivalence relation $x\overset{0}{\sim} y$ if there is a (finite) path $x=x_1,x_2,\dots,x_k=y\in \Lambda$ such that for $i=1,\dots,k-1$, $x_i\overset{0}{\leftrightarrow} x_{i+1}$. The clusters at level 0 are the equivalence classes of $\Lambda$. We denote by $\C^{(0)}(x)$ the class which contains $x$, and by $|\C^{(0)}|$ the number of clusters at level 0, which is bounded by $|\Lambda|$. It is important to note that the diameter of a cluster is bounded independently of $n$. Indeed, it is easy to see, by induction on $|\Lambda|$, that for any $x\in \Lambda$, we have $\text{diam}(\C^{(0)}(x))\le 4L(|\C^{(0)}(x)|-1)$, so that \be{clus.4} \text{diam}(\C^{(0)}(x))\le 4L |\Lambda|. \end{equation} Then, we set \be{clus.1} x\overset{1}{\leftrightarrow} y\quad\text{if}\quad |x-y|\le 4 \max\pare{ \text{diam}(\C^{(0)}(x)), \text{diam}(\C^{(0)}(y)),L}. \end{equation} As before, relation $\overset{1}{\leftrightarrow}$ is associated with an equivalence relation $\overset{1}{\sim}$ which defines clusters $\C^{(1)}$. 
Note also that $x\overset{0}{\sim} y$ implies that $x\overset{1}{\sim}y$, and that for any $x\in \Lambda$, \be{clus.diam} \text{diam}(\C^{(1)}(x))\le 5 |\C^{(0)}| \max\acc{ \text{diam}(\C):\ \C\in \C^{(0)}}\le 5 |\Lambda| (4L |\Lambda|), \end{equation} since we produce $\C^{(1)}$'s by multiple concatenations of pairs of $\C^{(0)}$-clusters at a distance of at most four times the maximum diameters of the clusters making up level 0, and these latter clusters number at most $|\Lambda|$. In the worst scenario, there is one cluster at level 1 made up of all clusters of $\C^{(0)}$ at a distance of at most $4 \max\acc{\text{diam}(\C):\ \C\in \C^{(0)}}$. If the number of clusters at level 1 is the same as at level 0, then the algorithm stops, and any two distinct clusters $\C,\tilde \C\in \C^{(0)}$ satisfy \[ \text{dist}(\C,\tilde \C):=\inf\acc{|x-y|,\ x\in\C,y\in \tilde \C}\ge 4\max\pare{ \text{diam}(\C), \text{diam}(\tilde\C),L}. \] Otherwise, the number of clusters at level 1 has decreased by at least one. Now, assume by way of induction, that we have reached level $k-1$. We define $\overset{k}{\leftrightarrow}$ as follows \be{clus.2} x\overset{k}{\leftrightarrow} y\quad\text{if}\quad |x-y|\le 4 \max\pare{ \text{diam}(\C^{(k-1)}(x)), \text{diam}(\C^{(k-1)}(y)),L}. \end{equation} Now, since $|\Lambda|$ is finite, the algorithm stops in a finite number of steps. The clusters we obtain eventually are called $L$-clusters. Note that two distinct $L$-clusters satisfy \reff{clus.3}. Property \reff{clus.key2} with $C(\Lambda)=(5|\Lambda|)^{|\Lambda|}$, follows by induction with the same argument used to prove \reff{clus.diam}. \end{proof} \subsection{Transforming Clusters.}\label{sec-moveC} For a subset $\Lambda$ and an integer $L$, assume that we have a partition in terms of $L$-clusters as in Lemma~\ref{lem-cluster}. We define the following map on the partition of $\Lambda$. 
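For illustration only (this plays no r\^ole in the proofs), the merging procedure of Lemma~\ref{lem-cluster} can be sketched as follows, in a simplified form that merges any pair of clusters violating \reff{clus.3} directly, rather than level by level; the sup-norm below is a hypothetical stand-in for the norm $|\cdot|$:

```python
from itertools import combinations

def lclusters(points, L):
    """Illustrative sketch of the cluster construction of Lemma lem-cluster:
    merge any two clusters violating the separation condition (clus.3),
    until every pair of clusters is 4*max(diam, diam', L)-separated.
    (Simplified: merges violating pairs directly instead of level by level.)"""
    def dist(c1, c2):
        # distance between clusters, here in sup-norm (an assumption)
        return min(max(abs(a - b) for a, b in zip(x, y))
                   for x in c1 for y in c2)
    def diam(c):
        return max((max(abs(a - b) for a, b in zip(x, y))
                    for x in c for y in c), default=0)
    clusters = [frozenset([p]) for p in points]
    merged = True
    while merged:  # each merge lowers the cluster count, so this terminates
        merged = False
        for c1, c2 in combinations(clusters, 2):
            if dist(c1, c2) < 4 * max(diam(c1), diam(c2), L):
                clusters.remove(c1)
                clusters.remove(c2)
                clusters.append(c1 | c2)
                merged = True
                break
    return clusters
```

On the toy input $\{(0,0),(1,0),(100,100)\}$ with $L=2$, the two nearby points merge into one cluster while the distant point stays alone.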
\bl{lem-move} There is a map $\T$ on the $L$-clusters of $\Lambda$ such that $\T(\C)=\C$ for every cluster but one, say $\C_1$, whose image $\T(\C_1)$ is a translate of $\C_1$ such that, when the following minimum is taken over all $L$-clusters, \be{red-cluster} 0=\min\acc{\text{dist}\pare{\C,\T(\C_1)}-\pare{\text{diam}(\C)+\text{diam}(\T(\C_1))}}. \end{equation} Also, for any $L$-cluster $\C\not= \C_1$, we have \be{notbad-cluster} \text{dist}(\C,\T(\C_1))\le 2 \text{dist}(\C,\C_1). \end{equation} \end{lemma} We set $\T(\Lambda)=\cup\ \T(\C)$. Also, we can define $\T$ as a map on ${\mathbb Z}^d$: for a site $z\in \C_1$, $\T(z)$ denotes the translate of $z$, otherwise $\T(z)=z$. Finally, we can define the inverse of $\T$, which we denote $\T^{-1}$. \br{move-rem.1} Note that $\T(\Lambda)$ has at least one $L$-cluster less than $\Lambda$, since \reff{clus.3} does not hold for $(\C_0,\T(\C_1))$. Thus, if we apply the $L$-cluster partition of Lemma~\ref{lem-cluster} to $\T(\Lambda)$, $\C_0$ and $\T(\C_1)$ would merge into one $L$-cluster, possibly triggering further merging. \end{remark} \begin{proof} We start with two clusters which minimize the distance among clusters. Let $\C_0$ and $\C_1$ be such that \be{clus.5} \text{dist}(\C_0,\C_1)= \min\acc{\text{dist}(\C,\C'):\ \C,\C' \text{ distinct clusters}}. \end{equation} Now, let $(x_0,x_1)\in \C_0\times\C_1$ be such that $|x_0-x_1|=\text{dist}(\C_0,\C_1)$, and note that by \reff{clus.3}, $|x_0-x_1|\ge 2\pare{ \text{diam}(\C_0)+\text{diam}(\C_1)}$. Assume that $\text{diam}(\C_0)\ge \text{diam}(\C_1)$. We translate sites of $\C_1$ by a vector whose coordinates are the integer parts of the following vector \be{clus.6} u=(x_0-x_1)\pare{1-\frac{\text{diam}(\C_0)+\text{diam}(\C_1)}{|x_0-x_1|}}, \end{equation} in such a way that the translated cluster, say $\T(\C_1)$, is at a distance $\text{diam}(\C_0)+\text{diam}(\C_1)$ from $\C_0$. We now see that $\T(\C_1)$ is far enough from other clusters. 
Let $\C$ be a cluster distinct from $\C_0$ and $\C_1$, let $z\in \C$, and write $\tilde x_1=\T(x_1)$ and $\tilde y=\T(y)$ for $y\in \C_1$. Note that \ba{clus.8} |z-\tilde y|&\ge& |z-x_0|-|x_0-\tilde y|\ge |z-x_0|-\pare{|x_0-\tilde x_1|+|\tilde x_1-\tilde y|}\cr &\ge & 4 \max(\text{diam}(\C),\text{diam}(\C_0))-\pare{ \text{diam}(\C_0)+\text{diam}(\C_1)+\text{diam}(\C_1)}\cr &\ge & \text{diam}(\C)+\text{diam}(\C_1). \end{eqnarray} Thus, for any cluster $\C$, we have \be{clus-diam} \text{dist}(\C,\T(\C_1))\ge \text{diam}(\C)+\text{diam}(\T(\C_1)). \end{equation} Finally, we prove \reff{notbad-cluster}. Let $z$ belong to some cluster $\C\not= \C_1$, and let $\tilde y\in \T(\C_1)$ be the image of $y\in \C_1$ after translation by $u$. Then, using that $\text{dist}(\C_0,\C_1)$ minimizes the distance among distinct clusters, \ba{clus.7} |z-\tilde y|&\le& |z-y|+|y-\tilde y|\le |z-y|+\text{dist}(\C_0,\C_1)\cr &\le& |z-y|+\text{dist}(\C,\C_1)\le 2|z-y|. \end{eqnarray} \end{proof} \section{On Circuits.}\label{sec-circuit}\label{circuit} \subsection{Definitions and Notations.}\label{sec-defcir} Let $\Lambda_n\subset{\mathbb Z}^d$ maximize the supremum in the last term of \reff{circuit.2}. Assume we have partitioned $\Lambda_n$ into $L$-clusters, as done in Section~\ref{sec-cluster}. We decompose the paths realizing $\{||1_{\Lambda_n }l_\infty||_2^2\ge n\xi\}$ with $\{\Lambda_n\subset\D_\infty(A,\sqrt n)\}$ into the successive visits to $\Lambda_n'=\Lambda_n\cup \T(\Lambda_n)$. For ease of notation, we drop the subscript $n$ in $\Lambda$, though it is important to keep in mind that $\Lambda$ varies as we increase $n$. We consider the collection of integer-valued vectors over $\Lambda'$, which we think of as candidates for the local times over $\Lambda'$. Thus \be{circuit.3} V(\Lambda',n):=\acc{ {\bf k}\in {\mathbb N}^{\Lambda'}:\ \inf_{x\in \Lambda}k(x)\ge \frac{\sqrt n}{A},\ \sup_{x\in \Lambda'}k(x)\le A{\sqrt n},\quad \sum_{x\in \Lambda} k^2(x)\ge n\xi}. 
\end{equation} Also, for ${\bf k}\in V(\Lambda',n)$, we set \be{main-estimate} |{\bf k}|=\sum_{x\in \Lambda'} k(x),\quad\text{and note that}\quad |{\bf k}|\le |\Lambda'|A{\sqrt n}\le 2A^4 {\sqrt n}. \end{equation} We now need more notation. For $U\subset {\mathbb Z}^d$, we call $T(U)$ the first hitting time of $U$, and we denote by $T:=T(\Lambda')=\inf\acc{n\ge 0: S_n\in \Lambda'}$. We also use the notation $\tilde T(U)=\inf\acc{n\ge 1: S_n\in U}$. For a trajectory in the event $\acc{l_\infty(x)=k(x),\forall x\in \Lambda'}$, we call $\acc{T^{(i)},i\in {\mathbb N}}$ the successive times of visits to $\Lambda'$: $T^{(1)}=\inf\acc{n\ge 0: S_n\in \Lambda'}$, and by induction for $i\le |{\bf k}|$ when $\acc{T^{(i-1)}<\infty}$ \be{circuit.4} T^{(i)}=\inf\acc{n>T^{(i-1)}: S_n\in \Lambda'}. \end{equation} The first observation is that the number of {\it long trips} cannot be too large. \bl{circuit-lem.1} For any $\epsilon>0$, and $M>0$, there is $L>0$ such that for each ${\bf k}\in V(\Lambda',n)$, \be{circuit.5} P\pare{ l_\infty|_{\Lambda'} ={\bf k},\ \big|\acc{i\le |{\bf k}|:\ |S_{T^{(i)}}-S_{T^{(i-1)}}\big|> {\sqrt L}}|\ge \epsilon {\sqrt n}}\le e^{-M{\sqrt n}}. \end{equation} \end{lemma} We know from \cite{AC05} that the probability of $\{\overline{||l_n||_2^2}\ge n\xi\}$ is bounded from below by $\exp(-\bar c \sqrt n)$ for some positive constant $\bar c$. We assume $M>2\bar c$ (and $L>L(M)$ given in Lemma~\ref{circuit-lem.1}), so that the left hand side of \reff{circuit.5} is negligible. The proof of this lemma is postponed to the Appendix. We now consider the collection of possible sequences of visited sites of $\Lambda'$, and in view of Lemma~\ref{circuit-lem.1}, we allow at most $\epsilon {\sqrt n}$ pairs of consecutive sites at a distance larger than ${\sqrt L}$. 
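For illustration only, the successive visit times $T^{(i)}$ and the induced string of visited sites can be sketched as follows (`path` and `sites` are hypothetical stand-ins for the walk trajectory and $\Lambda'$):

```python
from collections import Counter

def visit_times(path, sites):
    """Successive visit times T^(1) < T^(2) < ... of `sites` along `path`,
    as in (circuit.4); time 0 counts if the path starts in `sites`."""
    return [t for t, s in enumerate(path) if s in sites]

def circuit_of(path, sites):
    """The string z of successively visited sites of `sites`, together with
    its local times (number of occurrences of each site in the string)."""
    z = [path[t] for t in visit_times(path, sites)]
    return z, Counter(z)
```

For instance, on the toy path $0,5,2,5,7,1$ with `sites` $=\{5,7\}$, the visit times are $1,3,4$ and the visited string is $5,5,7$.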
First, for ${\bf k}\in V(\Lambda',n)$, a string ${\bf z}\in (\Lambda')^{|{\bf k}|}$, and $x\in {\mathbb Z}^d$, we denote by $l_{{\bf z}}(x)$ the {\it local time} of ${\bf z}$ at $x$, that is, the number of occurrences of $x$ in the string ${\bf z}$. Then, \be{circuit.6} \E({\bf k})=\acc{{\bf z}\in (\Lambda')^{|{\bf k}|}:\ l_{{\bf z}}(x)=k(x), \forall x\in \Lambda',\quad \sum_{i< |{\bf k}|} \ind_{\acc{|z(i+1)-z(i)|>{\sqrt L}}}< \epsilon {\sqrt n}}. \end{equation} \bd{def-circuit} For ${\bf k}\in V(\Lambda',n)$, a {\it circuit} is an element of $\E({\bf k})$. The random walk follows circuit ${\bf z}\in \E({\bf k})$, if it belongs to the event \be{circuit.13} \acc{S_{T^{(i)}}=z(i),i=1,\dots,|{\bf k}|}\cap \acc{ T^{(|{\bf k}|+1)}=\infty}. \end{equation} \end{definition} When we lift the second constraint in \reff{circuit.13}, we obtain, when $L$ is large enough (with the convention $z(0)=0$), \be{circuit.14} P\pare{||\ind_{\Lambda}l_\infty||_2^2\ge n\xi,\ \Lambda \subset \D_\infty(A,\sqrt n)}\le 2\sum_{{\bf k}\in V(\Lambda',n)} \sum_{{\bf z}\in\E({\bf k})} \prod_{i=1}^{|{\bf k}|} P_{z(i-1)}\pare{S_T=z(i)}. \end{equation} We now come to the definitions of {\it trips} and {\it loops}. \bd{def-trip} Let ${\bf k}\in V(\Lambda',n)$ and ${\bf z}\in \E({\bf k})$. A {\it trip} is a pair $(z(i),z(i+1))$, where $z(i)$ and $z(i+1)$ do not belong to the same cluster. A {\it loop} is a maximal substring of ${\bf z}$ belonging to the same cluster. \end{definition} \br{rem-trip} We think of a circuit as a succession of loops connected by trips. Recall that \reff{clus.3} tells us that two points of a trip are at a distance larger than $L$. Thus, trips are necessarily long journeys, whereas loops may contain many short journeys, typically of the order of $\sqrt n$ in number. For ${\bf z}\in \E({\bf k})$, the number of trips is less than $\epsilon {\sqrt n}$, and so is the number of loops, since each loop is followed by a trip. 
\end{remark} We recall the notations of Section~\ref{sec-moveC}: $\Lambda=\{\C_0,\C_1,\dots,\C_k\}$, with $\text{dist}(\C_0,\C_1)$ minimizing the distance among the clusters. The map $\T$ translates only cluster $\C_1$. We now fix ${\bf k}\in V(\Lambda',n)$ and ${\bf z}\in \E({\bf k})$. We number the successive times of entrance to and exit from $\C_1$: \be{circuit.7} \tau_1=\inf\acc{n>0:\ z(n)\in \C_1},\quad\text{and}\quad \sigma_1=\inf\acc{n>\tau_1:\ z(n)\not\in \C_1}, \end{equation} and by induction, if we assume $\acc{\tau_1,\sigma_1,\dots,\tau_i,\sigma_i}$ defined with $\sigma_i<\infty$, then \be{circuit.8} \tau_{i+1}=\inf\acc{n>\sigma_i:\ z(n)\in \C_1},\quad\text{and}\quad \sigma_{i+1}=\inf\acc{n>\tau_{i+1}:\ z(n)\not\in \C_1}. \end{equation} \bd{def-loop} For a configuration ${\bf z}\in \E({\bf k})$, its $i$-th $\C_1$-loop is \be{circuit.9} \L(i)=\acc{z(\tau_i),z(\tau_i+1),\dots,z(\sigma_i-1)}. \end{equation} We associate with $\L(i)$ the entrance and exit sites of $\C_1$, $p(i)=\acc{z(\tau_i),z(\sigma_i-1)}$, which we think of as the {\it type} of the $\C_1$-loop. \end{definition} The construction is identical for $\T(\C_1)$ (usually with a tilde put on all symbols). \subsection{Encaging Loops.}\label{sec-cageloop} We wish eventually to transform a piece of random walk associated with a $\C_1$-loop into a piece of random walk associated with a $\T(\C_1)$-loop. We explain one obvious problem we face when acting with $\T$ on circuits. Consider a $\C_1$-loop in a circuit ${\bf z}$. Assume, for simplicity, that it corresponds to the $i$-th $\C_1$-loop. In general, \be{T-problem} \prod_{k=\tau_i}^{\sigma_i-2} P_{z(k)}\pare{S_T=z(k+1)}\quad\not=\quad \prod_{k=\tau_i}^{\sigma_i-2} P_{\T(z(k))}\pare{S_T=\T(z(k+1))}. 
\end{equation} However, if while travelling from $z(k)$ to $z(k+1)$, the walk were forced to stay inside an $L$-shell of $\C_1$ during $[\tau_i,\sigma_i[$, then under $\T$, we would have a walk travelling from $\T(z(k))$ to $\T(z(k+1))$, inside an $L$-shell of $\T(\C_1)$. To give a precise meaning to our use of the expression {\it encage}, we recall that for any cluster $\C$, the $L$-shell around $\C$ is denoted \[ \S(\C)=\acc{z:\ \text{dist}(z,\C)\le\max(L,\text{diam}(\C))}. \] Now, for $x,y\in \C$, the random walk is encaged inside $\S$ while travelling from $x$ to $y$ if it does not exit $\S$ before touching $y$. The main result in this section is the following proposition. \bp{prop-encage} Fix a circuit ${\bf z}\in \E({\bf k})$ with ${\bf k}\in V(\Lambda',n)$. For any $\epsilon>0$, there is an integer $L$, and a constant $\beta>0$ independent of $\epsilon$, such that if $\C_i:=\C(z(i))$, and \be{def-encage} \begin{split} P^{L}_{z(i)}\pare{S_T=z(i+1)}=& \ind_{\acc{z(i+1)\in \C_i}}P_{z(i)}\pare{S_T=z(i+1),T<T(\S(\C_i))}\\ &\quad+ \ind_{\acc{z(i+1)\not\in \C_i}}P_{z(i)}\pare{S_T=z(i+1)}, \end{split} \end{equation} then \be{cor-cage} \prod_{i=0}^{|{\bf k}|-1} P_{z(i)}\pare{S_T=z(i+1)} \le e^{\beta \epsilon {\sqrt n}} \prod_{i=0}^{|{\bf k}|-1} P^{L}_{z(i)}\pare{S_T=z(i+1)}. \end{equation} \end{proposition} \br{rem-encage} Consider a $\C$-loop, say $\L$, and assume that for some integer $i$, $\L$ corresponds to the $i$-th $\C$-loop in circuit ${\bf z}$. We use the shorthand notation $\text{Weight}(\L)$ to denote the probability associated with $\L$ \be{not-loop} \text{Weight}(\L):= \prod_{k=\tau_i}^{\sigma_i} P^L_{z(k-1)}\pare{S_T=z(k)}. \end{equation} Note that $\text{Weight}(\L)$ includes the probabilities of the entering and exiting {\it trips}. The point of encaging loops is the following identity \[ \prod_{k=\tau_i}^{\sigma_i-2} P^L_{z(k)}\pare{S_T=z(k+1)}= \prod_{k=\tau_i}^{\sigma_i-2} P^L_{\T(z(k))}\pare{S_T=\T(z(k+1))}. 
\] Thus, if we set $z=z(\tau_i-1)$ and $z'=z(\sigma_i)$ \be{encage-key} \text{Weight}(\L)=\frac{P_{z}\pare{S_T=z(\tau_i)}} {P_{z}\pare{S_T=\T(z(\tau_i))}}\frac{P_{z(\sigma_i-1)}\pare{S_T=z'}} {P_{\T(z(\sigma_i-1))}\pare{S_T=z'}}\ \text{Weight}(\T(\L)). \end{equation} \end{remark} The proof of Proposition~\ref{prop-encage} is divided into two lemmas. The first lemma deals with excursions between {\it close} sites. Such excursions are abundant. The larger $L$ is, the better the estimate \reff{encage.1} of Lemma~\ref{lem-encageS}. The second result, Lemma~\ref{lem-encageB}, deals with excursions between {\it distant} sites of the same cluster. Such excursions are rare, and even a large constant in the bound \reff{encage.2} is innocuous. \bl{lem-encageS} For any $\epsilon>0$, there is $L$, such that for any $L$-cluster $\C$, and $x,y\in \C$, with $|x-y|\le \sqrt{L}$, we have \be{encage.1} P_x(S_T=y)\le e^{\epsilon} P_x\pare{ S_T=y,T<T(\S)}. \end{equation} \end{lemma} \bl{lem-encageB} There is $C_B$ independent of $L$, such that for any $L$-cluster $\C$, and $x,y\in \C$, with $|x-y|> \sqrt{L}$, we have \be{encage.2} P_x(S_T=y)\le C_B P_x\pare{ S_T=y,T<T(\S)}. \end{equation} \end{lemma} Lemmas~\ref{lem-encageS} and~\ref{lem-encageB} are proved in the Appendix. We explain how they yield \reff{cor-cage}, that is, how to bound the cost of encaging a {\it loop}. Consider a circuit associated with ${\bf k}\in V(\Lambda',n)$ and ${\bf z}\in \E({\bf k})$. \begin{itemize} \item[(i)] Each journey between sites at a distance less than ${\sqrt L}$ brings a cost $e^{\epsilon}$ from \reff{encage.1}, and even if ${\bf z}$ consisted only of such journeys, the cost would be negligible, since the total number of visits to $\Lambda'$ is $|{\bf k}|\le 2A^4 {\sqrt n}$, as seen in \reff{main-estimate}. 
\item[(ii)] Each journey between sites at a distance larger than ${\sqrt L}$ brings a constant $C_B$, but their total number is less than $\epsilon{\sqrt n}$ by the second constraint in \reff{circuit.6}. \end{itemize} Combining (i) and (ii), we obtain \reff{cor-cage}. \subsection{Local Circuits Surgery.}\label{sec-movetrip} In this section, we first estimate the cost of wiring trips differently. More precisely, we have the following two lemmas. \bl{trans-lem.1} There is a constant $C_T>0$, such that for any $y\in \Lambda\backslash \C$ and $x\in \C$, we have \be{trans.1} P_y( S_T=x)\le C_T P_{y}( S_T=\T(x)). \end{equation} \end{lemma} \br{rem-trans.1} By noting that for any $x,y\in \Lambda$, $P_x(S_T=y)= P_y(S_T=x)$, we see that \reff{trans.1} also holds with the r\^ole of $x$ and $y$ interchanged. However, it is important to see that the following inequality, with $C$ independent of $n$, \be{wrong} P_y( S_T=\T(x))\le C P_{y}( S_T=x)\quad\text{ is wrong!} \end{equation} Indeed, the distance between $y$ and $\T(x)$ might be considerably shorter than the distance between $y$ and $x$, and the constant $C$ in \reff{wrong} should depend on this ratio of distances, and thus on $n$. \end{remark} Secondly, we need to wire different points of the same cluster to an outside point. \bl{improp-lem.1} There is a constant $C_I>0$, such that for all $x,x'\in \C$, and for $y\in \Lambda'\backslash \C$ \be{improp.21} P_y(S_T=x)\le C_I P_y(S_T=x'),\text{ and for }y\in \Lambda'\backslash \T(\C), \ P_y(S_T=\T(x))\le C_I P_y(S_T=\T(x')). \end{equation} Moreover, \reff{improp.21} holds when we interchange initial and final conditions. \end{lemma} Finally, we compare the cost of different trips joining $\C$ and $\T(\C)$. This is a corollary of Lemma~\ref{improp-lem.1}. \bc{improp-cor.1} For all $x,x'\in \C$ and $y,y'\in \C$, \be{improp.5} P_{x}(S_T=\T(y))\le C_I^2 P_{x'}(S_T=\T(y')) ,\quad\text{ and }\quad P_{\T(x)}(S_T=y)\le C_I^2 P_{\T(x')}(S_T=y'). 
\end{equation} \end{corollary} \section{Global Circuits Surgery.}\label{sec-moveloop} In this section, we discuss the following key result. We use the notation of Section~\ref{sec-defcir}. \bp{synth-prop.1} There is $\beta>0$, such that for any $\epsilon>0$, \be{synth.1} P\pare{||\ind_{\Lambda}l_\infty||_2^2\ge n\xi,\ \Lambda \subset \D_\infty(A,\sqrt n)}\le e^{\beta\epsilon\sqrt{n}} P\pare{ ||\ind_{\T(\Lambda)}l_{\infty}||_2^2\ge n\xi, \ \T(\Lambda) \subset \D_\infty(A,\sqrt n)}. \end{equation} \end{proposition} We iterate Proposition~\ref{synth-prop.1} a finite number of times, with starting set $\T(\Lambda)$, then $\T^2(\Lambda)$, and so forth (at most $|\Lambda|$ iterations are enough), and end up with a finite set $\tilde \Lambda$ made up of just one $L$-cluster. If $\text{dist}(0,\tilde \Lambda)$ is larger than $2\text{diam}(\tilde \Lambda)$, then we can choose an arbitrary point $z^*$ at a distance $\text{diam}(\tilde \Lambda)$ from $\tilde \Lambda$, and replace, in the circuit decomposition \reff{circuit.14}, $P_0(S_T=z(1))$, for any $z(1)\in \tilde \Lambda$, by $P_{z^*}(S_T=z(1))$ at the cost of a constant, by arguments similar to those of Section~\ref{sec-movetrip}, and then use translation invariance to translate $\tilde \Lambda$ by $z^*$ back to the origin. Thus, from Proposition~\ref{synth-prop.1}, we easily obtain the following result. \bp{prop-synth2} There is $\tilde \Lambda\ni 0$ a subset of ${\mathbb Z}^d$ whose diameter depends on $\epsilon$ but not on $n$, such that for $n$ large enough \be{synth.14} P\pare{||\ind_{\Lambda}l_\infty||_2^2\ge n\xi,\ \Lambda \subset \D_\infty(A,\sqrt n)}\le e^{\beta\epsilon \sqrt{n}} P_0\pare{||\ind_{\tilde \Lambda} l_{\infty}||_2^2\ge n\xi, \ \tilde \Lambda \subset \D_\infty(A,\sqrt n)}. \end{equation} \end{proposition} \noindent{\bf First steps of proof of Proposition~\ref{synth-prop.1}.} Fix $\epsilon>0$. 
Proposition~\ref{prop-encage} produces a scale $L$ which defines $L$-clusters, which in turn allows us to define {\it circuits}. Also, the constant $\beta$ in \reff{cor-cage} is independent of $\epsilon$. Recalling \reff{circuit.14} together with \reff{cor-cage}, we obtain \be{circuit-cage} P\pare{||\ind_{\Lambda}l_\infty||_2^2\ge n\xi,\ \Lambda \subset \D_\infty(A,\sqrt n)}\le e^{\beta\epsilon \sqrt n} \sum_{{\bf k}\in V(\Lambda',n)} \sum_{{\bf z}\in\E({\bf k})} \prod_{i=1}^{|{\bf k}|} P^L_{z(i-1)}\pare{S_T=z(i)}. \end{equation} Recall that for ${\bf k}\in V(\Lambda',n)$, $\E({\bf k})$ is the collection of possible circuits producing local times ${\bf k}$ with $\{\sum_{\Lambda} k(x)^2\ge n\xi\}$. The aim of this section is to modify the circuits so as to interchange the r\^ole of $\C_1$ and $\T(\C_1)$. We aim at building a map $f$ on circuits with the following three properties: if ${\bf z} \in \E({\bf k})$, \be{feat-0} \text{(i)}\qquad\forall x\in \Lambda\backslash\C_1, \ l_{f({\bf z})}(x)=k(x),\ \forall x\in \C_1,\ l_{f({\bf z})}(\T(x))\ge k(x), \text{ and } \ l_{f({\bf z})}(x)\le k(\T(x)). \end{equation} Secondly, for $\beta>0$ and a constant $C(\Lambda)>0$ depending only on $|\Lambda|$, \be{feat-1} \text{(ii)}\qquad \forall z\in f(\E({\bf k})),\quad |f^{-1}(z)|\le C(\Lambda) e^{\beta \epsilon \sqrt n}. \end{equation} Thirdly, \be{feat-2} \text{(iii)}\qquad \prod_{i=0}^{|{\bf k}|-1} P^{L}_{z(i)}\pare{S_T=z(i+1)} \le e^{\beta \epsilon {\sqrt n}} \prod_{i=0}^{|{\bf k}|-1} P^{L}_{f(z(i))}\pare{S_T=f(z(i+1))}. \end{equation} Assume, for a moment, that we have $f$ with (i), (ii) and (iii). 
Then, summing over ${\bf z}\in\E({\bf k})$, \be{synth.12} \begin{split} \sum_{{\bf z}\in\E({\bf k})}&\prod_{i=0}^{|{\bf k}|-1} P^{L}_{z(i)}\pare{S_T=z(i+1)} \le e^{\beta\epsilon\sqrt{n}}\sum_{{\bf z}\in\E({\bf k})} \prod_{i=0}^{|{\bf k}|-1} P^{L}_{f(z(i))}\pare{S_T=f(z(i+1))}\\ &\le e^{\beta\epsilon\sqrt{n}}\sum_{{\bf z}\in f(\E({\bf k}))} |f^{-1}(z)|\prod_{i=0}^{|{\bf k}|-1} P^{L}_{z(i)}\pare{S_T=z(i+1)}\\ &\le C(\Lambda) e^{2\beta\epsilon\sqrt{n}}\ P_0\pare{l_{\infty}|_{\Lambda\backslash\C_1}=k|_{\Lambda\backslash\C_1} ,\ \forall x\in \C_1,\ l_{\infty}(\T(x))\ge k(x),\text{ and } \ l_{\infty}(x)\le k(\T(x))}. \end{split} \end{equation} We further sum over ${\bf k}\in V(\Lambda',n)$, replace the sum over $\acc{k(y)\le A\sqrt n,\ y\in \T(\C_1)}$ by a factor $(A\sqrt{n})^{|\Lambda|}$, and rearrange the sum over $\acc{k(y) ,\ y\in \C_1}$, to obtain \be{synth.13} \begin{split} \sum_{{\bf k}\in V(\Lambda',n)}\sum_{{\bf z}\in \E({\bf k})} \prod_{i=0}^{|{\bf k}|-1}&P^{L}_{z(i)}\pare{S_T=z(i+1)} \le e^{2\beta\epsilon\sqrt{n}} (A\sqrt{n})^{|\Lambda|}\\ &\times E\cro{\prod_{y\in \C_1} l_\infty(\T(y)),\ \T(\Lambda) \subset \D_\infty(A,\sqrt n), ||\ind_{\T(\Lambda)}l_{\infty}||_2^2\ge n\xi}. \end{split} \end{equation} Note that in \reff{synth.13}, we can assume $l_\infty(\T(y))\le A\sqrt{n}$ for all $y\in \C_1$, since for a transient walk, the number of visits to a given site is bounded by a geometric random variable. Thus, in the expectation of \reff{synth.13}, we bound $l_\infty(\T(y))$ by $A\sqrt{n}$, and $|\C_1|$ by $|\Lambda|$. Provided we can show the existence of a map $f$ with properties \reff{feat-0}, \reff{feat-1} and \reff{feat-2}, we will have proved Proposition~\ref{synth-prop.1}. Sections~\ref{sec-marriage}, \ref{sec-proper} and \ref{sec-improper} are devoted to constructing the map $f$. \subsection{A Marriage Theorem.}\label{sec-marriage} This section deals with {\it global} modifications of circuits. 
For this purpose, we rely on an old {\it Marriage Theorem} (see e.g.~\cite{JUKNA}), which seems to have been first proved by Frobenius~\cite{FROBENIUS} in our setting. Since we rely heavily on this classical result, we quote it for ease of reading. \bt{circuit-th.1}{Frobenius' Theorem.} Let $\G=(G,E)$ be a $k$-regular bipartite graph with bipartition $G_1,G_2$. Then, there is a bijection $\varphi:G_1\to G_2$ such that $\acc{(x,\varphi(x)),\ x\in G_1}\subset E$. \end{theorem} Now, to see how we use Frobenius' Theorem, we need more notation. First, for two integers $n$ and $m$, we define \be{circuit.11} \Omega_{n,m}=\acc{\eta\in \acc{0,1}^{n+m}:\ \sum_{i=1}^{n+m} \eta(i)=n}. \end{equation} Now, when $n>m$, we define the graph $\G_{n,m}= (G_{n,m},E_{n,m})$ with $G_{n,m}=\Omega_{n,m}\cup \Omega_{m,n}$, and \be{circuit.12} E_{n,m}=\acc{(\eta,\zeta)\in \Omega_{n,m}\times \Omega_{m,n} :\ \zeta(x)\le \eta(x),\forall x\le n+m}. \end{equation} The graph $\G_{n,m}$ is a $k$-regular bipartite graph with bipartition $\Omega_{n,m},\Omega_{m,n}$, where $k=\binom{n}{m}$, and Frobenius' Theorem gives us a bijection $\varphi_{n,m}: \Omega_{n,m}\to \Omega_{m,n}$. Thus, under the action of $\varphi_{n,m}$ a 1 can become a 0, but a 0 stays 0. The importance of this feature is explained below in Remark~\ref{rem-frob}. When $n=m$, we call $\varphi_{n,n}$ the identity on $\Omega_{n,n}$. We use Frobenius' Theorem to select pairs of {\it trips} with the same {\it type}, one {\it trip} to $\C$ and one {\it trip} to $\tilde \C$, which are interchanged. Then, we describe how the associated loops are interchanged. However, some {\it patterns} of loops cannot be handled using Frobenius' Theorem, and we call these loops {\it improper}. For ease of notation, we write $\C=\C_1$ and $\tilde \C=\T(\C_1)$. \bd{def-proper} A $\C$-loop is called {\it proper} if it is preceded by a trip from $\Lambda$ to $\C$, and the other $\C$-loops are called {\it improper}.
Similarly, a $\tilde\C$-loop is called {\it proper} if it is preceded by a trip from $\T(\Lambda)$ to $\tilde\C$. \end{definition} We describe in the next two sections how to define a map $f$ satisfying \reff{feat-0}, \reff{feat-1} and \reff{feat-2}. This map only transforms $\C$- and $\tilde \C$-loops. It acts on each {\it proper} loop of a certain {\it type}, say $p$ and $\tilde p$, by a global action that we denote $f_p$. Also, there will be an action $f_i$ on {\it improper} loops which we describe in Section~\ref{sec-improper}. Thus, $f$ is a composition of $\{f_p,p\in \C^2\}$ and $f_i$, taken in the order we wish. Note that for any ${\bf z}\in f(\E({\bf k}))$, we have \be{def-f} |f^{-1}({\bf z})|= \prod_{p\in \C^2}|f_p^{-1}({\bf z})|\times |f_i^{-1}({\bf z})|. \end{equation} Thus, property \reff{feat-1} holds for $f$, if it holds for $f_i$, and for each $f_p$ with $p\in \C^2$. We describe the $\{f_p,p\in \C^2\}$ in Section~\ref{sec-proper}, and $f_i$ in Section~\ref{sec-improper}. \subsection{{\it Proper} Loops.}\label{sec-proper} We fix ${\bf k}\in V(\Lambda',n)$ and ${\bf z}\in \E({\bf k})$. We fix a {\it type} $p=(z,z')\in \C^2$, and we call $\nu(p)$ the number of proper $\C$-loops of type $p$ in ${\bf z}$. Similarly, $\nu(\tilde p)$ is the number of proper $\tilde \C$-loops of type $\tilde p=(\T(z),\T(z'))$. To each {\it type} $p$ corresponds a configuration $\eta_p\in \Omega_{\nu(p),\nu(\tilde p)}$ which encodes the successive occurrences of proper $\C$- and $\tilde \C$-loops of type $p$: a mark 1 for a $\C$-loop and a mark 0 for a $\tilde \C$-loop. Assume that $n:=\nu(p)\ge m:=\nu(\tilde p)$, and $\eta_p\in \Omega_{n,m}$. All $\C$-loops ({\it proper} and of {\it type} $p$) are translated by $\T$, and all $\tilde \C$-loops ({\it proper} and of {\it type} $\tilde p$) are translated by $\T^{-1}$. The bijection $\varphi_{n,m}$ encodes the positions of the translated loops, as follows.
\begin{itemize} \item The $\C$-loop associated with the $i$-th occurrence of a 1 in $\eta_p$ is transformed into a $\tilde\C$-loop associated with the $i$-th occurrence of a 0 in $\varphi_{n,m}(\eta_p)$. \item The $\tilde \C$-loop associated with the $i$-th occurrence of a 0 in $\eta_p$ is transformed into a $\C$-loop associated with the $i$-th occurrence of a 1 in $\varphi_{n,m}(\eta_p)$. \end{itemize} After acting with $f_p$, the number of $\tilde\C$-loops of {\it type} $\tilde p$ increases by $\nu(p)-\nu(\tilde p)\ge 0$. For definiteness, we illustrate this algorithm on a simple example (see Figure~\ref{fig:dessin1}). Assume that circuit ${\bf z}\in \E({\bf k})$ has 3 proper $\C$-loops of type $p$, say $\L_1,\L_2$ and $\L_3$, and 1 proper $\tilde \C$-loop of type $\tilde p$, say $\tilde \L_1$. Let us make visible in ${\bf z}$ only these very loops and the trips joining them: \be{ex-1} {\bf z}:\qquad\dots y_1\L_1y_1'\dots y_2\L_2y_2'\dots y_3\tilde \L_1 y_3'\dots y_4\L_3y_4'\dots , \end{equation} for $\{y_i,y_i',\ i=1,\dots,4\}$ in $\Lambda\backslash\C$. For such a circuit, we would have $\nu(p)=3$ and $\nu(\tilde p)=1$ and $\eta_p=(1101)$. Furthermore, assume that $\varphi_{3,1}(1101)=0100$. Then, the $p,\tilde p$ proper loops are transformed into \be{ex-2} {\bf f_p(z)}: \qquad\dots y_1\T(\L_1)y_1'\dots y_2\T^{-1}(\tilde \L_1)y_2'\dots y_3\T(\L_2) y_3' \dots y_4\T(\L_3)y_4'\dots \end{equation} We end up with 3 $\tilde \C$-loops of type $\tilde p$, $\T(\L_1),\T(\L_2)$ and $\T(\L_3)$, and one $\C$-loop $\T^{-1}(\tilde \L_1)$. Note that in both ${\bf z}$ and ${\bf f_p(z)}$, the second loop (of type $p$ or $\tilde p$) is a $\C$-loop, as required by the Frobenius map $\varphi_{3,1}$. The configuration $z$ in \reff{ex-1} is represented on the left hand side of Figure~\ref{fig:dessin1}, whereas $f_p(z)$ is shown on its right hand side. Note that we put most of the sites $\{y_i,y'_i,i=1,\dots,4\}$ close to $\T(\C)$.
This is the desired feature of $\T$ as established in Lemma~\ref{lem-move}. \begin{figure} \caption{Action of $f$ on proper loops.} \label{fig:dessin1} \end{figure} \br{rem-frob} One implication of the key feature of $\varphi_{n,m}$, namely that $(\eta_p,\varphi_{n,m}(\eta_p)) \in E_{n,m}$, is that a trip $(y,\T(z))$ or $(\T(z'),y')$ is invariant under $f_p$. Note that in Figure~\ref{fig:dessin1}, $(y_3,\T(z))$ and $(\T(z'),y_3')$ are invariant, whereas $(y_1,z)$ becomes $(y_1,\T(z))$ and fortunately $|y_1-\T(z)|\le |y_1-z|$ on the drawing. \end{remark} Note that $f_p$ satisfies \reff{feat-0}. Indeed, if we call $z_p$ the substring of $z$ made up of only sites represented in \reff{ex-1}, and $f_p(z_p)$ the substring of $f_p(z)$ made up of only sites represented in \reff{ex-2}, we have $l_{f_p(z_p)}(x)=l_{z_p}(x)$ for $x\in \Lambda\backslash\C$, \be{primer-1} \forall x\in \C,\quad l_{f_p(z_p)}(\T(x))=l_{z_p}(x),\quad\text{and} \quad l_{f_p(z_p)}(x)=l_{z_p}(\T(x)). \end{equation} Now, we estimate the cost of going from $z_p$ to $f_p(z_p)$. We consider encaged loops as described in Section~\ref{sec-cageloop}. The purpose of having defined {\it types}, and of having {\it encaged} loops, is to permit the following two simple observations, which we deduce from \reff{encage-key} in Remark~\ref{rem-encage}. \be{key1-type} \text{(i)}\qquad \text{Weight}(\tilde \L_1)\text{Weight}(\L_2)= \text{Weight}(\T^{-1}(\tilde \L_1))\text{Weight}(\T(\L_2)), \end{equation} and, if $p=(z,z')\in \C^2$, \be{loop-exchange} \text{(ii)}\qquad \text{Weight}(\L_1) =\frac{P_{y_1}\pare{S_T=z}}{P_{y_1}\pare{S_T=\T(z)}} \frac{P_{z'}\pare{ S_T=y'_1}}{P_{\T(z')} \pare{ S_T=y'_1}}\quad \text{Weight}(\T(\L_1)), \end{equation} and a similar equality linking $\text{Weight}(\L_3)$ and $\text{Weight}(\T(\L_3))$. Thus, the cost of transformation \reff{ex-2} is $C_T^4$, where $C_T$ appears in Lemma~\ref{trans-lem.1}, since only 2 entering trips and 2 exiting trips have been wired differently.
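For small $n$ and $m$, the bijection $\varphi_{n,m}$ produced by Frobenius' Theorem can be exhibited by brute force. The following sketch (Python, purely illustrative; all names are ours, not part of the paper) enumerates $\Omega_{n,m}$ and $\Omega_{m,n}$, builds the edge set \reff{circuit.12}, and finds a perfect matching by exhaustive backtracking.

```python
from itertools import combinations

def omega(n, m):
    """Omega_{n,m}: 0/1 strings of length n+m with exactly n ones."""
    strings = []
    for ones in combinations(range(n + m), n):
        s = [0] * (n + m)
        for i in ones:
            s[i] = 1
        strings.append(tuple(s))
    return strings

def frobenius_matching(n, m):
    """Perfect matching in G_{n,m}: each eta in Omega_{n,m} is paired with
    a zeta in Omega_{m,n} satisfying zeta <= eta componentwise, i.e. a 1
    may become a 0 but a 0 stays 0.  Exhaustive backtracking (small n, m)."""
    left, right = omega(n, m), omega(m, n)
    nbrs = [[j for j, z in enumerate(right)
             if all(z[x] <= e[x] for x in range(n + m))] for e in left]
    used, match = [False] * len(right), [None] * len(left)

    def extend(i):
        if i == len(left):
            return True
        for j in nbrs[i]:
            if not used[j]:
                used[j], match[i] = True, j
                if extend(i + 1):
                    return True
                used[j] = False
        return False

    if extend(0):
        return {left[i]: right[match[i]] for i in range(len(left))}
    return None

# The case of the example above: n = nu(p) = 3, m = nu(p-tilde) = 1.
phi = frobenius_matching(3, 1)
assert phi is not None and len(set(phi.values())) == len(phi)
```

Hall's condition holds in any regular bipartite graph, so the backtracking always succeeds; the componentwise constraint built into `nbrs` is exactly the "a 1 can become a 0, but a 0 stays 0" feature used in Remark~\ref{rem-frob}.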
Now, for any ${\bf z}\in \E({\bf k})$, the number of loops which undergo a transformation is less than the total number of loops, which is bounded by $\epsilon {\sqrt n}$. The maximum cost (maximum over ${\bf z}\in \E({\bf k})$) of such an operation is $2C_T$ to the power $\epsilon {\sqrt n}$. The case (rare but possible) where $\nu(p)< \nu(\tilde p)$ has to be dealt with differently. Indeed, for an arbitrary cluster $\C'$, we cannot transform a trip between $\C'$ and $\tilde \C$ into a trip between $\C'$ and $\C$ at a constant cost, since $\text{dist}(\C',\tilde \C)$ might be much smaller than $\text{dist}(\C',\C)$. We propose that $f_p$ performs the following changes: \begin{itemize} \item Act with $\T$ on all $\C$-loops of {\it type} $p$. \item Act with $\T^{-1}$ only on the first $\nu(p)$ $\tilde \C$-loops of {\it type} $\tilde p$. \item Interchange the positions of the first $\nu(p)$ $\C$-loops with the first $\nu(p)$ $\tilde \C$-loops. \end{itemize} For instance, in the following example, ${\bf z}$ has three $\tilde \C$-loops $\tilde \L_1,\tilde \L_2$ and $\tilde\L_3$ and one $\C$-loop $\L_1$, \[ {\bf z}:\dots y_1\tilde \L_1y_1'\dots y_2\L_1y_2'\dots y_3\tilde\L_2 y_3'\dots y_4\tilde\L_3y_4'. \] Here $\nu(p)=1<\nu(\tilde p)=3$, and we have \be{ex-3} {\bf z}: \longrightarrow f_p({\bf z}):\dots y_1\T(\L_1)y_1'\dots y_2\T^{-1}(\tilde \L_1)y_2'\dots y_3\tilde\L_2 y_3'\dots y_4\tilde\L_3y_4'. \end{equation} In so doing, note that the cost is 1, but instead of \reff{primer-1}, we have \be{primer-2} \forall x\in \C,\quad l_{f_p(z_p)}(\T(x))\ge l_{z_p}(x),\quad\text{and} \quad \forall x\in \C,\quad l_{f_p(z_p)}(x)\le l_{z_p}(\T(x)). \end{equation} Also, we have brought a multiplicity of pre-images.
Indeed, note that the final circuit of \reff{ex-3} could have been obtained, following the rule of \reff{ex-2}, from a circuit ${\bf z'}$ where $\nu(p)\ge \nu(\tilde p)$: \be{ex-4} {\bf z'}:\dots y_1\L_1y_1'\dots y_2\T^{-1}(\L_2')y_2'\dots y_3\tilde \L_1 y_3'\dots y_4\T^{-1}(\L_3')y_4'\dots \longrightarrow f_p({\bf z}). \end{equation} Also, $f_p$ maps a {\it proper} loop into a {\it proper} loop, and a pre-image under $f_p$ has either $\nu(p)\ge \nu(\tilde p)$ or $\nu(p)< \nu(\tilde p)$, so that there are only two possible pre-images. Since this is true for any {\it type}, the number of pre-images under the composition of all the $f_p$ is bounded by 2 to the power $|\C|^2$ (which is the number of {\it types}). Since $\C\subset \Lambda$, whose volume is independent of $n$, the multiplicity is innocuous in this case. \subsection{{\it Improper} Loops.}\label{sec-improper} In this section, we deal with trips in $\C\times \tilde\C\cup \tilde \C\times \C$. The notion of {\it type} is not useful here. We call $f_i$ the action of $f$ on {\it improper} loops. To grasp the need to distinguish {\it proper} loops from {\it improper} loops, assume that we have a trip from a $\T(\C)$-loop to a $\C$-loop. If we could allow the $\C$-loop to become a $\T(\C)$-loop, we could reach a situation with two successive $\T(\C)$-loops linked with no trip. They would merge into one $\T(\C)$-loop by our Definition~\ref{def-trip}. This may dramatically increase the number of pre-images of a given $f({\bf z})$, violating \reff{feat-1}. \begin{figure} \caption{Red and blue loops merging.} \label{fig:dessin2} \end{figure} We illustrate this with a concrete example drawn in Figure~\ref{fig:dessin2}, below. We have considered the same example as in \reff{ex-1}, but now there is a trip from $\tilde \L_1$ to $\L_3$, so that $y'_3$ is in loop $\L_3$ whereas $y_4\in \tilde \L_1$, as shown in Figure~\ref{fig:dessin2}.
If we were to apply the algorithm of Section~\ref{sec-proper}, we would obtain the image shown on the right hand side of Figure~\ref{fig:dessin2}. There, the loops $\T(\L_2)$ and $\T(\L_3)$ (that we obtain in \reff{ex-2}) would have to merge. Consider first a circuit with a string of successive {\it improper loops} of type $p$, such that the number of $\C$-loops matches the number of $\tilde\C$-loops. For instance, assume that the $i$-th $\tilde \C$-loop is {\it improper} and followed by the $j$-th $\C$-loop, and so forth. For definiteness, assume that ${\bf z}$ contains $z_i$ ($i$ for {\it improper}) with \be{improp.1} z_i:=y_1\tilde \L(i)\L(j)\dots \tilde\L(i+k) \L(j+k) y'_1,\quad\text{with}\ k\ge 0, \ \text{ and}\quad y_1,y'_1\not\in \C\cup\tilde \C. \end{equation} Our purpose is to transform such a sequence of alternating $\C$-$\tilde\C$ loops into a similar alternating sequence, such that $f_i(z_i)$ satisfies \reff{feat-0}, \reff{feat-1} and \reff{feat-2}. One constraint is that we cannot, in general, replace the entering and exiting trips, which in turn fixes the order of visits to $\C$ and $\tilde \C$. Indeed, as in the previous section, if $p=(z,z')$ and $|y_1-\T(z)|\ll |y_1-z|$, then we cannot map the trip $(y_1,\T(z))$ to $(y_1,z)$ at a small cost. We propose the following map \be{improp.2} f_i(z_i):=y_1\T(\L(j))\T^{-1}(\tilde \L(i))\dots \T(\L(j+k)) \T^{-1}(\tilde \L(i+k)) y'_1 \end{equation} Note that \reff{primer-1} holds. With an abuse of notation, we represent the probability associated with $f_i(z_i)$ as \be{not-improper} \text{Weight}(f_i(z_i)):=\prod_{l=0}^{k}\text{Weight}(\tilde \L(i+l))\text{Weight}(\L(j+l)), \end{equation} even though we now mean that the trips joining successive journeys between $\C$-$\tilde\C$ or $\tilde\C$-$\C$ are counted only once. Thus, the estimates we need concern {\it trips} joining {\it improper loops} together, in addition to the first entering and the last exiting trip from $\gamma$.
These estimates are the content of Lemma~\ref{improp-lem.1}. The cost $\text{Weight}(z_i)/\text{Weight}(f_i(z_i))$ is bounded by $C_I^{2(k+1)+1}$, where $k+1$ is the number of successive blocks of $\tilde \C$-$\C$ loops. Since the total number of {\it improper} loops of all {\it types} is bounded by $\epsilon{\sqrt n}$, the total cost is negligible in our order of asymptotics. The case where the number of $\C$- and $\tilde \C$-loops does not match is trickier. First, assume that we deal with \be{ex-10} z_i:=y_1\L(i)\tilde \L(j)\dots\L(i+k) y'_1. \end{equation} Here, we have no choice but to replace $z_i$ with \be{ex-11} f_i(z_i):=y_1\T(\L(i))\T^{-1}(\tilde \L(j))\dots\T(\L(i+k)) y'_1. \end{equation} Note that \reff{primer-2} holds. Lastly, consider the case with more $\tilde\C$-loops. For instance, \be{ex-12} z_i:=y_1\tilde\L(i)\L(j)\tilde\L(i+1)\dots\L(j+k-1)\tilde \L(i+k) y'_1. \end{equation} For reasons already mentioned, we cannot map the first $\tilde \C$-loop into a $\C$-loop. We propose to keep the first loop unchanged, and act on the remaining loops, in the following way \be{ex-13} f_i(z_i):=y_1\tilde\L(i)\T^{-1}(\tilde \L(i+1)) \T(\L(j)) \dots \T^{-1}(\tilde \L(i+k)) \T(\L(j+k-1))y'_1. \end{equation} Here, as in \reff{ex-3}, \reff{primer-2} holds, and this choice brings a multiplicity of pre-images. Indeed, $f_i(z_i)$ could have come from \[ z_i':=y_1\T^{-1}(\tilde \L(i))\tilde \L(i+1)\L(j) \dots \tilde \L(i+k) \T(\L(j+k-1)) y'_1\longrightarrow f_i(z_i). \] So, in estimating the number of pre-images of a circuit, we find that it is at most 2 to the power of the number of {\it improper} loops. Now, the maximum number of {\it improper} loops is $\epsilon{\sqrt n}$. Also, the cost of transforming all {\it improper} loops is uniformly bounded by $C_I$ to the power $\epsilon{\sqrt n}$. \section{Renormalizing Time.}\label{sec-time} In this section, we show the following result.
\bp{prop-time.1} For any finite domain $\tilde \Lambda\subset{\mathbb Z}^d$, there are positive constants $\alpha_0$, $\gamma$, such that for any large integer $n$, there is a sequence ${\bf k_n^*}=\acc{k_n^*(z),\ z\in \tilde \Lambda}$ with \be{seq-time} \sum_{z\in \tilde \Lambda} k_n^*(z)\le n,\quad \acc{k_n^*(z)\in [\frac{\sqrt{n}}{A}, A \sqrt{n}], \ z\in \tilde\Lambda},\quad \text{and}\quad \sum_{z\in \tilde\Lambda} k_n^*(z)^2\ge n\xi, \end{equation} such that for any $\alpha>\alpha_0$ \be{time-main} P_0\pare{||\ind_{\tilde \Lambda} l_{\infty}||_2^2\ge n\xi,\quad \tilde \Lambda\subset \D_{\infty}(A,\sqrt n)} \le n^{\gamma} P_0\pare{ l_{\lfloor \alpha \sqrt{n}\rfloor }|_{\tilde\Lambda}= {\bf k_n^*},\ S_{\lfloor \alpha \sqrt{n}\rfloor}=0}. \end{equation} \end{proposition} \begin{proof} We first use a rough upper bound \be{time.2} P_0\pare{||\ind_{\tilde \Lambda} l_{\infty}||_2^2\ge n\xi, \tilde \Lambda\subset \D_{\infty}(A,\sqrt n)}\le \big|\acc{{\bf k_n}\in [\frac{\sqrt{n}}{A}, A {\sqrt n}]^{\tilde\Lambda}}\big| \max_{{\bf k_n}\text{ in }\reff{seq-time}} P(l_{\infty}|_{\tilde\Lambda}={\bf k_n}). \end{equation} We choose a sequence ${\bf k_n^*}$ which maximizes the last term in \reff{time.2}. Then, we decompose $\acc{l_{\infty}|_{\tilde\Lambda}={\bf k_n^*}}$ into all possible circuits in a manner similar to the circuit decomposition of Section~\ref{sec-circuit}: We set $\nu=\sum_{\tilde\Lambda} k_n^*(x)$ (and $\nu\le |\tilde\Lambda| A\sqrt{n}$), and \be{time.3} \E^*=\acc{{\bf z}=(z(1),\dots,z(\nu)) \in \tilde\Lambda^\nu:\ l_{{\bf z}}(x)=k_n^*(x),\ \forall x\in \tilde \Lambda}. \end{equation} Then, if $T=\inf\acc{n\ge 0:\ S_n\in \tilde \Lambda}$, (and $z(0)=0$) \be{time.4} P_0(l_{\infty}|_{\tilde\Lambda}={\bf k_n^*})=\sum_{{\bf z}\in \E^*} \prod_{i=0}^{\nu-1} P_{z(i)}\pare{\tilde T(z({i+1}))=T<\infty}P_{z(\nu)}(T=\infty). 
\end{equation} For a fixed ${\bf z}\in \E^*$, we call $\tau^{(i)}$ the duration of the flight from $z(i-1)$ to $z(i)$ which avoids other sites of $\tilde\Lambda$. Thus, $\tau^{(1)} \overset{\text{law} }{=} \tilde T(z(1)) \ind\{\tilde T(z(1))=T\}$, when restricted to the values $\acc{1,2,\dots}$, and by induction \be{time.5} \tau^{(i)} \overset{\text{law} }{=} \tilde T(z(i))\circ\theta_{\tau^{(i-1)}} \ind\acc{\tilde T(z(i))\circ\theta_{\tau^{(i-1)}}=T\circ\theta_{\tau^{(i-1)}}} \end{equation} If $\TT({\bf z})=\acc{0<\tau^{(i)} <\infty,\forall i=1,\dots,\nu}$, we have \be{time.6} P_0(\TT({\bf z}))= \prod_{i=0}^{\nu-1} P_{z(i)}\pare{\tilde T(z({i+1}))=T<\infty}. \end{equation} Now, we fix ${\bf z}\in \E^*$ such that $P_0(\TT({\bf z}))>0$, and we fix $i<\nu$. For ease of notation, we rename $x=z(i-1)$ and $y=z({i})$. Now, note that $\acc{0<\tau^{(i)}<\infty}$ contributes to \reff{time.6} if $P_x(S_T=y)>0$, or in other words, if there is at least one path going from $x$ to $y$ avoiding other sites of $\tilde \Lambda$. Since $\tilde\Lambda$ has finite diameter, we can choose a self-avoiding path of finite length, and obtain \be{time.7} P_x\pare{T(y)<\tilde T(x)}\ge c_\Lambda(x,y):= P_x(S_T=y,T<\infty)\ge c_\Lambda>0, \end{equation} where $c_\Lambda$ is the minimum of $c_\Lambda(z,z')$ over all $z,z'\in \tilde\Lambda$ with $c_\Lambda(z,z')>0$. Now, note that, when $S_0=y$, \be{time.8} \tilde T(y)\ind_{T(x)<\tilde T(y)<\infty}\le \tilde T(y) \ind_{\tilde T(y)<\infty}. \end{equation} Thus, \be{time.9} \begin{split} E_y\cro{ \tilde T(y) \ind_{\tilde T(y)<\infty}}&\ge E_y\cro{\ind_{T(x)<\tilde T(y)<\infty}\pare{\tilde T(y)\circ\theta_{T(x)}+T(x)}}\\ &= E_y\cro{\ind_{T(x)<\tilde T(y)<\infty} T(x)} +E_y\cro{\ind_{T(x)<\tilde T(y)<\infty}\tilde T(y)\circ\theta_{T(x)}}. \end{split} \end{equation} Now, by the strong Markov property, \be{time.10} E_y\cro{ \tilde T(y) \ind_{\tilde T(y)<\infty}}\ge P_y\pare{ T(x)<\tilde T(y)} E_x\cro{T(y)\ind_{T(y)<\infty}}.
\end{equation} By using translation invariance of the walk and \reff{time.10}, we obtain \be{time.11} E_x\cro{ T(y)\ind_{T(y)=T<\infty}}\le E_x\cro{ T(y)\ind_{T(y)<\infty}}\le \frac{E_0\cro{ \tilde T(0) \ind_{\tilde T(0)<\infty}}}{P_y\pare{ T(x)<\tilde T(y)}}. \end{equation} Now, it is well known that there is a constant $c_d>0$ such that for any integer $k$, $P_0(\tilde T(0) =k)\le c_d/k^{\frac{d}{2}}$, which implies that $E_0\cro{ \tilde T(0) \ind_{\tilde T(0)<\infty}}<\infty$ in $d\ge 5$, and \ba{time.12} E_x\cro{ T(y)|T(y)=T<\infty}&=& \frac{E_x\cro{ T(y)\ind_{T(y)=T<\infty}}}{ P_x(T(y)=T<\infty)}\cr &\le& \frac{E_0\cro{ \tilde T(0)\ind_{\tilde T(0)<\infty}}}{P_x(T(y)=T<\infty) P_y(T(x)<\tilde T(y))}\cr &\le & \frac{E_0\cro{ \tilde T(0) \ind_{\tilde T(0)<\infty}}}{c_\Lambda^2}. \end{eqnarray} When translating \reff{time.12} in terms of the $\acc{\tau^{(i)}}$, we obtain for any $\beta>0$ \be{time.13} P\pare{ \sum_{i=1}^\nu \tau^{(i)}>\beta \nu\big| \TT({\bf z})}\le \frac{E_0\cro{ \sum_{i=1}^\nu \tau^{(i)}|\TT({\bf z})}}{\beta \nu}\le \frac{ E_0\cro{ \tilde T(0) \ind_{\tilde T(0)<\infty}}}{c_\Lambda^2}\times \frac{1}{\beta}. \end{equation} Thus, we can choose $\beta_0$ large enough (independent of ${\bf z}$) so that \be{time.14} P_0\pare{ \sum_{i=1}^\nu \tau^{(i)}>\beta_0 \nu|\TT({\bf z})}\le \frac{1}{2}. \end{equation} We use now \[ P_0(\TT({\bf z}))=P_0\pare{ \sum_{i=1}^\nu \tau^{(i)}>\beta_0 \nu|\TT({\bf z})}P_0(\TT({\bf z}))+ P_0\pare{ \sum_{i=1}^\nu \tau^{(i)}\le \beta_0 \nu|\TT({\bf z})}P_0(\TT({\bf z})), \] to conclude that \be{time.15} P_0(\TT({\bf z}))\le 2 P_0\pare{\{\sum_{i=1}^\nu \tau^{(i)}\le \beta_0 \nu\}\cap \TT({\bf z})}. \end{equation} Now, there is $\alpha_0$ such that $\beta_0 \nu\le \alpha_0 \sqrt{n}$. Also, note that there is $n_0$ such that for any $z(\nu)\in \tilde \Lambda$, there is a path of length $n_0$ joining $z(\nu)$ to 0. 
Now, fix $\alpha>2\alpha_0$, take $n$ large enough so that $\lfloor \alpha \sqrt{n} \rfloor\ge \lfloor \alpha_0 \sqrt{ n}\rfloor+ n_0$, and use classical estimates on return probabilities, to obtain that for a constant $C_d$ \be{time.30} P_0(\TT({\bf z}))\le C_d (\alpha n)^{d/2}\!\! \sum_{\nu\le k\le \beta_0\nu} \!\! P_0\big(\{\sum_{i=1}^\nu \tau^{(i)}=k\} \cap \TT({\bf z})\big) P_{z(\nu)}(S_{n_0}=0) P_0(S_{\lfloor \alpha \sqrt{n}\rfloor -(k+n_0)}=0). \end{equation} After summing over ${\bf z}\in \E^*$, we obtain for any $\alpha>2\alpha_0$ \be{time.31} \sum_{{\bf z}\in \E^*}P_0(\TT({\bf z}))\le C_d (\alpha n)^{d/2} P_0\pare{ ||\ind_{\tilde \Lambda} l_{\lfloor \alpha \sqrt{n}\rfloor }||_2^2\ge n \xi, S_{\lfloor \alpha \sqrt{ n}\rfloor }=0}. \end{equation} Note that another power of $n$ arises from the term in \reff{time.2}, yielding the desired result. \end{proof} \section{Existence of a Limit.}\label{sec-exist} We keep the notation of Section~\ref{sec-time}. We reformulate Proposition~\ref{prop-time.1} as follows. For any finite domain $\tilde \Lambda\subset{\mathbb Z}^d$, there are positive constants $\alpha_0$, $\gamma$, such that for any $\alpha>\alpha_0$, and $n$ large, \be{start-main} P_0\pare{||\ind_{\tilde \Lambda} l_{\infty}||_2^2\ge n\xi,\quad \tilde \Lambda\subset \D_{\infty}(A,\sqrt n)} \le n^{\gamma} P_0\pare{||\ind_{\tilde \Lambda} l_{\lfloor \alpha \sqrt{n}\rfloor } ||_2\ge \sqrt{n\xi},\ S_{\lfloor \alpha \sqrt{n}\rfloor }=0}. \end{equation} Thus, \reff{start-main} is the starting point in this section. \subsection{A Subadditive Argument.}\label{sec-subadditive} We consider a fixed region $\Lambda\ni 0$, and first show the following lemma. \bl{lem-sub.1} Let $q>1$. For any $\xi>0$ and $\Lambda$ finite subset of ${\mathbb Z}^d$, the following limit exists \be{lawler.9} \lim_{n\to\infty} \frac{\log(P_0(||\ind_{\Lambda}l_n||_{q}\ge n\xi, \ S_n=0))}{n}= -I(\xi,\Lambda).
\end{equation} \end{lemma} \begin{proof} We fix two integers $K$ and $n$, with $K$ to be taken first to infinity. Let $m,r$ be integers such that $K=mn+r$, and $0\le r<n$. The phenomenon behind the subadditive argument is that \be{lawler.1} \A_K(\xi,\Lambda)=\acc{ ||\ind_{\Lambda}l_K||_q\ge K\xi, \ S_K=0} \end{equation} is built by concatenating the {\it same} optimal scenario realizing $\A_n(\xi,\Lambda)$ on $m$ consecutive time-periods of length $n$, and one last time-period of length $r$ where the scenario is necessarily special and its cost innocuous. The crucial independence between the different periods is obtained as we force the walk to return to the origin at the end of each time period. Our first step is to exhibit an optimal strategy realizing $\A_n(\xi,\Lambda)$. By optimizing over a finite number of variables $\{k_n(x),x\in \Lambda\}$ satisfying \be{lawler.2} \sum_{x\in \Lambda} k_n(x)^q \ge (n\xi)^q, \quad\text{and}\quad \sum_{x\in \Lambda} k_n(x)\le n, \end{equation} there is a sequence ${\bf k_n^*}:=\acc{k_n^*(x),x\in \Lambda}$ and $\gamma>0$ (both depending on $\Lambda$) such that \be{lawler.3} P_0\pare{\A_n(\xi,\Lambda)}\le n^\gamma P_0\pare{\A_n^*(\xi,\Lambda)},\quad\text{with}\quad \A_n^*(\xi,\Lambda)=\acc{l_n|_{\Lambda} ={\bf k_n^*},\ S_n=0}. \end{equation} Let $z^*\in \Lambda$ be the site where ${\bf k_n^*}$ reaches its maximum. We start with the case $z^*=0$, and postpone the case $z^*\not= 0$ to Remark~\ref{rem-z}. When $z^*=0$, for any integer $r$, we call \be{lawler.4} \RR_r=\acc{l_r(0)=r},\quad \text{and note that}\quad P_0(\RR_r)= P_0(S_1=0)^{r-1}>0. \end{equation} Now, denote by $\A_n^{(1)},\dots,\A_n^{(m)}$ $m$ independent copies of $\A_n^*(\xi,\Lambda)$ which we realize on the successive increments of the random walk \[ \forall i=1,\dots,m,\quad \A_n^{(i)}=\acc{ l_{[(i-1)n,in[}|_{\Lambda}={\bf k_n^*},\ S_{in}=0}.
\] Make a copy of $\RR_r$ independent of $\A_n^{(1)},\dots,\A_n^{(m)}$, by using increments after time $nm$: that is $\RR_r=\{S_j=0,\ \forall j\in [nm,K[\}$. Note that by independence \be{lawler.6} \begin{split} P_0\pare{\A_n(\xi,\Lambda)}^m P_0(\RR_r)\le& (n^\gamma)^m P_0(\A_n^{(1)})\dots P_0(\A_n^{(m)})P_0(\RR_r)\\ \le &(n^\gamma)^m P_0\big(\bigcap_{j=1}^m \A_n^{(j)}\cap \RR_r\big). \end{split} \end{equation} Now, the local times are positive, so that \[ \begin{split} \bigcap_{i=1}^m&\acc{ l_{[(i-1)n,in[}|_{\Lambda}={\bf k_n^*},\ S_{in}=0}\cap \acc{l_{[mn,K[}(0)=r}\\ &\subset \acc{\sum_{x\in \Lambda} \cro{\sum_{i=1}^m l_{[(i-1)n,in[}(x)+\ l_{[mn,K[}(x)}^q \ge \sum_{x\in \Lambda}\pare{mk_n^*(x)+r\delta_{0}(x)}^q,S_{K}=0}. \end{split} \] At this point, observe the following fact whose simple inductive proof we omit: for $q>1$, for $\varphi$ and $\psi$ positive functions on $\Lambda$, and for $z^*\in \Lambda$ with $\varphi(z^*)=\max \varphi$, \be{fact-key} (\varphi(z^*)+\sum_{z\in \Lambda}\psi(z))^q+\sum_{z\not= z^*} \varphi(z)^q \ge \sum_{z\in\Lambda} \pare{\varphi(z)+\psi(z)}^q. \end{equation} \reff{fact-key} implies that for any integer $m$ \ba{lawler.5} \sum_{x\in \Lambda} \pare{ mk_n^*(x)+r\delta_{z^*}(x)}^q&\ge & \sum_{x\in \Lambda} \pare{ mk_n^*(x)+\frac{r}{n}k_n^*(x)}^q\cr &=& (m+\frac{r}{n})^q\sum_{x\in \Lambda} k_n^*(x)^q \ge (mn+r)^q \xi^q=(K\xi)^q. \end{eqnarray} Using \reff{lawler.5}, \reff{lawler.6} yields \be{lawler.6bis} P_0\pare{\A_n(\xi,\Lambda)}^m P_0(\RR_r)\le (n^\gamma)^m P_0\pare{ ||\ind_{\Lambda} l_K||_q\ge K \xi,\ S_{K}=0} \le (n^\gamma)^m P_0\pare{\A_K(\xi,\Lambda)}. \end{equation} We now take the logarithm on each side of \reff{lawler.6bis} \be{lawler.7} \frac{nm}{nm+r} \frac{\log(P_0(\A_n(\xi,\Lambda)))}{n}+ \frac{\log(P_0(\RR_r))}{K}\le \frac{m(\log(n^\gamma))}{nm+r}+\frac{\log(P_0(\A_K(\xi,\Lambda)))}{K}. \end{equation} We now take the limit $K\to\infty$ while $n$ is kept fixed (i.e.
$m\to\infty$) so that \be{lawler.8} \frac{\log(P_0(\A_n(\xi,\Lambda)))}{n}\le \frac{\log(n^\gamma)}{n} +\liminf_{K\to\infty} \frac{\log(P_0(\A_K(\xi,\Lambda)))}{K}. \end{equation} By taking the limit sup in \reff{lawler.8} as $n\to\infty$, we conclude that the limit in \reff{lawler.9} exists. \br{rem-z} We treat here the case $z^*\not= 0$. In this case, we cannot consider $\RR_r$ since to use \reff{fact-key}, we would need the walk to start on site $z^*$, whereas each period of length $n$ sees the walk returning to the origin. Note that this problem is related to the strategy on a single time-period of length $r$. The remedy is simple: we insert a period of length $r$ into the first time-period of length $n$ at the first time the walk hits $z^*$; then, the walk stays at $z^*$ for $r-1$ steps. In other words, let $\tau^*=\inf\{n\ge 0:\ S_n=z^*\}$, $\RR_r^*=\{l_r(z^*)=r\}$ and note that \be{sub.41} \begin{split} P_0(\A_n^{(1)})P_{z^*}(\RR_r^*)=&\sum_{i=1}^n P_0(\A_n^{(1)},\tau^*=i) P_{z^*}(l_r(z^*)=r)\\ \le & P_0\pare{ l_{[0,n+r[}|_{\Lambda}={\bf k_n^*}+r\delta_{z^*}}. \end{split} \end{equation} Note that $P_{z^*}(\RR_r^*)=P_0(\RR_r)$, and \[ \bigcap_{j=1}^m \A_n^{(j)}\subset \acc{\sum_{x\in \Lambda} \cro{l_{[0,n+r[}(x)+\sum_{i=2}^m l_{[(i-1)n,in[}(x)}^q \ge \sum_{x\in \Lambda}\pare{mk_n^*(x)+r\delta_{z^*}(x)}^q,S_{K}=0}. \] We can now resume the proof of the case $z^*=0$ at step \reff{lawler.5}. \end{remark} \end{proof} \subsection{Lower Bound in Proposition~\ref{prop-sub}.}\label{sec-LB} We prove here the lower bound of \reff{lawler.10}. Let $t_n$ be the integer part of $\alpha\sqrt{n}$, and consider the following {\it scenario} \be{lower.1} \S_n(\Lambda,\alpha,\epsilon):= \acc{ ||\ind_\Lambda l_{[0,t_n[}||_2^2\ge n\xi(1+\epsilon),\ S_{t_n}=0} \cap \acc{||l_{[t_n,n[}||_2^2 -E_0\cro{||l_n||_2^2}\ge n\xi(1-\epsilon)}. \end{equation} Note that $\S_n(\Lambda,\alpha,\epsilon) \subset \{\overline{||l_n||_2^2}\ge n\xi\}$.
Indeed, note that for any $\beta\ge 1$, and $a,b>0$ we have $a^\beta+b^\beta\le (a+b)^\beta$. Thus, for any $x\in {\mathbb Z}^d$ \be{lower.3} l^2_{[0,t_n[}(x)+ l^2_{[t_n,n[}(x)\le l^2_n(x), \end{equation} and we obtain on $\S_n(\Lambda,\alpha,\epsilon)$ \be{lower.4} E_0\cro{||l_n||_2^2}+n\xi\le \sum_{x\in \Lambda} l^2_{[0,t_n[}(x)+ \sum_{x\in {\mathbb Z}^d} l^2_{[t_n,n[}(x) \le ||l_n||_2^2. \end{equation} Note that $||\ind_\Lambda l_{[0,t_n[}||_2$ and $S_{t_n}=0$ only depend on the increments of the random walk in the time period $[0,t_n[$, whereas $||l_{[t_n,n[}||_2$ depends on the increments in $[t_n,n[$. Thus, \be{lower.5} \begin{split} P\pare{\S_n(\Lambda,\alpha,\epsilon)} =&P_0\pare{ ||\ind_\Lambda l_{[0,t_n[}||_2^2\ge n\xi(1+\epsilon),\ S_{t_n}=0} \\ &\times P_0\pare{||l_{[t_n,n[}||_2^2 -E_0\cro{||l_n||_2^2}\ge n\xi(1-\epsilon)}. \end{split} \end{equation} Now, since $\frac{1}{n} ||l_n||_2^2$ converges in $L^1$ towards $\gamma_d$, we have $E_0[||l_n||_2^2]\le n\gamma_d(1+\epsilon/2)$ for $n$ large enough, and we have \be{lower.6} P_0\pare{||l_{[t_n,n[}||_2^2 -E_0\cro{||l_n||_2^2}\ge n\xi(1-\epsilon)}\le P_0\pare{\frac{||l_{[0,n-t_n[}||_2^2}{n-t_n} \ge \frac{\gamma_d-\frac{\epsilon}{2} \xi}{1-\frac{t_n}{n}}}\longrightarrow 1. \end{equation} \br{rem-LBalpha} Note that for any $\Lambda$ finite subset of ${\mathbb Z}^d$, any $\beta>0$ and $\epsilon>0$ small, we have for $\chi<\zeta<1$, and $n$ large enough \be{lower.8} \acc{||\ind_\Lambda l_{\lfloor \beta n^\zeta\rfloor } ||_{\alpha^*}\ge \xi n^\zeta(1+\epsilon),\ S_{\lfloor \beta n^\zeta\rfloor}=0} \subset \acc{||\ind_{\bar \D_n(\chi)} l_n||_{\alpha^*}\ge \xi n^\zeta}. \end{equation} \end{remark} \subsection{Proof of Theorem~\ref{intro-th.1}}\label{sec-ergodic} First, the upper bound of Proposition~\ref{prop-sub} follows after combining inequalities \reff{circuit.2}, \reff{circuit.14}, \reff{synth.1} and \reff{start-main}. The lower bound of Proposition~\ref{prop-sub} is shown in the previous section. 
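Before assembling the limits, we note that the elementary inequality \reff{fact-key}, which drives the subadditive argument of Section~\ref{sec-subadditive}, is easy to test numerically. A minimal sketch (Python; random trials, all names are ours and purely illustrative):

```python
import random

def fact_key_gap(phi, psi, q):
    """LHS minus RHS of the 'fact-key' inequality: with phi(z*) = max(phi),
    (phi(z*) + sum(psi))^q + sum_{z != z*} phi(z)^q >= sum_z (phi(z)+psi(z))^q."""
    zs = max(range(len(phi)), key=phi.__getitem__)
    lhs = (phi[zs] + sum(psi)) ** q + sum(
        v ** q for i, v in enumerate(phi) if i != zs)
    rhs = sum((a + b) ** q for a, b in zip(phi, psi))
    return lhs - rhs

random.seed(1)
for _ in range(2000):
    size = random.randint(1, 8)
    phi = [random.uniform(0.0, 10.0) for _ in range(size)]
    psi = [random.uniform(0.0, 10.0) for _ in range(size)]
    q = 1.0 + 2.0 * random.random()   # any q > 1
    assert fact_key_gap(phi, psi, q) >= -1e-6   # small tolerance for rounding
```

The inequality holds because, for $q>1$, the increment $a\mapsto (a+h)^q-a^q$ is nondecreasing in $a$, so moving all of the mass of $\psi$ onto the maximal coordinate $z^*$ can only increase the sum.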
Then, we invoke Lemma~\ref{lem-sub.1} with $q=2$, we take the logarithm on each side of \reff{lawler.10}, we normalize by $\sqrt{n}$, and let $n$ tend to infinity. We obtain that for any $\epsilon>0$, there are $\alpha_\epsilon$ and $\Lambda_\epsilon$ such that for $\Lambda,\Lambda'\supset\Lambda_\epsilon$, and $\alpha,\alpha'>\alpha_\epsilon$ \be{lawler.key1} \begin{split} -\alpha' \ I\big(\frac{\sqrt{\xi(1+\epsilon)}}{\alpha'}&,\Lambda'\big)\le \liminf_{n\to\infty} \frac{\log\pare{ P_0(\overline{||l_n||^2_2}\ge n\xi)}}{\sqrt n}\\ &\le \limsup_{n\to\infty} \frac{\log\pare{ P_0(\overline{||l_n||_2^2}\ge n\xi)}}{\sqrt n} \le -\alpha \ I\pare{\frac{\sqrt {\xi(1-\epsilon)}}{\alpha},\Lambda}+C\epsilon. \end{split} \end{equation} By using \reff{lawler.key1}, we obtain for any $\Lambda,\Lambda'\supset \Lambda_\epsilon$, and $\alpha,\alpha'>\alpha_\epsilon$ \be{lawler.11} \frac{\alpha'}{\sqrt{\xi(1+\epsilon)}} I\pare{\frac{\sqrt{\xi(1+\epsilon)}}{\alpha'},\Lambda'}\ge \sqrt{\frac{1-\epsilon}{1+\epsilon}} \frac{\alpha}{\sqrt{\xi(1-\epsilon)}} I\pare{ \frac{\sqrt{\xi(1-\epsilon)}}{\alpha},\Lambda}-\frac{C\epsilon}{\sqrt{\xi(1+\epsilon)}}. \end{equation} Thus, if we call $\varphi(x,\Lambda)=I(x,\Lambda)/x$, we have: $\forall \epsilon>0$, there are $x_\epsilon,\Lambda_\epsilon$ such that for $x,x'<x_\epsilon$ and $\Lambda,\Lambda'\supset \Lambda_\epsilon$ \be{lawler.12} \varphi(x',\Lambda')\ge \sqrt{\frac{1-\epsilon}{1+\epsilon}} \quad\varphi(x,\Lambda)- \frac{C\epsilon}{\sqrt{\xi(1+\epsilon)}}. \end{equation} By taking the limit $\Lambda'\nearrow{\mathbb Z}^d$, $x'\to 0$, and then $\Lambda\nearrow{\mathbb Z}^d$ and $x\to 0$, we obtain, for any $\epsilon>0$, \be{lawler.13} \liminf_{\Lambda\nearrow{\mathbb Z}^d, x\to 0} \varphi(x,\Lambda)\ge \sqrt{\frac{1-\epsilon}{1+\epsilon}} \quad\limsup_{\Lambda\nearrow{\mathbb Z}^d, x\to 0} \varphi(x,\Lambda)-\frac{C\epsilon}{\sqrt{\xi(1+\epsilon)}}.
\end{equation} Since \reff{lawler.13} holds for $\epsilon>0$ arbitrarily small, the limit of $\varphi(x,\Lambda)$ exists as $x$ goes to $0$ and $\Lambda$ increases toward ${\mathbb Z}^d$. We call this limit $\I(2)$, where the label 2 stresses that we are dealing with the $l^2$-norm of the local times. Now, recall that the result of \cite{AC05} (see Lemma~\ref{level-lem.1}) says that there are two positive constants $\underline{c},\bar c$ such that for $x$ small enough $\underline{c}\le I(x,\Lambda)/x\le \bar c$, which together with \reff{lawler.13} imply $0<\underline{c}\le \I(2)\le \bar c<\infty$. Now, using \reff{lawler.12} again, we obtain \be{lawler.14} \alpha I\pare{\frac{\sqrt{\xi(1+\epsilon)}}{\alpha},\Lambda}\le \frac{1+\epsilon} {\sqrt{1-\epsilon}}\ \I(2)\sqrt{\xi}+C\epsilon \sqrt{\frac{1-\epsilon}{1+\epsilon}} , \end{equation} and, \be{lawler.15} \alpha I\pare{\frac{\sqrt{\xi(1-\epsilon)}}{\alpha},\Lambda}\ge \frac{1-\epsilon}{\sqrt{1+\epsilon}}\ \I(2)\sqrt{\xi}-C\epsilon \sqrt{\frac{1-\epsilon}{1+\epsilon}}. \end{equation} This establishes the Large Deviations Principle \reff{intro.4} as $\epsilon$ is sent to zero. \qed \noindent{\bf Proof of Proposition~\ref{prop-alpha}} Looking at the proof of Theorem~\ref{intro-th.1}, we notice that the only special feature of $\{\overline{||l_n||^2_2}\ge n\xi\}$ which we used was that the excess self-intersection is realized on a {\bf finite} set $\D_n(A,\sqrt n)$. Similarly, when considering $\{||\ind_{\bar \D_n(n^b)} l_n||_{\alpha^*}\ge \xi n^\zeta\}$, inequality \reff{level.20} of Lemma~\ref{level-lem.3} ensures that our large deviation is realized on $\D_n(A,n^\zeta)$, and, by \reff{level.21}, we make only a negligible error when it is not finite. Thus, our key steps work in this case as well: {\it circuit surgery}, {\it renormalizing time}, and the {\it subadditive argument}. Moreover, by Remark~\ref{rem-LBalpha}, the lower bound follows as well.
Instead of \reff{lawler.10}, we would have that there is a constant $\beta$ such that for any $\epsilon>0$, there are a set $\tilde \Lambda$ of finite diameter and $a_0>0$ such that for finite $\Lambda\supset \tilde\Lambda$ and $a\ge a_0$, \be{lawler.30} \begin{split} P_0&\pare{||\ind_{\Lambda}l_{\lfloor a n^\zeta\rfloor}||_{\alpha^*} \ge \xi(1+\epsilon) n^\zeta, S_{\lfloor a n^\zeta\rfloor}=0 } \le P_0\pare{||\ind_{\bar \D_n(n^b)} l_n||_{\alpha^*}\ge \xi n^\zeta}\\ &\le e^{\beta\epsilon n^\zeta} P_0\pare{||\ind_{\Lambda} l_{\lfloor a n^\zeta\rfloor}||_{\alpha^*}\ge \xi(1-\epsilon) n^\zeta, S_{\lfloor a n^\zeta\rfloor}=0}. \end{split} \end{equation} Following the last step of the proof of Theorem~\ref{intro-th.1}, we prove Proposition~\ref{prop-alpha}. \qed \section{On Mutual Intersections.}\label{sec-KMSS} \subsection{Proof of Proposition~\ref{prop-CM}.} Proposition~\ref{prop-CM} is based on the idea that $\langle l_\infty,\tilde l_\infty\rangle$ is not {\it critical}, in the sense that even when giving {\it less weight} to the intersection local times, the strategy remains the same. In other words, define for $1<q\le 2$ \be{sinai.1} \zeta(q)=\sum_{z\in {\mathbb Z}^d} l_\infty(z)\tilde l_\infty^{q-1} (z). \end{equation} Then, we have the following lemma, which is of independent interest. \bl{lem-amine} Assume that $d\ge 5$. For any $2\ge q>\frac{d}{d-2}$, there is $\kappa_q>0$ such that \be{sinai.2} {\mathbb P}\pare{\zeta(q)>t}\le \exp(-\kappa_q t^{\frac{1}{q}}). \end{equation} \end{lemma} We prove Lemma~\ref{lem-amine} in the next section. Proposition~\ref{prop-CM} follows easily from Lemma~\ref{lem-amine}.
Indeed, if $\D(\xi)=\{z: \min(l_\infty(z),\tilde l_\infty(z))<\xi\}$ and $q<2$, then \be{sinai.20} \begin{split} \acc{\bra{\ind_{\D(\epsilon \sqrt t)} l_\infty,\tilde l_\infty}>t}\subset& \acc{\sum_{l_\infty(z)\le \frac{\sqrt t}{A}} l_\infty(z)^{q-1}\tilde l_\infty(z)> \frac{t}{2}\pare{\frac{A}{\sqrt t}}^{2-q}}\\ &\cup \acc{\sum_{\tilde l_\infty(z)\le \frac{\sqrt t}{A}} l_\infty(z)\tilde l_\infty(z)^{q-1}> \frac{t}{2}\pare{\frac{A}{\sqrt t}}^{2-q}}. \end{split} \end{equation} Then, since $1>\frac{2-q}{2}$, Lemma~\ref{lem-amine} applied to \reff{sinai.20} implies that for large $t$ \be{sinai.11} {\mathbb P}\pare{\bra{\ind_{\D(\epsilon \sqrt t)} l_\infty,\tilde l_\infty}>t}\le 2\exp\pare{ -\kappa_d A^{\frac{2-q}{q}} t^{1/2}},\quad\text{since }\quad \frac{1}{q}(1-\frac{2-q}{2})=\frac{1}{2}. \end{equation} \subsection{Proof of Lemma~\ref{lem-amine}.} We assume $d\ge 5$. Lemma~\ref{lem-amine} can be thought of as an interpolation inequality between Lemma 1 and Lemma 2 of \cite{KMSS}, whose proofs follow a classical pattern (in statistical physics) of estimating all moments of $\zeta(q)$. This control is possible since all quantities are expressed in terms of iterates of the Green's function, whose asymptotics are well known (see for instance Theorem 1.5.4 of \cite{LAWLER}). From \cite{KMSS}, it is enough to establish, for a positive constant $C_q$, the following control on the moments \be{sinai.3} \forall n\in {\mathbb N},\qquad {\mathbb E}[\zeta(q)^n]\le C_q^n (n!)^q.
\end{equation} First, noting that $q-1\le 1$, we use Jensen's inequality in the last inequality \ba{sinai.4} {\mathbb E}[\zeta(q)^n]&\le & \sum_{z_1,\dots,z_n\in {\mathbb Z}^d} E_0\cro{ \prod_{i=1}^n l_\infty(z_i)} E_0\cro{\prod_{i=1}^n l_\infty(z_i)^{q-1}}\cr &\le & \sum_{z_1,\dots,z_n\in {\mathbb Z}^d} \pare{E_0\cro{\prod_{i=1}^n l_\infty(z_i)}}^{q}. \end{eqnarray} If $\S_n$ is the set of permutations of $\acc{1,\dots,n}$ (with the convention that for $\pi\in \S_n$, $\pi(0)=0$) we have \be{sinai.5} \begin{split} E\cro{\prod_{i=1}^nl_\infty(z_i)}= & \sum_{s_1,\dots,s_n\in {\mathbb N}} P_0(S_{s_i}=z_i,\ \forall i=1,\dots,n)\\ \le& \sum_{\pi\in \S_n} \sum_{s_1\le s_2\le\dots\le s_n\in {\mathbb N}} P_0(S_{s_i}=z_{\pi(i)},\ \forall i=1,\dots,n)\\ \le & \sum_{\pi\in \S_n} \prod_{i=1}^n G_d\pare{z_{\pi(i-1)},z_{\pi(i)}} . \end{split} \end{equation} Now, by H\"older's inequality \be{sinai.9} \begin{split} \sum_{z_1,\dots,z_n}\pare{\sum_{\pi\in \S_n} \prod_{i=1}^n G_d\pare{z_{\pi(i-1)},z_{\pi(i)}}}^q&\le \sum_{z_1,\dots,z_n} (n!)^{q-1} \sum_{\pi\in \S_n} \prod_{i=1}^n G_d\pare{z_{\pi(i-1)},z_{\pi(i)}}^q\\ &= (n!)^q \sum_{z_1,\dots,z_n} \prod_{i=1}^n G_d\pare{z_{i-1},z_{i}}^q. \end{split} \end{equation} By classical estimates for the Green's function, \reff{sinai.9} implies that \be{sinai.10} \begin{split} \sum_{z_1,\dots,z_n\in {\mathbb Z}^d} \pare{E_0\cro{\prod_{i=1}^n l_\infty(z_i)}}^{q}\le& (n!)^q C^n \sum_{z_1,\dots,z_n} \prod_{i=1}^n (1+||z_i-z_{i-1}||)^{q(2-d)}\\ \le & (n!)^q C^n \pare{ \sum_{z\in {\mathbb Z}^d} (1+||z||)^{q(2-d)}}^n. \end{split} \end{equation} Thus, when $d\ge 5$ and $q>\frac{d}{d-2}$, we have a constant $C_q>0$ such that \be{sinai.7} {\mathbb E}[\zeta(q)^n]\le C_q^n(n!)^q. \end{equation} The proof now concludes by routine considerations (see e.g. \cite{KMSS} or \cite{chen-morters}).
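As a sanity check on the exponent (a standard comparison, recorded here for convenience), the last sum in \reff{sinai.10} is finite precisely under our hypothesis on $q$:
\[
\sum_{z\in {\mathbb Z}^d} (1+||z||)^{q(2-d)}<\infty
\Longleftrightarrow \int_1^\infty r^{d-1+q(2-d)}\,dr<\infty
\Longleftrightarrow q(d-2)>d
\Longleftrightarrow q>\frac{d}{d-2}.
\]
Moreover, $\frac{d}{d-2}<2$ if and only if $d>4$, which is where the assumption $d\ge 5$ enters.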
\subsection{Identification of the rate function \reff{identification}.} \label{sec-ident} The main observation is that the proof of Theorem~\ref{intro-th.1} also yields \be{old.6} \lim_{\Lambda\nearrow{\mathbb Z}^d}\lim_{n\to\infty} \frac{1}{\sqrt n} \log P\pare{ ||\ind_{\Lambda} l_\infty||_2^2> n}=-\I(2). \end{equation} Indeed, in order to use our subadditive argument, Lemma~\ref{lem-sub.1}, we first need to show that for some $\gamma>0$, for any $\alpha$ large enough, and for $n$ large enough \be{lawler.10bis} \begin{split} P_0\big(||\ind_\Lambda&l_{\lfloor\alpha\sqrt{n}\rfloor} ||_2^2\ge n,\ S_{\lfloor\alpha\sqrt{n}\rfloor}=0\big)\\ &\le P\pare{ ||\ind_{\Lambda} l_\infty||_2^2> n} \le n^\gamma P_0\pare{||\ind_\Lambda l_{\lfloor\alpha\sqrt{n}\rfloor} ||_2^2\ge n,\ S_{\lfloor\alpha\sqrt{n}\rfloor}=0}. \end{split} \end{equation} The upper bound in \reff{lawler.10bis} is obtained from Proposition~\ref{prop-time.1}, whereas the lower bound is immediate. Now, we proceed with the link with intersection local times. First, as mentioned in \reff{CM-finite}, Chen and M\"orters also prove that for any finite $\Lambda\subset {\mathbb Z}^d$ \[ \lim_{n\to\infty}\frac{1}{n^{1/2}}\log P\pare{ \bra{\ind_{\Lambda} l_\infty, \tilde l_\infty}>n}=-2I_{CM}(\Lambda), \] with $I_{CM}(\Lambda)$ converging to $I_{CM}$ as $\Lambda$ increases to cover ${\mathbb Z}^d$. The important feature is that for any fixed $\epsilon>0$, we can fix a finite subset $\Lambda$ of ${\mathbb Z}^d$ such that $|I_{CM}(\Lambda)-I_{CM}|\le \epsilon$. Note now that by the Cauchy-Schwarz inequality, for any finite set $\Lambda$ \be{old.7} \bra{\ind_{\Lambda}l_\infty,\tilde l_\infty}\le ||\ind_{\Lambda}l_\infty||_2 ||\ind_{\Lambda}\tilde l_\infty||_2.
\end{equation} Inequalities \reff{old.6} and \reff{old.7} imply by routine considerations that \be{old.8} \limsup_{\Lambda\nearrow{\mathbb Z}^d} \limsup_{n\to\infty} \frac{1}{\sqrt n} \log P\pare{\bra{1_\Lambda l_{\infty},\tilde l_\infty}>n} \le -\I(2)\inf_{\alpha>0}\acc{{\sqrt \alpha}+\frac{1}{{\sqrt \alpha}}}=-2\I(2), \end{equation} the infimum being attained at $\alpha=1$. When ${\bf k_n^*}$ is the sequence which enters the definition of $\A_n^*(1,\Lambda)$ in \reff{lawler.3} (see also \reff{lawler.2}), we have the lower bound \be{old.9} P\pare{ \bra{\ind_{\Lambda}l_\infty,\tilde l_\infty}>n} \ge P\pare{ l_{\lfloor \alpha\sqrt n\rfloor}|_{\Lambda}= {\bf k^*_n}|_{\Lambda}, S_{\lfloor \alpha\sqrt n\rfloor}=0}^2. \end{equation} Following the same argument as in Section~\ref{sec-ergodic}, we have \be{old.10} \liminf_{n\to\infty} \frac{1}{\sqrt n} \log P\pare{\bra{l_{\infty},\tilde l_\infty}>n} \ge -2\I(2). \end{equation} Inequalities \reff{old.8} and \reff{old.10} conclude the proof of \reff{identification}. \section{Applications to RWRS.} \label{sec-rwrs} We consider a certain range of parameters $\acc{(\alpha,\beta): 1<\alpha<\frac{d}{2}, 1-\frac{1}{\alpha+2}<\beta<1+\frac{1}{\alpha}}$, which we have called Region II in \cite{AC05}. Also, if $\Gamma(x)=\log(E[\exp(x \eta(0))])$, then there are positive constants $\Gamma_0$ and $\Gamma_{\infty}$ (see \cite{AC05}) such that \be{rwrs.2} \lim_{x\to 0} \frac{\Gamma(x)}{x^2}=\Gamma_0,\quad\text{and}\quad \lim_{x\to \infty} \frac{\Gamma(x)}{x^{\alpha^*}}=\Gamma_{\infty},\quad\text{where}\quad \frac{1}{\alpha}+\frac{1}{\alpha^*}=1. \end{equation} A classical way of obtaining large deviations is through exponential bounds for ${\mathbb P}(\bra{\eta,l_n}\ge y n^\beta)$.
For instance, if we expect the latter quantity to be of order $\exp(-c n^\zeta)$, then a first attempt would be to optimize over $\lambda>0$, with $b=\beta-\zeta$, in the following \ba{tent.1} {\mathbb P}\pare{\bra{\eta,l_n}\ge y n^\beta}&\le & e^{-\lambda n^{\beta-b}}E[ \exp\pare{\lambda \frac{\bra{\eta,l_n}}{n^b}}]\cr &\le & e^{-\lambda n^{\zeta}} E_0[\exp\pare{\sum_{z\in {\mathbb Z}^d} \Gamma\pare{\frac{\lambda l_n(z)}{n^b}}}]. \end{eqnarray} We need to distinguish between the asymptotic regimes of $\Gamma(\frac{\lambda l_n(z)}{n^b})$ at zero and at infinity, according to whether $l_n(x)<n^{b-\epsilon}$ or $l_n(x)> n^{b+\epsilon}$ respectively. For $\epsilon>0$, we introduce \[ \Dh=\acc{x \in {\mathbb Z}^d:\ l_n(x) \geq n^{b+\epsilon}},\qquad \Db= \acc{x \in {\mathbb Z}^d:\ 0<l_n(x) \leq n^{b-\epsilon}}, \] and \[ \RR_{\epsilon}=\acc{x \in {\mathbb Z}^d:\ n^{b-\epsilon} \le l_n(x) \le n^{b+\epsilon}}. \] Then, for any $\epsilon_0>0$ small \be{rwrs.3} {\mathbb P}\pare{\bra{\eta,l_n}\ge yn^{\beta}}\le {\mathbb P}\pare{\bra{\eta,\ind_{\Dh}l_n}\ge (1-\epsilon_0)yn^{\beta}}+I_1+I_2, \end{equation} where \be{def-I} I_1:={\mathbb P}\pare{\bra{\eta,\ind_{\Db}l_n}\ge \frac{\epsilon_0}{2} yn^{\beta}} ,\quad\text{and}\quad I_2:={\mathbb P}\pare{\bra{\eta,\ind_{\RR_{\epsilon}}l_n}\ge \frac{\epsilon_0}{2} yn^{\beta}} . \end{equation} We now have to show that the contribution of $\Db$ and $\RR_{\epsilon}$, which concern the {\it low} level sets, is negligible. We gather the two estimates in the next subsection, and treat $\Dh$ afterwards. \subsection{Contribution of {\it small} local times.} We first show that $I_1$ is negligible. Set ${\mathcal B}=\acc{||\ind_{\Db}l_n||_2^2\ge \delta n^{\beta+b}}$, for a $\delta>0$ to be chosen later.
For any $\lambda>0$ \be{rwrs.4} {\mathbb P}\pare{\bra{\eta,\ind_{\Db}l_n}\ge \frac{\epsilon_0}{2} yn^{\beta}}\le P({\mathcal B})+ e^{-\lambda n^{\beta-b} \frac{\epsilon_0}{2} y} E_0\cro{\ind_{{\mathcal B}^c} \exp\pare{ \sum_{\Db} \Gamma\pare{\frac{\lambda l_n(x)}{n^b}}}}. \end{equation} Now, for any $\lambda>0$ and $n$ large enough, we have for $x\in \Db$ that \[ \Gamma(\frac{\lambda l_n(x)}{n^b})\le \Gamma_0(1+\epsilon_0) (\frac{\lambda l_n(x)}{n^b})^2, \] so that \be{rwrs.5} {\mathbb P}\pare{\bra{\eta,\ind_{\Db}l_n}\ge \frac{\epsilon_0}{2} yn^{\beta}}\le P({\mathcal B})+ \exp\pare{-n^\zeta\pare{ \lambda \frac{\epsilon_0}{2} y- \lambda^2 \Gamma_0(1+\epsilon_0)\delta}}. \end{equation} Since $\beta+b>1$, Lemma 1.8 of \cite{AC05} gives that $-\log\pare{P({\mathcal B})}\ge M n^{\zeta}$, for any $\delta>0$ and any large constant $M$. Finally, for fixed $\epsilon_0$ and a large constant $M$, we first choose $\lambda$ so that $\lambda \frac{\epsilon_0}{2} y\ge 2M$. Then, we choose $\delta$ small enough so that $\lambda \Gamma_0\delta\le \frac{\epsilon_0}{4} y$. We now consider the contribution of $\RR_{\epsilon}$. We use here our hypothesis that the $\eta$ are bell-shaped random variables, since it leads to clearer derivations. Thus, according to Lemma 2.1 of~\cite{AC04}, we have \be{rwrs.6} {\mathbb P}\pare{ \bra{\eta,1\{\RR_{\epsilon}\}l_n}\ge yn^\beta}\le {\mathbb P}\pare{\sum_{\RR_{\epsilon}}\eta(x)\ge n^{\beta-b-\epsilon}}. \end{equation} By Proposition 1.9 of \cite{AC05}, we can assume that $|\RR_{\epsilon}|< n^\gamma$, with \be{rwrs.7} \gamma<\gamma_0:=\frac{1}{1-\frac{2}{d}}\frac{\alpha-1}{\alpha+1}\beta= \frac{1-\frac{1}{\alpha}} {1-\frac{2}{d}}\zeta<\zeta\quad\text{if}\quad \alpha<\frac{d}{2}. \end{equation} Note that $\gamma_0$ given in \reff{rwrs.7} is smaller than $\zeta$ when $\alpha<\frac{d}{2}$.
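For completeness, the identity and the comparison in \reff{rwrs.7} are elementary: since $\zeta=\frac{\alpha\beta}{\alpha+1}$,
\[
\frac{1-\frac{1}{\alpha}}{1-\frac{2}{d}}\,\zeta
=\frac{\alpha-1}{\alpha}\cdot\frac{1}{1-\frac{2}{d}}\cdot\frac{\alpha\beta}{\alpha+1}
=\frac{1}{1-\frac{2}{d}}\,\frac{\alpha-1}{\alpha+1}\,\beta,
\]
and $\gamma_0<\zeta$ amounts to $\frac{1-\frac{1}{\alpha}}{1-\frac{2}{d}}<1$, that is $\frac{2}{d}<\frac{1}{\alpha}$, i.e. $\alpha<\frac{d}{2}$.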
Using Lemma A.4 of \cite{AC04}, we obtain \be{rwrs.8} {\mathbb P}\pare{\sum_{\RR_{\epsilon}} \eta(x)\ge n^{\beta-b-\epsilon}, |\RR_{\epsilon}|\le n^\gamma}\le \exp\pare{ -Cn^{\gamma+\alpha(\beta-b-\epsilon-\gamma)}}. \end{equation} For the left hand side of \reff{rwrs.8} to be negligible, we would need (recall that $\alpha>1$) \be{rwrs.9} \gamma+\alpha(\beta-b-\epsilon-\gamma)>\beta-b\Longleftrightarrow (\beta-b)(\alpha-1)>(\alpha-1)\gamma\Longleftrightarrow \zeta>\gamma. \end{equation} This last inequality (in which we have neglected the term $\alpha\epsilon$, harmless for $\epsilon$ small) has already been observed to hold in \reff{rwrs.7}. \subsection{Contribution of {\it large} local times.} \subsubsection{Upper Bound} We now deal with the contribution of $\Dh$. For any $\lambda>0$ (recalling that $\beta -b=\zeta=\frac{\alpha\beta}{\alpha+1}$) \be{rwrs.10} {\mathbb P}\pare{\bra{\eta,1\{\Dh\}l_n}\ge (1-\epsilon_0) y n^\beta}\le e^{-\lambda n^{\beta -b}(1-\epsilon_0)y} E_0\cro{ \exp \pare{\sum_{x \in \Dh} \Gamma\pare{ \frac{\lambda l_n(x)}{n^{b}}}}}. \end{equation} Now, for $\lambda$ {\it not too small}, when $n$ is large enough we have \be{rwrs.11} \sum_{x \in \Dh} \Gamma\pare{\frac{\lambda l_n(x)}{n^{b}}}\le(\Gamma_{\infty}+\epsilon_0) \lambda^{\alpha^*}\pare{\frac{||\ind_{\Dh}l_n||_{\alpha^*}}{n^{b}}}^{\alpha^*}. \end{equation} Thus, \reff{rwrs.10} becomes \be{rwrs.12} {\mathbb P}\pare{\bra{\eta,1\{\Dh\}l_n}\ge (1-\epsilon_0) y n^\beta}\le \exp\pare{ -n^\zeta\pare{ \lambda (1-\epsilon_0)y-\lambda^{\alpha^*}(\Gamma_{\infty}+\epsilon_0) \frac{||\ind_{\Dh}l_n||_{\alpha^*}^{\alpha^*}}{n^{b\alpha^*+\zeta}}}}. \end{equation} Now, optimizing over $\lambda$ in the right hand side of \reff{rwrs.12}, we obtain \be{rwrs.13} (1-\epsilon_0)y=\alpha^* (\Gamma_{\infty}+\epsilon_0) \frac{||\ind_{\Dh}l_n||_{\alpha^*}^{\alpha^*}}{n^{b\alpha^*+\zeta}}\lambda^{\alpha^* -1}. \end{equation} Now, recall that in order to fall in the asymptotic regime of $\Gamma$ at infinity, we assumed that $\lambda$ was not too small.
In other words, in view of \reff{rwrs.13}, we would need a bound of the type $||\ind_{\Dh}l_n||_{\alpha^*}\le An^{\zeta}$ for a large constant $A$. Now, using Proposition~\ref{prop-alpha}, there is a constant $\I(\alpha^*)$ such that \be{rwrs.14} \lim_{n\to\infty} \frac{1}{n^\zeta} \log P\pare{||\ind_{\Dh}l_n||_{\alpha^*}\ge A n^{\zeta}}\le -\I(\alpha^*) A . \end{equation} Thus, we can assume that $\lambda$ satisfying \reff{rwrs.13} is bounded from below. Also, replacing the value of $\lambda$ obtained in \reff{rwrs.13} in inequality \reff{rwrs.12}, and using that $\Gamma_{\infty}^{-1}=\alpha^*(\alpha c_{\alpha})^{\alpha^*-1}$, we find that \be{rwrs.15} {\mathbb P}\pare{\bra{\eta,\ind_{\{\Dh\}}l_n}\ge (1-\epsilon_0) y n^\beta}\le E_0\cro{\exp\pare{ -c_{\alpha} (1-\delta_0) \pare{ \frac{yn^\beta}{||\ind_{\Dh}l_n||_{\alpha^*}}}^\alpha}}, \end{equation} where $(1-\delta_0)=(1-\epsilon_0)^\alpha(1+\epsilon_0)^{1-\alpha}$, which can be made as close to 1 as one wishes. Now, it is easy to conclude that \ba{upper-LDP} \limsup_{n\to\infty} \frac{1}{n^\zeta} \log P\pare{\bra{\eta,l_n}\ge y n^\beta}&\le& -c_{\alpha} \inf_{\xi>0}\acc{ \pare{\frac{y}{\xi}}^{\alpha} + \I(\alpha^*) \xi}\cr &=& -c_{\alpha} (\alpha+1) \pare{ \frac{y \I(\alpha^*) }{\alpha}}^{\frac{\alpha}{\alpha+1}}. \end{eqnarray} \subsubsection{Lower Bound for RWRS.} In this section we set $\bar\D=\acc{z\in {\mathbb Z}^d: l_n(z)\ge \delta n^\zeta}$, for a fixed but small $\delta$. Since we have assumed the $\eta$-variables to have a bell-shaped distribution, we have, according to Lemma 2.1 of~\cite{AC04}, \be{rwrs.16} {\mathbb P}\pare{\bra{\eta,l_n}\ge y n^\beta}\ge {\mathbb P}\pare{\bra{\eta,\ind_{\bar\D}l_n}\ge y n^\beta}. \end{equation} Then, we condition on the random walk, and average with respect to the $\eta$-variables, which we require to be large on each site of $\bar \D$. Recall now that we can assume $|\bar\D|\le 1/\delta^2$ by \reff{level.21} (for $\delta$ small enough).
We use \reff{rwrs.16} to deduce for any $\epsilon>0$ \[ \begin{split} {\mathbb P}(\bra{\eta,l_n}\ge y n^\beta)\ge&E_0 \cro{{\mathbb Q}\cro{\min_{z\in \bar\D} \eta(z)\ge \epsilon n^\zeta,\ \bra{\eta,\ind_{\bar\D}l_n}\ge y n^\beta}}\\ \ge & E_0\cro{\sup_{x(i),i\in \bar\D} \acc{C^{|\bar\D|}\exp\pare{-c_{\alpha}(1+\epsilon) \sum_{i\in \bar\D} x(i)^{\alpha}}: \bra{x,\ind_{\bar\D}l_n}\ge yn^\beta}}\\ \ge & C^{1/\delta^2} E_0\cro{\ind\acc{|\bar\D|\le \frac{1}{\delta^2}} \exp\pare{-c_{\alpha}(1+\epsilon) \pare{\frac{yn^\beta}{||\ind_{\bar\D}l_n||_{\alpha^*}}}^{\alpha}}}\\ \ge & C^{1/\delta^2} \exp\pare{-c_{\alpha}(1+\epsilon) \pare{\frac{yn^\beta}{\xi^* n^\zeta}}^{\alpha}} P\pare{ ||\ind_{\bar\D}l_n||_{\alpha^*}\ge \xi^* n^\zeta,\ |\bar\D|\le \frac{1}{\delta^2}}, \end{split} \] where $\xi^*$ realizes the infimum in \reff{upper-LDP}. Now, as $\epsilon$ is sent to 0 after $n$ is sent to infinity, we obtain \be{lower-LDP} \liminf_{n\to\infty} \frac{1}{n^\zeta} \log P\pare{\bra{\eta,l_n}\ge y n^\beta}\ge -c_{\alpha} (\alpha+1) \pare{ \frac{y \I(\alpha^*) } {\alpha}}^{\frac{\alpha}{\alpha+1}}. \end{equation} \section{Appendix.}\label{sec-appendix} \subsection{Proof of Lemma~\ref{circuit-lem.1}.} Fix ${\bf k}\in V(\Lambda',n)$. By Chebyshev's inequality, for any $\lambda>0$ \be{circuit.15} P\big( \sum_{i=1}^{|{\bf k}|} 1_{\acc{|S_{T^{(i)}}-S_{T^{(i-1)}}|>{\sqrt L}, T^{(|{\bf k}|)}<\infty}}\ge \epsilon\sqrt{n}\big) \le e^{-\lambda \epsilon \sqrt{n}} E\cro{\prod_{i=1}^{|{\bf k}|}e^{\lambda \ind\acc{|S_{T^{(i)}}-S_{T^{(i-1)}}|>{\sqrt L}}}}. \end{equation} Now, by using the strong Markov property and induction, we bound the right hand side of \reff{circuit.15} by \be{circuit.16} e^{-\lambda \epsilon \sqrt{n}} \pare{ \sup_{z\in \Lambda'\cup\{0\}} E_{z}\cro{ e^{\lambda \ind\acc{|S_T|>\sqrt{L}, T<\infty}}}}^{|{\bf k}|}.
\end{equation} Now, \ba{circuit.17} E_{z}\cro{ e^{\lambda \ind\acc{|S_T|>\sqrt{L},T<\infty}}}&\le& 1+(e^\lambda-1) P_{z}(|S_T|>\sqrt{L},T<\infty)\cr &\le & 1+(e^\lambda-1) P_{z}\pare{ \cup\acc{T(\xi)<\infty:\ \xi\in \Lambda;\ |\xi-z|>\sqrt{L}}}\cr &\le & 1+(e^\lambda-1)|\Lambda| \sup\acc{P_z(T(\xi)<\infty);\ |\xi-z|>\sqrt{L}, \xi\in \Lambda}\cr &\le & 1+(e^\lambda-1) \frac{\bar c|\Lambda|}{L^{d/2-1}} \le \exp\pare{ (e^\lambda-1)\frac{\bar c|\Lambda|}{L^{d/2-1}}}. \end{eqnarray} Now, since $|{\bf k}|\le c_0 \sqrt{n}$, we have \be{circuit.18} P\pare{ \sum_{i=1}^{|{\bf k}|} 1_{\acc{|S_{T^{(i)}}-S_{T^{(i-1)}}|>{\sqrt L}, T^{(|{\bf k}|)}<\infty}}\ge \epsilon\sqrt{n}}\le \exp\pare{-\sqrt{n} \pare{\lambda\epsilon- (e^\lambda-1)\frac{c_0\bar c|\Lambda|}{L^{d/2-1}}}}. \end{equation} Thus, for any $\epsilon>0$, we can choose $L$ large enough so that the result holds. \qed \subsection{Proof of Lemma~\ref{lem-encageS}.} We first introduce a fixed scale $l_0\in {\mathbb N}$, to be adjusted later as a function of $|\Lambda|$, and assume that $|x-y|\ge 4 |\Lambda| l_0$. Indeed, the case $|x-y|\le 4 |\Lambda|l_0$ is easy to treat: $P_x(S_T=y)>0$ implies the existence of a path from $x$ to $y$ avoiding $\Lambda$, and since $\Lambda$ is finite, the length of the shortest path joining $x$ and $y$ and avoiding $\Lambda$ can be bounded by a constant depending only on $|\Lambda|$. Forcing the walk to follow this path costs only a positive constant which depends on $|\Lambda|$. We introduce two sets of concentric shells around $x$ and $y$: for $i=1,\dots,|\Lambda|-1$ \be{encage.3} C_i=B(x,(2i+2)l_0)\backslash B(x,2il_0),\quad\text{and}\quad C_0=B(x,2l_0), \end{equation} and similarly $\acc{ D_i,i=0,\dots,|\Lambda|}$ are centered around $y$; moreover, for all $i,j$, $C_i\cap D_j=\emptyset$. Necessarily, there are $i,j\le |\Lambda|$ such that \be{encage.4} C_i\cap \Lambda=\emptyset,\quad\text{and}\quad D_j\cap \Lambda=\emptyset.
\end{equation} Define now two stopping times corresponding to exiting {\it mid-}$C_i$ and entering {\it mid-}$D_j$ \be{encage.5} \sigma_i=\inf\acc{n\ge 0:\ S_n\not\in B(x,(2i+1)l_0)},\quad\text{and}\quad \tau_j=\inf\acc{n\ge 0:\ S_n\in B(y,(2j+1)l_0)}. \end{equation} Note that when $\sigma_i<\infty$ and $\tau_j<\infty$, we have $\text{dist}(S_{\sigma_i},\Lambda)\ge l_0$, and $\text{dist}(S_{\tau_j},\Lambda)\ge l_0$. We show that for any $L$ we can find $\epsilon_L$ (going to 0 as $L\to\infty$) such that \be{encageS-key} P_x(T(\S)<T<\infty,\ S_T=y)\le \frac{\epsilon_L}{2} P_x(S_T=y). \end{equation} Note that \reff{encageS-key} implies that for $\epsilon_L$ small enough \be{encage.15} P_x(S_T=y)\le \frac{1}{1-\epsilon_L/2} P_x(T<T(\S),\ S_T=y)\le e^{\epsilon_L} P_x(T<T(\S),\ S_T=y). \end{equation} To show \reff{encageS-key}, we decompose the event $\acc{S_0=x,\ S_T=y}$ according to the positions of the walk at times $\sigma_i$ and $\tau_j$ \be{encage.6} P_x(S_T=y)\ge \sum_{z\in C_i} P_x\pare{S_{\sigma_i}=z,\ \sigma_i<T}P_z(\tau_j<T) \inf_{z'\in D_j}P_{z'}(S_T=y). \end{equation} Note that if $P_x(S_T=y)>0$, there is necessarily a path from $D_j$ to $y$ which avoids $\Lambda$, so that there is a constant $c_0$ (depending only on $l_0$) such that \be{encage.7} \inf_{z'\in D_j}P_{z'}(S_T=y)>c_0. \end{equation} We need to estimate $P_z(\tau_j<T)$. First, by classical estimates (see Proposition 2.2.2 of \cite{LAWLER}), there are $c_1,c_2>0$ such that when $|x-y|\ge 4l_0|\Lambda|$ and $z\in C_i$ \be{encage.8} \frac{c_2\text{cap}(D_j)}{|z-y|^{d-2}}\le P_{z}(\tau_j<\infty)\le \frac{c_1\text{cap}(D_j)}{|z-y|^{d-2}}. \end{equation} We now establish that if we choose $l_0$ so that \be{encage.11} l_0^{d-2}\ge 2 |\Lambda| \frac{c_1c_G2^{d-2}}{c_2}, \quad\text{then}\quad P_z(\tau_j<T)\ge \frac{1}{2} P_z(\tau_j<\infty).
\end{equation} Since $\text{dist}(z,\Lambda)>l_0$ \ba{encage.9} P_{z}(T<\tau_j<\infty)&\le &\sum_{\xi\in \Lambda\backslash D_0} P_{z}\pare{S_T=\xi,\ T<\tau_j<\infty} P_{\xi}(\tau_j<\infty)\cr &\le & |\Lambda| \sup_{\xi\in \Lambda\backslash D_0} \acc{P_z(T(\xi)<\infty)P_{\xi}(\tau_j<\infty)}. \end{eqnarray} We use again estimate \reff{encage.8} to obtain \be{encage.91} P_{z}(T<\tau_j<\infty)\le c_1 c_G |\Lambda| \sup_{\xi \in \Lambda\backslash D_0} \acc{ \frac{1}{|z-\xi|^{d-2}}\times \frac{\text{cap}(D_j)}{|\xi -y|^{d-2}}}. \end{equation} Now, for $\xi \in \Lambda\backslash D_0$, we have $\min(|z-\xi|,|\xi-y|)>l_0$, while the triangle inequality yields $\max(|z-\xi|,|\xi-y|)> \frac{|z-y|}{2}$. Thus, we obtain \ba{encage.10} P_{z}(T<\tau_j<\infty)&\le &\frac{c_1c_G2^{d-2}}{l_0^{d-2}} |\Lambda| \frac{\text{cap}(D_j)}{|z-y|^{d-2}}\cr &\le & \frac{c_1c_G2^{d-2}}{c_2}\frac{|\Lambda|}{l_0^{d-2}}\frac{c_2\text{cap}(D_j)}{|z-y|^{d-2}}\cr &\le & \frac{c_1c_G2^{d-2}}{c_2}\frac{|\Lambda|}{l_0^{d-2}} P_z(\tau_j<\infty). \end{eqnarray} This implies \reff{encage.11}. Now, for any $z\in C_i$, by conditioning on $S_{T(\S)}$, we obtain \be{encage.12} P_z(T(\S)<T<\infty,\ S_T=y)\le E_z\cro{ \ind\acc{T(\S)<T<\infty}P_{S_{T(\S)}} \pare{S_T=y}}\le \frac{c_G}{L^{d-2}}. \end{equation} Thus, for any $z\in C_i$, \be{encage.13} P_z(\tau_j<T) \inf_{z'\in D_j}P_{z'}(S_T=y)\ge c_0\frac{c_2\text{cap}(D_j)}{2|z-y|^{d-2}}\ge \frac{P_z(T(\S)<T<\infty,\ S_T=y)}{\epsilon_L/2}, \end{equation} with, for a constant $C(\Lambda)>0$ (recalling that $|x-y|\ge 4|\Lambda|l_0$ and $|x-y|\le \sqrt{L}$), \be{encage.14} \epsilon_L=\frac{4 c_G |z-y|^{d-2}}{c_0c_2 \text{cap}(D_j) L^{d-2}}\le \frac{ 2^d c_G}{c_0c_2 \text{cap}(D_j)}\pare{ \frac{|x-y|}{L}}^{d-2}\le C(\Lambda) \pare{ \frac{1}{\sqrt{L}}}^{d-2}. \end{equation} Now, after summing over $z\in C_i$, we obtain \reff{encageS-key}.
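In more detail, the last summation step can be sketched as follows. On the event $\acc{T(\S)<T<\infty}$ the walk must leave $B(x,(2i+1)l_0)$ before reaching $\S$ (for $L$ large), so that $\sigma_i\le T(\S)$, and the strong Markov property at $\sigma_i$ gives
\[
P_x(T(\S)<T<\infty,\ S_T=y)\le \sum_{z\in C_i} P_x\pare{S_{\sigma_i}=z,\ \sigma_i<T}\, P_z\pare{T(\S)<T<\infty,\ S_T=y}.
\]
Bounding each factor $P_z\pare{T(\S)<T<\infty,\ S_T=y}$ by $\frac{\epsilon_L}{2}\, P_z(\tau_j<T) \inf_{z'\in D_j}P_{z'}(S_T=y)$, as in \reff{encage.13}, and then using \reff{encage.6}, we arrive at \reff{encageS-key}.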
\qed \subsection{Proof of Lemma~\ref{lem-encageB}.} We consider two cases: (i) $\sqrt{L}< |x-y|\le \kappa L$, where $\kappa$ is a small parameter, and (ii) $|x-y|> \kappa L$. Also, we denote by $C(\Lambda)$ a positive constant which depends only on $|\Lambda|$; it may change from one occurrence to another. \noindent{\underline{Case (i).}} We use the same steps as in the previous proof up to \reff{encage.13}, where we replace $|z-y|$ by $2|x-y|$, and obtain \be{encage.16} P_x(S_T=y)\ge \frac{c_0}{2} \frac{c_2\text{cap}(D_j)}{2^{d-2}|x-y|^{d-2}}. \end{equation} Now, \reff{encage.12} implies that if \be{encage.17} \kappa^{d-2}\le \frac{c_0c_2 \text{cap}(D_j)}{2^{d}c_G},\quad\text{then}\quad P_x(S_T=y)\le 2 P_x(T<T(\S),\ S_T=y). \end{equation} \noindent{\underline{Case (ii).}} First note that \be{encage.fact1} P_x(S_T=y)\le P_x(T(y)<\infty) \le \frac{c_G}{|x-y|^{d-2}}. \end{equation} Now, set $L'=\kappa L$, and note that $\text{diam}(\C)$ is a multiple (depending only on $\Lambda$) of $L'$. Now, a way of realizing $\acc{ S_T=y, T<T(\S)}$ is to go through a finite number of adjacent spheres of diameter $L'$. From a hitting point on one sphere, we force the walk to exit only from a tiny fraction of the surface of the next sphere, until we reach the last sphere, say at $z^*$, for which it is easy to show that there are two universal positive constants $c,c'$ such that \be{encage.fact2} P_{z^*}(S_T=y, T<T(\S))\ge c P_{z^*}(T(y)<\infty)\ge c' \frac{\tilde c_G}{|x-y|^{d-2}}. \end{equation} Note that when starting at $x$, the probability of exiting $B(x,|x-y|)$ through site $y$ is of order $|x-y|^{1-d}$ (the inverse of the surface area), and this is much smaller than $P_x(T(y)<\infty)$, which should be close to $P_x(S_T=y)$ in cases where all other points of $\Lambda$ are very far from $x$ and $y$. Thus, we have to consider more paths than $\acc{S_{T(B(x,|x-y|)^c)}=y,S_0=x}$.
By Lemma~\ref{lem-cluster} and Remark~\ref{rem-cluster}, there is a finite sequence $x_1,\dots,x_k$ (not necessarily in $\C$) such that $L'/2\le |x_{i+1}-x_i|\le L'$ and such that $B(x_i,L)\subset \S(\C)$. Set \be{encage.18} \delta=\frac{1}{4|\Lambda|^{\frac{1}{d-1}}},\quad Q_i=\acc{z: |z-x_i|=|x_{i+1}-x_i|},\quad\text{and}\quad \Sigma_i=Q_i\cap B(x_{i+1},\frac{L'}{4}). \end{equation} Note that $|\Sigma_i|$ is of order $(\frac{L'}{4})^{d-1}$. We can throw $|\Lambda|$ points on $\Sigma_i$, say at mutual distance at least $\delta L'$, and one of them, say $y_{i}^*$, necessarily satisfies \be{encage.19} B(y_{i}^*,\delta L')\cap \Lambda =\emptyset,\quad\text{and set}\quad B_i^*=B(y_{i}^*,\frac{\delta L'}{2})\cap \Sigma_i. \end{equation} Now, when the walk starts at $x_{i+1}$, it exits through any point $z\in Q_{i+1}$ with roughly the same probability (see e.g. Lemma 1.7.4 of~\cite{LAWLER}), so that there is $c_S$ such that for $i\ge 0$, \be{encage.20} P_{x_{i+1}}(S_{H_i}=z)\ge \frac{c_S}{|x_{i+2}-x_{i+1}|^{d-1}} ,\quad\text{where}\quad H_i:=T(Q_{i+1}). \end{equation} By Harnack's inequality (see Theorem 1.7.2 of~\cite{LAWLER}), for any $z\in B_i^*$ \be{encage.21} P_z\pare{ S_{H_i}\in B^*_{i+1}}\ge \frac{c_S |B^*_{i+1}|}{(2L)^{d-1}}. \end{equation} Now, there is $\chi>0$ such that \[ |B^*_i|\ge \chi(\delta L')^{d-1}, \] which yields \be{encage.key} P_z\pare{ S_{H_i}\in B^*_{i+1}}\ge c_S \chi (\frac{\delta \kappa}{2})^{d-1}. \end{equation} Note that it costs more to hit $\Lambda$ before $Q_{i+1}$. Indeed, \ba{encage.22} P_z\pare{S_{H_i}\in B^*_{i+1},\ T<H_i}&\le& \sum_{\xi\in \Lambda} P_z(T(\xi)<\infty) P_{\xi}\pare{ H_i<\infty}\cr &\le &\sup_{\xi \in \Lambda} \frac{c_G|\Lambda|}{|z-\xi|^{d-2}}\times \frac{c_1 \text{cap}\pare{ B^*_{i+1}}}{|\xi-y_{i+1}^*|^{d-2}}. \end{eqnarray} By definition, $ \text{cap}( B^*_{i+1} )\le |B^*_{i+1}|\le \chi (\delta L')^{d-1}$.
Now, $z$ and $y_{i+1}^*$ are chosen in such a way that $\min(|z-\xi|,|\xi-y_{i+1}^*|)\ge \frac{\delta L'}{2}$, so that \be{encage.23} P_z\pare{S_{H_i}\in B^*_{i+1},\ T<H_i}\le \frac{ c_Sc_1 \chi |\Lambda| (\delta L')^{d-1} } { (\delta L'/2)^{2d-4}}. \end{equation} Since for $d\ge 5$ we have $2d-4>d-1$, $L$ can be chosen large enough so that \be{encage.24} P_z\pare{S_{H_i}\in B^*_{i+1}, \ H_i<T}\ge \frac{1}{2} P_z\pare{ S_{H_i}\in B^*_{i+1}}. \end{equation} Now, we define $\theta_k$ as the time-shift by $k$ units of the random walk trajectory, and $\tilde H_i=H_i\circ \theta_{H_{i-1}}$. The following scenario produces $\acc{ S_T=y,\ T<T(\S)}$: \be{encage.25} \bigcap_{i=1}^k \acc{S_{\tilde H_i}\in B^*_{i+1},\ \tilde H_i <T\circ \theta_{H_{i-1}}} \cap \acc{S_{T\circ \theta_{H_k}}=y,T\circ\theta_{H_k}<T(\S) \circ\theta_{H_k}}. \end{equation} By using the strong Markov property and \reff{encage.24}, we obtain \be{encage.26} P_x\pare{S_T=y,\ T<T(\S)}\ge \pare{ \frac{c_S\chi}{2} (\frac{\delta \kappa}{2})^{d-1}}^k \inf_{z\in B^*_{k}} P_z\pare{S_T=y,T<T(\S)}. \end{equation} In the last term in \reff{encage.26}, note that for any $z\in B_k^*$, $L'/2\le |z-y|\le L'$, so that we are in the situation of Case (i), where inequalities \reff{encage.17} and \reff{encage.16} yield \[ P_z\pare{S_T=y,T<T(\S)}\ge \frac{1}{2} P_z\pare{S_T=y}\ge \frac{c}{|z-y|^{d-2}}. \] Since Lemma~\ref{lem-cluster} establishes that for some constant $C(\Lambda)>0$, $\text{diam}(\C)\le C(\Lambda)L$, and $|z-y|\ge \frac{\kappa}{2} L$, we have for a constant $C(\Lambda)$ \[ P_x\pare{S_T=y,\ T<T(\S)}\ge \frac{C(\Lambda)}{|x-y|^{d-2}}\ge \frac{C(\Lambda)}{c_G} P_x\pare{S_T=y}. \] \qed \subsection{Proof of Lemma~\ref{trans-lem.1}.} We start with the shorthand notations $\S_1=\S(\C)$ and $\tilde \S_1=\S(\T(\C))$, and we define \[ \S_2=\acc{z: \text{dist}(z,\C)=2\max(\text{diam}(\C),L)}, \] and $\tilde \S_2$ is defined as $\S_2$, with $\T(\C)$ used instead of $\C$.
First, we obtain an upper bound on the weights of paths joining $y$ to $x$, by conditioning on the hitting sites of $\S_2$ and $\S_1$, and by using the strong Markov property \ba{trans.3} P_y(S_T=x)&=& \sum_{z_1\in \S_1} E_y\cro{ \ind_{\acc{T(\S_2)<T}} P_{S_{T(\S_2)}}(S_{T(\S_1)}=z_1,T(\S_1)<T)}P_{z_1}(S_T=x)\cr &\le & P_y(T(\S_2)<\infty)\sum_{z_1\in \S_1} \pare{ \sup_{z\in \S_2}P_z(S_{T(\S_1)}=z_1)} P_{z_1}(S_T=x). \end{eqnarray} We need to compare \reff{trans.3} with the corresponding decomposition for trajectories starting at $y$ with $\acc{S_T=\tilde x}$, where we set $\tilde x=\T(x)$ for simplicity, \be{trans.15} \begin{split} P_y(S_T=\tilde x)=& \sum_{\tilde z_1\in \tilde \S_1} E_y\cro{ \ind_{\acc{T(\tilde \S_2)<T}} P_{S_{T(\tilde \S_2)}}(S_{T(\tilde \S_1)}=\tilde z_1,T(\tilde \S_1)<T)} P_{\tilde z_1}(S_T=\tilde x)\\ \ge & P_y(T(\tilde\S_2)<T)\!\!\sum_{\tilde z_1\in \tilde \S_1} \!\!\inf_{\tilde z\in \tilde \S_2} P_{\tilde z} \pare{S_{T(\tilde \S_1)}=\tilde z_1, T(\tilde \S_1)<T} P_{\tilde z_1}(S_T=\tilde x). \end{split} \end{equation} We now bound each term in \reff{trans.3} by the corresponding one in \reff{trans.15}. \underline{ About $P_{z_1}(S_T=x)$}. From \reff{clus.3} of Lemma~\ref{lem-cluster}, $\S_2\cap\Lambda=\C$. By the same reasoning as in the proof of Lemma~\ref{lem-encageB}, there is a constant $C_0$ such that for any $z_1\in \S_1$ \be{trans.4} P_{z_1}(S_T=x)\le C_0 P_{z_1}(S_T=x,T<T(\S_2)). \end{equation} As long as we consider paths from $\S_1$ to $x$ which do not escape $\S_2$, we can transport them, using the translation invariance of the law of the random walk \be{trans.5} P_{\tilde z_1}(S_T=\tilde x,T<T(\tilde \S_2))=P_{z_1}(S_T=x,T<T(\S_2)), \end{equation} and by using \reff{trans.4} and \reff{trans.5}, we finally obtain \be{trans.6} P_{z_1}(S_T=x)\le C_0 P_{\tilde z_1}(S_T=\tilde x,T<T(\tilde \S_2)) \le C_0 P_{\tilde z_1}(S_T=\tilde x). \end{equation} \underline{ About $P_y(T(\S_2)<\infty)$}.
By Proposition 2.2.2 of~\cite{LAWLER}, there are positive constants $c_1,c_2$ such that \be{trans.7} \frac{c_2 \text{cap}(\S_2)}{|y-x|^{d-2}}\le P_y(T(\S_2)<\infty)\le \frac{c_1 \text{cap}(\S_2)}{|y-x|^{d-2}}, \end{equation} and \reff{trans.7} also holds with a tilde over $x$ and $\S_2$. Since $|y-\tilde x|\le 2 |y-x|$ by \reff{clus.7}, we have \be{trans.8} P_{y}(T(\S_2)<\infty)\le \frac{c_1}{c_2} 2^{d-2} P_{y}(T(\tilde \S_2)<\infty). \end{equation} We now need to check that paths reaching $\tilde \S_2$ from $y$ have a good chance of not meeting any site of $\Lambda$. In other words, we need \be{trans.40} P_y(T(\tilde \S_2)<\infty)\le 2 P_y(T(\tilde \S_2)<T). \end{equation} The argument is similar to the one showing $P_z(\tau_j<\infty)\le 2 P_z(\tau_j<T)$ in \reff{encage.11} of the proof of Lemma~\ref{lem-encageB}; we do not reproduce it here. Thus, from \reff{trans.40} and \reff{trans.8}, \be{trans.41} P_{y}(T(\S_2)<\infty)\le \frac{2^{d-1}c_1}{c_2} P_{y}(T(\tilde \S_2)<T). \end{equation} Similarly, starting from $\tilde z\in \tilde \S_2$, a walk has a good chance of hitting $\tilde \S_1$ before $\Lambda$; this is shown as \reff{trans.40} was, and here again we omit the argument showing that for any $\tilde z_1\in \tilde \S_1$ \be{trans.17} P_{\tilde z}(S_{T(\tilde \S_1)}=\tilde z_1)\le 2 P_{\tilde z} \pare{T(\tilde \S_1)<T, S_{T(\tilde \S_1)}=\tilde z_1}. \end{equation} \underline{ About the supremum in \reff{trans.3}}. Now, by Harnack's inequality for the discrete Laplacian (see Theorem 1.7.2 of~\cite{LAWLER}), there is $c_H>0$ independent of $n$ such that for any $z_2,z_2'\in \S_2$, and any $z_1\in \S_1$ \be{trans.16} P_{z_2}\pare{S_{T(\S_1)}=z_1}\le c_H P_{z_2'}\pare{S_{T(\S_1)}=z_1}. 
\end{equation} Now, using \reff{trans.17}, and the obvious fact \[ P_{z_2'}\pare{S_{T(\S_1)}=z_1}= P_{\T(z_2')}\pare{S_{T(\tilde \S_1)}=\T(z_1)}, \] we obtain for any $z_1\in \S_1$ \be{trans.18} \sup_{z\in\S_2} P_z(S_{T(\S_1)}=z_1)\le c_H \inf_{\tilde z\in\tilde \S_2} P_{\tilde z}(S_{T(\tilde \S_1)}=\tilde z_1) \le 2c_H \inf_{\tilde z\in\tilde \S_2} P_{\tilde z} \pare{S_{T(\tilde \S_1)}=\tilde z_1,T(\tilde \S_1)<T}. \end{equation} Starting with \reff{trans.3}, and combining \reff{trans.6}, \reff{trans.41}, and \reff{trans.18}, we obtain \[ \begin{split} P_y(S_T=x)& \le P_y(T(\S_2)<\infty)\sum_{z_1\in \S_1} \pare{ \sup_{z\in \S_2}P_z(S_{T(\S_1)}=z_1)} P_{z_1}(S_T=x)\\ &\le \frac{2^{d-1}c_1}{c_2} P_{y}(T(\tilde \S_2)<T) \sum_{\tilde z_1\in \tilde\S_1} \! 2c_H \inf_{\tilde z\in\tilde \S_2} P_{\tilde z} \pare{S_{T(\tilde \S_1)}=\tilde z_1,T(\tilde \S_1)<T}\\ &\qquad \times C_0 P_{\tilde z_1}(S_T=\tilde x)\le \frac{2^{d}c_1c_HC_0}{c_2}\, P_y(S_T=\T(x)), \end{split} \] where the last inequality follows from \reff{trans.15}. \qed \subsection{Proof of Lemma~\ref{improp-lem.1}.} We only prove the first inequality in \reff{improp.21}, the second being similar. The proof uses arguments from the proofs of Lemma~\ref{lem-encageB} and Lemma~\ref{trans-lem.1}. Namely, consider $x,x'\in \C$, and draw shells $\acc{C_k}$ and $\acc{D_k}$ as in \reff{encage.3}, but around $x$ and $x'$ respectively. Note that here $C_k\cap D_{k'}$ may not be empty. Also, choose $i$ and $j$ such that condition \reff{encage.4} holds. Then, we decompose $\acc{S_T=x}$ by conditioning on $\S_1$ as in \reff{trans.3}. For the term $P_{z_1}(S_T=x)$ we use the following rough bound: \be{trans.30} P_{z_1}(S_T=x)\le P_{z_1}(T(x)<\infty)\le \frac{c_d}{|z_1-x|^{d-2}}. \end{equation} We now use the observation that $2|z_1-x|\ge |z_1-x'|$. Indeed, $|z_1-x|\ge \text{diam}(\C)\ge |x-x'|$ implies that $2|z_1-x|\ge |z_1-x|+|x-x'|\ge |z_1-x'|$ by the triangle inequality. 
Thus there is a constant $c_3$ such that for the hitting time $\tau_j$ defined in \reff{encage.5} \be{trans.31} P_{z_1}(\tau_j<\infty)\ge \frac{c_2 \text{cap}(D_j)}{|z_1-x'|^{d-2}}\ge \frac{c_3}{|z_1-x|^{d-2}}. \end{equation} From \reff{trans.3}, \reff{trans.30} and \reff{trans.31}, we have \be{trans.32} P_y(S_T=x)\le \frac{c_d}{c_3} \sum_{z_1\in \S_1} P_y\pare{T(\S_1)<T,S_{T(\S_1)}=z_1} P_{z_1}(\tau_j<\infty). \end{equation} By the argument in \reff{encage.10}, and the choice of $l_0$ in \reff{encage.11}, we have $2 P_{z_1}(\tau_j<T)\ge P_{z_1}(\tau_j<\infty)$. Finally, from $D_j$ to $x'$, there is a path avoiding $\Lambda'\backslash\acc{x'}$ which costs a bounded amount depending only on $l_0$. \qed \subsection{Proof of Corollary~\ref{improp-cor.1}.} Note that by Lemma~\ref{improp-lem.1}, we have \be{trans.33} P_x(S_T=y)\le C_I P_x(S_T=y'). \end{equation} Now, $P_x(S_T=y')=P_{y'}(S_T=x)$, and we use Lemma~\ref{improp-lem.1} again: \be{trans.34} P_{y'}(S_T=x)\le C_I P_{y'}(S_T=x')\Longrightarrow P_x(S_T=y) \le C_I^2 P_{x'}(S_T=y'). \end{equation} \qed \end{document}
\begin{document} \title[On the blow-up of a normal singularity at MCM modules]{On the blow-up of a normal singularity at maximal Cohen-Macaulay modules} \author{Agust\'in Romano-Vel\'azquez} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India} \email{[email protected]} \thanks{The author is partially supported by ERCEA 615655 NMST Consolidator Grant, CONACYT CB 2016-1 Num. 286447, CONACYT 253506, FORDECYT 265667 and by TIFR Visiting Fellow.} \subjclass[2010]{Primary: 13C14, 13H10, 14E16, 32S25, 32S05} \begin{abstract} Raynaud and Gruson developed the theory of blowing-up an algebraic variety $X$ along a coherent sheaf $M$ in the sense that there exists a blow-up $X'$ of $X$ such that the ``strict transform'' of $M$ is flat over $X'$ and the blow-up satisfies a universal (minimality) property. However, not much is known about the singularities of the blow-up. In this article, we prove that if $X$ is a normal surface singularity and $M$ is a reflexive $\mathcal{O}_{X}$-module, then such a blow-up arises naturally from the theory of McKay correspondence. We show that the normalization of the blow-up of Raynaud and Gruson is obtained by taking a resolution of $X$ on which the full sheaf $\mathcal{M}$ associated to $M$ (i.e., the reflexive hull of the pull-back of $M$) is globally generated, and then contracting all the components of the exceptional divisor not intersecting the first Chern class of $\mathcal{M}$. Moreover, we prove that if $X$ is Gorenstein and $M$ is special in the sense of Wunram and Riemenschneider (generalized in a previous work by Bobadilla and the author), then the blow-up of Raynaud and Gruson is normal. Finally, we use the theory of matrix factorization developed by Eisenbud to give concrete examples of such blow-ups. \end{abstract} \maketitle \section{Introduction} Let $X$ be an algebraic variety and $M$ be a coherent $\Ss{X}$-module. 
The ground field is always assumed to be the complex numbers. The Raynaud-Gruson flattening theorem~\cite{Ray,Ray2} states that, under suitable hypotheses, there exists a finitely presented closed subscheme of $X$ (depending on $M$) such that the blow-up $f\colon X' \to X$ along the subscheme satisfies the property: $f^* M / \mathrm{tor}$ is flat over $X'$. The construction is universal in the sense that for any morphism $\eta \colon Y \to X$ for which the pull-back of $M$ to $Y$ is flat modulo torsion, the morphism $\eta$ factors through $X'$. The existence of such blow-ups satisfying the universal minimality condition has numerous applications. Raynaud~\cite{Ray} uses this blow-up to prove Chow's lemma (i.e., given $X$ separated over $S$, we can find a blow-up $X'$ of $X$ such that $X'$ is quasi-projective over $S$). Hironaka~\cite{Hironaka} recovers this blow-up and Chow's lemma in the context of analytic geometry. Abramovich, Karu, Matsuki and Włodarczyk~\cite{Abra} use Chow's lemma to prove the weak factorization conjecture for birational maps (i.e., a birational map between complete non-singular varieties is a composition of blow-ups and blow-downs along smooth centers). Campana~\cite{Campana} uses the flattening theorem to study the geometry, arithmetic, and classification of compact Kähler manifolds. Hassett and Hyeon~\cite{Hassett} use the flattening theorem to study the log canonical models for the moduli space of curves. Rossi~\cite{Ros} and later Villamayor~\cite{Villa} investigate the case of the blow-up at a coherent module $M$ over a (possibly non-reduced) ring $R$, with applications to the flattening of projective morphisms and related problems in singularity theory such as the Nash transformation (the closure of the graph of the Gauss map, which assigns to each regular point its tangent space considered as an element of a Grassmannian manifold). In general, the Raynaud-Gruson blow-up may be very difficult to describe. 
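For orientation, consider the simplest instance (a classical observation, stated here for illustration and not taken verbatim from the sources above): if $M=I$ is a nonzero ideal of a normal domain $R$, viewed as a module of rank one, then the Raynaud-Gruson blow-up reduces to the classical blow-up of $I$,
\begin{equation*}
\mathrm{Bl}_{I}(X)=\mathrm{Proj}\Big(\bigoplus_{n\ge 0} I^{n}\Big) \xrightarrow{\ f\ } X=\mathrm{Spec}(R), \qquad f^{*}I/\mathrm{tor}=I\cdot\Ss{\mathrm{Bl}_{I}(X)},
\end{equation*}
since the inverse image ideal sheaf $I\cdot\Ss{\mathrm{Bl}_{I}(X)}$ is invertible, hence locally free of rank one, and the universal property of the Raynaud-Gruson blow-up becomes the usual universal property of the blow-up of an ideal.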
Moreover, the blow-up is not regular in general, with little known about the singularities. In this article we prove that this blow-up has a very nice description in the case where $M$ is a maximal Cohen-Macaulay module and $X$ is a normal surface singularity. For this, we generalize some techniques and ideas given in~\cite{BoRo}, together with a careful study of the Raynaud-Gruson blow-up. In this setting, given a maximal Cohen-Macaulay $\Ss{X}$-module $M$, by~\cite{BoRo} there exists a unique resolution $\pi \colon \tilde{X} \to X$, called the \emph{minimal adapted resolution} associated to $M$, such that the associated \emph{full sheaf} $\Sf{M}:=(\pi^*M)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ is generated by global sections and $\tilde{X}$ is the minimal resolution of $X$ satisfying this property, where $\left(-\right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ denotes the dual with respect to the structure sheaf. Recall that full sheaves were first defined by Esnault~\cite{Es} for rational singularities and generalized by Kahn~\cite{Ka} to normal surface singularities. We prove (Theorem~\ref{Theo:BlowUpMMinAdapt}): \begin{theorem*} \label{Theo:IntroBlowUpMMinAdapt} Let $(X,x)$ be the germ of a normal surface singularity. Let $M$ be a reflexive $\Ss{X}$-module of rank $r$. Let $\pi\colon \tilde{X} \to X$ be the minimal adapted resolution associated to $M$ with exceptional divisor $E$. Let $E_1,\dots,E_n$ be the irreducible components of $E$ and $\Sf{M}:=(\pi^*M)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ be the full sheaf associated to $M$. Then, the normalization of the Raynaud-Gruson blow-up of $X$ at $M$ is obtained from $\tilde{X}$ by contracting the irreducible components $E_i$ such that $c_1\left(\Sf{M} \right) \cdot E_i =0$. 
\end{theorem*} In the case of normal, Gorenstein surface singularities and maximal Cohen-Macaulay modules we prove that the Raynaud-Gruson blow-up is related to the McKay correspondence. The McKay correspondence was constructed by McKay~\cite{McK}, and a conceptual geometric understanding of the correspondence was achieved in a series of papers by Gonzalez-Sprinberg and Verdier~\cite{GoVe}, Artin and Verdier~\cite{AV}, and Esnault and Kn\"orrer~\cite{EsKn}. This correspondence gives a bijection between the isomorphism classes of non-trivial indecomposable reflexive modules and the irreducible components of the exceptional divisor of the minimal resolution of a rational double point. Later Wunram~\cite{Wu} generalized the McKay correspondence to any rational singularity using the notion of \emph{specialty}. If the singularity is Gorenstein, we generalize in~\cite{BoRo} the definition of specialty as follows: a maximal Cohen-Macaulay module $M$ of rank $r$ over a normal surface singularity $X$ is called \emph{special} if the minimal adapted resolution $\pi \colon \tilde{X} \to X$ has the property that the dimension over ${\mathbb{C}}$ of the module $R^1 \pi_* \left(\pi^*M\right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ is equal to $rp_g$, where $p_g$ is the \emph{geometric genus} of $X$. Using the notion of specialty we prove the following result (Theorem~\ref{Th:BlMXNormal}): \begin{theorem*} \label{Th:IntroBlMXNormal} Let $(X,x)$ be the germ of a normal Gorenstein surface singularity and $M$ be a special module. Let $f \colon \mathrm{Bl}_M(X) \to X$ be the Raynaud-Gruson blow-up of $X$ at the module $M$. Then, $\mathrm{Bl}_M(X)$ is normal. \end{theorem*} Theorem~\ref{Theo:IntroBlowUpMMinAdapt} and Theorem~\ref{Th:IntroBlMXNormal} generalize the results of Gustavsen and Ile~\cite{Gus} to the case of normal surface singularities. 
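For orientation, here is the classical picture for the simplest rational double points (standard McKay correspondence material, recalled for illustration and not part of the results above): for the $A_n$ singularity $X=\{xy=z^{n+1}\}\subset({\mathbb{C}}^3,0)$, the minimal resolution $\pi\colon\tilde{X}\to X$ has exceptional curves $E_1,\dots,E_n$, every reflexive module is special, and the non-trivial indecomposable reflexive modules $M_1,\dots,M_n$ have rank one, with full sheaves satisfying
\begin{equation*}
c_1\big(\Sf{M}_i\big)\cdot E_j=\delta_{ij}, \qquad 1\le i,j\le n.
\end{equation*}
Accordingly, the Raynaud-Gruson blow-up of $X$ at $M_i$ is the normal partial resolution obtained from $\tilde{X}$ by contracting every $E_j$ with $j\neq i$, so that a single exceptional curve survives.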
They prove that the blow-up at any maximal Cohen-Macaulay module over a rational surface singularity is a partial resolution dominated by the minimal resolution. This follows from the following two properties of rational singularities: \begin{itemize} \item the minimal resolution of a rational singularity is the minimal adapted resolution of every maximal Cohen-Macaulay module, \item the blow-up of a \emph{complete ideal} (in the sense of Lipman~\cite{Li}) is always normal. \end{itemize} Both assertions fail for a general normal surface singularity. Indeed, by~\cite{BoRo} we know that the minimal resolution of a Gorenstein normal surface singularity is in general not the minimal adapted resolution of every maximal Cohen-Macaulay module. Moreover, in Example~\ref{ex:Ex2} we provide a normal, Gorenstein surface singularity and a maximal Cohen-Macaulay module such that the Raynaud-Gruson blow-up is not normal. In order to prove both theorems we use the properties of the minimal adapted resolution and the techniques developed in~\cite{BoRo}. As a consequence of Theorem~\ref{Theo:IntroBlowUpMMinAdapt} and Theorem~\ref{Th:IntroBlMXNormal} we generalize the McKay correspondence given by Artin and Verdier~\cite{AV} as follows (Corollary~\ref{Cor:PrincipalNuevo}): \begin{corollary*} \label{Cor:IntroPrincipalNuevo} Let $(X,x)$ be a normal Gorenstein surface singularity. Then, there exists a bijection between the following sets: \begin{enumerate} \item The set of special, indecomposable $\Ss{X}$-modules up to isomorphism. \item The set of irreducible divisors $E$ over $x$ such that at any resolution of $X$ where $E$ appears, the Gorenstein form has neither zeros nor poles along $E$. \item The set of partial resolutions $\psi\colon Y \to X$ with irreducible exceptional divisor $E$ such that: \begin{enumerate} \item the Gorenstein form has neither zeros nor poles along $E \setminus \text{Sing $Y$}$. 
\item the partial resolution is dominated by a resolution such that the Gorenstein form has neither zeros nor poles along its exceptional divisor. \end{enumerate} \end{enumerate} \end{corollary*} In the last section we use the theory of matrix factorizations developed by Eisenbud~\cite{Ei} to compute explicit examples of blow-ups at reflexive modules. In particular, we prove that the \emph{fundamental module} of the hypersurface singularity given by $f=x^3+z^3+y^3$ is not a special module. The organization of this paper is as follows: In \S~\ref{Sec:Pre} we give preliminary results about full sheaves over normal, Gorenstein surface singularities, the Raynaud-Gruson blow-up at coherent sheaves, and complete ideals. In \S~\ref{Sec:Adap} we define the notion of adapted resolutions over any normal singularity of arbitrary dimension. In \S~\ref{sec:Blowup} we prove our main results and generalize some results given in~\cite{BoRo}. In \S~\ref{Sec:Matrix} we recall some basics on matrix factorization and compute explicit examples of the Raynaud-Gruson blow-up of maximal Cohen-Macaulay modules. \section{Preliminaries} \label{Sec:Pre} In this section we recall basics on full sheaves over Gorenstein singularities, the Raynaud-Gruson blow-up at coherent sheaves, and complete ideals. We assume basic familiarity with dualizing sheaves, modules and normal surface singularities; see~\cite{BrHe,Har1,Ne,Ishi} for more details. \subsection{Setting and notation} Throughout this article, we denote by $(X,x)$ either a complex analytic normal surface germ, or the spectrum of a normal complete $\mathbb{C}$-algebra of dimension $2$. In a few instances it will denote the spectrum of a normal complete $\mathbb{C}$-algebra of dimension $n$. In this situation $X$ has a \emph{dualizing sheaf} $\omega_X$, and we also denote by $\omega_X$ its stalk at $x\in X$, which is called the \emph{dualizing module} of the ring $\Ss{X,x}$ (see~\cite[Chapter~5~\S~3]{Ishi} for more details). 
If $(X,x)$ is a Gorenstein normal singularity, then the dualizing module coincides with $\Ss{X,x}$. Let \begin{equation*} \pi \colon \tilde{X}\to X, \end{equation*} be a resolution of singularities. The exceptional divisor is denoted by $E:=\pi^{-1}(x)$, with irreducible components $E_1,\dots,E_m$. If $(X,x)$ is a Gorenstein surface singularity, there is a $2$-form $\Omega_{\tilde{X}}$ which is meromorphic in $\tilde{X}$, and has neither zeros nor poles in $\tilde{X}\setminus E$; this form is called the \emph{Gorenstein form}. Let $\text{div}(\Omega_{\tilde{X}})=\sum q_iE_i$ be the divisor associated with the Gorenstein form. The coefficients $q_i$ are independent of the choice of the form $\Omega_{\tilde{X}}$ with these properties. \begin{definition} \label{def:smallresgor} Let $\pi \colon \tilde{X}\to X$ be a resolution of a normal Gorenstein surface singularity. The {\em canonical cycle} is defined as $Z_k:=\sum_i -q_iE_i$, where the $q_i$ are the coefficients defined above. We say that $\tilde{X}$ is {\em small with respect to the Gorenstein form} if $Z_k$ is greater than or equal to $0$. The {\em geometric genus} of $X$ is defined to be the dimension as a $\mathbb{C}$-vector space of $R^1\pi_*\Ss{\tilde{X}}$ for any resolution. \end{definition} Following Reid~\cite{Reid} we have the following definition: \begin{definition} Let $(X,x)$ be the germ of a normal surface singularity. A \emph{partial resolution} is a proper birational morphism $\pi\colon Y\to X$ with $Y$ a normal variety. \end{definition} \subsection{Cohen-Macaulay modules and reflexive modules} Let $X$ be a normal variety. Let $\Homs_{\Ss{X}}(\bullet,\bullet)$ and $\Exts^i_{\Ss{X}}(\bullet,\bullet)$ be the sheaf theoretic Hom and Ext functors. The dual of an $\Ss{X}$-module $M$ is denoted by $M^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$}}:=\Homs_{\Ss{X}}(M,\Ss{X})$ and its $\omega_X$-dual is $\Homs_{\Ss{X}}(M,\omega_X)$. An $\Ss{X}$-module $M$ is called \emph{reflexive} (resp. 
$\omega_X$-\emph{reflexive}) if the natural homomorphism from $M$ to $M^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ (resp. to $\Homs_{\Ss{X}}(\Homs_{\Ss{X}}(M,\omega_X),\omega_X)$) is an isomorphism. An $\Ss{X}$-module $M$ is called \emph{Cohen-Macaulay} if for every $y \in X$ the depth of the stalk $M_y$ is equal to the dimension of the support of the module $M_y$. If the depth of $M_y$ is equal to the dimension of $\Ss{X,y}$, then the module $M_y$ is called \emph{maximal Cohen-Macaulay}. A module is {\em indecomposable} if it cannot be written as a direct sum of two non-trivial submodules. \subsection{Full sheaves and the minimal adapted resolution} Let $(X,x)$ be the germ of a normal surface singularity and \begin{equation*} \pi \colon \tilde{X} \to X, \end{equation*} be a resolution. Recall the following definition of full sheaves, as in~\cite[Definition~1.1]{Ka}. \begin{definition} An $\Ss{\tilde{X}}$-module $\Sf{M}$ is called \emph{full} if there is a reflexive $\Ss{X}$-module $M$ such that $\Sf{M} \cong \left(\pi^* M\right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$. We call $\Sf{M}$ the full sheaf associated to $M$. \end{definition} Another important notion is the concept of specialty. Wunram~\cite{Wu} and Riemenschneider~\cite{Rie} defined a special full sheaf as a full sheaf whose dual has vanishing first cohomology. In \cite{BoRo} the author and Bobadilla generalized this definition as follows: \begin{definition}\label{def:especial} \label{def:espmodule} Let $M$ be a reflexive $\Ss{X}$-module of rank $r$ and $\Sf{M}$ be the full sheaf associated to $M$. The full sheaf $\Sf{M}$ is called \emph{special} if $\dimc{R^1 \pi_* \left(\Sf{M}^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$}}\right)} = rp_g$. We say that $M$ is \emph{a special module} if, for any resolution, the full sheaf associated to $M$ is special. 
\end{definition} Let $M$ be a reflexive $\Ss{X}$-module. The minimal adapted resolution associated to $M$ was defined in \cite{BoRo}, and it plays a crucial role in the classification of special reflexive modules. Recall: \begin{definition} \label{def:minadap} Let $M$ be a reflexive $\Ss{X}$-module. The minimal resolution $\pi \colon \tilde{X}\to X$ for which the associated full sheaf $\left(\pi^* M\right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ is generated by global sections is called the {\em minimal adapted resolution} associated to $M$. \end{definition} \subsection{Blowing up at coherent sheaves} In this paper we use the description of the Raynaud-Gruson blow-up given by Villamayor~\cite{Villa}. Let $R$ be a domain with quotient field $K$. The fractional ideals are the finitely generated $R$-submodules of $K$. Two fractional ideals $J_1$ and $J_2$ are isomorphic if and only if there exists some nonzero $k\in K$ such that $J_1=kJ_2$. The norm of a module is defined in the class of all fractional ideals modulo isomorphism as follows: \begin{definition}[{\cite[p.~123]{Villa}}] Let $M$ be a finitely generated $R$-module of rank $r$. The \emph{norm} of $M$ is the class \begin{equation*} \| M \|_R := \im\left( \bigwedge^r M \to \bigwedge^r M \otimes K \cong K \right)/\sim, \end{equation*} where $\sim$ denotes the isomorphism as fractional ideals. \end{definition} The blow-up of $R$ at the module $M$ is described as follows: \begin{theorem}[{\cite[Theorem~3.3]{Villa}}] \label{Th:BlowUpM} Let $M$ be a finitely generated $R$-module of rank $r$. There exists a blow-up: \begin{equation*} f\colon \mathrm{Bl}_M(R) \to \text{Spec}(R), \end{equation*} with the following properties: \begin{enumerate} \item The sheaf $f^*M / \mathrm{tor}$ is a locally free sheaf of $\Ss{\mathrm{Bl}_M(R)}$-modules of rank $r$. 
\item (Universal property) For any morphism $\sigma \colon Z \to \text{Spec}(R)$ such that $\sigma^* M / \mathrm{tor}$ is a locally free sheaf of $\Ss{Z}$-modules of rank $r$, there exists a unique morphism $\beta\colon Z \to \mathrm{Bl}_M(R)$ such that $f \circ \beta = \sigma$. \end{enumerate} \end{theorem} As mentioned before, Villamayor constructed the above blow-up under more general assumptions, but in our setting Theorem~\ref{Th:BlowUpM} is sufficient. Furthermore, in our situation $R$ is a domain, hence this blow-up is a proper birational morphism (see \cite{Villa} and \cite{Ros} for details). \begin{remark} \label{remark:BlowupM} Recall that the blow-up stated in Theorem~\ref{Th:BlowUpM} was constructed by Villamayor by taking the blow-up at the fractional ideal $\|M\|_R$ (at any representative). This fact will be used later in the article. \end{remark} \subsection{Complete ideals} In this section we recall some basic notions concerning complete ideals. See~\cite{Li} for more details. Let $X$ be an integral scheme with sheaf of rational functions $\mathcal{R}_X$. Let $\mathcal{J}$ be a quasi-coherent $\Ss{X}$-submodule of $\mathcal{R}_X$. Denote by \begin{equation*} \mathcal{A}:= \bigoplus_{n \geq 0} \mathcal{J}^n,\, (\mathcal{J}^0 := \Ss{X}), \quad \text{and} \quad \mathcal{R}:=\bigoplus_{n \geq 0} \mathcal{R}_X^n, \, (\mathcal{R}_X^0 := \mathcal{R}_X), \end{equation*} where $\mathcal{J}^n$ (resp. $\mathcal{R}_X^n$) is the product (as fractional ideals) of $n$ copies of $\mathcal{J}$ (resp. $\mathcal{R}_X$). Hence, the sheaves $\mathcal{A}$ and $\mathcal{R}$ are quasi-coherent graded $\Ss{X}$-algebras. Let $\mathcal{A}'$ be the integral closure of $\mathcal{A}$ in $\mathcal{R}$. Denote by $\mathcal{J}_n$ the image of the following composition: \begin{equation*} \mathcal{A}' \subset \mathcal{R} \xrightarrow{\mathrm{pr}_n} \mathcal{R}_X, \end{equation*} where $\mathrm{pr}_n$ is the $n$-th projection.
\begin{definition} We say that $\mathcal{J}_1$ is the \emph{completion} of $\mathcal{J}$. Furthermore, $\mathcal{J}$ is complete if $\mathcal{J}=\mathcal{J}_1$. \end{definition} The following two lemmas will play a crucial role later. \begin{lemma}[{\cite[Lemma~5.2]{Li}}] \label{lema:blowupcomplete} Let $X$ be an integral scheme and $\mathcal{J}$ be a nonzero coherent $\Ss{X}$-submodule of $\mathcal{R}_X$. If all the positive powers of $\mathcal{J}$ are complete, then the scheme obtained by blowing up $\mathcal{J}$ is normal. \end{lemma} \begin{lemma}[{\cite[Lemma~5.3]{Li}}] \label{lema:induccioncomplete} Let $X$, $\mathcal{R}_X$ and $\mathcal{J}$ be as in Lemma~\ref{lema:blowupcomplete}. Let \begin{equation*} f\colon X \to Y, \end{equation*} be a quasi-compact, quasi-separated birational morphism. If $\mathcal{J}$ is complete, then so is $f_* \mathcal{J}$. \end{lemma} \section{Adapted resolutions} \label{Sec:Adap} In this section we introduce the notion of adapted resolutions of maximal Cohen-Macaulay modules in any dimension. We prove that if the singularity has dimension two, then the minimal adapted resolution is an adapted resolution (Proposition~\ref{Prop:2daconstruccion}). \begin{definition} Let $(X,x)$ be the complex analytic germ of a normal $n$-dimensional singularity. Let $M$ be a maximal Cohen-Macaulay $\Ss{X}$-module. A resolution \begin{equation*} \pi \colon \tilde{X} \to X, \end{equation*} is called an \emph{adapted resolution} associated to $M$ if $\pi^* M / \mathrm{tor}$ is a locally free $\Ss{\tilde{X}}$-module. \end{definition} Note that if $(X,x)$ is a normal surface singularity and $M$ is a reflexive $\Ss{X}$-module, then its minimal adapted resolution is an adapted resolution. The following proposition tells us that adapted resolutions exist in any dimension. \begin{proposition} \label{prop:BlowUpM2} Let $(X,x)$ be the germ of a normal $n$-dimensional singularity.
Let $M$ be a maximal Cohen-Macaulay module and \begin{equation*} f\colon \mathrm{Bl}_M(X) \to X, \end{equation*} be the blow-up of $X$ at $M$. Then, for any resolution \begin{equation*} \sigma\colon Y \to \mathrm{Bl}_M(X), \end{equation*} the composition $f \circ \sigma$ is an adapted resolution associated to $M$. \end{proposition} \removelastskip\par \noindent {\sc Proof.}\enspace The module $M$ is maximal Cohen-Macaulay, therefore $M$ is free over the regular part of $X$. Hence, the morphism $f$ is an isomorphism over the regular part of $X$. Thus \begin{equation*} f\circ\sigma\colon Y \to X, \end{equation*} is a resolution of $X$. Denote by \begin{equation*} \tilde{M}:= f^*M / \mathrm{tor} \quad \text{ and } \quad \tilde{\Sf{M}} := \sigma^* \tilde{M}, \end{equation*} where $\mathrm{tor}$ denotes the torsion part of $f^*M$. The sheaf $\tilde{M}$ is locally free and generated by global sections, therefore $\tilde{\Sf{M}}$ is also locally free and generated by global sections. Now, consider the following two exact sequences: \begin{align} \label{exact:Prop3.2Ex1} &0 \to \mathrm{tor} \to f^* M \to \tilde{M} \to 0,\\ \label{exact:Prop3.2Ex2} &0 \to \mathrm{tor} \to \sigma^*f^* M \to \sigma^* f^* M / \mathrm{tor} \to 0. \end{align} Applying the functor $\sigma^*$ to the exact sequence~\eqref{exact:Prop3.2Ex1} and using~\eqref{exact:Prop3.2Ex2}, we get the following commutative diagram: \begin{equation*} \begin{tikzcd} 0 \arrow[r] & \ker \arrow[r] \arrow[dr, phantom, "\circlearrowleft"]& \sigma^* f^* M \arrow[r] \arrow[dr, phantom, "\circlearrowleft"] & \tilde{\Sf{M}} \arrow[r] & 0\\ 0 \arrow[r] & \mathrm{tor} \arrow[u] \arrow[r] & \sigma^* f^* M \arrow[r] \arrow[u,"="] & \sigma^* f^* M / \mathrm{tor} \arrow[r]\arrow[u,"\alpha"] & 0 \end{tikzcd} \end{equation*} Notice that the morphism $\alpha$ is an isomorphism: it is clearly a surjection, and its kernel is torsion while $\sigma^* f^* M / \mathrm{tor}$ is torsion-free, so $\alpha$ is also injective.
Since $\alpha$ is an isomorphism, the sheaf $\sigma^* f^* M / \mathrm{tor}$ is locally free. This proves the proposition. \hbox{ }{\qed} \par As an application of Theorem~\ref{Th:BlowUpM} we can construct the minimal adapted resolution of a reflexive module as follows: \begin{proposition} \label{Prop:2daconstruccion} Let $(X,x)$ be the germ of a normal surface singularity. Let $M$ be a reflexive $\Ss{X}$-module and \begin{equation*} \pi\colon \tilde{X} \to X, \end{equation*} be the associated minimal adapted resolution. Let $f \colon \mathrm{Bl}_M(X) \to X$ be the blow-up of $X$ at the module $M$ and \begin{equation*} \rho\colon \widetilde{\mathrm{Bl}_M(X)}_{\text{min}} \to \mathrm{Bl}_M(X), \end{equation*} be the minimal resolution of $\mathrm{Bl}_M(X)$. Then, $\widetilde{\mathrm{Bl}_M(X)}_{\text{min}} \cong \tilde{X}$. \end{proposition} \removelastskip\par \noindent {\sc Proof.}\enspace Consider the following commutative diagram: \begin{equation*} \begin{tikzpicture} \matrix (m)[matrix of math nodes, nodes in empty cells,text height=2ex, text depth=0.25ex, column sep=3.5em,row sep=3em] { \widetilde{\mathrm{Bl}_M(X)}_{\text{min}} & \tilde{X}\\ \mathrm{Bl}_M(X) & X\\ }; \draw[-stealth] (m-2-1) edge node[below]{$f$} (m-2-2); \draw[-stealth] (m-1-1) edge node[left]{$\rho$} (m-2-1); \draw[-stealth] (m-1-2) edge node[right]{$\pi$} (m-2-2); \draw[-stealth] (m-1-2) edge node[auto]{$\phi$} (m-2-1); \draw[-stealth] (m-1-2) edge node[above]{$\varphi$} (m-1-1); \end{tikzpicture} \end{equation*} where $\phi$ is given by the universal property of the blow-up of $X$ at $M$ and $\varphi$ comes from the universal property of the minimal resolution of $\mathrm{Bl}_M(X)$. By~\cite[Proposition~5.1]{BoRo}, the resolution $\tilde{X}$ is the minimal resolution of $X$ such that the full sheaf $\Sf{M}:= \left(\pi^* M\right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ is generated by global sections.
By Proposition~\ref{prop:BlowUpM2}, the full sheaf $\tilde{\Sf{M}}:= \left(\rho^* f^* M\right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ is generated by global sections, hence the morphism $\varphi$ is an isomorphism. This proves the proposition. \hbox{ }{\qed} \par \section{Normality of the blow-up} \label{sec:Blowup} In the previous section we used the blow-up of a reflexive module to recover its minimal adapted resolution, so it is natural to study the properties of this blow-up. In this section we prove that the Raynaud-Gruson blow-up of a special module over a normal Gorenstein surface singularity is a partial resolution (Theorem~\ref{Th:BlMXNormal}). Furthermore, we prove that this blow-up can be constructed via a resolution (depending on the reflexive module) and the first Chern class of the associated full sheaf (Theorem~\ref{Theo:BlowUpMMinAdapt}). First, we observe that for certain coherent modules the natural map from the tensor product of the pushforwards to the pushforward of the tensor product is surjective. \begin{lemma} \label{lemma:DtensorM} Let $(X,x)$ be the germ of a normal surface singularity. Let $\pi \colon \tilde{X} \to X$ be any resolution with $E$ the exceptional divisor. Let $\Sf{M}$ be a $\Ss{\tilde{X}}$-module generated by global sections and $\Sf{A}$ be an $\Ss{\tilde{X}}$-module of dimension one such that its support intersects $E$ in a finite number of points. Then, the natural morphism \begin{equation} \alpha \colon \pi_* \Sf{M} \otimes \pi_* \Sf{A} \to \pi_* \left( \Sf{M} \otimes \Sf{A}\right), \end{equation} is a surjection. \end{lemma} \removelastskip\par \noindent {\sc Proof.}\enspace Consider the natural morphism \begin{equation*} \alpha \colon \pi_* \Sf{M} \otimes \pi_* \Sf{A} \to \pi_* \left(\Sf{M}\otimes \Sf{A}\right).
\end{equation*} Since the support of $\Sf{A}$ intersects the exceptional divisor in a finite number of points, we can identify $\pi_* \left(\Sf{M}\otimes \Sf{A}\right)$ with $\Sf{M}\otimes \Sf{A}$. Let $m \otimes a$ be a section of $\Sf{M}\otimes \Sf{A}$. Since $\Sf{M}$ is generated by global sections, there exist global sections $\psi_1, \dots, \psi_n$ of $\Sf{M}$ and sections $f_1, \dots, f_n$ of $\Ss{\tilde{X}}$ defined in an open neighbourhood of the support of $\Sf{A}$ such that $m = \sum_i \psi_i f_i$. Set $t := \sum_i \psi_i \otimes (f_i \cdot a) \in \pi_* \Sf{M} \otimes \pi_* \Sf{A}$. By construction, $\alpha(t) = m \otimes a$. Therefore, the natural morphism $\alpha$ is a surjection. \hbox{ }{\qed} \par From now on, let $(X,x)$ be the germ of a normal Gorenstein surface singularity and $M$ be a special module of rank $r$. Let $\pi \colon \tilde{X} \to X$ be the minimal adapted resolution associated to $M$ with $E=\cup E_i$ the exceptional divisor with irreducible components $E_i$. Let \begin{equation*} \Sf{M}:= \left(\pi^* M \right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}} \quad \text{and} \quad \Sf{L}:=\mathrm{det}(\Sf{M}). \end{equation*} By~\cite[Proposition~7.4]{BoRo}, the full sheaf $\Sf{M}$ is an extension of the determinant bundle $\Sf{L}$ by $\Ss{\tilde{X}}^{r-1}$. Take $r$ generic global sections $\phi_1,...,\phi_r$ of $\Sf{M}$ and consider the following exact sequence given by the sections: \begin{equation*} 0 \to \Ss{\tilde{X}}^r \xrightarrow{(\phi_1,...,\phi_r)} \Sf{M} \to \Sf{A}' \to 0. \end{equation*} By \cite[Lemma~5.4]{BoRo}, the degeneracy module $\Sf{A'}$ is isomorphic to $\Ss{D}$, where $D\subset\tilde{X}$ is a smooth curve meeting the exceptional divisor transversely at its smooth locus. The following lemma tells us that the norm of $M$ has a representative given by the global sections of $\Sf{L}=\mathrm{det}(\Sf{M})$.
\begin{lemma} \label{lema:NormaML} Let $(X,x)$ be the germ of a normal Gorenstein surface singularity and $M$ be a special $\Ss{X}$-module. Let $\pi \colon \tilde{X} \to X$ be the minimal adapted resolution associated to $M$. Denote by $\Sf{M}=(\pi^*M)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ and by $\Sf{L}:=\mathrm{det}(\Sf{M})$. Then, $\pi_* \Sf{L}$ is a representative of $\|M\|_{\Ss{X}}$. \end{lemma} \removelastskip\par \noindent {\sc Proof.}\enspace The proof is the same as \cite[Lemma~4.1]{Gus}. \hbox{ }{\qed} \par We want to prove that the blow-up of a special module is a partial resolution. By Lemma~\ref{lema:NormaML}, Remark~\ref{remark:BlowupM}, Lemma~\ref{lema:blowupcomplete} and Lemma~\ref{lema:induccioncomplete}, it is enough to prove that $(\pi_* \Sf{L})^n$ is complete for any positive integer $n$ (where $(-)^n$ denotes the product of fractional ideals). This is the strategy of the proof of Theorem~\ref{Th:BlMXNormal}. The following remark tells us that the tensor powers of a line bundle coincide with its powers as a sheaf of fractional ideals; we need this in order to prove that $(\pi_* \Sf{L})^n$ is complete. \begin{remark} \label{remark:Ltensorn} Let $(X,x)$ be the germ of a normal singularity. Let $\pi \colon \tilde{X} \to X$ be a resolution and let $\Sf{L}$ be a line bundle over $\tilde{X}$. Notice that the natural morphism \begin{equation} \label{eq:remark1} \Sf{L}^{\otimes n} \to \Sf{L}^{n}, \end{equation} is an isomorphism. Indeed, the morphism~\eqref{eq:remark1} is always a surjection. Now, the sheaf $\Sf{L}$ is locally free, hence it is flat. Therefore, the natural morphism is injective. \hbox{ }{\qed} \par \end{remark} The following theorem tells us that the blow-up of a special module is a partial resolution. \begin{theorem} \label{Th:BlMXNormal} Let $(X,x)$ be the germ of a normal Gorenstein surface singularity and $M$ be a special module.
Let $f \colon \mathrm{Bl}_M(X) \to X$ be the blow-up of $X$ at the module $M$. Then, $\mathrm{Bl}_M(X)$ is normal. \end{theorem} \removelastskip\par \noindent {\sc Proof.}\enspace Let \begin{equation*} \pi \colon \tilde{X} \to X, \end{equation*} be the minimal adapted resolution associated to $M$. Denote by $\Sf{M}=(\pi^*M)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ and by $\Sf{L}$ its determinant. The resolution is small with respect to the Gorenstein form. Hence, the canonical cycle $Z_K$ is non-negative. Moreover, $D$ does not meet the support of $Z_K$ (see \cite[Proposition~5.14]{BoRo}). Therefore, for any positive integer $n$ we have \begin{equation*} \text{Tor}_1^{\Ss{\tilde{X}}}(\Ss{D}, \mathcal{L}^{\otimes n} \otimes \Ss{Z_K}) = 0 \quad \text{and} \quad \Ss{D} \otimes \left( \mathcal{L}^{\otimes n} \otimes \Ss{Z_K}\right) = 0. \end{equation*} By these equalities, applying $- \otimes \left( \Sf{L}^{\otimes n} \otimes \Ss{Z_K}\right)$ to the exact sequence \begin{equation} \label{exctseq:detM1} 0 \to \Ss{\tilde{X}} \to \Sf{L} \to \Ss{D} \to 0, \end{equation} we get \begin{equation} \label{eq:isosDet2} \Sf{L}^{\otimes n} \otimes \Ss{Z_K} \cong \Sf{L}^{\otimes (n+1)} \otimes \Ss{Z_K}. \end{equation} Tensoring \eqref{exctseq:detM1} with $\Sf{L}^{\otimes n}$ we get \begin{equation} \label{exctseq:detM2} 0 \to \Sf{L}^{\otimes n} \to \Sf{L}^{\otimes (n+1)} \to \Ss{D}\otimes \Sf{L}^{\otimes n} \to 0. 
\end{equation} Now, applying the functor $\pi_*$ to the exact sequence \eqref{exctseq:detM2} and using the isomorphism \eqref{eq:isosDet2} we obtain the following commutative diagram: \begin{equation} \label{diagram:Th4.31} \begin{tikzcd} \dots \arrow[r] & \pi_* \left(\Ss{D}\otimes \Sf{L}^{\otimes n}\right) \arrow[r] & R^1 \pi_* \Sf{L}^{\otimes n} \arrow[r] \arrow[d] \arrow[dr, phantom, "\circlearrowleft"] &R^1 \pi_* \Sf{L}^{\otimes (n+1)} \arrow[r] \arrow[d] & 0 \\ &0 \arrow[r]& R^1 \pi_* \left(\Ss{Z_K}\otimes \Sf{L}^{\otimes n}\right) \arrow[r] \arrow[d] & R^1 \pi_* \left(\Sf{L}^{\otimes (n+1)} \otimes \Ss{Z_K}\right) \arrow[r] \arrow[d] & 0\\ & & 0 & 0 & \end{tikzcd} \end{equation} where the two columns are induced by the exact sequence \begin{equation} \label{eq:ExactCanonico} 0 \to \Cs{\tilde{X}} \to \Ss{\tilde{X}} \to \Ss{Z_K} \to 0. \end{equation} Note that the existence of the exact sequence~\eqref{eq:ExactCanonico} follows from the fact that the resolution is small with respect to the Gorenstein form. By the diagram~\eqref{diagram:Th4.31} and induction on $n$ (the case $n=1$ follows by~\cite[Lemma~7.5]{BoRo}) we get the following equalities: \begin{equation} \label{eq:Th4.3.1} \dimc{R^1 \pi_* \left(\Sf{L}^{\otimes n}\right)} = \dimc{R^1 \pi_* \left( \Ss{Z_K}\otimes \Sf{L}^{\otimes n}\right)} =p_g. \end{equation} Now, applying the functor $\pi_*$ to the exact sequence~\eqref{exctseq:detM1} and using~\eqref{eq:Th4.3.1} we get the exact sequence: \begin{equation} \label{exctseq:abajodetM1'} 0 \to \Ss{X} \to \pi_* \Sf{L} \to \pi_* \Ss{D} \to 0. \end{equation} The exact sequences \eqref{exctseq:detM1} and \eqref{exctseq:abajodetM1'} tell us that $\Sf{L}$ is generated by global sections. Thus, the sheaf $\Sf{L}^{\otimes n}$ is also generated by global sections.
Now, we prove by induction that the natural morphism \begin{equation} \label{eq:Th4.3.2} \left(\pi_* \Sf{L} \right)^{\otimes n} \to \pi_* \left( \Sf{L}^{\otimes n} \right), \end{equation} is a surjection for any positive integer $n$. The case $n=1$ is clearly true. Assume that the assertion is true for some $n=k$. We then prove the assertion for $n=k+1$. Consider the following commutative diagram obtained by tensoring \eqref{exctseq:detM1} and \eqref{exctseq:abajodetM1'} with $\Sf{L}^{\otimes k}$ and $\left(\pi_* \Sf{L}\right)^{\otimes k}$, respectively: \begin{equation*} \begin{tikzcd} 0 \arrow[r] & \pi_* \left( \Sf{L}^{\otimes k}\right) \arrow[r] & \pi_* \left( \Sf{L}^{\otimes (k+1)}\right) \arrow[r,"\sigma"] & \pi_* \left( \Ss{D} \otimes \Sf{L}^{\otimes k} \right)\arrow[r] & 0\\ & \left(\pi_* \Sf{L}\right)^{\otimes k} \arrow[r] \arrow[u,"\alpha_1"] \arrow[ur, phantom, "\circlearrowleft"] & \left(\pi_* \Sf{L}\right)^{\otimes (k+1)} \arrow[r] \arrow[u,"\alpha_2"] \arrow[ur, phantom, "\circlearrowleft"] & \pi_* \Ss{D} \otimes \left(\pi_* \Sf{L}\right)^{\otimes k} \arrow[r] \arrow[u,"\alpha_3"]& 0 \end{tikzcd} \end{equation*} where the morphisms in the columns are the natural ones. The morphism $\sigma$ is a surjection by~\eqref{eq:Th4.3.1}. The morphism $\alpha_1$ is a surjection by the induction hypothesis. The morphism $\alpha_3$ is also a surjection by Lemma~\ref{lemma:DtensorM}. This implies that $\alpha_2$ is a surjection, which proves the induction step and hence the surjectivity of~\eqref{eq:Th4.3.2}. Now, we prove that $(\pi_* \Sf{L})^n$ is complete for any positive integer $n$. By Remark~\ref{remark:Ltensorn}, we have \begin{equation} \label{eq:Ln} \pi_* \left( \Sf{L}^{\otimes n}\right) = \pi_* \left(\Sf{L}^n \right).
\end{equation} Consider the composition \begin{equation} \label{eq:composition} \begin{tikzpicture} \matrix (m)[matrix of math nodes, nodes in empty cells,text height=1.5ex, text depth=0.25ex, column sep=2.5em,row sep=2em] { \left(\pi_* \Sf{L} \right)^{\otimes n} & \pi_* \left( \Sf{L}^{\otimes n} \right) & \pi_* \left( \Sf{L}^{n} \right). \\ }; \draw[-stealth] (m-1-1) -- (m-1-2); \draw[-stealth] (m-1-2) edge node[auto]{$\sigma$} (m-1-3); \end{tikzpicture} \end{equation} Since the natural map given in~\eqref{eq:Th4.3.2} is a surjection and by the equality~\eqref{eq:Ln}, we get that the composition~\eqref{eq:composition} is also a surjection. Therefore, $(\pi_* \Sf{L})^n = \pi_* \left(\Sf{L}^n \right)$. By \eqref{eq:Ln}, this implies $\pi_* \left( \Sf{L}^{\otimes n} \right) = \left(\pi_* \Sf{L} \right)^n$. Finally by Lemma~\ref{lema:induccioncomplete} the ideal $\left(\pi_* \Sf{L} \right)^n$ is complete. Now, the theorem follows by Remark~\ref{remark:BlowupM}, Lemma~\ref{lema:blowupcomplete} and Lemma~\ref{lema:induccioncomplete}. This proves the theorem. \hbox{ }{\qed} \par The following theorem tell us how to recover the normalization of the blow-up at a reflexive module using the minimal adapted resolution. \begin{theorem} \label{Theo:BlowUpMMinAdapt} Let $(X,x)$ be the germ of a normal surface singularity. Let $M$ be a reflexive $\Ss{X}$-module of rank $r$. Let $\pi\colon \tilde{X} \to X$ be the minimal adapted resolution associated to $M$ with exceptional divisor $E$. Let $E_1,\dots,E_n$ be the irreducible components of $E$ and $\Sf{M}:=(\pi^*M)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ be the full sheaf associated to $M$. Then, the normalization of $\mathrm{Bl}_M(X)$ is obtained by contracting the irreducible components $E_j$ such that $c_1\left(\Sf{M} \right) \cdot E_j =0$. 
\end{theorem} \removelastskip\par \noindent {\sc Proof.}\enspace Let $E_1,\dots,E_m$ be the irreducible components of $E$ such that $c_1(\Sf{M})\cdot E_j \neq 0$. Let \begin{equation*} h\colon \tilde{X} \to Y, \end{equation*} be the contraction of all the irreducible components of $E$ different from $E_1,\dots,E_m$. Denote by $E':=\bigcup_{j=1}^m E_j$ and $S:=h(E\setminus E')$. Notice that $S$ is a finite set of cardinality equal to the number of connected components of $E\setminus E'$. By Grauert's contraction theorem~\cite{Gr} the variety $Y$ is normal. Let \begin{equation*} v \colon Y \to X, \end{equation*} be the natural morphism such that $\pi = v \circ h$. We prove that $h_* \Sf{M}$ is a locally free $\Ss{Y}$-module generated by its global sections. Let $\phi_1, \dots, \phi_r$ be generic global sections of $\Sf{M}$. By~\cite[Lemma~5.4]{BoRo}, the exact sequence given by the sections is \begin{equation} \label{eq:Th4.8.1} 0 \to \Ss{\tilde{X}}^r \xrightarrow{(\phi_1,...,\phi_r)} \Sf{M} \to \Ss{D} \to 0. \end{equation} Applying the functor $h_*$ to the exact sequence~\eqref{eq:Th4.8.1} we get \begin{equation} \label{eq:hM} 0 \to \Ss{Y}^r \to h_*\Sf{M} \stackrel{\psi}{\longrightarrow} h_*\Ss{D}. \end{equation} Notice that the morphism $h$ is an isomorphism on the complement of $E \setminus E'$ and the support of $D$ only intersects $E'$. Hence, the support of $h_*\Ss{D}$ and $S$ are disjoint sets. Thus, the morphism $\psi$ is a surjection and $h_* \Sf{M}$ is a locally free sheaf generated by its global sections. Now, denote by $\tilde{\Sf{M}}:= \left(v^* M \right)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ and consider the natural morphism \begin{equation*} \theta \colon h_* \Sf{M} \to \tilde{\Sf{M}}. \end{equation*} The kernel and cokernel of $\theta$ are torsion; since $Y$ is normal and $h_* \Sf{M}$ is locally free, the sheaf $h_* \Sf{M}$ is torsion-free, and hence the morphism $\theta$ is injective. We now prove that $\theta$ is also a surjection.
Consider the following exact sequence: \begin{equation*} 0 \to h_* \Sf{M} \stackrel{\theta}{\longrightarrow} \tilde{\Sf{M}} \to K \to 0. \end{equation*} Applying the functor $\Homs_{\Ss{Y}}\left(-, \Cs{Y} \right)$ to the exact sequence we get \begin{equation*} 0\to \Homs_{\Ss{Y}}\left(\tilde{\Sf{M}}, \Cs{Y} \right) \to \Homs_{\Ss{Y}}\left(h_* \Sf{M}, \Cs{Y} \right) \to \Exts_{\Ss{Y}}^1 \left(K, \Cs{Y} \right) \to 0. \end{equation*} The sheaf $K$ is supported in a finite set. Therefore, by~\cite[Theorem~3.3.10]{BrHe} the sheaf $\Exts_{\Ss{Y}}^1 \left(K, \Cs{Y}\right)$ must vanish. Thus, we get \begin{equation} \label{eq:OmegaisoY} \Homs_{\Ss{Y}}\left(\tilde{\Sf{M}}, \Cs{Y} \right) \cong \Homs_{\Ss{Y}}\left(h_* \Sf{M}, \Cs{Y} \right). \end{equation} Since $h_* \Sf{M}$ is locally free and $\tilde{\Sf{M}}$ is reflexive, both sheaves are $\Cs{Y}$-reflexive modules. Recall that $\Cs{Y}$-reflexivity is equivalent to reflexivity. Therefore, \begin{equation*} h_* \Sf{M} \cong \tilde{\Sf{M}}. \end{equation*} Consequently, the sheaf $\tilde{\Sf{M}}$ is locally free and it is generated by its global sections. Since $\tilde{\Sf{M}}$ is generated by global sections, we get \begin{equation*} \tilde{\Sf{M}} \cong v^* M / \mathrm{tor}. \end{equation*} In particular, the sheaf $v^* M / \mathrm{tor}$ is locally free. Let \begin{equation*} n \colon \mathrm{NBl}_M(X) \to \mathrm{Bl}_M(X), \end{equation*} be the normalization of $\mathrm{Bl}_M(X)$. Denote by $ \widetilde{\mathrm{NBl}_M(X)}_{\text{min}}$ the minimal resolution of $\mathrm{NBl}_M(X)$.
By the universal property of the blow-up of $X$ at $M$ and the universal property of the normalization, there exist morphisms $g\colon Y \to \mathrm{Bl}_M(X)$ and $\gamma \colon Y \to \mathrm{NBl}_M(X)$ such that the following diagram commutes: \begin{equation*} \begin{tikzpicture} \matrix (m)[matrix of math nodes, nodes in empty cells,text height=1ex, text depth=0.35ex, column sep=3.5em,row sep=3em] { \widetilde{\mathrm{NBl}_M(X)}_{\text{min}}& \mathrm{NBl}_M(X) & \mathrm{Bl}_M(X)\\ \tilde{X} & Y &\\ }; \draw[-stealth] (m-1-1) edge node[auto]{} (m-1-2); \draw[-stealth] (m-2-1) edge node[auto]{$h$} (m-2-2); \draw[-stealth] (m-2-2) edge node[auto]{$g$} (m-1-3); \draw[-stealth] (m-2-2) edge node[auto]{$\gamma$} (m-1-2); \draw[-stealth] (m-1-2) edge node[auto]{$n$} (m-1-3); \end{tikzpicture} \end{equation*} By Proposition~\ref{Prop:2daconstruccion} we know that $\tilde{X} \cong \widetilde{\mathrm{NBl}_M(X)}_{\text{min}}$. Hence, the morphism $\gamma$ is an isomorphism. This proves the theorem. \hbox{ }{\qed} \par \begin{corollary} \label{cor:BlowUpMMinAdapt} Let $(X,x)$ be the germ of a normal Gorenstein surface singularity. Let $M$ be a special $\Ss{X}$-module. Then, the blow-up of $X$ at $M$ has at worst normal Gorenstein singularities. \end{corollary} \removelastskip\par \noindent {\sc Proof.}\enspace Let \begin{equation*} \pi\colon \tilde{X} \to X, \end{equation*} be the minimal adapted resolution associated to $M$ with exceptional divisor $E$ and $\Sf{M}:=(\pi^*M)^{\raise0.9ex\hbox{$\scriptscriptstyle\vee$} \raise0.9ex\hbox{$\scriptscriptstyle\vee$}}$ be the full sheaf associated to $M$. Let $E_1,\dots,E_m$ be the irreducible components of $E$ such that $c_1(\Sf{M})\cdot E_j \neq 0$. If the singularity is rational, then the corollary follows by~\cite[Theorem~2.2]{Curto}. Now assume that the singularity is not rational. Let \begin{equation*} h\colon \tilde{X} \to Y, \end{equation*} be the contraction of all the irreducible components of $E$ different from $E_1,\dots,E_m$.
Denote by $E':=\bigcup_{j=1}^m E_j$ and $S:=h(E\setminus E')$. Recall that $S$ is a finite set. By Theorem~\ref{Th:BlMXNormal} and Theorem~\ref{Theo:BlowUpMMinAdapt} we get $Y \cong \mathrm{Bl}_M(X)$. By Grauert's contraction theorem~\cite{Gr} the restriction \begin{equation*} h|_{\tilde{X} \setminus (E\setminus E')}\colon \tilde{X} \setminus (E\setminus E') \to Y\setminus S, \end{equation*} is an isomorphism. By~\cite[Remark~5.3]{BoRo}, the Gorenstein $2$-form $\Omega_{\tilde{X}}$ does not have any zero or pole along $\tilde{X} \setminus (E\setminus E')$. Therefore, there exists a two-form that does not vanish on $Y\setminus S$, i.e., $Y$ has only Gorenstein singularities. This proves the corollary. \hbox{ }{\qed} \par \begin{remark} \label{Remark:Unico} Let $(X,x)$ be the germ of a normal Gorenstein surface singularity. Let $M$ be a special $\Ss{X}$-module. By Corollary~\ref{cor:BlowUpMMinAdapt}, the blow-up of $X$ at $M$ has at worst normal Gorenstein singularities. The blow-up of $X$ at $M$ is smooth only in the case of the $A_1$-singularity; in all other cases it has normal Gorenstein singularities. \end{remark} \begin{corollary} \label{cor:BlowUpMMinAdapt2} Let $(X,x)$ be the germ of a normal Gorenstein surface singularity. Then, two special modules give isomorphic partial resolutions if and only if they are isomorphic up to free summands. \end{corollary} \removelastskip\par \noindent {\sc Proof.}\enspace The proof follows by Theorem~\ref{Theo:BlowUpMMinAdapt}, Proposition~\ref{Prop:2daconstruccion} and \cite[Corollary~7.12]{BoRo}. \hbox{ }{\qed} \par Now, by Theorem~\ref{Th:BlMXNormal} and Theorem~\ref{Theo:BlowUpMMinAdapt} we generalize the McKay correspondence as follows: \begin{corollary} \label{Cor:PrincipalNuevo} Let $(X,x)$ be a normal Gorenstein surface singularity.
Then, there exists a bijection between the following sets: \begin{enumerate} \item The set of special, indecomposable $\Ss{X}$-modules up to isomorphism. \item The set of irreducible divisors $E$ over $x$ such that, at any resolution of $X$ where $E$ appears, the Gorenstein form has neither zeros nor poles along $E$. \item The set of partial resolutions $\psi\colon Y \to X$ with irreducible exceptional divisor $E$ such that: \begin{enumerate} \item the partial resolution is dominated by a resolution which is small with respect to the Gorenstein form. \item the Gorenstein form does not have any zeros or poles along $E \setminus \text{Sing $Y$}$. \end{enumerate} \end{enumerate} \end{corollary} \removelastskip\par \noindent {\sc Proof.}\enspace Clearly, if $M_1$ and $M_2$ are two special modules such that $M_1 \cong M_2$, then the blow-ups of $X$ at $M_1$ and at $M_2$ are isomorphic. By Corollary~\ref{cor:BlowUpMMinAdapt}, the blow-up of $X$ at $M_1$ is a Gorenstein partial resolution with irreducible exceptional divisor. Hence (1) implies (3). Let $\psi\colon Y \to X$ be a partial resolution satisfying properties (a) and (b). Then, there exists a resolution small with respect to the Gorenstein form such that the Gorenstein form does not have zeros or poles along $E'$, where $E'$ is the strict transform of $E$. Hence (3) implies (2). The bijection between (1) and (2) follows from \cite[Corollary~7.12]{BoRo}. This proves the corollary. \hbox{ }{\qed} \par \section{Applications via matrix factorizations} \label{Sec:Matrix} In this section we use the theory of matrix factorizations in order to compute examples of blow-ups at reflexive modules. We recall some preliminaries on matrix factorizations. \subsection{Matrix factorizations} From now on, let $(S,\mathfrak{m})$ be a regular local ring and suppose that $R=S/I$ is Henselian, where $I$ is a principal ideal of $S$ generated by $f$.
Recall that the notion of matrix factorization, introduced by Eisenbud~\cite{Ei}, gives an equivalence of categories between maximal Cohen-Macaulay $R$-modules and pairs of matrices with entries in $S$ satisfying some conditions. We review this construction. See~\cite{Ei} or~\cite[Chapter~7]{Yos} for more details. \begin{definition} Let $R$ be the coordinate ring of the hypersurface defined by $f$ in $(S,\mathfrak{m})$. A \emph{matrix factorization} of $f$ is an ordered pair of $n \times n$-matrices $(\Phi, \Psi)$ with entries in $S$ such that \begin{equation*} \Phi \cdot \Psi = f \cdot \mathrm{Id}_{S^n}, \quad \Psi \cdot \Phi = f \cdot \mathrm{Id}_{S^n}, \end{equation*} where $\mathrm{Id}_{S^n}$ is the $n\times n$ identity matrix. A \emph{morphism} between matrix factorizations $(\Phi_1, \Psi_1)$ and $(\Phi_2, \Psi_2)$ is a pair of $n \times n$ matrices $(\alpha, \beta)$ with entries in $S$ such that \begin{equation*} \alpha \cdot \Phi_1 = \Phi_2 \cdot \beta, \quad \beta \cdot \Psi_1 = \Psi_2 \cdot \alpha. \end{equation*} A matrix factorization is \emph{reduced} if and only if \begin{equation*} \im \Phi \subset \mathfrak{m} S^n \quad \text{and} \quad \im \Psi \subset \mathfrak{m} S^{n}. \end{equation*} \end{definition} Using matrix factorizations Eisenbud~\cite{Ei} proved the following: \begin{theorem}[{\cite{Ei}}] \label{Th:EisTh} There is a one-to-one correspondence between: \begin{enumerate} \item equivalence classes of reduced matrix factorizations of $f$. \item isomorphism classes of non-trivial periodic minimal free resolutions of $R$-modules of periodicity two. \item maximal Cohen-Macaulay $R$-modules without free summands. \end{enumerate} \end{theorem} We sketch the part of the proof of Theorem~\ref{Th:EisTh} that will be used later in this section. Let $M$ be a maximal Cohen-Macaulay $R$-module without free summands. By the Auslander-Buchsbaum formula we have \begin{equation*} \text{proj dim}_S M = \dim S - \text{depth } M = 1.
\end{equation*} Thus, there is a free resolution of $M$ as an $S$-module of length $1$: \begin{equation*} 0 \to S^n \stackrel{\Phi}{\longrightarrow} S^n \to M \to 0. \end{equation*} Since $f$ annihilates the module $M$, we get $f \cdot S^n \subset \im \Phi$. Hence, there exists a matrix $\Psi$ with entries in $S$ such that \begin{equation*} \Phi \cdot \Psi = f \cdot \mathrm{Id}_{S^n}, \end{equation*} and the pair $(\Phi, \Psi)$ is an $n\times n$ matrix factorization of $f$ with $\coker{\Phi} =M$. Conversely, let $(\Phi, \Psi)$ be a matrix factorization of $f$. Denoting by $\bar{\Phi}$ and $\bar{\Psi}$ the matrices $\Phi$ and $\Psi$ modulo $(f)$, respectively, we have the following complex of $R$-modules: \begin{equation*} \dots \to R^n \stackrel{\bar{\Psi}}{\longrightarrow} R^n \stackrel{\bar{\Phi}}{\longrightarrow} R^n \stackrel{\bar{\Psi}}{\longrightarrow} \dots \end{equation*} If $f$ is not a zero-divisor in $S$, then the complex \begin{equation*} \dots \to R^n \stackrel{\bar{\Psi}}{\longrightarrow} R^n \stackrel{\bar{\Phi}}{\longrightarrow} R^n \to \coker{\bar{\Phi}} \to 0, \end{equation*} is exact. Hence it is a periodic free resolution of $\coker{\bar{\Phi}}$ with periodicity two. Furthermore, the module $\coker{\bar{\Phi}}$ is a maximal Cohen-Macaulay $R$-module. \subsection{The blow-up at a matrix factorization.} In this subsection we use the blow-up given by Villamayor~\cite{Villa} and the matrix factorizations in the case of $S=\mathbb{C}\{x,y,z\}$ and $R$ the coordinate ring of a normal hypersurface, i.e., $R=S/(f)$ with $f \in S$ and $R$ a normal ring. Recall that any hypersurface singularity is a Gorenstein singularity. Let $M$ be a reflexive $R$-module of rank $r$. By~\cite[Proposition~2.5]{Villa} we can use the matrix factorization associated to $M$ to obtain a representative of $\|M\|_R$.
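As a quick illustration of the defining identity of a matrix factorization (the choice of example is ours, for illustration only), take the $A_1$-singularity $f = xy + z^2$ in $S = \mathbb{C}\{x,y,z\}$; a direct multiplication gives

```latex
\begin{equation*}
\begin{bmatrix} y & -z \\ z & x \end{bmatrix}
\begin{bmatrix} x & z \\ -z & y \end{bmatrix}
=
\begin{bmatrix} xy + z^{2} & yz - zy \\ zx - xz & z^{2} + xy \end{bmatrix}
= (xy + z^{2}) \cdot \mathrm{Id}_{S^2},
\end{equation*}
```

and symmetrically $\Psi \cdot \Phi = f \cdot \mathrm{Id}_{S^2}$. Since all entries lie in $\mathfrak{m} = (x,y,z)$, this matrix factorization is reduced, and hence corresponds under Theorem~\ref{Th:EisTh} to a maximal Cohen-Macaulay $R$-module without free summands.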
Some computations can be done by hand, but in some cases we use the software {\sc Singular}~{4-1-2}~\cite{SING} and the libraries resolve.lib~\cite{RES} and sing.lib~\cite{SingLib}. \begin{example} \label{ex:Ex1} Let $f=xy+z^{n+1}$, i.e., $R$ is the $A_n$-singularity. The matrix factorizations of the rational double points are well known, see for example \cite{Kaj}. In the case of the $A_n$-singularity the matrix factorizations are \begin{equation} \Phi_k = \begin{bmatrix} y & -z^{n+1-k} \\ z^k & x \end{bmatrix} , \quad \Psi_k = \begin{bmatrix} x & z^{n+1-k}\\ -z^k & y \\ \end{bmatrix}, \end{equation} where $k$ is an integer such that $0\leq k \leq n$. Using the matrix $\Phi_k$ we get the following morphism: \begin{equation*} R^2 \stackrel{\Phi_k}{\longrightarrow} R^2 \to M(\Phi_k) \to 0, \end{equation*} where $M(\Phi_k):=\coker{\Phi_k}$. Let $K$ be the kernel of the morphism $\Phi_k$ and set \begin{equation*} K_1 := \set{\begin{bmatrix} y\\ z^k \end{bmatrix} \cdot g \in R^2}{g \in R}. \end{equation*} By~\cite[Proposition~2.5]{Villa}, the ideal \begin{equation*} I_k=(y,z^k), \end{equation*} is a representative of $\|M(\Phi_k)\|_R$. Therefore, the blow-up at $M(\Phi_k)$ of $R$ is the blow-up at the ideal $I_k$. In this case $\mathrm{Bl}_{M(\Phi_k)}(R)$ has at most two singular points: \begin{enumerate} \item If $\mathrm{Bl}_{M(\Phi_k)}(R)$ has one singular point, then the singularity is $xy+z^n$. \item If $\mathrm{Bl}_{M(\Phi_k)}(R)$ has two singular points, then one singularity is $xy+z^{n-l}$ and the other singularity is $xy+z^{l+1}$ with $1 \leq l \leq n-2$. \end{enumerate} Both cases are exactly as predicted by Corollary~\ref{Cor:PrincipalNuevo} or \cite{Gus}. \end{example} We now consider a different singularity. From now on, let $f=x^3 + y^3 + z^3$ and $R=\mathbb{C}\{x,y,z\}/(f)$. Hence $R$ is a normal Gorenstein surface singularity.
In this case all the reflexive modules were classified by Kahn~\cite{Ka} and all the special modules were classified in \cite{BoRo}. Several people have studied this singularity and its category of reflexive modules, see for example \cite{Ka,Laza1,Laza2}. First, we study the blow-up at the fundamental module of $R$. \begin{definition}[{\cite[Definition~11.5]{Yos}}] The \emph{fundamental exact sequence} of $R$ is the following exact sequence (unique up to non-canonical isomorphism): \begin{align*} 0 \to R \to A \to R \to \mathbb{C} \to 0, \end{align*} corresponding to a non-zero element of $\Ext_R^2\left(R/\mathfrak{m},R\right) \cong \mathbb{C}$. The module $A$ is called the \emph{fundamental module} of $R$. \end{definition} The fundamental module of $R$ is an indecomposable reflexive module of rank $2$ (see \cite[Chapter~11]{Yos} for more properties of the fundamental module). A natural question is the following: Is the fundamental module a special module? We show: \begin{proposition} Let $f=x^3 + y^3 + z^3$ and $R=\mathbb{C}\{x,y,z\}/(f)$. Then, the fundamental module of $R$ is not special. \end{proposition} \removelastskip\par \noindent {\sc Proof.}\enspace The idea is the same as in Example~\ref{ex:Ex1}: we use the matrix factorization of $A$ to compute the ideal that we need to blow up. Then, we use Corollary~\ref{cor:BlowUpMMinAdapt} in order to check whether the module is special. The matrix factorization associated to $A$ was computed by Yoshino and Kawamoto in~\cite{Yos1} and by Laza, Pfister and Popescu in~\cite{Laza1}.
The periodic free resolution of $A$ is the following: \begin{equation} \dots \to R^4 \stackrel{\Psi_A}{\longrightarrow} R^4 \stackrel{\Phi_A}{\longrightarrow} R^4 \to A \to 0, \end{equation} where \begin{equation} \Phi_A = \begin{bmatrix} x^2 & -y & -z & 0 \\ y^2 & x & 0 & -z \\ z^2 & 0 & x & y \\ 0 & z^2 & -y^2 & x^2 \end{bmatrix} , \quad \Psi_A = \begin{bmatrix} x & y & z & 0 \\ -y^2 & x^2 & 0 & z \\ -z^2 & 0 & x^2 & -y \\ 0 & -z^2 & y^2 & x \end{bmatrix}. \end{equation} By~\cite[Proposition~2.5]{Villa}, we can choose as a representative of $\|A\|_R$ the ideal generated by all the $2\times2$-minors of the matrix \begin{equation} D = \begin{bmatrix} -z & 0 \\ 0 & -z \\ x & y \\ -y^2 & x^2 \end{bmatrix}. \end{equation} By Theorem~\ref{Th:BlowUpM}, the blow-up of $R$ at $A$ is the blow-up at the ideal \begin{equation} \label{eq:IdealAuslander} I= (z^2,yz,xz,x^3+y^3). \end{equation} Using {\sc Singular}~{4-1-2}~\cite{SING} and the libraries resolve.lib~\cite{RES} and sing.lib~\cite{SingLib} one can check that the blow-up at the ideal $I$ has $4$ smooth charts; therefore the blow-up of $R$ at $A$ is a resolution. Then, by Corollary~\ref{cor:BlowUpMMinAdapt} and Remark~\ref{Remark:Unico} the module $A$ is not special. \hbox{ }{\qed} \par We now give an example where the blow-up is not normal. \begin{example} \label{ex:Ex2} Consider the following matrix factorization of $f$: \begin{equation} \Phi = \begin{bmatrix} x+ y & -z^2 \\ z & x^2-xy+y^2 \end{bmatrix} , \quad \Psi = \begin{bmatrix} x^2-xy+y^2 & z^2\\ -z& x+y \\ \end{bmatrix}. \end{equation} Let $M=\coker{\Phi}$. Then, using {\sc Singular}~{4-1-2}~\cite{SING} and the libraries resolve.lib~\cite{RES} and sing.lib~\cite{SingLib} one can check that the blow-up of $R$ at $M$ does not have an isolated singularity, hence it is not normal. \end{example} \subsection*{Acknowledgments} We would like to thank Javier Fern\'andez de Bobadilla and Ananyo Dan for helpful discussions during the course of this work.
\end{document}
\begin{document} \date{} \title{The Bourguignon Laplacian and harmonic symmetric bilinear forms} \begin{abstract} The theory of harmonic symmetric bilinear forms on a Riemannian manifold is an analogue of the theory of harmonic exterior differential forms on this manifold. To show this, we must consider every symmetric bilinear form on a Riemannian manifold as a one-form with values in the cotangent bundle of this manifold. In this case, there are the exterior differential and codifferential defined on the vector space of these differential one-forms. Then a symmetric bilinear form is said to be harmonic if it is closed and coclosed as a one-form with values in the cotangent bundle of a Riemannian manifold. In the present paper we prove that the kernel of the little-known Bourguignon Laplacian is a finite-dimensional vector space of harmonic symmetric bilinear forms on a compact Riemannian manifold. We also prove that every harmonic symmetric bilinear form on a compact Riemannian manifold with non-negative sectional curvature is invariant under parallel translations. In addition, we investigate the spectral properties of the little-studied Bourguignon Laplacian. \end{abstract} \noindent \textbf{Keywords}: Riemannian manifold, Bourguignon Laplacian, harmonic symmetric bilinear form, spectral theory, vanishing theorem. \noindent \textbf{MSC2010:} 53C20; 53C25; 53C40 \section{\large Introduction} First, we recall some well-known facts of the theory of harmonic exterior differential forms on an $n$-dimensional Riemannian manifold $(M,g)$ (see, for example, \cite{11}). We write $d : C^{\infty}(\Lambda^{p} M)\to C^{\infty}(\Lambda^{p+1} M)$ for the familiar \textit{exterior differentiation operator}, where $\Lambda^{p} M$ denotes the vector bundle of exterior differential $p$-forms ($p=1,\, \ldots ,\, n-1$). If $d\, \omega =0$, then the $p$-form $\omega \in C^{\infty}(\Lambda^{p} M)$ is said to be \textit{closed}.
The \textit{codifferentiation operator} $\delta : C^{\infty}(\Lambda^{p+1} M)\to C^{\infty}(\Lambda^{p} M)$ is defined as the formal adjoint of $d$. If $\delta \, \omega =0$, then the $\left(p+1\right)$-form $\omega \in C^{\infty}(\Lambda^{p+1} M)$ is said to be \textit{coclosed}. Moreover, if $\omega \in {\rm Ker}\,d\bigcap {\rm Ker}\, \delta $, then the $p$-form $\omega $ is said to be \textit{harmonic}. Using the operators $d$ and $\delta $, one constructs the well-known Hodge-de Rham Laplacian $\Delta_{H} :=\delta \, d+d\, \delta $. Its kernel ${\rm Ker}\, \Delta_{H} $ is a finite-dimensional real vector space consisting of the harmonic $p$-forms on a compact Riemannian manifold $(M,g)$. Moreover, every harmonic $p$-form on a compact Riemannian manifold $(M,g)$ with non-negative curvature operator $\bar{R} : \Lambda^{2} M\to {\Lambda }^{2} M$ is invariant under parallel translations. If the curvature operator $\bar{R}$ is non-negative everywhere and positive at some point of $(M,g)$, then every harmonic $p$-form is identically zero. In conclusion, we recall also that the spectral theory of the Hodge-de Rham Laplacian is well known (see, for example, \cite{SPR}). Second, we will consider the theory of harmonic symmetric bilinear forms as an analogue of the theory of harmonic exterior differential forms (see, for example, \cite{9}). To show this, we must consider a symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ as a one-form with values in the cotangent bundle $T^{*} M$ of $M$. In particular, in accordance with the general theory, it is possible to define an induced exterior differential $d^{\nabla} : C^{\infty}(S^{2} M)\to C^{\infty}(\Lambda^{2} M\otimes T^{*} M)$ on the vector space of $T^{*} M$-valued differential one-forms. In particular, if $d^{\nabla } \, \varphi =0$ then the form $\varphi \in C^{\infty}(S^{2} M)$ is said to be a \textit{closed bilinear form}.
In this case, $\varphi \in C^{\infty}(S^{2} M)$ is a \textit{Codazzi tensor}. We recall here that a symmetric bilinear form is called a Codazzi tensor (named after D.~Codazzi) if its covariant derivative is a symmetric tensor (see \cite{1}; \cite[p.~435]{2}). In addition, we will call a Codazzi tensor \textit{trivial} if it is a constant multiple of the metric (see also \cite{1}). Next, let $\delta^{\nabla} : C^{\infty}(\Lambda^{2} M\otimes T^{*} M)\to C^{\infty} (S^{2} M)$ be the formal adjoint operator of the exterior differential $d^{\nabla } $ (see \cite[p.~355]{2} and \cite{9}); then the form $\varphi \in C^{\infty}(S^{2} M)$ is said to be \textit{harmonic} if $\varphi \in {\rm Ker}\, d^{\nabla} \bigcap {\rm Ker}\, \delta^{\nabla}$ (see \cite[p.~270]{9} and \cite[p.~350]{14}). Using the operators $d^{\nabla } $ and $\delta^{\nabla } $, J.-P. Bourguignon constructed the Laplacian $\Delta_{B} : =d^{\nabla} \delta^{\nabla } +\delta^{\nabla } \, d^{\nabla }$ (see \cite[p.~273]{9}). One can prove that its kernel ${\rm Ker}\, \Delta_{B} $ is a finite-dimensional real vector space consisting of the harmonic symmetric bilinear forms on a compact Riemannian manifold $(M,g)$. In turn, we will prove that every harmonic symmetric bilinear form on a compact Riemannian manifold $(M,g)$ with non-negative sectional curvature is invariant under parallel translations. In addition, if the sectional curvature of $(M,g)$ is positive at some point of $(M,g)$, then every harmonic symmetric bilinear form is trivial. Moreover, in our paper we will investigate the spectral properties of the Bourguignon Laplacian $\Delta_{B}$. \textbf{Acknowledgments}. Our work was supported by the Foundation for Basic Research of the Russian Academy of Science, project 16-01-00756.
\section{The Bourguignon Laplacian and its spectral properties} Let $(M,g)$ be a compact Riemannian manifold (without boundary); then $L^{2}(M, g)$ denotes the usual Hilbert space of functions or tensors with the global product (resp. global norm) \[ \left\langle \, u,\, w\, \right\rangle =\int_{M}\, g\, \left(\, u,\, w\right) \, dv_{g} \quad({\rm resp}.\ \left\| \, u\, \right\|^{2} =\int_{M}\, g\, \left(\, u,\, u\right)\, dv_{g} ), \] where $dv_{g}$ is the usual measure relative to $g$ (we will often omit the term $dv_{g} $). In this case, $H^{2}(M, g)$ denotes the usual Hilbert space of functions or tensors on $(M, g)$ with two covariant derivatives in $L^{2}(M,g)$, equipped with the usual product and norm. We will consider a symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ as a one-form with values in the cotangent bundle $T^{*} M$ of $M$. This bundle comes equipped with the Levi-Civita covariant derivative $\nabla $; thus there is an induced exterior differential $d^{\nabla} : C^{\infty}(S^{2} M)\to C^{\infty} (\Lambda^{2} M\otimes T^{*} M)$ on $T^{*} M$-valued differential one-forms given by \begin{equation} \label{GrindEQ__2_1_} d^{\nabla } \varphi \, \left(X,\, Y,\, Z\right) :=\left(\nabla_{X} \varphi \right)\, \left(Y,\, Z\right)-\left(\nabla_{Y} \varphi \right)\, \left(X,\, Z\right) \end{equation} for any tangent vector fields $X,\, Y,\, Z$ on $M$ and an arbitrary $\varphi \in C^{\infty}(S^{2} M)$. \begin{remark}\rm The theory of $T^{*} M$-valued differential one-forms can be found in the papers and monographs \cite{FR}, \cite[p.~133--134; 355]{2}, \cite{9}, \cite[p.~338]{10}, \cite[p.~349--350]{14} and \cite{20}. \end{remark} J.-P.
Bourguignon defined in \cite[p.~273]{9} the Laplacian $\Delta_{B} : C^{\infty}(S^{2} M)\to C^{\infty}(S^{2} M)$ by the formula $\Delta_{B} :=\delta^{\nabla } d^{\nabla } +d^{\nabla } \delta^{\nabla } $, where $\delta^{\nabla} : C^{\infty}(\Lambda^{2} M\otimes T^{*} M)\to C^{\infty}(S^{2} M)$ is the formal adjoint operator of the exterior differential $d^{\nabla } $. If $(M, g)$ is a compact Riemannian manifold, then direct computations yield the following integral formula: \begin{equation} \label{GrindEQ__2_2_} \left\langle \Delta_{B} \, \varphi ,\, \varphi \right\rangle =\left\langle d^{\nabla } \varphi ,\, d^{\nabla } \varphi \right\rangle +\left\langle \, \delta^{\nabla } \, \varphi ,\, \, \delta^{\nabla } \, \varphi \right\rangle . \end{equation} Based on this formula, we conclude that the \textit{Bourguignon Laplacian} $\Delta_{B} $ is a non-negative operator. On the other hand, by the general theorem on elliptic operators (see \cite[p.~464]{2}; \cite[p.~383]{10}) we have the orthogonal decomposition \begin{equation} \label{GrindEQ__2_3_} C^{\infty}(S^{2} M) ={\rm Ker}\, \Delta_{B} \oplus {\rm Im}\, \Delta_{B} \end{equation} with respect to the global scalar product $\langle \, \cdot \; ,\; \cdot \, \rangle $. The first component of the right-hand side of \eqref{GrindEQ__2_3_} is the kernel ${\rm Ker}\, \Delta_{B} $ of the Bourguignon Laplacian $\Delta_{B} $. It is well known from \cite[p.~464]{2} that ${\rm Ker}\, \Delta_{B} $ is a finite-dimensional vector space over~the field of~real numbers. Next, an easy computation yields the \textit{Weitzenb\"{o}ck decomposition formula} (see also \cite{FR}; \cite[p.~355]{2}; \cite[p.~273]{9}) \begin{equation} \label{GrindEQ__2_4_} \Delta_{B} \, \varphi =\bar{\Delta }\, \varphi +B\,\varphi \end{equation} where $\bar{\Delta }=\nabla^{*} \nabla $ is the \textit{rough Laplacian} (see \cite[p.~52]{2}).
The second component of the right-hand side of \eqref{GrindEQ__2_4_} is called the \textit{Weitzenb\"{o}ck curvature operator} for the Bourguignon Laplacian $\Delta_{B} $. Moreover, we know that it has the form $B\, \varphi : =\varphi \circ {\rm Ric}-{\mathop{R}\limits^{\circ }} \, \varphi $, where $\circ $ is a composition of endomorphisms and ${\mathop{R}\limits^{\circ }} $ is the linear map of $S^{2} M$ into itself such that (see \cite[p.~52]{2}) \begin{equation} \label{GrindEQ__2_5_} \big({\mathop{R}\limits^{\circ }} \, \varphi\big)(X,\, Y) =\sum\nolimits_{i=1,\ldots ,n}\; \varphi \, \left(R(X,\, E_{i} )\, Y,\, E_{i} \right) \end{equation} for the curvature tensor $R$ of $(M, g)$, for any $\varphi \in C^{\infty}(S^{2} M)$ and an arbitrary local orthonormal basis $E_1,\ldots, E_n$ of vector fields on $(M, g)$. In addition, direct verification yields that $B\, g=0$ and \begin{equation}\label{GrindEQ__2_6_} {\rm trace}_{g} \, (B\, \varphi )=0. \end{equation} Then from \eqref{GrindEQ__2_4_} and \eqref{GrindEQ__2_6_} we obtain the identity \begin{equation} \label{GrindEQ__2_7_} {\rm trace}_{g} \, \left(\, \Delta_{B} \, \varphi \right) =\bar{\Delta }\, \left({\rm trace}_{g} \varphi \right). \end{equation} Next, we will consider the spectral theory of the Bourguignon Laplacian $\Delta_{B} : C^{\infty}(S^{2} M)\to C^{\infty}(S^{2} M)$. Let $(M, g)$ be a compact Riemannian manifold and $\varphi$ be a non-zero eigentensor corresponding to the eigenvalue $\lambda $, that is, $\Delta_{B}\, \varphi =\lambda \, \varphi $ for a real non-negative number $\lambda$. Then we can rewrite the formula $\Delta_{B} \, \varphi =\bar{\Delta}\,\varphi + B\,\varphi $ in the following form: $\lambda \, \varphi =\bar{\Delta }\, \varphi +B\, \varphi $.
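Explicitly, taking the $g$-trace of the last equality and using \eqref{GrindEQ__2_6_} together with the fact that ${\rm trace}_{g}$ commutes with $\bar{\Delta }$ (the metric $g$ is parallel), we get
\[
\lambda \, {\rm trace}_{g} \varphi ={\rm trace}_{g} \, \big(\bar{\Delta }\, \varphi \big)+{\rm trace}_{g} \, (B\, \varphi )=\bar{\Delta }\, \left({\rm trace}_{g} \varphi \right),
\]
which is \eqref{GrindEQ__2_7_} applied to the eigentensor $\varphi $.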
In this case, from \eqref{GrindEQ__2_7_} we obtain \begin{equation} \label{GrindEQ__2_8_} \bar{\Delta }\, \left(\, {\rm trace}_{g} \varphi \right)=\lambda \, \left(\, {\rm trace}_{g} \varphi \right), \end{equation} where $\bar{\Delta} : C^{\infty}(M)\to C^{\infty}(M)$ is the ordinary \textit{Laplacian} defined by the formula $\bar{\Delta }\, f=-\; {\rm div}\, \left(\, {\rm grad}\, f\right)$ for any $f\in C^{\infty}(M)$. In this case, the following equation holds: \[ \left\langle \, \bar{\Delta }\, \left(\, {\rm trace}_{g} \varphi \right), {\rm trace}_{g} \varphi \right\rangle =\left\langle \nabla \, {\rm trace}_{g} \varphi ,\; \nabla \, {\rm trace}_{g} \varphi \right\rangle . \] Therefore, $\bar{\Delta }\, \left(\, {\rm trace}_{g} \varphi \right)=0$ if and only if ${\rm trace}_{g} \varphi ={\rm const}$. Consequently, if \eqref{GrindEQ__2_8_} holds with ${\rm trace}_{g} \varphi ={\rm const}$ and $\lambda \ne 0$, then ${\rm trace}_{g} \varphi $ must be zero. This proves the following lemma. \begin{lemma} Let $(M, g)$ be an $n$-dimensional $\left(\, n\ge 2\right)$ compact Riemannian manifold and $\Delta_{B} \, \varphi =\lambda \, \varphi $ for the Bourguignon Laplacian $\Delta_{B} : C^{\infty}(S^{2} M)\to C^{\infty}(S^{2} M)$ and for a non-zero eigenvalue $\lambda $. If ${\rm trace}_{g} \varphi ={\rm const}$, then ${\rm trace}_{g} \varphi =0$. On the other hand, if ${\rm trace}_{g} \varphi $ is not constant, then ${\rm trace}_{g} \varphi $ is an eigenfunction of the Laplacian $\bar{\Delta} : C^{\infty}(M)\to C^{\infty}(M)$ such that $\bar{\Delta}\, \left(\, {\rm trace}_{g} \varphi \right)=\lambda \, \left(\, {\rm trace}_{g} \varphi \right)$.
\end{lemma} Standard elliptic theory and the fact that the Laplacian $\bar{\Delta}: C^{\infty}(M)\to C^{\infty}(M)$ is a self-adjoint elliptic operator imply that the spectrum of $\bar{\Delta }$ consists of discrete eigenvalues $0=\bar{\lambda }_{0} <\bar{\lambda }_{1} <\bar{\lambda }_{2} <\ldots ,$ which satisfy the equation $\bar{\Delta }\, f_{i} =\bar{\lambda }_{i} \, f_{i} $ for $f_{i} \ne 0$ (see, for example, \cite{33}). Here we will focus on bounds on the first non-zero eigenvalue $\bar{\lambda }_{1} $ imposed by the Riemannian geometry of $(M,g)$. The first lower bound for $\bar{\lambda }_{1} $ was proved by Lichnerowicz \cite{L}. The \textit{Lichnerowicz theorem} is the following: If $(M,g)$ is a compact Riemannian manifold of dimension $n\ge 2$ whose Ricci curvature satisfies the inequality ${\rm Ric}\, \ge (n-1)\, k>0$ for some constant $k>0$, then the first positive eigenvalue $\bar{\lambda}$ of the Laplacian $\bar{\Delta} : C^{\infty}(M)\to C^{\infty}(M)$ has the lower bound $\bar{\lambda }\ge n\, k$. Yang \cite{Y} generalized the previous result in the following form: Let $(M,g)$ be a compact Riemannian manifold of dimension $n\ge 2$ with ${\rm Ric}\ge (n-1)\, k\ge 0$ for some non-negative constant $k$ and diameter ${\rm D}(M)$; then the first positive eigenvalue $\bar{\lambda }$ of the Laplacian $\bar{\Delta} : C^{\infty}(M)\to C^{\infty}(M)$ satisfies the lower bound $\bar{\lambda }\ge \frac{1}{4} \, (n-1)\, k +\pi^{2} /{\rm D}^{2} (M)$. On the other hand, by the spectral theory (see, for example, \cite{33}), the Bourguignon Laplacian ${\rm \Delta}_{B}$ has a discrete set of eigenvalues $\left\{\, \lambda_{a} \right\}$ forming a sequence $0=\lambda_{0} <\lambda_{1} <\lambda_{2} <\ldots$, and $\lambda_{a} \to +\infty $ as $a\to +\infty $. Any eigenvalue of ${\rm \Delta }_{B}$ has finite multiplicity, and an arbitrary $\lambda_{a} $ for $a\ge 1$ is positive because $\Delta_{B} $ is a non-negative elliptic operator.
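In view of the above lemma, any positive eigenvalue $\lambda $ of $\Delta_{B} $ whose eigentensor has non-constant trace is at the same time an eigenvalue of $\bar{\Delta }$; schematically,
\[
\Delta_{B} \, \varphi =\lambda \, \varphi ,\quad {\rm trace}_{g} \varphi \ne {\rm const}\;\; \Longrightarrow \;\; \bar{\Delta }\, \left({\rm trace}_{g} \varphi \right)=\lambda \, \left({\rm trace}_{g} \varphi \right)\;\; \Longrightarrow \;\; \lambda \ge \bar{\lambda }_{1} .
\]
Hence every lower bound for the first positive eigenvalue $\bar{\lambda }_{1} $ of $\bar{\Delta }$ is inherited by such eigenvalues of $\Delta_{B} $.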
Then, as a corollary of the above Lichnerowicz and Yang theorems, we can formulate the following proposition. \begin{proposition} Let $(M, g)$ be a compact Riemannian manifold of dimension $n\ge 2$ and $\lambda $ be a positive eigenvalue of the Bourguignon Laplacian $\Delta_{B} : C^{\infty}(S^{2} M)\to C^{\infty}(S^{2} M)$ such that its corresponding eigentensor $\varphi \in C^{\infty}(S^{2} M)$ has a non-zero trace. If the Ricci curvature of $(M, g)$ satisfies the inequality ${\rm Ric}\, \ge (n-1)\, k>0$ for some positive constant $k$, then $\lambda $ has the lower bound $\lambda \ge n\, k$. On the other hand, if ${\rm Ric}\, \ge (n-1)\, k\ge 0$ for some non-negative constant $k$, then $\lambda $ satisfies the lower bound $\lambda \ge \frac{1}{4} \, (n-1)\, k+\frac{\pi^{2} }{{\rm D}^{2}(M)}$, where ${\rm D}(M)$ is the diameter of $(M,g)$. \end{proposition} Next, we will consider the case of a positive eigenvalue $\lambda $ of the Bourguignon Laplacian $\Delta_{B} : C^{\infty}(S^{2} M)\to C^{\infty}(S^{2} M)$ such that its eigentensor $\varphi $ is a traceless bilinear form. In other words, $\varphi \in C^{\infty}(S_0^2 M)$ for the vector bundle of traceless symmetric bilinear forms $S_{0}^{2} M$. Then, using \eqref{GrindEQ__2_4_}, we obtain the integral equality \begin{equation}\label{GrindEQ__2_9_} \lambda \, \left\langle \varphi \, ,\, \; \varphi \right\rangle =\left\langle \, B\, \varphi \, ,\, \varphi \right\rangle +\, \left\langle \nabla \, \varphi \, ,\, \nabla \, \varphi \right\rangle .
\end{equation} At the same time, direct computations yield the following identity: \[ g\, \left(\, B\, \varphi ,\; \varphi \right) = (1/2)\,g\left(K\, \varphi ,\; \varphi \right) \] where $K : ={\rm Ric}\circ \varphi +\varphi \circ {\rm Ric}-2{\mathop{\, R}\limits^{\circ }} \varphi $ is the \textit{Weitzenb\"{o}ck curvature operator} of the well-known \textit{Lichnerowicz Laplacian} (see \cite[p.~54]{2}; \cite[p.~388]{10}) \begin{equation} \label{GrindEQ__2_10_} \Delta_{L} \, \varphi =\bar{\Delta }\, \varphi + K\, \varphi . \end{equation} In addition, direct verification yields that $K\, g=0$ and \begin{equation} \label{GrindEQ__2_11_} {\rm trace}_{g} \, \left(\, K\, \varphi \, \right)=0. \end{equation} Let $\left\{\, e_{i} \right\}$ be an orthonormal basis of the tangent space $T_{x} M$ at an arbitrary point $x\in M$ such that $\varphi_{x} \left(\, e_{i} ,\, e_{j} \right)=\lambda_{i} \left(x\right)\, \delta_{ij} $, where $\delta_{ij}$ is the Kronecker symbol, and let ${\rm sec}\, \left(\, e_{i} \wedge \, e_{j} \right)$ be the sectional curvature in the direction of the subspace $\pi \left(x\right)\subset T_{x} M$ for $\pi \left(x\right)={\rm span}\{\, e_{i} ,\, e_{j}\}$; then (see \cite[p.~388]{10}) \begin{equation} \label{GrindEQ__2_12_} g(K\, \varphi, \varphi)=\sum\nolimits_{\,i\ne j}\sec( e_{i} \wedge e_{j})(\varphi_{ii}-\varphi_{jj})^{2}. \end{equation} Let $S_{0}^{2} M$ be the vector bundle of traceless symmetric bilinear forms and $\Delta_{B} : C^{\infty}(S_{0}^{2} M)\to C^{\infty}(S_{0}^{2} M)$ be the Bourguignon Laplacian acting on the vector space of $C^{\infty}$-sections of $S_{0}^{2} M$.
If we denote by $K_{\rm min} $ the minimum of the (positive) sectional curvature of $(M,g)$, i.e., ${\rm sec}\, \left(\sigma_{x} \right)\ge K_{\rm min} >0$ in all directions $\sigma_{x} $ at each point $x\in M$, then from \eqref{GrindEQ__2_9_} we obtain the integral inequality \begin{equation} \label{GrindEQ__2_13_} \lambda \, \left\langle \varphi ,\, \, \varphi \right\rangle \ge \, \frac{1}{2} \, K_{\rm min} \, \int_{M} \sum\nolimits_{i\; \ne j}\left(\, \varphi_{ii} -\varphi_{jj} \right)\,^{2} \, dv_{g} +\, \left\langle \nabla \, \varphi , \nabla \, \varphi \right\rangle \ge 0 \end{equation} for an arbitrary positive eigenvalue $\lambda $ corresponding to a non-zero eigentensor $\varphi \in C^{\infty}(S_{0}^{2} M)$ of $\Delta_{B}$. If the condition ${\rm trace}_{g} \, \varphi =\varphi_{11} +\varphi_{22} +\ldots+\varphi_{n\, n}=0$ holds, then it is not difficult to prove the equality \[ \left\| \, \varphi \, \right\|^{2} =\varphi_{11}^{2} +\varphi_{22}^{2} +\ldots +\varphi_{n\,n}^{2} =\frac{1}{n} \sum\nolimits_{i\,< j}\left(\varphi_{ii} -\varphi_{jj} \right)\,^{2}. \] Indeed, from $\left(\varphi_{11} +\varphi_{22} +\ldots +\varphi_{n\,n} \right)^{2} =0$ we obtain \[ \varphi_{11}^{2} +\varphi_{22}^{2} +\ldots +\varphi_{n\,n}^{2} = -2 \sum\nolimits_{i\,< j} \varphi_{ii}\varphi_{jj}, \] and therefore \[ \sum\nolimits_{i\,< j}\left(\varphi_{ii} -\varphi_{jj} \right)^{2} =(n-1)\sum\nolimits_{i} \varphi_{ii}^{2} -2\sum\nolimits_{i\,< j} \varphi_{ii} \varphi_{jj} =n\sum\nolimits_{i} \varphi_{ii}^{2} =n\, \left\| \, \varphi \, \right\|^{2}. \] In this case, from \eqref{GrindEQ__2_13_} one can obtain the integral inequality \begin{equation} \label{GrindEQ__2_14_} (\lambda - n\, K_{\min })\int_{M}\| \varphi \|^{2} \, dv_{g} \ge 0. \end{equation} Then from \eqref{GrindEQ__2_14_} we conclude that $\lambda \ge n\, K_{\min }$ for an arbitrary positive eigenvalue $\lambda $. In turn, if the first positive eigenvalue $\lambda =n\, K_{\rm min} $, then its corresponding traceless bilinear form $\varphi $ is invariant under parallel translation.
In this case, if the holonomy of $(M, g)$ is irreducible, then the tensor $\varphi $ must have the form $\varphi =\mu \cdot g$ for some constant $\mu $. But in our case the identity ${\rm trace}_{g} \, \varphi =0$ holds and, consequently, we have $\mu =0$. Then the following statement holds. \begin{proposition} Let $(M,g)$ be an $n$-dimensional $(n\ge 2)$ compact Riemannian manifold and $\Delta_{B}: C^{\infty}(S_{0}^{2} M)\to C^{\infty}(S_{0}^{2} M)$ be the Bourguignon Laplacian acting on traceless symmetric bilinear forms. Then the first positive eigenvalue of $\Delta_{B} $ satisfies the lower bound $\lambda \ge n\, K_{\rm min} $ for the minimum $K_{\rm min} $ of the strictly positive sectional curvature of $(M, g)$. If the first positive eigenvalue $\lambda =n\, K_{\rm min}$, then the trace-free symmetric bilinear form $\varphi $ corresponding to $\lambda $ is invariant under parallel translation. In particular, if the holonomy of $(M,g)$ is irreducible, then this relation means that $\varphi \equiv 0$. \end{proposition} In particular, if $(M,g)$ is the standard sphere $\left(\, S^{n} ,\, g_{0} \right)$, then ${\rm sec}\,(X\wedge Y)=+1$ for orthonormal vector fields $X$ and $Y$. In this case, the first positive eigenvalue satisfies $\lambda \ge n$. We can formulate the following corollary. \begin{corollary} Let $\left(\, S^{n} ,\, g_{0} \right)$ be the $n$-dimensional $(n\ge 2)$ standard sphere and $\Delta_{B} : C^{\infty}(S_{0}^{2} M)\to C^{\infty}(S_{0}^{2} M)$ be the Bourguignon Laplacian acting on traceless symmetric bilinear forms defined on $(S^{n}, g_{0})$. Then the first positive eigenvalue of $\Delta_{B} $ satisfies the lower bound $\lambda \ge n$. \end{corollary} In the case of the standard sphere $\left(\, S^{n} ,\, g_{0} \right)$ we have $B\, \varphi =\varphi \circ {\rm Ric}-{\mathop{R}\limits^{\circ }} \, \varphi = n\, \varphi $ and $K\varphi =2n\, \varphi $ for an arbitrary symmetric bilinear form $\varphi \in C^{\infty}(S_{0}^{2} M)$.
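Indeed, comparing the Weitzenb\"{o}ck formulas \eqref{GrindEQ__2_4_} and \eqref{GrindEQ__2_10_} on the standard sphere, where $B\, \varphi =n\, \varphi $ and $K\, \varphi =2n\, \varphi $, we obtain
\[
\Delta_{B} \, \varphi =\bar{\Delta }\, \varphi +B\, \varphi =\left(\Delta_{L} \, \varphi -K\, \varphi \right)+B\, \varphi =\Delta_{L} \, \varphi -n\, \varphi .
\]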
Then we can write the equality $\Delta_{B} \, \varphi =\, \left(\, \mu - n\, \right)\, \varphi $ for an arbitrary positive eigenvalue $\mu $ of the Lichnerowicz Laplacian $\Delta_{L} $ and for some $\varphi \in C^{\infty}(S_{0}^{2} M)$ corresponding to $\mu $. This means that the eigenvalue $\lambda $ of $\Delta_{B} $ which corresponds to the same bilinear form $\varphi \in C^{\infty}(S_{0}^{2} M)$ is equal to $\lambda =\, \left(\, \mu - n\, \right)$. The converse is also true. Consider the Lichnerowicz Laplacian $\Delta_{L} $ acting on traceless and divergence-free symmetric bilinear forms or, in other words, $TT$-\textit{tensors} defined on the standard sphere $\left(\, S^{n} ,\, g_{0} \right)$. In this case, we know from \cite{34} that the eigenvalues of $\Delta_{L} $ are given by the formula $\mu_{a} =a(n-1+a)+2\,(n-1)$ for all $a\ge 2$, i.e., \[ {\rm spec}\, \left(\, \Delta_{L} \left|{}_{TT} \right. \right)=\left\{\, a\, \left(\, n-1+a\, \right)+2\, \left(\, n-1\right)\, \, \left|\, a\ge 2\, \right. \right\}. \] Then we immediately obtain the spectrum of $\Delta_{B} $ acting on the $TT$-tensors defined on the standard sphere $\left(\, S^{n} ,\, g_{0} \right)$: \[{\rm spec}\, \left(\, \Delta_{B} \left|{}_{TT} \right. \right)=\left\{\, a\, \left(\, n-1+a\, \right)+\left(\, n-2\right)\, \, \left|\, a\ge 2\, \right. \right\}. \] Based on this result, we can formulate the following statement. \begin{proposition} The eigenvalues of the Bourguignon Laplacian $\Delta_{B} $ acting on the TT-tensors defined on the standard sphere $\left(\, S^{n} ,\, g_{0} \right)$ are given by the formula $\lambda_{a} =a\, \left(\, n-1+a\, \right)+\left(\, n-2\right)$ for $a\ge 2$. \end{proposition} \section{Harmonic symmetric bilinear forms and their vanishing theorems} The formula \eqref{GrindEQ__2_1_} means that we regard a symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ as a one-form with values in the cotangent bundle.
In this case, $\varphi\in C^{\infty}(S^{2} M)$ is a Codazzi tensor if and only if $d^{\nabla } \varphi =0$. Therefore, we can formulate the following obvious statement. \begin{lemma}\label{L-2} A symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ on a Riemannian manifold $(M, g)$ is a Codazzi tensor if and only if it is closed when viewed as a one-form with values in the cotangent bundle $T^{*}M$ of $M$. \end{lemma} J.-P. Bourguignon proved in \cite[p.~271]{9} that \begin{equation}\label{GrindEQ__3_1_} \delta^{\nabla } \varphi =-\, d\, \left(\, {\rm trace}_{g} \varphi \, \right) \end{equation} for an arbitrary Codazzi tensor $\varphi \in C^{\infty}(S^{2} M)$. At the same time, he defined a \textit{harmonic symmetric bilinear form} in \cite[p.~270]{9}. \begin{definition}\rm A symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ on a Riemannian manifold $(M,g)$ is \textit{harmonic} if $\varphi \in {\rm Ker}\, d^{\nabla } \bigcap {\rm Ker}\, \delta^{\nabla } $. \end{definition} Using Lemma~\ref{L-2} and equation \eqref{GrindEQ__3_1_}, this definition can be simplified slightly. \begin{proposition} A symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ on a Riemannian manifold $(M, g)$ is harmonic if and only if it is a Codazzi tensor with constant trace. \end{proposition} Based on the formulas \eqref{GrindEQ__2_2_} and \eqref{GrindEQ__2_3_}, we conclude that the kernel of the \textit{Bourguignon Laplacian} $\Delta_{B} :=\delta^{\nabla } d^{\nabla } +d^{\nabla } \delta^{\nabla } $ has finite dimension and satisfies the condition ${\rm Ker}\, \Delta_{B} ={\rm Ker}\, d^\nabla \bigcap {\rm Ker}\, \delta^{\nabla }$ on a compact Riemannian manifold $(M,g)$. In other words, $\Delta_{B} $-harmonic bilinear forms are precisely the harmonic symmetric bilinear forms on a compact Riemannian manifold $(M,g)$. Therefore, we have the following.
\begin{proposition} Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold and $\Delta_{B} : C^{\infty}(S^{2} M)\to C^{\infty}(S^{2} M)$ be the Bourguignon Laplacian. Then the kernel of ${\rm \Delta}_{B} $ is the finite-dimensional vector space of harmonic symmetric bilinear forms (or, in other words, Codazzi tensors with constant trace). \end{proposition} J.-P. Bourguignon also proved in \cite[p.~281]{9} that a compact orientable Riemannian four-manifold admitting a non-trivial Codazzi tensor with constant trace must have \textit{signature} zero (see, for the definition, \cite[p.~161]{2}). Then the following proposition holds. \begin{proposition} Let $(M,g)$ be a compact orientable Riemannian four-dimensional mani\-fold. If the kernel of the Bourguignon Laplacian ${\rm \Delta}_{B} $ is non-trivial, then $(M,g)$ must have signature zero. \end{proposition} Using the formula \eqref{GrindEQ__2_4_}, one can obtain the \textit{Bochner-Weitzenb\"{o}ck formula} \begin{eqnarray} \label{GrindEQ__3_2_} \nonumber (1/2)\Delta \| \varphi \|^{2} \hspace*{-2.0mm}&=&\hspace*{-2.0mm} -g(\bar{\Delta }\, \varphi, \varphi ) +\|\nabla \, \varphi\|^{2} \\ \hspace*{-2.0mm}&=&\hspace*{-2.0mm} -g(\Delta_{B} \varphi,\, \varphi) +(1/2)\, g(K\, \varphi , \varphi) + \| \nabla \, \varphi\|^{2} \end{eqnarray} for an arbitrary $\varphi \in C^{\infty}(S^{2} M)$. Let $\varphi \in C^{\infty}(S^{2} M)$ be a harmonic form; then \eqref{GrindEQ__3_2_} can be rewritten in the form (see also the formula \eqref{GrindEQ__2_12_}) \begin{equation}\label{GrindEQ__3_3_} \Delta\left\| \varphi \right\|^{2} =\sum\nolimits_{i\, \ne j}\sec\left(\, e_{i} \wedge \, e_{j} \right)\, \left(\, \varphi_{ii} -\varphi_{jj} \right)^{2} +2\left\| \nabla \, \varphi \right\|^{2} . \end{equation} We remind here that an arbitrary Codazzi tensor $\varphi $ commutes with the Ricci tensor ${\rm Ric}$ of $(M,g)$ at each point $x\in M$ (see \cite[p.~439]{2}).
Therefore, the eigenvectors of an arbitrary Codazzi tensor $\varphi $ determine the principal directions of the Ricci tensor at each point $x\in M$ (see \cite[pp.~113--114]{21}). The converse is also true. Then, taking \eqref{GrindEQ__3_3_} into account and using the ``Hopf maximum principle", we will prove in the next paragraph that the following lemma holds. \begin{lemma}\label{L-01} Let $U$ be a connected open domain of a Riemannian manifold $(M,g)$ and $\varphi $ be a harmonic symmetric bilinear form defined at every point of $U$. If the sectional curvature satisfies ${\rm sec}\, \left(\, e_{i} \wedge \, e_{j} \right)\ge 0$ for all vectors of the orthonormal basis $\left\{\, e_{i} \right\}$ of $T_{x} M$ determined by the principal directions of the Ricci tensor ${\rm Ric}$ at an arbitrary point $x\in U$, and $\left\| \, \varphi \, \right\|^{2} $ has a local maximum in the domain $U$, then $\left\| \, \varphi \, \right\|^{2} $ is a constant function and $\varphi $ is invariant under parallel translations in $U$. If, moreover, ${\rm sec}\, \left(\, e_{i} \wedge \, e_{j} \right)>0$ at some point $x\in U$, then $\varphi $ is trivial. \end{lemma} \proof Suppose that ${\rm sec}\, \left(\, e_{i} \wedge \, e_{j} \right)\ge 0$ in some connected open domain $U\subset M$; then $g\, \left(K\, \varphi ,\, \, \varphi \, \right)\ge 0$. If, moreover, there is a non-zero Codazzi tensor $\varphi$ given in $U\subset M$, then from \eqref{GrindEQ__3_3_} we conclude that $\Delta \, \left\| \, \varphi \, \right\|^{2} \ge 0$, i.e., $\left\| \, \varphi \, \right\|^{2} $ is a nonnegative subharmonic function in $U$. Suppose that $\left\| \, \varphi \, \right\|^{2} $ has a local maximum at some point $x\in U$; then $\left\| \, \varphi \, \right\|^{2} $ is a constant function in $U\subset M$ according to the ``Hopf maximum principle" (see \cite[p.~47]{22}). In this case, $\Delta \, \left\| \, \varphi \, \right\|^{2} =0$ and $\left\| \, \nabla \, \varphi \, \right\|^{2} =0$. 
In particular, the latter equation means that the form $\varphi $ is parallel. Indeed, let $\left\| \, \varphi \, \right\|^{2} =C$ for some constant $C$; then from the equation \eqref{GrindEQ__3_3_} we obtain that $g\, \left(K\varphi ,\, \varphi \right)+2\, \left\| \, \nabla \, \varphi \, \right\|^{2} =0$. Since ${\rm sec}\, \left(\, e_{i} \wedge \, e_{j} \right)\ge 0$, this means that $g\, \left(K\, \varphi ,\, \varphi \right)=0$ and $\nabla \, \varphi =0$. If there is a point $x\in U$ such that ${\rm sec}\, \left(\, e_{i} \wedge \, e_{j} \right)>0$, then from \eqref{GrindEQ__3_3_} we come to the conclusion that $\lambda_{1} =\ldots =\lambda_{n} =\lambda $, which is equivalent to $\varphi =\lambda\, g$ for $\lambda =\frac{1}{n}\,{\rm trace}_{g}\,\varphi $, see \cite[p.~436]{2}. $\square$ If $(M,g)$ is a compact manifold and a harmonic symmetric bilinear form $\varphi $ is given globally on $(M,g)$, then the ``Bochner maximum principle" for compact manifolds yields the classical Berger-Ebin theorem (see \cite[p.~436]{2} and \cite[p.~388]{10}), which is thus a corollary of our Lemma~\ref{L-01}. \begin{corollary} Every harmonic symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ on a compact Riemannian manifold $(M,g)$ with nonnegative sectional curvature is invariant under parallel translations. Moreover, if ${\rm sec}>0$ at some point, then $\varphi \in C^{\infty}(S^{2} M)$ is trivial. \end{corollary} \begin{remark}\rm It is well known that every parallel symmetric tensor field $\varphi \in C^{\infty}(S^{2} M)$ on a connected locally irreducible Riemannian manifold $(M,g)$ is proportional to $g$, i.e., $\varphi =\lambda\,g$ for some constant $\lambda $. Due to this, the second part of Corollary~3 can be reformulated in the following form: if, moreover, $(M,g)$ is a connected locally irreducible Riemannian manifold, then an arbitrary harmonic symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ is trivial. 
\end{remark} For example, let $(M,g)$ be a \textit{Riemannian symmetric space of compact type}, that is, a compact Riemannian manifold with non-negative sectional curvature and positive-definite Ricci tensor (see \cite[p.~256]{23}). Moreover, if a Riemannian symmetric space of compact type is a locally irreducible Riemannian manifold $(M,g)$, then it is a compact Riemannian manifold with positive sectional curvature (see \cite{24}). Therefore, we can formulate the following corollary. \begin{corollary} Every harmonic symmetric bilinear form on a Riemannian symmetric space of compact type is invariant under parallel translations. If, in addition, the manifold is locally irreducible, then every harmonic symmetric bilinear form is trivial. \end{corollary} The following proposition supplements the classical Berger-Ebin theorem (\cite[p.~436]{2} and \cite[p.~388]{10}) in the case of a complete noncompact Riemannian manifold. \begin{proposition} Let $(M,g)$ be a complete simply connected Riemannian manifold with nonnegative sectional curvature. Then there is no non-zero harmonic symmetric bilinear form $\varphi \in C^{\infty}(S^{2} M)$ such that $\int_{M}\left\| \, \varphi \, \right\| \, d{\rm vol}_{g} <+\infty $. \end{proposition} \proof Let $(M,g)$ be a complete simply connected noncompact Riemannian manifold with nonnegative sectional curvature and $\varphi \in C^{\infty}(S^{2} M)$ be a globally defined non-zero harmonic symmetric bilinear form; then $g(\, K\varphi ,\, \varphi)\ge 0$. Therefore, from \eqref{GrindEQ__3_3_} and the Kato inequality $\left\| \nabla \left\| \varphi \right\| \right\| \le \left\| \nabla \varphi \right\|$ we obtain the inequality \[ \left\| \varphi \right\|\Delta \left\| \varphi \right\| =(1/2)\,g\left(K\varphi , \varphi \right) + \left\| \nabla \, \varphi \right\|^{2} -\left\| \nabla \left\| \varphi \right\| \right\|^{2} \ge 0. \] Then we conclude that $\left\| \varphi \right\|$ is a non-negative subharmonic function on a complete simply connected noncompact Riemannian manifold with nonnegative sectional curvature. 
In this case, if $\, \left\| \, \varphi \, \right\| $ is not identically zero, then it satisfies the condition $\int_{M}\left\| \varphi \right\| \, d{\rm vol}_{g} =\infty$ (see \cite{Wu}), which contradicts the assumption $\int_{M}\left\| \varphi \right\| \, d{\rm vol}_{g} <+\infty $. Hence $\varphi \equiv 0$. $\square$ \baselineskip=11.8pt \end{document}
\begin{definition}[Definition:Half-Range Fourier Sine Series/Formulation 1] Let $\map f x$ be a real function defined on the interval $\openint 0 \lambda$. Then the '''half-range Fourier sine series''' of $\map f x$ over $\openint 0 \lambda$ is the series: :$\map f x \sim \ds \sum_{n \mathop = 1}^\infty b_n \sin \frac {n \pi x} \lambda$ where for all $n \in \Z_{> 0}$: :$b_n = \ds \frac 2 \lambda \int_0^\lambda \map f x \sin \frac {n \pi x} \lambda \rd x$ \end{definition}
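As a worked example (ours, not part of the entry above), take $\map f x = 1$ on $\openint 0 \lambda$. Then for all $n \in \Z_{> 0}$:
:$b_n = \ds \frac 2 \lambda \int_0^\lambda \sin \frac {n \pi x} \lambda \rd x = \frac 2 {n \pi} \paren {1 - \cos n \pi} = \begin{cases} \dfrac 4 {n \pi} & : n \text { odd} \\ 0 & : n \text { even} \end{cases}$
so that:
:$1 \sim \ds \frac 4 \pi \sum_{k \mathop = 0}^\infty \frac 1 {2 k + 1} \sin \frac {\paren {2 k + 1} \pi x} \lambda$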
2016, Volume 36, Issue 2: 1061-1084. Doi: 10.3934/dcds.2016.36.1061

Parabolic elliptic type Keller-Segel system on the whole space case

Jinhuan Wang1, Li Chen2 and Liang Hong1
1. School of Mathematics, Liaoning University, Shenyang 110036
2. Universität Mannheim, Lehrstuhl für Mathematik IV, 68131, Mannheim

This note is devoted to the discussion of the existence and blow-up of solutions to the parabolic-elliptic type Patlak-Keller-Segel system in the whole space case. The problem in two dimensions is closely related to the logarithmic Hardy-Littlewood-Sobolev inequality, which directly introduces the critical mass $8\pi$, while in the higher-dimensional case it is related to the Hardy-Littlewood-Sobolev inequality. Therefore, a porous-media-type nonlinear diffusion has been introduced in order to balance the aggregation. We review the critical exponents which were introduced in the literature, namely, the exponent $m=2-2/n$, which comes from the scaling invariance of the mass, and the exponent $m=2n/(n+2)$, which comes from the conformal invariance of the entropy. Finally, a new result on the model with a general potential, inspired by the Hardy-Littlewood-Sobolev inequality, is given.

Keywords: Chemotaxis, critical diffusion exponent, nonlocal aggregation, critical stationary solution, global existence, mass concentration.
Mathematics Subject Classification: Primary: 35K65, 35B45, 35J20.
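A quick check (ours, not from the article) that the two critical exponents reviewed in the abstract coincide exactly in dimension two:
\[
2-\frac{2}{n}=\frac{2n}{n+2}
\;\Longleftrightarrow\; (2n-2)(n+2)=2n^{2}
\;\Longleftrightarrow\; 2n-4=0
\;\Longleftrightarrow\; n=2 ,
\]
and for $n=2$ both exponents equal $1$, i.e., linear diffusion, consistent with the critical mass $8\pi$ of the two-dimensional system.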
Research | Open | Published: 16 October 2018 A performance optimization strategy based on degree of parallelism and allocation fitness Changtian Ying1,2, Changyan Ying2 & Chen Ban2 With the emergence of the big data era, most current performance optimization strategies are designed for distributed computing frameworks with disks as the underlying storage. They may solve the problems of traditional disk-based distribution, but they are hard to transplant and are not well suited to performance optimization for an in-memory computing framework, owing to its different underlying storage and computation architecture. In this paper, we first define the resource allocation model, the parallelism degree model, and the allocation fitness model on the basis of a theoretical analysis of the Spark architecture. Second, based on the models presented, we propose a strategy embedded in the evaluation model that is easy to perform. The optimization strategy assigns subsequent tasks to workers with a lower load that satisfy the requirements, while workers with a higher load may not be assigned tasks. Experiments consisting of four different jobs are conducted to verify the effectiveness of the presented strategy. In recent years, big data processing frameworks [1, 2], especially in-memory computing frameworks, have been enriched and developed constantly [3, 4]. In-memory computing came into view and attracted wide attention in industry after the SAP TechEd global conference in 2010. With the development of in-memory computing frameworks, some research results have been committed to the expansion and improvement of such systems. A simple and efficient parallel pipelined programming model based on BitTorrent was proposed by Napoli et al. [5]. Chowdhury et al. implemented a broadcast communication technique for the in-memory computing framework. Lamari et al. [6] put forward a standard architecture for relational analysis of big data. 
A study by Cho et al. [7] proposed a parallel design scheme. An algorithm that analyzes programs to locate common subexpressions was designed in a study by Kim et al. [8]. A study by Seol et al. [9] proposed fine-granularity retention management for deep submicron DRAMs. Another work designed a unified memory manager separating the memory storage function from the computing framework. In a study by Tang et al. [10], a standard engine for distributed data stream computing was designed. A high-performance SQL query system was implemented in a study by Jo et al. [11]. A parallel computing method for applications with differential data streams and prompt response was proposed in a study by McSherry et al. [12]. Zeng et al. designed a general model for interactive analysis. A study by Corrigan-Gibbs et al. realized a privacy-preserving communication system on an in-memory computing framework. A study by Sengupta et al. [13] used SIMD-based data parallelism to speed up sieving in integer-factoring algorithms. Ifeanyi et al. [14] presented a comprehensive survey of fault-tolerance mechanisms for high-performance frameworks. Some research results focus on performance optimization for distributed computing frameworks, which may not be suitable for the in-memory framework. Ananthanarayanan et al. proposed an algorithm making full use of data access time and data locality. By analyzing the impact of task parallelism on cache effectiveness, Ananthanarayanan et al. designed a coordinated caching algorithm adapted to in-memory computing. By monitoring computation overhead, Babu et al. found that the parallelism of the reduce task has a great influence on the performance of the MapReduce system and designed a task scheduling algorithm that adapts to resource status. In order to predict the response time of worker nodes, Zou et al. divided a task into different blocks, which can improve the efficiency of tightly synchronized applications. 
In a study by Sarma et al., a communication cost frontier model of worker nodes was proposed, and a tradeoff between task parallelism and communication cost was achieved by adjusting the boundary threshold. A study by Pu et al. presented FairRide, a near-optimal fair cache sharing policy, to improve performance. Chowdhury et al. proposed an algorithm to balance multi-resource fairness for correlated and elastic demands. However, most current performance optimization strategies are used in distributed computing frameworks with disks as the underlying storage, where attention is paid mostly to two aspects: task scheduling and resource allocation. Therefore, it is of practical significance to study optimization mechanisms for the in-memory computing (IMC) framework from the perspective of its memory-based underlying storage and computation architecture. We therefore consider the degree of parallelism and the allocation fitness, which differ from existing strategies. First, regarding task scheduling, the rationality of the parallelism degree of the shuffle process in an in-memory framework is easy to overlook, yet it directly affects the efficiency of job execution and the utilization rate of cluster resources. The degree of parallelism is usually determined from user experience, and it hardly adapts to the current state of the in-memory framework. Second, we aim at rational hardware allocation, especially memory allocation, and at accelerating job execution by modifying the fitness of resource allocation. Modeling and analysis Resource allocation model Definition 1 Resource allocation type. 
Denote Worker = {w1, w2,…,wm} as the set of workers, Resource = {r1,r2,…,rn} as the collection of resource types (including CPU, memory, and disk), and rw = (rw1,rw2,…,rwl) as the vector of the l available resources of a worker w, where rwi is the ith available resource of w. The ith resource of every worker is normalized into the interval (0,1): $$ Normalize\left({r}_{w_i}\right)\in \left(0,1\right),\quad rtype\in \left\{ cpu, memory, disk\right\} $$ Denote j = {j1,j2,…,jn} as the set of jobs running at the same time and Vrj = (vrj1,vrj2,…,vrjk) as the resource requirement vector of job j. Since the resource requirements of the jobs differ, the requirements of all jobs are represented as: $$ RV=\left({V}_{r_1},{V}_{r_2},\dots, {V}_{r_j}\right)=\left(\left({v}_{r_{11}},{v}_{r_{12}},\dots, {v}_{r_{1k}}\right),\left({v}_{r_{21}},{v}_{r_{22}},\dots, {v}_{r_{2k}}\right),\dots, \left({v}_{r_{j1}},{v}_{r_{j2}},\dots, {v}_{r_{jk}}\right)\right);\quad {v}_{r_{jk}}\ge 0 $$ Then, the resource requirement types of all jobs are expressed as: $$ TypeRV=\left( typeR{V}_1, typeR{V}_2,\dots, typeR{V}_k\right)=\left( rtype\left(\max \left({v}_{r_{11}},{v}_{r_{12}},\dots, {v}_{r_{1k}}\right)\right), rtype\left(\max \left({v}_{r_{21}},{v}_{r_{22}},\dots, {v}_{r_{2k}}\right)\right),\dots, rtype\left(\max \left({v}_{r_{j1}},{v}_{r_{j2}},\dots, {v}_{r_{jk}}\right)\right)\right) $$ The resource requirements are submitted to the system before the execution of the job, and the jobs are assigned to workers whose idle resources can meet their requirements. Assume workers = {w1,w2,…,wm} deal with job j, and vaj = (vaj1,vaj2,…,vajk) is the resource allocation vector of job j in worker wi. 
In principle, workers should strictly allocate resources in accordance with the resource requirements table, which is represented as: $$ {v}_{a_{jk}}=\frac{v_{r_{jk}}}{workerNum},\quad j\in \mathrm{jobs} $$ Parallelism degree model In Spark, the task parallelism degree measures the number of concurrent tasks; it can be specified by the user and cannot exceed the total instance number, which equals the product of the number of workers and the number of CPU cores per worker. Definition 2 Parallelism degree. Denote the number of workers as workerNum and the number of CPU cores in each worker node as coreNum; therefore, the number of tasks executing concurrently that the hardware environment supports is workerNum × coreNum. If the parallelism parameter specified by the user is puser, then the parallelism degree parallelismDegree is the minimum of workerNum × coreNum and puser: $$ parallelismDegree=\min \left({p}_{user}, workerNum\times coreNum\right) $$ Definition 3 Idle time. It indicates the time wasted due to uneven task allocation. According to Definition 2, when the user parallelism is greater than the hardware parallelism, that is, puser > workerNum × coreNum, the number of pipelines within the stage is greater than the task parallelism. The worker then needs to allocate tasks in multiple turns, and the number of turns can be expressed as: $$ \mathrm{turnNum}=\mathrm{ceiling}\left(\frac{p_{user}}{workerNum\times coreNum}\right) $$ where the ceiling function returns the smallest integer that is greater than or equal to its argument. By formula 6, we obtain that when puser is an integral multiple of (workerNum × coreNum), all workers execute tasks in each round of distribution. 
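A minimal sketch of the two quantities above; the function and variable names are ours, not from the paper:

```python
import math

def parallelism_degree(p_user, worker_num, core_num):
    # Formula 5: effective parallelism is the user setting,
    # capped by the hardware slots workerNum * coreNum.
    return min(p_user, worker_num * core_num)

def turn_num(p_user, worker_num, core_num):
    # Formula 6: number of allocation rounds needed to run
    # p_user pipelines on workerNum * coreNum slots.
    return math.ceil(p_user / (worker_num * core_num))
```

For example, 20 pipelines on 4 workers with 2 cores each give a parallelism degree of 8 and 3 allocation rounds.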
If the remainder when puser is divided by (workerNum × coreNum) is not 0, there is at least one idle worker in the final round, and the number of idle workers can be expressed as: $$ \mathrm{idleNum}=\left( workerNum\times coreNum\right)-\operatorname{mod}\left({p}_{user}, workerNum\times coreNum\right) $$ where mod(puser, workerNum × coreNum) represents the remainder. Due to the random allocation of tasks, the probability that puser is an integer multiple of (workerNum × coreNum) is very small, so the task load in the final round is likely to be uneven. Assume the set of h pipeline tasks in the final round is \( {\mathrm{Task}}_{{\mathrm{pipe}\mathrm{s}}_{\mathrm{last}}}=\left\{{\mathrm{Task}}_{{\mathrm{pipe}}_{i1}},{\mathrm{Task}}_{{\mathrm{pipe}}_{i2}},\dots, {\mathrm{Task}}_{{\mathrm{pipe}}_{ih}}\right\} \), where h < (workerNum × coreNum). Then, the idle time of an idle worker is: $$ {T}_{{\mathrm{idle}}_w}=\max \left({T}_{{\mathrm{pipe}}_{i1}},{T}_{{\mathrm{pipe}}_{i2}},\dots, {T}_{{\mathrm{pipe}}_{ih}}\right) $$ Allocation fitness model Definition 4 Resource occupancy rate. Assume Tfixed is a measurement interval and \( {T}_{{\mathrm{job}}_j} \) is the actual execution time of job j. The occupancy rate OCjr of the rth resource is defined as the proportion of that resource used by the workers, which is expressed as: $$ {OC}_{jr}={v}_{r_j}\times \frac{T_{{\mathrm{job}}_j}}{T_{\mathrm{fixed}}},\quad r\in \mathrm{Resource} $$ Definition 5 Allocation fitness degree. Assume workLoad is the total workload and CAs = {ca1,ca2,…,can} is the set of computing abilities of the workers workers = {w1,w2,…,wn}. 
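The idle-worker count of the final round can be sketched as follows; the names are illustrative, not from the paper's implementation:

```python
def idle_num(p_user, worker_num, core_num):
    # Formula 7: workers left without a task in the final round.
    slots = worker_num * core_num
    remainder = p_user % slots
    # When p_user divides evenly, every slot is busy in every round.
    return 0 if remainder == 0 else slots - remainder
```

With 20 pipelines and 8 slots, the final round runs only 4 tasks and leaves 4 slots idle.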
Thus, the mean value of the task execution time over all workers can be defined as: $$ meanValue=\frac{\mathrm{workLoad}}{\sum \limits_{w_i\in \mathrm{workers}}{\mathrm{ca}}_{w_i}} $$ Without considering the waiting time, the finish time of the tasks in worker wi with task allocation amount allocationLoadwi can be expressed as: $$ {T}_{{\mathrm{finish}}_{w_i}}=\frac{{\mathrm{allocationLoad}}_{w_i}}{{\mathrm{ca}}_{w_i}},\quad {w}_i\in \mathrm{workers} $$ Therefore, the variance of the task execution time is represented as: $$ \mathrm{varianc}{\mathrm{e}}_{{\mathrm{w}}_{\mathrm{i}}}={\left({T}_{{\mathrm{finish}}_{w_i}}- meanValue\right)}^2 $$ The allocation fitness degree of worker wi can be formulated as: $$ allocationFitnes{s}_{w_i}=\frac{1}{\mathrm{varianc}{\mathrm{e}}_{{\mathrm{w}}_{\mathrm{i}}}}=\frac{1}{{\left({T}_{{\mathrm{finish}}_{w_i}}- meanValue\right)}^2} $$ Lemma 1 For all workers involved in the computation, the greater the allocation fitness, the shorter the execution time of the job and the higher the computational efficiency. Proof From the point of view of task allocation, the execution time of the job can be expressed as: $$ {T}_{\mathrm{job}}=\max \left({T}_{{\mathrm{finish}}_1},{T}_{{\mathrm{finish}}_2},\dots, {T}_{{\mathrm{finish}}_n}\right) $$ According to formula 13, the allocation fitness is inversely proportional to the variance: the greater the fitness, the smaller the variance, which means the finish times of the tasks in the workers are closer to the mean. So, when the allocation fitness takes its maximum value, the job execution time is shortest and the execution efficiency is highest. Therefore, we select the workers with a higher load and migrate their later tasks to workers with a lower load to reach a higher degree of parallelism and allocation fitness. The performance optimization strategy Construct basic data The improved architecture of Spark with the optimization strategy is shown in Fig. 1. 
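A numeric sketch of the allocation fitness model (names ours; a perfectly balanced worker would make the deviation zero, so real code would guard that division):

```python
def mean_value(work_load, capabilities):
    # Formula 10: ideal finish time if load matched capability exactly.
    return work_load / sum(capabilities)

def finish_time(allocation_load, capability):
    # Formula 11: per-worker finish time, waiting time ignored.
    return allocation_load / capability

def allocation_fitness(work_load, loads, capabilities):
    # Formulas 12-13: reciprocal of the squared deviation of each
    # worker's finish time from the mean.
    m = mean_value(work_load, capabilities)
    return [1.0 / (finish_time(l, c) - m) ** 2
            for l, c in zip(loads, capabilities)]
```

For a workload of 12 split evenly across two workers with capabilities 1 and 2, the mean is 4; the slower worker finishes at 6 (fitness 0.25) and the faster at 3 (fitness 1.0), so shifting load toward the faster worker raises both fitness values.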
Fig. 1. The improved architecture of Spark To deploy the performance optimization strategy in Spark, it is necessary to implement the scheduling method in the spark.scheduler.TaskSchedulerImpl interface. The DAG scheduler contains all the topology information of the current cluster operation, including the various parameter configurations and the mapping between threads and component IDs; the cluster object contains all the status information of the current cluster, including the mappings among the threads, nodes, and executors of the topology and the usage information of idle workers and slots. The above information can be obtained through the API object. The CPU occupancy of each thread in the topology can be obtained through the getThreadCpuTime(long id) method of the Java ThreadMXBean class, where id is the thread ID; the network bandwidth occupancy of each thread can be obtained by measuring the size of each RDD in the experiment, monitoring the data transmission rate of each thread in the Spark UI, and then estimating by simple accumulation. Because the threads share memory, the memory occupancy of each thread can only be roughly estimated from the -Xss parameter in the configuration file; in addition, the hardware parameters and load information of the operating system can be accessed through the relevant files in the /proc directory. After the code is written, it is packaged as a jar into the Spark_HOME/lib directory and run after configuring spark.scheduler in spark.yaml on the master node. Performance optimization strategy The key problem of the optimization strategy is the selection of the destination node. In order to meet the requirements of the workers, it is necessary to exclude the nodes that do not satisfy the resource constraint model. Denote ms and md as the total amounts of memory resource in the source node and the candidate destination node, respectively. 
In the process of deciding to reassign the latter tasks, it is necessary to continue moving out other tasks until the resources occupied on the source node fall below the threshold. Finally, the optimal destination node is selected so that the allocation fitness reaches a larger value. It should be noted that when the memory, disk, or network bandwidth resources overflow, the optimization strategy is the same as in this section, computed only for the corresponding type of resource. The detailed steps of the optimization strategy are shown in Algorithm 1.

Step 1. Initialize the data path and the number of data partitions. Spark uses the RDD textFile operator to read the data from HDFS into the memory of the Spark cluster.

Step 2. Obtain the default parallelism degree and collect statistical information to calculate the resource occupancy degree in the system.

Step 3. Update the degree of parallelism and the allocation fitness based on the functions given in sections 2.2 and 2.3, combined with the data information acquired in Step 2, and then select the IDs of the workers with the higher load.

Step 4. Save the corresponding parameters to the database and update the information when the status of a resource changes. After selecting the source node and the destination node, exchange their tasks and refresh the remaining CPU, memory, and network bandwidth resources of both nodes.

Step 5. The TaskScheduler then selects the set of workers with the lower load to assign tasks, so as to obtain a larger degree of parallelism and allocation fitness.

Result and discussion

Experimental platform

We established a computing cluster using 1 server and 8 worker nodes; the server is set as the Hadoop Master and Spark NameNode, and the others are set as Hadoop slaves and Spark DataNodes. The details of the configuration are shown in Table 1. The task execution time is acquired from the Spark console, and nmon monitors the memory usage.
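The five-step strategy above amounts to a greedy rebalancing loop that moves load from the most loaded to the least loaded worker until per-worker execution times cluster around the mean. A simplified sketch, with our own worker representation, migration unit `step`, and stopping rule (none of which are specified in the paper):

```python
def execution_time(worker):
    return worker["load"] / worker["capacity"]

def rebalance(workers, step=1.0, max_iters=10_000):
    """Greedy load migration from the slowest to the fastest worker."""
    for _ in range(max_iters):
        src = max(workers, key=execution_time)
        dst = min(workers, key=execution_time)
        # stop once moving another unit of load would no longer help
        if execution_time(src) - execution_time(dst) <= step / dst["capacity"]:
            break
        src["load"] -= step
        dst["load"] += step
    return workers
```

In the real system the migrated unit is a task rather than an abstract amount of load, and each candidate move is first filtered through the resource constraint model described above.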
Table 1 Configuration parameters

Execution time evaluation

In order to verify the performance of the algorithm on several different types of operations in a concurrent environment, we use the official Spark example programs to form a working set comprising four types of algorithms; dataset types 1, 2, 3, and 4 denote WordCount, TeraSort, K-Means, and PageRank jobs, respectively. Figure 2 compares the execution time under the different strategies.

Fig. 2 Comparison of execution time for different strategies

Figure 2 shows that, with performance optimization, the acceleration of K-Means and PageRank under the proposed strategy is better than that without the optimization strategy; K-Means and PageRank consist of wide-dependency operations, in contrast to WordCount and TeraSort. The corresponding acceleration rates are 17.9%, 17.6%, 15.1%, and 30%, respectively. An improper parallelism degree and task allocation may induce a large number of out-of-memory events and increased disk I/O, which decrease the execution efficiency and lead to higher overhead in execution time. Thus, compared to the existing scheduling mechanism, scheduling with the performance optimization strategy can reduce the latency more effectively, and the implementation process does not have a great impact on the performance of the cluster.

Memory utilization evaluation

Figures 3, 4, 5, and 6 are monitored under the optimization strategy proposed in this paper and show how the memory utilization of the four different jobs changes during the execution on worker 3.

Fig. 3 The memory utilization of WordCount
Fig. 4 The memory utilization of TeraSort
Fig. 5 The memory utilization of K-Means
Fig. 6 The memory utilization of PageRank

Memory utilization is related to the type of job and the distribution of the input data. For the same algorithm, the greater the amount of data processed, the greater the amount of memory occupied. As shown in Figs.
3, 4, 5, and 6, WordCount and TeraSort have a relatively stable memory footprint as the execution time increases, while K-Means and PageRank have different memory occupancy rates as the processing task phases differ.

Disk I/O evaluation

Similarly, the disk I/O has different characteristics as the type of job varies. Figures 7, 8, 9, and 10 are monitored under the optimization strategy proposed in this paper and show how the disk utilization of the four different jobs changes during the execution on worker 3.

Fig. 7 The disk utilization of WordCount
Fig. 8 The disk utilization of TeraSort
Fig. 9 The disk utilization of K-Means
Fig. 10 The disk utilization of PageRank

As far as the disk I/O rate is concerned, a task processing data from the local disk generates the corresponding local data reads on the worker and consumes a certain amount of disk I/O. If network data is processed, additional network I/O is also produced because the worker needs to read data from a remote disk, and memory overflow may produce more frequent disk I/O. As can be seen in Figs. 7, 8, 9, and 10, the disk I/O of WordCount is more pronounced, while that of the other three jobs is lower. At the beginning of execution for K-Means and TeraSort, the disk I/O increases significantly because tasks are assigned to worker 3, which needs to read some data from the disk at this time.

In this paper, our contributions can be summarized as follows. First, we analyze the theoretical relationship between the degree of parallelism and the allocation fitness. Second, we propose an evaluation model that is pluggable for task assignment. Third, on the basis of the evaluation model, the strategy takes resource characteristics into consideration and assigns tasks to the workers with a lower load to increase execution efficiency. Numerical analysis and experimental results verify the effectiveness of the presented strategy.
Our future work will mainly concentrate on analyzing the general principles of the resource requirements of different types of operations in in-memory computing frameworks, and on designing optimization strategies that adapt to the load and type of jobs.

Abbreviations
DAG: Directed acyclic graph
IMC: In-memory computing
SAP: System applications and products
SIMD: Single instruction, multiple data
SQL: Structured query language

Acknowledgements
The authors would like to thank the reviewers for their thorough reviews and helpful suggestions. This work was supported by the National Natural Science Foundation of China under Grants No. 61262088, 61462079, and 61562086. All data are fully available without restriction.

Author information
Changtian Ying, School of Mechanical and Electrical Engineering, Shaoxing University, Shaoxing 312000, People's Republic of China. Changyan Ying and Chen Ban, School of Information Science and Engineering, Xinjiang University, Urumqi 830008, People's Republic of China. CTY is the main writer of this paper; she proposed the main idea, completed the experiments, and analyzed the results. CYY and CB gave some important suggestions for this paper. All authors read and approved the final manuscript. Correspondence to Chen Ban.

Keywords: Parallelism degree; Allocation fitness
Mathematische Annalen

On the spectral problem associated with the time-periodic nonlinear Schrödinger equation

Jonatan Lenells · Ronald Quirchmayr

Abstract According to its Lax pair formulation, the nonlinear Schrödinger (NLS) equation can be expressed as the compatibility condition of two linear ordinary differential equations with an analytic dependence on a complex parameter. The first of these equations—often referred to as the x-part of the Lax pair—can be rewritten as an eigenvalue problem for a Zakharov–Shabat operator. The spectral analysis of this operator is crucial for the solution of the initial value problem for the NLS equation via inverse scattering techniques. For space-periodic solutions, this leads to the existence of a Birkhoff normal form, which beautifully exhibits the structure of NLS as an infinite-dimensional completely integrable system. In this paper, we take the crucial steps towards developing an analogous picture for time-periodic solutions by performing a spectral analysis of the t-part of the Lax pair with a periodic potential.

Mathematics Subject Classification 34L20 · 35Q55 · 37K15 · 47A75

Communicated by Y. Giga.

1 Introduction

The nonlinear Schrödinger (NLS) equation $$\begin{aligned} \mathrm {i}u_t + u_{xx} - 2\sigma |u|^2 u = 0, \qquad \sigma = \pm 1, \end{aligned}$$ is one of the most well-studied nonlinear partial differential equations. As a universal model equation for the evolution of weakly dispersive wave packets, it arises in a vast number of applications, ranging from nonlinear fiber optics and water waves to Bose-Einstein condensates. Many aspects of the mathematical theory for (1.1) are well-understood. For example, for spatially periodic solutions (i.e., \(u(x,t) = u(x+1,t)\)), there exists a normal form theory for (1.1) which beautifully exhibits its structure as an infinite-dimensional completely integrable system (see [13] and references therein).
This theory takes a particularly simple form in the case of the defocusing (i.e., \(\sigma = 1\)) version of (1.1). Indeed, for \(\sigma = 1\), the normal form theory ascertains the existence of a single global system of Birkhoff coordinates (the Cartesian version of action-angle coordinates) for (1.1). For the focusing (i.e., \(\sigma = -1\)) NLS, such coordinates also exist, but only locally [20]. The existence of Birkhoff coordinates has many implications. Among other things, it provides an explicit decomposition of phase space into invariant tori, thereby making it evident that an x-periodic solution of the defocusing NLS is either periodic, quasi-periodic, or almost periodic in time. The construction of Birkhoff coordinates for (1.1) is a major achievement which builds on ideas going back all the way to classic work of Gardner, Greene, Kruskal and Miura on the Korteweg–de Vries (KdV) equation [11, 12], and of Zakharov and Shabat on the NLS equation [34]. Early works on the (formal) introduction of action-angle variables include [32, 33]. More recently, Kappeler and collaborators have developed powerful methods which have led to a rigorous construction of Birkhoff coordinates for both KdV [15, 16, 18] and NLS [13, 20] in the spatially periodic case. The key element in the construction of Birkhoff coordinates is the spectral analysis of the Zakharov–Shabat operator L(u) defined by $$\begin{aligned} L(u) = \mathrm {i}\sigma _3\bigg (\frac{\mathrm d}{\mathrm d x} - U\bigg ), \quad \text {where} \quad U = \begin{pmatrix} 0 &{}\quad u \\ \sigma \bar{u} &{}\quad 0 \end{pmatrix} \quad \text {and}\quad \sigma _3 = \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad -1 \end{pmatrix}. \end{aligned}$$ In particular, the periodic eigenvalues of this operator are independent of time if u evolves according to (1.1) and thus encode the infinite number of conservation laws for (1.1). 
The time-independence is a consequence of the fact that equation (1.1) can be viewed as the compatibility condition \(\phi _{xt} = \phi _{tx}\) of the Lax pair equations [22, 34] $$\begin{aligned}&\phi _x + \mathrm {i}\lambda \sigma _3 \phi = U\phi , \end{aligned}$$ $$\begin{aligned}&\phi _t + 2 \mathrm {i}\lambda ^2 \sigma _3 \phi = V \phi , \end{aligned}$$ where \(\lambda \in \mathbb {C}\) is the spectral parameter, \(\phi (x,t,\lambda )\) is an eigenfunction, $$\begin{aligned} V = \begin{pmatrix} - \mathrm {i}\sigma |u|^2 &{}\quad 2 \lambda u + \mathrm {i}u_x \\ 2 \sigma \lambda \bar{u} - \mathrm {i}\sigma \bar{u}_x &{}\quad \quad \mathrm {i}\sigma |u|^2 \end{pmatrix}, \end{aligned}$$ and we note that (1.2) is equivalent to the eigenvalue problem \(L(u)\phi = \lambda \phi \). Strangely enough, although the spectral theory of equation (1.2) (or, equivalently, of the Zakharov–Shabat operator) has been so thoroughly studied, it appears that no systematic study of the spectral theory of the t-part (1.3) with a periodic potential has yet been carried out (there only exist a few studies of the NLS equation on the half-line with asymptotically time-periodic boundary conditions which touch tangentially on the issue [4, 5, 23, 25, 26]). The general scope of this paper is to lay the foundation for a larger project with the goal of showing that (1.1), viewed as an evolution equation in the x-variable, is an integrable PDE and in particular admits a normal form in a neighborhood of the trivial solution \(u\equiv 0\). This means that one can construct Birkhoff coordinates—often referred to as nonlinear Fourier coefficients—on appropriate function spaces so that, when expressed in these coordinates, the PDE can be solved by quadrature. Our approach is inspired by the methods and ideas of [13, 20], where such coordinates for (1.1) as a t-evolution equation were constructed on the phase space of x-periodic functions. 
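Although the systematic spectral theory of the t-part is the subject of this paper, its periodic spectrum is easy to explore numerically: integrate (1.3) over one period to obtain the monodromy matrix M(1, λ) and evaluate the Floquet discriminant Δ(λ) = tr M(1, λ); periodic (respectively antiperiodic) eigenvalues are the roots of Δ(λ) − 2 (respectively Δ(λ) + 2). The following sketch is our own illustration, using a naive fixed-step RK4 integrator and the potential V from (1.4), with u(0, ·) and u_x(0, ·) supplied as callables:

```python
import numpy as np

def t_part_matrix(t, lam, u, ux, sigma):
    """Coefficient matrix -2i*lam^2*sigma_3 + V of the t-part (1.3),
    with V as in (1.4); u, ux are callables for u(0, t) and u_x(0, t)."""
    a, b = u(t), ux(t)
    R = np.diag([-2j * lam**2, 2j * lam**2])
    V = np.array([
        [-1j * sigma * abs(a)**2, 2 * lam * a + 1j * b],
        [2 * sigma * lam * np.conj(a) - 1j * sigma * np.conj(b),
         1j * sigma * abs(a)**2],
    ])
    return R + V

def monodromy(lam, u, ux, sigma, n_steps=4000):
    """Fundamental solution M(1, lam) over one period, by classical RK4."""
    h = 1.0 / n_steps
    M = np.eye(2, dtype=complex)
    f = lambda s, Y: t_part_matrix(s, lam, u, ux, sigma) @ Y
    for k in range(n_steps):
        t = k * h
        k1 = f(t, M)
        k2 = f(t + h / 2, M + h / 2 * k1)
        k3 = f(t + h / 2, M + h / 2 * k2)
        k4 = f(t + h, M + h * k3)
        M = M + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return M

def discriminant(lam, u, ux, sigma, n_steps=4000):
    """Floquet discriminant Delta(lam) = tr M(1, lam)."""
    return np.trace(monodromy(lam, u, ux, sigma, n_steps))
```

For u ≡ 0 this reproduces Δ(λ) = 2 cos(2λ²), and since the coefficient matrix is trace-free, det M(1, λ) = 1 for any potential.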
The work at hand provides the key ingredients needed to adapt the scheme of construction developed in [13, 20] to the x-evolution of time-periodic solutions of NLS, and ultimately to establish local Birkhoff coordinates, hence integrability. In particular, we provide asymptotic estimates for the fundamental matrix solution of the t-part (1.3), which we exploit to study the periodic spectrum of the corresponding generalized eigenvalue problem. For the spectral analysis, it is appropriate (at least initially) to treat the four functions u, \(\sigma \bar{u}\), \(u_x\), \(\sigma \bar{u}_x\) in the definition of V as independent. We will therefore consider the spectral problem (1.3) with potential V given by $$\begin{aligned} V = V(\lambda ,\psi ) = \begin{pmatrix} - \mathrm {i}\psi ^1 \psi ^2 &{}\quad 2\lambda \psi ^1 + \mathrm {i}\psi ^3 \\ 2\lambda \psi ^2 - \mathrm {i}\psi ^4 &{}\quad \mathrm {i}\psi ^1 \psi ^2 \end{pmatrix}, \end{aligned}$$ where \(\psi =\{\psi ^j(t)\}_1^4\) are periodic functions of \(t \in \mathbb {R}\) with period one. Apart from the purely spectral-theoretic interest of studying (1.3), there are at least three other reasons motivating the present study: First, in the context of fiber optics, the roles of the variables x and t in Eq. (1.1) are interchanged, see e.g. [2]. In other words, in applications to fiber optics, x is the temporal and t is the spatial variable. Since the analysis of (1.3) plays the same role for the x-evolution of u(x, t) as the analysis of the Zakharov–Shabat operator plays for the t-evolution, this motivates the study of (1.3). Second, one of the most important problems for nonlinear integrable PDEs is to determine the solution of initial-boundary value problems with asymptotically time-periodic boundary data [3, 6, 26].
For example, consider the problem of determining the solution u(x, t) of (1.1) in the quarter-plane \(\{x>0,t>0\}\), assuming that the initial data u(x, 0), \(x \ge 0\), and the boundary data u(0, t), \(t \ge 0\) are known, and that u(0, t) approaches a periodic function as \(t \rightarrow \infty \). The analysis of this problem via Riemann-Hilbert techniques relies on the spectral analysis of (1.3) with a periodic potential determined by the asymptotic behavior of u(0, t) [4, 25]. Third, at first sight, the differential equations (1.2) and (1.3) may appear unrelated. However, the fact that they are connected via Eq. (1.1) implies that they can be viewed as different manifestations of the same underlying mathematical structure. Indeed, for the analysis of elliptic equations and boundary value problems, a coordinate-free intrinsic approach in which the two parts of the Lax pair are combined into a single differential form has proved the most fruitful [10, 14]. In such a formulation, eigenfunctions which solve both the x-part (1.2) and the t-part (1.3) simultaneously play a central role. It is therefore natural to investigate how the spectral properties of (1.2) are related to those of (1.3). Since the NLS equation is just one example of a large number of integrable equations with a Lax pair formulation, the present work can in this regard be viewed as a case study with potentially broader applications. 1.1 Comparison with the analysis of the x-part Compared with the analysis of the x-part (1.2), the spectral analysis of the t-part (1.3) presents a number of novelties. Some of the differences are: Whereas Eq. (1.2) can be rewritten as the eigenvalue equation \(L(u)\phi = \lambda \phi \) for an operator L(u), no (natural) such formulation is available for (1.3) due to the more complicated \(\lambda \)-dependence. Nevertheless, it is possible to define spectral quantities associated with (1.3) in a natural way. 
Asymptotically for large \(|\lambda |\), the periodic and antiperiodic eigenvalues of (1.2) come in pairs which lie in discs centered at the points \(n \pi \), \(n \in \mathbb {Z}\), along the real axis [13]. In the case of (1.3), a similar result holds, but in addition to discs centered at points on the real axis, there are also discs centered at points on the imaginary axis (see Lemma 3.13). Moreover, the spacing between these discs shrinks to zero as \(|\lambda |\) becomes large. For so-called real type potentials (the defocusing case), the Zakharov–Shabat operator is self-adjoint, implying that the spectrum associated with (1.2) is real. No such statement is true for the t-part (1.3). This is clear already from the previous statement that there exist pairs of eigenvalues tending to infinity contained in discs centered on the imaginary axis. However, it is also true that the eigenvalues of (1.3) near the real axis need not be purely real and the eigenvalues near the imaginary axis need not be purely imaginary. This can be seen from the simple case of a single-exponential potential. Indeed, consider the potential $$\begin{aligned} (\psi ^1(t), \psi ^2(t),\psi ^3(t),\psi ^4(t)) = (\alpha e^{\mathrm {i}\omega t}, \sigma \bar{\alpha } e^{-\mathrm {i}\omega t}, c e^{\mathrm {i}\omega t}, \sigma \bar{c} e^{-\mathrm {i}\omega t}), \end{aligned}$$ where \(\alpha , c \in \mathbb {C}\), \(\omega \in 2\pi \mathbb {Z}\), and \(\sigma = \pm 1\). For potentials of this form, Eq. (1.3) can be solved explicitly (see Sect. 5) and Fig. 1 shows the periodic and antiperiodic eigenvalues of (1.3) for two choices of the parameters. Whereas the matrix U in (1.2) is off-diagonal and contains only the function u and its complex conjugate \(\bar{u}\), the matrix V in (1.3) is neither diagonal nor off-diagonal and involves also \(u_x\) and \(\bar{u}_x\). 
This has implications for the spectral analysis—an obvious one being that (1.5) involves four instead of two scalar potentials \(\psi ^j(t)\). The occurrence of the factor \(\lambda ^2\) in (1.3) implies that the derivation of the fundamental solution's asymptotics for \(|\lambda |\rightarrow \infty \) requires new techniques (see the proof of Theorem 2.7). For the x-part, the analogous result can be established via an application of Gronwall's lemma [13]. This approach does not seem to generalize to the t-part, but instead we are able to perform an asymptotic analysis inspired by [7, Chapter 6] (see also [24]). In Theorems 4.4 and 4.5, we will, for sufficiently small potentials, establish the existence of analytic arcs which connect periodic eigenvalues close to the real line in a pairwise manner and along which the discriminant is real. A similar result for (1.2) can be found in [20, Proposition 2.6]. In both cases, the proof relies on the implicit function theorem in infinite dimensional Banach spaces. However, the proof of (1.3) is quite a bit more involved and requires, for example, the introduction of more complicated function spaces, see (3.5). Plots of the periodic and antiperiodic eigenvalues for two single exponential potentials with different sets of parameters \(\sigma \), \(\omega \), \(\alpha \) and c; cf. (1.6). a The periodic and antiperiodic eigenvalues for the real type potential given by \(\sigma =1\), \(\omega =-2\pi \), \(\alpha =\frac{6}{15}+\frac{11}{4}\mathrm {i}\), \(c=\frac{1}{10}\); b the spectrum of the imaginary type potential with \(\sigma =-1\), \(\omega =-2\pi \), \(\alpha =\frac{1}{2}\), \(c=\mathrm {i}\alpha \sqrt{ 2\alpha ^2-\omega }\), which arises from an exact plane wave solution of the focusing NLS 1.2 Outline of the paper In order to facilitate comparison with the existing literature on the x-part (1.2), our original intention was to closely follow the scheme and methods developed in [13], adapting them to Eq. (1.3). 
As pointed out in the previous paragraph, Eq. (1.3) is quadratic in the spectral parameter \(\lambda \) and hence constitutes a generalized eigenvalue problem, which makes its treatment more challenging. Nevertheless, some resemblance to the first two chapters of [13] remains. The main novelty of the paper is the proof of the leading order asymptotics for large \(|\lambda |\) of the fundamental matrix solution associated with (1.3), cf. Theorem 2.7. These asymptotics are a key ingredient for the subsequent two sections. The discussion of the asymptotic localization of the Dirichlet eigenvalues, Neumann eigenvalues and periodic eigenvalues in Sect. 3, as well as the study of the zero set of the imaginary part of the discriminant for potentials of real and imaginary type (corresponding to the defocusing and focusing NLS, respectively) in Sect. 4 then follow closely [13] and, respectively, [20]. In Sect. 5, we consider the special (but important) case of single-exponential potentials for which the fundamental matrix solution permits an exact formula. This enables us to illustrate the theoretical results from the previous sections. We provide useful formulas for the gradients of the fundamental solution and the discriminant in Sect. 6. The last section reviews the standard bi-Hamiltonian structure of NLS as a time-evolution equation and establishes a Hamiltonian structure for NLS viewed as an x-evolution equation. More precisely, we show that the NLS system $$\begin{aligned} {\left\{ \begin{array}{ll} q_{xx} = -\mathrm {i}q_t +2 q^2 r \\ r_{xx} = \mathrm {i}r_t + 2 r^2 q, \end{array}\right.
} \end{aligned}$$ which is associated with (1.1), can be written as $$\begin{aligned} (q,r,p,s)^\intercal _x= \tilde{\mathcal {D}} \, \partial {\tilde{H}}_1, \end{aligned}$$ where the 4-vector on the left hand side is understood as a column vector (indicated by the transpose operation \({}^\intercal \)), and the Hamiltonian \({\tilde{H}}_1\), its gradient \(\partial {\tilde{H}}_1\), and the Hamiltonian operator \(\tilde{{\mathcal {D}}}\) are given by $$\begin{aligned} \tilde{H}_1 = \int \big (ps + \mathrm {i}q_tr - q^2 r^2\big ) \, \mathrm d t, \quad \partial \tilde{H}_1 = \begin{pmatrix} -\mathrm {i}r_t - 2 r^2 q\\ \mathrm {i}q_t - 2 q^2 r\\ s\\ p \end{pmatrix}, \quad \tilde{\mathcal {D}} = \begin{pmatrix} 0 &{} \ \ 0 &{} \ \ 0 &{} \ \ 1 \\ 0 &{} \ \ 0 &{} \ \ 1&{} \ \ 0 \\ 0 &{} \ \ -1&{} \ \ 0&{} \ \ 0 \\ -1 &{} \ \ 0&{} \ \ 0 &{} \ \ 0 \end{pmatrix}. \end{aligned}$$ The associated Poisson bracket for two functionals F and G is given by $$\begin{aligned} \{F, G\}_{\tilde{\mathcal {D}}} = \int (\partial F)^\intercal \, \tilde{\mathcal {D}} \, \partial G \, \mathrm d t. \end{aligned}$$ 2 Fundamental solution In Sect. 2.1, we introduce the framework for the study of (1.3) and establish basic properties of the fundamental solution. In Sect. 2.2 we derive estimates for the fundamental matrix solution and its \(\lambda \)-derivative for large \(|\lambda |\). These estimates will be used in Sect. 3 to asymptotically localize the Dirichlet, Neumann and periodic eigenvalues as well as the critical points of the discriminant of (1.3). 
2.1 Framework and basic properties The potential matrix V in (1.3) depends on the spectral parameter\(\lambda \in \mathbb {C}\) and the potential \(\psi =(\psi ^1,\psi ^2,\psi ^3,\psi ^4)\) taken from the space $$\begin{aligned} \mathrm {X}:=H^1(\mathbb {T},\mathbb {C}) \times H^1(\mathbb {T},\mathbb {C}) \times H^1(\mathbb {T},\mathbb {C}) \times H^1(\mathbb {T},\mathbb {C}), \end{aligned}$$ where \(H^1(\mathbb {T},\mathbb {C})\) denotes the Sobolev space of complex absolutely continuous functions on the one-dimensional torus \(\mathbb {T}=\mathbb {R}/\mathbb {Z}\) with square-integrable weak derivative, which is equipped with the usual norm induced by the \(H^1\)-inner product $$\begin{aligned} (\cdot ,\cdot ):H^1(\mathbb {T},\mathbb {C})\times H^1(\mathbb {T},\mathbb {C}) \rightarrow \mathbb {C}, \quad (u,v) \mapsto \int ^1_0 ( u {\bar{v}} + u_t {\bar{v}}_t ) \, \mathrm d t. \end{aligned}$$ We endow the space \(\mathrm {X}\) with the inner product $$\begin{aligned} \langle \psi _1,\psi _2 \rangle :=(\psi ^1_1, \psi ^1_2) + (\psi ^2_1 , \psi ^2_2) + (\psi ^3_1 , \psi ^3_2) + (\psi ^4_1 , \psi ^4_2), \end{aligned}$$ which induces the norm \(\Vert \psi \Vert = \sqrt{\langle \psi ,\psi \rangle }\) on \(\mathrm {X}\). Likewise we consider the space $$\begin{aligned} \mathrm {X}_\tau :=H^1([0,\tau ],\mathbb {C}) \times H^1([0,\tau ],\mathbb {C}) \times H^1([0,\tau ],\mathbb {C}) \times H^1([0,\tau ],\mathbb {C}) \end{aligned}$$ on the interval \([0,\tau ]\) for fixed \(\tau >0\), where the Sobolev space \(H^1([0,\tau ],\mathbb {C})\) is equipped with the inner product $$\begin{aligned} (\cdot ,\cdot ) _\tau :H^1([0,\tau ],\mathbb {C})\times H^1([0,\tau ],\mathbb {C}) \rightarrow \mathbb {C}, \quad (u,v) \mapsto \int ^\tau _0 ( u {\bar{v}} + u_t {\bar{v}}_t ) \, \mathrm d t. 
\end{aligned}$$ We set $$\begin{aligned} \langle \psi _1,\psi _2 \rangle _\tau :=(\psi ^1_1, \psi ^1_2)_\tau + (\psi ^2_1 , \psi ^2_2)_\tau + (\psi ^3_1 , \psi ^3_2)_\tau + (\psi ^4_1 , \psi ^4_2)_\tau , \end{aligned}$$ which makes \(\mathrm {X}_\tau \) an inner product space and induces the norm \(\Vert \psi \Vert _\tau :=\sqrt{\langle \psi ,\psi \rangle _\tau }\). For the components \(\psi ^j\) of \(\psi \in \mathrm {X}\) or \(\psi \in \mathrm {X}_\tau \) respectively, we write $$\begin{aligned} \Vert \psi ^j\Vert = \sqrt{(\psi ^j,\psi ^j)}, \quad \Vert \psi ^j\Vert _\tau = \sqrt{(\psi ^j,\psi ^j)_\tau }, \qquad j=1,2,3,4. \end{aligned}$$ Since not every \(\psi \in \mathrm {X}_1\) is periodic, \(\mathrm {X}\) is a proper closed subspace of \(\mathrm {X}_1\). The spaces \(\mathrm {X}\) and \(\mathrm {X}_\tau \) inherit completeness from \(H^1(\mathbb {T},\mathbb {C})\) and \(H^1([0,\tau ],\mathbb {C})\) respectively, hence they are Hilbert spaces. On the space \(M_{2\times 2}(\mathbb {C})\) of complex valued \(2\times 2\)-matrices we consider the norm \(| \cdot |\), which is induced by the standard norm in \(\mathbb {C}^2\), also denoted by \(|\cdot |\), i.e. $$\begin{aligned} |A| :=\max _{z\in \mathbb {C}^2,|z|=1} |A z|. \end{aligned}$$ The norm \(|\cdot |\) is submultiplicative, i.e. \(|A B| \le |A| \, |B|\) for \(A,B \in M_{2\times 2}(\mathbb {C})\). For given \(\lambda \in \mathbb {C}\) and \(\psi \in \mathrm {X}\), let us write the initial value problem corresponding to (1.3) as $$\begin{aligned} \mathrm D \phi&= R \phi + V \phi , \end{aligned}$$ $$\begin{aligned} \phi (0)&= \phi _0, \end{aligned}$$ where V is given by (1.5), $$\begin{aligned} \mathrm D&:=\begin{pmatrix} \partial _t&{} \\ &{}\partial _t\\ \end{pmatrix} , \quad R \equiv R(\lambda ) :=-2\mathrm {i}\lambda ^2 \sigma _3, \end{aligned}$$ $$\begin{aligned} \phi = \begin{pmatrix} \phi ^1 \\ \phi ^2 \end{pmatrix} :\mathbb {T}\rightarrow \mathbb {C}^2. 
\end{aligned}$$ Equation (2.1) reduces to (1.3) if we identify \((\psi ^1, \psi ^2, \psi ^3, \psi ^4) = (u,\sigma \bar{u}, u_x, \sigma \bar{u}_x)\). In analogy to the conventions for the eigenvalue problem (1.2) for the x-part of the NLS Lax pair, we say that the spectral problem (2.1) is of Zakharov–Shabat (ZS) type. The corresponding equation written in AKNS [1] coordinates \((q_0,p_0,q_1,p_1)\) reads $$\begin{aligned} \mathrm D \phi = -2\lambda ^2 \begin{pmatrix} &{}-1 \\ 1&{} \\ \end{pmatrix} \phi + \begin{pmatrix} 2\lambda q_0 - p_1 &{}\quad 2\lambda p_0 + p_0^2 +q_0^2 + q_1 \\ 2\lambda p_0 -( p_0^2 +q_0^2) + q_1 &{}\quad -2\lambda q_0 + p_1 \\ \end{pmatrix} \phi . \end{aligned}$$ It is obtained by multiplying the operator equation \(\mathrm D = R + V\) from the right with T and from the left with \(T^{-1}\), where $$\begin{aligned} T = \begin{pmatrix} 1 &{}\quad \mathrm {i}\\ 1&{}\quad -\mathrm {i}\\ \end{pmatrix} , \quad T^{-1} = \frac{1}{2} \begin{pmatrix} 1 &{}\quad 1 \\ -\mathrm {i}&{}\quad \mathrm {i}\\ \end{pmatrix}, \end{aligned}$$ and by writing $$\begin{aligned} \psi ^1=q_0+\mathrm {i}p_0, \; \psi ^2=q_0-\mathrm {i}p_0, \; \psi ^3=q_1+\mathrm {i}p_1, \; \psi ^4=q_1-\mathrm {i}p_1, \; \end{aligned}$$ that is, $$\begin{aligned} q_0=\frac{1}{2}(\psi ^1+\psi ^2), \; p_0=-\frac{\mathrm {i}}{2}(\psi ^1-\psi ^2), \; q_1=\frac{1}{2}(\psi ^3+\psi ^4), \; p_1=-\frac{\mathrm {i}}{2}(\psi ^3-\psi ^4). \end{aligned}$$ In what follows we show the existence of a unique matrix-valued fundamental solution M of (2.1), that is, a solution of $$\begin{aligned} \mathrm D M = R M + V M, \quad M(0)=\mathrm {I}, \end{aligned}$$ where \(\mathrm {I}\in M_{2\times 2}(\mathbb {C})\) denotes the identity matrix. The proof relies on a standard iteration technique. 
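As a sanity check on the change of coordinates above, one can verify numerically that conjugating the Zakharov–Shabat coefficient matrix by T reproduces the AKNS coefficient matrix. A small sketch (the function names are ours):

```python
import numpy as np

def zs_coefficient(lam, psi):
    """R(lam) + V(lam, psi) in Zakharov-Shabat coordinates, cf. (1.5), (2.3)."""
    p1, p2, p3, p4 = psi
    R = np.diag([-2j * lam**2, 2j * lam**2])
    V = np.array([[-1j * p1 * p2, 2 * lam * p1 + 1j * p3],
                  [2 * lam * p2 - 1j * p4, 1j * p1 * p2]])
    return R + V

def akns_coefficient(lam, q0, p0, q1, p1):
    """The coefficient matrix of the same equation in AKNS coordinates."""
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    W = np.array([[2 * lam * q0 - p1, 2 * lam * p0 + p0**2 + q0**2 + q1],
                  [2 * lam * p0 - (p0**2 + q0**2) + q1, -2 * lam * q0 + p1]])
    return -2 * lam**2 * J + W

# the conjugating matrices T and T^{-1} from the text
T = np.array([[1.0, 1j], [1.0, -1j]])
T_inv = 0.5 * np.array([[1.0, 1.0], [-1j, 1j]])
```

For any real \((q_0,p_0,q_1,p_1)\), setting \(\psi^1=q_0+\mathrm{i}p_0\), \(\psi^2=q_0-\mathrm{i}p_0\), \(\psi^3=q_1+\mathrm{i}p_1\), \(\psi^4=q_1-\mathrm{i}p_1\), one finds \(T^{-1}(R+V)T\) equal to the AKNS matrix.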
We first observe that the fundamental matrix solution for the zero potential \(\psi =0\) is given by $$\begin{aligned} E_\lambda (t) :=\mathrm {e}^{-2 \lambda ^2 \mathrm {i}\sigma _3 t} = \begin{pmatrix} \mathrm {e}^{-2\lambda ^2\mathrm {i}t}&{}\quad \\ &{}\quad \mathrm {e}^{2\lambda ^2\mathrm {i}t} \\ \end{pmatrix}, \quad t\ge 0. \end{aligned}$$ Indeed, \(E_\lambda \) solves the initial value problem $$\begin{aligned} \mathrm D E_\lambda = R E_\lambda , \quad E_\lambda (0) = \mathrm {I}. \end{aligned}$$ For \(\lambda \in \mathbb {C}\), \(\psi \in \mathrm {X}\) and \(0\le t <\infty \) we inductively define $$\begin{aligned} M_0:=E_\lambda (t), \quad M_{n+1}(t) :=\int ^t_0 E_\lambda (t-s) V(s) M_n(s) \, \mathrm d s, \qquad n \ge 0, \end{aligned}$$ where \(V\equiv V(s,\lambda ,\psi )\) is defined for all \(s\ge 0\) by periodicity. For each \(n\ge 1\), \(M_n\) is continuous on \([0,\infty )\times \mathbb {C}\times \mathrm {X}\) and satisfies $$\begin{aligned} M_n(t) =\int _{0\le s_n \le \cdots \le s_1\le t} E_\lambda (t) \prod ^n_{i=1} E_\lambda (-s_i) V(s_i) E_\lambda (s_i) \, \mathrm d s_n \cdots \mathrm d s_1. 
\end{aligned}$$ Using that \(| E_\lambda (t)|=\mathrm {e}^{2 | \mathfrak {I}(\lambda ^2)| t}\) for \(t\ge 0\), we estimate $$\begin{aligned} |M_n(t)|&\le \mathrm {e}^{2(2n+1) | \mathfrak {I}(\lambda ^2)| t} \int _{0\le s_n \le \cdots \le s_1\le t} \prod ^n_{i=1} | V(s_i) | \, \mathrm d s_n \cdots \mathrm d s_1 \\&\le \frac{\mathrm {e}^{2(2n+1) | \mathfrak {I}(\lambda ^2)| t}}{n!} \int _{[0,t]^n} \prod ^n_{i=1} | V(s_i) | \, \mathrm d s_n \cdots \mathrm d s_1 \\&\le \frac{\mathrm {e}^{2(2n+1) | \mathfrak {I}(\lambda ^2)| t}}{n!} \bigg (\int ^t_0 | V(s) | \, \mathrm d s \bigg )^n\\&\le \frac{\mathrm {e}^{2(2n+1) | \mathfrak {I}(\lambda ^2)| t}}{n!} \, t^{n/2} \, \big (2 \max (1,|\lambda |)\big )^n \, [C(\psi ,t)]^n, \end{aligned}$$ where one can choose $$\begin{aligned} C(\psi ,t) :=\big \Vert \max \big (|\psi ^1 \psi ^2|, |\psi ^1| + |\psi ^3|, |\psi ^2| + |\psi ^4|\big ) \big \Vert _t, \end{aligned}$$ which is uniformly bounded on bounded subsets of \([0,\infty )\times \mathrm {X}\). Therefore the series $$\begin{aligned} M(t) :=\sum ^\infty _{n=0} M_n(t) \end{aligned}$$ converges absolutely and uniformly on bounded subsets of \([0,\infty )\times \mathbb {C}\times \mathrm {X}\). By construction, M solves the integral equation $$\begin{aligned} M(t,\lambda ,\psi )=E_\lambda (t) + \int ^t_0 E_\lambda (t-s) V(s,\lambda ,\psi ) M(s,\lambda ,\psi ) \, \mathrm d s, \end{aligned}$$ hence M is the unique matrix solution of the initial value problem (2.5). Since each \(M_n\), \(n\ge 0\), is continuous on \([0,\infty )\times \mathbb {C}\times \mathrm {X}\) and moreover analytic in \(\lambda \) and \(\psi \) for fixed \(t\in [0,\infty )\), M inherits the same regularity due to uniform convergence. Thus we have proved the following result.
Theorem 2.1 (Existence of the fundamental solution M) The power series (2.7) with coefficients given by (2.6) converges uniformly on bounded subsets of \([0,\infty )\times \mathbb {C}\times \mathrm {X}\) to a continuous function denoted by M, which is analytic in \(\lambda \) and \(\psi \) for each fixed \(t\ge 0\) and satisfies the integral Eq. (2.8). The fundamental solution M is in fact compact: Proposition 2.2 (Compactness of M) For any sequence \((\psi _k)_k\) in \(\mathrm {X}\) which converges weakly to an element \(\psi \in \mathrm {X}\) as \(k\rightarrow \infty \), i.e. \(\psi _k \rightharpoonup \psi \), one has $$\begin{aligned} | M(t,\lambda ,\psi _k) - M(t,\lambda ,\psi ) | \rightarrow 0 \end{aligned}$$ uniformly on bounded sets of \([0,\infty )\times \mathbb {C}\). Proof It suffices to prove the statement for each \(M_n\), since the series (2.7) converges uniformly on bounded subsets of \([0,\infty )\times \mathbb {C}\times \mathrm {X}\). The assertion is true for \(M_0=E_\lambda \), which is independent of \(\psi \). To achieve the inductive step, we assume that the statement holds for \(M_n\), \(n\ge 1\), and consider an arbitrary sequence \(\psi _k\rightharpoonup \psi \) in \(\mathrm {X}\). Then $$\begin{aligned} M_n(t,\lambda ,\psi _k)\rightarrow M_n(t,\lambda ,\psi ) \end{aligned}$$ uniformly on bounded subsets of \([0,\infty )\times \mathbb {C}\). Thus $$\begin{aligned} M_{n+1}(t,\lambda ,\psi _k)&= \int ^t_0 E_\lambda (t-s) V(s,\lambda ,\psi _k) M_n(s,\lambda ,\psi _k) \, \mathrm d s \\&\rightarrow \int ^t_0 E_\lambda (t-s) V(s,\lambda ,\psi ) M_n(s,\lambda ,\psi ) \, \mathrm d s \end{aligned}$$ uniformly on bounded subsets of \([0,\infty )\times \mathbb {C}\). \(\square \) Furthermore, M satisfies the Wronskian identity: Proposition 2.3 (Wronskian identity) Everywhere on \([0,\infty )\times \mathbb {C}\times \mathrm {X}\) it holds that $$\begin{aligned} \mathrm {det} \, M(t, \lambda , \psi ) = 1.
\end{aligned}$$ In particular, the inverse \(M^{-1}\) is given by $$\begin{aligned} M^{-1} = \begin{pmatrix}m_4 &{}\quad -m_2 \\ -m_3&{}\quad m_1\end{pmatrix} \quad \text {if} \quad M = \begin{pmatrix} m_1 &{}\quad m_2 \\ m_3&{}\quad m_4 \\ \end{pmatrix}. \end{aligned}$$ Proof The fundamental solution M is regular for all \(t\ge 0\). Therefore a direct computation yields $$\begin{aligned} \partial _t\, \mathrm {det}\,M = \mathrm {tr} (\partial _tM \cdot M^{-1}) \, \mathrm {det}\, M. \end{aligned}$$ Since $$\begin{aligned} \mathrm {tr} (\partial _tM \cdot M^{-1})= \mathrm {tr} (R+V) = 0 \end{aligned}$$ it follows that \(\mathrm {det} \, M(t)=\mathrm {det} \, M(0)=1\) for all \(t\ge 0\). \(\square \) The solution of the inhomogeneous problem corresponding to the initial value problem (2.1)–(2.2) has the usual "variation of constants representation": Proposition 2.4 The unique solution of the inhomogeneous equation $$\begin{aligned} \mathrm D f = (R+V) f +g, \quad f(0)=v_0 \end{aligned}$$ with \(g\in L^2([0,1],\mathbb {C})\times L^2([0,1],\mathbb {C})\) is given by $$\begin{aligned} f(t) = M(t) \bigg ( v_0 + \int ^t_0 M^{-1}(s) g(s) \, \mathrm d s \bigg ). \end{aligned}$$ Proof Differentiating (2.9) with respect to t and using that M is the fundamental solution of (2.5), we find that $$\begin{aligned} f'(t)=\mathrm D f(t)&=M'(t) v_0 + M'(t) \int ^t_0 M^{-1}(s) g(s) \, \mathrm d s + M(t) M^{-1}(t) g(t) \\&= (R + V) M(t) \bigg ( v_0 + \int ^t_0 M^{-1}(s) g(s) \, \mathrm d s \bigg ) + g(t)\\&=(R+V) f(t) +g(t) \end{aligned}$$ and \(f(0)=v_0\). \(\square \) As a corollary we obtain a formula for the \(\lambda \)-derivative \(\dot{M}\) of M. Corollary 2.5 The \(\lambda \)-derivative \(\dot{M}\) of M is given by $$\begin{aligned} \dot{M}(t) = M(t) \int ^t_0 M^{-1}(s) N(s) M(s) \, \mathrm d s, \end{aligned}$$ where $$\begin{aligned} N=2 \begin{pmatrix} -2\lambda \mathrm {i}&{}\quad \psi ^1 \\ \psi ^2 &{}\quad 2\lambda \mathrm {i}\\ \end{pmatrix}.
\end{aligned}$$ In particular, \(\dot{M}\) is analytic on \(\mathbb {C}\times \mathrm {X}\) and compact on \([0,\infty )\times \mathbb {C}\times \mathrm {X}\) uniformly on bounded subsets of \([0,\infty )\times \mathbb {C}\). Proof Differentiation of \(\mathrm D M = (R+V)M\) with respect to \(\lambda \) gives $$\begin{aligned} \mathrm D \dot{M}&= (R + V) \dot{M} + \frac{\mathrm d}{\mathrm d \lambda }\Big ( R(\lambda ) + V(\lambda ) \Big ) M = (R + V) \dot{M} + N M, \end{aligned}$$ and Proposition 2.4 yields (2.10). The second claim is a consequence of Proposition 2.2. \(\square \) The fundamental solution M of the ZS-system is related to the fundamental solution K of the AKNS-system by $$\begin{aligned} K=T^{-1} M T, \end{aligned}$$ cf. (2.4). That is, if $$\begin{aligned} M = \begin{pmatrix} m_1 &{}\quad m_2 \\ m_3&{}\quad m_4 \\ \end{pmatrix}, \quad K = \begin{pmatrix} k_1 &{}\quad k_2 \\ k_3&{}\quad k_4 \\ \end{pmatrix}, \end{aligned}$$ then $$\begin{aligned} k_1= \frac{m_1+m_2+m_3+m_4}{2}, \quad&k_2=\frac{m_1-m_2+m_3-m_4}{-2\mathrm {i}}, \\ k_3= \frac{m_1+m_2-m_3-m_4}{2\mathrm {i}}, \quad&k_4=\frac{m_1-m_2-m_3+m_4}{2}. \end{aligned}$$ The fundamental solution for the zero potential in AKNS coordinates is therefore given by $$\begin{aligned} \mathrm {e}^{2 \mathrm {i}\lambda ^2 \sigma _2 t} = \begin{pmatrix} \cos 2\lambda ^2 t &{}\quad \sin 2\lambda ^2 t \\ -\sin 2\lambda ^2 t&{}\quad \cos 2\lambda ^2 t \\ \end{pmatrix}, \quad \sigma _2 = \begin{pmatrix} 0 &{}\quad -\mathrm{i} \\ \mathrm{i} &{}\quad 0 \\ \end{pmatrix}. \end{aligned}$$ Remark 2.6 It is obvious that all results in this section possess an analogous version in which the space \(\mathrm {X}\) of 1-periodic potentials is replaced by the space \(\mathrm {X}_\tau \) of potentials defined on the interval \([0,\tau ]\), \(\tau >0\). 2.2 Leading order asymptotics The results in this section hold for \(0\le t\le 1\) and hence apply to the time-periodic problem we are primarily interested in.
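The modulus identity \(|E_\lambda(t)| = \mathrm e^{2|\mathfrak I(\lambda^2)|t}\), which controls all error terms in the asymptotic estimates below, together with the sign condition deciding which of the two exponentials stays bounded, can be sanity-checked numerically. The following is an illustrative sketch of ours, not part of the paper.

```python
import cmath, math

# For lam = x + iy one has |e^{2i lam^2 t}| = e^{-4xyt}, so the larger entry of
# E_lam(t) = diag(e^{-2i lam^2 t}, e^{2i lam^2 t}) has modulus e^{2|Im(lam^2)|t};
# moreover |e^{2i lam^2 t}| <= 1 exactly when x*y >= 0 (closed first/third quadrants).
for lam in [1.5 + 0.4j, -2.0 + 1.0j, 0.3 - 2.2j, -1.1 - 0.7j]:
    x, y = lam.real, lam.imag
    for t in [0.25, 1.0]:
        e_plus = abs(cmath.exp(2j * lam**2 * t))     # |e^{ 2 i lam^2 t}|
        e_minus = abs(cmath.exp(-2j * lam**2 * t))   # |e^{-2 i lam^2 t}|
        norm_E = max(e_plus, e_minus)
        assert abs(e_plus - math.exp(-4 * x * y * t)) <= 1e-9 * e_plus
        assert abs(norm_E - math.exp(2 * abs((lam**2).imag) * t)) <= 1e-9 * norm_E
        assert (e_plus <= 1.0 + 1e-12) == (x * y >= 0)
print("modulus checks passed")
```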
It was pointed out in [25] that the fundamental matrix solution M of (2.5) for a potential with sufficient smoothness and decay admits an asymptotic expansion (as \(|\lambda | \rightarrow \infty \)) of the form $$\begin{aligned} M(\lambda ,t) = \bigg (\mathrm {I}+ \frac{Z_1(t)}{\lambda } + \frac{Z_2(t)}{\lambda ^2} + \cdots \bigg ) \mathrm {e}^{-2\mathrm {i}\lambda ^2 t \sigma _3} + \bigg (\frac{W_1(t)}{\lambda } + \frac{W_2(t)}{\lambda ^2} + \cdots \bigg ) \mathrm {e}^{2\mathrm {i}\lambda ^2 t \sigma _3}, \end{aligned}$$ where the matrices \(Z_k\), \(W_k\), \(k=1,2, \dots \), can be explicitly expressed in terms of the potential and therefore only depend on the time variable \(t\ge 0\), and satisfy \(Z_k(0) + W_k(0)=0\) for all integers \(k\ge 1\). This suggests that M satisfies $$\begin{aligned} M(\lambda ,t) = \mathrm {e}^{-2\mathrm {i}\lambda ^2 t \sigma _3} + {\mathcal {O}}(|\lambda |^{-1} \, \mathrm {e}^{2 |\mathfrak {I}(\lambda ^2)| t}) \qquad \text {as} \quad |\lambda |\rightarrow \infty \end{aligned}$$ for t within a given bounded interval. These considerations suggest the following result. Theorem 2.7 (Asymptotics of M and \({\dot{M}}\) as \(|\lambda | \rightarrow \infty \)) Uniformly on \([0,1]\times \mathbb {C}\) and on bounded subsets of \(\mathrm {X}_1\), $$\begin{aligned} M(t,\lambda ,\psi ) = E_\lambda (t) + {\mathcal {O}}\big (|\lambda |^{-1} \,\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|t}\big ) \end{aligned}$$ in the sense that there exist constants \(C>0\) and \(K>0\) such that $$\begin{aligned} |\lambda |\, \mathrm {e}^{-2|\mathfrak {I}(\lambda ^2)|t} \, | M(t,\lambda ,\psi ) - E_\lambda (t)| \le C \end{aligned}$$ uniformly for all \(0\le t \le 1\), all \(\lambda \in \mathbb {C}\) with \(|\lambda |>K\) and all \(\psi \) contained in a given bounded subset of \(\mathrm {X}_1\).
Moreover, the \(\lambda \)-derivative of M satisfies $$\begin{aligned} \dot{M}(t,\lambda ,\psi ) = \dot{E}_\lambda (t) + {\mathcal {O}}\big (\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)| t}\big ) \end{aligned}$$ uniformly on \([0,1]\times \mathbb {C}\) and on bounded subsets of \(\mathrm {X}_1\). Theorem 2.7 will be established via a series of lemmas. We first introduce some notation and briefly discuss the idea of the proof. For \( \lambda \in \mathbb {C}\) and \(\psi \in \mathrm {X}_1\), let M be the fundamental solution of (2.5), which will be considered on the unit interval [0, 1]. We set $$\begin{aligned} \theta :=2 \lambda ^2 \end{aligned}$$ and define \(M^+\) and \(M^-\) by $$\begin{aligned} M^+(t,\lambda ,\psi ) :=M(t,\lambda ,\psi ) \mathrm {e}^{\mathrm {i}\theta t \sigma _3}, \quad M^-(t,\lambda ,\psi ) :=M(t,\lambda ,\psi ) \mathrm {e}^{-\mathrm {i}\theta t \sigma _3}. \end{aligned}$$ For a given complex \(2\times 2\)-matrix $$\begin{aligned} A = \begin{pmatrix} a &{}\quad b \\ c &{}\quad d \end{pmatrix} \end{aligned}$$ we denote by \(A^{\mathrm d}\) its diagonal part and by \(A^{\mathrm {od}}\) its off-diagonal part, i.e. $$\begin{aligned} A^{\mathrm d} = \begin{pmatrix} a &{}\quad \\ &{}\quad d \end{pmatrix}, \quad A^{\mathrm {od}} = \begin{pmatrix} &{}\quad b \\ c &{}\quad \end{pmatrix}. \end{aligned}$$ We will always identify a potential \(\psi \in \mathrm {X}_1\), with its absolutely continuous version. This allows us to evaluate \(\psi \) at each point; we set \(\psi _0:=\psi (0)\) and \(\psi ^j_0:=\psi ^j(0)\) for \(j=1,2,3,4\).
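The splitting \(A = A^{\mathrm d} + A^{\mathrm{od}}\) interacts with \(\sigma_3\) through the identity \([\sigma_3, A] = 2\sigma_3 A^{\mathrm{od}}\), which is the algebraic step used repeatedly below when \(\theta\)-commutators are rewritten. A small sketch with our own helper functions (not from the paper) confirms it:

```python
# Our own helpers (not from the paper): diagonal/off-diagonal splitting of a
# 2x2 matrix and the identity [sigma_3, A] = 2 * sigma_3 * A^od.

def diag_part(A):
    return [[A[0][0], 0], [0, A[1][1]]]

def offdiag_part(A):
    return [[0, A[0][1]], [A[1][0], 0]]

def mul(A, B):
    return [[A[i][0]*B[0][j] + A[i][1]*B[1][j] for j in range(2)] for i in range(2)]

sigma3 = [[1, 0], [0, -1]]
A = [[1 + 2j, 3 - 1j], [0.5j, -4 + 1j]]       # arbitrary sample matrix

comm = [[mul(sigma3, A)[i][j] - mul(A, sigma3)[i][j] for j in range(2)] for i in range(2)]
rhs = [[2 * x for x in row] for row in mul(sigma3, offdiag_part(A))]

assert all(comm[i][j] == rhs[i][j] for i in range(2) for j in range(2))
assert all(diag_part(A)[i][j] + offdiag_part(A)[i][j] == A[i][j] for i in range(2) for j in range(2))
```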
For a given potential \(\psi \in \mathrm {X}_1\), \(t\in [0,1]\) and \(\lambda \in \mathbb {C}\) we define $$\begin{aligned} Z_p(t,\lambda ,\psi ) :=\mathrm {I}+ \frac{Z_1(t,\psi )}{\lambda } + \frac{Z^{\mathrm {od}}_2(t,\psi )}{\lambda ^2} , \end{aligned}$$ where $$\begin{aligned} Z_1(t,\psi )&:=-\frac{\mathrm {i}}{2} \begin{pmatrix} &{}\quad \psi ^1 \\ -\psi ^2 &{}\quad \end{pmatrix} + \frac{1}{2} \Gamma \sigma _3, \quad \\ Z^{\mathrm {od}}_2(t,\psi )&:=\frac{1}{4} \begin{pmatrix} &{}\quad \psi ^3 + \mathrm {i}\psi ^1 \Gamma \\ \psi ^4 + \mathrm {i}\psi ^2 \Gamma &{}\quad \end{pmatrix}, \end{aligned}$$ with $$\begin{aligned} \Gamma \equiv \Gamma (t,\psi ) :=\int ^t_0 ( \psi ^1 \psi ^4 - \psi ^2 \psi ^3) \, \mathrm d \tau . \end{aligned}$$ Furthermore we set $$\begin{aligned} W_p(t,\lambda ,\psi ) :=\frac{W_1(t,\psi )}{\lambda } + \frac{W_2(t,\psi )}{\lambda ^2} + \frac{W^{\mathrm d}_3(t,\psi )}{\lambda ^3} , \end{aligned}$$ where $$\begin{aligned}&W_1 (t,\psi ) = W_1 (\psi ) :=\frac{\mathrm {i}}{2} \begin{pmatrix} &{}\quad \psi ^1_0 \\ -\psi ^2_0 &{}\quad \end{pmatrix}, \\&W_2(t,\psi ) :=- \frac{1}{4} \begin{pmatrix} \psi ^2_0 \psi ^1 &{} \quad -\mathrm {i}\psi ^1_0 \Gamma +\psi ^3_0 \\ - \mathrm {i}\psi ^2_0 \Gamma +\psi ^4_0 &{} \quad \psi ^1_0 \psi ^2 \end{pmatrix}, \\&W^{\mathrm d}_3(t,\psi ) :=\frac{\mathrm {i}}{8} \begin{pmatrix} -\psi ^2_0( \psi ^3 + \mathrm {i}\psi ^1 \Gamma ) +\psi ^4_0 \psi ^1&{}\quad \\ &{}\quad \psi ^1_0 (\psi ^4 + \mathrm {i}\psi ^2 \Gamma ) -\psi ^3_0 \psi ^2 \end{pmatrix}. \end{aligned}$$ We finally define \(M_p\), which will serve as an approximation of M, by $$\begin{aligned} M_p(t,\lambda ,\psi ):=Z_p(t,\lambda ,\psi ) \mathrm {e}^{-\mathrm {i}\theta t \sigma _3} + W_p(t,\lambda ,\psi ) \mathrm {e}^{\mathrm {i}\theta t \sigma _3}, \end{aligned}$$ and set \(M^+_p :=M_p \mathrm {e}^{\mathrm {i}\theta t \sigma _3}\), \(M^-_p :=M_p \mathrm {e}^{-\mathrm {i}\theta t \sigma _3}\), i.e.
$$\begin{aligned} M^+_p(t,\lambda ,\psi )&= Z_p(t,\lambda ,\psi ) + W_p(t,\lambda ,\psi ) \mathrm {e}^{2\mathrm {i}\theta t \sigma _3},\\ M^-_p(t,\lambda ,\psi )&= Z_p(t,\lambda ,\psi ) \mathrm {e}^{-2\mathrm {i}\theta t \sigma _3} + W_p(t,\lambda ,\psi ). \end{aligned}$$ Letting \(Q_j\), \(j=1,2,3,4\), denote the four open quadrants of the complex \(\lambda \)-plane, we set $$\begin{aligned} D_+ :=Q_1 \cup Q_3 \quad \text {and} \quad D_-:=Q_2 \cup Q_4. \end{aligned}$$ For an arbitrary complex number \(\lambda = x + \mathrm {i}y\) with \(x,y\in \mathbb {R}\) and \(t\ge 0\), it holds that $$\begin{aligned} |\mathrm {e}^{2\mathrm {i}\lambda ^2 t}| = \mathrm {e}^{-4 x y t} \le 1 \iff \lambda \in \overline{D_+}, \quad |\mathrm {e}^{-2\mathrm {i}\lambda ^2 t}| = \mathrm {e}^{4 x y t} \le 1 \iff \lambda \in \overline{D_-}. \end{aligned}$$ We will prove Theorem 2.7 by establishing asymptotic estimates for the distance between the fundamental solution M and the explicit expression \(M_p\) that approximates M. For this purpose we will consider the columns of \(M^+\) and \(M^-\) separately and compare them with the columns of \(M^+_p\) and \(M^-_p\), respectively, after restricting attention to either \(\overline{D_+}\) or \(\overline{D_-}\). By combining all four cases, we infer asymptotic estimates for the full matrix M valid on the whole complex plane. For a given smooth potential \(\psi \), the matrices \(Z_k\) and \(W_k\) can be determined recursively up to any order \(k\ge 0\) by integration by parts. Indeed, note that \(V = V_0 + \lambda V_1\) where $$\begin{aligned} V_0 :=\begin{pmatrix} -\mathrm {i}\psi ^1 \psi ^2 &{}\quad \mathrm {i}\psi ^3 \\ - \mathrm {i}\psi ^4 &{}\quad \mathrm {i}\psi ^1 \psi ^2 \end{pmatrix}, \quad V_1 :=\begin{pmatrix} &{}\quad 2 \psi ^1 \\ 2\psi ^2 &{}\quad \end{pmatrix}.
\end{aligned}$$ Assuming that the formal expression $$\begin{aligned} \bigg ( \sum ^\infty _{k=-1} \frac{Z_k(t,\psi )}{\lambda ^k} \bigg ) \mathrm {e}^{-\mathrm {i}\theta t \sigma _3} + \bigg ( \sum ^\infty _{k=-1} \frac{W_k(t,\psi )}{\lambda ^k} \bigg ) \mathrm {e}^{\mathrm {i}\theta t \sigma _3} \end{aligned}$$ with $$\begin{aligned} Z_0(t,\psi ) \equiv \mathrm {I}, \quad Z_{-1}(t,\psi )=W_{-1}(t,\psi )=W_0(t,\psi ) \equiv 0 \end{aligned}$$ solves (2.5), one infers the following recursive equations for the coefficients \(Z_k\) and \(W_k\): $$\begin{aligned} (Z_k)_t + 4 \mathrm {i}\sigma _3 Z^{\mathrm {od}}_{k+2}&= V_0 Z_k + V_1 Z_{k+1}, \\ (W_k)_t + 4 \mathrm {i}\sigma _3 W^{\mathrm {d}}_{k+2}&= V_0 W_k + V_1 W_{k+1} \end{aligned}$$ for all integers \(k\ge -1\) and \(Z_k(0,\psi ) + W_k(0,\psi ) = 0\) for all integers \(k\ge 1\). Remark 2.8 For \(\psi \in \mathrm {X}_1\), the matrices \(Z_p\) and \(W_p\) satisfy $$\begin{aligned} Z_p(0,\lambda ,\psi ) + W_p(0,\lambda ,\psi ) = \mathrm {I}+ \mathcal O(|\lambda |^{-2}), \end{aligned}$$ since the values of \(Z^{\mathrm d}_2\) are not determined, which turns out to be sufficient to prove the asymptotic estimates of M asserted in Theorem 2.7. Lemma 2.9 Let \(\psi \in \mathrm {X}_1\) be an arbitrary potential. Then M is the fundamental matrix solution of the Cauchy problem (2.5) if and only if \(M^+\) satisfies $$\begin{aligned} M^+_t + 2 \mathrm {i}\theta \sigma _3 (M^+)^{\mathrm {od}} = V M^+, \quad M^+(0,\lambda ) = \mathrm {I}.
\end{aligned}$$ Proof By applying the product rule, assuming that (2.5) holds and noting that \(\sigma _3\) commutes with diagonal matrices, we obtain $$\begin{aligned} M^+_t = (M \mathrm {e}^{\mathrm {i}\theta t \sigma _3})_t&= M_t \, \mathrm {e}^{\mathrm {i}\theta t \sigma _3} + M \, \mathrm {e}^{\mathrm {i}\theta t \sigma _3} \, \mathrm {i}\theta \sigma _3 \\&= (V M - \mathrm {i}\theta \sigma _3 \, M) \, \mathrm {e}^{\mathrm {i}\theta t \sigma _3} + \mathrm {i}\theta M^+ \sigma _3 \\&= V M^+ - \mathrm {i}\theta [\sigma _3, M^+] \\&= V M^+ - 2 \mathrm {i}\theta \sigma _3 (M^+)^{\mathrm {od}}. \end{aligned}$$ Conversely, if (2.15) holds, we similarly obtain $$\begin{aligned} M_t \, \mathrm {e}^{\mathrm {i}\theta t \sigma _3} = (V M - \mathrm {i}\theta \sigma _3 \, M) \, \mathrm {e}^{\mathrm {i}\theta t \sigma _3}, \end{aligned}$$ and a multiplication with \(\mathrm {e}^{-\mathrm {i}\theta t \sigma _3}\) from the right yields that M satisfies the differential equation in (2.5). The statement concerning the initial conditions holds because \(M(0,\lambda )=M^+(0,\lambda )\). \(\square \) The following lemma is concerned with the invertibility of \(Z_p\). We set \(\mathbb {C}^K :=\{\lambda \in \mathbb {C}:|\lambda |>K \}\) for \(K>0\), and denote by \(B_r(0,\mathrm {X}_1)\) the ball of radius \(r>0\) in \(\mathrm {X}_1\) centered at 0. Lemma 2.10 Let \(r>0\). There exists a constant \(K_r>0\) such that \(Z_p\) is invertible on \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) with $$\begin{aligned} Z_p^{-1}(t,\lambda ,\psi ) = \sum ^\infty _{n=0} \bigg (- \frac{Z_1(t,\psi )}{\lambda } - \frac{Z^{\mathrm {od}}_2(t,\psi )}{\lambda ^2} \bigg )^n. \end{aligned}$$ Proof We use the general fact that if an element A of a Banach algebra \(({\mathcal {A}},\Vert \cdot \Vert )\) satisfies \(\Vert A\Vert <1\), then \(\mathrm {I}- A\) is invertible and its inverse is given by the Neumann series \(\sum _{n\ge 0}A^n\).
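The Neumann-series fact just invoked can be illustrated with a minimal numerical example of ours (not from the paper): for a matrix with small entries, the partial sums of \(\sum_{n\ge 0}A^n\) multiply against \(\mathrm I - A\) to give the identity.

```python
# Minimal numerical illustration (ours) of the Neumann-series fact:
# if |A| < 1, then (I - A) is invertible with inverse sum_{n>=0} A^n.

def mul(A, B):
    return [[A[i][0]*B[0][j] + A[i][1]*B[1][j] for j in range(2)] for i in range(2)]

A = [[0.2, -0.1j], [0.15, 0.1 + 0.05j]]   # small entries, so the series converges
S = [[1.0, 0.0], [0.0, 1.0]]              # partial sum, starting with A^0 = I
P = [row[:] for row in A]                 # current power A^n

for _ in range(200):
    for i in range(2):
        for j in range(2):
            S[i][j] += P[i][j]
    P = mul(P, A)

# (I - A) * S should be (numerically) the identity
ImA = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
check = mul(ImA, S)
assert abs(check[0][0] - 1) < 1e-10 and abs(check[1][1] - 1) < 1e-10
assert abs(check[0][1]) < 1e-10 and abs(check[1][0]) < 1e-10
```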
Let \(K_r>0\) be so large that $$\begin{aligned} \bigg | \frac{Z_1(t,\psi )}{\lambda } + \frac{Z^{\mathrm {od}}_2(t,\psi )}{\lambda ^2} \bigg | < \frac{1}{2} \end{aligned}$$ for all \(t\in [0,1]\), \(\lambda \in \mathbb {C}^{K_r}\) and \(\psi \in B_r(0,\mathrm {X}_1)\). This can always be achieved, because the functions \(\{\psi ^j\}_1^4\), and hence also the functions \(| Z_1(t,\psi )|\) and \(| Z^{\mathrm {od}}_2(t,\psi )|\), are uniformly bounded on \([0,1]\times B_r(0,\mathrm {X}_1)\). It follows that the inverse of \(Z_p\) on \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) exists and is given by its Neumann series, i.e. (2.16) is satisfied. \(\square \) Lemma 2.10 and its proof suggest the introduction of the following notation. Definition 2.11 For each \(r>0\), we define $$\begin{aligned} K_r :=\inf \big \{ K > 1 : |\mathrm {I}- Z_p(t,\lambda ,\psi )| < 1/2 \;\; \forall \, t \in [0,1], \; \forall \, \psi \in B_r(0,\mathrm {X}_1), \; \forall \, \lambda \in \mathbb {C} \text { with } |\lambda | > K \big \}. \end{aligned}$$ Corollary 2.12 Let \(r>0\). The matrix \(Z_p\) is invertible on \([0,1]\times \mathbb {C}^{K_r}\times B_r(0,\mathrm {X}_1)\) and its inverse \(Z^{-1}_p\) is given by (2.16). Both \(Z_p\) and \(Z^{-1}_p\) are uniformly bounded on \([0,1]\times \mathbb {C}^{K_r}\times B_r(0,\mathrm {X}_1)\). Furthermore, $$\begin{aligned} Z^{-1}_p = \mathrm {I}- \frac{Z_1}{\lambda } + \frac{Z^2_1 - Z^{\mathrm {od}}_2}{\lambda ^2} + \mathcal O\big (|\lambda |^{-3}\big ) \end{aligned}$$ uniformly on \([0,1]\times \mathbb {C}^{K_r}\times B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \). Proof The expansion (2.17) follows directly from (2.16); the uniform \({\mathcal {O}}(|\lambda |^{-3})\) error follows from the uniform convergence of the respective Neumann series on \([0,1]\times \mathbb {C}^{K_r}\times B_r(0,\mathrm {X}_1)\), see the proof of Lemma 2.10.
\(\square \) For \(t\in [0,1]\), \(\lambda \in \mathbb {C}\) and \(\psi \in \mathrm {X}_1\), we define $$\begin{aligned} A_{p,Z} :=\big [ (\partial _tZ_p) -\mathrm {i}\theta Z_p \sigma _3 \big ] Z^{-1}_p, \quad {\mathcal {A}} :=V - \mathrm {i}\theta \sigma _3, \quad \Delta _Z :={\mathcal {A}} - A_{p,Z}, \end{aligned}$$ whenever the inverse \(Z_p^{-1}\) exists. By Lemma 2.10, the inverse of \(Z_p\) exists uniformly on [0, 1] and on bounded sets in \(\mathrm {X}_1\) provided that \(|\lambda |\) is large enough. Lemma 2.13 For \(\lambda \in \mathbb {C}\) and \(\psi \in \mathrm {X}_1\), let M be the fundamental solution of (2.5) on the unit interval. If \(|\lambda |\) is so large that \(Z^{-1}_p\) exists for all \(t\in [0,1]\), then $$\begin{aligned}&(Z^{-1}_p M^+)_t = Z^{-1}_p \Delta _Z M^+ - \mathrm {i}\theta [\sigma _3, Z^{-1}_p M^+], \end{aligned}$$ $$\begin{aligned}&(Z^{-1}_p M^-)_t = Z^{-1}_p \Delta _Z M^- - \mathrm {i}\theta (\sigma _3 Z^{-1}_p M^- + Z^{-1}_p M^- \sigma _3). \end{aligned}$$ Proof By Lemma 2.9 we can write \(M^+_t = \mathcal A M^+ + \mathrm {i}\theta M^+ \sigma _3\), hence Eq. (2.18) is directly obtained by: $$\begin{aligned} (Z^{-1}_p M^+)_t&= - Z^{-1}_p (\partial _tZ_p) Z^{-1}_p M^+ + Z^{-1}_p M^+_t \\&= - Z^{-1}_p (A_{p,Z} Z_p + \mathrm {i}\theta Z_p \sigma _3) Z^{-1}_p M^+ + Z^{-1}_p({\mathcal {A}} M^+ + \mathrm {i}\theta M^+ \sigma _3) \\&= Z^{-1}_p \Delta _Z M^+ - \mathrm {i}\theta [\sigma _3, Z^{-1}_p M^+]. \end{aligned}$$ Equation (2.19) is similarly obtained by noting that $$\begin{aligned} M^-_t = (M^+ \mathrm {e}^{-2\mathrm {i}\theta t \sigma _3})_t&= (V M^+ - \mathrm {i}\theta [\sigma _3,M^+]) \mathrm {e}^{-2\mathrm {i}\theta t \sigma _3} -2\mathrm {i}\theta M^+ \mathrm {e}^{-2\mathrm {i}\theta t \sigma _3} \sigma _3 \\&= {\mathcal {A}} M^- - \mathrm {i}\theta M^- \sigma _3.
\end{aligned}$$ \(\square \) For \(z\in \mathbb {C}\), we define the linear map \(\mathrm {e}^{z {\hat{\sigma }}_3}\) on the space of complex \(2\times 2\)-matrices by $$\begin{aligned} \mathrm {e}^{z{\hat{\sigma }}_3}(A) :=\mathrm {e}^{z \sigma _3} A \mathrm {e}^{-z \sigma _3}; \end{aligned}$$ furthermore we define \(\mathrm {e}^{z {\check{\sigma }}_3}\) via $$\begin{aligned} \mathrm {e}^{z{\check{\sigma }}_3}(A) :=\mathrm {e}^{z \sigma _3} A \mathrm {e}^{z \sigma _3}. \end{aligned}$$ Lemma 2.13 yields Volterra equations for \(M^+\) and \(M^-\). Lemma 2.14 For \(\lambda \in \mathbb {C}\) and \(\psi \in \mathrm {X}_1\), let M be the fundamental solution of (2.5) on the unit interval. If \(|\lambda |\) is so large that \(Z^{-1}_p\) exists for all \(t\in [0,1]\), then \(M^+\) satisfies $$\begin{aligned} M^+(t,\lambda ,\psi )&= Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t {\hat{\sigma }}_3}[Z^{-1}_p(0,\lambda ,\psi )] \nonumber \\&\quad + \int ^t_0 Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta (t-\tau ) {\hat{\sigma }}_3} [(Z^{-1}_p \Delta _Z M^+)(\tau ,\lambda ,\psi )] \,\mathrm d \tau , \end{aligned}$$ and \(M^-\) satisfies $$\begin{aligned} M^-(t,\lambda ,\psi )&= Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t {\check{\sigma }}_3}[Z^{-1}_p(0,\lambda ,\psi )] \nonumber \\&\quad + \int ^t_0 Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta (t-\tau ) {\check{\sigma }}_3} [(Z^{-1}_p \Delta _Z M^-)(\tau ,\lambda ,\psi )] \,\mathrm d \tau . \end{aligned}$$ Proof Using identity (2.18) in Lemma 2.13, we infer that $$\begin{aligned} \big (\mathrm {e}^{\mathrm {i}\theta t {\hat{\sigma }}_3} (Z^{-1}_p M^+)\big )_t = \mathrm {e}^{\mathrm {i}\theta t {\hat{\sigma }}_3} (Z^{-1}_p \Delta _Z M^+). \end{aligned}$$ In order to obtain (2.20), we first integrate (2.22) from 0 to t and use that \(M^+(0,\lambda )=\mathrm {I}\) to determine the integration constant.
Applying \(\mathrm {e}^{-\mathrm {i}\theta t {\hat{\sigma }}_3}\) to both sides of the resulting integral equation and multiplying by \(Z_p\) from the left, we find (2.20). The Volterra equation for \(M^-\) follows in an analogous way from the equation $$\begin{aligned} (\mathrm {e}^{\mathrm {i}\theta t {\check{\sigma }}_3} (Z^{-1}_p M^-))_t = \mathrm {e}^{\mathrm {i}\theta t {\check{\sigma }}_3} (Z^{-1}_p \Delta _Z M^-), \end{aligned}$$ which is a consequence of (2.19). \(\square \) For a t-dependent matrix A with entries in \(L^p([0,1],\mathbb {C})\) we define $$\begin{aligned} \Vert A\Vert _{L^p([0,1],\mathbb {C})} :=\bigg (\int ^1_0 |A(t)|^p \, \mathrm d t \bigg )^{1/p}, \quad 1\le p < \infty . \end{aligned}$$ Lemma 2.15 Let B be an arbitrary bounded subset of \(\mathrm {X}_1\) and let \(1\le q \le 2\). Then $$\begin{aligned} \Vert \partial _tZ_1(\psi ) \Vert _{L^q([0,1],\mathbb {C})} = \mathcal O(1), \quad \Vert \partial _tZ^{\mathrm {od}}_2(\psi )\Vert _{L^q([0,1],\mathbb {C})} = {\mathcal {O}}(1), \end{aligned}$$ uniformly on B. Proof The case \(q=2\) follows directly from the definitions of \(Z_1\) and \(Z^{\mathrm {od}}_2\), the continuity of the operator $$\begin{aligned} \Vert \cdot \Vert _{L^2([0,1],\mathbb {C})} \circ \partial _t:H^1(0,1) \rightarrow \mathbb {R}, \end{aligned}$$ and the fact that \(H^1(0,1)\) is an algebra. The cases \(1\le q < 2\) follow from the case \(q=2\) in view of the continuous embeddings \(L^2([0,1],\mathbb {C}) \hookrightarrow L^q([0,1],\mathbb {C})\), \(1\le q < 2\). \(\square \) Lemma 2.16 Let \(r>0\). There exists a constant \(C>0\) such that uniformly for \((\lambda ,\psi )\) in \(\mathbb {C}^{K_r}\times B_r(0,\mathrm {X}_1)\), $$\begin{aligned} |\lambda | \,\Vert \Delta _Z(\lambda ,\psi )\Vert _{L^1([0,1],\mathbb {C})} \le C. \end{aligned}$$ Proof Note that $$\begin{aligned} 4 \mathrm {i}\sigma _3 Z^{\mathrm {od}}_1 = V_1, \quad 4 \mathrm {i}\sigma _3 Z^{\mathrm {od}}_2 = V_0 + V_1 Z_1 \end{aligned}$$ for arbitrary \(\psi \in \mathrm {X}_1\).
By Corollary 2.12 the asymptotic estimate (2.17) holds uniformly on \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \). In particular, \(\Delta _Z\) is well-defined on \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) and satisfies $$\begin{aligned} \Delta _Z&=V_0 + \lambda V_1 - 2 \mathrm {i}\lambda ^2 \sigma _3 + 2 \mathrm {i}\lambda ^2 \bigg (\mathrm {I}+ \frac{Z_1}{\lambda } + \frac{Z^{\mathrm {od}}_2}{\lambda ^2}\bigg ) \sigma _3 \bigg ( \mathrm {I}- \frac{Z_1}{\lambda } + \frac{Z^2_1 - Z^{\mathrm {od}}_2}{\lambda ^2} \bigg ) \nonumber \\&\quad + {\mathcal {O}}\big (|\lambda |^{-1}\big ) \end{aligned}$$ in \(L^1([0,1],\mathbb {C})\) uniformly on \(\mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \), where we have used Lemma 2.15 to estimate the \(\partial _tZ_p\)-term. By keeping only the \(\lambda ^k\)-terms for \(k=0,1,2\) in (2.25) and by employing (2.24), we obtain $$\begin{aligned} \Delta _Z&=V_0 + \lambda V_1 - 2 \mathrm {i}\lambda [\sigma _3,Z_1] + 2 \mathrm {i}[Z^{\mathrm {od}}_2,\sigma _3] + 2 \mathrm {i}[\sigma _3,Z_1] Z_1 + {\mathcal {O}}\big (|\lambda |^{-1}\big ) \\&= V_0 - 4 \mathrm {i}\sigma _3 Z^{\mathrm {od}}_2 + 4 \mathrm {i}\sigma _3 Z^{\mathrm {od}}_1 Z_1 + {\mathcal {O}}\big (|\lambda |^{-1}\big ) \\&= 0 + {\mathcal {O}}\big (|\lambda |^{-1}\big ) \end{aligned}$$ in \(L^1([0,1],\mathbb {C})\) uniformly on \(\mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \). \(\square \) Let \([A]_1\) and \([A]_2\) denote the first and second columns of a \(2\times 2\)-matrix A. Let \(|[A]_i|\), \(i=1,2\), denote the standard \(\mathbb {C}^2\)-norm of the vector \([A]_i\). Lemma 2.17 Let \(r>0\).
There exists a constant \(C>0\) such that $$\begin{aligned}&|\lambda |\, \big |\big [M^+(t,\lambda ,\psi ) - Z_p (t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t \hat{\sigma }_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) \big ]_2 \big | \le C, \end{aligned}$$ $$\begin{aligned}&|\lambda | \,\big |\big [M^-(t,\lambda ,\psi ) - Z_p (t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t {\check{\sigma }}_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) \big ]_1\big | \le C \end{aligned}$$ uniformly on \([0,1] \times \overline{D^{K_r}_-} \times B_r(0,\mathrm {X}_1)\), and $$\begin{aligned}&|\lambda |\, \big |\big [M^-(t,\lambda ,\psi ) - Z_p (t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t {\check{\sigma }}_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) \big ]_2 \big | \le C, \end{aligned}$$ $$\begin{aligned}&|\lambda | \,\big |\big [M^+(t,\lambda ,\psi ) - Z_p (t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t {\hat{\sigma }}_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) \big ]_1\big | \le C \end{aligned}$$ uniformly on \([0,1] \times \overline{D^{K_r}_+} \times B_r(0,\mathrm {X}_1)\). Proof For \(\lambda \in \mathbb {C}^{K_r}\), the functions $$\begin{aligned} {\mathcal {M}}(t,\lambda ,\psi )&:=[ M^+(t,\lambda ,\psi )]_2, \\ {\mathcal {M}}_0(t,\lambda ,\psi )&:=\big [Z_p (t,\lambda ,\psi ) \mathrm {e}^{-\mathrm {i}\theta t {\hat{\sigma }}_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) \big ]_2, \\ E(t,\tau ,\lambda ,\psi )&:=Z_p (t,\lambda ,\psi ) \begin{pmatrix} \mathrm {e}^{-2\mathrm {i}\theta (t-\tau )} &{} \\ &{}1 \end{pmatrix} Z^{-1}_p (\tau ,\lambda ,\psi ), \end{aligned}$$ are well-defined on their domains \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) and \([0,1]^2\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) respectively, where the inverse \(Z^{-1}_p\) is given by (2.16) and is uniformly bounded on \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) by Lemma 2.10 and Corollary 2.12.
Due to Lemma 2.14, \({\mathcal {M}}\) satisfies the following Volterra equation for \(t\in [0,1]\), \(\lambda \in \mathbb {C}^{K_r}\) and \(\psi \in B_r(0,\mathrm {X}_1)\): $$\begin{aligned} {\mathcal {M}} (t,\lambda ) = \mathcal M_0(t,\lambda ) + \int ^t_0 E(t,\tau ,\lambda ) \Delta _Z(\tau ,\lambda ) {\mathcal {M}}(\tau ,\lambda ) \, \mathrm d \tau , \end{aligned}$$ where the \(\psi \)-dependence has been suppressed for simplicity. Thus \(\mathcal M\) admits the power series representation $$\begin{aligned} {\mathcal {M}} (t,\lambda ) = \sum ^\infty _{n=0} {\mathcal {M}}_n(t,\lambda ), \end{aligned}$$ which converges (pointwise) absolutely and uniformly on \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\), where $$\begin{aligned} \mathcal M_n(t,\lambda ) :=\int ^t_0 E(t,\tau ,\lambda ) \Delta _Z(\tau ,\lambda ) {\mathcal {M}}_{n-1}(\tau ,\lambda ) \, \mathrm d \tau \qquad (n\ge 1) \end{aligned}$$ satisfies the estimate $$\begin{aligned} |{\mathcal {M}}_n(t,\lambda )|&\le \int _{0\le \tau _n \le \cdots \le \tau _1 \le t} \prod ^n_{i=1} |E(t,\tau _i,\lambda ) \Delta _Z(\tau _i,\lambda ) {\mathcal {M}}_0(\tau _i,\lambda )| \, \mathrm d \tau _n \cdots \mathrm d \tau _1 \\&\le \frac{1}{n!} \bigg ( \int ^t_0 | E(t,\tau ,\lambda ) | \, | \Delta _Z(\tau ,\lambda ) | \, |{\mathcal {M}}_0(\tau ,\lambda )| \, \mathrm d \tau \bigg )^n \end{aligned}$$ uniformly on \([0,1]\times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\). The functions E and \({\mathcal {M}}_0\) satisfy $$\begin{aligned} |{\mathcal {M}}_0(t,\lambda )|&\le |Z_p (t,\lambda )| \, | Z^{-1}_p(0,\lambda )|, \\ | E(t,\tau ,\lambda ) |&\le | Z_p (t,\lambda ) | \, | Z^{-1}_p (\tau ,\lambda )|, \end{aligned}$$ for \(0\le \tau \le t\le 1\) and \((\lambda , \psi ) \in \overline{D^{K_r}_-} \times B_r(0,\mathrm {X}_1)\). Therefore, in view of Corollary 2.12 and Lemma 2.16, there exists a constant \(C>0\) such that $$\begin{aligned} |{\mathcal {M}}_n(t,\lambda )| \le \frac{C^n}{n!
\, |\lambda |^n} \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_-}\times B_r(0,\mathrm {X}_1)\), and thus $$\begin{aligned} |{\mathcal {M}}(t,\lambda ) - {\mathcal {M}}_0 (t,\lambda )| \le \sum ^\infty _{n=1} \frac{C^n}{n! \, |\lambda |^n} \le \frac{C \mathrm {e}^{\frac{C}{ |\lambda |}}}{ |\lambda |} \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_-}\times B_r(0,\mathrm {X}_1)\). This proves (2.26); the proofs of (2.27)–(2.29) are similar. \(\square \) Lemma 2.18 Let \(r>0\). There exists a constant \(C>0\) such that $$\begin{aligned}&|\lambda |^2 \, \big | \big [ Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t {\hat{\sigma }}_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) - M^+_p (t,\lambda ,\psi ) \big ]_2 \big | \le C, \end{aligned}$$ $$\begin{aligned}&|\lambda |^2 \, \big | \big [ Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t \check{\sigma }_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) - M^-_p (t,\lambda ,\psi ) \big ]_1 \big | \le C \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_-} \times B_r(0,\mathrm {X}_1)\), and $$\begin{aligned}&|\lambda |^2 \, \big | \big [ Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t \check{\sigma }_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) - M^-_p (t,\lambda ,\psi ) \big ]_2 \big | \le C, \end{aligned}$$ $$\begin{aligned}&|\lambda |^2 \, \big | \big [ Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t \hat{\sigma }_3}\big (Z^{-1}_p(0,\lambda ,\psi )\big ) - M^+_p (t,\lambda ,\psi ) \big ]_1 \big | \le C \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_+} \times B_r(0,\mathrm {X}_1)\).
Proof Since $$\begin{aligned} \big [\mathrm {e}^{\mathrm {i}\theta t {\hat{\sigma }}_3} \big (Z_1(0,\psi ) \big )\big ]_2 = \mathrm {e}^{2\mathrm {i}\theta t } \begin{pmatrix} -\frac{\mathrm {i}}{2} \psi ^1_0 \\ 0 \end{pmatrix} = \mathrm {e}^{2\mathrm {i}\theta t } \big [Z_1(0,\psi )\big ]_2 = - \mathrm {e}^{2\mathrm {i}\theta t } \big [W_1(0,\psi )\big ]_2 \end{aligned}$$ for \(\psi \in \mathrm {X}_1\), Corollary 2.12 yields $$\begin{aligned} \big [Z_p(t,\lambda ,\psi ) \,&\mathrm {e}^{-\mathrm {i}\theta t {\hat{\sigma }}_3}(Z^{-1}_p(0,\lambda ,\psi )) \big ]_2 \\&= \Big [ Z_p(t,\lambda ,\psi ) \, \mathrm {e}^{-\mathrm {i}\theta t {\hat{\sigma }}_3} \Big (\mathrm {I}- \frac{Z_1(0,\psi )}{\lambda } + {\mathcal {O}}(|\lambda |^{-2})\Big ) \Big ]_2\\&= \Big [Z_p(t,\lambda ,\psi )\Big ]_2 - \frac{\mathrm {e}^{-2\mathrm {i}\theta t } }{\lambda } \Big [Z_1(0,\psi )\Big ]_2 + {\mathcal {O}}\bigg (\frac{\mathrm {e}^{2\mathfrak {I}\theta t }}{|\lambda |^2}\bigg ) \\&= \Big [Z_p(t,\lambda ,\psi ) \Big ]_2 + \frac{ \mathrm {e}^{-2\mathrm {i}\theta t }}{\lambda } \Big [ W_1(0,\psi ) \Big ]_2 + \mathcal O\bigg (\frac{\mathrm {e}^{2\mathfrak {I}\theta t }}{|\lambda |^2}\bigg ) \end{aligned}$$ uniformly on \([0,1] \times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \). On the other hand, we have that $$\begin{aligned} \big [ M^+_p(t,\lambda ,\psi ) \big ]_2 = \big [ Z_p (t,\lambda ,\psi ) \big ]_2 + \frac{ \mathrm {e}^{-2\mathrm {i}\theta t }}{\lambda } \Big [ W_1(0,\psi ) \Big ]_2 + {\mathcal {O}}\bigg (\frac{\mathrm {e}^{2\mathfrak {I}\theta t }}{|\lambda |^2}\bigg ) \end{aligned}$$ uniformly on \([0,1] \times \mathbb {C}^{K_r} \times B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \). Since \(\mathfrak {I}\theta \le 0\) for \(\lambda \in \overline{D_-}\), the estimate (2.30) follows. The estimates (2.31)–(2.33) are proved in a similar way.
\(\square \) Proof of Theorem 2.7 The first assertion of the theorem follows by combining Lemmas 2.17 and 2.18. Let \(r>0\). By (2.26) and (2.30), there exists a \(C>0\) such that $$\begin{aligned} |\lambda | \, \big | \big [M^+(t,\lambda ,\psi ) - M^+_p (t,\lambda ,\psi ) \big ]_2 \big | \le C \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_-} \times B_r(0,\mathrm {X}_1)\). Thus, $$\begin{aligned} |\lambda | \, \mathrm {e}^{-2|\mathfrak {I}(\lambda ^2)| t} \, \big | \big [M(t,\lambda ,\psi ) - M_p (t,\lambda ,\psi ) \big ]_2 \big | \le C \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_-} \times B_r(0,\mathrm {X}_1)\). Since \(M_p (t,\lambda ,\psi ) = E_\lambda (t) + {\mathcal {O}}(|\lambda |^{-1} \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)| t} )\) uniformly on \([0,1]\times \overline{D^{K_r}_-} \times B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \), we infer that there exists a \(C>0\) such that $$\begin{aligned} |\lambda | \, \mathrm {e}^{-2|\mathfrak {I}(\lambda ^2)| t} \, \big | \big [M(t,\lambda ,\psi ) - E_\lambda (t) \big ]_2 \big | \le C \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_-} \times B_r(0,\mathrm {X}_1)\). Analogously, by using (2.29) and (2.33), one infers the existence of a constant \(C>0\) such that $$\begin{aligned} |\lambda | \, \mathrm {e}^{-2|\mathfrak {I}(\lambda ^2)| t} \, \big | \big [M(t,\lambda ,\psi ) - E_\lambda (t) \big ]_1 \big | \le C \end{aligned}$$ uniformly on \([0,1]\times \overline{D^{K_r}_+} \times B_r(0,\mathrm {X}_1)\). The estimates (2.28) and (2.32) (resp. (2.27) and (2.31)) yield the same asymptotic estimates for \([M-E_\lambda ]_2\) (resp. \([M-E_\lambda ]_1\)) for \(\lambda \) restricted to \(\overline{D^{K_r}_+}\) (resp. \(\overline{D^{K_r}_-}\)). In summary, this yields the existence of constants \(C,K>0\) such that $$\begin{aligned} |\lambda | \, \mathrm {e}^{-2|\mathfrak {I}(\lambda ^2)| t} \, \big | M(t,\lambda ,\psi ) - E_\lambda (t) \big | \le C \end{aligned}$$ uniformly on \([0,1]\times \mathbb {C}^K \times B_r(0,\mathrm {X}_1)\). This proves (2.13).
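As an aside, the elementary tail bound used in the proof of Lemma 2.17 above, \(\sum _{n\ge 1} C^n/(n!\,|\lambda |^n) = \mathrm {e}^{C/|\lambda |}-1 \le (C/|\lambda |)\,\mathrm {e}^{C/|\lambda |}\), is easy to confirm numerically; the following is a minimal sketch in which the values of \(C\) and \(|\lambda |\) are arbitrary choices for illustration:

```python
import math

def tail_sum(C, lam, N=60):
    """Partial sum of sum_{n>=1} C^n / (n! * lam^n); converges rapidly."""
    return sum(C**n / (math.factorial(n) * lam**n) for n in range(1, N + 1))

def bound(C, lam):
    """Claimed majorant (C/lam) * exp(C/lam)."""
    return (C / lam) * math.exp(C / lam)

# The series equals exp(C/lam) - 1, which is <= (C/lam) e^{C/lam}
# because e^x - 1 <= x e^x for all x >= 0.
for C in (0.5, 1.0, 3.0):
    for lam in (1.0, 2.0, 10.0):
        s = tail_sum(C, lam)
        assert abs(s - (math.exp(C / lam) - 1.0)) < 1e-12
        assert s <= bound(C, lam)
```

The comparison \(\mathrm e^x - 1 \le x\,\mathrm e^x\) for \(x\ge 0\) is exactly what turns the Neumann-series estimate into the \({\mathcal {O}}(|\lambda |^{-1})\) error term in (2.26).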
To prove (2.14), we recall Cauchy's inequality: the derivative \(f'\) of a holomorphic function \(f :\mathbb {C}\supseteq G \rightarrow \mathbb {C}\) satisfying \(|f(z)| \le C\) on a disc \(D(r,a) \subseteq G\) of radius r centered at a in the open domain G can be estimated at the point a by \(|f'(a)| \le C r^{-1}\). According to the first part of the theorem, for any \(r>0\) there is a \(K>0\) such that $$\begin{aligned} |\lambda |\, \mathrm {e}^{-2|\mathfrak {I}(\lambda ^2)|t} \, \big |M(t,\lambda ,\psi )-E_\lambda (t)\big |={\mathcal {O}} (1) \end{aligned}$$ uniformly for \(t\in [0,1]\), \(\lambda \in \mathbb {C}^K\) and \(\psi \in B_r(0,\mathrm {X}_1)\) as \(|\lambda |\rightarrow \infty \). By applying Cauchy's inequality to this estimate, we immediately obtain (2.14). \(\square \) For \(n\in \mathbb {N}\) and \(i=1,2,3,4\), let us consider the complex numbers \(\zeta ^i_n\) given by $$\begin{aligned} \zeta ^1_n :=\sqrt{\frac{n \pi }{2}}, \quad \zeta ^2_n :=-\sqrt{\frac{n \pi }{2}}, \quad \zeta ^3_n :=\mathrm {i}\sqrt{\frac{n \pi }{2}}, \quad \zeta ^4_n :=-\mathrm {i}\sqrt{\frac{n \pi }{2}}. \end{aligned}$$ Theorem 2.19 For any potential \(\psi \in \mathrm {X}_1\) and any sequences \((z^i_n)_{n\in \mathbb {N}}\), \(i=1,2,3,4\), of complex numbers whose elements satisfy $$\begin{aligned} z^i_n = \zeta ^i_n + O\bigg (\frac{1}{\sqrt{n}}\bigg ) \quad \text {as} \quad n\rightarrow \infty , \end{aligned}$$ it holds that $$\begin{aligned} \sup _{0\le t\le 1} \big | M(t,z^i_n)- E_{z^i_n}(t) \big |&= {\mathcal {O}}\big (n^{-1/2}\big ), \end{aligned}$$ $$\begin{aligned} \sup _{0\le t\le 1} \big | \dot{M}(t,z^i_n)- \dot{E}_{z^i_n}(t) \big |&= {\mathcal {O}}(1) \end{aligned}$$ as \(n\rightarrow \infty \).
If moreover the squares \((z^i_n)^2\) satisfy $$\begin{aligned} (z^i_n)^2=(\zeta ^i_n)^2 + {\mathcal {O}}\big (n^{-1/2}\big ) \quad \text {as} \quad n\rightarrow \infty , \end{aligned}$$ then it holds additionally that $$\begin{aligned} \sup _{0\le t\le 1} \big | M(t,z^i_n)- E_{\zeta ^i_n}(t) \big | = {\mathcal {O}}\big (n^{-1/2}\big ). \end{aligned}$$ The estimates in (2.34) and (2.35) hold uniformly on bounded subsets of \(\mathrm {X}_1\) and for sequences \((z^i_n)_{n\in \mathbb {N}}\), which satisfy \(|z^i_n - \zeta ^i_n |\le C \sqrt{1/n}\) for all \(n\ge 1\) with a uniform constant \(C>0\). The estimate in (2.36) holds uniformly on bounded subsets of \(\mathrm {X}_1\) and for sequences \((z^i_n)_{n\in \mathbb {N}}\), which satisfy \(|(z^i_n)^2-(\zeta ^i_n)^2|\le C \sqrt{1/n}\) for all \(n\ge 1\) with a uniform constant \(C>0\). Proof The estimates (2.34) and (2.35) follow directly from Theorem 2.7, because \(\mathfrak {I}(z^i_n)^2 = {\mathcal {O}}(1)\) as \(n\rightarrow \infty \) by assumption, and therefore \(\mathrm {e}^{2|\mathfrak {I}(z^i_n)^2| t}={\mathcal {O}} (1)\) uniformly in \(t\in [0,1]\) as \(n\rightarrow \infty \). To prove (2.36) we note that \(|\mathrm {e}^z -1| \le | z | \, \mathrm {e}^{| z|}\) for arbitrary \(z\in \mathbb {C}\), thus the additional restriction on \(z^i_n\) implies that $$\begin{aligned} \begin{aligned} \big | \mathrm {e}^{2 (z^i_n)^2 \mathrm {i}t} - \mathrm {e}^{2(\zeta ^i_n)^2 \mathrm {i}t } \big | = \big | \mathrm {e}^{2( (z^i_n)^2 - (\zeta ^i_n)^2) \mathrm {i}t} - 1 \big | = {\mathcal {O}}\big (n^{-1/2}\big ) \end{aligned} \end{aligned}$$ uniformly for \(t\in [0,1]\) as \(n\rightarrow \infty \). The triangle inequality implies that $$\begin{aligned} \big | M(t,z^i_n)- E_{\zeta ^i_n}(t) \big | \le \big | M(t,z^i_n)- E_{z^i_n}(t)\big | + \big | E_{z^i_n}(t) - E_{\zeta ^i_n}(t) \big | \end{aligned}$$ for \(t\in [0,1]\), and hence (2.36) follows from (2.34) and (2.37).
\(\square \) Remark 2.20 For convenience, the asymptotic results in this section are stated for the space \(\mathrm {X}_\tau \) with \(\tau =1\) (which contains the periodic space \(\mathrm {X}\) as a subspace). It is clear that analogous results hold for an arbitrary fixed \(\tau >0\).

3 Spectra

We will consider three different notions of spectra associated with the spectral problem (2.1): the Dirichlet, Neumann and periodic spectrum. These spectra are the zero sets of certain spectral functions, which are defined in terms of the entries of the fundamental solution M evaluated at time \(t=1\). We introduce the following notation: $$\begin{aligned} \grave{M}:=M|_{t=1}, \quad \grave{m}_i :=m_i|_{t=1}, \quad i=1,2,3,4. \end{aligned}$$

3.1 Dirichlet and Neumann spectrum

We define the Dirichlet domain \({\mathcal {A}}_{\mathrm D}\) of the AKNS-system (2.3) by $$\begin{aligned} {\mathcal {A}}_{\mathrm D} :=\big \{ f\in H^1([0,1],\mathbb {C}) \times H^1([0,1],\mathbb {C}) \; \big | \; f_2(0)=0=f_2(1) \big \}. \end{aligned}$$ The Dirichlet domain \(\mathcal {D}_{\mathrm D}\) of the corresponding ZS-system (2.1) is then given by $$\begin{aligned} {\mathcal {D}}_{\mathrm D} :=\big \{ g\in H^1([0,1],\mathbb {C}) \times H^1([0,1],\mathbb {C}) \; \big | \; (g_1-g_2)(0)=0=(g_1-g_2)(1)\big \}, \end{aligned}$$ as \({\mathcal {A}}_{\mathrm D}\) corresponds to \({\mathcal {D}}_{\mathrm D}\) under the transformation T, cf. (2.4). For a given potential \(\psi \in \mathrm {X}\), we say that \(\lambda \in \mathbb {C}\) lies in the Dirichlet spectrum if there exists a \(\phi \in \mathcal D_{\mathrm D}\setminus \{0\}\) which solves (2.1). Fix \(\psi \in \mathrm {X}\). The Dirichlet spectrum of (2.1) is the zero set of the entire function $$\begin{aligned} \chi _{\mathrm D}(\lambda ,\psi ) :=\frac{\grave{m}_4+\grave{m}_3-\grave{m}_2-\grave{m}_1}{2\mathrm {i}}\bigg |_{(\lambda ,\psi )}. \end{aligned}$$ In particular, \(\chi _{\mathrm D}(\lambda ,0) = \sin 2\lambda ^2\).
Due to the definition of \({\mathcal {D}}_{\mathrm D}\), a complex number \(\lambda \) lies in the Dirichlet spectrum of (2.1) if and only if the fundamental solution M maps the initial value (1, 1) to a collinear vector at \(t=1\). That is, if and only if \(\grave{m}_1+\grave{m}_2 =\grave{m}_3+\grave{m}_4\). \(\square \) By Theorem 2.7 the characteristic function \(\chi _{\mathrm D}\) satisfies $$\begin{aligned} \chi _{\mathrm D}(\lambda ,\psi ) = \sin 2\lambda ^2 + {\mathcal {O}} \big ( |\lambda |^{-1}\, \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\big ) \end{aligned}$$ uniformly on bounded sets in \(\mathrm {X}\) as \(|\lambda |\rightarrow \infty \). For \(\psi \in \mathrm {X}\), we set $$\begin{aligned} \sigma _{\mathrm D}(\psi ) :=\big \{\lambda \in \mathbb {C}:\chi _{\mathrm D}(\lambda ,\psi )=0 \big \}. \end{aligned}$$ We aim to localize the Dirichlet eigenvalues with the help of (3.2), see Lemma 3.3 below. The proof makes use of the following elementary estimate, cf. [13, Appendix F]: if \(\lambda \in \mathbb {C}\) satisfies \(|\lambda -n \pi | \ge \pi /4\) for all integers n, then \(4 \, | \sin \lambda | > \mathrm {e}^{|\mathfrak {I}\lambda |}\). Let us rephrase this inequality as follows. Lemma 3.2 If \(\lambda \in \mathbb {C}\) satisfies \(|2 \lambda ^2 - n \pi |\ge \pi /4\) for all integers n, then $$\begin{aligned} 4\, |\sin 2\lambda ^2|> \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}. \end{aligned}$$ We denote the right, left, upper and lower open complex halfplane by $$\begin{aligned} \mathbb {C}_+&:=\{z\in \mathbb {C}:\mathfrak {R}z>0\}, \quad \mathbb {C}_- :=\{z\in \mathbb {C}:\mathfrak {R}z<0\}, \\ \mathbb {C}^+&:=\{z\in \mathbb {C}:\mathfrak {I}z >0\}, \quad \mathbb {C}^- :=\{z\in \mathbb {C}:\mathfrak {I}z <0\}. \end{aligned}$$ Lemma 3.2 motivates the definition of the discs \(D^i_n\), which are introduced below, to localize the Dirichlet eigenvalues.
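The rephrased inequality of Lemma 3.2 lends itself to a quick numerical spot check (not part of the proof); the sampling box below is an arbitrary choice for illustration:

```python
import cmath
import math
import random

def far_from_lattice(z, tol=math.pi / 4):
    """True if |z - n*pi| >= tol for every integer n; only the lattice
    points nearest to Re(z) can violate the condition."""
    n = round(z.real / math.pi)
    return all(abs(z - k * math.pi) >= tol for k in (n - 1, n, n + 1))

random.seed(0)
checked = 0
for _ in range(20000):
    lam = complex(random.uniform(-6, 6), random.uniform(-6, 6))
    z = 2 * lam**2
    if far_from_lattice(z):
        # Lemma 3.2 with z = 2*lam^2:  4 |sin z| > e^{|Im z|}
        assert 4 * abs(cmath.sin(z)) > math.exp(abs(z.imag))
        checked += 1
assert checked > 0
```

Since \(2|\mathfrak {I}(\lambda ^2)| = |\mathfrak {I}(2\lambda ^2)|\), the condition on \(2\lambda ^2\) is exactly the hypothesis of the classical sine estimate applied at \(z = 2\lambda ^2\).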
For \(|n|\ge 1\), we consider the set $$\begin{aligned} D_n :=\Big \{ \lambda \in \mathbb {C}:\big |2 \lambda ^2 - n \pi \big | < \frac{\pi }{4} \Big \}, \end{aligned}$$ which consists of two open discs, and define the disc \(D^i_n\), \(i=1,2\), by $$\begin{aligned} D^1_n :={\left\{ \begin{array}{ll} D_n \cap \mathbb {C}_+, &{} n \ge 1, \\ D_n \cap \mathbb {C}_-, &{} n \le -1, \end{array}\right. } \qquad D^2_n :={\left\{ \begin{array}{ll} D_n \cap \mathbb {C}^+, &{} n \ge 1, \\ D_n \cap \mathbb {C}^-, &{} n \le -1. \end{array}\right. } \end{aligned}$$ For a given integer \(N\ge 1\) we define the disc \(B_N\) by $$\begin{aligned} B_N :=\bigg \{ \lambda \in \mathbb {C}:| \lambda | < \sqrt{\frac{(N+1/4) \pi }{2} } \, \bigg \}. \end{aligned}$$ Furthermore we set \(D_0 :=B_0 :=\{ \lambda \in \mathbb {C}:|\lambda | < \sqrt{\pi /8} \}\) and impose the convention \(D^i_0 :=D_0\), \(i=1,2\). Then for each \(N\ge 0\) the disc \(B_N\) contains all the discs \(D^i_n\) with \(|n|\le N\). An illustration of the discs \(B_N\) and \(D^i_n\) can be found in Fig. 2 (see also Fig. 5).

Fig. 2 Localization of the periodic eigenvalues according to the Counting Lemma. The first \(4(2N+1)\) roots of \(\chi _{\mathrm P}(\cdot ,\psi )\) lie in the large disc \(B_N\). The remaining periodic eigenvalues lie in the discs \(D^i_n\) centered at \(\lambda ^{\pm ,i}_n(0)\), \(i=1,2\), \(|n|>N\); each disc contains precisely two of them. The radii of these discs shrink to zero at order \({\mathcal {O}}(|n|^{-1/2})\) as \(|n|\rightarrow \infty \).

Lemma 3.3 (Counting Lemma for Dirichlet eigenvalues) Let B be a bounded subset of \(\mathrm {X}\). There exists an integer \(N\ge 1\), such that for every \(\psi \in B\), the entire function \(\chi _{\mathrm D}(\cdot ,\psi )\) has exactly one root in each of the two discs \(D^i_n\), \(i=1,2\), for \(n\in \mathbb {Z}\) with \(|n|>N\), and exactly \(2(2N+1)\) roots in the disc \(B_N\) when counted with multiplicity. There are no other roots.
Proof Outside of the set $$\begin{aligned} \Pi :=\bigcup _{ \begin{array}{c} n\in \mathbb {Z}\\ i\in \{1,2\} \end{array}} D^i_n \end{aligned}$$ it holds that \(\frac{\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}}{|\sin 2\lambda ^2|} < 4\) by the previous lemma. Therefore we obtain from (3.2) that $$\begin{aligned} \chi _{\mathrm D}(\lambda ,\psi ) = \sin 2\lambda ^2 + o\big (\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\big ) = \chi _{\mathrm D}(\lambda ,0) \big (1+o(1)\big ) \end{aligned}$$ for \(|\lambda |\rightarrow \infty \) with \(\lambda \notin \Pi \) uniformly for \(\psi \in B\). More precisely, this means that there exists an integer \(N\ge 1\) such that, for all \(\psi \in B\), $$\begin{aligned} |\chi _{\mathrm D}(\lambda ,\psi ) - \chi _{\mathrm D}(\lambda ,0)| < |\chi _{\mathrm D}(\lambda ,0)| \end{aligned}$$ on the boundaries of all discs \(D^i_n\) with \(|n|>N\), \(i=1,2\), and also on the boundary of \(B_N\). (Note that \(|\chi _{\mathrm D}(\lambda ,0)|>\delta \) on these boundaries for some \(\delta >0\) which can be chosen independently of \(|n|>N\).) Then Rouché's theorem tells us that the analytic functions \(\chi _{\mathrm D}(\cdot ,\psi )\) possess the same number of roots inside these discs as \(\chi _{\mathrm D}(\cdot ,0)\). This proves the first statement, because \(\chi _{\mathrm D}(\cdot ,0):\lambda \mapsto \sin 2\lambda ^2\) has exactly one root in each \(D^i_n\) for \(|n|>N\), \(i=1,2\), and \(2(2N+1)\) roots in the disc \(B_N\). It is now obvious that there are no other roots, because the number of roots of \(\chi _{\mathrm D}(\cdot ,\psi )\) in each of the discs \(B_{N+k}\), \(k\ge 1\), is exactly \(2(2(N+k)+1)\) due to the same argument as we used before. But these roots correspond to the \(2(2N+1)\) roots of \(\chi _{\mathrm D}(\cdot ,\psi )\) inside \(B_N\subseteq B_{N+k}\) plus the \(4k\) roots inside the discs \(D^i_l \subseteq B_{N+k}\), \(i=1,2\), with \(N<|l|\le N+k\) that we have already identified earlier in the proof.
\(\square \) Lemma 3.3 allows us to introduce a systematic procedure for labeling the Dirichlet eigenvalues. For this purpose, we first consider the spectrum \(\sigma _{\mathrm D}(0)\) of the zero potential, which consists of the two bi-infinite sequences $$\begin{aligned} \mu ^i_n(0) = {{\,\mathrm{sgn}\,}}(n) \sqrt{\frac{(-1)^{i-1}|n|\pi }{2}}, \quad i=1,2, \; n \in \mathbb {Z}, \end{aligned}$$ $$\begin{aligned} {{\,\mathrm{sgn}\,}}(n) :={\left\{ \begin{array}{ll} 1 &{} n > 0, \\ 0 &{} n = 0, \\ -1 &{} n < 0. \end{array}\right. } \end{aligned}$$ The eigenvalues \(\mu ^1_n(0)\), \(n \in \mathbb {Z}\), are real while the eigenvalues \(\mu ^2_n(0)\), \(n \in \mathbb {Z}\setminus \{0\}\), are purely imaginary. The Dirichlet eigenvalue 0 has multiplicity two; all the other Dirichlet eigenvalues corresponding to \(\psi =0\) are simple roots of \(\chi _{\mathrm D}(\cdot ,0)\). Let now \(\psi \in \mathrm {X}\) be an arbitrary potential. By Lemma 3.3 there exists a minimal integer \(N\ge 0\) such that for all \(|n|>N\), each disc \(D^i_n\), \(i=1,2\), contains precisely one Dirichlet eigenvalue of multiplicity one—this eigenvalue will henceforth be denoted by \(\mu ^i_n \equiv \mu ^i_n(\psi )\)—and \(B_N\) contains the remaining \(2(2N+1)\) eigenvalues when counted with multiplicity. In order to label the \(2(2N+1)\) roots of \(\chi _{\mathrm D}(\cdot ,\psi )\) in \(B_N\), we proceed as follows. We make a (so far unordered) list of all the elements of \(\sigma _{\mathrm D}(\psi ) \cap B_N\). For any multiple root of \(\chi _{\mathrm D}(\cdot ,\psi )\) in this list, we include multiple copies of it in the list according to its multiplicity. In this way, we make sure that the list has exactly \(2(2N+1)\) entries (the set \(\sigma _{\mathrm D}(\psi ) \cap B_N\) contains strictly less than \(2(2N+1)\) elements if \(\chi _{\mathrm D}(\cdot ,\psi )\) has non-simple roots). We employ the lexicographical ordering of the complex numbers, i.e.
for \(z_1,z_2\in \mathbb {C}\), $$\begin{aligned} z_1 \preceq z_2 \quad \Leftrightarrow \quad {\left\{ \begin{array}{ll} \mathfrak {R}z_1 < \mathfrak {R}z_2 \\ \quad \text {or} \\ \mathfrak {R}z_1 = \mathfrak {R}z_2 \quad \text {and} \quad \mathfrak {I}z_1 \le \mathfrak {I}z_2, \end{array}\right. } \end{aligned}$$ to label the \(2(2N+1)\) entries of the list of roots in \(B_N\) in such a way that $$\begin{aligned}&\mu ^1_{-N} \preceq \cdots \preceq \mu ^1_{-1} \preceq \mu ^2_{-N} \preceq \cdots \preceq \mu ^2_{-1} \preceq \mu ^1_0 \preceq \mu ^2_0 \preceq \mu ^2_{1} \\&\quad \preceq \cdots \preceq \mu ^2_{N} \preceq \mu ^1_1 \preceq \cdots \preceq \mu ^1_{N}. \end{aligned}$$ The labeling of the roots of \(\chi _{\mathrm D}(\cdot ,\psi )\) according to this procedure is unique except that the label of a particular Dirichlet eigenvalue is ambiguous whenever it is not a simple root of \(\chi _{\mathrm D}(\cdot ,\psi )\). Sequences of Dirichlet eigenvalues of the form \((\mu ^i_n)_{n\in \mathbb {Z}}\), where \(i=1\) or \(i=2\), are always well-defined, since each element of such a sequence has a uniquely defined value. In general neither the multiplicity, nor the label of a Dirichlet eigenvalue \(\mu ^i_n(\psi )\) is preserved under continuous deformations of the potential \(\psi \)—not even locally around the zero potential, where discontinuities of the functions \(\psi \mapsto \mu ^i_0(\psi )\) may occur due to the lexicographical ordering. Merging and splitting of Dirichlet eigenvalues can occur under continuous deformations of the potential, cf. Fig. 5 where such behavior is illustrated in the case of periodic eigenvalues. However, for sufficiently large |n|, the mappings \(\psi \mapsto \mu ^i_n(\psi )\), \(i=1,2\), are continuous on bounded subsets of \(\mathrm {X}\); in fact, we will see in the proof of Theorem 3.5 that these mappings are analytic.
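The lexicographic relation \(\preceq \) is straightforward to realize as a sort key; the following sketch (with a made-up multiset of roots, double roots listed twice) illustrates the ordering step of the labeling procedure:

```python
def lex_key(z):
    """Sort key realizing the lexicographic order on complex numbers:
    z1 <= z2  iff  Re z1 < Re z2, or Re z1 == Re z2 and Im z1 <= Im z2."""
    return (z.real, z.imag)

# hypothetical multiset of roots inside B_N (a double root appears twice)
roots = [1 + 1j, -1 + 0j, 1 - 1j, 0 + 2j, 0 + 2j, -1 - 3j]
ordered = sorted(roots, key=lex_key)
assert ordered == [-1 - 3j, -1 + 0j, 0 + 2j, 0 + 2j, 1 - 1j, 1 + 1j]
```

Note that a multiple root yields identical consecutive entries, which is why its label (but not its value) is ambiguous.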
In particular, the eigenvalue \(\mu ^i_n(\psi )\) remains simple under continuous deformations within a bounded subset of \(\mathrm {X}\) for large enough |n|. To continue our analysis, we introduce the Banach spaces \(\ell ^{p,s}_\mathbb {K}\) of (bi-infinite) sequences $$\begin{aligned} \ell ^{p,s}_\mathbb {K}:=\Big \{ u= (u_n)_{n\in \mathbb {Z}} \; \big | \; \big ((1+n^2)^{\frac{s}{2}} u_n\big )_{n\in \mathbb {Z}} \in \ell ^p_\mathbb {K}\Big \}, \end{aligned}$$ where \(1\le p \le \infty \), \(s\in \mathbb {R}\), and \(\mathbb {K}=\mathbb {R}\) or \(\mathbb {K}=\mathbb {C}\), as well as the closed subspaces $$\begin{aligned} {\check{\ell }}^{p,s}_\mathbb {K}:=\left\{ u= (u_n)_{n\in \mathbb {Z}} \in \ell ^{p,s}_\mathbb {K}:u_0=0 \right\} . \end{aligned}$$ We consider basic properties of these Banach spaces in Sect. 4. Theorem 3.5 below establishes asymptotic estimates and continuity properties of the Dirichlet eigenvalues in the \(\ell ^{\infty ,1}_\mathbb {C}\) setting. We use the notation \(\ell ^{p,s}_n\) to denote the n-th coordinate of a sequence \((\ell ^{p,s}_n)_{n\in \mathbb {Z}}\) in \(\ell ^{p,s}_\mathbb {C}\), see [13, 18, 20]. From Lemma 3.3 we infer that, uniformly on bounded subsets of \(\mathrm {X}\), $$\begin{aligned} \mu ^i_n(\psi ) = \mu ^i_n(0) + \ell ^{\infty ,1/2}_n, \quad i=1,2, \end{aligned}$$ where \(\mu ^i_n(0)\) is given by (3.4). More precisely, this means that the sequence \((\mu ^i_n(\psi )- \mu ^i_n(0))_{n\in \mathbb {Z}}\), \(i=1,2\), lies in \(\ell ^{\infty ,1/2}_\mathbb {C}\) for every \(\psi \in \mathrm {X}\) with a uniform bound in \(\ell ^{\infty ,1/2}_\mathbb {C}\) for \(\psi \) ranging within arbitrary but fixed bounded subsets of \(\mathrm {X}\). 
To see this, we note that the radius of the disc \(D^i_n\) centered at \(\mu ^i_n(0)={{\,\mathrm{sgn}\,}}(n)\sqrt{(-1)^{i-1}|n|\pi /2}\), \(i=1,2\), which contains the Dirichlet eigenvalue \(\mu ^i_n(\psi )\) for each \(|n|>N\), is of order \({\mathcal {O}} (|n|^{-1/2})\) as \(|n|\rightarrow \infty \); the integer \(N\ge 1\) can be chosen uniformly for all \(\psi \) within a fixed bounded subset of \(\mathrm {X}\). The following theorem improves the estimate in (3.6) considerably; furthermore it establishes an equicontinuity property of the set of Dirichlet eigenvalues as functions of \(\psi \). Theorem 3.5 Let B be a bounded neighborhood of \(\psi =0\) in \(\mathrm {X}\). Uniformly on B, $$\begin{aligned} \mu ^i_n(\psi ) = \mu ^i_n(0) + \ell ^{\infty ,1}_n, \quad i=1,2, \end{aligned}$$ where \(\mu ^i_n(0)\) is given by (3.4). There exists a neighborhood \(W\subseteq B\) of the zero potential such that for all \(\psi \in W\), $$\begin{aligned} \mu ^i_n(\psi )\in D^i_n \quad \text {for} \quad |n|\ge 1, \quad \mu ^i_0(\psi )\in D_0 \qquad (i=1,2). \end{aligned}$$ Let \(W\subseteq B\) be an open neighborhood of the zero potential such that (3.8) is fulfilled for all \(\psi \in W\). Then the Dirichlet eigenvalues \(\mu ^i_n\), considered as functions of \(\psi \), are analytic on W for all \(n\in \mathbb {Z}\setminus \{0\}\), \(i=1,2\). Furthermore, for every \(\psi \in W\) and every sequence \((\psi _k)_{k\in \mathbb {N}}\) in W with \(\psi _k\rightarrow \psi \) as \(k\rightarrow \infty \) it holds that $$\begin{aligned} \lim _{k\rightarrow \infty } \bigg (\sup _{n\in \mathbb {Z}\setminus \{0\}} (1+n^2)^\frac{1}{2} \big |\mu ^i_n(\psi _k) - \mu ^i_n(\psi )\big | \bigg ) = 0, \quad i=1,2. \end{aligned}$$ Proof To prove the first assertion, we note that (3.6) and Theorem 2.19 imply (cf. the asymptotic estimate in (2.34)) $$\begin{aligned} 0= \chi _{\mathrm D} (\mu ^i_n(\psi )) = \sin [2 (\mu ^i_n(\psi ))^2] + \ell ^{\infty ,1/2}_n, \quad i=1,2 \end{aligned}$$ uniformly on B.
Therefore, the fundamental theorem of calculus implies that, uniformly on B, $$\begin{aligned} \ell ^{\infty ,1/2}_n&= \sin [2 \mu ^i_n(\psi )^2] - \sin [2 \mu ^i_n(0)^2] \\&= 2\big (\mu ^i_n(\psi )^2 - \mu ^i_n(0)^2\big ) \int ^1_0 \cos [t \, 2 \mu ^i_n(\psi )^2 + (1-t) \, 2 \mu ^i_n(0)^2] \, \mathrm d t, \quad i=1,2. \end{aligned}$$ The integral in the above equation is uniformly bounded in \(\ell ^\infty _\mathbb {C}\) for all potentials in B, since the line segments connecting \(\mu ^i_n(\psi )^2\) with \(\mu ^i_n(0)^2\) are uniformly bounded in \(\ell ^\infty _\mathbb {C}\) due to (3.6). Thus, $$\begin{aligned} \ell ^{\infty ,1/2}_n = \mu ^i_n(\psi )^2 - \mu ^i_n(0)^2 = \big (\mu ^i_n(\psi ) - \mu ^i_n(0)\big ) \big (\mu ^i_n(\psi ) + \mu ^i_n(0)\big ), \quad i=1,2 \end{aligned}$$ uniformly on B. By employing (3.6) once again, we infer that $$\begin{aligned} \ell ^{\infty ,1/2}_n = \mu ^i_n(0) \big (\mu ^i_n(\psi ) - \mu ^i_n(0)\big ), \quad i=1,2 \end{aligned}$$ uniformly on B, which is equivalent to the first assertion of the theorem. To prove the second assertion, we recall that the characteristic function \(\chi _{\mathrm D}\) is analytic (and hence also continuous) with respect to \(\psi \), and that (3.8) clearly holds for \(\psi =0\). Hence, if \(W\subseteq B\) is a sufficiently small neighborhood of the zero potential, then, for each \(\psi \in W\), the disc \(D^i_n\) contains precisely the simple Dirichlet eigenvalue \(\mu ^i_n(\psi )\) while \(D_0\) contains either a double eigenvalue \(\mu ^1_0(\psi )=\mu ^2_0(\psi )\) or two distinct simple eigenvalues \(\mu ^1_0(\psi )\ne \mu ^2_0(\psi )\). To prove the third assertion, we first show that the function \(\mu ^i_n:W\rightarrow \mathbb {C}\), \(n\in \mathbb {Z}\setminus \{0\}\), \(i=1,2\), is analytic and takes values in the disc \(D^i_n\).
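As a quick numerical sanity check (independent of the argument), the fundamental-theorem-of-calculus identity used above, \(\sin A-\sin B=(A-B)\int _0^1\cos (tA+(1-t)B)\,\mathrm dt\), can be confirmed by quadrature; the sample points below are arbitrary:

```python
import cmath

def integral_form(A, B, N=20000):
    """Midpoint-rule approximation of (A-B) * \\int_0^1 cos(tA+(1-t)B) dt."""
    h = 1.0 / N
    s = sum(cmath.cos((k + 0.5) * h * A + (1 - (k + 0.5) * h) * B)
            for k in range(N))
    return (A - B) * s * h

for A, B in [(1.3 + 0.2j, 0.7 - 0.1j), (3.0 + 0j, 3.0 + 0j), (-2.0 + 1j, 0.5 + 0j)]:
    lhs = cmath.sin(A) - cmath.sin(B)
    assert abs(lhs - integral_form(A, B)) < 1e-6
```

With \(A = 2\mu ^i_n(\psi )^2\) and \(B = 2\mu ^i_n(0)^2\) this is precisely the identity exploited in the display above; the uniform boundedness of the cosine factor along the connecting segments is what makes the step legitimate.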
Analyticity is inherited from \(\chi _{\mathrm D}\) as a consequence of the implicit function theorem for analytic mappings between complex Banach spaces; see e.g. [31] for various generalizations of the classical implicit function theorem to infinite dimensional Banach spaces. Indeed, the restriction of the characteristic function for the Dirichlet eigenvalues to the domain \(D^i_n\times W\), $$\begin{aligned} \chi _{\mathrm D}\big |_{D^i_n\times W}:D^i_n\times W \rightarrow \mathbb {C}, \quad n\in \mathbb {Z}\setminus \{0\}, \; i=1,2, \end{aligned}$$ is analytic. By assumption, for each \(\psi '\in W\) there exists a unique \(\lambda '\in D^i_n\) such that \(\chi _{\mathrm D}\big |_{D^i_n\times W}(\lambda ',\psi ')=0\). Furthermore, denoting by \(\frac{\partial }{\partial \lambda }\) the partial derivative of \(\chi _{\mathrm D}\big |_{D^i_n\times W}\) with respect to its first variable \(\lambda \), we claim that $$\begin{aligned} \frac{\partial }{\partial \lambda } \chi _{\mathrm D}\Big |_{D^i_n\times W} (\lambda ',\psi ') \ne 0, \quad n\in \mathbb {Z}\setminus \{0\}, \; i=1,2, \end{aligned}$$ so that the partial derivative is a linear isomorphism \(\mathbb {C}\rightarrow \mathbb {C}\). Indeed, by Theorem 2.19 (cf. (2.36)), $$\begin{aligned} \frac{\partial }{\partial \lambda }\chi _{\mathrm D}\Big |_{{\dot{\Pi }}\times B}(\lambda ,\psi ) = 4\lambda \cos 2\lambda ^2 + {\mathcal {O}}(1), \quad {\dot{\Pi }} :=\bigcup _{\begin{array}{c} n\in \mathbb {Z}\setminus \{0\} \\ i=1,2 \end{array}}D^i_n \end{aligned}$$ uniformly on B as \(|\lambda |\rightarrow \infty \). This implies that $$\begin{aligned} \frac{\partial }{\partial \lambda } \chi _{\mathrm D}\ne 0 \quad \text {on} \quad \bigcup _{\begin{array}{c} |n|>N \\ i=1,2 \end{array}} D^i_n \times B \end{aligned}$$ for some large enough \(N\ge 0\). 
By continuity of \(\frac{\partial \chi _{\mathrm D}}{\partial \lambda }\) with respect to \(\psi \) and the fact that the formula for \(\frac{\partial \chi _{\mathrm D}}{\partial \lambda }\) in (3.11) holds without the error term when \(\psi =0\), it is clear that (3.12) holds for \(N=0\) with B replaced by W. This proves (3.10). In view of (3.10), the implicit function theorem guarantees the existence of a unique (global) analytic function \({\tilde{\mu }}^i_n:W\rightarrow \mathbb {C}\) such that, for all \((\lambda ,\psi )\in D^i_n\times W\), $$\begin{aligned} \chi _{\mathrm D}\big |_{D^i_n\times W} (\lambda ,\psi ) =0 \iff \lambda =\tilde{\mu }^i_n(\psi ), \quad n\in \mathbb {Z}\setminus \{0\}, \; i=1,2. \end{aligned}$$ Since \({\tilde{\mu }}^i_n= \mu ^i_n\), this shows that \(\mu ^i_n:W\rightarrow \mathbb {C}\) is analytic for \(n\in \mathbb {Z}\setminus \{0\}\), \(i=1,2\). For \(n \in \mathbb {Z}\) and \(i=1,2\), we consider the analytic mappings $$\begin{aligned} \beta ^i_n :W\rightarrow \mathbb {C}, \quad \psi \mapsto \beta ^i_n(\psi ) :={\left\{ \begin{array}{ll} (1+n^2)^\frac{1}{2}\big ( \mu ^i_n(\psi )-\mu ^i_n(0)\big ) &{}n\in \mathbb {Z}\setminus \{0\} \\ 0 &{}n=0. \end{array}\right. } \end{aligned}$$ The first assertion of the theorem implies that the family \(\{\beta ^i_n\}^{i=1,2}_{n\in \mathbb {Z}}\) is uniformly bounded in \(\mathbb {C}\). Since all the functions of this family are analytic, it follows that \(\{\beta ^i_n\}^{i=1,2}_{n\in \mathbb {Z}}\) is uniformly equicontinuous on B, cf. [28, Proposition 9.15]. That is, for each \(\varepsilon >0\) there exists a \(\delta >0\) such that, for all \(n\in \mathbb {Z}\), \(i\in \{1,2\}\), and all \(\psi ,\psi ' \in B\), $$\begin{aligned} \Vert \psi -\psi '\Vert < \delta \quad \Rightarrow \quad \varepsilon > |\beta ^i_n(\psi )-\beta ^i_n(\psi ')| = \big |(1+n^2)^\frac{1}{2}\big ( \mu ^i_n(\psi )-\mu ^i_n(\psi ')\big )\big |. 
\end{aligned}$$ This implies that the two mappings $$\begin{aligned} W\rightarrow {\check{\ell }}^{\infty ,1/2}_\mathbb {C}, \quad \psi \mapsto {\left\{ \begin{array}{ll} \mu ^i_n(\psi ) &{}n\in \mathbb {Z}\setminus \{0\} \\ 0 &{} n=0, \end{array}\right. } \qquad i=1,2, \end{aligned}$$ are continuous, which proves the third assertion. \(\square \) A more abstract proof of (3.9) proceeds as follows. By the general version of Montel's theorem for analytic functions on separable complex Banach spaces, see e.g. [28, Proposition 9.16], the family \(\{\beta ^i_n\}^{i=1,2}_{n\in \mathbb {Z}}\) in the proof of Theorem 3.5 is normal in the locally convex topological vector space \({\mathcal {H}}(W)\), the space of all analytic functions from W to \(\mathbb {C}\), endowed with the topology \(\tau _c\) of uniform convergence on compact subsets of W. That is, each sequence of elements of \(\{\beta ^i_n\}^{i=1,2}_{n\in \mathbb {Z}}\) has a subsequence which converges in \(({\mathcal {H}}(W), \tau _c)\). This allows us to obtain (3.9) by interchanging the order of taking the limit and supremum as follows: $$\begin{aligned} \lim _{k\rightarrow \infty } \bigg (\sup _{n\in \mathbb {Z}} \big |\beta ^i_n(\psi _k) - \beta ^i_n(\psi )\big | \bigg ) = \sup _{n\in \mathbb {Z}} \bigg ( \lim _{k\rightarrow \infty } \big |\beta ^i_n(\psi _k) - \beta ^i_n(\psi )\big | \bigg ). \end{aligned}$$ We define the Neumann domain \({\mathcal {A}}_{\mathrm N}\) of the AKNS-system (2.3) by $$\begin{aligned} {\mathcal {A}}_{\mathrm N} :=\big \{ f\in H^1([0,1],\mathbb {C}) \times H^1([0,1],\mathbb {C}) \; \big | \; f_1(0)=0=f_1(1) \big \}. 
\end{aligned}$$ The Neumann domain \(\mathcal D_{\mathrm N} \) of the corresponding ZS-system (2.1) is then given by $$\begin{aligned} {\mathcal {D}}_{\mathrm N} :=\big \{ g\in H^1([0,1],\mathbb {C}) \times H^1([0,1],\mathbb {C}) \; \big | \; (g_1+g_2)(0)=0=(g_1+g_2)(1) \big \}, \end{aligned}$$ as \({\mathcal {A}}_{\mathrm N}\) corresponds to \({\mathcal {D}}_{\mathrm N}\) under the transformation T. For a given potential \(\psi \in \mathrm {X}\), we say that \(\lambda \in \mathbb {C}\) lies in the Neumann spectrum if there exists a \(\phi \in {\mathcal {D}}_{\mathrm N}\setminus \{0\}\) which solves (2.1). Fix \(\psi \in \mathrm {X}\). The Neumann spectrum related to (2.1) is the zero set of the entire function $$\begin{aligned} \chi _{\mathrm N}(\lambda ,\psi ) :=\frac{\grave{m}_4-\grave{m}_3 + \grave{m}_2 - \grave{m}_1}{2\mathrm {i}}\bigg |_{(\lambda ,\psi )}. \end{aligned}$$ In particular, \(\chi _{\mathrm N}(\lambda ,0) = \chi _{\mathrm D}(\lambda ,0) = \sin 2\lambda ^2\). Due to the definition of \({\mathcal {D}}_{\mathrm N}\), a complex number \(\lambda \) lies in the Neumann spectrum of system (2.1) if and only if the fundamental solution M maps the initial value \((1,-1)\) to a collinear vector at \(t=1\). That is, if and only if \(\grave{m}_1-\grave{m}_2 =-\grave{m}_3+\grave{m}_4\). \(\square \) By Theorem 2.7, the characteristic function \(\chi _{\mathrm N}\) satisfies $$\begin{aligned} \chi _{\mathrm N}(\lambda ,\psi ) = \sin 2\lambda ^2 + {\mathcal {O}} ( |\lambda |^{-1}\, \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}) \end{aligned}$$ uniformly on bounded subsets of \(\mathrm {X}\) as \(|\lambda |\rightarrow \infty \). For \(\psi \in \mathrm {X}\) we set $$\begin{aligned} \sigma _{\mathrm N}(\psi ) :=\{\lambda \in \mathbb {C}:\chi _{\mathrm N}(\lambda ,\psi )=0 \}. \end{aligned}$$ As for the Dirichlet case, we obtain the following asymptotic localization for the elements of the Neumann spectrum.
(Counting Lemma for Neumann eigenvalues) Let B be a bounded subset of \(\mathrm {X}\). There exists an integer \(N\ge 1\), such that for every \(\psi \in B\), the entire function \(\chi _{\mathrm N}(\cdot ,\psi )\) has exactly one root in each of the two discs \(D^i_n\), \(i=1,2\), for \(n\in \mathbb {Z}\) with \(|n|>N\), and exactly \(2(2N+1)\) roots in the disc \(B_N\) when counted with multiplicity. There are no other roots. We label the Neumann eigenvalues in the same way as the Dirichlet eigenvalues. The Neumann spectrum of the zero potential \(\psi =0\) coincides with the corresponding Dirichlet spectrum: $$\begin{aligned} \nu ^i_n(0) = \mu ^i_n(0) = {{\,\mathrm{sgn}\,}}(n) \sqrt{\frac{(-1)^{i-1}|n|\pi }{2}}, \quad i=1,2. \end{aligned}$$ Remark 3.4 applies also to the Neumann eigenvalues. The analog of Theorem 3.5 for Neumann eigenvalues reads as follows. Let B be a bounded neighborhood of \(\psi =0\) in \(\mathrm {X}\). Uniformly on B, $$\begin{aligned} \nu ^i_n(\psi ) = \nu ^i_n(0) + \ell ^{\infty ,1}_n, \quad i=1,2, \end{aligned}$$ where \(\nu ^i_n(0)\) is given by (3.13). There exists a neighborhood \(W\subseteq B\) of the zero potential such that for all \(\psi \in W\), $$\begin{aligned} \nu ^i_n(\psi )\in D^i_n \quad \text {for} \quad |n|\ge 1, \quad \nu ^i_0(\psi )\in D_0 \qquad (i=1,2). \end{aligned}$$ Let \(W\subseteq B\) be an open neighborhood of the zero potential such that (3.14) is fulfilled for all \(\psi \in W\). Then the Neumann eigenvalues \(\nu ^i_n\) are analytic on W for all \(n\in \mathbb {Z}\setminus \{0\}\), \(i=1,2\). Furthermore, for every \(\psi \in W\) and every sequence \((\psi _k)_{k\in \mathbb {N}}\) in W with \(\psi _k\rightarrow \psi \) as \(k\rightarrow \infty \) it holds that $$\begin{aligned} \lim _{k\rightarrow \infty } \bigg (\sup _{n\in \mathbb {Z}\setminus \{0\}} (1+n^2)^\frac{1}{2} \big |\nu ^i_n(\psi _k) - \nu ^i_n(\psi )\big | \bigg ) = 0, \quad i=1,2.
\end{aligned}$$

3.2 Periodic spectrum

The trace of the fundamental matrix solution \(M(t,\lambda ,\psi )\) at time \(t=1\) is called the discriminant and is denoted by \(\Delta \): $$\begin{aligned} \Delta \equiv \Delta (\lambda ,\psi ) :=\mathrm {tr} \, \grave{M}= \grave{m}_1 + \grave{m}_4. \end{aligned}$$ The sum of the off-diagonal entries of \(\grave{M}\) is referred to as the anti-discriminant: $$\begin{aligned} \delta \equiv \delta (\lambda ,\psi ) :=\grave{m}_2+\grave{m}_3. \end{aligned}$$ The discriminant \(\Delta \), the anti-discriminant \(\delta \) and their respective \(\lambda \)-derivatives \({\dot{\Delta }}\) and \({\dot{\delta }}\) are compact analytic functions on \(\mathbb {C}\times \mathrm {X}\). At the zero potential, $$\begin{aligned} \Delta (\lambda ,0)=2 \cos 2\lambda ^2, \quad \lambda \in \mathbb {C}. \end{aligned}$$ Both the discriminant and the anti-discriminant are analytic due to Theorem 2.1, and compactness follows from Proposition 2.2. From Corollary 2.5 we infer that the \(\lambda \)-derivatives \({\dot{\Delta }}\) and \({\dot{\delta }}\) inherit both properties. \(\square \) The periodic domain \({\mathcal {D}}_{\mathrm P}\) of the ZS-system (2.1) is defined by $$\begin{aligned} {\mathcal {D}}_{\mathrm P} :=\left\{ f\in H^1 \left( [0,1],\mathbb {C}\right) \times H^1 \left( [0,1],\mathbb {C}\right) \; \big | \; f(1)=f(0) \; \text {or} \; f(1)=-f(0) \right\} . \end{aligned}$$ A complex number \(\lambda \) is called a periodic eigenvalue if (2.1) is satisfied for some \(\phi \in {\mathcal {D}}_{\mathrm P}\setminus \{0\}\). Let \(\psi \in \mathrm {X}\). A complex number \(\lambda \) is a periodic eigenvalue if and only if it is a zero of the entire function $$\begin{aligned} \chi _{\mathrm P}(\lambda ,\psi ) :=\Delta ^2 (\lambda ,\psi ) -4. \end{aligned}$$ Let us fix \(\psi \in \mathrm {X}\). 
Since M is the fundamental solution of (2.1), a complex number \(\lambda \) is a periodic eigenvalue if and only if there exists a nonzero element \(f\in \mathcal D_{\mathrm P}\) with $$\begin{aligned} f(1) = M(1,\lambda ) f(0) = \pm f(0), \end{aligned}$$ hence if and only if 1 or \(-1\) is an eigenvalue of \(M(1,\lambda )\). As \(\det M(1,\lambda ) =1\) by Proposition 2.3, the two eigenvalues of \(M(1,\lambda )\) are either both equal to 1 or both equal to \(-1\). Therefore we either have \(\Delta (\lambda )=2\) or \(\Delta (\lambda )=-2\), that is, \(\chi _{\mathrm P}(\lambda )=0\). \(\square \) For \(\psi \in \mathrm {X}\), we set $$\begin{aligned} \sigma _{\mathrm P}(\psi ) :=\big \{\lambda \in \mathbb {C}:\chi _{\mathrm P}(\lambda ,\psi )=0 \big \}. \end{aligned}$$ The characteristic function for the zero potential \(\psi =0\) is given by $$\begin{aligned} \chi _{\mathrm P}(\lambda ,0) = -4 \sin ^2 2\lambda ^2; \end{aligned}$$ each root has multiplicity two, except for the root \(\lambda =0\), which has multiplicity four. Thus the periodic spectrum of the zero potential consists of two bi-infinite sequences of double eigenvalues $$\begin{aligned} \lambda ^{i,\pm }_n(0) = {{\,\mathrm{sgn}\,}}(n) \sqrt{\frac{(-1)^{i-1}|n|\pi }{2}}, \quad i=1,2 \end{aligned}$$ on the real and imaginary axes in the complex plane. The \(\lambda \)-derivative of the discriminant at the zero potential is given by $$\begin{aligned} \dot{\Delta }(\lambda ,0) = -8 \lambda \sin 2\lambda ^2, \end{aligned}$$ and its roots, the so-called critical points of the discriminant for the zero potential, denoted by \({\dot{\lambda }}^i_n(0)\), \(i=1,2\), \(n\in \mathbb {Z}\), coincide with the periodic eigenvalues (and the Dirichlet and Neumann eigenvalues): $$\begin{aligned} \dot{\lambda }^i_n(0) = {{\,\mathrm{sgn}\,}}(n) \sqrt{\frac{(-1)^{i-1}|n|\pi }{2}}, \quad i=1,2. \end{aligned}$$ Note that the root \(\lambda =0\) has multiplicity three; all the other roots of \({\dot{\Delta }}(\cdot ,0)\) are simple roots. 
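As a quick numerical sanity check (a sketch, not part of the paper), the formula \(\dot{\Delta }(\lambda ,0) = -8\lambda \sin 2\lambda ^2\) is indeed the \(\lambda \)-derivative of \(\Delta (\lambda ,0)=2\cos 2\lambda ^2\), and near \(\lambda =0\) it behaves like \(-16\lambda ^3\), which exhibits the triple root at the origin:

```python
import cmath

Delta0 = lambda lam: 2 * cmath.cos(2 * lam ** 2)       # discriminant, zero potential
dDelta0 = lambda lam: -8 * lam * cmath.sin(2 * lam ** 2)  # its lambda-derivative

# central finite differences approximate the lambda-derivative
h = 1e-6
for lam in (0.3, 1.0 + 0.5j, -2.1j):
    fd = (Delta0(lam + h) - Delta0(lam - h)) / (2 * h)
    assert abs(fd - dDelta0(lam)) < 1e-6

# near lambda = 0 one has dDelta0(lam) ~ -16 lam^3: a root of multiplicity three
lam_small = 1e-3
assert abs(dDelta0(lam_small) + 16 * lam_small ** 3) < 1e-12
```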
Fix \(\psi \in \mathrm {X}\). As \(|\lambda |\rightarrow \infty \) with \(\lambda \notin \Pi = \bigcup _{n\in \mathbb {Z},\, i=1,2}D^i_n\), $$\begin{aligned} \chi _{\mathrm P}(\lambda ,\psi )&= (-4\sin ^2 2\lambda ^2) \big (1+{\mathcal {O}}(|\lambda |^{-1})\big ), \end{aligned}$$ $$\begin{aligned} {\dot{\Delta }}(\lambda ,\psi )&= (-8\lambda \sin 2\lambda ^2) \big (1+{\mathcal {O}}(|\lambda |^{-1})\big ). \end{aligned}$$ These asymptotic estimates hold uniformly on bounded subsets of \(\mathrm {X}\). For the zero potential these formulas hold without the error terms. By Theorem 2.7, we have \(\Delta (\lambda ,\psi )= 2 \cos 2\lambda ^2 + {\mathcal {O}}(|\lambda |^{-1}\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|})\) uniformly on bounded subsets of \(\mathrm {X}\), and thus $$\begin{aligned} \chi _{\mathrm P}(\lambda ,\psi ) = (-4 \sin ^2 2\lambda ^2) \bigg [ 1 + \frac{ {\mathcal {O}}\big (|\lambda |^{-1} \, \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\big ) \cos 2\lambda ^2 }{\sin ^2 2\lambda ^2} + \frac{ {\mathcal {O}}\big (|\lambda |^{-2} \, \mathrm {e}^{4|\mathfrak {I}(\lambda ^2)|}\big ) }{ \sin ^2 2\lambda ^2} \bigg ]. \end{aligned}$$ For \(\lambda \notin \Pi \), we have \(4 \, |\sin 2\lambda ^2| > \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\), cf. Lemma 3.2, and therefore $$\begin{aligned} \bigg | \frac{\cos 2\lambda ^2}{\sin 2\lambda ^2} \bigg | \le \frac{ \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}}{|\sin 2\lambda ^2|} < 4 \quad \text {for} \quad \lambda \in \mathbb {C}\setminus \Pi . \end{aligned}$$ The estimate (3.19) follows. Moreover, by Theorem 2.7, $$\begin{aligned} {\dot{\Delta }}(\lambda ,\psi ) = (-8\lambda \sin 2\lambda ^2) \bigg [ 1 + \frac{\mathcal O\big (\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\big )}{\lambda \sin 2\lambda ^2} \bigg ] \end{aligned}$$ uniformly on bounded subsets of \(\mathrm {X}\) and thus (3.21) yields (3.20). \(\square \) The next result provides an asymptotic localization of the periodic eigenvalues. 
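The estimates above use the elementary bound \(|\cos z| \le \mathrm {e}^{|\mathfrak {I}z|}\) (applied with \(z=2\lambda ^2\)), which follows from \(|\cos z|^2 = \cos ^2 (\mathfrak {R}z) + \sinh ^2 (\mathfrak {I}z) \le \cosh ^2(\mathfrak {I}z)\). A quick numerical sketch (not part of the paper):

```python
import cmath, math, random

# spot-check |cos z| <= e^{|Im z|} on random points of the complex plane
random.seed(0)
for _ in range(1000):
    z = complex(random.uniform(-20, 20), random.uniform(-20, 20))
    assert abs(cmath.cos(z)) <= math.exp(abs(z.imag)) + 1e-9
```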
(Counting Lemma for periodic eigenvalues) Let B be a bounded subset of \(\mathrm {X}\). There exists an integer \(N\ge 1\), such that for every \(\psi \in B\), the entire function \(\chi _{\mathrm P}(\cdot ,\psi )\) has exactly two roots in each of the two discs \(D^i_n\), \(i=1,2\), for \(n\in \mathbb {Z}\) with \(|n|>N\), and exactly \(4(2N + 1)\) roots in the disc \(B_N\), when counted with multiplicity. There are no other roots. Let \(B\subseteq \mathrm {X}\) be bounded. By Lemma 3.12, $$\begin{aligned} \chi _{\mathrm P}(\lambda ,\psi ) = \chi _{\mathrm P}(\lambda ,0) \big (1+o(1)\big ) \end{aligned}$$ for \(|\lambda | \rightarrow \infty \) with \(\lambda \notin \Pi \) uniformly for \(\psi \in B\). Hence there exists an integer \(N\ge 1\) such that, for all \(\psi \in B\), $$\begin{aligned} |\chi _{\mathrm P}(\lambda ,\psi ) - \chi _{\mathrm P}(\lambda ,0)| < | \chi _{\mathrm P}(\lambda ,0)| \end{aligned}$$ on the boundaries of all discs \(D^i_n\) with \(|n|>N\), \(i=1,2\), and also on the boundary of \(B_N\). As in the proof of Lemma 3.3, the result follows by an application of Rouché's theorem. \(\square \) Lemma 3.12 yields a Counting Lemma for the critical points of \(\Delta \) as well: (Counting Lemma for critical points) Let B be a bounded subset of \(\mathrm {X}\). There exists an integer \(N\ge 1\), such that for every \(\psi \in B\), the entire function \(\dot{\Delta }(\cdot ,\psi )\) has exactly one root in each of the two discs \(D^i_n\), \(i=1,2\) for all \(|n|>N\), and exactly \(4N + 3\) roots in the disc \(B_N\), when counted with multiplicity. There are no other roots. Let \(\psi \in \mathrm {X}\) be an arbitrary potential. Inspired by (3.17) and (3.18), we denote the corresponding periodic eigenvalues and critical points by \(\lambda ^{i,\pm }_n \equiv \lambda ^{i,\pm }_n(\psi )\) and \(\dot{\lambda }^i_n \equiv {\dot{\lambda }}^i_n(\psi )\) respectively, \(n\in \mathbb {Z}\), \(i=1,2\). 
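The Rouché-type root count can be illustrated numerically via the argument principle: the winding number of \(\chi _{\mathrm P}(\cdot ,0)\) along a contour counts the enclosed roots with multiplicity. A sketch (disc centers and radii chosen purely for illustration, not taken from the paper):

```python
import cmath, math

def chiP0(lam):
    """Characteristic function of the zero potential: -4 sin^2(2 lam^2)."""
    return -4 * cmath.sin(2 * lam ** 2) ** 2

def count_zeros(f, center, radius, samples=20000):
    # argument principle: the number of zeros inside the circle equals the
    # winding number of f along the circle, computed by phase unwrapping
    winding = 0.0
    prev = cmath.phase(f(center + radius))
    for k in range(1, samples + 1):
        lam = center + radius * cmath.exp(2j * math.pi * k / samples)
        cur = cmath.phase(f(lam))
        d = cur - prev
        if d > math.pi:        # unwrap phase jumps across the branch cut
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        winding += d
        prev = cur
    return round(winding / (2 * math.pi))

# double root of chiP0 at lam = sqrt(pi/2); quadruple root at lam = 0
assert count_zeros(chiP0, cmath.sqrt(math.pi / 2), 0.3) == 2
assert count_zeros(chiP0, 0, 0.5) == 4
```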
The critical points are labeled in the same way as the Dirichlet and Neumann eigenvalues (except that there is one additional root close to the origin, which we will ignore whenever we consider sequences of critical points of the form \(({\dot{\lambda }}^i_n)_{n\in \mathbb {Z}}\) with \(i=1\) or \(i=2\)). Remark 3.4 applies also to the critical points. Concerning periodic eigenvalues, we adapt our labeling procedure as follows. Let \(N\ge 0\) be the minimal integer such that (a) for each \(|n|>N\) and each \(i=1,2\), the disc \(D^i_n\) contains either two simple periodic eigenvalues or one periodic double eigenvalue and (b) \(B_N\) contains precisely \(4(2N+1)\) roots of \(\chi _{\mathrm P}(\cdot ,\psi )\) when counted with multiplicity. The two eigenvalues in the disc \(D^i_n\), \(|n|>N\), \(i=1,2\), will be denoted by \(\lambda ^{i,\pm }_n\) and ordered so that \(\lambda ^{i,-}_n \preceq \lambda ^{i,+}_n\). The remaining \(4(2N+1)\) roots in \(B_N\) are labeled such that $$\begin{aligned} \lambda ^{1,-}_{-N}&\preceq \lambda ^{1,+}_{-N} \preceq \cdots \preceq \lambda ^{1,-}_{-1} \preceq \lambda ^{1,+}_{-1}\preceq \lambda ^{2,-}_{-N} \preceq \lambda ^{2,+}_{-N} \preceq \cdots \preceq \lambda ^{2,-}_{-1} \preceq \lambda ^{2,+}_{-1} \preceq \lambda ^{1,-}_0 \\&\preceq \lambda ^{1,+}_0 \preceq \lambda ^{2,-}_0 \preceq \lambda ^{2,+}_0 \preceq \lambda ^{2,-}_1 \preceq \lambda ^{2,+}_1 \preceq \cdots \preceq \lambda ^{2,-}_{N} \preceq \lambda ^{2,+}_{N} \preceq \lambda ^{1,-}_1 \preceq \lambda ^{1,+}_1 \\&\preceq \cdots \preceq \lambda ^{1,-}_{N} \preceq \lambda ^{1,+}_{N}. \end{aligned}$$ The labeling of the roots of \(\chi _{\mathrm P}(\cdot ,\psi )\) according to the above procedure is unique except that the label of a particular periodic eigenvalue is ambiguous whenever it is not a simple root of \(\chi _{\mathrm P}(\cdot ,\psi )\). 
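Assuming \(\preceq \) denotes the lexicographic ordering on \(\mathbb {C}\) (by real part first, then imaginary part, as introduced earlier in the paper), the interleaving of the \(i=1\) and \(i=2\) labels in the chain above can be reproduced by sorting sample zero-potential eigenvalues. A sketch with illustrative numeric values:

```python
# Assumption: a ⪯ b iff Re a < Re b, or Re a == Re b and Im a <= Im b
def lex_key(lam):
    return (lam.real, lam.imag)

# sample double eigenvalues of the zero potential: i=1 on the real axis,
# i=2 on the imaginary axis (values are illustrative approximations)
lams = [1.25, -1.25, 1.77, -1.77, 0.0, 1.25j, -1.25j, 1.77j, -1.77j]
ordered = sorted(lams, key=lex_key)

# negative real values come first, positive real values last, and the
# purely imaginary values (i=2) sit in between, as in the displayed chain
assert ordered[0] == -1.77 and ordered[-1] == 1.77
assert ordered.index(1.25j) < ordered.index(1.25)
assert ordered.index(-1.25) < ordered.index(-1.25j)
```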
Sequences of Dirichlet eigenvalues of the form \((\mu ^i_n)_{n\in \mathbb {Z}}\), where \(i=1\) or \(i=2\), are always well-defined, since each element of such a sequence has a uniquely defined value. If \(N=0\) happens to be the minimal integer, we agree on the convention that merely $$\begin{aligned} \lambda ^{1,-}_0 \preceq \lambda ^{1,+}_0 \quad \text {and} \quad \lambda ^{2,-}_0 \preceq \lambda ^{2,+}_0 \end{aligned}$$ is required rather than \(\lambda ^{1,-}_0 \preceq \lambda ^{1,+}_0 \preceq \lambda ^{2,-}_0 \preceq \lambda ^{2,+}_0\). This convention provides the freedom to label periodic eigenvalues inside the disc \(B_0=D_0\) in the intuitive way (compare e.g. the labelings of Fig. 4a, b, where we labeled the two periodic double eigenvalues closest to the origin either by \(\lambda ^{2,+}_0\) and \(\lambda ^{2,-}_0\) (Fig. 4a), or by \(\lambda ^{1,+}_0\) and \(\lambda ^{1,-}_0\) (Fig. 4b), since they lie close to the imaginary axis in Fig. 4a, and on the real axis in Fig. 4b). As in the case of Dirichlet eigenvalues, Neumann eigenvalues, and critical points, no general statement can be made regarding the multiplicity of periodic eigenvalues located near the origin; Fig. 5 illustrates this fact by means of an explicit example. Likewise, the labeling of periodic eigenvalues is generally not preserved under continuous deformations of the potential. Unlike the situation for Dirichlet eigenvalues, Neumann eigenvalues, and critical points, both multiplicity and labeling of periodic eigenvalues are generally not preserved under continuous deformations asymptotically for large |n|. Since each \(D^i_n\) contains two periodic eigenvalues, their lexicographical ordering may entail discontinuities. The Counting Lemma allows us to determine the sign of the discriminant at periodic eigenvalues with sufficiently large index |n|. Indeed, recall that \(|\Delta (\lambda ,\psi )|=2\) when \(\lambda \) is a periodic eigenvalue, cf. Theorem 3.11. 
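At the zero potential this is explicit: the periodic eigenvalues coincide with the Dirichlet and Neumann eigenvalues, and \(\Delta (\lambda ^{i,\pm }_n(0),0) = 2\cos (\pm n\pi ) = 2(-1)^n \in \{-2,2\}\). A numerical sketch (not part of the paper):

```python
import cmath, math

# zero-potential eigenvalues sgn(n)*sqrt((-1)**(i-1)*|n|*pi/2), cf. (3.13)/(3.17)
def lam0(n, i):
    s = 1 if n >= 0 else -1
    return s * cmath.sqrt((-1) ** (i - 1) * abs(n) * math.pi / 2)

chi0 = lambda lam: cmath.sin(2 * lam ** 2)        # Dirichlet/Neumann char. function
Delta0 = lambda lam: 2 * cmath.cos(2 * lam ** 2)  # discriminant, zero potential

for n in range(-6, 7):
    for i in (1, 2):
        lam = lam0(n, i)
        assert abs(chi0(lam)) < 1e-9                      # an eigenvalue of chi(.,0)
        assert abs(Delta0(lam) - 2 * (-1) ** n) < 1e-9    # Delta takes the value ±2
```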
Fix \(\psi \in \mathrm {X}\) and choose \(N\ge 1\) according to the Counting Lemma so that each of the two discs \(D^i_n\), \(i=1,2\), for \(n\in \mathbb {Z}\) with \(|n|>N\) contains exactly two periodic eigenvalues. In fact, we can without loss of generality assume that \(D^i_n\) contains exactly two periodic eigenvalues \(\lambda ^{i,\pm }_n(s \psi )\) for each potential \(s \psi \) belonging to the line segment \(S:=\{s \psi :0\le s\le 1 \}\). Since \(S \subseteq \mathrm {X}\) is compact, we can choose N uniformly with respect to S. Let us now consider the continuous paths \(\rho ^{i,\pm }_n :[0,1] \rightarrow \mathbb {C}\), \(s\mapsto \lambda ^{i,\pm }_n(s \psi )\), \(i=1,2\). Since \(\Delta \) is continuous and \(\Delta (\lambda ^{i,\pm }_n(s \psi ),s \psi ) \in \{-2,2\}\) for \(s \in [0,1]\), we conclude that either $$\begin{aligned} \Delta (\rho ^{i,\pm }_n(s), s \psi ) \equiv 2 \ \text {on} \ [0,1] \qquad&\text {or} \qquad \Delta (\rho ^{i,\pm }_n(s), s \psi ) \equiv -2 \ \text {on} \ [0,1]. \end{aligned}$$ Evaluating at \(s=0\) and \(s=1\) thus yields $$\begin{aligned} \Delta (\lambda ^{i,\pm }_n(\psi ),\psi ) = \Delta (\lambda ^{i,\pm }_n(0),0) = 2\, \cos n \pi = 2(-1)^n \quad \text {for} \quad |n|>N, \; i=1,2. \end{aligned}$$ The next lemma establishes a relation between the discriminant and the anti-discriminant evaluated at Dirichlet or Neumann eigenvalues. If \(\psi \in \mathrm {X}\) and \(\mu ^i_n \equiv \mu ^i_n(\psi )\) is any Dirichlet eigenvalue of \(\psi \), then $$\begin{aligned} \Delta ^2(\mu ^i_n,\psi ) - 4 = \delta ^2(\mu ^i_n,\psi ). \end{aligned}$$ This identity holds also at any Neumann eigenvalue \(\nu ^i_n \equiv \nu ^i_n(\psi )\) of \(\psi \). We recall that \(\grave{m}_1 \grave{m}_4 - \grave{m}_2 \grave{m}_3=1\) by Proposition 2.3. Therefore, $$\begin{aligned} \Delta ^2 - 4&= (\grave{m}_1 + \grave{m}_4)^2 - 4 \\&= (\grave{m}_1 + \grave{m}_4)^2 - 4(\grave{m}_1 \grave{m}_4 - \grave{m}_2 \grave{m}_3) \\&= (\grave{m}_1 - \grave{m}_4)^2 + 4 \grave{m}_2 \grave{m}_3. 
\end{aligned}$$ Let \(\mu ^i_n\) be a Dirichlet eigenvalue of \(\psi \in \mathrm {X}\), \(i=1,2\). Then \(\mu ^i_n\) is a root of \(\grave{m}_4 + \grave{m}_3 - \grave{m}_2 - \grave{m}_1\), that is $$\begin{aligned} (\grave{m}_1 - \grave{m}_4)\big |_{(\mu ^i_n,\psi )} = (\grave{m}_3 - \grave{m}_2)\big |_{(\mu ^i_n,\psi )}. \end{aligned}$$ Consequently, $$\begin{aligned} \Delta ^2(\mu ^i_n,\psi ) - 4 = \big (\grave{m}_2(\mu ^i_n,\psi ) + \grave{m}_3 (\mu ^i_n,\psi )\big )^2 = \delta ^2(\mu ^i_n,\psi ). \end{aligned}$$ For Neumann eigenvalues \(\nu ^i_n(\psi )\), \(i=1,2\), we have $$\begin{aligned} (\grave{m}_1- \grave{m}_4)\big |_{(\nu ^i_n,\psi )} = (\grave{m}_2- \grave{m}_3)\big |_{(\nu ^i_n,\psi )}, \end{aligned}$$ which again yields the desired identity. \(\square \) By employing the identity of Lemma 3.16, we can prove the following analog of Theorems 3.5 and 3.9 for the periodic eigenvalues and the critical points. $$\begin{aligned} \lambda ^{i,\pm }_n(\psi ) =\lambda ^{i,\pm }_n(0) + \ell ^{\infty ,1}_n \quad \text {and} \quad \, {\dot{\lambda }}^i_n(\psi ) ={\dot{\lambda }}^i_n(0) + \ell ^{\infty ,1}_n, \quad i=1,2, \end{aligned}$$ where \(\lambda ^{i,\pm }_n(0)={\dot{\lambda }}^i_n(0)\) are given by (3.17) and (3.18). There exists a neighborhood \(W\subseteq B\) of the zero potential such that for all \(\psi \in W\) and every \(n\in \mathbb {Z}\): \(\sigma _{\mathrm P}(\psi ) \cap D^i_n = \{ \lambda ^{i,-}_n (\psi ),\lambda ^{i,+}_n (\psi ) \}\), \(i=1,2\); \(\Delta (\lambda ^{i,\pm }_n (\psi ),\psi ) = 2(-1)^n\), \(i=1,2\); \(\{\lambda \in \mathbb {C}:{\dot{\Delta }}(\lambda ,\psi )=0 \} \cap D^i_n = \{ {\dot{\lambda }}^i_n (\psi ) \}\), \(i=1,2\). Let \(W\subseteq B\) be an open neighborhood of the zero potential such that part (c) of (2) is fulfilled. Then the critical points \({\dot{\lambda }}^i_n\), considered as functions of \(\psi \), are analytic on W for all \(n\in \mathbb {Z}\setminus \{0\}\), \(i=1,2\). 
Furthermore, for every \(\psi \in W\) and every sequence \((\psi _k)_{k\in \mathbb {N}}\) in W with \(\psi _k\rightarrow \psi \) as \(k\rightarrow \infty \) it holds that $$\begin{aligned} \lim _{k\rightarrow \infty } \bigg (\sup _{n\in \mathbb {Z}\setminus \{0\}} (1+n^2)^\frac{1}{2} \big |{\dot{\lambda }}^i_n(\psi _k) - {\dot{\lambda }}^i_n(\psi )\big | \bigg ) = 0, \quad i=1,2. \end{aligned}$$ The proofs of the assertions for the critical points are similar to the proofs of the analogous assertions for Dirichlet eigenvalues; see Theorem 3.5 and its proof. Furthermore, the second assertion follows—in view of the Counting Lemma and Remark 3.15—from the continuity and asymptotics of \(\chi _{\mathrm P}\). It only remains to show (3.22) for the periodic eigenvalues \(\lambda ^{i,\pm }_n\equiv \lambda ^{i,\pm }_n(\psi )\). Since \(\mu ^i_n= \mu ^i_n(0) + \ell ^{\infty ,1}_n\) uniformly on B by Theorem 3.5 and \(E_{\mu ^i_n}\) is off-diagonal, we can apply Theorem 2.19 to infer that \(\delta (\mu ^i_n) = \ell ^{\infty ,1/2}_n\) uniformly on B. Since the quadratic mapping \(u_n\mapsto u^2_n\) is continuous as a map \(\ell ^{\infty ,1/2}_\mathbb {C}\rightarrow \ell ^{\infty ,1}_\mathbb {C}\), Lemma 3.16 yields $$\begin{aligned} \Delta ^2(\mu ^i_n) - 4 = \delta ^2(\mu ^i_n) = \ell ^{\infty ,1}_n \end{aligned}$$ uniformly on B. Since \(\Delta (\mu ^i_n)=2(-1)^n + \ell ^{\infty ,1/2}_n\) due to the second part of Theorem 2.19, we deduce from (3.23) by writing the left hand side as \((\Delta (\mu ^i_n) - 2) (\Delta (\mu ^i_n) + 2)\) that $$\begin{aligned} \Delta (\mu ^i_n) = 2(-1)^n + \ell ^{\infty ,1}_n. \end{aligned}$$ Next we will employ the identity $$\begin{aligned} \Delta ({\dot{\lambda }}^i_n) - \Delta (\mu ^i_n) = ({\dot{\lambda }}^i_n - \mu ^i_n) \int ^1_0 {\dot{\Delta }}(t {\dot{\lambda }}^i_n + (1-t) \mu ^i_n ) \, \mathrm d t, \end{aligned}$$ which is a consequence of the fundamental theorem of calculus. 
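The algebraic content of Lemma 3.16 invoked here can be checked with a small numeric sketch (not part of the paper): for any \(2\times 2\) matrix with entries \(m_1,\dots ,m_4\), unit determinant, and the Dirichlet relation \(m_1-m_4=m_3-m_2\), one has \(\Delta ^2-4=\delta ^2\):

```python
import cmath, random

random.seed(1)
rand_c = lambda: complex(random.uniform(-2, 2), random.uniform(-2, 2))

for _ in range(100):
    # build (m1, m2; m3, m4) with det = 1 and m1 - m4 = m3 - m2,
    # i.e. the situation at a Dirichlet eigenvalue
    m2, m3 = rand_c(), rand_c()
    a = m3 - m2                              # prescribed value of m1 - m4
    s = cmath.sqrt(a**2 + 4 + 4 * m2 * m3)   # then m1 + m4 is forced by det = 1
    m1, m4 = (s + a) / 2, (s - a) / 2
    assert abs(m1 * m4 - m2 * m3 - 1) < 1e-9          # Wronskian condition
    Delta, delta = m1 + m4, m2 + m3
    # identity from the proof: Delta^2 - 4 = (m1 - m4)^2 + 4 m2 m3
    assert abs(Delta**2 - 4 - ((m1 - m4)**2 + 4 * m2 * m3)) < 1e-9
    assert abs(Delta**2 - 4 - delta**2) < 1e-9        # conclusion of Lemma 3.16
```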
By Theorem 2.19, the values of \({\dot{\Delta }}\) on the lines connecting \({\dot{\lambda }}^i_n\) and \(\mu ^i_n\) are \({\mathcal {O}}(1)\) as \(|n|\rightarrow \infty \) uniformly on B. Moreover, \({\dot{\lambda }}^i_n - \mu ^i_n=\ell ^{\infty ,1}_n\) uniformly on B by the first assertion of this theorem concerning the critical points and Theorem 3.5. Hence, we infer from (3.24) and (3.25) that $$\begin{aligned} \Delta ({\dot{\lambda }}^i_n) = 2(-1)^n + \ell ^{\infty ,1}_n \end{aligned}$$ uniformly on B. Furthermore, since \({\dot{\Delta }}({\dot{\lambda }}^i_n)=0\) by definition, we obtain $$\begin{aligned} \Delta (\lambda ^{i,\pm }_n) = \Delta ({\dot{\lambda }}^i_n) + (\lambda ^{i,\pm }_n - {\dot{\lambda }}^i_n)^2 \int ^1_0 (1-t) \ddot{\Delta }(t \lambda ^{i,\pm }_n + (1-t) {\dot{\lambda }}^i_n) \,\mathrm d t. \end{aligned}$$ By recalling that \(\Delta (\lambda ^{i,\pm }_n)=2(-1)^n\) for all sufficiently large |n|, cf. Remark 3.15, we deduce from (3.26) and (3.27) that $$\begin{aligned} (\lambda ^{i,\pm }_n - {\dot{\lambda }}^i_n)^2 \int ^1_0 (1-t) \ddot{\Delta }(t \lambda ^{i,\pm }_n + (1-t) {\dot{\lambda }}^i_n) \, \mathrm d t = \ell ^{\infty ,1}_n \end{aligned}$$ uniformly on B. Using Cauchy's estimate and Theorem 2.7, we find $$\begin{aligned} \ddot{\Delta }(\lambda ) = -32 \lambda ^2 \cos 2 \lambda ^2 + {\mathcal {O}}\big (|\lambda |\, \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\big ) \end{aligned}$$ uniformly on B as \(|\lambda |\rightarrow \infty \). Hence, for a bi-infinite sequence \((z_n)_{n\in \mathbb {Z}}\) whose entries \(z_n\) remain asymptotically in the discs \(D^i_n\) (\(i=1\) or \(i=2\)) we have \(\ddot{\Delta }(z_n) = -32 z_n^2 \cos 2 z_n^2 + {\mathcal {O}}(|z_n| )\). By the Counting Lemmas, \((t \lambda ^{i,\pm }_n + (1-t) {\dot{\lambda }}^i_n)=\ell ^{\infty ,1/2}_n\) uniformly for \(t\in [0,1]\) and \(\psi \in B\). 
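The two integral identities (3.25) and (3.27) used in this proof are instances of the fundamental theorem of calculus and of Taylor's formula with integral remainder. As a numerical sketch (not part of the paper), both can be verified for the zero potential, where \(\Delta (\lambda ,0)=2\cos 2\lambda ^2\), \(\dot{\Delta }(\lambda ,0)=-8\lambda \sin 2\lambda ^2\), and hence \(\ddot{\Delta }(\lambda ,0)=-8\sin 2\lambda ^2 - 32\lambda ^2\cos 2\lambda ^2\):

```python
import cmath, math

Delta = lambda z: 2 * cmath.cos(2 * z ** 2)
dDelta = lambda z: -8 * z * cmath.sin(2 * z ** 2)
ddDelta = lambda z: -8 * cmath.sin(2 * z ** 2) - 32 * z ** 2 * cmath.cos(2 * z ** 2)

def segment_integral(f, a, b, weight=lambda t: 1.0, n=4000):
    """Midpoint rule for the integral over [0,1] of weight(t)*f(t*b + (1-t)*a)."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        total += weight(t) * f(t * b + (1 - t) * a)
    return total / n

# identity (3.25): Delta(b) - Delta(a) = (b - a) * int_0^1 dDelta(t b + (1-t) a) dt
a, b = 1.2 + 0.1j, 1.4 - 0.2j
lhs = Delta(b) - Delta(a)
rhs = (b - a) * segment_integral(dDelta, a, b)
assert abs(lhs - rhs) < 1e-6

# identity (3.27) at a critical point crit, where dDelta(crit) = 0
crit = math.sqrt(math.pi / 2)
lam = crit + 0.05 - 0.02j
taylor = Delta(crit) + (lam - crit) ** 2 * segment_integral(
    ddDelta, crit, lam, weight=lambda t: 1 - t)
assert abs(Delta(lam) - taylor) < 1e-8
```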
Thus, the absolute value of the integral in (3.28) is \(\Theta (|n|)\) as \(|n|\rightarrow \infty \), which means that it grows precisely as fast as |n|. As a consequence, \((\lambda ^{i,\pm }_n - {\dot{\lambda }}^i_n)^2= \ell ^{\infty ,2}_n\) uniformly on B; in other words, $$\begin{aligned} \lambda ^{i,\pm }_n - {\dot{\lambda }}^i_n = \ell ^{\infty ,1}_n \end{aligned}$$ uniformly on B. Since \({\dot{\lambda }}^i_n={\dot{\lambda }}^i_n(0)+\ell ^{\infty ,1}_n = \lambda ^{i,\pm }_n(0)+\ell ^{\infty ,1}_n\) uniformly on B, we conclude from (3.29) that \(\lambda ^{i,\pm }_n = \lambda ^{i,\pm }_n(0) + \ell ^{\infty ,1}_n\) uniformly on B, hence the periodic eigenvalues satisfy the asymptotic estimate of the first assertion of this theorem. This completes the proof. \(\square \) Let W be an open neighborhood of the zero potential such that Theorem 3.17 (2) is satisfied. Then the map $$\begin{aligned} W \rightarrow \ell ^{\infty ,1}_\mathbb {C}, \quad \psi \mapsto \big (\lambda ^{i,\pm }_n(\psi ) - \lambda ^{i,\pm }_n(0)\big ), \quad i=1,2 \end{aligned}$$ is continuous at \(\psi =0\), that is, for every sequence \((\psi _k)_{k\in \mathbb {N}}\) in W with \(\psi _k\rightarrow 0\) as \(k\rightarrow \infty \), it holds that $$\begin{aligned} \lim _{k\rightarrow \infty } \bigg (\sup _{n\in \mathbb {Z}} (1+n^2)^\frac{1}{2} \big |\lambda ^{i,\pm }_n(\psi _k) - \lambda ^{i,\pm }_n(0)\big | \bigg ) = 0, \quad i=1,2. \end{aligned}$$ However, as a consequence of the lexicographical ordering, the periodic eigenvalues \(\lambda ^{i,-}_n\) and \(\lambda ^{i,+}_n\) do not define analytic functions from W to \(D^i_n\), \(n\in \mathbb {Z}\setminus \{0\}\). In order to formulate a version of Theorem 3.17 (3) for periodic eigenvalues, one can consider suitable subsets of W where \(\lambda ^{i,-}_n\) and \(\lambda ^{i,+}_n\) are isolated from each other—the situation near a potential \(\psi \in W\) with a double periodic eigenvalue is cumbersome. 
Section 4 addresses these questions for potentials of so-called real and imaginary type. For such potentials the fundamental matrix solution possesses additional symmetries, which implies that the two periodic eigenvalues \(\lambda ^{i,\pm }_n\) in \(D^i_n\) are either real or form a complex conjugate pair. In both cases, they are connected by an analytic arc along which the discriminant is real-valued. The asymptotic localization of Dirichlet eigenvalues, Neumann eigenvalues, periodic eigenvalues, and critical points provided by Theorems 3.5, 3.9 and 3.17 can be slightly improved. Indeed, it is straightforward to adapt the proofs of these theorems to obtain, for each \(p>2\), $$\begin{aligned} \mu ^i_n&= \mu ^i_n(0) + \ell ^{p,1/2}_n, \\ \nu ^i_n&= \mu ^i_n(0) + \ell ^{p,1/2}_n, \\ \lambda ^{i,\pm }_n&= \mu ^i_n(0) + \ell ^{p,1/2}_n, \\ {\dot{\lambda }}^i_n&= \mu ^i_n(0) + \ell ^{p,1/2}_n, \end{aligned}$$ uniformly on bounded subsets of \(\mathrm {X}\), \(i=1,2\). Studies of other related spectral problems—see e.g. [8, 17, 19, 30]—suggest that these localization results can be further sharpened if attention is restricted to subspaces of more regular potentials.

4 Potentials of real and imaginary type

This section considers potentials of so-called real and imaginary type. These subspaces of the space \(\mathrm {X}\) of general t-periodic potentials consist precisely of those potentials which are relevant for the x-evolution of the t-periodic defocusing NLS (real type) and focusing NLS (imaginary type). Our main results are Theorems 4.4 and 4.5, which state that for sufficiently small real and imaginary type potentials \(\psi \), the corresponding periodic eigenvalues \(\lambda ^{1,-}_n(\psi )\) and \(\lambda ^{1,+}_n(\psi )\) are connected by analytic arcs in the complex plane for each \(n\in \mathbb {Z}\setminus \{0\}\). These arcs form a subset of \(\{\lambda \in \mathbb {C}:\Delta (\lambda ,\psi )\in \mathbb {R}\}\). 
These results are needed for the establishment of local Birkhoff coordinates and shall serve as a solid foundation for future investigations in this direction. Theorem 4.5 is inspired by [20, Proposition 2.6], which establishes similar properties for the x-periodic potentials of imaginary type for the focusing NLS. Our proof makes use of the ideas and techniques of [20]. We refer to [21] for further related results. For potentials \(\psi \in \mathrm {X}\), we define $$\begin{aligned} \psi ^* :=P\bar{\psi } = (\bar{\psi }^2, \bar{\psi }^1, \bar{\psi }^4, \bar{\psi }^3), \end{aligned}$$ $$\begin{aligned} P :=\begin{pmatrix} \sigma _1 &{}\quad 0 \\ 0 &{}\quad \sigma _1 \end{pmatrix}, \qquad \sigma _1 = \begin{pmatrix} 0 &{}\quad 1 \\ 1 &{}\quad 0 \end{pmatrix}. \end{aligned}$$ We say that a potential \(\psi \) of the Zakharov–Shabat t-part (1.3) is of real type if \(\psi ^* = \psi \). In this case, \(\psi ^2 = \bar{\psi }^1\) and \(\psi ^4 = \bar{\psi }^3\), that is, \(\psi = (q_0 + \mathrm {i}p_0, q_0 - \mathrm {i}p_0, q_1 + \mathrm {i}p_1, q_1 - \mathrm {i}p_1)\) for some real-valued functions \(\{q_j, p_j\}_{j=0}^1\). Hence a potential is of real type iff all coefficients of the corresponding AKNS system are real-valued. The subspace of \(\mathrm {X}\) of all real type potentials will be denoted by $$\begin{aligned} \mathrm {X}_{{\mathcal {R}}} :=\{\psi \in \mathrm {X}\, | \, \psi ^* = \psi \}. \end{aligned}$$ Note that this is a real subspace of \(\mathrm {X}\), not a complex one; it consists of those potentials that are relevant for the defocusing NLS. 
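As a quick numerical sketch (with sample numeric values standing in for the functions; not part of the paper), one can check the pointwise conditions \(\psi ^* = P\bar{\psi } = \psi \) for real type and \(\psi ^* = -\psi \) for imaginary type, together with the conjugation symmetry \(\Delta (\bar{\lambda },0)=\overline{\Delta (\lambda ,0)}\) of the zero-potential discriminant, the zero potential being of both types:

```python
import cmath, random

def star(psi):
    """psi* = P conj(psi) = (conj psi2, conj psi1, conj psi4, conj psi3)."""
    c = [z.conjugate() for z in psi]
    return (c[1], c[0], c[3], c[2])

# real type: psi = (q0 + i p0, q0 - i p0, q1 + i p1, q1 - i p1), q_j, p_j real
q0, p0, q1, p1 = 0.7, -1.3, 0.2, 2.5
psi = (q0 + 1j * p0, q0 - 1j * p0, q1 + 1j * p1, q1 - 1j * p1)
assert star(psi) == psi

# imaginary type: psi* = -psi, i.e. psi = (a, -conj(a), b, -conj(b))
a, b = 0.9 + 0.4j, 0.1 + 0.3j
phi = (a, -a.conjugate(), b, -b.conjugate())
assert star(phi) == tuple(-z for z in phi)

# the zero potential lies in both subspaces; its discriminant
# Delta(lam, 0) = 2 cos 2 lam^2 satisfies Delta(conj lam) = conj(Delta(lam))
random.seed(0)
Delta0 = lambda lam: 2 * cmath.cos(2 * lam ** 2)
for _ in range(100):
    lam = complex(random.uniform(-1.5, 1.5), random.uniform(-1.5, 1.5))
    assert abs(Delta0(lam.conjugate()) - Delta0(lam).conjugate()) < 1e-9
```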
We can write the Zakharov–Shabat t-part (1.3) as $$\begin{aligned} \bigg (-\mathrm {i}\sigma _3\partial _t + 2\lambda ^2\mathrm {I}+ \begin{pmatrix} \psi ^1 \psi ^2 &{} \mathrm {i}(2\lambda \psi ^1 + \mathrm {i}\psi ^3) \\ -2\mathrm {i}\lambda \psi ^2 - \psi ^4 &{} \psi ^1 \psi ^2 \end{pmatrix}\bigg )\phi = 0, \end{aligned}$$ or equivalently $$\begin{aligned} L(\psi )\phi = R(\lambda ,\psi ) \phi \end{aligned}$$ with $$\begin{aligned} L(\psi )&:=-\mathrm {i}\sigma _3\partial _t + \begin{pmatrix} \psi ^1 \psi ^2 &{} -\psi ^3 \\ - \psi ^4 &{} \psi ^1 \psi ^2 \end{pmatrix}, \qquad R(\lambda ,\psi ) :=-2\lambda ^2 \mathrm {I}- 2\mathrm {i}\lambda \begin{pmatrix} 0&{} \psi ^1 \\ -\psi ^2 &{} 0 \end{pmatrix}. \end{aligned}$$ For \(v = (v_1, v_2)\) and \(w = (w_1, w_2)\), let $$\begin{aligned} \langle v, w \rangle = \int _0^1 (v_1 \bar{w}_1 + v_2 \bar{w}_2) \, \mathrm d t. \end{aligned}$$ If the eigenfunctions v, w lie in the periodic domain \({\mathcal {D}}_{\mathrm P}\), we can integrate by parts without boundary terms and find that $$\begin{aligned}&\langle w, L(\psi ) v \rangle = \; \langle L(\psi ^*)w, v\rangle . \end{aligned}$$ Therefore, if the potential \(\psi \) is of real type and v is a periodic eigenfunction with eigenvalue \(\lambda \), $$\begin{aligned} \langle R(\lambda ,\psi )v, v\rangle = \langle L(\psi )v, v\rangle = \langle v, L(\psi )v\rangle = \langle v, R(\lambda ,\psi )v\rangle \end{aligned}$$ and thus we find that $$\begin{aligned} \mathfrak {I}\lambda = 0 \quad \text {or} \quad \mathfrak {R}\lambda = \frac{\mathfrak {I}\big ( \int ^1_0 \psi ^1 v_2\bar{v}_1 \, \mathrm d t \big )}{\langle v, v\rangle }. \end{aligned}$$ According to the Counting Lemma, the periodic eigenvalues of type \(\lambda ^{2,\pm }_n(\psi )\) for arbitrary \(\psi \in \mathrm {X}\) necessarily possess non-vanishing imaginary parts for sufficiently large |n|. In analogy with the x-part (1.2), one might expect that \(\mathfrak {I}\lambda ^{1,\pm }_n(\psi )=0\) for real type potentials. 
However, we will see in Sect. 5 that this is not the case: there are single exponential potentials of real type for which some \(\lambda ^{1,\pm }_n\) are nonreal, cf. Fig. 5. Let \(\psi \) be of real type and let \(\lambda \in \mathbb {R}\). Then \(m_4=\bar{m}_1\) and \(m_3={\bar{m}}_2\). In particular, \(\Delta \) is real-valued on \(\mathbb {R}\times \mathrm {X}_{{\mathcal {R}}}\). Moreover, if a solution v of \(L(\psi ) v = R(\lambda ,\psi )v\) is real in AKNS-coordinates, then \(v=\sigma _1 {\bar{v}}\). Since \(\psi =\psi ^*\), the AKNS coordinates \((p_j,q_j)\), \(j=0,1\), are real. If in addition \(\lambda \in \mathbb {R}\), the system (2.3) has real coefficients, so its fundamental solution K is real-valued. The relation \(M=T K T^{-1}\), cf. (2.11), then implies that \(m_4={\bar{m}}_1\) and \(m_3={\bar{m}}_2\). To prove the second claim, we note that \({\bar{T}} = \sigma _1 T\) and hence $$\begin{aligned} {\bar{M}} = {\bar{T}} K \bar{T}^{-1} = \sigma _1 T K T^{-1} \sigma _1 = \sigma _1 M \sigma _1. \end{aligned}$$ If v is real in AKNS coordinates, it has real initial data \(v_0\) and \(v=M T v_0\). Therefore $$\begin{aligned} {\bar{v}} = {\bar{M}} {\bar{T}} v_0 = \sigma _1 M T v_0 = \sigma _1 v. \end{aligned}$$ We say that a potential \(\psi \in \mathrm {X}\) is of imaginary type if \(\psi ^*=-\psi \). The subspace $$\begin{aligned} \mathrm {X}_{{\mathcal {I}}} :=\{\psi \in \mathrm {X}:\psi ^*=-\psi \} \end{aligned}$$ of potentials of imaginary type is relevant for the focusing NLS. For \(\psi \in \mathrm {X}_{{\mathcal {R}}}\) the fundamental solution M satisfies $$\begin{aligned} M(t,{\bar{\lambda }},\psi )= \sigma _1 \overline{M(t,\lambda ,\psi )} \sigma _1, \quad \lambda \in \mathbb {C}, \; t\ge 0; \end{aligned}$$ if \(\psi \in \mathrm {X}_{{\mathcal {I}}}\) then M satisfies $$\begin{aligned} M(t,{\bar{\lambda }},\psi )= \sigma _1 \sigma _3 \overline{M(t,\lambda ,\psi )} \sigma _3 \sigma _1 , \quad \lambda \in \mathbb {C}, \; t \ge 0. 
\end{aligned}$$ In particular, $$\begin{aligned} \Delta ({\bar{\lambda }},\psi ) = \overline{\Delta (\lambda ,\psi )} \quad \text {and} \quad {\dot{\Delta }} ({\bar{\lambda }},\psi ) = \overline{{\dot{\Delta }} (\lambda ,\psi )} \end{aligned}$$ for all \(\psi \in \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}\) and \(\lambda \in \mathbb {C}\), so that \(\Delta \) and \({\dot{\Delta }}\) are real-valued on \(\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}} \cup \mathrm {X}_{{\mathcal {I}}})\). Let us first assume that \(\psi \in \mathrm {X}_{{\mathcal {R}}}\) and \(\lambda \in \mathbb {C}\). Then a computation using (4.2) shows that $$\begin{aligned} L(\psi ) v = R (\lambda ,\psi ) v \quad \iff \quad L(\psi ) v^* = R ({\bar{\lambda }},\psi ) v^*, \end{aligned}$$ where \( v^* :=\sigma _1 \bar{v} = ({\bar{v}}_2, {\bar{v}}_1)\). The symmetry (4.3) follows from uniqueness of the solution of (4.1) and the initial condition \(M(0, \lambda , \psi ) = I\). Evaluation of (4.3) at \(t = 1\) gives (4.5). This finishes the proof for the case of real type potentials. If \(\psi \in \mathrm {X}_{{\mathcal {I}}}\), we instead have $$\begin{aligned} L(\psi ) v = R(\lambda ,\psi ) v \quad \iff \quad L(\psi ) {\hat{v}} = R(\bar{\lambda },\psi ) {\hat{v}} \end{aligned}$$ where \({\hat{v}}:=\sigma _1 \sigma _3 {\bar{v}}=(-{\bar{v}}_2,{\bar{v}}_1)\), which leads to (4.4) and (4.5). \(\square \) There exists a neighborhood W of the zero potential in \(\mathrm {X}\) such that for each \(\psi \in W \cap ( \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and each \(n\in \mathbb {Z}\), $$\begin{aligned} \{\lambda \in \mathbb {C}:{\dot{\Delta }}(\lambda ,\psi )=0 \} \cap D^1_n = \{ {\dot{\lambda }}^1_n (\psi ) \} \quad \text {and} \quad {\dot{\lambda }}^1_n (\psi )\in \mathbb {R}. 
\end{aligned}$$ We already know from Theorem 3.17 that there exists a neighborhood W of the zero potential such that, for all general potentials \(\psi \in W\) and all \(n\in \mathbb {Z}\), $$\begin{aligned} \{\lambda \in \mathbb {C}:{\dot{\Delta }}(\lambda ,\psi )=0 \} \cap D^1_n = \{ {\dot{\lambda }}^1_n (\psi ) \}. \end{aligned}$$ Due to the symmetry (4.5) we infer that, for all potentials \(\psi \in W \cap ( \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and \(n\in \mathbb {Z}\), $$\begin{aligned} 0 = {\dot{\Delta }} ({\dot{\lambda }}^1_n (\psi ),\psi ) = \overline{{\dot{\Delta }} ({\dot{\lambda }}^1_n (\psi ),\psi )} = {\dot{\Delta }} (\bar{{\dot{\lambda }}}^1_n (\psi ),\psi ). \end{aligned}$$ Since \({\dot{\lambda }}^1_n (\psi )\) is the only root of \(\dot{\Delta }(\cdot ,\psi )\) in \(D^1_n\), we conclude that \({\dot{\lambda }}^1_n (\psi )\) is real. \(\square \) There exists a neighborhood W of the zero potential in \(\mathrm {X}\) and a sequence of nondegenerate rectangles $$\begin{aligned} R^{\varepsilon ,\delta }_n :=\big \{ \lambda \in \mathbb {C}:|\mathfrak {R}\lambda -{\dot{\lambda }}^1_n(0)|<\delta _n, \; |\mathfrak {I}\lambda | <\varepsilon _n \big \}, \quad n\in \mathbb {Z}, \end{aligned}$$ with \(\varepsilon , \delta \in \ell ^{\infty ,1/2}_\mathbb {R}\), such that for all \(\psi \in W \cap (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and every \(n\in \mathbb {Z}\setminus \{0\}\), $$\begin{aligned} \{\lambda \in \mathbb {C}:\Delta (\lambda ,\psi )\in \mathbb {R}\} \cap R^{\varepsilon ,\delta }_n = \gamma _n(\psi ) \cup (R^{\varepsilon ,\delta }_n\cap \mathbb {R}), \end{aligned}$$ where the subset \(\gamma _n(\psi )\subseteq \mathbb {C}\) forms an analytic arc transversal to the real axis, which crosses the real line in the critical point \(\dot{\lambda }^1_n(\psi )\) of \(\Delta (\cdot ,\psi )\). 
These arcs are symmetric under reflection in the real axis and the orthogonal projection of \(\gamma _n(\psi )\) to the imaginary axis is a real analytic diffeomorphism onto its image. We refer to Fig. 3a for an illustration of the analytic arc \(\gamma _n(\psi )\) within the rectangle \(R^{\varepsilon ,\delta }_n\) centered at the critical point \({\dot{\lambda }}^1_n(0)\) of the discriminant \(\Delta (\cdot ,0)\). [Fig. 3. (a) An illustration of the path \(\gamma _n\) within the rectangle \(R^{\varepsilon ,\delta }_n\), which is contained in the disc \(D^1_n\); the critical points \({\dot{\lambda }}^1_n={\dot{\lambda }}^1_n(\psi )=\gamma _n\cap \mathbb {R}\) and \({\dot{\lambda }}^1_n(0)\) are marked with dots. (b) A plot of the zero set of \(\Delta _2(\cdot ,0)\) in the complex \(\lambda \)-plane; the boundaries of the discs \(D^i_n\), \(i=1,2\), are indicated by dashed circles, and the periodic eigenvalues (which coincide with the critical points of \(\Delta (\cdot ,0)\) and the Dirichlet and Neumann eigenvalues) are indicated by dots.] Before proving Theorem 4.4, we state an important consequence of Theorem 4.4 and Theorem 3.17: namely, that for all small enough real and imaginary type potentials, the periodic eigenvalues \(\lambda ^{1,-}_n(\psi )\) and \(\lambda ^{1,+}_n(\psi )\) are connected by an analytic arc along which the discriminant is real-valued. More precisely, we will deduce the following result. There exists a neighborhood \(W^*\) of \(\psi =0\) in \(\mathrm {X}\) such that for each \(\psi \in W^* \cap (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and each \(n\in \mathbb {Z}\setminus \{0\}\) there exists an analytic arc \(\gamma ^*_n\equiv \gamma ^*_n(\psi )\subseteq \mathbb {C}\) connecting the two periodic eigenvalues \(\lambda ^{1,\pm }_n \equiv \lambda ^{1,\pm }_n(\psi )\).
Qualitatively we distinguish two different cases: either (i) \(\gamma ^*_n=[\lambda ^{1,-}_n,\lambda ^{1,+}_n]\subseteq \mathbb {R}\) or (ii) \(\gamma ^*_n\) is transversal to the real line, symmetric under reflection in the real axis, and the orthogonal projection of \(\gamma ^*_n\) to the imaginary axis is a real analytic diffeomorphism onto its image. In both cases, it holds that \(\Delta (\gamma ^*_n,\psi ) \subseteq [-2,2]\), \(\overline{\gamma ^*_n} = \gamma ^*_n\) and \({\dot{\lambda }}^1_n(\psi ) \in \gamma ^*_n \cap \mathbb {R}\); moreover, for a parametrization by arc length \(\rho _n \equiv \rho _n(s)\) of \(\gamma ^*_n\) with \(\rho _n(0)={\dot{\lambda }}^1_n(\psi )\), the function \(s\mapsto \Delta (\rho _n(s),\psi )\) is strictly monotone along the two connected components of \(\gamma ^*_n\setminus \{{\dot{\lambda }}^1_n(\psi ) \}\). (We include the possible scenario \(\lambda ^{1,-}_n(\psi )=\lambda ^{1,+}_n(\psi )={\dot{\lambda }}^1_n(\psi )\), where the set \(\gamma ^*_n(\psi )\) consists of the single element \({\dot{\lambda }}^1_n(\psi )\), as a degenerate special case.) The remainder of this section is devoted to the proofs of Theorems 4.4 and 4.5. We follow closely the ideas and methods of the proof of [20, Proposition 2.6], a related result for the x-periodic focusing NLS. The proof is based on an application of the implicit function theorem for real analytic mappings in an infinite dimensional setting. This level of generality is necessary in order to treat the arcs \(\gamma _n\) in a uniform way. Let us first briefly discuss the strategy of the proof. Writing \(\lambda = x + \mathrm {i}y\) with \(x,y \in \mathbb {R}\), we split \(\Delta (\lambda ;\psi ) \equiv \Delta (x,y;\psi )\) into its real and imaginary parts and write \(\Delta = \Delta _1 + \mathrm {i}\Delta _2\) with $$\begin{aligned} \Delta _1(x,y;\psi ) :=\mathfrak {R}\big (\Delta (\lambda ;\psi )\big ), \quad \Delta _2(x,y;\psi ) :=\mathfrak {I}\big (\Delta (\lambda ;\psi )\big ).
\end{aligned}$$ The problem is then transformed into the study of the zero level set of \(\Delta _2 (\lambda ;\psi )=\Delta _2 (x,y;\psi )\). By Proposition 4.2, \(\Delta _2(x, 0; \psi )=0\) for any \(x \in \mathbb {R}\) and \(\psi \in \mathrm {X}_{{\mathcal {R}}} \cup \mathrm {X}_{{\mathcal {I}}}\). Therefore, following [20], we introduce the function $$\begin{aligned} \begin{aligned} \tilde{F} :\mathbb {R}\times (\mathbb {R}\setminus \{0\}) \times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}) \rightarrow \mathbb {R}, \\ (x,y,\psi ) \mapsto {\tilde{F}}(x,y;\psi ) :=\frac{\Delta _2(x,y;\psi )}{y}, \end{aligned} \end{aligned}$$ which has the same zeros on \(\mathbb {R}\times (\mathbb {R}\setminus \{0\}) \times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) as \(\Delta _2\). We observe that \({\tilde{F}}\) has a real analytic extension $$\begin{aligned} F :\mathbb {R}\times \mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}) \rightarrow \mathbb {R}. \end{aligned}$$ To see this, we recall that \(\Delta \) is analytic on \(\mathbb {C}\times \mathrm {X}\) and real valued on \(\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\). Hence \(\Delta _2\) vanishes on \(\mathbb {R}\times \{0\} \times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and is real analytic there. Thus \(\Delta _2(x,y;\psi )/y\) admits a Taylor series representation at \(y=0\), which converges absolutely to the analytic extension F of \({\tilde{F}}\) locally near \(y=0\). For \(\psi \in \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}\) and real sequences \(u=(u_n)_{n\in \mathbb {Z}}\) and \(v=(v_n)_{n\in \mathbb {Z}}\), we define the map $$\begin{aligned} {\mathcal {F}} = ({\mathcal {F}}_n)_{n\in \mathbb {Z}}, \quad {\mathcal {F}}_n(u,v;\psi ) :=F({\dot{\lambda }}^1_n + u_n,v_n;\psi ). 
\end{aligned}$$ For the zero potential and the zero sequence, both denoted by 0, we calculate $$\begin{aligned} {\mathcal {F}}(0,0;0) = (-8 {\dot{\lambda }}^1_n \sin (2({\dot{\lambda }}^1_n)^2) )_{n\in \mathbb {Z}} = (-8 {\dot{\lambda }}^1_n \sin (|n|\pi ) )_{n\in \mathbb {Z}} = 0. \end{aligned}$$ In order to determine \(\frac{\partial {\mathcal {F}}}{\partial u}\) at the origin (0, 0; 0), we first observe that \(\frac{\partial {\mathcal {F}}}{\partial u}\) has diagonal form because \({\mathcal {F}}_j\) is independent of \(u_n\) for \(j\in \mathbb {Z}\) with \(j\ne n\). On the diagonal, we obtain $$\begin{aligned} \frac{\partial {\mathcal {F}}_n}{\partial u_n} (u,v;0)&= \frac{\partial }{\partial u_n} \bigg [ \frac{\Delta _2({\dot{\lambda }}^1_n +u_n,v_n;0)}{v_n} \bigg ] \\&= - \frac{2}{v_n} \bigg \{ 4({\dot{\lambda }}^1_n + u_n) \cos [2(({\dot{\lambda }}^1_n + u_n)^2-v^2_n)] \sinh [4({\dot{\lambda }}^1_n + u_n) v_n] \\&\quad + \sin [2(({\dot{\lambda }}^1_n + u_n)^2-v^2_n)] \, 4 v_n \cosh [4({\dot{\lambda }}^1_n + u_n) v_n] \bigg \}, \end{aligned}$$ so that $$\begin{aligned} \frac{\partial {\mathcal {F}}_n}{\partial u_n} (0,0;0) = - 32 ({\dot{\lambda }}^1_n)^2 \cos [2({\dot{\lambda }}^1_n)^2], \end{aligned}$$ and therefore $$\begin{aligned} \frac{\partial \mathcal F}{\partial u} (0,0;0) = \mathrm {diag}\big ( 16 \pi |n| \, (-1)^{n+1} \big )_{n\in \mathbb {Z}}. \end{aligned}$$ Consequently, the right-hand side of (4.10) is at least formally bijective in a set-theoretic and algebraic sense, for example as a mapping from the linear space of real sequences \(\{u=(u_n)_{n\in \mathbb {Z}} :\mathbb {Z}\rightarrow \mathbb {R}\,| \, u_0=0 \}\) to itself. In order to give these formal considerations a rigorous justification, we need to consider appropriate subspaces of sequences equipped with suitable topologies. Due to the quadratic nature of the underlying generalized eigenvalue problem, the right choice of spaces is quite a delicate issue.
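The diagonal entries of \(\frac{\partial \mathcal F}{\partial u}(0,0;0)\) computed above can be checked numerically for the zero potential, where the discriminant is explicitly \(\Delta (\lambda ,0)=2\cos 2\lambda ^2\) and \({\dot{\lambda }}^1_n(0)=\sqrt{|n|\pi /2}\) for \(n>0\). The following sketch (the names `Delta`, `F` and `dF_du` are ours, and the derivative is approximated by finite differences rather than computed exactly) compares the finite-difference value with the closed form \(16\pi |n|(-1)^{n+1}\):

```python
import cmath
import math

def Delta(lam):
    # discriminant for the zero potential: Delta(lambda, 0) = 2 cos(2 lambda^2)
    return 2 * cmath.cos(2 * lam**2)

def F(u, v, n):
    # F(u, v) = Delta_2(lam_n + u, v; 0)/v for v != 0, cf. (4.7) and (4.9)
    lam_n = math.sqrt(n * math.pi / 2)  # critical point of Delta(., 0), n > 0
    return Delta(lam_n + u + 1j * v).imag / v

def dF_du(n, h=1e-5, v=1e-3):
    # central difference in u at (0, v); small v approximates the limit v -> 0
    return (F(h, v, n) - F(-h, v, n)) / (2 * h)

for n in (1, 2, 3):
    closed_form = 16 * math.pi * n * (-1)**(n + 1)
    print(n, dF_du(n), closed_form)
```

The agreement up to the discretization error illustrates why the diagonal of \(\frac{\partial \mathcal F}{\partial u}(0,0;0)\) grows like \(|n|\), which is what forces the weighted sequence spaces below.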
In contrast to the related x-periodic problem for the focusing NLS, see [20], we cannot rely on \(\ell ^\infty \) sequences, but need to make use of the weighted \(\ell ^p\)-based spaces of \(\ell ^{p,s}\) sequences, which we introduced earlier in (3.5). The establishment of the necessary bounds for the mapping \({\mathcal {F}}\) between these spaces turns out to be highly nontrivial, cf. Lemma 4.8. Let us discuss the basic properties of the \(\ell ^{p,s}_\mathbb {K}\) spaces, where \(\mathbb {K}=\mathbb {R}\) or \(\mathbb {K}=\mathbb {C}\), which appear in the formulation of Theorem 4.4 and Propositions 3.5, 3.9 and 3.17. For \(1\le p \le \infty \) and \(s\in \mathbb {R}\), we consider the linear spaces $$\begin{aligned} {\ell }^{p,s}_\mathbb {K}:=\Big \{ u= (u_n)_{n\in \mathbb {Z}} \; \big | \; \big ((1+n^2)^{\frac{s}{2}} u_n\big )_{n\in \mathbb {Z}} \in \ell ^p_\mathbb {K}\Big \} \end{aligned}$$ endowed with the norms $$\begin{aligned}&|u|_{p,s} \; :=\left( \sum ^\infty _{n=-\infty } (1+n^2)^{\frac{s p}{2}} |u_n|^p \right) ^\frac{1}{p}, \\&\quad 1\le p <\infty ; \quad |u|_{\infty ,s} :=\sup _{n\in \mathbb {Z}} \big \{ (1+n^2)^{\frac{s}{2}} |u_n| \big \}. \end{aligned}$$ One easily checks that these spaces are Banach spaces. Furthermore, defining $$\begin{aligned} \Lambda _n :=(1+n^2)^{\frac{1}{2}}, \quad n\in \mathbb {Z}, \end{aligned}$$ the map $$\begin{aligned} \Lambda ^r :\ell ^{p,s}_\mathbb {K}\rightarrow \ell ^{p,s-r}_\mathbb {K}, \quad u_n \mapsto \Lambda ^r_n u_n, \end{aligned}$$ is an isometric isomorphism for each \(r\in \mathbb {R}\). In particular, \(\Lambda ^s\) maps \(\ell ^{p,s}_\mathbb {K}\) isometrically onto \(\ell ^p_\mathbb {K}\). For \(s\in \mathbb {R}\) and \(1<p<\infty \), the topological dual of \(\ell ^{p,s}_\mathbb {K}\) is isometrically isomorphic to \(\ell ^{q,-s}_\mathbb {K}\), i.e., \((\ell ^{p,s}_\mathbb {K})' \cong \ell ^{q,-s}_\mathbb {K}\), where q is the Hölder conjugate of p defined by \(1/p+1/q=1\).
The isomorphism is given by the dual pairing $$\begin{aligned} \langle \cdot ,\cdot \rangle _{p,s;q,-s} :\ell ^{p,s}_\mathbb {K}\times \ell ^{q,-s}_\mathbb {K}\rightarrow \mathbb {K}, \quad \langle u,v \rangle _{p,s;q,-s} :=\sum ^\infty _{n=-\infty } u_n v_n, \end{aligned}$$ and can be deduced directly from the well-known \(\ell ^p\)-\(\ell ^q\)-duality. Henceforth, we will identify the dual of \(\ell ^{p,s}_\mathbb {K}\) with \(\ell ^{q,-s}_\mathbb {K}\) by means of \(\langle \cdot ,\cdot \rangle _{p,s;q,-s}\). In particular, \(\ell ^{p,s}_\mathbb {K}\) is a reflexive Banach space for \(1<p<\infty \). We will also use the closed subspaces $$\begin{aligned} {\check{\ell }}^{p,s}_\mathbb {K}:=\{u \in \ell ^{p,s}_\mathbb {K}:u_0=0 \}; \end{aligned}$$ for \(1<p<\infty \), their topological duals are given by $$\begin{aligned} ({\check{\ell }}^{p,s}_\mathbb {K})' \cong \check{\ell }^{q,-s}_\mathbb {K}. \end{aligned}$$ The linear operator T defined by $$\begin{aligned} T :{\check{\ell }}^{p,s}_\mathbb {K}\rightarrow \check{\ell }^{p,s-1}_\mathbb {K}, \quad u_n \mapsto T_n u_n :=|n| u_n, \end{aligned}$$ is a topological isomorphism. Likewise, \(T^r:u_n \mapsto T^r_n u_n= |n|^r u_n\) is an isomorphism \({\check{\ell }}^{p,s}_\mathbb {K}\rightarrow {\check{\ell }}^{p,s-r}_\mathbb {K}\) for real r. The first part of the proof of Theorem 4.4 uses techniques from the theory of analytic maps between complex Banach spaces. We therefore review some aspects of this theory. Let \((E,\Vert \cdot \Vert _E)\), \((F,\Vert \cdot \Vert _F)\) be complex Banach spaces. Furthermore, we denote by \({\mathcal {L}} (E,F)\) the Banach space of bounded \(\mathbb {C}\)-linear operators \(E\rightarrow F\) endowed with the operator norm \(\Vert \cdot \Vert _{{\mathcal {L}}(E,F)}\), where \(\Vert L\Vert _{\mathcal L(E,F)}=\sup _{0\ne h\in E}\frac{\Vert L h\Vert _F}{\Vert h\Vert _E}<\infty \) for \(L\in {\mathcal {L}}(E,F)\).
In the special case \(F=\mathbb {C}\), we denote by \(E'={\mathcal {L}}(E,\mathbb {C})\) the topological dual space of E. Let \(O\subseteq E\) be an open subset. A map \(f :O \rightarrow F\) is called analytic or holomorphic if it is Fréchet differentiable in the complex sense at every \(u\in O\), i.e., if for each \(u\in O\) there exists a bounded linear operator \(A(u)\in {\mathcal {L}}(E,F)\) such that $$\begin{aligned} \lim _{\Vert h\Vert _E\rightarrow 0} \frac{\Vert f(u+h)-f(u)-A(u) h\Vert _F}{\Vert h\Vert _E} = 0. \end{aligned}$$ In this case we call A(u) the derivative of f at u and write \(\mathrm d f(u)\) for A(u). In the special case \(E=F=\mathbb {C}\), we simply write \(\mathrm d f(u)=f'(u)\in \mathbb {C}\cong \mathbb {C}'\). We call f weakly analytic on O if for every \(u\in O\), \(h\in E\) and \(L\in F'\) the function $$\begin{aligned} z \mapsto L f(u+z h) \end{aligned}$$ is analytic in some neighborhood of zero. We provide the basic characterization of analytic maps between complex Banach spaces in the following lemma. [18, Theorem A.4] Let E and F be complex Banach spaces, let \(O\subseteq E\) be open and let \(f:O\rightarrow F\) be a mapping. The following statements are equivalent: (i) f is analytic in O; (ii) f is weakly analytic and locally bounded on O; (iii) f is infinitely many times differentiable on O and for each \(u\in O\) the Taylor series of f at u, given by $$\begin{aligned} f(u+h) = f(u) + \mathrm d f(u) h + \frac{1}{2} \mathrm d^2 f(u) (h,h) + \cdots + \frac{1}{n!} \mathrm d^n f(u) (h,\dots ,h) + \cdots \end{aligned}$$ converges to f absolutely and uniformly in a neighborhood of u. The next lemma, also referred to as Cauchy's inequality, provides an important estimate for the multilinear map \(\mathrm d^n f(u)\). Let E and F be complex Banach spaces and let f be analytic from the open ball of radius r around u in E into F such that \(\Vert f\Vert _F \le M\) on this ball.
Then for all integers \(n\ge 0\), $$\begin{aligned} \max _{0\ne h\in E}\frac{\Vert \mathrm d^n f(u) (h,\dots ,h)\Vert _F}{\Vert h\Vert ^n_E} \le \frac{M n!}{r^n}. \end{aligned}$$ By the previous lemma, f is infinitely often differentiable at u with n-th derivative \(\mathrm d^n f(u) \in {\mathcal {L}}^n(E,F)\), the space of continuous n-linear mappings \(E\times \cdots \times E \rightarrow F\). The lemma now follows directly from the usual Cauchy inequality for holomorphic Banach space valued functions on a complex domain by considering the holomorphic map \(\varphi (z) :=f(u+ z h)\) for arbitrary \(h\ne 0\) on the disc with radius \(r/\Vert h\Vert _E\) centered at the origin of \(\mathbb {C}\). See e.g. [18, Lemma A.2]; see also [9, Chapter III.14] for the generalization of the classical theory of complex analysis for functions \(f:\mathbb {C}\supseteq O\rightarrow \mathbb {C}\) to complex Banach space valued functions \(f:\mathbb {C}\supseteq O\rightarrow F\) defined on a complex domain, and [28] for a general account on complex analysis in Banach spaces. \(\square \) The purpose of the next lemma is to establish certain bounds which we will use later on. Let \(\psi \in \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}\), let \(\lambda = x + \mathrm {i}y \in \mathbb {C}\) with \(x,y \in \mathbb {R}\), and let \(\Delta _1\) and \(\Delta _2\) denote the real and imaginary parts of \(\Delta = \Delta _1 + \mathrm {i}\Delta _2\).
As \(|\lambda | \rightarrow \infty \), the partial derivative \(\partial _y \Delta _2\) satisfies the asymptotic estimate $$\begin{aligned} \partial _y \Delta _2(x,y;\psi )&= -8\big (x \sin [2(x^2-y^2)] \cosh [4x y] - y \cos [2(x^2-y^2)] \sinh [4x y] \big ) \nonumber \\&\quad - 4 \mathrm {i}\grave{\Gamma }(\psi ) \cos [2(x^2-y^2)] \cosh [4x y] + \mathcal O \bigg (\frac{\mathrm {e}^{4|x y|}}{\sqrt{x^2+y^2}}\bigg ) \end{aligned}$$ uniformly for \(\psi \) in bounded subsets of \(\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}\), where $$\begin{aligned} \grave{\Gamma }(\psi ) :=\Gamma (1,\psi ) = \int ^1_0 ( \psi ^1 \psi ^4 - \psi ^2 \psi ^3 ) \, \mathrm d t. \end{aligned}$$ Set \({\dot{\lambda }}^1_n:={\dot{\lambda }}^1_n(0)\). The mapping $$\begin{aligned} (x_n,y_n) \mapsto \partial _y \Delta _2({\dot{\lambda }}^1_n + x_n,y_n;\psi ), \end{aligned}$$ which is real analytic in each coordinate, maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {R}\times \ell ^{\infty ,1/2}_\mathbb {R}\) to bounded sets in \(\ell ^{\infty ,-1/2}_\mathbb {R}\); the corresponding bound in \(\ell ^{\infty ,-1/2}_\mathbb {R}\) can be chosen uniformly for \(\psi \) varying within bounded subsets of \(\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{\mathcal I}\). 
The assertion remains true when considering (4.14) as a mapping $$\begin{aligned} \ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\times \mathrm {X}\supseteq \big (\ell ^{\infty ,1/2}_\mathbb {R}\times \ell ^{\infty ,1/2}_\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\big )\otimes \mathbb {C}\rightarrow \ell ^{\infty ,-1/2}_\mathbb {R}\otimes \mathbb {C}\end{aligned}$$ by means of the coordinatewise analytic extension to all of \(\big (\ell ^{\infty ,1/2}_\mathbb {R}\times \ell ^{\infty ,1/2}_\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\big )\otimes \mathbb {C}\); that is, this mapping maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^{\infty ,-1/2}_\mathbb {C}\) uniformly on bounded subsets of \((\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\otimes \mathbb {C}\). In order to prove part (1), we recall from Theorem 2.7 that $$\begin{aligned} \grave{M}(\lambda ,\psi ) = \mathrm {e}^{-2\mathrm {i}\lambda ^2 \sigma _3} + {\mathcal {O}} \bigg (\frac{\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}}{|\lambda |}\bigg ). \end{aligned}$$ In the proof of Theorem 2.7, we gained additional information on the remainder term: it is of the form $$\begin{aligned} \frac{Z_1(\psi )}{\lambda } \mathrm {e}^{-2\mathrm {i}\lambda ^2 \sigma _3} + \frac{W_1(\psi )}{\lambda } \mathrm {e}^{2\mathrm {i}\lambda ^2 \sigma _3} + {\mathcal {O}} \bigg (\frac{\mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}}{|\lambda |^2}\bigg ), \end{aligned}$$ where the diagonal part of the \(1/\lambda \)-terms is given by $$\begin{aligned} \frac{1}{2\lambda }\grave{\Gamma }(\psi )\, \sigma _3 \mathrm {e}^{-2\mathrm {i}\lambda ^2 \sigma _3}. 
\end{aligned}$$ Thus the discriminant satisfies $$\begin{aligned} \Delta (\lambda ,\psi ) = 2 \cos 2\lambda ^2 - \frac{\mathrm {i}\grave{\Gamma }}{\lambda } \sin 2 \lambda ^2 + {\mathcal {O}} \big (|\lambda |^{-2} \, \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\big ) \end{aligned}$$ for any potential \(\psi \in \mathrm {X}\), and its \(\lambda \)-derivative satisfies $$\begin{aligned} {\dot{\Delta }}(\lambda ,\psi ) = -8\lambda \sin 2\lambda ^2 - 4\mathrm {i}\grave{\Gamma }(\psi ) \cos 2\lambda ^2 + O \big (|\lambda |^{-1} \, \mathrm {e}^{2|\mathfrak {I}(\lambda ^2)|}\big ). \end{aligned}$$ Since \(\grave{\Gamma }(\psi ) \in \mathrm {i}\mathbb {R}\) for \(\psi \in \mathrm {X}_{{\mathcal {R}}} \cup \mathrm {X}_{{\mathcal {I}}}\), the asymptotic estimate (4.13) follows by taking the real part of (4.15). We prove part (2) of the lemma in the complex setting; this includes the real setting as a special case. The analytic extension \(\widetilde{\partial _y \Delta _2}\) of \(\partial _y \Delta _2\) to \(\mathbb {C}\times \mathbb {C}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\otimes \mathbb {C}\) is given by $$\begin{aligned} \widetilde{\partial _y \Delta _2}(x,y;\psi ) = {\dot{\Delta }}(x + \mathrm {i}y,\psi ), \quad x,y\in \mathbb {C}, \; \psi \in (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\otimes \mathbb {C}, \end{aligned}$$ which, according to the first part of the proof, satisfies the asymptotic estimate $$\begin{aligned} \widetilde{\partial _y \Delta _2}(x,y;\psi )&= -8(x + \mathrm {i}y) \sin [2(x + \mathrm {i}y)^2] - 4\mathrm {i}\grave{\Gamma }(\psi ) \cos [2(x + \mathrm {i}y)^2] \nonumber \\&\quad + O \bigg (\frac{\mathrm {e}^{2|\mathfrak {I}((x + \mathrm {i}y)^2)|}}{|x + \mathrm {i}y|} \bigg ). 
\end{aligned}$$ The error estimate holds uniformly on bounded subsets of \((\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\otimes \mathbb {C}\) as \(|x + \mathrm {i}y|\rightarrow \infty \); likewise, \(\grave{\Gamma }\) is uniformly bounded on bounded subsets of this potential space. Therefore we only need to establish the desired bounds for arbitrary potentials \(\psi \in (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\otimes \mathbb {C}\); the uniformity on bounded subsets then follows automatically. We write the complexification of (4.14) (by means of analytic extensions in all coordinates) as $$\begin{aligned} \ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\times [(\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\otimes \mathbb {C}] \rightarrow \ell ^{\infty ,-1/2}_\mathbb {C}, \quad (x_n,y_n) \mapsto \widetilde{\partial _y \Delta _2}({\dot{\lambda }}^1_n + x_n, y_n;\psi ), \end{aligned}$$ and employ the asymptotic estimate (4.16) to deduce the asserted bounds for (4.17). Let us verify the bounds separately for the three components of (4.17), which arise from the three terms in (4.16), beginning with the error term. That is, we first show that $$\begin{aligned} (x_n,y_n) \mapsto \frac{\mathrm {e}^{2|\mathfrak {I}(({\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n)^2)|}}{|{\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n|} \end{aligned}$$ maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^{\infty ,-1/2}_\mathbb {R}\).
We clearly have that $$\begin{aligned} (x_n,y_n) \mapsto |\mathfrak {I}(({\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n)^2)| \end{aligned}$$ maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^\infty _\mathbb {R}\), hence the numerator in (4.18) is bounded in \(\ell ^\infty _\mathbb {R}\) uniformly on bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\). It follows that the whole expression on the right-hand side of (4.18) is bounded in \(\ell ^{\infty ,1/2}_\mathbb {R}\subseteq \ell ^{\infty ,-1/2}_\mathbb {R}\) on bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\). Next we show that $$\begin{aligned} (x_n,y_n) \mapsto \big |\cos [2({\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n)^2]\big | \end{aligned}$$ maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^{\infty ,-1/2}_\mathbb {R}\), which ensures that the second component of (4.17) has the asserted property. By recalling that \(\sin [2({\dot{\lambda }}^1_n)^2]=0\) and \(\cos [2({\dot{\lambda }}^1_n)^2]=(-1)^n\) and employing the classical trigonometric addition formulas, we obtain $$\begin{aligned} \big |\cos [2({\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n)^2]\big |&\le \big | \cos [2(x^2_n + 2\mathrm {i}x_n y_n - y^2_n)] \cos [4 {\dot{\lambda }}^1_n (x_n + \mathrm {i}y_n)] \big | \nonumber \\&\quad + \big |\sin [2(x^2_n + 2\mathrm {i}x_n y_n - y^2_n)] \sin [4 {\dot{\lambda }}^1_n (x_n + \mathrm {i}y_n)] \big |. \end{aligned}$$ The second term on the right-hand side of (4.20) is bounded in \(\ell ^{\infty ,1/2}_\mathbb {R}\) on bounded sets of \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\); the first term is bounded in \(\ell ^\infty _\mathbb {R}\).
Thus (4.19) maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^\infty _\mathbb {R}\subseteq \ell ^{\infty ,-1/2}_\mathbb {R}\). Finally, we infer by similar arguments that the first component of (4.17) maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^{\infty ,-1/2}_\mathbb {C}\). Indeed, $$\begin{aligned} (x_n,y_n) \mapsto |\sin [2({\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n)^2]| \end{aligned}$$ maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^\infty _\mathbb {R}\), thus $$\begin{aligned} (x_n,y_n) \mapsto |({\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n)\sin [2({\dot{\lambda }}^1_n+x_n + \mathrm {i}y_n)^2]| \end{aligned}$$ maps bounded sets in \(\ell ^{\infty ,1/2}_\mathbb {C}\times \ell ^{\infty ,1/2}_\mathbb {C}\) to bounded sets in \(\ell ^{\infty ,-1/2}_\mathbb {R}\). We conclude that the mapping (4.17) has the asserted boundedness properties, which finishes the proof of the lemma. \(\square \) Below we provide an elementary criterion that helps to establish the analyticity of functions mapping to \(\ell ^{\infty ,s}_\mathbb {C}\); unlike Lemma 4.6, it does not involve the dual \((\ell ^{\infty ,s}_\mathbb {C})'\) in this particular situation. The criterion is formulated for the target space \(\ell ^{\infty ,s}_F\), \(s\in \mathbb {R}\), where F is a complex Banach space; i.e. \(\ell ^{\infty ,s}_F\) denotes the Banach space $$\begin{aligned} \Big \{ u=(u_n)_{n\in \mathbb {Z}}:\mathbb {Z}\rightarrow F \;\Big |\; \Vert u\Vert _{\infty ,s}:=\sup _{n\in \mathbb {Z}} \big \{ (1+n^2)^\frac{s}{2} |u_n|_F \big \} < \infty \Big \} \end{aligned}$$ with norm \(\Vert u\Vert _{\infty ,s}\). Let \(s\in \mathbb {R}\), let E, F be complex Banach spaces and let \(O\subseteq E\) be an open subset.
If the function $$\begin{aligned} f :O \rightarrow \ell ^{\infty ,s}_F, \quad u\mapsto f(u) = \big (f_n(u)\big )_{n\in \mathbb {Z}} \end{aligned}$$ is locally bounded and each coordinate function \(f_n :O \rightarrow F\) is analytic, then f is analytic. A proof for the case \(s=0\) can be found in [18, Theorem A.3], and this proof can easily be generalized to the case of arbitrary \(s\in \mathbb {R}\). Let us for convenience state this proof, which verifies the differentiability of f at an arbitrary point \(u\in O\) directly. By assumption there is a ball centered at u such that f is bounded in \(\ell ^{\infty ,s}_F\) on this ball. In particular, each \((1+n^2)^{s/2} |f_n|_F\) is bounded by the same constant. Since all \(f_n\) are moreover analytic, it follows from the Taylor series representation applied to each \(f_n\), cf. Lemma 4.6, and an application of Cauchy's estimate, cf. Lemma 4.7, that $$\begin{aligned} \Vert f_n(u+h) - f_n(u) - \mathrm d f_n(u) h\Vert _F \le C (1+n^2)^{-\frac{s}{2}} \Vert h\Vert ^2_E \end{aligned}$$ for small enough \(\Vert h\Vert _E\), where C is independent of \(n\in \mathbb {Z}\). This means that for small \(\Vert h\Vert _E\), $$\begin{aligned} \Vert f(u+h) - f(u) - (\mathrm d f_n(u) h)_{n\in \mathbb {Z}} \Vert _{\infty ,s} \le C \Vert h\Vert ^2_E, \end{aligned}$$ from which we infer that f is differentiable at u with derivative $$\begin{aligned} \mathrm d f(u) = (\mathrm d f_n(u))_{n\in \mathbb {Z}}\in {\mathcal {L}}(E,\ell ^{\infty ,s}_F). \end{aligned}$$ The real analytic extension \(F:\mathbb {R}\times \mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}) \rightarrow \mathbb {R}\) of \({\tilde{F}}\), cf. (4.7) and (4.8), can be written as $$\begin{aligned} F(x,y;\psi ) = \int ^1_0 (\partial _2 \Delta _2) (x, s y; \psi ) \, \mathrm d s, \end{aligned}$$ where \(\partial _2\) denotes the partial derivative with respect to the second variable. 
Indeed, $$\begin{aligned} \int ^1_0 (\partial _2 \Delta _2)(x, s y; \psi ) y \, \mathrm d s&= \int ^y_0 (\partial _2 \Delta _2)(x, y'; \psi ) \, \mathrm d y' = \Delta _2(x,y;\psi ) - \Delta _2(x,0;\psi ), \end{aligned}$$ where \(\Delta _2(x,0;\psi )=0\) because, by Proposition 4.2, \(\Delta \) is real-valued on \(\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\). We obtain from (4.21) that $$\begin{aligned} |F(x,y;\psi )| \le \max _{s\in [0,1]} |(\partial _2 \Delta _2)(x, s y; \psi )|. \end{aligned}$$ In view of Lemma 4.8 and (4.22), the operator \({\mathcal {F}}\) given by (4.9) is a well-defined map $$\begin{aligned} {\mathcal {F}} :{\check{\ell }}^{\infty ,1/2}_\mathbb {R}\times \check{\ell }^{\infty ,1/2}_\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{\mathcal I}) \supseteq B^{\infty ,1/2}_1 \times B^{\infty ,1/2}_1 \times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}) \rightarrow \check{\ell }^{\infty ,-1/2}_\mathbb {R}, \end{aligned}$$ where \(B^{\infty ,1/2}_1 \equiv B^{\infty ,1/2}_1({\check{\ell }}^{\infty ,1/2}_\mathbb {R})\) denotes the open unit ball in \({\check{\ell }}^{\infty ,1/2}_\mathbb {R}\) centered at 0; the space \({\check{\ell }}^{\infty ,1/2}_\mathbb {R}\times \check{\ell }^{\infty ,1/2}_\mathbb {R}\times ( \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) is endowed with the usual product topology. Since F is real analytic, it admits an analytic extension \(F_\mathbb {C}\) to some open set in \(\mathbb {C}\times \mathbb {C}\times [(\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\otimes \mathbb {C}]\), which contains \(\mathbb {R}\times \mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\).
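For the zero potential the integral representation (4.21) can be sanity-checked numerically: from \(\Delta (\lambda ,0)=2\cos 2\lambda ^2\) one has \(\Delta _2(x,y;0) = -2\sin [2(x^2-y^2)]\sinh [4xy]\), and the quadrature value of \(\int _0^1 (\partial _2 \Delta _2)(x,sy;0)\,\mathrm ds\) should agree with \(\Delta _2(x,y;0)/y\) for \(y\ne 0\). A minimal sketch (the function names are ours, and the partial derivative is taken by a finite difference):

```python
import math

def Delta2(x, y):
    # Delta_2(x, y; 0) = Im(2 cos(2 (x + i y)^2)) for the zero potential
    return -2.0 * math.sin(2 * (x**2 - y**2)) * math.sinh(4 * x * y)

def F_integral(x, y, m=2000, h=1e-6):
    # midpoint rule for int_0^1 (d_2 Delta_2)(x, s*y; 0) ds, with the
    # partial derivative in the second slot taken by a central difference
    total = 0.0
    for k in range(m):
        s = (k + 0.5) / m
        total += (Delta2(x, s * y + h) - Delta2(x, s * y - h)) / (2 * h)
    return total / m

x, y = 1.3, 0.4
print(F_integral(x, y), Delta2(x, y) / y)  # the two values agree
```

The two printed values coincide up to the quadrature and differencing errors, which is exactly the identity proved above.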
Let us consider the complexification \(\big (B^{\infty ,1/2}_1 \times B^{\infty ,1/2}_1 \times (\mathrm {X}_{\mathcal R}\cup \mathrm {X}_{{\mathcal {I}}})\big )\otimes \mathbb {C}\) of \(B^{\infty ,1/2}_1 \times B^{\infty ,1/2}_1 \times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\). By an application of Lemma 4.8 and (4.22), there exists an open set \({\mathcal {U}}_\mathbb {C}\subseteq \big ( \check{\ell }^{\infty ,1/2}_\mathbb {R}\times {\check{\ell }}^{\infty ,1/2}_\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\big )\otimes \mathbb {C}\), which contains \(B^{\infty ,1/2}_1 \times B^{\infty ,1/2}_1 \times ( \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\), such that the coordinatewise analytic extension \({\mathcal {F}}_\mathbb {C}:\mathcal U_\mathbb {C}\rightarrow {\check{\ell }}^{\infty ,-1/2}_\mathbb {C}\) of \({\mathcal {F}}\) is bounded on bounded subsets of \({\mathcal {U}}_\mathbb {C}\); in particular \(\mathcal F_\mathbb {C}\) is locally bounded on \({\mathcal {U}}_\mathbb {C}\). From Lemma 4.9 we conclude that \({\mathcal {F}}_\mathbb {C}\) is analytic on \({\mathcal {U}}_\mathbb {C}\); in particular, \({\mathcal {F}}\) is real analytic. The partial derivative \(\partial _u {\mathcal {F}}(0,0;0)\), which is given by (4.10), is a topological isomorphism \({\check{\ell }}^{\infty ,1/2}_\mathbb {R}\rightarrow {\check{\ell }}^{\infty ,-1/2}_\mathbb {R}\) and \({\mathcal {F}}(0,0;0)=0\). Thus we can apply the implicit function theorem for Banach space valued real analytic functions, cf. [31]. 
We infer the existence of an open neighborhood W of the zero potential in \(\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}\), an open \(\varepsilon \)-ball \(B^{\infty ,1/2}_\varepsilon \) and a \(\delta \)-ball \(B^{\infty ,1/2}_\delta \) around the origin in \(\check{\ell }^{\infty ,1/2}_\mathbb {R}\) and a real analytic function $$\begin{aligned} {\mathcal {G}} :B^{\infty ,1/2}_\varepsilon \times W \rightarrow B^{\infty ,1/2}_\delta \end{aligned}$$ such that, for all \(v\in B^{\infty ,1/2}_\varepsilon \) and \(\psi \in W\), $$\begin{aligned} {\mathcal {F}}(\mathcal G(v,\psi ),v,\psi ) = 0, \end{aligned}$$ and such that the map $$\begin{aligned} (v,\psi ) \mapsto ({\mathcal {G}}(v,\psi ),v,\psi ), \quad B^{\infty ,1/2}_\varepsilon \times W \rightarrow B^{\infty ,1/2}_\delta \times B^{\infty ,1/2}_\varepsilon \times W \end{aligned}$$ describes the zero level set of \({\mathcal {F}}\) in \(B^{\infty ,1/2}_\delta \times B^{\infty ,1/2}_\varepsilon \times W\). We may assume that the sequences \(\varepsilon =(\varepsilon _n)_{n\in \mathbb {Z}}\) and \(\delta =(\delta _n)_{n\in \mathbb {Z}}\) satisfy \(\varepsilon _n>0\) and \(\delta _n>0\) for \(n\in \mathbb {Z}{\setminus }\{0\}\) and \(\varepsilon _0=\delta _0=0\). Clearly, if \(-1\le \tau _n\le 1\) for each n, then \((\tau _n \varepsilon _n)_{n\in \mathbb {Z}} \in B^{\infty ,1/2}_\varepsilon \) and \((\tau _n \delta _n)_{n\in \mathbb {Z}} \in B^{\infty ,1/2}_\delta \). Thus we can run through the intervals in each coordinate in a uniform way. Let \(R^{\varepsilon ,\delta }_n\), \(n\in \mathbb {Z}{\setminus }\{0\}\), be the associated sequence of nondegenerate rectangles defined in (4.6). 
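For the zero potential the solution of \({\mathcal {F}}=0\) can be made explicit: the off-axis zero set of \(\Delta _2(x,y;0) = -2\sin [2(x^2-y^2)]\sinh [4xy]\) near \({\dot{\lambda }}^1_n(0)=\sqrt{n\pi /2}\) (for \(n>0\)) is the hyperbola \(x^2-y^2=n\pi /2\), i.e. \({\mathcal {G}}_n(y,0)=\sqrt{n\pi /2+y^2}-\sqrt{n\pi /2}\). The following sketch (names are ours) recovers this branch by bisection in x for fixed y, mimicking the implicit function theorem argument:

```python
import math

def F0(x, y):
    # F(x, y; 0) = Delta_2(x, y; 0)/y for the zero potential, y != 0
    return -2.0 * math.sin(2 * (x**2 - y**2)) * math.sinh(4 * x * y) / y

def arc_point(n, y):
    # bisect F0(., y) = 0 in x near the critical point sqrt(n*pi/2), n > 0
    lam = math.sqrt(n * math.pi / 2)
    a, b = lam - 0.05, lam + 0.05   # bracket containing the unique local root
    fa = F0(a, y)
    for _ in range(100):
        m = 0.5 * (a + b)
        fm = F0(m, y)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

for n in (1, 2):
    for y in (0.05, 0.10, 0.15):
        print(n, y, arc_point(n, y), math.sqrt(n * math.pi / 2 + y**2))
```

The bisection output matches the explicit branch \(x=\sqrt{n\pi /2+y^2}\), confirming that for \(\psi =0\) the arcs \(\gamma _n(0)\) are small pieces of hyperbolas crossing the real line at \({\dot{\lambda }}^1_n(0)\).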
Our considerations show that, for every \(\psi \in W\) and \(n\in \mathbb {Z}{\setminus }\{0\}\), the zero set of \({\mathcal {F}}\) can be parametrized locally near \({\dot{\lambda }}^1_n\) by the real analytic function $$\begin{aligned} z_n(\psi ) :(-\varepsilon _n,\varepsilon _n) \rightarrow R^{\varepsilon ,\delta }_n, \quad y_n \mapsto {\dot{\lambda }}^1_n + {\mathcal {G}}_n(y_n,\psi ) + \mathrm {i}y_n. \end{aligned}$$ We set $$\begin{aligned} \gamma _n(\psi ) :=z_n(\psi )( (-\varepsilon _n,\varepsilon _n)) \subseteq R^{\varepsilon ,\delta }_n \end{aligned}$$ and denote the zero set of \(\Delta _2(\cdot ,\psi )\) by $$\begin{aligned} N_{\Delta _2}(\psi ) :=\{(x,y) \in \mathbb {R}^2 :\Delta _2(x,y,\psi )=0 \} \subseteq \mathbb {R}^2. \end{aligned}$$ By construction, $$\begin{aligned} \gamma _n(\psi ) {\setminus } \mathbb {R}= N_{\Delta _2}(\psi ) \cap (R^{\varepsilon ,\delta }_n {\setminus } \mathbb {R}), \end{aligned}$$ and furthermore, since \(\Delta \) is real-valued on \(\mathbb {R}\times (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\), cf. Proposition 4.2, we have $$\begin{aligned} N_{\Delta _2}(\psi ) \cap R^{\varepsilon ,\delta }_n = \gamma _n(\psi ) \cup (R^{\varepsilon ,\delta }_n \cap \mathbb {R}) =:Z_n(\psi ) \subseteq \mathbb {C}. \end{aligned}$$ Thus for arbitrary \(\psi \in W\) and every \(n\in \mathbb {Z}{\setminus }\{0\}\), \(\lambda \in R^{\varepsilon ,\delta }_n\) satisfies $$\begin{aligned} \Delta (\lambda ,\psi )\in \mathbb {R}\iff \lambda \in Z_n(\psi ). \end{aligned}$$ The intersection \(\gamma _n(\psi ) \cap \mathbb {R}\) consists of a single point which we denote by \(\xi _n \equiv \xi _n(\psi ) \in R^{\varepsilon ,\delta }_n \subseteq D^1_n\). We will show that \(\xi _n={\dot{\lambda }}^1_n(\psi )\). Since \(\Delta _2\) vanishes on the curve \(\gamma _n(\psi )\), which is orthogonal to the real line at the point \(\xi _n\), we have \(\partial _y \Delta _2(\xi _n,\psi )=0\). 
Furthermore, we know that \(\Delta _2\) vanishes on \(\mathbb {R}\), hence \(\partial _x \Delta _2(\xi _n,\psi )=0\). The Cauchy-Riemann equations then imply that \({\dot{\Delta }}(\xi _n,\psi )=\partial _y \Delta _2(\xi _n,\psi ) + \mathrm {i}\partial _x \Delta _2(\xi _n,\psi ) = 0\); indeed, writing \(\Delta =\Delta _1+\mathrm {i}\Delta _2\), analyticity gives \({\dot{\Delta }}=\partial _x\Delta _1+\mathrm {i}\partial _x\Delta _2\) with \(\partial _x\Delta _1=\partial _y\Delta _2\). Hence \(\xi _n(\psi )\) is a critical point of \(\Delta (\cdot ,\psi )\). Since \(\Delta (\cdot ,\psi )\) has only one critical point in \(D^1_n\), namely \(\dot{\lambda }^1_n(\psi )\) according to Corollary 4.3, we conclude that \(\gamma _n(\psi )\) crosses the real line at the point \({\dot{\lambda }}^1_n(\psi ) \in R^{\varepsilon ,\delta }_n\). \(\square \) According to Theorem 4.4, there exists a neighborhood W of 0 in \(\mathrm {X}\) such that for \(\psi \in W\cap (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and arbitrary \(n\in \mathbb {Z}{\setminus } \{0\}\), the analytic arc \(\gamma _n(\psi )\) and the respective part of the real line describe the preimage of \(\mathbb {R}\) under \(\Delta (\cdot ,\psi )\) locally around \({\dot{\lambda }}^1_n(0)\). This arc is transversal to the real line, symmetric under reflection in the real axis, and the orthogonal projection of \(\gamma _n(\psi )\) to the imaginary axis is a real analytic diffeomorphism onto its image. Let us consider the rectangles \(R^{\varepsilon ,\delta }_n\), which are centered at \(\lambda ^{1,\pm }_n(0)={\dot{\lambda }}^1_n(0)\) and chosen such that \(\gamma _n(\psi )\subseteq R^{\varepsilon ,\delta }_n\) uniformly for all \(\psi \in W\cap (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and all \(n\in \mathbb {Z}{\setminus } \{0\}\). 
Since the lengths and widths \(\delta _n\) and \(\varepsilon _n\) are of order \(\ell ^{\infty ,1/2}_\mathbb {R}\), it is guaranteed by Theorem 3.17 that there exists a neighborhood \(W^*\) of \(\psi =0\) in \(\mathrm {X}\) such that \(W^*\cap (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}}) \subseteq W\) and such that \(\lambda ^{1,\pm }_n(\psi ),{\dot{\lambda }}^{1}_n(\psi ) \in R^{\varepsilon ,\delta }_n\) for all \(\psi \in W^* \cap (\mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\) and all \(n\in \mathbb {Z}{\setminus } \{0\}\). Furthermore, Theorem 3.17 tells us that $$\begin{aligned} \Delta (\lambda ^{1,\pm }_n(\psi ),\psi )= 2(-1)^n. \end{aligned}$$ Using the notation of the proof of Theorem 4.4, we infer from (4.23) in combination with (4.24) and Corollary 4.3 that $$\begin{aligned} \lambda ^{1,\pm }_n(\psi ) \in Z_n(\psi ) \quad \text {and} \quad {\dot{\lambda }}^1_n(\psi ) \in Z_n(\psi ) \cap \mathbb {R}=R^{\varepsilon ,\delta }_n \cap \mathbb {R}\end{aligned}$$ for all \(\psi \in W \cap ( \mathrm {X}_{{\mathcal {R}}}\cup \mathrm {X}_{{\mathcal {I}}})\). If both \(\lambda ^{1,-}_n(\psi )\) and \(\lambda ^{1,+}_n(\psi )\) are real, we set $$\begin{aligned} \gamma ^*_n \equiv \gamma ^*_n(\psi ) :=[\lambda ^{1,-}_n(\psi ),\lambda ^{1,+}_n(\psi )] \subseteq R^{\varepsilon ,\delta }_n \cap \mathbb {R}; \end{aligned}$$ otherwise we set $$\begin{aligned} \gamma ^*_n \equiv \gamma ^*_n(\psi ) :=\gamma _n(\psi ) \cap \{ \lambda \in \mathbb {C}:|\Delta (\lambda ,\psi )|\le 2 \}. \end{aligned}$$ In both cases, we have that \(\Delta (\{\gamma ^*_n \},\psi ) \subseteq [-2,2]\), \(\overline{\gamma ^*_n} = \gamma ^*_n\) and \({\dot{\lambda }}^1_n(\psi ) \in \gamma ^*_n \cap \mathbb {R}\). 
Finally, considering a parametrization by arc length \(\rho _n \equiv \rho _n(s)\) of \(\gamma ^*_n\) with \(\rho _n(0)={\dot{\lambda }}^1_n(\psi )\), we have that $$\begin{aligned} \frac{\mathrm d}{\mathrm d s}\big [\Delta (\rho _n(s),\psi )\big ]=0 \iff {\dot{\Delta }} (\rho _n(s),\psi ) =0 \iff s=0, \end{aligned}$$ because \(\big | \frac{\mathrm d}{\mathrm d s} \rho _n\big |\equiv 1\) by assumption and, by Corollary 4.3, \(\rho _n(0)={\dot{\lambda }}^1_n(\psi )\) is the only root of \(\dot{\Delta }(\cdot ,\psi )\) in \(R^{\varepsilon ,\delta }_n \subseteq D^1_n\). \(\square \)

5 Example: single exponential potential

In this section, we consider single exponential potentials \(\psi \) of real and imaginary type: $$\begin{aligned} \psi (t) = (\alpha \mathrm {e}^{\mathrm {i}\omega t}, \sigma {\bar{\alpha }} \mathrm {e}^{-\mathrm {i}\omega t},c \mathrm {e}^{\mathrm {i}\omega t}, \sigma {\bar{c}} \mathrm {e}^{-\mathrm {i}\omega t}), \quad \alpha , c \in \mathbb {C}, \; \omega \in \mathbb {R}, \;\sigma \in \{\pm 1\}. \end{aligned}$$ The fundamental matrix solution that corresponds to a single exponential potential can be calculated explicitly. In Figs. 4 and 5 we provide numerical plots of the periodic eigenvalues and the set \(\{\lambda \in \mathbb {C}:\Delta (\lambda ,\psi ) \in \mathbb {R}\}\) for several particular potentials \(\psi \) of the form (5.1).

Fig. 4 Plots of the zero level sets of \(\Delta _2(\cdot ,\psi )\) for single exponential potentials of real type (left column) and imaginary type (right column); periodic eigenvalues are indicated with dots and the dashed circles are the boundaries of the discs \(D^i_n\).

Fig. 5 A plot of the zero level set of \(\Delta _2(\cdot ,\psi )\) for the real type single exponential potential (5.1) with \(\sigma =1\), \(\omega =-2\pi \), \(\alpha =\frac{6}{15}+\frac{11}{4}\mathrm {i}\), \(c=\frac{1}{10}\). 
Periodic eigenvalues are indicated with dots, the large dashed circle is the boundary of the disc \(B_3\) and the remaining dashed circles are the boundaries of the discs \(D^i_n\).

To ensure that \(\psi \in \mathrm {X}\), i.e. that \(\psi \) has period one, we require that \(\omega \in 2\pi \mathbb {Z}\). If \(\sigma =1\), the potential \(\psi (t)\) in (5.1) is of real type and hence relevant for the defocusing NLS; if \(\sigma =-1\), it is of imaginary type and hence relevant for the focusing NLS. A direct computation shows that the associated fundamental solution \(M(t,\lambda ,\psi )\) is explicitly given by $$\begin{aligned} \mathrm {e}^{\frac{\mathrm {i}\omega }{2}t\sigma _3} \begin{pmatrix} \cos (\Omega t) + \frac{4\lambda ^2 + 2\sigma |\alpha |^2 + \omega }{2 \mathrm {i}\Omega } \sin (\Omega t) &{}\frac{2\alpha \lambda +\mathrm {i}c}{\Omega } \sin (\Omega t)\\ \sigma \frac{2{\bar{\alpha }}\lambda -\mathrm {i}{\bar{c}}}{\Omega } \sin (\Omega t) &{}\cos (\Omega t) - \frac{4\lambda ^2 + 2\sigma |\alpha |^2 + \omega }{2 \mathrm {i}\Omega } \sin (\Omega t) \end{pmatrix}, \end{aligned}$$ where $$\begin{aligned} \Omega =\Omega (\lambda )=\sqrt{4\lambda ^4 + 2\omega \lambda ^2 + 4\sigma \mathfrak {I}({\bar{\alpha }} c) \lambda +\Big ( \frac{\omega }{2} +\sigma |\alpha |^2 \Big )^2 - \sigma |c|^2}. \end{aligned}$$ We fix the branch of the root in (5.3) by requiring that $$\begin{aligned} \Omega (\lambda )=2\lambda ^2+\frac{\omega }{2} + \mathcal O(\lambda ^{-1}) \quad \text {as} \quad |\lambda |\rightarrow \infty . \end{aligned}$$ Thus the discriminant \(\Delta \), i.e. the trace of (5.2), and the characteristic function for the periodic eigenvalues \(\chi _{\mathrm P}\) defined in (3.16) are given by $$\begin{aligned} \Delta (\lambda ,\psi )= -2 \cos (\Omega ), \quad \chi _{\mathrm P}(\lambda ,\psi ) = 4\sin ^2(\Omega ). 
\end{aligned}$$ Figure 4 shows plots of the zero set of \(\Delta _2(\cdot ,\psi )=\mathfrak {I}\Delta (\cdot ,\psi )\) and the periodic eigenvalues in the complex \(\lambda \)-plane for four different choices of the parameters \(\sigma \), \(\alpha \), c and \(\omega \). All four choices correspond to exact plane wave solutions of NLS. Indeed, if \(\omega =-2\pi \) and \(\alpha >0\) is chosen such that \(-\sigma 2\alpha ^2-\omega >0\) is satisfied for \(\sigma =\pm 1\), then $$\begin{aligned} u(x,t) = \alpha \, \mathrm {e}^{\mathrm {i}\beta x + \mathrm {i}\omega t} \quad \text {with} \quad \beta =\sqrt{-\sigma 2\alpha ^2-\omega } \end{aligned}$$ solves the defocusing (focusing) NLS if \(\sigma =1\) (\(\sigma =-1\)). Moreover, it holds that $$\begin{aligned} u(0,t)= \alpha \, \mathrm {e}^{\mathrm {i}\omega t}, \quad u_x(0,t)= c\, \mathrm {e}^{\mathrm {i}\omega t}, \quad \text {with} \quad c=\mathrm {i}\alpha \beta . \end{aligned}$$ In the left and right columns of Fig. 4 we find examples for the defocusing and focusing case, respectively. In the top row, the norm of the potential is small enough (\(\alpha =1/12\)) that each periodic eigenvalue \(\lambda ^{i,\pm }_n\) is contained in the disc \(D^i_n\), \(i=1,2\), \(n\in \mathbb {Z}\). In Fig. 4a, all periodic eigenvalues \(\lambda ^{1,\pm }_n\) are real and there is a spectral gap \([\lambda ^{1,-}_{-1},\lambda ^{1,+}_{-1}]\); the remaining periodic eigenvalues satisfy \(\lambda ^{1,-}_n=\lambda ^{1,+}_n\), \(n\in \mathbb {Z}{\setminus }\{-1,0\}\). The periodic eigenvalues \(\lambda ^{2,\pm }_n\), \(n\in \mathbb {Z}\), lie on a curve that asymptotes to the imaginary axis. In Fig. 4b, \(\lambda ^{1,-}_{-1}\) and \(\lambda ^{1,+}_{-1}\) are not real but lie on the (global) arc \(\gamma _{-1}\), which is symmetric with respect to the real axis and crosses the real line at the critical point \({\dot{\lambda }}^1_{-1}\). In Fig. 4c, d, the spectral gaps are larger than in Fig. 
4a, b, because the parameter \(\alpha \) is larger than in the previous examples (\(\alpha =1/2\)). Figure 5 shows the zero set of \(\Delta _2(\cdot ,\psi )\) in the complex \(\lambda \)-plane for the real type single exponential potential (5.1) with parameters \(\sigma =1\), \(\omega =-2\pi \), \(\alpha =\frac{6}{15}+\frac{11}{4}\mathrm {i}\) and \(c=\frac{1}{10}\). This example clearly demonstrates that Theorems 4.4 and 4.5 fail to remain true for potentials with sufficiently large \(\mathrm {X}\)-norms. We further notice that not only do some arcs \(\gamma _n\) "leave" the discs \(D^i_n\) (and hence also the rectangles \(R^{\varepsilon ,\delta }_n\) from Theorem 4.4); the zero set also differs qualitatively from the previous examples: certain arcs "merge" with other arcs and subsequently "split" into new components. This example also illustrates that the labeling of periodic eigenvalues is not preserved under continuous deformations of the potential. We furthermore note that for this particular potential (and hence for all potentials in \(\mathrm {X}\) with smaller \(\mathrm {X}\)-norm), the assertion of the Counting Lemma already holds for \(N=3\): there are \(4(2\cdot 3+1)=28\) periodic eigenvalues contained in the disc \(B_3\) (when counted with multiplicity: 12 double eigenvalues plus 4 simple eigenvalues) and each disc \(D^i_n\), \(i=1,2\), \(|n|>3\), contains precisely one periodic double eigenvalue.

6 Formulas for gradients

6.1 Gradient of the fundamental solution

Let \(\mathrm d F\) denote the Fréchet derivative of a functional \(F:Y \rightarrow \mathbb {C}\) on a (complex) Banach space Y. If it exists, \(\mathrm d F :Y \rightarrow Y'\) is the unique map from Y into its topological dual space \(Y'\) such that $$\begin{aligned} F(u+h)=F(u) + (\mathrm d F)(u) h + o(h) \quad \text {as} \quad \Vert h\Vert \rightarrow 0 \end{aligned}$$ for \(u\in Y\). 
The map \(u \mapsto (\mathrm d F)(u) h\) (also denoted by \(\partial _h F :Y \rightarrow \mathbb {C}\)) is the directional derivative of F in the direction \(h\in Y\). For any differentiable functional \(F :\mathrm {X}\rightarrow \mathbb {C}\) and \(h\in \mathrm {X}\), we have that $$\begin{aligned} \mathrm d F h = \partial _h F = \int ^1_0 \big (F^1 h^1 + F^2 h^2 + F^3 h^3 + F^4 h^4 \big ) \, \mathrm d t \end{aligned}$$ for some uniquely determined function \(\partial F=(F^1,F^2, F^3,F^4) :\mathrm {X}\rightarrow \mathrm {X}\). We denote the components of \(\partial F\) by \(\partial _j F\), \(j=1,2,3,4\), and define the gradient \(\partial F\) of F by $$\begin{aligned} \partial F = (\partial _1 F,\partial _2 F,\partial _3 F,\partial _4 F) = (F^1,F^2, F^3,F^4). \end{aligned}$$ The following proposition gives formulas for the partial derivatives of the fundamental solution $$\begin{aligned} M(t, \lambda , \psi ) = \begin{pmatrix} m_1 &{} m_2 \\ m_3 &{} m_4 \end{pmatrix}. \end{aligned}$$ For fixed \(t\ge 0\) and \(\lambda \in \mathbb {C}\), we consider M as a map \(\mathrm {X}\rightarrow M_{2\times 2}(\mathbb {C})\). In particular, each matrix entry \(m_i\), \(i=1,2,3,4\), gives rise to a functional \(\mathrm {X}\rightarrow \mathbb {C}\). Let us set $$\begin{aligned} \gamma \equiv \gamma (M) :=\det M^{\mathrm d} - \det M^{\mathrm {od}} = m_1 m_4 + m_2 m_3. 
\end{aligned}$$ For any \(t\ge 0\) and \(0 \le s \le t\), the gradient of the fundamental solution M, defined on the interval [0, t], is given by $$\begin{aligned} \big (\partial _1 M(t)\big )(s)&= M(t) \, \begin{pmatrix} -\mathrm {i}\gamma \psi ^2 + 2\lambda m_3 m_4 &{} -2\mathrm {i}\psi ^2 m_2 m_4 +2\lambda m^2_4 \\ 2\mathrm {i}\psi ^2 m_1 m_3 - 2\lambda m^2_3 &{} \mathrm {i}\gamma \psi ^2 - 2\lambda m_3 m_4 \end{pmatrix} (s), \\ \big (\partial _2 M(t)\big )(s)&= M(t) \, \begin{pmatrix} -\mathrm {i}\gamma \psi ^1 - 2\lambda m_1 m_2 &{} -2\mathrm {i}\psi ^1 m_2 m_4 -2\lambda m^2_2 \\ 2\mathrm {i}\psi ^1 m_1 m_3 + 2\lambda m^2_1 &{} \mathrm {i}\gamma \psi ^1 + 2\lambda m_1 m_2 \end{pmatrix} (s), \\ \big (\partial _3 M(t)\big ) (s)&= M(t) \, \begin{pmatrix} \mathrm {i}m_3 m_4 &{} \mathrm {i}m^2_4 \\ -\mathrm {i}m^2_3 &{} -\mathrm {i}m_3 m_4 \end{pmatrix} (s), \\ \big (\partial _4 M(t) \big ) (s)&= M(t) \, \begin{pmatrix} \mathrm {i}m_1 m_2 &{} \mathrm {i}m^2_2 \\ -\mathrm {i}m^2_1 &{} -\mathrm {i}m_1 m_2 \end{pmatrix} (s). \end{aligned}$$ Moreover, at the zero potential \(\psi =0\), $$\begin{aligned} \big (\partial _1 E_\lambda (t)\big )(s)&= \begin{pmatrix} 0 &{}\quad 2\lambda \, \mathrm {e}^{- 2 \mathrm {i}\lambda ^2 (t-2s)} \\ 0 &{}\quad 0 \end{pmatrix},&\quad \big (\partial _2 E_\lambda (t)\big )(s)&\quad = \begin{pmatrix} 0 &{}\quad 0 \\ 2\lambda \, \mathrm {e}^{2 \mathrm {i}\lambda ^2 (t-2s)} &{}\quad 0 \end{pmatrix}, \\ \big (\partial _3 E_\lambda (t)\big )(s)&= \begin{pmatrix} 0 &{}\quad \mathrm {i}\, \mathrm {e}^{-2 \mathrm {i}\lambda ^2 (t-2s)} \\ 0 &{}\quad 0 \end{pmatrix},&\quad \big (\partial _4 E_\lambda (t)\big )(s)&\quad = \begin{pmatrix} 0 &{}\quad 0 \\ - \mathrm {i}\, \mathrm {e}^{2 \mathrm {i}\lambda ^2 (t-2s)} &{}\quad 0 \end{pmatrix}. \end{aligned}$$ By Theorem 2.1 the fundamental solution M is analytic in \(\psi \). 
It suffices therefore to verify the above formulas for smooth potentials \(\psi \) for which the order of differentiation with respect to t and \(\psi \) can be interchanged. The general result then follows by a density argument. Applying the directional derivative \(\partial _h\) to both sides of Eq. (2.5), we obtain $$\begin{aligned} \mathrm D \, \partial _h M = (R+V) \, \partial _h M + \partial _h (R+ V) \, M. \end{aligned}$$ Since both M(0) and R are independent of \(\psi \), Proposition 2.4 implies $$\begin{aligned} \partial _h M(t) = M(t) \int ^t_0 M^{-1} (s) \, \partial _h V (s) \, M(s) \, \mathrm d s. \end{aligned}$$ The integrand equals $$\begin{aligned} \begin{pmatrix}m_4 &{} -m_2 \\ -m_3&{} m_1\end{pmatrix} \, \begin{pmatrix}-\mathrm {i}( \psi ^2 h^1 + \psi ^1 h^2) &{} 2 \lambda h^1 + \mathrm {i}h^3 \\ 2 \lambda h^2 - \mathrm {i}h^4&{} \mathrm {i}( \psi ^2 h^1 + \psi ^1 h^2)\end{pmatrix} \, \begin{pmatrix}m_1 &{} m_2 \\ m_3&{} m_4\end{pmatrix}, \end{aligned}$$ which can be rewritten as $$\begin{aligned}&\begin{pmatrix} -\mathrm {i}\psi ^2(m_1 m_4 + m_2 m_3) + 2\lambda m_3 m_4 &{} -2\mathrm {i}\psi ^2 m_2 m_4 +2\lambda m^2_4 \\ 2\mathrm {i}\psi ^2 m_1 m_3 - 2\lambda m^2_3 &{} \mathrm {i}\psi ^2(m_1 m_4 + m_2 m_3) - 2\lambda m_3 m_4 \end{pmatrix} h^1\\&\quad + \begin{pmatrix} -\mathrm {i}\psi ^1 (m_1 m_4 + m_2 m_3) - 2\lambda m_1 m_2 &{} -2\mathrm {i}\psi ^1 m_2 m_4 -2\lambda m^2_2 \\ 2\mathrm {i}\psi ^1 m_1 m_3 + 2\lambda m^2_1 &{} \mathrm {i}\psi ^1 (m_1 m_4 + m_2 m_3) + 2\lambda m_1 m_2 \end{pmatrix} h^2 \\&\quad + \begin{pmatrix} \mathrm {i}m_3 m_4 &{} \mathrm {i}m^2_4 \\ -\mathrm {i}m^2_3 &{} -\mathrm {i}m_3 m_4 \end{pmatrix} h^3 + \begin{pmatrix} \mathrm {i}m_1 m_2 &{} \mathrm {i}m^2_2 \\ -\mathrm {i}m^2_1 &{} -\mathrm {i}m_1 m_2 \end{pmatrix} h^4. \end{aligned}$$ The expression for the gradient \(\partial M(t)\) follows. 
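As a numerical cross-check (a sketch added for illustration, not part of the proof), the equation for M can be integrated directly. The coefficient matrix used below, \(R+V\) with \(R=-2\mathrm {i}\lambda ^2\sigma _3\) and \(V\) read off from the formula for \(\partial _h V\) above, is an assumption of this sketch, as is the identification of \(\Delta (\lambda ,\psi )\) with the trace of \(M(1,\lambda ,\psi )\); all function names are ours. The first check compares the trace at \(t=1\) with \(-2\cos (\Omega )\) from (5.2)-(5.5) for a plane-wave single exponential potential with \(\omega =-2\pi \); the second checks the \((1,2)\) entry of \(\partial _3 E_\lambda \) stated above.

```python
import cmath
import math

def matmul(A, B):
    """Product of 2x2 complex matrices (nested lists)."""
    return [[A[i][0]*B[0][j] + A[i][1]*B[1][j] for j in range(2)]
            for i in range(2)]

def rk4_fundamental(lam, psi, n_steps=2000):
    """Integrate M' = (R + V(t)) M on [0, 1], M(0) = I, by classical RK4.
    R = -2i lam^2 sigma_3; V is reconstructed from the formula for d_h V
    in the proof above (an assumption of this sketch)."""
    def A(t):
        p1, p2, p3, p4 = psi(t)
        return [[-2j*lam*lam - 1j*p1*p2, 2*lam*p1 + 1j*p3],
                [2*lam*p2 - 1j*p4,       2j*lam*lam + 1j*p1*p2]]
    def add(X, Y, c):
        return [[X[i][j] + c*Y[i][j] for j in range(2)] for i in range(2)]
    M = [[1 + 0j, 0j], [0j, 1 + 0j]]
    h = 1.0/n_steps
    for k in range(n_steps):
        t = k*h
        k1 = matmul(A(t), M)
        k2 = matmul(A(t + h/2), add(M, k1, h/2))
        k3 = matmul(A(t + h/2), add(M, k2, h/2))
        k4 = matmul(A(t + h), add(M, k3, h))
        M = [[M[i][j] + h/6*(k1[i][j] + 2*k2[i][j] + 2*k3[i][j] + k4[i][j])
              for j in range(2)] for i in range(2)]
    return M

# Check 1: Delta(lam, psi) = -2 cos(Omega) for a plane-wave single
# exponential potential (5.1) with sigma = 1 and omega = -2*pi.
sigma, omega = 1, -2*math.pi
alpha = 0.5
beta = math.sqrt(-sigma*2*alpha**2 - omega)
a, c = complex(alpha), 1j*alpha*beta          # c = i*alpha*beta, cf. (5.6)

def psi_exp(t):
    e = cmath.exp(1j*omega*t)
    return (a*e, sigma*a.conjugate()/e, c*e, sigma*c.conjugate()/e)

lam = 0.3 + 0.2j
Mfund = rk4_fundamental(lam, psi_exp)
Omega = cmath.sqrt(4*lam**4 + 2*omega*lam**2
                   + 4*sigma*(a.conjugate()*c).imag*lam
                   + (omega/2 + sigma*abs(a)**2)**2 - sigma*abs(c)**2)
delta_ode = Mfund[0][0] + Mfund[1][1]         # trace of M(1)
delta_formula = -2*cmath.cos(Omega)           # branch-independent: cos is even
residual_1 = abs(delta_ode - delta_formula)

# Check 2: (d_3 E_lam(1))(s)_{12} = i exp(-2i lam^2 (1-2s)).  For the
# perturbation psi = (0, 0, eps*h3, 0) the (1,2) entry satisfies exactly
# m_2(1) = eps * int_0^1 i h3(s) exp(-2i lam^2 (1-2s)) ds.
eps = 1e-3
h3 = lambda s: math.cos(2*math.pi*s)
Mpert = rk4_fundamental(lam, lambda t: (0j, 0j, eps*h3(t), 0j))
n = 2000                                      # midpoint rule for the integral
integral = sum(1j*h3((k + 0.5)/n)*cmath.exp(-2j*lam*lam*(1 - 2*(k + 0.5)/n))
               for k in range(n))/n
residual_2 = abs(Mpert[0][1] - eps*integral)
```

With a few thousand integration steps both residuals should lie far below \(10^{-6}\), at the level of the discretization error of the fourth-order scheme.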
In the case of the zero potential \(\psi =0\), we have \(m_1 = \mathrm {e}^{-2\lambda ^2 \mathrm {i}t}\), \(m_4 = \mathrm {e}^{2\lambda ^2 \mathrm {i}t}\) and \(m_2=m_3=0\), so the gradient \(\partial E_\lambda (t)\) is easily computed. \(\square \) The following notation is useful to express the gradient of M more compactly. Let \(M_1\) and \(M_2\) denote the first and second columns of M, and denote by \(\psi ^{1,2}\) the first two components, and by \(\psi ^{3,4}\) the last two components of the four-vector \(\psi \): $$\begin{aligned} \psi ^{1,2} :=\begin{pmatrix} \psi ^1 \\ \psi ^2 \end{pmatrix}, \quad \psi ^{3,4} :=\begin{pmatrix} \psi ^3 \\ \psi ^4 \end{pmatrix}. \end{aligned}$$ Analogously, let $$\begin{aligned} \partial ^{1,2} :=\begin{pmatrix} \partial _1 \\ \partial _2 \end{pmatrix}, \quad \partial ^{3,4} :=\begin{pmatrix} \partial _3 \\ \partial _4 \end{pmatrix}. \end{aligned}$$ Following [13], we introduce the star product of two 2-vectors \(a=(a_1,a_2)\) and \(b=(b_1,b_2)\) by $$\begin{aligned} a \star b :=\begin{pmatrix} a_2 b_2 \\ a_1 b_1 \end{pmatrix}. \end{aligned}$$ Moreover, recall that \(\gamma = m_1 m_4 + m_2 m_3\). With this notation, we obtain the following. For any \(t\ge 0\), the gradient of the fundamental solution M is given by $$\begin{aligned}&\partial ^{1,2} M(t) \\&\quad = M(t) \begin{pmatrix} -\mathrm {i}\gamma \sigma _1 \psi ^{1,2} + 2 \lambda \sigma _3 (M_1 \star M_2) &{} -2 \mathrm {i}m_2 m_4 \sigma _1 \psi ^{1,2} + 2 \lambda \sigma _3 (M_2 \star M_2) \\ 2\mathrm {i}m_1 m_3 \sigma _1 \psi ^{1,2} - 2 \lambda \sigma _3 (M_1 \star M_1 ) &{} \mathrm {i}\gamma \sigma _1 \psi ^{1,2} - 2 \lambda \sigma _3 ( M_1 \star M_2) \end{pmatrix}(s), \\&\mathrm {i}\partial ^{3,4} M(t) = M(t) \begin{pmatrix} -M_1 \star M_2 &{} -M_2 \star M_2 \\ M_1 \star M_1 &{} M_1 \star M_2 \end{pmatrix}(s). \end{aligned}$$ In the special case when \(\psi =0\) and \(\lambda \) is a periodic eigenvalue corresponding to the zero potential (i.e. 
\(\lambda = \lambda ^{i,\pm }_n(0)\), \(i=1,2\), \(n\in \mathbb {Z}\)), we find $$\begin{aligned} e^+_n = M_1 \star M_1, \quad e^-_n = M_2 \star M_2, \quad n\in \mathbb {Z}, \end{aligned}$$ where $$\begin{aligned} e^+_n :=\begin{pmatrix} 0 \\ \mathrm {e}^{-2 \pi \mathrm {i}n t} \end{pmatrix} , \quad e^-_n :=\begin{pmatrix} \mathrm {e}^{2 \pi \mathrm {i}n t} \\ 0 \end{pmatrix} , \quad n\in \mathbb {Z}. \end{aligned}$$

6.2 Discriminant and anti-discriminant

The gradient of \(\Delta \) is given by $$\begin{aligned} \begin{aligned} \partial ^{1,2} \Delta&= \grave{m}_2 [2\mathrm {i}m_1 m_3 \sigma _1 \psi ^{1,2} - 2 \lambda \sigma _3 (M_1 \star M_1 )] - \grave{m}_3 [ 2 \mathrm {i}m_2 m_4 \sigma _1 \psi ^{1,2} \\&\quad - 2 \lambda \sigma _3 (M_2 \star M_2)] + (\grave{m}_4 - \grave{m}_1) [\mathrm {i}\gamma \sigma _1 \psi ^{1,2} - 2 \lambda \sigma _3 ( M_1 \star M_2)],\\ \mathrm {i}\partial ^{3,4} \Delta&= \grave{m}_2 M_1 \star M_1 - \grave{m}_3 M_2 \star M_2 + (\grave{m}_4-\grave{m}_1) M_1 \star M_2. \end{aligned} \end{aligned}$$ At the zero potential, \(\partial \Delta (\lambda ,0) =0\) for all \(\lambda \in \mathbb {C}\). The formula for the gradient follows directly from Corollary 6.2. In the case of the zero potential, \(m_2=m_3=0\); hence \(M_1 \star M_2=0\) and therefore \(\partial \Delta (\lambda ,0) =0\) for all \(\lambda \in \mathbb {C}\). \(\square \) The following formulas for the derivative of the anti-discriminant are derived in a similar way. 
The gradient of the anti-discriminant \(\delta \) is given by $$\begin{aligned} \partial ^{1,2} \delta =&\; \grave{m}_4 [2\mathrm {i}m_1 m_3 \sigma _1 \psi ^{1,2} - 2 \lambda \sigma _3 (M_1 \star M_1 )] + (\grave{m}_2-\grave{m}_3) [\mathrm {i}\gamma \sigma _1 \psi ^{1,2} \\&- 2 \lambda \sigma _3 ( M_1 \star M_2)] - \grave{m}_1 [2 \mathrm {i}m_2 m_4 \sigma _1 \psi ^{1,2} - 2 \lambda \sigma _3 (M_2 \star M_2)], \\ \mathrm {i}\partial ^{3,4} \delta =&\; \grave{m}_4 M_1 \star M_1 + (\grave{m}_2-\grave{m}_3) M_1 \star M_2 - \grave{m}_1 M_2 \star M_2. \end{aligned}$$ In the special case when \(\psi =0\) and \(\lambda \) is a periodic eigenvalue corresponding to the zero potential, i.e. \(\lambda = \lambda ^{i,\pm }_n(0)\), \(i=1,2\), \(n\in \mathbb {Z}\), $$\begin{aligned} \partial ^{1,2}\delta = 2\lambda ^{i,\pm }_n(0)(-1)^n (e^+_n + e^-_n), \quad \mathrm {i}\partial ^{3,4}\delta = (-1)^n (e^+_n - e^-_n). \end{aligned}$$

7 Hamiltonian structure of the nonlinear Schrödinger system

Consider the NLS system $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm {i}q_t + q_{xx} - 2 q^2 r = 0, \\ -\mathrm {i}r_t + r_{xx} - 2 r^2 q = 0, \end{array}\right. } \end{aligned}$$ where q(x, t) and r(x, t) are independent complex-valued functions. If \(r = \sigma \bar{q}\), the system (7.1) reduces to the NLS Eq. (1.1). We can view (7.1) as an evolution equation with respect to t by writing $$\begin{aligned} \begin{pmatrix} q \\ r \end{pmatrix}_t = \mathrm {i}\begin{pmatrix} q_{xx} - 2 q^2 r \\ - r_{xx} +2 r^2q \end{pmatrix}. \end{aligned}$$ On the other hand, introducing p(x, t) and s(x, t) by $$\begin{aligned} p = q_x, \quad s = r_x, \end{aligned}$$ we can also write (7.1) as an evolution equation with respect to x: $$\begin{aligned} \begin{pmatrix} q \\ r \\ p \\ s \end{pmatrix}_x = \begin{pmatrix} p \\ s \\ -\mathrm {i}q_t +2 q^2 r \\ \mathrm {i}r_t + 2 r^2 q \end{pmatrix}. \end{aligned}$$ The potentials \(\{\psi ^j\}_1^4\) of Eq. 
(1.5) can be viewed as the initial data for (7.4) according to the identifications $$\begin{aligned} \psi ^1(t) = q(0,t), \quad \psi ^2(t) = r(0,t), \quad \psi ^3(t) = p(0,t), \quad \psi ^4(t) = s(0,t). \end{aligned}$$ In this section, we first review the bi-Hamiltonian formulation of (7.1) when viewed as an evolution equation with respect to t. We also recall how this formulation gives rise to an infinite number of conservation laws. We then show that (7.1) admits a Hamiltonian formulation also when viewed as an evolution equation with respect to x. Although this one Hamiltonian formulation is enough for the purpose of establishing local Birkhoff coordinates for the x-evolution of NLS, we also consider the existence of a second Hamiltonian structure for (7.4). We find that even though the infinitely many conservation laws of (7.2) transfer to the x-evolution Eq. (7.4), the naive way of deriving a second Hamiltonian structure for this system fails. Indeed, the obvious guess for a second Hamiltonian structure yields a Poisson bracket which does not satisfy the Jacobi identity. In contrast to the rest of the paper, we will not specify the functional analytic framework in terms of Sobolev spaces such as \(H^1(\mathbb {T},\mathbb {C})\). Instead we will adopt the more algebraic point of view of [29], which is not restricted to the periodic setting; roughly speaking, this means that we will assume that all functions can be differentiated to any order and that partial integrations can be performed freely with vanishing boundary terms. We will use the symbol \(\int \) to denote integration over the relevant x or t domain. 
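As a sample computation in this setting (a worked example added for illustration), the conservation law associated with \(H_0\) that is listed in differential form in Sect. 7.1 follows directly from (7.2):

```latex
% Using (7.2), i.e. q_t = i(q_{xx} - 2q^2 r) and r_t = -i(r_{xx} - 2r^2 q),
(qr)_t = q_t r + q r_t
       = \mathrm{i}\,(q_{xx} r - 2q^2 r^2) - \mathrm{i}\,(q r_{xx} - 2q^2 r^2)
       = \mathrm{i}\,(q_{xx} r - q r_{xx})
       = \mathrm{i}\,(q_x r - q r_x)_x ,
% since (q_x r - q r_x)_x = q_{xx} r - q r_{xx}.
```

The higher conservation laws are obtained in the same manner, with correspondingly longer manipulations of the x-derivatives.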
7.1 The bi-Hamiltonian structure of (7.2)

In the current framework, we define the gradient \(\partial F\) of a functional \(F = F[q,r]\) by $$\begin{aligned} \partial F = \begin{pmatrix} \partial _1 F \\ \partial _2 F \end{pmatrix} = \begin{pmatrix} \frac{\partial F}{\partial q} \\ \frac{\partial F}{\partial r} \end{pmatrix} \end{aligned}$$ whenever there exist functions \(\partial _1 F\) and \(\partial _2 F\) such that $$\begin{aligned} \frac{\mathrm d}{\mathrm d\epsilon } F[q + \epsilon \varphi _1, r + \epsilon \varphi _2]\biggr |_{\epsilon =0} = \int [(\partial _1 F) \varphi _1 + (\partial _2 F) \varphi _2 ] \, \mathrm d x \end{aligned}$$ for any smooth functions \(\varphi _1\) and \(\varphi _2\) of compact support. The system (7.2) admits the bi-Hamiltonian formulation [27] $$\begin{aligned} \begin{pmatrix} q \\ r \end{pmatrix}_t = \mathcal {D} \, \partial H_1 = \mathcal {E} \, \partial H_2, \end{aligned}$$ where the Hamiltonian functionals \(H_1[q,r]\) and \(H_2[q,r]\) are defined by $$\begin{aligned}&H_1 = \mathrm {i}\int q_x r\, \mathrm d x, \qquad H_2 = \int (-q_{xx}r+q^2 r^2)\, \mathrm d x, \end{aligned}$$ and the operators \(\mathcal {D}\) and \(\mathcal {E}\) are given by $$\begin{aligned} \mathcal {D}&=\begin{pmatrix} 2q D_x^{-1}q &{}\quad D_x - 2q D_x^{-1} r \\ D_x - 2r D_x^{-1} q &{}\quad 2r D_x^{-1} r \end{pmatrix}, \quad \mathcal {E} = \begin{pmatrix} 0 &{}\quad -\mathrm {i}\\ \mathrm {i}&{}\quad 0 \end{pmatrix}. \end{aligned}$$ The equalities in (7.5) are easy to verify using that $$\begin{aligned} \partial H_1 = \begin{pmatrix} -\mathrm {i}r_x \\ \mathrm {i}q_x \end{pmatrix}, \qquad \partial H_2 = \begin{pmatrix} -r_{xx} + 2qr^2 \\ - q_{xx} + 2q^2 r \end{pmatrix}. 
\end{aligned}$$ The operators \(\mathcal {D}\) and \(\mathcal {E}\) are Hamiltonian operators in the sense that the associated Poisson brackets $$\begin{aligned} \{F, G\}_\mathcal {D} = \int (\partial F)^\intercal \, \mathcal {D} \, \partial G \, \mathrm d x, \qquad \{F, G\}_\mathcal {E} = \int (\partial F)^\intercal \, \mathcal {E} \, \partial G \, \mathrm d x, \end{aligned}$$ are skew-symmetric and satisfy the Jacobi identity [29, Definition 7.1]. Furthermore, \(\mathcal {D}\) and \(\mathcal {E}\) form a Hamiltonian pair in the sense that any linear combination \(a\mathcal {D} + b \mathcal {E}\), \(a,b \in \mathbb {R}\), is also a Hamiltonian operator [29, Definition 7.19]. We will review the proofs of these properties below. The bi-Hamiltonian formulation (7.5) together with the fact that \(\mathcal {D}\) and \(\mathcal {E}\) form a Hamiltonian pair implies that \(\mathcal {D} \mathcal {E}^{-1}\) is a recursion operator for (7.2) and that a hierarchy of conserved quantities \(H_n\) can be obtained (at least formally) by means of the recursive definition (see [29, Theorem 7.27]) $$\begin{aligned} \mathcal {D} \, \partial H_n = \mathcal {E} \, \partial H_{n+1}. \end{aligned}$$ The first few conserved quantities \(H_0, H_1, H_2, H_3\) for (7.2) are given by (7.6) and $$\begin{aligned}&H_0 = \int qr \, \mathrm d x, \qquad H_3 = \mathrm {i}\int \left( -q_{xxx}r + \frac{3}{2}(q^2)_x r^2\right) \, \mathrm d x. \end{aligned}$$ In differential form, the associated conservation laws are given by $$\begin{aligned}&H_0: \, (qr)_t = \mathrm {i}(q_xr - qr_x)_x, \nonumber \\&H_1: \, \mathrm {i}( q_xr)_t = (q^2r^2 + q_xr_x - q_{xx} r)_x, \nonumber \\&H_2: \, (-q_{xx} r +q^2 r^2)_t = \mathrm {i}(4qq_xr^2 + q_{xx}r_x -q_{xxx} r\big )_x, \nonumber \\&H_3: \, \mathrm {i}\left( -q_{xxx}r + \frac{3}{2}(q^2)_x r^2\right) _t = (2q^3r^3 - 5q_x^2 r^2 - 2qq_xrr_x + q^2 r_x^2 - 5qq_{xx}r^2 \nonumber \\&\quad \quad - 2q^2rr_{xx} - q_{xxx}r_x + q_{xxxx}r)_x. 
\end{aligned}$$ Even for relatively simple brackets such as those defined in (7.8), the direct verification of the Jacobi identity is a very complicated computational task. In the next lemma, we give a proof of the well-known fact that \(\mathcal {D}\) and \(\mathcal {E}\) are Hamiltonian operators by appealing to the framework of [29, Chapter 7], which shortens the argument significantly. \(\mathcal {D}\) and \(\mathcal {E}\) are Hamiltonian operators. It is easy to verify that \(\mathcal {D}\) and \(\mathcal {E}\) are skew-symmetric with respect to the bracket $$\begin{aligned} \left\langle \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \begin{pmatrix} g_1 \\ g_2 \end{pmatrix} \right\rangle = \int (f_1g_1 + f_2 g_2) \, \mathrm d x. \end{aligned}$$ Since \(\mathcal {E}\) has constant coefficients, \(\mathcal {E}\) is a Hamiltonian operator by [29, Corollary 7.5]. It remains to show that the bracket defined by \(\mathcal {D}\) satisfies the Jacobi identity. According to [29, Proposition 7.7], it is enough to show that the functional tri-vector \(\Psi _{\mathcal {D}}\) defined by $$\begin{aligned} \Psi _{\mathcal {D}} = \frac{1}{2} \int \big \{\theta \wedge {{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(\mathcal {D}) \wedge \theta \big \} \, \mathrm d x = \frac{1}{2} \int \sum _{\alpha , \beta = 1}^2 \big \{\theta ^\alpha \wedge ({{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(\mathcal {D}))_{\alpha \beta } \wedge \theta ^\beta \big \} \, \mathrm d x \end{aligned}$$ vanishes, where we refer to [29] for the definitions of the wedge product \(\wedge \), the functional vector \(\theta = (\theta ^1, \theta ^2)\), the vector field \(\mathbf {v}_{\mathcal {D}\theta }\), and its prolongation \({{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }\). We have (see [29, p. 
442]) $$\begin{aligned}&{{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(q) = (\mathcal {D}\theta )^1,\qquad {{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(r) = (\mathcal {D}\theta )^2, \nonumber \\&\mathcal {D}\theta = \begin{pmatrix}(\mathcal {D}\theta )^1 \\ (\mathcal {D}\theta )^2 \end{pmatrix} = \begin{pmatrix} 2q (D_x^{-1}(q\theta ^1)) + (D_x\theta ^2) - 2q (D_x^{-1}(r \theta ^2))\\ (D_x\theta ^1) - 2r (D_x^{-1}(q\theta ^1)) + 2r (D_x^{-1}(r\theta ^2)) \end{pmatrix}, \end{aligned}$$ $$\begin{aligned} {{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(\mathcal {D}) =2 \begin{pmatrix} (\mathcal {D}\theta )^1 D_x^{-1}q + q D_x^{-1}(\mathcal {D}\theta )^1 &{} -(\mathcal {D}\theta )^1 D_x^{-1} r -q D_x^{-1} (\mathcal {D}\theta )^2 \\ - (\mathcal {D}\theta )^2 D_x^{-1} q - r D_x^{-1} (\mathcal {D}\theta )^1 &{} (\mathcal {D}\theta )^2 D_x^{-1} r + r D_x^{-1} (\mathcal {D}\theta )^2 \end{pmatrix}. \end{aligned}$$ $$\begin{aligned} \Psi _{\mathcal {D}}= & {} \int \{ \theta ^1 \wedge (\mathcal {D}\theta )^1 \wedge D_x^{-1}(q \theta ^1) + \theta ^1 \wedge q D_x^{-1}((\mathcal {D}\theta )^1 \wedge \theta ^1) \nonumber \\&-\,\theta ^1 \wedge (\mathcal {D}\theta )^1 \wedge D_x^{-1}(r \theta ^2) - \theta ^1 \wedge q D_x^{-1} ((\mathcal {D}\theta )^2 \wedge \theta ^2) \nonumber \\&-\, \theta ^2 \wedge (\mathcal {D}\theta )^2 \wedge D_x^{-1}(q \theta ^1) - \,\theta ^2 \wedge r D_x^{-1} ((\mathcal {D}\theta )^1 \wedge \theta ^1) \nonumber \\&+ \,\theta ^2 \wedge (\mathcal {D}\theta )^2 \wedge D_x^{-1}(r \theta ^2) +\, \theta ^2 \wedge r D_x^{-1} ((\mathcal {D}\theta )^2 \wedge \theta ^2) \}\, \mathrm d x. 
\end{aligned}$$ An integration by parts shows that the first two terms on the right-hand side of (7.12) are equal: $$\begin{aligned} \int \theta ^1 \wedge q D_x^{-1}((\mathcal {D}\theta )^1 \wedge \theta ^1) \, \mathrm d x&= - \int \big (D_x^{-1}(q \theta ^1)\big ) \wedge (\mathcal {D}\theta )^1 \wedge \theta ^1 \, \mathrm d x \\&= \int \theta ^1 \wedge (\mathcal {D}\theta )^1 \wedge D_x^{-1}(q \theta ^1) \, \mathrm d x. \end{aligned}$$ In the same way, the third and sixth terms are equal, the fourth and fifth are equal, and the last two terms are equal. Thus we find $$\begin{aligned} \Psi _{\mathcal {D}} =&\; \int \{ \theta ^1 \wedge (\mathcal {D}\theta )^1 \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) - \theta ^2 \wedge (\mathcal {D}\theta )^2 \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) \}\, \mathrm d x. \end{aligned}$$ Substituting in the expressions (7.11) for \((\mathcal {D}\theta )^1\) and \((\mathcal {D}\theta )^2\), this becomes $$\begin{aligned} \Psi _{\mathcal {D}} =&\; \int \{ \theta ^1 \wedge [2q (D_x^{-1}(q\theta ^1)) + (D_x\theta ^2) - 2q (D_x^{-1}(r \theta ^2)) ] \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) \\&- \theta ^2 \wedge [ (D_x\theta ^1) - 2r (D_x^{-1}(q\theta ^1)) + 2r (D_x^{-1}(r\theta ^2)) ] \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) \}\, \mathrm d x. \end{aligned}$$ Using that \((D_x^{-1}(q \theta ^j)) \wedge (D_x^{-1}(q \theta ^j)) = 0\) and \((D_x^{-1}(r \theta ^j)) \wedge (D_x^{-1}(r \theta ^j)) = 0\), a simplification gives $$\begin{aligned} \Psi _{\mathcal {D}} =&\; \int \{ \theta ^1 \wedge (D_x\theta ^2) \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) - \theta ^2 \wedge (D_x\theta ^1) \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) \}\, \mathrm d x. 
\end{aligned}$$ Integrating by parts in the first term on the right-hand side, we arrive at $$\begin{aligned} \Psi _{\mathcal {D}} =&\; \int \{ -(D_x\theta ^1) \wedge \theta ^2 \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) - \theta ^2 \wedge (D_x\theta ^1) \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) \}\, \mathrm d x\\ =&\; 0. \end{aligned}$$ This shows that \(\mathcal {D}\) is Hamiltonian and completes the proof. \(\square \)

Lemma 7.2 \(\mathcal {D}\) and \(\mathcal {E}\) form a Hamiltonian pair.

Proof By [29, Corollary 7.21], it is enough to verify that $$\begin{aligned} {{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(\Theta _{\mathcal {E}}) + {{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {E}\theta }(\Theta _{\mathcal {D}}) = 0, \end{aligned}$$ where $$\begin{aligned} \Theta _{\mathcal {D}} = \frac{1}{2} \int \{\theta \wedge \mathcal {D} \theta \} \, \mathrm d x, \quad \Theta _{\mathcal {E}} = \frac{1}{2} \int \{\theta \wedge \mathcal {E} \theta \} \, \mathrm d x, \end{aligned}$$ are the functional bi-vectors representing the associated Poisson brackets. Since \(\mathcal {E}\) has constant coefficients, we have \({{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(\Theta _{\mathcal {E}}) = 0\). Moreover, the same computations that led to the expression (7.13) for \(\Psi _{\mathcal {D}} = -{{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {D}\theta }(\Theta _{\mathcal {D}})\) (with \((\mathcal {D}\theta )^j\) replaced with \((\mathcal {E}\theta )^j\)) imply that $$\begin{aligned} {{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {E}\theta }(\Theta _{\mathcal {D}}) =&\; - \int \{ \theta ^1 \wedge (\mathcal {E}\theta )^1 \wedge D_x^{-1}(q \theta ^1 - r \theta ^2)\\&- \theta ^2 \wedge (\mathcal {E}\theta )^2 \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) \}\, \mathrm d x.
\end{aligned}$$ Since \((\mathcal {E}\theta )^1 = -\mathrm {i}\theta ^2\) and \((\mathcal {E}\theta )^2 = \mathrm {i}\theta ^1\), this gives $$\begin{aligned} {{\,\mathrm{pr}\,}}\mathbf {v}_{\mathcal {E}\theta }(\Theta _{\mathcal {D}}) =&\; \mathrm {i}\int \{ \theta ^1 \wedge \theta ^2 \wedge D_x^{-1}(q \theta ^1 - r \theta ^2)\\&+ \theta ^2 \wedge \theta ^1 \wedge D_x^{-1}(q \theta ^1 - r \theta ^2) \}\, \mathrm d x = 0, \end{aligned}$$ which completes the proof of the lemma. \(\square \)

7.2 The NLS system as an evolution in x

The system (7.4) expresses the NLS system (7.1) as an evolution equation with respect to x. We first present a Hamiltonian structure for the system (7.4).

7.2.1 A Hamiltonian structure for (7.4)

The system (7.4) can be written as $$\begin{aligned} \begin{pmatrix} q \\ r\\ p\\ s \end{pmatrix}_x = \tilde{\mathcal {D}} \, \partial \tilde{H}_1, \end{aligned}$$ where the Hamiltonian functional \(\tilde{H}_1[q,r,p,s]\) is defined by $$\begin{aligned} \tilde{H}_1 = \int (ps + \mathrm {i}q_t r - q^2 r^2) \, \mathrm d t \end{aligned}$$ and the operator \(\tilde{\mathcal {D}}\) is defined by $$\begin{aligned} \tilde{\mathcal {D}} = \begin{pmatrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad 1&{}\quad 0 \\ 0 &{}\quad -1&{}\quad 0&{}\quad 0 \\ -1 &{}\quad 0&{}\quad 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$ The next lemma shows that (7.15) is a Hamiltonian formulation of (7.4).

Lemma 7.3 The operator \(\tilde{\mathcal {D}}\) is Hamiltonian.

Proof It is clear that the bracket \(\{F, G\}_{\tilde{\mathcal {D}}}\) defined by $$\begin{aligned} \{F, G\}_{\tilde{\mathcal {D}}}&= \int (\partial F)^\intercal \, \tilde{\mathcal {D}} \, \partial G \, \mathrm d t \end{aligned}$$ is skew-symmetric. The Jacobi identity is satisfied because \(\tilde{\mathcal {D}}\) has constant coefficients (see [29, Corollary 7.5]).
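Both claims are mechanical, and can also be confirmed symbolically. The following sympy sketch checks the skew-symmetry of \(\tilde{\mathcal {D}}\) and verifies that \(\tilde{\mathcal {D}}\,\partial \tilde{H}_1\) reproduces the x-derivatives of (q, r, p, s), using the gradient \(\partial \tilde{H}_1\) recorded below; the right-hand side of (7.4) is not restated above, so it is reconstructed here (an assumption of this sketch) from \(q_x = p\), \(r_x = s\) and the NLS equation.

```python
# Sanity check of the Hamiltonian formulation (7.15): the constant-coefficient
# operator D~ is skew-symmetric, and D~ applied to the gradient of H~_1
# reproduces the x-derivatives of (q, r, p, s).
# Hedged: the right-hand side below is a reconstruction of (7.4), not a quote.
import sympy as sp

q, r, p, s, qt, rt = sp.symbols('q r p s q_t r_t')
i = sp.I

Dtil = sp.Matrix([[0, 0, 0, 1],
                  [0, 0, 1, 0],
                  [0, -1, 0, 0],
                  [-1, 0, 0, 0]])

# Skew-symmetry, so the bracket {F, G}_D~ is antisymmetric.
assert Dtil.T == -Dtil

# Gradient of H~_1 = int (ps + i q_t r - q^2 r^2) dt, as listed in Sect. 7.2.2.
gradH1 = sp.Matrix([-i*rt - 2*r**2*q, i*qt - 2*q**2*r, s, p])

# Assumed x-derivatives (q_x, r_x, p_x, s_x) of the first-order system.
rhs = sp.Matrix([p, s, -i*qt + 2*q**2*r, i*rt + 2*r**2*q])

assert (Dtil * gradH1 - rhs).expand() == sp.zeros(4, 1)
```

Here \(q_t, r_t\) are treated as independent symbols, which suffices for the matrix algebra.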
\(\square \)

7.2.2 Conservation laws

We conclude from the conservation laws in (7.9) that if (q, r, p, s) evolves in x according to the NLS system (7.4), then the functionals $$\begin{aligned}&\tilde{H}_0 :=\mathrm {i}\int (q_xr - qr_x) \, \mathrm d t, \qquad \tilde{H}_1 :=\int (q^2r^2 + q_xr_x - q_{xx} r) \, \mathrm d t, \\&\tilde{H}_2 :=\mathrm {i}\int (4qq_xr^2 + q_{xx}r_x -q_{xxx} r) \, \mathrm d t, \qquad \text {etc.} \end{aligned}$$ are conserved under the flow, i.e., $$\begin{aligned} \frac{\mathrm d\tilde{H}_n}{\mathrm d x} = 0. \end{aligned}$$ Using (7.3) and (7.4) to eliminate the x-derivatives from the above expressions, we find that, on solutions of (7.4), $$\begin{aligned}&\tilde{H}_0 = \mathrm {i}\int (pr - qs) \, \mathrm d t, \quad \tilde{H}_1 = \int (ps + \mathrm {i}q_tr - q^2 r^2) \, \mathrm d t, \nonumber \\&\quad \tilde{H}_2 = \int (q_t s - p_t r) \, \mathrm d t, \quad \text {etc.} \end{aligned}$$ In this way, we obtain an infinite number of conserved quantities for (7.4). In differential form, the first few conservation laws are given by $$\begin{aligned}&\tilde{H}_0: \, \mathrm {i}(pr - qs)_x = (qr)_t, \\&\tilde{H}_1: \, (ps + \mathrm {i}q_tr - q^2 r^2)_x = \mathrm {i}(p r)_t, \\&\tilde{H}_2: \, (q_t s - p_t r)_x = (\mathrm {i}q_t r - q^2 r^2)_t. \end{aligned}$$ The gradients of the first few functionals \(\tilde{H}_j\) are given by $$\begin{aligned} \partial \tilde{H}_0 = \begin{pmatrix} - \mathrm {i}s\\ \mathrm {i}p \\ \mathrm {i}r \\ -\mathrm {i}q \end{pmatrix}, \qquad \partial \tilde{H}_1 = \begin{pmatrix} -\mathrm {i}r_t - 2 r^2 q\\ \mathrm {i}q_t - 2 q^2 r\\ s\\ p \end{pmatrix}, \qquad \partial \tilde{H}_2 = \begin{pmatrix} -s_t \\ -p_t\\ r_t\\ q_t \end{pmatrix}.
\end{aligned}$$

7.2.3 A candidate for a second Hamiltonian structure of (7.4)

Inspired by the bi-Hamiltonian formulation (7.5) of (7.2), it is natural to seek a second Hamiltonian formulation of (7.4) of the form $$\begin{aligned} \begin{pmatrix} q \\ r\\ p\\ s \end{pmatrix}_x = \tilde{\mathcal {E}} \, \partial \tilde{H}_2, \end{aligned}$$ where \(\tilde{H}_2\) is the conserved functional defined by (7.17) and \(\tilde{\mathcal {E}}\) is an appropriate Hamiltonian operator. It is easy to check that (7.18) is satisfied for any choice of the constant \(\alpha \in \mathbb {C}\) provided that \(\tilde{\mathcal {E}} = \tilde{\mathcal {E}}_\alpha \) is defined by $$\begin{aligned} \tilde{\mathcal {E}}_\alpha = \begin{pmatrix} 0 &{} - D_t^{-1} &{} 0 &{} 0 \\ - D_t^{-1} &{} 0 &{}0&{}0 \\ 0 &{} 0 &{} 2\alpha qD_t^{-1} q &{} -\mathrm {i}+ 4 (1-\alpha ) r D_t^{-1}q + 2\alpha q D_t^{-1} r \\ 0 &{}0&{} \mathrm {i}+ 4(1-\alpha )q D_t^{-1}r + 2\alpha r D_t^{-1} q &{} 2\alpha r D_t^{-1} r \end{pmatrix}. \end{aligned}$$ This suggests that we seek a second Hamiltonian operator for (7.4) of the form (7.19). The bracket $$\begin{aligned} \{F, G\}_{\tilde{\mathcal {E}}_\alpha }&= \int (\partial F)^\intercal \, \tilde{\mathcal {E}}_\alpha \, \partial G \, \mathrm d t \end{aligned}$$ is skew-symmetric for each \(\alpha \in \mathbb {C}\). However, the next lemma shows that \(\tilde{\mathcal {E}}_\alpha \) is not Hamiltonian for any choice of \(\alpha \) because the bracket \(\{\cdot , \cdot \}_{\tilde{\mathcal {E}}_\alpha }\) fails to satisfy the Jacobi identity.

Lemma 7.4 The operator \(\tilde{\mathcal {E}} = \tilde{\mathcal {E}}_\alpha \) defined in (7.19) is not Hamiltonian for any \(\alpha \in \mathbb {C}\).

Proof Fix \(\alpha \in \mathbb {C}\). We will show that \(\{F, G\}_{\tilde{\mathcal {E}}}\) does not satisfy the Jacobi identity.
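(Aside: the claim above that (7.18) holds for every \(\alpha \) is easy to confirm symbolically. The sympy sketch below checks the third and fourth components of \(\tilde{\mathcal {E}}_\alpha \, \partial \tilde{H}_2\) for the illustrative polynomial data q = t^2, r = t^3, which are assumptions of this sketch, not from the paper; they vanish at t = 0 so that the antiderivative model of \(D_t^{-1}\) introduces no integration constants.)

```python
# Check that each alpha in (7.19) reproduces the same evolution equations:
# components 3 and 4 of E~_alpha ∂H~_2 (with ∂H~_2 = (-s_t, -p_t, r_t, q_t))
# are independent of alpha and equal p_x = -i q_t + 2 q^2 r and
# s_x = i r_t + 2 r^2 q.  Test data q = t^2, r = t^3 are illustrative choices.
import sympy as sp

t, a = sp.symbols('t alpha')
i = sp.I

q, r = t**2, t**3
qt, rt = q.diff(t), r.diff(t)
Dinv = lambda f: sp.integrate(f, t)  # model of D_t^{-1}

# Third component: 2αq D_t^{-1}(q r_t) + [-i + 4(1-α) r D_t^{-1} q + 2αq D_t^{-1} r] q_t
row3 = (2*a*q*Dinv(q*rt)
        + (-i)*qt + 4*(1 - a)*r*Dinv(q*qt) + 2*a*q*Dinv(r*qt))
# Fourth component: [i + 4(1-α) q D_t^{-1} r + 2αr D_t^{-1} q] r_t + 2αr D_t^{-1}(r q_t)
row4 = (i*rt + 4*(1 - a)*q*Dinv(r*rt) + 2*a*r*Dinv(q*rt)
        + 2*a*r*Dinv(r*qt))

assert sp.expand(row3 - (-i*qt + 2*q**2*r)) == 0  # equals p_x; alpha cancels
assert sp.expand(row4 - (i*rt + 2*r**2*q)) == 0   # equals s_x; alpha cancels
```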
By [29, Proposition 7.7], it is enough to show that the tri-vector $$\begin{aligned} \Psi _{\tilde{\mathcal {E}}} = \frac{1}{2} \int \big \{\theta \wedge {{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) \wedge \theta \big \} \, \mathrm d t \end{aligned}$$ does not vanish identically. Since $$\begin{aligned} {{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(q) = (\tilde{\mathcal {E}}\theta )^1,\qquad {{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(r) = (\tilde{\mathcal {E}}\theta )^2, \end{aligned}$$ we find $$\begin{aligned} {{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) = \begin{pmatrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0&{}\quad 0 \\ 0 &{}\quad 0 &{}\quad ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{33} &{}\quad ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{34} \\ 0 &{}\quad 0&{}\quad ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{43} &{}\quad ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{44} \end{pmatrix}, \end{aligned}$$ $$\begin{aligned} ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{33}&= 2\alpha (\tilde{\mathcal {E}}\theta )^1 D_t^{-1} q + 2\alpha qD_t^{-1} (\tilde{\mathcal {E}}\theta )^1, \\ ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{34}&= 4 (1-\alpha ) (\tilde{\mathcal {E}}\theta )^2 D_t^{-1}q + 4 (1-\alpha ) r D_t^{-1}(\tilde{\mathcal {E}}\theta )^1 \\&\quad + 2\alpha (\tilde{\mathcal {E}}\theta )^1 D_t^{-1} r + 2\alpha q D_t^{-1} (\tilde{\mathcal {E}}\theta )^2, \\ ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{43}&= 4(1-\alpha )(\tilde{\mathcal {E}}\theta )^1 D_t^{-1}r +4(1-\alpha )q D_t^{-1}(\tilde{\mathcal {E}}\theta )^2\\&\quad + 2\alpha 
(\tilde{\mathcal {E}}\theta )^2 D_t^{-1} q+ 2\alpha r D_t^{-1} (\tilde{\mathcal {E}}\theta )^1, \\ ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{44}&= 2\alpha (\tilde{\mathcal {E}}\theta )^2 D_t^{-1} r + 2\alpha r D_t^{-1} (\tilde{\mathcal {E}}\theta )^2. \end{aligned}$$ $$\begin{aligned} \Psi _{\tilde{\mathcal {E}}} =&\; \frac{1}{2}\int \{ \theta ^3 \wedge ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{33} \wedge \theta ^3 + \theta ^3 \wedge ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{34} \wedge \theta ^4 \\&+\theta ^4 \wedge ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{43} \wedge \theta ^3 +\theta ^4 \wedge ({{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\tilde{\mathcal {E}}) )_{44} \wedge \theta ^4 \} \, \mathrm d t \end{aligned}$$ is given by $$\begin{aligned} \Psi _{\tilde{\mathcal {E}}}= & {} \int \{ \alpha \theta ^3 \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge D_t^{-1} (q \theta ^3) + \alpha q \theta ^3 \wedge D_t^{-1} ((\tilde{\mathcal {E}}\theta )^1 \wedge \theta ^3)\nonumber \\&+ \,2 (1-\alpha ) \theta ^3 \wedge (\tilde{\mathcal {E}}\theta )^2 \wedge D_t^{-1}(q \theta ^4 ) +2 (1-\alpha ) r \theta ^3 \wedge D_t^{-1}((\tilde{\mathcal {E}}\theta )^1 \wedge \theta ^4) \nonumber \\&+\,\alpha \theta ^3 \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge D_t^{-1} (r \theta ^4) +\alpha q \theta ^3 \wedge D_t^{-1} ((\tilde{\mathcal {E}}\theta )^2 \wedge \theta ^4) \nonumber \\&+\,2(1-\alpha ) \theta ^4 \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge D_t^{-1}(r \theta ^3) +2(1-\alpha )q \theta ^4 \wedge D_t^{-1}((\tilde{\mathcal {E}}\theta )^2 \wedge \theta ^3) \nonumber \\&+ \,\alpha \theta ^4 \wedge (\tilde{\mathcal {E}}\theta )^2 \wedge D_t^{-1}(q \theta ^3) + \alpha r \theta ^4 \wedge D_t^{-1} ((\tilde{\mathcal {E}}\theta )^1 \wedge \theta ^3) \nonumber \\&+\, \alpha \theta ^4 \wedge 
(\tilde{\mathcal {E}}\theta )^2 \wedge D_t^{-1} (r \theta ^4) + \alpha r \theta ^4 \wedge D_t^{-1} ((\tilde{\mathcal {E}}\theta )^2 \wedge \theta ^4) \}\, \mathrm d t. \end{aligned}$$ As before, an integration by parts shows that the first two terms on the right-hand side are equal: $$\begin{aligned} \int q \theta ^3 \wedge D_t^{-1}((\tilde{\mathcal {E}}\theta )^1 \wedge \theta ^3) \, \mathrm d t&= - \int D_t^{-1}(q \theta ^3) \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge \theta ^3 \, \mathrm dt\\&= \int \theta ^3 \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge D_t^{-1}(q \theta ^3) \, \mathrm d t. \end{aligned}$$ In the same way, the third and eighth terms are equal, the fourth and seventh are equal, the fifth and tenth are equal, the sixth and ninth are equal, and the eleventh and twelfth are equal. Thus we find $$\begin{aligned} \Psi _{\tilde{\mathcal {E}}}= & {} 2\int \{ \alpha \theta ^3 \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge D_t^{-1} (q \theta ^3) + 2 (1-\alpha ) \theta ^3 \wedge (\tilde{\mathcal {E}}\theta )^2 \wedge D_t^{-1}(q \theta ^4 ) \nonumber \\&+\, \alpha \theta ^3 \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge D_t^{-1} (r \theta ^4) +2(1-\alpha ) \theta ^4 \wedge (\tilde{\mathcal {E}}\theta )^1 \wedge D_t^{-1}(r \theta ^3) \nonumber \\&+\, \alpha \theta ^4 \wedge (\tilde{\mathcal {E}}\theta )^2 \wedge D_t^{-1}(q \theta ^3) + \alpha \theta ^4 \wedge (\tilde{\mathcal {E}}\theta )^2 \wedge D_t^{-1} (r \theta ^4) \}\, \mathrm d t.
\end{aligned}$$ Using that $$\begin{aligned} \tilde{\mathcal {E}}\theta = \begin{pmatrix} - (D_t^{-1}\theta ^2) \\ - (D_t^{-1}\theta ^1) \\ 2\alpha qD_t^{-1}(q\theta ^3) -\mathrm {i}\theta ^4 + 4 (1-\alpha ) r D_t^{-1}(q\theta ^4) + 2\alpha q D_t^{-1} (r\theta ^4) \\ \mathrm {i}\theta ^3 + 4(1-\alpha )q D_t^{-1}(r\theta ^3) + 2\alpha r D_t^{-1}(q\theta ^3) + 2\alpha r D_t^{-1}(r\theta ^4) \end{pmatrix}, \end{aligned}$$ this becomes $$\begin{aligned} \Psi _{\tilde{\mathcal {E}}}= & {} -\,2\int \{ \alpha \theta ^3 \wedge (D_t^{-1}\theta ^2) \wedge D_t^{-1} (q \theta ^3) + 2 (1-\alpha ) \theta ^3 \wedge (D_t^{-1}\theta ^1) \wedge D_t^{-1}(q \theta ^4 ) \nonumber \\&+\, \alpha \theta ^3 \wedge (D_t^{-1}\theta ^2) \wedge D_t^{-1} (r \theta ^4) +2(1-\alpha ) \theta ^4 \wedge (D_t^{-1}\theta ^2) \wedge D_t^{-1}(r \theta ^3) \nonumber \\&+ \, \alpha \theta ^4 \wedge (D_t^{-1}\theta ^1) \wedge D_t^{-1}(q \theta ^3) + \alpha \theta ^4 \wedge (D_t^{-1}\theta ^1) \wedge D_t^{-1} (r \theta ^4) \}\, \mathrm d t. \end{aligned}$$ Consider the two terms which involve all three of the uni-vectors \(\theta ^1\), \(\theta ^3\), and \(\theta ^4\): $$\begin{aligned} \Xi :=&-\,2\int \{ 2 (1-\alpha ) \theta ^3 \wedge (D_t^{-1}\theta ^1) \wedge D_t^{-1}(q \theta ^4 ) + \alpha \theta ^4 \wedge (D_t^{-1}\theta ^1) \wedge D_t^{-1}(q \theta ^3) \}\, \mathrm d t \\ =&\; 4 (1-\alpha ) \int (D_t^{-1}\theta ^1)\wedge \theta ^3 \wedge D_t^{-1}(q \theta ^4 ) \, \mathrm d t + 2\alpha \int (D_t^{-1}\theta ^1)\wedge \theta ^4 \wedge D_t^{-1}(q \theta ^3)\, \mathrm d t. \end{aligned}$$ Let \(P^j = (P_1^j, P_2^j,P_3^j,P_4^j)\), \(j = 1,2,3\), where each \(P_i^j\) is a differential function (i.e., a smooth function of t, q, r, p, s and t-derivatives of q, r, p, s up to some finite, but unspecified, order). Then (see [29, p. 
440]) $$\begin{aligned} \langle \Xi ; P^1,P^2,P^3\rangle&= 4 (1-\alpha ) \int \begin{vmatrix} (D_t^{-1}P_1^1)&P_3^1&D_t^{-1}(q P_4^1) \\ (D_t^{-1}P_1^2)&P_3^2&D_t^{-1}(q P_4^2) \\ (D_t^{-1}P_1^3)&P_3^3&D_t^{-1}(q P_4^3) \\ \end{vmatrix} \, \mathrm d t \\&\quad + 2 \alpha \int \begin{vmatrix} (D_t^{-1}P_1^1)&P_4^1&D_t^{-1}(q P_3^1) \\ (D_t^{-1}P_1^2)&P_4^2&D_t^{-1}(q P_3^2) \\ (D_t^{-1}P_1^3)&P_4^3&D_t^{-1}(q P_3^3) \\ \end{vmatrix} \, \mathrm d t. \end{aligned}$$ Choosing for example $$\begin{aligned} P^1 = (nq^{n-1}q_t, 0, 0, 0), \quad P^2 = (0, 0, 1, 0), \quad P^3 = (0, 0, 0, q_t), \end{aligned}$$ where \(n \ge 1\) is an integer, we see that $$\begin{aligned} \langle \Xi ; P^1,P^2,P^3\rangle&= 4 (1-\alpha ) \int (D_t^{-1}P_1^1) P_3^2 D_t^{-1}(q P_4^3) \, \mathrm d t \\&\quad - 2 \alpha \int (D_t^{-1}P_1^1) P_4^3 D_t^{-1}(q P_3^2) \, \mathrm d t \\&= 4 (1-\alpha ) \int q^n \frac{q^2}{2} \, \mathrm d t - 2 \alpha \int q^n q_t D_t^{-1}(q) \, \mathrm d t \\&= 4 (1-\alpha ) \int \frac{q^{n+2}}{2} \, \mathrm d t + 2 \alpha \int D_t^{-1}(q^n q_t) q \, \mathrm d t \\&= \int \Big (4 (1-\alpha ) \frac{q^{n+2}}{2} + 2 \alpha \frac{q^{n+2}}{n+1} \Big ) \, \mathrm d t. \end{aligned}$$ Regardless of the value of \(\alpha \), this is nonzero for some integer \(n \ge 0\). Since all the other terms in the expression (7.22) for \(\Psi _{\tilde{\mathcal {E}}}\) vanish when applied to this choice of \((P^1, P^2, P^3)\), we conclude that \(\Psi _{\tilde{\mathcal {E}}} \ne 0\). \(\square \) The inverse operator \(D_t^{-1}\) in the above computations can be treated as a pseudo-differential operator in the sense of [29, Definition 5.37] by appealing to the identity (see [29, Eq. (5.55)]) $$\begin{aligned} D_t^{-1} q = \sum _{i=0}^\infty (-1)^i (D_t^iq) D_t^{-i-1}. 
\end{aligned}$$ The failure of \(\tilde{\mathcal {E}}\) to be Hamiltonian presumably has to do with the fact that solutions of NLS only live on the submanifold where \(q_x = p\) and \(r_x = s\), so that one should restrict the Poisson bracket to this submanifold before considering the Jacobi identity. Since the Hamiltonian formulation (7.15) is sufficient for our objective of establishing local Birkhoff coordinates for the x-evolution (7.4) of NLS, and the infinite sequence of conserved quantities can be obtained from the recursion operator for (7.2), we do not pursue this matter further.

An alternative proof of Lemma 7.4 proceeds as follows: As in the proof of Lemma 7.2, it can be shown that \(\tilde{\mathcal {D}}\) and \(\tilde{\mathcal {E}}\) satisfy the following analog of (7.14) for any value of \(\alpha \): $$\begin{aligned} {{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {D}}\theta }(\Theta _{\tilde{\mathcal {E}}}) + {{\,\mathrm{pr}\,}}\mathbf {v}_{\tilde{\mathcal {E}}\theta }(\Theta _{\tilde{\mathcal {D}}}) = 0. \end{aligned}$$ Thus, if \(\tilde{\mathcal {E}}\) were Hamiltonian, then \(\tilde{\mathcal {D}}\) and \(\tilde{\mathcal {E}}\) would form a Hamiltonian pair and then \(\tilde{\mathcal {R}} = \tilde{\mathcal {E}}\tilde{\mathcal {D}}^{-1}\) would be a recursion operator for (7.4). However, a direct computation shows that \(\tilde{\mathcal {R}}\) does not satisfy the defining relation [29, Eq. (5.43)] of a recursion operator for any value of \(\alpha \in \mathbb {C}\).

Following [13], we define the Dirichlet spectrum in terms of \(f_2\) and the Neumann spectrum in terms of \(f_1\).

Cf. Theorem 2.1, where we proved analyticity of the fundamental matrix solution with respect to the potential; see also the short review of analytic maps between complex Banach spaces in Sect. 4.

Note that \(\mathcal D_{\mathrm P}\) consists of both periodic and antiperiodic functions.
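The terminating case of the pseudo-differential expansion of \(D_t^{-1} q\) quoted above is easy to check symbolically: for a polynomial q the series is finite, and both sides agree when applied to a test function. A small sympy sketch (the data q = t^2, f = t^3 are illustrative choices, not from the paper):

```python
# Check of the expansion D_t^{-1} q = sum_i (-1)^i (D_t^i q) D_t^{-i-1}
# applied to a test function f; for polynomial q the sum terminates.
import sympy as sp

t = sp.symbols('t')

def Dinv(f, k=1):
    """k-fold indefinite integration (antiderivatives vanishing at t = 0),
    modelling D_t^{-k}."""
    for _ in range(k):
        f = sp.integrate(f, t)
    return f

q = t**2  # illustrative polynomial coefficient; the series has 3 terms
f = t**3  # illustrative test function

lhs = Dinv(q * f)  # D_t^{-1}(q f) = t^6/6
rhs = sum((-1)**i * sp.diff(q, t, i) * Dinv(f, i + 1)
          for i in range(sp.degree(q, t) + 1))

assert sp.expand(lhs - rhs) == 0
```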
The analogous result for x-periodic real type potentials for the defocusing NLS is trivial, since in this case all periodic eigenvalues are real valued due to selfadjointness of the corresponding ZS-operator, cf. [13].

Acknowledgements

The authors are grateful to Thomas Kappeler for valuable discussions. Furthermore, the authors want to thank the anonymous referees for several helpful suggestions which led to a considerable improvement of the manuscript. Support is acknowledged from the Göran Gustafsson Foundation, the European Research Council, Grant Agreement No. 682537, and the Swedish Research Council, Grant No. 2015-05430.

References

1. Ablowitz, M.J., Kaup, D.J., Newell, A.C., Segur, H.: The inverse scattering transform-Fourier analysis for nonlinear problems. Stud. Appl. Math. 53, 249–315 (1974)
2. Agrawal, G.P.: Nonlinear Fiber Optics. Academic Press, New York (2013)
3. Bona, J.L., Fokas, A.S.: Initial-boundary-value problems for linear and integrable nonlinear dispersive partial differential equations. Nonlinearity 21, T195–T203 (2008)
4. Boutet de Monvel, A., Its, A., Kotlyarov, V.: Long-time asymptotics for the focusing NLS equation with time-periodic boundary condition. C. R. Math. Acad. Sci. Paris 345, 615–620 (2007)
5. Boutet de Monvel, A., Its, A., Kotlyarov, V.: Long-time asymptotics for the focusing NLS equation with time-periodic boundary condition on the half-line. Commun. Math. Phys. 290, 479–522 (2009)
6. Boutet de Monvel, A., Kotlyarov, V., Shepelsky, D., Zheng, C.: Initial boundary value problems for integrable systems: towards the long time asymptotics. Nonlinearity 23, 2483–2499 (2010)
7. Coddington, E.A., Levinson, N.: Theory of Ordinary Differential Equations. McGraw-Hill, New York (1955)
8. Djakov, P., Mityagin, B.S.: Instability zones of periodic 1-dimensional Schrödinger and Dirac operators. Russ. Math. Surv. 61(4), 663–766 (2006)
9. Dunford, N.J., Schwartz, J.T.: Linear Operators, Part I: General Theory. Wiley Classics Library, Amsterdam (1988)
10. Fokas, A.S.: A unified transform method for solving linear and certain nonlinear PDEs. Proc. R. Soc. Lond. A 453, 1411–1443 (1997)
11. Gardner, C.S., Greene, J.M., Kruskal, M.D., Miura, R.M.: Method for solving the Korteweg–de Vries equation. Phys. Rev. Lett. 19, 1095–1097 (1967)
12. Gardner, C.S., Greene, J.M., Kruskal, M.D., Miura, R.M.: Korteweg–de Vries equation and generalizations. VI. Methods for exact solution. Commun. Pure Appl. Math. 27, 97–133 (1974)
13. Grébert, B., Kappeler, T.: The Defocusing NLS Equation and its Normal Form. EMS Series of Lecture Notes in Mathematics, Zürich (2014)
14. Hauser, I., Ernst, F.J.: Initial value problem for colliding gravitational plane waves II. J. Math. Phys. 30, 2322–2336 (1989)
15. Kappeler, T., Makarov, M.: On Birkhoff coordinates for KdV. Ann. Henri Poincaré 2, 807–856 (2001)
16. Kappeler, T., Mityagin, B.: Gap estimates of the spectrum of Hill's equation and action variables for KdV. Trans. Am. Math. Soc. 351, 619–646 (1999)
17. Kappeler, T., Mityagin, B.S.: Estimates for periodic and Dirichlet eigenvalues of the Schrödinger operator. SIAM J. Math. Anal. 33(1), 113–152 (2001)
18. Kappeler, T., Pöschel, J.: KdV & KAM. Ergeb. der Math. und ihrer Grenzgeb. Springer, Berlin (2003)
19. Kappeler, T., Serier, F., Topalov, P.: On the characterization of the smoothness of skew-adjoint potentials in periodic Dirac operators. J. Funct. Anal. 256(7), 2069–2112 (2006)
20. Kappeler, T., Lohrmann, P., Topalov, P., Zung, N.T.: Birkhoff coordinates for the focusing NLS equation. Commun. Math. Phys. 285, 1087–1107 (2009)
21. Kappeler, T., Lohrmann, P., Topalov, P.: On the spectrum of nonself-adjoint Zakharov–Shabat operators on R. In: Khruslov, E., Pastur, L., Shepelsky, D. (eds.) Spectral Theory and Differential Equations, pp. 165–177. Amer. Math. Soc. Transl. Ser. 2, 233, Adv. Math. Sci., 66. Amer. Math. Soc., Providence (2014)
22. Lax, P.: Integrals of nonlinear equations of evolution and solitary waves. Commun. Pure Appl. Math. 21, 467–490 (1968)
23. Lenells, J.: Admissible boundary values for the defocusing nonlinear Schrödinger equation with asymptotically t-periodic data. J. Differ. Eq. 259, 5617–5639 (2015)
24. Lenells, J.: Nonlinear Fourier transforms and the mKdV equation in the quarter plane. Stud. Appl. Math. 136, 3–63 (2016)
25. Lenells, J., Fokas, A.S.: The nonlinear Schrödinger equation with t-periodic data: I. Exact results. Proc. R. Soc. A 471, 20140925 (2015)
26. Lenells, J., Fokas, A.S.: The unified method: II. NLS on the half-line with t-periodic boundary conditions. J. Phys. A 45, 195202 (2012)
27. Magri, F.: A simple model of the integrable Hamiltonian equation. J. Math. Phys. 19, 1156–1162 (1978)
28. Mujica, J.: Complex Analysis in Banach Spaces. North-Holland, Amsterdam (1986)
29. Olver, P.J.: Applications of Lie Groups to Differential Equations. Springer, Berlin (1986)
30. Pöschel, J.: Hill's potential in weighted Sobolev spaces and their spectral gaps. Math. Ann. 349(7), 433–458 (2011)
31. Whittlesey, E.F.: Analytic functions in Banach spaces. Proc. Am. Math. Soc. 16(5), 1077–1083 (1965)
32. Zakharov, V.E., Faddeev, L.D.: The Korteweg–de Vries equation is a fully integrable Hamiltonian system. Funkcional. Anal. i Priložen. 5, 18–27 (1971)
33. Zakharov, V.E., Manakov, S.V.: The complete integrability of the nonlinear Schrödinger equation. Teoret. Mat. Fiz. 19, 332–343 (1974)
34. Zakharov, V.E., Shabat, A.B.: Exact theory of two-dimensional self-focusing and one-dimensional self-modulation in nonlinear media. Sov. Phys. JETP 34, 62–69 (1972)

1. Department of Mathematics, KTH Royal Institute of Technology, Stockholm, Sweden

Lenells, J. & Quirchmayr, R. Math. Ann. (2019). https://doi.org/10.1007/s00208-019-01856-x

Received: 20 July 2018. Revised: 16 May 2019.
Switch to: Citations References in: Probabilistic semantics for epistemic modals: normality assumptions, conditional epistemic spaces, and the strength of `must' and `might' Guillermo Del Pinal Linguistics and Philosophy:1-42 (forthcoming) Add references You must login to add references. Oddness, Modularity, and Exhaustification.Guillermo Del Pinal - 2021 - Natural Language Semantics 29 (1):115-158.details According to the `grammatical account', scalar implicatures are triggered by a covert exhaustification operator present in logical form. This account covers considerable empirical ground, but there is a peculiar pattern that resists treatment given its usual implementation. The pattern centers on odd assertions like #"Most lions are mammals" and #"Some Italians come from a beautiful country", which seem to trigger implicatures in contexts where the enriched readings conflict with information in the common ground. Magri (2009, 2011) argues that, to account (...) for these cases, the basic grammatical approach has to be supplemented with the stipulations that exhaustification is obligatory and is based on formal computations which are blind to information in the common ground. In this paper, I argue that accounts of oddness should allow for the possibility of felicitous assertions that call for revision of the common ground, including explicit assertions of unusual beliefs such as "Most but not all lions are mammals" and "Some but not all Italians come from Italy". To adequately cover these and similar cases, I propose that Magri's version of the Grammatical account should be refined with the novel hypothesis that exhaustification triggers a bifurcation between presupposed (the negated relevant alternatives) and at-issue (the prejacent) content. 
The explanation of the full oddness pattern, including cases of felicitous proposals to revise the common ground, follows from the interaction between presupposed and at-issue content with an independently motivated constraint on accommodation. Finally, I argue that treating the exhaustification operator as a presupposition trigger helps solve various independent puzzles faced by extant grammatical accounts, and motivates a substantial revision of standard accounts of the overt exhaustifier "only". (shrink) Obligatory Irrelevance and the Computation of Ignorance Inferences.Brian Buccola & Andreas Haida - 2019 - Journal of Semantics 36 (4):583-616.details In recent work, Fox has argued, on the basis of both empirical and conceptual considerations, that relevance is closed under speaker belief: if $\phi $ is relevant, then it's also relevant whether the speaker believes $\phi $. We provide a formally explicit implementation of this idea and explore its theoretical consequences and empirical predictions. As Fox already observes, one consequence is that ignorance inferences can only be derived in grammar, via a covert belief operator of the sort proposed by Meyer. (...) We show, further, that the maxim of quantity no longer enriches the meaning of an utterance, per se, but rather acts as a filter on what can be relevant in an utterance context. In particular, certain alternatives are shown to be incapable of being relevant in any context where the maxim of quantity is active — a property we dub obligatory irrelevance. We show that the resulting system predicts a quite restricted range of interpretations for sentences with the scalar item some, as compared to both neo-Gricean and grammatical theories of scalar implicature, and we argue that these predictions seem largely on the right track. (shrink) Epistemic Modals.Seth Yalcin - 2007 - Mind 116 (464):983-1026.details Epistemic modal operators give rise to something very like, but also very unlike, Moore's paradox. 
I set out the puzzling phenomena, explain why a standard relational semantics for these operators cannot handle them, and recommend an alternative semantics. A pragmatics appropriate to the semantics is developed and interactions between the semantics, the pragmatics, and the definition of consequence are investigated. The semantics is then extended to probability operators. Some problems and prospects for probabilistic representations of content and context are explored. Bookmark 309 citations On the Characterization of Alternatives.Danny Fox & Roni Katzir - 2011 - Natural Language Semantics 19 (1):87-107.details We present an argument for revising the theory of alternatives for Scalar Implicatures and for Association with Focus. We argue that in both cases the alternatives are determined in the same way, as a contextual restriction of the focus value of the sentence, which, in turn, is defined in structure-sensitive terms. We provide evidence that contextual restriction is subject to a constraint that prevents it from discriminating between alternatives when they stand in a particular logical relationship with the assertion or (...) the prejacent, a relationship that we refer to as symmetry. Due to this constraint on contextual restriction, discriminating between alternatives in cases of symmetry becomes the task of focus values. This conclusion is incompatible with standard type-theoretic definitions of focus values, motivating our structure-sensitive definition instead. (shrink) On the Event Relativity of Modal Auxiliaries.Valentine Hacquard - 2010 - Natural Language Semantics 18 (1):79-114.details Crosslinguistically, the same modal words can be used to express a wide range of interpretations. This crosslinguistic trend supports a Kratzerian analysis, where each modal has a core lexical entry and where the difference between an epistemic and a root interpretation is contextually determined. 
A long-standing problem for such a unified account is the equally robust crosslinguistic correlation between a modal's interpretation and its syntactic behavior: epistemics scope high (in particular, higher than tense and aspect) and roots low, a fact which has led to proposals that hardwire different syntactic positions for epistemics and roots (cf. Cinque's hierarchy). This paper argues that the range of interpretations a modal receives is even more restricted: a modal must be keyed to certain time-individual pairs, but not others. I show that this can be captured straightforwardly by minimally modifying the Kratzerian account: modals are relative to an event, rather than a world, of evaluation, which readily provides a time (the event's running time) and individual(s) (the event's participants). I propose that this event relativity of modals can in turn explain the correlation between type of interpretation and syntactic position, without stipulation of an interpretation-specific height for modals.

Clarity and the Grammar of Skepticism. Chris Barker - 2009 - Mind and Language 24 (3):253-273.
Why ever assert clarity? If "It is clear that p" is true, then saying so should be at best superfluous. Barker and Taranto (2003) and Taranto (2006) suggest that asserting clarity reveals information about the beliefs of the discourse participants, specifically, that they both believe that p. However, mutual belief is not sufficient to guarantee clarity ("It is clear that God exists"). I propose instead that "It is clear that p" means (roughly) 'the publicly available evidence justifies concluding that p'. Then what asserting clarity reveals is information concerning the prevailing epistemic standard that determines whether a body of evidence is sufficient to justify a claim. If so, the semantics of clarity constitutes a grammatical window into the discourse dynamics of inference and skepticism.
Probability Operators. Seth Yalcin - 2010 - Philosophy Compass 5 (11):916-37.
This is a study in the meaning of natural language probability operators, sentential operators such as probably and likely. We ask what sort of formal structure is required to model the logic and semantics of these operators. Along the way we investigate their deep connections to indicative conditionals and epistemic modals, probe their scalar structure, observe their sensitivity to contextually salient contrasts, and explore some of their scopal idiosyncrasies.

What It Takes to Believe. Daniel Rothschild - 2020 - Philosophical Studies 177 (5):1345-1362.
Much linguistic evidence supports the view that believing something only requires thinking it likely. I assess and reject a rival view, based on recent work on homogeneity in natural language, according to which belief is a strong, demanding attitude. I discuss the implications of the linguistic considerations about 'believe' for our philosophical accounts of belief.

Dynamics of Epistemic Modality. Malte Willer - 2013 - Philosophical Review 122 (1):45-92.
A dynamic semantics for epistemically modalized sentences is an attractive alternative to the orthodox view that our best theory of meaning ascribes to such sentences truth-conditions relative to what is known. This essay demonstrates that a dynamic theory about might and must offers elegant explanations of a range of puzzling observations about epistemic modals. The first part of the story offers a unifying treatment of disputes about epistemic modality and disputes about matters of fact while at the same time avoiding the complexities of alternative theories. The second part of the story extends the basic framework to cover some complicated data about retraction and the interaction between epistemic modality and tense. A comparison between the suggestion made in this essay and current versions of the orthodoxy is provided.
Omissive Implicature. Eric Swanson - 2017 - Philosophical Topics 45 (2):117-137.
In some contexts, not saying S generates a conversational implicature: that the speaker didn't have sufficient reason, all things considered, to say S. I call this an omissive implicature. Standard ways of thinking about conversational implicature make the importance and even the existence of omissive implicatures somewhat surprising. But I argue that there is no principled reason to deny that there are such implicatures, and that they help explain a range of important phenomena. This paper focuses on the roles omissive implicatures play in Quantity implicatures, in particular in solving the symmetry problem for scalar implicatures, and on the political and social importance of omissions where apologies, objections, or other communicative acts are expected or warranted.

Probabilistic Knowledge. Sarah Moss - 2018 - Oxford University Press.
Traditional philosophical discussions of knowledge have focused on the epistemic status of full beliefs. In this book, Moss argues that in addition to full beliefs, credences can constitute knowledge. For instance, your .4 credence that it is raining outside can constitute knowledge, in just the same way that your full beliefs can. In addition, you can know that it might be raining, and that if it is raining then it is probably cloudy, where this knowledge is not knowledge of propositions, but of probabilistic contents.

The notion of probabilistic content introduced in this book plays a central role not only in epistemology, but in the philosophy of mind and language as well. Just as tradition holds that you believe and assert propositions, you can believe and assert probabilistic contents.
Accepting that we can believe, assert, and know probabilistic contents has significant consequences for many philosophical debates, including debates about the relationship between full belief and credence, the semantics of epistemic modals and conditionals, the contents of perceptual experience, peer disagreement, pragmatic encroachment, perceptual dogmatism, and transformative experience. In addition, accepting probabilistic knowledge can help us discredit negative evaluations of female speech, explain why merely statistical evidence is insufficient for legal proof, and identify epistemic norms violated by acts of racial profiling. Hence the central theses of this book not only help us better understand the nature of our own mental states, but also help us better understand the nature of our responsibilities to each other.

Context. Robert Stalnaker - 2014 - Oxford University Press.
Robert Stalnaker explores the contexts in which speech takes place, the ways we represent them, and the roles they play in explaining the interpretation and dynamics of speech. His central thesis is the autonomy of pragmatics: the independence of theory about structure and function of discourse from theory about mechanisms serving those functions.

Blindness, Short-Sightedness, and Hirschberg's Contextually Ordered Alternatives: A Reply to Schlenker (2012). Giorgio Magri - forthcoming - In Linguistic and Psycholinguistic Approaches on Implicatures and Presuppositions. Palgrave.

Relational Semantics and Domain Semantics for Epistemic Modals. Dilip Ninan - 2018 - Journal of Philosophical Logic 47 (1):1-16.
The standard account of modal expressions in natural language analyzes them as quantifiers over a set of possible worlds determined by the evaluation world and an accessibility relation.
A number of authors have recently argued for an alternative account according to which modals are analyzed as quantifying over a domain of possible worlds that is specified directly in the points of evaluation. But the new approach only handles the data motivating it if it is supplemented with a non-standard account of attitude verbs and conditionals. It can be shown that the relational account handles the same data equally well if it too is supplemented with a non-standard account of such expressions.

Must, Knowledge, and (in)Directness. Daniel Lassiter - 2016 - Natural Language Semantics 24 (2):117-163.
This paper presents corpus and experimental data that problematize the traditional analysis of must as a strong necessity modal, as recently revived and defended by von Fintel and Gillies (Natural Language Semantics 18:351–383, 2010). I provide naturalistic examples showing that must p can be used alongside an explicit denial of knowledge of p or certainty in p, and that it can be conjoined with an expression indicating that p is not certain or that not-p is possible. I also report the results of an experiment involving lotteries, where most participants endorsed a sentence of the form must not-p despite being instructed that p is a possibility. Crucially, endorsement was much higher for must in this context than for matched sentences with knowledge or certainty expressions. These results indicate that the requirements for felicitous use of must are weaker than for know and certain, rather than being at least as strong, as the epistemic necessity theory would predict. However, it is possible to account for these data while retaining the key insights of von Fintel and Gillies' analysis of the evidential component of must. I discuss several existing accounts that could be construed in this way and explain why none is completely satisfactory.
I then propose a new model that embeds an existing scalar theory into a probabilistic model of informational dynamics structured around questions and answers.

Still Going Strong. Kai von Fintel & Anthony S. Gillies - manuscript.
In "*Must* ...stay ...strong!" (von Fintel & Gillies 2010) we set out to slay a dragon, or rather what we called The Mantra: that epistemic *must* has a modal force weaker than expected from standard modal logic, that it doesn't entail its prejacent, and that the best explanation for the evidential feel of *must* is a pragmatic explanation. We argued that all three sub-mantras are wrong and offered an explanation according to which *must* is strong, entailing, and the felt indirectness is the product of an evidential presupposition carried by epistemic modals. Mantras being what they are, it is no surprise that each of the sub-mantras has been given new defenses. Here we offer them new problems and update our picture, concluding that *must* is (still) strong.

A Theory of Individual-Level Predicates Based on Blind Mandatory Scalar Implicatures. Giorgio Magri - 2009 - Natural Language Semantics 17 (3):245-297.
Predicates such as tall or to know Latin, which intuitively denote permanent properties, are called individual-level predicates. Many peculiar properties of this class of predicates have been noted in the literature. One such property is that we cannot say #John is sometimes tall. Here is a way to account for this property: this sentence sounds odd because it triggers the scalar implicature that the alternative John is always tall is false, which cannot be, given that, if John is sometimes tall, then he always is. This intuition faces two challenges.
First: this scalar implicature has a weird nature, since it must be surprisingly robust (otherwise, it could be cancelled and the sentence rescued) and furthermore blind to the common knowledge that tallness is a permanent property (since this piece of common knowledge makes the two alternatives equivalent). Second: it is not clear how this intuition could be extended to other, more complicated properties of individual-level predicates. The goal of this paper is to defend the idea of an implicature-based theory of individual-level predicates by facing these two challenges. In the first part of the paper, I try to make sense of the weird nature of these special mismatching implicatures within the recent grammatical framework for scalar implicatures of Chierchia (Structures and Beyond, 2004) and Fox (2007). In the second part of the paper, I show how this implicature-based line of reasoning can be extended to more complicated properties of individual-level predicates, such as restrictions on the interpretation of their bare plural subjects, noted in Carlson (Reference to Kinds in English, doctoral dissertation, University of Massachusetts at Amherst, 1977), Milsark (Linguistic Analysis 3.1:1–29, 1977), and Fox (Natural Language Semantics 3:283–341, 1995); restrictions on German word order, noted in Diesing (Indefinites, 1992); and restrictions on Q-adverbs, noted in Kratzer (The Generic Book, ed. Carlson and Pelletier, 125–175, 1995).

Must... Stay... Strong! Kai von Fintel & Anthony S. Gillies - 2010 - Natural Language Semantics 18 (4):351-383.
It is a recurring mantra that epistemic must creates a statement that is weaker than the corresponding flat-footed assertion: It must be raining vs. It's raining.
Contrary to classic discussions of the phenomenon, such as those by Karttunen, Kratzer, and Veltman, we argue that instead of having a weak semantics, must presupposes the presence of an indirect inference or deduction, rather than of a direct observation. This is independent of the strength of the claim being made. Epistemic must is therefore quite similar to evidential markers of indirect evidence known from languages with rich evidential systems. We work towards a formalization of the evidential component, relying on a structured model of information states (analogous to some models used in the belief dynamics literature). We explain why in many contexts, one can perceive a lack of confidence on the part of the speaker who uses must.

Epistemic Modals Are Assessment-Sensitive. John MacFarlane - 2011 - In Andy Egan & Brian Weatherson (eds.), Epistemic Modality. Oxford University Press.
By "epistemic modals," I mean epistemic uses of modal words: adverbs like "necessarily," "possibly," and "probably," adjectives like "necessary," "possible," and "probable," and auxiliaries like "might," "may," "must," and "could." It is hard to say exactly what makes a word modal, or what makes a use of a modal epistemic, without begging the questions that will be our concern below, but some examples should get the idea across. If I say "Goldbach's conjecture might be true, and it might be false," I am not endorsing the Cartesian view that God could have made the truths of arithmetic come out differently. I make the claim not because I believe in the metaphysical contingency of mathematics, but because I know that Goldbach's conjecture has not yet been proved or refuted. Similarly, if I say "Joe can't be running," I am not saying that Joe's constitution prohibits him from running, or that Joe is essentially a non-runner, or that Joe isn't allowed to run. My basis for making the claim may be nothing more than that I see Joe's running shoes hanging on a hook.
The Application of Constraint Semantics to the Language of Subjective Uncertainty. Eric Swanson - 2016 - Journal of Philosophical Logic 45 (2):121-146.
This paper develops a compositional, type-driven constraint semantic theory for a fragment of the language of subjective uncertainty. In the particular application explored here, the interpretation function of constraint semantics yields not propositions but constraints on credal states as the semantic values of declarative sentences. Constraints are richer than propositions in that constraints can straightforwardly represent assessments of the probability that the world is one way rather than another. The richness of constraints helps us model communicative acts in essentially the same way that we model agents' credences. Moreover, supplementing familiar truth-conditional theories of epistemic modals with constraint semantics helps capture contrasts between strong necessity and possibility modals, on the one hand, and weak necessity modals, on the other.

Presupposed free choice and the theory of scalar implicatures. Paul Marty & Jacopo Romoli - forthcoming - Linguistics and Philosophy:1-62.
A disjunctive sentence like Olivia took Logic or Algebra conveys that Olivia didn't take both classes and that the speaker doesn't know which of the two classes she took. The corresponding sentence with a possibility modal, Olivia can take Logic or Algebra, conveys instead that she can take Logic and that she can take Algebra. These exclusivity, ignorance, and free choice inferences are argued by many to be scalar implicatures. Recent work has looked at cases in which exclusivity and ignorance appear to be computed instead at the presupposition level, independently from the assertion. On the basis of those data, Spector and Sudo (Linguistics and Philosophy 40:473–517, 2017) have argued for a hybrid account relying on a pragmatic principle for deriving implicatures in the presupposition.
In this paper, we observe that a sentence like Noah is unaware that Olivia can take Logic or Algebra has a reading on which free choice appears in the presupposition, but not in the assertion, and we show that deriving this reading is challenging on Spector and Sudo's hybrid account. Following the dialectic in Fox, we argue against a pragmatic approach to presupposition-based implicatures on the grounds that it is not able to account for presupposed free choice. In addition, we raise a novel challenge for Spector and Sudo's account coming from the conflicting presupposed ignorance triggered by sentences like #Noah is unaware that I have a son or a daughter, which is infelicitous even if it's not common knowledge whether the speaker has a son or a daughter. More generally, our data reveal a systematic parallelism between the assertion and presupposition levels in terms of exclusivity, ignorance, and free choice. We argue that such parallels call for a unified analysis, and we sketch how a grammatical theory of implicatures where meaning strengthening operates in a similar way at both levels (:31–57, 2012; Magri in A theory of individual-level predicates based on blind mandatory scalar implicatures, MIT dissertation, 2009; Marty in Implicatures in the DP domain, MIT dissertation, 2017) can account for such parallels.

Conceptual alternatives. Brian Buccola, Manuel Križ & Emmanuel Chemla - forthcoming - Linguistics and Philosophy:1-27.
Things we can say, and the ways in which we can say them, compete with one another. And this has consequences: words we decide not to pronounce have critical effects on the messages we end up conveying. For instance, in saying Chris is a good teacher, we may convey that Chris is not an amazing teacher. How this happens is an unsolvable problem, unless a theory of alternatives indicates what counts, among all the things that have not been pronounced. It
is sometimes assumed, explicitly or implicitly, that any word counts, as long as that word could have replaced one that was actually pronounced. We review arguments for going beyond this powerful idea. In doing so, we argue that the level of words is not the right level of analysis for alternatives. Instead, we capitalize on recent conceptual and associated methodological advances within the study of the so-called "language of thought" to reopen the problem from a new perspective. Specifically, we provide theoretical and experimental arguments that the relation between alternatives and words may be indirect, and that alternatives are not merely linguistic objects in the traditional sense. Rather, we propose that competition in language is significantly determined by general reasoning preferences, or thought preferences. We propose that such non-linguistic preferences can be measured and that these measures can be used to explain linguistic competition, non-linguistically, and more in depth.

Structurally Defined Alternatives and Lexicalizations of XOR. Eric Swanson - 2010 - Linguistics and Philosophy 33 (1):31-36.
In his recent paper on the symmetry problem, Roni Katzir argues that the only relevant factor for the calculation of any Quantity implicature is syntactic structure. I first refute Katzir's thesis with three examples that show that structural complexity is irrelevant to the calculation of some Quantity implicatures. I then argue that it is inadvisable to assume, as Katzir and others do, that exactly one factor is relevant to the calculation of any Quantity implicature.

Judge Dependence, Epistemic Modals, and Predicates of Personal Taste. Tamina Stephenson - 2007 - Linguistics and Philosophy 30 (4):487-525.
Predicates of personal taste (fun, tasty) and epistemic modals (might, must) share a similar analytical difficulty in determining whose taste or knowledge is being expressed.
Accordingly, they have parallel behavior in attitude reports and in a certain kind of disagreement. On the other hand, they differ in how freely they can be linked to a contextually salient individual, with epistemic modals being much more restricted in this respect. I propose an account of both classes using Lasersohn's (Linguistics and Philosophy 28:643–686, 2005) "judge" parameter, at the same time arguing for crucial changes to Lasersohn's view in order to allow the extension to epistemic modals and address empirical problems faced by his account.

What 'Must' Adds. Matthew Mandelkern - 2019 - Linguistics and Philosophy 42 (3):225-266.
There is a difference between the conditions in which one can felicitously use a 'must'-claim like (a) and those in which one can use the corresponding claim without the 'must', as in (b):

a. It must be raining out.
b. It is raining out.

It is difficult to pin down just what this difference amounts to. And it is difficult to account for this difference, since assertions of 'Must p' and assertions of p alone seem to have the same basic goal: namely, communicating that p is true. In this paper I give a new account of the conversational role of 'must'. I begin by arguing that a 'must'-claim is felicitous only if there is a shared argument for the proposition it embeds. I then argue that this generalization, which I call Support, can explain the more familiar generalization that 'must'-claims are felicitous only if the speaker's evidence for them is in some sense indirect. Finally, I propose a pragmatic derivation of Support as a manner implicature.

Interactions with Context. Eric Swanson - 2006 - Dissertation, MIT.
My dissertation asks how we affect conversational context and how it affects us when we participate in any conversation, including philosophical conversations.
Chapter 1 argues that speakers make pragmatic presuppositions when they use proper names. I appeal to these presuppositions in giving a treatment of Frege's puzzle that is consistent with the claim that coreferential proper names have the same semantic value. I outline an explanation of the way presupposition-carrying expressions in general behave in belief ascriptions, and suggest that substitutivity failure is a special case of this behavior. Chapter 2 develops a compositional probabilistic semantics for the language of subjective uncertainty, including epistemic adjectives scoped under quantifiers. I argue that we should distinguish sharply between the effects that epistemically hedged statements have on conversational context and the effects that they have on belief states. I also suggest that epistemically hedged statements are a kind of doxastic advice, and explain how this hypothesis illuminates some otherwise puzzling phenomena. Chapter 3 argues that ordinary causal talk is deeply sensitive to conversational context. The principle that I formulate to characterize that context sensitivity explains at least some of the oddness of 'systematic causal overdetermination,' and explains why some putative overgenerated causes are never felicitously counted, in conversation, as causes. But the principle also makes metaphysical theorizing about causation rather indirectly constrained by ordinary language judgments.

Structurally-Defined Alternatives. Roni Katzir - 2007 - Linguistics and Philosophy 30 (6):669-690.
Scalar implicatures depend on alternatives in order to avoid the symmetry problem. I argue for a structure-sensitive characterization of these alternatives: the alternatives for a structure are all those structures that are at most as complex as the original one. There have been claims in the literature that complexity is irrelevant for implicatures and that the relevant condition is the semantic notion of monotonicity.
I provide new data that pose a challenge to the use of monotonicity and that support the structure-sensitive definition. I show that what appeared to be a problem for the complexity approach is overcome once an appropriate notion of complexity is adopted, and that upon closer inspection, the argument in favor of monotonicity turns out to be an argument against it and in favor of the complexity approach.

Epistemic Modals in Context. Andy Egan, John Hawthorne & Brian Weatherson - 2005 - In Gerhard Preyer & Georg Peter (eds.), Contextualism in Philosophy: Knowledge, Meaning, and Truth. Clarendon Press.

How Not to Theorize About the Language of Subjective Uncertainty. Eric Swanson - 2009 - In Andy Egan & Brian Weatherson (eds.), Epistemic Modality. Oxford University Press.
A successful theory of the language of subjective uncertainty would meet several important constraints. First, it would explain how use of the language of subjective uncertainty affects addressees' states of subjective uncertainty. Second, it would explain how such use affects what possibilities are treated as live for purposes of conversation. Third, it would accommodate 'quantifying in' to the scope of epistemic modals. Fourth, it would explain the norms governing the language of subjective uncertainty, and the differences between them and the norms governing the language of subjective certainty. Neither truth-conditional nor traditional force-modifier theories of the language of subjective uncertainty look adequate to the task of satisfying all four of these constraints.

Subjective Ought. Jennifer Rose Carr - 2015 - Ergo: An Open Access Journal of Philosophy 2.
The subjective deontic "ought" generates counterexamples to classical inference rules like modus ponens. It also conflicts with the orthodox view about modals and conditionals in natural language semantics.
Most accounts of the subjective ought build substantive and unattractive normative assumptions into the semantics of the modal. I sketch a general semantic account, along with a metasemantic story about the context sensitivity of information-sensitive operators.

Embedding Epistemic Modals. Cian Dorr & John Hawthorne - 2013 - Mind 122 (488):867-914.
Seth Yalcin has pointed out some puzzling facts about the behaviour of epistemic modals in certain embedded contexts. For example, conditionals that begin 'If it is raining and it might not be raining, ...' sound unacceptable, unlike conditionals that begin 'If it is raining and I don't know it, ...'. These facts pose a prima facie problem for an orthodox treatment of epistemic modals as expressing propositions about the knowledge of some contextually specified individual or group. This paper develops an explanation of the puzzling facts about embedding within an orthodox framework.

Belief is Weak. John Hawthorne, Daniel Rothschild & Levi Spectre - 2016 - Philosophical Studies 173 (5):1393-1404.
It is tempting to posit an intimate relationship between belief and assertion. The speech act of assertion seems like a way of transferring the speaker's belief to his or her audience. If this is right, then you might think that the evidential warrant required for asserting a proposition is just the same as the warrant for believing it. We call this thesis entitlement equality. We argue here that entitlement equality is false, because our everyday notion of belief is unambiguously a weak one. Believing something is true, we argue, is compatible with having relatively little confidence in it. Asserting something requires something closer to complete confidence. Specifically, we argue that believing a proposition merely requires thinking it likely, but that thinking that a proposition is likely does not entitle one to assert it.
This conclusion conflicts with the standard view that 'full belief' is the central commonsense non-factive attitude.

Epistemic Possibilities. Keith DeRose - 1991 - Philosophical Review 100 (4):581-605.

Scalar Implicature as a Grammatical Phenomenon. Gennaro Chierchia, Danny Fox & Benjamin Spector - 2012 - In Klaus von Heusinger, Claudia Maienborn & Paul Portner (eds.), Semantics: An International Handbook of Natural Language Meaning. De Gruyter Mouton. pp. 3--2297.

Modals Under Epistemic Tension. Guillermo Del Pinal & Brandon Waldon - 2019 - Natural Language Semantics 27 (2):135-188.
According to Kratzer's influential account of epistemic 'must' and 'might', these operators involve quantification over domains of possibilities determined by a modal base and an ordering source. Recently, this account has been challenged by invoking contexts of 'epistemic tension': i.e., cases in which an assertion that 'must p' is conjoined with the possibility that 'not p', and cases in which speakers try to downplay a previous assertion that 'must p' after finding out that 'not p'. Epistemic tensions have been invoked from two directions. Von Fintel and Gillies (2010) propose a return to a simpler modal logic-inspired account: 'must' and 'might' still involve universal and existential quantification, but the domains of possibilities are determined solely by realistic modal bases. In contrast, Lassiter (2016), following Swanson, proposes a more revisionary account which treats 'must' and 'might' as probabilistic operators. In this paper, we present a series of experiments to obtain reliable data on the degree of acceptability of various contexts of epistemic tension. Our experiments include novel variations that, we argue, are required to make progress in this debate. We show that restricted quantificational accounts à la Kratzer fit the overall pattern of results better than either of their recent competitors.
In addition, our results help us identify the key components of restricted quantificational accounts, and on that basis we propose some refinements and general constraints that should be satisfied by any account of the modal auxiliaries.

Epistemic Comparison, Models of Uncertainty, and the Disjunction Puzzle. D. Lassiter - 2015 - Journal of Semantics 32 (4):649-684.

Measure Semantics and Qualitative Semantics for Epistemic Modals. Wesley H. Holliday & Thomas F. Icard - 2013 - Proceedings of SALT 23:514-534.
In this paper, we explore semantics for comparative epistemic modals that avoid the entailment problems shown to result from Kratzer's (1991) semantics by Yalcin (2006, 2009, 2010). In contrast to the alternative semantics presented by Yalcin and Lassiter (2010, 2011), based on finitely additive probability measures, we introduce semantics based on qualitatively additive measures, as well as semantics based on purely qualitative orderings, including orderings on propositions derived from orderings on worlds in the tradition of Kratzer (1991). All of these semantics avoid the entailment problems that result from Kratzer's semantics. Our discussion focuses on methodological issues concerning the choice between different semantics.

A Solution to Karttunen's Problem. Matthew Mandelkern - 2017 - In Proceedings of Sinn und Bedeutung 21.
There is a difference between the conditions in which one can felicitously assert a 'must'-claim versus those in which one can use the corresponding non-modal claim. But it is difficult to pin down just what this difference amounts to. And it is even harder to account for this difference, since assertions of 'Must ϕ' and assertions of ϕ alone seem to have the same basic goal: namely, coming to agreement that [[ϕ]] is true. In this paper I take on this puzzle, known as Karttunen's Problem.
I begin by arguing that a 'must'-claim is felicitous only if there is a shared argument for its prejacent. I then argue that this generalization, which I call Support, can explain the more familiar generalization that 'must'-claims are felicitous only if the speaker's evidence for them is in some sense indirect. Finally, I sketch a pragmatic derivation of Support. (shrink) Deontic Modals and Probability: One Theory to Rule Them All?Fabrizio Cariani - forthcoming - In Nate Charlow & Matthew Chrisman (eds.), Deontic Modality. Oxford University Press.details This paper motivates and develops a novel semantic framework for deontic modals. The framework is designed to shed light on two things: the relationship between deontic modals and substantive theories of practical rationality and the interaction of deontic modals with conditionals, epistemic modals and probability operators. I argue that, in order to model inferential connections between deontic modals and probability operators, we need more structure than is provided by classical intensional theories. In particular, we need probabilistic structure that interacts directly (...) with the compositional semantics of deontic modals. However, I reject theories that provide this probabilistic structure by claiming that the semantics of deontic modals is linked to the Bayesian notion of expectation. I offer a probabilistic premise semantics that explains all the data that create trouble for the rival theories. (shrink) Free Choice and the Theory of Scalar Implicatures* MIT,.Danny Fox - manuscriptdetails This paper will be concerned with the conjunctive interpretation of a family of disjunctive constructions. The relevant conjunctive interpretation, sometimes referred to as a "free choice effect," (FC) is attested when a disjunctive sentence is embedded under an existential modal operator. 
I will provide evidence that the relevant generalization extends (with some caveats) to all constructions in which a disjunctive sentence appears under the scope of an existential quantifier, as well as to seemingly unrelated constructions in which conjunction appears under (...) the scope of negation and a universal quantifier. (shrink) On the Semantics and Pragmatics of Epistemic Vocabulary.Sarah Moss - 2015 - Semantics and Pragmatics.details This paper motivates and develops a novel semantics for several epistemic expressions, including possibility modals and indicative conditionals. The semantics I defend constitutes an alternative to standard truth conditional theories, as it assigns sets of probability spaces as sentential semantic values. I argue that what my theory lacks in conservatism is made up for by its strength. In particular, my semantics accounts for the distinctive behavior of nested epistemic modals, indicative conditionals embedded under probability operators, and instances of constructive dilemma (...) containing epistemic vocabulary. (shrink) A Flexible Contextualist Account of Epistemic Modals.Janice Dowell, J. L. - 2011 - Philosophers' Imprint 11:1-25.details On Kratzer's canonical account, modal expressions (like "might" and "must") are represented semantically as quantifiers over possibilities. Such expressions are themselves neutral; they make a single contribution to determining the propositions expressed across a wide range of uses. What modulates the modality of the proposition expressed—as bouletic, epistemic, deontic, etc.—is context.2 This ain't the canon for nothing. Its power lies in its ability to figure in a simple and highly unified explanation of a fairly wide range of language use. Recently, (...) though, the canon's neat story has come under attack. 
The challenge cases involve the epistemic use of a modal sentence for which no single resolution of the contextual parameter appears capable of accommodating all our intuitions.3 According to these revisionaries, such cases show that the canonical story needs to be amended in some way that makes multiple bodies of information relevant to the assessment of such statements. Here I show that how the right canonical, flexibly contextualist account of modals can accommodate the full range of challenge cases. The key will be to extend Kratzer's formal semantic account with an account of how context selects values for a modal's.. (shrink) At Least Some Determiners Aren't Determiners.Manfred Krifka - 1999 - In Ken Turner (ed.), The Semantics/Pragmatics Interface From Different Points of View. Elsevier. pp. 1--257.details Affective Dependencies.Anastasia Giannakidou - 1999 - Linguistics and Philosophy 22 (4):367-421.details Limited distribution phenomena related to negation and negative polarity are usually thought of in terms of affectivity where affective is understood as negative or downward entailing. In this paper I propose an analysis of affective contexts as nonveridical and treat negative polarity as a manifestation of the more general phenomenon of sensitivity to (non)veridicality (which is, I argue, what affective dependencies boil down to). Empirical support for this analysis will be provided by a detailed examination of affective dependencies in Greek, (...) but the distribution of any will also be shown to follow from (non)veridicality. 
(shrink) Two Puzzles Raised by Oddness in Conjunction.Giorgio Magri - 2014 - Journal of Semantics:ffu011.details Change in View.Gilbert Harman - 1986 - Behaviorism 16 (1):93-96.details Semantics.John Lyons - 1979 - Linguistics and Philosophy 3 (2):289-295.details Pragmatic Halos.Peter Lasersohn - 1999 - Language 75 (3):522-551.details It is a truism that people speak 'loosely'——that is, that they often say things that we can recognize not to be true, but which come close enough to the truth for practical purposes. Certain expressions. such as those including 'exactly', 'all' and 'perfectly', appear to serve as signals of the intended degree of approximation to the truth. This article presents a novel formalism for representing the notion of approximation to the truth, and analyzes the meanings of these expressions in terms (...) of this formalism. Pragmatic loosencss of this kind should be distinguished from authentic truth-conditional vagueness. (shrink)
CommonCrawl
\begin{document} \title[Relativistic BGK model for polyatomic gases near equilibrium]{On a relativistic BGK model for polyatomic gases near equilibrium} \author[B.-H. Hwang]{Byung-Hoon Hwang} \address{Department of Mathematics, Sungkyunkwan University, Suwon 440-746, Republic of Korea} \email{[email protected]} \author[T. Ruggeri]{Tommaso Ruggeri} \address{Department of Mathematics and Alma Mater, Research Center on Applied Mathematics AM$^2$, University of Bologna, Bologna, Italy} \email{[email protected]} \author[S.-B. Yun]{Seok-Bae Yun} \address{Department of Mathematics, Sungkyunkwan University, Suwon 440-746, Republic of Korea} \email{[email protected]} \keywords{relativistic kinetic theory of gases, relativistic Boltzmann equation, relativistic BGK model, nonlinear energy method} \begin{abstract} Recently, a novel relativistic polyatomic BGK model was suggested by Pennisi and Ruggeri [J. of Phys. Conf. Series, 1035, (2018)] to overcome drawbacks of the Anderson-Witting model and the Marle model. In this paper, we prove the unique existence and asymptotic behavior of classical solutions to the relativistic polyatomic BGK model when the initial data is sufficiently close to a global equilibrium. \end{abstract} \maketitle \section{Introduction} In the classical kinetic theory of gases, the BGK relaxation operator \cite{BGK} has been successfully used in place of the Boltzmann collision operator, yielding satisfactory simulations of Boltzmann flows at much lower numerical cost. The relativistic generalization of the BGK approximation was first made by Marle \cite{Mar2,Mar3} and subsequently by Anderson and Witting \cite{AW}. The Marle model is an extension of the classical BGK model in the Eckart frame \cite{CK,E}, and the Anderson-Witting model obtains such an extension using the Landau-Lifshitz frame \cite{CK,LL}.
These models have been widely employed for various relativistic problems \cite{CKT,DHMNS2,FRS,FRS2,HM,JRS}, but several drawbacks were also recognized in the literature. For the Marle model, the relaxation time becomes unbounded for particles with zero rest mass \cite{CK}. The problem for the Anderson-Witting model is that the Landau-Lifshitz frame was established on the assumption that some high-order non-equilibrium quantities are negligible near equilibrium \cite{AW,CK}, which inevitably leads to some loss of consistency. Starting from these considerations, Pennisi and Ruggeri proposed a variant of the Anderson-Witting model in the Eckart frame both for monatomic and polyatomic gases, and proved that the conservation laws of particle number and energy-momentum are satisfied and the H-theorem holds \cite{PR} (see also \cite{newbook}). In the case of a polyatomic gas, the Cauchy problem for the relativistic BGK model reads: \begin{align}\label{PR} \begin{split} \partial_t F+\hat{p}\cdot\nabla_x F&=\frac{U_\mu p^\mu}{c \tau p^0}\left\{\left(1-p^\nu q_\nu \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\right)F_E-F\right\},\cr F(0,x,p)&= F_0(x,p), \end{split} \end{align} where $F\equiv F(x^\alpha, p^\beta, \mathcal{I})$ is the momentum distribution function representing the number density of relativistic particles at the phase point $(x^\alpha,p^\beta)~(\alpha,\beta=0,1,2,3)$ with the microscopic internal energy $\mathcal{I}\in \mathbb{R}^+$ that takes into account the energy due to the internal degrees of freedom of the particles. Here $x^\alpha=(ct,x)\in\mathbb{R}^+\times \mathbb{R}^3$ is the space-time coordinate, and $p^\beta=(\sqrt{(mc)^2+|p|^2},p)\in\mathbb{R}^+\times \mathbb{R}^3$ is the four-momentum.
Greek indices run from $0$ to $3$ and repeated indices are assumed to be summed over their whole range; $c$ is the light velocity, $\hat{p}:=cp/p^0$ is the normalized momentum, $m$ and $\tau$ are respectively the mass and the relaxation time in the rest frame where the momentum of the particles is zero. The macroscopic quantity $b$ is given in \eqref{b def}. Throughout this paper, the metric tensor $g_{\alpha\beta}$ and its inverse $g^{\alpha\beta}$ are given by $$ g_{\alpha\beta}=g^{\alpha\beta}=\text{diag}(1,-1,-1,-1) $$ and indices are raised and lowered as $$ g_{\alpha\mu}p^\mu=p_\alpha,\qquad g^{\alpha\mu}p_\mu=p^\alpha, $$ which implies $p_\alpha=(p^0,-p)$. The Minkowski inner product is defined by $$ p^\mu q_\mu=p_\mu q^\mu=p^0q^0-\sum_{i=1}^3 p^iq^i. $$ To present the macroscopic fields of $F$, we define the particle number flux $V^\mu$ and the energy-momentum tensor $T^{\mu\nu}$ \cite{PR,PR2} by \begin{equation}\label{VT} V^\mu=mc\int_{\mathbb{R}^3}\int_0^\infty p^\mu F \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0},\qquad T^{\mu\nu}=\frac{1}{mc} \int_{\mathbb{R}^3}\int_0^\infty p^\mu p^\nu F\left( mc^2 + \mathcal{I} \right) \phi(\mathcal{I})\,d\mathcal{I}\,\frac{dp}{p^0}. \end{equation} Here $\phi(\mathcal{I})\ge 0$ is the state density of the internal mode, so that $\phi(\mathcal{I}) \, d \mathcal{I}$ represents the number of internal states of a molecule having internal energy between $\mathcal{I}$ and $\mathcal{I}+d \mathcal{I}$; it can take various forms according to the physical context. For example, the following form of $\phi(\mathcal{I})$ is employed in \cite{PR} \begin{equation*} \phi(\mathcal{I})=\mathcal{I}^{(f^i-2)/2} \end{equation*} to obtain the correct classical limit of the internal energy of a polyatomic gas. Here $f^i\ge 0$ is the number of degrees of freedom due to the internal motion of the molecules.
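As a quick sanity check of the conventions above, note that on the mass shell $p^\mu p_\mu=(mc)^2$. The following numerical sketch is our own illustration (the function names and the unit choice $m=c=1$ are ours, not part of the model):

```python
import numpy as np

# Metric tensor g_{mu nu} = diag(1, -1, -1, -1), as in the text.
g = np.diag([1.0, -1.0, -1.0, -1.0])

def four_momentum(p, m=1.0, c=1.0):
    """Mass-shell four-momentum p^beta = (sqrt((mc)^2 + |p|^2), p)."""
    p = np.asarray(p, dtype=float)
    return np.concatenate(([np.sqrt((m * c) ** 2 + p @ p)], p))

def minkowski(p, q):
    """Minkowski inner product p^mu q_mu = p^0 q^0 - p . q."""
    return p @ g @ q

m, c = 1.0, 1.0
p4 = four_momentum([0.3, -1.2, 2.0], m, c)
print(minkowski(p4, p4))  # = (mc)^2 = 1.0 on the mass shell
```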
In this paper, instead of choosing a specific form of $\phi(\mathcal{I})$, we develop an existence theory for \eqref{PR} that is valid for a general class of $\phi(\mathcal{I})$ satisfying a certain condition which covers all the physically relevant cases (see \eqref{phi condition}). Going back to \eqref{VT}, we introduce the decomposition of $V^\mu$ and $T^{\mu\nu}$: \begin{align}\label{Eckart} \begin{split} V^\mu&=nmU^\mu\cr T^{\mu\nu}&=\sigma^{\langle \mu\nu\rangle}+(p+\Pi)h^{\mu\nu}+\frac{1}{c^2}\left(q^\mu U^\nu+q^\nu U^\mu\right)+\frac{e}{c^2}U^\mu U^\nu \end{split}\end{align} which is called the Eckart frame \cite{CK,E}. In \eqref{Eckart}, $n=(mc)^{-1}\sqrt{V^\mu V_\mu}$ denotes the number density, $U^\mu=(\sqrt{c^2+|U|^2},U)$ the Eckart four-velocity, $p$ the pressure, $\Pi$ the dynamical pressure, $h^{\mu\nu}=-g^{\mu\nu}+\frac{1}{c^2}U^\mu U^\nu$ the projection tensor, $\sigma^{\langle\mu\nu\rangle}=T^{\alpha\beta}\left(h^\mu_\alpha h^\nu_\beta-\frac{1}{3}h^{\mu\nu}h_{\alpha\beta}\right)$ the viscous deviatoric stress, $e$ the energy, and $q^\mu=-h^\mu_\alpha U_\beta T^{\alpha\beta}$ the heat flux. We recall that only $14$ field variables in \eqref{Eckart} are independent due to the constraints: \begin{equation*} U_\alpha U^\alpha = c^2, \quad U_\alpha q^\alpha=0, \quad U_\alpha \sigma^{\langle\alpha \beta\rangle} =0, \quad g_{\alpha \beta} \, \sigma^{\langle\alpha \beta\rangle} =0. \end{equation*} The macroscopic fields that appear frequently in this paper are defined as suitable moments of $F$ in the following manner.
\begin{align}\label{macroscopic fields}\begin{split} n^2&=\left(\int_{\mathbb{R}^3}\int_0^\infty F\phi(\mathcal{I}) \,d\mathcal{I}\,dp\right)^2-\sum_{i=1}^3 \left(\int_{\mathbb{R}^3}\int_0^\infty p^iF \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)^2,\cr U^\mu&=\frac{c}{n}\int_{\mathbb{R}^3}\int_0^\infty p^\mu F\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0},\cr e&=\frac{1}{c} \int_{\mathbb{R}^3}\int_0^\infty \left(U^\mu p_\mu\right)^2 F\left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0},\cr q^\mu&=c\int_{\mathbb{R}^3}\int_0^\infty p^\mu\left(U^\nu p_\nu\right) F\left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &\quad-\frac{1}{c}U^\mu \int_{\mathbb{R}^3}\int_0^\infty \left(U^\nu p_\nu\right)^2 F\left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}. \end{split}\end{align} The equilibrium distribution function $F_E$ of \eqref{PR} reads \cite{PR}: \begin{equation}\label{GJdf} F_E=\exp\left\{-1+\frac{m}{k_B}\frac{ g_r}{T}-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{1}{k_B T}U^\mu p_\mu \right\} . \end{equation} We note that \eqref{GJdf} reduces to the well-known J\"{u}ttner distribution function \cite{Juttner} in the monatomic case. The equilibrium temperature $T$ is determined by the following nonlinear relation: \begin{align}\label{gamma relation}\begin{split} \widetilde{e}(T)\equiv\frac{\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp}=\frac{e}{n}, \end{split}\end{align} which is required for \eqref{PR} to satisfy the conservation laws.
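The algebraic identities underlying the Eckart decomposition \eqref{Eckart} above, namely $U_\mu U^\mu=c^2$, $h^{\mu\nu}U_\nu=0$, $g_{\mu\nu}h^{\mu\nu}=-3$, and the idempotence of the mixed projector, are easy to confirm numerically. Here is a small sketch of our own, with the illustrative choice $c=1$ and an arbitrary spatial velocity:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g_{mu nu} = g^{mu nu}
c = 1.0                                # illustrative unit choice (ours)

# Eckart four-velocity U^mu = (sqrt(c^2 + |U|^2), U), so U_mu U^mu = c^2.
u = np.array([0.4, -0.1, 0.25])
U = np.concatenate(([np.sqrt(c**2 + u @ u)], u))

# Projection tensor h^{mu nu} = -g^{mu nu} + U^mu U^nu / c^2.
h = -g + np.outer(U, U) / c**2

print(U @ g @ U)                  # U_mu U^mu = c^2
print(h @ g @ U)                  # h^{mu nu} U_nu = (0, 0, 0, 0)
print(np.sum(g * h))              # g_{mu nu} h^{mu nu} = -3
P = -(h @ g)                      # mixed projector delta^mu_nu - U^mu U_nu / c^2
print(np.max(np.abs(P @ P - P)))  # idempotent, so ~ 0
```

With this mostly-minus signature the mixed projector orthogonal to $U$ is $-h^{\mu}{}_{\nu}=\delta^\mu_\nu-U^\mu U_\nu/c^2$, which is why the sign flip appears in the last check.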
With such $T$, the relativistic chemical potential $g_r$ is defined by the following equation \begin{equation*} e^{-1+\frac{m}{k_B}\frac{g_r}{T}}=\frac{n}{(mc)^3\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I})\,d \mathcal{I}dp}. \end{equation*} Here $k_B$ is the Boltzmann constant. The uniqueness of the equilibrium temperature $T$ will be considered later (see Proposition \ref{solvability gamma}). Then, $F_E$ satisfies \begin{align}\label{conservation laws}\begin{split} &U_\mu\int_{\mathbb{R}^3}\int_0^\infty p^\mu\left\{\left(1-p^\sigma q_\sigma \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\right)F_E-F\right\}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}=0,\cr &U_\mu\int_{\mathbb{R}^3}\int_0^\infty p^\mu p^\nu\left\{\left(1-p^\sigma q_\sigma \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\right)F_E-F\right\} \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}=0, \end{split}\end{align} so that the following conservation laws for $V^\mu$ and $T^{\mu\nu}$ hold true \begin{equation}\label{conservation law} \partial_\mu V^\mu=0,\qquad \partial_\mu T^{\mu\nu}=0. \end{equation} Finally, the macroscopic quantity $b$ of \eqref{PR} is defined as \begin{align}\label{b def} b=\frac{n mc^2}{\gamma^2}\left(\int_0^\infty \frac{K_2(\gamma^*)}{\gamma^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty K_3(\gamma^*)\phi(\mathcal{I})\,d\mathcal{I}, \end{align} where $\gamma$ and $\gamma^*$ denote $$ \gamma=\frac{mc^2}{k_BT}, \qquad\gamma^*=\gamma\left(1+\frac{\mathcal{I}}{mc^2}\right) $$ and $K_n$ is the modified Bessel function of the second kind: \begin{align*} K_{n}(\gamma)&=\int_{0}^{\infty}\cosh(nr)e^{-\gamma \cosh(r) }dr. \end{align*} \noindent\newline \subsection{Main result} The aim of this paper is to study the global in time existence and asymptotic behavior of classical solutions to the Pennisi-Ruggeri model \eqref{PR} when the initial data starts sufficiently close to a global equilibrium.
For this, we decompose the solution $F$ around the global equilibrium: \begin{equation}\label{decomposition} F=F_E^0+f\sqrt{F_E^0}. \end{equation} Here $f$ is the perturbation part, and $F_E^0$ is the global equilibrium defined by \begin{equation*} F_E^0\equiv F_E(g_{r0},0,T_0;p)=\exp\left\{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T_0} \right\} \end{equation*} where $g_{r0}$ and $T_0$ are positive constants. Inserting \eqref{decomposition} into \eqref{PR}, we can rewrite the model as \begin{align*}\label{AWRBGK2} \begin{split} \partial_t f+\hat{p}\cdot\nabla_x f&=\frac{1}{\tau}\left(L(f)+\Gamma (f)\right),\cr f(0,x,p)&=f_0(x,p) \end{split}\end{align*} where $L$ is the linearized operator and $\Gamma$ is the nonlinear perturbation, whose definitions can be found in \eqref{Pf}, Lemma \ref{lin2} and Proposition \ref{lin3}. The initial perturbation $f_0$ is given by $F_0=F_E^0+f_0\sqrt{F_E^0}.$ We now define the notations to state our main result. \begin{itemize} \item We define the weighted $L^2$ inner product: \begin{align*} \langle f,g\rangle_{L^2_{p,\mathcal{I}}}&=\int_{\mathbb{R}^3}\int_0^\infty f({p,\mathcal{I}})g({p,\mathcal{I}})\phi(\mathcal{I})\, d\mathcal{I}dp,\cr \langle f,g\rangle_{L^2_{x,p,\mathcal{I}}}&=\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty f(x,{p,\mathcal{I}})g(x,{p,\mathcal{I}})\phi(\mathcal{I})\, d\mathcal{I}dpdx \end{align*} and the corresponding norms: \[\|f\|^{2}_{L^2_{p,\mathcal{I}}}=\int_{\mathbb{R}^{3}}\int_0^\infty|f({p,\mathcal{I}})|^2\phi(\mathcal{I})\,d\mathcal{I}dp,\quad\|f\|_{L^2_{x,p,\mathcal{I}}}^{2}=\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty|f(x,{p,\mathcal{I}})|^{2}\phi(\mathcal{I})\,d\mathcal{I}dpdx. \] \item We define the operator $\Lambda^s$ $(s\in \mathbb{R})$ by $$ \Lambda^s f(x)=\int_{\mathbb{R}^3}|\xi|^s\hat{f}(\xi)e^{2\pi ix\cdot\xi}\,d\xi $$ where $\hat{f}$ is the Fourier transform of $f$.
\item We denote by $\dot{H}^s_x$ the homogeneous Sobolev space endowed with the norm: $$ \|f\|_{\dot{H}^s_x}:=\|\Lambda^sf\|_{L^2_x}=\left\| |\xi|^s\hat{f}(\xi)\right\|_{L^2_{\xi}} $$ where $\|\cdot\|_{L^2_x}$ and $\|\cdot\|_{L^2_\xi}$ are the usual $L^2$-norms. \item We use the notation $L^2_{p,\mathcal{I}}H^s_x$ to denote $$ \|f\|_{L^2_{p,\mathcal{I}}H^s_x}=\left\|\|f\|_{H^s_x} \right\|_{L^2_{p,\mathcal{I}}} $$ where $\|\cdot\|_{H^s_x}$ is the usual Sobolev norm. \item We define the energy functional $E$ and dissipation rate $\mathcal{D}$ by \begin{align*} E(f)(t)&=\sum_{|\alpha|+|\beta|\le N}\| \partial^\alpha_\beta f\|^{2}_{L^2_{x,{p,\mathcal{I}}}},\cr \mathcal{D}(f)(t)&=\|\{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}+\sum_{1\le |\alpha|+|\beta|\le N}\|\partial^\alpha_\beta f\|^2_{L^2_{x,p,\mathcal{I}}}, \end{align*} where the multi-indices $\alpha=[\alpha_0,\alpha_1,\alpha_2,\alpha_3]$ and $\beta=[\beta_1,\beta_2,\beta_3]$ are used to denote $$ \partial^\alpha_\beta=\partial^{\alpha_0}_{t}\partial^{\alpha_1}_{x^1}\partial^{\alpha_2}_{x^2}\partial^{\alpha_3}_{x^3}\partial^{\beta_1}_{p^1}\partial^{\beta_2}_{p^2}\partial^{\beta_3}_{p^3}. $$ We also define the energy functional and dissipation rate for spatial derivatives by \begin{align*} E_N(f)(t)&=\sum_{0\le k\le N}\| \nabla^k_xf\|^{2}_{L^2_{x,{p,\mathcal{I}}}},\cr \mathcal{D}_N(f)(t)&=\|\{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}+\sum_{1\le k\le N}\|\nabla^k_x f\|^2_{L^2_{x,p,\mathcal{I}}}. \end{align*} \end{itemize} Then, our main result is as follows. \begin{theorem}\label{main3} Let $N\ge 3$ be an integer. Assume that the state density $\phi$ satisfies \begin{equation}\label{phi condition} \int_{\mathbb{R}^3}\int_0^\infty\mathbb{P}(p^0,\mathcal{I}) e^{-C\left(1+\frac{\mathcal{I}}{mc^2}\right) p^0}\phi(\mathcal{I})\,d\mathcal{I}dp < \infty \end{equation} for any positive constant $C$ and any polynomial $\mathbb{P}$ of $p^0$ and $\mathcal{I}$.
Then, there exists a positive constant $\delta$ such that if $E(f_0)<\delta $, \eqref{PR} admits a unique global-in-time solution such that the energy functional is uniformly bounded: $$ E_N(f)(t)+\int_0^t \mathcal{D}_N(f)(s) ds\le CE_N(f_0). $$ If further $f_0\in L^2_{p,\mathcal{I}}\dot{H}^{-s}_x$ for some $s\in [0,3/2)$, then \begin{enumerate} \item The negative Sobolev norm is uniformly bounded: $$ \|\Lambda^{-s}f(t)\|_{L^2_{x,p,\mathcal{I}}}\le C_0. $$ \item The solution converges to the global equilibrium with an algebraic decay rate: $$ \sum_{\ell\le k\le N}\|\nabla^k_x f(t)\|_{L^2_{x,p,\mathcal{I}}}\le C(1+t)^{-\frac{\ell+s}{2}}\quad \text{for}\ -s<\ell\le N-1. $$ \item The microscopic part decays faster by $1/2$: $$ \left\|\nabla^\ell_x\{I-P\}f(t)\right\|_{L^2_{x,p,\mathcal{I}}}\le C(1+t)^{-\frac{\ell+1+s}{2}}\quad \text{for}\ -s<\ell\le N-2. $$ \end{enumerate} \end{theorem} \begin{remark}\label{choices} The choice of state density $\phi(\mathcal{I})$ to guarantee the correct classical limit is not unique. For example, the following choices of $ \phi(\mathcal{I})$ $$ \mathcal{I}^{(f^i-2)/2}\qquad \text{or}\qquad e^{-b\frac{\mathcal{I}}{c^2}}\left(1+\frac{\mathcal{I}}{mc^2}\right)^{r}\mathcal{I}^{(f^i-2)/2}\quad\text{with}\quad b\ge0,\ r>0 $$ lead to the correct classical limit (see \cite{PR3}). \end{remark} Unlike the classical BGK models \cite{BGK,Holway}, the equilibrium temperature of \eqref{PR} is determined through the nonlinear relation \eqref{gamma relation} due to the relativistic nature of the equilibrium distribution function $F_E$. For rigorous analysis, therefore, it must first be analyzed whether the relation \eqref{gamma relation} provides a unique equilibrium temperature in terms of the moments of the solution $F$. That is, any existence problem for \eqref{PR} must be understood as the problem of solving the coupled system of \eqref{PR} and \eqref{gamma relation}.
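To make this solvability concrete, the following sketch (our own; it assumes nondimensional units $m=c=k_B=1$ and the illustrative choice $\phi(\mathcal{I})\equiv 1$, i.e. $f^i=2$) evaluates $\widetilde{e}(T)$ from \eqref{gamma relation} by quadrature and recovers $T$ from a prescribed value of $e/n$ by bisection, relying on the strict monotonicity of $\widetilde{e}$:

```python
import numpy as np

# Nondimensional units m = c = k_B = 1 and state density phi(I) = 1
# (the case f^i = 2 of phi(I) = I^{(f^i-2)/2}); both choices are ours,
# purely for illustration.
rho = np.linspace(0.0, 80.0, 1200)          # |p|; the angular factor 4*pi cancels
I = np.linspace(0.0, 80.0, 1200)[:, None]   # internal energy grid
E = np.sqrt(1.0 + rho**2)                   # sqrt(1 + |p|^2)

def e_tilde(T):
    """Caloric relation e/n = e_tilde(T), approximated by a Riemann sum;
    spherical coordinates reduce dp to rho^2 d rho."""
    w = np.exp(-(1.0 + I) * E / T) * rho**2
    return np.sum(E * (1.0 + I) * w) / np.sum(w)

def temperature(e_over_n, lo=0.05, hi=3.0, iters=60):
    """Recover T from e/n by bisection; valid because e_tilde is strictly increasing."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if e_tilde(mid) < e_over_n else (lo, mid)
    return 0.5 * (lo + hi)

T = 1.3
print(temperature(e_tilde(T)))  # recovers ~ 1.3
```

The monotonicity used here reflects the fact that $\widetilde{e}$ is the mean of the energy $cp^0\big(1+\frac{\mathcal{I}}{mc^2}\big)$ under a Gibbs-type measure, so its derivative in $T$ is a variance divided by $k_BT^2$, and hence positive.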
In the case of relativistic models for a monatomic gas, this solvability problem was addressed in \cite{BCNS} for the Marle model and in \cite{HY2} for the Anderson-Witting model. In \cite{BCNS,HY2}, clever manipulations of the modified Bessel functions of the second kind were crucially used to show the monotonicity of $\widetilde{e}$, which plays an important role in proving the one-to-one correspondence between $T$ and $e/n$. In the case of a polyatomic gas, however, a similar line of argument using the modified Bessel functions does not work due to the presence of the state density of the internal mode $\phi(\mathcal{I})$, which can take various forms (see Remark \ref{choices}). In view of these difficulties, we derive the following identity \begin{equation*} \left\{\widetilde{e}\right\}^{\prime}(T) =\frac{1}{k_B nT^2}\int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0\Big(1+\frac{\mathcal{I}}{mc^2}\Big)-\frac{e}{n}\right\}^2F_E(g_r,0,T)\phi(\mathcal{I})\,d\mathcal{I}dp \end{equation*} to investigate the monotonicity of $\widetilde{e}$ in a different way (see Proposition \ref{solvability gamma}). Since the number density $n$ and the energy $e$ are strictly positive for sufficiently small $E(f)(t)$, the above relation implies that $\widetilde{e}(T)$ is strictly increasing on $(0,\infty)$, which enables us to settle the solvability problem for $T$. We mention that since the relativistic BGK model \eqref{PR} does not guarantee the positivity of solutions, the smallness condition on $E(f)(t)$ is required to preserve the sign of $n$ and $e$. \noindent\newline \subsection{Brief history} The mathematical research on the relativistic BGK model was initiated in 2012 by Bellouquid et al \cite{BCNS} for the Marle model, where the unique determination of equilibrium variables, asymptotic limits and the linearization problem were addressed. 
Afterward, Bellouquid et al \cite{BNU} proved the existence and asymptotic behavior of solutions to the Marle model when the initial data start close to the global equilibrium. Recently, Hwang and Yun \cite{HY1} established the existence and uniqueness of stationary solutions to the boundary value problem for the Marle model in a finite interval. Weak solutions were covered by Calvo et al \cite{CJS}. In the case of the Anderson-Witting model, the unique determination of equilibrium variables, and the existence and asymptotic behavior of near-equilibrium solutions, were addressed in \cite{HY2}. The unique existence of stationary solutions to the Anderson-Witting model in a slab was studied in \cite{HY3}. For the relativistic Boltzmann equation, much more has been established. We refer to \cite{B,D,DE} for the local existence and linearized solutions, \cite{GS1,GS2,Guo Strain Momentum,Strain1,Strain Zhu} for the global existence and asymptotic behavior of near-equilibrium solutions, and \cite{Dud3,Jiang1,Jiang2} for the existence with large data. The spatially homogeneous case was addressed in \cite{LR,Strain Yun}. The regularizing effect of the collision operator has been studied in \cite{A,JY,W}. The propagation of the uniform upper bound was established in \cite{JSY}. We refer to \cite{Cal,Strain2} for the Newtonian limit and to \cite{SS} for the hydrodynamic limit. For results on the relativistic theories of continuum for rarefied gases and their connections with the kinetic theory, see for example \cite{LMR,CPR1,CPR2,PR5,PR6,RXZ}. This paper is organized as follows. In Section 2, the unique determination of the equilibrium variable $T$ is discussed. In Section 3, we study the linearization of the relativistic BGK model \eqref{PR}. In Section 4, we provide estimates for the macroscopic fields and the nonlinear perturbation. Section 5 is devoted to the proof of Theorem \ref{main3}. 
\noindent\newline \section{Unique determination of the equilibrium temperature $T$} We recall from \eqref{gamma relation} that $T$ is determined through the following relation \begin{align*} \widetilde{e}(T)=\frac{\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp}=\frac{e}{n}. \end{align*} In this section, formal calculations are first presented to show that the relativistic BGK model \eqref{PR} satisfies the conservation laws \eqref{conservation laws} if the above relation admits a unique $T$. Then we prove that, when $E(f)(t)$ is small enough, $T$ can indeed be uniquely determined. The following lemma will be used later to simplify integrals of $F_E$. \begin{lemma}\cite{Strain2}\label{rest frame} For $U^\mu=(\sqrt{c^2+|U|^2},U)$, define $\Lambda$ by \begin{align*} \Lambda= \begin{bmatrix} c^{-1}U^0 & -c^{-1}U^1 & -c^{-1}U^2 & -c^{-1}U^3 \cr -U^1& 1+(U^0-1)\frac{(U^1)^2}{|U|^2}&(U^0-1)\frac{U^1U^2}{|U|^2} &(U^0-1)\frac{U^1U^3}{|U|^2} \cr -U^2& (U^0-1)\frac{U^1U^2}{|U|^2} & 1+(U^0-1)\frac{(U^2)^2 }{|U|^2}&(U^0-1)\frac{U^2U^3}{|U|^2} \cr -U^3& (U^0-1)\frac{U^1U^3}{|U|^2}& (U^0-1)\frac{U^2U^3}{|U|^2} & 1+(U^0-1)\frac{(U^3)^2}{|U|^2} \end{bmatrix}. \end{align*} Then $\Lambda$ transforms $U^\mu$ into the local rest frame $(c,0,0,0).$ \end{lemma} \begin{proof} The proof that $\Lambda$ is a Lorentz transformation can be found in \cite{Strain2}. 
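As a quick numerical sanity check before the symbolic computation (a standalone sketch in units $c=1$; the helper names `boost`, `apply_to`, and `minkowski` are ours, not from \cite{Strain2}), one can confirm for a sample $U$ both that $\Lambda U^\mu=(c,0,0,0)$ and that $\Lambda$ leaves the Lorentz inner product invariant, a fact used repeatedly below:

```python
import math

def boost(U):
    # The matrix Lambda of the lemma in units c = 1, for a nonzero spatial part
    # U = (U^1, U^2, U^3); then U^0 = sqrt(1 + |U|^2).
    U0 = math.sqrt(1.0 + sum(u * u for u in U))
    n2 = sum(u * u for u in U)
    L = [[U0, -U[0], -U[1], -U[2]]]
    for i in range(3):
        row = [-U[i]]
        for j in range(3):
            row.append((1.0 if i == j else 0.0) + (U0 - 1.0) * U[i] * U[j] / n2)
        L.append(row)
    return L

def apply_to(L, v):
    # 4x4 matrix acting on a four-vector
    return [sum(L[i][j] * v[j] for j in range(4)) for i in range(4)]

def minkowski(a, b):
    # Lorentz inner product with signature (+, -, -, -)
    return a[0] * b[0] - sum(a[i] * b[i] for i in range(1, 4))
```

For instance, with $U=(0.3,-0.4,0.5)$, `apply_to(boost(U), [U0, *U])` is numerically $(1,0,0,0)$, and `minkowski` values are preserved under the transformation up to rounding.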
The identity $\Lambda U^\mu=(c,0,0,0)$ can be verified by an explicit computation: \begin{align*} \Lambda U^\mu&= \begin{bmatrix} c^{-1}(U^0)^2-c^{-1}(U^1)^2-c^{-1}(U^2)^2-c^{-1}(U^3)^2\cr -U^0U^1+U^1+\frac{(U^0-1)U^1}{|U|^2}|U|^2\cr -U^0U^2+U^2+\frac{(U^0-1)U^2}{|U|^2}|U|^2\cr -U^0U^3+U^3+\frac{(U^0-1)U^3}{|U|^2}|U|^2 \end{bmatrix}=\begin{bmatrix} c^{-1}\left(c^2+|U|^2-|U|^2\right)\cr -U^0U^1+U^1+(U^0-1)U^1 \cr -U^0U^2+U^2+(U^0-1)U^2 \cr -U^0U^3+U^3+(U^0-1)U^3 \end{bmatrix}=\begin{bmatrix} c\cr 0 \cr 0 \cr 0 \end{bmatrix}. \end{align*} \end{proof} \begin{lemma}\label{explicit GM} If \eqref{gamma relation} admits a unique $T$, then $F_E$ satisfies \eqref{conservation laws}. \end{lemma} \begin{remark} The solvability problem of $T$ in \eqref{gamma relation} is addressed in Proposition \ref{solvability gamma}. \end{remark} \begin{proof} We write \eqref{conservation laws} in terms of the macroscopic fields using the Eckart frame \eqref{Eckart}: \begin{align}\label{det}\begin{split} \frac{1}{c}U_\mu\int_{\mathbb{R}^3}\int_0^\infty p^\mu\left(1-p^\alpha q_\alpha \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\right)F_E \phi(\mathcal{I}) \,d\mathcal{I}\frac{dp}{p^0}=\frac{1}{mc^2}U_\mu V^\mu&=n,\cr cU_\mu\int_{\mathbb{R}^3}\int_0^\infty p^\mu p^\nu \left(1-p^\alpha q_\alpha \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\right)F_E \Big(1+\frac{\mathcal{I}}{mc^2}\Big) \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}=U_\mu T^{\mu\nu}&=q^\nu+eU^\nu. 
\end{split}\end{align} By the change of variables $P^\mu:=\Lambda p^\mu$ using Lemma \ref{rest frame}, we have \begin{eqnarray}\label{F_E1}\begin{split} &\int_{\mathbb{R}^3}\int_0^\infty p^\mu F_E \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &=e^{-1+\frac{m}{k_B}\frac{ g_r}{T} }\int_{\mathbb{R}^3}\int_0^\infty \left(\Lambda^{-1}P^\mu\right) e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{1}{k_B T} \left(\Lambda U^\mu\right)\left( \Lambda p_\mu\right)} \phi(\mathcal{I}) \,d\mathcal{I}\frac{dP}{P^0}\cr &=e^{-1+\frac{m}{k_B}\frac{ g_r}{T} }\Lambda^{-1}\left(\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cP^0}{k_B T}}\phi(\mathcal{I}) \,d\mathcal{I}dP,0,0,0\right)\cr &=\frac{1}{c}e^{-1+\frac{m}{k_B}\frac{ g_r}{T} }\left\{\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cP^0}{k_B T}}\phi(\mathcal{I}) \,d\mathcal{I}dP\right\}\Lambda^{-1}\left(c,0,0,0\right) \cr &=\frac{1}{c}e^{-1+\frac{m}{k_B}\frac{ g_r}{T} }\left\{\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T}}\phi(\mathcal{I}) \,d\mathcal{I}dp \right\}U^\mu \end{split}\end{eqnarray} and \begin{align}\label{F_E2}\begin{split} &U_{\mu}\int_{\mathbb{R}^3}\int_0^\infty p^\mu p^\nu F_E\Big(1+\frac{\mathcal{I}}{mc^2}\Big) \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &= e^{-1+\frac{m}{k_B}\frac{ g_r}{T} }\int_{\mathbb{R}^3}\int_0^\infty \big\{(\Lambda U_{\mu}) (\Lambda p^\mu)\big\} (\Lambda^{-1}P^\nu) e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{1}{k_B T} \left(\Lambda U^\mu\right)\left( \Lambda p_\mu\right)} \Big(1+\frac{\mathcal{I}}{mc^2}\Big) \phi(\mathcal{I}) \,d\mathcal{I}\frac{dP}{P^0}\cr &= ce^{-1+\frac{m}{k_B}\frac{ g_r}{T} }\Lambda^{-1}\left( \int_{\mathbb{R}^3}\int_0^\infty P^0e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cP^0}{k_B T}}\Big(1+\frac{\mathcal{I}}{mc^2}\Big) \phi(\mathcal{I}) \,d\mathcal{I}dP,0,0,0\right)\cr &= e^{-1+\frac{m}{k_B}\frac{ g_r}{T} 
}\left\{\int_{\mathbb{R}^3}\int_0^\infty p^0 e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T}}\Big(1+\frac{\mathcal{I}}{mc^2}\Big) \phi(\mathcal{I})\, d\mathcal{I}dp\right\}U^\nu \end{split}\end{align} where we used the facts that (1) the Lorentz inner product and the volume element $dp/p^0$ are invariant under $\Lambda$, and (2) $P^\mu$ takes the form $(\sqrt{(mc)^2+|P|^2},P).$ Then, it follows from \eqref{F_E2} and the decomposition of the third moment \cite{PR}: \begin{align*} \int_{\mathbb{R}^3}\int_0^\infty p^\mu p^\nu p^\alpha F_E \Big(1+\frac{\mathcal{I}}{mc^2}\Big)^2 \phi(\mathcal{I}) \,d\mathcal{I}\frac{dp}{p^0}=\frac{m}{c}\left\{aU^\alpha U^\mu U^\nu+b\left(h^{\alpha\mu}U^\nu+h^{\alpha\nu}U^\mu+h^{\mu\nu}U^\alpha\right)\right\} \end{align*} that the second terms on the l.h.s. of \eqref{det} are calculated respectively as follows \begin{align}\label{second1}\begin{split} &-\frac{1}{c}U_\mu\int_{\mathbb{R}^3}\int_0^\infty p^\mu p^\alpha q_\alpha \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}F_E \phi(\mathcal{I}) \,d\mathcal{I}\frac{dp}{p^0} \cr &=-\frac{1}{bmc^3} q_\alpha \int_{\mathbb{R}^3}\int_0^\infty p^\alpha U_\mu p^\mu \Big(1+\frac{\mathcal{I}}{mc^2}\Big) F_E \phi(\mathcal{I}) \,d\mathcal{I}\frac{dp}{p^0} \cr &=-\frac{1}{bmc^3} e^{-1+\frac{m}{k_B}\frac{ g_r}{T} }\left\{\int_{\mathbb{R}^3}\int_0^\infty p^0 e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T}}\Big(1+\frac{\mathcal{I}}{mc^2}\Big) \phi(\mathcal{I})\, d\mathcal{I}dp\right\} q_\alpha U^\alpha\cr &=0 \end{split}\end{align} and \begin{align}\label{second2}\begin{split} &-cU_\mu\int_{\mathbb{R}^3}\int_0^\infty p^\mu p^\nu p^\alpha q_\alpha \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}F_E \Big(1+\frac{\mathcal{I}}{mc^2}\Big) \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &=-\frac{1}{bc^2} U_\mu q_\alpha\left\{aU^\alpha U^\mu U^\nu+b\left(h^{\alpha\mu}U^\nu+h^{\alpha\nu}U^\mu+h^{\mu\nu}U^\alpha\right)\right\}\cr &=-\frac{1}{c^2} U_\mu q_\alpha \Big( h^{\alpha\nu}U^\mu\Big)\cr &=q^\nu. 
\end{split}\end{align} Here we used the facts that (1) $U_\mu U^\mu=c^2$, and (2) $h^{\alpha\mu}, h^{\mu\nu}$ and $q^\mu$ are orthogonal to $U_\mu$ in the following sense: $$ U_\mu h^{\mu\nu}= U_\mu\Big(-g^{\mu\nu}+\frac{1}{c^2}U^\mu U^\nu\Big)=-U^\nu+U^\nu=0,\qquad U_\mu q^\mu=-U_\mu h^\mu_\alpha U_\beta T^{\alpha\beta}=0. $$ Finally, we go back to \eqref{det} with \eqref{F_E1}--\eqref{second2} to obtain \begin{align*} &e^{-1+\frac{m}{k_B}\frac{ g_r}{T}}\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T}}\phi(\mathcal{I}) \,d\mathcal{I}dp=n,\cr & ce^{-1+\frac{m}{k_B}\frac{ g_r}{T}}\left\{\int_{\mathbb{R}^3}\int_0^\infty p^0e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T}} \Big(1+\frac{\mathcal{I}}{mc^2}\Big)\phi(\mathcal{I})\, d\mathcal{I}dp\right\}U^\nu=eU^\nu. \end{align*} Using the change of variables $\frac{p}{mc}\rightarrow p$, we get \begin{align*} \frac{e}{n}&=\frac{\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp},\cr e^{-1+\frac{m}{k_B}\frac{ g_r}{T}}&=\frac{n}{(mc)^3\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp} \end{align*} which gives the desired result. \end{proof} The following lemma provides information about the ranges of $n$ and $e/n$ when $E(f)(t)$ is small enough. \begin{lemma}\label{n positive} Suppose $E(f)(t)$ is sufficiently small. 
Then we have $$ |n-1|+\left|\frac{e}{n}-\widetilde{e}(T_0)\right|\le C\sqrt{E(f)(t)}.$$ \end{lemma} \begin{proof} By Lemma \ref{lem2} and the Sobolev embedding $H^2(\mathbb{R}_x^{3})\subseteq L^{\infty}(\mathbb{R}_x^{3})$, we have $$ | n -1 |+ \left| \frac{e}{n} -\widetilde{e}(T_0) \right|\le C \| f\|_{L^2_{p,\mathcal{I}}}\le C \left\| \|f\|_{L^\infty_x}\right\|_{L^2_{p,\mathcal{I}}}\le C \left\| \|f\|_{H^2_x}\right\|_{L^2_{p,\mathcal{I}}}\le C\sqrt{E(f)(t)}. $$ Note that the above result is independent of the determination of $T$ since Lemma \ref{lem2} is established using only the definitions of $n$ and $e/n$ given in \eqref{macroscopic fields}. \end{proof} We are now ready to prove that \eqref{gamma relation} determines a unique $T$ in the near-equilibrium regime. \begin{proposition}\label{solvability gamma} Suppose $E(f)(t)$ is sufficiently small. Then $T$ can be uniquely determined by the relation \eqref{gamma relation}. Thus $T$ is written as $$ T=(\widetilde{e})^{-1}\left(\frac{e}{n}\right). 
$$ \end{proposition} \begin{proof} We observe that \begin{align*} \left\{\widetilde{e}\right\}^{\prime}(T)&= \frac{1}{k_BT^2}\frac{\int_{\mathbb{R}^3}\int_0^\infty (1+|p|^2)e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)^2\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp}\cr &- \frac{1}{k_BT^2} \frac{\big(\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I})\, d\mathcal{I}dp\big)^2}{\big( \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp\big)^2}\cr &=\frac{1}{k_B T^2}\left\{\frac{\int_{\mathbb{R}^3}\int_0^\infty (1+|p|^2)e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)^2\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp} -\Big(\frac{e}{n}\Big)^2\right\}. 
\end{align*} Using the change of variables $p\rightarrow \frac{p}{mc}$, one finds \begin{align}\label{solv}\begin{split} \left\{\widetilde{e}\right\}^{\prime}(T) &=\frac{1}{k_B T^2}\left\{c^2\frac{\int_{\mathbb{R}^3}\int_0^\infty (p^0)^2e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\frac{cp^0}{k_B T}} \left(1+ \frac{\mathcal{I}}{mc^2} \right)^2\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\frac{cp^0}{k_B T}}\phi(\mathcal{I}) \,d\mathcal{I}dp} -\Big(\frac{e}{n}\Big)^2\right\}\cr &=\frac{1}{k_B T^2}\left\{\frac{1}{n}\int_{\mathbb{R}^3}\int_0^\infty \biggl\{cp^0\Big(1+ \frac{\mathcal{I}}{mc^2} \Big)\biggl\}^2F_E(g_r,0,T) \phi(\mathcal{I})\, d\mathcal{I}dp -\Big(\frac{e}{n}\Big)^2\right\} \end{split} \end{align} where $F_E(g_r,0,T)$ denotes \begin{align*} F_E(g_r,0,T)&= e^{-1+\frac{m}{k_B}\frac{ g_{r}}{T}-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T} }\cr &=\frac{n}{\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\frac{cp^0}{k_B T}}\phi(\mathcal{I})\,d \mathcal{I}dp}e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T} }. 
\end{align*} We also observe that \begin{align}\label{slov 2}\begin{split} 2\left(\frac{e}{n }\right)^2-\left(\frac{e}{n }\right)^2&= \frac{2e}{n }\frac{\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp}\cr &-\left(\frac{e}{n}\right)^2 \frac{1}{n}\int_{\mathbb{R}^3}\int_0^\infty F_E(g_r,0,T)\phi(\mathcal{I})\,d\mathcal{I}dp\cr &= \frac{1}{n}\int_{\mathbb{R}^3}\int_0^\infty\biggl\{ \frac{2cep^0}{n }\Big(1+ \frac{\mathcal{I}}{mc^2} \Big)-\left(\frac{e}{n}\right)^2 \biggl\}F_E(g_r,0,T) \phi(\mathcal{I})\, d\mathcal{I}dp \end{split} \end{align} where we used the change of variables $ p\rightarrow \frac{p}{mc}$ to get \begin{align*} &\frac{\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp}\cr &= \frac{c\int_{\mathbb{R}^3}\int_0^\infty p^0e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\frac{cp^0}{k_B T}} \left(1+ \frac{\mathcal{I}}{mc^2} \right)\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\frac{cp^0}{k_B T}}\phi(\mathcal{I}) \,d\mathcal{I}dp}\cr &=\frac{c}{n}\int_{\mathbb{R}^3}\int_0^\infty p^0F_E(g_r,0,T) \Big(1+ \frac{\mathcal{I}}{mc^2} \Big)\phi(\mathcal{I})\, d\mathcal{I}dp. \end{align*} Combining \eqref{solv} and \eqref{slov 2}, we have \begin{align}\label{e5} \left\{\widetilde{e}\right\}^{\prime}(T)&=\frac{1}{k_B nT^2}\int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0\Big(1+\frac{\mathcal{I}}{mc^2}\Big)-\frac{e}{n}\right\}^2F_E(g_r,0,T)\phi(\mathcal{I})\,d\mathcal{I}dp. 
\end{align} Since Lemma \ref{n positive} says that $n$ is positive for sufficiently small $E(f)(t)$, \eqref{e5} implies that $\widetilde{e}(T)$ is a strictly increasing function. Furthermore, $\widetilde{e}(T)$ is continuous on $T\in (0,\infty)$ under the assumption \eqref{phi condition}. So, there exists a positive constant $\delta_0$ such that $$ \left[\widetilde{e}(T_0)-\delta_0,~\widetilde{e}(T_0)+\delta_0\right]\subseteq \text{Range}\left(\widetilde{e}(T)\right). $$ If $E(f)(t)\le (\delta_0/C)^2$, where $C$ is the constant in Lemma \ref{n positive}, then we have from Lemma \ref{n positive} that $$ \widetilde{e}(T_0)-\delta_0\le \frac{e}{n}\le \widetilde{e}(T_0)+\delta_0, $$ which implies that the value of $e/n$ lies in the range of $\widetilde{e}(T)$ for sufficiently small $E(f)(t)$. Therefore, there exists a one-to-one correspondence between $T$ and $e/n$, which yields $$ T=(\widetilde{e})^{-1}\left(\frac{e}{n}\right). $$ \end{proof} \section{Linearization} In this section, the linearization of \eqref{PR} is discussed when the solution is sufficiently close to the global equilibrium. First, we provide computations involving $F_E^0$ that will be used often later. \begin{lemma}\label{computation F_E^0} The following identities hold: \begin{enumerate} \item $\displaystyle \int_{\mathbb{R}^3}\int_0^\infty (p^i)^2 F_E^0 \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}= \frac{ k_BT_0}{c }. $ \item $\displaystyle \int_{\mathbb{R}^3}\int_0^\infty (p^0)^2 F_E^0 \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}= \frac{\widetilde{e}(T_0)}{c} .$ \item $\displaystyle \int_{\mathbb{R}^3}\int_0^\infty (p^i)^2 F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)^2\phi(\mathcal{I})\,d\mathcal{I}dp=b_0m.$ \item $\displaystyle\frac{mc^2 b}{k_B nT}= \frac{e}{n}+ k_BT.$ \item $\displaystyle \frac{mc^2b_0}{k_BT_0 }= \widetilde{e}(T_0)+ k_BT_0 . 
$ \end{enumerate} Here $b_0$ denotes \begin{equation*} b_0=\frac{ mc^2}{\gamma_0^2}\left(\int_0^\infty \frac{K_2(\gamma_0^*)}{\gamma_0^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty K_3(\gamma_0^*)\phi(\mathcal{I})\,d\mathcal{I} \end{equation*} with $$ \gamma_0=\frac{mc^2}{k_BT_0},\qquad \gamma^*_0=\gamma_0\biggl( 1+\frac{\mathcal{I}}{mc^2}\biggr). $$ \end{lemma} \begin{proof} For the reader's convenience, we record the definition of $F_E^0$: $$ F_E^0=\exp\left\{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T_0} \right\} $$ where $$ e^{-1+\frac{m}{k_B}\frac{g_{r0}}{T_0}}=\frac{1}{(mc)^3\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}}\phi(\mathcal{I})\,d \mathcal{I}dp}. $$ $\bullet$ Proof of (1): It follows from the change of variables $\frac{p}{mc}\rightarrow p$ that \begin{eqnarray*} &&\int_{\mathbb{R}^3}\int_0^\infty (p^i)^2F_E^0 \biggl(1+\frac{\mathcal{I}}{mc^2}\biggr)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &&=e^{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}} \int_{\mathbb{R}^3}\int_0^\infty (p^i)^2e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T_0}} \biggl(1+\frac{\mathcal{I}}{mc^2}\biggr)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &&=m^3c^2e^{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}} \int_{\mathbb{R}^3}\int_0^\infty (p^i)^2 e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{\sqrt{1+|p|^2}}\cr &&=\frac{1}{3c} \frac{ \int_{\mathbb{R}^3}\int_0^\infty |p|^2 e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{\sqrt{1+|p|^2}}}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}}\phi(\mathcal{I})\,d \mathcal{I}dp}. 
\end{eqnarray*} Using spherical coordinates and integration by parts, we have \begin{eqnarray*} &&\int_{\mathbb{R}^3}\int_0^\infty (p^i)^2F_E^0 \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &&=\frac{1}{3c} \frac{ \int_{\mathbb{R}^3}\int_0^\infty |p|^2 e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{\sqrt{1+|p|^2}}}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}}\phi(\mathcal{I})\,d \mathcal{I}dp}\cr &&=\frac{1}{3c} \frac{ \int_0^\infty\int_0^\infty \frac{r^4}{\sqrt{1+r^2}} e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+r^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I}) \,d\mathcal{I}\,dr}{ \int_0^\infty\int_0^\infty r^2e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+r^2}}\phi(\mathcal{I})\,d \mathcal{I}dr}\cr &&=\frac{k_BT_0}{c}. \end{eqnarray*} \noindent $\bullet$ Proof of (2): It can be obtained in a way similar to (1), as follows \begin{eqnarray*} &&\int_{\mathbb{R}^3}\int_0^\infty (p^0)^2F_E^0 \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &&=e^{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}} \int_{\mathbb{R}^3}\int_0^\infty p^0e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T_0}} \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,dp\cr &&=\frac{1}{c}\frac{\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+\mathcal{I}\right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I}) \,d\mathcal{I}\,dp}{\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+\mathcal{I}\right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}} \phi(\mathcal{I}) \,d\mathcal{I}\,dp} \cr &&=\frac{\widetilde{e}(T_0)}{c}. 
\end{eqnarray*} \noindent $\bullet$ Proof of (3): We first introduce another representation of the modified Bessel functions of the second kind: \begin{align*} K_2(\gamma)=\int_0^\infty \frac{2r^2+1}{\sqrt{1+r^2}}e^{-\gamma\sqrt{1+r^2}}\,dr,\qquad K_3(\gamma)=\int_0^\infty (4r^2+1)e^{-\gamma\sqrt{1+r^2}}\,dr. \end{align*} Using this, one can rewrite $e^{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}}$ as \begin{align}\label{K2}\begin{split} e^{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}} &=\frac{1}{4\pi(mc)^3}\left(\int_0^\infty\int_0^\infty r^2 e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma_0\sqrt{1+r^2}}\phi(\mathcal{I})\,d \mathcal{I}dr\right)^{-1}\cr &=\frac{1}{4\pi(mc)^3}\left(\int_0^\infty\int_0^\infty \frac{2r^2+1}{\sqrt{1+r^2}}\frac{1}{\gamma_0^*} e^{-\gamma^*_0\sqrt{1+r^2}}\phi(\mathcal{I})\,d \mathcal{I}dr\right)^{-1}\cr &=\frac{1}{4\pi(mc)^3 }\left(\int_0^\infty \frac{K_2(\gamma_0^*)}{\gamma_0^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}. \end{split} \end{align} Using spherical coordinates and the change of variables $\frac{p}{mc}\rightarrow p$, we have from \eqref{K2} that \begin{align}\label{(3)} \begin{split} &\int_{\mathbb{R}^3}\int_0^\infty (p^j)^2 F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)^2\phi(\mathcal{I})\,d\mathcal{I}dp\cr &=e^{-1+\frac{m}{k_B}\frac{ g_{r0}}{T_0}}\frac{1}{3 }\int_{\mathbb{R}^3}\int_0^\infty |p|^2 e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T_0}}\left(1+\frac{\mathcal{I}}{mc^2}\right)^2\phi(\mathcal{I})\,d\mathcal{I}dp\cr &=\frac{ (mc)^2}{3 }\left(\int_0^\infty \frac{K_2(\gamma_0^*)}{\gamma_0^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty\int_0^\infty r^4 e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma_0\sqrt{1+r^2}}\left(1+\frac{\mathcal{I}}{mc^2}\right)^2\phi(\mathcal{I})\,d\mathcal{I}dr. 
\end{split} \end{align} By integration by parts twice, \eqref{(3)} becomes \begin{align*} &\int_{\mathbb{R}^3}\int_0^\infty (p^j)^2 F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)^2\phi(\mathcal{I})\,d\mathcal{I}dp\cr &=\frac{ (mc)^2}{3 }\left(\int_0^\infty \frac{K_2(\gamma_0^*)}{\gamma_0^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1} \int_0^\infty\int_0^\infty r^4 e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma_0\sqrt{1+r^2}}\left(1+\frac{\mathcal{I}}{mc^2}\right)^2\phi(\mathcal{I})\,d\mathcal{I}dr\cr &=\frac{ (mc)^2}{\gamma_0^2 } \left(\int_0^\infty \frac{K_2(\gamma_0^*)}{\gamma_0^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty\int_0^\infty (4r^2+1) e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma_0\sqrt{1+r^2}} \phi(\mathcal{I})\,d\mathcal{I}dr\cr &=\frac{ (mc)^2}{\gamma_0^2 }\left(\int_0^\infty \frac{K_2(\gamma_0^*)}{\gamma_0^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty K_3(\gamma_0^*)\phi(\mathcal{I})\,d\mathcal{I} \end{align*} which gives the desired result.\noindent\newline \noindent $\bullet$ Proof of (4): Recall from \eqref{gamma relation} that \begin{align*} \frac{e}{n}&=\frac{\int_{\mathbb{R}^3}\int_0^\infty \sqrt{1+|p|^2}e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}} \left(mc^2+ \mathcal{I} \right)\phi(\mathcal{I})\, d\mathcal{I}dp}{ \int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T}\sqrt{1+|p|^2}}\phi(\mathcal{I}) \,d\mathcal{I}dp}\cr &=mc^2\frac{\int_0^\infty\int_0^\infty r^2\sqrt{1+r^2}e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma\sqrt{1+r^2}} \left(1+ \frac{\mathcal{I}}{mc^2} \right)\phi(\mathcal{I})\, d\mathcal{I}dr}{ \int_0^\infty\int_0^\infty r^2e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma\sqrt{1+r^2}}\phi(\mathcal{I}) \,d\mathcal{I}dr}. 
\end{align*} By \eqref{K2} and integration by parts, one finds \begin{align*} \frac{e}{n}&= mc^2 \left(\int_0^\infty \frac{K_2(\gamma^*)}{\gamma^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty\int_0^\infty r^2\sqrt{1+r^2}e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma\sqrt{1+r^2}} \left(1+ \frac{\mathcal{I}}{mc^2} \right)\phi(\mathcal{I})\, d\mathcal{I}dr\cr &= \frac{mc^2}{\gamma} \left(\int_0^\infty \frac{K_2(\gamma^*)}{\gamma^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty\int_0^\infty (3r^2+1)e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma\sqrt{1+r^2}} \phi(\mathcal{I})\, d\mathcal{I}dr\cr &= \frac{mc^2}{\gamma} \left(\int_0^\infty \frac{K_2(\gamma^*)}{\gamma^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\left(\int_0^\infty K_3(\gamma^*)\phi(\mathcal{I})\,d\mathcal{I}-\int_0^\infty\int_0^\infty r^2e^{-\left(1+ \frac{\mathcal{I}}{mc^2} \right)\gamma\sqrt{1+r^2}}\phi(\mathcal{I}) \,d\mathcal{I}dr \right)\cr &=\frac{\gamma b}{n}- \frac{ mc^2}{\gamma } \end{align*} which gives the desired result. Since (5) can be obtained in the same manner as (4), we omit the proof. \end{proof} \noindent \subsection{Linearization of \eqref{PR}} Define $e_i$ $(i=1,\cdots,5)$ by \begin{align*} e_1&=\sqrt{F_E^0},\qquad e_{2,3,4}=\sqrt{\frac{1}{b_0m}}\left(1+\frac{\mathcal{I}}{mc^2}\right)p\sqrt{F_E^0},\cr e_5&=\sqrt{\frac{1}{k_BT_0^2\left\{\widetilde{e}\right\}^{\prime}(T_{0})}}\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}\sqrt{F_E^0} \end{align*} and the projection operator $P(f)$ by \begin{align}\label{Pf} P(f)= \sum_{i=1}^5\langle f,e_{i} \rangle_{L^2_{p,\mathcal{I}}} e_{i}. \end{align} Then, the equilibrium distribution function $F_E$ given in \eqref{GJdf} is linearized as follows. \begin{lemma}\label{lin2} Suppose $E(f)(t)$ is sufficiently small. We then have $$ \left(1-p^\mu q_\mu \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\right)F_E-F_E^0 =\left(P(f)+\sum_{i=1}^4\Gamma_i (f)\right)\sqrt{F_E^0}. 
$$ Here the nonlinear perturbations $\Gamma_i(f)$ $(i=1,\cdots, 4)$ are given by \begin{eqnarray*} && \Gamma_1(f)=\left(\frac{\Psi_1}{2}-\frac{\Psi^2}{2(2+\Psi+2\sqrt{1+\Psi})}\right)\sqrt{F_E^0},\cr &&\Gamma_2 (f)= \frac{1}{\left\{\widetilde{e}\right\}^{\prime}(T_{0})}\frac{1}{k_BT_0^2}\left\{cp^0 \left(1+\frac{\mathcal{I}}{mc^2}\right) -\widetilde{e}(T_0)\right\}\sqrt{F_E^0}\cr &&\hspace{9mm}\times\biggl\{\frac{1}{c}\left( 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp \right) \int_{\mathbb{R}^3}\int_0^\infty \left\{ 2cp^0\Phi+\Phi^2\right\}F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &&\hspace{12mm}-\frac{1}{c}\left( \frac{\Psi_1}{2} +\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right) \int_{\mathbb{R}^3}\int_0^\infty \left(U^\mu p_\mu\right)^2F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &&\hspace{12mm}-c\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}dp\int_{\mathbb{R}^3}\int_0^\infty p^0f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\,dp\biggl\}\cr &&\Gamma_{3}(f)=-c\frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_0}\left\{\frac{\Psi}{2}+\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right\}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cdot p\sqrt{F_E^0}\cr &&\hspace{9mm}+\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} \Gamma_3^*(f)\cdot p\sqrt{F_E^0}-p^0 q^0\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} \sqrt{F_E^0}\cr &&\Gamma_{4} (f)=\frac{1}{\sqrt{F_E^0}}\int_0^1 (1-\theta)\left(n -1,U,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)D^2\widetilde{F}(\theta)\left(n -1,U ,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)^T\,d\theta, \end{eqnarray*} where $\Gamma_{3}^*(f)$ denotes \begin{eqnarray*}\label{gamma3}\begin{split} &\Gamma_{3}^*(f) = -c^2\sum_{i=1}^3\int_{\mathbb{R}^3}\int_0^\infty 
p^if\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\,\frac{dp}{p^0}\int_{\mathbb{R}^3}\int_0^\infty p p^if\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &\hspace{9mm}+c\int_{\mathbb{R}^3}\int_0^\infty p\Phi_1 F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I} \frac{dp}{p^0}-\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &\hspace{9mm} \times\int_{\mathbb{R}^3}\int_0^\infty \left\{c^2p^0f\sqrt{F_E^0}+\frac{1}{p^0}\left(2cp^0\Phi+\Phi^2 \right)F\right\} \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,dp\cr &\hspace{9mm}+ \left\{\frac{\Psi}{2}+\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right\}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &\hspace{9mm} \times\int_{\mathbb{R}^3}\int_0^\infty \left(U^\nu p_\nu\right)^2 F \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}, \end{split}\end{eqnarray*} and $\Psi,\Psi_1,\Phi$ and $\Phi_1$ are defined as \begin{eqnarray}\label{notation2} \begin{split} &\Psi=2\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp+ \left(\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right)^2\cr &\hspace{3mm}-\sum_{i=1}^3 \left(\int_{\mathbb{R}^3}\int_0^\infty p^if\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)^2,\cr &\Psi_1= \left(\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right)^2-\sum_{i=1}^3 \left(\int_{\mathbb{R}^3}\int_0^\infty p^if\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)^2,\cr &\Phi= U_{\mu} p^\mu-cp^0,\cr &\Phi_1= U_{\mu} p^\mu-cp^0+c\sum_{i=1}^3 p^i\int_{\mathbb{R}^3}\int_0^\infty p^i f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}. 
\end{split}\end{eqnarray} \end{lemma} \begin{proof} We consider the transitional macroscopic fields between $F$ and $F_E^0$: \begin{equation*} \left(n_\theta, U_\theta,\left(\frac{e}{n}\right)_\theta,q^\mu_\theta\right)=\theta \Big(n, U,\frac{e}{n},q^\mu\Big)+(1-\theta) (1,0,\widetilde{e}(T_0),0), \end{equation*} and define $\widetilde{F}(\theta)$ and $F(\theta)$ by \begin{align*} \widetilde{F}(\theta)&=\left(1-p^\mu q_{\theta\mu} \frac{1+\frac{\mathcal{I}}{mc^2}}{b_\theta mc^2}\right) e^{-1+\frac{m}{k_B}\frac{ g_{r\theta}}{T_\theta}-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{1}{k_B T_\theta}U_\theta^\mu p_\mu }\cr F(\theta)&=e^{-1+\frac{m}{k_B}\frac{ g_{r\theta}}{T_\theta}-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{1}{k_B T_\theta}U_\theta^\mu p_\mu }. \end{align*} Here $g_{r\theta}, T_\theta$ and $b_\theta$ are given by \begin{align*} e^{-1+\frac{m}{k_B}\frac{g_{r\theta}}{T_\theta}}&=\frac{n_\theta}{(mc)^3\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_\theta}\sqrt{1+|p|^2}}\phi(\mathcal{I})\,d \mathcal{I}dp},\qquad T_\theta=\{\widetilde{e}\}^{-1}\left(\frac{e}{n}\right)_\theta \cr &b_\theta=\frac{n_\theta mc^2}{\gamma_\theta^2}\left(\int_0^\infty \frac{K_2(\gamma_\theta^*)}{\gamma_\theta^*}\phi(\mathcal{I})\, d\mathcal{I}\right)^{-1}\int_0^\infty K_3(\gamma_\theta^*)\phi(\mathcal{I})\,d\mathcal{I} \end{align*} with $$ \gamma_\theta =\frac{mc^2}{k_BT_\theta},\qquad \gamma_\theta^*=\gamma_\theta\left(1+\frac{\mathcal{I}}{mc^2}\right). $$ By Lemma \ref{computation F_E^0}, $b_\theta$ can also be expressed as $$ b_\theta= \frac{k_B n_\theta T_\theta \widetilde{e}(T_\theta) }{mc^2} + \frac{n_\theta \left(k_B T_\theta\right)^2 }{mc^2}.
$$ Thus $\widetilde{F}(\theta)$ can be regarded as a function of $n_\theta, U_\theta,(e/n)_\theta $ and $q^\mu_\theta$, so it follows from the Taylor expansion that \begin{align*} & \left(1-p^\mu q_\mu \frac{1+\frac{\mathcal{I}}{mc^2}}{b mc^2}\right)F_E-F_E^0 \cr &= \widetilde{F}(1)-\widetilde{F}(0) \cr &= \frac{\partial \widetilde{F}}{\partial n_\theta}\biggl|_{\theta=0}\frac{\partial n_\theta}{\partial \theta}+\nabla_{U_\theta}\widetilde{F}\Big|_{\theta=0}\cdot \frac{\partial U_\theta}{\partial \theta}+\frac{\partial \widetilde{F}}{\partial (e/n)_\theta} \biggl|_{\theta=0}\frac{\partial (e/n)_\theta}{\partial \theta}+\frac{\partial \widetilde{F}}{\partial q^0_\theta} \biggl|_{\theta=0}\frac{\partial q^0_\theta}{\partial \theta}+\nabla_{q_\theta}\widetilde{F}\biggl|_{\theta=0}\cdot \frac{\partial q_\theta}{\partial \theta} \cr &+ \int_0^1 (1-\theta)\left(n -1,U,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)D^2\widetilde{F}(\theta)\left(n -1,U ,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)^T\,d\theta\cr &=(n -1) F_E^0 +\frac{1}{\left\{\widetilde{e}\right\}^{\prime}(T_{0})}\frac{1}{k_BT_0^2}\left(\frac{e}{n} -\widetilde{e}(T_0)\right)\biggl\{ cp^0 \biggl(1+\frac{\mathcal{I}}{mc^2}\biggl) -\widetilde{e}(T_0)\biggl\} F_E^0 + \frac{1+\frac{\mathcal{I}}{mc^2} }{k_BT_0} U \cdot p F_E^0 \cr &-p ^\mu q_\mu \frac{1+\frac{\mathcal{I}}{mc^2}}{b_0 mc^2} F_E^0 + \int_0^1 (1-\theta)\left(n -1,U,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)D^2\widetilde{F}(\theta)\left(n -1,U ,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)^T\,d\theta.
\end{align*} In the last identity, we used the simple calculations \begin{align*} &\frac{\partial \widetilde{F}}{\partial n_\theta}\biggl|_{\theta=0}=F_E^0,\qquad \nabla_{U_\theta}\widetilde{F}\Big|_{\theta=0}= \frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_0} pF_E^0,\cr &\frac{\partial}{\partial (e/n)_\theta}\widetilde{F}\biggl|_{\theta=0}=\frac{1}{\left\{\widetilde{e}\right\}^{\prime}(T_{0})}\frac{1}{k_BT_0^2}\left\{cp^0 \left(1+\frac{\mathcal{I}}{mc^2 }\right) -\widetilde{e}(T_0) \right\}F_E^0,\cr &\frac{\partial}{\partial q_\theta^0}\widetilde{F}\biggl|_{\theta=0}=-p^0\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} F_E^0,\qquad \nabla_{q_\theta}\widetilde{F}\Big|_{\theta=0}=p\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} F_E^0. \end{align*} Now we use the notation $I_i$ $(i=1,\cdots,4)$ to denote the first four terms in the last identity, and we decompose each of them into its linear and nonlinear parts. \newline $\bullet$ Decomposition of $I_1$: Inserting $F=F_E^0+f\sqrt{F_E^0}$ into the definition of $n$, one finds \begin{align}\label{n=}\begin{split} n&=\left\{ \left(\int_{\mathbb{R}^3}\int_0^\infty F \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right)^2-\sum_{i=1}^3 \left(\int_{\mathbb{R}^3}\int_0^\infty p^i F \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)^2\right\}^{\frac{1}{2}}\cr &=\biggl\{1+2\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp+ \left(\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right)^2\cr &-\sum_{i=1}^3 \left(\int_{\mathbb{R}^3}\int_0^\infty p^if\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)^2\biggl\}^{\frac{1}{2}}\cr &=\sqrt{1+\Psi} \end{split}\end{align} which, together with the following identity \cite{BCNS}: \begin{equation}\label{route pi} \sqrt{1+\Psi}=1+ \frac{\Psi}{2}-\frac{\Psi^2}{2(2+\Psi+2\sqrt{1+\Psi})} \end{equation} gives $$ n-1= \frac{\Psi}{2}-\frac{\Psi^2}{2(2+\Psi+2\sqrt{1+\Psi})}.
$$ Therefore, \begin{align*} I_1&=(n-1)F_E^0\cr &=\left( \frac{\Psi}{2}-\frac{\Psi^2}{2(2+\Psi+2\sqrt{1+\Psi})} \right)F_E^0\cr &= \left(\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp+\frac{\Psi_1}{2}-\frac{\Psi^2}{2(2+\Psi+2\sqrt{1+\Psi})}\right)F_E^0\cr &= \left\{\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right\}F_E^0+\Gamma_1(f)\sqrt{F_E^0}. \end{align*} \newline \noindent $\bullet$ Decomposition of $I_2$: Considering \eqref{n=} and the following identity \cite{BCNS}: \begin{align*} \frac{1}{\sqrt{1+\Psi}}=1-\frac{\Psi}{2}-\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}, \end{align*} one can see that \begin{align}\label{1/n=} \begin{split} \frac{1}{n}&=1-\frac{\Psi}{2}-\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\cr &= 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp-\frac{\Psi_1}{2} -\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}. \end{split}\end{align} We have from $\eqref{notation2}_3$ that \begin{equation}\label{uq2} (U^\mu p_\mu)^2=(cp^0)^2+2cp^0\Phi+\Phi^2. 
\end{equation} By \eqref{1/n=} and \eqref{uq2}, $\displaystyle e/n -\widetilde{e}(T_0)$ is decomposed as \begin{align}\label{e-e3} \begin{split} \frac{e}{n} -\widetilde{e}(T_0) &=\frac{1}{nc} \int_{\mathbb{R}^3}\int_0^\infty \left(U^\mu p_\mu\right)^2F\left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}-\widetilde{e}(T_0)\cr &= \frac{1}{c}\left\{ 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp-\frac{\Psi_1}{2} -\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right\}\cr & \times\int_{\mathbb{R}^3}\int_0^\infty \left\{ (cp^0)^2+2cp^0\Phi+\Phi^2\right\}F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}-\widetilde{e}(T_0)\cr &\equiv \left( 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp \right)\int_{\mathbb{R}^3}\int_0^\infty c(p^0)^2 F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}-\widetilde{e}(T_0)\cr & +R_{I_2}, \end{split}\end{align} where $R_{I_2}$ denotes \begin{align*} R_{I_2}&= \frac{1}{c}\left( 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp \right) \int_{\mathbb{R}^3}\int_0^\infty \left\{ 2cp^0\Phi+\Phi^2\right\}F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0} \cr & -\frac{1}{c}\left( \frac{\Psi_1}{2} +\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right) \int_{\mathbb{R}^3}\int_0^\infty \left(U^\mu p_\mu\right)^2F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}. \end{align*} Here $R_{I_2}$ is nonlinear with respect to $f$, which can be seen from the following identity \begin{align*} \Phi&=\sqrt{c^2+|U|^2}p^0-U\cdot p-cp^0 \cr &=\left(\frac{|U|^2}{2c}-\frac{|U|^4}{2c(2c^2+|U|^2+2c\sqrt{c^2+|U|^2})}\right)p^0-U\cdot p.
\end{align*} Using Lemma \ref{computation F_E^0} (2), the first two terms in the last identity of \eqref{e-e3} reduce to \begin{align}\label{e-e4} \begin{split} & \left( 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right)\int_{\mathbb{R}^3}\int_0^\infty cp^0\left( F_E^0 +f\sqrt{F_E^0}\right)\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\,dp-\widetilde{e}(T_0)\cr & = \left( 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right)\left(\widetilde{e}(T_0)+\int_{\mathbb{R}^3}\int_0^\infty cp^0 f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\,dp\right)-\widetilde{e}(T_0)\cr & =\int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\} f\sqrt{F_E^0} \phi(\mathcal{I})\,d\mathcal{I}dp\cr &-c\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}dp\int_{\mathbb{R}^3}\int_0^\infty p^0f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\,dp.
\end{split}\end{align} Now we go back to \eqref{e-e3} with \eqref{e-e4} to get \begin{align}\label{e-e0} \begin{split} \frac{e}{n}-\widetilde{e}(T_0)&= \int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\} f\sqrt{F_E^0} \phi(\mathcal{I})\,d\mathcal{I}dp\cr &-c\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}dp\int_{\mathbb{R}^3}\int_0^\infty p^0f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\,dp+R_{I_2} , \end{split} \end{align} which leads to \begin{align*} I_2&=\frac{1}{\left\{\widetilde{e}\right\}^{\prime}(T_{0})}\frac{1}{k_BT_0^2}\left(\frac{e}{n} -\widetilde{e}(T_0)\right)\left(cp^0 \Big(1+\frac{\mathcal{I}}{mc^2}\Big) -\widetilde{e}(T_0)\right)F_E^0\cr &=\frac{1}{\left\{\widetilde{e}\right\}^{\prime}(T_{0})}\frac{1}{k_BT_0^2}\int_{\mathbb{R}^3}\int_0^\infty \left(cp^0\Big(1+\frac{\mathcal{I}}{mc^2}\Big)-\widetilde{e}(T_0)\right) f\sqrt{F_E^0} \phi(\mathcal{I})\,d\mathcal{I}\,dp\cr & \times\left(cp^0 \Big(1+\frac{\mathcal{I}}{mc^2}\Big) -\widetilde{e}(T_0)\right) F_E^0 +\Gamma_2 (f)\sqrt{F_E^0}. \end{align*} \newline \noindent $\bullet$ Decomposition of $I_3$: By \eqref{1/n=}, $U$ is written as \begin{align}\label{u_F} \begin{split} U&=\frac{c}{n}\int_{\mathbb{R}^3}\int_0^\infty pF\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &=c\left\{1-\frac{\Psi}{2}-\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right\}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}. 
\end{split}\end{align} This directly leads to the linearization of $I_3$ as follows \begin{align}\label{u}\begin{split} I_3&=\frac{1+\frac{\mathcal{I}}{mc^2} }{k_BT_0} U \cdot p F_E^0\cr &= c\frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_0}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cdot p F_E^0 \cr &-c\frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_0}\left\{\frac{\Psi}{2}+\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right\}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cdot p F_E^0 \end{split}\end{align} which will be combined with the result of $I_4$. \noindent\newline $\bullet$ Decomposition of $I_4$: Recall from \eqref{macroscopic fields} that the heat flux $q$ is defined by \begin{align}\label{heat}\begin{split} q &=c\int_{\mathbb{R}^3}\int_0^\infty p \left(U^\nu p_\nu\right)F \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &-\frac{1}{ c }U \int_{\mathbb{R}^3}\int_0^\infty \left(U^\nu p_\nu\right)^2 F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}. 
\end{split} \end{align} Inserting $\eqref{notation2}_4$ into the first term of \eqref{heat}, one finds \begin{eqnarray}\label{flux1}\begin{split} & c\int_{\mathbb{R}^3}\int_0^\infty p (U^\nu p_\nu)F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0} \cr &= c^2\int_{\mathbb{R}^3}\int_0^\infty p \biggl(F_E^0+f\sqrt{F_E^0}\biggl) \biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)\phi(\mathcal{I})\,d\mathcal{I} dp \cr &- c^2\sum_{i=1}^3\int_{\mathbb{R}^3}\int_0^\infty p^i f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\int_{\mathbb{R}^3}\int_0^\infty p p^i \biggl(F_E^0+f\sqrt{F_E^0}\biggl) \biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &+ c\int_{\mathbb{R}^3}\int_0^\infty p \Phi_1 F \biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &=c^2\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}dp- ck_BT_0 \int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr & -c^2\sum_{i=1}^3\int_{\mathbb{R}^3}\int_0^\infty p^if\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\,\frac{dp}{p^0}\int_{\mathbb{R}^3}\int_0^\infty p p^if\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &+c\int_{\mathbb{R}^3}\int_0^\infty p \Phi_1 F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I} \frac{dp}{p^0}. 
\end{split} \end{eqnarray} In the last identity, we used the spherical symmetry of $F_E^0$ and Lemma \ref{computation F_E^0} (1) so that \begin{eqnarray*} &&\sum_{i=1}^3\int_{\mathbb{R}^3}\int_0^\infty p^i f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\int_{\mathbb{R}^3}\int_0^\infty pp^i F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &&=\frac{1}{3}\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\int_{\mathbb{R}^3}\int_0^\infty |p|^2 F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0} \cr &&=\frac{k_BT_0}{c}\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}. \end{eqnarray*} To deal with the second term of \eqref{heat}, we use Lemma \ref{computation F_E^0} (2) and \eqref{uq2} to obtain \begin{eqnarray*} &&\frac{1}{c}U\int_{\mathbb{R}^3}\int_0^\infty \left(U^\nu p_\nu\right)^2 F \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &&=\frac{1}{c}U\int_{\mathbb{R}^3}\int_0^\infty \left\{(cp^0)^2+2cp^0\Phi+\Phi^2 \right\}\left(F_E^0+f\sqrt{F_E^0}\right) \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &&= \widetilde{e}(T_0)U + U\int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0f\sqrt{F_E^0}+\frac{1}{cp^0} \left( 2cp^0\Phi+\Phi^2 \right)F\right\} \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,dp , \end{eqnarray*} which, together with \eqref{u_F} gives \begin{eqnarray}\label{flux3} \begin{split} & \frac{1}{c}U\int_{\mathbb{R}^3}\int_0^\infty \left(U^\nu p_\nu\right)^2 F \left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &=c\widetilde{e}(T_0)\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}+\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr &\times 
\int_{\mathbb{R}^3}\int_0^\infty \left\{c^2p^0f\sqrt{F_E^0}+\frac{1}{p^0}\left(2cp^0\Phi+\Phi^2 \right)F \right\}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,dp \cr &- \left\{\frac{\Psi}{2}+\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right\}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cr & \times\int_{\mathbb{R}^3}\int_0^\infty \left(U^\nu p_\nu\right)^2 F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}. \end{split} \end{eqnarray} We go back to \eqref{heat} with \eqref{flux1} and \eqref{flux3} to get \begin{eqnarray}\label{flux}\begin{split} &p\cdot q\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} F_E^0\cr &= \frac{1+\frac{\mathcal{I}}{mc^2}}{b_0m }\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}dp\cdot pF_E^0\cr &- c\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2}\left(k_BT_0+\widetilde{e}(T_0)\right) \int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cdot pF_E^0+ \frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2}\Gamma_3^*(f)\cdot pF_E^0\cr &=\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0m }\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}dp\cdot p F_E^0\cr &-c\frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_0}\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cdot pF_E^0+\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2}\Gamma_3^*(f)\cdot pF_E^0 \end{split} \end{eqnarray} where we used Lemma \ref{computation F_E^0} (5).
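For the reader's convenience, the coefficient simplification in the last equality can also be checked directly from the expression of $b_\theta$ recorded in the proof above: evaluating it at $\theta=0$, i.e. at $(n_\theta,T_\theta)=(1,T_0)$, gives
$$
b_0mc^2=k_BT_0\widetilde{e}(T_0)+\left(k_BT_0\right)^2=k_BT_0\left(k_BT_0+\widetilde{e}(T_0)\right),\qquad\text{so that}\qquad c\,\frac{k_BT_0+\widetilde{e}(T_0)}{b_0mc^2}=\frac{c}{k_BT_0}.
$$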
We combine \eqref{u} and \eqref{flux} to conclude \begin{eqnarray*} &&I_3+I_4 =\frac{1+\frac{\mathcal{I}}{mc^2} }{k_BT_0} U \cdot p F_E^0+p \cdot q \frac{1+\frac{\mathcal{I}}{mc^2}}{b_0 mc^2} F_E^0-p ^0 q^0 \frac{1+\frac{\mathcal{I}}{mc^2}}{b_0 mc^2} F_E^0\cr &&\hspace{11.7mm}=c\frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_0}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cdot p F_E^0 +p\cdot q\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} F_E^0 -p^0 q^0\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} F_E^0\cr &&\hspace{11.7mm}-c\frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_0}\left\{\frac{\Psi}{2}+\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right\}\int_{\mathbb{R}^3}\int_0^\infty pf\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\cdot p F_E^0 \cr &&\hspace{11.7mm}= \frac{1+\frac{\mathcal{I}}{mc^2}}{b_0m} \int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}dp \cdot pF_E^0+\Gamma_3(f)\sqrt{F_E^0}. \end{eqnarray*} In the last identity, $p^0 q^0\frac{1+\frac{\mathcal{I}}{mc^2}}{b_0mc^2} F_E^0$ was absorbed into $\Gamma_3(f)\sqrt{F_E^0}$ since $q^0$ is nonlinear with respect to $f$. Finally, letting $$ \Gamma_4(f)=\frac{1}{\sqrt{F_E^0}}\int_0^1 (1-\theta)\left(n -1,U,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)D^2\widetilde{F}(\theta)\left(n -1,U ,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)^T\,d\theta $$ completes the proof. \end{proof} In the following proposition, we present the linearization of \eqref{PR}. \begin{proposition}\label{lin3} Suppose $E(f)(t)$ is sufficiently small. 
Then, \eqref{PR} can be linearized with respect to the perturbation $f$ as follows \begin{align}\label{LAW}\begin{split} \partial_t f+\hat{p}\cdot\nabla_x f&=\frac{1}{\tau}\left(L(f)+\Gamma (f)\right),\cr f_0(x,p)&=f(0,x,p), \end{split}\end{align} where the linearized operator $L(f)$ and the nonlinear perturbation $\Gamma (f)$ are defined as \begin{align*} L(f)&= P(f)-f , \cr \Gamma (f)&=\frac{U_\mu p^\mu}{c p^0} \sum_{i=1}^4\Gamma_i (f)+\frac{P(f)-f}{c p^0} \Phi \end{align*} respectively. \end{proposition} \begin{proof} Inserting $F=F_E^0+f\sqrt{F_E^0}$ into \eqref{PR}, one finds \begin{align*} \partial_t f+\hat{p}\cdot\nabla_x f&=\frac{U_\mu p^\mu}{c\tau p^0}\frac{1}{\sqrt{F_E^0}}\left\{\left(1-p^\mu q_\mu\frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\right)F_E-F_E^0-f\sqrt{F_E^0}\right\}. \end{align*} Then, it follows from Lemma \ref{lin2} that \begin{align*} \partial_t f+\hat{p}\cdot\nabla_x f=\frac{U_\mu p^\mu}{c\tau p^0}\Big(P(f)-f+\sum_{i=1}^4\Gamma_i (f)\Big) \end{align*} which, combined with $\eqref{notation2}_3$, gives the desired result. \end{proof} \subsection{Analysis of the linearized operator $L$} Let $N$ be the five-dimensional space spanned by $$\left\{\sqrt{F_E^0},\Big(1+\frac{\mathcal{I}}{mc^2}\Big)p^\mu \sqrt{F_E^0}\right\}.$$ \begin{lemma}\label{ortho} $P$ is the orthogonal projection from $L^2_{p,\mathcal{I}}(\mathbb{R}^3)$ onto $N$. \end{lemma} \begin{proof} It is enough to show that $\{e_i\}$ $(i=1,\cdots,5)$ forms an orthonormal basis with respect to the inner product $\langle \cdot,\cdot \rangle_{L^2_{p,\mathcal{I}}}$.
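Although we do not restate the basis here, its explicit form can be read off from the norm computations that follow:
$$
e_1=\sqrt{F_E^0},\qquad e_{1+i}=\frac{1}{\sqrt{b_0m}}\,p^i\Big(1+\frac{\mathcal{I}}{mc^2}\Big)\sqrt{F_E^0}\ \ (i=1,2,3),\qquad e_5=\frac{cp^0\Big(1+\frac{\mathcal{I}}{mc^2}\Big)-\widetilde{e}(T_0)}{\sqrt{k_BT_0^2\left\{\widetilde{e}\right\}^{\prime}(T_{0})}}\sqrt{F_E^0}.
$$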
\newline $\bullet$ $\|e_1\|_{L^2_{p,\mathcal{I}}}=1$: By the definition of $F_E^0$, it is straightforward that \begin{align*} \langle e_1,e_1\rangle_{L^2_{p,\mathcal{I}}} &= \int_{\mathbb{R}^3}\int_0^\infty F_E^0 \phi(\mathcal{I}) \,d\mathcal{I}\,dp\cr &=\frac{\int_{\mathbb{R}^3}\int_0^\infty e^{ -\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_B T_0} } \phi(\mathcal{I}) \,d\mathcal{I}\,dp}{(mc)^3\int_{\mathbb{R}^3}\int_0^\infty e^{-\left(mc^2+ \mathcal{I} \right)\frac{1}{k_B T_0}\sqrt{1+|p|^2}}\phi(\mathcal{I})\,d \mathcal{I}dp} \cr &=1. \end{align*} $\bullet$ $\|e_{i+1}\|_{L^2_{p,\mathcal{I}}}=1$ $(i=1,2,3)$: It follows from Lemma \ref{computation F_E^0} (3) that \begin{align*} \langle e_{i+1},e_{i+1}\rangle_{L^2_{p,\mathcal{I}}}&=\frac{1 }{b_0m}\int_{\mathbb{R}^3}\int_0^\infty (p^i)^2 F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)^2\phi(\mathcal{I})\,d\mathcal{I}dp\cr &=1. \end{align*} \noindent\newline $\bullet$ $\|e_5\|_{L^2_{p,\mathcal{I}}}=1$: Inserting $(n,0,T)=(1,0,T_0)$ into \eqref{e5}, one finds \begin{align*} \left\{\widetilde{e}\right\}^{\prime}(T_0)&= \frac{1}{k_BT^2_0}\int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0 \left(1+\frac{\mathcal{I}}{mc^2}\right)- \widetilde{e}(T_0) \right\}^2F_E^0\phi(\mathcal{I})\,d\mathcal{I}dp \end{align*} where we used $$ \frac{e}{n}\Big|_{T=T_0}= \widetilde{e}(T)\big|_{T=T_0}=\widetilde{e}(T_0). $$ Using this, we have \begin{align*} \langle e_5,e_5\rangle_{L^2_{p,\mathcal{I}}}&= \frac{1}{k_BT_0^2\left\{\widetilde{e}\right\}^{\prime}(T_{0})}\int_{\mathbb{R}^3}\int_0^\infty\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}^2 F_E^0\phi(\mathcal{I})\,d\mathcal{I}dp\cr &=1. \end{align*} \noindent\newline $\bullet$ $\langle e_i,e_j\rangle_{L^2_{p,\mathcal{I}}}=0$ $(i\neq j)$: Since the orthogonality can be proved in the same manner, we omit it. \end{proof} \begin{proposition}\label{pro} The linearized operator $L$ satisfies the following properties: \begin{enumerate} \item $Ker(L)=N$.
\item $L$ is dissipative in the following sense: $$ \langle L(f),f\rangle_{L^2_{p,\mathcal{I}}}= -\|\{I-P\}(f)\|^2_{L^2_{p,\mathcal{I}}} \le 0. $$ \end{enumerate} \end{proposition} \begin{proof} Since this follows directly from the definition of $L$, we omit the proof. \end{proof} \noindent\newline \section{Estimates for macroscopic fields and nonlinear perturbations} To deal with the macroscopic fields, we first estimate $\Psi, \Psi_1,\Phi$ and $\Phi_1$, whose definitions are given in \eqref{notation2}. For brevity, we set $\tau=1$ and use a generic constant $C$, which may change from line to line but does not affect the proof of Theorem \ref{main3}. \begin{lemma}\label{lem22} Suppose $E(f)(t)$ is sufficiently small. Then $\Psi, \Psi_1,\Phi$ and $\Phi_1$ satisfy \begin{enumerate} \item $\displaystyle |\partial^\alpha \Psi|+|\partial^\alpha \Phi|\le Cp^0 \sum_{|\alpha_1|\le|\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}},$ \item $\displaystyle |\partial^\alpha \Psi_1|+|\partial^\alpha \Phi_1|\le C p^0 \sqrt{E(f)(t)}\sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}} .$ \end{enumerate} \end{lemma} \begin{proof} $\bullet$ Estimates of $\Psi$ and $\Psi_1$: Recalling \eqref{notation2}, we see that \begin{align*} \partial^\alpha\Psi&=2\int_{\mathbb{R}^3}\int_0^\infty \partial^\alpha f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp+ \partial^\alpha\left\{\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp\right\}^2\cr &-\sum_{i=1}^3 \partial^\alpha\left\{\int_{\mathbb{R}^3}\int_0^\infty p^if\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right\}^2, \end{align*} and it follows from H\"{o}lder's inequality that \begin{align}\label{Psi} \begin{split} |\partial^\alpha \Psi| &\le C\biggl(\|\partial^\alpha f\|_{L^2_{p,\mathcal{I}}}+ \sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-\alpha_1}f\|_{L^2_{p,\mathcal{I}}}\biggl).
\end{split} \end{align} Applying the Sobolev embedding $H_x^2\subseteq L^{\infty}_x$ to lower order terms of \eqref{Psi}, we get \begin{align}\label{H} \begin{split} |\partial^\alpha \Psi| &\le C\biggl(\|\partial^\alpha f\|_{L^2_{p,\mathcal{I}}}+ \sqrt{E(f)(t)}\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\biggl)\le C\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}} \end{split}\end{align} for sufficiently small $E(f)(t)$. In the same manner, one can have \begin{align}\label{Psi1} \begin{split} |\partial^\alpha \Psi_1| &\le C\sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\cr &\le C\sqrt{E(f)(t)}\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}. \end{split} \end{align} $\bullet$ Estimates of $\Phi$: Observe from \eqref{macroscopic fields} and $\eqref{notation2}_{3}$ that \begin{align}\label{phi 1} \begin{split} \Phi=U_\mu p^\mu-cp^0=\frac{c}{n}p^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu F\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0} -cp^0. \end{split}\end{align} Inserting $F=F_E^0+f\sqrt{F_E^0}$, then \eqref{phi 1} becomes \begin{align*} \Phi&=\frac{c}{n}p^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu F_E^0\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}+\frac{c}{n}p^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0} -cp^0\cr &=\frac{c}{n}\left( p^0+p^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)-cp^0 \end{align*} where we used $$ \int_{\mathbb{R}^3}\int_0^\infty F_E^0\phi(\mathcal{I}) \,d\mathcal{I}\,dp=1,\qquad \int_{\mathbb{R}^3}\int_0^\infty p F_E^0\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}=0. 
$$ Now we use \eqref{1/n=} to expand $1/n$ as \begin{align}\label{Phi form}\begin{split} \Phi&= \left( 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp-\frac{\Psi_1}{2} -\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right)\cr &\times \left(cp^0+ cp^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)-cp^0 \cr &=cp^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}+\left(cp^0+ cp^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\right)\cr &\times \left( -\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp-\frac{\Psi_1}{2} -\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})} \right). \end{split} \end{align} Since it follows from \eqref{H}, \eqref{Psi1} and the Sobolev embedding $H_x^2\subseteq L^{\infty}_x$ that \begin{equation}\label{phi psi} |\Psi|\le C\|f\|_{L^2_{p,\mathcal{I}}}< C\sqrt{E(f)(t)}, \qquad | \Psi_1|\le C \| f\|_{L^2_{p,\mathcal{I}}} \end{equation} for sufficiently small $E(f)(t)$, the last identity of \eqref{Phi form} leads to \begin{align*} |\Phi| &\le Cp^0\|f\|_{L^2_{p,\mathcal{I}}}. \end{align*} For $\alpha\neq 0$, we have from \eqref{Phi form} that \begin{align*} \partial^\alpha \Phi&=cp^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu \partial^\alpha f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}+\sum_{|\alpha_1|\le |\alpha|}C_{\alpha_1}\partial^{\alpha_1}\biggl\{ cp^0+ cp^\mu\int_{\mathbb{R}^3}\int_0^\infty p_\mu f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}\,\frac{dp}{p^0}\biggl\}\cr &\times \partial^{\alpha-\alpha_1}\left\{ -\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp-\frac{\Psi_1}{2} -\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})} \right\}. 
\end{align*} Using H\"{o}lder's inequality and \eqref{Psi1}, we have \begin{align}\label{partial phi} \begin{split} |\partial^\alpha\Phi|&\le Cp^0\| \partial^\alpha f\|_{L^2_{p,\mathcal{I}}}+Cp^0\sum_{|\alpha_1|\le |\alpha|} \left( 1+\| \partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}} \right)\biggl(\| \partial^{\alpha-\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\cr & +\sqrt{E(f)(t)}\sum_{|\alpha_2|\le|\alpha-\alpha_1|}\| \partial^{\alpha_2} f\|_{L^2_{p,\mathcal{I}}}+\left|\frac{\mathbb{P}\left(\Psi,\cdots,\partial^{\alpha-\alpha_1} \Psi,\sqrt{1+\Psi}\right)}{\mathbb{M}\left(2+\Psi-\Psi^2+2\sqrt{1+\Psi},\sqrt{1+\Psi} \right)}\right|\biggl) \end{split}\end{align} where $\mathbb{P}$ and $\mathbb{M}$ denote the homogeneous generic polynomial and monomial respectively. Using \eqref{Psi} and the Sobolev embedding $H_x^2\subseteq L^{\infty}_x$, the last term on the r.h.s of \eqref{partial phi} can be estimated explicitly as \begin{equation}\label{1/1+pi} \left|\frac{\mathbb{P}(\Psi,\cdots,\partial^{\alpha-\alpha_1} \Psi,\sqrt{1+\Psi},\cdots,\partial^{\alpha-\alpha_1}\sqrt{1+\Psi})}{(2+\Psi-\Psi^2+2\sqrt{1+\Psi})^{2^{|\alpha-\alpha_1|}}}\right| \le C\sum_{|\alpha_2|\le |{\alpha-\alpha_1}|}\|\partial^{\alpha_2} f\|_{L^2_{p,\mathcal{I}}}. \end{equation} Inserting \eqref{1/1+pi} into \eqref{partial phi}, one finds \begin{align}\label{leads to} |\partial^\alpha \Phi|\le Cp^0\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}. \end{align} \newline $\bullet$ Estimates of $\Phi_1$: In the same manner as in the case of $\Phi$, one can have \begin{align}\label{obtain} |\partial^\alpha \Phi_1|\le Cp^0\sqrt{E(f)(t)}\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}. \end{align} As a result, (1) is obtained by \eqref{Psi} and \eqref{leads to}, and combining \eqref{Psi1} and \eqref{obtain} gives (2). \end{proof} \begin{lemma}\label{lem2} Suppose $E(f)(t)$ is sufficiently small. 
Then we have \begin{enumerate} \item $ \displaystyle\ |\partial^\alpha \{n -1\}|+|\partial^\alpha U |+\left|\partial^\alpha \Big\{\frac{e}{n} -\widetilde{e}(T_0)\Big\}\right|\le C\sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}.$ \item $\displaystyle|\partial^{\alpha} \{U ^0-c\}|\le C\sqrt{E(f)(t)}\sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}.$ \end{enumerate} \end{lemma} \begin{proof} $\bullet$ $\partial^\alpha \{n -1\}$ : It follows from \eqref{n=}, \eqref{route pi} and \eqref{phi psi} that \begin{align}\label{n-1} \left|n -1\right| &= \left|\frac{\Psi}{2}-\frac{\Psi^2}{2(2+\Psi+2\sqrt{1+\Psi})}\right| \le C\|f\|_{L^2_{p,\mathcal{I}}} \end{align} for sufficiently small $E(f)(t)$. For $\alpha\neq 0$, it takes the form of \begin{align*} \begin{split} \partial^\alpha \{n -1\}&=\partial^\alpha\left\{\sqrt{1+\Psi}\right\}=\frac{\mathbb{P}( \Psi,\cdots \partial^\alpha \Psi,\sqrt{1+\Psi})}{\mathbb{M}\left(\sqrt{1+\Psi}\right) } \end{split} \end{align*} which can be estimated explicitly by using \eqref{phi psi} as follows \begin{align*} \left| \partial^\alpha \{n -1\}\right| &\le C\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1}f\|_{L^2_{p,\mathcal{I}}} \end{align*} for sufficiently small $E(f)(t)$. \noindent\newline $\bullet$ $\partial^\alpha U $ : Observe that \begin{align*} |U| &=\left| \frac{1}{n}\int_{\mathbb{R}^3}\int_0^\infty p F\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\right| =\left| \frac{1}{n}\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\right|\le \frac{C}{n}\|f\|_{L^2_{p,\mathcal{I}}} . \end{align*} Using \eqref{n-1} and the Sobolev embedding $H_x^2\subseteq L^{\infty}_x$, this leads to \begin{align*} |U | \leq \frac{C}{ 1-C\sqrt{E(f)(t)} }\| f\|_{L^2_{p,\mathcal{I}}}\leq C\| f\|_{L^2_{p,\mathcal{I}}} \end{align*} for sufficiently small $E(f)(t)$. 
For $\alpha\neq 0$, it follows from \eqref{1/n=}, \eqref{Psi1} and \eqref{phi psi} that \begin{align*} |\partial^\alpha U |&= \left| \sum_{|{\alpha_1}|\le|\alpha|}C_{\alpha_1}\partial^{\alpha_1}\left\{\frac{1}{n} \right\}\int_{\mathbb{R}^3}\int_0^\infty p\partial^{\alpha-{\alpha_1}} f\sqrt{F_E^0} \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0} \right|\cr &\le C \left(1+\| f\|_{L^2_{p,\mathcal{I}}} \right) \|\partial^{\alpha}f\|_{L^2_{p,\mathcal{I}}}+C \sum_{0<|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha-{\alpha_1}}f\|_{L^2_{p,\mathcal{I}}}\cr &\times\left( \|\partial^{ \alpha_1 }f\|_{L^2_{p,\mathcal{I}}}+\sum_{|\alpha_2|\le |\alpha_1|} \|\partial^{ \alpha_2}f\|_{L^2_{p,\mathcal{I}}}+\left|\frac{\mathbb{P}\left(\Psi,\cdots,\partial^{ \alpha_1} \Psi,\sqrt{1+\Psi}\right)}{\mathbb{M}\left(2+\Psi-\Psi^2+2\sqrt{1+\Psi},\sqrt{1+\Psi} \right)}\right| \right) \end{align*} for sufficiently small $E(f)(t)$. Using \eqref{1/1+pi} and the Sobolev embedding $H^2(\mathbb{R}_x^{3})\subseteq L^{\infty}(\mathbb{R}_x^{3})$, we have \begin{align*}\ |\partial^\alpha U |&\le C\sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{\alpha_1}f\|_{L^2_{p,\mathcal{I}}}. 
\end{align*} \newline $\bullet$ $\displaystyle\partial^\alpha \left\{\frac{e}{n} -\widetilde{e}(T_0)\right\}$ : Recall from \eqref{e-e0} that \begin{eqnarray*} &&\frac{e}{n} -\widetilde{e}(T_0) = \int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\} f\sqrt{F_E^0} \phi(\mathcal{I})\,d\mathcal{I}dp \cr &&\hspace{12mm} -c\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0}\phi(\mathcal{I}) \,d\mathcal{I}dp\int_{\mathbb{R}^3}\int_0^\infty p^0f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\,dp\cr &&\hspace{12mm}-\frac{1}{c}\left( \frac{\Psi_1}{2} +\frac{\Psi^3-3\Psi^2}{2(2+\Psi-\Psi^2+2\sqrt{1+\Psi})}\right) \int_{\mathbb{R}^3}\int_0^\infty \left(U^\mu p_\mu\right)^2F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\cr &&\hspace{12mm}+\frac{1}{c}\left( 1-\int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp \right) \int_{\mathbb{R}^3}\int_0^\infty \left\{ 2cp^0\Phi+\Phi^2\right\}F\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}. \end{eqnarray*} This can be handled in the same manner as in the previous cases, but it requires tedious computations, so we omit the details for brevity. \noindent\newline $\bullet$ $\partial^\alpha \{U^0 -c\}$ : By \eqref{route pi}, $U^0-c$ can be rewritten as \begin{align*} U ^0-c&=\sqrt{c^2+|U |^2}-c\cr &=\frac{|U|^2}{2c}-\frac{|U|^4}{2(2c^3+c|U|^2+2c^2\sqrt{c^2+|U|^2})}. \end{align*} Then it follows from the previous estimate for $U$ that \begin{align*} |U ^0-c|&=\left|\frac{|U|^2}{2c}-\frac{|U|^4}{2(2c^3+c|U|^2+2c^2\sqrt{c^2+|U|^2})}\right|\cr &\le C\|f\|^2_{L^2_{p,\mathcal{I}}}+C\|f\|_{L^2_{p,\mathcal{I}}}^4\cr &\le C\sqrt{E(f)(t)}\|f\|_{L^2_{p,\mathcal{I}}} \end{align*} for sufficiently small $E(f)(t)$. 
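We also note that the zeroth-order bound can be obtained directly from the rationalized form \begin{align*} U^0-c=\sqrt{c^2+|U|^2}-c=\frac{|U|^2}{\sqrt{c^2+|U|^2}+c}\le\frac{|U|^2}{2c}, \end{align*} which, combined with the estimate $|U|\le C\|f\|_{L^2_{p,\mathcal{I}}}$, again yields $|U^0-c|\le C\|f\|^2_{L^2_{p,\mathcal{I}}}$.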
Similarly, one can have \begin{align*} \left| \partial^\alpha \{U^0 -c\}\right|&=\left|\partial^\alpha \left\{\frac{|U|^2}{2c}-\frac{|U|^4}{2(2c^3+c|U|^2+2c^2\sqrt{c^2+|U|^2})} \right\}\right|\cr &\le C\sum_{|{\alpha_1}|\le|\alpha|}|\partial^{\alpha_1}U| |\partial^{\alpha-{\alpha_1}}U| + \left|\frac{\mathbb{P}( U , \cdots, \partial^\alpha U,\sqrt{c^2+|U|^2} )}{\mathbb{M}(2c^3+c|U|^2+2c^2\sqrt{c^2+|U|^2}, \sqrt{c^2+|U|^2}) }\right|\cr &\le C\sqrt{E(f)(t)}\sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}. \end{align*} \end{proof} In the following lemma, we present the Hessian matrix $D^2\widetilde{F}(\theta)$ with respect to $n_{\theta},U_\theta,(e/n)_{\theta},q^\mu_{\theta}$ to clarify the explicit form of the nonlinear perturbation $\Gamma_4$ given in Lemma \ref{lin2}. \begin{lemma}\label{QQ} We have $$ D_{n_{\theta},U_\theta,(e/n)_{\theta},q^\mu_{\theta}}^{2}\tilde{F}(\theta)=\mathcal{Q} F(\theta), $$ where $\mathcal{Q}$ is a $9\times9$ symmetric matrix whose elements are given by \begin{eqnarray*} &&\mathcal{Q}_{1,1}=0,\qquad \mathcal{Q}_{1,1+i}= -\frac{ 1+\frac{\mathcal{I}}{mc^2} }{k_Bn_\theta T_\theta} \left(\frac{U_\theta^i}{U_\theta^0}p^0-p^i\right),\cr &&\mathcal{Q}_{1,5}=\frac{1}{k_Bn_\theta T_\theta^2 }\frac{1}{\{\widetilde{e}\}^{\prime}(T_\theta)}\left\{\frac{k_BT_\theta^2}{mc^2}\widetilde{e}(T_\theta) +\left(1+\frac{\mathcal{I}}{mc^2}\right) U_{\theta\mu}p^\mu \right\},\qquad \mathcal{Q}_{1,6}=0,\qquad\mathcal{Q}_{1,6+i}=0,\cr &&\mathcal{Q}_{1+i,1+j}=\biggl(1-p^\mu q_{\theta\mu}\frac{1+\frac{\mathcal{I}}{mc^2}}{b_\theta mc^2}\biggl)\frac{ 1+\frac{\mathcal{I}}{mc^2} }{k_BT_\theta}\left\{\frac{ 1+\frac{\mathcal{I}}{mc^2} }{k_BT_\theta} \biggl(\frac{U_\theta^i}{U_\theta^0}p^0-p^i\biggl)\biggl(\frac{U_\theta^j}{U_\theta^0}p^0-p^j\biggl)+ \frac{U^i_\theta U^j_\theta}{(U_\theta^0)^3}p^0\right\}, \cr &&\mathcal{Q}_{1+i,5}= \left(\frac{U^i_\theta}{U^0_\theta}p^0-p^i\right)\frac{(1+\frac{\mathcal{I}}{mc^2}
)}{k_BT_\theta^2\{\widetilde{e}\}^\prime (T_\theta)}\biggl\{ \biggl(1-p^\mu q_{\theta\mu}\frac{1+\frac{\mathcal{I}}{mc^2}}{b_\theta mc^2}\biggl)\cr &&\hspace{11mm}\times\biggl(1- \frac{T_\theta\widetilde{e}(T_\theta)}{mc^2}- \frac{1+\frac{\mathcal{I}}{mc^2}}{k_BT_\theta}U_{\theta\mu}p^\mu\biggl) - p^\mu q_{\theta\mu}T_\theta\frac{1+\frac{\mathcal{I}}{mc^2}}{mc^2b_\theta^2 } \partial_{T_\theta}b_\theta \biggl\},\cr &&\mathcal{Q}_{1+i,6}= \frac{ (1+\frac{\mathcal{I}}{mc^2} )^2}{b_\theta mc^2k_BT_\theta} \left(\frac{U_\theta^i}{U_\theta^0}p^0-p^i\right)p^0,\qquad \mathcal{Q}_{1+i,6+j}= - \frac{ (1+\frac{\mathcal{I}}{mc^2} )^2}{b_\theta mc^2k_BT_\theta} \left(\frac{U_\theta^i}{U_\theta^0}p^0-p^i\right)p^j, \cr && \mathcal{Q}_{5,5}=\frac{1-p^\mu q_{\theta\mu}\frac{1+\frac{\mathcal{I}}{mc^2}}{b_\theta mc^2}}{\left(\{\widetilde{e}\}^\prime(T_\theta)\right)^2}\left\{- \frac{mc^2\widetilde{e}(T_\theta)}{k_B^2T_\theta^4} +mc^2\biggl(1+\frac{\mathcal{I}}{mc^2}\biggl) \frac{U_{\theta\mu}p^\mu}{k_B^2T_\theta^4} +\frac{2}{ T_\theta}+ \frac{\{\widetilde{e}\}^{\prime\prime}(T_\theta)}{\{\widetilde{e}\}^\prime(T_\theta)}-\frac{ \{\widetilde{e}\}^\prime(T_\theta)}{ k_BT_\theta^2}\right\}\cr &&\hspace{8mm}+\frac{p^\mu q_{\theta\mu}\left(1+\frac{\mathcal{I}}{mc^2}\right)}{b_\theta^2 \left(\{\widetilde{e}\}^\prime(T_\theta)\right)^2} \frac{\partial_{T_\theta}b_\theta}{k_BT_\theta^2} \biggl\{-\frac{2\widetilde{e}(T_\theta)}{mc^2} +2\biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)\frac{U_{\theta\mu}p^\mu}{mc^2}+2\frac{k_B^2T_\theta^3}{m^2c^4}+\frac{k_B^2T_\theta^4}{m^2c^4}\frac{\{\widetilde{e}\}^{\prime\prime}(T_\theta)}{\{\widetilde{e}\}^\prime(T_\theta)} \cr &&\hspace{8mm}-\frac{2k_BT_\theta^2}{mc^2}\frac{\partial_{T_{\theta}}b_\theta}{b_\theta}+\frac{k_BT_\theta^2}{mc^2}\frac{\partial^2_{T_{\theta}}b_\theta}{\partial_{T_\theta}b_\theta}+\frac{2k_BT_\theta}{mc^2}\biggl\}\cr && \mathcal{Q}_{5,6}=- \frac{1+\frac{\mathcal{I}}{mc^2}}{b_\theta k_BT_\theta^2\{\widetilde{e}\}^\prime(T_\theta) 
}p^0\left\{-\frac{\widetilde{e}(T_\theta)}{mc^2}+\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{U_{\theta\mu}p^\mu}{mc^2}-\frac{k_BT_\theta^2}{mc^2}\frac{\partial_{T_\theta}b_\theta}{b_\theta} \right\}, \cr && \mathcal{Q}_{5,6+i}=\frac{1+\frac{\mathcal{I}}{mc^2}}{b_\theta k_BT_\theta^2\{\widetilde{e}\}^\prime(T_\theta)}p^i \left\{-\frac{\widetilde{e}(T_\theta)}{mc^2}+\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{U_{\theta\mu}p^\mu}{mc^2}-\frac{k_BT_\theta^2}{mc^2}\frac{\partial_{T_\theta}b_\theta}{b_\theta} \right\}, \cr &&\mathcal{Q}_{6,6}=0,\qquad \mathcal{Q}_{6,6+i}=0,\qquad\mathcal{Q}_{6+i,6+j}=0 \end{eqnarray*} for $i,j=1,2,3$. \end{lemma} \begin{proof} The proof is straightforward. We omit it. \end{proof} We are now ready to deal with the nonlinear perturbations. \begin{lemma}\label{nonlinear1} Suppose $E(f)(t)$ is sufficiently small. Then we have \begin{enumerate} \item $\displaystyle \left|\int_{\mathbb{R}^3}\int_0^\infty\partial^\alpha_\beta\Gamma (f) g(p,\mathcal{I}) \phi(\mathcal{I})\,d\mathcal{I}dp\right|\le C\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\|g\|_{L^2_{p,\mathcal{I}}}, $ \item $\displaystyle \left\|\partial^\alpha_\beta\Gamma (f) \right\|_{L^2_{p,\mathcal{I}}}\le C\sum_{|{\alpha_1}|\le |\alpha|}\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}. $ \end{enumerate} \end{lemma} \begin{proof} $\bullet$ Proof of (1): Recall from Proposition \ref{lin3} that $$ \Gamma (f)=\frac{U_\mu p^\mu}{c p^0} \sum_{i=1}^4\Gamma_i (f)+\frac{P(f)-f}{c p^0}\Phi . 
$$ To avoid repetition, we only prove the most complicated term: $$ \frac{U_\mu p^\mu}{c p^0} \Gamma_{4} (f)=\frac{U_\mu p^\mu}{c p^0}\frac{1}{\sqrt{F_E^0}}\int_0^1 (1-\theta)\Big(n -1,U,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\Big)D^2\widetilde{F}(\theta)\Big(n -1,U ,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\Big)^Td\theta $$ since the other terms can be handled in the same manner. For this, we use the following notation $$(y_{1},\cdots,y_9):=\left(n-1,U,\frac{e}{n}-\widetilde{e}(T_0),q^\mu\right)$$ to denote $$ \left(n-1,U,\frac{e}{n}-\widetilde{e}(T_0),q^\mu\right)D_{n_{\theta},U_{\theta},(e/n)_{\theta},q^\mu_\theta}^{2}\widetilde{F}(\theta)\left(n-1,U,\frac{e}{n}-\widetilde{e}(T_0),q^\mu\right)^{T}=\sum_{i,j=1}^{9}y_{i}y_{j}\mathcal{Q}_{ij}F(\theta) $$ where the $\mathcal{Q}_{ij}$ $(i,j=1,\cdots, 9)$ are given in Lemma \ref{QQ}. We observe that \begin{align}\label{long}\begin{split} & \partial^{\alpha}_{\beta}\biggl\{\frac{1}{\sqrt{F_E^0}}\int_0^1 (1-\theta)\left(n -1,U,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)D^2\widetilde{F}(\theta)\left(n -1,U ,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)^T\,d\theta\biggl\}\cr &=\sum_{i,j=1}^9\sum_{\substack{|\alpha_{1}|+\cdots+|\alpha_{4}|\cr=|\alpha|}}\!\!\!\!\partial^{\alpha_{1}}y_{i}\partial^{\alpha_{2}}y_{j} \biggl(\sum_{\substack{|\beta_{1}|+\cdots+|\beta_{3}|\cr=|\beta|}} \int^1_0(1-\theta)\partial^{\alpha_{3}}_{\beta_{1}}\mathcal{Q}_{ij}\partial^{\alpha_{4}}_{\beta_{2}}F(\theta)d\theta\biggl) \partial_{\beta_{3}}\biggl\{\frac{1}{\sqrt{F_E^0}}\biggl\}. \end{split} \end{align} Here $\partial^{\alpha_1}y_i \partial^{\alpha_2}y_j$ is bounded from above by Lemma \ref{lem2}: \begin{align}\label{yiyj0}\begin{split} \big|\partial^{\alpha_1} y_i \partial^{\alpha_2}y_j\big|&\le C \sum_{|\alpha'_1|\le|\alpha_1|}\|\partial^{\alpha'_1}f\|_{L^2_{p,\mathcal{I}}}\sum_{|\alpha'_2| \le|\alpha_2|}\|\partial^{\alpha'_2}f\|_{L^2_{p,\mathcal{I}}}. 
\end{split}\end{align} Note from Lemma \ref{lem22} and Lemma \ref{lem2} that derivatives of the macroscopic fields $n,U,e/n$ and $q^\mu$ are dominated by the $L^2_{p,\mathcal{I}}$ norm of the derivative of $f$, and with the aid of the Sobolev embedding $H^2(\mathbb{R}_x^{3})\subseteq L^{\infty}(\mathbb{R}_x^{3})$, one can see that for $|\alpha|\le N-2$, $$ \partial^{\alpha}\left(n-1,U,\frac{e}{n}-\widetilde{e}(T_0),q^\mu\right)\approx(0,0,0,0) $$ when $E(f)(t)$ is small enough. From this observation, we see that $\partial^{\alpha_3}_{\beta_1} \mathcal{Q}_{ij}$ is well-defined and can be estimated as \begin{align}\label{yiyj}\begin{split} |\partial^{\alpha_3}_{\beta_1} \mathcal{Q}_{ij}|&\le C(p^0)^3\biggl( 1+\frac{\mathcal{I}}{mc^2}\biggl)\left(1+ \| f\|_{L^2_{p,\mathcal{I}}}\right),\qquad \qquad\text{if }\qquad |\alpha_3|=0\cr |\partial^{\alpha_3}_{\beta_1} \mathcal{Q}_{ij}|&\le C(p^0)^3\biggl( 1+\frac{\mathcal{I}}{mc^2}\biggl) \sum_{|\alpha_3'|\le|\alpha_3|}\|\partial^{\alpha_3'}f\|_{L^2_{p,\mathcal{I}}}, \hspace{6.5mm}\text{otherwise } \end{split}\end{align} for sufficiently small $E(f)(t)$. For the same reason, one can have \begin{align}\label{partial F}\begin{split} \left| \partial^{\alpha_4}_{\beta_2} F(\theta)\right| \le C(p^0)^{|\alpha_4|} \biggl(1+\frac{\mathcal{I}}{mc^2} \biggl)^{|\alpha_4|+|\beta_2|} F(\theta). 
\end{split}\end{align} By the definition of $F_E^0$, one finds \begin{equation*} \biggl|\partial_{\beta_3}\biggl\{\frac{1}{\sqrt{F_E^0}}\biggl\}\biggl|\le C\biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)^{|\beta_3|}\frac{1}{\sqrt{F_E^0}} \end{equation*} which, together with \eqref{partial F}, gives \begin{align}\label{F theta}\begin{split} \biggl|\partial^{\alpha_4}_{\beta_2} F(\theta)\partial_{\beta_3}\biggl\{\frac{1}{\sqrt{F_E^0}}\biggl\}\biggl| &\le C(p^0)^{|\alpha_4|} \biggl(1+\frac{\mathcal{I}}{mc^2} \biggl)^{|\alpha_4|+|\beta_2|+|\beta_3|}\frac{F(\theta)}{\sqrt{F_E^0}}\cr &\le \left|\mathbb{P}\big(p^0,\mathcal{I}\big)\right|e^{-C^{\prime}\left(1+\frac{\mathcal{I}}{mc^2}\right)p^0} \end{split}\end{align} for sufficiently small $E(f)(t)$. Here $C^\prime$ is the positive constant defined through \begin{align*} e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{1}{k_B T_\theta}U_\theta^\mu p_\mu}e^{\frac{1}{2}\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_BT_0}}&\le e^{-\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{1}{k_B T_\theta}\left(\sqrt{c^2+|U_\theta|^2}-|U_\theta|\right) p^0}e^{\frac{1}{2}\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{cp^0}{k_BT_0}}\cr &\le e^{-\min\left\{\frac{1}{k_B T_\theta}\left(\sqrt{c^2+|U_\theta|^2}-|U_\theta|\right)-\frac{c}{2k_BT_0} \right\}\left(1+\frac{\mathcal{I}}{mc^2}\right)p^0 }\cr &\equiv e^{-C^\prime \left(1+\frac{\mathcal{I}}{mc^2}\right)p^0 } \end{align*} where we assumed that $E(f)(t)$ is small enough to satisfy \begin{align*} \min \frac{\sqrt{c^2+|U_\theta|^2}-|U_\theta|}{ T_\theta} > \frac{c}{2 T_0} \end{align*} to ensure the positivity of $C^\prime$. 
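The first inequality in the definition of $C^\prime$ rests on the following elementary lower bound (here we use the convention $U_\theta^\mu p_\mu=U_\theta^0p^0-U_\theta\cdot p$ together with $|p|\le p^0$): \begin{align*} U_\theta^\mu p_\mu\ge U_\theta^0p^0-|U_\theta||p|\ge \left(U_\theta^0-|U_\theta|\right)p^0=\left(\sqrt{c^2+|U_\theta|^2}-|U_\theta|\right)p^0>0. \end{align*}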
Combining \eqref{long}--\eqref{yiyj} and \eqref{F theta}, and applying the Sobolev embedding $H^2(\mathbb{R}_x^{3})\subseteq L^{\infty}(\mathbb{R}_x^{3})$ to the terms having lower derivative order, we get \begin{align*} \left|\partial^{\alpha}_{\beta}\Gamma_4(f)\right|&=\left|\partial^{\alpha}_{\beta}\biggl\{\frac{1}{\sqrt{F_E^0}}\int_0^1 (1-\theta)\left(n -1,U,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)D^2\widetilde{F}(\theta)\left(n -1,U ,\frac{e}{n} -\widetilde{e}(T_0),q^\mu\right)^T\,d\theta\biggl\}\right|\cr &\le C\left| \mathbb{P}(p^0,\mathcal{I})\right| \sum_{|\alpha'|\le|\alpha|}\|\partial^{\alpha'}f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-\alpha'}f\|_{L^2_{p,\mathcal{I}}}e^{-C^{\prime}\left(1+\frac{\mathcal{I}}{mc^2}\right)p^0} \end{align*} which, together with \eqref{phi condition} and Lemma \ref{lem2}, gives the desired result as follows: \begin{align*} &\left|\int_{\mathbb{R}^3}\int_0^\infty \partial^{\alpha}_{\beta}\left\{\frac{U_\mu p^\mu}{c p^0} \Gamma_{4} (f)\right\}g(p,\mathcal{I})\phi(\mathcal{I})\, d\mathcal{I}dp\right|\cr &\le C\sum_{\substack{|{\alpha_1}|\le |\alpha|\cr |\beta_1|\le|\beta|}}\int_{\mathbb{R}^3}\int_0^\infty \left|\biggl(\partial^{\alpha-{\alpha_1}} \{U^0\}-\partial^{\alpha-{\alpha_1}}\{U\}\cdot \partial_{\beta-\beta_1}\{\hat{p}\}\biggl) \partial^{\alpha_1}_{\beta_1}\Gamma_{4} (f)g(p,\mathcal{I})\phi(\mathcal{I})\right|\,d\mathcal{I} dp\cr &\le C \sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{{\alpha_1}}f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}}f\|_{L^2_{p,\mathcal{I}}}\int_{\mathbb{R}^3}\int_0^\infty \left|\mathbb{P}(p^0,\mathcal{I})\right|e^{-C^{\prime}\left(1+\frac{\mathcal{I}}{mc^2}\right)p^0}\left|g(p,\mathcal{I})\right|\phi(\mathcal{I})\,d\mathcal{I}dp\cr &\le C \sum_{|{\alpha_1}|\le|\alpha|}\|\partial^{{\alpha_1}}f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}}f\|_{L^2_{p,\mathcal{I}}}\|g\|_{L^2_{p,\mathcal{I}}}. \end{align*} $\bullet$ Proof of (2): Since the proof is the same as that of (1), we omit it. 
\end{proof} The following lemma is necessary to prove the uniqueness of solutions. \begin{lemma}\label{uniqueness} Assume $\bar{F}:=F_E^0+\bar{f}\sqrt{F_E^0}$ is another solution of \eqref{PR}. For sufficiently small $E(f)(t)$ and $E(\bar{f})(t)$, we then have $$ \left| \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_{0}^\infty \left\{\Gamma(f)-\Gamma(\bar{f})\right\}(f-\bar{f})\phi(\mathcal{I}) \,d\mathcal{I}dpdx\right|\le C \|f-\bar{f}\|^{2}_{L^2_{x,p,\mathcal{I}}}.$$ \end{lemma} \begin{proof} Since it can be proved in the same manner as in Lemma \ref{nonlinear1}, we omit it. \end{proof} \begin{lemma}\label{nonlinear3} Suppose $E(f)(t)$ is sufficiently small. Then we have $$ \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty \partial^{\alpha}\Gamma(f)\partial^{\alpha} P(f) \phi(\mathcal{I})\,d\mathcal{I}dpdx=0. $$ \end{lemma} \begin{proof} Recall from Proposition \ref{lin3} that \begin{align*} \left\{ L(f)+\Gamma(f) \right\}\sqrt{F_E^0}&=\frac{U_\mu p^\mu}{c p^0}\left\{\biggl(1-p^\mu q_\mu \frac{1+\frac{\mathcal{I}}{mc^2}}{bmc^2}\biggl)F_E-F\right\}\equiv \frac{1}{p^0}Q. \end{align*} We then have from Proposition \ref{pro} (1) that \begin{eqnarray}\label{gamma p}\begin{split} & \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty\partial^{\alpha} \Gamma(f)\partial^{\alpha} P(f) \phi(\mathcal{I})\,d\mathcal{I}\,dpdx\cr &= \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty \partial^{\alpha}\left\{ \frac{1}{p^0\sqrt{F_E^0}}Q-L(f) \right\}\partial^{\alpha}P(f) \phi(\mathcal{I})\,d\mathcal{I}\,dpdx\cr &= \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty \frac{1}{p^0\sqrt{F_E^0}}\partial^{\alpha}Q P(\partial^{\alpha}f) \phi(\mathcal{I})\,d\mathcal{I}dpdx-\langle L(\partial^{\alpha}f),P(\partial^{\alpha}f) \rangle_{L^2_{x,p,\mathcal{I}}}\cr &= \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty \partial^{\alpha}Q\frac{P(\partial^{\alpha}f)}{\sqrt{F_E^0}} \phi(\mathcal{I})\,d\mathcal{I}\,\frac{dp}{p^0}dx. 
\end{split}\end{eqnarray} Here $P(f)$ is given in \eqref{Pf} as \begin{equation}\label{Pf ex} P(f) = a(t,x)\sqrt{F_E^0}+b(t,x)\cdot\left(1+\frac{\mathcal{I}}{mc^2}\right) p\sqrt{F_E^0}+c(t,x)\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}\sqrt{F_E^0} \end{equation} where \begin{align*} a(t,x)&= \int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp,\qquad b(t,x)=\frac{1}{b_0m}\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}dp,\cr &c(t,x)= \frac{1}{k_BT_0^2\left\{\widetilde{e}\right\}^{\prime}(T_0)}\int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\} f\sqrt{F_E^0} \phi(\mathcal{I})\,d\mathcal{I}dp. \end{align*} Inserting \eqref{Pf ex} into \eqref{gamma p}, one finds \begin{eqnarray*} && \int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\int_0^\infty \partial^{\alpha}Q\frac{P(\partial^{\alpha}f)}{\sqrt{F_E^0}} \phi(\mathcal{I})\,d\mathcal{I}\,\frac{dp}{p^0}dx\cr &&=\int_{\mathbb{R}^3} \big\{ \partial^\alpha a(t,x)-\widetilde{e}(T_0)\partial^\alpha c(t,x) \big\} \partial^{\alpha}\left\{\int_{\mathbb{R}^3}\int_0^\infty Q \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\right\}dx\cr &&+\int_{\mathbb{R}^3} \partial^\alpha b(t,x) \cdot \partial^{\alpha}\left\{\int_{\mathbb{R}^3}\int_0^\infty pQ \left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\right\}dx\cr &&+c\int_{\mathbb{R}^3} \partial^\alpha c(t,x) \partial^{\alpha}\left\{\int_{\mathbb{R}^3}\int_0^\infty p^0Q \left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I})\,d\mathcal{I}\frac{dp}{p^0}\right\}dx \end{eqnarray*} which, combined with \eqref{conservation laws} gives the desired result. 
\end{proof} \section{Proof of Theorem \ref{main3}} \subsection{Local in time existence} Using Lemma \ref{nonlinear1} and Lemma \ref{uniqueness}, the local in time existence and uniqueness of solutions to \eqref{PR} can be obtained by the standard argument \cite{Guo whole,Guo VMB}: \begin{proposition} Let $N\geq 3$ and $F_{0}=F_E^0+\sqrt{F_E^0}f_{0}$ be positive. Then there exist $M_{0}>0$ and $T_{*}>0$ such that if $T_{*}\le\frac{M_{0}}{2}$ and $E(f_{0})\le \frac{M_{0}}{2}$, there is a unique solution $F(x,p,t)$ to \eqref{PR} such that the energy functional is continuous in $[0,T_{*})$ and uniformly bounded: $$ \sup_{0\le t\le T_{*}}E(f)(t)\le M_{0}. $$ \end{proposition} \subsection{Global in time existence} Recall from Lemma \ref{ortho} that $P(f)$ is the orthogonal projection defined by \begin{equation}\label{P(f)} P(f)=\biggl\{a(t,x)+b(t,x)\cdot\Big(1+\frac{\mathcal{I}}{mc^2}\Big) p+c(t,x)\biggl(cp^0\Big(1+\frac{\mathcal{I}}{mc^2}\Big)-\widetilde{e}(T_0)\biggl)\biggl\}\sqrt{F_E^0} \end{equation} where \begin{align*} a(t,x)&= \int_{\mathbb{R}^3}\int_0^\infty f\sqrt{F_E^0} \phi(\mathcal{I}) \,d\mathcal{I}\,dp,\qquad b(t,x)=\frac{1}{b_0m}\int_{\mathbb{R}^3}\int_0^\infty p f\sqrt{F_E^0}\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\,d\mathcal{I}dp,\cr &c(t,x)= \frac{1}{k_BT_0^2\left\{\widetilde{e}\right\}^{\prime}(T_0)}\int_{\mathbb{R}^3}\int_0^\infty \left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\} f\sqrt{F_E^0} \phi(\mathcal{I})\,d\mathcal{I}dp. \end{align*} \noindent\newline Decompose $f$ as \begin{equation}\label{decom} f=P(f)+\{I-P\}(f) \end{equation} and insert \eqref{decom} into \eqref{LAW} to see that \begin{align}\label{macro-micro1}\begin{split} \{\partial_{t}+\hat{p}\cdot\nabla_{x}\}P(f)&=\{-\partial_{t}-\hat{p}\cdot\nabla_{x}+L\}\{I-P\}(f)+\Gamma (f)\cr &\equiv l\{I-P\}(f)+h(f) \end{split}\end{align} where we used $L[P(f)]=0$ (see Proposition \ref{pro}). 
Observe that \begin{eqnarray*} &&\{\partial_{t}+\hat{p}\cdot\nabla_{x}\}P(f)\cr &&=\{\partial_{t}+\hat{p}\cdot\nabla_{x}\}\left\{a(t,x)+b(t,x)\cdot\left(1+\frac{\mathcal{I}}{mc^2}\right) p+c(t,x)\left(cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right)\right\}\sqrt{F_E^0}\cr &&=\biggl\{\partial_{t}\big\{a(t,x)-\widetilde{e}(T_0)c(t,x)\big\}+c\sum_{i=1}^{3} \partial_{x_{i}}\big\{a(t,x)-\widetilde{e}(T_0)c(t,x)\big\}\frac{ p^{i}}{p^{0}} +\sum_{i=1}^{3} \left(\partial_{t}b_{i}(t,x) +c^2\partial_{x_{i}} c(t,x)\right) \cr &&\times \left(1+\frac{\mathcal{I}}{mc^2}\right)p^i+c\sum_{j=1}^{3}\sum_{i=1}^{3}\partial_{x_{i}}b_{j}(t,x)\left(1+\frac{\mathcal{I}}{mc^2}\right)\frac{ p^{i}p^{j}}{p^{0}} +c\partial_{t}c(t,x)\left(1+\frac{\mathcal{I}}{mc^2}\right)p^{0}\biggl\}\sqrt{F_E^0}\cr &&:= \partial_{t}\tilde{a}(t,x) e_{a_0}+c\sum_{i=1}^{3} \partial_{x_{i}}\tilde{a}(t,x) e_{a_i}+\sum_{i=1}^{3} (\partial_{t}b_{i}(t,x)+c\partial_{x_{i}}\tilde{c}(t,x))e_{bc_i} +c\sum_{j=1}^{3}\sum_{i=1}^{3} \partial_{x_{i}}b_{j}(t,x) e_{ij}\cr &&+ \partial_{t}\tilde{c}(t,x) e_c \end{eqnarray*} where $$\tilde{a}(t,x):=a(t,x)-\widetilde{e}(T_0)c(t,x),\qquad \tilde{c}(t,x):=cc(t,x)$$ and $e_{a_0},\cdots,e_c$ denote \begin{align}\label{basis}\begin{split} \{e_{a_0}, e_{a_i}, e_{bc_i}, e_{ij}, e_{c}\}&=\biggl\{1,\ \frac{p^{i}}{p^{0}} ,\ \Big(1+\frac{\mathcal{I}}{mc^2}\Big)p^{i} ,\ \Big(1+\frac{\mathcal{I}}{mc^2}\Big)\frac{p^{i}p^{j}}{p^{0}} ,\ \Big(1+\frac{\mathcal{I}}{mc^2}\Big)p^{0}\biggl\}\sqrt{F_E^0} \end{split} \end{align} for $1\le i,j\le 3$. In terms of the basis \eqref{basis}, \eqref{macro-micro1} leads to the following relations, often called the micro-macro system: \begin{lemma}\label{M-M} Let $l_{a_0},\cdots,l_{c}$ and $h_{a_0},\cdots,h_{c}$ denote the inner products of $l\{I-P\}(f)$ and $h(f)$ with the corresponding basis \eqref{basis}. 
Then we have \begin{enumerate} \item $\partial_{t}\tilde{a}(t,x)=l_{a_0}+h_{a_0}.$ \item $\partial_{t}\tilde{c}(t,x)=l_{c}+h_{c}.$ \item $\partial_{t}b_{i}(t,x)+c\partial_{x_{i}}\tilde{c}(t,x)=l_{bc_i}+h_{bc_i}.$ \item $c\partial_{x_{i}}\tilde{a}(t,x)=l_{ai}+h_{ai}.$ \item $c(1-\delta_{ij})\partial_{x_{i}}b_{j}(t,x)+c\partial_{x_{j}}b_{i}(t,x)=l_{ij}+h_{ij}.$ \end{enumerate} \end{lemma} \begin{proof} Since the proof is straightforward, we omit it. \end{proof} Following the same line of argument as in \cite[Theorem 5.4]{Guo whole}, one finds \begin{equation}\label{coercivitywhole} \sum_{0<|\alpha|\le N}\left\langle L(\partial^{\alpha}f),\partial^{\alpha}f\right\rangle_{L^2_{x,p,\mathcal{I}}}\le-\delta\sum_{0<|\alpha|\le N}\|\partial^{\alpha}f\|^{2}_{L^2_{x,p,\mathcal{I}}}-C\frac{d}{dt}\int_{\mathbb{R}^3}\left(\nabla_{x}\cdot b(t,x)\right)c(t,x) \,dx. \end{equation} To extend the local in time solution to the global one, we take the inner product of \eqref{LAW} with $f$ and use Proposition $\ref{pro}$ (2), Lemma \ref{nonlinear1} and Lemma \ref{nonlinear3} to obtain \begin{align*} \frac{d}{dt}\|f\|^{2}_{L^2_{x,p,\mathcal{I}}}+\delta_1\|\{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\le C\sqrt{E(f)(t)}\mathcal{D}(t). 
\end{align*} Applying $\partial^\alpha_\beta$ to \eqref{LAW}, taking the $L^2_{x,p,\mathcal{I}}$ inner product with $\partial^{\alpha}_\beta f$, and employing Lemma \ref{nonlinear1} and \eqref{coercivitywhole}, we have \begin{equation*} \frac{d}{dt}\|\partial^{\alpha}f\|^{2}_{L^2_{x,p,\mathcal{I}}}-C\frac{d}{dt}\int_{\mathbb{R}^3}(\nabla_{x}\cdot b)c\,dx+\delta_2\sum_{0<|\alpha|\le N}\|\partial^{\alpha}f\|^{2}_{L^2_{x,p,\mathcal{I}}}\le C\sqrt{E(f)(t)}\mathcal{D}(t) \end{equation*} for $\alpha\neq 0$, $\beta=0$, and \begin{equation*} \frac{d}{dt}\|\partial^{\alpha}_{\beta}f\|^{2}_{L^2_{x,p,\mathcal{I}}}+\delta_3\|\partial^{\alpha}_{\beta}f\|^{2}_{L^2_{x,p,\mathcal{I}}}\le C\sum_{|\beta_1|<|\beta|}\biggl(\|\partial^{\alpha}_{\beta_1}f\|^{2}_{L^2_{x,p,\mathcal{I}}}+\|\nabla_x\partial^{\alpha}_{\beta_1}f\|^{2}_{L^2_{x,p,\mathcal{I}}}\biggl) +C\sqrt{E(f)(t)}\mathcal{D}(t) \end{equation*} for $\alpha,\beta\neq0$. Combining the above estimates, we obtain the following energy estimate \cite{Guo whole}: \begin{align*} &\frac{d}{dt}\biggl\{C_{1}\|f\|^{2}_{L^2_{x,p,\mathcal{I}}}+\sum_{0<|\alpha|+|\beta|\le N}C_{|\beta|}\|\partial^{\alpha}_{\beta}f\|^{2}_{L^2_{x,p,\mathcal{I}}}-C_{2}\int_{\mathbb{R}^3}(\nabla_{x}\cdot b(t,x))c(t,x)\,dx\biggl\}+\delta_N\mathcal{D}(t)\cr &\le C\sqrt{E(f)(t)} \mathcal{D}(t) \end{align*} for some positive constants $C_1$, $C_{|\beta|}$, $C_2$ and $\delta_{N}$. Then, the standard continuity argument \cite{Guo whole} gives the global in time existence of solutions satisfying $$ E_N(f)(t)+\int_0^t \mathcal{D}_N(f)(s) ds\le CE_N(f_0). $$ \subsection{Proof of the asymptotic behaviors (Theorem \ref{main3} (1)--(3))} We start with the derivation of the local conservation laws for the linearized relativistic BGK model \eqref{LAW}. 
\begin{lemma}\label{balance} The following relations hold \begin{align*} & \partial_t a(t,x)+k_BT_0\nabla_x \cdot b(t,x)=\Big\langle -\hat{p}\cdot\nabla_x\left\{I-P\right\}(f)+\frac{1}{\tau}\Gamma(f),\sqrt{F_E^0} \Big\rangle_{L^2_{p,\mathcal{I}}}, \cr & \partial_t b(t,x)+ \frac{k_BT_0}{b_0m }\nabla_x\biggl\{a(t,x)+k_BT_0c(t,x)\biggl\} \cr &=\frac{1}{b_0m}\biggl\langle -\hat{p}\cdot\nabla_x\left\{I-P\right\}(f)+\frac{1}{\tau}\Gamma(f),\biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)p\sqrt{F_E^0} \biggl\rangle_{L^2_{p,\mathcal{I}}},\cr &\partial_t c(t,x) +\frac{ k_B }{ \{\widetilde{e}\}^\prime(T_0)}\nabla_x\cdot b(t,x) \cr &=\frac{1}{k_BT_0^2\{\widetilde{e}\}^\prime(T_0)} \biggl\langle -\hat{p}\cdot\nabla_x\left\{I-P\right\}(f)+\frac{1}{\tau}\Gamma(f),\biggl\{cp^0\biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)-\widetilde{e}(T_0)\biggl\}\sqrt{F_E^0} \biggl\rangle_{L^2_{p,\mathcal{I}}}. \end{align*} \end{lemma} \begin{proof} We rewrite \eqref{LAW} as \begin{equation}\label{LAW 2} \partial_t f+\hat{p}\cdot\nabla_x P(f)=-\hat{p}\cdot\nabla_x\{I-P\}(f)+\frac{1}{\tau}\left(L(f)+ \Gamma(f) \right) \end{equation} Multiplying \eqref{LAW 2} by $$ \sqrt{F_E^0}\phi(\mathcal{I}),\quad\frac{1}{b_0m}\left(1+\frac{\mathcal{I}}{mc^2}\right)p\sqrt{F_E^0}\phi(\mathcal{I}),\quad \frac{1}{k_BT_0^2\{\widetilde{e}\}^\prime(T_0)}\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}\sqrt{F_E^0}\phi(\mathcal{I}), $$ and integrating over $p,\mathcal{I}\in \mathbb{R}^3\times \mathbb{R}^+$, one finds \begin{eqnarray*} &&\partial_t a(t,x)+\biggl\langle \hat{p}\cdot\nabla_x P(f),\sqrt{F_E^0}\biggl\rangle_{L^2_{p,\mathcal{I}}}=\biggl\langle -\hat{p}\cdot\nabla_x\left\{I-P\right\}(f)+\frac{1}{\tau}\Gamma(f),\sqrt{F_E^0} \biggl\rangle_{L^2_{p,\mathcal{I}}} ,\cr &&\partial_t b(t,x)+ \frac{1}{b_0m}\left\langle \hat{p}\cdot\nabla_x P(f),\left(1+\frac{\mathcal{I}}{mc^2}\right)p\sqrt{F_E^0} \right\rangle_{L^2_{p,\mathcal{I}}}\cr &&=\frac{1}{b_0m}\left\langle 
-\hat{p}\cdot\nabla_x\left\{I-P\right\}(f)+\frac{1}{\tau}\Gamma(f),\left(1+\frac{\mathcal{I}}{mc^2}\right)p\sqrt{F_E^0} \right\rangle_{L^2_{p,\mathcal{I}}} ,\cr &&\partial_t c(t,x) +\frac{1}{k_BT_0^2\{\widetilde{e}\}^\prime(T_0)}\biggl\langle \hat{p}\cdot\nabla_x P(f),\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}\sqrt{F_E^0}\biggl\rangle_{L^2_{p,\mathcal{I}}}\cr && =\frac{1}{k_BT_0^2\{\widetilde{e}\}^\prime(T_0)} \biggl\langle -\hat{p}\cdot\nabla_x\left\{I-P\right\}(f)+\frac{1}{\tau}\Gamma(f),\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}\sqrt{F_E^0} \biggl\rangle_{L^2_{p,\mathcal{I}}}. \end{eqnarray*} We claim that \begin{enumerate} \item $\displaystyle \left\langle \hat{p}\cdot\nabla_x P(f),\sqrt{F_E^0}\right\rangle_{L^2_{p,\mathcal{I}}}=k_BT_0\nabla_x \cdot b(t,x),$ \item $\displaystyle \left\langle \hat{p}\cdot\nabla_x P(f),\left(1+\frac{\mathcal{I}}{mc^2}\right)p\sqrt{F_E^0} \right\rangle_{L^2_{p,\mathcal{I}}}=k_BT_0\nabla_x\left\{a(t,x)+k_BT_0c(t,x)\right\},$ \item $\displaystyle \left\langle \hat{p}\cdot\nabla_x P(f),\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}\sqrt{F_E^0}\right\rangle_{L^2_{p,\mathcal{I}}}= (k_BT_0)^2\nabla_x\cdot b(t,x),$ \end{enumerate} which completes the proof.\noindent\newline $\bullet$ Proof of (1): Observe from \eqref{P(f)} that \begin{align}\label{local 1}\begin{split} &\left\langle \hat{p}\cdot\nabla_x P(f),\sqrt{F_E^0}\right\rangle_{L^2_{p,\mathcal{I}}}\cr &=\sum_{i=1}^3 \int_{\mathbb{R}^3}\int_0^\infty \frac{cp^i}{p^0}\partial_{x^i}P(f) \sqrt{F_E^0}\phi(\mathcal{I})\, d\mathcal{I}dp\cr &= c\sum_{i=1}^3 \int_{\mathbb{R}^3}\int_0^\infty \frac{p^i}{p^0}\partial_{x^i}\biggl\{ a(t,x)+b(t,x)\cdot\biggl(1+\frac{\mathcal{I}}{mc^2}\biggl) p+c(t,x)\biggl(cp^0\biggl(1+\frac{\mathcal{I}}{mc^2}\biggl)-\widetilde{e}(T_0)\biggl)\biggl\} \cr &\times F_E^0 \phi(\mathcal{I})\, d\mathcal{I}dp. 
\end{split} \end{align} By the spherical symmetry of $F_E^0$, \eqref{local 1} becomes \begin{equation*} \left\langle \hat{p}\cdot\nabla_x P(f),\sqrt{F_E^0}\right\rangle_{L^2_{p,\mathcal{I}}} = c\sum_{i=1}^3 \partial_{x^i}b_i(t,x)\int_{\mathbb{R}^3}\int_0^\infty \frac{(p^i)^2}{p^0}F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)\phi(\mathcal{I})\, d\mathcal{I}dp \end{equation*} which, combined with Lemma \ref{computation F_E^0} (1), gives the proof of (1). \noindent\newline $\bullet$ Proof of (2): In the same way as in the proof of (1), one finds \begin{eqnarray*} &&\left\langle \hat{p}\cdot\nabla_x P(f),\left(1+\frac{\mathcal{I}}{mc^2}\right)p^j\sqrt{F_E^0}\right\rangle_{L^2_{p,\mathcal{I}}}\cr &&= c\int_{\mathbb{R}^3}\int_0^\infty \frac{ (p^j)^2}{p^0}\partial_{x^j}\left\{a(t,x)+c(t,x)\left(cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right)\right\}F_E^0 \left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I})\, d\mathcal{I}dp\cr &&= c\partial_{x^j}\left\{a(t,x)-\widetilde{e}(T_0)c(t,x)\right\}\int_{\mathbb{R}^3}\int_0^\infty \frac{ (p^j)^2}{p^0}F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I})\, d\mathcal{I}dp\cr &&+ c^2\partial_{x^j}c(t,x)\int_{\mathbb{R}^3}\int_0^\infty (p^j)^2F_E^0 \left(1+\frac{\mathcal{I}}{mc^2}\right)^2 \phi(\mathcal{I})\, d\mathcal{I}dp, \end{eqnarray*} for $j=1,2,3$. Using Lemma \ref{computation F_E^0} (2) and (5), we then have \begin{align*} \left\langle \hat{p}\cdot\nabla_x P(f),\left(1+\frac{\mathcal{I}}{mc^2}\right)p^j\sqrt{F_E^0}\right\rangle_{L^2_{p,\mathcal{I}}}&= k_BT_0\partial_{x^j}\left\{a(t,x)-\widetilde{e}(T_0)c(t,x)\right\}+ b_0mc^2\partial_{x^j}c(t,x)\cr &=k_BT_0\partial_{x^j}\left\{a(t,x)+k_BT_0 c(t,x) \right\} \end{align*} which completes the proof of (2). 
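The spherical-symmetry reduction used in the proofs of (1) and (2) is the standard observation that, for any radial weight $g(|p|)$ for which the integrals converge, \begin{align*} \int_{\mathbb{R}^3}p^ip^j g(|p|)\,dp=\frac{\delta_{ij}}{3}\int_{\mathbb{R}^3}|p|^2g(|p|)\,dp, \end{align*} since for $i\neq j$ the integrand is odd in $p^i$; this is why only the diagonal terms $\partial_{x^i}b_i$ survive in \eqref{local 1}.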
\noindent\newline $\bullet$ Proof of (3): In the same manner, we have \begin{eqnarray*} &&\left\langle \hat{p}\cdot\nabla_x P(f),\left\{cp^0\left(1+\frac{\mathcal{I}}{mc^2}\right)-\widetilde{e}(T_0)\right\}\sqrt{F_E^0}\right\rangle_{L^2_{p,\mathcal{I}}}\cr &&=c^2\sum_{i=1}^3 \partial_{x^i}b_i(t,x)\int_{\mathbb{R}^3}\int_0^\infty (p^i)^2 F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right)^2 \phi(\mathcal{I})\, d\mathcal{I}dp\cr &&-c\widetilde{e}(T_0)\sum_{i=1}^3 \partial_{x^i}b_i(t,x)\int_{\mathbb{R}^3}\int_0^\infty \frac{(p^i)^2}{p^0} F_E^0\left(1+\frac{\mathcal{I}}{mc^2}\right) \phi(\mathcal{I})\, d\mathcal{I}dp\cr &&=(k_BT_0)^2\nabla_x\cdot b(t,x). \end{eqnarray*} \end{proof} Now we prove the key estimate of this subsection, which is a relativistic generalization of Lemma 6.1 of \cite{Guo 1}: \begin{lemma}\label{I-P} Let $N\ge 3$. Then for $k=0,\cdots,N-1$, we have $$ \frac{d}{dt}G_k+\|\nabla^{k+1}Pf\|^2_{L^2_{x,p,\mathcal{I}}}\le C \biggl(\left\| \nabla^k\{I-P\}f\right\|^2_{L^2_{x,p,\mathcal{I}}}+\sum_{|{\alpha_1}|\le k}\left\|\|\nabla^{|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\|\nabla^{k-|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2 \biggl) $$ where $G_k(t)$ denotes \begin{align*} &\sum_{|\alpha|=k}\int_{\mathbb{R}^3}\langle\{I-P\}\partial^\alpha f,\epsilon_a(p,\mathcal{I}) \rangle_{L^2_{p,\mathcal{I}}}\cdot \nabla_x \partial^\alpha a(t,x)+ \langle \{I-P\}\partial^\alpha f,\epsilon_c(p,\mathcal{I}) \rangle_{L^2_{p,\mathcal{I}}}\cdot \nabla_x\partial^\alpha c(t,x) \,dx\cr &+\sum_{|\alpha|=k}\int_{\mathbb{R}^3}\left\langle \{I-P\} \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \nabla_x\cdot \partial^\alpha b(t,x)+ \partial^\alpha b(t,x)\cdot \nabla_x\partial^\alpha\tilde{c}(t,x)\,dx \end{align*} and $\epsilon_a,\epsilon_b$, and $\epsilon_c$ are linear combinations of the basis \eqref{basis}. 
\end{lemma} \begin{proof} For $|\alpha|\le N-1$, we have from Lemma $\ref{M-M}_{(5)}$ that $$c\Delta \partial^{\alpha}b_{i}(t,x)=\partial^{\alpha}\sum_{j\neq i}\big[-\partial_{i}(l_{jj}+h_{jj})+\partial_{j}(l_{ij}+h_{ij})\big]+\partial_{i}\partial^{\alpha}(l_{ii}+h_{ii}). $$ Here $l_{a_0},\cdots, l_{c}$ take the following form: \begin{align*} \left\langle -\partial_{t}\{I-P\}\partial^{\alpha}f,\epsilon \left(p,\mathcal{I}\right) \right\rangle_{L^2_{p,\mathcal{I}}}-\left\langle \big\{\hat{p}\cdot\nabla_{x}-L\big\}\{I-P\}\partial^{\alpha}f,\epsilon (p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \end{align*} where $\epsilon(p,\mathcal{I})$ is a suitable linear combination of the basis \eqref{basis}. We then have \begin{align}\label{b1}\begin{split} \|\nabla\partial^{\alpha}b\|^{2}_{L^2_x}&\le \int_{\mathbb{R}^3}\left\langle \partial_t\{I-P\}\nabla_x \partial^\alpha f, \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \cdot \partial^\alpha b(t,x)\,dx\cr & +C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}\|\nabla_x\partial^{\alpha}b\|_{L^2_x}\cr &+\left(\|\{I-P\} \nabla_x\partial^\alpha f \|_{L^2_{x,p,\mathcal{I}}}+\|\{I-P\} \partial^\alpha f \|_{L^2_{x,p,\mathcal{I}}}\right)\|\nabla_x\partial^{\alpha}b\|_{L^2_x}\cr \end{split} \end{align} where we used Lemma \ref{nonlinear1} to obtain $$ \|\partial^{\alpha} h_{i,j}\|_{L^2_x}=\|\left\langle \partial^{\alpha} \Gamma(f),e_{ij} \right\rangle_{L^2_{p,\mathcal{I}}} \|_{L^2_x}\le C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}. 
$$ Here the first term on the r.h.s of \eqref{b1} can be written as \begin{eqnarray*} &&\int_{\mathbb{R}^3}\left\langle \partial_t\{I-P\} \nabla_x \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \cdot \partial^\alpha b(t,x)\,dx\cr &&=\frac{d}{dt} \int_{\mathbb{R}^3}\left\langle \{I-P\} \nabla_x \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \cdot \partial^\alpha b(t,x)\,dx\cr &&-\int_{\mathbb{R}^3}\left\langle \{I-P\} \nabla_x \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \cdot \partial_t\partial^\alpha b(t,x)\,dx \cr &&=-\frac{d}{dt} \int_{\mathbb{R}^3}\left\langle \{I-P\} \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \nabla_x \cdot\partial^\alpha b(t,x)\,dx\cr &&-\int_{\mathbb{R}^3}\left\langle \{I-P\} \nabla_x \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \cdot \partial_t\partial^\alpha b(t,x)\,dx , \end{eqnarray*} which, combined with Lemma $\ref{balance}_{(2)}$ leads to \begin{align}\label{b3}\begin{split} &\int_{\mathbb{R}^3}\left\langle \partial_t\{I-P\} \nabla_x \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \cdot \partial^\alpha b(t,x)\,dx\cr &\le -\frac{d}{dt} \int_{\mathbb{R}^3}\left\langle \{I-P\} \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \nabla_x\cdot \partial^\alpha b(t,x)\,dx +C_\varepsilon\|\{I-P\} \nabla_x\partial^\alpha f \|_{L^2_{x,p,\mathcal{I}}}^2\cr &+\varepsilon\|\nabla_x \partial^\alpha a +\nabla_x \partial^\alpha c \|_{L^2_{x }}^2+\varepsilon\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2 . 
\end{split} \end{align} Go back to \eqref{b1} with \eqref{b3} to get \begin{align}\label{bx}\begin{split} \|\nabla\partial^{\alpha}b \|^{2}_{L^2_x}&\le -\frac{d}{dt}\int_{\mathbb{R}^3}\left\langle \{I-P\} \partial^\alpha f , \epsilon_b(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \nabla_x\cdot \partial^\alpha b(t,x)\,dx\cr &+C\left(\|\{I-P\} \nabla_x\partial^\alpha f \|_{L^2_{x,p,\mathcal{I}}}^2+\|\{I-P\} \partial^\alpha f \|^2_{L^2_{x,p,\mathcal{I}}}\right)\cr &+\varepsilon_1\biggl(\|\nabla_x \partial^\alpha a \|_{L^2_{x }}^2+\|\nabla_x \partial^\alpha c \|_{L^2_{x }}^2 +\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2 \biggl). \end{split}\end{align} In a similar way, it follows from Lemma $\ref{M-M}_{(4)}$ and Lemma $\ref{balance}_{(1)}$ that \begin{align}\label{ax}\begin{split} \|\nabla_x\partial^{\alpha}a \|^{2}_{L^2_x}&\le -\frac{d}{dt}\int_{\mathbb{R}^3}\left\langle \{I-P\} \partial^\alpha f , \epsilon_a(p,\mathcal{I}) \right\rangle_{L^2_{p,\mathcal{I}}} \cdot\nabla_x \partial^\alpha a(t,x)\,dx\cr &+C\left(\|\{I-P\} \nabla_x\partial^\alpha f \|_{L^2_{x,p,\mathcal{I}}}^2+\|\{I-P\} \partial^\alpha f \|^2_{L^2_{x,p,\mathcal{I}}}\right)\cr &+\varepsilon_2\biggl(\|\nabla_x \partial^\alpha b \|_{L^2_{x }}^2+\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2 \biggl). 
\end{split} \end{align} Also, we have from Lemma $\ref{M-M}_{(3)}$ that \begin{align}\begin{split}\label{c1} \|\nabla\partial^{\alpha}\tilde{c} \|_{L^2_x}^{2} &\le -\int_{\mathbb{R}^3}\partial_{t}\partial^{\alpha}b(t,x)\cdot\nabla_x\partial^\alpha\tilde{c}(t,x)+\langle \partial_t\{I-P\}\nabla_x\partial^\alpha f,\epsilon_c(p,\mathcal{I})\rangle _{L^2_{p,\mathcal{I}}}\partial^\alpha\tilde{c}(t,x)\,dx\cr &+C\left(\|\{I-P\}\nabla_x\partial^\alpha f\|_{L^2_{x,p,\mathcal{I}}}^2+\|\{I-P\}\partial^\alpha f\|_{L^2_{x,p,\mathcal{I}}}^2\right)\cr &+C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2. \end{split}\end{align} Using integration by parts and Lemma $\ref{balance}_{(3)}$, the first term on the r.h.s of \eqref{c1} can be estimated as \begin{align*} &-\int_{\mathbb{R}^3} \partial_{t}\partial^{\alpha}b(t,x)\cdot\nabla_x \partial^\alpha\tilde{c}(t,x)+\langle \partial_t\{I-P\}\nabla_x\partial^\alpha f,\epsilon_c(p,\mathcal{I})\rangle _{L^2_{p,\mathcal{I}}}\partial^\alpha\tilde{c}(t,x)\,dx\cr &=-\frac{d}{dt}\biggl\{ \int_{\mathbb{R}^3} \partial^{\alpha}b(t,x)\cdot\nabla_x \partial^\alpha\tilde{c}(t,x)-\langle \{I-P\} \nabla_x\partial^\alpha f,\epsilon_c(p,\mathcal{I})\rangle _{L^2_{p,\mathcal{I}}} \partial^\alpha\tilde{c}(t,x)\,dx\biggl\} \cr &-\int_{\mathbb{R}^3}\nabla_x\cdot\partial^{\alpha}b(t,x) \partial_{t}\partial^\alpha\tilde{c}(t,x)-\langle \{I-P\}\nabla_x\partial^\alpha f,\epsilon_c(p,\mathcal{I})\rangle _{L^2_{p,\mathcal{I}}}\partial_t\partial^\alpha\tilde{c}(t,x)\,dx \cr &\le -\frac{d}{dt}\biggl\{ \int_{\mathbb{R}^3} \partial^{\alpha}b(t,x)\cdot\nabla_x \partial^\alpha\tilde{c}(t,x)+\langle \{I-P\} \partial^\alpha f,\epsilon_c(p,\mathcal{I})\rangle _{L^2_{p,\mathcal{I}}}\nabla_x\partial^\alpha\tilde{c}(t,x)\,dx\biggl\} \cr &+C^\prime\|\nabla_x\partial^\alpha b\|^2_{L^2_x} +C\biggl(\|\{I-P\}\nabla_x\partial^\alpha 
f\|^2_{L^2_{x,p,\mathcal{I}}}+\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2\biggl). \end{align*} which, together with \eqref{c1} gives \begin{align}\begin{split}\label{c3} &\|\nabla\partial^{\alpha}\tilde{c}\|_{L^2_x}^{2}\cr &\le -\frac{d}{dt}\biggl\{ \int_{\mathbb{R}^3} \partial^{\alpha}b(t,x)\cdot\nabla_x \partial^\alpha\tilde{c}(t,x)+\langle \{I-P\} \partial^\alpha f,\epsilon_n(p,\mathcal{I})\rangle _{L^2_{p,\mathcal{I}}}\nabla_x\partial^\alpha\tilde{c}(t,x)\,dx\biggl\}+C^\prime\|\nabla_x\partial^\alpha b\|^2_{L^2_x} \cr &+C\biggl(\|\{I-P\}\nabla_x\partial^\alpha f\|_{L^2_{x,p,\mathcal{I}}}^2+\|\{I-P\}\partial^\alpha f\|_{L^2_{x,p,\mathcal{I}}}^2+\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2\biggl). \end{split}\end{align} For sufficiently small $\varepsilon_1$ and $\varepsilon_2$ satisfying $C^\prime \varepsilon_1 \ll1$, combining \eqref{bx}, \eqref{ax} and \eqref{c3} gives the desired result for spatial derivative. We now employ the case of temporal derivative $d/dt$. Recall from Lemma $\ref{balance}_{(1)}$ that \begin{align*} \partial_t\partial^\alpha a(t,x)&=-k_BT_0\nabla_x \cdot \partial^\alpha b(t,x)+\left\langle -\hat{p}\cdot\nabla_x\left\{I-P\right\}(\partial^\alpha f)+\partial^\alpha h,\sqrt{F_E^0} \right\rangle_{L^2_{p,\mathcal{I}}}\cr &\le -k_BT_0\nabla_x \cdot \partial^\alpha b(t,x)+C\|\nabla_x\left\{I-P\right\}\partial^\alpha f\|_{L^2_{p,\mathcal{I}}}\cr &+C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}. 
\end{align*} Taking the inner product with $\partial_t\partial^\alpha a(t,x)$, the above estimate leads to \begin{align*} \|\partial_t\partial^\alpha a\|^2_{L^2_x}&\le C_\varepsilon\biggl(\|\nabla_x \partial^\alpha b\|^2_{L^2_x}+ \|\nabla_x\left\{I-P\right\}\partial^\alpha f\|_{L^2_{x,p,\mathcal{I}}}^2+C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2\biggl)\cr &+\varepsilon \|\partial_t\partial^\alpha a\|^2_{L^2_x}, \end{align*} yielding \begin{equation}\label{t-a} \|\partial_t\partial^\alpha a\|^2_{L^2_x} \le C\biggl(\|\nabla_x \partial^\alpha b\|^2_{L^2_x}+ \|\nabla_x\left\{I-P\right\}\partial^\alpha f\|^2_{L^2_{x,p,\mathcal{I}}}+C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2\biggl) \end{equation} for sufficiently small $\varepsilon$. In the same manner, one obtains from Lemma $\ref{balance}_{(2),(3)}$ that \begin{align}\label{t-b}\begin{split} \|\partial_t\partial^\alpha b\|^2_{L^2_x} &\le C\left(\|\nabla_x \partial^\alpha a\|^2_{L^2_x}+\|\nabla_x \partial^\alpha c\|^2_{L^2_x}+ \|\nabla_x\left\{I-P\right\}\partial^\alpha f\|_{L^2_{x,p,\mathcal{I}}}^2\right)\cr &+C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2 \end{split}\end{align} and \begin{align}\label{t-b-2}\begin{split} \|\partial_t\partial^\alpha c\|^2_{L^2_x} &\le C\left(\|\nabla_x \partial^\alpha b\|^2_{L^2_x}+ \|\nabla_x\left\{I-P\right\}\partial^\alpha f\|_{L^2_{x,p,\mathcal{I}}}^2\right)\cr &+C\sum_{|{\alpha_1}|\le |\alpha|}\left\|\|\partial^{\alpha_1} f\|_{L^2_{p,\mathcal{I}}}\|\partial^{\alpha-{\alpha_1}} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2. \end{split}\end{align} Combining \eqref{t-a}--\eqref{t-b-2} and the results for the spatial derivatives completes the proof. 
\end{proof} Now we are ready to prove the rest of Theorem \ref{main3}. Since the argument is similar to that in \cite[Section 4]{Guo-Wang}, we only present a sketch of the proof of Theorem \ref{main3} (2)--(3) for brevity.\noindent\newline $\bullet$ Proof of Theorem \ref{main3} (2): Let $0\le \ell\le N-1$. Applying $\nabla^k$ to \eqref{LAW}, taking the $L^2_{x,p,\mathcal{I}}$ inner product with $\nabla^k f$, and using Proposition \ref{pro} and Lemmas \ref{nonlinear1} and \ref{nonlinear3}, we have \begin{align}\label{nablak}\begin{split} &\frac{1}{2}\frac{d}{dt}\sum_{\ell\le k\le N-1} \|\nabla^k f\|^2_{L^2_{x,p,\mathcal{I}}}+ \sum_{\ell\le k\le N-1}\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\cr &\le C\sum_{\ell\le k\le N-1}\sum_{|{\alpha_1}|\le k}\left\|\|\nabla^{|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\|\nabla^{k-|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_x}^2. \end{split}\end{align} Using Sobolev-type inequalities and Minkowski's inequality, the right-hand side of \eqref{nablak} can be bounded from above as \begin{align}\label{rhs1}\begin{split} \sum_{|{\alpha_1}|\le k}\left\|\|\nabla^{|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\|\nabla^{k-|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_x}^2\le C\delta^2\biggl(\|\nabla^{k+1}f\|^2_{L^2_{x,p,\mathcal{I}}}+ \|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\biggl) \end{split}\end{align} for $k=0,\cdots,N-1$, and \begin{align}\label{rhs2}\begin{split} \sum_{|{\alpha_1}|\le k}\left\|\|\nabla^{|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\|\nabla^{k-|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_x}^2\le C\delta^2\|\nabla^{N}f\|^2_{L^2_{x,p,\mathcal{I}}} \end{split}\end{align} for $k=N$, respectively. 
Combining \eqref{nablak}--\eqref{rhs2}, one can see that \begin{align}\label{nablaN}\begin{split} &\frac{d}{dt}\sum_{\ell\le k\le N} \|\nabla^k f\|^2_{L^2_{x,p,\mathcal{I}}}+C \sum_{\ell\le k\le N}\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\cr &\le C\delta^2\Biggl(\sum_{\ell+1\le k\le N}\|\nabla^{k}f\|^2_{L^2_{x,p,\mathcal{I}}}+\sum_{\ell\le k\le N}\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\Biggl). \end{split}\end{align} On the other hand, it follows from Lemma \ref{I-P}, Sobolev interpolation and Minkowski's inequality that \begin{align*} & \frac{d}{dt}\eta\sum_{\ell\le k\le N-1}G_k+\eta\sum_{\ell+1\le k\le N} \|\nabla^{k}Pf\|^2_{L^2_{x,p,\mathcal{I}}}\cr&\le C\eta \sum_{\ell\le k\le N-1} \left\| \nabla^k\{I-P\}f\right\|^2_{L^2_{x,p,\mathcal{I}}} +C\eta \sum_{\ell\le k\le N-1}\sum_{|{\alpha_1}|\le k}\left\|\|\nabla^{|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\|\nabla^{k-|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_{x}}^2\cr &\le C\eta\sum_{\ell\le k\le N} \left\| \nabla^k\{I-P\}f\right\|^2_{L^2_{x,p,\mathcal{I}}}. \end{align*} This, together with \eqref{nablaN} yields that for sufficiently small $\eta$ and $\delta$, \begin{align}\label{mathcal energy} &\frac{d}{dt}\mathcal{E}_\ell(t)+\|\nabla^\ell \{I-P\}f \|^2_{L^2_{x,p,\mathcal{I}}}+\sum_{\ell+1\le k\le N}\|\nabla^{k}f\|^2_{L^2_{x,p,\mathcal{I}}}\le 0, \end{align} where $\mathcal{E}_\ell(t)$ denotes $$ \mathcal{E}_\ell(t)=\sum_{\ell\le k\le N}\|\nabla^k f \|^2_{L^2_{x,p,\mathcal{I}}}+\eta\sum_{\ell\le k\le N-1}G_k. 
$$ Using the following Sobolev interpolation (for details, see \cite[Lemma A.4]{Guo-Wang}): \begin{equation*}\label{interpolation} \|\nabla^\ell f\|_{L^2}\le C\|\nabla^{\ell+1}f \|^{1-\theta}_{L^2}\|\Lambda^{-s}f \|^{\theta}_{L^2},\qquad\text{where}\qquad \theta=\frac{1}{\ell+1+s},\ s,\ell \ge0, \end{equation*} we have from \eqref{mathcal energy} and Theorem \ref{main3} (1) that \begin{equation*} \frac{d}{dt}\mathcal{E}_\ell(t)+ C_0\biggl(\sum_{\ell+1\le k\le N}\|\nabla^{k-1}f\|^2_{L^2_{x,p,\mathcal{I}}}\biggl)^{1+\frac{1}{\ell+s}} \le 0. \end{equation*} Since $\mathcal{E}_\ell(t)$ is equivalent to $\sum_{\ell\le k\le N}\|\nabla^k f \|^2_{L^2_{x,p,\mathcal{I}}}$ for sufficiently small $\eta$, this gives the desired result.\noindent\newline $\bullet$ Proof of Theorem \ref{main3} (3): Applying $\{I-P\}$ to \eqref{LAW} and using Proposition \ref{pro} and Lemma \ref{nonlinear3}, one finds \begin{equation*} \partial_t \{I-P\}f+\hat{p}\cdot\nabla_x \{I-P\}f+L\{I-P\}f=\Gamma(f)-\hat{p}\cdot\nabla_x Pf+P(\hat{p}\cdot\nabla_xf). 
\end{equation*} Applying $\nabla^k$ $(k=0,\cdots,N-2)$ and taking the $L^2_{x,p,\mathcal{I}}$ inner product with $\nabla^k\{I-P\}f$, this leads to \begin{align*} &\frac{1}{2}\frac{d}{dt}\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}+\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\cr &\le \left\langle \nabla^k\Gamma(f),\nabla^k\{I-P\}f\right\rangle_{L^2_{x,p,\mathcal{I}}}-\left\langle \hat{p}\cdot\nabla_x P\nabla^kf-P(\hat{p}\cdot\nabla_x\nabla^kf),\nabla^k\{I-P\}f\right\rangle_{L^2_{x,p,\mathcal{I}}}, \end{align*} which, combined with Lemma \ref{nonlinear1} gives that for small $\varepsilon$, \begin{align}\label{I-P 2}\begin{split} &\frac{1}{2}\frac{d}{dt}\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}+\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\cr &\le C_\varepsilon \Biggl(\sum_{|{\alpha_1}|\le k}\left\|\|\nabla^{|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\|\nabla^{k-|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_x}^2+\|\nabla^{k+1}f\|^2_{L^2_{x,p,\mathcal{I}}}\Biggl)+\varepsilon\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}. \end{split}\end{align} On the other hand, note from \cite[Lemma 4.4]{Guo-Wang} that using the Sobolev type inequalities and Minkowski's inequality, one can have \begin{align*} \sum_{|{\alpha_1}|\le k}\left\|\|\nabla^{|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\|\nabla^{k-|\alpha_1|} f\|_{L^2_{p,\mathcal{I}}}\right\|_{L^2_x}^2\le C\delta^2\biggl(\|\nabla^{k+1}f\|^2_{L^2_{x,p,\mathcal{I}}}+ \|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\biggl). \end{align*} This, together with \eqref{I-P 2} gives that for sufficiently small $\varepsilon$ and $\delta$, \begin{align}\label{target} & \frac{d}{dt}\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}+\|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}\le C \|\nabla^{k+1}f\|^2_{L^2_{x,p,\mathcal{I}}}. 
\end{align} For $k=1,\cdots,N-2$, applying the Gronwall inequality to \eqref{target} and using Theorem \ref{main3} (2) with $\ell=k+1$, one finds \begin{align*} \|\nabla^k \{I-P\}f\|^2_{L^2_{x,p,\mathcal{I}}}&\le e^{-t}\|\nabla^k \{I-P\}f_0\|^2_{L^2_{x,p,\mathcal{I}}}+C\int_0^t e^{-(t-s)}\|\nabla^{k+1}f(s)\|^2_{L^2_{x,p,\mathcal{I}}}\,ds\cr &\le C_0(1+t)^{-(k+1+s)} \end{align*} which, together with the interpolation, gives the desired result for $-s<k\le N-2$. \noindent{\bf Acknowledgement} Byung-Hoon Hwang was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. NRF-2019R1A6A1A10073079). Seok-Bae Yun was supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1801-02. \end{document}
\begin{document} \title{Setting Reserve Requirements to Approximate the Efficiency of the Stochastic Dispatch} \author{Vladimir~Dvorkin~Jr.,~\IEEEmembership{Student member,~IEEE,} Stefanos~Delikaraoglou,~\IEEEmembership{Member,~IEEE,} and~Juan~M.~Morales,~\IEEEmembership{Senior member,~IEEE} \thanks{V. Dvorkin Jr. is with the Technical University of Denmark, Kgs. Lyngby, Denmark (e-mail: [email protected]).} \thanks{S. Delikaraoglou is with the ETH Zurich, Zurich, Switzerland (e-mail: [email protected]).} \thanks{J.M.Morales is with the University of Malaga, Malaga, Spain (e-mail: [email protected]).} \thanks{ The work by Vladimir Dvorkin Jr. was supported in part by the Russian Foundation for Basic Research (RFBR) according to the research project No. 16-36-00389. The work by Juan M. Morales was supported in part by the Spanish Ministry of Economy, Industry and Competitiveness through project ENE2017-83775-P; and in part by the European Research Council (ERC) under the EU Horizon 2020 research and innovation programme (grant agreement No. 755705) and the Research Program for Young Talented Researchers of the University of Malaga through project PPIT-UMA-B1-2017/18.} } \maketitle \begin{abstract} This paper deals with the problem of clearing sequential electricity markets under uncertainty. We consider the European approach, where reserves are traded separately from energy to meet exogenous reserve requirements. Recently proposed stochastic dispatch models that co-optimize these services provide the most efficient solution in terms of expected operating costs by computing reserve needs endogenously. However, these models are incompatible with existing market designs. This paper proposes a new method to compute reserve requirements that bring the outcome of sequential markets closer to the stochastic energy and reserves co-optimization in terms of cost efficiency. 
Our method is based on a stochastic bilevel program that implicitly improves the inter-temporal coordination of energy and reserve markets, but remains compatible with the European market design. We use \textcolor{blue}{two standard IEEE reliability test cases} to illustrate the benefit of intelligently setting operating reserves in single and multiple reserve control zones. \end{abstract} \begin{IEEEkeywords} Bilevel optimization, electricity markets, market clearing, reserve requirements, stochastic programming. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section*{Nomenclature} \textcolor{WildStrawberry}{The main notation used in this paper is stated below. Additional symbols are defined in the paper where needed.} All symbols are augmented by index $t$ when referring to different time periods. \subsection{Sets and Indices} \begin{ldescription}{$xxxxx$} \item [$\Lambda$] Set of transmission lines. \item [$\omega \in \Omega$] Set of wind power production scenarios. \item [$i \in I$] Set of conventional generation units. \item [$j \in J$] Set of loads. \item [$k \in K$] Set of wind power units. \item [$n \in N$] Set of nodes. \item [$z \in Z$] Set of reserve control zones. \item [$\{\}_{n}$] Mapping of $\{\}$ into the set of nodes. \item [$\{\}_{z}$] Mapping of $\{\}$ into the set of reserve control zones. \end{ldescription} \subsection{Decision variables} \begin{ldescription}{$xxxxx$} \item [$\delta_{n}^{\text{DA}}$] Day-ahead voltage angle at node $n$ [rad]. \item [$\delta_{n\omega}^{\text{RT}}$] Real-time voltage angle at node $n$ in scenario $\omega$ [rad]. \item [$D_{z}^{\text{U/D}}$] Up-/Downward reserve requirement in zone $z$ [MW]. \item [$L_{j\omega}^{\text{sh}}$] Shedding of load $j$ in scenario $\omega$ [MW]. \item [$P_{i}^{\text{C}}$] Day-ahead \textcolor{WildStrawberry}{dispatch} of conventional unit $i$ [MW]. \item [$P_{k}^{\text{W}}$] Day-ahead \textcolor{WildStrawberry}{dispatch} of wind power unit $k$ [MW]. 
\item [$P_{k\omega}^{\text{W,sp}}$] Wind spillage of unit $k$ in scenario $\omega$ [MW]. \item [$R_{i}^{\text{U/D}}$] Up-/Downward reserve provision \textcolor{WildStrawberry}{from} unit $i$ [MW]. \item [$r_{i\omega}^{\text{U/D}}$] Up-/Downward reserve deployment of unit $i$ in scenario $\omega$ [MW]. \end{ldescription} \subsection{Parameters} \begin{ldescription}{$xxxxx$} \item [$\pi_{\omega}$] Probability of occurrence of wind power production scenario $\omega$. \item [$C_{i}$] Day-ahead price offer of unit $i$ [\$/MWh]. \item [$C_{i}^{\text{U/D}}$] Up-/Downward reserve price offer of unit $i$ [\$/MW]. \item [$C^{\text{VoLL}}$] Value of lost load [\$/MWh]. \item [$\overline{F}_{nm}$] Capacity of transmission line $(n,m)$ [MW]. \item [$L_{j}$] Demand of load $j$ [MWh]. \item [$\overline{P}_{i}$] Day-ahead quantity offer of unit $i$ [MW]. \item [$\overline{R}_{i}^{\text{U/D}}$] Up-/Downward reserve capacity offer of unit $i$ [MW]. \item [$\widehat{W}_{k}$] Expected generation of wind power unit $k$ [MW]. \item [$W_{k\omega}$] Wind power realization of unit $k$ in scenario $\omega$ [MW]. \item [$X_{nm}$] Reactance of transmission line $(n,m)$ [p.u.]. \end{ldescription} \section{Introduction} \IEEEPARstart{E}{lectricity} markets are commonly \textcolor{WildStrawberry}{organized in} a sequence of trading floors \textcolor{WildStrawberry}{in which different services are traded in various time-frames.} \textcolor{WildStrawberry}{According to the European market architecture,} this sequence consists of reserve and day-ahead markets \textcolor{WildStrawberry}{that are cleared} 12-36 hours \textcolor{WildStrawberry}{before actual power system operation and pertain to trading reserve capacity and energy services, respectively. 
Getting close to actual delivery of electricity, a real-time market is organized to balance deviations from the initial schedule.} This market design has been established following a conventional view of power system operation, where uncertainty was \textcolor{WildStrawberry}{induced by equipment} contingencies or minor forecast errors of electricity demand. \textcolor{WildStrawberry}{However, considering} the increasing shares of renewable generation, this design has limited ability to cope with variable and uncertain energy sources, while \textcolor{WildStrawberry}{maintaining} a sufficient level of reliability at a reasonable cost \cite{Aigner_2012}. \textcolor{blue}{To account for the uncertain nature of renewable generation, recent literature proposes economic dispatch models \cite{Morales_2012,1525111} and unit commitment formulations \cite{Papavasiliou_2015,4806110,4556639} based on stochastic optimization.} Unlike the conventional market design, which downplays the cost of uncertainty, the stochastic model \textcolor{WildStrawberry}{makes use of a probabilistic description of uncertainty and dispatches the system accounting for plausible forecast errors. In this case, reserve requirements are computed endogenously, instead of relying on rule-of-thumb \textcolor{ForestGreen}{methods} such as the N-1 security criterion. Although the resulting \textit{stochastic ideal} schedule provides} the most efficient solution in terms of expected operating system costs, this design is not adopted in practice due to still unresolved issues such as the violation of the least-cost merit-order principle \cite{zavala2017stochastic}. 
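The benefit of computing reserve needs endogenously can already be seen in a scalar toy problem. The sketch below (all prices, capacities, scenarios and probabilities are hypothetical, not taken from the paper) finds by enumeration the up-reserve level that minimizes expected cost, which is exactly the quantity a stochastic dispatch produces as a by-product:

```python
# Toy single-bus example: wind is scheduled at its forecast mean, shortfalls
# are covered by deployed up-reserve, any uncovered shortfall is shed at VOLL.
VOLL = 1000.0      # value of lost load [$/MWh]       (hypothetical)
C_RES = 8.0        # reserve capacity offer [$/MW]    (hypothetical)
C_BAL = 50.0       # reserve deployment offer [$/MWh] (hypothetical)
SCENARIOS = [(0.25, 20.0), (0.50, 40.0), (0.25, 60.0)]  # (prob, wind [MW])
W_DA = 40.0        # wind dispatched day-ahead at its forecast mean

def expected_cost(R):
    """Reserve capacity cost plus expected balancing cost for reserve level R."""
    cost = C_RES * R
    for prob, w in SCENARIOS:
        deficit = max(0.0, W_DA - w)   # shortfall vs day-ahead schedule
        deployed = min(R, deficit)     # covered by procured reserve
        shed = deficit - deployed      # remainder is lost load
        cost += prob * (C_BAL * deployed + VOLL * shed)
    return cost

# The "stochastic ideal" reserve level: minimize expected cost directly.
best_R = min(range(0, 41), key=expected_cost)
print(best_R, expected_cost(best_R))
```

In this instance the cost-minimizing reserve equals the largest plausible wind shortfall, because the capacity price is far below the expected avoided load-shedding cost; a rule-of-thumb requirement ignoring the scenario probabilities would generally land elsewhere on the cost curve.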
There are several research contributions devoted to \textit{approximating} the stochastic ideal solution, i.e., approaching the expected operating cost provided by the stochastic dispatch model while sidestepping its theoretical drawbacks, \textcolor{blue}{namely, the violation of \textit{cost recovery} and \textit{revenue adequacy} for certain realizations of the random variables. The cost recovery property guarantees that the profit of each conventional producer is greater than or equal to its operating costs. The revenue adequacy property requires that the payments that the system operator must make to and receive from the participants do not cause it to incur a financial deficit.} \textcolor{WildStrawberry}{Authors} in \cite{morales2014electricity} \textcolor{WildStrawberry}{propose} a new market-clearing procedure \textcolor{WildStrawberry}{according to which wind power is dispatched to a value different than its forecast mean, such that the expected system cost is minimized. This procedure respects the merit order of the day-ahead market and thus ensures cost recovery of the flexible units.} An enhanced stochastic dispatch \textcolor{WildStrawberry}{that guarantees} both cost recovery and revenue adequacy \textcolor{WildStrawberry}{for every uncertainty realization} is introduced in \cite{Kazempour_2018}. The main obstacle preventing the implementation of these two models is that they require changing the state of affairs of conventional market structures. \textcolor{blue}{Finally, authors in \cite{SARFATI2018851} propose a stochastic dispatch model that aims at generating proper price signals that incentivize generators to provide reliability services akin to reserves. 
This model also guarantees cost recovery and revenue adequacy for every uncertainty realization, but at the same time it also requires significant changes in market design as well as in {\color{blue} the} offering strategies of the renewable power producers.} \textcolor{WildStrawberry}{More in line with the current practices of the European market design, \cite{Jensen_2017} proposes a systematic method} to adjust available transfer capacities in order to bring operational efficiency of interconnected power systems closer to the stochastic solution. \textcolor{WildStrawberry}{In the US electricity markets, several Independent System Operators (ISOs), e.g., the California ISO (CAISO) and Midcontinent ISO (MISO), are implementing new ramping capacity products to increase the ramping ability of the system during the real-time re-dispatch in order to cope with steep ramps of net load \cite{Wang_2013}. Essentially, these flexibility products aim to resemble the stochastic dispatch, which inherently finds the optimal allocation of flexible resources between energy and ramping services. } \textcolor{WildStrawberry}{In the same vein, several US ISOs, for instance the New York ISO, the ISO New England, the MISO, and the Pennsylvania-New Jersey-Maryland (PJM) market, have introduced an operating reserve demand curve (ORDC) in their real-time market \cite{hogan2013electricity}. Motivated by the two-stage stochastic dispatch model, the ORDC mechanism adjusts electricity prices to reflect the scarcity value of reserves for the system operator and incentivizes market players to dispatch their units according to a socially optimal schedule.} \textcolor{WildStrawberry}{The price adjustment through ORDC leads theoretically to perfect arbitrage between energy and reserves in case these two products are co-optimized \cite{papavasiliou2017remuneration}. 
However, in the European market that separates energy and reserve capacity trading this arbitrage is inefficient per se, since market players have to value reserves prior to the energy-only market clearing.} This paper proposes an alternative approach to approximate the stochastic ideal dispatch solution through an intelligent setting of zonal reserve requirements in sequentially cleared electricity markets akin to the European architecture. \textcolor{WildStrawberry}{Here, we solely focus on operating reserves, i.e., generation that is dispatched to respond to net load variations based on economic bids, rather than on regulating services that are activated by automatic generation control.} Traditionally, requirements \textcolor{WildStrawberry}{for operating reserves} are defined based on deterministic security criteria, such as N-1 security constraint violations, where reserves are dimensioned to cover the largest contingency in the system \cite{rebours2005survey}, or based on a mean forecast load error and forced outage rate of system components over a certain horizon, as in the PJM market \cite{manual2012energy}. The main drawback of those approaches is that they ignore the probabilistic nature of renewable generation and neglect the \textcolor{WildStrawberry}{economic} impact of reserve needs on subsequent operations. 
In order to account for the operational uncertainty, recent literature proposes reserve dimensioning methods based on probabilistic criteria, according to which reserve requirements are drawn from the probabilistic description of uncertainties \textcolor{blue}{ \cite{strbac2007impact,lee2012analyzing,Doherty_1425549,5929570,6942382,7084167,7552596,lange2005uncertainty,5565529,6299425}.} \textcolor{blue}{ For example, \cite{strbac2007impact} suggests defining the reserve needs such that they cover 97.7\% ($3\sigma$) of the total variation of a Gaussian distribution modeling the joint wind-load uncertainty, disregarding the fact that wind power forecast errors are described by non-Gaussian distributions \cite{lange2005uncertainty}. As a remedy to this drawback, \cite{5565529} proposed a method for setting the reserve requirements using non-parametric probabilistic wind power forecasts. The flying brick and probability box methods in \cite{5929570} and \cite{6942382}, respectively, compute robust envelopes that enclose the net load with a specified probability level. A recent extension of these methods, called flexibility envelopes, was suggested in \cite{7084167}. These envelopes are based on the same principles but evolve in time to respect the temporal evolution of reserve requirements. As demonstrated in \cite{5929570}, \cite{6942382} and \cite{7552596}, the probabilistic reserve concepts might be integrated into the actual energy management system to derive requirements for capacity, ramping capability and ramping duration of flexible units. In contrast to the deterministic practices,} the benefit of these methods is that reserve requirements, drawn from accurately predicted distributions, minimize extreme balancing actions provoked by under- or over-procurement of reserves. However, probabilistic requirements are still an exogenous input to the power dispatch, which disregards their potential impact on expected cost. 
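The contrast between the Gaussian $3\sigma$ rule and an empirical-quantile requirement can be illustrated on synthetic data. In the sketch below (made-up error statistics, not data from the cited studies), forecast errors follow a right-skewed two-mode mixture; the $3\sigma$ rule then misses the 97.7\% coverage target that the empirical quantile meets by construction:

```python
import random
import statistics

random.seed(7)
# Synthetic non-Gaussian forecast errors: mostly small deviations plus a
# rare large-shortfall mode (purely illustrative numbers).
errors = [random.gauss(0.0, 0.5) if random.random() < 0.95
          else random.gauss(5.0, 1.0) for _ in range(20000)]

mu = statistics.fmean(errors)
sigma = statistics.pstdev(errors)
req_3sigma = mu + 3.0 * sigma                        # Gaussian 97.7 % rule
srt = sorted(errors)
req_quantile = srt[int(0.977 * len(errors)) - 1]     # empirical 97.7 % quantile

def coverage(r):
    """Fraction of error realizations covered by reserve requirement r."""
    return sum(e <= r for e in errors) / len(errors)

print(coverage(req_3sigma), coverage(req_quantile))
```

The heavy right tail inflates the standard deviation without moving the upper quantile far enough, so the $3\sigma$ requirement under-covers here; for other shapes it over-procures instead, which is precisely why quantile-based (non-parametric) sizing is preferred for non-Gaussian errors.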
To this end, we propose a \textcolor{blue}{model} to determine reserves \textcolor{WildStrawberry}{based on a stochastic bilevel programming problem}, which provides the cost-optimal reserve quantities for a European-type market structure. In line with the stochastic \textcolor{WildStrawberry}{dispatch mechanism}, our model computes the reserve requirements that minimize the expected system cost, anticipating their projected \textcolor{WildStrawberry}{impact} on the subsequent operations. Additionally, these requirements are defined accounting for the \textcolor{WildStrawberry}{actual} decision-making process, i.e., the sequence of market-clearing procedures, zonal representation of the power network and the least-cost merit-order principle in all trading floors. As a result, the implementation of these requirements in a conventional market setting results in a compromise solution between traditional reserve dimensioning practices and the stochastic dispatch model in terms of expected operating cost. Naturally, our approach has limitations: we consider a simplified market setup with a strictly convex representation. Nevertheless, our results do indicate that the intelligent setting of reserve requirements can enhance the short-run cost efficiency of the conventional market with large shares of renewable generation. \textcolor{blue}{The proposed model can be used as an analytic tool to provide technical and economic insights about the efficacy of different reserve capacity quantification methods, while it can also be used as a decision-support tool by system operators during the reserve setting process. In the latter case, this model can presumably be executed before the day-ahead reserve capacity auction in order to define the reserve requirements that will be used as input in the actual market-clearing process. 
Nevertheless, the incorporation of this method in the operational strategy of the system operator does not entail any changes in the existing market setup, since the model output is solely at the discretion of the system operator and decoupled from market operations.} The remainder of this paper is organized as follows. Section \ref{sec1} describes the conventional market design and its counterfactual stochastic representation. Section \ref{sec2} introduces the proposed \textcolor{WildStrawberry}{stochastic bilevel programming problem} to compute the optimal reserve requirements that approximate the ideal stochastic solution while maintaining the sequential market structure. \textcolor{blue}{Section \ref{section_sol_str} explains the solution strategy based on the multi-cut Bender's algorithm for large-scale applications}. Section \ref{sec3} provides applications of the proposed model to \textcolor{blue}{the IEEE-24 and IEEE-96 reliability test systems}. Section \ref{sec4} concludes the paper. \section{Electricity market clearing models} \label{sec1} In this section, we first describe the conventional market structure and the stochastic dispatch model. We then introduce \textcolor{WildStrawberry}{the necessary} modeling assumptions and provide the mathematical formulations of both models. \subsection{Conventional market and stochastic dispatch framework} In Europe, power markets are cleared in \textcolor{WildStrawberry}{sequential and independent auctions} which can be represented by the simplified decision-making process illustrated in Fig. \ref{fig:disaptch_det}, which is referred to as the \textit{conventional} market-clearing model. First, the system operator defines zonal reserve requirements $\mathcal{D}$ based on certain security standards. 
Then, the reserve capacity market is cleared based on the offer prices and quantities submitted by the flexible producers to find the optimal upward and downward reserve allocation $\Phi^{\text{R}^{*}}$ that minimizes reserve procurement costs $\mathcal{C}^{\text{R}}$. This allocation accounts for upward and downward reserve requirement constraints included in the set $\mathcal{Q}^{\text{R}}$. At the next stage, power producers submit their price-quantity offers to the day-ahead market, which provides the optimal energy schedule $\Phi^{\text{D}^{*}}$ that minimizes the day-ahead energy cost $\mathcal{C}^{\text{D}}$. The set of day-ahead market constraints $\mathcal{Q}^{\text{D}}$ takes into account the reserve capacity $\Phi^{\text{R}^{*}}$ procured at the previous stage. Closer to delivery time, when the realization of uncertainty $\omega^\prime$ is known, the system operator runs the real-time market to define a set of optimal re-dispatch actions $\Phi^{\text{B}}_{\omega^\prime}$ that minimizes the balancing cost $\mathcal{C}^{\text{B}}$, considering the previously procured reserve $\Phi^{\text{R}^{*}}$. In this conventional market design, the choice of reserve requirements $\mathcal{D}$ has a direct impact on the total expected system cost. In fact, the choice of $\mathcal{D}$ influences reserve procurement decisions $\Phi^{\text{R}}$, which in turn affect day-ahead $\Phi^{\text{D}}$ and real-time $\Phi^{\text{B}}$ energy dispatch decisions. 
\tikzstyle{Clearing} = [rectangle, rounded corners = 5, minimum width=10, minimum height=10,text centered, draw=black, fill=white!30,line width=0.3mm] \tikzstyle{Reserve} = [rectangle, rounded corners = 5, minimum width=10, minimum height=10,text centered, draw=black, fill=white!30,line width=0.3mm] \begin{figure} \caption{Decision sequences in conventional (a) and stochastic (b) dispatch models.} \label{fig:disaptch_det} \label{fig:disaptch_stoch} \label{fig:sample_subfigures} \end{figure} An alternative model for \textcolor{WildStrawberry}{reserves and energy scheduling} is the \textit{stochastic} \textcolor{WildStrawberry}{dispatch model} outlined in Fig. \ref{fig:disaptch_stoch}. \textcolor{blue}{This is a two-stage stochastic programming model in which first-stage decisions pertain to reserve procurement and day-ahead energy schedule, whereas the second stage models the recourse actions that restore power balance during real-time operation.} \textcolor{blue}{The stochastic dispatch model} takes as input a probabilistic wind power forecast in the form of a scenario set $\Omega$ and endogenously computes reserve needs. This way, it naturally coordinates all trading floors by co-optimizing reserve ($\Phi^{\text{R}}$) and energy ($\Phi^{\text{D}}$) schedules, anticipating their impact on the subsequent \textcolor{WildStrawberry}{expected balancing cost $\underset{\omega}{\mathbb{E}} [ \mathcal{C}^{\text{B}}(\Phi_{\omega}^{\text{B}}) ]$ estimated over the scenario set $\Omega$}. \textcolor{blue}{It should be noted that the co-optimization of reserve procurement and energy schedules is a requirement for the implementation of this ideal coordination between the different trading floors.} In the stochastic dispatch, reserve requirements are a byproduct of the energy and reserve co-optimization problem, resulting in the most efficient solution in terms of total expected operating cost. 
Moreover, unlike the conventional market model that schedules reserve and day-ahead energy quantities according to the least-cost merit-order principle, the stochastic model schedules \textcolor{WildStrawberry}{generation capacity accounting for potential network congestion during real-time operations, which may lead to} expensive balancing actions \cite{Morales_2012}. This way, \textcolor{WildStrawberry}{generators may be scheduled out-of-merit, i.e., more expensive units are dispatched over less expensive ones, in order to minimize the expected costs.} Despite \textcolor{WildStrawberry}{its superiority in terms of cost efficiency}, the stochastic model suffers from several drawbacks preventing its practical implementation. As already mentioned, the violation of the merit-order principle results in cost recovery and revenue adequacy only in expectation, while for some uncertainty realizations these two essential economic properties may not hold \cite{Morales_2012}. \textcolor{blue}{This issue undermines the well-functioning of electricity markets in the long term, since flexible producers may end up in loss-making positions in one or more scenarios, despite the fact that their expected profit is non-negative. Therefore, these market participants may opt out of the short-run electricity markets or even be discouraged from making new investments if they are exposed to significant financial risks. Meanwhile, the fact that revenue adequacy is only guaranteed in expectation exposes the market operator to the risk of financial deficit. As a result, a realistic implementation of this market model would require the establishment of out-of-the-market mechanisms, akin to the uplift payments used in the US markets, to provide an ex-post compensation of potential economic deficits. 
In view of these practical caveats, we do not foresee an actual market-clearing implementation of the stochastic dispatch model.} Moreover, the co-optimization of day-ahead energy and \textcolor{WildStrawberry}{capacity} reserve markets is not compatible with the European market structure, \textcolor{blue}{which dictates that the trading of reserves and energy products is organized in independent sequential auctions.} However, in this work, we \textcolor{WildStrawberry}{show} that the stochastic dispatch solution \textcolor{WildStrawberry}{can} be approximated in the conventional market-clearing model by intelligently setting the reserve requirements $\mathcal{D}$, sidestepping the drawbacks of the stochastic model and improving the efficiency of the \textcolor{WildStrawberry}{existing market setup}. \subsection{Modeling assumptions} We use the following set of assumptions to derive computationally tractable yet sensible formulations of the different dispatch models. Following the European practice, we consider a zonal representation of the network for reserve procurement. In an attempt to build a more generic model, the network topology is included in the day-ahead and real-time dispatch models considering a DC approximation of power flows. Reserve and energy supply functions are linear, \textcolor{blue}{and all generators are considered to behave as price takers.} System loads are inelastic with a large value of lost load. This way, the maximization of the social welfare is equivalent to cost minimization. Flexible units deploy operating reserves at their marginal production costs. The incentive to provide flexibility services is accounted for in reserve offering prices. \textcolor{blue}{Following the prevailing portfolio bidding adopted in the European markets \cite{6487424}, we consider that all unit commitment and inter-temporal constraints are integrated into the bidding strategies of the generating units. 
For instance, the commitment of thermal units in practice might be controlled by market participants by offering at either zero price or the market price cap. Similarly, offering a part of capacity at zero or even negative prices ensures compliance with the technical-minimum constraint of thermal units.} \textcolor{blue}{This approach is compatible with the European market structure and preserves the convexity of the reserve capacity and day-ahead market-clearing algorithms. In principle, the proposed model can also be applied to market designs that involve non-convex constraints, as for instance the majority of electricity markets in the US, using tight convex relaxations of the unit commitment binary variables. However, this approach lies outside the scope of this paper; we refer the interested reader to \cite{Kasina_2014,7914790} for further discussion}. Finally, uncertainty is described by a finite set of scenarios and solely induced by stochastic wind power production. \subsection{Mathematical formulation} \subsubsection{Conventional market-clearing model} The sequential procedure sketched in \textcolor{WildStrawberry}{Fig.} \ref{fig:disaptch_det} is modeled, for each hour of the next day, by the following three linear optimization problems. 
The reserve procurement problem writes as: \begingroup \allowdisplaybreaks \begin{subequations} \label{prob:reserve_clearing} \begin{align} \underset{\Xi^{\text{OR}}}{\text{min}} \quad& \sum_{i \in I} \Big(C_{i}^{\text{U}} R_{i}^{\text{U}} + C_{i}^{\text{D}} R_{i}^{\text{D}}\Big) \label{objRC}\\ \text{s.t.} \quad&\sum_{i \in I_{\textcolor{WildStrawberry}{z}}}R_{i}^{\text{U}} = D_{z}^{\text{U}}, \quad \sum_{i \in I_{\textcolor{WildStrawberry}{z}} }R_{i}^{\text{D}} = D_{z}^{\text{D}}, \quad \forall z \in Z, \label{RC:demand}\\ & R_{i}^{\text{U}} + R_{i}^{\text{D}} \leq \overline{P}_{i}, \quad \forall i \in I, \label{RC:limits_UD} \\ &0 \leq R_{i}^{\text{U}} \leq \overline{R}_{i}^{\text{U}}, \quad 0 \leq R_{i}^{\text{D}} \leq \overline{R}_{i}^{\text{D}}, \quad \forall i \in I, \label{RC:limits} \end{align} \end{subequations} \endgroup where $\Xi^{\text{OR}} = \{R_{i}^{\text{U}}, R_{i}^{\text{D}}, \textcolor{WildStrawberry}{\forall i}\}$ is the set of optimization variables comprising the upward and downward reserve \textcolor{WildStrawberry}{schedule} per each flexible generator. Optimal $\Xi^{\text{OR*}}$ minimizes the reserve procurement cost given by (\ref{objRC}). Equality constraints (\ref{RC:demand}) ensure that zonal reserve upward and downward requirements, denoted as $D_{z}^{\text{U}}$ and $D_{z}^{\text{D}}$, respectively, are fulfilled, whereas inequality constraints (\ref{RC:limits_UD}) - (\ref{RC:limits}) account for the quantity offers of each flexible generator. 
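For intuition, problem \eqref{prob:reserve_clearing} on a single zone can be cleared by a merit-order greedy, sketched below with invented generator data. When the coupling constraint \eqref{RC:limits_UD} binds across the two products, the joint LP may allocate differently, so the greedy is only a stand-in for an LP solver:

```python
# Merit-order clearing sketch for the zonal reserve capacity market,
# single zone, illustrative data. Upward and downward products are cleared
# greedily in price order; when the coupling constraint R_U + R_D <= Pbar
# binds across products, the joint LP could allocate differently.

generators = [  # (name, C_up, C_dn, Rbar_up, Rbar_dn, Pbar)
    ("g1", 4.0, 3.0, 50.0, 40.0, 100.0),
    ("g2", 6.0, 2.0, 80.0, 60.0, 120.0),
    ("g3", 9.0, 7.0, 60.0, 50.0, 150.0),
]
D_up, D_dn = 100.0, 70.0  # zonal requirements in MW

R_up, R_dn, cost = {}, {}, 0.0
need = D_up
for name, cu, cd, ru, rd, pbar in sorted(generators, key=lambda g: g[1]):
    q = min(need, ru, pbar)        # cheapest upward offers first
    R_up[name] = q; need -= q; cost += cu * q
need = D_dn
for name, cu, cd, ru, rd, pbar in sorted(generators, key=lambda g: g[2]):
    q = min(need, rd, pbar - R_up[name])  # respect R_U + R_D <= Pbar
    R_dn[name] = q; need -= q; cost += cd * q
print(R_up, R_dn, cost)
```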
Once reserve allocation $\{R_{i}^{\text{U}*}, R_{i}^{\text{D}*}, \forall i\}$ is determined, the least-cost day-ahead energy schedule is computed solving the following optimization problem: \begingroup \allowdisplaybreaks \begin{subequations} \label{prob:day_ahead_clearing} \begin{align} \underset{\Xi^{\text{DA}}}{\text{min}} \quad& \sum_{i \in I} C_{i} P_{i}^{\text{C}} \label{obj:DA}\\ \text{s.t.} \quad &\sum_{i \in I_{n}}P_{i}^{\text{C}} + \sum_{k \in K_{n}}P_{k}^{\text{W}} - \sum_{j \in J_{n}}L_{j} \nonumber\\ & - \sum_{m:(n,m)\in\Lambda}\frac{\delta_{n}^{\text{DA}}-\delta_{m}^{\text{DA}}}{x_{nm}} = 0, \quad \forall n \in N, \label{DA:balance} \\ &R_{i}^{\text{D}*} \leq P_{i}^{\text{C}} \leq \overline{P}_{i} - R_{i}^{\text{U*}}, \quad \forall i \in I, \label{DA:conv_cap}\\ &0 \leq P_{k}^{\text{W}} \leq \widehat{W}_{k}, \quad \forall k\in K, \label{DA:wind_cap}\\ &\frac{\delta_{n}^{\text{DA}}-\delta_{m}^{\text{DA}}}{x_{nm}} \leq \overline{F}_{nm}, \quad \forall (n,m) \in \Lambda, \label{DA:flow_cap} \end{align} \end{subequations} \endgroup where $\Xi^{\text{DA}} = \{P_{i}^{\text{C}}, \textcolor{WildStrawberry}{\forall i}; P_{k}^{\text{W}}, \textcolor{WildStrawberry}{\forall k}; \delta_{n}^{\text{DA}}, \textcolor{WildStrawberry}{\forall n}\}$ is the set of variables including day-ahead energy quantities for each conventional and stochastic generator as well as voltage angles at each node. The objective function (\ref{obj:DA}) to be minimized is the day-ahead energy cost, subject to nodal power balance constraints (\ref{DA:balance}), offering limits of conventional and stochastic generators (\ref{DA:conv_cap})-(\ref{DA:wind_cap}) and transmission capacity limits (\ref{DA:flow_cap}). Note that the reserve procurement decisions from the previous stage limit the dispatch of flexible generators at the day-ahead stage. In this design, stochastic production is bounded by the conditional expectation $\widehat{W}_{k}$. 
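A single-node caricature of \eqref{prob:day_ahead_clearing} (network, voltage angles and line limits omitted; data invented) shows how the procured reserves shrink each unit's feasible dispatch range to $[R_{i}^{\text{D}*}, \overline{P}_{i}-R_{i}^{\text{U}*}]$:

```python
# Day-ahead merit-order dispatch sketch on a single node; the network and
# voltage angles of (2b)/(2e) are omitted. Procured reserves shrink each
# unit's feasible range to [R_dn, Pbar - R_up]; wind offers at zero price
# up to its point forecast. All data are illustrative.

units = [  # (name, C, Pbar, R_up, R_dn)
    ("g1", 20.0, 100.0, 50.0, 10.0),
    ("g2", 25.0, 120.0, 50.0, 60.0),
    ("g3", 35.0, 150.0,  0.0,  0.0),
]
wind_forecast = 90.0
load = 300.0

P = {name: r_dn for name, c, pbar, r_up, r_dn in units}  # lower bounds first
wind = min(wind_forecast, load - sum(P.values()))        # zero-price energy
residual = load - wind - sum(P.values())
for name, c, pbar, r_up, r_dn in sorted(units, key=lambda u: u[1]):
    room = (pbar - r_up) - P[name]   # headroom left after reserved capacity
    q = min(residual, room)
    P[name] += q; residual -= q

da_cost = sum(c * P[name] for name, c, pbar, r_up, r_dn in units)
print(P, wind, da_cost)
```

Note how g2, despite being cheaper than g3, contributes little incremental energy: its downward reserve forces a high minimum dispatch while its upward reserve caps the maximum.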
\textcolor{WildStrawberry}{Getting closer to real-time operation, any deviation from the optimal day-ahead dispatch $\{P_{i}^{\text{C}*}, \forall i; P_{k}^{\text{W}*}, \forall k; \delta_{n}^{\text{DA}*}, \forall n\}$ has to be covered by proper balancing actions.} For a specific realization of stochastic production \textcolor{WildStrawberry}{$W_{k\omega'}$}, the optimal re-dispatch is found solving the following linear programming problem: \begingroup \allowdisplaybreaks \begin{subequations} \label{prob:real_time_clearing} \begin{align} \underset{\Xi^{\text{RT}}}{\text{min}} & \quad\sum_{i \in I} C_{i} \Big(r_{i\omega'}^{\text{U}}-r_{i\omega'}^{\text{D}}\Big) +\sum_{j \in J} C^{\text{VoLL}} L_{j\omega'}^{\text{sh}} \label{objRT} \\ \text{s.t.}& \quad \sum_{i \in I_{n}} \left( r_{i\omega'}^{\text{U}}-r_{i\omega'}^{\text{D}} \right) + \sum_{k \in K_{n}}\left( W_{k\omega'} - P_{k}^{\text{W*}} - P_{k\omega'}^{\text{W,sp}} \right) \nonumber \\ & +\sum_{j \in J_{n}}L_{j\omega'}^{\text{sh}} - \!\!\! \sum_{m:(n,m)\in\Lambda} \!\!\!\! 
\frac{\delta_{n\omega'}^{\text{RT}}-\delta_{n}^{\text{DA*}}-\delta_{m\omega'}^{\text{RT}}+\delta_{m}^{\text{DA*}}}{x_{nm}} \nonumber\\ & = 0, \quad \forall n \in N, \label{RT:balance}\\ &0 \leq r_{i\omega'}^{\text{U}} \leq R_{i}^{\text{U}*}, \quad 0 \leq r_{i\omega'}^{\text{D}} \leq R_{i}^{\text{D}*}, \quad \forall i \in I, \label{RT: updownlim}\\ &\frac{\delta_{n\omega'}^{\text{RT}}-\delta_{m\omega'}^{\text{RT}}}{x_{nm}} \leq \overline{F}_{nm}, \quad \forall (n,m) \in \Lambda, \label{RT: maxcapline}\\ &0 \leq P_{k\omega'}^{\text{W,sp}} \leq W_{k\omega'}, \quad \forall k \in K, \label{RT: spill}\\ &0 \leq L_{j\omega'}^{\text{sh}} \leq L_{j}, \quad \forall j \in J, \label{RT: shed} \end{align} \end{subequations} \endgroup where $\Xi^{\text{RT}} = \{r_{i\omega'}^{\text{U}}, r_{i\omega'}^{\text{D}}, \textcolor{WildStrawberry}{\forall i}; L_{j\omega'}^{\text{sh}}, \textcolor{WildStrawberry}{\forall j}; P_{k\omega'}^{\text{W,sp}}, \textcolor{WildStrawberry}{\forall k}; \delta_{n\omega'}^{\text{RT}}, \textcolor{WildStrawberry}{\forall n}\}$ is the set of re-dispatch decisions, comprising activation of operating reserves, load shedding, wind spillage and real-time voltage angles. The objective function (\ref{objRT}) to be minimized is the balancing cost. Equality constraints (\ref{RT:balance}) ensure the real-time nodal power balance. Inequalities (\ref{RT: updownlim}) limit activation of upward and downward reserves considering the procured reserve quantities. Constraints (\ref{RT: maxcapline}) account for the power capacity of transmission lines. Finally, inequalities (\ref{RT: spill}) and (\ref{RT: shed}) limit wind spillage and load shedding actions to the actual realization of production and system demand, respectively. \subsubsection{Stochastic dispatch model} Assuming that wind power uncertainty is described by a finite set of outcomes $W_{k\omega}$ with corresponding probabilities $\pi_{\omega}$, the stochastic dispatch procedure outlined in Fig. 
\ref{fig:disaptch_stoch} writes as follows: \begingroup \allowdisplaybreaks \begin{subequations} \label{stochastic_dis} \begin{align} \underset{\Xi^{\text{SD}}}{\text{min}} \quad & \sum_{i \in I} \Big(C_{i}^{\text{U}} R_{i}^{\text{U}} + C_{i}^{\text{D}} R_{i}^{\text{D}} + C_{i} P_{i}^{\text{C}} \Big) + \nonumber\\ & \sum_{\omega} \pi_{\omega} \Big( \sum_{i \in I} C_{i} \Big(r_{i\omega}^{\text{U}}-r_{i\omega}^{\text{D}}\Big) +\sum_{j \in J} C^{\text{VoLL}} L_{j\omega}^{\text{sh}} \Big) \label{SD_obj}\\ \text{s.t.} \quad& \text{constraints \eqref{RC:demand} - \eqref{RC:limits}} \label{stochD_RCc}\\ & \text{constraints \eqref{DA:balance} - \eqref{DA:flow_cap}} \label{stochD_DAc}\\ & \text{constraints \eqref{RT:balance} - \eqref{RT: shed}}, \quad \forall \omega \in \Omega \label{stochD_RTc} \end{align} \end{subequations} \endgroup where $\Xi^{\text{SD}} = \{\Xi^{\text{OR}} \cup \Xi^{\text{DA}} \cup \Xi^{\text{RT}}, \textcolor{WildStrawberry}{\forall \omega} \cup (D^{\text{U}},D^{\text{D}})\}$ is the set of stochastic dispatch variables. The objective function (\ref{SD_obj}) to be minimized is the reserve and day-ahead energy cost as well as the expectation of the real-time cost, i.e., the expected cost over the entire decision sequence. Note that upward and downward reserve \textcolor{WildStrawberry}{requirements $D_{z}^{\text{U}}$ and $D_{z}^{\text{D}}$ in (\ref{RC:demand}) are decision variables} and only used to reveal \textcolor{WildStrawberry}{optimal} reserve requirements \textcolor{WildStrawberry}{in a stochastic programming sense.} After the optimal reserve procurement and day-ahead energy schedule are obtained, the system operator solves the real-time re-dispatch problem for a specific realization of the stochastic production $\omega'$ using formulation (\ref{prob:real_time_clearing}). 
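The essence of \eqref{stochastic_dis} — reserve as a first-stage decision evaluated against scenario-wise recourse — can be sketched with one flexible unit and invented numbers; enumerating the breakpoints of the piecewise-linear objective stands in for an LP solver:

```python
# Two-stage stochastic reserve sizing sketch: the reserve level is a
# decision variable and the expected real-time cost is evaluated over a
# scenario set. One flexible unit; deficits beyond the procured reserve
# are shed at VoLL. All numbers are illustrative assumptions.

C_R, C_B, VOLL = 8.0, 25.0, 500.0
scenarios = [(0.2, 0.0), (0.3, 40.0), (0.3, 80.0), (0.2, 150.0)]  # (prob, deficit MW)

def expected_cost(R):
    ec = C_R * R  # first-stage reserve procurement cost
    for p, d in scenarios:
        # Recourse: deploy reserve up to R, shed the remaining deficit.
        ec += p * (C_B * min(d, R) + VOLL * max(d - R, 0.0))
    return ec

# The objective is piecewise linear in R, so an optimum lies at a scenario
# deficit (or at zero); enumeration stands in for the LP solver.
candidates = sorted({0.0} | {d for _, d in scenarios})
R_star = min(candidates, key=expected_cost)
print(R_star, expected_cost(R_star))
```

With a high value of lost load, the optimal reserve covers even the worst scenario, i.e., the requirement emerges endogenously as a quantile-like trade-off between reserve cost and expected shedding.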
\section{Approximating the stochastic ideal} \label{sec2} On the one hand, the conventional procedure has limited capability to accommodate large shares of stochastic production in a cost-efficient manner compared to the stochastic dispatch. On the other hand, the adoption of the stochastic procedure appears to be unrealistic because \textcolor{blue}{it does not guarantee revenue adequacy and cost recovery for every uncertainty realization; these are important properties that, in contrast, hold in the sequential market structure \cite{Morales_2012,morales2014electricity}}. For this reason, our motivation is to enhance the cost-efficiency of the conventional market-clearing procedure without changing the market structure. In this vein, we introduce a model that approximates the ideal stochastic solution within the conventional dispatch model by the appropriate setting of zonal reserve requirements. \textcolor{WildStrawberry}{ In essence, we aim at finding the reserve requirements that, when plugged into the conventional market-clearing model (\ref{prob:reserve_clearing})-(\ref{prob:real_time_clearing}), yield the minimum total expected system cost.} To compute these requirements, we use the \textcolor{WildStrawberry}{bilevel programming problem} illustrated in Fig. \ref{fig:optimal_req_determination}. This model comprises two levels. The objective function of the upper level is the same as \eqref{SD_obj} in the stochastic model \eqref{stochastic_dis} and aims at minimizing the total expected system cost. The upper-level constraints enforce real-time re-dispatch limits. The lower level consists of two optimization problems, \textcolor{WildStrawberry}{namely, the reserve procurement and day-ahead market clearing problems, which are identical to the corresponding optimization problems \eqref{prob:reserve_clearing} and \eqref{prob:day_ahead_clearing} of the conventional model. 
However, in this bilevel structure, reserve requirements $\mathcal{D}$ are decision variables of the upper-level problem, entering as parameters in the lower-level reserve procurement problem. Hence, reserve requirements $\mathcal{D}$ are not an exogenous input to this model but are internally optimized, accounting for their impact on all three trading floors. As shown in Fig. \ref{fig:optimal_req_determination}, the upper-level decision on $\mathcal{D}$ affects the reserve procurement schedule in the first lower-level problem, which in turn impacts the day-ahead clearing obtained from the second lower-level problem.} In addition, the reserve and energy schedules $\Phi^{\text{R}}$ and $\Phi^{\text{D}}$ enter the upper level, constraining the real-time re-dispatch decisions. \textcolor{WildStrawberry}{The structure of this stochastic bilevel model guarantees that the temporal sequence of the different markets follows the existing European paradigm. Having the reserve capacity and day-ahead market clearings as two independent lower-level problems ensures that reserves and day-ahead schedules are optimized separately, i.e., there is no co-optimization of energy and reserves, while neither of these markets has information about the future re-dispatch actions. This property suffices to reproduce the real-time re-dispatch for each scenario independently by including the corresponding constraints only in the upper-level problem. } Compared to the stochastic model, the main advantage of this bilevel scheme is that it respects the merit-order principle in the reserve capacity and day-ahead energy markets. In fact, given the same reserve requirements, the solutions of both lower-level problems are identical to the solutions of problems \eqref{prob:reserve_clearing} and \eqref{prob:day_ahead_clearing}. 
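The logic of this search can be emulated by brute force: sweep candidate requirements $D$, run the sequential clearing for each, and keep the value with the least expected cost. The sketch below does this for a one-zone toy with invented data; it is a stand-in for the bilevel model, not its formulation:

```python
# Brute-force stand-in for the bilevel model: sweep the zonal requirement D,
# run the sequential clearing (reserve merit order, then per-scenario
# balancing) for each value, and keep the D with least expected cost.
# All data are illustrative; grid enumeration replaces the MPEC machinery.

scenarios = [(0.2, 0.0), (0.3, 40.0), (0.3, 80.0), (0.2, 150.0)]  # (prob, deficit MW)
VOLL, C_B = 500.0, 25.0
resv = [(5.0, 60.0), (12.0, 120.0)]  # (capacity price, max MW), merit order

def sequential_cost(D):
    # Lower level 1: reserve capacity market clears D in merit order.
    cost, left = 0.0, D
    for price, cap in resv:
        q = min(left, cap); cost += price * q; left -= q
    if left > 1e-9:
        return float("inf")  # requirement exceeds the offered reserve
    # Upper level: per-scenario balancing limited by procured reserve D.
    for p, d in scenarios:
        cost += p * (C_B * min(d, D) + VOLL * max(d - D, 0.0))
    return cost

grid = [10.0 * k for k in range(19)]  # candidate requirements, 0..180 MW
D_star = min(grid, key=sequential_cost)
print(D_star, sequential_cost(D_star))
```

Each lower-level clearing here still follows the merit order, yet the outer sweep anticipates the expected balancing cost, which is exactly the role the upper level plays in the bilevel model.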
\textcolor{WildStrawberry}{Nonetheless, the upper-level problem can still anticipate the impact of reserve requirements on all trading floors and consequently on the total expected cost.} Since this model is solved prior to any market-clearing procedure, we assume that the system operator can gather information on the price-quantity offers of market participants. \textcolor{WildStrawberry}{Even in the case of having to use an estimation of price-quantity offers similar to the ORDC mechanism, our approach accounts systematically for the impact of reserve procurement and the structure of forecast errors in all three trading floors.} In a more realistic setup, this \textcolor{WildStrawberry}{information can be obtained} using inverse optimization techniques as proposed in \cite{ruiz2013revealing} and \cite{mitridati2017bayesian}. \textcolor{WildStrawberry}{Mathematically, the proposed} reserve determination model writes as the following stochastic bilevel programming problem: \begingroup \allowdisplaybreaks \begin{subequations} \label{prob:bilevel_clearing} \begin{align} &\underset{\Xi^{\text{RT}}, D_{z}^{\text{U}}, D_{z}^{\text{D}}}{\text{min}} \quad \eqref{SD_obj} \label{Bilevel_obj}\\ & \;\;\; \text{s.t.} \;\; \text{constraints \eqref{RT:balance} - \eqref{RT: shed}}, \quad \forall \omega \in \Omega, \label{bilevelcon1}\\ & \;\;\; D_{z}^{\text{U}}, D_{z}^{\text{D}} \geq 0, \quad \forall z \in Z, \label{bilevelcon2}\\ & \;\;\; ( R_{i}^{\text{U}}, R_{i}^{\text{D}} ) \in \text{arg} \left\{\!\begin{aligned} & \underset{\Xi^{\text{OR}}}{\text{min}} \quad \eqref{objRC} \\ & \text{s.t.} \;\; \text{constraints \eqref{RC:demand} - \eqref{RC:limits}} \end{aligned}\right\}, \label{eq:LLR} \\ & \;\; \left(\begin{subarray}{c} P_{i}^{\text{C}}, P_{k}^{\text{W}}, \\ \delta_{n}^{\text{DA}} \end{subarray} \right) \in \text{arg} \left\{\!\begin{aligned} & \underset{\Xi^{\text{DA}}}{\text{min}} \quad \eqref{obj:DA} \\ & \text{s.t.} \;\; \text{constraints \eqref{DA:balance} - 
\eqref{DA:flow_cap}} \end{aligned}\right\}. \label{eq:LLD} \end{align} \end{subequations} \endgroup \textcolor{blue}{ According to the mathematical structure of model \eqref{prob:bilevel_clearing}, the lower-level problems \eqref{eq:LLR} and \eqref{eq:LLD} guarantee that the reserve capacity and day-ahead energy markets are serially and independently optimized. This property is in accordance with the time-line of these trading floors in the European market framework. This temporal sequence is accomplished considering that upward $R_i^\text{U*}$ and downward $R_i^\text{D*}$ reserve schedules are variables of the reserve capacity market \eqref{eq:LLR} but enter as parameters in the day-ahead energy market \eqref{eq:LLD}. Moreover, neither problem \eqref{eq:LLR} nor \eqref{eq:LLD} can foresee the outcome of the balancing market, which is included in the upper level of model \eqref{prob:bilevel_clearing}. As a result, both markets have no information about the effect of their decisions on the real-time market. 
In turn, constraints \eqref{bilevelcon1}-\eqref{bilevelcon2} and the third term of the objective function \eqref{SD_obj} clear the real-time market of the conventional model (\ref{prob:reserve_clearing})-(\ref{prob:real_time_clearing}), independently for each scenario $\omega \in \Omega$, considering that the real-time re-dispatch cannot impact the previous trading floors which are `fixed' to the conventional market solution through the lower-level problems \eqref{eq:LLR} and \eqref{eq:LLD}.} \tikzstyle{Clearing} = [rectangle, rounded corners = 5, minimum width=10, minimum height=10,text centered, draw=black, fill=white!30,line width=0.3mm] \begin{figure} \caption{Bilevel structure of the proposed reserve determination model.} \label{fig:optimal_req_determination} \end{figure} This formulation is \textcolor{WildStrawberry}{computationally} intractable, since it consists of an upper-level \textcolor{WildStrawberry}{optimization problem} constrained by two lower-level optimization problems. However, since both lower-level problems are convex with linear objective functions and constraints, they can be replaced by their Karush--Kuhn--Tucker optimality conditions, such that the problem can be recast as a single-level \textcolor{WildStrawberry}{mathematical program with equilibrium constraints} (MPEC). The resulting model \textcolor{WildStrawberry}{includes a set of} nonlinear \textcolor{WildStrawberry}{complementary} slackness constraints, which can be linearized using \textcolor{WildStrawberry}{disjunctive constraints} or SOS1 variables, transforming the MPEC problem into a \textcolor{WildStrawberry}{mixed-integer linear program} (MILP) \cite{pozo2017basic}. 
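The disjunctive linearization can be illustrated on a one-variable lower level (invented data): each complementarity product is replaced by a binary plus big-$M$ pair, and enumerating the binaries, as a MILP solver would branch, recovers the KKT point:

```python
# Sketch of the disjunctive (big-M) linearization that turns KKT
# complementarity into mixed-integer linear constraints. Toy lower level:
#   min c*x  s.t.  x >= d (dual mu >= 0),  x >= 0 (dual lam >= 0).
# KKT: c - mu - lam = 0;  mu*(x - d) = 0;  lam*x = 0.
# A binary z picks one side of each disjunction: z=1 makes the primal
# constraint active (zero slack), z=0 forces the dual to zero (mu <= M*z).
# Enumerating the binaries stands in for MILP branching. Data invented.

c, d, M = 4.0, 70.0, 1e4
found = []
for z1 in (0, 1):          # z1 = 1: constraint x >= d is active
    for z2 in (0, 1):      # z2 = 1: constraint x >= 0 is active
        mu  = None if z1 else 0.0   # inactive constraint -> zero dual
        lam = None if z2 else 0.0
        x = 0.0 if z2 else (d if z1 else None)  # active constraint pins x
        if mu is None and lam is None:
            continue  # both constraints active: x = 0 and x = d clash for d > 0
        # Recover the remaining dual from stationarity c - mu - lam = 0.
        if mu is None:
            mu = c - lam
        if lam is None:
            lam = c - mu
        feasible = (x is not None and x >= max(d, 0.0) - 1e-9
                    and mu >= -1e-9 and lam >= -1e-9
                    and mu <= M and lam <= M
                    and abs(c - mu - lam) < 1e-9)
        if feasible:
            found.append((z1, z2, x, mu, lam))
print(found)
```

Only one branch survives, giving the LP optimum $x=d$ with its dual $\mu=c$; in the full model the same mechanism is applied to every inequality of \eqref{eq:LLR} and \eqref{eq:LLD}.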
\textcolor{blue}{ \section{Solution strategy} \label{section_sol_str} The set of integer variables used to linearize the complementarity constraints of the lower-level problems \eqref{eq:LLR} and \eqref{eq:LLD} limits the application of the proposed reserve quantification model to power systems of moderate scale. For the large-scale applications, we propose an iterative solution strategy based on the multi-cut Bender's algorithm \cite{conejo2006decomposition}. For a fixed reserve and day-ahead dispatch, the set of real-time constraints \eqref{RT:balance} - \eqref{RT: shed} is independent per scenario. This allows for Bender's decomposition where each subproblem solves a scenario-specific real-time re-dispatch problem. The subproblems at iteration $\nu$ write as follows: \begingroup \allowdisplaybreaks \begin{subequations} \label{benders_sub_asd} \begin{align} \Bigg\{\underset{\Xi_{s}^{\text{RT,B}}}{\text{min}} \quad & C^{\text{RT}(\nu)}_{\omega} := \sum_{i \in I} C_{i} \Big(r_{i\omega}^{\text{U}}-r_{i\omega}^{\text{D}}\Big) +\sum_{j \in J} C^{\text{VoLL}} L_{j\omega}^{\text{sh}} \label{objRTbenderssub} \\ \text{s.t.} \quad & R_{i}^{\text{U}} = \tilde{R}_{i}^{\text{U}(\nu)} \quad: \theta_{i\omega}^{R_{i}^{\text{U}}(\nu)}, \quad \forall i \in I, \label{bend_sub_1}\\ & R_{i}^{\text{D}} = \tilde{R}_{i}^{\text{D}(\nu)} \quad: \theta_{i\omega}^{R_{i}^{\text{D}}(\nu)}, \quad \forall i \in I,\\ & P_{k}^{\text{W}} = \tilde{P}_{k}^{\text{W}(\nu)} \quad: \theta_{k\omega}^{P_{k}^{\text{W}}(\nu)}, \quad \forall k \in K,\\ & \delta_{n}^{\text{DA}} = \tilde{\delta}_{n}^{\text{DA}(\nu)} \quad: \theta_{n\omega}^{\delta_{n}^{\text{DA}}(\nu)}, \quad \forall n \in N, \label{bend_sub_4}\\ & \text{constraints \eqref{RT:balance} - \eqref{RT: shed}} \Bigg\} \quad \forall \omega \in \Omega, \nonumber \end{align} \end{subequations} \endgroup where $\Xi_{s}^{\text{RT,B}} = \Xi^{\text{RT}} \cup \{ R_{i}^{\text{U}}, R_{i}^{\text{D}}, \forall i; P_{k}^{\text{W}}, \forall k; 
\delta_{n}^{\text{DA}}, \forall n \}$ is the set of decision variables of each subproblem of the Bender's algorithm. Constraints \eqref{bend_sub_1} - \eqref{bend_sub_4} fix the first-stage decisions to their optimal values obtained at the previous iteration, and the corresponding dual variables yield sensitivities of the reserve and day-ahead decisions used in Bender's cuts. } \textcolor{blue}{ The master problem of the Bender's algorithm at iteration $\nu$ writes as follows: \begingroup \allowdisplaybreaks \begin{subequations} \label{prob:real_time_clearing_benders} \begin{align} \underset{\Xi^{\text{M,B}}}{\text{min}} \quad& \sum_{i \in I} \Big(C_{i}^{\text{U}} R_{i}^{\text{U}} + C_{i}^{\text{D}} R_{i}^{\text{D}} + C_{i} P_{i}^{\text{C}} \Big) + \sum_{\omega \in \Omega} \pi_{\omega} \alpha_{\omega}^{(\nu)} \\ \text{s.t.} \quad & \alpha_{\omega}^{(\nu)} \geq C^{\text{RT}(\rho)}_{\omega} + \sum_{i \in I} \theta_{i\omega}^{R_{i}^{\text{U}}(\rho)} \Big(R_{i}^{\text{U}} - R_{i}^{\text{U}(\rho)} \Big) \nonumber\\ & \quad\quad + \sum_{i \in I} \theta_{i\omega}^{R_{i}^{\text{D}}(\rho)} \Big(R_{i}^{\text{D}} - R_{i}^{\text{D}(\rho)} \Big) \nonumber \\ & \quad\quad + \sum_{k \in K} \theta_{k\omega}^{P_{k}^{\text{W}}(\rho)} \Big(P_{k}^{\text{W}} - P_{k}^{\text{W}(\rho)} \Big) \nonumber \\ & \quad\quad + \sum_{n \in N} \theta_{n\omega}^{\delta_{n}^{\text{DA}}(\rho)} \Big(\delta_{n}^{\text{DA}} - \delta_{n}^{\text{DA}(\rho)} \Big), \nonumber \\ &\quad\quad\quad \rho = 1 \dots \nu-1, \forall \omega \in \Omega, \label{ben_cut} \\ &\alpha_{\omega}^{(\nu)} \geq \underline{\alpha}, \quad \forall \omega \in \Omega, \label{ben_cut_zero}\\ &D_{z}^{\text{U}}, D_{z}^{\text{D}} \geq 0, \quad \forall z \in Z, \\ &\text{Linearized KKT conditions of \eqref{eq:LLR}}, \\ &\text{Linearized KKT conditions of \eqref{eq:LLD}}, \end{align} \end{subequations} \endgroup where $\Xi^{\text{M,B}} = \Xi^{\text{OR}} \cup \Xi^{\text{DA}} \cup \alpha_{\omega}$ is the set of decisions variables of the 
master problem, and index $\rho$ is used to integrate the fixed values of the corresponding variables at previous iterations. The Bender's cuts are updated at each iteration by \eqref{ben_cut} using sensitivities from all previous iterations, while \eqref{ben_cut_zero} imposes a lower bound $\underline{\alpha}$ on the auxiliary variable $\alpha$. Since the subproblems allow for load shedding, they are always feasible, requiring no feasibility cuts in the master problem. The algorithm converges at iteration $\nu$ if $\Big|\sum_{\omega \in \Omega} \pi_{\omega} \big(\alpha_{\omega}^{(\nu)}-C^{\text{RT}(\nu)}_{\omega}\big)\Big| \leq \epsilon$, where $\epsilon$ is a predefined tolerance. } \section{Case Study} \label{sec3} In this section, we first describe the test system in \textcolor{blue}{Section} \ref{Desription_of_test_system}. In \textcolor{blue}{Section} \ref{Impact_of_reserve_requirements} and \textcolor{blue}{Section} \ref{Approximating_the_stoch} we study the impact of reserve requirements on expected operating costs and we assess the remaining efficiency gap of our model with respect to the stochastic solution for a single reserve control zone. In \textcolor{blue}{Section} \ref{Optimal_zonal_reserve} we extend our analysis to the case of multiple reserve control zones. \textcolor{blue}{In Section \ref{24RTS_with_UC_conctrains} we assess the model's performance in the presence of non-convex technical constraints. Finally, in Section \ref{IEEE96study} we demonstrate the scalability of the model using the proposed Bender's decomposition algorithm. } \begin{figure*}\label{fig:ex1} \label{fig:ex2} \label{fig:ex3} \label{fig:ex4} \label{fig:cost_comparison} \end{figure*} \subsection{Description of the test system} \label{Desription_of_test_system} To assess the performance of the different reserve determination models, a modified version of the IEEE 24-Bus RTS \cite{Ordoudis_2016} is employed. 
The system consists of 34 transmission lines, 17 loads and 12 conventional generation units. The total generation capacity amounts to 3,375 MW, of which 1,100 MW is flexible generation that can provide upward and downward reserves. We set upward reserve capacity price offers to 30\% of the marginal costs. Price offers for downward reserve capacity are selected such that they compensate for the potential financial deficit induced by a loss-making position in the day-ahead market. \textcolor{blue}{We should note that this is only a heuristic approach to address the possibility that some flexible producers incur financial losses due to their combined positions in the reserve capacity and day-ahead energy markets. This situation may emerge if the downward reserve capacity $R_i^{D*}$ awarded to a generator, and in turn imposed as a lower bound in the day-ahead market constraint (2c), forces this unit to produce even if the day-ahead energy price is lower than its marginal production cost. This pitfall results from the separation of reserve capacity and energy markets in the European framework. In turn, the physical coupling of these two products is accounted for internally in the trading strategies of the market participants when they submit their price-quantity offers in the corresponding markets according to their risk appetite. A detailed study of this issue constitutes a separate research topic and lies outside the scope of this work, but the interested reader is referred to \cite{SWIDER20071297} and \cite{1525135} for further information.} Apart from conventional generators, there are six wind farms bidding at zero marginal cost and sited as explained in \cite{Ordoudis_2016}. We consider a 24-hour load profile with a peak value of 2,650 MW obtained from \cite{Ordoudis_2016}. The loads are assumed to be inelastic with the value of lost load equal to \$500/MW for all operating hours.
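For readers who wish to prototype the solution strategy outside GAMS, the multicut Benders iteration and its gap-based stopping rule $\big|\sum_{\omega} \pi_{\omega} \big(\alpha_{\omega}-C^{\text{RT}}_{\omega}\big)\big| \leq \epsilon$ can be sketched in Python on a toy one-variable problem. All data below are made up for illustration and do not correspond to the paper's model: a single reserve variable with unit cost, and scenario-wise recourse equal to load shedding at a penalty price.

```python
import numpy as np
from scipy.optimize import linprog

# Toy multicut Benders iteration (illustrative only, not the paper's model):
# first-stage reserve R at unit cost c; per-scenario recourse sheds
# max(0, d_w - R) at penalty v, so Q_w(R) = v*max(0, d_w - R) with
# subgradient -v on the scenarios where shedding occurs.
c, v, eps = 1.0, 10.0, 1e-6
d = np.array([2.0, 4.0, 6.0])        # scenario imbalances [MW]
pi = np.full(3, 1.0 / 3.0)           # scenario probabilities
cuts, R, alpha = [], 0.0, np.zeros(3)
for it in range(20):
    Q = v * np.maximum(0.0, d - R)               # subproblem values C^RT_w
    theta = np.where(d - R > 0.0, -v, 0.0)       # sensitivities w.r.t. R
    if it > 0 and abs(pi @ (alpha - Q)) <= eps:  # gap-based stopping rule
        break
    cuts += [(w, Q[w], theta[w], R) for w in range(3)]
    # master: min c*R + sum_w pi_w*alpha_w
    #         s.t. alpha_w >= Q_w + theta_w*(R - R_prev) for each stored cut,
    # written as theta*R - alpha_w <= theta*R_prev - Q_w for linprog.
    A = np.zeros((len(cuts), 4))
    b = np.zeros(len(cuts))
    for row, (w, Qw, th, Rp) in enumerate(cuts):
        A[row, 0], A[row, 1 + w], b[row] = th, -1.0, th * Rp - Qw
    res = linprog(np.r_[c, pi], A_ub=A, b_ub=b,
                  bounds=[(0.0, 10.0)] + [(0.0, None)] * 3, method="highs")
    R, alpha = res.x[0], res.x[1:]
print(round(R, 4))  # converges to the true optimum R = max(d) = 6.0
```

Since the recourse always admits load shedding, every subproblem is feasible and only optimality cuts are needed, mirroring the remark made about the master problem above.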
The relevant GAMS codes and simulation data are provided in the electronic companion of the paper \cite{companion}. \textcolor{blue}{All simulations are carried out on a standard PC with an Intel Core i5 CPU with a clock rate of 2.7 GHz, requiring no more than 8 GB of RAM. The CPU time required to solve the conventional model \eqref{prob:reserve_clearing}-\eqref{prob:real_time_clearing}, stochastic model \eqref{stochastic_dis} and bilevel model \eqref{prob:bilevel_clearing} in Sections \ref{Impact_of_reserve_requirements}--\ref{Optimal_zonal_reserve} is kept below 30 s per operating hour. The sequential market with unit commitment and inter-temporal constraints is solved in less than a minute in Section \ref{24RTS_with_UC_conctrains}. The CPU time corresponding to the last case study is reported separately in Section \ref{IEEE96study}. } \subsection{Impact of reserve requirements on expected system cost} \label{Impact_of_reserve_requirements} \textcolor{blue}{In this section, we assess the expected cost of operating the power system under the conventional market setup \eqref{prob:reserve_clearing}-\eqref{prob:real_time_clearing} when it is fed with the reserve requirements determined by different reserve dimensioning approaches, including our proposal. To this end, we consider the time period corresponding to the peak-load hour. In addition, the capacity of each wind power farm is set to 100 MW. Next, we discuss the results linked to each reserve dimensioning approach:} \begin{enumerate} \item The \textit{probabilistic approach} \textcolor{ForestGreen}{defines the reserve} requirements from the \textcolor{WildStrawberry}{predictive} cumulative distribution function (CDF) $F$ of the total wind power portfolio, as the distance between the expected wind power production $\widehat{W}$ and a specified quantile $q^{(\alpha)} = F^{-1}(\alpha)$ \textcolor{WildStrawberry}{with nominal proportion} $\alpha \in [0,1]$.
\textcolor{blue}{This approach resembles the state-of-the-art reserve-dimensioning processes employed by European system operators using probabilistic forecast information \cite{6299425}}. \textcolor{WildStrawberry}{For a reliability level $\xi = \overline{\alpha} - \underline{\alpha}$}, the upward and downward reserve needs are dimensioned as follows: \begin{subequations} \begin{align} & D^{\text{U}} = \widehat{W} - F^{-1}(\underline{\alpha}), \\ & D^{\text{D}} = F^{-1}(\overline{\alpha}) - \widehat{W}. \end{align} \end{subequations} We initially consider \textcolor{WildStrawberry}{$\underline{\alpha} = 5\%$ and $\overline{\alpha} = 1 - \underline{\alpha} = 95\%$, corresponding to a reliability level $\xi=$ 90\%.} The resulting requirements amount to 127.9 MW and 89.1 MW for upward and downward reserves, respectively. \item \textit{The stochastic approach} \textcolor{ForestGreen}{derives the reserve} requirements from the stochastic dispatch model (\ref{stochastic_dis}). These requirements are equal to 214.3 MW for upward and 65.0 MW for downward reserves. \item \textit{The enhanced approach} computes the \textcolor{ForestGreen}{reserve} requirements using the proposed reserve determination model (\ref{prob:bilevel_clearing}). The resulting reserve needs amount to 282.9 MW and 42.6 MW for upward and downward reserves, respectively. \end{enumerate} The expected total system costs resulting from the implementation of the \textcolor{ForestGreen}{probabilistic, stochastic and enhanced operating reserve} approaches are \$25,890, \$24,531 and \$24,408, respectively. The total cost break-down is shown in Fig. \ref{fig:cost_comparison}, which demonstrates the impact of the reserve requirements on the cost of the different trading floors in the conventional dispatch procedure.
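As a concrete illustration of the probabilistic dimensioning rule, the sketch below computes the two requirements as empirical quantiles of a wind scenario set. The scenario data are synthetic (a clipped-free normal sample, not the paper's predictive CDF of the aggregated portfolio): upward reserve covers low-wind outcomes down to the lower quantile, downward reserve covers high-wind outcomes up to the upper quantile.

```python
import numpy as np

# Probabilistic reserve dimensioning from wind power scenarios.
# Synthetic scenario set for illustration; the paper works with the
# predictive CDF F of the total wind power portfolio.
rng = np.random.default_rng(42)
w = rng.normal(300.0, 60.0, size=100_000)          # wind scenarios [MW]

def probabilistic_requirements(w, alpha_lo=0.05, alpha_hi=0.95):
    """Upward reserve covers deficits below the alpha_lo quantile,
    downward reserve covers surpluses above the alpha_hi quantile."""
    w_hat = w.mean()                               # expected production W_hat
    d_up = w_hat - np.quantile(w, alpha_lo)        # D^U
    d_down = np.quantile(w, alpha_hi) - w_hat      # D^D
    return d_up, d_down

d_up, d_down = probabilistic_requirements(w)
# For N(300, 60) both requirements are close to 1.645 * 60 ~ 98.7 MW.
```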
Figure \ref{fig:ex1} shows that the reserve needs computed using the proposed model result in the highest reserve procurement cost among the different approaches, mainly due to a larger volume of upward reserve provision. In turn, efficient flexible generation that could be scheduled in the day-ahead market is now set aside to provide upward reserves. Considering that the price offers for upward reserve are proportional to the day-ahead price offers, the withdrawal of these resources increases the day-ahead energy cost, as shown in Fig. \ref{fig:ex2}. Nonetheless, the benefits of the enhanced approach are realized in real-time operation, as the re-dispatch cost is lower compared to that yielded by the probabilistic and stochastic approaches, as illustrated in Fig. \ref{fig:ex3}. As a result, the minimum of the expected total costs is achieved with the enhanced approach, as demonstrated by Fig. \ref{fig:ex4}. \textcolor{WildStrawberry}{Increasing the reliability level $\xi$ in the probabilistic approach may have a positive impact on the performance of the conventional model.} However, Table \ref{cost_break_down} shows that this approach never attains the expected cost achieved by the proposed model, since the probabilistic approach sets the requirements disregarding their impact on the subsequent operations, including potential wind spillage and load shedding. \textcolor{WildStrawberry}{On the contrary}, the proposed model finds \textcolor{WildStrawberry}{the optimal} trade-off between reserve procurement and real-time re-dispatch decisions that minimizes the total expected system cost. In this particular case, our model allows more wind curtailment to reduce downward reserve procurement cost.
\textcolor{WildStrawberry}{Regarding the stochastic model, it should be noted that even though} reserve requirements are set anticipating the real-time cost, reserve procurement and day-ahead energy schedules are obtained by a co-optimization of these products \textcolor{WildStrawberry}{that is incompatible with the European} market structure. As a result, the requirements provided by the stochastic approach lead to larger amounts of load shedding, highlighting that they are \textcolor{WildStrawberry}{practically} sub-optimal in a sequential dispatch procedure. \begin{table}[] \centering \caption{Cost break-down resulting from the implementation of a range of probabilistic requirements and enhanced requirements.} \label{cost_break_down} \tabcolsep=0.06cm \resizebox{0.48\textwidth}{!}{\begin{tabular}{l|ccccc|c} \specialrule{1pt}{1pt}{1pt} \multicolumn{1}{c|}{\multirow{3}{*}{Approach}} & \multicolumn{5}{c|}{Probabilistic approach} & \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}Enhanced\\ approach\end{tabular}} \\ \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{ \textcolor{WildStrawberry}{Quantiles $q^{(\underline{\alpha},\overline{\alpha})}$} of wind CDF} & \\ \cline{2-6} \multicolumn{1}{c|}{} & $q^{(05/95)}$ & $q^{(04/96)}$ & $q^{(03/97)}$ & $q^{(02/98)}$ & $q^{(01/99)}$ & \\ \specialrule{1pt}{1pt}{1pt} Requirements $D^{\text{U/D}}$ {[}MW{]} & 128/89 & 168/91 & 205/93 & 210/94 & 283/169 & 283/43 \\ Exp. 
total cost {[}\$1000{]} & 25.89 & 24.99 & 24.62 & 24.61 & 24.78 & 24.40 \\ -- \textit{Reserve} & 0.69 & 0.84 & 0.99 & 1.01 & 1.70 & 1.24 \\ -- \textit{Day-ahead} & 22.24 & 22.43 & 22.70 & 22.74 & 22.99 & 22.99 \\ -- \textit{Real-time} & 2.96 & 1.72 & 0.93 & 0.86 & 0.88 & 0.18 \\ \specialrule{1pt}{1pt}{1pt} \end{tabular}} \end{table} \subsection{Approximating the stochastic dispatch solution} \label{Approximating_the_stoch} \begin{figure} \caption{Expected daily operating cost as a function of wind penetration.} \label{costs_diff_wind} \end{figure} \begin{figure} \caption{Reserve procurement from nine flexible generating units for the peak-load hour and different wind penetration levels. Color density ranks generation units according to the reserve capacity price offers.} \label{Reserve_procur} \end{figure} We now investigate to what extent the reserve requirements computed with the proposed model are capable of approximating the ideal stochastic solution within the sequential dispatch procedure. To this end, we compare the expected daily system cost of three optimization models for different wind power penetration levels, defined as the ratio between the installed capacity of the entire wind power portfolio and the peak load. The first model represents the sequential market clearing \eqref{prob:reserve_clearing}-\eqref{prob:real_time_clearing} with reserve requirements computed with the probabilistic approach for a range of \textcolor{WildStrawberry}{ reliability levels $\xi \in [0.9,1]$}. The second model \textcolor{WildStrawberry}{also follows the} sequential market procedure with reserve requirements computed with the proposed model \eqref{prob:bilevel_clearing}. The third one is the stochastic ideal dispatch model \eqref{stochastic_dis} that theoretically attains maximum cost-efficiency, and is therefore used as a lower bound of the expected system cost.
\textcolor{blue}{It is worth noting the different role that the stochastic dispatch model plays in this part of the case study, compared to the previous Section \ref{Impact_of_reserve_requirements}. Here, we assume that the solution of the stochastic dispatch model will be implemented as the actual system schedule, presuming that the conventional market setup is replaced with its ideal stochastic counterpart. This is different from the application of the stochastic dispatch model \eqref{stochastic_dis} as a reserve-dimensioning approach in Section \ref{Impact_of_reserve_requirements}, where we considered that all trading floors are settled according to the prevailing European market model.} Figure \ref{costs_diff_wind} depicts the daily operating cost as a function of the wind power penetration level for the three models. The setting of the reserve requirements provided by the proposed model always results in a lower expected cost than the implementation of the requirements under the probabilistic approach. This figure further indicates that these reserve requirements \textcolor{WildStrawberry}{efficiently approximate the stochastic ideal solution even for a high penetration of wind power.} Figure \ref{Reserve_procur} provides further insight into the differences between the solutions of the three models. In particular, it shows the procurement of upward and downward reserves from specific flexible units ranked according to their reserve capacity price offers, i.e., from cheap to more expensive units distinguished by increasing color densities. The proposed model controls the trade-off between reserve and real-time costs, ensuring adequate upward reserves to minimize the amount of load shedding and enough downward reserves to prevent wind spillage. In contrast, the probabilistic approach underestimates upward reserve needs, while it overestimates downward reserve requirements.
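The merit-order ranking just described, where the cheapest reserve capacity offers are cleared first until the requirement is met, can be sketched as follows. The offers are toy values for illustration; the actual market-clearing problem additionally enforces capacity and zonal constraints that are omitted here.

```python
# Merit-order procurement of a reserve requirement (toy offers for
# illustration; the market model also enforces capacity and zonal
# constraints that are omitted in this sketch).
def merit_order(requirement, offers):
    """offers: (price $/MW, capacity MW) pairs; cheapest cleared first.
    Returns the awarded (price, volume) pairs and any uncovered deficit."""
    awards, remaining = [], requirement
    for price, cap in sorted(offers):
        take = min(cap, remaining)
        if take > 0.0:
            awards.append((price, take))
            remaining -= take
    return awards, remaining

awards, deficit = merit_order(283.0, [(6.0, 150.0), (4.5, 100.0), (9.0, 200.0)])
# awards: 100 MW at $4.5, then 150 MW at $6.0, then 33 MW at $9.0; no deficit.
```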
The enhanced solution for the reserve requirements deviates significantly from the ideal solution given that the stochastic model has more degrees of freedom, i.e., it controls not only the sufficiency of the reserve requirements but also their allocation among the flexible generators. This results in reserve procurement being `generator-specific', which prevents network congestion within the reserve control area. In an attempt to minimize expensive balancing actions, the stochastic model \textcolor{WildStrawberry}{may allocate reserves to more expensive units over cheaper providers}, violating the least-cost merit-order principle that is inherent in the conventional market design. \textcolor{ForestGreen}{As a consequence, the requirement imposed in our enhanced approach to respect the merit-order principle in the reserve capacity and day-ahead markets restricts the degree of approximation of the stochastic solution.} \subsection{Optimal zonal reserve requirements allocation} \label{Optimal_zonal_reserve} \begin{figure} \caption{IEEE 24-Bus reliability test system layout with three reserve control zones.} \label{Splitting} \end{figure} We now consider the optimal reserve dimensioning in a multi-zone setting. For this purpose, the IEEE 24-Bus system is split into three reserve control zones as depicted in Fig. \ref{Splitting}. This zonal layout corresponds to the one proposed in \cite{Jensen_2017}. Each control zone contains at least one wind power unit with a capacity of 100 MW and at least two flexible generation units. Unlike in the previous instance, the requirements computed with the probabilistic approach are now set for each reserve control zone independently, considering the distribution of wind power production of each zone. The reliability level $\xi$ is set to $0.98$. The resulting allocation for upward and downward reserve requirements among control zones is summarized in Fig.
\ref{Reserve_allocation_zones}, indicating that the probabilistic approach sets the reserve needs proportionally to the amount of stochastic in-feed in the respective control zone. On the other hand, the proposed model defines the requirements considering not only the zonal wind power in-feed, but also the cost implications of procuring reserve in a specific zone. As a result, the model finds it more efficient to constantly procure upward reserve from the third zone and to obtain the remaining upward reserve that is needed either from the first or the second zone, depending on the operating hour. In addition, this reserve allocation indicates that it is never optimal, in terms of expected system cost, to procure downward reserve from the second zone. This optimal reserve allocation among control zones is supported by the approximation gap depicted in Fig. \ref{three_zone_cost}, \textcolor{WildStrawberry}{showing the relative cost difference of the sequential market with respect to the ideal solution.} The requirements provided by the proposed model efficiently approximate the ideal solution with nearly zero gap over the first operating hours, and this gap remains relatively small for the subsequent hours, \textcolor{WildStrawberry}{as opposed to the large gap when probabilistic requirements are used}. \textcolor{ForestGreen}{The definition of multiple control zones allows setting enhanced reserve requirements that are closer to the `generator-specific' reserve allocation of the stochastic model. } Indeed, \textcolor{ForestGreen}{compared to the single-zone setup} in Section \ref{Impact_of_reserve_requirements}, the operating cost \textcolor{ForestGreen}{reduces by 2.5\%, from \$24,408 to \$24,034, after the definition of three control zones}.
\begin{figure} \caption{24-hour profiles of probabilistic and enhanced reserve requirements in three control zones.} \label{Reserve_allocation_zones} \end{figure} \begin{figure} \caption{Approximation gap of the sequential market with probabilistic and enhanced reserve requirements compared to stochastic dispatch.} \label{three_zone_cost} \end{figure} \subsection{Assessing enhanced reserve requirements in the presence of non-convexities} \label{24RTS_with_UC_conctrains} \textcolor{blue}{To assess the performance of the proposed reserve quantification model as a proxy for power markets with a more comprehensive, non-convex representation of technical constraints, we use the enhanced reserve requirements provided by the proposed model \eqref{prob:bilevel_clearing} as inputs to the sequential market-clearing problem \eqref{prob:reserve_clearing}-\eqref{prob:real_time_clearing}, with unit commitment and ramping constraints integrated in the day-ahead auction as explained in Appendix \ref{appA}.} \textcolor{blue}{Figure \ref{Cost_proxy} shows the hourly profile of the expected operating system cost resulting from the implementation of the enhanced requirements in the system with full representation of the technical constraints. This profile is compared against those obtained by setting probabilistic reserve requirements with reliability levels of 98\% and 90\%. The reserve requirements provided by the proposed model always attain better cost efficiency than the probabilistic requirements, even though the proposed model does not account for the whole set of technical limits of power plants.
In the first case in Fig.~\ref{Cost_proxy}(a), the model allows savings of \$23,746, which are nearly equal to the cost of peak-hour operation, and it allows even larger savings of \$28,845 in the second case in Fig.~\ref{Cost_proxy}(b).} \begin{center} \begin{figure} \caption{Expected operating cost yielded by the implementation of the probabilistic and enhanced reserve requirements in the conventional market-clearing problem \eqref{prob:reserve_clearing}-\eqref{prob:real_time_clearing} including the unit commitment constraints \eqref{app1}-\eqref{app7}.} \label{Cost_proxy} \end{figure} \end{center} \textcolor{blue}{ \subsection{Application to the IEEE-96 RTS} \label{IEEE96study} We now consider the modernized version of the IEEE-96 RTS Test System proposed in \cite{IEEE96RTS} to assess the scalability of the proposed model. The test system includes three control zones interconnected by six tie-lines. The system demand follows a 24-hour profile with a peak load of 7.5 GW. The conventional generation is represented by 6 nuclear power plants serving the base load, 3 coal power plants that offer 40\% of their capacities for the reserve needs, and 87 gas-fired power plants offering 100\% of their capacities to the reserve procurement auction. The reserve offering prices of flexible units are set to 25\% of the marginal production cost for both upward and downward reserve needs. There are 19 wind farms distributed among the control zones with an overall capacity of 2.76 GW. Their stochastic output is described by 100 equiprobable scenarios obtained from \cite{pinson2013wind}. The input data and the corresponding GAMS codes are provided in the electronic companion of the paper \cite{companion}. } \textcolor{blue}{ The test case is solved for wind penetration levels of 13.8\%, 23.0\%, and 36.8\% of the peak-hour load by implementing the multicut Bender's algorithm explained in Section \ref{section_sol_str}.
The tolerance of the algorithm is set to 0.02\%, requiring three to eight iterations depending on the operating hour. The resulting CPU time is reported in Table \ref{CPU_time}. The CPU time in all three cases is kept below one hour, allowing timely day-ahead planning with the proposed model. It is worth mentioning that the CPU time can be reduced at the expense of a marginal deviation from the global optimum with a higher tolerance. } \textcolor{blue}{ \begin{table}[] \centering \textcolor{blue}{ \caption{\textcolor{blue}{CPU performance of the Bender's algorithm.}} \renewcommand{\arraystretch}{1.5} \begin{tabular}{c|ccc} \hline Wind penetration {[}\%{]} & 13.8 & 23.0 & 36.8 \\ CPU time {[}min{]} & 32.1 & 33.5 & 58.6 \\ \hline \end{tabular} \label{CPU_time} } \end{table} } \textcolor{blue}{ The daily operating cost resulting from the implementation of the enhanced zonal reserve requirements computed by the proposed model is always lower than that obtained with the probabilistic approach with reliability levels of 90\% and 98\%, as demonstrated in Table \ref{IEEE96_cost}. The difference in operating cost is explained by the anticipated cost of procuring upward and downward reserves from a specific control zone, while the probabilistic requirements are obtained solely in proportion to the amount of stochastic in-feed in each control zone. As a result, the relative cost savings provided by the model increase with the wind penetration level and range between 0.6\% and 7.2\%.
Further cost savings towards the ideal solution provided by the stochastic model are limited due to the enforced merit order in both reserve and day-ahead markets.} \begin{table}[] \centering \textcolor{blue}{ \caption{\textcolor{blue}{Daily operating cost with probabilistic and enhanced zonal reserve requirements in comparison with the stochastic ideal solution [\$1000].}} \renewcommand{\arraystretch}{1} \begin{tabular}{ccccc} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Wind penetration\\ {[}\%{]}\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Probabilistic \\ solution\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Enhanced\\ solution\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Ideal\\ solution\end{tabular}} \\ \cline{2-3} & $\xi=90\%$ & $\xi=98\%$ & & \\ \hline 13.8 & 1,912.4 & 1,888.8 & 1,877.3 & 1,850.0\\ 23.0 & 1,760.8 & 1,719.3 & 1,700.8 & 1,660.5\\ 36.8 & 1,550.7 & 1,482.3 & 1,446.0 & 1,402.8\\ \hline \end{tabular} \label{IEEE96_cost} } \end{table} \textcolor{blue}{ Finally, Table \ref{IEEE96_cost2} illustrates the economic benefit that the proposed model yields as a proxy for the system with the full network representation and technical constraints of power plants described in Appendix \ref{appA}. The results show that, in spite of the incomplete description of technical constraints in the lower level of the proposed bilevel model, it still provides a feasible input with a tangible cost reduction for the markets with non-convexities. The economic benefit provided by the model ranges from 0.5\% to 1.6\%. Moreover, the proposed approach further outperforms the probabilistic one for the largest wind penetration level, where the overestimated requirements provided by the probabilistic approach lead to a reserve schedule that results in an infeasible day-ahead operation.
} \begin{table}[] \centering \textcolor{blue}{ \caption{\textcolor{blue}{Daily operating cost with probabilistic and enhanced zonal reserve requirements with full representation of technical constraints [\$1000].}} \renewcommand{\arraystretch}{1} \begin{tabular}{cccc} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Wind penetration\\ {[}\%{]}\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Probabilistic \\ solution\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Enhanced\\ solution\end{tabular}} \\ \cline{2-3} & $\xi=90\%$ & $\xi=98\%$ & \\ \hline 13.8 & 2,072.2 & 2,073.4 & 2,061.5 \\ 23.0 & 1,947.9 & 1,949.1 & 1,928.6 \\ 36.8 & 1,764.4 & infeas. & 1,735.9 \\ \hline \end{tabular} \label{IEEE96_cost2} } \end{table} \section{Conclusion} \label{sec4} This paper considers the optimal setting of reserve requirements in a European market framework. We propose a new method to quantify reserve needs that brings \textcolor{blue}{the sequence of the reserve, day-ahead and real-time markets} closer to the ideal stochastic energy and reserves co-optimization model in terms of total expected cost. The proposed model is formulated as a stochastic bilevel problem, \textcolor{blue}{which is eventually recast as a MILP problem. To reduce the computational burden of this model, we apply an iterative solution approach based on the multi-cut Bender's decomposition algorithm.} \textcolor{blue}{Our numerical studies demonstrate the benefit of properly setting reserve requirements.
Our reserve quantification model outperforms both the probabilistic and the stochastic reserve setting approaches due to its ability to anticipate the impact of day-ahead decisions on the real-time operation, while taking into account the actual market structure.} \textcolor{blue}{Considering the increasing penetration of stochastic power producers, we show that the reserve requirements provided by the proposed model take the expected system operating cost closer to that given by the ideal energy and reserve co-optimization model, but the degree of this approximation is limited due to the sequential scheduling of reserve and energy in European electricity markets.} \textcolor{blue}{However, our analysis further indicates that the definition of multiple reserve control zones allows for a more efficient spatial allocation of reserves, which reduces the approximation gap with respect to the ideal stochastic model.} \textcolor{blue}{Finally, the efficiency of the proposed reserve dimensioning model was tested against market designs whose clearing process explicitly accounts for inter-temporal and non-convex constraints, i.e., ramping limits and unit commitment constraints.
Even though the proposed model does not account for the whole set of technical constraints of such markets, the enhanced reserve requirements still bring the cost of sequential market operation closer to the stochastic ideal, highlighting the importance of the intertemporal coordination between the three trading floors through the intelligent setting of reserve needs.} \textcolor{blue}{Future research may focus on the consideration of the tight relaxations of the unit commitment constraints to achieve better approximations for the case of non-convex market designs, and the corresponding tuning of the Bender's decomposition algorithm to better cope with the intertemporal constraints.} \appendix \subsection{Incorporation of unit commitment and ramping constraints} \label{appA} \textcolor{blue}{In contrast to the prevailing approach of the European market design, other electricity markets, e.g., the majority of US markets, explicitly model unit commitment constraints and thermal limits of power plants in the market-clearing problem. 
To assess the performance of the proposed reserve quantification model in markets with unit commitment constraints, the following set of constraints is integrated in the day-ahead market-clearing problem:} \textcolor{blue}{ \begingroup \allowdisplaybreaks \begin{subequations} \begin{align} & u_{it} \underline{P}_{i} \leq P_{it}^{\text{C}} \leq u_{it} \overline{P}_{i}, \; \forall i \in I, \; \forall t \in T, \label{app1}\\ & SU_{it} \geq C_{i}^{\text{SU}} (u_{it} - u_{i(t-1)}), \; \forall i \in I, \; \forall t>1, \label{app2}\\ & SU_{it} \geq C_{i}^{\text{SU}} (u_{it} - u_{i}^{0}), \; \forall i \in I, \; t=1, \label{app3}\\ & P_{it}^{\text{C}} - P_{i(t-1)}^{\text{C}} \leq R_{i}^{+}, \; \forall i \in I, \; \forall t >1, \label{app4}\\ & P_{it}^{\text{C}} - P_{i}^{\text{C},0} \leq R_{i}^{+}, \; \forall i \in I, \; t = 1, \label{app5}\\ & P_{i(t-1)}^{\text{C}} - P_{it}^{\text{C}} \leq R_{i}^{-}, \; \forall i \in I, \; \forall t >1, \label{app6}\\ & P_{i}^{\text{C},0} - P_{it}^{\text{C}} \leq R_{i}^{-}, \; \forall i \in I, \; t =1, \label{app7} \end{align} \end{subequations} \endgroup where $T$ is the set of operating hours indexed by $t$, $C_{i}^{\text{SU}}$ is the start-up cost of unit $i$, $R_{i}^{+}$ and $R_{i}^{-}$ are the ramp-up and ramp-down limits, $\underline{P}_{i}$ is the minimum power output limit, and $P_{i}^{\text{C},0}$ and $u_{i}^{0}$ are the initial power output and commitment status of unit $i$. The set of decision variables of the original problem is supplemented with the variable $u_{it}\in \{0,1\}$, which denotes the commitment status of generating units, and the variable $SU_{it}$, which captures the cost induced by the start-up of generating units. The generating limits of each unit are now additionally enforced by the commitment decisions of the system operator through \eqref{app1}. The start-up logic is governed by \eqref{app2} and \eqref{app3} and activated by adding $SU_{it}$ to the original objective function of problem \eqref{prob:day_ahead_clearing}.
The ramp limits of generators are accounted for through \eqref{app4}-\eqref{app7}.} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/Dvorkin.pdf}}]{Vladimir Dvorkin Jr.} (S'18) received the B.S. degree in electrical engineering from the Moscow Power Engineering Institute, Russia, in 2012, the M.Sc. degree in Economics from the Higher School of Economics, Russia, in 2014, and the M.Sc. degree in Sustainable Energy from the Technical University of Denmark in 2017. He is currently pursuing the Ph.D. degree with the Department of Electrical Engineering, Center for Electric Power and Energy, Technical University of Denmark. His research interests include economics, game theory, optimization, and their applications to power systems and electricity markets. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/Delikaraoglou.pdf}}]{Stefanos Delikaraoglou} (S'14-M'18) received the Dipl.-Eng. degree from the School of Mechanical Engineering, National Technical University of Athens, Greece, in 2010 and the M.Sc. degree in Sustainable Energy from the Technical University of Denmark in 2012. He holds a Ph.D. degree awarded in 2016 by the Department of Electrical Engineering at the Technical University of Denmark. He is currently a Postdoctoral Fellow with the EEH-Power Systems Laboratory at the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland. His research interests include energy markets and multi-energy systems modeling, decision-making under uncertainty, equilibrium models and hierarchical optimization. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Photos/Morales.pdf}}]{Juan M.\ Morales} (S'07-M'11-SM'16) received the Ingeniero Industrial degree from the University of M\'alaga, M\'alaga, Spain, in 2006, and a Ph.D.
degree in Electrical Engineering from the University of Castilla-La Mancha, Ciudad Real, Spain, in 2010. He is currently an associate professor in the Department of Applied Mathematics at the University of M\'alaga in Spain. \newline \indent His research interests are in the fields of power systems economics, operations and planning; energy analytics and optimization; smart grids; decision-making under uncertainty, and electricity markets. \end{IEEEbiography} \end{document}
\begin{document} \title{Equivariant class group. I.\ Finite generation of the Picard and the class groups of an invariant subring} \footnote[0] {2010 \textit{Mathematics Subject Classification}. Primary 13A50; Secondary 13C20. Key Words and Phrases. invariant theory, class group, Picard group, Krull ring. } \begin{abstract} The purpose of this paper is to define equivariant class group of a locally Krull scheme (that is, a scheme which is locally a prime spectrum of a Krull domain) with an action of a flat group scheme, study its basic properties, and apply it to prove the finite generation of the class group of an invariant subring. In particular, we prove the following. Let $k$ be a field, $G$ a smooth $k$-group scheme of finite type, and $X$ a quasi-compact quasi-separated locally Krull $G$-scheme. Assume that there is a $k$-scheme $Z$ of finite type and a dominating $k$-morphism $Z\rightarrow X$. Let $\varphi:X\rightarrow Y$ be a $G$-invariant morphism such that $\Cal O_Y\rightarrow (\varphi_*\Cal O_X)^G$ is an isomorphism. Then $Y$ is locally Krull. If, moreover, $\Cl(X)$ is finitely generated, then $\Cl(G,X)$ and $\Cl(Y)$ are also finitely generated, where $\Cl(G,X)$ is the equivariant class group. In fact, $\Cl(Y)$ is a subquotient of $\Cl(G,X)$. For actions of connected group schemes on affine schemes, there are similar results of Magid and Waterhouse, but our result also holds for disconnected $G$. The proof depends on a similar result on (equivariant) Picard groups. \end{abstract} \section{Introduction} The purpose of this paper is to define equivariant class group of a locally Krull scheme with an action of a flat group scheme, study its basic properties, and apply it to prove the finite generation of the class group of an invariant subring. A locally Krull scheme is a scheme which is locally the prime spectrum of a Krull domain. For Krull domains, see \cite{CRT} and \cite{Fossum}. 
As a Noetherian normal domain is a Krull domain, a normal scheme of finite type over a field (e.g., a normal variety) is a typical example of a (quasi-compact quasi-separated) locally Krull scheme. Although a Krull domain is integrally closed, it may not be Noetherian. The theory of class groups of Noetherian normal domains generalizes to a well-established theory of class groups of Krull domains \cite{Fossum}. In this paper, we also consider non-affine locally Krull schemes. Also, we consider the equivariant version of the theory of class groups over them. Let $Y$ be a quasi-compact integral locally Krull scheme. Then the class group $\Cl'(Y)$ of $Y$ is defined to be the quotient of the free abelian group $\Div(Y)$ generated by the set of integral closed subschemes of codimension one, modulo linear equivalence. The second definition of the class group uses rank-one reflexive modules. For a Krull domain $R$, an $R$-module $M$ is said to be reflexive (or divisorial) if $M$ is a submodule of a finitely generated module, and the canonical map $M\rightarrow M^{**}$ is an isomorphism, where $(?)^*$ denotes the functor $\Hom_R(?,R)$. An $\Cal O_Y$-module $\Cal M$ is said to be reflexive if $\Cal M$ is quasi-coherent, and for any affine open subscheme $U=\Spec A$ of $Y$ such that $A$ is a Krull domain, $\Gamma(U,\Cal M)$ is a reflexive $A$-module, where $\Gamma(U,?)$ denotes the sections over $U$. The set of isomorphism classes $\Cl(Y)$ of rank-one reflexive $\Cal O_Y$-modules is an additive group with the addition \begin{equation}\label{addition.eq} [\Cal M]+[\Cal N]=[(\Cal M\otimes_{\Cal O_Y}\Cal N)^{**}], \end{equation} where $(?)^*=\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_Y}(?,\Cal O_Y)$. It is easy to see that the assignment $D\mapsto \Cal O_Y(D)$ induces an isomorphism from $\Cl'(Y)$ to $\Cl(Y)$, as in the well-known case of normal varieties over a field \cite[Appendix to section~1]{Reid}.
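As a concrete instance of the reflexive-module description and of the addition rule (\ref{addition.eq}), the following classical computation for the quadric cone may be kept in mind (it is standard and not part of this paper's argument):

```latex
% Classical example: the class group of the quadric cone,
% computed via rank-one reflexive modules (standard; added as illustration).
Let $k$ be a field with $\operatorname{char}k\neq 2$, and let
$R=k[x,y,z]/(xy-z^2)$, a Noetherian normal (hence Krull) domain of
dimension two. The height-one prime $\frak p=(x,z)R$ is a rank-one
reflexive module which is not free, while
\[
[\frak p]+[\frak p]=[(\frak p\otimes_R\frak p)^{**}]=[\frak p^{(2)}]
=[(x)R]=0,
\]
since $x^2$, $xz$, and $z^2=xy$ all lie in $(x)R$, and
$x=y^{-1}z^2\in\frak p^2R_{\frak p}$. In fact
$\Cl(\Spec R)\cong\Bbb Z/2\Bbb Z$, generated by $[\frak p]$.
```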
If $Y$ is a quasi-compact locally Krull scheme, then $Y=Y_1\amalg\cdots\amalg Y_r$ with each $Y_i$ being quasi-compact integral locally Krull, and we may define $\Cl'(Y)=\Cl'(Y_1)\times\cdots \times \Cl'(Y_r)$. Similarly for $\Cl(Y)$, and still we have $\Cl(Y)\cong\Cl'(Y)$. In the rest of this introduction, let $S$ be a scheme, $G$ a flat $S$-group scheme, and $X$ a $G$-scheme (that is, an $S$-scheme with a $G$-action). Let $X$ be locally Krull. The first purpose of this paper is to define the equivariant class group $\Cl(G,X)$ of $X$ and study its basic properties. Generalizing the second definition above, we define $\Cl(G,X)$ to be the set of isomorphism classes of quasi-coherent $(G,\Cal O_X)$-modules which are reflexive as $\Cal O_X$-modules. We prove that $\Cl(G,X)$ is an additive group with the addition given by (\ref{addition.eq}). We give the simplest example. If $S=X=\Spec k$ with $k$ a field, and $G$ is an algebraic group over $k$, then $\Cl(G,X)$ is nothing but the character group $\Cal X(G)$ of $G$. That is, it is the abelian group of isomorphism classes of one-dimensional representations of $G$. We do not try to redefine $\Cl(G,X)$ from the viewpoint of the first definition (that of $\Cl'(Y)$). So we do not consider $\Cl'(Y)$ in the sequel, and always mean the group of isomorphism classes of rank-one reflexive sheaves by the class group $\Cl(Y)$ of $Y$ for a locally Krull scheme $Y$, see (\ref{equivariant-class.par}). We prove that removing closed subsets of codimension two or more does not change the equivariant class group (Lemma~\ref{codim-two-ref.thm}). We also prove that if $\varphi:X\rightarrow Y$ is a principal $G$-bundle with $X$ locally Krull, then $Y$ is also locally Krull, and the inverse image functor induces an isomorphism $\varphi^*:\Cl(Y)\rightarrow \Cl(G,X)$ (Proposition~\ref{pfb-cl-isom.thm}). This isomorphism gives an intuitive description of the equivariant class group --- it is the class group of the quotient space (or better, the quotient stack).
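The following standard example (classical, and not taken from this paper) illustrates the isomorphism $\varphi^*:\Cl(Y)\rightarrow \Cl(G,X)$ for a principal bundle:

```latex
% Classical example: the tautological G_m-torsor over projective space
% (added as illustration; all facts used are standard).
Let $k$ be a field, $n\geq 1$, $G=\Bbb G_m$, and
$X=\Bbb A^{n+1}_k\setminus\{0\}$ with the scaling action of $\Bbb G_m$.
The quotient morphism $\varphi:X\rightarrow Y=\Bbb P^n_k$ is a principal
$\Bbb G_m$-bundle; it is even Zariski-locally trivial over the standard
affine charts of $\Bbb P^n_k$. Here $X$ is regular, hence locally Krull,
and since the removed origin has codimension $n+1\geq 2$ in
$\Bbb A^{n+1}_k$, we get $\Cl(X)=\Cl(\Bbb A^{n+1}_k)=0$. Nevertheless
\[
\Cl(\Bbb G_m,X)\cong\Cl(\Bbb P^n_k)\cong\Bbb Z;
\]
every equivariant class has underlying sheaf $\Cal O_X$, the possible
equivariant structures are indexed by the characters of $\Bbb G_m$, and
they descend to the sheaves $\Cal O_{\Bbb P^n_k}(d)$, $d\in\Bbb Z$.
```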
In the continuation of this paper, we give some variations of this isomorphism. In general, the prime spectrum of an invariant subring may not be a good quotient. However, we can prove that if $\varphi:X\rightarrow Y$ is a $G$-invariant morphism such that $X$ is quasi-compact quasi-separated locally Krull and $\Cal O_Y\rightarrow (\varphi_*\Cal O_X)^G$ is an isomorphism, then $Y$ is also locally Krull (Lemma~\ref{Y-Krull.thm}), and $\Cl(Y)$ is a subquotient of $\Cl(G,X)$ (Lemma~\ref{subquotient.thm}). Using this lemma, we study the finite generation of the class group of $Y$. This is the second purpose of this paper. We prove the following. \begin{trivlist}\item[\bf Theorem~\ref{main2.thm}] Let $k$ be a field, $G$ a smooth $k$-group scheme of finite type, and $X$ a quasi-compact quasi-separated locally Krull $G$-scheme. Assume that there is a $k$-scheme $Z$ of finite type and a dominating $k$-morphism $Z\rightarrow X$. Let $\varphi:X\rightarrow Y$ be a $G$-invariant morphism such that $\Cal O_Y\rightarrow (\varphi_*\Cal O_X)^G$ is an isomorphism. Then $Y$ is locally Krull. If, moreover, $\Cl(X)$ is finitely generated, then $\Cl(G,X)$ and $\Cl(Y)$ are also finitely generated. \end{trivlist} Note that a normal $G$-scheme $X$ of finite type over $k$ is automatically quasi-compact quasi-separated locally Krull, and the identity map $Z:=X\rightarrow X$ is a dominating map, and so the assumptions of the theorem are satisfied, see Corollary~\ref{main2-cor.thm}. In \cite{Magid}, Magid proved that if $R$ is a finitely generated normal domain over an algebraically closed field $k$, $G$ is a connected algebraic group acting rationally on $R$, and the class group $\Cl(R)$ of $R$ is a finitely generated abelian group, then the class group $\Cl(R^G)$ of the ring of invariants $R^G$ is also finitely generated. After that, Waterhouse \cite{Waterhouse} proved a similar result on an action of a connected affine group scheme on a Krull domain over an arbitrary base field.
Theorem~\ref{main2.thm} is not a generalization of Waterhouse's theorem: we assume the existence of $Z\rightarrow X$ as above, while he describes the relationship between $\Cl(X)$ and $\Cl(Y)$ precisely \cite[Theorem~4]{Waterhouse}. On the other hand, we treat disconnected groups, and non-affine groups and schemes. The action of finite groups is classical (see for example, \cite[Chapter~IV]{Fossum}), but the author does not know if the theorem for this case is in the literature, though it is not so difficult. Note that in Theorem~\ref{main2.thm}, even if $X$ is a normal variety, $Y$ may not be locally Noetherian (but is still locally Krull), as Nagata's counterexample \cite{Nagata} shows. In fact, there are some operations on rings under which the class of Krull domains is closed, but that of Noetherian normal domains is not. Let $R$ be a domain. For a subfield $K$ of the field of quotients $Q(R)$ of $R$, consider $K\cap R$. If $R$ is Krull, then so is $K\cap R$. Even if $R$ is a polynomial ring (in finitely many variables) over a subfield $k$ of $K\cap R$, $K\cap R$ may not be Noetherian \cite{Nagata}. For a domain $R$, consider a finite extension field $L$ of $Q(R)$. Let $R'$ be the integral closure of $R$ in $L$. If $R$ is a Krull domain, then so is $R'$. If $R$ is Noetherian, then $R'$ is a Krull domain (Mori--Nagata theorem, see \cite[(4.10.5)]{SH}). Even if $R$ is a (Noetherian) regular local ring, $R'$ may not be Noetherian. Indeed, the ring $R$ and $L=Q(R[d])$ in \cite[Appendix, Example~5]{Nagata2} give such an example (this is one of the so-called bad Noetherian rings. If $R$ is Japanese, then clearly $R'$ is Noetherian). If $Z$ is an integral quasi-compact locally Krull scheme, then $\Gamma(Z,\Cal O_Z)$ is a Krull domain (Lemma~\ref{finite-direct-Krull.thm}).
In particular, for a normal projective variety $Y$ and its Cartier divisors $D_1,\ldots,D_n$, the multi-section ring \[ \bigoplus_{\lambda\in \Bbb Z^n}\Gamma(Y,\Cal O_Y(\lambda_1D_1+\cdots+ \lambda_nD_n))t_1^{\lambda_1}\cdots t_n^{\lambda_n} \] is a Krull ring (see also \cite[Theorem~1.1 (1)]{EKW}), but not always Noetherian \cite{Mukai}. Thus locally Krull schemes arise in a natural way in algebraic geometry and commutative algebra. Despite some technical difficulties, it would be worth discussing (equivariant) class groups in the framework of locally Krull schemes. Returning to Theorem~\ref{main2.thm}, it is proved as follows. As $\Cl(Y)$ is a subquotient of $\Cl(G,X)$, it suffices to show that the kernel of the map $\alpha:\Cl(G,X)\rightarrow\Cl(X)$ is finitely generated, where $\alpha$ is the map forgetting the $G$-action. This problem is further reduced to a similar problem for Picard groups. For a general $G$-scheme $X$ (not necessarily locally Krull), the equivariant Picard group $\Pic(G,X)$ is the set of isomorphism classes of $G$-equivariant invertible sheaves on $X$. The addition is given by $[\Cal L]+[\Cal L']=[\Cal L\otimes_{\Cal O_X}\Cal L']$. So if $X$ is locally Krull, $\Pic(G,X)$ is a subgroup of $\Cl(G,X)$, and the kernel of the map $\rho:\Pic(G,X)\rightarrow \Pic(X)$ agrees with $\Ker\alpha$ above. So Theorem~\ref{main2.thm} follows from the following \begin{trivlist}\item[\bf Theorem~\ref{main.thm}] Let $k$ be a field, $G$ a smooth $k$-group scheme of finite type, and $X$ a reduced $G$-scheme which is quasi-compact and quasi-separated. Assume that there is a $k$-scheme $Z$ of finite type and a dominating $k$-morphism $Z\rightarrow X$. Then $H^1_{\Mathrm{alg}}(G,\Cal O^\times)=\Ker(\rho:\Pic(G,X)\rightarrow\Pic(X))$ is a finitely generated abelian group.
\end{trivlist} Note that a reduced $k$-scheme $X$ of finite type is automatically quasi-compact and quasi-separated, and admits a dominating map from a finite-type scheme (the identity map of $X$), see Corollary~\ref{main-cor2.thm}. The proof of this theorem utilizes the description of $H^1_{\Mathrm{alg}}(G,\Cal O^\times)$ in \cite[Chapter~7]{Dolgachev}. If $\varphi:X \rightarrow Y$ is a $G$-invariant morphism such that $\Cal O_Y\rightarrow (\varphi_*\Cal O_X)^G$ is an isomorphism, $\Pic(Y)$ is a subgroup of $\Pic(G,X)$ (Lemma~\ref{pic-injective.thm}). So under the assumption of the theorem, if $\Pic(X)$ is finitely generated, then $\Pic(G,X)$ and $\Pic(Y)$ are finitely generated (Corollary~\ref{main-cor.thm}). We also give some description of $H^i_{\Mathrm{alg}}(G,\Cal O^\times)$ for $i\geq 2$ for connected $G$ (Proposition~\ref{connected-cohomology.thm}). Section~2 contains preliminaries on notation and terminology. Section~3 is dedicated to proving a five-term exact sequence involving the map $\rho:\Pic(G,X)\rightarrow\Pic(X)^G$, where $\Pic(X)^G$ is the kernel of the map $\Pic(X)\rightarrow \Pic(G\times X)$ given by $[\Cal L]\mapsto a^*[\Cal L]-p_2^*[\Cal L]$ ($a:G\times X\rightarrow X$ is the action, and $p_2$ is the second projection), see Proposition~\ref{five-term.thm}. The exact sequence also involves the ``algebraic $G$-cohomology group of $\Cal O_X^\times$,'' denoted by $H^i_{\Mathrm{alg}}(G,\Cal O^\times)$ for $i=1,2$, see (\ref{group-cohomology.par}). Although the author cannot find exactly the same exact sequence in the literature, it is more or less well-known. The first three terms of the exact sequence are treated in \cite[Chapter~7]{Dolgachev} (the first four terms for finite group actions are also treated there). This exact sequence is important in discussing the kernel and the cokernel of $\rho$. In section~4, we prove Theorem~\ref{main.thm}.
We utilize the description $\Ker\rho\cong H^1_{\Mathrm{alg}}(G,\Cal O^\times)$, and reduce the problem to the action of a finite group scheme on a finite scheme. We also give some relationship between $H^1_{\Mathrm{alg}}(G,\Cal O^\times)$ and the character group $\Cal X(G)$ in some special cases. We also describe $H^i_{\Mathrm{alg}}(G,\Cal O^\times)$ for higher $i$ for a connected group action. Section~5 corresponds to the first purpose described above. We define $\Cl(G,X)$ for $X$ locally Krull, and discuss some basics on (equivariant) class groups on locally Krull schemes. In section~6, we prove Theorem~\ref{main2.thm}. The author thanks Professor I.~Dolgachev, Professor O.~Fujino, Professor G.~Kemper, Professor K.~Kurano, Professor J.-i.~Nishimura, and Professor S.~Takagi for valuable advice. \section{Preliminaries}\label{prelimilaries} \paragraph For a commutative ring $R$, $Q(R)$ denotes its total ring of fractions. That is, the localization $R_S$ of $R$, where $S$ is the set of nonzerodivisors of $R$. In particular, if $R$ is an integral domain, $Q(R)$ is its field of fractions. \paragraph In this paper, for a scheme $X$ and its subset $\Gamma$, the codimension $\codim_X\Gamma$ of $\Gamma$ in $X$ is $\inf_{\gamma\in\Gamma}\dim \Cal O_{X,\gamma}$ by definition (cf.~\cite[chapter~0, (14.2.1)]{EGA-IV-1}). The codimension of the empty set in $X$ is $\infty$. \paragraph Throughout this paper, let $S$ be a scheme. For an $S$-group scheme $G$, a $G$-scheme means an $S$-scheme with a (left) action of $G$. We say that $f:X\rightarrow Y$ is a $G$-morphism if $f$ is an $S$-morphism, $X$ and $Y$ are $G$-schemes, and $f(gx)=gf(x)$ holds. In this case, we also say that $X$ is a $(G,Y)$-scheme. A $(G,Y)$-morphism $h:X\rightarrow X'$ is a morphism between $(G,Y)$-schemes which is both a $G$-morphism and a $Y$-morphism. We say that $f:X\rightarrow Y$ is a $G$-invariant morphism if $f$ is a $G$-morphism and $G$ acts on $Y$ trivially. If so, $f(gx)=f(x)$ holds. 
\paragraph\label{fpqc.par} A morphism of schemes $\varphi:X\rightarrow Y$ is fpqc if it is faithfully flat, and for any quasi-compact open subset $V$ of $Y$, there exists some quasi-compact open subset $U$ of $X$ such that $\varphi(U)=V$. For basics on the fpqc property, see \cite[(2.3.2)]{Vistoli}. \paragraph Let $Y$ be a $G$-scheme on which $G$ acts trivially. A $(G,Y)$-scheme $\varphi:X\rightarrow Y$ is said to be a trivial $G$-bundle if $X$ is $(G,Y)$-isomorphic to $G\times Y$, regarded as a $(G,Y)$-scheme via the second projection $p_2:G\times Y\rightarrow Y$. \begin{definition} We say that $\varphi:X\rightarrow Y$ is a principal $G$-bundle (or a $G$-torsor) (with respect to the fpqc topology) if it is $G$-invariant, and there exists some fpqc $S$-morphism $Y'\rightarrow Y$ such that the base change $X'=Y'\times_Y X\rightarrow Y'$ is a trivial $G$-bundle. \end{definition} \begin{lemma}[{\cite[(4.43)]{Vistoli}}]\label{pfb-equiv.thm} A $G$-invariant morphism $\varphi:X\rightarrow Y$ is a principal $G$-bundle if and only if there exists some fpqc morphism $Y'\rightarrow Y$ which factors through $\varphi$, and the map $\Phi:G\times X\rightarrow X\times_Y X$ given by $\Phi(g,x)=(gx,x)$ is an isomorphism. \qed \end{lemma} \section{The fundamental five-term exact sequence} \paragraph Let $(\Cal C,\Cal O_{\Cal C})$ be a ringed site. An $\Cal O_{\Cal C}$-module $\Cal L$ is said to be invertible if for any $c\in\Cal C$ there exists some covering $(c_\lambda\rightarrow c)$ of $c$ such that for each $\lambda$, $\Cal L|_{c_\lambda}\cong \Cal O_{\Cal C}|_{c_\lambda}$. The set of isomorphism classes of invertible sheaves is denoted by $\Pic(\Cal C)$. It is an (additive) abelian group by the operation $[\Cal L]+[\Cal M] :=[\Cal L\otimes_{\Cal O_{\Cal C}}\Cal M]$. $\Pic(\Cal C)$ is called the Picard group of $\Cal C$.
An $\Cal O_{\Cal C}$-module $\Cal M$ is said to be quasi-coherent if for any $c\in\Cal C$, there exists some covering $(c_\lambda\rightarrow c)$ of $c$ such that for each $\lambda$, there exists some exact sequence of $\Cal O_{\Cal C}|_{c_\lambda}$-modules \[ \Cal F_1\rightarrow \Cal F_0\rightarrow \Cal M|_{c_\lambda}\rightarrow 0 \] with $\Cal F_1$ and $\Cal F_0$ free (where a free sheaf means a (possibly infinite) direct sum of copies of $\Cal O_{\Cal C}|_{c_\lambda}$). Obviously, an invertible sheaf is quasi-coherent. \paragraph Let $\Sh(\Cal C)$ and $\Ps(\Cal C)$ denote the categories of abelian sheaves and presheaves, respectively. For $\Cal M\in\Sh(\Cal C)$, the Ext-group $\Ext_{\Sh(\Cal C)}^i(a\Bbb Z,\Cal M)$ is denoted by $H^i(\Cal C,\Cal M)$, where $\Bbb Z$ is the constant presheaf on $\Cal C$ and $a\Bbb Z$ its sheafification. Similarly, for $\Cal N\in\Ps(\Cal C)$, $\Ext_{\Ps(\Cal C)}^i(\Bbb Z,\Cal N)$ is denoted by $H^i_{\Ps}(\Cal C,\Cal N)$. Let $q:\Sh(\Cal C)\rightarrow \Ps(\Cal C)$ be the inclusion. As it has an exact left adjoint (the sheafification $a$), it is left exact and preserves injectives. Its right derived functor $(R^iq)(\Cal M)$ is denoted by $\mathop{\text{\underline{$H$}}}\nolimits^i(\Cal M)$. As $\Hom_{\Sh(\Cal C)}(a\Bbb Z,?)=\Hom_{\Ps(\Cal C)}(\Bbb Z,?)\circ q$, a Grothendieck spectral sequence \begin{equation}\label{GSS.eq} E^{p,q}_2=H^{p}_{\Ps}(\Cal C,\mathop{\text{\underline{$H$}}}\nolimits^q(\Cal M))\Rightarrow H^{p+q}(\Cal C,\Cal M) \end{equation} is induced. \paragraph Let $\Cal O^\times$ denote the presheaf of abelian groups defined by $\Gamma(c,\Cal O^\times)=\Gamma(c,\Cal O_{\Cal C})^\times$. It is a sheaf. The following is due to de~Jong and others \cite[(20.7.1)]{SP}. \begin{lemma}\label{deJong.thm} There is an isomorphism $H^1(\Cal C,\Cal O^\times)\cong \Pic(\Cal C)$.
\end{lemma} \paragraph Let $(\Delta)$ be the full subcategory of the category of ordered sets whose object set $\ob((\Delta))$ is $\{[0],[1],[2],\ldots\}$, where $[n]=\{0<1<\cdots<n\}$. A simplicial $S$-scheme is a contravariant functor from $(\Delta)$ to the category of $S$-schemes $\underline{\Mathrm Sch}/S$, by definition. We denote by $(\Delta)^\mathrm{mon}$ the subcategory of $(\Delta)$ with the same objects, but with the morphisms restricted to injective maps. Let $X_\bullet$ be a $((\Delta)^{^\mathrm{mon}})^{\mathrm{op}}$-diagram of $S$-schemes, that is, a contravariant functor from $(\Delta)^{^\mathrm{mon}}$ to $\underline{\Mathrm Sch}/S$. Then there is a projective resolution \[ \Bbb L= \cdots \xrightarrow{\partial_2} L_1\Bbb Z_1 \xrightarrow{\partial_1} L_0\Bbb Z_0 \rightarrow \Bbb Z \rightarrow 0 \] of the constant presheaf $\Bbb Z$ on the Zariski site $\Zar(X_\bullet)$ of $X_\bullet$, see \cite[(4.3)]{ETI}, where $(?)_i:\Sh(\Zar(X_\bullet))\rightarrow\Sh(\Zar(X_i))$ is the restriction functor \cite[(4.5)]{ETI}, and $L_i$ is its left adjoint (see \cite[(5.1)]{ETI}). $\partial_i: L_i\Bbb Z_i\rightarrow L_{i-1}\Bbb Z_{i-1}$ is the alternating sum $u_0-u_1+u_2-\cdots+(-1)^iu_i$, where $u_j$ corresponds to the $j$th inclusion map \[ \Bbb Z_i \rightarrow (L_{i-1}\Bbb Z_{i-1})_i=\bigoplus_{j=0}^id_j^*(\Bbb Z_{i-1}) =\bigoplus_{j=0}^i \Bbb Z_i \] under the adjoint isomorphism of the adjoint pair $(L_i,(?)_i)$. The exactness of the complex is checked easily after restricting to each dimension by $(?)_i$. Indeed, the complex is nothing but \[ \cdots \rightarrow \bigoplus_{\phi\in\Hom([1],[i])}\Bbb Z\cdot \phi \rightarrow \bigoplus_{\phi\in\Hom([0],[i])}\Bbb Z\cdot \phi \rightarrow \bigoplus_{\phi\in\Hom(\emptyset,[i])}\Bbb Z\cdot \phi \rightarrow 0 \] when it is evaluated at $(i,U)$. This complex computes the reduced homology of the $i$-simplex, which vanishes since the simplex is contractible, so the complex is exact.
\begin{lemma} For any $\Cal N\in\Ps(\Zar(X_\bullet))$, $H^i_{\Ps}(\Zar(X_\bullet),\Cal N)$ is the $i$th cohomology group of the complex \[ 0 \rightarrow \Gamma(X_0,\Cal N_0) \xrightarrow{d_0-d_1} \Gamma(X_1,\Cal N_1) \xrightarrow{d_0-d_1+d_2} \Gamma(X_2,\Cal N_2) \rightarrow\cdots. \] \end{lemma} \begin{proof} Follows from the isomorphism \[ H^i_{\Ps}(\Zar(X_\bullet),\Cal N)= \Ext^i_{\Ps(\Zar(X_\bullet))}(\Bbb Z,\Cal N) = H^i(\Hom_{\Ps(\Zar(X_\bullet))}(\Bbb L,\Cal N)). \] \end{proof} \paragraph\label{group-cohomology.par} Let $S$ be a scheme, and $G$ an $S$-group scheme. Let $X$ be a $G$-scheme. We can associate a simplicial scheme $B_G(X)$ to $X$, see \cite[(29.2)]{ETI}. Its restriction to $(\Delta)^\mathrm{mon}$ is denoted by $B_G'(X)$. Consider $X_\bullet=B_G'(X)$. For $\Cal N\in \Ps(G,X)=\Ps(\Zar(B_G'(X)))$, we denote $H^i_{\Ps}(\Zar(B_G'(X)),\Cal N)$ by $H^i_{\Mathrm{alg}}(G,\Cal N)$. It is the $i$th cohomology group of the complex $\Hom_{\Ps(\Zar(B_G'(X)))} (\Bbb L,\Cal N)$: \[ 0\rightarrow \Gamma(X,\Cal N_0) \xrightarrow{d_0-d_1}\Gamma(G\times X,\Cal N_1) \xrightarrow{d_0-d_1+d_2}\Gamma(G\times G\times X,\Cal N_2) \rightarrow\cdots, \] where \[ d_i(g_{n-1},\ldots,g_{0},x)= \left\{ \begin{array}{ll} (g_{n-1},\ldots,g_1,g_0x) & (i=0) \\ (g_{n-1},\ldots,g_ig_{i-1},\ldots,g_0,x) & (0<i<n) \\ (g_{n-2},\ldots,g_0,x) & (i=n) \end{array} \right.. \] We denote the group of $i$-cocycles (resp.\ $i$-coboundaries) of the complex by $Z^i_{\Mathrm{alg}}(G,\Cal N)$ (resp.\ $B^i_{\Mathrm{alg}}(G,\Cal N)$). \paragraph Let $X$ be as above. Then we denote $\Pic(\Zar(B_G'(X)))$ by $\Pic(G,X)$, and call it the $G$-equivariant Picard group of $X$. By \cite[Lemma~9.4]{ETI}, the restriction $\Pic(G,X)=\Pic(\Zar(B_G'(X)))\rightarrow\Pic(\Zar(B_G^M(X)))$ is an isomorphism, where $\Delta_M$ is the full subcategory of $(\Delta)^\mathrm{mon}$ with the object set $\{[0],[1],[2]\}$, and $B_G^M(X)$ is the restriction of $B_G'(X)$ to $\Delta_M$.
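For $\Cal N=\Cal O^\times$ and $i=1$, the differentials above can be unwound explicitly (a routine reformulation in multiplicative notation, stated on points; added here only as a reading aid):

```latex
% Unwinding degree one of the complex for N = O^x (routine; multiplicative
% notation, written on points).
A $1$-cochain is an element $f\in\Gamma(G\times X,\Cal O^\times)$, and
$f\in Z^1_{\Mathrm{alg}}(G,\Cal O^\times)$ means
$(d_0^*f)(d_1^*f)^{-1}(d_2^*f)=1$, that is,
\[
f(g_1,g_0x)\,f(g_0,x)=f(g_1g_0,x)\qquad(g_0,g_1\in G,\ x\in X),
\]
while $f\in B^1_{\Mathrm{alg}}(G,\Cal O^\times)$ means
$f(g_0,x)=u(g_0x)\,u(x)^{-1}$ for some $u\in\Gamma(X,\Cal O^\times)$.
Such a cocycle is precisely a $G$-equivariant structure on the trivial
invertible sheaf $\Cal O_X$; this is the intuition behind the
identification of $H^1_{\Mathrm{alg}}(G,\Cal O^\times)$ with
$\Ker(\rho:\Pic(G,X)\rightarrow\Pic(X))$.
```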
\paragraph A $(G,\Cal O_X)$-module is a module sheaf over the ringed site $\Zar(B_G^M(X))$ by definition. Note that $\Pic(G,X)$ is the set of isomorphism classes of quasi-coherent $(G,\Cal O_X)$-modules which are invertible sheaves as $\Cal O_X$-modules. The addition of $\Pic(G,X)$ is given by $[\Cal L]+[\Cal L']=[\Cal L\otimes_{\Cal O_X}\Cal L']$. \paragraph If $X$ is a $G$-scheme, then there is an obvious homomorphism $\rho:\Pic(G,X)\rightarrow \Pic(X)$, forgetting the $G$-action. If $Y$ is an $S$-scheme with a trivial $G$-action, then $\tau:\Pic(Y)\rightarrow \Pic(G,Y)$ such that $\tau[\Cal L]=[\Cal L']$ is induced, where $\Cal L'$ is $\Cal L$ with the trivial $G$-action. So $\rho\circ\tau=\mathord{\Mathrm{id}}_{\Pic(Y)}$. If $\varphi:X\rightarrow Y$ is a $G$-morphism, then $\varphi^*:\Pic(G,Y)\rightarrow \Pic(G,X)$ given by $\varphi^*[\Cal L] =[\varphi^*\Cal L]$ is induced. By abuse of notation, the corresponding map $\Pic(Y)\rightarrow \Pic(X)$ (without the $G$-action) is also denoted by $\varphi^*$. Also, for a $G$-invariant morphism $\varphi:X\rightarrow Y$, $\varphi^*\circ\tau:\Pic(Y)\rightarrow \Pic(G,X)$ is also denoted by $\varphi^*$. \begin{lemma}\label{pic-injective.thm} Let $\varphi:X\rightarrow Y$ be a $G$-invariant morphism. If $\Cal O_Y\rightarrow(\varphi_*\Cal O_X)^G$ is an isomorphism, then $\varphi^*:\Pic(Y)\rightarrow \Pic(G,X)$ is injective. \end{lemma} \begin{proof} Note that the canonical map $\Cal L \rightarrow (\varphi_*\varphi^*\Cal L)^G$ is an isomorphism. Indeed, to check this, as the question is local, we may assume that $\Cal L\cong\Cal O_Y$. But this case is nothing but the assumption itself. So if $\varphi^*\Cal L\cong \Cal O_X$, then \[ \Cal L\cong (\varphi_*\varphi^*\Cal L)^G\cong(\varphi_*\Cal O_X)^G\cong\Cal O_Y, \] and the assertion follows immediately. \end{proof} \paragraph We denote the category of quasi-coherent $(G,\Cal O_X)$-modules by $\Qch(G,X)$. \begin{lemma}\label{pfb-pic.thm} Let $\varphi:X\rightarrow Y$ be a principal $G$-bundle.
Then $\varphi^*: \Qch(Y)\rightarrow\Qch(G,X)$ is an equivalence. The induced map $\varphi^*:\Pic(Y)\rightarrow\Pic(G,X)$ given by $\varphi^*[\Cal L]=[\varphi^*\Cal L]$ is an isomorphism of abelian groups. \end{lemma} \begin{proof} By \cite[(4.46)]{Vistoli} applied to the stack $\Cal F\rightarrow \underline{\Mathrm Sch}/S$ of quasi-coherent sheaves, $\varphi^*:\Qch(Y)\rightarrow\Qch(G,X)$ is an equivalence. This shows that $\varphi^*:\Pic(Y)\rightarrow\Pic(G,X)$ is bijective. \end{proof} \begin{proposition}\label{five-term.thm} There is an exact sequence \begin{multline*} 0\rightarrow H^1_{\Mathrm{alg}}(G,\Cal O^\times)\rightarrow \Pic(G,X) \xrightarrow{\rho} \Pic(X)^G \rightarrow\\ H^2_{\Mathrm{alg}}(G,\Cal O^\times) \rightarrow H^2(\Zar(B_G'(X)),\Cal O^\times), \end{multline*} where \[ \Pic(X)^G=\{[\Cal L]\in\Pic(X)\mid a^*\Cal L\cong p_2^*\Cal L\}, \] and $\rho$ is the map forgetting the $G$-action, as before. \end{proposition} \begin{proof} Consider the spectral sequence \[ E^{p,q}_2=H^p_{\Mathrm{alg}}(G,\mathop{\text{\underline{$H$}}}\nolimits^q(\Cal O^\times))\Rightarrow H^{p+q}(\Zar(B_G'(X)),\Cal O^\times) \] and its five-term exact sequence \[ 0\rightarrow E^{1,0}_2 \rightarrow E^1 \rightarrow E^{0,1}_2 \rightarrow E^{2,0}_2 \rightarrow E^2. \] The result follows from Lemma~\ref{deJong.thm} immediately. \end{proof} \section{Main result} \paragraph Let $k$ be a field, and $V$ and $W$ be $k$-vector spaces. Let $\alpha$ be an element of $V\otimes_k W$. Let $\Phi_V:V\otimes_k W\otimes_k W^*\rightarrow V$ and $\Phi_W:V\otimes_k W\otimes_k V^*\rightarrow W$ be the maps given by $\Phi_V(v\otimes w\otimes w^*)=(w^*(w))v$ and $\Phi_W(v\otimes w\otimes v^*)=(v^*(v))w$, respectively. Then $c_V(\alpha):=\{\Phi_V(\alpha\otimes w^*)\mid w^*\in W^*\}$ and $c_W(\alpha):=\{\Phi_W(\alpha\otimes v^*)\mid v^*\in V^*\}$ are subspaces of $V$ and $W$, respectively.
If $\alpha=\sum_{i=1}^n v_i\otimes w_i$ with $v_i\in V$ and $w_i\in W$, then $c_V(\alpha)$ is a subspace of the $k$-span $\langle v_1,\ldots,v_n\rangle$ of $v_1,\ldots,v_n$. If, moreover, $w_1,\ldots,w_n$ are linearly independent, $c_V(\alpha)$ agrees with $\langle v_1,\ldots,v_n\rangle$. If $\alpha=\sum_{i=1}^m\sum_{j=1}^nc_{ij}v_i\otimes w_j$ with $v_1,\ldots,v_m$ and $w_1,\ldots,w_n$ linearly independent and $c_{ij}\in k$, then $\dim c_V(\alpha)=\dim c_W(\alpha)=\rank (c_{ij})$. Note that $\alpha=v\otimes w\neq 0$ for some $v\in V$ and $w\in W$ if and only if $\dim c_V(\alpha)=\dim c_W(\alpha)=1$, and if this is the case, $v$ and $w$ are bases of the one-dimensional spaces $c_V(\alpha)$ and $c_W(\alpha)$, respectively. From this observation, we easily obtain the following two lemmas. \begin{lemma}\label{tensor.thm} Let $k$ be a field, and $V$ and $W$ be $k$-vector spaces. If $v,v'\in V$, $w,w'\in W$, and $v\otimes w=v'\otimes w'\neq 0$, then there exists a unique $c\in k^\times $ such that $v'=cv$ and $w'=c^{-1}w$. \qed \end{lemma} \begin{lemma}\label{tensor3.thm} Let $k$ be a field, and $V$ and $W$ be $k$-vector spaces. Let $k'$ be an extension field of $k$, and $V'=k'\otimes_k V$ and $W'=k'\otimes_k W$. Let $\alpha$ be an element of $V\otimes_k W$. If $1\otimes \alpha\in k'\otimes_k(V\otimes_k W)\cong V'\otimes_{k'}W'$ is of the form $\mu'\otimes \nu'$ for some $\mu'\in V'$ and $\nu'\in W'$, then there exist some $\mu\in V$ and $\nu\in W$ such that $\alpha=\mu\otimes\nu$. \qed \end{lemma} \begin{lemma}\label{unit-constant.thm} Let $k$ be a field, and $X$ be a reduced $k$-scheme. Assume that there is a $k$-scheme $Z$ of finite type and a dominating $k$-morphism $Z\rightarrow X$. Then there is a short exact sequence of the form \[ 1\rightarrow K^\times \xrightarrow{\iota}\Gamma(X,\Cal O_X)^\times \rightarrow \Bbb Z^r\rightarrow 0, \] where $K$ is the integral closure of $k$ in $\Gamma(X,\Cal O_X)$, and $\iota$ is the inclusion.
\end{lemma} \begin{proof} This is proved similarly to \cite[(4.12)]{Hashimoto}. \end{proof} \begin{lemma}\label{tensor2.thm} Let $k$ be a field, and $X$ and $Y$ be quasi-compact quasi-separated $k$-schemes. Then the canonical map $k[X]\otimes_k k[Y]\rightarrow k[X\times Y]$ is an isomorphism, where $k[X]=\Gamma(X,\Cal O_X)$ and so on. \end{lemma} \begin{proof} First, the case that both $X$ and $Y$ are affine is trivial. Second, assume that $X$ is affine. There is a finite affine open covering $Y=\bigcup_{i=1}^r Y_i$ of $Y$. As each $Y_i\cap Y_j$ is again quasi-compact by the assumption of the quasi-separated property, there is a finite affine open covering $Y_i\cap Y_j=\bigcup_k Y_{ijk}$. Then there is a commutative diagram \[ \xymatrix{ 0 \ar[r] & k[X]\otimes_k k[Y] \ar[d] \ar[r] & k[X]\otimes_k \prod_i k[Y_i] \ar[d] \ar[r] & k[X]\otimes_k \prod_{i,j,k} k[Y_{ijk}] \ar[d] \\ 0 \ar[r] & k[X\times Y] \ar[r] & \prod_i k[X \times Y_i] \ar[r] & \prod_{i,j,k} k[X\times Y_{ijk}] }. \] By the first step and the five lemma, the leftmost vertical arrow is an isomorphism. Lastly, consider the general case. Arguing as in the second step and using its result, we are done. \end{proof} In the rest of this section, we prove the following \begin{theorem}\label{main.thm} Let $k$ be a field, $G$ a smooth $k$-group scheme of finite type, and $X$ a reduced $G$-scheme which is quasi-compact and quasi-separated. Assume that there is a $k$-scheme $Z$ of finite type and a dominating $k$-morphism $Z\rightarrow X$. Then $H^1_{\Mathrm{alg}}(G,\Cal O^\times)=\Ker(\rho:\Pic(G,X)\rightarrow\Pic(X))$ is a finitely generated abelian group. \end{theorem} The proof is divided into several steps. \begin{proof} Step~1.\ The case that $G$ is a finite group, and $X=\Spec B$ is also finite over $k$. As $\Pic X$ is trivial, we have that $H^1_{\Mathrm{alg}}(G,\Cal O^\times)\cong H^1(G,B^\times) \cong\Pic(G,X)$. Let $N$ be the kernel of $G\rightarrow \text{\sl{GL}}(B)$. Step 1--1.
The case that $N$ is trivial. Then we claim that the canonical map $\varphi:X=\Spec B\rightarrow Y=\Spec B^G$ is a principal $G$-bundle. In order to check this, we may assume that $B^G$ is a field. Then $G$ acts on the set of primitive idempotents of $B$ transitively. So if $B=B_1\times\cdots\times B_r$ with each $B_i$ being a field, then $r=[G:H]$, where $H$ is the stabilizer of the unit element $e_1$ of $B_1$. It is also easy to check that $B^G=(B_1)^H$. So $\dim_{B^G}B=r\dim_{B^G}B_1=r\#H=\#G$. For $b\in B$, if $H_b$ is the stabilizer of $b$, then $b$ is a root of the separable polynomial $\phi(t)=\prod_{\sigma\in G/H_b}(t-\sigma b)$. This shows that $\varphi$ is finite \'etale. As $G$ is finite, it is also a geometric quotient. So $\Phi:G\times X\rightarrow X\times_Y X$ given by $\Phi(g,x)=(gx,x)$ is finite surjective. As $X\times_Y X$ is reduced, $B\otimes_{B^G} B\rightarrow k[G]\otimes_k B$ is injective. By dimension counting as vector spaces over $B^G$, we have that $\Phi$ is an isomorphism as claimed. By the claim and by Lemma~\ref{pfb-pic.thm}, $\Pic(G,B)\cong\Pic(B^G)=0$, as desired. Step 1--2. The case that $N=G$. That is, the case that $G$ acts on $B$ trivially. If $B\cong B_1\times\cdots\times B_r$, then $\Pic(G,B)\cong\prod_i\Pic (G,B_i)$. So we may assume that $B$ is a field. As $\Pic(G,B)\cong\Pic(B\otimes_kG,B)$, we may assume that $B=k$. Then $\Pic(G,k)$ is nothing but the group $\Cal X(G)$ of the isomorphism classes of one-dimensional representations of $G$. As $G$ is finite, $\Cal X(G)$ is finite, as desired. Step 1--3. The case that $N$ is arbitrary. By the exact sequence \[ 0\rightarrow E^{1,0}_2 \rightarrow E^1 \rightarrow E^{0,1}_2 \] of the Lyndon--Hochschild--Serre spectral sequence \[ E^{p,q}_2=H^p(G/N,H^q(N,B^\times))\Rightarrow H^{p+q}(G,B^\times), \] there is an exact sequence \[ 0\rightarrow H^1(G/N,B^\times)\rightarrow H^1(G,B^\times)\rightarrow H^1(N,B^\times). \] Now the assertion follows from Steps 1--1 and 1--2 immediately. Step~2.
The case that $G$ is a finite group scheme, and $X=\Spec B$ is also finite over $k$. Then there is a finite Galois extension $k'$ of $k$ such that $\Omega:=k'\otimes_k G$ is a finite group. That is to say, $\dim_k k[G]$ equals the number of $k'$-rational points of $G$. Thus $\Omega$ is identified with $\Hom_{k'_{\Mathrm{alg}}}(k'\otimes_k H,k')$, where $H:=k[G]$ is the coordinate Hopf algebra of $G$. Let $\Gamma:=\Gal(k'/k)$ be the Galois group. Note that $\Gamma$ acts on $k'\otimes_k H$ by $\gamma(\alpha\otimes h) =(\gamma \alpha)\otimes h$. $\Gamma$ acts on the group $\Omega$ by $(\gamma \omega)(\alpha\otimes h) =\alpha(\gamma(\omega(1\otimes h)))$. In other words, $\gamma \omega=\gamma\circ \omega \circ \gamma^{-1}$. Let $M$ be a $(G,B)$-module. Plainly, $k'\otimes_k M$ is a $(k'\otimes_k G,k'\otimes_k B)$-module. In other words, it is an $(\Omega,k'\otimes_k B)$-module. $\Omega$ acts on $k'\otimes_k B$ as $k'$-algebra automorphisms by $\omega(\alpha\otimes b)=\sum_{(b)}\omega(\alpha\otimes b_{(1)})\otimes b_{(0)}$, where we employ Sweedler's notation. $\Gamma$ also acts on $k'\otimes_k M$ by $\gamma(\alpha\otimes m) =(\gamma\alpha)\otimes m$. $\Omega$ acts on $k'\otimes_k M$ by $\omega(\alpha\otimes m)=\sum_{(m)}\omega(\alpha\otimes m_{(1)})\otimes m_{(0)}$. It is easy to see that \begin{multline*} (\gamma\omega)(\alpha\otimes m)= \sum_{(m)}(\gamma\omega)(\alpha\otimes m_{(1)})\otimes m_{(0)}\\ =\gamma(\sum_{(m)}\omega(\gamma^{-1}\alpha\otimes m_{(1)})\otimes m_{(0)}) =(\gamma\circ\omega\circ\gamma^{-1})(\alpha\otimes m). \end{multline*} Thus the actions of $\Gamma$ and $\Omega$ on $k'\otimes_k M$ together induce a $k$-linear action of the semidirect product $\Theta:= \Gamma \ltimes\Omega$ on $k'\otimes_k M$. Similarly, $\Theta$ acts on $k'\otimes_k B$ by $k$-algebra automorphisms. We also let $\Omega$ act trivially on $k'$, and thus $\Theta$ acts on $k'$ $k$-linearly.
Now $k'\otimes_k M$ is a $(\Theta,k'\otimes_k B)$-module in the sense that the action $k'\otimes_k B\otimes_k k'\otimes_k M\rightarrow k'\otimes_k M$ of $k'\otimes_k B$ on $k'\otimes_k M$ is $\Theta$-linear. Thus $M\mapsto k'\otimes_k M$ is a functor from the category $\Mod(G,B)$ of $(G,B)$-modules to the category $\Mod(\Theta,k'\otimes_k B)$ of $(\Theta,k'\otimes_k B)$-modules (note that the base field is $k$, and not $k'$). Now let $N$ be a $(\Theta,k'\otimes_k B)$-module. Then $N^\Gamma$ is a $B$-module, since $(k'\otimes_k B)^\Gamma=B$. As $N$ is also an $\Omega$-module, it is an $H$-comodule. Note that the coaction
\[
\omega_N:N\rightarrow N\otimes_k H
\]
is $\Gamma$-linear, where $\Gamma$ acts on $N\otimes_k H$ by $\gamma(n\otimes h)=\gamma n\otimes h$. Indeed, $\Omega$ acts on $N$ by $\omega n=\sum_{(n)}\omega(n_{(1)})n_{(0)}$ (here we identify $\Omega=\Hom_{k'_{\Mathrm{alg}}}(k'\otimes_k H,k')\cong \Hom_{k_{\Mathrm{alg}}}(H,k')$). As $\gamma((\gamma^{-1}\omega)(n))=\omega(\gamma n)$,
\[
\sum_{(n)}(\omega(n_{(1)}))(\gamma (n_{(0)}))= \sum_{(\gamma n)}\omega((\gamma n)_{(1)})(\gamma n)_{(0)}.
\]
As $\omega$ is arbitrary and $\Omega$ is a $k'$-basis of $\Hom_k(H,k')$, it follows that
\[
\sum_{(n)}\gamma(n_{(0)})\otimes n_{(1)} =\sum_{(\gamma n)}(\gamma n)_{(0)}\otimes (\gamma n)_{(1)}.
\]
That is, $\omega_N$ is $\Gamma$-linear. So $N^\Gamma$ is an $H$-subcomodule of $N$. As $B\otimes_k N \rightarrow N$ is $H$-linear, $B\otimes_k N^\Gamma\rightarrow N^\Gamma$ is also $H$-linear, as can be checked easily. Thus $N^\Gamma$ is a $(G,B)$-module. These functors $M\mapsto k'\otimes_k M$ and $N\mapsto N^\Gamma$ give an equivalence. Indeed, $k'\otimes_k X\rightarrow X$ is a principal $\Gamma$-bundle. So the maps $M\rightarrow(k'\otimes_k M)^\Gamma$ and $k'\otimes_k N^\Gamma \rightarrow N$ are isomorphisms of $B$-modules and $(\Gamma,k'\otimes_k B)$-modules, respectively. We show that the map $M\rightarrow(k'\otimes_k M)^\Gamma$ is also $G$-linear.
As $G$ acts on both $k$ and $k'$ trivially, the inclusion $k\hookrightarrow k'$ is $G$-linear. It follows that $M\rightarrow k'\otimes_k M$ is $G$-linear. As $(k'\otimes_k M)^\Gamma$ is a $G$-submodule of $k'\otimes_k M$, the map $M\rightarrow(k'\otimes_k M)^\Gamma$ is $G$-linear. Next, we show that $k'\otimes_k N^\Gamma\rightarrow N$ is $\Omega$-linear. This is equivalent to saying that it is $G$-linear. As the map is the composite $k'\otimes_k N^\Gamma\hookrightarrow k'\otimes_k N \rightarrow N$, this is trivial. Thus we have an equivalence of categories $\Mod(G,X)\cong \Mod(\Theta,k'\otimes_k X)$, mapping $\Cal M$ to $p_2^*\Cal M$, where $p_2:k'\otimes_k X\rightarrow X$ is the canonical projection. It is easy to see that $\Cal M$ is an invertible sheaf if and only if $p_2^*\Cal M$ is. Thus the equivalence induces an isomorphism $\Pic(G,X)\cong \Pic(\Theta,k'\otimes_k X)$. Thus changing $G$ to $\Theta$, $X$ to $k'\otimes_k X$, and without changing the base field $k$, we may and shall assume that $G$ is a finite group. But this case is done in Step~1. Step~3. The case that both $G=\Spec H$ and $X=\Spec B$ are affine. Let $H_0$ and $B_0$ be the integral closures of $k$ in $H$ and $B$, respectively. Then, $H_0\otimes_k H_0\otimes_k\cdots\otimes_k H_0$ is the integral closure of $k$ in $H\otimes_k H\otimes_k\cdots\otimes_k H$. To verify this, we may assume that $k$ is separably closed by \cite[(6.14.4)]{EGA-IV-2}. By \cite[(13.3)]{Borel}, the connected components of $G$ are isomorphic to each other. So letting $G^\circ=\Spec H_1$ be the identity component of $G$, it suffices to show that $k$ is integrally closed in $H_1^{\otimes n}$. But this is a consequence of the geometric integrality of $H_1$ \cite[(1.2)]{Borel}. Similarly, the integral closure of $k$ in $H^{\otimes n}\otimes_k B$ is $H_0^{\otimes n}\otimes_k B_0$. To verify this, we may assume that both $H_0$ and $B_0$ are fields.
Then $Q(H^{\otimes n})\otimes_k B_0$ is integrally closed in $Q(H^{\otimes n})\otimes_k B$ by \cite[(6.14.4)]{EGA-IV-2}. On the other hand, as $k\subset Q(H^{\otimes n})$ is a regular extension, $B_0\subset Q(H^{\otimes n}\otimes_k B_0)$ is integrally closed. As the image of the coproduct $\Delta(H_0)$ is contained in $H_0\otimes_k H_0$, it is easy to see that $H_0$ is a Hopf subalgebra of $H$. As $\omega_B(B_0)\subset B_0\otimes_k H_0$, $B_0$ is an $H_0$-comodule algebra which is also an $H$-subcomodule algebra of $B$. So when we set $G_0=\Spec H_0$ and $X_0=\Spec B_0$, then $G_0$ is a quotient group scheme of $G$ (it is \'etale over $k$), $G_0$ acts on $X_0$, and the diagram
\[
\xymatrix{ G\times X \ar[d] \ar[r]^a & X \ar[d] \\ G_0\times X_0 \ar[r]^a & X_0 }
\]
is commutative. Let $\Mod(\Bbb Z)$ be the category of abelian groups, and $\Cal F$ be its Serre subcategory consisting of finitely generated abelian groups. Set $\Cal A$ to be the quotient $\Mod(\Bbb Z)/\Cal F$. Then by Lemma~\ref{unit-constant.thm}, $\Hom_{\Ps(\Zar(B_{G_0}(X_0)))}(\Bbb L,\Cal O_{X_0}^\times)$ and $\Hom_{\Ps(\Zar(B_{G}(X)))}(\Bbb L,\Cal O_{X}^\times)$ are isomorphic as complexes in $\Cal A$. So the first cohomology of one is zero in $\Cal A$ if and only if the first cohomology of the other is zero in $\Cal A$. Thus replacing $G$ by $G_0$ and $X$ by $X_0$, we may assume that both $G$ and $X$ are finite. But this case is done in Step~2. Step~4. The general case. The product $G\times G\rightarrow G$ induces $k[G]\rightarrow k[G\times G]\cong k[G]\otimes_k k[G]$ by Lemma~\ref{tensor2.thm}. From this, it is easy to get the commutative Hopf algebra structure of $k[G]$. Set $G_1=\Spec k[G]$. Then the canonical map $G\rightarrow G_1$ is a homomorphism of group schemes. Similarly, the action $G\times X\rightarrow X$ induces $k[X]\rightarrow k[G\times X]\cong k[G]\otimes_k k[X]$. This makes $k[X]$ a (left) $k[G]$-comodule algebra. So letting $X_1=\Spec k[X]$, $G_1$ acts on $X_1$.
Now it is easy to see that $\Hom_{\Ps(\Zar(B_{G}(X)))}(\Bbb L,\Cal O_{X})$, which looks like \[ 0\rightarrow k[X]^\times\rightarrow k[G\times X]^\times\rightarrow k[G\times G\times X]^\times\rightarrow\cdots \] agrees with $\Hom_{\Ps(\Zar(B_{G_1}(X_1)))}(\Bbb L,\Cal O_{X_1})$. So replacing $G$ by $G_1$ and $X$ by $X_1$, we may assume that both $G$ and $X$ are affine. But this case is done in Step~3. This completes the proof of the theorem. \end{proof} As a reduced $k$-scheme of finite type is quasi-compact quasi-separated reduced and is dominated by some $k$-scheme of finite type, we immediately have \begin{corollary}\label{main-cor2.thm} Let $k$ be a field, $G$ a smooth $k$-group scheme of finite type, and $X$ a reduced $G$-scheme of finite type. Then $H^1_{\Mathrm{alg}}(G,\Cal O^\times)=\Ker(\rho:\Pic(G,X)\rightarrow\Pic(X))$ is a finitely generated abelian group. \qed \end{corollary} \begin{corollary}\label{main-cor.thm} Let $k$, $G$, $X$, and $Z\rightarrow X$ be as in Theorem~\ref{main.thm}. Let $\varphi:X\rightarrow Y$ be a $G$-invariant morphism. If $\Cal O_Y\rightarrow (\varphi_*\Cal O_X)^G$ is an isomorphism, then the kernel of the map $\varphi^*:\Pic(Y)\rightarrow\Pic(X)$ is a finitely generated abelian group. \end{corollary} \begin{proof} Consider the commutative diagram \[ \xymatrix{ 0 \ar[r]& \Ker \rho \ar[r] & \Pic(G,X) \ar[r]^\rho & \Pic(X) \\ 0 \ar[r]& \Ker \varphi^*\ar[r]& \Pic(Y) \ar[u]_{\varphi^*} \ar[r]^{\varphi^*} & \Pic(X) \ar[u]_{\mathord{\Mathrm{id}}} } \] with exact rows. Then by Lemma~\ref{pic-injective.thm}, the vertical arrow $\varphi^*:\Pic(Y)\rightarrow \Pic(G,X)$ is an injective map, which maps $\Ker \varphi^*$ injectively into $\Ker \rho$. As $\Ker\rho$ is finitely generated by the theorem, $\Ker\varphi^*$ is also finitely generated. \end{proof} \begin{lemma} Let $G$ be a $k$-group scheme of finite type. 
Then the character group
\[
\Cal X(G)=\{\chi\in k[G]^\times \mid \chi(g_1g_0)=\chi(g_1)\chi(g_0)\}
\]
is a finitely generated abelian group. \end{lemma} \begin{proof} Extending $k$, we may assume that $k$ is algebraically closed. As $\Cal X(G)=\Cal X(\Spec k[G])$, we may assume that $G$ is affine. If $G$ is finite, then $G$ has only finitely many irreducible representations, so $\Cal X(G)$ is also finite. If $G$ is $\Bbb G_a$, then $k[\Bbb G_a]^\times=k^\times$, and so $\Cal X(\Bbb G_a)$ is trivial. If $G=\Bbb G_m$, then $\Cal X(G)\cong \Bbb Z$, as is well-known. If $N$ is a closed normal subgroup of $G$, then
\[
0\rightarrow \Cal X(G/N)\rightarrow \Cal X(G)\rightarrow \Cal X(N)
\]
is exact. Letting $N=G^\circ$ be the identity component of $G$, we may assume that $G$ is either finite or connected. The finite case is already done, so we consider the case that $G$ is connected. Letting $N$ be the unipotent radical, we may assume that $G$ is either reductive or unipotent. If $G$ is unipotent, then $G$ has a normal subgroup $N$ which is isomorphic to $\Bbb G_a$ and $G/N$ is still unipotent. So this case is done by induction on the dimension. If $G$ is reductive, then $\Cal X(G)\cong \Cal X(G/[G,G])$, and $G/[G,G]$ is a torus. So we may assume that $G$ is a torus, and this case is also done by induction on the dimension. \end{proof} \begin{lemma}[cf.~{\cite[(1.8)]{Sweedler}}, {\cite[Theorem~2]{Rosenlicht}}] \label{units.thm} Let $k$ be a field, $X$ and $Y$ $k$-schemes such that $X$ is quasi-compact quasi-separated and $k[X]$ reduced, and $k$ is algebraically closed in $k[X]$. Assume one of the following. \begin{enumerate} \item[\bf 1] $Y$ is integral with the rational function field $\Cal O_{Y,\eta}$ being a regular extension of $k$, where $\eta$ is the generic point of $Y$. \item[\bf 2] $Y$ is quasi-compact quasi-separated, $B=\Gamma(Y,\Cal O_Y)$ is a domain such that the quotient field $Q(B)$ is a regular extension of $k$.
\end{enumerate} Then for any $\alpha\in\Gamma(X\times Y,\Cal O_{X\times Y})^\times$, there exist $\mu\in\Gamma(X,\Cal O_X)^\times $ and $\nu\in\Gamma(Y,\Cal O_Y)^\times$ such that $\alpha(x,y)=\mu(x)\nu(y)$ for $x\in X$ and $y\in Y$. \end{lemma} \begin{proof} We may and shall assume that $X$ is nonempty. First consider the case that $Y=\Spec B$ is affine. Then {\bf 1} and {\bf 2} say exactly the same thing. By Lemma~\ref{tensor2.thm}, $\Gamma(X\times Y,\Cal O_{X\times Y})=\Gamma(X,\Cal O_X)\otimes_k B$. Replacing $X$ by $\Spec \Gamma(X,\Cal O_X)$, we may assume that $X=\Spec A$ is affine. There are finitely generated $k$-subalgebras $A_0$ of $A$ and $B_0$ of $B$ such that $\alpha\in (A_0\otimes_k B_0)^\times$. We are to prove that there exist some $\mu\in A$ and $\nu\in B$ such that $\alpha=\mu\otimes\nu$. Replacing $A$ by $A_0$ and $B$ by $B_0$, we may assume that $A$ and $B$ are finitely generated over $k$. Let $k_{\Mathrm{sep}}$ be the separable closure of $k$. By \cite[(19.1)]{SH}, $k_{\Mathrm{sep}}$ is normal over $k$. Then by \cite[(6.14.4)]{EGA-IV-2}, $k_{\Mathrm{sep}}$ is integrally closed in $k_{\Mathrm{sep}}\otimes_k A$. Clearly, $k_{\Mathrm{sep}}\otimes_k A$ is reduced and finitely generated over $k_{\Mathrm{sep}}$. Moreover, $k_{\Mathrm{sep}}\otimes_k B$ is a finitely generated domain over $k_{\Mathrm{sep}}$, and $Q(k_{\Mathrm{sep}}\otimes_k B)$ is a regular extension field over $k_{\Mathrm{sep}}$. By Lemma~\ref{tensor3.thm}, replacing $k$ by its separable closure $k_{\Mathrm{sep}}$, we may assume that $k$ is separably closed. As $Y=\Spec B$ is geometrically integral over $k$, there is at least one $k$-algebra map $B\rightarrow k$ by \cite[(AG.13.3)]{Borel}. As in the proof of \cite[(1.8)]{Sweedler}, set $R=\bigotimes_{\frak U} B$, where $\frak U$ is an uncountable set. Then $R$ is an integral domain, and its field of fractions $K$ is a regular extension of $k$. By \cite[(19.1)]{SH}, $K$ is normal over $k$. 
By \cite[(6.14.4)]{EGA-IV-2}, $K$ is integrally closed in $A\otimes_k K$. By Lemma~\ref{unit-constant.thm}, $(A\otimes_k K)^\times/K^\times\cong \Bbb Z^n$ for some $n$. Arguing as in \cite[(1.8)]{Sweedler}, we have that $\alpha\in (A\otimes_k B)^\times$ is of the form $\mu\otimes\nu$ for $\mu\in A^\times$ and $\nu\in B^\times$, as desired. Next consider the general $Y$, and assume {\bf 1}. Let $Q=\Cal O_{Y,\eta}$. Then there exist some $\mu\in\Gamma(X,\Cal O_X)^\times$ and $\nu\in Q^\times$ such that $\alpha=\mu\otimes \nu$ in $\Gamma(X\times Z,\Cal O_{X\times Z})=\Gamma(X,\Cal O_X)\otimes_k Q$, where $Z=\Spec Q$. Also, for an affine open subset $U=\Spec C$ of $Y$, there exist some $\mu'\in\Gamma(X,\Cal O_X)^\times$ and $\nu'\in C^\times$ such that $\alpha=\mu'\otimes\nu'$ in $\Gamma(X,\Cal O_X)\otimes_k C$. So $\mu'\otimes\nu'=\alpha=\mu\otimes\nu$ in $\Gamma(X,\Cal O_X)\otimes_k Q$. By Lemma~\ref{tensor.thm}, there exists some $c\in k^\times$ such that $\mu'=c\mu$ and $\nu'=c^{-1}\nu$. This shows that $\nu,\nu^{-1}\in \bigcap_U \Gamma(U,\Cal O_Y)=\Gamma(Y,\Cal O_Y)$. So $\nu\in\Gamma(Y,\Cal O_Y)^\times$. Thus $\alpha(x,y)=\mu(x)\nu(y)$ holds, and this is what we wanted to prove. The case {\bf 2} reduces easily to the affine case, using Lemma~\ref{tensor2.thm}. \end{proof} The following corollary for the case that $k$ is algebraically closed goes back to Rosenlicht \cite[Theorem~3]{Rosenlicht}. \begin{corollary}\label{char.thm} Let $k$ be a field, and $G$ a smooth connected $k$-group scheme of finite type. If $\chi\in k[G]^\times$ and $\chi(e)=1$, where $e$ is the unit element, then $\chi\in \Cal X(G)$. \end{corollary} \begin{proof} By Lemma~\ref{units.thm} (applied with $X=Y=G$), we can write $\chi(g_1g_0)=\chi_1(g_1)\chi_0(g_0)$, and after normalizing by a constant, we may assume that $\chi_1(e)=\chi_0(e)=1$. Then letting $g_1=e$ or $g_0=e$, we have $\chi_1=\chi_0=\chi$. So $\chi\in\Cal X(G)$. \end{proof} \begin{lemma}\label{polynomial.thm} Let $k$ be a field, and $Y$ a $k$-scheme.
Let $X$ be a quasi-compact quasi-separated $k$-scheme such that $k[X]$ is reduced. Assume that either \begin{enumerate} \item[\bf 1] $\bar k\otimes_k Y$ is integral; or \item[\bf 2] $\bar k\otimes_k k[Y]$ is integral, and $Y$ is quasi-compact quasi-separated, \end{enumerate} where $\bar k$ is the algebraic closure of $k$. If the unit group of $\bar k\otimes_k k[Y]$ is ${\bar k}^\times$, then $k[X]^\times\rightarrow k[X\times Y]^\times$ is an isomorphism. \end{lemma} \begin{proof} Note that $X$ has only finitely many connected components $X_1,\ldots,X_r$. Replacing $X$ by each $X_i$, we may assume that $X$ is connected. It is easy to check that the integral closure $K$ of $k$ in $k[X]$ is an algebraic extension field of $k$. Applying Lemma~\ref{units.thm} to $K$ instead of $k$, and $K\otimes_k Y$ instead of $Y$, we see that for any unit $\alpha\in k[X\times Y]^\times$, there exist some $\mu\in K[X]^\times$ and $\nu\in K[K\otimes_k Y]^\times$ such that $\alpha(x,y)=\mu(x)\nu(y)$. By assumption, $K[K\otimes_k Y]^\times=K^\times$, and hence $k[X]^\times\rightarrow k[X\times Y]^\times$ is surjective. Injectivity is easy, and we are done. \end{proof} \begin{lemma} Let $k$ be a field, and $G$ a quasi-compact quasi-separated $k$-group scheme such that $k[G]$ is geometrically reduced over $k$. Let $X$ be a $G$-scheme. Assume that $\bar k\otimes_k X$ is integral, or $X$ is quasi-compact quasi-separated and $\bar k\otimes_k k[X]$ is integral. If the unit group of $\bar k\otimes_k k[X]$ is ${\bar k}^\times$, then $H^i_{\Mathrm{alg}}(G,\Cal O_X^\times)\cong H^i_{\Mathrm{alg}}(G,k^\times)$. In particular, $H^1_{\Mathrm{alg}}(G,\Cal O_X^\times)\cong \Cal X(G)$. \end{lemma} \begin{proof} By Lemma~\ref{polynomial.thm}, the map $k[G^n]^\times\rightarrow k[G^n\times X]^\times$ is an isomorphism. The lemma follows. For the last assertion, see the next lemma. \end{proof} \begin{lemma}[cf.~{\cite[(7.1)]{Dolgachev}}] Let $k$ be a field, $G$ a $k$-group scheme, and $X$ a $G$-scheme.
Assume that $k[G]^\times\rightarrow k[G\times X]^\times$ induced by the first projection is an isomorphism. Then $H^1_{\Mathrm{alg}}(G,\Cal O^\times)=\Ker(\rho:\Pic(G,X)\rightarrow\Pic(X))$ is isomorphic to $\Cal X(G)$. \end{lemma} \begin{proof} Note that $H^1_{\Mathrm{alg}}(G,\Cal O^\times)$ is $Z^1_{\Mathrm{alg}}(G,\Cal O^\times)/B^1_{\Mathrm{alg}}(G,\Cal O^\times)$ by (\ref{group-cohomology.par}), where
\[
Z^1_{\Mathrm{alg}}(G,\Cal O^\times)=\{\chi\in k[G\times X]^\times\mid \chi(g_1,g_0x)\chi(g_1g_0,x)^{-1}\chi(g_0,x)=1\}
\]
and
\[
B^1_{\Mathrm{alg}}(G,\Cal O^\times)=\{\phi(gx)\phi(x)^{-1}\mid \phi\in k[X]^\times\}.
\]
Note that any $\chi\in k[G\times X]^\times$ can be written as $\chi(g,x)=\chi_0(g)$ for a unique $\chi_0\in k[G]^\times$. Then as the map induced by the projection $k[G\times G]\rightarrow k[G\times G\times X]$ is injective, $\chi\in Z^1_{\Mathrm{alg}}(G,\Cal O^\times)$ if and only if $\chi_0\in\Cal X(G)$. On the other hand, as $k[G\times X]^\times=k[G]^\times$, $k[X]^\times=k[X]^\times\cap k[G]^\times=k^\times$. So $B^1_{\Mathrm{alg}}(G,\Cal O^\times)$ is trivial, and we are done. \end{proof} \begin{example} Let $G$ be a quasi-compact quasi-separated $k$-group scheme with $k[G]$ reduced, acting on the affine $n$-space $X=\Bbb A^n$. Then $H^1_{\Mathrm{alg}}(G,\Cal O_X^\times)\cong \Pic(G,X)\cong\Cal X(G)$. \end{example} \begin{proposition}\label{connected-cohomology.thm} Let $G$ be a connected smooth $k$-group scheme of finite type, and $X$ a quasi-compact quasi-separated $G$-scheme such that $k[X]$ is reduced and $k$ is integrally closed in $k[X]$. Then for any $n\geq 0$, any $\chi\in k[G^n\times X]^\times$ can be written as
\[
\chi(g_{n-1},\ldots,g_1,g_0,x)=\chi_{n-1}(g_{n-1})\cdots \chi_0(g_0)\alpha(x)
\]
with $\chi_{n-1},\ldots,\chi_0\in\Cal X(G)$ and $\alpha\in k[X]^\times$ uniquely.
Moreover, $Z^0_{\Mathrm{alg}}(G,\Cal O_X^\times)=(k[X]^G)^\times$, $B^0_{\Mathrm{alg}}(G,\Cal O_X^\times)= \{1\}$, and \begin{multline*} Z^n_{\Mathrm{alg}}(G,\Cal O_X^\times) = \{\chi\in k[G^n\times X]^\times\mid \forall g\in G,\,\forall x\in X\, \alpha(gx) =\chi_0(g)\alpha(x),\\ \chi_1=\chi_2,\ldots,\chi_{n-3}=\chi_{n-2},\; \chi_{n-1}=1\}=B^n_{\Mathrm{alg}}(G,\Cal O_X^\times) \end{multline*} if $n\geq 2$ is even. \[ Z^n_{\Mathrm{alg}}(G,\Cal O_X^\times)= \{\chi\in k[G^n\times X]^\times\mid \alpha=\chi_1=\chi_3=\cdots=\chi_{n-2}=1 \} \] if $n$ is odd. $B^n_{\Mathrm{alg}}(G,\Cal O_X^\times)=Z^n_{\Mathrm{alg}}(G,\Cal O_X^\times)$ if $n\geq 3$ is odd, and \[ B^1_{\Mathrm{alg}}(G,\Cal O_X^\times) =\{\chi\in k[G\times X]^\times\mid \alpha=1,\;\chi_0\in\Cal X(G,X)\}, \] where \[ \Cal X(G,X):=\{\chi\in\Cal X(G)\mid \exists\alpha\in k[X]^\times\, \forall g\in G\,x\in X\,\alpha(gx)=\chi(g)\alpha(x)\}. \] Thus \[ H^n_{\Mathrm{alg}}(G,\Cal O_X^\times) = \left\{ \begin{array}{ll} (k[X]^G)^\times & (n=0) \\ \Cal X(G)/\Cal X(G,X) & (n=1) \\ 0 & (n\geq 2) \end{array} \right.. \] \end{proposition} \begin{proof} Let $\partial^n$ be the boundary map in the complex in (\ref{group-cohomology.par}). 
Then $\partial^0(\alpha)(g_0,x)=\alpha(g_0x)\alpha(x)^{-1}$, \begin{multline*} \partial^n(\chi)(g_n,\ldots,g_0,x) = \chi(g_n,\ldots,g_1,g_0x)\chi(g_n,\ldots,g_2g_1,g_0,x) \cdots\\ \chi(g_ng_{n-1},g_{n-2},\ldots,g_0,x) \chi(g_n,\ldots,g_1g_0,x)^{-1} \chi(g_n,\ldots,g_3g_2,g_1,g_0,x)^{-1}\\ \cdots\chi(g_n,g_{n-1}g_{n-2},\ldots,g_0,x)^{-1} \chi(g_{n-1},g_{n-2},\ldots,g_0,x)^{-1}\\ =(\alpha(g_0x)\alpha(x)^{-1}\chi_0(g_0)^{-1})(\chi_1(g_2)\chi_2(g_2)^{-1})\\ \cdots(\chi_{n-3}(g_{n-2})\chi_{n-2}(g_{n-2})^{-1})\chi_{n-1}(g_n) \end{multline*} if $n\geq 2$ is even, and \begin{multline*} \partial^n(\chi)(g_n,\ldots,g_0,x) = \chi(g_n,\ldots,g_1,g_0x)\chi(g_n,\ldots,g_2g_1,g_0,x) \cdots\\ \chi(g_n,g_{n-1}g_{n-2},\ldots,g_0,x)\chi(g_{n-1},g_{n-2},\ldots,g_0,x) \chi(g_n,\ldots,g_1g_0,x)^{-1}\\ \chi(g_n,\ldots,g_3g_2,g_1,g_0,x)^{-1}\cdots \chi(g_ng_{n-1},g_{n-2},\ldots,g_0,x)^{-1}\\ =\alpha(g_0x)\chi_1(g_2g_1)\cdots\chi_{n-2}(g_{n-1}g_{n-2}) \end{multline*} if $n$ is odd. The results follow easily. \end{proof} \begin{corollary}[cf.~{\cite[Lemma~7.1]{Dolgachev}}] Let $G$ be a connected smooth $k$-group scheme of finite type, and $X$ a quasi-compact quasi-separated $G$-scheme such that $k[X]$ is reduced. Then $H^n_{\Mathrm{alg}}(G,\Cal O_X^\times)=0$ for $n\geq 2$. In particular, $\rho:\Pic(G,X)\rightarrow \Pic(X)^G$ is surjective. \end{corollary} \begin{proof} If $X$ is disconnected, then we can argue componentwise, and we may assume that $X$ is connected. Let $K$ be the integral closure of $k$ in $k[X]$. Then $K$ is a field. Replacing $k$ by $K$ and $G$ by $K\otimes_k G$, we may assume that $k$ is integrally closed in $k[X]$. Now invoke Proposition~\ref{connected-cohomology.thm}. \end{proof} \iffalse \begin{lemma}[cf.~{\cite[Lemma~7.2]{Dolgachev}}] Let $k$ be algebraically closed, $G$ be affine smooth and connected, and $X$ be reduced of finite type over $k$. 
Then
\[
\Pic(G)\times\Pic(X)\rightarrow \Pic(G\times X)
\]
given by $(\Cal L,\Cal M)\mapsto p_1^*\Cal L\otimes_{\Cal O_{G\times X}} p_2^*\Cal M$ is an isomorphism. Moreover, there is an exact sequence
\[
\Pic(G,X)\xrightarrow{\rho}\Pic(X)\rightarrow \Pic(G).
\]
$\Pic(G)$ is finitely generated, and hence $\Coker\rho$ is also finitely generated. \end{lemma} \begin{proof} Let $n=\dim G$. Then there is an affine open subset $D(f)\subset \Bbb A^n$ and an open immersion $D(f)\rightarrow G$. This is equivalent to saying that $G$ is a rational variety, and it follows from \cite[(II.1.9)]{Jantzen} for reductive $G$, and the general case is a consequence of \cite[(15.8)]{Borel}. Thus $G$ has a finite affine open covering $(U_i)$ such that each $U_i$ is an affine open subscheme of $\Bbb A^g$. Let $g_i$ be a $k$-rational point of $U_i$. Then for any \end{proof} \fi \section{Equivariant class group of a locally Krull scheme with a group action} \paragraph Let $R$ be an integral domain with $K=Q(R)$. An $R$-module $M$ is a {\em lattice} or {\em $R$-lattice} if $M$ is torsion-free and $M$ is isomorphic to an $R$-submodule of a finitely generated $R$-module. By definition, a finitely generated torsion-free $R$-module is a lattice. A submodule of a lattice is a lattice. The direct sum of two lattices is a lattice. \begin{lemma} Let $M$ be an $R$-module. Then the following are equivalent. \begin{enumerate} \item[\bf 1] $M$ is a lattice. \item[\bf 2] There exist a finitely generated $R$-free module $F$, an injective $R$-linear map $M\hookrightarrow F$, and $a\in R\setminus 0$ such that $aF\subset M$. \end{enumerate} \end{lemma} \begin{proof} {\bf 1$\Rightarrow$2}. By assumption, there is a finitely generated $R$-module $N$ and an injection $M\rightarrow N$. Replacing $N$ by $N/N_{\Mathrm{tor}}$ if necessary, we may assume that $N$ is torsion-free, where $N_{\Mathrm{tor}}$ is the torsion part of $N$. Take $m_1,\ldots,m_r\in M$ which form a $K$-basis of $K\otimes_R M$.
Take $n_{r+1},\ldots,n_s\in N$ such that $m_1,\ldots,m_r,n_{r+1},\ldots,n_s$ is a $K$-basis of $K\otimes_R N$. Let $F_0$ and $G_0$ be the $R$-spans of $m_1,\ldots,m_r$ and $m_1,\ldots,m_r,n_{r+1},\ldots,n_s$, respectively. As $N$ is finitely generated, there exists some $a\in R\setminus 0$ such that $N\subset a^{-1}G_0$. Then $F_0\subset M\subset a^{-1}G_0\cap(K\otimes_R F_0)=a^{-1}F_0$. Now set $F:=a^{-1}F_0$, and we are done. {\bf 2$\Rightarrow$1} is trivial. \end{proof} \begin{lemma}\label{basic2.thm} Let $M$ be an $R$-module. \begin{enumerate} \item[\bf 1] If $M$ is torsion-free (resp.\ a lattice) and $R'$ a flat $R$-algebra which is a domain, then $M'=R'\otimes_R M$ is a torsion-free $R'$-module (resp. an $R'$-lattice). \item[\bf 2] Let $A_1,\ldots,A_r$ be $R$-algebras which are domains. If $R\rightarrow \prod_i A_i$ is faithfully flat and each $A_i\otimes_R M$ is torsion-free as an $A_i$-module, then $M$ is torsion-free. \item[\bf 3] Let $\Spec R=\bigcup_{i\in I}\Spec A_i$ be an affine open covering, and assume that each $A_i\otimes_R M$ is a lattice. Then $M$ is a lattice. \end{enumerate} \end{lemma} \begin{proof} {\bf 1} If $M$ is torsion-free, then $M\rightarrow K\otimes_R M$ is injective. By flatness, $M'\rightarrow R'\otimes_R K\otimes_R M$ is injective. As $K\otimes_R M$ is a $K$-free module, $R'\otimes_R K\otimes_R M$ is an $R'\otimes_R K$-free module. Hence the localization $R'\otimes_R K\otimes_R M\rightarrow Q(R')\otimes_R M =Q(R')\otimes_{R'}M'$ is injective. Thus $M'$ is torsion-free. If $M$ is a lattice and $M\subset N$ with $N$ being $R$-finite, then $M'\subset N'$ with $N'$ being $R'$-finite, and $M'$ is an $R'$-lattice. {\bf 2} Let $K$ and $L_i$ be the fields of fractions of $R$ and $A_i$, respectively. Then the diagram
\[
\xymatrix{ M \ar[r]^j \ar[d]^{\delta} & K\otimes_R M \ar[d]^{\Delta} \\ \bigoplus_i A_i\otimes_R M \ar[r]^{r} & \bigoplus_i L_i\otimes_R M }
\]
is commutative.
As a faithfully flat algebra is pure \cite[Theorem~7.5, (i)]{CRT}, $\delta$ is injective. If each $A_i\otimes_R M$ is torsion-free, then $r$ is injective, and hence $j$ is injective, and $M$ is torsion-free. {\bf 3} There exist $m_{i1},\ldots,m_{ir_i}\in K\otimes_R M$ such that the $A_i$-span of $m_{i1},\ldots, m_{ir_i}$ contains $A_i\otimes_R M$. Let $N$ be the $R$-submodule spanned by all the $m_{ij}$. Set $V=(N+M)/N$. Then $A_i\otimes_R V=0$ for any $i$. As $\Spec R=\bigcup_i \Spec A_i$ is an open covering, we have that $V=0$. Hence $N\supset M$, and $M$ is a lattice. \end{proof} For an $R$-module $M$, set $M_{\Mathrm{tf}}:=M/M_{\Mathrm{tor}}$, where $M_{\Mathrm{tor}}$ is the torsion part of $M$. \begin{lemma}\label{Hom-lattice.thm} Let $M$ be an $R$-module such that $M_{\Mathrm{tf}}$ is isomorphic to a submodule of a finitely generated module. Let $N$ be a lattice. Then $\Hom_R(M,N)$ is a lattice. \end{lemma} \begin{proof} Replacing $M$ by $M_{\Mathrm{tf}}$, we may assume that $M$ is a lattice. Let $F$ be a finitely generated free $R$-module containing $N$. Then $\Hom_R(M,N)$ is a submodule of $\Hom_R(M,F)$. Replacing $N$ by $F$, we may assume that $N$ is finite free. As $\Hom_R(M,F)$ is a finite direct sum of $\Hom_R(M,R)$, we may assume that $N=R$. Take a finite free $R$-module $P$ and $a\in R\setminus 0$ such that $aP\subset M\subset P$. Then $a:P\rightarrow P$ induces a map $h:P\rightarrow M$ such that $C=\Coker h$ is annihilated by $a$. Then, dualizing, we get an injective map $M^*\rightarrow P^*$, since $C^*=0$. Thus $M^*=\Hom_R(M,R)$ is a lattice, as desired. \end{proof} \begin{lemma}\label{tensor-lattice.thm} Let $M$ and $N$ be $R$-modules. Assume that $M_{\Mathrm{tf}}$ and $N_{\Mathrm{tf}}$ are lattices. Then $(M\otimes_R N)_{\Mathrm{tf}}$ is a lattice. \end{lemma} \begin{proof} The images of $M_{\Mathrm{tor}}\otimes_R N$ and $M\otimes_R N_{\Mathrm{tor}}$ in $M\otimes_R N$ are torsion modules.
So replacing $M$ and $N$ by $M_{\Mathrm{tf}}$ and $N_{\Mathrm{tf}}$, we may assume that $M$ and $N$ are lattices. Take finite free $R$-modules $F$ and $P$ and $a,b\in R\setminus 0$ such that $aF\subset M\subset F$ and $bP\subset N\subset P$. Set $K$ to be the kernel of $M\otimes_R N\rightarrow F\otimes_R P$. Then $K_{ab}$ is zero. So $K$ is a torsion module, and hence $(M\otimes_R N)_{\Mathrm{tf}}=(M\otimes_R N)/K$ is a submodule of $F\otimes_R P$. \end{proof} \paragraph We say that an $R$-module $M$ is reflexive (or divisorial) if $M$ is a lattice, and the canonical map $M\rightarrow M^{**}$ is an isomorphism, see \cite{Fossum}. \begin{lemma}\label{lattice-flat.thm} Let $R$ be a Krull domain, $M$ an $R$-lattice, $F$ and $P$ flat $R$-modules. Then the canonical map
\[
\Hom_R(M,P)\otimes_R F\rightarrow \Hom_R(M,P\otimes_R F)
\]
is an isomorphism. \end{lemma} \begin{proof} It suffices to show that the two maps
\[
\Hom_R(M,R)\otimes_R(P\otimes_R F)\rightarrow \Hom_R(M,P\otimes_R F)
\]
and
\[
\Hom_R(M,R)\otimes_R P\rightarrow \Hom_R(M,P)
\]
are isomorphisms. So we may assume that $P=R$. Take a finitely generated $R$-free module $F'$ and $a\in R\setminus 0$ such that $aF'\subset M\subset F'$. Let $\Cal P$ be the set of minimal primes of $Ra$.
Then as submodules of $\Hom_K(K\otimes_R M,K\otimes_R F)$, \begin{multline*} \Hom_R(M,R)\otimes_R F = \Hom_R(M,R[1/a]\cap\bigcap_{P\in\Cal P}R_P)\otimes_R F =\\ (\Hom_R(M,R[1/a])\cap\bigcap_P\Hom_R(M,R_P))\otimes_R F =\\ (\Hom_R(M,R[1/a])\otimes_R F)\cap\bigcap_P(\Hom_R(M,R_P)\otimes_R F) =\\ \Hom_{R[1/a]}(R[1/a]\otimes_R M,R[1/a])\otimes_{R[1/a]}(R[1/a]\otimes_R F)\cap\\ \bigcap_P( \Hom_{R_P}(M_P,R_P)\otimes_{R_P}F_P) =\\ \Hom_{R[1/a]}(R[1/a]\otimes_R M,R[1/a]\otimes_R F) \cap \bigcap_P \Hom_{R_P}(M_P,F_P) =\\ \Hom_R(M,R[1/a]\otimes_R F)\cap \bigcap_P \Hom_R(M,R_P\otimes_R F) =\\ \Hom_R(M,(R[1/a]\otimes_R F)\cap\bigcap_P (R_P\otimes_R F)) =\\ \Hom_R(M,(R[1/a]\cap\bigcap_P R_P)\otimes_R F) =\Hom_R(M,R\otimes_R F)=\Hom_R(M,F), \end{multline*} since $R[1/a]\otimes_R M$ and $M_P$ are finite free modules over $R[1/a]$ and $R_P$, respectively. \end{proof} \begin{lemma}\label{Krull-descent.thm} Let $\varphi:A\rightarrow B$ be a faithfully flat ring homomorphism, and assume that $B$ is a finite direct product of \(Krull\) domains. Then $A$ is a finite direct product of \(Krull\) domains. \end{lemma} \begin{proof} Assume that $B$ is a finite direct product of domains. As $B$ has only finitely many minimal primes, $A$ has finitely many minimal primes $P_1,\ldots,P_r$. If $i\neq j$, then $P_i+P_j=A$. Indeed, if not, $P_i+P_j\subset\frak m$ for some maximal ideal $\frak m$ of $A$. Then, there is a prime ideal $M$ of $B$ such that $M\cap A=\frak m$. As $B_M$ is a domain and $A_{\frak m}$ is its subring, $A_{\frak m}$ is a domain. But this contradicts the assumption that $P_iA_{\frak m}$ and $P_jA_{\frak m}$ are different minimal primes of $A_{\frak m}$. Thus $A$ is a direct product of integral domains. Now we assume that $B$ is a finite direct product of Krull domains. Then $A$ is a finite direct product of domains. By localizing, we may assume that $A$ is a domain. If $b/a\in B\cap Q(A)$ with $a,b\in A$, then $b\in aB\cap A=aA$. 
So $b/a\in A$, and we have that $B\cap Q(A)=A$ in $Q(B)$. The rest is easy. \end{proof} \begin{lemma}\label{flat-reflexive.thm} Let $R$ be a Krull domain, and $M$ be an $R$-module. If $M$ is reflexive and $R'$ is a flat $R$-algebra which is a domain, then $M\otimes_R R'$ is reflexive. \end{lemma} \begin{proof} By Lemma~\ref{basic2.thm}, $M\otimes_R R'$ is a lattice. We have isomorphisms \begin{multline*} \Hom_R(\Hom_R(M,R),R)\otimes_R R'\cong \Hom_{R'}(\Hom_R(M,R)\otimes_R R',R')\cong\\ \Hom_{R'}(\Hom_{R'}(M\otimes_R R',R'),R'). \end{multline*} Let $\frak D:M\rightarrow M^{**}=\Hom_R(\Hom_R(M,R),R)$ be the canonical map. Then \[ M\otimes_R R'\xrightarrow{\frak D\otimes_R 1} M^{**}\otimes_R R' \] is an isomorphism if and only if \[ \frak D: M\otimes_R R'\rightarrow \Hom_{R'}(\Hom_{R'}(M\otimes_R R',R'),R') \] is an isomorphism, see \cite[Lemma~2.7]{HO}. \end{proof} \begin{lemma}[{\cite[Corollary~5.5]{Fossum}}]\label{intersection.thm} Let $R$ be a Krull domain with $K=Q(R)$, and $M$ an $R$-lattice. As submodules of $K\otimes_R M=\Hom_K(\Hom_K(K\otimes_R M,K),K)$, we have $M^{**}=\bigcap_{P\in X^1(R)}M_P$, where $X^1(R)$ is the set of height one primes of $R$. In particular, the following are equivalent. \begin{enumerate} \item[\bf 1] $M$ is reflexive; \item[\bf 2] $M=\bigcap_{P\in X^1(R)}M_P$ in $K\otimes_R M$. \end{enumerate} \end{lemma} \begin{proof} As submodules of $K\otimes_R M=\Hom_K(\Hom_K(K\otimes_R M,K),K)$, \begin{multline*} \Hom_R(\Hom_R(M,R),R)= \Hom_R(\Hom_R(M,R),\bigcap_P R_P) =\\ \bigcap_P \Hom_{R}(\Hom_R(M,R),R_P) = \bigcap_P \Hom_{R_P}(\Hom_R(M,R)_P,R_P) =\\ \bigcap_P \Hom_{R_P}(\Hom_{R_P}(M_P,R_P),R_P) = \bigcap_P M_P. \end{multline*} The assertions follow. \end{proof} \begin{corollary} Let $R$ be a Krull domain, and \[ 0\rightarrow L\rightarrow M\rightarrow N \] be an exact sequence of $R$-lattices. Then \[ 0\rightarrow L^{**}\rightarrow M^{**}\rightarrow N^{**} \] is also exact. 
\end{corollary} \begin{proof} This is because
\[
0\rightarrow \bigcap_{P\in X^1(R)}L_P \rightarrow \bigcap_{P\in X^1(R)}M_P \rightarrow \bigcap_{P\in X^1(R)}N_P
\]
is exact. \end{proof} \begin{corollary}\label{second-syzygy.thm} Let $R$ be a Krull domain, and
\[
0\rightarrow L\rightarrow M\rightarrow N
\]
be an exact sequence of $R$-modules. If $M$ is reflexive and $N$ is torsion-free, then $L$ is reflexive. \end{corollary} \begin{proof} As $L$ is a submodule of the lattice $M$, it is a lattice. Now apply the five lemma to the diagram
\[
\xymatrix{ 0 \ar[r] & \bigcap_{P\in X^1(R)}L_P \ar[r] & \bigcap_{P\in X^1(R)}M_P \ar[r] & \bigcap_{P\in X^1(R)}N_P\\ 0 \ar[r] & L \ar[r] \ar[u] & M \ar[r] \ar[u] & N \ar[u] }.
\]
\end{proof} \begin{lemma}\label{height-one.thm} Let $R$ be an integral domain. Let $R'$ be a faithfully flat $R$-algebra which is also a finite direct product of Krull domains. If $\frak p$ is a height-one prime ideal of $R$, then there exists some height-one prime ideal $P$ of $R'$ such that $P\cap R=\frak p$. \end{lemma} \begin{proof} $R$ is a Krull domain by Lemma~\ref{Krull-descent.thm}. By localizing, we may assume that $R$ is a DVR. Let $\pi$ be a generator of the maximal ideal $\frak p$ of $R$. As $\pi R'\neq R'$ by the faithful flatness, there exists some minimal prime $P$ of $\pi R'$. Then $P$ is of height one, since $R'$ is a finite direct product of Krull domains. The assertion follows. \end{proof} \begin{lemma}\label{reflexive-descent.thm} Let $R$ be an integral domain, and $M$ an $R$-module. Let $A_1,\ldots,A_r$ be $R$-algebras which are Krull domains such that $R'=\prod_{i=1}^r A_i$ is a faithfully flat $R$-algebra. If each $A_i\otimes_R M$ is a lattice \(resp.\ reflexive\), then $M$ is a lattice \(resp.\ reflexive\). \end{lemma} \begin{proof} Note that $R$ is a Krull domain by Lemma~\ref{Krull-descent.thm}. Assume that each $A_i\otimes_R M$ is a lattice. Then $M$ is torsion-free by Lemma~\ref{basic2.thm}.
Obviously, $K\otimes_R M$ is a finite dimensional $K$-vector space. Let $F$ be any finite free $R$-submodule of $K\otimes_R M$ such that $K\otimes_R F=K\otimes_R M$. Set $R'=\prod_i A_i$. Then in $Q(R')\otimes_{R}M$, there exists some nonzerodivisor $a$ of $R'$ such that $R'\otimes_R M \subset a^{-1}(R'\otimes_R F)$. Let $P_1,\ldots,P_s$ be the complete list of height one primes of $R'$ such that $a\in P_i$. Set $\frak p_i:=P_i\cap R$. For each height one prime $\frak p$ of $R$, choose height one prime ideal $P(\frak p)$ of $R'$ such that $P(\frak p)\cap R=\frak p$ (we can do so by Lemma~\ref{height-one.thm}). Let $v_{\frak p}$ be the normalized discrete valuation of $Q(R'_{P(\frak p)})$ corresponding to $R'_{P(\frak p)}$, and $n_{\frak p}$ be the ramification index. That is, $\frak pR'_{P(\frak p)}=(P(\frak p)R'_{P(\frak p)})^{n_{\frak p}}$. Take $b\in R\setminus 0$ such that $v_{\frak p}(b)\geq v_{\frak p}(a)$ for any $\frak p$. This is possible, since $v_{\frak p}(a)=0$ unless $\frak p=\frak p_i$ for some $i$. Then for any $\frak p$, \begin{multline*} M\subset (R'_{P(\frak p)}\otimes_R M)\cap (K\otimes_R M) \subset a^{-1}(R'_{P(\frak p)}\otimes_R F)\cap (K\otimes_R F) =\\ (a^{-1}R'_{P(\frak p)}\cap K)\otimes_R F \subset (\frak p R_{\frak p})^{-\ru{v_{\frak p}(a)/n_{\frak p}}}\otimes_R F \subset b^{-1}R_{\frak p}\otimes_R F. \end{multline*} Thus \[ M\subset \bigcap_{\frak p}b^{-1}(R_{\frak p}\otimes_R F)=b^{-1}F. \] This shows that $M$ is a lattice. Next assume that $A_i\otimes_R M$ is reflexive for any $i$. Then $\frak D\otimes 1_{A_i}:M\otimes_R A_i\rightarrow M^{**}\otimes_R A_i$ is an isomorphism for any $i$. So $\frak D:M\rightarrow M^{**}$ is an isomorphism. \end{proof} \begin{lemma}\label{Krull-Hom.thm} Let $R$ be a Krull domain, $M$ an $R$-lattice, $N$ a reflexive $R$-module, and $F$ and $P$ flat $R$-modules. Then the canonical map \[ \Hom_R(M,N\otimes_R P)\otimes_R F\rightarrow \Hom_R(M,N\otimes_R P\otimes_R F) \] is an isomorphism. 
\end{lemma} \begin{proof} Similar to Lemma~\ref{lattice-flat.thm}. Use Lemma~\ref{intersection.thm}. \end{proof} \begin{lemma}\label{Hom-reflexive.thm} Let $R$ be a Krull domain, $M$ an $R$-module such that $M_{\Mathrm{tf}}$ is a lattice, and $N$ a reflexive $R$-module. Then $\Hom_R(M,N)$ is reflexive. \end{lemma} \begin{proof} We may assume that $M$ is a lattice. By Lemma~\ref{Hom-lattice.thm}, $\Hom_R(M,N)$ is an $R$-lattice. By Lemma~\ref{Krull-Hom.thm}, \begin{multline*} \Hom_R(M,N)=\Hom_R(M,\bigcap_{P\in X^1(R)}N_P) =\bigcap_P\Hom_{R}(M,N_P)\\ =\bigcap_P\Hom_R(M,N)_P. \end{multline*} \end{proof} \begin{lemma}\label{associativity.thm} Let $R$ be a Krull domain, and $M$ and $N$ be $R$-modules such that $M_{\Mathrm{tf}}$ and $N_{\Mathrm{tf}}$ are lattices. Then the canonical map \[ (M\otimes_R N)^{**}\rightarrow (M^{**}\otimes_R N)^{**} \] is an isomorphism. \end{lemma} \begin{proof} Replacing $M$ and $N$ by $M_{\Mathrm{tf}}$ and $N_{\Mathrm{tf}}$, respectively, we may assume that $M$ and $N$ are lattices. By Lemma~\ref{tensor-lattice.thm}, Lemma~\ref{intersection.thm}, and Lemma~\ref{Hom-reflexive.thm}, it suffices to show that for any height one prime $P$ of $R$, \[ ((M\otimes_R N)_{\Mathrm{tf}})_P\rightarrow ((M^{**}\otimes_R N)_{\Mathrm{tf}})_P \] is an isomorphism. This is equivalent to say that \[ M_P\otimes_{R_P}N_P\rightarrow (M^{**})_P\otimes_{R_P}N_P \] is an isomorphism. This is trivial. \end{proof} \paragraph\label{lattice.par} Let $X$ be a scheme. We say that $X$ is locally integral (resp.\ locally Krull) if there exists some affine open covering $X=\bigcup_{i\in I}\Spec A_i$ with each $A_i$ a domain (resp.\ Krull domain). A locally Krull scheme is locally integral. A locally integral scheme is a disjoint union $X=\bigcup_{j\in J}X_j$ with each $X_j$ an integral closed open subscheme. 
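\begin{remark}
The following classical example, included here only as an illustration, shows how the criterion of Lemma~\ref{intersection.thm} can fail. Let $R=k[x,y]$ be a polynomial ring over a field and $M=(x,y)\subset R$. Since the ideal $(x,y)$ has height two, no height one prime of $R$ contains it, and so $M_P=R_P$ for every $P\in X^1(R)$. Hence
\[
M^{**}=\bigcap_{P\in X^1(R)}M_P=\bigcap_{P\in X^1(R)}R_P=R\neq M.
\]
Thus the rank-one lattice $M$ is not reflexive, while its double dual $M^{**}=R$ is.
\end{remark}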
If $X$ is locally Krull and $U=\Spec A$ is an affine open subset with $A$ a domain, then $A$ is a Krull domain, as can be seen easily from Lemma~\ref{Krull-descent.thm}. \paragraph Let $X$ be a locally integral scheme. An $\Cal O_X$-module $\Cal M$ is called a lattice or $\Cal O_X$-lattice if $\Cal M$ is quasi-coherent, and for any affine open subset $U=\Spec A$ of $X$ with $A$ an integral domain, $\Gamma(U,\Cal M)$ is an $A$-lattice. This is equivalent to saying that there exists some affine open covering $X=\bigcup_{i\in I}U_i$ such that each $A_i=\Gamma(U_i,\Cal O_X)$ is an integral domain and $\Gamma(U_i,\Cal M)$ is an $A_i$-lattice. An $\Cal O_X$-module $\Cal M$ is said to be {\em reflexive} if $\Cal M$ is an $\Cal O_X$-lattice and the canonical map $\Cal M\rightarrow\Cal M^{**}$ is an isomorphism. For a quasi-coherent $\Cal O_X$-module $\Cal M$, set $\Cal M_{\Mathrm{tf}}=\Cal M/\Cal M_{\Mathrm{tor}}$, where $\Cal M_{\Mathrm{tor}}$ is the torsion part of $\Cal M$. A lattice $\Cal M$ is said to be of rank $n$ if for any point $\xi$ of $X$ such that $\Cal O_{X,\xi}$ is a field, $\Cal M_\xi$ is an $n$-dimensional $\Cal O_{X,\xi}$-vector space. \begin{lemma} Let $X$ be a locally Krull scheme, and $\Cal M$, $\Cal N$, $\Cal F$, and $\Cal G$ be quasi-coherent $\Cal O_X$-modules. Assume that $\Cal M_{\Mathrm{tf}}$ is a lattice, $\Cal N$ is reflexive, and $\Cal F$ and $\Cal G$ are flat. Then \begin{enumerate} \item[\bf 1] For any flat morphism $\varphi:Y\rightarrow X$, the canonical map \[ P: \varphi^*\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N\otimes_{\Cal O_X} \Cal F) \rightarrow \mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_Y}(\varphi^*\Cal M,\varphi^*\Cal N\otimes_{\Cal O_Y}\varphi^*\Cal F) \] is an isomorphism. \item[\bf 2] $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N\otimes_{\Cal O_X} \Cal F)$ is quasi-coherent. 
\item[\bf 3] The canonical map \[ \mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N\otimes_{\Cal O_X} \Cal F)\otimes_{\Cal O_X}\Cal G \rightarrow \mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N\otimes_{\Cal O_X} \Cal F\otimes_{\Cal O_X}\Cal G) \] is an isomorphism. \end{enumerate} \end{lemma} \begin{proof} Obvious by Lemma~\ref{Krull-Hom.thm}. \end{proof} \begin{lemma} Let $G$ be a flat $S$-group scheme, $X$ be a $G$-scheme, and $\Cal M$ and $\Cal N$ be quasi-coherent $(G,\Cal O_X)$-modules. If for any flat $S$-morphism $\varphi:Y\rightarrow X$, the canonical map \[ P: \varphi^*\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N) \rightarrow \mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_Y}(\varphi^*\Cal M,\varphi^*\Cal N) \] is an isomorphism, then the $(G,\Cal O_X)$-module $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N)$ is quasi-coherent. \end{lemma} \begin{proof} Clearly, $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N)=\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_{B_G^M(X)}}(\Cal M,\Cal N)_{[0]}$ is quasi-coherent. By \cite[(6.37)]{ETI}, \[ \alpha_\phi:(B_G^M(X))_\phi^*\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_{B_G^M(X)}}(\Cal M,\Cal N)_{[0]} \rightarrow \mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_{B_G^M(X)}}(\Cal M,\Cal N)_j \] is an isomorphism for any $j\in\ob(\Delta_M)=\{[0],[1],[2]\}$ and $\phi:[0]\rightarrow j$. So $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_{B_G^M(X)}}(\Cal M,\Cal N)_j$ is quasi-coherent for every $j$, and hence $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_{B_G^M(X)}}(\Cal M,\Cal N)$ is locally quasi-coherent (this is the precise meaning of saying that $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N)$ is locally quasi-coherent). 
On the other hand, $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_{B_G^M(X)}}(\Cal M,\Cal N)$ is equivariant by \cite[(7.6)]{ETI}. By \cite[(7.3)]{ETI}, $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N)$, or better, $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_{B_G^M(X)}}(\Cal M,\Cal N)$ is quasi-coherent. \end{proof} \begin{corollary}\label{hom.thm} Let $G$ and $X$ be as above, and $\Cal M$, $\Cal N$ and $\Cal P$ be quasi-coherent $(G,\Cal O_X)$-modules. Assume that $X$ is locally Krull, $\Cal M_{\Mathrm{tf}}$ is a lattice, $\Cal N$ reflexive, and $\Cal P$ flat. Then the $(G,\Cal O_X)$-module $\mathop{\text{\underline{$\Mathrm Hom$}}}\nolimits_{\Cal O_X}(\Cal M,\Cal N\otimes_{\Cal O_X}\Cal P)$ is quasi-coherent. \qed \end{corollary} \paragraph\label{equivariant-class.par} Let $Y$ be a locally Krull scheme. We denote the set of isomorphism classes of rank-one reflexive sheaves by $\Cl(Y)$, and call it the class group of $Y$. Let $G$ be a flat $S$-group scheme, $X$ be a $G$-scheme which is locally Krull. A quasi-coherent $(G,\Cal O_X)$-module which is reflexive (of rank $n$) as an $\Cal O_X$-module is simply called a reflexive $(G,\Cal O_X)$-module (of rank $n$). We denote the set of isomorphism classes of rank-one reflexive $(G,\Cal O_X)$-modules by $\Cl(G,X)$, and call it the $G$-equivariant class group of $X$. There is an obvious map $\alpha:\Cl(G,X)\rightarrow \Cl(X)$, forgetting the $G$-action. By Lemma~\ref{tensor-lattice.thm}, Lemma~\ref{associativity.thm} and Corollary~\ref{hom.thm}, defining \[ [\Cal M]+[\Cal N]=[(\Cal M\otimes_{\Cal O_X}\Cal N)^{**}], \] $\Cl(G,X)$ and $\Cl(Y)$ are abelian (additive) groups, and $\alpha$ is a homomorphism. Note that $\Pic(G,X)$ is a subgroup of $\Cl(G,X)$, and $\Pic(Y)$ is a subgroup of $\Cl(Y)$. Note that $\Ker\alpha=\Ker\rho$, where $\rho:\Pic(G,X)\rightarrow\Pic(X)$ is the map forgetting the $G$-action, as before. 
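\begin{remark}
As a classical illustration of the group law above (this example is not used in the sequel), let $G$ be trivial and $X=\Spec R$ with $R=k[x,y,z]/(xy-z^2)$, so that $\Cl(G,X)=\Cl(X)$. The prime $\frak p=(x,z)$ has height one and is not principal. In $R_{\frak p}$ we have $x=z^2/y$ with $y$ a unit, and $\frak p$ is the only height one prime containing $x$, so $(\frak p\otimes_R\frak p)^{**}=xR$. Hence $[\frak p]+[\frak p]=[xR]=0$ in $\Cl(X)$, and in fact $\Cl(X)$ is cyclic of order two, generated by $[\frak p]$.
\end{remark}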
\begin{lemma}\label{reflexive-ascent.thm} Let $\varphi:X\rightarrow Y$ be a flat morphism of schemes. Assume that $X$ and $Y$ are locally integral. If $\Cal M$ is an $\Cal O_Y$-lattice, then $\varphi^*\Cal M$ is an $\Cal O_X$-lattice. If $Y$ is locally Krull and $\Cal M$ is a reflexive $\Cal O_Y$-module, then $\varphi^*\Cal M$ is reflexive. \end{lemma} \begin{proof} Follows from Lemma~\ref{basic2.thm} and Lemma~\ref{flat-reflexive.thm}. \end{proof} \begin{lemma}\label{reflexive-descent2.thm} Let $\varphi:X\rightarrow Y$ be an fpqc morphism of schemes, and assume that $X$ is locally Krull. Then $Y$ is locally Krull. If $\Cal M$ is a quasi-coherent $\Cal O_Y$-module such that $\varphi^*\Cal M$ is an $\Cal O_X$-lattice \(resp.\ reflexive $\Cal O_X$-module\), then $\Cal M$ is an $\Cal O_Y$-lattice \(resp.\ reflexive $\Cal O_Y$-module\). \end{lemma} \begin{proof} The first assertion is an immediate consequence of Lemma~\ref{Krull-descent.thm}. The second assertion follows from Lemma~\ref{reflexive-descent.thm}. \end{proof} \paragraph Let $G$ be a flat $S$-group scheme, and $X$ be a locally Krull $G$-scheme. We denote the category of reflexive $(G,\Cal O_X)$-modules by $\Ref(G,X)$. Its full subcategory consisting of reflexive $(G,\Cal O_X)$-modules of rank $n$ is denoted by $\Ref_n(G,X)$. If we do not consider a $G$-action, $\Ref(X)$ and $\Ref_n(X)$ are defined similarly. \begin{lemma}\label{pfb-fpqc.thm} Let $G$ be an $S$-group scheme, $\varphi:X\rightarrow Y$ be a principal $G$-bundle such that the second projection $G\times X\rightarrow X$ is flat. Then $\varphi$ is fpqc. \end{lemma} \begin{proof} There is an fpqc map $h:Y'\rightarrow Y$ such that the base change $X'\rightarrow Y'$ is a trivial $G$-bundle. Then $h$ is the composite of \[ Y'\xrightarrow{e}G\times Y'\cong X'=Y'\times_Y X\xrightarrow{p_2} X\xrightarrow {\varphi} Y, \] and it factors through $X$. As $G\times X\rightarrow X$ is flat, $G\times Y'\rightarrow Y'$ is also flat. 
Thus $X'\rightarrow Y'$ is flat, and hence so is $\varphi:X\rightarrow Y$ by descent. Next, take a quasi-compact open subset $U$ of $Y$. There exists some quasi-compact open subset $V$ of $Y'$ such that $h(V)=U$. As the image $W$ of $V$ in $X$ is quasi-compact, there exists some quasi-compact open subset $W'$ of $\varphi^{-1}(U)$ such that $W\subset W'$. Then $U=\varphi(W)\subset \varphi(W')\subset \varphi(\varphi^{-1}(U))\subset U$, and hence $\varphi(W')=U$. This shows that $\varphi$ is fpqc. \end{proof} \begin{lemma}\label{codim-two-isom.thm} Let $X$ be a locally Krull scheme. Let $U$ be its open subset. Let $\varphi:U\hookrightarrow X$ be the inclusion. Assume that $\codim_X(X\setminus U)\geq 2$. Then $\varphi^*:\Ref_n(X)\rightarrow\Ref_n(U)$ is an equivalence, and $\varphi_*:\Ref_n(U)\rightarrow\Ref_n(X)$ is its quasi-inverse. \end{lemma} \begin{proof} By Lemma~\ref{reflexive-ascent.thm}, $\varphi^*:\Ref_n(X)\rightarrow\Ref_n(U)$ is well-defined. Thus it suffices to show that $\varphi_*:\Ref(U)\rightarrow \Ref(X)$ is well-defined, and is a quasi-inverse to $\varphi^*$. That is, for $\Cal N\in\Ref(U)$, $\varphi_*\Cal N\in\Ref(X)$, and for $\Cal M\in\Ref(X)$, the canonical map $\Cal M\rightarrow \varphi_*\varphi^*\Cal M$ is an isomorphism. The question is local on $X$, and we may assume that $X=\Spec A$ is affine and integral. Then $U=X\setminus V(I)$ for some ideal $I$ of $A$ such that $\height I \geq 2$, where $V(I)=\{P\in\Spec A\mid P\supset I\}$. We can take a finitely generated ideal $J\subset I$ such that $\height J \geq 2$. Set $W=X\setminus V(J)$. It suffices to show the assertion in question for $W\rightarrow U$ and $W\rightarrow X$. So replacing $U$ by $W$ (and changing $X$), we may assume that the open immersion $U\rightarrow X$ is quasi-compact. Replacing $X$ again if necessary, we may assume that $X=\Spec A$ is affine and integral. Now $\varphi$ is concentrated, and hence $\varphi_*\Cal N$ is quasi-coherent. Let $\eta$ be the generic point of $X$. 
Let $U=\bigcup_{i=1}^r U_i$, where $U_i=\Spec A[1/f_i]$ with $f_i\in A\setminus 0$. Then $\Gamma(U_i,\Cal N)\subset M_i\subset \Cal N_\eta$ for some finitely generated $A[1/f_i]$-module $M_i$. Let $m_{i1},\ldots,m_{is_i}\in M_i$ be generators of $M_i$. Let $M$ be the $A$-span of $\{m_{ij}\mid 1\leq i\leq r,\;1\leq j\leq s_i\}$, and $\Cal M$ the associated sheaf of the $A$-module $M^{**}$ on $X=\Spec A$. As $\Cal N_P\subset \Cal M_P$ for each height one prime ideal $P$ of $A$, \[ \Gamma(X,\varphi_*\Cal N)=\Gamma(U,\Cal N)=\bigcap_{\height P=1,\;P\in X}\Cal N_P \subset \bigcap_{\height P=1,\;P\in X}\Cal M_P=\Gamma(X,\Cal M), \] and $\varphi_*\Cal N\subset \Cal M$. Thus $\varphi_*\Cal N$ is a lattice. Set $N=\Gamma(X,\varphi_*\Cal N)$. It remains to show that $N$ is a reflexive $A$-module. This is easy, since \[ N=\Gamma(U,\Cal N)=\bigcap_{\height P=1,\;P\in U}\Cal N_P =\bigcap_{\height P=1}(\varphi_*\Cal N)_P =\bigcap_{\height P=1} N_P \] by the reflexive property of $\Cal N$ and the quasi-coherence of $\varphi_*\Cal N$. Finally, we prove that for $\Cal M\in\Ref_n(X)$, $\Cal M\rightarrow \varphi_*\varphi^*\Cal M$ is an isomorphism. As this is an $\Cal O_X$-linear map between quasi-coherent $\Cal O_X$-modules, it suffices to show that $\Gamma(X,\Cal M)\rightarrow \Gamma(X,\varphi_*\varphi^*\Cal M)$ is an isomorphism. By Lemma~\ref{intersection.thm}, \[ \Gamma(X,\Cal M)=\bigcap_{\height P=1,\,P\in X}\Cal M_P\quad\text{and}\quad \Gamma(X,\varphi_*\varphi^*\Cal M)=\bigcap_{\height P=1,\,P\in U}\Cal M_P, \] so they are equal, and we are done. \end{proof} \iffalse \begin{lemma} Let $\Cal S$ be a category, and $((?)^*,(?)_*)$ be an adjoint pair of almost-pseudofunctors on $\Cal S$ {\rm\cite[(1.16)]{ETI}}. Let \[\sigma= \xymatrix{ X \ar[r]^f \ar[d]^\psi & Y \ar[d]^\varphi \\ X'\ar[r]^{f'} & Y' } \] be a commutative diagram in $\Cal S$. 
If $\varphi$ and $\psi$ are isomorphisms, then $\sigma$ is cartesian, and Lipman's theta {\rm\cite[(1.21)]{ETI}} $\theta=\theta(\sigma):\varphi^*(f')_*\rightarrow f_*\psi^*$ is an isomorphism. \end{lemma} \begin{proof} The fact that $\sigma$ is cartesian is checked easily. By \cite[(1.23)]{ETI}, the composite \[ f'_*\cong (\varphi^{-1}\varphi)^* f'_*\xrightarrow{d^{-1}} (\varphi^{-1})^*\varphi^*f'_*\xrightarrow{\theta} (\varphi^{-1})^*f_*\psi^*\xrightarrow{\theta} f'_*(\psi^{-1})^*\psi^*\xrightarrow{d} f'_*(\psi^{-1}\psi)^*\cong f'_* \] is the identity. It follows that $\theta(\sigma)$ is a split mono. On the other hand, by the same lemma, \[ f_*\cong(\varphi\varphi^{-1})^*f_*\xrightarrow{d^{-1}} \varphi^*(\varphi^{-1})^*f_*\xrightarrow{\theta} \varphi^*f'_*(\psi^{-1})^*\xrightarrow{\theta} f_*\psi^*(\psi^{-1})^*\xrightarrow{d} f_*(\psi\psi^{-1})^*\cong f_* \] is also the identity. So $\theta(\sigma)$ is a split epi. Hence $\theta(\sigma)$ is an isomorphism. \end{proof} \fi \begin{lemma}\label{codim-two.thm} Let $Y$ be a quasi-compact locally Krull scheme, and $U$ its open subset. Then there exists some quasi-compact open subset $V$ of $U$ such that $\codim_U(U\setminus V)\geq 2$. \end{lemma} \begin{proof} Let $Y=\bigcup_i Y_i$ with $Y_i$ a spec of a Krull domain. Then replacing $Y$ with $Y_i$ and $U$ with $Y_i\cap U$, we may assume that $Y=\Spec A$ with $A$ a Krull domain. Then there is a radical ideal $I$ of $A$ such that $U=D(I):=Y\setminus V(I)$. Take $a\in I\setminus 0$. Let $\Min(Aa)\setminus V(I)=\{P_1,\ldots,P_r\}$. Take $b_i\in I\setminus P_i$, and set $J=(a,b_1,\ldots,b_r)$. Then $\Min(J)\cap X^1(A)\subset V(I)$. So letting $V=D(J)$, $\codim_U(U\setminus V)\geq 2$. As $J$ is finitely generated, $V$ is quasi-compact. \end{proof} \begin{lemma}\label{locally-Krull-qc.thm} Let $G$ be a flat $S$-group scheme. Let $\varphi:U\rightarrow Y$ be a quasi-separated $G$-morphism. 
Assume that there exists a factorization $\varphi=\psi h$ such that $h:U\rightarrow X$ is an open immersion, $\psi:X\rightarrow Y$ is quasi-compact, and $X$ is locally Krull \(we do not require that $G$ acts on $X$\). Then for any reflexive $(G,\Cal O_U)$-module $\Cal M$, $\varphi_*\Cal M$ is a quasi-coherent $(G,\Cal O_Y)$-module. \end{lemma} \begin{proof} Let $\Cal M_{[0]}$ be the associated $\Cal O_U$-module of $\Cal M$. We show that $\varphi_*\Cal M_{[0]}$ is quasi-coherent. In order to do so, we may assume that $G$ is trivial. The question is then local on $Y$, so we may assume that $Y$ is affine. Now by Lemma~\ref{codim-two.thm}, we can take a quasi-compact open subscheme $V$ of $U$ such that $\codim_U(U\setminus V)\geq 2$. Let $i:V\rightarrow U$ be the inclusion. Then $\Cal M\cong i_*i^*\Cal M$ by Lemma~\ref{codim-two-isom.thm}. So we may assume that $U$ itself is quasi-compact. Then $\varphi$ is quasi-compact quasi-separated, and hence $\varphi_*\Cal M$ is quasi-coherent by \cite[(9.2.1)]{EGA-I}, as required. Next we show that for any flat $Y$-scheme $f:F\rightarrow Y$, Lipman's theta $\theta:f^*\varphi_*\Cal M\rightarrow (p_2)_*p_1^*\Cal M$ (for the definition, see \cite[(3.7.2)]{Lipman} and \cite[(1.21)]{ETI}) is an isomorphism, where $p_1:U\times_Y F\rightarrow U$ and $p_2:U\times_Y F\rightarrow F$ are the projection maps. Again, $G$ is irrelevant here, and we may assume that $Y$ is affine. Take $V$ as above, and consider the commutative diagram \[ \xymatrix{ V\times_Y F \ar[r]^{i\times 1} \ar[d]^{q_1} \ar@{}[dr]|{\text{\normalsize $\tau$}} & U\times_Y F \ar[d]^{p_1} \ar[r]^{p_2} \ar@{}[dr]|{\text{\normalsize $\sigma$}} & F \ar[d]^f \\ V \ar[r]^i & U \ar[r]^\varphi & Y }. \] By \cite[(3.7.2)]{Lipman}, it suffices to prove that \[ \theta(\tau): p_1^*i_*\Cal N\rightarrow (i\times 1)_*q_1^*\Cal N \] and \[ \theta(\tau+\sigma): f^*(\varphi i)_*\Cal N\rightarrow (p_2(i\times 1))_*q_1^*\Cal N \] are isomorphisms, where $\Cal N=i^*\Cal M$. 
Replacing $U$ by $V$ and $\Cal M$ by $\Cal N$, it is easy to see that we may assume that $Y$ is affine and $U$ is quasi-compact. This case is \cite[(3.9.5)]{Lipman} (see also \cite[(7.12)]{ETI}). Now consider the original problem. As we have seen, Lipman's theta $\theta: (B_G^M(Y)_\phi)^*\varphi_*\Cal M_{[0]}\rightarrow (B_G^M(\varphi)_{[j]})_*B_G^M(U)_\phi^*\Cal M_{[0]}$ is an isomorphism for any morphism $\phi:[0]\rightarrow[j]$ in $\Delta_M$. In particular, letting $j=1,2$ and taking any $\phi:[0]\rightarrow [j]$, we have that $\varphi_*\Cal M$ (which is officially $B_G^M(\varphi)_*\Cal M$) is locally quasi-coherent. Indeed, we already know that $\varphi_*\Cal M_{[0]}$ is quasi-coherent, and $B_G^M(U)_\phi^*\Cal M_{[0]}\cong \Cal M_{[j]}$ by the equivariance of $\Cal M$. Moreover, by \cite[(6.20)]{ETI}, the alpha map $\alpha_\phi: B_G^M(Y)_\phi^* (B_G^M(\varphi)_*\Cal M)_{[0]}\rightarrow (B_G^M(\varphi)_*\Cal M)_{[j]}$ is an isomorphism for any $[j]\in\{[0],[1],[2]\}$ and any $\phi:[0]\rightarrow [j]$. By \cite[(7.6), {\bf 3}]{ETI}, $\varphi_*\Cal M$ is equivariant. Hence $\varphi_*\Cal M$ is quasi-coherent by \cite[(7.3)]{ETI}, as desired. \end{proof} \begin{corollary}\label{codim-two-ref.thm} Let $G$ be a flat $S$-group scheme, and $X$ be a locally Krull $G$-scheme. Let $U$ be its $G$-stable open subset. Let $\varphi:U\hookrightarrow X$ be the inclusion. Assume that $\codim_X(X\setminus U)\geq 2$. Then $\varphi^*:\Ref_n (G,X)\rightarrow\Ref_n(G,U)$ is an equivalence, and $\varphi_*:\Ref_n(G,U)\rightarrow\Ref_n(G,X)$ is its quasi-inverse. In particular, $\varphi^*:\Cl(G,X)\rightarrow\Cl(G,U)$ defined by $\varphi^*[\Cal M]=[\varphi^*\Cal M]$ is an isomorphism whose inverse is given by $\Cal N\mapsto [\varphi_*\Cal N]$. \qed \end{corollary} \begin{proof} By Lemma~\ref{locally-Krull-qc.thm}, $\varphi_*$ is a functor from $\Ref(G,U)$ to $\Qch(G,X)$. The rest is easy by Lemma~\ref{codim-two-isom.thm}. 
\end{proof} \begin{proposition}\label{pfb-cl-isom.thm} Let $G$ be a flat $S$-group scheme, and $\varphi:X\rightarrow Y$ a principal $G$-bundle. Then $\varphi$ is fpqc. If $X$ is locally Krull, then $Y$ is also locally Krull. The equivalence $\varphi^*:\Qch(Y)\rightarrow\Qch(G,X)$ yields an equivalence $\varphi^*:\Ref_n(Y)\rightarrow\Ref_n(G,X)$. In particular, $\varphi^*:\Cl(Y)\rightarrow\Cl(G,X)$ is an isomorphism. \end{proposition} \begin{proof} The first assertion is by Lemma~\ref{pfb-fpqc.thm}. Assume that $X$ is locally Krull. Then $Y$ is locally Krull by Lemma~\ref{reflexive-descent2.thm}. The equivalence $\varphi^*:\Qch(Y)\rightarrow\Qch(G,X)$ is by Lemma~\ref{pfb-pic.thm}. For $\Cal M\in\Qch(Y)$, $\Cal M\in\Ref_n(Y)$ if and only if $\varphi^*\Cal M\in\Ref_n(G,X)$ by Lemma~\ref{reflexive-ascent.thm} and Lemma~\ref{reflexive-descent2.thm}. The last assertion is now trivial. \end{proof} \begin{proposition}\label{Cl-Pic.thm} Let $Y$ be a quasi-compact locally Krull scheme. Then $\Cl(Y)\cong \indlim \Pic(U)$, where the inductive limit is taken over all open subsets $U$ such that $\codim_Y(Y\setminus U)\geq 2$. \end{proposition} \begin{proof} By Corollary~\ref{codim-two-ref.thm} for the case that $G$ is trivial, the map $\Cl(Y)\rightarrow \indlim \Cl(U)$ is an isomorphism. So it suffices to show that the canonical map $\indlim \Pic(U)\rightarrow \indlim \Cl(U)$ is surjective, as the injectivity is obvious. This amounts to showing that, for each $U$ and each rank-one reflexive sheaf $\Cal M$ over $U$, there exists some open subset $V$ of $U$ such that $\codim_U(U\setminus V)\geq 2$ and $\Cal M|_V$ is an invertible sheaf. By Lemma~\ref{codim-two.thm}, there exists some quasi-compact open subset $U'$ of $U$ such that $\codim_U(U\setminus U')\geq 2$. Replacing $U$ by $U'$, we may assume that $U$ is quasi-compact. Then $U=\bigcup_i \Spec A_i$ with $A_i$ a Krull domain. Replacing $U$ by each $\Spec A_i$, we may assume that $U=\Spec A$ is affine with $A$ a Krull domain. 
Set $I:=\Gamma(U,\Cal M)$. We may assume that $I$ is a divisorial ideal of $A$. Take $a\in I\setminus \{0\}$. Let $\{P_1,\ldots,P_r\}$ be the set of minimal primes of $Aa$. We may assume that $P_i\neq P_j$ for $i\neq j$. Let $1\leq i\leq r$. Set $IA_{P_i}=P_i^{v_i}A_{P_i}$. For each $i$, take $b_i\in I\setminus P_i^{v_i+1}A_{P_i}$. Set $J=Aa+\sum_{i=1}^r(Ab_i:I)$. If $P\neq P_i$ for any $i$, $J_P=A_P$, since $(Aa)_P=A_P$. Moreover, $J_{P_i}=A_{P_i}$, since $(Ab_i:I)_{P_i}=(Ab_i)_{P_i}:I_{P_i}=A_{P_i}$. Set $V=D(J)=U\setminus V(J)$. Then $\codim_U(U\setminus V)\geq 2$. On $D(Aa)$, $\tilde I|_{D(Aa)}=\tilde A|_{D(Aa)}$ is an invertible sheaf, where $D(Aa)=\Spec A\setminus V(Aa)$. On $D(Ab_i:I)$, $\tilde I|_{D(Ab_i:I)}=(Ab_i)\,\tilde{}\,|_{D(Ab_i:I)}$ is an invertible sheaf. Thus $\tilde I$ is invertible on $V$, and we are done. \end{proof} \begin{lemma}\label{invariance-reflexive.thm} Let $G$ be a flat $S$-group scheme, and $X$ be a locally Krull $S$-scheme on which $G$ acts trivially. Let $\Cal M\in\Ref(G,X)$. Then $\Cal M^G\in\Ref(G,X)$. \end{lemma} \begin{proof} Let $p:G\times X\rightarrow X$ be the second projection. There is an exact sequence \[ 0\rightarrow \Cal M^G \xrightarrow{i} \Cal M \rightarrow p_*p^*\Cal M. \] By Lemma~\ref{second-syzygy.thm}, it suffices to show that the cokernel $\Cal C$ of $i$ is torsion-free. As $p$ is flat and $\Cal C$ is a subsheaf of $p_*p^*\Cal M$, this is easy. \end{proof} \section{The class group of an invariant subring} \begin{lemma}\label{finite-direct-Krull.thm} Let $X$ be a quasi-compact locally Krull scheme, and $U$ its open subscheme. Then $\Gamma(U,\Cal O_U)$ is a finite direct product of Krull domains. \end{lemma} \begin{proof} As $U$ is a finite direct product of integral schemes, we may assume that $U$ is integral. By Lemma~\ref{codim-two.thm}, we can take a quasi-compact open subset $V$ of $U$ such that $\codim_U(U\setminus V)\geq 2$. Replacing $U$ by $V$, we may assume that $U$ itself is quasi-compact. 
If $U=\bigcup_{i=1}^n U_i$ with $U_i$ affine, then $\Gamma(U,\Cal O_U)=\bigcap_{i=1}^n \Gamma(U_i,\Cal O_{U_i})$ with each $\Gamma(U_i,\Cal O_{U_i})$ a Krull domain, and hence $\Gamma(U,\Cal O_U)$ is also a Krull domain. \end{proof} \paragraph Let $G$ be a flat $S$-group scheme. Let $X$ be a quasi-compact quasi-separated locally Krull $G$-scheme, and let $\varphi:X\rightarrow Y$ be a $G$-invariant morphism such that $\Cal O_Y\rightarrow(\varphi_*\Cal O_X)^G$ is an isomorphism. \begin{lemma}\label{Y-Krull.thm} $Y$ is a locally Krull scheme. Each irreducible component of $X$ is mapped dominatingly to an irreducible component of $Y$. In particular, $Y$ has only finitely many irreducible components. Moreover, there exists some quasi-compact open subset $U$ of $Y$ such that $\codim_Y(Y\setminus U)\geq 2$. \end{lemma} \begin{proof} Let $Y'=\Spec A$ be an affine open subscheme of $Y$, $X'=\varphi^{-1}(Y')$, and $\varphi':X'\rightarrow Y'$ be the induced map. Let $B=\Gamma(X',\Cal O_{X'})$. Note that $B$ is a finite direct product of Krull domains by Lemma~\ref{finite-direct-Krull.thm}. Note also that the sequence \begin{equation}\label{ABG.eq} 0\rightarrow A \rightarrow B \xrightarrow{u-v}C \end{equation} is exact, where $C=\Gamma(G\times X',\Cal O_{G\times X'})$, and $u=u(a)$ and $v=u(p_2)$ are the maps $B=\Gamma(X',\Cal O_{X'}) \rightarrow \Gamma(G\times X',\Cal O_{G\times X'})=C$ corresponding to the action $a$ and the second projection $p_2$, respectively. As in the proof of \cite[(32.6)]{ETI}, a nonzerodivisor of $A$ is a nonzerodivisor of $B$, $A=Q(A)\cap B$, and hence $A$ is a finite direct product of Krull domains. Also, as any nonzerodivisor of $A$ is a nonzerodivisor of $B$, any irreducible component of $X$ is mapped dominatingly to $Y$. We prove the last assertion. Let $Y=\bigcup_\lambda U_\lambda$ be an affine open covering. Then by the quasi-compactness of $X$, there are finitely many $\lambda_1,\ldots,\lambda_n$ such that $X=\bigcup_i \varphi^{-1}(U_{\lambda_i})$. 
Set $U=\bigcup_i U_{\lambda_i}$. We prove that $\codim_Y(Y\setminus U)\geq 2$. Assume the contrary, and take $y\in Y\setminus U$ such that $\Cal O_{Y,y}$ is a DVR. Take an affine open neighborhood $Y'=\Spec A$ of $y$, and let $X':=Y'\times_Y X$. Then we have the exact sequence (\ref{ABG.eq}) with $B=\Gamma(X',\Cal O_{X'})$ and $C=\Gamma(G\times X',\Cal O_{G\times X'})$. Set $Y''=\Spec A_P=\Spec \Cal O_{Y,y}$, where $P$ is the height-one prime ideal of $A$ corresponding to $y$. Then plainly, \[ 0\rightarrow A_P\rightarrow B_P\xrightarrow{u-v} C_P \] is exact. Let $t$ be a prime element of $A_P$. As $\varphi^{-1}(y)$ is empty, $t\Cal O_{X''}=\Cal O_{X''}$, where $X''=Y''\times_Y X$. Thus $t\in\Gamma(X'',\Cal O_{X''})^\times$. As there is a quasi-compact open subset $W$ of $X'$ with $\codim_{X'}(X'\setminus W)\geq 2$, \[ t^{-1}\in\Gamma(X'',\Cal O_{X''})=\Gamma(Y''\times_{Y'}W,\Cal O_{Y''\times_{Y'}W}) =\Gamma(W,\Cal O_W)_P=B_P. \] So $t^{-1}\in B_P\cap Q(A)=A_P$, and this is a contradiction. \end{proof} \begin{lemma}\label{subquotient.thm} The class group $\Cl(Y)$ of $Y$ is a subquotient of $\Cl(G,X)$. \end{lemma} \begin{proof} By Lemma~\ref{Y-Krull.thm}, there exists some quasi-compact open subset $Y'$ of $Y$ such that $\codim_Y(Y\setminus Y')\geq 2$. Let $h:\Cl(G,X)\rightarrow\indlim \Cl(G,\varphi^{-1}(U))$ be the canonical map, where the inductive limit is taken over all open subsets $U$ of $Y'$ such that $\codim_Y(Y\setminus U)\geq 2$. Let $\nu:\Cl(Y)\rightarrow \Image h$ be the map defined by $\nu[\Cal M]=h[(\varphi^*\Cal M)^{**}]$. As $\Cal M|_U$ is an invertible sheaf for some $U$, it is easy to see that $\nu$ is a group homomorphism. If $\nu[\Cal M]=0$, then $\Cal M|_U$ is an invertible sheaf and $\varphi^*(\Cal M|_U)$ is trivial for some $U$. By Lemma~\ref{pic-injective.thm}, $\Cal M|_U$ is trivial, and by Proposition~\ref{Cl-Pic.thm}, $[\Cal M|_{Y'}]=0$ in $\Cl(Y')$. By Corollary~\ref{codim-two-ref.thm}, $[\Cal M]=0$ in $\Cl(Y)$. 
This shows that $\nu$ is injective, and $\Cl(Y)$ is a subquotient of $\Cl(G,X)$. \end{proof} \begin{theorem}\label{main2.thm} Let $k$ be a field, $G$ a smooth $k$-group scheme of finite type, and $X$ a quasi-compact quasi-separated locally Krull $G$-scheme. Assume that there is a $k$-scheme $Z$ of finite type and a dominating $k$-morphism $Z\rightarrow X$. Let $\varphi:X\rightarrow Y$ be a $G$-invariant morphism such that $\Cal O_Y\rightarrow (\varphi_*\Cal O_X)^G$ is an isomorphism. Then $Y$ is locally Krull. If, moreover, $\Cl(X)$ is finitely generated, then $\Cl(G,X)$ and $\Cl(Y)$ are also finitely generated. \end{theorem} \begin{proof} $Y$ is locally Krull by Lemma~\ref{Y-Krull.thm}. We prove the last assertion. If $\Cl(X)$ is finitely generated, then $\Cl(G,X)$ is also finitely generated, since the kernel of the canonical map $\alpha:\Cl(G,X)\rightarrow \Cl(X)$ agrees with $\Ker\rho$, which is finitely generated by Theorem~\ref{main.thm}. As $\Cl(Y)$ is a subquotient of $\Cl(G,X)$, it is also finitely generated. \end{proof} \begin{remark} A similar result can be found in \cite{Waterhouse}. \end{remark} Finally, as a normal scheme of finite type over $k$ is quasi-compact quasi-separated locally Krull (and is dominated by some scheme of finite type), we have \begin{corollary}\label{main2-cor.thm} Let $k$ be a field, $G$ a smooth $k$-group scheme of finite type, acting on a normal $k$-scheme $X$ of finite type. Let $\varphi:X\rightarrow Y$ be a $G$-invariant morphism such that $\Cal O_Y\rightarrow (\varphi_*\Cal O_X)^G$ is an isomorphism. Then $Y$ is locally Krull. If, moreover, $\Cl(X)$ is finitely generated, then $\Cl(G,X)$ and $\Cl(Y)$ are also finitely generated. \qed \end{corollary} \end{document}
Radar Systems - Phased Array Antennas

A single Antenna can radiate a certain amount of power in a particular direction. Obviously, the amount of radiated power will be increased when we use a group of Antennas together. The group of Antennas is called an Antenna array.

An Antenna array is a radiating system comprising radiators and elements. Each of these radiators has its own induction field. The elements are placed so closely that each one lies in the neighbouring one's induction field. Therefore, the radiation pattern produced by them is the vector sum of the individual ones. The Antennas radiate individually, and while in an array, the radiation of all the elements sums up to form the radiation beam, which has high gain, high directivity and better performance, with minimum losses.

An Antenna array is said to be a Phased Antenna array if the shape and direction of the radiation pattern depend on the relative phases and amplitudes of the currents present at each Antenna of that array.

Radiation Pattern

Let us consider 'n' isotropic radiation elements, which when combined form an array.
Let the spacing between successive elements be 'd' units. All the radiation elements receive the same incoming signal, so each element produces an equal output voltage of $\sin\left(\omega t\right)$. However, there will be an equal phase difference $\Psi$ between successive elements. Mathematically, it can be written as $$\Psi=\frac{2\pi d\sin\theta }{\lambda }\:\:\:\:\:Equation\:1$$ $\theta$ is the angle at which the incoming signal is incident on each radiation element. Mathematically, we can write the expressions for the output voltages of the 'n' radiation elements individually as $$E_1=\sin\left [ \omega t \right]$$ $$E_2=\sin\left [\omega t+\Psi\right]$$ $$E_3=\sin\left [\omega t+2\Psi\right]$$ $$\vdots$$ $$E_n=\sin\left [\omega t+\left (n-1\right )\Psi\right]$$ $E_1, E_2, E_3, \ldots, E_n$ are the output voltages of the first, second, third, ..., nth radiation elements respectively. $\omega$ is the angular frequency of the signal. We will get the overall output voltage $E_a$ of the array by adding the output voltages of each element present in that array, since all those radiation elements are connected in a linear array. Mathematically, it can be represented as $$E_a=E_1+E_2+E_3+\cdots+E_n \:\:\:Equation\:2$$ Substituting the values of $E_1, E_2, E_3, \ldots, E_n$ in Equation 2, $$E_a=\sin\left [ \omega t \right]+\sin\left [\omega t+\Psi\right ]+\sin\left [\omega t+2\Psi\right ]+\cdots+\sin\left [\omega t+\left (n-1\right )\Psi\right]$$ $$\Rightarrow E_a=\sin\left [\omega t+\frac{(n-1)\Psi}{2}\right ]\frac{\sin\left [\frac{n\Psi}{2}\right]}{\sin\left [\frac{\Psi}{2}\right ]}\:\:\:\:\:Equation\:3$$ In Equation 3, there are two terms. From the first term, we can observe that the overall output voltage $E_a$ is a sine wave having an angular frequency $\omega$, but with a phase shift of $(n-1)\Psi/2$. The second term of Equation 3 is an amplitude factor.
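The closed form in Equation 3 follows from the standard identity for a sum of sines in arithmetic phase progression. A quick numerical check (with illustrative values of n, Psi and omega that are not taken from the text) confirms the identity:

```python
import numpy as np

# n radiating elements with progressive phase shift Psi (illustrative values).
n, Psi, omega = 8, 0.4, 2 * np.pi
t = np.linspace(0.0, 2.0, 1000)

# Direct sum E_a = E_1 + ... + E_n of Equation 2.
direct = sum(np.sin(omega * t + k * Psi) for k in range(n))

# Closed form of Equation 3.
closed = (np.sin(omega * t + (n - 1) * Psi / 2)
          * np.sin(n * Psi / 2) / np.sin(Psi / 2))

assert np.allclose(direct, closed)   # the two expressions agree at all t
```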
The magnitude of Equation 3 is $$\left | E_a \right|=\left | \frac{\sin\left [\frac{n\Psi}{2}\right ]}{\sin\left [\frac{\Psi}{2}\right]} \right |\:\:\:\:\:Equation\:4$$ We will get the following equation by substituting Equation 1 in Equation 4. $$\left | E_a \right|=\left | \frac{\sin\left [\frac{n\pi d\sin\theta}{\lambda}\right]}{\sin\left [\frac{\pi d\sin\theta}{\lambda}\right ]} \right |\:\:\:\:\:Equation\:5$$ Equation 5 is called the field intensity pattern. The field intensity pattern takes the value zero when the numerator of Equation 5 is zero: $$\sin\left [\frac{n\pi d\sin\theta}{\lambda}\right ]=0$$ $$\Rightarrow \frac{n\pi d\sin\theta}{\lambda}=\pm m\pi$$ $$\Rightarrow nd\sin\theta=\pm m\lambda$$ $$\Rightarrow \sin\theta=\pm \frac{m\lambda}{nd}$$ Here $m$ is an integer equal to 1, 2, 3 and so on. We can find the maximum values of the field intensity pattern by using L'Hospital's rule when both the numerator and denominator of Equation 5 are equal to zero. Observe that if the denominator of Equation 5 becomes zero, then the numerator of Equation 5 also becomes zero. Now, let us get the condition for which the denominator of Equation 5 becomes zero. $$\sin\left [\frac{\pi d\sin\theta}{\lambda}\right ]=0$$ $$\Rightarrow \frac{\pi d\sin\theta}{\lambda}=\pm p\pi$$ $$\Rightarrow d\sin\theta=\pm p\lambda$$ $$\Rightarrow \sin\theta=\pm \frac{p\lambda}{d}$$ Here $p$ is an integer equal to 0, 1, 2, 3 and so on. If we take $p$ as zero, then $\sin\theta$ is zero; for this case, the field intensity pattern attains its maximum value, corresponding to the main lobe. The field intensity pattern attains maximum values corresponding to side lobes for the other values of $p$. The direction of the radiation pattern of a phased array can be steered by varying the relative phases of the current present at each Antenna. This is the advantage of an electronically scanned phased array.
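The null and main-lobe conditions derived from Equation 5 can be verified numerically. The sketch below assumes an 8-element array with half-wavelength spacing (d/lambda = 0.5); both values are chosen for illustration only:

```python
import numpy as np

# |E_a(theta)| from Equation 5 for an n-element array with spacing d.
# n = 8 and d/lambda = 0.5 are assumed illustrative values.
n, d_over_lam = 8, 0.5

def pattern(theta):
    x = np.pi * d_over_lam * np.sin(theta)
    num, den = np.sin(n * x), np.sin(x)
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio = np.abs(num / den)
    # Where the denominator vanishes, L'Hospital's rule gives the limit n.
    return np.where(np.isclose(den, 0.0), float(n), ratio)

# Main lobe: maximum value n at theta = 0 (the p = 0 case).
assert np.isclose(pattern(np.array(0.0)), n)

# First null: sin(theta) = m*lambda/(n*d) with m = 1.
theta_null = np.arcsin(1.0 / (n * d_over_lam))
assert np.isclose(pattern(np.array(theta_null)), 0.0, atol=1e-9)
```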
Using the pigeonhole principle, show that among any $11$ integers, the sum of some $6$ of them is divisible by $6$. I thought we can be sure that we have $6$ odd or $6$ even numbers among $11$ integers. Suppose that we have $6$ odd integers. Obviously the sum of these $6$ odd numbers is divisible by $2$, so it remains to show that this sum is divisible by $3$ as well. But how? First, a lemma: among any $5$ integers, the sum of some $3$ of them is divisible by $3$. By the Pigeonhole principle, either all three residue classes modulo $3$ are represented among the $5$ integers, or one residue class must have $3$ of these $5$ integers belonging to it. In the former case, let $x_0 \equiv 0 \mod 3$, $x_1 \equiv 1 \mod 3$ and $x_2 \equiv 2 \mod 3$. Then summing $x_0, x_1, x_2$, we get $x_0 + x_1 + x_2 \equiv 0 \mod 3$. In the latter case, we have $3$ integers among the $5$, say $x_1, x_2, x_3$, such that $x_1 \equiv x_2 \equiv x_3 \equiv k \mod 3$; again summing these three we get $x_1 + x_2 + x_3 \equiv 3k \equiv 0 \mod 3$. This proves that among any $5$ integers, the sum of some $3$ of them is divisible by $3$. Now, we have $11$ integers. By the previous result, we can choose $3$ of them such that their sum is divisible by $3$. Denote this sum by $s_1$. Now, we are left with $8$ integers; again, choose $3$ of them such that their sum is divisible by $3$. Denote this by $s_2$. Now, we are left with $5$ integers. Choose $s_3$ similarly. Thus we have $3$ sums: $s_1, s_2, s_3$ (each of which is a sum of $3$ integers). These sums are divisible by $3$. So, each of these sums is congruent to either $0$ or $3$ modulo $6$. Now, since there are $3$ sums and two residue classes ($0$ and $3$), by the Pigeonhole principle, one residue class must have two sums belonging to it. Let $s_i$ and $s_j$ be those sums. Either $s_i \equiv s_j \equiv 0 \mod 6$ or $s_i \equiv s_j \equiv 3 \mod 6$. In both cases, $s_i + s_j \equiv 0 \mod 6$. Since $s_i$ and $s_j$ are both sums of $3$ integers, $s_i + s_j$ is a sum of $6$ integers (which is divisible by $6$). This completes the proof.
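The statement (a special case of the Erdős–Ginzburg–Ziv theorem with n = 6) can also be spot-checked by brute force. The sketch below samples random 11-tuples, which is supporting evidence rather than a proof:

```python
from itertools import combinations
from random import randint, seed

def has_six_with_sum_div_6(nums):
    # C(11, 6) = 462 subsets, so exhaustive search over them is cheap.
    return any(sum(c) % 6 == 0 for c in combinations(nums, 6))

seed(0)
for _ in range(200):                       # random trials: evidence, not proof
    nums = [randint(-50, 50) for _ in range(11)]
    assert has_six_with_sum_div_6(nums)
```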
Free parameter A free parameter is a variable in a mathematical model which cannot be predicted precisely or constrained by the model[1] and must be estimated[2] experimentally or theoretically. A mathematical model, theory, or conjecture is more likely to be right and less likely to be the product of wishful thinking if it relies on few free parameters and is consistent with large amounts of data. Not to be confused with Free variable. See also • Decision variables • Exogenous variables • Random variables • State variables References 1. Kline, Rex B. (2015). Principles and Practice of Structural Equation Modeling. Guilford Publications. p. 128. ISBN 978-1462523351. 2. Calvert, Gemma; Spence, Charles; Stein, Barry E. (2004). The Handbook of Multisensory Processes. MIT Press. p. 160. ISBN 978-0262033213.
Robust or fragile? A tale of two circuits

Negative feedback: a robustness strategy with tradeoffs

An engineering viewpoint on biological robustness

Mustafa Khammash

BMC Biology 2016, 14:22. © Khammash 2016

In his splendid article "Can a biologist fix a radio? — or, what I learned while studying apoptosis," Y. Lazebnik argues that when one uses the right tools, similarity between a biological system, like a signal transduction pathway, and an engineered system, like a radio, may not seem so superficial. Here I advance this idea by focusing on the notion of robustness as a unifying lens through which to view complexity in biological and engineered systems. I show that electronic amplifiers and gene expression circuits share remarkable similarities in their dynamics and robustness properties. I explore robustness features and limitations in biology and engineering and highlight the role of negative feedback in shaping both.

Keywords: Negative feedback, Disturbance rejection, Terminal cell, Robustness property

What exactly is to be understood by the term robustness in the context of a given system? A definition that is narrower in scope than the one given earlier is more useful. Indeed a more functional interpretation must include explicitly or implicitly the particular system property or phenotype whose robustness is being investigated, together with the specific adverse conditions or disturbances that it must withstand. In this sense, robustness is not so much an attribute of an entire system, as it is a property of some of its facets. It is instructive to explore this and other robustness issues using concrete examples, and so I shall investigate the robustness of two systems from very different disciplines: electrical engineering and biology. In the early history of electronic technology, at no time was the need to achieve robustness more urgent than in the 1920s.
At that time, transcontinental telephony required electronic amplifiers with high gain (amount of amplification) to boost telephone signals sufficiently for transmission over long distances. The use of vacuum tubes in the design provided the necessary high gain, but there was a problem. The vacuum tubes had uncertain and variable characteristics that introduced distortions and prevented the reliable prediction of the gain of the amplifier, which needed constant calibration. Harold Black, a Bell Labs engineer who worked on the problem, described it thus: "every hour on the hour — for 24 hours — somebody had to adjust the filament current to its correct value. In doing this, they were permitting plus or minus 1/2-to-1 decibel variation in amplifier gain, whereas for my purpose the gain had to be absolutely perfect." In other words, the amplifier gain was not robust to the inevitable variations of the vacuum tube amplifier parameters. The problem resisted many attempts at its solution until 1927, when in a stroke of great insight, Black found a brilliant and simple solution. He realized that if he fed back a portion of the output of the amplifier into its input in a negative phase, the gain of the amplifier and its output should be reliably stabilized. Prototypes indeed demonstrated dramatic robustness of the amplifier output, noise attenuation, and far better overall performance. At the time, Black's idea ran counter to accepted theory, and it took a full nine years for the patent office to award him a patent for his invention. Thus, the negative feedback amplifier was born — an invention that is considered by some to be the most important breakthrough of the 20th century in electronics. Meanwhile, several billion years earlier, nature's evolutionary explorations led to the discovery of negative feedback as a central strategy for regulating the internal cellular environment. 
The prevalence of negative feedback at every level of biological organization is a testament to this strategy's effectiveness in achieving robust regulation of cellular processes and in successfully counteracting disturbances that act to push the system into disequilibrium. While biological systems and engineered ones may seem to be worlds apart due to their vastly different substrates, time-scales, and mechanistic implementation, I will show that they in fact have much in common. To make this point, I shall look more closely at two systems: the negative feedback amplifier and the autoregulatory gene expression circuit. Not only do they exhibit similar robustness and fragility properties, but the dynamic equations that describe them are nearly the same. I start by examining the robustness properties of the amplifier which will then help us understand those of the gene expression circuit. Prior knowledge of electronics is not required; readers unfamiliar with circuits can simply think of an amplifier as a dynamical system whose input-output behavior depends on a set of parameters.

Robustness analysis of a feedback amplifier

An amplifier is an electronic device that receives as its input an electric signal (typically voltage), and delivers as its output an electric signal that is ideally a scaled replica of the input signal. This scaling is called the gain of the amplifier. If the gain is larger than one, the input will be amplified, which gives the device its name. Amplifiers are ubiquitous and can be found in our cell phones, computers, TV sets, radios, cameras, etc. Let me start, as Black did, with a high gain amplifier which does not employ negative feedback. I will denote its input voltage as v and its output voltage as y. During Black's time, building such an amplifier required cumbersome vacuum tubes, but today such a device can be made with modern transistors. An electric circuit implementing one such amplifier is shown inside the rectangular box in Fig. 1 a.
One need not be concerned with the details of the internal circuitry (most users of the amplifier don't know them anyway; they don't need to!). Instead, I will focus on the relation between the input v and output y, which is particularly simple. Indeed, for slow time-scales the relation is a direct scaling of the input: y=A v, where A is the gain of amplifier. For faster time scales, a better model consists of a single first-order differential equation that more accurately captures the dynamics (see dynamic model in Fig. 1 a). This equation is characterised by two parameters: c, the reciprocal of the time constant, which measures the amplifier's speed of response, and a, which when divided by c gives the gain of amplifier, A. Robustness of two electronic operational amplifiers (with and without negative feedback). a A common model of a negative feedback amplifier with typical parameters. y is the output to input u(t), which I take to be unity. Shown also is the unregulated amplifier (circuit inside the rectangle) with input v and output y. This high gain amplifier is manufactured as an integrated transistor circuit. a and c are internal parameters such that c −1 is the time constant and A=a/c is the amplifier gain. Negative feedback is introduced by adding the two resistors R 1 and R 2 in the configuration shown. The circuit is quite complex, but the simple first order model shown is a good representation of its behavior under typical operating conditions. The feedback resistors R 1 and R 2 are supplied by the user and are selected to tune the gain. b The robustness/fragility properties of the two amplifier circuits. For proper comparison, the input to the unregulated amplifier, \(v(t)\equiv \overline {v}\), is chosen so that the corresponding output y ∗ matches that of the negative feedback amplifier. For the feedback amplifier, y ∗ is extremely robust to variations in the parameters a and c, in contrast to the unregulated amplifier. 
At the same time, y ∗ is quite sensitive to the values of the two resistors, underscoring its robust yet fragile character. c Graphical explanation of the difference in robustness properties of the two amplifiers. For both amplifiers, the abscissa of the point of intersection of the black line and the blue line gives y ∗. In the case of the feedback amplifier, the slope of the blue line is −A β. As A β>>1, one can see that y ∗ will be almost independent of A. Indeed, y ∗ depends almost exclusively on the ratio R 2/R 1, resulting in extreme robustness to A=a/c The amplifier just described will suffer from many of the problems that faced Black in the 1920s, that is, high variability of the gain A. By adopting Black's idea of including a version of the output in the input signal, one ends up with a negative feedback amplifier. This is straightforward to do: simply arrange that v=u−β y, where β is a constant parameter and u is a voltage signal that serves as the input to the feedback amplifier. This scenario is shown in the diagram shown Fig. 1 a where the two resistors, R 1 and R 2, connecting the output to the input act to enforce v=u−β y. This is all that is needed to proceed with the robustness analysis of the negative feedback amplifier. With negative feedback in place there are four model parameters, a and c for the amplifier (as explained above) and R 1, R 2 for the two resistors, i.e., θ=(a,c,R 1,R 2), which are all positive. For simplicity, I shall take the input u to be constant (one) over time, and for a performance measure, I shall focus exclusively on the steady-state value of the output, y ∗. Clearly, y ∗ depends on the parameters θ, so I can write y ∗=f(θ) for some function f(·). With a performance measure at hand, one can ask when the system described by the above function may be considered to be 'robust'. One possible answer is to equate system robustness with the ability of the output to withstand variations in all the model parameters. 
For example, one may want to insist that the property of interest, $y^{\ast}=f({\boldsymbol{\theta}})$, is insensitive to variations in the four model parameters $\theta_1,\ldots,\theta_4$ mentioned above. At this point, a quantitative measure of the sensitivity of $f({\boldsymbol{\theta}})$ to each parameter $\theta_i$ is needed. I will use the following measure: the ratio of the relative change in the output $f({\boldsymbol{\theta}})$ to the relative change in the parameter $\theta_i$ that caused it. I will refer to this (dimensionless) quantity as the relative sensitivity of the output to $\theta_i$, and I denote it by $S_{\theta_{i}}({\boldsymbol{\theta}})$. Mathematically, $$S_{\theta_{i}}({\boldsymbol{\theta}}) =\frac{\partial f ({\boldsymbol{\theta}})}{\partial \theta_{i}} \cdot \frac{\theta_{i}}{f({\boldsymbol{\theta}})}. $$ While this expression of relative sensitivity evaluates the effect of small relative parameter changes, one could also evaluate the effect of larger parameter changes (such as a 100 % change or larger from the nominal value). This doesn't alter any of the conclusions, however, so I will just use the above sensitivity expression. Assigning typical values to the parameters ${\boldsymbol{\theta}}$ (Fig. 1 a), I can proceed with examining the relative sensitivity of $y^{\ast}$ to these four model parameters. Computing the relative sensitivity to parameter a in our example, one finds that $S_{a}({\boldsymbol{\theta}})\approx 0$. Similarly, for the parameter c, $S_{c}({\boldsymbol{\theta}})$ is negligibly small. However, when one computes the relative sensitivities with respect to the remaining two parameters, one finds that $S_{R_{1}}({\boldsymbol{\theta}})\approx -0.91$ and $S_{R_{2}}({\boldsymbol{\theta}})\approx 0.91$. In other words, a relative change in $R_1$ or $R_2$ results in a relative change in $y^{\ast}$ of almost the same magnitude (Fig. 1 b). Sensitivity analysis thus informs us that while the variable of interest is insensitive to two of the parameters, it is quite sensitive to the remaining two.
A similar conclusion can be reached were I to assess sensitivity to much larger changes in these parameters. Given this sensitivity, could one consider y ∗ as a robust output of the system? More specifically, if these were the sensitivity properties of a system designed to keep y ∗ constant, would one assess the performance of the system to be acceptable? As it turns out, the above circuit is that of a model 741 electronic operational amplifier [6] with typical parameter values (Fig. 1 a). It is quite likely the most versatile electronic building block ever created! Every major integrated circuit manufacturer offers a version of it, and it can be found in a very large number of functioning electronic circuits. The success of this circuit is precisely due to the robustness of the output y to variations in parameters a and c, and one would be ill-advised to characterize the performance of the system as 'non-robust', even if the output is sensitively dependent on the parameters R 1 and R 2. To get a deeper understanding of the issue, I will go back to the parameters of the model. The parameter a is directly related to the 'open-loop gain' of the amplifier, i.e., the ratio of the output to the input before any negative feedback is introduced. Such gain varies considerably, and could fluctuate up to several orders of magnitude. The introduction of negative feedback regulation introduces a dramatic improvement. This regulation is achieved by the two resistors R 1 and R 2. With these resistors in place, the amplifier gain (equal to y ∗ here) is virtually insensitive to variations in a or c. Indeed, it can be shown (Fig. 1 c) that y ∗≈1+R 2/R 1, and is hence effectively independent of a and c. Instead, the gain is now heavily dependent on the values of the two resistors. It would appear that at the same time the feedback brought about robustness to parameters a and c, it introduced new fragilities, as can be seen in the strong dependence on the parameters R 1 and R 2. 
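These sensitivity numbers can be reproduced numerically from the closed-loop expression y* = A/(1 + A·beta) with beta = R1/(R1 + R2). The parameter values below (A = a/c = 1e5, R2/R1 = 10) are illustrative stand-ins for the unstated values of Fig. 1a, chosen so that the feedback gain is about 11 and the quoted sensitivities of roughly ±0.91 emerge:

```python
# Closed-loop output y* = A/(1 + A*beta), with beta = R1/(R1 + R2).
def y_star(a, c, R1, R2):
    A = a / c
    beta = R1 / (R1 + R2)
    return A / (1 + A * beta)

def rel_sens(f, theta, i, h=1e-6):
    # Central-difference estimate of the relative sensitivity
    # S_i = (df/dtheta_i) * (theta_i / f).
    up = list(theta); up[i] *= 1 + h
    dn = list(theta); dn[i] *= 1 - h
    return (f(*up) - f(*dn)) / (2 * h) / f(*theta)

theta = [1e5, 1.0, 1e3, 1e4]                    # a, c, R1, R2 (illustrative)
S = [rel_sens(y_star, theta, i) for i in range(4)]

assert abs(S[0]) < 1e-3 and abs(S[1]) < 1e-3    # robust to a and c
assert abs(S[2] + 10 / 11) < 1e-2               # S_R1 ~ -0.91
assert abs(S[3] - 10 / 11) < 1e-2               # S_R2 ~ +0.91
```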
This is not a problem, however, as it is much easier to make precision resistors than precision unregulated high-gain amplifiers. Once the resistor values are selected, their values will remain virtually unchanged throughout their operation. In this way, the overall system is robust to variations in parameters that are expected to vary (e.g., a), but sensitive to parameters that can be expected to remain unchanged. This remarkably versatile system is thus both robust and fragile. It is robust to certain parameters but fragile in its strong dependence on others. Nor would one want the system to be robust to all parameters, as this would result in an amplifier whose gain cannot be tuned. The choice of resistors offers a simple and effective way to set the gain of the amplifier, a feature that cannot be realized had the output $y^{\ast}$ been insensitive to all the parameters. Such tradeoffs between robustness and fragility are common to virtually all complex engineered systems. Before I explain how negative feedback achieves this impressive feat, I will first bring in our biological example and compare it to the amplifier.

Gene expression circuit

Here I explore the robustness properties of a simple gene expression circuit with autoregulation achieved through negative feedback. Negative autoregulation has been established as a network motif — one that appears, for instance, in the Escherichia coli transcriptional network far more frequently than would be expected in a random network [7]. Remarkably, the dynamics of an autoregulated gene expression circuit are very similar to those of the operational amplifier I discussed in the previous section, and many of the issues pertaining to robustness apply in a similar manner here as well. Figure 2 a shows a simple circuit for gene expression and a corresponding dynamic model, describing the evolution of the expressed protein p.
When the rate of gene expression, v, is independent of the protein level, this model corresponds to constitutive gene expression. On the other hand, when v(t) is dependent on p through a repression Hill function as shown in Fig. 2 a, then the expression circuit is subject to negative feedback. It is interesting to note that if one were to linearize the nonlinear feedback term v(t) at the steady state value p ∗, the dynamic equations for the gene expression circuit will be identical to those describing the operational amplifier. In fact, as can be seen in Fig. 2 c, approximating the blue Hill function with a line near the intersection point will make the gene expression model identical to the amplifier model. Robustness properties of two gene expression circuits (auto-regulated with negative feedback versus constitutively expressed without feedback). a The model of the auto-regulated gene expression circuit. Negative feedback is achieved by a Hill-type function resulting from the multimerization of the protein P into an n-mer P n , which in turn binds to the active gene G and represses it. Constitutive expression is modeled by an expression rate av that is independent of p. b The relative sensitivities of p ∗, the steady-state concentration of the protein, to the model parameters in both circuits. The auto-regulated circuit is robust to parameters a and c, in contrast to the constitutively expressed circuit, which is sensitive to both parameters. The auto-regulated gene circuit is, however, sensitive to parameter n. c A graphical explanation of the differences in robustness between both circuits. The intersection of the line and the graph of h(·) in the left figure (auto-regulated circuit) gives p ∗. Robustness in this circuit is achieved through high-gain and feedback, just as it is in the amplifier circuit. The higher the gain n the more robust the value of p ∗ will be to variations in the parameters a, c, and b. 
Indeed it can be shown that \(S_{a}({\boldsymbol {\theta }}) \approx \frac {1}{n+1}\), \(S_{c}({\boldsymbol {\theta }}) \approx \frac {-1}{n+1}\), \(S_{b}({\boldsymbol {\theta }}) \approx \frac {-1}{n+1}\), and \(S_{n}({\boldsymbol {\theta }}) \approx \frac {-n\log p^{\ast }}{n+1}\). In contrast, the constitutively expressed gene circuit lacks robustness to parameters a and c, even though it shares the same protein level p ∗ as the auto-regulated circuit. See also [20] for a general discussion of sensitivity of biochemical reactions and the effect of feedback I shall now explore the sensitivity of the steady-state protein level, p ∗, with respect to model parameters, both for the constitutive expression model and the model with negative feedback. In the constitutive expression case, the parameters are simply the expression rate a and the degradation rate c. In the feedback case, there are two additional parameters n and b that define the feedback repression Hill function (Fig. 2 a). Specifically, n is the Hill coefficient, which determines the steepness of the Hill function, and b combines the association constant of protein P to form the P n complex with the association constant of the resulting complex to DNA. One can get analytical expressions for the relative sensitivities of p ∗ with respect to these parameters, and use them to study the robustness of the gene circuit. As can be seen in Fig. 2 b, the constitutive expression model (no feedback) is quite sensitive to both a and c. For the typical nominal parameters θ shown in Fig. 2, the relative sensitivities to a and c are 1 and -1 respectively. In contrast, in the case of negative feedback, the relative sensitivity of p ∗ to these two parameters is much smaller, namely S a (θ)≈0.22 and S c (θ)≈ −0.22. Similarly, p ∗ shows small sensitivity to b, as can be seen from S b (θ)≈−0.2. At the same time, p ∗ can be considerably sensitive to n. Indeed, it can be shown that S n (θ)≈2. 
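The quoted approximations can be checked numerically. The sketch below assumes the repression form v = 1/(1 + b·p^n), which is consistent with the quoted sensitivities (the exact form of Fig. 2a is not reproduced in the text), together with illustrative parameter values in the strong-repression regime:

```python
import math

def p_star(a, c, b, n):
    # Steady state of dp/dt = a/(1 + b*p**n) - c*p, found by bisection
    # (the right-hand side is strictly decreasing in p on [0, a/c]).
    f = lambda p: a / (1 + b * p ** n) - c * p
    lo, hi = 0.0, a / c
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def rel_sens(i, theta, h=1e-6):
    # Central-difference relative sensitivity of p* to theta_i.
    up = list(theta); up[i] *= 1 + h
    dn = list(theta); dn[i] *= 1 - h
    return (p_star(*up) - p_star(*dn)) / (2 * h) / p_star(*theta)

a, c, b, n = 100.0, 1.0, 1.0, 4.0             # illustrative; b*p**n >> 1
S_a, S_c, S_b, S_n = (rel_sens(i, [a, c, b, n]) for i in range(4))

assert abs(S_a - 1 / (n + 1)) < 0.01          # S_a ~ +1/(n+1)
assert abs(S_c + 1 / (n + 1)) < 0.01          # S_c ~ -1/(n+1)
assert abs(S_b + 1 / (n + 1)) < 0.01          # S_b ~ -1/(n+1)
p = p_star(a, c, b, n)
assert abs(S_n + n * math.log(p) / (n + 1)) < 0.02   # S_n ~ -n*ln(p*)/(n+1)
```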
A similar sensitivity dichotomy can also be seen in a stochastic model of this gene expression circuit. How could one make sense of such robustness/fragility? We can begin to see the analogy with the negative feedback amplifier. Feedback of protein concentrations has ensured that the protein concentration output of the gene expression circuit will be relatively insensitive to the changing transcription and translation parameters. These parameters depend on many factors such as RNA polymerase levels, ribosome levels, etc., which in turn may depend on what other genes are active. These levels are also expected to be quite different from cell-to-cell. The robustness in these parameters comes at the cost of a new fragility manifested in the sensitive dependence of p ∗ on n. However, this tradeoff appears well worth making, as n depends on the multimerization reaction of the protein P and is therefore not expected to vary much over time or among cells experiencing the same environment. I should also point out that the robustness exhibited by both engineering and biological circuits is not restricted to variations in the parameters, but also applies to disturbances or stresses in the environment. Any extraneous voltage (or a load) at the output of the amplifier will be rejected, and the performance of the circuit will be unchanged. Similarly, an extra source of protein or a sink (e.g., loading due to nonspecific protein binding) will result in little change in the protein concentration, as the circuit acts to correct for such disturbances, particularly for high gain. The main message here is that both the operational amplifier circuit from electrical engineering and the gene expression circuit from biology share some key features, in spite of their vastly different substrates and time-scales. The output of interest in both circuits is robust to parameters that are expected to vary during the system's operation, and sensitive to ones that experience little change. 
These features are characteristic of highly engineered (evolved) systems. Negative feedback is a key strategy for achieving robustness. For this reason, it has been studied extensively in the field of control engineering where the area of robust control has thrived since the late 70s. The effectiveness of this strategy can be seen in both the engineering and biological circuits considered thus far. Indeed, one can study the robustness of an unregulated operational amplifier ($R_1=0$, $R_{2}=\infty$) or a constitutively expressed gene where negative feedback is absent. In each of these cases, even when the variable of interest is chosen to be identical to that observed when feedback is used, this variable will be vulnerable to variations in system parameters. Such parameters include the open-loop gain of the amplifier A or the transcription/translation rates for the gene expression circuit. In both instances, the robustness was brought about by the introduction of feedback. By trading off some gain, robustness to varying parameters is attained.

How negative feedback brings about robustness

One can develop a clear understanding of how negative feedback brings about robustness by looking at the negative feedback amplifier in Fig. 1. For this circuit, one can compute the variable of interest explicitly. In particular, using the simple feedback amplifier model in Fig. 1 a: $$y^{\ast} = \frac{A}{1+A\beta}\approx \frac{1}{\beta}, $$ where the last approximation is due to the high gain of the amplifier ($A\beta$ is typically much larger than one). This shows clearly how negative feedback resulted in a $y^{\ast}$ that is virtually independent of $A=a/c$ and hence robust to variations in both a and c. Compare this to the unregulated open-loop amplifier (no feedback resistors, $\beta=0$) where: $$y^{\ast}= A {\overline{v}}.
$$ Even when $\overline{v}$ is selected so that $y^{\ast}$ is the same for both amplifier configurations, the output $y^{\ast}$ (of the open-loop amplifier) will be far less robust to variations in A, and hence to variations in a and c. For robust operation, A must be very finely tuned. This is much more difficult to achieve in an amplifier than with resistors. Therefore, the feedback amplifier provides robustness properties far superior to those of the unregulated open-loop amplifier. In a very similar way, the regulated gene expression circuit offers better robustness properties than the constitutive gene circuit, especially when it comes to maintaining a steady value of $p^{\ast}$. Intriguingly, gene expression may be viewed as an amplifier that yields a large number of copies of a protein from a single copy of DNA. Autoregulation exchanges some of this high gain to achieve robustness to parameter variations. These parameters include transcription and translation rates and degradation rates. As in the electric amplifier, the autoregulated circuit is not without fragility. Indeed, as Fig. 2 shows, $p^{\ast}$ will be sensitive to variations in the Hill coefficient, n. However, n is not expected to change over the lifetime of the cell. In both the amplifier and the gene expression circuit, the effect of high gain was considered only as far as it affects the steady-state performance of the two systems, both of which exhibited a constant equilibrium at steady-state. However, as we will see in what follows, the same idea applies in other regimes where external signals (e.g., disturbances) are time-varying and the system dynamics never reach a constant steady-state value. To see this, it is useful to understand how feedback attenuates disturbances.
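The contrast between the two configurations can be made concrete by letting the open-loop gain A drift and comparing outputs; the values of beta and the nominal gain A0 below are illustrative:

```python
# Feedback (closed-loop) vs. unregulated (open-loop) output as the
# open-loop gain A drifts.  beta = 0.1 and A0 = 1e5 are assumed values.
beta, A0 = 0.1, 1e5

def closed(A):
    return A / (1 + A * beta)        # y* with negative feedback

v_bar = closed(A0) / A0              # tune the open loop to match at A0

def open_loop(A):
    return A * v_bar                 # y* = A * v_bar without feedback

for A in (A0 / 10, A0 * 10):         # gain drifts by a factor of 10
    drift_closed = abs(closed(A) - closed(A0)) / closed(A0)
    drift_open = abs(open_loop(A) - open_loop(A0)) / open_loop(A0)
    assert drift_closed < 1e-2 < drift_open   # feedback absorbs the drift
```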
In feedback systems, the impact an external disturbance has on an output of interest is often captured by the quantity \(|S|=|1+L|^{-1}\), where \(L\) is the so-called 'loop gain', i.e., the amplification a signal experiences after going once around the feedback loop. For the autoregulatory gene expression circuit, for example, \(L\) is the gain of the circuit if it were to be driven by an orthogonal transcription factor acting on a promoter of the same strength, but one which does not respond to the protein product. For the amplifier, \(L=A\beta\). With no feedback, \(L\) is zero and the disturbance is not attenuated; with high-gain feedback, \(L\) is large and the disturbance is attenuated (small \(|S|\)). In practice, \(L\) will not be the same for all signals. In particular, slowly varying signals will experience a different amplification going through the feedback loop than fast-varying ones. The way one can study how a system responds to fast and slow signals is by evaluating its response to sinusoidal signals of different frequencies (number of cycles per second). This is because a slowly varying disturbance signal is made up of sinusoidal signals of low frequencies, while fast signals also contain high-frequency sinusoids. This makes \(L\) a frequency-dependent function, and for good disturbance rejection it is only necessary that \(L\) be large at the frequencies of the disturbance. So if the system is to be immune to slowly varying disturbances, \(L\) needs to be large at low frequencies. Based on the above discussion, achieving good attenuation of constant disturbances requires only that \(L\) be large at frequency zero (i.e., for constant signals). This is fortunate, since achieving high gain at large frequencies is both difficult and fraught with other side effects. Intriguingly, achieving high gain at zero frequency is not only possible, it can be achieved perfectly, i.e., infinite gain is possible. This leads to perfect adaptation to constant disturbances.
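This frequency dependence can be sketched with a toy loop gain. Below, \(L(\omega) = k/(1 + j\omega/\omega_c)\) is an assumed first-order roll-off (not a model from the text), with \(k\) and \(\omega_c\) chosen arbitrarily; the point is only that \(|S| = |1+L|^{-1}\) is small exactly where \(|L|\) is large:

```python
# Sketch: frequency-dependent sensitivity |S| = 1/|1 + L| for an assumed
# first-order loop gain L(j*omega) = k/(1 + j*omega/wc). The gain k and
# corner frequency wc are illustrative, not from the text.

def sensitivity(omega, k=100.0, wc=1.0):
    L = k / (1.0 + 1j * omega / wc)   # loop gain at frequency omega (rad/s)
    return abs(1.0 / (1.0 + L))

s_slow = sensitivity(0.01)    # slow disturbance: |L| ~ k, strong attenuation
s_fast = sensitivity(1e4)     # fast disturbance: |L| ~ 0, no attenuation
print(s_slow)   # ~0.01
print(s_fast)   # ~1.0
```

Slow disturbances see the full loop gain and are attenuated roughly a hundredfold; fast disturbances pass through essentially unattenuated.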
But what feedback dynamics can possibly offer infinite gain at zero frequency? The answer is integral feedback. This means that the signal that is fed back is first passed through an integrator, which integrates the signal with respect to time, thereby incorporating a measure of its past history into the feedback. Accordingly, the output of an integrator when the input (the integrand) is \(\cos(\omega t)\) is given by \(\omega^{-1}\sin(\omega t)\), where \(\omega\) is the frequency. This shows that the gain of the integrator is \(\omega^{-1}\), which is indeed infinite at zero frequency. For this reason, integral feedback is a strategy that is very common in engineering and, as is becoming increasingly appreciated, also in biology. One well-studied example in biology where integral feedback has been implicated in robust perfect adaptation is bacterial chemotaxis [8, 9], in which the tumbling rate of a bacterium perfectly adapts to a change in nutrient concentration. This strategy allows bacteria to respond to a change in the concentration of a nutrient regardless of the absolute concentration level. Other examples are calcium homeostasis [10] and yeast stress response [11].

Robustness to environmental disturbances

I have argued that one way to maintain good performance in systems where some parameters are difficult to keep constant is to use negative feedback to trade off high gain with robustness to these parameters. Such a strategy can also be applied to achieve adaptation to time-varying environmental disturbances, whereby high gain at certain frequencies can be translated into rejection of disturbances at those frequencies. This could be demonstrated by introducing an external disturbance into our gene expression circuit (for example, another source of protein production or degradation) and then, depending on the disturbance frequency (e.g., whether it is slowly or rapidly varying), showing that adaptation to this disturbance is achieved by the negative feedback circuit.
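The \(\omega^{-1}\) gain of an integrator can be checked numerically: integrating \(\cos(\omega t)\) (here with a simple forward-Euler sum; the step size and the number of periods are arbitrary choices) produces an output whose peak amplitude tracks \(1/\omega\):

```python
import math

# Numerical check that an integrator has gain 1/omega: the integral of
# cos(omega*t) is sin(omega*t)/omega, whose peak amplitude is 1/omega.

def integrator_peak(omega, dt=1e-3):
    """Integrate cos(omega*t) over four full periods; return the peak |output|."""
    t_end = 4 * 2 * math.pi / omega
    acc, t, peak = 0.0, 0.0, 0.0
    while t < t_end:
        acc += math.cos(omega * t) * dt   # forward-Euler accumulation
        peak = max(peak, abs(acc))
        t += dt
    return peak

for omega in (0.5, 1.0, 2.0):
    print(omega, integrator_peak(omega), 1.0 / omega)  # peak tracks 1/omega
```

Halving the input frequency doubles the peak output, consistent with a gain that diverges as \(\omega \to 0\).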
Instead, however, I will look at another system: renewal control in stem cells [12]. Figure 3 shows the renewal control of a stem cell (type 1). The stem cell's progeny can either be a stem cell (regeneration) or it can differentiate into a terminal, post-mitotic (type 2) cell. Feedback acts upon the stem cell to affect its probability of regeneration. A very simple model for this system can be written as: $$\begin{aligned} \dot x_{1} &= (2p_{r}(x_{2})-1)v x_{1} \\ \dot x_{2} &= 2p_{d}(x_{2})\, v x_{1} - d x_{2}, \end{aligned}$$ where \(x_1\) and \(x_2\) denote the concentration of stem cells and terminal cells, respectively, \(v\) is the cell-division rate, \(p_d\) is the probability that a daughter cell differentiates in a given division, \(p_r\) is the probability that a daughter remains a stem cell after division, and \(d\) is the probability that the terminal cell dies in a unit time. Negative feedback achieves renewal control because \(p_r\) and \(p_d\) depend on \(x_2\) in such a way that \(p_r(x_2)+p_d(x_2)=1\). For constant values of \(p_r\) and \(p_d\) (i.e., no feedback regulation), the trajectory of \(x_1\) blows up for \(p_r>0.5\) and tends to zero for \(p_r<0.5\), indicating that a robust nonzero steady state requires negative feedback.

Fig. 3 Renewal control. a A stem cell (type 1) can either regenerate or differentiate into a terminal post-mitotic cell (type 2). Negative feedback acts to affect the probability of regeneration. b The effect of sinusoidal variation in \(d\) (the disturbance) on \(|S|\), the so-called 'sensitivity function', as a function of disturbance frequency. \(|S|\) is in turn related to the size of the corresponding fluctuation of the population of terminal cells (type 2). \(n\) reflects the strength of feedback, with stronger feedback resulting in better disturbance rejection (better robustness) at lower frequencies, at the price of amplifying the effect of disturbances at mid-frequencies (fragility).
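The robustness of the feedback-regulated steady state can be illustrated with a quick simulation of Eq. 1. The sketch below uses the Hill-type feedback \(p_r(x_2)=1/\big(\tfrac32+\tfrac12(x_2/a)^n\big)\) introduced in the next paragraph, here with \(n=2\), and illustrative values for \(v\), \(d\), and the initial state; the terminal-cell steady state comes out as \(x_2^{\ast}=a\) regardless of \(v\) and \(d\):

```python
# Sketch: forward-Euler simulation of the renewal-control model (Eq. 1)
# with the feedback p_r(x2) = 1/(3/2 + (x2/a)^n / 2) and p_d = 1 - p_r.
# The rates v and d, the gain n = 2, and the initial state are illustrative
# assumptions. The terminal-cell steady state should be x2* = a for any v, d.

def simulate(v, d, a=1.0, n=2, x1=0.5, x2=0.5, dt=1e-3, steps=100_000):
    for _ in range(steps):
        p_r = 1.0 / (1.5 + 0.5 * (x2 / a) ** n)   # renewal probability
        p_d = 1.0 - p_r                            # differentiation probability
        dx1 = (2.0 * p_r - 1.0) * v * x1
        dx2 = 2.0 * p_d * v * x1 - d * x2
        x1 += dx1 * dt
        x2 += dx2 * dt
    return x1, x2

# Very different division (v) and death (d) rates give the same x2*.
_, x2_a = simulate(v=1.0, d=0.5)
_, x2_b = simulate(v=2.0, d=1.5)
print(x2_a, x2_b)   # both close to a = 1.0
```

At steady state \(\dot x_1 = 0\) forces \(p_r(x_2)=1/2\), hence \(x_2^{\ast}=a\): the feedback pins the terminal-cell concentration independently of the rate parameters.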
Not only does negative feedback bring about a stable nonzero steady-state value for \(x_1\) and \(x_2\), but it also achieves some robustness: the concentration of terminal cells at steady state depends only on the relationship between \(x_2\) and \(p_r\) (the feedback term), and not on other system parameters [13]. This is reminiscent of both the feedback amplifier and the gene expression circuit. To understand the response to dynamic external perturbations, it is necessary to specify a form for the feedback function \(p_r(\cdot)\). Following [12], I take \(p_{r}(x_{2})=1/\left(\frac{3}{2}+\frac{1}{2}\left(\frac{x_{2}}{a}\right)^{n}\right)\). Here \(n\) reflects the strength of the feedback (feedback gain) and \(a\) is the value of \(x_2\) at which a balance of regeneration and differentiation is achieved. I can now evaluate how fluctuations in \(d\), the rate of loss of terminally differentiated stem cell progeny, affect the terminal cell population \(x_2\). Such fluctuations occur because of injury, disease, or patterns of organ use [12]. A change in \(d\) to a new value that remains constant over time will have no effect on the steady-state population of the terminal cells. Moreover, the effect of slow fluctuations in \(d\) on the terminal cell population will be attenuated, and the higher the gain \(n\), the larger the attenuation. However, this robustness comes at a cost: the effect of fast-changing fluctuations in \(d\) will not be attenuated, and may in fact be amplified. This is the fragility introduced by the same feedback that achieves robustness to slowly varying \(d\). The tradeoffs can be better appreciated by examining the effect of sinusoidal fluctuations in \(d\) on the size of the corresponding fluctuations in \(x_2\). This is captured in Fig. 3b: the horizontal axis measures the frequency \(\omega\) of the sinusoidal fluctuations in \(d\), while the vertical axis shows \(|S(\omega)|\), a quantity that is related to the effect of these fluctuations on the terminal cell concentration \(x_2\).
Note that lower-frequency (slow) fluctuations are well attenuated, unlike those at higher frequencies. At mid-frequencies, the fluctuations are actually amplified. The stronger the feedback (higher \(n\)), the better the attenuation of lower-frequency sinusoidal fluctuations, but the higher the amplification of mid-frequency fluctuations in \(d\). This is a robustness/fragility tradeoff that cannot be overcome with higher-gain feedback. It also demonstrates a 'conservation of robustness': the more robustness is achieved at lower frequencies, the less robustness, and the more fragility, remains at other frequencies. This conservation law has been studied extensively in the control literature [14, 15], and is characterized by a class of Bode sensitivity integral formulae of the form \(\int_{0}^{\infty} \log|S(\omega)| \cdot f(\omega)\, d\omega = \text{constant}\), where the constant and \(f(\cdot)\) are independent of \(n\). In our particular example, it can be encapsulated as follows: no matter the feedback gain \(n\), the area below the gray line \(|S|=1\) is approximately equal to that above it (Fig. 3b). In other words, whenever robustness is realized (area below the gray line), fragility is created elsewhere (area above the gray line). Of course, as in the previous examples, a good system design ensures that the necessary fragility is arranged such that its effect is only seen when unexpected or unnatural disturbances are encountered, while robustness is achieved exactly when natural or common disturbances are encountered. The frequency-dependent fragility that I just described appears routinely in man-made systems, but it has also been observed and reported in models of biological systems such as glycolysis [16] and cell lineage [12]. A recent research study [17] explored how yeast cells interpret environmental information that varies over time.
The researchers examined cellular growth under various frequencies of oscillating osmotic stress and found that growth was in fact severely inhibited at a particular resonance frequency. They wrote: "although this feature is critical for coping with natural challenges — like continually increasing osmolarity — it results in a tradeoff of fragility to non-natural oscillatory inputs...". The authors aptly refer to this hyper-sensitivity as the Achilles' heel of the yeast's MAPK signaling network.

Effect of topology on robustness-fragility tradeoffs

In the previous example, I showed that different feedback strengths lead to different tradeoffs of a conserved quantity (the available robustness), but did not increase the amount of that quantity. An intriguing question is whether one can enhance overall robustness by changing the topology of the network being regulated. In that case, robustness may still be conserved across different feedback strengths, but the total amount that is conserved may possibly be increased. It turns out that this is indeed possible, and that some topologies are inherently more capable of delivering robustness than others. One example from engineering demonstrates this point clearly. It is well known that vehicle steering can be achieved either by turning the front wheels or the rear wheels of a vehicle. However, with the exception of slow vehicles like mobile cranes and forklift trucks, vehicles are almost universally steered using their front wheels. Why? The main reason is that vehicles that use rear-wheel steering exhibit what is called 'non-minimum phase' dynamics [18]. Systems with such dynamics respond to inputs in a non-intuitive way: they first respond in the opposite direction of the input before responding in the expected direction.
The reader may have noticed that when driving a car in reverse (analogous to rear-wheel steering in forward driving), turning the steering wheel in one direction leads the car to initially move slightly in the opposite direction before moving in the intended direction once its orientation has changed. A passenger sitting near the center of mass will feel an acceleration force that quickly switches direction after the initiation of a turn. The non-minimum phase dynamics make the car very difficult to steer at higher speeds, necessitating very slow movements (low-bandwidth control) for stability. In contrast, when the steering is relegated to the front wheels, the non-minimum phase dynamics disappear, and robust control of the vehicle is much easier to achieve. Before I show biological examples where these very same dynamics limit robustness, I will give one other engineering example where topological choices of a different nature place limits on achievable robustness. The X-29 experimental airplane shown in Fig. 4 was designed with peculiar forward-swept wings and large canard surfaces for increased maneuverability and aerodynamic efficiency. The designers knew that this configuration made the airplane dynamically unstable without control, and that computer feedback regulation would therefore be essential for its operation. While the X-29 was indeed flyable using negative feedback regulation, it turned out (quite unexpectedly) that its robustness margins were too small to meet specifications. In control theory jargon, the available bandwidth was too small given the unstable dynamics [19]. The aircraft also exhibited non-minimum phase dynamics, which conspired with the unstable dynamics to place substantial limitations on the achievable robustness. The problem could not be overcome with better feedback control systems.
Indeed, while engineers could select feedback controllers that increase robustness where it was needed at the expense of fragility elsewhere, the total available robustness to be eked out was limited. The limitations were severe enough that no acceptable control system was ever found, even though several design teams from different companies worked on the problem. The airplane was only allowed to fly because of special specification relief that was granted to it as an experimental airplane [19]. In retrospect, the choice of topology imposed a fundamental performance limitation that could not be overcome by feedback regulation. With a different topology (e.g., a less severe instability, or even a stable airframe with traditional backward-swept wings), feedback designs that meet robustness specifications might easily have been found. Though the robustness-fragility tradeoffs would still exist, good designs are much easier to achieve, and the best designs are far more robust than those achievable in a forward-swept aircraft like the X-29. In essence, the improved topology ensures that there is more overall robustness to be traded off.

Fig. 4 X-29 experimental aircraft. The forward-swept wing configuration of the X-29 makes the design of robust feedback control systems more difficult compared to more conventional aircraft. (Courtesy of NASA)

In biology, the effect of topology on the available robustness can be used to assess different candidate models, and to favor some over others. It was exactly such considerations that led to a re-evaluation of the plausibility of the renewal control topology in Fig. 3. Indeed, computational analysis of this topology showed poor robustness properties, including unfavorable rejection of periodic disturbances over certain frequency ranges. Dynamical analysis of this topology revealed the presence of non-minimum phase dynamics similar to those exhibited by a rear-wheel-steered vehicle or the X-29.
To see this, let us revisit the renewal dynamics modeled by Eq. 1 and consider the effect of a sudden increase in the renewal rate \(p_r\) on the concentration of terminal cells \(x_2\). Such an increase in the probability of stem cell renewal will immediately result in a reduction in the probability of differentiation of stem cells into terminal cells, causing \(x_2\) to start decreasing. However, the subsequent buildup in the stem cell population will lead to a gradual increase in the rate of differentiation, reversing the decreasing trend and leading to an ultimate increase in \(x_2\): exactly the type of response reminiscent of non-minimum phase dynamics. As in the rear-wheel-steered vehicle example, the unavoidable implication is that the topology has structural properties that can be expected to reduce achievable robustness. Since such non-minimum phase dynamics are attributable to the direct coupling between the probabilities of renewal and differentiation, one can alter the topology to rid it of these dynamics, leading to far superior robustness properties. One such topology, corresponding to a so-called fate control strategy, is realized by simply allowing lineage branching (Fig. 5), whereby stem cells can differentiate along a third trajectory, such as producing a different cell type, dying, or simply becoming quiescent. Examples of such branching exist during development and regeneration in various tissues; see [12] and the references therein.

Fig. 5 Fate control. a A stem cell (type 1) can either regenerate, differentiate into a terminal post-mitotic cell (type 2), or take a third, alternative fate that leads to a new branch. Negative feedback of the population of terminal cells acts on the probabilities of regeneration and differentiation, which necessarily leads to a positive feedback on the alternative fate, as the three probabilities must sum to 1. b The effect of sinusoidal variation in \(d\) (the disturbance) on \(|S|\), the 'sensitivity function', as a function of disturbance frequency. \(|S|\) is related to the size of the corresponding fluctuation of the population of terminal cells. \(n\) reflects the strength of feedback. For this fate control model, \(p_r(x_2)\) is the same as in the renewal control case, while \(p_d(x_2)\) is taken to be \(p_r(x_2)/2\). As in renewal control, stronger feedback results in better disturbance rejection (better robustness) at lower frequencies, at the price of poor disturbance rejection at mid-frequencies. Unlike renewal control, however, the system has significantly more capacity for disturbance rejection (more overall robustness), as indicated by the much larger area below the gray line.

In the fate control topology, the dynamics of the system reflect the fact that descendants of cell type 1, in addition to regulating the probability of stem cell regeneration and differentiation, also regulate the probability of production of a differentiated cell of a new type (Fig. 5). The dynamics of this new topology are given by: $$\begin{array}{@{}rcl@{}} \dot x_{1} &=& (2p_{r}(x_{2})-1)vx_{1} \\ \dot x_{2} &=& 2p_{d}(x_{2}) vx_{1} -dx_{2}\\ \dot x_{3} &=& 2p_{a}(x_{2}) vx_{1} - d_{3} x_{3}, \end{array} $$ where \(p_a(x_2)\) is the probability of choosing the new cell-type fate and \(d_3\) is the corresponding death rate of the new cell type. In this case, \(p_r(x_2)+p_d(x_2)+p_a(x_2)=1\) holds. This means that negative regulation of \(p_r(x_2)\) need not imply positive regulation of \(p_d(x_2)\), as their sum is no longer restricted to one. Instead, simultaneous negative regulation of \(p_r(x_2)\) and \(p_d(x_2)\) is possible, and in fact has superior robustness properties, as can be seen in Fig. 5. Indeed, using the same realization of \(p_r(x_2)\) as in renewal control, the fate control topology is much better at rejecting disturbances as well as other perturbations.
This is reflected in Fig. 5b: compared with renewal control (Fig. 3b), the region of amplification above the gray line is considerably smaller, and the resonant peak, where fragility, and hence disturbance amplification, is at its maximum, is much lower. Another biological example that demonstrates a similar role of topology in enhancing the overall achievable robustness can be found in glycolysis [16]. In this autocatalyzed process, ATP feedback-inhibits the phosphofructokinase (PFK) reactions. Pyruvate kinase (PK) reactions are also known to be inhibited by ATP. Simple models of glycolysis that include only the PFK feedback but neglect the PK feedback can exhibit unstable as well as non-minimum phase dynamics. Just as in the X-29, this places severe limitations on the achievable robustness: regardless of the PFK feedback strategy used, the system will have too much fragility. By simply bringing in the PK feedback, these unfavorable robustness tradeoffs disappear, and the resulting topology has far better dynamic performance.

In advanced engineering systems as well as in biological systems, robustness is a property of a specific functionality or performance measure. When present, it indicates that the relevant function is relatively immune to certain perturbations, such as variations of system parameters or external disturbances that are expected to occur during the system's lifetime. While this robustness is desirable, it is often not possible for a function or a performance measure to be robust to variations in all possible system parameters or to all perturbations. To be sure, optimized systems, whether engineered or evolved, are often sensitive to specific perturbations. But this typically does not pose any severe drawbacks, as these perturbations are not expected to be encountered frequently in the life of the system. This robust yet fragile character of such systems has implications for modeling and parameter inference.
Naturally, when a measured variable is robust to some parameters, one expects that inferring those parameters from measurements of the variable will be challenging, leading to a practical lack of identifiability of these parameters. I have argued that one way to achieve robustness to a set of parameters or disturbances is to use negative feedback, which allows high gain to be traded off for robustness to parameter variations or external disturbances. I have demonstrated these tradeoffs for constant disturbances at steady-state values of the output of interest, as well as for time-varying disturbances, where rejection of disturbances at some frequencies can be achieved at the expense of poor disturbance rejection at other frequencies. Such are the robustness tradeoffs of feedback, which can be encapsulated in quantitative conservation laws. The compelling aspect of such tradeoffs is their universality. As we have seen in this article, they apply to feedback systems regardless of their substrate and specific implementation details. They are the conservation laws of robustness that natural and man-made systems alike must obey.

Department of Biosystems Science and Engineering, ETH Zürich, Switzerland

References

1. Lazebnik Y. Can a biologist fix a radio? Or, what I learned while studying apoptosis. Cancer Cell. 2002;2:179–82.
2. Csete M, Doyle J. Reverse engineering of biological complexity. Science. 2002;295:1664–9.
3. Kitano H. Biological robustness. Nat Rev Genet. 2004;5:826–37.
4. Stelling J, Sauer U, Szallasi Z, Doyle F, Doyle J. Robustness of cellular functions. Cell. 2004;118:675–85.
5. Kitano H. Towards a theory of biological robustness. Mol Syst Biol. 2007;3:137.
6. Sedra A, Smith K. Microelectronic circuits. 4th ed. Oxford University Press; 1997.
7. Alon U. An introduction to systems biology: design principles of biological circuits. Chapman & Hall/CRC; 2007.
8. Alon U, Surette MG, Barkai N, Leibler S. Robustness in bacterial chemotaxis. Nature. 1999;397:168–71.
9. Yi T-M, Huang Y, Simon M, Doyle J. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. Proc Natl Acad Sci U S A. 2000;97:4649–53.
10. El-Samad H, Goff JP, Khammash M. Calcium homeostasis and parturient hypocalcemia: an integral feedback perspective. J Theor Biol. 2002;214:17–29.
11. Muzzey D, Gómez-Uribe CA, Mettetal JT, van Oudenaarden A. A systems-level analysis of perfect adaptation in yeast osmoregulation. Cell. 2009;138:160–71.
12. Buzi G, Lander A, Khammash M. Cell lineage branching as a strategy for proliferative control. BMC Biol. 2015;13:13.
13. Lander AD, Gokoffski KK, Wan FYM, Nie Q, Calof AL. Cell lineages and the logic of proliferative control. PLoS Biol. 2009;7:e15.
14. Åström KJ, Murray RM. Feedback systems: an introduction for scientists and engineers. Princeton University Press; 2008.
15. Skogestad S, Postlethwaite I. Multivariable feedback control: analysis and design. 2nd ed. John Wiley and Sons; 2005.
16. Chandra F, Buzi G, Doyle J. Glycolytic oscillations and limits on robust efficiency. Science. 2011;333:187–92.
17. Mitchell A, Wei P, Lim WA. Oscillatory stress stimulation uncovers an Achilles' heel of the yeast MAPK signaling network. Science. 2015;350:1379–83.
18. Karnopp D. Vehicle stability. New York/Basel: Marcel Dekker; 2003.
19. Stein G. Respect the unstable. IEEE Control Syst Mag. 2003;23:12–25.
20. Savageau M. Concepts relating the behavior of biochemical systems to their underlying molecular properties. Arch Biochem Biophys. 1971;145:612–21.
\begin{document} \title{Uniform estimates for metastable transition times in a coupled bistable system} \author{Florent Barret} \address{CMAP UMR 7641, \'Ecole Polytechnique CNRS, Route de Saclay, 91128 Palaiseau Cedex France; {\rm email: [email protected]}} \author{ Anton Bovier} \address{Institut f\"ur Angewandte Mathematik, Rheinische Friedrich-Wilhelms-Universit\"at, Endenicher Allee 60, 53115 Bonn, Germany; {\rm email: [email protected]}} \author{Sylvie M\'el\'eard} \address{CMAP UMR 7641, \'Ecole Polytechnique CNRS, Route de Saclay, 91128 Palaiseau Cedex France; {\rm email: [email protected]}} \maketitle \begin{abstract} We consider a coupled bistable $N$-particle system on ${\Bbb R}^N$ driven by a Brownian noise, with a strong coupling corresponding to the synchronised regime. Our aim is to obtain sharp estimates on the metastable transition times between the two stable states, both for fixed $N$ and in the limit when $N$ tends to infinity, with error estimates uniform in $N$. These estimates are a main step towards a rigorous understanding of the metastable behavior of infinite dimensional systems, such as the stochastically perturbed Ginzburg-Landau equation. Our results are based on the potential theoretic approach to metastability. \end{abstract} \emph{MSC 2000 subject classification:} 82C44, 60K35. \emph{Key-words:} Metastability, coupled bistable systems, stochastic Ginzburg-Landau equation, metastable transition time, capacity estimates. \section{Introduction}\label{section.intro} The aim of this paper is to analyze the behavior of metastable transition times for a gradient diffusion model, independently of the dimension. Our method is based on potential theory and requires the existence of a reversible invariant probability measure. This measure exists for Brownian driven diffusions with gradient drift. To be specific, we consider here a model of a chain of coupled particles in a double well potential driven by Brownian noise (see e.g. 
\cite{berglund107}). I.e., we consider the system of stochastic differential equations \begin{equation}\label{sde.1} \mathrm{d}X_\epsilon(t)=-\nabla F_{\gamma,N}(X_\epsilon(t))\mathrm{d}t+ \sqrt{2\epsilon}\mathrm{d}B(t), \end{equation} where $X_\epsilon(t)\in {\Bbb R}^N$ and \begin{equation}\label{potential.1} F_{\gamma,N}(x)= \sum_{i\in\Lambda}\left(\frac14x_i^4-\frac12x_i^2\right) +\frac{\gamma}{4}\sum_{i\in\Lambda}(x_i-x_{i+1})^2, \end{equation} with $\Lambda={\Bbb Z}/N{\Bbb Z}$ and $\gamma>0$ a parameter. Here $B$ is an $N$-dimensional Brownian motion and $\epsilon>0$ is the intensity of the noise. Each component (particle) of this system is subject to a force derived from a bistable potential. The components of the system are coupled to their nearest neighbors with intensity $\gamma$ and perturbed by independent noises of constant variance $\epsilon$. While the system without noise, i.e. $\epsilon=0$, has several stable fixed points, for $\epsilon>0$ transitions between these fixed points will occur at suitable timescales. Such a situation is called metastability. For fixed $N$ and small $\epsilon$, this problem has been widely studied in the literature and we refer to the books by Freidlin and Wentzell \cite{freidlinwentzell} and Olivieri and Vares \cite{OlivieriVares} for further discussion. In recent years, the potential theoretic approach, initiated by Bovier, Eckhoff, Gayrard, and Klein \cite{bovier04} (see \cite{bovier09} for a review), has allowed one to give very precise results on such transition times and notably led to a proof of the so-called Eyring-Kramers formula, which provides sharp asymptotics for these transition times for any fixed dimension. However, the results obtained in \cite{bovier04} do not include control of the error terms that is uniform in the dimension of the system. Our aim in this paper is to obtain such uniform estimates.
These estimates constitute the main step towards a rigorous understanding of the metastable behavior of infinite dimensional systems, i.e. stochastic partial differential equations (SPDEs) such as the stochastically perturbed Ginzburg-Landau equation. Indeed, the deterministic part of the system \eqref{sde.1} can be seen as the discretization of the drift part of this SPDE, as has been noticed e.g. in \cite{berglund207}. For a heuristic discussion of the metastable behavior of this SPDE, see e.g. \cite{maierstein} and \cite{west}. Rigorous results on the level of the large deviation asymptotics were obtained e.g. by Faris and Jona-Lasinio \cite{fajona}, Martinelli et al. \cite{martin}, and Brassesco \cite{stella}. In the present paper we consider only the simplest situation, the so-called synchronization regime, where the coupling $\gamma$ between the particles is so strong that there are only three relevant critical points of the potential $F_{\gamma,N}$ \eqref{potential.1}. A generalization to more complex situations is, however, possible and will be treated elsewhere. The remainder of this paper is organized as follows. In Section 2 we briefly recall the main results from the potential theoretic approach, we recall the key properties of the potential $F_{\gamma,N}$, and we state the results on metastability that follow from the results of \cite{bovier04} for fixed $N$. In Section 3 we deal with the case when $N$ tends to infinity and state our main result, Theorem \ref{main}. In Section 4 we prove the main theorem through sharp estimates on the relevant capacities.
In the remainder of the paper we adopt the following notation: \begin{itemize} \item for $t\in{\Bbb R}$, $\lfloor t \rfloor$ denotes the unique integer $k$ such that $k\leq t<k+1$; \item $\tau_D\equiv\inf \{t>0:X_t\in D\}$ is the hitting time of the set $D$ for the process $(X_t)$; \item $B_r(x)$ is the ball of radius $r>0$ and center $x\in{\Bbb R}^N$; \item for $p\geq 1$ and a sequence $(x_k)^N_{k=1}$, we denote the $L^p$-norm of $x$ by \begin{equation} \|x\|_p=\left(\sum^N_{k=1}|x_k|^p\right)^{1/p}. \end{equation} \end{itemize} \noindent\textbf{Acknowledgments.} This paper is based on the master's thesis of F.B. \cite{barret07}, which was written in part during a research visit of F.B. to the International Research Training Group ``Stochastic Models of Complex Systems'' at the Berlin University of Technology under the supervision of A.B. F.B. thanks the IRTG SMCP and TU Berlin for their kind hospitality and the ENS Cachan for financial support. A.B.'s research is supported in part by the German Research Council through the SFB 611 and the Hausdorff Center for Mathematics. \section{Preliminaries}\label{section.sharp} \subsection{Key formulas from the potential theory approach} We recall briefly the basic formulas from potential theory that we will need here. The diffusion $X_{\epsilon}$ is the one introduced in \eqref{sde.1} and its infinitesimal generator is denoted by $L$. Note that $L$ is the closure of the operator \begin{equation}\Eq(generator) L= \epsilon e^{F_{\gamma,N}/\epsilon} \nabla e^{-F_{\gamma,N}/\epsilon}\nabla. \end{equation} For $A,D$ regular open subsets of ${\Bbb R}^N$, let $h_{A,D}(x)$ be the harmonic function (with respect to the generator $L$) with boundary conditions $1$ in $A$ and $0$ in $D$. Then, for $x\in (A\cup D)^c$, one has $h_{A,D}(x)={\Bbb P}_x[\tau_A<\tau_D]$. The equilibrium measure, $e_{A,D}$, is then defined (see e.g.
\cite{chungwalsh}) as the unique measure on $\partial A$ such that \begin{equation} \Eq(aa.1) h_{A,D}(x)= \int_{\partial A}e^{-F_{\gamma,N}(y)/\epsilon} G_{D^c}(x,y) e_{A,D}(dy), \end{equation} where $G_{D^c}$ is the Green function associated with the generator $L$ on the domain $D^c$. This yields readily the following formula for the hitting time of $D$ (see e.g. \cite{bovier04}): \begin{equation}\label{key.1} \int_{\partial A}{\Bbb E}_{z}[\tau_D]e^{-F_{\gamma,N}(z)/\epsilon}e_{A,D}(dz) = \int_{D^c}h_{A,D}(y)e^{-F_{\gamma,N}(y)/\epsilon}dy. \end{equation} The capacity, $\hbox{\rm cap}(A,D)$, is defined as \begin{equation}\Eq(aa.2) \hbox{\rm cap}(A,D)=\int_{\partial A} e^{-F_{\gamma,N}(z)/\epsilon}e_{A,D}(dz). \end{equation} Therefore, \begin{equation}\label{key.2} \n_{A,D}(dz)=\frac{e^{-F_{\gamma,N}(z)/\epsilon}e_{A,D}(dz)}{\hbox{\rm cap}(A,D)} \end{equation} is a probability measure on $\partial A$, which we may call the equilibrium probability. The equation \eqref{key.1} then reads \begin{equation}\label{key.3} \int_{\partial A}{\Bbb E}_z[\tau_D]\n_{A,D}(dz) = {\Bbb E}_{\n_{A,D}}[\tau_D] = \frac{\int_{D^c}h_{A,D}(y)e^{-F_{\gamma,N}(y)/\epsilon}dy}{\hbox{\rm cap}(A,D)}. \end{equation} The strength of this formula comes from the fact that the capacity has an alternative representation through the Dirichlet variational principle (see e.g. \cite{fukushima}), \begin{equation}\label{cap.1} \hbox{\rm cap}(A,D)=\inf_{h\in{\cal H}}\Phi(h), \end{equation} where \begin{equation}\label{cap.2} {\cal H}=\Big\{h\in W^{1,2}(\mathbb{R}^N, e^{-F_{\gamma,N}(u)/\epsilon}du)\,|\,\forall z\,, h(z)\in[0,1]\,, h_{|A}=1\,, h_{|D}=0\Big\}, \end{equation} and the Dirichlet form $\Phi$ is given, for $h\in{\cal H}$, as \begin{equation}\label{cap.3} \Phi(h)=\epsilon\int_{(A\cup D)^c}e^{-F_{\gamma,N}(u)/\epsilon}\norm{\nabla h(u)}_2^2du. 
\end{equation} \begin{remark} Formula \eqref{key.3} gives an average of the mean transition time with respect to the equilibrium measure, which we will use extensively in what follows. A way to obtain the quantity ${\Bbb E}_{z}[\tau_D]$ consists in using H\"older and Harnack estimates \cite{gt} (as developed in Corollary \ref{sharp-point}) \cite{bovier04}, but it is far from obvious whether this can be extended to give estimates that are uniform in $N$. \end{remark} Formula \eqref{key.3} highlights the two terms for which we will prove uniform estimates: the capacity (Proposition \ref{capacity}) and the mass of $h_{A,D}$ (Proposition \ref{numerator}). \subsection{Description of the Potential} Let us describe in detail the potential $F_{\gamma,N}$, its stationary points, and in particular the minima and the 1-saddle points, through which the transitions occur. The coupling strength $\gamma$ specifies the geometry of $F_{\gamma,N}$. For instance, if we set $\gamma=0$, we get a set of $N$ independent bistable particles, thus the stationary points are \begin{equation}\label{potential.2} x^*=(\xi_1,\dots,\xi_N)\quad\forall i\in\llbracket1,N\rrbracket ,\, \xi_i\in\{-1,0,1\}. \end{equation} To characterize their stability, we look at their Hessian matrices: the signs of the eigenvalues determine the saddle index of each point. It can be easily shown that, for $\gamma=0$, the minima are those of the form \eqref{potential.2} with no zero coordinates and the 1-saddle points have just one zero coordinate. As $\gamma$ increases, the structure of the potential evolves and the number of stationary points decreases from $3^N$ to $3$. We notice that, for all $\gamma$, the points \begin{equation}\label{potential.3} I_{\pm}=\pm(1,1,\cdots,1)\quad O=(0,0,\cdots,0) \end{equation} are stationary; furthermore, $I_{\pm}$ are minima. 
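These stationarity claims are easy to check numerically. The sketch below (Python with numpy) reconstructs $F_{\gamma,N}$ in the Ginzburg--Landau form implied by \eqref{potential.6}; this reconstruction is our assumption, since \eqref{potential.1} is stated outside this excerpt. It verifies that the gradient vanishes at $I_\pm$ and $O$ for several values of $\gamma$, and at a mixed point of the form \eqref{potential.2} when $\gamma=0$:

```python
import numpy as np

def grad_F(x, gamma):
    # For F(x) = sum_i (x_i^4/4 - x_i^2/2) + (gamma/4) sum_i (x_{i+1} - x_i)^2
    # with periodic boundary conditions, the i-th partial derivative is
    # x_i^3 - x_i + (gamma/2)(2 x_i - x_{i+1} - x_{i-1}).
    return x**3 - x + (gamma / 2) * (2 * x - np.roll(x, 1) - np.roll(x, -1))

N = 10
I_plus, O = np.ones(N), np.zeros(N)
for gamma in (0.0, 1.0, 10.0):
    for point in (I_plus, -I_plus, O):
        assert np.allclose(grad_F(point, gamma), 0)

# For gamma = 0 every point with coordinates in {-1, 0, 1} is stationary:
x_star = np.array([1.0, -1.0, 0.0, 1.0, 0.0, -1.0, 1.0, 1.0, -1.0, 0.0])
assert np.allclose(grad_F(x_star, 0.0), 0)
```

As expected, the mixed point $x^*$ is no longer critical once $\gamma>0$, since the discrete-Laplacian term couples neighboring coordinates.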
If we calculate the Hessian at the point $O$, we have \begin{equation}\label{potential.4} \nabla^2F_{\gamma,N}(O)= \begin{pmatrix} -1+\gamma&-\frac{\gamma}{2}&0&\cdots&0&-\frac{\gamma}{2}\\ -\frac{\gamma}{2}&-1+\gamma&-\frac{\gamma}{2}&&&0\\ 0&-\frac{\gamma}{2}&\ddots&\ddots&&\vdots\\ \vdots&&\ddots&\ddots&\ddots&0\\ 0&&&\ddots&\ddots&-\frac{\gamma}{2}\\ -\frac{\gamma}{2}&0&\cdots&0&-\frac{\gamma}{2}&-1+\gamma \end{pmatrix}, \end{equation} whose eigenvalues are, for all $\gamma>0$ and for $0\leq k\leq N-1$, \begin{equation}\label{eigenvalue.1} \lambda_{k,N}=-\left(1-2\gamma\sin^2\left(\frac{k\pi}{N}\right)\right). \end{equation} Set, for $k\geq1$, $\gamma_k^N=\frac{1}{2\sin^2\left(k\pi/N\right)}$. Then these eigenvalues can be written in the form \begin{equation}\label{eigenvalue.2} \begin{cases} \lambda_{k,N}&=\lambda_{N-k,N}=-1+\frac{\gamma}{\gamma^N_k},\, 1\leq k\leq N-1\\ \lambda_{0,N}&=\lambda_0=-1. \end{cases} \end{equation} Note that $(\gamma^N_k)_{k=1}^{\lfloor N/2\rfloor}$ is a decreasing sequence, and so as $\gamma$ increases, the number of non-positive eigenvalues $(\lambda_{k,N})_{k=0}^{N-1}$ decreases. When $\gamma>\gamma_1^N$, the only negative eigenvalue is $-1$. Thus \begin{equation} \gamma_1^N=\frac{1}{2\sin^2(\pi/N)} \end{equation} is the threshold of the synchronization regime. \begin{lemma}[Synchronization Regime] If $\gamma>\gamma^N_1$, the only stationary points of $F_{\gamma,N}$ are $I_{\pm}$ and $O$. $I_\pm$ are minima, $O$ is a 1-saddle. \end{lemma} This lemma was proven in \cite{berglund107} by using a Lyapunov function. This configuration is called the synchronization regime because the coupling between the particles is so strong that they all pass simultaneously through their respective saddle points in a transition between the stable equilibria ($I_\pm$). In this paper, we will focus on this regime. 
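The eigenvalue formula \eqref{eigenvalue.1} and the role of the threshold $\gamma_1^N$ can be confirmed by diagonalizing the circulant matrix \eqref{potential.4} directly; a minimal numerical sketch (Python with numpy, with arbitrary sample values of $N$ and $\gamma$):

```python
import numpy as np

def hessian_at_O(N, gamma):
    # Circulant matrix of \eqref{potential.4}: (-1 + gamma) on the diagonal,
    # -gamma/2 on the two off-diagonals, with periodic wrap-around.
    H = np.diag(np.full(N, -1.0 + gamma))
    for j in range(N):
        H[j, (j + 1) % N] -= gamma / 2
        H[j, (j - 1) % N] -= gamma / 2
    return H

N, gamma = 8, 3.0
k = np.arange(N)
predicted = -(1 - 2 * gamma * np.sin(k * np.pi / N) ** 2)   # \eqref{eigenvalue.1}
computed = np.linalg.eigvalsh(hessian_at_O(N, gamma))
assert np.allclose(np.sort(computed), np.sort(predicted))

# Above the threshold gamma_1^N, the only negative eigenvalue is -1,
# so O is a 1-saddle in the synchronization regime.
gamma1 = 1 / (2 * np.sin(np.pi / N) ** 2)
eig = np.linalg.eigvalsh(hessian_at_O(N, 1.1 * gamma1))
assert np.sum(eig < 0) == 1 and np.isclose(eig.min(), -1.0)
```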
\subsection{Results for fixed $N$} Let $\rho>0$ and set $B_\pm\equiv B_{\rho}(I_\pm)$, where $B_\rho(x)$ denotes the ball of radius $\rho$ centered at $x$. Equation \eqref{key.3} gives, with $A=B_-$ and $D=B_+$, \begin{equation}\label{key.4} {\Bbb E}_{\n_{B_-,B_+}}[\tau_{B_+}] = \frac{\int_{B_+^c}h_{B_-,B_+}(y)e^{-F_{\gamma,N}(y)/\epsilon}dy}{\hbox{\rm cap}(B_-,B_+)}. \end{equation} First, we obtain a sharp estimate for this transition time for fixed $N$: \begin{theorem}\label{sharp} Let $N>2$ be given. For $\gamma>\gamma^N_1=\frac{1}{2\sin^2(\pi/N)}$, let $\sqrt{N}>\rho\geq \epsilon>0$. Then \begin{equation}\label{sharp.1} \mathbb{E}_{\n_{B_-,B_+}}[\tau_{B_+}]=2\pi c_Ne^{\frac{N}{4\epsilon}}(1+O(\sqrt{\epsilon|\ln\epsilon|^3})) \end{equation} with \begin{equation}\label{cn.1} c_N=\bigg[1-\frac{3}{2+2\gamma}\bigg]^{\frac{e(N)}{2}} \prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor} \bigg[1-\frac{3}{2+\frac{\gamma}{\gamma^N_k}}\bigg] \end{equation} where $e(N)=1$ if $N$ is even and $0$ if $N$ is odd. \end{theorem} \begin{remark} The power $3$ of $\ln \epsilon$ is missing in \cite{bovier04} by mistake. \end{remark} \begin{remark} As mentioned above, for any fixed dimension, we can replace the probability measure $\nu_{B_-,B_+}$ by the Dirac measure on the single point $I_-$, using H\"older and Harnack inequalities \cite{bovier04}. This gives the following corollary: \begin{corollary}\label{sharp-point} Under the assumptions of Theorem \ref{sharp}, there exists $\alpha>0$ such that \begin{equation}\label{sharp-point.1} \mathbb{E}_{I_-}[\tau_{B_+}]=2\pi c_Ne^{\frac{N}{4\epsilon}}(1+O(\sqrt{\epsilon|\ln\epsilon|^3})). \end{equation} \end{corollary} \end{remark} \begin{proof} [Proof of the theorem] We apply Theorem 3.2 in \cite{bovier04}. For $\gamma>\gamma^N_1=\frac{1}{2\sin^2(\pi/N)}$, let us recall that there are only three stationary points: two minima $I_{\pm}$ and one saddle point $O$. 
One easily checks that $F_{\gamma,N}$ satisfies the following assumptions: \begin{itemize} \item $F_{\gamma,N}$ is polynomial in the $(x_i)_{i\in\Lambda}$ and so clearly $C^3$ on $\mathbb{R}^N$. \item $F_{\gamma,N}(x)\geq\frac{1}{4}\sum_{i\in\Lambda}x_i^4-\frac{1}{2}\sum_{i\in\Lambda}x_i^2$ so $F_{\gamma,N}\underset{x\rightarrow\infty}{\longrightarrow}+\infty$. \item $\norm{\nabla F_{\gamma,N}(x)}_2\sim\norm{x}_3^3$ as $\norm{x}_2\to\infty$. \item As $\Delta F_{\gamma,N}(x)\sim3\norm{x}_2^2$ ($\norm{x}_2\to\infty$), then $\norm{\nabla F_{\gamma,N}(x)}-2\Delta F_{\gamma,N}(x)\sim\norm{x}_3^3$. \end{itemize} The Hessian matrix at the minima $I_\pm$ has the form \begin{equation}\label{potential.5} \nabla^2F_{\gamma,N}(I_{\pm})= \nabla^2F_{\gamma,N}(O)+3 \mathrm{Id}, \end{equation} whose eigenvalues are simply \begin{equation}\label{eigenvalue.3} \nu_{k,N}=\lambda_{k,N}+3. \end{equation} Then Theorem 3.1 of \cite{bovier04} can be applied and yields, for $\sqrt{N}>\rho>\epsilon>0$, (recall that the negative eigenvalue of the Hessian at $O$ is $-1$) \begin{equation}\label{sharp.2} \mathbb{E}_{\n_{B_-,B_+}}[\tau_{B_+}]= \frac{2\pi e^{\frac{N}{4\epsilon}} \sqrt{\abs{\det(\nabla^2F_{\gamma,N}(O))}}} {\sqrt{\det(\nabla^2F_{\gamma,N}(I_-))}} (1+O(\sqrt{\epsilon}\abs{\ln\epsilon}^3)). \end{equation} Finally, \eqref{eigenvalue.2} and \eqref{eigenvalue.3} give: \begin{equation}\label{det.1} \det(\nabla^2F_{\gamma,N}(I_-)) =\prod_{k=0}^{N-1}\nu_{k,N} =2\nu_{N/2,N}^{e(N)}\prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor}\nu_{k,N}^2 =2^{N}(1+\gamma)^{e(N)}\prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor} \bigg(1+\frac{\gamma}{2\gamma^N_k}\bigg)^2 \end{equation} \begin{equation}\label{det.2} \abs{\det(\nabla^2F_{\gamma,N}(O))} =\prod_{k=0}^{N-1}\abs{\lambda_{k,N}} =\lambda_{N/2,N}^{e(N)}\prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor}\lambda_{k,N}^2 =(2\gamma-1)^{e(N)}\prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor} \bigg(1-\frac{\gamma}{\gamma^N_k}\bigg)^2. 
\end{equation} Then, \begin{equation}\label{cn.2} c_N =\frac{\sqrt{\abs{\det(\nabla^2F_{\gamma,N}(O))}}} {\sqrt{\det(\nabla^2F_{\gamma,N}(I_-))}} =\bigg[1-\frac{3}{2+2\gamma}\bigg]^{\frac{e(N)}{2}} \prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor} \bigg[1-\frac{3}{2+\frac{\gamma}{\gamma^N_k}}\bigg] \end{equation} and Theorem \ref{sharp} is proved. \end{proof} Let us point out that the use of these estimates is a major obstacle to obtaining a mean transition time, starting from a single stable point, with uniform error terms. That is the reason why we have introduced the equilibrium probability. However, there are still several difficulties to be overcome if we want to pass to the limit $N\uparrow \infty$. \begin{itemize} \item[(i)] We must show that the prefactor $c_N$ has a limit as $N\uparrow\infty$. \item[(ii)] The exponential term in the hitting time tends to infinity with $N$. This suggests that one needs to rescale the potential $F_{\gamma,N}$ by a factor $1/N$, or equivalently, to increase the noise strength by a factor $N$. \item[(iii)] One will need uniform control of error estimates in $N$ to be able to infer the metastable behavior of the infinite-dimensional system. This will be the most subtle of the problems involved. \end{itemize} \section{Large $N$ limit} As mentioned above, in order to prove a limiting result as $N$ tends to infinity, we need to rescale the potential to eliminate the $N$-dependence in the exponential. Thus henceforth we replace $F_{\gamma,N}(x)$ by \begin{equation} \Eq(rescaled.1) G_{\gamma,N}(x)= N^{-1}F_{\gamma,N}(x). \end{equation} This choice actually has a very nice side effect. Namely, as we always want to be in the regime where $\gamma\sim \gamma_1^N\sim N^{2}$, it is natural to parametrize the coupling constant with a fixed $\m>1$ as \begin{equation}\label{eigenvalue.4} \gamma^N= \m \gamma^N_1 =\frac{\m}{2\sin^2(\frac{\pi}{N})} =\frac{\m N^2}{2\pi^2}(1+o(1)). 
\end{equation} Then, if we replace the lattice by a lattice of spacing $1/N$, i.e. $(x_i)_{i\in\Lambda}$ is the discretization of a real function $x$ on $[0,1]$ ($x_i=x(i/N)$), the resulting potential converges formally to \begin{equation} \Eq(rescaled.2) G_{\gamma^N,N}(x)\underset{N\to\infty}{\rightarrow} \int_0^1 \left(\frac 14[x(s)]^4-\frac12[x(s)]^2 \right) ds + \frac{\m}{4\pi^2}\int_0^1 \frac{\left[ x'(s)\right]^2}2ds, \end{equation} with $x(0)=x(1)$. In the Euclidean norm, we have $\|I_\pm\|_2=\sqrt{N}$, which suggests rescaling the size of the neighborhoods. We consider, for $\rho>0$, the neighborhoods $B^N_\pm=B_{\rho\sqrt{N}}(I_\pm)$. The volume $V(B^N_-)=V(B^N_+)$ goes to $0$ if and only if $\rho<1/\sqrt{2\pi e}$, so given such a $\rho$, the balls $B^N_\pm$ are not as large as one might think. Let us also observe that \begin{equation} \frac1{\sqrt{N}}\|x\|_{2}\underset{N\to \infty}{\longrightarrow}\|x\|_{L^2[0,1]}=\left(\int_0^1|x(s)|^2ds\right)^{1/2}. \end{equation} Therefore, if $x\in B^N_+$ for all $N$, we get, in the limit, $ \|x-1\|_{L^2[0,1]}\leq\rho. $ The main result of this paper is the following uniform version of Theorem \ref{sharp} with a rescaled potential $G_{\gamma,N}$. \begin{theorem}\label{main} Let $\mu\in]1,\infty[$. Then there exists a constant $A$ such that for all $N\geq 2$ and all $\epsilon>0$, \begin{equation}\label{main.1} \frac1N{\Bbb E}_{\n_{B^N_-,B^N_+}}[\tau_{B^N_+}]=2\pi c_Ne^{1/4\epsilon}(1+R(\epsilon,N)), \end{equation} where $c_N$ is defined in Theorem \thv(sharp) and $|R(\epsilon,N)|\leq A \sqrt{\epsilon|\ln \epsilon|^3}$. In particular, \begin{equation}\label{main.2} \lim_{\epsilon\downarrow 0}\lim_{N\uparrow\infty}\frac1N e^{-1/4\epsilon} {\Bbb E}_{\n_{B^N_-,B^N_+}}[\tau_{B^N_+}]=2\pi V(\m) \end{equation} where \begin{equation}\label{main.3} V(\m)=\prod_{k=1}^{+\infty}\Big[\frac{\mu k^2-1}{\mu k^2+2}\Big]<\infty. \end{equation} \end{theorem} \begin{remark} The appearance of the factor $1/N$ may at first glance seem disturbing. 
It corresponds however to the appropriate time rescaling when scaling the spatial coordinates $i$ to $i/N$ in order to recover the PDE limit. \end{remark} The proof of this theorem will be decomposed into two parts: \begin{itemize} \item convergence of the sequence $c_N$ (Proposition \ref{convergence}); \item uniform control of the denominator (Proposition \ref{capacity}) and the numerator (Proposition \ref{numerator}) of Formula \eqref{key.4}. \end{itemize} \paragraph{Convergence of the prefactor $c_N$.} Our first step will be to control the behavior of $c_N$ as $N\uparrow\infty$. We prove the following: \begin{proposition}\label{convergence} The sequence $c_N$ converges: for $\m>1$, we set $\gamma=\m\gamma_1^N$, then \begin{equation}\label{convergence.1} \lim_{N\uparrow\infty}c_N=V(\m), \end{equation} with $V(\m)$ defined in \eqref{main.3}. \end{proposition} \begin{remark} This proposition immediately leads to \begin{corollary}\label{sharp-limit} For $\m\in]1,\infty[$, we set $\gamma=\m\gamma_1^N$, then \begin{equation}\Eq(time-final.1) \lim_{N\uparrow \infty} \lim_{\epsilon\downarrow 0} \frac{e^{-\frac{1}{4\epsilon}}}N\mathbb{E}_{\nu_{B^N_-,B^N_+}}[\tau_{B^N_+}]=2\pi V(\m). \end{equation} \end{corollary} \noindent Of course such a result is unsatisfactory, since it does not tell us anything about a large system with specified fixed noise strength. To be able to interchange the limits regarding $\epsilon$ and $N$, we need uniform control of the error terms. 
\end{remark} \begin{proof}[Proof of the proposition] The rescaling of the potential introduces a factor $\frac1N$ for the eigenvalues, so that \eqref{sharp.2} becomes \begin{eqnarray}\label{convergence.3}\nonumber \mathbb{E}_{\n_{B^N_-,B^N_+}}[\tau_{B^N_+}] &=& \frac{2\pi e^{\frac{1}{4\epsilon}}{N^{-N/2+1}\sqrt{\abs{\det(\nabla^2F_{\gamma,N}(O))}}}} {N^{-N/2}\sqrt{\det(\nabla^2F_{\gamma,N}(I_-))} } (1+O(\sqrt{\epsilon}\abs{\ln\epsilon}^3)) \\ &=& 2\pi N c_N e^{\frac{1}{4\epsilon}}(1+O(\sqrt{\epsilon}\abs{\ln\epsilon}^3)). \end{eqnarray} Then, with $u_k^N=\frac{3}{2+\mu\frac{\gamma^N_1}{\gamma^N_k}}\ $, \begin{equation}\label{cn.3} c_N=\bigg[1-\frac{3}{2+2\mu\gamma^N_1}\bigg]^{\frac{e(N)}{2}} \prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor}\bigg[1-u_k^N\bigg]. \end{equation} To prove the convergence, let us consider the ratios $\gamma^N_1/\gamma^N_k$, $1\leq k\leq N-1$. For all $k\geq1$, we have \begin{equation}\label{eigenvalue.5} \frac{\gamma^N_1}{\gamma^N_k} =\frac{\sin^2(\frac{k\pi}{N})}{\sin^2(\frac{\pi}{N})} =k^2+k^2(1-k^2)\frac{\pi^2}{3N^2}+o\bigg(\frac{1}{N^2}\bigg). \end{equation} Hence, $u_k^N\underset{N\to+\infty}{\longrightarrow}v_k=\frac{3}{2+\mu k^2}$. Thus, we want to show that \begin{equation}\label{cn.4} c_N\underset{N\to+\infty}{\longrightarrow}\prod_{k=1}^{+\infty}(1-v_k)=V(\m). \end{equation} Using that, for $0\leq t\leq\frac\pi2$, \begin{equation} \label{sin.1} t^2\Big(1-\frac{t^2}{3}\Big)\leq\sin^2(t)\leq t^2, \end{equation} we get the following estimates for $\frac{\gamma^N_1}{\gamma^N_k}$: set $a=\left(1-\frac{\pi^2}{12}\right)$, for $1\leq k\leq N/2$, \begin{equation}\label{eigenvalue.6} ak^2 =\bigg(1-\frac{\pi^2}{12}\bigg)k^2 \leq k^2\bigg(1-\frac{k^2\pi^2}{3N^2}\bigg) \leq\frac{\gamma^N_1}{\gamma^N_k} =\frac{\sin^2(\frac{k\pi}{N})}{\sin^2(\frac{\pi}{N})} \leq\frac{k^2}{1-\frac{\pi^2}{3N^2}}. 
\end{equation} Then, for $N\geq 2$ and for all $1\leq k\leq N/2$, \begin{equation}\label{eigenvalue.6b} -\frac{k^4\pi^2}{3N^2} \leq\frac{\gamma^N_1}{\gamma^N_k}-k^2 \leq\frac{k^2\pi^2}{3N^2\left(1-\frac{\pi^2}{3N^2}\right)} \leq\frac{k^2\pi^2}{N^2}. \end{equation} Let us introduce \begin{equation} \label{convergence.4} V_m=\prod_{k=1}^{\lfloor\frac{m-1}{2}\rfloor}(1-v_k),\quad U_{N,m}=\prod_{k=1}^{\lfloor\frac{m-1}{2}\rfloor}\Big(1-u_k^N\Big). \end{equation} Then \begin{equation}\label{convergence.5} \Bigabs{\ln \frac{U_{N,N}}{V_N}} =\Bigabs{\ln\prod_{k=1}^{\lfloor\frac{N-1}{2}\rfloor}\frac{1-u_k^N}{1-v_k}} \leq\sum_{k=1}^{\lfloor\frac{N-1}{2}\rfloor}\Bigabs{\ln \frac{1-u_k^N}{1-v_k}}. \end{equation} Using \eqref{eigenvalue.6} and \eqref{eigenvalue.6b}, we obtain, for all $1\leq k\leq N/2$, \begin{equation} \left|\frac{v_k-u_k^N}{1-v_k}\right| = \frac{3\m\left|\frac{\gamma_1^N}{\gamma_k^N}-k^2\right|} {\left(-1+\m k^2\right)\left(2+\m\frac{\gamma_1^N}{\gamma_k^N}\right)} \leq \frac{\m k^4\pi^2} {N^2\left(-1+\m k^2\right)\left(2+\m ak^2\right)} \leq \frac{C}{N^2} \end{equation} with $C$ a constant independent of $k$ and $N$. Therefore, for $N$ large enough, \begin{equation}\label{convergence.6} \Bigabs{\ln \frac{1-u_k^N}{1-v_k}} =\left|\ln\left(1+\frac{v_k-u_k^N}{1-v_k}\right)\right| \leq\frac{C'}{N^2}. \end{equation} Hence \begin{equation}\label{convergence.7} \Bigabs{\ln \frac{U_{N,N}}{V_N}} \leq \frac{C'}{N} \underset{N\to+\infty}{\longrightarrow}0. \end{equation} As $\sum\abs{v_k}<+\infty$, we get $\lim_{N\to+\infty}V_N=V(\m)>0$, and thus \eqref{cn.4} is proved. \end{proof} \section{Estimates on capacities} To prove Theorem \ref{main}, we establish uniform estimates of the denominator and numerator of \eqref{key.3}, namely the capacity and the mass of the equilibrium potential. \subsection{Uniform control in large dimensions for capacities} A crucial step is the control of the capacity. This will be done with the help of the Dirichlet principle \eqv(cap.1). 
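The convergence of the prefactor established in Proposition \ref{convergence} is also easy to illustrate numerically. The sketch below (Python with numpy; the truncation of the infinite product at $10^5$ factors and the sample values of $\m$ and $N$ are arbitrary numerical choices) evaluates $c_N$ from \eqref{cn.1} with $\gamma=\m\gamma_1^N$ and compares it with $V(\m)$:

```python
import numpy as np

def c_N(N, mu):
    # Prefactor \eqref{cn.1} with coupling gamma = mu * gamma_1^N.
    gamma = mu / (2 * np.sin(np.pi / N) ** 2)
    k = np.arange(1, (N - 1) // 2 + 1)
    gamma_k = 1 / (2 * np.sin(k * np.pi / N) ** 2)
    c = np.prod(1 - 3 / (2 + gamma / gamma_k))
    if N % 2 == 0:                      # e(N) = 1 for N even, 0 for N odd
        c *= np.sqrt(1 - 3 / (2 + 2 * gamma))
    return c

def V(mu, terms=100_000):
    # Truncation of the infinite product \eqref{main.3}.
    k = np.arange(1, terms + 1)
    return np.prod((mu * k**2 - 1) / (mu * k**2 + 2))

mu = 2.0
errors = [abs(c_N(N, mu) - V(mu)) for N in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]   # c_N approaches V(mu) as N grows
assert errors[2] < 1e-3
```

The observed error decays roughly like $1/N$, in line with the bound \eqref{convergence.7}.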
We will obtain the asymptotics by using a Laplace-like method. The exponential factor in the integral \eqref{cap.3} is largely predominant at the points where $h$ is likely to vary the most, that is around the saddle point $O$. Therefore we need some good estimates of the potential near $O$. \subsubsection{Local Taylor approximation} This subsection is devoted to the quadratic approximations of the potential, which are quite subtle. We will make a change of basis in the neighborhood of the saddle point $O$ that will diagonalize the quadratic part. Recall that the potential $G_{\gamma,N}$ is of the form \begin{equation}\Eq(potential.6) G_{\gamma,N}(x)= -\frac 1{2N} (x,[\mathrm{Id}-\Delta]x) +\frac 1{4N} \|x\|_4^4, \end{equation} where the operator $\Delta$ is given by $\Delta=\gamma\left[\mathrm{Id}-\frac 12(\Sigma+\Sigma^*)\right]$ and $(\Sigma x)_j = x_{j+1}$. The linear operator $(\mathrm{Id}-\Delta)=-\nabla^2F_{\gamma,N}(O)$ has eigenvalues $-\lambda_{k,N}$ and eigenvectors $v_{k,N}$ with components $v_{k,N}(j)=\omega^{jk}$, with $\omega=e^{i2\pi/N}$. \noindent Let us change coordinates by setting \begin{equation}\Eq(fourier.1) \hat x_j = \sum_{k=0}^{N-1} \omega^{-jk}x_k. \end{equation} Then the inverse transformation is given by \begin{equation}\Eq(fourier.2) x_k=\frac 1N \sum_{j=0}^{N-1} \omega^{jk}\hat x_j=x_k(\hat x). \end{equation} \noindent Note that the map $x\rightarrow \hat x$ maps ${\Bbb R}^N$ to the set \begin{equation}\Eq(space.1) \widehat {\Bbb R}^N=\left\{\hat x\in {\Bbb C}^N:\hat x_k=\overline{\hat x_{N-k}}\right\} \end{equation} endowed with the standard inner product on ${\Bbb C}^N$. \noindent Notice that, expressed in terms of the variables $\hat x$, the potential \eqref{potential.6} takes the form \begin{equation}\label{potential.7} G_{\gamma,N}(x(\hat x))= \frac 1{2N^2} \sum_{k=0}^{N-1} \lambda_{k,N} |\hat x_k|^2+\frac 1{4N} \|x(\hat x)\|_4^4. 
\end{equation} \noindent Our main concern will be the control of the non-quadratic term in the new coordinates. To that end, we introduce the following norms on Fourier space: \begin{equation}\Eq(fourier.3) \|\hat x\|_{p,{\cal F}} =\left(\frac 1N \sum_{i=0}^{N-1} |\hat x_i|^p\right)^{1/p} =\frac1{N^{1/p}}\|\hat x\|_p. \end{equation} The factor $1/N$ is the suitable choice to make the map $x\rightarrow \hat x$ a bounded map between $L^p$ spaces. This implies that the following estimates hold (see \cite{Simon-Reed}, Vol. 1, Theorem IX.8): \begin{lemma}\label{Hausdorff-Young} With the norms defined above, we have \begin{itemize} \item[(i)] the Parseval identity, \begin{equation}\label{parseval.1} \|x\|_2=\|\hat x\|_{2,{\cal F}}, \end{equation} and \item[(ii)] the Hausdorff-Young inequalities: for $1\leq q\leq 2$ and $p^{-1}+q^{-1}=1$, there exists a finite, $N$-independent constant $C_q$ such that \begin{equation}\Eq(fourier.4) \|x\|_p\leq C_q\|\hat x\|_{q,{\cal F}}. \end{equation} In particular \begin{equation}\Eq(fourier.7) \|x\|_4\leq C_{4/3} \|\hat x\|_{4/3,{\cal F}}. \end{equation} \end{itemize} \end{lemma} \noindent Let us introduce the change of variables, defined by the complex vector $\, z$, as \begin{equation}\label{change.1} z= {\hat x \over N}. \end{equation} \noindent Let us remark that $\ z_0= {1\over N} \sum_{k=0}^{N-1} x_k \in \mathbb{R}$. In the variable $z$, the potential takes the form \begin{equation}\Eq(potential.8) \wt G_{\gamma,N}(z)=G_{\gamma,N}\left(x(N z)\right) =\frac 1{2} \sum_{k=0}^{N-1}\lambda_{k,N} |z_k|^2+\frac 1{4N} \|x(N z)\|^4_4. \end{equation} Moreover, by \eqref{parseval.1} and \eqref{change.1} \begin{equation}\label{parseval.2} \|x(Nz)\|_{2}^2=\|Nz\|_{2,{\cal F}}^2=\frac1N\|Nz\|_{2}^2. \end{equation} In the new coordinates the minima are now given by \begin{equation}\label{change.2} I_\pm=\pm (1,0,\dots,0). \end{equation} In addition, $z(B^N_-)=z(B_{\rho\sqrt{N}}(I_-))=B_{\rho}(I_-)$, where the last ball is in the new coordinates. 
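Since $\hat x$ in \eqref{fourier.1} is exactly an unnormalized discrete Fourier transform, the identities of Lemma \ref{Hausdorff-Young} are easy to test numerically. A sketch (Python with numpy; the observation that the constant in \eqref{fourier.7} can be taken equal to $1$ with these normalizations is ours, obtained by interpolation, and is not claimed in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
xhat = np.fft.fft(x)   # \hat x_j = sum_k omega^{-jk} x_k, cf. \eqref{fourier.1}

def fourier_norm(v, p):
    # \|\hat x\|_{p,F} = (N^{-1} sum_i |\hat x_i|^p)^{1/p}, cf. \eqref{fourier.3}
    return np.mean(np.abs(v) ** p) ** (1 / p)

# Parseval identity \eqref{parseval.1}
assert np.isclose(np.linalg.norm(x), fourier_norm(xhat, 2))

# Hausdorff-Young inequality \eqref{fourier.7} with q = 4/3, p = 4
assert np.linalg.norm(x, 4) <= fourier_norm(xhat, 4 / 3) + 1e-12
```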
\noindent Lemma \thv(Hausdorff-Young) will allow us to prove the following important estimates. For $\mathrm{d}>0$, we set \begin{equation}\Eq(neighborhood.1) C_\mathrm{d}=\left\{z\in \widehat{\Bbb R}^N:\,|z_k|\leq \mathrm{d}\frac{r_{k,N}}{\sqrt{|\lambda_{k,N}|}},\, 0\leq k\leq N-1 \right\}, \end{equation} where $\lambda_{k,N}$ are the eigenvalues of the Hessian at $O$ as given in \eqv(eigenvalue.2) and $r_{k,N}$ are constants that will be specified below. Using \eqref{eigenvalue.6}, we have, for $3\leq k\leq N/2$, \begin{equation} \lambda_{k,N}\geq k^2\left(1-\frac{\pi^2}{12}\right)\m-1. \end{equation} Thus $(\lambda_{k,N})$ satisfies $\lambda_{k,N}\geq ak^2$, for $1\leq k\leq N/2$, with some constant $a>0$ independent of $N$ (the cases $k=1,2$ are covered by decreasing $a$ if necessary, since $\lambda_{k,N}\geq\lambda_{1,N}=\m-1>0$). \noindent The sequence $(r_{k,N})$ is constructed as follows. Choose an increasing sequence, $(\rho_k)_{k\geq 1}$, and set \begin{equation}\label{neighborhood.2} \begin{cases} r_{0,N}&=1\\ r_{k,N}&=r_{N-k,N}=\rho_k,\quad 1\leq k\leq \left\lfloor\frac N2\right\rfloor. \end{cases} \end{equation} Let, for $p\geq 1$, \begin{equation} K_p=\left(\sum_{k\geq1}\frac{\rho_k^p}{k^p}\right)^{1/p}. \end{equation} Note that if $K_{p_0}$ is finite then, for all $p_1>p_0$, $K_{p_1}$ is finite. With this notation we have the following key estimate. \begin{lemma}\label{norm} For all $p\geq 2$, there exist finite constants $B_p$, such that, for $z\in C_\mathrm{d}$, \begin{equation}\label{norm.1} \|x(Nz)\|^p_p\leq \mathrm{d}^p N B_p \end{equation} if $K_q$ is finite, with $\frac1p+\frac1q=1$. \end{lemma} \begin{proof} The Hausdorff-Young inequality (Lemma \ref{Hausdorff-Young}) gives us: \begin{equation}\label{norm.2} \|x(Nz)\|_p\leq C_q\|Nz\|_{q,{\cal F}}. \end{equation} Since $z\in C_\mathrm{d}$, we get \begin{equation}\label{norm.3} \|Nz\|^q_{q,{\cal F}}\leq\mathrm{d}^q N^{q-1}\sum_{k=0}^{N-1}\frac{r_{k,N}^q}{|\lambda_{k,N}|^{q/2}}. 
\end{equation} Then \begin{equation}\label{norm.4} \sum_{k=0}^{N-1}\frac{r_{k,N}^q}{|\lambda_{k,N}|^{q/2}} \leq \frac1{|\lambda_{0,N}|^{q/2}}+2\sum_{k=1}^{\lfloor N/2\rfloor}\frac{r_{k,N}^q}{|\lambda_{k,N}|^{q/2}} \leq \frac1{|\lambda_{0,N}|^{q/2}}+\frac2{a^{q/2}}\sum_{k=1}^{\lfloor N/2\rfloor}\frac{\rho_k^q}{k^q} \leq \frac1{|\lambda_{0,N}|^{q/2}}+\frac2{a^{q/2}}K_q^q=D_q^q \end{equation} which is finite if $K_q$ is finite. Therefore, \begin{equation}\label{norm.5} \|x(Nz)\|^p_p\leq \mathrm{d}^pN^{(q-1)\frac pq}C_q^pD_q^p, \end{equation} which gives us the result since $(q-1)\frac pq=1$. \end{proof} \noindent We now have all we need to estimate the capacity. \subsubsection{Capacity Estimates} Let us now prove our main estimate on the capacity. \begin{proposition}\TH(capacity) There exists a constant $A$, such that, for all $\epsilon<\epsilon_0$ and for all $N$, \begin{equation}\label{cap.4} \frac{\hbox{\rm cap}\left(B^N_+,B^N_-\right)}{N^{N/2-1}} = \epsilon \sqrt{2\pi\epsilon}^{N-2} \frac{1}{\sqrt{|\det(\nabla^2F_{\gamma,N}(O))|}} \left(1+ R(\epsilon,N)\right), \end{equation} where $|R(\epsilon,N)|\leq A\sqrt{\epsilon |\ln \epsilon|^3}$. \end{proposition} The proof will be decomposed into two lemmata, one for the upper bound and the other for the lower bound. The proofs are quite different but follow the same idea: in each case we have to estimate integrals, so we isolate a neighborhood of the relevant point $O$, approximate the potential on this neighborhood, bound the remainder, and estimate the integral on the suitable neighborhood. \noindent In what follows, constants independent of $N$ are denoted $A_i$. \paragraph{Upper bound.} The first lemma we prove is the upper bound for Proposition \thv(capacity). 
\begin{lemma}\label{upper} There exists a constant $A_0$ such that for all $\epsilon>0$ and for all $N$, \begin{equation}\label{upper.1} \frac{\hbox{\rm cap}\left(B^N_+,B^N_-\right)}{N^{N/2-1}} \leq \epsilon \sqrt{2\pi\epsilon}^{N-2} \frac{1}{\sqrt{|\det(\nabla^2F_{\gamma,N}(O))|}} \left(1+ A_0\epsilon|\ln \epsilon|^2\right). \end{equation} \end{lemma} \begin{proof} This lemma is proved in \cite{bovier04} in the finite-dimensional setting. We use the same strategy, but here we take care to control, uniformly in the dimension, the integrals that appear. We will denote the quadratic approximation of $\wt G_{\gamma,N}$ by $F_0$, i.e. \begin{equation}\label{name.1} F_0(z)=\sum_{k=0}^{N-1}\frac{{\lambda_{k,N}}|z_k|^2}{2} =-\frac{z_0^2}{2}+\sum_{k=1}^{N-1}\frac{{\lambda_{k,N}}|z_k|^2}{2}. \end{equation} On $C_\mathrm{d}$, we can control the non-quadratic part through Lemma \ref{norm}. \begin{lemma} \TH(u-approx) There exist constants $A_1$ and $\mathrm{d}_0$ such that for all $N$, all $\mathrm{d}<\mathrm{d}_0$, and all $z\in C_\mathrm{d}$, \begin{equation}\Eq(u-approx.1) \Big|\wt G_{\gamma,N}(z)-F_0(z)\Big|\leq A_1\mathrm{d}^4. \end{equation} \end{lemma} \begin{proof} Using \eqref{potential.8}, we see that \begin{equation}\label{potential.9} \wt G_{\gamma,N}(z)-F_0(z)=\frac 1{4N} \|x(N z)\|^4_4. \end{equation} We choose a sequence $(\rho_k)_{k\geq1}$ such that $K_{4/3}$ is finite. Thus, it follows from Lemma \ref{norm}, with $A_1=\frac 14 B_4$, that \begin{equation}\Eq(u-approx.7) \left|\wt G_{\gamma,N}(z) - \frac 1{2} \sum_{k=0}^{N-1} \lambda_{k,N} |z_k|^2\right|\leq A_1\mathrm{d}^4, \end{equation} as desired. \end{proof} \noindent We obtain the upper bound of Lemma \ref{upper} by choosing a test function $h^+$. We change coordinates from $x$ to $z$ as explained in \eqref{change.1}. A simple calculation shows that \begin{equation}\Eq(gradient.1) \|\nabla h(x)\|_2^2=N^{-1} \|\nabla \tilde h(z)\|_2^2, \end{equation} where $\tilde h(z)=h(x(Nz))$ under our coordinate change. 
For $\mathrm{d}$ sufficiently small, we can ensure that, for $z\not\in C_{\mathrm{d}}$ with $|z_0|\leq \mathrm{d}$, \begin{equation} \wt G_{\gamma,N}(z) \geq F_0(z) =-\frac{z_0^2}2+ \frac12\sum_{k=1}^{N-1}\lambda_{k,N}|z_k|^2 \geq -\frac{\mathrm{d}^2}2+2\mathrm{d}^2 \geq\mathrm{d}^2. \end{equation} Therefore, the strip \begin{equation}\Eq(green.aa) S_{\mathrm{d}}\equiv\{x|\,x=x(Nz),\,|z_0|<\mathrm{d}\} \end{equation} separates ${\Bbb R}^N$ into two disjoint sets, one containing $I_-$ and the other one containing $I_+$, and for $x=x(Nz)\in S_{\mathrm{d}}$ with $z\notin C_{\mathrm{d}}$, $G_{\gamma,N}(x)=\wt G_{\gamma,N}(z)\geq \mathrm{d}^2$. The complement of $S_{\mathrm{d}}$ consists of two connected components $\Gamma_+,\Gamma_-$ which contain $I_+$ and $I_-$, respectively. We define \begin{equation}\label{upper.3} \tilde h^+(z)= \begin{cases} 1&\text{for}\,z\in\Gamma_-\\ 0&\text{for}\,z\in\Gamma_+\\ f(z_0)&\text{for}\,z\in C_{\mathrm{d}}\\ \text{arbitrary}&\text{on}\,S_{\mathrm{d}}\setminus C_{\mathrm{d}}\,\text{ but}\,\norm{\nabla \tilde h^+}_2\leq \frac{c}{\mathrm{d}} \end{cases}, \end{equation} where $f$ satisfies $f(\mathrm{d})=0$ and $f(-\mathrm{d})=1$ and will be specified later. Taking into account the change of coordinates, the Dirichlet form \eqref{cap.3} evaluated on $h^+$ provides the upper bound \begin{eqnarray}\label{upper.5} \Phi(h^+) &=& N^{N/2-1}\epsilon\int_{z((B^N_-\cup B^N_+)^c)} e^{-\wt G_{\gamma,N}(z)/\epsilon}\norm{\nabla \tilde h^+(z)}_2^2dz \\\nonumber &\leq& N^{N/2-1}\left[\epsilon\int_{C_{\mathrm{d}}}e^{-\wt G_{\gamma,N}(z)/\epsilon} \left(f'(z_0)\right)^2dz+ \epsilon\mathrm{d}^{-2}c^2\int_{S_{\mathrm{d}}\setminus C_\mathrm{d}} e^{-\wt G_{\gamma,N}(z)/\epsilon}dz\right]. \end{eqnarray} The first term will give the dominant contribution. Let us focus on it first. 
We replace $\wt G_{\gamma,N}$ by $F_0$, using the bound \eqv(u-approx.1), and for suitably chosen $\mathrm{d}$, we obtain \begin{eqnarray}\label{upper.6}\nonumber \int_{C_{\mathrm{d}}}e^{-\wt G_{\gamma,N}(z)/\epsilon} \left(f'(z_0)\right)^2dz &\leq& \left(1+2A_1\frac{\mathrm{d}^4}{\epsilon}\right)\int_{C_{\mathrm{d}}}e^{- F_0(z)/\epsilon} \left(f'(z_0)\right)^2dz \\\nonumber &=&\left(1+2A_1\frac{\mathrm{d}^4}{\epsilon}\right) \int_{D_{\mathrm{d}}}e^{-\frac 1{2\epsilon}\sum_{k=1}^{N-1}\lambda_{k,N} |z_k|^2}dz_1\dots dz_{N-1} \\&&\quad\times \int_{-\mathrm{d}}^{\mathrm{d}}\left(f'(z_0)\right)^2e^{z_0^2/2\epsilon}dz_0. \end{eqnarray} Here we have used that we can write $C_\mathrm{d}$ in the form $[-\mathrm{d},\mathrm{d}]\times D_\mathrm{d}$. As we want to calculate an infimum, we choose the function $f$ that minimizes the integral $\int_{-\mathrm{d}}^{\mathrm{d}}\left(f'(z_0)\right)^2e^{z_0^2/2\epsilon}dz_0$. A simple computation leads to the choice \begin{equation}\label{upper.4} f(z_0)=\frac{\int_{z_0}^{\mathrm{d}} e^{-t^2/2\epsilon}dt} {\int_{-\mathrm{d}}^{\mathrm{d}}e^{-t^2/2\epsilon}dt}. \end{equation} Therefore \begin{equation}\Eq(tralala) \int_{C_{\mathrm{d}}}e^{-\wt G_{\gamma,N}(z)/\epsilon} \left(f'(z_0)\right)^2dz \leq \frac{\int_{C_{\mathrm{d}}}e^{-\frac 1{2\epsilon}\sum_{k=0}^{N-1}|\lambda_{k,N}| |z_k|^2}dz} {\Big(\int_{-\mathrm{d}}^{\mathrm{d}} e^{-\frac 1{2\epsilon} z_0^2 }dz_0\Big)^2}\left(1+2A_1\frac{\mathrm{d}^4}{\epsilon}\right). \end{equation} Choosing $\mathrm{d}=\sqrt{K \epsilon|\ln \epsilon |}$, a simple calculation shows that there exists $A_2$ such that \begin{equation}\label{upper.7} \frac{\int_{C_{\mathrm{d}}}e^{-\frac 1{2\epsilon}\sum_{k=0}^{N-1}|\lambda_{k,N}| |z_k|^2}dz} {\Big(\int_{-\mathrm{d}}^{\mathrm{d}} e^{-\frac 12 z_0^2 /\epsilon}dz_0\Big)^2} \leq \sqrt{2\pi\epsilon}^{N-2} \frac{1}{\sqrt{|\det(\nabla^2F_{\gamma,N}(O))|}}(1+A_2\epsilon). \end{equation} The second term in \eqref{upper.5} is bounded above in the following lemma. 
\begin{lemma}\label{concentration} For $\mathrm{d}=\sqrt{K\epsilon|\ln(\epsilon)|}$ and $\rho_k=4k^{\alpha}$, with $0<\alpha<1/4$, there exists $A_3<\infty$, such that for all $N$ and $0<\epsilon<1$, \begin{equation}\Eq(concentration.1) \int_{S_{\mathrm{d}}\setminus C_\mathrm{d}}e^{-\wt G_{\gamma,N}(z)/\epsilon}dz\leq \frac{A_3 \sqrt{2\pi \epsilon}^{N-2}} {\sqrt{\abs{\det\left(\nabla^2F_{\gamma, N}(O)\right)}}}\epsilon^{3K/2+1}. \end{equation} \end{lemma} \begin{proof} Clearly, by \eqv(potential.8), \begin{equation}\label{concentration.3} \wt G_{\gamma,N}(z)\geq -\frac{z_0^2}{2}+\frac 1{2} \sum_{k=1}^{N-1} \lambda_{k,N} |z_k|^2. \end{equation} Thus \begin{eqnarray}\label{concentration.5} \nonumber &&\int_{S_{\mathrm{d}}\setminus C_\mathrm{d}}e^{-\wt G_{\gamma,N}(z)/\epsilon}dz \\\nonumber &\leq& \int_{-\mathrm{d}}^{\mathrm{d}} dz_0 \int_{\exists_{k=1}^{N-1}: |z_k|\geq \mathrm{d} r_{k,N}/ \sqrt{\lambda_{k,N}}}dz_1\dots dz_{N-1} e^{-\frac 1{2\epsilon}\sum_{k=0}^{N-1}\lambda_{k,N} |z_k|^2} \\\nonumber &\leq& \int_{-\mathrm{d}}^{\mathrm{d}}e^{+z_0^2/2\epsilon}dz_0 \sum_{k=1}^{N-1}\int_{|z_k|\geq \mathrm{d} r_{k,N}/\sqrt{\lambda_{k,N}}} e^{-\lambda_{k,N} |z_k|^2/2\epsilon}dz_k \\\nonumber&&\quad\quad\times \prod_{1\leq i\neq k\leq N-1} \int_{{\Bbb R}}e^{-\lambda_{i,N} |z_i|^2/2\epsilon}dz_i \\ &\leq& 2 e^{\mathrm{d}^2/2\epsilon} \sqrt{\frac {2\epsilon} {\pi}} \sqrt{\prod_{i=1}^{N-1}2\pi\epsilon \lambda_{i,N}^{-1}} \sum_{k=1}^{N-1} r_{k,N}^{-1} e^{-\mathrm{d}^2r_{k,N}^2/2\epsilon}. \end{eqnarray} Now, \begin{eqnarray}\label{concentration.8}\nonumber \sum_{k=1}^{N-1}{r_{k,N}^{-1}}e^{-\mathrm{d}^2 r_{k,N}^2/2\epsilon} &=& \sum_{k=1}^{\lfloor\frac N2\rfloor}{r_{k,N}^{-1}}e^{-\mathrm{d}^2 r_{k,N}^2/2\epsilon} +\sum_{k=\lfloor\frac N2\rfloor+1}^{N-1}{r_{N-k,N}^{-1}}e^{-\mathrm{d}^2 r_{N-k,N}^2/2\epsilon}\\ &\leq& 2\sum_{k=1}^{\infty}\rho_k^{-1}e^{-\mathrm{d}^2 \rho_k^2/2\epsilon}. 
\end{eqnarray} We choose $\rho_k=4 k^\alpha$ with $0<\alpha<1/4$ to ensure that $K_{4/3}$ is finite. With our choice for $\mathrm{d}$, the sum in \eqv(concentration.8) is then given by \begin{equation}\label{concentration.9} \frac 14\sum_{n=1}^{\infty} n^{-\alpha} \epsilon^{8K n^{2\alpha}} \leq \frac 14\epsilon^{2K} \sum_{n=1}^{\infty}\epsilon^{6K n^{2\alpha}}\leq C \epsilon^{2K}, \end{equation} since the sum over $n$ is clearly convergent. Putting all the parts together, we get that \begin{equation}\label{concentration.12} \int_{S_{\mathrm{d}}\setminus C_\mathrm{d}}e^{-\wt G_{\gamma,N}(z)/\epsilon}dz\leq C\epsilon^{3K/2+1} \sqrt{2 \pi \epsilon}^{N-2}\frac{1} {\sqrt{\abs{\det\left(\nabla^2F_{\gamma,N}(O)\right)}}} \end{equation} and Lemma \ref{concentration} is proven. \end{proof} Finally, using \eqref{tralala}, \eqv(upper.5), \eqref{upper.7}, and \eqref{concentration.1}, we obtain the upper bound \begin{equation}\label{upper.8} \frac{\Phi(h^+)}{N^{N/2-1}} \leq \frac{\epsilon\sqrt{2\pi\epsilon}^{N-2}}{\sqrt{|\det(\nabla^2 F_{\gamma,N}(O))|}} (1+A_2\epsilon)\left(1+2A_1\epsilon|\ln\epsilon|^2 + A_3'\epsilon^{3K/2}\right) \end{equation} with the choice $\rho_k=4k^\alpha$, $0<\alpha<1/4$ and $\mathrm{d}=\sqrt{K\epsilon|\ln\epsilon|}$. Note that all constants are independent of $N$. Thus Lemma \ref{upper} is proven. \end{proof} \paragraph{Lower Bound} The idea here (as already used in \cite{BBI08}) is to get a lower bound by restricting the state space to a narrow corridor from $I_-$ to $I_+$ that contains the relevant paths and along which the potential is well controlled. We will prove the following lemma. \begin{lemma}\label{lower} There exists a constant $A_4<\infty$ such that for all $\epsilon$ and for all $N$, \begin{equation}\label{lower.1} \frac{\hbox{\rm cap}\left(B^N_+,B^N_-\right)}{N^{N/2-1}} \geq \epsilon \sqrt{2\pi\epsilon}^{N-2} \frac{1}{\sqrt{|\det(\nabla^2 F_{\gamma,N}(O))|}} \left(1- A_4 \sqrt{\epsilon |\ln \epsilon|^3}\right). 
\end{equation} \end{lemma} \begin{proof} Given a sequence, $(\rho_k)_{k\geq 1}$, with $r_{k,N}$ defined as in \eqref{neighborhood.2}, we set \begin{equation}\label{lower.2} \widehat{C}_{\mathrm{d}} = \left\{ z\in \widehat{\Bbb R}^N: z_0\in ]-1+\rho,1-\rho[,\ |z_k|\leq \mathrm{d}\ r_{k,N}/\sqrt {\lambda_{k,N}},\ 1\leq k\leq N-1\right\}. \end{equation} The restriction $|z_0|<1-\rho$ is made to ensure that $\widehat{C}_{\mathrm{d}}$ is disjoint from $B_\pm$ since in the new coordinates \eqref{change.1} $I_\pm=\pm (1,0,\dots,0)$. Clearly, if $h^*$ is the minimizer of the Dirichlet form, then \begin{equation}\label{lower.4} \hbox{\rm cap}\left(B^N_-,B^N_+\right)=\Phi(h^*) \geq\Phi_{\widehat{C}_{\mathrm{d}}}(h^*), \end{equation} where $\Phi_{\widehat C_\mathrm{d}}$ is the Dirichlet form for the process on $\widehat C_\mathrm{d}$, \begin{equation}\Eq(lower.5) \Phi_{\widehat C_\mathrm{d}}(h)=\epsilon\int_{\widehat{C}_{\mathrm{d}}} e^{-G_{\gamma,N}(x)/\epsilon}\|\nabla h(x)\|_2^2dx=N^{N/2-1}\epsilon\int_{z(\widehat{C}_{\mathrm{d}})} e^{-\wt G_{\gamma,N}(z)/\epsilon}\|\nabla \tilde h(z)\|_2^2dz. \end{equation} To get our lower bound we now use simply that \begin{equation} \|\nabla \tilde h(z)\|_2^2 =\sum_{k=0}^{N-1}\left|\frac{\partial \tilde h}{\partial z_k}\right|^2 \geq \left|\frac{\partial \tilde h}{\partial z_0}\right|^2, \end{equation} so that \begin{equation}\label{lower.6} \frac{\Phi(h^*)}{N^{N/2-1}} \geq\epsilon\int_{z(\widehat{C}_{\mathrm{d}})} e^{-\wt G_{\gamma,N}(z)/\epsilon}\Big|\frac{\partial \tilde h^*} {\partial z_0}(z)\Big|^2dz =\wt\Phi_{\widehat C_\mathrm{d}}(\tilde h^*) \geq \min_{ h\in{\cal H}}\wt\Phi_{\widehat C_\mathrm{d}}(\tilde h). \end{equation} The remaining variational problem involves only functions depending on the single coordinate $z_0$, with the other coordinates, $z_\bot=(z_i)_{1\leq i\leq N-1}$, appearing only as parameters. 
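For completeness, we note how the explicit one-dimensional minimizer arises (a standard observation, spelled out here for the reader's convenience). For fixed $z_\bot$, minimizing $\int_{-1+\rho}^{1-\rho} e^{-\wt G_{\gamma,N}(z_0,z_\bot)/\epsilon}\,|h'(z_0)|^2\,dz_0$ over functions of $z_0$ with boundary values $h(-1+\rho)=1$ and $h(1-\rho)=0$ leads to the Euler--Lagrange equation

```latex
\frac{\partial}{\partial z_0}
\Big(e^{-\wt G_{\gamma,N}(z_0,z_\bot)/\epsilon}\,
\frac{\partial h}{\partial z_0}(z_0)\Big)=0,
\qquad\text{i.e.}\qquad
\frac{\partial h}{\partial z_0}(z_0)
= c(z_\bot)\,e^{\wt G_{\gamma,N}(z_0,z_\bot)/\epsilon},
```

with the constant $c(z_\bot)$ fixed by the boundary conditions.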
The corresponding minimizer is readily found explicitly as \begin{equation}\label{lower.7} \tilde h^-(z_0,z_\bot)=\frac{\int_{z_0}^{1-\rho}e^{\wt G_{\gamma,N}(s,z_{\bot})/\epsilon}ds} {\int_{-1+\rho}^{1-\rho}e^{\wt G_{\gamma,N}(s,z_{\bot})/\epsilon}ds} \end{equation} and hence the capacity is bounded from below by \begin{equation}\label{lower.8} \frac{\hbox{\rm cap}\left(B^N_-,B^N_+\right)}{N^{N/2-1}} \geq \wt\Phi_{\widehat C_\mathrm{d}}(\tilde h^-) =\epsilon\int_{\widehat{C}^{\bot}_{\mathrm{d}}} \Big(\int_{-1+\rho}^{1-\rho}e^{\wt G_{\gamma,N}(z_0,z_{\bot})/\epsilon}dz_0\Big)^{-1}dz_{\bot}. \end{equation} Next, we have to evaluate the integrals in the r.h.s. above. The next lemma provides a suitable approximation of the potential on $\widehat C_\mathrm{d}$. Note that since $z_0$ is no longer small, we only expand in the coordinates $z_\bot$. \begin{lemma}\label{l-approx} Let $r_{k,N}$ be chosen as before with $\rho_k=4k^\alpha$, $0<\alpha<1/4$. Then there exists a constant, $A_5$, and $\mathrm{d}_0>0$, such that, for all $N$ and $\mathrm{d}<\mathrm{d}_0$, on $\widehat C_\mathrm{d}$, \begin{equation} \Eq(l-approx.1) \left|\wt G_{\gamma,N}(z)-\left( -\frac 12z_0^2 + \frac 14 z_0^4 +\frac 1{2} \sum_{k=1}^{N-1}\lambda_{k,N} |z_k|^2 +z_0^2 f(z_\bot)\right)\right| \leq A_5\mathrm{d}^3, \end{equation} where \begin{equation}\Eq(1-approx.1.1) f(z_\bot)\equiv \frac 32\sum_{k=1}^{N-1}|z_k|^2. \end{equation} \end{lemma} \begin{proof} We analyze the non-quadratic part of the potential on $\widehat C_\mathrm{d}$, using \eqref{potential.8} and \eqref{fourier.2} \begin{equation}\label{l-approx.3} \frac{1}{N}\|x(Nz)\|_4^4 =\frac{1}{N}\sum_{i=0}^{N-1}|x_i(Nz)|^4 =\frac{1}{N}\sum_{i=0}^{N-1}\left|z_0+\sum_{k=1}^{N-1}\omega^{ik}z_k\right|^4 =\frac{z_0^4}{N}\ \sum_{i=0}^{N-1}|1+u_i|^4 \end{equation} where $u_i=\frac 1{z_0}\sum_{k=1}^{N-1}\omega^{ik}z_k$. Note that $\sum_{i=0}^{N-1}u_i=0$ and $u=\frac1{z_0}x\left(N(0,z_\bot)\right)$ and that for $z\in \widehat{\Bbb R}^N$, $u_i$ is real. 
Thus, since $\sum_{i=0}^{N-1}u_i=0$, \begin{equation}\label{l-approx.4} \sum_{i=0}^{N-1}|1+u_i|^4=N+\sum_{i=0}^{N-1}\left(6u_i^2+4u_i^3 +u_i^4\right), \end{equation} and we get that \begin{equation}\label{l-approx.5} \bigg|\frac{1}{N}\|x(Nz)\|_4^4 -z_0^4\Big(1+\frac6N \sum_{i=0}^{N-1}u_i^2\Big) \bigg| \leq \frac {z_0^4}N\left(4\|u\|_3^3+\|u\|_4^4\right). \end{equation} A simple computation shows that \begin{equation}\label{l-approx.6} \frac6N \sum_i u_i^2=\frac 6{z_0^2}\sum_{k\neq0} |z_k|^2. \end{equation} Thus, as $|z_0|\leq1$, we see that \begin{equation}\label{l-approx.7} \left|\frac{1}{N}\|x(Nz)\|_4^4-z_0^4-6z_0^2\sum_{k\neq0}|z_k|^2\right| \leq \frac 1N\big(4\|x(N(0,z_\bot))\|_3^3+\|x(N(0,z_\bot))\|_4^4\big). \end{equation} Using again Lemma \ref{norm}, we get \begin{eqnarray}\label{l-approx.8} \|x(N(0,z_\bot))\|_3^3 &\leq&B_3N\mathrm{d}^3 \\\nonumber\|x(N(0,z_\bot))\|_4^4 &\leq&B_4N\mathrm{d}^4. \end{eqnarray} Therefore, Lemma \ref{l-approx} is proved, with $A_5=4 B_3+B_4\mathrm{d}_0$. \end{proof} We use Lemma \ref{l-approx} to obtain the upper bound \begin{eqnarray}\label{lower.9} \int_{-1+\rho}^{1-\rho}e^{\wt G_{\gamma,N}(z_0,z_{\bot})/\epsilon}dz_0 &\leq& \exp\left(\frac{1}{2\epsilon}\sum_{k\neq0}\lambda_{k,N}|z_k|^2+\frac{A_5\mathrm{d}^3}{\epsilon}\right)g(z_\bot), \end{eqnarray} where \begin{equation}\label{lower.10} g(z_\bot) = \int_{-1+\rho}^{1-\rho}\exp\left(-\epsilon^{-1}\left(\frac{1}{2}z_0^2 -\frac{1}{4}z_0^4-z_0^2f(z_\bot)\right)\right)dz_0. \end{equation} This integral is readily estimated via Laplace's method as \begin{equation}\Eq(lower.11) g(z_\bot)=\frac{\sqrt{2\pi\epsilon}}{\sqrt{1-2f(z_\bot)}} \left(1+O(\epsilon)\right) = \sqrt{2\pi\epsilon} \left(1+O(\epsilon)+O(\mathrm{d}^2)\right). 
\end{equation} Inserting this estimate into \eqv(lower.8), it remains to carry out the integrals over the vertical coordinates, which yields \begin{eqnarray} \Eq(lower.12)\nonumber \wt\Phi_{\widehat C_\mathrm{d}}(\tilde h^-)&\geq& \epsilon \int_{\widehat C_\mathrm{d}^\bot} \exp\left(-\frac{1}{2\epsilon}\sum_{k=1}^{N-1}\lambda_{k,N}|z_k|^2-\frac{A_5\mathrm{d}^3}{\epsilon}\right) \frac1{\sqrt{2\pi\epsilon}} \left(1+O(\epsilon)+O(\mathrm{d}^2)\right)dz_\bot\\\nonumber &=& \sqrt{\frac \epsilon{2\pi}} \int_{\widehat C_\mathrm{d}^\bot} \exp\left(-\frac{1}{2\epsilon}\sum_{k=1}^{N-1}\lambda_{k,N}|z_k|^2\right)dz_\bot \left(1+O(\epsilon)+O(\mathrm{d}^2)+O(\mathrm{d}^3/\epsilon)\right).\\ \end{eqnarray} The integral is readily bounded by \begin{eqnarray}\Eq(lower.13)\nonumber &&\int_{\widehat C_\mathrm{d}^\bot} \exp\left(-\frac{1}{2\epsilon}\sum_{k=1}^{N-1}\lambda_{k,N}|z_k|^2\right)dz_\bot \geq \int_{{\Bbb R}^{N-1}} \exp\left(-\frac{1}{2\epsilon}\sum_{k=1}^{N-1}\lambda_{k,N}|z_k|^2\right)dz_\bot \\\nonumber &&-\sum_{k=1}^{N-1} \int_{\Bbb R} dz_1\dots\int_{|z_k|\geq \mathrm{d} r_{k,N}/\sqrt{\lambda_{k,N}}}\dots \int_{{\Bbb R}}dz_{N-1} \exp\left(-\frac{1}{2\epsilon}\sum_{k=1}^{N-1}\lambda_{k,N}|z_k|^2\right) \\\nonumber &&\geq \sqrt{2\pi \epsilon}^{N-1} \prod_{i=1}^{N-1} \sqrt {\lambda_{i,N}}^{-1}\left(1- \sqrt{\frac {2\epsilon}\pi}\mathrm{d}^{-1}\sum_{k=1}^{N-1}r_{k,N}^{-1}e^{-\mathrm{d}^2 r_{k,N}^2/2\epsilon} \right)\\ && = \sqrt{2\pi \epsilon}^{N-1} \frac 1{\sqrt{\abs{\det \nabla^2 F_{\gamma,N}(O)}}} \left(1+O(\epsilon^K)\right), \end{eqnarray} for $\mathrm{d}=\sqrt{K\epsilon|\ln \epsilon|}$, with $O(\epsilon^K)$ uniform in $N$. Putting all estimates together, we arrive at the assertion of Lemma \thv(lower). \end{proof} \subsection{Uniform estimate of the mass of the equilibrium potential} We will prove the following proposition. 
\begin{proposition}\label{numerator} There exists a constant $A_6$ such that, for all $\epsilon<\epsilon_0$ and all $N$, \begin{equation}\Eq(numerator.1) \frac{1}{N^{N/2}}\int_{{B^N_+}^c}h^*_{B^N_-,B^N_+}(x)e^{- G_{\gamma,N}(x)/\epsilon}dx =\frac{\sqrt{2\pi\epsilon}^{N}\exp\left(\frac1{4\epsilon}\right)}{\sqrt{\det(\nabla^2 F_{\gamma,N}(I_-))}}\left(1+ R(N,\epsilon)\right), \end{equation} where $ |R(N,\epsilon)|\leq A_6 \sqrt{\epsilon|\ln \epsilon|^3} $. \end{proposition} \begin{proof} The predominant contribution to the integral comes from the minimum $I_-$, since around $I_+$ the harmonic function $h^*_{B^N_-,B^N_+}(x)$ vanishes. The proof will go in two steps. We define the tube in the $z_0$-direction, \begin{equation}\Eq(tube.1) \wt C_\mathrm{d}\equiv \left\{z: \forall_{k\geq 1} |z_k|\leq \mathrm{d} r_{k,N}/\sqrt{\lambda_{k,N}} \right\}, \end{equation} and show that the mass of the complement of this tube is negligible. In a second step we show that within that tube, only the neighborhood of $I_-$ gives a relevant, and indeed the desired, contribution. The reason for splitting our estimates up in this way is that we have to use different ways to control the non-quadratic terms. \begin{lemma}\TH(tube.2) Let $r_{k,N}$ be chosen as before and let $\mathrm{d}=\sqrt{K\epsilon|\ln \epsilon|}$. Then there exists a finite numerical constant, $A_7$, such that for all $N$, \begin{equation}\label{numerator.1bis} \frac{1}{N^{N/2}}\int_{{\wt C_\mathrm{d}}^c}e^{- G_{\gamma,N}(x)/\epsilon}dx \leq A_7\sqrt{2\pi\epsilon}^{N} \frac{e^{\frac1{4\epsilon}}}{\sqrt{\det(\nabla^2 F_{\gamma,N}(O))}} \epsilon^{K}. \end{equation} The same estimate holds for the integral over the complement of the set \begin{equation}\Eq(last.2) D_\mathrm{d}\equiv\left\{x:|z_0-1|\leq \mathrm{d} \lor |z_0+1|\leq \mathrm{d}\right\}. \end{equation} \end{lemma} \begin{proof} Recall that $z_0=\frac 1N\sum_{i=0}^{N-1}x_i$. 
Then we can write \begin{equation}\label{numerator.2} G_{\gamma,N}(x) = -\frac 12z_0^2+\frac 14z_0^4 +\frac 1{2N}(x,\Delta x) -\frac 1{2N}\|x\|_2^2 +\frac 12 z_0^2 +\frac 1{4N} \|x\|^4_4 -\frac 14 z_0^4. \end{equation} Notice first that by applying the Cauchy--Schwarz inequality, it follows that \begin{equation}\Eq(tube.5) z_0^4 =N^{-4} \left(\sum_{i=0}^{N-1}x_i\right)^4 \leq N^{-2} \left(\sum_{i=0}^{N-1} x_i^2\right)^2 \leq N^{-1}\sum_{i=0}^{N-1}x_i^4. \end{equation} Moreover, $N^{-1}\|x\|_2^2= \|z\|_2^2$, so that expressed in the variables $z$, \begin{eqnarray}\Eq(tube.6) G_{\gamma,N}(x) &\geq& -\frac 12z_0^2+\frac 14z_0^4 +\frac 1{2N}(x,\Delta x) -\frac 12\sum_{k=1}^{N-1} |z_k|^2 \\\nonumber &=& -\frac 12z_0^2+\frac 14z_0^4 +\frac 12\sum_{k=1}^{N-1} \lambda_{k,N}|z_k|^2. \end{eqnarray} Therefore, as in the estimate \eqv(concentration.5), \begin{eqnarray}\Eq(tube.7) \nonumber \int_{\wt C_\mathrm{d}^c}e^{-\wt G_{\gamma,N}(z)/\epsilon}dz &\leq& \int_{{\Bbb R}}e^{-\epsilon^{-1}(z_0^4/4-z_0^2/2)}dz_0 \sum_{k=1}^{N-1}\int_{|z_k|\geq \mathrm{d} r_{k,N}/\sqrt{\lambda_{k,N}}} e^{-\lambda_{k,N} |z_k|^2/2\epsilon}dz_k \\&&\quad\quad\times \prod_{1\leq i\neq k\leq N-1} \int_{{\Bbb R}}e^{-\lambda_{i,N} |z_i|^2/2\epsilon}dz_i \\\nonumber &\leq& \int_{{\Bbb R}}e^{-\epsilon^{-1}(z_0^4/4-z_0^2/2)}dz_0 \mathrm{d}^{-1} \sqrt{\epsilon} \sqrt{\prod_{i=1}^{N-1}2\pi\epsilon \lambda_{i,N}^{-1}} \sum_{k=1}^{N-1} r_{k,N}^{-1} e^{-\mathrm{d}^2r_{k,N}^2/2\epsilon} \\\nonumber &\leq& \int_{{\Bbb R}}e^{-\epsilon^{-1}(z_0^4/4-z_0^2/2)}dz_0 \sqrt{\prod_{i=1}^{N-1}2\pi\epsilon \lambda_{i,N}^{-1}} C \epsilon^K. \end{eqnarray} Since clearly, \begin{equation}\Eq(tube.8) \int_{{\Bbb R}}e^{-\epsilon^{-1}(z_0^4/4-z_0^2/2)}dz_0 = 2\sqrt{\pi \epsilon} e^{1/4\epsilon}(1+{\cal O}(\epsilon)), \end{equation} this proves the first assertion of the lemma. 
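For the reader's convenience, we recall how \eqv(tube.8) follows from Laplace's method (here $\phi$ is just a local shorthand): the function $\phi(z_0)=z_0^4/4-z_0^2/2$ attains its minimum $-1/4$ at the two points $z_0=\pm1$, where $\phi''(\pm1)=2$, so that

```latex
\int_{{\Bbb R}}e^{-\phi(z_0)/\epsilon}dz_0
= 2\,e^{1/4\epsilon}\int_{{\Bbb R}}e^{-(z_0-1)^2/\epsilon}dz_0
\left(1+{\cal O}(\epsilon)\right)
= 2\sqrt{\pi\epsilon}\; e^{1/4\epsilon}\left(1+{\cal O}(\epsilon)\right),
```

the factor $2$ accounting for the two symmetric minima.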
Quite clearly, the same bounds show that the contribution from the set where $|z_0\pm 1|\geq \mathrm{d}$ is negligible, since the range of the integral over $z_0$ is then bounded away from the minima in the exponent. \end{proof} Finally, we want to compute the remaining part of the integral in \eqv(numerator.1), i.e. the integral over $\wt C_\mathrm{d}\cap \{x:|z_0+1|\leq \mathrm{d}\}$. Since the eigenvalues of the Hessian at $I_-$, $\nu_{k,N}$, are comparable to the eigenvalues $\lambda_{k,N}$ for $k\geq 1$ in the sense that there is a finite positive constant, $c^2_\mu$, depending only on $\mu$, such that $\lambda_{k,N}\leq\nu_{k,N}\leq c^2_\mu\lambda_{k,N}$, and since $\nu_{0,N}=2$, this set is contained in $C_{c_\mu\mathrm{d}}(I_-)$, where \begin{equation}\label{numerator.4} C_{\mathrm{d}}(I_-)\equiv \left\{z\in \widehat {\Bbb R}^N: |z_0+1|\leq \frac{\mathrm{d}}{\sqrt {\nu_{0,N}}},\, |z_k|\leq \mathrm{d} \frac{r_{k,N}}{\sqrt {\nu_{k,N}}},\ 1\leq k\leq N-1\right\}. \end{equation} It is easy to verify that there exists a constant, $A_8$, such that, on $C_{\mathrm{d}}(I_-)$, \begin{equation}\label{numerator.5} \|z-z(I_-)\|_2^2\leq\mathrm{d}^2\sum_{k=0}^{N-1}\frac{r_{k,N}^2}{\nu_{k,N}}\leq\mathrm{d}^2A_8K_2^2, \end{equation} and so, for $\mathrm{d}=\sqrt{K\epsilon|\ln \epsilon|}$, $C_{\mathrm{d}}(I_-)\subset z(B_-)$. On $C_\mathrm{d}(I_-)$ we have the following quadratic approximation. \begin{lemma}\label{n-approx} For all $N$, \begin{equation}\label{n-approx.1} \wt G_{\gamma,N}(z)+\frac 14-\frac 1{2} \sum_{k=0}^{N-1}\nu_{k,N} |z_k-z_k(I_-)|^2 = R(z), \end{equation} and there exists a constant $A_{9}$ and $\mathrm{d}_0$ such that, for $\mathrm{d}<\mathrm{d}_0$, on $C_{\mathrm{d}}(I_-)$, \begin{equation}\label{n-approx.2} |R(z)|\leq A_{9}\mathrm{d}^3, \end{equation} where the constants $(\rho_k)$ are chosen as before such that $K_{4/3}$ is finite. \end{lemma} \begin{proof} The proof goes in exactly the same way as in the previous cases and is left to the reader. 
\end{proof} With this estimate it is now obvious that \begin{eqnarray}\Eq(last.1) \int_{C_\mathrm{d}(I_-)}\tilde h^*_{B_-^N,B_+^N}(z)e^{-\wt G_{\gamma,N}(z)/\epsilon}dz &=& \int_{C_\mathrm{d}(I_-)}e^{- \wt G_{\gamma,N}(z)/\epsilon}dz \\\nonumber &=& e^{1/4\epsilon} \frac{\sqrt{2\pi\epsilon}^N} {\sqrt{\det \nabla^2 F_{\gamma,N}(I_-)}}\left(1+ O(\mathrm{d}^3/\epsilon)\right), \end{eqnarray} where the first equality holds since $\tilde h^*_{B_-^N,B_+^N}\equiv 1$ on $C_\mathrm{d}(I_-)\subset z(B_-)$. Using that $\tilde h^*_{B_-^N,B_+^N}(z)$ vanishes on $B_+^N$ and hence on $C_\mathrm{d}(I_+)$, this estimate together with Lemma \thv(tube.2) proves the proposition. \end{proof} \subsection{Proof of Theorem \thv(main)} \begin{proof} The proof of Theorem \thv(main) is now an obvious consequence of \eqref{key.4} together with Propositions \ref{capacity} and \ref{numerator}. \end{proof} \end{document}
Sums of powers In mathematics and statistics, sums of powers occur in a number of contexts: • Sums of squares arise in many contexts. For example, in geometry, the Pythagorean theorem involves the sum of two squares; in number theory, there are Legendre's three-square theorem and Jacobi's four-square theorem; and in statistics, the analysis of variance involves summing the squares of quantities. • Faulhaber's formula expresses $1^{k}+2^{k}+3^{k}+\cdots +n^{k}$ as a polynomial in n, or alternatively in terms of a Bernoulli polynomial. • Fermat's right triangle theorem states that there is no solution in positive integers for $a^{2}=b^{4}+c^{4}$ and $a^{4}=b^{4}+c^{2}$. • Fermat's Last Theorem states that $x^{k}+y^{k}=z^{k}$ is impossible in positive integers with k>2. • The equation of a superellipse is $|x/a|^{k}+|y/b|^{k}=1$. The squircle is the case $k=4,a=b$. • Euler's sum of powers conjecture (disproved) concerns situations in which the sum of n integers, each a kth power of an integer, equals another kth power. • The Fermat-Catalan conjecture asks whether there are an infinitude of examples in which the sum of two coprime integers, each a power of an integer, with the powers not necessarily equal, can equal another integer that is a power, with the reciprocals of the three powers summing to less than 1. • Beal's conjecture concerns the question of whether the sum of two coprime integers, each a power greater than 2 of an integer, with the powers not necessarily equal, can equal another integer that is a power greater than 2. • The Jacobi–Madden equation is $a^{4}+b^{4}+c^{4}+d^{4}=(a+b+c+d)^{4}$ in integers. • The Prouhet–Tarry–Escott problem considers sums of two sets of kth powers of integers that are equal for multiple values of k. • A taxicab number is the smallest integer that can be expressed as a sum of two positive third powers in n distinct ways. 
• The Riemann zeta function is the sum of the reciprocals of the positive integers each raised to the power s, where s is a complex number whose real part is greater than 1. • The Lander, Parkin, and Selfridge conjecture concerns the minimal value of m + n in $\sum _{i=1}^{n}a_{i}^{k}=\sum _{j=1}^{m}b_{j}^{k}.$ • Waring's problem asks whether for every natural number k there exists an associated positive integer s such that every natural number is the sum of at most s kth powers of natural numbers. • The successive powers of the golden ratio φ obey the Fibonacci recurrence: $\varphi ^{n+1}=\varphi ^{n}+\varphi ^{n-1}.$ • Newton's identities express the sum of the kth powers of all the roots of a polynomial in terms of the coefficients in the polynomial. • The sum of cubes of numbers in arithmetic progression is sometimes another cube. • The Fermat cubic, in which the sum of three cubes equals another cube, has a general solution. • The power sum symmetric polynomial is a building block for symmetric polynomials. • The sum of the reciprocals of all perfect powers including duplicates (but not including 1) equals 1. • The Erdős–Moser equation, $1^{k}+2^{k}+\cdots +m^{k}=(m+1)^{k}$ where $m$ and $k$ are positive integers, is conjectured to have no solutions other than $1^{1}+2^{1}=3^{1}$. • The sums of three cubes cannot equal 4 or 5 modulo 9, but it is unknown whether all remaining integers can be expressed in this form. • The sums of powers $S_{m}(z,n)=z^{m}+(z+1)^{m}+\cdots +(z+n-1)^{m}$ are related to the Bernoulli polynomials $B_{m}(z)$ by $(\partial _{n}-\partial _{z})\,S_{m}(z,n)=B_{m}(z)$ and $(\partial _{2\lambda }-\partial _{Z})\,S_{2k+1}(z,n)={\hat {S}}'_{k+1}(Z)$, where $Z=z(z-1)$, $\lambda =S_{1}(z,n)$, ${\hat {S}}_{k+1}(Z)\equiv S_{2k+1}(0,z)$. • The sum of the terms in the geometric series is $\sum _{k=i}^{n}z^{k}={\frac {z^{i}-z^{n+1}}{1-z}}.$ See also • Sum of squares • Sum of reciprocals • Diophantine equation
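The taxicab-number entry above is easy to check by brute force. The following short script (illustrative only, not part of the article) finds the smallest integer expressible as a sum of two positive cubes in two distinct ways:

```python
from collections import defaultdict

def taxicab(ways=2, limit=20):
    # Count representations n = a^3 + b^3 with 1 <= a <= b <= limit,
    # then return the smallest n admitting at least `ways` of them.
    reps = defaultdict(int)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            reps[a**3 + b**3] += 1
    return min(n for n, count in reps.items() if count >= ways)

print(taxicab())  # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```

The `limit` parameter is just a search bound large enough to cover the known answer Ta(2) = 1729.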
Kybernetika

Gu, Da-Ke; Zhang, Da-Wei: Parametric control to quasi-linear systems based on dynamic compensator and multi-objective optimization. (English). Kybernetika, vol. 56 (2020), issue 3, pp. 516-542

MSC: 93B51, 93B52, 93B60 | MR 4131741 | Zbl 07250735 | DOI: 10.14736/kyb-2020-3-0516

Keywords: feedback quasi-linear systems; parametric control; dynamic compensator; multi-objective design and optimization; degrees of freedom (DOFs) in parameter matrices

This paper considers a parametric approach for quasi-linear systems by using a dynamic compensator and multi-objective optimization. Based on the solutions of generalized Sylvester equations, we establish the more general parametric forms of the dynamic compensator and of the left and right closed-loop eigenvector matrices, and give two groups of arbitrary parameters. By using the parametric approach, the closed-loop system is converted into a linear constant one with a desired eigenstructure. Meanwhile, a novel method to realize multi-objective design and optimization is also proposed. Multiple performance objectives, comprising overall eigenvalue sensitivity, $H_2$ norm, $H_\infty$ norm and low compensation gain, are formulated in terms of the arbitrary parameters; the robustness and low-compensation-gain criteria are then expressed by a comprehensive objective function in which each performance index is weighted. By utilizing the degrees of freedom (DOFs) in the arbitrary parameters, the comprehensive objective function can be optimized so that an optimized dynamic compensator satisfying the robustness and low-compensation-gain criteria is found. Finally, an example of attitude control of combined spacecraft is presented, which demonstrates the effectiveness and feasibility of the parametric approach.
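To give a flavor of the generalized Sylvester equations $AV-EVF=BW$ on which the parametric approach rests, here is a minimal numeric sketch. It is not from the paper: all matrices are invented, and the simplifying assumptions $E=I$ with diagonal $A$ and $F$ make the equation solvable entrywise (the free parameter matrix $W$ enters only through the right-hand side $C=BW$).

```python
def solve_sylvester_diag(a, f, C):
    """Solve A V - V F = C for diagonal A = diag(a) and F = diag(f).

    Requires a[i] != f[j] for all i, j; then each entry decouples:
    V[i][j] = C[i][j] / (a[i] - f[j]).
    """
    return [[C[i][j] / (a[i] - f[j]) for j in range(len(f))]
            for i in range(len(a))]

# Invented data: A = diag(3, 5), F = diag(1, 2); C stands in for B W.
a, f = [3.0, 5.0], [1.0, 2.0]
C = [[2.0, 1.0], [4.0, 6.0]]
V = solve_sylvester_diag(a, f, C)

# Check the residual A V - V F - C = 0 entry by entry.
residual = [[a[i] * V[i][j] - V[i][j] * f[j] - C[i][j] for j in range(2)]
            for i in range(2)]
assert all(abs(r) < 1e-12 for row in residual for r in row)
```

In the general (non-diagonal) case one would instead vectorize the equation into a linear system in the entries of $V$; the diagonal case is chosen here purely so the sketch stays self-contained.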
\begin{document} \begin{titlepage} \maketitle \begin{abstract} This paper introduces a framework to study innovation in a strategic setting, in which innovators allocate their resources between exploration and exploitation in continuous time. Exploration creates public knowledge, while exploitation delivers private benefits. Through the analysis of a class of Markov equilibria, we demonstrate that knowledge spillovers accelerate knowledge creation and expedite its availability, thereby encouraging innovators to increase exploration. The prospect of the ensuing superior long-term innovations further motivates exploration, giving rise to a positive feedback loop. This novel feedback loop can substantially mitigate the free-riding problem arising from knowledge spillovers. \end{abstract} {\bfseries Keywords:} Strategic experimentation, Encouragement effect, Innovation, Multi-armed bandit {\bfseries JEL classification:} C73; D83; O3. \end{titlepage} \section{Introduction} Without innovation, we would still be living in caves and foraging for food in the wild. Although people celebrate disruptive inventions like penicillin and the Internet, innovations are often the result of small improvements through incremental experimentation. This approach can be observed in various fields and leads to significant advancements and innovations. In agriculture, breeding and selection of crops have led to new varieties with improved yields and disease resistance. In architecture, iconic structures that balance form and function have emerged through refinement of styles over time. And the gradual improvement of digital electronics has resulted in powerful, portable, and user-friendly devices. The pursuit of innovation through experimentation typically entails opportunity costs and uncertainties, resulting in ubiquitous trade-offs between creating value through innovation (exploration) and capturing value through established operations (exploitation). 
As innovation is not guaranteed, innovators must carefully weigh the potential benefits of exploring new alternatives against the costs of diverting resources from their proven operations. As a result, incentives for experimentation can be undermined by free-riding when its outcomes are publicly observable, or when technologies can be reverse-engineered. Such information and knowledge spillovers lead to learning inefficiencies and stifle technological progress. Given the prevalence of free-riding opportunities, one might ask why anyone would pursue exploration. What are the trade-offs that individuals and organizations face when developing technologies with strategic considerations in mind? How does this strategic effect impact technological progress in the long run? We explore these questions in a model intended to capture the dynamics of collective exploration in a strategic setting, often seen in contexts such as research joint ventures, open-source software development, and academic collaborations. This paper analyzes a game of strategic exploration in which a finite number of forward-looking players jointly search for innovative technologies. Given a set of publicly available technologies, at each point in time, each player decides how to allocate a perfectly divisible unit of resource between exploration, which expands the set of feasible technologies at a rate proportional to the resource allocated, and exploitation, which yields a flow payoff from the adoption of one of the feasible technologies. The qualities of technologies, which determine the flow payoff from exploitation, are represented by an (initially unknown) realized path of Brownian motion, referred to as the \emph{technological landscape}. Not all technologies are readily available, and only the qualities associated with the explored technologies are known. The outcomes of exploration are transparent, and the technologies developed are treated as public goods. 
Consequently, perfect information and knowledge spillovers occur among players, offering abundant opportunities for free-riding, where some players benefit from the positive externalities generated by the experimentation of others. One of the major contributions of this paper is to extend the classic game of strategic experimentation to a setting with an unboundedly expandable set of arms while allowing for a certain degree of correlation among them. When modeling technologies as arms, correlation between arms and discoveries of new arms are key features of technology development and innovation. However, most of the literature on strategic experimentation assumes a fixed set of independent arms. Indeed, the analysis in the previously studied models would be substantially complicated by correlation. By contrast, our proposed model demonstrates that under a form of correlation structure that is simple yet appropriate in the context of technology development, such complexities can be circumvented by simply focusing on incremental experimentation. This approach offers an intuitive way to capture the richness of the dynamic and path-dependent nature of technology development. We first consider the efficient benchmark in which the players work jointly to maximize the average payoff. The solution to this cooperative problem takes a simple cutoff form: All players allocate their resource exclusively to exploration if the quality difference between the best available technology and the latest outcome of the experiment is below a time-invariant cutoff, and exclusively to exploitation otherwise. Players in a larger team are more ambitious, manifested by their persistence in exploration even after experiencing a prolonged series of setbacks. As team size grows, in the limit, exploration persists even when the quality difference becomes arbitrarily large, indefinitely broadening the set of long-run feasible technologies. 
In the strategic problem, we restrict attention to Markov perfect equilibria (MPE) with the difference between the qualities of the best available technology and the latest technology under development as the state variable. Despite the considerable disparities between our setting and the two-armed bandit models in the literature on strategic experimentation, all MPE in our model exhibit a similar \emph{encouragement effect}: The future information and knowledge spillovers from other players encourage at least one of them to continue exploration at states where a single agent would already have given up. The players thus act more ambitiously in the hope that successful outcomes will bring the other players back to exploration in the future, promoting innovation and sharing its burden. In addition, exactly because of such an incentive, any player who never explores would strictly prefer the deviation that resumes the exploration process as a volunteer at the state where all exploration stops. Therefore, no player always free-rides in equilibrium even though exploration per se never produces any payoffs. We further show that there is no MPE in which all players use simple cutoff strategies as in the cooperative solution. This result suggests that in any symmetric equilibrium, each player chooses an interior allocation of their resource at some states. We establish the existence and uniqueness of the symmetric equilibrium, and provide closed-form representations for the equilibrium strategies and the associated payoff functions. In the symmetric equilibrium, for a set of parameters in which free-riding incentives are relatively weak, all players allocate their resource exclusively to exploration when the latest outcome is sufficiently close to the highest-known quality. 
However, for some other parameters, the incentives to free-ride can also be so strong that, right after the best technology to date has been discovered, players, expecting incremental improvements to be achieved in no time, still allocate a positive fraction of their resource to exploitation. Within the region of interior allocation, the players gradually reduce the resource allocated to exploration as the latest outcomes deteriorate, regardless of the parameters. Above all, we identify a novel \emph{innovation effect} that is essential for understanding the incentives for technology development in dynamic and strategic environments. To explain this effect, we introduce the concept of \emph{returns to cooperation} as follows. We say a technological landscape yields \emph{decreasing} returns to cooperation if players' payoff functions in the cooperative solution remain bounded as the number of players increases to infinity, reminiscent of the existing bandit models with a fixed set of arms. By contrast, exploring cooperatively over landscapes with nondecreasing returns to cooperation gives rise to unbounded payoffs in the limit. This asymptotic feature sets our model apart from the existing ones, as it demonstrates that introducing innovation into multi-armed bandit models allows us to identify a novel incentive for exploration: the prospect of technological advancement. We then examine the interplay between this innovation effect and the encouragement effect through the comparative statics of the unique symmetric equilibrium with respect to the number of players. Our analysis shows that the encouragement effect and the innovation effect reinforce each other. Moreover, the innovation effect boosts the encouragement effect to overcome the free-rider effect as the team gets large, if and only if the players are sufficiently patient and the underlying landscape yields \emph{increasing} returns to cooperation.
In such a case, even though the rate of exploration is suboptimal, the set of long-run feasible technologies expands indefinitely as the team size grows, resembling the cooperative solution. The prevalence of the encouragement effect in our model stands in marked contrast to the existing literature on strategic experimentation, in which the free-rider effect always prevails due to the lack of innovation possibilities. In the symmetric equilibrium, exploration slows down as the latest outcomes deteriorate, but never fully stops. The reason is that as the latest outcomes keep deteriorating, exploration would slow down so severely that technologies never progress to the point at which all players prefer to allocate the resource exclusively to exploitation. This observation strongly suggests that asymmetric equilibria with the rate of exploration bounded away from zero in the region of interior allocation could improve welfare and long-run outcomes over the symmetric equilibrium. To investigate such a possibility, we construct a class of asymmetric MPE in which the players take turns performing exploration at unpromising states, so that each player achieves a higher payoff than in the symmetric equilibrium. It turns out that these asymmetric MPE are the best MPE in two-player games in terms of average payoffs.\footnote{ It is not clear whether these asymmetric MPE attain the highest average payoffs for games with more than two players. } Unlike in the symmetric equilibrium, the players in these asymmetric MPE become more ambitious as team size grows, irrespective of the other parameters. The intuition for this result is that when alternation is allowed, the burden of keeping the exploration process active can be shared among more players in larger teams, and thus the players are willing to explore at less promising states. 
As a consequence, the set of long-run feasible technologies expands indefinitely as team size grows, irrespective of the patience of the players or the returns to cooperation. Nevertheless, similar to the symmetric equilibrium with impatient players, in these asymmetric equilibria innovations might arrive at a much slower rate than in the cooperative solution because of the low proportion of the overall resource allocated to exploration. As a result, the welfare loss might still be significant in large teams because of the strong free-riding incentives. \subsection{Related Literature} \label{sec:lit} This paper combines two distinct strands of literature on learning and experimentation. The first strand studies experimentation in rich and complex environments, drawing on the seminal work by \textcite{Callander:2011}. He proposed modeling the correlation between technologies by a Brownian path and studied experimentation conducted by a sequence of myopic agents. In his model, experiment outcomes closer to zero are preferable to the agents. In a similar setting, \textcite{CallanderMatouschek:2019} consider agents with insatiable preferences and examine the impact of risk aversion on their search performance. \textcite{GarfagniniStrulovici:2016} extend Callander's model to a setting with overlapping generations, in which short-lived yet forward-looking players search on a Brownian path for technologies with higher qualities. They mainly focus on the search patterns and the long-run dynamics, and establish the stagnation of search and the emergence of a technological standard in finite time. The qualities of technologies contribute exponentially to the payoffs in our model instead of linearly as in theirs. As a result, stagnation can be avoided in our model even under a negative drift of the Brownian path.
All these models focus on non-strategic environments and preclude long-lived forward-looking agents, as their discrete-time frameworks create unexplored gaps between explored technologies, posing challenges for further analysis. To overcome this difficulty, we forgo the fine details of the learning dynamics for analytical tractability by imposing continuity on the experimentation process. This simplification can be interpreted as the qualities of neighboring technologies being revealed during experimentation, so that no unexplored territory remains between the explored technologies. Moreover, we impose a hard constraint on the scope of exploration to capture the scenario in which technologies far ahead of their time are infeasible to explore today. These abstractions allow us to derive explicit expressions for the equilibrium payoffs and strategies, perform comparative statics analysis, and construct asymmetric equilibria. The second strand of literature, often referred to as strategic experimentation, originated from the Brownian model introduced by \textcite{BoltonHarris:1999}, and was further enriched by the exponential model in \textcite{KellerRadyCripps:2005} and the Poisson model in \textcite{KellerRady:2010,KellerRady:2015}. In all of these models, players face identical two-armed bandit machines, which consist of a risky arm with an unknown quality and a safe arm with a known quality. At each point in time, each player decides how to split one unit of a perfectly divisible resource between these arms, so that learning occurs gradually by observing other players' actions and outcomes. These models differ in the assumptions on the probability distribution of the flow payoffs that each type of arm generates. By contrast, players in our model face a continuum of correlated arms. Local learning---learning the quality of a particular arm---occurs instantaneously.
However, as the set of arms is unbounded, global learning---learning the qualities of all arms---occurs gradually. The encouragement effect was first identified by \textcite{BoltonHarris:1999} in the symmetric equilibrium in their Brownian model. This effect was then established by \textcite{KellerRady:2010} for all MPE in the Poisson model with inconclusive good news, and by \textcite{KellerRady:2015} for the symmetric MPE in the Poisson model with bad news. Due to the absence of technological advancements, the encouragement effects in all these papers are not strong enough to dominate the free-rider effect.\footnote{ \textcite{BoltonHarris:1999} demonstrate the prevalence of the free-rider effect in their comparative statics analysis. They show that in the symmetric equilibrium of their model, the individual resource allocated to experimentation at beliefs below the myopic cutoff converges to zero as the number of players increases. The same feature can be observed in the symmetric MPE of the Poisson model in \textcite{KellerRady:2010} and \textcite{KellerRady:2015}, despite the absence of a formal comparative statics analysis in those papers. } We not only demonstrate the presence of the encouragement effect in all MPE of our model, but also perform comparative statics for the unique symmetric MPE to further investigate the strength of the encouragement effect. In contrast to the encouragement effect in their models, we find that the prospect of innovation and technological advancements enables the encouragement effect to overcome the free-rider effect, and provide the conditions for this to occur. More broadly, this paper contributes to the literature on dynamic public-good games. \textcite{AdmatiPerry:1991}, \textcite{MarxMatthews:2000}, \textcite{Yildirim:2006}, and \textcite{Georgiadis:2014} study voluntary contributions to a joint project in dynamic settings.
While the public good in these papers is the progress toward the completion of a project, the public good in our model is the knowledge---the feasible technologies---built over time, which can be exploited once developed. From a modeling perspective, continuous exploration on a Brownian sample path is independently studied by \textcite{Wong:2022} and \textcite{UrgunYariv:2023} in non-strategic environments. To the best of our knowledge, the only other study of collective exploration on a Brownian path in a strategic setting is an independent work by \textcite{CetemenUrgunYariv:2023}. They focus on the exit patterns during a search process conducted jointly by heterogeneous players. In their model, exploitation is only possible after an irreversible exit chosen endogenously by each player. The players in our model, however, are not faced with stopping problems and thus are free to choose between exploration and exploitation, or even both simultaneously, at all times. \section{The Exploration Game} \label{sec:model} Time \(t\in[0,\infty)\) is continuous, and the discount rate is \(r>0\). There are \(N\geq 1\) players, each endowed with one unit of a perfectly divisible resource per unit of time. Each player has to independently allocate her resource between exploration, which expands the feasible technology domain, and exploitation, which allows her to adopt one of the explored technologies. The feasible technology domain, which is common to all players and contains all the explored technologies at time \(t\), is modeled as an interval \([0, X_t]\) with \(X_0=0\). If a player allocates the fraction \(k_t\in[0,1]\) to exploration over an interval of time \([t, t+\dd{t})\), the boundary \(X_t\) is pushed to the right by an amount of \(k_t \dd{t}\).
With the fraction \(1-k_t\) allocated to exploitation, by adopting technology \(x_t\in[0, X_t]\), the player receives a deterministic flow payoff \((1-k_t)\exp(W(x_t))\dd{t}\), where \(W(x_t)\) denotes the quality of the adopted technology. The technological landscape \(W:\mathbb{R}_{+}\to \mathbb{R}\), which maps technologies to their qualities, is common to all players. Nevertheless, only the qualities of the feasible technologies in \([0, X_t]\) are known to each player at time \(t\). The status quo technology \(X_0 = 0\) has a quality \(W(0) = s_0\), whereas the qualities of the initially unexplored technologies on \(\mathbb{R}_{++}\) are specified by a realized path of Brownian motion in \(C(\mathbb{R}_{+}, \mathbb{R})\) starting at \(w_0 = W(0+)\leq s_0\), with drift \(\mu\in\mathbb{R}\) and volatility \(\sigma > 0\).\footnote{ In other words, the mapping \(W\) is a realized Brownian path on \(\mathbb{R}_+\), except possibly for a discontinuity at the origin with \(W(0) \geq W(0+)\). } At the outset of the game, all players know the parameters of the landscape \(\mu, \sigma, w_0\) and \(s_0\), but not the realized Brownian path. Therefore, the process of exploration described above captures the dynamics of \emph{research}---experimenting with unknown technologies---and \emph{development}---expanding the set of feasible technologies. Collective exploration as such thus features both \emph{information} and \emph{knowledge} spillovers.\footnote{ Dynamic games featuring pure information spillover include bandits-based games such as in \textcite{BoltonHarris:1999,KellerRadyCripps:2005}, in which all technologies are feasible at the outset, but their qualities are uncertain and to be learned.
By contrast, dynamic games featuring pure knowledge spillover can be thought of as games of public goods provision such as in \textcite{AdmatiPerry:1991, MarxMatthews:2000, Yildirim:2006,Georgiadis:2014}, in which players contribute to a joint project (e.g., developing a technology in the public domain) with its value known in advance. Neither of these two types of model fully captures the progressive and uncertain nature of the underlying process of technology development and innovation. } Given a player's actions \(\{(k_t, x_t)\}_{t\geq 0}\), with \(k_t\in[0,1]\) and \(x_t\in[0,X_t]\) measurable with respect to the information available at time \(t\), her total expected discounted payoff, expressed in per-period units, is \[ {\E}\left[ \int_0^\infty r \mathrm{e}^{-rt} (1-k_t) \mathrm{e}^{W(x_t)} \dd{t} \right]. \] Note that whenever a player chooses exploitation, she always adopts one of the best feasible technologies \(x_t \in \argmax_{x\in[0,X_t]}W(x)\) to maximize her total expected discounted payoff. Therefore, we can focus on such an exploitation strategy without loss and rewrite the above total payoff as \[ {\E}\left[ \int_0^\infty r \mathrm{e}^{-rt} (1-k_t) \mathrm{e}^{S_t} \dd{t} \right], \] where \(S_t = \max_{x\in[0,X_t]}W(x)\). \subsection{Reformulated Game} \label{sec:model-reformulated} The environment above can be equivalently reformulated as follows. Players have prior beliefs represented by a filtered probability space \((\Omega, \mathscr{F}, (\mathscr{F}_t), {\Prob})\), where \(\Omega = C(\mathbb{R}_{+}, \mathbb{R})\) is the space of Brownian paths, \({\Prob}\) is the law of standard Brownian motion \(B = \{B_t\}_{t\geq 0}\), and \(\mathscr{F}_t\) is the canonical filtration of \(B\). Each player chooses her strategy from the space of admissible control processes \(\mathcal{A}\), which consists of all processes \(\{k_{t}\}_{t\geq 0}\) adapted to the filtration \((\mathscr{F}_t)_{t\geq 0}\) with \(k_{t}\in[0,1]\). 
The public history of technology development is represented by the process \(\{W(X_t)\}_{t > 0}\), which is the original Brownian motion under the time change controlled by the players' strategies. This process satisfies the stochastic differential equation \[ \dd{W(X_t)} = \mu K_t\dd{t} + \sigma\sqrt{K_t} \dd{B_t},\quad W(0+) = w_0, \] where \(K_t = \sum_{1\leq n \leq N} k_{n,t}\) measures how much of the overall resource is allocated to exploration, and will be referred to as the \emph{intensity of exploration} at time \(t\). Given a strategy profile \(\bm{k} = \{(k_{1,t},\ldots,k_{N,t})\}_{t\geq 0}\), player \(n\)'s total expected discounted payoff can be written as \[ {\E}\left[ \int_0^\infty r \mathrm{e}^{-rt} (1-k_{n,t}) \mathrm{e}^{S_t} \dd{t} \right], \] where \[ S_t = \max_{0\leq\tau\leq t}W(X_\tau) \] denotes the quality of the best feasible technology at time \(t\). In addition, we use the term ``gap'', denoted by \(A_t \coloneqq S_t - W(X_t)\) for \(t > 0\) and by \(A_0 \coloneqq s_0 - w_0\) for \(t = 0\), to refer to the quality difference between the best feasible technology and the latest technology under development. Henceforth, we shall use \(a\) and \(s\) when referring to the state variables as opposed to the stochastic processes \(\{A_t\}_{t\geq 0}\) and \(\{S_t\}_{t\geq 0}\) (i.e., if \(A_t = a\), then ``the game is in state \(a\) at time \(t\)''). A Markov strategy \(k_n:\mathbb{R}_+\times\mathbb{R}\to [0,1]\) with \((a,s)\) as the state variable specifies the action player \(n\) takes at time \(t\) to be \(k_n(A_t, S_t)\). A Markov strategy is called \emph{\(s\)-invariant} if it depends on \((a,s)\) only through \(a\). Thus an \(s\)-invariant Markov strategy \(k_n:\mathbb{R}_+\to[0,1]\) takes the gap \(a\) as the state variable. Finally, an \(s\)-invariant Markov strategy \(k_n\) is a \emph{cutoff strategy} if there is a cutoff \(\bar{a}\geq 0\) such that \(k_n(a) = 1\) for all \(a \in[0,\bar{a})\) and \(k_n(a) = 0\) otherwise.
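To make these controlled dynamics concrete, the following simulation sketch (an Euler discretization with hypothetical parameter values, not part of the formal analysis) tracks the latest quality \(W(X_t)\), the best feasible quality \(S_t\), and the gap \(A_t\) when all \(N\) players follow a common cutoff strategy:

```python
import numpy as np

# Euler sketch of the reformulated dynamics (hypothetical parameters):
# dW(X_t) = mu*K_t dt + sigma*sqrt(K_t) dB_t, with S_t the running max and
# gap A_t = S_t - W(X_t). All N players explore iff the gap is below a_bar.
rng = np.random.default_rng(1)
mu, sigma, N, a_bar = -0.2, 1.0, 3, 1.5
dt, steps = 1e-3, 20_000

w = 0.0                       # latest quality W(X_t), starting at w0 = 0
s = 0.0                       # best feasible quality S_t, starting at s0 = w0
path_s, path_a = [s], [s - w]
for _ in range(steps):
    K = N if s - w < a_bar else 0.0   # intensity of exploration under the cutoff
    w += mu * K * dt + sigma * np.sqrt(K * dt) * rng.standard_normal()
    s = max(s, w)
    path_s.append(s)
    path_a.append(s - w)

print(f"final best quality S_T = {path_s[-1]:.3f}, final gap A_T = {path_a[-1]:.3f}")
```

By construction, \(S_t\) is nondecreasing and the gap stays nonnegative; once the gap reaches the cutoff, exploration stops and the state freezes, illustrating why the gap serves as the natural state variable.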
Given an \(s\)-invariant Markov strategy profile \(\bm{k}\), the homogeneity of the payoff functions enables us to write player \(n\)'s associated payoff at state \((a,s)\) as \(v_n(\, a,s \mid \bm{k} \,) = \mathrm{e}^s v_n(\, a, 0 \mid \bm{k} \,)\).\footnote{ See Lemma \ref{lem:homogeneity} in the Appendix. } It is thus convenient to define \(u_n(\, a \mid \bm{k} \,) \coloneqq v_n(\, a, 0 \mid \bm{k} \,)\), which equals player \(n\)'s payoff at state \((a, s)\) normalized by \(\mathrm{e}^s\), the opportunity cost of exploration. We refer to \(u_n:\mathbb{R}_{+}\to\mathbb{R}\) as player \(n\)'s \emph{normalized payoff function}, or simply as \emph{payoff function} when it is clear from the context. \subsection{Exploration under Complete Information} \label{sec:complete-info} To study the value of information, here we consider an alternative setting under complete information. More specifically, how would \(N\) players allocate their resource cooperatively over time if the entire technological landscape \(W\) is publicly known at the outset of the game? Formally, in this subsection we replace \(\mathscr{F}_t\) with \(\mathscr{F}\) for each \(t\geq 0\) while maintaining the assumption that the feasible technology domain \([0, X_t]\) can only be expanded by continuously pushing forward the boundary at the rate of \(\dv*{X_t}{t} = K_t\). For a given technological landscape \(W\), denote the average (ex-post) value under complete information by \[ \hat{v}(W) \coloneqq \sup \int_0^\infty r \mathrm{e}^{-rt} (1 - K_t/N) \mathrm{e}^{S_t} \dd{t}, \] where \(S_t = \max_{x\in[0,X_t]}W(x)\), and the supremum is taken over all measurable functions \(t\mapsto K_t\in[0,N]\). \begin{lemma}[Complete-information Payoff] \label{lem:full-info-value-ante} Denote the average (ex-ante) value under complete information at state \((a,s)\) by \(\widehat{V}(a,s)\coloneqq {\E}_{as}[\hat{v}(W)]\). 
We have \(\widehat{V}(a,s) = \mathrm{e}^s \widehat{U}(a)\), where \[ \widehat{U}(a) = \begin{cases} 1 + \exp(-\lambda a)/(\lambda - 1), & \text{if }\lambda > 1,\\ +\infty, & \text{otherwise,} \end{cases} \] with \(\lambda \coloneqq (r/N - \mu)/(\sigma^2/2)\).\footnote{ One can also examine the value of the landscape directly. More specifically, how much is a player willing to pay for the revelation of an unknown technological landscape, on which all technologies become feasible without any further development? The answer is simply \(\breve{V} \coloneqq \lim_{r\to 0} \widehat{V}\). One can then interpret the difference \(\breve{V} - \widehat{V}\) as the value of knowledge (i.e., the saved opportunity costs from obviating the need for technology development), and the difference \(\widehat{V} - V^*\) as the value of information, where \(V^*\) is the value under incomplete information in Section \ref{sec:cooperative-problem}. However, we do not find these objects relevant to our analysis. } \end{lemma} When \(\lambda > 1\), there almost surely exists a \emph{first-best technology} \(\hat{x}_N\in\mathbb{R}_{+}\) so that the value \(\hat{v}(W) < +\infty\) is achieved by exploring with full intensity up to the point when \(\hat{x}_N\) is developed, and thereafter exploiting \(\hat{x}_N\).\footnote{ See the proof of Lemma \ref{lem:full-info-value-post} in the Online Appendix for the precise definition of \(\hat{x}_N\). } On the contrary, if \(\lambda \leq 1\), then with probability 1, the payoff can be improved indefinitely by delaying exploitation, and thus \(\hat{v}(W) = +\infty\) almost surely. As a consequence, the value under \emph{incomplete} information becomes infinite as well. Let \(\theta\coloneqq 2\mu/\sigma^2\) and \(\rho\coloneqq\sigma^2/(2r)\). The condition \(\lambda > 1\) can be equivalently written in the following way. \begin{assumption} \label{asm:main-assumption} \(N\rho(1+\theta) < 1\). 
\end{assumption} Moreover, for a reason that will become clear shortly, we introduce the concept of \emph{returns to cooperation} associated with a technological landscape as follows. \begin{definition} We say (the prior distribution of) a technological landscape \(W\) yields \begin{itemize} \item \emph{decreasing returns to cooperation (DRC)} if \(\theta < -1\), \item \emph{constant returns to cooperation (CRC)} if \(\theta = -1\), \item and \emph{increasing returns to cooperation (IRC)} if \(\theta > -1\). \end{itemize} \end{definition} Assumption \ref{asm:main-assumption} is always satisfied when \(W\) yields DRC or CRC. Unless otherwise stated, we impose Assumption \ref{asm:main-assumption} for the remainder of the paper to ensure well-defined payoffs and deviations. \section{Joint Maximization of Average Payoffs} \label{sec:cooperative-problem} Suppose that \(N\geq 1\) players work cooperatively to maximize the \emph{average} expected payoff. Denote by \(\mathcal{A}_N\) the space of all adapted processes \(\{K_t\}_{t\geq 0}\) with \(K_t\in[0,N]\). Formally, we are looking for the value function \[ v(a,s) \coloneqq \sup_{K\in\mathcal{A}_N} v(\, a,s \mid K \,), \] where \[ v(\, a,s \mid K \,) \coloneqq {\E}_{as}\left[ \int_0^\infty r \mathrm{e}^{-rt} (1-K_t/N) \mathrm{e}^{S_t} \dd{t} \right] \] is the average payoff function associated with the control process \(K = \{K_t\}_{t\geq 0}\), and an optimal control \(K^*\in\mathcal{A}_N\) such that \(v(a,s) = v(\, a,s \mid K^* \,)\). The structure of the problem allows us to focus on Markov strategies \(K:\mathbb{R}_+\times\mathbb{R}\to [0,N]\) with \((a,s)\) as the state variable,\footnote{ For now, the existence of a Markovian optimal strategy is still a conjecture, which will be confirmed later by Proposition \ref{prop:cooperative-solution}. } so that the intensity of exploration at time \(t\) is specified by \(K_t = K(A_t, S_t)\). 
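The condition \(\lambda > 1\) in Lemma \ref{lem:full-info-value-ante} and Assumption \ref{asm:main-assumption} are two ways of writing the same inequality, since \(N\rho(1+\theta) < 1\) rearranges to \(r/N - \mu > \sigma^2/2\). The following sketch (using arbitrary, hypothetical parameter grids) checks this equivalence numerically and evaluates the complete-information payoff \(\widehat{U}\):

```python
import itertools
import math

def lam(r, mu, sigma, N):
    """lambda = (r/N - mu) / (sigma^2 / 2), as in the complete-information lemma."""
    return (r / N - mu) / (sigma**2 / 2)

def assumption_holds(r, mu, sigma, N):
    """Assumption 1: N * rho * (1 + theta) < 1, with rho = sigma^2/(2r), theta = 2mu/sigma^2."""
    rho, theta = sigma**2 / (2 * r), 2 * mu / sigma**2
    return N * rho * (1 + theta) < 1

def U_hat(a, r, mu, sigma, N):
    """Normalized complete-information payoff: 1 + exp(-lambda*a)/(lambda - 1) if lambda > 1."""
    l = lam(r, mu, sigma, N)
    return 1 + math.exp(-l * a) / (l - 1) if l > 1 else math.inf

# The two conditions agree on a grid of (hypothetical, non-boundary) parameter values.
grid = itertools.product([0.1, 0.7, 1.3], [-0.9, -0.137, 0.27], [0.6, 1.1], [1, 2, 5])
checks = [(lam(r, m, s, N) > 1) == assumption_holds(r, m, s, N) for r, m, s, N in grid]
print(all(checks))  # lambda > 1 iff N*rho*(1+theta) < 1
```

Note that \(\widehat{U}(0) = \lambda/(\lambda-1)\), so the complete-information payoff explodes as \(\lambda\) approaches 1 from above, consistent with the infinite value when the assumption fails.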
According to the dynamic programming principle, we have \begin{equation*} v(a,s) = \max_{K\in[0,N]}\left\{ r \left(1 - K/N\right) \mathrm{e}^s\dd{t} + {\E}_{as}\left[\mathrm{e}^{-r\dd{t}} v(a+\dd{a}, s+\dd{s})\right] \right\}. \end{equation*} First, note that \(S_t\) can only change when \(A_t = 0\), and thus \(\dd{s} = 0\) for all positive gaps. Hence, for each \(a > 0\) at which \(\pdv*[2]{v}{a}\) is continuous, the value function \(v\) satisfies the Hamilton-Jacobi-Bellman (HJB) equation \begin{equation} \label{eq:hjb-coop-xs} v(a,s) = \max_{K\in[0,N]}\left\{ \left(1 - \frac{K}{N}\right) \mathrm{e}^s + K\rho\left( {\pdv[2]{v(a,s)}{a}} -\theta {\pdv{v(a,s)}{a}} \right) \right\}. \end{equation} Assume, as will be verified, that the optimal strategy is \(s\)-invariant. Then by the homogeneity of the value function we can replace \(v(a,s)\) with \(\mathrm{e}^s u(a)\), and divide both sides of equation \eqref{eq:hjb-coop-xs} by the opportunity cost \(\mathrm{e}^s\), to obtain the normalized HJB equation \begin{equation} \label{eq:hjb-coop} u(a) = 1 + \max_{K\in[0,N]} K\{\beta(a, u) - 1/N\}, \end{equation} where \(\beta(a, u) \coloneqq \rho(u''(a)-\theta u'(a))\) is the ratio of the expected benefit of exploration \(\rho(\pdv*[2]{v}{a} - \theta\pdv*{v}{a})\) to its opportunity cost \(\mathrm{e}^s\). It is then straightforward to see that the optimal action takes the following ``bang-bang'' form. If the shared opportunity cost of exploration, \(1/N\), exceeds the full expected benefit, the optimal choice is \(K(a) = 0\) (all agents choose exploitation exclusively), which gives \(u(a) = 1\). Otherwise, \(K(a) = N\) is optimal (all agents choose exploration exclusively), and \(u\) satisfies the second-order ordinary differential equation (henceforth ODE), \begin{equation} \label{eq:ode-coop} \beta(a, u) = u(a)/N. \end{equation} The optimal strategy could presumably depend on both \(a\) and \(s\) and hence might not be \(s\)-invariant. 
Indeed, both the benefit and the opportunity cost of exploration increase as innovation occurs. Nevertheless, due to our specific form of flow payoff where the qualities of technologies contribute exponentially to the payoffs, the increased benefit of exploration exactly offsets the increased opportunity cost. As a result, the incentives for exploration at any fixed gap do not depend on the highest-known quality, which leads to an \(s\)-invariant optimal strategy. This conjecture is confirmed by the following proposition. \begin{proposition}[Cooperative Solution] \label{prop:cooperative-solution} Suppose Assumption \ref{asm:main-assumption} holds. In the \(N\)-agent cooperative problem, there is a cutoff \(a^* > 0\) given by \begin{equation*} a^* = \frac{1}{\gamma_2 - \gamma_1}\left( \ln\left(1+\frac{1}{\gamma_2}\right) - \ln\left(1+\frac{1}{\gamma_1}\right) \right) \end{equation*} with \(\gamma_1 < \gamma_2\) being the roots of \( \gamma(\gamma - \theta) = 1/(N\rho), \) such that it is optimal for all players to choose exploitation exclusively when the gap is above the cutoff \(a^*\) and it is optimal for all players to choose exploration exclusively when the gap is below the cutoff \(a^*\). The associated payoff at state \((a,s)\) can be written as \(V^*(a,s) = \mathrm{e}^s U^*(a)\), where the normalized payoff function \(U^*:\mathbb{R}_{+}\to\mathbb{R}\) is given by \begin{equation*} U^*(a) = \frac{1}{\gamma_2 - \gamma_1} \left( \gamma_2 \mathrm{e}^{-\gamma_1 (a^* - a)} -\gamma_1 \mathrm{e}^{-\gamma_2(a^* - a)} \right) \end{equation*} when \(a \in [0, a^*)\), and by \(U^*(a) = 1\) otherwise. If Assumption \ref{asm:main-assumption} is violated, then \(U^*(a) = +\infty\) for all \(a\geq 0\). 
\end{proposition} The cooperative solution is pinned down by the standard smooth pasting condition \(u'(a^*) = 0\), and the \emph{normal reflection} condition \((\pdv*{v}{a} + \pdv*{v}{s})(0+,s) = 0\), which takes the form of \(u(0) + u'(0+) = 0\) for \(s\)-invariant strategies.\footnote{ The normal reflection condition is not an optimality condition. It ensures that the infinitesimal change of the payoff at a zero gap has a zero \(\dd{S}\) term, which is necessary for the continuation value process to be a martingale. See \textcite{PeskirShiryaev:2006} for an introduction to the normal reflection condition in the context of optimal stopping problems, and the proof of Lemma \ref{lem:payoff-function-ppt-x} for more details. } Because of the lack of information on the qualities of technologies, the players might stop too early, giving up exploration before developing the first-best technology and thus ultimately adopting a suboptimal technology, or might stop too late, wasting too many resources for marginal improvement while the first-best technology has already been developed. The cooperative solution optimally balances these trade-offs between early and late stopping and therefore determines the \emph{efficient} strategies under incomplete information. \begin{corollary}[Comparative Statics of the Cooperative Solution] \label{cor:cs-coop} Suppose Assumption \ref{asm:main-assumption} holds for \(N = 1\). The cooperative cutoff \(a^*\) is strictly increasing in \(N\) and strictly decreasing in \(r\). For all \(a\geq 0\), the cooperative payoff \(U^*(a)\), if it is finite, is strictly below the complete-information payoff \(\widehat{U}(a)\).
For each \(r > 0\), \begin{itemize} \item if \(W\) yields DRC, then \(U^*_N \to \widehat{U}_N < +\infty\) pointwise as \(N\to +\infty\); \item if \(W\) yields CRC, then \(U^*_N\to +\infty\) as \(N\to +\infty\); \item if \(W\) yields IRC, then \(U^*_N\to +\infty\) as \(N\to 1/(\rho(1+\theta))\).\footnote{ Here we allow \(N\) to take non-integral values for convenience. Also note that when \(W\) yields IRC, Assumption \ref{asm:main-assumption} is violated for \(N\geq 1/(\rho(1+\theta))\), in which case \(U^*_N = +\infty\). } \end{itemize} In all cases \(a^*_N\to +\infty\). \end{corollary} A larger stopping cutoff \(a^*\) represents greater ambition among the players. The benefit of exploration decreases with \(r\), and thus more patient players are more ambitious and willing to explore at less promising states. Likewise, because the players work cooperatively, as the team size increases, extra resources brought by additional players enable a higher rate of exploration. Consequently, for each fixed level of ambition, resources are wasted for a shorter period of time before the exploration fully stops, which motivates the players to become more ambitious. This leads to the emergence of more advanced technologies in the long run and further drives the players to act more ambitiously, and so forth. Corollary \ref{cor:cs-coop} underlines a key component absent in the literature on bandits-based games: innovation. The set of feasible technologies (arms) is predetermined and fixed in the Brownian model in \textcite{BoltonHarris:1999} and the Poisson model in \textcite{KellerRady:2010}. Even though additional players expedite learning, the discounted payoff stream derived from technology adoption is bounded. As a result, with a bounded technology space, each player's payoff eventually levels off as the team size grows. In our model, technology adoption still offers a bounded payoff stream.
However, as more players join exploration, they become more ambitious and thus more advanced technologies emerge in the long run. Whether this prospect of superior innovations qualitatively alters the asymptotic behavior of players' payoffs depends on the returns to cooperation associated with the technological landscape. As characterized in Corollary \ref{cor:cs-coop}, collective exploration over a DRC landscape entails diminishing marginal welfare improvement with respect to team size, and thus resembles the bandits-based games with a fixed set of arms, whereas a CRC or IRC landscape unleashes the power of innovation as the team gets large.\footnote{ As shown in Corollary \ref{cor:cs-coop}, \(U_N^*\to +\infty\) under both CRC and IRC landscapes. What differentiates these two cases is that \(U_N^* < +\infty\) for any finite \(N\) under a CRC landscape, whereas \(U_N^*\to+\infty\) as \(N\) approaches some finite number under an IRC landscape. } This \emph{innovation effect} has important implications in the upcoming analysis of the strategic problem. \subsection{Long-run Outcomes} Consider an \(s\)-invariant strategy profile \(\bm{k}\) such that the set of states at which the intensity of exploration is bounded away from zero takes the form of a half-open interval \([0, \bar{a})\). We denote by \(\bar{x}(\bar{a})\coloneqq\lim_{t\to\infty}X_t = \int_0^\infty K_t\dd{t}\) the \emph{amount of exploration} under \(\bm{k}\). In addition, we denote by \(\bar{s}(\bar{a})\coloneqq \lim_{t\to \infty}S_t\) the \emph{long-run technological standard}, which is defined as the quality of the best technology available as time approaches infinity. For a given Brownian path \(W\), it is straightforward to see that both \(\bar{s}(\bar{a})\) and \(\bar{x}(\bar{a})\) depend only on the initial state \((a,s)\) and the stopping threshold \(\bar{a}\), and are independent of the intensity of exploration. Moreover, they are clearly nondecreasing in \(\bar{a}\).
We can explicitly express the prior belief on the distribution of \(\bar{s}(\bar{a})\) as follows. \begin{lemma} \label{lem:tech-std-dist} At state \((a,s)\), for a strategy profile with stopping threshold \(\bar{a}\), the long-run technological standard \(\bar{s}(\bar{a})\) has the same distribution as \(\max\{s, M - a\}\), where the random variable \(M\) has an exponential distribution with mean \((\mathrm{e}^{\theta \bar{a}}-1)/\theta\).\footnote{ For \(\theta = 0\), we take \(\lim_{\theta\to 0}(\mathrm{e}^{\theta \bar{a}}-1)/\theta = \bar{a}\) for the mean of \(M\). } \end{lemma} Recall from Section \ref{sec:complete-info} that the first-best technology, denoted by \(\hat{x}_N\), is the technology \(x\geq 0\) that yields the highest payoff for \(N\) cooperative agents under complete information, taking the opportunity cost of its development into account. Denote by \(q(\bar{a}) \coloneqq {\Prob}_{as}\left(\hat{x}_N \in [0, \bar{x}(\bar{a})]\right)\) the probability that the first-best technology will be explored in the long run under a strategy profile with stopping threshold \(\bar{a}\). \begin{lemma} \label{lem:long-run-prob} At state \((a,s)\), we have \(q(\bar{a})\to 1\) as \(\bar{a}\to +\infty\). \end{lemma} Therefore, the comparative statics in Corollary \ref{cor:cs-coop} imply that \(\hat{x}_N\) will be developed under the cooperative solution with probability \(q_N(a^*_N)\to 1\) as the team size grows.\footnote{ This result is less immediate than it may seem, because not only \(a^*_N\) but also the first-best technology \(\hat{x}_N\) could depend on the parameter \(N\). However, the convergence in Lemma \ref{lem:long-run-prob} does not require the parameters to stay the same, provided that the parameters satisfy Assumption \ref{asm:main-assumption} along the sequence of the stopping thresholds. } \section{The Strategic Problem} From now on, we assume that there are \(N > 1\) players acting noncooperatively.
We study equilibria in the class of \(s\)-invariant Markov strategies, which are the Markov strategies with the gap as the state variable and will hereafter be referred to as \emph{Markov strategies}. In this section, we provide characterizations of the best responses and the associated payoff functions, which establish useful properties of the equilibria for further analysis. \subsection{Best Responses and Equilibria} We denote by \(\mathcal{K}\) the set of Markov strategies that are right-continuous and piecewise Lipschitz-continuous, and denote by \(\mathcal{A}\) the space of admissible control processes as in Section \ref{sec:model-reformulated}.\footnote{ Piecewise Lipschitz-continuity means that \(\mathbb{R}_{+}\) can be partitioned into a finite number of intervals such that the strategy is Lipschitz-continuous on each of them. This requirement rules out the infinite-switching strategies considered in Section 6.2 of \textcite{KellerRadyCripps:2005}. } A strategy \(k_n^*\in\mathcal{K}\) for player \(n\) is a best response against her opponents' strategies \(\bm{k}_{\neg n} = (k_1,\ldots,k_{n-1},k_{n+1},\ldots,k_N)\in\mathcal{K}^{N-1}\) if \[ v_n(\, a,s \mid k_n^*, \bm{k}_{\neg n} \,) = \sup_{k_n\in\mathcal{A}} v_n(\, a,s \mid k_n, \bm{k}_{\neg n} \,) \] at each state \((a,s)\in\mathbb{R}_+\times\mathbb{R}\). This definition turns out to be equivalent to \[ u_n(\, a \mid k_n^*, \bm{k}_{\neg n} \,) = \sup_{k_n\in\mathcal{K}} u_n(\, a \mid k_n, \bm{k}_{\neg n} \,) \] for each gap \(a\geq 0\), with the normalized payoff function \(u_n(\, a \mid \bm{k} \,)\) defined as in Section \ref{sec:model-reformulated}. A Markov perfect equilibrium is a profile of Markov strategies that are mutually best responses. Denote the intensity of exploration carried out by player \(n\)'s opponents by \(K_{\neg n}(a) = \sum_{l\neq n}k_l(a)\), and the benefit-cost ratio of exploration by \(\beta(a, u_n)\) as in Section \ref{sec:cooperative-problem}.
The following lemma characterizes all MPE in the exploration game. \begin{lemma}[Equilibrium Characterization] \label{lem:char-mpe} A strategy profile \(\bm{k} = (k^*_1,\ldots,k^*_N)\in\mathcal{K}^N\) is a Markov perfect equilibrium with \(u_n:\mathbb{R}_{+}\to \mathbb{R}\) being the corresponding payoff function of player \(n\) for each \(n \in\{1,\ldots, N\}\), if and only if for each \(n\), the function \(u_n\) \begin{enumerate} \item is continuous on \(\mathbb{R}_+\) and once continuously differentiable on \(\mathbb{R}_{++}\); \item is piecewise twice continuously differentiable on \(\mathbb{R}_{++}\);\footnote{ This condition means that there is a partition of \(\mathbb{R}_{++}\) into a finite number of intervals such that \(u_n''\) is continuous on the interior of each of them. } \item satisfies the normal reflection condition \begin{equation} \label{eq:normal-reflection} u_n(0) + u_n'(0+) = 0; \end{equation} \item satisfies, at each continuity point of \(u_n''\), the HJB equation \begin{equation}\label{eq:hjb} u_n(a) = 1 + K_{\neg n}(a) \beta(a, u_n) + \max_{k_n\in[0, 1]} k_n\{\beta(a, u_n) - 1\}, \end{equation} \end{enumerate} with \(k^*_n(a)\) achieving the maximum on the right-hand side, i.e., \[ k^*_n(a) \in \argmax_{k_n\in[0,1]} k_n\{\beta(a, u_n) - 1\}. \] \end{lemma} These conditions are standard in optimal control problems. Condition 4 and the smooth pasting condition, which is implicitly stated in Condition 1, are the optimality conditions. The rest are properties for general payoff functions. In any MPE, Lemma \ref{lem:char-mpe} provides the following characterization of best responses. If \(\beta(a, u_n) < 1\), then \(k_n^*(a) = 0\) is optimal and \(u_n(a) = 1 + K_{\neg n}(a) \beta(a, u_n) < 1 + K_{\neg n}(a)\). If \(\beta(a, u_n) = 1\), then the optimal \(k_n^*(a)\) takes arbitrary values in \([0,1]\) and \(u_n(a) = 1 + K_{\neg n}(a)\).
Finally, if \(\beta(a, u_n) > 1\), then \(k_n^*(a) = 1\) is optimal and \(u_n(a) = (1 + K_{\neg n}(a)) \beta(a, u_n) > 1 + K_{\neg n}(a)\). In short, player \(n\)'s best response to a given intensity of exploration \(K_{\neg n}\) by the others depends on whether \(u_n\) is greater than, equal to, or less than \(1 + K_{\neg n}\). On the intervals where each \(k_n\) is continuous, HJB equation \eqref{eq:hjb} gives rise to the ODE \begin{equation} \label{eq:feynman-kac-x} u_n(a) = 1 - k_n(a) + K(a)\beta(a,u_n). \end{equation} In particular, on the intervals where each \(k_n\) is constant, the ODE above admits the explicit solution \begin{equation} \label{eq:sol-full-intensity} U(a) = 1 - k_n + C_1 \mathrm{e}^{\gamma_1 a} + C_2 \mathrm{e}^{\gamma_2 a}, \end{equation} where \(\gamma_1\) and \(\gamma_2\) are the roots of the equation \(\gamma(\gamma - \theta) = 1/(K\rho)\), and \(C_1, C_2\) are constants to be determined. Lastly, on the intervals where an interior allocation is chosen by player \(n\), the ODE from the indifference condition \(\beta(a, u_n) = 1\) has the general solution \begin{equation} \label{eq:sol-int-alloc} U(a) = \begin{cases} C_1 + C_2 a + a^2/(2\rho), &\text{ if } \theta = 0, \\ C_1 + C_2 \mathrm{e}^{\theta a} - a/(\rho\theta), &\text{ if } \theta\neq 0, \end{cases} \end{equation} where \(C_1, C_2\) are constants to be determined. \subsection{Properties of MPE} \label{sec:mpe-ppt} First, note that in any MPE, the average payoff can never exceed the \(N\)-player cooperative payoff \(U_N^*\), and no individual payoff can fall below the single-agent payoff \(U_1^*\). The upper bound follows directly from the fact that the cooperative solution maximizes the average payoff. The lower bound \(U^*_1\) is guaranteed by playing the single-agent optimal strategy, as the players can only benefit from the exploration efforts of others. Second, all Markov perfect equilibria are inefficient. 
Along the efficient exploration path, the benefit of exploration tends to \(1/N\) of its opportunity cost as the gap \(A_t\) approaches the efficient stopping threshold. A self-interested player thus has an incentive to deviate to exploitation whenever the benefit of exploration drops below its full opportunity cost. Note also that in any MPE, the set of states at which the intensity of exploration is positive must be an interval \([0,\bar{a})\) with \(a^*_1 \leq \bar{a}\leq a^*_N\). The bounds on the stopping threshold follow directly from the bounds on the average payoffs and imply that the long-run outcomes in any MPE cannot outperform the cooperative solution. \begin{corollary} In any Markov perfect equilibrium with stopping threshold \(\bar{a}\), at any state \((a,s)\) we have \(\bar{s}(\bar{a})\leq \bar{s}(a^*_N)\) almost surely, and \(q(\bar{a}) \leq q(a^*_N)\). \end{corollary} Moreover, the intensity of exploration must be bounded away from zero on any compact subset of \([0,\bar{a})\). If this were not the case, there would exist some gap \(a < \bar{a}\) such that the process \(\{W(X_t)\}_{t > 0}\) starting from \(w_0 = s_0-a\) would never reach the best-known quality \(s_0\) because of diminishing intensity, and therefore allocating a positive fraction of the resource to exploration at gap \(a\) is clearly not optimal for any player. In the two-armed bandit models reviewed in Section \ref{sec:lit}, the players always use the risky arm at beliefs higher than some myopic cutoff, above which the expected short-run payoff from the risky arm exceeds the deterministic flow payoff from the safe arm. Because our model lacks such a myopic cutoff, it might seem reasonable to conjecture that some player \(n\), with her payoff function \(u_n\) bounded from above by \(1+K_{\neg n}\), never explores, and thus free-rides the technologies developed by the other players. Such a conjecture is refuted by the following proposition. 
\begin{proposition}[No Player Always Free-rides] \label{prop:everyone-explores} In any Markov perfect equilibrium, no player allocates her resource exclusively to exploitation for all gaps. \end{proposition} The intuition behind this result is that in equilibrium, the cost and benefit of exploration must be equalized at the stopping threshold for each player, whereas any player who never explores would find that this benefit outweighs the cost, manifested by a kink in her payoff function at the stopping threshold. She would then have a strict incentive to resume exploration immediately after the other players give up, hoping to reduce the gap to bring the exploration process back to life. Therefore, in equilibrium, every player must perform exploration at some states, respecting the smooth pasting condition (Condition 1 in Lemma \ref{lem:char-mpe}). This result shows that each player strictly benefits from the presence of the other players in equilibrium through information and knowledge spillovers. As a result, the future exploration efforts of the others encourage some of the players to explore at gaps larger than their single-agent cutoffs. Such an \emph{encouragement effect} is exhibited in all MPE of the exploration game. \begin{proposition}[Encouragement Effect] \label{prop:encouragement-effct} In any Markov perfect equilibrium, at least one player explores at gaps above the single-agent cutoff \(a^*_1\). \end{proposition} With the same intuition as the encouragement effect, our last general result on Markov perfect equilibria concerns the nonexistence of equilibria where all players use cutoff strategies. \begin{proposition}[No MPE in Cutoff Strategies] \label{prop:no-cutoff-strategies} In any Markov perfect equilibrium, at least one player uses a strategy that is not of the cutoff type. \end{proposition} Next, we look into Markov perfect equilibria in greater depth.
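The best-response trichotomy stated after Lemma \ref{lem:char-mpe} can be summarized in a small numerical sketch (an illustrative helper, not part of the model's formal apparatus; \texttt{beta\_val} stands for the benefit-cost ratio \(\beta(a, u_n)\) and \texttt{K\_neg} for the opponents' intensity \(K_{\neg n}(a)\), both supplied as numbers):

```python
def best_response(beta_val, K_neg):
    """Best-response allocation k_n* and payoff u_n implied by Lemma [char-mpe].

    beta_val : benefit-cost ratio beta(a, u_n) at the current gap
    K_neg    : opponents' total intensity K_{-n}(a)
    """
    if beta_val < 1:       # exploration unprofitable: exploit exclusively
        return 0.0, 1 + K_neg * beta_val
    if beta_val == 1:      # indifference: any k_n in [0, 1] is optimal
        return 0.5, 1 + K_neg      # 0.5 is an arbitrary selection
    return 1.0, (1 + K_neg) * beta_val   # exploration profitable: full intensity
```

The indifference case returns \(0.5\) purely as an arbitrary selection from \([0,1]\), mirroring the indeterminacy noted in the text.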
\section{Symmetric Equilibrium} Our characterization of best responses and the nonexistence of MPE in cutoff strategies suggest that in any symmetric equilibrium, the players choose an interior allocation at some states. At these states of interior allocation, the benefit of exploration must be equal to the opportunity cost, and therefore the common payoff function solves the ODE \(\beta(a, u) = 1\). As a consequence of equation \eqref{eq:feynman-kac-x}, the payoff of each player at the states of interior allocation in the symmetric equilibrium must also satisfy \(u = 1 + K_{\neg n} \leq N\). Therefore, whenever the common payoff exceeds \(N\), each player allocates her resource exclusively to exploration, and the payoff function satisfies the same ODE \eqref{eq:ode-coop} as in the cooperative solution. However, it is worth pointing out that for some configurations of the parameters, because of the strength of free-riding incentives among the players, the common payoff could be below \(N\) for all gaps, and accordingly, the resource constraint of each player is not necessarily binding even when the gap is zero, in marked contrast to the cooperative solution. Lastly, the common payoff satisfies \(u = 1\) at the gaps for which the resource is exclusively allocated to exploitation. The solutions to the corresponding ODEs provided in equations \eqref{eq:sol-full-intensity} and \eqref{eq:sol-int-alloc}, together with the normal reflection condition \eqref{eq:normal-reflection} and the smoothness requirement on the equilibrium payoff functions, uniquely pin down the strategies and the associated payoff functions in the symmetric equilibrium, which can be expressed in closed form as follows. \begin{proposition}[Symmetric Equilibrium] \label{prop:symmetric-mpe} The \(N\)-player exploration game has a unique symmetric Markov perfect equilibrium with the gap as the state variable. 
There exists a stopping threshold \(\tilde{a} \in (a^*_1, a^*_N)\) and a full-intensity threshold \(a^\dagger\geq 0\) such that the fraction \(k^\dagger(a)\) of the resource that each player allocates to exploration at gap \(a\) is given by \begin{equation} \label{eq:symmetric-MPE} k^\dagger(a) = \begin{cases} 0, & \text{ on } [\tilde{a},+\infty), \\ \frac{1}{(N-1)\rho}\int_0^{\tilde{a}-a}\phi_\theta(z)\dd{z} \in(0,1), & \text{ on } [a^\dagger, \tilde{a}), \\ 1, & \text{ on } [0,a^\dagger) \text{ if } a^\dagger > 0, \end{cases} \end{equation} with \(\phi_\theta(z) \coloneqq (1-\mathrm{e}^{-\theta z})/\theta\).\footnote{ We define \(\phi_0(z) \coloneqq \lim_{\theta\to 0}\phi_\theta(z) = z\). } The corresponding payoff function is the unique function \(U^\dagger:\mathbb{R}_{+}\to[1,+\infty)\) of class \(C^1\) with the following properties: \(U^\dagger(a) = 1\) on \([\tilde{a},+\infty)\); \(U^\dagger(a) = 1+(N-1)k^\dagger(a)\in(1, N)\) and solves the ODE \(\beta(a, u) = 1\) on \((a^\dagger,\tilde{a})\); if \(a^\dagger > 0\), then \(U^\dagger(a) > N\) and solves the ODE \(\beta(a, u) = u/N\) on \((0, a^\dagger)\). \end{proposition} The closed-form expressions for the common payoff function \(U^\dagger\) and the thresholds \(a^\dagger\) and \(\tilde{a}\) are provided in the Appendix. As we have already pointed out, depending on the parameters, it is possible that \(a^\dagger = 0\), in which case \(k^\dagger(a) < 1\) for all \(a > 0\); this case will be referred to as the \emph{non-binding case}. The opposite case, where \(a^\dagger > 0\), will be referred to as the \emph{binding case}. Figure \ref{fig:kplotsym} illustrates the symmetric equilibrium for these two cases.
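As a numerical illustration of the interior-allocation formula in Proposition \ref{prop:symmetric-mpe}, the following sketch evaluates \(k^\dagger(a)\) by midpoint quadrature. The thresholds \(\tilde{a}\) and \(a^\dagger\) are supplied as inputs with placeholder values (their closed forms are given in the Appendix and are not reproduced here); the parameters \(\rho = 2\), \(\theta = -1\), \(N = 2\) echo those used in the figures:

```python
import math

def phi(theta, z):
    # phi_theta(z) = (1 - e^{-theta z}) / theta, with phi_0(z) = z as the limit
    if abs(theta) < 1e-12:
        return z
    return (1.0 - math.exp(-theta * z)) / theta

def k_dagger(a, a_tilde, a_dag, N, rho, theta, steps=10_000):
    """Fraction of the resource each player allocates to exploration at gap a."""
    if a >= a_tilde:      # beyond the stopping threshold: exploit exclusively
        return 0.0
    if a < a_dag:         # full-intensity region (binding case)
        return 1.0
    x = a_tilde - a       # upper limit of the integral in eq. (symmetric-MPE)
    h = x / steps
    integral = h * sum(phi(theta, (i + 0.5) * h) for i in range(steps))
    return integral / ((N - 1) * rho)

# Placeholder thresholds a_tilde = 1.0 and a_dag = 0.2, illustrative only
print(round(k_dagger(0.5, 1.0, 0.2, 2, 2.0, -1.0), 4))  # ≈ 0.0744
```

For \(\theta = -1\) the integral has the closed form \(\mathrm{e}^{x} - 1 - x\), so the quadrature can be checked against \((\mathrm{e}^{0.5} - 1.5)/2\).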
\begin{figure} \caption{ The symmetric equilibrium with binding resource constraints in a two-player game, and with non-binding constraints in a four-player game (\(\rho = 2\), \(\theta = -1\)).} \label{fig:kplotsym} \end{figure} As the outcomes are publicly observed and newly developed technologies are freely available, the players have incentives to free-ride. Such a \emph{free-rider effect} becomes more pronounced through the comparison between the benefit-cost ratio of exploration at the states of interior allocation in the symmetric MPE \[ \beta(a, U^\dagger) = 1, \] and the one in the cooperative solution \[ \beta(a, U^*) = U^*(a)/N. \] Exploration in equilibrium thus requires the benefit of exploration to cover the cost, whereas the efficient strategy entails exploration at states where the cost exceeds the benefit, as \(\beta(a,U^*) < 1\) whenever \(U^*(a) < N\). Figure \ref{fig:valueplotsym} illustrates the comparison between the common payoff function in the symmetric equilibrium and the cooperative solution in a two-player exploration game. \begin{figure} \caption{ From top to bottom: average payoff \(\widehat{U}_N\) under complete information, average payoff \(U^*_N\) in the cooperative solution, common payoff \(U^\dagger_N\) in the symmetric equilibrium, and payoff \(U^*_1\) in the single-agent optimum. Parameter values: \(\rho = 2\), \(\theta = -1\), \(N=2\).} \label{fig:valueplotsym} \end{figure} \subsection{Comparative Statics} In this section, we examine the comparative statics of the symmetric equilibrium with respect to the discount rate \(r\) and the number of players \(N\).\footnote{ The effect of discount rate \(r\) on the payoff \(U_r\) carries over to \(U_r/r\). In other words, the following comparative statics with respect to \(r\) are not driven by the normalizing constant \(r\) in the flow payoff. 
} \begin{corollary}[Effect of \(r\)] \label{cor:cs-rho} The stopping threshold \(\tilde{a}_r\) is strictly decreasing in \(r\), and the full-intensity threshold \(a^\dagger_r\) is weakly decreasing in \(r\). For any gap \(a\geq 0\), the equilibrium strategy \(k^\dagger_r(a)\) and the common payoff \(U^\dagger_r(a)\) are weakly decreasing in \(r\). \end{corollary} As \(r\) decreases, the players become more patient and have greater incentives for exploration. Moreover, the increased exploration efforts of others encourage each player to raise their own effort further. The common payoff is decreasing in \(r\) for two reasons. First, higher patience increases players' payoffs directly. Second, as in the cooperative solution, the increased patience raises the level of ambition \(\tilde{a}_r\), and hence more advanced technologies will be developed and adopted in the long run. \begin{corollary}[Effect of \(N\)] \label{cor:cs-n} On \(\{\,N\geq 1\mid a^\dagger_N > 0\,\}\), which is the range of \(N\) for which the players' resource constraints are binding in the symmetric equilibrium, the stopping threshold \(\tilde{a}_N\) is strictly increasing in \(N\) and the common payoff function \(U^\dagger_N\) is weakly increasing in \(N\). Whereas on \(\{\,N\geq 1 \mid a^\dagger_N = 0\,\}\), both \(\tilde{a}_N\) and \(U^\dagger_N\) are constant over \(N\), and the equilibrium strategy \(k^\dagger_N\) is weakly decreasing in \(N\). \end{corollary} In the binding case, note that \(k^\dagger_N(a)\) is not monotone in \(N\) because the full-intensity threshold \(a^\dagger_N\) could be decreasing in \(N\). This situation occurs when the free-rider effect outweighs the encouragement effect. On the one hand, extra encouragement brought by additional players raises the stopping threshold \(\tilde{a}_N\). 
On the other hand, the increased free-riding incentives due to extra players tighten the requirement \(u > N\) for binding resource constraints, which enlarges the region \((a^\dagger_N, \tilde{a}_N)\) of interior allocation. The total effect of increasing \(N\) on the intensity of exploration is determined by these two competing forces and hence is not monotone in \(N\). In the non-binding case, as \(N\) increases, each player adjusts her individual intensity of exploration downward, maintaining the same equilibrium payoff. The incentive to free-ride in such a situation is so strong that it completely offsets further encouragement brought by additional players. Even the overall intensity of exploration \(Nk^\dagger\) is decreasing in \(N\) for the gaps in \([0,\tilde{a}_N)\). Thus, free-riding slows down exploration considerably. In the worst scenario, the overall intensity when the gap is zero could even be lower than that in the single-agent problem. Also, note that whenever the resource constraints are not binding, the full-intensity threshold \(a^\dagger_N\) remains constant at zero for any further increase in \(N\) because \(k^\dagger_N\) would be even lower. Therefore, if \(a^\dagger_N\) ever hits zero as \(N\) goes up, the resource constraints in the symmetric equilibrium remain non-binding for any larger \(N\). As we have seen, depending on whether or not the resource constraints are binding in the symmetric equilibrium, a larger team size can have qualitatively different effects on the welfare and long-run outcomes. If the resource constraints are not binding in equilibrium, any extra resource brought by additional players translates entirely to free-riding, which results in a highly inefficient outcome in a large team in terms of average payoffs, the likelihood of developing the first-best technology, and the technological standard in the long run.
The question then naturally arises of whether the resource constraints would ever fail to be binding in the symmetric equilibrium as \(N\) goes up. Or, conversely, would the encouragement effect eventually overcome the free-rider effect?\footnote{ The purpose of classifying the symmetric MPE into binding and non-binding cases is to help \emph{describe} the comparative statics. We do not intend to suggest this classification of equilibria in a large team \emph{determines} the prevalence of the encouragement or free-rider effect. On the contrary, it is the \emph{consequence} of the relative strength between these two forces. } To investigate this question, we now examine the effect on the symmetric equilibrium as \(N\) increases toward infinity, while keeping the other parameters fixed. For ease of exposition, we allow \(N\geq 1\) to take non-integral values and drop Assumption \ref{asm:main-assumption} for the remainder of this section. \begin{corollary}[Asymptotic Effect of \(N\)] \label{cor:asymp-n} Suppose Assumption \ref{asm:main-assumption} holds for \(N = 1\). If \(W\) yields IRC and \(r < \hat{r} \coloneqq (\sigma\theta)^2/(2(\theta-\ln(1+\theta)))\),\footnote{ Note that \(\hat{r}\) is well defined only if \(W\) yields IRC. } then we have \(U^\dagger_N\to +\infty\), \(a^\dagger_N\to +\infty\), and \(q_N(\tilde{a}_N)\to 1\) as \(N\to 1/(\rho(1+\theta))\); otherwise, we have \(a^\dagger_N = 0\) for sufficiently large \(N\), and we have \(\lim U^\dagger_N(a) < \lim U^*_N(a)\) for each \(a\geq 0\), \(\tilde{a}_N\) is bounded, and \(q_N(\tilde{a}_N)\) is bounded away from 1 as \(N\to+\infty\). \end{corollary} The free-rider effect and the encouragement effect in our model are two competing forces shared in several models in the literature on strategic experimentation (e.g., \textcite{BoltonHarris:1999, KellerRady:2010,KellerRady:2015}).
In the symmetric equilibrium of the Brownian model of \textcite{BoltonHarris:1999}, the free-rider effect eventually dominates the encouragement effect as team size grows. This is not necessarily the case here. If the technological landscape yields DRC, then the marginal welfare improvement is certainly diminishing asymptotically with respect to the team size. Unsurprisingly, like the Brownian model, the marginal encouragement effect yields to the marginal free-rider effect, as the latter does not abate as the team gets larger. However, when \(W\) does not lead to DRC, the innovation effect introduced in Section \ref{sec:cooperative-problem} can make a difference. Stemming from the encouragement effect, the future exploration from an additional player encourages everyone to become more ambitious, which then leads to the emergence of more advanced technologies in the long run. This novel innovation effect in our model in turn motivates the players to explore, reinforcing the encouragement effect, and vice versa. Therefore, exploration over a non-DRC technological landscape allows the encouragement effect to prevail, which is exhibited by the unlimited expansion of the full-intensity region, as illustrated in Figure \ref{fig:cs-n}. Even so, collective exploration over a CRC or IRC landscape is necessary for the prevalence of the encouragement effect, but not sufficient. The returns to cooperation determine how likely or how advanced the technologies are expected to be developed, but the timing of their availability also matters. Naturally, players' patience plays a role: The prevalence of the encouragement effect requires both an IRC technological landscape and sufficiently patient players, as stated in Corollary \ref{cor:asymp-n}. The innovation effect from the exploration over a CRC landscape, or an IRC landscape with impatient players, fails to boost the encouragement effect up to the magnitude required for overcoming the free-rider effect.
In such cases, as team size grows to infinity, the stopping threshold remains bounded, the full-intensity region vanishes, and the equilibrium payoff functions stay bounded, standing in marked contrast to the cooperative solution in which payoffs grow without bound. Corollary \ref{cor:asymp-n} highlights the key role of innovation in strategic learning and experimentation, which has been largely overlooked in the literature despite its importance. The absence of technological advancements is partly responsible for the prevalence of free-riding in two-armed bandit models. Our result suggests that innovation is an essential element toward understanding the incentives for experimentation and technology development in dynamic and strategic environments. \begin{figure} \caption{Full-intensity threshold \(a^\dagger_N\)} \label{fig:full-intensity-cutoff-n} \caption{Stopping threshold \(\tilde{a}_N\)} \label{fig:stopping-cutoff-n} \caption{ Full-intensity thresholds and the stopping thresholds in the symmetric equilibrium (\(\theta = -0.09, \sigma = \sqrt{2}\)) for different discount rates \(r\). If the players are sufficiently patient (solid curves), resource constraints are binding (\(a^\dagger_N > 0\)) for all \(N\). Otherwise (dashed curve), resource constraints are not binding (\(a^\dagger_N = 0\)) for sufficiently large \(N\). } \label{fig:cs-n} \end{figure} \section{Asymmetric Equilibria and Welfare Properties} Note that in the symmetric equilibrium, the intensity of exploration dwindles to zero as the gap approaches the stopping threshold. As a result, the threshold is never reached and exploration never fully stops. This observation suggests that welfare can be improved if the players take turns between the roles of explorer and free-rider, keeping the intensity of exploration bounded away from zero until all exploration stops. In this section, we investigate this possibility by constructing a class of asymmetric Markov perfect equilibria.
\subsection{Construction of Asymmetric Equilibria} Our construction of asymmetric MPE is based on the idea of the asymmetric MPE proposed in \textcite{KellerRady:2010}. We let the players adopt the common actions in the same way as in the symmetric equilibrium whenever the resulting average payoff is high enough to induce an overall intensity of exploration greater than one, and let the players take turns exploring at less promising states in order to maintain the overall intensity at one. Such alternation between the roles of explorer and free-rider leads to an overall intensity of exploration higher than in the symmetric equilibrium, yielding higher equilibrium payoffs. In what follows, we briefly address the two main steps in our construction. In the first step, we construct the average payoff function \(\bar{u}\). We let \(\bar{u}\) solve the same ODE \(\beta(a, u) = \max\{u(a)/N, 1\}\) as the common payoff function in the symmetric equilibrium whenever \(u > 2 - 1/N\), which ensures the corresponding overall intensity is greater than one. Whenever \(1 < u < 2 - 1/N\), we let \(\bar{u}\) solve the ODE \(u(a) = 1 - 1/N +\beta(a, u)\), which is the ODE for the average payoff function among \(N\) players associated with an overall intensity \(K = 1\). The boundary conditions for the average payoff function \(\bar{u}\), namely the smooth pasting condition at the stopping threshold and the normal reflection condition \eqref{eq:normal-reflection}, are identical to the conditions in Lemma \ref{lem:char-mpe}, simply because those conditions remain unchanged after taking the average. The unique solution of class \(C^1(\mathbb{R}_{++})\) to the ODE above serves as the average payoff function, which also gives thresholds \(a^\flat > a^\sharp \geq 0\) such that \(\bar{u} = 1\) on \([a^\flat, +\infty)\), \(1 < \bar{u} < 2 - 1/N\) on \((a^\sharp, a^\flat)\) and \(\bar{u} > 2 - 1/N\) on \([0, a^\sharp)\). 
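The two ODE regimes used in this first step are the same ones whose general solutions appear in equations \eqref{eq:sol-full-intensity} and \eqref{eq:sol-int-alloc}. As a consistency sketch, those closed forms can be verified symbolically under an assumed second-order operator \(\beta(a,u) = \rho\,(u''(a) - \theta u'(a))\); this form is inferred here from the characteristic equation \(\gamma(\gamma - \theta) = 1/(K\rho)\), since the paper's actual definition of \(\beta\) appears in an earlier section, so it should be read as an assumption of the sketch:

```python
import sympy as sp

a, theta, rho, K, kn, C1, C2, g = sp.symbols('a theta rho K k_n C_1 C_2 gamma')

# Assumed (reconstructed) form of the benefit-cost ratio; hypothetical,
# inferred from the characteristic equation gamma*(gamma - theta) = 1/(K*rho).
beta = lambda u: rho * (sp.diff(u, a, 2) - theta * sp.diff(u, a))

# Full-intensity regime: u = 1 - k_n + K * beta(a, u), cf. eq. (sol-full-intensity)
g1, g2 = sp.solve(sp.Eq(g * (g - theta), 1 / (K * rho)), g)
u_full = 1 - kn + C1 * sp.exp(g1 * a) + C2 * sp.exp(g2 * a)
residual = sp.simplify(u_full - (1 - kn + K * beta(u_full)))

# Interior-allocation regime: beta(a, u) = 1, cf. eq. (sol-int-alloc), theta != 0
u_int = C1 + C2 * sp.exp(theta * a) - a / (rho * theta)
indiff = sp.simplify(beta(u_int))

print(residual, indiff)  # should reduce to 0 and 1, respectively
```

The same check applies verbatim to the average-payoff ODEs of this construction after replacing the constant terms, since only the homogeneous part involves \(\beta\).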
In the second step, equilibrium-compatible actions are assigned to each player. On \([0, a^\sharp)\), if it is nonempty, we let the players adopt the common action \(k_n(a) = \min\{(\bar{u}(a) - 1)/(N - 1), 1\}\) in the same way as in the symmetric equilibrium. On \([a^\sharp, a^\flat)\), players alternate between the roles of explorer and free-rider so as to keep the overall intensity at one. We first split \([a^\sharp, a^\flat)\) into subintervals in an arbitrary way and then meticulously choose the switch points of their actions, so that all individual payoff functions have the same values and derivatives as the average payoff function at the endpoints of these subintervals.\footnote{ In fact, this technique can be used to construct asymmetric equilibria with strategies that take values in \(\{0,1\}\) only, which are referred to as simple equilibria in \textcite{KellerRadyCripps:2005}. See Proposition \ref{prop:simple-mpe} in the Appendix. However, it is not clear whether such equilibria achieve higher average payoffs than the symmetric equilibrium. } Lastly, our characterization of MPE in Lemma \ref{lem:char-mpe} confirms that the assigned action profile is compatible with equilibrium. We leave the method for choosing the switch points and further details to the Appendix. \begin{figure} \caption{From top to bottom: intensity of exploration in the cooperative solution, the asymmetric equilibria, and the symmetric equilibrium (\(\rho = 2\), \(\theta = -1\), \(N=2\)).} \label{fig:kplotasym} \end{figure} \begin{figure} \caption{ Average payoff and possible individual payoffs in the best two-player asymmetric equilibria, compared to the common payoff in the symmetric equilibrium (\(\rho = 2\), \(\theta = -1\), \(N=2\)). } \label{fig:valueplotasym} \end{figure} For \(N = 2\), Figure \ref{fig:kplotasym} illustrates the intensity of exploration in the asymmetric MPE, compared with the symmetric equilibrium. 
The resource constraints are binding for small gaps in the depicted equilibria, but this may not be the case for different parameters. For example, if the players are too impatient, the average payoff function could be bounded by \(2-1/N\) from above, resulting in an intensity of exploration equal to 1 over the entire region \([0, a^\flat)\). The states \(\bar{a}_{F}\) and \(\bar{a}_{E}\) in the figure demarcate the switch points at which these two players swap roles when they take turns exploring on \([a^\sharp, a^\flat)\). The volunteer explores on \([a^\sharp, \bar{a}_{F}) \cup [\bar{a}_{E}, a^\flat)\), whereas the free-rider explores on \([\bar{a}_{F}, \bar{a}_{E})\). These switch points are chosen in a way that ensures their individual payoff functions are of class \(C^1(\mathbb{R}_{++})\) and coincide on \([0, a^\sharp)\). Figure \ref{fig:valueplotasym} illustrates the associated average payoff function (dashed curve) and the individual payoff functions (solid curves) that can arise in the equilibria in a two-player game, compared with the common payoff function in the symmetric equilibrium (dotted curve). Note that the payoff function of the volunteer is strictly higher than that of the free-rider at the states immediately to the left of the stopping threshold \(a^\flat\). In fact, the free-rider has a payoff equal to 1 on \([\bar{a}_{E}, a^\flat)\). This observation stands in marked contrast to the models in \textcite{KellerRadyCripps:2005,KellerRady:2010}, where the volunteer is worse off in this region. The intuition behind this feature is similar to that in Proposition \ref{prop:everyone-explores}. 
For the free-rider's payoff to exceed the volunteer's at states immediately to the left of \(a^\flat\), a kink would have to be created at \(a^\flat\) in the free-rider's payoff function, because the free-rider's ODE \(u(a) = 1 + \beta(a, u)\) must be satisfied.\footnote{ Even though the free-rider has a payoff equal to 1 around \(a^\flat\), she still benefits from free-riding in this region. This benefit, however, is offset by the relatively high burden of exploration effort she must bear in equilibrium at more promising states to reward the volunteer. } In such a case, the free-rider has a strict incentive to take over the role of volunteer to kick-start the exploration process at a larger gap. Therefore, in equilibrium, the volunteer must be compensated for acting as a lone explorer at less promising states by bearing relatively less burden at more promising states. For arbitrary \(N\), we have the following result. \begin{proposition}[Asymmetric MPE] \label{prop:asymmetric-mpe} The \(N\)-player exploration game admits Markov perfect equilibria with thresholds \(0\leq a^\ddag \leq a^\sharp < a^\flat < a^*\), such that on \([0, a^\sharp]\), the players have a common payoff function; on \([0, a^\ddag]\), all players choose exploration exclusively; on \((a^\ddag, a^\sharp)\), the players allocate a common interior fraction of the unit resource to exploration, and this fraction decreases in the gap; on \([a^\sharp, a^\flat)\), the intensity of exploration equals 1 with players taking turns exploring on consecutive subintervals; on \([a^\flat, +\infty)\), all players choose exploitation exclusively. The intensity of exploration is continuous in the gap on \([0, a^\flat)\). The average payoff function is strictly decreasing on \([0, a^\flat]\), once continuously differentiable on \(\mathbb{R}_{++}\), and twice continuously differentiable on \(\mathbb{R}_{++}\) except at the cutoff \(a^\flat\). 
On \([0, a^\flat)\), the average payoff is higher than in the symmetric equilibrium, and \(a^\flat\) lies to the right of the threshold \(\tilde{a}\) at which all exploration stops in that equilibrium. \end{proposition} \subsection{Welfare Results} For \(N\geq 3\), further improvements can be easily achieved by letting the players take turns exploring, maintaining the intensity of exploration at \(K\) whenever \(K < u < K + 1 - K/N\) for all \(K\in\{1,\ldots, N\}\), rather than for \(K=1\) only as in the Proposition above. However, it is not clear whether such improvements achieve the highest welfare among all MPE of the \(N\)-player exploration game. For \(N=2\), the asymmetric equilibria of Proposition \ref{prop:asymmetric-mpe} are the best among all MPE. \begin{proposition}[Best MPE for \(N=2\)] \label{prop:bound-two-player-mpe} The average payoff in any Markov perfect equilibrium of the two-player exploration game cannot exceed the average payoff in the equilibria of Proposition \ref{prop:asymmetric-mpe}. \end{proposition} In the construction of the asymmetric MPE depicted in Figure \ref{fig:kplotasym}, the interval \([a^\sharp, a^\flat)\) is not split into subintervals. This assertion can be confirmed by the observation from Figure \ref{fig:valueplotasym} that the players' payoff functions match values only at the endpoints of \([a^\sharp, a^\flat]\), not in the interior. Our construction allows an arbitrary partition on \([a^\sharp, a^\flat)\) during the splitting procedure, thus a trivial partition of \([a^\sharp, a^\flat)\), as in Figure \ref{fig:kplotasym}, suffices. A finer partition, however, produces equilibria in which the players exchange roles more often, which allows them to share the burden of exploration more equally. 
Sufficiently frequent alternation of roles on \([a^\sharp, a^\flat)\) guarantees each player a payoff close enough to the average payoff and thus yields a Pareto improvement over the symmetric equilibrium.\footnote{ The payoffs of both players in the asymmetric MPE depicted in Figure \ref{fig:valueplotasym} are higher than in the symmetric equilibrium on \([0, \bar{a}_E)\); however, this might not be the case in general when the trivial partition of \([a^\sharp, a^\flat)\) is used in the construction, as \(\bar{a}_{E}\) could lie to the left of \(\tilde{a}\). } \begin{proposition}[Pareto Improvement over the Symmetric MPE] \label{prop:pareto} For any \(\epsilon > 0\), the \(N\)-player exploration game admits Markov perfect equilibria as in Proposition \ref{prop:asymmetric-mpe} in which each player's payoff exceeds the symmetric equilibrium payoff on \([0, a^\flat - \epsilon]\). \end{proposition} Recall that the stopping threshold \(\tilde{a}_N\) in the symmetric equilibrium remains bounded as \(N\to +\infty\) when the players are too impatient. The reason is that the common payoff function on the interval of interior allocation must satisfy the ODE \(\beta(a,U^\dagger) = 1\), which does not depend on \(N\). As a result, the common payoff function in the symmetric equilibrium is constant in the team size when the resource constraints are not binding. By contrast, the average payoff always increases in the number of players in the asymmetric MPE in which the players take turns exploring at states immediately to the left of the stopping cutoff. This is because the burden of keeping the overall intensity at one at these states can be shared among more players in larger teams. As a result, the players would be able to exploit more often on average, which in turn encourages them to explore at less promising states. 
Therefore, unlike the comparative statics of the symmetric equilibrium, the stopping threshold in the asymmetric equilibria we constructed is not bounded as \(N\) grows, irrespective of the patience of the players and the underlying landscapes. \begin{proposition} \label{prop:unlimited-encouragement} The stopping cutoff \(a^\flat_N\) in the asymmetric MPE of Proposition \ref{prop:asymmetric-mpe} goes to infinity as \(N\to +\infty\).\footnote{ For an IRC landscape, if the asymmetric MPE of Proposition \ref{prop:asymmetric-mpe} fail to exist due to unbounded payoffs for \(N \geq 1/(\rho(1+\theta))\), then by \(N\to +\infty\) we actually mean \(N\to 1/(\rho(1+\theta))\). } \end{proposition} Therefore, the amount of exploration, and thus the long-run outcomes, can be improved significantly over the symmetric equilibrium for large \(N\) by letting the players take turns exploring before exploration fully stops. This positive result, unfortunately, does not fully extend to welfare, as the rate of exploration might still be too low because of non-binding resource constraints, similar to the situation in the symmetric equilibrium. More precisely, for a DRC landscape, the full-intensity threshold \(a^\ddag_N\) always hits 0 as \(N\to+\infty\); for a CRC or IRC landscape, whether this happens again depends on the patience of the players, just as in the symmetric equilibrium. It can be shown that the results regarding the average payoff and the full-intensity threshold in Corollary \ref{cor:asymp-n} extend to the asymmetric MPE of Proposition \ref{prop:asymmetric-mpe} with a larger \(\hat{r}\). \section{Discussion} In this section, we discuss our modeling assumptions and assess the extent to which our results rely on them. We argue that our assumptions establish a parsimonious environment, suggesting that our findings regarding the prevalence of the encouragement effect remain robust across various plausible extensions. 
\paragraph{Payoffs.} We have assumed that players receive payoffs only from exploitation, which serves to highlight the innovation-driven motives for exploration. This assumption deviates from the literature on strategic experimentation such as \textcite{BoltonHarris:1999,KellerRadyCripps:2005}, and the literature on spatial experimentation such as \textcite{Callander:2011,GarfagniniStrulovici:2016}, in which the players also receive payoffs directly from experimentation. Notably, allowing the players to benefit from exploration per se barely changes our results. For example, suppose in addition to the flow payoffs from exploitation, the players also receive a flow payoff of \(k_{n,t}\exp(W(X_t))\dd{t}\) from exploration. In such a case, the ``gap'' still serves as a state variable, but some of the closed-form representations in our results may no longer be attainable. The players would then strictly prefer exploration over exploitation when the gap is sufficiently close to zero, as exploration offers a positive option value in addition to a flow payoff that is nearly identical to that of exploitation. In other words, not only does the prospect of technological advancements motivate the players to explore, but so does exploration per se. As a result, the players would have a stronger incentive to explore, and thus the encouragement effect would prevail under weaker conditions. We have also assumed that the qualities of the technologies contribute exponentially to the payoffs from exploitation. This assumption aligns with the exponential growth of \emph{total factor productivity}, commonly assumed in the macroeconomic growth literature dating back to \textcite{Solow:1956}. Some empirical observations, such as Moore's law (doubling of transistors on integrated circuits every two years), are in line with exponential growth, while others are not (see, e.g., \textcite{Philippon:2022}). 
We make the exponential growth assumption mainly for tractability, because it helps reduce the dimension of the state variable. This simplification can also be achieved by choosing a factor depending linearly on the qualities of the technologies (i.e., adopting the best-known technology delivers a flow payoff of \((1-k_{n,t})S_t\dd{t}\) instead of \((1-k_{n,t})\exp(S_t)\dd{t}\)), but with the underlying landscape represented by a \emph{geometric} Brownian motion. All our results continue to hold in such an equivalent formulation, with the ratio \(S_t / W(X_t)\), or its monotone transformation, such as \(\ln(S_t) - \ln(W(X_t))\), serving as a one-dimensional state variable. We suspect that resorting to a two-dimensional state variable such as \((w, s) = (W(X_t), S_t)\) is inevitable for other functional forms of the flow payoff. The challenge mainly arises from the lack of the homogeneity of payoff functions stated in Lemma \ref{lem:homogeneity}. Without homogeneity, it would be difficult to pin down the equilibrium payoff functions. The analysis for each given \(s\) remains similar to the current setting, but the equilibrium strategies could be hard to analyze and interpret if the strategies are unrestricted along the \(s\)-coordinate. We also suspect that if the order of growth is lower than the exponential rate, the encouragement effect would be unable to overcome the free-rider effect. Moreover, reward and punishment become possible by conditioning actions on the highest-known quality \(s\), which probably leads to more efficient outcomes as in \textcite{HornerKleinRady:2021}. In that paper, they demonstrate that inefficiencies disappear entirely in a class of non-Markovian equilibria in a rich environment that encompasses the Brownian and Poisson models. 
Since our setting lies outside of their environment, it remains an open question whether the insight from their constructive proofs can be applied to our setting to achieve full efficiency, complementary to the positive results that we obtained here by focusing only on MPE. \paragraph{Exploration.} The scope of experimentation is certainly limited in our model: Players do not have complete freedom to choose where to explore. \textcite{GarfagniniStrulovici:2016} allow the players to experiment with any technologies, but radical experimentation, which involves exploring technologies far away from the feasible ones, is assumed to be more expensive. Exploration in our model can be viewed as an extreme abstraction of their model, where incremental experimentation is costless, while radical experimentation comes at an infinite cost. In practice, such a limited experimentation scope may be more appropriate in the context of technology development. For example, pharmaceutical companies can easily test the efficacy of a medicine once its formula is provided, but creating the formula from scratch is nearly impossible. It might also be reasonable to assume that multiple research directions emerge during exploration, allowing players to pursue them concurrently or switch direction if one proves fruitless. Such a possibility is beyond the scope of this paper, but it is expected to give rise to a stronger encouragement effect, as restarting opportunities would raise the likelihood of innovation and therefore the value of exploration as well. \section{Concluding Remarks} This paper introduces a novel and tractable framework for examining the incentives of forward-looking agents in knowledge creation. We identify two key effects that shape the incentives for experimentation: an encouragement effect, unique to strategic and dynamic contexts, and an innovation effect, which is absent from the existing strategic bandit literature where innovation possibilities are often overlooked. 
We demonstrate that the innovation effect, stemming from the prospect of technological advancements, can amplify the encouragement effect, thereby offsetting the free-rider problem prevalent in large teams. Our analysis further illustrates how these effects impact the trajectory of technological progress and long-run outcomes. The proposed model holds promise for future research, with potential applications in dynamic games of innovation, such as patent races. \appendix \appendixpage \input{appendix/contents/prelim-appendix.tex} \section{Explicit Representation of the Symmetric MPE} \input{appendix/contents/cor-symmetric-mpe} \section{Properties of Payoff Functions} \input{appendix/proofs/lem-homogeneity} \input{appendix/contents/lem-smooth-payoff-function} \input{appendix/contents/lem-payoff-function-ppt} \section{Equilibrium Characterization} \subsection{Proof of Lemma \ref{lem:char-mpe}} \input{appendix/proofs/lem-char-mpe} \input{appendix/proofs/lem-smooth-pasting} \input{appendix/proofs/lem-hjb} \section{Cooperative Solution} \subsection{Proof of Proposition \ref{prop:cooperative-solution}} \input{appendix/proofs/prop-cooperative-solution} \section{Properties of MPE} \input{appendix/proofs/prop-mpe-ppt} \section{Symmetric MPE} \subsection{Proof of Proposition \ref{prop:symmetric-mpe}} \input{appendix/proofs/prop-symmetric-mpe} \section{Asymmetric MPE} \subsection{Simple MPE} \input{appendix/contents/prop-simple-mpe} \subsection{Proof of Proposition \ref{prop:asymmetric-mpe}} \input{appendix/proofs/prop-asymmetric-mpe} \subsection{Proof of Proposition \ref{prop:bound-two-player-mpe}} \input{appendix/proofs/prop-bound-two-player-mpe} \subsection{Proof of Proposition \ref{prop:pareto}} \input{appendix/proofs/prop-pareto} \printbibliography \pagenumbering{arabic} \renewcommand*{\thepage}{\thesection--\arabic{page}} \newrefsection \setcounter{section}{15} \setcounter{subsection}{0} \setcounter{footnote}{0} \title{Online Appendix\\``Strategic Exploration for 
Innovation''} \emptythanks \author{ Shangen Li\thanks{\protect\input{contact}} } \maketitle \input{appendix/contents/prelim-appendix-online} \subsection{Complete Information Setting} \subsubsection{Proof of Lemma \ref{lem:full-info-value-ante}} \input{appendix/proofs/lem-full-info-value-ante} \input{appendix/proofs/lem-full-info-value-post} \subsection{Long-run Outcomes} \subsubsection{Proof of Lemma \ref{lem:tech-std-dist}} \input{appendix/proofs/lem-tech-std-dist.tex} \subsubsection{Proof of Lemma \ref{lem:long-run-prob}} \input{appendix/proofs/lem-long-run-prob} \subsection{Properties of Payoff Functions} \subsubsection{Proof of Lemma \ref{lem:smooth-payoff}} \input{appendix/proofs/lem-smooth-payoff-function} \subsubsection{Proof of Lemma \ref{lem:payoff-function-ppt-x}} \input{appendix/proofs/lem-payoff-function-ppt} \subsection{Comparative Statics} \input{appendix/contents/lem-welfare-comparison} \subsubsection{Proof of Corollary \ref{cor:cs-coop}} \input{appendix/proofs/cor-cs-coop} \subsubsection{Proof of Corollary \ref{cor:cs-rho}} \input{appendix/proofs/cor-cs-rho} \subsubsection{Proof of Corollary \ref{cor:cs-n}} \input{appendix/proofs/cor-cs-n} \subsubsection{Proof of Corollary \ref{cor:asymp-n}} \input{appendix/proofs/cor-asymp-n} \subsubsection{Proof of Proposition \ref{prop:unlimited-encouragement}} \input{appendix/proofs/prop-unlimited-encouragement} \subsection{Lemmas in the Construction of Asymmetric MPE} \input{appendix/proofs/lem-split} \subsection{Math Appendix} \label{sec:math} \input{appendix/proofs/lem-running-max-bound} \printbibliography \end{document}
Intuition behind Fourier and Hilbert transform These days I am studying a bit of Fourier analysis, in particular Fourier series and Fourier/Hilbert transforms. Now, I am comfortable with the mathematical definitions and all the formalism, and (more or less) I know all the main theorems. What I don't really understand is why they are so important, and why these concepts are defined in that precise way. Could you explain to me why all these concepts/tools are so significant and useful in (applied) mathematics? Could you give me some intuition behind them? I am not particularly interested in mathematical formulae. I would simply like to know what these definitions really mean. Pretend you are talking to someone smart and very curious, but not very knowledgeable about mathematics. Of course, I encourage not only mathematicians but also engineers and physicists to reply. Having a truly physical interpretation of those concepts would be great!! Very last thing: I would really love to have some unconventional and "personal" interpretations/points of view. Thank you very much for any help!! fourier-analysis fourier-series intuition fourier-transform Fred G. $\begingroup$ The Fourier transform is a way to analyze the frequency or wavenumber content of a signal. The amplitude tells you how pronounced (loud) a certain frequency is. The Hilbert transform is best viewed in terms of what it does in frequency space. In frequency space, it is the same as multiplying by $-i\operatorname{sgn}(\omega)$. Meaning, it takes the frequency content of your signal, and for positive frequencies it gives a phase of $e^{-i\frac{\pi}{2}}$, while for negative frequencies it gives a phase of $e^{i\frac{\pi}{2}}$. Basically: the Hilbert transform causes some phase shifts. $\endgroup$ – Cameron Williams $\begingroup$ Have you seen this previous question? $\endgroup$ $\begingroup$ Thank you very much for your replies!!! 
Cameron, I would like to ask this: is it correct to state "the Fourier transform takes a signal, decomposes it into all the frequencies the signal is made of, and tells me how much each frequency contributes to the whole signal"? $\endgroup$ – Fred G. $\begingroup$ Rahul: no, I haven't seen that question. Now, I will have a look. Thanks!! $\endgroup$ $\begingroup$ The discrete Fourier transform can be grokked using pure linear algebra. The DFT simply changes basis to a special basis, the Fourier basis, which is a basis of eigenvectors for the (cyclic) shift operator. You can easily find the eigenvectors of the shift operator; it's very neat. Since a shift-invariant linear operator commutes with the shift operator (by definition), we can use a simultaneous diagonalization theorem to show that shift-invariant linear operators are diagonalized by the Fourier basis. The whole point is to diagonalize shift-invariant linear operators. $\endgroup$ – littleO The Fourier transform diagonalizes the convolution operator (or linear systems). In other words, if you find convolution non-intuitive, it gets simplified into a simple point-wise product. It happens that the eigenvectors are cisoids (or complex exponentials), hence it gives you a frequency-like interpretation. An operator that makes an essential operation simpler, just as the $\log$ turns multiplication into addition, is an important one. [EDIT1: see below for details]. The Hilbert transform is even more important. It turns a real function into its most "natural" complex extension: for instance it turns a $\cos$ into a cisoid by adding $\imath \sin$ to it. Thus, the complex extension satisfies the Cauchy–Riemann equations. Hilbert remains quite mysterious to me (Fourier as well, to be honest; I studied wavelets to understand Fourier). S. 
Krantz writes, in Explorations in Harmonic Analysis with Applications to Complex Function Theory and the Heisenberg Group, Chapter 2: The Central Idea: The Hilbert Transform: The Hilbert transform is, without question, the most important operator in analysis. It arises in so many different contexts, and all these contexts are intertwined in profound and influential ways. What it all comes down to is that there is only one singular integral in dimension 1, and it is the Hilbert transform. The philosophy is that all significant analytic questions reduce to a singular integral; and in the first dimension there is just one choice. [EDIT1] We talked about Fourier transforms as if they were unique. Let us keep things loose. There are many Fourier flavors. In the continuous case, you can look for explanations in Fourier transform as diagonalization of convolution. In the discrete case, convolution can be "realized" with (infinite) Toeplitz matrices. In the finite-length setting, cyclic convolution matrices can be diagonalized by the Fast Fourier transform. [EDIT] In addition, F. King has produced a two-volume book on the Hilbert transforms in 2009. $\begingroup$ Thank you very much, Laurent! The idea of diagonalizing the convolution operator is intriguing. Could you give me some more details? In the meanwhile, I will surely have a look at the book! $\endgroup$ $\begingroup$ @Fred G. I keep the maths low and loose. Is the edit sufficient? Esp., please check the SE link for the continuous case $\endgroup$ The Fourier series is a way of building up functions on $[-\pi,\pi]$ in terms of functions that diagonalize differentiation--namely $e^{inx}$. If $L=\frac{1}{i}\frac{d}{dx}$ then $Le^{inx}=ne^{inx}$. That is, $e^{inx}$ is an eigenfunction of $L$ with eigenvalue $n$. The fact that all square integrable functions on $[-\pi,\pi]$ can be expanded as $f = \sum_{n=-\infty}^{\infty}c_n e^{inx}$ is quite a nice thing. 
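As a quick numerical aside (a minimal sketch, with numpy's FFT standing in for the Fourier coefficients and an arbitrary band-limited test function): applying $L=\frac{1}{i}\frac{d}{dx}$ on a periodic grid should multiply the $n$-th coefficient by $n$.

```python
import numpy as np

# Check that L = (1/i) d/dx acts diagonally on Fourier coefficients:
# the n-th coefficient of Lf is n times the n-th coefficient of f.
N = 64
x = 2 * np.pi * np.arange(N) / N          # periodic grid on [0, 2*pi)
f = np.sin(3 * x) + np.cos(5 * x)

n = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies ..., -2, -1, 0, 1, 2, ...
Lf_spectral = np.fft.ifft(n * np.fft.fft(f))

# Analytically, Lf = (1/i) f' = (1/i) * (3 cos 3x - 5 sin 5x)
Lf_exact = (3 * np.cos(3 * x) - 5 * np.sin(5 * x)) / 1j

assert np.allclose(Lf_spectral, Lf_exact)
```

Here `np.fft.fftfreq(N, d=1.0/N)` returns the integer frequencies $n$, so multiplying the transform by `n` and inverting realizes $L$ exactly (up to floating-point error) for band-limited $f$.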
If you want to apply the derivative operator $L$ to $f$, you just get $Lf = \sum_{n=-\infty}^{\infty}nc_ne^{inx}$. More generally, if $f$ has $N$ square integrable derivatives, then the $N$-th derivative is $$\frac{1}{i^{N}}f^{(N)}=L^{N} f = \sum_{n=-\infty}^{\infty}n^{N}c_n e^{inx}.$$ Diagonalizing an operator makes it easier to solve all kinds of equations involving that operator. The only issue is this: How do you find the correct coefficients $c_n$ so that you can expand a function $f$ in this way? For the ordinary Fourier series, $$ c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)e^{-int}dt. $$ On a finite interval, this is great. But what happens if you want to work on the entire real line? If you work on larger and larger intervals, then you get more and more terms. You need terms with larger and larger periods, and all multiples of those. In the limit of larger intervals, you need an integral to sum up all the terms, with every possible periodicity. That is, you can expand a square integrable $f$ as $$f(x) = \int_{-\infty}^{\infty}c(s)e^{isx}ds. $$ As before, applying powers of $L=\frac{1}{i}\frac{d}{dx}$ is easier using this representation of $f$: $$ \frac{1}{i^{N}}f^{(N)}(x)= L^{N}f = \int_{-\infty}^{\infty}s^{N}c(s)e^{isx}ds. $$ You can see that the discrete and the continuous cases are remarkably similar. And, based on that, how might you expect to be able to find the coefficient function $c(s)$? As you might guess, $$ c(s) = \frac{1}{2\pi}\int_{-\infty}^{\infty}f(x)e^{-isx}dx. $$ The Fourier transform is a way to diagonalize the differentiation operator on $\mathbb{R}$. The reason that the discrete and continuous Fourier transforms are so important is that they diagonalize the differentiation operator. One way to view the effect of diagonalization is that it turns the operator into a multiplication operator. You can see how that makes solving differential equations a lot easier. In the coefficient space all you do is divide in order to invert. 
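This "divide in coefficient space" recipe can be tried out concretely. A minimal sketch (the periodic model problem $u - u'' = g$ and the test data are illustrative choices): the $n$-th Fourier coefficient of $u$ is obtained from that of $g$ by dividing by $1 + n^2$.

```python
import numpy as np

# Solve u - u'' = g with periodic boundary conditions by dividing
# Fourier coefficients: (1 + n^2) c_n(u) = c_n(g), so c_n(u) = c_n(g) / (1 + n^2).
N = 64
x = 2 * np.pi * np.arange(N) / N
u_true = np.cos(2 * x)
g = 5 * np.cos(2 * x)                     # since u - u'' = cos 2x + 4 cos 2x = 5 cos 2x

n = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies
u = np.fft.ifft(np.fft.fft(g) / (1 + n ** 2)).real

assert np.allclose(u, u_true)
```

Note that $1 + n^2$ never vanishes here, which is exactly the "none of the eigenvalues are zero" caveat below.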
It's the same way with a matrix: if you have a big matrix equation $$Ax = y,$$ and if $A$ is symmetric, then you can find a basis $\{ e_1,e_2,\cdots,e_n \}$ where $Ae_k = \lambda_k e_k$. Then you can expand $x$ and $y$ in this basis: $$x = \sum_{k=1}^{n} c_k e_k \\y = \sum_{k=1}^{n} d_k e_k.$$ Then the equation is solved by division: $$Ax = y \\ \sum_{k=1}^{n} c_k \lambda_k e_k = \sum_{k=1}^{n} d_k e_k \\ c_k = \frac{1}{\lambda_k} d_k.$$ So if you know how to expand $y$ in the $e_k$ terms as $\sum_{k}d_k e_k$, then you can get the solution $x$ by division on the coefficients: $$x = \sum_{k=1}^{n} \frac{1}{\lambda_k} d_k e_k$$ (assuming none of the $\lambda_k$ are $0$). The discrete and continuous Fourier transforms are a way to diagonalize differentiation in an infinite-dimensional space. And that allows you to solve linear problems involving differentiation. Hilbert Transform: The Hilbert transform was developed by Hilbert to study the operation of finding the harmonic conjugate of a function. For example, the function $f(z) = z^2=(x+iy)^2=x^2-y^2+i(2xy)$ has harmonic real and imaginary parts. Hilbert was trying to find a way to go between these two components (in this case $x^2-y^2$ to $2xy$). The setting of this transform is the upper half plane. If you start with a function $f(x)$, find the function $\tilde{f}(x,y)$ that is harmonic in the upper half plane, and then find $g(x,y)$ such that $\tilde{f}(x,y)+ig(x,y)$ is holomorphic in the upper half plane, then the Hilbert transform maps $f$ to $g$. Because $i(f+ig)=-g+if$ is also holomorphic, the transform maps $g$ to $-f$, which means that the square of the transform is $-I$. In this setting, the Hilbert transform turns out to be concisely expressed in terms of the Fourier transform if you work with square integrable functions. – Disintegrating By Parts $\begingroup$ Thanks!!! 
Just to see if I understood what you mean: the starting point is that, given a linear system $A x=y$, diagonalizing $A$ is a very good way to solve it (at least from the theoretical point of view). Now, if $A$ is a differential operator, we observe that the functions doing the work are $e^{i n x}$. Therefore, Fourier series and the Fourier transform are useful and powerful tools precisely because they allow us to expand $x$ and $y$ in the basis $\left\{e_k\right\}$ and follow the whole procedure. Is it right? :) $\endgroup$ $\begingroup$ @FredG. : That's it. $\endgroup$ – Disintegrating By Parts $\begingroup$ What do you mean by finite interval? What exactly is the interval you're talking about? Is it the interval of observation time, for instance? $\endgroup$ – hbak
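Tying this back to Cameron Williams' comment at the top: the Hilbert transform is multiplication by $-i\operatorname{sgn}(\omega)$ in frequency space, and one can check in a few lines that this multiplier sends $\cos$ to $\sin$, so that $f + iHf$ is the cisoid mentioned in the answers (a sketch with an arbitrary test frequency):

```python
import numpy as np

# Hilbert transform implemented directly as the Fourier multiplier -i*sgn(omega)
N = 1024
t = np.arange(N) / N
x = np.cos(2 * np.pi * 5 * t)

freqs = np.fft.fftfreq(N)
H = -1j * np.sign(freqs)
hx = np.fft.ifft(H * np.fft.fft(x)).real

# The Hilbert transform of cos is sin, and x + i*Hx is the complex exponential
assert np.allclose(hx, np.sin(2 * np.pi * 5 * t), atol=1e-10)
assert np.allclose(x + 1j * hx, np.exp(2j * np.pi * 5 * t), atol=1e-10)
```

The second assertion is exactly the "turns a $\cos$ into a cisoid by adding $\imath\sin$" remark made in the answer above.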
\begin{document} \title{\LARGE Power Series Expansions of Modular Forms\\ and Their Interpolation Properties} \author{\LARGE Andrea Mori\\ \large Dipartimento di Matematica\\ \large Universit\`a di Torino} \date{} \maketitle \pagestyle{fancy} \fancypagestyle{plain}{ \fancyhf{} \renewcommand{\headrulewidth}{0pt}} \fancyhead{} \fancyfoot{\normalsize\thepage} \fancyhead[RE,LO]{} \fancyhead[RO]{Power series expansions of modular forms} \fancyhead[LE]{A. Mori} \fancyfoot[RO,LE]{} \fancyfoot[RE,LO]{} \renewcommand{\headrulewidth}{0pt} \begin{abstract} We define a power series expansion of a holomorphic modular form $f$ in the $p$-adic neighborhood of a CM point $x$ of type $K$ for a split good prime $p$. The modularity group can be either a classical congruence group or a group of norm 1 elements in an order of an indefinite quaternion algebra. The expansion coefficients are shown to be closely related to the classical Maass operators and give $p$-adic information on the ring of definition of $f$. By letting the CM point $x$ vary in its Galois orbit, the $r$-th coefficients define a $p$-adic $K^{\times}$-modular form in the sense of Hida. By coupling this form with the $p$-adic avatars of algebraic Hecke characters belonging to a suitable family and using a Rankin-Selberg type formula due to Harris and Kudla along with some explicit computations of Watson and of Prasanna, we obtain in the even weight case a $p$-adic interpolation for the square roots of a family of twisted special values of the automorphic $L$-function associated with the base change of $f$ to $K$. \noindent 2000 Mathematics Subject Classification 11F67 \end{abstract} \section*{Introduction} The idea that the power series expansion of a modular form at a CM point with respect to a well-chosen local parameter should have an arithmetic significance goes back to the author's thesis, \cite{Mori94}. 
The goal of the thesis was to prove an expansion principle, namely a characterization of the ring of algebraic $p$-adic integers of definition of an elliptic modular form in terms of the coefficients of the expansion. Such a result would be analogous to the classical $q$-expansion principle based on the Fourier expansion (e.g. \cite{Katz73}), with the advantage of being generalizable in principle to groups of modularity without parabolic elements where Fourier series are not available. The simplest such situation is that of a Shimura curve attached to an indefinite non-split quaternion algebra $D$ over $\mathbb Q$ (quaternionic modular forms). The basic idea in \cite{Mori94} was to consider a prime $p$ of good reduction for the modular curve that is split in the quadratic field of complex multiplications $K$ and use the Serre-Tate deformation parameter to construct a local parameter at the CM point $x$ corresponding to a fixed embedding of $K$ in the split quaternion algebra. The coefficients of the resulting power series are related to the values obtained evaluating the $C^{\infty}$-modular forms ${\delta}_{k}^{(r)}f$ at a lift $\tau$ of $x$ in the complex upper half-plane, where $k$ is the weight of $f$ and ${\delta}_{k}^{(r)}$ is the $r$-th iterate, in the automorphic sense, of the basic Maass operator ${\delta}_{k}=-\frac1{4\pi}\left(2i\frac{d}{dz}+\frac{k}{y}\right)$. Our first goal in this paper is to prove a version of the expansion principle valid also for quaternionic modular forms without making use of the local complex geometry and completely $p$-adic in nature. 
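For illustration (a sketch added here, not part of the original computations), the operator ${\delta}_{k}$ and its automorphic iterates can be implemented symbolically, writing $\frac{d}{dz}=\frac12(\partial_x-i\partial_y)$ and assuming the standard weight-raising convention ${\delta}_k^{(r)}={\delta}_{k+2r-2}\circ\cdots\circ{\delta}_k$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
k = sp.symbols('k', positive=True)

def d_dz(f):
    # Wirtinger derivative d/dz = (d/dx - i d/dy) / 2 for f a function of x, y
    return (sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2

def maass_delta(f, weight):
    # delta_k = -(1/(4*pi)) * (2*i*d/dz + weight/y)
    return -(sp.Rational(1, 4) / sp.pi) * (2 * sp.I * d_dz(f) + weight * f / y)

def maass_delta_iter(f, weight, r):
    # r-th iterate in the automorphic sense: the weight is raised by 2 at each step
    for j in range(r):
        f = maass_delta(f, weight + 2 * j)
    return f

# Sanity check: delta_k annihilates y**(-k)
assert sp.simplify(maass_delta(y**(-k), k)) == 0
```

The final assertion records that ${\delta}_{k}$ annihilates $y^{-k}$, a quick consistency check of the normalization given above.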
The realization of modular forms as global sections of a line bundle ${\cal L}$ suitable for the Serre-Tate theory is subtler in the non-split case because for Shimura curves the Kodaira-Spencer map ${\rm KS}\colon\mathrm{Sym}^2\underline{\omega}\rightarrow{\Omega}^1_{{\cal X}}$ is not an isomorphism (for a trivial reason: the push-forward $\underline{\omega}=\pi_{*}{\Omega}^{1}_{{\cal A}/{\cal X}}$ for the universal family of ``false elliptic curves'' has rank 2). This motivates the introduction of $p$-ordinary test triples (definition \ref{th:testpair}) that require moving to an auxiliary quadratic extension. The abelian variety of dimension $\leq2$ corresponding to the CM point $x$ defined over the ring of $p$-adic algebraic integers ${\cal O}_{(v)}$ is either a CM curve $E$ with ${\rm End}_{0}(E)=K$ or an abelian surface isogenous to a twofold product $E\times E$ of such a CM curve. To it we associate a complex period ${\Omega}_{\infty}\in\mathbb C^{\times}$ and a $p$-adic period ${\Omega}_p\in{\cal O}_v^{{\rm nr},\times}$. If the modular form is also defined over ${\cal O}_{(v)}$ and $\sum_{r=0}^{\infty}(b_{r}(x)/r!)T_{x}^{r}$ is its expansion obtained from the Serre-Tate theory, we establish in theorem \ref{thm:equality} an equality \begin{equation} c^{(r)}_{v}(x)= {\delta}_{k}^{(r)}(f)(\tau){\Omega}_{\infty}^{-k-2r}= b_{r}(x){\Omega}_{p}^{-k-2r} \label{eq:intro1} \end{equation} of elements in ${\cal O}_{(v)}$. The expansion principle, theorem \ref{thm:expanprinc}, asserts that if $f$ is a holomorphic modular form such that the numbers $c^{(r)}_{v}(x)$ defined by the complex side of the equality \eqref{eq:intro1} are in ${\cal O}_{v}$ and the $p$-adic integers ${\Omega}_p^{2r}c^{(r)}_{v}(x)$ satisfy the Kummer-Serre congruences, then $f$ is defined over the integral closure of ${\cal O}_{(v)}$ in the compositum of all finite extensions of the quotient field of ${\cal O}_{(v)}$ in which $v$ splits completely.
Suppose again that the holomorphic modular form $f$ is defined over a ring ${\cal O}_{(v)}$ of $p$-adic integers. The numbers $c^{(r)}_{v}(x)$ are related to the coefficients of a $p$-integral power series, i.e. to a $p$-adic measure on $\mathbb Z_{p}$, naturally attached to $f$. One may wonder about the interpolation properties of this measure. In the introduction of \cite{HaTi01} Harris and Tilouine suggest that in the case of an eigenform $f$ the author's techniques may be used in conjunction with the results of Waldspurger \cite{Waldsp85} to $p$-adically interpolate the square roots of the special values of the automorphic $L$-functions $L(\pi_{K}\otimes\xi,s)$, where $\pi_{K}$ is the base change to $K$ of the ${\rm GL}_{2}$-automorphic representation $\pi$ associated to $f$ (possibly up to Jacquet-Langlands correspondence) and $\xi$ belongs to a suitable family of Gr\"ossencharakters for $K$. Our second goal for this paper is to partially fulfill this expectation when $f$ has even weight $2{\kappa}$. A key observation (proposition \ref{th:meascrfx}) is that the set of values $c^{(r)}_{v}(x)$ for $x$ ranging in a full set of representatives of the copy of the generalized ideal class group ${K_{\A}^{\times}}/K^{\times}\mathbb C^{\times}\wh{{\cal O}}_{c}^\times$ embedded in the modular (or Shimura) curve extends to a Hida \cite{Hida86} $p$-adic ${\rm GL}_{1}(K)$-modular form $\hat{c}_{r}$, which is essentially the $r$-th moment of a $p$-adic measure on $\mathbb Z_{p}$ with values in the unit ball of the $p$-adic Banach space of such $p$-adic forms. 
The scalar obtained by coupling the form $\hat{c}_{r}$ with the $p$-adic avatar of a Gr\"ossencharakter $\xi_r$ for $K$ trivial on $\wh{\cal O}_{c}^\times$ and of suitable weight twisted by a power of the idelic norm is proportional to the integral \begin{equation} J_{r}(f,\xi_r,\tau)= \int_{{K_{\A}^{\times}}/K^\times\mathbb R^\times}\phi_r(td_\infty)\xi_r(t)\,dt \label{eq:intro2} \end{equation} where $\phi_r$ is the adelic lift of ${\delta}_{2{\kappa}}^{(r)}(f)$, $\tau\in\mathfrak H$ represents $x$ and $d_{\infty}\in{\rm SL}_{2}(\mathbb R)$ is the standard parabolic matrix such that $d_{\infty}i=\tau$. When $\xi_r$ is of the form $\xi_r=\chi\xi^r$ and satisfies some technical conditions the value so obtained is essentially the $r$-th moment of a $p$-adic measure $\mu(f,x;\chi,\xi)$ on $\mathbb Z_p$. On the other hand, the square of the integral \eqref{eq:intro2} is a special case of the generalized Fourier coefficients $L_{\underline{\xi}}(\Phi)$ studied by Harris and Kudla in \cite{HaKu91}. Building on results of Shimizu \cite{Shimi72} and refining the techniques of Waldspurger \cite{Waldsp85}, Harris and Kudla use the seesaw identity associated with the theta correspondence between the similitude groups ${\rm GL}_{2}$ and ${\rm GO}(D)$ and the splitting $D=K\oplus K^{\perp}$ to express the generalized Fourier coefficients $L_{\underline{\xi}}(\theta_{\varphi}(F))$, where $F\in\pi$ and $\varphi$ is a split primitive Schwartz-Bruhat function on $D_{\mathbb A}$, as a Rankin-Selberg Euler product.
Thus, we can use the explicit version of Shimizu's theory worked out by Watson \cite{Wat03} and the local non-archimedean computations of Prasanna \cite{Pra06}, together with some local archimedean computations, to obtain a formula relating the square of the $r$-th moment of $\mu(f,x;\chi,\xi)$ to the values $L(\pi_K\otimes\chi\xi^r,\frac12)$, whose local correcting terms are explicit outside the primes dividing the conductor of the Gr\"ossencharakter and the primes dividing the non-square-free part of the level (theorem \ref{thm:maininterpolation}). Some natural questions arise. First of all, one would like to compute the special values of the $p$-adic $L$-function attached to the measure $\mu(f,x;\chi,\xi)$. Secondly, one may ask if the methods can be extended to treat different or more general families of Gr\"ossencharakters, in particular if one can control the interpolation as the ramification at $p$ increases. Proposition \ref{th:oldforms} implies that, if anything, this cannot be achieved without moving the CM point. Thus, some kind of geometric construction in the modular curve may be in order, with a possible link to the question of the determination of the action of the Hecke operators on the Serre-Tate expansions. Another question is whether the reinterpretation of the integral \eqref{eq:intro2} as an inner product in the space of $p$-adic ${\rm GL}_{1}(K)$-modular forms can be used to obtain an estimate of the number of non-vanishing special values $L(\pi_{K}\otimes\xi,\frac12)$. We hope to be able to attack these problems in a future paper. \paragraph{Acknowledgements.} The idea that the power series coefficients may be used to $p$-adically interpolate the special values $L(\pi_{K}\otimes\xi,\frac12)$ arose a long time ago in conversations with Michael Harris. I wish to thank Michael Harris for sharing his intuitions and for many useful suggestions.
Also, I wish to thank the anonymous referee of a previous version of the manuscript, whose suggestions helped greatly to remove some unnecessary hypotheses. \paragraph{Notations and Conventions.} The symbols $\mathbb Z$, $\mathbb Q$, $\mathbb R$, $\mathbb C$ and $\mathbb F_q$ denote, as usual, the integers, the rationals, the reals, the complex numbers and the field with $q$ elements respectively. We fix once and for all an embedding $\imath\colon\overline{\mathbb Q}\rightarrow\mathbb C$ and by a number field we mean a finite subextension of the field $\overline{\mathbb Q}$ of algebraic numbers. If $L$ is a number field, we denote ${\cal O}_{L}$ its ring of integers and ${\delta}_{L}$ its discriminant. If $L=\mathbb Q(\sqrt{d})$ is a quadratic field, for each positive integer $c$ we denote ${\cal O}_{L,c}=\mathbb Z+c{\cal O}_{L}=\mathbb Z[c{\omega}_d]$ its order of conductor $c$, with ${\omega}_d=\sqrt d$ if $d\equiv 2$, $3\bmod 4$ or ${\omega}_d=(1+\sqrt d)/2$ if $d\equiv 1\bmod 4$. If $[L:\mathbb Q]=n$ we denote $I_L=\{{\sigma}_1,\dots,{\sigma}_n\}$ the set of embeddings ${\sigma}_i:L\rightarrow\mathbb C$ and we assume ${\sigma}_1=\imath_{|L}$. If $p$ is a rational prime we denote $\mathbb Z_{p}$ and $\mathbb Q_{p}$ the $p$-adic integers and the $p$-adic numbers respectively. By analogy, $\mathbb Q_\infty=\mathbb R$. If $v|p$ is a place of the number field $L$ corresponding to the maximal ideal $\mathfrak p_{v}\subset{\cal O}_{L}$, we denote ${\cal O}_{(v)}$, $L_{v}$, ${\cal O}_{v}$, $k(v)$ the localization of ${\cal O}_{L}$ at $\mathfrak p_{v}$, the $v$-adic completion of $L$, the ring of $v$-adic integers in $L_{v}$ and the residue field respectively. The maximal ideal in ${\cal O}_{v}$ is still denoted $\mathfrak p_{v}$. Also, we denote $\nr{L}_v$ the maximal unramified extension of $L_{v}$ and $\nr{{\cal O}}_v$ its ring of integers.
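For later use we record the standard discriminant computation for these orders (here ${\rm Tr}_{L/\mathbb Q}$ denotes the field trace, not the reduced trace of a quaternion algebra): computing on the $\mathbb Z$-basis $\{1,c{\omega}_d\}$ one finds, in both cases $d\equiv2,3\bmod4$ and $d\equiv1\bmod4$,
$$
{\rm disc}({\cal O}_{L,c})=
\det\left(\begin{array}{cc}
{\rm Tr}_{L/\mathbb Q}(1) & {\rm Tr}_{L/\mathbb Q}(c{\omega}_d)\\
{\rm Tr}_{L/\mathbb Q}(c{\omega}_d) & {\rm Tr}_{L/\mathbb Q}(c^2{\omega}_d^2)
\end{array}\right)=c^{2}{\delta}_{L}.
$$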
We denote $\widehat{\mathbb Z}$ the profinite completion of $\mathbb Z$ and for each $\mathbb Z$-module $M$ we let $\widehat M=M\otimes\widehat\mathbb Z$. We denote $\mathbb A$ the ring of rational adeles and $\mathbb A_{f}$ the finite adeles, so that $\mathbb A=\mathbb R\times\mathbb A_{f}=\mathbb Q\mathbb R\widehat\mathbb Z$. For a number field $L$ we denote $\mathbb A_L=\mathbb A\otimes L$ and $L_\mathbb A^\times$ the corresponding ring of adeles and group of id\`eles respectively. If $\mathfrak n\subseteq{\cal O}_{L}$ is an ideal, we let $L^{\times}_{\mathfrak n}=\{{\lambda}\in L^{\times}\mbox{ such that }{\lambda}\equiv1\bmod\mathfrak n\}$ and denote ${\cal I}_{\mathfrak n}$ the group of fractional ideals of $L$ prime with $\mathfrak n$, $P_{\mathfrak n}$ the subgroup of principal fractional ideals generated by the elements in $L^{\times}_{\mathfrak n}$ and $U_{\mathfrak n}$ the subgroup of finite id\`eles that are products of local units congruent to $1\bmod\mathfrak n$. We fix an additive character $\psi$ of $\mathbb A/\mathbb Q$ by requiring that $\psi_\infty(x)=e^{2\pi i x}$ and that, for finite $p$, $\psi_p$ is trivial on $\mathbb Z_p$ with $\psi_p(x)=e^{-2\pi i x}$ for $x\in\mathbb Z[\inv p]$. On $\mathbb A$ we fix the Haar measure $dx=\prod_{p\leq\infty}dx_p$ where the local Haar measures $dx_p$ are normalized so that the $\psi_p$-Fourier transform is autodual. For a quaternion algebra $D$ with reduced norm $\nu$, we fix on $D_\mathbb A$ the Haar measure $dx=\prod_{p\leq\infty}dx_p$ where the local Haar measures $dx_p$ are normalized so that the Fourier transform with respect to the norm form is autodual. Let $(V,\scal{\,}{\,})$ be a quadratic space of dimension $d$ over $\mathbb Q$.
We denote ${\cal S}_{\mathbb A}(V)=\bigotimes_{p\leq\infty}{\cal S}_{p}$ the adelic Schwartz-Bruhat space, where for $p$ finite, ${\cal S}_{p}$ is the space of Bruhat functions on $V\otimes\mathbb Q_p$ and ${\cal S}_{\infty}$ is the space of Schwartz functions on $V\otimes\mathbb R$ which are finite under the natural action of a (fixed) maximal compact subgroup of the similitude group ${\rm GO}(V)$. The Weil representation $r_\psi$ is the representation of ${\rm SL}_2(\mathbb A)$ on ${\cal S}_{\mathbb A}(V)$ which is explicitly described locally at $p\leq\infty$ by \begin{subequations}\label{eq:Weilrep} \begin{eqnarray} r_\psi\left(\begin{array}{cc}1 & b \\0 & 1\end{array}\right)\varphi(x) & = & \psi_p\left(\frac12\scal{bx}{x}\right)\varphi(x), \label{eq:Weilrep1} \\ r_\psi\left(\begin{array}{cc}a & 0 \\ 0 & \inv a\end{array}\right)\varphi(x) & = & \chi_V(a)\vass{a}_p^{d/2}\varphi(ax) \label{eq:Weilrep2} \\ r_\psi\left(\begin{array}{cc}0 & 1 \\ -1 & 0\end{array}\right)\varphi(x) & = & \gamma_V\hat{\varphi}(x) \label{eq:Weilrep3} \end{eqnarray} \end{subequations} where $\gamma_V$ is an eighth root of 1 and $\chi_V$ is a quadratic character that are computed in our cases of interest in \cite{JaLa70} (see also the table in \cite[\S3.4]{Pra06}), while the Fourier transform $\hat{\varphi}(x)=\int_{V\otimes\mathbb Q_p}\varphi(y)\psi_p(\scal xy)\,dy$ is computed with respect to a $\scal{\,}{\,}$-self-dual Haar measure on $V\otimes\mathbb Q_p$. If $R$ is a ring and $M$ an $R$-module we denote $\dual M={\rm Hom}(M,R)$ the dual of $M$. The same notation applies to a sheaf of modules over a scheme. If $G$ is a subgroup of units in $R$ we say that non-zero elements $x$, $y\in M$ are $G$-equivalent and write $x\sim_{G}y$ if there exists $r\in G$ such that $rx=y$. The group ${\rm SL}_{2}(\mathbb R)$ acts on the complex upper half-plane $\mathfrak H$ by linear fractional transformations: if $g=\smallmat abcd$ then $g\cdot z=\frac{az+b}{cz+d}$.
The automorphy factor is defined to be $j(g,z)=cz+d$. The action extends to an action of the group ${\rm GL}^{+}_{2}(\mathbb R)$. If ${\Gamma}<{\rm SL}_{2}(\mathbb R)$ is a Fuchsian group of the first kind we shall denote $M_{k}({\Gamma})$ the space of modular forms of weight $k\in\mathbb Z$ with respect to ${\Gamma}$, i.e. the holomorphic functions $f$ on $\mathfrak H$ such that $$ \mbox{$f({\gamma} z)=f(z)j({\gamma},z)^k$ for all $z\in\mathfrak H$ and ${\gamma}\in{\Gamma}$} $$ and which extend holomorphically to a neighborhood of each cusp (when cusps exist). The subspace of cuspforms, i.e. those modular forms that vanish at the cusps, will be denoted $S_{k}({\Gamma})$. The requirement that a holomorphic function on $\mathfrak H$ extends holomorphically to a neighborhood of a cusp $s$ is equivalent to a certain growth condition as $z\to s$. Relaxing holomorphicity but maintaining the growth condition yields the much bigger spaces of $C^\infty$-modular forms and $C^\infty$-cuspforms, which will be denoted $M_{k}^{\infty}({\Gamma})$ and $S_{k}^{\infty}({\Gamma})$ respectively. We will denote $$ M_{k,{\varepsilon}}({\Delta},N),\quad S_{k,{\varepsilon}}({\Delta},N),\quad M_{k,{\varepsilon}}^{\infty}({\Delta},N),\quad S_{k,{\varepsilon}}^{\infty}({\Delta},N) $$ the above spaces of modular or cuspforms with respect to the groups ${\Gamma}={{\Gamma}_{\vep}}({\Delta},N)$, ${\varepsilon}\in\{0,1\}$, defined in section \ref{se:curves}. It is a well-known fact that $M_{k,{\varepsilon}}({\Delta},N)$ is always finite-dimensional and trivial for $k<0$. \section{Modular and Shimura curves} \subsection{Quaternion algebras.}\label{se:quatalg} Let $D$ be a quaternion algebra over $\mathbb Q$ with reduced norm $\nu$ and reduced trace ${\rm tr}$. For each place $\ell$ of $\mathbb Q$ let $D_\ell=D\otimes_{\mathbb Q}\mathbb Q_\ell$. Let ${\Sigma}_D$ be the set of places at which $D$ is \emph{ramified}, i.~e. $D_\ell$ is the unique, up to isomorphism, quaternion division algebra over $\mathbb Q_\ell$.
If $\ell\notin{\Sigma}_D$ the algebra $D$ is \emph{split} at $\ell$, i.~e. $D_\ell\simeq{\rm M}_2(\mathbb Q_\ell)$. The set ${\Sigma}_D$ is finite of even cardinality and completely determines the isomorphism class of $D$. Moreover, every finite subset of places of $\mathbb Q$ of even cardinality is the set of ramified places of some quaternion algebra over $\mathbb Q$ (for these and the other basic results on quaternion algebras the standard reference is \cite{Vigner80}). In particular, $M_2(\mathbb Q)$ is the only quaternion algebra up to isomorphism which is \emph{split}, i.~e. split at all places. The discriminant ${\Delta}={\Delta}_D$ of $D$ is the product of the finite primes in ${\Sigma}_D$ if ${\Sigma}_D\neq\emptyset$, or ${\Delta}=1$ otherwise. We shall henceforth assume that $D$ is \emph{indefinite}, i.~e. split at $\infty$, and fix an isomorphism $\Phi_\infty\colon D_\infty\buildrel\sim\over\rightarrow{\rm M}_2(\mathbb R)$ which will be often left implicit. There is a unique conjugacy class of maximal orders in $D$. Once and for all, choose a maximal order ${\cal R}_{1}$ and fix isomorphisms $\Phi_\ell\colon D_\ell\buildrel\sim\over\rightarrow{\rm M}_2(\mathbb Q_\ell)$ for $\ell\notin{\Sigma}_D$ so that $\Phi_\ell({\cal R}_{1})={\rm M}_2(\mathbb Z_\ell)$. For an integer $N$ prime to ${\Delta}$ let ${\cal R}_{N}$ be the level $N$ Eichler order of $D$ such that $$ {\cal R}_{N}\otimes_\mathbb Z\mathbb Z_\ell=\inv{\Phi_\ell} \left(\left\{\left( \begin{array}{cc} a & b \\ c & d \end{array}\right) \hbox{$\in{\rm M}_2(\mathbb Z_\ell)$ such that $c\equiv0\bmod N$} \right\}\right) $$ for $\ell\notin{\Sigma}_D$, and ${\cal R}_{N}\otimes\mathbb Z_\ell$ is the unique maximal order in $D_\ell$ for $\ell\in{\Sigma}_D$. If $D={\rm M}_2(\mathbb Q)$ we take ${\cal R}_{1}={\rm M}_2(\mathbb Z)$ and ${\cal R}_{N}=\left\{\smallmat abcd \hbox{$\in{\rm M}_2(\mathbb Z)$ such that $c\equiv0\bmod N$}\right\}$.
There are exactly two homomorphisms ${\rm or}_\ell^{1},{\rm or}_\ell^{2}\colon{\cal R}_{N}\otimes\mathbb F_\ell\longrightarrow\mathbb F_{\ell^2}$ for each prime $\ell|{\Delta}$, and two homomorphisms ${\rm or}_\ell^{1},{\rm or}_\ell^{2}\colon{\cal R}_{N}\otimes\mathbb F_\ell\longrightarrow{\mathbb F_{\ell}}^2$ for each prime $\ell|N$. These maps are called $\ell$-\emph{orientations} and the two $\ell$-orientations are switched by the non-trivial automorphism of either $\mathbb F_{\ell^2}$ or ${\mathbb F_{\ell}}^{2}$. An orientation for ${\cal R}_{N}$ is the choice of an $\ell$-orientation ${\rm or}_\ell$ for all primes $\ell|N{\Delta}$. An involution $d\mapsto\invol d$ in $D$ is \emph{positive} if ${\rm tr}(d\invol d)>0$ for all $d\in D$. By the Skolem-Noether theorem \begin{equation} \invol d=\inv t\bar{d}t \label{eq:involution} \end{equation} where $t\in D$ is some element such that $t^2\in\mathbb Q^{{}<0}$ and $d\mapsto\bar{d}$ denotes quaternionic conjugation, $d+\bar{d}={\rm tr}(d)$. If $t\in D$ is such an element, let $B_t$ be the bilinear form on $D$ defined by \begin{equation} B_t(a,b)={\rm tr}(a\bar{b}t)={\rm tr}(at\invol b)\qquad \hbox{for all $a, b\in D$}. \label{eq:formEt} \end{equation} If ${\cal R}\subset D$ is an order, the involution $d\mapsto\invol d$ is called ${\cal R}$-\emph{principal} if $\invol{\cal R}={\cal R}$ and the bilinear form $B_t$ is skew-symmetric, non-degenerate and $\mathbb Z$-valued on ${\cal R}\times{\cal R}$ with Pfaffian equal to $1$. When ${\Delta}>1$ an explicit model for the triple $(D,{\cal R}_{N},d\mapsto\invol d)$ can be constructed as follows. The condition $(n,-N{\Delta})_{\ell}=-1$ on Hilbert symbols for all $\ell\in{\Sigma}_D$ restricts $n$ to a certain subset of non-zero congruence classes modulo $N{\Delta}$. Passing to classes modulo $8N{\Delta}$ and taking $n>0$ we may assume that $(n,-N{\Delta})_{\infty}=(n,-N{\Delta})_{p}=1$ for all primes $p$ dividing $N$ and also $(n,-N{\Delta})_{2}=1$ if ${\Delta}$ is odd.
By Dirichlet's theorem on primes in arithmetic progressions there exists a prime $p_{o}$ satisfying these conditions, and the product formula easily implies that \begin{equation} (p_{o},-N{\Delta})_{\ell}=-1\qquad\mbox{if and only if $\ell\in{\Sigma}_{D}$.} \label{eq:condonHS} \end{equation} Let $a\in\mathbb Z$ be such that $a^2N{\Delta}\equiv-1\bmod p_o$. \begin{thm}[Hashimoto, \cite{Hashim95}]\label{th:Hashimoto} Let $D$ be a quaternion algebra over $\mathbb Q$ of discriminant ${\Delta}$ and let $t\in D$ such that $t^2\in\mathbb Q^{{}<0}$. Then: \begin{enumerate} \item $D$ is isomorphic to the quaternion algebra $D_H=\mathbb Q\oplus\mathbb Q i\oplus\mathbb Q j\oplus\mathbb Q ij$, where $i^2=-N{\Delta}$, $j^2=p_o$ and $ij=-ji$; \item the order ${\cal R}_{H,N}=\mathbb Z{\epsilon}_1\oplus\mathbb Z{\epsilon}_2\oplus\mathbb Z{\epsilon}_3 \oplus\mathbb Z{\epsilon}_4$, where ${\epsilon}_1=1$, ${\epsilon}_2=(1+j)/2$, ${\epsilon}_3=(i+ij)/2$ and ${\epsilon}_4=(aN{\Delta} j+ij)/p_o$ is an Eichler order of level $N$ in $D_H$; \item the skew-symmetric form $B_t$ on $D_H$ is $\mathbb Z$-valued on ${\cal R}_{H,N}$ if and only if $ti\in{\cal R}_{H,N}$. Moreover, it defines a non-degenerate pairing on ${\cal R}_{H,N}\times{\cal R}_{H,N}$ if and only if $ti\in{\cal R}_{H,N}^\times$; \item let $t=\inv i$. Then the elements $\eta_1={\epsilon}_3-\frac12(p_o-1){\epsilon}_4$, $\eta_2=-aN{\Delta}-{\epsilon}_4$, $\eta_3=1$ and $\eta_4={\epsilon}_2$ form a symplectic $\mathbb Z$-basis of ${\cal R}_{H,N}$. \end{enumerate} \end{thm} \noindent We call \emph{Hashimoto model} of a quaternion algebra endowed with an Eichler order ${\cal R}$ of level $N$ and an ${\cal R}$-principal positive involution the triple $(D_H,{\cal R}_{H,N},\inv i)$ given in the above theorem.
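As a consistency check (not part of the theorem's statement), the skew-symmetry of $B_t$ follows directly from the definition \eqref{eq:formEt}: the condition $t^2\in\mathbb Q^{{}<0}$ forces ${\rm tr}(t)=0$, i.e. $\bar{t}=-t$, so using ${\rm tr}(x)={\rm tr}(\bar{x})$ and ${\rm tr}(xy)={\rm tr}(yx)$ one finds
$$
B_t(b,a)={\rm tr}(b\bar{a}t)={\rm tr}(\overline{b\bar{a}t})={\rm tr}(\bar{t}a\bar{b})
=-{\rm tr}(a\bar{b}t)=-B_t(a,b).
$$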
We can fix the isomorphism $\Phi_\infty$ for the Hashimoto model by declaring that $$ \Phi_\infty(i)= \left( \begin{array}{cc} 0 & -1 \\ N{\Delta} & 0 \end{array} \right),\qquad \Phi_\infty(j)= \left( \begin{array}{cc} \sqrt{p_o} & 0 \\ 0 & -\sqrt{p_o} \end{array} \right). $$ \subsection{Moduli spaces.}\label{se:curves} Fix an ${\cal R}_{1}$-principal positive involution $d\mapsto\invol d$ as in \eqref{eq:involution}. We shall consider the groups $$ {{\Gamma}_{0}}({\Delta},N)={\cal R}^{1}_{N}=\left\{ \hbox{${\gamma}\in{\cal R}_{N}$ such that $\nu({\gamma})=1$} \right\} $$ and $$ {{\Gamma}_{1}}({\Delta},N)=\left\{\hbox{${\gamma}\in{{\Gamma}_{0}}({\Delta},N)$ such that ${\rm or}^{{\epsilon}}_{\ell}({\gamma} r)={\rm or}^{{\epsilon}}_{\ell}(r)$ for all $r\in{\cal R}_{N}$, $\ell|N, {\epsilon}=1,2$}\right\}. $$ When ${\Delta}=1$, ${{\Gamma}_{0}}(1,N)$ and ${{\Gamma}_{1}}(1,N)$ are the classical congruence subgroups $$ {\Gamma}_0(N)=\left\{ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in{\rm SL}_2(\mathbb Z)\hbox{ such that $c\equiv0\bmod N$}\right\} $$ and $$ {\Gamma}_1(N)= \left\{ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \in{\rm SL}_2(\mathbb Z)\hbox{ such that $a,d\equiv1$ and $c\equiv0\bmod N$}\right\} $$ respectively. Since $D$ is indefinite, ${{\Gamma}_{\vep}}({\Delta},N)$ for ${\varepsilon}\in\{0,1\}$ is, via $\Phi_\infty$, a discrete subgroup of ${\rm SL}_2(\mathbb R)$ acting on the complex upper half-plane $\mathfrak H$. When ${\Delta}>1$ the quotient $X_{{\varepsilon}}({\Delta},N)={{\Gamma}_{\vep}}({\Delta},N)\backslash\mathfrak H$ is a compact Riemann surface, \cite[proposition~9.2]{ShiRed}. When ${\Delta}=1$ let $X_{\varepsilon}(N)$ be the standard cuspidal compactification of $Y_{\varepsilon}(N)={{\Gamma}_{\vep}}(N)\backslash\mathfrak H$. Each of these complete curves $X$ has a canonical model over $\mathbb Q$, \cite{ShiRed}.
In fact, each $X$ can be reinterpreted as the set of complex points of a scheme ${\cal X}$ which is the solution of a moduli problem, defined over $\mathbb Z[1/{N{\Delta}}]$, e.g. \cite{BerDar96, DelRap73, DiaIm95, Milne79, Robert89}. When $D=M_2(\mathbb Q)$ and $N>3$, the functor $F_1(N)\colon\hbox{\bf $\mathbb Z[1/N]$-Schemes}\rightarrow\hbox{\bf Sets}$ defined by $$ F_1(N)(S)= \left\{ \sopra{\hbox{Isomorphism classes of generalized elliptic curves $E=E_{|S}$}} {\hbox{with a section $P\colon S\rightarrow E$ of exact order $N$}}, \right\} $$ is represented by a proper and smooth $\mathbb Z[\frac1{N}]$-scheme ${\cal X}_1(N)$ such that ${\cal X}_1(N)(\mathbb C)=X_{1}(N)$. The complex elliptic curve with point $P$ of exact order $N$ corresponding to $z\in\mathfrak H$ is the torus $E_z=\mathbb C/\mathbb Z\oplus\mathbb Z z$ with $P=1/N \bmod\mathbb Z$. Denote \begin{equation} \pi_N\colon{\cal E}_{N}\longrightarrow{\cal X}_{1}(N) \label{eq:univEC} \end{equation} the universal generalized elliptic curve attached to the representable functor $F_{1}(N)$. The scheme ${\cal X}_0(N)$ quotient of ${\cal X}_1(N)$ by the action of the group of diamond operators $\langle a\rangle\colon{\cal X}_1(N)\rightarrow{\cal X}_1(N)$, $\langle a\rangle(E,P)=(E,aP)$ for all $a\in(\mathbb Z/N\mathbb Z)^\times$, is the coarse moduli scheme attached to the functor $$ F_0(N)(S)= \left\{ \sopra{\hbox{Isomorphism classes of generalized elliptic curves $E=E_{|S}$}} {\hbox{with a cyclic subgroup $C\subset E$ of exact order $N$}} \right\} $$ and a smooth $\mathbb Z[1/N]$-model for the curve $X_0(N)$. 
When ${\Delta}>1$ and $N>3$, we have $X_1({\Delta},N)={\cal X}_{1}({\Delta},N)(\mathbb C)$ for the proper and smooth $\mathbb Z[1/{N{\Delta}}]$-scheme ${\cal X}_{1}({\Delta},N)$ representing the functor $F_1({\Delta},N)\colon\hbox{\bf $\mathbb Z[1/{N{\Delta}}]$-Schemes}\rightarrow\hbox{\bf Sets}$ defined by $$ F_1({\Delta},N)(S)= \left\{ \sopra{ \sopra{\hbox{Isomorphism classes of compatibly principally polarized}} {\hbox{ abelian surfaces $A=A_{|S}$ with a ring embedding}}} {\sopra{\hbox{${\cal R}_{1}\hookrightarrow{\rm End}(A)$ and an equivalence class of}} {\hbox{${\cal R}_N$-orientation preserving level $N$ structures}}} \right\}. $$ A level $N$ structure on an abelian surface $A$ with ${\cal R}_{1}\subset{\rm End}(A)$ is an isomorphism of (left) ${\cal R}_{1}$-modules $A[N]\simeq{\cal R}_{1}\otimes(\mathbb Z/N\mathbb Z)$. Two such structures are declared equivalent if they coincide on ${\cal R}_N\otimes(\mathbb Z/N\mathbb Z)$ and induce the same $\ell$-orientations on ${\cal R}_N$ for all $\ell|N$. The principal polarization is compatible with the embedding ${\cal R}_{1}\subset{\rm End}(A)$ if the involution $d\mapsto\invol d$ is the Rosati involution. The abelian surfaces in $F_1({\Delta},N)(S)$ are called \emph{abelian surfaces with quaternionic multiplication} (QM-abelian surfaces, for short) or \emph{false elliptic curves}. The complex QM-abelian surface corresponding to $z\in\mathfrak H$ is \begin{equation} A_z=D_\infty^z/{\cal R}_{1}, \label{eq:QMtori} \end{equation} where $D_\infty^z$ is the real vector space $D_\infty$ endowed with the $\mathbb C$-structure defined by the identification $\mathbb C^2=\Phi_\infty(D_\infty)\left(\sopra{z}{1}\right)$, i.e. $A_z=\mathbb C^2/\Phi_\infty({\cal R}_{1})\vvec{z}{1}$.
The complex uniformization \eqref{eq:QMtori} defines a level structure $\inv{N}{\cal R}_{1}/{\cal R}_{1}=(D/{\cal R}_{1})[N]\buildrel\sim\over\rightarrow(A_{z})[N]$ and the skew-symmetric form $\scal{\Phi_\infty(a)\left(\sopra{z}{1}\right)} {\Phi_\infty(b)\left(\sopra{z}{1}\right)}=B_t(a,b)$ for all $a,b\in D$, where $B_t$ is as in \eqref{eq:formEt}, extended to $\mathbb C^{2}$ by $\mathbb R$-linearity is the unique Riemann form on $A_{z}$ with Rosati involution $d\mapsto\invol d$, \cite[lemma~1.~1]{Milne79}. Denote \begin{equation} \pi_{{\Delta},N}\colon{\cal A}_{{\Delta},N}\longrightarrow{\cal X}_{1}({\Delta},N) \label{eq:univQMAV} \end{equation} the universal QM abelian surface attached to the representable functor $F_{1}({\Delta},N)$. As with the split case, a smooth $\mathbb Z[1/{N{\Delta}}]$-model ${\cal X}_{0}({\Delta},N)$ of $X_{1}({\Delta},N)$ can be obtained as quotient of ${\cal X}_{1}({\Delta},N)$ by a suitable action of $\frac{{{\Gamma}_{0}}({\Delta},N)}{{{\Gamma}_{1}}({\Delta},N)}\simeq(\mathbb Z/N\mathbb Z)^{\times}$. It is the coarse moduli space for the functor $$ F_0({\Delta},N)(S)= \left\{ \sopra{\sopra{\hbox{Isomorphism classes of compatibly principally polarized}} {\hbox{abelian surfaces $A=A_{|S}$ with a ring embedding ${\cal R}_{1}\hookrightarrow{\rm End}(A)$}}} {\hbox{and an ${\cal R}_{N}$-equivalence class of level $N$ structures}} \right\} $$ where two level $N$ structures are ${\cal R}_{N}$-equivalent if they coincide on ${\cal R}_{N}\otimes(\mathbb Z/N\mathbb Z)$. \begin{rem} \rm In order to study the reduction of the modular and Shimura curves at primes dividing $N{\Delta}$ one has to extend the moduli problems described above to moduli problems defined over $\mathbb Z$, see \cite{BoCa91, KatMaz85}. The $\mathbb Z$-schemes thus obtained are proper but not smooth. We shall not deal with primes of bad reduction and for the purposes of this paper the above descriptions will suffice. 
\end{rem} \subsection{Subfields and CM points.}\label{se:CMpts} Let $\mathbb Q\subseteq\pr L\subset L$ be a tower of fields with $[L:\pr L]=2$ and assume that $L$ splits $D$, i.~e. $D\otimes_\mathbb Q L\simeq{\rm M}_2(L)$ or, equivalently, that $L$ admits an embedding in $D\otimes_\mathbb Q\pr L$. An embedding $\jmath:L\hookrightarrow D\otimes_\mathbb Q\pr L$ endows $D\otimes_\mathbb Q\pr L$ with a structure of $L$-vector space. Scalar multiplication by ${\lambda}\in L$ is left multiplication by $\jmath({\lambda})$. The opposite algebra $D^{\mathrm{op}}$ acts $L$-linearly on $D$ by right multiplication, providing a direct identification \begin{equation} \label{eq:DasEnd} D^{\mathrm{op}}\otimes L\stackrel{\sim}{\longrightarrow}{\rm End}_{L}(D\otimes\pr L). \end{equation} Let ${\sigma}$ be the non-trivial element in ${\rm Gal}(L/\pr L)$ and $\jmath^{\sigma}({\lambda})=\jmath({\lambda}^{\sigma})$ for all ${\lambda}\in L$. By the Skolem-Noether theorem there exists $u\in(D\otimes_\mathbb Q\pr L)^\times$, well defined up to an $L^\times$-multiple, such that $u\jmath({\lambda})=\jmath^{\sigma}({\lambda})u$ for all ${\lambda}\in L$ and $u^2\in\pr L$. Thus, with a slight abuse of notation, the embedding $\jmath$ defines a splitting \begin{equation} D\otimes\pr L=L\oplus L u \label{eq:Dsplit} \end{equation} which can be more intrinsically seen as the eigenspace decomposition under right multiplication by $\jmath(L^\times)$. Also, there is an isomorphism \begin{equation} D\buildrel\sim\over\longrightarrow D^{\mathrm{op}},\qquad {\lambda}_1+{\lambda}_2u\mapsto{\lambda}_1+{\lambda}_2^{{\sigma}}u. \label{eq:isoDDop} \end{equation} Let $L=\pr L({\alpha})$ with ${\alpha}^2=A\in\pr L$.
The element \begin{equation} \label{eq:idempotent} e_\jmath=\frac{1}{2}\left(1\otimes1+ \frac{1}{A}\jmath({\alpha})\otimes{\alpha}\right)\in D\otimes\pr L \end{equation} is an idempotent which is easily seen to be, under \eqref{eq:DasEnd}, \eqref{eq:Dsplit} and \eqref{eq:isoDDop}, the projection onto $L$ with kernel $Lu$. If $L\subseteq\mathbb C$ the idempotent $e_\jmath$ defines a projector in $D_\infty^z$ for all $z\in\mathfrak H$ by scalar extension. An involution $d\mapsto\invol d$ in $D$ extends by linearity to $D\otimes\pr L$. If $\invol\jmath$ is the embedding $\invol\jmath({\lambda})=\jmath({\lambda})\invol{}$, the explicit description \eqref{eq:idempotent} implies at once that $e_{\invol\jmath}=\invol{e_\jmath}$ and in particular $$ \hbox{$\invol{e_\jmath}=e_\jmath$ if and only if $\invol{\jmath(L)}=\jmath(L)$ pointwise.} $$ When the involution is positive a fixed idempotent can be constructed as follows. As an element of ${\rm End}(D)$ the involution \eqref{eq:involution} has determinant $-1$. Since $\invol1=1$ and ${\rm tr}(\invol d)={\rm tr}(d)$ for all $d\in D$ its $(-1)$-eigenspace is a subspace of trace $0$ elements of dimension either $1$ or $3$. If the dimension is $3$ then the involution is the quaternionic conjugation, contradicting the positivity assumption. Therefore there exists a non-zero element $d$ of trace $0$ fixed by the involution. The subalgebra $F=\mathbb Q(d)\subset D$ is a quadratic field fixed by the involution and the corresponding idempotent $e\in D\otimes_\mathbb Q F$ has the desired property. Note that the positivity of the involution implies further that $F$ is real quadratic. The conductor of an embedding $\jmath\colon L\rightarrow D$ of the quadratic field $L$ relative to the order ${\cal R}_N$ is the integer $c=c_N>0$ such that $\jmath({\cal O}_{L,c})=\jmath(L)\cap{\cal R}_{N}$. Denote $\bar c$ the \textit{minimal conductor}, i.e. the conductor relative to the maximal order ${\cal R}_{1}$.
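Returning to \eqref{eq:idempotent}, the idempotency of $e_\jmath$ is a one-line computation: since $\jmath({\alpha})^2=\jmath(A)=A$, ${\alpha}^2=A$ and the tensor product is $\pr L$-bilinear (so that $A\otimes A=A^2(1\otimes1)$),
$$
e_\jmath^2=\frac{1}{4}\left(1\otimes1+\frac{2}{A}\jmath({\alpha})\otimes{\alpha}
+\frac{1}{A^2}\,A\otimes A\right)
=\frac{1}{2}\left(1\otimes1+\frac{1}{A}\jmath({\alpha})\otimes{\alpha}\right)=e_\jmath.
$$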
It is clear that $c$ is a multiple of $\bar c$; in fact, $c/\bar c$ is a divisor of $N$ because ${\cal O}_{L,c}/{\cal O}_{L,\bar c}$ injects into ${\cal R}_{1}/{\cal R}_{N}\simeq\mathbb Z/N\mathbb Z$. In the following result the embedding is left implicit to simplify the notation. \begin{pro}\label{prop:decomporder} Let $L\subset D$ be a quadratic subfield with associated decomposition $D=L\oplus Lu$. Let ${\Lambda}=L\cap{\cal R}_{N}$ and ${\Lambda}^\prime=Lu\cap{\cal R}_{N}$. Then: \begin{enumerate} \item $D$ is split at the prime $p$ if and only if $(u^2,{\delta}_L)_p=1$; \item if $p$ is unramified in $L$ and $\mcd pc=1$ then ${\cal R}_N\otimes\mathbb Z_p={\Lambda}\otimes\mathbb Z_p\oplus{\Lambda}^\prime\otimes\mathbb Z_p$. Moreover, ${\Lambda}^\prime\otimes\mathbb Z_p={\cal J} u$ for some fractional ideal ${\cal J}\subset L\otimes\mathbb Q_p$ such that ${\rm N}({\cal J})\nu(u)=(p^{\epsilon})$ with ${\epsilon}=1$ if $p|N{\Delta}$ and ${\epsilon}=0$ otherwise. \end{enumerate} \end{pro} \par\noindent{\bf Proof. } Let $L=\mathbb Q(\sqrt d)$. Then $\{1,\sqrt d,u, \sqrt du\}$ is a $\mathbb Q$-basis of $D$ and the local invariants of the norm form are $\det=1$ and ${\epsilon}_p=(-1,-1)_p(u^2,d)_p=(-1,-1)_p(u^2,{\delta}_L)_p$, thus proving the first part. For the second part, choose $u$ so that $u^2\in\mathbb Z$. Then there is an inclusion of orders ${\cal R}^\prime={\cal O}_{L,c}\oplus{\cal O}_{L,c}u\subseteq{\Lambda}\oplus{\Lambda}^\prime\subseteq{\cal R}_N$. The elements $\{1,c{\omega}_d,u,c{\omega}_du\}$ are a $\mathbb Z$-basis of ${\cal R}^\prime$, so that ${\cal R}^\prime$ has reduced discriminant ${\delta}_Lcu^2$. We are thus reduced to checking that when $p|u^2$ and $\mcd p{c{\delta}_L}=1$ then there is no element $x\in{\cal R}_N$ of the form $x=(r+r^\prime u)/p$ with $r$, $r^\prime\in{\cal O}_{L,c}-p{\cal O}_{L,c}$. For such an element $x$ one must have $p|{\rm tr}(r)$ and $p|{\rm N}(r)$, from which one quickly derives a contradiction.
The last claim follows from the very same discriminant computation since ${\cal R}_N$ has reduced discriminant $N{\Delta}$.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par Fix an imaginary quadratic field $K$ that splits $D$. For every embedding $\jmath\colon K\hookrightarrow D$, exactly one of the two conjugate embeddings $\jmath,\jmath^{{\sigma}}$ is normalized in the sense of \cite[(4.~4.~5)]{ShiRed}. The normalized embeddings correspond bijectively to a special subset of points $\tau\in\mathfrak H$. More precisely, there is a bijection $$ \left\{ \sopra{\displaystyle\hbox{normalized embeddings}} {\displaystyle\jmath\colon K\hookrightarrow D} \right\} \longleftrightarrow {\rm CM}_{{\Delta},K}=\left\{ \sopra{\displaystyle\hbox{$\tau\in\mathfrak H$ such that $\Phi_\infty(\jmath(K^\times))=$}} {\displaystyle\{{\gamma}\in\Phi_\infty(D^\times) \cap{\rm GL}_2^+(\mathbb R)~|~{\gamma}\cdot\tau=\tau\}} \right\}. $$ The bijection is ${{\Gamma}_{0}}({\Delta},N)$-equivariant where ${{\Gamma}_{0}}({\Delta},N)$ acts by conjugation on the left set and on ${\rm CM}_{{\Delta},K}$ via its action on $\mathfrak H$. Also, the correspondence $\jmath\leftrightarrow\tau$ is characterized by the fact that the complex structure on $D_\infty$ induced by the embedding $\jmath$ coincides with that of $D_{\infty}^{\tau}$. In the split case ${\rm CM}_{1,K}=K\cap\mathfrak H$. We shall denote by $c_{\tau}=c_{\tau,N}$ the conductor relative to the order ${\cal R}_{N}$ of the embedding associated to the point $\tau\in{\rm CM}_{{\Delta},K}$ and by $\bar c_{\tau}$ its minimal conductor. \begin{pro} Let $\tau$ and $\pr\tau\in{\rm CM}_{{\Delta},K}$ be such that $\pr\tau={\gamma}\cdot\tau$ for some ${\gamma}\in{{\Gamma}_{0}}({\Delta},N)$. Then $c_{\pr\tau,N}=c_{\tau,N}$. \end{pro} \par\noindent{\bf Proof. } Let $\jmath$ and $\pr\jmath$ be the embeddings corresponding to $\tau$ and $\pr\tau$ respectively. 
Then $\pr\jmath={\gamma}\jmath\inv{{\gamma}}$ and so $\pr\jmath({\cal O}_{c_{\pr\tau,N}})=\pr\jmath(K)\cap{\cal R}_{N}= {\gamma}\jmath(K)\inv{{\gamma}}\cap{\cal R}_{N}= {\gamma}(\jmath(K)\cap{\cal R}_{N})\inv{\gamma}={\gamma}\jmath({\cal O}_{c_{\tau,N}})\inv{\gamma}= \pr\jmath({\cal O}_{c_{\tau,N}})$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \begin{dfn} A point $x\in X_{0}({\Delta},N)$ is a CM point of type $K$ and conductor $c=c_x$ if it is represented by a $\tau\in{\rm CM}_{{\Delta},K}$ with $c_{\tau,N}=c$. Denote $$ {\rm CM}({\Delta},N;{\cal O}_{K,c})=\{\mbox{CM points of $X_{0}({\Delta},N)$ of type $K$ and conductor $c$}\}. $$ \end{dfn} The following result is \cite[Lemma 4.17]{Dar04}. \begin{pro}\label{teo:existCM} Let $c>0$ be an integer such that $\mcd c{N{\Delta}}=1$. Then the set ${\rm CM}({\Delta},N;{\cal O}_{K,c})$ is non-empty if and only if \begin{itemize} \item all primes $\ell|{\Delta}$ are inert in $K$, and \item all primes $\ell|N$ are split in $K$. \end{itemize} \end{pro} For $\tau\in{\rm CM}_{1,K}$ the elliptic curve $E_{\tau}$ has complex multiplication by the field $K$. When ${\Delta}>1$ and $\tau\in{\rm CM}_{{\Delta},K}$ the QM abelian surface $A=A_{\tau}={\cal A}(\mathbb C)$ contains the elliptic curve $E=K\otimes\mathbb R/{\cal O}_{K,\bar c}$ and in fact is isogenous to the product $E\times E$. In particular there is an identification ${\rm End}^{o}(A)\simeq D\otimes K$. Consider the left ideal $\mathfrak e={\rm End}(A)\cap{\rm End}^{o}(A)(1-e_{\jmath})$ where $e_{\jmath}$ is the idempotent \eqref{eq:idempotent} attached to the embedding $\jmath:K\hookrightarrow D$ associated to $\tau$ and let ${\cal E}={\cal A}[\mathfrak e]^{o}$ be the connected component of the subgroup scheme of ${\cal A}$ killed by $\mathfrak e$. Note that since $\jmath({\cal O}_{K,\bar c})$ and $e_{\jmath}$ commute, the order ${\cal O}_{K,\bar c}$ acts on ${\cal E}$. 
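The commutation of $\jmath({\cal O}_{K,\bar c})$ with $e_{\jmath}$ used above can be checked directly from the explicit description \eqref{eq:idempotent}. Assuming, as in the associated decomposition $D=K\oplus Ku$, that ${\alpha}^{2}=A$ in both tensor factors, one computes
$$
e_\jmath^2=\frac{1}{4}\left(1\otimes1+\frac{2}{A}\,\jmath({\alpha})\otimes{\alpha}+
\frac{1}{A^2}\,\jmath({\alpha})^2\otimes{\alpha}^2\right)=
\frac{1}{4}\left(2\cdot1\otimes1+\frac{2}{A}\,\jmath({\alpha})\otimes{\alpha}\right)=e_\jmath,
$$
since $\jmath({\alpha})^2\otimes{\alpha}^2=A\otimes A=A^2(1\otimes1)$; and $e_\jmath$ commutes with $\jmath(K)\otimes1$ because $\jmath(K)$ is a commutative subalgebra containing $\jmath({\alpha})$.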
\begin{pro}\label{teo:idgrsch} ${\cal A}={\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1}$ as group schemes. \end{pro} \par\noindent{\bf Proof. } Let $S$ be any scheme of definition for $A$. Over any $S$-scheme $T$ there is an obvious map $({\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1})(T)\rightarrow{\cal A}(T)$ which is surjective because ${\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1}$ contains two independent abelian schemes of dimension 1, namely ${\cal E}$ and any translate of it by an $r\in{\cal R}_{1}-{\cal O}_{K,\bar c}$. To show that the map is injective, it is enough to do so over an algebraically closed field. Over $\mathbb C$ we have ${\cal E}(\mathbb C)=E$ and thus $({\cal E}\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1})(\mathbb C)=E\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1}= (K\otimes\mathbb R\otimes_{{\cal O}_{K,\bar c}}{\cal R}_{1})/{\cal R}_{1}=A$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \begin{dfn}\label{th:testpair} Let $p$ be an odd prime number, $\mcd p{N{\Delta}}=1$. A \emph{$p$-ordinary test triple} for ${{\Gamma}_{\vep}}({\Delta},N)$ is a triple $(\tau, v, e)$, where $\tau\in{\rm CM}_{{\Delta},K}$, $v$ is a finite place dividing $p$ in a finite extension $L\supseteq\mathbb Q$ and $e\in D\otimes F$ is the idempotent associated to a real quadratic subfield $F\subset D$ pointwise fixed by the positive involution, such that \begin{enumerate} \item $FK\subseteq L$; \item the CM curve $E_\tau$ or QM-abelian surface $A_\tau$ has ordinary good reduction modulo $\mathfrak p_v$; \item if $w$ is the restriction of $v$ to $F$ then $e\in{\cal R}_{1}\otimes_{\mathbb Z}{\cal O}_{(w)}$. \end{enumerate} Furthermore, a $p$-ordinary test triple $(\tau, v, e)$ is said to be \emph{split} if $p$ splits in $F$. 
\end{dfn} \noindent Let us observe that: \begin{enumerate} \item the ordinarity hypothesis implies that $p$ splits in $K$; \item the idempotent $e$ plays no role in the split case and can be omitted in that case; \item the explicit description \eqref{eq:idempotent} of $e$ shows that the third condition above is equivalent to $\mcd p{\bar c{\delta}_{F}}=1$ where $\bar c$ is the minimal conductor of $F$; \item for a $p$-ordinary triple $(\tau,v,e)$ for ${{\Gamma}_{1}}({\Delta},N)$ the point $x\in X_1({\Delta},N)$ represented by $\tau$ is a smooth point in ${\cal X}_1({\cal O}_{(v)})$. This is clear for $D$ split and follows for instance from \cite[Theorem~1.1]{Jord86} in the non-split case. \end{enumerate} \begin{pro}\label{teo:anypworks} Let $p$ be an odd prime number, $\mcd p{N{\Delta}}=1$. There exist split $p$-ordinary triples for ${{\Gamma}_{\vep}}({\Delta},N)$. \end{pro} \par\noindent{\bf Proof. } Since any two positive involutions \eqref{eq:involution} are conjugate in $D$, up to a different choice of maximal order we are reduced to the Hashimoto model. Up to replacing $p_{o}$ in \eqref{eq:condonHS} within its congruence class modulo $8N{\Delta} p$, we may also assume that $\vvec{p_{o}}p=1$. Thus the subfield $F=\mathbb Q\oplus\mathbb Q j\subset D_{H}$ is pointwise fixed by the involution, has discriminant prime to $p$ and $p$ splits in it. Finally, the minimal conductor of the embedding $\sqrt{p_o}\mapsto j\in F$ is prime to $p$ since $j\in{\cal R}_{H,N}$.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par The decomposition $D=K\oplus Ku$ associated to a choice of $\tau\in{\rm CM}_{{\Delta},K}$ is also an orthogonal decomposition under the non-degenerate pairing $(x,y)_{D}={\rm tr}(x\bar{y})$. Note that here $u^{2}>0$ since the norm is indefinite. We shall be concerned with the algebraic group of similitudes of $(\cdot,\cdot)_{D}$, i.e. $$ {\rm GO}(D)=\left\{\hbox{$g\in{\rm GL}(D)$ such that $(gx,gy)_{D}=\nu_{0}(g)(x,y)_{D}$ for all $x,y\in D$}\right\}. 
$$ The structure of the group ${\rm GO}(D)$ is well understood, e.g. \cite[\S1.1]{Harris93}, \cite[\S7]{HaKu91}. Let $\mathbf{t}\in{\rm GO}(D)$ be the involution $\mathbf{t}(d)=\bar{d}$. Then ${\rm GO}(D)={\rm GO}^{o}(D)\ltimes\langle\mathbf{t}\rangle$, where ${\rm GO}^{o}(D)$ is the Zariski connected component described by the short exact sequence of algebraic groups \begin{equation} 1\longrightarrow\mathbb G_{m}\longrightarrow D^{\times}\times D^{\times}\stackrel{\varrho}{\longrightarrow} {\rm GO}^{o}(D)\mmap1 \label{eq:sesGOD} \end{equation} where $\mathbb G_{m}$ is embedded diagonally and $\varrho(d_{1},d_{2})(x)=d_{1}xd_{2}^{-1}$. The norm $\nu$ restricts to $N_{K/\mathbb Q}$ and $-u^{2}N_{K/\mathbb Q}$ on $K$ and $Ku$ respectively, and ${\rm GO}^o(K)\simeq{\rm GO}^o(Ku)\simeq R_{K/\mathbb Q}\mathbb G_{m,K}$ where the isomorphism is given by left multiplication. Thus, the subgroup of ${\rm GO}^{o}(D)$ that preserves the splitting $D=K\oplus Ku$ can be identified with the group $$ G(O(K)\times O(Ku))^o= \left\{\hbox{$(k_{1},k_{2})\in(R_{K/\mathbb Q}\mathbb G_{m,K})^2$ such that $N_{K/\mathbb Q}(k_{1}\inv k_{2})=1$}\right\} $$ and there is a commutative diagram \begin{equation} \begin{CD} K^{\times}\times K^{\times} @>{{\alpha}}>> G(O(K)\times O(Ku))^o \\ @V{\jmath\times\jmath}VV @VVV \\ D^{\times}\times D^{\times} @>{\varrho}>> {\rm GO}^{o}(D)\\ \end{CD} \label{eq:similitudes} \end{equation} where ${\alpha}(k_{1},k_{2})=(k_{1}k_{2}^{-1},k_{1}\bar{k}_{2}^{-1})$. We will normalize the complex coordinates in $D_{\infty}^{\tau}=(K\oplus Ku)\otimes\mathbb R$ as follows. The standard normalized embedding $\jmath^{\rm st}:\mathbb Q(\sqrt{-1})\hookrightarrow{\rm M}_2(\mathbb Q)$ with fixed point $i\in\mathfrak H$ defines a splitting $M_{2}(\mathbb R)^{i}=\mathbb C\oplus\mathbb C^\perp$ with $\mathbb C=\mathbb R\smallmat 1{}{}1\oplus\mathbb R\smallmat{}{-1}1{}$ and $\mathbb C^\perp=\mathbb C\smallmat {}11{}=\mathbb R\smallmat {}11{}\oplus\mathbb R\smallmat{-1}{}{}1$. 
Define standard complex coordinates $z_1^{\rm st}$, $z_2^{\rm st}$ in $D_{\infty}$ by the identity \begin{equation} \Phi_\infty(d)=z_1^{\rm st}(d)+z_2^{\rm st}(d)\smallmat {}11{}. \label{eq:standardcoord} \end{equation} The $\mathbb R$-linear extensions of the embeddings $\jmath^{\rm st}$ and $\Phi_\infty\circ\jmath$ are conjugate in $M_{2}(\mathbb R)$, namely $\Phi_{\infty}\circ\jmath=d_\infty\jmath^{\rm st}d_\infty^{-1}$ where $d_\infty=\smallmat{y^{1/2}}{sy^{1/2}}{}{y^{-1/2}}$ and $\tau=s+iy$. So we define normalized coordinates $z_{1}$ and $z_{2}$ in $D_\infty^\tau$ by the identity $$ z_{i}(d)=z_{i}^{\rm st} (\Phi_\infty^{-1}(d_\infty^{-1})d\Phi_\infty^{-1}(d_\infty)), \qquad\hbox{for all $d\in D_{\infty}$,\quad $i=1$,$2$}. $$ \section{Some differential operators} \subsection{Preliminaries.}\label{se:KSprel} We briefly review some basic facts about the Kodaira-Spencer map and the Gau{\ss}-Manin connection. For more details see \cite{Katz70, KatOda68}. The \emph{Kodaira-Spencer class} of a composition of smooth morphisms of schemes $X\stackrel{\pi}{\rightarrow}S\rightarrow T$ is the element in $H^1(X,\dual{({\Omega}^1_{X/S})}\otimes \pi^*{\Omega}^1_{S/T})$ arising from the canonical exact sequence \begin{equation} 0\longrightarrow\pi^*{\Omega}^1_{S/T}\longrightarrow{\Omega}^1_{X/T}\longrightarrow{\Omega}^1_{X/S}\longrightarrow 0 \label{eq:canexseq} \end{equation} by local freeness of the sheaves ${\Omega}^1$. The \emph{Kodaira-Spencer map} is the boundary map $$ {\rm KS}:\pi_*{\Omega}^1_{X/S}\longrightarrow R^1\pi_*(\pi^*{\Omega}^1_{S/T})\simeq {\Omega}^1_{S/T}\otimes R^1\pi_*{\cal O}_X $$ in the long exact sequence of derived functors obtained from \eqref{eq:canexseq} by pushing down. 
Under the natural maps $H^1(X,\dual{({\Omega}^1_{X/S})}\otimes \pi^*{\Omega}^1_{S/T})\rightarrow H^0(S,R^1\pi_*(\dual{({\Omega}^1_{X/S})}\otimes \pi^*{\Omega}^1_{S/T}))\rightarrow H^0(S,{\Omega}^1_{S/T}\otimes R^1\pi_*{\cal O}_X\otimes\dual{(\pi_*{\Omega}^1_{X/S})})$ the Kodaira-Spencer class maps to the Kodaira-Spencer map. The $q$-\emph{th relative de Rham cohomology sheaf} of $X/S$ is defined as $\derham{q}(X/S)=\mathbb R^q\pi_*({\Omega}^\bullet_{X/S})$ (hypercohomology). Following \cite{KatOda68}, the \emph{Gau{\ss}-Manin connection} $$ \nabla\colon\derham{q}(X/S)\longrightarrow{\Omega}^1_{S/T}\otimes_{{\cal O}_S}\derham{q}(X/S) $$ can be seen as the differential $d_1^{0,q}\colon E_1^{0,q}\rightarrow E_1^{1,q}$ in the spectral sequence defined by the finite filtration $F^i{\Omega}^\bullet_{X/T}=\mathrm{Im}({\Omega}^{\bullet-i}_{X/T} \otimes_{{\cal O}_X}\pi^*{\Omega}^i_{S/T}\longrightarrow{\Omega}^\bullet_{X/T})$, with associated graded objects $\mathrm{gr}^i({\Omega}^\bullet_{X/T})={\Omega}^{\bullet-i}_{X/T} \otimes_{{\cal O}_X}\pi^*{\Omega}^i_{S/T}$. If $X/S={\cal A}/S$ is an abelian scheme with $0$-section $e_{0}$ and dual ${\cal A}^t/S$, denote by $\underline{\omega}=\underline{\omega}_{{\cal A}/S}=\pi_*{\Omega}^1_{{\cal A}/S}={e_0}^*{\Omega}^1_{{\cal A}/S}$ the sheaf on $S$ of translation invariant relative $1$-forms on ${\cal A}$. The first de Rham sheaf $\derham{1}=\derham{1}({\cal A}/S)$ is the central term in a short exact sequence \begin{equation} 0\longrightarrow\underline{\omega}\longrightarrow\derham{1}\longrightarrow R^1\pi_*{\cal O}_{{\cal A}}\mmap0 \label{eq:Hodgeseq} \end{equation} (called the \emph{Hodge sequence}). 
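To fix ideas on ranks: when ${\cal A}/S$ has relative dimension $g$, the three terms of \eqref{eq:Hodgeseq} are locally free of ranks
$$
\mathrm{rk}\,\underline{\omega}=g,\qquad \mathrm{rk}\,\derham{1}=2g,\qquad \mathrm{rk}\,R^1\pi_*{\cal O}_{{\cal A}}=g,
$$
so in the applications below $g=1$ for the universal elliptic curve and $g=2$ for the universal QM abelian surface.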
By Serre duality \begin{equation} {\rm Hom}_{{\cal O}_S}(\pi_*{\Omega}^1_{{\cal A}/S}, R^1\pi_*(\pi^*{\Omega}^1_{S/T})) \simeq {\rm Hom}_{{\cal O}_S}(\underline{\omega}_{{\cal A}/S}\otimes\underline{\omega}_{{\cal A}^t/S},{\Omega}^1_{S/T}) \label{eq:KSmap} \end{equation} and the Kodaira-Spencer map can be seen as an element of the latter group. It can be reconstructed from the Gau{\ss}-Manin connection as the composition \begin{equation} \underline{\omega}_{{\cal A}/S}\hookrightarrow\derham{1}\stackrel{\nabla}{\longrightarrow}\derham{1}\otimes{\Omega}^1_{S/T} \longrightarrow\dual{\underline{\omega}_{{\cal A}^t/S}}\otimes{\Omega}^1_{S/T}. \label{eq:KSfromGM} \end{equation} In fact, when ${\cal A}/S\simeq{\cal A}^t/S$ is principally polarized, the Kodaira-Spencer map becomes a \emph{symmetric} map ${\rm KS}\colon\mathrm{Sym}^2(\underline{\omega})\rightarrow{\Omega}^1_{S/T}$, \cite[Section III.~9]{FalCha90}. Let $(\pr{S},i_0)$ be a smooth closed reduced subscheme of $S$ and consider the commutative pull-back diagram of $T$-schemes $$ \begin{CD} \pr{X} @>i>> X \\ @VV{\pr{\pi}}V @VV{\pi}V \\ \pr{S} @>{i_0}>> S \\ \end{CD} $$ Since also $\pr{\pi}$ is smooth, we can consider the Kodaira-Spencer class, or map, $\pr{{\rm KS}}$ attached to the morphisms $\pr{X}\stackrel{\pr{\pi}}{\rightarrow}\pr{S}\rightarrow T$. When $X={\cal A}$ is a principally polarized abelian scheme, $\pr{{\rm KS}}\in{\rm Hom}_{{\cal O}_{\pr{S}}}({\pr{\underline{\omega}}}^{\otimes 2},{\Omega}^1_{\pr{S}/T})$ as in \eqref{eq:KSmap}, where ${\pr{\underline{\omega}}}=\underline{\omega}_{\pr{{\cal A}}/\pr{S}}={\pr{\pi}}_*{\Omega}^1_{\pr{{\cal A}}/\pr{S}}= {e^\prime_0}^*{\Omega}^1_{\pr{{\cal A}}/\pr{S}}$. 
Since $i^*\pi^*{\Omega}^1_{S/T}={\pi^\prime}^*i_0^*{\Omega}^1_{S/T}$ and $i^*{\Omega}^1_{X/S}\simeq{\Omega}^1_{\pr{X}/\pr{S}}$ canonically, applying $i^{*}$ to \eqref{eq:canexseq} yields an exact sequence $$ 0\longrightarrow{\pr{\pi}}^*i_0^*{\Omega}^1_{S/T}\longrightarrow i^*{\Omega}^1_{X/T}\longrightarrow{\Omega}^1_{\pr{X}/\pr{S}}\longrightarrow 0, $$ hence an element ${\rm KS}^*\in\mathrm{Ext}^1_{{\cal O}_{\pr{X}}}({\Omega}^1_{\pr{X}/\pr{S}}, {\pr{\pi}}^*i_0^*{\Omega}^1_{S/T})$. The composition $\pr{S}\stackrel{i_0}{\rightarrow}S\rightarrow T$ defines a canonical surjective map ${\pr{\pi}}^*i_0^*{\Omega}^1_{S/T}\rightarrow{\pr{\pi}}^*{\Omega}^1_{\pr{S}/T}$. In the same way, we get a surjective map $i^*{\Omega}^1_{X/T}\rightarrow{\Omega}^1_{\pr{X}/T}$. These data define a commutative diagram of ${\cal O}_{\pr{X}}$-modules $$ \begin{CD} 0 @>>> {\pr{\pi}}^*i_0^*{\Omega}^1_{S/T} @>>> i^*{\Omega}^1_{X/T} @>>> {\Omega}^1_{\pr{X}/\pr{S}} @>>> 0\\ @. @VVV @VVV @| \\ 0 @>>> {\pr{\pi}}^*{\Omega}^1_{\pr{S}/T} @>>> {\Omega}^1_{\pr{X}/T} @>>> {\Omega}^1_{\pr{X}/\pr{S}} @>>> 0\\ \end{CD} $$ Standard diagram-chasing shows that ${\rm KS}^*\mapsto\pr{{\rm KS}}$ under the canonical map of $\mathrm{Ext}^1$ groups. The following result follows easily from the definitions. \begin{pro}\label{th:KSforpullb} Let ${\cal A}/S$ be an abelian scheme with $S$ smooth over $T$, $(\pr{S},i_0)$ a closed $T$-smooth subscheme of $S$ and $\pr{{\cal A}}={\cal A}\times_S\pr{S}$. Let ${\rm KS}\colon\underline{\omega}^{\otimes2}\rightarrow{\Omega}^1_{S/T}$ and ${\rm KS}^\prime\colon{\pr{\underline{\omega}}}^{\otimes2}\rightarrow{\Omega}^1_{\pr{S}/T}$ be the corresponding Kodaira-Spencer maps. Then $\pr{{\rm KS}}=\iota_0\circ i_0^*{\rm KS}$, where $\iota_0\colon i_0^*{\Omega}^1_{S/T}\rightarrow{\Omega}^1_{\pr{S}/T}$ is the canonical pull-back map. 
\end{pro} Let again $X={\cal A}$ be an abelian scheme, and let $\phi\colon{\cal A}\rightarrow{\cal A}$ be an $S$-isogeny (i.~e., a surjective endomorphism such that $\pi\phi=\pi$). The pull-back $\phi^*{\Omega}^\bullet_{{\cal A}/T}\rightarrow{\Omega}^\bullet_{{\cal A}/T}$ respects filtrations. Thus we have maps $\phi^*(F^i/F^j)\rightarrow F^i/F^j$ for all $i\leq j$ because the sheaves $F^i$ are locally free. In particular, there is a map of short exact sequences $$ \begin{CD} 0 @>>> \phi^*\mathrm{gr}^{p+1} @>>> \phi^*(F^p/F^{p+2}) @>>> \phi^*\mathrm{gr}^p @>>> 0 \\ @. @VVV @VVV @VVV \\ 0 @>>> \mathrm{gr}^{p+1} @>>> F^p/F^{p+2} @>>> \mathrm{gr}^p @>>> 0 \\ \end{CD} $$ where the bottom row is the tautological exact sequence of graded objects and the top row is obtained applying $\phi^*$ to it (again, it remains exact because the sheaves are locally free). Since $\phi$ is surjective, $\pi_*\phi^*=\pi_*$ as functors and the previous diagram yields a map of derived functors long exact sequences \begin{equation} \begin{CD} \ldots @>>> R^{p+q}\pi_*\mathrm{gr}^p @>>> R^{p+q+1}\pi_*\mathrm{gr}^{p+1} @>>> \ldots \\ @. @VV{[\phi]_{p,q}}V @VV{[\phi]_{p+1,q}}V \\ \ldots @>>> R^{p+q}\pi_*\mathrm{gr}^p @>>> R^{p+q+1}\pi_*\mathrm{gr}^{p+1} @>>> \ldots \\ \end{CD} \label{cd:four} \end{equation} \begin{pro}\label{th:GMcomm} Let ${\cal A}/S$ be an abelian scheme with $S$ smooth over $T$. The algebra ${\rm End}_S({\cal A})$ acts linearly on the sheaves $\derham{q}({\cal A}/S)$. If $\phi\in{\rm End}_S({\cal A})$ acts as $[\phi]$, then $$ \nabla\circ[\phi]=(1\otimes[\phi])\nabla. $$ \end{pro} \par\noindent{\bf Proof. } Let $\phi\in{\rm End}_S({\cal A})$ be an isogeny. The endomorphism $[\phi]$ of $\derham{q}({\cal A}/S)$ attached to $\phi$ is the vertical map $[\phi]_{0,q}$ in diagram \eqref{cd:four} at $R^q\pi_*\mathrm{gr}^0$. 
Under the identification $R^{q+1}\pi_*\mathrm{gr}^1=R^{q+1}\pi_*(\pi^*{\Omega}^1_{S/T}\otimes_{{\cal O}_{{\cal A}}}{\Omega}^{\bullet-1}_{{\cal A}/T})= {\Omega}^1_{S/T}\otimes_{{\cal O}_S}R^{q+1}\pi_*({\Omega}^{\bullet-1}_{{\cal A}/T})= {\Omega}^1_{S/T}\otimes_{{\cal O}_S}\derham{q}({\cal A}/S)$ the Gau{\ss}-Manin connection is the connecting homomorphism for the tautological exact sequence of graded objects, i.e. either horizontal connecting homomorphism in \eqref{cd:four} at $p=0$. Moreover $[\phi]_{1,q}=1\otimes[\phi]_{0,q}$ since $\phi$ acts trivially on $S$, and the formula follows for isogenies. Let $s\in S$ be a geometric point and $A_s$ the fiber at $s$. Without loss of generality we may assume that $S$ is connected and Grothendieck's rigidity lemma \cite{Mum65} implies that the canonical map ${\rm End}_S({\cal A})\rightarrow{\rm End}(A_s)$ is injective. It follows that there exist division algebras $D_1,\ldots,D_t$ such that ${\rm End}_S({\cal A})$ is identified with a subring of ${\rm M}_{n_1}(D_1)\times\cdots\times{\rm M}_{n_t}(D_t)$. The latter algebra is spanned over $\mathbb Q$ by the invertible elements, so the result follows by linearity. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \subsection{Computations over $\mathbb C$}\label{se:KSoverC} In order to compute explicitly the Kodaira-Spencer map for a complex family, i.e. when $T=\mathrm{Spec}(\mathbb C)$, it is more convenient to appeal to GAGA principles, work in the analytic category and follow \cite{Katz76, Harris81}. 
If ${\cal A}/S$ is a principally polarized family of abelian varieties over the smooth complex variety $S$ and $U\subset S$ is an open set, the choice of a section ${\sigma}\in H^0(U,\dual{({\Omega}^1_S)})$ defines a map $\varrho_{\sigma}\colon H^0(U,\underline{\omega})\rightarrow H^0(U,\dual\underline{\omega})$ by the composition $$ H^0(U,\underline{\omega})\hookrightarrow H^0(U,\derham{1})\stackrel{\nabla}{\rightarrow} H^0(U,\derham{1}\otimes{\Omega}^1_{S/T})\stackrel{1\otimes{\sigma}}{\rightarrow} H^0(U,\derham{1})\stackrel{\varrho}{\rightarrow} H^0(U,\dual\underline{\omega}), $$ where $\varrho$ is induced by the polarization pairing $\scaldR\cdot\cdot\colon\derham{1}\otimes\derham{1}\rightarrow{\cal O}_S$. The association ${\sigma}\mapsto\varrho_{\sigma}$ defines a map $\dual{({\Omega}^1_{S/T})}\rightarrow{\rm Hom}(\underline{\omega},\dual\underline{\omega})$ whose dual is the Kodaira-Spencer map ${\rm KS}$. By \'{e}taleness, the actual computation of the map ${\rm KS}$ can be obtained applying the above procedure to the pullback of the family ${\cal A}/S$ on the universal cover of $S$. For instance, for the universal family \eqref{eq:univEC} \begin{equation} {\rm KS}(d{\zeta}^{\otimes 2})=\frac{1}{2\pi i}dz, \label{eq:Kodforell} \end{equation} where ${\zeta}$ is the standard complex coordinate in the elliptic curve $E_z=\mathbb C/(\mathbb Z\oplus\mathbb Z z)$, $z\in\mathfrak H$. We follow this approach to compute the Kodaira-Spencer map for the universal complex family of QM abelian surfaces over $X_{\Gamma}$ using the Shimura family ${\cal A}^{\rm Sh}/\mathfrak H$ of \eqref{eq:QMtori} in terms of the arithmetic of the maximal order ${\cal R}_{1}$. Let $\underline{r}=\{r_1,\dots,r_4\}$ be a symplectic basis of ${\cal R}_{1}$. By linear extension, the real dual basis $\{\dual{r_1},\dots,\dual{r_4}\}$ of $\dual{D_\infty}$ is a basis of ${\rm Hom}({\cal R}_{1}\otimes_\mathbb Z\mathbb C,\mathbb C)\simeq\Derham{1}(D_\infty^z/{\cal R}_{1})$. 
Thus, the elements $\dual{r_1},\dots,\dual{r_4}$ define global $C^\infty$-sections of $\derham{1}({\cal A}^{\rm Sh}/\mathfrak H)$ with constant periods, hence $\nabla$-horizontal. If $H$ denotes the $\mathbb C$-span of these sections, there is an isomorphism $\derham{1}({\cal A}^{\rm Sh}/\mathfrak H)=H\otimes_\mathbb C{\cal O}_{\mathfrak H}$. In terms of this trivialization, $\nabla=1\otimes d$, where $d$ is the exterior differentiation. Also, \begin{equation} \scaldR{\dual{r_i}}{\dual{r_j}}=\frac{1}{2\pi i}B_t(r_j,r_i), \qquad i,j=1,\ldots,4, \label{eq:derhampair} \end{equation} where the $2\pi i$ factor accounts for the difference of Tate twists between singular and algebraic de Rham cohomology, e.~g. \cite[\S1]{Del82}. Let ${\zeta}_1$ and ${\zeta}_2$ denote the standard coordinates in $\mathbb C^2$. \begin{pro}\label{th:KSoverC} $\left({\rm KS}(d{\zeta}_i\otimes d{\zeta}_j)\right)_{i,j=1,2}= \frac1{2\pi i}\smallmat100{{\Delta}}\,dz.$ \end{pro} \par\noindent{\bf Proof. } Write $$ \left(\begin{array}{c} d{\zeta}_1 \\ d{\zeta}_2 \end{array}\right)= \Pi_{\underline{r}}(z) \left( \begin{array}{c} \dual{r_1} \\ \vdots \\ \dual{r_4} \end{array} \right) $$ where $\Pi_{\underline{r}}(z)$ is the period matrix computed in terms of the basis $\underline{r}$. Using \eqref{eq:derhampair} and the definitions we first obtain $$ \varrho_{\sigma}\left(\begin{array}{c} d{\zeta}_1 \\ d{\zeta}_2 \end{array}\right)= \frac1{2\pi i}\frac{d\Pi_{\underline{r}}(z)}{dz} \left(\begin{array}{cc} 0 & I_2 \\ -I_2 & 0 \end{array}\right) \left(\begin{array}{c} r_1 \\ \vdots \\ r_4 \end{array}\right){\sigma}(dz), $$ and finally $$ \left({\rm KS}(d{\zeta}_i\otimes d{\zeta}_j)\right)_{i,j=1,2}= \frac{1}{2\pi i}\frac{d\Pi_{\underline{r}}(z)}{dz} \left(\begin{array}{cc} 0 & I_2 \\ -I_2 & 0 \end{array}\right) {}^t\Pi_{\underline{r}}(z)\,dz. $$ To obtain the final formula, we make use of the Hashimoto model with $N=1$. 
In terms of the symplectic basis $\underline{\eta}=\{\eta_1,\dots,\eta_4\}$ of theorem \ref{th:Hashimoto} \begin{equation} \Pi_{\underline{\eta}}(z)=\left( \begin{array}{cccc} \frac{\varpi^-}{2\sqrt{p_o}}({\alpha}^+a{\Delta} z+1) & -\frac{1}{\sqrt{p_o}}({\alpha}^{+}a{\Delta} z+1) & z & \frac12{\alpha}^+z \\ \\ \frac{{\alpha}^+}{2\sqrt{p_o}}a{\Delta} (z-{\alpha}^-) & \frac{1}{\sqrt{p_o}}{\Delta}(-z+{\alpha}^-a) & 1 & \frac12{\alpha}^- \\ \end{array}\right) \label{eq:permat} \end{equation} where ${\alpha}^{\pm}=1\pm\sqrt{p_o}$. Plugging these values into the previous formula yields the result. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \subsection{Maass operators.}\label{ss:maass} When $D$ is split, the universal family \eqref{eq:univEC} defines the line bundle $\underline{\omega}=\underline{\omega}_{{\cal E}_N/{\cal Y}_1(N)}$ on the Zariski open set ${\cal Y}_1(N)$, the complement of the cusp divisor $C$ in ${\cal X}_1(N)$. The Kodaira-Spencer map ${\rm KS}\colon\underline{\omega}^{\otimes2}\buildrel\sim\over\rightarrow{\Omega}^1_{{\cal Y}_1(N)}$ is an isomorphism. \begin{thm}\label{th:KSextended} The line bundle $\underline{\omega}$ extends uniquely to a line bundle, still denoted $\underline{\omega}$, on the complete curve ${\cal X}_1(N)$ and the Kodaira-Spencer isomorphism extends to an isomorphism $$ {\rm KS}\colon\underline{\omega}^{\otimes2}\buildrel\sim\over\longrightarrow{\Omega}^1_{{\cal X}_1(N)}(\log C). $$ \end{thm} \par\noindent{\bf Proof. } See \cite{Katz73} and also \cite[section~10.~13]{KatMaz85} where the extension property is discussed for a general representable moduli problem. 
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par If $D$ is not split, the universal family \eqref{eq:univQMAV} of QM-abelian surfaces defines the sheaf $\underline{\omega}=\underline{\omega}_{{\cal A}_{{\Delta},N}/{\cal X}_1({\Delta},N)}$ and the Kodaira-Spencer map is a surjective map ${\rm KS}\colon\mathrm{Sym}^2\underline{\omega}\rightarrow{\Omega}^1_{{\cal X}_1({\Delta},N)}$. Let $p$ be a prime such that $(p,N{\Delta})=1$ and let $v$ be a place of a number field $L$ dividing $p$. The algebra ${\cal R}_{1}\otimes_\mathbb Z{\cal O}_v$ acts contravariantly and ${\cal O}_{v}$-linearly on $\underline{\omega}_{v}=\underline{\omega}\otimes{\cal O}_v$ by pull-back. For any geometric point $s\in{\cal X}_1({\Delta},N)\otimes{\cal O}_v$ and any non-trivial idempotent $e\in{\cal R}_{1}\otimes_\mathbb Z{\cal O}_v$ there is a non-trivial decomposition $H^0(A_s,{\Omega}^1_{A_s/k(s)})= eH^0(A_s,{\Omega}^1_{A_s/k(s)})\oplus(1-e)H^0(A_s,{\Omega}^1_{A_s/k(s)})$. Therefore the subsheaf $e\underline{\omega}_{v}$ is a line subbundle. Let $e\underline{\omega}_v\circ\invol{e}\underline{\omega}_v\subseteq\mathrm{Sym}^2\underline{\omega}_v$ be the line bundle image of $e\underline{\omega}_v\otimes\invol{e}\underline{\omega}_v$ under the natural map $\underline{\omega}^{\otimes 2}_v\rightarrow\mathrm{Sym}^2{\underline{\omega}_v}$. \begin{thm}\label{th:Ltbundles} If $p$, $v$ and $e$ are as above, then the Kodaira-Spencer map defines an isomorphism $$ {\rm KS}\colon e\underline{\omega}_v\otimes\invol{e}\underline{\omega}_v\longrightarrow {\Omega}^1_{{\cal X}_1({\Delta},N)/{\cal O}_v} $$ of line bundles on $X_{1}({\Delta},N)$ defined over ${\cal O}_{v}$. \end{thm} \par\noindent{\bf Proof. 
} We claim that the action of $r\otimes{\lambda}\in{\cal R}_{1}\otimes{\cal O}_v$ on the universal family \eqref{eq:univQMAV} base-changed to ${\cal O}_v$ gives rise to a commutative diagram \begin{equation} \begin{CD} \underline{\omega} _v @>>> {\Omega}^1_{{\cal X}_1({\Delta},N)/{\cal O}_v}\otimes\dual{\underline{\omega}_v} \\ @VV{r\otimes{\lambda}}V @VV{1\otimes\invol r\otimes{\lambda}}V \\ \underline{\omega} _v @>>> {\Omega}^1_{{\cal X}_1({\Delta},N)/{\cal O}_v}\otimes\dual{\underline{\omega}_v} \end{CD} \label{cd:five} \end{equation} Indeed, under the Serre duality identification $R^1\pi_*{\cal O}_{{\cal A}}\simeq\dual{\underline{\omega}_{{\cal A}/S}}$ for a principally polarized abelian scheme ${\cal A}/S$ the actions of ${\rm End}_S({\cal A})$ correspond up to Rosati involution. The commutativity of the diagram \eqref{cd:five} follows from proposition \ref{th:GMcomm} and \eqref{eq:KSfromGM}. For an idempotent $e\in{\cal R}_{1}\otimes_\mathbb Z{\cal O}_v$, diagram \eqref{cd:five} defines a map $e\underline{\omega}_v\rightarrow{\Omega}^1\otimes\invol{e}(\dual{\underline{\omega}_v})$ which can be shown to be an isomorphism by the same deformation theory argument as in \cite[Lemma 6]{DiaTay94}. This is enough to conclude, because the sheaves $\invol{e}(\dual{\underline{\omega}_v})$ and $\invol{e}\underline{\omega}_v$ are dual to each other.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \begin{rems}\label{re:LoverC} \rm \begin{enumerate} \item We proved theorem \ref{th:Ltbundles} for $p$-adically complete rings of scalars. In fact the projectors onto the quadratic subfields of $D$ are defined over the $p$-adic localizations of their rings of integers for almost all $p$. Thus, in these cases, $e\underline{\omega}$ and the Kodaira-Spencer isomorphism are defined over the subrings ${\cal O}_{(v)}\subset\mathbb C$. 
\item If $e$ and $\pr e=ded^{-1}$ are conjugate in ${\cal R}_1\otimes B$ for some ring $B$, then the action of $d$ on $\underline{\omega}\otimes B$ defines an isomorphism of $e\underline{\omega}$ with $\pr e\underline{\omega}$ over $B$. \item In the complex case the isomorphism of theorem \ref{th:Ltbundles} can be checked by a straightforward application of the computation in section \ref{se:KSoverC}. For instance, in the Hashi\-mo\-to model for $N=1$ of theorem \ref{th:Hashimoto} let $d=ai+bj+cij\in D_{H}$ with ${\delta}=d^{2}=-a^{2}{\Delta}+b^{2}p_{o}+c^{2}{\Delta} p_{o}\in\mathbb Q$ and let $e\in D\otimes_{\mathbb Q}\mathbb Q(\sqrt{{\delta}})$ be the idempotent giving the projection onto $\mathbb Q(d)$. Then $$ e=\frac{1}{2\sqrt{{\delta}}} \left( \begin{array}{cc} \sqrt{{\delta}}+b\sqrt{p_{o}} & -a+c\sqrt{p_{o}} \\ (a+c\sqrt{p_{o}}){\Delta} & \sqrt{{\delta}}-b\sqrt{p_{o}} \end{array} \right) $$ and $$ \invol e=\frac{1}{2\sqrt{{\delta}}} \left( \begin{array}{cc} \sqrt{{\delta}}+b\sqrt{p_{o}} & a+c\sqrt{p_{o}} \\ (-a+c\sqrt{p_{o}}){\Delta} & \sqrt{{\delta}}-b\sqrt{p_{o}} \end{array} \right). $$ Therefore $e\underline{\omega}\circ\invol e\underline{\omega}$ is generated over $\mathfrak H$ by the global section $$ (\sqrt{{\delta}}+b\sqrt{p_{o}})^{2}d{\zeta}_{1}\circ d{\zeta}_{1}+ (c^{2}p_{o}-a^{2})d{\zeta}_{2}\circ d{\zeta}_{2}+ 2(\sqrt{{\delta}}+b\sqrt{p_{o}})c\sqrt{p_{o}}d{\zeta}_{1}\circ d{\zeta}_{2} $$ whose image under the Kodaira-Spencer map ${\rm KS}$ is, by proposition \ref{th:KSoverC}, \begin{equation} \frac{1}{\pi i}(\sqrt{{\delta}}+b\sqrt{p_{o}})dz \in{\Gamma}(\mathfrak H,{\Omega}^{1}_{\mathfrak H}). \label{eq:imkasec} \end{equation} Since ${\delta}\neq p_{o}b^2$ (else $p_o=(a/c)^2\in\mathbb Q$ which is impossible) the section \eqref{eq:imkasec} does not vanish and the Kodaira-Spencer map is an isomorphism. 
\end{enumerate} \end{rems} \begin{notat} \rm We will denote by ${\cal L}$ either the line bundle $\underline{\omega}_v$ on ${\cal Y}_{1}(N)$ or the line bundle $e\underline{\omega}_v$ on ${\cal X}_{1}({\Delta},N)$ for some choice of idempotent $e$ satisfying the hypotheses of theorem \ref{th:Ltbundles} and such that $\invol e=e$. In either case the Kodaira-Spencer map gives an isomorphism $$ {\rm KS}\colon{\cal L}^{\otimes 2}\stackrel{\sim}{\longrightarrow}{\Omega}^{1}. $$ With an abuse of notation we will also denote by ${\cal L}$ the pullback of the complexified bundle to $\mathfrak H$ under the natural quotient maps. \end{notat} If ${\gamma}\in{{\Gamma}_{1}}({\Delta},N)$ the identities $\mathbb Z^{2}\vvec{{\gamma}\cdot z}{1}=\inv{j({\gamma},z)}\mathbb Z^{2}\vvec{z}{1}$ and $\Phi_\infty({\cal R}_{1})\vvec{{\gamma}\cdot z}{1}= \inv{j({\gamma},z)}\Phi_\infty({\cal R}_{1})\vvec{z}{1}$ as subsets of $\mathbb C$ (in the split case) and of $\mathbb C^{2}$ (in the non-split case) respectively, show that the natural action of ${{\Gamma}_{1}}({\Delta},N)$ on $\underline{\omega}$ over $\mathfrak H$ is scalar multiplication by the automorphy factor. Thus the ${{\Gamma}_{1}}({\Delta},N)$-action extends to an ${\rm SL}_2(\mathbb R)$-homogeneous structure on $\underline{\omega}$, and on $\mathrm{Sym}^2(\underline{\omega})$ as well. Also, in the non-split case the fiber identifications induced by the action are $D\otimes\mathbb C$-contravariant and since the line bundle ${\cal L}$ is defined using the $D$ action on $\underline{\omega}$, it is a homogeneous line subbundle of $\mathrm{Sym}^2(\underline{\omega})$. Let $n\in\mathbb Z$ and let $V_n$ be the $1$-dimensional representation of $\mathbb C^\times$ given by the character $\chi_n(z)=z^n$. Let ${\cal V}_n=V_n\times\mathfrak H$ be the homogeneous line bundle on $\mathfrak H$ with action $g\cdot(v,z)=(\chi_n(j(g,z))v,g\cdot z)$. 
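That the formula $g\cdot(v,z)=(\chi_n(j(g,z))v,g\cdot z)$ defines a left action is precisely the cocycle property of the automorphy factor: for $g$, $h\in{\rm SL}_2(\mathbb R)$,
$$
g\cdot\bigl(h\cdot(v,z)\bigr)=\bigl(\chi_n(j(g,h\cdot z))\chi_n(j(h,z))v,\,gh\cdot z\bigr)=
\bigl(\chi_n(j(gh,z))v,\,gh\cdot z\bigr)=(gh)\cdot(v,z),
$$
since $j(gh,z)=j(g,h\cdot z)j(h,z)$ and $\chi_n$ is multiplicative.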
Since $-1\notin{{\Gamma}_{1}}({\Delta},N)$ and ${{\Gamma}_{1}}({\Delta},N)$ has no elliptic elements, the quotient ${{\Gamma}_{1}}({\Delta},N)\backslash{\cal V}_n$ is a line bundle on $X_{1}({\Delta},N)$ which we shall denote ${\cal V}_n$ again. Pick $v_n\in V_n$, $v_n\neq0$, and let $\tilde{v}_n=(v_n,z)$ be the corresponding global constant section of ${\cal V}_n$ over $\mathfrak H$. Also, let $s(z)$ be the global section of ${\cal L}$ over $\mathfrak H$, defined up to a sign, normalized so that \begin{equation} {\rm KS}(s(z)^{\otimes 2})=2\pi i\,dz. \label{eq:kanormsec} \end{equation} Then ${\cal L}^{\otimes k}={\cal O}_{\mathfrak H}s(z)^{\otimes k}$ for all $k\geq1$ and there are identifications of homogeneous complex line bundles over both $\mathfrak H$ and $X_1({\Delta},N)$ \begin{equation} {\cal V}_2\buildrel\sim\over\longrightarrow{\Omega}^1, \ \tilde{v}_2\mapsto 2\pi i\,dz\qquad \hbox{and}\qquad {\cal V}_{k}\buildrel\sim\over\longrightarrow{{\cal L}}^{\otimes k}, \ \tilde{v}_{k}\mapsto{s(z)}^{\otimes k}. \label{eq:Eknsplit} \end{equation} These identifications preserve holomorphy and are compatible with tensor products and the Kodaira-Spencer isomorphisms. Note that $s(z)=\pm 2\pi i\,d{\zeta}$ in the split case by \eqref{eq:Kodforell}, and see remark \ref{re:LoverC}.3 for the non-split case. Following \cite{Katz76}, we shall define differential operators associated to splittings of the Hodge sequence \eqref{eq:Hodgeseq} where ${\cal A}/S$ is either the universal elliptic curve ${\cal E}_N/{\cal Y}_1(N)$ or the universal QM-abelian surface ${\cal A}_{{\Delta},N}/{\cal X}_1({\Delta},N)$. 
Over the associated differentiable manifold, which amounts to tensoring with the sheaf of ${\cal O}_S$-algebras ${\cal O}^\infty_S={\cal C}^\infty(S^{\rm an})$ and which will be denoted by an $\infty$ subscript, the Hodge decomposition ${\cal H}^1_\infty=\underline{\omega}_\infty\oplus\overline{\underline{\omega}}_\infty$ is a splitting of the Hodge sequence with projection ${\rm Pr}_{\infty}\colon{\cal H}^1_\infty\rightarrow\underline{\omega}_\infty$. For each $k\geq1$, let ${\Theta}_{k,\infty}^o$ be the operator defined by the composition \begin{equation} \begin{CD} \mathrm{Sym}^k(\underline{\omega}_\infty)\subset\mathrm{Sym}^k(\derham1)_\infty @>\nabla>> \mathrm{Sym}^k(\derham1)_\infty\otimes{\Omega}^1 @>{1\otimes\inv{{\rm KS}}}>> \mathrm{Sym}^k(\derham1)_\infty\otimes{\cal L}_{\infty} \\ @. @. @VV{{\rm Pr}_{\infty}^{\otimes k}\otimes1}V \\ @. @. \mathrm{Sym}^k(\underline{\omega})_\infty\otimes{\cal L}_{\infty} \\ \end{CD} \label{eq:algmaassnsp} \end{equation} where the Gau{\ss}-Manin connection $\nabla$ extends to $\mathrm{Sym}^k$ by the product rule. The composition ${{\cal L}}^{\otimes k}\subset\underline{\omega}^{\otimes k}\rightarrow\mathrm{Sym}^{k}(\underline{\omega})$ is injective; let ${\Theta}_{k,\infty}$ be the restriction of ${\Theta}_{k,\infty}^o$ to ${{\cal L}}^{\otimes k}_{\infty}$. \begin{pro}\label{th:operatorrestricts} ${\Theta}_{k,\infty}$ is an operator ${{\cal L}}^{\otimes k}_\infty\rightarrow{{\cal L}}^{\otimes k+2}_\infty$. \end{pro} \par\noindent{\bf Proof. } If $D$ is split then ${\cal L}^{\otimes k}=\underline{\omega}^{\otimes k}=\mathrm{Sym}^k(\underline{\omega})$ and there is nothing to prove. If $D$ is non-split, the element $e^{\otimes k}\in({\cal R}_{1}\otimes_{\mathbb Z}{\cal O}_{(v)})^{\otimes k}$ acting componentwise defines a projection $\underline{\omega}^{\otimes k}\rightarrow{\cal L}^{\otimes k}$ which factors through $\mathrm{Sym}^k(\underline{\omega})$. 
By proposition \ref{th:GMcomm} the Gau{\ss}-Manin connection $\nabla$ commutes with $e^{\otimes k}$ and also the Hodge projection ${\rm Pr}_{\infty}$ is the identity on ${\cal L}_{\infty}$ (in fact on $\underline{\omega}_{\infty}$). The result follows. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par The operators ${\Theta}_{k,\infty}$ can be computed in terms of the complex coordinate $z=x+iy\in\mathfrak H$ and the identifications \eqref{eq:Eknsplit}. For any (say ${\cal C}^\infty$) function $\phi$ on $\mathfrak H$ let $$ {\delta}_k(\phi)=\frac1{2\pi i}\left(\frac{d}{dz}+\frac{k}{2iy}\right)\phi. $$ The operator ${\delta}_k$ was introduced, together with its higher-dimensional analogues, by Maass \cite{Maass53} and was later extensively studied by Shimura (see \cite[Ch.~10]{Hida93} and the references cited therein). \begin{pro}\label{th:maassincoord} There are commutative diagrams of $C^\infty$-bundles and differential operators $$ \begin{CD} {\cal V}_k @>\sim>> {\cal L}^{\otimes k} \\ @VV{\widetilde{{\delta}}_k}V @VV{{\Theta}_{k,\infty}}V \\ {\cal V}_{k+2} @>\sim>> {\cal L}^{\otimes k+2} \\ \end{CD} $$ where $\widetilde{{\delta}}_n(\phi\tilde{v}_n)={\delta}_n(\phi)\tilde{v}_{n+2}$. \end{pro} \par\noindent{\bf Proof. } The diagram for $D$ split is but the simplest case (dimension $1$) of \cite[theorem~6.5]{Harris81}. The computation in the non-split case is very similar. Let $s$ be the ${\rm KS}$-normalized section of ${\cal L}$ as in \eqref{eq:kanormsec}, $\underline{\eta}=\{\eta_1,\dots,\eta_4\}$ be Hashimoto's symplectic basis of theorem \ref{th:Hashimoto} and $\Pi=\Pi_{\underline{\eta}}(z)$ the period matrix as in \eqref{eq:permat}. 
Since the sections $\dual{\eta_1},\ldots,\dual{\eta_4}$ are $\nabla$-horizontal, $$ \nabla\left( \begin{array}{c} d{\zeta}_1 \\ \\ d{\zeta}_2 \end{array} \right)=d\Pi \left(\begin{array}{c} \dual{\eta_1} \\ \vdots \\ \dual{\eta_4} \end{array}\right)= d\Pi\inv{\left( \begin{array}{c} \Pi \\ \\ \overline{\Pi} \end{array} \right)} \left(\begin{array}{c} d{\zeta}_1 \\ d{\zeta}_2 \\ d\bar{{\zeta}}_1 \\ d\bar{{\zeta}}_2 \end{array}\right)= \frac{\left(I_2,-I_2\right)}{z-\bar{z}} \left(\begin{array}{c} d{\zeta}_1 \\ d{\zeta}_2 \\ d\bar{{\zeta}}_1 \\ d\bar{{\zeta}}_2 \end{array}\right)\otimes dz. $$ Since $s$ is in the $\mathbb C$-span of $d{\zeta}_1$ and $d{\zeta}_2$, $\nabla(s)=\left(\frac1{z-\bar{z}}s+s_0\right)\otimes dz$ with ${\rm Pr}_{\infty}(s_0)=0$. Plugging this into ${\Theta}_{k,\infty}(\phi s^{\otimes k})={\rm Pr}_{\infty}(1\otimes\inv{{\rm KS}}) \left(\frac{d\phi}{dz}s^{\otimes k}\otimes dz+ k\phi s^{\otimes k-1}\nabla(s)\right)$ yields the result. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par Let $B$ be a $p$-adic algebra with $(p,{\Delta} N)=1$ and such that the idempotent $e$ is defined over $B$ and the isomorphism of theorem \ref{th:Ltbundles} holds for the sheaves base-changed to $B$. Let ${\cal O}^{(p)}$ be the structure sheaf of the formal scheme $S^{(p)}=\limproj{n}(S\otimes B/p^nB)^{p\mathrm{-ord}}$ obtained by taking out the non-ordinary points in characteristic $p$. Denote by ${\cal M}^{(p)}$ the tensorization with ${\cal O}^{(p)}$ of the restriction to $S^{(p)}$ of a sheaf ${\cal M}$. In the split case the Dwork-Katz construction \cite[\S A2.3]{Katz73} of the unique Frobenius-stable $\nabla$-horizontal submodule ${\cal U}\subset\derham{1}\otimes B$ defines a splitting $(\derham{1})^{(p)}=\underline{\omega}^{(p)}\oplus{\cal U}$ with projection ${\rm Pr}_{p}\colon(\derham{1})^{(p)}\rightarrow\underline{\omega}^{(p)}$. The construction can be carried out in the non-split case as well. 
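As a consistency check on the operators ${\delta}_k$ of proposition \ref{th:maassincoord}, they can be implemented symbolically by treating $z$ and $\bar z$ as independent variables. A minimal sketch (the function names are ours, not from the text); it verifies, for instance, that ${\delta}_k$ annihilates $y^{-k}$, the two terms of the operator cancelling exactly:

```python
import sympy as sp

z, zbar = sp.symbols('z zbar')
y = (z - zbar) / (2*sp.I)  # Im(z), with z and zbar treated as independent

def delta(k, phi):
    """Maass raising operator delta_k = (1/(2 pi i)) (d/dz + k/(2iy))."""
    return (sp.diff(phi, z) + k*phi/(2*sp.I*y)) / (2*sp.pi*sp.I)

def delta_iter(k, r, phi):
    """Iterate delta_k^{(r)} = delta_{k+2r-2} o ... o delta_k."""
    for j in range(r):
        phi = delta(k + 2*j, phi)
    return phi

# delta_k kills y^{-k}
for k in (1, 2, 5):
    assert sp.simplify(delta(k, y**(-k))) == 0

# the iterate is the composition of the individual raising operators
phi = sp.exp(2*sp.pi*sp.I*z)
assert sp.simplify(delta_iter(2, 2, phi) - delta(4, delta(2, phi))) == 0
```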
If $\pr{B}$ is a $B$-algebra and $A_{\pr{B}}$ is a QM-abelian surface with ordinary reduction and canonical subgroup $H$ (which, by ordinarity, is simply the Cartier dual of the lift of the kernel of Verschiebung--an {\'e}tale group), then $H\subset A[p]$ with $A[p]/H$ lifting an {\'e}tale group and for every $\phi\in{\rm End}(A)$, $\phi(H)\subseteq H$ by connectedness. Thus $A/H$ is a QM-abelian surface with a canonical embedding ${\cal R}_1\hookrightarrow{\rm End}(A/H)$ and the construction of the Frobenius endomorphism of $(\derham{1})^{(p)}$ and its splitting follows. Assuming that the line bundle ${\cal L}$ is defined over $B$ and following the same procedure as in \eqref{eq:algmaassnsp} with the projection ${\rm Pr}_{\infty}$ replaced by ${\rm Pr}_{p}$ yields a differential operator $$ {\Theta}_{k,p}^o\colon\mathrm{Sym}^k(\underline{\omega}^{(p)})\longrightarrow\mathrm{Sym}^k(\underline{\omega}^{(p)})\otimes{\cal L}^{(p)}. $$ Let ${\Theta}_{k,p}$ be its restriction to $({\cal L}^{\otimes k})^{(p)}$. \begin{pro} ${\Theta}_{k,p}$ is an operator $({{\cal L}}^{\otimes k})^{(p)}\rightarrow({{\cal L}}^{\otimes k+2})^{(p)}$. \end{pro} \par\noindent{\bf Proof. } The argument is the same as in the proof of proposition \ref{th:operatorrestricts}. The action of the endomorphisms commutes with the pullback of forms in the quotient $A\rightarrow A/H$ and so with the Frobenius endomorphism. Since ${\cal U}$ is Frobenius-stable, the endomorphisms commute with the projection ${\rm Pr}_{p}$.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par Let $\ast\in\{\infty,p\}$. The operators ${\Theta}_{k,\ast}$ can be iterated. For all $r\geq1$ let $$ {\Theta}_{k,\ast}^{(r)}= {\Theta}_{k+2r-2,\ast}\circ\cdots\circ{\Theta}_{k,\ast}. $$ Since the kernel of the projection ${\rm Pr}_\ast$ is $\nabla$-horizontal, one has in fact \begin{equation} \label{eq:onlyonepr} {\Theta}_{k,\ast}^{(r)}={\rm Pr}_\ast\left((1\otimes\inv{\rm KS})\nabla\right)^r. 
\end{equation} The operators ${\Theta}_{k,\infty}^{(r)}$ do not preserve holomorphy because the Hodge projection ${\rm Pr}_{\infty}$ is not holomorphic. Similarly, the operators ${\Theta}_{k,p}^{(r)}$ are only defined over $p$-adically complete rings of integers. Nonetheless, the operators ${\Theta}_{k,\ast}^{(r)}$ are algebraic over the CM locus, in the following sense. Let $x\in{\cal X}_{1}({\Delta},N)({\cal O}_{(v)})$ be represented by a point $\tau\in\mathfrak H$ belonging to a $p$-ordinary test triple $(\tau,v,e)$. Let ${{\cal L}}(x)=x^*{{\cal L}}$ be the algebraic fiber at $x$. The choice of an invariant form ${\omega}_o$ on $A_x$ which generates either $H^0(A_x,{\Omega}^1\otimes{\cal O}_{(v)})$ (in the split case) or $eH^0(A_x,{\Omega}^1\otimes {\cal O}_{(v)})$ (in the non-split case) over ${\cal O}_{(v)}$ identifies ${{\cal L}}(x)$ with a copy of ${\cal O}_{(v)}$. \begin{pro}\label{teo:Thkalgebraic} Let $x\in{\cal X}_{1}({\Delta},N)({\cal O}_{(v)})$ be a point represented by a $p$-ordinary test triple and let ${\omega}_o$ be an invariant form on $A_x$ as above. Then, for all $r\geq1$, the operators ${\Theta}_{k,\ast}^{(r)}$ define maps $$ {\Theta}_{k,\ast}^{(r)}(x)\colon H^0({\cal X}_{1}({\Delta},N)\otimes{\cal O}_{(v)},{{\cal L}}^{\otimes k})\longrightarrow {{\cal L}}^{\otimes k+2r}(x)\simeq {\cal O}_{(v)}{{\omega}_o}^{\otimes k+2r}. $$ Moreover ${\Theta}_{k,\infty}^{(r)}(x)={\Theta}_{k,p}^{(r)}(x)$. \end{pro} \par\noindent{\bf Proof. } The result follows, as in \cite[theorem~2.4.5]{Katz78}, from the following observation. Let $A$ be an abelian variety isogenous over ${\cal O}_{(v)}$ to a $g$-fold product of elliptic curves with complex multiplications in the field $K$ and ordinary good reduction modulo $v$. 
The CM splitting of the first de Rham group of $A$ is the splitting $\Derham{1}(A/{\cal O}_{(v)})=H_{{\sigma}_1}\oplus H_{{\sigma}_2}$ where $H_{{\sigma}_i}$ is the ${\sigma}_i$-eigenspace under the action of complex multiplications, $I_K=\{{\sigma}_1,{\sigma}_2\}$. The Hodge decomposition $\Derham{1}(A)\otimes\mathbb C=H^{1,0}\oplus H^{0,1}$ and the Dwork-Katz decomposition $\Derham{1}(A)\otimes B=H^0(A\otimes B,{\Omega}^1)\oplus U$ for some $p$-adic ${\cal O}_{(v)}$-algebra $B$ are both obtained from the CM splitting by a suitable tensoring. The result follows from the algebraicity of the Gau\ss-Manin connection and the Kodaira-Spencer map, using the expression \eqref{eq:onlyonepr}.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par For all $r\geq1$ write $$ {\delta}_k^{(r)}={\delta}_{k+2r-2}\circ\cdots\circ{\delta}_k= \left(\frac{1}{2\pi i}\right)^r\left(\frac{d}{dz}+\frac{k+2r-2}{2iy}\right) \circ\cdots\circ\left(\frac{d}{dz}+\frac{k}{2iy}\right) $$ and set ${\delta}_k^{(0)}(\phi)=\phi$. \section{Expansions of modular forms} \subsection{Serre-Tate theory.}\label{se:STtheory} Let ${\mbox{\boldmath $k$}}$ be any field, $({\Lambda},\mathfrak m)$ a complete local noetherian ring with residue field ${\mbox{\boldmath $k$}}$ and ${\cal C}$ the category of artinian local ${\Lambda}$-algebras with residue field ${\mbox{\boldmath $k$}}$. Let $\widetilde{A}$ be an abelian variety over ${\mbox{\boldmath $k$}}$ of dimension $g$. By a fundamental result of Grothendieck \cite[2.2.1]{Oort71}, the \emph{local moduli functor} ${\cal M}\colon{\cal C}\rightarrow\mbox{\bf Sets}$, which associates to each $B\in{\rm Ob}\,{\cal C}$ the set of deformations of $\widetilde{A}$ to $B$, is pro-represented by ${\Lambda}[[t_1,\ldots,t_{g^2}]]$. 
When ${\mbox{\boldmath $k$}}$ is perfect of characteristic $p>0$ and ${\Lambda}=W_{\mbox{\boldmath $k$}}$ is the ring of Witt vectors of ${\mbox{\boldmath $k$}}$, deforming $\widetilde{A}$ is equivalent to deforming its formal group, as made precise by Serre-Tate theory \cite[\S2]{Katz81}. If ${\mbox{\boldmath $k$}}$ is algebraically closed and $\widetilde{A}$ is \emph{ordinary}, an important consequence of Serre-Tate theory is that there is a canonical isomorphism of functors $$ {\cal M}\buildrel\sim\over\longrightarrow{\rm Hom}(T_p\widetilde{A}\otimes T_p\widetilde{A}^t,\widehat{\mathbb G}_m), $$ \cite[theorem 2.1]{Katz81}. Write ${\cal M}=\mathrm{Spf}(\mathfrak R^u)$ with universal formal deformation ${\cal A}^u$ over $\mathfrak R^u$. The isomorphism endows ${\cal M}$ with a canonical structure of formal torus and identifies its group of characters $X({\cal M})={\rm Hom}({\cal M},\widehat{\mathbb G}_m)\subset\mathfrak R^u$ with the group $T_p\widetilde{A}\otimes T_p\widetilde{A}^t$. Denote by $q_S$ the character corresponding to $S\in T_p\widetilde{A}\otimes T_p\widetilde{A}^t$. For a deformation ${\cal A}_{/B}$ of $\widetilde{A}$ with $(B,\mathfrak m_B)\in{\rm Ob}\,\widehat{{\cal C}}$, let $$ q({\cal A}_{/B};\cdot,\cdot)\colon T_p\widetilde{A}\times T_p\widetilde{A}^t\longrightarrow\widehat{\mathbb G}_m(B)=1+\mathfrak m_B $$ be the corresponding bilinear form. When ${\mbox{\boldmath $k$}}$ is not algebraically closed, the group structure on ${\cal M}\otimes\overline{{\mbox{\boldmath $k$}}}$ descends to a group structure on ${\cal M}$; for the details see \cite[1.1.14]{Noot92}. Let ${\cal N}\subset{\cal M}$ be a formal subgroup and $\rho\colon X({\cal M})\rightarrow X({\cal N})$ the restriction map. The $\mathbb Z_p$-module $N=\ker(\rho)$ is called the \emph{dual} of ${\cal N}$. Via Serre-Tate theory, $N\subseteq T_p\widetilde{A}\otimes T_p\widetilde{A}^t$. 
Then $$ {\cal N}\buildrel\sim\over\longrightarrow{\rm Hom}\left(\frac{T_p\widetilde{A}\otimes T_p\widetilde{A}^t}N,\widehat{\mathbb G}_m\right) $$ and \begin{eqnarray*} \mbox{${\cal N}$ is a subtorus of ${\cal M}$} & \Longleftrightarrow & \mbox{$X({\cal N})\simeq X({\cal M})/N$ is torsion-free} \\ & \Longleftrightarrow & \mbox{$N$ is a direct summand of $T_p\widetilde{A}\otimes T_p\widetilde{A}^t$.} \end{eqnarray*} \noindent To simplify some of the next statements, we shall henceforth assume that $p>2$. \begin{pro}\label{th:maplift} Let $\widetilde{f}\colon\widetilde{A}\rightarrow\widetilde{B}$ be a morphism of ordinary abelian varieties over ${\mbox{\boldmath $k$}}$. The morphism $\widetilde{f}$ lifts to a morphism $f\colon{\cal A}\rightarrow{\cal B}$ of deformations over $B$ if and only if $$ q({\cal A}_{/B};P,\widetilde{f}^t(Q))=q({\cal B}_{/B};\widetilde{f}(P),Q)\quad \mbox{for all $P\in T_p\widetilde{A}$ and $Q\in T_p\widetilde{B}$}. $$ In particular, if $(\widetilde{A},\wt{{\lambda}})$ is principally polarized, the formal subscheme ${\cal M}^{\rm pp}$ that classifies deformations of $\widetilde{A}$ with a lifting ${\lambda}$ of the principal polarization is a subtorus whose group of characters is $$ X({\cal M}^{\rm pp})=\mathrm{Sym}^2(T_p\widetilde{A}). $$ \end{pro} \par\noindent{\bf Proof. } The first part of the statement is \cite[2.1.4]{Katz81}. For the second part, the principal polarization $\wt{{\lambda}}$ identifies $T_p\widetilde{A}\simeq T_p\widetilde{A}^t$. For a deformation ${\cal A}_{/B}$ let $\pr{q}({\cal A}_{/B};P,\pr{P})=q({\cal A}_{/B};P,\wt{{\lambda}}(\pr{P}))$. Then $\wt{{\lambda}}$ lifts to ${\cal A}$ if and only if $\pr{q}$ is symmetric, and the submodule of symmetric maps is a direct summand. 
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par The last part of the proposition can be rephrased by saying that there is a commutative diagram $$ \begin{CD} T_p\widetilde{A}\otimes T_p\widetilde{A}^t @>\sim>> X({\cal M}) \\ @V{\sopra{\hbox{polarization}}{\hbox{$+$ quotient}}}VV @VV\mbox{restriction}V \\ \mathrm{Sym}^2(T_p\widetilde{A}) @>\sim>> X({\cal M}^{\rm pp}) \end{CD} $$ Concretely, if $\{P_1,\ldots,P_g\}$ and $\{P_1^t,\ldots,P_g^t\}$ are $\mathbb Z_p$-bases of $T_p\widetilde{A}$ and of $T_p\widetilde{A}^t$ respectively, the $g^2$ elements $q_{i,j}=q({\cal A}^u_{/\mathfrak R^u};P_i,P_j^t)-1$ define an isomorphism $\mathfrak R^u\simeq W_{{\mbox{\boldmath $k$}}}[[q_{i,j}]]$. If $\widetilde{A}$ is principally polarized we may take $P_i=P_i^t$ under the identification $T_p(\widetilde{A})\simeq T_p(\widetilde{A}^t)$. Then $q_{i,j}=q_{j,i}$ on ${\cal M}^{\rm pp}=\mathrm{Spf}(\mathfrak R^{\rm pp})$ and $$ \mathfrak R^{\rm pp}\simeq W_{{\mbox{\boldmath $k$}}}[[q^{\rm pp}_{i,j}]],\qquad \hbox{with $q^{\rm pp}_{i,j}={q_{i,j}}_{|{\cal M}^{\rm pp}}$, $1\leq i\leq j\leq g$.} $$ More generally, if ${\cal N}=\mathrm{Spf}(\mathfrak R_{\cal N})$ is a subtorus with $n={\rm rk}_{\mathbb Z_p}(N)$, a $\mathbb Z_p$-basis $\{S_1,\ldots,S_n\}$ of $N$ can be completed to a basis $\{S_1,\ldots,S_n,S_{n+1},\ldots,S_{g^2}\}$ of $T_p\widetilde{A}\otimes T_p\widetilde{A}^t$. If $q_i=q_{S_i}({\cal A}^u_{/\mathfrak R^u})-1$ and $q_i^{\cal N}={q_i}_{|{\cal N}}$, then $q^{\cal N}_{1}=\ldots=q^{\cal N}_{n}=0$ by construction and $\mathfrak R_{\cal N}=W_{{\mbox{\boldmath $k$}}}[[q^{\cal N}_{n+1},\ldots,q^{\cal N}_{g^2}]]$. Since $\widetilde{A}$ is ordinary, there is a canonical isomorphism $T_p\widetilde{A}^t\buildrel\sim\over\rightarrow{\rm Hom}_B(\widehat{{\cal A}},\widehat{\mathbb G}_m)$ for any deformation ${\cal A}_{/B}$ of $\widetilde{A}$. 
Composition with the pullback of the standard invariant form $dT/T$ on $\widehat{\mathbb G}_m$ yields a functorial $\mathbb Z_p$-linear homomorphism ${\omega}\colon T_p\widetilde{A}^t\rightarrow\underline{\omega}_{{\cal A}/B}$ which is compatible with morphisms of abelian schemes, in the sense that if the morphism $f\colon{\cal A}\rightarrow{\cal B}$ lifts the morphism $\widetilde{f}\colon\widetilde{A}\rightarrow\widetilde{B}$ of abelian varieties over ${\mbox{\boldmath $k$}}$ then, \cite[lemma 3.5.1]{Katz81}, \begin{equation} f^*({\omega}(P^t))={\omega}(\widetilde{f}^t(P^t)), \qquad\mbox{for all $P^t\in T_p\widetilde{B}^t$.} \label{eq:omcomp} \end{equation} By functoriality, the maps ${\omega}$ extend to a well-defined $\mathbb Z_p$-linear homomorphism $$ {\omega}^u\colon T_p\widetilde{A}^t\rightarrow\underline{\omega}_{{\cal A}^u/{\cal M}} $$ whose $\mathfrak R^u$-linear extension $T_p\widetilde{A}^t\otimes{\cal O}_{\cal M}\buildrel\sim\over\rightarrow\underline{\omega}_{{\cal A}^u/{\cal M}}$ is an isomorphism. Thus, a choice of a $\mathbb Z_p$-basis $\{P_1^t,\ldots,P_g^t\}$ of $T_p\widetilde{A}^t$ yields an identification $\underline{\omega}_{{\cal A}^u/{\cal M}}=\left(\bigoplus_{i=1}^{g}\mathfrak R^u{\omega}_i\right)^{\rm sh}$ where ${\omega}_i={\omega}^u(P^t_i)$, $i=1,\ldots,g$, and the superscript $(\ )^{\rm sh}$ denotes the sheafified module. Suppose that $\widetilde{A}$ is principally polarized. Let ${\cal N}\subset{\cal M}^{\rm pp}$ be a subtorus with dual $N$ and let ${\cal A}_{\cal N}$ be the restriction over ${\cal N}$ of the universal deformation ${\cal A}^u/{\cal M}$. Let $\{S_1,\ldots,S_{\frac{g(g+1)}{2}}\}$ be a $\mathbb Z_p$-basis of $\mathrm{Sym}^2(T_p\widetilde{A}^t)=\mathrm{Sym}^2(T_p\widetilde{A})$ such that $N=\bigoplus_{j=1}^n\mathbb Z_pS_j$. 
Let ${\omega}^{(2)}_i$ be the pullback to ${\cal A}_{\cal N}$ of $\mathrm{Sym}^2({\omega}^u)(S_i)$, and let $q_i^{\rm pp}$ and $q_i^{\cal N}$ be the restrictions of the local parameters $q_{S_i}$ constructed above, $i=1,\ldots,g(g+1)/2$. \begin{pro}\label{th:KSforsub} $$ {\rm KS}_{\cal N}({\omega}^{(2)}_i)= \left\{ \begin{array}{ll} 0 & \mbox{for $i=1,\ldots,n$} \\ d\log(q_i^{\rm pp}+1)_{|{\cal N}} & \mbox{for $i=n+1,\ldots,g(g+1)/2$} \end{array}\right. $$ \end{pro} \par\noindent{\bf Proof. } It is an immediate application of proposition \ref{th:KSforpullb} to Katz's computations \cite[theorem 3.7.1]{Katz81}, because there are identifications $\mathrm{Sym}^2(\underline{\omega}_{{\cal A}_{\cal N}/{\cal N}})=\left(\bigoplus_{i=1}^{g(g+1)/2}\mathfrak R_{\cal N}{\omega}_i^{(2)}\right)^{\rm sh}$, ${\Omega}^1_{{\cal M}^{\rm pp}/W}=\left(\bigoplus_{i=1}^{g(g+1)/2}\mathfrak R^{\rm pp}dq_i^{\rm pp}\right)^{\rm sh}$, ${\Omega}^1_{{\cal N}/W}=\left(\bigoplus_{i=n+1}^{g(g+1)/2}\mathfrak R_{\cal N} dq_i^{\cal N}\right)^{\rm sh}$ and $dq_i^{\cal N}=0$ for $i=1,\ldots,n$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \paragraph{Elliptic curves and false elliptic curves.} Let $(\tau,v,e)$ be a split $p$-ordinary test triple and denote by $A$ the corresponding CM curve (if $D$ is split) or principally polarized QM-abelian surface (if $D$ is non-split) with $\widetilde{A}$ its reduction modulo $p$. Of the two embeddings $K\hookrightarrow\mathbb Q_p$, fix the one that makes the $K$ action on $T_p\widetilde{A}\otimes\mathbb Q_p$ coincide with its natural $\mathbb Q_p$-vector space structure. Let ${\cal M}$ be the local moduli functor corresponding to $\widetilde{A}$. If $D$ is split, everything has already been implicitly described: ${\cal M}^{\rm pp}={\cal M}$ is a $1$-dimensional torus and $\mathfrak R^{\rm pp}=\mathfrak R^u=W[[q-1]]$ with $q=q_{P\otimes P}$ for a $\mathbb Z_p$-generator $P$ of $T_p\widetilde{A}$. 
Also, if ${\omega}_u={\omega}^u(P)$ then ${\omega}_u^{\otimes 2}=\mathrm{Sym}^2({\omega}^u)(P\circ P)$ and ${\rm KS}({\omega}_u^{\otimes 2})=d\log(q)$. If $D$ is non-split, let ${\cal N}={\cal N}_D$ be the subfunctor $$ {\cal N}_D(B)= \left\{ \sopra{\hbox{principally polarized deformations ${\cal A}_{/B}$ of $\widetilde{A}$ with a lift of}} {\hbox{the endomorphisms given by elements of the maximal order ${\cal R}_{1}$}} \right\} $$ The maximal order ${\cal R}_{1}$ acts naturally on $T_p\widetilde{A}$ and since $e\in{\cal R}_{1}\otimes\mathbb Z_p$ we can find a $\mathbb Z_p$-basis $\{P,Q\}$ of $T_p\widetilde{A}$ such that $eP=P$ and $eQ=0$. \begin{pro}\label{th:NR} \begin{enumerate} \item ${\cal N}=\mathrm{Spf}(\mathfrak R_{{\cal N}})$ is a $1$-dimensional subtorus of ${\cal M}^{\rm pp}$; \item $\mathfrak R_{{\cal N}}=W_{{\mbox{\boldmath $k$}}}[[q-1]]$, where $q=q^{\cal N}_{P\circ P}$; \item if ${\omega}_u$ denotes the pullback of ${\omega}^u(P)$ to ${\cal A}_{{\cal N}}$, then ${\rm KS}({\omega}_u^{\otimes 2})=d\log(q)$. \end{enumerate} \end{pro} \par\noindent{\bf Proof. } It follows from proposition \ref{th:maplift} that ${\cal N}(B)$ is identified with the set of the symmetric bilinear forms $q\colon T_p\widetilde{A}\times T_p\widetilde{A}\rightarrow\widehat{\mathbb G}_m(B)$ such that $q(P,\invol{r}Q)=q(rP,Q)$ for all $r\in{\cal R}_{1}$. This makes clear that ${\cal N}$ is a subgroup, and that its dual $N$ is the $\mathbb Z_p$-submodule generated by the elements $$ \left\{ \begin{array}{l} P_1\otimes P_2-P_2\otimes P_1 \\ rP_1\otimes P_2-P_1\otimes\invol{r}P_2 \end{array}\right. \quad\mbox{for all $P_1,P_2\in T_p(\widetilde{A})$ and $r\in{\cal R}_{1}$.} $$ Choose $u$ in the decomposition \eqref{eq:Dsplit} for the subfield $F\subset D$ so that $u\in{\cal R}_{1}$, $\invol u=-u$ and $\vass{\nu(u)}$ is minimal. In particular $u^{2}=-\nu(u)$ is a square-free integer. Pick a basis of $D$ in ${\cal R}_{1}$ of the form $\{1,r,u,ru\}$ with $r^{2}\in\mathbb Z$. 
From our choice of test triple we can assume that $\mathbb Z[r]$ is an order in $F$ of conductor prime to $p$; in particular $p$ does not divide $r^{2}$. The submodule ${\cal R}=\mathbb Z\oplus\mathbb Z r\oplus\mathbb Z u\oplus\mathbb Z ru$ is actually an order of discriminant $-16r^{4}u^{4}$ such that $\invol{\cal R}={\cal R}$. Suppose that $p|u^{2}$ and let $y=\frac1p\left(a+br+cu+dru\right)$ with $a$, $b$, $c$ and $d\in\mathbb Z$ be an element in ${\cal R}_{1}-{\cal R}$ such that $\bar y=y+{\cal R}$ generates the unique subgroup of order $p$ in ${\cal R}_{1}/{\cal R}$. The conditions ${\rm tr}(y)\in\mathbb Z$, $\nu(y)\in\mathbb Z$, and $\invol{\bar y}=\pm\bar y$ easily imply that the coefficients $a$, $b$, $c$ and $d$ are all divisible by $p$, a contradiction. Thus ${\cal R}\otimes\mathbb Z_{p}={\cal R}_{1}\otimes\mathbb Z_{p}$ and this reduces the set of generators for $N$ to $$ \begin{array}{ccc} P\otimes Q-Q\otimes P & rP\otimes Q-P\otimes rQ & uP\otimes P+P\otimes uP \\ uP\otimes Q+P\otimes uQ & uQ\otimes Q+Q\otimes uQ & ruP\otimes Q-P\otimes ruQ \end{array}. $$ From the relations $re=er$ and $ue=(1-e)u$ in ${\cal R}_{1}\otimes\mathbb Z_p$ we get that the elements $r$ and $u$ act on the basis $\{P,Q\}$ as the matrices $\smallmat{{\alpha}}00{-{\alpha}}$ and $\smallmat 0{{\beta}}{{\gamma}}0$ respectively. Since ${\alpha}$, ${\beta}$ and ${\gamma}$ are $p$-units, $N$ turns out to be the $\mathbb Z_{p}$-module generated by $$ P\otimes Q-Q\otimes P,\quad {\gamma} P\otimes P+{\beta} Q\otimes Q,\quad P\otimes Q+Q\otimes P $$ and $$ T_{p}\widetilde{A}\otimes T_{p}\widetilde{A}=N\oplus\mathbb Z_{p}(P\otimes P). $$ This proves points 1 and 2, and the last part follows at once from proposition \ref{th:KSforsub}.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \begin{rem}\label{rm:OK} \rm The fact that ${\cal N}$ is actually a subtorus can be reinterpreted, as in \cite{Noot92}, in the more general context of Hodge and Tate classes. 
\end{rem} \subsection{Power series expansion} We shall now use Serre-Tate theory to write a power series expansion around an ordinary CM point of a modular form $f\in M_{k,1}({\Delta},N)$ and compute the coefficients of this expansion in terms of the Maass operators studied in section \ref{ss:maass}. We assume that $N>3$. Let $\hbox{\bf Sp}_D$ be the full subcategory of the category of rings consisting of the rings $B$ such that ${\cal R}_{1}\otimes B\simeq M_{2}(B)$. Note that if $(\tau,v,e)$ is a $p$-ordinary test triple, then ${\cal O}_v\in{\rm Ob}\hbox{\bf Sp}_D$. For a ${\rm KS}$-normalized section $s(z)$ as in \eqref{eq:kanormsec} the assignment \begin{equation} f(z)\mapsto f^\ast(z)=f(z)s(z)^{\otimes k} \label{eq:assignsplit} \end{equation} sets up an identification $$ M_{k,1}({\Delta},N)\simeq H^0(X_1({\Delta},N),{\cal L}^{\otimes k}) $$ defined up to a sign (the ambiguity obviously disappears for $k$ even). The identification extends naturally to an identification of the bigger space $M_{k,{\varepsilon}}^{\infty}({\Delta},N)$ of $C^\infty$-modular forms with the global sections of the associated $C^\infty$-bundle ${\cal L}^{\otimes k}_\infty$. This ``geometric'' interpretation of modular forms can be used to endow the space $M_{k,1}({\Delta},N)$ with a canonical $B$-structure for any subring $B\subset\mathbb C$ of definition for ${\cal L}$ in $\hbox{\bf Sp}_D$. In fact for \emph{any} ring $B$ in $\hbox{\bf Sp}_D$ such that ${\cal L}$ is defined over $B$ the space of modular forms over $B$ may be defined as $$ M_{k,1}({\Delta},N;B)=H^0({\cal X}_1({\Delta},N)\otimes B,{\cal L}^{\otimes k}). $$ Remark \ref{re:LoverC}.2 shows that this $B$-structure does not depend on the choice of ${\cal L}$, i.e. on the choice of idempotent $e$. If $\pr{B}$ is a flat $B$-algebra, the identification $M_{k,1}({\Delta},N;B)\otimes\pr{B}=M_{k,1}({\Delta},N;\pr{B})$ follows from the usual properties of flat base change. 
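Returning for a moment to the proof of proposition \ref{th:NR}: the matrix relations used there, and the fact that the three generators of $N$ together with $P\otimes P$ span $T_p\widetilde{A}\otimes T_p\widetilde{A}$, are finite computations in the basis $\{P,Q\}$ and can be checked mechanically. A sketch (the symbol names are ours; note the determinant below is a $p$-unit precisely because $p>2$ and ${\beta}$ is a $p$-unit):

```python
import sympy as sp

alpha, beta, gamma = sp.symbols('alpha beta gamma')
e = sp.Matrix([[1, 0], [0, 0]])           # e*P = P, e*Q = 0
r = sp.Matrix([[alpha, 0], [0, -alpha]])  # action of r
u = sp.Matrix([[0, beta], [gamma, 0]])    # action of u
one = sp.eye(2)

# the relations r*e = e*r and u*e = (1-e)*u from the proof
assert r*e == e*r
assert u*e == (one - e)*u

# generators of N plus P(x)P, written in the basis {P(x)P, P(x)Q, Q(x)P, Q(x)Q}
M = sp.Matrix([
    [0, 1, -1, 0],        # P(x)Q - Q(x)P
    [gamma, 0, 0, beta],  # gamma P(x)P + beta Q(x)Q
    [0, 1, 1, 0],         # P(x)Q + Q(x)P
    [1, 0, 0, 0],         # P(x)P
])
d = sp.expand(M.det())
assert d in (2*beta, -2*beta)  # a p-unit for p > 2, so the four rows span
```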
By smoothness, if $1/N{\Delta}\in B\subset\mathbb C$ then $M_{k,1}({\Delta},N;B)\otimes\mathbb C=M_{k,1}({\Delta},N;\mathbb C)=M_{k,1}({\Delta},N)$. In fact the assignment \eqref{eq:assignsplit} is normalized so that $f\in M_{k,1}(N;B)$ if and only if its Fourier coefficients belong to $B$ ($q$-expansion principle, e.g.\ \cite[Ch.~1]{Katz73}, \cite[theorem 4.8]{Harris81}). Let ${\cal A}/{\cal X}$ be either the universal family \eqref{eq:univEC} or \eqref{eq:univQMAV}. Let $x\in{\cal X}({\cal O}_{(v)})$ be represented by a point $\tau\in{\rm CM}_{{\Delta},K}$ in a split $p$-ordinary test triple $(\tau,v,e)$. Denote by $A_x$ the fiber of ${\cal A}/{\cal X}$ over $x$ and by $A_{\tau}$ the corresponding complex torus. We will implicitly identify the ring $W_{\overline{k(v)}}$ of Witt vectors for the algebraic closure of $k(v)$ with $\nr{\cal O}_{v}$. For each $n\geq0$, let $J_{x,n}={\cal O}_{{\cal X},x}/\mathfrak m_x^{n+1}$ and $J_{x,\infty}=\limproj{n}J_{x,n}=\wh{{\cal O}}_{{\cal X},x}$. By smoothness, there is a non-canonical isomorphism $J_{x,\infty}\simeq{\cal O}_{(v)}[[u]]$. For $n\in\mathbb N\cup\{\infty\}$ the family ${\cal A}/{\cal X}$ restricts to abelian schemes ${\cal A}_{x,n}{}_{/J_{x,n}}$. Tautologically $A_x={\cal A}_{x,0}$, and ${\cal A}_{x,n}={\cal A}_{x,\infty}\otimes J_{x,n}$ with respect to the canonical quotient map $J_{x,\infty}\rightarrow J_{x,n}$. Also, let $\nr{J}_{x,n}=J_{x,n}\wh{\otimes}\nr{\cal O}_{v}$ and $\nr{{\cal A}}_{x,n}={\cal A}_{x,n}\otimes\nr{J}_{x,n}$. Let ${\cal M}=\mathrm{Spf}({\cal R})$ be either the full local moduli functor (in the split case) or its subtorus described in proposition \ref{th:NR} (in the non-split case) associated with the reduction $\widetilde{A}_x=A_x\otimes{\overline{k}}_v$ with universal formal deformation ${\cal A}_x/{\cal M}$. In either case ${\cal M}\simeq{\rm Hom}(T,\wh{\mathbb G}_m)$ where $T$ is a free $\mathbb Z_p$-module of rank 1. 
Since the rings $\nr{J}_{x,n}$ are pro-$p$-Artinian, there are classifying maps $$ \phi_{x,n}\colon{\cal R}\longrightarrow\nr{J}_{x,n},\qquad \hbox{for all $n\in\mathbb N\cup\{\infty\}$} $$ such that $\nr{{\cal A}}_{x,n}={\cal A}_x\otimes_{\phi_{x,n}}\nr{J}_{x,n}$. Since the abelian schemes ${\cal A}_{x,n}$ are the restriction of the universal (global) family, the map $\phi_{x,\infty}$ is an isomorphism. We will use it to transport the Serre-Tate parameter $q_S-1\in{\cal R}$ and the formal sections ${\omega}_u$ constructed in section \ref{se:STtheory} out of a choice of a $\mathbb Z_p$-generator $S$ of $T$ to the $p$-adic disc of points in ${\cal X}$ that reduce modulo $\mathfrak p_v$ to the same geometric point in ${\cal X}\otimes{\overline{k}}_v$. Also, we can pull back the parameter along the translation by $\inv x$ in ${\cal M}$ to obtain a local parameter $u_x$ at $x$ (depending on $S$), namely $$ \nr{J}_{x,\infty}=\nr{{\cal O}}_v[[u_x]],\qquad \hbox{with $u_x=\inv{q_S(x)}q_S-1$}. $$ The complex uniformization of $A_x$ associated with the choice of $\tau$ can be used to define transcendental periods. For any ${\omega}_o\in H^0(A_{x}(\mathbb C),{\cal L}(x))$ write $$ {\omega}_o=p({\omega}_o,\tau)s(\tau) \qquad p({\omega}_o,\tau)\in\mathbb C, $$ under the isomorphism $A_x(\mathbb C)\simeq A_\tau$. For $f\in M_{k,1}({\Delta},N)$ define complex numbers \begin{equation} c^{(r)}(f,x,{\omega}_o)= \frac{{\delta}_k^{(r)}(f)(\tau)}{p({\omega}_o,\tau)^{k+2r}} \qquad r=0,1,2,\ldots. \label{eq:defcrfx} \end{equation} The use of $x$ in the definition \eqref{eq:defcrfx} is justified by the following fact. \begin{pro} Suppose that $f\in M_{k}({\Gamma})$ for some Fuchsian group of the first kind ${\cal R}_1^1\geq{\Gamma}\geq{{\Gamma}_{1}}({\Delta},N)$. Then the numbers $c^{(r)}(f,x,{\omega}_o)\in\mathbb C$ do not depend on the choice of $\tau$ in its ${\Gamma}$-orbit. \end{pro} \par\noindent{\bf Proof. 
} For any ${\gamma}\in{\Gamma}$, multiplication by $j({\gamma},\tau)^{-1}$ induces an isomorphism of complex tori $A_\tau\buildrel\sim\over\rightarrow A_{{\gamma}\tau}$. Since $s$ is a global constant section of ${\cal L}$ over $\mathfrak H$, under the standard identifications of invariant forms, $s({\gamma}\tau)=s(\tau)j({\gamma},\tau)^{-1}$. The assertion follows at once.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par The periods $p({\omega}_o,\tau)$ (and consequently the numbers $c^{(r)}(f,x,{\omega}_o)$) can be normalized by choosing ${\omega}_o$ as in proposition \ref{teo:Thkalgebraic}. For such a choice, defined up to a $v$-unit, set $$ {\Omega}_\infty={\Omega}_\infty(\tau)=p({\omega}_o,\tau),\qquad c^{(r)}_v(f,x)=c^{(r)}(f,x,{\omega}_o). $$ Also, define the \emph{$p$-adic period} ${\Omega}_p={\Omega}_p(x)\in{\cal O}_v^{{\rm nr},\times}$ (again defined up to a $v$-unit) as $$ {\omega}_o={\Omega}_p{\omega}_u(x). $$ Let $f\in M_{k,1}({\Delta},N;\nr{{\cal O}}_v)$. Over $\mathrm{Spf}(\nr{J}_{x,\infty})$ write $f^\ast=f_x{\omega}_u^{\otimes k}$ and $$ {\rm jet}_x(f^\ast)=x^*{\rm jet}(f_x)\otimes{\omega}_u(x)^{\otimes k} =\left(\sum_{n=0}^\infty\frac{b_n(f,x)}{n!}U_x^n\right)\, {\omega}_u(x)^{\otimes k} $$ where $f_x$ is expanded at $x$ in terms of the formal local parameter $U_x=\log(1+u_x)$. \begin{thm}\label{thm:equality} Let $x\in{\cal X}_{1}({\Delta},N)({\cal O}_{(v)})$ be represented by a split $p$-ordinary test triple $(\tau,v,e)$ and let $f\in M_{k,1}({\Delta},N;{\cal O}_{(v)})$. Then, for all $r\geq0$, $$ \frac{b_r(f,x)}{{\Omega}_p^{k+2r}}=c^{(r)}_v(f,x)\in{\cal O}_{(v)}. $$ \end{thm} \par\noindent{\bf Proof. } The case $r=0$ is clear, so let us assume that $r\geq1$. We have $\nabla(f^\ast)=\nabla(f_x{\omega}_u^{\otimes k})= df_x\otimes{\omega}_u^{\otimes k}+kf_x{\omega}_u^{\otimes k-1}\nabla({\omega}_u)$. 
Since $\nabla({\omega}_u(P))\in H^0({\cal M},{\cal U})$ for each $P\in T_p(\wt{A})$, \cite[theorem 4.3.1]{Katz81}, the term containing $\nabla({\omega}_u)$ is killed by the projection ${\rm Pr}_{p}$. Also, $dU_x=d\log(u_x+1)=d\log(q+1)$ doesn't depend on $x$ and we obtain ${\Theta}_{k,p}(f^\ast)=(df_x/dU_x){\omega}_u^{\otimes k+2}$. Iterating the latter computation $r$ times and evaluating the result at $x$ yields $$ {\Theta}_{k,p}^{(r)}(f^\ast)(x)= \frac{d^rf_x}{d{U_x}^r}(x)\,{\omega}_u(x)^{\otimes k+2r}= \frac{b_r(f,x)}{{\Omega}_p^{k+2r}}\,{\omega}_o^{\otimes k+2r}. $$ On the other hand, applying proposition \ref{th:maassincoord} $r$ times and evaluating at $\tau$ yields $$ {\Theta}_{k,\infty}^{(r)}(f^\ast)(x)= {\delta}_k^{(r)}(f)(\tau)s(\tau)^{\otimes k+2r}= c^{(r)}(f,x,{\omega}_o)\,{\omega}_o^{\otimes k+2r}. $$ The result follows from proposition \ref{teo:Thkalgebraic}. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par This result has a converse. To state it, we need the following preliminary discussion. Let ${\cal D}$ be any domain of characteristic $0$ with field of quotients ${\cal K}$. The formal substitution $u=e^U-1=U+\frac1{2!}U^2+\frac1{3!}U^3+\cdots$ defines a bijection between the rings of formal power series ${\cal K}[[u]]$ and ${\cal K}[[U]]$. Under this bijection the ring ${\cal D}[[u]]$ is identified with a subring of the ring of \textit{Hurwitz series}, namely power series of the form $$ \sum_{n=0}^\infty\frac{{\beta}_n}{n!}\,U^n,\qquad \hbox{with ${\beta}_n\in{\cal D}$ for all $n=0,1,2,...$}. $$ We say that a power series $\Phi(U)\in{\cal K}[[U]]$ is $u$-integral if $\Phi(U)=F(e^U-1)$ for some $F(u)\in{\cal D}[[u]]$. Denote $c_{n,r}$ the coefficients defined by the polynomial identity $$ n!\vvec Xn=X(X-1)\cdots(X-n+1)=\sum_{r=0}^nc_{n,r}X^r. $$ The following possibly well-known result is closely related to \cite[Th\'eor\`eme 13]{Serre73}. 
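The integrality criterion in the theorem below lends itself to a direct numerical check. The following sketch is purely illustrative (it takes ${\cal D}=\mathbb Z$, truncates all series, and the helper names are ours): it computes the coefficients $c_{d,r}$ and tests the integrality of the expressions $\frac1{d!}\sum_{r}c_{d,r}{\beta}_r$ for a Hurwitz series $\Phi(U)=F(e^U-1)$.

```python
from fractions import Fraction
from math import factorial

def exp_minus_one(N):
    # Coefficients of u = e^U - 1 as a power series in U, truncated at U^N.
    return [Fraction(0)] + [Fraction(1, factorial(n)) for n in range(1, N + 1)]

def compose(a, inner, N):
    # Phi(U) = F(inner(U)) truncated at U^N, where F(u) = sum_n a[n] u^n.
    phi = [Fraction(0)] * (N + 1)
    power = [Fraction(1)] + [Fraction(0)] * N          # inner^0
    for an in a:
        for k in range(N + 1):
            phi[k] += an * power[k]
        nxt = [Fraction(0)] * (N + 1)                  # power * inner, truncated
        for i, pi in enumerate(power):
            for j in range(N + 1 - i):
                nxt[i + j] += pi * inner[j]
        power = nxt
    return phi

def falling_coeffs(d):
    # c_{d,r}: X(X-1)...(X-d+1) = sum_r c_{d,r} X^r.
    c = [Fraction(1)]
    for i in range(d):
        nxt = [Fraction(0)] * (len(c) + 1)
        for k, ck in enumerate(c):
            nxt[k + 1] += ck          # multiply by X
            nxt[k] -= i * ck          # ... and subtract i times
        c = nxt
    return c

def hurwitz_integral(beta, D):
    # Criterion: (1/d!) * sum_r c_{d,r} beta_r lies in Z for d = 1..D.
    for d in range(1, D + 1):
        c = falling_coeffs(d)
        s = sum(c[r] * beta[r] for r in range(1, d + 1))
        if (s / factorial(d)).denominator != 1:
            return False
    return True

N = 8
u = exp_minus_one(N)
# F(u) = u lies in Z[[u]], so Phi(U) = e^U - 1 (beta_n = 1 for n >= 1) must pass ...
phi = compose([Fraction(0), Fraction(1)], u, N)
beta = [phi[n] * factorial(n) for n in range(N + 1)]
print(hurwitz_integral(beta, N))                      # True
# ... while Phi(U) = U corresponds to F(u) = log(1+u), which is not u-integral.
print(hurwitz_integral([0, 1] + [0] * (N - 1), N))    # False
```

With $F(u)=u$ the test passes, while $\Phi(U)=U$ fails already at $d=2$, where the criterion reads $\frac12(-{\beta}_1+{\beta}_2)\in\mathbb Z$.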
\begin{thm} A Hurwitz series $\Phi(U)=\sum_{n=0}^\infty\frac{{\beta}_n}{n!}\,U^n$ is $u$-integral if and only if $\frac1{d!}(c_{d,1}{\beta}_1+c_{d,2}{\beta}_2+\cdots+c_{d,d}{\beta}_d)\in{\cal D}$ for all $d=1,2,...$. \end{thm} \par\noindent{\bf Proof. } Let $F(u)=\Phi(\log(1+u))\in{\cal K}[[u]]$. For any polynomial $P(X)=p_0+p_1X+\cdots+p_dX^d\in{\cal K}[X]$ of degree $d$, an immediate chain rule computation yields $$ \left.P\left((u+1)\frac d{du}\right)F(u)\right|_{u=0}=p_0{\beta}_0+p_1{\beta}_1+...+p_d{\beta}_d. $$ Also $\left.P\left((u+1)\frac d{du}\right)F(u)\right|_{u=0}=\left.P\left((u+1)\frac d{du}\right)F_d(u)\right|_{u=0}$ where $F_d(u)$ is the degree $d$ truncation of $F(u)$, i.e. $F(u)=F_d(u)+u^{d+1}H(u)$. On the space of polynomials of degree $\leq d$ the substitution $u=v-1$ is defined over ${\cal D}$ and $\left.P\left((u+1)\frac d{du}\right)F_d(u)\right|_{u=0}= \left.P\left(v\frac d{dv}\right)F_d(v-1)\right|_{v=1}$. Since $\left.P\left(v\frac d{dv}\right)v^k\right|_{v=1}=P(k)$, the argument shows that if $\Phi(U)$ is $u$-integral, then the expression $p_0{\beta}_0+p_1{\beta}_1+...+p_d{\beta}_d$ is a ${\cal D}$-linear combination of the values $P(0)$, $P(1), ..., P(d)$. On the other hand, the argument also shows that $$ \frac1{d!}(c_{d,1}{\beta}_1+c_{d,2}{\beta}_2+\cdots+c_{d,d}{\beta}_d)= \left.\vvec{v\,d/dv}{d}F_d(v-1)\right|_{v=1} $$ is the coefficient of $u^d$ in $F$. Therefore, we obtain that $\Phi(U)$ is $u$-integral if and only if $p_0{\beta}_0+p_1{\beta}_1+...+p_d{\beta}_d\in{\cal D}$ for every polynomial $P(X)=p_0+p_1X+\cdots+p_dX^d\in{\cal K}[X]$ such that $P(0)$, $P(1), ..., P(d)\in{\cal D}$. We conclude by observing that a degree $d$ polynomial $P(X)\in{\cal K}[X]$ such that $P(0)$, $P(1), ..., P(d)\in{\cal D}$ is necessarily \textit{numeric}, i.e. $P(\mathbb N)\subset{\cal D}$, and that the ${\cal D}$-module of numeric polynomials is free, generated by the binomial coefficients. 
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par Note that when ${\cal D}$ is a ring of algebraic integers, or one of its non-archimedean completions, the conditions of the theorem can be readily rephrased in terms of congruences, known as \textit{Kummer-Serre congruences}. Denote $L^{v, {\rm sc}}$ the compositum of all finite extensions $L\subseteq F$ such that $v$ splits completely in $F$ and let ${\cal O}^{\rm sc}_{(v)}$ be the integral closure of ${\cal O}_{(v)}$ in $L^{v, {\rm sc}}$. \begin{thm}[Expansion principle]\label{thm:expanprinc} Let $f\in M_{k,1}({\Delta},N)$ and $x\in{\cal X}({\Delta},N)({\cal O}_{(v)})$ be represented by a split $p$-ordinary test triple $(\tau,v,e)$ such that the numbers $c^{(r)}_v(f,x)\in{\cal O}_{(v)}$ for all $r\geq0$ and the $p$-adic numbers ${\Omega}_p^{2r}c^{(r)}_v(f,x)$ satisfy the Kummer-Serre congruences. Then $f$ is defined over ${\cal O}^{\rm sc}_{(v)}$. \end{thm} \par\noindent{\bf Proof. } Choose a field embedding $\imath\colon\mathbb C\rightarrow\mathbb C_{p}$ to view $f\in M_{k,1}({\Delta},N;\mathbb C_{p})$. For all $r\geq0$ set $c_r=c^{(r)}_v(f,x)$ and ${\beta}_{r}=c_r{\Omega}_{p}^{{\epsilon}_Dk+2r}\in\mathbb C_{p}$. Unwinding the computations that led to the equality in theorem \ref{thm:equality} shows that ${\rm jet}_x(f^\ast)=\left(\sum_{r\geq0}\frac{{\beta}_r}{r!}U_x^r\right) {\omega}_u(x)^{\otimes k}\in(\nr{J}_{x,\infty}\otimes\mathbb C_p){\omega}_u(x)^{\otimes k}$. We claim that ${\rm jet}_x(f^\ast)$ is defined over ${\cal O}_v$. Write $$ {\rm jet}_x(f^\ast)= \left(\sum_{r\geq0}\frac{{\beta}_r}{r!}U_x^r\right){\omega}_u(x)^{\otimes k}= \left(\sum_{r\geq0}\frac{c_r}{r!}({\Omega}_p^2U_x)^r\right){\omega}_o^{\otimes k}. $$ Since the ${\beta}_r$ are $v$-integral and satisfy the Kummer-Serre congruences, the first equality shows that ${\rm jet}_x(f^\ast)$ is $v$-integral. 
Since the formal substitution $u=e^U-1$ preserves the field of definition, the claim follows from the second equality if we check that the formal local parameter ${\Omega}_p^2U_x$ is defined over $L_v$. The group ${\rm Aut}(\mathbb C_p/L_v)$ acts on the section ${\omega}_u$ via the action of its quotient ${\rm Gal}(\nr{L}_v/L_v)\simeq{\rm Gal}(\overline{k}_v/k_v)$ on $T_p(\wt{A}_x)$, which is scalar because $\wt{A}_x$ is either an elliptic curve or isogenous to a product of elliptic curves. Thus, the section ${\Omega}_p{\omega}_u$, whose restriction at $x$ is defined over $L_v$, is itself defined over $L_v$. Therefore ${\Omega}_p^2\,dU_x={\rm KS}({\Omega}_p^2{\omega}_u^{\otimes 2})$ is defined over $L_v$, and so is ${\Omega}_p^2U_x$ because it is a priori defined over $\nr{L}_v$ and its value at the point $x$, defined over $L_v$, is $0$. We can now use the same arguments as in Katz's proof \cite{Katz73} of the $q$-expansion principle to conclude that the section $f^\ast$ is defined over ${\cal O}_v$. Indeed, observe that the $q$-expansion of a modular form $f$ at the cusp $s$ multiplied by the right power of the canonical Tate form is ${\rm jet}_s(f^\ast)$. The specific nature of a cusp in the modular curve plays no role in Katz's proof, which works just as well when the cusps are replaced by any point in a smooth curve. Since $f^\ast$ is defined over $\mathbb C$ and over ${\cal O}_v$, the modular form $f$ is defined over the integral closure of ${\cal O}_{(v)}$ in the largest subfield $F\subset\mathbb C$ such that $\imath(F)\subseteq L_v$. The assertion follows from the arbitrariness of the choice of $\imath$, since $L^{v,{\rm sc}}$ can be characterized as the largest subfield of $\mathbb C$ whose image under all the embeddings $\mathbb C\rightarrow\mathbb C_p$ is contained in $L_v$. 
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \section{$p$-adic interpolation} \subsection{$p$-adic $K^{\times}$-modular forms}\label{se:padicforms} A \textit{weight} for the quadratic imaginary field $K$ is a formal linear combination ${\underline{w}}=w_1{\sigma}_1+w_2{\sigma}_2\in\mathbb Z[I_K]$, which will also be written ${\underline{w}}=(w_1,w_2)$. Following our conventions, write $z^{\underline{w}}=z^{w_1}\bar{z}^{w_2}$ for all $z\in\mathbb C$. Also, let $\bar{{\underline{w}}}=(w_{2},w_{1})$ and $\vass{\underline{w}}=w_1+w_2$, so that ${\underline{w}}+\bar{{\underline{w}}}=\vass{\underline{w}}\underline{1}$ with $\underline{1}=(1,1)$. \begin{dfn}[Hida \cite{Hida86}] Let $E\supseteq K$ be a subfield of $\mathbb C$. The space $\widetilde{S}_{{\underline{w}}}(\mathfrak n;E)$ of $K^{\times}$-modular forms of weight ${\underline{w}}$ and level $\mathfrak n$ with values in $E$ is the space of functions $\tilde{f}\colon{\cal I}_{\mathfrak n}\rightarrow E$ such that $$ \tilde{f}(({\lambda})I)={\lambda}^{{\underline{w}}}\tilde{f}(I) $$ for all ${\lambda}\in K^{\times}_{\mathfrak n}$. \end{dfn} \noindent A remarkable subset of $\widetilde{S}_{{\underline{w}}}(\mathfrak n)=\widetilde{S}_{{\underline{w}}}(\mathfrak n;\mathbb C)$ is the set of algebraic Hecke characters of type {$\mathrm{A}_{0}$}, $$ \widetilde{\Xi}_{{\underline{w}}}(\mathfrak n)=\widetilde{S}_{{\underline{w}}}(\mathfrak n)\cap{\rm Hom}({\cal I}_{\mathfrak n},\mathbb C^{\times}). $$ A well-known property noted by Weil \cite{Weil55} is that for every $\tilde{\xi}\in\widetilde{\Xi}_{{\underline{w}}}(\mathfrak n)$ there exists a number field $E_{\tilde{\xi}}$ such that $\tilde{\xi}\in\widetilde{S}_{{\underline{w}}}(\mathfrak n;E_{\tilde{\xi}})$. 
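A standard first example, recalled only for concreteness (we write ${\bf N}$ for the absolute norm of ideals): since $K$ is imaginary quadratic, ${\bf N}(({\lambda}))={\lambda}\bar{\lambda}$ for every ${\lambda}\in K^{\times}$, so the norm satisfies the modular relation with weight $(1,1)$,

```latex
\tilde{\xi}(I)={\bf N}(I),\qquad
\tilde{\xi}\bigl(({\lambda})I\bigr)={\bf N}\bigl(({\lambda})\bigr){\bf N}(I)
={\lambda}\bar{\lambda}\,\tilde{\xi}(I)={\lambda}^{(1,1)}\tilde{\xi}(I),
```

hence ${\bf N}\in\widetilde{\Xi}_{(1,1)}(\mathfrak n)$ for every level $\mathfrak n$, with $E_{\tilde{\xi}}=\mathbb Q$ in Weil's remark.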
A classical construction identifies the space $\widetilde{S}_{{\underline{w}}}(\mathfrak n)$ with the space $S_{{\underline{w}}}(\mathfrak n)$ of functions $f\colon{K_{\A}^{\times}}\rightarrow\mathbb C^{\times}$ such that \begin{equation} f(s{\lambda} zu)=z^{-{\underline{w}}}f(s) \quad \hbox{for all ${\lambda}\in K^{\times}$, $z\in\mathbb C^{\times}$ and $u\in U_{\mathfrak n}$.} \label{eq:ftilde} \end{equation} If $\tilde{f}\leftrightarrow{f}$ under this identification, then \begin{equation} \tilde{f}(I)={f}(s)\quad \hbox{whenever $I=[s]$ and $s_{v}=1$ for $v=\infty$ and $v|\mathfrak n$.} \label{eq:weilrel} \end{equation} This relation can be used to recognize $\widetilde{S}_{{\underline{w}}}(\mathfrak n;E)$ in ${S}_{{\underline{w}}}(\mathfrak n)$. Since $U_{c{\cal O}_K}<\widehat{\calO}_{K,c}^\times$, the functions ${f}\colon{K_{\A}^{\times}}\rightarrow\mathbb C^{\times}$ satisfying the relation in \eqref{eq:ftilde} for all ${\lambda}\in K^{\times}$, $z\in\mathbb C^{\times}$ and $u\in\widehat{\calO}_{K,c}^{\times}$ form a linear subspace ${S}_{{\underline{w}}}({\cal O}_{K,c})\subset{S}_{{\underline{w}}}(c{\cal O}_{K})$. The subspace ${S}_{{\underline{w}}}({\cal O}_{K,c})$ includes the Hecke characters trivial on $\widehat{\calO}_{K,c}^\times$, namely ${\Xi}_{\underline{w}}({\cal O}_{K,c})={S}_{\underline{w}}({\cal O}_{K,c})\cap {\rm Hom}({K_{\A}^{\times}}/K^\times\widehat{\calO}_{K,c}^\times,\mathbb C^\times)$. Denote $$ {\bf C}_{\mathfrak n}={K_{\A}^{\times}}/K^{\times}\mathbb C^{\times}U_{\mathfrak n}\simeq {\cal I}_{\mathfrak n}/P_{\mathfrak n},\qquad {\bf C}^{\sharp}_{c}={K_{\A}^{\times}}/K^\times\mathbb C^\times\widehat{\calO}_{K,c}^\times. $$ and let $h_{\mathfrak n}=\vass{{\bf C}_{\mathfrak n}}$ and $h^{\sharp}_{c}=\vass{{\bf C}^{\sharp}_{c}}$. Clearly $h^\sharp_c|h_c$. \begin{lem}\label{th:charchar} \begin{enumerate} \item ${\Xi}_{\underline{w}}({\cal O}_{K})\neq\emptyset$ if and only if $({\cal O}_{K}^{\times})^{{\underline{w}}}=1$. 
\item ${\Xi}_{\underline{w}}({\cal O}_{K,2})={\Xi}_{\underline{w}}(2{\cal O}_K)$ and they are non-trivial if and only if $\vass{\underline{w}}$ is even. \item If $c>2$ then ${\Xi}_{\underline{w}}(c{\cal O}_{K})\neq\emptyset$ and ${\Xi}_{\underline{w}}({\cal O}_{K,c})\neq\emptyset$ if and only if $\vass{\underline{w}}$ is even. \item ${\Xi}_{\underline{w}}({\cal O}_{K,c})$ and ${\Xi}_{\underline{w}}(c{\cal O}_{K})$ are bases for ${S}_{\underline{w}}({\cal O}_{K,c})$ and ${S}_{\underline{w}}(c{\cal O}_{K})$ respectively. \end{enumerate} \end{lem} \par\noindent{\bf Proof. } Let $U<U_1$ and ${\bf C}_U={K_{\A}^{\times}}/K^{\times}\mathbb C^{\times}U$. Then there is a short exact sequence $$ 1\longrightarrow\frac{\mathbb C^{\times}}{H_U}\longrightarrow \frac{{K_{\A}^{\times}}}{K^\times U}\longrightarrow{\bf C}_U\mmap1 $$ where $H_U=\mathbb C^\times\cap K^{\times}U$. The first three points follow from the observation that $$ H_{U_c}= \begin{cases} {\cal O}_K^\times & \text{if $c=1$}, \\ \{\pm1\} & \text{if $c=2$}, \\ \{1\} & \text{if $c>2$}, \end{cases} \qquad H_{\widehat{\calO}_{K,c}^\times}= \begin{cases} {\cal O}_K^\times & \text{if $c=1$}, \\ \{\pm1\} & \text{if $c\geq2$}. \end{cases} $$ For the last part, observe that multiplication by any $\xi\in{\Xi}_{\underline{w}}(c{\cal O}_K)$ defines, for every weight ${\underline{w}}^\prime$, an isomorphism ${S}_{{\underline{w}}^\prime}(c{\cal O}_K)\buildrel\sim\over\rightarrow{S}_{{\underline{w}}+{\underline{w}}^\prime}(c{\cal O}_K)$ which identifies the respective sets of Hecke characters. When $\xi\in\Xi_{\underline{w}}({\cal O}_{K,c})\neq\emptyset$, the isomorphism restricts to an isomorphism of the subspaces ${S}_{{\underline{w}}^\prime}({\cal O}_{K,c})\buildrel\sim\over\rightarrow{S}_{{\underline{w}}+{\underline{w}}^\prime}({\cal O}_{K,c})$. 
So we are reduced to checking the assertion in the case of the null weight $\underline{0}=(0,0)$, which is clear because ${S}_{\underline{0}}(c{\cal O}_{K})$ and ${\Xi}_{\underline{0}}(c{\cal O}_{K})$ (respectively ${S}_{\underline{0}}({\cal O}_{K,c})$ and ${\Xi}_{\underline{0}}({\cal O}_{K,c})$) are the space of functions on the finite abelian group ${\bf C}_c$ (respectively ${\bf C}^{\sharp}_{c}$) and its Pontryagin dual.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par If $\mathfrak m|\mathfrak n$ the inclusion ${\cal I}_{\mathfrak n}<{\cal I}_{\mathfrak m}$ defines a natural restriction map \begin{equation} \widetilde{S}_{{\underline{w}}}(\mathfrak m)\rightarrow\widetilde{S}_{{\underline{w}}}(\mathfrak n). \label{eq:restr} \end{equation} \begin{lem}\label{th:injec} The restriction maps \eqref{eq:restr} are injective. \end{lem} \par\noindent{\bf Proof. } We can assume that $\mathfrak n=\mathfrak m\mathfrak p$ with $\mathfrak p$ prime and $(\mathfrak p,\mathfrak m)=1$. Let $\tilde{f}\in\widetilde{S}_{{\underline{w}}}(\mathfrak m)$ and suppose that $\tilde{f}(I)=0$ for all ideals $I\in{\cal I}_{\mathfrak n}$. Let ${\lambda}\in K^{\times}_{\mathfrak m}$ be such that ${\lambda}{\cal O}_{\mathfrak p}=\mathfrak p{\cal O}_{\mathfrak p}$. Then $\mathfrak p[{\lambda}^{-1}]\in{\cal I}_{\mathfrak n}$ and $0=\tilde{f}(\mathfrak p[{\lambda}^{-1}])={\lambda}^{-{\underline{w}}}\tilde{f}(\mathfrak p)$, i.e. $\tilde{f}(\mathfrak p)=0$, proving that $\tilde{f}=0$ identically. 
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par For ${f}\in{S}_{{\underline{w}}}(\mathfrak n)$ and ${g}\in{S}_{{\underline{w}}^{\prime}}(\mathfrak n)$ let \begin{equation} \scal{f}{g}= \left\{ \begin{array}{ll} h_{\mathfrak n}^{-1}\sum_{{\sigma}\in{\bf C}_{\mathfrak n}}{f}(s_{{\sigma}}){g}(s_{{\sigma}}) =h_{\mathfrak n}^{-1}\sum_{{\sigma}\in{\bf C}_{\mathfrak n}}\tilde{f}(I_{{\sigma}})\tilde{g}(I_{{\sigma}}) & \hbox{if ${\underline{w}}^{\prime}=-{\underline{w}}$} \\ & \\ 0 & \hbox{if ${\underline{w}}^{\prime}\neq-{\underline{w}}$} \end{array} \right., \label{eq:padicpair} \end{equation} where $\{s_{{\sigma}}\}$ and $\{I_{{\sigma}}\}$ are full sets of representatives of ${\bf C}_{\mathfrak n}$ in ${K_{\A}^{\times}}$ and in ${\cal I}_{\mathfrak n}$ respectively. The bilinear form $\scal{\cdot}{\cdot}$ extends by linearity to a pairing on ${S}(\mathfrak n)=\bigoplus_{{\underline{w}}\in\mathbb Z[I_K]}{S}_{{\underline{w}}}(\mathfrak n)$, or on the corresponding space $\widetilde{S}(\mathfrak n)=\bigoplus_{{\underline{w}}\in\mathbb Z[I_K]}\widetilde{S}_{{\underline{w}}}(\mathfrak n)$, compatible with the restriction maps \eqref{eq:restr}. Note that for Hecke characters ${\xi}\in{\Xi}_{{\underline{w}}}(\mathfrak n)$ and ${\xi}^{\prime}\in{\Xi}_{-{\underline{w}}}(\mathfrak n)$ one has the orthogonality relation $$ \scal{\xi}{\xi^\prime}= \left\{ \begin{array}{ll} 1 & \hbox{if ${\xi}^{\prime}={\xi}^{-1}$} \\ 0 & \hbox{otherwise} \end{array} \right.. $$ \begin{rem} \rm It follows at once from the definition \eqref{eq:padicpair} that the pairing $\scal{\cdot}{\cdot}$ takes values in $E$ on $E$-valued forms. \end{rem} Let $p$ be a prime number and let $F$ be a $p$-adic local field with ring of integers ${\cal O}_{F}$. 
Following \cite{Hida86, Tilo96}, the space of $p$-adic $K^{\times}$-modular forms of level $\mathfrak n$ with coefficients in $F$ is the space $\mathfrak S(\mathfrak n;F)={\cal C}^{0}(\mathfrak C_{\mathfrak n},F)$ of $F$-valued continuous functions on $\mathfrak C_{\mathfrak n}=\limproj{}{}_{r\geq0}{\bf C}_{\mathfrak n p^{r}}$. It is a $p$-adic Banach space under the sup norm $\vvass{\phi}=\sup_{x\in\mathfrak C_{\mathfrak n}}\vass{\phi(x)}$ and we denote $\mathfrak S(\mathfrak n;{\cal O}_{F})$ its unit ball. Assume that $E$ is a subfield of $F$ (e.g. $F$ is the completion of $E$ at a prime dividing $p$) and write $\widetilde{S}_{{\underline{w}}}(\mathfrak n;F)=\widetilde{S}_{{\underline{w}}}(\mathfrak n;E)\otimes F$ and $\widetilde{S}_{{\underline{w}}}({\cal O}_{K,c};F)=\widetilde{S}_{{\underline{w}}}({\cal O}_{K,c};E)\otimes F$. \begin{pro}[\cite{Tilo96}]\label{th:padicembed} For every ideal $\mathfrak m|\mathfrak n$ and for every ideal $\mathfrak q$ with support included in the set of primes dividing $p$ there is a natural embedding $$ \widetilde{S}(\mathfrak m\mathfrak q;F)= \bigoplus_{{\underline{w}}\in\mathbb Z[I_K]}\widetilde{S}_{{\underline{w}}}(\mathfrak m\mathfrak q;F) \hookrightarrow\mathfrak S(\mathfrak n;F). $$ \end{pro} \par\noindent{\bf Proof. } We may use Lemma \ref{th:injec} to assume that $\mathfrak m\mathfrak q=\mathfrak n p^{a}$ for some $a\geq1$. Since $\bigcap_{r\geq0}P_{\mathfrak n p^{r}}=\{1\}$ the group $I_{\mathfrak n p}$ embeds as a dense subset in $\mathfrak C_{\mathfrak n}$. The restriction of $\tilde{f}\in\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$ to a coset $I\cdot P_{\mathfrak n p^{a}}$ is the function $I({\lambda})\mapsto\tilde{f}(I){\lambda}^{{\underline{w}}}$. Since ${\underline{w}}\in\mathbb Z[I_{K}]$ the character ${\lambda}^{{\underline{w}}}$ is continuous for the $p$-adic topology on $K^{\times}$ and so extends to a character $\chi_{{\underline{w}}}$ of $(K\otimes\mathbb Q_{p})^{\times}$. 
Therefore $\tilde{f}$ extends locally to cosets of $1+p^{a}({\cal O}_{K}\otimes\mathbb Z_{p})$ and globally to the whole of $\mathfrak C_{\mathfrak n}$. The injectivity on the direct sum $\widetilde{S}(\mathfrak n p^{a};F)$ follows from the linear independence of characters. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par We shall denote $\wh{f}$ the $p$-adic modular form associated to the $K^{\times}$-modular form $\tilde{f}$. If $\tilde{f}=\tilde{\xi}$ is a Hecke character, the $p$-adic form $\wh{\xi}$ is again a character which is sometimes called the \textit{$p$-adic avatar} of $\tilde{\xi}$ (or of ${\xi}$). The density of ${\cal I}_{\mathfrak n p}$ in $\mathfrak C_{\mathfrak n}$ also implies that the image of $\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$ in $\mathfrak S(\mathfrak n;F)$ is characterized by the functional relations $\tilde{f}({\lambda} s)={\lambda}^{{\underline{w}}}\tilde{f}(s)$ for all ${\lambda}\equiv1\bmod\mathfrak n p^{a}$. Thus, the association $\tilde{f}\mapsto\wh{f}$ identifies $\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$ with the closed linear subspace \begin{equation} \mathfrak S_{{\underline{w}},a}(\mathfrak n;F)=\left\{\sopra {\mbox{$\phi\in\mathfrak S(\mathfrak n;F)$ such that $\phi(sx)=\phi(s)\chi_{{\underline{w}}}(x)$}} {\mbox{ for all $x\in1+p^{a}({\cal O}_{K}\otimes\mathbb Z_{p})$}} \right\} \label{eq.clospace} \end{equation} (when $a=0$ the domain for $x$ is $({\cal O}_{K}\otimes\mathbb Z_{p})^{\times}$). Let $\overline{S}(\mathfrak n p^{a};F)=\wh{\bigoplus}_{{\underline{w}}} \mathfrak S_{{\underline{w}},a}(\mathfrak n;F)$ be the closure of $\widetilde{S}(\mathfrak n p^a;F)$ in $\mathfrak S(\mathfrak n;F)$. Since $\mathfrak S_{{\underline{w}},a}(\mathfrak n;F)$ is closed, the projection onto the ${\underline{w}}$-th summand extends to a projection $\pi_{{\underline{w}},a}\colon\overline{S}(\mathfrak n p^{a};F)\rightarrow\mathfrak S_{{\underline{w}},a}(\mathfrak n;F)$. 
Define a pairing $$ \scalq{\cdot}{\cdot}\colon \overline{S}(\mathfrak n p^{a};F)\times\overline{S}(\mathfrak n p^{a};F)\longrightarrow F $$ as the composition $$ \overline{S}(\mathfrak n p^{a};F)\times\overline{S}(\mathfrak n p^{a};F) \stackrel{m}{\longrightarrow}\overline{S}(\mathfrak n p^{a};F) \stackrel{\pi_{\underline{0},a}}{\longrightarrow} \mathfrak S_{\underline{0},a}(\mathfrak n;F) \stackrel{\mu_{\mathrm{H}}}{\longrightarrow}F $$ where $m$ is multiplication and $\mu_{\mathrm{H}}$ is the Haar distribution, which is bounded on the space $\mathfrak S_{\underline{0},a}(\mathfrak n;F)$. \begin{pro}\label{th:spiden} The pairing $\scal{\cdot}{\cdot}$ extends to a continuous pairing on $\overline{S}(\mathfrak n p^{a};F)$ which coincides with $\scalq{\cdot}{\cdot}$. \end{pro} \par\noindent{\bf Proof. } The pairing $\scalq{\cdot}{\cdot}$ is continuous as a composition of continuous mappings. Thus it is enough to check the identity $\scal{\tilde{f}}{\tilde{g}}=[{\wh{f}},{\wh{g}}]$ for $\tilde{f}\in\widetilde{S}_{{\underline{w}}}(\mathfrak n p^{a};F)$ and $\tilde{g}\in\widetilde{S}_{{\underline{w}}^{\prime}}(\mathfrak n p^{a};F)$. It follows from the definition \eqref{eq:padicpair} and the density of ${\cal I}_{\mathfrak n p}$ in $\mathfrak C_{\mathfrak n}$, since on ${\cal I}_{\mathfrak n p}$ the restrictions of $\tilde{f}$ and $\wh{f}$ and of $\tilde{g}$ and $\wh{g}$ coincide. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par Recall that a $p$-adic distribution on $\mathbb Z_p$ with values in the $p$-adic Banach space $W$ over $F$ is a linear operator ${\cal C}^0(\mathbb Z_{p},F)\rightarrow W$. 
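The moment computations below rest on the binomial identity for convolutions, $m_k(\mu_1\ast\mu_2)=\iint_{\mathbb Z_p^2}(x+y)^k\,d\mu_1(x)\,d\mu_2(y)=\sum_i\binom{k}{i}m_i(\mu_1)m_{k-i}(\mu_2)$. A minimal numerical sketch (ours; distributions are represented here by truncated moment sequences, a simplification for illustration only) checks this on Dirac measures, whose convolution translates the support:

```python
from math import comb

def conv_moments(m1, m2):
    # Moments of the convolution: m_k(mu1 * mu2) = sum_i C(k,i) m_i(mu1) m_{k-i}(mu2),
    # obtained by integrating (x+y)^k against the product measure mu1 x mu2.
    n = min(len(m1), len(m2))
    return [sum(comb(k, i) * m1[i] * m2[k - i] for i in range(k + 1))
            for k in range(n)]

def dirac_moments(t, n):
    # The Dirac measure at t has moments m_k = t^k.
    return [t ** k for k in range(n)]

# Convolving Dirac measures adds the supports: delta_3 * delta_4 = delta_7,
# i.e. the moment sequence of the convolution is 7^k.
n = 6
print(conv_moments(dirac_moments(3, n), dirac_moments(4, n)) == dirac_moments(7, n))  # True
```

This is exactly the computation carried out (with values in $\overline{S}(\mathfrak n p^{a};F)$ instead of scalars) in the proof of the lemma that follows.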
Given two $p$-adic distributions $\mu_1$ and $\mu_2$ on $\mathbb Z_{p}$ with values in $\overline{S}(\mathfrak n p^{a};F)$ we construct a new distribution $\mu_{\scalq{\mu_1}{\mu_2}}$ with values in $F$ as the composition $$ {\cal C}^0(\mathbb Z_{p},F)\stackrel{\mu_1\ast\mu_2}{\longrightarrow} \overline{S}(\mathfrak n p^{a};F) \stackrel{\pi_{\underline{0},a}}{\longrightarrow} \mathfrak S_{\underline{0},a}(\mathfrak n;F) \stackrel{\mu_{\mathrm{H}}}{\longrightarrow}F, $$ where $\mu_1\ast\mu_2$ is the convolution product of $\mu_1$ and $\mu_2$. If $\mu_1$ and $\mu_2$ are measures (bounded distributions), $\mu_{\scalq{\mu_1}{\mu_2}}$ is not a measure in general since the map $\pi_{\underline{0},a}$ is not bounded. Denote by $m_k(\mu)=\int_{\mathbb Z_p}x^k\,d\mu(x)$, $k\geq0$, the $k$-th moment of the distribution $\mu$. \begin{lem}\label{th:measuremix} Let $M\in\mathbb N\cup\{\infty\}$ and suppose that there exist pairwise distinct weights $\{{\underline{w}}_{k}\}$ for $0\leq k<M$ such that $m_{k}(\mu_{1})\in\widetilde{S}_{{\underline{w}}_{k}}(\mathfrak n p^{a};F)$ and $m_{k}(\mu_{2})\in\widetilde{S}_{-{\underline{w}}_{k}}(\mathfrak n p^{a};F)$ for all $0\leq k<M$. Then $$ m_{k}(\mu_{\scalq{\mu_1}{\mu_2}})= \left\{ \begin{array}{ll} 0 & \mbox{if $0\leq k<M$ is odd,} \\ \binom{2l}{l}\scalq{m_{l}(\mu_{1})}{m_{l}(\mu_{2})} & \mbox{if $0\leq k=2l<M$ is even.} \end{array} \right. $$ If $M=\infty$ the latter formulae characterize the distribution $\mu_{\scalq{\mu_1}{\mu_2}}$ completely. \end{lem} \par\noindent{\bf Proof. } By direct computation $m_k(\mu_{\scalq{\mu_1}{\mu_2}})=\mu_H\circ\pi_{\underline{0},a} \left(\iint_{\mathbb Z_{p}^2}(x+y)^{k}\,d\mu_1(x) d\mu_2(y)\right)= \sum_{i=0}^{k}\vvec{k}{i}\mu_H\circ\pi_{\underline{0},a}\left(m_i(\mu_1)m_{k-i}(\mu_2)\right)= \sum_{i=0}^{k}\vvec{k}{i}\scalq{m_i(\mu_1)}{m_{k-i}(\mu_2)}. $ The formula follows at once from the orthogonality relations in \eqref{eq:padicpair} since ${\underline{w}}_{i}={\underline{w}}_{k-i}$ only if $k=2l$ is even and $i=l$. 
The final assertion is also clear.\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par Let $\mu$ be a $p$-adic distribution on $\mathbb Z_p$ with values in a $p$-adic space ${\cal S}$ of continuous $F$-valued functions on a profinite space $T$. For every $t\in T$, evaluation at $t$ defines an $F$-valued distribution $\mu(t)$ on $\mathbb Z_p$, $\mu(t)(\phi)=\mu(\phi)(t)$. Conversely, a family $\{\mu_t\}_{t\in T}$ of $F$-valued distributions such that the function $\mu(\phi)(t)=\mu_t(\phi)$ is in ${\cal S}$ for all $\phi\in{\cal C}^0(\mathbb Z_{p},F)$ defines a $p$-adic distribution $\mu$ on $\mathbb Z_p$ with values in ${\cal S}$ and $\mu(t)=\mu_t$ for all $t\in T$, which is obviously unique for this property. \begin{lem}\label{le:UnBoundPr} Let $T$ be a profinite space, $\cal S$ a $p$-adic space of continuous $F$-valued functions and $\mu$ a $p$-adic distribution on $\mathbb Z_p$ with values in $\cal S$. Then $\mu$ is a $p$-adic measure if and only if $\mu(t)$ is a $p$-adic measure for all $t\in T$. \end{lem} \par\noindent{\bf Proof. } If $\mu$ is bounded, the distributions $\mu(t)$ are obviously bounded. Suppose that $\mu(t)$ is bounded for all $t\in T$. Let $\{\phi_k\}$, $k=0,1,2,\ldots$, be functions in ${\cal C}^0(\mathbb Z_{p},F)$ with $\vvass{\phi_k}=1$ and let $\varphi_k=\mu(\phi_k)$. If $\vvass{\varphi_k}=p^{r_k}$ choose $t_k\in T$ such that $\vass{\varphi_k(t_k)}_p=p^{r_k}$. If the set of values $\left\{p^{r_k}\right\}$ is not bounded we may assume without loss of generality that $r_1<r_2<r_3<\cdots$ and, since each $\mu(t_k)$ is bounded, that $\left\{t_k\right\}$ is an infinite set. By compactness of $T$, there exists $\bar{t}\in T$, ${\bar t}\neq t_k$ for all $k$, such that every neighborhood of $\bar t$ meets $\left\{t_k\right\}$. This contradicts the boundedness of $\mu({\bar t})$ since $\vass{\varphi(t)}_p$ is locally constant for all $\varphi\in\cal S$. In particular, the sequence $\mu\left(\binom xk\right)$ is bounded and $\mu$ is a measure. 
\penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par As an application, let $\tilde{\chi}\in\widetilde{\Xi}_{{\underline{w}}}(\mathfrak n p^a)$ and $\tilde{\xi}\in\widetilde{\Xi}_{{\underline{w}}^{\prime}}(\mathfrak n p^a)$ be Hecke characters taking values respectively in ${\cal O}_{F}^{\times}$ and ${\cal O}_{F^\prime}^{\times}$ where $\mathbb Q_p\subseteq F^\prime$ is a totally ramified subextension of $F$. For every $x\in\mathfrak C_{\mathfrak n}$ the series $$ \sum_{k=0}^{\infty}\frac{1}{k!}\wh{\chi}{\wh{\xi}}^{k}(x)Z^{k} =\wh{\chi}(x)(1+T)^{\wh{\xi}(x)},\qquad T=e^{Z}-1=Z+\frac12Z^{2}+\frac1{3!}Z^{3}+\cdots, $$ has integral coefficients in the variable $T$ (the corresponding measure is $\wh{\chi}(x)\partial_{\wh{\xi}(x)}$, where $\partial_t$ denotes the Dirac measure concentrated at $t$). Thus, there exists a unique measure $\mu_{\chi,\xi}$ on $\mathbb Z_{p}$ with values in $\overline{S}(\mathfrak n;{\cal O}_{F})$ such that $m_{k}(\mu_{\chi,\xi})=\wh{\chi}\wh{\xi}^{k}$. When ${\underline{w}}^{\prime}\neq\underline{0}$ the moments' weights are pairwise distinct. \subsection{Expansions as distributions} Let $\jmath\colon K\hookrightarrow D$ be a normalized embedding of conductor $c=c_{\tau,N}$ with corresponding $\tau\in{\rm CM}_{{\Delta},K}$ and $x\in{\rm CM}({\Delta},N;{\cal O}_{K,c})$. Let $y=\mathrm{Im}(\tau)$. 
The embedding $\jmath$ defines by scalar extension a diagram \begin{equation} \begin{array}{ccccccc} & & K_\mathbb A^\times/K^\times\mathbb R^\times & \longrightarrow & D^\times\backslash D_\mathbb A^\times/Z_\infty \\ & & \downarrow & & \downarrow \\ & & K_\mathbb A^\times/K^\times\mathbb R^\times\wh{{\cal O}}_{K,c}^\times & \longrightarrow & D^\times\backslash D_\mathbb A^\times/Z_\infty\wh{{\cal R}}_N^\times \\ & & \downarrow & & \downarrow \\ {\bf C}^{\sharp}_{c} & \simeq & K_\mathbb A^\times/K^\times\mathbb C^\times\wh{{\cal O}}_{K,c}^\times & \longrightarrow & D^\times\backslash D_\mathbb A^\times/\jmath(\mathbb C^\times)\wh{{\cal R}}_N^\times & \simeq & {{\Gamma}_{0}}({\Delta},N)\backslash\mathfrak H \end{array} \label{eq:maindiag} \end{equation} where the vertical maps are the natural quotient maps and $Z_\infty$ is the center of $D_\infty^\times$. Under the decomposition \begin{equation} D_\mathbb A^\times=D_\mathbb Q^\times{\rm GL}_2^+(\mathbb R)\wh{{\cal R}}_N^\times \label{eq:decomp} \end{equation} the idele $d=d_{\mathbb Q}g_\infty u$ corresponds to the point represented by $g_\infty\tau$. Class field theory provides an identification ${\bf C}^{\sharp}_{c}\simeq{\rm Gal}(H_c/K)$ where $H_c$ is the ray class field of conductor $c$. It is also well-known that the points in the image of the bottom map in \eqref{eq:maindiag} are defined over $H_c$, so that ${\rm Gal}(H_c/K)$ acts naturally on them, and that the two actions are compatible (Shimura reciprocity law, \cite{ShiRed}). In particular, if $s_{{\sigma}}\in{K_{\A}^{\times}}$ represents ${\sigma}\in{\rm Gal}(H_c/K)$, then $s_{{\sigma}}$ maps to $x^{{\sigma}}$ and $A_{x^{\sigma}}=A_x^{(\inv{\sigma})}$. Write $A_x(\mathbb C)=A_{\tau}=\mathbb C^{{\epsilon}}/{\Lambda}_\tau$ with ${\Lambda}_\tau={\Lambda}\vvec{\tau}{1}$ where ${\epsilon}=1$ and ${\Lambda}=\mathbb Z^2\subset\mathbb C$ if $D$ is split and ${\epsilon}=2$ and ${\Lambda}=\Phi_{\infty}({\cal R}_{1})\subset\mathbb C^2$ if $D$ is non-split. 
The theory of complex multiplication implies that $A_{x^{\sigma}}(\mathbb C)\simeq\mathbb C^{{\epsilon}}/s_{{\sigma}}{\Lambda}_\tau$ where $s_{{\sigma}}{\Lambda}_\tau={\Lambda} d_{{\sigma}}^{-1}\vvec{\tau}{1}$ if $s_{{\sigma}}^{-1}=d_{{\sigma}}g_{{\sigma}}u_{{\sigma}}$ under \eqref{eq:decomp}. For a fixed prime $p$ one can choose representatives $\left\{s_{{\sigma}}\right\}\subset{K_{\A}^{\times}}$ normalized as follows: $$ \left\{\begin{array}{l} s_{{\sigma},\infty}=1, \\ s_{{\sigma},v}\mbox{ is $v$-integral at all finite places $v$ and a $v$-unit at the places $v|pc$.} \end{array}\right. $$ For each such representative $s$ there is a diagram of complex tori $$ \begin{CD} A_{g\tau}=\mathbb C^{{\epsilon}}/{\Lambda}_{g\tau} @>{j(g,\tau)}>> \mathbb C^{{\epsilon}}/s{\Lambda}_{\tau} @>{\pi_{s}}>> \mathbb C^{{\epsilon}}/{\Lambda}_{\tau}\\ @. @| @| \\ {} @. A_{x^{\sigma}}(\mathbb C) @. A_x(\mathbb C)\\ \end{CD} $$ where $s=dgu$ under \eqref{eq:decomp} and $\pi_{s}$ is the natural quotient map arising from the inclusion $s{\Lambda}_{\tau}\subset{\Lambda}_\tau$. The element $g\in{\rm GL}_{2}^{+}(\mathbb R)$ is defined by $g\tau\in\mathfrak H$ only up to an element in ${\cal O}_{K,c}^{\times}$. Choose $p$ and a place $v$ over $p$ in a number field $L$ large enough so that for each $s$ the triple $(g\tau,v,e)$ is a $p$-ordinary test triple and that the isogenies $\pi_{s}$ are defined over $L$. \begin{lem}\label{lem:compperiods} With the above notations, it is possible to choose for every ${\sigma}\in{\rm Gal}(H_c/K)$ an invariant 1-form on $A_{x^{\sigma}}$ that generates ${{\cal L}}(x^{\sigma})\otimes{\cal O}_{(v)}$ and for which \begin{enumerate} \item ${\Omega}_{\infty}(g\tau)\sim_{{\cal O}_{(v)}^\times}j(g,\tau){\Omega}_{\infty}(\tau)$; \item ${\Omega}_{p}(x^{{\sigma}})\sim_{{\cal O}_{(v)}^\times}{\Omega}_{p}(x)$. \end{enumerate} \end{lem} \par\noindent{\bf Proof. } Take ${\omega}_o\in H^0(A_{x}(\mathbb C),{\cal L}(x)\otimes\mathbb C)$. 
The quotient map $\pi_{s}$ is the identity on (co)tangent spaces and commutes with the action of the endomorphisms. Thus ${\omega}_s=\pi_s^*({\omega}_o)\in H^0(A_{x^{\sigma}}(\mathbb C),{\cal L}(x^{\sigma}))$ and $p({\omega}_s,g\tau)=j(g,\tau)p({\omega}_o,\tau)$. Furthermore, $p$ doesn't divide the degree of $\pi_s$ and so $\pi_s^*$ is an isomorphism between the natural $p$-adic structures on the spaces of invariant forms. This proves part 1. For part 2 observe that the reduction mod $p$ of the dual map $\pi_s^t$ gives an isomorphism of the rank 1 Tate module quotient $T$ of \S 3.2. Thus $\pi_s^*({\omega}_{u}(P))$ is a universal form on the deformations of $\widetilde{A}_{x^{\sigma}}$ by formula \eqref{eq:omcomp} and the equality follows. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par If $s$ and $s^{\prime}=s{\lambda} zu$ with ${\lambda}\in K^{\times}$, $z\in\mathbb C^{\times}$ and $u\in\wh{{\cal O}}_{K,c}^\times$ are two normalized representatives of the same ${\sigma}\in{\bf C}^\sharp_{c}$, a comparison of the relations in lemma \ref{lem:compperiods} for the decompositions $s=dgr$ and $s^{\prime}=({\lambda} d)(gz)(ru)$ shows that ${\omega}_{s^{\prime}}\sim_{{\cal O}_{K,c}^{\times}}z{\omega}_{s}$. Therefore the construction of ${\omega}_{s}$ can be extended modulo ${\cal O}_{K,c}^{\times}$-equivalence to all $s\in{K_{\A}^{\times}}$ by setting \begin{equation} {\omega}_{s{\lambda} zu}\sim_{{\cal O}_{K,c}^{\times}}z{\omega}_{s} \quad\mbox{for all ${\lambda}\in K^{\times}, z\in\mathbb C^{\times}, u\in\wh{{\cal O}}_{K,c}^\times$ and $s$ normalized.} \label{eq:definoms} \end{equation} Let $f\in{\rm M}_{2{\kappa},0}({\Delta},N)$ and normalize the invariant form as in proposition \ref{teo:Thkalgebraic}. 
For all integers $r\geq0$ such that $({\cal O}_{K,c}^{\times})^{2({\kappa}+r)}=1$ define a function ${c}_{(r)}(f,x)\colon{K_{\A}^{\times}}\rightarrow\mathbb C$ as $$ {c}_{(r)}(f,x)(s)=\frac{{\delta}_{2{\kappa}}^{(r)}f(g\tau)}{p({\omega}_{s},g\tau)^{2({\kappa}+r)}} $$ where $s=dgu$ as above. \begin{pro}\label{th:meascrfx} Suppose that $f$ is defined over ${\cal O}_{(v)}$ and assume that $({\cal O}_{K,c}^{\times})^{2({\kappa}+r)}=1$. Then $c_{(r)}(f,x)\in{S}_{(2({\kappa}+r),0)}({\cal O}_{K,c})\cap{\cal S}(cR_{K},{\cal O}_{v})$. \end{pro} \par\noindent{\bf Proof. } The modular relation for $c_{(r)}(f,x)$ follows at once from \eqref{eq:definoms} and the definition since $gz\tau=g\tau$. For an idele $s$ satisfying the conditions \eqref{eq:weilrel} for $\mathfrak n=(pc)$ the invariant form ${\omega}_{s}$ satisfies proposition \ref{teo:Thkalgebraic} and then theorem \ref{thm:equality} together with lemma \ref{lem:compperiods} shows that as a $p$-adic $K^{\times}$-modular form ${c}_{(r)}(f,x)$ has coefficients in $L_{v}$ and in fact belongs to the unit ball. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par Assume that ${\cal O}_{K,c}^{\times}=\{\pm1\}$. Let $\mu_{f,x}$ be the $p$-adic distribution on $\mathbb Z_p$ with values in $\overline{S}(c{\cal O}_K;L_v)$ such that $m_r(\mu_{f,x})=\wh{c}_{(r)}(f,x)$ and let $\mu_{\chi,\xi}$ be the $p$-adic measure associated to a choice of Gr\"ossencharakters $\chi\in\Xi_{(-2{\kappa},0)}({\cal O}_{K,c})$, $\xi\in\Xi_{(-2,0)}({\cal O}_{K,c})$ as in the discussion after lemma \ref{le:UnBoundPr}. \begin{thm}\label{th:measexists} There exist a $p$-adic field $F$ and a $p$-adic measure $\mu(f,x;\chi,\xi)$ on $\mathbb Z_p$ with values in ${\cal O}_F$ such that $$ m_r\left(\mu_{[\mu_{f,x},\mu_{\chi,\xi}]}\right) =\left\{ \begin{array}{ll} 0 & \mbox{if $0\leq r$ is odd,} \\ (h^\sharp_c)^{-1}{\Omega}_p^{-2({\kappa}+l)}\binom{2l}{l}m_l(\mu(f,x;\chi,\xi)) & \mbox{if $0\leq r=2l$ is even,} \end{array} \right. 
$$ \end{thm} \par\noindent{\bf Proof. } Let $F$ be large enough to contain $L_v$, the field of values of $\chi$ and $\xi$, and the $p$-adic period ${\Omega}_p$. The expression follows from lemma \ref{th:measuremix} and the fact that for a suitable choice of representatives for ${\bf C}^{\sharp}_{c}$ we have, combining the definition \eqref{eq:padicpair} with theorem \ref{thm:equality}, proposition \ref{th:spiden} and lemma \ref{lem:compperiods}, $\scalq{\wh{\chi}{\wh{\xi}}^{l}}{\wh{c}_{(l)}(f,x)}= (h^\sharp_c)^{-1}{\Omega}_p^{-2({\kappa}+l)}\sum_{\sigma}\wh{\chi}{\wh{\xi}}^{l}(s)b_l(x^{\sigma})$. Finally, each term ${\wh{\xi}}^{l}(s)b_l(x^{\sigma})$ is the $l$-th moment of a suitable $p$-adic measure on $\mathbb Z_p$ because the identification $\sum_{n=0}^\infty(b_n(x^{\sigma})/n!)T^n=\sum_{n=0}^\infty a_nU^n$ with $a_n\in{\cal O}_F$ through the substitution $U=e^T-1$ yields an identification $\sum_{n=0}^\infty(b_n(x^{\sigma})/n!)z^nT^n=\sum_{n=0}^\infty a_nV^n$ where $V=(U+1)^z-1$, and this substitution preserves ${\cal O}_F$-integrality when $z$ is a unit in a field with residue field $\mathbb F_p$. Conclude using the linearity of measures. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \subsection{Special $L$-values} For $f\in M_{2{\kappa},0}^\infty({\Delta},N)$, let $\phi_f\in L^2(D_\mathbb Q^\times\backslash D_\mathbb A^\times)$ be the usual $\wh{{\cal R}}_N^\times$-invariant $C^\infty$ lift of $f$ to $D_\mathbb A^\times$. Namely, $\phi_f(d)=f(g_\infty\cdot i)j(g_\infty,i)^{-2{\kappa}}\det(g_\infty)^{\kappa}$ if $d=d_{\mathbb Q}g_\infty u$ under \eqref{eq:decomp}. The Lie algebra $\mathfrak g=\mathfrak g\mathfrak l_2\simeq{\rm Lie}(D_\infty^\times)$ acts on the $\mathbb C$-valued $C^\infty$ functions on $D_\mathbb A^\times$ by $(A\cdot\varphi)(d)=\left.\frac{d}{dt}\varphi(de^{tA})\right|_{t=0}$. By linearity and composition the action extends to the complexified universal enveloping algebra $\mathfrak A(\mathfrak g)_{\mathbb C}$. 
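Aside (not part of the original argument): the formal identity used in the proof of theorem \ref{th:measexists} — that the rescaling $T\mapsto zT$ of the exponential generating function corresponds to the substitution $V=(U+1)^z-1$ on the side of the variable $U=e^T-1$, so that the $n$-th moment picks up exactly the factor $z^n$ — can be checked symbolically on small truncations. The sketch below, with arbitrary illustrative coefficients $a_n$ and an integer value of $z$ standing in for a $p$-adic unit, uses Python/SymPy:

```python
import sympy as sp

T = sp.symbols('T')
a = [3, 1, 4, 1, 5]   # arbitrary illustrative coefficients a_n (hypothetical data)
z = 7                  # stand-in for a p-adic unit, taken to be an integer here
N = len(a)

# f(T) = sum a_n U^n with U = e^T - 1, so that f(T) = sum (b_n/n!) T^n
f  = sum(a[n] * (sp.exp(T) - 1)**n for n in range(N))
# replacing U by V = (U+1)^z - 1 = e^{zT} - 1 amounts to rescaling T -> zT
fz = sum(a[n] * (sp.exp(z * T) - 1)**n for n in range(N))

# extract the moments b_n and their rescaled counterparts
b  = [sp.factorial(n) * sp.series(f,  T, 0, N).removeO().coeff(T, n) for n in range(N)]
bz = [sp.factorial(n) * sp.series(fz, T, 0, N).removeO().coeff(T, n) for n in range(N)]

# the n-th moment is multiplied by exactly z^n
assert all(sp.simplify(bz[n] - z**n * b[n]) == 0 for n in range(N))
```

The ${\cal O}_F$-integrality of the coefficients of $(1+U)^z-1$ for a unit $z$ is not visible in this truncated check; it rests on the integrality of the $p$-adic binomial coefficients $\binom zn$.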
Let $$ I=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right), \qquad H=\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right), \qquad X^\pm=\frac12\left( \begin{array}{cc} 1 & \pm i \\ \pm i & -1 \end{array} \right) $$ be the usual eigenbasis of $\mathfrak g_{\mathbb C}$ for the adjoint action of the maximal compact subgroup $$ {\rm SO}(2)=\left\{\hbox{$r({\theta})=\left( \begin{array}{cc} \cos{\theta} & -\sin{\theta} \\ \sin{\theta} & \cos{\theta} \end{array} \right)$ such that ${\theta}\in\mathbb R$}\right\}. $$ Since ${\rm Ad}(r({\theta}))X^\pm=e^{\mp 2i{\theta}}X^\pm$, we have $X^\pm\cdot\phi_{f}\in M_{2{\kappa}\pm2,0}^\infty({\Delta},N)$. A standard computation (e.g. \cite[\S\S2.1--2]{Bu96}) links the Lie action to the Maass operators of \S\ref{ss:maass}, namely $$ X^+\cdot\phi_f=-4\pi\phi_{{\delta}_{2{\kappa}}f}. $$ For $r\geq0$ let \begin{equation} \phi_r=\left(-\frac{1}{4\pi}X^{+}\right)^{r}\cdot\phi_f= \phi_{{\delta}_{2{\kappa}}^{r}f}. \label{eq:defphir} \end{equation} \begin{dfn}\label{th:Jintegral} \rm Let $f\in M_{2{\kappa},0}({\Delta},N)$, $\xi\in{\Xi}_{\underline{w}}(c{\cal O}_{K})$ for a weight ${\underline{w}}$ such that $\vass{{\underline{w}}}=0$ and $\tau=t+iy\in{\rm CM}_{{\Delta},K}$ with $c_{\tau,N}=c$ and associated normalized embedding $\jmath$. For each $r\geq0$, let $$ J_r(f,\xi,\tau)=\int_{{K_{\A}^{\times}}/K^\times\mathbb R^\times}\phi_r(\jmath(t)d_\infty)\xi(t)\,dt $$ where $d_\infty=\smallmat{y^{1/2}}{ty^{1/2}}0{y^{-1/2}}$ and $dt$ is the Haar measure on ${K_{\A}^{\times}}$ whose archimedean component is normalized so that $\mathrm{vol}(\mathbb C^\times/\mathbb R^\times)=\pi$ and such that the local groups of units have volume 1 (hence $m_c=\mathrm{vol}(\wh{{\cal O}}_{K,c}^\times)= [({\cal O}_K/c{\cal O}_K)^\times\colon(\mathbb Z/c\mathbb Z)^\times]^{-1}$). \end{dfn} We show that $J_r(f,\xi,\tau)$ can be expressed in terms of the pairing introduced in \S\ref{se:padicforms}. Write $w_{K,c}=\vass{{\cal O}_{K,c}^{\times}}$. 
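As a numerical sanity check (an illustration, not part of the original text), the adjoint-action relation ${\rm Ad}(r({\theta}))X^{\pm}=e^{\mp2i{\theta}}X^{\pm}$ for the eigenbasis above, together with the commutation relations $[H,X^{\pm}]=\pm2X^{\pm}$, can be verified directly with NumPy:

```python
import numpy as np

theta = 0.73  # a generic test angle
c, s = np.cos(theta), np.sin(theta)
r = np.array([[c, -s], [s, c]])                 # the rotation r(theta) in SO(2)

H  = np.array([[0, -1j], [1j, 0]])
Xp = 0.5 * np.array([[1,  1j], [ 1j, -1]])      # X^+
Xm = 0.5 * np.array([[1, -1j], [-1j, -1]])      # X^-

# Ad(r(theta)) X^{pm} = r X^{pm} r^{-1} = e^{mp 2i theta} X^{pm};
# r.T is the inverse of the rotation r
assert np.allclose(r @ Xp @ r.T, np.exp(-2j * theta) * Xp)
assert np.allclose(r @ Xm @ r.T, np.exp( 2j * theta) * Xm)

# [H, X^{pm}] = pm 2 X^{pm}, consistent with the exponents above
assert np.allclose(H @ Xp - Xp @ H,  2 * Xp)
assert np.allclose(H @ Xm - Xm @ H, -2 * Xm)
```

The same check can be repeated for any angle, since both sides depend continuously on ${\theta}$.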
\begin{thm}\label{th:compJ} Let $f\in M_{2{\kappa},0}({\Delta},N)$ and $\xi\in\Xi_{(w,-w)}({\cal O}_{K,c})$. Assume that $({\cal O}_{K,c}^{\times})^{2w}=1$. Then $$ J_r(f,\xi,\tau)= \frac{\pi m_c}{w_{K,c}}h_{c}^{\sharp}y^{-w}{\Omega}_\infty(\tau)^{-2w} {\scal{c_{(r)}(f,x)}{\xi\vvass{N_{K/\mathbb Q}}^{-w}}}. $$ \end{thm} \par\noindent{\bf Proof. } Since the integrand is right $\wh{{\cal O}}^\times_{K,c}$-invariant, we have $J_r(f,\xi,\tau)=m_c\int_{{K_{\A}^{\times}}/K^\times\mathbb R^\times\wh{{\cal O}}^\times_{K,c}} \phi_r(\jmath(t)d_\infty)\xi(t)\,dt$. For a chosen set of representatives $\{s_{\sigma}\}$ of ${\bf C}^{\sharp}_{c}$ there is a decomposition $$ {K_{\A}^{\times}}/K^\times\mathbb R^\times\wh{{\cal O}}^\times_{K,c}= \bigcup_{{\sigma}\in{\bf C}^{\sharp}_{c}}\mathbb C^\times s_{\sigma}/\mathbb R^\times{\cal O}_{K,c}^{\times} \qquad \mbox{(disjoint union).} $$ Therefore, $J_r(f,\xi,\tau)=m_c\sum_{{\sigma}\in{\bf C}^{\sharp}_{c}}\xi(s_{\sigma})\int_{\mathbb C^\times/\mathbb R^\times{\cal O}_{K,c}^{\times}} \phi_r(\jmath(s_{\sigma} z)d_\infty)\xi_\infty(z)\,d^\times z$. Since the standard normalized embedding of $\mathbb C$ in ${\rm M}_2(\mathbb R)$ is $\rho e^{i{\theta}}\mapsto\rho r({\theta})$, we can write $D_\infty^\times\ni\jmath(z)=\rho d_\infty r({\theta})d_\infty^{-1}$. Therefore $\int_{\mathbb C^\times/\mathbb R^\times{\cal O}_{K,c}^{\times}}\phi_r(\jmath(s_{\sigma} z)d_\infty) \xi_\infty(z)\,d^\times z=w_{K,c}^{-1}\int_0^{\pi}\phi_r(\jmath(s_{\sigma})d_\infty r({\theta})) \xi_\infty(e^{i{\theta}})\,d{\theta}=w_{K,c}^{-1}\phi_r(\jmath(s_{\sigma})d_\infty) \int_0^{\pi}e^{-2i({\kappa}+r){\theta}}e^{-2iw{\theta}}\,d{\theta}$ and \begin{equation} J_r(f,\xi,\tau)=\left\{ \begin{array}{ll} \pi m_cw_{K,c}^{-1}\sum_{{\sigma}\in{\bf C}^{\sharp}_{c}} \xi(s_{\sigma})\phi_r(\jmath(s_{\sigma})d_\infty) & \hbox{if $w=-{\kappa}-r$} \\ 0 & \hbox{otherwise} \end{array}\right.. 
\label{eq:formforJ} \end{equation} Note that this proves the claimed formula when $w\neq-{\kappa}-r$ since the inner product in its right hand side vanishes in this case. Thus, we may now assume that $w=-{\kappa}-r$. Put $I_{\sigma}=\xi(s_{\sigma})\phi_r(\jmath(s_{\sigma})d_\infty)$ and write $s=s_{\sigma}=d_sg_su_s$ under \eqref{eq:decomp} and $\tau_s=g_s\tau$. Note that $\vvass{N_{K/\mathbb Q}(s)}=\det(g_s)$. Under the hypothesis $({\cal O}_{K,c}^{\times})^{2({\kappa}+r)}=1$ we have \begin{eqnarray*} I_{\sigma} & = & \xi(s){\delta}_{2{\kappa}}^{(r)}f(g_sd_\infty\cdot i)j(g_s d_\infty,i)^{-2({\kappa}+r)}\det(g_s)^{{\kappa}+r} \\ & = & y^{{\kappa}+r}\xi(s){\delta}_{2{\kappa}}^{(r)}f(\tau_{s})j(g_s,\tau )^{-2({\kappa}+r)}\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\ & = & y^{{\kappa}+r}\xi(s)c_{(r)}(f,x)(s) p({\omega}_s,\tau_{s})^{2({\kappa}+r)}j(g_s,\tau )^{-2({\kappa}+r)}\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\ & = & y^{{\kappa}+r}\xi(s)c_{(r)}(f,x)(s) p({\omega}_{s_o},\tau_{s_o})^{2({\kappa}+r)}j(g_{s_o},\tau )^{-2({\kappa}+r)}\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\ & = & y^{{\kappa}+r}{\Omega}_\infty(\tau)^{2({\kappa}+r)}\xi(s)c_{(r)}(f,x)(s)\vvass{N_{K/\mathbb Q}(s)}^{{\kappa}+r} \\ \end{eqnarray*} where $s_o$ is a normalized representative. It is now clear that the formula follows. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \begin{dfn} Let $M$ be a proper divisor of $N$, $x\in{\rm CM}({\Delta},N;{\cal O}_{K,c})$ and let $\pr x\in{\rm CM}({\Delta},M;{\cal O}_{K,\pr c})$ be the image of $x$ under the natural quotient map. A character $\xi\in\Xi_{{\underline{w}}}({\cal O}_{K,c})$ is called $(x,M)$-primitive if it is not trivial on $\wh{{\cal O}}_{K,\pr c}^{\times}$. \end{dfn} For a divisor $d$ of $N/M$ there is an embedding $\iota_{{\Delta},d}:M_{2{\kappa},0}({\Delta},M)\longrightarrow M_{2{\kappa},0}({\Delta},N)$. When ${\Delta}=1$ the embedding is simply $f(z)\mapsto f(dz)$. 
When ${\Delta}>1$ the explicit description of $\iota_{{\Delta},d}$ is less immediate, e.g. \cite[\S3]{MoTe99}. We denote by $M_{2{\kappa},0}({\Delta},N)^{M-\mathrm{old}}$ the span of the images of the embeddings $\iota_{{\Delta},d}$ for all $d$. In view of theorem \ref{th:compJ}, the following result can be read as an orthogonality statement between primitive characters and $K^{\times}$-modular forms arising from oldforms. \begin{pro}\label{th:oldforms} Let $\tau\in{\rm CM}_{{\Delta},K}$ and $x\in{\rm CM}({\Delta},N;{\cal O}_{K,c})$ be the point represented by $\tau$. Let $f\in M_{2{\kappa},0}({\Delta},N)^{M-\mathrm{old}}$ and suppose that $\xi\in\Xi_{(-{\kappa}-r,{\kappa}+r)}({\cal O}_{K,c})$ is $(x,M)$-primitive. Then $J_r(f,\xi,\tau)=0$. \end{pro} \par\noindent{\bf Proof. } Consider again the first expression in \eqref{eq:formforJ}. Let $\pr x\in{\rm CM}({\Delta},M;{\cal O}_{K,\pr c})$ be the image of $x$ and choose a system of representatives $\{s_{\pr{\sigma}}\}$ of ${\bf C}^{\sharp}_{\pr c}$ and a system of representatives $\{r_{i}\}$ of $\wh{{\cal O}}^\times_{K,\pr c}/\wh{{\cal O}}^\times_{K,c}$. Then the set of products $\{s_{\pr{\sigma}}r_{i}\}$ is a system of representatives of ${\bf C}^{\sharp}_{c}$ and, since $d_{\infty}$ commutes with each $r_{i}$ and $f$ is $M$-old, we obtain the expression $$ J_{r}(f,\xi,\tau)=\frac{\pi m_c}{w_{K,c}} \left(\sum_{\wh{{\cal O}}^\times_{K,\pr c}/\wh{{\cal O}}^\times_{K,c}} \xi(r_{i})\right)\left(\sum_{\pr{\sigma}\in{\bf C}^{\sharp}_{\pr c}} \xi(s_{\pr{\sigma}})\phi_r(\jmath(s_{\pr{\sigma}})d_\infty)\right) $$ which vanishes because $\xi$ is nontrivial on $\wh{{\cal O}}^\times_{K,\pr c}/\wh{{\cal O}}^\times_{K,c}$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par We shall assume from now on that the modular form $f$ is a holomorphic newform with associated automorphic representation $\pi^{D}=\pi_{f}$. 
Let $\pi$ be the automorphic representation of ${\rm GL}_{2}(\mathbb A)$ corresponding to $\pi^{D}$ under the Jacquet-Langlands correspondence. Besides the Weil representation $r_{\psi}$ of ${\rm SL}_{2}(\mathbb A)$, the adelic Schwartz-Bruhat space ${\cal S}_{\mathbb A}(D)=\bigotimes_{p\leq\infty}{\cal S}_{p}$ supports the unitary representation of ${\rm GO}(D)(\mathbb A)$ given by $$ L(h)\varphi(x)=\vvass{\nu_{0}(h)}_{\mathbb A}^{-1}\varphi(h^{-1}x), \qquad x\in D_{\mathbb A}. $$ We assume that the archimedean space ${\cal S}_{\infty}$ consists only of the Schwartz functions on $D_{\infty}$ which are $K^{1}_{\infty}\times K^{1}_{\infty}$-finite under the action of $D^{\times}\times D^{\times}$ via the group ${\rm GO}(D)$ (\S\ref{se:CMpts}). Here $K^{1}_{\infty}$ is the maximal compact subgroup of $\jmath(K^{\times}\otimes\mathbb R)\subset D_{\infty}^{\times}\simeq{\rm GL}_{2}(\mathbb R)$. As explained in \cite[\S5]{HaKu92}, the two representations combine into a single representation, still denoted $r_{\psi}$, of the group $R(D)= \{\hbox{$(g,h)\in{\rm GL}_{2}\times{\rm GO}(D)$ such that $\det(g)=\nu_{0}(h)$}\}$ given by $r_{\psi}(g,h)\varphi=r_{\psi}(g_{1})L(h)\varphi$ where $g_{1}=g\smallmat{1}{}{}{\nu_{0}(h)}^{-1}$. Note that \begin{itemize} \item the assignment $(g,h)\mapsto(g_{1},h)$ sets up an isomorphism $R(D)\stackrel{\sim}{\rightarrow}{\rm SL}_{2}\ltimes{\rm GO}(D)$; \item the group $R(D)$ is naturally a subgroup of the symplectic group ${\rm Sp}(W)$, where $W=P\otimes D$ with $P$ the standard hyperbolic plane, via $(g,h)x\otimes y=gx\otimes h^{-1}y$. \end{itemize} The groups $({\rm SL}_{2},{\rm O}(D))$ form a dual reductive pair in ${\rm Sp}(W)$ and the extended Weil representation $r_{\psi}$ allows one to realize the theta correspondence between the similitude groups. The theta kernel associated to a choice of $(g,h)\in R(D)$ and $\varphi\in{\cal S}_{\mathbb A}(D)$ is $\vartheta(g,h;\varphi)=\sum_{d\in D}r_{\psi}(g,h)\varphi(d)$. 
The theta lift to ${\rm GO}(D)$ of a cuspidal automorphic form $F$ on ${\rm GL}_{2}(\mathbb A)$ is the automorphic form on ${\rm GO}(D)(\mathbb A)$ given by \begin{equation} \theta_{\varphi}(F)(h)=\int_{{\rm SL}_{2}(\mathbb Q)\backslash{\rm SL}_{2}(\mathbb A)} \vartheta(gg^{\prime},h;\varphi)F(gg^{\prime})\,dg^{\prime} \label{eq:thetalift} \end{equation} where $\det(g)=\nu_{0}(h)$ and $dg^{\prime}$ is induced by a choice of a Haar measure $dg=\prod dg_{p}$ on ${\rm GL}_{2}(\mathbb A)$. A straightforward substitution yields \begin{equation} \theta_{r_{\psi}(g_1,h_1)\varphi}(F)(h)= \theta_{\varphi}(\pi(g_1^{-1})F)(hh_1), \qquad \forall (g_1,h_1)\in R(D). \label{eq:tlautom} \end{equation} An automorphic form $\Phi$ on ${\rm GO}(D)(\mathbb A)$ pulls back via the map $\varrho$ of \eqref{eq:similitudes} to an automorphic form $\widetilde{\Phi}$ on the product group $D^{\times}\times D^{\times}$. Let $\widetilde{\Theta}(\pi)$ be the space of automorphic forms on $D^{\times}\times D^{\times}$ which are pull-backs of theta lifts \eqref{eq:thetalift} with $F\in\pi$. If $\check{\pi}^{D}$ denotes the contragredient representation of $\pi^{D}$ the crucial result is, with a slight abuse of notation, the following, \cite{Shimi72}. \begin{thm}[Shimizu]\label{th:Shimizu} $\widetilde{\Theta}(\pi)=\pi^{D}\otimes\check{\pi}^{D}$. \end{thm} \begin{rems}\label{rm:onchoice} \rm \begin{enumerate} \item In our case of interest $\pi^D=\check{\pi}^{D}$. \item The Schwartz functions, hence the theta lifts $\widetilde{\theta}_{\varphi}(F)$, are $K^{1}_{\infty}\times K^{1}_{\infty}$-finite. Thus, in Shimizu's theorem the representation space $\pi^{D}$ consists of $K^{1}_{\infty}$-finite automorphic forms. Note that the functions $\pi(d_{\infty})\phi_{r}$ are $K^{1}_{\infty}$-finite. \item An explicit version of Shimizu's theorem has been worked out by Watson \cite{Wat03}, see also \cite[\S3.2]{Pra06} and \cite[\S12]{HaKu92}. 
Namely, if $\varphi=\otimes_{p\leq\infty}\varphi_p$ is chosen as \begin{equation} \varphi_\infty(z_1,z_2)= \frac{(-1)^{\kappa}}\pi z_2^{2{\kappa}}e^{-2\pi(z_{1}\bar{z}_{1}+z_{2}\bar{z}_{2})}, \quad\varphi_p= \frac{ \mathrm{ch}_{{\cal R}_{N}\otimes\mathbb Z_p}}{\mathrm{vol}(({\cal R}_{N}\otimes\mathbb Z_p)^{\times})} \label{eq:choiceofphi} \end{equation} where $z_1$ and $z_2$ are the complex coordinates in $D_\infty$ of \S1.3, then $$ \pi(d_{\infty})\phi_{f}\otimes\pi(d_{\infty})\phi_{f}=\widetilde{\theta}_\varphi(F) $$ where $F\in\pi$ is the adelic lift of an eigenform normalized so as to have an equality of Petersson norms $\langle\pi(d_{\infty})\phi_{f},\pi(d_{\infty})\phi_{f}\rangle=\langle F,F\rangle$. \end{enumerate} \end{rems} Let $\underline{\xi}=(\xi,\xi^{\prime})\in\Xi_{\underline{w}}(c{\cal O}_{K})\times\Xi_{{\underline{w}}^{\prime}}(c{\cal O}_{K})$ thought of as a character of the torus ${K_{\A}^{\times}}\times{K_{\A}^{\times}}$. Let $\tilde{H}(t)$ be any function on ${K_{\A}^{\times}}\times{K_{\A}^{\times}}$ such that $\tilde{H}(t)\underline{\xi}(t)$ is $(K^{\times}\mathbb R^{\times})^{2}$-invariant. Following \cite[\S14]{HaKu91} \cite[\S1.4]{Harris93} we let $$ L_{\underline{\xi}}(\tilde{H})= \int_{(K^{\times}\mathbb R^{\times}\backslash{K_{\A}^{\times}})^{2}} \tilde{H}(t)\underline{\xi}(t)\,dt. $$ In particular, for $\xi$ as in definition \ref{th:Jintegral}, $$ L_{(\xi,\xi)}( \pi(d_{\infty})\phi_r\otimes\pi(d_{\infty})\phi_r)= J_r(f,\xi,\tau)^2. 
$$ When $\xi=\xi^\prime$ is unitary, $\vass{{\underline{w}}}=\vass{{\underline{w}}^{\prime}}=0$, the integral $L_{\underline{\xi}}(\widetilde{\theta}_{\varphi}(F))$ can also be read, via the map ${\alpha}$ of \eqref{eq:similitudes}, as the Petersson scalar product of two automorphic forms on the similitude group $T=G({\rm O}(K)\times{\rm O}(K^{\perp}))$ associated with the decomposition $D=K\oplus K^\perp$, namely $L_{(\xi,\xi)}(\widetilde{\theta}_{\varphi}(F))=\int_{T(\mathbb Q)T(\mathbb R)\backslash T(\mathbb A)} \widetilde{\theta}_{\varphi}(F)((a,b))\xi(b)\,d^\times ad^\times b$, where $\alpha(t)=(a,b)$. Thus the seesaw identity \cite{Kudla84} associated with the seesaw dual pair $$ \begin{array}{ccc} {\rm GL}_{2}\times{\rm GL}_{2} & & {\rm GO}(D) \\ \uparrow & \mbox{{\Huge $\times$}} & \uparrow \\ {\rm GL}_{2} & & G({\rm O}(K)\times{\rm O}(K^{\perp})) \end{array} $$ identifies, up to a renormalization of the Haar measures, the value $L_{(\xi,\xi)}(\widetilde{\theta}_{\varphi}(F))$ with a scalar product on ${\rm GL}_{2}$, \begin{equation} L_{\underline{\xi}}(\widetilde{\theta}_{\varphi}(F))= \int_{{\rm GL}_{2}(\mathbb Q)\mathbb A^{\times}\backslash{\rm GL}_{2}(\mathbb A)} F(g)\theta_{\varphi}^t(1,\xi)(g,g)\,dg, \label{eq:RankinSelberg} \end{equation} where $\theta_{\varphi}^t$ denotes the theta lift to ${\rm GL}_{2}\times{\rm GL}_{2}$. If $\varphi$ is split and primitive, i.e. admits a decomposition $\varphi=\varphi_{1}\otimes\varphi_{2}$ under $D_{\infty}=(K\oplus K^\perp)\otimes\mathbb R$ and each component decomposes in a product of local factors, $\varphi_{i}=\bigotimes_{p\leq\infty}\varphi_{i,p}$ for $i=1,2$, then $\theta_{\varphi}^t(1,\xi)$ splits as a product of two separate lifts. 
In fact $$ \theta_{\varphi}^t(1,\xi)(g_{1},g_{2})= E(0,\Phi,g_{1})\theta_{\varphi_{2}}(\tilde{\xi})(g_{2}) $$ where: \begin{itemize} \item $E(0,\Phi,g)$ is the value at $s=0$ of the holomorphic Eisenstein series attached to the unique flat section (\cite[\S3.7]{Bu96}) extending the function $\Phi(g)=r_\psi(g,k)\varphi_1(0)$ where $k\in{K_{\A}^{\times}}$ is such that $N(k)=\det(g)$ and $r_\psi$ denotes here the extended adelic Weil representation attached to $K$ as a normed space (Siegel-Weil formula), \item $\theta_{\varphi_{2}}(\xi)(g)$ is a binary form in the automorphic representation $\pi(\xi)$ of ${\rm GL}_{2}$ attached to $\xi$. \end{itemize} This expression yields a relation between the right hand side of \eqref{eq:RankinSelberg} and the value at the centre of symmetry of a Rankin-Selberg convolution integral. If the Whittaker function $W_F$ of $F$ decomposes as a product of local Whittaker functions, the Rankin-Selberg integral admits an Euler decomposition \cite{Ja72} and $L_{(\xi,\xi)}(\widetilde{\theta}_{\varphi}(F))$ is equal to the value at $s=\frac12$ of the analytic continuation of $$ \frac 1{h_K}\prod_{q\leq\infty}L_q(\varphi_q,\xi_q,s), $$ where \begin{multline} L_q(\varphi_q,\xi_q,s)= \int_{K_q}\int_{\mathbb Q_q^\times} W^{\psi_q}_{F,q}\left(\left(\begin{array}{cc}a & 0 \\0 & 1\end{array}\right)k\right) W^{\psi_q}_{\theta_{\varphi_{2},q}}\left(\left(\begin{array}{cc}-a & 0 \\0 & 1\end{array}\right)k\right)\cdot\\ \Phi^s_{\varphi_1,q}\left(\left(\begin{array}{cc}a & 0 \\0 & 1\end{array}\right)k\right)\inv{\vass{a}}\, d^\times a\,dk_q. \label{eq:localterm} \end{multline} The local measures are normalized so that $K_\infty={\rm SO}_2(\mathbb R)$ has volume $2\pi$ and $K_q={\rm GL}_2(\mathbb Z_q)$ has volume $1$ for finite $q$. Also $W_{\theta_{\varphi_{2}}}$ is the Whittaker function and $\Phi^s(g)=\vvass{a}^{s-\frac12}\Phi(g)$ if $g=nak$ under the $NAK$-decomposition where $\vvass{\smallmat a{}{}b}=\vass{a/b}$. 
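For concreteness (an illustration with hypothetical sample data, not part of the original text), the $NAK$ decomposition entering the definition of $\Phi^s$ can be computed explicitly for a real $2\times2$ matrix with positive determinant; the sketch below produces $g=nak$ with $n$ unipotent upper triangular, $a$ diagonal with positive entries and $k$ a rotation, and evaluates the factor $\vvass{a}=\vass{a_1/a_2}$:

```python
import numpy as np

def nak(g):
    """Iwasawa decomposition g = n a k for g in GL_2^+(R)."""
    r, s = g[1]                                  # bottom row of g
    norm = np.hypot(r, s)
    k = np.array([[s, -r], [r, s]]) / norm       # rotation k with g k^{-1} upper triangular
    t = g @ k.T                                  # k^{-1} = k^T for a rotation
    a = np.diag(np.diag(t))                      # diagonal part, positive when det(g) > 0
    n = t @ np.linalg.inv(a)                     # unipotent upper triangular part
    return n, a, k

g = np.array([[2.0, 1.0], [0.5, 1.5]])           # an arbitrary matrix with det > 0
n, a, k = nak(g)
assert np.allclose(n @ a @ k, g)                 # g = n a k
assert np.allclose([n[1, 0], n[0, 0], n[1, 1]], [0, 1, 1])
height = abs(a[0, 0] / a[1, 1])                  # the factor |a_1/a_2| entering Phi^s
```

The construction only uses that the bottom row of $g$ can be rotated onto the positive horizontal axis, which is the standard geometric content of the Iwasawa decomposition for ${\rm GL}_2^+(\mathbb R)$.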
Since the local term \eqref{eq:localterm} does not vanish and for almost all $q$ is the local Euler factor of some automorphic $L$-function, one obtains, as in \cite{Harris93, HaKu91}, a version of Waldspurger's result \cite{Waldsp85}. Namely, $$ L_{\underline{\xi}}(\widetilde{\theta}_{\varphi}(F))=\left. \Lambda(\varphi,\xi,s)L(\pi_K\otimes\xi,\frac s2)L(\eta_K,2s)^{-1}\right|_{s=1/2}, $$ where $\Lambda(\varphi,\xi,s)$ is a finite product of local integrals, $\pi_K$ is the base change to $K$ of the automorphic representation $\pi$ and $L(\eta_K,2s)$ is the Dirichlet $L$-function attached to $\eta_K$, the quadratic character associated to $K$. When $\varphi^f=\bigotimes_{p<\infty}\varphi_p$ and $F$ are chosen as in remark \ref{rm:onchoice}.3 the local non-archimede\-an terms in the Rankin-Selberg integral have been explicitly computed by Prasanna \cite[\S3]{Pra06} under the simplifying assumptions that $N$ is squarefree, $c=1$ and $\xi$ is unramified. The effect of these assumptions is that \begin{enumerate} \item the local component of $\xi$ can be written either as $\xi_q=(\xi_q^{\rm sp},(\xi_q^{\rm sp})^{-1})$ for some unramified character $\xi_q^{\rm sp}$ of $\mathbb Q_q^\times$ at a prime $q$ split in $K$ under the isomorphism $(K\otimes\mathbb Q_q)^\times\simeq\mathbb Q_q^\times\times\mathbb Q_q^\times$, or as $\xi_q=\xi^{\rm in}_q\circ{\rm N}_{K_q/\mathbb Q_q}$ for an unramified character $\xi_q^{\rm in}$ of $\mathbb Q_q^\times$ at a prime $q$ inert in $K$, or as $\xi_q=\xi^{\rm rm}_q\circ{\rm N}_{K_q/\mathbb Q_q}$ at a ramified prime $q$ where $\xi_q^{\rm rm}$ is the unramified character of $\mathbb Q_q^\times$ obtained by a trivial extension; \item at a prime $q|N{\Delta}$ the local component $\pi_q$ is equivalent to the special representation ${\sigma}(\vass{\cdot}^{\frac12+it_q},\vass{\cdot}^{-\frac12+it_q})$ with $q^{2it_q}=1$. 
\end{enumerate} Note that the former condition remains true for a split prime $q$ that does not divide $c$ and $\xi\in\Xi_{\underline{w}}(R_{K,c})$ with $\vass{\underline{w}}=0$. Thus we can apply Prasanna's computations to this more general case to write down a formula in which only the local factors at primes in ${\Sigma}=\left\{q|cN^\prime\right\}$ are left implicit, where $N=N_{\rm sf}(N^\prime)^2$ and $N_{\rm sf}$ is square-free. Namely, \begin{equation} L_{\underline{\xi}}(\widetilde{\theta}_{\varphi_\infty\otimes\varphi^f}(F)) =\left.\frac{V_N}{h_K}{\lambda}_\infty(\varphi_\infty,\xi_\infty,s)\left(\prod_{q\leq\infty}\nu_q(\xi_q)\right) L(\pi_K\otimes\xi,\frac s2)L(\eta_K,2s)^{-1}\right|_{s=1/2} \label{eq:finalexp} \end{equation} where $V_N=\prod_q{\mathrm{vol}(({\cal R}_{N}\otimes\mathbb Z_q)^{\times})}$, ${\lambda}_\infty(\varphi_\infty,\xi_\infty,s)=\vass{{\rm N}u}^{\frac12}_\infty\xi_\infty(z_u)^{-1} L_\infty(\varphi_\infty,\xi_\infty,\frac s2)$ ($z_u$ denotes the complex coordinate of $u$ in the chosen identification $(Ku)\otimes\mathbb R\simeq\mathbb C$) and $$ \begin{cases} \nu_q(\xi_q)=\xi_q^{\rm sp}(q)^{n_1-n_2} & \text{if $q$ splits, $q\notin{\Sigma}$, $(q,N_{\rm sf})=1$ }, \\ \nu_q(\xi_q)=-\frac{1}{q+1}q^{-\frac12+t_q+s}\xi_q^{\rm sp}(q)^{n_1-n_2} & \text{if $q|N_{\rm sf}$}, \\ \nu_q(\xi_q)=\xi_q^{\rm in}(q)^{-2n} & \text{if $q$ is inert, $q\notin{\Sigma}$}, \\ \nu_q(\xi_q)=\xi_q^{\rm rm}(-{\rm N}\inv u) & \text{if $q$ ramifies}, \\ \nu_\infty(\xi_\infty)=\xi_\infty(z_u) & \\ \end{cases} $$ where the ideal ${\cal J}$ of proposition \ref{prop:decomporder} in $K\otimes\mathbb Q_q$ is generated by $q^{-n}$ when $q$ is inert and decomposes as $q^{-n_{q,1}}\mathbb Z_q\times q^{-n_{q,2}}\mathbb Z_q$ under $K\otimes\mathbb Q_q\simeq\mathbb Q_q\times\mathbb Q_q$ when $q$ is split. \begin{rem} \rm It is clear that $\nu_q(\xi_q)=1$ for almost all $q$. 
The local terms $\nu_q(\xi_q)$ do depend on the choice of $u$ in \eqref{eq:Dsplit} (replacing $u$ with $xu$ modifies the local Whittaker function $W^{\psi_q}_{\theta_{\varphi_{2},q}}$ by the factor $\vass{{\rm N}x}^{-\frac12}_q\xi_q(x)^{-1}$), but the quantity $$ \nu(\xi,\tau,s)=\prod_{q\leq\infty}\nu_q(\xi_q) $$ depends only on $\xi$ and the chosen embedding $\jmath:K\rightarrow D$. \end{rem} For a pair of non-negative integers $(l,q)$ consider the function of two complex variables $\varphi^{(l,q)}(z_1,z_2)=(z_1\bar{z}_1)^{l}{z}_2^{q}e^{-2\pi(z_1\bar{z}_1+z_2\bar{z}_2)}$. \begin{lem} Let $\varphi(z)=(z\bar z)^le^{-2\pi z\bar z}$. Then the Fourier transform of $\varphi$ is $$ \hat\varphi(w_1+w_2i)=e^{-2\pi(w_1^2+w_2^2)}\sum_{0\leq{\alpha}+{\beta}\leq l}{\gamma}({\alpha},{\beta};l)w_1^{2{\alpha}}w_2^{2{\beta}}, $$ where $$ {\gamma}({\alpha},{\beta};l)=\sum_{\substack{j+k=l\\ {\alpha}\leq j, {\beta}\leq k}}(-4\pi)^{{\alpha}+{\beta}-l}\binom lj\binom{2j}{2{\alpha}}\binom{2k}{2{\beta}} (2j-2{\alpha}-1)!!(2k-2{\beta}-1)!!. $$ \end{lem} \par\noindent{\bf Proof. } One has $\hat\varphi(w_1+w_2i)=2\int_{\mathbb R^2}e^{4\pi i(w_1x_1+w_2x_2)}(x_1^2+x_2^2)^le^{-2\pi(x_1^2+x_2^2)}\,dx_1dx_2= 2\sum_{j+k=l}\binom lj\left(\int_{\mathbb R}e^{4\pi iw_1x_1}x_1^{2j}e^{-2\pi x_1^2}\,dx_1\right) \left(\int_{\mathbb R}e^{4\pi iw_2x_2}x_2^{2k}e^{-2\pi x_2^2}\,dx_2\right)$ and the result follows from $ \int_{\mathbb R}e^{4\pi itx-2\pi x^2}x^{2v}\,dx=\frac1{\sqrt{2}}e^{-2\pi t^2}\sum_{i=0}^v(-4\pi)^{-i}\binom{2v}{2i}(2i-1)!!\,t^{2(v-i)}$. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \begin{lem}\label{le:localarch} Let $r\geq l\geq0$ be integers, $F$ the lift of a weight $2{\kappa}$ eigenform and $\xi_\infty$ the character $\xi_\infty(z)=(z/\bar{z})^{{\kappa}+r}$ of $\mathbb C^\times$. 
Then $$ L_\infty(\varphi^{(l,2({\kappa}+r))},\xi_\infty,s)= \begin{cases} 0 & \text{if $l<r$}, \\ \frac{(-1)^{r}2\pi(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)r!}{(4\pi)^{s+2({\kappa}+r)-\frac12}}{\Gamma}(s+2{\kappa}+r-\frac12) & \text{if $l=r$}. \end{cases} $$ \end{lem} \par\noindent{\bf Proof. } It is well known that $W_F^{\psi_\infty}\left(\smallmat a001r({\theta})\right)=a^{\kappa}{\rm ch}_{\mathbb R^+ }(a)e^{-2\pi a-2{\kappa} i{\theta}}$. We compute the other two terms in the integrand of \eqref{eq:localterm} separately with $\varphi_1(z_1)=(z_1\bar{z}_1)^le^{-2\pi z_1\bar{z}_1}$ and $\varphi_2(z_2)=\bar{z}_2^{2({\kappa}+r)}e^{-2\pi z_2\bar{z}_2}$. \begin{enumerate} \item To compute $\Phi^s_{\varphi_1}\left(\smallmat a001 r({\theta})\right)= \vass{a}^sr_{\psi_\infty}(r(-{\theta}))\varphi_1(0)$ we use the definitions \eqref{eq:Weilrep} together with the decomposition \begin{equation} r({\theta})=\smallmat{1}{-\tan{\theta}}{0}{1}\smallmat{0}{-1}{1}{0} \smallmat{1}{-\sin{\theta}\cos{\theta}}{0}{1}\smallmat{0}{1}{-1}{0}\smallmat{1/\cos{\theta}}{0}{0}{\cos{\theta}}. \label{eq:decrth} \end{equation} Some straightforward passages yield $r_{\psi_\infty}(r({\theta}))\varphi_1(0)=(-\cos{\theta})\varphi_1^\sharp(0)$ where $\varphi_1^\sharp(z)$ is the Fourier transform of $e^{-2\pi i\cos{\theta}\sin{\theta}\vass{z}^2}\hat\varphi_1((\cos{\theta})z)$. 
Since \begin{align*} \varphi_1^\sharp(0) &= \int_{\mathbb C}e^{-2\pi i\sin{\theta}\cos{\theta}\vass{z}^2}{\hat\varphi}_1((\cos{\theta})z)\,dz\\ \intertext{(for $z=x+yi$ and from the previous lemma)} &= 2\sum_{0\leq{\alpha}+{\beta}\leq l}{\gamma}({\alpha},{\beta};l)(\cos{\theta})^{2({\alpha}+{\beta})} \int_{\mathbb R^2}e^{-2\pi(i\sin{\theta}\cos{\theta}+(\cos{\theta})^2)(x^2+y^2)}x^{2{\alpha}}y^{2{\beta}}\,dxdy\\ &=-\sum_{0\leq{\alpha}+{\beta}\leq l}{\gamma}({\alpha},{\beta};l)\frac{(2{\alpha}-1)!!\, (2{\beta}-1)!!}{(4\pi)^{{\alpha}+{\beta}}} \frac{(\cos{\theta})^{2({\alpha}+{\beta})}}{(-i\sin{\theta}\cos{\theta}-(\cos{\theta})^2)^{{\alpha}+{\beta}+1}}, \end{align*} eventually \begin{multline*} \Phi^s_{\varphi_1}\left(\smallmat a001 r({\theta})\right)=\\ -\vass{a}^s\sum_{0\leq{\alpha}+{\beta}\leq l}\frac{{\gamma}({\alpha},{\beta};l)(2{\alpha}-1)!!(2{\beta}-1)!!}{(-4\pi)^{{\alpha}+{\beta}}} (\cos{\theta})^{{\alpha}+{\beta}}e^{-({\alpha}+{\beta}+1)i{\theta}}. \end{multline*} \item To compute $W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001r({\theta})\right)$ we again need to use \eqref{eq:Weilrep} together with the decomposition \eqref{eq:decrth}. Indeed, note that this time the norm in $(Ku)\otimes\mathbb R\simeq\mathbb C$ is $-{\rm N}_{\mathbb C/\mathbb R}$ (in particular, negative definite) and the main involution is $z\mapsto -z$. Thus, we get $W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001r({\theta})\right)= e^{i(2{\kappa}+2r+1){\theta}}W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001\right)$. 
On the other hand, for a choice of $h\in\mathbb C^\times$ such that ${\rm N}h=-a{\rm N}\inv u>0$, \begin{align*} W^{\psi_\infty}_{\theta_{\varphi_{2},\infty}}\left(\smallmat{-a}001\right) &= \frac1{2\pi}\int_{S^1}r_{\psi_\infty}\left(\smallmat{-a{\rm N}\inv u}001,h{\vartheta}\right) \varphi_2(u)\xi_\infty(h{\vartheta})\,d{\vartheta}\\ &= \frac{(-a{\rm N}\inv u)^\frac12}{2\pi}\int_{S^1}\varphi_2(-a{\rm N}\inv u\inv{(h{\vartheta})}u) \xi_\infty(h{\vartheta})\,d{\vartheta}\\ &= \frac{(-a{\rm N}\inv u)^\frac12}{2\pi}\int_{S^1}\varphi_2(\bar{h}{\inv{\vartheta}}u)\xi_\infty(h{\vartheta})\,d{\vartheta}\\ &= \frac{(-a{\rm N}\inv u)^\frac12}{2\pi}\int_{S^1}(\bar{h}\inv{\vartheta}{z}_u)^{2({\kappa}+r)}e^{-2\pi a} (h{\vartheta})^{{\kappa}+r}({\bar h}\inv{\vartheta})^{-{\kappa}-r}\,d{\vartheta}\\ &= (-{\rm N}\inv u)^\frac12\xi_\infty(z_u)a^{{\kappa}+r+\frac12}e^{-2\pi a}. \end{align*} \end{enumerate} Putting all the ingredients together, \begin{align*} L_\infty&(\varphi^{(l,2({\kappa}+r))},\xi_\infty,s)\\ &=- (-{\rm N}\inv u)^\frac12\xi_\infty(z_u)\int_{\mathbb R^{>0}}\int_{S^1} a^{s+2{\kappa}+r-\frac12}e^{-4\pi a}e^{i(2{\kappa}+2r+1){\theta}}\\ & \qquad\qquad\times\left(\sum_{0\leq{\alpha}+{\beta}\leq l}\frac{{\gamma}({\alpha},{\beta};l)(2{\alpha}-1)!!(2{\beta}-1)!!}{(-4\pi)^{{\alpha}+{\beta}}} (\cos{\theta})^{{\alpha}+{\beta}}e^{-({\alpha}+{\beta}+1)i{\theta}}\right)\,d^\times ad{\theta}\\ & =-(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)\int_{\mathbb R^{>0}}a^{s+2{\kappa}+r-\frac12}e^{-4\pi a}\,d^\times a\\ & \qquad\qquad\times\sum_{0\leq{\alpha}+{\beta}\leq l}\frac{{\gamma}({\alpha},{\beta};l)(2{\alpha}-1)!!(2{\beta}-1)!!}{(-4\pi)^{{\alpha}+{\beta}}} \int_{S^1}(\cos{\theta})^{{\alpha}+{\beta}}e^{-({\alpha}+{\beta}+1)i{\theta}}\,d{\theta} \end{align*} Since ${\alpha}+{\beta}\leq l\leq r$ we have \begin{multline*} \int_{S^1}(\cos{\theta})^{{\alpha}+{\beta}}e^{i(2r-{\alpha}-{\beta}+1){\theta}}\,d{\theta}=\\ 
\frac1{2^{{\alpha}+{\beta}}}\sum_{j=0}^{{\alpha}+{\beta}}\binom{{\alpha}+{\beta}}{j}\int_{S^1}e^{2i(r-j){\theta}}\,d{\theta}= \begin{cases} 2^{1-r}\pi & \text{if ${\alpha}+{\beta}=l=r$ }, \\ 0 & \text{otherwise}. \end{cases} \end{multline*} hence $L_\infty(\varphi^{(l,2({\kappa}+r))},\xi_\infty,s)=0$ if $l<r$. When $l=r$, since ${\gamma}(j,r-j;r)=\binom rj$ and $\sum_{j=0}^r\binom rj(2j-1)!!(2r-2j-1)!!=2^rr!$ as readily proved by induction, we have \begin{align*} L_\infty(\varphi^{(r,2({\kappa}+r))},\xi_\infty,s)&=\frac{2\pi(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)r!}{(-4\pi)^r} \int_{\mathbb R^{>0}}a^{s+2{\kappa}+r-\frac12}e^{-4\pi a}\,d^\times a\\ &=\frac{(-1)^r2\pi(-{\rm N}\inv u)^\frac12\xi_\infty(z_u)r!}{(4\pi)^{s+2({\kappa}+r)-\frac12}}{\Gamma}(s+2{\kappa}+r-\frac12). \end{align*} \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par We shall now state and prove the main result of this section. \begin{thm}\label{thm:maininterpolation} Let $N$ be a positive integer and fix a decomposition $N={\Delta} N_{o}$ with ${\Delta}$ a product of an even number of distinct primes and $({\Delta},N_{o})=1$. Let $\pi$ be an automorphic cuspidal representation for ${\rm GL}_{2}$ of conductor $N$ such that \begin{enumerate} \item $\pi_{\infty}\simeq{\sigma}(\mu_{1},\mu_{2})$, the discrete series representation with $\mu_{1}\mu_{2}^{-1}(t)=t^{2{\kappa}-1}{\rm sgn}(t)$. \item $\pi_{\ell}$ is special for each $\ell|{\Delta}$. \end{enumerate} Let $K$ be a quadratic imaginary field such that all $\ell|{\Delta}$ are inert in $K$ and all $\ell|N_{o}$ are split in $K$. Let $c$ be a positive integer with $(c,N)=1$ and $p$ an odd prime number not dividing $N$ that splits in $K$. Assume that ${\cal O}_{K,c}^{\times}=\{\pm1\}$. Suppose that there exist Gr\"ossencharakters $\chi\in\Xi_{(-2{\kappa},0)}({\cal O}_{K,c})$ and $\xi\in\Xi_{(-2,0)}({\cal O}_{K,c})$ such that the $p$-adic avatar $\wh\xi$ takes values in a totally ramified extension of $\mathbb Q_p$. 
Then, there exists $x\in{\rm CM}({\Delta},N;{\cal O}_{K,c})$ represented by $\tau=t+iy\in{\rm CM}_{{\Delta},K}$ with associated periods ${\Omega}_\infty$ and ${\Omega}_p$ such that for all $r\geq0$ \begin{multline*} {\Omega}_{p}^{-4({\kappa}+r)}\int_{\mathbb Z_p}z^r\,d\mu(f,x;\chi,\xi)= \\ \frac{2\varpi V_Nw_{K,c}^2}{m_{c}^2h_K} \frac{(-1)^{{\kappa}+r}r!(2{\kappa}+r)!} {4^{2{\kappa}+3r}\pi^{2({\kappa}+r+1)}y^{2({\kappa}+r)}{\Omega}_\infty^{4({\kappa}+r)}} \nu(\xi_r,\tau,\frac12)L(\pi_K\otimes\xi_r,\frac12)L(\eta_K,1)^{-1} \end{multline*} where $\xi_r=\xi\chi^r\vvass{N_{K/\mathbb Q}}^{-{\kappa}-r}$ and $\varpi$ is a (fixed) ratio of Petersson norms. \end{thm} \par\noindent{\bf Proof. } Let $D$ be the quaternion algebra with ${\Delta}_{D}={\Delta}$. By hypothesis the representation $\pi$ is the image of an automorphic representation $\pi^{D}$ of $D^{\times}$ under the Jacquet-Langlands correspondence; let $f\in S_{2k,0}({\Delta},N_{o})$ be a holomorphic newform in $\pi^{D}$. For all integers $r\geq0$ let $\phi_{r}$ be as in \eqref{eq:defphir}. By proposition \ref{teo:existCM} there exists $x\in{\rm CM}({\Delta},N;{\cal O}_{K,c})$; choose a split $p$-ordinary test triple $(\tau,v,e)$, $\tau=t+iy$, representing $x$ with corresponding $d_\infty\in{\rm SL}_2(\mathbb R)$. By taking ${\cal O}_{v}$ and $F$ large enough, we can assume that $f$ is defined over ${\cal O}_{(v)}\subset{\cal O}_F$, and that the measure $\mu(f,x;\chi,\xi)$ has values in ${\cal O}_{F}$. By remark \ref{rm:onchoice}.3 we can write $\pi(d_{\infty})\phi_{0}\otimes\pi(d_{\infty})\phi_{0}=\varpi\widetilde\theta_{\varphi}(F)$ with $\varphi=\varphi_\infty\otimes\varphi^f$ as in \eqref{eq:choiceofphi}, $F$ the adelization of the normalized eigenform in $\pi$ and $\varpi\in\mathbb C^\times$ a Petersson normalization constant.
We claim that for all $r\geq0$ \begin{equation} \pi(d_{\infty})\phi_{r}\otimes\pi(d_{\infty})\phi_{r}= \varpi\frac{(-1)^{\kappa}}{4^r\pi} \widetilde\theta_{\phi^{r,2({\kappa}+r)}\otimes\varphi^{f}}(F) +\sum_{l=0}^{r-1}a_{r,l}\widetilde\theta_{\phi^{l,2({\kappa}+r)}\otimes\varphi^{f}}(F) \label{eq:thetaclaim} \end{equation} where $a_{r,l}\in\varpi\mathbb Z[\pi]$. Indeed, the short exact sequence \eqref{eq:sesGOD} gives a Lie algebra identification $\mathfrak g\mathfrak o(D)\simeq(D_{\infty}\times D_{\infty})/\mathbb R$ and in particular $\mathfrak o(D)=\{(A,B)\in D_{\infty}\times D_{\infty}\,|\,{\rm tr} A={\rm tr} B\}/\mathbb R \simeq\mathfrak s\mathfrak l_{2}\times\mathfrak s\mathfrak l_{2}$. Under this identification, differentiating \eqref{eq:tlautom} yields $$ \widetilde\theta_{H\varphi}(F)=\left.\frac{d}{dt} \widetilde\theta_{\varphi}(F)(h\exp(tH))\right|_{t=0} \hbox{ with } H\varphi(x)=\left.\frac{d}{dt}\varphi(e^{-tH_{1}}xe^{tH_{2}})\right|_{t=0} $$ for all $H=(H_{1},H_{2})\in{\rm Lie}(\mathrm{O}(D))$. If $A\in\mathfrak s\mathfrak l_{2}$, a repeated application of the last formula with $\pr{A}=(A,0)$ and $\pr{A}{}^{\prime}=(0,A)$ shows that the diagonal action of $A$ on $\pi^{D}\otimes\pi^{D}$ corresponds to the action of the second order operator $A_{2}=\pr{A}\pr{A}{}^{\prime}=\pr{A}{}^{\prime}\pr{A} \in\mathfrak A({\rm Lie}(\mathrm{O}(D)))$ on Schwartz functions, i.e. $$ A_{2}\varphi(x)=\left.\frac{\partial^{2}}{\partial u\partial v}\varphi(e^{-uA}xe^{vA})\right|_{u=v=0}. $$ We are interested in the expression of the operator $A_{2}$ in the normalized coordinates for $A=d_{\infty}X^{+}d_{\infty}^{-1}$. Up to conjugation, this is the same as computing the second order operator associated to $A=X^{+}$ under the standard coordinates \eqref{eq:standardcoord}.
A straightforward computation using the obvious real coordinates associated to the underlying real decomposition $D_{\infty}= \mathbb R\smallmat 1{}{}1\oplus\mathbb R\smallmat{}{-1}1{}\oplus \mathbb R\smallmat {}11{}\oplus\mathbb R\smallmat{-1}{}{}1$ shows that $\pr{A}=-i\left(z_2\frac{\partial}{\partial{\bar z}_1}+z_1\frac{\partial}{\partial{\bar z}_2}\right)$ and $\pr{A}{}^\prime=i\left(z_2\frac{\partial}{\partial z_1}+{\bar z}_1\frac{\partial}{\partial{\bar z}_2}\right)$, so that $$ A_2=z_2^2\frac{\partial^2}{\partial z_1\partial{\bar z}_1}+ {\bar z}_1{z}_2\frac{\partial^2}{\partial{\bar z}_1\partial{\bar z}_2}+ z_1z_2\frac{\partial^2}{\partial z_1\partial{\bar z}_2}+ z_1{\bar z}_1\frac{\partial^2}{{\partial{\bar z}_2}^2}+ z_2\frac{\partial}{\partial{\bar z}_2}. $$ Since $$ A_2\phi^{m,q}= \begin{cases} -2\pi\phi^{0,q+2}+4\pi^2\phi^{1,q+2} & \text{if $m=0$}, \\ m^2\phi^{m-1,q+2}-(4m-2)\pi\phi^{m,q+2}+4\pi^2\phi^{m+1,q+2} & \text{if $m\geq1$}, \end{cases} $$ formula \eqref{eq:thetaclaim} follows from an $r$-fold iteration using the linearity of the theta lift and the definitions \eqref{eq:defphir} and \eqref{eq:choiceofphi} of $\phi_r$ and $\varphi_\infty$ respectively. Let $\chi_r$ be a Gr\"ossencharakter of weight $(-2({\kappa}+r),0)$ and trivial on $\wh{R}_{c}^{\times}$ such that $\xi_r=\chi_r\vvass{N_{K/\mathbb Q}}^{-{\kappa}-r}$ is unitary. Combining \eqref{eq:thetaclaim} and \eqref{eq:finalexp} with lemma \ref{le:localarch} we get \begin{equation} \label{eq:end} J_{r}(f,\xi_r,\tau)^{2} = \frac{(-1)^{{\kappa}+r}2\varpi V_Nr!(2{\kappa}+r)!}{4^{2{\kappa}+3r}h_K\pi^{2{\kappa}+2r}} \nu(\xi_r,\tau,\frac12)L(\pi_K\otimes\xi_r,\frac12)L(\eta_K,1)^{-1}. \end{equation} On the other hand, from theorem \ref{th:compJ}, $$ J_{r}(f,\xi_{r},\tau)^{2}= \frac{m_c^2\pi^2}{w_{K,c}^2}(h_c^\sharp)^2y^{2({\kappa}+r)}{\Omega}_\infty^{4({\kappa}+r)} \scal{c_{(r)}(f,x)}{\chi_r}^{2}. 
$$ When $\chi_r=\chi\xi^r$ we use proposition \ref{th:spiden} to rewrite the last formula as $$ {\Omega}_p^{-4({\kappa}+r)}m_r(\mu(f,x;\chi,\xi))^2= \frac{w_{K,c}^2}{m_c^2\pi^2}{\Omega}_\infty^{-4({\kappa}+r)}y^{-2({\kappa}+r)}J_{r}(f,\xi_r,\tau)^{2}. $$ Substituting \eqref{eq:end} into the latter formula proves the theorem. \penalty 1000 $\,$\penalty 1000{\qedmark\qedskip}\par \end{document}
We will now look at the number of $k$-combinations of a multiset containing $n$ distinct elements whose repetition numbers are $\infty$. Recall that a $k$-combination of an ordinary finite $n$-element set $A$ is simply a selection of $k$ out of all $n$ elements in $A$. In other words, a $k$-combination of $A$ is a subset $B \subseteq A$ such that $\lvert B \rvert = k$. By extension, a $k$-combination of a multiset $A$ is simply a submultiset $B$ of $A$ that contains $k$ elements. Set up positions to place all $k$ elements from $A$ and group like elements together. Place $n - 1$ placeholders between the groupings of like elements. There are now $k + n - 1$ positions in total, and each rearrangement of the placeholders signifies a new $k$-combination of the multiset $A$, as illustrated in the diagram below. Choosing which $n - 1$ of the $k + n - 1$ positions hold the placeholders shows that there are $\binom{k + n - 1}{k}$ such $k$-combinations. These $2$-combinations are given in the table below as submultisets. We will look at the case of determining the number of $k$-combinations of a multiset containing $n$ distinct elements whose repetition numbers are finite on the Combinations of Elements in Multisets with Finite Repetition Numbers page AFTER we have looked at The Inclusion-Exclusion Principle.
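The placeholder argument above can be checked numerically for small cases. The following sketch (ours, not part of the original page) brute-forces the $k$-element submultisets of an $n$-element set and compares the count against the closed form $\binom{k+n-1}{k}$:

```python
from itertools import combinations_with_replacement
from math import comb

def count_multiset_combinations(n: int, k: int) -> int:
    """Brute-force count of k-element submultisets of an n-element set
    whose elements may each be repeated without limit."""
    return sum(1 for _ in combinations_with_replacement(range(n), k))

# Each brute-force count matches the stars-and-bars formula C(k + n - 1, k).
for n in range(1, 6):
    for k in range(0, 6):
        assert count_multiset_combinations(n, k) == comb(k + n - 1, k)

print(count_multiset_combinations(4, 2))  # 10
```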
# Understanding Mathematica and its applications One of the main features of Mathematica is its ability to perform symbolic computations. Symbolic computations involve manipulating mathematical expressions without evaluating them numerically. This allows users to work with exact solutions and perform algebraic manipulations. For example, consider the following expression: $$\frac{1}{2} + \frac{1}{3} + \frac{1}{4}$$ In Mathematica, you can manipulate this expression as follows: ```mathematica expr = 1/2 + 1/3 + 1/4; ``` Another important feature of Mathematica is its ability to perform numerical computations. Numerical computations involve evaluating mathematical expressions numerically. For example, consider the following integral: $$\int_0^1 x^2 dx$$ In Mathematica, you can compute this integral using the `NIntegrate` function: ```mathematica NIntegrate[x^2, {x, 0, 1}] ``` Mathematica also provides a wide range of functions for data visualization. These functions allow users to create various types of graphs and plots, such as line plots, scatter plots, and bar plots. For example, consider the following data: ```mathematica data = {{1, 2}, {2, 3}, {3, 4}, {4, 5}}; ``` You can create a line plot of this data using the `ListLinePlot` function: ```mathematica ListLinePlot[data] ``` ## Exercise Create a line plot of the following data: ```mathematica data = {{1, 2}, {2, 3}, {3, 4}, {4, 5}}; ``` # Basic syntax and commands in Mathematica To assign a value to a variable in Mathematica, you can use the `=` operator. For example, to assign the value 3 to the variable `x`, you can write: ```mathematica x = 3; ``` In Mathematica, you can define functions using the `SetDelayed` operator (`:=`). For example, to define a function that computes the square of a number, you can write: ```mathematica square[x_] := x^2; ``` Mathematica provides a wide range of built-in functions for performing various mathematical operations. 
For example, you can use the `Derivative` function to compute the derivative of a function: ```mathematica Derivative[1][square][x] ``` ## Exercise Define a function `cube` that computes the cube of a number. Use the `SetDelayed` operator (`:=`). # Working with mathematical functions Mathematica provides a wide range of built-in functions for working with mathematical expressions. For example, you can use the `Simplify` function to simplify an expression: ```mathematica expr = (x^2 + 2*x + 1)/(x + 1); Simplify[expr] ``` You can also use the `Solve` function to solve equations and systems of equations. For example, to solve the equation $x^2 + 2x + 1 = 0$, you can write: ```mathematica Solve[x^2 + 2*x + 1 == 0, x] ``` ## Exercise Solve the equation $x^2 + 2x + 1 = 0$ using the `Solve` function. # Computing derivatives and integrals To compute the derivative of a function, you can use the `Derivative` function. For example, to compute the first derivative of the function $f(x) = x^3$, you can write: ```mathematica f[x_] := x^3; Derivative[1][f][x] ``` To compute the integral of a function, you can use the `Integrate` function. For example, to compute the integral of the function $f(x) = x^2$ from $0$ to $1$, you can write: ```mathematica Integrate[x^2, {x, 0, 1}] ``` ## Exercise Compute the first derivative of the function $f(x) = x^3$ using the `Derivative` function. # Plotting graphs and visualizing data Mathematica provides a wide range of functions for creating various types of plots. For example, you can use the `Plot` function to create a plot of a function: ```mathematica Plot[x^2, {x, 0, 2}] ``` You can also use the `ListPlot` function to create a plot of a list of data points: ```mathematica data = {{1, 2}, {2, 3}, {3, 4}, {4, 5}}; ListPlot[data] ``` ## Exercise Create a plot of the function $f(x) = x^2$ using the `Plot` function. # Solving equations and systems of equations To solve an equation, you can use the `Solve` function.
For example, to solve the equation $x^2 + 2x + 1 = 0$, you can write: ```mathematica Solve[x^2 + 2*x + 1 == 0, x] ``` To solve a system of equations, you can use the `Solve` function with a list of equations. For example, to solve the system of equations $x + y = 3$ and $x - y = 1$, you can write: ```mathematica Solve[{x + y == 3, x - y == 1}, {x, y}] ``` ## Exercise Solve the system of equations $x + y = 3$ and $x - y = 1$ using the `Solve` function. # Simplifying expressions and performing algebraic manipulations To simplify an expression, you can use the `Simplify` function. For example, to simplify the expression $(x^2 + 2x + 1)/(x + 1)$, you can write: ```mathematica expr = (x^2 + 2*x + 1)/(x + 1); Simplify[expr] ``` You can also use the `Expand` function to expand an expression. For example, to expand the expression $(x + 1)^2$, you can write: ```mathematica expr = (x + 1)^2; Expand[expr] ``` ## Exercise Simplify the expression $(x^2 + 2x + 1)/(x + 1)$ using the `Simplify` function. # Numerical computations To compute the value of a function at a specific point, you can use the `Evaluate` function. For example, to compute the value of the function $f(x) = x^2$ at $x = 2$, you can write: ```mathematica f[x_] := x^2; Evaluate[f[2]] ``` To compute the numerical value of an expression, you can use the `N` function. For example, to compute the numerical value of the expression $\pi$, you can write: ```mathematica N[Pi] ``` ## Exercise Compute the value of the function $f(x) = x^2$ at $x = 2$ using the `Evaluate` function. # Advanced topics in symbolic computing Mathematica provides a wide range of functions for working with special functions, such as the Gamma function and the Beta function. For example, to compute the value of the Gamma function at $x = 2$, you can write: ```mathematica Gamma[2] ``` Mathematica also provides a wide range of functions for working with matrix operations, such as matrix inversion and matrix multiplication. 
For example, to compute the inverse of a matrix, you can use the `Inverse` function: ```mathematica matrix = {{1, 2}, {3, 4}}; Inverse[matrix] ``` ## Exercise Compute the value of the Gamma function at $x = 2$ using the `Gamma` function. Mathematica also provides a wide range of functions for working with differential equations. For example, you can use the `DSolve` function to solve ordinary differential equations. For example, to solve the first-order differential equation $dy/dx + y = 0$, you can write: ```mathematica DSolve[y'[x] + y[x] == 0, y[x], x] ``` ## Exercise Solve the first-order differential equation $dy/dx + y = 0$ using the `DSolve` function.
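As one further illustration (ours, not from the original text), `DSolve` also accepts initial conditions alongside the equation; the particular equation below is an assumed example, solving $y'' + y = 0$ with $y(0)=0$ and $y'(0)=1$:

```mathematica
(* Solve y''[x] + y[x] == 0 with initial conditions; the solution is Sin[x] *)
DSolve[{y''[x] + y[x] == 0, y[0] == 0, y'[0] == 1}, y[x], x]
```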
Supremum norm of a certain quantity Is there any easy way of finding the supremum of the quantity $$\sum_{i,j=1}^n|z_i-z_j|,$$ where $|z_i|=1$ for $1\leq i\leq n$? We are considering complex variables, of course. Comment (LSpice): @PeterHeinig, what's wrong with the sum notation? It looks to me like a sum over $n^2$ terms that turns out to be twice the sum you propose. Comment (Mathbuff): @LSpice, you are right. One can interpret the sum as $2\sum_{1\leq i<j\leq n}|z_i-z_j|$. I think there may be an easy way to calculate it, using the geometry of the Euclidean plane and symmetry. The following argument appears on p. 156 of L. Fejes Tóth, Regular Figures, A Pergamon Press Book, The Macmillan Co., New York, 1964. Assume $z_{1},\ldots,z_{n}$ are ordered on the circle and let $S=\sum_{1\leq i<j\leq n}|z_{i}-z_{j}|$. Let also $$s_{k}=\sum_{i=1}^{n}|z_{i}-z_{i+k}|=2\sum_{i=1}^{n}\sin\frac12\widehat{z_{i}z}_{i+k}, $$ for $k$ an integer between $1$ and $n-1$, and with the convention that $z_{n+j}=z_{j}$. Since the function $\sin(x)$ is concave for $0\leq x\leq\pi$ and $0\leq\frac12\widehat{z_{i}z}_{i+k}\leq\pi$, we have $$s_{k}\leq2n\sin\left(\frac{1}{2n}\sum_{i=1}^{n}\widehat{z_{i}z}_{i+k}\right)= 2n\sin\frac{k\pi}{n}.$$ On the other hand, $s_{k}=s_{n-k}$, and if $n$ is even, the sum $s_{n/2}$ contains the distance $|z_{i}-z_{i+\frac{n}{2}}|$ twice. Hence $2S=\sum_{k=1}^{n-1}s_{k}$, and thus $$S\leq n\sum_{k=1}^{n-1}\sin\frac{k\pi}{n}=n\cot\frac{\pi}{2n}.$$ By strict concavity of the sine function on $(0,\pi)$, equality can only occur when the $n$ points are regularly distributed on the circle (like the $n$-th roots of unity). The following reference may also be of interest: J.S. Brauchart, D.P. Hardin, E.B. Saff, The Riesz Energy of the $N$-th Roots of Unity: An Asymptotic Expansion for Large $N$, Bull. London Math. Soc. 41 (2009), no.
4, 621-633; arXiv:0808.1291, DOI: 10.1112/blms/bdp034.
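As a numerical sanity check (ours, not part of the original answer), one can compare the pairwise-distance sum for the $n$-th roots of unity — the extremal configuration identified above — against the bound $n\cot\frac{\pi}{2n}$:

```python
import cmath
import math

def pairwise_distance_sum(points):
    """Sum of |z_i - z_j| over unordered pairs i < j."""
    return sum(abs(p - q)
               for i, p in enumerate(points)
               for q in points[i + 1:])

for n in range(2, 12):
    roots = [cmath.exp(2j * math.pi * k / n) for k in range(n)]
    bound = n / math.tan(math.pi / (2 * n))  # n * cot(pi / (2n))
    # The roots of unity attain the bound, so S equals n*cot(pi/(2n)) exactly.
    assert math.isclose(pairwise_distance_sum(roots), bound, rel_tol=1e-9)

print(round(pairwise_distance_sum([1, -1]), 6))  # n = 2: S = 2
```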
\begin{document} \title{The tracial moment problem on quadratic varieties} \author{Abhishek Bhardwaj${}^1$} \address{Mathematical Sciences Institute, The Australian National University, Union Lane, Canberra ACT 2601} \email{[email protected]} \thanks{${}^1$Supported by the Australian Research Council ACEMS IMP Grant (Project ID: CE140100049).} \author{Alja\v z Zalar${}^2$} \address{Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, 1000 Ljubljana, Slovenia} \email{[email protected]} \thanks{${}^2$Supported by the Slovenian Research Agency grants P1-0288 and J1-8132.} \subjclass[2010]{Primary 47A57, 15A45, 13J30; Secondary 11E25, 44A60, 15-04.} \date{\today} \keywords{Truncated moment problem, noncommutative polynomial, moment matrix, affine linear transformations, flat extensions.} \begin{abstract} The truncated moment problem asks to characterize finite sequences of real numbers that are the moments of a positive Borel measure on $\mathbb R^n$. Its tracial analog is obtained by integrating traces of symmetric matrices and is the main topic of this article. The solution of the bivariate quartic tracial moment problem with a nonsingular $7\times 7$ moment matrix $\mathcal M_2$ whose columns are indexed by words of degree 2 was established by Burgdorf and Klep, while in our previous work we completely solved all cases with $\mathcal M_2$ of rank at most 5, split $\mathcal M_2$ of rank 6 into four possible cases according to the column relation satisfied and solved two of them. Our first main result in this article is the solution for $\mathcal M_2$ satisfying the third possible column relation, i.e., $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$. Namely, the existence of a representing measure is equivalent to the feasibility problem of certain linear matrix inequalities. The second main result is a thorough analysis of the atoms in the measure for $\mathcal M_2$ satisfying $\mathbb{Y}^2=\mathds 1$, the most demanding column relation.
We prove that size 3 atoms are not needed in the representing measure, a fact proved to be true in all other cases. The third main result extends the solution for $\mathcal M_2$ of rank 5 to general $\mathcal M_n$, $n\geq 2$, with two quadratic column relations. The main technique is the reduction of the problem to the classical univariate truncated moment problem, an approach which applies also in the classical truncated moment problem. Finally, our last main result, which demonstrates this approach, is a simplification of the proof for the solution of the degenerate truncated hyperbolic moment problem first obtained by Curto and Fialkow. \end{abstract} \maketitle \section{Introduction} The moment problem (MP) is a classical question in analysis which asks when a linear functional can be represented as integration; equivalently, given a sequence of numbers $\beta$, does there exist a positive measure $\mu$ such that $\beta$ represents the moments of $\mu$? This problem is well studied in one dimension (on $\mathbb{R}$; see \cite{Akh65,KN77} for instance), while a general solution on $\mathbb R^n$, Haviland's theorem \cite{Hav35}, provides a duality with positive polynomials and relates the MP to real algebraic geometry (RAG). Renewed interest into the MP in RAG came with Schm{\"u}dgen's solution \cite{Sch91} to the MP over compact semi-algebraic sets; for further results we refer the reader to \cite{Put93,PV99, DP01, PS01, PS06, PS08, Mar08,Lau09}. This duality of the MP with positive polynomials has been efficiently used by several authors for approximating global optimization problems, most notably Lasserre \cite{Las01, Las09} and Parrilo \cite{Par03}, while recently it has also been useful in understanding solutions of differential equations \cite{MLH11}. 
There are also many noncommutative generalizations of the MP; the MP for matrix and operator polynomials are considered in \cite{AV03,Vas03,BW11,CZ12,KW13}, the quantum MP in \cite{DLTW08}, free versions of the MP \cite{McC01,Hel02,HM04,HKM12} are the domain of free RAG, while in this paper we are interested in the tracial MP \cite{BK12, BK10}. The multi-dimensional truncated moment problem (TMP), which is more general than the full MP \cite{Sto01}, has been intensively studied in the seminal works of Curto and Fialkow \cite{CF91, CF96, CF98-1, CF98-2,CF08}, with the functional calculus they developed for MP becoming an essential tool for studying moment problems. The bivariate quartic MP is completely solved \cite{CF02, CF04, CF05, CF08,FN10,CS16}, while the sextic has been closely investigated \cite{CFM08, Yoo11,CS15,Fia17}. Recently, the introduction of the core variety provided new results toward the solution of the sextic MP \cite{Fia17,BF+,Sch17,DS18}. Using convex geometry techniques new sufficient condition for the solvability of the TMP are established also in \cite{Ble15}. The truncated tracial moment problem (TTMP), which is the topic of this paper, is the study of linear functionals on the space of non-commutative polynomials that can be represented as traces of evaluations on convex combinations of tuples of real symmetric matrices. It was introduced by Burgdorf and Klep in \cite{BK12, BK10}, where the authors demonstrated its duality with trace-positive polynomials. This duality connects the TTMP to many interesting and important problems such as Connes' embedding conjecture in operator algebras \cite{Con76,KS08-1}, or the now proved BMV conjecture \cite{BMV75,KS08-2, Sta13, Bur11}. Furthermore, \cite{BK12} established tracial analogues of the results of Curto and Fialkow, relating the solution of the TTMP to flat extension of the associated moment matrix (see Subsection \ref{BQTMP} for terminology and definitions). 
For bivariate quartic tracial sequences, an affirmative answer to the TTMP was given in \cite{BK10} when the tracial moment matrix is nonsingular. Just like the classic TMP, the TTMP is deeply intertwined with optimization of noncommutative polynomials. In \cite{BCKP13} it is shown how minimizing the trace of a noncommutative polynomial evaluated on matrices of some size gives rise to the TTMP. In fact, \cite{BCKP13, BKP16} illustrates how the solution of the TTMP can be used to extract optimizers in this setting. Inspired by the work of Burgdorf and Klep and Curto and Fialkow, we studied the bivariate quartic TTMP having a singular $(7\times7)$ tracial moment matrix $\mathcal{M}_{2}$ in \cite{BZ18}. Following the approach of Curto and Fialkow, we analyzed the moment matrix based on its rank, giving a complete classification when the rank is at most five. When the rank is six, we reduced the problem to four canonical cases, gave a characterization of when a flat extension exists and in two cases also proved the existence of a representing measure to be equivalent to the solvability of some linear matrix inequalities. Moreover we gave explicit examples showing that, unlike in the commutative setting, the existence of a representing measure is mostly \emph{not} equivalent to the existence of a flat extension of the moment matrix. This article presents new results in the remaining cases of our analysis of the singular quartic bivariate TMP and expands many of the results from degree four to arbitrary degree. We next present the \emph{Bivariate} TTMP and some basic concepts and definitions. We then give an organization of the paper and a summary of our main results. \subsection{Bivariate truncated tracial moment problem} \label{BQTMP} In this subsection, we make our problem of study precise and introduce basic definitions used throughout this article. 
\subsubsection{Noncommutative bivariate polynomials} We denote by $\left\langle X,Y\right\rangle$ the \textbf{free monoid} generated by the noncommuting letters $X,Y$ and call its elements \textbf{words} in $X, Y$. For a word $w\in \left\langle X,Y\right\rangle$, $w^{\ast}$ is its reverse, and $v\in\left\langle X,Y\right\rangle$ is \textbf{cyclically equivalent} to $w$, which we denote by $\displaystyle{v\overset{\cyc}{\sim} w}$, if and only if $v$ is a cyclic permutation of $w$. Consider the free algebra $\mathbb{R}\!\left\langle X,Y\right\rangle$ of polynomials in $X,Y$ with coefficients in $\mathbb{R}$. Its elements are called \textbf{noncommutative (nc) polynomials}. Endow $\mathbb{R}\!\left\langle X,Y\right\rangle$ with the involution $p \mapsto p^{*}$ fixing $\mathbb{R} \cup \{ X,Y\}$ pointwise. The length of the longest word in a polynomial $f\in \mathbb{R}\!\left\langle X,Y \right\rangle$ is the \textbf{degree} of $f$ and is denoted by $\deg(f)$ or $|f|$. We write $\mathbb{R}\!\left\langle X,Y\right\rangle_{\leq k}$ for all polynomials of degree at most $k$. For a \textbf{nc} polynomial $f$, its \textbf{commutative collapse} $\check{f}$ is obtained by replacing the \textbf{nc} variables $X,Y$, with commutative variables $x,y$, and similarly for words $w\in\langle X,Y\rangle$. 
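To illustrate this notation with a small example of ours (not taken from the source), consider the word $w=XXY$. Then $$w^{\ast}=YXX,\qquad w\overset{\cyc}{\sim}XYX\overset{\cyc}{\sim}YXX,\qquad \deg(w)=3,\qquad \check{w}=x^{2}y,$$ so the three words $XXY$, $XYX$ and $YXX$ form one cyclic equivalence class, all sharing the commutative collapse $x^2y$.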
\subsubsection{Bivariate truncated real tracial moment problem} Given a sequence of real numbers $\beta\equiv \beta^{(2n)}=(\beta_w)_{|w|\leq 2n}$, indexed by words $w$ of length at most $2n$ such that \begin{equation}\label{cyclic-cond} \beta_{v}=\beta_{w}\quad\text{whenever } v\overset{\cyc}{\sim} w \quad \text{and}\quad \beta_w=\beta_{w^\ast}\quad\text{for all } |w|\leq 2n, \end{equation} the \textbf{bivariate truncated real tracial moment problem (BTTMP)} for $\beta$ asks to find conditions for the existence of $N\in\mathbb{N}$, $t_i\in\mathbb{N}$, $\lambda_i\in \mathbb R_{> 0}$ with $\sum_{i=1}^N \lambda_i=1$ and pairs of real symmetric matrices $(A_{i},B_{i})\in (\mathbb{SR}^{t_i\times t_i})^2$, such that \begin{equation}\label{truncated tracial moment sequence special-biv} \beta_{w}=\sum_{i=1}^N \lambda_i \text{Tr}(w(A_{i},B_{i})), \end{equation} where $w$ runs over the indices of the sequence $\beta$ and $\text{Tr}$ denotes the \textbf{normalized trace}, i.e., $$ \text{Tr}(A)=\frac{1}{t}\mathrm{tr}(A)\quad \text{for every } A\in \mathbb R^{t\times t}. $$ If such data exist, we say that $\beta$ admits a representing measure. If $\beta_1=1$, then we say $\beta$ is \textbf{normalized}. We may always assume that $\beta$ is normalized (otherwise we replace $\mathrm{Tr}$ with $\frac{1}{\beta_1}\mathrm{Tr}$). The vectors $(A_i,B_i)$ are \textbf{atoms of size $t_i$} and the numbers $\lambda_i$ are \textbf{densities}. We say that $\mu$ is a representing measure of \textbf{type} $(m_1,m_2,\ldots,m_r)$ if it consists of exactly $m_i\in \mathbb N\cup\{0\}$ atoms of size $i$ and $m_r\neq 0$. 
A representing measure of type $(m_1^{(1)},m_2^{(1)},\ldots,m_{r_1}^{(1)})$ is \textbf{minimal} if there does not exist another representing measure of type $(m_1^{(2)}$, $m_2^{(2)}$,$\ldots$, $m_{r_2}^{(2)})$ such that $$r_2<r_1\quad \text{or} \quad (r:=r_1=r_2\quad \text{and} \quad (m_{r}^{(2)},m_{r-1}^{(2)},\ldots,m_{1}^{(2)}) \prec _{\text{lex}} (m_{r}^{(1)},m_{r-1}^{(1)},\ldots,m_{1}^{(1)})),$$ where $\prec _{\text{lex}}$ denotes the usual lexicographic order on $(\mathbb N\cup\{0\})^r$. We say that $\beta$ admits a \textbf{noncommutative (nc) measure} if it admits a minimal measure of type $(m_1,m_2,\ldots,m_r)$ with $r>1$. If $\beta_{w}=\beta_{\check{w}}$ for all $w\in\langle X,Y\rangle$, we call $\beta$ a \textbf{commutative (cm) sequence} and the MP reduces to the classical one studied by Curto and Fialkow. Otherwise we call $\beta$ a \textbf{noncommutative (nc) sequence}. \begin{remark} \begin{enumerate} \item Note that replacing a vector $(A_i,B_i)$ with any vector $$(U_i A_i U_i^t , U_i B_i U_i^{t})\in (\mathbb{SR}^{t_i\times t_i})^2$$ where $U_i\in \mathbb R^{t_i\times t_i}$ is an orthogonal matrix, preserves (\ref{truncated tracial moment sequence special-biv}). \item By the tracial version \cite[Theorem 3.8]{BCKP13} of the Bayer-Teichmann theorem \cite{BT06}, the problem (\ref{truncated tracial moment sequence special-biv}) is equivalent to the more general problem of finding a probability measure $\mu$ on $(\mathbb{SR}^{t\times t})^2$ such that $\beta_{w}=\int_{(\mathbb{SR}^{t\times t})^2} \mathrm{Tr}(w(A,B))\; {\rm d}\mu(A,B)$.
\end{enumerate} \end{remark} We associate to the sequence $\beta^{(2n)}$ the \textbf{truncated moment matrix of order $n$}, defined by $$\mathcal{M}_n:=\mathcal{M}_n(\beta^{(2n)})=(\beta_{w_1^{\ast}w_2})_{|w_1|\leq n,|w_2|\leq n},$$ where the rows and columns are indexed by words in $\mathbb R\!\langle X,Y\rangle_{\leq n}$ in graded lexicographic order with $X$ being smaller than $Y$, e.g., for $n=2$ we have $$\mathds 1\prec _{\text{lex}} \mathbb{X} \prec _{\text{lex}} \mathbb{Y} \prec _{\text{lex}} \mathbb{X}^2 \prec _{\text{lex}} \mathbb{X}\mathbb{Y} \prec _{\text{lex}} \mathbb{Y}\mathbb{X} \prec _{\text{lex}} \mathbb{Y}^2.$$ \noindent Observe that the matrix $\mathcal{M}_n$ is symmetric. The following is a well-known necessary condition for the existence of a measure in the classical commutative moment problem and easily extends to the tracial case. \begin{proposition} \label{Mn-psd} If $\beta^{(2n)}$ admits a measure, then $\mathcal{M}_n$ is positive semidefinite. \end{proposition} Let $(X,Y)\in (\mathbb{SR}^{t\times t})^{2}$ where $t\in \mathbb N$. We denote by $\mathcal{M}^{(X,Y)}_n$ the moment matrix generated by $(X,Y)$, i.e., $\beta_{w(X,Y)}=\mathrm{Tr}(w(X,Y))$ for every $|w(X,Y)|\leq 2n$. \subsection{Results and Readers Guide} We present the four major contributions in this article. \subsubsection{TTMP to LMI} Firstly, in \cite[Corollaries 7.6 and 7.9]{BZ18} we proved that the existence of a nc measure for $\mathcal{M}_2$ of rank 6 satisfying one of the relations $\mathbb{Y}^2=\mathds 1-\mathbb{X}^2$ or $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0$ is equivalent to the feasibility problem of three linear matrix inequalities and a rank-to-cardinality condition (a necessity arising from a cm moment problem). 
A core component of the proof was to show that when $\beta_X=\beta_Y=\beta_{X^3}=\beta_{X^2Y}=\beta_{Y^3}=0$ we have the following result (see \cite[Theorems 7.5 (1), 7.8 (1)]{BZ18}):\\ \begin{addmargin}[2em]{2em} \textit{ For the smallest $\alpha>0$ such that $\Rank\left( \mathcal{M}_{2} - \alpha W \right) < \Rank \left(\mathcal{M}_{2}\right)$, the matrix $\mathcal{M}_{2}-\alpha W$ admits a measure, where $W = \left(\mathcal M_2^{(1,0)}+\mathcal M_2^{(-1,0)}\right)$ for $\mathbb{Y}^{2} = \mathds{1}-\mathbb{X}^{2}$ (resp. $W =\mathcal M_2^{(0,0)}$ for $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0$)}.\\ \end{addmargin} \noindent Applying the same method of subtracting $\alpha\left(\mathcal M_2^{(0,1)}+\mathcal M_2^{(0,-1)}\right)$ in the case of the relation $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$ does not always work. Nevertheless, in Section \ref{S1} we show that there does in fact exist a matrix $W$ such that the result above \emph{always} holds also for the relation $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2.$ The matrix $W$ is constructed as a sum of moment matrices generated by carefully chosen commutative atoms (see \eqref{subtract-part}). Consequently, we are able to reformulate the existence of a nc measure for a rank 6 $\mathcal{M}_{2}$ satisfying the relation $\mathbb{Y}^{2}=\mathds{1}+\mathbb{X}^{2}$, into feasibility problems of LMI's and a rank-to-cardinality condition. 
\subsubsection{Size of Atoms} Secondly, in \cite[Proposition 4.1 (2)]{BZ18} we proved that the moment sequence $\beta^{(4)}$ with a moment matrix $\mathcal M_2$ of rank 6 can always be transformed by using an appropriate affine linear transformation to a moment sequence $\widetilde{\beta}^{(4)}$, with $\widetilde{\mathcal M}_2$ satisfying one of the four canonical relations \begin{equation}\label{poss-rel} \mathbb{Y}^2=\mathds 1 - \mathbb{X}^2, \quad\text{or}\quad\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0,\quad\text{or}\quad\mathbb{Y}^2=\mathds 1+\mathbb{X}^2, \quad\text{or}\quad\mathbb{Y}^2=\mathds 1. \end{equation} In the first three cases we showed that we may assume that the nc atoms $(X_i,Y_i)\in \big(\mathbb S\mathbb R^{t_i'\times t_i'}\big)^2$, $t_{i}'>1$, have an elegant form, i.e., \begin{equation}\label{atoms-nice} X_i=\left(\begin{array}{cc} \gamma_i I_{t_i}& B_i \\ B_i^t & -\gamma_i I_{t_i}\end{array}\right),\quad Y_i=\left(\begin{array}{cc} \mu_i I_{t_i} & 0 \\ 0 & -\mu_i I_{t_i}\end{array}\right), \end{equation} where $\gamma_{i}\geq 0$, $\mu_{i}>0$, $B_i$ is a matrix of size $t_i$ (see \cite[Proposition 5.1]{BZ18}). Since $Y_i^2=\mu_i^2 I_{t_i'}$, $\mathcal M_2^{(X_i,Y_i)}$ is of rank at most 5 and hence admits a measure of type $(m,1)$, $m\in \{1,2,3\}$, by \cite[\S6]{BZ18}. In the fourth relation of \eqref{poss-rel} the nc atoms need not be of the form \eqref{atoms-nice}, making this case particularly difficult. In Section \ref{S2} we thoroughly analyze the possible atoms in a representing measure, and prove that atoms of size 3 are not needed. \subsubsection{Extensions to order $n$} Thirdly, in Section \ref{S3} we extend our results from $\mathcal{M}_{2}$ of rank 5 to $\mathcal{M}_{n}$ with $n\in\mathbb N$. The main idea is as follows.
By first applying an affine linear transformation to $\mathcal{M}_n$ we may assume that it satisfies the relation \begin{equation} \label{first-rel-intro} \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0, \end{equation} and one of the relations \begin{equation}\label{second-rel-intro} \mathbb{Y}^2=\mathds 1 - \mathbb{X}^2, \quad\text{or}\quad\mathbb{Y}^2=\mathds 1, \quad\text{or}\quad\mathbb{Y}^2=\mathds 1+\mathbb{X}^2, \quad\text{or}\quad\mathbb{Y}^2=\mathbb{X}^2. \end{equation} Due to \eqref{first-rel-intro}, all the moments $\beta_{X^iY^j}$ with one of the exponents $i,j$ odd and the other nonzero are equal to zero (see Lemma \ref{zero-moment-general2}). Additionally, since the nc atoms do not contribute anything to the moments $\beta_X$ and $\beta_Y$ (see Lemma \ref{zero-moment-general}), these two moments must be represented by size 1 atoms in the measure. There are at most 4 size 1 atoms satisfying \eqref{first-rel-intro} and \eqref{second-rel-intro}, thus there is (under the L{\"o}wner partial ordering) a smallest cm matrix $M$ satisfying $\beta_X^M=\beta_X$, $\beta_Y^M=\beta_Y$. Subtracting this matrix from $\mathcal M_n$ we end up with two classical univariate truncated moment problems, one on the rows/columns $\{\mathds 1, \mathbb{X}, \mathbb{X}^2,\ldots,\mathbb{X}^n\}$ and the other on $\{\mathds 1, \mathbb{Y}, \mathbb{X}\mathbb{Y}, \mathbb{X}^2\mathbb{Y},\ldots,\mathbb{X}^{n-1}\mathbb{Y}\}.$ It turns out that solving the first one also solves the second one due to their connection coming from \eqref{second-rel-intro}. \subsubsection{Reduction of the TMP on degenerate hyperbolas} Finally, in Section \ref{S4} we give a simplified proof for the solution of the TMP on degenerate hyperbolas which was discovered by Curto and Fialkow \cite[Theorem 3.1]{CF05}. The idea for the proof, inspired by the extension results from Section \ref{S3}, is to reduce the bivariate TMP down to the univariate one.
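The vanishing of the mixed moments used in this argument can be verified on a concrete atom. The sketch below uses a hypothetical $2\times 2$ pair (diagonal traceless $X$, antidiagonal $Y$, chosen only so that $XY+YX=0$) and checks that $\mathrm{Tr}(X^iY^j)$ vanishes whenever exactly one of the exponents is odd and the other is nonzero:

```python
# A hypothetical 2x2 atom satisfying X*Y + Y*X = 0: X is diagonal and
# traceless, Y is antidiagonal.  We check that Tr(X^i Y^j) = 0 whenever
# exactly one of i, j is odd and the other is nonzero.
a, b = 2.0, 3.0
X = [[a, 0.0], [0.0, -a]]
Y = [[0.0, b], [b, 0.0]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(A, n):
    P = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for _ in range(n):
        P = mul(P, A)
    return P

def tr(A):
    return A[0][0] + A[1][1]

# The anticommutation relation holds for this pair.
XY, YX = mul(X, Y), mul(Y, X)
assert all(abs(XY[i][j] + YX[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# Mixed moments with one odd exponent and the other nonzero all vanish:
# e.g. X*Y^2 = b^2*X and X^2*Y = a^2*Y are traceless.
for i, j in [(1, 2), (2, 1), (3, 2), (2, 3), (1, 4), (4, 1)]:
    assert abs(tr(mul(power(X, i), power(Y, j)))) < 1e-12
print("all checked mixed moments vanish")
```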
\begin{remark} The reduction of the bivariate TMP to the univariate one can also be used in some other cases of the quartic TMP and is also very efficient beyond quadratic column relations. We will present this approach in our future work \cite{BZ+} where we study the TMP with column relations of higher degrees. \end{remark} \noindent \textbf{Acknowledgement.} The authors would like to thank Igor Klep for insightful discussions and comments on the preliminary versions of this article. \section{Preliminaries} In this section we present elementary results for the tracial moment problem and establish some additional notation. Many of these are direct analogues of the corresponding results in the commutative setting. \subsection{Support of a measure and RG relations} \label{supp-and-RG} Let $A$ be a matrix with its rows and columns indexed by words in $\mathbb{R}\langle X,Y\rangle_{\leq n}$. For a word $w$ in $\mathbb R\!\langle X,Y\rangle_{\leq n}$ we denote by $w(\mathbb{X,Y})$ the column of $A$ indexed by $w$. We write $[A]_{\{R,C\}}$ for the compression of $A$ to the rows and columns indexed by the elements of $R$ and $C$, respectively, where $R,C \subset \mathbb R\langle X,Y\rangle_{\leq n}$ are subsets of words. When $R=C$, we simply write $[A]_{R}$. $\mathbf{0_{k_1\times k_2}}$ stands for the $k_1\times k_2$ matrix with zero entries. Usually we will omit the subscript $k_1\times k_2$ when the size is clear from the context. Let $\mathcal{C}_{\mathcal{M}_n}$ denote the span of the columns of $\mathcal{M}_n$, i.e., $$ \mathcal{C}_{\mathcal{M}_{n}} = \Span \left\{ w(\mathbb{X}, \mathbb{Y}) : w\in\mathbb{R}\!\left\langle X,Y \right\rangle_{\leq n} \right\}= \Span\left\{ \mathds{1},\mathbb{X},\mathbb{Y},\mathbb{X}^{2},\mathbb{X}\mathbb{Y},\mathbb{Y}\mathbb{X},\mathbb{Y}^{2},\dotsc,\mathbb{X}^{n},\dotsc,\mathbb{Y}^{n} \right\}.
$$ For a polynomial $p\in \mathbb{R}\!\langle X,Y\rangle _{\leq n}$ of the form $p=\sum_w a_{w}w(X,Y)$, we define $$p(\mathbb{X},\mathbb{Y})=\sum_w a_{w} w(\mathbb{X},\mathbb{Y})$$ and notice that $p(\mathbb{X},\mathbb{Y})\in \mathcal{C}_{\mathcal{M}_n}$. We express linear dependencies among the columns of $\mathcal{M}_n$ as $$p_1(\mathbb{X},\mathbb{Y})=\mathbf 0,\ldots, p_m(\mathbb{X},\mathbb{Y})=\mathbf 0,$$ for some $p_1,\ldots,p_m\in\mathbb{R}\!\langle X,Y\rangle _{\leq n}$, $m\in \mathbb N\cup\{0\}$. We define the \textbf{free zero set} $\mathcal{Z}(p)$ of $p\in\mathbb{R}\!\langle X,Y\rangle$ by $$\mathcal{Z}(p):=\left\{ (A,B)\in(\mathbb{SR}^{t\times t})^{2} : t\in \mathbb N,\; p(A,B)=\mathbf 0_{t\times t}\right\}.$$ \begin{theorem}\cite[Theorem 2.2]{BZ18}\label{support lemma} Suppose $\beta^{(2n)}$ admits a representing measure consisting of finitely many atoms $(X_i,Y_i)\in (\mathbb{SR}^{t_i\times t_i})^2$, $t_i\in \mathbb N$, with the corresponding densities $\lambda_i\in (0,1)$, $i=1,\ldots, r$, $r\in \mathbb N$. Let $p\in\mathbb{R}\!\left\langle X,Y \right\rangle_{\leq n}$ be a polynomial. Then the following are true: \begin{enumerate} \item\label{point-1-support} We have $$\bigcup_{i=1}^r\; (X_i,Y_i)\subseteq \mathcal{Z}(p) \quad \Leftrightarrow \quad p(\mathbb{X},\mathbb{Y})=\mathbf 0\; \text{ in }\mathcal{M}_n.$$ \item\label{point-2-support} Suppose the sequence $\beta^{(2n+2)}=(\beta_w)_{|w|\leq n+1}$ is the extension of $\beta$ generated by $$\beta_w=\sum_{i=1}^r \lambda_i \mathrm{Tr}(w(X_i,Y_i)).$$ Let $\mathcal{M}_{n+1}$ be the corresponding moment matrix. 
Then: $$p(\mathbb{X},\mathbb{Y})=\mathbf 0\;\text{ in }\mathcal{M}_n\quad \Rightarrow \quad p(\mathbb{X},\mathbb{Y})=\mathbf 0 \;\text{ in }\mathcal{M}_{n+1}.$$ \item\label{point-3-support} (Recursive generation) For $q \in\mathbb{R}\!\left\langle X,Y \right\rangle_{\leq n}$ such that $pq \in\mathbb{R}\!\left\langle X,Y \right\rangle_{\leq n}$, we have \begin{equation*} \label{pq-lemma} p(\mathbb{X},\mathbb{Y})=\mathbf{0}\;\text{ in }\mathcal{M}_n\quad\Rightarrow\quad (pq)(\mathbb{X},\mathbb{Y})=(qp)(\mathbb{X},\mathbb{Y})=\mathbf{0}\;\text{ in }\mathcal{M}_n. \end{equation*} \end{enumerate} \end{theorem} Column relations arising in $\mathcal{M}_n$ through an application of Theorem \ref{support lemma} (\ref{point-3-support}) are called \textbf{RG relations}. If $\mathcal M_n$ satisfies RG relations, we say $\mathcal{M}_n$ is \textbf{recursively generated}. The first consequence of the RG relations is the following important observation about a nc moment matrix $\mathcal{M}_n$. \begin{corollary}\cite[Corollaries 2.3, 2.4]{BZ18} \label{lin-ind-of-4-col} Suppose $n\geq 2$ and let $\beta^{(2n)}$ be a sequence such that $\beta_{X^2Y^2}\neq \beta_{XYXY}$. Then the columns $\mathds 1, \mathbb{X}, \mathbb{Y}, \mathbb{X}\mathbb{Y}$ of $\mathcal{M}_n$ are linearly independent. Hence, if $\mathcal{M}_n$ is of rank at most 3 with $\beta_{X^2Y^2}\neq \beta_{XYXY}$, then $\beta$ does not admit a representing measure. \end{corollary} \subsection{Flat extensions} \label{flat-ext-prel} For a matrix $A\in \mathbb{S}\mathbb R^{s\times s}$, an \textbf{extension} $\widetilde{A}\in \mathbb{S}\mathbb R^{(s+u)\times (s+u)}$ of the form $$\widetilde A=\begin{pmatrix} A & B \\ B^t & C \end{pmatrix}$$ for some $B\in \mathbb R^{s\times u}$ and $C\in \mathbb R^{u\times u}$, is called $\textbf{flat}$ if $\Rank(A)=\Rank(\widetilde A)$. By a result of \cite{Smu59}, this is equivalent to saying that there is a matrix $W\in \mathbb R^{s\times u}$ such that $B=AW$ and $C=W^t A W$.
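The Šmul'jan criterion quoted above can be checked directly: for any $A$ and $W$, setting $B=AW$ and $C=W^tAW$ forces the bordered matrix to have the same rank as $A$. A minimal sketch, with hypothetical rational data and exact arithmetic:

```python
from fractions import Fraction as F

def rank(M):
    # Exact rank over the rationals via Gaussian elimination.
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Hypothetical data: a 2x2 block A and a 2x1 matrix W.
A = [[F(2), F(1)], [F(1), F(1)]]
W = [[F(1)], [F(2)]]

B = matmul(A, W)             # B = A W
C = matmul(transpose(W), B)  # C = W^t A W

# Assemble the bordered matrix [[A, B], [B^t, C]].
Bt = transpose(B)
ext = [A[0] + B[0], A[1] + B[1], Bt[0] + C[0]]

assert rank(ext) == rank(A)  # the extension is flat
print(rank(A), rank(ext))    # -> 2 2
```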
Flat extensions provide an approach to solving the BTTMP via the following theorem. \begin{theorem} \cite[Theorem 3.19]{BK12} \label{flat-meas} Let $\beta\equiv\beta^{(2n)}$ be a sequence satisfying (\ref{cyclic-cond}). If $\mathcal M_n(\beta)$ is psd and is a flat extension of $\mathcal M_{n-1}(\beta)$, then $\beta$ admits a representing measure. \end{theorem} \subsection{Riesz functional and affine linear transformations} \label{affine linear-trans} Any sequence $\beta^{(2n)}$ which satisfies \eqref{cyclic-cond} defines the \textbf{Riesz functional} $L_{\beta^{(2n)}}:\mathbb{R}\!\left\langle X,Y\right\rangle_{\leq 2n}\to \mathbb R$ by $$ L_{\beta^{(2n)}}(p):=\sum_{|w|\leq 2n} a_{w}\beta_{w},\quad \text{where }p=\sum_{|w|\leq 2n} a_{w}w. $$ Notice that $$\beta_w=L_{\beta^{(2n)}}(w)\quad \text{for every }|w|\leq 2n.$$ An important result for converting a given moment problem into a simpler, equivalent one is the application of affine linear transformations to a sequence $\beta$. For non-commuting letters $X,Y$ and $a,b,c,d,e,f\in \mathbb R$ with $bf-ce \neq 0$, let us define \begin{equation}\label{trans-form} \phi(X,Y)=(\phi_1(X,Y),\phi_2(X,Y)):=(a +bX+cY,d +eX+fY). \end{equation} Let $\widetilde \beta^{(2n)}$ be the sequence obtained by the rule \begin{equation}\label{lin-trans-orig} \widetilde \beta_{w}=L_{\beta^{(2n)}}(w\circ\phi(X,Y))\quad \text{for every }|w|\leq 2n. \end{equation} Notice that $$L_{\widetilde\beta^{(2n)}}(p) =L_{\beta^{(2n)}}(p\circ \phi(X,Y))\quad \text{for every }p \in \mathbb{R}\!\langle X,Y\rangle_{\leq 2n}.$$ For a polynomial $p\in \mathbb{R}\!\langle X,Y\rangle_{\leq 2n}$ let $\widehat{p}=(a_{w})_{w}$ be its coefficient vector with respect to the lexicographically-ordered words in $\mathbb R\!\left\langle X,Y\right\rangle_{\leq 2n}$. The following proposition allows us to make affine linear changes of variables.
\begin{proposition}\cite[Proposition 2.6]{BZ18} \label{linear transform invariance-nc} Suppose $\beta^{(2n)}$ and $\widetilde{\beta}^{(2n)}$ are as above with the corresponding moment matrices $\mathcal{M}_n$ and $\widetilde{\mathcal{M}}_n$, respectively. Let $J_\phi: \mathbb R\!\left\langle X,Y\right\rangle_{\leq 2n}\to \mathbb R\!\left\langle X,Y\right\rangle_{\leq 2n}$ be the linear map given by $$J_\phi \widehat p:=\widehat{p\circ \phi}.$$ Then the following hold: \begin{enumerate} \item $\widetilde{\mathcal{M}}_n=(J_\phi)^t \mathcal{M}_n J_\phi.$ \item $J_\phi$ is invertible. \item $\widetilde{\mathcal{M}}_n\succeq 0 \Leftrightarrow \mathcal{M}_n\succeq 0.$ \item $\Rank (\widetilde{\mathcal{M}}_n)=\Rank (\mathcal{M}_n).$ \item\label{invariance-point5} The formula $\mu =\tilde \mu \circ\phi$ establishes a one-to-one correspondence between the sets of representing measures of $\beta$ and $\tilde \beta$, and $\phi$ maps $\mathrm{supp}(\mu)$ bijectively onto $\mathrm{supp}(\tilde \mu)$. \item \label{flat-ext-M_n} $\mathcal{M}_n$ admits a flat extension if and only if $\widetilde{\mathcal{M}}_n$ admits a flat extension. \end{enumerate} \end{proposition} \section{$\mathcal M_2$ of rank 6 with relation $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$} \label{S1} We show in this section that for $\mathcal{M}_{2}$ of rank 6 which satisfies the relation $\mathbb{Y}^{2}=\mathds{1}+\mathbb{X}^{2}$, the existence of a representing measure is equivalent to the feasibility of three LMI's and a rank-to-cardinality condition. \begin{theorem} \label{M(2)-bc3-r6-new1-cor} Suppose $\beta\equiv \beta^{(4)}$ is a normalized nc sequence with a moment matrix $\mathcal{M}_2$ of rank 6 satisfying the relation $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$.
Let $L(a,b,c,d,e)$ be the following linear matrix polynomial \begin{equation*} \begin{blockarray}{cccccccc} & \mathds{1}&\mathbb{X}&\mathbb{Y} &\mathbb{X}^2&\mathbb{X}\mathbb{Y}&\mathbb{Y}\mathbb{X}&\mathbb{Y}^{2}\\ \begin{block}{c(ccccccc)} \mathds 1& a & \beta_X & \beta_Y & b & c & c & a+b \\ \mathbb{X} & \beta_X & b & c & \beta_{X^3} & \beta_{X^2Y} & \beta_{X^2Y} & \beta_X+\beta_{X^3} \\ \mathbb{Y} & \beta_{Y}& c & a+b & \beta_{X^2Y} & \beta_X+\beta_{X^3} & \beta_X+\beta_{X^3} & \beta_Y+\beta_{X^2Y} \\ \mathbb{X}^2 & b & \beta_{X^3} & \beta_{X^2Y} & d & e & e & b+d \\ \mathbb{X}\mathbb{Y} & c & \beta_{X^2Y} & \beta_X+\beta_{X^3} & e & b+d & b+d & c+e \\ \mathbb{Y}\mathbb{X} & c & \beta_{X^2Y} & \beta_X+ \beta_{X^3}& e & b+d & b+d & c+e \\ \mathbb{Y}^2 & a+b & \beta_X+\beta_{X^3} & \beta_{Y}+\beta_{X^2Y} & b+d & c+e & c+e & a+2b+d\\ \end{block} \end{blockarray}, \end{equation*} where $a,b,c,d,e\in \mathbb R$. Then $\beta$ admits a nc measure if and only if there exist $a,b,c,d,e\in \mathbb R$ such that \begin{enumerate} \item\label{point1-bc1-r6} $L(a,b,c,d,e)\succeq 0$, \item\label{point2-bc1-r6} $\mathcal{M}_2-L(a,b,c,d,e)\succeq 0$, \item\label{point3-bc1-r6} $(\mathcal{M}_2-L(a,b,c,d,e))_{\{\mathds 1, \mathbb{X}, \mathbb{Y}, \mathbb{X}\mathbb{Y}\}} \succ 0$, \item\label{point4-bc1-r6} $L(a,b,c,d,e)$ is recursively generated and $\Rank(L(a,b,c,d,e))\leq \Card \mathcal V_L$, where $$\mathcal V_L:=\displaystyle\bigcap_{ \substack{g\in \mathbb R[X,Y]_{\leq 2},\\ g(\mathbb{X},\mathbb{Y})=\mathbf 0\;\text{in}\;L(a,b,c,d,e)}} \left\{ (x,y)\in \mathbb R^2\colon g(x,y)=0 \right\}.$$ \end{enumerate} If $\beta$ admits a measure, then there exists a measure of type $(m,1)$, $m\in \{2,3,4,5\}$. In particular, $a,b,c,d,e$ satisfying \eqref{point1-bc1-r6}-\eqref{point4-bc1-r6} exist if \begin{equation}\label{suff-cond-meas} \beta_{X}=\beta_Y=\beta_{X^3}=\beta_{X^2Y}=\beta_{Y^3}=0. 
\end{equation} \end{theorem} Before proving Theorem \ref{M(2)-bc3-r6-new1-cor} we need some auxiliary results. The form of $\mathcal{M}_2$ is given by the following proposition. \begin{proposition} Let $\beta\equiv \beta^{(4)}$ be a nc sequence with a moment matrix $\mathcal{M}_2$ satisfying the relation \begin{equation}\label{r6-rel-bc3-eq} \mathbb{Y}^2=\mathds{1}+\mathbb{X}^2. \end{equation} Then $\mathcal{M}_2$ is of the form \begin{equation}\label{bc3-r6} \begin{mpmatrix} \beta_{1} & \beta_{X} & \beta_{Y} & \beta_{X^2} & \beta_{XY} & \beta_{XY} & \beta_{1}+\beta_{X^2} \\ \beta_{X} & \beta_{X^2} & \beta_{XY} & \beta_{X^3} & \beta_{X^2Y} & \beta_{X^2Y} & \beta_{X}+\beta_{X^3} \\ \beta_{Y} & \beta_{XY} & \beta_{1}+\beta_{X^2} & \beta_{X^2Y} & \beta_{X}+\beta_{X^3} & \beta_{X}+\beta_{X^3} & \beta_{Y}+\beta_{X^2Y} \\ \beta_{X^2} & \beta_{X^3} & \beta_{X^2Y} & \beta_{X^4} & \beta_{X^3Y} & \beta_{X^3Y} & \beta_{X^2}+\beta_{X^4} \\ \beta_{XY} & \beta_{X^2Y} & \beta_{X}+\beta_{X^3} & \beta_{X^3Y} & \beta_{X^2}+\beta_{X^4} & \beta_{XYXY} & \beta_{XY}+\beta_{X^3Y} \\ \beta_{XY} & \beta_{X^2Y} & \beta_{X}+\beta_{X^3} & \beta_{X^3Y} & \beta_{XYXY} & \beta_{X^2}+\beta_{X^4} & \beta_{XY}+\beta_{X^3Y} \\ \beta_{1}+\beta_{X^2} & \beta_{X}+\beta_{X^3} & \beta_{Y}+\beta_{X^2Y} & \beta_{X^2}+\beta_{X^4} & \beta_{XY}+\beta_{X^3Y} & \beta_{XY}+\beta_{X^3Y} & \beta_{1}+2 \beta_{X^2}+\beta_{X^4} \end{mpmatrix}. \end{equation} \end{proposition} \begin{proof} This is an easy computation using the relation (\ref{r6-rel-bc3-eq}). 
\begin{comment} The relation (\ref{r6-rel-bc3-eq}) gives us the following system in $\mathcal{M}_2$: \begin{multicols}{2} \begin{equation}\label{eq-bc3-r6} \begin{aligned} \beta_{Y^2} = \beta_{1}+\beta_{X^2},\\ \beta_{XY^2} = \beta_{X}+\beta_{X^3},\\ \beta_{Y^3} = \beta_{Y}+\beta_{X^2Y}, \end{aligned} \end{equation} \columnbreak \begin{equation*} \begin{aligned} \beta_{X^2Y^2} = \beta_{X^2}+\beta_{X^4},\\ \beta_{XY^3} = \beta_{XY}+\beta_{X^3Y},\\ \beta_{Y^4} = \beta_{Y^2}+\beta_{X^2Y^2}. \end{aligned} \end{equation*} \end{multicols} \noindent Plugging in the expressions for $\beta_{Y^2}$ and $\beta_{X^2Y^2}$ in the expression for $\beta_{Y^4}$ gives the form (\ref{bc3-r6}) of $\mathcal{M}_2$. \end{comment} \end{proof} \begin{lemma}\label{linear-trans} Suppose $\beta\equiv \beta^{(4)}$ is a normalized nc sequence with a positive semidefinite and recursively generated moment matrix $\mathcal M_2$ of rank 5 satisfying the relations \begin{equation}\label{starting-relation} \mathbb{Y}^2=\mathds 1+\mathbb{X}^2, \quad a\mathds 1+ d\mathbb{X}^2+e(\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X})=\mathbf{0}, \end{equation} for some $a,d,e\in \mathbb R$ which are not all zero. Then there is a linear transformation of the form \begin{equation}\label{lin-trans1} \phi(X,Y)=(\phi_1(X,Y),\phi_2(X,Y)):=(bX+cY,eX+fY), \end{equation} where $b,c,e,f\in \mathbb R$ satisfy $bf-ce \neq 0$, such that the sequence $\widetilde{\beta}^{(4)}$ obtained by the rule \eqref{lin-trans-orig} has a moment matrix $\widetilde{\mathcal M}_2$ satisfying the relation \begin{equation}\label{first-rel} \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0 \end{equation} and one of the relations \begin{equation}\label{second-rel} \mathbb{Y}^2=\mathds 1\quad \text{or}\quad \mathbb{X}^2+\mathbb{Y}^2=\mathds 1 \quad \text{or}\quad \mathbb{Y}^2-\mathbb{X}^2=\mathds 1.
\end{equation} \end{lemma} \begin{proof} We separate two cases according to $e$ in \eqref{starting-relation}.\\ \noindent \textbf{Case 1: }$e=0$.\\ First note that $d\neq 0$ in \eqref{starting-relation}, otherwise $a\mathds 1=\mathbf 0$ for $a\neq 0$ which is a contradiction since $\mathds 1\neq \mathbf 0$ ($\beta_1=1$). Hence we can rewrite \eqref{starting-relation} as $\mathbb{X}^2=\widetilde{a}\mathds 1$ where $\widetilde{a}\neq 0$. Therefore $\mathbb{Y}^2=(1+\widetilde{a})\mathds 1$. Since $\mathcal M_2$ is psd with a nonzero column $\mathbb{X}$ (otherwise $\Rank \mathcal M_2<5$), it follows that $0<[\mathcal M_2]_{\{\mathbb{X}\}}= \beta_{X^2}$. Thus also the column $\mathbb{X}^2$ is nonzero (since it contains $\beta_{X^2} $), which implies by $\mathcal M_2$ being psd that $0< [\mathcal M_2]_{\{\mathbb{X}^2\}}=\beta_{X^4}$. Hence from $0<\beta_{X^4}=\widetilde{a}\beta_{X^2}$, it follows that $\widetilde{a}>0$. Now applying the transformation $$\phi(X,Y)= \left( \frac{X}{2\sqrt{\widetilde a}}+\frac{Y}{2\sqrt{1+\widetilde a}} , \frac{Y}{2\sqrt{1+\widetilde a}}-\frac{X}{2\sqrt{\widetilde a}} \right)$$ to the moment sequence $\beta_{w},$ we get a moment sequence $\widetilde{\beta}_{w}$ with a moment matrix $\widetilde{\mathcal M}_2$ of rank 5 satisfying the relations \begin{equation}\label{rel0} \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0,\quad \mathbb{X}^2+\mathbb{Y}^2=\mathds 1. \end{equation} \noindent \textbf{Case 2: }$e\neq 0$.\\ Given the starting relations \eqref{starting-relation} we are in Case 2.4 in the proof of \cite[Proposition 4.1 (1)]{BZ18}. Following the proof we see that after using only transformations of the form \eqref{trans-form} we end up with a moment sequence $\widetilde{\beta}^{(4)}$ such that $\widetilde{\mathcal M}_2$ satisfies the relations \eqref{first-rel} and \eqref{second-rel}. Precise transformations can be found in Appendix \ref{append-transforms}. 
\end{proof} \begin{lemma}\label{linear-trans-on-zero-moments} Suppose $\beta\equiv \beta^{(4)}$ is a nc sequence satisfying $$\beta_{X}=\beta_Y=\beta_{X^3}=\beta_{X^2Y}=\beta_{Y^3}=0.$$ Let $\phi$ be a linear transformation defined by \begin{equation}\label{lin-trans} \phi(X,Y)=(\phi_1(X,Y),\phi_2(X,Y)):=(bX+cY,eX+fY), \end{equation} where $b,c,e,f\in \mathbb R$ satisfy $bf-ce \neq 0$. The sequence $\widetilde{\beta}^{(4)}$ obtained by the rule \eqref{lin-trans-orig} also satisfies $$\widetilde{\beta}_{X}=\widetilde{\beta}_Y=\widetilde{\beta}_{X^3}= \widetilde{\beta}_{X^2Y}=\widetilde{\beta}_{Y^3}=0.$$ \end{lemma} \begin{proof} This is an easy direct calculation. The details can be found in Appendix \ref{calc-for-lemma-zero-moments}. \end{proof} The following theorem characterizes normalized nc sequences $\beta$ with a moment matrix $\mathcal{M}_2$ of rank 6 satisfying the relation $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$, which admit a nc measure. \begin{theorem} \label{M(2)-bc3-r6-new1} Suppose $\beta\equiv \beta^{(4)}$ is a normalized nc sequence with a moment matrix $\mathcal{M}_2$ of rank 6 satisfying the relation $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$. Then $\beta$ admits a nc measure if and only if $\mathcal{M}_2$ is positive semidefinite and one of the following is true: \begin{enumerate} \item \label{M(2)-c3-r6-pt1} $\beta_{X}=\beta_Y=\beta_{X^3}=\beta_{X^2Y}=\beta_{Y^3}=0$. In this case there exists a nc measure of type $(m,1)$, $m\in \mathbb N$. 
\item \label{M(2)-c3-r6-pt2} There exist $$a_1\in (0,1),\quad a_2\in \left(-2\sqrt{a_1(1+a_1)}, 2\sqrt{a_1(1+a_1)}\right)$$ such that $$M:=\mathcal{M}_2-\xi\mathcal{M}^{(X,Y)}_2$$ is a positive semidefinite, recursively generated cm moment matrix satisfying $$\Rank M\leq \Card \mathcal V_M:= \displaystyle\bigcap_{\substack{g\in \mathbb R[X,Y]_{\leq 2},\\ g(\mathbb{X},\mathbb{Y})=\mathbf 0\;\text{in}\;M}} \left\{ (x,y)\in \mathbb R^2\colon g(x,y)=0 \right\},$$ where \begin{equation}\label{(X,Y)-form} X=\begin{pmatrix} \sqrt{a_1} & 0 \\ 0 & -\sqrt{a_1} \end{pmatrix},\quad Y=\sqrt{(1+a_1)} \begin{pmatrix} \frac{a}{2} & \frac{1}{2}\sqrt{4-a^2}\\ \frac{1}{2}\sqrt{4-a^2} & -\frac{a}{2} \end{pmatrix}, \end{equation} with $\displaystyle a=\frac{a_2}{\sqrt{a_1(1+a_1)}},$ and $\xi>0$ is the smallest positive number such that $$\Rank{\left(\mathcal{M}_2-\xi\mathcal{M}^{(X,Y)}_2\right)}< \Rank{\mathcal{M}_2}.$$ \end{enumerate} Moreover, if $\beta$ admits a measure, then there exists a measure of type $(m,1)$, $m\in \{2,3,4,5\}$. \end{theorem} \begin{proof} First we will prove (1). 
In this case $\mathcal{M}_2$ is of the form $$\begin{mpmatrix} 1 & 0 & 0 & \beta_{X^2} & \beta_{XY} & \beta_{XY} & 1+\beta_{X^2} \\ 0 & \beta_{X^2} & \beta_{XY} & 0 & 0 & 0 & 0 \\ 0& \beta_{XY} & 1+\beta_{X^2} & 0 & 0 & 0 & 0 \\ \beta_{X^2} & 0 & 0 & \beta_{X^4} & \beta_{X^3Y} & \beta_{X^3Y} & \beta_{X^2}+\beta_{X^4} \\ \beta_{XY} & 0 & 0 & \beta_{X^3Y} & \beta_{X^2}+\beta_{X^4} & \beta_{XYXY} & \beta_{XY}+\beta_{X^3Y} \\ \beta_{XY} & 0 & 0 & \beta_{X^3Y} & \beta_{XYXY} & \beta_{X^2}+\beta_{X^4} & \beta_{XY}+\beta_{X^3Y} \\ 1+\beta_{X^2} & 0 & 0 & \beta_{X^2}+\beta_{X^4} & \beta_{XY}+\beta_{X^3Y} & \beta_{XY}+\beta_{X^3Y} & 1+2\beta_{X^2}+\beta_{X^4}\end{mpmatrix}.$$ We define the matrix function \begin{equation}\label{subtract-part} B(\alpha,\gamma):= \mathcal{M}_2-\alpha \big(\mathcal{M}_2^{(\gamma,\sqrt{1+\gamma^2})}+ \mathcal{M}_2^{(-\gamma,\sqrt{1+\gamma^2})}+ \mathcal{M}_2^{(\gamma,-\sqrt{1+\gamma^2})}+\mathcal{M}_2^{(-\gamma,-\sqrt{1+\gamma^2})}\big), \end{equation} which is equal to $$ B(\alpha,\gamma)=\begin{mpmatrix} 1-4\alpha & 0 & 0 & \beta_{X^2}-4\alpha \gamma^2 & \beta_{XY} & \beta_{XY} & D \\ 0 & \beta_{X^2}-4\alpha \gamma^2 & \beta_{XY} & 0 & 0 & 0 & 0 \\ 0& \beta_{XY} & D & 0 & 0 & 0 & 0 \\ \beta_{X^2}-4\alpha \gamma^2 & 0 & 0 & \beta_{X^4}-4\alpha\gamma^4 & \beta_{X^3Y} & \beta_{X^3Y} & C \\ \beta_{XY} & 0 & 0 & \beta_{X^3Y} & C & E & \beta_{XY}+\beta_{X^3Y} \\ \beta_{XY} & 0 & 0 & \beta_{X^3Y} & E & C & \beta_{XY}+\beta_{X^3Y} \\ D & 0 & 0 & C & \beta_{XY}+\beta_{X^3Y} & \beta_{XY}+\beta_{X^3Y} & D+C \end{mpmatrix},$$ where $$C=\beta_{X^2}+\beta_{X^4}-4\alpha \gamma^2(1+\gamma^2),\quad D=1+\beta_{X^2}-4\alpha(1+\gamma^2),\quad E=\beta_{XYXY}-4\alpha \gamma^2(1+\gamma^2).$$ \noindent{\textbf{Claim.}} There exist $\alpha_0>0$ and $\gamma_0>0$ such that $B(\alpha_0,\gamma_0)$ is psd and satisfies the column relations \begin{equation}\label{B-relations} a\mathds 1+ d\mathbb{X}^2+e(\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X})=\mathbf{0}, \quad
\mathbb{Y}^2=\mathds 1+\mathbb{X}^2 \end{equation} for some $a,d,e\in \mathbb R$ which are not all zero. Let $\beta_w^{(\alpha_0,\gamma_0)}$ be the moments of $B(\alpha_0,\gamma_0)$. Then: \begin{equation}\label{B-moments} \beta^{(\alpha_0,\gamma_0)}_{X}=\beta^{(\alpha_0,\gamma_0)}_Y= \beta^{(\alpha_0,\gamma_0)}_{X^3}=\beta^{(\alpha_0,\gamma_0)}_{X^2Y}= \beta^{(\alpha_0,\gamma_0)}_{XY^2}=\beta^{(\alpha_0,\gamma_0)}_{Y^3}=0. \end{equation} Since $$\det\big([B(\alpha,\gamma)]_{\{ \mathbb{X}, \mathbb{Y} \}}\big)= 16\gamma^2(1+\gamma^2) \alpha^2+(-4\gamma^2-(4+8\gamma^2)\beta_{X^2})\alpha+ (\beta_{X^2}^2+\beta_{X^2}-\beta_{XY}^2) $$ is quadratic in $\alpha$, the equation $\det\big([B(\alpha,\gamma)]_{\{ \mathbb{X}, \mathbb{Y} \}}\big)=0$ has solutions $$ \alpha_{1,2}=\frac{\gamma^2+\beta_{X^2}+2\gamma^2\beta_{X^2}\pm \sqrt{(\gamma^2-\beta_{X^2})^2+ 4\gamma^2\beta_{XY}^2(1+\gamma^2)}} {8\gamma^2(1+\gamma^2)}. $$ Since $[\mathcal{M}_2]_{\{ \mathbb{X}\mathbb{Y}, \mathbb{Y}\mathbb{X} \}}$ is positive definite, we have $$0<\det\big([\mathcal{M}_2]_{\{ \mathbb{X}\mathbb{Y}, \mathbb{Y}\mathbb{X} \}}\big)= -(\beta_{XYXY}+\beta_{X^4}+\beta_{X^2}) (\beta_{XYXY}-\beta_{X^4}-\beta_{X^2}),$$ and in particular $\beta_{XYXY}-\beta_{X^4}-\beta_{X^2}\neq 0$. Since $$\det\big([B(\alpha,\gamma)]_{\{ \mathbb{X}\mathbb{Y}, \mathbb{Y}\mathbb{X} \}}\big)= 8\gamma^2(1+\gamma^2)(\beta_{XYXY}-\beta_{X^4}-\beta_{X^2})\alpha- (\beta_{XYXY}+\beta_{X^4}+\beta_{X^2}) (\beta_{XYXY}-\beta_{X^4}-\beta_{X^2}) $$ is linear in $\alpha$ with a nonzero leading coefficient, the equation $\det\big([B(\alpha,\gamma)]_{\{ \mathbb{X}\mathbb{Y}, \mathbb{Y}\mathbb{X} \}}\big)=0$ has a solution $$\alpha_3=\frac{\beta_{XYXY}+\beta_{X^2}+\beta_{X^4}}{8\gamma^2(1+\gamma^2)}.$$ \noindent \textbf{Subclaim.} For $\gamma$ big enough it is true that $\alpha_3<\min\big(\alpha_{1},\alpha_2,\frac{1}{4}\big)$.\\ We separate two cases: $\beta_{XY}=0$ and $\beta_{XY}\neq 0$.\\ \noindent \textbf{Case 1: }$\beta_{XY}=0$.\\ For $\gamma>0$ such that
$\gamma^2\geq \beta_{X^2}$, $\alpha_1$ and $\alpha_2$ are equal to $$\alpha_{1}=\frac{2\gamma^2+2\gamma^2\beta_{X^2}} {8\gamma^2(1+\gamma^2)}=\frac{1+\beta_{X^2}}{4(1+\gamma^2)},\quad \alpha_{2}=\frac{2(1+\gamma^2)\beta_{X^2}} {8\gamma^2(1+\gamma^2)}=\frac{\beta_{X^2}}{4\gamma^2}. $$ Since $\alpha_3$ has $\gamma^4$ in the denominator, it is smaller than $\alpha_{1}, \alpha_2$ and $\frac{1}{4}$ for $\gamma$ big enough. \\ \noindent \textbf{Case 2: }$\beta_{XY}\neq 0$.\\ Calculating the limits of $\alpha_1$ and $\alpha_2$ as $\gamma$ goes to $\infty$ we get \begin{align*} \lim_{\gamma\to\infty}\alpha_1 &= \lim_{\gamma\to\infty}\frac{\gamma^2(1+2\beta_{X^2})+ \gamma^2\sqrt{(1+4\beta_{XY}^2)}}{8\gamma^2(1+\gamma^2)}= \lim_{\gamma\to\infty}\frac{(1+2\beta_{X^2})+ \sqrt{(1+4\beta_{XY}^2)}}{8(1+\gamma^2)},\\ \lim_{\gamma\to\infty}\alpha_2 &= \lim_{\gamma\to\infty}\frac{\gamma^2(1+2\beta_{X^2})- \gamma^2\sqrt{(1+4\beta_{XY}^2)}}{8\gamma^2(1+\gamma^2)}= \lim_{\gamma\to\infty}\frac{(1+2\beta_{X^2})- \sqrt{(1+4\beta_{XY}^2)}}{8(1+\gamma^2)}. \end{align*} Since $[\mathcal{M}_2]_{\{\mathbb{X},\mathbb{Y}\}}$ is positive definite, it follows that $\det ([\mathcal{M}_2]_{\{\mathbb{X},\mathbb{Y}\}})>0$, i.e., $$\beta_{XY}^2<(1+\beta_{X^2})\beta_{X^2}.$$ Hence, $$1+4\beta_{XY}^2< 1+4(1+\beta_{X^2})\beta_{X^2}= (1+2\beta_{X^2})^2.$$ Therefore, the numerators in $\alpha_1, \alpha_2$ are strictly positive. Hence, for $\gamma$ big enough, $\alpha_3$ is smaller than $\alpha_1, \alpha_2$ and $\frac{1}{4}$, since it has $\gamma^4$ in the denominator. This proves the subclaim.\\ Let us now fix $\gamma_0$ big enough such that $\alpha_3$ is smaller than $\alpha_{1},\alpha_2$. Let $\alpha_0>0$ be the smallest positive number such that the rank of $B(\alpha_0,\gamma_0)$ is smaller than 6. Since $B(0,\gamma_0)$ is psd of rank 6, $B(\alpha_0,\gamma_0)$ is also psd of rank at most 5.
Since, in particular, $[B(\alpha_0,\gamma_0)]_{\{ \mathbb{X}\mathbb{Y}, \mathbb{Y}\mathbb{X} \}}$ is psd, it follows that $\alpha_0\leq \alpha_3$. From the subclaim we conclude that $\alpha_0<\min(\alpha_{1},\alpha_2,\frac{1}{4})$. Using this and the form of $B(\alpha_0,\gamma_0)$ we conclude that $B(\alpha_0,\gamma_0)$ satisfies \eqref{B-relations} and \eqref{B-moments}, which proves the Claim.\\ The rank of $B(\alpha_0,\gamma_0)$ is at least 4 since the columns $\mathds 1, \mathbb{X}, \mathbb{Y}, \mathbb{X}\mathbb{Y}$ are linearly independent. Indeed, the submatrix $$[B(\alpha_0,\gamma_0)]_{\{\mathds 1, \mathbb{X}, \mathbb{Y}\}}= [B(\alpha_0,\gamma_0)]_{\{\mathds 1\}}\oplus [B(\alpha_0,\gamma_0)]_{\{\mathbb{X}, \mathbb{Y}\}}$$ is block diagonal. By the above $\det\big( [B(\alpha_0,\gamma_0)]_{\{\mathbb{X}, \mathbb{Y}\}}\big)\neq 0$. Since $\alpha_0\leq \alpha_3 < \frac{1}{4}$, $[B(\alpha_0,\gamma_0)]_{\{\mathds 1\}}\neq 0$ and the column $\mathds 1$ is nonzero. Hence the columns $\mathds 1, \mathbb{X}, \mathbb{Y}$ are linearly independent. Note also that in the full matrix $B(\alpha_0,\gamma_0)$, $\mathbb{X}\mathbb{Y}$ cannot be a linear combination of $\mathds 1, \mathbb{X}, \mathbb{Y}$ since it is not symmetric in rows $\mathbb{X}\mathbb{Y}$ and $\mathbb{Y}\mathbb{X}$. Now we separate two cases according to the rank of $B(\alpha_0,\gamma_0)$.
\noindent\textbf{Case 1:} $\Rank B(\alpha_0,\gamma_0)=4.$ By the form of $B(\alpha_0,\gamma_0)$ the relations are $$\mathbb{X}^2=a_1 \mathds 1,\quad \mathbb{Y}\mathbb{X}=a_2 \mathds 1 - \mathbb{X}\mathbb{Y}, \quad \mathbb{Y}^2=(1+a_1)\mathds 1$$ for some $a_1, a_2\in \mathbb R\backslash \{0\}.$ By \cite[Theorem 3.1 (3)]{BZ18} the measure for the sequence $\beta^{(\alpha_0,\gamma_0)}_w$ exists and is of type $(0,1)$.\\ \noindent\textbf{Case 2:} $\Rank B(\alpha_0,\gamma_0)=5.$ By Lemma \ref{linear-trans} there is a transformation of the form \eqref{lin-trans1} which we apply to get a moment sequence $\widetilde{\beta}^{(\alpha_0,\gamma_0)}_{w}$ such that the corresponding moment matrix $\widetilde{\mathcal M}_2$ satisfies the relations \eqref{first-rel} and \eqref{second-rel}. By Lemma \ref{linear-trans-on-zero-moments} in both cases we have that $$\widetilde{\beta}^{(\alpha_0,\gamma_0)}_{X}= \widetilde{\beta}^{(\alpha_0,\gamma_0)}_Y= \widetilde{\beta}^{(\alpha_0,\gamma_0)}_{X^3}= \widetilde{\beta}^{(\alpha_0,\gamma_0)}_{X^2Y}= \widetilde{\beta}^{(\alpha_0,\gamma_0)}_{Y^3}=0.$$ Furthermore, since the rank of $B(\alpha_0,\gamma_0)$ is 5, a measure also exists and is of type $(m_1,1)$ where $m_1\in\{1,2,3\}$ by \cite[Theorems 6.5, 6.8, 6.11, 6.14]{BZ18}. Hence $\beta$ admits a measure of type $(m,1)$, $m\in \mathbb N$. This proves \eqref{M(2)-c3-r6-pt1}. It remains to prove \eqref{M(2)-c3-r6-pt2}. Suppose that $\beta$ admits a nc measure. Using Theorem \ref{M(2)-bc3-r6-new1} \eqref{M(2)-c3-r6-pt1} together with \cite[Proposition 7.3]{BZ18} (note that the result and proof hold in the case of $\mathbb{Y}^2=\mathds 1+ \mathbb{X}^2$ as well), we obtain \begin{equation} \label{r6-M(2)-with-rank4-bc4-new22} \mathcal{M}_2=\sum_{i=1}^m \lambda_i \mathcal{M}^{(x_i,y_i)}_2 + \xi \mathcal{M}^{(X,Y)}_2, \end{equation} where $(x_i,y_i)\in \mathbb R^2$, $m\in \mathbb N$, $(X,Y)\in (\mathbb{SR}^{2\times 2})^2$, $\lambda_i> 0$, $\xi> 0$ and $\sum_{i=1}^m \lambda_i+\xi=1$.
Therefore $$M:=\mathcal{M}_2-\xi \mathcal{M}^{(X,Y)}_2$$ is a cm moment matrix of rank at most 5 satisfying the relations $$\mathbb{Y}^2=\mathds 1+\mathbb{X}^2\quad\text{and}\quad \mathbb{X}\mathbb{Y}=\mathbb{Y}\mathbb{X}.$$ By \cite{Fia14} and references therein, $M$ admits a measure if and only if $M$ is psd, RG and satisfies $\Rank M\leq\Card \mathcal V_M$. To conclude the proof it only remains to prove that $X, Y$ are of the form (\ref{(X,Y)-form}). Note that $\mathcal{M}^{(X,Y)}_2$ is a nc moment matrix of rank 4. Therefore the columns $\{\mathds 1,\mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}$ are linearly independent \cite[Corollary 2.3]{BZ18} and hence \begin{equation*} \mathbb{X}^2=a_1 \mathds 1+ b_1 \mathbb{X}+ c_1 \mathbb{Y}+d_1 \mathbb{X}\mathbb{Y},\quad\text{and}\quad \mathbb{Y}^2=a_3 \mathds 1+ b_3 \mathbb{X}+ c_3 \mathbb{Y}+d_3 \mathbb{X}\mathbb{Y}, \end{equation*} where $a_j,b_j,c_j,d_j\in \mathbb R$ for $j=1, 3$. By \cite[Theorem 3.1 (1)]{BZ18}, $d_1=d_3=0$. By \cite[Theorem 3.1 (3)]{BZ18}, $c_1=b_3=0$. Since $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$ it follows that $b_1=c_3=0$ and $a_3=1+a_1$. By \cite[Theorem 3.1 (4)]{BZ18}, $X$ and $Y$ are of the form (\ref{(X,Y)-form}). To prove the result about the type of the measure note that if a cm moment matrix which admits a measure satisfies $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$, then it admits a measure with at most 5 atoms by the results of Curto and Fialkow \cite{CF98-1}, \cite{CF02}, \cite{Fia14} (see also \cite[Theorem 2.7]{BZ18}). On the other hand there must be at least 2 cm atoms in every measure of type $(m,1)$, $m\in \mathbb N$, for $\mathcal M_2$, otherwise $\mathcal M_2$ would be of rank at most 5. \end{proof} \begin{proof}[Proof of Theorem \ref{M(2)-bc3-r6-new1-cor}] Let us first prove the implication $(\Rightarrow)$. Suppose that $\beta$ admits a measure.
By Theorem \ref{M(2)-bc3-r6-new1}, $\mathcal M_2$ is of the form \begin{equation}\label{with-2-times-2-matrices-bc1} \mathcal{M}_2=\sum_{i=1}^m\lambda_i \mathcal{M}^{(x_i,y_i)}_2+ \xi\mathcal{M}^{(X,Y)}_2, \end{equation} where $m\in \mathbb N$, $(x_i,y_i)\in \mathbb R^2$, $(X,Y)\in (\mathbb{SR}^{2\times 2})^2$, $\lambda_i> 0$, $\xi>0$ and $\sum_{i=1}^m \lambda_i+\xi=1$. By the form \eqref{(X,Y)-form} of $(X,Y)$ it is easy to check that \begin{equation} \label{r6-form-of-x2-2-bc4-new1-cor} \beta^{(X,Y)}_X=\beta^{(X,Y)}_Y=\beta^{(X,Y)}_{X^3}= \beta^{(X,Y)}_{X^2Y}=\beta^{(X,Y)}_{XY^2}=\beta^{(X,Y)}_{Y^3}=0, \end{equation} where $\beta^{(X,Y)}_{w}$ are the moments of $\mathcal{M}^{(X,Y)}_2$. Using (\ref{with-2-times-2-matrices-bc1}) and (\ref{r6-form-of-x2-2-bc4-new1-cor}), we conclude that $\sum_{i=1}^m \lambda_i \mathcal{M}^{(x_i,y_i)}_2$ and $\xi \mathcal{M}^{(X,Y)}_2$ are of the forms \begin{align} \begin{mpmatrix} a & \beta_X & \beta_Y & b & c & c & a+b \\ \beta_X & b & c & \beta_{X^3} & \beta_{X^2Y} & \beta_{X^2Y} & \beta_X+\beta_{X^3} \\ \beta_{Y}& c & a+b & \beta_{X^2Y} & \beta_X+\beta_{X^3} & \beta_X+\beta_{X^3} & \beta_Y+\beta_{X^2Y} \\ b & \beta_{X^3} & \beta_{X^2Y} & d & e & e & b+d \\ c & \beta_{X^2Y} & \beta_X+\beta_{X^3} & e & b+d & b+d & c+e \\ c & \beta_{X^2Y} & \beta_X+ \beta_{X^3}& e & b+d & b+d & c+e \\ a+b & \beta_X+\beta_{X^3} & \beta_{Y}+\beta_{X^2Y} & b+d & c+e & c+e & a+2b+d \end{mpmatrix}, \label{matrix-1-bc1}\\ \begin{mpmatrix} 1-a & 0 & 0 & \beta_{X^2}-b & A_1(c) & A_1(c) & A_2(a,b) \\ 0 & \beta_{X^2}-b & A_1(c) & 0 & 0 & 0 & 0 \\ 0 & A_1(c) & A_2(a,b) & 0 & 0 & 0 & 0 \\ \beta_{X^2}-b & 0 & 0 & \beta_{X^4}-d & A_3(e) & A_3(e) & A_4(b,d)\\ A_1(c) & 0 & 0 & A_3(e) & A_4(b,d) & \beta_{XYXY}-(b-d)& A_5(c,e)\\ A_1(c) & 0 & 0 & A_3(e) & \beta_{XYXY}-(b-d) & A_4(b,d) & A_5(c,e) \\ A_2(a,b) & 0 & 0 & A_4(b,d) & A_5(c,e) & A_5(c,e) & A_6(a,b,d) \end{mpmatrix},\label{matrix-2-bc1} \end{align} where \begin{equation*} \begin{split} A_1(c)&= 
\beta_{XY}-c,\\ A_3(e) &= \beta_{X^3Y}-e,\\ A_5(c,e)&= \beta_{XY}+\beta_{X^3Y}-(c+e), \end{split} \qquad \begin{split} A_2(a,b) &= 1+\beta_{X^2}-(a+b),\\ A_4(b,d) &=\beta_{X^2}+\beta_{X^4}-(b+d),\\ A_6(a,b,d) &=1+2\beta_{X^2}+\beta_{X^4}-(a+2b+d), \end{split} \end{equation*} for some $a,b,c,d,e\in \mathbb R$, and observe that the matrix (\ref{matrix-1-bc1}) is $L(a,b,c,d,e)$ and (\ref{matrix-2-bc1}) is $\mathcal{M}_2-L(a,b,c,d,e)$. Since $L(a,b,c,d,e)$ is a cm moment matrix which admits a measure, conditions (\ref{point1-bc1-r6}) and (\ref{point4-bc1-r6}) of Theorem \ref{M(2)-bc3-r6-new1-cor} follow from \cite{Fia14} and references therein. Since $\mathcal{M}_2-L(a,b,c,d,e)$ is a nc moment matrix which admits a measure, (\ref{point2-bc1-r6}) and (\ref{point3-bc1-r6}) of Theorem \ref{M(2)-bc3-r6-new1-cor} are true by Proposition \ref{Mn-psd} and Corollary \ref{lin-ind-of-4-col} above. This proves the implication $(\Rightarrow)$. It remains to prove the implication $(\Leftarrow)$. We have to prove that conditions (\ref{point1-bc1-r6})-(\ref{point4-bc1-r6}) imply that there is a measure for $\mathcal M_2$. Since $L(a,b,c,d,e)$ is a cm moment matrix that satisfies (\ref{point1-bc1-r6}) and (\ref{point4-bc1-r6}), it admits a measure by \cite{Fia14} and references therein. Now note that $M:=\mathcal{M}_2-L(a,b,c,d,e)$ is a nc moment matrix of the form \eqref{matrix-2-bc1} satisfying \begin{equation}\label{zero-moments-M} \beta^{M}_X=\beta^{M}_Y=\beta^{M}_{X^3}= \beta^{M}_{X^2Y}=\beta^{M}_{XY^2}=\beta^{M}_{Y^3}=0, \end{equation} where $\beta^M_w$ denote the moments of $M$. It remains to prove that $M$ admits a measure. By (\ref{point2-bc1-r6}), $M$ is psd, and from (\ref{point3-bc1-r6}), $M$ is of rank at least 4 with linearly independent columns $\mathds 1, \mathbb{X}, \mathbb{Y}, \mathbb{X}\mathbb{Y}$. Since $M$ satisfies the relation $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$, it can be of rank at most 6. 
We separate three possibilities.\\ \noindent\textbf{Case 1:} $\Rank M=4$. From the form of $M$, we see that it must additionally satisfy \begin{equation*} \mathbb{X}^2=a_1\mathds 1, \quad \text{and} \quad \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=a_2\mathds 1, \end{equation*} for some $a_1,a_2\in \mathbb R.$ Since $M$ is also psd, $M$ admits a measure by \cite[Theorem 3.1 (3)]{BZ18}.\\ \\ \noindent\textbf{Case 2:} $\Rank M=5$. By the form of $M$ and \eqref{point2-bc1-r6}, we have the additional relation \begin{equation}\label{other-re} a\mathds 1+ d\mathbb{X}^2+e(\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X})=\mathbf{0} \end{equation} for some $a,d,e\in \mathbb R.$ Since $M$ is psd and RG (as there are only quadratic column relations), by Lemma \ref{linear-trans} there is a transformation of the form \eqref{lin-trans1} which we may apply to obtain a moment sequence $\widetilde{\beta}_w$ with a moment matrix $\widetilde M$ satisfying the relations \eqref{first-rel} and \eqref{second-rel}. By Lemma \ref{linear-trans-on-zero-moments} we have that $$\widetilde{\beta}_{X}= \widetilde{\beta}_Y= \widetilde{\beta}_{X^3}= \widetilde{\beta}_{X^2Y}= \widetilde{\beta}_{Y^3}=0.$$ Hence a measure for $\widetilde{\beta}_w$ exists by \cite[Theorems 6.5, 6.8, 6.11, 6.14]{BZ18}.\\ \\ \noindent\textbf{Case 3:} $\Rank M=6$. Since $M$ is psd, RG (the only relation is $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$) and satisfies \eqref{zero-moments-M}, it admits a measure by Theorem \ref{M(2)-bc3-r6-new1} \eqref{M(2)-c3-r6-pt1}.\\ The type of the representing measure, as well as the sufficiency of \eqref{suff-cond-meas}, can be inferred from Theorem \ref{M(2)-bc3-r6-new1}. \end{proof} Theorem \ref{M(2)-bc3-r6-new1-cor} (along with the others from \cite{BZ18}) provides a new computational method for testing the existence of a measure.
While searching for a flat extension from $\mathcal{M}_{2}$ to $\mathcal{M}_{3}$ is reasonable, this approach quickly becomes intractable if $\mathcal{M}_{2}$ admits positive extensions $\mathcal{M}_{k}$ for large $k$ before one of them admits a flat extension to $\mathcal{M}_{k+1}$. By comparison, checking the LMIs from Theorem \ref{M(2)-bc3-r6-new1-cor} always has the same computational complexity. In the following example we present two psd moment matrices $\mathcal{M}_2$ satisfying $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$, one which admits a representing measure and one which does not. The proof uses Theorem \ref{M(2)-bc3-r6-new1-cor}, with the computations easily checked in \textit{Mathematica}. \begin{example} For the moment matrix $$\mathcal{M}_2= \begin{pmatrix} 1 & 0 & 0 & \frac{1}{2} & 0 & 0 & \frac{3}{2} \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{3}{2} & 0 & 0 & 0 & 0 \\ \frac{1}{2} & 0 & 0 & 1 & 0 & 0 & \frac{3}{2} \\ 0 & 0 & 0 & 0 & \frac{3}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{3}{2} & 0 \\ \frac{3}{2} & 0 & 0 & \frac{3}{2} & 0 & 0 & 3 \\ \end{pmatrix}$$ we proved in \cite[Example 8.16]{BZ18} that it admits a representing measure (but not a flat extension). We now verify this fact also by the use of Theorem \ref{M(2)-bc3-r6-new1-cor}. Using \textit{Mathematica} we get $a=0.75$, $b=c=d=e=0$ as a feasible solution of both LMIs from \eqref{point1-bc1-r6} and \eqref{point2-bc1-r6}. We check that condition \eqref{point3-bc1-r6} of Theorem \ref{M(2)-bc3-r6-new1-cor} is also met: the eigenvalues are $1.5$, $0.75$, $0.5$, $0.25$. The moment matrix $L(0.75,0,0,0,0)$ satisfies $\mathbb{X}=\mathbb{X}^2=\mathbb{X}\mathbb{Y}=\mathbb{Y}\mathbb{X}=\mathbf 0$ and $\mathbb{Y}^2=\mathds 1$, hence it is of rank 2. The corresponding variety is $\{(0,1),(0,-1)\}$, so also condition \eqref{point4-bc1-r6} of Theorem \ref{M(2)-bc3-r6-new1-cor} is satisfied.
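The checks above can be reproduced outside of \textit{Mathematica} as well; the following Python/numpy sketch is only an optional numerical cross-check of the numbers in the example (the feasible point $a=0.75$, $b=c=d=e=0$ is taken from the text; this is not part of the argument):

```python
# Sanity check of the first example.  Column order: 1, X, Y, X^2, XY, YX, Y^2.
import numpy as np

M2 = np.array([
    [1,   0,   0,   0.5, 0,   0,   1.5],
    [0,   0.5, 0,   0,   0,   0,   0  ],
    [0,   0,   1.5, 0,   0,   0,   0  ],
    [0.5, 0,   0,   1,   0,   0,   1.5],
    [0,   0,   0,   0,   1.5, 0,   0  ],
    [0,   0,   0,   0,   0,   1.5, 0  ],
    [1.5, 0,   0,   1.5, 0,   0,   3  ],
])

# L(0.75, 0, 0, 0, 0): with b = c = d = e = 0 and all odd moments zero, only
# the entries indexed by the columns 1, Y, Y^2 survive, each equal to 0.75.
L = np.zeros((7, 7))
for i, j in [(0, 0), (0, 6), (6, 0), (6, 6), (2, 2)]:
    L[i, j] = 0.75

D = M2 - L
idx = [0, 1, 2, 4]                       # columns 1, X, Y, XY
eigs = np.sort(np.linalg.eigvalsh(D[np.ix_(idx, idx)]))
print(np.allclose(eigs, [0.25, 0.5, 0.75, 1.5]))     # True
print(bool(np.min(np.linalg.eigvalsh(D)) > -1e-9))   # True (M2 - L is psd)
```

The first print confirms the eigenvalues $0.25$, $0.5$, $0.75$, $1.5$ quoted for condition (3); the second confirms the psd condition (2).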
Thus $\mathcal{M}_2$ indeed admits a measure by Theorem \ref{M(2)-bc3-r6-new1-cor}. For the moment matrix $$\mathcal{M}_2= \left( \begin{array}{ccccccc} 1 & 0 & \frac{4}{15} & \frac{2}{3} & -\frac{32}{33} & -\frac{32}{33} & \frac{5}{3} \\[1.5mm] 0 & \frac{2}{3} & -\frac{32}{33} & \frac{4}{15} & -\frac{8}{27} & -\frac{8}{27} & \frac{4}{15} \\[1.5mm] \frac{4}{15} & -\frac{32}{33} & \frac{5}{3} & -\frac{8}{27} & \frac{4}{15} & \frac{4}{15} & -\frac{4}{135} \\[1.5mm] \frac{2}{3} & \frac{4}{15} & -\frac{8}{27} & \frac{2}{3} & -\frac{8}{9} & -\frac{8}{9} & \frac{4}{3} \\[1.5mm] -\frac{32}{33} & -\frac{8}{27} & \frac{4}{15} & -\frac{8}{9} & \frac{4}{3} & \frac{10}{9} & -\frac{184}{99} \\[1.5mm] -\frac{32}{33} & -\frac{8}{27} & \frac{4}{15} & -\frac{8}{9} & \frac{10}{9} & \frac{4}{3} & -\frac{184}{99} \\[1.5mm] \frac{5}{3} & \frac{4}{15} & -\frac{4}{135} & \frac{4}{3} & -\frac{184}{99} & -\frac{184}{99} & 3 \\ \end{array}\right)$$ we check with \textit{Mathematica} that the eigenvalues are nonnegative, namely $6.92$, $2.35$, $0.22$, $0.11$, $0.039$, $0.014$, $0$. Clearly we have that $\mathbb{Y}^2=\mathds{1}+\mathbb{X}^2$. Using \textit{Mathematica} we check that the LMIs from Theorem \ref{M(2)-bc3-r6-new1-cor} \eqref{point1-bc1-r6}, \eqref{point2-bc1-r6} are not simultaneously feasible. Hence $\mathcal{M}_2$ does not admit a representing measure. \end{example} \section{$\mathcal M_2$ of rank 6 with relation $\mathbb{Y}^2=\mathds 1$} \label{S2} The main result of this section, see Theorem \ref{main-res-2} below, is that moment matrices $\mathcal M_2$ generated by atoms $(X,Y)$ of size 3 satisfying $Y^2=I_3$ can always be represented with atoms of size at most 2. Moreover, if we consider a single atom of size 3, then a single atom of size 2 suffices. \begin{theorem}\label{main-res-2} Let $\beta$ be a moment sequence with a nc moment matrix $\mathcal M_2$ satisfying the column relation $\mathbb{Y}^2=\mathds 1$.
Then the following are equivalent: \begin{enumerate} \item\label{pt1} $\mathcal M_2$ admits a measure of type $(m_1,m_2,m_3)$, $m_1, m_2,m_3\in \mathbb N\cup\{0\}$. \item\label{pt2} $\mathcal M_2$ admits a measure of type $(m_1,m_2)$, $m_1, m_2\in \mathbb N \cup\{0\}$. \end{enumerate} Moreover, if $m_3=1$ in \eqref{pt1}, then $m_2=1$ in \eqref{pt2}. \end{theorem} The proof is constructive and can be seen as the first step toward proving the following conjecture: \begin{conjecture} Let $\beta$ be a moment sequence with a moment matrix $\mathcal M_2$ satisfying the column relation $\mathbb{Y}^2=\mathds 1$. Then the following are equivalent: \begin{enumerate} \item $\mathcal M_2$ admits a measure. \item $\mathcal M_2$ admits a measure of type $(m_1,m_2)$, $m_1, m_2\in \mathbb N$. \item $\mathcal M_2$ admits a measure of type $(m,1)$, $m\in \mathbb N$. \end{enumerate} \end{conjecture} Let $\beta^{(4)}$ be a truncated moment sequence and $\mathcal M_2$ its moment matrix. The notations $\Delta(\beta^{(4)})$ and $\Delta(\mathcal M_2)$ will both denote the difference $$\Delta(\beta^{(4)})=\Delta(\mathcal M_2):=\beta_{X^2Y^2}-\beta_{XYXY},$$ which will be important in the analysis below. To prove Theorem \ref{main-res-2} we first have to understand the form of moment matrices $\mathcal M_2^{(X,Y)}$ with $(X,Y)\in (\mathbb S\mathbb R^{2\times 2})^2$ and $Y^2=I_2$. We describe this in the next lemma.
Then there is $\widetilde X:=\begin{mpmatrix} a & b \\ b & c\end{mpmatrix}\in \mathbb S\mathbb R^{2\times 2}$, such that \begin{equation}\label{diag-Y} \mathcal M_2^{(X,Y)}=\mathcal M_2^{(\widetilde X,\widetilde Y)}, \end{equation} where $\widetilde Y=\begin{mpmatrix} 1 & 0 \\ 0& -1\end{mpmatrix}.$ Moreover, $\mathcal M_2^{(\widetilde X,\widetilde Y)}$ is equal to $$\begin{mpmatrix} 1 & \frac{1}{2} (a+c) & 0 & \frac{1}{2} \left(a^2+2 b^2+c^2\right) & \frac{1}{2} (a-c) & \frac{1}{2} (a-c) & 1 \\ \frac{1}{2} (a+c) & C_4(a,b,c) & \frac{1}{2} (a-c) & C_3(a,b,c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) \\ 0 & \frac{1}{2} (a-c) & 1 & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) & \frac{1}{2} (a+c) & 0 \\ C_4(a,b,c) &C_3(a,b,c) & \frac{1}{2} (a-c) (a+c) & C_1(a,b,c)& C_2(a,b,c) & C_2(a,b,c) & C_4(a,b,c) \\ \frac{1}{2} (a-c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) & C_2(a,b,c) & C_4(a,b,c) & C_5(a,b,c) & \frac{1}{2} (a-c) \\ \frac{1}{2} (a-c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) & C_2(a,b,c) & C_5(a,b,c) & C_4(a,b,c) & \frac{1}{2} (a-c) \\ 1 & \frac{1}{2} (a+c) & 0 & C_4(a,b,c) & \frac{1}{2} (a-c) & \frac{1}{2} (a-c) & 1 \end{mpmatrix},$$ where \begin{align*} 2C_1(a,b,c) &= a^4+4 a^2 b^2+4 a b^2 c+2 b^4+4 b^2 c^2+c^4,\\ 2C_2(a,b,c) &= (a-c) \left(a^2+a c+b^2+c^2\right),\\ 2C_3(a,b,c) &= a^3+3 a b^2+3 b^2 c+c^3,\\ 2C_4(a,b,c) &= a^2+2 b^2+c^2,\\ 2C_5(a,b,c) &= a^2-2 b^2+c^2.\\ \end{align*} In particular, we have that $\Delta(\mathcal M_2^{(\widetilde X,\widetilde Y)})=2b^2.$ \end{lemma} \begin{proof} To prove \eqref{diag-Y} note that since $Y^2=I_2$ the eigenvalues of $Y$ are $1$ or $-1$. Since $\Delta(\mathcal M_2^{(X,Y)}) \neq 0$, $X$ and $Y$ do not commute. Hence there is an orthogonal matrix $U\in \mathbb R^{2\times 2}$ such that $UYU^t=\begin{mpmatrix} 1 & 0 \\ 0& -1\end{mpmatrix}.$ Taking $\widetilde X=UXU^t$ proves \eqref{diag-Y}. The remaining part of the lemma can be easily checked. 
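The closed form above, and in particular the identity $\Delta(\mathcal M_2^{(\widetilde X,\widetilde Y)})=2b^2$, can also be confirmed numerically; a minimal Python/numpy sketch with random values of $a,b,c$ (a sanity check only, not part of the proof):

```python
# Verify Delta = beta_{X^2Y^2} - beta_{XYXY} = 2 b^2 and two sample moments
# for X = [[a, b], [b, c]] and Y = diag(1, -1).
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.normal(size=3)
X = np.array([[a, b], [b, c]])
Y = np.diag([1.0, -1.0])

def beta(*word):
    # normalized trace of the word given as a sequence of matrices
    P = np.eye(2)
    for W in word:
        P = P @ W
    return np.trace(P) / 2

print(np.isclose(beta(X, X, Y, Y) - beta(X, Y, X, Y), 2 * b**2))  # True
print(np.isclose(beta(X), (a + c) / 2))                           # True
print(np.isclose(beta(X, Y), (a - c) / 2))                        # True
```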
\end{proof} We will prove that for every pair $(X,Y)\in (\mathbb{SR}^{3\times 3})^2$ satisfying $Y^2=I_3$ we can write \begin{equation} \label{momY2=1} \mathcal{M}_2^{(X,Y)}=\sum_{i=1}^m \lambda_i \mathcal{M}^{(x_i,y_i)}_2 + t \mathcal{M}^{(\widetilde X,\widetilde Y)}_2, \end{equation} where $(x_i,y_i)\in \mathbb R^2$, $m\in \mathbb N$, $(\widetilde X,\widetilde Y)\in (\mathbb{SR}^{2\times 2})^2$ is as in Lemma \ref{2-by-2-atom}, $\lambda_i> 0$, $t> 0$ and $\sum_{i=1}^m \lambda_i+t=1$. Since $\Delta(\mathcal{M}^{(x,y)}_2)=0$ for every $(x,y)\in \mathbb R^2$ we must have $$\Delta :=\Delta(\mathcal{M}^{(X,Y)}_2)=t\cdot \Delta(\mathcal{M}^{(\widetilde X,\widetilde Y)}_2)=t\cdot 2b^2,$$ where we used Lemma \ref{2-by-2-atom} for the second equality. Hence a decomposition of the form \eqref{momY2=1} requires that $b=\sqrt{\frac{\Delta}{2t}}$ \big(we may WLOG assume $b$ is positive, since only even powers of $b$ appear in $\mathcal{M}^{(\widetilde X,\widetilde Y)}_2$\big). Notice that if $\Delta=0$, then we are in the commutative setting. So we may assume that $\Delta>0$. \begin{lemma}\label{2-by-2-atom-new-info} Let $(\widetilde X,\widetilde Y)\in (\mathbb{SR}^{2\times 2})^2$ be as in Lemma \ref{2-by-2-atom}, with $b=\sqrt{\frac{\Delta}{2t}}$ for some $t>0$.
We have that $$t\cdot\mathcal M_2^{(\widetilde X,\widetilde Y)}=B_1+B_2\cdot t+B_3\cdot \frac{1}{t},$$ where \begin{align*} B_1 &= \begin{mpmatrix} 0 & 0 & 0 & \frac{1}{2}\Delta& 0 & 0 & 0 \\ 0 &\frac{1}{2}\Delta & 0 & \frac{3(a+c) }{4}\Delta & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \frac{1}{2}\Delta & \frac{3(a+c) }{4}\Delta & 0 &\left(a^2+ac+c^2\right) \Delta&\frac{(a-c) }{4}\Delta& \frac{(a-c) }{4}\Delta&\frac{1}{2}\Delta\\ 0 & 0 & 0 & \frac{(a-c) }{4}\Delta & \frac{1}{2}\Delta & -\frac{1}{2}\Delta & 0 \\ 0 & 0 & 0 & \frac{(a-c) }{4}\Delta & -\frac{1}{2}\Delta & \frac{1}{2}\Delta & 0 \\ 0 & 0 & 0 & \frac{1}{2}\Delta & 0 & 0 & 0 \end{mpmatrix},\\ B_2 &= \begin{mpmatrix} 1 & \frac{1}{2} (a+c) & 0 & C_{4,2}(a,c) & \frac{1}{2} (a-c) & \frac{1}{2} (a-c) & 1\\ \frac{1}{2} (a+c) & C_{4,2}(a,c) & \frac{1}{2} (a-c) &C_{3,2}(a,c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) \\ 0 & \frac{1}{2} (a-c) & 1 & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) & \frac{1}{2} (a+c) & 0 \\ C_{4,2}(a,c) & C_{3,2}(a,c) & \frac{1}{2} (a-c) (a+c) & C_{1,2}(a,c) & C_{2,2}(a,c) &C_{2,2}(a,c) & C_{4,2}(a,c) \\ \frac{1}{2} (a-c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) & C_{2,2}(a,c) & C_{4,2}(a,c)& C_{4,2}(a,c) & \frac{1}{2} (a-c) \\ \frac{1}{2} (a-c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) & C_{2,2}(a,c) & C_{4,2}(a,c) & C_{4,2}(a,c) & \frac{1}{2} (a-c) \\ 1 & \frac{1}{2} (a+c) & 0 & C_{4,2}(a,c) & \frac{1}{2} (a-c) & \frac{1}{2} (a-c) & 1 \\ \end{mpmatrix},\\ B_3 &= \frac{\Delta^2}{4}\cdot E_{44}, \end{align*} with \begin{equation*} \begin{split} C_{1,2}(a,c) &= \frac{1}{2} \left( a^4 + c^4 \right) ,\\ C_{2,2}(a,c) &= \frac{1}{2} (a-c) \left(a^2+a c+c^2\right), \end{split} \qquad \begin{split} C_{3,2}(a,c) &= \frac{1}{2} (a+c) \left(a^2-a c+c^2\right),\\ C_{4,2}(a,c) &= \frac{1}{2} (a^2+c^2), \end{split} \end{equation*} and $E_{44}$ is the standard $7\times 7$ coordinate matrix with the only non-trivial entry in position $(4,4)$ being 1. 
Moreover, $B_2$ and $B_3$ are positive semidefinite. \end{lemma} \begin{proof} The statements about the form of $t\cdot\mathcal M_2^{(\widetilde X,\widetilde Y)}$ can be easily checked by direct computation. It is obvious that $B_3$ is psd. It remains to prove that $B_2$ is psd. We know that $t\cdot\mathcal M_2^{(\widetilde X,\widetilde Y)}$ is psd for every $t>0$. If $B_2$ had a negative eigenvalue, then $t\cdot\mathcal M_2^{(\widetilde X,\widetilde Y)}$ would also have a negative eigenvalue for $t>0$ large enough. (Note that $\displaystyle \lim_{t\to\infty} B_3\frac{1}{t}=\mathbf 0$.) \end{proof} The next lemma describes the moments generated by a pair $(X,Y)\in(\mathbb S\mathbb R^{n\times n})^2$ with $Y^2=I_n$ where the multiplicities of the eigenvalues $1$, $-1$ are $n-1$, $1$, respectively. \begin{lemma}\label{gen-lemma-one--1} Let $(X,Y)\in (\mathbb S\mathbb R^{n\times n})^2$, $n\geq 2$, be a pair of symmetric matrices of size $n$ such that $Y^2=I_n$ and the multiplicities of the eigenvalues $1$, $-1$ are $n-1$, $1$, respectively. Then: \begin{enumerate} \item\label{Y2=1-pt1} $\mathcal M_2^{(X,Y)}=\mathcal M_2^{(\widetilde X,\widetilde Y)}$ with \begin{equation}\label{form-of-atoms} \widetilde X=\left(\begin{array}{cc} D & x \\ x^t & \alpha\end{array}\right),\quad \widetilde Y=\left(\begin{array}{cc} I_{n-1} & 0 \\ 0 & -1 \end{array}\right), \end{equation} where $D\in\mathbb S\mathbb R^{(n-1)\times (n-1)}$ is a diagonal matrix, $x\in \mathbb R^{n-1}$ a vector, $\alpha\in\mathbb R$ a real number, and $$\widetilde X=WXW^t, \quad \widetilde Y=WYW^t,$$ for some orthogonal matrix $W\in \mathbb R^{n\times n}$.
\item\label{Y2=1-pt2} $\mathcal M_2^{(X,Y)}$ admits a measure of type $t$ if $\mathcal M_2^{(\widehat X,\widetilde Y)}$ admits a measure of type $t$ where \begin{equation*} \widehat X=\left(\begin{array}{cc} D_0 \oplus 0 & x \\ x^t & 0\end{array}\right) := -\left(\frac{\alpha+d_{n}}{2}\right)I_n+\widetilde X+\left(\frac{\alpha-d_n}{2}\right)\widetilde Y, \end{equation*} $\widetilde X, \widetilde Y$ are as in \eqref{Y2=1-pt1}, $d_n$ is the $(n-1)$-th diagonal entry of $D$ from \eqref{form-of-atoms} and $D_0$ is a diagonal matrix of size $n-2$. \item\label{Y2=1-pt3} $\mathcal M_2^{(\widehat X,\widetilde Y)}$ with $\widehat X$ and $\widetilde Y$ as in \eqref{Y2=1-pt2} is equal to $$\begin{mpmatrix} 1 & \beta_X & \frac{n-2}{n} & \beta_{X^2} & \beta_X & \beta_X & 1 \\ \beta_X & \beta_{X^2} & \beta_X & \beta_{X^3} & \beta_{X^2Y} & \beta_{X^2Y} & \beta_X \\ \frac{n-2}{n}& \beta_X & 1 & \beta_{X^2Y} & \beta_X & \beta_X & \frac{n-2}{n} \\ \beta_{X^2} & \beta_{X^3} &\beta_{X^2Y} & \beta_{X^4} & \beta_{X^3Y} & \beta_{X^3Y} & \beta_{X^2} \\ \beta_{X} & \beta_{X^2Y} & \beta_{X} & \beta_{X^3Y} & \beta_{X^2} & \beta_{XYXY} & \beta_X \\ \beta_{X} & \beta_{X^2Y} & \beta_{X} & \beta_{X^3Y} & \beta_{XYXY} & \beta_{X^2} & \beta_X \\ 1 & \beta_X & \frac{n-2}{n} & \beta_{X^2} & \beta_X & \beta_X & 1 \end{mpmatrix},$$ where \begin{equation*} \begin{split} \beta_X &= \frac{1}{n}\mathrm{tr}(D_0),\\ \beta_{X^2} &= \frac{1}{n}(\mathrm{tr}(D_0^2)+2x^tx),\\ \beta_{X^3} &= \frac{1}{n}(\mathrm{tr}(D_0^3)+3\mathrm{tr}(\hat{D}xx^t)),\\ \beta_{X^2Y} &= \frac{1}{n}\mathrm{tr}(D_0^2), \end{split} \qquad \begin{split} \beta_{X^3Y} &= \frac{1}{n}(\mathrm{tr}(D_0^3)+\mathrm{tr}(\hat{D}xx^t)),\\ \beta_{XYXY} &= \frac{1}{n}(\mathrm{tr}(D_0^2)-2x^tx),\\ \beta_{X^4} &= \frac{1}{n}(\mathrm{tr}(D_0^4)+4\mathrm{tr}(\hat{D}^2xx^t)+2(x^tx)^2),\\ & \end{split} \end{equation*} with $\hat{D} = D_0 \oplus 0$.
In particular, we have that \begin{align} \label{mom-rel-2} \beta_{X^2Y} &= \frac{1}{2}\left(\beta_{X^2}+\beta_{XYXY}\right),\\ \beta_{X^3Y} &= \beta_{X^3}-\frac{2}{n}x^t\hat{D}x. \label{mom-rel-3} \end{align} \end{enumerate} \end{lemma} \begin{proof} First we prove \eqref{Y2=1-pt1}. There is an orthogonal matrix $U\in \mathbb R^{n\times n}$ such that $UYU^t=:\widetilde Y$ is of the form as in \eqref{form-of-atoms}. Furthermore, there is an orthogonal matrix $V_0\in \mathbb R^{(n-1)\times (n-1)}$ such that by defining $V:=\begin{mpmatrix}V_0 & 0\\ 0 & 1\end{mpmatrix}$, the matrix $VUXU^tV^t=:\widetilde X$ is of the form \eqref{form-of-atoms}. Since we also have $V\widetilde YV^t=\widetilde Y$, defining $W=VU$ establishes \eqref{Y2=1-pt1}. Now we prove \eqref{Y2=1-pt2}. By applying the linear transformation $\phi(x,y)=(a+x+c y,y)$ to the sequence $\beta^{(4)}$, where $a=\frac{-d_n-\alpha}{2}$, $c=\frac{\alpha-d_n}{2}$ and $d_n$ is the $(n-1)$-th diagonal entry of $D$ from \eqref{form-of-atoms}, we get a sequence $\widetilde \beta^{(4)}$ with $\mathcal M_2^{(\widehat X, \widetilde Y)}$ where $\widehat X$ and $\widetilde Y$ are as stated in \eqref{Y2=1-pt2}. Since the type of a measure remains unchanged when applying an invertible affine linear transformation, this proves \eqref{Y2=1-pt2}. Part \eqref{Y2=1-pt3} of the lemma follows by direct calculation. See Appendix \ref{calc-part3-app} for the details. \end{proof} \begin{lemma}\label{gen-lemma-one--2} Let $(X,Y)\in (\mathbb S\mathbb R^{n\times n})^2$, $n\geq 2$, be a pair of symmetric matrices of size $n$ of the form \begin{equation}\label{form-of-atoms-2} X=\left(\begin{array}{cc} D & x \\ x^t & 0\end{array}\right)\in \mathbb S\mathbb R^{n\times n},\quad Y=\left(\begin{array}{cc} I_{n-1} & 0 \\ 0 & -1 \end{array}\right)\in \mathbb S\mathbb R^{n\times n} , \end{equation} where $D\in\mathbb S\mathbb R^{(n-1)\times (n-1)}$ is a diagonal matrix and $x\in \mathbb R^{n-1}$ is a vector.
Let $(\widetilde X,\widetilde Y)\in (\mathbb S\mathbb R^{2\times 2})^2$ be a pair of symmetric matrices of size $2$ of the form \begin{equation*} \widetilde X=\left(\begin{array}{cc} a & b \\ b & c\end{array}\right)\in \mathbb S\mathbb R^{2\times 2},\quad \widetilde Y=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)\in \mathbb S\mathbb R^{2\times 2}, \end{equation*} with $b=\sqrt{\frac{\Delta}{2t}}$, $\Delta:=\Delta(\mathcal M_2^{(X,Y)})$, $t>0$ and $B_1, B_2, B_3$ as in Lemma \ref{2-by-2-atom-new-info}. If $\mathcal M_2^{(X,Y)}-t\mathcal M_2^{(\widetilde X,\widetilde Y)}$ is positive semidefinite for some $t>0$, then $$c=0 \quad\text{and}\quad a=\frac{4x^t D x}{n\Delta}.$$ \end{lemma} \begin{proof} We begin by analyzing the kernel of $\big[\mathcal M_2^{(X,Y)}-B_1\big]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}$.\\ \noindent{\textbf{Claim 1.}} $v:=\left(\begin{array}{cccc} 0 & -1 & 0 & 1\end{array}\right)^T\in \ker\big[\mathcal M_2^{(X,Y)}-B_1\big]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}$.\\ Using Lemmas \ref{2-by-2-atom-new-info} and \ref{gen-lemma-one--1} we have $$\left( \mathcal M_2^{(X,Y)} - B_1\right)|_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}= \left(\begin{array}{cccc} \ast & \beta_X &\ast & \beta_{XY}\\ \ast & \frac{1}{2}\left(\beta_{X^2} + \beta_{XYXY}\right) & \ast & \beta_{X^2Y} \\ \ast & \beta_{XY} & \ast & \beta_X \\ \ast & \beta_{X^2Y} & \ast & \frac{1}{2}\left(\beta_{X^2} + \beta_{XYXY}\right) \end{array}\right).$$ Moreover, using \eqref{mom-rel-2} together with the equality $\beta_{XY}=\beta_X$ (both are equal to $\frac{1}{n}\mathrm{tr}(D)$ for $X$, $Y$ of the form \eqref{form-of-atoms-2}), we see that the second and the fourth column of the matrix $\left[ \mathcal M_2^{(X,Y)} - B_1\right]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}$ are equal.
Hence the vector $v$ is in the kernel of $\left[ \mathcal M_2^{(X,Y)} - B_1\right]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}$.\\ Since $B_2$ and $B_3$ are psd by Lemma \ref{2-by-2-atom-new-info}, Claim 1 implies that $v$ must be in the kernel of both $[B_2]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}$ and $[B_3]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}$ if $\mathcal M_2^{(X,Y)}-t\mathcal M_2^{(\widetilde X,\widetilde Y)}$ is psd for some $t>0$. We have that $[B_3]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}=\mathbf 0_4$ so $v$ is indeed in its kernel, while $[B_2]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}\mathbb{Y}\}}v$ is equal to $$\begin{mpmatrix} 1 & \frac{1}{2} (a+c) & 0 & \frac{1}{2} (a-c) \\ \frac{1}{2} (a+c) & \frac{1}{2} \left(a^2+c^2\right) & \frac{1}{2} (a-c) & \frac{1}{2} (a-c) (a+c) \\ 0 & \frac{1}{2} (a-c) & 1 & \frac{1}{2} (a+c) \\ \frac{1}{2} (a-c) & \frac{1}{2} (a-c) (a+c) & \frac{1}{2} (a+c) & \frac{1}{2} \left(a^2+c^2\right)\end{mpmatrix} v= c\cdot \begin{mpmatrix} -1\\ -c \\ 1\\c \end{mpmatrix}. $$ Hence we must have $c=0$.\\ \noindent{\textbf{Claim 2.}} If $\mathcal M_2^{(X,Y)}-t\mathcal M_2^{(\widetilde X,\widetilde Y)}$ is psd, then $\widetilde v:=\left(\begin{array}{ccccc} 0 & -1 & 0 & 0 & 1\end{array}\right)^T\in \ker\big[\mathcal M_2^{(X,Y)}-B_1\big]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}^2, \mathbb{X}\mathbb{Y}\}}$.\\ By Claim 1 it easily follows that \begin{equation}\label{eq0} {\widetilde v}^T \big[\mathcal M_2^{(X,Y)}-B_1\big]_{\{\mathds 1, \mathbb{X},\mathbb{Y},\mathbb{X}^2, \mathbb{X}\mathbb{Y}\}}\widetilde v=0. 
\end{equation} If $\mathcal M_2^{(X,Y)}-t\mathcal M_2^{(\widetilde X,\widetilde Y)}$ is psd, then $\mathcal M_2^{(X,Y)}-B_1$ is psd as well, and by \eqref{eq0} Claim 2 follows.\\ Using Claim 2 and $c=0$, the assumption that $\mathcal M_2^{(X,Y)}-t\mathcal M_2^{(\widetilde X,\widetilde Y)}$ is psd for some $t>0$ implies that $$\beta_{X^3}-\frac{3\Delta}{4}a=\left[ \mathcal M_2^{(X,Y)} - B_1\right]_{\{\{\mathbb{X}^2\},\{\mathbb{X}\}\}}=\left[ \mathcal M_2^{(X,Y)} - B_1\right]_{\{\{\mathbb{X}^2\},\{\mathbb{X}\mathbb{Y}\}\}}=\beta_{X^3Y}-\frac{\Delta}{4}a,$$ which further implies that $$a=\frac{2}{\Delta}(\beta_{X^3}-\beta_{X^3Y})=\frac{4x^t D x}{n\Delta},$$ where we used \eqref{mom-rel-3} for the second equality. This proves the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{main-res-2}] We have to prove that $\mathcal M_2^{(X,Y)}$, where $(X,Y)\in (\mathbb S\mathbb R^{3\times 3})^2$ and $Y^2=I_3$, has a measure of type $(m_1,m_2)$, where $m_1,m_2\in \mathbb N\cup\{0\}$. If $Y$ has all eigenvalues equal to $1$ or all equal to $-1$, then $X$ and $Y$ commute and there is an orthogonal transformation $U\in \mathbb R^{3\times 3}$ such that $UXU^t$ is diagonal and $UYU^t=\pm I_3$. Since $\mathcal M_2^{(X,Y)}=\mathcal M_2^{(UXU^t,UYU^t)}$, there exists a measure consisting of $m_1\leq 3$ atoms of size 1. Otherwise $Y$ has two eigenvalues of one sign and the third of the opposite sign. We may assume WLOG that two eigenvalues are 1 and the third is $-1$ (otherwise we do an affine linear transformation $(x,y)\mapsto (x,-y)$). By Lemma \ref{gen-lemma-one--1} \eqref{Y2=1-pt2} it is enough to prove that $\mathcal M_2^{(X,Y)}$ has a measure of type $(m_1,m_2)$, where $m_1,m_2\in \mathbb N\cup\{0\}$, for $$X=\begin{mpmatrix} x_1 & 0 & x_2\\ 0 & 0 & x_3 \\ x_2 & x_3 & 0\end{mpmatrix},\quad Y=\begin{mpmatrix} 1 & 0 & 0\\ 0& 1& 0\\ 0& 0&-1\end{mpmatrix},$$ where $x_1,x_2,x_3\in \mathbb R$. We will separate two cases.
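As a numerical aside (not needed for the argument), the relation \eqref{mom-rel-2} and the value $\Delta(\mathcal M_2^{(X,Y)})=\frac{4}{3}(x_2^2+x_3^2)$ for this pair can be spot-checked in Python/numpy; the sample values of $x_1,x_2,x_3$ below are arbitrary:

```python
# Spot-check of the moment relations for X = [[x1,0,x2],[0,0,x3],[x2,x3,0]]
# and Y = diag(1, 1, -1), with arbitrary sample values x1, x2, x3.
import numpy as np

x1, x2, x3 = 1.3, 0.7, -0.4
X = np.array([[x1, 0, x2],
              [0,  0, x3],
              [x2, x3, 0]])
Y = np.diag([1.0, 1.0, -1.0])

def beta(*word):
    # normalized trace of the word given as a sequence of matrices
    P = np.eye(3)
    for W in word:
        P = P @ W
    return np.trace(P) / 3

# relation (mom-rel-2): beta_{X^2 Y} = (beta_{X^2} + beta_{XYXY}) / 2
print(np.isclose(beta(X, X, Y), (beta(X, X) + beta(X, Y, X, Y)) / 2))  # True
# Delta = beta_{X^2 Y^2} - beta_{XYXY} = 4 (x2^2 + x3^2) / 3
delta = beta(X, X, Y, Y) - beta(X, Y, X, Y)
print(np.isclose(delta, 4 * (x2**2 + x3**2) / 3))                      # True
```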
\\ \noindent\textbf{Case 1.} $x_1=0$ or $x_2=0$ or $x_3=0$:\\ If $x_1=0$, we have $XY+YX=0$ and $\mathcal M_2^{(X,Y)}$ is of rank at most 5. By \cite[Theorems 3.1, 6.5, 6.8, 6.11, 6.14]{BZ18} it follows that $\mathcal M_2^{(X,Y)}$ admits a measure of type $(m_1,1)$ where $m_1\in \mathbb N$. If $x_2=0$, the subspace $\Span\{e_1\}$ is reducing for $X$ and $Y$, and we can replace $(X,Y)$ by $(x_1,1)$ of density $\frac{1}{3}$ and $\Big(\begin{mpmatrix}0 & x_3\\ x_3 & 0\end{mpmatrix}, \begin{mpmatrix}1 & 0\\ 0 & -1\end{mpmatrix}\Big)$ of density $\frac{2}{3}$. If $x_3=0$, the subspace $\Span\{e_2\}$ is reducing for $X$ and $Y$, and we can replace $(X,Y)$ by $(0,1)$ of density $\frac{1}{3}$ and $\Big(\begin{mpmatrix}x_1 & x_2\\ x_2 & 0\end{mpmatrix}, \begin{mpmatrix}1 & 0\\ 0 & -1\end{mpmatrix}\Big)$ of density $\frac{2}{3}$. This proves the theorem in Case 1.\\ \noindent\textbf{Case 2.} $x_1\neq 0$ and $x_2\neq 0$ and $x_3\neq 0$:\\ We will prove that $\mathcal M_2^{(X,Y)}$ admits a measure of type $(m_1,1)$, $m_1\in \mathbb N$. We denote by $(X_1,Y_1)\in (\mathbb S\mathbb R^{2\times 2})^2$ the atom of size 2 and by $t$ its density. By Lemma \ref{2-by-2-atom} we may assume that $X_1=\begin{mpmatrix} a & b \\ b & c\end{mpmatrix}$, $Y_1=\begin{mpmatrix} 1 & 0 \\ 0& -1\end{mpmatrix}.$ Furthermore, by Lemma \ref{2-by-2-atom} we must have \begin{equation}\label{t-and-diff-of-nc-mom} b=\pm \sqrt{\frac{1}{2t}\Delta(\mathcal M_2^{(X,Y)})}=\pm \sqrt{\frac{2}{3t}(x_2^2+x_3^2)}. \end{equation} Since $\beta_{Y}(\mathcal M_2^{(X,Y)})=\frac{1}{3}$, $\beta_{Y}(\mathcal M_2^{(X_1,Y_1)})=0$ and $\beta_{Y}(\mathcal M_2^{(x_i,y_i)})=\pm 1$ for every atom $(x_i,y_i)$ of size $1$, the sum $\sum_{i} \mu_i$ of the densities $\mu_i$ of atoms of size 1 must be at least $\frac{1}{3}$. Hence, the density $t$ satisfies $t\leq \frac{2}{3}$. Since the atoms of size 1 are not sufficient, we have that $t>0$.
To prove the theorem in Case 2 it suffices to prove the following claim.\\ \noindent\textbf{Claim.} There exists $t\in (0,\frac{2}{3}]$ such that $$F(t):=\mathcal M_2^{(X,Y)}-t\cdot \mathcal M_2^{(X_1,Y_1)}$$ admits a measure consisting of $m_1\in \mathbb N$ atoms of size 1.\\ A necessary condition for $F(t)$, $t>0$, to admit a measure is $F(t)\succeq 0$. By Lemma \ref{gen-lemma-one--2} we must have $c=0$ and $a=\frac{x_1x_2^2}{x_2^2+x_3^2}$ in $X_{1}$. Let $B_1,B_2,B_3$ be as in Lemma \ref{2-by-2-atom-new-info}. We have that \begin{align*} F(t) &=\mathcal M_2^{(X,Y)}-B_1-tB_2-\frac{1}{t}B_3\\ &=\underbrace{\frac{1}{3}\begin{mpmatrix} 3 & x_1 & 1 & x_1^2 & x_1 & x_1 & 3\\ x_1 & x_1^2 & x_1 & x_1^3 & x_1^2 & x_1^2 & x_1 \\ 1& x_1 & 3 & x_1^2 & x_1 & x_1 & 1 \\ x_1^2 & x_1^3 & x_1^2 & C(x_1,x_2,x_3) & x_1^3 & x_1^3& x_1^2 \\ x_1 & x_1^2 & x_1 & x_1^3 & x_1^2 & x_1^2 & x_1 \\ x_1 & x_1^2 & x_1 & x_1^3 & x_1^2 & x_1^2 & x_1 \\ 3 & x_1 & 1 & x_1^2 &x_1&x_1 & 3 \end{mpmatrix}}_{\mathcal M_2^{(X,Y)}-B_1}- \underbrace{\frac{t}{2}\begin{mpmatrix} 2 & a & 0 & a^2 & a & a & 2\\ a & a^2 & a & a^3 & a^2 & a^2 & a \\ 0 & a & 2 & a^2 & a & a & 0 \\ a^2 & a^3 & a^2 & a^4 & a^3 &a^3 & a^2 \\ a & a^2 & a & a^3 & a^2 & a^2 & a \\ a & a^2 & a & a^3 & a^2 & a^2 & a \\ 2 & a & 0 & a^2 & a & a & 2 \\ \end{mpmatrix}}_{tB_2}-\underbrace{\frac{1}{9t}(x_2^2+x_3^2)^2E_{44}}_{\frac{1}{t}B_3}, \end{align*} where $$C(x_1,x_2,x_3) =x_1^4+4\frac{x_1^2x_2^2x_3^2}{x_2^2+x_3^2}+2(x_2^2+x_3^2)^2,$$ and the forms of $\mathcal M_2^{(X,Y)}$, $B_1, B_2, B_3$ are from Lemmas \ref{gen-lemma-one--1} \eqref{Y2=1-pt3}, \ref{gen-lemma-one--2}. Clearly the kernels of $\mathcal M_2^{(X,Y)}-B_1$, $B_2$ and $B_3$ contain the vectors \begin{equation*} v_1 = (-1,0,0,0,0,0,1)^T,\quad v_2 = (0,-1,0,0,0,1,0)^T,\quad v_3 = (0,-1,0,0,1,0,0)^T.
\end{equation*} Hence, to prove that $F(t)$ is psd for some $t>0$ it is enough to consider the submatrix $$[F(t)]_{\{\mathds 1,\mathbb{X},\mathbb{Y},\mathbb{X}^2\}}.$$ Its principal minors are the following \begin{align*} \det\left([F(t)]_{\{\mathds 1\}}\right) &= 1-t,\\ \det\left([F(t)]_{\{\mathds 1,\mathbb{X}\}}\right) &= \frac{x_1^2 \left(\left(9 t^2-18 t+8\right) x_2^4+4 (4-3 t) x_2^2 x_3^2+4 (2-3 t) x_3^4\right)}{36 \left(x_2^2+x_3^2\right)^2},\\ \det\left([F(t)]_{\{\mathds 1,\mathbb{X},\mathbb{Y}\}}\right) &= \frac{(2-3 t) x_1^2 \left((2-3 t) x_2^4+(2-3 t) x_3^4+4 x_2^2 x_3^2\right)}{27 \left(x_2^2+x_3^2\right)^2},\\ \det\left([F(t)]_{\{\mathds 1,\mathbb{X},\mathbb{Y},\mathbb{X}^2\}}\right) &=\frac{1}{243 t ((x_2^2+x_3^2)^4) x_1^2} f(t),\\ \end{align*} where $$f(t)=f_0(x_1,x_2,x_3)+t\cdot f_1(x_1,x_2,x_3)+t^2\cdot f_2(x_1,x_2,x_3)+t^3\cdot f_3(x_1,x_2,x_3),$$ and \begin{align*} f_0(x_1,x_2,x_3) &=-16 (x_2^2 + x_3^2)^6,\\ f_1(x_1,x_2,x_3) &=24 (x_2^2 + x_3^2)^3 (3 x_2^6 + 7 x_2^4 x_3^2 + 3 x_3^6 + x_2^2 (2 x_1^2 x_3^2 + 7 x_3^4)),\\ f_2(x_1,x_2,x_3) & =-18 (6 x_2^{12} + 28 x_2^{10} x_3^2 + 6 x_3^{12} + 4 x_2^2 x_3^8 (2 x_1^2 + 7 x_3^2) + x_2^8 (8 x_1^2 x_3^2 + 58 x_3^4) +\\ &\hspace{2cm}+x_2^4 x_3^4 (x_1^4 + 16 x_1^2 x_3^2 + 58 x_3^4) + 8 x_2^6 (2 x_1^2 x_3^4 + 9 x_3^6)),\\ f_3(x_1,x_2,x_3) &=27 (2 x_2^{12} + 8 x_2^{10} x_3^2 + 2 x_3^{12} + 4 x_2^2 x_3^8 (x_1^2 + 2 x_3^2) + 4 x_2^6 x_3^4 (x_1^2 + 4 x_3^2) + 2 x_2^8 (2 x_1^2 x_3^2 + 7 x_3^4) + \\ &\hspace{2cm}+ x_2^4 x_3^4 (x_1^4 + 4 x_1^2 x_3^2 + 14 x_3^4)). 
\end{align*} For $t=\frac{2}{3}$ we get \begin{equation*} \begin{split} \det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1\}}\right) &= \frac{1}{3},\\ \det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1,\mathbb{X},\mathbb{Y}\}}\right) &=0, \end{split} \qquad \begin{split} \det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1,\mathbb{X}\}}\right) &=\frac{2 x_1^2 x_2^2 x_3^2}{9 \left(x_2^2+x_3^2\right)^2},\\ \det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1,\mathbb{X},\mathbb{Y},\mathbb{X}^2\}}\right) &=0. \end{split} \end{equation*} In addition we also calculate \begin{equation}\label{1-X-X2-det} \det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1,\mathbb{X},\mathbb{X}^2\}}\right)=-\frac{x_1^4 x_2^4 x_3^4 (x_1^2-8 (x_2^2+x_3^2))}{27 (x_2^2+x_3^2)^4}. \end{equation} According to \eqref{1-X-X2-det} there are two cases to consider.\\ \noindent \textbf{Case 2.1.} $x_1^2-8 (x_2^2+x_3^2)\leq 0$:\\ It is easy to check that the columns $\mathds 1$ and $\mathbb{Y}$ of $F\big(\frac{2}{3}\big)$ are both equal to $$\Big( \begin{array}{ccccccc} \frac{1}{3} & \frac{x_1 x_3^2}{3 \left(x_2^2+x_3^2\right)} & \frac{1}{3} & \frac{x_1^2 x_3^2 \left(2 x_2^2+x_3^2\right)}{3 \left(x_2^2+x_3^2\right)^2} & \frac{x_1 x_3^2}{3 \left(x_2^2+x_3^2\right)} & \frac{x_1 x_3^2}{3 \left(x_2^2+x_3^2\right)} & \frac{1}{3} \end{array} \Big)^t.$$ Hence $F\big(\frac{2}{3}\big)$ satisfies the relations $\mathbb{Y}=\mathds 1$, $\mathbb{X}\mathbb{Y}=\mathbb{Y}\mathbb{X}=\mathbb{X}$, $\mathbb{Y}^2=\mathds 1$. Since $$\det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1\}}\right) >0, \quad \det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1,\mathbb{X}\}}\right) > 0\quad \text{and}\quad\det\left([F\big(\frac{2}{3}\big)]_{\{\mathds 1,\mathbb{X},\mathbb{X}^2\}}\right) \geq 0,$$ $[F\big(\frac{2}{3}\big)]_{\{\mathds 1,\mathbb{X},\mathbb{X}^2\}}$ is a psd matrix of rank 2 or 3. Hence $F(\frac{2}{3})$ is a psd commutative moment matrix of rank 2 or 3.
If $x_1^2-8 (x_2^2+x_3^2)=0$, then the fifth relation is $\mathbb{X}^2=a_0\mathds 1+a_1\mathbb{X}$ for some $a_0,a_1\in \mathbb R$. Thus it is recursively generated, and by the results of Curto and Fialkow \cite{CF98-1}, \cite{CF02}, \cite{Fia14} (see also \cite[Theorem 2.7]{BZ18}) it admits a measure consisting of 2 or 3 commutative atoms.\\
\noindent \textbf{Case 2.2.} $x_1^2-8 (x_2^2+x_3^2)>0$:\\
It is easy to see that for $0<t<\frac{2}{3}$ we have
\begin{equation*}
\det\left([F(t)]_{\{\mathds 1\}}\right) >0, \quad \det\left([F(t)]_{\{\mathds 1,\mathbb{X}\}}\right) >0\quad\text{and}\quad \det\left([F(t)]_{\{\mathds 1,\mathbb{X},\mathbb{Y}\}}\right)>0.
\end{equation*}
Since $\det\left([F(\frac{2}{3})]_{\{\mathds 1,\mathbb{X},\mathbb{Y},\mathbb{X}^2\}}\right)=0$, we have that $f(\frac{2}{3})=0$, and hence $$f(t)=\big(\frac{2}{3}-t\big)g(t)$$ for some polynomial $g(t)$ which is quadratic in $t$. The polynomial $g(t)$ has a negative leading coefficient (namely $-f_3$), which implies that $g(t)$ achieves its maximum at the point $t_0$ satisfying $g'(t_0)=0$. A calculation reveals $t_0$ to be $$\frac{4 \left(x_2^2+x_3^2\right)^3 \left(x_2^2 x_3^2 \left(x_1^2+2 x_3^2\right)+x_2^6+2 x_2^4 x_3^2+x_3^6\right)}{3 \left(2 x_2^8 \left(2 x_1^2 x_3^2+7 x_3^4\right)+4 x_2^6 x_3^4 \left(x_1^2+4 x_3^2\right)+4 x_2^2 x_3^8 \left(x_1^2+2 x_3^2\right)+x_2^4 x_3^4 \left(x_1^4+4 x_1^2 x_3^2+14 x_3^4\right)+2 x_2^{12}+8 x_2^{10} x_3^2+2 x_3^{12}\right)}.$$ Moreover, $g(t_0)$ equals $$\frac{24 x_2^4 x_3^4 \left(x_2^2+x_3^2\right)^6 \left(x_1^4+4 x_1^2 \left(x_2^2+x_3^2\right)+2 \left(x_2^2+x_3^2\right)^2\right)}{2 x_2^8 \left(2 x_1^2 x_3^2+7 x_3^4\right)+4 x_2^6 x_3^4 \left(x_1^2+4 x_3^2\right)+4 x_2^2 x_3^8 \left(x_1^2+2 x_3^2\right)+x_2^4 x_3^4 \left(x_1^4+4 x_1^2 x_3^2+14 x_3^4\right)+2 x_2^{12}+8 x_2^{10} x_3^2+2 x_3^{12}},$$ which is strictly positive, since the numerator and the denominator of $g(t_{0})$ are sums of squares and $x_{i}\neq 0$. It remains to show that $0<t_0<\frac{2}{3}$. Since the numerator and the denominator of $t_0$ are sums of monomials with positive coefficients, $t_0>0$ is clear, so we only have to prove the upper bound.
The numerator and the denominator of $t_0$ are linear combinations of the monomials $$ x_2^{12} ,\; x_2^{10}x_3^{2} ,\; x_2^{8}x_3^{4} ,\; x_2^{6}x_3^{6} ,\;x_2^{4}x_3^8 ,\; x_2^{2}x_3^{10} ,\; x_3^{12} ,\; x_1^2x_2^{8}x_3^{2} ,\; x_1^2x_2^{6}x_3^{4},\;x_1^2x_2^{4}x_3^{6},\; x_1^2x_2^{2}x_3^{8},\; x_1^4x_2^{4}x_3^{4},$$ with the following coefficients: $$\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c} \text{monomial}&x_2^{12} & x_2^{10}x_3^{2} & x_2^{8}x_3^{4} & x_2^{6}x_3^{6} & x_2^{4}x_3^8 & x_2^{2}x_3^{10} & x_3^{12} & x_1^2x_2^{8}x_3^{2} & x_1^2x_2^{6}x_3^{4}& x_1^2x_2^{4}x_3^{6}& x_1^2x_2^{2}x_3^{8}& x_1^4x_2^{4}x_3^{4} \\ \hline \text{numerator}& 4 & 20 & 44 & 56 & 44 & 20 & 4 & 4 & 12 & 12 & 4 & 0\\ \hline \text{denominator}& 6 & 24 & 42 & 48 & 42 & 24 & 6 & 12 & 12 & 12 & 12 & 3\\ \end{array}.$$ Since we are in Case 2.2, we can use the inequality $$x_1^2>8x_{2}^2+8x_{3}^2$$ to estimate
\begin{eqnarray}
x_1^2x_2^{8}x_3^{2} &>& 8x_2^{10}x_3^2+8x_2^{8}x_3^4, \label{ineq1}\\
x_1^2x_2^{2}x_3^{8} &>& 8x_2^{4}x_3^{8}+8x_2^2x_3^{10},\label{ineq2}\\
x_1^4x_2^{4}x_3^{4} &>& 8x_1^2x_2^{6}x_3^4+8x_1^2x_2^{4}x_3^6, \label{ineq3}\\
x_1^4x_2^{4}x_3^{4} &>& 64x_2^8x_3^4+128x_2^6x_3^6+64x_2^4x_3^8.\label{ineq4}
\end{eqnarray}
Summing up all the inequalities \eqref{ineq1}-\eqref{ineq4} we see that
\begin{equation*}
x_1^2x_2^{8}x_3^{2}+x_1^2x_2^{2}x_3^{8}+2x_1^4x_2^{4}x_3^{4} > 8x_2^{10}x_3^2+72x_2^{8}x_3^4+8x_2^{2}x_3^{10}+72x_2^{4}x_3^8+8x_1^2x_2^{6}x_3^4+8x_1^2x_2^{4}x_3^6+128x_2^6x_3^6.
\end{equation*}
Using this inequality we estimate the denominator from below by the following coefficients: $$\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c} \text{monomial}&x_2^{12} & x_2^{10}x_3^{2} & x_2^{8}x_3^{4} & x_2^{6}x_3^{6} & x_2^{4}x_3^8 & x_2^{2}x_3^{10} & x_3^{12} & x_1^2x_2^{8}x_3^{2} & x_1^2x_2^{6}x_3^{4}& x_1^2x_2^{4}x_3^{6}& x_1^2x_2^{2}x_3^{8}& x_1^4x_2^{4}x_3^{4} \\ \hline \text{lower bound}& 6 & 32 & 114 & 176 & 114 & 32 & 6 & 11 & 20 & 20 & 11 & 1 \end{array}.$$ Since all the coefficients of this lower bound on the denominator are at least $\frac{3}{2}$ times the corresponding coefficients of the numerator, with strict inequality at some of them, we conclude that the denominator is greater than $\frac{3}{2}$ times the numerator, and hence $t_0< \frac{2}{3}$. Hence $F(t_0)$ is a cm moment matrix of rank 4, which is RG and psd with the cm variety $\{(x,y)\colon y=1\} \cup\{(0,-1)\}$ of infinite cardinality. Hence it admits a measure consisting of atoms of size 1 by the results of Curto and Fialkow; see \cite{Fia14} and the references therein. This settles Case 2.2 and concludes the proof of the Claim. Thus the theorem is proved. \end{proof} \begin{remark} \begin{enumerate} \item Note that Lemma \ref{gen-lemma-one--2} is true for any $n$, not only for $n=3$. Hence if $Y$ has only 1 eigenvalue of some sign, then the atom of size 2 is uniquely determined up to density. Numerical experiments show that even in this case Claim 2 from the proof of Theorem \ref{main-res-2} is true, but we were not able to find a theoretical argument for this observation as in the case $n=3$. In future research we plan to find an argument for the existence of such a $t$ without resorting to brute-force methods. \item If $Y$ has multiplicity of both eigenvalues at least 2, then the possible atoms of size 2 in the measure are no longer unique (up to density), and some other construction of the measure is needed.
\item The characterization of finite sequences of real numbers that are the moments of \textit{one-atomic} tracial measures is deeply connected with Horn's problem (cf.\ \cite{CW18}). One approach to solving Horn's problem for $n\in\mathbb{N}$ is to solve instead the one-atomic bivariate tracial moment problem of degree $2n-2$. In particular, solving the bivariate quartic tracial moment problem with the restriction of representing measures having a single size 3 atom $(X,Y)\in (\mathbb S\mathbb R^{3\times 3})^2$ solves Horn's problem for $n=3$. The results of \cite{BZ18} and the analysis of this section do precisely this in the singular case, i.e., when the moment matrix $\mathcal M_2^{(X,Y)}$ is singular. \end{enumerate} \end{remark} \section{Extension to $\mathcal{M}_n$ with two relations in $\mathcal M_2$} \label{S3} The main result of this section, Theorem \ref{M_n-XY+YX=0-all-in-one} below, extends the results on the existence of the measure for $\mathcal M_n$ with two quadratic column relations from $n=2$ (see \cite[Theorems 6.5, 6.8, 6.11, 6.14]{BZ18}) to an arbitrary $n\in \mathbb N$. Throughout this section, unless otherwise stated, we assume that $n\geq 2$. We will also frequently be considering $[\mathcal{M}_{n}]_{\{\mathds{1}, \mathbb{X}, \mathbb{Y}, \mathbb{X}^{2}, \mathbb{X}\mathbb{Y}, \mathbb{Y}\mathbb{X}, \mathbb{Y}^{2}\}}$, the quadratic component of $\mathcal{M}_{n}$. Thus we introduce the notation $$ \mathcal{M}_{Q} := [\mathcal{M}_{n}]_{\{\mathds{1}, \mathbb{X}, \mathbb{Y}, \mathbb{X}^{2}, \mathbb{X}\mathbb{Y}, \mathbb{Y}\mathbb{X}, \mathbb{Y}^{2}\}}.
$$ We say that $\mathcal{M}_{n}$ is in \textbf{canonical form} if it satisfies the relation $$\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X} = \mathbf{0}$$ and one of the following relations \begin{equation}\label{second-column-relation} \mathbb{Y}^2=\mathds 1-\mathbb{X}^2\quad{\text{or}}\quad \mathbb{Y}^2=\mathds{1} \quad{\text{or}}\quad \mathbb{Y}^2=\mathds{1}+\mathbb{X}^2\quad{\text{or}}\quad \mathbb{Y}^2=\mathbb{X}^2. \end{equation} We begin by showing that every $\mathcal{M}_{n}$, with $\mathcal{M}_{Q}$ of rank 5, can be transformed into a canonical form. \begin{lemma}\label{possible-relations} Suppose $\beta\equiv \beta^{(2n)}$ is a nc sequence with a moment matrix $\mathcal{M}_n$ such that $\mathcal{M}_{Q}$ is of rank 5. If $\mathcal M_n$ is positive semidefinite and recursively generated, then there exists an affine linear transformation $\phi$ such that the sequence $\widehat \beta$, given by $\widehat \beta_w=L_\beta(w\circ\phi)$, has a moment matrix $\widehat{\mathcal{M}_n}$ in a canonical form. \end{lemma} \begin{proof}[Proof of Lemma \ref{possible-relations}] By \cite[Proposition 4.1 (1)]{BZ18} there exists a transformation $\phi$ such that $\widehat{\mathcal{M}_{Q}}$ is in a canonical form. (Note that the assumption of \cite[Proposition 4.1 (1)]{BZ18} that $\mathcal M_2$ admits a measure can be replaced by the assumption that $\mathcal M_2$ is psd and RG, since only these two properties are used in the proof.) Since $\mathcal M_n$ (and hence also $\widehat{\mathcal{M}_n}$) is psd, we conclude by \cite[Proposition 3.9]{CF96} that the relations from $\widehat{\mathcal{M}_{Q}}$ must also hold in $\widehat{\mathcal{M}_n}$. This proves the lemma. \end{proof} \begin{theorem} \label{M_n-XY+YX=0-all-in-one} Suppose $\beta\equiv \beta^{(2n)}$ is a nc sequence with a moment matrix $\mathcal{M}_n$, which is positive semidefinite and recursively generated, and such that $\mathcal{M}_{Q}$ is of rank 5.
Then $\beta$ admits a nc measure if and only if, in the canonical form with moment matrix $\widehat{\mathcal{M}_n}$ and moments $\widehat{\beta}_w$, the matrix $$\widehat{\mathcal{M}_n}-|\widehat{\beta}_X| \mathcal M_{n}^{(\sign(\widehat{\beta}_X)1,0)}- |\widehat{\beta}_Y|\mathcal M_n^{(0,\sign(\widehat{\beta}_Y)1)}$$ is positive semidefinite and recursively generated. Moreover, all the atoms in the measure are of size at most 2. \end{theorem} Given an $\mathcal{M}_{n}$ in canonical form, the column space of $\mathcal{M}_{n}$ is easily described. \begin{lemma}\label{2-to-n-extension-lemma} Suppose that $\mathcal M_n$ is recursively generated and in a canonical form. Then we have the following: \begin{enumerate}[(1)] \item $\mathcal{M}_n$ satisfies the relation $\mathbb{X}^i\mathbb{Y}+(-1)^{i+1}\mathbb{Y}\mathbb{X}^i=\mathbf 0$ for every $i\in \{1,\dotsc,n-1\}$. \item The column space $\mathcal C_{\mathcal{M}_n}$ of $\mathcal{M}_n$ is equal to $$\mathcal C_{\mathcal{M}_n}=\Span{\Big( \{\mathds{1}\} \bigcup \bigcup_{i=1}^n \{ \mathbb{X}^i, \mathbb{X}^{i-1}\mathbb{Y} \}\Big)}.$$ \end{enumerate} \end{lemma} \begin{proof} \textit{(1).} We proceed via induction. For $i=1$, the relation holds due to $\mathcal{M}_{n}$ being in canonical form. Now suppose that the relation $\mathbb{X}^i\mathbb{Y}+(-1)^{i+1}\mathbb{Y}\mathbb{X}^i=\mathbf 0$ holds in $\mathcal{M}_n$ for some $i\in \{1,\dotsc,n-2\}$. Multiplying $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0$ by $\mathbb{X}^{i}$ from the left we obtain that $$ \mathbf 0=\mathbb{X}^{i+1}\mathbb{Y}+\mathbb{X}^{i}\mathbb{Y}\mathbb{X} = \mathbb{X}^{i+1}\mathbb{Y}+(-1)^{i+2}\mathbb{Y}\mathbb{X}^{i+1}, $$ where we use the inductive hypothesis for the second equality. By RG, the relation $\mathbb{X}^{i+1}\mathbb{Y}+(-1)^{i+2}\mathbb{Y}\mathbb{X}^{i+1}=\mathbf 0$ also holds in $\mathcal{M}_{n}$, and hence the statement is proved.
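To make the induction step concrete, the case $i=2$ of the relation reads $\mathbb{X}^2\mathbb{Y}-\mathbb{Y}\mathbb{X}^2=\mathbf 0$; indeed,
\begin{equation*}
\mathbf 0=\mathbb{X}\left(\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}\right)=\mathbb{X}^2\mathbb{Y}+\mathbb{X}\mathbb{Y}\mathbb{X}=\mathbb{X}^2\mathbb{Y}-\mathbb{Y}\mathbb{X}^2,
\end{equation*}
where the last equality uses the base case in the form $\mathbb{X}\mathbb{Y}=-\mathbb{Y}\mathbb{X}$.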
\textit{(2).} Consider a column indexed by a monomial $\mathbb{X}^{i_0}\mathbb{Y}^{j_1}\mathbb{X}^{i_1}\mathbb{Y}^{j_2}\cdots \mathbb{X}^{i_k}\mathbb{Y}^{j_{k+1}}$ where $k\in \mathbb N$, $i_0,j_{k+1}\in\mathbb N\cup \{0\}$ and $i_1, j_1,\ldots, i_k, j_k\in \mathbb N$. Using (1), we know that such a column is equal to a $\pm 1$ multiple of the column indexed by the monomial $\displaystyle\mathbb{X}^{\sum_{\ell=0}^k i_\ell} \mathbb{Y}^{\sum_{\ell=1}^{k+1} j_\ell}$. By using one of the relations \eqref{second-column-relation}, the column $\displaystyle\mathbb{X}^{\sum_{\ell=0}^k i_\ell} \mathbb{Y}^{\sum_{\ell=1}^{k+1} j_\ell}$ becomes a linear combination of the columns of the form $\mathbb{X}^{i}$ and $\mathbb{X}^{i-1}\mathbb{Y}$ with $i\leq n$. \end{proof} Before proving our main result, we record in the next two lemmas some properties of the moments in our setting. In particular, we show that many moments obtained from nc atoms in the measure for $\mathcal M_n$ are 0. \begin{lemma}\label{zero-moment-general} Suppose that $\mathcal{M}_n$ satisfies the relation $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$. If $\beta$ admits a nc measure, then there exists a measure in which every nc atom is of the form \begin{equation}\label{form-of-nc-atoms-general} \widetilde{X}=\left(\begin{matrix} \mathbf 0_t & B \\ B^t & \mathbf 0_t \end{matrix}\right), \quad \widetilde{Y}=\left(\begin{matrix} \mu I_t & \mathbf 0_t\\ \mathbf 0_t & -\mu I_t \end{matrix}\right), \end{equation} with $(\widetilde{X},\widetilde{Y})\in (\mathbb S\mathbb R^{2t\times 2t})^{2}$, $t\in \mathbb N$, $B\in \mathbb R^{t\times t}$, $\mu>0$. Moreover, every such atom satisfies: \begin{enumerate} \item\label{moments-general-pt1} $\beta^{(\widetilde{X},\widetilde{Y})}_{X^{2i+1}}=0$ for every $i\in \mathbb N$ such that $2i+1\leq 2n$. \item\label{moments-general-pt2} $\beta^{(\widetilde{X},\widetilde{Y})}_{X^jY}=0$ for every $j\in \mathbb N\cup\{0\}$ such that $j+1\leq 2n$.
\item\label{moments-general-pt4} $\beta^{(\widetilde{X},\widetilde{Y})}_{X^kY^2}=0$ for every odd $k\in \mathbb N$. \end{enumerate} \end{lemma} \begin{proof} Since $\mathcal{M}_{n}$ satisfies $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$, by \cite[Proposition 5.1]{BZ18} there exists a measure in which every nc atom $(\widetilde{X},\widetilde{Y})\in (\mathbb S\mathbb R^{2t\times 2t})^{2}$, $t\in \mathbb N$, is of the form \begin{equation*} \widetilde{X}=\left(\begin{matrix} \gamma I_t & B \\ B^t & -\gamma I_t \end{matrix}\right), \quad \widetilde{Y}=\left(\begin{matrix} \mu I_t & \mathbf 0_t\\ \mathbf 0_t & -\mu I_t \end{matrix}\right), \end{equation*} where $B\in \mathbb R^{t\times t}$, $\gamma\geq 0$, $\mu>0$ (note that \cite[Proposition 5.1]{BZ18} is stated for the case $n=2$, but the proof easily generalizes to $n\in\mathbb N$). Moreover, the relation $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$ implies that $\gamma=0$ and hence the atoms are of the form \eqref{form-of-nc-atoms-general}. Let $B_{1} = BB^{t}$ and $B_{2}=B^{t}B$. The following calculations are elementary:
\begin{gather*}
\widetilde{X}^{2i}=\left(\begin{matrix} B_{1}^i & \mathbf 0 \\ \mathbf 0 & B_{2}^i \end{matrix}\right),\quad\qquad \widetilde{X}^{2i}\widetilde{Y}=\left(\begin{matrix} \mu B_{1}^i & \mathbf 0 \\ \mathbf 0 & -\mu B_{2}^i \end{matrix}\right),\quad\qquad \widetilde{X}^{2i}\widetilde{Y}^2=\left(\begin{matrix} \mu^2 B_{1}^i & \mathbf 0 \\ \mathbf 0 & \mu^2 B_{2}^i \end{matrix}\right),\\
\widetilde{X}^{2i+1}=\left(\begin{matrix} \mathbf 0 & B_{1}^{i} B \\ B_{2}^{i} B^t & \mathbf 0 \end{matrix}\right),\quad \widetilde{X}^{2i+1}\widetilde{Y}=\left(\begin{matrix} \mathbf 0 & -\mu B_{1}^{i} B \\ \mu B_{2}^{i} B^t & \mathbf 0 \end{matrix}\right),\quad \widetilde{X}^{2i+1}\widetilde{Y}^2=\left(\begin{matrix} \mathbf 0 & \mu^2 B_{1}^{i} B \\ \mu^2 B_{2}^{i} B^t & \mathbf 0 \end{matrix}\right).
\end{gather*} The properties \eqref{moments-general-pt1}-\eqref{moments-general-pt4} are now easy to check, using $$\mathrm{tr}((BB^t)^i)=\mathrm{tr}(B(B^tB)^{i-1}B^t)=\mathrm{tr}((B^tB)^{i-1}B^tB)=\mathrm{tr}((B^tB)^i),$$ where the second equality follows from $\mathrm{tr}(CD)=\mathrm{tr}(DC)$, with $C=B$ and $D=(B^tB)^{i-1}B^t$. \end{proof} \begin{lemma}\label{zero-moment-general2} Suppose that $\mathcal M_n$ is in the canonical form. If $\beta$ admits a nc measure, then: \begin{enumerate} \item\label{moments-cgen-pt1} $\beta_{X^{2i+1}}=\beta_{X}$ for every $i\in \mathbb N$ such that $2i+1\leq 2n$. \item\label{moments-cgen-pt2} $\beta_{X^jY}=0$ for every $j\in \mathbb N$ such that $j+1\leq 2n$. \item\label{moments-cgen-pt4} $\beta_{X^kY^2}=0$ for every odd $k\in \mathbb N$. \item\label{moments-cgen-pt3} When the second relation is: \begin{enumerate} \item\label{moments-c1-pt3} $\mathbb{Y}^2=\mathds 1-\mathbb{X}^2$, then: $$\beta_{X^kY^2}=\beta_{X^k}-\beta_{X^{k+2}}\quad \text{for every}\quad k\in \mathbb N\quad \text{such that}\quad k+2\leq 2n.$$ \item\label{moments-c2/3-pt1} $\mathbb{Y}^2=\mathds 1$ or $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$, then we have that $\beta_{X}=0.$ \item\label{moments-c2-pt3} $\mathbb{Y}^2=\mathds 1$, then: $$\beta_{X^kY^2}=\beta_{X^k}\quad \text{for every}\quad k\in \mathbb N\quad \text{such that}\quad k+2\leq 2n.$$ \item\label{moments--c3-pt3} $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$, then: $$\beta_{X^kY^2}=\beta_{X^k}+\beta_{X^{k+2}}\quad \text{for every}\quad k\in \mathbb N\quad \text{such that}\quad k+2\leq 2n.$$ \item\label{moments--c4-pt3} $\mathbb{Y}^2=\mathbb{X}^2$, then: $$\beta_{X^kY^2}=\beta_{X^{k+2}}\quad \text{for every}\quad k\in \mathbb N\quad \text{such that}\quad k+2\leq 2n.$$ \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} By Theorem \ref{support lemma} \eqref{point-1-support} possible cm atoms in the measure for $\beta$ are: \begin{enumerate} \item If $\mathbb{Y}^2=\mathds 1-\mathbb{X}^2$: $(1, 0)$, $(-1,0)$, 
$(0,1)$, $(0,-1)$. \item If $\mathbb{Y}^2=\mathds 1$: $(0,1)$, $(0,-1)$. \item If $\mathbb{Y}^2=\mathds 1+\mathbb{X}^2$: $(0,1)$, $(0,-1)$. \item If $\mathbb{Y}^2=\mathbb{X}^2$: $(0, 0)$. \end{enumerate} It is easy to check that the moment matrices $\mathcal M_n^{(x,y)}$, generated by the possible cm atoms $(x,y)\in \mathbb R^2$, satisfy the corresponding relations stated in the lemma. It remains to prove that the nc atoms also satisfy them. By Lemma \ref{zero-moment-general} there exists a measure such that in all cases the nc atoms $(\widetilde{X},\widetilde{Y})$ are of the form \eqref{form-of-nc-atoms-general} and satisfy \eqref{moments-general-pt1}, \eqref{moments-general-pt2} and \eqref{moments-general-pt4}. The statement \eqref{moments-c1-pt3} for odd $k\in \mathbb N$ follows by using \eqref{moments-cgen-pt1} and \eqref{moments-cgen-pt4}, while for even $k\in \mathbb N$ it follows from the calculation $$ \beta^{(\widetilde{X},\widetilde{Y})}_{X^{2i}Y^2}= \mathrm{tr}(\widetilde{X}^{2i}\widetilde{Y}^{2}) = \mathrm{tr}(\widetilde{X}^{2i}(I_{2t} - \widetilde{X}^{2})) = \mathrm{tr}(\widetilde{X}^{2i}-\widetilde{X}^{2i+2}) = \mathrm{tr}(\widetilde{X}^{2i}) - \mathrm{tr}(\widetilde{X}^{2i+2}) = \beta^{(\widetilde{X},\widetilde{Y})}_{X^{2i}}-\beta^{(\widetilde{X},\widetilde{Y})}_{X^{2i+2}}, $$ where we used that $\widetilde{Y}^2=I_{2t}-\widetilde{X}^2$ for the second equality. The statement \eqref{moments-c2/3-pt1} is clear for the nc atoms. The statement \eqref{moments-c2-pt3} follows by $\widetilde{X}^k\widetilde{Y}^2=\widetilde{X}^{k}$, since $\widetilde{Y}^2=I_{2t}$.
The statement \eqref{moments--c3-pt3} for odd $k\in \mathbb N$ follows by using \eqref{moments-cgen-pt1}, \eqref{moments-cgen-pt4} and \eqref{moments-c2/3-pt1}, while for even $k\in \mathbb N$ it follows from the calculation $$ \beta^{(\widetilde{X},\widetilde{Y})}_{X^{2i}Y^2}= \mathrm{tr}(\widetilde{X}^{2i}\widetilde{Y}^{2}) = \mathrm{tr}(\widetilde{X}^{2i}(I_{2t}+ \widetilde{X}^{2})) = \mathrm{tr}(\widetilde{X}^{2i}+\widetilde{X}^{2i+2}) = \mathrm{tr}(\widetilde{X}^{2i})+ \mathrm{tr}(\widetilde{X}^{2i+2}) = \beta^{(\widetilde{X},\widetilde{Y})}_{X^{2i}}+\beta^{(\widetilde{X},\widetilde{Y})}_{X^{2i+2}}, $$ where we used that $\widetilde{Y}^2=I_{2t}+\widetilde{X}^2$ for the second equality. The statement \eqref{moments--c4-pt3} follows by $\widetilde{X}^k\widetilde{Y}^2=\widetilde{X}^{k+2}$, since $\widetilde{Y}^2=\widetilde{X}^2$. This proves the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{M_n-XY+YX=0-all-in-one}] We can assume WLOG that $\mathcal{M}_n$ is in the canonical form, since the moment matrix admits a measure if and only if its canonical form admits a measure. We rearrange the columns of $\mathcal{M}_n$ to the order $$\{\mathds 1,\mathbb{X},\mathbb{X}^2,\ldots,\mathbb{X}^n,\mathbb{Y},\mathbb{X}\mathbb{Y},\mathbb{X}^2\mathbb{Y},\ldots,\mathbb{X}^{n-1}\mathbb{Y}\}.$$ The rearranged moment matrix has the form \begin{equation}\label{M-c1} \widetilde{\mathcal M_n}(\beta_1,\beta_X,\beta_Y):=\left(\begin{matrix} \mathcal M_n(\beta_1,\beta_X,X)& B(\beta_Y)\\ B(\beta_Y)^t & \mathcal M_n(\beta_1, Y) \end{matrix}\right). \end{equation} There are four cases to consider, each corresponding to one of the relations in \eqref{second-column-relation}. We present in detail the proof in the case of the relations $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$ and $\mathbb{Y}^2=\mathds 1-\mathbb{X}^2$. The other three cases are argued similarly, and the details can be found in Appendix \ref{proof-rank5-extension}.
\\ Given the relations $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$ and $\mathbb{Y}^2=\mathbf 1-\mathbb{X}^2$, by Lemma \ref{zero-moment-general2}, the matrices $\mathcal M_n(\beta_1,\beta_X,X)$, $\mathcal M_n(\beta_1,Y)$ and $B(\beta_Y)$ are of the forms \begin{equation*} \begin{blockarray}{cccccccccccc} & \mathds{1}&\mathbb{X}&\mathbb{X}^2&\mathbb{X}^3&\cdots&\mathbb{X}^{2k}&\mathbb{X}^{2k+1}&\cdots&\mathbb{X}^n\\ \begin{block}{c(ccccccccccc)} \mathds{1}& \beta_1 & \beta_X & \beta_{X^2}& \beta_X & \cdots & \beta_{X^{2k}} & \beta_X & \cdots & c_n \beta_{X^n}+(1-c_{n})\beta_X \\\ \mathbb{X}& \beta_X & \beta_{X^2} & \beta_X & \beta_{X^4} & \cdots & \beta_X & \beta_{X^{2k+2}}& \cdots & c_{n+1}\beta_{X^{n+1}}+(1-c_{n+1})\beta_X \\\ \mathbb{X}^2& \beta_{X^2} & \beta_X & \beta_{X^4} & \beta_X & \cdots & \beta_{X^{2k+2}} & \beta_X & \cdots& c_n \beta_{X^{n+2}} +(1-c_{n})\beta_X \\\ \mathbb{X}^3& \beta_{X} & \beta_{X^4} & \beta_X & \beta_{X^6} & \cdots & \beta_X & \beta_{X^{2k+4}} & \cdots & c_{n+1}\beta_{X^{n+3}} +(1-c_{n+1})\beta_X \\\ \vdots& \vdots &&&&&&&&\vdots \\ \mathbb{X}^n& c_n\beta_{X^n} +(1-c_{n}) \beta_X &\cdots &\cdots&\cdots&\cdots&\cdots&\cdots&\cdots& \beta_{X^{2n}}\\ \end{block} \end{blockarray}, \end{equation*} \begin{equation*} \begin{blockarray}{cccccccccc} & \mathbb{Y}& \mathbb{X}\mathbb{Y}&\cdots& \mathbb{X}^{2k}\mathbb{Y}&\mathbb{X}^{2k+1}\mathbb{Y}&\cdots&\mathbb{X}^{n-1}\mathbb{Y}\\ \begin{block}{c(ccccccccc)} \mathbb{Y}& \beta_1-\beta_{X^2} & 0 & \cdots & \beta_{X^{2k}}-\beta_{X^{2k+2}} & 0 & \cdots & \cdots\\%c_{n-1} \beta_{X^n}\\ \mathbb{X}\mathbb{Y}& 0 & \beta_{X^2}-\beta_{X^4} & \cdots & 0 & \beta_{X^{2k+2}}-\beta_{X^{2k+4}}& \cdots & \vdots\\%c_{n}\beta_{X^{n+1}} \\ \vdots& \vdots &&&&&&\vdots \\ \mathbb{X}^{2k}\mathbb{Y}& \beta_{X^{2k}} - \beta_{X^{2k+2}}&0 &\cdots & \beta_{X^{4k}}-\beta_{X^{4k+2}} & 0 & \cdots& \vdots\\%c_{n-1} \beta_{X^{n+2}} \\ \mathbb{X}^{2k+1}\mathbb{Y} & 0 & \beta_{X^{2k+2}}-\beta_{X^{2k+4}} &\cdots & 0 & 
\beta_{X^{4k+2}}-\beta_{X^{4k+4}} & \cdots & \vdots\\
\vdots& \vdots &&&&&&\vdots \\
\mathbb{X}^{n-1}\mathbb{Y}& \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \\
\end{block} \end{blockarray}
\end{equation*}
and
\begin{equation}\label{B(beta-y)}
B(\beta_Y):= \begin{blockarray}{cccccc} & \mathbb{Y}& \mathbb{X}\mathbb{Y}&\mathbb{X}^2\mathbb{Y}&\cdots&\mathbb{X}^{n-1}\mathbb{Y}\\ \begin{block}{c(ccccc)}
\mathds 1& \beta_Y & 0 & 0 & \cdots & 0\\
\mathbb{X}& 0 & 0 & 0 & \cdots & 0\\
\vdots& \vdots &&&&\vdots\\
\mathbb{X}^n& 0 & 0 & 0 & \cdots & 0\\
\end{block} \end{blockarray},
\end{equation}
respectively, where $c_m=\frac{(-1)^{m}+1}{2}$. By Lemma \ref{zero-moment-general} the nc atoms must be of the form \eqref{form-of-nc-atoms-general}. Hence the only atoms that can account for the odd moments $\beta_{X}$ in $\mathcal M_n(\beta_1,\beta_X,X)$ and for the moment $\beta_Y$ in $B(\beta_Y)$ are the atoms of size 1, which are $(\pm 1, 0)$ and $(0,\pm 1)$.\\ \noindent \textbf{Claim.} We have that $$|\beta_X|\widetilde{\mathcal M}_{n}^{(\sign(\beta_X)1,0)}+|\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)} \preceq \gamma_1\widetilde{\mathcal M}_{n}^{(1,0)}+ \gamma_2\widetilde{\mathcal M}_{n}^{(-1,0)}+ \delta_1\widetilde{\mathcal M}_{n}^{(0,1)}+ \delta_2\widetilde{\mathcal M}_{n}^{(0,-1)}$$ for every $\gamma_1,\gamma_2,\delta_1,\delta_2\geq 0$ such that $\gamma_1-\gamma_2=\beta_X$ and $\delta_1-\delta_2=\beta_Y$.\\ We consider the cases determined by the signs of $\beta_X$ and $\beta_Y$. If $\beta_X\geq 0$, then $\sign(\beta_X)1=1$ and, since $\gamma_1-\gamma_2=\beta_X$ with $\gamma_2\geq 0$, we get $\gamma_1\geq \beta_X.$ Otherwise $\beta_X< 0$, $\sign(\beta_X)1=-1$ and hence $\gamma_2\geq |\beta_X|.$ Thus, \begin{equation} \label{ineq1-c1} |\beta_X|\widetilde{\mathcal M}_{n}^{(\sign(\beta_X)1,0)} \preceq \gamma_1 \widetilde{\mathcal M}_{n}^{(1,0)}+ \gamma_2 \widetilde{\mathcal M}_{n}^{(-1,0)}.
\end{equation} Similarly, \begin{equation} \label{ineq2-c1} |\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)} \preceq \delta_1 \widetilde{\mathcal M}_{n}^{(0,1)}+ \delta_2 \widetilde{\mathcal M}_{n}^{(0,-1)}. \end{equation} Now, \eqref{ineq1-c1} and \eqref{ineq2-c1} imply the claim.\\ By the claim, $|\beta_X|\widetilde{\mathcal M}_{n}^{(\sign(\beta_X)1,0)}+|\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)}$ is the smallest (with respect to the L{\"o}wner partial ordering) possible contribution of the atoms of size 1 to $\widetilde{\mathcal M_n}(\beta_1,\beta_X,\beta_Y)$. Hence $\widetilde{\mathcal M_n}(\beta_1,\beta_X,\beta_Y)$ admits a measure if and only if $$\widetilde{\mathcal M_n}(\beta_1-|\beta_X|-|\beta_Y|,0,0)= \widetilde{\mathcal M_n}(\beta_1,\beta_X,\beta_Y)- \Big(|\beta_X|\widetilde{\mathcal M}_{n}^{(\sign(\beta_X)1,0)}+ |\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)}\Big)$$ admits a measure. Now observe that the existence of a measure for $\mathcal M_n(\beta_1-|\beta_X|-|\beta_Y|,0,X)$, i.e.,
\begin{equation*}
\begin{blockarray}{ccccccccc} & \mathds{1}&\mathbb{X}&\mathbb{X}^2&\mathbb{X}^3&\cdots&\mathbb{X}^n\\ \begin{block}{c(cccccccc)}
\mathds{1}& \beta_1-|\beta_X|-|\beta_Y| & 0 & \beta_{X^2}-|\beta_X|& 0 & \cdots & c_n (\beta_{X^n}-|\beta_X|) \\
\mathbb{X}& 0 & \beta_{X^2}-|\beta_X| & 0 & \beta_{X^4}-|\beta_X| & \cdots & c_{n+1}(\beta_{X^{n+1}}-|\beta_X|) \\
\mathbb{X}^2& \beta_{X^2}-|\beta_X| & 0 & \beta_{X^4}-|\beta_X| & 0 & \cdots& c_n (\beta_{X^{n+2}}-|\beta_X|) \\
\mathbb{X}^3& 0 & \beta_{X^4}-|\beta_X| & 0 & \beta_{X^6}-|\beta_X| & \cdots & c_{n+1}(\beta_{X^{n+3}}-|\beta_X|) \\
\vdots& \vdots &&&&&\vdots \\
\mathbb{X}^n& c_n(\beta_{X^n}-|\beta_X|) &\cdots &\cdots&\cdots&\cdots& \beta_{X^{2n}}-|\beta_X|\\
\end{block} \end{blockarray},
\end{equation*}
with support a subset of $[-1,1]$, is a truncated Hausdorff moment problem.
Hence by \cite[Theorem III.2.3]{KN77}, the matrix $\mathcal M_n(\beta_1-|\beta_X|-|\beta_Y|,0,X)$ admits a measure if and only if it is psd and
\begin{equation*}
\begin{blockarray}{cccccccccc} & \mathbb{Y}& \mathbb{X}\mathbb{Y}&\cdots& \mathbb{X}^{2k}\mathbb{Y}&\mathbb{X}^{2k+1}\mathbb{Y}&\cdots&\mathbb{X}^{n-1}\mathbb{Y}\\ \begin{block}{c(ccccccccc)}
\mathbb{Y}& \beta_1-\beta_{X^2}-|\beta_Y| & 0 & \cdots & \beta_{X^{2k}}-\beta_{X^{2k+2}} & 0 & \cdots & \cdots\\
\mathbb{X}\mathbb{Y}& 0 & \beta_{X^2}-\beta_{X^4} & \cdots & 0 & \beta_{X^{2k+2}}-\beta_{X^{2k+4}}& \cdots & \vdots\\
\vdots& \vdots &&&&&&\vdots \\
\mathbb{X}^{2k}\mathbb{Y}& \beta_{X^{2k}} - \beta_{X^{2k+2}}&0 &\cdots & \beta_{X^{4k}}-\beta_{X^{4k+2}} & 0 & \cdots& \vdots\\
\mathbb{X}^{2k+1}\mathbb{Y} & 0 & \beta_{X^{2k+2}}-\beta_{X^{2k+4}} &\cdots & 0 & \beta_{X^{4k+2}}-\beta_{X^{4k+4}} & \cdots & \vdots\\
\vdots& \vdots &&&&&&\vdots \\
\mathbb{X}^{n-1}\mathbb{Y}& \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \\
\end{block} \end{blockarray},
\end{equation*}
which is exactly $\mathcal M_n(\beta_1-|\beta_Y|,Y)$, is psd. Now note that if $x_i$, $i=1,\ldots, k$, $k\in \mathbb N$, are atoms in the measure for $\mathcal M_n(\beta_1-|\beta_X|-|\beta_Y|,0,X)$ with the corresponding densities $\mu_i$, $i=1,\ldots, k$, then $$ \Big(\left(\begin{matrix} 0&x_i\\ x_i&0 \end{matrix}\right), \left(\begin{matrix} \sqrt{1-x_i^2}&0\\ 0&-\sqrt{1-x_i^2} \end{matrix}\right) \Big), \quad i=1,\ldots,k, $$ with densities $\mu_i$, $i=1,\ldots,k$, are atoms which represent $\widetilde{\mathcal M_n}(\beta_1-|\beta_X|-|\beta_Y|,0,0)$.
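One can check directly that each such pair satisfies both canonical relations: abbreviating $x:=x_i$ and $s:=\sqrt{1-x^2}$, the atom is $\widetilde X=\left(\begin{smallmatrix}0&x\\x&0\end{smallmatrix}\right)$, $\widetilde Y=\left(\begin{smallmatrix}s&0\\0&-s\end{smallmatrix}\right)$, and
\begin{equation*}
\widetilde X\widetilde Y+\widetilde Y\widetilde X=
\begin{pmatrix} 0 & -xs\\ xs & 0\end{pmatrix}+
\begin{pmatrix} 0 & xs\\ -xs & 0\end{pmatrix}=\mathbf 0,
\qquad
\widetilde Y^2=(1-x^2)I_2=I_2-\widetilde X^2.
\end{equation*}
Moreover, $\widetilde X^2=x^2 I_2$, so the normalized trace of $\widetilde X^{2j}$ equals $x^{2j}$, while all odd powers of $\widetilde X$ and all products $\widetilde X^j\widetilde Y$ have zero trace, as required.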
\end{proof} \section{Reducing the degenerate truncated hyperbolic moment problem} \label{S4} Prompted by the outcomes of the previous section (proof of Theorem \ref{M_n-XY+YX=0-all-in-one}), we use the reduction technique to present a simplified proof of one of the main results in \cite{CF05}, the degenerate truncated hyperbolic moment problem, i.e., when $\mathcal{M}_{n}$ is commutative and satisfies $\mathbb{X}\mathbb{Y} = \mathbf{0}$. \begin{remark} Curto and Fialkow have previously used the reduction technique for the complex moment problem when $Z = \bar{Z}$, and shown how the truncated complex moment problem with this column relation is equivalent to the truncated Hamburger moment problem (see the discussion after \cite[Conjecture 3.16]{CF96}). \end{remark} \begin{theorem}\cite[Theorem 3.1]{CF05} \label{XY=0-cm} Let $\mathcal M_n$ be a moment matrix satisfying the relations $\mathbb{X}\mathbb{Y}=\mathbb{Y}\mathbb{X}=\mathbf 0$. If $\mathcal M_n$ is positive semidefinite, recursively generated and satisfies $\Rank(\mathcal M_n) \leq \Card (\mathcal V)$, where $$\mathcal V:=\bigcap_{ \substack{g\in \mathbb R[X,Y]_{\leq 2},\\ g(\mathbb{X},\mathbb{Y})=\mathbf 0\;\text{in}\;\mathcal M_n}} \left\{ (x,y)\in \mathbb R^2\colon g(x,y)=0 \right\},$$ then it admits a representing measure. Moreover, if $\Rank(\mathcal M_n)\leq 2n$, then $\mathcal M_n$ admits a $(\Rank(\mathcal M_n))$-atomic measure, and if $\Rank(\mathcal M_n)=2n+1$, then $\mathcal M_n$ admits a $(2n+1)$- or $(2n+2)$-atomic measure. \end{theorem} \begin{proof} Note that a basis for $\mathcal C_{\mathcal M_n}$ can be chosen from the set $\{\mathds 1, \mathbb{X},\ldots,\mathbb{X}^n,\mathbb{Y},\ldots,\mathbb{Y}^n\}$.
Reordering the columns to $$\mathds 1,\mathbb{X},\mathbb{X}^2,\ldots,\mathbb{X}^n,\mathbb{Y},\ldots,\mathbb{Y}^n,\mathbb{X}\mathbb{Y},\ldots, \mathbb{X}\mathbb{Y}^{n-1},\mathbb{X}^2\mathbb{Y},\ldots,\mathbb{X}^2\mathbb{Y}^{n-2},\ldots,\mathbb{X}^{n-1}\mathbb{Y},$$ we have that $\mathcal M_n=M\oplus \mathbf{0}$, where $$M= \begin{mpmatrix} 1 & a^t & b^t \\ a & A & \mathbf{0} \\ b & \mathbf{0} & B \\ \end{mpmatrix}, \quad A=\begin{mpmatrix} \beta_{X^2} & \ldots & \beta_{X^{n+1}}\\ \vdots & \ddots & \vdots\\ \beta_{X^{n+1}}& \cdots & \beta_{X^{2n}} \end{mpmatrix},\quad B=\begin{mpmatrix} \beta_{Y^2} & \ldots & \beta_{Y^{n+1}}\\ \vdots & \ddots & \vdots\\ \beta_{Y^{n+1}}& \cdots & \beta_{Y^{2n}} \end{mpmatrix},\quad a=\begin{mpmatrix} \beta_X \\ \vdots \\ \beta_{X^n}\end{mpmatrix},\quad b=\begin{mpmatrix} \beta_Y \\ \vdots \\ \beta_{Y^n}\end{mpmatrix}. $$ We separate two cases according to the rank of $\mathcal M_n$.\\ \noindent\textbf{Case 1:} $\Rank(\mathcal M_n)=2n+1.$ In this case $M$ is psd of full rank, i.e., $M\succ 0$, and hence the Schur complement $\eta:=1-a^t A^{-1} a -b^t B^{-1}b$ of the block $A\oplus B$ is positive. For $\alpha:=a^tA^{-1}a+\frac{\eta}{2}$ we have that $1-\alpha=b^tB^{-1}b+\frac{\eta}{2}$ and $\begin{mpmatrix} \alpha & a^t \\a & A \end{mpmatrix}$, $\begin{mpmatrix} 1-\alpha & b^t \\b & B\end{mpmatrix}$ are both positive definite. By \cite[Theorem 3.9]{CF91} they admit measures consisting of $n+1$ atoms $x_0,\ldots,x_n$ and $y_0,\ldots,y_n$, respectively. So $\mathcal M_n$ admits a measure consisting of at most $2n+2$ atoms $(x_0,0),\ldots,(x_n,0), (0,y_0),\ldots,(0,y_n)$, with only one potential duplication, namely $(x_i,0)=(0,y_j)=(0,0)$ for some $i,j$. \\ \noindent\textbf{Case 2:} $\Rank(\mathcal M_n)\leq 2n.$ Let $k_1:=\Rank A=\Rank A_{k_1}$ and $k_2:=\Rank B=\Rank B_{k_2}$, where $A_{k_1}, B_{k_2}$ are the leading principal submatrices of size $k_1$, $k_2$ of $A, B$, and the second equalities follow from $\mathcal M_n$ being RG.
We denote by $a_{k_1}$, $b_{k_2}$ the restrictions of $a, b$ to the first $k_1$, $k_2$ rows, respectively. We write $M_{k_1,k_2}:=\begin{mpmatrix} 1 & a_{k_1}^t & b_{k_2}^t \\ a_{k_1} & A_{k_1} & 0\\ b_{k_2} & 0 & B_{k_2} \end{mpmatrix}$. We separate two cases according to the difference $\left(\Rank (\mathcal M_n)-\Rank\begin{mpmatrix} A & \mathbf 0 \\ \mathbf 0 & B\end{mpmatrix}\right)$. \noindent\textbf{Case 2.1:} $\left(\Rank (\mathcal M_n)-\Rank\begin{mpmatrix} A & \mathbf 0 \\ \mathbf 0 & B\end{mpmatrix}\right)=1$. Since $\Rank(\mathcal M_n)=1+k_1+k_2\leq 2n$, we have $k_1<n$ or $k_2<n$. We may assume WLOG that $k_1<n$. In $A$ we have $\mathbb{X}^{k_1+1}=\sum_{i=1}^{k_1}\gamma_i \mathbb{X}^i$ for some $\gamma_i\in \mathbb R$. By \cite[Proposition 3.9]{CF96}, $\mathbb{X}^{k_1+1}=\sum_{i=1}^{k_1}\gamma_i \mathbb{X}^i$ holds also in $\mathcal M_n$. Hence $\gamma_1\neq 0$, since otherwise $A_{k_1}=[\mathcal M_n]_{\{\mathbb{X},\ldots,\mathbb{X}^{k_1}\}}$ is singular, which contradicts $\Rank A_{k_1}=k_1$. In $\begin{mpmatrix} \ast & a^t \\ a & A \end{mpmatrix}$ we have $$ [\mathbb{X}^{k_1}]_{\{\{\mathds{1},\ldots,\mathbb{X}^{n}\},\{\mathbb{X},\ldots,\mathbb{X}^n\}\}}=\sum_{i=0}^{k_1-1}\gamma_{i+1} [\mathbb{X}^i]_{\{\{\mathds{1},\ldots,\mathbb{X}^{n}\},\{\mathbb{X},\ldots,\mathbb{X}^n\}\}}. $$ Since $\gamma_1\neq 0$, there is a unique value of $\ast$ such that $\mathbb{X}^{k_1}=\sum_{i=0}^{k_1-1}\gamma_{i+1} \mathbb{X}^i$ and $\Rank\begin{mpmatrix} \ast & a_{k_{1}}^t \\ a_{k_{1}} & A_{k_{1}} \end{mpmatrix}=k_1$; this value is $\ast=a_{k_1}^t A_{k_1}^{-1}a_{k_1}$, and it makes the matrix $$ \begin{pmatrix} a_{k_1}^t A_{k_1}^{-1}a_{k_1} & a^t \\ a & A \end{pmatrix} $$ psd and RG.
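The argument above, as well as Case 2.2 below, uses the following standard fact about (generalized) Schur complements, which we recall for the reader's convenience: for $A\succeq 0$ and $a$ in the column space of $A$,
\begin{equation*}
\begin{pmatrix} \alpha & a^t\\ a & A\end{pmatrix}\succeq 0
\;\Longleftrightarrow\; \alpha\geq a^tA^{+}a,
\qquad\text{and}\qquad
\Rank\begin{pmatrix} \alpha & a^t\\ a & A\end{pmatrix}=\Rank A+\Rank\left(\alpha-a^tA^{+}a\right),
\end{equation*}
where $A^{+}$ denotes the Moore--Penrose inverse of $A$. In particular, $\alpha=a^tA^{+}a$ is the unique choice for which the bordered matrix is psd of the same rank as $A$.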
Since the Schur complement of the block $A_{k_1}\oplus B_{k_2}$ in $M_{k_1,k_2}$ is positive $\left(1-a_{k_1}^t A_{k_1}^{-1}a_{k_1}-b_{k_2}^t B_{k_2}^{-1}b_{k_2}>0\right)$, we have that $1-a_{k_1}^t A_{k_1}^{-1}a_{k_1}>b_{k_2}^t B_{k_2}^{-1}b_{k_2}$ and hence (again by Schur complements) the matrix $$ \begin{pmatrix} 1-a_{k_1}^t A_{k_1}^{-1}a_{k_1} & b_{k_2}^t \\ b_{k_2} & B_{k_2} \end{pmatrix} $$ is positive definite. By \cite[Theorem 3.9]{CF91}, both $$ \begin{pmatrix} a_{k_1}^t A_{k_1}^{-1}a_{k_1} & a^t \\ a & A \end{pmatrix} \quad \text{and } \ \begin{pmatrix} 1-a_{k_1}^t A_{k_1}^{-1}a_{k_1} & b^t \\ b & B \end{pmatrix} $$ admit $k_1$- and $(k_2+1)$-atomic measures, respectively. Hence $\mathcal M_n$ admits a $\Rank \mathcal M_n$-atomic measure.\\ \noindent\textbf{Case 2.2:} $\left(\Rank (\mathcal M_n)-\Rank\begin{mpmatrix} A & \mathbf 0 \\ \mathbf 0 & B\end{mpmatrix}\right)=0$. The Schur complement $1-a_{k_1}^t A_{k_1}^{-1}a_{k_1}-b_{k_2}^t B_{k_2}^{-1}b_{k_2}$ of the block $A_{k_1}\oplus B_{k_2}$ in $M_{k_1,k_2}$ is equal to zero, thus $$ M_{k_1,k_2}= \begin{pmatrix} a_{k_1}^t A_{k_1}^{-1}a_{k_1} & a_{k_1}^t & 0\\ a_{k_1} & A_{k_1} & 0 \\ 0&0&0 \end{pmatrix}+ \begin{pmatrix} b_{k_2}^t B_{k_2}^{-1}b_{k_2} & 0 & b_{k_2}^t \\ 0 & 0 & 0 \\ b_{k_2} & 0 & B_{k_2} \end{pmatrix}. $$ If $k_1<n$, then as in Case 2.1 we see that $\begin{mpmatrix} a_{k_1}^t A_{k_1}^{-1}a_{k_1} & a^t \\ a & A \end{mpmatrix}$ is psd, RG and of rank $k_1$ (similarly if $k_{2}<n$). Let us now assume that $k_1=n$. Then the matrix $$ U:=\begin{pmatrix} a^t A^{-1}a & a^t \\ a & A \end{pmatrix} $$ is psd and of rank $n$. Let $U_j$ be the $j$-th column of $U$. Suppose there is a nontrivial linear combination $\displaystyle \mathbf{0}=U_1+\textstyle\sum\nolimits_{i=2}^{i_{0}} \delta_i U_i$ where $\delta_i\in \mathbb R$, $i_0\leq n$ and $\delta_{i_0}\neq 0$.
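The bordering step used in both cases, namely that the value $\ast=a_{k_1}^tA_{k_1}^{-1}a_{k_1}$ keeps the bordered matrix psd without raising its rank, can be illustrated on generic data. The sketch below is an assumption-laden toy (a random psd block of deficient rank with a compatible border vector; a pseudo-inverse replaces $A_{k_1}^{-1}$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical psd block A of rank k < size, with a in the range of A,
# mirroring the Case 2.1 situation (names A, a, k chosen for this sketch).
k, m = 2, 4
G = rng.standard_normal((m, k))
A = G @ G.T                       # psd, rank k
a = A @ rng.standard_normal(m)    # a lies in the column space of A

star = a @ np.linalg.pinv(A) @ a  # the unique bordering value a^t A^+ a
M = np.block([[np.array([[star]]), a[None, :]], [a[:, None], A]])

# bordering by the Schur value keeps the matrix psd and does not raise the rank
assert np.linalg.eigvalsh(M).min() > -1e-9
assert np.linalg.matrix_rank(M, tol=1e-8) == np.linalg.matrix_rank(A, tol=1e-8)
```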
Observe also that the matrix $$ V:=\begin{pmatrix} b_{k_2}^t B_{k_2}^{-1}b_{k_2} & b_{k_2}^t \\ b_{k_2} & B_{k_2} \end{pmatrix} $$ is psd, of rank $k_2$, and there is a nontrivial linear combination $\textstyle\mathbf 0=V_1+\sum\nolimits_{j=2}^{k_{2}+1} \zeta_j V_j$ where $\zeta_j\in \mathbb R$ and $V_j$ is the $j$-th column of $V$. Therefore \begin{equation}\label{col-rel} \mathbf 0=\begin{pmatrix}a ^t A^{-1}a+b_{k_2}^t B_{k_2}^{-1}b_{k_2} \\ a\\ b_{k_2}\end{pmatrix}+ \sum_{\ab{2}\leq i\leq i_0} \delta_i \begin{pmatrix}U_i\\ 0\end{pmatrix} +\sum_{\ab{2}\leq j\leq k_2+1} \zeta_j \begin{pmatrix}v_{1j}\\ 0\\ V_j'\end{pmatrix}, \end{equation} where $V_j=\begin{pmatrix}v_{1j}& V_j'\end{pmatrix}^{t},$ $v_{1j}\in \mathbb R$, $V_j'\in \mathbb R^{k_2}$. By \cite[Proposition 3.9]{CF96}, \eqref{col-rel} implies that $\mathcal M_n$ must satisfy the column relation $$ \displaystyle\mathbf 0=\mathds 1+ \sum_{1\leq i\leq i_0-1} \delta_{i+1} \mathbb{X}^i +\sum_{1\leq j\leq k_2} \zeta_{j+1} \mathbb{Y}^j. $$ But then $\Card(\mathcal V)\leq i_0-1+k_2$, which implies $$ \Card (\mathcal V)\leq n-1+k_2 < n+k_2=\Rank (\mathcal M_n), $$ a contradiction with the assumption $\Rank(\mathcal{M}_{n})\leq \Card(\mathcal{V})$. Hence $U_{n+1}\in \Span\{U_1,\ldots,U_{n}\}$ and $U$ is RG. Similarly, $V$ is RG for every $k_2$. By \cite[Theorem 3.9]{CF91}, both $$ \begin{pmatrix} a_{k_1}^t A_{k_1}^{-1}a_{k_1} & a^t \\ a & A \end{pmatrix} \quad \text{and } \ \begin{pmatrix} b_{k_2}^t B_{k_2}^{-1}b_{k_2} & b^t \\ b & B \end{pmatrix} $$ admit $k_1$- and $k_2$-atomic measures, respectively, and $\mathcal M_n$ admits a $(\Rank (\mathcal M_n))$-atomic measure.
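The recursive-generation condition on $U$ and $V$, that the last column lies in the span of the preceding ones, is easy to test numerically via least squares. The helper below and its random test data are illustrative only (the tolerance and the matrix $V$ are assumptions of this sketch):

```python
import numpy as np

def last_col_in_span(M, tol=1e-8):
    """Check whether the last column of M lies in the span of the others
    (the recursive-generation condition used for U and V)."""
    coeffs, *_ = np.linalg.lstsq(M[:, :-1], M[:, -1], rcond=None)
    return np.linalg.norm(M[:, :-1] @ coeffs - M[:, -1]) < tol

rng = np.random.default_rng(2)

# Hypothetical rank-deficient psd matrix standing in for V: its columns
# are linearly dependent, so the span test succeeds.
G = rng.standard_normal((4, 2))
V = G @ G.T                       # psd, rank 2, size 4
assert last_col_in_span(V)

# A full-rank matrix fails the test.
assert not last_col_in_span(np.eye(4))
```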
\end{proof} \begin{remark} The matrix $\mathcal M_n$ of $\Rank \mathcal M_n=2n+1$ satisfying the assumptions of Theorem \ref{XY=0-cm} admits a $(2n+1)$-atomic measure if and only if one of $Z_1:=\begin{mpmatrix} a^tA^{-1}a & a^t \\a & A \end{mpmatrix}$ or $Z_2:=\begin{mpmatrix} b^tB^{-1}b & b^t \\b & B \end{mpmatrix}$ is RG, i.e., the last column is in the span of the others. Indeed, if $Z_1$ is RG then it admits an $n$-atomic measure, and $\begin{mpmatrix} 1-a^tA^{-1}a & b^t \\b & B \end{mpmatrix}$, being positive definite, admits an $(n+1)$-atomic measure; together these give a $(2n+1)$-atomic measure for $\mathcal M_n$. Similarly for the pair $Z_2$ and $\begin{mpmatrix} 1-b^tB^{-1}b & a^t \\a & A \end{mpmatrix}$. If $Z_1$ and $Z_2$ are not RG, and $\mathcal{M}_{n}$ admits a $(2n+1)$-atomic measure, there must exist an $\alpha\in (0,1)$ such that $\alpha>a^tA^{-1}a$, $1-\alpha>b^t B^{-1}b$, and both matrices $\begin{mpmatrix} \alpha & a^t \\a & A \end{mpmatrix}$ and $\begin{mpmatrix} 1-\alpha & b^t \\b & B\end{mpmatrix}$ admit $(n+1)$-atomic measures with the shared atom $(0,0)$. But then, removing $(0,0)$ as an atom of both, we are left with rank-$n$ matrices $Z_1$ and $Z_2$, both admitting measures. Hence they would be RG, a contradiction. \end{remark} \appendix \section{Direct calculations for some results from the manuscript} \subsection{Transformations for Lemma \ref{linear-trans}}\label{append-transforms} Firstly, note that all the square roots are well-defined, which follows from the fact that $\mathcal M_2$ is psd (for details see the proof of \cite[Proposition 4.1 (1)]{BZ18}).
We separate 5 cases according to $d\in \mathbb R$.\\ \noindent{\textbf{Case 2.1:} $d<-2$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (x+y,y-x) & (2-d)\mathbb{X}^2-(2+d)\mathbb{Y}^2=(4a-2d)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\mathds 1\\ \hline \left(\sqrt{2-d}x,\sqrt{-2-d}y\right) & \mathbb{X}^2+\mathbb{Y}^2=(4a-2d)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\sqrt{d^2-4}\mathds 1\\ \hline \left(\frac{1}{\sqrt{4a-2d}}x,\frac{1}{\sqrt{4a-2d}}y\right) & \mathbb{X}^2+\mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}= \frac{\sqrt{d^2-4}}{2a-d} \mathds 1=:\widehat a \mathds 1\\ \hline \left( x,x+y\right) & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\widehat a \mathds 1+2\mathbb{X}^2 & \mathbb{Y}^2=(1+\widehat a)\mathds 1\\ \hline \left( x,\frac{1}{\sqrt{1+\widehat a}}y\right) & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\frac{\widehat{a}}{\sqrt{1+\widehat a}}\mathds 1+\frac{2}{\sqrt{1+\widehat a}}\mathbb{X}^2 & \mathbb{Y}^2=\mathds 1\\ \hline \left( -x+\frac{1}{\sqrt{1+\widehat a}}y,y\right) & \mathbb{X}^2=\Big( \frac{1+\widehat a}{4}-\frac{2\widehat a}{ 1+\widehat a}\Big)\mathds 1=:\widetilde{a}\mathds 1 & \mathbb{Y}^2=\mathds 1\\ \hline \left(\frac{x}{\sqrt{\widetilde a}},y\right) & \mathbb{X}^2= \mathds 1 & \mathbb{Y}^2=\mathds 1\\ \hline \left(\frac{x+y}{2},\frac{y-x}{2}\right) & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0 & \mathbb{X}^2+\mathbb{Y}^2=\mathds 1\\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.2:} $d=-2$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (x+y,y-x) & \mathbb{X}^2=(a+1)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\mathds 1\\ \hline \left(\frac{1}{\sqrt{a+1}}x,y\right) & \mathbb{X}^2=\mathds 1 &
\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\frac{2}{\sqrt{a+1}}\mathds 1\\ \hline \left(y,x\right) & \mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\frac{2}{\sqrt{a+1}}\mathds 1\\ \hline \left( x-\frac{1}{\sqrt{a+1}}y,y\right) & \mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.3:} $-2<d<2$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (x+y,y-x) & (2-d)\mathbb{X}^2-(2+d)\mathbb{Y}^2=(4a-2d)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\mathds 1\\ \hline \left(\sqrt{2-d}x,\sqrt{2+d}y\right) & \mathbb{X}^2-\mathbb{Y}^2=(4a-2d)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\sqrt{4-d^2}\mathds 1\\ \hline \multicolumn{3}{|c|}{\text{We may assume that } 4a-2d\leq 0. \text{ Otherwise we do the transformation }(x,y)\mapsto (y,x)}\\ \hline \left(x,x-\frac{\sqrt{4-d^2}}{\sqrt{d-2a}}y\right) & \mathbb{X}^2-\Big(\underbrace{\frac{4(4-d^2)^2}{(2d-4a)^2}-1}_{C} \Big)\mathbb{Y}^2=\mathbf 0 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=-(2d-4a)\mathbb{X}^2-(2d-4a)\mathds 1\\ \hline \left( \sqrt{C}x,y\right) & \mathbb{Y}^2-\mathbb{X}^2=\mathbf 0& \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X} = -\underbrace{\sqrt{C}(2d-4a)}_{D}\mathds 1+\sqrt{C}(2d-4a)\mathbb{X}^2\\ \hline \left( x+y,y-x\right) & (2-D)\mathbb{X}^2-(2+D)\mathbb{Y}^2=-4D\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0 \\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.3.1:} $D=2$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (x,y) & \mathbb{X}^2=2\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \left(\frac{y}{\sqrt{2}},x\right) & \mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \end{array} $$\\ 
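The substitution steps in these tables can be spot-checked with concrete matrices. For instance, the final row of Case 2.2 (the shift $x\mapsto x-\frac{1}{\sqrt{a+1}}y$ kills the mixed relation while preserving $\mathbb{Y}^2=\mathds 1$) is verified below; the value $a=3$ and the $2\times 2$ representation are arbitrary choices for this sketch:

```python
import numpy as np

# Concrete 2x2 matrices realizing Y^2 = 1 and XY + YX = (2/sqrt(a+1))*1;
# a = 3 and the off-diagonal entry 0.7 are arbitrary.
a = 3.0
c = 1.0 / np.sqrt(a + 1.0)
Y = np.diag([1.0, -1.0])
X = np.array([[c, 0.7], [0.7, -c]])   # symmetric, anticommutator is 2c*I

I = np.eye(2)
assert np.allclose(Y @ Y, I)                   # Y^2 = 1
assert np.allclose(X @ Y + Y @ X, 2 * c * I)   # XY + YX = (2/sqrt(a+1)) 1

# substitution x -> x - (1/sqrt(a+1)) y from the table's last row
Xnew = X - c * Y
assert np.allclose(Xnew @ Y + Y @ Xnew, np.zeros((2, 2)))  # mixed relation gone
assert np.allclose(Y @ Y, I)                               # Y^2 = 1 preserved
```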
\noindent{\textbf{Case 2.3.2:} $D=-2$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (x,y) & \mathbb{Y}^2=2\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \left(x,\frac{y}{\sqrt{2}}\right) & \mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.3.3:} $|D|\neq 2$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (\sqrt{|2-D|}x,\sqrt{|2+D|}y) & \pm \mathbb{X}^2\pm \mathbb{Y}^2=-4D\sqrt{|4-d^2|}\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.3.3.1:} $\mathbb{X}^2+\mathbb{Y}^2=\widetilde{A}\mathds 1$, $\widetilde A>0$.}\\ $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline \left(\frac{1}{\sqrt{\widetilde A}}x,\frac{1}{\sqrt{\widetilde A}}y\right) & \mathbb{X}^2+\mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.3.3.2:} $\mathbb{Y}^2-\mathbb{X}^2=\widetilde{A}\mathds 1$.}\\ We may assume that $\widetilde A\geq 0$ for if not, we may transform $(x,y)\mapsto (y,x)$. 
$$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline \left(\frac{1}{\sqrt{\widetilde A}}x,\frac{1}{\sqrt{\widetilde A}}y\right) & \mathbb{Y}^2-\mathbb{X}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.4:} $2=d$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (x+y,y-x) & \mathbb{Y}^2=(1-a)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\mathds 1\\ \hline \left(x,\frac{1}{\sqrt{1-a}}y\right) & \mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\frac{2}{\sqrt{1-a}}\mathds 1\\ \hline \left( x-\frac{1}{\sqrt{1-a}}y,y\right) & \mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0\\ \hline \end{array} $$\\ \noindent{\textbf{Case 2.5:} $2<d$.} $$ \begin{array}{|c|c|c|} \hline \text{Transformation }(x,y)\mapsto & \text{The first relation of }\mathcal M_2 & \text{The second relation of }\mathcal M_2\\ \hline (x+y,y-x) & (2-d)\mathbb{X}^2-(2+d)\mathbb{Y}^2=(4a-2d)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\mathds 1\\ \hline \left(\sqrt{d-2}x,\sqrt{d+2}y\right) & \mathbb{X}^2+\mathbb{Y}^2=(2d-4a)\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=2\sqrt{d^2-4}\mathds 1\\ \hline \left(\frac{1}{\sqrt{2d-4a}}x,\frac{1}{\sqrt{2d-4a}}y\right) & \mathbb{X}^2+\mathbb{Y}^2=\mathds 1 & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}= \frac{\sqrt{d^2-4}}{d-2a} \mathds 1=:\widehat a \mathds 1\\ \hline \left( x,x+y\right) & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\widehat a \mathds 1+2\mathbb{X}^2 & \mathbb{Y}^2=(1+\widehat a)\mathds 1\\ \hline \left( x,\frac{1}{\sqrt{1+\widehat a}}y\right) & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\frac{\widehat{a}}{\sqrt{1+\widehat a}}\mathds 1+\frac{2}{\sqrt{1+\widehat a}}\mathbb{X}^2 &
\mathbb{Y}^2=\mathds 1\\ \hline \left( -x+\frac{1}{\sqrt{1+\widehat a}}y,y\right) & \mathbb{X}^2=\Big( \frac{1+\widehat a}{4}-\frac{2\widehat a}{ 1+\widehat a}\Big)\mathds 1=:\widetilde{a}\mathds 1 & \mathbb{Y}^2=\mathds 1\\ \hline \left(\frac{x}{\sqrt{\widetilde a}},y\right) & \mathbb{X}^2= \mathds 1 & \mathbb{Y}^2=\mathds 1\\ \hline \left(\frac{x+y}{2},\frac{y-x}{2}\right) & \mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf 0 & \mathbb{X}^2+\mathbb{Y}^2=\mathds 1\\ \hline \end{array} $$ \subsection{Calculations for Lemma \ref{linear-trans-on-zero-moments}\label{calc-for-lemma-zero-moments}} The statement of the lemma follows from the following calculations: \begin{align*} \widetilde{\beta}_{X} &= L_{\beta^{(4)}}(bX+cY)=b\beta_X+c\beta_Y=0,\\ \widetilde{\beta}_{Y} &= L_{\beta^{(4)}}(eX+fY)=e\beta_X+f\beta_Y=0,\\ \widetilde{\beta}_{X^3} &= L_{\beta^{(4)}}((bX+cY)^3)= L_{\beta^{(4)}}\left(b^3X^3+b^2c(X^2Y+XYX+YX^2)+bc^2(XY^2+YXY+Y^2X)+c^3Y^3\right)\\ &=b^3\beta_{X^3}+3b^2c \beta_{X^2Y}+3bc^2\beta_{XY^2}+c^3\beta_{Y^3}=0,\\ \widetilde{\beta}_{X^2Y} &= L_{\beta^{(4)}}((bX+cY)^2(eX+fY))\\ &= L_{\beta^{(4)}}\left(b^2eX^3+b^2fX^2Y+bce (XYX+ YX^2)+ bcf(XY^2+YXY)+c^2e Y^2X +c^2fY^3\right)\\ &=b^2e\beta_{X^3}+(b^2f+2bce) \beta_{X^2Y}+(2bcf+c^2e)\beta_{XY^2}+c^2f\beta_{Y^3}=0,\\ \widetilde{\beta}_{Y^3} &= L_{\beta^{(4)}}((eX+fY)^3)= L_{\beta^{(4)}}\left(e^3X^3+e^2f(X^2Y+XYX+YX^2)+ef^2(XY^2+YXY+Y^2X)+f^3Y^3\right)\\ &=e^3\beta_{X^3}+3e^2f \beta_{X^2Y}+3ef^2\beta_{XY^2}+f^3\beta_{Y^3}=0.
\end{align*} \subsection{Calculations for Lemma \ref{gen-lemma-one--1}\label{calc-part3-app}} Part \eqref{Y2=1-pt3} of Lemma \ref{gen-lemma-one--1} follows from the following calculations: \begin{equation*} \begin{split} \widehat{X}^2 &= \widehat{X}^2\widetilde{Y}^2 = \left(\begin{array}{cc} \hat D^2+xx^t & \hat Dx \\ x^t\hat D & x^tx\end{array}\right),\\ \widehat{X}\widetilde{Y} &= \widehat{X}\widetilde{Y}^3=\left(\begin{array}{cc} \hat D & -x \\ x^t & 0\end{array}\right),\\ \widehat{X}^3 &= \left(\begin{array}{cc} \hat D^3+xx^t\hat D+\hat Dxx^t & \ast \\ \ast & x^t\hat Dx\end{array}\right),\\ \widehat{X}^2\widetilde{Y} &= \left(\begin{array}{cc} \hat D^2+xx^t & -\hat Dx \\ x^t\hat D & -x^tx\end{array}\right), \end{split} \qquad \begin{split} \widehat{X}^4 &= \left(\begin{array}{cc} (\hat D^2+xx^t)^2+\hat Dxx^t\hat D & \ast\\ \ast & x^t\hat D^2x+(x^tx)^2\end{array}\right),\\ \widehat{X}^3\widetilde{Y} &= \left(\begin{array}{cc} \hat D^3+xx^t\hat D+\hat Dxx^t & \ast \\ \ast & -x^t\hat Dx\end{array}\right),\\ \widehat{X}\widetilde{Y}\widehat{X}\widetilde{Y} &= \left(\begin{array}{cc} \hat D^2-xx^t & -\hat Dx \\ x^t \hat D & -x^tx\end{array}\right),\\ \widetilde{Y}^4 &= I_n, \end{split} \end{equation*} where $\hat D=D_0\oplus 0.$ \section{Theorem 4.2 -- Remaining cases}\label{proof-rank5-extension} \subsection{Relations $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$ and $\mathbb{Y}^2=\mathds 1$} \label{subsub2} By Lemma \ref{zero-moment-general2}, the matrix $\mathcal M_n(\beta_1,\beta_X,X)$ must have $\beta_X=0$ and hence we will write it as $\mathcal M_n(\beta_1,X)$.
The forms of $\mathcal M_n(\beta_1,X)$, $\mathcal M_n(\beta_1,Y)$ are \begin{equation}\label{MX-form} \begin{blockarray}{cccccccccccc} & \mathds{1}&\mathbb{X}&\mathbb{X}^2&\mathbb{X}^3&\cdots&\mathbb{X}^{2k}&\mathbb{X}^{2k+1}&\cdots&\mathbb{X}^n\\ \begin{block}{c(ccccccccccc)} \mathds{1}& \beta_1 & 0 & \beta_{X^2} & 0 & \cdots & \beta_{X^{2k}} & 0 & \cdots & c_n \beta_{X^n}\\ \mathbb{X}& 0 & \beta_{X^2} & 0 & \beta_{X^4} & \cdots & 0 & \beta_{X^{2k+2}}& \cdots & c_{n+1}\beta_{X^{n+1}} \\ \mathbb{X}^2& \beta_{X^2} & 0 & \beta_{X^4} & 0 & \cdots & \beta_{X^{2k+2}} & 0 & \cdots& c_n \beta_{X^{n+2}} \\ \mathbb{X}^3& 0 & \beta_{X^4} & 0 & \beta_{X^6} & \cdots & 0 & \beta_{X^{2k+4}} & \cdots & c_{n+1}\beta_{X^{n+3}} \\ \vdots& \vdots &&&&&&&&\vdots \\ \mathbb{X}^n& c_n\beta_{X^n} & c_{n+1}\beta_{X^{n+1}} & c_{n}\beta_{X^{n+2}} & c_{n+1}\beta_{X^{n+3}} & \cdots & c_{n}\beta_{X^{n+2k}} & c_{n+1}\beta_{X^{n+2k+1}} & \cdots& \beta_{X^{2n}}\\ \end{block} \end{blockarray}, \end{equation} \begin{equation*} \begin{blockarray}{cccccccccc} & \mathbb{Y}& \mathbb{X}\mathbb{Y}&\cdots& \mathbb{X}^{2k}\mathbb{Y}&\mathbb{X}^{2k+1}\mathbb{Y}&\cdots&\mathbb{X}^{n-1}\mathbb{Y}\\ \begin{block}{c(ccccccccc)} \mathbb{Y}& \beta_1 & 0 & \cdots & \beta_{X^{2k}} & 0 & \cdots & c_{n-1} \beta_{X^{n-1}}\\ \mathbb{X}\mathbb{Y}& 0 & \beta_{X^2} & \cdots & 0 & \beta_{X^{2k+2}}& \cdots & c_{n}\beta_{X^{n}} \\ \vdots& \vdots &&&&&&\vdots \\ \mathbb{X}^{2k}\mathbb{Y}& \beta_{X^{2k}} & 0 & \cdots & \beta_{X^{4k}} & 0 & \cdots& c_{n-1} \beta_{X^{n+2k-1}} \\ \mathbb{X}^{2k+1}\mathbb{Y} & 0 & \beta_{X^{2k+2}} & \cdots & 0 & \beta_{X^{4k+2}} & \cdots & c_{n}\beta_{X^{n+2k}} \\ \vdots& \vdots &&&&&&\vdots \\ \mathbb{X}^{n-1}\mathbb{Y}& c_{n-1}\beta_{X^{n-1}} & c_{n}\beta_{X^{n}} & \cdots & c_{n-1}\beta_{X^{n+2k-1}} & c_{n}\beta_{X^{n+2k}} & \cdots& \beta_{X^{2n-2}}\\ \end{block} \end{blockarray}, \end{equation*} respectively, where $c_m=\frac{(-1)^{m}+1}{2}$, and $B(\beta_Y)$ has the form \eqref{B(beta-y)}.
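The parity pattern encoded by $c_m$ (entries survive only for even total degree) can be sanity-checked on a single $2\times 2$ atom; using the normalized trace as the state in this sketch is an assumption for illustration only:

```python
import numpy as np

# c_m = ((-1)^m + 1)/2 is 1 for even m and 0 for odd m. For the 2x2 atom
# X = [[0, x], [x, 0]], the normalized trace of X^m is x^m for even m and
# 0 for odd m, i.e. c_m * x^m.
def c(m):
    return ((-1) ** m + 1) / 2

x = 1.7   # arbitrary atom position for this sketch
X = np.array([[0.0, x], [x, 0.0]])
for m in range(1, 7):
    Xm = np.linalg.matrix_power(X, m)
    assert np.isclose(np.trace(Xm) / 2, c(m) * x ** m)
```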
By Lemma \ref{zero-moment-general} the nc atoms must be of the form \eqref{form-of-nc-atoms-general}. Hence the only way to cancel the $\beta_Y$ moment in $B(\beta_Y)$ is by using atoms of size 1, which are $(0,\pm 1)$. Since we have that $$|\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)} \preceq \gamma\widetilde{\mathcal M}_{n}^{(0,1)}+\delta\widetilde{\mathcal M}_{n}^{(0,-1)}$$ for every $\gamma,\delta\geq 0$ such that $\gamma-\delta=\beta_Y$ (indeed, $\beta_Y\geq 0$ implies that $\sign(\beta_Y)1=1$, $\gamma\geq \beta_Y$ and hence $|\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)} \preceq \gamma\widetilde{\mathcal M}_{n}^{(0,1)}$, while $\beta_Y< 0$ implies that $\sign(\beta_Y)1=-1$, $\delta\geq |\beta_Y|$ and hence $|\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)} \preceq \delta\widetilde{\mathcal M}_{n}^{(0,-1)}$), it follows that $\widetilde{\mathcal M_n}(\beta_1,\beta_Y)$ admits a measure if and only if $$\widetilde{\mathcal M_n}(\beta_1-\beta_Y,0)=\widetilde{\mathcal M_n}(\beta_1,\beta_Y)-|\beta_Y|\widetilde{\mathcal M}_{n}^{(0,\sign(\beta_Y)1)}$$ admits a measure. Note that the existence of a measure for $\mathcal M_n(\beta_1-\beta_Y,X)$ is a truncated Hamburger moment problem. By \cite[Theorem 3.9]{CF91}, the matrix $\mathcal M_n(\beta_1-\beta_Y,X)$ admits a measure with size 1 atoms from $\mathbb R$ if and only if $\mathcal M_n(\beta_1-\beta_Y,X)$ is psd and recursively generated. Now note that if $x_i$, $i=1,\ldots, k$, $k\in \mathbb N$, are atoms in the measure for $\mathcal M_n(\beta_1-\beta_Y,X)$ with the corresponding densities $\mu_i$, $i=1,\ldots, k$, then $$ \Big(\left(\begin{matrix} 0&x_i\\ x_i&0 \end{matrix}\right), \left(\begin{matrix} 1&0\\ 0&-1 \end{matrix}\right) \Big), \quad i=1,\ldots,k, $$ with densities $\mu_i$, $i=1,\ldots,k$, are atoms which represent $\widetilde{\mathcal M_n}(\beta_1-\beta_Y,0,0)$.
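The $2\times 2$ atoms displayed above satisfy the two defining relations of this subsection; a quick numerical check:

```python
import numpy as np

# The 2x2 pairs from the display: X carries the Hamburger atom x off the
# diagonal and Y = diag(1, -1); they satisfy XY + YX = 0 and Y^2 = 1.
def nc_atom(x):
    return np.array([[0.0, x], [x, 0.0]]), np.diag([1.0, -1.0])

I = np.eye(2)
for x in [-1.5, 0.0, 2.0]:
    X, Y = nc_atom(x)
    assert np.allclose(X @ Y + Y @ X, np.zeros((2, 2)))  # XY + YX = 0
    assert np.allclose(Y @ Y, I)                         # Y^2 = 1
    assert np.allclose(X @ X, x**2 * I)                  # X^2 matches the atom
```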
\subsection{Relations $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$ and $\mathbb{Y}^2=\mathds 1+ \mathbb{X}^2$} By Lemma \ref{zero-moment-general2}, the matrix $\mathcal M_n(\beta_1,\beta_X,X)$ has the form \eqref{MX-form}, and $\mathcal M_n(\beta_1,Y)$ is equal to \begin{equation*} \begin{blockarray}{cccccccccc} & \mathbb{Y}& \mathbb{X}\mathbb{Y}&\cdots& \mathbb{X}^{2k}\mathbb{Y}&\mathbb{X}^{2k+1}\mathbb{Y}&\cdots&\mathbb{X}^{n-1}\mathbb{Y}\\ \begin{block}{c(ccccccccc)} \mathbb{Y}& \beta_1+\beta_{X^2} & 0 & \cdots & \beta_{X^{2k}}+\beta_{X^{2k+2}} & 0 & \cdots & \cdots\\ \mathbb{X}\mathbb{Y}& 0 & \beta_{X^2}+\beta_{X^4} & \cdots & 0 & \beta_{X^{2k+2}}+\beta_{X^{2k+4}}& \cdots & \vdots\\ \vdots& \vdots &&&&&&\vdots \\ \mathbb{X}^{2k}\mathbb{Y}& \beta_{X^{2k}} + \beta_{X^{2k+2}}&0 &\cdots & \beta_{X^{4k}}+\beta_{X^{4k+2}} & 0 & \cdots& \vdots\\ \mathbb{X}^{2k+1}\mathbb{Y} & 0 & \beta_{X^{2k+2}}+\beta_{X^{2k+4}} &\cdots & 0 & \beta_{X^{4k+2}}+\beta_{X^{4k+4}} & \cdots & \vdots\\ \vdots& \vdots &&&&&&\vdots \\ \mathbb{X}^{n-1}\mathbb{Y}& \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \\ \end{block} \end{blockarray}, \end{equation*} and $B(\beta_Y)$ has the form \eqref{B(beta-y)}. By Lemma \ref{zero-moment-general} the nc atoms must be of the form \eqref{form-of-nc-atoms-general}. Hence the only way to cancel the $\beta_Y$ moment in $B(\beta_Y)$ is by using atoms of size 1, which are $(0,\pm 1)$. As in \S \ref{subsub2} we argue that $\widetilde{\mathcal M_n}(\beta_1,\beta_Y)$ admits a measure if and only if $\mathcal M_n(\beta_1-\beta_Y,X)$ admits a measure with atoms from $\mathbb R$ of size 1, which holds if and only if it is psd and recursively generated.
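Hamburger atoms $x$ lift to nc atoms for the present pair of relations by taking $Y$ diagonal with entries $\pm\sqrt{1+x^2}$; the following sketch checks the two relations directly:

```python
import numpy as np

# For XY + YX = 0 and Y^2 = 1 + X^2, take X off-diagonal with entry x and
# Y = diag(sqrt(1+x^2), -sqrt(1+x^2)); the atom positions below are arbitrary.
for x in [-2.0, 0.5, 3.0]:
    s = np.sqrt(1.0 + x**2)
    X = np.array([[0.0, x], [x, 0.0]])
    Y = np.diag([s, -s])
    assert np.allclose(X @ Y + Y @ X, np.zeros((2, 2)))   # XY + YX = 0
    assert np.allclose(Y @ Y, np.eye(2) + X @ X)          # Y^2 = 1 + X^2
```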
Now note that if $x_i$, $i=1,\ldots, k$, $k\in \mathbb N$, are atoms in the measure for $\mathcal M_n(\beta_1-\beta_Y,X)$ with the corresponding densities $\mu_i$, $i=1,\ldots, k$, then $$ \Big(\left(\begin{matrix} 0&x_i\\ x_i&0 \end{matrix}\right), \left(\begin{matrix} \sqrt{1+x_i^2}&0\\ 0&-\sqrt{1+x_i^2} \end{matrix}\right) \Big), \quad i=1,\ldots,k, $$ with densities $\mu_i$, $i=1,\ldots, k$, are atoms which represent $\widetilde{\mathcal M_n}(\beta_1-\beta_Y,0,0)$. \subsection{Relations $\mathbb{X}\mathbb{Y}+\mathbb{Y}\mathbb{X}=\mathbf{0}$ and $\mathbb{Y}^2= \mathbb{X}^2$} By Lemma \ref{zero-moment-general2}, the matrix $\mathcal M_n(\beta_1,\beta_X,X)$ has the form \eqref{MX-form}, and $\mathcal M_n(\beta_1,Y)$ is equal to \begin{equation*} \begin{blockarray}{cccccccccccc} & \mathbb{Y}& \mathbb{X}\mathbb{Y}&\mathbb{X}^2\mathbb{Y}&\mathbb{X}^3\mathbb{Y}&\cdots& \mathbb{X}^{2k}\mathbb{Y}&\mathbb{X}^{2k+1}\mathbb{Y}&\cdots&\mathbb{X}^{n-1}\mathbb{Y}\\ \begin{block}{c(ccccccccccc)} \mathbb{Y}& \beta_{X^2} & 0 & \beta_{X^4} & 0 & \cdots & \beta_{X^{2k+2}} & 0 & \cdots & \cdots\\ \mathbb{X}\mathbb{Y}& 0 & \beta_{X^4} & 0 & \beta_{X^6} & \cdots & 0 & \beta_{X^{2k+4}}& \cdots & \vdots\\ \mathbb{X}^2\mathbb{Y}& \beta_{X^4}& 0 & \beta_{X^6} & 0 & \cdots & \beta_{X^{2k+4}} & 0 & \cdots& \vdots\\ \mathbb{X}^3\mathbb{Y} & 0 & \beta_{X^6} & 0 & \beta_{X^8} & \cdots & 0 & \beta_{X^{2k+6}} & \cdots & \vdots\\ \vdots& \vdots &&&&&&&&\vdots \\ \mathbb{X}^{n-1}\mathbb{Y}& \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \\ \end{block} \end{blockarray}, \end{equation*} and $B(\beta_Y)=\bf 0$. Note that the existence of a measure for $\mathcal M_n(\beta_1,0,X)$ is a truncated Hamburger moment problem. By \cite[Theorem 3.9]{CF91}, the matrix $\mathcal M_n(\beta_1,0,X)$ admits a measure with atoms from $\mathbb R$ of size 1 if and only if it is psd and recursively generated.
Now note that if $x_i$, $i=1,\ldots, k$, $k\in \mathbb N$, are atoms in the measure for $\mathcal M_n(\beta_1,X)$ with the corresponding densities $\mu_i$, $i=1,\ldots, k$, then $$ \Big(\left(\begin{matrix} 0&x_i\\ x_i&0 \end{matrix}\right), \left(\begin{matrix} x_i&0\\ 0&-x_i \end{matrix}\right) \Big), \quad i=1,\ldots,k, $$ with densities $\mu_i$, $i=1,\ldots, k$, are atoms which represent $\widetilde{\mathcal M_n}(\beta_1,0,0)$. \end{document}
doi: 10.3934/dcdsb.2013.18.2569 Marco Di Francesco, Donatella Donatelli. Singular convergence of nonlinear hyperbolic chemotaxis systems to Keller-Segel type models. Discrete & Continuous Dynamical Systems - B, 2010, 13 (1) : 79-100. doi: 10.3934/dcdsb.2010.13.79 Luis Almeida, Federica Bubba, Benoît Perthame, Camille Pouchol. Energy and implicit discretization of the Fokker-Planck and Keller-Segel type equations. Networks & Heterogeneous Media, 2019, 14 (1) : 23-41. doi: 10.3934/nhm.2019002 Yadong Shang, Jianjun Paul Tian, Bixiang Wang. Asymptotic behavior of the stochastic Keller-Segel equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1367-1391. doi: 10.3934/dcdsb.2019020 Tohru Tsujikawa, Kousuke Kuto, Yasuhito Miyamoto, Hirofumi Izuhara. Stationary solutions for some shadow system of the Keller-Segel model with logistic growth. Discrete & Continuous Dynamical Systems - S, 2015, 8 (5) : 1023-1034. doi: 10.3934/dcdss.2015.8.1023 Jean Dolbeault, Christian Schmeiser. The two-dimensional Keller-Segel model after blow-up. Discrete & Continuous Dynamical Systems, 2009, 25 (1) : 109-121. doi: 10.3934/dcds.2009.25.109 Shen Bian, Jian-Guo Liu, Chen Zou. Ultra-contractivity for Keller-Segel model with diffusion exponent $m>1-2/d$. Kinetic & Related Models, 2014, 7 (1) : 9-28. doi: 10.3934/krm.2014.7.9 Wenting Cong, Jian-Guo Liu. Uniform $L^{∞}$ boundedness for a degenerate parabolic-parabolic Keller-Segel model. Discrete & Continuous Dynamical Systems - B, 2017, 22 (2) : 307-338. doi: 10.3934/dcdsb.2017015 Xinru Cao. Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9) : 3369-3378. doi: 10.3934/dcdsb.2017141 Fanze Kong, Qi Wang. Stability, free energy and dynamics of multi-spikes in the minimal Keller-Segel model. Discrete & Continuous Dynamical Systems, 2022 doi: 10.3934/dcds.2021200 Qi Wang, Jingyue Yang, Lu Zhang. 
Time-periodic and stable patterns of a two-competing-species Keller-Segel chemotaxis model: Effect of cellular growth. Discrete & Continuous Dynamical Systems - B, 2017, 22 (9) : 3547-3574. doi: 10.3934/dcdsb.2017179 Ping Liu, Junping Shi, Zhi-An Wang. Pattern formation of the attraction-repulsion Keller-Segel system. Discrete & Continuous Dynamical Systems - B, 2013, 18 (10) : 2597-2625. doi: 10.3934/dcdsb.2013.18.2597 Xie Li, Zhaoyin Xiang. Boundedness in quasilinear Keller-Segel equations with nonlinear sensitivity and logistic source. Discrete & Continuous Dynamical Systems, 2015, 35 (8) : 3503-3531. doi: 10.3934/dcds.2015.35.3503 PDF downloads (21) Thierry Colin Marie-Christine Durrieu Julie Joie Yifeng Lei Youcef Mammeri Clair Poignard Olivier Saut
CommonCrawl
\begin{document} \title{Complexity Jumps In Multiagent Justification Logic Under Interacting Justifications} \begin{abstract} The Logic of Proofs, LP, and its successor, Justification Logic, is a refinement of the modal logic approach to epistemology in which proofs/justifications are taken into account. In 2000 Kuznets showed that satisfiability for {\sf LP} is in the second level of the polynomial hierarchy, a result which has been successfully repeated for all other one-agent justification logics whose complexity is known. We introduce a family of multi-agent justification logics with interactions between the agents' justifications, by extending and generalizing the two-agent versions of the Logic of Proofs introduced by Yavorskaya in 2008. Known concepts and tools from the single-agent justification setting are adjusted for this multi-agent case. We present tableau rules and some preliminary complexity results. In several cases the satisfiability problem for these logics remains in the second level of the polynomial hierarchy, while for others it is {\PSPACE}- or \EXP-hard. Furthermore, this problem becomes {\PSPACE}-hard even for certain two-agent logics, while there are $\EXP$-hard logics of three agents. \end{abstract} \section{Introduction} Justification Logic is a family of logics of justified beliefs. Where epistemic modal logic treats formulas of the form $K \phi$ with the intended meaning that an agent knows/believes $\phi$, in Justification Logic we consider formulas of the form $\term{t}{}\phi$ with the intended meaning that $t$ is a justification for $\phi$ -- or that the agent has justification $t$ for $\phi$.
The first Justification Logic was {\sf LP}, the logic of proofs, which appeared in \cite{Art95TR} by Artemov, but it has since developed into a wide family of explicit epistemic logics with notable complexity properties that significantly differ from the corresponding modal logics: while every single-agent justification logic whose complexity has been studied has its derivability problem in $\Pi_2^p$ (the second level of the polynomial hierarchy), the corresponding modal logics have \PSPACE-complete derivability problems. Furthermore, certain significant fragments of these justification logics have an even lower complexity -- \NP, or even \P, in some cases. For an overview of Justification Logic see \cite{Art08RSL}. For an overview of complexity results of (single-agent) Justification Logic, see \cite{Kuz08PhD}. In epistemic situations we often have multiple agents, so, as is the case with modal logic, there is a need for a multi-agent justification logic. In \cite{multi2}, Yavorskaya presents two-agent variations of {\sf LP}. These logics feature interactions between the two agents' justifications: for ${\sf LP}_\uparrow$, for instance, every justification for agent 1 can be converted to a justification for agent 2 for the same fact and we have the axiom $\term{t}{1}\phi \rightarrow \term{\uparrow\!\! t}{2} \phi$,\footnote{We take some liberty with the notation to keep it in line with this paper.} while ${\sf LP}_!$ comes with the extra axiom $\term{t}{1} \phi \rightarrow \term{!t}{2}\term{t}{1}\phi$, so agent 2 is aware of agent 1's justifications. In \cite{Achilleos14TRtwoagent}, we extended Yavorskaya's logics to two-agent variations of other justification logics, as well as to combinations of two different justification logics.
We then gave tableau procedures to prove that most of these logics were in the second level of the polynomial hierarchy, an expected result which mimics the ones for single-agent justification logics from \cite{DBLP:conf/csl/Kuznets00,Kuz08PhD,Achilleos:wollic11}. For some cases, however, we were able to prove \PSPACE-completeness, which was a new phenomenon for Justification Logic. In this paper we continue our work from \cite{Achilleos14TRtwoagent}. We provide a general family of multi-agent logics. We call each member of this family $(J^{n}_{D,F,V,C})_\mathcal{CS}$, where $n, D, F, V, C$ are parameters of the logic. For $(J^{n}_{D,F,V,C})_\mathcal{CS}$ we consider $n$ agents and the interactions between the agents' justifications are described by binary relations on agents, $V$ and $C$. Furthermore, not all agents are equally reliable: $D$ and $F$ are sets of agents, all agents in $D$ have consistent beliefs and all agents in $F$ have true beliefs. These concepts are made precise in section \ref{multidefinitions}. It is our goal to provide a flexible system capable of modelling situations of many diverse agents, or diverse types of justifications, allowing for reasonably general interactions among their justifications. For this family of logics we provide semantics and a general tableau procedure and then we make observations on the complexity of the derivation problem for its members. In particular, we demonstrate that all logics in this family have their satisfiability problem in {\NEXP} -- under reasonable assumptions. This family demonstrates significant variety, as it also includes \PSPACE- and \EXP-complete members, while of course some of its members have their satisfiability problem in $\Sigma_2^p$. This is a somewhat surprising result, as all single-agent justification logics whose complexity is known have their satisfiability problem in $\Sigma_2^p$. This paper is organised as follows.
In section \ref{multidefinitions}, we give the base definitions of the syntax, axioms and semantics for each logic in the family. Then we reintroduce the star calculus, an invaluable tool, and our first complexity results, mirroring the ones for single-agent justification logic (see \cite{NKru06TCS}). The version of the star calculus we provide is somewhat different from the usual one in that it is based on a given frame. If the frame includes a single world, we get the usual, more familiar version. In section \ref{tableaux} we give general tableau rules for each of our logics. Naturally, the rules are parameterized by the logic's parameters, including the interactions between the agents, so special attention is given in that section to these interactions. We then further optimize the tableau procedure with respect to the number of world-prefixes it produces; this results in a $\Sigma_2^p$ upper bound for the satisfiability of a general class of logics. \section{Multiagent Justification Logic with Interactions} \label{multidefinitions} In this section we present the system we study in this paper, its semantics and the basic tools we will need later on. Most of the proofs for the claims here can be adjusted from the one- or two-agent versions of Justification Logic. The reader can see \cite{Art08RSL} or \cite{Kuz08PhD} for an overview of single-agent justification logic and \cite{Achilleos14TRtwoagent} for a two-agent version of this system. \subsection{Syntax} In this paper, if $n\in \nat$, $[n]$ will be the set $\{ 1, 2, \ldots , n \}$. For every $n \in \nat$, the justification terms of the language $L_n$ will include all constants $c_1, c_2, c_3, \ldots$ and variables $x_1, x_2, x_3, \ldots$ and if $t_1$ and $t_2$ are terms, then the following are also terms: $ [t_1 + t_2], [t_1\cdot t_2], ! t_1$. The set of terms will be referred to as $Tm$. We also use a set $SLet$ of propositional variables, or sentence letters.
These will usually be $p_1,p_2,\ldots$. Formulas of the language $L_n$ include all propositional variables and if $\phi, \psi$ are formulas, $i \in [n] $ and $t$ is a term, then the following are also formulas of $L_n$: $\bot, \phi \rightarrow\psi , \term{t}{i} \phi $. The remaining propositional connectives, whenever needed, are treated as constructed from $\rightarrow$ and $\bot$ in the usual way. The operators $\cdot, +$ and $!$ are explained by the following axioms. Intuitively, $\cdot$ applies a justification for a statement $A \rightarrow B$ to a justification for $A$ and gives a justification for $B$. Using $+$ we can combine two justifications and have a justification for anything that can be justified by either of the two initial terms -- much like the concatenation of two proofs. Finally, $!$ is a unary operator called the proof checker. Given a justification $t$ for $\phi$, $!t$ justifies the fact that $t$ is a justification for $\phi$. Let $n\in \nat$, $D,F\subseteq [n]$ and $V,C \subseteq [n]^{2}$. The logic $(J^{n}_{D,F,V,C})_{\emptyset}$ is the logic with modus ponens as a derivation rule and the following axioms: \begin{description} \item[Propositional Axioms:] Finitely many schemes of classical propositional logic; \item[Application:] $\term{s}{i}(\phi\rightarrow \psi) \rightarrow (\term{t}{i}\phi \rightarrow \term{[s\cdot t]}{i} \psi)$; \item[Concatenation:] $\term{s}{i}\phi \rightarrow \term{[s + t]}{i} \phi$, $\term{s}{i}\phi \rightarrow \term{[t + s]}{i} \phi$; \item[$F$-factivity:] For every $i\in F$, $\term{t}{i}\phi \rightarrow \phi$; \item[$D$-consistency:] For every $i\in D$, $\term{t}{i}\bot \rightarrow \bot$; \item[$V$-verification:] For every $(i,j)\in V$, $\term{t}{i}\phi \rightarrow \term{! t}{j} \term{t}{i}\phi$; \item[$C$-conversion:] For every $(i,j)\in C$, $\term{t}{i}\phi \rightarrow \term{t}{j}\phi$, \end{description} where in the above, $\phi$ and $\psi$ are formulas in $L_n$, $s, t$ are terms and $ i,j \in [n]$.
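As a quick illustration (our example, not part of the original presentation of the axioms), Application and Concatenation combine with modus ponens as follows, for any agent $i$ and any term $r$:

```latex
% Illustration: deriving a justified conclusion from justified premises.
\begin{align*}
1.\quad & \term{s}{i}(\phi\rightarrow\psi)
  && \text{premise}\\
2.\quad & \term{t}{i}\phi
  && \text{premise}\\
3.\quad & \term{s}{i}(\phi\rightarrow\psi) \rightarrow
  (\term{t}{i}\phi \rightarrow \term{[s\cdot t]}{i}\psi)
  && \text{Application}\\
4.\quad & \term{[s\cdot t]}{i}\psi
  && \text{modus ponens on 1, 2, 3}\\
5.\quad & \term{[[s\cdot t]+r]}{i}\psi
  && \text{Concatenation, modus ponens on 4}
\end{align*}
```

If moreover $(i,j)\in V$, then $V$-verification applied to step 4 gives $\term{![s\cdot t]}{j}\term{[s\cdot t]}{i}\psi$.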
$F$-factivity and $D$-consistency are the usual factivity and consistency axioms for every agent in $F$ and $D$ respectively. Positive introspection is seen as a special case of $V$-verification -- in this context, if agent $i$ has positive introspection, then $(i,i) \in V$. A constant specification for $J^{n}_{D,F,V,C}$ is any set \[ \mathcal{CS} \subseteq \{\term{c}{i} A \mid c \mbox{ is a constant, } A \mbox{ an axiom of } J^{n}_{D,F,V,C} \mbox{ and } i \in [n]\}. \] We say that axiom $A$ is justified by a constant $c$ for agent $i$ when $\term{c}{i}A \in \mathcal{CS}$. A constant specification is: \emph{axiomatically appropriate with respect to $I\subseteq [n]$} if for every $i \in I$, each axiom is justified by at least one constant; \emph{schematic} if every constant justifies only (all instances of) certain schemes from the ones above (as a result, if $c$ justifies $A$ for $i$ and $B$ results from $A$ by substitution, then $c$ justifies $B$ for $i$); and \emph{schematically injective} if it is schematic and every constant justifies at most one scheme. Let $cl_n(\mathcal{CS})$ be the smallest set such that $\mathcal{CS} \subseteq cl_n(\mathcal{CS})$ and for every $\term{t}{i}\phi \in cl_n(\mathcal{CS})$, it is the case that for every $j\in [n]$, $\term{!t}{j}\term{t}{i}\phi \in cl_n(\mathcal{CS})$. For instance, if $\term{c}{1}A \in \mathcal{CS}$ and $n \geq 2$, then $\term{!c}{2}\term{c}{1}A$ and $\term{!!c}{1}\term{!c}{2}\term{c}{1}A$ are in $cl_n(\mathcal{CS})$. $(J_{D,F,V,C}^n)_\mathcal{CS}$ is $(J_{D,F,V,C}^n)_\emptyset + R4^{n}_\mathcal{CS}$, where $R4^{n}_\mathcal{CS}$ just outputs all elements of $cl_n(\mathcal{CS})$. $(J^n_{D,F,V,C})_\mathcal{CS}$ is consistent: just map each formula to the propositional formula that results from removing all terms from the original one; then all axioms are mapped to propositional tautologies and modus ponens preserves this mapping. \subsection{Semantics} We now introduce models for our logic.
In the single-agent cases, M-models (introduced in \cite{Mkr97LFCS,DBLP:conf/csl/Kuznets00}) and F-models (introduced in \cite{Fit05APAL,Pac05PLS,DBLP:conf/csr/Kuznets08}) are used (also in \cite{Achilleos14TRtwoagent} for two-agent logics) and they are both important in the study of complexity issues. In this paper we are mostly interested in F-models, which we will usually just call models. These are essentially Kripke models with additional machinery to accommodate justification terms. Let $\mathcal{J}=(J^n_{D,F,V,C})_\mathcal{CS}$. Then, an F-model $\mathcal{M}$ for $\mathcal{J}$ is a quadruple $(W, (R_i)_{i \in [n]}, ({\mathcal{A}}_i)_{i \in [n]},\mathcal{V})$, where $W \neq \emptyset$ is a set, for every $i\in [n]$, $R_i\subseteq W^2$ is a binary relation on $W$, $\mathcal{V}:SLet \longrightarrow 2^{W}$ and for every $i\in [n]$, ${\mathcal{A}}_i:(Tm\times L_n) \longrightarrow 2^{W}$. $W$ is called the \emph{universe} of $\mathcal{M}$ and its elements are the worlds or states of the model. $\mathcal{V}$ assigns a subset of $W$ to each propositional variable, $p$, and ${\mathcal{A}}_i$ assigns a subset of $W$ to each pair of a justification term and a formula. Furthermore, $({\mathcal{A}}_{i})_{i\in[n]}$ will often be viewed as a single function ${\mathcal{A}} : [n]\times Tm \times L_n \longrightarrow 2^{W}$ and ${\mathcal{A}}$ is called an admissible evidence function. Additionally, ${\mathcal{A}}$ must satisfy the following conditions: \begin{description} \item {Application closure:} for any $i\in [n]$, formulas $\phi, \psi$, and justification terms $t, s$, \\ ${\mathcal{A}}_i(s,\phi \rightarrow \psi) \cap {\mathcal{A}}_i(t,\phi) \subseteq {\mathcal{A}}_i(s\cdot t, \psi).$ \item {Sum closure:} for any $i\in [n]$, formula $\phi$, and justification terms $t, s$, \\ $ {\mathcal{A}}_i(t,\phi) \cup {\mathcal{A}}_i(s,\phi) \subseteq {\mathcal{A}}_i(t+s,\phi).$ \item {$\mathcal{CS}$-closure:} for any formula $\term{t}{i}\phi \in cl_n(\mathcal{CS})$, ${\mathcal{A}}_i(t,\phi) = W$. \item {$V$-Verification Closure:} If $(i,j)\in V$, then ${\mathcal{A}}_i(t,\phi) \subseteq {\mathcal{A}}_j(!t,\term{t}{i}\phi)$. \item {$C$-Conversion Closure:} If $(i,j)\in C$, then ${\mathcal{A}}_i(t,\phi) \subseteq {\mathcal{A}}_j(t,\phi)$. \item {$V$-Distribution:} for any formula $\phi$, justification term $t$, $(i,j)\in V$ and $a,b \in W$, if $a R_j b$ and $a \in {\mathcal{A}}_i(t,\phi)$, then $b \in {\mathcal{A}}_i(t,\phi)$. \end{description} The accessibility relations, $R_i$, must satisfy the following conditions: \begin{itemize} \item If $i \in F$, then $R_i$ must be reflexive. \item If $i \in D$, then $R_i$ must be serial ($\forall a \in W \ \exists b \in W \ a R_i b$). \item If $(i,j) \in V$, then for any $a,b,c\in W$, if $a R_j b R_i c$, we also have $a R_i c$.\footnote{Thus, if $i$ has positive introspection (i.e. $(i,i)\in V$), then $R_i$ is transitive.} \item For any $(i,j)\in C$, $R_j \subseteq R_i$. \end{itemize} Truth in the model is defined in the following way, given a state $a$: \begin{itemize} \item $\mathcal{M},a \not \models \bot$ and if $p$ is a propositional variable, then $\mathcal{M},a \models p$ iff $a \in \mathcal{V}(p)$. \item If $\phi, \psi$ are formulas, then $\mathcal{M},a \models \phi \rightarrow \psi$ if and only if $\mathcal{M},a \models \psi$, or $\mathcal{M},a \not \models \phi$. \item If $\phi$ is a formula and $t$ a term, then $\mathcal{M},a \models \term{t}{i}\phi$ if and only if $a \in {\mathcal{A}}_i(t,\phi)$ and $\mathcal{M},b\models \phi$ for all $b \in W$ such that $a R_i b$. \end{itemize} A formula $\phi \in L_n$ is called satisfiable if there are some $\mathcal{M}$ and $a$ such that $\mathcal{M},a \models \phi$; we then say that $\mathcal{M}$ satisfies $\phi$ in $a$.
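For concreteness, here is a toy model (our illustration; the choice of parameters and valuation is ours) for the case $n=1$, $F=\{1\}$, $D=V=C=\emptyset$:

```latex
% A one-world F-model satisfying x_1 :_1 p.
\begin{align*}
W &= \{a\}, & R_1 &= \{(a,a)\} \ (\text{reflexive, as } 1 \in F),\\
\mathcal{V}(p) &= \{a\}, & \mathcal{A}_1 &\ \text{minimal admissible with } a \in \mathcal{A}_1(x_1,p).
\end{align*}
```

Then $\mathcal{M},a \models \term{x_1}{1}p$: indeed $a \in \mathcal{A}_1(x_1,p)$ and $p$ holds at the only $R_1$-successor of $a$; by Sum closure, also $a \in \mathcal{A}_1(x_1+x_2,p)$, so $\mathcal{M},a \models \term{[x_1+x_2]}{1}p$ as well.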
If $\mathcal{CS}$ is axiomatically appropriate with respect to $D$, then $(J^n_{D,F,V,C})_\mathcal{CS}$ is sound and complete with respect to its models; it is also sound and complete with respect to its models that have the \emph{strong evidence property}: $\mathcal{M},a \models \term{t}{i}\phi$ iff $a \in {\mathcal{A}}_i(t,\phi)$. Furthermore, if $\phi$ is satisfiable, then it is satisfied by a model $\mathcal{M}$ of at most $2^{|\phi|}$ states -- and in fact, it is satisfied by a model $\mathcal{M}$ of at most $2^{|\phi|}$ states that has the strong evidence property (see \cite{Achilleos14TRtwoagent} for proofs of all the above, which can be easily adjusted for this general case). A pair $(W,(R_i)_{i\in [n]})$ as above is called a frame for $(J^n_{D,F,V,C})_\mathcal{CS}$. \subsection{The $*$-calculus.} We present the $*$-calculi for $(J^n_{D,F,V,C})_\mathcal{CS}$. The $*$-calculi for the single-agent justification logics have proven to be an invaluable tool in the study of the complexity of these logics. These concepts and results were adapted to the two-agent setting in \cite{Achilleos14TRtwoagent} and here we extend them to the general multi-agent setting. Although the calculi have significant similarities to the ones of the single-agent justification logics, there are differences, notably that each calculus depends upon a frame and operates upon $*$-expressions (defined below) prefixed by states of the frame. A $*$-calculus was first introduced in \cite{NKru06TCS}, but its origins can be found in \cite{Mkr97LFCS}. If $t$ is a term, $\phi$ is a formula, and $i\in [n]$, then $*_i(t,\phi)$ is a star-expression ($*$-expression).
Given a frame $\mathcal{F} = (W,(R_i)_{i\in [n]})$ and $V, C \subseteq [n]^2$ and constant specification $\mathcal{CS}$, the $*_{\mathcal{CS}}^{\mathcal{F}}(V,C)$ calculus on the frame $\mathcal{F}$ is a calculus on $*$-expressions prefixed by worlds from $W$ with the axioms and rules that are shown in figure \ref{fig:starcalc}.\\ \noindent \begin{figure} \caption{The $*^\mathcal{F}_\mathcal{CS}(V,C)$-calculus: where $\mathcal{F} = (W,(R_i)_{i\in [n]})$ and for every $i \in [n]$} \label{fig:starcalc} \end{figure} \indent Notice that the calculus rules correspond to the closure conditions of the admissible evidence functions. In fact, because of this, given a frame $\mathcal{F} = (W,(R_i)_{i\in [n]})$ and a set $S$ of $*$-expressions prefixed by states of the frame, the function ${\mathcal{A}}$ such that ${\mathcal{A}}_i(t,\phi) = \{w\in W | S\vdash_{*^\mathcal{F}_\mathcal{CS}(V,C)} w\ *_i(t,\phi) \}$ is an admissible evidence function; in fact, it is the minimal admissible evidence function such that for every $w\ *_i(t,\phi) \in S$, $w \in {\mathcal{A}}_i(t,\phi)$, in the sense that ${\mathcal{A}}_i(t,\phi) \subseteq {\mathcal{A}}'_i(t,\phi)$ always holds for any other admissible evidence function ${\mathcal{A}}'$ such that for every $w\ *_i(t,\phi) \in S$, $w \in {\mathcal{A}}'_i(t,\phi)$. Therefore, given a frame $\mathcal{F} = (W,(R_i)_{i\in [n]})$ and two sets $\mathcal{T},\mathcal{N}$ of $*$-expressions prefixed by states of the frame, there is an admissible evidence function ${\mathcal{A}}$ on $\mathcal{F}$ such that for every $w\ *_i(t,\phi) \in \mathcal{T}$, $w \in {\mathcal{A}}_i(t,\phi)$ and for every $w\ *_i(t,\phi) \in \mathcal{N}$, $w \notin {\mathcal{A}}_i(t,\phi)$, if and only if there is no $f \in \mathcal{N}$ such that $\mathcal{T} \vdash_* f$. This observation yields the following.
\begin{proposition} \label{prp:proofbystarcalc} For any\footnote{Note that we actually need an axiomatically appropriate constant specification to have completeness with respect to F-models, so we cannot immediately conclude this result for \emph{any} constant specification. Nevertheless, proposition \ref{prp:proofbystarcalc} holds; proving it, however, requires introducing M-models, which we do not use anywhere else. Thus, the reader can see \cite{NKru06TCS,Kuz08PhD}, or \cite{Achilleos14TRtwoagent} for a proof of proposition \ref{prp:proofbystarcalc} for all constant specifications.} constant specification $\mathcal{CS}$, frame $\mathcal{F}$ with universe $W$ and $w\in W$, $ (J^n_{D,F,V,C})_\mathcal{CS} \vdash \term{t}{i}\phi \ \Longleftrightarrow \ \vdash_{*^\mathcal{F}_\mathcal{CS}(V,C)} w\ *_i(t,\phi)$. \end{proposition} \begin{proposition} \label{thm:calccompnew} Let $\mathcal{CS}$ be a schematic constant specification in $\P$ and $V, C \subseteq [n]^2$. Then, the following problem is in $\NP$: Given a finite frame $\mathcal{F} = (W, (R_i)_{i\in [n]})$, a finite set $S$ of $*$-expressions prefixed by worlds from $W$, a formula $\phi$, a term $t$, a $w \in W$ and $i \in [n]$, is it the case that \[ S \vdash_{*^{\mathcal{F}}_{\mathcal{CS}}(V,C)} w \ *_i(t,\phi)\mbox{?} \] \end{proposition} The proof of this proposition is very similar to the one that can be found in \cite{Kuz08PhD}. What is different here is the additional assignment of a state set to each node of the derivation tree, which does not change much. \begin{proof} For this proof and every $j \in [n]$, let $f_j:2^W\longrightarrow 2^W$ be such that for every $X \subseteq W$, $f_j(X) = \{y\in W \mid \exists x \in X \ \exists j' \ ((j,j')\in V \mbox{ and } x R_{j'} y)\}\cup X$. \begin{itemize} \item Nondeterministically construct a rooted tree with pairs of the form $(j,s)$, where $j \in [n]$ and $s$ is a subterm of $t$, as nodes, such that $(i,t)$ is the root and the following conditions are met.
Node $(j,s)$ can be the parent of $(j_1,s_1)$ or of both $(j_1,s_1)$ and $(j_2,s_2)$ as long as there is a rule $\frac{w\ *_{j_1}(s_1,\phi_1)}{w\ *_{j}(s,\phi_3)}$ or $\frac{w\ *_{j_1}(s_1,\phi_1) \ \ w\ *_{j_2}(s_2,\phi_2)}{w\ *_{j}(s,\phi_3)}$, respectively, of the $*$-calculus and as long as $(j_1,s_1) \neq (j,s) \neq (j_2,s_2)$. To keep this structure a tree, we can ensure at this step that there are no cycles, which would correspond to consecutive applications of $*$C$(\mathcal{F})$ and which would be redundant. \item Nondeterministically assign to each leaf, $(j,l)$, either \begin{itemize} \item some formula $\psi$ and the closure under $f_j$ of some set $W'\subseteq W$, s.t. for every $w \in W'$, $w\ *_j(l,\psi) \in S$ or, \item as long as $l$ is of the form $\underbrace{!\cdots !}_{k} c$, where $c$ is a constant and $k \in \nat$, we can also assign some $ \term{\underbrace{!\cdots !}_{k-1} c}{i_{k}} \cdots \term{!c}{i_2} \term{c}{i_1} A$ and $W' = W$, where $A$ is an axiom scheme s.t. $\term{c}{i_1}A \in \mathcal{CS}$. \end{itemize} \item If for some node $\nu = (j,s)$ all its children, say $\nu_1 = (j_1,s_1), \nu_2 = (j_2,s_2)$, have been assigned schemes or formulas $P_1, P_2$ and world sets $V_1, V_2$, assign to $\nu$ some scheme or formula $P$, such that $P_1,P_2$ can be unified to $P'_1,P'_2$ for which $\frac{w *_{j_1}(s_1,P'_1) \ w *_{j_2}(s_2,P'_2)}{w *_{j}(s,P)}$ is a rule in the $*_\mathcal{CS}^{\mathcal{F}}(V,C)$-calculus, and the world set $V$, where $V$ is the closure of $V_1 \cap V_2$ under $f_j$. Apply this step until the root of the tree has been assigned some scheme or formula and a subset $W'$ of $W$. \item Unify $\phi$ with the formula assigned to $(i,t)$ and verify that $w\in W'$. \end{itemize} If some step is impossible, the algorithm rejects. Otherwise, it accepts. Using efficient representations of schemes via DAGs and Robinson's unification algorithm, the algorithm runs in polynomial time.
We can see that as the tree is constructed, if $(i,s)$ is assigned scheme $P$ and set $V$, then the construction effectively describes a valid derivation of any expression of the form $v\ *_i(s,\psi)$, where $v \in V$ and $\psi$ an instance of $P$. Therefore, if the algorithm accepts, there exists a valid $*^{\mathcal{F}}_{\mathcal{CS}}(V,C)$-calculus derivation of $w \ *_i(t,\phi)$. On the other hand if there is some $*^{\mathcal{F}}_{\mathcal{CS}}(V,C)$-calculus derivation for $w \ *_{i}(t,\phi)$ from $S$, then the algorithm in the first two steps can essentially describe this derivation by producing the derivation tree and the formulas/schemes by which the derivation starts. Therefore, the algorithm accepts if and only if there is a $*^{\mathcal{F}}_{\mathcal{CS}}(V,C)$-calculus derivation for $w \ *_{i}(t,\phi)$ from $S$. See \cite{NKru06TCS} and \cite{Kuz08PhD} for a more detailed analysis. \qed \end{proof} The number of nondeterministic choices made by the algorithm in the proof of proposition \ref{thm:calccompnew} is bounded by $|t| + |S'|$, where $S' = \{*_j(s,\psi) | \exists w\ *_j(s,\psi) \in S \}$. Therefore, if there is some formula $\psi$ such that $\term{t}{i}\phi$ is a subformula of $\psi$ and for every $*_j(s,\psi') \in S'$, $\term{s}{j}\psi'$ is a subformula of $\psi$, then $2|\psi| \geq |t| + |S'|$ and therefore we can simulate all nondeterministic choices in time $2^{O(|\psi|)}$. Thus the algorithm can be turned into a deterministic one running in time $2^{O(|\phi|)}\cdot O(|W|^2)$. This observation, the fact that a satisfiable $\phi$ can be satisfied by a model of at most $2^{|\phi|}$ states (see the previous subsection) and the previous two propositions give the following results: \begin{corollary} Let $\mathcal{J} = (J^n_{D,F,V,C})_\mathcal{CS}$, where $\mathcal{CS} \in \P$ is schematic. Then, \begin{enumerate} \item Deciding for $\term{t}{i}\phi$ that $\mathcal{J} \vdash \term{t}{i}\phi$ is in \NP. 
\item If $\mathcal{CS}$ is axiomatically appropriate with respect to $D$, then the satisfiability problem for $\mathcal{J}$ is in \NEXP. \end{enumerate} \end{corollary} Additionally notice that if the term $t$ has no $+$, $\mathcal{CS}$ is schematically injective and $S = \emptyset$, we have essentially eliminated nondeterministic choices from the procedure above. Thus, we conclude (for the original result, see \cite{DBLP:conf/tark/ArtemovK09}): \begin{corollary} Let $\mathcal{J} = (J^n_{D,F,V,C})_\mathcal{CS}$, where $\mathcal{CS} \in \P$ is schematically injective. Then, deciding for $\term{t}{i}\phi$, where $t$ has no `$+$', that $\mathcal{J} \vdash \term{t}{i}\phi$ is in \P. \end{corollary} \section{Tableaux}\label{tableaux} In this section we give a general tableau procedure for every logic which varies according to each logic's parameters. We can then use the tableau for a particular logic and make observations on its complexity, as we do in the following section. A version of the tableau which is more efficient for some cases follows after. To develop the tableau procedure we need to examine the relations on the agents more carefully than we have so far. For this section and the following one fix some $J = (J^n_{D,F,V,C})_\mathcal{CS}$ and we assume $\mathcal{CS}$ is axiomatically appropriate with respect to $D$ and schematic. \subsection{A Closer Look on the Agents and their Interactions} If $\manyk{A}$ are binary relations on the same set, then $A_1\cdots A_k$ is the binary relation on the same set, such that $x A_1 \cdots A_k y$ if and only if there are $\manyk{x}$ in the set, such that $x = x_1 A_1 x_2 A_2 \cdots A_{k-1} x_k A_k y$. If $A$ is a binary relation, then $A^*$ is the reflexive, transitive closure of $A$; if $A$ is a set (but not a set of pairs), then $A^*$ is the set of strings from $A$. We also use the following relation on strings: $a \sqsubseteq b$ iff there is some string $c$ such that $ac = b$. 
We define the following subsets of and relations on $[n]$. \begin{description} \item[$ S = S(J) =$] $ \{ i \in [n] | (\exists j \in D \cup F) \ i C^* j \}$ attempts to capture exactly those agents that require a serial accessibility relation; from now on these agents monopolize our attention; \item[$R = R(J) =$] $ \{ i \in [n] | (\exists j \in F) \ i C^* j \}$ attempts to capture exactly those agents that require a reflexive accessibility relation; \item[$C_F = $] $C\cup \{(i,j)\in V| i \in R, j\in S\}$; notice that if $iC_F j $, and $x R_j y$, then $xR_i y$;\footnote{The $F$ in $C_F$ is to indicate that $C_F$ is a variation of $C$ influenced by the agents in $F$, not that it is available with other subscripts.} \item[$Q = $] $(V|_S\cup C|_S)^*$: if $i Q j$, then $i$'s justifications somehow affect $j$'s justifications; conversely, $j$'s accessibility relations somehow affect $i$'s accessibility relations; \item[$\equiv_C = $] $C^*_F \cap (C_F^*)^{-1}$: $i \equiv_C j$ if and only if $i C_F^* j$ and $j C_F^* i$; we can easily see that $\equiv_C$ is an equivalence relation and that if $i \equiv_{C} j$, then $i$ and $j$ have the same accessibility relations. \end{description} For this equivalence relation we can define equivalence classes on $S$, $P_C = \{\many{L}{k_C}\}$. $\chi(i)$ is the equivalence class $L \in P_C$ s.t. $i \in L$. We can define relations $<_C$ and $<_{VC}$ on $P_C$ in the following way: $P_1 \leq_C P_2$ iff $\exists x \in P_1 \exists y \in P_2$ s.t. $x C_F^* y$ and $P_1 \leq_{VC} P_2$ iff $\exists x \in P_1 \exists y \in P_2$ s.t. $x Q y$. Also, $P_1 <_{C} P_2$ iff $P_1 \leq_{C} P_2$ and $P_2 \not\leq_{C} P_1$, and similarly for $P_1 <_{VC} P_2$. Then, define $P_1 \leq_V P_2$ iff there are $x \in P_1$ and $y \in P_2$ s.t. $x Q V Q y$, that is, there are $ x_1, x_2 \in S$, where $x Q x_1 V x_2 Q y$. $P_1 <_V P_2$ iff $P_1 \leq_V P_2$ and $P_2 \not \leq_V P_1$. Let $i \in [n]$. 
Then, $M_C(i) = M_C(\chi(i)) = \{L\in P_C| \chi(i) \leq_C L$ and $ \not\exists L'\in P_C $ s.t. $ L <_{C} L' \}$. \subsection{The Tableau Procedure.} The formulas used in the tableau will have the form $ \emptyset.\sigma \ s \ \beta \psi$, where $ \psi \in L_n$ or is a $*$-expression, $\sigma \in P_C^*$ (the world prefixes are strings of equivalence classes of agents), $\beta$ is (either the empty string or) of the form $\Box_i \Box_j \cdots \Box_k$, $i,j,\ldots,k \in [n]$, and $s \in \{T,F\}$. Furthermore, $\emptyset.\sigma$ will be called a world-prefix or state-prefix, $s$ a truth-prefix, and world prefixes will be denoted as $\emptyset.s_1.s_2\ldots s_k$, instead of $\emptyset.s_1s_2\cdots s_k$, where for all $x \in [k]$, $s_x \in P_C$. A tableau branch is a set of formulas of the form $ \sigma \ s \ \beta \psi$, as above. A branch is complete if it is closed under the tableau rules (given below). It is propositionally closed if, for some $\sigma$, $\beta$, and $\psi$, both $ \sigma \ T \ \beta \psi$ and $ \sigma \ F \ \beta \psi$ are in the branch. We say that a tableau branch is constructed by the tableau rules from $\phi$ if it is a closure of $\{\emptyset\ T\ \phi \}$ under the rules. For every $i,j \in S$, $i \in N(j)$ if there are some $i', i'' \in S\smallsetminus R$ such that $i' \equiv_{VC} i''$, $i'' VC_F^* j$ and $\chi(i) \in M_C(i')$. 
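Since $S$, $R$, $C_F$ and the classes in $P_C$ are all obtained from finite closures of the relations $D$, $F$, $V$ and $C$, they are directly computable. The following Python sketch is our own illustration of these definitions (all function names are ours, and agents are $0$-indexed):

```python
from itertools import product

def closure(rel, n):
    """Reflexive-transitive closure of a relation on {0, ..., n-1} (Warshall)."""
    reach = {(i, i) for i in range(n)} | set(rel)
    for k, i, j in product(range(n), repeat=3):  # k varies slowest
        if (i, k) in reach and (k, j) in reach:
            reach.add((i, j))
    return reach

def agent_sets(n, D, F, V, C):
    """Compute S (agents requiring seriality), R (agents requiring
    reflexivity), and the equivalence classes of ==_C on S."""
    c_star = closure(C, n)
    S = {i for i in range(n) if any((i, j) in c_star for j in D | F)}
    R = {i for i in range(n) if any((i, j) in c_star for j in F)}
    # C_F adds the V-pairs going from reflexive to serial agents
    c_f_star = closure(set(C) | {(i, j) for (i, j) in V
                                 if i in R and j in S}, n)
    classes = set()
    for i in S:
        classes.add(frozenset(j for j in S
                              if (i, j) in c_f_star and (j, i) in c_f_star))
    return S, R, classes
```

For instance, on the logic $\mathcal{J}_1$ of the final section ($D=\{0,1\}$, $F=V=\emptyset$, $C=\{(2,0),(2,1)\}$ in $0$-indexed form), this yields $S=\{0,1,2\}$, $R=\emptyset$ and three singleton $\equiv_C$-classes.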
The tableau rules will include certain classical rules to cover propositional cases of formulas, as well as the ones that follow: \noindent \begin{minipage}[l][1.7cm]{0.4\linewidth} \[ \inferrule*[right=TrB]{\sigma\ T\ \term{t}{i}\psi}{\sigma\ T\ *_i(t,\psi) \\\\ \sigma\ T\ \Box_i\psi } \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i \in S$; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=TrD]{\sigma\ T\ \term{t}{i}\psi}{ \sigma.\chi(j)\ F\ \bot } \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i \in S$, $\chi(j) \in M_C(\chi(i))$, and $j \notin R$; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=Tr]{\sigma\ T\ \term{t}{i}\psi}{\sigma\ T\ *_i(t,\psi)} \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i \not\in S$; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=Fa]{\sigma\ F\ \term{t}{i}\psi}{\sigma\ F\ *_i(t,\psi)} \] \end{minipage} \begin{minipage}{0.55\linewidth} \phantom{I like cheese} \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=S]{\sigma.\chi(j) \ F \ \bot }{\sigma.\chi(j).\chi(i) \ F \ \bot} \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i \in N(j)$; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=SB]{\sigma\ T\ \Box_{i}\psi}{\sigma.\chi(i)\ T\ \psi} \] \end{minipage} \begin{minipage}{0.55\linewidth} if $\sigma.\chi(i)$ has already appeared; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=FB]{\sigma\ T\ \Box_{i}\psi}{\sigma\ T\ \psi} \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i \in F$; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=C]{\sigma\ T\ \Box_{i}\psi}{\sigma\ T\ \Box_{j}\psi} \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i C j$; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=V]{\sigma\ T\ \Box_{i}\psi}{\sigma\ T\ \Box_{j}\Box_i\psi} \] \end{minipage} 
\begin{minipage}{0.55\linewidth} if $i V j$. \end{minipage} We do not explicitly mention it anywhere else, but of course, we need a set of rules to cover propositional cases as well. In particular we can use\\ \begin{minipage}[l][1.5cm]{0.3\linewidth} \[ \inferrule*{\sigma\ T\ \psi\rightarrow \psi'}{\sigma\ F\ \psi\quad \mid\quad \sigma\ T\ \psi'} \] \end{minipage} and \begin{minipage}[l][1.5cm]{0.3\linewidth} \[ \inferrule*{\sigma\ F\ \psi\rightarrow \psi'}{\sigma\ T\ \psi \\\\ \sigma\ F\ \psi'} \] \end{minipage} The separator $|$ indicates a nondeterministic choice between the two options it separates. If $b$ is a tableau branch, then\footnote{$\emptyset.P_C^*$ here and wherever else it may appear is the set $\{\emptyset.x|x\in P_C^*\}$.} $W(b) = \{\sigma \in \emptyset.P_C^* | \mbox{ there is some } \sigma\ a \in b \}.$ Let $(R_i)_{i \in [n]}$ be such that for every $i \in [n]$, \[R_i = \{(\sigma,\sigma.\chi(i))\in (W(b))^2\} \cup \{ (w,w)\in (W(b))^2 | i \in F \} \] then $\mathcal{F}(b) = (W(b),(R'_i)_{i\in [n]})$, where $(R'_i)_{i\in [n]}$ is the closure of $(R_i)_{i\in [n]}$ under the conditions of frames for the accessibility relations, except for seriality. $(R'_i)_{i\in [n]}$ is constructed in the following way: for every $i \in [n]$, let $R^0_i = R_i$ and for every $k \in \nat \cup \{0\}$, \[R_i^{k+1} = R_i^k \ \cup \ \bigcup_{(i,j) \in C} R_j^k \ \cup \ \bigcup_{(i,j) \in V}\{(a,b) \in (W(b))^2 | \exists (a,c) \in R_j^k, (c,b) \in R_i^k \} \] and then, $\mathcal{F}(b) = (W(b),(\bigcup_{k \in \nat}R_i^k)_{i \in [n]})$. Finally, let $T(b) = \{ \sigma \ *_i(t,\psi) | \sigma\ T \ *_i(t,\psi) \mbox{ appears in }b\}$ and $F(b) = \{ \sigma \ *_i(t,\psi) | \sigma\ F \ *_i(t,\psi) \mbox{ appears in }b\}$. A branch $b$ of the tableau is rejecting when it is propositionally closed or there is some $f \in F(b)$ such that $T(b) \vdash_{*_\mathcal{CS}^{\mathcal{F}(b)}(V,C)} f$. Otherwise it is an accepting branch. 
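The fixpoint construction of $(R'_i)_{i\in[n]}$ above can be mirrored computationally. Below is a small Python sketch of ours (not from the paper; seriality is deliberately left out, as in the text) that closes a finite family of relations under the $C$-condition ($R_j \subseteq R_i$ when $(i,j) \in C$) and the $V$-condition (an $R_j$-step followed by an $R_i$-step is an $R_i$-step when $(i,j) \in V$):

```python
def frame_closure(R, C, V):
    """Close the family (R_i) under the frame conditions:
    if (i, j) in C, then R_j is a subset of R_i;
    if (i, j) in V, then {(a, b) : exists c, (a, c) in R_j and (c, b) in R_i}
    is a subset of R_i.
    Iterates to a fixpoint; over a finite set of worlds it terminates."""
    R = {i: set(pairs) for i, pairs in R.items()}
    changed = True
    while changed:
        changed = False
        for (i, j) in C:
            if not R[j] <= R[i]:
                R[i] |= R[j]
                changed = True
        for (i, j) in V:
            new = {(a, b) for (a, c) in R[j] for (c2, b) in R[i] if c2 == c}
            if not new <= R[i]:
                R[i] |= new
                changed = True
    return R
```

For example, with $V = \{(1,0)\}$, a pair $(w,u) \in R_0$ and $(u,v) \in R_1$ forces $(w,v)$ into $R_1$.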
By induction on the construction of $\mathcal{F}(b)$, it is not hard to see that for every $(\sigma,\tau.\chi(j))\in R_i$, it must be the case that $i C_F^* j$ or that $i \in R$ and $\sigma = \tau.\chi(j)$. By induction on the frame construction we can see that if $\sigma \ T \ \Box_i \phi$ appears in $b$ and $\sigma R_i \tau$, then $\tau \ T \ \phi$ appears in $b$. \begin{proposition}\label{prp:tableaucomplete} If there is a complete accepting tableau branch $b \ni \emptyset\ T\ \phi$, then the formula $\phi$ is satisfiable by a model for $J$. \end{proposition} \begin{proof} Let $\mathcal{M} = (W,(R_i)_{i\in [n]}, ({\mathcal{A}}_i)_{i\in[n]}, \mathcal{V})$, where $(W,(R_i)_{i\in [n]}) = \mathcal{F}(b)$, $\mathcal{V}(p) = \{w \in W| w\ T\ p \in b \}$, and ${\mathcal{A}}_i(t,\psi) = \{w \in W| T(b) \vdash_{*^\mathcal{F}_\mathcal{CS}(V,C)} w\ *_i(t,\psi) \}$. Let $\mathcal{M}' = (W,(R'_i)_{i\in[n]},({\mathcal{A}}_i)_{i\in[n]},\mathcal{V})$, where for every $i \in [n]$, if $i \in S$, then $R'_i = R_i \cup \{(a,a)\in W^2 |\exists j \in S $ s.t. $ iC_F^* j, \not\exists (a,b) \in R_j \}$ and $R'_i = R_i$, otherwise. $\mathcal{M}'$ is an F-model for $J$: $({\mathcal{A}}_i)_{i\in[n]}$ easily satisfies the appropriate conditions, as the extra pairs of the accessibility relations do not affect the $*$-calculus derivation, and we can prove the same for $(R'_i)_{i\in[n]}$. Suppose $aR'_ibR'_jc$ and $j V i$. If $(a,b) \in R'_i\smallsetminus R_i$, then $a = b$ and thus $a R'_j c$. If $(a,b) \in R_i$, then, from rule S, there must be some $(b,c')\in R_j$, so $(b,c)\in R_j$ and thus $(a,c) \in R_j$. If $(a,b) \in R'_i$ and $j C i$, then, trivially, whether $(a,b) \in R_i$ or not, $(a,b) \in R'_j$. By induction on $\chi$, we prove that for every formula $\chi$ and $a \in W$, if $a\ T\ \chi \in b$, then $\mathcal{M}',a\models \chi$, and if $a\ F\ \chi \in b$, then $\mathcal{M}',a \not\models \chi$. Propositional cases are easy. 
If $\chi = \term{t}{i}\psi$ and $a\ F\ \chi \in b$, then $a \notin {\mathcal{A}}_i(t,\psi)$, so $\mathcal{M}', a \not \models \chi$. On the other hand, if $a\ T\ \term{t}{i}\psi \in b$, then $a \in {\mathcal{A}}_i(t,\psi)$ and by rule TrD, for every $j \in S$ such that $i C_F^* j$, there is some $(a,b)\in R_j$. Therefore, for every $(a,b) \in R_j'$, it is the case that $(a,b) \in R_j$, so by rule TrB, a previous observation about formulas of the form $w\ T\ \Box_i \alpha$, and the inductive hypothesis, for every $(a,b)\in R_i$, $\mathcal{M}',b \models \psi$ and therefore, $\mathcal{M}',a \models \term{t}{i}\psi$. \qed \end{proof} Now we can prove the following proposition. \begin{proposition}\label{prp:tableaux} Let $\phi \in L_n$. $\phi$ is $(J^n_{D,F,V,C})_\mathcal{CS}$-satisfiable if and only if there is a complete tableau branch $b$ that is produced from $\emptyset \ T \ \phi$, such that \begin{itemize} \item for all $\sigma, \alpha$, not both $\sigma\ T\ \alpha$ and $\sigma \ F \ \alpha$ appear in $b$ and \item for any $\beta \in F(b)$, $T(b) \not \vdash_{*^\mathcal{F}_\mathcal{CS}(V,C)} \beta$. \end{itemize} \end{proposition} \begin{proof} The ``if'' direction was handled by proposition \ref{prp:tableaucomplete}. We prove the ``only if'' direction in what follows. Let $\mathcal{M} = (W, (R_i)_{i\in [n]},({\mathcal{A}}_i)_{i\in [n]},\mathcal{V})$ be an F-model that has the strong evidence property and let $s \in W$ be a state such that $\mathcal{M}, s \models \phi$. Furthermore, fix some $\cdot^\mathcal{M} : \emptyset.P_C^* \longrightarrow W$, such that $\emptyset^\mathcal{M} = s$ and the following condition is met. For any $\sigma.\chi(i)\in P_C^*$, $(\emptyset.\sigma.\chi(i))^\mathcal{M}$ is some element of $W$ s.t. $((\emptyset.\sigma)^\mathcal{M},(\emptyset.\sigma.\chi(i))^\mathcal{M}) \in R_i$. Let $L_n^\Box = \{\Box_{i_1}\cdots \Box_{i_k} \phi | \phi \in L_n, k \in \nat, \manyk{i} \in [n] \}$. 
Given a state $a$ of the model and $\Box_i\psi \in sub_\Box(\phi)$, $\mathcal{M}, a \models \Box_i\psi$ has the usual modal interpretation: $\mathcal{M}, a \models \Box_i\psi$ iff for every $(a,b) \in R_i$, $\mathcal{M}, b \models \psi$. We can see in a straightforward way, by induction on the tableau derivation, that there is a branch such that: if $\sigma \ T \ \psi$ appears in the branch and $\psi \in L_n^\Box$, then $\mathcal{M},\sigma^\mathcal{M} \models \psi$; if $\sigma \ F \ \psi$ appears in the branch and $\psi \in L_n^\Box$, then $\mathcal{M},\sigma^\mathcal{M} \not\models \psi$; if $\sigma \ T \ *_i(t,\psi)$ appears in the branch, then $\sigma^\mathcal{M} \in {\mathcal{A}}_i(t,\psi)$; and if $\sigma \ F \ *_i(t,\psi)$ appears in the branch, then $\sigma^\mathcal{M} \not \in {\mathcal{A}}_i(t,\psi)$. The proposition follows. \qed \end{proof} \subsection{An Improved Tableau} By taking a closer look at the interactions between the agents we can further improve the efficiency of our tableau procedure, and we do that in this section. We use this improvement to prove an upper bound on the complexity of a general class of logics. We need the following definitions and lemma \ref{lem:clusters}, which is a generalization of a result from \cite{Achilleos:wollic11} and has appeared in simpler forms in \cite{Achilleos14TRtwoagent}. First we define an additional equivalence relation: $\equiv_{VC} = Q \cap Q^{-1}$. As an equivalence relation, this one too gives equivalence classes on $S$ and they are $P_{VC} = \{\many{P}{k_{VC}}\}$. Notice that as $C^* \subseteq (V\cup C)^*$, $\equiv_C \subseteq \equiv_{VC}$ and therefore $P_C$ is a refinement of $P_{VC}$. We write $P = P_{VC}$. Furthermore, notice that for any $L \in P$, either $\exists x,y \in L$ s.t. $x V y$, or $L \in P_C$. In the first case, $L$ will be called a \emph{V-class} of agents and in the second case it will be called a \emph{C-class} of agents. 
For each agent $i \in [n]$, $P(i)$ will be the equivalence class $L \in P$ s.t. $i \in L$. To help keep the relationship between the sets of equivalence classes in mind, notice that $i \in \chi(i) \subseteq P(i) \subseteq S \subseteq [n]$. Furthermore, we can extend relations $<_C$ and $<_{VC}$ on $P$ in the same way they were defined on $P_C$. \begin{lemma}\label{lem:clusters} Let $\mathcal{M} = (W,(R_i)_{i\in [n]}, ({\mathcal{A}}_i)_{i\in [n]}, \mathcal{V})$ be a $(J^n_{D,F,V,C})_\mathcal{CS}$ F-model of at most $2^{|\phi|}$ states, $P_a \in P$ a V-class of agents, and $u \in W$. Then, there are states of $W$, $(a_i)_{i \in P_a}$, such that \begin{enumerate} \item For any $i \in P_a$, $u R_i a_i$. \item For any $i,j \in P_a$, $v, b \in W$, if $a_i, b R_j v$, then $b R_j a_j$. \end{enumerate} $(a_i)_{i \in P_a}$ will be called \emph{a $P_a$-cluster for $u$}. \end{lemma} \begin{proof} For this proof we need to define the following. Let $i \in [n]$, $w, v \in W$. \emph{An $E_V$-path ending at $i$} (and starting at $i'$) from $w$ to $v$ is a finite sequence $\many{v}{k+1}$, such that for some $\manyk{j} \in [n]$, $\many{E}{k-1} \in \{C^{-1},V^{-1}\}$, where for some $j \in [k - 1]$ $E_j = V^{-1}$ and $j_k = i$ (and $j_1 = i'$), for every $a \in [k - 1]$, $j_a E_a j_{a + 1}$ and if $E_a = C^{-1}$, then $v_{a+1} = v_{a+2}$, while if $E_a = V^{-1}$, then $v_{a+1} R_{j_{a+1}} v_{a+2}$; furthermore, $v_1 = w$, $v_{k+1} = v$ and $v_1 R_{j_1} v_2$. The $E_V$-path \emph{covers} a set $s\subseteq [n]$ if $\{\manyk{j}\} = s$. For this path and $a \in [k]$, $v_{a+1}$ is a $j_a$-state. Notice that if there is an $E_V$-path ending at $i$ from $w$ to $v$ and some $j\in s$ and $z\in W$ such that the path covers $s$ and $z R_j w$, it must also be the case that $w,z R_i v$. Let $p : [m] \longrightarrow P_a$ be such that $m \in \nat$, $p[[m]] = P_a$ and for every $i+1 \in [m]$, either $p(i+1) C p(i)$ or $p(i+1) V p(i)$ and there is some $i+1 \in [m]$ such that $p(i+1) V p(i)$. 
For any $s \in W$, let $b_0(s),b_1(s),b_2(s),\ldots,b_{m}(s)$ be the following: $b_0(s) = s$; $b_{1}(s)$ is such that there is an $E_V$ path ending at $p(1)$ from $s$ to $b_{1}(s)$ and covering $P_a$; and for $k \in [m]$ with $k>1$, $b_k(s)$ is such that $b_0(s),b_1(s),b_2(s),\ldots,b_{k}(s)$ is an $E_V$ path ending at $p(k)$. Let $(b^x_i)_{i \in [m], x\in \nat},(a^x_i)_{i \in [m], x\in \nat}$ be defined in the following way. For every $i \in [m]$, $b^0_i = b_i(u)$ and for every $x \in \nat$, $a^x_i = b_i(b_m^x)$. Finally, for $0 < x \in \nat$, $(b^x_i)_{i \in [m]}$ is defined in the following way. If there are some $b_x,v \in W$, $i,j \in P_a$, such that $b_x R_{j} v$, $a^{x-1}_i R_{j} v$ and not $b_x R_j a^{x-1}_j$, then for all $i \in P_a$, $b_i^x = b_i(v)$. Otherwise, $(b^x_i)_{i \in P_a} = (a^x_i)_{i \in P_a}$. By induction on $x$, we can see that for every $x,y \in \nat$, $i \in P_a$, if $y\geq x$, then $b_x R_i b_i^y, a_i^y$. Since the model has a finite number of states, there is some $x \in \nat$ such that for every $y \geq x$, $(b^y_i)_{i \in P_a} = (a^y_i)_{i \in P_a}$. Therefore, we can pick appropriate $(a_i)_{i \in P_a}$ among $(a^k_i)_{i \in P_a}$ that satisfy conditions 1 and 2. \qed \end{proof} We recursively define relation $\rightarrow$ on $S^*$: \begin{itemize} \item if $jC_F i$ then $i \rightarrow j$; \item if $j V i$, then $ij \rightarrow j$; \item if $\beta \rightarrow \delta$, then $\alpha \beta \gamma \rightarrow \alpha \delta \gamma$. \end{itemize} $\rightarrow^*$ is the reflexive, transitive closure of $\rightarrow$. $\rightarrow^*$ tries to capture the closure of the conditions on the accessibility relations of a frame. This is made explicit by observing that if for some frame $(W,(R_i)_{i\in [n]})$, $a R_{i_1}R_{i_2}\cdots R_{i_k}b$ and $i_1i_2\cdots i_k \rightarrow^* j_1j_2\cdots j_l$, then $a R_{j_1}R_{j_2}\cdots R_{j_l}b$. Furthermore, if, in addition, $l = k$, then for every $r \in [k]$, $j_r C_F^* i_r$. 
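The one-step rewriting relation $\rightarrow$ can be sketched computationally as follows (a Python illustration of ours, not from the paper; words are tuples of agents). Since the contraction rule never lengthens a word, the set of $\rightarrow^*$-successors of a word is finite and can be explored exhaustively:

```python
def successors(word, C_F, V):
    """One-step ->-successors of a word (tuple) of agents:
    a letter i may be replaced by j whenever (j, i) in C_F, and an
    adjacent pair (i, j) contracts to (j,) whenever (j, i) in V;
    both rules apply inside any context."""
    out = set()
    for k, i in enumerate(word):
        for (j, i2) in C_F:
            if i2 == i:
                out.add(word[:k] + (j,) + word[k + 1:])
    for k in range(len(word) - 1):
        i, j = word[k], word[k + 1]
        if (j, i) in V:
            out.add(word[:k] + (j,) + word[k + 2:])
    return out

def reachable(word, C_F, V):
    """All words reachable from `word` under ->*, by breadth-first search
    (includes `word` itself, since ->* is reflexive)."""
    seen, frontier = {word}, {word}
    while frontier:
        frontier = {w for u in frontier
                    for w in successors(u, C_F, V)} - seen
        seen |= frontier
    return seen
```

For instance, with $1\,C_F\,0$ and $1\,V\,1$, the word $01$ rewrites to $11$ (replacing the letter $0$) and then contracts to $1$.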
For every agent $i \in S$, we introduce a new agent, $\overline{i}$, and we extend $\rightarrow^*$ so that when $P(i)$ is a V-class, $\overline{i} \rightarrow^* \chi$ for every $\chi \in S^*$ such that $\chi \rightarrow^* i$. For each $L \in P_C$, we fix some $i \in L$ and let $\overline{L} = \overline{i}$. Furthermore, if $xy \in P_C^* \cup S^*$, then $\overline{xy} = \overline{x}\,\overline{y}$. This extended definition of $\rightarrow^*$ tries to capture the closure of the conditions on the accessibility relations of a frame like the ones that will result from a tableau procedure as defined in the following. Let $L \in P$ and $\sigma$ be a finite string of elements from $P_C$. Then, $L$ is \emph{visible} from $\emptyset.\sigma$ if and only if there is some $\chi(i)\subseteq L$, some $\chi \in P_C^*$ and some $\alpha \in S^*$ such that $\sigma = \tau.\chi(i).\chi$ and $\overline{\chi} \alpha \rightarrow^* i$; $\tau.\chi(i)$ is then called the $L$-view from $\sigma$. Notice that there is a similarity between this definition and the statement of lemma \ref{lem:clusters}; this will be made explicit later on. 
Then we adjust the tableau by altering rules TrD and S and introduce rule SVB: \noindent \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=TrD]{\sigma\ T\ \term{t}{i}\psi}{ \sigma.\chi(j)\ F\ \bot } \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i \in S$, $\chi(j) \in M_C(\chi(i))$, $j \notin R$, and $P(j)$ is not a $V$-class visible from $\sigma$; \end{minipage} \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=S]{\sigma.\chi(j) \ F \ \bot }{\sigma.\chi(j).\chi(i) \ F \ \bot} \] \end{minipage} \begin{minipage}{0.55\linewidth} if $i \in N(j)$ and it is not the case that $P(i)$ is a $V$-class visible from $\sigma.\chi(j)$; \end{minipage} \begin{minipage}[l][1.4cm]{0.4\linewidth} \[ \inferrule*[right=SVB]{\sigma\ T\ \Box_{i}\psi}{\tau.\chi(i)\ T\ \psi} \] \end{minipage} \begin{minipage}{0.55\linewidth} if $P(i)$ is a $V$-class visible from $\sigma$, $\tau.\chi(j)$ is the $P(i)$-view from $\sigma$ and $\tau.\chi(i)$ has already appeared in the tableau. \end{minipage} Then we have to redefine the frame $\mathcal{F}(b)$. Let $(R_i)_{i \in [n]}$ be such that for every $i \in [n]$, \[R_i = \{(\sigma,\sigma.\chi(i))\in (W(b))^2\} \cup \{ (w,w)\in (W(b))^2 | i \in F \} \] \[ \cup \] \[\{(\sigma,\tau.\chi(i))\in (W(b))^2| P(i) \mbox{ a }V\mbox{-class, $\tau.\chi(j)$ the $P(i)$-view from }\sigma \}\] and $\mathcal{F}(b) = (W(b),(R'_i)_{i\in [n]})$, where $(R'_i)_{i\in [n]}$ is the closure of $(R_i)_{i\in [n]}$ as it was defined before. Proposition \ref{prp:tableaucomplete} and its proof remain the same. 
To prove proposition \ref{prp:tableaux} for this version of the rules, follow the same proof, but for every $a \in W$ and every V-class $L$ fix some $L$-cluster for $a$; if $P(i)$ is a $V$-class then, where $(a_j)_{j\in P(i)}$ is the fixed $P(i)$-cluster for $\sigma^\mathcal{M}$, set $(\sigma.\chi(i))^\mathcal{M} = a_i$. Observe that if $P(i)$ is a $V$-class visible from $\sigma$ and $\tau.\chi(j)$ is the $P(i)$-view from $\sigma$, then in model $\mathcal{M}$ there is some $v$ such that $\sigma^\mathcal{M}, (\tau.\chi(j))^\mathcal{M} R_i v$, which by the definition of clusters in turn means that $\sigma^\mathcal{M} R_j (\tau.\chi(j))^\mathcal{M}$. The remaining proof is the same. Notice that if for every appearing world-prefix $\sigma.\chi(i)$, $i$ is always in the same V-class $L$, then all prefixes are of the form $\emptyset.\chi(j)$, where $j \in L$. In that case we can simplify the box rules, and in particular just ignore rule V, and end up with the following result. \begin{corollary}\label{cor:easycasesinpi2} When there is some V-class $L$ such that for every $i \in S\smallsetminus R $ there is some $i' \in L$ such that $iC_F^* i'$, then $(J^n_{D,F,V,C})_\mathcal{CS}$-satisfiability is in $\Sigma_2^p$. \end{corollary} \section{Complexity Jumps} In this section we look into some more specific cases of multi-agent justification logics and demonstrate certain jumps in the complexity of the satisfiability problem for these logics. We first revisit the two-agent logics from \cite{Achilleos14TRtwoagent}. As in the previous sections, we assume our constant specifications are schematic and axiomatically appropriate (and in \P\ for upper bounds). Our definition here of $(J^n_{D,F,V,C})_\mathcal{CS}$ allows for more two-agent logics than the ones that were studied in \cite{Achilleos14TRtwoagent}. 
It is not hard, though, to extend those results to all two-agent cases of $(J^n_{D,F,V,C})_\mathcal{CS}$: when there are $i,j$ with $\{i,j\} = [2]$, $i \in D \setminus F$, $\emptyset \neq V\subseteq \{(i,j), (j,j) \} $, $(i,j) \in C$, and $(j,i) \notin C$, then $(J^2_{D,F,V,C})_\mathcal{CS}$-satisfiability is \PSPACE-complete; otherwise it is in $\Sigma_2^p$ (see \cite{Achilleos14TRtwoagent}). We will further examine the following two cases. $\mathcal{J}_1, \mathcal{J}_2$ are defined in the following way: $n_1 = n_2 = 3$; $D_1 = D_2 = \{1,2\}$; $F_1 = F_2 = \emptyset$; $V_1 = \emptyset$, $V_2 = \{(3,3)\}$; $C_1 = C_2 = \{(3,1),(3,2)\}$; finally, for $i \in [2]$, $\mathcal{J}_i = (J^{n_i}_{D_i,F_i,V_i,C_i})_{\mathcal{CS}_i}$, where $\mathcal{CS}_i$ is some axiomatically appropriate and schematic constant specification. By an adjustment of the reductions in \cite{Achilleos14DiamondFreeArxiv}, as it was done in \cite{Achilleos14TRtwoagent}, it is not hard to prove that $\mathcal{J}_{1}$ is \PSPACE-hard and $\mathcal{J}_{2}$ is \EXP-hard.\footnote{$\mathcal{J}_1$ would correspond to what is defined in \cite{Achilleos14TRtwoagent} as ${\sf D}_2 \oplus_\subseteq {\sf K}$ and $\mathcal{J}_2$ to ${\sf D}_2 \oplus_\subseteq {\sf D4}$. Then we can pick a justification variable $x$ and we can either use the same reductions and substitute $\Box_i$ by $\term{x}{i}$, or we can just translate each diamond-free fragment to the corresponding justification logic in the same way. It is not hard to see then that the original modal formula behaves exactly the same way as the result of its translation with respect to satisfiability; just consider F-models where always ${\mathcal{A}}_i(t,\phi) = W$.} Notice that the way we prove $\PSPACE$-hardness for $\mathcal{J}_1$ is different in character from the way we prove the same result for the two-agent logics in \cite{Achilleos14TRtwoagent}. 
For $\mathcal{J}_1$ we use the way the tableau prefixes for it branch, while for $({\sf JD4} \times_C {\sf JD})_\mathcal{CS}$ the prefixes do not branch, but they increase to exponential size. In fact, we can see that $\mathcal{J}_1$ is \PSPACE-complete, while $\mathcal{J}_2$ is \EXP-complete. The respective tableau rules, as they turn out for each logic, are the following (notice that neither logic has any $\leq_{C}$-maximal V-classes): \emph{The rules for $\mathcal{J}_1$ are:} \\ \begin{minipage}[l][1.7cm]{0.3\linewidth} \[ \inferrule*[right=TrB]{\sigma\ T\ \term{t}{i}\psi}{\sigma\ T\ *_i(t,\psi) \\\\ \sigma\ T\ \Box_i\psi } \] \end{minipage} \begin{minipage}[l][1.3cm]{0.3\linewidth} \[ \inferrule*[right=TrD]{\sigma\ T\ \term{t}{1}\psi}{ \sigma.\{1\}\ F\ \bot } \] \end{minipage} \begin{minipage}[l][1.3cm]{0.3\linewidth} \[ \inferrule*[right=TrD]{\sigma\ T\ \term{t}{2}\psi}{ \sigma.\{2\}\ F\ \bot } \] \end{minipage} \begin{minipage}[l][1.7cm]{0.3\linewidth} \[ \inferrule*[right=TrD]{\sigma\ T\ \term{t}{3}\psi}{ \sigma.\{1\}\ F\ \bot \\\\ \sigma.\{2\}\ F\ \bot } \] \end{minipage} \begin{minipage}[l][1.3cm]{0.3\linewidth} \[ \inferrule*[right=Fa]{\sigma\ F\ \term{t}{i}\psi}{\sigma\ F\ *_i(t,\psi)} \] \end{minipage} \begin{minipage}[l][2cm]{0.3\linewidth} \[ \inferrule*[right=SB]{\sigma\ T\ \Box_{i}\psi}{\sigma.\{i\}\ T\ \psi} \] if $\sigma.\{i\}$ has already appeared; \end{minipage} \begin{minipage}[l][1.9cm]{0.25\linewidth} \[ \inferrule*[right=C]{\sigma\ T\ \Box_{3}\psi}{\sigma\ T\ \Box_{1}\psi \\\\ \sigma\ T\ \Box_{2}\psi} \] \end{minipage} Notice that the maximum length of a world prefix is at most $|\phi|$, since the depth (nesting of terms) of the formulas decreases whenever we move from $\sigma$ to $\sigma.\{i\}$. Also notice that when we run the $*$-calculus, there is no use for rule $*$V-Dis, so we can simply run the calculus on one world-prefix at a time, without needing the whole frame. 
Therefore, we can turn the tableau into an alternating polynomial time procedure, which uses a nondeterministic choice when the tableau would make a nondeterministic choice (when we apply the propositional rules) and uses a universal choice to choose whether to increase prefix $\sigma$ to $\sigma.\{1\}$ or to $\sigma.\{2\}$. This means that $\mathcal{J}_1$-satisfiability is \PSPACE-complete. \emph{The rules for $\mathcal{J}_2$ are the same with the addition of:} \\ \begin{minipage}[l][1.3cm]{0.4\linewidth} \[ \inferrule*[right=V]{\sigma\ T\ \Box_{3}\psi}{\sigma\ T\ \Box_{3}\Box_3\psi} \] \end{minipage} \begin{minipage}[l][1.5cm]{0.4\linewidth} \[ \inferrule*[right=S]{\sigma.\{i\} \ F \ \bot }{\sigma.\{i\}.\{1\} \ F \ \bot \\\\ \sigma.\{i\}.\{2\} \ F \ \bot} \] \end{minipage} For the tableau procedure of $\mathcal{J}_2$ we have no such bound on the size of the largest world-prefix, so we cannot have an alternating polynomial time procedure. As before, though, the $*$-calculus does not use rule $*$V-Dis, so again we can run the calculus on one world-prefix at a time. Furthermore, for every prefix $w$, $|\{ a \mid w\ a \in b \}|$ is polynomially bounded (observe that we do not need more than two boxes in front of any formula), so in turn we have an alternating polynomial space procedure. Therefore, $\mathcal{J}_2$-satisfiability is \EXP-complete. \end{document}
Combined in vitro and in vivo spheroid and matrigel assays revealed that these EPCs exhibit vasculogenic capacity by forming functional blood and lymph vessels.The lung contains large numbers of EPCs that display commitment for both types of vessels, suggesting that lung blood and lymphatic endothelial cells are derived from a single progenitor cell.In the developing embryo blood vessels and later also lymphatic vessels are formed via an initial process of vasculogenesis. This is followed by sprouting and intussusceptive growth of the vessels, termed angiogenesis for blood vessels and lymphangiogenesis for lymph vessels. These mechanisms give rise to a complete blood and lymphvascular system consisting of arteries, veins, capillaries and collectors. Endothelial cells (ECs) are specified according to the circulatory system Non-commutative Euclidean and Minkowski Structure A. Lorek,W. Weich,J. Wess Abstract: A noncommutative *-algebra that generalizes the canonical commutation relations and that is covariant under the quantum groups SOq(3) or SOq(1,3) is introduced. The generating elements of this algebra are hermitean and can be identified with coordinates, momenta and angular momenta. In addition a unitary scaling operator is part of the algebra. Falls in the older patient – time to change our views D Weich Continuing Medical Education , 2007, 'Defeating the dragon' – can we afford not to treat patients with heroin dependence? L Weich South African Journal of Psychiatry , 2010, The Hilbert Space Representations for SO_q(3)-symmetric quantum mechanics Wolfgang Weich Abstract: The observable algebra O of SO_q(3)-symmetric quantum mechanics is generated by the coordinates of momentum and position spaces (which are both isomorphic to the SO_q(3)-covariant real quantum space R_q^3). Their interrelations are determined with the quantum group covariant differential calculus. 
For a quantum mechanical representation of O on a Hilbert space essential self- adjointness of specified observables and compatibility of the covariance of the observable algebra with the action of the unitary continuous corepresent- ation operator of the compact quantum matrix group SO_q(3) are required. It is shown that each such quantum mechanical representation extends uniquely to a self-adjoint representation of O. All these self-adjoint representations are constructed. As an example an SO_q(3)-invariant Coulomb potential is intro- duced, the corresponding Hamiltonian proved to be essentially self-adjoint and its negative eigenvalues calculated with the help of a q-deformed Lenz-vector. Equivariant spectral asymptotics for h-pseudodifferential operators Tobias Weich Mathematics , 2013, DOI: 10.1063/1.4896698 Abstract: We prove equivariant spectral asymptotics for $ h$-pseudodifferential operators for compact orthogonal group actions generalizing results of El-Houakmi and Helffer (1991) and Cassanas (2006). Using recent results for certain oscillatory integrals with singular critical sets (Ramacher 2010) we can deduce a weak equivariant Weyl law. Furthermore, we can prove a complete asymptotic expansion for the Gutzwiller trace formula without any additional condition on the group action by a suitable generalization of the dynamical assumptions on the Hamilton flow. On the support of Pollicott-Ruelle resonanant states for Anosov flows Abstract: We show that all generalized Pollicott-Ruelle resonant states of a topologically transitiv $C^\infty$Anosov flow with an arbitrary $C^\infty$ potential, have full support.
Does a magnet create energy?

I know what I am asking is not possible, but there is a scenario I am pondering over that I cannot explain. So let's assume there is a huge magnet suspended from the top of the Burj Khalifa (the tallest building, in Dubai). The magnet is so powerful it can pull objects from the ground below. I keep a series of metallic balls which get raised to the height where the magnet is, thus gaining potential energy. Where does this energy come from? This question may be a totally stupid one, but I can't think of an explanation.

Tanishk Sharma

The energy comes from the magnetic field's potential energy. When the balls are attracted to the magnet, some of the potential energy of the magnetic field gets converted into the kinetic energy of the balls and the gravitational potential energy of the balls.

The magnetic field stores energy by existing. When the ball goes from being far from the magnet to close to it, the field produced by the ball + magnet will store less energy, because the field will be slightly less intense overall. So, as Noah P said, the work was done when the strong magnet was created. It's the same question, fundamentally, as if I lifted a rock to the top of the tower and dropped it: where was the energy stored while I waited to drop it? Answer: in the slightly more intense gravitational field generated between the ball and the Earth when they're farther apart than when they're close.

In SI units, the energy density stored in an electromagnetic field is: $$u_{EM} = \frac{1}{2} \mathbf{E} \cdot \mathbf{D} + \frac{1}{2} \mathbf{B} \cdot \mathbf{H},$$ where $\mathbf{E}$ is the electric field strength, $\mathbf{D}$ is called the displacement field, and both $\mathbf{B}$ and $\mathbf{H}$ are known as magnetic fields, though they have different units. For more, look up information on the Maxwell stress tensor. A similar equation holds for the pre-Einstein gravitational field.
The energy density stored in a gravitational field is: $$u_G = \frac{1}{8\pi G} \mathbf{g}^2. $$ In either case, the total energy stored in the field is the integral over the volume the fields occupy.

Sean E. Lake

A constant magnetic field doesn't do any work on a moving charge. This, however, doesn't mean that a magnetic field can't do work if it is not constant. The field has an energy density of $B^2/2\mu$, and this energy can be converted into work if the field changes, e.g. by introducing a magnetic material. It's the change in the field strength when the magnetic material is being introduced that allows such a system to perform mechanical work. In the case of electromagnets, the necessary energy comes from the electric current that powers the magnet. In a quasi-static configuration, i.e. when the magnetic fields change slowly, we can calculate the total energy of the system (i.e. the energy in the fields as well as the energy in the magnetized materials) as a function of the position of the magnetic materials. For linear magnetic materials we can use $B = \mu H$, and then the total energy of the system becomes $$E_{\text{magnetic}} = \frac{1}{2}\int \mathbf{H} \cdot \mathbf{B} \, dV.$$ The system will be most stable in the configuration in which this total energy is smallest, i.e. we can treat this almost like a potential problem. I say almost because we are usually dealing with extended magnetized bodies, i.e. we need to consider both the position and the orientation of the bodies, so it is a little more complicated than in the case of a gravitational or electric potential.

Harsha Vardhan K

No. The books always balance.

...The magnet is so powerful it can pull objects from the ground below. I keep a series of metallic balls which get raised to the height where the magnet is thus gaining potential energy. Where does this energy come from?

The metallic balls. To understand this, imagine you're standing at the top of a 2,722 ft tower with a 1 kg metal ball in your hand. You drop it.
It plummets to Earth, and smashes into a car. KABOOM! The car is smashed up because the ball hit it with considerable kinetic energy. Where did this kinetic energy come from? It didn't come from the Earth, or from the Earth's gravitational field. It came from the ball. When you dropped it, gravity converted gravitational potential energy in the ball into kinetic energy. After the kinetic energy was dissipated, the ball was left with a mass deficit.

Note that if you have two equal-sized gravitating bodies which fall towards each other, the kinetic energy and mass deficit are shared equally. But if one body is much larger than the other, this is not true. Momentum p=mv is shared equally, but kinetic energy KE=½mv² is not. When one body is so much more massive than the other that you cannot detect it moving towards the common centre, you discount its motion and say the smaller body has all the kinetic energy.

Also note that you have to do work on your ball to lift it up to an altitude of 2,722 ft. Let's say you throw it straight up into the air. You give it considerable kinetic energy whilst giving the Earth effectively none. Gravity then converts this kinetic energy into potential energy. This is not in the Earth, or the Earth's gravitational field, it's in the ball. Nowhere else. You can work this out because if you threw the ball upwards at about 11.2 km/s it would have escape velocity. It would leave the Earth forever, taking all its kinetic/potential energy with it. That's because the work you did on the ball and the energy you gave to the ball increase the mass of the ball. It's the inverse of the mass deficit. This is why potential energy is mass-energy.

OK, now let's replace the Earth with a magnet. Your ball starts off stuck to the magnet, and you have to do work on it to pull it away from the magnet. When you let it go it "falls" towards the magnet. Potential energy is converted into kinetic energy which then gets dissipated, and the ball is left with a mass deficit.
It's similar for the electron and the proton. There's a mass deficit of 13.6 eV, and most of this can be assigned to the electron, because it's 1837 times less massive than the proton. A combination of two systems is fairly straightforward. If you're on the ground and your ball falls up towards the magnet, less potential energy is being converted into kinetic energy. If the Earth wasn't present, there would be more conversion into kinetic energy, and the ball would hit the magnet at a higher speed. The books always balance.

John Duffield

Comment: Would the downvoter care to comment? And point out any errors in the above? – John Duffield, Sep 8 '16 at 15:54
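As a rough numerical companion to the answers above (the constants, function names and tower-height conversion here are our own, purely illustrative), one can check the magnitudes involved: the energy stored in a magnetic field via u = B²/2μ₀, the potential energy gained by the 1 kg ball, its mass equivalent via E = mc², and the escape speed √(2GM/R):

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
G = 6.674e-11               # gravitational constant (m^3 kg^-1 s^-2)
M_EARTH = 5.972e24          # mass of the Earth (kg)
R_EARTH = 6.371e6           # mean radius of the Earth (m)
C = 2.998e8                 # speed of light (m/s)

def field_energy(B, volume):
    """Energy stored in a uniform magnetic field B (tesla) filling
    `volume` (m^3): energy density B^2 / (2*mu_0), times the volume."""
    return B**2 / (2 * MU_0) * volume

def potential_energy(m, h, g=9.81):
    """Gravitational PE gained lifting mass m (kg) through height h (m)."""
    return m * g * h

def mass_deficit(energy):
    """Mass equivalent of an energy (J), via E = m c^2."""
    return energy / C**2

def escape_velocity():
    """Escape speed from the Earth's surface, sqrt(2 G M / R)."""
    return math.sqrt(2 * G * M_EARTH / R_EARTH)

h = 2722 * 0.3048                 # tower height in metres
pe = potential_energy(1.0, h)     # ~8.1 kJ for the 1 kg ball
print(field_energy(1.0, 1.0))     # ~4.0e5 J in 1 m^3 of a 1 T field
print(pe, mass_deficit(pe))       # ~8.1e3 J, mass equivalent ~9e-14 kg
print(escape_velocity())          # ~1.12e4 m/s
```

The tiny mass equivalent (~10⁻¹³ kg) illustrates why the "mass deficit" is never noticed in everyday drops, even though the bookkeeping balances.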
Weather or Knot: A Coriolis Effect Blog

Hurricane Season 2016 Officially Ends, Higher ACE Index

After the record-breaking, extremely strong El Niño of last hurricane season, the Atlantic, the EPAC and the Central Pacific were all above normal this season. This was pretty much forecast, but Mother Nature fools forecasters all the time. A short summary of the season follows:

For the Atlantic, 2012 was the last time there was an above-average season. The Atlantic saw 15 named storms during hurricane season 2016. Out of the 15 named storms, the Atlantic basin had 7 hurricanes: Alex, Earl, Gaston, Hermine, Matthew, Nicole, and Otto. What surprised many is that 3 of those 7 hurricanes were major hurricanes – Gaston, Matthew, and Nicole. Many were wondering whether Florida would be spared a hurricane this year. After all these years and true to form, Florida had two tropical storms – Colin and Julia – and finally, Hurricane Hermine made landfall in Florida, the first since Hurricane Wilma in 2005. The eastern coast of the US was also battered, with South Carolina seeing landfalls from Tropical Storm Bonnie and Hurricane Matthew. Speaking of Hurricane Matthew, it was the longest-lived storm in the Atlantic and also the strongest, with maximum sustained winds of 160 MPH, a category 5. Hurricane Matthew made landfall in Haiti, Cuba, and the Bahamas as a category 4 storm during its trek through the Eastern Caribbean and the Atlantic. Lest we forget, there were other storms away from the US: Tropical Storm Danielle visited Mexico, Hurricane Earl hit Belize and lastly, very late in the season, Hurricane Otto hit Nicaragua.

So, how do we get a better feel for the hurricane season activity this year? One method is the ACE (Accumulated Cyclone Energy) Index.
The ACE Index sums the energy accumulated by all the cyclones (tropical storms, subtropical storms and hurricanes) that form within the hurricane season. ACE is calculated using the square of the wind speed every six hours during a storm's lifetime. Remember, a tropical depression is below 35 knots and is not added. Hence, a longer-lived cyclone will have a higher ACE index than a shorter-lived cyclone. The ACE index of a single cyclone uses the wind speed (35 knots or higher) at six-hour intervals (0000, 0600, 1200, 1800 UTC (Coordinated Universal Time)), expressed in units of 10⁴ kn². Total ACE:

ACE = Σ Vmax² / 10⁴

* Vmax is the estimated sustained wind speed in knots.
* An average seasonal ACE in the Atlantic is between 105 and 115, with a mean of about 110.

2016 Atlantic ACE

Tropical Cyclone Name | Max Wind Speed (knots) | ACE (10⁴ kn²)
Alex *                |                        |
Bonnie *              |                        |
Danielle *            |                        |
Fiona *               |                        |
Gaston                | 105                    | 24.2925
ACE Total:            |                        |

Please note: Those with red asterisks have Tropical Cyclone Reports rather than the Operational Advisories. The TCRs are more comprehensive in detail. Those without the red asterisks will have TCRs in time. The ACE Index numbers may change with each TCR. I will try to update this post at that time.

Tags: 2016 hurricane season, ACE

2014 Hurricane Season. An El-Niño Year?

Before the 2014 Atlantic hurricane season began, it was touted by most experts to be an "El Niño" year, mostly due to some of the early indicators. A few things meteorologists monitor when it comes to tropical development in general include: wind shear strength, sea surface temperatures/deep sea temperatures, the SOI (Southern Oscillation Index) in the case of an El Niño or La Niña, and the MJO (Madden Julian Oscillation).
A simplified description of an El Niño is basically an area in the Pacific Ocean that warms to above-average sea surface temperatures. An El Niño can cause strong winds to blow eastward over Mexico and help shear off the cloud tops of thunderstorms. This helps to reduce both the number and the intensity of tropical storms and hurricanes that might be trying to develop in the Atlantic. Between February and May, just before the hurricane season began, there were a series of equatorial Kelvin waves in the Pacific which allowed for the possibility of a moderate to strong El Niño. (A Kelvin wave, in oceanography, is an extremely long ocean wave that propagates eastward toward the coast of South America, where it causes the upper ocean layer of relatively warm water to thicken and sea level to rise. Kelvin waves occur toward the end of the year preceding an El Niño event, when an area of unusually intense tropical storm activity over Indonesia migrates toward the equatorial Pacific west of the International Date Line. This migration brings episodes of eastward wind reversals to that region of the ocean, which spawn Kelvin waves. Although such intense tropical storms of the western Pacific are associated with the development of El Niño, they may occur in other years to produce Kelvin waves that also propagate eastward but continue poleward toward Chile and California.) While the Kelvin waves did transport the higher than normal sea surface heights, the atmosphere did not follow along with the Kelvin waves and the waves eventually faded away. Recently though, two more eastward-moving Kelvin waves were seen via the Jason-2 satellite. Will these two Kelvin waves be the precursor of the El Niño? Even if they are, the El Niño would most likely be a weak to moderate one.

[Image courtesy NASA/JPL-Caltech]

Another possible indicator of an El Niño is that the SOI has now shifted into negative territory.
This is not a true indicator as of yet, though: the SOI needs to have sustained negative values below −8. Close but no cigar.

[Chart courtesy: Australia Bureau of Meteorology]

******* For the math minded ********

There are a few different methods of calculating the SOI. The method used by the Australian Bureau of Meteorology is the Troup SOI, which is the standardised anomaly of the mean sea level pressure (MSLP) difference between Tahiti and Darwin. It is calculated as follows:

SOI = 10 × (Pdiff − Pdiffav) / SD(Pdiff)

where:
Pdiff = (average Tahiti MSLP for the month) − (average Darwin MSLP for the month),
Pdiffav = long-term average of Pdiff for the month in question, and
SD(Pdiff) = long-term standard deviation of Pdiff for the month in question.

The multiplication by 10 is a convention. Using this convention, the SOI ranges from about −35 to about +35, and the value of the SOI can be quoted as a whole number. The dataset the Bureau uses has 1933 to 1992 as the climatology period. The SOI is usually computed on a monthly basis, with values over longer periods such as a year sometimes being used. Daily or weekly values of the SOI do not convey much useful information about the current state of the climate, and accordingly the Australian Bureau of Meteorology does not issue them. Daily values in particular can fluctuate markedly because of daily weather patterns, and should not be used for climate purposes.

So what does this mean for this year's Atlantic hurricane season? Probably not a whole lot. So far there is no empirical evidence either way to back up any claims of a possible El Niño. What is known is that trains of tropical cyclones are still developing in the Pacific, whereas in the Atlantic basin this has been a very slow hurricane season, with one tropical storm and four hurricanes, Hurricane Edouard being a major hurricane, albeit short-lived as a major.
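The Troup SOI arithmetic described above can be sketched in a few lines (the monthly pressure values here are invented, purely to show the calculation; only the formula comes from the Bureau's method):

```python
def troup_soi(p_tahiti, p_darwin, pdiff_av, pdiff_sd):
    """Troup SOI: ten times the standardised anomaly of the monthly
    Tahiti-minus-Darwin mean sea level pressure difference."""
    pdiff = p_tahiti - p_darwin
    return 10 * (pdiff - pdiff_av) / pdiff_sd

# Invented monthly MSLP values (hPa) and climatology for the month:
soi = troup_soi(1012.3, 1010.9, 2.2, 1.0)
print(round(soi))  # a sustained run of values below -8 would favour El Nino
```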
As the Cape Verde season is slowly drawing to an end (a Cape Verde-type hurricane is an Atlantic hurricane that develops near the Cape Verde islands, off the west coast of Africa; the disturbances move off the western coast of Africa and may become tropical storms or tropical cyclones within 1,000 kilometres (620 mi) of the Cape Verde Islands, usually between July and September), the remainder of the tropical cyclone development (if any) will most likely be in the Western Caribbean and the Gulf of Mexico, with the possibility of development closer to the U.S. Eastern coast. It would be foolish for me to postulate whether the rest of the Atlantic hurricane season will continue to be as slow as it has been, and whether the forecasted El Niño has been the cause (whether it is classified or not). As I stated in an earlier post:

So, how do we get a better feel for the hurricane season activity this year? One method is the ACE (Accumulated Cyclone Energy) Index. The ACE Index sums the energy accumulated by all the cyclones (tropical storms, subtropical storms and hurricanes) that form within the hurricane season. ACE is calculated using the square of the wind speed every six hours during a storm's lifetime. Remember, a tropical depression is below 35 knots and is not added. Hence, a longer-lived cyclone will have a higher ACE index than a shorter-lived cyclone. The ACE index of a single cyclone uses the wind speed (35 knots or higher) at six-hour intervals (0000, 0600, 1200, 1800 UTC (Coordinated Universal Time)), expressed in units of 10⁴ kn². Total ACE:

ACE = Σ Vmax² / 10⁴

* Vmax is the estimated sustained wind speed in knots.
* An average seasonal ACE in the Atlantic is between 105 and 115, with a mean of about 110.
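The ACE bookkeeping described above can be sketched in a few lines of Python (a minimal illustration; the storm wind histories below are made up):

```python
def ace(six_hourly_winds_kt):
    """Accumulated Cyclone Energy contributed by one storm: the sum of
    Vmax^2 over its 6-hourly advisories, divided by 10^4. Winds below
    35 kt (tropical-depression strength) are not counted."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 1e4

# Made-up wind histories: a short-lived storm vs. a longer-lived one
# of similar peak intensity.
short_lived = [35, 45, 50, 45, 30]
long_lived = [35, 45, 50, 55, 50, 45, 45, 40, 35]
print(ace(short_lived), ace(long_lived))
# The longer-lived storm contributes the larger ACE; a seasonal total
# is just the sum of ace(...) over every named storm.
```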
Although technically we are in a neutral state, this season "almost" seems more like an El Niño year. An El Niño tends to hinder tropical cyclone development in the Atlantic, due to higher amounts of shear. But in a true El Niño year, you will normally have a very active season in the East Pacific (EPAC), with a few major hurricanes, and a subdued Atlantic. In the figure below, as of this time frame, the ACE Index for this year is actually lower than last year's (36.1200). That said, there are still October and November for the possibility of tropical cyclone development. If any tropical cyclones do develop, I will add them to the ACE Index accordingly. Secondly, the ACE Index numbers in the chart are from the Operational Advisories. The numbers may change when the Tropical Cyclone Reports are released. A blue asterisk will be placed next to a storm name when a TCR has been issued.

Addendum: As of 10/11/14, with the 0000 UTC advisory for Tropical Storm Fay, the ACE Index has now surpassed last year's ACE. In addition, with recently developed Hurricane Gonzalo now a category 4 hurricane, the ACE Index will continue to rise significantly. This is the first time there has been a category 4 hurricane in the Atlantic basin since Ophelia in 2011.

Arthur *
Bertha *
Cristobal *
Edouard *
Fay *
Gonzalo *
Hanna *

As always, please be sure to use the official information from the NHC or your local NWS. Although care has been taken in preparing the information supplied through the Weather or Knot blog, Weather or Knot does not and cannot guarantee its accuracy. I am not a professional meteorologist and this post is prepared purely for entertainment purposes.
Tags: ACE, el-nino, Kelvin Waves, MJO, SOI

2013 Atlantic Hurricane Season: A Busted Forecast

Although the 2013 Atlantic hurricane season has not yet concluded, and we still have November to contend with before the season will officially be over, all indicators are beginning to make it obvious that the end of the show is near. If you recall, in my July hurricane season forecast all signals were pointing to an active season. All the criteria seemed to fit, but for some unknown reason a big piece of the puzzle has eluded forecasters as to why this year's hurricane season was very, very below normal. Yes, the Atlantic basin did have a total of 13 named storms, so in one sense the season has been "active", but there were only 2 hurricanes and both were minimal category 1 hurricanes. However, 13 named storms really cannot tell the whole story. An average seasonal ACE in the Atlantic is between 105 and 115, with a mean of about 110. Although technically we are in a neutral state, this season "almost" seems more like an El Niño year. An El Niño tends to hinder tropical cyclone development in the Atlantic, due to higher amounts of shear. But in a true El Niño year, you will normally have a very active season in the East Pacific (EPAC), with a few major hurricanes, and a subdued Atlantic.

No-Named Storm (AL152013)

As part of the post-season review, the low pressure system in December (AL152013) was classified as a subtropical cyclone, and the ACE Index has been updated to reflect the ACE numbers during its short-lived status as a subtropical cyclone. Two notes of interest: one is that of the two hurricanes this year, Humberto was just hours from beating the record for the latest first Atlantic hurricane of the season. Secondly, this will be the 8th year in a row in which the Atlantic hurricane season has not had a major hurricane (category 3 or higher).
In fact, 10/24/2013 was the 8th anniversary of Hurricane Wilma making landfall on the southwestern side of Florida as a category 3 hurricane; it headed northeastward, and the Miami-Dade, Broward, and Palm Beach counties were impacted with category 2 winds. If this year's hurricane season ends with no major hurricanes, we will tie the record. So is there a last hurrah for one or two more storms, or is it time to fold the tents and call the season over? Technically no, as the hurricane season does not officially end until November 30. This does not mean a cyclone cannot develop. It is just that a cyclone developing after the official season is over is a somewhat rare event, but it is not impossible. Usually any late-season storms develop either in the Southwestern Atlantic or, more likely, in the Caribbean. The two late-season cyclones that come to mind are Sandy (2012) and Mitch (1998). As of yet, it appears that the rest of November will be quiet, and although there is a chance of development in the Caribbean due to the MJO (Madden Julian Oscillation: lift, or upward motion in the atmosphere, allows easier development of a tropical cyclone through increased convection/thunderstorm activity), I am somewhat skeptical of that. I won't rule it out yet, but for the moment I am just not convinced this will happen, IMHO.
As always, remember to keep an eye to the sky and to stick with official information, either from your local WFO (Weather Forecast Office) or the NHC.

Tags: 2013 hurricane season, ACE, MJO, NHC, WFO

© 2021 Weather or Knot
\begin{document} \title{Generalised differences and a class of multiplier operators in Fourier analysis} \date{} \begin{abstract} Any zeros in the multiplier of an operator impose a condition on a function for it to be in the range of the operator. But if each function in a certain family $\mathcal F$ of functions satisfies such a condition, when is $\mathcal F$ the range of the operator? Let $\alpha,\beta\in {\mathbb Z}$, for $g\in L^2([0,2\pi])$ let ${\widehat g}$ be the sequence of Fourier coefficients of $g$, and let $D$ denote differentiation. We consider the operator $D^2-i(\alpha+\beta)D-\alpha\beta$ on the second order Sobolev space of $L^2([0,2\pi])$. The multiplier of this operator is $-(n-\alpha)(n-\beta)$, so that ${\widehat g}(\alpha)={\widehat g}(\beta)=0$ for any function $g$ in the range of the operator. Let $\delta_x$ denote the Dirac measure at $x$, and let $\ast$ denote convolution. If $b\in [0,2\pi]$ let $\lambda_b$ be the measure \[\footnotesize{\frac{1}{2}\left[e^{ib\left(\frac{\alpha-\beta}{2}\right)}+e^{-ib\left(\frac{\alpha-\beta}{2}\right)}\right]\delta_0- \frac{1}{2}\left[e^{ib\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{b}+e^{-ib\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{-b}\right]}.\] A function of the form $\lambda_b \ast f$ is called a \emph{generalised difference}, and we let ${\mathcal F}$ be the family of functions $h$ such that $h$ is some finite sum of generalised differences. It is shown that ${\mathcal F}$ is a closed subspace of $L^2({\mathbb T})$ that equals the range of $D^2-i(\alpha+\beta)D-\alpha\beta$, and that every function in ${\mathcal F}$ is a sum of five generalised differences. The methods use partitions of $[0,\pi/2]$ and estimates of integrals in Euclidean space. There are applications to the automatic continuity of linear forms. \end{abstract} \let\thefootnote\relax\footnote{2010 \emph{Mathematics Subject Classification}.
Primary 42A16, 42A45, \break Secondary 43A15, 46H40. \emph{Key words and phrases.} Circle group, Fourier analysis, $L^2$ spaces, multiplier \break operators, generalised differences, compact abelian groups, automatic continuity} \section{Introduction} \setcounter{equation}{0} The circle group $\{z: |z|=1\}$ is denoted by ${\mathbb T}$. The mapping $x\longmapsto e^{ix}$ from $[0,2\pi)$ to ${\mathbb T}$ means that we can and will identify ${\mathbb T}$ with $[0,2\pi)$ or $[0,2\pi]$ in the usual way, and both settings will be used (see the comments in \cite[page 1034]{ross1}). The group operation $+$ on $[0,2\pi)$ is written additively and is the usual addition if $0\le x+y<2\pi$, and is $x+y-2\pi$ if $2\pi \le x+y$. The space $L^2({\mathbb T})$ is the Hilbert space of square integrable complex functions on $\mathbb T$, and $M({\mathbb T})$ denotes the set of bounded, regular, complex Borel measures on ${\mathbb T}$. Let $\mathbb Z$ denote the set of integers, and let $\mathbb N$ denote the set of positive integers. The convolution operation in $M({\mathbb T})$ is denoted by $\ast$. Thus, if $\mu\in M({\mathbb T})$ and $n\in {\mathbb N}$, $\mu^n$ denotes $\mu\ast\mu\ast\cdots\ast\mu$, where $\mu$ appears $n$ times. Considering $n\in {\mathbb Z}$, $f\in L^2({\mathbb T})$ and $\mu\in M({\mathbb T})$, we define the Fourier coefficients ${\widehat f}(n)$ and ${\widehat {\mu}}(n)$ by \[{\widehat f}(n)=\frac{1}{2\pi}\int_0^{2\pi}f(t)\,e^{-int}\,dt \ {\rm and}\ {\widehat {\mu}}(n)=\int_0^{2\pi}e^{-int}\,d\mu(t).\] Note that for $\mu,\nu\in M({\mathbb T})$, $( {\mu\ast \nu})\,{\widehat {\ }}={\widehat \mu}\,{\widehat \nu}$. Letting $\delta_x$ denote the Dirac measure at $x$, we see that ${\widehat {\delta_x}}(n)=e^{-inx}$ for $x\in [0,2\pi]$, and ${\widehat {\delta_z}}(n)=z^{-n}$ for $z\in {\mathbb T}$.
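For example, since $\delta_x\ast\delta_y=\delta_{x+y}$, the convolution identity can be checked directly for Dirac measures:
\[(\delta_x\ast \delta_y)\,{\widehat {\ }}\,(n)={\widehat {\delta_{x+y}}}(n)=e^{-in(x+y)}=e^{-inx}\,e^{-iny}={\widehat {\delta_x}}(n)\,{\widehat {\delta_y}}(n),\]
for all $n\in {\mathbb Z}$.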
For $s=0,1,2,\ldots $ the Sobolev space $W^s({\mathbb T})$ is defined by \[W^s({\mathbb T})=\Bigl\{f:f\in L^2({\mathbb T}) \ \, {\rm and}\, \sum_{n=-\infty}^{\infty}|n|^{2s}|{\widehat f}(n)|^2<\infty\Bigr\}.\] The differential operator $D$ maps $W^1({\mathbb T})$ into $L^2({\mathbb T})$ and can be defined by the property \[D(f){\widehat {\ }}(n)=i n{\widehat f}(n), \ {\rm for \ all}\ n\in {\mathbb Z}.\] We say that $D$ is a \emph{multiplier operator on} $W^1({\mathbb T})$ with \emph{multiplier} $in$. Note that this means that if $f\in W^1({\mathbb T})$, $D(f){\widehat {\ }}(0)=0$. Conversely, it is easy to see that if $g\in L^2({\mathbb T})$ and ${\widehat g}(0)=0$, then $g=D(f)$ for some $f\in W^1({\mathbb T})$. Similarly, $D^s$ is a multiplier operator from $W^s({\mathbb T})$ into $L^2({\mathbb T})$ with multiplier $(in)^s$. Let $\alpha,\beta \in {\mathbb Z}$ and $s\in {\mathbb N}$ be given, and let $I$ denote the identity operator. In this paper, and for $s\in {\mathbb N}$, we are concerned with operators $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s$ that map $W^{2s}({\mathbb T})$ into $L^2({\mathbb T})$. These are multiplier operators with multipliers $ (-1)^s(n-\alpha)^s(n-\beta)^s$. That is, for all $f\in W^{2s}({\mathbb T})$, \[\bigl[(D^2-i(\alpha+\beta)D-\alpha\beta I)^s(f)\bigr]\,{\widehat {\ }}\,(n)=(-1)^s(n-\alpha)^s(n-\beta)^s{\widehat f}(n),\] for all $n\in {\mathbb Z}$. Note that the zeros of the multiplier of $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s$ are $\alpha$ and $\beta$. The operator $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s$ acting on $f\in W^{2s}({\mathbb T})$ eliminates the frequencies $\alpha$ and $\beta$ from $f$. A particular case of the above is when $\alpha=-\beta$. Then the operator $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s$ becomes $(D^2+\alpha^2I)^s$, and it has the multiplier $(\alpha^2-n^2)^s$, which has zeros at $-\alpha$ and $\alpha$.
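To see why the multiplier factorises as stated, take $s=1$ and apply the operator coefficientwise, using $D(f){\widehat {\ }}(n)=in{\widehat f}(n)$:

```latex
\bigl[(D^2-i(\alpha+\beta)D-\alpha\beta I)(f)\bigr]\,{\widehat {\ }}\,(n)
=\bigl((in)^2-i(\alpha+\beta)(in)-\alpha\beta\bigr){\widehat f}(n)
=-(n-\alpha)(n-\beta)\,{\widehat f}(n),
```

and the case of a general $s\in {\mathbb N}$ follows upon iterating.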
In order to give a feeling for the ideas in this paper, here is a statement of a special case of what is perhaps the main result. \emph{Let $\alpha\in {\mathbb Z}.$ Then the following conditions (i), (ii) and (iii) are equivalent for a function $f\in L^2({\mathbb T})$}. \emph{(i)} ${\widehat f}(-\alpha)={\widehat f}(\alpha)=0$. \emph{(ii)} \emph{$f$ is a sum of five functions, for each one $h$ of which there are $z\in {\mathbb T}$ and $g\in L^2({\mathbb T})$ such that } \begin{equation} h=(z^{\alpha}+z^{-\alpha})g -(\delta_{z}+\delta_{z^{-1}})\ast g.\label{eq:differences1} \end{equation} \emph{ (iii) There is $g\in W^2({\mathbb T})$ such that $(D^2+\alpha^2I)(g)=f$.} Now a function of the form $g-\delta_z\ast g$ might be called a first order difference, and one of the form $(\delta_1-\delta_z)^s\ast g$ might be called a difference of order $s$ (see \cite{nillsen1,nillsen2}). So, a function as in (\ref{eq:differences1}) we call a type of \emph{generalised difference}. The form of generalised differences is derived from the factors of the multiplier of the operator, rather than by replacing the elements $D$ and $D^2$ in the operator by first and second order differences. The present ideas in proving a result of the above type involve looking at the structure of partitions of $[0,\pi/2]$ associated with the zeros in $[0,\pi/2]$ of the functions $\sin((n-\alpha)x)$ and $\sin((n-\beta)x)$, and estimating integrals in ${\mathbb R}^{m}$ over sets that are Cartesian products of sets in a refined partition. The results here are related to work of Meisters and Schmidt \cite{meisters1}, where they proved the following result. \emph{The following conditions (i) and (ii) are equivalent for a function $f\in L^2({\mathbb T})$}. 
\emph{(i) ${\widehat f}(0)=0$.} \emph{(ii) $f$ is a sum of three functions, for each one $h$ of which there are $z\in {\mathbb T}$ and $g\in L^2({\mathbb T})$ such that} $h=g -\delta_z\ast g.$ They deduced from this that if $T$ is a linear form on $L^2({\mathbb T})$ such that $T(f)=T(\delta_z\ast f)$ for all $z\in {\mathbb T}$ and $f\in L^2({\mathbb T})$, then $T$ is continuous. That is, any translation invariant linear form on $L^2({\mathbb T})$ is automatically continuous, and this was proved more generally for compact, connected abelian groups. Further work relating to the original results of Meisters and Schmidt can be found, for example, in \cite{bourgain1, johnson1, meisters2, meisters3, nillsen1, nillsen2} where there are further references. One way to think of the ideas in \cite{meisters1} is that they are concerned with the range of the differential operator $D$ whose multiplier is a linear polynomial. On the other hand, the present work is concerned with differential operators whose multipliers are quadratic in nature. The work here goes back to \cite{meisters1}, but takes some of the ideas there in a different direction from the mainstream of later work. There are also applications to automatic continuity of linear forms. A suitable general reference on classical Fourier series is \cite{edwards1}, and for abstract harmonic analysis on groups see \cite{hewitt1, ross1}. \section{Background and statement of the main result} \setcounter{equation}{0} There will be cause to consider series of the form $\sum_{j=-\infty}^{\infty}a_j/b_j$, where $a_j,b_j\ge 0$ for all $j$. In such a case, we write $\sum_{j=-\infty}^{\infty}a_j/b_j=\infty$ if there is a term of the form $a_j/0$ with $a_j>0$. If there are any terms of the form $0/0$ we either neglect them in the sum or make them equal to $0$. 
We write respectively $\sum_{j=-\infty}^{\infty}a_j/b_j<\infty$ or $\sum_{j=-\infty}^{\infty}a_j/b_j= \infty$ if the series $\sum_{j=-\infty, a_j>0}^{\infty}a_j/b_j$ converges or diverges in the usual sense. The following result is due essentially to Meisters and Schmidt \cite[page 413]{meisters1}. \begin{theorem} \label{theorem:characterisation} Let $f\in L^2([0,2\pi])$ and let $\mu_1,\mu_2,\ldots,\mu_r\in M([0,2\pi])$. Then the following conditions (i) and (ii) are equivalent. (i) There are $f_1,f_2,\ldots, f_r\in L^2([0,2\pi])$ such that $f=\sum_{j=1}^r\mu_j\ast f_j.$ \vskip0.2cm (ii) \hskip 3.6cm$\displaystyle \sum_{n=-\infty}^{\infty}\,\frac{|{\widehat f }(n)|^2}{\displaystyle{\sum_{j=1}^r}|{\widehat {\mu_j}}(n)|^2}\ <\ \infty.$ \end{theorem} {\bf Proof.} This is essentially proved in \cite[pages 411-412 ]{meisters1}, but see also \cite[pages 77-88]{nillsen1} and \cite[page 23]{nillsen2}. A more accessible proof for the present context is on the world wide web \cite{nillsen3}. $\square$ In \cite{meisters1}, the measures in Theorem \ref{theorem:characterisation} were taken to be of the form $\mu_b=\delta_0-\delta_b$, in which case $\mu_b([0,2\pi])=0$ and ${\widehat {\mu_b}}(n)=1-e^{-ibn}$. In the context here, we let $\alpha,\beta\in {\mathbb Z}$ and we will apply Theorem \ref{theorem:characterisation} using measures $\lambda_b$, $b\in [0,2\pi)$, where \begin{equation}\lambda_b=\frac{1}{2}\left[e^{ib\left(\frac{\alpha-\beta}{2}\right)}+e^{-ib\left(\frac{\alpha-\beta}{2}\right) } \right]\delta_0- \frac{1}{2}\left[\ e^{ib\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{ b}+e^{-ib\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{- b}\right].\label{eq:lambdasubb} \end{equation} Note that the Fourier transform ${\widehat {\lambda_b}}$ of $\lambda_b$ is given for $n\in {\mathbb Z}$ by \begin{equation} {\widehat {\lambda_b}}(n)=\cos \left(\left(\frac{\alpha-\beta}{2}\right) b\right)-\cos \left(\left(n-\frac{\alpha+\beta}{2}\,\right)b\right).
\label{eq:Fouriertransform} \end{equation} Consequently, \begin{equation} {\widehat {\lambda_b}}(\alpha)={\widehat {\lambda_b}}(\beta)=0, \ {\rm for \ all}\ b\in [0,2\pi].\label{eq:fourierzero} \end{equation} So if $b\in [0,2\pi]$ and $g\in L^2([0,2\pi])$, \begin{equation} \lambda_b\ast g=\frac{1}{2}\left[e^{ib\left(\frac{\alpha-\beta}{2}\right)} +e^{-ib\left(\frac{\alpha-\beta}{2}\right)}\right]g- \frac{1}{2}\left[\ e^{ib\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{ b}+e^{-ib\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{- b}\right]\ast g,\label{eq:generaliseddifference} \end{equation} and we see that if $f\in L^2([0,2\pi])$ is a function of the form $\lambda_b\ast g$, then ${\widehat f}(\alpha)={\widehat f}(\beta)=0$. A function of the form $\lambda_b\ast g$, as in (\ref{eq:generaliseddifference}) above, is called a \emph{generalised difference} in $L^2([0,2\pi])$. The following is a crucial technical result. It is not proved here, but is a consequence of results in subsequent sections. \begin{lemma} \label{lemma:cosine estimate} Let $\alpha,\beta\in {\mathbb Z}$ and let $s\in {\mathbb N}$ be given. Then there is $M>0$ such that for all $n\in {\mathbb Z}$ with $n\notin \{\alpha,\beta\}$, \[\int_{{[0,2\pi]}^{4s+1}}\frac{dx_1dx_2\cdots dx_{4s+1}}{ \displaystyle \sum_{j=1}^{4s+1}\left|\cos\left(\left(\frac{\alpha-\beta}{2}\right)x_j\right)-\cos \left(\left(n-\frac{\alpha+\beta}{2}\right)x_j\right)\right|^{2s}}\,\le \ M.\] \end{lemma} The central result proved here using Lemma \ref{lemma:cosine estimate} is the following. \begin{theorem}\label{theorem:main2} Let $\alpha,\beta\in {\mathbb Z}$. Then the following conditions (i), (ii), (iii) and (iv) on a function $f\in L^2([0,2\pi])$ are equivalent. (i) ${\widehat f}(\alpha)={\widehat f}(\beta)=0$.
(ii) There are $m,s\in {\mathbb N}$, $b_1,b_2,\ldots, b_m\in [0,2\pi]$ and $f_1,f_2,\ldots,f_m\in L^2([0,2\pi])$ such that $f$ is equal to \begin{equation} \sum_{j=1}^m \left[ \left(e^{ib_j\left(\frac{\alpha-\beta}{2}\right)}+e^{-ib_j\left(\frac{\alpha-\beta}{2} \right)}\right) \delta_0\, -\left(e^{ib_j\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{b_j} +e^{-ib_j\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{- b_j}\right) \right]^s \ast f_j.\label{eq:differencesum} \end{equation} (iii) There are $s\in {\mathbb N}$, $b_1,b_2,\ldots, b_{4s+1}\in [0,2\pi]$ and $f_1,f_2,\ldots,f_{4s+1}\in L^2([0,2\pi])$ such that $f$ is equal to \begin{equation} \sum_{j=1}^{4s+1} \left[ \left(e^{ib_j\left(\frac{\alpha-\beta}{2}\right)}+e^{-ib_j\left(\frac{\alpha-\beta}{2} \right)}\right) \delta_0\, -\left(e^{ib_j\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{b_j} +e^{-ib_j\left(\frac{\alpha+\beta}{2}\right)}\,\delta_{- b_j}\right) \right]^s \ast f_j. \label{eq:differencesumb} \end{equation} (iv) There are $s\in {\mathbb N}$ and $g\in W^{2s}([0,2\pi])$ such that \begin{equation}\big(D^2-i(\alpha+\beta)D-\alpha\beta I\big)^s(g)=f.\label{eq:differentialoperator}\end{equation} \noindent When the equivalent conditions (i), (ii), (iii) and (iv) are satisfied and $s\in {\mathbb N}$ is given, we have that for almost all $(b_1,b_2,\ldots, b_{4s+1})\in [0,2\pi]^{4s+1}$, there are $f_1,f_2,\ldots,f_{4s+1}\in L^2([0,2\pi])$ such that (\ref{eq:differencesumb}) holds. Also, the functions in $L^2([0,2\pi])$ that can be written in the form (\ref{eq:differencesumb}) form a closed vector subspace of $L^2([0,2\pi])$. \end{theorem} {\bf Proof.} It is obvious that (iii) implies (ii). Now, (ii) implies (i) since, as noted in (\ref{eq:fourierzero}), the Fourier coefficients of any function appearing within the sum in (\ref{eq:differencesum}) vanish at $\alpha$ and $\beta$.
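In fact, the vanishing recorded in (\ref{eq:fourierzero}) can be checked directly from (\ref{eq:Fouriertransform}): putting $n=\alpha$, and then $n=\beta$ and using the evenness of the cosine,

```latex
{\widehat {\lambda_b}}(\alpha)
=\cos\left(\left(\frac{\alpha-\beta}{2}\right)b\right)
-\cos\left(\left(\frac{\alpha-\beta}{2}\right)b\right)=0,
\qquad
{\widehat {\lambda_b}}(\beta)
=\cos\left(\left(\frac{\alpha-\beta}{2}\right)b\right)
-\cos\left(\left(\frac{\beta-\alpha}{2}\right)b\right)=0.
```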
In order to prove that (i) implies (iii), let ${\widehat f}(\alpha)={\widehat f}(\beta)=0$, and let $M$ be the constant as in Lemma \ref{lemma:cosine estimate}. If we integrate the function that maps $ (x_1,x_2,\ldots, x_{4s+1})\in{\mathbb R}^{4s+1}$ into \begin{equation} \sum_{{n=-\infty}\atop{n\ne \alpha,\beta}}^{\infty}\frac{ |{\widehat f}(n)|^2 }{ \displaystyle\sum_{j=1}^{4s+1}\left|\cos\left(\left(\frac{\alpha-\beta}{2}\right)x_j\right)-\cos \left(\left(n-\frac{\alpha+\beta}{2}\right)x_j\right)\right|^{2s}}\label{eq:sumintegral} \end{equation} over $[0,2\pi]^{4s+1}$, and interchange the order of integration and summation we obtain \begin{align*} &\sum_{{n=-\infty}\atop{n\ne \alpha,\beta}}^{\infty}\left(\int_{[0,2\pi]^{4s+1}}\frac{dx_1dx_2\cdots dx_{4s+1}} { \displaystyle\sum_{j=1}^{4s+1}\left|\cos\left(\left(\frac{\alpha-\beta}{2}\right)x_j\right)-\cos \left(\left(n-\frac{\alpha+\beta}{2}\right)x_j\right)\right|^{2s}}\right) |{\widehat f}(n)|^2 \\ &\le M\sum_{n=-\infty}^{\infty}|{\widehat f}(n)|^2\\ &<\infty. \end{align*} We deduce that for almost all $(x_1,x_2,\ldots,x_{4s+1})\in [0,2\pi]^{4s+1}$, the sum in (\ref{eq:sumintegral}) is finite. Using (\ref{eq:Fouriertransform}), we see that for almost all $(x_1,\ldots,x_{4s+1}) \in [0,2\pi]^{4s+1}$, \begin{align*} &\sum_{n=-\infty}^{\infty}\frac{|{\widehat f}(n)|^2}{\displaystyle\sum_{j=1}^{4s+1}|{\widehat {\lambda_{x_j}^s}} (n)|^2} \hskip7cm \end{align*} \begin{align*} &=\sum_{n=-\infty}^{\infty}\frac{|{\widehat f}(n)|^2}{\displaystyle\sum_{j=1}^{4s+1}|{\widehat \lambda}_{x_j}(n)|^{2s}} \\ &=\sum_{{n=-\infty}\atop{n\ne \alpha,\beta}}^{\infty}\,\frac{ |{\widehat f}(n)|^2 }{\displaystyle \sum_{j=1}^{4s+1}\left|\cos\left(\left(\frac{\alpha-\beta}{2}\right)x_j\right)-\cos \left(\left(n-\frac{\alpha+\beta}{2}\right)x_j\right)\right|^{2s}}\\ &<\,\infty.
\end{align*} It now follows from (\ref{eq:Fouriertransform}), and the equivalence of (i) and (ii) in Theorem \ref{theorem:characterisation}, that for almost all $(b_1,b_2,\ldots, b_{4s+1})\in [0,2\pi]^{4s+1}$ there are $f_1,f_2,\ldots,f_{4s+1}\in L^2([0,2\pi])$ such that \[f=\sum_{j=1}^{4s+1}\lambda_{b_j}^s\ast f_j. \] We see, using (\ref{eq:lambdasubb}), that (\ref{eq:differencesumb}) and hence (iii) hold. Now to see that (iv) implies (i), we observe that the multiplier of \break $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s$ is $(-1)^s(n-\alpha)^s(n-\beta)^s$. Thus, if \break $f=(D^2-i(\alpha+\beta)D-\alpha\beta I)^s(g)$, as in (\ref{eq:differentialoperator}), ${\widehat f}(\alpha)={\widehat f}(\beta)=0$. Now, we prove (i) implies (iv). If ${\widehat f}(\alpha)={\widehat f}(\beta)=0$, define $g$ in terms of ${\widehat g}$ by putting ${\widehat g}(\alpha)={\widehat g}(\beta)=0$ and ${\widehat g}(n)=(-1)^s{\widehat f}(n)/\bigl((n-\alpha)^s(n-\beta)^s\bigr)$, for $n\ne \alpha$ and $n\ne \beta$. Then $\sum_{n=-\infty}^{\infty}|n|^{4s}|{\widehat g}(n)|^2<\infty$, so $g\in W^{2s}([0,2\pi])$. Also, as the multiplier of $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s$ is $(-1)^s(n-\alpha)^s(n-\beta)^s$, we see that $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s(g)\,{\widehat {\ }}={\widehat f}$ and so $(D^2-i(\alpha+\beta)D-\alpha\beta I)^s(g)=f$. Finally, that the functions expressible as in (\ref{eq:differencesumb}) form a closed subspace follows from the fact that such functions are characterised by (i). ${\ }$ $\square$ \section{ Partitioning, zeros and inequalities} \setcounter{equation}{0} In this section we make observations and obtain results that will be used later to prove Lemma \ref{lemma:cosine estimate}. \begin{lemma} \label{lemma:integralsequal} Let $m,s\in {\mathbb N}$ and $\alpha,\beta, n\in {\mathbb Z}$.
Then, \begin{align}&\int_{{[0,2\pi]}^m}\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{j=1}^{m}\left|\cos\left(\left(\frac{\alpha-\beta}{2}\right)x_j\right)-\cos \left(\left(n-\frac{\alpha+\beta}{2}\right)x_j\right)\right|^{2s}}\nonumber\\ &=2^{2m-2s}\int_{{[0,\pi/2]}^m}\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{j=1}^m\sin^{2s}((n-\alpha) x_j)\sin^{2s}((n-\beta) x_j)}, \label{eq:int1} \end{align} but note that both integrals may be infinite. \end{lemma} {\bf Proof.} Using the identity $\cos A-\cos B=-2\sin\left(\frac{A+B}{2}\right)\sin\left(\frac{A-B}{2}\right)$ with $A=\left(\frac{\alpha-\beta}{2}\right)x_j$ and $B=\left(n-\frac{\alpha+\beta}{2}\right)x_j$, which gives $\cos A-\cos B=2\sin\left(\frac{(n-\alpha)x_j}{2}\right)\sin\left(\frac{(n-\beta)x_j}{2}\right)$, followed by the substitutions $x_j=2y_j$, we obtain \begin{align} &\int_{{[0,2\pi]}^m}\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{j=1}^{m}\left|\cos\left(\left(\frac{\alpha-\beta}{2}\right)x_j\right)-\cos \left(\left(n-\frac{\alpha+\beta}{2}\right)x_j\right)\right|^{2s}}\nonumber\\ & =2^{m-2s}\int_{[0,\pi]^m}\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{j=1}^m\sin^{2s}((n-\alpha) x_j)\sin^{2s}((n-\beta) x_j)}.\label{eq:int2} \end{align} Note that for $x\in {\mathbb R}$ and $\ell\in {\mathbb Z}$, $|\sin(\ell x)|=|\sin (\ell(\pi-x))|$. Also, note that $[0,\pi)^m$ is the disjoint union of the $2^m$ sets $\prod_{t=1}^mJ_t$ where, for each $t$, $J_t$ is either $[0,\pi/2)$ or $ [\pi/2,\pi)$. Using substitutions for $\pi-x_j$, as needed in (\ref{eq:int2}), we see that (\ref{eq:int1}) follows from (\ref{eq:int2}). ${\ }$ $\square$ We are aiming to estimate the integral in Lemma \ref{lemma:integralsequal}. Motivated by (\ref{eq:int1}), we consider the zeros in $[0,\pi/2]$ of $\sin ((n-\alpha)x)$ and $\sin((n-\beta)x)$. Some preliminary notions are needed. {\bf Definitions.} If $J$ is an interval we denote its length by $\lambda(J)$. Let $[a,b]$ be a closed interval with $\lambda([a,b])>0$. A family $\{J_0,J_1,\ldots,J_{r-1}\}$ of closed intervals having non-empty interiors is a \emph{partition} of $[a,b]$ if $\cup_{j=0}^{r-1}J_j=[a,b]$ and any two intervals in the family have at most a single point in common.
In such a case, the intervals may be arranged so that the right endpoint of $J_{j-1}$ is the left endpoint of $J_j$ for all $j=1,2, \ldots,r-1$. Note that in the sense used here, the sets in a partition are not pairwise disjoint. \begin{lemma}\label{lemma:partitionsa} Let $[a,b]$ be a closed interval with $\lambda([a,b])>0$. Let $R_0,R_1,\ldots,$ $R_{r-1}$ be closed intervals in a partition ${\mathcal P}_1$ of $[a,b]$. Let $S_0,S_1,\ldots,S_{s-1}$ be closed intervals in a partition ${\mathcal P}_2$ of $[a,b]$. Let \[{\mathcal A}=\bigl\{(j,k): 0\le j\le r-1, 0\le k\le s-1\ \hbox{and}\ \lambda(R_j\cap S_k)>0 \bigr\},\] and put \begin{equation}{\mathcal P}=\bigl\{R_j\cap S_k: (j,k)\in {\mathcal A} \bigr\}.\label{eq:refinement} \end{equation} Then, ${\mathcal P}$ is a partition of $[a,b]$ and the number of intervals in ${\mathcal P}$ is at most \begin{equation}r+s-1.\label{eq:partitionestimate1} \end{equation} \end{lemma} {\bf Proof.} It is clear that ${\mathcal P}=\{R_j\cap S_k: (j,k)\in {\mathcal A}\}$ as in (\ref{eq:refinement}) is a partition of $[a,b]$. Given any two partitions ${\mathcal P}_1$, ${\mathcal P}_2$ as in the Lemma, we will denote the partition given as in (\ref{eq:refinement}) by ${\mathcal P}({\mathcal P}_1,{\mathcal P}_2)$. We proceed by induction on $r$. If $r=1$, ${\mathcal P}_1=\{R_0\}=\{[a,b]\}$, so that ${\mathcal P}({\mathcal P}_1,{\mathcal P}_2)={\mathcal P}_2$ and ${\mathcal P}({\mathcal P}_1,{\mathcal P}_2)$ has $s$ elements. In this case, $s=r+s-1$, and we see that when $r=1$, (\ref{eq:partitionestimate1}) holds for all $s\in {\mathbb N}$. Similarly, it is easy to see that when $r=2$, (\ref{eq:partitionestimate1}) holds for all $s\in {\mathbb N}$. So, let $r\in {\mathbb N}$ with $r\ge 3$ be such that (\ref{eq:partitionestimate1}) holds for all partitions ${\mathcal P}_1$ having $r$ intervals and for all partitions ${\mathcal P}_2$ having any number of intervals.
Let ${\mathcal P}_3=\{Q_0, Q_1,\ldots,Q_r\}$ be a partition of $[a,b]$ into $r+1$ intervals and let ${\mathcal P}_2=\{S_0,S_1,\ldots,S_{s-1}\}$ be a partition of $[a,b]$ into $s$ intervals. Consider the partition ${\mathcal P}_1=\{Q_0\cup Q_1,Q_2,\ldots, Q_r\}$, which has $r$ intervals. By the inductive hypothesis, ${\mathcal P}({\mathcal P}_1,{\mathcal P}_2)$ has at most $r+s-1$ intervals. Let $\xi$ be the right endpoint of $Q_0$, which is also the left endpoint of $Q_1$. Now, in passing from ${\mathcal P}_1$ to ${\mathcal P}_3$, we do so by using $\xi$ to divide the single interval $Q_0\cup Q_1$ into the two intervals $Q_0$ and $Q_1$. If $\xi$ is not an endpoint of any interval in ${\mathcal P}_2$, this will divide one interval in ${\mathcal P}({\mathcal P}_1,{\mathcal P}_2)$ into two subintervals belonging to ${\mathcal P}({\mathcal P}_2,{\mathcal P}_3)$. If $\xi$ is an endpoint of some interval in ${\mathcal P}_2$, this division does not increase the number of intervals in ${\mathcal P}({\mathcal P}_2,{\mathcal P}_3)$ in going from ${\mathcal P}_1$ to ${\mathcal P}_3$. In either case, we see that ${\mathcal P}({\mathcal P}_2,{\mathcal P}_3)$ has at most \[r+s-1+1=r+s=(r+1)+s-1\] intervals, showing that (\ref{eq:partitionestimate1}) holds with $r+1$ in place of $r$. The result follows by induction. $\square$ {\bf Definition.} Let $[a,b]$ be a closed interval of positive length. Let ${\mathcal P}_1=\{R_0,R_1,\ldots,R_{r-1}\}$ and ${\mathcal P}_2=\{S_0,S_1,\ldots,S_{s-1}\}$ be two partitions of $[a,b]$. Let ${\mathcal P}$ be the partition of $[a,b]$ as given by (\ref{eq:refinement}) in Lemma \ref{lemma:partitionsa}. Then ${\mathcal P}$ is called the \emph{refinement} of the partitions ${\mathcal P}_1$ and ${\mathcal P}_2$. Now, let $n,\gamma\in {\mathbb Z}$ with $n\ne \gamma$ be given. We construct an associated partition ${\mathcal P}(\gamma)$ of $[0,\pi/2]$, as follows.
(i) When $|n-\gamma|$ is even we define $(|n-\gamma|+2)/2$ closed subintervals $Q_0,Q_1,\ldots, Q_{|n-\gamma|/2}$ of $[0,\pi/2]$ by putting \begin{equation}Q_0=\left[0,\frac{\pi}{2|n-\gamma|}\right ], Q_{|n-\gamma|/2}=\left[\frac{\pi(|n-\gamma|-1)}{2 |n-\gamma|} ,\frac{\pi}{2}\right],\ {\rm and}\nonumber\end{equation} \begin{equation}\ Q_j=\left[\frac{\pi(j-1/2)}{|n-\gamma|},\frac{\pi(j+1/2)}{|n-\gamma|}\right], \label{eq:Rjdefinition1} \end{equation} for $j=1,2,\ldots, (|n-\gamma|-2)/2$. (ii) When $|n-\gamma|$ is odd we define $(|n-\gamma|+1)/2$ closed subintervals $Q_0,Q_1,\ldots, Q_{(|n-\gamma|-1)/2}$ of $[0,\pi/2]$ by putting \begin{equation}Q_0=\left[0,\frac{\pi}{2|n-\gamma|}\right ]\ {\rm and}\ Q_j=\left[\frac{\pi(j-1/2)}{|n-\gamma|},\frac{\pi(j+1/2)}{|n-\gamma|}\right], \label{eq:Rjdefinition2} \end{equation} the latter for $j=1,2,\ldots, (|n-\gamma|-1)/2$. Put $\theta(r)=(r+2)/2$ if $r$ is even, and $\theta(r)=(r+1)/2$ if $r$ is odd. Then, with $n$ and $\gamma$ as given, (\ref{eq:Rjdefinition1}) and (\ref{eq:Rjdefinition2}) above define $\theta(|n-\gamma|)$ closed subintervals $Q_0,Q_1,\ldots, Q_{\theta(|n-\gamma|)-1}$ of $[0,\pi/2]$. We put \begin{equation} {\mathcal P}(\gamma)=\big\{Q_0,Q_1,Q_2,\ldots,Q_{\theta(|n-\gamma|)-1}\big\}.\label{eq:defpartition} \end{equation} Note that ${\mathcal P}(\gamma)$ depends only upon $|n-\gamma|$; so, strictly speaking, ${\mathcal P}(\gamma)$ depends on $n$ as well as on $\gamma$, although the notation suppresses $n$. The significance of ${\mathcal P}(\gamma)$ lies in its relationship to the zeros of $\sin(n-\gamma)x$ in $[0,\pi/2]$.
There are $\theta(|n-\gamma|)$ zeros $c_0,c_1,\ldots,c_{\theta(|n-\gamma|)-1}$ of $\sin(n-\gamma)x$ in $[0,\pi/2]$ given by \begin{equation} c_j=\frac{\pi j}{|n-\gamma|}, \ {\rm for}\ j=0,1,2,\ldots, \theta(|n-\gamma|)-1.\label{eq:zerossinegamma} \end{equation} Now, we see from (\ref{eq:Rjdefinition1}) and (\ref{eq:Rjdefinition2}) that $c_0$ is the left endpoint of $Q_0$; that, when $|n-\gamma|$ is even, $c_j$ is the midpoint of $Q_j$ for $j=1,2,\ldots,\theta(|n-\gamma|)-2$ and $c_{\theta(|n-\gamma|)-1} =\pi/2$ is the right endpoint of $Q_{\theta(|n-\gamma|)-1}$; and that, when $|n-\gamma|$ is odd, $c_j$ is the midpoint of $Q_j$ for $j=1,2,\ldots,\theta(|n-\gamma|)-1$. \begin{lemma}\label{lemma:leavedpartition} Let $\alpha,\beta, n\in {\mathbb Z}$ be such that $n\ne \alpha$ and $n\ne \beta$. Let ${\mathcal P}(\alpha,\beta)$ be the partition of $[0,\pi/2]$ that is the refinement of ${\mathcal P}(\alpha)$ and ${\mathcal P}(\beta)$, as given by (\ref{eq:refinement}). Then the number of intervals in ${\mathcal P}(\alpha,\beta)$ is bounded above by \begin{equation}2\,\max{\big\{|n-\alpha|, |n-\beta|\big\}}.\label{partitionestimate} \end{equation} Also, if $J\in {\mathcal P}(\alpha,\beta)$, \begin{equation}0<\lambda(J)\le\min\left\{ \frac{\pi}{|n-\alpha|}, \frac{\pi}{|n-\beta|}\right\}.\label{eq:length} \end{equation} \end{lemma} {\bf Proof.} The partition ${\mathcal P}(\alpha)$ has $\theta(|n-\alpha|)$ intervals, while ${\mathcal P}(\beta)$ has $\theta(|n-\beta|)$ intervals. So, we see from (\ref{eq:partitionestimate1}) that ${\mathcal P}(\alpha,\beta)$ has at most \break $\theta(|n-\alpha|)+\theta(|n-\beta|)-1$ intervals. However, $\theta(r)\le (r+2)/2$ for all $r\in {\mathbb N}$, so an upper bound for the number of intervals in ${\mathcal P}(\alpha,\beta)$ is \[\frac{1}{2}(|n-\alpha|+|n-\beta|)+1\le\max\{|n-\alpha|, |n-\beta|\}+1\le 2\max\{|n-\alpha|, |n-\beta|\}.
\] Finally, if $J\in {\mathcal P}(\alpha,\beta)$, $J=R\cap S$ for some $R\in {\mathcal P}(\alpha)$ and $S\in {\mathcal P}(\beta)$. Then, $\lambda(\hbox{$R$})\le \pi/|n-\alpha|$ and $\lambda(S)\le \pi/|n-\beta|$, and so (\ref{eq:length}) follows. $\square$ Figure 1 illustrates Lemma \ref{lemma:leavedpartition} in the case $\alpha=1$, $\beta=-1$ and $n=9$. \[\begin{tikzpicture}[scale=0.9] \draw (0,0)--(10,0); \draw[gray, , line width =0.12cm](0,0)--(1,0); \draw[gray, line width =0.12cm](1,0)--(1.25,0); \draw[gray, line width =0.12cm](1.25,0)--(3,0); \draw[gray, line width =0.12cm](3,0)--(3.75,0); \draw[gray, line width =0.12cm](3.75,0)--(5,0); \draw[gray, line width =0.12cm](5,0)--(6.25,0); \draw[gray, line width =0.12cm](6.25,0)--(7,0); \draw[gray, line width =0.12cm](7,0)--(8.75,0); \draw[gray, line width =0.12cm](8.75,0)--(9.0,0); \draw[gray, line width =0.12cm](9.0,0)--(10.0,0); \filldraw[black] (0,0) circle (3pt); \filldraw [black](2,0) circle (3pt); \filldraw [black](4,0) circle (3pt); \filldraw [black](6,0) circle (3pt); \filldraw [black](8,0) circle (3pt); \filldraw [black](10,0) circle (3pt); \filldraw [black](2.5,0) circle (3pt); \filldraw [black](5.01,0) circle (3pt); \filldraw [black](7.5,0) circle (3pt); \filldraw [black](8,0) circle (3pt); \node[inner sep=20pt, anchor=north] at (0,0) { {$a_0=b_0$}}; \node[inner sep=23pt, anchor=north] at (2.5,-0.0) {{$a_1$}}; \node[inner sep=20pt, anchor=north] at (5.0,-0.08) {{$a_2$}}; \node[inner sep=20pt, anchor=north] at (2.02,0) {{$b_1$}}; \node[inner sep=20pt, anchor=north] at (4.1,0) { {$b_2$}}; \node[inner sep=20pt, anchor=north] at (6.0,0) { {$b_3$}}; \node[inner sep=20pt, anchor=north] at (8.0,0) { {$b_4$}}; \node[inner sep=20pt, anchor=north] at (7.5,-0.085) { {$a_3$}}; \node[inner sep=15pt, anchor=south] at (10,0) { {$\pi/2$}}; \node[inner sep=20pt, anchor=north] at (10,0) { {$a_4=b_5$}}; \node[inner sep=15pt, anchor=south] at (5,0) { {$\pi/4$}}; \node[inner sep=15pt, anchor=south] at (0,0) { {$0$}}; \draw 
(0,-0.4)--(0,0.4); \draw (1.0,-0.4)--(1.0,0.4); \draw (1.25,-0.4)--(1.25,0.4); \draw (3,-0.4)--(3,0.4); \draw (3.75,-0.4)--(3.75,0.4); \draw (7,-0.4)--(7,0.4); \draw (5,-0.4)--(5,0.4); \draw (6.25,-0.4)--(6.25,0.4); \draw (8.75,-0.4)--(8.75,0.4); \draw (9,-0.4)--(9,0.4); \draw (10,-0.4)--(10,0.4); \end{tikzpicture}\] \vskip -0.8cm \[ \begin{minipage}{11.6cm} {\footnotesize{\bf Figure 1.} The figure illustrates the case $n=9$, $\alpha=1$, $\beta=-1$. We have $n-\alpha=8$, $n-\beta=10$. The zeros of $\sin 8x$ and $\sin 10x$ in $[0,\pi/2]$ are denoted respectively by $a_j$ and $b_j$. The vertical lines in the figure illustrate the intervals in the refinement ${\mathcal P}(-1,1)$ of ${\mathcal P}(-1)$ and ${\mathcal P}(1)$. Note that four of the ten intervals in ${\mathcal P}(-1,1) $ contain no zeros of $\sin 8x\sin 10x$. } \end{minipage}\] \vskip 0.3cm Now, if $x\in {\mathbb R}$, let $d_{\mathbb Z}(x)$ denote the distance from $x$ to a nearest integer. For later use, note the fact that $d_{\mathbb Z}(x)=|x|$ if and only if $ -1/2\le x\le 1/2$. \begin{lemma}\label{lemma:triginequality} Let $n, \alpha,\beta\in{\mathbb Z}$ with $n\ne \alpha$, $n\ne \beta$. Let the partitions ${\mathcal P}(\alpha)$ and ${\mathcal P}(\beta)$ be given as in (\ref{eq:defpartition}), and we write $${\mathcal P}(\alpha) =\{R_0, R_1, \ldots, R_{\theta(|n-\alpha|)-1}\} \ \hbox{and}\ {\mathcal P}(\beta) =\{S_0, S_1, \ldots, S_{\theta(|n-\beta|)-1}\}.$$ Let $R_j\cap S_k$ be an element of the refinement ${\mathcal P}(\alpha,\beta)$ of ${\mathcal P}(\alpha)$ and ${\mathcal P}(\beta)$. 
Then, if $x\in R_j\cap S_k$, we have \begin{align}&\sin^2 ((n-\alpha)x)\,\sin^{2} ((n-\beta)x)\nonumber\\ &\hskip 0cm \ge \frac{2^{4}(n-\alpha)^{2}\, (n-\beta)^{2}}{\pi^{4}}\left(x-\frac{j\pi}{|n-\alpha|}\right)^{2}\left(x-\frac{k\pi}{|n-\beta|}\right)^{2}.\label{eq:y4} \end{align} \end{lemma} {\bf Proof.} We will use the fact that for all $x\in {\mathbb R}$, $|\sin \pi x|\ge 2 d_{\mathbb Z}(x)$ \cite[page 89, for example]{nillsen2}. Given that $x\in R_j$, we see from (\ref{eq:Rjdefinition1}) and (\ref{eq:Rjdefinition2}) that \begin{equation} \left|(n-\alpha)\left(\frac{x}{\pi}-\frac{j}{|n-\alpha|}\right)\right|\le\frac{1}{2}.\label{eq:y3} \end{equation} Using (\ref{eq:y3}) we now have, for all $x$ in $R_j$, \begin{align} |\sin((n-\alpha)x)|&= |\sin(|n-\alpha|x-j\pi)|\nonumber\\ &=\left|\sin\left(\pi(n-\alpha)\left(\frac{x}{\pi}-\frac{j}{|n-\alpha|}\right)\right)\right|\nonumber\\ &\ge 2 d_{\mathbb Z}\left((n-\alpha)\left(\frac{x}{\pi}-\frac{j}{|n-\alpha|}\right)\right)\nonumber\\ &= 2|n-\alpha|\left|\frac{x}{\pi}-\frac{j}{|n-\alpha|}\right|\nonumber\\ &= \frac{2}{\pi}|n-\alpha|\left|x-\frac{j\pi}{|n-\alpha|}\right|.\label{eq:y5} \end{align} Using a corresponding argument, we see also that for all $x\in S_k$, \begin{equation} |\sin ((n-\beta)x)|\ge \frac{2}{\pi}|n-\beta|\left|x-\frac{k\pi}{|n-\beta|}\right|.\label{eq:y6} \end{equation} Conclusion (\ref{eq:y4}) now follows from (\ref{eq:y5}) and (\ref{eq:y6}). $\square$ \section{Integral estimates in $\boldsymbol{{\mathbb R}^m}$} \setcounter{equation}{0} In this section we develop estimates for some integrals in ${\mathbb R}^m$, and an inequality between quadratics, with a view to proving Lemma \ref{lemma:cosine estimate}. \begin{lemma}\label{lemma:A} Let $s,m\in {\mathbb N}$ with $m\ge 4s+1$.
Then, there is a number $M>0$, depending upon $s$ and $m$ only, such that for all $b_1,b_2,\ldots,b_m>0$ and for all $(a_1,a_2,\ldots,a_m) \in \prod_{t=1}^m[-b_t,b_t]$, \[\int_{{\textstyle\prod_{t=1}^m [-b_t,b_t]}}\ \frac {du_1du_2\ldots du_m}{\displaystyle\sum_{t=1}^m(u_t^2-a_t^2)^{2s}}\le M\Bigl(\max\Big\{b_1,b_2,\ldots,b_m\Big\}\Big)^{m-4s}.\] \end{lemma} {\bf Proof.} Clearly, we may assume that $0\le a_t\le b_t$ for all $t=1,2,\ldots,m$. Now, we have \begin{align} &\int_{\textstyle\prod_{t=1}^m[-b_t,b_t]}\ \frac {du_1du_2\ldots du_m}{\displaystyle\sum_{t=1}^m(u_t^2-a_t^2)^{2s}}\nonumber\\ &=2^m\int_{\textstyle\prod_{t=1}^m [0,b_t]}\ \frac {du_1du_2\ldots du_m}{\displaystyle\sum_{t=1}^m(u_t^2-a_t^2)^{2s}}\nonumber\\ & =2^m\int_{\textstyle\prod_{t=1}^m [0,b_t]}\ \frac {du_1du_2\ldots du_m}{\displaystyle\sum_{t=1}^m(u_t-a_t)^{2s}(u_t+a_t)^{2s}}\nonumber\\ &=2^m\int_{\textstyle\prod_{t=1}^m[-a_t,b_t-a_t]}\ \frac {dv_1dv_2\ldots dv_m}{\displaystyle\sum_{t=1}^mv_t^{2s}(v_t+2a_t)^{2s}},\label{eq:z1} \end{align} on putting $v_t=u_t-a_t$. Now observe that if $v_t\ge 0$ then $v_t+2a_t\ge v_t=|v_t|$, and that if $-a_t\le v_t\le 0$ then $v_t+2a_t\ge a_t\ge |v_t|$. Also, there is $C_m>0$ such that for all $(v_1,v_2,\ldots,v_m)\in {\mathbb R}^m$, \begin{equation}\sum_{j=1}^mv_j^{4s}\ge C_m\left(\sum_{j=1}^mv_j^2\right)^{2s}.\label{eq:constantm}\end{equation} (One may take $C_m=m^{-2s}$, since $\sum_{j=1}^mv_j^{4s}\ge \max_{1\le j\le m}v_j^{4s}=\bigl(\max_{1\le j\le m}v_j^2\bigr)^{2s}\ge m^{-2s}\bigl(\sum_{j=1}^mv_j^2\bigr)^{2s}$.) If $r>0$, we denote the closed ball $\{x:x\in {\mathbb R}^m\ {\rm and}\ |x|\le r\}$ by $S(0,r)$. Also, we put $b=\max\{b_1,b_2,\ldots,b_m\}$.
Using the preceding observations and (\ref{eq:z1}) we have \begin{align*} &\int_{\textstyle\prod_{t=1}^m [-b_t,b_t]}\ \frac {du_1du_2\ldots du_m}{\displaystyle\sum_{t=1}^m(u_t^2-a_t^2)^{2s}}\hskip2cm \\ &\le2^m\int_{\textstyle\prod_{t=1}^m[-a_t,b_t-a_t]}\ \frac {dv_1dv_2\ldots dv_m}{\displaystyle\sum_{t=1}^mv_t^{4s}}\hskip2cm\\ \end{align*} \begin{align*} &\le\frac{2^m}{C_m}\int_{\textstyle\prod_{t=1}^m[-a_t,b_t-a_t]}\ \frac {dv_1dv_2\ldots dv_m}{\left(\displaystyle\sum_{t=1}^mv_t^2\right)^{2s}}, \hbox{using (\ref{eq:constantm})},\\ &\le\frac{2^m}{C_m} \int_{\textstyle\prod_{t=1}^m[-b_t,b_t]}\ \frac {dv_1dv_2\ldots dv_m}{\left(\displaystyle\sum_{t=1}^mv_t^2\right)^{2s}},\\ &\hskip 4.8cm{\rm as}\ [-a_t,b_t-a_t]\subseteq [-b_t,b_t]\ {\rm for\ each}\ t,\\ \end{align*} \begin{align*} &\le\frac{2^m}{C_m} \int_{S(0,b{\sqrt m})}\ \frac {dv_1dv_2\ldots dv_m}{\left(\displaystyle\sum_{t=1}^mv_t^2\right)^{2s}},\ {\rm as}\ [-b,b]^m\subseteq S(0,b{\sqrt m}),\\ &=\frac{2^{m+1}\pi^{m/2}}{C_m\Gamma(m/2)}\int_0^{b{\sqrt m}}r^{m-4s-1}\,dr,\ \hbox{by\ \cite[pages 394-395]{stromberg1},}\\ &=\frac{2^{m+1} \pi^{m/2}m^{(m-4s)/2} b^{m-4s}}{C_m\Gamma(m/2)(m-4s)}. \end{align*} \noindent So, the result holds if $ M= C_m^{-1}2^{m+1}\pi^{m/2}\ m^{(m-4s)/2}(m-4s)^{-1}\Gamma(m/2)^{-1}.$ ${\ }$ $\square$ \begin{lemma}\label{lemma:C} Let $s,m\in {\mathbb N}$ with $m\ge 4s+1$, and let numbers $b_{t,k}, c_t$ and $d_t$ be given for all $t,k$ with $t=1,2,\ldots,m$ and $k=1,2$. We assume that \[0\le b_{t,1}\le c_{t}\le d_t\le b_{t,2}\] for all $t=1,2,\ldots, m$. Then, there is a number $M>0$, depending upon $s$ and $m$ only and independent of $b_{t,k}, c_t$ and $d_t$, such that \begin{align*}&\int_{\textstyle{\prod_{t=1}^m [b_{t,1}, b_{t,2}] }}\,\frac{du_1du_2\ldots du_m}{\displaystyle \sum_{t=1}^{m}\left(u_t-c_t\right) ^{2s}\left(u_t-d_t \right)^{2s}}\\ &\hskip 2cm\le M\,\Big(\max\Big\{b_{1,2}-b_{1,1},b_{2,2}-b_{2,1},\ldots,b_{m,2}-b_{m,1}\Big\} \Big)^{m-4s}.
\end{align*} \end{lemma} {\bf Proof.} Put, for $t=1,2,\ldots,m$, \[\eta_t=\frac{c_t+d_t}{2}, \gamma_t=\frac{d_t-c_t}{2}, v_t=u_t-\eta_t.\] Note that $\eta_t\in [b_{t,1},b_{t,2}]$ and $0\le \gamma_t\le (b_{t,2}-b_{t,1})/2$. Substituting $v_t=u_t-\eta_t$ in the following we have \begin{align*} &\int_{\textstyle\prod_{t=1}^m [b_{t,1}, b_{t,2}]}\ \frac{du_1du_2\ldots du_m}{\displaystyle \sum_{t=1}^m\left(u_t-c_t\right) ^{2s}\left(u_t-d_t \right)^{2s}}\\ &=\int_{\textstyle\prod_{t=1}^m\ [b_{t,1}-\eta_t,b_{t,2}-\eta_t]}\ \frac{dv_1dv_2\ldots dv_m}{\displaystyle \sum_{t=1}^m\left(v_t+\gamma_t\right) ^{2s}\left(v_t-\gamma_t \right)^{2s}} \\ \end{align*} \begin{align*} &=\int_{\textstyle\prod_{t=1}^m\ [-(\eta_t-b_{t,1}),b_{t,2}-\eta_t]}\ \frac{dv_1dv_2\ldots dv_m}{\displaystyle \sum_{t=1}^m\left(v_t^2-\gamma_t^2\right) ^{2s} }\\ &\le\int_{\textstyle\prod_{t=1}^m\ [-(b_{t,2}-b_{t,1}),b_{t,2}-b_{t,1}]}\ \frac{dv_1dv_2\ldots dv_m}{\displaystyle \sum_{t=1}^m\left(v_t^2-\gamma_t^2\right) ^{2s} }, \end{align*} as $[-(\eta_t-b_{t,1}),b_{t,2}-\eta_t]\subseteq [-(b_{t,2}-b_{t,1}),b_{t,2}-b_{t,1}]$. We now see that Lemma \ref{lemma:A} applies because $0\le\gamma_t\le b_{t,2}-b_{t,1}$, and so there is a constant $M>0$, depending only upon $s$ and $m$, such that \begin{align*} \ \ \ \ \ \ \ \ \ \ \ &\int_{\textstyle\prod_{t=1}^m [b_{t,1}, b_{t,2}]}\ \frac{du_1du_2\ldots du_m}{\displaystyle \sum_{t=1}^m\left(u_t-c_t\right) ^{2s}\left(u_t-d_t \right)^{2s}}\\ &\ \ \ \ \ \le M \Bigl(\max\Big\{b_{1,2}-b_{1,1},b_{2,2}-b_{2,1},\ldots,b_{m,2}-b_{m,1}\Big\}\Bigr)^{m-4s} .\hskip 2.6cm\square \end{align*} \begin{lemma}\label{lemma:quadratics}Let $c\le a<b\le d$. Let $f$, $g$ be the quadratic functions given by $f(x)=(x-c)(d-x)$, $g(x)=(x-a)(b-x)$. Then, $f(x)\ge g(x)\ge 0$ for all $a\le x\le b$. \end{lemma} {\bf Proof.} We have $(f-g)(x)=x(c+d-a-b)+ab-cd.$ So, \[(f-g)(a) =(a-c)(d-a)\ge 0\ {\rm and}\ (f-g)(b)= (d-b)(b-c)\ge 0. 
\] As $f-g$ is linear and non-negative at $a$ and $b$, we deduce that $f(x)\ge g(x)$ for all $x\in [a,b]$; since clearly $g(x)=(x-a)(b-x)\ge 0$ on $[a,b]$, the lemma follows. ${\ }$ $\square$ \section {Completion of the proof of Theorem \ref{theorem:main2} } Let $s\in {\mathbb N}$ and $n,\alpha,\beta\in {\mathbb Z}$ with $n\ne \alpha$ and $n\ne \beta$. Let $a_0,a_1,\ldots,a_{\theta(|n-\alpha|)-1}$ and $b_0, b_1,\ldots, b_{\theta(|n-\beta|)-1}$ respectively be the zeros of $\sin(n-\alpha)x$ and $\sin (n-\beta)x$ in $[0,\pi/2]$, as given correspondingly for $\sin(n-\gamma)x$ in (\ref{eq:zerossinegamma}). Let ${\mathcal P}(\alpha) =\{R_0, R_1, \ldots, R_{\theta(|n-\alpha|)-1}\}$ and ${\mathcal P}(\beta) =\{S_0, S_1, \ldots, S_{\theta(|n-\beta|)-1}\}$ be the partitions as given by (\ref{eq:defpartition}), and recall that ${\mathcal A}$ is the set of all $(j,k)$ such that $\lambda(R_j\cap S_k)>0$ and that the partition ${\mathcal P}(\alpha,\beta)$ of $[0,\pi/2]$ is the set $\{R_j\cap S_k: (j,k)\in {\mathcal A}\}$, as in Lemma \ref{lemma:leavedpartition}. We see from (\ref{eq:int1}) of Lemma \ref{lemma:integralsequal} and from (\ref{eq:y4}) of Lemma \ref{lemma:triginequality} that for any $m,s\in {\mathbb N}$, \begin{align} &\int_{{[0,2\pi)}^m} \frac{dx_1dx_2\cdots dx_m} {\displaystyle\sum_{j=1}^{m}\left|\cos\left(\left(\frac{(\alpha-\beta)}{2}\right)x_j\right)-\cos \left(\left(n-\frac{(\alpha+\beta)}{2}\right)x_j\right)\right|^{2s}} \nonumber\\ &\hskip 0.5cm=2^{2m-2s}\int_{{[0,\pi/2)}^m}\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{j=1}^m\sin^{2s}((n-\alpha) x_j)\, \sin^{2s}((n-\beta) x_j)} \nonumber\\ &\le\frac{2^{2m-6s}\pi^{4s}}{(n-\alpha)^{2s}(n-\beta)^{2s}}\sum_{(j_1,k_1),\ldots, (j_m,k_m)\in {\mathcal A}}J\bigl((j_1,k_1),\ldots, (j_m,k_m)\bigr), \label{eq:sumintegrals} \end{align} where \begin{equation}J\bigl((j_1,k_1),\ldots, (j_m,k_m)\bigr)=\int_{\textstyle\prod_{t=1}^mR_{j_{t}}\cap S_{k_{t}} }\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{t=1}^{m}\left(x_t-a_{j_t}\right)^{2s}\left(x_t-b_{k_t}\right)^{2s}}.\label{eq:Jintegrals} 
\end{equation} Note that by Lemma \ref{lemma:leavedpartition}, the number of terms in the sum in (\ref{eq:sumintegrals}) is bounded by \begin{equation}2^m\max\{|n-\alpha|^m, |n-\beta|^m\}.\label{eq:sumbound} \end{equation} Now, let $(j_1,k_1),\ldots, (j_m,k_m)\in {\mathcal A}$, and let $t\in \{1,2,\ldots,m\}$. We consider the following possibilities (i), (ii) and (iii). (i) If $a_{j_t}\in R_{j_t}\cap S_{k_t}$ and $b_{k_t}\in R_{j_t}\cap S_{k_t}$ we put $a_{j_t}^{\prime}=a_{j_t}$ and $b_{k_t}^{\prime}=b_{k_t}$. Note that if $a_{j_t}=b_{k_t}$, then both $a_{j_t}$ and $b_{k_t}$ belong to $R_{j_t}\cap S_{k_t}$. (ii) If $a_{j_t}\in R_{j_t}\cap S_{k_t}$ but $b_{k_t}\notin R_{j_t}\cap S_{k_t}$, we put $a_{j_t}^{\prime}=a_{j_t}$ and take $b_{k_t}^{\prime}$ to be the endpoint of $R_{j_t}\cap S_{k_t}$ that is closest to $b_{k_t}$. If $a_{j_t}\notin R_{j_t}\cap S_{k_t}$ but $b_{k_t}\in R_{j_t}\cap S_{k_t}$, we take $a_{j_t}^{\prime}$ to be the endpoint of $R_{j_t}\cap S_{k_t}$ that is closest to $a_{j_t}$, and we put $b_{k_t}^{\prime}=b_{k_t}$. (iii) If $a_{j_t}\notin R_{j_t}\cap S_{k_t}$ and $b_{k_t}\notin R_{j_t}\cap S_{k_t}$, it must happen that $a_{j_t}$ lies to the left of $R_{j_t}\cap S_{k_t}$ and $b_{k_t}$ to the right, or vice versa. In either case, we take $a_{j_t}^{\prime}$ to be the endpoint of $R_{j_t}\cap S_{k_t}$ nearer to $a_{j_t}$ and $b_{k_t}^{\prime}$ to be the other endpoint. Now it is clear that in the cases (i) and (ii) above, for all $x_t\in R_{j_t}\cap S_{k_t}$ we have \begin{equation}|x_t-a_{j_t}|^{2s}|x_t-b_{k_t}|^{2s}\ge|x_t-a_{j_t}^{\prime}|^{2s}|x_t-b_{k_t}^{\prime}|^{2s}.\label{eq:doubleprime}\ \end{equation} In case (iii) above, we see from the simple result on quadratics in Lemma \ref{lemma:quadratics} that (\ref{eq:doubleprime}) holds for all $x_t\in R_{j_t}\cap S_{k_t}$. 
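As a quick numerical sanity check (an illustration only, not part of the argument) of the comparison behind case (iii), the snippet below verifies on a grid that $(x-c)(d-x)\ge (x-a)(b-x)\ge 0$ on $[a,b]$ whenever $c\le a<b\le d$, which is the content of Lemma \ref{lemma:quadratics}; the sample values are arbitrary.

```python
def quadratic_comparison_holds(c, a, b, d, steps=1000):
    """Check (x-c)(d-x) >= (x-a)(b-x) >= 0 on a grid over [a, b]."""
    assert c <= a < b <= d
    for i in range(steps + 1):
        x = a + (b - a) * i / steps
        f = (x - c) * (d - x)   # outer quadratic, with roots c and d
        g = (x - a) * (b - x)   # inner quadratic, with roots a and b
        if not (f >= g - 1e-12 >= -1e-12):
            return False
    return True

print(quadratic_comparison_holds(-1.0, 0.0, 2.0, 3.5))
print(quadratic_comparison_holds(0.0, 0.0, 1.0, 1.0))  # degenerate case c=a, b=d
```

Raising both sides to the power $2s$ then gives (\ref{eq:doubleprime}) in case (iii).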
All possibilities are exhausted by (i), (ii) and (iii), so that for all $j_t, k_t$ we see that $a_{j_t}^{\prime}, b_{k_t}^{\prime}\in R_{j_t}\cap S_{k_t}$ and that (\ref{eq:doubleprime}) holds for all $x_t\in R_{j_t}\cap S_{k_t}$. Now assume that $m\in {\mathbb N}$ with $m\ge 4s+1$. We have from (\ref{eq:Jintegrals}) and (\ref{eq:doubleprime}) that there is $M>0$, depending on $s$ and $m$ only, such that \begin{align} &J\bigl((j_1,k_1),\ldots, (j_m,k_m)\bigr)\nonumber\\ &=\int_{\textstyle\prod_{t=1}^mR_{j_{t}}\cap S_{k_{t}} }\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{t=1}^{m}\left|x_t-a_{j_t}\right|^{2s}\left|x_t-b_{k_t}\right|^{2s}}\nonumber\\ &\le\int_{\textstyle\prod_{t=1}^mR_{j_{t}}\cap S_{k_{t}} }\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{t=1}^{m}|x_t-a_{j_t}^{\prime}|^{2s}|x_t-b_{k_t}^{\prime}|^{2s}}, \ {\rm by}\ (\ref{eq:doubleprime}),\nonumber\\ &\le M\left(\max\big\{\lambda(R_{j_1}\cap S_{k_1}),\lambda(R_{j_2}\cap S_{k_2}),\ldots, \lambda(R_{j_m}\cap S_{k_m}) \big\}\right)^{m-4s},\nonumber\\ &\hskip 8.5cm {\rm by\ Lemma}\ \ref{lemma:C},\nonumber\\ &\le\pi^{m-4s}M\min\left\{\frac{1}{|n-\alpha|^{m-4s}},\frac{1}{|n-\beta|^{m-4s}}\right\}, \, {\rm using}\ (\ref{eq:length}),\nonumber\\ &=\frac{\pi^{m-4s}M}{\max\{|n-\alpha|^{m-4s}, |n-\beta|^{m-4s}\}}.\label{eq:important} \end{align} Now, using (\ref{eq:sumbound}), we see from (\ref{eq:sumintegrals}) and (\ref{eq:important}) that \begin{align} &\int_{{[0,2\pi)}^m}\frac{dx_1dx_2\cdots dx_m}{\displaystyle\sum_{j=1}^{m}\left|\cos\left(\left(\frac{(\alpha-\beta)}{2}\right)x_j\right)-\cos \left(\left(n-\frac{(\alpha+\beta)}{2}\right)x_j\right)\right|^{2s}}\nonumber\\ &\le\frac{2^{3m-6s}\pi^{m}M}{(n-\alpha)^{2s}(n-\beta)^{2s}}\cdot \frac{\max\{|n-\alpha|^m,|n-\beta|^m\} }{\max\{|n-\alpha|^{m-4s}, |n-\beta|^{m-4s}\}}\nonumber \end{align} \begin{align} &= 2^{3m-6s}\pi^{m}M \cdot \max\left\{\frac{(n-\alpha)^{2s}}{(n-\beta)^{2s}}, \frac{(n-\beta)^{2s}}{(n-\alpha)^{2s}} \right\} \nonumber \\ &\le 2^{3m-6s}\pi^{m}MK, 
\label{eq:conclusion} \end{align} where $K>0$ is a suitable constant chosen to be independent of $n$. Lemma \ref{lemma:cosine estimate} is immediate from (\ref{eq:conclusion}) upon taking $m$ to be $4s+1$ and, as discussed in Section 2, Theorem \ref{theorem:main2} is now established. $\square$ \section{A sharpness result} It was shown in Theorem \ref{theorem:main2} that if $f\in L^2([0,2\pi])$ is such that ${\widehat f}(\alpha)={\widehat f} (\beta)=0$, then for almost all $(b_1,b_2, \ldots,b_{4s+1})\in [0,2\pi]^{4s+1}$, $f$ can be written in the form (\ref{eq:differencesumb}) and consequently in the form (\ref{eq:differencesum}). However, in this section we show that if $m\in {\mathbb N}$ and $b_1,b_2,\ldots, b_m\in [0,2\pi]$ are given, there are many functions with ${\widehat f}(\alpha)={\widehat f} (\beta)=0$ that cannot be written in the form (\ref{eq:differencesum}). Thus, no single choice of $b_1,b_2,\ldots, b_m$ suffices to ensure that $(\ref{eq:differencesum})$ is possible for all $f\in L^2([0,2\pi])$ such that ${\widehat f}(\alpha)={\widehat f} (\beta)=0$. The methods extend the techniques in \cite[pages 420-421]{meisters1}. \begin{lemma} \label{lemma:diophantine} Let $c_1,c_2,\ldots,c_m\in {\mathbb R}$. Then, there are infinitely many $q\in {\mathbb N}$ such that $d_{\mathbb Z}(qc_j)<1/q^{1/m}$ for all $j=1,2,\ldots,m$. \end{lemma} {\bf Proof.} See \cite[Theorem 4.6]{niven1} or \cite[page 27]{schmidt1}, for example. $\square$ \begin{theorem} Let $m, s\in {\mathbb N}$ and let $\alpha,\beta\in {\mathbb Z}$ be given. Also, let $c_1,c_2,\ldots,c_m\in [0,2\pi]$ be given. 
Then, there is a vector subspace $V$ of $L^2([0,2\pi])$ such that $V$ has algebraic dimension equal to that of the continuum but, for any $f\in V$ with $f\ne 0$, there is no choice of $f_1,f_2,\ldots,f_m\in L^2([0,2\pi])$ such that $f$ is equal to \[ \sum_{j=1}^m\ \left[ \left( e^{ic_j\left(\frac{\alpha-\beta}{2}\right)}+e^{-ic_j\left(\frac{\alpha-\beta}{2}\right)} \right) \delta_0-\left( e^{ic_j\left(\frac{\alpha+\beta}{2}\right)}\delta_{c_j}+e^{-ic_j\left(\frac{\alpha+\beta}{2}\right)} \delta_{-c_j} \right) \right]^s\ast f_j . \] \end{theorem} {\bf Proof.} Let $f_1,f_2,\ldots,f_m\in L^2([0,2\pi])$, for $b\in [0,2\pi]$ let $\lambda_b$ be given by (\ref{eq:lambdasubb}), and let $f$ be given by \begin{equation}f=\sum_{j=1}^m\lambda_{c_j}^s\ast f_j.\label{eq:formoff} \end{equation} Then, using (\ref{eq:Fouriertransform}), for all $n\in {\mathbb Z}$ we have \begin{align*} {\widehat f}(n)&=\sum_{j=1}^m{\widehat {\lambda}_{c_j}} (n)^s {\widehat {f_j}}(n)\\ &=\sum_{j=1}^m \left( \cos\left(\left(\frac{\alpha-\beta}{2}\right)c_j\right) -\cos\left(\left(n-\frac{\alpha+\beta}{2}\right)c_j\right)\right)^s {\widehat {f_j}}(n)\\ &=2^s\sum_{j=1}^m \sin^s\left(\frac{(n-\alpha)c_j}{2}\right)\sin^s\left(\frac{(n-\beta)c_j}{2}\right)\,{\widehat {f_j}}(n). 
\end{align*} Thus, as $|\sin x|\le \pi d_{\mathbb Z}(x/\pi)$, we see that \begin{equation} |{\widehat f}(n+\alpha)|\le 2^s\sum_{j=1}^m\left|\sin \left(\frac{nc_j}{2}\right)\right|^s\, |{\widehat {f_j}}(n+\alpha)|\le \pi^s 2^s\sum_{j=1}^md_{\mathbb Z}\left( \frac{nc_j}{2\pi}\right)^s\, |{\widehat {f_j}}(n+\alpha)|.\label{eq:diophantineestimate1} \end{equation} Now, by Lemma \ref{lemma:diophantine}, for each $\ell\in {\mathbb N}$ and $k\in \{1,2,\ldots,2^{\ell}\}$ there is $q_{k,\ell}\in {\mathbb N}$ such that \begin{equation} d_{\mathbb Z}\left(q_{k,\ell} \left(\frac{c_j}{2\pi}\right)\right)<\frac{1}{\ell^{1/m}}, \ {\rm for \ all}\ j=1,2,\ldots,m.\label{eq:diophantineestimate2} \end{equation} We see also from Lemma \ref{lemma:diophantine} that the integers $q_{k,\ell}$ can be chosen so that they are all distinct, over all $\ell\in {\mathbb N}$ and $k\in\{1,2,\ldots,2^{\ell}\}$. Now, let $\Phi$ be the family of all functions $\phi:{\mathbb N}\longmapsto {\mathbb N}$ such that $\phi(1)\in \{1,2\}$ and $\phi(\ell+1)\in \{2\phi(\ell)-1, 2\phi(\ell)\}$ for all $\ell\in {\mathbb N}$. Note that if $\phi,\phi^{\prime}\in \Phi$ and $\phi(j)\ne\phi^{\prime}(j)$, then \begin{equation}\phi(n)\ne\phi^{\prime}(n),\ {\rm for\ all}\ n>j.\label{eq: notequal} \end{equation} We see that $\phi(n)\in \{1,2,\ldots,2^n\}$ for all $n\in {\mathbb N}$, and that $\Phi$ has the cardinality of the continuum. Now, if $\phi\in \Phi$, define a function $f_{\phi}$ as follows. If $n\in {\mathbb Z}$ and $n=q_{\phi(\ell),\ell}+\alpha$ for some $\ell\in {\mathbb N}$, then $\ell$ is unique and we put \[{\widehat {f_{\phi}}}(n)=\frac{1}{\ell^{1/2+s/m}}.\] If $n\notin\{q_{\phi(\ell),\ell}+\alpha:\ell\in {\mathbb N}\}$, we put \[{\widehat {f_{\phi}}}(n)=0.\] Then, because \[\sum_{n=-\infty}^{\infty}|{\widehat {f_{\phi}}}(n)|^2=\sum_{\ell=1}^{\infty}\frac{1}{\ell^{1+2s/m}}<\infty,\] we see that $f_{\phi}\in L^2([0,2\pi])$. 
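As an aside, the simultaneous approximation property of Lemma \ref{lemma:diophantine}, which is what makes the choice of the integers $q_{k,\ell}$ above possible, is easy to illustrate by a brute-force search; this snippet is an illustration only, and the sample targets $\sqrt 2$ and $\sqrt 3$ are arbitrary.

```python
import math

def dist_to_Z(x):
    """Distance from x to the nearest integer (the function d_Z in the text)."""
    return abs(x - round(x))

def good_denominators(cs, q_max):
    """All q in [1, q_max] with dist_to_Z(q*c_j) < q**(-1/m) for every j, m = len(cs)."""
    m = len(cs)
    return [q for q in range(1, q_max + 1)
            if all(dist_to_Z(q * c) < q ** (-1.0 / m) for c in cs)]

# the lemma guarantees infinitely many such q; a short search already finds several
qs = good_denominators([math.sqrt(2), math.sqrt(3)], 5000)
print(len(qs), qs[:5])
```

In the proof above the targets are the numbers $c_j/2\pi$, and $q$ is additionally taken larger than $\ell$ so that $1/q^{1/m}\le 1/\ell^{1/m}$.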
Now, assume that $\phi\in\Phi$ and that $f_{\phi}$ can be put in the form (\ref{eq:formoff}). By (\ref{eq:diophantineestimate1}) and (\ref{eq:diophantineestimate2}) and taking $n=q_{\phi(\ell),\ell}$ where $\ell\in {\mathbb N}$ we have \[|{\widehat f}_{\phi}(q_{\phi(\ell),\ell}+\alpha)|\le \frac{\pi^s2^s}{\ell^{s/m}}\left(\sum_{j=1}^m |{\widehat {f_j}}(q_{\phi(\ell),\ell}+\alpha)|\right)\le\frac{C\pi^s2^s}{\ell^{s/m}}\left(\sum_{j=1}^m |{\widehat {f_j}}(q_{\phi(\ell),\ell}+\alpha)|^2\right)^{1/2},\] for some $C>0$ that depends upon $m$ only, and we see that \begin{equation}\sum_{\ell=1}^{\infty}\ell^{2s/m}|{\widehat f}_{\phi}(q_{\phi(\ell),\ell}+\alpha )|^2<\infty.\label{eq:contra1} \end{equation} However, \begin{equation}\sum_{\ell=1}^{\infty}\ell^{2s/m}|{\widehat f}_{\phi}(q_{\phi(\ell),\ell}+\alpha)|^2=\sum_{\ell=1}^{\infty}\frac{\ell^{2s/m}}{\ell^{1+2s/m}}=\sum_{\ell=1}^{\infty}\frac{1}{\ell}=\infty.\label{eq:contra2} \end{equation} The contradiction between (\ref{eq:contra1}) and (\ref{eq:contra2}) shows that $f_{\phi}$ cannot be put in the form (\ref{eq:formoff}). Now, let $\phi_1,\phi_2,\ldots,\phi_r\in \Phi$ be distinct, and consider a linear combination $h=\sum_{j=1}^rd_jf_{\phi_j}$ where, say, $d_1\ne 0$. It follows from the observation (\ref{eq: notequal}) above that there is $k_0\in {\mathbb N}$ such that for all $k>k_0$ with ${\widehat f}_{\phi_1}(k)\ne 0$, we have ${\widehat f}_{\phi_j}(k)=0$ for all $j\in \{2,3,\ldots,r\}$. Thus, for all $k>k_0$ with ${\widehat f}_{\phi_1}(k)\ne 0$, we see that ${\widehat h}(k)=d_1{\widehat f}_{\phi_1}(k)$. Then, if we apply (\ref{eq:contra1}) and (\ref{eq:contra2}) with $f_{\phi_1}$ in place of $f_{\phi}$, we see that the contradiction between (\ref{eq:contra1}) and (\ref{eq:contra2}) applies to $h$, and we deduce that $h$ cannot be written in the form (\ref{eq:formoff}). We now see that if $V$ is the subspace of $L^2([0,2\pi])$ spanned by $\{f_{\phi}:\phi\in \Phi\}$, then $V$ has the required properties. 
$\square$ \section{Results for compact, connected abelian groups} \setcounter{equation}{0} In this section we look at some applications of the earlier results to compact, connected abelian groups and to the automatic continuity of linear forms on $L^2$ spaces on these groups. {\bf Definitions.} If $z\in {\mathbb T}$ and $\nu\in {\mathbb R}$ we may write $z$ uniquely as $z=e^{it}$ where $t\in [0,2\pi)$, and we then take $z^{\nu}$ to be $e^{it\nu}$. In particular, for $\alpha\in {\mathbb Z}$, $z^{\alpha/2}$ is $e^{it\alpha/2}$. Let $\alpha, \beta\in {\mathbb Z}$ and let $L$ be a linear form on $L^2({\mathbb T})$. Then $L$ is called \emph{$(\alpha,\beta)$-invariant} if, for all $b\in {\mathbb T}$ and $f\in L^2({\mathbb T})$, \[L\left[\left(b^{(\alpha+\beta)/2}\delta_b+b^{-(\alpha+\beta)/2}\delta_{b^{-1}}\right)\ast f\right]= \bigl(b^{(\alpha-\beta)/2}+b^{-(\alpha-\beta)/2}\bigr)\,L(f).\] Also, $L$ is called \emph{translation invariant} if $L(\delta_b\ast f)=L(f)$ for all $b\in {\mathbb T}$ and $f\in L^2({\mathbb T})$. \begin{theorem}\label{theorem:circlegroupresult} Let $\alpha, \beta\in {\mathbb Z}$ and let $L$ be a linear form on $L^2({\mathbb T})$. Then the following hold. (i) If $L$ is $(\alpha,\beta)$-invariant on $L^2({\mathbb T})$, then $L$ is continuous on $L^2({\mathbb T})$. Thus, in this case there is a function $h\in L^2({\mathbb T})$ such that $L(f)=\int_{\mathbb T}f{\overline h}\,d\mu_{\mathbb T}$ for all $f\in L^2({\mathbb T})$. (ii) If $h\in L^2({\mathbb T})$, the linear form on $L^2({\mathbb T})$ given by $f\longmapsto \int_{\mathbb T}f{\overline h}\,d\mu_{\mathbb T}$ is $(\alpha,\beta)$-invariant if and only if there are $c_1,c_2\in {\mathbb C}$ such that $h(z)=c_1z^{\alpha}+c_2z^{\beta}$ for almost all $z\in {\mathbb T}$. (iii) If $L$ is a translation invariant linear form on $L^2({\mathbb T})$, then $L$ is a multiple of the Haar measure on $\mathbb T$. 
\end{theorem} {\bf Proof.} (i) It follows immediately from Theorem \ref{theorem:main2} with $s=1$ that if $L$ is $(\alpha,\beta)$-invariant, $L$ vanishes on the space \[\bigl\{f:f\in L^2({\mathbb T})\ {\rm and}\ {\widehat f}(\alpha)={\widehat f}(\beta)=0\bigr\}.\] Thus, if $L$ is $(\alpha,\beta)$-invariant it vanishes on this closed subspace of $L^2({\mathbb T})$, a space that has finite codimension in $L^2({\mathbb T})$. By (d) of Proposition 5.1 in \cite[page 25]{nillsen2}, $L$ is continuous. The existence of the function $h$ in this case comes from the fact that the dual space of $L^2({\mathbb T})$ is identified with $L^2({\mathbb T})$. (ii) If $h\in L^2({\mathbb T})$, by Theorem \ref{theorem:main2} the linear form corresponding to $h$ is $(\alpha,\beta)$-invariant if and only if $\int_{\mathbb T}f{\overline h}\,d\mu_{\mathbb T}=0$ whenever ${\widehat f}(\alpha)={\widehat f}(\beta)=0$. This occurs precisely when ${\widehat h}(n)=0$ for all $n\in {\mathbb Z}$ with $n\ne \alpha, \beta$. But that means that the Fourier expansion of $h$ in $L^2({\mathbb T})$ is a linear combination of $z^{\alpha}$ and $z^{\beta}$. (iii) If $L$ is translation invariant, we see that it is $(0,0)$-invariant. Then, (ii) shows that $L$ is a multiple of the Haar measure on $\mathbb T$. $\square$ Note that the conclusion (iii) in Theorem \ref{theorem:circlegroupresult} is due to Meisters and Schmidt \cite{meisters1}. Conclusion (ii) generalises their result. Let $G$ denote a compact, connected abelian group with dual group ${\widehat G}$. The identity element in ${\widehat G}$ is denoted by ${\widehat e}$. The group operation in such a group will be written multiplicatively, and the normalised Haar measure on such a group $G$ will be denoted by $\mu_G$. We denote by $M(G)$ the family of bounded complex Borel measures on $G$. 
Let $m\in {\mathbb N}$ and for each $\gamma \in {\widehat G}$ with $\gamma \ne {\widehat e}$\, let $h_{\gamma}:G^m\longrightarrow {\mathbb T}^m$ be the function given by \[h_{\gamma}(g_1,g_2,\ldots,g_m)=(\gamma(g_1), \gamma(g_2),\ldots,\gamma(g_m)).\] The function $h_{\gamma}$ is continuous and its image is $\gamma(G)^m$. As $G$ is connected and $\gamma\ne {\widehat e}$, $\gamma(G)$ is a compact connected subgroup of ${\mathbb T}$ that is strictly larger than $\{1\}$, so $\gamma(G)$ must be ${\mathbb T}$ itself. Consequently, $h_{\gamma}$ maps $G^m$ onto ${\mathbb T}^m$. It follows that for any non-negative measurable function $f$ on ${\mathbb T}^m$, we have \begin{equation}\int_{G^m}f\circ h_{\gamma}\,d\mu_{G^m}=\int_{{\mathbb T}^m}f\,d\mu_{{\mathbb T}^m}.\label{eq:Haarinvariance}\end{equation} This is because each side of (\ref{eq:Haarinvariance}) defines a translation invariant integral over ${\mathbb T}^m$, so the equality in (\ref{eq:Haarinvariance}) is a consequence of the uniqueness of the Haar measure on ${\mathbb T}^m$ (cf. (iii) of Theorem \ref{theorem:circlegroupresult}). In the following result, note that for any compact connected abelian group $G$, for every element $b\in G$ we have $b=d^2$ for some $d\in G$ \cite[vol. I, page 385]{hewitt1}. \begin{theorem}\label{theorem:groupsresult} Let $G$ be a compact connected abelian group with dual group ${\widehat G}$. Let ${\widehat e}$ be the identity element of ${\widehat G}$. Let $s\in {\mathbb N}$ and let $n,\alpha,\beta\in {\mathbb Z}$ with $n\notin \{\alpha,\beta\}$. Then, for a function $f\in L^2(G)$ the following conditions (i) and (ii) are equivalent. (i) The Fourier transform of $f$ vanishes at ${\widehat e}$. That is, ${\widehat f}({\widehat e})=0$. 
(ii) There are $b_1,b_2,\ldots,b_{4s+1}\in G$ and $d_1,d_2,\ldots,d_{4s+1}\in G$ with $d_j^2=b_j$ for all $j\in \{1,2,\ldots,4s+1\}$, such that there are $f_1,f_2,\ldots,f_{4s+1}\in L^2(G)$ so that \begin{equation} f=\sum_{j=1}^{4s+1}\left(\delta_{d_j^{\alpha-\beta}}+\delta_{d_j^{-(\alpha-\beta)}}-\delta_{d_j^{2n-(\alpha+\beta)}}-\delta_{d_j^{-(2n-(\alpha+\beta))}}\right)^{s}\ast f_j. \label{eq:groupidentity} \end{equation} When $f$ satisfies conditions (i) and (ii), almost all $(b_1,b_2,\ldots,b_{4s+1})\in G^{4s+1}$ have the following property: for any $d_1,d_2,\ldots,d_{4s+1}\in G$ with $b_j=d_j^2$ for all $j\in \{1,2,\ldots,4s+1\}$, there are $f_1,f_2,\ldots,f_{4s+1}\in L^2(G)$ such that (\ref{eq:groupidentity}) holds. In the case when ${\widehat f}({\widehat e})=0$ and $\alpha-\beta$ is even, for almost all $(b_1,\ldots,b_{4s+1})\in G^{4s+1}$, there are $f_1,f_2,\ldots,f_{4s+1}\in L^2(G)$ such that \begin{equation} f=\sum_{j=1}^{4s+1}\left(\delta_{b_j^{(\alpha-\beta)/2}}+\delta_{b_j^{-(\alpha-\beta)/2}}-\delta_{b_j^{n-(\alpha+\beta)/2}}-\delta_{b_j^{-(n-(\alpha+\beta)/2)}}\right)^{s}\ast f_j. \label{eq:groupidentity2} \end{equation} \end{theorem} {\bf Proof.} Assume that (i) holds. 
In (\ref{eq:Haarinvariance}), given $s\in {\mathbb N}$ and $n,\alpha,\beta\in {\mathbb Z}$ such that $n\ne\alpha$ and $n\ne \beta$, let us take $f$ to be the function on ${\mathbb T}^{4s+1}$ whose value $f(z_1,z_2,\ldots,z_{4s+1})$ at $(z_1,z_2,\ldots,z_{4s+1})$ is \[\frac{1}{\displaystyle\sum_{j=1}^{4s+1}\Big|z_j^{(\alpha-\beta)/2}+z_j^{-(\alpha-\beta)/2}- z_j^{n-(\alpha+\beta)/2}-z_j^{-(n-(\alpha+\beta)/2)}\Big|^{2s}}.\] Then, using (\ref{eq:Haarinvariance}) with $m=4s+1$, for each $\gamma\in {\widehat G}$ with $\gamma\ne {\widehat e}$ we have \begin{align}&\int_{G^{4s+1}}\frac{d\mu_G(b_1) d\mu_G(b_2)\cdots d\mu_G(b_{4s+1})} {\displaystyle\sum_{j=1}^{4s+1}\Big|\gamma(b_j)^{(\alpha-\beta)/2}+\gamma(b_j)^{-(\alpha-\beta)/2}- \gamma(b_j)^{n-(\alpha+\beta)/2}- \gamma(b_j)^{-(n-(\alpha+\beta)/2)}\Big|^{2s}}\nonumber\\ &=\frac{1}{ 2^{6s+1}\pi^{4s+1} }\int_{[0,2\pi]^{4s+1}}\frac{dx_1dx_2\cdots dx_{4s+1}}{\displaystyle\sum_{j=1}^{4s+1}\left|\cos \left(\left(\frac{\alpha-\beta}{2}\right)x_j\right)-\cos\left( \left(n-\left(\frac{\alpha+\beta}{2}\right)\right)x_j\right)\right|^{2s} }. \label{eq:equationX} \end{align} Now let $b,d\in G$ with $d^2=b$, and let $\gamma\in {\widehat G}$. Then, $\gamma(d)^2=\gamma(b)$, so that if we put $\gamma(b)=e^{i\theta}$ where $\theta\in [0,2\pi)$, we have $\gamma(d)=e^{i\theta/2}$ or $\gamma(d)=e^{i(\theta/2+\pi)}$. In the former case we have \begin{equation}\gamma(d)^{\alpha-\beta}=e^{i\theta(\alpha-\beta)/2}=\gamma(b)^{(\alpha-\beta)/2}, \label{eq:sign1}\end{equation} while in the latter case we have \begin{equation} \gamma(d)^{\alpha-\beta}=e^{i(\alpha-\beta)\pi}e^{i\theta(\alpha-\beta)/2}=(-1)^{\alpha-\beta}\gamma(b)^{(\alpha-\beta)/2}. \label{eq:sign2} \end{equation} Similarly, when $\gamma(d)=e^{i\theta/2}$ or $\gamma(d)=e^{i(\theta/2+\pi)}$ we have, respectively, \begin{equation} \gamma(d)^{\alpha+\beta}= \gamma(b)^{(\alpha+\beta)/2}\ \hbox{or}\ \gamma(d)^{\alpha+\beta}=(-1)^{\alpha+\beta}\gamma(b)^{(\alpha+\beta)/2}. 
\label{eq:sign3} \end{equation} Note that in (\ref{eq:sign1}), (\ref{eq:sign2}) and (\ref{eq:sign3}), $\alpha-\beta$ and $\alpha+\beta$ are both even or both odd, so $(-1)^{\alpha-\beta}$ and $(-1)^{\alpha+\beta}$ are both equal to $1$ or both equal to $-1$. Now, for $n\in {\mathbb Z}$ and $b,d\in G$ with $d^2=b$, put \[\lambda_{b,d,n}=\bigl(\delta_{d^{\alpha-\beta}}+\delta_{d^{-(\alpha-\beta)}}-\delta_{d^{2n-(\alpha+\beta)}}-\delta_{d^{-(2n-(\alpha+\beta))}}\bigr)^s\in M(G).\] Then, for $\gamma\in {\widehat G}$, \begin{equation}{\widehat \lambda_{b,d,n}}(\gamma)=\bigl(\gamma(d)^{-(\alpha-\beta)}+\gamma(d)^{\alpha-\beta}- \gamma(d)^{-(2n-(\alpha+\beta))}- \gamma(d)^{2n-(\alpha+\beta)}\bigr)^s. \label{eq:bdequation}\end{equation} In view of (\ref{eq:sign1}), (\ref{eq:sign2}) and (\ref{eq:sign3}), we see that \begin{equation} |{\widehat \lambda_{b,d,n}}(\gamma)|=\big|\gamma(b)^{(\alpha-\beta)/2}+\gamma(b)^{-(\alpha-\beta)/2}-\gamma(b)^{n-(\alpha+\beta)/2}-\gamma(b)^{-(n-(\alpha+\beta)/2)}\big|^s. \label{eq:FTequalszero} \end{equation} Now, for each $b\in G$, let $d_b\in G$ be any element such that $d_b^2=b$. As $n\notin\{\alpha,\beta\}$, if $M$ is the constant as in Lemma \ref{lemma:cosine estimate} and we use (\ref{eq:equationX}) and (\ref{eq:FTequalszero}), upon changing the order of summation and integration we have \begin{align} &\int_{G^{4s+1}}\left(\sum_{\gamma\in {\widehat G}, \gamma\ne {\widehat e}}\,\frac{|{\widehat f}(\gamma)|^2}{\displaystyle\sum_{j=1}^{4s+1}|{\widehat \lambda_{b_j,d_{b_j},n}}(\gamma)|^2}\right)\,\prod_{j=1}^{4s+1}d\mu_G(b_j) \le \frac{M}{2^{6s+1}\pi^{4s+1}}\sum_{\gamma\in {\widehat G}}|{\widehat f}(\gamma)|^2,\label{eq:product} \end{align} which is finite by Plancherel's Theorem (see \cite[vol. II, page 226]{hewitt1}). 
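The sign bookkeeping in (\ref{eq:sign1})--(\ref{eq:sign3}) can be double-checked numerically: for either square root $d$ of $b=e^{i\theta}$, the four-term sum computed from $d$ has the same modulus as the corresponding sum computed from $b$. The snippet below is an illustration only, with arbitrary sample values of $\theta$, $\alpha$, $\beta$ and $n$.

```python
import cmath

def modulus_match(theta, alpha, beta, n, tol=1e-9):
    """Compare |sum over d| with |sum over b| for both square roots d of b = e^{i*theta}."""
    # four-term sum expressed through b, via half-integer powers of b = e^{i*theta}
    sum_b = (cmath.exp(1j * theta * (alpha - beta) / 2)
             + cmath.exp(-1j * theta * (alpha - beta) / 2)
             - cmath.exp(1j * theta * (n - (alpha + beta) / 2))
             - cmath.exp(-1j * theta * (n - (alpha + beta) / 2)))
    for d in (cmath.exp(1j * theta / 2), cmath.exp(1j * (theta / 2 + cmath.pi))):
        sum_d = (d ** (alpha - beta) + d ** (-(alpha - beta))
                 - d ** (2 * n - (alpha + beta)) - d ** (-(2 * n - (alpha + beta))))
        if abs(abs(sum_d) - abs(sum_b)) > tol:
            return False
    return True

print(modulus_match(0.7, alpha=3, beta=1, n=5))   # alpha - beta even
print(modulus_match(0.7, alpha=2, beta=1, n=5))   # alpha - beta odd
```

In the even case the two sums agree exactly for the root $e^{i\theta/2}$, and in general they differ only by the common sign $(-1)^{\alpha-\beta}=(-1)^{\alpha+\beta}$, so the moduli coincide.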
We deduce that provided ${\widehat f}({\widehat e})=0$, for almost all $(b_1,b_2,\ldots,b_{4s+1})\in G^{4s+1}$ we have that \begin{equation}\sum_{\gamma\in {\widehat G} }\,\frac{|{\widehat f}(\gamma)|^2}{\displaystyle\sum_{j=1}^{4s+1}|{\widehat \lambda_{b_j,d_{b_j},n}}(\gamma)|^{2}}<\infty.\label{eq:AE} \end{equation} Then, (ii) follows from (\ref{eq:bdequation}), (\ref{eq:AE}) and Theorem \ref{theorem:characterisation}, so (i) implies (ii). It is clear from (\ref{eq:bdequation}) that if $f$ has the form (\ref{eq:groupidentity}), then ${\widehat f}({\widehat e})=0$. Thus, (ii) implies (i). Above, the observation was made that (\ref{eq:AE}) holds for almost all \break$(b_1,b_2,\ldots,b_{4s+1})\in G^{4s+1}$. If $(b_1,b_2,\ldots,b_{4s+1})$ is any such point and ${\widehat f}({\widehat e})=0$, then for any choice of elements $d_1,d_2,\ldots,d_{4s+1}$ such that $d_j^2=b_j$ for all $j$, (\ref{eq:groupidentity}) holds. When ${\widehat f}({\widehat e})=0$ and $\alpha-\beta$ is even, the final conclusion derives from the above arguments and the fact that \[d_j^{\alpha-\beta}=(d_j^2)^{(\alpha-\beta)/2}=b_j^{(\alpha-\beta)/2}\ \hbox{and}\ d_j^{\alpha+\beta}=b_j^{(\alpha+\beta)/2}.\] $\square$ The following is a result concerning automatic continuity on groups. It is derived from Theorem \ref{theorem:groupsresult}, but only a special case is stated; more general results can be derived in the same way. \begin{theorem}\label{theorem:Haar} Let $G$ be a compact connected abelian group. Then the following conditions (i), (ii) and (iii) on a linear form $L:L^2(G) \longrightarrow {\mathbb C}$ are equivalent. (i) $L$ is translation invariant. That is, $L(\delta_g\ast f)=L(f)$, for all $g\in G$ and $f\in L^2(G)$. (ii) There is $n\in {\mathbb Z}$ with $n\notin \{-1,1\}$ such that \[L\bigl((\delta_g+\delta_{g^{-1}})\ast f\bigr)=L\bigl((\delta_{g^n}+\delta_{g^{-n}})\ast f\bigr),\] for all $g\in G$ and $f\in L^2(G)$. 
(iii) $L$ is a multiple of the Haar measure. Also, the normalised Haar measure $\mu_G$ on $G$ is unique. \end{theorem} {\bf Proof.} (i) implies that the identity in (ii) holds for every $n\in {\mathbb Z}$, so in particular (ii) holds. Also (iii) implies (i) because the Haar measure is translation invariant. Finally, assume that (ii) holds for some $n\in {\mathbb Z}$ with $n\notin \{-1,1\}$. By Theorem \ref{theorem:groupsresult} with $s=1$, $\alpha=1$ and $\beta=-1$, we deduce that $L$ vanishes on the closed subspace $\{f:f\in L^2(G)\ {\rm and}\ {\widehat f}({\widehat e})=0\}$. This latter space has codimension $1$ in $L^2(G)$ and so it follows easily that $L$ is continuous and is a multiple of the Haar measure on $G$ (see \cite[page 415]{meisters1}), and (iii) follows. Lastly, as the Haar measure $\mu_G$ defines a translation invariant linear form on $L^2(G)$, the equivalence of (i) and (iii) implies the uniqueness of the Haar measure. $\square$ \noindent Rodney Nillsen \noindent School of Mathematics and Applied Statistics \noindent University of Wollongong \noindent New South Wales \noindent AUSTRALIA 2522 \noindent email: [email protected] \noindent web page: http://www.uow.edu.au/$\sim$nillsen \end{document}
arXiv
Macroalgal allelopathy in the emergence of coral diseases A priori estimates for positive solutions to subcritical elliptic problems in a class of non-convex regions May 2017, 22(3): 763-781. doi: 10.3934/dcdsb.2017037 Concentration phenomenon in some non-local equation Olivier Bonnefon 1, , Jérôme Coville 1,, and Guillaume Legendre 2, BioSP, INRA Centre de Recherche PACA, 228 route de l'Aérodrome, Domaine Saint Paul -Site Agroparc, 84914 AVIGNON Cedex 9, France CEREMADE, UMR CNRS 7534, Université Paris-Dauphine, PSL Research University, Place du Maréchal De Lattre De Tassigny, 75775 Paris cedex 16, France * Corresponding author: Jérôme Coville Dedicated to Professor Stephen Cantrell, with all our admiration Received September 2015 Revised April 2016 Published January 2017 Fund Project: The research leading to these results has received funding from the french ANR program under the "ANR JCJC" project MODEVOL: ANR-13-JS01-0009 held by Gael Raoul and the "ANR DEFI" project NONLOCAL: ANR-14-CE25-0013 held by Fran¸cois Hamel. J. Coville wants to thank G. Raoul for interesting discussions on this topic We are interested in the long time behaviour of the positive solutions of the Cauchy problem involving the integro-differential equation $\partial_t u(t,x)\\=\int_{\Omega }m(x,y)\left(u(t,y)-u(t,x)\right)\,dy+\left(a(x)-\int_{\Omega }k(x,y)u(t,y)\,dy\right)u(t,x),$ supplemented by the initial condition $u(0,\cdot)=u_0$ $\Omega $ , where the domain is a, the functions $k$ $m$ are non-negative kernels satisfying integrability conditions and the function $a$ is continuous. Such a problem is used in population dynamics models to capture the evolution of a clonal population structured with respect to a phenotypic trait. 
In this context, the function $u$ represents the density of individuals characterized by the trait, the domain of trait values is a bounded subset of $\mathbb{R}^N$ , the kernels respectively account for the competition between individuals and the mutations occurring in every generation, and the function represents a growth rate. When the competition is independent of the trait, that is, the kernel is independent of $x$ $k(x,y)=k(y)$ ), we construct a positive stationary solution which belongs to $d\mu$ inthe space of Radon measures on $\mathbb{M}(\Omega )$ .Moreover, in the case where this measure is regular and bounded, we prove its uniqueness and show that, for any non-negative initial datum in $L^1(\Omega )\cap L^{\infty}(\Omega )$ , the solution of the Cauchy problem converges to this limit measure in $L^2(\Omega )$ . We also exhibit an example for which the measure is singular and non-unique, and investigate numerically the long time behaviour of the solution in such a situation. The numerical simulations seem to reveal a dependence of the limit measure with respect to the initial datum. Keywords: Non-local equation, demo-genetics, concentration phenomenon, asymptotic behaviour. Mathematics Subject Classification: Primary: 35R09, 45K05; Secondary: 35B40, 35B44, 92D1. Citation: Olivier Bonnefon, Jérôme Coville, Guillaume Legendre. Concentration phenomenon in some non-local equation. Discrete & Continuous Dynamical Systems - B, 2017, 22 (3) : 763-781. doi: 10.3934/dcdsb.2017037 U. M. Asher, S. J. Ruuth and B. T. R. Wetton, Implicit-explicit methods for time-dependent partial differential equations, SIAM J. Numer. Anal., 32 (1995), 797-823. doi: 10.1137/0732037. Google Scholar U. M. Asher, S. J. Ruuth and R. J. Spiteri, Implicit-explicit Runge–Kutta methods for time-dependent partial differential equations, Appl. Numer. Math., 25 (1997), 151-167. doi: 10.1016/S0168-9274(97)00056-1. Google Scholar G. Barles and B. 
Figure 1. Numerical approximation of the solution to (10) at different times for two configurations, in which only the mutation rate differs. The competition rate is constant and set to $1$, and the growth rate function achieves its maximum only at the origin, while the initial datum $u_0$ is uniform with value 1. We have set $\rho=1$ for the first simulation (subfigures (A) to (D)), and $\rho=0.1$ for the second one (subfigures (E) to (H)). In both situations, we observe the convergence to a stationary solution, either to a regular measure (see subfigure (D)) or to a singular measure with one Dirac mass at the origin (see subfigure (H)), the latter being characteristic of a concentration phenomenon. In the regular case (subfigures (A) to (D)), the stationarity is attained numerically around $t=590$. Figure 2. Numerical approximation of the solution of problem (10)-(11) at different times for two configurations, which differ only in their initial datum. The mutation and competition rates are constant and set respectively to $2$ and $1$, the growth rate function achieves its maximum at four points, and the initial datum $u_0$ vanishes on three (subfigures (A) to (D)) or two (subfigures (E) to (H)) of these points. In both cases, rapid convergence of the approximate solution towards an identical regular stationary state is observed, the numerical stationarity being attained around $t=85$. Figure 3. Numerical approximation of the solution of problem (10)-(11) at different times.
The mutation and competition rates are constant and set respectively to $0.01$ and $1$, the growth rate function achieves its maximum at four points, and the initial datum $u_0$ vanishes on three of these four points. We observe a slow convergence of the numerical solution towards the approximation of a singular stationary measure containing a single Dirac mass. The approximate solution continues to take large increasing values in a single element at $t=1000$. Figure 4. Numerical approximation of the solution of problem (10)-(11) at different times. The mutation and competition rates are constant and set respectively to $0.01$ and $1$, the growth rate function achieves its maximum at four points, and the initial datum $u_0$ vanishes on two of these four points. We observe a slow convergence of the numerical solution towards the approximation of a singular stationary measure containing two Dirac masses.
Jacobi group In mathematics, the Jacobi group, introduced by Eichler & Zagier (1985), is the semidirect product of the symplectic group Sp2n(R) and the Heisenberg group R1+2n. The concept is named after Carl Gustav Jacob Jacobi. Automorphic forms on the Jacobi group are called Jacobi forms. References • Berndt, Rolf; Schmidt, Ralf (1998), Elements of the representation theory of the Jacobi group, Progress in Mathematics, vol. 163, Birkhäuser Verlag, ISBN 978-3-7643-5922-5, MR 1634977 • Eichler, Martin; Zagier, Don (1985), The theory of Jacobi forms, Progress in Mathematics, vol. 55, Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-3180-2, MR 0781735
Fibonacci lengths of all finite $p$-groups of exponent $p^2$ Ahmadi B., Doostie H. The Fibonacci lengths of finite $p$-groups have been studied by Dikici and coauthors since 1992. All considered groups are of exponent $p$, and the lengths depend on the Wall number $k(p)$. The $p$-groups of nilpotency class 3 and exponent $p$ were also studied by Dikici in 2004. In the paper, we study all $p$-groups of nilpotency class 3 and exponent $p^2$. Thus, we complete the study of the Fibonacci lengths of all $p$-groups of order $p^4$ by proving that the Fibonacci length is $k(p^2)$. Analog of the John theorem for weighted spherical means on a sphere Savost'yanova I. M., Volchkov V. V. We study generalizations of the class of functions with zero integrals over balls of fixed radius. An analog of the John uniqueness theorem is obtained for weighted spherical means on a sphere. On 3-dimensional f-Kenmotsu manifolds and Ricci solitons De U. C., Turan M., Yildiz A. The aim of the present paper is to study 3-dimensional f-Kenmotsu manifolds and Ricci solitons. First, we give an example of a 3-dimensional f-Kenmotsu manifold. Then we consider a Ricci-semisymmetric 3-dimensional f-Kenmotsu manifold and prove that a 3-dimensional f-Kenmotsu manifold is Ricci semisymmetric if and only if it is an Einstein manifold. Moreover, we investigate an η-parallel Ricci tensor in a 3-dimensional f-Kenmotsu manifold. Finally, we study Ricci solitons in a 3-dimensional f-Kenmotsu manifold. Asymptotic estimates for the solutions of boundary-value problems with initial jump for linear differential equations with small parameter in the coefficients of derivatives Kasymov K. A., Nurgabyl D. N., Uaissov A. B. We establish asymptotic estimates for the solutions of singularly perturbed boundary-value problems with initial jumps. Lebesgue-type inequalities for the de la Vallée-Poussin sums on sets of entire functions Musienko A. P., Serdyuk A. S.
For functions from the sets $C^\psi_\beta L_s$, $1 \leq s \leq \infty$, where $\psi(k) > 0$ and $\lim_{k \to \infty} \frac{\psi(k+1)}{\psi(k)} = 0$, we obtain asymptotically sharp estimates for the norms of deviations of the de la Vallée-Poussin sums in the uniform metric, represented in terms of the best approximations of the $(\psi, \beta)$-derivatives of functions of this kind by trigonometric polynomials in the metrics of the spaces $L_s$. It is shown that the obtained estimates are sharp on some important functional subsets. Global nonexistence of solutions for a system of nonlinear viscoelastic wave equations with degenerate damping and source terms Bayoud M., Ouchenane D., Zennir Kh. The global existence and nonexistence of solutions for a system of nonlinear wave equations with degenerate damping and source terms, supplemented with initial and Dirichlet boundary conditions, were shown by Rammaha and Sakuntasathien in a bounded domain $\Omega \subset \mathbb{R}^n$, n = 1, 2, 3, in the case where the initial energy is negative. A global nonexistence result on the solution with positive initial energy for a system of viscoelastic wave equations with nonlinear damping and source terms was obtained by Messaoudi and Said-Houari. Our result extends these previous results. We prove that the solutions of a system of wave equations with viscoelastic term, degenerate damping, and strong nonlinear sources acting in both equations at the same time are globally nonexisting provided that the initial data are sufficiently large in a bounded domain $\Omega$ of $\mathbb{R}^n$, n ≥ 1, the initial energy is positive, and the strongly nonlinear functions f1 and f2 satisfy the appropriate conditions. The main tool of the proof is based on the methods used by Vitillaro and developed by Said-Houari. Constructive description of monogenic functions in a three-dimensional harmonic algebra with one-dimensional radical Plaksa S. A., Pukhtaevich R. P.
We present a constructive description of monogenic functions that take values in a three-dimensional commutative harmonic algebra with one-dimensional radical by using analytic functions of a complex variable. It is shown that monogenic functions have Gâteaux derivatives of all orders. Li–Yorke sensitivity for semigroup actions Rybak O. V. We introduce and study the concept of Li–Yorke sensitivity for semigroup actions (dynamical systems of the form (X, G), where X is a metric space and G is a semigroup of continuous mappings of this space onto itself). A system (X, G) is called Li–Yorke sensitive if there exists positive ε such that, for any point x ∈ X and any open neighborhood U of this point, one can find a point y ∈ U for which the following conditions are satisfied: (i) d(g(x), g(y)) > ε for infinitely many g ∈ G; (ii) for any δ > 0, there exists h ∈ G satisfying the condition d(h(x), h(y)) < δ. In particular, it is shown that a nontrivial topologically weakly mixing system (X, G) with a compact set X and an Abelian semigroup G is Li–Yorke sensitive. On polymer expansions for generalized Gibbs lattice systems of oscillators with ternary interaction Skrypnik W. I. We propose a new short proof of the convergence of high-temperature polymer expansions in the thermodynamic limit of canonical correlation functions for classical and quantum Gibbs lattice systems of oscillators interacting via pair and ternary potentials, and for nonequilibrium stochastic systems of oscillators interacting via a pair potential with Gibbsian initial correlation functions. Linear Combinations of the Volterra Dissipative Operator and Its Adjoint Operator Gubreev G. M., Olefir E. I., Tarasenko A. A. We study the spectral properties of linear combinations of the Volterra dissipative operator and its adjoint operator in a separable Hilbert space. Main Inverse Problem for Differential Systems With Degenerate Diffusion Ibraeva G. T., Tleubergenov M. I.
The separation method is used to obtain sufficient conditions for the solvability of the main (according to Galiullin's classification) inverse problem in the class of first-order Itô stochastic differential systems with random perturbations from the class of Wiener processes and with diffusion degenerate with respect to a part of the variables. Probability Measures on the Group of Walsh Functions With Trivial Equivalence Class Il'inskaya I. P., Neguritsa D. S. We establish necessary and sufficient conditions for the retrieval, to within a shift, of a composition of three Poisson distributions and a uniform distribution on five or six elements of the group of Walsh functions according to the absolute values of their characteristic functions. Brief Communications (Ukrainian) Cross Topology and Lebesgue Triples Karlova O. O., Mykhailyuk V. V. The cross topology γ on the product of topological spaces X and Y is the collection of all sets G ⊆ X × Y such that the intersections of G with every vertical line and every horizontal line are open subsets of the vertical and horizontal lines, respectively. For the spaces X and Y from a class of spaces containing all spaces $\mathbb{R}^n$, it is shown that there exists a separately continuous function f : X × Y → (X × Y, γ) which is not a pointwise limit of a sequence of continuous functions. We also prove that each separately continuous function is a pointwise limit of a sequence of continuous functions if it is defined on the product of a strongly zero-dimensional metrizable space and a topological space and takes values in an arbitrary topological space. One Property of Ring Q-Homeomorphisms With Respect to a p-Module Salimov R. R. We establish sufficient conditions for a ring Q-homeomorphism in $\mathbb{R}^n$, n ≥ 2, with respect to a p-module with n − 1 < p < n to have the finite Lipschitz property.
We also construct an example of a ring Q-homeomorphism with respect to a p-module at a fixed point which does not have the finite Lipschitz property. Brief Communications (English) Fixed-Point Theorems and Common Fixed-Point Theorems on Spaces Equipped With Vector-Valued Metrics Hosseinzadeh H., Jabbari A., Razani A. We show the existence of fixed points and common fixed points for single-valued generalized contractions on spaces equipped with vector-valued metrics.
Notes for Bocconi Applied Math, an essential part summarizing lecture notes and exercises. During the last fifteen years, the geometric and topological methods of the theory of manifolds have assumed a central role in the most advanced areas of pure and applied mathematics as well as theoretical physics. The three volumes of "Modern Geometry - Methods and Applications" contain a concrete exposition of these methods together with their main applications in mathematics and physics. One of the aims of this work is to investigate some natural properties of Borel sets which are undecidable in $ZFC$. The authors' starting point is the following simple, though non-trivial, result: consider $X \subset 2^\omega\times2^\omega$, set $Y=\pi(X)$, where $\pi$ denotes the canonical projection of $2^\omega\times2^\omega$ onto the first factor, and suppose that $(\star)$: "Any compact subset of $Y$ is the projection of some compact subset of $X$". 15(b). Thus, the assumption there that G is open is essential. We close this section with a circumspective remark. We did not use sequences in this section. Connectedness is one of the only properties of a metric space I know whose examination never uses the concept of a convergent sequence. (2) Recall, from 2.2, that a subset E of R is an interval if and only if when a, b ∈ E and a < b, then c ∈ E whenever a < c < b? (3) If (X, d) is connected and f : X → R is a continuous function such that |f (x)| = 1 for all x in X, show that f must be constant. 9 (Urysohn's Lemma). If A and B are two disjoint closed subsets of X, then there is a continuous function f : X → R having the following properties: (a) 0 ≤ f (x) ≤ 1 for all x in X; (b) f (x) = 0 for all x in A; (c) f (x) = 1 for all x in B. Proof. Define f : X → R by f(x) = dist(x, A) / (dist(x, A) + dist(x, B)), which is well defined since the denominator never vanishes. It is easy to check that f has the desired properties. 10.
If F is a closed subset of X and G is an open set containing F, then there is a continuous function f : X → R such that 0 ≤ f (x) ≤ 1 for all x in X, f (x) = 1 when x ∈ F, and f (x) = 0 when x ∉ G. While he was there, World War I began, and he was unable to return to France. His health deteriorated further, depression ensued, and he spent the rest of his life on the shores of Lac Léman in Switzerland. It was there that he received the Chevalier de la Légion d'Honneur, and in 1922 he was elected to the Académie des Sciences. He published significant works on number theory and functions. He died in 1932 at Chambéry near Geneva. The details of this induction argument are left as Exercise 1. Social choice by Craven J.
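The function constructed in the proof of Urysohn's Lemma above, f(x) = dist(x, A) / (dist(x, A) + dist(x, B)), can be explored numerically. The sketch below uses finite samples of two disjoint closed intervals as stand-ins for the sets A and B; these sample sets are illustrative assumptions, not part of the text:

```python
def dist(x, S):
    # Distance from the point x to a (finite sample of a) set S.
    return min(abs(x - s) for s in S)

def urysohn(x, A, B):
    """f(x) = dist(x, A) / (dist(x, A) + dist(x, B)):
    equals 0 on A, equals 1 on B, and lies in [0, 1] everywhere,
    since the denominator never vanishes for disjoint closed sets."""
    dA, dB = dist(x, A), dist(x, B)
    return dA / (dA + dB)

A = [i / 100 for i in range(101)]      # sample of A = [0, 1]
B = [2 + i / 100 for i in range(101)]  # sample of B = [2, 3]
values = urysohn(0.5, A, B), urysohn(2.5, A, B), urysohn(1.5, A, B)  # (0.0, 1.0, 0.5)
```

A point of A gives numerator 0, a point of B makes numerator and denominator coincide, and the midpoint 1.5 is equidistant from both sets, so the three checks above return 0, 1, and 1/2 respectively.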
\begin{document} \newcommand{{\mathcal{X}}}{{\mathcal{X}}} \newcommand{{\mathcal{U}}}{{\mathcal{U}}} \newcommand{{\mathcal{W}}}{{\mathcal{W}}} \newcommand{{\mathcal{I}}}{{\mathcal{I}}} \newcommand{{\mathcal{C}}}{{\mathcal{C}}} \newcommand{(UW)\xspace}{(UW)\xspace} \newcommand{(UA)\xspace}{(UA)\xspace} \newcommand{(UO)\xspace}{(UO)\xspace} \newcommand{Uniform $\pmb{w}$}{Uniform $\pmb{w}$} \newcommand{Uniform $\alpha$}{Uniform $\alpha$} \newcommand{Uniform orness}{Uniform orness} \newcommand{(RvR)\xspace}{(RvR)\xspace} \newcommand{(PvP)\xspace}{(PvP)\xspace} \newcommand{(OvP)\xspace}{(OvP)\xspace} \title{A Preference Elicitation Approach for the Risk-Averse Ordered Weighted Averaging Criterion using Solution Choice Observations} \author{Werner Baak\thanks{Corresponding author. Email: [email protected]} } \author{Marc Goerigk} \author{Michael Hartisch} \affil{Network and Data Science Management, University of Siegen,\\Unteres Schlo{\ss} 3, 57072 Siegen, Germany} \date{} \maketitle \begin{abstract} Decisions under uncertainty or with multiple objectives usually require the decision maker to formulate a preference regarding risks or trade-offs. If this preference is known, the ordered weighted averaging (OWA) criterion can be applied to aggregate scenarios or objectives into a single function. Formulating this preference, however, can be challenging, as we need to make explicit what is usually only implicit knowledge. We explore an optimization-based method of preference elicitation for a risk-averse decision maker to identify appropriate non-increasing OWA weights. We follow a data-driven approach, assuming the existence of observations, where the decision maker has chosen the preferred solution, but otherwise remains passive during the elicitation process. We then use these observations to determine the underlying preference by finding the preference vector that is at minimum distance to the polyhedra of feasible vectors for each of the observations. 
Using our optimization-based model, weights are determined by solving an alternating sequence of linear programs and standard OWA problems. Numerical experiments show that our passive elicitation method compares well against actively having to conduct pairwise comparisons and performs particularly well when there are inconsistencies in the decision maker's choices.\end{abstract} \noindent\textbf{Keywords:} multiple criteria analysis; decision making under uncertainty; preference elicitation; ordered weighted averaging \noindent\textbf{Acknowledgements:} Supported by the Deutsche Forschungsgemeinschaft (DFG) through grant GO 2069/2-1. \section{Introduction} Decision making is a ubiquitous challenge, where often multiple conflicting objectives or the consequences over multiple scenarios need to be taken into account \citep{ehrgott}. In such settings, a popular approach is to use an aggregation function to combine several values into a single one. The Ordered Weighted Average (OWA) operator is one such method \citep{yager1988ordered}. Since its inception, it has been widely studied and applied in settings as diverse as fuzzy modeling \citep{o1988aggregating,yager1998including}, location planning \citep{malczewski2006ordered}, financial decision-making problems \citep{merigo2009induced,merigo2011induced}, geographic information system based site planning \citep{zabihi2019gis} or risk assignment \citep{chang2011evaluating}; see also the survey by \cite{emrouznejad2014ordered}. The idea of the OWA function is to take a vector of values as input, sort this vector from largest to smallest value, and to calculate the scalar product of this sorted vector with a weight vector. Hence, the weights are assigned to ordered values and can be used to place emphasis on high, low or mid-ranged inputs. These weights should represent the decision maker's preferences.
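As a concrete illustration of this aggregation step, consider the following minimal Python sketch; the input values and the weight vector are invented for illustration and are not taken from the paper:

```python
def owa(values, w):
    """Ordered weighted average: sort the inputs from largest to
    smallest, then take the scalar product with the weight vector w."""
    assert abs(sum(w) - 1.0) < 1e-9  # weights are assumed normalized
    ordered = sorted(values, reverse=True)
    return sum(wi * vi for wi, vi in zip(w, ordered))

# Non-increasing weights put more emphasis on the worst (largest) values.
w = [0.5, 0.3, 0.2]
result = owa([4.0, 1.0, 7.0], w)  # 0.5*7 + 0.3*4 + 0.2*1 = 4.9
```

Because the weights act on ordered positions rather than on fixed coordinates, the result is invariant under permutations of the input vector.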
Solving problems with the OWA operator is usually more challenging than solving their single-criterion counterparts (usually called the nominal counterpart in robust optimization), where a single objective function is given. Different solution methods have been developed, including linear programming (LP) or mixed-integer linear programming (MILP) models, see, e.g., \cite{ogryczak2003solving} and \cite{ogryczak2012milp}. In \cite{chassein2015alternative}, a compact reformulation of the problem based on linear programming duality was established. The complexity of discrete decision making problems with the OWA criterion has been studied as well, see \cite{kasperski2015combinatorial} and \cite{chassein2020approximating}, where approximation algorithms have been developed, or \cite{Galand20121540}, where exact algorithms for the spanning tree problem are considered. For the purpose of such theoretical analysis, it is usually assumed that the preference weights are given by the decision maker. In practice, however, these weights are not simply given, but must first be determined. For this purpose, several methods have been developed, see the surveys by \cite{xu2005overview} and \cite{liu2011review}. The idea of sample learning methods is to use empirical data to fit OWA weights. Given a set of observations consisting of alternatives and their aggregated values, an optimization model is used to fit preference vectors that satisfy these observations as far as possible, see, e.g., \cite{yager1996quantifier} and \cite{garcia2011generating}. Using a similar idea, \cite{ahn2008preference} assumes that pairwise comparisons between solutions are given to indicate preferences and then uses an optimization model to find preference vectors that adhere to these comparisons.
More methods are introduced by \cite{bourdache2017anytime} and \cite{bourdache2019active}, which cover incremental elicitation or active learning strategies, where again knowledge of pairwise preferences is presupposed. The knowledge of past choices in sets or queries on sets of solutions has been assumed in several previous works on elicitation, where the user or decision maker is (semi-)actively taking actions during the elicitation process. \cite{dragone2018constructive} propose a Choice Perceptron for learning user preferences, whereas \cite{zintgraf2018ordered} propose ordered preference elicitation strategies based on ranking and clustering. In \cite{viappiani2020equivalence} a recommender system is introduced, exploring the connection between providing good recommendations and asking informative choice queries, in order to compute optimal recommendation sets as well as query sets. Many more methods exist to elicit preference weights for OWA criteria. We briefly summarize some of these. \cite{benabbou2015minimax} use a search tree model for regret-based optimization, including OWA, where a minimization of max regret is performed. \cite{adam2021possibilistic} provide a robust approach for identifying preferences, as well as an error detection method for wrong preferences, in which OWA models are used to test against the true model. \cite{labreuche2015extension} use binary alternatives (ordinal information) as an extension to the so-called MACBETH method to elicit an OWA operator, where the given weights are trapezoidal but can be weakened to a convex fuzzy set, thus taking account of inconsistencies. \cite{kim2018implicit} elicit the decision maker's preferences by comparing answers given with extreme or arbitrary options and, based on the results, adding constraints to the OWA weights. \cite{wang2007aggregating} make use of the orness degree to determine the weights by analyzing the decision maker's optimism level.
Further preference elicitation methods include maximal entropy methods \citep{fuller2001analytic}, data-driven approaches \citep{filev1994learning}, introduction of a weight generating function \citep{filev1998issue,yager2016some} or using kernel density estimations \citep{lin2020determine}. Extensions to the OWA operator have been studied as well. These include OWAWA (Ordered Weighted Average Weighted Average), WOWA (Weighted Ordered Weighted Average) and IOWA (Induced Ordered Weighted Average). The latter variant enables the possibility to reorder variables in a more complex way. Building on that, \cite{merigo2010fuzzy} proposes the induced generalized ordered weighted averaging (IGOWA) operator. This aggregation operator combines the characteristics of the generalized OWA and the induced OWA operator. The IEOWAD (Induced Euclidean Ordered Weighted Averaging Distance) approach by \cite{merigo2011induced} parameterizes distance measures using an IOWA operator, resulting in a modality which allows more complex attitudinal characters of the decision maker to be considered and which may lead to different conclusions. A particularly important class of OWA operators makes use of or-like, non-increasing preference weights \citep{yager1993families}. This means that the decision maker is risk-averse, by putting more emphasis on objectives that perform worse than those that perform well. This perspective contains well-known special cases such as worst-case optimization \citep{aissi2009min} or the conditional value at risk \citep{bertsimas2009constructing}. The main advantage of non-increasing weights is that they allow for improved problem reformulations \citep{chassein2015alternative} that can be applied in our context. At the same time, such risk-averse preference vectors are widely studied, see, e.g., \cite{ogryczak2003minimizing,ogryczak2003solving,Galand20121540,kasperski2015combinatorial} and many more.
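The "or-likeness" of a weight vector can be quantified by Yager's standard orness degree; the sketch below implements that formula, with sample weight vectors that are illustrative only:

```python
def orness(w):
    """Yager's orness degree of an OWA weight vector:
    1 for the pure maximum, 0 for the pure minimum, 0.5 for the mean."""
    n = len(w)
    return sum((n - i) * wi for i, wi in enumerate(w, start=1)) / (n - 1)

def is_or_like(w, tol=1e-9):
    # Non-increasing weight vectors always have orness >= 0.5.
    return orness(w) >= 0.5 - tol
```

For instance, orness([0.5, 0.3, 0.2]) = 0.65, while the uniform vector has orness 0.5, marking the boundary between or-like and and-like behaviour.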
In this paper we propose a model which enables the determination of a risk-averse decision maker's preferences in the form of non-increasing OWA weights using a passive elicitation method. This method uses observed choices and does not require the decision maker to actively take (corrective) actions or interfere during the elicitation process. The observations used consist of information that was available to the decision maker; that is, for a given decision making problem, we only require the preferred solution and the underlying data, and in particular no aggregated values have to be assigned to alternatives. We propose a novel optimization-based model to determine OWA weights, which is solved by a sequence of MILPs. The model aims at finding a preference vector that is at minimum distance to the polyhedra of feasible OWA vectors for the observations. This passive approach utilizes historical observations, which are easily accessible in most applications today. Our proposed approach avoids the need for time-consuming and often costly interviews required to obtain pairwise comparisons or aggregated values of alternatives, making it an efficient and cost-effective decision support system. As a passive elicitation method, it is also less intrusive and more consumer-friendly, which can lead to increased user acceptance and adoption. This approach is particularly beneficial for decision-making situations where similar choices must be made repeatedly and historical observations are available. Examples of applications for this approach include purchasing and selection decisions, routing problems and scheduling. We compare the properties of this approach to optimization models based on pairwise preference comparisons in numerical experiments. The evaluation of the approaches is carried out by measuring the average distance between the true OWA weights and the estimated weights. 
The paper is organized as follows: In Section~\ref{sec:def} we define the OWA problem and the notation. We present our optimization approach for preference elicitation and discuss modeling alternatives in Section~\ref{sec:opt}. In Section~\ref{sec:exp} we discuss the computational experiments, considering setup, results and insights. Finally, in Section~\ref{Conclusion} we conclude the paper and discuss possible directions for further research. \section{OWA Problem Definition}\label{sec:def} In this section we give a formal description of the OWA criterion and related optimization problems. Throughout this paper, we use the notation $[K]$ to denote a set $\{1,\ldots,K\}$, write vectors in bold, and drop the transpose symbol for vector multiplication if the context is clear. We consider an optimization problem over some set of feasible solutions ${\mathcal{X}}\subseteq\mathbb{R}^n$ with a linear objective function $\pmb{c}\pmb{x}$ that we would like to minimize. We explore the case where there is more than one cost coefficient vector that is relevant for the decision making process. This can be because there are multiple relevant objectives, or due to uncertainty. Note that here we assume---as is frequently done in the literature (see e.g.~\cite{kasperski2016robust})---that every criterion can be described via a linear function, which is particularly the case in a setting with an uncertain, linear cost objective function. We denote by $\{\pmb{c}^1,\ldots,\pmb{c}^K\}$ the set of $K$ cost coefficient vectors that we would like to consider simultaneously and let $C\in \mathbb{R}^{K\times n}$ be the respective cost matrix. This means that for a given decision vector $\pmb{x}\in{\mathcal{X}}$, there are $K$ objective values $\pmb{c}^1\pmb{x},\ldots,\pmb{c}^K\pmb{x}$. Here it should be noted that these cost coefficients (or utilities) need to be proportionate to each other, i.e.~there must be a commensurate standardization. 
The purpose of the OWA criterion is to aggregate these objective values into a single value. To this end, we require a preference vector $\pmb{w}\in[0,1]^K$ with $\sum_{k\in[K]} w_k = 1$. We write ${\mathcal{W}} = \{ \pmb{w}\in[0,1]^K : \sum_{k\in[K]} w_k = 1\}$. The purpose of $w_k$ is to assign an importance to the $k$th-largest objective value for $k\in[K]$. Let $\pi$ be a permutation that sorts the $K$ objective values from largest to smallest, i.e., $\pi$ is such that $\pmb{c}^{\pi(1)}\pmb{x} \ge \pmb{c}^{\pi(2)}\pmb{x} \ge \ldots \ge \pmb{c}^{\pi(K)}\pmb{x}$. Then, the OWA operator is defined as \[ \OWA_{\pmb{w}}(\pmb{x},C) = \sum_{k\in[K]} w_k (\pmb{c}^{\pi(k)} \pmb{x}) \] Note that the permutation $\pi$ depends on the solution $\pmb{x}$. The OWA operator contains several well-known decision making criteria as special cases. By setting $\pmb{w}=(1,0,\ldots,0)$, all weights are assigned to the largest objective value, which means that OWA becomes the worst-case criterion (in our minimization setting). On the other hand, setting $\pmb{w}=(0,\ldots,0,1)$ gives the best-case criterion. Additionally, $\pmb{w}=(\alpha,0,\ldots,0,1-\alpha)$ for some $\alpha\in[0,1]$ corresponds to the Hurwicz criterion, while $\pmb{w}=(1/K,\ldots,1/K)$ gives the average value. A special case of OWA operators uses risk-averse preference vectors. Risk-averse preference vectors play an important role in practice, reflecting typical preferences of decision makers who assign a proportionally larger importance to bad outcomes than to good outcomes. Such preference vectors also have advantages from a modeling perspective. Let us define ${\mathcal{W}}' = \{\pmb{w}\in{\mathcal{W}} : w_1 \ge w_2 \ge \ldots \ge w_K\}$. 
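The definition above is straightforward to evaluate; the following minimal Python sketch (the function and variable names are our own illustration, not part of the paper's implementation) computes the OWA value of a solution and recovers the special cases just mentioned.

```python
def owa(w, C, x):
    """Ordered weighted average of the K objective values C x for decision x."""
    # objective value of x in each of the K scenarios
    values = [sum(ci * xi for ci, xi in zip(row, x)) for row in C]
    values.sort(reverse=True)          # k-th entry = k-th largest objective value
    return sum(wk * vk for wk, vk in zip(w, values))

# a small cost matrix with K = 3 scenarios and n = 4 items
C = [[1, 6, 8, 4], [6, 7, 8, 3], [9, 3, 2, 8]]
x = (1, 1, 1, 0)                       # sorted objective values: (21, 15, 14)

worst = owa((1, 0, 0), C, x)           # worst-case criterion: 21
best = owa((0, 0, 1), C, x)            # best-case criterion: 14
avg = owa((1/3, 1/3, 1/3), C, x)       # average criterion: 50/3
```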
Using a preference from ${\mathcal{W}}'$, the permutation $\pi$ that defines OWA is also a permutation that maximizes the objective function, that is, we have \[ \OWA_{\pmb{w}}(\pmb{x},C) = \max_{\pi\in\Pi_K} \sum_{k\in[K]} w_k (\pmb{c}^{\pi(k)} \pmb{x}) \] where $\Pi_K$ denotes the set of permutations of vectors of size $K$. As discussed in \cite{chassein2015alternative}, using the dual of the maximization problem allows us to reformulate the problem of minimizing OWA as follows: \begin{align} \min_{\pmb{x}\in{\mathcal{X}}} \OWA_{\pmb{w}}(\pmb{x},C) = \min\ & \sum_{k\in[K]} \alpha_k + \beta_k \label{eq:owa1} \\ \text{s.t. } & \alpha_j + \beta_k \ge \sum_{i\in[n]} w_j c^k_i x_i & \forall j,k\in[K] \label{eq:owa2}\\ & \pmb{x}\in{\mathcal{X}} \label{eq:owa3}\\ & \pmb{\alpha},\pmb{\beta}\in\mathbb{R}^K \label{eq:owa4} \end{align} In particular, if ${\mathcal{X}}$ defines an (integer) linear set of feasible solutions, minimizing OWA becomes an (integer) linear optimization problem as well. \section{An Optimization Model for Preference Elicitation}\label{sec:opt} \subsection{Basic Model and Solution Method} \label{sec:basic} We assume that the preference vector $\pmb{w}$ is not known. Instead, we would like to identify a suitable vector $\pmb{w}\in{\mathcal{W}}'$ based on observations of how a decision maker chooses an alternative. That is, we assume that we are given pairs $(C^1,\pmb{x}^1),\ldots,(C^S,\pmb{x}^S)$ of $S$ historic decisions. The task of preference elicitation is to identify a suitable vector $\pmb{w}$ that can explain the choice of solutions for each observation. Note that we only assume knowledge of past situations and corresponding solution choices. Contrary to active elicitation approaches, we do not require pairwise comparisons between solutions, which may only be available through an interview process. 
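The equality between the sorted definition of $\OWA$ and the inner maximization over permutations, which underlies the reformulation above, only holds for non-increasing weights. A small brute-force check (our own illustration, not the paper's code) makes this visible:

```python
from itertools import permutations

def owa_sorted(w, values):
    # OWA by definition: weights paired with objective values sorted largest first
    return sum(wk * v for wk, v in zip(w, sorted(values, reverse=True)))

def owa_max_perm(w, values):
    # inner maximization over all permutations of the objective values
    return max(sum(wk * values[p] for wk, p in zip(w, perm))
               for perm in permutations(range(len(values))))

vals = (20, 16, 11)
w_dec = (0.5, 0.3, 0.2)   # non-increasing: both expressions coincide
w_inc = (0.2, 0.3, 0.5)   # increasing: the maximization overestimates
```

For `w_dec` both expressions give the same value (a consequence of the rearrangement inequality), while for `w_inc` the maximization strictly exceeds the sorted definition, so the reformulation is only valid over ${\mathcal{W}}'$.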
The underlying idea is to define for each observation $s\in[S]$ the set of preference vectors that can explain this observation, and to find a vector $\pmb{w}$ that is as close to each such set as possible. That is, we define the sets \[ \opt_s = \left\{ \pmb{w} \in {\mathcal{W}}' : \pmb{x}^s \in \argmin_{\pmb{x}\in{\mathcal{X}}} \OWA_{\pmb{w}}(\pmb{x},C^s) \right\}\] which contain those weight vectors for which $\pmb{x}^s$ is optimal for the corresponding OWA problem, and propose to solve \begin{equation} \min \left\{ \sum_{s\in[S]} D(\pmb{w}, \opt_s) : \pmb{w}\in{\mathcal{W}}' \right\} \tag{Pref}\label{pref} \end{equation} where $D: [0,1]^K \times 2^{[0,1]^K} \to \mathbb{R}_+$ is a suitable distance measure between a vector and a set, given as the distance to the closest element of the set, i.e., $D(x,Y) = \min_{y\in Y} d(x,y)$ for a distance metric $d$. To clarify this approach, we first present an example. Consider a decision making problem with ${\mathcal{X}} = \{ \pmb{x}\in\{0,1\}^4 : x_1 + x_2 + x_3 + x_4 = 3\}$, i.e., we need to select three out of $n=4$ given items. As there is only one observation given, $S=1$ and we drop the index $s$ for simplicity. For this example problem, there are only four solutions: $\pmb{x}^1 = (1,1,1,0)$, $\pmb{x}^2 = (1,1,0,1)$, $\pmb{x}^3 = (1,0,1,1)$ and $\pmb{x}^4 = (0,1,1,1)$. We assume that there are $K=3$ scenarios. For a cost matrix \[ C = \begin{pmatrix} 1 & 6 & 8 & 4 \\ 6 & 7 & 8 & 3 \\ 9 & 3 & 2 & 8 \end{pmatrix}\in\mathbb{R}^{K\times n} \] we are told that the decision maker prefers the solution $\pmb{x}^1 = (1,1,1,0)$. What does this imply for the underlying preference vector? Calculating the sorted vectors of objective values for the four solutions gives $(21,15,14)$, $(20,16,11)$, $(19,17,13)$ and $(18,18,13)$, respectively. From the choice of $\pmb{x}^1$ as the preferred solution, we can deduce that its OWA value is not larger than the OWA value of any other solution. 
This gives three constraints on the $\pmb{w}$ vector of the form $\OWA_{\pmb{w}}(\pmb{x}^1,C) \le \OWA_{\pmb{w}}(\pmb{x}^i,C)$ for $i=2,3,4$. These are equivalent to the following system of linear inequalities. \begin{align} (21-20)w_1 + (15-16)w_2 + (14-11)w_3 = \phantom{2}w_1 - \phantom{2}w_2 + 3w_3 &\le 0 \label{ex:1} \\ (21-19)w_1 + (15-17)w_2 + (14-13)w_3 = 2w_1 - 2w_2 + \phantom{3}w_3 &\le 0 \label{ex:2}\\ (21-18)w_1 + (15-18)w_2 + (14-13)w_3 = 3w_1 - 3w_2 + \phantom{3}w_3 &\le 0 \label{ex:3} \end{align} Substituting $w_3 = 1 - w_1 - w_2$ allows us to plot these inequalities in two dimensions, see Figure~\ref{fig:example}, where the three black lines indicate points where each of the inequalities in (\ref{ex:1}-\ref{ex:3}) is fulfilled with equality. \begin{figure} \caption{Areas for preference vectors in the example problem.} \label{fig:example} \end{figure} The light gray area contains those vectors $(w_1,w_2)\in[0,1]^2$ that fulfill (\ref{ex:1}-\ref{ex:3}). As we only consider non-increasing weights to reflect a risk-averse decision maker, we furthermore require that $w_1 \ge w_2 \ge w_3$. The dark gray area hence indicates the set ${\mathcal{W}}'$. In this example, there is only one element in ${\mathcal{W}}'$ that also fulfills the system of inequalities (\ref{ex:1}-\ref{ex:3}), which is $\opt = \{ (1/2,1/2,0) \}$. In other words, $\opt$ contains the only feasible preference vector that leads to the observed choice of $\pmb{x}^1$. It is noteworthy that this uniqueness is only a characteristic of this handcrafted example. In general, $\opt_s$ is a bounded polyhedron if the number of feasible solutions $|{\mathcal{X}}|$ is finite. Hence, when additional knowledge or expectations regarding the resulting preference vector are available (e.g.~high or low orness), a lexicographic approach can be used to further restrict the set of preference vectors. For multiple observations $S>1$, it is possible that the intersection $\cap_{s\in[S]} \opt_s$ is empty. 
This may happen if there is no underlying ``true'' value for $\pmb{w}$ that the decision maker uses; decisions may be made intuitively, rather than systematically. For this reason, we would like to find a value of $\pmb{w}$ that is as close as possible to those sets $\opt_s$ that can explain each observation. It may also happen that for some $s\in[S]$, $\opt_s = \emptyset$, i.e., there is no possible risk-averse preference vector that can explain a specific choice. In this case, Problem~\eqref{pref} needs to be slightly modified, which is explained later in this section. Also note that while \eqref{pref} is defined using the same set ${\mathcal{X}}$ for each observation $s$, this assumption might also be relaxed to allow different sets of feasible solutions for each decision making observation. We only require that there is the same number of scenarios $K$ for each problem, so that vectors $\pmb{w}$ are of the same dimension. We now discuss how to solve Problem~\eqref{pref}. For each $\pmb{x}\in{\mathcal{X}}$, let us denote by $a^s_1(\pmb{x}),\ldots,a^s_K(\pmb{x})$ the objective values $C^s\pmb{x}$, sorted from largest to smallest value. Using this notation, we have \[ \opt_s = \left\{ \pmb{w} \in {\mathcal{W}}' : \sum_{k\in[K]} w_k a^s_k(\pmb{x}^s) \le \sum_{k\in[K]} w_k a^s_k(\pmb{x}) \quad \forall \pmb{x}\in{\mathcal{X}} \right\} \] This allows us to rewrite Problem~\eqref{pref} as follows: \begin{align} \min\ & \sum_{s\in[S]} d(\pmb{w},\pmb{w}^s) \label{mod:1} \\ \text{s.t. } & \sum_{k\in[K]} w^{s}_k a^s_k(\pmb{x}^s) \le \sum_{k\in[K]} w^{s}_k a^s_k(\pmb{x}) & \forall s\in[S], \pmb{x}\in{\mathcal{X}} \label{mod:2} \\ & \sum_{k\in[K]} w^s_k = 1 & \forall s\in[S] \label{mod:3}\\ & \sum_{k\in[K]} w_k = 1 \label{mod:4}\\ & w^s_1 \ge w^s_2 \ge \ldots \ge w^s_K \ge 0 & \forall s\in[S] \label{mod:5}\\ & w_1 \ge w_2 \ge \ldots \ge w_K \ge 0 \label{mod:6} \end{align} Note that the constraints~(\ref{mod:2}-\ref{mod:6}) are linear, as the values $a^s_k(\pmb{x})$ can be precomputed. 
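Membership in $\opt_s$ is easy to test once the sorted values $a^s_k(\pmb{x})$ are precomputed. As an illustration (our own sketch, not the implementation used in the experiments), a simple grid search over ${\mathcal{W}}'$ recovers the unique explaining vector $(1/2,1/2,0)$ from the worked example above:

```python
# sorted objective vectors a_k(x) of the four feasible solutions in the example
A = {1: (21, 15, 14), 2: (20, 16, 11), 3: (19, 17, 13), 4: (18, 18, 13)}

def in_opt(w, chosen=1):
    """True if the chosen solution minimizes the OWA value under w."""
    val = lambda i: sum(wk * ak for wk, ak in zip(w, A[i]))
    return all(val(chosen) <= val(i) + 1e-9 for i in A)

# enumerate W' on a grid with step 1/N and collect all explaining vectors
N = 100
explaining = [(i / N, j / N, (N - i - j) / N)
              for i in range(N + 1) for j in range(N + 1 - i)
              if i >= j >= N - i - j and in_opt((i / N, j / N, (N - i - j) / N))]
```

On this grid, `explaining` contains exactly one vector, $(0.5, 0.5, 0)$, in line with the analytical derivation from inequalities (\ref{ex:1}-\ref{ex:3}).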
However, depending on the number of elements in ${\mathcal{X}}$, there might be infinitely or exponentially many constraints of type~\eqref{mod:2}. Considering the distance $d$, several choices result in tractable optimization problems. Using the 1-norm $d_1(\pmb{w},\pmb{w}^s)=\vert\vert \pmb{w}-\pmb{w}^s \vert\vert_1 = \sum_{k\in[K]}|w_k-w_k^s|$, objective function~\eqref{mod:1} can be linearized by introducing new variables $d^s_k\ge 0$ with constraints $-d^s_k \le w_k - w^s_k \le d^s_k$ for all $s\in[S]$ and $k\in[K]$. By minimizing $\sum_{s\in[S]} \sum_{k\in[K]} d^s_k$, we hence minimize the sum of absolute differences in each component. It is also possible to use weights on positions $k\in[K]$, e.g., one unit of difference in $w_1$ may be more important than one unit of difference in $w_K$. Alternatively, also the $\infty$-norm $d_\infty(\pmb{w},\pmb{w}^s)=\vert\vert \pmb{w}-\pmb{w}^s \vert\vert_\infty =\max_{k\in[K]}\vert w_k-w_k^s \vert $ can be linearized, while using the 2-norm $d_2(\pmb{w},\pmb{w}^s)=\vert\vert \pmb{w}-\pmb{w}^s \vert\vert_2 =\sqrt{\sum_{k\in[K]}(w_k-w_k^s )^2}$ results in a convex quadratic optimization problem. We now discuss how to treat the large number of constraints of type~\eqref{mod:2}. We propose to use an iterative solution procedure that is summarized in Figure~\ref{fig:alg}. \begin{figure} \caption{Iterative solution algorithm for \eqref{pref}.} \label{fig:alg} \end{figure} We begin with a finite subset of alternatives ${\mathcal{X}}' \subseteq {\mathcal{X}}$, for example, we may set ${\mathcal{X}}' = \{\pmb{x}^1,\ldots,\pmb{x}^S\}$. We solve the optimization problem \eqref{pref}, where we restrict constraints~\eqref{mod:2} to ${\mathcal{X}}'$. We refer to this problem as \ref{pref}(${\mathcal{X}}'$). This is a linear program, given that $d$ can be linearized. The result is a candidate preference vector $\pmb{w}\in{\mathcal{W}}'$, along with preference vectors $\pmb{w}^s$ for each $s\in[S]$. 
We then consider each $s\in[S]$ separately to check if indeed it holds that $\pmb{w}^s \in\opt_s$. To this end, we need to compare the value $\OWA_{\pmb{w}^s}(\pmb{x}^s,C^s)$ with $\min_{\pmb{x}\in{\mathcal{X}}} \OWA_{\pmb{w}^s}(\pmb{x},C^s)$. If it turns out that there exists a solution that results in a better objective value than $\pmb{x}^s$ under $\pmb{w}^s$, we add this solution to ${\mathcal{X}}'$ and repeat the process. Note that this solution method ends after a finite number of iterations if ${\mathcal{X}}$ is finite. This holds, e.g., if ${\mathcal{X}}\subseteq\{0,1\}^n$ is the set of feasible solutions for a combinatorial optimization problem. In this case, we have found an optimal solution to \eqref{pref}. Alternatively, we may stop the method after a fixed number of iterations is reached or if the difference between consecutive solutions for $\pmb{w}$ becomes sufficiently small. To conclude the description of our approach, we consider the case that $\opt_s=\emptyset$ for some $s\in[S]$, i.e., the decision maker chooses a solution that is optimal with respect to no preference vector in ${\mathcal{W}}'$. This means that some model \ref{pref}(${\mathcal{X}}'$) that we try to solve in the iterative procedure becomes infeasible. In that case, we can follow a lexicographic approach to consider those preferences $\pmb{w}^s$ that remain as close to optimality as possible: We first minimize the violation of Constraint~\eqref{mod:2} (which so far was supposed to be zero) and then find the solution that minimizes the objective function subject to this smallest possible violation. To this end, we solve the optimization problem \begin{equation} \infeas_s := \min_{\pmb{w}\in{\mathcal{W}}'} \max_{\pmb{x}\in{\mathcal{X}}} \Big( \OWA_{\pmb{w}}(\pmb{x}^s,C^s) - \OWA_{\pmb{w}}(\pmb{x},C^s) \Big) \label{eq:inf} \end{equation} to calculate the smallest possible violation of the corresponding constraint~\eqref{mod:2}. 
We then replace the constraint with \[ \sum_{k\in[K]} w_k^{s} a^s_k(\pmb{x}^s) \le \sum_{k\in[K]} w_k^{s} a^s_k(\pmb{x}) + \infeas_s \quad \forall \pmb{x}\in{\mathcal{X}} \] to ensure feasibility of Problem~\eqref{pref}. To solve the problem in~\eqref{eq:inf}, we can use an iterative procedure analogously to the method described in Figure~\ref{fig:alg}. Alternatively, $\infeas_s$ is used as an additional non-negative variable in Problem~\eqref{pref}, where we modify the objective to additionally minimize $\sum_{s\in[S]} \infeas_s$ with a large weight to give it priority over minimizing the sum of distances in the preference vectors. Note, however, that the existence of an appropriate weight depends on the problem structure, making this a more heuristic approach, in general. \subsection{Heuristic Compact Model} We now reconsider model~\eqref{pref} and develop a heuristic formulation for the case that ${\mathcal{X}}$ is given by a polyhedron, i.e., we assume that ${\mathcal{X}} = \{ \pmb{x} \ge 0 : A\pmb{x} \ge \pmb{b}\}$ for $A\in\mathbb{R}^{m\times n}$ and $\pmb{b}\in\mathbb{R}^m$. Recall that some combinatorial optimization problems, such as the shortest path or the minimum spanning tree problems, also allow for such linear programming formulations where binary variables are not required. Constraints~\eqref{mod:2} ensure for each $s\in[S]$ that solutions $\pmb{x}^s$ are indeed optimal, that is, they are equivalent to \[ \OWA_{\pmb{w}}(\pmb{x}^s) \le \min_{\pmb{x}\in{\mathcal{X}}} \OWA_{\pmb{w}}(\pmb{x}).\] Previously, we treated the right-hand side by constructing one constraint per solution $\pmb{x}\in{\mathcal{X}}$. As ${\mathcal{X}}$ is a polyhedron, we can use strong duality to reformulate the right-hand side as a maximization problem with the same optimal objective value. 
The dual of the OWA problem with costs $C^s$, see \eqref{eq:owa1}-\eqref{eq:owa4}, when using $\pmb{\pi}$ and $\pmb{\sigma}$ as dual variables corresponding to Constraints \eqref{eq:owa2} and \eqref{eq:owa3}, respectively, is given as: \begin{align*} \max\ & \pmb{b}^t \pmb{\sigma} \\ \text{s.t. } & \sum_{k\in[K]} \pi_{jk} = 1 & \forall j\in[K] \\ & \sum_{j\in[K]} \pi_{jk} = 1 & \forall k\in[K] \\ & A^t\pmb{\sigma} \le \sum_{j\in[K]} \sum_{k\in[K]} w^s_j \pmb{c}^{s,k} \pi_{jk} \\ &\pmb{\sigma} \ge \pmb{0} \\ &\pmb{\pi} \ge \pmb{0} \end{align*} Furthermore, by weak duality, the objective value of any feasible solution to the dual problem gives a lower bound on the optimal primal objective value, making the substitution valid not only for the optimal, but for any solution of the dual. We can thus find the following compact reformulation of model~\eqref{pref}. \begin{align*} \min\ & \sum_{s\in[S]} d(\pmb{w},\pmb{w}^s) \\ \text{s.t. } & \sum_{k\in[K]} w^s_k a^s_k(\pmb{x}^s) \le \pmb{b}^t\pmb{\sigma}^s & \forall s\in[S] \\ & \sum_{k\in[K]} \pi^s_{jk} = 1 & \forall j\in[K], s \in[S] \\ & \sum_{j\in[K]} \pi^s_{jk} = 1 & \forall k\in[K], s \in[S] \\ & A^t\pmb{\sigma}^s \le \sum_{j\in[K]} \sum_{k\in[K]} w^s_j \pmb{c}^{s,k} \pi^s_{jk} & \forall s\in[S] \\ & \pmb{w}\in{\mathcal{W}}' \\ & \pmb{w}^s \in{\mathcal{W}}' & \forall s\in[S] \\ &\pmb{\sigma}^s \ge \pmb{0} & \forall s\in[S]\\ &\pmb{\pi}^s \ge \pmb{0} & \forall s\in[S] \end{align*} Note that there remains a non-linearity in the product $w^s_j\pi^s_{jk}$, where both variables are continuous. Using McCormick envelopes, we may substitute $\tau^s_{jk} = w^s_j\pi^s_{jk}$ with the upper-bounding constraints $\tau^s_{jk} \le w^s_j$ and $\tau^s_{jk} \le \pi^s_{jk}$. As this envelope only bounds, rather than determines, the product, this is not an equivalent reformulation; the resulting model is heuristic but has the advantage of avoiding the iterative solution method. 
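To make the iterative procedure of Figure~\ref{fig:alg} concrete, the following self-contained sketch runs it on the small selection example from above, for a single observation. This is our own illustration, not the implementation used in the experiments: a coarse grid search over ${\mathcal{W}}'$ stands in for the LP master problem, and brute-force enumeration of ${\mathcal{X}}$ replaces the separation oracle.

```python
from itertools import combinations

def owa_value(w, C, x):
    vals = sorted((sum(ci * xi for ci, xi in zip(row, x)) for row in C), reverse=True)
    return sum(wk * v for wk, v in zip(w, vals))

def grid_W_prime(N):
    # coarse grid over the non-increasing simplex W' for K = 3
    for i in range(N + 1):
        for j in range(N + 1 - i):
            if i >= j >= N - i - j:
                yield (i / N, j / N, (N - i - j) / N)

def elicit(C, x_obs, X, N=20):
    """Constraint generation for one observation (grid search replaces the LP)."""
    X_prime = [x_obs]                      # initial finite subset X'
    while True:
        # master problem: some w in W' under which x_obs is optimal w.r.t. X'
        cand = [w for w in grid_W_prime(N)
                if all(owa_value(w, C, x_obs) <= owa_value(w, C, x) + 1e-9
                       for x in X_prime)]
        if not cand:
            return None                    # opt_s is empty on this grid
        w = cand[0]
        # separation oracle: brute-force minimization of OWA over all of X
        best = min(X, key=lambda x: owa_value(w, C, x))
        if owa_value(w, C, best) >= owa_value(w, C, x_obs) - 1e-9:
            return w                       # x_obs is OWA-optimal under w: done
        X_prime.append(best)               # otherwise add a violated cut

# select p = 3 out of n = 4 items; the observed choice is x^1 = (1, 1, 1, 0)
C = [[1, 6, 8, 4], [6, 7, 8, 3], [9, 3, 2, 8]]
X = [tuple(1 if i in S else 0 for i in range(4)) for S in combinations(range(4), 3)]
w_hat = elicit(C, (1, 1, 1, 0), X)
```

On this instance the loop terminates after a single cut and returns $(0.5, 0.5, 0)$, matching the unique element of $\opt$ derived analytically in the example.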
\subsection{Alternative Model Formulation} In the preference elicitation model~\eqref{pref}, we minimize the distance in preference vectors $\pmb{w}$ and thus focus on finding preference vectors that reflect the OWA values of the observations. Alternatively, one might be interested in a preference vector that mimics the observations to the extent of reproducing similar solutions, rather than obtaining solutions with similar OWA values. In particular, a slight modification of $\pmb{w}$ can yield a significantly changed solution, while the change in the OWA value itself remains small. Conversely, it is possible that a very different preference vector results in only slightly different optimal solutions. Hence, if it is more important to accurately mimic observed solutions rather than to obtain solutions of similar OWA score, we may redefine our model as follows. The set of optimal solutions for a given preference vector $\pmb{w}$ is denoted as \[ \opt'_s(\pmb{w}) = \Big\{ \pmb{y}\in{\mathcal{X}}: \OWA_{\pmb{w}}(\pmb{y},C^s) \le \OWA_{\pmb{w}}(\pmb{x},C^s) \ \forall \pmb{x}\in{\mathcal{X}} \Big\}, \] that is, while $\opt_s$ is a subset of $\mathbb{R}^K$, we now consider a subset of the set of feasible solutions ${\mathcal{X}}$. The modified preference elicitation model now asks for a vector $\pmb{w}\in{\mathcal{W}}'$ such that the sets $\opt'_s(\pmb{w})$ are close to the given solutions $\pmb{x}^s$ in a distance metric $d':{\mathcal{X}} \times {\mathcal{X}} \to \mathbb{R}_+$. 
We thus define the following problem: \begin{equation} \min\ \left\{ \sum_{s\in[S]} d'(\pmb{y}^s,\pmb{x}^s) : \pmb{w}\in{\mathcal{W}}',\ \pmb{y}^s\in \opt'_s(\pmb{w})\ \forall s\in[S] \right\} \tag{Pref'} \label{altpref} \end{equation} To solve this problem, we need a tractable formulation of constraints \[ \pmb{y}^s\in \opt'_s(\pmb{w})\quad \forall s\in[S], \] which are equivalent to \begin{equation} \sum_{k\in[K]} w_ka^s_k(\pmb{y}^s) \le \sum_{k\in[K]} w_k a^s_k(\pmb{x})\quad \forall \pmb{x}\in{\mathcal{X}}, s\in[S]. \label{auxcon} \end{equation} The values $a^s_1(\pmb{y}^s) \ge \ldots \ge a^s_K(\pmb{y}^s)$ give a worst-case sorting of the objective values of $\pmb{y}^s$ in observation $s$. Hence, we must ensure that the left-hand side value of equation~\eqref{auxcon} is not underestimated. Using the assumption that $\pmb{w}\in{\mathcal{W}}'$, we have the equivalent formulation \[ \max_{\pi\in\Pi_K} \sum_{k\in[K]} w_{\pi(k)} \pmb{c}^{s,\pi(k)} \pmb{y}^s \le \sum_{k\in[K]} w_k a^s_k(\pmb{x})\quad \forall \pmb{x}\in{\mathcal{X}}, s\in[S]. \] Applying the same dualization technique as in the $\OWA$ reformulation in (\ref{eq:owa1}-\ref{eq:owa4}), we introduce new variables $\pmb{\alpha}^s$ and $\pmb{\beta}^s$ to reformulate Problem~\eqref{altpref} as follows. \begin{align} \min\ & \sum_{s\in[S]} d'(\pmb{y}^s,\pmb{x}^s) \label{eq:var1} \\ \text{s.t. } & \sum_{k\in[K]} \alpha^s_k + \beta^s_k \le \sum_{k\in[K]} w_k a^s_k(\pmb{x}) & \forall \pmb{x}\in{\mathcal{X}}, s\in[S] \label{eq:var2} \\ & \alpha^s_j + \beta^s_k \ge \sum_{i\in[n]} w_j c^{s,k}_i y^s_i & \forall s\in[S], j,k\in[K] \\ & \pmb{y}^s \in{\mathcal{X}} & \forall s\in[S] \label{eq:var3} \\ & \sum_{k\in[K]} w_k = 1 \label{eq:var4} \\ & w_1 \ge w_2 \ge \ldots \ge w_K \ge 0 \label{eq:var5} \end{align} As before, values $a^s_k(\pmb{x})$ can be precomputed for a given $\pmb{x}$. 
To treat the potentially exponential or infinite number of constraints in~\eqref{eq:var2}, an iterative solution method analogous to the one presented in Section~\ref{sec:basic} can be applied. If ${\mathcal{X}}$ is the set of feasible solutions of a combinatorial decision making problem, then we can linearize products $w_jy^s_i$ by introducing additional variables $\tau^s_{ji}\ge 0$ with $\tau^s_{ji} \ge w_j + y^s_i - 1$. In this case, a natural choice for a distance measure $d'$ may be the Hamming distance, where we simply count the number of differing entries. In our notation, this means we minimize \[ d'(\pmb{y}^s,\pmb{x}^s) = \sum_{i\in[n]: x^s_i = 1} (1-y^s_i) + \sum_{i\in[n] : x^s_i = 0} y^s_i \] which implies that Problem~\eqref{altpref} can be solved through a sequence of mixed-integer programming formulations. In Appendix \ref{App::ExampleAlternative} we provide an example to showcase the difference between \eqref{pref} and \eqref{altpref}. \section{Computational Experiments} \label{sec:exp} \subsection{Setup} The purpose of these experiments is to evaluate the degree to which decision maker preferences can be identified based on observations using the optimization approach compared to having pairwise comparisons at hand, which is the prevalent approach in preference elicitation. We consider simple selection problems, where $p$ out of $n$ items must be chosen, i.e., ${\mathcal{X}} = \{ \pmb{x}\in\{0,1\}^n : \sum_{i\in[n]} x_i = p\}$. Problems of this type are frequently considered, see, e.g., \cite{chassein2018recoverable}, and have the advantage that the decision maker is mostly unconstrained in her choice of items, which means that the underlying preference becomes more important in the decision making process. Preference vectors $\pmb{w}$ are generated in three different ways. For vectors of type ``Uniform $\pmb{w}$'' (UW)\xspace, we sample $K$ values uniformly from $[0,1]$, then normalize and sort the vector. 
For preference vectors of ``Uniform $\alpha$'' (UA)\xspace type, we use the generator function proposed by \cite{kasperski2016using}, which is defined for $\alpha\in(0,1)$, with \[ g_\alpha(z)=\frac{1}{1-\alpha}(1-\alpha^{z}), \] which is a concave function with $g_{\alpha}(0)=0$ and $g_{\alpha}(1) = 1$. Weights $\pmb{w}(\alpha)$ are then given by $w_j(\alpha) = g_{\alpha}(j/K)-g_{\alpha}((j-1)/K)$ for $j\in[K]$. Note that each preference vector generated this way is indeed in ${\mathcal{W}}'$ due to the concavity of $g_\alpha$. The value $\alpha$ reflects the risk attitude of the decision maker; the closer $\alpha$ is to zero, the more weight is on the first entries of $\pmb{w}$. We sample $\alpha \in (0,1)$ uniformly. Finally, we use a preference vector generator of the type ``Uniform orness'' (UO)\xspace. The \textit{orness} of a weight vector is an important characterizing measure introduced by \cite{yager1988ordered}, given by $$orness(\pmb{w})=\frac{1}{K-1}\sum_{k\in[K]} (K-k)w_k\, .$$ Note that as we only assume risk-averse weights, the orness of such vectors is always greater than or equal to $0.5$, which is the orness of the average criterion $(\frac{1}{K},\ldots,\frac{1}{K})$, whereas $(1,0,\ldots,0)$, which is the worst-case criterion in our minimization setting, has an orness of $1$ \citep{yager1993families,liu2004properties}. In order to generate vectors of type (UO)\xspace, the orness is sampled using a uniform distribution over the interval $[0.5,1]$. This value is handed to the model proposed by \cite{wang2005minimax}, which then yields a preference vector with the specified orness. The model is given in Appendix \ref{App::OWAOrness}. An instance is created by first sampling $S$ cost matrices $\bar{C}^s \in \{1,\ldots,100\}^{K \times n}$, where each entry is generated uniformly as a random integer in $\{1,\ldots,100\}$. 
To ensure that all objectives are commensurate to each other, we normalize each of the $k\in[K]$ cost vectors using min-max normalization, to obtain the final matrix $C^s \in [0,1]^{K \times n}$: $$C^{s,k}_i=\frac{\bar{C}^{s,k}_i-\min_{j\in[n]}\bar{C}^{s,k}_j}{\max_{j\in[n]}\bar{C}^{s,k}_j-\min_{j\in[n]}\bar{C}^{s,k}_j}$$ Additionally, we allow noise in the data, i.e., the decision maker may deviate from her underlying preference vector in each decision. To model this behavior, we include a noise parameter $\epsilon \in [0,1]$. Whenever an OWA problem is solved, we modify each value of $w_k$ by adding a uniformly random value in $[\max\{-w_k,-\epsilon\},+\epsilon]$. Afterwards, $\pmb{w}$ is normalized and sorted. Hence, the most important parameters for an instance are $n$, $p$, $K$, $S$, and $\epsilon$. Our \emph{basic setting} is $n=40$, $p=\frac{n}{2}$, $K=5$ objectives, $S=16$ observations and $\epsilon=0$ noise. For the optimization approach, we implemented the iterative method presented in Section~\ref{sec:basic} to solve model~\eqref{pref} with the 1-norm as distance measure. The method was implemented in C++ using CPLEX version 22.1 to solve optimization models. We compare our passive elicitation method with the active method of having to obtain pairwise preferences. In order to transform the pairwise comparisons into a preference vector, we use the model proposed by \cite{ahn2008preference} that minimizes the violation of the decision maker’s judgment. The model can be found in Appendix \ref{App::OwaAhn}. We compare our approach with having to conduct $1$, $5$, $10$, and $20$ pairwise comparisons \emph{for each} of the $S$ observed data sets. As our elicitation procedure only uses $S$ (passive) observations, we argue that the comparison with only a single (active) pairwise comparison per observed setting---also resulting in $S$ pieces of information---is the most relevant and fair one. 
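The generation steps described in this setup are simple to reproduce. The following sketch (our own Python rendering; the actual experiments use the C++ implementation mentioned above) covers the (UA)\xspace generator, the orness measure and the min-max normalization:

```python
def g(alpha, z):
    # concave generator function g_alpha(z) = (1 - alpha**z) / (1 - alpha)
    return (1 - alpha ** z) / (1 - alpha)

def weights_ua(alpha, K):
    # w_j = g_alpha(j/K) - g_alpha((j-1)/K); non-increasing since g is concave
    return [g(alpha, j / K) - g(alpha, (j - 1) / K) for j in range(1, K + 1)]

def orness(w):
    # orness(w) = (1 / (K-1)) * sum_k (K - k) w_k
    K = len(w)
    return sum((K - k) * wk for k, wk in enumerate(w, start=1)) / (K - 1)

def minmax_normalize(C_bar):
    # row-wise min-max normalization of a K x n cost matrix into [0, 1]
    return [[(c - min(row)) / (max(row) - min(row)) for c in row] for row in C_bar]

w = weights_ua(0.3, 5)   # a risk-averse preference vector in W'
```

As stated above, the resulting (UA)\xspace weights sum to one and are non-increasing, the average criterion has orness $0.5$, and the worst-case criterion $(1,0,\ldots,0)$ has orness $1$.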
We generate solution pairs for data set $s\in [S]$ for the comparison using three different methods: two random solutions (RvR)\xspace, two random supported Pareto-optimal solutions (PvP)\xspace, and the observed solution $\pmb{x}^s$ vs.\ a supported Pareto-optimal solution (OvP)\xspace. Supported Pareto-optimal solutions are obtained by randomly creating a vector $\pmb{w} \in {\mathcal{W}}$ and computing the optimal solution for this preference. To evaluate the quality of these methods, we compare the difference between the predicted preference vector and the actual preference vector in the 2-norm. Using a different norm than the 1-norm used in the objective function of our model ensures a fairer comparison. \subsection{Results} \subsubsection{Distribution and Orness Evaluation\label{sec::ExpOrness}} We first want to gain insights into the distribution of the preference vectors created by the proposed generation procedures and the relevance of the orness of the resulting preference vector. For Figure \ref{fig::ManyForOrness} we created $100\,000$ preference vectors for the basic setting ($n=40$, $S=16$, $K=5$, $\epsilon=0$) using the respective generation procedures and plotted the frequency of occurring orness values (cumulated in buckets $[0.5,0.51),\ldots, [0.99,1]$) of the preference vectors (gray part of the plot). \begin{figure} \caption{Frequency of orness values for the different generation procedures (in gray using the right-hand side axis) and the average preference vector distance dependent on the orness.} \label{fig::ManyForOrness} \end{figure} The three generation procedures create very different types of preference vectors: (UW)\xspace creates a Gaussian-like orness distribution with mean orness of $0.678$, (UA)\xspace tends to create vectors with very small orness, while (UO)\xspace, by construction, creates a uniform distribution among all possible orness values. 
Also given in Figure \ref{fig::ManyForOrness} is the average preference vector distance to the true underlying preference vector for our optimization model \eqref{pref} as well as for varying numbers of (RvR)\xspace comparisons. For low orness our model performs very well even compared to 20 pairwise comparisons per observation, which amounts to $320$ active comparisons by the decision maker, in contrast to using only $S=16$ (passive) observations. However, for increasing orness the average distance to the true preference vector increases, even to the extent that our approach performs similar to having just a single comparison available. One reason for this is that solutions obtained by preference vectors with high orness can quite often be explained via the preference vector $(1,0,\ldots,0)$, i.e.~the vector with orness 1. \begin{figure} \caption{Percentage of proposed preference vectors that have an orness of 1.} \label{fig::OrnessOfOne} \end{figure} In Figure~\ref{fig::OrnessOfOne} the percentage of proposed preference vectors that have an orness of 1 is shown dependent on the orness of the true OWA weights, which were created using the (UO)\xspace method in the above experiments. It can be observed that when using pairwise comparisons, the fewer comparisons are conducted, the more often OWA weights with orness 1 are proposed. For our approach, even though we used $S=16$ observations, OWA weights with true orness larger than $0.95$ are explained via the vector $(1,0,\ldots,0)$ in more than fifty percent of the cases. This is largely an artifact of the optimization solver used, which returns $(1,0,\ldots,0)$ if it is feasible. \begin{figure} \caption{Orness of obtained preference vector dependent on orness of true preference vector. } \label{fig::OrnessVsOrnessC} \label{fig::OrnessVsOrness} \end{figure} In Figure~\ref{fig::OrnessVsOrness} we furthermore show the average, minimum and maximum orness of the proposed OWA weights, dependent on the true orness. 
While on average most experiments seem to yield a vector with fitting orness, the extreme values give a much more nuanced impression. Note that for a single observation per scenario, for every $0.01$-orness bucket at least one instance was found where the elicitation resulted in the preference vector $(1,0,\ldots,0)$ (see the horizontal green line at 1 in Figure~\ref{fig::OrnessVsOrnessC}). It can also be observed that our approach performs very well for OWA weights with low orness of less than $0.6$, as both the maximum and minimum orness of the proposed preference vector are closest to the true orness. This observation is in line with Figure~\ref{fig::ManyForOrness}, where it was shown that our approach performed best for low orness. \subsubsection{Number of Feasible Preference Vectors } To better understand how many preference vectors are capable of explaining the observed scenarios, we generated 1000 random problem instances for each of the preference vector generation methods in our basic setting, only varying the number of observations $S\in \{1,\ldots,32\}$. For each instance, we sample 1000 additional random preference vectors using the different methods and check whether each such preference vector can explain the given observations. In other words, we check whether this random vector is a solution to \eqref{pref} with objective value zero. Figure \ref{fig::Optimal_S} shows the average percentage of random vectors of the different generation types for which this is the case, depending on the given number of observations $S$. Note the logarithmic axes. \begin{figure} \caption{Ratio of random vectors using different generation techniques that are optimal dependent on the number of available observations. } \label{fig::Optimal_Sc} \label{fig::Optimal_S} \end{figure} We see that for small values of $S$, a relatively large proportion of random preference vectors are able to explain all given observations.
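The check whether a candidate vector attains objective value zero in \eqref{pref}, i.e.\ whether it renders every observed solution OWA-optimal, can be sketched on a toy selection problem (all names and the brute-force enumeration are our illustration; the actual experiments solve the OWA subproblems exactly rather than enumerating):

```python
from itertools import combinations

def owa_value(w, costs):
    # OWA: w[0] weighs the largest cost, w[1] the second largest, ...
    return sum(wi * c for wi, c in zip(w, sorted(costs, reverse=True)))

def explains(w, C, x_obs, n, p):
    """True if the observed selection x_obs is OWA-optimal under w,
    i.e. w would contribute objective value zero for this observation.
    C[k][i] is the cost of item i in scenario k; feasible solutions
    select exactly p out of n items (enumerated here for tiny instances)."""
    def scen_costs(x):
        return [sum(C[k][i] for i in x) for k in range(len(C))]
    best = min(owa_value(w, scen_costs(x)) for x in combinations(range(n), p))
    return owa_value(w, scen_costs(x_obs)) <= best + 1e-9
```

For instance, with two scenarios $C = [[1,2,3],[3,1,2]]$ and worst-case weights $w=(1,0)$, the selection $\{0,1\}$ is OWA-optimal (value 4), so it is explained by $w$, whereas $\{0,2\}$ (value 5) is not.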
For a true preference vector drawn with uniform orness (Figure \ref{fig::Optimal_Sc}) we can observe that even with 32 observations, still about $3\%$ of randomly drawn preference vectors using (UO)\xspace perfectly describe the observations. Since the optimization model has no further structural guidance to differentiate between these candidate solutions, it will select any one of them. Keep in mind that the optimization model does not make structural assumptions on $\pmb{w}$ (apart from it being non-increasing); if additional structural information about the true preference vector is known, it should be incorporated to further improve predictions. We also look at the ratio of optimal preference vectors depending on the orness of the true preference vector. For Figure \ref{fig::Optimal_Orness} we generated $10\,000$ preference vectors via (UO)\xspace for $S=4$ observations, $K=5$ and $n=40$. Again, for each instance, we sample 1000 additional random preference vectors using the different methods and check whether each such preference vector can explain the given observations. \begin{figure} \caption{Ratio of random vectors using different generation techniques that are optimal dependent on the orness of the true underlying preference vector. } \label{fig::Optimal_Orness} \end{figure} We can observe that for preference vectors with orness of about $0.9$, about $20\%$ of randomly drawn vectors via (UO)\xspace perfectly explain the observations. This also explains why our approach performs less well for such vectors (see Figure \ref{fig::ManyForOrness}). \subsubsection{Instance Parameters} We now want to examine how the instance parameters $n$, $S$ and $K$ influence the performance of our approach. To this end, we vary these parameters and again compare our model to the pairwise comparison based approach, using the Euclidean distance to the true preference vector as measure.
First, for $n\in \{10,11,\ldots,60\}$ and $p=\lfloor \frac{n}{2} \rfloor$ we generated $1000$ random instances each ($S=16$, $K=5$) using the three different vector generation procedures and compared the result of our model to the three different types of pairwise comparison. The results are shown in Figure \ref{fig::N}. \begin{figure} \caption{Comparison of our approach with pairwise comparisons for varying $n\in \{10,\ldots,60\}$.} \label{fig::N} \end{figure} The quality of our approach remains almost constant and even tends to improve for increasing $n$, while when conducting pairwise comparisons between the optimal and Pareto-optimal solutions the average distance to the underlying preference vector increases. In each case we outperform having only a single pairwise comparison (per observation) available. Using preference vectors with uniform orness (UO)\xspace, our approach performs similarly to having to conduct $5$ pairwise comparisons of types (RvR)\xspace and (PvP)\xspace. When using (UA)\xspace to generate weights, our approach is comparable to about $10$ pairwise comparisons per observation. This difference between (UA)\xspace and (UO)\xspace is in line with the observations made in Section \ref{sec::ExpOrness}: since (UA)\xspace tends to generate preference vectors with lower orness than (UO)\xspace, and since our approach tends to perform better if the observations are based on preference vectors with lower orness, a better performance was to be expected here as well. Note that this holds true for all of the following experiments. For $S\in \{1,\ldots,32\}$ observations, $n=40$ and $K=5$ we also generated $1000$ random instances for each weight generator and pair selector and conducted the same analysis. On the logarithmic horizontal axis of Figure \ref{fig::S} we show the value $S$, i.e., how many of the 32 generated observations were available.
On the vertical axis we again show the average distance of the predicted preference vector to the actual preference vector. \begin{figure} \caption{Comparison of our approach with pairwise comparisons for varying $S\in \{1,\ldots,32\}$.} \label{fig::S} \end{figure} Unsurprisingly, the distance to the optimal preference vector decreases for increasing $S$ for all elicitation methods. It is noteworthy that when $S$ increases by one, our approach only gains one new observation, while the number of available pairwise preferences increases by the respective factor, i.e.~for the line with 20 comparisons, 20 additional data points become available. This is one explanation of why the performance of \eqref{pref} does not improve at the same rate as that of conducting pairwise comparisons. Again it can be observed that using pairs containing the observed optimal solution and a supported Pareto-optimal solution does not perform as well as (RvR)\xspace or (PvP)\xspace. One explanation may be that with (RvR)\xspace and (PvP)\xspace a larger variety of pairs is compared, allowing more details of the underlying preference vector to be worked out. Finally, for $K\in\{2,\ldots, 20\}$, $n=40$ and $S=16$ the same experiments were conducted; the results are shown in Figure \ref{fig::K}. \begin{figure} \caption{Comparison of our approach with pairwise comparisons for varying $K\in \{2,\ldots,20\}$.} \label{fig::K} \end{figure} Interestingly, both \eqref{pref} and the approach using pairwise comparisons tend to perform worse for increasing, but small, $K$. For each setting, however, there seems to exist a turning point where, for further increasing $K$, the average distance to the underlying preference vector again decreases, or at least no longer increases. In general, for increasing $K$, \eqref{pref} is able to catch up regarding the average distance.
It is worth noting that $K=5$, the value used in all previous experiments, leads to one of the least favorable outcomes in terms of the average distance among the tested values of $K$. \subsubsection{Robustness against Noise} In this final experimental evaluation we want to test how well our approach behaves when there is noise in the data. To this end we created 1000 preference vectors in our basic setting, and for each query that uses such a preference vector it is altered by $\epsilon$ as explained before. Hence, each of the $S$ observed solutions as well as each pairwise comparison is obtained using a different preference vector, arising from adding random noise to the true OWA weights. In Figure \ref{fig::Eps} the average distance to the underlying preference vector is shown. \begin{figure} \caption{Comparison of our approach with pairwise comparisons for varying $\epsilon\in \{0,0.05,\ldots,0.95,1\}$.} \label{fig::Eps} \end{figure} The average distance to the true OWA weights using \eqref{pref} increases for increasing $\epsilon$. It can be seen that our approach is more robust in most of the cases, only losing ground compared to pairwise preference comparisons between two Pareto-optimal solutions when the true OWA weights are created using (UW)\xspace or (UA)\xspace. For all other settings, there exists an $\epsilon$ such that \eqref{pref} performs as well as, or even better than, having to conduct $20$ comparisons per observation, which must cope with $320$ distorted comparisons compared to only $16$ distorted observed solutions. \section{Conclusions and Further Research}\label{Conclusion} The ordered weighted averaging (OWA) criterion is a popular method to aggregate the performance of a solution over multiple objectives or scenarios. This aggregation is controlled by a vector $\pmb{w}$ which is used to express the decision maker's preference.
Several well-known preference vectors are commonly used to model the worst-case, best-case, average, median, or conditional value at risk criteria. But going beyond these standard vectors, a strength of the OWA criterion is that it also allows for a more nuanced reflection of decision maker preferences. A crucial question then becomes: how can we express this preference through the vector $\pmb{w}$? One way to approach this question is to prepare a catalog of questions to elicit preferences for a risk-averse decision maker. A drawback of such a method is that it requires interaction with the decision maker. Furthermore, the decision situations such a catalog models are often artificial and do not necessarily reflect the way a decision is being made for the problem at hand. Other approaches assume observations of historical OWA values, but it is unclear how these should be computed without having a preference vector already available. In this paper we propose a new and indirect approach to elicit preferences. Instead of (actively) asking questions to the decision maker, we (passively) observe her preferred choice on a set of example problems, which may have come from previous rounds of decision making. Using this set of observations, we then find a preference vector that is capable of explaining these choices. To construct this vector, we propose an optimization model, which obtains a preference vector that is at minimum distance to the polyhedra of feasible OWA weights capable of explaining the individual observations. As the model has a large number of constraints, it can be approached through an iterative solution method in which we alternate between solving a linear program to determine $\pmb{w}$ and solving OWA problems to check its feasibility. In computational experiments, we compared the performance of this model with an alternative approach from the literature that is based on pairwise preference comparisons between candidate solutions.
Our results show that our approach can achieve good performance with less data, in particular if the orness of the underlying true preference vector is small. Furthermore, it is more robust with regard to noise in the observations or pairwise comparisons. In further research, a study testing our preference elicitation approach with real-world decision makers would be a valuable addition; see, e.g., the recent study by \cite{reimann2017well}. Note that such an experiment is not trivial, as the ``true'' preference of a decision maker cannot be determined. Furthermore, our philosophy may be applied to other decision making criteria. In particular, the weighted ordered weighted averaging (WOWA) criterion seems a natural choice as a generalization of the OWA criterion considered in this paper, see \cite{Ogryczak2009915}. \begin{thebibliography}{} \bibitem [\protect \citeauthoryear { Adam \ \BBA {} Destercke }{ Adam \ \BBA {} Destercke }{ {\protect \APACyear {2021}} }]{ adam2021possibilistic} \APACinsertmetastar { adam2021possibilistic} \begin{APACrefauthors} Adam, L. \BCBT {}\ \BBA {} Destercke, S. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2021}{}{}. \newblock {\BBOQ}\APACrefatitle {Possibilistic preference elicitation by mini\-max regret} {Possibilistic preference elicitation by mini\-max regret}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Uncertainty in Artificial Intelligence} {Uncertainty in artificial intelligence}\ (\BPGS\ 718--727). \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Ahn }{ Ahn }{ {\protect \APACyear {2008}} }]{ ahn2008preference} \APACinsertmetastar { ahn2008preference} \begin{APACrefauthors} Ahn, B\BPBI S. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2008}{}{}.
\newblock {\BBOQ}\APACrefatitle {Preference relation approach for obtaining {OWA} operators weights} {Preference relation approach for obtaining {OWA} operators weights}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Approximate Reasoning}{47}{2}{166--178}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Aissi , Bazgan \BCBL {}\ \BBA {} Vanderpooten }{ Aissi \ \protect \BOthers {.}}{ {\protect \APACyear {2009}} }]{ aissi2009min} \APACinsertmetastar { aissi2009min} \begin{APACrefauthors} Aissi, H. , Bazgan, C. \BCBL {}\ \BBA {} Vanderpooten, D. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2009}{}{}. \newblock {\BBOQ}\APACrefatitle {Min--max and min--max regret versions of combinatorial optimization problems: A survey} {Min--max and min--max regret versions of combinatorial optimization problems: A survey}.{\BBCQ} \newblock \APACjournalVolNumPages{European Journal of Operational Research}{197}{2}{427--438}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Benabbou , Gonzales , Perny \BCBL {}\ \BBA {} Viappiani }{ Benabbou \ \protect \BOthers {.}}{ {\protect \APACyear {2015}} }]{ benabbou2015minimax} \APACinsertmetastar { benabbou2015minimax} \begin{APACrefauthors} Benabbou, N. , Gonzales, C. , Perny, P. \BCBL {}\ \BBA {} Viappiani, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2015}{}{}. \newblock {\BBOQ}\APACrefatitle {Mini\-max regret approaches for preference elicitation with rank-dependent aggregators} {Mini\-max regret approaches for preference elicitation with rank-dependent aggregators}.{\BBCQ} \newblock \APACjournalVolNumPages{EURO Journal on Decision Processes}{3}{1}{29--64}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Bertsimas \ \BBA {} Brown }{ Bertsimas \ \BBA {} Brown }{ {\protect \APACyear {2009}} }]{ bertsimas2009constructing} \APACinsertmetastar { bertsimas2009constructing} \begin{APACrefauthors} Bertsimas, D. \BCBT {}\ \BBA {} Brown, D\BPBI B. 
\end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2009}{}{}. \newblock {\BBOQ}\APACrefatitle {Constructing uncertainty sets for robust linear optimization} {Constructing uncertainty sets for robust linear optimization}.{\BBCQ} \newblock \APACjournalVolNumPages{Operations research}{57}{6}{1483--1495}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Bourdache \ \BBA {} Perny }{ Bourdache \ \BBA {} Perny }{ {\protect \APACyear {2017}} }]{ bourdache2017anytime} \APACinsertmetastar { bourdache2017anytime} \begin{APACrefauthors} Bourdache, N. \BCBT {}\ \BBA {} Perny, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2017}{}{}. \newblock {\BBOQ}\APACrefatitle {Anytime algorithms for adaptive robust optimization with OWA and WOWA} {Anytime algorithms for adaptive robust optimization with owa and wowa}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {International Conference on Algorithmic Decision Theory} {International conference on algorithmic decision theory}\ (\BPGS\ 93--107). \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Bourdache \ \BBA {} Perny }{ Bourdache \ \BBA {} Perny }{ {\protect \APACyear {2019}} }]{ bourdache2019active} \APACinsertmetastar { bourdache2019active} \begin{APACrefauthors} Bourdache, N. \BCBT {}\ \BBA {} Perny, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2019}{}{}. \newblock {\BBOQ}\APACrefatitle {Active preference learning based on generalized gini functions: Application to the multiagent knapsack problem} {Active preference learning based on generalized gini functions: Application to the multiagent knapsack problem}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Proceedings of the {AAAI} Conference on Artificial Intelligence} {Proceedings of the {AAAI} conference on artificial intelligence}\ (\BVOL~33, \BPGS\ 7741--7748). 
\PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Chang \ \BBA {} Cheng }{ Chang \ \BBA {} Cheng }{ {\protect \APACyear {2011}} }]{ chang2011evaluating} \APACinsertmetastar { chang2011evaluating} \begin{APACrefauthors} Chang, K\BHBI H. \BCBT {}\ \BBA {} Cheng, C\BHBI H. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2011}{}{}. \newblock {\BBOQ}\APACrefatitle {Evaluating the risk of failure using the fuzzy {OWA} and {DEMATEL} method} {Evaluating the risk of failure using the fuzzy {OWA} and {DEMATEL} method}.{\BBCQ} \newblock \APACjournalVolNumPages{Journal of Intelligent Manufacturing}{22}{2}{113--129}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Chassein \ \BBA {} Goerigk }{ Chassein \ \BBA {} Goerigk }{ {\protect \APACyear {2015}} }]{ chassein2015alternative} \APACinsertmetastar { chassein2015alternative} \begin{APACrefauthors} Chassein, A. \BCBT {}\ \BBA {} Goerigk, M. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2015}{}{}. \newblock {\BBOQ}\APACrefatitle {Alternative formulations for the ordered weighted averaging objective} {Alternative formulations for the ordered weighted averaging objective}.{\BBCQ} \newblock \APACjournalVolNumPages{Information Processing Letters}{115}{6-8}{604--608}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Chassein , Goerigk , Kasperski \BCBL {}\ \BBA {} Zieli{\'n}ski }{ Chassein \ \protect \BOthers {.}}{ {\protect \APACyear {2018}} }]{ chassein2018recoverable} \APACinsertmetastar { chassein2018recoverable} \begin{APACrefauthors} Chassein, A. , Goerigk, M. , Kasperski, A. \BCBL {}\ \BBA {} Zieli{\'n}ski, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2018}{}{}. 
\newblock {\BBOQ}\APACrefatitle {On recoverable and two-stage robust selection problems with budgeted uncertainty} {On recoverable and two-stage robust selection problems with budgeted uncertainty}.{\BBCQ} \newblock \APACjournalVolNumPages{European Journal of Operational Research}{265}{2}{423--436}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Chassein , Goerigk , Kasperski \BCBL {}\ \BBA {} Zieli{\'n}ski }{ Chassein \ \protect \BOthers {.}}{ {\protect \APACyear {2020}} }]{ chassein2020approximating} \APACinsertmetastar { chassein2020approximating} \begin{APACrefauthors} Chassein, A. , Goerigk, M. , Kasperski, A. \BCBL {}\ \BBA {} Zieli{\'n}ski, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2020}{}{}. \newblock {\BBOQ}\APACrefatitle {Approximating combinatorial optimization problems with the ordered weighted averaging criterion} {Approximating combinatorial optimization problems with the ordered weighted averaging criterion}.{\BBCQ} \newblock \APACjournalVolNumPages{European Journal of Operational Research}{286}{3}{828--838}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Dragone , Teso \BCBL {}\ \BBA {} Passerini }{ Dragone \ \protect \BOthers {.}}{ {\protect \APACyear {2018}} }]{ dragone2018constructive} \APACinsertmetastar { dragone2018constructive} \begin{APACrefauthors} Dragone, P. , Teso, S. \BCBL {}\ \BBA {} Passerini, A. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2018}{}{}. \newblock {\BBOQ}\APACrefatitle {Constructive preference elicitation over hybrid combinatorial spaces} {Constructive preference elicitation over hybrid combinatorial spaces}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Proceedings of the {AAAI} Conference on Artificial Intelligence} {Proceedings of the {AAAI} conference on artificial intelligence}\ (\BVOL~32, \BPGS\ 2943--2950). 
\PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Ehrgott }{ Ehrgott }{ {\protect \APACyear {2005}} }]{ ehrgott} \APACinsertmetastar { ehrgott} \begin{APACrefauthors} Ehrgott, M. \end{APACrefauthors} \unskip\ \newblock \APACrefYear{2005}. \newblock \APACrefbtitle {Multicriteria optimization} {Multicriteria optimization}\ (\BVOL~2). \newblock \APACaddressPublisher{}{Springer}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Emrouznejad \ \BBA {} Marra }{ Emrouznejad \ \BBA {} Marra }{ {\protect \APACyear {2014}} }]{ emrouznejad2014ordered} \APACinsertmetastar { emrouznejad2014ordered} \begin{APACrefauthors} Emrouznejad, A. \BCBT {}\ \BBA {} Marra, M. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2014}{}{}. \newblock {\BBOQ}\APACrefatitle {Ordered weighted averaging operators 1988--2014: A citation-based literature survey} {Ordered weighted averaging operators 1988--2014: A citation-based literature survey}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Intelligent Systems}{29}{11}{994--1014}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Filev \ \BBA {} Yager }{ Filev \ \BBA {} Yager }{ {\protect \APACyear {1994}} }]{ filev1994learning} \APACinsertmetastar { filev1994learning} \begin{APACrefauthors} Filev, D. \BCBT {}\ \BBA {} Yager, R\BPBI R. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{1994}{}{}. \newblock {\BBOQ}\APACrefatitle {Learning {OWA} operator weights from data} {Learning {OWA} operator weights from data}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Proceedings of 1994 IEEE 3rd International Fuzzy Systems Conference} {Proceedings of 1994 ieee 3rd international fuzzy systems conference}\ (\BPGS\ 468--473). \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Filev \ \BBA {} Yager }{ Filev \ \BBA {} Yager }{ {\protect \APACyear {1998}} }]{ filev1998issue} \APACinsertmetastar { filev1998issue} \begin{APACrefauthors} Filev, D. 
\BCBT {}\ \BBA {} Yager, R\BPBI R. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{1998}{}{}. \newblock {\BBOQ}\APACrefatitle {On the issue of obtaining {OWA} operator weights} {On the issue of obtaining {OWA} operator weights}.{\BBCQ} \newblock \APACjournalVolNumPages{Fuzzy Sets and Systems}{94}{2}{157--169}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Full{\'e}r \ \BBA {} Majlender }{ Full{\'e}r \ \BBA {} Majlender }{ {\protect \APACyear {2001}} }]{ fuller2001analytic} \APACinsertmetastar { fuller2001analytic} \begin{APACrefauthors} Full{\'e}r, R. \BCBT {}\ \BBA {} Majlender, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2001}{}{}. \newblock {\BBOQ}\APACrefatitle {An analytic approach for obtaining maximal entropy {OWA} operator weights} {An analytic approach for obtaining maximal entropy {OWA} operator weights}.{\BBCQ} \newblock \APACjournalVolNumPages{Fuzzy Sets and Systems}{124}{1}{53--57}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Galand \ \BBA {} Spanjaard }{ Galand \ \BBA {} Spanjaard }{ {\protect \APACyear {2012}} }]{ Galand20121540} \APACinsertmetastar { Galand20121540} \begin{APACrefauthors} Galand, L. \BCBT {}\ \BBA {} Spanjaard, O. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2012}{}{}. \newblock {\BBOQ}\APACrefatitle {Exact algorithms for {OWA}-optimization in multiobjective spanning tree problems} {Exact algorithms for {OWA}-optimization in multiobjective spanning tree problems}.{\BBCQ} \newblock \APACjournalVolNumPages{Computers \& Operations Research}{39}{7}{1540 - 1554}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Garc{\'\i}a-Lapresta , Llamazares \BCBL {}\ \BBA {} Pena }{ Garc{\'\i}a-Lapresta \ \protect \BOthers {.}}{ {\protect \APACyear {2011}} }]{ garcia2011generating} \APACinsertmetastar { garcia2011generating} \begin{APACrefauthors} Garc{\'\i}a-Lapresta, J\BPBI L. , Llamazares, B. \BCBL {}\ \BBA {} Pena, T. 
\end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2011}{}{}. \newblock {\BBOQ}\APACrefatitle {Generating {OWA} weights from individual assessments} {Generating {OWA} weights from individual assessments}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Recent Developments in the Ordered Weighted Averaging Operators: Theory and Practice} {Recent developments in the ordered weighted averaging operators: Theory and practice}\ (\BPGS\ 135--147). \newblock \APACaddressPublisher{}{Springer}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Kasperski \ \BBA {} Zieli{\'n}ski }{ Kasperski \ \BBA {} Zieli{\'n}ski }{ {\protect \APACyear {2015}} }]{ kasperski2015combinatorial} \APACinsertmetastar { kasperski2015combinatorial} \begin{APACrefauthors} Kasperski, A. \BCBT {}\ \BBA {} Zieli{\'n}ski, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2015}{}{}. \newblock {\BBOQ}\APACrefatitle {Combinatorial optimization problems with uncertain costs and the {OWA} criterion} {Combinatorial optimization problems with uncertain costs and the {OWA} criterion}.{\BBCQ} \newblock \APACjournalVolNumPages{Theoretical Computer Science}{565}{}{102--112}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Kasperski \ \BBA {} Zieli{\'n}ski }{ Kasperski \ \BBA {} Zieli{\'n}ski }{ {\protect \APACyear {2016}} {\protect \APACexlab {{\protect \BCnt {1}}}}}]{ kasperski2016robust} \APACinsertmetastar { kasperski2016robust} \begin{APACrefauthors} Kasperski, A. \BCBT {}\ \BBA {} Zieli{\'n}ski, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2016{\protect \BCnt {1}}}{}{}. \newblock {\BBOQ}\APACrefatitle {Robust discrete optimization under discrete and interval uncertainty: A survey} {Robust discrete optimization under discrete and interval uncertainty: A survey}.{\BBCQ} \newblock \APACjournalVolNumPages{Robustness analysis in decision aiding, optimization, and analytics}{}{}{113--143}. 
\PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Kasperski \ \BBA {} Zieli{\'n}ski }{ Kasperski \ \BBA {} Zieli{\'n}ski }{ {\protect \APACyear {2016}} {\protect \APACexlab {{\protect \BCnt {2}}}}}]{ kasperski2016using} \APACinsertmetastar { kasperski2016using} \begin{APACrefauthors} Kasperski, A. \BCBT {}\ \BBA {} Zieli{\'n}ski, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2016{\protect \BCnt {2}}}{}{}. \newblock {\BBOQ}\APACrefatitle {Using the {WOWA} operator in robust discrete optimization problems} {Using the {WOWA} operator in robust discrete optimization problems}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Approximate Reasoning}{68}{}{54--67}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Kim \ \BBA {} Ahn }{ Kim \ \BBA {} Ahn }{ {\protect \APACyear {2018}} }]{ kim2018implicit} \APACinsertmetastar { kim2018implicit} \begin{APACrefauthors} Kim, E\BPBI Y. \BCBT {}\ \BBA {} Ahn, B\BPBI S. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2018}{}{}. \newblock {\BBOQ}\APACrefatitle {Implicit elicitation of attitudinal character in the {OWA} operator} {Implicit elicitation of attitudinal character in the {OWA} operator}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Intelligent Systems}{33}{2}{281--287}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Labreuche , Mayag \BCBL {}\ \BBA {} Duqueroie }{ Labreuche \ \protect \BOthers {.}}{ {\protect \APACyear {2015}} }]{ labreuche2015extension} \APACinsertmetastar { labreuche2015extension} \begin{APACrefauthors} Labreuche, C. , Mayag, B. \BCBL {}\ \BBA {} Duqueroie, B. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2015}{}{}. 
\newblock {\BBOQ}\APACrefatitle {Extension of the {MACBETH} approach to elicit an ordered weighted average operator} {Extension of the {MACBETH} approach to elicit an ordered weighted average operator}.{\BBCQ} \newblock \APACjournalVolNumPages{EURO Journal on Decision Processes}{3}{1}{65--105}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Lin , Xu , Lin \BCBL {}\ \BBA {} Chen }{ Lin \ \protect \BOthers {.}}{ {\protect \APACyear {2020}} }]{ lin2020determine} \APACinsertmetastar { lin2020determine} \begin{APACrefauthors} Lin, M. , Xu, W. , Lin, Z. \BCBL {}\ \BBA {} Chen, R. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2020}{}{}. \newblock {\BBOQ}\APACrefatitle {Determine {OWA} operator weights using kernel density estimation} {Determine {OWA} operator weights using kernel density estimation}.{\BBCQ} \newblock \APACjournalVolNumPages{Economic Research-Ekonomska Istra{\v{z}}ivanja}{33}{1}{1441--1464}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Liu }{ Liu }{ {\protect \APACyear {2011}} }]{ liu2011review} \APACinsertmetastar { liu2011review} \begin{APACrefauthors} Liu, X. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2011}{}{}. \newblock {\BBOQ}\APACrefatitle {A review of the {OWA} determination methods: Classification and some extensions} {A review of the {OWA} determination methods: Classification and some extensions}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Recent Developments in the Ordered Weighted Averaging Operators: Theory and Practice} {Recent developments in the ordered weighted averaging operators: Theory and practice}\ (\BPGS\ 49--90). \newblock \APACaddressPublisher{}{Springer}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Liu \ \BBA {} Chen }{ Liu \ \BBA {} Chen }{ {\protect \APACyear {2004}} }]{ liu2004properties} \APACinsertmetastar { liu2004properties} \begin{APACrefauthors} Liu, X. \BCBT {}\ \BBA {} Chen, L. 
\end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2004}{}{}. \newblock {\BBOQ}\APACrefatitle {On the properties of parametric geometric OWA operator} {On the properties of parametric geometric owa operator}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Approximate Reasoning}{35}{2}{163--178}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Malczewski }{ Malczewski }{ {\protect \APACyear {2006}} }]{ malczewski2006ordered} \APACinsertmetastar { malczewski2006ordered} \begin{APACrefauthors} Malczewski, J. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2006}{}{}. \newblock {\BBOQ}\APACrefatitle {Ordered weighted averaging with fuzzy quantifiers: {GIS}-based multicriteria evaluation for land-use suitability analysis} {Ordered weighted averaging with fuzzy quantifiers: {GIS}-based multicriteria evaluation for land-use suitability analysis}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Applied Earth Observation and Geoinformation}{8}{4}{270--277}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Merig{\'o} }{ Merig{\'o} }{ {\protect \APACyear {2010}} }]{ merigo2010fuzzy} \APACinsertmetastar { merigo2010fuzzy} \begin{APACrefauthors} Merig{\'o}, J\BPBI M. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2010}{}{}. \newblock {\BBOQ}\APACrefatitle {Fuzzy decision making with immediate probabilities} {Fuzzy decision making with immediate probabilities}.{\BBCQ} \newblock \APACjournalVolNumPages{Computers \& Industrial Engineering}{58}{4}{651--657}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Merig{\'o} \ \BBA {} Casanovas }{ Merig{\'o} \ \BBA {} Casanovas }{ {\protect \APACyear {2011}} }]{ merigo2011induced} \APACinsertmetastar { merigo2011induced} \begin{APACrefauthors} Merig{\'o}, J\BPBI M. \BCBT {}\ \BBA {} Casanovas, M. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2011}{}{}. 
\newblock {\BBOQ}\APACrefatitle {Induced aggregation operators in the Euclidean distance and its application in financial decision making} {Induced aggregation operators in the euclidean distance and its application in financial decision making}.{\BBCQ} \newblock \APACjournalVolNumPages{Expert Systems with Applications}{38}{6}{7603--7608}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Merig{\'o} \ \BBA {} Gil-Lafuente }{ Merig{\'o} \ \BBA {} Gil-Lafuente }{ {\protect \APACyear {2009}} }]{ merigo2009induced} \APACinsertmetastar { merigo2009induced} \begin{APACrefauthors} Merig{\'o}, J\BPBI M. \BCBT {}\ \BBA {} Gil-Lafuente, A\BPBI M. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2009}{}{}. \newblock {\BBOQ}\APACrefatitle {The induced generalized {OWA} operator} {The induced generalized {OWA} operator}.{\BBCQ} \newblock \APACjournalVolNumPages{Information Sciences}{179}{6}{729--741}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Ogryczak \ \BBA {} Olender }{ Ogryczak \ \BBA {} Olender }{ {\protect \APACyear {2012}} }]{ ogryczak2012milp} \APACinsertmetastar { ogryczak2012milp} \begin{APACrefauthors} Ogryczak, W. \BCBT {}\ \BBA {} Olender, P. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2012}{}{}. \newblock {\BBOQ}\APACrefatitle {On {MILP} models for the {OWA} optimization} {On {MILP} models for the {OWA} optimization}.{\BBCQ} \newblock \APACjournalVolNumPages{Journal of Telecommunications and Information Technology}{}{}{5--12}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Ogryczak \ \BBA {} {\'S}liwi{\'n}ski }{ Ogryczak \ \BBA {} {\'S}liwi{\'n}ski }{ {\protect \APACyear {2003}} }]{ ogryczak2003solving} \APACinsertmetastar { ogryczak2003solving} \begin{APACrefauthors} Ogryczak, W. \BCBT {}\ \BBA {} {\'S}liwi{\'n}ski, T. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2003}{}{}. 
\newblock {\BBOQ}\APACrefatitle {On solving linear programs with the ordered weighted averaging objective} {On solving linear programs with the ordered weighted averaging objective}.{\BBCQ} \newblock \APACjournalVolNumPages{European Journal of Operational Research}{148}{1}{80--91}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Ogryczak \ \BBA {} {\'S}liwi{\'n}ski }{ Ogryczak \ \BBA {} {\'S}liwi{\'n}ski }{ {\protect \APACyear {2009}} }]{ Ogryczak2009915} \APACinsertmetastar { Ogryczak2009915} \begin{APACrefauthors} Ogryczak, W. \BCBT {}\ \BBA {} {\'S}liwi{\'n}ski, T. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2009}{}{}. \newblock {\BBOQ}\APACrefatitle {On efficient {WOWA} optimization for decision support under risk} {On efficient {WOWA} optimization for decision support under risk}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Approximate Reasoning}{50}{6}{915 - 928}. \newblock \APACrefnote{Ninth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2007)} \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Ogryczak \ \BBA {} Tamir }{ Ogryczak \ \BBA {} Tamir }{ {\protect \APACyear {2003}} }]{ ogryczak2003minimizing} \APACinsertmetastar { ogryczak2003minimizing} \begin{APACrefauthors} Ogryczak, W. \BCBT {}\ \BBA {} Tamir, A. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2003}{}{}. \newblock {\BBOQ}\APACrefatitle {Minimizing the sum of the k largest functions in linear time} {Minimizing the sum of the k largest functions in linear time}.{\BBCQ} \newblock \APACjournalVolNumPages{Information Processing Letters}{85}{3}{117--122}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { O'Hagan }{ O'Hagan }{ {\protect \APACyear {1988}} }]{ o1988aggregating} \APACinsertmetastar { o1988aggregating} \begin{APACrefauthors} O'Hagan, M. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{1988}{}{}. 
\newblock {\BBOQ}\APACrefatitle {Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic} {Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Twenty-Second Asilomar Conference on Signals, Systems and Computers} {Twenty-second asilomar conference on signals, systems and computers}\ (\BVOL~2, \BPGS\ 681--689). \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Reimann , Schumacher \BCBL {}\ \BBA {} Vetschera }{ Reimann \ \protect \BOthers {.}}{ {\protect \APACyear {2017}} }]{ reimann2017well} \APACinsertmetastar { reimann2017well} \begin{APACrefauthors} Reimann, O. , Schumacher, C. \BCBL {}\ \BBA {} Vetschera, R. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2017}{}{}. \newblock {\BBOQ}\APACrefatitle {How well does the {OWA} operator represent real preferences?} {How well does the {OWA} operator represent real preferences?}{\BBCQ} \newblock \APACjournalVolNumPages{European Journal of Operational Research}{258}{3}{993--1003}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Viappiani \ \BBA {} Boutilier }{ Viappiani \ \BBA {} Boutilier }{ {\protect \APACyear {2020}} }]{ viappiani2020equivalence} \APACinsertmetastar { viappiani2020equivalence} \begin{APACrefauthors} Viappiani, P. \BCBT {}\ \BBA {} Boutilier, C. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2020}{}{}. \newblock {\BBOQ}\APACrefatitle {On the equivalence of optimal recommendation sets and myopically optimal query sets} {On the equivalence of optimal recommendation sets and myopically optimal query sets}.{\BBCQ} \newblock \APACjournalVolNumPages{Artificial Intelligence}{286}{}{103328}.
\PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Wang , Luo \BCBL {}\ \BBA {} Hua }{ Wang \ \protect \BOthers {.}}{ {\protect \APACyear {2007}} }]{ wang2007aggregating} \APACinsertmetastar { wang2007aggregating} \begin{APACrefauthors} Wang, Y\BHBI M. , Luo, Y. \BCBL {}\ \BBA {} Hua, Z. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2007}{}{}. \newblock {\BBOQ}\APACrefatitle {Aggregating preference rankings using {OWA} operator weights} {Aggregating preference rankings using {OWA} operator weights}.{\BBCQ} \newblock \APACjournalVolNumPages{Information Sciences}{177}{16}{3356--3363}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Wang \ \BBA {} Parkan }{ Wang \ \BBA {} Parkan }{ {\protect \APACyear {2005}} }]{ wang2005minimax} \APACinsertmetastar { wang2005minimax} \begin{APACrefauthors} Wang, Y\BHBI M. \BCBT {}\ \BBA {} Parkan, C. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2005}{}{}. \newblock {\BBOQ}\APACrefatitle {A minimax disparity approach for obtaining OWA operator weights} {A minimax disparity approach for obtaining {OWA} operator weights}.{\BBCQ} \newblock \APACjournalVolNumPages{Information Sciences}{175}{1-2}{20--29}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Xu }{ Xu }{ {\protect \APACyear {2005}} }]{ xu2005overview} \APACinsertmetastar { xu2005overview} \begin{APACrefauthors} Xu, Z. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2005}{}{}. \newblock {\BBOQ}\APACrefatitle {An overview of methods for determining {OWA} weights} {An overview of methods for determining {OWA} weights}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Intelligent Systems}{20}{8}{843--865}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Yager }{ Yager }{ {\protect \APACyear {1988}} }]{ yager1988ordered} \APACinsertmetastar { yager1988ordered} \begin{APACrefauthors} Yager, R\BPBI R.
\end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{1988}{}{}. \newblock {\BBOQ}\APACrefatitle {On ordered weighted averaging aggregation operators in multicriteria decisionmaking} {On ordered weighted averaging aggregation operators in multicriteria decisionmaking}.{\BBCQ} \newblock \APACjournalVolNumPages{IEEE Transactions on Systems, Man and Cybernetics}{18}{1}{183--190}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Yager }{ Yager }{ {\protect \APACyear {1993}} }]{ yager1993families} \APACinsertmetastar { yager1993families} \begin{APACrefauthors} Yager, R\BPBI R. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{1993}{}{}. \newblock {\BBOQ}\APACrefatitle {Families of OWA operators} {Families of {OWA} operators}.{\BBCQ} \newblock \APACjournalVolNumPages{Fuzzy Sets and Systems}{59}{2}{125--148}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Yager }{ Yager }{ {\protect \APACyear {1996}} }]{ yager1996quantifier} \APACinsertmetastar { yager1996quantifier} \begin{APACrefauthors} Yager, R\BPBI R. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{1996}{}{}. \newblock {\BBOQ}\APACrefatitle {Quantifier guided aggregation using {OWA} operators} {Quantifier guided aggregation using {OWA} operators}.{\BBCQ} \newblock \APACjournalVolNumPages{International Journal of Intelligent Systems}{11}{1}{49--73}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Yager }{ Yager }{ {\protect \APACyear {1998}} }]{ yager1998including} \APACinsertmetastar { yager1998including} \begin{APACrefauthors} Yager, R\BPBI R. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{1998}{}{}. \newblock {\BBOQ}\APACrefatitle {Including importances in {OWA} aggregations using fuzzy systems modeling} {Including importances in {OWA} aggregations using fuzzy systems modeling}.{\BBCQ} \newblock \APACjournalVolNumPages{IEEE Transactions on Fuzzy Systems}{6}{2}{286--294}.
\PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Yager \ \BBA {} Alajlan }{ Yager \ \BBA {} Alajlan }{ {\protect \APACyear {2016}} }]{ yager2016some} \APACinsertmetastar { yager2016some} \begin{APACrefauthors} Yager, R\BPBI R. \BCBT {}\ \BBA {} Alajlan, N. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2016}{}{}. \newblock {\BBOQ}\APACrefatitle {Some issues on the {OWA} aggregation with importance weighted arguments} {Some issues on the {OWA} aggregation with importance weighted arguments}.{\BBCQ} \newblock \APACjournalVolNumPages{Knowledge-Based Systems}{100}{}{89--96}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Zabihi \ \protect \BOthers {.}}{ Zabihi \ \protect \BOthers {.}}{ {\protect \APACyear {2019}} }]{ zabihi2019gis} \APACinsertmetastar { zabihi2019gis} \begin{APACrefauthors} Zabihi, H. , Alizadeh, M. , Kibet~Langat, P. , Karami, M. , Shahabi, H. , Ahmad, A. \BDBL {}Lee, S. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2019}{}{}. \newblock {\BBOQ}\APACrefatitle {{GIS} Multi-Criteria Analysis by Ordered Weighted Averaging ({OWA}): toward an integrated citrus management strategy} {{GIS} multi-criteria analysis by ordered weighted averaging ({OWA}): toward an integrated citrus management strategy}.{\BBCQ} \newblock \APACjournalVolNumPages{Sustainability}{11}{4}{1009}. \PrintBackRefs{\CurrentBib} \bibitem [\protect \citeauthoryear { Zintgraf , Roijers , Linders , Jonker \BCBL {}\ \BBA {} Now{\'e} }{ Zintgraf \ \protect \BOthers {.}}{ {\protect \APACyear {2018}} }]{ zintgraf2018ordered} \APACinsertmetastar { zintgraf2018ordered} \begin{APACrefauthors} Zintgraf, L\BPBI M. , Roijers, D\BPBI M. , Linders, S. , Jonker, C\BPBI M. \BCBL {}\ \BBA {} Now{\'e}, A. \end{APACrefauthors} \unskip\ \newblock \APACrefYearMonthDay{2018}{}{}. 
\newblock {\BBOQ}\APACrefatitle {Ordered Preference Elicitation Strategies for Supporting Multi-Objective Decision Making} {Ordered preference elicitation strategies for supporting multi-objective decision making}.{\BBCQ} \newblock \BIn{} \APACrefbtitle {Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems} {Proceedings of the 17th international conference on autonomous agents and multiagent systems}\ (\BPGS\ 1477--1485). \PrintBackRefs{\CurrentBib} \end{thebibliography} \appendix \section{Model to Generate OWA Weights with Given Orness\label{App::OWAOrness}} This model was used by \cite{wang2005minimax} and creates an OWA preference vector $\pmb{w} \in [0,1]^K$ with a given orness $\alpha$, where the auxiliary variable $\delta$ linearizes the maximum disparity $\max_{k \in \{1,\ldots,K-1\}} |w_k-w_{k+1}|$ between two adjacent weights. \begin{align*} \min \ & \delta \\ \textnormal{s.t.}\ & \alpha = \frac{1}{K-1}\sum_{k\in [K]}(K-k)w_k\\ & -\delta \leq w_k - w_{k+1} \leq \delta \quad \forall k\in\{1,\ldots,K-1\}\\ & \pmb{w} \in {\mathcal{W}} \\ &\delta\geq 0 \end{align*} \section{Model for Obtaining OWA Weights that Minimize Violations\label{App::OwaAhn}} This model was proposed by \cite{ahn2008preference} and aims at finding OWA weights $\pmb{w} \in {\mathcal{W}}$ that are consistent with the decision-maker's judgments between alternatives $(\pmb{x},\pmb{y}) \in {\mathcal{X}} \times {\mathcal{X}}$, where $\pmb{x}$ is the preferred alternative. Let $\Theta \subseteq {\mathcal{X}} \times {\mathcal{X}}$ be the set of such ordered pairs and let $a_1(\pmb{x}),\ldots,a_K(\pmb{x})$ be the objective values of solution $\pmb{x} \in {\mathcal{X}}$ sorted from largest to smallest.
\begin{align*} \min \ & \sum_{(\pmb{x},\pmb{y}) \in \Theta} \delta_{\pmb{x},\pmb{y}} \\ \textnormal{s.t.}\ & \sum_{k\in [K]}\left( a_k(\pmb{y})- a_k(\pmb{x})\right)w_k + \delta_{\pmb{x},\pmb{y}} \geq \epsilon \quad \forall (\pmb{x},\pmb{y}) \in \Theta \\ & \pmb{w} \in {\mathcal{W}} \\ &\delta_{\pmb{x},\pmb{y}}\geq 0 \quad \forall (\pmb{x},\pmb{y}) \in \Theta \end{align*} \section{Example Illustrating the Difference Between Models \eqref{pref} and \eqref{altpref}\label{App::ExampleAlternative}} We consider the selection problem where one has to select $2$ out of $4$ items. There are $K=2$ objectives and $S=2$ observations. Assume the true preference vector is given by $(\frac{3}{4},\frac{1}{4})$. The objective coefficients as well as the two observed solutions are given in the upper part of Table \ref{tab::ExampleAlternative}. \begin{table}[htb] \caption{Data for the example. \label{tab::ExampleAlternative}} \centering \begin{tabular}{rrr} \toprule &Observation 1 & Observation 2\\\midrule Objective 1 &$\pmb{c}^{1,1}=\left(0,\frac{1}{3},1,\frac{1}{6}\right)$&$\pmb{c}^{2,1}=\left(1,\frac{5}{8},0,\frac{1}{2}\right)$\\ Objective 2&$\pmb{c}^{1,2}=\left(\frac{2}{3},\frac{1}{3},0,1\right)$&$\pmb{c}^{2,2}=\left(\frac{1}{2},0,1,0\right)$\\ Optimal solution &$\pmb{x}^1=\left(1,1,0,0\right)$&$\pmb{x}^2=\left(0,1,0,1\right)$\\ OWA score & $0.833333$ &$0.84375$\\\midrule \eqref{pref}\\ Preference vector &$(0.8,0.2)$\\ Solution &$(1,1,0,0)$&$(0,0,1,1)$\\ OWA score & $0.86667$ & $0.9$\\\midrule \eqref{altpref}\\ Preference vector &$(0.5,0.5)$\\ Solution &$(1,1,0,0)$&$(0,1,0,1)$\\ OWA score & $0.66667$ & $0.5625$\\\bottomrule \end{tabular} \end{table} Model \eqref{pref} returns the preference vector $(0.8,0.2)$, which has a Euclidean distance of $0.071$ to the true OWA weights, while the preference vector found by \eqref{altpref} is $(0.5,0.5)$ with distance $0.354$.
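The OWA scores in the table can be re-derived directly from the item coefficients. The following short Python check is an illustration added for verification; the function names are ours and only the item data is taken from the table:

```python
# Re-computation of the OWA scores reported in the example table.
# Item coefficients are copied from the table; everything else is an
# illustrative sketch, not code from the paper.

def owa(weights, values):
    """OWA score: weights applied to the values sorted in decreasing order."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def score(x, costs, weights):
    """OWA score of a 0/1 selection x under per-objective item coefficients."""
    objectives = [sum(c_i * x_i for c_i, x_i in zip(c, x)) for c in costs]
    return owa(weights, objectives)

obs1 = [(0, 1/3, 1, 1/6), (2/3, 1/3, 0, 1)]  # c^{1,1}, c^{1,2}
obs2 = [(1, 5/8, 0, 1/2), (1/2, 0, 1, 0)]    # c^{2,1}, c^{2,2}

true_w = (3/4, 1/4)
print(score((1, 1, 0, 0), obs1, true_w))      # ~0.8333 (Observation 1)
print(score((0, 1, 0, 1), obs2, true_w))      # 0.84375 (Observation 2)
print(score((0, 0, 1, 1), obs2, (0.8, 0.2)))  # ~0.9
print(score((0, 1, 0, 1), obs2, (0.5, 0.5)))  # 0.5625
```

The values match the table, including the tie at $0.9$ under $(0.8,0.2)$ between $(0,0,1,1)$ and the observed solution $(0,1,0,1)$ for the second observation.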
Hence, regarding the distance to the true preference vector, \eqref{pref} yields the better result. However, the preference vector $(0.5,0.5)$ mimics the observed solutions perfectly, as applying this vector to the two observations unambiguously yields the observed solutions. The preference vector $(0.8,0.2)$, on the other hand, returns the solution $(0,0,1,1)$ for the second observation, which has the same score of $0.9$ as the observed solution $(0,1,0,1)$. Hence, the solver could also have returned $(0,0,1,1)$ for the preference vector $(0.8,0.2)$, but it did not. Furthermore, it is noteworthy that the preference vector $(0.5,0.5)$ is also optimal for \eqref{pref} and could have been returned by the MIP solver. In particular, if there is no noise in the data, any optimal solution of \eqref{altpref} is also an optimal solution of \eqref{pref}, but not vice versa. Hence, in the absence of noise, \eqref{altpref} returns a more specific solution of \eqref{pref}, one that yields solutions most similar to the observed ones. However, if there is noise in the observations, the sets of optimal solutions of the two models do not necessarily intersect in this way. \end{document}
\begin{document} \thispagestyle{plain} \title{A counterexample to a conjecture of Ghosh} \author{Hung Hua} \address{Hung Hua {\texttt [email protected]}} \author{Elliot Krop} \address{Elliot Krop {\texttt [email protected]}} \author{Christopher Raridan} \address{Christopher Raridan {\texttt [email protected]}} \date{\today} \begin {abstract} We answer two questions of Shamik Ghosh in the negative. We show that there exists a lobster tree of diameter less than $6$ which accepts no $\alpha$-labeling with two central vertices labeled by the critical number and the maximum vertex label. We also give a simple example of a tree of diameter $4$ with an even-degree central vertex that does not accept the maximum label in any graceful labeling. \\[\baselineskip] 2010 Mathematics Subject Classification: 05C78 \\[\baselineskip] Keywords: graceful labeling, Graceful Tree Conjecture, Bermond's Conjecture, lobster \end {abstract} \maketitle \section{Introduction} For basic graph theoretic notation and definitions we refer to West~\cite{West}. The well-known graceful tree conjecture states that for any tree $T$ on $n$ vertices, there exists an injective vertex-labeling from $\{0,\dots, n-1\}$ such that the set of edge-weights, defined for each edge as the absolute difference of the labels on its incident vertices, is $\{1,\dots, n-1\}$. We call a labeling $f$ \emph{bipartite} if there exists an integer $c$ such that for any edge $uv$, either $f(u)\leq c < f(v)$ or $f(v)\leq c < f(u)$. A bipartite labeling that is graceful is called an $\alpha$\emph{-labeling}. We call the number $c$ the \emph{critical number}. For any tree $T$, let $P$ be a longest path in $T$ and call $T$ a $k$-distant tree if all of its vertices are at distance at most $k$ from $P$. At this time, the conjecture remains open for $k$-distant trees with $k \geq 2$. For $2$-distant trees, or \emph{lobsters}, the problem is known as Bermond's conjecture~\cite{Bermond}.
Recently, in the pursuit of obtaining graceful lobsters from smaller graceful lobsters, Ghosh \cite{Ghosh} asked two questions, which, if answered in the affirmative, could have led to the solution of Bermond's conjecture. We address both of these questions: \begin{q}\cite{Ghosh} Does there exist an $\alpha$-labeling of a lobster of diameter at most $5$ such that the central vertices are labeled by the critical number and the maximum label? \end{q} \begin{q}\cite{Ghosh} For any tree $T$ of diameter $4$, with central vertex $v$ of even degree, does there exist a graceful labeling of $T$ such that the label on $v$ is the maximum label? \end{q} Note: The trees considered in Question $1$ were those with two central vertices. In the argument below we answer this as well as another related question. \begin{defn} A vertex $v$ in a tree $T$ is an \emph{almost central vertex} if $v$ is adjacent to a central vertex and lies on a longest path of $T$. \end{defn} \begin{q} Does there exist an $\alpha$-labeling of a lobster of diameter at most $5$ such that the central vertex and an almost central vertex are labeled by the critical number and the maximum label, respectively? \end{q} \section{An example} Let $T$ be the following simple $1$-distant tree. \begin{figure} \caption{$T$} \label{T} \end{figure} Van Bussel \cite{VB} showed that $T$ is not $0$-centered, that is, it does not accept a graceful labeling with central vertex $v$ labeled $0$. By considering the complementary labelings (vertices relabeled by $n-1$ minus their old labels), we conclude that the central vertex cannot be labeled by the maximum label, $n-1$. This observation gives a negative answer to Question $2$. We consider Question $3$ and again refer to $T$. If this question is to be answered in the affirmative, $v$ must be labeled by the critical number $c$. Since we are looking for an $\alpha$-labeling, if $v$ is in the lesser-labeled partite set, then we must have $c=3$.
There are only a few cases left to consider: either $v_1$ or $v_2$ is labeled by the maximum label $5$. Since the vertex with the maximum label must be adjacent to the vertex labeled $0$, it is easy to verify that there is no graceful labeling of $T$ whether we label $v_1$ or $v_2$ by $5$. To answer Question $1$, we need another simple example. Let $S$ be the following simple $1$-distant tree. \begin{figure} \caption{$S$} \label{S} \end{figure} We notice that for an $\alpha$-labeling of $S$, the partite sets are $\{u_1,u_2,v,u_3\}$ and $\{v_1,v_2,v_3\}$. Assume first that $v$ is labeled by the maximum label, $6$. In this case the critical number is $2$, so to satisfy the conditions of Question $1$, label $v_2$ by $2$. Since the vertex with the maximum label must be adjacent to the vertex labeled $0$ in any graceful labeling, $v_1$ must be labeled $0$. Since $v_3$ is in the partite set with the lower labels, the label $1$ is forced on $v_3$. Notice that in this case $u_3$ cannot be labeled $5$; otherwise the weight $4$ would appear on two edges. Furthermore, it is easy to check that either a label of $3$ or $5$ on $u_3$ would produce two edges of the same weight in $S$. Next, consider the case when $v_2$ is labeled by $6$. In this case, the critical number is $3$, so we label $v$ by $3$. Arguing as above, $u_3$ must be labeled $0$, which leads to labeling $u_1$ and $u_2$ by $1$ and $2$. This leaves only two possibilities: either $v_1$ is labeled $4$ or $5$, and in either case the labeling is not graceful. We note, as suggested by the anonymous referee, that this example is vertex minimum in the sense of Question $1$. That is, it is impossible to delete $u_1$ (for example), since doing so would produce $P_6$, which does accept an $\alpha$-labeling such that the central vertices are labeled by the critical number and the maximum label. Furthermore, there are $\alpha$-labelings of $S$ in which $v_2$ receives either the critical number or the maximum label. \end{document}
Complex genetic disorders often involve products of multiple genes acting cooperatively. Hence, the pathophenotype is the outcome of perturbations in the underlying pathways, where gene products cooperate through various mechanisms such as protein-protein interactions. Pinpointing the decisive elements of such disease pathways is still challenging. Over the last few years, computational approaches exploiting interaction network topology have been successfully applied to prioritize individual genes involved in diseases. Although linkage intervals provide a list of disease-gene candidates, recent genome-wide studies demonstrate that genes not associated with any known linkage interval may also contribute to the disease phenotype. Network-based prioritization methods help to highlight such associations. Still, there is a need for robust methods that capture the interplay among disease-associated genes mediated by the topology of the network. Here, we propose a genome-wide network-based prioritization framework named GUILD. This framework implements four network-based disease-gene prioritization algorithms. We analyze the performance of these algorithms in dozens of disease phenotypes. The algorithms in GUILD are compared to state-of-the-art network-topology-based algorithms for the prioritization of genes. As a proof of principle, we investigate top-ranking genes in Alzheimer's disease (AD), diabetes and AIDS using disease-gene associations from various sources. We show that GUILD is able to significantly highlight disease-gene associations that are not used a priori. Our findings suggest that GUILD helps to identify genes implicated in the pathology of human disorders independent of the loci associated with the disorders. Copyright: © Guney, Oliva.
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: Departament d'Educació i Universitats de la Generalitat de Catalunya i del Fons Social Europeu (Department of Education and Universities of the Generalitat of Catalonia and the European Social Fund). Spanish Ministry of Science and Innovation (MICINN), FEDER (Fonds Européen de Développement Régional) BIO2008-0205, BIO2011-22568, PSE-0100000-2007, and PSE-0100000-2009; and by EU grant EraSysbio+ (SHIPREC) Euroinvestigación (EUI2009-04018). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Genetic diversity is augmented by variations in genetic sequence; however, not all mutations are beneficial for the organism. Coupled with environmental factors, these variations can disrupt the complex machinery of the cell and cause functional abnormalities. Over the past few decades, a substantial amount of effort has been exerted towards explaining sequential variations in human DNA and their consequences on human biology. Linkage analysis, association studies and genome-wide association studies (GWAS) have achieved considerable success in identifying causal loci of human disorders, albeit with limitations. Complex genetic disorders implicate several genes involved in various biological processes. Interactions of the proteins of these genes have helped extend our view of the genetic causes of common diseases. Genes related to a particular disease phenotype (disease genes) have been demonstrated to be highly connected in the interaction network (e.g., in toxicity modulation and cancer).
Yet, rather than having random connections through the network, the interactions of proteins encoded by genes implicated in such phenotypes involve partners from similar disease phenotypes. Linkage analysis typically associates certain chromosomal loci (linkage intervals) with a particular disease phenotype. Such analysis produces a set of candidate genes within the linkage interval. Recent studies have confirmed the usefulness of network-based approaches to prioritize such candidate disease genes based on their proximity to known disease genes (seeds) in the network. These studies can be distinguished by the way they define proximity between gene products in the network of protein-protein interactions: proximity has been defined by considering direct neighborhood, by ranking with respect to the shortest distance between disease genes, or by methods based on random walks on the edges of the network. Making use of the global topology of the network, random-walk-based methods have been shown to perform better than local approaches. Two inherent properties of available data on protein-protein interactions (PPI) that affect the prioritization methods are incompleteness (false negatives) and noise (false positives). The bias towards highly connected known disease nodes in protein interaction networks has recently motivated statistical adjustment methods on top of the association scores computed by prioritization algorithms, where node scores are normalized using random networks. Furthermore, taking network quality into consideration, several approaches incorporate gene expression and data on functional similarity in addition to physical PPIs. Gene prioritization is then based on the integrated functional network, redefining "gene neighborhood" at the functional level. Network-based approaches can also aid in identifying novel disease genes, even when the associated linkage intervals are not considered, for instance to prioritize genes from GWAS.
In fact, using the whole genome to prioritize disease-gene variants is expected to produce more robust results in identifying modest-risk disease-gene variants than using high-risk alleles. Nonetheless, existing prioritization methods suffer substantially from the lack of linkage interval information and depend on the quality of the interaction network. Thus, to identify genes implicated in diseases, robust methods are needed that exploit interaction networks to capture the communication mechanisms between genes involved in similar disease phenotypes. Available network-topology-based prioritization methods treat all the paths in the network as equally relevant for the pathology. We hypothesize that the communication between nodes of the network (proteins) can be captured by taking into account the "relevance" of the paths connecting disease-associated nodes. Here, we present GUILD (Genes Underlying Inheritance Linked Disorders), a network-based disease-gene prioritization framework. GUILD proposes four topology-based ranking algorithms: NetShort, NetZcore, NetScore and NetCombo. Additionally, several other state-of-the-art algorithms that use global network topology have been included in GUILD: PageRank with priors (as used in ToppNet), Functional Flow, Random walk with restart and Network propagation. The framework uses known disease genes and interactions between the products of these genes. We show the effectiveness of the proposed prioritization methods developed under the GUILD framework for genome-wide prioritization. We also use several interaction data sets with different characteristics for various disease phenotypes to evaluate the classifier performance of these methods. As a proof of principle, we use GUILD to pinpoint genes involved in the pathology of Alzheimer's disease (AD), diabetes and AIDS. GUILD is freely available for download at http://sbi.imim.es/GUILD.php.
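For background, the random-walk-with-restart scheme referenced above can be sketched in a few lines. The toy adjacency matrix, restart probability and tolerance below are illustrative assumptions, not GUILD's actual code or parameters:

```python
import numpy as np

def random_walk_with_restart(adj, seeds, restart=0.5, tol=1e-10):
    """Iterate p <- (1 - r) * W p + r * p0, where W is the column-normalized
    adjacency matrix and p0 is uniform over the seed nodes."""
    n = adj.shape[0]
    W = adj / adj.sum(axis=0)  # column-normalize (assumes no isolated node)
    p0 = np.zeros(n)
    p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_next = (1 - restart) * (W @ p) + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy path network 0-1-2-3 with the single seed node 0.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(adj, seeds=[0])
print(scores.round(3))  # approximately [0.578 0.311 0.089 0.022]
```

The scores sum to one and decay with distance from the seed; scores of this kind are the basis for ranking non-seed genes in such algorithms.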
We tested the prioritization algorithms in GUILD using three sources of gene-phenotype association and the largest connected components of five different protein-protein interaction networks (see "Methods" for details and names of these sets). The area under the ROC curve (AUC) was used to compare each ranking method (four novel methods, NetScore, NetZcore, NetShort and NetCombo, and four existing state-of-the-art methods, Functional Flow, PageRank with priors, Random walk with restart and Network propagation). The AUCs for each method averaged over all disorders in different disease data sets (OMIM, Goh and Chen) and interaction data sets (Goh, Entrez, PPI, bPPI, weighted bPPI) are given in Table 1 (see Table S1 for the AUC values averaged over diseases on each interaction network separately). We also compared the ratio of seeds covered (sensitivity) among the top 1% predictions of each method (Table 1). In general, our methods produced more accurate predictions and better sensitivity in genome-wide prioritization than the up-to-date algorithms with which we compared them. NetCombo, the consensus method combining NetScore, NetZcore and NetShort, proved to be an effective prioritization strategy independent of the data set used. NetCombo produced significantly better predictions than Network propagation, the best of the state-of-the-art approaches tested, on each data set (P≤5.7e-6; see Table S2 for associated p-values). The improvement of NetScore over Network propagation was also significant in the Goh and Chen data sets (P≤8.2e-5). Figure S1 compares the significant improvements in AUC. We also tested alternative ways to combine prioritization methods; however, none of the combinations using other methods proved as effective as combining the three methods included in NetCombo.
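The AUC used in these comparisons equals the probability that a randomly chosen seed (positive) gene is ranked above a randomly chosen negative gene, i.e. the normalized Mann-Whitney statistic. The sketch below, with made-up scores, is a generic illustration rather than the evaluation code of the study:

```python
def auc(positive_scores, negative_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the fraction
    of (positive, negative) pairs in which the positive outranks the
    negative, with ties counting one half."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positive_scores) * len(negative_scores))

# Hypothetical prioritization scores for seed and non-seed genes.
seed_scores = [0.9, 0.8, 0.4]
non_seed_scores = [0.7, 0.3, 0.2, 0.1]
print(auc(seed_scores, non_seed_scores))  # 11/12, i.e. ~0.9167
```

An AUC of 1 means every seed outranks every negative, and 0.5 corresponds to random ordering.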
Details showing the average AUC and sensitivity among the top 1% high-scoring genes of each disorder for each prioritization method using the OMIM, Goh and Chen data sets on each interaction network can be found in Tables S3 and S4. In order to avoid bias towards highly studied diseases, we used an equal number of gold standard positive and negative instances by grouping all the non-seed scores into k groups, where k is the number of seeds associated with the disease under evaluation (see "Methods"). Considering that the distribution of disease-associated genes among all the genes is not known a priori, this assumption provided a fairer testing set for comparing different prediction methods than using all non-seeds as negatives or using only a random subsample of non-seeds. We also compared the prioritization methods when all non-seeds were assumed to be negatives. The AUC values increased for all methods on all data sets (by up to 10%). In all tests, NetCombo and NetScore outperformed the existing prioritization methods (see Table S5). The prediction performance of these methods depended on the topology of the network and the quality of the knowledge of protein-protein interactions with regard to size and reliability. We grouped the AUCs of all disorders by network type to test these dependencies (see "Methods" for network definitions). The distribution of AUCs for each interaction data set using the OMIM, Goh and Chen data sets is given in Figure 1 (see Figure S2 for the distribution of sensitivity values among the top 1% predictions). Interestingly, most of the methods produced their best results with the weighted bPPI network, which used the scores from the STRING database to weight the edges (see Table S1 for the average AUC). The improvement of the prediction performance using edge confidence values from STRING was significant for most methods (with the exception of the NetShort and Random walk with restart algorithms, for which the performance improved, but not significantly).
These results confirm the importance of network quality (i.e. using reliable binary interactions).

Figure 1. Prediction performance of GUILD approaches on each interaction network over all phenotypes of the OMIM, Goh and Chen data sets. The distribution of AUCs for different phenotypes in each network is represented with a box-plot of a different color. Color legend: red, Goh network; yellow, Entrez network; green, PPI network; blue, bPPI network; purple, weighted bPPI network.

Furthermore, we hypothesized that removing interactions detected by pull-down methods, such as Tandem Affinity Purification (TAP), would filter out the noise produced by false binary interactions; consistent with this, both the AUC and the sensitivity among top-ranked predictions increased when the bPPI network was used instead of the PPI network (see Table S1). Our results indicated that network size was also relevant when binary interactions were used. The Goh network, which was smaller than the bPPI network, produced significantly lower AUC values for the majority of prioritization methods (all but NetShort). Thus, the use of the largest possible network with assessed binary interactions could improve the predictions. Based on the AUC values for each phenotype when the bPPI network was used, NetCombo, NetScore, NetZcore and NetShort were significantly better than Functional Flow, PageRank with priors and Random walk with restart. NetCombo had an average AUC of 74.7% using the bPPI network on the OMIM data set and was the only method with an AUC above 70% (Table S1). However, when the weighted bPPI network was used to study the same data set, the AUCs of the NetScore and NetZcore methods also surpassed this threshold, with values around 74% and 72%, respectively (NetCombo achieved 76.5% AUC in this case). Next, we questioned whether the prediction methods depended on the connectivity between seeds using the OMIM, Goh and Chen data sets.
Table 2 shows the correlation between the average AUC of the prioritization methods and graph features involving the seeds of each disease phenotype in the bPPI network (number of seeds, number of neighboring seeds, and average shortest path length between seeds). A small inverse correlation was found between the average length of the shortest paths connecting seeds and the prediction capacity of all methods. This correlation was observed when using any of the interaction networks; therefore, it was independent of the underlying network. The average number of neighboring seeds also correlated with prediction performance, but less strongly than the average length of the shortest paths connecting the seeds.

Table 2. Correlations between the prediction performances of the methods, measured as the average AUC over phenotypes, and seed connectivity values (associated p-values are included in parentheses).

We also questioned whether our methods depended on the number of seeds associated with a disorder using the OMIM, Goh and Chen data sets. We addressed the dependence on the number of seeds by splitting all disorders into two groups with respect to the number of seeds (i.e. using the median of the distribution of seeds associated with the diseases). There were 65 disorders with fewer than 23 seeds (the median number of seeds) and 67 disorders with at least 23 seeds (2 disorders had exactly 23 seeds). Figure 2 shows the AUC distribution of the eight methods studied for these two groups using the bPPI network. In general, the AUCs were similar in the two groups, supporting the lack of correlation between the number of seeds and the AUC in Table 2. The differences between the AUCs of the two groups were only significant for NetCombo, NetShort and Network propagation (all associated p-values are below 0.009, assessed by a non-paired Wilcoxon test). This was consistent with the anti-correlation observed between the number of seeds and the AUC for these methods.

Figure 2. Dependence on the number of seeds.
Tests and evaluations were performed using the human bPPI network and genes from the OMIM, Chen and Goh disease phenotypes. Box plots of the AUCs are based on the predictions of disease-gene associations for disorders with fewer than 23 seeds (light gray) and disorders with at least 23 seeds (dark gray) using each prioritization method.

Using the disease-gene association information in the OMIM data set and the proposed consensus prioritization method (NetCombo) on the human interactome, we calculated the disease-association scores of all genes in the network for Alzheimer's disease (AD), diabetes and AIDS, three phenotypes with relatively high prevalence in society. In order to check the validity of these scores, we used disease-gene associations from the Comparative Toxicogenomics Database (CTD), the Genetic Association Database (GAD) and available expert-curated data sets (see "Methods" for details). Moreover, we analyzed the GO functional enrichment of the top-ranking genes. First, we used the disease-gene associations in CTD to confirm the biological significance of the scores calculated by the prioritization method for these three diseases. We retrieved direct and indirect disease-gene associations from CTD. We compared the distribution of the scores assigned by NetCombo in the "direct association group" with the distribution of these scores in the "no-association group" and with the distribution in the "indirect association group" (see "Methods" for details). In all three examples, the scores were significantly higher for the direct disease-gene associations than for the indirect associations or no associations (see Figure 3 and Table S6). In the analysis of AD and AIDS, more than 40% of the CTD disease-genes had a NetCombo score higher than 0.1. Moreover, only around 5% of the genes in the no-association group of each disease had scores higher than 0.1, and the mean of the direct association group was significantly higher than the mean of the indirect association group (Table S6).
Figure 3. Cumulative percentage of disease-genes with direct associations in CTD (dark gray) and non-associated genes (light gray) as a function of the NetCombo score for Alzheimer's disease (A), diabetes (B), and AIDS (C).

Second, we checked how many of the gene-disease associations in GAD coincided with the top-ranking genes for each phenotype (AD, diabetes and AIDS). The top-ranking genes covered a significant number of genes in GAD (Table 3). The rankings of the highest-scoring genes for AD, diabetes and AIDS are given in Table S7. Then, we checked the GO functions enriched among the top-ranking genes (Table S8). GO enrichment in the subnetwork induced by the top-ranking genes in AD highlighted the role of the Notch signaling and amyloid processing pathways. The link between these pathways and the pathology of AD has been demonstrated recently. The enrichment of GO functions among the prioritized genes for AIDS and diabetes showed the relevance of biological processes triggered by the inflammatory response, such as cytokine and, in particular, chemokine activity. This result was also consistent with the literature.

Table 3. Number of genes (excluding seeds) in the top 1% using the NetCombo score and its significance with respect to the number of genes in GAD and in the network.

Finally, we analyzed the results for AD in further detail, showing that some top-ranked genes lay outside any known linkage interval associated with AD and still played a relevant role. Figure 4 shows the top-scoring genes for AD and the subnetwork induced by the interactions between their proteins.
The 17 AD seeds (disease-gene associations from OMIM) and the 106 genes prioritized by NetCombo involved several protein complexes and signaling pathways, such as the gamma-secretase complex, serine protease inhibitors, the cohesin complex, the structural maintenance of chromosomes (SMC) family, the short-chain dehydrogenases/reductases (SDR) family, the adamalysin (ADAM) family, the cytokine receptor family and the Notch signaling pathway. Some genes within these families have been demonstrated to be involved in AD pathology: ADAM10 (ADAM family), HSD17B10 (SDR family), and PSENEN, APH1A, APH1B, and NCSTN (gamma-secretase complex). It is worth mentioning that AD has been central to recent research efforts, but the mechanisms underlying the disorder are still far from understood. The accumulation of senile plaques and neurofibrillary tangles is postulated as the main cause of the disease. The gamma-secretase is involved in the cleavage of the amyloid precursor protein. This process produces the amyloid beta peptide, the primary constituent of the senile plaques in AD. Interestingly, the six genes predicted by the method (indicated by arrows in Figure 4) were not associated with AD in OMIM. Remarkably, only APH1A (1q21–q22) and PSENEN (19q13.13) lay under or close to a linkage interval associated with AD (i.e. 1q21, OMIM:611152; and 19q13.32, OMIM:107741), and none of the remaining four genes lay under or close to a known linkage interval associated with AD. Moreover, the subnetwork of top-ranking AD genes covered several genes in the expert-curated data set reported by Krauthammer et al., such as APBB1, VLDLR, SERPINA1 and BACE1 (p-value associated with this event < 1.3e-3).

Figure 4. Alzheimer's disease-associated top-scored proteins and their interactions. AD-implicated proteins identified using the NetCombo method on the weighted bPPI network with the OMIM AD data. High-scored proteins were selected at the top 1% level using NetCombo scores.
Proteins are labeled with the gene symbols of their corresponding genes. Edge thickness is proportional to the weight of the edge (assigned according to the STRING score). Red nodes are associated with AD. Diamond and round-rectangle nodes come from the OMIM AD set (seeds). Round-rectangle and red circle nodes have been associated with AD through the analysis of differential expression. The nodes highlighted with arrows (ADAM10, HSD17B10, PSENEN, APH1A, APH1B, NCSTN) have recently been reported in the literature to be involved in the pathology of AD.

The main contributions of this paper are twofold. First, we presented four novel methods that are comparable to, or outperform, state-of-the-art approaches in the use of protein-protein interactions to predict gene-phenotype associations at a genome-wide scale, extending the set of relevant genes of a phenotype. Second, we demonstrated to what extent these prioritization methods can be used to prioritize genes on multiple gene-phenotype association and interaction data sets. We investigated the prediction capacity and robustness of the approaches by testing their performance against the quality and number of interactions. Typically, network-based methods consider the paths between nodes equally relevant for a particular disease. The prioritization methods proposed in this study differ from others in the way information is transferred through the network topology. NetShort considered a path between nodes shorter if it contained more seeds (known disease-gene associations) than other paths. NetScore accounted for multiple shortest paths between nodes. NetZcore assessed the biological significance of the neighborhood configuration of a node using an ensemble of networks in which nodes were swapped randomly but the topology of the original network was preserved.
Our results demonstrated that combining different prioritization methods can exploit the global topology of the network better than existing methods. The prediction performance of the prioritization methods depended on the quality and size of the underlying interaction network; yet, this dependence affected the performance of the methods similarly. Improving the network quality also improved the predictions of all methods. On the other hand, the prediction accuracy of the prioritization methods showed a large variation depending on the phenotype under consideration, but this variation was reduced when a consensus method (NetCombo) was used. On average, the prediction performance was better on the Chen and OMIM data sets than on the Goh data set. It can be argued that this is because the Goh data set contains gene-phenotype associations where the phenotype is defined in a broader sense (i.e. by the physiological system affected). Still, the AUC values were consistent among the different data sets for all the prioritization methods. Although network-based prioritization of the whole genome provides a ranking of genes according to their phenotypic relevance, the interplay between genes in many diseases might not be captured by PPI information alone. In fact, for several phenotypes in the OMIM data set, such as amyloidosis, myasthenic, myocardial and xeroderma, the genes associated with the disease were predicted with high accuracy in our analysis, whereas for the mitochondrial, osteopetrosis and epilepsy phenotypes, network-based prioritization was less successful. The best AUC and coverage of disease genes among high-scored gene products were obtained with the largest network with highly confident interactions (in which interactions integrated from public repositories were filtered out if detected by TAP and edges were positively weighted using the scores provided by the STRING database). This improvement was significant for all proposed approaches.
The increased coverage and AUC when the bPPI network was used instead of the Goh and Entrez networks showed the benefit of integrating information from various data sources. Prioritization algorithms rely on the topology of the network; thus, increasing the number of known interactions should improve coverage. Nonetheless, interaction data integrated in this manner are prone to include false positives, and filtering out possible non-binary interactions (e.g., complexes identified by TAP) can improve the use of the integrated data. The hypothesis that the largest reliable set is required for the study of gene prioritization was supported by the increase in AUC when the bPPI network was used instead of the PPI network. The AUC values over dozens of different phenotypes that vary in the number of initial gene-phenotype associations showed the applicability of the methods independent of the number of genes originally associated with the phenotype. Moreover, having a larger number of seeds associated with a pathophenotype did not necessarily improve the prediction accuracy. Most prioritization methods achieved better performance for disorders with a low number of seeds. This difference in performance was significant for NetCombo, NetShort and Network propagation. In fact, the accuracy of the predictions correlated instead with the average shortest path length between seeds, which shows the importance of the topology of the network. We applied the prioritization methods to study the implication of genes in AD, diabetes and AIDS. We reasoned that the genes discovered in the high-scoring portion of the network would be more likely to be involved in the pathology of these diseases. Therefore, we further analyzed the genes prioritized by NetCombo using the human bPPI network. We verified that some of these predictions were consistent with the literature and that the scores assigned by GUILD distinguished between the genes associated with a specific disease and the rest of the genes.
We note that we merged the entries for diabetes type 1 and type 2 in OMIM and defined them jointly as the "diabetes phenotype". This may explain why 1) the top-ranking genes predicted for diabetes covered relatively fewer genes in GAD (assessed by the hypergeometric p-value) than those for AD and AIDS; and 2) the genes with direct associations were more easily segregated by NetCombo scores for AD and AIDS than for diabetes. Furthermore, we showed that the groups of genes predicted to be associated with these three phenotypes were enriched in biological processes related to the disease. In AD, the top-ranking genes formed a subnetwork implicating the Notch and amyloid pathways, while the top-ranking genes for diabetes and AIDS were involved in inflammatory response mechanisms. Our analysis of these diseases suggested that our approach to whole-genome prioritization is an effective way to discover novel genes contributing to the pathology of diseases. Based on this study, we have shown that the new approaches (NetCombo, NetShort, NetScore, and NetZcore) improve on the results of state-of-the-art algorithms such as Functional Flow, PageRank with priors, Random walk with restart and Network propagation. It is worth mentioning that PageRank with priors and Random walk with restart have previously been adopted to address genome-wide disease-gene prioritization. Furthermore, a variation of the Random walk with restart algorithm that incorporates phenotypic similarity was recently proposed. Since our aim was to compare the algorithms with each other, we evaluated them on the same benchmarking data set using only the initial disease-gene associations and the interaction network. Finally, we made all eight methods publicly available in the GUILD framework. Overall, our results suggest that human diseases employ different mechanisms of communication through their interactions.
Our analysis reveals a collective involvement of sets of genes in disorders and could be extended to identify higher-order macromolecular complexes and pathways associated with the phenotype. However, the use of a single, generic prioritization scheme may not be sufficient to complete the set of pathways affected by a disease and may require the use of more than one method. Furthermore, network-based prioritization methods that use only PPI information fail to identify disease-genes whose proteins do not interact with other proteins. Therefore, towards a comprehensive understanding of the biological pathways underlying diseases, the network-based prioritization methods suggested here can be complemented by incorporating gene expression, functional annotations or phenotypic similarity profiles and by using functional association networks rather than PPI networks. We used three human interactomes: i) the Goh network, the PPI network from the work of Goh et al., in which data were taken from two high-quality yeast two-hybrid experiments and PPIs obtained from the literature; ii) the Entrez network, a compilation of interactions from BIND and HPRD provided by NCBI (ftp://ftp.ncbi.nih.gov/gene/GeneRIF/interactions.gz); and iii) the PPI network, the set of experimentally known PPIs integrated as in Garcia-Garcia and colleagues using BIANA (see Methods S1 for the details of the integration protocol). Considering that high-throughput pull-down interaction detection methods introduce many indirect relationships (such as involvement in the same complex) in addition to direct physical interactions, we removed the subset of interactions obtained by TAP, resulting in the bPPI network. Furthermore, we incorporated edge scores for the interactions between two proteins in this network using the STRING database. We refer to this network as the weighted bPPI network. In all other networks, the edge weights have the default value of 1.
When edge weights from STRING were used (in the weighted bPPI network), the scores given by STRING were rescaled to range between 0 and 1 and then added to the default value of 1. Note that the algorithms studied here depend solely on the topology of the network, implying that unconnected nodes and very small components cannot effectively transfer the relevant information along the network. Consequently, only the largest connected component of each network was used for the evaluation (see Table S9 for the sizes of the remaining components in the interaction networks). Hereafter, the term "network" refers to the largest connected component of the network unless otherwise stated. See Table S10 for a summary of the data contained in these interaction networks. Genes and their associated disorders were taken from: 1) the Online Mendelian Inheritance in Man (OMIM) database, 2) Goh et al. (referred to as the Goh data set throughout the text), and 3) Chen et al. (referred to as the Chen data set throughout the text). OMIM is one of the most comprehensive, authoritative and up-to-date repositories on human genes and genetic disorders. The information in OMIM is expert curated and includes the mutations on the genes associated with the disorders. Phenotypic associations for genes were extracted from the OMIM Morbid Map (omim.org/downloads, retrieved on November 4, 2011) by merging entries using the first name, as previously done. A disorder was considered if and only if it had at least 5 gene products in any of the interaction networks mentioned above (this data set is referred to as OMIM hereafter). Having 5 proteins in the interaction network was required for a five-fold cross-validation evaluation and also ensured that we tested the capacity to use global topology (in the case of few genes, the amount of annotation transfer is limited, diminishing the benefit of using network-based methods as opposed to direct-neighborhood approaches).
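The network preprocessing described above (rescaling STRING edge scores to [0, 1], adding them to the default weight of 1, and keeping only the largest connected component) can be sketched in Python as follows. Function names are hypothetical, and a min-max rescaling is assumed since the text does not specify the exact rescaling function:

```python
from collections import deque

def rescale_string_scores(edge_scores):
    """Rescale raw STRING scores to [0, 1] and add the default weight of 1.
    edge_scores: dict mapping (u, v) -> raw STRING score.
    Min-max rescaling is an assumption of this sketch."""
    lo, hi = min(edge_scores.values()), max(edge_scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all scores are equal
    return {e: 1.0 + (s - lo) / span for e, s in edge_scores.items()}

def largest_connected_component(nodes, edges):
    """Return the node set of the largest connected component (BFS)."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), set()
    for start in nodes:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        if len(comp) > len(best):
            best = comp
    return best
```

With this convention, unweighted networks (default weight 1) and the weighted bPPI network (weights in [1, 2]) are handled uniformly by the scoring algorithms.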
In the Goh data set, OMIM disorders (from December 2005) were manually classified into 22 disorder classes based on the physiological system affected (21 classes excluding the unclassified category). In the Chen data set, a total of 19 diseases were collected from OMIM and GAD. See Table S11 for a summary of the diseases used in this study. Additionally, we used an independent gene-phenotype association data set to optimize the required parameters of the prioritization methods (see below) without over-fitting the available gene-disease associations. This data set contains gene-disease associations identified by text mining PubMed abstracts using SCAIView for aneurysm (168 genes, keyword search "intracranial aneurysm", restricting the query to entries with the MeSH "genetics" term) and breast cancer (1588 genes, as for aneurysm but using "breast cancer" as the keyword). These genes are listed in Table S12. Genes associated with a disorder were mapped to their products (proteins) in the protein-protein interaction network and assigned an initial score for their phenotypic relevance. Thus, proteins encoded by genes known to be involved in a particular pathology were termed seeds and had the highest scores in the network. All other proteins in the network were assigned non-seed scores (lower scores in the network). The number of proteins (nodes) and interactions (edges) in all interaction networks used in this study are given in Table S10. Table S11 summarizes all diseases used in the context of this study, the number of genes associated with them and the number of corresponding proteins encoded by these genes that are covered by the largest connected component of the network. NetShort is motivated by the idea that a node important for a given phenotype would have shorter distances to other seed nodes in the network.
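The seed-mapping step just described might look as follows. Function and argument names are hypothetical; the 1.0 and 0.01 defaults correspond to the optimized initial seed and non-seed scores reported later in the Methods:

```python
def assign_initial_scores(network_nodes, disease_genes, gene_to_protein,
                          seed_score=1.0, non_seed_score=0.01):
    """Map disease genes to their proteins in the network and assign initial
    phenotypic-relevance scores: seeds get the high score, every other node
    gets the low non-seed score. Genes without a protein in the network are
    silently skipped, mirroring the restriction to the largest component."""
    seeds = {gene_to_protein[g] for g in disease_genes
             if g in gene_to_protein and gene_to_protein[g] in network_nodes}
    return {node: (seed_score if node in seeds else non_seed_score)
            for node in network_nodes}
```

The resulting score dictionary is the common input shared by all the prioritization algorithms described next.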
As opposed to previous approaches that employ shortest paths, we incorporate the "disease-relevance" of the path between a node and disease nodes by considering not only the number of links needed to reach the disease-associated node but also the number of disease-associated nodes included in the path. Thus, we modify the length (weight) of the links in the shortest path algorithm such that links connecting seed nodes are shorter than links connecting non-seed nodes. Formally, the score of a node u is defined as

score(u) = Σ_{v ∈ seeds} 1 / d(u,v),

where d(u,v) is the shortest path length between nodes u and v with weighted edges of the graph G(V,E,f). The graph is defined by the nodes V, the edges E, and the edge weight mapping function f: E → R+. The weight f(i,j) is given by the multiplication of the edge score and the average of the initial scores of both nodes:

f(i,j) = edge_score(i,j) × (score0(i) + score0(j)) / 2.

This definition implies that an edge is short when the scores of the nodes forming it are high (e.g. when they are seeds) and long otherwise. NetZcore assesses the relevance of a node for a given phenotype by normalizing the scores of the nodes in a network with respect to a set of random networks with similar topology. Intuitively, NetZcore extends the direct neighborhood approach, in which all the neighbors of a node contribute to its relevance, to a normalized direct neighborhood. It highlights the relevance of the node compared to the background distribution of the relevance of neighboring nodes (using random networks). The score of a node is calculated as the average of the scores of its neighboring nodes. This score is then normalized using the z-score formula

z(u) = (score(u) − μ_u) / σ_u,

where μ_u and σ_u are the mean and standard deviation of the distribution of scores of u in a set of random networks with the same topology as the original graph. Networks with the same topology are generated such that a node u having degree d is swapped with another node v in the network with the same degree d. In this study, we use a set of 100 random networks.
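The NetShort scoring defined above can be sketched in pure Python. All names are illustrative; in particular, since the text states that edges between high-scoring nodes should be short, this sketch assumes the reciprocal of f(i,j) is used as the traversal length in Dijkstra's algorithm, which is an interpretation rather than the published implementation:

```python
import heapq

def netshort_scores(adj, init_score, seeds):
    """Sketch of NetShort: score(u) = sum over seeds v of 1/d(u,v), where
    edge lengths are derived from f(i,j) = edge_score * avg(initial scores).
    adj: dict node -> dict neighbor -> edge_score."""
    def dijkstra(src):
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float('inf')):
                continue  # stale heap entry
            for v, w in adj[u].items():
                f = w * (init_score[u] + init_score[v]) / 2.0
                nd = d + 1.0 / f  # assumption: edge length = 1/f, so
                                  # high-scoring (seed) edges are short
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    scores = {}
    for u in adj:
        dist = dijkstra(u)
        scores[u] = sum(1.0 / dist[v] for v in seeds if v != u and v in dist)
    return scores
```

Nodes whose paths to the seeds run through other seeds thus accumulate higher scores than nodes at the same hop distance through non-seed regions.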
The process of calculating node scores based on the neighbors' scores using random networks is repeated a number of times (iterations), specified by the user, in order to propagate the information along the links of the network. The iteration number k varies from 1 to a maximum (MaxZ), a specific parameter of the method, and score_k(u) at iteration k is calculated as

score_k(u) = (1 / |Nb(u)|) × Σ_{v ∈ Nb(u)} f(u,v) × score_{k−1}(v),

where, in a graph G(V,E,f) with nodes V and edges E, Nb(u) is the set of neighbors of node u and f(u,v) = weight(u,v) is an edge weight mapping function. Note that NetZcore incorporates the statistical adjustment method suggested by Erten and colleagues into the scoring by both normalizing and propagating the scores at each iteration. NetScore is based on the propagation of information through the nodes in the network by considering multiple shortest paths from the source of the information to the target and ignoring all other paths between them. To calculate the information passed through all the shortest paths between two nodes, NetScore uses a message-passing scheme in which each node sends its associated information as a message to its neighbors and then, iteratively, to their neighbors (pseudo-code is given in Figure S3). Each message contains the identity of the emitter node and the path weight (defined as the multiplication of the edge weights of the path that the message has traveled). Messages are stored in each node so that only the first messages arriving from a given node are considered (i.e. the messages arriving through all the shortest paths from that node). At the end of each iteration, the score of a node is defined as the average score of the messages received. The score carried by a message is calculated as the score of the emitter multiplied by the path weight. Thus, at iteration k, a node holds the scores of the nodes reaching it through shortest paths of length k (more than once if multiple shortest paths exist), weighted by the edge weights in these paths.
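The NetZcore update described above (average of neighbor scores, z-normalized against an ensemble of topology-preserving random networks) can be sketched as follows for one unweighted iteration. Function and argument names are illustrative, and the actual method uses 100 degree-preserving random networks rather than the handful assumed here:

```python
from statistics import mean, pstdev

def netzcore_iteration(adj, scores, random_adjs):
    """One NetZcore iteration (sketch): each node takes the average of its
    neighbors' scores; this raw score is then z-normalized against the same
    computation performed on an ensemble of degree-preserving random
    networks. adj / random_adjs: dict node -> set of neighbors."""
    def neighbor_avg(a, s):
        return {u: (mean(s[v] for v in a[u]) if a[u] else 0.0) for u in a}

    raw = neighbor_avg(adj, scores)
    samples = {u: [] for u in adj}
    for r in random_adjs:  # background distribution from random networks
        rr = neighbor_avg(r, scores)
        for u in adj:
            samples[u].append(rr[u])
    out = {}
    for u in adj:
        mu, sd = mean(samples[u]), pstdev(samples[u])
        out[u] = (raw[u] - mu) / sd if sd > 0 else raw[u] - mu
    return out
```

Repeating the call MaxZ times, feeding each output back in as the new score vector, gives the propagation behavior the text describes.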
Considering that storing all the messages coming from the k-neighborhood introduces a memory and time penalty, we restrict the number of iterations during the score calculation to a maximum (MaxS). To cover the whole diameter of the network, we repeat the scoring with updated scores after emptying the message arrays (resetting the node scores to the scores accumulated in the last iteration). Therefore, in addition to the number of iterations (MaxS), NetScore uses the number of repetitions (NR) as a parameter of the algorithm. NetCombo combines NetScore, NetShort and NetZcore in a consensus scheme by averaging the normalized score of each prioritization method. The normalized score of a prioritization method for a node n is calculated using the distribution of the scores produced by this method: the mean of the scores of all nodes prioritized by the method is subtracted from the score of node n, and the result is divided by the standard deviation of the distribution. In addition to the four methods above, four state-of-the-art algorithms have been included in GUILD for prediction performance comparison purposes. These methods are PageRank with priors (as used in ToppNet), Functional Flow, Random walk with restart and Network propagation. See Methods S1 for the details of the implementation of these methods. PageRank with priors has recently been proven to be superior to available topology-based prioritization methods. The method based on random walk with restart proposed by Kohler et al. and the propagation algorithm by Vanunu et al. are both conceptually similar to PageRank with priors and differ in the way they incorporate link weights (edge scores). We also applied Functional Flow, a global network topology-based method that originally addressed the functional annotation problem. To evaluate the prioritization methods, we used five-fold cross-validation on the three gene-phenotype annotation data sets mentioned above.
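The NetCombo consensus just described (z-normalize each method's score distribution, then average per node) reduces to a few lines. This is an illustrative sketch, not the GUILD implementation itself:

```python
from statistics import mean, pstdev

def netcombo(score_dicts):
    """NetCombo consensus (sketch): z-normalize the score distribution of
    each prioritization method (e.g. NetScore, NetZcore, NetShort), then
    average the normalized scores per node."""
    normalized = []
    for scores in score_dicts:
        mu, sd = mean(scores.values()), pstdev(scores.values())
        normalized.append({u: (s - mu) / sd if sd > 0 else 0.0
                           for u, s in scores.items()})
    return {u: mean(n[u] for n in normalized) for u in score_dicts[0]}
```

Because each method is normalized to its own distribution before averaging, no single method's score scale dominates the consensus ranking.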
Proteins known to be associated with a phenotype (seeds) were split into five groups; four of them were used as seeds for the prioritization methods and the remaining group was used to evaluate the predictions. This process was repeated five times, changing the evaluation group each time. The area under the ROC curve (AUC) and the sensitivity were averaged over the five folds. These averages and their standard deviations were used to assess the quality of the predictions and compare the methods. A ROC (receiver operating characteristic) curve plots the true positive rate (sensitivity) against the false positive rate (1−specificity) while the threshold for considering a prediction positive is varied. The AUC is the area under this curve and corresponds to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one. The ROCR package was used to calculate these performance metrics; the selection of positive and negative instance scores is explained in the next paragraph. In the context of functional annotation and gene-phenotype association studies, obtaining negative data (proteins/genes that have no effect on a disease, disorder, or phenotype) is a challenge. We tackled this problem with an alternative procedure. First, all proteins not associated with a particular disease (or phenotype) were treated as potential negatives. Then, we used a random sampling (without replacement) of the potential negatives to calculate an average score. This score was defined as the score of a negative instance. We calculated as many scores of negative instances as there were positive instances (seeds) in the evaluation set. We ensured that each of the potential negatives was included in one of the random samples by setting the sample size equal to the number of all potential negatives divided by the number of seeds.
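The negative-sampling procedure above can be sketched as follows. The function name is hypothetical, and a fixed-seed RNG is used only to keep the sketch deterministic:

```python
import random

def negative_instance_scores(non_seed_scores, n_seeds, rng=None):
    """Build as many negative instance scores as there are positives (seeds):
    shuffle all non-seed scores, split them into n_seeds groups of (roughly)
    equal size, and use the average score of each group as one negative
    instance. Every potential negative contributes to exactly one sample."""
    rng = rng or random.Random(0)  # fixed seed for a deterministic sketch
    scores = list(non_seed_scores)
    rng.shuffle(scores)
    size = max(1, len(scores) // n_seeds)
    groups = [scores[i * size:(i + 1) * size] for i in range(n_seeds)]
    groups[-1].extend(scores[n_seeds * size:])  # leftovers join the last group
    return [sum(g) / len(g) for g in groups]
```

Averaging within each group is what keeps the evaluation balanced: the ROC curve then compares k positive scores against k negative scores.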
Using this procedure, we had the same number of positive and negative scores, and the probability of choosing a positive instance by chance was 0.5. We used the aforementioned data sets for aneurysm and breast cancer to optimize the initial scores of seeds and non-seeds and the following parameters of the prioritization methods: MaxZ for NetZcore, MaxF for Functional Flow, and MaxS and NR for NetScore. For each of these parameters, the values that resulted in the largest average five-fold cross-validation AUC were selected. The optimal values for the initial scores of seeds and non-seeds were identified as 1.00 and 0.01, respectively, among the values tested (1.00 or the text mining score associated with the seed, for seeds; and 0.01, 1.0e-3, 1.0e-5 or 0, for non-seeds). The number of iterations for NetZcore (MaxZ) and Functional Flow (MaxF) was 5; in the case of Functional Flow, 5 was also the limit specified by the authors. For NetScore, the optimized values were two iterations (MaxS) with three repetitions (NR). To test the significance of the AUC differences between a pair of networks, or a pair of prioritization approaches, the one-sided Wilcoxon test was used. The alternative hypothesis was that the mean AUC of the network (or prioritization method) under consideration was greater than that of the other network (or prioritization method) under test. No assumption was made regarding the normality of the distribution of AUCs, and the AUCs were paired over the variable of concern (either network type or prioritization method); thus, a non-parametric paired test was applied. Alpha values were set to 0.05. The values for the samples of the random variable subject to the statistical test are given in the relevant supplementary tables. R software (http://www.r-project.org) was used to compute the statistics. We investigated the relationship between the prediction performance of the prioritization methods and the connectivity of seeds in the network.
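The probabilistic interpretation of the AUC used throughout this evaluation (the probability of ranking a random positive above a random negative) can be computed directly by exhaustive pairwise comparison. This small equivalent is given only for illustration and is not the ROCR computation itself:

```python
def auc_probability(positive_scores, negative_scores):
    """AUC as the probability that a randomly chosen positive instance is
    ranked above a randomly chosen negative one; ties count as one half."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # tie: positive wins half the time
    return wins / (len(positive_scores) * len(negative_scores))
```

With the balanced positive/negative construction above, a random scorer yields 0.5 and a perfect scorer yields 1.0, matching the chance level stated in the text.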
We calculated the average number of neighbor seeds and the average shortest path distance between each pair of seeds for each phenotype, as in Navlakha and Kingsford. The average number of neighbor seeds (\(N_s\)) is given by \(N_s = \frac{1}{|S|} \sum_{s \in S} \sum_{u \in Nb(s)} X(u)\), where \(S\) is the set of seeds, \(Nb(s)\) is the set of nodes interacting with \(s\) (its neighbors), and \(X(u)\) is 1 if \(u\) belongs to \(S\) and 0 otherwise. Similarly, the average shortest path distance (\(S_s\)) is given by \(S_s = \frac{1}{|P|} \sum_{(s,v) \in P} d(s,v)\), where \(P\) is the set of all seed pairs and \(d(s,v)\) is the shortest distance between \(s\) and \(v\). We used the weighted bPPI network and the gene products of AD, diabetes and AIDS seeds according to OMIM to investigate high-scoring nodes (top 1%) obtained with the NetCombo algorithm. We calculated the scores by applying NetCombo and then selected 113 proteins (the top 1% of the 11,250 proteins in the network). These proteins were uniquely mapped to their corresponding gene symbols, yielding 106, 110 and 109 genes for AD, diabetes and AIDS, respectively. Next, we counted how many of these genes were listed in the Genetic Association Database (GAD) for each phenotype. GAD is a database that catalogs disease-gene associations curated from genetic association studies and collects findings of low significance in addition to those with high significance. We considered only the records in GAD that reported a positive association and merged the entries using the first name of the disease, as we did for the OMIM data set. In this analysis we excluded the seeds (disease-gene associations in OMIM). The p-values shown in Table 3 correspond to the probability of identifying GAD disease-gene associations in the top-ranking portion of the network, assuming a hypergeometric model. The level of significance was set to 0.05. For AD, we also checked whether the top-ranking genes covered the expert-curated genes implicated in AD pathology reported in Krauthammer et al. We analyzed the GO functional enrichment of the top-ranking genes using the FuncAssociate 2.0 web service.
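Both connectivity measures reduce to neighbor counting and breadth-first search on an unweighted graph. A self-contained sketch (hypothetical function names, not the authors' implementation) is:

```python
from collections import deque

def bfs_distances(adj, src):
    """Unweighted shortest-path distances from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def avg_neighbor_seeds(adj, seeds):
    """Ns: for each seed, count its neighbors that are also seeds; average."""
    seed_set = set(seeds)
    return sum(sum(1 for u in adj[s] if u in seed_set)
               for s in seed_set) / len(seed_set)

def avg_seed_distance(adj, seeds):
    """Ss: average shortest-path distance over all (ordered) pairs of
    distinct seeds that are connected in the network."""
    seed_list = sorted(set(seeds))
    total, pairs = 0, 0
    for s in seed_list:
        dist = bfs_distances(adj, s)
        for v in seed_list:
            if v != s and v in dist:
                total += dist[v]
                pairs += 1
    return total / pairs
```

For example, on a small adjacency-list graph where two seeds interact directly, avg_neighbor_seeds returns 1.0 and avg_seed_distance returns 1.0.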
The background consisted of all the genes in the network. A GO term was associated with a gene set if the adjusted p-value associated with the term was lower than 0.05. We used the disease-gene associations in the Comparative Toxicogenomics Database (CTD) to check the biological significance of the scores calculated by the prioritization method for AD, diabetes and AIDS. CTD contains both manually curated disease-gene associations (direct) and inferred disease-gene associations (indirect). Again, the entries were merged using the first name of the disease. The scores of the direct disease-genes, indirect disease-genes and no-association genes (not found in CTD) were grouped as the direct-association group, the indirect-association group and the no-association group. We tested the difference between the means of the score distributions using a one-tailed Student's t-test (with the alternative hypothesis that direct associations score higher; the alpha value was set to 0.05 as before). Comparison of the significance in prediction performance between prioritization methods. The significance of the differences in average AUC performance (averaged over all interaction networks and disease data sets) is represented as a heatmap. A dark blue color in a cell (i, j) of the heatmap denotes that the p-value associated with the one-sided Wilcoxon test for the comparison of AUCs between the ith and jth methods (where the alternative hypothesis is that the mean of the first is greater than that of the second) is smaller than or equal to 0.05. Ratio of successful predictions among the top 1% scores obtained by each method on each interaction network over all phenotypes of the OMIM, Goh and Chen data sets. The color legend is the same as in Figure 1 in the manuscript. Pseudo-code of the NetScore algorithm. The repetition part is handled inside the first for-loop, where the message arrays are reset. The inner for-loop goes over the iterations, where only "new" messages are accepted.
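The hypergeometric model mentioned above gives the probability of observing at least k annotated genes among the n top-ranking genes when K of the N network genes carry the annotation. A stdlib-only sketch of this upper-tail p-value (an illustration, not the authors' code):

```python
from math import comb

def hypergeom_pval(N, K, n, k):
    """Upper-tail hypergeometric probability P(X >= k) for drawing n genes
    out of N, of which K carry the annotation. math.comb returns 0 when
    the second argument exceeds the first, so impossible terms vanish."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

With the numbers quoted above, a call would look like hypergeom_pval(11250, K, 113, k), where K is the number of GAD-annotated genes in the network and k the observed overlap with the top 1%.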
At the end of each iteration, the score of a node is calculated based on the messages it received. Average AUC of the prioritization methods on each data set of seeds (OMIM, Goh and Chen) using different interaction networks (Goh, Entrez, PPI, bPPI and weighted bPPI). P-values associated with the paired Wilcoxon signed-rank test between Network Propagation and our two best prioritization methods on each data set, using average AUCs over all networks. AUC of the prioritization methods for each disorder and network. Sensitivity values at the top 1% predictions of the prioritization methods for each disorder and network. Five-fold AUC (%) for each method, averaged over all diseases within the data set and all interaction networks, considering all non-seeds (genes not associated with the diseases) as negatives. The average NetCombo scores (the standard deviation is given in parentheses) of CTD direct/indirect disease-genes and the genes with no association in CTD, and the p-value associated with the difference between these groups. Top-ranking genes in Alzheimer's Disease (AD), diabetes and AIDS identified by NetCombo (the top 1% high-scoring genes) using the weighted bPPI network and OMIM associations. Functional enrichment of high-scoring common genes in NetCombo for AD, diabetes and AIDS. Number and size of the connected components other than the largest connected component (LCC) in the network. Interaction data sets used in the analysis. Number of disease-gene associations covered in each network. Genes used for parameter optimization. Conceived and designed the experiments: EG BO. Performed the experiments: EG. Analyzed the data: EG BO. Contributed reagents/materials/analysis tools: EG. Wrote the paper: EG BO. 1. Altshuler D, Daly MJ, Lander ES (2008) Genetic Mapping in Human Disease. Science 322: 881–888 doi:https://doi.org/10.1126/science.1156409. 2.
Broeckel U, Schork NJ (2004) Identifying genes and genetic variation underlying human diseases and complex phenotypes via recombination mapping. The Journal of Physiology 554: 40–45 doi:https://doi.org/10.1113/jphysiol.2003.051128. 3. Hirschhorn JN, Daly MJ (2005) Genome-wide association studies for common diseases and complex traits. Nat Rev Genet 6: 95–108 doi:https://doi.org/10.1038/nrg1521. 4. Wang WY, Barratt BJ, Clayton DG, Todd JA (2005) Genome-wide association studies: theoretical and practical concerns. Nat Rev Genet 6: 109–118. 5. Kann MG (2007) Protein interactions and disease: computational approaches to uncover the etiology of diseases. Brief Bioinform 8: 333–346. 6. Ideker T, Sharan R (2008) Protein networks in disease. Genome Res 18: 644–652. 7. Barabasi A-L, Gulbahce N, Loscalzo J (2011) Network medicine: a network-based approach to human disease. Nat Rev Genet 12: 56–68 doi:https://doi.org/10.1038/nrg2918. 8. Said MR, Begley TJ, Oppenheim AV, Lauffenburger DA, Samson LD (2004) Global network analysis of phenotypic effects: protein networks and toxicity modulation in Saccharomyces cerevisiae. Proc Natl Acad Sci U S A 101: 18006–18011. 9. Wachi SY (2005) Interactome-transcriptome analysis reveals the high centrality of genes differentially expressed in lung cancer tissues. Bioinformatics 21: 4205–4208. 10. Jonsson PF, Bates PA (2006) Global Topological Features of Cancer Proteins in the Human Interactome. Bioinformatics 22: 2291–2297 doi:https://doi.org/10.1093/bioinformatics/btl390. 11. Gandhi TKB, Zhong J, Mathivanan S, Karthick L, Chandrika KN, et al. (2006) Analysis of the human protein interactome and comparison with yeast, worm and fly interaction datasets. Nature Genetics 38: 285–293 doi:https://doi.org/10.1038/ng1747. 12. Lim J, Hao T, Shaw C, Patel AJ, Szabo G, et al. (2006) A protein-protein interaction network for human inherited ataxias and disorders of Purkinje cell degeneration. Cell 125: 801–814. 13. 
Goh KI, Cusick ME, Valle D, Childs B, Vidal M, et al. (2007) The human disease network. Proc Natl Acad Sci U S A 104: 8685. 14. Lage KK (2007) A human phenome-interactome network of protein complexes implicated in genetic disorders. Nature Biotechnology 25: 309–316. 15. Oti MS (2006) Predicting disease genes using protein-protein interactions. British Medical Journal 43: 691. 16. Pujana MA, Han JD, Starita LM, Stevens KN, Tewari M, et al. (2007) Network modeling links breast cancer susceptibility and centrosome dysfunction. Nat Genet 39: 1338–1349. 17. Wu X, Jiang R, Zhang MQ, Li S (2008) Network-based global inference of human disease genes. Mol Syst Biol 4: 189. 18. Xu JL (2006) Discovering disease-genes by topological features in human protein-protein interaction network. Bioinformatics 22: 2800–2805. 19. Kohler S, Bauer S, Horn D, Robinson PN (2008) Walking the Interactome for Prioritization of Candidate Disease Genes. The American Journal of Human Genetics 82: 949–958 doi:https://doi.org/10.1016/j.ajhg.2008.02.013. 20. Franke L, van Bakel H, Fokkens L, de Jong ED, Egmont-Petersen M, et al. (2006) Reconstruction of a functional human gene network, with an application for prioritizing positional candidate genes. Am J Hum Genet 78: 1011–1025. 21. Dezso Z, Nikolsky Y, Nikolskaya T, Miller J, Cherba D, et al. (2009) Identifying disease-specific genes based on their topological significance in protein networks. BMC Syst Biol 3: 36. 22. Vanunu O, Magger O, Ruppin E, Shlomi T, Sharan R (2010) Associating genes and protein complexes with disease via network propagation. PLoS computational biology 6: e1000641. 23. Chen J, Aronow B, Jegga A (2009) Disease candidate gene identification and prioritization using protein interaction networks. BMC bioinformatics 10: 73. 24. Navlakha S, Kingsford C (2010) The Power of Protein Interaction Networks for Associating Genes with Diseases. Bioinformatics 26: 1057–1063 doi:https://doi.org/10.1093/bioinformatics/btq076. 25. 
Erten S, Bebek G, Ewing RM, Koyuturk M (2011) DADA: Degree-Aware Algorithms for Network-Based Disease Gene Prioritization. Bio Data mining 4: 19. 26. Aerts S, Lambrechts D, Maity S, Van Loo P, Coessens B, et al. (2006) Gene prioritization through genomic data fusion. Nat Biotech 24: 537–544 doi:https://doi.org/10.1038/nbt1203. 27. Ala U, Piro RM, Grassi E, Damasco C, Silengo L, et al. (2008) Prediction of human disease genes by human-mouse conserved coexpression analysis. PLoS Comput Biol 4: e1000043. 28. Lee I, Blom UM, Wang PI, Shim JE, Marcotte EM (2011) Prioritizing candidate disease genes by network-based boosting of genome-wide association data. Genome Res advance online article. 29. Linghu B, Snitkin ES, Hu Z, Xia Y, Delisi C (2009) Genome-wide prioritization of disease genes and identification of disease-disease associations from an integrated human functional linkage network. Genome Biol 10: R91. 30. Perez-Iratxeta C, Bork P, Andrade MA (2002) Association of genes to genetically inherited diseases using data mining. Nature genetics 31: 316–319. 31. Aragues R, Sander C, Oliva B (2008) Predicting cancer involvement of genes from heterogeneous data. BMC Bioinformatics 9: 172. 32. Kitsios GD, Zintzaras E (2009) Genomic Convergence of Genome-wide Investigations for Complex Traits. Annals of human genetics 73: 514–519. 33. Akula N, Baranova A, Seto D, Solka J, Nalls MA, et al. (2011) A Network-Based Approach to Prioritize Results from Genome-Wide Association Studies. PloS one 6: e24220. 34. Carlson CS, Eberle MA, Kruglyak L, Nickerson DA (2004) Mapping complex disease loci in whole-genome association studies. Nature 429: 446–452. 35. White S, Smyth P (2003) Algorithms for estimating relative importance in networks. Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining. KDD '03. New York, NY, USA: ACM. pp. 266–275. Available:http://doi.acm.org/10.1145/956750.956782. Accessed 22 February 2012. 36. 
Nabieva E, Jim K, Agarwal A, Chazelle B, Singh M (2005) Whole-proteome prediction of protein function via graph-theoretic analysis of interaction maps. Bioinformatics 21: i302–i310 doi:https://doi.org/10.1093/bioinformatics/bti1054. 37. von Mering C, Jensen LJ, Kuhn M, Chaffron S, Doerks T, et al. (2007) STRING 7–recent developments in the integration and prediction of protein interactions. Nucleic Acids Res 35: D358–D362. 38. Davis AP, King BL, Mockus S, Murphy CG, Saraceni-Richards C, et al. (2010) The Comparative Toxicogenomics Database: update 2011. Nucleic Acids Research 39: D1067–D1072 doi:https://doi.org/10.1093/nar/gkq813. 39. Becker KG, Barnes KC, Bright TJ, Wang SA (2004) The Genetic Association Database. Nature Genetics 36: 431–432 doi:https://doi.org/10.1038/ng0504-431. 40. Woo HN, Park JS, Gwon AR, Arumugam TV, Jo DG (2009) Alzheimer's disease and Notch signaling. Biochem Biophys Res Commun 390: 1093–1097. 41. Wellen KE, Hotamisligil GS (2005) Inflammation, stress, and diabetes. Journal of Clinical Investigation 115: 1111–1119 doi:https://doi.org/10.1172/JCI25102. 42. Appay V, Sauce D (2007) Immune activation and inflammation in HIV-1 infection: causes and consequences. The Journal of Pathology 214: 231–241 doi:https://doi.org/10.1002/path.2276. 43. Yang SY, He XY, Miller D (2007) HSD17B10: a gene involved in cognitive function through metabolism of isoleucine and neuroactive steroids. Mol Genet Metab 92: 36–42. 44. He G, Luo W, Li P, Remmers C, Netzer WJ, et al. (2010) Gamma-secretase activating protein is a therapeutic target for Alzheimer's disease. Nature 467: 95–98. 45. Kim M, Suh J, Romano D, Truong MH, Mullin K, et al. (2009) Potential late-onset Alzheimer's disease-associated mutations in the ADAM10 gene attenuate $\alpha$-secretase activity. Human molecular genetics 18: 3987. 46. 
Krauthammer M, Kaufmann CA, Gilliam TC, Rzhetsky A (2004) Molecular triangulation: bridging linkage and molecular-network information for identifying candidate genes in Alzheimer's disease. Proc Natl Acad Sci U S A 101: 15148–15153. 47. Li Y, Patra JC (2010) Genome-wide inferring gene–phenotype relationship by walking on the heterogeneous network. Bioinformatics 26: 1219–1224. 48. Stelzl U, Worm U, Lalowski M, Haenig C, Brembeck FH, et al. (2005) A human protein-protein interaction network: a resource for annotating the proteome. Cell 122: 957–968. 49. Rual JF, Venkatesan K, Hao T, Hirozane-Kishikawa T, Dricot A, et al. (2005) Towards a proteome-scale map of the human protein-protein interaction network. Nature 437: 1173–1178. 50. Garcia-Garcia J, Guney E, Aragues R, Planas-Iglesias J, Oliva B (2010) Biana: a software framework for compiling biological interactions and analyzing networks. BMC Bioinformatics 11: 56 doi:https://doi.org/10.1186/1471-2105-11-56. 51. Hamosh A, Scott AF, Amberger JS, Bocchini CA, McKusick VA (2005) Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic acids research 33: D514–D517. 52. Chen J, Xu H, Aronow BJ, Jegga AG (2007) Improved human disease candidate gene prioritization using mouse phenotype. BMC Bioinformatics 8: 392. 53. Hofmann-Apitius M, Fluck J, Furlong L, Fornes O, Kolářik C, et al. (2008) Knowledge environments representing molecular entities for the virtual physiological human. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366: 3091–3110. 54. Sing T, Sander O, Beerenwinkel N, Lengauer T (2005) ROCR: Visualizing Classifier Performance in R. Bioinformatics 21: 3940–3941 doi:https://doi.org/10.1093/bioinformatics/bti623. 55. Berriz GF, Beaver JE, Cenik C, Tasan M, Roth FP (2009) Next generation software for functional trend analysis. Bioinformatics 25: 3043–3044.
Saturday, May 31, 2014: The Dutch teleportation advance

Most of the generic science news sources – see e.g. PC Magazine – report on a new result by experimenters at a Kavli-named institute in Delft, a historic academic town in Holland, that just appeared in Science: Unconditional quantum teleportation between distant solid-state quantum bits (SciMag) by Pfaff, Hensen, Hanson, and 8 more co-authors. Let me emphasize that Hansen isn't among the authors. ;-) They have made some progress in the experimental work that could be useful for quantum computers in the future – a potentially but not certainly foreseeable future. Two qubits – electron spins in two pieces of diamond that are 10 feet apart – are entangled, stored as nuclear spins, guaranteed to be sufficiently long-lived, and measured to be almost perfectly entangled. I don't follow every experimental work of this kind and I won't pretend that I do. This prevents me from safely knowing how new their work is. I hope and want to believe it is sufficiently new, indeed. Obviously, quantum computers will require us to master many more operations than this one – potentially and probably more difficult ones. However, the words chosen in the paper and in the popularization of the result are a mixed bag.

Friday, May 30, 2014: Measure for measure: debaters love to hate genuine quantum mechanics

Virtually all popular preachers about quantum mechanics are hopelessly deluded. If you have 100 spare minutes, you may want to watch this debate on the foundations of quantum mechanics: video, event web page, Preposterous Universe. The name of the debate, Measure For Measure, is a Shakespeare play. It may also express one idea about measurement in quantum mechanics or another, e.g.
the correct idea that the measurement and the influence on the measured system are needed to find some information – to measure in the epistemic sense (to find out). This conversation took place last night at the NYU Sourball Center for the Perforated Ass or something like that – I am no native speaker. All the talk about physics was introduced by the überhost, Brian Greene's wife Tracy Day, who introduced the host, Brian Greene, who began with tons of jokes about quantum mechanics in mass culture and some nice, basic, somewhat vague but effectively very accurate background about the double-slit experiment, enhanced by cool animations.

Thursday, May 29, 2014: Paul Frampton fired by UNC

Some physicists did get richer: Guth, Linde, and Starobinsky share one of the $1 million Kavli prizes (for inflation). Two other triplets – of nano-opticians and neuroscientists – win two more millions in total. Guth and Linde didn't have functional cellphones to receive the good news. Maybe they will be able to buy one – even though Linde plans to spend this money on his breakfast. Some annoying news from North Carolina: UNC-CH fires physics professor jailed in Argentina (see also Google News). Paul Frampton is still in Argentina but he should be released soon. Life outside the prison may be hard because Ms Carol Folt, a biologist and an academic official at UNC, just fired Paul for "misconduct and neglect of duty". They try to suggest that they "succeeded" in preventing his retirement, so even though Paul is 70 and would normally be allowed to retire and collect a pension, he may be getting none. I find the decision, and the vigor with which it was made, unfortunate. TRF largely avoids nudity. Although the average reader's age is 68.2 years and they would like it, there are also teenagers among the readers and they would like it, too. So it has to be regulated.
One thing is that Paul has done some really stupid things for a virtual clone of his Czech American girlfriend, Ms Denisa Krajíčková aka Denise Milani, and I can't even 100% verify that he didn't know about those drugs, although I do believe that Paul is really a pure soul and a naive person. There's some sense in which his arrest was unavoidable. What happens to you if you're caught with lots of drugs at the airport?

Global warming vs climate change

The Guardian and everyone else in the MSM write about a poll among 1,600 Americans which concluded that they're turned off by "climate change" but intrigued by "global warming". They are 13% more likely to say that "global warming is a bad thing" than in the case of "climate change", and so on. So what is the right term to use for "the thing"?

LEP's, LHC's \(98\), \(126\GeV\) Higgses may match (N)MSSM

Peter Higgs is 85 today: Congratulations! I want to mention a rather interesting paper, Invisible decays of low mass Higgs bosons in supersymmetric models by Pandita and Patra. The LHC has discovered the \(125-126\GeV\) Higgs boson and there seem to be no other particles around so far – a situation that may dramatically change in less than a year. Recall that supersymmetry is the most well-motivated theory that (aside from more important things) ultimately requires an extension of the Higgs sector. The simplest supersymmetric models, called the MSSM (Minimal Supersymmetric Standard Model), have five Higgs bosons. The NMSSM (Next-to-MSSM) has an extra Higgs-like chiral superfield \(S\), a fieldized \(\mu\)-parameter of the MSSM. I think it's fair to say that most people expect that if there are new bosons related to the Higgs mechanism, they are heavier than the Higgs boson that has already been discovered. However, there is this interesting loophole – one may be thinking outside the box.
The other Higgs bosons of the supersymmetric extensions of the Standard Model may actually be lighter than the Higgs boson that has already been discovered. The immediate question is "Why hasn't such a lighter particle already been discovered?" But once you acknowledge that the decays may naturally be hard to see, it's tempting to say that the scenario in which the so-far-unknown Higgs bosons are lighter than the known one is as likely as the conventional scenario where the particles discovered in the future are heavier.

Tuesday, May 27, 2014: Constructor theory: Deutsch and Marletto are just vacuously bullšiting

I can't stand pompous fools, people who are completely dumb but who like to pretend how wonderfully smart they are. So it is not hard for you to guess that I was rather upset when I was forced to read a new preprint by David Deutsch and Chiara Marletto and the associated hype in Scientific American: Constructor Theory of Information (arXiv); A New "Theory of Everything": Reality Emerges from Cosmic Copyright Law (SciAm). If you happen to forget similar factoids, Deutsch is one of the philosophical babblers who likes to say ludicrous things about the allegedly unavoidable naive many-worlds interpretations of quantum mechanics, and so on. What is this constructor theory? It's a sequence of worthless would-be smart sentences sold as a "theory of everything" and a "unifying theory of classical and quantum physics" and "all information in them" which also "defines all forms of information" and transforms all of our knowledge into "claims that some tasks are impossible". "Where is the beef?" the ladies would surely ask in this case, too. If you try to find any content inside these texts, you will inevitably fail. There is no content. It's just a stupid game with words and a couple of mathematical symbols.

Monday, May 26, 2014:
EU: National Front, UKIP, Mach, Sulík win seats

Europe is a very diverse continent and it is a bit confusing to combine the seats from all the countries. But some of the "projections" of the vector of results look better than expected. You know, some decade ago, I would also be influenced by much of the "mainstream" thinking and I would consider France's National Front to be a rather extreme party. And maybe it was. I surely don't consider it extreme today and it probably isn't. ;-) Ms Le Pen got 26%, ahead of the center-right UMP with 21%. Hollande's socialists only got 14%. See similar highlights at the BBC server. Even more impressively, UKIP won the EU polls in the U.K., with 29%, beating Labour at 23% and the Tories at 23%. The Greens did better than the LibDems. A fun fact from Spain: I was surprised by the numerous Spanish visits to the article "Shut up and calculate, especially if you're a lousy thinker". Miguel M. kindly explained to me the reason for the visits: the noted wheelchair-bound anti-quantum warrior Pablo Echenique-Robba, who is the main villain of that blog post, got elected (unless the remote votes change the results dramatically, which is unlikely) to the European Parliament for the scholars' extreme left-wing but Euroskeptical party called Podemos (this name is used across Latin America, too). Congratulations, and I wish good luck to the... scientist... in finally achieving the destruction of the European economy, the European Union, and modern physics. ;-) In Italy, the current center-left PM Renzi won 40%, which is in no way against the bad EU trends, but on the other hand, I have no problems with that because I feel he makes Italy controllable. Comedian Beppe Grillo's maverick Five Star party got 22% and Berlusconi's party received 16%. In different countries, I may have different sympathies, and this collection of numbers looks good to me, too.

Sunday, May 25, 2014:
Isla Vista massacre: when testosterone runs amok

On Friday, Elliot Rodger (22) murdered six other people and himself – in an act of stabbing and frantic shooting – in Isla Vista, California. The son of the British-born Hollywood director Peter Rodger had previously announced his plans to murder many women – those who rejected him – because of his sexual frustration. I think that there are too many men who say similar things, so it would be impractical to "stop" all of them. So it's disputable whether the event could have been avoided. But the feeling that it could have been is particularly annoying. Isla Vista is the town adjacent to the campus of the University of California in Santa Barbara. It only has 23,000 people – 1/300,000 (thanks, Gene!) of the world population – but it just happens that I lived there for half a year, H1 of 2001, before my PhD defense on 9/11/2001. (And I have visited UCSB approximately 3 times at other moments.) It is a coincidence that my visit in 2001 began with another memorable shooting spree in February (yes, I have a better alibi in 2014). So I know the town rather well, which is why I found the news more shocking than some generic reports about murders.

Claims: Universe is not expanding

Fred Singer sent me a link to an article in Sci-News, "Universe is Not Expanding After All, Scientists Say", which describes a recent U.S.-Spanish-Italian paper, UV surface brightness of galaxies from the local Universe to \(z\sim 5\) (arXiv), that was just published as International Journal of Modern Physics D Vol. 23, No. 6 (2014) 1450058. It is quite a bold claim but not shocking for those who have the impression, based on experience, that these journals published by World Scientific are not exactly prestigious – or credible, for that matter. The sloppy design of the journal website and the absence of any \(\rm\TeX\) in the paper don't increase its attractiveness.
The latter disadvantage strengthens your suspicion that the authors write these things because they don't want to learn Riemannian geometry, just like they don't want to learn \(\rm \TeX\) or anything that requires their brain to work, for that matter.

The elections to the European Parliament – one that cannot propose laws, but where some representatives may at least talk to each other – are getting started. Here in Czechia, the polls open at 2 pm today, on Friday, and continue tomorrow, when they end at 2 pm, too. After the first day (almost), the Czech turnout is 5%, so it is totally plausible that the total turnout will be below 10%. Compare it with over 80% in Crimea, for example, or 75% in the Donetsk and Luhansk regions. As I have already mentioned, I am planning to vote for Petr Mach's Party of Free Citizens for the first time. He got 2.5% in the latest parliamentary elections, which was impressive, and there are polls that suggest that he has a chance to get 5% this time. The vote isn't necessarily lost. If he gets above the threshold – and I hope it's enough – the party will have one MEP, namely Mach himself.

Higgs contest, top ten

As you may have noticed (by looking at my reduced activity on this blog), I have spent hours with the ATLAS Higgs contest in the recent two days. It's time to boast a little bit. Here is the screenshot of the current leaderboard: You see that there's someone new in the top-ten list, someone who jumped up by 179 stairs in the recent two days. And his name is the same as mine! ;-) Finally, I could apply some idiosyncratic improvements to the data manipulation. To compare, Tommaso Dorigo of CMS is at the 116th place. Maybe he should try to learn some string theory if he's not too good at evaluating the data from particle physics experiments. Or better not...
;-)

Wednesday, May 21, 2014: An interview with Melissa Franklin

I am a bit busy these days, also because of the Higgs challenge. As a temporary linker-not-thinker, let me share the following URL with you (thanks to David Simmons-Duffin for the hyperlink): 'Physics was paradise' (Harvard Gazette). It is a rather interesting interview with Melissa Franklin, an experimental particle physicist who has been the physics department chair at Harvard since 2010. I've served on several thesis committees with her, which is not the only reason why I know her pretty well, of course.

ATLAS: find Higgs, win $7k

Particle physics meets the machine learning sport. Amateur and experienced programmers, you have a chance to win $7,000 (gold), $4,000 (silver), or $2,000 (bronze) if you succeed in a contest organized by the LHC's ATLAS Collaboration (via Tommaso Dorigo): the Higgs Boson Machine Learning Challenge (kaggle.com). So far there are 180+ contestants (well, teams – a team may contain at most 4 people). Anyone who registers and sends her results by September 15th, 2014 may win, however. What is the sport about?

There are no 't Hooft's ontological bases

Fred Singer informed me about a huge, 202-page-long quant-ph paper by Gerard 't Hooft (via the Physics arXiv Blog), The Cellular Automaton Interpretation of Quantum Mechanics. A View on the Quantum Nature of our Universe, Compulsory or Impossible?, which reviews the author's more than 15-year-long struggle to show that the foundations of quantum phenomena are classical (these are not just my words, he explicitly says so; at least, I appreciate that he is not trying to mask this basic goal by thick layers of fog like other "interpreters" do).
I also appreciate that 't Hooft doesn't cite any paper by himself newer than 1993, so the new long preprint is no self-citation fest like some other papers. The article was posted exactly one week before Steven Weinberg's paper on modified density matrices. One week seems to be the current timescale of producing another paper by a (more prominent than average) physics Nobel prize winner that displays the author's dissatisfaction with quantum mechanics. And in the case of 't Hooft, the dissatisfaction is really the primary point of the paper. Should you read the paper? I don't think so. I won't pretend that I have read the whole paper or that I plan to do so, either.

Godzilla, an anti-environmentalist blockbuster

Godzilla (2014) is a new U.S.-Japanese movie based on the paradigm of the Japanese monster that premiered on Friday in the U.S. Its budget is $160 million and, happily enough for the people behind the team, the investment is going to be repaid sometime next week, within less than a week. Not bad. On Friday itself, it earned $38.5 million and is likely to get $100 million over the three-day weekend. I have not seen the movie and I have no idea whether I will like it. But what's interesting is that this is not another movie parroting the intellectually inferior and dishonest "environmentalist" ideology about the man who ruins Nature and may fix it. Quite on the contrary, the movie points out that the alarmists spreading this meme are dangerous psychopaths.

Steven Weinberg's mutated density matrices

CLOUD: First, off-topic news. CLOUD at CERN published some new results. Previously, they showed that sulphuric acid doesn't seem to be enough to produce more clouds. Now, if the acid is combined with oxidized organic vapours from plants etc., low-lying clouds may be created and cool the planet.
The cosmic rays may still modulate and matter a lot but only if the concentration of the two previous ingredients is low enough for the cosmic rays to be helpful. Loopholes like that are probably not too interesting.

Steven Weinberg posted a playful quant-ph preprint two days ago: Quantum Mechanics Without State Vectors. The title is a bit misleading. What Weinberg mostly tries to do is to study possible symmetry transformations of density matrices that don't descend from transformations of pure states. This blog post has two comparable parts. The first one is dedicated to Weinberg's negative, knee-jerk reactions to the foundations of quantum mechanics. The second part is dedicated to his hypothetical non-pure-state-based density matrices.

Anti-quantum noise as an introduction

The beginning of the paper reflects Weinberg's personal dissatisfaction with quantum mechanics. Two unsatisfactory features of quantum mechanics have bothered physicists for decades. However, there are no (i.e. zero) unsatisfactory features of quantum mechanics so what has bothered the physicists were inadequacies and stubbornness of these physicists themselves, not unsatisfactory features of quantum mechanics. The first is the difficulty of dealing with measurement. There is no difficulty of dealing with measurement. On the contrary, measurement – or observation or perception (these two words may ignite different emotions in the reader's mind but the physical content is equivalent) – is the process whose outcomes quantum mechanics is predicting, and it is doing so probabilistically.

Lennart Bengtsson will probably remain a renegade, anyway

Lennart Bengtsson (*1935) is a Swedish meteorologist and modeler. You may check that he has written lots of papers and collected lots of citations.
Two weeks ago, he decided to join a dozen other researchers in the Academic Advisory Council of The Global Warming Policy Foundation, a skeptical climate change think tank led by Lord Nigel Lawson (chairman) and by Benny Peiser (director). After a violent reaction to his decision – see e.g. GWPF, The Times, WUWT, Hans von Storch's blog, Climate Audit, Jo Nova, Judith Curry, Spiegel, Climate Depot, CATO, National Review, Marcel Crok, Click Green, Power Line Blog, and joyful fascist jerk William Connolley and his Big Whopper Gestapo Comrade, Goebbelsian troll David Appell – he left the GWPF board. Other texts on similar topics: climate, religion, science and society

BICEP2 vs Planck: nothing wrong with screen scraping

The BICEP2-Planck relationship is competitive and that's how things should be; BICEP2 claims remain robust. Two days ago, Adam Falkowski of Paris created a ministorm when he asked: Is BICEP wrong? The idea of his was that there was a rumor that the BICEP2 folks have so seriously misinterpreted, misunderstood, underestimated, and misunderestimated a sky map by the Planck Collaboration that when the bug is corrected, the significance of the BICEP2 discovery fully evaporates. He tried to suggest that almost all the well-informed experts in this experimental enterprise converge to the opinion that most of the B-modes seen by the BICEP2 gadget are due to the polarizing dust in the Milky Way, not due to the gravitational waves from the era of cosmic inflation. Erik Verlinde tweeted that a Princeton workshop implies the same conclusion. In particular, these two physicists seem certain that Planck will refute the BICEP2 claims in their new paper to be published not later than October 2014.
I remained open-minded for half a day but once I saw the claims by Clement Pryke and John Kováč that they are not planning any revision of their papers at all, I decided that Adam's story is almost certainly just bullšit, a conspiracy theory mixing half-truths, dust, šit, and fantasy into a stinky whole – and Adam and Erik are just full of this, politely speaking, composite šit. Other texts on similar topics: astronomy, experiments, science and society, string vacua and phenomenology A silly contest involving a TBBT video This is a video from an episode of The Big Bang Theory, one about the visit to the Large Hadron Collider on the Valentine's Day. One may see a real-world famous physicist in a big chunk of this video. When I say "famous", I mean "famous" from the media as well as having over 20,000 citations according to SPIRES. Other texts on similar topics: TBBT, TV Physics Overflow is live Off-topic, breaking news: Jester believes that there are rumors suggesting that Planck refutes BICEP2 and many experimenters, including those in the BICEP Collaboration, are admitting that the B-modes came from the dust. Clement Pryke of BICEP refutes Adam's rumors about the planned retraction in the Science Magazine while he admits that they don't quite know the right interpretation of a key Planck map. I won't say what I guess about these claims. Some of you may know Physics Stack Exchange where people may ask and answer physics-related questions. But you may often have the feeling that no one is trying to keep the quality of the traffic sufficiently high. Thanks to Dilaton, Dimension10 etc., a new very promising competition has emerged. It's called Physics Overflow: PhysicsOverflow.ORG (click here and try!) You may see that the name was inspired by MathOverflow.NET. 
Other texts on similar topics: computers, science and society Donetsk, Luhansk referendums have firmer democratic foundations than any EU-wide polls so far As recently as yesterday, I was unsure whether the referendums in the Donetsk Region and the Luhansk Region would end up as the same landslide vote against the new Kiev regime as the referendum in Crimea. The ethnic Russians are stronger in Crimea, and so on. However, yesterday, I realized that the question was different in Donetsk and Luhansk. The voters were asked Do you support the Act of State Self-rule of the Donetsk/Luhansk People's Republic? Note that there has been no vote on the annexation by Russia yet; the grassroots or the new local politicians may be preparing the referendum on the annexation for the following weekend and I am really uncertain about the results. The support for annexation could be significantly weaker. The desire for genuine independence could be strong. On the other hand, the emerging republic(s) will probably need some military support to resist the attempts of the current Kiev regime to occupy them – the Kiev troops and tanks are being declared occupation forces today – so they may be forced to seek some close relationships with Russia whether they want it or not. Well, let me repeat: I honestly can't predict the results of the next referendum. The people's desire to submit ballots in the referendum that took place yesterday was staggering, about 75%. See e.g. these long queues in front of the polling stations. Everyone knew that some lunatics from Kiev may begin fire against civilians, like in Krasnoarmejsk, but they went to vote, anyway. In fact, the fear of the bullets from Kiev has probably encouraged people to vote and to vote for independence from Kiev. I would probably be pushed in that direction, too, regardless of the detailed nationalist colors of the regime that would be pointing guns and tanks to civilians like me. 
About 90% were in support of the independence of the Donetsk Region; 10% were against. The Luhansk Region data will be published later. They may be a bit less convincing but I do expect the independence to gain the support of a majority, too. Other texts on similar topics: Europe, politics, Russia

Rogozin, Romania, conflicts, and individual rights

On Saturday, Russian deputy prime minister Dmitry Rogozin went to Transdniestria to celebrate the anniversary of the 1945 victory over the Nazis with the local ethnic Russians – who have the same good reasons to celebrate (congratulations to them, and thanks to the Russian and Western Second World War veterans who helped to liberate my country) and who may feel oppressed by the surrounding nations these days. After the dissolution of the Soviet Union, they have established a de facto independent ethnic Russian republic which is a supernarrow strip on the border between Moldova and Ukraine. Just to be sure, Moldova is a former Soviet republic which is ethnically Romanian, more or less, and it could have been a part of Romania if the Soviet Union had never existed. Moldova is even poorer than Romania but it's trying to create ties with the EU. When Rogozin was returning from Kishinev, the capital of Moldova, to Moscow, his plane, a Rusjet Yak-42, wasn't allowed to enter the Romanian airspace. It was also accompanied by some Ukrainian Mikoyan MiG interceptors (even the Maidan regime seems unable to operate without Russian products) so Rogozin had to land in Kishinev again. The local Moldovan government has used the opportunity to confiscate some petitions in which thousands of the folks in Transdniestria demand protection from the Russian Federation. Moldovan authorities will "study these materials to check whether someone has committed a crime".
It is not clear to me where Rogozin's plane finally went – Moldova is completely surrounded by Ukraine and Romania – but I guess he was finally allowed by Romania to go to Minsk through Bulgaria etc. and then to Moscow. Or maybe he just ignored the ban and took the risk. If you understand the path, let me know. Just to be sure, these airspace hassles occurred because Rogozin is one of the dozens of influential Russian citizens who were targeted by the EU-U.S. sanctions. He is a very active Twitter (200,000+ followers, plus 12,000 in English) and Facebook guy. He has made an innocent tweet that was destined to be widely discussed. Life after death: a debate Caltech cosmologist Sean Carroll along with Yale neurologist Steve Novella won this Intelligence Square debate on the proposition "death is not final". An IQ2 debate about global warming was discussed on this blog 7 years ago. While the "for" motion (defended by the Harvard-affiliated neurosurgeon with his own near-death experience Eben Alexander along with medical doctor and writer Raymond Moody) was favored 37-31 before the debate, many people have changed their mind and the skeptics (believing that the death is final, after all) have won the final vote 46-31. I am pretty amazed that as many as 38% of the audience changed their opinion about the answer to this fundamental question after the 100-minute debate. Sean Carroll has promoted the debate: before the debate, debate's afterlife. 
Other texts on similar topics: biology, philosophy of science, religion, stringy quantum gravity Two very different PR, ER-EPR papers I thought that the acronyms are sort of funny Two contrasting papers on the Papadodimas-Raju theory of the black hole interior and the Maldacena-Susskind ER-EPR (Einstein-Rosen/Einstein-Podolsky-Rosen) correspondence have been posted to the hep-th arXiv today: Daniel Harlow (Princeton): Aspects of the Papadodimas-Raju [PR] Proposal for the Black Hole Interior Kristan Jensen, Andreas Karch, Brandon Robinson (Seattle+SUNY): The holographic dual of a Hawking pair has a wormhole (TRF guest blogger) Andreas Karch et al. takes the positive attitude that the newest picture, ER-EPR, works, while Harlow sees a serious problem with another, related yet inequivalent, picture of the black hole interior, the Papadodimas-Raju (PR) theory. PR and ER-EPR are not quite equivalent although the acronyms may be combined to a rather nice triangle. ;-) But I think that they are ultimately compatible. Both of them are right and complementary. PR tells us something about the freedom we have when we extrapolate dynamics of quantum gravity into the black hole interior; ER-EPR teaches us about the behavior of the Hilbert space in topologically nontrivial situations with an ER bridge (not really studied by PR at all, at least so far). Harlow's paper is against PR; Andreas Karch et al. argue in favor of ER-EPR maximally positively – so positively, in fact, that they (rightfully?) demand a part of the credit for the streamlined interpretation of the ER-EPR correspondence. But even though they focus on "different theories" of the black hole interior, you may see that PR and ER-EPR are "allies" because pretty much the same point that causes so much discomfort to Daniel Harlow is also what Andreas Karch et al. – which will be referred to as Jensen et al., to respect the alphabetical order – embrace so enthusiastically! 
Other texts on similar topics: philosophy of science, stringy quantum gravity

Russia plans to annex the Moon in 2030

What is the price of the Moon? Russia's Victory Day Parade was a bit more exciting for the Russians in 2014 because Crimea became a part of the Russian victory. However, on the same day, Russia also published some details about its space program: Russia will begin Moon colonization in 2030 - draft space program. America may seem out of this game but we could see a conflict on the Moon, anyway, because China has promised to send its first troops to the Moon in 2025. ;-) Unless the modern civilization destroys itself in some way – e.g. by allowing the extremists known as "environmentalists" to influence our lives – the Moon will eventually become a "possible destination". But will the human activity on the Moon ever become significant for our planet? Will the Moon ever become the source of a non-negligible portion of the world GDP?

Czech trailblazing: 125 years

I am a patriot yet a realist. So I think that the Czechs are pretty much the ultimate example of an average nation when it comes to most benchmarks. There are not too many technologies or disciplines in which we are at the top. But one of them is just celebrating 125 years. My 10 years in the U.S. made me think: You're just troglodytes, Yankees! ;-) What is it? Yes, it's the trailblazing system. By 1938, i.e. before the arrival of Nazism in Czechoslovakia, Czechoslovakia had the longest, most sophisticated, and most extensive system of 40,000 km of routes for hikers that are marked by the unified, colorful, structured, yet simple collection of symbols. Today, it's still 40,000 km which no longer makes us the #1 in the absolute sense but we're still enjoying the highest density of the marked routes in the world.
Other texts on similar topics: Czechoslovakia, everyday life Straight rod passing through curved hole Off-topic: A week ago, The Big Bang Theory and three less important shows were banned in China where they were streamed legally and where TBBT became hugely popular. No one knows why – Chuck Lorre has speculated what the gang of commies in a dark room who have watched a few TBBT episodes didn't like. Maybe it's because Penny has "soup" written on her buttocks in Chinese characters and Mary Cooper uses the term "Kung fu letters" for these East Asian scripts. ;-) But the reason could have been more political, of course. Or commercial. Why the hole is hyperbolic and why you shouldn't worry The following simple video from a Valencia, Spain science museum has gotten almost 200,000 views in a few days. Not bad. I believe that many readers, including very young ones, are not only able to reconcile themselves with the fact that the hole may be curved even though the rod is straight but they could even compute the shape of the rod. But let me say a few words, anyway. Some of us intuitively believe that a straight rod should produce images composed of straight lines. But it isn't necessarily so. After all, many of us have already tried to use a straight rod named "a pencil" to draw circles and other undoubtedly curved curves. ;-) Other texts on similar topics: mathematics, science and society Obama-era U.S. relations to other countries worse than in the Bush era George W. Bush was hated by many Americans during his reign. While it's true that my perspective from People's Republic of Cambridge isn't representative of the U.S. and Bush's position was particularly tense in the city of Harvard and MIT, I think that to a lesser extent, it is true that many people have believed that Bush was doing everything he could to turn the rest of the world into the haters of the U.S. In particular, it was believed that the Iraq war would permanently strain the U.S. 
relationships with the Middle East and with countries like France, and so on. Barack Obama's program was to turn the U.S. into a friendly country loved in the rest of the world. He has even won a Peace Nobel Prize – in 2009 – for this image of his. It has never gone beyond the image and this Peace Nobel Prize has a "chance" to become a principal player in the ignition of the Third World War. I think that Barack Obama and some people in his team want others to be doing well. Unfortunately, this desire sometimes reaches Messianic dimensions and those are very dangerous. You know, the road to hell is paved by good intentions. When we look at the relationships between the U.S. and other countries and blocs, most of them have gotten worse. Other texts on similar topics: Europe, Middle East, politics An anti-German, anti-Russian rant in The New York Times Russophobia has never been a duty of the Westerner On Friday, dozens of pro-Russian citizens of Ukraine were burned alive in the building of the trade unions in Odessa – a previously peaceful, highly cultural city in Southwestern Ukraine founded by Catherine the Great in 1794 – where they had to hide from aggressive pro-Maidan soccer rowdies whom they previously confronted on the street. The rowdies did everything they could to burn and kill as many people as possible. Off-topic: this is how the Maidan regime plans to negotiate with the ethnic Russians who are citizens of Ukraine, in this case with the city of Slavjansk. I am sort of terrified even though it is 1,758 km away from my home. It's a very sad event, especially because the current de facto government of Ukraine has done virtually nothing to save these lives and it is not doing much to investigate the events. Even the EU decided to call for an independent investigation of the deaths; even the acting Ukrainian PM Yatsenyuk agreed that their police failed miserably. 
Similar actions of governments against their citizens have been used as excuses for assorted U.S/NATO/U.N. interventions into numerous countries in recent years but these crimes are being deliberately covered or justified by those who have decided to support the Maidan regime whatever it costs. These people are so immoral. But I want to spend some time with a new op-ed in The New York Times, Why Germans Love Russia by Clemens Wergin, a Russophobe from Germany's daily "Die Welt". He's whining that it's so bad that about one-half of the German population backs the Russian attitude to the hassle in Ukraine and invents various tricks to sling mud at these people. Worldsheets and spacetimes: kinship and cross-pollination There exists one idea that I have believed for 18 years to eventually become critical for the most general definition of a theory of everything, or string/M-theory, and that hasn't played an important role yet. No, I don't mean the idea of feminist biology (thanks, Honzo). Instead, the most relevant papers underlying the idea that I want to discuss only have around 100 citations at this moment which is far from the citation count appropriate for the papers sparking a new string revolution. It's the idea that the worldsheets and worldvolumes of all sorts that appear in string theory (yes, I decided to write "worldsheets" without a space in this blog post) are actually powered by dynamical laws that obey the same general principles as the dynamical theories governing the processes in the spacetime. The worldsheet theories may look simpler and less "stringy" than the theories in the target space but it's just a quantitative feature of a subclass of solutions of the most general form of string/M-theory, not a sign of their being qualitatively different. The important papers (for me) that have made me adopt this belief are the 1996 papers by Kutasov and Martinec about the \(\NNN=2\) strings. 
They have only been mentioned on this blog once, in the 2006 blog post titled Evaluating extreme approaches to the theory of everything. The general idea has been pioneered in Michael Green's 1987 paper, World sheets for world sheets (I added the spaces here because it's the original title). Other texts on similar topics: string vacua and phenomenology, stringy quantum gravity The EU's 20 absurdities: a poll Results after 360 votes counted: ACTA, 18% Spying in cars, 13% Dehydration, 11% Light bulbs, 11% Corporate women quotas, 8% Hypocritical smoking bans, 6% Strong vacuum cleaner ban, 6% Originally posted on May 1st Around May 24th, the citizens of EU countries will vote deputies to the European Parliament. This "parliament" isn't terribly important – for example, it is not allowed to propose any legislation (it is the only "parliament" in the world with this minor defect) but we may still think that it's important that the right people will be able to speak on that forum. Various parties opposing the creeping European unifications are expected to make a strong showing – taking over approximately 1/3 of the Parliament. We will probably see that the Euroskeptics are far from being a uniform body, too. It's very likely that I will vote the Euro-skeptical "Party of Free Citizens [SSO]" for the first time because I am obviously much closer to it than to the older parties and I was impressed by SSO's gains during the latest national elections. Slovak economist Richard Sulik is the boss of a libertarian party that has some nonzero chances to shine in Slovakia, too. I decided to translate his poll choosing the greatest absurdity of the European Union. Just to be sure, I think that there are much more serious processes taking place at the EU level than these 20 items. But the list of the 20 items is amusing and you may pick your winner. 
Combining Boltzmann brains with additional psychotic explosions

Sean Carroll has no clue about physics and is helping to bury the good name of 2 graduate students. Sean Carroll can sometimes give popular talks about physics, science, and atheism and most of the content is more or less OK. He wrote an OK textbook on general relativity. However, when it comes to things sold as his original research, he has been a borderline crackpot for years. I think it's obvious that after his and two graduate students' latest salvo, De Sitter Space Without Quantum Fluctuations, it's time to permanently erase the adjective "borderline". I had to divide the reading of those texts into 5 sessions because the breathtaking ignorance and stupidity described in the paper has driven my adrenaline level above the healthy levels five times. Recall that last summer, Boddy and Carroll argued that we have to be grateful for the unstable Higgs field (in the real world, the Higgs field cannot be unstable because it's an inconsistency) because the instability will soon destroy our world and that's a good thing because in an eternally existing approximately de Sitter world, "we" would inevitably have to become "Boltzmann Brains", thermal fluctuations that resemble the human brain, allow it to feel the same thing, and that inevitably occur after an exponentially long time. Your humble correspondent and Jacques Distler would explain why this reasoning completely violates the rules of the probability calculus – Bayesian probability, frequentist probability, or any other well-known approach to probability – as well as causality and basic common sense, too. We can see that we are not Boltzmann Brains and there exists no rational argument implying that "we" would have to "be" Boltzmann Brains just because there are infinitely many of them in an infinite spacetime.
Claiming that "we" have to be generic in this sense is just a hypothesis, one that must be tested and one that may be easily falsified (within a split second). Our observations of our present and the past – and ourselves – clearly cannot depend on some future events happening in our Universe (like the number of "Boltzmann Brains" in a future region of the spacetime), anyway, because such an influence would be acausal. Other texts on similar topics: astronomy, landscape, stringy quantum gravity

Tracing the source of catchy melodies

Thirty years ago, in the mid-1980s, I would be spending some time with my Commodore 64 – writing some programs in BASIC or the 6510 machine code or playing some games. Or some combinations of those things. This composition will be one of the main topics of this blog post. With a few exceptions, the games I possessed were copied from the pirates but what would you expect, especially behind the Iron Curtain. Commodore 64 had the wonderful "sound card", the SID chip with 3 sound generators and more. I still remember some POKE's and PEEK's needed to make this gadget work. The music sounded much better than the one-bit music from Sinclair ZX Spectrum, for example. (Compare the nearly professional C64 music with the Manic Miner sounds from Spectrum which are really horrible in comparison. If you cared: The in-game music is "In the Hall of the Mountain King" from Edvard Grieg's music to Henrik Ibsen's 1867 play "Peer Gynt". The music that plays during the title screen is an arrangement of "The Blue Danube" by Johann Strauss II.) Some of the games' musical themes were wonderful. And a curious person wants to know more about good compositions. Sometimes, especially when the composer was a young living man, the author would be written down. But it wasn't always the case. Today, we could use Shazam [iOS], the app that (reversely) identifies the music according to the audio it hears.
But there was no Shazam 30 years ago so the source of some music remained a mystery. Other texts on similar topics: computers, music Laws of physics cannot be hacked Hackers of physics do not beat Nature; they only fool people Shaun Maguire, a PhD student who blogs together with John Preskill, spent his childhood hacking computers. It is natural for him to do the same thing to Nature: Hacking nature: loopholes in the laws of physics A source of the (especially young) people's excitement about physics is their desire to beat the old laws of Nature and to hack into systems around us. To get unlimited moves in the Candy Crush Saga. To make a compromise with a vendor machine: to acquire the chocolate while paying no money. To be able to subscribe to an ObamaCare website. To surpass the speed of light and to beat the uncertainty principle. Warp drive cannot work, as I will mention again. It's a part of the human nature to think that the previous limitations can be circumvented. Our ancestors couldn't get to the Moon; we can. So some people think that if our ancestors couldn't surpass the speed of light, then yes, we can. Or at least, our descendants will be able to. In technology, the slogan "yes, we can" captures a large part of the major advances. But the progress in physics doesn't really uniformly march in this "yes, we can" direction. Quite on the contrary: most of the progress in modern fundamental physics may be summarized by the slogan "no, you really cannot". You cannot do things that were once thought to be possible. You cannot surpass the speed of light, special relativity tells us, even though Newton thought it was perfectly OK. You cannot concentrate some mass (or entropy) to a smaller volume than the corresponding Schwarzschild radius, general relativity claims, although it was thought to be possible before Einstein. 
You cannot measure the position and the velocity more accurately than \(\Delta x\cdot \Delta p=\hbar /2 \) although classical physicists would think that you could. You cannot observe things without affecting them, Heisenberg realized. You cannot perform a mathematical operation without producing some amount of entropy, statistical mechanics implies. You cannot probe geometry at the sub-Planckian distances, quantum gravity teaches us. And so on, and so on. You cannot do many things that used to seem doable. Most of the progress is going in the opposite direction than the practical "yes, we can" problem solvers seem to assume. Every major revolution in physics is actually connected with some new bans and in most contexts, Nature boasts waterproof law enforcement mechanisms. And because progress in science is really about the falsification of previous theories or ideas, theories that would claim "yes, we can", and because the falsification is irreversible, the finding that "no, you really cannot" do certain fundamental things is here with us to stay. Other texts on similar topics: philosophy of science, science and society Six string pheno papers It's the first of May: time to read the Czech children's most memorized poem, Karel Hynek Mácha's romantic 1836 poem "Máj". Try the translation by Edith Pargeter or James Naughton or the translation to some other languages (including some audios). May Day turned out to be a busy day on the arXiv, especially when it comes to papers on string (and string-inspired) phenomenology. I will briefly mention six papers, including three articles on string inflation. First, Savas Dimopoulos, Kiel Howe, and John March-Russell of Stanford-Oxford (a perfect rhyme, indeed) wrote about Maximally Natural Supersymmetry which argues that models involving both SUSY and one large, multi-\({\rm TeV}\) dimension are realistic. Somewhat similarly to Hořava-Witten heterotic M-theory models, there is a \(S^1/\ZZ_2\times \ZZ_2\) compactification. 
The two \(\ZZ_2\) groups break the theory from \(\NNN=2\) to \(\NNN=1\) SUSY in two different, mutually incompatible ways, so that no SUSY is left in the effective theory. Alternatively, you may say that there's no good \(\NNN=1\) 4D effective field theory because the Kaluza-Klein scale coincides with the SUSY breaking scale: the breaking is effectively of the Scherk-Schwarz type, by antiperiodic boundary conditions for the fermions. The model proposes light \(650\GeV\) top squarks, \(2\TeV\) gluinos, and a new massive \(U(1)'\) gauge boson \(Z'\) – all of these should be accessible to the LHC13 or LHC14 run that will begin next year. Despite the large fifth dimension, they say that they produce strong enough gravitational waves for BICEP2 if the dimension is small during inflation. I have a problem with the sudden growth of the dimensions (i.e. with the usage of different sizes of extra dimensions for different epochs) but maybe it is just a psychological prejudice. Other texts on similar topics: astronomy, string vacua and phenomenology
Kuiper's test

Kuiper's test is used in statistics to test whether a given distribution, or family of distributions, is contradicted by evidence from a sample of data. It is named after Dutch mathematician Nicolaas Kuiper.[1] Kuiper's test is closely related to the better-known Kolmogorov–Smirnov test (or K-S test as it is often called). As with the K-S test, the discrepancy statistics D+ and D− represent the absolute sizes of the most positive and most negative differences between the two cumulative distribution functions that are being compared. The trick with Kuiper's test is to use the quantity D+ + D− as the test statistic. This small change makes Kuiper's test as sensitive in the tails as at the median and also makes it invariant under cyclic transformations of the independent variable. The Anderson–Darling test is another test that provides equal sensitivity at the tails as the median, but it does not provide the cyclic invariance. This invariance under cyclic transformations makes Kuiper's test invaluable when testing for cyclic variations by time of year or day of the week or time of day, and more generally for testing the fit of, and differences between, circular probability distributions.

Definition

The test statistic, V, for Kuiper's test is defined as follows. Let F be the continuous cumulative distribution function which is to be the null hypothesis. Denote the sample of data which are independent realisations of random variables, having F as their distribution function, by xi (i=1,...,n). Then define[2]

$z_{i}=F(x_{i}),$
$D^{+}=\mathrm {max} \left[i/n-z_{i}\right],$
$D^{-}=\mathrm {max} \left[z_{i}-(i-1)/n\right],$

and finally,

$V=D^{+}+D^{-}.$

Tables for the critical points of the test statistic are available,[3] and these include certain cases where the distribution being tested is not fully known, so that parameters of the family of distributions are estimated.
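The formulas above translate directly into code. Below is a minimal Python sketch (the function name and the sample values are illustrative, not part of the original article); judging significance would still require the published critical-value tables.

```python
def kuiper_statistic(data, cdf):
    """Kuiper's test statistic V = D+ + D- for a sample against a
    hypothesized continuous CDF F (the null hypothesis)."""
    n = len(data)
    z = sorted(cdf(x) for x in data)                            # z_i = F(x_i), in order
    d_plus = max((i + 1) / n - zi for i, zi in enumerate(z))    # max[i/n - z_i]
    d_minus = max(zi - i / n for i, zi in enumerate(z))         # max[z_i - (i-1)/n]
    return d_plus + d_minus

# Testing uniformity on [0, 1), where the CDF is simply F(x) = x.
sample = [0.05, 0.21, 0.47, 0.52, 0.68, 0.80, 0.91]
v = kuiper_statistic(sample, lambda x: x)

# The cyclic invariance: shifting every point by a constant modulo 1
# leaves V unchanged, which is exactly the property the K-S statistic lacks.
shifted = [(x + 0.3) % 1.0 for x in sample]
v_shifted = kuiper_statistic(shifted, lambda x: x)
```

Running both calls gives the same V (up to floating-point error), illustrating why the statistic is suited to circular data such as times of year.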
Example

We could test the hypothesis that computers fail more during some times of the year than others. To test this, we would collect the dates on which the test set of computers had failed and build an empirical distribution function. The null hypothesis is that the failures are uniformly distributed. Kuiper's statistic does not change if we change the beginning of the year and does not require that we bin failures into months or the like.[1][4] Another test statistic having this property is the Watson statistic,[2][4] which is related to the Cramér–von Mises test.

However, if failures occur mostly on weekends, many uniform-distribution tests such as K-S and Kuiper would miss this, since weekends are spread throughout the year. This inability to distinguish distributions with a comb-like shape from continuous uniform distributions is a key problem with all statistics based on a variant of the K-S test. Kuiper's test, applied to the event times taken modulo one week, is able to detect such a pattern. By contrast, applying the K-S test to event times taken modulo one week can give different results depending on how the data are phased: in this example, the K-S test may detect the non-uniformity if the data are set to start the week on Saturday, but fail to detect it if the week starts on Wednesday.

See also

• Kolmogorov–Smirnov test

References

1. Kuiper, N. H. (1960). "Tests concerning random points on a circle". Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, Series A. 63: 38–47.
2. Pearson, E.S., Hartley, H.O. (1972) Biometrika Tables for Statisticians, Volume 2, CUP. ISBN 0-521-06937-8 (page 118)
3. Pearson, E.S., Hartley, H.O. (1972) Biometrika Tables for Statisticians, Volume 2, CUP. ISBN 0-521-06937-8 (Table 54)
4. Watson, G.S. (1961) "Goodness-of-Fit Tests on a Circle", Biometrika, 48 (1/2), 109–114. JSTOR 2333135
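The cyclic invariance behind the modulo-one-week trick can be demonstrated directly. In this sketch (helper names are illustrative, not from any library), the start of the week is shifted by two days and V is unchanged up to floating-point rounding:

```python
def kuiper_v(sample, cdf):
    """Kuiper's V = D+ + D- for a sample against a continuous CDF."""
    n = len(sample)
    z = sorted(cdf(x) for x in sample)
    d_plus = max((i + 1) / n - zi for i, zi in enumerate(z))
    d_minus = max(zi - i / n for i, zi in enumerate(z))
    return d_plus + d_minus

# Failure times in days, clustered around the same weekday each week:
failures = [3.0, 10.1, 17.2, 24.0, 31.1]

def phase(t, shift):
    """Position within the week, scaled to [0, 1), with the week start shifted."""
    return ((t + shift) % 7.0) / 7.0

# V against a uniform null is the same whichever day the week "starts" on:
v_a = kuiper_v([phase(t, 0.0) for t in failures], lambda x: x)
v_b = kuiper_v([phase(t, 2.0) for t in failures], lambda x: x)
# v_a and v_b agree (up to rounding), unlike the analogous K-S statistics.
```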
Will changing the damage type of some spells break my game?

A player builds a new character, a Cleric of Lathander. What he really likes about this god is his association with sunrises and light in general. So of course, he takes the Light Domain. But most of the Domain Spells deal fire damage and are fire-themed, for example Burning Hands, Flaming Sphere, Scorching Ray, etc. So now I'm thinking about letting him change the damage type of all those spells from fire to radiant. (The description of how the spells look would change as well.) In most cases, this would not make a difference. Damage is damage. But some monsters have Resistances, Immunities or Vulnerabilities against fire/radiant damage. (And Zombies, for example, can be put in the ground more easily with radiant damage.) I'm not sure if this is a good idea or not. I don't want to allow this and then have to deal with an unbalanced game. Do I forget or misjudge an important aspect that will bite me in the ass later?

dnd-5e balance damage-types – iribaar7

Radiant is way less resisted

Fire damage is the second most resisted damage type (77 creatures of the Monster Manual are either resistant or immune; 9 are vulnerable to it). Radiant, on the other hand, is the second least resisted damage type (4 creatures are resistant, not a single one is immune, and one is vulnerable to it). Still, it depends on the encounters you build and the creatures you use.

Source: User Yorrin's post on giantitp.com – Thyzer

Not to mention there are specific creatures that are vulnerable, lose regeneration, or can be more easily permanently destroyed by radiant damage. – Marshall Tigerus May 1 '17 at 15:59

If you take a look at the table here, you'll find that when comparing Radiant to Fire, Fire is more resisted/immune (37 with resistance, 40 with immunity) than Radiant (4 resistant, 0 immune).
However, there are 9 creatures vulnerable to fire and only 1 (apparently) vulnerable to radiant. Those statistics are, of course, for the entire MM, and unless you plan on using an even distribution of monsters from the manual in your campaign, you may find that either fire or radiant is more effective for the player. Also keep in mind that there are creatures, such as the Troll (MM), that REQUIRE fire (or acid) in order to be killed for good.

Switching damage types is a great way to add flavor to a game/character. If you follow the guidelines listed in the DMG pg. 283/284, you should be fine. Here they are, in case you can't get to your copy (you'll notice they don't say not to switch damage types, nor do they balance by damage type, as it seems damage type is generally not considered to be a relevant factor):

Creating A Spell

When creating a new spell, use existing spells as guidelines. Here are some things to consider:

If a spell is so good that a caster would want to use it all the time, it might be too powerful for its level.

A long duration or large area can make up for a lesser effect, depending on the spell.

Avoid spells that have very limited use, such as one that works only against good dragons. Though such a spell could exist in the world, few characters will bother to learn or prepare it unless they know in advance that doing so will be worthwhile.

Make sure the spell fits with the identity of the class. Wizards and sorcerers don't typically have access to healing spells, for example, and adding a healing spell to the wizard class list would step on the cleric's turf.

For any spell that deals damage, use the Spell Damage table to determine approximately how much damage is appropriate given the spell's level. The table assumes the spell deals half damage on a successful saving throw or a missed attack. If your spell doesn't deal damage on a successful save, you can increase the damage by 25 percent.
You can use different damage dice than the ones in the table, provided that the average result is about the same. Doing so can add a little variety to the spell. For example, you could change a cantrip's damage from 1d10 (average 5.5) to 2d4 (average 5), reducing the maximum damage and making an average result more likely.

\begin{array}{c|rr} \text{Spell Level} & \text{One Target} & \text{Multiple Targets} \\ \hline \text{Cantrip} & 1\text{d}10 & 1\text{d}6 \\ 1\text{st} & 2\text{d}10 & 2\text{d}6 \\ 2\text{nd} & 3\text{d}10 & 4\text{d}6 \\ 3\text{rd} & 5\text{d}10 & 6\text{d}6 \\ 4\text{th} & 6\text{d}10 & 7\text{d}6 \\ 5\text{th} & 8\text{d}10 & 8\text{d}6 \\ 6\text{th} & 10\text{d}10 & 11\text{d}6 \\ 7\text{th} & 11\text{d}10 & 12\text{d}6 \\ 8\text{th} & 12\text{d}10 & 13\text{d}6 \\ 9\text{th} & 15\text{d}10 & 14\text{d}6 \\ \end{array}

Interesting, there are a lot of spells that break that guideline. Fireball would be a fifth-level spell according to that chart, for example. – r256 May 1 '17 at 17:54

@r256 Also keep in mind that some spells have HUGE radii, whereas Fireball has a mere 20 ft. They give the note about adjusting damage down for an increased area of effect, though they never state what the baseline is, nor that you could bump damage up for reducing the AoE. I think it's another case of the rules being intentionally vague/outline-y so that a new spell slightly UNDER-performs and doesn't overshadow existing ones. Also, doesn't everyone want fireball? ("If a spell is so good that a caster would want to use it all the time, it might be too powerful[...].") – Jon May 2 '17 at 15:54

@r256 Fireball and Lightning Bolt have been special spells since OD&D, which reaches back to Chainmail. See this for some lore on that. – KorvinStarmast May 2 '17 at 17:09

My answer comes from judging the intent behind the suggestion.
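The dice-swapping guideline above reduces to matching expected values (the average of rolling n dice with s sides each is n·(s+1)/2). A quick throwaway helper (illustrative, not from any rulebook or library) makes the comparisons explicit:

```python
def avg_roll(n_dice, sides):
    """Expected total of n_dice dice with the given number of sides: n * (s + 1) / 2."""
    return n_dice * (sides + 1) / 2

# The cantrip example from the DMG text: 1d10 vs 2d4
print(avg_roll(1, 10))   # 5.5
print(avg_roll(2, 4))    # 5.0

# r256's comment: Fireball (a 3rd-level spell) deals 8d6 to multiple targets,
# which matches the table's 5th-level multi-target entry (8d6, average 28)
# rather than the 3rd-level entry (6d6, average 21).
print(avg_roll(8, 6))    # 28.0
```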
Is the player focusing on this as a cool concept they want to play, without thinking about the balance? Do your players usually think in that way? If so, I would make an effort to make this work. Either just straight up allow this particular player to swap fire for radiant, or say yes, but reduce the damage by 1 or similar if you think that is closer to equal in power level.

Is the player aware this may well be stronger but hasn't said so? Are your players usually very aware of who is most effective in combat? In that case, I'd be reluctant to allow new spells, even if there's an apparently good justification.

Are you not sure, or somewhere in between? You could suggest a compromise: it will be radiant damage, but you'll stick to the vulnerabilities/resistances for fire damage. (Aka, the spells don't change at all, you just change what you call them.) Or say you'd like to but you're not sure about the balance issues: can we try radiant for the next two sessions and see if it seems too strong?

– Jack V.
Easton's theorem

In set theory, Easton's theorem is a result on the possible cardinal numbers of powersets. Easton (1970) (extending a result of Robert M. Solovay) showed via forcing that the only constraints on permissible values for 2κ when κ is a regular cardinal are

$\kappa <\operatorname {cf} (2^{\kappa })$

(where cf(α) is the cofinality of α) and

${\text{if }}\kappa <\lambda {\text{ then }}2^{\kappa }\leq 2^{\lambda }.$

Statement

If G is a class function whose domain consists of ordinals and whose range consists of ordinals such that

1. G is non-decreasing,
2. the cofinality of $\aleph _{G(\alpha )}$ is greater than $\aleph _{\alpha }$ for each α in the domain of G, and
3. $\aleph _{\alpha }$ is regular for each α in the domain of G,

then there is a model of ZFC such that

$2^{\aleph _{\alpha }}=\aleph _{G(\alpha )}$

for each $\alpha $ in the domain of G.

The proof of Easton's theorem uses forcing with a proper class of forcing conditions over a model satisfying the generalized continuum hypothesis. The first two conditions in the theorem are necessary. Condition 1 is a well-known property of cardinality, while condition 2 follows from König's theorem.

In Easton's model the powersets of singular cardinals have the smallest possible cardinality compatible with the conditions that 2κ has cofinality greater than κ and is a non-decreasing function of κ.

No extension to singular cardinals

Silver (1975) proved that a singular cardinal of uncountable cofinality cannot be the smallest cardinal for which the generalized continuum hypothesis fails. This shows that Easton's theorem cannot be extended to the class of all cardinals. The program of PCF theory gives results on the possible values of $2^{\lambda }$ for singular cardinals $\lambda $.
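As a concrete instance of the statement (a standard textbook-style illustration, not taken from the article): let G have domain {0, 1} with G(0) = G(1) = 2. Then G is non-decreasing, the cofinality of $\aleph_2$ is $\aleph_2$, which exceeds both $\aleph_0$ and $\aleph_1$, and $\aleph_0$, $\aleph_1$ are regular, so the theorem yields a model of ZFC in which

```latex
\[
  2^{\aleph_0} = 2^{\aleph_1} = \aleph_2 .
\]
```

In such a model the continuum hypothesis fails, while both constraints ($\kappa < \operatorname{cf}(2^{\kappa})$ and monotonicity) are visibly respected.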
PCF theory shows that the values of the continuum function on singular cardinals are strongly influenced by the values on smaller cardinals, whereas Easton's theorem shows that the values of the continuum function on regular cardinals are only weakly influenced by the values on smaller cardinals.

See also

• Singular cardinal hypothesis
• Aleph number
• Beth number

References

• Easton, W. (1970), "Powers of regular cardinals", Ann. Math. Logic, 1 (2): 139–178, doi:10.1016/0003-4843(70)90012-4
• Silver, Jack (1975), "On the singular cardinals problem", Proceedings of the International Congress of Mathematicians (Vancouver, B.C., 1974), vol. 1, Montreal, Que.: Canad. Math. Congress, pp. 265–268, MR 0429564
\begin{document} \date{} \title{Generalized Solution Concepts in Games with Possibly Unaware Players} \thispagestyle{empty} \begin{abstract} Most work in game theory assumes that players are perfect reasoners and have common knowledge of all significant aspects of the game. In earlier work \cite{HR06}, we proposed a framework for representing and analyzing games with possibly unaware players, and suggested a generalization of Nash equilibrium appropriate for games with unaware players that we called \emph{generalized Nash equilibrium}. Here, we use this framework to analyze other solution concepts \fullv{ that have been considered in the game-theory literature}, with a focus on sequential equilibrium. We also provide some insight into the notion of generalized Nash equilibrium by proving that it is closely related to the notion of rationalizability when we restrict the analysis to games in normal form and no unawareness is involved. \end{abstract} {\bf Keywords:} Economic Theory, Foundations of Game Theory, Awareness, Sequential Equilibrium, Rationalizability. \section{INTRODUCTION} \label{chap2:sec:intro} Game theory has proved to be a useful tool in the modeling and analysis of many phenomena involving interaction between multiple agents. However, standard models used in game theory implicitly assume that agents are perfect reasoners and have common knowledge of all significant aspects of the game. There are many situations where these assumptions are not reasonable. In large games, agents may not be aware of the other players in the game or all the moves a player can make. Recently, we \cite{HR06} proposed a way of modeling such games. A key feature of this approach is the use of an {\em augmented game}, which represents what players are aware of at each node of an extensive form representation of a game. 
Since the game is no longer assumed to be common knowledge, each augmented game represents the game a player considers possible in some situation and describes how he believes each other player's {\em awareness level} changes over time, where intuitively the awareness level of a player is the set of histories of the game that the player is aware of. In games with possibly unaware players, standard solution concepts cannot be applied. For example, in a standard game a strategy profile is a {\em Nash equilibrium} if each agent's strategy is a best response to the other agents' strategies, so each agent $i$ would continue playing his strategy even if $i$ knew what strategies the other agents were using. In the presence of unawareness this no longer makes sense, since the strategies used by other players may involve moves $i$ is unaware of. We proposed a generalization of Nash equilibrium consisting of a collection of strategies, one for each pair $(i,\Gamma')$, where $\Gamma'$ is a game that agent $i$ considers to be the true game in some situation. Intuitively, the strategy for a player $i$ at $\Gamma'$ is the strategy $i$ would play in situations where $i$ believes that the true game is $\Gamma'$. Roughly speaking, a generalized strategy profile $\vec{\sigma}$, which includes a strategy $\sigma_{i,\Gamma'}$ for each pair $(i,\Gamma')$, is a \emph{generalized Nash equilibrium} if $\sigma_{i,\Gamma'}$ is a best response for player $i$ if the true game is $\Gamma'$, given the strategies being used by the other players in $\Gamma'$. We showed that every game with awareness has a generalized Nash equilibrium by associating a game with awareness with a standard game (where agents are aware of all moves) and proving that there is a one-to-one correspondence between the generalized Nash equilibria of the game with awareness and the Nash equilibria of the standard game. Some Nash equilibria seem unreasonable.
For example, consider the game shown in Figure~\ref{chap2:fig:game1}. \begin{figure} \caption{A simple game.} \label{chap2:fig:game1} \end{figure} \noindent One Nash equilibrium of this game has $A$ playing down$_A$ and $B$ playing across$_B$. In this equilibrium, $B$ gets a payoff of 3, and $A$ gets a relatively poor payoff of 1. Intuitively, $A$ plays down because of $B$'s ``threat'' to play across$_B$. But this threat does not appear to be so credible. If player $B$ is rational and ever gets to move, he will not choose to move across$_B$ since it gives him a lower payoff than playing down$_B$. Moreover, if $B$ will play down$_B$ if he gets to move, then $A$ should play across$_A$. \commentout{ Our focus in this paper has been on refinements of generalized Nash equilibrium in games with awareness. It is worth reconsidering here the conceptual basis for these solution concepts in the presence of awareness. \footnote{We thank Aviad Heifetz and an anonymous referee for raising some of these issues.} As we noted, equilibrium refinements are used in standard games to eliminate some ``undesirable'' or ``unreasonable'' equilibria. Arguably, unreasonable equilibria pose an even deeper problem with unaware players. For example,} One standard interpretation of a Nash equilibrium is that a player chooses his strategy at the beginning of the game, and then does not change it because he has no motivation for doing so (since his payoff is no higher when he changes strategies). But this interpretation is suspect in extensive-form games when a player makes a move that takes the game off the equilibrium path. It may seem unreasonable for a player to then play the move called for by his strategy (even if the strategy is part of a Nash equilibrium), as in the case of player $B$ choosing across$_B$ in the example of Figure~\ref{chap2:fig:game1}.
In other words, a threat to blow up the world if I catch you cheating in a game may be part of a Nash equilibrium, and does not cause problems if in fact no one cheats, but it hardly seems credible if someone does cheat. One way to justify the existence of incredible threats off the equilibrium path in a Nash equilibrium is to view the player as choosing a computer program that will play the game for him, and then leaving. Since the program is not changed once it is set in motion, threats about moves that will be made at information sets off the equilibrium path become more credible. However, in a game with awareness, a player cannot write a program to play the whole game at the beginning of the game because, when his level of awareness changes, he realizes that there are moves available to him that he was not aware of at the beginning of the game. He thus must write a new program that takes this into account. But this means we cannot sidestep the problem of incredible threats by appealing to the use of a pre-programmed computer to play a strategy. Once we allow a player to change his program, threats that were made credible because the program could not be rewritten become incredible again. Thus, the consideration of equilibrium refinements that block incredible threats becomes even more pertinent with awareness.\footnote{We thank Aviad Heifetz and an anonymous referee for raising some of these issues.} \commentout{ One reason that Nash equilibrium fails to give a reasonable answer in standard extensive games in general is because, although each player's strategy in a Nash equilibrium is a best response to the other players' strategies, the move made at an information set does not have to be a best response if that information set is reached with probability 0. For example, as we observed, in the game described in Figure~\ref{chap2:fig:game1}, the profile where $A$ moves down and $B$ moves across, which we denote $($down$_A$, across$_B)$, is a Nash equilibrium. 
Nevertheless, moving across$_B$ is not a best response for $B$ if $B$ is actually called upon to play. The only reason that this is a Nash equilibrium is that $B$ does not in fact play, since the information set where he plays is reached with probability 0, given that $($down$_A$, across$_B)$ is the strategy profile. } There have been a number of variants of Nash equilibrium proposed in the literature to deal with this problem (and others), including {\em perfect equilibrium} \cite{Selten75}, {\em proper equilibrium} \cite{Myerson78}, {\em sequential equilibrium} \cite{KW82}, and {\em rationalizability} \cite{Ber84,Pearce84}, to name just a few. Each of these solution concepts involves some notion of best response. Our framework allows for straightforward generalizations of all these solution concepts. As in our treatment of Nash equilibrium, if $\Gamma_1 \ne \Gamma_2$, we treat player $i$ who considers the true game to be $\Gamma_1$ to be a different agent from the version of player $i$ who considers $\Gamma_2$ to be the true game. Each version of player $i$ best responds (in the sense appropriate for that solution concept) given his view of the game. In standard games, it has been shown that, in each game, there is a strategy profile satisfying that solution concept. Showing that an analogous result holds in games with awareness can be nontrivial. Instead of going through the process of generalizing every solution concept, we focus here on {\em sequential equilibrium} since (a) it is one of the best-known solution concepts for extensive games, (b) the proof that a generalized sequential equilibrium exists suggests an interesting generalization of sequential equilibrium for standard games, and (c) the techniques used to prove its existence in games with awareness may generalize to other solution concepts.
\fullv{Sequential equilibrium refines Nash equilibrium (in the sense that every sequential equilibrium is a Nash equilibrium) and does not allow solutions such as (down$_A$, across$_B$). Intuitively, in a sequential equilibrium, every player must make a best response at every information set (even if it is reached with probability 0). In the game shown in Figure~\ref{chap2:fig:game1}, the unique sequential equilibrium has $A$ choosing across$_A$ and $B$ choosing down$_B$. We propose a generalization of sequential equilibrium to games with possibly unaware players, and show that every game with awareness has a generalized sequential equilibrium. This turns out to be somewhat more subtle than the corresponding argument for generalized Nash equilibrium. Our proof requires us to define a generalization of sequential equilibrium in standard games. Roughly speaking, this generalization relaxes the implicit assumption in sequential equilibrium that every history in an information set is actually considered possible by the player. We call this notion {\em conditional sequential equilibrium}. } Other issues arise when considering sequential equilibrium in games with awareness. For example, in a standard game, when a player reaches a history that is not on the equilibrium path, he must believe that his opponent made a mistake. However, in games with awareness, a player may become aware of her own unawareness and, as a result, switch strategies. In the definition of sequential equilibrium in standard games, play off the equilibrium path is dealt with by viewing it as the limit of ``small mistakes'' (i.e., small deviations from the equilibrium strategy). Given that there are alternative ways of dealing with mistakes in games with awareness, perhaps other approaches for dealing with off-equilibrium play might be more appropriate. 
While other ways of dealing with mistakes may well prove interesting, we would argue that our generalization of sequential equilibrium can be motivated the same way as in standard games. Roughly speaking, for us, how a player's awareness level changes over time is not part of the equilibrium concept, but is given as part of the description of the game. We also provide some insight into the notion of generalized Nash equilibrium by proving that, in a precise sense, it is closely related to the notion of {\em rationalizability} when we restrict the analysis to games in {\em normal form} and no unawareness is involved (although the underlying game is no longer common knowledge among the players). Roughly speaking, a normal form game can be thought as a one-shot extensive game where no player knows the move the others made before they make their own move. Intuitively, in standard games, a strategy is rationalizable for a player if it is a best response to some reasonable beliefs he might have about the strategies being played by other players, and a strategy is part of a Nash equilibrium if it is a best response to the strategies actually played by the other players. Since, in games with awareness, the game is not common knowledge, a local strategy for player $i$ in $\Gamma^+$ is part of a generalized Nash equilibrium if it is a best response to the strategies played by the opponents of player $i$ in the games player $i$ believes his opponents consider to be the actual one while moving in $\Gamma^+$. Note that the line between rationalizability and generalized Nash equilibrium is not sharp. In fact, we are essentially able to prove that a strategy is rationalizable in a standard game $\Gamma$ iff it is part of generalized Nash equilibrium of an appropriate game with awareness whose underlying game is $\Gamma$. The rest of this paper is organized as follows. 
In Section~\ref{sec:back}, we give the reader the necessary background to understand this paper by reviewing our model for games with awareness. In Section~\ref{chap2:sec:seq}, we review the definition of sequential equilibrium for standard games and define its generalization for games with awareness. In Section~\ref{chap2:sec:stronseq}, we define the concept of conditional sequential equilibrium for standard games, and prove that there is a one-to-one correspondence between the generalized sequential equilibria of a game with awareness and the conditional sequential equilibria of the standard game associated with it. In Section~\ref{chap2:sec:grat}, we analyze the connection between rationalizability and generalized Nash equilibrium. We conclude in Section~\ref{chap2:sec:conc}. Proofs of the theorems can be found in the Appendix. \section{GAMES WITH AWARENESS} \label{sec:back} In this section, we introduce some notation and give some intuition regarding games with awareness. We encourage the reader to consult our earlier paper for details. Games with awareness are modeled using \emph{augmented games}. Given a standard extensive-form game described by a game tree $\Gamma$, an augmented game $\Gamma^+$ \emph{based on} $\Gamma$ augments $\Gamma$ by describing each agent's \emph{awareness level} at each node, where player $i$'s awareness level at a node $h$ is essentially the set of \emph{runs} (complete histories) in $\Gamma$ that $i$ is aware of at node $h$. A player's awareness level may change over time, as the player becomes aware of more moves. Formally, a \emph{(finite) extensive game} is a tuple $\Gamma=(N,M, H,P,f_c,\{{\cal I}_i:i\in N\},\{u_i:i\in N\})$, where \begin{itemize} \item \fullv{$N$ is a finite set consisting of the players of the game.} \commentout{$N$ is a finite set of players.} \item $M$ is a finite set whose elements are the moves (or actions) available to players (and nature) during the game. 
\item $H$ is a finite set of finite sequences of moves (elements of $M$) that is closed under prefixes, so that if $h \in H$ and $h'$ is a prefix of $h$, then $h' \in H$. Intuitively, each member of $H$ is a \emph{history}. We can identify the nodes in a game tree with the histories in $H$. Each node $n$ is characterized by the sequence of moves needed to reach $n$. A \emph{run} in $H$ is a terminal history, one that is not a strict prefix of any other history in $H$. Let $Z$ denote the set of runs of $H$. Let $M_h = \{m \in M : h \cdot \<m\rangle \in H\}$ (where we use $\cdot$ to denote concatenation of sequences); $M_h$ is the set of moves that can be made after history~$h$. \item $P:(H-Z)\rightarrow N\cup\{c\}$ is a function that assigns to each nonterminal history $h$ a member of $N\cup\{c\}$. (We can think of $c$ as representing nature.) If $P(h)=i$, then player $i$ moves after history $h$; if $P(h)=c$, then nature moves after $h$. Let $H_i=\{h:P(h)=i\}$ be the set of all histories after which player $i$ moves. \item $f_c$ is a function that associates with every history for which $P(h)=c$ a probability measure $f_c(\cdot \mid h)$ on $M_h$. Intuitively, $f_c(\cdot\mid h)$ describes the probability of nature's moves once history $h$ is reached. \item ${\cal I}_i$ is a partition of $H_i$ with the property that if $h$ and $h'$ are in the same cell of the partition then $M_h=M_{h'}$, i.e., the same set of moves is available for every history in a cell of the partition. Intuitively, if $h$ and $h'$ are in the same cell of ${\cal I}_i$, then $h$ and $h'$ are indistinguishable from $i$'s point of view; $i$ considers history $h'$ possible if the actual history is $h$, and vice versa. A cell $I\in{\cal I}_i$ is called an ($i$-)\emph{information set}. \item $u_i:Z\rightarrow \mathrm{R}$ is a payoff function for player $i$, assigning a real number ($i$'s payoff) to each run of the game. \end{itemize} An \emph{augmented game} is defined much like an extensive game. 
The only essential difference is that at each nonterminal history we not only determine the player moving but also her awareness level. There are also extra moves of nature that intuitively capture players' uncertainty regarding the awareness level of their opponents. Formally, given an extensive game $\Gamma=(N, M, H, P,f_c,\{{\cal I}_i:i\in N\},\{u_i:i\in N\})$, an {\em augmented game based on $\Gamma$} is a tuple $\Gamma^+=(N^+, M^+, H^+, P^+,f_c^+,\{{\cal I}_i^+:i\in N^+\},\{u_i^+:i\in N^+\},\{{A}_i^+ :i\in N^+\})$, where $(N^+, M^+, H^+, P^+,f_c^+,\{{\cal I}_i^+:i\in N^+\},\{u_i^+:i\in N^+\})$ is a standard extensive game with perfect recall\footnote{A game with perfect recall is one in which players remember all the actions they have performed and all the information sets they have passed through; see \cite{OR94} for the formal definition.} and ${A}_i^+:H_i^+\rightarrow 2^H$ describes $i$'s awareness level at each history at which he moves. $\Gamma^+$ must satisfy some consistency conditions. These conditions basically ensure that \fullv{ \begin{itemize} \item} a player's awareness level depends only on the information she has\commentout{;} \fullv{as captured by her information sets;} \fullv{\item} players do not forget histories that they were aware of; and \fullv{\item} there is common knowledge of (1) what the payoffs are in the underlying game and (2) what the information sets are in the underlying game. \fullv{\end{itemize}} The formal conditions are not needed in this paper, so we omit them here. An augmented game describes either the modeler's view of the game or the subjective view of the game of one of the players, and includes both moves of the underlying game and moves of nature that change awareness. A game with awareness collects all these different views, and describes, in each view, what view other players have.
Formally, a \emph{game with awareness based on $\Gamma = (N, M,H,P,f_c, \{{\cal I}_i:i\in N\},\{u_i: i\in N\})$} is a tuple $\Gamma^* = ({\cal G}, \Gamma^m, {\cal F})$, where \begin{itemize} \item ${\cal G}$ is a countable set of augmented games based on $\Gamma$, of which one is $\Gamma^m$; \item ${\cal F}$ maps an augmented game $\Gamma^+ \in{\cal G}$ and a history $h$ in $\Gamma^+$ such that $P^+(h)=i$ to a pair $(\Gamma^h,I)$, where $\Gamma^h\in {\cal G}$ and $I$ is an $i$-information set in game $\Gamma^h$. \end{itemize} Intuitively, $\Gamma^m$ is the game from the point of view of an omniscient modeler. If player $i$ moves at $h$ in game $\Gamma^+ \in {\cal G}$ and ${\cal F}(\Gamma^+,h) = (\Gamma^h,I)$, then $\Gamma^h$ is the game that $i$ believes to be the true game when the history is $h$, and $I$ consists of the set of histories in $\Gamma^h$ that $i$ currently considers possible. The augmented game $\Gamma^m$ and the mapping ${\cal F}$ must satisfy a number of consistency conditions. The conditions on the modeler's game ensure that the modeler is aware of all the players and moves of the underlying game, and that he understands how nature's moves work in the underlying game $\Gamma$. The game $\Gamma^m$ can be thought of as a description of ``reality''; it describes the effect of moves in the underlying game and how players' awareness levels change. The other games in ${\cal G}$ describe a player's subjective view of the situation. There are also ten constraints on the mapping ${\cal F}$ that capture desirable properties of awareness. Rather than describing all ten constraints here, we briefly describe a few of them, to give some idea of the intuition behind these constraints.
Suppose that ${\cal F}(\Gamma^+,h) = (\Gamma^h,I)$ and ${A}_i^+(h)=a$.\footnote{As in our earlier paper, we use the convention that the components of a (standard or augmented) game $\Gamma^+$ are labeled with the same superscript $+$, so that we have $M^+$, $H^+$, ${A}_i^+$, and so on.} Then the following conditions hold. \begin{itemize} \item[C1.]$\{\overline{h}: h \in H^h\} = a$, where $\overline{h}$ is the subsequence of $h$ consisting of all the moves in $h$ that are also in the set of moves $M$ of the underlying game $\Gamma$. \item[C2.] If $h' \in H^h$ and $P^h(h') = j$, then ${A}_j^h(h') \subseteq a$ and $M_{\overline{h}'}\cap \{m: \overline{h}'\cdot\langle m\rangle\in a\}= M^h_{h'}$. \item[C5.] If $h' \in H^+$, $P^+(h') = i$, ${A}_i^+(h')=a$, then if $h$ and $h'$ are in the same information set of $\Gamma^+$, then ${\cal F}(\Gamma^+,h') = (\Gamma^{h},I)$, while if $h$ is a prefix or a suffix of $h'$, then ${\cal F}(\Gamma^+,h') = (\Gamma^{h},I')$ for some $i$-information set $I'$. \item[C8.] For all histories $h'\in I$, there exists a prefix $h'_1$ of $h'$ such that $P^h(h'_1)=i$ and ${\cal F}(\Gamma^h,h'_1)=(\Gamma',I')$ iff there exists a prefix $h_1$ of $h$ such that $P^+(h_1)=i$ and ${\cal F}(\Gamma^+,h_1)=(\Gamma',I')$. Moreover, $h'_1\cdot \<m\rangle$ is a prefix of $h'$ iff $h_1\cdot \<m\rangle$ is a prefix of $h$. \item[C9.] There exists a history $h'\in I$ such that for every prefix $h''\cdot \<m\rangle$ of $h'$, if $P^h(h'')=j \in N^h$ and ${\cal F}(\Gamma^h,h'')=(\Gamma',I')$, then for all $h_1\in I'$, $h_1\cdot \<m\rangle\in H'$. \end{itemize} Suppose that ${\cal F}(\Gamma^+,h) = (\Gamma^h,I)$. Player $i$ moving at history $h$ in $\Gamma^+$ thinks the actual game is $\Gamma^h$. Moreover, $i$ thinks he is in the information set $I$ of $\Gamma^h$. C1 guarantees that the set of histories of the underlying game that player $i$ is aware of is exactly the set of histories of the underlying game that appear in $\Gamma^h$. 
C2 states that no player in $\Gamma^h$ can be aware of histories not in $a$. The second part of C2 implies that the set of moves available for player $j$ at $h'$ is just the set of moves that player $i$ is aware of that are available for $j$ at $\overline{h}'$ in the underlying game. C5 says that player $i$'s subjective view of the game changes only if $i$ becomes aware of more moves and is the same at histories in $H^+$ that $i$ cannot distinguish. C8 is a consequence of the perfect recall assumption. C8 says that if, at history $h$, $i$ considers $h'$ possible, then for every prefix $h_1'$ of $h'$ there is a corresponding prefix of $h$ where $i$ considers himself to be playing the same game, and similarly, for every prefix of $h$ there is a prefix of $h'$ where $i$ considers himself to be playing the same game. Moreover, $i$ makes the same move at these prefixes. The intuition behind condition C9 is that player $i$ knows that player $j$ only makes moves that $j$ is aware of. Therefore, player $i$ must consider at least one history $h'$ where he believes that every player $j$ made a move that $j$ was aware of. It follows from the conditions on augmented games, C1, C2, and C9 that there is a run going through $I$ where every player $j$ makes a move that player $i$ believes that $j$ is aware of. It may seem that by making ${\cal F}$ a function we cannot capture a player's uncertainty about the game being played or uncertainty about opponents' unawareness of histories. However, we can capture such uncertainty by folding it into nature's initial move in the game the player considers possible while moving. It should be clear that this gives a general approach to capturing such uncertainties. 
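Although the paper works purely mathematically, the tuple $\Gamma^* = ({\cal G},\Gamma^m,{\cal F})$ has a direct computational reading. The following Python sketch is our own illustration, not part of the formal definition: histories are tuples of moves, information sets are frozensets of histories, and ${\cal F}$ is a finite dictionary; all class and field names are invented.

```python
from dataclasses import dataclass

# A history is a tuple of moves; an information set is a frozenset of histories.

@dataclass
class AugmentedGame:
    """One view of the underlying game: the modeler's or some player's."""
    name: str
    histories: set   # H^+
    player: dict     # P^+: nonterminal history -> player, or 'c' for nature
    awareness: dict  # A^+: (player, history) -> set of underlying-game histories

@dataclass
class GameWithAwareness:
    games: dict      # name -> AugmentedGame; the countable set G
    modeler: str     # name of Gamma^m, which must be a key of `games`
    F: dict          # (game name, history) -> (game name, information set)

    def view(self, game_name, h):
        """F(Gamma^+, h): the game the mover at h thinks is being played,
        and the set of histories she currently considers possible."""
        return self.F[(game_name, h)]
```

Under such an encoding, the ten consistency constraints on ${\cal F}$ (C1--C9 among them) become mechanical checks over the dictionary.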
We identify a standard extensive game $\Gamma$ with the game $(\{\Gamma^m\},\Gamma^m,{\cal F})$, where (abusing notation slightly) $\Gamma^m = (\Gamma,\{{A}_i: i \in N\})$ and, for all histories $h$ in an $i$-information set $I$ in $\Gamma$, ${A}_i(h) = H$ and ${\cal F}(\Gamma^m,h) = (\Gamma^m,I)$. Thus, all players are aware of all the runs in $\Gamma$, and agree with each other and the modeler that the game is $\Gamma$. This is the \emph{canonical representation of $\Gamma$} as a game with awareness. In \cite{HR06}, we discussed generalizations of games with awareness to include situations where players may be aware of their own unawareness and, more generally, games where players may not have common knowledge of what the underlying game is; for example, players may disagree about what the payoffs or the information sets are. With these models, we can capture a situation where, for example, player $i$ may think that another player $j$ cannot make a certain move, when in fact $j$ can make such a move. For ease of exposition, we do not discuss these generalizations further here. However, it is not hard to show that the results of this paper can be extended to them in a straightforward way. Feinberg~\citeyear{Feinberg04,Feinberg05} also studied games with awareness. Feinberg~\citeyear{Feinberg05} gives a definition of extended Nash equilibrium in normal-form games. Feinberg~\citeyear{Feinberg04} deals with extensive-form games and defines solution concepts only indirectly, via a syntactic epistemic characterization. His approach lacks a more direct semantic framework, which our model provides. Li \citeyear{LI06b} has also provided a model of unawareness in extensive games, based on her earlier work on modeling unawareness \cite{LI06,LI06a}. See \cite{HR06} for some further discussion of the relation between these approaches and ours. 
\section{GENERALIZED SEQUENTIAL EQUILIBRIUM} \label{chap2:sec:seq} To explain generalized sequential equilibrium, we first review the notion of sequential equilibrium for standard games. \fullv{\subsection{SEQUENTIAL EQUILIBRIUM FOR STANDARD GAMES}} Sequential equilibrium is defined with respect to an {\em assessment}, a pair $(\vec{\sigma},\mu)$ where $\vec{\sigma}$ is a strategy profile consisting of \emph{behavioral strategies} and $\mu$ is a {\em belief system}, i.e., a function that determines for every information set $I$ a probability $\mu_I$ over the histories in $I$. Intuitively, if $I$ is an information set for player $i$, $\mu_I$ is $i$'s subjective assessment of the relative likelihood of the histories in $I$. Roughly speaking, an assessment is a sequential equilibrium if for all players $i$, at every $i$-information set, (a) $i$ chooses a best response given the beliefs he has about the histories in that information set and the strategies of other players, and (b) $i$'s beliefs are consistent with the strategy profile being played, in the sense that they are calculated by conditioning the probability distribution induced by the strategy profile over the histories on the information set. Note that $\mu_I$ is defined even if $I$ is reached with probability 0. Defining consistency at an information set that is reached with probability 0 is somewhat subtle. In that case, intuitively, once information set $I$ is reached player $i$ moving at $I$ must believe the game has been played according to an alternative strategy profile. In a sequential equilibrium, that alternative strategy profile consists of a small perturbation of the original assessment where every move is chosen with positive probability. Given a strategy profile $\vec{\sigma}$, let $\Pr_{\vec{\sigma}}$ be the probability distribution induced by $\vec{\sigma}$ over the possible histories of the game. 
Intuitively, $\Pr_{\vec{\sigma}}(h)$ is the product of the probability of each of the moves in $h$. For simplicity we assume $f_c>0$, so that if $\vec{\sigma}$ is such that every player chooses all of his moves with positive probability, then for every history $h$, $\Pr_{\vec{\sigma}}(h)>0$.\footnote{See \cite{Myerson} for a definition of sequential equilibrium in the case where nature chooses some of its moves with probability 0.} For any history $h$ of the game, define $\Pr_{\vec{\sigma}}(\cdot\mid h)$ to be the conditional probability distribution induced by $\vec{\sigma}$ over the possible histories of the game given that the current history is $h$. Intuitively, $\Pr_{\vec{\sigma}}(h'\mid h)$ is 0 if $h$ is not a prefix of $h'$, is 1 if $h=h'$, and is the product of the probability of each of the moves in the path from $h$ to $h'$ if $h$ is a prefix of $h'$. Formally, an assessment $(\vec{\sigma},\mu)$ is a sequential equilibrium if it satisfies the following properties: \begin{itemize} \item {\em Sequential rationality.} For every information set $I$ and player $i$ and every behavioral strategy $\sigma$ for player $i$, $${\mathrm{EU}}_i((\vec{\sigma},\mu)\mid I)\geq {\mathrm{EU}}_i(((\vec{\sigma}_{-i},\sigma),\mu)\mid I),$$ where ${\mathrm{EU}}_i((\vec{\sigma},\mu)\mid I)=\sum_{h\in I}\sum_{z\in Z}\mu_I(h)\Pr_{\vec{\sigma}}(z\mid h)u_i(z)$. 
\item {\em Consistency between belief system and strategy profile.} If $\vec{\sigma}$ consists of \emph{completely mixed} (behavior) strategies, that is, ones that assign positive probability to every action at every information set, then for every information set $I$ and history $h$ in $I$, $$\mu_I(h)=\frac{\Pr_{\vec{\sigma}}(h)}{\sum_{h'\in I}\Pr_{\vec{\sigma}}(h')}.$$ Otherwise, there exists a sequence $(\vec{\sigma}^n,\mu^n)$, $n=1,2,3,\ldots$, of assessments such that $\vec{\sigma}^n$ consists of completely mixed strategies, $(\vec{\sigma}^n,\mu^n)$ is consistent in the above sense, and $\lim_{n \tends \infty}(\vec{\sigma}^n,\mu^n)=(\vec{\sigma},\mu)$. \end{itemize} \fullv{ Sequential equilibrium is not a reasonable solution concept for games with awareness for the same reason that Nash equilibrium is not a reasonable solution concept for games with awareness; it requires that a player be aware of the set of possible strategies available to other players and to him.} In order to define a generalized notion of sequential equilibrium for games with awareness, we first need to define a generalized notion of assessment for games with awareness. We first need a generalized notion of strategy, which we defined in our earlier paper. Intuitively, a strategy describes what $i$ will do in every possible situation that can arise. This no longer makes sense in games with awareness, since a player no longer understands in advance all the possible situations that can arise. For example, player $i$ cannot plan in advance for what will happen if he becomes aware of something he is initially unaware of. We solved this problem in our earlier paper as follows. Let ${\cal G}_i=\{\Gamma'\in{\cal G}:\, \mbox{for some } \Gamma^+\in{\cal G} \mbox{ and }h\mbox{ in }\Gamma^+, \, P^+(h)=i\mbox{ and }{\cal F}(\Gamma^+,h)=(\Gamma',\cdot)\}$. Intuitively, ${\cal G}_i$ consists of the games that $i$ views as the real game in some history. 
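The quantities in the definitions above — $\Pr_{\vec{\sigma}}(h)$, $\Pr_{\vec{\sigma}}(h'\mid h)$, and the completely mixed case of consistency — are straightforward to compute. A minimal sketch, assuming histories are encoded as tuples of moves and `move_prob(prefix, m)` returns the (behavioral or nature) probability of move `m` after `prefix`; the function names and encoding are ours:

```python
def prob_of_history(h, move_prob):
    """Pr_sigma(h): the product, over the moves in h, of the probability with
    which the mover at each proper prefix chooses the next move (nature's
    f_c is folded into move_prob as well)."""
    p = 1.0
    for k in range(len(h)):
        p *= move_prob(h[:k], h[k])
    return p

def cond_prob(h2, h1, move_prob):
    """Pr_sigma(h2 | h1): 0 unless h1 is a prefix of h2; 1 if h1 == h2;
    otherwise the product of move probabilities on the path from h1 to h2."""
    if h2[:len(h1)] != h1:
        return 0.0
    p = 1.0
    for k in range(len(h1), len(h2)):
        p *= move_prob(h2[:k], h2[k])
    return p

def beliefs_at_infoset(I, prob):
    """The completely mixed case of consistency: mu_I(h) is Pr_sigma(h)
    conditioned on I.  If I is reached with probability 0, one must instead
    take a limit of completely mixed profiles, as in the definition."""
    total = sum(prob[h] for h in I)
    if total == 0:
        raise ValueError("I is reached with probability 0 under sigma")
    return {h: prob[h] / total for h in I}
```

The zero-probability branch deliberately raises: as in the definition, beliefs at unreached information sets are not determined by conditioning but by a limit of perturbed profiles.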
Rather than considering a single strategy in a game $\Gamma^* = ({\cal G}, \Gamma^m,{\cal F})$ with awareness, we considered a collection $\{\sigma_{i,\Gamma'}: \Gamma'\in {\cal G}_i \}$ of \emph{local strategies}. Intuitively, a local strategy $\sigma_{i,\Gamma'}$ for game $\Gamma'$ is the strategy that $i$ would use if $i$ were called upon to play and $i$ thought that the true game was $\Gamma'$. Thus, the domain of $\sigma_{i,\Gamma'}$ consists of pairs $(\Gamma^+,h)$ such that $\Gamma^+ \in {\cal G}$, $h$ is a history in $\Gamma^+$, $P^+(h) = i$, and ${\cal F}(\Gamma^+,h) = (\Gamma',I)$. Let $(\Gamma^h,I)^* = \{(\Gamma',h): {\cal F}(\Gamma',h)=(\Gamma^h,I)\}$; we call $(\Gamma^h,I)^*$ a {\em generalized information set}. \begin{definition} Given a game $\Gamma^* = ({\cal G},\Gamma^m,{\cal F})$ with awareness, a \emph{local strategy} $\sigma_{i,\Gamma'}$ for agent $i$ is a function mapping pairs $(\Gamma^+,h)$ such that $h$ is a history where $i$ moves in $\Gamma^+$ and ${\cal F}(\Gamma^+,h) = (\Gamma',I)$ to a probability distribution over $M'_{h'}$, the moves available at a history $h' \in I$, such that $\sigma_{i,\Gamma'}(\Gamma_1,h_1) = \sigma_{i,\Gamma'}(\Gamma_2,h_2)$ if $(\Gamma_1,h_1)$ and $(\Gamma_2,h_2)$ are in the same generalized information set. A {\em generalized strategy profile} of $\Gamma^* = ({\cal G},\Gamma^m,{\cal F})$ is a set of local strategies $\vec{\sigma} = \{\sigma_{i,\Gamma'}:i\in N,\Gamma'\in{\cal G}_i\}$. \fullv{ \end{definition} The belief system, the second component of the assessment, is a function from information sets $I$ to probability distribution over the histories in $I$. Intuitively, it captures how likely each of the histories in $I$ is for the player moving at $I$. For standard games this distribution can be arbitrary, since the player considers every history in the information set possible. This is no longer true in games with awareness. 
It is possible that a player is playing game $\Gamma_1$ but believes he is playing a different game $\Gamma_2$. Furthermore, in an augmented game, there may be some histories in an $i$-information set that include moves of which $i$ is not aware; player $i$ cannot consider these histories possible. To deal with these problems, we define $\mu$ to be a {\em generalized belief system} if it is a function from generalized information sets to a probability distribution over the set of histories in the generalized information set that the player considers possible. \begin{definition} } A {\em generalized belief system} $\mu$ is a function that associates each generalized information set $(\Gamma', I)^*$ with a probability distribution $\mu_{\Gamma',I}$ over the set $\{(\Gamma',h):h\in I\}$. A {\em generalized assessment} is a pair $(\vec{\sigma},\mu)$, where $\vec{\sigma}$ is a generalized strategy profile and $\mu$ is a generalized belief system. \end{definition} \commentout{ We want to define a notion of generalized sequential equilibrium that captures the intuition that for every player $i$ and every $i$-information set $I$, if $i$ thinks he is actually playing in the information set $I$ of game $\Gamma'$, then his local strategy $\sigma_{i,\Gamma'}$ is a best response to the local strategies of other players in~$\Gamma'$ and his beliefs about the likelihood of the histories in $I$, even if $I$ is reached with probability 0. Let ${\mathrm{EU}}_{i,\Gamma'}((\vec{\sigma},\mu)\mid I)$ be the conditional expected payoff for $i$ in the game $\Gamma'$ given that strategy profile $\vec{\sigma}$ is used, information set $I$ has been reached, and player $i$'s beliefs about the histories in $I$ are described by $\mu_{\Gamma',I}$. 
As in the case of generalized Nash equilibrium, the only strategies in $\vec{\sigma}$ that are needed to compute ${\mathrm{EU}}_{i,\Gamma'}((\vec{\sigma},\mu)\mid I)$ are the strategies actually used in $\Gamma'$; indeed, all that is needed is the restriction of these strategies to information sets that arise in $\Gamma'$. We also do not need the whole generalized belief system; only $\mu_{\Gamma',I}$ is needed. A \emph{generalized sequential equilibrium} of $\Gamma^* = ({\cal G},\Gamma^m,{\cal F})$ is a generalized assessment $(\vec{\sigma}^*,\mu^*)$ such that for every generalized information set $(\Gamma',I)^*$, the local strategy $\sigma^*_{i,\Gamma'}$ is a best response to $\vec{\sigma}^*_{-(i,\Gamma')}$ given $i$'s beliefs about the histories in $I$, where $\vec{\sigma}^*_{-(i,\Gamma')}$ is the set of all local strategies in $\vec{\sigma}^*$ other than $\sigma_{i,\Gamma'}^*$, and $\mu^*$ is consistent with $\vec{\sigma}^*$. More formally, $(\vec{\sigma}^*,\mu^*)$ must satisfy the following two conditions \begin{itemize} \item {\em Generalized sequential rationality.} For every player $i$, generalized $i$-information set $(\Gamma',I)^*$, and local strategy $\sigma$ for $i$ in $\Gamma'$, $${\mathrm{EU}}_{i,\Gamma'}((\vec{\sigma}^*,\mu^*)\mid I)\geq {\mathrm{EU}}_{i,\Gamma'}(((\vec{\sigma}^*_{-(i,\Gamma')},\sigma),\mu^*)\mid I),$$ where ${\mathrm{EU}}_{i,\Gamma'}((\vec{\sigma}^*,\mu^*)\mid I)=\sum_{h\in I}\sum_{z\in Z'}\mu^*_{\Gamma',I}(h)\Pr_{\vec{\sigma}^*}(z\mid h)u_i'(z)$. 
\item {\em Consistency between generalized belief system and generalized strategy profile.} If, for every generalized information set $(\Gamma',I)^*$, $\sum_{h\in I}\Pr_{\vec{\sigma}^*}(h)>0$, then for all $h\in I$ $$\mu^*_{\Gamma',I}(h)=\frac{\Pr_{\vec{\sigma}^*}(h)}{\sum_{h'\in I}\Pr_{\vec{\sigma}^*}(h')}.$$ Otherwise, there exists a sequence of generalized assessments $(\vec{\sigma}^i,\mu^i)$ such that every player chooses all of his moves with positive probability, $\mu^i$ is consistent with $\vec{\sigma}^i$ and $\lim_{i}(\vec{\sigma}^i,\mu^i)=(\vec{\sigma},\mu)$. \end{itemize} } We can now define what it means for a generalized assessment $(\vec{\sigma}^*,\mu^*)$ to be a \emph{generalized sequential equilibrium} of a game with awareness. The definition is essentially identical to that of $(\vec{\sigma},\mu)$ being a sequential equilibrium; the use of ${\mathrm{EU}}_i$ in the definition of sequential rationality is replaced by ${\mathrm{EU}}_{i,\Gamma'}$, where ${\mathrm{EU}}_{i,\Gamma'}((\vec{\sigma}^*,\mu^*)\mid I)$ is the conditional expected payoff for $i$ in the game $\Gamma'$, given that strategy profile $\vec{\sigma}^*$ is used, information set $I$ has been reached, and player $i$'s beliefs about the histories in $I$ are described by $\mu^*_{\Gamma',I}$. We leave the straightforward modifications of the definition to the reader. It is easy to see that $(\vec{\sigma},\mu)$ is a sequential equilibrium of a standard game $\Gamma$ iff $(\vec{\sigma},\mu)$ is a (generalized) sequential equilibrium of the canonical representation of $\Gamma$ as a game with awareness. Thus, our definition of generalized sequential equilibrium generalizes the standard definition. To better understand the concept of generalized sequential equilibrium, consider the game shown in Figure~\ref{chap2:fig:sequen_gammam}. 
Suppose that both players 1 and 2 are aware of all runs of the game, but player 1 (falsely) believes that player 2 is aware only of the runs not involving $L$ and believes that player 1 is aware of these runs as well. Also suppose that player 2 is aware of all of this; that is, player 2's view of the game is the same as the modeler's view of the game $\Gamma^m$ shown in Figure~\ref{chap2:fig:sequen_gammam}. While moving at node 1.1, player 1 considers the true game to be identical to the modeler's game except that from player 1's point of view, while moving at 2.1, player 2 believes the true game is $\Gamma^{2.2}$, shown in Figure~\ref{chap2:fig:sequen_gamma22}. \begin{figure} \caption{The modeler's game $\Gamma^m$.} \label{chap2:fig:sequen_gammam} \end{figure} \begin{figure} \caption{Player 2's view of the game from the point of view of player 1.} \label{chap2:fig:sequen_gamma22} \end{figure} This game has a unique generalized sequential equilibrium where player 2 chooses $r$ and player 1 chooses $A$ in $\Gamma^{2.2}$. Believing that player 2 will move $r$, player 1 best responds by choosing $L$ at node 1.1. Since player 2 knows all this at node 2.1 in $\Gamma^m$, she chooses $l$ at this node. Thus, if players follow their equilibrium strategies, the payoff vector is $(-10,-1)$. In this situation, player 2 would be better off if she could let player 1 know that she is aware of move $L$, since then player 1 would play $A$ and both players would receive 1. On the other hand, if we slightly modify the game by making $u_2(\<L,l\rangle)=3$, then player 2 would benefit from the fact that player 1 believes that she is unaware of move $L$. \fullv{\subsection{EXISTENCE OF GENERALIZED EQUILIBRIA}} We now want to show that every game with awareness $\Gamma^*$ has at least one generalized sequential equilibrium. 
To prove that a game with awareness $\Gamma^*$ has a generalized Nash equilibrium, we constructed a standard game $\Gamma^{\nu}$ with perfect recall and showed that there exists a one-to-one correspondence between the set of generalized Nash equilibria of $\Gamma^*$ and the set of Nash equilibria of $\Gamma^{\nu}$. Intuitively, $\Gamma^\nu$ is constructed by essentially ``gluing together'' all the games $\Gamma' \in {\cal G}$, except that only histories in $\Gamma'$ that can actually be played according to the players' awareness level are considered. More formally, given a game $\Gamma^* = ({\cal G}, \Gamma^m, {\cal F})$ with awareness, let $\nu$ be a probability on ${\cal G}$ that assigns each game in ${\cal G}$ positive probability. (Here is where we use the fact that ${\cal G}$ is countable.) For each $\Gamma'\in {\cal G}$, let $\lfloor H^{\Gamma'}\rfloor=\{h\in H^{\Gamma'}:$ for every prefix $h_1\cdot \<m\rangle$ of $h$, if $P'(h_1)=i \in N$ and ${\cal F}(\Gamma',h_1)=(\Gamma'',I)$, then for all $h_2\in I$, $h_2\cdot\<m\rangle\in H''\}$. The histories in $\lfloor H^{\Gamma'}\rfloor$ are the ones that can actually be played according to the players' awareness levels. 
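The set $\lfloor H^{\Gamma'}\rfloor$ is computable as a direct filter over histories: keep $h$ iff every move in it could have been made given what the mover then considered possible. A minimal sketch under our own simplified encoding (histories as tuples of moves, ${\cal F}$ for the one game keyed by history; these representational choices are ours, not the paper's):

```python
def playable_histories(H, P, F, histories_of):
    """Compute the floor set: h is kept iff for every prefix h1 + (m,),
    if P(h1) is a player i and F(h1) = (Gamma'', I), then h2 + (m,) is a
    history of Gamma'' for every h2 in I."""
    def ok(h):
        for k in range(len(h)):
            h1, m = h[:k], h[k]
            mover = P.get(h1)
            if mover is None or mover == 'c':   # nature: no awareness constraint
                continue
            game2, I = F[h1]
            if any(h2 + (m,) not in histories_of[game2] for h2 in I):
                return False
        return True
    return {h for h in H if ok(h)}
```

In the tiny test below, the move `b` is excluded because the mover's subjective game contains no history extending the empty history by `b`.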
Let $\Gamma^\nu$ be the standard game such that \begin{itemize} \item $N^\nu = \{(i,\Gamma'):\Gamma'\in{\cal G}_i\}$; \item $M^\nu = {\cal G}\cup\bigcup_{\Gamma'\in{\cal G}}\lfloor M^{\Gamma'}\rfloor$, where $\lfloor M^{\Gamma'}\rfloor$ is the set of moves that occur in $\lfloor H^{\Gamma'}\rfloor$; \item $H^\nu = \{\langle\, \rangle\}\cup\{\langle\Gamma'\rangle\cdot h:\Gamma'\in{\cal G}, h\in \lfloor H^{\Gamma'}\rfloor\}$; \item $P^\nu(\langle\, \rangle)=c$, and $$P^\nu(\langle\Gamma^h\rangle\cdot h') = \left\{ \begin{array}{ll} (i,\Gamma^{h'}) &\mbox{if $P^h(h') = i \in N$ and}\\ \ & {\cal F}(\Gamma^h,h')=(\Gamma^{h'}, \cdot),\\ c &\mbox{if $P^h(h') = c$;}\end{array} \right.$$ \item $f_c^\nu(\Gamma'|\langle\, \rangle)= \nu(\Gamma')$ and $f_c^\nu(\cdot|\langle\Gamma^h\rangle\cdot h') = f_c^h(\cdot|h')$ if $P^h(h') = c$; \item ${\cal I}^\nu_{i,\Gamma'}$ is a partition of $H^{\nu}_{i,\Gamma'}$ where two histories $\langle\Gamma^1\rangle\cdot h^1$ and $\langle\Gamma^2\rangle\cdot h^2$ are in the same information set $\langle\Gamma',I\rangle^*$ iff $(\Gamma^{1},h^1)$ and $(\Gamma^{2},h^2)$ are in the same generalized information set $(\Gamma',I)^*$; \item $u_{i,\Gamma'}^\nu(\langle\Gamma^h\rangle\cdot z)= \left\{ \begin{array}{ll} u_i^h(z) &\mbox{if $\Gamma^h = \Gamma',$}\\ 0 &\mbox{if $\Gamma^h \ne \Gamma'.$}\end{array} \right.$ \end{itemize} Unfortunately, while it is the case that there is a 1-1 correspondence between the Nash equilibria of $\Gamma^{\nu}$ and the generalized Nash equilibria of $\Gamma^*$, \commentout{as we show in the full paper}, this correspondence breaks down for sequential equilibria. \commentout{Thus, we need to take a different approach to showing that a generalized sequential equilibrium always exists.} \fullv{ To see why, consider the modified version of prisoner's dilemma $\Gamma^p$, described in Figure~\ref{chap2:fig:game2}. 
\begin{figure} \caption{$\Gamma^p$: modified prisoner's dilemma.} \label{chap2:fig:game2} \end{figure} Besides being able to cooperate ($C_A$) or defect ($D_A$), player $A$, who moves first, also has the option of escaping ($E_A$). If player $A$ escapes, then the game is over; if player $A$ cooperates or defects, then player $B$ may also cooperate ($C_B$) or defect ($D_B$). Suppose further that in the modeler's game, \begin{itemize} \item both $A$ and $B$ are aware of all histories of $\Gamma^p$; \item with probability $p$, $A$ believes that $B$ is unaware of the extra move $E_A$, and with probability $1-p$, $A$ believes $B$ is aware of all histories; \item if $A$ believes $B$ is unaware of $E_A$, then $A$ believes that $B$ believes that it is common knowledge that the game being played contains all histories but $E_A$; \item if $A$ believes $B$ is aware of $E_A$, then $A$ believes that $B$ believes that there is common knowledge that the game being played is $\Gamma^p$; \item $B$ believes that it is common knowledge that the game being played is $\Gamma^p$. \end{itemize} We need four augmented games to model this situation: \begin{itemize} \item $\Gamma^m$ is the game from the modeler's point of view; \item $\Gamma^A$ is the game from $A$'s point of view when she is called to move in the modeler's game; \item $\Gamma^{B.1}$ is the game from $B$'s point of view when he is called to move in game $\Gamma^A$ after nature chooses unaware$_B$, and is also the game from $A$'s point of view when she is called to move in $\Gamma^{B.1}$; and \item $\Gamma^{B.2}$ is the game from $B$'s point of view when he is called to move in game $\Gamma^A$ after nature chooses aware$_B$; $\Gamma^{B.2}$ is also the game from $B$'s point of view when he is called to move at $\Gamma^m$ and the game from $A$'s point of view when she is called to move in $\Gamma^{B.2}$. 
\end{itemize} Although $\Gamma^m$ and $\Gamma^{B.2}$ have the same game tree as $\Gamma^p$, they are different augmented games, since the ${\cal F}$ function is defined differently at histories in these games. For example, ${\cal F}(\Gamma^m,\langle\ \rangle)=(\Gamma^A,\{$unaware$_B$,aware$_B\})\ne (\Gamma^{B.2},\langle\ \rangle)={\cal F}(\Gamma^{B.2},\langle\ \rangle)$. For this reason, we use different labels for the nodes of these games. Let $A.3$ and $B.2$ (resp., $A.2$ and $B.2$) be the labels of the nodes in game $\Gamma^m$ (resp., $\Gamma^{B.2}$) corresponding to $A$ and $B$ in $\Gamma^p$, respectively. $\Gamma^A$ and $\Gamma^{B.1}$ are shown in Figures~\ref{chap2:fig:gammaA} and \ref{chap2:fig:gammaB1}, respectively.\footnote{We abuse notation and use the same label for nodes in different augmented games that are in the same generalized information set. For example, $A.3$ is a label at both $\Gamma^m$ and $\Gamma^A$.} \begin{itemize} \item In the modeler's game $\Gamma^m$, $A$ believes she is playing game $\Gamma^{A}$, and $B$ believes he is playing game $\Gamma^{B.2}$. \item In game $\Gamma^{A}$, nature chooses move unaware$_B$ with probability $p$ and aware$_B$ with probability $1-p$. Then $A$ moves and believes she is playing $\Gamma^{A}$. At node $B.1$, $B$ believes he is playing $\Gamma^{B.1}$, and at node $B.2$, $B$ believes he is playing $\Gamma^{B.2}$. \item In game $\Gamma^{B.1}$, $A$ and $B$ both believe that the game is $\Gamma^{B.1}$. \item In game $\Gamma^{B.2}$, $A$ and $B$ both believe that the game is $\Gamma^{B.2}$. \end{itemize} \begin{figure} \caption{$\Gamma^A$.} \label{chap2:fig:gammaA} \end{figure} \begin{figure} \caption{$\Gamma^{B.1}$.} \label{chap2:fig:gammaB1} \end{figure} The game $\Gamma^{\nu}$ is the result of pasting together $\Gamma^m$, $\Gamma^A$, $\Gamma^{B.1}$, and $\Gamma^{B.2}$. There are 5 players: $(A,\Gamma^A)$, $(A,\Gamma^{B.1})$, $(A,\Gamma^{B.2})$, $(B,\Gamma^{B.1})$, and $(B,\Gamma^{B.2})$. 
$(A,\Gamma^{B.1})$ and $(B,\Gamma^{B.1})$ are playing standard prisoner's dilemma and therefore both should defect with probability 1; $(B,\Gamma^{B.1})$ must believe he is in the history where $(A,\Gamma^{B.1})$ defected with probability 1. $(A,\Gamma^A)$ and $(A,\Gamma^{B.2})$ choose the extra move $E_A$ with probability 1, since it gives $A$ a payoff of 5. The subtlety arises in the beliefs of $(B,\Gamma^{B.2})$ in the generalized information set $(\Gamma^{B.2},\{C_A,D_A\})^*$, since this generalized information set is reached with probability zero. Note that $(\Gamma^{B.2},\{C_A,D_A\})^* = \{\langle\Gamma^m,C_A\rangle,$ $\langle\Gamma^m,D_A\rangle$, $\langle\Gamma^A,\mbox{aware}_B,C_A\rangle$, $\langle\Gamma^A,\mbox{aware}_B,D_A\rangle, \langle\Gamma^{B.2},C_A\rangle,$ $\langle\Gamma^{B.2},D_A\rangle\}.$ By the definition of sequential equilibrium, player $(B,\Gamma^{B.2})$ will have to consider a sequence of strategies where all these histories are assigned positive probability. Although in general this is not a problem, note that $(B,\Gamma^{B.2})$ is meant to represent the type of player $B$ that considers only histories in game $\Gamma^{B.2}$ possible. Thus, intuitively, he should assign positive probability only to the histories $\{\langle\Gamma^{B.2},C_A\rangle,\langle\Gamma^{B.2},D_A\rangle\}$. To see how this leads to a problem, first note that there is a sequential equilibrium of $\Gamma^{\nu}$ where $(B,\Gamma^{B.2})$ believes with probability 1 that the true history is $\langle\Gamma^A,$aware$_B,C_A\rangle$, $(A,\Gamma^{B.2})$ chooses $E_A$ with probability 1, and $(B,\Gamma^{B.2})$ chooses $C_B$ with probability 1. It is rational for $(B,\Gamma^{B.2})$ to choose $C_B$ because $(B,\Gamma^{B.2})$ assigns probability 1 to the first move of nature in $\Gamma^{\nu}$ being $\Gamma^A$. 
Since his utility is 0 for every run in $\Gamma^{\nu}$ whose first move is $\Gamma^A$, his expected utility is 0 no matter what move he makes at the generalized information set, given his beliefs. There is no reasonable definition of generalized sequential equilibrium corresponding to this sequential equilibrium of $\Gamma^{\nu}$. Player $B$, while moving at node $B.2$, would never cooperate, since this is a strictly dominated strategy for him in the game that he considers to be the actual game, namely $\Gamma^{B.2}$. The problem is that there is nothing in the definition of sequential equilibrium that guarantees that the belief system of a sequential equilibrium in $\Gamma^{\nu}$ assigns probability zero to histories that players are unaware of in the game $\Gamma^*$ with awareness. We want to define a modified notion of sequential equilibrium for standard games that guarantees that the belief system in $\Gamma^{\nu}$ associates each information set with a probability distribution over a pre-specified subset of the histories in the information set, which consists only of the histories in the information set player $i$ actually considers possible. In this example, the pre-specified subset would be $\{\langle\Gamma^{B.2},C_A\rangle,\langle\Gamma^{B.2},D_A\rangle\}$. } \section{CONDITIONAL SEQUENTIAL EQUILIBRIUM} \label{chap2:sec:stronseq} In the standard definition of sequential equilibrium for extensive games, it is implicitly assumed that every player considers all histories in his information set possible. This is evident from the fact that if a strategy profile that is part of a sequential equilibrium assigns positive probability to every move, then by the consistency requirement the belief system also assigns positive probability to every history of every information set of the game. Therefore, this notion of equilibrium is not strong enough to capture situations where a player is certain that some histories in his information set will not occur. 
The notion of {\em conditional sequential equilibrium}, which we now define, is able to deal with such situations. It generalizes sequential equilibrium: in a game where every player considers every history in his information set possible, the set of conditional sequential equilibria and the set of sequential equilibria coincide. Given a standard extensive game $\Gamma$, define a {\em possibility system} ${\cal K}$ on $\Gamma$ to be a function that determines for every information set $I$ a nonempty subset of $I$ consisting of the histories in $I$ that the player moving at $I$ considers possible. We assume that ${\cal K}$ is common knowledge among players of the game, so that every player understands what histories are considered possible by everyone else in the game. If $I$ is an $i$-information set, intuitively $i$ should be indifferent among all runs that go through histories in $I-{\cal K}(I)$, since $i$ believes that those runs will not occur and every other player knows that. Thus, for a given $\Gamma$, a possibility system ${\cal K}$ must satisfy the following requirement: if $z$ and $z'$ are two runs going through histories in $I-{\cal K}(I)$ and $I$ is an $i$-information set, then $u_i(z)=u_i(z')$. Given a pair $(\Gamma,{\cal K})$, a {\em ${\cal K}$-assessment} is a pair $(\vec{\sigma},\mu)$, where $\vec{\sigma}$ is a strategy profile of $\Gamma$, and $\mu$ is a {\em restricted belief system}, i.e., a function that determines for every information set $I$ of $\Gamma$ a probability $\mu_I$ over the histories in ${\cal K}(I)$. Intuitively, if $I$ is an information set for player $i$, $\mu_I$ is $i$'s subjective assessment of the relative likelihood of the histories player $i$ considers possible while moving at $I$, namely ${\cal K}(I)$. 
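The requirement just stated, indifference among runs through histories the mover has ruled out, can be checked mechanically. A sketch under assumed encodings (payoffs as a dictionary, runs through a history supplied by a callback, information sets as frozensets so they can key ${\cal K}$; all names and representations are ours):

```python
def valid_possibility_system(infosets, K, runs_through, u):
    """Check: for each i-information set I, all runs passing through
    histories in I - K(I) give player i the same payoff.
    infosets: list of (player, I) pairs;
    K: dict mapping I to the nonempty subset K(I) of I;
    runs_through(h): the runs (terminal histories) going through h;
    u[(i, z)]: player i's payoff on run z."""
    for i, I in infosets:
        impossible = set(I) - set(K[I])
        payoffs = {u[(i, z)] for h in impossible for z in runs_through(h)}
        if len(payoffs) > 1:
            return False
    return True
```

The check is vacuous when ${\cal K}(I)=I$ for all $I$, matching the observation that conditional sequential equilibrium then reduces to the standard notion.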
As in the definition of sequential equilibrium, a ${\cal K}$-assessment $(\vec{\sigma},\mu)$ is a {\em conditional sequential equilibrium with respect to ${\cal K}$} if (a) at every information set where a player moves, he chooses a best response given the beliefs he has about the histories that he considers possible in that information set and the strategies of other players, and (b) his restricted beliefs are consistent with the strategy profile being played and the possibility system, in the sense that they are calculated by conditioning the probability distribution induced by the strategy profile on the histories considered possible in the information set. \commentout{ More formally, $(\vec{\sigma},\mu)$ must satisfy the following two properties: \begin{itemize} \item {\em Sequential rationality:} For every information set $I$, player $i$, and behavioral strategy $\sigma$ for player $i$, $${\mathrm{EU}}_i((\vec{\sigma},\mu)\mid I)\geq {\mathrm{EU}}_i(((\vec{\sigma}_{-i},\sigma),\mu)\mid I),$$ where ${\mathrm{EU}}_i((\vec{\sigma},\mu)\mid I)=\sum_{h\in {\cal K}(I)}\sum_{z\in Z}\mu_I(h)\Pr_{\vec{\sigma}}(z\mid h)u_i(z)$. \item {\em Consistency:} For every information set $I$, if $\sum_{h\in {\cal K}(I)}\Pr_{\vec{\sigma}}(h)>0$, then for every $h\in {\cal K}(I)$ $$\mu_I(h)=\frac{\Pr_{\vec{\sigma}}(h)}{\sum_{h'\in {\cal K}(I)}\Pr_{\vec{\sigma}}(h')}.$$ Otherwise, there exists a sequence of ${\cal K}$-assessments $(\vec{\sigma}^i,\mu^i)$ such that every player chooses all of his moves with positive probability, $\mu^i$ is consistent with $\vec{\sigma}^i$ and ${\cal K}$ in the above sense, and $(\vec{\sigma}^i,\mu^i)$ converges pointwise to $(\vec{\sigma},\mu)$.
\end{itemize} } Formally, the definition of $(\vec{\sigma},\mu)$ being a conditional sequential equilibrium is identical to that of sequential equilibrium, except that the summations in the definitions of ${\mathrm{EU}}_i((\vec{\sigma},\mu)\mid I)$ and $\mu_I(h)$ are taken over histories in ${\cal K}(I)$ rather than histories in $I$. It is immediate that if ${\cal K}(I)=I$ for every information set $I$ of the game, then the set of conditional sequential equilibria with respect to ${\cal K}$ coincides with the set of sequential equilibria. The next theorem shows that the set of conditional sequential equilibria is nonempty for a large class of extensive games that includes $\Gamma^{\nu}$. \begin{theorem} \label{chap2:thm:conditional1} Let $\Gamma$ be an extensive game with perfect recall and countably many players such that (a) each player has only finitely many pure strategies and (b) each player's payoff depends only on the strategy of finitely many other players. Let ${\cal K}$ be an arbitrary possibility system. Then there exists at least one ${\cal K}$-assessment that is a conditional sequential equilibrium of $\Gamma$ with respect to ${\cal K}$. \end{theorem} We now prove that every game with awareness has a generalized sequential equilibrium by defining a possibility system ${\cal K}$ on $\Gamma^{\nu}$ and showing that there is a one-to-one correspondence between the set of conditional sequential equilibria of $\Gamma^{\nu}$ with respect to ${\cal K}$ and the set of generalized sequential equilibria of $\Gamma^*$.
\begin{theorem} \label{chap2:thm:conditional2} For all probability measures $\nu$ on ${\cal G}$, if $\nu$ gives positive probability to all games in ${\cal G}$, and ${\cal K}(\langle\Gamma',I\rangle^*)=\{\langle\Gamma',h\rangle:h\in I\}$ for every information set $\langle\Gamma',I\rangle^*$ of $\Gamma^{\nu}$, then $(\vec{\sigma}',\mu')$ is a generalized sequential equilibrium of $\Gamma^*$ iff $(\vec{\sigma},\mu)$ is a conditional sequential equilibrium of $\Gamma^{\nu}$ with respect to ${\cal K}$, where $\sigma_{i,\Gamma'}(\langle\Gamma^h\rangle\cdot h')=\sigma'_{i,\Gamma'}(\Gamma^h,h')$ and $\mu'_{\Gamma', I}=\mu_{\langle\Gamma',I\rangle^*}$. \end{theorem} Since $\Gamma^{\nu}$ satisfies all the conditions of Theorem~\ref{chap2:thm:conditional1}, it easily follows from Theorems~\ref{chap2:thm:conditional1} and \ref{chap2:thm:conditional2} that every game with awareness has at least one generalized sequential equilibrium. Although it is not true that every conditional sequential equilibrium is also a sequential equilibrium of an arbitrary game, the next theorem shows there is a close connection between these notions of equilibrium. If $(\vec{\sigma},\mu)$ is a conditional sequential equilibrium with respect to some possibility system ${\cal K}$, then there exists a belief system $\mu'$ such that $(\vec{\sigma},\mu')$ is a sequential equilibrium. \begin{theorem} \label{chap2:thm:conn} For every extensive game $\Gamma$ with countably many players where each player has finitely many pure strategies and for every possibility system ${\cal K}$, if $(\vec{\sigma},\mu)$ is a conditional sequential equilibrium of $\Gamma$ with respect to ${\cal K}$, then there exists a belief system $\mu'$ such that $(\vec{\sigma}, \mu')$ is a sequential equilibrium of $\Gamma$. 
\end{theorem} \section{RATIONALIZABILITY AND GENERALIZED NASH EQUILIBRIUM} \label{chap2:sec:grat} In this section, we analyze the relationship between the notions of rationalizability and generalized Nash equilibrium, providing some more intuition about the latter. The usual justification for Nash equilibrium is that a player's strategy must be a best response to the strategies selected by other players in the equilibrium, because he can deduce what those strategies are. However, in most strategic situations, it is not the case that a player can deduce the strategies used by other players. Since every player tries to maximize his expected payoff and this is common knowledge, the best that a player can hope to do is to deduce a set of reasonable strategies for the other players. Here, we take a ``reasonable strategy'' to be a best response to some reasonable beliefs a player might hold about the strategy profile being played. This is the intuition that the {\em rationalizability} solution concept tries to capture. Even though a notion of rationalizability for extensive-form games was proposed by Pearce \citeyear{Pearce84}, rationalizability is more widely applied in normal-form games. In this section, we explore the relationship between rationalizability and generalized Nash equilibrium in games with awareness where in the underlying game each player moves only once, and these moves are made simultaneously (or, equivalently, a player does not know the moves made by other players before making his own move). We show that, given an underlying game $\Gamma$ satisfying this requirement, a pure strategy profile contains only rationalizable strategies iff it is the strategy profile used by the players in the modeler's game in some (pure) generalized Nash equilibrium of a game $\Gamma^*$ with awareness. 
If we think of rationalizability as characterizing ``best response to your beliefs'' and Nash equilibrium characterizing ``best response to what is actually played'', then this result shows that in the framework of games with awareness, since the game is not common knowledge, the line between these two notions is somewhat blurred. We start by reviewing the notion of rationalizability for standard normal-form games. Let ${\cal C}_i$ be the set of available pure strategies for player $i$; ${\cal C}=\times_{i\in N}{\cal C}_i$ is thus the set of pure strategy profiles. Let $\Delta(M)$ denote the set of all probability distributions on $M$. Suppose that each player $i$ is rational and is commonly known to choose a strategy from a subset ${\cal D}_i$ of ${\cal C}_i$. Let ${\cal D}_{-i}=\times_{j\ne i}{\cal D}_j$ and \begin{eqnarray} & & B({\cal D}_{-i})=\{argmax_{s_i\in{\cal C}_i}{\mathrm{EU}}_i((s_i,\pi({\cal D}_{-i}))):\nonumber\\ & & \mbox{ for some }\pi\in\Delta({\cal D}_{-i})\}; \nonumber \end{eqnarray} that is, $B({\cal D}_{-i})$ consists of the strategies in ${\cal C}_i$ that are best responses to some belief that player $i$ could have about the strategies other players are using. The set ${\cal S}=\times_{i\in N}{\cal S}_i$ of {\em correlated rationalizable strategies} is characterized by the following two properties: (a) for all $i \in N$, ${\cal S}_i\subseteq B({\cal S}_{-i})$ and (b) ${\cal S}$ is the largest set satisfying condition (a), in the sense that, for every set of strategy profiles ${\cal D}$ satisfying (a), we have that ${\cal D}\subseteq {\cal S}$. It is not hard to show that for every player $i$, ${\cal S}_i= B({\cal S}_{-i})$. A strategy $s_i\in {\cal S}_i$ is called a {\em correlated rationalizable strategy for player $i$}.
\footnote{From now on, we use $\vec{s}=(s_1,\ldots,s_n)$ to denote pure strategy profiles, and will continue to use $\vec{\sigma}$ for possibly nonpure strategy profiles.} \footnote{In the literature, it is often assumed that each player chooses his strategy independently of the others and that this is common knowledge. If we make this assumption, we get a somewhat stronger solution concept (at least, if $|N| \ge 3$), which we call \emph{uncorrelated rationalizability}. Essentially the same results as we prove here for correlated rationalizability hold for uncorrelated rationalizability; we omit further details here.} \commentout{ Formally, suppose that each player is rational and is commonly known to choose a strategy from a subset ${\cal D}_i$ of ${\cal C}_i$. Let ${\cal D}_{-i}=\times_{j\ne i}{\cal D}_j$ and $$O({\cal D}_{-i})=\{argmax_{s_i\in{\cal C}_i}{\mathrm{EU}}_i((s_i,\times_{j\in N-\{i\}}\pi_j({\cal D}_{j}))):\mbox{ for some }\pi_j\in\Delta({\cal D}_j)\};$$ that is, $O({\cal D}_{-i})$ consists of the strategies in ${\cal C}_i$ that are best responses to some belief that player $i$ could have about the strategies other players are using. The set ${\cal S}^u=\times_{i\in N}{\cal S}_i^u$ of {\em uncorrelated rationalizable strategies} is characterized by the following two properties: (a) for all $i \in N$, ${\cal S}_i^u\subseteq O({\cal S}_{-i}^u)$ and (b) ${\cal S}^u$ is the largest set satisfying condition (a), in the sense that, for every set of strategy profiles ${\cal D}$ satisfying (a), we have that ${\cal D}\subseteq {\cal S}^u$. Again, it is not hard to show that for every player $i$, ${\cal S}^u_i= O({\cal S}_{-i}^u)$. A strategy $s_i^u\in {\cal S}_i^u$ is called an {\em uncorrelated rationalizable strategy for player $i$}. } It turns out that we can construct ${\cal S}$ by the following iterative procedure. Let ${\cal C}_i^{0}={\cal C}_i$ for all $i\in N$. Define ${\cal C}_i^{j}=B({\cal C}_{-i}^{j-1})$ for $j\geq 1$.
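This iterative procedure is easy to prototype. The Python sketch below is our illustration (all names are ours); it approximates the best-response operator $B$ for a two-player game by sampling beliefs $\pi$ on a finite grid of the simplex $\Delta({\cal D}_{-i})$. The grid is exact for the small integer-payoff example in the usage note, but only an approximation of $B$ in general.

```python
from itertools import product

def grid_beliefs(n, steps=20):
    """Probability vectors of length n with coordinates k/steps: a finite
    grid standing in for the full simplex Delta(D_-i)."""
    if n == 1:
        return [(1.0,)]
    out = []
    for combo in product(range(steps + 1), repeat=n - 1):
        if sum(combo) <= steps:
            out.append(tuple(c / steps for c in combo)
                       + ((steps - sum(combo)) / steps,))
    return out

def B(u, C_own, D_opp, steps=20):
    """Approximate B(D_-i): strategies in C_own that are a best response
    to some sampled belief over the opponent strategies in D_opp.
    u maps (own strategy, opponent strategy) to a payoff."""
    opp = sorted(D_opp)
    best = set()
    for pi in grid_beliefs(len(opp), steps):
        eu = {s: sum(p * u[(s, t)] for p, t in zip(pi, opp)) for s in C_own}
        m = max(eu.values())
        best |= {s for s in C_own if abs(eu[s] - m) < 1e-12}
    return best

def rationalizable(u1, u2, C1, C2, steps=20):
    """Iterate C_i^j = B(C_-i^{j-1}) to a fixed point (two players)."""
    u2_flipped = {(s2, s1): u2[(s1, s2)] for s1 in C1 for s2 in C2}
    D1, D2 = set(C1), set(C2)
    while True:
        n1, n2 = B(u1, C1, D2, steps), B(u2_flipped, C2, D1, steps)
        if (n1, n2) == (D1, D2):
            return D1, D2
        D1, D2 = n1, n2
```

For a dominance-solvable game in which the row player gets 1 for $U$ and 0 for $D$ regardless of the column, while the column player wants to match ($L$ against $U$, $R$ against $D$), the iteration first shrinks the row player's set to $\{U\}$ and then the column player's set to $\{L\}$.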
Since there are finitely many strategies, it is easy to see that there exists a finite $k$ such that ${\cal C}_i^j={\cal C}_i^k$ for all $j\geq k$. It can be shown that ${\cal S}_i=\lim_{j\rightarrow\infty} {\cal C}_i^j={\cal C}_i^k$. \commentout{The set of uncorrelated rationalizable strategies can be constructed in a similar way (of course, replacing $B(\cdot)$ by $O(\cdot)$).} It is also easy to see that if $\vec{\sigma}$ is a (behavioral) Nash equilibrium, then every pure strategy that is played with positive probability according to $\vec{\sigma}$ is rationalizable (where the probability with which a pure strategy is played according to $\vec{\sigma}$ is the product of the probabilities of its moves according to $\vec{\sigma}$). We now explore the relationship between rationalizability in an underlying normal-form game $\Gamma$ and generalized Nash equilibrium in a special class of games with awareness based on $\Gamma$. Given a standard game $\Gamma$ and a pure strategy profile $\vec{s}$ consisting of rationalizable strategies, we define a game with awareness $\Gamma^*(\vec{s}) = ({\cal G},\Gamma^m,{\cal F})$ such that (a) there exists a generalized Nash equilibrium of $\Gamma^*(\vec{s})$, where $s_i$ is the strategy followed by player $i$ in $\Gamma^m$, and (b) every local strategy in every pure generalized Nash equilibrium of $\Gamma^*(\vec{s})$ is rationalizable in $\Gamma$. To understand the intuition behind the construction, note that if $s_i$ is a pure correlated rationalizable strategy of player $i$ in $\Gamma$, then $s_i$ must be a best response to some probability distribution $\pi^{s_i}$ over the set ${\cal S}_{-i}$ of pure correlated rationalizable strategies of $i$'s opponents. The idea will be to include a game $\Gamma^{s_i}$ in ${\cal G}$ that captures the beliefs that make $s_i$ a best response. Let $\vec{s}^{\,1}, \ldots, \vec{s}^{\,m}$ be the strategy profiles in ${\cal S}_{-i}$ that get positive probability according to $\pi^{s_i}$.
(There are only finitely many, since ${\cal S}_{-i}$ consists of only pure strategies.) Let $\Gamma^{s_i}$ be the game where nature initially makes one of $m$ moves, say $c_1, \ldots, c_m$ (one corresponding to each strategy that gets positive probability according to $\pi^{s_i}$), where the probability of move $c_j$ is $\pi^{s_i}(\vec{s}^{\,j})$. After nature's choice a copy of $\Gamma$ is played. All the histories in $\Gamma^{s_i}$ in which player $i$ is about to move are in the same information set of player $i$; that is, player $i$ does not know nature's move. However, all the other players know nature's move. Finally, all players are aware of all runs of $\Gamma$ at every history in $\Gamma^{s_i}$. Note that if $h$ is a history where player $i$ thinks game $\Gamma^{s_i}$ is the actual game, and believes that other players will play $\vec{s}_{-i}^{\,j}$ after nature's move $c_j$, then player $i$ believes that $s_i$ is a best response at $h$. Given a pure strategy profile $\vec{s}$ of the game $\Gamma$, let $\Gamma^*(\vec{s}) = ({\cal G},\Gamma^m,{\cal F})$ be the following game with awareness: \begin{itemize} \item $\Gamma^m = (\Gamma,\{A_i:i\in N\})$, where for every player $i$ and every history $h\in H_i^m$, $A_i(h)=H$ (the set of all histories in $\Gamma$); \item ${\cal G}=\{\Gamma^m\}\cup \{\Gamma^{s'_i}:s'_i\in {\cal S}_i, i\in N\}$; \item for an augmented game in $\Gamma^+\in ({\cal G}-\{\Gamma^m\})$ and a history $h$ of $\Gamma^+$ of the form $\<s'_{-i}\rangle\cdot h'$, \begin{itemize} \item if $P^+(h)=i$, then ${\cal F}(\Gamma^+,h)=(\Gamma^+,I)$ where $I$ is the information set containing $h$; \item if $P^+(h)=j\in N-\{i\}$ and $s'_j$ is the strategy of player $j$ specified by $s'_{-i}$, then ${\cal F}(\Gamma^+,h)=(\Gamma^{s'_j},I)$, where $I$ is the unique $j$-information set in game $\Gamma^{s'_j}$; \end{itemize} \item for $h\in H_i^m$, ${\cal F}(\Gamma^m,h)=(\Gamma^{s_i},I)$, where $I$ is the unique $i$-information set in game $\Gamma^{s_i}$. 
\end{itemize} The intuition is that if $\vec{s}$ is a strategy profile such that, for all $i \in N$, $s_i$ is a rationalizable strategy for player $i$ in $\Gamma$, then at the (unique) $i$-information set of $\Gamma^m$, $i$ considers the actual game to be $\Gamma^{s_i}$. For this particular game with awareness, there exists a generalized Nash equilibrium where the strategy for each player $i$ in the modeler's game is $s_i$. Conversely, only rationalizable strategies are used in any pure generalized Nash equilibrium of $\Gamma^*(\vec{s})$. There is only one small problem with this intuition: strategies in $\Gamma$ and local strategies for augmented games in $\Gamma^*(\vec{s})$ are defined over different objects. The former are defined over information sets of the underlying game $\Gamma$ and the latter are defined over generalized information sets of $\Gamma^*(\vec{s})$. Fortunately, this problem is easy to deal with: we can identify a local strategy in $\Gamma^*(\vec{s})$ with a strategy in $\Gamma$ in the obvious way. By definition of $\Gamma^*(\vec{s})$, for every player $i$ and augmented game $\Gamma'\in {\cal G}_i$, the domain of the local strategy $\sigma_{i,\Gamma'}$ consists of a unique generalized information set. We denote this information set by $I_{i,\Gamma'}$. With each local strategy $\sigma_{i,\Gamma'}$ of $\Gamma^*(\vec{s})$, we associate the strategy $\underline{\sigma}_{i,\Gamma'}$ in the underlying game $\Gamma$ such that $\sigma_{i,\Gamma'}(I_{i,\Gamma'})=\underline{\sigma}_{i,\Gamma'}(I)$, where $I$ is the unique $i$-information set in $\Gamma$. The following theorem summarizes the relationship between correlated rationalizable strategies in $\Gamma$ and generalized Nash equilibrium of games with awareness.
\begin{theorem} \label{chap2:rat_gNash} If $\Gamma$ is a standard normal-form game and $\vec{s}$ is a (pure) strategy profile such that for all $i\in N$, $s_i$ is a correlated rationalizable strategy of player $i$ in $\Gamma$, then \begin{itemize} \item[(i)] there is a (pure) generalized Nash equilibrium $\vec{s}^{\,*}$ of $\Gamma^*(\vec{s})$ such that for every player $i$, $\underline{s}^*_{i,\Gamma^{s_i}} =s_i$; \item[(ii)] for every (pure) generalized Nash equilibrium $\vec{s}^{\,*}$ of $\Gamma^*(\vec{s})$, for every local strategy $s^*_{i,\Gamma'}$ for every player $i$ in $\vec{s}^{\,*}$, the strategy $\underline{s}^*_{i,\Gamma'}$ is correlated rationalizable for player $i$ in $\Gamma$. \end{itemize} \end{theorem} \commentout{A theorem analogous to Theorem~\ref{chap2:rat_gNash} also holds for the uncorrelated rationalizability case; we leave details to the reader.} Note that Theorem~\ref{chap2:rat_gNash} does not imply that for a fixed game with awareness, (pure) generalized Nash equilibrium and generalized rationalizability coincide. These notions are incomparable for standard extensive games (cf.~ \cite{Bat97,Pearce84}), so the corresponding generalized notions are incomparable when applied to the canonical representation of a standard game as a game with awareness. If we restrict the underlying game to be in normal form, it can be shown that, just as in standard games, every strategy in a pure generalized Nash equilibrium is (generalized) rationalizable. Since rationalizability is usually defined for pure strategies in the literature \cite{Myerson,OR94}, we focused on that case here. But it is not hard to show that an analogue of Theorem~\ref{chap2:rat_gNash} holds for behavioral rationalizable strategies as well. \commentout{ \section{DISCUSSION}\label{sec:discussion} Our focus in this paper has been on refinements of generalized Nash equilibrium in games with awareness. 
It is worth reconsidering here the conceptual basis for these solution concepts in the presence of awareness. \footnote{We thank Aviad Heifetz and an anonymous referee for raising some of these issues.} As we noted, equilibrium refinements are used in standard games to eliminate some ``undesirable'' or ``unreasonable'' equilibria. Arguably, unreasonable equilibria pose an even deeper problem with unaware players. For example, one standard interpretation of a Nash equilibrium is that a player chooses his strategy at the beginning of the game, and then does not change it because he has no motivation for doing so (since his payoff is no higher when he changes strategies). But this interpretation is suspect in extensive-form games when a player makes a move that takes the game off the equilibrium path. It may seem unreasonable for a player to then play the move called for by his strategy (even if the strategy is part of a Nash equilibrium). A threat to blow up the world if I catch you cheating in a game may be part of a Nash equilibrium, and does not cause problems if in fact no one cheats, but it hardly seems credible if someone does cheat. One way to justify the existence of incredible threats off the equilibrium path in a Nash equilibrium is to view the player as choosing a computer program that will play the game for him, and then leaving. Since the program is not changed once it is set in motion, threats about moves that will be made at information sets off the equilibrium path become more credible. However, in a game with awareness, a player cannot write a program to play the whole game at the beginning of the game because, when his level of awareness changes, he realizes that there are moves available to him that he was not aware of at the beginning of the game. He thus must write a new program that takes this into account. But this means we cannot sidestep the problem of incredible threats by appealing to the use of a pre-programmed computer to play a strategy. 
Once we allow a player to change his program, threats that were made credible because the program could not be rewritten become incredible again. Thus, the consideration of equilibrium refinements such as sequential equilibrium, which block incredible threats, becomes even more pertinent with awareness. } Moving up a level, we might ask more generally for the appropriate interpretation of Nash equilibrium in games with awareness. In standard games with a unique Nash equilibrium, we could perhaps argue that rational players will play their component of the equilibrium, since they can compute it and realize that it is the only stable strategy. In games with several Nash equilibria, perhaps one can be singled out as most salient, or some can be eliminated by using refinements of Nash equilibria. To some extent, these considerations apply in games with awareness as well. If there is a unique generalized Nash equilibrium, although a player cannot necessarily compute the whole equilibrium (for example, if it involves moves that he is not aware of), he can compute that part of the equilibrium that is within the scope of his awareness. Thus, this argument for playing a Nash equilibrium lifts from standard games to games with awareness. However, other arguments do not lift so well. For example, in standard games, one argument for Nash equilibrium is that, over time, players will learn to play a Nash equilibrium, for example, by playing a best response to their current beliefs. \fullv{(However, this is not true in general \cite{Nachbar97,Nachbar05}.)} This argument will not work in the presence of awareness, since playing the game repeatedly can make players aware of moves or of other players' awareness, and thus effectively change the game altogether. Another way of interpreting Nash equilibrium in standard games is in terms of evolutionary game theory. This approach works with awareness as well.
Suppose that we have populations consisting of each awareness type of each player, and that at each time step we draw without replacement one individual of each of these populations and let them play the game once. If the sample individuals are playing an equilibrium strategy, they do not have incentive to deviate unilaterally given their beliefs that the other players will continue to follow the equilibrium strategies. \commentout{ In \cite{HR06}, we discussed modifications of games with awareness to include situations where players may be aware of their own unawareness and also games where players may have no common knowledge of what the underlying game is, for example, they may disagree on what the payoffs or the information sets are. Therefore with these models, we can capture a situation where players may think that they move alone but find out later that in fact they moved simultaneously with another player. It is not hard to show that the results of this paper can be extended to these situations as well. } \commentout{ Other issues arise when considering sequential equilibrium in games with awareness. For example, in a standard game, when a player reaches a history that is not on the equilibrium path, he must believe that his opponent made a mistake. However, in games with awareness, a player may become aware of her own unawareness and, as a result, switch strategies. In the definition of sequential equilibrium in standard games, play off the equilibrium path is dealt with by viewing it as the limit of ``small mistakes'' (i.e., small deviations from the equilibrium strategy). Given that there are alternative ways of dealing with mistakes in games with awareness, perhaps other approaches for dealing with off-equilibrium play might be more appropriate. While other ways of dealing with mistakes may well prove interesting, we would argue that our generalization of sequential equilibrium can be motivated the same way as in standard games. 
Roughly speaking, for us, how a player's awareness level changes over time is not part of the equilibrium concept, but is given as part of the description of the game.} \commentout{ More generally, we have focused here on generalizing solution concepts that have proved useful in standard games, where there is no lack of awareness. The discussion above suggests that introducing awareness allows us to consider other solution concepts. For example, Ozbay \citeyear{Ozbay06} proposes an approach where a player's beliefs about the probability of revealed moves of nature, which the player was initially unaware of, are formed as part of the equilibrium definition. We hope to explore the issue of which solution concepts are most appropriate in games with awareness in future work.} \commentout{ Li \citeyear{LI06b} has also provided a model of unawareness in extensive games, based on her earlier work on modeling unawareness \cite{LI06,LI06a}. Although her representation of a game with unawareness is quite similar to ours, her notion of equilibrium is a generalization of Nash equilibrium for standard games, but it is different from the one we proposed in \cite{HR06}. } \fullv{ \section{CONCLUSIONS} \label{chap2:sec:conc} In this paper, we further developed the framework of games with awareness by analyzing how to generalize sequential equilibrium to such games. Other solution concepts can be generalized in a similar way. Although we have not checked all the details for all solution concepts, we believe that techniques like those used in our earlier paper to prove existence of generalized Nash equilibrium and ones similar to those used in this paper for generalized sequential equilibrium will be useful for proving the existence of other generalized solution concepts. For example, consider the notion of {\em (trembling hand) perfect equilibrium} \cite{Selten75} in normal-form games.
A strategy profile $\vec{\sigma}$ is a perfect equilibrium if there exists some sequence of strategies $(\vec{\sigma}^k)_{k=0}^{\infty}$, each assigning positive probability to every available move, that converges pointwise to $\vec{\sigma}$ such that for each player $i$, the strategy $\sigma_i$ is a best response to $\vec{\sigma}^k_{-i}$ for all $k$. The definition of {\em generalized perfect equilibrium} in games with awareness is the same as in standard games, except that we use generalized strategies rather than strategies, and require that for every local strategy $\sigma_{i,\Gamma'}$ of every player $i$, $\sigma_{i,\Gamma'}$ is a best response to $\vec{\sigma}^k_{-(i,\Gamma')}$ in game $\Gamma'$ for all $k$. To prove that every game with awareness in normal form has a generalized perfect equilibrium, we prove an analogue of Theorem~3.1(b) in \cite{HR06}, giving a correspondence between the set of generalized perfect equilibria of $\Gamma^*$ and the set of perfect equilibria of $\Gamma^{\nu}$. The existence of a generalized perfect equilibrium follows from the existence of a perfect equilibrium in $\Gamma^{\nu}$; the existence of a perfect equilibrium in $\Gamma^{\nu}$ follows from Lemma~\ref{chap2:lem:exisperf}. \commentout{As a byproduct of our proof of existence of a generalized sequential equilibrium in each game with awareness, we proposed a concept of conditional sequential equilibrium for standard games. This solution is more appropriate than standard sequential equilibrium if there are histories in an information set that, even though indistinguishable from a player's point of view, are not considered possible. We showed how to construct a standard game given a game with awareness such that there is a one-to-one correspondence between the set of generalized sequential equilibria of the game with awareness and the set of conditional sequential equilibria of the standard game. 
Roughly speaking, this result shows that a game with awareness is equivalent to a standard game where there are multiple versions of a player and it is possible that a player does not consider the actual history possible if it involves moves he is unaware of.} In our earlier work, we showed that our definitions could be extended in a straightforward way to games with awareness of unawareness; that is, games where one player might be aware that there are moves that another player (or even she herself) might be able to make, although she is not aware of what they are. Such awareness of unawareness can be quite relevant in practice. We captured the fact that player $i$ is aware that, at a node $h$ in the game tree, there is a move that $j$ can make of which she ($i$) is not aware by having $i$'s subjective representation of the game include a ``virtual'' move for $j$ at node $h$. Since $i$ does not understand perfectly what can happen after this move, the payoffs associated with runs that follow a virtual move represent what player $i$ believes will happen if this run is played and may bear no relationship to the actual payoffs in the underlying game. We showed that a generalized Nash equilibrium exists in games with awareness of unawareness. It is straightforward to define generalized sequential equilibrium for those games and to prove its existence using the techniques of this paper; we leave details to the reader. \commentout{ We also provided further insight into the notion of generalized Nash equilibrium by analyzing its relationship with the notion of rationalizability for standard games. We showed that, in a sense, generalized Nash equilibrium can be viewed as a generalization of rationalizability.
In particular, this shows that, unlike in the standard case where Nash equilibrium is characterized by every player best responding to the actual strategies played by their opponents, in games with awareness (or, more generally, in games with lack of common knowledge) a generalized Nash equilibrium is characterized by every player best responding to the strategy they believe their opponents are playing.} \commentout{ We have focused here on generalizing solution concepts that have proved useful in games where there is no lack of awareness. It may well be the case that, in games with (lack of) awareness, other solution concepts may also be appropriate. We hope to explore the issue of which solution concepts are most appropriate in games with awareness in future work.} } We have focused here on generalizing solution concepts that have proved useful in standard games, where there is no lack of awareness. Introducing awareness allows us to consider other solution concepts. For example, Ozbay \citeyear{Ozbay06} proposes an approach where a player's beliefs about the probability of revealed moves of nature, that the player was initially unaware of, are formed as part of the equilibrium definition. We hope to explore the issue of which solution concepts are most appropriate in games with awareness in future work. \subsubsection*{Acknowledgments} We thank Yossi Feinberg and Aviad Heifetz for useful discussions on awareness. We also thank Larissa S. Barreto for spotting some typos in a previous version of this paper. This work was supported in part by NSF under grants CTC-0208535, ITR-0325453, and IIS-0534064, by ONR under grants N00014-00-1-03-41 and N00014-01-10-511, and by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grant N00014-01-1-0795. 
Some of this work was done while the first author was at the School of Electrical and Computer Engineering at Cornell University, U.S.A., supported in part by a scholarship from the Brazilian Government through the Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq). \appendix \fullv{\section{PROOF OF THEOREMS}} \commentout{\section{THE DEFINITION OF $\Gamma^\nu$} We now give the formal definition of the standard game $\Gamma^\nu$ used to show that there is a generalized sequential equilibrium. Given a game $\Gamma^* = ({\cal G}, \Gamma^m, {\cal F})$ with awareness, let $\nu$ be a probability on ${\cal G}$ that assigns each game in ${\cal G}$ positive probability. (Here is where we use the fact that ${\cal G}$ is countable.) For each $\Gamma'\in {\cal G}$, let $\lfloor H^{\Gamma'}\rfloor=\{h\in H^{\Gamma'}:$ for every prefix $h_1\cdot \<m\rangle$ of $h$, if $P'(h_1)=i \in N$ and ${\cal F}(\Gamma',h_1)=(\Gamma'',I)$, then for all $h_2\in I$, $h_2\cdot\<m\rangle\in H''\}$. The histories in $\lfloor H^{\Gamma'}\rfloor$ are the ones that can actually be played according to the players' awareness levels. 
Let $\Gamma^\nu$ be the standard game such that \begin{itemize} \item $N^\nu = \{(i,\Gamma'):\Gamma'\in{\cal G}_i\}$; \item $M^\nu = {\cal G}\cup_{\Gamma'\in{\cal G}}\lfloor M^{\Gamma'}\rfloor$, where $\lfloor M^{\Gamma'}\rfloor$ is the set of moves that occur in $\lfloor H^{\Gamma'}\rfloor$; \item $H^\nu = \langle\, \rangle\cup\{\langle\Gamma'\rangle\cdot h:\Gamma'\in{\cal G}, h\in \lfloor H^{\Gamma'}\rfloor\}$; \item $P^\nu(\langle\, \rangle)=c$, and $$P^\nu(\langle\Gamma^h\rangle\cdot h') = \left\{ \begin{array}{ll} (i,\Gamma^{h'}) & \mbox{if $P^h(h') = i \in N$ and }\\ \ & {\cal F}(\Gamma^h,h')=(\Gamma^{h'}, \cdot),\\ c & \mbox{if $P^h(h') = c$;}\end{array} \right.$$ \item $f_c^\nu(\Gamma'|\langle\, \rangle)= \nu(\Gamma')$ and $f_c^\nu(\cdot|\langle\Gamma^h\rangle\cdot h') = f_c^h(\cdot|h')$ if $P^h(h') = c$; \item ${\cal I}^\nu_{i,\Gamma'}$ is a partition of $H^{\nu}_{i,\Gamma'}$ where two histories $\langle\Gamma^1\rangle\cdot h^1$ and $\langle\Gamma^2\rangle\cdot h^2$ are in the same information set $\langle\Gamma',I\rangle^*$ iff $(\Gamma^{1},h^1)$ and $(\Gamma^{2},h^2)$ are in the same generalized information set $(\Gamma',I)^*$; \item $u_{i,\Gamma'}^\nu(\langle\Gamma^h\rangle\cdot z)= \left\{ \begin{array}{ll} u_i^h(z) &\mbox{if $\Gamma^h = \Gamma',$}\\ 0 &\mbox{if $\Gamma^h \ne \Gamma'.$}\end{array} \right.$ \end{itemize} } \fullv{ \subsection{PROOF OF THEOREMS~\ref{chap2:thm:conditional1}, \ref{chap2:thm:conditional2}, AND \ref{chap2:thm:conn}} \othm{chap2:thm:conditional1} Let $\Gamma$ be an extensive game with perfect recall and countably many players such that (a) each player has only finitely many pure strategies and (b) each player's payoff depends only on the strategy of finitely many other players. Let ${\cal K}$ be an arbitrary possibility system. Then there exists at least one ${\cal K}$-assessment that is a conditional sequential equilibrium of $\Gamma$ with respect to ${\cal K}$. 
\end{oldthm} \noindent{\bf Proof:} We use the same ideas that are used to prove existence of standard sequential equilibrium, following closely the presentation in \cite{Myerson}. The proof goes as follows. Given $\Gamma$, let $\Gamma_M$ be the {\em multiagent representation} of $\Gamma$ in normal form. We prove (Lemma~\ref{chap2:lem:perfseq}) that for every {\em perfect equilibrium} $\sigma$ of $\Gamma_M$ and every possibility system ${\cal K}$, there exists a restricted belief system $\mu$ such that $(\sigma,\mu)$ is a conditional sequential equilibrium of $\Gamma$ with respect to ${\cal K}$. Then we show (Lemma~\ref{chap2:lem:exisperf}) that for $\Gamma$ satisfying the hypothesis of the theorem, $\Gamma_M$ has at least one perfect equilibrium. We now review the relevant definitions. A {\em normal-form game} is a tuple $(N,$ $\times_{i\in N}{\cal C}_i,\{u_i:i\in N\})$, where $N$ is the set of players of the game, ${\cal C}_i$ is the collection of pure strategies available for player $i$ in the game, and $u_i$ is a payoff function that determines for each strategy profile in $\times_{i\in N}{\cal C}_i$ the payoff for player $i$. Given a standard extensive-form game $\Gamma=(N,M,H,P,f_c,\{{\cal I}_i:i\in N\},\{u_i:i\in N\})$, let $S^*=\cup_{i\in N}{\cal I}_i$. Intuitively, we associate with each $i$-information set $I\in {\cal I}_i$ a {\em temporary player} that has $M(I)$ as its set of possible strategies; $S^*$ is just the set of all temporary players in $\Gamma$. With each temporary player $I$ we associate a payoff function $v_I:\times_{I\in S^*}M(I)\rightarrow \mathrm{R}$ such that if each temporary player $I$ chooses action $a_I$, and $\sigma$ is the pure strategy profile for $\Gamma$ such that for every $i\in N$ and $I\in {\cal I}_i$, $\sigma_i(I)=a_I$, then $v_I(\times_{I\in S^*}a_I)=u_i(\sigma)$. The {\em multiagent representation for $\Gamma$ in normal form} is the tuple $(S^*,\times_{I\in S^*}M(I),\{v_I:I\in S^*\})$. 
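To make the construction concrete, here is a small Python sketch of building a multiagent representation. It is illustrative only: the tiny game, the payoff function, and all names are hypothetical, not from the paper; the substantive point is just that each information set $I$ becomes a "temporary player" whose pure strategies are $M(I)$ and whose payoff is inherited from the original player owning $I$.

```python
# Illustrative sketch (hypothetical game, not from the paper): the multiagent
# representation turns each information set I into a "temporary player" whose
# pure strategies are the moves M(I).
from itertools import product

# A toy two-player game: player 1 owns information set "I1", player 2 owns "I2".
moves = {"I1": ["a", "b"], "I2": ["x", "y"]}   # M(I) for each information set I
owner = {"I1": 1, "I2": 2}                     # which original player owns I

def u(i, profile):
    """Hypothetical payoff for original player i, given a pure profile
    expressed as a dict {information_set: move}."""
    return 1 if profile["I1"] == "a" and profile["I2"] == "x" else 0

# S* is the set of temporary players (one per information set); v_I inherits
# the payoff of the original player who owns I.
S_star = list(moves)

def v(I, profile):
    return u(owner[I], profile)

# Enumerate all pure strategy profiles of the multiagent representation and
# tabulate each temporary player's payoff.
profiles = [dict(zip(S_star, p)) for p in product(*(moves[I] for I in S_star))]
payoffs = {I: {tuple(p.values()): v(I, p) for p in profiles} for I in S_star}
```

In this toy example the temporary players $I1$ and $I2$ receive the same payoffs as the original players they represent, which is exactly the defining property $v_I(\times_{I\in S^*}a_I)=u_i(\sigma)$.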
Given any countable set $B$, let $\Delta(B)$ be the set of all probability measures over $B$, and let $\Delta^0(B)$ be the set of all probability measures over $B$ whose support is all of $B$. Given a game in normal form $\Gamma=(N,\times_{i\in N}{\cal C}_i,\{u_i:i\in N\})$, a mixed strategy profile $\sigma\in \times_{i\in N}\Delta({\cal C}_i)$ is a {\em perfect equilibrium} of $\Gamma$ iff there exists a sequence $(\hat{\sigma}^k)_{k=1}^{\infty}$ such that (a) $\hat{\sigma}^k\in \times_{i\in N}\Delta^0({\cal C}_i)$ for $k\geq 1$, (b) $\hat{\sigma}^k$ converges pointwise to $\sigma$, and (c) $\sigma_i\in argmax_{\tau_i\in \Delta({\cal C}_i)}{\mathrm{EU}}_i(\hat{\sigma}^k_{-i},\tau_i)$ for all $i\in N$. The following lemmas are analogues of Theorems 5.1 and 5.2 in \cite{Myerson}. \begin{lemma} \label{chap2:lem:perfseq} If $\Gamma_M$ is a multiagent representation of $\Gamma$ in normal form, then for every perfect equilibrium $\sigma$ of $\Gamma_M$ and every possibility system ${\cal K}$, there exists a restricted belief system $\mu$ such that $(\sigma,\mu)$ is a conditional sequential equilibrium of $\Gamma$ with respect to ${\cal K}$. \end{lemma} \noindent{\bf Proof:} The proof is almost identical to that of Theorem 5.1 in \cite{Myerson}. We focus on the necessary changes, leaving the task of verifying that the rest of the proof goes without change to the reader. Let $(\hat{\sigma}^k)_{k=1}^{\infty}\in \times_{I\in S^*}\Delta(M(I))$ be a sequence of behavioral strategy profiles satisfying conditions (a), (b), and (c) of the definition of perfect equilibrium. For each $k$, define a belief system $\mu^k$ such that, for each information set $I$, $\mu^k(I)$ is the probability over histories in ${\cal K}(I)$ defined as $$\mu^k_I(h)=\frac{\Pr_{\hat{\sigma}^k}(h)}{\sum_{h'\in {\cal K}(I)}\Pr_{\hat{\sigma}^k}(h')}.$$ If ${\cal I}$ is the set of all information sets in $\Gamma_M$, then for each $k$, $\mu^k : {\cal I} \rightarrow [0,1]$. 
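The normalization defining $\mu^k_I$ is straightforward to sketch in code. The numbers below are hypothetical; the only substantive point is that the denominator is positive because $\hat{\sigma}^k$ is completely mixed, so every history gets positive probability.

```python
# Illustrative sketch (hypothetical numbers): the restricted belief mu^k_I
# over the histories the player considers possible, K(I), is obtained by
# normalizing the probabilities a completely mixed profile sigma^k assigns
# to those histories.

def restricted_belief(prob, K_I):
    """prob: dict mapping histories h to Pr_{sigma^k}(h); K_I: histories in K(I)."""
    total = sum(prob[h] for h in K_I)
    # Well-defined: sigma^k is completely mixed, so total > 0.
    return {h: prob[h] / total for h in K_I}

# An information set I = {h1, h2, h3}, of which the player considers only
# h1 and h2 possible: K(I) = {h1, h2}. The probability on h3 is ignored.
prob = {"h1": 0.02, "h2": 0.06, "h3": 0.12}
mu = restricted_belief(prob, ["h1", "h2"])
```

Taking such beliefs along the sequence $\hat{\sigma}^1,\hat{\sigma}^2,\ldots$ and passing to a convergent subsequence is exactly how the limit belief system $\mu$ in the proof is obtained.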
Thus, $\mu^k \in [0,1]^{{\cal I}}$; and, by Tychonoff's Theorem, $[0,1]^{{\cal I}}$ is compact. Thus, there must be a convergent subsequence of $\mu^1, \mu^2, \ldots$. Suppose that this subsequence converges to $\mu$. It is easy to see that $\mu$ is consistent with $\sigma$ and ${\cal K}$. Let $Z(I)$ denote the set of runs that do not contain any prefix in $I$. Let $I$ be an arbitrary $i$-information set of $\Gamma$. When agent $I\in S^*$ uses the randomized strategy $\rho_I\in \Delta(M(I))$ against the strategies specified by $\hat{\sigma}^k$ for all other agents, its expected payoff is $${\mathrm{EU}}_I(\hat{\sigma}^k_{-I},\rho_I)=\sum_{h\in I}\Pr_{(\hat{\sigma}^k_{-I},\rho_I)}(h){\mathrm{EU}}_I(\hat{\sigma}^k_{-I},\rho_I\mid h)+\sum_{z\in Z(I)}\Pr_{(\hat{\sigma}^k_{-I},\rho_I)}(z)u_i(z).$$ Note that for $h\in I$ or $h\in Z(I)$, $\Pr_{(\hat{\sigma}^k_{-I},\rho_I)}(h)=\Pr_{\hat{\sigma}^k}(h)$, since this probability is independent of the strategy used by player $I$. Also note that for all $h\in I-{\cal K}(I)$, ${\mathrm{EU}}_I(\hat{\sigma}^k_{-I},\rho_I\mid h)$ is independent of $\rho_I$. Thus, \begin{eqnarray} & & {\mathrm{EU}}_I(\hat{\sigma}^k_{-I},\rho_I) \nonumber\\ & & =\sum_{h\in {\cal K}(I)}\Pr_{\hat{\sigma}^k}(h){\mathrm{EU}}_I(\hat{\sigma}^k_{-I},\rho_I\mid h)+\sum_{z\in Z(I)}\Pr_{\hat{\sigma}^k}(z)u_i(z) + C'\nonumber \\ & & = (\sum_{h\in {\cal K}(I)}\mu^k_I(h){\mathrm{EU}}_I(\hat{\sigma}^k_{-I},\rho_I\mid h))(\sum_{h\in {\cal K}(I)}\Pr_{\hat{\sigma}^k}(h)) +C'',\nonumber \end{eqnarray} where $C'$ and $C''$ are two constants independent of $\rho_I$. The rest of the proof proceeds just as the proof of Theorem 5.1 in \cite{Myerson}; we omit details here. 
\vrule height7pt width4pt depth1pt \begin{lemma} \label{chap2:lem:exisperf} If $\Gamma$ is an extensive-form game with perfect recall such that (a) there are at most countably many players, (b) each player has only finitely many pure strategies, and (c) the payoff of each player depends only on the strategy of finitely many other players, then $\Gamma_M$ has at least one perfect equilibrium. \end{lemma} \noindent{\bf Proof:} The proof is almost identical to that of Theorem 5.2 in \cite{Myerson}. Again, we focus on the necessary changes, leaving it to the reader to verify that the rest of the proof goes without change. We need to modify some of the arguments since $\Gamma_M$ is not a finite game, as it may contain countably many players. First, by the same argument used to prove that $\Gamma^{\nu}$ has at least one Nash equilibrium in our earlier work \cite{HR06}, we have that for any $\Gamma$ satisfying the hypothesis of the lemma, $\Gamma_M$ has at least one Nash equilibrium. Let ${\cal C}_i$ be the set of pure strategies available for player $i$ in $\Gamma_M$. \commentout{ We first need to show that if $(\sigma^k)_{k=1}^{\infty}$ is a sequence of strategy profiles, then we can find a subsequence that converges pointwise. We construct such a subsequence as follows. Since $\Delta({\cal C}_1)$ is a compact set, let $(\sigma^{1,j})_{j=1}^{\infty}$ be a subsequence of $\sigma^k$ such that $\sigma^{1,j}_1$ converges pointwise to $\sigma_1$. Define $(\sigma^{l,j})_{j=1}^{\infty}$ recursively by taking it to be a subsequence of $(\sigma^{l-1,j})_{j=1}^{\infty}$ such that $\sigma^{l,j}_l$ converges pointwise to $\sigma_l$. The sequence $\sigma^{j,j}$ provides a subsequence of $\sigma^k$ such that it converges to $\sigma$ pointwise, as desired.} By Tychonoff's Theorem $\times_{i\in N}[0,1]^{{\cal C}_i}$ is compact. Since $\times_{i\in N}\Delta({\cal C}_i)$ is a closed subset of $\times_{i\in N}[0,1]^{{\cal C}_i}$, it is also compact. 
All the remaining steps of the proof of Theorem 5.2 in \cite{Myerson} apply here without change; we omit the details. \vrule height7pt width4pt depth1pt The proof of Theorem~\ref{chap2:thm:conditional1} follows immediately from Lemmas~\ref{chap2:lem:perfseq} and \ref{chap2:lem:exisperf}.~\vrule height7pt width4pt depth1pt \othm{chap2:thm:conditional2} For all probability measures $\nu$ on ${\cal G}$, if $\nu$ gives positive probability to all games in ${\cal G}$, and ${\cal K}(\langle\Gamma',I\rangle^*)=\{\langle\Gamma',h\rangle:h\in I\}$ for every information set $\langle\Gamma',I\rangle^*$ of $\Gamma^{\nu}$, then $(\vec{\sigma}',\mu')$ is a generalized sequential equilibrium of $\Gamma^*$ iff $(\vec{\sigma},\mu)$ is a conditional sequential equilibrium of $\Gamma^{\nu}$ with respect to ${\cal K}$, where $\sigma_{i,\Gamma'}(\langle\Gamma^h\rangle\cdot h')=\sigma'_{i,\Gamma'}(\Gamma^h,h')$ and $\mu'_{\Gamma', I}=\mu_{\langle\Gamma',I\rangle^*}$. \end{oldthm} \noindent{\bf Proof:} Let $\Pr^{\nu}_{\vec{\sigma}}$ be the probability distribution over the histories in $\Gamma^{\nu}$ induced by the strategy profile $\vec{\sigma}$ and $f_c^{\nu}$. For a history $h$ of the game, define $\Pr^{\nu}_{\vec{\sigma}}(\cdot\mid h)$ to be the conditional probability distribution induced by $\vec{\sigma}$ and $f_c^{\nu}$ over the possible histories of the game given that the current history is $h$. Similarly, let $\Pr^{h}_{\vec{\sigma}'}$ be the probability distribution over the histories in $\Gamma^h\in {\cal G}$ induced by the generalized strategy profile $\vec{\sigma}'$ and $f_c^h$. Note that if $\Pr^{h}_{\vec{\sigma}'}(h')>0$, then $h'\in \lfloor H^h \rfloor$. Thus, $\langle\Gamma^h\rangle\cdot h' \in H^{\nu}$. 
For all strategy profiles $\sigma$ and generalized strategy profiles $\sigma'$, if $\sigma'_{i,\Gamma'}(\Gamma^h,h')=\sigma_{i,\Gamma'}(\langle\Gamma^h\rangle\cdot h')$, then it is easy to see that for all $h'\in H^h$ such that $\Pr^{h}_{\vec{\sigma}'}(h')>0$, we have that $\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^h\rangle\cdot h')=\nu(\Gamma^h)\Pr^{h}_{\vec{\sigma}'}(h')$. And since $\nu$ is a probability measure such that $\nu(\Gamma^h)>0$ for all $\Gamma^h\in {\cal G}$, we have that $\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^h\rangle\cdot h')>0$ iff $\Pr^{h}_{\vec{\sigma}'}(h')>0$. It is also easy to see that for all $h'\ne \langle \, \rangle$ and all $h''\in H^h$ such that $\Pr^{h}_{\vec{\sigma}'}(h''\mid h')>0$, $\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^h\rangle\cdot h''\mid h')=\Pr^{h}_{\vec{\sigma}'}(h''\mid h')$. Suppose that $(\vec{\sigma},\mu)$ is a conditional sequential equilibrium of $\Gamma^{\nu}$ with respect to ${\cal K}$. We first prove that $(\vec{\sigma}',\mu')$ satisfies generalized sequential rationality. Suppose, by way of contradiction, that it does not. Thus, there exists a player $i$, a generalized $i$-information set $(\Gamma^+,I)^*$, and a local strategy $s'$ for player $i$ in $\Gamma^+$ such that $$ \sum_{h\in I}\sum_{z\in Z^+}\mu_{\Gamma^+,I}'(h)\Pr^+_{\vec{\sigma}'}(z\mid h)u_i^+(z) < \sum_{h\in I}\sum_{z\in Z^+}\mu_{\Gamma^+,I}'(h)\Pr^+_{(\vec{\sigma}'_{-(i,\Gamma')},s')}(z\mid h)u_i^+(z). $$ Define $s$ to be a strategy for player $(i,\Gamma^+)$ in $\Gamma^{\nu}$ such that $s(\langle\Gamma^h\rangle\cdot h')=s'(\Gamma^h,h')$. 
Using the observation in the previous paragraph and the fact that $\mu'_{\Gamma^+, I}=\mu_{\langle\Gamma^+,I\rangle^*}$ and ${\cal K}(\langle\Gamma^+,I\rangle^*)=\{\langle\Gamma^+,h\rangle:h\in I\}$, it follows that \begin{eqnarray}\label{chap2:eq2} & & \sum_{\langle\Gamma^+,h\rangle\in {\cal K}(\langle\Gamma^+,I\rangle^*)}\sum_{z\in \lfloor Z^+\rfloor}\mu_{\langle\Gamma^+,I\rangle^*}(h)\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot z\mid h)u_i^+(z) \nonumber \\ & < &\sum_{\langle\Gamma^+,h\rangle\in {\cal K}(\langle\Gamma^+,I\rangle^*)}\sum_{z\in \lfloor Z^+\rfloor}\mu_{\langle\Gamma^+,I\rangle^*}(h)\Pr^{\nu}_{(\vec{\sigma}_{-(i,\Gamma')},s)}(\langle\Gamma^+\rangle\cdot z\mid h)u_i^+(z). \nonumber \\ \end{eqnarray} \noindent By definition of $u_{i,\Gamma^+}^{\nu}$, \commentout{ (\ref{chap2:eq2}) holds iff \begin{eqnarray}\label{chap2:eq3} & & \sum_{\langle\Gamma^+,h\rangle\in {\cal K}(\langle\Gamma^+,I\rangle^*)}\sum_{z^{\nu}\in Z^{\nu}}\mu_{\langle\Gamma^+,I\rangle^*}(h)\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot z\mid h)u_{i,\Gamma^+}^{\nu} \nonumber \\ & < &\sum_{\langle\Gamma^+,h\rangle\in {\cal K}(\langle\Gamma^+,I\rangle^*)}\sum_{z^{\nu}\in Z^{\nu}}\mu_{\langle\Gamma^+,I\rangle^*}(h)\Pr^{\nu}_{(\vec{\sigma}_{-(i,\Gamma')},s)}(\langle\Gamma^+\rangle\cdot z\mid h)u_{i,\Gamma^+}^{\nu}.\nonumber\end{eqnarray} } $u_i^+(z) = u_{i,\Gamma^+}^{\nu}(\langle\Gamma^+\rangle,z)$. Replacing $u_i^+(z)$ by $u_{i,\Gamma^+}^{\nu}(\langle\Gamma^+\rangle,z)$ in~(\ref{chap2:eq2}), it follows that $(\vec{\sigma},\mu)$ does not satisfy sequential rationality in $\Gamma^\nu$, a contradiction. So, $(\vec{\sigma}',\mu')$ satisfies generalized sequential rationality. It remains to show that $\mu'$ is consistent with $\vec{\sigma}'$. Suppose that, for every generalized information set $(\Gamma^+,I)^*$, $\sum_{h\in I}\Pr^+_{\vec{\sigma}'}(h)>0$. 
By definition of ${\cal K}$ and the fact that for all $h'\in H^{\nu}$, $\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot h')>0$ iff $\Pr^+_{\vec{\sigma}'}(h')>0$, we have that for every information set $\langle\Gamma^+,I\rangle^*$ of $\Gamma^{\nu}$, $$\sum_{\langle\Gamma^+,h\rangle\in {\cal K}(\langle\Gamma^+,I\rangle^*)}\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot h)>0.$$ Thus, by consistency of $\mu$, $\vec{\sigma}$, and ${\cal K}$, it follows that for every information set $\langle\Gamma^+,I\rangle^*$ of $\Gamma^{\nu}$ and every $h\in {\cal K}(\langle\Gamma^+,I\rangle^*)$, we have $$\mu_{\langle\Gamma^+,I\rangle^*}(h)=\frac{\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot h)}{\sum_{h'\in {\cal K}(\langle\Gamma^+,I\rangle^*)}\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot h')}.$$ Since $\mu'_{\Gamma^+, I}=\mu_{\langle\Gamma^+,I\rangle^*}$, ${\cal K}(\langle\Gamma',I\rangle^*)=\{\langle\Gamma',h\rangle:h\in I\}$, and for all $h'\in H^h$ such that $\Pr^{h}_{\vec{\sigma}'}(h')>0$, we have that $\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^h\rangle\cdot h')=\nu(\Gamma^h)\Pr^{h}_{\vec{\sigma}'}(h')$, it is easy to see that for every generalized information set $(\Gamma^+,I)^*$ and every $h\in I$, $$\mu'_{\Gamma^+,I}(h)=\frac{\Pr^+_{\vec{\sigma}'}(h)}{\sum_{h'\in I}\Pr^+_{\vec{\sigma}'}(h')}.$$ Thus, $\mu'$ is consistent with $\vec{\sigma}'$. Finally, suppose that there exists a generalized information set $(\Gamma^+,I)^*$ such that $\sum_{h\in I}\Pr^+_{\vec{\sigma}'}(h)=0$. By definition of ${\cal K}$ and the fact that for all $h'\in H^{\nu}$, $\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot h')>0$ iff $\Pr^+_{\vec{\sigma}'}(h')>0$, we have that $\sum_{\langle\Gamma^+,h\rangle\in {\cal K}(\langle\Gamma^+,I\rangle^*)}\Pr^{\nu}_{\vec{\sigma}}(\langle\Gamma^+\rangle\cdot h)=0$. 
Thus, by the consistency of $\mu$, $\vec{\sigma}$, and ${\cal K}$, there exists a sequence of ${\cal K}$-assessments $(\vec{\sigma}^n,\mu^n)$ such that $\vec{\sigma}^n$ consists of completely mixed strategies, $\mu^n$ is consistent with $\vec{\sigma}^n$ and ${\cal K}$, and $(\vec{\sigma}^n,\mu^n)$ converges pointwise to $(\vec{\sigma},\mu)$. Define a sequence of ${\cal K}$-assessments $(\vec{\tau}^{n},\nu^{n})$ such that $\nu^{n}_{\Gamma', I}=\mu^n_{\langle\Gamma',I\rangle^*}$ and $\sigma^n_{j,\Gamma'}(\langle\Gamma^h\rangle\cdot h')=\tau^{n} _{j,\Gamma'}(\Gamma^h,h')$ for all $n$. Since $\vec{\sigma}^n$ is completely mixed, so is $\vec{\tau}^n$; it also follows from the earlier argument that $\nu^{n}$ is consistent with $\vec{\tau}^{n}$ for all $n$. Since $(\vec{\sigma}^n,\mu^n)$ converges pointwise to $(\vec{\sigma},\mu)$, it is easy to see that $(\vec{\tau}^{n},\nu^{n})$ converges pointwise to $(\vec{\sigma}',\mu')$. Thus, $\mu'$ is consistent with $\vec{\sigma}'$, and $(\vec{\sigma}',\mu')$ is a generalized sequential equilibrium of $\Gamma^*$, as desired. The proof of the converse is similar; we leave details to the reader. \vrule height7pt width4pt depth1pt \othm{chap2:thm:conn} For every extensive game $\Gamma$ with countably many players where each player has finitely many pure strategies and for every possibility system ${\cal K}$, if $(\vec{\sigma},\mu)$ is a conditional sequential equilibrium of $\Gamma$ with respect to ${\cal K}$, then there exists a belief system $\mu'$ such that $(\vec{\sigma}, \mu')$ is a sequential equilibrium of $\Gamma$. 
\end{oldthm} \noindent{\bf Proof:} Since $(\sigma,\mu)$ is a conditional sequential equilibrium of $\Gamma$ with respect to ${\cal K}$, by the consistency of $\mu$, $\sigma$, and ${\cal K}$, there exists a sequence of ${\cal K}$-assessments $(\hat{\sigma}^k,\hat{\mu}^k)$ such that $\hat{\sigma}^k$ is completely mixed, $\hat{\mu}^k$ is consistent with $\hat{\sigma}^k$ and ${\cal K}$, and $(\hat{\sigma}^k,\hat{\mu}^k)$ converges pointwise to $(\sigma,\mu)$. Let $\hat{\nu}^{k}$ be the belief system consistent with $\hat{\sigma}^k$. Using the same techniques as in the proof of Lemma~\ref{chap2:lem:perfseq}, we can construct a subsequence of $(\hat{\sigma}^k,\hat{\nu}^{k})$ that converges pointwise to $(\sigma,\mu')$. Thus, $\mu'$ is consistent with $\sigma$. It remains to show that $(\sigma,\mu')$ satisfies sequential rationality. Since, by definition of ${\cal K}$, for every $i$-information set $I$ of $\Gamma$, player $i$ has the same utility for every run extending a history in $I-{\cal K}(I)$, it is not hard to show that $${\mathrm{EU}}_i((\sigma,\mu')\mid I)=C + \mu'({\cal K}(I)){\mathrm{EU}}_i((\sigma,\mu)\mid I)\mbox{,}$$ where $C$ and $\mu'({\cal K}(I))$ are independent of $\sigma_i(I)$. Since, by sequential rationality, $\sigma_i(I)$ is a best response given $\mu$, it is also a best response given $\mu'$. It follows that $(\sigma,\mu')$ is a sequential equilibrium of $\Gamma$, as desired. 
\vrule height7pt width4pt depth1pt \subsection{PROOF OF THEOREM~\ref{chap2:rat_gNash}} \othm{chap2:rat_gNash} If $\Gamma$ is a standard normal-form game and $\vec{s}$ is a (pure) strategy profile such that for all $i\in N$, $s_i$ is a correlated rationalizable strategy of player $i$ in $\Gamma$, then \begin{itemize} \item[(i)] there is a (pure) generalized Nash equilibrium $\vec{s}^{\,*}$ of $\Gamma^*(\vec{s})$ such that for every player $i$, $\underline{s}^*_{i,\Gamma^{s_i}} =s_i$; \item[(ii)] for every (pure) generalized Nash equilibrium $\vec{s}^{\,*}$ of $\Gamma^*(\vec{s})$, for every local strategy $s^*_{i,\Gamma'}$ for every player $i$ in $\vec{s}^{\,*}$, the strategy $\underline{s}^*_{i,\Gamma'}$ is correlated rationalizable for player $i$ in $\Gamma$. \end{itemize} \end{oldthm} \noindent{\bf Proof:} Let $\Gamma^*(\vec{s})=({\cal G},\Gamma^m,{\cal F})$ be as defined in Section~\ref{chap2:sec:grat}. For part (i), consider the generalized strategy profile $\vec{s}^{\,*}$ where, for every player $i$ and every $\Gamma^{s'_i}\in {\cal G}_i$, $i$ makes the same move according to both $s^*_{i,\Gamma^{s'_i}}$ and $s'_i$. Note that, by definition of $\Gamma^*(\vec{s})$, for all $h\in H_i^m$ we have that ${\cal F}(\Gamma^m,h)=(\Gamma^{s_i},\cdot)$. Thus, by definition of $\vec{s}^{\,*}$, for every player $i$, $\underline{s}^*_{i,\Gamma^{s_i}} =s_i$. It is easy to check using the definition of $\Gamma^*(\vec{s})$, that $\vec{s}^{\,*}$ is a generalized Nash equilibrium, and that, for all $i\in N$, $s_i$ is a rationalizable strategy for player $i$; we leave details to the reader. \commentout{ For part (ii), let $\vec{s}^{\,*}$ be an arbitrary pure generalized Nash equilibrium of $\Gamma^*(\vec{s})$. Consider any local strategy $s^*_{i,\Gamma^{s'_i}}$ for player $i$ in $\vec{s}^{\,*}$. 
From the fact that $\vec{s}^{\,*}$ is a (pure) generalized Nash equilibrium, we have that $$s^*_{i,\Gamma^{s'_i}}\in argmax_{s''_i}{\mathrm{EU}}_{i,\Gamma^{s'_i}}((s''_i,\vec{s}^{\,*}_{-(i,\Gamma^{s'_i})}))$$ for any $s''_i$ a (pure) local strategy for player $i$ in $\Gamma^{s'_i}$. Let ${\cal S}$ be the set of rationalizable strategies in $\Gamma$. By definition of $\Gamma^*(\vec{s})$, $s'_i$ is a rationalizable strategy for player $i$ in $\Gamma$. Let $\pi^{s'_i}$ be a probability distribution over ${\cal S}_{-i}$ for which $s'_i$ is a best response. Let $\vec{s}^{\,1}, \ldots, \vec{s}^{\,m}$ be the strategy profiles in ${\cal S}_{-i}$ that get positive probability according to $\pi^{s'_i}$. By definition, $\Gamma^{s'_i}$ is the game where nature initially makes one of $m$ moves, say $c_1, \ldots, c_m$ (one corresponding to each strategy that gets positive probability according to $\pi^{s'_i}$), where the probability of move $c_j$ is $\pi^{s'_i}(\vec{s}^{\,j})$. After nature's choice a copy of $\Gamma$ is played. By definition, if players follow the generalized strategy profile $\vec{s}^{\,*}$ in $\Gamma^*(\vec{s})$, then every player $k\ne i$ uses local strategy $s^*_{k,\Gamma^{s^j_k}}$ in the histories that follow nature's move $c_j$ in $\Gamma^{s'_i}$. Thus, $${\mathrm{EU}}_{i,\Gamma^{s'_i}}((s''_i,\vec{s}^{\,*}_{-(i,\Gamma^{s'_i})}))=\sum_{j=1}^m \pi^{s'_i}(\vec{s}^{\,j}){\mathrm{EU}}_i(s''_i,(s^*_{k,\Gamma^{s^j_k}}:k\in N-\{i\})).$$ } For part (ii), let ${\cal D}_i=\{\underline{s}^*_{i,\Gamma'}:\Gamma'\in {\cal G}_i\}$, i.e., ${\cal D}_i$ consists of the strategies in the underlying game $\Gamma$ corresponding to some local strategy of player $i$ in $\Gamma^*(\vec{s})$. We claim that ${\cal D}_i\subseteq B({\cal D}_{-i})$. To see this, let $s^*_{i,\Gamma'}$ be any local strategy for player $i$ in $\vec{s}^{\,*}$. 
Since $\vec{s}^{\,*}$ is a generalized Nash equilibrium, $s^*_{i,\Gamma'}$ is a best response to the local strategies used by other players in $\Gamma'$. Note that, by definition of ${\cal D}_j$, for every other player $j\ne i$, there is a strategy $\underline{s}_{j,\Gamma'}\in {\cal D}_j$ corresponding to the local strategy $s_{j,\Gamma'}$ player $j$ follows in game $\Gamma'$. Since, by definition of $\Gamma^*(\vec{s})$, in game $\Gamma'$ nature makes an initial choice and then a copy of $\Gamma$ is played, and all players but $i$ know the move made by nature, this initial move by nature can be seen as a distribution over the local strategies used by the other players in the different copies of $\Gamma$ contained in $\Gamma'$. Thus, it is easy to see that the strategy $\underline{s}_{i,\Gamma'}$ corresponding to $s^*_{i,\Gamma'}$ is in $B({\cal D}_{-i})$. Finally, since $s^*_{i,\Gamma'}$ is an arbitrary local strategy of player $i$ in $\vec{s}^{\,*}$, it follows that ${\cal D}_i\subseteq B({\cal D}_{-i})$. By the definition of correlated rationalizable strategies, it follows that ${\cal D}_i\subseteq {\cal S}_i$. Thus, for player $i$ in $\Gamma^*(\vec{s})$ and every local strategy $s^*_{i,\Gamma'}$ for $i$ in $\vec{s}^{\,*}$, $\underline{s}^*_{i,\Gamma'}$ is correlated rationalizable for player $i$ in $\Gamma$, as desired. \vrule height7pt width4pt depth1pt } \end{document}
Margarethe Kahn

Margarethe Kahn[1] (known as Grete Kahn,[2] also Margarete Kahn,[3] born 27 August 1880; missing after deportation to Piaski, Poland, on 28 March 1942) was a German mathematician and Holocaust victim.[4] She was among the first women to obtain a doctorate in Germany. Her doctoral work was on the topology of algebraic curves.

Born: 27 August 1880, Eschwege, German Empire
Died: deported to Piaski, Poland, on 28 March 1942 (aged 61), and missing since then
Nationality: German
Alma mater: University of Göttingen
Fields: Mathematics (algebraic geometry)
Thesis: Eine allgemeine Methode zur Untersuchung der Gestalten algebraischer Kurven [A general method for the study of the forms of algebraic curves] (1909)
Doctoral advisor: David Hilbert
Other academic advisors: Felix Klein

Life and work

Margarethe Kahn was the daughter of the Eschwege merchant and flannel factory owner Albert Kahn (1853–1905) and his wife Johanne (née Plaut, 1857–1882). She had an older brother, Otto (1879–1932). Five years after the untimely death of his wife Johanne, their father married her younger sister Julie (1860–1934), with whom he had a daughter, Margarethe's half-sister Martha (1888–1942).[5] After attending elementary school from 1887 and the Higher School for Girls from 1889 to 1896, Kahn took private lessons until 1904 to prepare for her Abitur, because few high schools for girls existed at that time in Hesse, Germany. In 1904 she was given permission to take her Abitur at the Royal Gymnasium in Bad Hersfeld. She thus belonged to the small elite of young women in Germany at the beginning of the 20th century who were allowed to take the Abitur externally at boys' schools. Konrad Duden, as school principal, signed her Abitur certificate. 
Since Prussia began to allow women to formally attend university only from the winter semester of 1908–09, Kahn and her friend Klara Löbenstein first attended the universities of Berlin and Göttingen as guest students. In addition, Kahn attended lectures and tutorials in mathematics at the Technical University of Berlin. They studied mathematics, physics, and propaedeutics at Berlin and Göttingen. At the University of Göttingen she attended lectures given by, among others, David Hilbert, Felix Klein, Woldemar Voigt, and Georg Elias Müller; in Berlin she attended lectures by Hermann Amandus Schwarz and Paul Drude at the Royal Prussian Academy of Sciences. Her field of expertise was algebraic geometry. Together with Löbenstein she made a contribution to Hilbert's sixteenth problem.[5] Hilbert's sixteenth problem concerned the topology of algebraic curves in the complex projective plane; as a difficult special case in his formulation of the problem Hilbert proposed that there are no algebraic curves of degree 6 consisting of 11 separate ovals. Kahn and Löbenstein developed methods to address this problem. Against opposition in particular from the Berlin faculty, but supported by the University of Göttingen and Felix Klein, Kahn obtained a doctorate in 1909 under David Hilbert in Göttingen, with a dissertation titled Eine allgemeine Methode zur Untersuchung der Gestalten algebraischer Kurven [A general method to investigate the shapes of algebraic curves] and was therefore one of the first German women to obtain a doctorate in mathematics (the mathematics division was part of the faculty of philosophy then). She took her oral examination – again, along with Löbenstein – on 30 June 1909. Kahn could not pursue a scientific career because women in Germany were not admitted to habilitation before 1920. 
She therefore sought a career as a schoolteacher, and in October 1912 she obtained a job in the Prussian school system, where she worked as a teacher for secondary schools in Katowice, Dortmund, and from 1929 in Berlin-Tegel at today's Gabriele-von-Bülow-Gymnasium, and later in Berlin-Pankow at today's Carl-von-Ossietzky-Gymnasium.[6] As a Jew, she was forced to go on leave by the Nazis in 1933, and was dismissed from the school in 1936. She was then forced to work as a factory worker at the Nordland Schneeketten (Nordland snow chains) factory. On 28 March 1942, Kahn and her by then widowed half-sister Martha were deported to Piaski; they have been missing since then.[7] On 13 September 2008, a Stolperstein was laid in her memory at 127 Rudolstädter Straße in Wilmersdorf, and on 26 May 2010 another was laid in front of her parents' former house at Stad 29 in Eschwege, where a commemorative plaque was also installed on 13 December 2017.[8] In 2013, a street in Leverkusen was named after her.[9] Publications • Kahn, Margarete (1909). "Eine allgemeine Methode zur Untersuchung der Gestalten algebraischer Kurven" [A general method for the study of the forms of algebraic curves]. Doctoral Dissertation, University of Göttingen (in German). Göttingen: W. Fr. Kaestner. References 1. Entry in the birth register of the registry office Eschwege 1880, no. 214: secondary birth register Eschwege 1880 (HStM Order 923 no. 1834) and entry in the birth register of the synagogue community Eschwege 1825–1936, no. 591: birth register of the Jews of Eschwege 1825–1936 (HHStAW Abt. 365 No. 145), available online LAGIS Hesse 2. Gedenkbuch – Opfer der Verfolgung der Juden unter der nationalsozialistischen Gewaltherrschaft in Deutschland 1933–1945, Bundesarchiv (Memorial book – Victims of the persecution of the Jews under the National Socialist dictatorship 1933–1945, German Federal Archives): Kahn, Margarete Margarethe 3. 
Handwritten, personally signed application for a doctorate dated 2 June 1909, doctoral file in the Göttingen University archive, signature UAG.Phil.Prom.Spec.K.II 4. (Germany), Bundesarchiv (2006). "Opfer der Verfolgung der Juden unter der nationalsozialistischen Gewaltherrschaft in Deutschland 1933–1945" [Victims of persecution of the Jews under the Nazi dictatorship in Germany from 1933–1945]. Federal Archives (in German). Vol. 2. p. 1595. ISBN 3-89192-137-3. 5. König, York-Egbert (4 January 2009). "Ein Leben für die Mathematik – Vor 90 Jahren legte Grete Kahn als erste Eschwegerin die Doktorprüfung ab" [A life for mathematics – 90 years ago Grete Kahn was the first woman from Eschwege to earn a doctorate] (in German). vghessen.de. Retrieved 10 January 2014. 6. König, York-Egbert; Prauss, Christina; Tobies, Renate (2011). Simon, Hermann (ed.). Margarete Kahn. Klara Löbenstein. Mathematikerinnen – Studienrätinnen – Freundinnen [Margarete Kahn. Klara Löbenstein. Mathematicians – Teachers – Friends] (in German). Vol. 108 (Jüdische Miniaturen ed.). Berlin: Hentrich & Hentrich. p. 55. ISBN 978-3-942271-23-3. 7. Gottwaldt, Alfred; Schulle, Diana (2005). Die "Judendeportationen" aus dem Deutschen Reich von 1941–1945 – eine kommentierte Chronologie [The "deportation of Jews" from the German Reich from 1941–1945: An annotated chronology] (in German). Wiesbaden. p. 188. ISBN 978-3-86539-059-2. 8. "Stolperstein Rudolstädter Str. 127". berlin.de (in German). 13 September 2008. Retrieved 10 January 2014. 9. "Grete-Kahn-Str". leverkusen.com (in German). 2013. Retrieved 10 January 2014. Further reading • König, York-Egbert; Prauss, Christina; Tobies, Renate (2011). Margarete Kahn. Klara Löbenstein: Mathematikerinnen – Studienrätinnen – Freundinnen [Margarete Kahn. Klara Löbenstein: Mathematicians – Teachers – Friends] (in German). Hentrich & Hentrich. 
ISBN 978-3-942271-23-3 – via hentrichhentrich.de.
• Tobies, Renate (1997). "Aller Männerkultur zum Trotz": Frauen in Mathematik und Naturwissenschaften ["Defying a culture of male dominance": Women in mathematics and science] (in German). Campus Verlag. ISBN 3-593-35749-6.
• König, York-Egbert (2011). "Dr. Margarete Kahn (1880–1942) aus Eschwege. Ergänzungen und familienkundliche Anmerkungen" (PDF). Eschweger Geschichtsblätter (in German). 22: 67–76 – via alemannia-judaica.de.
• König, York-Egbert (2012). "Zwei Paar Schuhe ... ganz verbraucht ... Dr. Margarete Kahn (1880–1942) aus Eschwege erklärt ihr Vermögen" (PDF). Eschweger Geschichtsblätter (in German). 23: 22–30 – via alemannia-judaica.de.
• König, York-Egbert (2020). "Ein Leben für die Mathematik. Dr. Margarethe Kahn (1880–1942) aus Eschwege" (PDF). Eschweger Geschichtsblätter (in German). 21: 69–74 – via alemannia-judaica.de.

External links

• Literature by and about Margarethe Kahn in the German National Library catalogue
• Tobies, Renate (1 March 2009). "Margarete Kahn". Jewish Women: A Comprehensive Historical Encyclopedia. Jewish Women's Archive. Retrieved 10 January 2014.
Find all positive integers $n$ such that..

Find all positive integers $n$ such that $1!+\ldots+n!$ divides $(n + 1)!$

I think I know that the only two positive integers are $1$ and $2$. Proving it inductively has been a problem for me though. So far I want to show

$$\frac{(n+1)!}{1!+\ldots+n!} < n\qquad(*)$$

and

$$(n-1)(1!+\ldots+n!)<(n+1)!\qquad(**)$$

For $(*)$:

\begin{align} (*)&& (n+1)! &= (n+1)n! \\ && &= n!(n) + n! \\ && &= n(n!) + n(n-1)! \\ && &= n(n! + (n-1)!) < n(1!+\ldots+n!) \end{align}

Have I proven $(*)$ correctly here?

For $(**)$, the base case is $n = 3$. Suppose $(n-1)(1!+\ldots+n!) < (n + 1)!$; show $n(1!+\ldots+n!+(n+1)!) < (n + 2)!$.

\begin{align} (n+2)! &= (n+2)(n+1)! \\ &= n(n+1)! + 2(n+1)! > n(n+1)! + 2(n-1)(1!+\ldots+n!) \end{align}

And now I am stuck on $(**)$.

discrete-mathematics

Your proof of $$(n+1)! < n\sum_{k=1}^n k!$$ is correct, provided you state at some point that $n > 2$ is assumed. For $n = 2$, you have equality, and for $n = 1$, the inequality is in the other direction.

For the proof of $(\ast\ast)$, the case $n = 3$ is a simple verification, and then you can inductively see $$\begin{align} n\sum_{k=1}^{n+1} k! &= n(n+1)! + \underbrace{(n-1)\sum_{k=1}^n k!}_{<(n+1)!\text{ by induction}} + \underbrace{\sum_{k=1}^n k!}_{<(n+1)!\text{ also}}\\ &< n(n+1)! + (n+1)! + (n+1)!\\ &= (n+2)(n+1)!\\ &= (n+2)! \end{align}$$ so that indeed $$n-1 < \frac{(n+1)!}{\sum_{k=1}^n k!} < n$$ for all $n > 2$, which rules out divisibility for every $n > 2$.

Daniel Fischer
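As a sanity check on the answer above (this snippet is an addition, not part of the original thread), a brute-force search confirms that $n=1$ and $n=2$ are the only solutions in a large initial range, consistent with the bound $n-1 < (n+1)!/\sum_{k=1}^n k! < n$ for $n>2$:

```python
from math import factorial

def divides(n):
    """Return True if 1! + 2! + ... + n! divides (n + 1)!."""
    s = sum(factorial(k) for k in range(1, n + 1))
    return factorial(n + 1) % s == 0

print([n for n in range(1, 100) if divides(n)])  # [1, 2]
```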
# Setting up your Python and Postgres environment

First, let's install Python. You can download the latest version of Python from the official website (https://www.python.org/downloads/). Follow the installation instructions for your operating system.

Next, let's install PostgreSQL. You can download the latest version of PostgreSQL from the official website (https://www.postgresql.org/download/). Follow the installation instructions for your operating system.

Once you have both Python and PostgreSQL installed, you'll need to install the psycopg2 library, which is a popular Python library for working with PostgreSQL databases. You can install it using pip:

```
pip install psycopg2
```

## Exercise

Install Python and PostgreSQL on your system.

### Solution

This exercise does not have a specific answer, as it requires you to install Python and PostgreSQL on your system. Once you have completed the installation process, you can move on to the next section.

# Creating and connecting to a Postgres database

Now that you have Python and PostgreSQL installed, let's create a PostgreSQL database and connect to it using Python.

To create a PostgreSQL database, you can use the `createdb` command-line utility that comes with PostgreSQL. Open a terminal or command prompt and run the following command:

```
createdb -h localhost -p 5432 -U postgres mydb
```

This command creates a new database named `mydb` with the default user (`postgres`) and port (5432).

Next, let's connect to the PostgreSQL database using Python. We'll use the psycopg2 library to do this.
Here's a simple example:

```python
import psycopg2

# Establishing the connection
conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")

# Creating a cursor object
cursor = conn.cursor()

# Executing a simple SQL query
cursor.execute("SELECT version();")

# Fetching the result (fetchone returns a tuple; the string is its first element)
result = cursor.fetchone()
print("Connected to", result[0])

# Closing the connection
conn.close()
```

This code establishes a connection to the `mydb` database, creates a cursor object, executes a simple SQL query to fetch the PostgreSQL version, and then prints the result. Finally, it closes the connection.

## Exercise

Create a new PostgreSQL database named `mydb` and connect to it using Python.

### Solution

This exercise does not have a specific answer, as it requires you to create a new PostgreSQL database and connect to it using Python. Once you have completed the exercise, you can move on to the next section.

# Working with basic SQL queries

First, let's create a table in the `mydb` database. Here's an example SQL query to create a table named `employees`:

```sql
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    age INTEGER,
    position VARCHAR(100)
);
```

Now that we have a table, let's insert some data into it. Here's an example SQL query to insert a new employee into the `employees` table:

```sql
INSERT INTO employees (name, age, position)
VALUES ('John Doe', 30, 'Software Engineer');
```

We can also update existing data in the table. Here's an example SQL query to update the position of an employee:

```sql
UPDATE employees
SET position = 'Senior Software Engineer'
WHERE name = 'John Doe';
```

Finally, we can delete data from the table. Here's an example SQL query to delete an employee from the `employees` table:

```sql
DELETE FROM employees
WHERE name = 'John Doe';
```

## Exercise

Create a table named `employees` in the `mydb` database with the following columns: `id`, `name`, `age`, and `position`.
Insert a new employee into the table and update their position. Finally, delete the employee from the table.

### Solution

This exercise does not have a specific answer, as it requires you to create a table, insert data, update data, and delete data from the table. Once you have completed the exercise, you can move on to the next section.

# Understanding and writing Postgres functions

Here's an example of a simple PL/pgSQL function that adds two numbers:

```sql
CREATE OR REPLACE FUNCTION add_numbers(a INTEGER, b INTEGER)
RETURNS INTEGER AS $$
BEGIN
    RETURN a + b;
END;
$$ LANGUAGE plpgsql;
```

To call this function from Python, you can use the following code:

```python
import psycopg2

# Establishing the connection
conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")

# Creating a cursor object
cursor = conn.cursor()

# Calling the add_numbers function
cursor.execute("SELECT add_numbers(1, 2);")

# Fetching the result
result = cursor.fetchone()
print("The sum of 1 and 2 is", result[0])

# Closing the connection
conn.close()
```

## Exercise

Create a new PostgreSQL function named `add_numbers` that takes two integer parameters and returns their sum. Call this function from Python and print the result.

### Solution

This exercise does not have a specific answer, as it requires you to create a new PostgreSQL function and call it from Python. Once you have completed the exercise, you can move on to the next section.
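One detail worth adding here (our note, not part of the original lesson): when the values in a query or function call come from user input, pass them to `cursor.execute` as a separate parameter tuple rather than formatting them into the SQL string; psycopg2 then handles quoting and escaping, which prevents SQL injection. A sketch, assuming the `mydb` database, `employees` table, and `add_numbers` function from the sections above, and a running PostgreSQL server:

```python
import psycopg2

# Assumes the database and objects created in the earlier sections.
conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")
cursor = conn.cursor()

# The %s placeholders are filled in by the driver, not by Python string
# formatting, so the values are quoted safely.
a, b = 1, 2
cursor.execute("SELECT add_numbers(%s, %s);", (a, b))
print("The sum is", cursor.fetchone()[0])

# The same style applies to data-modifying statements ...
cursor.execute(
    "INSERT INTO employees (name, age, position) VALUES (%s, %s, %s);",
    ("Jane Roe", 28, "Data Engineer"),
)
# ... which are only persisted once the transaction is committed.
conn.commit()

conn.close()
```

Because this sketch needs a live server, run it against your own `mydb` instance.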
# Using Python to interact with Postgres functions

To call a PostgreSQL function from Python, you can use the following code:

```python
import psycopg2

# Establishing the connection
conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")

# Creating a cursor object
cursor = conn.cursor()

# Calling the add_numbers function
cursor.execute("SELECT add_numbers(1, 2);")

# Fetching the result
result = cursor.fetchone()
print("The sum of 1 and 2 is", result[0])

# Closing the connection
conn.close()
```

This code establishes a connection to the `mydb` database, creates a cursor object, calls the `add_numbers` function with the parameters 1 and 2, fetches the result, and then prints it. Finally, it closes the connection.

## Exercise

Call the `add_numbers` function from Python and print the result.

### Solution

This exercise does not have a specific answer, as it requires you to call the `add_numbers` function from Python and print the result. Once you have completed the exercise, you can move on to the next section.

# Advanced querying with Postgres functions

PostgreSQL functions can be used to perform complex tasks and return complex results.
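Before looking at functions that return whole tables, a practical aside (our addition, not part of the original text): `fetchall` loads the entire result set into client memory. For large results, psycopg2's named (server-side) cursors stream rows in batches instead. A sketch, assuming the `employees` table from earlier sections and a running server:

```python
import psycopg2

conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")

# Giving the cursor a name makes it a server-side cursor, so rows are
# fetched from PostgreSQL in batches rather than all at once.
cursor = conn.cursor(name="employee_scan")
cursor.itersize = 1000  # rows fetched per network round trip

cursor.execute("SELECT id, name FROM employees;")
for emp_id, name in cursor:
    print(emp_id, name)

cursor.close()
conn.close()
```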
Here's an example of a PL/pgSQL function that returns a table of employees (note that the columns in the query are qualified with a table alias, since PL/pgSQL would otherwise treat unqualified names like `id` as ambiguous references to the output columns):

```sql
CREATE OR REPLACE FUNCTION get_employees()
RETURNS TABLE(id INTEGER, name VARCHAR, age INTEGER, position VARCHAR) AS $$
BEGIN
    RETURN QUERY SELECT e.id, e.name, e.age, e.position FROM employees e;
END;
$$ LANGUAGE plpgsql;
```

To call this function from Python and fetch the result, you can use the following code:

```python
import psycopg2

# Establishing the connection
conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")

# Creating a cursor object
cursor = conn.cursor()

# Calling the get_employees function
cursor.execute("SELECT * FROM get_employees();")

# Fetching the result
result = cursor.fetchall()
for row in result:
    print("ID:", row[0], "Name:", row[1], "Age:", row[2], "Position:", row[3])

# Closing the connection
conn.close()
```

## Exercise

Call the `get_employees` function from Python and print the result.

### Solution

This exercise does not have a specific answer, as it requires you to call the `get_employees` function from Python and print the result. Once you have completed the exercise, you can move on to the next section.

# Optimizing and troubleshooting Postgres functions

To optimize a PostgreSQL function, you can analyze its execution plan using the `EXPLAIN` command. Here's an example:

```sql
EXPLAIN SELECT add_numbers(1, 2);
```

To troubleshoot a PostgreSQL function, you can use the `RAISE` command to raise an exception and return an error message. Here's an example:

```sql
CREATE OR REPLACE FUNCTION add_numbers(a INTEGER, b INTEGER)
RETURNS INTEGER AS $$
BEGIN
    IF a < 0 OR b < 0 THEN
        RAISE EXCEPTION 'Invalid input: Both numbers must be positive.';
    END IF;
    RETURN a + b;
END;
$$ LANGUAGE plpgsql;
```

In Python, you can catch exceptions raised by PostgreSQL functions using the `try-except` block.
Here's an example:

```python
import psycopg2

# Establishing the connection
conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")

# Creating a cursor object
cursor = conn.cursor()

# Calling the add_numbers function with negative numbers
try:
    cursor.execute("SELECT add_numbers(-1, 2);")
except psycopg2.Error as e:
    print("Error:", e)
    conn.rollback()  # reset the transaction aborted by the error

# Closing the connection
conn.close()
```

## Exercise

Analyze the execution plan of the `add_numbers` function using the `EXPLAIN` command. Troubleshoot the function by calling it with negative numbers and catching the exception in Python.

### Solution

This exercise does not have a specific answer, as it requires you to analyze the execution plan of the `add_numbers` function and troubleshoot it by calling it with negative numbers and catching the exception in Python. Once you have completed the exercise, you can move on to the next section.

# Integrating Postgres functions with Python applications

To integrate PostgreSQL functions with a Python application, you can use the psycopg2 library to establish a connection to the database, call the functions, and fetch the results. Here's an example:

```python
import psycopg2

# Establishing the connection
conn = psycopg2.connect(database="mydb", user="postgres",
                        password="password", host="localhost", port="5432")

# Creating a cursor object
cursor = conn.cursor()

# Calling the add_numbers function
cursor.execute("SELECT add_numbers(1, 2);")

# Fetching the result
result = cursor.fetchone()
print("The sum of 1 and 2 is", result[0])

# Closing the connection
conn.close()
```

In a real-world Python application, you can integrate PostgreSQL functions by calling them from the application's business logic and using the results as needed.

## Exercise

Integrate the `add_numbers` function into a Python application by calling it and using the result in the application's business logic.
### Solution

This exercise does not have a specific answer, as it requires you to integrate the `add_numbers` function into a Python application and use the result in the application's business logic. Once you have completed the exercise, you can move on to the next section.

# Best practices for Postgres function development

Here are some best practices for PostgreSQL function development:

- Use meaningful and descriptive names for functions.
- Write functions that perform a single task.
- Use input parameters to pass data to the function.
- Return a result from the function.
- Use transactions to ensure data consistency.
- Write unit tests for your functions.

## Exercise

Follow the best practices for PostgreSQL function development by creating a new function named `multiply_numbers` that takes two integer parameters and returns their product. Call this function from Python and print the result.

### Solution

This exercise does not have a specific answer, as it requires you to create a new PostgreSQL function named `multiply_numbers` and follow the best practices for PostgreSQL function development. Once you have completed the exercise, you can move on to the next section.

# Real-world examples and case studies

Here are some real-world examples and case studies:

- A web application that uses PostgreSQL functions to perform complex calculations and return the results to the user.
- A data analysis application that uses PostgreSQL functions to extract data from a database and perform complex analysis.
- A machine learning application that uses PostgreSQL functions to train and predict models based on data from a database.

## Exercise

Choose a real-world example or case study that involves PostgreSQL functions in Python, and analyze its implementation, challenges, and benefits.

### Solution

This exercise does not have a specific answer, as it requires you to choose a real-world example or case study and analyze its implementation, challenges, and benefits.
Once you have completed the exercise, you can move on to the next section.

# Conclusion and resources for further learning

In this textbook, we've covered the basics of PostgreSQL functions in Python: setting up the environment, creating and connecting to a PostgreSQL database, working with basic SQL queries, understanding and writing PostgreSQL functions, using Python to interact with them, advanced querying, optimizing and troubleshooting, integrating PostgreSQL functions with Python applications, best practices for function development, and real-world examples and case studies.

To continue learning about PostgreSQL functions in Python, you can explore the following resources:

- The official psycopg2 documentation (https://www.psycopg.org/docs/)
- The official PostgreSQL documentation (https://www.postgresql.org/docs/)
- Online tutorials and articles on PostgreSQL and Python integration
- Books on PostgreSQL and Python programming

## Exercise

Review the resources mentioned in this section and choose one to explore further.

### Solution

This exercise does not have a specific answer, as it requires you to review the resources mentioned in this section and choose one to explore further. Once you have completed the exercise, you can consider this textbook complete.
\begin{document}

\title{A reformulated Krein matrix for star-even polynomial operators with applications}

\maketitle

\begin{abstract}
In its original formulation the Krein matrix was used to locate the spectrum of first-order star-even polynomial operators where both operator coefficients are nonsingular. Such operators naturally arise when considering first-order-in-time Hamiltonian PDEs. Herein the matrix is reformulated to allow for operator coefficients with nontrivial kernel. Moreover, it is extended to allow for the study of the spectral problem associated with quadratic star-even operators, which arise when considering the spectral problem associated with second-order-in-time Hamiltonian PDEs. In conjunction with the Hamiltonian-Krein index (HKI) the Krein matrix is used to study two problems: conditions leading to Hamiltonian-Hopf bifurcations for small spatially periodic waves, and the location and Krein signature of small eigenvalues associated with, e.g., $n$-pulse problems. For the first case we consider in detail a first-order-in-time fifth-order KdV-like equation. In the latter case we use a combination of Lin's method, the HKI, and the Krein matrix to study the spectrum associated with $n$-pulses for a second-order-in-time Hamiltonian system which is used to model the dynamics of a suspension bridge.
\end{abstract}

\begin{keywords}
Krein matrix, star-even operators, $n$-pulses
\end{keywords}

\begin{AMS}
35P30, 47A55, 47A56, 70H14
\end{AMS}

\section{Introduction}\label{sec:intro}

Herein we are generally concerned with the spectral stability of waves that arise as steady-states for a nonlinear Hamiltonian system which is either first-order or second-order in time. There are two tools which we will use to study the spectrum.
The first is the Hamiltonian-Krein index (HKI), which relates the number of negative directions associated with the linearized energy evaluated at the underlying wave to the number of (potentially) unstable point spectra (eigenvalues with positive real part). If the HKI is zero, then under some fairly generic assumptions the underlying wave will be orbitally stable. If the HKI is positive, then it provides an upper bound on the number of unstable point eigenvalues. If it can be shown, either analytically or numerically, that there are no eigenvalues with positive real part, then the HKI provides the number of purely imaginary eigenvalues with negative Krein signature. The Krein signature of a simple purely imaginary eigenvalue of the linearization about a wave is defined to be positive (negative) if the Hessian of the energy, also evaluated at the wave and restricted to the corresponding eigenspace of the linearization, is positive (negative) definite. Dynamically, at the linear level, eigenvalues with negative Krein signature provide temporally oscillatory behavior in an unstable energy direction. Moreover, these are the foundational eigenvalues associated with the Hamiltonian-Hopf bifurcation. In particular, the bifurcation can occur only if purely imaginary eigenvalues of opposite signature collide when doing some type of parameter continuation. If it can be shown all the purely imaginary eigenvalues have positive signature, then a Hamiltonian-Hopf bifurcation is not possible. A formal definition of the signature in the setting of star-even polynomial operators is provided in equation \cref{e:defkrein}. Now, the purely imaginary eigenvalues with negative Krein signature cannot be easily detected via a visual examination of the spectra. Consequently, another tool is needed. Here we use the Krein matrix, an eigenvalue detecting tool which can also be used to determine the Krein signature of purely imaginary point eigenvalues. 
The Krein matrix has properties similar to those of the Evans matrix - in particular, the determinant being zero means that an eigenvalue has been found - except that it is meromorphic instead of being analytic. By marrying the HKI with a spectral analysis via the Krein matrix one can locate all the point spectra associated with dynamical instabilities. We will illustrate the fruit of this marriage herein by considering two problems: the spectral stability associated with small spatially periodic waves, and the location and Krein signature of small eigenvalues associated with tail-tail interactions in $n$-pulses. We now flesh out this preliminary discussion. The linearization of the Hamiltonian system will yield a star-even operator polynomial, \[ \mathcal{P}_n(\lambda)\mathrel{\mathop:}=\sum_{j=0}^n\lambda^j\mathcal{A}_j. \] On some Hilbert space, $X$, endowed with inner-product, $\langle\cdot,\cdot\rangle$, which in turn induces a norm, $\|\cdot\|$, we assume the operator coefficients $\mathcal{A}_{2\ell}$ are Hermitian, $\mathcal{A}_{2\ell}^\mathrm{a}=\mathcal{A}_{2\ell}$, and the operator coefficients $\mathcal{A}_{2\ell+1}$ are skew-Hermitian, $\mathcal{A}_{2\ell+1}^\mathrm{a}=-\mathcal{A}_{2\ell+1}$. Here we let $\mathcal{T}^\mathrm{a}$ denote the adjoint of the operator $\mathcal{T}$. If $n=1$, \[ \mathcal{P}_1(\lambda)\psi=0\,\,\leadsto\,\, \left(\mathcal{A}_0+\lambda\mathcal{A}_1\right)\psi=0. \] Assuming $\mathcal{A}_1$ is invertible, this spectral problem is equivalent to, \[ \mathcal{A}_1^{-1}\mathcal{A}_0\psi=\gamma\psi,\quad\gamma=-\lambda, \] which, since $\mathcal{A}_1^{-1}$ is skew-Hermitian and $\mathcal{A}_0$ is Hermitian, is the canonical form for a Hamiltonian eigenvalue problem. Indeed, while we will not go into the details here, it is possible via a change of variables to put any star-even problem into canonical form, see \cite[Section~3]{kapitula:iif13} and the references therein. 
For our purposes it is best to leave the problem in its original formulation. Values $\lambda_0$ for which the polynomial $\mathcal{P}_n(\lambda_0)$ is singular will be called \textit{polynomial eigenvalues}. Because of these assumed coefficient properties, the polynomial eigenvalues are symmetric with respect to the imaginary axis of the complex plane. The eigenvalue symmetry follows from, \[ \mathcal{P}_n(\lambda)^\mathrm{a}=\mathcal{P}_n(-\overline{\lambda}), \] so $\lambda$ being a polynomial eigenvalue implies $-\overline{\lambda}$ is also a polynomial eigenvalue. In order to ensure there are no polynomial eigenvalues at infinity, we assume $\mathcal{A}_n$ is invertible. More can be said about the set of polynomial eigenvalues under compactness assumptions (which will henceforth be assumed, except for the example considered in \cref{s:6}). Suppose the Hermitian operator $\mathcal{A}_0$ has compact resolvent, so the eigenvalues for this operator coefficient are real, semi-simple, and have finite multiplicity. Let $P_{\mathcal{A}_0}:X\mapsto\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)$ be the orthogonal projection, and set $P_{\mathcal{A}_0}^\perp=\mathcal{I}-P_{\mathcal{A}_0}:X\mapsto\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)^\perp$. Assuming the operators, \[ \left(P_{\mathcal{A}_0}^\perp\mathcal{A}_0P_{\mathcal{A}_0}^\perp\right)^{-1} P_{\mathcal{A}_0}^\perp\mathcal{A}_jP_{\mathcal{A}_0}^\perp:\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)^\perp\mapsto\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)^\perp,\quad j=1,\dots,n, \] are compact, the spectrum for $\mathcal{P}_n(\lambda)$ is point spectra only \cite[Remark~2.2]{bronski:aii14}. Moreover, each polynomial eigenvalue has finite multiplicity, and infinity is the only possible accumulation point for the polynomial eigenvalues. 
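As a concrete finite-dimensional illustration of the spectral symmetry described above (a toy example of our own, not drawn from the applications considered later), take $X=\mathbb{C}^2$ with
\[
\mathcal{P}_1(\lambda)=\left(\begin{array}{rr}1&0\\0&-1\end{array}\right)
+\lambda\left(\begin{array}{rr}0&1\\-1&0\end{array}\right)
=\left(\begin{array}{rr}1&\lambda\\-\lambda&-1\end{array}\right),
\]
so $\mathop\mathrm{det}\nolimits\mathcal{P}_1(\lambda)=\lambda^2-1$, and the polynomial eigenvalues are the real pair $\lambda=\pm1$, which is symmetric with respect to the imaginary axis. Replacing $\mathcal{A}_0$ with the identity instead yields $\mathop\mathrm{det}\nolimits\mathcal{P}_1(\lambda)=\lambda^2+1$, so the polynomial eigenvalues are the purely imaginary pair $\lambda=\pm\mathrm{i}$. Both cases are consistent with the index theory discussed below: in the first $\mathrm{n}(\mathcal{A}_0)=1$ and there is exactly one polynomial eigenvalue with positive real part, while in the second $\mathrm{n}(\mathcal{A}_0)=0$ and the spectrum is purely imaginary.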
Regarding the number of unstable polynomial eigenvalues, i.e., those polynomial eigenvalues with positive real part, the total number can be bounded above via the Hamiltonian-Krein index (HKI). Let $k_\mathrm{r}$ denote the total number (counting multiplicity) of real and positive polynomial eigenvalues, and let $k_\mathrm{c}$ be the total number (counting multiplicity) of polynomial eigenvalues with positive real part and nonzero imaginary part. The total number of unstable polynomial eigenvalues is $k_\mathrm{r}+k_\mathrm{c}$. The HKI also takes into account a subset of purely imaginary polynomial eigenvalues; namely, those with negative Krein signature. For each purely imaginary and nonzero eigenvalue, $\mathrm{i}\lambda_0$ with $\lambda_0\in\mathbb{R}$, with associated eigenspace $\mathbb{E}_{\mathrm{i}\lambda_0}$, set \begin{equation}\label{e:defkrein} k_\mathrm{i}^-(\mathrm{i}\lambda_0)=\mathrm{n}\left(-\lambda_0\left[\mathrm{i} P_n'(\mathrm{i}\lambda_0)\right]|_{\mathbb{E}_{\mathrm{i}\lambda_0}}\right). \end{equation} Here $\mathrm{n}(\bm{\mathit{S}})$ denotes the number of negative eigenvalues for the Hermitian matrix $\bm{\mathit{S}}$, and $-\lambda_0\mathrm{i} P_n'(\mathrm{i}\lambda_0)|_{\mathbb{E}_{\mathrm{i}\lambda_0}}$ is the Hermitian matrix formed by the representation of the Hermitian operator $-\mathrm{i}\lambda_0P_n'(\mathrm{i}\lambda_0)$ restricted to the eigenspace $\mathbb{E}_{\mathrm{i}\lambda_0}$. If the polynomial eigenvalue is simple with associated eigenvector $u_{\mathrm{i}\lambda_0}$, then \[ k_\mathrm{i}^-(\mathrm{i}\lambda_0)= \mathrm{n}\left(\lambda_0\langle-\mathrm{i} P_n'(\mathrm{i}\lambda_0)u_{\mathrm{i}\lambda_0},u_{\mathrm{i}\lambda_0}\rangle\right); \] in particular, if $n=1$ then it takes the more familiar form, \[ k_\mathrm{i}^-(\mathrm{i}\lambda_0)= \mathrm{n}\left(\langle\mathcal{A}_0u_{\mathrm{i}\lambda_0},u_{\mathrm{i}\lambda_0}\rangle\right). \] See \cref{s:22} for more details. 
The nonnegative integer $k_\mathrm{i}^-(\mathrm{i}\lambda_0)$ is the negative Krein index associated with the purely imaginary eigenvalue. If $k_\mathrm{i}^-(\mathrm{i}\lambda_0)=0$, the polynomial eigenvalue is said to have positive Krein signature; otherwise, it has negative Krein signature. The total negative Krein index is the sum of the individual Krein indices, \[ k_\mathrm{i}^-=\sum k_\mathrm{i}^-(\mathrm{i}\lambda_0). \] Regarding $k_\mathrm{i}^-$, consider the collision of two simple polynomial eigenvalues on the imaginary axis. If they both have the same signature, then after the collision they will each remain purely imaginary. On the other hand, if they have opposite Krein signature, then it will generically be the case that after the collision the pair will have nonzero real part, which due to the spectral symmetry means that one of the polynomial eigenvalues will have positive real part. This is the so-called Hamiltonian-Hopf bifurcation. In the case of $n=1$ the interested reader should consult \cite[Chapter~7.1]{kapitula:sad13} for more details regarding the case of the collision of two simple polynomial eigenvalues, and \cite{kapitula:tks10,vougalter:eoz06} for the case of higher-order collisions. The case of $n\ge2$ can be reformulated as an $n=1$ problem, see \cite{kapitula:iif13} and the references therein. Note that if $k_\mathrm{i}^-=0$, then no polynomial eigenvalues will leave the imaginary axis. The HKI is defined to be the sum of the three indices, \[ K_{\mathop\mathrm{Ham}\nolimits}=k_\mathrm{r}+k_\mathrm{c}+k_\mathrm{i}^-. \] The HKI is intimately related to the operator coefficients. For the sake of exposition, first suppose $\mathcal{A}_0,\mathcal{A}_n$ are nonsingular. 
If $X=\mathbb{C}^N$, i.e., the operator is actually a star-even matrix polynomial with $nN$ polynomial eigenvalues, \[ K_{\mathop\mathrm{Ham}\nolimits}=\begin{cases} \mathrm{n}(\mathcal{A}_0)+(\ell-1)N,\quad&n=2\ell-1\\ \mathrm{n}(\mathcal{A}_0)+\mathrm{n}\left((-1)^{\ell-1}\mathcal{A}_{n}\right)+(\ell-1)N,\quad&n=2\ell \end{cases} \] \cite[Theorem~3.4]{kapitula:iif13}. If $n\ge3$ the upper bound for the total number of unstable polynomial eigenvalues depends upon the dimension of the space; consequently, taking the limit $N\to+\infty$ provides no meaningful information regarding the limiting case of operator coefficients which are compact operators. Consequently, we henceforth assume $n\in\{1,2\}$. Now, suppose $\mathcal{A}_0$ has a nontrivial kernel, but that the highest-order coefficient is nonsingular. If $n=1$, then under the widely applicable assumptions, \begin{enumerate} \item $\displaystyle{\mathcal{A}_1:\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)\mapsto\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)^\perp}$ \item $\displaystyle{\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}}$ is invertible, \end{enumerate} we know, \begin{equation}\label{e:11} K_{\mathop\mathrm{Ham}\nolimits}=\mathrm{n}(\mathcal{A}_0)-\mathrm{n}\left(-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}\right), \end{equation} see \cite{haragus:ots08,pelinovsky:ilf05} and the references therein. Regarding the operator $\mathcal{A}_1$, the case where \begin{enumerate} \item there is a nontrivial kernel, but where the rest of spectrum is otherwise uniformly bounded away from the origin, is covered in \cite{deconinck:ots15,kapitula:sif12} and \cite[Chapter~5.3]{kapitula:sad13} \item where the spectrum which is not bounded away from the origin is considered in \cite{kapitula:ahk14,pelinovsky:sso14}. 
\end{enumerate} If $n=2$, then upon replacing condition (b) above with, \begin{enumerate}\addtocounter{enumi}{1} \item $\displaystyle{\left(\mathcal{A}_2-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right)|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}}$ is invertible, \end{enumerate} we know, \begin{equation}\label{e:12} K_{\mathop\mathrm{Ham}\nolimits}=\mathrm{n}(\mathcal{A}_0)+\mathrm{n}(\mathcal{A}_2)- \mathrm{n}\left(\left[\mathcal{A}_2-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right]|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}\right), \end{equation} see \cite{bronski:aii14}. The goal of this paper is to construct a square matrix-valued function, say $\bm{\mathit{K}}(\lambda)$, which has the properties that for $\lambda\in\mathrm{i}\mathbb{R}$, \begin{enumerate} \item $\bm{\mathit{K}}(\lambda)$ is Hermitian and meromorphic \item $\mathop\mathrm{det}\nolimits\bm{\mathit{K}}(\lambda)=0$ only if $\lambda$ is a polynomial eigenvalue \item $\bm{\mathit{K}}(\lambda)$ can be used to determine the Krein signature of a polynomial eigenvalue. \end{enumerate} The matrix $\bm{\mathit{K}}(\lambda)$ is known as the Krein matrix. The properties (a) and (b) listed above are reminiscent of those possessed by the Evans matrix, except that the Evans matrix is analytic \cite[Chapters~8-10]{kapitula:sad13}. Regarding (b) and (c), since the determinant of a matrix is equal to the product of its eigenvalues, property (b) is satisfied if at least one of the eigenvalues is zero. Henceforth, we will call the eigenvalues of the Krein matrix, say $r_j(\lambda)$, the Krein eigenvalues. The determination of the Krein signature of a purely imaginary polynomial eigenvalue takes place through the Krein eigenvalues. If $r_j(\lambda_0)=0$ for some $\lambda_0\in\mathrm{i}\mathbb{R}$, the Krein signature is found by considering the sign of $r_j'(\lambda_0)$. 
Thus, via a plot of the Krein eigenvalues one can graphically determine the signature of a purely imaginary polynomial eigenvalue through the slope of the curve at a zero. The interested reader should consult the beautiful paper by Koll\'ar and Miller \cite{kollar:gks14} for, \begin{enumerate} \item a graphical perspective on the Krein signature using the eigenvalues of the self-adjoint operator, $\mathcal{A}_0+z(\mathrm{i}\mathcal{A}_1)$, for $z\in\mathbb{R}$ \item Hamiltonian instability index results which arise from this graphical perspective. \end{enumerate} A significant difference between our approach and that of \cite{kollar:gks14} is the number of Krein eigenvalues to be graphed; in particular, our approach gives a finite number, whereas the approach of \cite{kollar:gks14} yields a number equal to the number of eigenvalues for $\mathcal{A}_0$. The Krein matrix was first constructed for linear polynomials of the canonical form, \[ \mathcal{P}_1(\lambda)=\left(\begin{array}{cc}\mathcal{L}_+&0\\0&\mathcal{L}_-\end{array}\right) +\lambda\left(\begin{array}{rr}0&\mathcal{I}\\-\mathcal{I}&0\end{array}\right), \] where $\mathcal{L}_\pm$ are invertible Hermitian operators with compact resolvent, and $\mathcal{I}$ denotes the identity operator, see \cite{kapitula:tks10,li:sso98}. Recent applications of the Krein matrix include a new proof of the Jones-Grillakis instability criterion, \[ k_\mathrm{r}\ge|\mathrm{n}(\mathcal{L}_-)-\mathrm{n}(\mathcal{L}_+)|, \] as well as a study of the spectral problem for waves to a mathematical model for Bose-Einstein condensates \cite{kapitula:tkm13,kapitula:sif12}. The paper is organized as follows. In \cref{s:2} the Krein matrix is constructed for star-even polynomial operators of any degree. In particular, the previous invertibility assumption on $\mathcal{A}_0$ is removed. 
In \cref{s:3} the properties of the Krein eigenvalues are deduced; in particular, their relation to the Krein signature of purely imaginary polynomial eigenvalues is given. In \cref{s:4} the Krein eigenvalues are used to study the Hamiltonian-Hopf bifurcation problem associated with small periodic waves. While the underlying wave is small, it is possible for the polynomial eigenvalues to have $\mathcal{O}(1)$ imaginary part (see \cite{deconinck:hfi16,kollar:dco19,trichtchenko:sop18} for a similar study using a different approach). In \cref{s:5} we show how the Krein matrix can be used to locate small eigenvalues that arise from some type of bifurcation. However, the analysis does not use perturbation theory, so it is possible to use the resulting Krein matrix to consider spectral stability for multi-pulse problems, where the small eigenvalues arise from the exponentially small tail-tail interactions of a translated base pulse. Finally, in \cref{s:6} we use the Krein matrix to study the spectral problem associated with $n$-pulse solutions to the suspension bridge equation, which is a second-order-in-time Hamiltonian PDE. \noindent\textbf{Acknowledgements.} The authors would like to thank the referees for their careful reading of the original manuscript, and their helpful suggestions and constructive critique. We believe that this revision is a substantial improvement over the original because of their work. \section{The Krein matrix}\label{s:2} The Krein matrix allows us to reduce the infinite-dimensional eigenvalue problem, \[ \mathcal{P}_n(\lambda)\psi=0, \] to a finite-dimensional problem, \[ \bm{\mathit{K}}_S(\lambda)\bm{\mathit{x}}=\bm{\mathit{0}}. \] Here $\bm{\mathit{K}}_S(\lambda)$ is the (square) Krein matrix. Whereas the original star-even operator is analytic in the spectral parameter, the Krein matrix is meromorphic with poles on the imaginary axis.
The presence of these poles is the key to using the Krein matrix to determine the Krein signature of a purely imaginary eigenvalue. \subsection{General construction}\label{s:21} Let $S\subset X$ be a finite-dimensional subspace of dimension $n_S$ with orthonormal basis $\{s_j\}$, and let $P_S:X\mapsto X$ be the orthogonal projection, i.e., \[ P_Su=\sum_{j=1}^{n_S}\langle u,s_j\rangle s_j. \] Denote the complementary orthogonal projection as \[ P_{S^\perp}\mathrel{\mathop:}=\mathcal{I}-P_S, \] and write \[ u=s+s^\perp,\,\,\mathrm{with}\,\, P_Su=s,\,\,P_{S^\perp}u=s^\perp. \] In constructing the subspace-dependent Krein matrix, $\bm{\mathit{K}}_S(\lambda)$, for the polynomial eigenvalue problem, we will extensively use the orthogonal projections. We first rewrite the polynomial eigenvalue problem, \begin{equation}\label{e:pp2} \mathcal{P}_n(\lambda)s+\mathcal{P}_n(\lambda)s^\perp=0. \end{equation} Applying the complementary projection to \cref{e:pp2} yields \begin{equation}\label{e:pp2a} P_{S^\perp}\mathcal{P}_n(\lambda)s+P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp}s^\perp=0. \end{equation} The operator $P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp}:S^\perp\mapsto S^\perp$ is a star-even polynomial operator. Consequently, it has the same spectral properties as the original star-even operator; in particular, it is invertible except for a countable number of spectral values. If $\lambda$ is not a polynomial eigenvalue for the operator $P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp}$, then we can invert to write \[ s^\perp=-(P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\lambda)s, \] which leads to, \begin{equation}\label{e:pp3} s^\perp= P_{S^\perp}s^\perp= -P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\lambda)s. \end{equation} If we take the inner-product of \cref{e:pp2} with a basis element $s_j$, we get \[ \langle s_j,\mathcal{P}_n(\lambda)s\rangle+\langle s_j,\mathcal{P}_n(\lambda)s^\perp\rangle=0. 
\] Substitution of the expression in \cref{e:pp3} into the above provides, \[ \langle s_j,\mathcal{P}_n(\lambda)s\rangle- \langle s_j,\mathcal{P}_n(\lambda)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\lambda)s\rangle=0. \] Writing \[ s=\sum_{j=1}^{n_S}x_js_j, \] the above expression becomes \begin{equation}\label{e:pp3a} \bm{\mathit{K}}_S(\lambda)\bm{\mathit{x}}=\bm{\mathit{0}}, \end{equation} where the Krein matrix $\bm{\mathit{K}}_S(\lambda)\in\mathbb{C}^{n_S\times n_S}$ has the form \[ \bm{\mathit{K}}_S(\lambda)=\mathcal{P}_n(\lambda)|_S- \mathcal{P}_n(\lambda)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\lambda)|_{S}, \] where we use the notation \[ \left(\mathcal{T}|_S\right)_{ij}=\langle s_i,\mathcal{T} s_j\rangle. \] In conclusion, polynomial eigenvalues for the original problem are found via solving \cref{e:pp3a}, which means \[ \mathop\mathrm{det}\nolimits\bm{\mathit{K}}_S(\lambda)=0,\quad\mathrm{or}\quad\bm{\mathit{x}}=\bm{\mathit{0}}. \] What does it mean if $\lambda_0$ is a polynomial eigenvalue with $\bm{\mathit{x}}=\bm{\mathit{0}}$? In this case the associated eigenfunction for the polynomial eigenvalue, $u_0$, satisfies \[ P_Su_0=0,\quad P_{S^\perp}u_0=u_0. \] Going back to \cref{e:pp2} and \cref{e:pp2a} we see \[ \mathcal{P}_n(\lambda_0)P_{S^\perp}u_0=0\quad\leadsto\quad P_{S^\perp}\mathcal{P}_n(\lambda_0)P_{S^\perp}u_0=0. \] In other words, $\lambda_0$ is also a polynomial eigenvalue for the operator $P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp}$. Thus, if $\lambda_0$ is a polynomial eigenvalue for which the associated eigenfunction resides in $S^\perp$, then $\lambda_0$ is also a pole of the Krein matrix. Consequently, we cannot expect to capture such polynomial eigenvalues by solving $\mathop\mathrm{det}\nolimits\bm{\mathit{K}}_S(\lambda)=0$.
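In finite dimensions the construction above is simply a Schur complement of $\mathcal{P}_n(\lambda)$ with respect to the splitting $X=S\oplus S^\perp$, which makes it easy to sanity-check numerically. The following sketch (with an arbitrarily chosen $4\times4$ star-even test pencil that is an assumption of the illustration, not an example from the text) builds $\bm{\mathit{K}}_S(\lambda)$ from the projections and confirms that its determinant vanishes at a polynomial eigenvalue whose eigenvector does not reside in $S^\perp$.

```python
import numpy as np

# A hypothetical first-order star-even pencil P(lam) = A0 + lam*A1 on C^4:
# A0 is Hermitian with one negative direction, A1 is skew-Hermitian.
A0 = np.diag([-1.0, 2.0, 3.0, 4.0])
A1 = np.array([[0., 1., 0., 0.],
               [-1., 0., 0., 0.],
               [0., 0., 0., 1.],
               [0., 0., -1., 0.]])

s = np.eye(4)[:, :1]   # basis of S = negative space of A0 (span{e1})
B = np.eye(4)[:, 1:]   # basis of S^perp

def krein_matrix(lam):
    # K_S(lam) = P(lam)|_S - P(lam)(P_perp P(lam) P_perp)^{-1} P(lam)|_S,
    # written with reduced matrices in the bases {s_j} and {columns of B}.
    P = A0 + lam * A1
    reduced = B.conj().T @ P @ B     # P_{S^perp} P(lam) P_{S^perp}
    coup = B.conj().T @ P @ s        # P_{S^perp} P(lam)|_S
    return s.conj().T @ P @ s - (s.conj().T @ P @ B) @ np.linalg.solve(reduced, coup)

# The pencil block-decouples; the first block [[-1, lam], [-lam, 2]] gives
# det = lam^2 - 2, so lam = sqrt(2) is a polynomial eigenvalue.
assert abs(np.linalg.det(krein_matrix(np.sqrt(2.0)))) < 1e-10
# Away from the polynomial spectrum the determinant is nonzero.
assert abs(np.linalg.det(krein_matrix(0.5))) > 1e-3
```

Since $\det\mathcal{P}_n(\lambda)=\det(P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp})\cdot\det\bm{\mathit{K}}_S(\lambda)$ whenever the reduced operator is invertible, the determinant of the Krein matrix vanishes exactly at those polynomial eigenvalues that are not hidden in $S^\perp$.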
This fact will motivate our later choice for the subspace $S$, as we need to know that the polynomial eigenvalues being missed by considering the zero set of the determinant of the Krein matrix are somehow unimportant. The choice of the subspace is determined by looking at the Krein index of a purely imaginary polynomial eigenvalue, $\lambda=\mathrm{i}\lambda_0$. Letting $\mathbb{E}_{\mathrm{i}\lambda_0}$ denote the generalized eigenspace, the negative Krein index is \[ k_\mathrm{i}^-(\mathrm{i}\lambda_0)\mathrel{\mathop:}= \mathrm{n}\left(-\lambda_0[\mathrm{i} \mathcal{P}_n'(\mathrm{i}\lambda_0)]|_{\mathbb{E}_{\mathrm{i}\lambda_0}}\right) \] (see \cite{bronski:aii14}). Since the goal is to have the Krein matrix capture all possible polynomial eigenvalues with negative Krein index through its determinant, we want it to be the case that if $\mathrm{i}\lambda_0$ is a polynomial eigenvalue whose associated eigenfunction is in $S^\perp$, then the negative Krein index is zero. In other words, we want the Hermitian matrix, $-\lambda_0[\mathrm{i}\mathcal{P}_n'(\mathrm{i}\lambda_0)]|_{\mathbb{E}_{\mathrm{i}\lambda_0}}$, to be positive definite whenever $\mathrm{i}\lambda_0$ is also a polynomial eigenvalue for the operator $P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp}$. \begin{remark} In practice, mapping $\bm{\mathit{K}}(\lambda)\mapsto\lambda^\ell\bm{\mathit{K}}(\lambda)$ for some $\ell\in\mathbb{N}$ does not change the above property of the Krein matrix. However, as we will see, an appropriate choice of $\ell$ gives better graphical properties regarding the determination of those polynomial eigenvalues with negative Krein signature.
\end{remark} \begin{remark} Note that if $\lambda=\mathrm{i}\lambda_0\in\mathrm{i}\mathbb{R}$, so that the operator $\mathcal{P}_n(\mathrm{i}\lambda_0)$ is Hermitian, then the elements in the second matrix can be rewritten, \[ \left((P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp})^{-1}|_{P_{S^\perp}\mathcal{P}_n(\lambda)S}\right)_{ij}= \langle P_{S^\perp}\mathcal{P}_n(\lambda)s_i,(P_{S^\perp}\mathcal{P}_n(\lambda)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\lambda)s_j\rangle. \] \end{remark} \subsection{Subspace selection}\label{s:22} We now see how the operator coefficients may dictate the choice of the subspace $S$. First consider the first-order operator, \[ \mathcal{P}_1(\lambda)=\mathcal{A}_0+\lambda\mathcal{A}_1, \] where $\mathcal{A}_0$ is Hermitian, and $\mathcal{A}_1$ is skew-Hermitian. Regarding the term associated with the calculation of the negative Krein index, \[ -\lambda_0[\mathrm{i}\mathcal{P}_1'(\mathrm{i}\lambda_0)]=-\lambda_0(\mathrm{i}\mathcal{A}_1). \] If $\psi_0$ is an eigenfunction associated with the polynomial eigenvalue, so $\mathcal{P}_1(\mathrm{i}\lambda_0)\psi_0=0$, then \[ -\lambda_0(\mathrm{i}\mathcal{A}_1)\psi_0=\mathcal{A}_0\psi_0, \] so we recover the ``standard" definition of the negative Krein index for first-order star-even operators, \[ k_\mathrm{i}^-(\mathrm{i}\lambda_0)= \mathrm{n}\left(-\lambda_0[\mathrm{i} \mathcal{P}_1'(\mathrm{i}\lambda_0)]|_{\mathbb{E}_{\mathrm{i}\lambda_0}}\right)= \mathrm{n}\left(\mathcal{A}_0|_{\mathbb{E}_{\mathrm{i}\lambda_0}}\right). \] We want the matrix $\mathcal{A}_0|_{\mathbb{E}_{\mathrm{i}\lambda_0}}$ to be positive definite if $\mathbb{E}_{\mathrm{i}\lambda_0}\subset S^\perp$.
If we choose, \[ S\mathrel{\mathop:}= N(\mathcal{A}_0)\oplus\mathop\mathrm{ker}\nolimits(\mathcal{A}_0), \] where $N(\mathcal{A}_0)$ is the finite-dimensional negative subspace of $\mathcal{A}_0$, and $\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)$ is the finite-dimensional kernel, then the fact that $\mathcal{A}_0$ is positive definite on $S^\perp$ implies that if $\mathrm{i}\lambda_0$ is a polynomial eigenvalue whose associated eigenfunction resides in $S^\perp$, then the negative Krein index will be zero. Note that in this case $P_S$ and $P_{S^\perp}$ will be spectral projections. Further note that with this choice of subspace, if a pole of the Krein matrix corresponds to a purely imaginary polynomial eigenvalue, then that eigenvalue will necessarily have positive Krein index. Consequently, all purely imaginary polynomial eigenvalues with negative Krein index will be captured by solving $\mathop\mathrm{det}\nolimits\bm{\mathit{K}}_S(\lambda)=0$. Now, consider the second-order operator \[ \mathcal{P}_2(\lambda)=\mathcal{A}_0+\lambda\mathcal{A}_1+\lambda^2\mathcal{A}_2, \] where $\mathcal{A}_0,\mathcal{A}_2$ are Hermitian, and $\mathcal{A}_1$ is skew-Hermitian. We have \[ -\lambda_0[\mathrm{i} \mathcal{P}_2'(\mathrm{i}\lambda_0)]=-\lambda_0(\mathrm{i}\mathcal{A}_1)+2\lambda_0^2\mathcal{A}_2. \] If $\psi_0$ is an eigenfunction associated with the polynomial eigenvalue, so $\mathcal{P}_2(\mathrm{i}\lambda_0)\psi_0=0$, \[ \left(-\lambda_0(\mathrm{i}\mathcal{A}_1)+2\lambda_0^2\mathcal{A}_2\right)\psi_0=\left(\mathcal{A}_0+\lambda_0^2\mathcal{A}_2\right)\psi_0. \] The negative Krein index can be alternatively defined, \[ k_\mathrm{i}^-(\mathrm{i}\lambda_0)= \mathrm{n}\left(-\lambda_0[\mathrm{i}\mathcal{P}_2'(\mathrm{i}\lambda_0)]|_{\mathbb{E}_{\mathrm{i}\lambda_0}}\right)= \mathrm{n}\left((\mathcal{A}_0+\lambda_0^2\mathcal{A}_2)|_{\mathbb{E}_{\mathrm{i}\lambda_0}}\right).
\] In order for the matrix $(\mathcal{A}_0+\lambda_0^2\mathcal{A}_2)|_{\mathbb{E}_{\mathrm{i}\lambda_0}}$ to be guaranteed positive definite, the eigenspace $\mathbb{E}_{\mathrm{i}\lambda_0}$ must reside in the positive space of the operator $\mathcal{A}_0+\lambda_0^2\mathcal{A}_2$. In the applications we consider, the operator $\mathcal{A}_2$ will be positive definite. In this case, if we again choose, \[ S\mathrel{\mathop:}= N(\mathcal{A}_0)\oplus\mathop\mathrm{ker}\nolimits(\mathcal{A}_0), \] then the operator, \[ P_{S^\perp}\left(\mathcal{A}_0+\lambda_0^2\mathcal{A}_2\right)P_{S^\perp}= P_{S^\perp}\mathcal{A}_0P_{S^\perp}+\lambda_0^2P_{S^\perp}\mathcal{A}_2P_{S^\perp}, \] will be positive definite. Consequently, if $\mathrm{i}\lambda_0$ is a polynomial eigenvalue whose associated eigenfunction resides in $S^\perp$, then the negative Krein index will be zero. \section{The Krein eigenvalues}\label{s:3} Since $\mathcal{P}_n(\lambda)$ is a star-even polynomial operator, the Krein matrix is a self-adjoint meromorphic family of operators in the spectral parameter, $\lambda$. In particular, the Krein matrix is Hermitian for purely imaginary $\lambda$. Henceforth, write $\lambda=\mathrm{i} z$ for $z\in\mathbb{R}$, and write the Krein matrix as \[ \bm{\mathit{K}}_S(z)=\mathcal{P}_n(\mathrm{i} z)|_S- \mathcal{P}_n(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)|_{S}. \] Since the Krein matrix is Hermitian for real $z$, for each value of $z$ (away from its poles) there are $n_S$ real-valued eigenvalues, $r_j(z)$. These eigenvalues of the Krein matrix are called the \textit{Krein eigenvalues}. The Krein eigenvalues are real meromorphic, as are the associated spectral projections. In particular, if the Krein eigenvalues are simple, the associated eigenvectors are real meromorphic. See Kato \cite[Chapter~VII.3]{kato:ptf80} for the details.
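As a quick numerical illustration of these two properties (the Krein matrix is Hermitian at $\lambda=\mathrm{i} z$ for real $z$, hence its Krein eigenvalues are real), the following sketch evaluates the Krein matrix of an arbitrarily chosen star-even test pencil (the matrices are assumptions of the illustration) at several real values of $z$:

```python
import numpy as np

# Hypothetical Hermitian/skew-Hermitian coefficient pair (illustration only).
A0 = np.diag([-1.0, 2.0, 3.0, 4.0])
A1 = np.array([[0., 1., 0., 0.],
               [-1., 0., 0., 0.],
               [0., 0., 0., 1.],
               [0., 0., -1., 0.]])
s, B = np.eye(4)[:, :1], np.eye(4)[:, 1:]   # bases of S and S^perp

def krein_matrix(z):
    P = A0 + 1j * z * A1                    # P_1(i z) is Hermitian for real z
    reduced = B.conj().T @ P @ B
    return (s.conj().T @ P @ s
            - (s.conj().T @ P @ B) @ np.linalg.solve(reduced, B.conj().T @ P @ s))

for z in (0.3, 0.9, 1.7):
    K = krein_matrix(z)
    assert np.allclose(K, K.conj().T)                   # Hermitian
    assert np.allclose(np.linalg.eigvals(K).imag, 0.0)  # real Krein eigenvalues
```

The chosen values of $z$ avoid the poles of the Krein matrix, i.e., the points at which the reduced operator $P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)P_{S^\perp}$ is singular.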
Since \[ \mathop\mathrm{det}\nolimits\bm{\mathit{K}}_S(z)=\prod_{j=1}^{n_S}r_j(z), \] finding the zeros of the determinant of the Krein matrix is equivalent to finding the zero set of each of the Krein eigenvalues. One of the most important properties of the Krein eigenvalues is that the sign of the derivative at a simple zero is related to the Krein index of that polynomial eigenvalue. In order to see this, we start with \begin{equation}\label{e:31a} \bm{\mathit{K}}_S(z)\bm{\mathit{v}}_j(z)=r_j(z)\bm{\mathit{v}}_j(z)\quad\leadsto\quad r_j'(z)=\frac{\bm{\mathit{v}}_j(z)^\mathrm{a}\bm{\mathit{K}}_S'(z)\bm{\mathit{v}}_j(z)}{|\bm{\mathit{v}}_j(z)|^2}. \end{equation} The latter equality is a solvability condition which follows upon noting that both the Krein eigenvalue and its associated eigenvector are meromorphic and consequently have convergent Taylor expansions. If $r_j(z)=0$, then the components of the associated eigenvector correspond to the various basis elements in the subspace $S$; namely, the associated eigenfunction is given by \begin{equation}\label{e:31} \psi=\sum_{k=1}^{n_S}v^j_ks_k+s^\perp,\quad \bm{\mathit{v}}_j=\left(\begin{array}{c}v^j_1\\v^j_2\\\vdots\\v^j_{n_S}\end{array}\right), \end{equation} where the element $s^\perp$ is determined via \cref{e:pp3}, \[ s^\perp=-\sum_{k=1}^{n_S}v^j_k\left(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)s_k. \] We now compute $\bm{\mathit{K}}'(z)$. For the first term in the Krein matrix, \[ \frac{\mathrm{d}}{\mathrm{d} z}\langle s_i,\mathcal{P}_n(\mathrm{i} z)s_j\rangle= \langle s_i,[\mathrm{i}\mathcal{P}'_n(\mathrm{i} z)]s_j\rangle. \] The operator $\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)$ is Hermitian. Differentiating the second term requires repeated applications of the product rule, as well as using the fact that the operator $\mathcal{P}_n(\mathrm{i} z)$ is Hermitian.
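The computation that follows rests on the usual derivative-of-the-inverse identity for the operator family $(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}$. In a finite-dimensional truncation the identity is easy to verify against a finite difference; the sketch below does so for an arbitrary first-order test pencil (the matrices are assumptions of the illustration), for which $\mathrm{i}\mathcal{P}_1'(\mathrm{i} z)=\mathrm{i}\mathcal{A}_1$.

```python
import numpy as np

# Hypothetical Hermitian/skew-Hermitian coefficient pair (illustration only).
A0 = np.diag([-1.0, 2.0, 3.0, 4.0])
A1 = np.array([[0., 1., 0., 0.],
               [-1., 0., 0., 0.],
               [0., 0., 0., 1.],
               [0., 0., -1., 0.]])
B = np.eye(4)[:, 1:]                        # basis of S^perp

def reduced_inverse(z):
    # (P_{S^perp} P_1(i z) P_{S^perp})^{-1}, reduced to the basis B
    return np.linalg.inv(B.conj().T @ (A0 + 1j * z * A1) @ B)

z, h = 0.7, 1e-6
finite_diff = (reduced_inverse(z + h) - reduced_inverse(z - h)) / (2 * h)
iPprime = B.conj().T @ (1j * A1) @ B        # i P_1'(i z) reduced to S^perp
# d/dz M(z)^{-1} = -M(z)^{-1} M'(z) M(z)^{-1}, with M'(z) = i P_1'(i z) here
analytic = -reduced_inverse(z) @ iPprime @ reduced_inverse(z)
assert np.allclose(finite_diff, analytic, atol=1e-6)
```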
Since \[ \frac{\mathrm{d}}{\mathrm{d} z}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}= -(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)] (P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}, \] upon some simplification we can write \[ \begin{aligned} &\frac{\mathrm{d}}{\mathrm{d} z}\langle s_i,P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)s_j\rangle=\\ &\quad\langle s_i,[\mathrm{i}\mathcal{P}'_n(\mathrm{i} z)]P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)s_j\rangle\\ &\qquad+\langle s_i,\mathcal{P}_n(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}[\mathrm{i}\mathcal{P}'_n(\mathrm{i} z)]s_j\rangle\\ &\qquad\quad-\langle s_i,\mathcal{P}_n(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)] (P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)s_j\rangle. \end{aligned} \] The right-hand side has the compact form \[ \frac{\mathrm{d}}{\mathrm{d} z}\langle s_i,P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)s_j\rangle= \langle s_i,(\mathcal{R}+\mathcal{R}^\mathrm{a})s_j\rangle-\langle s_i,\mathcal{S} s_j\rangle, \] where \begin{equation*} \begin{aligned} \mathcal{R}&\mathrel{\mathop:}=\mathcal{P}_n(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}[\mathrm{i}\mathcal{P}'_n(\mathrm{i} z)]\\ \mathcal{S}&\mathrel{\mathop:}=\mathcal{P}_n(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)] (P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z). 
\end{aligned} \end{equation*} In conclusion, the derivative of the Krein matrix is \begin{equation}\label{e:32} \bm{\mathit{K}}'(z)=[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]|_S+\mathcal{S}|_S-(\mathcal{R}+\mathcal{R}^\mathrm{a})|_S, \end{equation} where the operators $\mathcal{R},\mathcal{S}$ are defined above. We now compute the Krein index using our decomposition of an eigenfunction. For the sake of exposition, let us assume that the polynomial eigenvalue is simple. Using the decomposition \cref{e:31} with $\bm{\mathit{K}}_S(z)\bm{\mathit{v}}_j(z)=\bm{\mathit{0}}$, we have \[ [\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]\psi=\sum_{k=1}^{n_S}v_k^j[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]s_k- \sum_{k=1}^{n_S}v_k^j[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)] \left(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)s_k. \] Upon taking the inner product with $\psi$, and using the fact that $\mathcal{P}_n(\mathrm{i} z)$ is Hermitian, \[ \langle\psi,[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]\psi\rangle= \bm{\mathit{v}}_j(z)^\mathrm{a}\left([\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]|_S+\mathcal{S}|_S-(\mathcal{R}+\mathcal{R}^\mathrm{a})|_S\right)\bm{\mathit{v}}_j(z). \] Upon comparing with \cref{e:32} we conclude \[ \langle\psi,[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]\psi\rangle=\bm{\mathit{v}}_j(z)^\mathrm{a}\bm{\mathit{K}}'(z)\bm{\mathit{v}}_j(z), \] where the eigenfunction $\psi$ has the expansion provided for in \cref{e:31}. Going back to \cref{e:31a}, we have that the derivative of the Krein eigenvalue can be expressed in terms of the eigenfunction as \[ r_j'(z)=\frac{\langle\psi,[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]\psi\rangle}{|\bm{\mathit{v}}_j(z)|^2}. \] Going further back to the definition of the negative Krein index, we can conclude the desired result.
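A minimal numerical check of the identity $r_j'(z)=\langle\psi,[\mathrm{i}\mathcal{P}_n'(\mathrm{i} z)]\psi\rangle/|\bm{\mathit{v}}_j(z)|^2$ at a zero of a Krein eigenvalue, using an arbitrarily chosen test pencil with $\mathcal{A}_1=\mathrm{i}\mathcal{I}$ (all matrices here are assumptions of the illustration):

```python
import numpy as np

# Hypothetical 2x2 star-even pencil: A0 Hermitian, A1 = i*I skew-Hermitian.
A0 = np.diag([-1.0, 2.0])
A1 = 1j * np.eye(2)
s, B = np.eye(2)[:, :1], np.eye(2)[:, 1:]   # S = negative space of A0

def r(z):
    # the (here 1x1) Krein matrix, i.e. the single Krein eigenvalue
    P = A0 + 1j * z * A1                    # = A0 - z*I
    reduced = B.conj().T @ P @ B
    K = (s.conj().T @ P @ s
         - (s.conj().T @ P @ B) @ np.linalg.solve(reduced, B.conj().T @ P @ s))
    return K.real.item()

# r(z) = -1 - z vanishes at z0 = -1; the eigenfunction is psi = e1 (so v = 1).
z0, psi = -1.0, np.array([1.0, 0.0])
slope = (r(z0 + 1e-6) - r(z0 - 1e-6)) / 2e-6
inner = (psi.conj() @ (1j * A1) @ psi).real   # <psi, [i P_1'(i z)] psi>
assert abs(slope - inner) < 1e-6              # both equal -1
```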
If $\mathrm{i} z$ is a polynomial eigenvalue with $r_j(z)=0$, then the Krein index is related through the derivative via \[ k_\mathrm{i}^-(\mathrm{i} z)=\begin{cases}0,\quad&zr_j'(z)<0\\1,\quad&zr_j'(z)>0.\end{cases} \] Since our goal is to quickly and easily read off the Krein signature via a graph of the Krein eigenvalues, we will redefine the Krein matrix as \[ \bm{\mathit{K}}_S(z)=-z\left[\mathcal{P}_n(\mathrm{i} z)|_S- \mathcal{P}_n(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)|_{S}\right]. \] This redefinition introduces an artificial zero of the Krein matrix at $z=0$, but in the search for nonzero polynomial eigenvalues this is an unimportant consequence. On the other hand, the Krein eigenvalues for the new matrix are related to those of the original matrix via $r_j(z)\mapsto -zr_j(z)$. Thus, at a zero of the Krein eigenvalue we have the mapping $r_j'(z)\mapsto -zr_j'(z)$, so for the new Krein matrix we have the relationship \[ k_\mathrm{i}^-(\mathrm{i} z)=\begin{cases}0,\quad&r_j'(z)>0\\1,\quad&r_j'(z)<0.\end{cases} \] A positive slope of a Krein eigenvalue at a zero corresponds to a polynomial eigenvalue with positive signature, whereas a negative slope shows that the polynomial eigenvalue has negative Krein signature. If the zero of a Krein eigenvalue is not simple, then the corresponding polynomial eigenvalue has a Jordan chain, and the negative Krein index depends upon the length of the chain, see \cite[Section~2.2]{kapitula:tks10} and the references therein. For example, if $r_j(z)=r_j'(z)=0$ with $r_j''(z)\neq0$, then there will be a Jordan chain of length two; moreover, the negative Krein index associated with the Jordan chain will be one. In general, a zero of order $m$ implies a Jordan chain of length $m$, and the negative Krein index associated with that chain will be roughly half the length of the chain.
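The sign rule for the redefined Krein matrix can be checked directly. In the following sketch, with an arbitrarily chosen $2\times2$ test pencil (an assumption of the illustration, not an example from the text), the polynomial eigenvalue $\lambda=-\mathrm{i}$ has negative Krein signature, and the corresponding Krein eigenvalue of $-z[\cdots]$ crosses zero with negative slope:

```python
import numpy as np

# Hypothetical pencil P(lam) = A0 + lam*(i*I): A0 Hermitian, i*I skew-Hermitian.
A0 = np.diag([-1.0, 2.0])
s, B = np.eye(2)[:, :1], np.eye(2)[:, 1:]   # S = negative space of A0

def krein(z):
    # redefined Krein matrix: -z * [ P(i z)|_S - coupling term ]
    P = A0 - z * np.eye(2)                  # P(i z) = A0 + (i z)(i I) = A0 - z*I
    reduced = B.conj().T @ P @ B
    K = (s.conj().T @ P @ s
         - (s.conj().T @ P @ B) @ np.linalg.solve(reduced, B.conj().T @ P @ s))
    return (-z * K).real.item()

# lam = -i (z0 = -1) is a polynomial eigenvalue with eigenvector e1, and its
# Krein signature is negative since <e1, A0 e1> = -1 < 0.
z0, h = -1.0, 1e-6
assert abs(krein(z0)) < 1e-12
slope = (krein(z0 + h) - krein(z0 - h)) / (2 * h)
assert slope < 0      # negative slope <=> negative Krein signature
```

By contrast, the other polynomial eigenvalue of this pencil, $\lambda=2\mathrm{i}$, has positive signature, and since its eigenvector lies in $S^\perp$ it shows up through the reduced operator rather than as a zero of the Krein eigenvalue.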
We will not provide any more details here, as in our examples the polynomial eigenvalues will be simple. In summary, we have the following result: \begin{theorem}\label{thm:krein} For $n\in\{1,2\}$ consider the star-even polynomial, \[ \mathcal{P}_n(\lambda)=\sum_{j=0}^n\lambda^j\mathcal{A}_j, \] which acts on a Hilbert space, $X$, with inner-product, $\langle\cdot,\cdot\rangle$. Suppose $\mathcal{A}_0$ has compact resolvent. Set $P_{\mathcal{A}_0}:X\mapsto\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)$ to be the spectral projection onto the kernel, and $P_{\mathcal{A}_0}^\perp=\mathcal{I}-P_{\mathcal{A}_0}$. Further suppose the operator coefficients satisfy, \begin{enumerate} \item $\mathrm{n}(\mathcal{A}_0)$ is finite \item for $j=1,2$ the operators, \[ \left(P_{\mathcal{A}_0}^\perp\mathcal{A}_0P_{\mathcal{A}_0}^\perp\right)^{-1} P_{\mathcal{A}_0}^\perp\mathcal{A}_jP_{\mathcal{A}_0}^\perp:\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)^\perp\mapsto\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)^\perp, \] are compact. \end{enumerate} Regarding the Krein matrix, first let $S\subset X$ be a given finite-dimensional subspace, and $P_{S^\perp}:X\mapsto S^\perp$ be the orthogonal projection. The Krein matrix associated with $S$ is, \[ \bm{\mathit{K}}_S(z)=-z\left[\mathcal{P}_n(\mathrm{i} z)|_S- \mathcal{P}_n(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_n(\mathrm{i} z)|_{S}\right]. \] The Krein eigenvalues, $r_j(z)$ for $j=1,\dots,\mathop\mathrm{dim}\nolimits[S]$, are the eigenvalues of the Krein matrix. If $z\in\mathbb{R}$, the Krein eigenvalues are meromorphic. 
Moreover, if $\lambda=\mathrm{i} z$ is a polynomial eigenvalue with $\mathcal{P}_n(\mathrm{i} z)\psi=0$, \begin{enumerate} \item then either $r_j(z)=0$ for at least one $j$, or $\psi\in S^\perp$ \item if $z\in\mathbb{R}$, and if $r_j(z)=0$ for some $j$, then the Krein signature of a semi-simple polynomial eigenvalue is determined by the slope of the graph of the Krein eigenvalue, \[ k_\mathrm{i}^-(\mathrm{i} z)=\begin{cases}0,\quad&r_j'(z)>0\\1,\quad&r_j'(z)<0.\end{cases} \] \end{enumerate} \end{theorem} \begin{remark} Recall that the choice, \[ S=N(\mathcal{A}_0)\oplus\mathop\mathrm{ker}\nolimits(\mathcal{A}_0), \] ensures that all polynomial eigenvalues with negative Krein signature are seen as zeros of one or more Krein eigenvalues. \end{remark} In its general form the Krein matrix appears complicated, without an intuitively understood underlying structure. However, as we shall see in our subsequent examples, the Krein matrix can have intimate connections with dispersion relations, the Hale-Sandstede-Lin method for constructing multi-pulses, etc. \section{First application: modulational instabilities for small amplitude periodic solutions}\label{s:4} For our first application we show how the Krein matrix can be used to understand the existence of instability bubbles, i.e., a curve of unstable spectra which is attached to the imaginary axis, for small spatially periodic waves to dispersive systems. The instabilities will not necessarily be associated with high-frequency (long wavelength) perturbations. Without loss of generality we will assume the spatial period is $2\pi$.
Regarding the existence problem we will assume it is of the form, \begin{equation}\label{e:51} \mathcal{L} u-cu+f(u)=0, \end{equation} where \begin{enumerate} \item $\displaystyle{\mathcal{L}=\sum_{j=0}^Na_{2j}\ell^{2j}\partial_x^{2j}}$ with $\ell,(-1)^Na_{2N}>0$ \item $c\in\mathbb{R}$ is a free parameter (typically the wavespeed) \item $f(u)$ is a smooth nonlinearity with $f(0)=f'(0)=0$. \end{enumerate} The parameter $\ell$ can be adjusted via a rescaling of $x$. The operator $\mathcal{L}$ is self-adjoint under the inner-product, \[ \langle f,g\rangle=\int_0^{2\pi}f(x)\overline{g(x)}\,\mathrm{d} x. \] \begin{remark} The nonlinearity could be more general, $f=f(u,\partial_xu,\dots)$. All that is required is that it be smooth and (at least) quadratic in the arguments near the origin, and that it be unchanged under reversibility, $x\mapsto-x$. \end{remark} We briefly sketch the argument leading to the existence of a family of small spatially periodic solutions. The details can be found in \cite[Theorem~3.15]{haragus:lbc11}. The characteristic polynomial associated with the ordinary differential operator $\mathcal{L}$ is \[ p_{\mathcal{L}}(r,\ell)=\sum_{j=0}^Na_{2j}\ell^{2j}r^{2j}. \] Regarding the characteristic polynomial we assume there is an $\ell_0$ such that, \begin{enumerate} \item $\partial_rp_{\mathcal{L}}(\mathrm{i},\ell_0)\neq0$ \item upon setting the zero amplitude wavespeed, \begin{equation}\label{e:condb} c_0\mathrel{\mathop:}= p_{\mathcal{L}}(\mathrm{i},\ell_0)=\sum_{j=0}^N(-1)^ja_{2j}\ell_0^{2j}, \end{equation} there is no positive real $k\neq1$ such that $p_{\mathcal{L}}(\mathrm{i} k,\ell_0)-c_0=0$. \end{enumerate} There will then exist a family of $2\pi$-periodic solutions, say $U(x)$, with the properties: \begin{enumerate} \item $U(x)=U(-x)$ \item $U(x)=\epsilon A\cos(x)+\mathcal{O}(\epsilon^2)$ for $A>0$ \item $\ell=\ell_0+\mathcal{O}(\epsilon)$ (the $\mathcal{O}(\epsilon)$ terms depend on $A$).
\end{enumerate} If \cref{e:condb} above does not hold, i.e., if there are other purely imaginary roots to $p_{\mathcal{L}}(r,\ell_0)-c_0=0$, then the equations on the center-manifold will still be reversible. However, the dimension of the manifold (equal to the number of purely imaginary roots, counting multiplicity) increases, and since the reduced system is no longer planar it is not clear if there are still periodic (versus quasi-periodic) solutions. The case of an additional pair of purely imaginary roots, $\pm\mathrm{i} q$ with $q>1$, is discussed in \cite[Chapter~4.3.4]{haragus:lbc11}. If $q$ is irrational, or if $q\ge5$, only KAM tori are expected, and consequently only quasi-periodic solutions. In the case of strong resonance, $q=2$, the equations on the center manifold are completely integrable, and there can be periodic orbits, homoclinic orbits, and orbits homoclinic to periodic orbits. The other resonant case of $q=3$ is still open. In conclusion, we can safely assume the existence of small $2\pi$-periodic solutions to \cref{e:51}. We now consider the spectral stability of these spatially periodic solutions. Consider the KdV-like and first-order-in-time Hamiltonian system, \begin{equation}\label{e:52} \partial_t u+\partial_x\left(\mathcal{L} u+f(u)\right)=0. \end{equation} The nonlinearity $f(u)$ satisfies the assumption (c) above, while \[ \mathcal{L} u=\sum_{j=0}^Na_{2j}\partial_x^{2j}u,\quad (-1)^Na_{2N}>0. \] In traveling coordinates, $z\mathrel{\mathop:}= x-ct$, the equation becomes, \[ \partial_tu+\partial_z\left(\mathcal{L} u-cu+f(u)\right)=0,\quad \partial_x^{2j}\mapsto\partial_z^{2j}. \] Upon rescaling of time and space, \[ \tau=\ell t,\quad y=\ell z, \] we have the PDE to be studied, \begin{equation}\label{e:53} \partial_\tau u+\partial_y\left(\mathcal{L} u-cu+f(u)\right)=0, \end{equation} where \[ \mathcal{L} u=\sum_{j=0}^Na_{2j}\ell^{2j}\partial_y^{2j}u,\quad (-1)^Na_{2N}>0.
\] Following the previous discussion, upon setting, \[ c_0\mathrel{\mathop:}= p_{\mathcal{L}}(\mathrm{i},\ell_0), \] where $\ell_0$ is chosen so that $p_{\mathcal{L}}(\mathrm{i} k,\ell_0)-c_0=0$ has no integral solutions for $k>1$, we know there is a family of small $2\pi$-periodic solutions, $U(x)=\mathcal{O}(\epsilon)$, for $0<\epsilon\ll1$. We now consider the spectral stability of such solutions. The linearized problem is, \[ \partial_\tau v+\partial_y\left(\mathcal{L} v-c_0v+f'(U)v\right)=0,\quad |f'(U)|=\mathcal{O}(\epsilon). \] Using separation of variables, $v(y,\tau)=\mathrm{e}^{\lambda\tau}v(y)$, we arrive at the spectral problem, \begin{equation}\label{e:54} \lambda v+\partial_y\left(\mathcal{L} v-c_0v+f'(U)v\right)=0,\quad |f'(U)|=\mathcal{O}(\epsilon). \end{equation} We use a Bloch decomposition to understand the spectral problem, see \cite[Chapter~3.3]{kapitula:sad13}. Writing for $-1/2<\mu\le1/2$, \[ v(y)=\mathrm{e}^{\mathrm{i}\mu y}w(y),\quad w(y+2\pi)=w(y), \] the problem \cref{e:54} becomes, \begin{equation}\label{e:55} \lambda w+(\partial_y+\mathrm{i} \mu)\left(\mathcal{L}_\mu w-c_0w+f'(U)w\right)=0,\quad |f'(U)|=\mathcal{O}(\epsilon), \end{equation} where \[ \mathcal{L}_\mu=\sum_{j=0}^Na_{2j}\ell_0^{2j}(\partial_y+\mathrm{i} \mu)^{2j}. \] Because the underlying wave is even in $x$, it is sufficient to consider $0\le\mu\le1/2$; in particular, if $\lambda$ is an eigenvalue associated with $\mu$, then $\overline{\lambda}$ is an eigenvalue associated with $-\mu$, see \cite[Section~4]{haragus:ots08}. For fixed $\mu$ the spectrum will be discrete, countable, and have an accumulation point only at $\infty$. The full spectrum, which is essential spectra only, will be the union of all the point spectra as $\mu$ is varied over the range. We are henceforth interested only in sideband instabilities, $\mu>0$. Set, \[ \mathcal{A}_0\mathrel{\mathop:}=\mathcal{L}_\mu-c_0+f'(U). 
\] The operator $\mathcal{A}_0$ is self-adjoint on the space of $2\pi$-periodic functions endowed with the natural $L^2[0,2\pi]$ inner product. The invertible operator $\partial_y+\mathrm{i}\mu$ is skew-Hermitian. Since $\mathcal{A}_0$ is self-adjoint with smooth dependence on parameters, each of the eigenvalues of $\mathcal{A}_0$ is smooth in $(\mu,\epsilon)$ \cite{kato:ptf80}. The same can be said of the composition, $(\partial_y+\mathrm{i}\mu)\mathcal{A}_0$, except at possibly the finite number of points where there are Jordan chains. Consequently, we will first consider the spectral problem when $\epsilon=0$. Afterwards, we will make generic statements about what will happen for $\epsilon>0$ small. For $0<\mu\le1/2$ we rewrite the spectral problem in the star-even form, \begin{equation}\label{e:56} \mathcal{A}_0w+\lambda\mathcal{A}_1w=0,\quad \mathcal{A}_1\mathrel{\mathop:}=\left(\partial_y+\mathrm{i}\mu\right)^{-1}. \end{equation} The boundary conditions associated with this problem are periodic, $w(y+2\pi)=w(y)$. First assume $\epsilon=0$, so that $f'(U)\equiv0$. The spectrum for \cref{e:56} is straightforward to compute using a Fourier analysis. Letting $w(y)=\mathrm{e}^{\mathrm{i} ny}$ for $n\in\mathbb{Z}$ we get a sequence of problems, \begin{equation}\label{e:57} d(n,\mu)+\lambda\frac{1}{\mathrm{i}(n+\mu)}=0, \end{equation} where the first term is the dispersion relation associated with the steady-state problem, \[ d(n,\mu)\mathrel{\mathop:}=\sum_{j=0}^N(-1)^ja_{2j}\ell_0^{2j}(n+\mu)^{2j}-c_0. \] We first show that the spectrum of $\mathcal{A}_0$ has a nonzero and finite number of negative eigenvalues for at least some values of $\mu$. Continuing with $\epsilon=0$, the existence assumption implies $d(\pm1,0)=0$. For small $\mu$ we have the expansions, \begin{equation}\label{e:58} d(\pm1,\mu)=\pm\left(2\sum_{j=1}^N(-1)^jja_{2j}\ell_0^{2j}\right)\mu+\mathcal{O}(\mu^2).
\end{equation} Consequently, $d(+1,\mu)d(-1,\mu)<0$ for small $\mu$, so one of $d(\pm1,\mu)$ is negative for small $\mu$. Consequently, $\mathrm{n}(\mathcal{A}_0)\ge1$. The assumption $(-1)^Na_{2N}>0$ implies there is an $N_0$ such that $d(n,\mu)>0$ for $|n|\ge N_0$. Consequently, there can be at most a finite number of negative eigenvalues, so $\mathrm{n}(\mathcal{A}_0)<\infty$. By continuity $\mathrm{n}(\mathcal{A}_0)$ will remain unchanged for $\epsilon>0$ and small. We now construct the Krein matrix, and then use it to analyze the spectrum. Assume there is a sequence $n_1,n_2,\dots,n_q$ such that $d(n,\mu)<0$ for $n\in\{n_1,n_2,\dots,n_q\}$, and $d(n,\mu)>0$ for $n\notin\{n_1,n_2,\dots,n_q\}$. Clearly, $\mathrm{n}(\mathcal{A}_0)=q$. We take as our space $S=N(\mathcal{A}_0)$, \[ S=\mathop\mathrm{span}\nolimits\{\mathrm{e}^{\mathrm{i} n_1y},\mathrm{e}^{\mathrm{i} n_2 y},\dots,\mathrm{e}^{\mathrm{i} n_q y}\}. \] Since \[ P_1(\mathrm{i} z)S=S,\quad P_1(\mathrm{i} z)S^\perp=S^\perp, \] the Krein matrix as described in \cref{thm:krein} collapses to \[ \begin{aligned} \bm{\mathit{K}}_S(z)&=-z\mathcal{P}_1(\mathrm{i} z)|_S\\ &=-z\mathop\mathrm{diag}\nolimits\left(d(n_1,\mu)+\frac{z}{n_1+\mu},\dots,d(n_q,\mu)+\frac{z}{n_q+\mu}\right). \end{aligned} \] The expected poles, which are the eigenvalues of the sandwiched operator, \[ P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)P_{S^\perp}=P_1(\mathrm{i} z)S^\perp, \] are located at $z_n^\mathrm{p}=-(n+\mu)d(n,\mu)$ for $n\notin\{n_1,n_2,\dots,n_q\}$, and are removable singularities. All of the poles are polynomial eigenvalues for the spectral problem. Since they correspond to removable singularities, the polynomial eigenvalues all have positive Krein signature. \begin{remark} The poles are removable when $\epsilon=0$ because $[\mathcal{P}_1(\lambda)S]\cap S^\perp=\{0\}$. In particular, it follows from the fact that the $\epsilon=0$ problem has constant coefficients. 
One expects that for $\epsilon>0,\,[\mathcal{P}_1(\lambda)S]\cap S^\perp$ has a nontrivial intersection. Thus, the expectation is that the poles will no longer be removable for small amplitude waves. \end{remark} The Krein eigenvalues are \[ r_j(z)=-z\left(d(n_j,\mu)+\frac{z}{n_j+\mu}\right),\quad j=1,\dots,q. \] The nonzero zeros of the Krein eigenvalues, \[ z_j^\mathrm{n}=-(n_j+\mu)d(n_j,\mu),\quad j=1,\dots,q, \] satisfy \[ r_j'(z_j^\mathrm{n})=d(n_j,\mu)<0, \] so these zeros correspond to polynomial eigenvalues with negative Krein signature. In conclusion, via Fourier analysis we have located all of the polynomial eigenvalues, and through the Krein eigenvalues we have identified those which have a negative Krein index. \begin{remark} Note that for constant states, $\epsilon=0$, the Krein signature can be directly computed from the dispersion relation. For fixed $\mu$ the Krein eigenvalues are dispersion curves that correspond to polynomial eigenvalues with negative Krein index, and the poles correspond to dispersion curves with positive Krein index. If the two curves intersect, then there is a collision of polynomial eigenvalues with opposite Krein signature. Consequently, for a small amplitude wave the intersection of a Krein eigenvalue with a (potentially) removable singularity of the Krein matrix can be noted without actually computing a Krein eigenvalue. This graphical approach towards spectral stability by looking at the dispersion curves is the one taken by \cite{deconinck:hfi16,kollar:dco19,trichtchenko:sop18}. The Krein matrix approach is more robust in the sense that while it, too, is graphical in nature, it does not necessarily assume that the underlying waves have small amplitude. 
In particular, the smallness assumption allows for an analytic construction of the matrix; however, if the wave has an $\mathcal{O}(1)$ amplitude, then the Krein matrix can still be constructed numerically, and the graphical analysis will still hold for this numerically constructed matrix. \end{remark} When $\epsilon=0$ the wave is spectrally stable, and all of the spectrum is purely imaginary. For $\epsilon>0$ a spectral instability can arise for the small amplitude wave only through the collision of a purely imaginary polynomial eigenvalue with positive Krein index and one with negative Krein index. This collision generically leads to a Hamiltonian-Hopf bifurcation, see \cite[Chapter~7.1.2]{kapitula:sad13} and the references therein. If for a fixed $\mu_0$ there is a polynomial eigenvalue with positive real part, then such polynomial eigenvalues will exist for $\mu$ in a neighborhood of $\mu_0$. If for $\mu_0$ the polynomial eigenvalue with positive real part is simple, then the union of all polynomial eigenvalues for $\mu$ in a neighborhood of $\mu_0$ will form a smooth curve. We will call this curve an instability bubble. In our example any instability bubble will have an $\mathcal{O}(1)$ imaginary part; consequently, such bubbles will not be related to instability curves emanating from the origin, which arise due to a long-wavelength modulational instability. A bubble intersects the imaginary axis, and because of the $\{\lambda,-\overline{\lambda}\}$ reflection symmetry about the imaginary axis, the curve on the left of the imaginary axis is a mirror image of that on the right. The Krein eigenvalues reflect this collision of polynomial eigenvalues with opposite index in one of two possible ways. The first is that a Krein eigenvalue has a double zero at the time of collision, see \cite[Lemma~2.8]{kapitula:tks10}. For small waves this cannot happen, as the explicit form of the Krein eigenvalues shows that all of the zeros are simple for the limiting zero amplitude wave. 
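This simplicity is straightforward to confirm: each Krein eigenvalue of the $\epsilon=0$ problem is the quadratic $r(z)=-z\left(d+z/(n+\mu)\right)$, whose nonzero zero $z^\mathrm{n}=-(n+\mu)d$ satisfies $r'(z^\mathrm{n})=d\neq0$. A minimal numerical sketch of this check (the values of $d$, $n$, $\mu$ below are arbitrary illustrative choices, not taken from any particular model):

```python
# Check, for sample parameter values, that the nonzero zero of the epsilon = 0
# Krein eigenvalue r(z) = -z (d + z/(n + mu)) is simple, with r'(z^n) = d.

def r(z, d, n, mu):
    return -z * (d + z / (n + mu))

d, n, mu = -0.3, 2, 0.25      # d < 0 mimics a negative-signature mode
zn = -(n + mu) * d            # the nonzero zero z^n

# r vanishes at zn ...
assert abs(r(zn, d, n, mu)) < 1e-12

# ... and a central-difference derivative there returns d, so the zero is
# simple whenever d != 0 (central differences are exact for quadratics,
# up to rounding)
h = 1e-6
deriv = (r(zn + h, d, n, mu) - r(zn - h, d, n, mu)) / (2 * h)
assert abs(deriv - d) < 1e-8
```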
As for the other possible collision scenario, recall that when $\epsilon=0$ a zero of a Krein eigenvalue corresponds to a polynomial eigenvalue with negative Krein signature, while all the removable singularities, i.e., polynomial eigenvalues of the operator $P_{S^\perp}P_1(\mathrm{i} z)P_{S^\perp}$, correspond to polynomial eigenvalues with positive Krein signature. If a simple zero is isolated, then the Krein matrix being meromorphic implies, via a winding number calculation, that the zero remains simple for small perturbations. Moreover, the spectral symmetry implies the polynomial eigenvalue must remain purely imaginary. Now, suppose that a simple zero coincides with a simple removable singularity, so when $\epsilon=0$ the winding number is again one. For the problem at hand this situation is realized when a zero of one of the Krein eigenvalues intersects one of the removable singularities, $z_n^\mathrm{p}$. In general, this intersection must be computed numerically. Assume that upon perturbation the singularity is no longer removable; it will, however, remain simple. In this case the invariance of the winding number to small perturbation implies there must now be two zeros. The spectral symmetry implies these correspond to either two purely imaginary polynomial eigenvalues, or a pair of polynomial eigenvalues with nonzero real part. In the former case, the invariance of the HKI to small perturbation implies that one polynomial eigenvalue will have positive Krein signature, whereas the other will have negative Krein signature. The latter case corresponds to the onset of a Hamiltonian-Hopf bifurcation. An analytic argument which leads to the same conclusion is presented in \cite[Section~2.4]{kapitula:tks10}. In conclusion, the total number of bubbles that can form is bounded above by the number of intersections of Krein eigenvalues with poles. 
Supposing that the HKI is fixed for all $\mu$, this leaves open the possibility that the number of bubbles is greater than $K_{\mathop\mathrm{Ham}\nolimits}$. For example, suppose $K_{\mathop\mathrm{Ham}\nolimits}=2$, so that for each $\mu$ there can be at most two polynomial eigenvalues with positive real part. Since there will be two Krein eigenvalues, for each $\mu$ there can be at most two associated bubbles. However, overall there can be more than two bubbles. Suppose there is a sequence $0<\mu_1<\mu_2<\cdots<\mu_N$ for which a Krein eigenvalue intersects a pole. A Hamiltonian-Hopf bifurcation is then possible for $\mu$ near each $\mu_j$, which leaves open the possibility of having up to $N$ bubbles. \begin{remark} More generally, if $k$ polynomial eigenvalues with negative signature coincide with a removable singularity for the Krein matrix of order $\ell$, then upon perturbation the invariance of the winding number implies that $k+\ell$ polynomial eigenvalues will be created via the collision. The invariance of the HKI implies that $k=k_\mathrm{c}+k_\mathrm{i}^-$, where here $k_\mathrm{i}^-$ corresponds to the number of purely imaginary polynomial eigenvalues with negative Krein signature which are close to the unperturbed eigenvalue, and $k_\mathrm{c}$ is the number of polynomial eigenvalues with positive real part which are close to the unperturbed eigenvalue. As for the number of polynomial eigenvalues associated with the order of the removable singularity, $\ell=k_\mathrm{c}+k_\mathrm{i}^+$, where here $k_\mathrm{i}^+$ corresponds to the number of purely imaginary polynomial eigenvalues with positive Krein signature which are close to the unperturbed eigenvalue. \end{remark} \begin{figure} \caption{(color online) Plots of the dispersion relations, $z_n(\mu)$, for the linearization of \cref{e:502} for relevant values of $n$ when $b=-8/15$. 
A dotted curve corresponds to an eigenvalue with negative Krein index, while a solid curve shows an eigenvalue with positive index. Not only is a Hamiltonian-Hopf bifurcation possible for small $\mu$, it is possible for $\mu\sim0.21$ and $\mu\sim0.37$.} \label{f:FifthOrderKdVSpectra} \end{figure} For a particular example, consider the fifth-order KdV-like equation, \begin{equation}\label{e:501} \partial_tu+\partial_x\left(\frac2{15}\partial_x^4u-b\partial_x^2u+\frac32u^2 +\frac12[\partial_xu]^2+u\partial_x^2u\right)=0. \end{equation} This weakly nonlinear long-wave equation arises as an approximation to the classical gravity-capillary water-wave problem \cite{champneys:agi97}. Here $u(x,t)$ is the surface elevation with respect to the underlying normal water height, and $b\in\mathbb{R}$ is the offset of the Bond number (a measure of surface tension) from the value $1/3$. In traveling coordinates, $z=x-ct$, the equation \cref{e:501} becomes \begin{equation}\label{e:502} \partial_tu+\partial_z\left(\frac2{15}\partial_z^4u-b\partial_z^2u-cu+\frac32u^2 +\frac12[\partial_zu]^2+u\partial_z^2u\right)=0. \end{equation} The wavespeed $c$ is a free parameter. To the best of our knowledge the spectral stability of small periodic waves to equation \cref{e:501} has not yet been studied. However, the spectral stability of small spatially periodic waves to the Kawahara equation, which is \cref{e:501} with the last two terms in the open brackets removed, was recently studied by \cite{trichtchenko:sop18}. \begin{figure} \caption{(color online) Plots of the absolute value of the real part of the spectrum for various values of $\mu$ for a wave with approximate amplitude $2.3\times10^{-2}$. The plot on the left is for $\mu$ values near the zero/pole collision point $\mu\sim0.207$, and the plot on the right is for $\mu$ values near the zero/pole collision point $\mu\sim0.368$. 
The $\mu$ value for which the collision occurs is marked by a (red) cross.} \label{f:specamp} \end{figure} First consider the existence problem. As discussed by \cite[Section~4]{sandstede:hfb13} (also see \cite{champneys:agi97}), the fourth-order ODE, \[ \frac2{15}\partial_z^4u-b\partial_z^2u-cu+\frac32u^2 +\frac12[\partial_zu]^2+u\partial_z^2u=0, \] is a reversible Hamiltonian system. The position and momentum variables are \[ q_1=u,\quad q_2=\partial_zu,\quad p_1=-\frac2{15}\partial_z^3u+b\partial_zu-u\partial_zu,\quad p_2=\frac2{15}\partial_z^2u, \] and the (analytic) Hamiltonian is \[ H=-\frac12q_1^3+\frac12cq_1^2+p_1q_2-\frac12bq_2^2+\frac{15}{4}p_2^2+\frac12q_1q_2^2. \] The symplectic matrix for the system is the canonical one. Setting \[ c=c_0\mathrel{\mathop:}=\frac{2}{15}+b, \] the eigenvalues for the linearization of this Hamiltonian system about the origin satisfy \[ r^2=-1,\quad r^2=1+\frac{15}{2}b. \] If $b>-2/15$, then the center-manifold is two-dimensional, and the existence of a family of periodic orbits follows from reversibility. If $b<-2/15$, but $b\neq-2(1+m^2)/15$ for $m=1,2,\dots$ (the non-resonance condition), then one can invoke the Lyapunov center theorem to conclude the existence of a family of small periodic orbits with period close to $2\pi$ (see \cite{buzzi:rhl05,weinstein:nmf73} for a discussion). In either case, the period can be fixed to be $2\pi$ via a rescaling of the spatial variable. We will assume for the sake of exposition that $b=-8/15$, so $c_0=-6/15$. For this value of $b$ the ODE system is not in resonance. \begin{figure} \caption{(color online) Plots of the Krein eigenvalues for the trivial state (left figures) and for a wave with approximate amplitude $2.3\times10^{-2}$ (right figures). The top two figures show the situation at the zero/pole collision point, $\mu\sim0.368$. The (red) circles correspond to polynomial eigenvalues, and the (red) cross is the spurious zero of the Krein eigenvalues. 
The (green) vertical lines are poles of the Krein matrix. In each quadrant the bottom figure is a blow-up of the top figure near the polynomial eigenvalues of interest. Upon perturbation the zeros of the Krein eigenvalues remain purely real.} \label{f:KreinEvalCollide1} \end{figure} We now consider the spectral stability of the periodic wave. For the unperturbed problem the operator $\mathcal{A}_0$ is, \[ \mathcal{A}_0=\frac2{15}(\partial_z+\mathrm{i}\mu)^4+\frac8{15}(\partial_z+\mathrm{i}\mu)^2+\frac6{15}, \] so the dispersion relation is, \[ d(n,\mu)=\frac2{15}(n+\mu)^4-\frac8{15}(n+\mu)^2+\frac6{15}. \] It is straightforward to check that $d(n,\mu)>0$ for $n\notin\{-2,+1\}$. Moreover, we have $d(+1,\mu)<0$, and \[ d(-2,\mu)\begin{cases}>0,\quad&0<\mu<\mu_{\mathrm{ch}}\\ <0,\quad&\mu_{\mathrm{ch}}<\mu<1/2,\end{cases} \] where \[ \mu_{\mathrm{ch}}\mathrel{\mathop:}=2-\sqrt{3}\sim0.26795. \] Consequently, \[ \mathrm{n}(\mathcal{A}_0)=\begin{cases}1,\quad&0<\mu<\mu_{\mathrm{ch}}\\ 2,\quad&\mu_{\mathrm{ch}}<\mu<1/2.\end{cases} \] Since the negative index of an invertible operator is unchanged for small perturbations, we know there is a $0<\mu_0\ll1$ such that if $\mu$ is in one of two intervals, \[ \mu\in\left(\mu_0,\mu_{\mathrm{ch}}-\mu_0\right)\cup \left(\mu_{\mathrm{ch}}+\mu_0,1/2\right), \] then $\mathrm{n}(\mathcal{A}_0)$ remains unchanged for sufficiently small $\epsilon$. Going back to equation \cref{e:11}, we then know that for small $\epsilon$ the HKI is, \[ k_\mathrm{r}+k_\mathrm{c}+k_\mathrm{i}^-=\begin{cases}1,\quad&\mu_0<\mu<\mu_{\mathrm{ch}}\\ 2,\quad&\mu_{\mathrm{ch}}<\mu<1/2.\end{cases} \] If there are instability bubbles for the perturbed problem, there can be at most one for $\mu<\mu_{\mathrm{ch}}$, and at most two for $\mu_{\mathrm{ch}}<\mu<1/2$. For $0\le\mu<\mu_0$ a curve of unstable spectra may arise from the origin. 
We will not consider that here, but an example calculation for the KdV with general nonlinearity is provided in \cite[Section~4]{haragus:ots08}. \begin{remark} The transition point in the index, $\mu_{\mathrm{ch}}$, depends on $\epsilon$. For our purposes it is sufficient to consider how the number of instability bubbles depends on the change in $\mathrm{n}(\mathcal{A}_0)$ between the two $\mu$-intervals without worrying about the precise boundary between the intervals. \end{remark} \begin{figure} \caption{(color online) Plots of the Krein eigenvalues for the trivial state (left figures) and for a wave with approximate amplitude $2.3\times10^{-2}$ (right figures). The top two figures show the situation at the zero/pole collision point, $\mu=0.3585$. The (red) circles correspond to polynomial eigenvalues, and the (red) cross is the spurious zero of the Krein eigenvalues. The (green) vertical lines are poles of the Krein matrix. In each quadrant the bottom figure is a blow-up of the top figure near the polynomial eigenvalues of interest. Note the existence of a Hamiltonian-Hopf bifurcation upon the perturbation.} \label{f:KreinEvalCollide2} \end{figure} A picture of the dispersion curves for the full problem, \[ z_n(\mu)=-(n+\mu)d(n,\mu),\quad n\in\mathbb{Z}, \] is provided in \cref{f:FifthOrderKdVSpectra} for relevant values of $n$. If the curve is dotted, then for fixed $\mu$ that corresponds to a polynomial eigenvalue with negative Krein signature. The solid curves correspond to polynomial eigenvalues with positive Krein signature. There are two possible values for which a bubble may appear: \[ z_{-2}(\mu)=z_{+1}(\mu)\quad\leadsto\quad \mu=\frac1{10}\left(5-\sqrt{5(2\sqrt{129}-21)}\right)\sim0.20711, \] and \[ z_0(\mu)=z_{-2}(\mu)\quad\leadsto\quad \mu=1-\frac15\sqrt{10}\sim0.36754. \] Consequently, for small waves there are at most two instability bubbles. 
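These two intersection values can be recovered directly from the dispersion curves. The following self-contained numerical sketch (pure Python) locates them by bisection; the brackets $[0.1,0.3]$ and $[0.3,0.45]$ are chosen by inspection of the dispersion curves in \cref{f:FifthOrderKdVSpectra}:

```python
def d(n, mu):
    # dispersion relation for the fifth-order KdV-like example (epsilon = 0)
    return (2.0 / 15.0) * (n + mu) ** 4 - (8.0 / 15.0) * (n + mu) ** 2 + 6.0 / 15.0

def z(n, mu):
    # dispersion curves z_n(mu) = -(n + mu) d(n, mu)
    return -(n + mu) * d(n, mu)

def bisect(f, a, b, tol=1e-12):
    # standard bisection; assumes f(a) and f(b) have opposite signs
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# intersection z_{-2}(mu) = z_{+1}(mu)
mu1 = bisect(lambda mu: z(-2, mu) - z(1, mu), 0.1, 0.3)
# intersection z_0(mu) = z_{-2}(mu)
mu2 = bisect(lambda mu: z(0, mu) - z(-2, mu), 0.3, 0.45)

print(round(mu1, 5), round(mu2, 5))  # 0.20711 0.36754
```

The computed roots agree with the closed-form values $\mu\sim0.20711$ and $\mu\sim0.36754$ above.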
For a wave with approximate amplitude $2.3\times10^{-2}$ we have the spectral magnitude plots of \cref{f:specamp}. There we show the maximal value of the absolute value of the real part of a polynomial eigenvalue for various values of $\mu$ near the predicted bifurcation points, $\mu\sim0.207$ and $\mu\sim0.368$. In both cases the range of $\mu$ values for which there is an instability is $\mathcal{O}(10^{-3})$. We conclude by showing plots of the Krein eigenvalues for the situation in the right panel, $\mu\sim0.36$. In \cref{f:KreinEvalCollide1} we see a plot of the Krein eigenvalues for $\mu\sim0.368$. The panel on the left shows the plot for the trivial state, and the panel on the right shows the plot for a small wave. Since this value of $\mu$ is not associated with an instability (see right panel of \cref{f:specamp}), the zeros of the Krein eigenvalues are purely real. One of the zeros corresponds to a polynomial eigenvalue with negative Krein index. In \cref{f:KreinEvalCollide2} we see a plot of the Krein eigenvalues for $\mu=0.3585$. The panel on the left shows the plot for the trivial state, and the panel on the right shows the plot for a small wave. Here there is not a zero/pole collision for the Krein eigenvalues. On the bottom left figure we see a polynomial eigenvalue with negative Krein signature, and a removable singularity which corresponds to a polynomial eigenvalue with positive Krein signature. For $\epsilon>0$ a zero of the Krein eigenvalue emerges from the pole (e.g., see the bottom right figure in \cref{f:KreinEvalCollide1}), and this zero corresponds to a polynomial eigenvalue with positive Krein signature. As $\epsilon$ increases these two zeros of the Krein eigenvalue collide, and leave the real axis through a saddle-node bifurcation. Since the zeros of the Krein eigenvalues now have nonzero imaginary part, for this value of $\mu$ there is a spectral instability (see right panel of \cref{f:specamp}). 
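As a consistency check on the index computation for this example, the count $\mathrm{n}(\mathcal{A}_0)$ at $\epsilon=0$ can be confirmed by simply counting the Fourier modes with $d(n,\mu)<0$. A sketch (the truncation $|n|\le 50$ is harmless, since $d(n,\mu)\to+\infty$ as $|n|\to\infty$):

```python
import math

def d(n, mu):
    # dispersion relation for the fifth-order KdV-like example (epsilon = 0)
    return (2.0 / 15.0) * (n + mu) ** 4 - (8.0 / 15.0) * (n + mu) ** 2 + 6.0 / 15.0

def negative_index(mu, N0=50):
    # n(A_0) at epsilon = 0: the number of Fourier modes with d(n, mu) < 0
    return sum(1 for n in range(-N0, N0 + 1) if d(n, mu) < 0)

mu_ch = 2.0 - math.sqrt(3.0)  # ~ 0.26795

# one negative direction (n = +1) below mu_ch, two (n = +1 and n = -2) above it
assert negative_index(0.5 * mu_ch) == 1
assert negative_index(0.5 * (mu_ch + 0.5)) == 2
```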
\section{Application: location of small eigenvalues}\label{s:5} The goal here is to use the Krein matrix to locate small polynomial eigenvalues. We start by assuming that the operator $\mathcal{A}_0$ has a collection of arbitrarily small eigenvalues. These eigenvalues may arise, e.g., when looking at: \begin{enumerate} \item modulational stability problems for spatially periodic waves, \item sideband (transverse) stability problems for uni-directional waves, \item interaction stability problems for multi-pulses. \end{enumerate} For multi-pulse problems, the stability of multi-pulses that arise from a stable single pulse is determined solely by the location of eigenvalues near the origin \cite{sandstede:som98}. These eigenvalues reflect interaction properties of the individual pulses which comprise a multi-pulse. Multi-pulses have been a topic of interest since at least \cite{Evans1982}, which proves the existence of a double pulse traveling wave in nerve axon equations. A summary of early results related to multi-pulses can be found in \cite[Section 1]{sandstede:som98}. \begin{assumption}\label{ass:smalleval} For each $\epsilon>0$ there exist $N$ eigenvalues of $\mathcal{A}_0=\mathcal{A}_0(\epsilon)$, say $\mu_1,\dots,\mu_N$, which satisfy $|\mu_j|<\epsilon$. The number $N$ is independent of $\epsilon$. Moreover, there exists a positive constant $C$, independent of $\epsilon$, such that all other eigenvalues of $\mathcal{A}_0$ satisfy $|\mu|>C$. \end{assumption} We will let $s_1,\dots,s_N$ be the normalized set of associated eigenfunctions, \[ \mathcal{A}_0s_j=\mu_js_j,\quad\langle s_j,s_k\rangle=\delta_{jk}, \] and the subspace $S$ used in the construction of the Krein matrix will be the spectral subspace, $S=\mathop\mathrm{span}\nolimits\{s_1,\dots,s_N\}$. Letting $P_S$ represent the spectral projection for $\mathcal{A}_0$, we have, \[ P_S\mathcal{A}_0=\mathcal{A}_0P_S,\quad P_{S^\perp}\mathcal{A}_0=\mathcal{A}_0P_{S^\perp}. 
\] The Krein matrix, $\bm{\mathit{K}}_S(z)$ for $z=-\mathrm{i}\lambda$, associated with this subspace is given in \cref{thm:krein}, and the eigenvalues for the star-even operator are found by solving, \begin{equation}\label{e:51aa} \bm{\mathit{K}}_S(z)\bm{\mathit{x}}=0. \end{equation} We start with a preliminary result concerning the part of the Krein matrix which generates poles. \begin{lemma}\label{l:51} There exists a constant $C_0>0$, independent of $\epsilon$, such that for $n=1,2$ and $|z|<1/C_0$ the operator $P_{S^\perp}P_n(\mathrm{i} z)P_{S^\perp}$ is invertible. Moreover, for $|z|$ sufficiently small there is the expansion, \[ \left(P_{S^\perp}P_n(\mathrm{i} z)P_{S^\perp}\right)^{-1}= \left[\mathcal{I}+\mathcal{O}(|z|)\right]\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}. \] \end{lemma} \begin{proof} First suppose $n=1$. Then \[ P_{S^\perp}P_1(\mathrm{i} z)P_{S^\perp}=P_{S^\perp}\mathcal{A}_0P_{S^\perp} \left[\mathcal{I}+z \left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)P_{S^\perp}\right]. \] The operator $P_{S^\perp}\mathcal{A}_0P_{S^\perp}$ is invertible with bounded inverse, as $S$ is a spectral subspace associated with the small eigenvalues. Since $\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)P_{S^\perp}$ is a compact operator, it too is uniformly bounded. Setting, \[ C_0=\|\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)P_{S^\perp}\|, \] the operator $\mathcal{I}+z \left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)P_{S^\perp}$ is invertible for $|z|<1/C_0$. Moreover, a first-order Taylor expansion provides, \[ \left(\mathcal{I}+z \left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)P_{S^\perp}\right)^{-1}=\mathcal{I}+\mathcal{O}(|z|). \] Inverting the factorization and using this expansion yields the desired result. 
If $n=2$ a similar argument gives the same result once one writes, \[ P_{S^\perp}P_2(\mathrm{i} z)P_{S^\perp}=P_{S^\perp}\mathcal{A}_0P_{S^\perp} \left[\mathcal{I}+z \left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\left(\mathrm{i}\mathcal{A}_1-z\mathcal{A}_2\right)P_{S^\perp}\right], \] and then notes that by assumption $\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{A}_2P_{S^\perp}$ is also compact. \end{proof} Since $P_{S^\perp}P_n(\mathrm{i} z)P_{S^\perp}$ is invertible for small $z$, we know through the argument in \cref{s:21} that the following holds: \begin{corollary}\label{cor:51} $\lambda_0$ is a small polynomial eigenvalue if and only if $\mathop\mathrm{det}\nolimits\bm{\mathit{K}}_S(z_0)=0$ for $z_0=-\mathrm{i}\lambda_0$. \end{corollary} We now use the result of \cref{l:51} to find an approximation of the Krein matrix for small $z$. \begin{lemma}\label{l:52} Suppose that $n=1$. The Krein matrix is analytic for $|z|<1/C_0$. Moreover, if $|z|$ is sufficiently small the Krein matrix has the expansion, \begin{equation}\label{KSz} \begin{aligned} \bm{\mathit{K}}_S(z)&=-z\Big[\mathop\mathrm{diag}\nolimits(\mu_1,\dots,\mu_N)+\overline{z}\left(\mathrm{i}\mathcal{A}_1|_S\right)\\ &\qquad- \overline{z}^2\left\{- \mathcal{A}_1P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{A}_1|_S\right\} +\mathcal{O}(|z|^3)\Big]. \end{aligned} \end{equation} \end{lemma} \begin{proof} Analyticity follows from the fact that $P_{S^\perp}P_1(\mathrm{i} z)P_{S^\perp}$ is invertible for $|z|<1/C_0$. 
Regarding the expansion, we first note that for the first term in the Krein matrix, \[ \left(\mathcal{P}_1(\mathrm{i} z)|_S\right)_{jk}=\langle s_j,[\mathcal{A}_0+z(\mathrm{i}\mathcal{A}_1)]s_k\rangle= \mu_k\langle s_j,s_k\rangle+\overline{z}\langle s_j,(\mathrm{i}\mathcal{A}_1)s_k\rangle, \] so upon using the fact that the eigenfunctions for $\mathcal{A}_0$ form an orthonormal basis, \[ \mathcal{P}_1(\mathrm{i} z)|_S=\mathop\mathrm{diag}\nolimits(\mu_1,\dots,\mu_N)+\overline{z}\left(\mathrm{i}\mathcal{A}_1|_S\right). \] Regarding the second term of the Krein matrix, first recall that we saw in the proof of \cref{l:51} that for small $|z|$, \[ \begin{aligned} P_{S^\perp}P_1(\mathrm{i} z)P_{S^\perp}&=P_{S^\perp}\mathcal{A}_0P_{S^\perp} \left[\mathcal{I}+z \left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)P_{S^\perp}\right]\\ &=P_{S^\perp}\mathcal{A}_0P_{S^\perp}\left[\mathcal{I}+\mathcal{O}(|z|)\right], \end{aligned} \] so upon using a Taylor expansion in $z$, \[ \left(P_{S^\perp}P_1(\mathrm{i} z)P_{S^\perp}\right)^{-1}= \left[\mathcal{I}+\mathcal{O}(|z|)\right]\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}. \] Second, since $P_{S^\perp}$ is a spectral projection, for any $s\in S$, \[ P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)s=zP_{S^\perp}(\mathrm{i}\mathcal{A}_1)s. 
\] Combining these two facts, \[ \begin{aligned} &\left(\mathcal{P}_1(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)|_{S}\right)_{jk} \\ &\qquad\qquad= \langle s_j,\mathcal{P}_1(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)s_k\rangle \\ &\qquad\qquad=\langle P_{S^\perp}\mathcal{P}_1(-\mathrm{i}\overline{z})^\mathrm{a} s_j,(P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)s_k\rangle\\ &\qquad\qquad=\langle\overline{z}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)s_j, z\left[\mathcal{I}+\mathcal{O}(|z|)\right]\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1} P_{S^\perp}(\mathrm{i}\mathcal{A}_1)s_k\rangle\\ &\qquad\qquad=\overline{z}^2\langle s_j,(\mathrm{i}\mathcal{A}_1)P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1} P_{S^\perp}(\mathrm{i}\mathcal{A}_1)s_k\rangle+\mathcal{O}(|z|^3), \end{aligned} \] which provides, \[ \begin{aligned} &\mathcal{P}_1(\mathrm{i} z)P_{S^\perp}(P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)P_{S^\perp})^{-1}P_{S^\perp}\mathcal{P}_1(\mathrm{i} z)|_{S}=\\ &\qquad\qquad\overline{z}^2 (\mathrm{i}\mathcal{A}_1)P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}(\mathrm{i}\mathcal{A}_1)|_S +\mathcal{O}(|z|^3). \end{aligned} \] The final result follows upon combining the above two calculations. \end{proof} Upon setting $\gamma=\mathrm{i}\overline{z}$ the bracketed part of the Krein matrix \cref{KSz} is approximated by a quadratic star-even polynomial matrix, \[ \mathop\mathrm{diag}\nolimits(\mu_1,\dots,\mu_N)+\gamma\left(\mathcal{A}_1|_S\right)+ \gamma^2 \left[-\mathcal{A}_1P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{A}_1|_S\right]. 
\] Since $|\mu_j|=\mathcal{O}(\epsilon)$, the polynomial eigenvalues for this matrix will be $\mathcal{O}(\epsilon^{1/2})$; consequently, the smallness assumption of \cref{l:51} regarding the polynomial eigenvalues is satisfied. Moreover, to leading order the polynomial eigenvalues are found by ignoring the middle term, so the small polynomial eigenvalues are found by solving the generalized linear eigenvalue problem, \begin{equation}\label{e:52aa} \mathop\mathrm{diag}\nolimits(\mu_1,\dots,\mu_N)\bm{\mathit{v}}=\alpha \left[-\mathcal{A}_1P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{A}_1|_S\right]\bm{\mathit{v}},\quad \alpha=-\gamma^2=\overline{z}^2. \end{equation} In conclusion, the $N$ small eigenvalues for $\mathcal{A}_0$ will generate $2N$ small polynomial eigenvalues, and to leading order these small polynomial eigenvalues are realized as the eigenvalues for the generalized eigenvalue problem \cref{e:52aa}. Since $\mathop\mathrm{det}\nolimits\bm{\mathit{K}}_S(\gamma)$ is analytic, and the winding number is invariant under small perturbations, the result is robust; in other words, we can conclude that there will be precisely $2N$ small polynomial eigenvalues for $\mathcal{P}_1(\mathrm{i} z)$, and these polynomial eigenvalues will be $\mathcal{O}(\epsilon^{1/2})$. \begin{remark} If $S=\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)$, then under the assumption $\mathcal{A}_1|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}$ is the zero matrix, \[ -\mathcal{A}_1P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{A}_1|_S= -\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}, \] which is precisely the constraint matrix associated with the Hamiltonian-Krein index calculation for linear star-even problems, see equation \cref{e:11}. \end{remark} If $n=2$, then an argument similar to that provided for \cref{l:52} provides the approximate Krein matrix for small $|z|$. 
The details of the proof will be left for the interested reader. \begin{lemma}\label{l:53} Suppose that $n=2$. If $|z|$ is sufficiently small the Krein matrix can be written, \[ \begin{aligned} &\bm{\mathit{K}}_S(z)=-z\Big[\mathop\mathrm{diag}\nolimits(\mu_1,\dots,\mu_N)+\overline{z}\left(\mathrm{i}\mathcal{A}_1|_S\right)\\ &\qquad\qquad -\overline{z}^2\left(\mathcal{A}_2- \mathcal{A}_1P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{A}_1\right)|_S +\mathcal{O}(|z|^3)\Big]. \end{aligned} \] \end{lemma} \begin{remark} If $S=\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)$, then under the assumption $\mathcal{A}_1|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}$ is the zero matrix, \[ \left(\mathcal{A}_2- \mathcal{A}_1P_{S^\perp}\left(P_{S^\perp}\mathcal{A}_0P_{S^\perp}\right)^{-1}P_{S^\perp}\mathcal{A}_1\right)|_S= \left(\mathcal{A}_2- \mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right)|_{\mathop\mathrm{ker}\nolimits(\mathcal{A}_0)}, \] which is precisely the constraint matrix associated with the Hamiltonian-Krein index calculation for quadratic star-even problems, see equation \cref{e:12}. \end{remark} \section{Example: suspension bridge equation}\label{s:6} Motivated by observations of traveling waves on suspension bridges, McKenna and Walter \cite{McKenna1990} proposed the model, \begin{equation}\label{susp} \partial_t^2u + \partial_x^4u + u^+ - 1 = 0, \end{equation} to describe waves propagating on an infinitely long suspended beam, where $u^+ = \max(u, 0)$. To reduce the complexity due to the nonsmooth term $u^+$, Chen and McKenna \cite{Chen1997} introduced the regularized equation, \begin{equation}\label{susp2} \partial_t^2u + \partial_x^4u + \mathrm{e}^{u-1} - 1 = 0. \end{equation} Making the change of variables $u - 1 \mapsto u$ in \cref{susp2}, so that localized solutions will decay to a baseline of 0, we will consider the equation, \begin{equation}\label{susp3} \partial_t^2u + \partial_x^4u + \mathrm{e}^{u} - 1 = 0. 
\end{equation} Writing this in a co-moving frame with speed $c$ by letting $\xi = x - ct$, equation \cref{susp3} becomes \begin{equation}\label{suspc} \partial_t^2u - 2 c\partial_{xt}^2u +\partial_x^4u + c^2\partial_x^2u + \mathrm{e}^{u} - 1 = 0, \end{equation} where we have renamed the independent variable back to $x$. An equilibrium solution to \cref{suspc} satisfies the ODE, \begin{equation}\label{eqODE} \partial_x^4u + c^2\partial_x^2u + \mathrm{e}^{u} - 1 = 0. \end{equation} Smets and van den Berg \cite[Theorem 11]{Smets2002} prove the existence of a localized, symmetric solution $U(x)$ to \cref{eqODE} for almost all wavespeeds $c \in (0, \sqrt{2})$. Van den Berg et al. \cite[Theorem~1]{Berg2018} use a computer-assisted proof technique to show existence of such solutions to \cref{eqODE} for all speeds $c$ with $c^2 \in [0.5, 1.9]$. Equation \cref{eqODE} can be written as a first-order system in the standard way as \begin{equation}\label{suspsystem} Y' = F(Y; c), \end{equation} where $Y = (y_1, y_2, y_3, y_4) = (u, \partial_x u, \partial_x^2 u, \partial_x^3 u)$ and $F: \mathbb{R}^4 \times \mathbb{R} \rightarrow \mathbb{R}^4$, given by \begin{equation}\label{suspF} F(y_1, y_2, y_3, y_4; c) = (y_2, y_3, y_4, -c^2 y_3 - \mathrm{e}^{y_1} + 1), \end{equation} is smooth. Furthermore, $F$ has the reversible symmetry $F(R(Y)) = -R(F(Y))$, where $R: \mathbb{R}^4 \rightarrow \mathbb{R}^4$ is the standard reversor operator defined by \[ R(y_1, y_2, y_3, y_4) = (y_1, -y_2, y_3, -y_4). \] Equation \cref{suspsystem} is Hamiltonian with energy $H:\mathbb{R}^4 \times \mathbb{R} \rightarrow \mathbb{R}$ given by \begin{equation}\label{suspH} H(Y; c) = y_2 y_4 - \frac{1}{2}y_3^2 + \frac{c^2}{2}y_2^2 + \mathrm{e}^{y_1} - y_1. 
\end{equation} We note that for all $c \in (0, \sqrt{2})$, $Y = 0$ is a hyperbolic equilibrium of \cref{suspsystem}, and the spectrum of $DF(0; c)$ is the quartet of eigenvalues \begin{align}\label{specA00} \mu = \pm \sqrt{\frac{-c^2 \pm \sqrt{c^4 - 4}}{2} } = \pm \alpha \pm \mathrm{i}\beta, \end{align} for $\alpha, \beta > 0$. Thus the equilibrium at 0 has a two-dimensional stable manifold $W^s(0; c)$ and two-dimensional unstable manifold $W^u(0; c)$. We take the following hypothesis concerning the existence of a localized, symmetric, primary pulse solution to \cref{suspsystem}. \begin{hypothesis}\label{Uexistshyp} For some $c_0 \in (0, \sqrt{2})$, there exists a nontrivial, symmetric homoclinic orbit solution $Y(x; c_0) \in W^s(0; c_0) \cap W^u(0; c_0) \subset H^{-1}(0; c_0)$ to \cref{suspsystem}. Furthermore, the stable manifold $W^s(0; c_0)$ and the unstable manifold $W^u(0; c_0)$ intersect transversely in $H^{-1}(0; c_0)$ at $Y(0; c_0)$. \end{hypothesis} We have the following result, which proves the existence of homoclinic orbits $Y(x; c)$ for $c$ near $c_0$. \begin{lemma}\label{lemma:cinterval} Assume \cref{Uexistshyp}. Then there exists an open interval $(c_-, c_+)$ containing $c_0$ such that for all $c \in (c_-, c_+)$ the stable and unstable manifolds $W^s(0; c)$ and $W^u(0; c)$ have a one-dimensional transverse intersection in $H^{-1}(0; c)$ which is a homoclinic orbit $Y(x; c)$. $Y(x; c)$ is symmetric with respect to the standard reversor operator $R$, and the map $c \mapsto Y(x; c)$ from $(c_-, c_+)$ to $C(\mathbb{R}, \mathbb{R}^4)$ is smooth. \end{lemma} \begin{proof} Briefly, $Y(0; c_0) \neq 0$, and it follows from the form of the Hamiltonian in \cref{suspH} that $\nabla_Y H(Y(0; c_0); c_0) \neq 0$. By the implicit function theorem, for $c$ close to $c_0$, the 0-level set $H^{-1}(0; c)$ contains a smooth 3-dimensional manifold $K(c)$, with $K(c_0)$ containing $Y(0; c_0)$. 
The result follows from the transverse intersection of $W^s(0; c_0)$ and $W^u(0; c_0)$ in $K(c_0) \subset H^{-1}(0; c_0)$, the smoothness of $F$, and the implicit function theorem. Symmetry with respect to the reversor $R$ follows from symmetry of $Y(0; c_0)$ and the reversibility of \cref{suspsystem}. \end{proof} \begin{remark}We can choose $(c_-, c_+)$ to be the maximal open interval for which \cref{lemma:cinterval} holds. Given the existence results of \cite{Smets2002,Berg2018} and our own numerical analysis, it is likely that $(c_-, c_+) = (0, \sqrt{2})$. \end{remark} It follows from the stable manifold theorem that for $c \in (c_-, c_+)$, $Y(x; c)$ is exponentially localized, i.e. for any $\epsilon > 0$ there is a constant $C$ such that \begin{align}\label{Yexploc} |Y(x; c)| &\leq C e^{-(\alpha - \epsilon)|x|} && x \in \mathbb{R}, \end{align} where $\alpha$ depends on $c$ and is given by \cref{specA00}. In the next lemma, we prove that $\partial_c Y(x; c)$ is also exponentially localized. \begin{lemma}\label{lemma:Ycexploc} The function $\partial_c Y(x; c)$ is exponentially localized, i.e. for each $c \in (c_-, c_+)$ and $\epsilon > 0$ there is a constant $C$ so that \begin{align}\label{Ycexploc} |\partial_c Y(x; c)| &\leq C e^{-(\alpha - \epsilon)|x|} && x \in \mathbb{R}. \end{align} \begin{proof} Fix $c \in (c_-, c_+)$. Since $Y(x; c)$ solves equation \cref{suspsystem}, $Y(x; c) \in C^1(\mathbb{R}, \mathbb{R}^4)$. Differentiating \cref{suspsystem} with respect to $c$, which we can do by \cref{lemma:cinterval}, we have \begin{equation}\label{Ycprime} Y_c'(x; c) = F_Y(Y(x;c); c) Y_c(x; c) + F_c(Y(x;c); c). \end{equation} It follows from the form of $F$ given in \cref{suspF} and \cref{Yexploc} that $F_c(Y(x;c); c)$ is exponentially localized, i.e. for each $\epsilon > 0$ there is a constant $C$ with \begin{align}\label{Fcexploc} |F_c(Y(x;c); c)| &\leq C e^{-(\alpha - \epsilon)|x|} && x \in \mathbb{R}.
\end{align} Define the linear operator $\mathcal{L}$ by \begin{equation}\label{suspdefL} \mathcal{L}: C^1(\mathbb{R}, \mathbb{R}^4) \to C^0(\mathbb{R}, \mathbb{R}^4),\quad Z \mapsto \mathcal{L} Z = \frac{dZ}{dx} - F_Y(Y(x;c); c) Z. \end{equation} By equation \cref{Ycprime}, $F_c(Y(x;c); c) \in \mathop\mathrm{ran}\nolimits \mathcal{L}$. Since $DF(0; c)$ is hyperbolic, \cite[Lemma~4.2]{Palmer1984} and the roughness theorem for exponential dichotomies \cite{Coppel1978} imply that $\mathcal{L}$ is Fredholm with index 0. By \cref{Uexistshyp}, we have $\mathop\mathrm{ker}\nolimits \mathcal{L} = \mathop\mathrm{span}\nolimits\{Y'(x; c)\}$. Thus the set of all bounded solutions to \cref{Ycprime} is $\{Y_c(x; c) + \mathbb{R} Y'(x; c)\}$. Next, we recast the problem in an exponentially weighted space. Choose any $\epsilon \in (0,\alpha)$ and let $\eta(x)$ be a standard mollifier function \cite[Section~C.5]{Evans2010}; we then consider the substitution \begin{equation}\label{defYZ} Y_c(x; c) = Z(x; c) e^{-(\alpha - \epsilon)r(x)} \end{equation} with $r(x) = \eta(x) * |x|$. Note that $r(x)$ is smooth and that $r(x) = |x|$, hence $r'(x) = \mathop\mathrm{sgn}\nolimits(x)$, for $|x| > 1$. Substituting \cref{defYZ} into \cref{Ycprime} and simplifying, we obtain the weighted equation \begin{equation}\label{Zcprime} Z'(x; c) = [F_Y(Y(x;c); c) + (\alpha - \epsilon) r'(x) ] Z(x; c) + e^{(\alpha - \epsilon)r(x)} F_c(Y(x;c); c). \end{equation} By \cref{Fcexploc} and the definition of $r(x)$, the function $e^{(\alpha - \epsilon)r(x)} F_c(Y(x;c); c)$ is bounded. Define the weighted linear operator $\mathcal{L}_{\alpha - \epsilon}: C^1(\mathbb{R}, \mathbb{R}^4) \to C^0(\mathbb{R}, \mathbb{R}^4)$ by \begin{equation}\label{suspdefLalpha} \mathcal{L}_{\alpha - \epsilon} = \frac{d}{dx} - F_Y(Y(x;c); c) - (\alpha - \epsilon) r'(x) \mathcal{I}. \end{equation} Equations \cref{Zcprime} and \cref{Fcexploc} imply that $e^{(\alpha - \epsilon)r(x)} F_c(Y(x;c); c) \in \mathop\mathrm{ran}\nolimits \mathcal{L}_{\alpha - \epsilon}$.
Since $DF(0; c)\pm(\alpha-\epsilon)\mathcal{I}$ are still hyperbolic with the same unstable dimension as $DF(0; c)$, it follows again from \cite[Lemma~4.2]{Palmer1984} that $\mathcal{L}_{\alpha - \epsilon}$ is Fredholm with index 0. Next, we note that the stable-manifold theorem implies that $Y'(x; c)$ is exponentially localized so that \begin{align}\label{Yprimeloc} |Y'(x; c)| &\leq C e^{-(\alpha - \epsilon)|x|} && x \in \mathbb{R}. \end{align} Since $Y'(x; c) \in \mathop\mathrm{ker}\nolimits \mathcal{L}$ and $e^{(\alpha - \epsilon)r(x)} Y'(x; c)$ is bounded, it is straightforward to verify that $e^{(\alpha - \epsilon)r(x)} Y'(x; c) \in \mathop\mathrm{ker}\nolimits \mathcal{L}_{\alpha - \epsilon}$. Since any element in $\mathop\mathrm{ker}\nolimits \mathcal{L}_{\alpha - \epsilon}$ gives an element of $\mathop\mathrm{ker}\nolimits \mathcal{L}$ via \cref{defYZ}, we conclude that \[ \mathop\mathrm{ker}\nolimits \mathcal{L}_{\alpha - \epsilon} = \mathop\mathrm{span}\nolimits\{ e^{(\alpha - \epsilon)r(x)} Y'(x; c)\}. \] Since $e^{(\alpha - \epsilon)r(x)} F_c(Y(x;c); c) \in \mathop\mathrm{ran}\nolimits \mathcal{L}_{\alpha - \epsilon}$, the set of all bounded solutions to \cref{Zcprime} is $\{Z_c(x; c) + \mathbb{R}\, e^{(\alpha - \epsilon)r(x)} Y'(x;c)\}$ for some particular bounded solution $Z_c(x; c)$. Multiplying by $e^{-(\alpha - \epsilon)r(x)}$ and comparing with the bounded solutions of \cref{Ycprime}, we see that $Y_c(x; c)$ differs from $Z_c(x; c) e^{-(\alpha - \epsilon)r(x)}$ by a multiple of the exponentially localized function $Y'(x; c)$; hence $Y_c(x; c)$ is exponentially localized as claimed. \end{proof} \end{lemma} For $c \in (c_-, c_+)$, let \begin{equation}\label{suspU} U(x; c) = y_1(x; c). \end{equation} Then $U(x; c)$ is an even function and is an exponentially localized traveling wave solution to \cref{suspc}. For the remainder of this section, we will fix $c \in (c_-, c_+)$ and write the primary pulse solution corresponding to wavespeed $c$ as $U(x)$. We are interested in the existence and stability of multi-pulse equilibrium solutions to \cref{suspc}.
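The constants $\alpha$ and $\beta$ from \cref{specA00} control the decay rate and oscillation frequency of the pulse tails and enter all of the localization estimates above. As a quick numerical illustration (our own sketch, not part of the analysis; NumPy is assumed), the eigenvalues of $DF(0; c)$ can be computed directly and checked against the characteristic polynomial $\mu^4 + c^2\mu^2 + 1 = 0$:

```python
import numpy as np

def spatial_eigenvalues(c):
    """Eigenvalues of DF(0; c), where F is given in (suspF)."""
    # Jacobian of F(y1, y2, y3, y4; c) = (y2, y3, y4, -c^2 y3 - exp(y1) + 1)
    # evaluated at Y = 0; its characteristic polynomial is mu^4 + c^2 mu^2 + 1.
    DF0 = np.array([[0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0],
                    [-1.0, 0.0, -c**2, 0.0]])
    return np.linalg.eigvals(DF0)

c = 1.2
mu = spatial_eigenvalues(c)
# Residual of the characteristic polynomial at each computed eigenvalue.
residual = np.max(np.abs(mu**4 + c**2 * mu**2 + 1))
# For 0 < c < sqrt(2) the spectrum is a quartet +/- alpha +/- i beta.
alpha = np.abs(mu.real).min()
beta = np.abs(mu.imag).min()
```

For $c = 1.2$ this gives $\alpha \approx 0.374$ and $\beta \approx 0.927$, consistent with a genuinely complex quartet for $c \in (0, \sqrt{2})$.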
A multi-pulse is a localized, multi-modal solution $U_n(x)$ to \cref{eqODE} which resembles multiple, well-separated copies of the primary pulse $U(x)$. \subsection{Existence of pulses} First, we look at the existence of such pulses. The linearization of \cref{eqODE} about a given solution $U_*$ of \cref{eqODE} is the operator $\mathcal{A}_0(U_*): H^4(\mathbb{R}) \subset L^2(\mathbb{R}) \to L^2(\mathbb{R})$, given by \begin{equation}\label{defA0} \mathcal{A}_0(U_*) = \partial_x^4 + c^2 \partial_x^2 + \mathrm{e}^{U_*}. \end{equation} It follows from \cref{lemma:cinterval} that $\mathcal{A}_0(U)$ has a one-dimensional kernel spanned by $\partial_x U(x)$. Since $\mathcal{A}_0(U)$ is self-adjoint, its spectrum is real. We take the following additional hypothesis concerning the point spectrum of $\mathcal{A}_0(U)$. \begin{hypothesis}\label{A0hyp} The following hold concerning the spectrum of $\mathcal{A}_0(U)$. \begin{enumerate} \item $\mathrm{n}[\mathcal{A}_0(U)]=1$, i.e. $\mathcal{A}_0(U)$ has a unique, simple negative eigenvalue $\lambda_-$. \item There exists $\delta_0 > 0$ such that the only spectrum of $\mathcal{A}_0(U)$ in $(-\infty, \delta_0)$ consists of the two simple eigenvalues $0$ and $\lambda_-$. \end{enumerate} \end{hypothesis} We now have the following theorem, which is adapted from \cite[Theorem~3.6]{sandstede:iol97}. In all that follows, the norm $||\cdot||_\infty$ is the supremum norm on $C(\mathbb{R})$, $\langle \cdot, \cdot \rangle$ is the inner product on $L^2(\mathbb{R})$, and $|| \cdot ||$ is the norm on $L^2(\mathbb{R})$ induced from the inner product. \begin{theorem}\label{multiexist} Assume \cref{Uexistshyp} and \cref{A0hyp}, and let $\delta_0 > 0$ be as in \cref{A0hyp}. Fix a wavespeed $c$, and let $U(x)$ be an exponentially localized solution to \cref{eqODE}.
Then for any $n \geq 2$ and any sequence of nonnegative integers $k_1, \dots, k_{n-1}$ with at least one of the $k_j \in \{0, 1 \}$, there exists a nonnegative integer $m_0$ and $\delta > 0$ with $\delta < \delta_0$ such that: \begin{enumerate} \item For any integer $m$ with $m \geq m_0$, there exists a unique $n-$modal solution $U_n(x)$ to \cref{eqODE} which is of the form \begin{equation}\label{qn} U_n(x) = \sum_{j = 1}^{n} U^j(x) + r(x), \end{equation} where each $U^j(x)$ is a translate of the primary pulse $U(x)$. The distance between the peaks of $U^j$ and $U^{j+1}$ is $2 X_j$, where \begin{equation*} X_j \approx \frac{\pi}{\beta}(2 m + k_j) + \tilde{X}, \end{equation*} $\beta$ is defined in \cref{specA00}, and $\tilde{X}$ is a constant. The remainder term $r(x)$ satisfies \begin{equation}\label{rbound} \|r\|_\infty \leq C \mathrm{e}^{-\alpha X_{\mathrm{min}}}, \end{equation} where $\alpha$ is defined in \cref{specA00}, and $X_{\mathrm{min}} = \min\{X_1, \dots, X_{n-1}\}$. This bound holds for all derivatives with respect to $x$. \item The point spectrum of the linear operator $\mathcal{A}_0(U_n)$ on $L^2(\mathbb{R})$ contains $2n$ eigenvalues in the interval $(-\infty, \delta_0)$, which are as follows: \begin{enumerate} \item There are $n$ real eigenvalues $\nu_1, \dots, \nu_n$ with $|\nu_j| < \delta$, where $\nu_n = 0$ is a simple eigenvalue, and for $j = 1, \dots, n-1$, \[ \begin{array}{l} \nu_j < 0 \text{ if } k_j \text{ is odd} \\ \nu_j > 0 \text{ if } k_j \text{ is even.} \end{array} \] We will refer to these as the small magnitude eigenvalues of $\mathcal{A}_0(U_n)$. 
For $j = 1, \dots, n-1$, $\nu_j = \mathcal{O}(\mathrm{e}^{-2\alpha X_{\mathrm{min}}})$, and the corresponding eigenfunctions $s_j$ are given by \begin{equation}\label{sj} s_j = \sum_{k = 1}^{n} d_{jk}\partial_x U^k + w_j, \end{equation} where $d_{jk} \in \mathbb{C}$ are constants, and the remainder terms $w_j$ satisfy \begin{equation}\label{sjwbound} \|w_j\|_\infty \leq C\mathrm{e}^{-2 \alpha X_{\mathrm{min}}}. \end{equation} This bound holds for all derivatives with respect to $x$. In particular, \[ \| \partial_x w_j\|_\infty \leq C\mathrm{e}^{-2 \alpha X_{\mathrm{min}}}. \] The eigenfunction corresponding to $\nu_n$ is $s_n = \partial_x U_n$. \item There are $n$ negative eigenvalues which are $\delta-$close to $\lambda_-$. \end{enumerate} \item The essential spectrum of $\mathcal{A}_0(U_n)$ is \begin{equation}\label{A0ess} \sigma_{\text{ess}}(\mathcal{A}_0(U_n)) = [1 - c^4/4, \infty), \end{equation} which is positive and bounded away from 0. \end{enumerate} \end{theorem} \begin{proof} Using \cref{specA00}, the Hamiltonian \cref{suspH}, the fact that the kernel is simple, and the fact that the Melnikov integral $M = \int_{-\infty}^\infty (\partial_x U)^2\,\mathrm{d} x$ is positive, (a) follows from \cite[Theorem~3.6]{sandstede:iol97}, except for the bound on $r(x)$ and its derivatives with respect to $x$, which follows from \cite{Sanstede1993} and \cite{sandstede:som98}. All eigenvalues are real since $\mathcal{A}_0(U_n)$ is self-adjoint on $L^2(\mathbb{R})$. From Hypothesis \ref{Uexistshyp} and Hypothesis \ref{A0hyp}, $\mathcal{A}_0(U)$ has a simple eigenvalue at 0 and a simple negative eigenvalue at $\lambda_-$. It follows from \cite{alexander:ati90} that $\mathcal{A}_0(U_n)$ has $n$ eigenvalues near 0 and $n$ negative eigenvalues near $\lambda_-$. This proves the eigenvalue count on $(-\infty, \delta_0)$ and part (b2). Part (b1) follows from \cite{sandstede:som98}. We can verify directly that $\mathcal{A}_0(U_n)\partial_x U_n = 0$.
Part (c) follows from the Weyl Essential Spectrum Theorem \cite[Theorem~2.2.6]{kapitula:sad13} and \cite[Theorem~3.1.11]{kapitula:sad13}, since $\mathcal{A}_0(U_n)$ is exponentially asymptotic to $\mathcal{A}_0(0)$. \end{proof} \begin{remark}$\mathcal{A}_0(U_n)$ may in fact have additional eigenvalues $\lambda$ with $\lambda > \delta_0 > 0$, but these do not matter for the analysis. Our numerical analysis suggests that there are in fact no additional eigenvalues. \end{remark} \subsection{Stability of pulses} Now that we know about the existence of single and multiple pulses, we consider their spectral stability. To determine linear PDE stability of the multi-pulse solutions constructed in Theorem \ref{multiexist}, we look at the linearization of the PDE \cref{suspc} about $U_n(x)$, which is the quadratic operator polynomial $\mathcal{P}_2(\lambda; U_n): H^4(\mathbb{R}, \mathbb{C}) \subset L^2(\mathbb{R},\mathbb{C}) \rightarrow L^2(\mathbb{R},\mathbb{C})$ given by \begin{equation}\label{quadeig} \mathcal{P}_2(\lambda; U_n) = \mathcal{I} \lambda^2 + \mathcal{A}_1 \lambda + \mathcal{A}_0(U_n) \end{equation} where $\mathcal{A}_0(U_n)$ is defined in \cref{defA0}, $\mathcal{I}$ refers to the identity, and $\mathcal{A}_1=-2 c \partial_x$. First, we consider the essential spectrum. Since $U_n$ is exponentially localized, $\mathcal{P}_2(\lambda; U_n)$ is exponentially asymptotic to the operator \begin{equation}\label{quadeig0} \mathcal{P}_2(\lambda; 0) = \partial_x^4 + c^2 \partial_x^2 - 2 c \lambda \partial_x + (\lambda^2 + 1). \end{equation} By \cite[Theorem~3.1.11]{kapitula:sad13}, $\mathcal{P}_2(\lambda; U_n)$ is a relatively compact perturbation of $\mathcal{P}_2(\lambda; 0)$, thus by the Weyl essential spectrum theorem \cite[Theorem~2.2.6]{kapitula:sad13}, $\mathcal{P}_2(\lambda; U_n)$ and $\mathcal{P}_2(\lambda; 0)$ have the same essential spectrum. 
To find the essential spectrum of $\mathcal{P}_2(\lambda; 0)$, consider the related first-order operator $\mathcal{T}(\lambda): H^1(\mathbb{R}, \mathbb{C}^4) \subset L^2(\mathbb{R},\mathbb{C}^4) \rightarrow L^2(\mathbb{R},\mathbb{C}^4)$ given by \begin{equation} \mathcal{T}(\lambda) = \frac{\mathrm{d}}{\mathrm{d} x} - \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 - \lambda^2 & 2 c \lambda & -c^2 & 0 \end{pmatrix}, \end{equation} which we obtain by writing $\mathcal{P}_2(\lambda; 0)$ as a first order system. By a straightforward adaptation of \cite[Theorem A.1]{sandstede:rmi08} (the only difference being the presence of the fourth-order differential operator), the operators $\mathcal{T}(\lambda)$ and $\mathcal{P}_2(\lambda; 0)$ have the same Fredholm properties, thus the same essential spectrum. By a straightforward calculation, \begin{equation}\label{quadess} \sigma_{\mathrm{ess}}(\mathcal{P}_2(\lambda; U_n)) = \sigma_{\mathrm{ess}}(\mathcal{T}(\lambda)) = \{\mathrm{i} r : |r| \geq \rho \}, \end{equation} where $\rho > 0$ is the minimum of the function $\lambda(r) = c r + \sqrt{1 + r^4}$. The value of $\rho$ is positive for $c \in (0, \sqrt{2})$, and $\rho\to0$ as $c\to\sqrt{2}$, so the essential spectrum is purely imaginary and bounded away from 0. Spectral stability thus depends entirely on the point spectrum. \subsubsection{Single pulse} Before considering the spectral stability of the $n$-pulse, we must show the stability of the primary pulse, $U(x)$. 
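The boundary value $\rho$ of the essential spectrum in \cref{quadess} is easy to evaluate numerically, and its positivity for $c \in (0, \sqrt{2})$ can be checked directly. The following is a minimal Python sketch (our own illustration, not part of the analysis; SciPy's bounded scalar minimizer is assumed):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rho(c):
    """Essential spectrum boundary: rho = min over r of c*r + sqrt(1 + r^4)."""
    f = lambda r: c * r + np.sqrt(1.0 + r**4)
    # The minimizer lies at negative r, so bracket the search there.
    res = minimize_scalar(f, bounds=(-3.0, 0.0), method='bounded')
    return res.fun
```

For $c = 1.2$ this gives $\rho \approx 0.206$, and $\rho$ decreases monotonically to $0$ as $c \to \sqrt{2}$, consistent with the discussion above.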
In addition to Hypothesis \ref{Uexistshyp} and Hypothesis \ref{A0hyp}, our assumptions are: \begin{hypothesis}\label{PDEexisthyp} Regarding the PDE \cref{suspc} and the base solution $U(x)$, \begin{enumerate} \item for every initial condition $u(x,0)$ and $\partial_tu(x,0)$ there exists a solution $u(x, t)$ to \cref{suspc} on the interval $I = [0, T]$, where \[ T=T\left(\max\{ ||u(x,0)||, ||\partial_tu(x,0)|| \}\right); \] \item the constrained energy evaluated on the wave, $d(c)$ (see \cite[Equation~(2.16)]{grillakis:sto87} for the exact expression), is concave up, \begin{equation}\label{dcc} d''(c) = -\partial_c\left( c\|\partial_xU\|^2 \right)>0,\quad0<c^2<2. \end{equation} \end{enumerate} \end{hypothesis} We will provide numerical evidence that these hypotheses are met in \cref{sec:numerics}. Under these assumptions, we will prove the spectral and orbital stability of the single pulse using the HKI. However, two issues must first be resolved. First, the HKI as discussed in \cref{sec:intro} assumes that $\mathcal{A}_0$ has a compact resolvent, which is certainly not true for the operator associated with this problem. This compactness assumption is taken primarily for the sake of convenience, and to remove the possibility of point spectrum being embedded in the essential spectrum. However, as seen in the original formulation of the HKI for solitary waves, see \cite{kapitula:cev04,kapitula:ace05}, this is not a necessary condition. It is sufficient to assume that the origin is an isolated eigenvalue and that $\mathcal{A}_0$ is a differential operator of higher order than $\mathcal{A}_1$ with $\mathrm{n}[\mathcal{A}_0]<+\infty$. The interested reader should consult \cite{kapitula:ahk14} for the case where the origin is not isolated. The second difficulty is that these previous results for solitary waves do not immediately apply to quadratic eigenvalue problems.
However, as seen in \cite[Section~4.1]{bronski:aii14} one can easily convert a quadratic star-even eigenvalue problem into a linear star-even eigenvalue problem, and then apply the index theory to the reformulated problem. Thus, we can conclude the index theory is applicable to the problem at hand, which allows for the following stability result. \begin{lemma}\label{qstable} Let $c^2 \in (0, 2)$, and let $U(x)$ be the primary pulse solution to \cref{eqODE}. Then $U(x)$ is spectrally and orbitally stable if and only if \begin{equation}\label{dcc1} d''(c) = -\partial_c\left( c\|\partial_x U\|^2 \right)>0, \end{equation} where $d(c)$ is defined in \cite[equation (2.16)]{grillakis:sto87}. \end{lemma} \begin{proof} First, equation \cref{dcc1} is well-defined since both $U$ and $\partial_x U$ are smooth in $c$ by \cref{lemma:cinterval}. Next, we check that the origin is an isolated eigenvalue. The essential spectrum of $\mathcal{A}_0(U)$ is the same as that of $\mathcal{A}_0(U_n)$, and is given by \cref{A0ess}, which is positive and bounded away from 0. By assumption, $\mathcal{A}_0(U)$ has a single negative eigenvalue. We now use the HKI to complete the proof; in particular, the formulation as presented in equation \cref{e:12}. First, we note that, \[ \left.\mathcal{A}_1\right|_{\mathop\mathrm{span}\nolimits\{\partial_xU\}}= \langle-2c\partial_x\left(\partial_xU\right),\partial_xU\rangle=0, \] where the equality follows from the fact that the primary pulse is even. 
Since $\mathcal{A}_2=\mathcal{I}$ is positive definite, we can write, \[ \begin{aligned} K_{\mathop\mathrm{Ham}\nolimits}&=\mathrm{n}(\mathcal{A}_0)- \mathrm{n}\left(\left.\left[\mathcal{I}-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right]\right|_{\mathop\mathrm{span}\nolimits\{\partial_xU\}}\right)\\ &=1- \mathrm{n}\left(\left.\left[\mathcal{I}-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right]\right|_{\mathop\mathrm{span}\nolimits\{\partial_xU\}}\right), \end{aligned} \] for by assumption, $\mathrm{n}(\mathcal{A}_0)=1$. Regarding the second term, \[ \begin{aligned} \left.\left[\mathcal{I}-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right]\right|_{\mathop\mathrm{span}\nolimits\{\partial_xU\}}&= \|\partial_xU\|^2- \langle(-2c\partial_x)\mathcal{A}_0^{-1}(-2c\partial_x)\partial_xU,\partial_xU\rangle\\ &=\|\partial_xU\|^2+ 2c\langle\partial_x\mathcal{A}_0^{-1}(-2c\partial_x^2U),\partial_xU\rangle. \end{aligned} \] Going back to the existence equation \cref{eqODE} and differentiating with respect to $c$ yields, \[ \mathcal{A}_0(U)\partial_cU+2c\partial_x^2U=0\,\,\leadsto\,\, \mathcal{A}_0(U)^{-1}(-2c\partial_x^2U)=\partial_cU. \] Substitution and changing the order of differentiation provides, \[ \langle\partial_x\mathcal{A}_0(U)^{-1}(-2c\partial_x^2U),\partial_xU\rangle= \langle\partial_c\partial_xU,\partial_xU\rangle=\frac{1}{2}\partial_c\|\partial_xU\|^2. \] In conclusion, \[ \left.\left[\mathcal{I}-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right]\right|_{\mathop\mathrm{span}\nolimits\{\partial_xU\}}= \|\partial_xU\|^2+c\,\partial_c\|\partial_xU\|^2= \partial_c\left( c\|\partial_xU\|^2 \right). \] We now have for the primary pulse, \[ K_{\mathop\mathrm{Ham}\nolimits}=1-\mathrm{n}\left[\partial_c\left( c\|\partial_xU\|^2 \right)\right]. \] If $d''(c)<0$, then $K_{\mathop\mathrm{Ham}\nolimits}=1$, and there is one positive real polynomial eigenvalue. If $d''(c)>0$, the HKI is zero. Consequently, the wave is spectrally stable. 
Appealing to \cite[Theorem~4.1]{bronski:aii14} we can further state that the wave is orbitally stable. \end{proof} \subsubsection{$n$-pulse} We now locate all potentially unstable eigenvalues of \cref{quadeig} for an $n$-pulse. These include polynomial eigenvalues with positive real part, as well as purely imaginary polynomial eigenvalues with negative Krein signature. To accomplish this task we use the HKI in combination with the Krein matrix. First, we compute the HKI for \cref{quadeig}, so that we have an exact count of the number of potentially unstable polynomial eigenvalues. We then use the Krein matrix to find $(n-1)$ pairs of eigenvalues close to 0; each pair is either real or purely imaginary with negative Krein signature. We refer to these as small magnitude polynomial eigenvalues, or interaction polynomial eigenvalues, since heuristically they result from interactions between neighboring pulses. We then show that the number of potentially unstable interaction polynomial eigenvalues is exactly the same as the HKI, from which we conclude that we have found all of the potentially unstable eigenvalues. By Hamiltonian reflection symmetry, all other point spectrum must be purely imaginary with positive Krein signature. We start with the calculation of the HKI. By \cref{multiexist} we know that $\mathcal{A}_0(U_n)$ has precisely $n$ eigenvalues near the origin. Let $0\le n_\mathrm{s}\le n-1$ represent the number of these eigenvalues which are negative. We have the following result concerning the HKI for the $n$-pulse: \begin{lemma}\label{lem:HKImulti} Assume Hypotheses \ref{Uexistshyp}, \ref{A0hyp}, and \ref{PDEexisthyp}, and let $U_n(x)$ be an $n-$modal solution to \cref{eqODE}. Then \[ K_{\mathop\mathrm{Ham}\nolimits}=n+n_\mathrm{s}-1. 
\] \end{lemma} \begin{proof} From \cref{multiexist} part (b) and the definition of $n_\mathrm{s}$, $\mathrm{n}[\mathcal{A}_0(U_n)]=n+n_\mathrm{s}$, so for the HKI, \[ K_{\mathop\mathrm{Ham}\nolimits}=n+n_\mathrm{s}- \mathrm{n}\left(\left.\left[\mathcal{I}-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right]\right|_{\mathop\mathrm{span}\nolimits\{\partial_xU_n\}}\right), \] where $\mathcal{A}_0=\mathcal{A}_0(U_n)$. In the proof of \cref{qstable} we saw that when the wave depends smoothly on $c$, \[ \left.\left[\mathcal{I}-\mathcal{A}_1\mathcal{A}_0^{-1}\mathcal{A}_1\right]\right|_{\mathop\mathrm{span}\nolimits\{\partial_xU_n\}}= \partial_c\left(c\|\partial_x U_n\|^2\right). \] Since to leading order the $n$-pulse is $n$ copies of the original pulse, we have \[ \|\partial_x U_n\|^2=n\|\partial_xU\|^2 + \mathcal{O}(e^{-\alpha X_{\mathrm{min}}}). \] Consequently, we can write \begin{align*} \partial_c\left(c\|\partial_x U_n\|^2\right) &= n\partial_c\left(c\|\partial_x U\|^2\right) + \mathcal{O}(e^{-\alpha X_{\mathrm{min}}})\\ &=-nd''(c) + \mathcal{O}(e^{-\alpha X_{\mathrm{min}}}). \end{align*} Since $d''(c)>0$ by assumption, we have to leading order, \[ \partial_c\left(c\|\partial_x U_n\|^2\right)<0. \] For sufficiently well-separated pulses the sign will not change even when incorporating the higher-order terms in the asymptotic expansion. The result now follows. \end{proof} We now locate the potentially unstable polynomial eigenvalues of the quadratic eigenvalue problem \cref{quadeig}. This will be accomplished through the Krein matrix. For the sake of exposition only we will henceforth assume that each of the small magnitude eigenvalues $\nu_1, \dots, \nu_n$ of $\mathcal{A}_0(U_n)$ is simple. For each of these eigenvalues, denote the associated normalized eigenfunctions as $s_1, \dots, s_n$. Since $\mathcal{A}_0(U_n)$ is self-adjoint, these eigenfunctions are pairwise orthogonal. 
In the construction of the Krein matrix the relevant subspace for the spectral problem is the span of this set of eigenfunctions associated with the small magnitude eigenvalues of $\mathcal{A}_0$, \begin{equation}\label{defS} S = \mathop\mathrm{span}\nolimits\{s_1, \dots, s_n \}. \end{equation} We now present the following theorem, which is the main result of this section. \begin{theorem}\label{Kreindiag} Assume Hypotheses \ref{Uexistshyp}, \ref{A0hyp}, and \ref{PDEexisthyp}. Let $U_n(x)$ be an $n-$pulse solution to \cref{eqODE}, and let $\nu_1, \dots, \nu_n$ be the small magnitude eigenvalues of $\mathcal{A}_0(U_n)$, as defined in \cref{multiexist}. Under a suitable normalization of the eigenfunctions $s_j$, near the origin the Krein matrix has the asymptotic expansion, \begin{equation}\label{Kreinapprox} -\frac{\bm{\mathit{K}}_S(z)}{z} = ||\partial_xU||^2 \mathrm{diag} (\nu_1, \dots, \nu_n) + d''(c)\bm{\mathit{I}}_n\overline{z}^2 + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}|z| + |z|^3), \end{equation} which is diagonal to leading order. \end{theorem} The proof of this result is left to \cref{s:kreinproof}. As a corollary, we have the following criteria for spectral stability and instability of the multi-pulse solutions $U_n(x)$. \begin{corollary}\label{stabcrit} Let $U_n(x)$ be an $n-$pulse solution to \cref{eqODE} constructed as in \cref{multiexist} using the sequence of nonnegative integers $\{ k_1, \dots, k_{n-1} \}$. Assume the same hypotheses as in \cref{Kreindiag}. Let $\nu_1, \dots, \nu_n$ be the small magnitude eigenvalues of $\mathcal{A}_0(U_n)$, where $\nu_n = 0$. Then there are $(n-1)$ pairs of eigenvalues of \cref{quadeig} close to 0, which we will term interaction polynomial eigenvalues. These are described as follows. 
For each $j=1,2,\dots,n-1$, \begin{enumerate} \item if $k_j$ is odd (equivalently, $\nu_j<0$), there is a corresponding pair of purely imaginary interaction polynomial eigenvalues, \begin{equation}\label{npulseKreineigs} \lambda_j^\pm = \pm \mathrm{i} \left( \|\partial_xU\| \sqrt{ \frac{|\nu_j|}{d''(c)} } + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}) \right), \end{equation} each of which has negative Krein signature; \item if $k_j$ is even (equivalently, $\nu_j>0$), there is a corresponding pair of real interaction polynomial eigenvalues, \[ \lambda_j^\pm = \pm \left( \|\partial_xU\| \sqrt{ \frac{\nu_j}{d''(c)} } + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}) \right). \] In particular, there exists a positive, real eigenvalue. \end{enumerate} In addition, there is a geometrically simple polynomial eigenvalue at $\lambda=0$ with corresponding eigenfunction $\partial_x U_n$. All other point spectrum is purely imaginary and has positive Krein signature. \end{corollary} \begin{remark} In other words, if all the nonzero small magnitude eigenvalues of $\mathcal{A}_0(U_n)$ are negative, and if the individual pulses are sufficiently well-separated, then the $n$-pulse is spectrally stable; otherwise, it is unstable. \end{remark} While we can find the interaction polynomial eigenvalues using Lin's method as in \cite{sandstede:som98}, using the Krein matrix allows us to also determine the Krein signatures of any purely imaginary interaction polynomial eigenvalues. This additional information is needed to conclude via the HKI that all of the potentially unstable point spectrum has small magnitude. \begin{proof} By \cref{cor:51} the small polynomial eigenvalues are found by solving $\mathop\mathrm{det}\nolimits\bm{\mathit{K}}_S(z) = 0$. This is equivalent to finding zeros of the Krein eigenvalues.
For $j=1,2,\dots,n$ set, \[ -\frac{r_j(z)}{z}=||\partial_xU||^2 \nu_j + d''(c) \overline{z}^2+\tilde{r}_j(z), \] where \[ \tilde{r}_j(z) = \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}|z| + |z|^3). \] Note that the first two terms in $-r_j(z)/z$ are the diagonal entries of the Krein matrix. Since to leading order the Krein matrix is diagonal, by \cite{Ipsen2008} these are valid asymptotic expressions for the Krein eigenvalues. The small and nonzero polynomial eigenvalues are found by solving, \begin{equation}\label{eqforz} ||\partial_xU||^2 \nu_j + d''(c) \overline{z}^2+\tilde{r}_j(z)=0,\quad j=1,2,\dots,n. \end{equation} First suppose that $z$ is real, so the Krein matrix is Hermitian. The Krein eigenvalues are then real-valued; in particular, the error term, $\tilde{r}_j(z)$, is real-valued. Recall that $d''(c)>0$. Suppose that $\nu_j<0$, and set, \begin{equation}\label{epsilon2} \epsilon_j^2 = -\frac{||\partial_xU||^2 \nu_j}{d''(c)} > 0. \end{equation} Equation \cref{eqforz} can then be rewritten, \begin{equation}\label{eqforz2} z^2 - \epsilon_j^2 + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}|z| + |z|^3) = 0. \end{equation} Letting $z = \epsilon_j y$ and noting that $\epsilon_j = \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$, equation \cref{eqforz2} becomes, \begin{equation}\label{eqforz3} y^2 - 1 + \mathcal{O}(\epsilon_j^{1/2}|y| + \epsilon_j|y|^3) = 0. \end{equation} For sufficiently small $\epsilon_j$, equation \cref{eqforz3} has two roots, $y = \pm 1 + \mathcal{O}(\epsilon_j^{1/2})$. Thus, for sufficiently large $X_{\mathrm{min}}$, equation \cref{eqforz} has two solutions, \[ z_j^\pm = \pm ||\partial_xU|| \sqrt{ -\frac{ \nu_j}{d''(c)} } + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \] The Krein eigenvalue, $r_j(z)$, has a simple zero at $z_j^\pm$.
Since to leading order, \[ r_j'(z_j^\pm)=-||\partial_xU||^2 \nu_j-3d''(c)(z_j^\pm)^2=2||\partial_xU||^2 \nu_j<0, \] each of these polynomial eigenvalues has negative Krein signature. Now suppose $\nu_j > 0$, and assume $z$ is purely imaginary, $z=\mathrm{i}\tilde{z}$. In this case the Krein matrix is no longer Hermitian, which implies that the remainder term associated with each Krein eigenvalue is no longer necessarily real-valued. Define $\epsilon_j^2$ as in \cref{epsilon2}, but this time $\epsilon_j^2 < 0$. The two zeros of the Krein eigenvalue are now, \[ \tilde{z}_j^\pm = \pm||\partial_xU|| \sqrt{ \frac{ \nu_j}{d''(c)} } + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}), \] which to leading order are purely real. Going back to the original problem, there are two interaction polynomial eigenvalues given by, \[ \lambda_j^\pm=\tilde{z}_j^\pm. \] To leading order these eigenvalues are real-valued. Under the assumption that the small magnitude eigenvalues of $\mathcal{A}_0(U_n)$ are simple, the asymptotic expansion shows that the $\lambda_j^\pm$ are also simple. By the Hamiltonian reflection symmetry of the polynomial eigenvalues about the real axis, the fact that they are real-valued to leading order implies they are truly real-valued and come in opposite-sign pairs. Since the kernels of \cref{quadeig} and $\mathcal{A}_0(U_n)$ are the same, we can verify directly that $\lambda = 0$ is an eigenvalue of \cref{quadeig} with eigenfunction $\partial_x U_n$. We now show that all other point spectrum is purely imaginary. We have for the small magnitude polynomial eigenvalues, $k_\mathrm{i}^-=2n_\mathrm{s}$, and $k_\mathrm{r}=n-1-n_\mathrm{s}$. Thus, for the small magnitude polynomial eigenvalues, \[ k_\mathrm{r}+k_\mathrm{i}^-=(n-1-n_\mathrm{s})+(2n_\mathrm{s})=n-1+n_\mathrm{s}. \] By \cref{lem:HKImulti} this is the HKI for the $n$-pulse.
Consequently, there are no other point polynomial eigenvalues which have positive real part, or which are purely imaginary and have negative Krein signature. \end{proof} \subsection{Numerical results}\label{sec:numerics} In this section, we show numerical results to illustrate the theoretical results of the previous section. First, we can construct a primary pulse solution $U(x)$ numerically using the string method from \cite{Chamard2011}. The top two panels of \cref{fig:single1} show these solutions for the same values of $c$ as in \cite[Figure 3]{Chen1997}. Next, we compute the spectrum of the operator $\mathcal{A}_0(U)$ numerically using Matlab's \texttt{eig} function. In the bottom panel of \cref{fig:single1} we note the presence of a simple eigenvalue at the origin and a simple negative eigenvalue, which supports our hypotheses on the spectrum of $\mathcal{A}_0(U)$. As expected, we also see that the essential spectrum is positive and bounded away from 0. \begin{figure} \caption{Primary pulse solutions $U(x)$ to \cref{eqODE} for $c = 1.354$ (top left) and $c = 1.40$ (top right). The bottom panel shows the spectrum of $\mathcal{A}_0(U)$, the linearization of \cref{eqODE} about a single pulse $U(x)$ for $c = 1.3$. For the spectral plot we use finite difference methods with $N = 512$ and periodic boundary conditions. The left boundary of the essential spectrum is $\lambda\sim0.286$. The spectrum to the right of the boundary is discrete instead of continuous because of the boundary conditions. } \label{fig:single1} \end{figure} We can construct multi-pulse solutions numerically by joining together multiple copies of the primary pulse and using Matlab's \texttt{fsolve} function. Consecutive distances between peaks are given by \cref{multiexist}. The first two double pulse solutions are shown in the top two panels of \cref{fig:double}. These double pulses are numbered using the integer $k_1$ from \cref{multiexist}. 
We verify \cref{multiexist}(b) numerically by computing the spectrum of $\mathcal{A}_0(U_2)$. The spectra of $\mathcal{A}_0(U_2)$ for double pulses 0 and 1 are shown in the bottom two panels of \cref{fig:double}. In both cases, there is an eigenvalue at 0. For double pulse 0, there is an additional positive eigenvalue near 0, and for double pulse 1, there is an additional negative eigenvalue near 0. \begin{figure} \caption{Double pulse solutions $U_2(x)$ to \cref{eqODE} for $c = 1.2$. The top left panel shows double pulse 0, and the top right panel shows double pulse 1. In the bottom two panels we see the associated spectra for $\mathcal{A}_0(U_2)$: double pulse 0 on the left, and double pulse 1 on the right.} \label{fig:double} \end{figure} We verify \cref{stabcrit} by computing the polynomial eigenvalues of \cref{quadeig} directly using the Matlab package \texttt{quadeig} from \cite{Hammarling2013}. For double pulse 0, $\mathcal{A}_0(U_2)$ has one positive small magnitude eigenvalue; thus, by \cref{stabcrit}, equation \cref{quadeig} has a polynomial eigenvalue with positive real part. For double pulse 1, the small magnitude eigenvalue of $\mathcal{A}_0(U_2)$ is negative; thus by \cref{stabcrit}, since the distance between the two peaks is sufficiently large, the polynomial eigenvalues of \cref{quadeig} are purely imaginary. These are shown in \cref{fig:quadeigdouble}. \begin{figure} \caption{Polynomial eigenvalues of \cref{quadeig} for double pulses 0 (left) and 1 (right) for $c=1.2$. The eigenvalues are marked with a filled (blue) circle, and the edge of the essential spectrum is marked with a (red) cross. The essential spectrum is discrete instead of continuous because of the boundary conditions. For the right panel the two purely imaginary polynomial eigenvalues nearest the origin have negative Krein signature. 
Here we use finite difference methods with $N = 512$ and periodic boundary conditions.} \label{fig:quadeigdouble} \end{figure} \subsection{Proof of \cref{Kreindiag}}\label{s:kreinproof} Using \cref{multiexist}, let $U_n(x)$ be an $n$-modal solution to \cref{eqODE}, and let $\{\nu_1, \dots, \nu_n\}$ be the small magnitude eigenvalues of $\mathcal{A}_0(U_n)$ with corresponding eigenfunctions $\{ s_1, \dots, s_n \}$. Since $\mathcal{A}_0(U_n)$ is self-adjoint, the $s_i$ are orthogonal, and for the sake of convenience scale them so that \begin{equation}\label{orthoeigs} \langle s_i, s_j \rangle = \|\partial_x U \|^2 \delta_{ij}. \end{equation} Typically, such eigenfunctions would be normalized to unit length; however, that normalization plays no role in the construction of the Krein matrix, nor in the derived properties. Let $S = \mathop\mathrm{span}\nolimits\{s_1, \dots, s_n\}$. By \cref{l:53}, and using the normalization of \cref{orthoeigs}, for small $|z|$ the Krein matrix is the $n \times n$ matrix, \begin{equation}\label{Kreinform} -\frac{\bm{\mathit{K}}_S(z)}{z} = \|\partial_xU_n\|^2 \text{diag}(\nu_1, \dots, \nu_n) + \overline{z}\bm{\mathit{K}}_1 - \overline{z}^2 ( \|\partial_xU_n\|^2\bm{\mathit{I}}_n - \bm{\mathit{K}}_2) + \mathcal{O}(|z|^3), \end{equation} where \begin{equation}\label{defK1} (\bm{\mathit{K}}_1)_{jk} = \langle s_j, \mathrm{i}\mathcal{A}_1 s_k \rangle, \end{equation} and \begin{equation}\label{defK2} (\bm{\mathit{K}}_2)_{jk} = \langle \mathcal{A}_1 s_j, P_{S^\perp}(P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1} P_{S^\perp}\mathcal{A}_1 s_k \rangle. \end{equation} This is, to leading order, a matrix-valued quadratic polynomial in $z$ (and its complex conjugate). The factors $\|\partial_xU_n\|^2$ on the RHS of \cref{Kreinform} come from using the scaling \cref{orthoeigs} for the eigenfunctions $s_i$ of $\mathcal{A}_0(U_n)$. We now prove \cref{Kreindiag} in a series of lemmas. 
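Before turning to the lemmas, we note that, since to leading order \cref{Kreinform} is a quadratic matrix polynomial in $z$, it is worth recalling how such polynomial eigenvalue problems are solved in practice: by companion linearization, the device underlying \texttt{quadeig}-type solvers. The sketch below applies it to small illustrative matrices (a symmetric/skew-symmetric toy problem, not the discretized operators of the text):

```python
import numpy as np

# Companion linearization of a quadratic eigenvalue problem
#   (lam^2 I + lam C + K) v = 0.
# K (symmetric positive definite) and C (skew-symmetric) are illustrative.
K = np.array([[2.0, 0.3], [0.3, 1.0]])
C = np.array([[0.0, 0.5], [-0.5, 0.0]])
n = K.shape[0]

# First-order (companion) form: with z = (v, lam*v), solve A z = lam z.
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
lams, Z = np.linalg.eig(A)

# Residual of each eigenpair in the original quadratic problem.
residuals = [
    np.linalg.norm((lam**2 * np.eye(n) + lam * C + K) @ Z[:n, i])
    for i, lam in enumerate(lams)
]
print(max(residuals))  # near machine precision
```

For this symmetric/skew-symmetric pair the spectrum is purely imaginary and symmetric about the real axis, mirroring the Hamiltonian symmetry of the polynomial eigenvalues discussed above.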
In all that follows, $C$ refers to a constant independent of $x$, but it may have a different value each time it is used. The first lemma is a bound on the product of exponentially separated pulses. \begin{lemma}\label{expseplemma} Let $U_+(x)$ and $U_-(x)$ be localized pulses which decay exponentially with rate $\alpha$ and whose peaks are separated by a distance $2 X$. We have the following bounds, \begin{equation}\label{expsepbound1} \sup_{x \in \mathbb{R}} | U_-(x) U_+(x)|\leq C \mathrm{e}^{-2 \alpha X}, \end{equation} and \begin{equation}\label{expsepbound2} |\langle U_-(x), U_+(x) \rangle |\leq C \mathrm{e}^{-(3 \alpha/2) X}. \end{equation} \end{lemma} \begin{proof} Without loss of generality, let $U_\pm(x)$ be exponentially localized peaks centered at $\pm X$, thus $|U_-(x)| \leq C \mathrm{e}^{-\alpha|x + X|}$ and $|U_+(x)| \leq C \mathrm{e}^{-\alpha|x - X|}$. For $x \in (-\infty, -X]$, \begin{align}\label{endprodbound} | U_-(x) U_+(x) | &\leq C \mathrm{e}^{\alpha(x + X)} \mathrm{e}^{\alpha(x - X)} = C \mathrm{e}^{2 \alpha x} \leq C \mathrm{e}^{-2 \alpha X}, \end{align} and for $x \in [-X, 0]$, \begin{align}\label{middleprodbound} | U_-(x) U_+(x) | &\leq C \mathrm{e}^{-\alpha(x + X)} \mathrm{e}^{\alpha(x - X)} = C \mathrm{e}^{-2 \alpha X}. \end{align} Bounds on $[0, X]$ and $[X, \infty)$ are similar. Since these are independent of $x$, we obtain the bound \cref{expsepbound1}. For the bound \cref{expsepbound2}, we split the integral into four pieces. 
\begin{equation} \begin{aligned} | \langle U_-(x), &U_+(x) \rangle | \leq \int_{-\infty}^{-X} |U_-(x) U_+(x)| \mathrm{d} x + \int_{-X}^0 |U_-(x) U_+(x)|\mathrm{d} x \\ &+\int_0^X |U_-(x) U_+(x)| \mathrm{d} x +\int_X^\infty |U_-(x) U_+(x)| \mathrm{d} x \end{aligned} \end{equation} For the first integral, we use \cref{endprodbound} to get \begin{align*} \int_{-\infty}^{-X} |U_-(x) U_+(x)| \mathrm{d} x &\leq C \int_{-\infty}^{-X} \mathrm{e}^{2 \alpha x} \mathrm{d} x = C \mathrm{e}^{-2 \alpha X} \end{align*} For the second integral, we use \cref{middleprodbound} to get \begin{align*} \int_{-X}^0 |U_-(x) U_+(x)| \mathrm{d} x &\leq C \int_{-X}^0 \mathrm{e}^{-\alpha(x + X)} \mathrm{e}^{\alpha(x - X)} \mathrm{d} x \leq C \int_{-X}^0 \mathrm{e}^{-\alpha(x + X)/2}\mathrm{e}^{\alpha(x - X)} \mathrm{d} x \\ &\leq C \mathrm{e}^{-(3 \alpha/2) X } \int_{-X}^0 \mathrm{e}^{(\alpha/2)x} \mathrm{d} x \leq C \mathrm{e}^{-(3 \alpha/2) X } \end{align*} The third and fourth integrals are similar. Combining these, we obtain \cref{expsepbound2}. \end{proof} \begin{remark} If the hypotheses of \cref{expseplemma} are satisfied, we say that $U_+(x)$ and $U_-(x)$ are exponentially separated by $2X$. \end{remark} Next, we obtain a bound on the matrix $\bm{\mathit{K}}_1$. \begin{lemma}\label{K1small} For the matrix $\bm{\mathit{K}}_1$ in \cref{Kreinform}, \begin{equation}\label{K1final} \bm{\mathit{K}}_1 = \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \end{equation} \end{lemma} \begin{proof} Substituting $\mathcal{A}_1 = -2c\partial_x$ into \cref{defK1}, $(\bm{\mathit{K}}_1)_{jk} = \mathrm{i} 2 c \langle s_j, \partial_xs_k \rangle$. 
Using the expansion \cref{sj} from \cref{multiexist}, \begin{equation}\label{K1exp} \begin{aligned} \langle s_j ,\partial_xs_k \rangle &= \sum_{m = 1}^{n} d_{jm} d_{km} \langle \partial_xU^m, \partial_x^2U^m \rangle + \sum_{m \neq\ell} d_{jm} d_{k\ell} \langle \partial_xU^m, \partial_x^2U^\ell \rangle\\ &\qquad + \langle s_j, \partial_x w_k \rangle + \sum_{\ell = 1}^{n} d_{k\ell} \langle w_j, \partial_x^2U^\ell \rangle. \end{aligned} \end{equation} By translation invariance of the inner product on $L^2(\mathbb{R})$, \[ \langle \partial_xU^m, \partial_x^2U^m \rangle = \langle \partial_xU, \partial_x(\partial_xU) \rangle = 0, \] since the operator $\partial_x$ is skew-symmetric. For $m \neq\ell$, $U^m$ and $U^\ell$ are exponentially separated by at least $2 X_{\mathrm{min}}$; thus, by \cref{expseplemma}, \[ \langle \partial_xU^m, \partial_x^2U^\ell \rangle = \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \] The last two terms in \cref{K1exp} are $\mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}})$ using H\"{o}lder's inequality and the bound \cref{sjwbound} from \cref{multiexist}, which applies to $\partial_x w_k$ as well as $w_j$. Combining these estimates we obtain \cref{K1final}. \end{proof} Using the expansion \cref{sj} from \cref{multiexist}, the matrix $\bm{\mathit{K}}_2$ in \cref{Kreinform} becomes, \begin{equation}\label{K2expansion} \begin{aligned} &(\bm{\mathit{K}}_2)_{jk} = 4 c^2 \left\langle\sum_{m = 1}^{n} d_{jm} \partial_x^2U^m + \partial_x w_j,\right. \\ &\qquad\left.\sum_{\ell = 1}^{n} d_{k\ell} P_{S^\perp} (P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1} P_{S^\perp} \partial_x^2U^\ell + P_{S^\perp} (P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1} P_{S^\perp}\partial_xw_k \right\rangle. \end{aligned} \end{equation} Before we can evaluate this expression, we need to look at $(P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1}$. 
\begin{lemma}\label{PA0inv} $P_{S^\perp} \mathcal{A}_0(U_n) P_{S^\perp}: S^\perp \rightarrow S^\perp$ is an invertible linear operator with bounded inverse. \end{lemma} \begin{proof} By \cref{A0ess}, the essential spectrum of $\mathcal{A}_0(U_n)$ is $\sigma_{\text{ess}} = [1, \infty)$, which is bounded away from 0. Thus the operator $\mathcal{A}_0(U_n)$ is Fredholm with index 0. Since for the small magnitude eigenvalues $\nu_i$ of $\mathcal{A}_0(U_n)$ we have $\nu_i \notin [1, \infty)$, the operator $\mathcal{A}_0(U_n) - \nu_i I$ is also Fredholm with index 0. Since $\mathcal{A}_0(U_n) - \nu_i I$ is Fredholm, its range is closed. Thus by the closed range theorem \cite[p.~205]{Yosida}, since $\nu_i \in \mathbb{R}$ and $\mathcal{A}_0(U_n)$ is self-adjoint, we have \begin{equation}\label{KerRangeNu} \mathop\mathrm{ran}\nolimits (\mathcal{A}_0(U_n) - \nu_i I) = \left(\mathop\mathrm{ker}\nolimits (\mathcal{A}_0(U_n) - \nu_i I)\right)^\perp. \end{equation} Next, we look at the operator $P_{S^\perp} \mathcal{A}_0(U_n)$. Since $\mathcal{A}_0(U_n)$ is self-adjoint and $P_{S^\perp}$ commutes with $\mathcal{A}_0(U_n)$, $P_{S^\perp} \mathcal{A}_0(U_n)$ is also self-adjoint. Since $P_{S^\perp} \mathcal{A}_0(U_n) = \mathcal{A}_0(U_n) P_{S^\perp}$, the kernel of $P_{S^\perp} \mathcal{A}_0(U_n)$ contains $S$ as well as the kernel of $\mathcal{A}_0(U_n)$, which is contained in $S$. The only other candidates for the kernel of $P_{S^\perp} \mathcal{A}_0(U_n)$ are functions $y$ for which $(\mathcal{A}_0(U_n) - \nu_i I) y = s_i$, since $s_i$ is annihilated by the projection $P_{S^\perp}$. But no such function can exist: by \cref{KerRangeNu}, we would have $s_i \perp \mathop\mathrm{ker}\nolimits (\mathcal{A}_0(U_n) - \nu_i I)$, which contains $s_i$. We conclude that $\mathop\mathrm{ker}\nolimits P_{S^\perp} \mathcal{A}_0(U_n) = S$. Since the range of $\mathcal{A}_0(U_n)$ is closed and $P_{S^\perp}$ is bounded, the range of $P_{S^\perp} \mathcal{A}_0(U_n)$ is also closed. 
Thus by the closed range theorem and the fact that $P_{S^\perp} \mathcal{A}_0(U_n)$ is self-adjoint, \[ \mathop\mathrm{ran}\nolimits P_{S^\perp} \mathcal{A}_0(U_n) = (\mathop\mathrm{ker}\nolimits (P_{S^\perp} \mathcal{A}_0(U_n))^*)^\perp = (\mathop\mathrm{ker}\nolimits (P_{S^\perp} \mathcal{A}_0(U_n)))^\perp = S^\perp. \] Since $\mathop\mathrm{dim}\nolimits \mathop\mathrm{ker}\nolimits P_{S^\perp} \mathcal{A}_0(U_n) = \mathop\mathrm{codim}\nolimits \mathop\mathrm{ran}\nolimits P_{S^\perp} \mathcal{A}_0(U_n) = n$, the operator $P_{S^\perp} \mathcal{A}_0(U_n)$ is a Fredholm operator with index 0 and kernel $S$. Thus the restriction $P_{S^\perp} \mathcal{A}_0(U_n)|_{S^\perp} = P_{S^\perp} \mathcal{A}_0(U_n) P_{S^\perp}$ is invertible on $S^\perp$. By the definition of $S$ and \cref{multiexist}, $P_{S^\perp}\mathcal{A}_0(U_n)P_{S^\perp}$ has no eigenvalues of magnitude less than $\delta$. By the resolvent bound for normal operators, the linear operator $(P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1}$ is bounded on $S^\perp$. \end{proof} Before we can evaluate the term $(P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1} P_{S^\perp}\partial_x^2U^\ell$ from \cref{K2expansion}, we will need the following lemma, which gives an expansion for $\mathrm{e}^{U_n(x)}$. \begin{lemma}\label{expsep} For the $n$-pulse, $U_n(x)$, and for all $i = 1, \dots, n$, \[ \mathop\mathrm{exp}\nolimits(U_n(x)) = \mathop\mathrm{exp}\nolimits( U^i(x)) + \sum_{j \neq i} (\mathop\mathrm{exp}\nolimits(U^j(x)) - 1) + \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}}). \] \end{lemma} \begin{proof} Fix $i$ in the expansion \cref{qn} and let $S(x) = \sum_{j \neq i} U^j(x)$, so that $U_n = U^i + S + \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$. Since $U_n(x)$ is bounded, \[ \begin{aligned} \mathop\mathrm{exp}\nolimits(U_n(x)) &= \mathop\mathrm{exp}\nolimits( U^i(x) )\mathop\mathrm{exp}\nolimits(S(x))(1 + \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})) \\ &= \mathop\mathrm{exp}\nolimits( U^i(x) )\mathop\mathrm{exp}\nolimits(S(x)) + \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}}). 
\end{aligned} \] Using the Taylor expansion for the exponential, \[ \begin{aligned} \mathop\mathrm{exp}\nolimits( U^i(x) )\mathop\mathrm{exp}\nolimits(S(x)) &= \sum_{p=0}^\infty \frac{U^i(x)^p}{p!} \sum_{q=0}^\infty \frac{S(x)^q}{q!} \\ &= \sum_{p=0}^\infty \frac{U^i(x)^p}{p!} + \sum_{q=0}^\infty\frac{S(x)^q}{q!} - 1 + \sum_{p=1}^\infty \frac{U^i(x)^p}{p!} \sum_{q=1}^\infty \frac{S(x)^q}{q!} \\ &= \mathop\mathrm{exp}\nolimits(U^i(x)) + \mathop\mathrm{exp}\nolimits(S(x)) - 1 + \sum_{p=1}^\infty \frac{U^i(x)^p}{p!} \sum_{q=1}^\infty \frac{S(x)^q}{q!}. \end{aligned} \] For the last term on the RHS, \[ \begin{aligned} \left| \sum_{p=1}^\infty \frac{U^i(x)^p}{p!} \sum_{q=1}^\infty \frac{S(x)^q}{q!} \right| &\leq \left| U^i(x)S(x)\right| \sum_{p=0}^\infty \frac{|U^i(x)|^p}{(p+1)!} \sum_{q=0}^\infty \frac{|S(x)|^q}{(q+1)!} \\ &\leq \left| U^i(x)S(x) \right| \mathrm{e}^{|U^i(x)|}\mathrm{e}^{|S(x)|} \\ &\leq C \mathrm{e}^{-2 \alpha X_{\mathrm{min}}}, \end{aligned} \] where in the last line we used the fact that $U_n(x)$ is bounded together with the bound \cref{expsepbound1} from \cref{expseplemma}, since $U^i$ and each peak in $S$ are exponentially separated. Combining all of this, \[ \begin{aligned} \mathop\mathrm{exp}\nolimits(U_n(x)) &= \mathop\mathrm{exp}\nolimits(U^i(x)) + \mathop\mathrm{exp}\nolimits(S(x)) - 1 + \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}}). \end{aligned} \] Repeat this procedure $n - 2$ more times to get the result. \end{proof} We can now evaluate $(P_{S^\perp} \mathcal{A}_0(U_n) P_{S^\perp})^{-1} P_{S^\perp}\partial_x^2U^\ell$. \begin{lemma}\label{PA0invqxx} \begin{equation}\label{invqxx} (P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1} P_{S^\perp}\partial_x^2U^\ell = -\frac{1}{2c}P_{S^\perp}\partial_cU^\ell + \mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}}). \end{equation} \end{lemma} \begin{proof} Let $y = (P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1} P_{S^\perp}\partial_x^2U^\ell$. By \cref{PA0inv}, this is well-defined, and $y \in S^\perp$. 
Since $P_{S^\perp}\partial_x^2U^\ell$ is smooth and $(P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1}$ is bounded, $y$ is smooth as well and is the unique solution to the equation \begin{equation*} (P_{S^\perp} \mathcal{A}_0(U_n) P_{S^\perp})y = P_{S^\perp}\partial_x^2U^\ell, \end{equation*} which simplifies to \begin{equation}\label{Linstart} P_{S^\perp} \mathcal{A}_0(U_n) y = P_{S^\perp}\partial_x^2U^\ell, \end{equation} since $y \in S^\perp$. Using Lin's method as in \cite{sandstede:som98}, we will look for a solution to \cref{Linstart} of the form, \begin{equation}\label{Linsolform} \tilde{y} = -\frac{1}{2c} P_{S^\perp}\partial_cU^\ell + \tilde{w}, \end{equation} where $\tilde{w} \in S^\perp$. This ansatz is suggested by \begin{equation}\label{uc} \mathcal{A}_0(U) \partial_c U = -2 c\partial_x^2 U, \end{equation} which we obtain by setting $u = U$ in equation \cref{eqODE} and differentiating with respect to $c$; this is permissible since $U$ is smooth in $c$ by \cref{Uexistshyp}. Substituting \cref{Linsolform} into \cref{Linstart} and simplifying, we have \begin{equation}\label{Lin2} P_{S^\perp} \mathcal{A}_0(U_n) \left(-\frac{1}{2c} \partial_cU^\ell \right) + P_{S^\perp} \mathcal{A}_0(U_n) \tilde{w} = P_{S^\perp}\partial_x^2U^\ell. \end{equation} Using \cref{expsep}, for each $\ell = 1, \dots, n$ we can write the operator $\mathcal{A}_0(U_n)$ as, \begin{equation}\label{A0expansion} \mathcal{A}_0(U_n) = \mathcal{A}_0(U^\ell) + \sum_{k \neq \ell} (\mathrm{e}^{U^k(x)} - 1) + \tilde{h}(x), \end{equation} where $\tilde{h}(x)$ is a small remainder term with uniform bound $\|\tilde{h}\|_\infty = \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$. 
Substituting \cref{A0expansion} into the first term on the LHS of \cref{Lin2}, \begin{align}\label{Lin3} P_{S^\perp} \left( \mathcal{A}_0(U^\ell) + \sum_{k \neq \ell} (\mathrm{e}^{U^k(x)} - 1) + \tilde{h}(x) \right) \left(-\frac{1}{2c}\partial_cU^\ell \right) + P_{S^\perp} \mathcal{A}_0(U_n) \tilde{w} &= P_{S^\perp}\partial_x^2U^\ell. \end{align} Since \cref{uc} holds for $U = U^\ell$, \begin{equation} P_{S^\perp} \mathcal{A}_0(U^\ell) \left( -\frac{1}{2c} \partial_c U^\ell \right) = P_{S^\perp}\partial_x^2 U^\ell, \end{equation} where we divided by $-2c$ and applied the projection $P_{S^\perp}$ on the left. Using this, equation \cref{Lin3} simplifies to \begin{align}\label{Lin4} \mathcal{A}_0(U_n) \tilde{w} + P_{S^\perp} \left( \sum_{k \neq \ell} (\mathrm{e}^{U^k(x)} - 1) + \tilde{h}(x) \right) \left(-\frac{1}{2c}\partial_cU^\ell \right) &= 0, \end{align} where we use the fact that $P_{S^\perp}$ commutes with $\mathcal{A}_0(U_n)$, since it is a spectral projection for $\mathcal{A}_0(U_n)$, and that $\tilde{w} \in S^\perp$. Since $\partial_c U^\ell$ and $U^k$ are exponentially separated for $k \neq \ell$, using \cref{expseplemma} and the same argument as in the proof of \cref{expsep}, \begin{align}\label{Linbound1} P_{S^\perp} \left( \sum_{k \neq \ell} (\mathrm{e}^{U^k(x)} - 1) \right) \left(-\frac{1}{2c}\partial_cU^\ell \right) &= \mathcal{O}\left(\mathrm{e}^{-\alpha X_{\mathrm{min}}}\right). \end{align} Since $\partial_c U^\ell$ is bounded and $\|\tilde{h}\|_\infty = \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$, \begin{align}\label{Linbound2} P_{S^\perp} \tilde{h}(x) \left(-\frac{1}{2c}\partial_cU^\ell \right) &= \mathcal{O}\left(\mathrm{e}^{-\alpha X_{\mathrm{min}}}\right). 
\end{align} Using \cref{Linbound1} and \cref{Linbound2}, equation \cref{Lin4} simplifies to the following equation for $\tilde{w}$: \begin{equation}\label{A0heq} \mathcal{A}_0(U_n) \tilde{w} + h(x) = 0, \end{equation} where $h(x)$ is a small remainder term with uniform bound $\|h(x)\|_\infty = \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$. We now follow the procedure in \cite{sandstede:som98}, which we briefly outline below. Let $W = (\tilde{w}, \partial_x\tilde{w},\partial_x^2 \tilde{w},\partial_x^3 \tilde{w})$. As in \cite{sandstede:som98}, we rewrite \cref{A0heq} as a first-order system for $W$, and we take $W$ to be a piecewise function consisting of the $2n$ pieces $W_j^\pm, j = 1, \dots, n$, where \begin{align*} W_j^-(x) &\in C^0([-X_{j-1}, 0]) \\ W_j^+(x) &\in C^0([0, X_j]) \end{align*} with $X_0 = X_n = \infty$. We note that the domains of the functions $W_j^\pm(x)$ overlap at the endpoints; the second and third equations in the system \cref{Wsystem} are matching conditions for these pieces at the appropriate endpoints. Following this procedure, and using the expansions \cref{A0expansion} for $\mathcal{A}_0(U_n)$ on the $j$-th piece, we obtain the system of equations \begin{equation}\label{Wsystem} \begin{aligned} (W_j^\pm)'(x) = A(U(x)) W_j^\pm(x) &+ G_j(x) W_j^\pm(x)+ H_j(x) \\ W_j^+(X_j) - W_{j+1}^-(-X_j) &= 0 \\ W_j^-(0) - W_j^+(0) &= 0 \end{aligned} \end{equation} where \[ A(U(x)) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\mathrm{e}^{U(x)} & 0 & -c^2 & 0 \end{pmatrix}, \quad G_j(x) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \sum_{k \neq j} (1 - \mathrm{e}^{U(x - \rho_{kj})}) & 0 & 0 & 0 \end{pmatrix}, \] and $\rho_{kj}$ is the signed distance from the peak of $U^k$ to the peak of $U^j$ in $U_n$. $H_j$ is a remainder term which comes from the term $h(x)$ in \cref{A0heq} and the remainder term in the expansion \cref{A0expansion}, and we have the estimate $\|H_j \|_\infty = \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$. 
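As a concrete check on the constant-coefficient part of this system, the eigenvalues of $A(0)$ (the matrix $A(U(x))$ on the zero background, where $\mathrm{e}^{U} = 1$) are the roots of $\mu^4 + c^2\mu^2 + 1 = 0$; for $c^4 < 4$ these form a quartet $\{\pm\mu, \pm\overline{\mu}\}$ off the imaginary axis, and $\min|\mathop\mathrm{Re}\nolimits\mu|$ gives the exponential decay rate of the tails, consistent with the exponential localization used throughout. A short numerical sketch, with $c = 1.3$ as in \cref{sec:numerics}:

```python
import numpy as np

# Spatial eigenvalues of the constant-coefficient system W' = A(0) W,
# i.e. the linearization about the zero background (U = 0, so e^U = 1).
c = 1.3
A0 = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [-1.0, 0.0, -c**2, 0.0],
])
mus = np.linalg.eigvals(A0)

# The eigenvalues solve mu^4 + c^2 mu^2 + 1 = 0; for c^4 < 4 none are
# purely imaginary, and min |Re mu| sets the decay rate of the tails.
char_vals = mus**4 + c**2 * mus**2 + 1
alpha = np.abs(mus.real).min()
print(alpha)  # approximately 0.279 for c = 1.3
```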
For $k \neq j$, $|\rho_{kj}| \geq 2 X_{\mathrm{min}}$. This implies $e^{U(x - \rho_{kj})} = \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$ on the $j$-th piece, thus we can use a Taylor expansion to show $\|G_j\| = \mathcal{O}(\mathrm{e}^{-\alpha X_{\mathrm{min}}})$. Following the procedure in \cite{sandstede:som98}, we obtain a unique piecewise solution $W_j^\pm$ to the first two equations of \cref{Wsystem}. The third equation is generally not satisfied, so what we have constructed is a unique solution $\tilde{y}$ of the form \cref{Linsolform} to \cref{Linstart} which is continuous except for $n - 1$ jumps. By uniqueness, we must have $\tilde{y} = y$, thus $y$ is actually of the form \cref{Linsolform} with $\tilde{w}$ smooth. Finally, Lin's method gives us the uniform bound $\|\tilde{w}\|_\infty = \mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}})$, from which \cref{invqxx} follows. \end{proof} We prove one more lemma before we evaluate the matrix $\bm{\mathit{K}}_2$ from \cref{Kreinform}. \begin{lemma}\label{orthogonald} For the coefficients $d_{jk}$ in \cref{sj} from \cref{multiexist}, \begin{equation}\label{dsum} \sum_{m = 1}^{n} d_{jm} d_{km} = \delta_{jk} + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \end{equation} \end{lemma} \begin{proof} Using the expansion \cref{sj} from \cref{multiexist}, \[ \begin{aligned} \langle s_j, s_k \rangle &= \sum_{m = 1}^{n} d_{jm} d_{km} \langle \partial_xU^m,\partial_xU^m \rangle + \sum_{m \neq \ell} d_{jm} d_{k\ell} \langle \partial_xU^m, \partial_xU^\ell \rangle\\ &\qquad + \langle s_j, w_k \rangle + \sum_{\ell = 1}^{n} d_{k\ell} \langle w_j, \partial_xU^\ell \rangle. \end{aligned} \] As in \cref{K1small}, the second term on the RHS is $\mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}})$, and the last two terms on the RHS are $\mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}})$. 
By translation invariance, $\langle \partial_xU^m, \partial_xU^m \rangle = \langle \partial_xU, \partial_xU \rangle = \|\partial_xU\|^2$ for all $m$. This reduces to \[ \langle s_j, s_k \rangle = \|\partial_xU\|^2 \sum_{m = 1}^{n} d_{jm} d_{km} + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \] Dividing by $\|\partial_xU\|^2$ and using the orthogonality relation \cref{orthoeigs} gives us \cref{dsum}. \end{proof} Finally, we can evaluate the matrix $\bm{\mathit{K}}_2$ from \cref{Kreinform}. \begin{lemma}\label{K2diag} For the matrix $\bm{\mathit{K}}_2$ in \cref{Kreinform}, \begin{equation}\label{K2final} (\bm{\mathit{K}}_2)_{jk} = -2 c\langle\partial_x^2 U, \partial_c U \rangle \delta_{jk} + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \end{equation} \end{lemma} \begin{proof} By \cref{PA0inv}, $(P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1}$ is a bounded linear operator. Using the bound \cref{sjwbound} from \cref{multiexist}, \[ P_{S^\perp} (P_{S^\perp} \mathcal{A}_0(U_n)P_{S^\perp})^{-1} P_{S^\perp}\partial_xw_k = \mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}}). \] Using this and \cref{invqxx} from \cref{PA0invqxx}, \cref{K2expansion} becomes, \[ \begin{aligned} (\bm{\mathit{K}}_2)_{jk} &= 4 c^2 \left\langle \sum_{m = 1}^{n} d_{jm}\partial_x^2U^m + \partial_xw_j, -\frac{1}{2c}\sum_{\ell = 1}^{n} d_{k\ell} P_{S^\perp}\partial_cU^\ell + \mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}}) \right\rangle \\ &= -2 c \left( \sum_{m = 1}^{n} d_{jm} d_{km} \langle \partial_x^2U^m, P_{S^\perp} \partial_cU^m \rangle + \sum_{m\neq \ell} d_{jm} d_{k\ell} \langle \partial_x^2U^m, P_{S^\perp} \partial_cU^\ell \rangle\right.\\ &\qquad\qquad\left.+ \sum_{\ell=1}^n \langle \partial_xw_j, d_{k\ell}\partial_cU^\ell \rangle \right) + \mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}}). 
\end{aligned} \] By \cref{Yexploc} and \cref{lemma:Ycexploc}, $\partial_x^2 U$ and $\partial_c U$ are exponentially localized, thus for $m\neq\ell$, $\partial_x^2U^m$ and $\partial_cU^\ell$ are exponentially separated. It follows from \cref{expseplemma} that the second term on the RHS is $\mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}})$. Using H\"{o}lder's inequality and the remainder bound \cref{sjwbound}, the third term on the RHS is $\mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}})$. Thus we are left with \begin{equation}\label{K2step1} \begin{aligned} (\bm{\mathit{K}}_2)_{jk} &= -2 c \sum_{m = 1}^{n} d_{jm} d_{km} \langle \partial_x^2U^m, P_{S^\perp} \partial_cU^m \rangle + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \end{aligned} \end{equation} To evaluate the inner product, we first bound $P_S \partial_c U^m$. Recalling the normalization \cref{orthoeigs} and using the expansion \cref{sj}, since the $s_j$ are orthogonal with $\|s_j\| = \|\partial_x U\|$, \[ \begin{aligned} \|P_S \partial_c U^m\| &\leq \frac{1}{\|\partial_x U\|} \sum_{j=1}^n |\langle s_j, \partial_c U^m \rangle| \\ &= \frac{1}{\|\partial_x U\|} \sum_{j=1}^n \left| \sum_{k=1}^n d_{jk} \langle \partial_x U^k, \partial_c U^m \rangle + \langle w_j, \partial_c U^m \rangle \right| \\ &= \frac{1}{\|\partial_x U\|} \sum_{j=1}^n \left| d_{jm} \langle \partial_x U^m, \partial_c U^m \rangle + \sum_{k \neq m} d_{jk} \langle \partial_x U^k, \partial_c U^m \rangle \right| + \mathcal{O}(\mathrm{e}^{-2 \alpha X_{\mathrm{min}}}) \\ &= \frac{1}{\|\partial_x U\|} \sum_{j=1}^n |d_{jm}| \, |\langle \partial_x U, \partial_c U \rangle| + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}) \\ &= \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \end{aligned} \] The third line follows from \cref{expseplemma}, since by \cref{Yexploc} and \cref{lemma:Ycexploc}, $\partial_x U$ and $\partial_c U$ are exponentially localized, thus $\partial_x U^k$ and $\partial_c U^m$ are exponentially separated for $k \neq m$. 
In the fourth line we use $\langle \partial_x U, \partial_c U \rangle = 0$, since $\partial_x U$ is an odd function and $\partial_c U$ is an even function. From this, we have \[ P_{S^\perp} \partial_c U^m = (\mathcal{I} - P_S) \partial_c U^m = \partial_c U^m + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}). \] Substituting this into equation \cref{K2step1} and using \cref{orthogonald} and translation invariance, this becomes \[ \begin{aligned} (\bm{\mathit{K}}_2)_{jk} &= -2 c \sum_{m = 1}^{n} d_{jm} d_{km} \langle \partial_x^2U^m, \partial_cU^m \rangle = -2 c \langle \partial_x^2U, \partial_cU \rangle \sum_{m = 1}^{n} d_{jm} d_{km} \\ &= -2 c \langle \partial_x^2U,\partial_cU \rangle \delta_{jk} + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}), \end{aligned} \] which is \cref{K2final}. \end{proof} Using \cref{K1final} from \cref{K1small} and \cref{K2final} from \cref{K2diag}, the Krein matrix \cref{Kreinform} becomes, \[ \begin{aligned} -\frac{\bm{\mathit{K}}_S(z)}{z}&= \|\partial_xU\|^2 \text{diag}(\nu_1, \dots, \nu_n) - ( \|\partial_xU\|^2 -2 c \langle \partial_x^2U, \partial_cU \rangle) \bm{\mathit{I}}_n \overline{z}^2 \\ &\qquad + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}|z| + |z|^3). 
\end{aligned} \] Integrating by parts, \[ \begin{aligned} -\frac{\bm{\mathit{K}}_S(z)}{z} &= \|\partial_xU\|^2 \text{diag}(\nu_1, \dots, \nu_n) - \left( \langle \partial_xU, \partial_xU \rangle + 2c\langle \partial_c\partial_xU, \partial_xU\rangle \right)\bm{\mathit{I}}_n\overline{z}^2\\ &\qquad + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}|z| + |z|^3) \\ &= \|\partial_xU\|^2 \text{diag}(\nu_1, \dots, \nu_n) -\partial_c\left( c||\partial_xU||^2 \right) \bm{\mathit{I}}_n \overline{z}^2 + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}|z| + |z|^3) \\ &= \|\partial_xU\|^2 \text{diag}(\nu_1, \dots, \nu_n) + d''(c) \bm{\mathit{I}}_n \overline{z}^2 + \mathcal{O}(\mathrm{e}^{-(3 \alpha/2) X_{\mathrm{min}}}|z| + |z|^3), \end{aligned} \] which is \cref{Kreinapprox} in \cref{Kreindiag}. \end{document}
arXiv
Journal of Systems Chemistry Transition to diversification by competition for multiple resources in catalytic reaction networks Atsushi Kamimura1 & Kunihiko Kaneko1 Journal of Systems Chemistry volume 6, Article number: 5 (2015) Cite this article All life, including cells and artificial protocells, must integrate diverse molecules into a single unit in order to reproduce. Despite expected pressure to evolve a simple system with the fastest replication speed, the mechanism by which the use of a great variety of components, and the coexistence of diverse cell-types with different compositions are achieved is as yet unknown. Here we show that coexistence of such diverse compositions and cell-types is the result of competitions for a variety of limited resources. We find that a transition to diversity occurs both in chemical compositions and in protocell types, as the resource supply is decreased, when the maximum inflow and consumption of resources are balanced. Our results indicate that a simple physical principle of competition for a variety of limiting resources can be a strong driving force to diversify intracellular dynamics of a catalytic reaction network and to develop diverse protocell types in a primitive stage of life. Cells, even in their primitive forms [1-3], must integrate diverse molecules into a single unit so that they keep reproduction where they sustain similar chemical compositions. In main scenarios for the origin of life, through the path from a mixture of organic component (the primeval soup with diverse molecular species) to reproducing cells, the emergence of replicating entities has been required for homeostatic growth capable of undergoing Darwinian evolution [4-7]. For faithful replications, the hypercycle model in which different molecular species mutually catalyze the replication of each other [8,9] provides a basic mechanism to overcome an inevitable loss in catalytic activities through mutations. 
Protocells encapsulating such hypercycles can exhibit robust reproduction even in the presence of parasitic molecules that may destroy the mutual catalytic reactions [10-16]. Despite recent recognition of protocells as a stepping-stone to the origin of life [1,17,18], less attention has been devoted to studying the diversity in components and among protocells, which is basic property in all the life forms: cells generally involve a huge variety of components, while cells or replicators are diversified in all biological systems consisting of their population. In fact, one may naively expect that a simple replicating entity consisting of only a few molecular species would evolutionarily triumph as it can reproduce faster than a complex system using diverse components. Additionally, as replicators with the fastest division speed would dominate, coexistence of diverse replicators would not be expected. In addition to classical in vitro experiments supporting this [19], several in silico artificial life models show dominance of such simple replicators [20,21]. This straight path of protocell population towards the fittest simple replicators seems to contradict with biodiversity in ecological populations [22-24], even at a cellular level [25,26]. Then, can both "compositional" diversity at an individual level and "phenotypic" diversity at the population level be present at "protocellular" stages, a primitive scenario of cellular evolution, in spite of the selective pressure for survival? Understanding how molecular mixtures give rise to reproducing cells and thereafter diversification both at individual and population levels will contribute to unravelling the intermediate phases or steps of molecules-to-population dynamics in the origin of life, and the dynamics of ecological populations from the underlying biochemical catalytic networks. 
An important consequence of a system consisting of reproducing (proto)cells is competition for the chemical resources needed for growth. In most studies on protocells or the origin of life, however, it is assumed that the chemical resources for replication of biopolymers are supplied in sufficient amounts. Indeed, in this case a protocell consisting of fewer components, say a hypercycle with three components, replicates faster and quickly dominates the pool. As the catalytic activities of molecules and the cell population increase, however, nutrient depletion or resource competition is inevitable. The question we address in the present paper is how this resource limitation may lead to compositional diversity within a protocell and to cell-to-cell diversification. The effect of competition for a specific component has also been demonstrated experimentally in protocells [27]. When only a single resource is provided, competition for the limited resource leads to survival of only the fittest protocell type. When multiple resources are competed for, do protocells instead diversify into distinct types specialized for the use of different resources, and is coexistence of diverse cell types possible? Here, we show through numerical simulation of a model of interacting protocells consisting of hypercycle reaction networks that such a transition to increased diversity occurs when available resources are limited. Further, we show that this diversification transition with decreasing resources is understood as a transition from exponential to linear growth of the cell population. For this purpose, we consider a simplified protocell model in which each molecule X_j (j=1,…,K_M) replicates with the catalytic aid of some X_i, by consuming a corresponding resource S_j (j=1,…,K_R). The outline of the model is as follows (Figure 1). There exist M_tot protocells, each consisting of K_M species of replicating molecules, where some possibly have null population.
Molecules of each species X_j are replicated with the aid of some other catalytic molecule X_i, determined by a random catalytic reaction network (see below), by consuming a resource S_j, one of the supplied resource chemicals S_k (k=1,…,K_R), as follows:

$$ X_{j} + X_{i} + S_{j} \overset{c_{i}}\rightarrow 2X_{j} + X_{i}. \tag{1} $$

Figure 1. Schematic representation of our model. The system is composed of M_tot cells, each of which contains molecule species X_j (j=1,…,K_M) that form a catalytic reaction network to replicate each X_j. The cells share each resource S_k in a common medium, which is consumed by the replication of molecule X_k. The resources flow into the common medium for the cells from an external reservoir (environment) via diffusion \(-D(S_{k}-{S_{k}^{0}})\) (k=1,…,K_R), where \({S_{k}^{0}}\) is a randomly-fixed constant \({S_{k}^{0}} \in [0,M_{\text {tot}}]\) and D is the diffusion constant.

For this reaction to replicate X_j, one resource molecule is needed, and the replication reaction does not occur if S_j < 1. In this paper we assume K_M = K_R, so that each resource species S_j corresponds to one molecule species X_j. The case K_M > K_R, in which a common resource molecule species is used for the replication of multiple molecule species \(X_{j_{1}}, X_{j_{2}}, \ldots\), is discussed in the supplement. The reaction coefficient is given by the catalytic activity c_i of the molecule species X_i, which is determined randomly as c_i ∈ [0,1]. With each replication, an error occurs with probability μ, as described below. The resources S_j diffuse sufficiently fast through a common medium shared by the population of protocells, so that replication of X_j can occur whenever S_j in the common medium is greater than one. From external reservoirs of concentrations \({S_{j}^{0}}\), the resources S_j are supplied into the common medium by diffusion \(-D(S_{j} - {S_{j}^{0}})\).
D controls the degree of the resource competition, as the resource supply is limited with decreasing D. The random catalytic reaction network is constructed as follows. For each molecule species, the density for the path of the catalytic reaction is given by ρ(which is fixed at 0.1) so that each species has ρ K M reactions on average. When a path exists between X i and X j (i≠j), either is selected randomly as a catalyst for the replication of the other, while bidirectional connections are excluded so that X j does not work as a catalyst for X i if X i is the catalyst for X j . Autocatalytic reactions in which X i catalyzes the replication of itself are also excluded. Once chosen, the reaction network is fixed throughout each run of simulations, and the network is identical for all protocells of the population. Even though the underlying network is identical for all protocells, compositions in each protocell vary because of stochastic reactions, as well as structural changes explained below. By taking a different composition, each protocell can use a different part of the whole network for its growth. Structural changes may occur during replication to alter monomer sequences of polymers and catalytic properties of the molecule. In the present model, this alteration is included as a random change to other molecular species during the replication process. When replication of X j occurs, it is replaced by another molecule X l (l≠j) with a probability μ. For simplicity, we assume that this error leads to all other molecule species with equal probability, μ/(K M −1) where K M is the number of molecule species. When the total number of molecules in each cell exceeds a given threshold N, the cell divides into two and partitions molecules in a totally random manner, irrespective of species, and one randomly chosen cell is removed from the system in order to fix the total number of cells at M tot. Simulation is carried out as follows. We introduce discrete simulation steps. 
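The network construction described above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `make_network` and the edge representation are our own assumptions. Each unordered pair of species receives a catalytic path with probability ρ, so each species takes part in roughly ρK_M reactions on average, and both bidirectional catalysis and autocatalysis are excluded by construction.

```python
import random

def make_network(K_M, rho=0.1, rng=None):
    """Random catalytic network as a set of directed edges (i, j),
    meaning 'species i catalyzes the replication of X_j'.

    For each unordered pair {i, j} (i != j) a path exists with probability
    rho, and one direction is then chosen at random.  Hence X_j never
    catalyzes X_i when X_i catalyzes X_j, and no species catalyzes itself,
    matching the model description.  Each species ends up with about
    rho * (K_M - 1) ~ rho * K_M reactions on average.
    """
    rng = rng or random.Random(0)
    edges = set()
    for i in range(K_M):
        for j in range(i + 1, K_M):
            if rng.random() < rho:
                edges.add((i, j) if rng.random() < 0.5 else (j, i))
    return edges
```

Once generated, the same edge set would be shared by all protocells for the whole run, as in the model; only the compositions of the cells differ.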
For each simulation step, we repeat the following procedures. For each cell, in a random order, we choose two molecules from the cell. If the pair of molecules, X i and X j is a catalyst and a replicator(X i catalyzes the replication of X j ), the reaction occurs with the given probability(c i ), if S j ≥1. When the reaction occurs, the new molecule of X j is added into the cell and one molecule of the corresponding resource is subtracted to make S j →S j −1. Here, with a probability μ, a new molecule of X l (l≠j), instead of X j , is added into the cell as the structural change. If the total number of molecules in a cell exceeds a threshold N, the molecules are distributed into two daughter cells, while one cell, randomly chosen, is removed from the system. We also update each S a to \(S_{a}-D(S_{a}-{S^{0}_{a}}) (a=1,\ldots,K_{R})\). Diversification by decreasing resource supply in a catalytic reaction network We simulated the model by changing the speed of resource supply D. The other parameters are fixed as N=1000, K M =K R =200 in this paper. For the moment, we also fix here M tot=100, and dependence on M tot is discussed later. When the resources are supplied sufficiently fast (e.g., for D=1), a recursively growing state is established with a few molecular species, where the composition is robust against noise and perturbations by the division process. In this state, the (typically) three primary components form a three-component hypercycle (Figure 2(I)). There are a few hundred molecules out of N=1000 for each primary component, while most of the other molecules are null. Some species can appear by the random structural changes, but their number is typically less than a few. In some cases, molecular species being catalyzed by a member of the hypercycle increase their number(more than 20 copies), but the primary three components are robust enough. 
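The per-step procedure described above can be sketched in code. This is a simplified sketch under our own assumptions (names such as `step`, the dict-based cell representation, and the handling of division by overwriting a random cell, which is equivalent to removing one cell to keep the population size fixed, are ours, not the authors' implementation):

```python
import random

def step(cells, S, net, c, S0, D, mu, rng, N=1000):
    """One discrete simulation step of the protocell model (sketch).

    cells : list of dicts {species index: copy number}
    S, S0 : current and reservoir resource levels (lists of floats)
    net   : set of directed edges (i, j) = 'X_i catalyzes replication of X_j'
    c     : catalytic activities c_i
    """
    for cell in rng.sample(cells, len(cells)):        # cells in random order
        pool = [sp for sp, n in cell.items() for _ in range(n)]
        if len(pool) >= 2:
            a, b = rng.sample(pool, 2)                # pick two molecules
            for cat, rep in ((a, b), (b, a)):         # at most one direction is catalytic
                if (cat, rep) in net and S[rep] >= 1 and rng.random() < c[cat]:
                    S[rep] -= 1                       # consume one resource molecule
                    new = rep
                    if rng.random() < mu:             # structural change to another species
                        new = rng.choice([s for s in range(len(S)) if s != rep])
                    cell[new] = cell.get(new, 0) + 1
                    break
        if sum(cell.values()) >= N:                   # division with random partition
            d1 = {}
            for sp, n in cell.items():
                k = sum(rng.random() < 0.5 for _ in range(n))
                if k:
                    d1[sp] = k
            d2 = {sp: n - d1.get(sp, 0) for sp, n in cell.items()
                  if n > d1.get(sp, 0)}
            cell.clear(); cell.update(d1)             # daughter 1 replaces the mother
            victim = rng.choice(cells)                # one random cell is removed ...
            victim.clear(); victim.update(d2)         # ... by overwriting it with daughter 2
    for k in range(len(S)):                           # resource inflow by diffusion
        S[k] -= D * (S[k] - S0[k])
```

Running this repeatedly on a small three-species hypercycle keeps the population size fixed and the resources bounded by their reservoir values, as in the model.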
In other words, only a few molecules and the reaction pathways among them are selected for their own growth from the overall structure of the catalytic network with K_M species. All the dividing cells adopt this three-component hypercycle; thus there is neither compositional nor phenotypic diversity (Figure 2(I)).

Figure 2. Compositions and types of reproducing protocells for (I) D = 1, (II) D = 0.1, and (III) D = 0.01. Compositions of molecule species are shown over 200 successive division events in the system with K_M = K_R. (i) At successive division events (abscissa), molecule species indices with more than 20 copies in the dividing cell are marked. Cells are categorized into a few or several types, indicated by different colors, according to the majority set of molecule species. For each type, the indices of the majority species are shown to the right of the figures. (ii) The catalytic network formed by the majority molecule species is shown for each cell type. Each numbered node corresponds to a molecule species, and an arrow from index i to j represents catalysis of X_j replication by X_i. (iii) Similarities H_ij between cells i and j, as defined in the text, are plotted over 200 successive division events, using a color code from H_ij ∼ 1 (red) to H_ij ∼ 0 (dark blue). If the similarity is close to one, the composition is almost preserved by cell division, while zero similarity indicates that a reproduced cell has a completely different composition. Intermediate values of similarity indicate an overlap in some molecule species between the cells, as in Types II-A and -B, Types III-A and -B, and Types III-D and -E.
Parameters are K_M = K_R = 200, M_tot = 100, N = 1000, and μ = 0.001.

To check cell reproduction fidelity, we introduced a similarity between cells as follows: as each cellular state is characterized by the number of molecules of each chemical species, \(\vec {N}_{i} = (n_{1}, n_{2}, \ldots, n_{K_{M}})\), the similarity is defined as the normalized inner product of the composition vectors at two cell division events, i.e., \(H_{\textit {ij}} = \vec {N}_{i} \cdot \vec {N}_{j}/(|\vec {N}_{i} || \vec {N}_{j} |)\) between the i-th and j-th division events. In the above case, the similarity between mother and daughter cells is close to one, implying high-fidelity reproduction. As D decreases below 0.1, with all the other parameters kept fixed, phenotypic diversity starts to increase. For example, two cell types (II-A, B) coexist in Figure 2(II); they consist of three-component hypercycles differing by one component. Both types divide at approximately equal speed and coexist over 10^2 generations. In 200 successive division events (Figure 2(II)(iii)), one type has a similarity near unity (red), and the similarity of the other ranges between 0.6 and 0.7 (yellow), implying that the two types mostly reproduce themselves, with a small probability (approximately 0.01/division) of switching types. Over much longer generations, replication errors can produce different types capable of replacing the existing cell types. As D decreases further, both the phenotypic and the compositional diversity continue to increase. For D = 0.01 (Figure 2(III)), six cell types (A−F) appear. Each type forms a distinct hypercycle network in which the species belonging to it catalyze the replication of each other. Here, some types (III-A and B, III-D and E) share some common molecular species, while the others do not.
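The similarity measure is just the cosine overlap of two composition vectors, and can be written directly (the function name is our own):

```python
import math

def similarity(N_i, N_j):
    """H_ij = (N_i . N_j) / (|N_i| |N_j|): cosine overlap between the
    composition vectors of two cells at division events."""
    dot = sum(a * b for a, b in zip(N_i, N_j))
    norm = math.sqrt(sum(a * a for a in N_i)) * math.sqrt(sum(b * b for b in N_j))
    return dot / norm if norm else 0.0
```

Identical compositions give H = 1, disjoint compositions give H = 0, and partially overlapping hypercycles (as in Types II-A and II-B) give intermediate values.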
Similarities are approximately equal to unity (red) for some cells, while cell types that share common chemical components (yellow to light blue), as well as cell types with completely orthogonal compositions (dark blue), appear from time to time (Figure 2(III)(iii)). Also, the number of replicating chemical species in each cell is slightly increased (see Figure 2(III)(ii)). As D decreases to the order of 0.001, more cells with lower similarity appear, and the compositional and phenotypic diversity increase further. We statistically studied the quantitative dependence of diversity on the parameter D using different network samples. As D decreases below ∼0.1, the compositional diversity of each protocell and the phenotypic diversity at the population level increase (Figure 3(A)). To confirm that the transition occurs independently of the network, we measure the number of species present in each cell as an index of compositional diversity, and the number present in the cell population as an index of phenotypic diversity (for the latter we impose the condition that a molecule species exists in more than 10 cells, to discard species that happen to exist because of the random structural changes), averaged over 30 different networks. An increase in the number of species in the cell population larger than that in each cell indicates an increase of phenotypic diversity in addition to compositional diversity. With the increase in both diversities, reproductive fidelity decreases both at the individual level (Figure 3B(a)) and at the population level, i.e., over all pairs of 10^4 division events in 30 different networks (Figure 3B(b)). Protocells with different hypercycles start to appear below D ∼ 0.1 (see also Figure 2).

Figure 3. Quantitative dependence of diversity. (A) Compositional and phenotypic diversity plotted as a function of D: the numbers of chemical species included in each cell (left ordinate), and in more than 10 cells out of M_tot cells (right ordinate), are shown. (B) (a) Average similarity of mother and daughter cells and (b) average similarity between cells in 10^4 successive division events in 30 different networks, plotted against D. For (b), the average over cells with positive similarity (red) is shown in addition to the average over all cells (green). (C) Dependence of compositional (left ordinate) and phenotypic (right ordinate) diversity on the number of cells, M_tot, with fixed D = 0.01. (D) Average division speed as a function of D. For reference, a line linear in D is plotted. For D > 0.2, an estimate of the division speed (∼0.0005) is also shown with a dotted line (see main text). Unless otherwise indicated, the data are obtained as averages over 10^5 division events in 30 different networks. The parameters are K_M = K_R = 200, N = 1000, and μ = 0.001.

The transition to increased diversity generally occurs for sufficient resource diversity, i.e., for large K_R, independent of the choice of reaction network, given K_M = K_R. The phenotypic diversity increases as ∼K_R, but is bounded by the finite number of interacting cells, M_tot (Additional file 1). As M_tot increases, the number of coexisting cell types increases, while the compositional diversity, i.e., the number of components in each cell type, decreases (see Figure 3(C)). This trade-off between compositional and phenotypic diversity suggests that each cell type specializes in fewer chemical components as the number of cell types increases. Altogether, the data show transition behavior around D = D_c ∼ 0.1. Below D_c the division speed decreases, while above D_c it is approximately constant (Figure 3(D); see also below). This suggests that at the transition point the maximum inflow and consumption rates of resources are balanced. The maximum inflow rate is estimated as \(D \bar {{S_{j}^{0}}}\), where \(\bar {{S_{j}^{0}}}\) is a typical reservoir concentration.
The intrinsic consumption rate by all cells is estimated to be \(M_{\text {tot}} \bar {c_{i}} {p_{j}^{c}}\), where \(\bar {c_{j}}\) is a typical catalytic activity, and \({p_{j}^{c}}\) is the probability of picking up a pair that replicates the molecule X j . When sufficient resources are available, the three-component hypercycle dominates, and \({p_{j}^{c}} \sim 1/9\). Therefore, the value of D at which the maximum inflow and consumption rates of resources are balanced is estimated as \(D = M_{\text {tot}} \bar {c} /9\bar {{S_{j}^{0}}}\). Since c i is distributed homogeneously in [0,1], its average value is 0.5; however the remaining components are typically biased to have higher catalytic activities, and \(\bar {c}\) therefore typically ranges from 0.7 to 0.8. On the other hand, \({S_{j}^{0}}\) is distributed homogeneously in [0,M tot], so the simple average of \(\bar {{S_{j}^{0}}}\) is 50 with M tot=100, but the remaining components are biased to have \(\bar {{S_{j}^{0}}} \sim 70\). Thus the critical value of D is approximately 0.11−0.13. It is noteworthy that below D c the division speed decreases more slowly than the inflow rate, as indicated by the line proportional to D(Figure 3(D)), suggesting that cells can utilize more diverse resources for growth by increasing the available number of resource species. For D>0.2, sufficient resources are available, thus, the intrinsic reaction rate of the three-component hypercycle is the main determinant of the division speed. The probability, p c , for picking up a pair between which a catalytic reaction exists is ∼1/3. Using the above typical catalytic activity \(\bar {c}\), the division speed is estimated as \(2\bar {c}/3N\), which is approximately 0.0005 for N=1000 (See Figure 3(D)). Illustration by a simple case Why does the transition to diversity occur with the decrease in resources? A simple case illustrates this diversity transition from dominance of a single type to coexistence of various types. 
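Plugging in the typical values quoted above reproduces both the range for D_c and the D > 0.2 division-speed estimate (a small arithmetic check; the variable names are ours):

```python
M_tot = 100
p_c = 1 / 9        # probability of picking a replicating pair in a
                   # three-component hypercycle
S0_bar = 70        # surviving resources are biased toward ~70 (out of M_tot)

# Balance of maximum inflow D * S0_bar against consumption M_tot * c_bar * p_c,
# for the typical surviving activities c_bar in [0.7, 0.8]:
D_c = [M_tot * c_bar * p_c / S0_bar for c_bar in (0.7, 0.8)]
print(D_c)         # ~ [0.111, 0.127], i.e. the quoted range 0.11-0.13

# For D > 0.2 the division speed is set by the hypercycle itself:
N = 1000
speed = 2 * 0.75 / (3 * N)   # 2*c_bar/(3N) with c_bar ~ 0.75
print(speed)                 # -> 0.0005
```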
Consider two types of cells that compete for resources. One type (A) consists of molecules X and Y, while the other type (B) consists of molecules X and Z. The molecule species mutually catalyze the replication of each other to form a minimal hypercycle as follows:

$$ X + Y + S_{X} \rightarrow 2X+Y, \,\,\, Y + X + S_{Y} \rightarrow 2Y+X $$

$$ X + Z + S_{X} \rightarrow 2X+Z, \,\,\, Z + X + S_{Z} \rightarrow 2Z+X. $$

We denote the intrinsic catalytic activities of X, Y, and Z by c_X, c_Y, and c_Z, respectively. Each reaction to synthesize X, Y, or Z utilizes the resource S_X, S_Y, or S_Z, respectively, which are shared by the cells. While the rate equations of this simple case are investigated in detail, both analytically and numerically, in Additional file 1, here we show results of stochastic simulations in which the setup is given in the same way as in the original model, and we present intuitive explanations of the result. We consider M_tot cells, each of which consists of either the pair X, Y or the pair X, Z, corresponding to type A or B, respectively. The resources S_X, S_Y, and S_Z flow from the outer reservoir with diffusion constant D. For simplicity, we assume c_X = c_Y = c_Z = 1 and \({S^{0}_{X}} = {S^{0}_{Y}} = {S^{0}_{Z}} = M_{\text {tot}}\). The diffusion constants are identical for X, Y, and Z. The evolution of the number of type-A cells, n_A, for different values of D is given in Figure 4. For D = 0.3, either A or B goes extinct after a relatively short time span. Around D = 0.2 the transient time before extinction increases, and for D = 0.1 the two types coexist over more than 10^5 division events.

Figure 4. Time evolution of n_A for (A) D = 0.3, (B) D = 0.2, and (C) D = 0.1. The initial state is given as n_A = n_B = M_tot/2, and within each type A and B we randomly distribute the molecules X and Y, or X and Z, respectively, with equal probabilities. Here M_tot = 100 and N = 1000. Different colors indicate different simulation runs.

The average number of division events to achieve the dominant state as a function of D is given in Figure 5. It is clear that the coexistence time starts to increase around D* = 0.25, which is consistent with the point where competition for the resources sets in. In the stationary state without competition, the number of X in each cell is equal to the number of Y in type A, and to the number of Z in type B, i.e., \({N_{X}^{A}} = {N_{Y}^{A}}\) and \({N_{X}^{B}} = {N_{Z}^{B}}\), where \({N_{i}^{I}}\) is the number of molecules i (i = X, Y, Z) in type I (I = A, B). Thus, the point where S_X becomes limited is given by \(D^{*} {S_{X}^{0}} = \frac {1}{4} M_{\text {tot}}\); therefore D* = 1/4 for \({S_{X}^{0}} = M_{\text {tot}}\).

Figure 5. Average number of division events, as a function of D, to achieve a state where either A or B cells dominate the system. Here M_tot = 100 and N = 1000. The dotted lines show D* = 1/4 and D+ = 1/9.

As D is decreased further, the coexistence of types A and B is achieved when S_Y and S_Z are also limited and competed for by the cells. In this case, the coexistence is stable, as analyzed in Additional file 1. In the small-D limit where all the resources are limited, the steady state approaches \({N_{X}^{A}} = 2 {N_{Y}^{A}}\), \({N_{X}^{B}} = 2{N_{Z}^{B}}\). This gives D+ S^0 = M_tot/9; for S^0 = M_tot, D+ ∼ 0.11. This is also consistent with the observations in Figure 4. The transition from dominance to coexistence can be explained as follows. When all the resources are supplied sufficiently, the population of each cell type grows exponentially, with a growth rate proportional to the number n_A(B) of cells of each type A and B, i.e., dn_A(B)/dt ∝ γ_A(B) n_A(B), where the proportionality coefficient γ_A(B) depends on the resource abundances S_Y(Z). In this case, Darwinian selection works, so that the stable solution of the equations is dominance of the fittest type, with the larger γ.
This selection process works as long as the resources S_Y(Z) are sufficient; competition for the single common resource S_X then results in dominance of a single type. However, when all the resources are limited, competition for the available resources S_Y(Z) among the n_A(B) cells decreases γ_A(B) so that it becomes inversely proportional to n_A(B). The population dynamics are therefore represented by dn_A(B)/dt ∝ c_A(B), with constants c_A(B), when the maximum inflow is decreased to balance the rate of resource consumption. In this form, the solution with two coexisting types is known to be stable [9,28], as shown in Additional file 1. The transition to diversity in composition and phenotypes in the original model is based on this change from exponential to linear growth due to resource limitation. Conversely, this argument on the diversity transition supports the generality of our result, since the growth in the numbers of chemical components produced by catalytic reactions changes from exponential to linear as resources decrease. In summary, we show that the coexistence of diverse compositions and cell types is the result of competition for a variety of limited resources. We find that a transition to diversity occurs both in chemical compositions and in protocell types as the resource supply is decreased, when the maximum inflow and consumption of resources are balanced. With a simple case, we also demonstrate that the diversity is based on the change from exponential to linear growth due to resource limitation. When life originated, a set of diverse, self-replicating catalytic polymers (replicators) would have emerged from the primordial chemical mixture to make a reproducing protocell. Although the importance of molecular and protocellular diversity was noted by Dyson [5], its origins have not been well addressed, especially compared with diversity in ecological systems.
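The change of growth law can be illustrated with a toy two-type dynamics. This is our own simplified sketch (not the analysis of Additional file 1): a dilution term keeps n_A + n_B constant, and the raw growth rate is either exponential (∝ n, ample resources) or constant (resource-limited). The fitter type excludes the other in the first case, while both types persist in the second.

```python
def evolve(growth, n, dt=0.01, steps=20000):
    """Two-type dynamics dn_i/dt = g_i - n_i * phi, where the dilution term
    phi = (g_A + g_B) / (n_A + n_B) keeps the total population constant.
    growth(n_i, i) returns the raw growth rate of type i."""
    total = sum(n)
    for _ in range(steps):                 # forward-Euler integration
        g = [growth(n[0], 0), growth(n[1], 1)]
        phi = sum(g) / total
        n = [n[i] + dt * (g[i] - n[i] * phi) for i in range(2)]
    return n

gamma = [1.0, 0.9]   # type A slightly fitter than type B
c     = [1.0, 0.9]

# Exponential growth (ample resources): Darwinian exclusion of type B.
exp_n = evolve(lambda x, i: gamma[i] * x, [50.0, 50.0])

# Linear growth (each type's rate capped by its own limiting resource):
# stable coexistence with n_i proportional to c_i.
lin_n = evolve(lambda x, i: c[i], [50.0, 50.0])
```

Even a small fitness difference wipes out type B under exponential growth, whereas under linear growth both types settle near n_i = c_i/(c_A + c_B) of the total, which is the mechanism behind the coexistence observed below D_c.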
Our results indicate that competition for a variety of limiting resources can be a strong driving force to diversify the intracellular dynamics of a catalytic reaction network and to develop diverse protocell types in a primitive stage of life. Indeed, it is natural that diverse chemical resources were available in the environment, and competition for resources increases as the protocells reproduce and more cells compete. Thus, diversification in composition and in protocell types is an inevitable outcome. According to our results, the diversification is understood as a kind of phase transition in the population dynamics with decreasing resources. In ecology, the niche dimension hypothesis was proposed for plant communities, in which the number of coexisting species increases with the number of limiting factors, while diversity decreases with greater resource abundance [29-31]. Our results suggest that such population dynamics are also possible in primitive mixtures of catalytic molecules competing for a variety of resources, even without the genetic changes required for speciation.

References

1. Rasmussen S, Bedau MA, Chen L, Deamer D, Krakauer DC, Packard NH, et al. (eds.) Protocells: Bridging Nonliving and Living Matter. Cambridge: The MIT Press; 2008.
2. Blain JC, Szostak JW. Progress Toward Synthetic Cells. Annu Rev Biochem. 2014; 83:615-40.
3. Noireaux V, Maeda YT, Libchaber A. Development of an artificial cell, from self-organization to computation and self-reproduction. Proc Natl Acad Sci USA. 2011; 108:3474-80.
4. Kauffman SA. The Origins of Order: Self-Organization and Selection in Evolution. Oxford, UK: Oxford University Press; 1993.
5. Dyson F. Origins of Life. Cambridge: Cambridge University Press; 1985.
6. Ganti T. The Principles of Life. Oxford, UK: Oxford University Press; 2003.
7. Segre D, Lancet D. Composing life. EMBO Reports. 2000; 1:217-22.
8. Eigen M, Schuster P. The Hypercycle. New York: Springer; 1977.
9. Eigen M. Steps Towards Life. Oxford: Oxford University Press; 1992.
10. Smith JM. Hypercycles and the origin of life. Nature. 1979; 280:445-6.
11. Szathmary E, Demeter L. Group selection of early replicators and the origin of life. J Theor Biol. 1987; 128:463-86.
12. Altmeyer S, McCaskill JS. Error Threshold for Spatially Resolved Evolution in the Quasispecies Model. Phys Rev Lett. 2001; 86:5819-22.
13. Boerlijst MC, Hogeweg P. Spatial wave structure in pre-biotic evolution: Hypercycles stable against parasites. Physica D. 1991; 48:17-28.
14. Kaneko K. Recursiveness, switching, and fluctuations in a replicating catalytic network. Phys Rev E. 2003; 68:031909.
15. Kaneko K. On Recursive Production and Evolvability of Cells: Catalytic Reaction Network Approach. Adv Chem Phys. 2005; 130:543-98.
16. Kamimura A, Kaneko K. Reproduction of a Protocell by Replication of a Minority Molecule in a Catalytic Reaction Network. Phys Rev Lett. 2010; 105:268103.
17. Markovitch O, Lancet D. Multispecies population dynamics of prebiotic compositional assemblies. J Theor Biol. 2014; 357:26-34.
18. Ruiz-Mirazo K, Briones C, de la Escosura A. Prebiotic Systems Chemistry: New Perspectives for the Origins of Life. Chem Rev. 2014; 114:285-366.
19. Kacian DL, Mills DR, Kramer FR, Spiegelman S. A Replicating RNA Molecule Suitable for a Detailed Analysis of Extracellular Evolution and Replication. Proc Natl Acad Sci USA. 1972; 69(10):3038-42.
20. Fontana W, Buss LW. The Arrival of the Fittest: Toward a Theory of Biological Organization. Bull Math Biol. 1994; 56:1-64.
21. Ray RS. An approach to the synthesis of life. In: Langton CG, editor. Artificial Life II. Redwood City: Addison Wesley; 1991. p. 371-408.
22. May RM. Will a Large Complex System be Stable? Nature. 1972; 238:413-4.
23. McCann KS. The diversity-stability debate. Nature. 2000; 405:228-33.
24. Loreau M, Naeem S, Inchausti P, Bengtsson J, Grime JP, Hector A, et al. Biodiversity and Ecosystem Functioning: Current Knowledge and Future Challenges. Science. 2001; 294:804-8.
25. Hughes AR, Inouye BD, Johnson MTJ, Underwood N, Vellend M. Ecological consequences of genetic diversity. Ecol Lett. 2008; 11:609-23.
26. Whitham TG, Bailey JK, Schweitzer JA, Shuster SM, Bangert RK, LeRoy CJ, et al. A framework for community and ecosystem genetics: from genes to ecosystems. Nat Rev Genet. 2006; 7:510-23.
27. Chen IA, Roberts RW, Szostak JW. The emergence of competition between model protocells. Science. 2004; 305:1474-6.
28. Biebraicher CK, Eigen M, Gardiner WC. Kinetics of RNA replication. Biochem. 1983; 22:2544.
29. Tilman D. Niche tradeoffs, neutrality, and community structure: A stochastic theory of resource competition, invasion, and community assembly. Proc Natl Acad Sci USA. 2004; 101:10854-61.
30. Harpole WS, Tilman D. Grassland species loss resulting from reduced niche dimension. Nature. 2007; 446:791-3.
31. Hutchinson GE. Concluding Remarks. Cold Spring Harb Symp Quant Biol. 1957; 22:415-27.

This work is supported by the Japan Society for the Promotion of Science. This work is also supported in part by the Platform for Dynamic Approaches to Living System from the Ministry of Education, Culture, Sports, Science, and Technology of Japan, and the Dynamical Micro-scale Reaction Environment Project of the Japan Science and Technology Agency.

Department of Basic Science, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, 153-8902, Tokyo, Japan: Atsushi Kamimura & Kunihiko Kaneko. Correspondence to Atsushi Kamimura.

AK and KK conceived and designed the research. AK performed the simulation and analysis. AK and KK wrote the paper. Both authors read and approved the final manuscript.

Additional file 1: Supplementary material for transition to diversification by competition for multiple resources in a catalytic reaction network.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Kamimura, A., Kaneko, K. Transition to diversification by competition for multiple resources in catalytic reaction networks. J Syst Chem 6, 5 (2015). https://doi.org/10.1186/s13322-015-0010-1. Accepted: 17 March 2015.

Keywords: Protocells; Resource competition; Catalytic network
\begin{document} \theoremstyle{plain} \swapnumbers \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \newtheorem{slemma}[thm]{Separation Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{subsec}[thm]{} \newtheorem*{thma}{Theorem A} \newtheorem*{thmb}{Theorem B} \newtheorem*{propc}{Proposition C} \newtheorem{conj}{Conjecture} \theoremstyle{definition} \newtheorem{defn}[thm]{Definition} \newtheorem{assume}[thm]{Assumption} \newtheorem{example}[thm]{Example} \newtheorem{examples}[thm]{Examples} \newtheorem{notn}[thm]{Notation} \theoremstyle{remark} \newtheorem{remark}[thm]{Remark} \newtheorem{aside}[thm]{Aside} \newtheorem{ack}[thm]{Acknowledgements} \newdir{ >}{{}*!/-5pt/@{>}} \newenvironment{myeq}[1][] {\stepcounter{thm}\begin{equation}\tag{\thethm}{#1}} {\end{equation}} \newcommand{\myeqn}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{#2}\end{equation}} \newcommand{\diagr}[1]{\begin{equation*}\xymatrix{#1}\end{equation*}} \newcommand{\diags}[1]{\begin{equation*}\xymatrix@R=10pt@C=25pt{#1}\end{equation*}} \newcommand{\diagt}[1]{\begin{equation*}\xymatrix@R=15pt@C=25pt{#1}\end{equation*}} \newcommand{\mydiag}[2][]{\myeq[#1]\xymatrix{#2}} \newcommand{\mydiagram}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix{#2}}\end{equation}} \newcommand{\myodiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=25pt@C=15pt{#2}}\end{equation}} \newcommand{\mypdiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=39pt@C=5pt{#2}}\end{equation}} \newcommand{\myqdiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=25pt@C=21pt{#2}}\end{equation}} \newcommand{\myrdiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=10pt@C=25pt{#2}}\end{equation}} \newcommand{\mysdiag}[2][] {\stepcounter{thm}\begin{equation} 
\tag{\thethm}{#1}\vcenter{\xymatrix@R=18pt@C=25pt{#2}}\end{equation}} \newcommand{\mytdiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=30pt@C=25pt{#2}}\end{equation}} \newcommand{\myudiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=25pt@C=30pt{#2}}\end{equation}} \newcommand{\myvdiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=25pt@C=45pt{#2}}\end{equation}} \newcommand{\mywdiag}[2][] {\stepcounter{thm}\begin{equation} \tag{\thethm}{#1}\vcenter{\xymatrix@R=35pt@C=20pt{#2}}\end{equation}} \newcommand{\supsect}[2] {\vspace*{-5mm}\quad\\\begin{center}\textbf{{#1}} .~~~~\textbf{{#2}}\end{center}} \newenvironment{mysubsection}[2][] {\begin{subsec}\begin{upshape}\begin{bfseries}{#2.} \end{bfseries}{#1}} {\end{upshape}\end{subsec}} \newenvironment{mysubsect}[2][] {\begin{subsec}\begin{upshape}\begin{bfseries}{#2 .} \end{bfseries}{#1}} {\end{upshape}\end{subsec}} \newcommand{\sect}{\setcounter{thm}{0}\section} \newcommand{\ -- \ }{\ -- \ } \newcommand{-- \ }{-- \ } \newcommand{\w}[2][ ]{\ \ensuremath{#2}{#1}\ } \newcommand{\ww}[1]{\ \ensuremath{#1}} \newcommand{\wwb}[1]{\ \ensuremath{(#1)}-} \newcommand{\wb}[2][ ]{\ (\ensuremath{#2}){#1}\ } \newcommand{\wref}[2][ ]{\ \eqref{#2}{#1}\ } \newcommand{\wwref}[2][ ]{\ \eqref{#2}{#1}} \newcommand{\wbref}[2][ ]{\eqref{#2}{#1}} \newcommand{\xra}[1]{\xrightarrow{#1}} \newcommand{\xla}[1]{\xleftarrow{#1}} \newcommand{\xrightarrow{\sim}}{\xrightarrow{\sim}} \newcommand{\hookrightarrow}{\hookrightarrow} \newcommand{\to\hspace{-5 mm}\to}{\to\hspace{-5 mm}\to} \newcommand{\xepic}[1]{\stackrel{#1}{\to\hspace{-5 mm}\to}} \newcommand{\adj}[2]{\substack{{#1}\\ \rightleftharpoons \\ {#2}}} \newcommand{\ccsub}[1]{\circ_{#1}} \newcommand{:=}{:=} \newcommand{\Leftrightarrow}{\Leftrightarrow} \newcommand{\longrightarrow}{\longrightarrow} \newcommand{\hspace{10 mm}}{\hspace{10 mm}} \newcommand{\hspace{5 mm}}{\hspace{5 mm}} \newcommand{\hspace{3 
mm}}{\hspace{3 mm}} \newcommand{ }{ } \newcommand{ }{ } \newcommand{ }{ } \newcommand{\rest}[1]{\lvert_{#1}} \newcommand{\lra}[1]{\langle{#1}\rangle} \newcommand{\lrau}[2]{\langle\underline{#1}{#2}\rangle} \newcommand{\li}[1]{_{({#1})}} \newcommand{{\EuScript A}}{{\EuScript A}} \newcommand{{\mathcal C}}{{\mathcal C}} \newcommand{{\mathcal D}}{{\mathcal D}} \newcommand{{\EuScript E}}{{\EuScript E}} \newcommand{{\EuScript F}}{{\EuScript F}} \newcommand{{\mathcal G}}{{\mathcal G}} \newcommand{{\mathcal E}}{{\mathcal E}} \newcommand{I}{I} \newcommand{{\mathcal J}}{{\mathcal J}} \newcommand{{\mathcal K}}{{\mathcal K}} \newcommand{{\mathcal L}}{{\mathcal L}} \newcommand{\LL_{{\mathbb Q}}}{{\mathcal L}_{{\mathbb Q}}} \newcommand{\overline\cJ}{\overline{\mathcal J}} \newcommand{\widehat\cJ}{\widehat{\mathcal J}} \newcommand{{\mathcal E_{\ast}}}{{\mathcal E_{\ast}}} \newcommand{{\mathcal K}}{{\mathcal K}} \newcommand{{\mathcal M}}{{\mathcal M}} \newcommand{\M_{\A}}{{\mathcal M}_{{\EuScript A}}} \newcommand{{\EuScript O}^{\A}}{{\EuScript O}^{{\EuScript A}}} \newcommand{\OA_{+}}{{\EuScript O}^{\A}_{+}} \newcommand{\widehat{{\EuScript O}}^{\A}}{\widehat{{\EuScript O}}^{{\EuScript A}}} \newcommand{{\EuScript N}}{{\EuScript N}} \newcommand{{\EuScript O}}{{\EuScript O}} \newcommand{\OO_{+}}{{\EuScript O}_{+}} \newcommand{\widehat{\OO}}{\widehat{{\EuScript O}}} \newcommand{\M_{\OO}}{{\mathcal M}_{{\EuScript O}}} \newcommand{\M_{\Op}}{{\mathcal M}_{\OO_{+}}} \newcommand{{\mathcal P}}{{\mathcal P}} \newcommand{\Pe}[1]{{\EuScript Pe}\sp{#1}} \newcommand{{\mathcal PO}}{{\mathcal PO}} \newcommand{{\mathcal Q}}{{\mathcal Q}} \newcommand{{\mathcal Q}_{+}}{{\mathcal Q}_{+}} \newcommand{{\mathcal S}}{{\mathcal S}} \newcommand{\Ss_{\ast}}{{\mathcal S}_{\ast}} \newcommand{\Ss_{\ast}^{\operatorname{red}}}{{\mathcal S}_{\ast}^{\operatorname{red}}} \newcommand{{\mathcal T}}{{\mathcal T}} \newcommand{\TT_{\ast}}{{\mathcal T}_{\ast}} \newcommand{{\mathcal U}}{{\mathcal U}} \newcommand{{\mathcal 
V}}{{\mathcal V}} \newcommand{{\EuScript V}}{{\EuScript V}} \newcommand{\eVn}[1]{{\EuScript V}\lra{#1}} \newcommand{{\mathcal W}}{{\mathcal W}} \newcommand{{\mathcal X}}{{\mathcal X}} \newcommand{{\EuScript X}}{{\EuScript X}} \newcommand{{\mathcal Y}}{{\mathcal Y}} \newcommand{{\EuScript Y}}{{\EuScript Y}} \newcommand{{\mathcal Z}}{{\mathcal Z}} \newcommand{{\EuScript Z}}{{\EuScript Z}} \newcommand{\widetilde{\mathbf{J}}}{\widetilde{\mathbf{J}}} \newcommand{\overline{\cJ}}{\overline{{\mathcal J}}} \newcommand{\mathbf{K}}{\mathbf{K}} \newcommand{\overline{\mathbf{M}}}{\overline{\mathbf{M}}} \newcommand{\overline{\mathbf{N}}}{\overline{\mathbf{N}}} \newcommand{\overline{\mathbf{P}}}{\overline{\mathbf{P}}} \newcommand{\overline{\mathbf{Q}}}{\overline{\mathbf{Q}}} \newcommand{\mathbf{M}}{\mathbf{M}} \newcommand{\mathbf{N}}{\mathbf{N}} \newcommand{\mathbf{P}}{\mathbf{P}} \newcommand{\mathbf{Q}}{\mathbf{Q}} \newcommand{\mathbf{R}}{\mathbf{R}} \newcommand{\mathbf{-1}}{\mathbf{-1}} \newcommand{\mathbf{0}}{\mathbf{0}} \newcommand{\mathbf{1}}{\mathbf{1}} \newcommand{\mathbf{2}}{\mathbf{2}} \newcommand{\mathbf{3}}{\mathbf{3}} \newcommand{\mathbf{4}}{\mathbf{4}} \newcommand{\mathbf{n}}{\mathbf{n}} \newcommand{\hy}[2]{{#1}\text{-}{#2}} \newcommand{\Alg}[1]{{#1}\text{-}{\EuScript Alg}} \newcommand{{\EuScript Ab}}{{\EuScript Ab}} \newcommand{{\Ab{\EuScript Gp}}}{{{\EuScript Ab}{\EuScript Gp}}} \newcommand{\Ab_{\Lambda}}{{\EuScript Ab}_{\Lambda}} \newcommand{{\EuScript Cat}}{{\EuScript Cat}} \newcommand{{\EuScript Ch}}{{\EuScript Ch}} \newcommand{{\EuScript D}i{\EuScript G}\sb{\ast}}{{\EuScript D}i{\EuScript G}\sb{\ast}} \newcommand{{\EuScript Gp}}{{\EuScript Gp}} \newcommand{{\EuScript Gpd}}{{\EuScript Gpd}} \newcommand{\RM}[1]{\hy{{#1}}{\EuScript Mod}} \newcommand{\RM{\Lambda}}{\RM{\Lambda}} \newcommand{{\EuScript Set}}{{\EuScript Set}} \newcommand{\Set_{\ast}}{{\EuScript Set}_{\ast}} \newcommand{\SC}{\hy{{\mathcal S}}{{\EuScript Cat}}} \newcommand{\VC}{\hy{{\mathcal 
V}}{{\EuScript Cat}}} \newcommand{\Tal}[1][ ]{$\Theta$-algebra{#1}} \newcommand{\Alg{\Theta}}{\Alg{\Theta}} \newcommand{\Pa}[1][ ]{$\Pi$-algebra{#1}} \newcommand{\PAa}[1][ ]{$\Pi\sb{\A}$-algebra{#1}} \newcommand{\PAMa}[1][ ]{$\Pi\sb{\A}\sp{\M}$-algebra{#1}} \newcommand{\PAMas}[1][ ]{$\Pi\sb{\A}\sp{\M}$-algebras{#1}} \newcommand{\Pi\sb{\A}}{\Pi\sb{{\EuScript A}}} \newcommand{\Pi\sb{\A}\sp{\M}}{\Pi\sb{{\EuScript A}}\sp{{\mathcal M}}} \newcommand{\Alg{\Pi}}{\Alg{\Pi}} \newcommand{\Alg{\PiA}}{\Alg{\Pi\sb{\A}}} \newcommand{\Alg{\PiAM}}{\Alg{\Pi\sb{\A}\sp{\M}}} \newcommand{\COC}{\hy{\CO}{{\EuScript Cat}}} \newcommand{(\Gpd,\OO)}{({\EuScript Gpd},{\EuScript O})} \newcommand{\GOC}{\hy{(\Gpd,\OO)}{{\EuScript Cat}}} \newcommand{(\Gpd,\Op)}{({\EuScript Gpd},\OO_{+})} \newcommand{\GOpC}{\hy{(\Gpd,\Op)}{{\EuScript Cat}}} \newcommand{\OC}{\hy{{\EuScript O}}{{\EuScript Cat}}} \newcommand{(\Ss,\OO)}{({\mathcal S},{\EuScript O})} \newcommand{(\Sa,\OO)}{(\Ss_{\ast},{\EuScript O})} \newcommand{\SOC}{\hy{(\Ss,\OO)}{{\EuScript Cat}}} \newcommand{\SaOC}{\hy{(\Sa,\OO)}{{\EuScript Cat}}} \newcommand{(\Ss,\Op)}{({\mathcal S},\OO_{+})} \newcommand{\SOpC}{\hy{(\Ss,\Op)}{{\EuScript Cat}}} \newcommand{(\Sa,\Op)}{(\Ss_{\ast},\OO_{+})} \newcommand{\SaOpC}{\hy{(\Sa,\Op)}{{\EuScript Cat}}} \newcommand{\HSO}[1]{H^{#1}\sb{\operatorname{SO}}} \newcommand{(\V,\OO)}{({\mathcal V},{\EuScript O})} \newcommand{\VOC}{\hy{(\V,\OO)}{{\EuScript Cat}}} \newcommand{\co}[1]{c({#1})} \newcommand{A_{\bullet}}{A_{\bullet}} \newcommand{B_{\bullet}}{B_{\bullet}} \newcommand{B\Lambda}{B\Lambda} \newcommand{\E_{\bullet}}{{\EuScript E}_{\bullet}} \newcommand{F_{\bullet}}{F_{\bullet}} \newcommand{G_{\bullet}}{G_{\bullet}} \newcommand{\K_{\bullet}}{{\mathcal K}_{\bullet}} \newcommand{V_{\bullet}}{V_{\bullet}} \newcommand{\widehat{V}}{\widehat{V}} \newcommand{\hVd}[1]{\widehat{V}\sp{\lra{#1}}\sb{\bullet}} \newcommand{\cVd}[1]{\breve{V}^{\lra{#1}}\sb{\bullet}} \newcommand{\tVd}[1]{\tilde{V}^{\lra{#1}}\sb{\bullet}} 
\newcommand{\qV}[2]{V^{\lra{#1}}_{#2}} \newcommand{\qVd}[1]{\qV{#1}{\bullet}} \newcommand{\qW}[2]{W^{({#1})}_{#2}} \newcommand{\qWd}[1]{\qW{#1}{\bullet}} \newcommand{W_{\bullet}}{W_{\bullet}} \newcommand{\tilde{W}_{\bullet}}{\tilde{W}_{\bullet}} \newcommand{\widehat{W}}{\widehat{W}} \newcommand{\whW\sb{\bullet}}{\widehat{W}\sb{\bullet}} \newcommand{X_{\bullet}}{X_{\bullet}} \newcommand{Y_{\bullet}}{Y_{\bullet}} \newcommand{semi-Postnikov section}{semi-Postnikov section} \newcommand{quasi-Postnikov section}{quasi-Postnikov section} \newcommand{\mathbb F}{\mathbb F} \newcommand{\bF_{p}}{\mathbb F_{p}} \newcommand{{\mathbb N}}{{\mathbb N}} \newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{F_{s}}{F_{s}} \newcommand{\Gamma_{+}}{\Gamma_{+}} \newcommand{\Gp,\Gamma}{\Gamma_{+},\Gamma} \newcommand{\check{\Gamma}_{+}}{\check{\Gamma}_{+}} \newcommand{\overline{\K}}{\overline{{\mathcal K}}} \newcommand{\overline{M}}{\overline{M}} \newcommand{\bar{X}}{\bar{X}} \newcommand{\bY}[1]{\overline{Y}\sb{#1}} \newcommand{\hX}[1]{\widehat{X}\sp{#1}} \newcommand{\overline{X}}{\overline{X}} \newcommand{\widetilde{X}}{\widetilde{X}} \newcommand{\pi_{\ast}}{\pi_{\ast}} \newcommand{\piul}[2]{\pi\sp{#1}\sb{#2}} \newcommand{\piAn}[1]{\piul{{\EuScript A}}{#1}} \newcommand{\piAn{\ast}}{\piAn{\ast}} \newcommand{\hpi}{\piul{{\EuScript A}}{0}} \newcommand{\pinat}[1]{\operatorname{\pi^{\natural}_{#1}}} \newcommand{\Po}[1]{\mathbf{P}^{#1}} \newcommand{\bE}[2]{\mathbf{E}({#1},{#2})} \newcommand{\tE}[2]{E({#1},{#2})} \newcommand{\tEL}[2]{E\sb{\Lambda}({#1},{#2})} \newcommand{\tPo}[1]{\Po{#1}} \newcommand{\sb{\operatorname{ab}}}{\sb{\operatorname{ab}}} \newcommand{\sb{\operatorname{arr}}}{\sb{\operatorname{arr}}} \newcommand{\operatorname{Arr}}{\operatorname{Arr}} \newcommand{\HAQ}[1]{H^{#1}} \newcommand{\HL}[1]{H^{#1}\sb{\Lambda}} \newcommand{\csk}[1]{\operatorname{csk}_{#1}} \newcommand{\operatorname{Coef}}{\operatorname{Coef}} 
\newcommand{\operatorname{Cok}}{\operatorname{Cok}} \newcommand{\operatorname{colim}}{\operatorname{colim}} \newcommand{\operatorname{hocolim}}{\operatorname{hocolim}} \newcommand{\cone}[1]{\operatorname{Co}(#1)} \newcommand{\sk}[1]{\operatorname{sk}_{#1}} \newcommand{\cskc}[1]{\operatorname{cosk}^{c}_{#1}} \newcommand{\operatorname{Fib}}{\operatorname{Fib}} \newcommand{\fiber}[1]{\operatorname{fiber}(#1)} \newcommand{\operatorname{fin}}{\operatorname{fin}} \newcommand{\hc}[1]{\operatorname{hc}_{#1}} \newcommand{\operatorname{ho}}{\operatorname{ho}} \newcommand{\operatorname{holim}}{\operatorname{holim}} \newcommand{\operatorname{Hom}}{\operatorname{Hom}} \newcommand{\underline{\Hom}}{\underline{\operatorname{Hom}}} \newcommand{\operatorname{Id}}{\operatorname{Id}} \newcommand{\operatorname{Im}}{\operatorname{Im}} \newcommand{\operatorname{inc}}{\operatorname{inc}} \newcommand{\operatorname{init}}{\operatorname{init}} \newcommand{\operatorname{Ker}}{\operatorname{Ker}} \newcommand{v_{\fin}}{v_{\operatorname{fin}}} \newcommand{\vfi}[1]{v{#1}_{\operatorname{fin}}} \newcommand{v_{\init}}{v_{\operatorname{init}}} \newcommand{\vin}[1]{v{#1}_{\operatorname{init}}} \newcommand{\mbox{\large $\star$}}{\mbox{\large $\star$}} \newcommand{\operatorname{Mor}}{\operatorname{Mor}} \newcommand{\operatorname{Obj}\,}{\operatorname{Obj}\,} \newcommand{\sp{\operatorname{op}}}{\sp{\operatorname{op}}} \newcommand{\operatorname{pt}}{\operatorname{pt}} \newcommand{\operatorname{Ran}}{\operatorname{Ran}} \newcommand{\operatorname{red}}{\operatorname{red}} \newcommand{\wPh}[1]{\widetilde{\Phi}\sb{#1}} \newcommand{\operatorname{map}}{\operatorname{map}} \newcommand{\map\,}{\operatorname{map}\,} \newcommand{\map_{\ast}}{\operatorname{map}_{\ast}} \newcommand{\tg}[1]{\widetilde{\gamma}_{#1}} \newcommand{\bdz}[1]{\bar{d}^{#1}_{0}} \newcommand{\mathbf{d}}{\mathbf{d}} \newcommand{\bd}[1]{\mathbf{d}^{#1}_{0}} \newcommand{\tbd}[1]{\widetilde{\mathbf{d}}^{#1}_{0}} 
\newcommand{\overline{\delta}}{\overline{\delta}} \newcommand{\mathbf{\Delta}}{\mathbf{\Delta}} \newcommand{\hD}[1]{\hat{\Delta}^{#1}} \newcommand{\tD}[1]{\tilde{\Delta}^{#1}} \newcommand{\tDl}[2]{\tilde{\Delta}^{#1}_{#2}} \newcommand{\tDp}[1]{\tD{#1}_{+}} \newcommand{\hDp}[1]{\hD{#1}_{+}} \newcommand{\bG}[1]{\overline{G}\sb{#1}} \newcommand{\bS}[1]{\mathbf{S}^{#1}} \newcommand{\bSp}[2]{\bS{#1}_{({#2})}} \newcommand{\bV}[1]{\overline{V}_{#1}} \newcommand{\overline{W}}{\overline{W}} \newcommand{\mathfrak{G}}{\mathfrak{G}} \newcommand{\cF}[1]{{\mathcal K}\sb{#1}} \newcommand{\partial_{0}}{\partial_{0}} \newcommand{\cfbase}[1]{\partial_{0}\cF{#1}} \newcommand{\cftop}[1]{\widetilde{\partial}\cF{#1}} \newcommand{\mC}[1]{\mathbf{C}\sb{#1}} \newcommand{\baC}[1]{\overline{C}\sb{#1}} \newcommand{\overline{E}'\sb{\ast}}{\overline{E}'\sb{\ast}} \newcommand{\buD}[1]{\overline{D}\sp{#1}} \newcommand{\baD}[1]{\overline{D}\sb{#1}} \newcommand{\baE}[1]{\overline{E}\sb{#1}} \newcommand{\baP}[1]{\overline{P}\sb{#1}} \newcommand{\mZ}[1]{\mathbf{Z}\sb{#1}} \newcommand{\mZu}[1]{\mathbf{Z}\sp{#1}} \newcommand{\latch}[1]{\mathbf{L}_{#1}} \newcommand{\match}[1]{\mathbf{M}_{#1}} \newcommand{\norm}[1]{\mathbf{N}_{#1}} \newcommand{\dprod}[2][]{\displaystyle \prod_{\begin{subarray}{c}{#2} \\ {#1}\end{subarray}}} \newcommand{\overline P}{\overline P} \newcommand{\smx}[1][k]{\Jul[x]{#1}} \newcommand{\snx}[1][k]{\mathfrak n^{#1}_x} \newcommand{\smnox}[1][k]{\mathfrak m^{#1}_{-x}} \newcommand{\ind}[1][k]{\indec_{#1}} \newcommand{\xcomma}[1][k]{(x \downarrow \Jul{#1})} \newcommand{\Iul}[2][]{I^{#1}_{#2}} \newcommand{\Jul}[2][]{{\mathcal J}^{#1}_{#2}} \newcommand{\Mul}[2][]{\mathbf{M}^{#1}_{#2}} \newcommand{\Nul}[2][]{\mathbf{N}^{#1}_{#2}} \newcommand{\Pul}[2][]{\mathbf{P}^{#1}_{#2}} \newcommand{\Psul}[2]{\Psi^{#1}_{#2}} \newcommand{\overline{\Psi}}{\overline{\Psi}} \newcommand{\uPsul}[2]{\overline{\Psi}^{#1}_{#2}} \newcommand{\Qul}[2][]{\mathbf{Q}^{#1}_{#2}} 
\newcommand{\Rul}[2][]{\mathbf{R}^{#1}_{#2}} \newcommand{\Yul}[2][]{Y^{#1}_{#2}} \newcommand{\Jul[x]{}}{\Jul[x]{}} \newcommand{\Jxk}[1][k]{\Jul[x]{#1}} \newcommand{\Mxk}[1][k]{\Mul[x]{#1}} \newcommand{\Nxk}[1][k]{\Nul[x]{#1}} \newcommand{\Pxk}[1][k]{\Pul[x]{#1}} \newcommand{\Qxk}[1][k]{\Qul[x]{#1}} \newcommand{\Rxk}[1][k]{\Rul[x]{#1}} \newcommand{\rIul}[2][]{\overlineI^{#1}_{#2}} \newcommand{\rJul}[2][]{\overline{\mathcal J}^{#1}_{#2}} \newcommand{\rMul}[2][]{\overline\mathbf{M}^{#1}_{#2}} \newcommand{\rNul}[2][]{\overline\mathbf{N}^{#1}_{#2}} \newcommand{\rPul}[2][]{\overline\mathbf{P}^{#1}_{#2}} \newcommand{\rQul}[2][]{\overline\mathbf{Q}^{#1}_{#2}} \newcommand{\rRul}[2][]{\overline\mathbf{R}^{#1}_{#2}} \newcommand{\rYul}[2][]{\overlineY^{#1}_{#2}} \newcommand{\rJul[x]{}}{\rJul[x]{}} \newcommand{\rJxk}[1][k]{\rJul[x]{#1}} \newcommand{\rMxk}[1][k]{\rMul[x]{#1}} \newcommand{\rNxk}[1][k]{\rNul[x]{#1}} \newcommand{\rPxk}[1][k]{\rPul[x]{#1}} \newcommand{\rQxk}[1][k]{\rQul[x]{#1}} \newcommand{\rRxk}[1][k]{\rRul[x]{#1}} \newcommand{\widetilde{Y}}{\widetilde{Y}} \newcommand{\widehat{Y}}{\widehat{Y}} \newcommand{\Yof}[2][]{Y_{#1}(#2)} \newcommand{\Ybarof}[2][]{\overline{Y}_{#1}(#2)} \newcommand{\Yk}[1][k]{\Yul[ ]{#1}} \newcommand{\Yxk}[1][k]{\Yul[x]{#1}} \newcommand{\tYxk}[2][x]{\widetilde{Y}^{#1}_{#2}} \newcommand{\tYk}[1]{\widetilde{Y}^{}_{#1}} \newcommand{\mxk}[2][x]{\operatorname m^{#1}_{#2}} \newcommand{\hmxk}[2][x]{\widehat{\operatorname m}^{#1}_{#2}} \newcommand{\rmxk}[2][x]{\overline{\operatorname m}^{#1}_{#2}} \newcommand{\operatorname C}{\operatorname C} \newcommand{\operatorname {cof}}{\operatorname {cof}} \newcommand{\cofib}[1]{\operatorname {cof}(#1)} \newcommand{\Sigma'}{\Sigma'} \newcommand{\mapcone}[1]{\operatorname M_{#1}} \newcommand{\sigmaul}[2]{\sigma^{#1}_{#2}} \newcommand{\operatorname{forget}}{\operatorname{forget}} \newcommand{\overline{\forget}}{\overline{\operatorname{forget}}} \newcommand{\displaystyle \prod}{\displaystyle \prod} 
\newcommand{\Jass}[1][]{weak lattice} \newcommand{\operatorname{Cyl}}{\operatorname{Cyl}} \newcommand{\operatorname{Lift}}{\operatorname{Lift}} \newcommand{\operatorname{Path}}{\operatorname{Path}} \newcommand{\operatorname{pr}}{\operatorname{pr}} \newcommand{\overline{p}}{\overline{p}} \newcommand{\operatorname{rel}}{\operatorname{rel}} \newcommand{\operatorname{Var}}{\operatorname{Var}} \newcommand{\varu}[1]{\operatorname{Var}_u(#1)} \newcommand{\ovaru}[1]{\overline{\operatorname{Var}}_u(#1)} \newcommand{\lift}[2]{\operatorname{Lift}_{#1}(#2)} \newcommand{Y\langle u \rangle}{Y\langle u \rangle} \newcommand{W\langle p \rangle}{W\langle p \rangle} \newcommand{\overline Y \langle u \rangle}{\overline Y \langle u \rangle} \newcommand{\overline W \langle p,u \rangle}{\overline W \langle p,u \rangle} \newcommand{P^{\rel}}{P^{\operatorname{rel}}} \newcommand{\sim\sp{l}}{\sim\sp{l}} \newcommand{\sim\sp{r}}{\sim\sp{r}} \newcommand{\rcol}[1]{\textcolor{red}{#1}} \newcommand{\rgrn}[1]{\textcolor{green}{#1}} \newcommand{\rmag}[1]{\textcolor{magenta}{#1}} \newcommand{\rblue}[1]{\textcolor{blue}{#1}} \title{A Constructive Approach to Higher Homotopy Operations} \author[D.~Blanc]{David Blanc} \address{Department of Mathematics\\ University of Haifa\\ 34988 Haifa\\ Israel} \email{[email protected]} \author [M.W.~Johnson]{Mark W.~Johnson} \address{Department of Mathematics\\ Penn State Altoona\\ Altoona, PA 16601\\ USA} \email{[email protected]} \author[J.M.~Turner]{James M.~Turner} \address{Department of Mathematics\\ Calvin College\\ Grand Rapids, MI 49546\\ USA} \email{[email protected]} \date{\today} \subjclass{Primary: 55P99; \ secondary: 18G55, 55Q35, 55S20} \keywords{Higher homotopy operations, homotopy-commutative diagram, obstructions} \begin{abstract} In this paper we provide an explicit general construction of higher homotopy operations in model categories, which include classical examples such as (long) Toda brackets and (iterated) Massey products, but also cover 
unpointed operations not usually considered in this context. We show how such operations, thought of as obstructions to rectifying a homotopy-commutative diagram, can be defined in terms of a double induction, yielding intermediate obstructions as well. \end{abstract} \maketitle \setcounter{section}{0} \section*{Introduction} \label{cint} Secondary homotopy and cohomology operations have always played an important role in classical homotopy theory (see, e.g., \cite{AdHI,BJMahT,MPetS,PSteS} and later \cite{GPorW,GPorH,AlldR,MOdaC,SnaiM,CWaneG}), as well as other areas of mathematics (see \cite{AlSabtFT,FGMorgC,GLevinT,GrantT,SSterQ}). Toda's construction of what we now call Toda brackets in \cite{TodG} (cf.\ \cite[Ch.\ I]{TodC}) was the first example of a secondary homotopy operation \emph{stricto sensu}, although Adem's secondary cohomology operations and Massey's triple products in cohomology appeared at about the same time (see \cite{AdemI,MassN}). In \cite[Ch.\ 3]{AdHI}, Adams first tried to give a general definition of secondary stable cohomology operations (see also \cite{HarpSC}). Kristensen gave a description of such operations in terms of chain complexes (cf.\ \cite{KristS,KKrisS}), which was extended by Maunder and others to $n$-th order cohomology operations (see \cite{MaunC,HoltH,KlauC,KlauT}). Higher operations have also figured over the years in rational homotopy theory, where they are more accessible to computation (see, e.g., \cite{AlldR,SBasuS,RetaL,TanrH}). In more recent years there has been a certain revival of interest in the subject, notably in algebraic contexts (see for example, \cite{BaskT,GartH,SagaU,EfraZ,CFranH,HWickS}). In \cite{SpanH}, Spanier gave a general theory of higher order homotopy operations (extending the definition of secondary operations given in \cite{SpanS}). 
Special cases of higher order homotopy operations appeared in \cite{GWalkL,KraiM,MoriHT,BBGondH}, and other general definitions may be found in \cite{BMarkH,BJTurnHH}. The last two approaches cited present higher order operations as the (last) obstruction to rectifying certain homotopy-commutative diagrams (in spaces or other model categories). In particular, they highlight the special role played by null maps in almost all examples occurring in practice. Implicitly, they both assume an inductive approach to rectifying such diagrams. However, in earlier work no attempt was made to describe a useable inductive procedure, which should (inter alia) explain precisely which lower-order operations are required to vanish in order for a higher order operation to be even \emph{defined}. The goal of the present note is to make explicit the inductive process underlying our earlier definitions of higher order operations, in as general a framework as possible. We hope the explicit nature of this approach will help in future work both to clarify the question of indeterminacy of the higher operations, and possibly to produce an ``algebra of higher operations,'' in the spirit of Toda's original ``juggling lemmas'' (see \cite[Ch.\ I]{TodC}). An important feature of the current approach is that we assume that our indexing category is directed, and we consistently proceed in one direction in rectifying the given homotopy-commutative diagram (say, from right to left, in the ``right justified'' version). As a result, when we come to define the operation associated to an indexing category of length $n$, we use as initial data a specific choice of rectification for the right segment of length \w[.]{n-1} This sequence of earlier choices will appear only implicitly in our description and general notation for higher operations, but will be made explicit for our (long) Toda brackets (see \S \ref{rrjtodabr}-\ref{cohvan}). 
Since our higher operations appear as obstructions to rectification, they fit into the usual framework of obstruction theory: when they do not vanish, one must go back along the thread of earlier choices until reaching a point from which one can proceed along a new branch. From the point of view of the obstruction theory, the important fact is their vanishing or non-vanishing (see Remark \ref{cohvan} for the relation to coherent vanishing). Nevertheless, since our higher operations are always described as a certain set of homotopy classes of maps into a suitable pullback, at least in some cases it is possible to describe the indeterminacy more explicitly. However, this would only be a part of the total indeterminacy, since the most general obstruction to rectification consists of the union of these sets, taken over all possible choices of initial data of length \w[.]{n-1} After a brief discussion of the classical Toda bracket from our point of view in Section \ref{cctb}, in Section \ref{cgrms}.A we describe the basic constructions we need, associated to the type of Reedy indexing categories for the diagrams we consider. The changes needed for pointed diagrams are discussed in Section \ref{cgrms}.B. We give our general definition of higher order operations in Section \ref{cgdhho}: it is hard to relate this construction to more familiar examples, because it is intended to cover a number of different situations, and in particular the less common unpointed version. In all cases the ``total higher operation'' serves as an obstruction to extending a partial rectification of a homotopy-commutative diagram one further stage in the induction. In Section \ref{csto} we provide a refinement of this obstruction to a sequence of intermediate steps (in an inner induction), culminating in the total operation for the given stage in the induction. 
Section \ref{crsd} is devoted to a commonly occurring problem: rigidifying a (reduced) simplicial object in a model category, for which the simplicial identities hold only up to homotopy. This serves to illustrate how the general (unpointed) theory works in low dimensions. In Section \ref{cgho} we define pointed higher operations, which arise when the indexing category has designated null maps, and we want to rectify our diagram while simultaneously sending these to the strict zero map in the model category. This involves certain simplifications of the general definition, as illustrated in the motivating examples of (long) Toda brackets and Massey products, described in Section \ref{cltbmp}. Finally, in Section \ref{cfrd} we make a tentative first step towards a possible ``algebra of higher operations,'' by showing how we can decompose our pointed higher operations into ordinary (long) Toda brackets for a certain class of \emph{fully reduced diagrams}. In Appendix \ref{abm} we review some basic facts in model categories needed in the paper; Appendix \ref{abind} contains some preliminary remarks on the indeterminacy of the operations. \section{The classical Toda Bracket} \label{cctb} We start with a review of the classical Toda bracket, the primary example of a pointed secondary homotopy operation. In keeping with tradition we give a left justified description, in terms of pushouts, although for technical reasons our general approach will be right justified, in terms of pullbacks. \begin{mysubsection}{Left Justified Toda Brackets} A classical \emph{Toda diagram} in any pointed model category consists of three composable maps: \mydiagram[\label{eqtodadiag}]{ \Yof{3} \ar[r]^h & \Yof{2} \ar[r]^g & \Yof{1} \ar[r]^f & \Yof{0} } \noindent with each adjacent composite left null-homotopic.
We shall assume that all objects in \wref[,]{eqtodadiag} and the analogous diagrams throughout the paper, are both fibrant and cofibrant (so we may disregard the distinction between left and right homotopy classes). To define the associated Toda bracket, we first change $h$ into a cofibration (to avoid excessive notation, we do not change the names of $h$ or its target). By Lemma \ref{dlhpp} we can alter $g$ within its homotopy class to a \w{g'} to produce a factorization: \mydiagram[\label{eqlefttodadi}]{ \Yof{3} \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@{ >->}[r]^h \ar[d] & \Yof{2} \ar[d] \ar@/^2.5em/[dd]^{g'} \\ \ast \ar@{ >->}[r] \ar@/_1em/[dr]^{0} & \cofib{h} \ar@{ >->}[d]^{g_2} \\ & \Yof{1} } \noindent so \w{g'\circ h} is the zero map (not just null-homotopic). We use \w{i:\Yof{2}\hookrightarrow\operatorname C\Yof{2}} (an inclusion into a reduced cone) to extend \wref{eqlefttodadi} to the solid diagram: \diagr{ \Yof{3} \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@{ >->}[r]^h \ar[d] & \Yof{2} \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar[d] \ar@/_2em/[dd]_(0.7){g'} \ar@{ >->}[r] & \operatorname C\Yof{2} \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar[d] \ar@{-->}@/^3em/[dddrr]^{\phi} \\ \ast \ar@{ >->}[r] & \cofib{h} \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@{ >->}[d]^{g_2} \ar@{ >->}[r] & \Sigma' \Yof{3} \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@/_2em/@{-->}[ddrr]^{\psi_{\phi}} \ar[r] \ar@{ >->}[d]_{j} & \ast \ar@{ >->}[d] \\ & \Yof{1} \ar@{ >->}[r]^{i} \ar@/_1em/[drrr]_{f} & \mapcone{g'} \ar[r] \ar@{-->}@/_1em/[drr]_(0.25){\kappa} & \cofib{g_2} \ar@{.>}[dr] \\ & & & & \Yof{0} } \noindent where all squares (and thus all rectangles) are pushouts, with cofibrations as indicated.
In particular, \w{\Sigma' \Yof{3}} is a model for the reduced suspension of \w[,]{\Yof{3}} \w{\mapcone{g'}} is a mapping cone on \w[,]{g'} and $\phi$ is a nullhomotopy for \w[.]{f\circ g'} Note that any choice of such a nullhomotopy $\phi$ induces maps \w{\psi_\phi:\Sigma' \Yof{3} \to \Yof{0}} and \w[,]{\kappa:\mapcone{g'} \to \Yof{0}} with \w[.]{\kappa \circ j = \psi_\phi} Suppose that for some choice of $\phi$, the map \w{\psi_\phi} is null-homotopic, so \w[.]{\kappa \circ j = \psi_\phi \sim 0} Then by Lemma \ref{dlhpp}, we could alter $\kappa$ within its homotopy class to \w{\kappa'} such that \w[,]{\kappa' \circ j = 0} whence the pushout property for the lower right square would induce the dotted map \w[.]{\cofib{g_2} \to \Yof{0}} As a consequence, choosing \w{f' = \kappa' \circ i \sim \kappa \circ i=f} provides a replacement for $f$ in the same homotopy class satisfying \w[,]{f' \circ g' = \kappa' \circ i \circ g' = 0} rather than only agreeing up to homotopy. \end{mysubsection} \begin{defn}\label{dljtodabr} Given \wref[,]{eqtodadiag} the subset of the homotopy classes of maps \w{[\Sigma'\Yof{3}, \Yof{0}]} consisting of all classes \w{\psi_{\phi}} (for all choices of $\phi$ and \w{g_{2}} as above) forms the \emph{Toda bracket} \w[.]{\lra{f,g,h}} Each such \w{\psi_{\phi}} is called a \emph{value} of \w[,]{\lra{f,g,h}} and we say that the Toda bracket \emph{vanishes} (at \w{\psi_{\phi}:\Sigma'\Yof{3}\to\Yof{0}} as above) if \w{\psi_{\phi}\sim\ast} -- \ that is, if \w{\lra{f,g,h}} includes the null map. \end{defn} \begin{remark}\label{rljtodabr} By what we have shown, \w{\lra{f,g,h}} vanishes if and only if we can vary the spaces \w{\Yof{0},\dotsc,\Yof{3}} and the maps \w{f,g,h} within their homotopy classes so as to make the adjacent composites in \wref{eqtodadiag} (strictly) \emph{zero}, rather than just null-homotopic. 
In fact, by considering the cofiber sequence $$ \Yof{3} \to \Yof{2} \to \cofib{h} \to \Sigma' \Yof{3} $$ \noindent one can show that \w{\lra{f,g,h}} is a double coset in the group \w[:]{[\Sigma'\Yof{3},\,\Yof{0}]} Indeed, the homotopy classes of nullhomotopies of any fixed pointed map \w{\varphi:A\to B} are in one-to-one correspondence with classes in \w{[\Sigma A,\,B]} (see \cite[\S 1]{SpanS}), and thus the contributions of the choices of $\phi$ and \w{g_{2}} to the value of \w{\lra{f,g,h}} are given by \w{(\Sigma' h)^{\#}[\Sigma' \Yof{2}, \Yof{0}]} and \w[,]{f_{\#}[\Sigma' \Yof{3},\Yof{1}]} respectively. The two subgroups \begin{myeq}\label{eqindet} (\Sigma' h)^{\#}[\Sigma' \Yof{2}, \Yof{0}]\hspace{10 mm} \text{and}\hspace{10 mm} f_{\#}[\Sigma' \Yof{3}, \Yof{1}], \end{myeq} \noindent of \w{[\Sigma'\Yof{3}, \Yof{0}]} are referred to as the \emph{indeterminacy} of \w[;]{\lra{f,g,h}} when \w{\Yof{3}} is a homotopy cogroup object or \w{\Yof{1}} is a homotopy group object, the sum of \wref{eqindet} is a subgroup of the abelian group \w[.]{[\Sigma'\Yof{3}, \Yof{0}]} In any case, \emph{vanishing} means precisely that the (well-defined) class of \w{\lra{f,g,h}} in the double quotient $$ (\Sigma' h)^{\#}[\Sigma' \Yof{2}, \Yof{0}] \backslash [\Sigma'\Yof{3}, \Yof{0}] / f_{\#}[\Sigma' \Yof{3}, \Yof{1}] $$ \noindent is the trivial element in the quotient set. \end{remark} \begin{remark}\label{rrjtodabr} The `right justified' definition of our ordinary Toda bracket is given in Step (c) of Section \ref{cltbmp}.A below. This will depend on a specific initial choice of maps $f$ and $g$ with \w{f \circ g=\ast} (rather than \w[),]{f \circ g\sim\ast} and will be denoted by \w[,]{\lrau{f,g}{,h}} so \[ \lra{f,g,h}= \bigcup_{f \circ g=\ast} \lrau{f,g}{,h} \] where the union is indexed over those pairs with $f$ and $g$ in the specified homotopy classes.
The reader is advised to refer to that section for examples of all constructions in Sections \ref{cgdhho}-\ref{csto} below, since the example of our long Toda bracket \w{\lrau{f,g,h}{,k}} in Section \ref{cltbmp} was the template for our more general setup. \end{remark} \section{Graded Reedy Matching Spaces} \label{cgrms} Our goal is now to extend the notions recalled in Section \ref{cctb} \ -- \ of Toda diagrams, and Toda brackets as obstructions to their (pointed) realization \ -- \ to more general diagrams \w[,]{Y:{\mathcal J}\to{\EuScript E}} where ${\EuScript E}$ is some complete category (eventually, a pointed model category). \supsect{\protect{\ref{cgrms}}.A}{Reedy indexing categories} Since our approach will be inductive, we need to be able to filter our indexing category ${\mathcal J}$, for which purpose we need the following notions. Recall that a category is said to be locally finite if each \ww{\operatorname{Hom}}-set is finite. \begin{defn}\label{areedy} We define a \emph{\Jass} to be a locally finite Reedy indexing category ${\mathcal J}$ (see \cite[15.1]{PHirM}), equipped with a degree function \w[,]{\deg:\operatorname{Obj}\,{\mathcal J}\to{\mathbb N}} written \w[,]{|x|=\deg(x)} such that: \begin{itemize} \item ${\mathcal J}$ is connected, \item there are only finitely many objects in each degree, \item all non-identity morphisms strictly decrease degree, and \item every object maps to (at least) one of degree zero. \end{itemize} \end{defn} \begin{remark} Note that a \Jass[ ] ${\mathcal J}$ has no directed loops or non-trivial endomorphisms, and \w{x\in\operatorname{Obj}\,{\mathcal J}} has only \w{\operatorname{Id}\sb{x}} mapping out of it if and only if \w[.]{|x|=0} Moreover, each object is the source of only finitely many morphisms, although there may be elements of arbitrarily large degree. 
\end{remark} \begin{notn}\label{nreedy} For a \Jass[ ] ${\mathcal J}$ as above: \begin{enumerate} \renewcommand{(\alph{enumi})~}{(\alph{enumi})~} \item We denote by \w{\Jul{k}} the full subcategory of ${\mathcal J}$ consisting of the objects of degree $\leq k$, with \w{\Iul{k}:\Jul{k} \to {\mathcal J}} the inclusion. \item For any \w{x \in\operatorname{Obj}\,{\mathcal J}} in a positive degree, \w{\Jul[x]{}} will denote the full subcategory of ${\mathcal J}$ whose objects are those \w{t \in {\mathcal J}} with \w{{\mathcal J}(x,t)} non-empty. Thus \w{x\in\Jul[x]{}} and \w{\Jul[x]{}\cap\Jul{0}\neq\emptyset} (by \S \ref{areedy}). \item We denote by \w{\Jxk} the full subcategory of \w{\Jul[x]{}} containing $x$ and all objects (under $x$) of degree at most $k$, with \w{\Iul[x]{k}:\Jxk \to \Jul[x]{}} the inclusion. We implicitly assume that \w{|x|>k} when we use this notation. Similarly, \w{\partial \Jxk} is the full subcategory of \w{\Jxk} containing all objects other than $x$. \item Given \w{|x|\geq k>0} and a functor \w{Y:\smx[k-1]\to{\EuScript E}} we have maps $$ \sigmaul{x}{k-1}:\Yof{x} \to \dprod[|t|=k-1]{{\mathcal J}(x,t)} \Yof{t} \hspace{.5in}\text{and}\hspace{.5in} \sigmaul{x}{<k}:\Yof{x} \to \dprod[|t|<k]{{\mathcal J}(x,t)} \Yof{t} $$ \noindent given by \w{\Yof{f}:\Yof{x}\to\Yof{t}} into the factor \w{\Yof{t}} indexed by \w[.]{f:x \to t} \item Given \w{Y:\Jxk[k-1]\to{\mathcal E}} as above, there is a natural \emph{generalized diagonal} map: \begin{myeq}\label{eqgendiag} \Psi=\Psul{x}{k}~:~\dprod[|v|<k]{{\mathcal J}(x,v)} \Yof{v} ~~\longrightarrow~~ \dprod[|s|=k]{{\mathcal J}(x,s)} \dprod[|v|<k]{{\mathcal J}(s,v)} \Yof{v} \end{myeq} \noindent mapping to the copy of \w{\Yof{v}} on the right with index \w{x\stackrel{g}{\to} s \stackrel{f}{\to} v} by projection of the left hand product onto the copy of \w{\Yof{v}} indexed by the composite \w{x \stackrel{fg}{\longrightarrow} v} (followed by \w[).]{\operatorname{Id}\sb{\Yof{v}}} \end{enumerate} \end{notn} 
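To fix ideas, the following minimal (hypothetical) instance of the maps of Notation \ref{nreedy}(d)--(e) may be helpful; it is meant only as an illustration. \begin{example} Let ${\mathcal J}$ have exactly three objects $x$, $s$, $v$, of degrees $2$, $1$, $0$ respectively, with single non-identity morphisms \w[,]{g:x\to s} \w[,]{f:s\to v} and their composite \w[.]{fg:x\to v} Given \w[,]{Y:\smx[0]\to{\EuScript E}} the map \w{\sigmaul{x}{0}:\Yof{x}\to\Yof{v}} is simply \w[,]{\Yof{fg}} while the generalized diagonal $$ \Psul{x}{1}~:~\dprod[|v'|<1]{{\mathcal J}(x,v')} \Yof{v'}~=~\Yof{v} ~~\longrightarrow~~ \dprod[|s'|=1]{{\mathcal J}(x,s')} \dprod[|v'|<1]{{\mathcal J}(s',v')} \Yof{v'}~=~\Yof{v} $$ \noindent is the identity of \w[,]{\Yof{v}} since the only index on the right is the factorization \w{x \stackrel{g}{\to} s \stackrel{f}{\to} v} of the composite \w[.]{fg} \end{example}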
\begin{example}\label{egreedy} Consider the following \Jass[ ] ${\mathcal J}$: \diags{ && a \ar[r] \ar[dr] & u \ar[r] & s \\ &x \ar[ur] \ar[dr] && v \ar[ur] \ar[dr] \\ && b \ar[r] \ar[ur] & w \ar[r] & t\\ \deg: & 3 & 2 & 1 & 0 } where all subdiagrams commute, and the degrees are as indicated. Then \diagt{ && s &&& u \ar[r] & s \\ (\Jxk[0])&x \ar[ur] \ar[dr] && (\Jxk[1]) & x \ar[r] \ar[ur] \ar[dr] & v \ar[ur] \ar[dr] \\ && t &&& w \ar[r] & t } \noindent with \w[,]{\Jxk[2]={\mathcal J}} and \w{\partial \Jxk[0]} is the discrete category with objects \w[.]{\{s,t\}} Furthermore we have: \diagt{ & a \ar[r] \ar[dr] & u \ar[r] & s && u \ar[r] & s \\ (\partial\Jxk[2]) && v \ar[ur] \ar[dr] &&(\partial\Jxk[1])& v \ar[ur] \ar[dr] \\ & b \ar[r] \ar[ur] & w \ar[r] & t && w \ar[r] & t~. } \end{example} \quad \begin{defn}\label{dreedy} For a \Jass[ ] ${\mathcal J}$ as above and any \w{x \in{\mathcal J}} of degree $>k$: \begin{enumerate} \renewcommand{(\alph{enumi})~}{(\alph{enumi})~} \item The \emph{comma category} \w{\xcomma=(x\downarrow\partial\Jxk)} has as objects the morphisms in ${\mathcal J}$ from $x$ to objects in \w[,]{\Jul{k}} with maps in \w{\xcomma} given by commutative triangles in ${\mathcal J}$ of the form \diagr{ & x \ar[dl] \ar[dr] \\ t \ar[rr] && s ~. } \item For any functor \w{Y:\partial \Jxk\to{\mathcal E}} and \w[,]{k<|x|} we define the object \w{\Mul[x]{k}(Y)} (functorial in $Y$) to be the limit in ${\mathcal E}$ $$ \Mxk(Y)~:=~\lim\sb{(x \downarrow \Jxk)} \widehat Y ~, $$ \noindent where \w{\widehat Y(f:x \to s)=Y(s)} (see \cite[X.3]{MacC}). We often write \w{\Mxk} for \w{\Mxk(Y)} when $Y$ is clear from the context. 
\item For any slightly larger diagram \w[,]{Y: \Jxk \to {\mathcal E}} there is a canonical map in ${\mathcal E}$ defined using the universal property of the limit, \w[,]{\mxk{k}(Y):\Yof{x} \to \Mxk(Y)} and \w{\sigmaul{x}{<k+1}} is the composite of \w{\mxk{k}} with the forgetful map (inclusion) \diagr{ \Mxk\ \ar@{^{(}->}[rr]^-{\operatorname{forget}} && \dprod[|t|\leq k]{{\mathcal J}(x,t)} \Yof{t} } \noindent from the limit to the product, so it is closely related to the Reedy matching map when \w[.]{k=|x|-1} Note that \w{\Mul[x]{0}} is simply a product of entries of degree zero, indexed by the set of maps from $x$ to the discrete category \w[,]{\Jul[x]{0}} and \w{\mxk{0}=\sigmaul{x}{0}}. \noindent When ${\mathcal E}$ is a model category, $Y$ is called \emph{Reedy fibrant} if each \w{\mxk{|x|-1}(Y)} is a fibration; the special case \w{k=|x|-1} is the standard Reedy matching construction (cf.\ \cite[Defn. 15.2.3 (2)]{PHirM}). \end{enumerate} \end{defn} \begin{lemma}\label{lraneq} Given a functor \w{Y:\partial\Jxk\to{\mathcal E}} as above, an extension to \w{\overline Y:\smx[k] \to{\mathcal E}} is (uniquely) determined by a choice of an object \w[,]{\Ybarof{x}\in{\mathcal E}} together with a map \w[.]{\Ybarof{x} \to \Mxk(Y)} \end{lemma} \begin{proof} Recall that there is an adjoint pair given by forgetting and the right Kan extension over \w[.]{I^k_x} The fact that \w{I^k_x} is fully faithful implies that the right Kan extension restricts back to the original functor (hence the term extension). Moreover, \w{\Mxk(Y)} is the formula for the value of the right Kan extension, \w[,]{\operatorname{Ran}_{\partial \Jxk}^{\Jxk} (Y)} at the entry $x$ (see \cite[X.3, Thm 1]{MacC}). Because of the adjunction, $\overline Y$ extends $Y$ on \w{\partial \Jxk} precisely when there is a natural transformation \w{\overline Y \to \operatorname{Ran}_{\partial \Jxk}^{\Jxk} (Y)} restricting to the identity away from $x$. 
It is thus completely determined by the entry \w[.]{\Ybarof{x}\to \Mxk(Y)} \end{proof} Embedding the limit \w{\Mxk(Y)} as usual into \w[,]{\prod_{{\mathcal J}(x,u), |u|\leq k}\,\Yof{u}} we see that there are two kinds of conditions needed for an element in this product to be in the limit (when ${\mathcal E}$ is a concrete category): \begin{enumerate} \renewcommand{(\alph{enumi})~}{(\alph{enumi})~} \item Those not involving \w{\Yof{s}} with \w[,]{|s|=k} yielding \w{\Mxk[k-1](Y)} in the lower left corner of \wref[;]{usepsi} \item Those which do involve \w{\Yof{s}} with \w[,]{|s|=k} where the compatibility conditions necessarily involve objects in degree $<k$, since all maps in ${\mathcal J}$ lower degree. \end{enumerate} This implies: \begin{lemma}\label{lmatchpull} If ${\mathcal J}$ is a \Jass[ ] and \w[,]{|x|>k>0} a functor \w{Y:\partial\Jxk \to {\mathcal E}} induces a pullback square: \mydiagram[\label{usepsi}]{ \Mxk(Y) \ar@{} [drrr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[rrr] &&& \dprod[|s|=k]{{\mathcal J}(x,s)} \Yof{s} \, \ar[d]^(0.6){\prod_{{\mathcal J}(x,s)} \sigmaul{s}{<k}} \\ \Mxk[k-1](Y) \ar@{^{(}->}[rr]^{\operatorname{forget}} && \dprod[|t|<k]{{\mathcal J}(x,t)} \Yof{t} \ar[r]^-{\Psi} & \dprod[|s|=k]{{\mathcal J}(x,s)} \dprod[|v|<k]{{\mathcal J}(s,v)} \Yof{v}. } \end{lemma} Here \w{\Psi=\Psul{x}{k}} is the generalized diagonal map of \wref[,]{eqgendiag} and the maps \w{\sigmaul{s}{<k}} on the right (given by \S \ref{nreedy}(d)) all have sources in \w[,]{\partial\Jxk} where \w{\Yof{f}} is defined. \begin{proof} Note that the existence of \w{Y} suffices to define each component of the diagram. 
In particular, \w{\Yof{f}} is defined for each morphism $f$ in \w[,]{\partial\Jxk} and even forms part of the definition of the factors of the right vertical, but such maps are not defined for any \w{g:x \to v} with \w[.]{|v| \geq k} Denote the pullback of the lower right part of the diagram by \w[.]{\Rxk} We first show that \w{\Rxk} induces a cone on \w[,]{(x \downarrow \Jxk)} thus inducing a map \w{\Rxk\to\Mxk} by the universal property of the limit: projecting off to the right for targets of degree $k$, or projecting after moving down followed by the forgetful map for targets of lower degree, yields maps \w{\Ybarof{g}:\Rxk \to \Yof{s}} for each \w{g:x \to s} in \w[.]{(x \downarrow \Jxk)} We must verify that whenever \w{h=f g} for \w{h:x \to t} we have a commutative diagram in ${\mathcal E}$, so that \w[.]{\Ybarof{h}=\Yof{f} \Ybarof{g}} If the codomain of $g$ has degree less than $k$, the upper right corner is not involved, and commutativity follows from the fact that the map from \w{\Rxk} factors through \w{\Mxk[k-1](Y)} in the lower left. On the other hand, if the codomain of $g$ has degree exactly $k$, then projecting off at the chosen pair \w{(g,f)} in the assumed (commutative) pullback diagram, we see that \mydiagram{ \Rxk \ar[d]_{\Ybarof{h}} \ar[rr]^{\Ybarof{g}} && \Yof{s} \ar[d]^{\Yof{f}} \\ \Yof{t} \ar[rr]^{=} && \Yof{t} } \noindent commutes by the definition of the generalized diagonal $\Psi$, which establishes the cone condition. Thus, the universal property of the limit yields a unique map \w[.]{\Rxk \to \Mxk} On the other hand, the forgetful map \w{\operatorname{forget}:\Mxk \hookrightarrow \dprod[|t|\leq k]{{\mathcal J}(x,t)} \Yof{t}} can be split into factors with \w[,]{|t|=k} and the factors with \w[,]{|t|<k} thereby defining maps to the two corners of the pullback which will make the outer diagram commute, by inspection. 
Thus, there is also a map \w{\Mxk \to \Rxk} and the induced cone, as above, is the standard one, so the composite is the identity on \w[.]{\Mxk} Finally, starting from \w[,]{\Rxk} building the cone as above and then projecting as just discussed recovers the same maps \w{\Ybarof{h}} as entries, so this composite is the identity on \w{\Rxk} as well. \end{proof} \supsect{\protect{\ref{cgrms}}.B}{Pointed Graded Matching Objects} Higher homotopy operations have traditionally appeared as obstructions to vanishing in a pointed context, so we shall need a pointed version of the constructions above. \begin{defn}\label{dpoint} When ${\mathcal E}$ is any category with limits (such as a model category), a \emph{pointed object} in ${\mathcal E}$ is one equipped with a map from the final object (or empty limit), denoted by $\ast$. The most commonly occurring case is where $\ast$ is a \emph{zero object} (both initial and final in ${\mathcal E}$). Similarly, a \emph{pointed map} in ${\mathcal E}$ is one under $\ast$. This defines the pointed category \w{{\mathcal E_{\ast}}} (which inherits any model category structure on ${\mathcal E}$ \ -- \ cf.\ \cite[1.1.8]{HovM}). Note that there is a canonical zero map, also denoted by $\ast$, between any two objects in \w[.]{{\mathcal E_{\ast}}} \end{defn} \begin{defn}\label{dpindex} We say that a small category ${\mathcal J}$ as in \S \ref{areedy} is a \emph{pointed indexing category} if the set of morphisms has a partition \w{\operatorname{Mor}({\mathcal J})=\widetilde{\mathbf{J}}\sqcup \overline{\cJ}} (and thus \w{{\mathcal J}(x,t)=\widetilde{\mathbf{J}}(x,t) \sqcup \overline{\cJ}(x,t)} for each \w[)]{x,t\in\operatorname{Obj}\,{\mathcal J}} such that: \begin{enumerate} \renewcommand{(\alph{enumi})~}{(\alph{enumi})~} \item \w{\overline{\cJ}(x,x)} contains \w{\operatorname{Id}\sb{X}} if and only if $x$ is a zero object in ${\mathcal J}$. 
\item The subsets \w{\overline{\cJ}(x,t)} are absorbing under composition \ -- \ that is, if $f$ and $g$ are composable and either of $f$ or $g$ lies in $\overline{\cJ}$, then so does their composite. Thus $\overline{\cJ}$ behaves like a (2-sided) ideal and $\widetilde{\mathbf{J}}$ like the corresponding cosets. \end{enumerate} Given \w{{\mathcal E_{\ast}}} and a pointed \Jass[] ${\mathcal J}$ \ -- \ that is, a pointed indexing category which is also a \Jass \ -- \ a \emph{pointed diagram} in ${\mathcal E_{\ast}}$ is a functor \w{\Yul{}:{\mathcal J} \to {\mathcal E_{\ast}}} such that \w{\Yof{g}=\ast} whenever \w[.]{g \in \overline{\cJ}(x,t)} \end{defn} \begin{example}\label{ptchain} We can make the decreasing poset category $$ {\mathcal J}=[n] = \{n > n-1 > \dots > 0 \} $$ \noindent pointed by setting \w{\overline{\cJ}(t,s):={\mathcal J}(t,s)} whenever \w[,]{t-s>1} so only indecomposable maps lie in $\widetilde{\mathbf{J}}$. A pointed diagram \w{{\mathcal J} \to {\mathcal E_{\ast}}} is then simply a chain complex in ${\mathcal E_{\ast}}$. \end{example} \begin{remark}\label{rpoinmat} Making a diagram commute while also forcing certain maps to be zero is more restrictive than simply making it commute. Thus, we would like to construct an analog of \w{\Mxk} tailored to the pointed case. Note that in a pointed category \w{{\mathcal E_{\ast}}} there is a canonical map \w{\ast \to \displaystyle \prod_{\overline{\cJ}(x,t)} \Yof{t}} for any $t$, hence a section \begin{myeq}\label{eqpointmat} \Theta:\prod_{\widetilde{\mathbf{J}}(x,t)} \Yof{t} \to \prod_{{\mathcal J}(x,t)} \Yof{t} \end{myeq} \noindent of the projection map. 
\end{remark} \begin{defn}\label{dpoinmat} Given any diagram \w[,]{\Yul{}:{\mathcal J} \to {\mathcal E_{\ast}}} where ${\mathcal J}$ is a pointed \Jass, define its \emph{reduced matching space} (for $x$ and $k$) as the object of \w{{\mathcal E}} defined by the pullback: \diagr{ \rMxk(\Yul{}) \ar@{} [drr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d]_{\overline{\forget}} \ar[rr]^{\iota^x_k} && \Mxk(\Yul{}) \ar[d]^\operatorname{forget} \\ \dprod[|t| \leq k]{\widetilde{\mathbf{J}}(x,t)} \Yof{t} \ar[rr]^{\Theta} && \dprod[|t| \leq k]{{\mathcal J}(x,t)} \Yof{t} } \noindent which also determines the maps \w{\iota^x_k} and \w[.]{\overline{\forget}} In effect, we have replaced any factor indexed on a map in $\overline{\cJ}$ by $\ast$, like reducing modulo the ideal \w[,]{\overline{\cJ}} precisely as one would expect for a pointed diagram. \end{defn} We then have the following analogues of Lemmas \ref{lraneq} and \ref{lmatchpull}: \begin{lemma}\label{lptraneq} Given a pointed functor \w[,]{Y:\partial \Jxk\to{\mathcal E_{\ast}}} a pointed extension to \w{\overline Y:\smx[k]\to{\mathcal E_{\ast}}} is (uniquely) determined by a choice of an object \w[,]{\Ybarof{x}} together with a morphism in \w[,]{{\mathcal E_{\ast}}} \w[.]{\Ybarof{x}\to\rMxk(Y)} \end{lemma} \begin{lemma}\label{lptmatchpull} If \w[,]{|x|>k>0} a pointed functor \w{Y:\partial\Jxk\to{\mathcal E_{\ast}}} (for ${\mathcal J}$ and \w{{\mathcal E_{\ast}}} as above) induces a pullback square: \mytdiag[\label{ptusepsi}]{ \rMxk(Y) \ar@{} [drr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[rr] && \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \Yof{s} \ar[d]^(0.6){\dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \sigmaul{s}{<k}} \\ \rMxk[k-1](Y) \ar@{^{(}->}[r]^-{\overline{\forget}} & \dprod[|t|<k]{\widetilde{\mathbf{J}}(x,t)} \Yof{t} \ar[r]^-{\overline{\Psi}} & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \dprod[|v|<k]{\widetilde{\mathbf{J}}(s,v)} \Yof{v} } \noindent where \w{\sigmaul{s}{<k}} is as in \S \ref{nreedy}, and 
\w{\overline{\Psi}=\uPsul{x}{k}} is defined by analogy with \wref[.]{eqgendiag} \end{lemma} \begin{proof} Follow the proof of Lemma \ref{lmatchpull}, with $\widetilde{\mathbf{J}}$ replacing ${\mathcal J}$. The absence of factors indexed in $\overline{\cJ}$ implies that the structure map \w{\Ybarof{h}} from the pullback of \wref{ptusepsi} to the copy of \w{\Yof{s}} indexed by \w{h:x\to s} is the zero map whenever \w[,]{h \in \overline{\cJ}} so the result follows from the absorbing property of $\overline{\cJ}$. \end{proof} From the two lemmas we have: \begin{cor}\label{cptmatchpull} Any pointed diagram \w{Y: \Jxk \to{\mathcal E_{\ast}}} induces a structure map \w{\rmxk{k}:\Yof{x}\to\rMxk} for each \w[.]{|x|>k>0} \end{cor} \begin{defn}\label{dprfib} If ${\mathcal E}$ is a model category, and ${\mathcal J}$ is a pointed \Jass, a pointed diagram \w{Y:{\mathcal J}\to{\mathcal E_{\ast}}} is called \emph{pointed Reedy fibrant} if each map \w{\rmxk{|x|-1}} is a fibration. \end{defn} \begin{lemma}\label{lptreedyfib} If ${\mathcal E}$ is a model category and ${\mathcal J}$ is a pointed \Jass, a pointed diagram \w{Y:{\mathcal J}\to{\mathcal E}} which is Reedy fibrant in the sense of \S \ref{dreedy} is also pointed Reedy fibrant. Moreover, for any pointed Reedy fibrant $Y$, \w{\rMxk[k](Y)} is fibrant in ${\mathcal E_{\ast}}$ for each $k$. \end{lemma} \begin{proof} Let \w[,]{k=|x|-1} and consider a lifting square for \w{\rmxk{k}} with respect to an acyclic cofibration $\alpha$; extend the diagram to include \w[:]{\mxk{k}} \diagr{ C \ar@{ >->}[d]_{\alpha} \ar[rr] && \Yof{x} \ar[d]_{\rmxk{k}} \ar@{.>}[drr]^{\mxk{k}} &&\\ D \ar[rr] && \rMxk[k] \ar@{.>}[rr]^{\iota^x_k} && \Mxk[k] ~. } \noindent Note that a lift in the outer, distorted square will serve as a lift for the inner square, since \w{\iota^x_k} is a base change of another monomorphism, so is itself monic. 
To show that \w{\rMxk[k](Y)} is fibrant in ${\mathcal E_{\ast}}$ whenever $Y$ is pointed Reedy fibrant, we adapt the argument of Lemma 15.3.9(2) through Corollary 15.3.12(2) of \cite{PHirM}, as follows: Given a lifting diagram in ${\mathcal E_{\ast}}$, \mydiagram[\label{liftPTReedy}]{ C \ar@{ >->}[d]^{\sim} \ar[r] & \rMxk[n] \ar[d] \\ D \ar[r] \ar@{.>}[ur]^{h} & \ast } \noindent we construct the dotted lift by induction on \w[.]{0\leq k<n} For a pointed Reedy fibrant object, we assume the degree zero entries are each fibrant, so their product \w{\rMxk[0]} will also be fibrant. For the induction step, suppose we have a lift in the diagram \mydiagram[\label{liftTwoPTReedy}]{ C \ar@{ >->}[d]^{\sim} \ar[r] & \rMxk[n] \ar[r] & \rMxk[k-1] \ar[d] \\ D \ar[rr] \ar@{.>}[urr]^{h_{-1}} && \ast } \noindent Note that the structure map for any \w{f:x \to s} with \w{|s|=k} induces a commutative diagram \mydiagram[\label{rightPTReedy}]{ \rMxk[k-1] \ar[r] \ar[dr] & \Yof{s} \ar[d] \\ & \rMul[s]{k-1} } \noindent so in the new lifting diagram: \mydiagram[\label{liftThreePTReedy}]{ C \ar@{ >->}[d]^{\sim} \ar[r] & \rMxk[n] \ar[r] & \Yof{s} \ar@{->>}[d] \\ D \ar[rr] \ar@{.>}[urr]^{h_f} && \rMul[s]{k-1} } \noindent obtained by combining the previous two, the lift \w{h_f} exists because $Y$ was assumed to be pointed Reedy fibrant. All of these maps together define \w[.]{h_0:D \to \prod \Yof{s}} Compatibility with lower degree pieces then implies that \w{h_0} factors through the limit defining \w[,]{\rMxk[k]} which completes our induction step, showing that \w{\rMxk[n]} is fibrant in ${\mathcal E_{\ast}}$. \end{proof} \begin{lemma} \label{newPtReedy} Each pointed diagram $Z$ has a pointed Reedy fibrant replacement $\bY{}$ which is weakly equivalent to its Reedy fibrant replacement $Y$ as an unpointed diagram. 
\end{lemma} \begin{proof} In the following commuting diagram: \diagr{ Z(x) \ar[d]_{\alpha} \ar[rr] && \rMxk(Z) \ar[rr] && \rMxk(Y) \ar[d]\\ \Mxk(Z) \ar[rrrr] &&&& \Mxk(Y) } \noindent factor the top horizontal composite as an acyclic cofibration \w{Z(x) \hookrightarrow \bY{}(x)} followed by a fibration \w[.]{\bY{}(x)\to\hspace{-5 mm}\to \rMxk(Y)} A lift in the diagram \diagr{ Z(x) \ar@{ >->}[d]^{\sim} \ar@{ >->}[rr]^{\sim} && Y(x) \ar@{->>}[d] \\ \bY{}(x) \ar@{-->}[urr]^{\sim} \ar@{->>}[r] & \rMxk(Y) \ar[r] & \Mxk(Y) } \noindent will allow us to construct inductively a weak equivalence between the new diagram $\bY{}$ and the standard Reedy fibrant replacement $Y$ for $Z$. \end{proof} \section{General Definition of higher order operations} \label{cgdhho} {F}rom now on ${\mathcal E}$ will be a model category, and we assume given a ``homotopy commutative diagram'' in ${\mathcal E}$ \ -- \ that is, a functor \w[,]{\widetilde{Y}:{\mathcal J} \to \operatorname{ho}({\mathcal E})} with ${\mathcal J}$ as in \S \ref{areedy}. Our higher homotopy operations will serve as obstructions to \emph{rectification} of such a $\widetilde{Y}$ \ -- \ that is, lifting it to \w[.]{Y:{\mathcal J} \to {\mathcal E}} We may assume for simplicity that each \w{\widetilde Y(s)} is both cofibrant and fibrant, which can always be arranged without altering any homotopy types (see \S \ref{rassfibcof}). 
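In diagram form (an added sketch; the label $\gamma$ for the localization functor is ours and is not used elsewhere in the text), rectification asks for a strict functor $Y$ making \diagr{ & {\mathcal E} \ar[d]^{\gamma} \\ {\mathcal J} \ar@{-->}[ur]^{Y} \ar[r]_{\widetilde{Y}} & \operatorname{ho}({\mathcal E}) } \noindent commute up to natural isomorphism.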
\begin{mysubsection}{The double induction} \label{sdoubind} We attempt to construct the rectification $Y$ by a double induction: \begin{enumerate} \renewcommand{(\alph{enumi})~}{\Roman{enumi}.~} \item In the outer induction, we assume we have succeeded in finding a functor \w{\Yul{n}:\Jul{n}\to{\mathcal E}} ($\Yul{n}$ \ is assumed to be Reedy fibrant), realizing \w[.]{\widetilde{Y}\rest{\Jul{n}}} In fact, for our induction step it suffices to assume only the existence of \w{\tYk{n+1}:\Jul{n+1}\to\operatorname{ho}({\mathcal E})} extending \w[.]{\Yul{n}} \item By the Reedy conditions, lifting \w{\tYk{n+1}} to \w{\Yul{n+1}:\Jul{n+1}\to{\mathcal E}} extending \w{\Yul{n}} is equivalent to constructing \emph{a pointwise extension} \w{\Yul[x]{n}:\Jxk[n]\to{\mathcal E}} of the latter for each \w{x\in\operatorname{Obj}\,{\mathcal J}} of degree \w{n+1} separately. Given such an $x$, the restriction of \w{\Yul{n}} produces a diagram \w{\Yul{k}:\partial\Jxk\to{\mathcal E}} for each \w{k \leq n} and the restriction of \w{\tYxk[]{n+1}} produces a diagram \w[,]{\tYxk{k}:\Jxk \to \operatorname{ho}({\mathcal E})} and these two restrictions are compatible. Thus, for our inner induction hypothesis, assume a pointwise extension of \w{\Yul{k-1}} at $x$ (agreeing with appropriate restrictions of both of these) has been chosen, so \w[.]{\Yul[x]{k-1}: \Jxk[k-1]\to{\mathcal E}} Our inner induction step then asks if it is possible to lift \w{\tYxk{k}} to \w{\Yul[x]{k}:\Jxk \to {\mathcal E}} strictly extending both \w{\Yul[x]{k-1}} and \w[,]{\Yul{k}} with the final case of the inner induction being \w[.]{k=n} \end{enumerate} Notice that our inner induction step is equivalent to making coherent choices for each homotopy class of maps out of $x$ to an object of degree $k$, leaving all maps not involving $x$ (so those from \w[)]{\Yul{k}} or maps into objects of lower degree (so those from \w[)]{\Yul[x]{k-1}} unchanged. 
By Lemma \ref{bottomY} below, we may start the inner induction with \w{\Yul[x]{0}} defined by the values on objects of \w[.]{\tYxk{0}} The assumption that \w{\Yul{n}} is Reedy fibrant implies that \w{\Yul{1}} is Reedy fibrant, too, which will allow us to use the homotopy pullback property to extend \w{\Yul[x]{0}} to \w[.]{\Yul[x]{1}} The general step in the inner induction will use Lemma \ref{lmatchpull}: By assumption, we have a map into the lower left corner of \wref[,]{usepsi} which we want to extend to a map into the upper left corner still representing the appropriate class required by \w[.]{\tYxk{k}} \begin{remark} \label{rassfibcof} Our induction assumption that the diagram \w{\Yul{n}} is Reedy fibrant implies that \w{\Yul{n}(t)} is fibrant in ${\mathcal E}$ for each \w[,]{t\in\operatorname{Obj}\,\Jul{n}} and the same will hold for the pullbacks that we consider below (see, e.g., \S \ref{rfibcy}). We will assume in addition that in the inner induction, for each \w[,]{x\in\operatorname{Obj}\,{\mathcal J}} \w{\Yul[x]{n}(x)} is cofibrant in ${\mathcal E}$. Together this will ensure that the left and right homotopy classes, appearing in various results from the Appendix, coincide (cf.\ \cite[1.2.6]{HovM}), and the distinction can thus be disregarded. \end{remark} Theorem \ref{unptedThm} then yields an obstruction theory for this step in the inner induction. 
\end{mysubsection} \begin{lemma}\label{bottomY} In the setup described in \S \ref{sdoubind}, given \w{x\in\operatorname{Obj}\,{\mathcal J}} with \w[:]{|x|>0} \begin{enumerate} \renewcommand{(\alph{enumi})~}{(\alph{enumi})~} \item Any choice of representatives for a homotopy commutative \w{\tYxk{0}:\Jxk[0] \to \operatorname{ho}({\mathcal E})} provides a lift \w[.]{\Yxk[0]:\Jxk[0]\to{\mathcal E}} \item Any Reedy fibrant \w{\Yul{1}:\partial\Jxk[1] \to {\mathcal E}} as above has a pointwise extension to a functor \w{\Yxk[1]:\Jxk[1]\to{\mathcal E}} which lifts \w[.]{\tYxk{1}} \end{enumerate} \end{lemma} \begin{proof} For (a), note that \w{\Jxk[0]} has no non-trivial compositions by definition. For (b), consider the pullback diagram \mytdiag{ \Yof{x} \ar@{.>}[dr]_{\mxk{1}} \ar@/_2em/[ddr]_{\mxk{0}} \ar@{-->}@/^2em/[drrr]_{\sigmaul{\widetilde Y}{1}} \\ & \Mxk[1](\Yul{1}) \ar@{} [drr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[rr] && \dprod[|s|=1]{{\mathcal J}(x,s)} \Yof{s} \ar@{->>}[d]^(0.6){\prod\,\mxk[s]{0}(\Yul{1})}\\ & \Mxk[0](\Yxk[0]) \ar@{=}[r] & {\dprod[|t|=0]{{\mathcal J}(x,t)} \Yof{t}} \ar[r]^-{\Psi} & \dprod[|s|=1]{{\mathcal J}(x,s)} \dprod[|v|=0]{{\mathcal J}(s,v)} \Yof{v} ~, } \noindent where the right vertical is a fibration (being a product of fibrations by the Reedy fibrancy assumption). This is a special case of \wref{usepsi} where the forgetful (inclusion) map on the lower left is the identity, since \w{\partial\Jxk[0]} is discrete. Note that the outer diagram commutes up to homotopy, since it simply compares composites representing maps in \w{\tYxk{1}} in a somewhat unusual presentation. 
By Lemma \ref{lhpp}, we can then alter the dashed map \w{\sigmaul{\widetilde Y}{1}} within its homotopy class to obtain the dotted map \w{\mxk{1}} into \w[.]{\Mxk[1]} Equivalently, by Lemma \ref{lraneq} one can find a representative of \w{\tYxk{1}} extending to \w{\smx[1]} without altering the restriction to \w{\partial \Jxk[1]} (although this may not be the original \w[,]{\tYxk{1}} since we might have altered \w{\sigmaul{\widetilde Y}{1}} within its homotopy class when applying Lemma \ref{lhpp}). \end{proof} \begin{remark}\label{rzeroone} Using Lemma \ref{bottomY}, we shall henceforth assume that in the inner induction we may start with \w[.]{k \geq 1} In order to ensure Reedy fibrancy for \w[,]{k=1} we factor \w{\mxk{1}:\Yof{x}\to\Mxk[1]} as an acyclic cofibration \w{\Yof{x}\hookrightarrow\widehat{Y}(x)} followed by a fibration \w[.]{\hmxk{1}:\widehat{Y}(x)\to\Mxk[1]} We must verify that \w{\widehat{Y}(x)} and \w{\hmxk{1}} may be chosen in such a way that the maps to the other objects \w{\widetilde{Y}(s)} (with \w[)]{|s|>1} have the correct homotopy type. However, by assumption all such objects \w{\widetilde{Y}(s)} are fibrant, so we can use the left lifting property for \diagr{ \Yof{x} \ar@ { >->}[d]_{\sim} \ar[rr]^{\alpha} && \widetilde{Y}(s) \ar@{->>}[d] \\ \widehat{Y}(x) \ar[rr] \ar@{-->}[urr]^{\widehat{\alpha}} && \ast } to ensure that $\alpha$ and $\widehat{\alpha}$ have the same homotopy class. 
\end{remark} In the inner induction on $k$, we build up the diagram under the fixed \w{x\in\operatorname{Obj}\,{\mathcal J}} by extending \w{\Yxk[k-1]} to objects in degree $k$, using: \begin{lemma}\label{llowmatch} Assume \w[.]{|x|>k} Given \w{\Yxk[k-1]:\Jxk[k-1] \to {\mathcal E}} and \w[,]{|s|=k} any \w{g \in {\mathcal J}(x,s)} induces a map \w[.]{\rho(g):\Yof{x} \to \Mul[s]{k-1}} \end{lemma} \begin{proof} Given $g$, the diagram \w{\Yxk[k-1]} induces a cone on \w[,]{(s \downarrow \Jxk[k-1])} sending \w{f:s \to v} to the value of \w{\Yxk[k-1]} at the target of \w[.]{f g} Moreover, given a morphism \diagr{ & s \ar[dl]_{f} \ar[d]^{f'} \\ v \ar[r]_{h} & u } \noindent in \w[,]{(s \downarrow \Jxk[k-1])} precomposition with $g$ yields \diagr{ & x \ar[dl]_{fg} \ar[d]^{f'g} \\ v \ar[r]_{h} & u } \noindent which commutes in ${\mathcal J}$ \ -- \ that is, a morphism in \w[.]{\smx[k-1]} Applying \w{\Yxk[k-1]} yields a commutative diagram in ${\mathcal E}$, showing that we have a cone, and thus a map \w{\rho(g)} to the limit. \end{proof} \begin{cor}\label{clowmatch} Combining all maps \w{\rho(g)} of Lemma \ref{llowmatch}, a functor \w{\Yxk[k-1]:\Jxk[k-1] \to {\mathcal E}} induces a natural map \w[.]{\displaystyle \rho_{k-1}:\Yof{x} \to \dprod[|s|=k]{{\mathcal J}(x,s)} \Mul[s]{k-1}} \end{cor} \begin{defn}\label{dpbgrid} A \emph{pullback grid} is a commutative diagram tiled by squares where each square, hence each rectangle in the diagram, is a pullback. 
\end{defn} Next, we embed the maps \w{\rho_{k-1}} and \w{\mxk{k-1}} in a pullback grid, in order to apply Lemma \ref{lmatchpull}: \begin{lemma}\label{lowpiece} Assuming \w[,]{|x|>n\geq k\geq 2} any functor \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E}} induces a pullback grid defined by the lower horizontal and right vertical maps, with the natural (dashed) maps into the pullbacks: \myudiag[\label{eqlowpiece}]{ \Yof{x} \ar@/^1.5em/[drrr]^{\rho_{k-1}} \ar@{-->}[dr]_{\beta_{k-1}} \ar@{-->}@/^1em/[drr]_{\eta_{k-1}} \ar@/_2em/[ddr]_{\mxk{k-1}} \\ & \Nxk[k-1] \ar@{} [dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r]^{q_{k-1}} & \Qxk[k-1] \ar@{} [dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d]^{u} \ar[r]^{v} & \dprod[|s|=k]{{\mathcal J}(x,s)} \Mul[s]{k-1} \ar[d]^(0.55){\prod\,\operatorname{forget}} \\ & \Mxk[k-1] \ar@{^{(}->}[r]^-{\operatorname{forget}} & \dprod[|t|<k]{{\mathcal J}(x,t)} \Yof{t} \ar[r]^-{\Psi} & \dprod[|s|=k]{{\mathcal J}(x,s)} \dprod[|v|<k]{{\mathcal J}(s,v)} \Yof{v} ~. } \end{lemma} \begin{proof} To verify commutativity of the outer diagram, note that for each composable pair \w{x \stackrel{g}{\to} s\stackrel{f}{\to} v} in ${\mathcal J}$, the projection of either composite from \w{\Yof{x}} onto the copy of \w{\Yof{v}} indexed by \w{(g,f)} (in the lower right corner) is \w[,]{\Yof{fg}} by definition. 
\end{proof} We now set the stage for our obstruction theory by combining all of these pieces in a single diagram: \begin{prop}\label{startHHO} Assuming \w[,]{|x|>n\geq k\geq 2} any functor \w{\Yul{k}:\partial \Jxk \to {\mathcal E}} as in \S \ref{sdoubind} induces maps into a pullback grid: \mydiagram[\label{basicHHO}]{ \Yof{x} \ar@/_2em/[dddr]_{\mxk{k-1}} \ar@/_1em/[ddr]_(0.7){\beta_{k-1}} \ar@/_2em/[ddrr]^(0.7){\eta_{k-1}} \ar@{-->}@/^1em/[drrr]^{\sigmaul{x}{k}:=\sigmaul{x}{k}(\tYxk{k})} \ar@{.>}[dr]^(0.6){\mxk{k}} \ar@{.>}@/^1em/[drr]^(0.7){\alpha_k} \\ & \Mxk \ar@{} [dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[r] & \Pxk \ar@{} [dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{p_{k-1}} \ar[r]^-{r_k} & \dprod[|s|=k]{{\mathcal J}(x,s)} \Yof{s} \ar@{->>}[d]^{\prod \mxk[s]{k-1}} \\ & \Nxk[k-1] \ar@{} [dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r]_{q_{k-1}} & \Qxk[k-1] \ar@{} [dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] ^{u} \ar[r]^{v} & \dprod[|s|=k]{{\mathcal J}(x,s)} \Mul[s]{k-1} \ar[d] ^{\prod\operatorname{forget}}\\ & \Mxk[k-1] \ar@{^{(}->}[r]^-{\operatorname{forget}} & \dprod[|t|<k]{{\mathcal J}(x,t)} \Yof{t} \ar[r]^-{\Psi} & \dprod[|s|=k]{{\mathcal J}(x,s)} \dprod[|v|<k]{{\mathcal J}(s,v)} \Yof{v} ~. } \noindent Here \w{\sigmaul{x}{k}:=\sigmaul{x}{k}(\tYxk{k})} only makes the outermost diagram commute up to homotopy. Furthermore, the map \w{\mxk{k}} exists (after altering \w{\sigmaul{x}{k}} within its homotopy class) if and only if there is a map \w{\alpha_k} such that \w{p_{k-1} \alpha_k = \eta_{k-1}} and \w[.]{r_k \alpha_k \sim \sigmaul{x}{k}} \end{prop} \begin{proof} The outer pullback is \w{\Mxk} by Lemma \ref{lmatchpull} and the fact that \w{\mxk[s]{k-1}} followed by the inclusion ``forget'' is \w{\sigmaul{s}{<k}} (cf.\ \S \ref{nreedy}). 
Note that the lower half of the grid involves only objects of ${\mathcal J}$ in degrees \w[,]{<k} so the fact that \w{\Yul{k}} agrees with \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E}} implies that \w{\beta_{k-1}} and \w{\eta_{k-1}} exist, by Lemma \ref{lowpiece}. The outer diagram commutes up to homotopy because \w{(\Yul{k})\rest{\partial \Jxk[k-1]}} agrees with \w{\Yxk[k-1]} and lifts \w[,]{\tYxk{k}} which is homotopy commutative. Since the upper left square is a pullback, producing a lift of \w{\beta_{k-1}:\Yof{x}\to\Nxk[k-1]} to \w{\Mxk} is equivalent to choosing a lift of \w{\eta_{k-1}:\Yof{x}\to\Qxk[k-1]} to \w{\alpha_k:\Yof{x}\to\Pxk} (with \w[).]{p_{k-1}\circ\alpha_{k}=\eta_{k-1}=q_{k-1}\circ\beta_{k-1}} The fact that we only alter \w{\sigmaul{x}{k}} within its homotopy class ensures that \w[,]{r_{k}\circ\alpha_{k}\sim\sigmaul{x}{k}} with the left hand side serving as the replacement for the right hand side. \end{proof} \begin{remark}\label{rfibcy} The problem here is that even though the two maps from \w{\Yof{x}} into \w{\prod_{{\mathcal J}(x,s)}\prod_{{\mathcal J}(s,v)} \Yof{v}} (in the lower right corner of \wref[)]{basicHHO} agree up to homotopy, this need not hold for the two maps into \w[,]{\prod_{{\mathcal J}(x,s)}\,\Mul[s]{k-1}} the middle term on the right. Thus we cannot simply apply Lemma \ref{lhpp} to work with just the upper half of \wref[.]{basicHHO} In connection with Remark \ref{rassfibcof}, one should note that all three of the objects along the right vertical edge of \wref{basicHHO} are fibrant in ${\mathcal E}$. The top and bottom objects are products of entries we assumed were fibrant. However, the middle object is a product of the usual Reedy matching spaces for the factors in the product above, so by \cite[Cor. 15.3.12 (2)]{PHirM}, our assumption of Reedy fibrancy implies these factors are also fibrant. Lemma \ref{lptreedyfib} implies that this holds in the pointed case, too. 
\end{remark} \begin{mysubsection}{The Total Higher Homotopy Operation} Following our inner induction hypothesis as in \S \ref{sdoubind}(II), assume given \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E}} and a Reedy fibrant \w[.]{\Yul{k}:\partial\Jxk\to{\mathcal E}} Factor the generalized diagonal map \w{\Psi=\Psul{x}{k}} of \wref{eqgendiag} as a trivial cofibration \w{\iota:\prod_{{\mathcal J}(x,t)}\,\Yof{t}~\xra{\simeq} F^1} followed by a fibration \w[.]{\Psi':F^{1}\to\hspace{-5 mm}\to~\prod_{{\mathcal J}(x,s)}\,\prod_{{\mathcal J}(s,v)}~\Yof{v}} (If we want a canonical choice of \w[,]{F^{1}} we will use the product of free path spaces for the non-zero factors appearing in the target and the reduced path space for each zero factor (see \S \ref{cgrms}.B), with $\iota$ defined by the constant paths for non-zero factors.) We then pull back the right vertical maps of \wref{basicHHO} to produce the following pullback grid, with fibrations indicated as usual by \w[\,\,:]{\to\hspace{-5 mm}\to} \mydiagram[\label{firstHHO}]{ \Yof{x} \ar@/_2em/[ddr]^{\eta_{k-1}} \ar@/^2em/[drrr]^{\sigmaul{x}{k}:=\sigmaul{x}{k}(\tYxk{k})} \ar@{-->}@/_4em/[dddrr]^{\varphi} \ar@/^1em/@{-->}[drr]^{\kappa} \ar@{.>}[dr]^{\alpha_k} \ar@/_3em/[dddr]_{\sigmaul{x}{<k}:=\sigmaul{x}{<k}(\Yxk[k-1])} \\ & \Pxk \ar@{} [dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{p_{k-1}} \ar[r]^{w} & F^{3} \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{\mu} \ar@{->>}[r]^-{r'_k} & \dprod[|s|=k]{{\mathcal J}(x,s)} \Yof{s} \ar@{->>}[d]^(0.5){\prod \mxk[s]{k-1}} \ar@/^4em/[dd]^{\prod \sigmaul{s}{<k}}\\ & \Qxk[k-1] \ar@{} [dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] ^(0.35){u}\ar[r]^{\gamma} & F^{2} \ar@{} [dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d]^{q} \ar@{->>}[r]^(0.35){s} & \dprod[|s|=k]{{\mathcal J}(x,s)} \Mul[s]{k-1} \ar[d]^{\prod\operatorname{forget}} \\ & \dprod[|t|<k]{{\mathcal J}(x,t)} \Yof{t} \ar[r]_(0.6){\sim}^(0.6){\iota} \ar@/_1.5em/[rr]_{\Psi} & F^{1} 
\ar@{->>}[r]^(0.3){\Psi'} & \dprod[|s|=k]{{\mathcal J}(x,s)}\dprod[|v|<k]{{\mathcal J}(s,v)} \Yof{v} } \noindent where the outermost diagram commutes up to homotopy (and the map \w{\eta_{k-1}} exists by Lemma \ref{lowpiece}). In order to construct a lift \w[,]{\Yxk:\Jxk \to {\mathcal E}} by Proposition \ref{startHHO}, we need to produce the dotted map \w{\alpha_{k}} with \w{p_{k-1}\circ\alpha_{k}=\eta_{k-1}} and \w[.]{r_{k}\circ\alpha_{k}=r'_{k}\circ w\circ\alpha_{k}\sim\sigmaul{x}{k}} The problem is that the large square is a strict pullback, but not a homotopy pullback, so the outermost diagram commuting up to homotopy is not enough. However, the top left square \emph{is} a pullback over a fibration, so by Lemma \ref{lhpp} producing \w{\alpha_{k}} is equivalent to finding a map $\kappa$ with \w{\mu\circ\kappa\sim\gamma\circ\eta_{k-1}} and \w[.]{r'_{k}\circ\kappa\sim\sigmaul{x}{k}} Moreover, Lemma \ref{lhpp} applies to the right vertical rectangle, which implies that choosing $\kappa$ is equivalent to finding a map $\varphi$ in the same homotopy class as the composite \w[,]{\iota\circ \sigmaul{x}{<k}} making the outer diagram commute. 
Thus, the only question is whether the two composites \w{\Yof{x}\to F^{2}} agree: that is, given $\varphi$, with the map $\kappa$ induced by $\varphi$ (for which necessarily \w[),]{r'_{k}\circ\kappa\sim\sigmaul{x}{k}} is it true that \w[?]{\mu\circ\kappa\sim\gamma\circ\eta_{k-1}} \end{mysubsection} \begin{defn}\label{dhho} We define the \emph{total higher homotopy operation for $x$} to be the set \w{\lra{\Yxk[k-1]}} of all homotopy classes of maps \w{\theta:\Yof{x} \to F^{2}} with \w{\varphi:=q\circ\theta\sim\iota\circ \sigmaul{x}{<k}} and \w[.]{\Psi' \circ \varphi= (\prod \sigmaul{s}{<k}) \circ \sigmaul{x}{k}} We say that \w{\lra{\Yxk[k-1]}} \emph{vanishes at} such a \w{\theta:\Yof{x}\to F^{2}} if also \w[,]{\theta\sim\gamma\circ\eta_{k-1}} and we say that \w{\lra{\Yxk[k-1]}} \emph{vanishes} if it vanishes at some $\theta$, or equivalently, if this subset of the homotopy classes contains the specified class \w[.]{[\gamma\circ\eta_{k-1}]} \end{defn} \begin{remark} By Corollary \ref{cthpp2} and the fact that \w{\prod \operatorname{forget}} is a monomorphism, the homotopy classes \w{[\theta]} making up \w{\lra{\Yxk[k-1]}} are precisely those of the form \w{[\mu \circ \kappa]} for a $\kappa$ with \w{r'_k \circ \kappa=\sigmaul{x}{k}} and \w[.]{q \circ \mu \circ\kappa \sim \iota \circ \sigmaul{x}{<k}} We may apply Corollary \ref{cthpp2} to the right vertical rectangle with horizontal fibrations, since by assumption the outer diagram commutes up to homotopy. This implies that the subset \w{\lra{\Yxk[k-1]}} of Definition \ref{dhho} is non-empty: i.e., some such $\varphi$, and so some $\kappa$, and in turn some $\theta$, exist. Thus the total higher homotopy operation \emph{is defined} at this point. The total higher homotopy operation \emph{vanishes} if there is such a $\kappa$ with \w[.]{\mu \circ \kappa \sim \gamma\circ\eta_{k-1}} \end{remark} This somewhat incongruous terminology of ``vanishing" is explained by the following. 
\begin{prop}\label{obstructk} Assume given \w{\widetilde{Y}:{\mathcal J} \to \operatorname{ho}({\mathcal E})} with ${\mathcal J}$ a \Jass, and \w{x\in\operatorname{Obj}\,{\mathcal J}} with \w[,]{|x|>n\geq k\geq 2} and let \w[,]{\Yul{k}:\partial\Jxk\to{\mathcal E}} \w[,]{\Yxk[k-1]} and \w{\tYxk{k}} be as in \S \ref{sdoubind}. We can then extend \w{\Yul{k}} to \w{\Yxk:\Jxk \to {\mathcal E}} if and only if \w{\lra{\Yxk[k-1]}} vanishes. \end{prop} \begin{proof} Note that \w{\prod\operatorname{forget}} is a monomorphism, since the class of monomorphisms is closed under categorical products and the inclusion of a limit into the underlying product is always a monomorphism. Thus, the last statement in Corollary \ref{cthpp2} implies that each value $\theta$ of \w{\lra{\Yxk[k-1]}} satisfies \w{\theta \sim \mu \circ \kappa} for some $\kappa$ with \w{r'_{k}\circ\kappa=\sigmaul{x}{k}} and \w[.]{q \circ\mu \circ \kappa \sim \iota \circ \sigmaul{x}{<k}} As a consequence, if we assume \w{\lra{\Yxk[k-1]}} vanishes at $\theta$, then there is a choice of $\kappa$ which satisfies \w[.]{\mu \circ\kappa \sim \theta \sim \gamma \circ \eta_{k-1}} After possibly altering $\kappa$ to a homotopic map \w{\kappa'} (and so altering \w[,]{\mu\circ\kappa} $\varphi$, and \w{r'_{k}\circ\kappa} within their homotopy classes), by Lemma \ref{lhpp} applied to the upper left square in \wref{firstHHO} we then have a dotted map \w{\alpha_{k}} with \w[.]{p_{k-1} \circ \alpha_{k} = \eta_{k-1}} Replacing \w{\sigmaul{x}{k}} with \w[,]{r'_k \circ \kappa'} we still have the same homotopy commutative diagram since \w[.]{\kappa' \sim \kappa} Moreover, if we disregard the dashed arrows $\kappa$ and $\varphi$, the remaining solid diagram commutes on the nose, since \w[,]{q \circ \mu \circ \kappa' = q \circ \gamma \circ \eta_{k-1}=\iota\circ\sigmaul{x}{<k}} \w[,]{s \circ \gamma \circ \eta_{k-1} = s \circ \mu \circ \kappa' = \prod \mxk[s]{k-1}\circ (r'_k \circ \kappa')} and the lower right square commutes by construction. 
The upper left pullback square in \wref{basicHHO} then yields \w{\mxk{k}} and so defines the required extension \w{\Yxk:\Jxk \to {\mathcal E}} by Lemma \ref{lraneq}. On the other hand, if \w{\lra{\Yxk[k-1]}} does not vanish, then no choice of $\varphi$ yields a map $\kappa$ with \w[.]{\mu\circ\kappa \sim\gamma\circ\eta_{k-1}} Thus \w{\eta_{k-1}} does not lift over \w[,]{p_{k-1}} so no such map \w{\mxk{k}} exists. Thus there is no extension \w[,]{\Yxk} by Lemma \ref{lraneq}. \end{proof} \begin{remark} As a consequence of Proposition \ref{obstructk}, our total higher homotopy operations are the obstructions to extending a certain choice of representative of a \wwb{k}truncation of a homotopy commutative diagram in order to produce a \wwb{k+1}truncated representative. As in any obstruction theory, if the obstruction does not vanish at a certain stage, we must backtrack and reconsider earlier choices, to see whether by altering them we can make the new obstruction vanish at the stage in question. It is natural to ask more generally whether there is any \wwb{k+1}truncated (strict) representative of the given homotopy commutative diagram. Rephrasing this in our context, we ask whether for \emph{any} choice of a \wwb{k}truncated representative our obstruction sets contain the particular class which constitutes ``vanishing''. In those cases where one can identify the ambient collections of homotopy classes of maps with one another, a positive answer to the more general question is equivalent to that particular class lying in the union of our obstruction subsets. \end{remark} \section{Separating Total Operations} \label{csto} At this level of generality, we cannot expect Proposition \ref{obstructk} to be of much help in practice: its purpose is to codify an obstruction theory for rectifying certain homotopy-commutative diagrams, using the double induction described in \S \ref{sdoubind}. 
We now explain how to factor the right vertical map of \wref{basicHHO} or \wref{firstHHO} as a composite of (mostly) fibrations with a view to decomposing the obstruction \w{\lra{\Yxk[k-1]}} into more tractable pieces. A key tool will be the following \begin{mysubsection}{The Separation Lemma} \label{sseparate} Assume given a solid commutative diagram as follows : \diagr{ \Yof{x} \ar@{.>}[dr]^{f} \ar@/_/[ddr]_{\eta_{k-1}} \ar@{-->}@/^2.5em/[drrrrrr]^(0.8){\kappa_1} \ar@{-->}@/^2em/[drrrrr]^(0.8){\kappa_2} \ar@{-->}@/^1.5em/[drrrr]^(0.8){\kappa_3}_(0.7){\cdots} \ar@{-->}@/^1em/[drrr]^(0.8){\kappa_{k-1}} \ar@{-->}@/^2.75em/[drrrrrrr]^(0.8){\kappa_{0}} &&&&&& \\ & \Pxk \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{p_{k-1}} \ar[rr] && F_{x,k}^{k-1,k+1} \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{\mu_{k-1}} \ar@{.}[r] & F_{x,k}^{3,k+1} \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r]_{u_3} & F_{x,k}^{2,k+1} \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r]_{u_2} & F_{x,k}^{1,k+1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{s} \ar@{->>}[r]_{u_1} & F_{x,k}^{0,k+1} \ar@{->>}[d] \\ & \Qxk[k-1] \ar[rr]^{\gamma_k} \ar@/_/[drr]^{\varphi^{k-1}} \ar@/_2em/[ddrrr]_(0.8){\varphi^3}^(0.35){\vdots} \ar@/_3em/[dddrrrr]_(0.8){\varphi^2} \ar@/_4em/[ddddrrrrr]_(0.8){\varphi^1} && *+[F]{F_{x,k}^{k-1,k}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d]^{r_{k-1}} \ar@{.}[r] & F_{x,k}^{3,k} \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{.}[d] \ar@{->>}[r] & F_{x,k}^{2,k} \ar[d] \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r] & *+[F=]{F_{x,k}^{1,k}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & F_{x,k}^{0,k} \ar[d]^{z} \\ &&& F_{x,k}^{k-1,k-1} \ar@{.}[r] & F_{x,k}^{3,k-1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r] \ar@{.}[d] & F_{x,k}^{2,k-1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{.}[d] \ar@{->>}[r] & F_{x,k}^{1,k-1} \ar@{}[dr] 
|>>>>>>>>>>>>>>{\mbox{\large{$\lrcorner$}}} \ar@{.}[d] \ar@{->>}[r] & F_{x,k}^{0,k-1} \ar@{.}[d] \\ &&&& F_{x,k}^{3,3} \ar@{->>}[r]_{q_{3}} & *+[F]{F_{x,k}^{2,3}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{r_{2}} \ar@{->>}[r]_{p_2} & F_{x,k}^{1,3} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & F_{x,k}^{0,3} \ar@{->>}[d]\\ &&&&& F_{x,k}^{2,2} \ar@{->>}[r]_{q_{2}} & *+[F]{F_{x,k}^{1,2}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{r_{1}} \ar@{->>}[r]_{p_1} & F_{x,k}^{0,2} \ar@{->>}[d]\\ &&&&&& F_{x,k}^{1,1} \ar@{->>}[r]_{q_1} & F_{x,k}^{0,1} } \noindent in which: \begin{itemize} \item all rectangles are pullbacks, \item the indicated maps are fibrations, \item the objects \w{F_{x,k}^{0,1}} and \w{F_{x,k}^{j,k}} are fibrant, and \item the vertical map $z$ is a monomorphism. \end{itemize} Note that as a consequence, all objects in the diagram, other than possibly \w{\Pxk} and \w[,]{\Qxk[k-1]} are fibrant, while all vertical maps \w{F_{x,k}^{j,k} \to F_{x,k}^{j,k-1}} are monomorphisms. Denote the horizontal composite \w{\Qxk[k-1]\to F_{x,k}^{1,k}} by \w{\Gamma_{k-1}} and the vertical composite \w{F_{x,k}^{j,k+1} \to F_{x,k}^{j,j+1}} by \w[,]{\Phi^{j}} so \w[,]{\Phi^{k-1}=\mu_{k-1}} and also define \w{\varphi^k} to be the identity on \w{\Qxk[k-1]} with \w[.]{q_k=\gamma_k} In addition, let \w{\beta_j} denote the vertical composite \w[.]{F_{x,k}^{j,k+1} \to F_{x,k}^{j,j+2}} Now assume that we also have a map \w{\kappa_{0}:\Yof{x}\to F_{x,k}^{0,k+1}} such that \w[.]{\Phi^{0}\circ\kappa_{0}\sim q_1\circ \varphi^{1}\circ\eta_{k-1}} Then by Lemma \ref{lhpp} applied to the right vertical rectangle (with horizontal fibrations) there exists \w{\kappa_1} with \w{u_1 \circ \kappa_1 = \kappa_0} and \w[.]{r_1 \circ \Phi^1 \circ \kappa_1 \sim \varphi^{1}\circ\eta_{k-1}} We are interested in decomposing the question of whether \w{s\circ\kappa_{1}\sim\Gamma_{k-1}\circ\eta_{k-1}} into a series of smaller questions. 
This question will become important once we demonstrate it to be an instance of asking for a total higher homotopy operation to vanish. If it is true that \w[,]{q_{2}\circ\varphi^{2}\circ\eta_{k-1} \sim \Phi^{1}\circ\kappa_{1}} then Lemma \ref{lhpp} for the next vertical rectangle implies the existence of the dashed map \w[,]{\kappa_2} such that \w{u_2 \circ \kappa_2 = \kappa_1} and \w[.]{r_2 \circ \Phi^2 \circ \kappa_2 \sim \varphi^{2}\circ\eta_{k-1}} Proceeding in this manner, and assuming the maps into the indicated ``staircase terms'' remain homotopic, even though we are only certain they agree up to homotopy after applying the relevant \w[,]{r_j} one produces \w{\kappa_{k-1}} such that \w{u_{k-1} \circ \kappa_{k-1} = \kappa_{k-2}} and \w[,]{r_{k-1} \circ \mu_{k-1} \circ \kappa_{k-1}= r_{k-1} \circ \Phi^{k-1} \circ \kappa_{k-1} \sim \varphi^{k-1}\circ\eta_{k-1}} since \w[.]{\mu_{k-1}=\Phi^{k-1}} The final step is then to ask whether \w[,]{\mu_{k-1} \circ \kappa_{k-1} \sim q_k \circ \varphi^k \circ \eta_{k-1} =\gamma_k \circ \eta_{k-1}} and if so, it follows by composing with most of the rectangle across the top of the diagram that \w[.]{s\circ\kappa_{1}\sim\Gamma_{k-1}\circ\eta_{k-1}} In fact, we will be able to characterize when this procedure is possible in terms of obstructions, which we will view as ``separated'' versions of the total higher homotopy operation corresponding to the original question. 
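In outline, the top row of the grid is a tower of fibrations
\[
F_{x,k}^{k-1,k+1}~\to\hspace{-5 mm}\to~F_{x,k}^{k-2,k+1}~\to\hspace{-5 mm}\to~\dotsb~\to\hspace{-5 mm}\to~F_{x,k}^{1,k+1}~\to\hspace{-5 mm}\to~F_{x,k}^{0,k+1}~,
\]
and the procedure just described raises \w{\kappa_{0}} through this tower one stage at a time, with \w{u_{j}\circ\kappa_{j}=\kappa_{j-1}} at each stage; the lift \w{\kappa_{j}} exists precisely when a single homotopy condition, comparing \w{\Phi^{j-1}\circ\kappa_{j-1}} with \w[,]{q_{j}\circ\varphi^{j}\circ\eta_{k-1}} is satisfied. This is the familiar pattern of lifting through a tower of fibrations, with one obstruction governing each stage.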
\end{mysubsection} \begin{slemma}\label{lseparate} Given the pullback grid as indicated above along with a choice of \w{\kappa_0} satisfying \w[,]{\Phi^0 \circ \kappa_0 \sim q_1 \circ \varphi^1 \circ \eta_{k-1}} there exists the indicated \w{\kappa_1} satisfying \w{u_1 \circ \kappa_1 = \kappa_0} and \w[.]{r_1 \circ \Phi^1 \circ \kappa_1 \sim \varphi^1 \circ \eta_{k-1}} Then \w{\kappa_1} also satisfies the constraint \w{\Gamma_{k-1}\circ\eta_{k-1}\sim s\circ\kappa_{1}} if and only if there exists an inductively chosen sequence of maps \w{\kappa_{j}:\Yof{x} \to F_{x,k}^{j,k+1}} for \w{1\leq j<k} (starting with the given \w[)]{\kappa_{1}} satisfying \begin{myeq}\label{eqkappacond} q_{j+1}\circ\varphi^{j+1}\circ\eta_{k-1}\sim \Phi^{j}\circ\kappa_{j} \text{ and } \kappa_{j-1}=u_j \circ \kappa_j~. \end{myeq} \end{slemma} The reader should note that with our conventions, in the final case \w[,]{j=k-1} the conclusion is that \w[.]{\gamma_k \circ \eta_{k-1} \sim \mu_{k-1} \circ \kappa_{k-1}} \begin{cor}\label{cseparate} If either of the two equivalent conditions of Lemma \ref{lseparate} holds, then by changing \w{\kappa_1:\Yof{x}\to F_{x,k}^{1,k+1}} within its homotopy class (and so using its image under \w{u_1} to replace \w{\kappa_0} within its homotopy class as well), but without altering $\Gamma_{k-1}$, we can lift \w{\eta_{k-1}} to the dotted map \w{f:\Yof{x}\to\Pxk} shown in the diagram. \end{cor} \begin{proof}[Proof of Corollary \protect{\ref{cseparate}}] This follows from Lemma \ref{lhpp}, since the long horizontal rectangle across the top of the diagram is a pullback over a vertical fibration. \end{proof} \begin{remark}\label{rseparate} In the case we have in mind, \w{F_{x,k}^{0,1}} will be a product of objects \w[,]{\Yof{s}} as will \w[,]{F_{x,k}^{0,k+1}} this time with \w[,]{|s|=k} and \w{F_{x,k}^{0,k}} will be the corresponding product of matching objects \w[,]{\Mul[s]{k-1}} which will be fibrant by \cite[Cor. 15.3.12 (2)]{PHirM}. 
Later, we will also have a pointed version, instead relying on pointed Reedy fibrancy and Lemma \ref{lptreedyfib}. Note that the second vertical map in each column of the grid is \emph{not} required to be a fibration, but instead a monomorphism. Recall that monomorphisms are closed under base change, and that the forgetful map from a limit to the underlying product is always a monomorphism; hence the first factor in any factorization of such a map must also be a monomorphism, so these conditions arise naturally in our cases of interest. \end{remark} \begin{proof}[Proof of Lemma \protect{\ref{lseparate}}] We will repeatedly apply Lemma \ref{lhpp} using a vertical rectangle with horizontal fibrations, with \w{\kappa_{j-1}} as $p$ and \w{\varphi^j \circ \eta_{k-1}} as $f$, showing that \w{\kappa_j} exists and satisfies \begin{myeq}\label{bottomcorner} r_j \circ \Phi^j \circ \kappa_j \sim \varphi^j \circ \eta_{k-1} \end{myeq} provided that \begin{myeq}\label{oneabove} \Phi^{j-1}\circ\kappa_{j-1} \sim q_{j}\circ\varphi^{j}\circ\eta_{k-1} ~. \end{myeq} Since \w{\kappa_1} exists by the assumption on \w[,]{\kappa_0} which is really \eqref{oneabove} for \w[,]{j=1} we begin the induction by assuming \w{\kappa_1} satisfies \eqref{oneabove} for \w[,]{j=2} in which case \w{\kappa_2} exists and satisfies \eqref{bottomcorner} for \w[.]{j=2} Now assuming the stricter condition \eqref{oneabove} for \w{j=3} implies the existence of \w{\kappa_3} satisfying \eqref{bottomcorner} for \w[,]{j=3} and so on. 
When our induction constructs \w{\kappa_{k-1}} satisfying \eqref{bottomcorner} for \w[,]{j=k-1} we assume the stricter condition \eqref{oneabove} for \w[,]{j=k} which, as noted above, is the statement that \w[.]{\gamma_k \circ \eta_{k-1} \sim \mu_{k-1} \circ \kappa_{k-1}} However, then composing with the horizontal rectangle across the top of the diagram from \w{\mu_{k-1}} to $s$ implies the constraint \w[.]{\Gamma_{k-1}\circ\eta_{k-1}\sim s\circ\kappa_{1}} On the other hand, if \w{\kappa_1} satisfies the constraint \w[,]{\Gamma_{k-1}\circ\eta_{k-1}\sim s\circ\kappa_{1}} then we proceed by applying Lemma \ref{lhpp} inductively to each square along the top of the diagram using \w{\kappa_{j-1}} for $p$ and \w{\gamma_k \circ \eta_{k-1}} followed by the composite \w{F_{x,k}^{k-1,k} \to F_{x,k}^{j,k}} for $f$, exploiting the horizontal fibrations in the rectangle. This yields \w{\kappa_j} satisfying more than \eqref{oneabove}, since the homotopy relation is satisfied up in \w[,]{F_{x,k}^{j,k}} and this also implies \eqref{bottomcorner} by construction. \end{proof} Given \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E}} and a Reedy fibrant \w{\Yul{k}:\partial\Jxk\to{\mathcal E}} as in \S \ref{sdoubind}(II), assume that we can refine diagram \wref{firstHHO} (used to define \w[,]{\lra{\Yxk[k-1]}} the total higher homotopy operation for $x$) to a pullback grid as in Lemma \ref{lseparate}. Then \w{F_{x,k}^{0,k+1}=\prod_{|s|=k}\,\prod_{{\mathcal J}(x,s)}\,\Yof{s}} and \w[,]{F_{x,k}^{0,1}=\prod_{|s|=k}\prod_{|v|<k}\prod_{{\mathcal J}(x,s)}\prod_{{\mathcal J}(s,v)}\,\Yof{v}} in conformity with Remark \ref{rseparate}, while one of the two equivalent conditions in Lemma \ref{lseparate} is the vanishing of the total higher homotopy operation. 
Recall that the vertical composite \w{F_{x,k}^{j,k+1}\to F_{x,k}^{j,j}} in this diagram is \w[.]{r_j \circ \Phi^{j}} \begin{defn}\label{dseparate} If we can produce a pullback grid as in Lemma \ref{lseparate} refining diagram \wref[,]{firstHHO} then for each \w[,]{1 \leq j < k} the associated \emph{separated higher homotopy operation for $x$ of order $j+1$}, denoted by \w[,]{\lra{\Yxk[k-1]}^{j+1}} is the set of homotopy classes of maps \w{\theta:\Yof{x}\to F_{x,k}^{j,j+1}} such that: \begin{itemize} \item if \w[,]{j < k-1} \w{r_j \circ \theta \sim \varphi^{j}\circ \eta_{k-1}} and \w{p_{j}\circ\theta} equals the composite \[ Y(x) \stackrel{\kappa_j}{\to} F_{x,k}^{j,k+1} \stackrel{\beta_j}{\to} F_{x,k}^{j,j+2} \text{ , or} \] \item if \w[,]{j=k-1} \w{r_{k-1}\circ\theta\sim\varphi^{k-1} \circ \eta_{k-1}} and \w{q_{k-1}\circ r_{k-1}\circ\theta=r_{k-2}\circ\mu_{k-1}\circ\kappa_{k-2}} (using the notation of the top two rows of vertical arrows in \S \ref{sseparate}). \end{itemize} We say that \w{\lra{\Yxk[k-1]}^{j+1}} \emph{vanishes} at \w{\theta:\Yof{x}\to F_{x,k}^{j,j+1}} as above if \w{\theta\sim q_{j+1} \circ \varphi^{j+1} \circ \eta_{k-1}} (in the notation of the Lemma), and we say it \emph{vanishes} if it vanishes at some value. \end{defn} Note that if we assume \w{q_{j} \circ \varphi^{j} \circ \eta_{k-1} \sim \Phi^{j-1} \circ \kappa_{j-1}} then by Lemma \ref{lhpp}, \w{\kappa_j} exists, while \w{\lra{\Yxk[k-1]}^{j+1}} can then be defined and by Corollary \ref{cthpp2} each \w{\theta^{j+1}} will satisfy \w[.]{\theta^{j+1} \sim \Phi^{j} \circ \kappa_j} Thus, the vanishing of some value \w{\theta^{j+1}} becomes equivalent to assuming \w[.]{q_{j+1} \circ \varphi^{j+1} \circ \eta_{k-1} \sim \Phi^{j} \circ \kappa_{j}} In other words, the vanishing of \w{\lra{\Yxk[k-1]}^{j+1}} (that is, its vanishing at some map $\theta^{j+1}$) is a necessary and sufficient condition for \w{\lra{\Yxk[k-1]}^{j+2}} to be defined. 
(For comments on coherent vanishing, see Remark \ref{cohvan}). \begin{remark} \label{cohvan} Those familiar with other definitions of higher homotopy operations may have expected a stricter, \emph{coherent vanishing} condition in order for a subsequent operation to be defined. However, this need not be made explicit in our framework, as it is a consequence of compatibility with previous choices. For example, our version of the ordinary Toda bracket, denoted by \w[,]{\lrau{f,g}{,h}} is the obstruction to having a given \wwb{2}truncated commuting diagram, satisfying just \w[,]{f \circ g=\ast} extending to a \wwb{3}truncated diagram simply by altering $h$ within its homotopy class to satisfy \w[,]{g \circ h=\ast} \emph{without} altering $g$ or $f$. Each choice of \wwb{2}truncation (of which there is at least one, by Lemma \ref{bottomY}) has an obstruction which is a subset of the homotopy classes of maps \w[.]{[\Yof{3},\Omega'\Yof{0}]} The usual Toda bracket is the union of these subsets: \w[.]{\lra{f,g,h}=\cup \lrau{f,g}{,h}} Thus, the more general existence question has a positive answer (i.e., a vanishing Toda bracket) exactly when, for \emph{some} choice of \wwb{2}truncation, the obstruction vanishes in our sense. When defining our long Toda brackets, say \w[,]{\lrau{f,g,h}{,k}} we will begin by building the \wwb{3}truncation only if the ``front" bracket \w{\lrau{f,g}{,h}} vanishes for some choice of \wwb{2}truncation, and we make an appropriate choice of $h$. At that point, we only consider values of the ``back" bracket \w{\lrau{g,h}{,k}} which use the previously chosen maps $g$ and $h$. Thus asking that our obstruction vanish is automatically a kind of coherent vanishing. If it does not vanish, we must alter our choice of \wwb{3}truncation until we obtain a coherently vanishing ``back" bracket. 
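For orientation, we recall the classical construction that underlies these brackets, in standard (unindexed) notation: given maps $W\xra{h}X\xra{g}Y\xra{f}Z$ with chosen nullhomotopies $H:CX\to Z$ of \w{f\circ g} and $K:CW\to Y$ of \w[,]{g\circ h} the two resulting maps on cones agree on $W$ (both restricting to \w[),]{f\circ g\circ h} and so glue to a map
\[
\Sigma W~\cong~CW\cup_{W}CW~\xra{(f\circ K)\,\cup\,(H\circ Ch)}~Z~.
\]
The classical Toda bracket \w{\lra{f,g,h}} is the subset of \w{[\Sigma W,Z]\cong[W,\Omega Z]} consisting of all classes arising this way; under this identification the ambient set corresponds to the set \w{[\Yof{3},\Omega'\Yof{0}]} appearing above.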
Once again, one interpretation of the traditional long Toda bracket would then be a union \w[,]{\cup\lrau{f,g,h}{,k}} this time indexed over all possible strict rectifications of \w[,]{\lra{f,g,h}} so all such $3$-truncations. \end{remark} \begin{mysubsection}{Applying the Separation Lemma} \label{sasl} By Proposition \ref{obstructk}, a necessary and sufficient condition for the inner induction step in \S \ref{sdoubind} is the vanishing of the total higher homotopy operation \w{\lra{\Yxk[k-1]}} -- \ that is, by Lemma \ref{lraneq}, the existence of a suitable map \w{\mxk{k}} in \wref[.]{basicHHO} According to Proposition \ref{startHHO}, this in turn is equivalent to having a map $\kappa$ in \wref{firstHHO} satisfying a certain homotopy-commutativity requirement. In order to apply Lemma \ref{lseparate}, we need to break up the lower right square of \wref{firstHHO} into a pullback grid (which then induces a horizontal decomposition of the upper right square). This will be done by decomposing the lower right vertical map, which is a product (over \w[,]{{\mathcal J}(x,s)} with \w[)]{|s|=k} of the forgetful maps \w{\Mul[s]{k-1}\to\prod_{{\mathcal J}(s,v)}\,\Yof{v}} (with \w[).]{|v|\leq k-1} The target of this forgetful map can be further broken up as in \wref{usepsi} to a product over \w{|v|=k-1} and one over \w[.]{|v|<k-1} \end{mysubsection} \begin{example}\label{eglthree} When \w[,]{|s|=3} we factor the top horizontal arrow in \wref{usepsi} as a weak equivalence followed by a fibration: \begin{myeq}\label{eqpoinmat} \Mul[s]{2}~\stackrel{\sim}{\to}~F_{s,2}^{1,3}~\to\hspace{-5 mm}\to~ \dprod[|v|=2]{{\mathcal J}(s,v)}\, \Yof{v}~. 
\end{myeq} \noindent Similarly, we can factor the map in \wref{eqlowpiece} from \w{\Nul[s]{1}} to the product of lower degree copies of \w{\Yof{t}} to produce a factorization \begin{myeq}\label{eqothermat} \Mul[s]{2}~\to~\Nul[s]{1}~\stackrel{\simeq}{\hookrightarrow}~ G_{s,2}^{1,3}~\to\hspace{-5 mm}\to~\dprod[|t|<2]{{\mathcal J}(s,t)}\, \Yof{t} \end{myeq} \noindent for the lower degree forgetful map in \wref[.]{usepsi} Together these yield a factorization of the full forgetful map: \begin{myeq}\label{eqfforget} \Mul[s]{2}~\to~F_{s,2}^{1,3}\times G_{s,2}^{1,3}~\to\hspace{-5 mm}\to~\dprod[|v|<3]{{\mathcal J}(s,v)}\,\Yof{v}~, \end{myeq} \noindent with the second map a fibration and the first necessarily a monomorphism, since the composite is a monomorphism as the inclusion of a limit into the underlying product. Precomposing with structure maps \w{\Yof{s}\to\hspace{-5 mm}\to\Mul[s]{2}} (which are fibrations, because we assumed our diagram $Y$ was Reedy fibrant) yields \begin{myeq}\label{eqpfforget} \dprod[|s|=3]{{\mathcal J}(x,s)}\,\Yof{s}~\to\hspace{-5 mm}\to~ \dprod[|s|=3]{{\mathcal J}(x,s)}\,\Mul[s]{2}~\to~ \dprod[|s|=3]{{\mathcal J}(x,s)}\,(F_{s,2}^{1,3}\times G_{s,2}^{1,3})~\to\hspace{-5 mm}\to~ \dprod[|s|=3]{{\mathcal J}(x,s)}\,\dprod[|v|\leq 2]{{\mathcal J}(s,v)}\,\Yof{v}~. \end{myeq} \noindent This is a refinement of the right column in \wref[,]{firstHHO} in which all maps but the second are fibrations, and that second map is a monomorphism. 
Taking \wref{eqpfforget} as the right column in the diagram of Lemma \ref{lseparate}, we pull it back along the bottom row of \wref{firstHHO} to get the two right columns of the intended diagram, as shown in \wref[.]{eqegthree} For the next column, note that the two maps out of \w{\Qxk[2]} in \wref{firstHHO} induce a map \w[,]{\Qxk[2] \to F_{x,3}^{1,2}} in the notation of \wref[.]{eqegthree} Factoring this as an acyclic cofibration followed by a fibration: $$ \Qxk[2]~\stackrel{\simeq}{\hookrightarrow}~F_{x,3}^{2,2}~\to\hspace{-5 mm}\to~F_{x,3}^{1,2} $$ and taking pullbacks yields the required pullback grid: \mydiagram[\label{eqegthree}]{ \Pxk[3] \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{p_2} \ar[r] & F_{x,3}^{2,4} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & F_{x,3}^{1,4} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & \dprod[|s|=3]{{\mathcal J}(x,s)} \Yof{s} \ar@{->>}[d]\\ \Qxk[2] \ar[dd] \ar[r] \ar[dr]^{\sim} & *+[F]{F_{x,3}^{2,3}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & *+[F=]{F_{x,3}^{1,3}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & \dprod[|s|=3]{{\mathcal J}(x,s)} \Mul[s]{2} \ar[d] \\ & F_{x,3}^{2,2} \ar@{->>}[r] & *+[F]{F_{x,3}^{1,2}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & \dprod[|s|=3]{{\mathcal J}(x,s)} F_{s,2}^{1,3} \times G_{s,2}^{1,3} \ar@{->>}[d] \\ \dprod[|t|<3]{{\mathcal J}(x,t)} \Yof{t} \ar[rr] && F_{x,3}^{1,1} \ar@{->>}[r] & \dprod[|s|=3]{{\mathcal J}(x,s)}\dprod[|v|<3]{{\mathcal J}(s,v)} \Yof{v} ~. } Note that \w{F^{1,1}_{x,3}} is the \w{F^{1}} of Definition \ref{dhho}, while \w{F_{x,3}^{1,3}} is \w{F^{2}} -- \ that is, the target of our total higher operation $\theta$. 
Separation Lemma \ref{lseparate} tells us that this operation vanishes precisely when the following two ``separated'' operations vanish: \begin{enumerate} \renewcommand{(\alph{enumi})~}{(\alph{enumi})~} \item The first, landing in \w[,]{F_{x,3}^{1,2}} is defined by the two composite maps from \w[;]{\Yof{x}} \item The vanishing of the first yields a second map into \w[,]{F_{x,3}^{2,3}} where this second map defines the values of the second of the ``separated'' operations, and the formally defined first map defines the possible vanishing of such operations. \end{enumerate} \end{example} This example is indicative of the general pattern, described by: \begin{lemma}\label{lprodgrid} Assume given \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E}} and a Reedy fibrant \w{\Yul{k}:\partial\Jxk\to{\mathcal E}} as in \S \ref{sdoubind}(II). If for each \w{\Mul[s]{k-1}} we have a pullback grid as in Lemma \ref{lseparate}, these induce a pullback grid: \myqdiag[\label{eqsepgrid}]{ \Pxk \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{p_{k-1}} \ar[r] & F_{x,k}^{k-1,k+1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & F_{x,k}^{k-2,k+1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{.>>}[r] & F_{x,k}^{1,k+1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & \dprod[|s|=k]{{\mathcal J}(x,s)} \Yof{s} \ar@{->>}[d] \\ \Qxk[k-1] \ar[ddd] \ar[r] \ar@/_/[dr]^{\sim} \ar@/_2em/[ddrr]_{\sim} & *+[F]{F_{x,k}^{k-1,k}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & F_{x,k}^{k-2,k} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{.>>}[r] & *+[F=]{F_{x,k}^{1,k}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & \dprod[|s|=k]{{\mathcal J}(x,s)} \Mul[s]{k-1} \ar[d] \\ & F_{x,k}^{k-1,k-1} \ar@{->>}[r] & *+[F]{F_{x,k}^{k-2,k-1}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{.>>}[d] \ar@{.>>}[r] & F_{x,k}^{1,k-1} \ar@{}[dr] 
\ar@{.>>}[d] \ar@{->>}[r] & \dprod[|s|=k]{{\mathcal J}(x,s)} F_{s,k-1}^{k-2,k}\times G_{s,k-1}^{k-2,k} \ar@{.>>}[d] \\ && \ar@{.>>}[r] & *+[F]{F_{x,k}^{1,2}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & \dprod[|s|=k]{{\mathcal J}(x,s)} F_{s,k-1}^{1,k}\times G_{s,k-1}^{1,k} \ar@{->>}[d] \\ \dprod[|t|<k]{{\mathcal J}(x,t)} \Yof{t} \ar[rrr]^{\sim} &&& F_{x,k}^{1,1} \ar@{->>}[r] & \dprod[|s|=k]{{\mathcal J}(x,s)}\dprod[|v|<k]{{\mathcal J}(s,v)} \Yof{v} } \noindent suitable for lifting \w{\eta_{k-1}:\Yof{x}\to\Qxk[k-1]} to \w[.]{\Pxk} \end{lemma} Note that the two top right slots in \wref{eqsepgrid} are consistent with Remark \ref{rseparate}. \begin{proof} We prove the Lemma by induction on $k$, beginning with \wref{eqegthree} for \w[.]{k=3} We start with a decomposition \begin{myeq}\label{eqfacfor} \Mul[s]{k-1}~\to~F_{s,k-1}^{k-2,k}\to\hspace{-5 mm}\to\dotsc\to\hspace{-5 mm}\to~F_{s,k-1}^{2,k}~\to\hspace{-5 mm}\to~ F_{s,k-1}^{1,k}~\to\hspace{-5 mm}\to~\dprod[|v|=k-1]{{\mathcal J}(s,v)} \Yof{v} \end{myeq} \noindent of the top map in \wref[,]{usepsi} where all but the first map are fibrations; this first map is a monomorphism since the composite is such, being the inclusion of a limit into the underlying product. 
This is generated using Step \w{k-1} in the induction, by precomposing the top row in \wref{eqsepgrid} for \w{k-1} with the map \w{\Mul[s]{k-1}\to\Pul[s]{k-1}} of \wref[.]{basicHHO} For \w{\Nul[s]{k-1}\to\Qul[s]{k-1}\to\prod_{|s|=k}\,\Mul[s]{k-1}} (the middle row of \wref[),]{basicHHO} we pull back the right column of \wref{eqsepgrid} for \w{k-1} along the generalized diagonal $\Psi$ of \wref{eqgendiag} to obtain a sequence of pullbacks \mydiagram[\label{eqfgpull}]{ G_{s,k-1}^{j,k} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[r] & \dprod[|v|=k-1]{{\mathcal J}(s,v)} F_{v,k-2}^{j,k-1} \times G_{v,k-2}^{j,k-1} \ar@{->>}[d] \\ \dprod[|t|<k-1]{{\mathcal J}(s,t)} \Yof{t} \ar[r]^-{\Psi} & \dprod[|v|=k-1]{{\mathcal J}(s,v)}\dprod[|u|<k-1]{{\mathcal J}(v,u)} \Yof{u}, } \noindent for each \w[,]{1 \leq j \leq k-3} where the right vertical map is a fibration by the induction assumption. For \w[,]{j=k-2} we instead factor the composite of the top row in: \mydiagram[\label{eqlastfact}]{ \Nul[s]{k-2} \ar[rr]^{q_{k-2}} \ar@ { >->}[rrd]^{\simeq}_{i} && \Qul[s]{k-2} \ar[rr] && G_{s,k-1}^{k-3,k} \\ && G_{s,k-1}^{k-2,k} \ar@{->>}[rru]_{r} && } \noindent into an acyclic cofibration $i$ followed by a fibration $r$, as shown (where the top maps are those of \wref{basicHHO} and \wref{eqsepgrid} for \w[,]{k-1} respectively). Precomposing this with the map \w{\Mul[s]{k-1} \to \Nxk[k-2]} of \wref{basicHHO} and then taking products as in Example \ref{eglthree} yields the desired factorization of the forgetful map: \begin{myeq}\label{eqotherfacfor} \Mul[s]{k-1} \to F_{s,k-1}^{k-2,k} \times G_{s,k-1}^{k-2,k}~\dotsc \to\hspace{-5 mm}\to~F_{s,k-1}^{2,k} \times G_{s,k-1}^{2,k}~\to\hspace{-5 mm}\to~ F_{s,k-1}^{1,k} \times G_{s,k-1}^{1,k}~\to\hspace{-5 mm}\to~\dprod[|v|<k]{{\mathcal J}(s,v)}\, \Yof{v}. 
\end{myeq} Now factor the next generalized diagonal \w{\Psul{x}{k-1}} as an acyclic cofibration followed by a fibration \w[.]{p^{1,1}:F_{x,k}^{1,1}\to\hspace{-5 mm}\to\dprod[|v|<k]{{\mathcal J}(s,v)}\,\Yof{v}} Pulling back the tower \wref{eqotherfacfor} along \w{p^{1,1}} yields the second column on the right in our new grid \wref[.]{eqsepgrid} The total higher operation will then land in the twice-boxed pullback object \w[.]{F_{x,k}^{1,k}} To construct the $j$-th column from the right \wb[,]{j\geq 2} with entries \w[,]{F_{x,k}^{j+1,\bullet}} factor the previously defined map \w{\Qxk[k-1]\to F_{x,k}^{j,j+1}} as an acyclic cofibration \w{\Qxk[k-1]\stackrel{\sim}{\to} F_{x,k}^{j+1,j+1}} followed by a fibration \w[.]{p:F_{x,k}^{j+1,j+1} \to\hspace{-5 mm}\to F_{x,k}^{j,j+1}} We then pull back the \wwb{j-1}st column along $p$ to form the $j$-th column of \wref[.]{eqsepgrid} Note that upon completion of this process, the map \w{\Qxk[k-1] \to F_{x,k}^{k-1,k}} need not be a fibration, but the vertical maps in the upper left square are fibrations, by successive base-change from the product of maps \w[,]{\Yof{s}\to\hspace{-5 mm}\to\Mul[s]{k-1}} each of which is a fibration by Reedy fibrancy of \w[.]{\Yul{k}} \end{proof} \begin{defn}\label{dsepdiag} The diagram of Lemma \ref{lseparate}, when constructed inductively as in Lemma \ref{lprodgrid}, will be called a \emph{separation grid} for \w[.]{\Yul{k}} \end{defn} Combining Lemma \ref{lprodgrid} with the Separation Lemma \ref{lseparate} and Corollary \ref{cseparate} yields the following refinement of Proposition \ref{obstructk}: \begin{thm}\label{unptedThm} Assume given \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E}} and a Reedy fibrant \w{\Yul{k}:\partial\Jxk\to{\mathcal E}} as in \S \ref{sdoubind}(II) for \w[.]{|x|>n\geq k\geq 2} Then our total higher homotopy operation separates into a sequence of $k-1$ obstructions and the following are equivalent: \begin{enumerate} \item A 
further extension to \w{\Yxk:\Jxk \to {\mathcal E}} exists; \item The total operation \w{\lra{\Yxk[k-1]}} vanishes; \item The associated sequence \w{\lra{\Yxk[k-1]}^{j+1}} \wb{1\leq j<k} of separated higher homotopy operations of \S \ref{dseparate} vanish (so in particular each in turn is defined). \end{enumerate} \end{thm} \begin{remark}\label{rgenpict} The machinery of the separated higher homotopy operations has been formulated to agree with (long) Toda brackets in pointed cases. We shall deal with these in Section \ref{cltbmp}, after a more detailed study of the special issues involving pointed diagrams. In particular, the role of \w{\Qxk[k-1]} will be played by a point, so the factorizations of maps out of it as a weak equivalence followed by a fibration will be provided by taking reduced path objects on the target. However, we first present a simple example of the (less familiar) general unpointed situation before focusing on the details for the pointed situation. \end{remark} \section{Rigidifying Simplicial Diagrams up to Homotopy} \label{crsd} Commonly occurring instances of homotopy-commutative diagrams which need to be rectified are restricted (co)simplicial objects, also known as $\Delta$-simplicial objects (i.e., without (co)degeneracies). Examples appear in \cite[\S 6]{BJTurnR}, \cite[\S 4.1]{BJTurnHA}, \cite[\S 5]{BlaAI}, and implicitly in \cite{MayG,SegCC,PrasmS}, among others. We now show how the double inductive approach described in \S \ref{sdoubind} applies to such diagrams.
We denote the objects of the simplicial indexing category $\Delta$ by \w[,]{\mathbf{0},\mathbf{1},\dotsc,\mathbf{n},\dotsc} with the value of \w{Y:\Delta\to{\mathcal E}} at $\mathbf{n}$ thus denoted by \w{\Yof{\mathbf{n}}} instead of the usual \w[.]{Y_{n}} \begin{mysubsection}{$1$-Truncated $\Delta$-Simplicial Objects} \label{sotdso} We start the outer induction with \w[.]{n=0} Our $1$-truncated diagram in \w{\operatorname{ho}({\mathcal E})} then consists of a pair of parallel arrows, so we have only the stage \w{k=0} in the inner induction: this means choosing representatives for each of the two face maps \w[.]{d_0,d_1:\Yof{\mathbf{1}} \to \Yof{\mathbf{0}}} Making this Reedy fibrant means changing the combined map \w{(d_0,d_1):\Yof{\mathbf{1}} \to \Yof{\mathbf{0}}^{d_0} \times\Yof{\mathbf{0}}^{d_1}} into a fibration (i.e., factoring this as \w{\Yof{\mathbf{1}}\stackrel{\simeq}{\hookrightarrow}\Yof{\mathbf{1}}'\to\hspace{-5 mm}\to \Yof{\mathbf{0}}\times\Yof{\mathbf{0}}} and replacing \w{\Yof{\mathbf{1}}} by \w[).]{\Yof{\mathbf{1}}'} \end{mysubsection} \begin{mysubsection}{$2$-Truncated $\Delta$-Simplicial Objects} \label{sttdso} For \w[,]{n=1} $x$ is $\mathbf{2}$ and \w{\Yul{1}:\partial \Jul[1]{0} \to {\mathcal E}} is the Reedy fibrant diagram just constructed. To define \w{\Yul[\mathbf{2}]{0}:\Jul[\mathbf{2}]{0} \to {\mathcal E}} at stage \w{k=0} in the inner induction, pick representatives for each of the full length composites: in this case, the three maps \w{\Yof{\mathbf{2}}\to\Yof{\mathbf{0}}} denoted by \w[,]{d_0 d_1} \w[,]{d_0 d_2} and \w{d_1 d_2} in canonical form. This means \w{\Mul[\mathbf{2}]{0}} is the product of three copies of \w{\Yof{\mathbf{0}}} indexed by \w{d_i d_j} \wb[,]{0 \leq i < j \leq 2} and our choice of representatives yields a single map \w{\mxk[\mathbf{2}]{0}} into the product. 
At stage \w[,]{k=1} we must first choose representatives for the components of \w{\sigmaul{\mathbf{2}}{1}(\tYxk[\mathbf{2}]{1})} -- \ that is, for the maps \w[,]{d_{0}} \w[,]{d_{1}} and \w{d_{2}:\Yof{\mathbf{2}}\to\Yof{\mathbf{1}}} (which are all the maps \w{\mathbf{2}\to\mathbf{1}} in ${\mathcal J}$). The generalized diagonal map \w{\Psi=\Psul{\mathbf{2}}{1}} of \wref{eqgendiag} takes \w{\Yof{\mathbf{0}}^{d_i d_j}} \wb{i<j} to the product \w[,]{\Yof{\mathbf{0}}^{d_i d_j}\times \Yof{\mathbf{0}}^{d_{j-1} d_i}} in accordance with the simplicial identities. Note that the target of \w{\sigmaul{\mathbf{2}}{1}} is \w[.]{\prod_{0 \leq j \leq 2}\,\Yof{\mathbf{1}}^{d_j}} Thus we have a pair of maps into a pullback diagram: \mydiagram[\label{eqtwtrunc}]{ \Yof{\mathbf{2}} \ar@/_1em/[ddr]_{\mxk[\mathbf{2}]{0}} \ar@/^1em/@{-->}[drrr]^{\sigmaul{\mathbf{2}}{1}(\tYxk[\mathbf{2}]{1})=(d_0,d_1,d_2)} \ar@{.>}[dr]^{\mxk[\mathbf{2}]{1}}\\ & \Mul[\mathbf{2}]{1} \ar@{}[drr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[rr] && \displaystyle \prod_{j \leq 2} \Yof{\mathbf{1}}^{d_j} \ar@{->>}[d] \\ & \Mul[\mathbf{2}]{0} \ar@{=}[r] & \displaystyle \prod_{i<j\leq2} \Yof{\mathbf{0}}^{d_i d_j} \ar[r]^-{\Psi} & \displaystyle \prod_{j \leq 2}\ \displaystyle \prod_{i \leq 1}\, \Yof{\mathbf{0}}^{d_i d_j} ~. } \noindent where the outer diagram commutes up to homotopy (for any choice of representatives for \w[,]{d_{0}} \w[,]{d_{1}} and \w[).]{d_{2}} The dotted map exists by Lemma \ref{lhpp} (after possibly altering the dashed map within its homotopy class), yielding a full $2$-truncated $\Delta$-simplicial object (which rectifies \w[)]{\tYxk[\mathbf{2}]{1}} by Lemma \ref{lraneq}.
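The simplicial identity invoked here can be checked concretely. In the standard combinatorial model the face operator \w{d_i} deletes the $i$-th entry of a tuple, and then \w{d_i d_j = d_{j-1} d_i} for \w[.]{i<j} The following Python sketch is purely illustrative (the tuple model and function names are our own, not part of the formal development); it verifies the identity for all pairs acting on a $3$-simplex:

```python
from itertools import combinations

def d(i, t):
    """Face operator d_i: delete the i-th entry of the tuple t."""
    return t[:i] + t[i + 1:]

simplex = (0, 1, 2, 3)  # vertices of a 3-simplex

# simplicial identity d_i d_j = d_{j-1} d_i for i < j
for i, j in combinations(range(4), 2):
    assert d(i, d(j, simplex)) == d(j - 1, d(i, simplex))
```

This is exactly the relation that makes the generalized diagonal $\Psi$ well defined on the factors indexed by canonical composites \w{d_i d_j} with \w[.]{i<j}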
Changing \w{\mxk[\mathbf{2}]{1}} into a fibration provides us with a Reedy fibrant replacement \w[.]{\Yul{2}:\partial\Jul{2}\to{\mathcal E}} \end{mysubsection} \begin{mysubsection}{$3$-Truncated $\Delta$-Simplicial Objects} \label{sthtdso} At stage \w{n=2} (with \w[),]{x=\mathbf{3}} for the first time we are in the situation of \S \ref{dhho}, somewhat simplified by the fact that we have a single object $\mathbf{n}$ in each grading $n$ of \w[.]{{\mathcal J}=\Delta} In particular, we will have no separated operations yet. In the inner induction, for \w[,]{k=0} we choose representatives for each full length map in \w{\tYxk[\mathbf{3}]{2}} to obtain \w[;]{\Yul[\mathbf{3}]{0}} the full length composites are the four maps \w{d_i d_j d_{\ell}} with \w[,]{0 \leq i < j < \ell \leq 3} so \w{\Mul[\mathbf{3}]{0}} is a product of four copies of \w{\Yof{\mathbf{0}}} indexed by these maps, and the generalized diagonal of \wref{eqgendiag} takes each copy of \w{\Yof{\mathbf{0}}^{d_{i}d_{j}d_{\ell}}} to the product $$ \Yof{\mathbf{0}}^{d_{i}d_{j}d_{\ell}} \times \Yof{\mathbf{0}}^{d_{j-1}d_{i}d_{\ell}}\times \Yof{\mathbf{0}}^{d_{\ell-2} d_{i}d_{j}}~. $$ \noindent We make an initial choice (to be modified below) of \w{\sigmaul{\mathbf{3}}{1}(\tYxk[\mathbf{3}]{1})} (i.e., of each composite \w{d_{j}d_{\ell}:\mathbf{3}\to\mathbf{1}} for \w{0\leq j<\ell\leq 3} within its homotopy class).
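As a concrete sanity check, still in the illustrative tuple model of face operators (our own device, not part of the paper's formalism), one can enumerate the four full-length composites \w{d_i d_j d_{\ell}} and confirm the two rewrites \w{d_{j-1} d_i d_{\ell}} and \w{d_{\ell-2} d_i d_j} appearing in the generalized diagonal just described:

```python
from itertools import combinations

def d(i, t):
    """Face operator d_i: delete the i-th entry of the tuple t."""
    return t[:i] + t[i + 1:]

def compose(ops, t):
    """Apply face operators right to left: compose((i, j, l), t) = d_i d_j d_l (t)."""
    for i in reversed(ops):
        t = d(i, t)
    return t

simplex = (0, 1, 2, 3)
# the four full-length composites d_i d_j d_l with 0 <= i < j < l <= 3
canonical = list(combinations(range(4), 3))
assert len(canonical) == 4

for i, j, l in canonical:
    target = compose((i, j, l), simplex)
    assert target == compose((j - 1, i, l), simplex)  # d_i d_j = d_{j-1} d_i
    assert target == compose((l - 2, i, j), simplex)  # commuting d_l to the front
```

Each rewrite is obtained by iterating the pairwise simplicial identity, which is why the diagonal lands in a product of three copies of \w{\Yof{\mathbf{0}}} for each canonical index.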
Again this yields a pair of maps into a pullback diagram: \mydiagram[\label{eqtwotr}]{ \Yof{\mathbf{3}} \ar@/_1em/[ddr]_{\mxk[\mathbf{3}]{0}} \ar@/^1em/@{-->}[drrrr]^{\sigmaul{\mathbf{3}}{=1}(\tYxk[\mathbf{3}]{1})} \ar@{.>}[dr]^{\mxk[\mathbf{3}]{1}} \\ & \Mul[\mathbf{3}]{1} \ar@{}[drrr] |<<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[rrr] &&& \displaystyle \prod_{j <\ell \leq 3} \Yof{\mathbf{1}}^{d_j d_{\ell}} \ar@{->>}[d] \\ & \Mul[\mathbf{3}]{0} \ar@{=}[r] & \displaystyle \prod_{i<j<\ell\leq3} \Yof{\mathbf{0}}^{d_i d_j d_{\ell}} \ar[rr]^-{\Psul{\mathbf{3}}{1}} && \displaystyle \prod_{j<\ell \leq 3} \displaystyle \prod_{i \leq 1} \Yof{\mathbf{0}}^{d_i d_j d_{\ell}} ~. } \noindent where the right vertical is a product of fibrations \w{\Yof{\mathbf{1}}\to\Mul[\mathbf{1}]{0}=\prod_{i \leq 1} \Yof{\mathbf{0}}^{d_i}} (by Reedy fibrancy of \w[).]{\Yul{\mathbf{2}}} Since \w{\tYxk[\mathbf{3}]{2}} is homotopy commutative, by Lemma \ref{lhpp} we obtain a dotted map \w{\mxk[\mathbf{3}]{1}} (after altering the dashed map \ -- \ that is, our choice for each \w{d_{j}d_{\ell}} -- \ within its homotopy class).
By Lemma \ref{lraneq} this yields \w[,]{\Yul[\mathbf{3}]{1}} still representing \w[.]{\tYxk[\mathbf{3}]{2}} It is at stage \w{k=2} in the inner induction that we first encounter a possible obstruction: we must now choose representatives for \w{d_{\ell}:\mathbf{3}\to\mathbf{2}} \wb{0 \leq \ell \leq 3} in the homotopy class given by \w[.]{\tYxk[\mathbf{3}]{2}} As in \wref[,]{usepsi} we know that the target of the forgetful map from \w{\Mul[\mathbf{3}]{1}} is the product of the lower left and upper right corners of \wref[.]{eqtwotr} Thus \w{\Psi=\Psul{\mathbf{3}}{2}} is a product of two maps: the first taking each factor \w{\Yof{\mathbf{1}}^{d_j d_{\ell}}} \wb{0 \leq j < \ell \leq 3} diagonally to a product \w[,]{\Yof{\mathbf{1}}^{d_j d_{\ell}}\times \Yof{\mathbf{1}}^{d_{\ell-1} d_j}} and the second taking \w{\Yof{\mathbf{0}}^{d_i d_j d_{\ell}}} \wb{0 \leq i < j < \ell \leq 3} diagonally to the product \w[.]{\Yof{\mathbf{0}}^{d_i d_j d_{\ell}}\times\Yof{\mathbf{0}}^{d_{i}d_{\ell-1} d_{j}} \times\Yof{\mathbf{0}}^{d_{j-1} d_{\ell-1} d_{i}}} As in \S \ref{dhho}, we now factor $\Psi$ as a trivial cofibration to \w{F^1} followed by a fibration \w[,]{\Psi'} and pull back the product of the forgetful maps $$ \Mul[\mathbf{2}]{1}~\to~ \prod_{j \leq 2} \Yof{\mathbf{1}}^{d_j} \times \prod_{i<j\leq2} \Yof{\mathbf{0}}^{d_i d_j} $$ \noindent as in \wref[,]{eqtwtrunc} indexed by the first face maps \w{d_{\ell}:\mathbf{3}\to\mathbf{2}} \wb{0 \leq\ell\leq 3} along \w{\Psi'} to obtain a ``potential mapping diagram'' as in \wref[:]{firstHHO} $$ \xymatrix@R=39pt@C=14pt{ \Yof{\mathbf{3}} \ar@/_2em/[ddr]^{\eta_{1}} \ar@{-->}@/^2em/[drrr]^{\sigmaul{\mathbf{3}}{2}(\tYxk[\mathbf{3}]{2})=(d_0,d_1,d_2,d_3)} \ar@{-->}@/_4em/[dddrr]^{\varphi} \ar@/^1em/@{-->}[drr]^{\kappa} \ar@{.>}[dr]^{\alpha_{2}} \ar@/_3em/[dddr]_{\sigmaul{\mathbf{3}}{<2}(\Yul[\mathbf{3}]{1})} \\ & \Pul[\mathbf{3}]{2} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{p_{1}} \ar[r] & F^3 \ar@{}[dr]
|<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{\mu} \ar@{->>}[r]^-{r'_2} & \displaystyle \prod_{\ell \leq 3} \Yof{\mathbf{2}} \ar@{->>}[d]^{\prod \mxk[s]{k-1}} \\ & \Qul[\mathbf{3}]{1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r]^{\gamma} & {F^2} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d]^{q} \ar@{->>}[r]^{s} & \displaystyle \prod_{\ell \leq 3} \Mul[\mathbf{2}]{1} \ar[d] \\ & \displaystyle \prod_{j<\ell\leq 3} \Yof{\mathbf{1}}^{d_j d_{\ell}} \times\hspace*{-3mm} \displaystyle \prod_{i<j<\ell\leq 3}\Yof{\mathbf{0}}^{d_i d_j d_{\ell}} \ar[r]_-{\sim} & F^1 \ar@{->>}[r]^(0.2){\Psi'} & \displaystyle \prod_{\ell \leq 3}( \displaystyle \prod_{j \leq 2} \Yof{\mathbf{1}}^{d_j d_{\ell}} \times \hspace*{-2mm} \displaystyle \prod_{i<j \leq 2} \hspace*{-1mm} \Yof{\mathbf{0}}^{d_{i}d_{j}d_{\ell}}) } $$ \noindent Note that as in \S \ref{dhho}, we may choose \w{F^{1}} to be a product of free path spaces, so we can think of $\varphi$ as a choice of homotopies between the various decompositions in \w{\Yul{2}} of maps \w{\mathbf{3}\to\mathbf{0}} in $\Delta$. As the right vertical rectangular pullback has horizontal fibrations, we can apply Lemma \ref{lhpp} and the fact that the original outermost diagram commutes up to homotopy (because \w{\tYxk[\mathbf{3}]{2}} is homotopy commutative) to deduce that there is a map $\varphi$ in the correct homotopy class, yielding $\kappa$ as indicated. The question is whether \w[.]{\mu \kappa \sim \gamma \eta_{1}} By Corollary \ref{cthpp2}, our secondary operation consists precisely of those \w{[\theta]} satisfying \w[.]{\theta \sim \mu \circ \kappa} Thus, the question is answered in the affirmative precisely when our secondary operation \w{\lra{\Yul[\mathbf{3}]{2}}} vanishes. In that case, by Lemma \ref{lhpp} applied to the upper left square, with $\mu$ a fibration, we can find \w{\kappa' \sim \kappa} satisfying \w[,]{\mu \circ \kappa'=\gamma \circ \eta_1} so inducing the dotted \w{\alpha_2} by the pullback property. 
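As an aside, the triple rewrites entering the second component of \w{\Psi=\Psul{\mathbf{3}}{2}} above \ -- \ sending \w{d_i d_j d_{\ell}} to \w{d_i d_{\ell-1} d_j} and to \w{d_{j-1} d_{\ell-1} d_i} \ -- \ can again be confirmed in the illustrative tuple model of face operators (our own device, not part of the formal development):

```python
from itertools import combinations

def d(i, t):
    """Face operator d_i: delete the i-th entry of the tuple t."""
    return t[:i] + t[i + 1:]

def compose(ops, t):
    """Apply face operators right to left: compose((i, j, l), t) = d_i d_j d_l (t)."""
    for i in reversed(ops):
        t = d(i, t)
    return t

simplex = (0, 1, 2, 3)

# the two rewrites of d_i d_j d_l used in the second component of Psi
for i, j, l in combinations(range(4), 3):
    target = compose((i, j, l), simplex)
    assert target == compose((i, l - 1, j), simplex)      # swap the last pair
    assert target == compose((j - 1, l - 1, i), simplex)  # swap first pair, then last
```

Both equalities follow from iterating the pairwise identity \w{d_a d_b = d_{b-1} d_a} \wb[,]{a<b} which is why $\Psi$ is forced by the simplicial identities once canonical representatives are fixed.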
We then alter the map labeled \w{(d_0,d_1,d_2,d_3)} within its homotopy class by instead using \w[,]{r'_2 \circ \kappa'} which will make the entire diagram now commute, since \[ \prod \mxk[s]{k-1} \circ (r'_2 \circ \kappa')=s \circ \mu \circ \kappa'=s \circ \gamma \circ \eta_{1} \] and \w[.]{q \circ \mu \circ \kappa' = q \circ \gamma \circ \eta_{1} =\iota\circ\sigmaul{\mathbf{3}}{<2}} Thus, we obtain a full $3$-truncated $\Delta$-simplicial object \w{\Yul{3}} (if we wish to proceed further, we take a Reedy fibrant replacement). If \w{\lra{\Yul[\mathbf{3}]{2}}} does not vanish, then there is no way to extend this \w{\Yul{2}} to a full $3$-truncated object. \end{mysubsection} \begin{remark}\label{robstr} As with any obstruction theory, when \w{\lra{\Yul[\mathbf{3}]{2}}} does not vanish, we need to backtrack, and see if we can get our obstruction to vanish by modifying previous choices. We observe that in special cases, given a truncated $\Delta$-simplicial object, there is a formal procedure for adding degeneracies to obtain a full (similarly truncated) simplicial object (see, e.g., \cite[\S 6]{BlaHH}). \end{remark} \section{Pointed higher operations} \label{cgho} Most familiar examples of higher homotopy operations are pointed, so we now describe the modifications needed in our general setup when the indexing category ${\mathcal J}$, as well as the model category ${\mathcal E}$, are pointed (see \S \ref{cgrms}.B). This will also cover ``hybrid'' cases, where certain composites in the diagram are required to be \emph{zero} in ${\mathcal E}$, rather than just null homotopic. 
\begin{lemma}\label{rbottomY} If \w{{\mathcal E_{\ast}}} is a pointed model category, \w{\widetilde{Y}:{\mathcal J} \to\operatorname{ho}({\mathcal E_{\ast}})} a pointed diagram, and \w{x\in\operatorname{Obj}\,{\mathcal J}} with \w[,]{|x|>0} then \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})~} \item Any choice of a representative \w{\Yxk[0](g)} of \w{\widetilde{Y}(g)} for every \w{g \in \widetilde{\mathbf{J}}^{x}_{0}} yields a lifting of \w{\widetilde{Y}\rest{\Jxk[0]}} to \w[.]{\Yxk[0]:\Jxk[0]\to{\mathcal E_{\ast}}} \item Any pointed Reedy fibrant \w{\Yul{1}:\partial \Jxk[1] \to {\mathcal E_{\ast}}} as in \S \ref{sdoubind}(II) has a pointwise extension to a functor \w{\Yxk[1]:\Jxk[1] \to {\mathcal E_{\ast}}} which lifts \w[.]{\tYxk{1}} \end{enumerate} \end{lemma} \begin{proof} For (a), note that if \w[,]{g \in \overline{\cJ}} \w{\Yof{g}} must be the zero map, but otherwise any choice of lifting will do, since \w{\Jxk[0]} has no non-trivial compositions. For (b), follow the proof of Lemma \ref{bottomY} with $\widetilde{\mathbf{J}}$ replacing ${\mathcal J}$, using reduced matching spaces and Definition \ref{dprfib} for the fibrancy.
\end{proof} We also have the following version of Lemma \ref{lowpiece}: \begin{lemma}\label{ptlowpiece} Assuming \w[,]{2 \leq k \leq n < |x|} any pointed functor \w{Y:{\mathcal J}_n \to{\mathcal E_{\ast}}} with a pointed extension to \w{\smx[k-1]} induces a pullback grid with natural dashed maps : \myudiag[\label{eqtlowpiece}]{ \Yof{x} \ar@/^1em/[drrr]^{\rho_{k-1}} \ar@{-->}[dr]_{\beta_{k-1}} \ar@{-->}@/^1em/[drr]_{\eta_{k-1}} \ar@/_2em/[ddr]_{\rmxk{k-1}} \\ & \rNxk[k-1] \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r]^{q_{k-1}} & \rQxk[k-1] \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \rMul[s]{k-1} \ar[d]^(0.65){\prod_{\widetilde{\mathbf{J}}(x,s)} \overline{\forget}} \\ & \rMxk[k-1] \ar@{^{(}->}[r]^{\overline{\forget}} & \dprod[|t|<k]{\widetilde{\mathbf{J}}(x,t)} \Yof{t} \ar[r]^(0.4){\overline{\Psi}} & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)}\dprod[|v|<k]{\widetilde{\mathbf{J}}(s,v)} \Yof{v} ~. } \end{lemma} We then deduce the following analogue of Proposition \ref{startHHO} (with a similar proof): \begin{prop}\label{startpHHO} Assuming \w[,]{2 \leq k \leq n < |x|} any pointed functor \w{\Yul{k}:\partial \Jxk \to {\mathcal E}} as in \S \ref{sdoubind} induces maps into a pullback grid: \mydiagram[\label{basicpHHO}]{ \Yof{x} \ar@/_2em/[dddr]_{\rmxk{k-1}} \ar@{-->}@/^1em/[drrr]^{\sigmaul{x}{=k}(\tYxk{k})} \ar@/_1em/[ddr]_(0.7){\beta_{k-1}} \ar@/_1em/[ddrr]^(0.25){\eta_{k-1}} \ar@{.>}@/^1.5em/[dr]^(0.7){\rmxk{k}} \ar@{.>}@/^1em/[drr]^(0.7){\alpha_k} \\ & \rMxk \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r] & \rPxk \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d]^{p_{k-1}} \ar[r]^{r_k} & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \Yof{s} \ar[d]^{\prod \rmxk[s]{k-1}} \\ & \rNxk[k-1] \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r]_{q_{k-1}} & \rQxk[k-1] \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \rMul[s]{k-1} \ar[d]^(0.6){\prod 
\overline{\forget}} \\ & \rMxk[k-1] \ar@{^{(}->}[r]^{\overline{\forget}} & \dprod[|t|<k]{\widetilde{\mathbf{J}}(x,t)} \Yof{t} \ar[r]^(0.4){\overline{\Psi}} & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)}\dprod[|v|<k]{\widetilde{\mathbf{J}}(s,v)} \Yof{v} ~. } \noindent Again, the dashed map only makes the outermost diagram commute up to homotopy. Furthermore, the dotted map \w{\rmxk{k}} exists (after altering \w{\sigmaul{x}{=k}(\tYxk{k})} within its homotopy class) if and only if there is a dotted map \w{\alpha_k} such that \w{p_{k-1} \alpha_k = \eta_{k-1}} and \w[.]{r_k \alpha_k \simeq \sigmaul{x}{=k}(\tYxk{k})} \end{prop} With this at hand, we may modify Definition \ref{dhho} as follows to obtain a sequence of obstructions to extending pointed diagrams: \begin{mysubsection}{Total Pointed Higher Homotopy Operations} Assume given pointed functors \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E_{\ast}})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E_{\ast}}} and a pointed Reedy fibrant \w{\Yul{k}:\partial\Jxk\to{\mathcal E_{\ast}}} as in \S \ref{sdoubind}(II). This means each \w{\rmxk[s]{k-1}:\Yof{s}\to\rMul[s]{k-1}} is a fibration. 
Factor \w{\overline{\Psi}=\uPsul{x}{k}} (see Lemma \ref{lptmatchpull}) as a weak equivalence followed by a fibration \w[,]{\overline{\Psi}'} and pull back the right column of \wref{basicpHHO} along \w{\overline{\Psi}'} to obtain the following pullback grid: \myvdiag[\label{secondHHO}]{ \Yof{x} \ar@/_2em/[ddr]^{\eta_{k-1}} \ar@/^2em/[drrr]^{\sigmaul{x}{k}(\tYxk{k})} \ar@{-->}@/_4em/[dddrr]^{\varphi} \ar@/^1em/@{-->}[drr]^{\kappa} \ar@{.>}[dr]^{\alpha_k} \ar@/_3em/[dddr]_{\sigmaul{x}{<k}(\Yxk[k-1])} \\ & \rPxk \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{p_{k-1}} \ar[r] & {F^3} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{\mu} \ar@{->>}[r]^-{r'_k} & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \Yof{s} \ar@{->>}[d]^{\alpha} \\ & \rQxk[k-1] \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r]^{\gamma} & {F^2} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d]^{q} \ar@{->>}[r]^(0.4){s} & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \rMul[s]{k-1} \ar[d]^{\beta} \\ & \dprod[|t|<k]{\widetilde{\mathbf{J}}(x,t)} \Yof{t} \ar[r]^{\iota}_{\sim} & F^1 \ar@{->>}[r]^(0.4){\overline{\Psi}'} & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)}\dprod[|v|<k]{\widetilde{\mathbf{J}}(s,v)} \Yof{v} } \noindent As in \S \ref{dhho}, Lemma \ref{lhpp} allows us to modify $\varphi$ so as to obtain a map \w{\kappa:\Yof{x} \to F^3} into the pullback. \end{mysubsection} \begin{defn}\label{dphho} We define the \emph{total pointed higher homotopy operation for $x$} to be the set \w{\lra{\Yxk[k-1]}} of homotopy classes of maps \w{\theta:\Yof{x} \to F^2} with \w{\overline{\Psi}'\circ q\circ\theta=\beta\circ\alpha\circ\sigmaul{x}{k}} and with \w[,]{q\circ\theta\sim\varphi} where $\varphi$ is defined to be the composite $$ \Yof{x}~\stackrel{\sigmaul{x}{<k}}{\longrightarrow}~ \dprod[|t|<k]{\widetilde{\mathbf{J}}(x,t)}~\Yof[k]{t}~\stackrel{\iota}{\longrightarrow}~F^{1} ~.
$$ \noindent We say \w{\lra{\Yxk[k-1]}} \emph{vanishes at} \w{\theta:\Yof{x} \to F^{2}} as above if $\theta$ is homotopic to the composite $$ \Yof{x}~\stackrel{\eta_{k-1}}{\longrightarrow}~\rQxk[k-1]~\stackrel{\gamma}{\to}~F^2 ~, $$ and that \w{\lra{\Yxk[k-1]}} \emph{vanishes} if it vanishes at some value $\theta$. \end{defn} \begin{remark}\label{rphho} In many cases of interest we will have \w[,]{\rQxk[k-1]\simeq\ast} in which case the pointed operation \w{\lra{\Yxk[k-1]}} vanishes at $\theta$ precisely when \w[,]{\theta\sim\ast} as one might expect, so the subset vanishes precisely when it contains the zero class. \end{remark} We have chosen our definitions so as to have the following analogue of Proposition \ref{obstructk}: \begin{prop}\label{pobstructk} Assume given pointed functors \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E_{\ast}})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E_{\ast}}} and a pointed Reedy fibrant \w{\Yul{k}:\partial\Jxk\to{\mathcal E_{\ast}}} as in \S \ref{sdoubind}(II) for \w[.]{|x|>n\geq k\geq 2} Then there exists a further pointed extension to \w{\Yxk:\Jxk \to {\mathcal E_{\ast}}} if and only if the total higher homotopy operation \w{\lra{\Yxk[k-1]}} vanishes. \end{prop} \begin{proof} Once again, the definition of \w{\lra{\Yxk[k-1]}} together with Corollary \ref{cthpp2} implies that each value $\theta$ is homotopic to \w{\mu \circ\kappa} for some $\kappa$ with \w{r'_k \circ \kappa = \sigmaul{x}{k}} and \w[.]{q \circ\mu\circ\kappa \sim \iota \circ \sigmaul{x}{<k}} Thus the obstruction vanishes at $\theta$ if and only if there exists such a $\kappa$ with \w[,]{\mu \circ \kappa \sim \gamma \circ \eta_{k-1}} precisely as in the proof of Proposition \ref{obstructk}. The upper left pullback square in \wref{secondHHO} then produces the lift into \w[,]{\overline{\mathbf{P}}_x^k} or equivalently, a map \w[,]{\Yof{x}\to\overline{\mathbf{M}}_{x}^{k}} yielding the required pointed extension by Lemma \ref{lptraneq}. 
If \w{\lra{\Yxk[k-1]}} does not vanish, then there is no choice of $\varphi$ for which such a lift exists, and so there is no pointed extension compatible with the given choices. \end{proof} \begin{remark}\label{rpsepo} Given pointed functors \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E_{\ast}})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E_{\ast}}} and a pointed Reedy fibrant \w{\Yul{k}:\partial\Jxk\to{\mathcal E_{\ast}}} as in \S \ref{sdoubind}(II) for \w[,]{|x|>n\geq k\geq 2} we may define \emph{separated pointed higher homotopy operations \w{\lra{\Yxk[k-1]}^{j+1}} for $x$} as in Definition \ref{dseparate}, using a refinement of \wref{secondHHO} constructed \emph{mutatis mutandis} with products over \w{{\mathcal J}(x,s)} replaced everywhere by products over \w[.]{\widetilde{\mathbf{J}}(x,s)} Separation Lemma \ref{lseparate} is stated in sufficient generality to apply here, too, with Remark \ref{rseparate} modified accordingly, yielding the following variant of Theorem \ref{unptedThm}: \end{remark} \begin{thm}\label{ptedThm} Assume given pointed functors \w[,]{\tYxk{k}:\Jxk\to\operatorname{ho}({\mathcal E_{\ast}})} \w{\Yxk[k-1]:\Jxk[k-1]\to{\mathcal E_{\ast}}} and a pointed Reedy fibrant \w{\Yul{k}:\partial\Jxk\to{\mathcal E_{\ast}}} as in \S \ref{sdoubind}(II) for \w[.]{|x|>n\geq k\geq 2} Then the total pointed higher homotopy operation separates into a sequence of \w{k-1} pointed operations, and the following are equivalent: \begin{enumerate} \item A further extension to \w{\Yxk:\Jxk\to{\mathcal E_{\ast}}} exists; \item The total pointed operation \w{\lra{\Yxk[k-1]}} vanishes; \item The associated sequence \w{\lra{\Yxk[k-1]}^{j+1}} \wb{1\leq j<k} of separated pointed higher homotopy operations of \S \ref{dseparate} vanish (so in particular each in turn is defined). 
\end{enumerate} \end{thm} \section{Long Toda Brackets and Massey Products} \label{cltbmp} We are finally in a position to apply our general theory to the two most familiar examples of higher order operations: (long) Toda brackets and (higher) Massey products. Since both are cases of the (pointed) higher operations fully described in Sections \ref{cgdhho}-\ref{csto} and \ref{cgho}, we thought it would be easier for the reader to consider two specific examples in detail, briefly indicating what needs to be done for the higher version. \supsect{\protect{\ref{cltbmp}}.A}{Right justified Toda brackets} Since the ordinary Toda bracket (of length $3$) was treated in Section \ref{cctb}, we start with the next case, the Toda bracket of length $4$ (the first example of a \emph{long Toda bracket} in the sense of \cite{GWalkL}). Thus, if ${\mathcal E_{\ast}}$ is a pointed model category, assume given a diagram \w{\widetilde{Y}:{\mathcal J}\to\operatorname{ho}{\mathcal E_{\ast}}} of the form \mydiagram[\label{eqfourtoda}]{ \Yof{4} \ar[r]^{[k]} & \Yof{3} \ar[r]^{[h]} & \Yof{2} \ar[r]^{[g]} & \Yof{1} \ar[r]^{[f]} & \Yof{0} } \noindent with each adjacent composite null-homotopic: that is, a chain complex of length $4$ in \w[,]{\operatorname{ho}{\mathcal E_{\ast}}} as in Example \ref{ptchain} (compare \wref[).]{eqtodadiag} Without loss of generality, we can assume all objects involved are both cofibrant and fibrant. 
Applying the double induction procedure of \S \ref{sdoubind}, we see that we must deal with chain complexes of length \w[,]{n\leq 4} as follows: \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})~} \item When \w[,]{n=0} we have no inner induction, and making the result Reedy fibrant consists of factoring the representative to produce a fibration \w{f:\Yof{1}\to\hspace{-5 mm}\to\Yof{0}} in the specified class \w[.]{[f]} \item When \w[,]{n=1} note that \w{\widetilde{\mathbf{J}}(x,t)} is empty if \w[,]{|x|-|t| > 1} for this pointed indexing category, so as a consequence \w{\rMxk= \ast} if \w[.]{|x|-k>1} Thus \w[,]{\rMul[2]{0}=\ast} so \w{\rMul[2]{1}} is simply the fiber of $f$. Since \w[,]{[f]\circ[g]=\ast} by Lemma \ref{lhpp} we can choose a representative $g$ for \w{[g]} which factors as a fibration \w{\Yof{2}\to\hspace{-5 mm}\to\rMul[2]{1}=\operatorname{Fib}(f)} followed by the inclusion \w[.]{\operatorname{Fib}(f)\hookrightarrow\Yof{1}} \item When \w[,]{n=2} again \w[,]{\rMul[3]{0}=\ast=\rMul[3]{1}} while the case \w{k=2} is just that of our (length $3$) Toda bracket \w[.]{\lrau{f,g}{,h}} In this case, the indexing set for products in the right column of \wref{secondHHO} is the singleton \w[,]{\widetilde{\mathbf{J}}(3,2)} while the forgetful map in the bottom row of \wref{basicpHHO} is the identity of the zero object, with $\Psi$ the zero map.
Factoring $\Psi$ as a trivial cofibration $\iota$ followed by a fibration \w[,]{\Psi'} as in the bottom row of \wref[,]{secondHHO} and pulling back the right column yields the diagram: \mydiagram[\label{eqthrtoda}]{ \Yof{3} \ar@{.>}[dr] \ar@{.>}@/^1em/[drrr] \ar@{->}@/^1.5em/[drrrrr]^{h} \ar@{-->}@/_2.4em/[ddrrr]_(0.3){\theta} \\ & \rPul[3]{2} \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[rr] && F^3 \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[rr] && \Yof{2} \ar@{->>}[d] \ar@{->}@/^1.5em/[dd]^(0.35){g} \\ & {\ast} \ar[rr] && F^2 \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[rr] && \rMul[2]{1} \ar[d]_{\overline{\forget}} \ar@{.>>}[rr] && {\ast} \ar@{.>}[d] \\ & {\ast} \ar[rr]_{\sim}^{\iota} && F^1 \ar@{->>}[rr]^{\Psi'} && \Yof{1} \ar@{.>>}[rr]_{f} && \Yof{0} ~. } \noindent Thus \w{F^{1}} is a model for the reduced path space on \w[,]{\Yof{1}} with \w{\Psi'} the path fibration. However, since $f$ was chosen above to be a fibration, the composite \w{F^{1}\to\Yof{0}} is a fibration, too, with \w{F^{1}} contractible, so we see that \w[,]{F^{2}} being the pullback of the dotted rectangle, is a model for the loop space \w[,]{\Omega \Yof{0}} which we denote by \w[.]{\Omega' \Yof{0}} Similarly, \w{\rMul[2]{1}} is a model for \w[.]{\operatorname{Fib}(f)} Our total secondary pointed homotopy operation \w{\lra{\Yul[\mathbf{3}]{1}}} (cf.\ \S \ref{dphho}) is thus a set of maps \w[,]{\theta:\Yof{3} \to \Omega' \Yof{0}} and it vanishes when this set contains the zero map (cf.\ Remark \ref{rphho}). This is our usual Toda bracket \w[,]{\lrau{f,g}{,h}} described in the language of Section \ref{cgho}. \item In order for our four-fold Toda bracket \w{\lrau{f,g,h}{,k}} (denoted by \w{\lra{\Yul[\mathbf{4}]{2}}} above) to be defined, \w{\lra{\Yul[\mathbf{3}]{1}}} must vanish. 
This allows us to choose a pointed extension \w{\Yul{3}:\Jul{3}\to{\mathcal E_{\ast}}} of \w{\Yul{2}} which realizes \w[.]{\widetilde{Y}\rest{\Jul{3}}} The fact that the diagram \w{\Yul{3}} has realized $\widetilde{Y}$ through filtration degree $3$ means that each of the maps $g$ and $h$ factors through the fiber of the previous one, as in the following solid commutative diagram: \mydiagram[\label{eqfortoda}]{ \Yof{4} \ar@{.>}[r]^-{k_1} \ar[dr]_{k} & \operatorname{Fib}(h_1) \ar[d] \ar[r] & \ast \ar[d] \\ & \Yof{3} \ar@{->>}[r]^-{h_{1}} \ar[dr]_{h }& \operatorname{Fib}(g_1) \ar@{->>}[r] \ar[d] & \ast \ar[d] \\ & & \Yof{2} \ar@{->>}[r]^-{g_{1}} \ar[dr]_{g} & \operatorname{Fib}(f) \ar@{->>}[r] \ar[d]^{g_2} & \ast \ar[d] \\ & & & \Yof{1} \ar@{->>}[r]_{f} & \Yof{0} ~. } \noindent Making \w{\Yul{3}} pointed Reedy fibrant (\S \ref{dprfib}) just means ensuring that the maps \w{h_{1}} and \w{g_{1}} are fibrations. \item At stage \w{n=3} in the outer induction, we attempt to find the dotted lift \w{k_{1}} in \wref[,]{eqfortoda} after having chosen a suitable representative $h$ for the given homotopy class \w[,]{[h]} which is possible by the vanishing of the previous obstruction. Again we have \w[,]{\rMul[4]{0}=\ast} \w[,]{\rMul[4]{1}=\ast} and \w[,]{\rMul[4]{2}=\ast=\rQul[4]{2}=\rNul[4]{2}} so the only interesting case is \w{k=3} in the inner induction. 
The separation grid of Lemma \ref{lseparate} then takes the form: \mydiagram[\label{eqsepfourtoda}]{ \Yof{4} \ar@{.>}[dr]^{k_{1}} \ar@{-->}@/^1em/[drr] \ar@{-->}@/^1.5em/[drrr]^(0.6){\kappa} \ar@{->}@/^2em/[drrrr]^{k} \ar@/_1em/[ddr] \\ & \rPul[4]{3} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r] & F_{4,3}^{2,4} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & F_{4,3}^{1,4} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & \Yof{3} \ar@{->>}[d]_{h_{1}} \ar@{->}@/^2em/[ddd]^{h}\\ & {\ast} \ar[dr]_{\sim} \ar[r] & F_{4,3}^{2,3} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & F_{4,3}^{1,3} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & \rMul[3]{2} \ar[d] \ar@{.>>}[rr] && {\ast} \ar@{.>}[d] & {\ast} \ar@{.>}[d]^{\sim} \\ & & F_{4,3}^{2,2} \ar@{->>}[r] & F_{4,3}^{1,2} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & F_{3,2}^{1,3} \ar@{->>}[d] \ar@{.>>}[rr] && F_{3,2}^{1,2} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{.>>}[d] \ar@{.>}[r] & F_{3,2}^{1,1} \ar@{.>>}[d] \\ & {\ast} \ar[r] & {\ast} \ar[r]^{\sim} & F_{4,3}^{1,1} \ar@{->>}[r] & \Yof{2} \ar@{.>>}[rr]^{g_{1}} \ar@{->}@/_1.3em/[rrr]_{g} && \rMul[2]{1} \ar@{.>}[d]^{\ast} \ar@{.>}[r]^(0.5){g_{2}} & \Yof{1} \ar@{.>>}[d]^(0.4){f} \\ &&&&&& \ast \ar@{.>}[r] & \Yof{0} } \noindent where we have extended the pullback grid downwards, and to the right, to show how it was constructed from the previous case (diagram \wref[)]{eqthrtoda} using Lemma \ref{lprodgrid}. We have also indicated how (representatives of) the maps of \wref{eqfortoda} fit in. 
As in Step (c) above, we can identify \w{F_{3,2}^{1,2}} as a model for \w[,]{\Omega \Yof{0}} and \w{\rMul[2]{1}} as a model for \w[.]{\operatorname{Fib}(f)} Similarly, \w{F_{4,3}^{1,2}} is a model for \w[,]{\Omega \Yof{1}} using the vertical fibrations in the rectangle with diagonal corners \w{F_{4,3}^{1,2}} and \w[.]{\Yof{1}} Likewise \w{F_{4,3}^{1,3}} is a model for \w{\Omega \rMul[2]{1}} (using horizontal fibrations in the larger square beneath it), and \w{F_{4,3}^{2,3}} is a model for \w{\Omega^2 \Yof{0}} (now using the rectangle with diagonal corners \w{F_{4,3}^{2,3}} and \w[,]{F_{3,2}^{1,2}} along with the previous identification of the latter). Similarly, \w{\rMul[3]{2}} is a model for \w{\operatorname{Fib}(g_{1})} of \wref[,]{eqfortoda} while \w{\rPul[4]{3}} is \w{\operatorname{Fib}(h_{1})} (which is also the homotopy fiber). See \wref{eqsepfrtoda} below for the full identification. Therefore, the final obstruction to having a dotted lift \w{k_{1}} in \wref{eqfortoda} (or \wref[)]{eqsepfourtoda} is the composite \w[.]{h_{1}\circ k} \end{enumerate} Note that there are no factors of type \w{G_{i,j}^{k,\ell}} as in \wref{eqsepgrid} here, since we can always choose the zero map as our factorization of the zero map between zero objects. \begin{remark}\label{rasepop} Our total pointed tertiary homotopy operation \w{\lra{\Yul[\mathbf{4}]{2}}} is a set of homotopy classes \w[.]{\theta:\Yof{4} \to \Omega \rMul[2]{1}} However, using Lemma \ref{lseparate}, we can replace it by two separated higher homotopy operations for $\mathbf{4}$, in the sense of \S \ref{dseparate}: \begin{enumerate} \renewcommand{\labelenumii}{\roman{enumii}.~} \item The second order operation \w[.]{\lra{\Yul[\mathbf{4}]{2}}^{2}\subseteq[\Yof{4},\,\Omega \Yof{1}]} \item If \w{\lra{\Yul[\mathbf{4}]{2}}^{2}} vanishes, the third order operation \w{\lra{\Yul[\mathbf{4}]{2}}^{3}\subseteq[\Yof{4},\,\Omega^{2}\Yof{0}]} is defined, and serves as the final obstruction to lifting $\widetilde{Y}$.
By definition, this is our \emph{four-fold Toda bracket} \w[.]{\lrau{f,g,h}{,k}} \end{enumerate} \end{remark} \begin{lemma}\label{lthreetoda} Given a pointed Reedy fibrant diagram \w{\Yul{3}} realizing \wref{eqfourtoda} through filtration $3$, the associated second order separated higher homotopy operation \w{\lra{\Yul[\mathbf{4}]{2}}^{2}} is our usual Toda bracket \w[.]{\lrau{g,h}{,k}} \end{lemma} \begin{proof} Note that \w{F_{3,2}^{1,3}} is a model for the homotopy fiber of \w{g:\Yof{2} \to \Yof{1}} (which is not itself a fibration). Thus, the rectangle with corners \w{F_{3,2}^{1,3}} and \w{\Yof{1}} in \wref{eqsepfourtoda} is a homotopy invariant version of the rectangle with corners \w{F^{2}} and \w{\Yof{0}} in \wref[,]{eqthrtoda} used to define our Toda bracket in Step (c) above \ -- \ this time, applied to the leftmost three maps in \wref[.]{eqfourtoda} The map corresponding to $\theta$ in \wref{eqthrtoda} \ -- \ the value of the Toda bracket \ -- \ is the map \w{\Yof{4}\to F_{4,3}^{1,2}} obtained by composing $\kappa$ with the vertical maps \w[,]{F_{4,3}^{1,4}\to F_{4,3}^{1,2}} which is indeed the definition of the value of \w{\lra{\Yul[\mathbf{4}]{2}}^{2}} associated to our choices (see Definition \ref{dseparate}). \end{proof} \begin{aside}\label{asepopn} Note that \emph{if} the dotted forgetful map \w{\rMul[2]{1}\to\Yof{1}} in \wref{eqsepfourtoda} were a fibration, the horizontal dotted map above it would be a fibration, too, so right properness would imply that the vertical map \w{\rMul[3]{2}\to F_{3,2}^{1,3}} would be a weak equivalence. 
\end{aside} \begin{mysubsection}{Length $n$ Toda brackets} \label{slntb} The general procedure described in Section \ref{cgho} tells us what needs to be done for Toda diagrams (chain complexes $\widetilde{Y}$ in \w[):]{\operatorname{ho}{\mathcal E_{\ast}}} \mydiagram[\label{eqntoda}]{ \Yof{n} \ar[r]^-{[f_{n}]} & \Yof{n-1} \ar[r]^-{[f_{n-1}]} &\dotsc\ar[r] & \Yof{3} \ar[r]^{[f_{3}]} & \Yof{2} \ar[r]^{[f_{2}]} & \Yof{1} \ar[r]^{[f_{1}]} & \Yof{0} } \noindent of arbitrary length $n$. We sketch the main features of the general construction, already discernible in the case \w{n=4} described above: In the double induction of \S \ref{sdoubind}, we can concentrate on the last stage \ -- \ assuming the vanishing of shorter brackets on the right, which guarantees the existence of a solid diagram \mydiagram[\label{eqlenntoda}]{ \Yof{n} \ar@{.>}[r]^-{g_{n}} \ar[dr]_{f_n} & \operatorname{Fib}(g_{n-1}) \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d]_{g^2_{n}} \ar[r] & \ast \ar[d] \\ & \Yof{n-1} \ar@{->>}[r]^-{g_{n-1}} \ar@{.}[dr] & \operatorname{Fib}(g_{n-2}) \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r] \ar@{.}[d] & \ast \ar@{.}[d] \\ & & \Yof{3} \ar@{->>}[r]^-{g_{3}} \ar[dr]_{f_{3}} & \operatorname{Fib}(g_{2}) \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r] \ar[d]_{g^2_{3}} & \ast \ar[d] \\ & & & \Yof{2} \ar@{->>}[r]^-{g_{2}} \ar[dr]_{f_{2}} & \operatorname{Fib}(f_1) \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r] \ar[d]_{g^2_{2}} & \ast \ar[d] \\ & & & & \Yof{1} \ar@{->>}[r]_{f_1} & \Yof{0} } \noindent analogous to \wref[;]{eqfortoda} our length $n$ Toda bracket, \w[,]{\lrau{f_1,f_2,\dots f_{n-1}}{,f_n}} will be the final obstruction to finding the dotted map \w{g_{n}} in \wref[,]{eqlenntoda} perhaps after altering \w{f_{n}} within its homotopy class. 
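To fix ideas, note that in the smallest case \w[,]{n=3} diagram \wref{eqlenntoda} reduces to the solid part of \wref[,]{eqthrtoda} and the final obstruction lies in
$$
\lrau{f_{1},f_{2}}{,f_{3}}~\subseteq~[\Yof{3},\,\Omega \Yof{0}]~,
$$
\noindent which, in settings where the loop adjunction \w{[\Yof{3},\,\Omega \Yof{0}]\cong[\Sigma \Yof{3},\,\Yof{0}]} is available (e.g., pointed topological spaces), recovers the usual target of the classical Toda bracket.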
The existence of the fibrations \w{g_{k}} for \w[,]{2\leq k<n} and the fact that \w{f_{1}} is a fibration, mean that we have a lifting \w{\Yul{n-1}:\Jul{n-1}\to{\mathcal E_{\ast}}} of \w[,]{\widetilde{Y}\rest{\Jul{n-1}}} which we have made pointed Reedy fibrant. The underlining in the notation represents our intention to leave that portion fixed. The construction of the separation grid for \w{\Yul{n-1}} (\S \ref{dsepdiag}) greatly simplifies, in this case, as we see in comparing \wref{eqthrtoda} to \wref[:]{eqsepfourtoda} at each step, one writes the previous separation grid vertically (instead of horizontally) on the right (after changing the previously chosen \w{g_{n-1}} into a fibration, thus altering \w{\Yof{n-1}} up to homotopy). We then factor the zero map $\Psi$ and pull back the leftmost existing column to form a new column to its left. Factoring the zero map from \w{\rQxk[k-1]} to the second place from the bottom in this new column and again pulling back, we note that the intermediate object produced by this factorization is a reduced path object, so by induction the entry immediately above it is a loop object (being the pullback over a fibration with upper right and lower left corners contractible \ -- \ one because it is the reduced path object, and the other by induction). Moreover, the number of loops increases as we move up and to the left (see Lemma \ref{loopstair}). Repeat this step until the new column involves just two maps (so the second object from the bottom is at the same height as the product of the objects \w{\rMul[s]{k-1}} on the right). 
The pullback in the upper left corner is now the actual fiber of \w[.]{g_{n-1}} To illustrate, we reproduce diagram \wref{eqsepfourtoda} with the pieces identified up to homotopy: \myodiag[\label{eqsepfrtoda}]{ \Yof{4} \ar@{.>}[dr]^{k_{1}} \ar@{-->}@/^1em/[drr] \ar@{-->}@/^1.5em/[drrr] \ar@{->}@/^2em/[drrrr]^{k} \ar@/_1em/[ddr] \\ & \operatorname{Fib}(h_{1}) \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r] & F_{4,3}^{2,4} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & F_{4,3}^{1,4} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & \Yof{3} \ar@{->>}[d]_{h_{1}} \ar@{->}@/^2em/[ddd]^{h}\\ & {\ast} \ar[dr]_{\sim} \ar[r] & *+[F]{\Omega^{2}\Yof{0}} \ar@{}[drr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & \Omega\operatorname{Fib}(f) \ar@{}[dr] |<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[r] & \operatorname{Fib}(g_{1}) \ar[d] \ar@{.>}[rr] & {\ast} \ar@{.>}[d] & {\ast} \ar@{.>}[d]^{\sim} \\ & & P\Omega\Yof{1} \ar@{->>}[r] & *+[F]{\Omega\Yof{1}} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & \operatorname{Fib}(g) \ar@{->>}[d] \ar@{.>>}[r] & \Omega\Yof{0} \ar@{.>>}[d] \ar@{.>}[r] & P\Yof{1} \ar@{.>>}[d] \\ && \ast \ar[r]^{\sim} & P\Yof{2} \ar@{->>}[r] & \Yof{2} \ar@{.>>}[r]^{g_{1}} \ar@{->}@/_1.3em/[rr]_{g} & \operatorname{Fib}(f) \ar@{.>}[r]^(0.5){g_{2}} & \Yof{1} } \noindent Note that while not all the pullbacks in the grid can be easily identified, the targets of the separated operations (boxed) are iterated loop spaces on the original objects of \wref[,]{eqntoda} as one would expect for long Toda brackets. 
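More generally, reading off the boxed entries of \wref{eqsepfrtoda} and iterating Lemma \ref{lprodgrid}, the targets of the separated operations should follow the pattern (in the notation of Remark \ref{rasepop}, and assuming the identifications above persist for larger $n$):
$$
\lra{\Yul[\mathbf{n}]{2}}^{j}~\subseteq~[\Yof{n},\,\Omega^{j-1}\Yof{n-j-1}] \qquad (2\leq j\leq n-1)~,
$$
\noindent so the top obstruction (for \w[)]{j=n-1} lies in \w[,]{[\Yof{n},\,\Omega^{n-2}\Yof{0}]} as one expects for a length $n$ Toda bracket.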
This last obstruction, consisting of a subset of the homotopy classes of maps into the top left iterated loop space, then represents our length $n$ Toda bracket, \w[,]{\lrau{f_1,f_2,\dots f_{n-1}}{,f_n}} with the lower separated higher homotopy operations corresponding to the vanishing of the lower obstructions necessary in order to define it (together with those already assumed to vanish in order to build the current commuting diagram). \end{mysubsection} \supsect{\protect{\ref{cltbmp}}.B}{Massey Products as a Hybrid Case} The classical Massey product (cf.\ \cite{MassN}) is defined for three cohomology classes of the same space $X$ \w{[\alpha],[\beta],[\gamma]\in H^{\ast}(X;R)} for some ring $R$, equipped with null homotopies \w{F:\mu(\alpha,\beta) \sim 0} and \w{G:\mu(\beta,\gamma) \sim 0} for the two products. Like a Toda bracket, the Massey product serves as the obstruction to simultaneously making both products strictly zero (see \cite[\S 4]{BBGondH}). This situation may be described by the pointed indexing category ${\mathcal J}$: \myvdiag[\label{eqmasseyd}]{ & & g \ar[d] \ar[ddl] \ar[ddr] \ar@{-->}[ddrr] \ar@{-->}[ddll] \ar@/^14em/@{-->}[ddd]\\ & & f \ar[dll] \ar[dl] \ar[dr] \ar[drr] \ar@{-->}@/_0.2em/[dd] \ar@/^0.2em/[dd]\\ b \ar@{-->}[drr] & c \ar[dr] \ar[l] && d \ar[dl] \ar[r] & e \ar@{-->}[dll] \\ & & a } \noindent Here the dashed maps are in $\overline{\cJ}$ and the others are in $\widetilde{\mathbf{J}}$. The inner diamond commutes (with the solid composite) and the outer diamond commutes (with the dashed composite). 
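For later comparison, we recall the standard cochain-level description (signs depend on the conventions chosen): if $a$, $b$, $c$ are cocycles representing \w{[\alpha]\in H^{r}}, \w{[\beta]\in H^{s}}, \w{[\gamma]\in H^{t}} with \w{ab=d\bar{F}} and \w[,]{bc=d\bar{G}} then
$$
\lra{[\alpha],[\beta],[\gamma]}~=~\bigl[\bar{F}c-(-1)^{r}a\bar{G}\bigr]~\in~H^{r+s+t-1}(X;R)~,
$$
\noindent well defined up to the indeterminacy \w[.]{[\alpha]\cdot H^{s+t-1}(X;R)+H^{r+s-1}(X;R)\cdot[\gamma]} The construction below recovers this class, with the null homotopies $F$ and $G$ playing the role of $\bar{F}$ and $\bar{G}$.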
The corresponding pointed diagram \w{\widetilde{Y}:{\mathcal J}\to\operatorname{ho}\TT_{\ast}} has products of Eilenberg-Mac~Lane spaces \w{K_{i}:=K(R,i)} in all but the top slot: \mywdiag[\label{eqmassey}]{ & & \Yof{g} \ar[d]|{(\alpha,\beta,\gamma)} \ar[ddl] \ar[ddr] \ar@{-->}[ddrr]^{\mu(\alpha,\beta)} \ar@{-->}[ddll]_{\mu(\beta,\gamma)} \\ & & K_r \times K_s \times K_t \ar[dll] \ar[dl]^{(\pi_{1},\mu)} \ar[dr]_{(\mu,\pi_{2})} \ar[drr] \ar@{-->}@/_0.2em/[dd] \ar@/^0.2em/[dd]^{\mu}\\ \ast\times K_{s+t} \ar@{-->}@/_0.9em/[drr] & K_r \times K_{s+t} \ar[dr]^{\mu} \ar[l]^{\pi_{2}} && K_{r+s} \times K_t \ar[dl]_{\mu} \ar[r]_{\pi_{1}} & K_{r+s}\times\ast \ar@{-->}@/^0.9em/[dll] \\ & & K_{r+s+t} } \noindent where the central diamond represents associativity of the cup product maps $\mu$; \w{\pi_{1}} and \w{\pi_{2}} are the two projections; and we have omitted the zero map from top to bottom that appears in \wref{eqmasseyd} in the interest of clarity. Choose a strictly associative model of the Eilenberg-Mac~Lane $\Omega$-spectrum in question (cf.\ \cite{RobO}), with strictly pointed multiplication, so in particular at each level \w{K_{r}} is a simplicial (or topological) abelian group. We can then make all of \wref{eqmassey} below \w{\Yof{g}} (involving only the cup product maps) strictly commutative. 
Our Massey product will be the total pointed higher homotopy operation \w{\lra{\Yul[g]{1}}} (for \w[).]{n=k=2} From \S \ref{dpoinmat} we see that if we let \w[,]{\mathbf{K}:=K_{r}\times K_{s+t}\times K_{r+s}\times K_{t}} then \w{\rMul[f]{1}} is the pullback of the two multiplication maps \w[,]{K_{r}\times K_{s+t}\to K_{r+s+t}\leftarrow K_{r+s}\times K_{t}} with a natural inclusion (forgetful map) \w[.]{i_{1}:\rMul[f]{1}\to\mathbf{K}} The pullback grid of \wref{secondHHO} then takes the form: \myqdiag[\label{eqmasssHHO}]{ \rPul[g]{2} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]_{p_{k-1}} \ar[r] & F^3 \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r]^-{r'_k} & {K_{r}\times K_{s}\times K_{t}} \ar@{->>}[d] \\ \rQul[g]{1} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar[r]^{\gamma} & {F^2} \ar@{}[dr] |<<<{\mbox{\large{$\lrcorner$}}} \ar[d]^{q} \ar@{->>}[r]^(0.4){s} & \rMul[f]{1} \ar[d]^{(\pi_{2}i_{1},\pi_{3}i_{1},i_{1},\mu i_{1})} \\ \mathbf{K} \ar[r]^(0.17){\sim} & PK_{r+s}\times PK_{s+t}\times\mathbf{K}\times PK_{r+s+t} \ar@{->>}[r]^(0.53){\overline{\Psi}'} & K_{r+s}\times K_{s+t}\times\mathbf{K}\times K_{r+s+t} } \noindent Thus a point in \w{F^{2}} is given by \w{(U,V,x,u,v,z,W)\in PK_{r+s}\times PK_{s+t}\times\rMul[f]{1}\times PK_{r+s+t}} with \w[,]{U:u\sim\ast} \w[,]{V:v\sim\ast} and \w[.]{W:xu=vz\sim\ast} We thus have a natural map \w{\lambda:F^{2}\to\Omega K_{r+s+t}\times\Omega K_{r+s+t}} sending \w{(U,V,x,u,v,z,W)} to \w[.]{(xU-W,Vz-W)} Postcomposition with the difference map \w{d:\Omega K_{r+s+t}\times\Omega K_{r+s+t}\to\Omega K_{r+s+t}} yields \w[.]{(xU-Vz)} Now \w{\Yof{g}} maps into the top right corner of \wref{eqmasssHHO} by (a lift of) \w[,]{(\alpha,\beta,\gamma)} and thereby on to \w[,]{\rMul[f]{1}} and into the bottom middle term by $$ \varphi~:=~\lra{F,\,G,\,\alpha,\,\mu(\beta,\gamma),\,\mu(\alpha,\beta),\,\gamma,\,L}~, $$ \noindent with $L$ some nullhomotopy of \w[.]{\mu(\alpha,\beta,\gamma)} Together these two 
maps induce the map \w{\theta:\Yof{g}\to F^{2}} of \S \ref{dphho}. Postcomposing $\theta$ with \w{d\circ\lambda} gives the usual Massey product $$ \lra{\alpha,\beta,\gamma}\in[\Yof{g},\,\Omega K_{r+s+t}]=H^{r+s+t-1}(\Yof{g};R) ~. $$ The two factors of \w{\lambda\circ\theta} merely give the usual indeterminacy for the Massey product, as we can see by choosing \w{L:=\mu(F,\gamma)} or \w[.]{L:=\mu(\alpha,G)} \begin{remark}\label{rmassey} An alternative definition of the usual (higher) Massey products, more in line with that given for the Toda bracket, appears in \cite[\S 4.1]{BBGondH}. \end{remark} \section{Fully reduced diagrams} \label{cfrd} Ultimately, we would like to develop an ``algebra of higher order operations,'' along the lines of Toda's original juggling lemmas (see \cite[\S 1]{TodC}). As a first step in this direction, we consider a special type of pointed diagram, which most closely resembles the long Toda diagram of \wref[.]{eqntoda} The most useful property of the separated higher operations associated to Toda diagrams is that we can often identify their targets \w{F_{x,k}^{j,j+1}} as loop spaces (as we saw in \wref[).]{eqsepfrtoda} It turns out the property of the pointed indexing category ${\mathcal J}$ needed for this to happen is the following: \begin{defn}\label{dfred} A pointed indexing category ${\mathcal J}$ as in \S \ref{dpindex} is called \emph{fully reduced} if any morphism decreasing degree by at least $2$ lies in $\overline{\cJ}$. \end{defn} \begin{remark} \label{ptfullred} If ${\mathcal J}$ is fully reduced, for \w{|x| \geq k+1} we have \w{\prod_{\widetilde{\mathbf{J}}(x,t), |t| < k} \Yof{t}=\ast} and so \w{\rMxk[k-1]=\ast} (cf.\ \S \ref{dpoinmat}) as well. 
We deduce that \w[,]{\rNxk[k-1]=\ast=\rQxk[k-1]} too (cf.\ \wref[),]{eqtlowpiece} since both are fibers of a product of monomorphisms, by Lemma \ref{ptlowpiece} (under mild assumptions on \w[).]{{\mathcal E_{\ast}}} Furthermore, the map \w{\overline{\forget}} of \S \ref{dpoinmat} factors through \w[,]{\prod_{\widetilde{\mathbf{J}}(s,t), |t|=|s|-1}\,\Yof{t}} so no factors of type \w{G_{x,k}^{k+1,j}} (cf.\ \wref[)]{eqfgpull} are needed when constructing the separation grid \wref[.]{eqsepgrid} This also implies that \w{F_{x,k}^{j,j}} is contractible for \w[,]{j < k} which is the key ingredient for identifying the targets of the separated operations as loop spaces. \end{remark} Our key decomposition result is the following. \begin{lemma} \label{loopstair} If ${\mathcal J}$ is a fully reduced pointed indexing category and \w[,]{n \geq k \geq j \geq 2} we have: $$ F_{x,k}^{j-1,j} \sim \prod_{\begin{aligned} (&f_{k-j}, \dots, f_k) \\ f_{k-j} \circ &\dots \circ f_k:x \to v \end{aligned}} \Omega^{j-1} \Yof{v} $$ \noindent in \wref[,]{eqsepgrid} where each \w{f_i} is a non-identity map in $\widetilde{\mathbf{J}}$, with target of degree $i$. \end{lemma} \begin{proof} We prove this by induction on $k$ (for fixed $n$ and $x$), as in Lemma \ref{lprodgrid}. In each case, we combine two pullbacks over fibrations, one of which has fiber identified at an earlier stage, with two corners contractible; the upper left corner (source) is then homotopy equivalent to the loop space on the lower right corner (see Step (e) of \S \ref{cltbmp}.A). 
For \w[,]{2 = j < k} we use the basic pullback rectangle \mysdiag{ F^{1,3}_{s,2} \ar@{}[drr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[rr] && \dprod[|u|=1]{\widetilde{\mathbf{J}}(s,u)} \Yof{u} \ar@{->>}[d] \\ F^{1,2}_{s,2} \ar@{}[drr] |<{\mbox{\large{$\lrcorner$}}} \ar[d] \ar@{->>}[rr] && \dprod[|u|=1]{\widetilde{\mathbf{J}}(s,u)} \rMul[u]{0} \ar[d] \\ F^{1,1}_{s,2} \ar@{->>}[rr] && \dprod[|u|=1]{\widetilde{\mathbf{J}}(s,u)} \dprod[|v|=0]{\widetilde{\mathbf{J}}(u,v)} \Yof{v} } \noindent to construct the pullback rectangle \mysdiag{ F^{1,2}_{x,3} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & \dprod[|s|=3]{\widetilde{\mathbf{J}}(x,s)} F^{1,3}_{s,2} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar[r] \ar@{->>}[d] & \dprod[|s|=3]{\widetilde{\mathbf{J}}(x,s)} F^{1,1}_{s,2} \ar@{->>}[d] \\ F^{1,1}_{x,3} \ar@{->>}[r] & \dprod[|s|=3]{\widetilde{\mathbf{J}}(x,s)} \Yof{u} \ar[r] & \dprod[|s|=3]{\widetilde{\mathbf{J}}(x,s)}\dprod[|u|=1]{\widetilde{\mathbf{J}}(s,u)} \dprod[|v|=0]{\widetilde{\mathbf{J}}(u,v)} \Yof{v} } \noindent where the vertical maps are fibrations, and both \w{\dprod[|s|=3]{\widetilde{\mathbf{J}}(x,s)} F^{1,1}_{s,2}} and \w{F^{1,1}_{x,3}} are contractible, as in Remark \ref{ptfullred}. For \w[,]{2 < j < k} we similarly use the pullback rectangle \mysdiag{ F_{x,k}^{j-1,j} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar@{->>}[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} F_{s,k-1}^{j-1,k} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} F_{s,k-1}^{j-1,j-1} \ar@{->>}[d] \\ F_{x,k}^{j-1,j-1} \ar@{->>}[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} F_{s,k-1}^{j-2,k} \ar[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} F_{s,k-1}^{j-2,j-1} } \noindent in which the vertical maps are fibrations, together with the fact that \w{F_{x,k}^{j-1,j-1}} and each \w{F_{s,k-1}^{j-1,j-1}} are contractible, to prove the claim by induction on $j$ (since loops commute with products). 
For \w[,]{2 \leq j=k} recall that when \w{|s|=2} the first non-trivial case (with \w[)]{k-1=1} involves the first pullback diagram \mydiagram{ \rMul[s]{1} \ar@{}[drr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[rr]^(0.4){\overline{\forget}} && \dprod[|u|=1]{\widetilde{\mathbf{J}}(s,u)} \Yof{u} \ar@{->>}[d] \\ \ast \ar[rr] && \dprod[|u|=1]{\widetilde{\mathbf{J}}(s,u)}\ \dprod[|v|=0]{\widetilde{\mathbf{J}}(u,v)} \Yof{v} } \noindent For \w{2<j=k} we have the second pullback diagram \mydiagram{ \rMul[s]{k-1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[r] & \rPul[s]{k-1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[r] & F_{s,k-1}^{k-2,k} \ar@{->>}[d] \\ \ast=\rNul[s]{k-1} \ar[r] & \ast=\rQul[s]{k-1} \ar[r] & F_{s,k-1}^{k-2,k-1} } \noindent and combining (products of) either type into \mydiagram{ F_{x,k}^{k-1,k} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r] \ar[d] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} \rMul[s]{k-1} \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r] \ar[d] & \ast \ar[d] \\ F_{x,k}^{k-1,k-1} \ar@{->>}[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} F_{s,k-1}^{k-2,k} \ar@{->>}[r] & \dprod[|s|=k]{\widetilde{\mathbf{J}}(x,s)} F_{s,k-1}^{k-2,k-1} } yields a pullback with horizontal fibrations and with \w{F_{x,k}^{k-1,k-1}} (and of course $\ast$) contractible, so the result (with \w[)]{2 \leq j=k} also follows by induction. \end{proof} With these conventions, each component \w{\Yof{x} \to \Omega^{j-1} \Yof{v}} of the map into the product of Lemma \ref{loopstair} is a $j$-ary Toda bracket by construction, and the vanishing of the map into the product is equivalent to the vanishing of each component. \begin{thm} In the fully reduced case, all higher operations decompose into a sequence of Toda brackets of order no greater than the degree of the first target object in the string. 
\end{thm} \appendix \section{Background Material}\label{abm} We collect here a number of basic facts about model categories needed in this paper, together with one non-standard lemma, included for ease of reference elsewhere. We refer the reader to \cite[\S\S 7.1-7.3]{PHirM} for the basics on model categories and homotopy assumed for this appendix. \begin{notn} Given two maps \w[,]{f, g: X \to Y} we write \w{f\sim\sp{r} g} if the maps are right homotopic, and \w{f\sim\sp{l} g} if the maps are left homotopic. \end{notn} \begin{lemma}[Homotopy Lifting Property]\label{lhlp} Suppose we have the solid diagram with $q$ a fibration and $T$ cofibrant: \mydiagram[\label{eqlhlp}]{ T \ar@{>}[dr]_{\psi} \ar[r]^{f} & Y \ar@{->>}[d]^{q} \\ & Z } \noindent Then there is a homotopy \w{\psi \sim\sp{l} q\circ f} if and only if there is a map \w{f': T \to Y} with a homotopy \w{f' \sim\sp{l} f} such that \w[.]{\psi = q\circ f'} Dually, if $Z$ is fibrant and $f$ is a cofibration then there is a homotopy \w{\psi \sim^r q\circ f} precisely when there is a map \w{q': Y \to Z} with a homotopy \w{q' \sim^r q} such that \w[.]{\psi = q'\circ f} \end{lemma} \begin{proof} Assume $q$ is a fibration. Let $$ T\amalg T \stackrel{i_1\sqcup i_2}{\longrightarrow} \operatorname{Cyl}(T) \stackrel{p}{\longrightarrow} T $$ be a factorization of the fold map \w{T\amalg T \stackrel{1_T\amalg 1_T}{\longrightarrow} T} such that \w{i_1\sqcup i_2} is a cofibration and $p$ is a weak equivalence. Cofibrancy of $T$ implies \w{i_1:T\to\operatorname{Cyl}(T)} is an acyclic cofibration by \cite[7.3.7]{PHirM}. 
Given a homotopy \w{H: \operatorname{Cyl}(T) \to Z} with \w{H\circ i_{1}= q\circ f} and \w[,]{H\circ i_{2}=\psi} we may use the left lifting property in \begin{myeq} \xymatrix@R=25pt{ T \ar@{ >->}[d]_{\simeq}^{i_{1}} \ar[r]^{f} & Y \ar@{->>}[d]^{q} \\ \operatorname{Cyl}(T) \ar@{.>}[ru]^{\hat{H}} \ar[r]^{H} & Z } \end{myeq} \noindent to factor $H$ as \w[,]{q\circ\hat{H}} and set \w[.]{f':=\hat{H}\circ i_{2}} If $f$ is instead a cofibration, use the dual argument. \end{proof} \begin{lemma}[Homotopy Pullback Property]\label{lhpp} Suppose we have the following solid diagram where the square is a pullback, $T$ is cofibrant, and the two vertical maps are fibrations. \mydiagram[\label{eqlhpp}]{ T \ar@{.>}[dr]^{g} \ar@/_1em/[ddr]_{p} \ar@/^1em/@{.>}[drr]^{f} \\ & W \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]_{r} \ar[r]^{j} & Y \ar@{->>}[d]^{q} \\ & X \ar[r]_{i} & Z } \noindent Then there is a dotted map \w{f:T \to Y} with a homotopy \w{q\circ f \sim\sp{l} i\circ p} precisely when there is a dotted map \w{g:T \to W} with a homotopy \w{j\circ g \sim\sp{l} f} and \w[.]{r\circ g=p} \end{lemma} \begin{proof} Suppose there is a homotopy \w[.]{q\circ f \sim\sp{l} i\circ p} Since $T$ is cofibrant and $q$ is a fibration, the Homotopy Lifting Property (with \w[)]{\psi = i\circ p} produces \w{f': T \to Y} homotopic to $f$, such that \w[.]{q\circ f' = i\circ p} Since the square is a pullback, there is a map \w{g: T \to W} such that \w{j\circ g = f'} and \w[.]{r\circ g = p} Since \w[,]{f\sim\sp{l} f'} we conclude that \w[.]{f \sim\sp{l} j\circ g} \end{proof} \begin{cor}\label{cnullpull} If $X$ is cofibrant, \w{k:X\to Y} is any pointed map, and \w{h:Y \to Z} is a pointed fibration, then the composite \w{h\circ k:X \to Z} is null-homotopic if and only if there exists some \w[,]{k':X\to Y} left homotopic to $k$, which factors through \w[.]{\operatorname{Fib}(h)} \end{cor} \begin{lemma}[Homotopy Ladder Property]\label{lthpp2} Suppose we are given the following diagram in which both 
squares are (strict) pullbacks, $T$ is cofibrant, the indicated horizontal maps are fibrations, and the outer diagram commutes up to homotopy: \mydiagram[\label{eqthpp}]{ T \ar@{-->}[dr]^{\kappa} \ar@/_1em/[dddr]_{\varphi} \ar@/_1em/@{.>}[ddr]^{\theta} \ar@{>}@/^1em/[drr]^{\sigma} \\ & U \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r]^{r} \ar[d]_{\Phi} & V \ar[d]^{t} \\ & W \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[r]^{p} \ar[d]_{q} & X \ar[d]^{s} \\ & Y \ar@{->>}[r]_{u} & Z ~. } \noindent Consider the following three statements: \begin{enumerate} \item[(1)] There is a map \w{\kappa: T \to U} such that \w[,]{\sigma = r\circ \kappa} and there are (left) homotopies \w{\theta \sim\sp{l} \Phi\circ \kappa} and \w[.]{\varphi \sim\sp{l} q\circ \Phi\circ \kappa} \item[(2)] \w[,]{\varphi \sim\sp{l} q\circ \theta} and there is a map \w{\theta': T \to W} homotopic to $\theta$ such that \w[.]{p\circ \theta' = t\circ \sigma} \item[(3)] There is a map \w{\theta': T \to W} homotopic to $\theta$ such that $\varphi$ is homotopic to \w{\varphi' := q\circ \theta'} and \w[.]{u\circ \varphi' = s\circ t\circ \sigma} \end{enumerate} \noindent Then \w[.]{(1) \Leftrightarrow (2) \Rightarrow (3)} Furthermore, if $s$ is a monomorphism, then (1), (2), and (3) are all equivalent. 
\end{lemma} \begin{proof} \noindent \w[:]{(1) \Longrightarrow (2)} Since \w[,]{\theta \sim\sp{l} \Phi\circ \kappa} it follows that \w[.]{\varphi \sim\sp{l} q\circ \Phi \circ \kappa \sim\sp{l} q\circ \theta} Since \w[,]{p\circ \theta \sim\sp{l} p\circ \Phi\circ \kappa = t\circ \sigma} applying the Homotopy Lifting Property (with \w{q = p} and \w[),]{f = \theta} to \w{\psi = t\circ \sigma} there exists \w{\theta' \sim\sp{l} \theta} with \w[.]{p\circ \theta'=t\circ \sigma} \noindent \w[:]{(2) \Rightarrow (1)} Let \w{\theta' \sim\sp{l} \theta} with \w[,]{p\circ \theta' = t\circ \sigma} and let \w[.]{\varphi' := q\circ \theta'} Then $$ u\circ \varphi' = u\circ q\circ \theta' = s\circ p\circ \theta' = s\circ t\circ \sigma $$ \noindent Since the upper square is a pullback and \w[,]{p\circ \theta' = t\circ \sigma} there exists $\kappa: T \to U$ such that \w{\theta' = \Phi\circ \kappa} and \w[.]{\sigma = r\circ \kappa} Thus \w[.]{\theta \sim\sp{l} \theta' = \Phi\circ \kappa} Also, \w[.]{\varphi \sim\sp{l} q\circ \theta \sim\sp{l} q\circ \Phi \circ \kappa} \noindent \w[:]{(2) \Rightarrow (3)} Given \w{\theta' \sim\sp{l} \theta} such that \w[,]{p\circ \theta' = t\circ \sigma} set \w[;]{\varphi' := q\circ \theta'} then \w[.]{\varphi \sim\sp{l} q\circ \theta \sim\sp{l} q\circ \theta' = \varphi'} Also, from the squares commuting $$ u\circ \varphi' = u\circ q\circ \theta' = s\circ p\circ \theta' = s\circ t\circ \sigma $$ Finally, we assume that \w{s: X \to Z} is a monomorphism. We show that \w[.]{(3) \Rightarrow (2)} From the squares commuting, we have $$ s\circ t\circ \sigma = u\circ \varphi' = u\circ q\circ \theta' = s\circ p\circ \theta' $$ \noindent Thus \w[,]{t\circ \sigma = p\circ \theta'} because $s$ is a monomorphism, and \w{\varphi \sim\sp{l} q\circ \theta} as above. \end{proof} \begin{cor}\label{cthpp2} In \wref{eqthpp} assume again that the squares are pullbacks, $T$ is cofibrant, and the horizontal maps are fibrations. 
Assume further that \w[.]{u\circ \varphi \sim\sp{l} s\circ t\circ \sigma} Then we have the following: \begin{enumerate} \item There exists a map \w{\kappa: T \to U} such that \w{\sigma = r\circ\kappa} and \w[.]{\varphi \sim\sp{l} q\circ \Phi\circ \kappa} \item There exists a map \w{\theta: T \to W} such that \w{\varphi \sim\sp{l} q\circ \theta} and \w[.]{p\circ \theta = t\circ \sigma} \end{enumerate} Moreover, if $s$ is additionally a monomorphism then there is a homotopy \w[.]{\theta \sim\sp{l}\Phi\circ \kappa} \end{cor} \begin{proof} For (1), since \w[,]{u\circ \varphi \sim\sp{l} s\circ t\circ \sigma} by the Homotopy Pullback Property, there is a map \w{\varphi': T \to Y} homotopic to $\varphi$ such that \w[.]{u\circ\varphi' = s\circ t\circ \sigma} Since the outer rectangle is a pullback, there is a map \w{\kappa: T \to U} such that \w{\varphi' = q\circ \Phi\circ \kappa} and \w[.]{\sigma = r\circ \kappa} Thus \w[.]{\varphi \sim\sp{l} q\circ \Phi\circ \kappa} For (2), we have \w[.]{u\circ \varphi \sim\sp{l} s\circ t\circ \sigma} Again, by the Homotopy Pullback Property, there is a map \w{\varphi' \sim\sp{l} \varphi} such that \w[,]{u\circ \varphi' = s\circ t\circ \sigma} so since the bottom square is a pullback, there is a map \w{\theta: T \to W} with \w{t\circ \sigma = p\circ \theta} and \w[,]{\varphi' = q\circ \theta} and so \w[.]{\varphi \sim\sp{l} q\circ \theta} Finally, \w[,]{u\circ \varphi' = u\circ q\circ \theta = s\circ p\circ \theta = s\circ t\circ \sigma} so if $s$ is a monomorphism, we may conclude from Lemma \ref{lthpp2} that \w[.]{\theta \sim\sp{l} \Phi\circ \kappa} \end{proof} We have the duals of Lemma \ref{lhpp}, Corollary \ref{cnullpull}, Lemma \ref{lthpp2} and Corollary \ref{cthpp2}: \begin{lemma}\label{dlhpp} Suppose the following square is a pushout, $V$ is fibrant, and the two horizontal maps are cofibrations: \mydiagram[\label{eqdlhpp}]{ W \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@ { >->}[r]^{i} \ar[d]^{\alpha} & Y \ar[d]^{\beta} 
\ar@/^1em/@{.>}[ddr]^{f} & \\ X \ar@ { >->}[r]_{j} \ar@/_1em/[drr]_{p} & Z \ar@{.>}[dr]^{g} & \\ & & V ~. } \noindent Then there is a dotted map \w{f:Y \to V} with a homotopy \w{p\circ \alpha \sim\sp{r} f\circ i} precisely when there is a dotted map \w{g:Z \to V} with a homotopy \w{g\circ\beta\sim\sp{r} f} and \w[.]{g\circ j = p} \end{lemma} \begin{cor}\label{cdnullpull} If \w{k:X \to Y} is a pointed cofibration and \w{h:Y \to Z} is any pointed map with $Z$ fibrant, the composite \w{h\circ k:X \to Z} is null-homotopic if and only if there exists a map \w[,]{h':Y \to Z} right homotopic to $h$, which factors through \w[.]{\cofib{k}} \end{cor} \begin{lemma}\label{lthpp3} Suppose we are given the following diagram in which both squares are (strict) pushouts, $T$ is fibrant, the indicated horizontal maps are cofibrations, and the outer diagram commutes up to homotopy: \mydiagram[\label{eqthpp3}]{ U \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@ { >->}[r]^{r} \ar[d]^{\Phi} & V \ar[d]^{t} \ar@/^1em/[dddr]^{\varphi} & \\ W \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@ { >->}[r]^{p} \ar[d]^{q} & X \ar[d]^{s} \ar@/^1em/@{.>}[ddr]_{\theta} &\\ Y \ar@ { >->}[r]_{u} \ar@{>}@/_1em/[drr]_{\sigma} & Z \ar@{-->}[dr]^{\kappa} & \\ & & T ~. 
} \noindent Consider the following three statements: \begin{enumerate} \item[(1)] There exists a map \w{\kappa: Z \to T} such that \w{\sigma = \kappa\circ u} and there are (right) homotopies \w{\theta \sim\sp{r} \kappa\circ s} and \w[.]{\varphi \sim\sp{r} \kappa\circ s\circ t} \item[(2)] \w{\varphi \sim\sp{r} \theta\circ t} and there is a map \w[,]{\theta': X \to T} homotopic to $\theta$, such that \w[.]{\theta'\circ p = \sigma\circ q} \item[(3)] There is a map \w{\theta': X \to T} homotopic to $\theta$ such that $\varphi$ is homotopic to \w[,]{\varphi' :=\theta'\circ t} and \w[.]{\varphi'\circ r = \sigma\circ q\circ \Phi} \end{enumerate} Then \w[.]{(1) \Leftrightarrow (2) \Rightarrow (3)} Furthermore, if $\Phi$ is an epimorphism, then (1), (2), and (3) are all equivalent. \end{lemma} \begin{cor}\label{cthpp3} In \wref[,]{eqthpp3} assume again that the squares are pushouts, $T$ is fibrant, and the horizontal maps are cofibrations. Assume further that \w[.]{\varphi\circ r \sim\sp{r} \sigma\circ q\circ \Phi} Then we have the following: \begin{enumerate} \item There exists a map \w{\kappa: Z \to T} such that \w{\sigma = \kappa\circ u} and \w[.]{\varphi \sim\sp{r} \kappa\circ s\circ t} \item There exists a map \w{\theta: X \to T} such that \w{\varphi \sim\sp{r} \theta\circ t} and \w[.]{\theta\circ p = \sigma\circ q} \end{enumerate} Moreover, if $\Phi$ is additionally an epimorphism then there is a homotopy \w[.]{\theta\sim\sp{r}\kappa\circ s} \end{cor} We define the \emph{reduced path object} \w{PW} associated to a pointed object $W$ by the pullback \mydiagram[\label{rpodef}]{ PW \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]_{p_W} \ar[r]^-{j} & \operatorname{Path}(W) \ar@{->>}[d]^{p_1\times p_2} \\ W \ar[r]_-{1_W\times 0} & W\times W } \begin{lemma}\label{rpoprop} If $W$ is fibrant, then \w{PW} is weakly contractible. 
Furthermore, if \w{f: X \to W} is pointed, then $f$ is right null-homotopic precisely when $f$ factors as \w[.]{X \to PW \stackrel{p_W}{\to} W} \end{lemma} \begin{proof} First, diagram \wref{rpodef} can be expanded to the pullback \mydiagram[\label{rpowc}]{ PW \ar@{}[dr] |<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d] \ar[r]^-{j} & \operatorname{Path}(W) \ar@{->>}[d]^{\operatorname{pr}_2\circ (p_1\times p_2)} \\ \ast \ar[r] & W } Since $W$ is fibrant, the right hand vertical map is a trivial fibration, by \cite[7.3.7]{PHirM}. Hence the left hand vertical map is a trivial fibration, by \cite[7.2.12]{PHirM}. Thus \w{PW} is weakly contractible. If \w{f: X \to W} is right null-homotopic, there is a map \w{H : X \to \operatorname{Path}(W)} with \w{p_1\circ H = f} and \w[.]{p_2\circ H = 0} From the first factorization, and the pullback property of \wref[,]{rpodef} there is a map \w{\phi: X \to PW} such that \w[.]{f = p_W\circ \phi} \end{proof} We similarly define the \emph{reduced cone} \w{CX} on a pointed object $X$ by the pushout \mydiagram[\label{rpcdef}]{ X\amalg X \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@{ >->} [d] _{i_1\amalg i_2} \ar[r]^{1_X\amalg 0} & X \ar@{ >->}[d]^{i_X} \\ \operatorname{Cyl}(X) \ar[r]_{} & CX } \begin{lemma}\label{rcoprop} If $X$ is cofibrant, then \w{CX} is weakly contractible. Furthermore, if \w{f: X \to W} is a pointed map, then $f$ is left null-homotopic precisely when $f$ factors as \w[.]{X \stackrel{i_X}{\rightarrow} CX \to W} \end{lemma} \begin{lemma}\label{pathpull} Let $X$ be cofibrant and both $Z$ and $W$ fibrant. 
If the composite \w{g\circ h\circ k} is right null-homotopic, then the shorter composite \w{h\circ k} is also right null-homotopic if and only if there is a null homotopy $\phi$ of \w{g\circ h\circ k} such that the solid commutative diagram \mydiagram[\label{eqpathpull}]{ X \ar[rrrr]^{k} \ar@{.>}[drr]^{\psi} \ar@/_2em/[ddrrr]_{\phi} & & & & Y \ar[d]^{h} \\ & & PF_g \ar@{.>>}[r] ^{p_{F_g}} & F_g \ar @{} [dr] |>>>>>>>>>>>{\mbox{\large{$\lrcorner$}}} \ar@{.>}[d]^j \ar@{.>>}[r]^i & Z\ \ar[d]^{g} \\ & & & PW \ar@{->>}[r]_{p_W} & W } extends to the full diagram above, with $\psi$ a null homotopy for \w{h\circ k} and \w{F_g} the pullback of $g$ along \w[.]{p_W} \end{lemma} \begin{proof} Suppose the composite \w{g\circ h\circ k} is null-homotopic. Then Lemma \ref{rpoprop} gives a factorization \w{g\circ h\circ k = p_W\circ \phi} in \wref[.]{eqpathpull} Since \w{p_W} is a fibration, so is $i$. If \w{h\circ k} is also null-homotopic then this composite factors as \w[,]{h\circ k = p_Z\circ \kappa} for some \w[.]{\kappa : X \to PZ} Now factor $\kappa$ as \w[,]{X \stackrel{\kappa'}{\to} V \stackrel{q}{\to} PZ} with \w{\kappa'} a cofibration and $q$ a trivial fibration. Since $X$ is cofibrant and \w{PZ} is weakly contractible by Lemma \ref{rpoprop}, \w{\ast \to V} is a trivial cofibration. Therefore, \w{p_Z \circ q} lifts to a map \w{\eta:V\to PF_g} with \w[.]{i\circ p_{F_g}\circ \eta = p_Z\circ q} Setting \w{\psi:= \eta\circ \kappa'} makes the whole diagram commute. \end{proof} The dual version is: \begin{lemma}\label{dpathpull} Let $Y$ be fibrant and both $Z$ and $W$ cofibrant. Suppose the composite $k\circ h\circ g$ is known to be left null-homotopic. 
Then the shorter composite \w{k\circ h} is also left null-homotopic if and only if for some null homotopy $\phi$ of \w[,]{k\circ h\circ g} the solid commutative diagram \mydiagram[\label{eqdpathpull}]{ W \ar@{}[dr] |>{\mbox{\large{$\ulcorner$}}} \ar@ { >->}[r]^{i_W} \ar[d]_{g} & CW \ar@{.>}[d] \ar@/^2em/[ddrrr]^{\phi} &&& \\ Z \ar@{ >.>}[r] \ar[d]_{h} & \mapcone{g} \ar@{ >.>}[r] & C\mapcone{g} \ar@{.>}[rrd]^{\psi} && \\ X \ar[rrrr]^{k} &&&& Y } \noindent extends to the full diagram above, with $\psi$ giving a null homotopy for \w{k\circ h} and \w{\mapcone{g}} the pushout of $g$ along \w[.]{i_W} \end{lemma} \section{Indeterminacy}\label{abind} For most higher homotopy operations, one cannot expect a closed formula for the indeterminacy of operations of the type provided by \cite[Lemma 1.1]{TodC} for the classical (secondary) Toda bracket. This is because tertiary and higher operations depend on choices made for the vanishing of the lower order operations, and the amount of choice remaining might vary for different sets of earlier choices. 
However, if we take these earlier choices as given, within the inductive framework described here the only remaining source of indeterminacy is in the choice of the specific map \w{\varphi'} which makes the outer diagram in \wref{eqthpp} commute on the nose, and how that choice affects the resulting lift \w[.]{\theta'} Note that the homotopy class \w{[\varphi']=[\varphi]} is then fixed, as is the actual map \w[.]{u\circ\varphi'=s\circ t\circ \sigma:T\to Z} To help keep track of all this, in this appendix $\varphi$ will denote our initial choice of the map with the induced lift $\theta$, while \w{\varphi'} will denote some other choice, with induced lift \w[.]{\theta'} We now investigate how changing $\varphi$ to \w{\varphi'} changes $\theta$ to \w[,]{\theta'} as maps \w[:]{T \to W} Given $\varphi$, a choice of \w{\varphi'} such that \w{u \circ \varphi=u \circ \varphi'} corresponds uniquely to a map into the pullback \mydiagram[\label{eqleftpb}]{ T \ar@/_1em/[ddr]_{\varphi} \ar@/^1em/@{-->}[drr]^{\varphi'} \ar@{-->}[dr]\\ & Y\langle u \rangle \ar @{} [dr] |>>>>>>>>>>>{\mbox{\large{$\lrcorner$}}} \ar[d]_{u'} \ar[r] & Y \ar[d]^{u} \\ & Y \ar[r]^{u} & Z } \noindent while a choice of such a map \w{\varphi'} equipped with a (right) homotopy \w{H:\varphi \sim\sp{r} \varphi'} corresponds to a map into the pullback \mydiagram[\label{eqrightpb}]{ T \ar@/_1em/[ddr]_{\varphi} \ar@/^1em/@{-->}[drr]^{H} \ar@{-->}[dr]\\ & \overline Y \langle u \rangle \ar @{} [dr] |>>>>>>>>>>>>>{\mbox{\large{$\lrcorner$}}} \ar[d]_{\overline u'} \ar[r] & \operatorname{Path}(Y) \ar[d]^{(1 \times u) \circ m} \\ & Y \ar[r]^-{1\top u} & Y \times Z } \noindent where \w{Y\xra{i_{y}}\operatorname{Path}(Y)\xepic{m} Y\times Y} is a path factorization as in \wref[.]{rpodef} In fact, taking a further pullback \mydiagram[\label{oneForIndet}]{ \overline W \langle p,u \rangle \ar @{} [drr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]_{\overline{p}'} \ar[rr] && \overline Y \langle u \rangle 
\ar@{->>}[d]^{\overline u'} \\ W \ar[rr]^{q} && Y } \noindent we find that the image of the left vertical map \w{\overline{p}'} is essentially the indeterminacy (see Corollary \ref{cindet} below). Note that there is a canonical choice of induced map \w{\psi:T \to Y\langle u \rangle} in \wref[,]{eqleftpb} corresponding to \w[,]{\varphi'=\varphi} and a similar canonical choice of induced map \w{\overline \psi:T \to \overline Y \langle u \rangle} in \wref[,]{eqrightpb} corresponding to the canonical self-homotopy \w{H\sb{\varphi}} of $\varphi$ (namely, the composite \w[),]{T\xra{\varphi} Y \xra{i_{y}}\operatorname{Path}(Y)} which will be used below. Given a map \w[,]{u:Y \to Z} consider the following pullback grid: \mydiagram[\label{IndPull}]{ \overline Y \langle u \rangle \ar@{} [drr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{-->>}[d]^{\overline u} \ar@{->>}@/_2em/[dd]_{\overline u'} \ar[rr] && \operatorname{Path}(Y) \ar@{->>}[d]_{m} && \\ Y\langle u \rangle \ar @{} [drr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]^{u'} \ar[rr] && Y \times Y \ar@{} [drr] |<<{\mbox{\large{$\lrcorner$}}} \ar@{->>}[d]_{1 \times u} \ar[rr]^-{\operatorname{pr}_2} && Y \ar@{->>}[d]^{u} \\ Y \ar[rr]^-{1 \top u} && Y \times Z \ar[rr]^-{\operatorname{pr}_2} && Z } \begin{notn}\label{defn.lifts} Assume given four maps \w[,]{u:Y \to Z} \w[,]{\varphi:T \to Y} \w[,]{v:B \to Y} and \w[.]{\rho:A \to Y} \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})~} \item The pointed set \w[,]{\{\varphi':T \to Y ~\vert~u \circ \varphi'= u\circ \varphi \}} based at $\varphi$ itself, will be denoted by \w[.]{\varu{\varphi}} \item The pointed set \w{\{H:T \to\operatorname{Path}(Y)~\vert~H:\varphi \sim\sp{r} \varphi', u \circ \varphi'= u\circ \varphi \}} of (right) homotopies, based at \w[,]{H\sb{\varphi}} will be denoted by \w[.]{\ovaru{\varphi}} \item The set \w{\{\sigma:A \to B ~\vert~ v \circ \sigma = \rho\}} of lifts of $\rho$ with respect to $v$ will be denoted by \w[.]{\lift{v}{\rho}}
\end{enumerate} In accordance with Remark \ref{rassfibcof}, we can disregard the distinction between the left homotopies appearing in the first half of Appendix \ref{abm} and the right homotopies we have here. \end{notn} \begin{remark}\label{rvarlift} From the pullback properties of the constructions above we see that there are natural bijections of pointed sets \w{\varu{\varphi} \cong \lift{u'}{\varphi}} and \w[,]{\ovaru{\varphi} \cong \lift{\overline u'}{\varphi}} where \w{\lift{u'}{\varphi}} is based at $\psi$ and \w{\lift{\overline u'}{\varphi}} is based at $\overline \psi$. \end{remark} We then have: \begin{lemma} Given \w{\varphi=q \circ \theta:T \to Y} with \w[,]{p \circ \theta=t \circ \sigma} there is a natural bijection of sets \w[,]{\ovaru{\varphi} \cong \lift{\overline{p}'}{\theta}} where \w[.]{\overline{p}':=p' \circ \overline{p}} \end{lemma} \begin{proof} We may expand \wref{IndPull} into: \mydiagram[\label{IndPullb}]{ & \overline W \langle p,u \rangle \ar[dl] \ar@{-->>}|(.5){\hole}[dd]_(0.65){\overline{p}} \ar@/^1.4em/|(.26){\hole}|(.75){\hole}|(.82){\hole}[dddd]^(0.37){\overline{p}'} \ar[rr] && P^{\rel} \ar[dl] \ar@{->>}[dd]|(.8){\hole} \\ \overline Y \langle u \rangle \ar@{-->>}[dd]^(0.3){\overline u} \ar[rr] && \operatorname{Path}(Y) \ar@{->>}[dd] \\ & W\langle p \rangle \ar[dl]^{q'} \ar@{->>}|(.5){\hole}|(.6){\hole}[dd]_(0.3){p'} \ar[rr] \ar@/^1.7em/[rrrr]^(0.5){p''} && Y \times W \ar[dl]^{1\times q} \ar@{->>}|(.5){\hole}|(.6){\hole}[dd]^(0.25){1\times p} \ar[rr]_(0.6){\operatorname{pr}_2} && W \ar[dl]^{q} \ar@{->>}[dd]^(0.5){p} \\ Y\langle u \rangle \ar@{->>}[dd]^(0.3){u'} \ar[rr] \ar@/_1em/[rrrr]_(0.4){u''} && Y \times Y \ar@{->>}|(.12){\hole}[dd]^(0.3){1 \times u} \ar[rr]^(0.75){\operatorname{pr}_2} && Y \ar@{->>}[dd]^(0.3){u} \\ & W \ar[dl]^{q} \ar[rr]|(.5){\hole}^(0.3){q \top p} && Y \times X \ar[dl]^{1 \times s} \ar[rr]|(.57){\hole}^(0.76){\operatorname{pr}_2} && X\ar[dl]^{s} \\ Y \ar[rr]^-{1 \top u} && Y \times Z 
\ar[rr]^(0.6){\operatorname{pr}_2} && Z } \noindent Since the rightmost face is a pullback (by assumption), as are both the front and left long rectangular vertical faces (by construction), the lower leftmost face, and hence the upper leftmost face, are pullbacks, too. We define \w{P^{\rel}} by making the upper rightmost face a pullback, so that the back upper vertical face is, too. We think of \w{\varphi:T \to Y} as mapping to the front lower left $Y$, and \w{\theta:T\to W} to the back lower left $W$, with \w{\varphi':T \to Y} mapping to the front right $Y$, and \w{\theta':T\to W} to the back right $W$. Since \w[,]{u \circ \varphi'=u \circ \varphi} the lower pullback rectangle in \wref{IndPull} implies that \w{(\varphi,\varphi')} induce a map \w{F:T\to Y\langle u \rangle} and thus \w[.]{\widehat{F}:T\to W\langle p \rangle} Moreover, since \w[,]{u \circ \varphi= s \circ p \circ \theta=s \circ t \circ \sigma} a right homotopy \w{H:\varphi \sim\sp{r} \varphi'} is a map \w{H:T\to\operatorname{Path}(Y)} which, together with \w[,]{\varphi \top \theta':T\to Y\times W} induces \w[;]{\widehat{H}:T\to P^{\rel}} the maps $\widehat{H}$ and $\widehat{F}$ together induce a lift of $\theta$ along \w[.]{\overline{p}'} Conversely, any lift \w{\widehat{\theta}:T\to\overline W \langle p,u \rangle} of $\theta$ along \w{\overline{p}'} yields $\widehat{H}$, and thus $H$, by projecting along the structure maps of the top pullback square. \end{proof} \begin{remark} When \w[,]{Y \sim \ast} we have \w[,]{\operatorname{Path}(Y) \stackrel{\sim}{\to} Y \times Y} so \w{\overline Y \langle u \rangle \stackrel{\sim}{\to} Y\langle u \rangle \simeq \Omega Z} and \w[.]{\overline W \langle p,u \rangle \simeq W \times \Omega Z} In this case a map \w{T \to \overline W \langle p,u \rangle} thus corresponds, up to homotopy, to a choice of map $\theta$, together with a homotopy class in \w{[T ,\,\Omega Z]} (adjoint to the indeterminacy construction of \cite[\S 1]{SpanS}).
Note that each of the vertical faces in \wref{IndPullb} is a pullback over a fibration, so they are homotopy-meaningful. \end{remark} The indeterminacy of our operations is then described by the following. \begin{cor}\label{cindet} Given \w{\varphi=q \circ \theta:T \to Y} (also satisfying \w[)]{p \circ \theta=t \circ \sigma} in \wref[,]{eqthpp} the indeterminacy in our operation produced by varying $\varphi$ lies in the image of \w[,]{\overline{p}''_{\#}:[T,\overline W \langle p,u \rangle] \to [T,W]} where \w[.]{\overline{p}''=p'' \circ \overline{p}} In fact, we can restrict to the fiber of \w{\overline{p}'_{\#}} over \w{[\theta]} (the subset consisting of those homotopy classes containing an element of \w[).]{\lift{\overline{p}'}{\theta}} \end{cor} \begin{proof} In \wref{IndPullb} each choice of a lifting \w{\theta'} of \w{\varphi'\sim\varphi} has the form \w{p'' \circ \overline{p} \circ \rho} for some \w[.]{\rho:T \to \overline W \langle p,u \rangle} Thus \w[,]{\overline{p}''_{\#} [\rho]=[\theta']} as required. By restricting to those $\rho$ with \w[,]{[\overline{p}' \circ \rho]=\overline{p}'_{\#} [\rho]=[\theta]} we can apply Lemma \ref{lhlp} to produce a different representative \w{[\rho']=[\rho]} with \w[,]{\overline{p}' \circ \rho' = \theta} producing the improved lift \w[.]{\theta'} \end{proof} \end{document}
\begin{document} \title[Strichartz estimates with Loss]{Near Sharp Strichartz estimates with loss in the presence of degenerate hyperbolic trapping} \author[Christianson]{Hans~Christianson} \email{[email protected]} \address{Department of Mathematics, UNC-Chapel Hill \\ CB\#3250 Phillips Hall \\ Chapel Hill, NC 27599} \subjclass[2000]{} \keywords{} \begin{abstract} We consider an $n$-dimensional spherically symmetric, asymptotically Euclidean manifold with two ends and a codimension $1$ trapped set which is degenerately hyperbolic. By separating variables and constructing a semiclassical parametrix for a time scale polynomially beyond Ehrenfest time, we show that solutions to the linear Schr\"odinger equation with initial conditions localized on a spherical harmonic satisfy Strichartz estimates with a loss depending only on the dimension $n$ and independent of the degeneracy. The Strichartz estimates are sharp up to an arbitrary $\beta>0$ loss. This is in contrast to \cite{ChWu-lsm}, where it is shown that solutions satisfy a sharp local smoothing estimate with loss depending only on the degeneracy of the trapped set, independent of the dimension. \end{abstract} \maketitle \section{Introduction} \label{S:intro} It is well known that there is an intricate interplay between the existence of {\it trapped geodesics}, those which do not escape to spatial infinity, and {\it dispersive estimates} for the associated quantum evolution. Trapping can occur in many different ways, from a single trapped geodesic (see \cite{Bur-sm, BuZw-bb, Chr-NC, Chr-QMNC, Chr-disp-1}), to a thin fractal trapped set (see \cite{NoZw-qdr, Chr-wave-2, Chr-sch2, Dat-sm}), to codimension $1$ trapped sets in general relativity (see, for example, \cite{BlSo-decay,BoHa-decay,DaRo-RS,MMTT-Sch,Luk-decay,TaTo-Kerr,LaMe-decay,WuZw-nhre} and the references therein), to elliptic trapped sets and boundary value problems.
Dispersive type estimates also come in many flavors, but are all designed to express in some manner that the mass of a wave function tends to spread out as the wave function evolves. Since the mass of wave functions tends to move along the geodesic flow, the presence of trapped geodesics suggests some residual mass may not spread out, or may spread out more slowly than in the non-trapping case. In this paper, we concentrate on {\it Strichartz estimates}, and exhibit a class of manifolds for which we prove near-sharp Strichartz estimates with a loss depending only on the dimension of the trapped set. This class of manifolds has already been studied in \cite{ChWu-lsm}, where a sharp local smoothing estimate is obtained with a loss depending only on how flat the manifold is near the trapped set. This presents an interesting dichotomy conjecture: ``loss in local smoothing depends only on the kind of trapping, while loss in Strichartz estimates depends only on the dimension of trapping''. The purpose of this paper is to study a very simple class of manifolds with a hypersphere of trapped geodesics. If the dynamics near such a sphere are strictly hyperbolic in the normal direction, then resolvent estimates are already obtained in \cite{WuZw-nhre} (see also \cite{Chr-disp-1,Chr-sch2}) which can be used to prove local smoothing estimates with only a logarithmic loss. However, if the dynamics are only weakly hyperbolic, resolvent estimates and local smoothing estimates are obtained in \cite{ChWu-lsm} with a sharp polynomial loss in both. We now turn our attention to studying Strichartz estimates, which are mixed $L^p L^q$ time-space estimates. The typical procedure for proving Strichartz estimates is to construct parametrices (approximate solutions), which encode how wave packets move with the geodesic flow. 
For solutions of the Schr\"odinger equation, wave packets at higher frequency move at a higher velocity, so the presence of trapping, or more precisely of conjugate points means that parametrices can typically be constructed only for time intervals {\it depending on the frequency} of the wave packet. Then summing up many parametrices to get an estimate on a fixed time scale leads to derivative loss in Strichartz estimates. However, if the trapped set is sufficiently thin and hyperbolic, we expect that {\it most} of a wave packet still propagates away quickly, and a procedure developed by Anantharaman \cite{Anan-entropy} allows one to exploit this to logarithmically extend the timescale on which one can construct a parametrix leading to Strichartz estimates with no loss \cite{BGH}. For the manifolds studied in this paper, the trapping is degenerately hyperbolic, so we still expect some mass of each wave packet to propagate away, but at a much slower rate than the strictly hyperbolic case. As a consequence, we need to extend the parametrix {\it polynomially} in time to get sharp estimates. The techniques in \cite{Anan-entropy,BGH} will not work in this situation since the ${\mathcal O}(h^\infty)$ estimate of decaying correlations will not control the {\it exponential} number of such correlations. In this paper, we fail to prove estimates all the way to the sharp polynomial timescale, but we are nevertheless able to extend the parametrix construction to the sharp timescale up to an arbitrary $\beta>0$ loss, which is expressed as a loss in derivative in the main theorem. Further, the technique of proof involves decomposing the solution in terms of spherical harmonics in order to reduce the problem to a $1$-dimensional semiclassical parametrix construction. Lacking a square-function estimate for spherical harmonics, the proof only works for initial data localized along one spherical harmonic eigenspace. 
In this sense, the result shows more about the {\it natural semiclassical timescale}, polynomially extended beyond Ehrenfest time, for which we have good control of the rate of dispersion. We begin by describing the geometry. We consider $X = {\mathbb R}_x \times {\mathbb S}^{n-1}_\theta$, equipped with the metric \[ g = d x^2 + A^2(x) G_\theta, \] where $A \in {\mathcal C}^\infty$ satisfies $A \geqslant \epsilon$ for some $\epsilon>0$, and $G_\theta$ is the metric on ${\mathbb S}^{n-1}$. From this metric, we get the volume form \[ d \text{Vol} = A(x)^{n-1} dx d \sigma, \] where $\sigma$ is the usual surface measure on the sphere. The Laplace-Beltrami operator acting on $0$-forms is computed: \[ \Delta f = (\partial_x^2 + A^{-2} \Delta_{{\mathbb S}^{n-1}} + (n-1) A^{-1} A' \partial_x) f, \] where $\Delta_{{\mathbb S}^{n-1}}$ is the (non-positive) Laplace-Beltrami operator on the sphere. We study the case $A(x) = (1 + x^{2m})^{1/2m}$, $m \geqslant 2$, in which case the manifold is asymptotically Euclidean (with two ends), and has a trapped hypersphere at the unique critical point $x = 0$. Since $A(x)$ has a degenerate minimum at $x = 0$, the trapped sphere is {\it weakly} normally hyperbolic in the sense that it is unstable and isolated, but degenerate (see Figure \ref{fig:fig1}). Our main theorem is the following, which expresses that a solution of the linear homogeneous Schr\"odinger equation on this manifold satisfies Strichartz estimates with loss depending only on the dimension $n$, up to an arbitrary $\beta>0$ loss. \begin{theorem} \label{T:T1a} Suppose $u$ solves \begin{equation} \label{E:Sch-1} \begin{cases} (D_t + \Delta ) u = 0, \\ u|_{t=0} = u_0, \end{cases} \end{equation} where $u_0 = H_k u_0$ is localized on the spherical harmonic subspace of order $k$.
Then for any $T, \beta >0$, there exists $C_{T,\beta}>0$ such that \begin{equation} \label{E:main-str} \| u \|_{L^p([0,T]) L^{q}(M)} \leqslant C_{T,\beta} \| \left\langle D_\theta \right\rangle^{(n-2)/pn + \beta} u_0 \|_{L^2(M)}, \end{equation} where \[ \frac{2}{p} + \frac{n}{q} = \frac{n}{2}, \] and $2 \leqslant q < \infty$ if $n = 2$. \end{theorem} \begin{remark} There are several important observations to make about Theorem \ref{T:T1a}. First, this theorem concerns {\it endpoint} Strichartz estimates. In dimension $n \geqslant 3$, if we take $\beta < 1/n$, the loss in derivative is then $(n-2)/2n + \beta < 1/2$; that is, the loss is always less than the loss following the argument of Burq-G\'erard-Tzvetkov \cite{BGT-comp} (which gives a loss of $1/2$ for endpoint estimates in $n \geqslant 3$). Second, there is only a $\beta>0$ loss over the Euclidean (scale-invariant) estimates if $n = 2$, that is, if the trapped set is a single degenerate periodic geodesic, so we can get as close to the no-loss estimates as we like. We expect the $\beta>0$ derivative loss can actually be removed in all dimensions, but this is beyond our techniques. Third, in all dimensions, the loss depends {\it only on the dimension of the trapped set}. It does not depend on $m$, the order of degeneracy of the trapping. This is in sharp contrast to the local smoothing effect, which depends only on $m$, and {\it not} on the dimension $n$ (see \cite{ChWu-lsm} and below). For dimensions $n \geqslant 3$, the estimate \eqref{E:main-str} is near sharp on natural semiclassical time scales (see Corollary \ref{C:C1a}), in the sense that no better {\it polynomial} derivative estimate can hold. In dimension $n = 2$, the same is true by comparing to the scale-invariant case. 
Finally, since $u_0$ is localized to a single spherical harmonic, the estimate in the theorem can be rephrased, since \[ \| \left\langle D_\theta \right\rangle^{(n-2)/pn + \beta} u_0 \|_{L^2(M)} \sim \| \left\langle k \right\rangle^{(n-2)/pn + \beta} u_0 \|_{L^2(M)}. \] \end{remark} \begin{figure} \caption{ A piece of the manifold $X$ and the trapped sphere at $x=0$. } \label{fig:fig1} \end{figure} \section{Reduction in dimension} In this section we use a series of known techniques and estimates to reduce the study of the Schr\"odinger equation on $M$ to the study of a semiclassical Schr\"odinger equation on ${\mathbb R}$ with potential. The potential has a degenerate critical point, and we use a technical blow-up calculus to construct a sequence of parametrices near the critical point. We observe that we can conjugate $\Delta$ by an $L^2$ isometry and separate variables so that spectral analysis of $\Delta$ is equivalent to a one-variable semiclassical problem with potential. That is, let $T : L^2(X, d \text{Vol}) \to L^2(X, dx d \sigma)$ be the isometry given by \[ Tu(x, \theta) = A^{(n-1)/2}(x) u(x, \theta). \] Then $\widetilde{\Delta} = T \Delta T^{-1}$ is essentially self-adjoint on $L^2 ( X, dx d \sigma)$ for our choice of $A$. A simple calculation gives \[ -\widetilde{\Delta} f = (- \partial_x^2 - A^{-2}(x) \Delta_{{\mathbb S}^{n-1}} + V_1(x) ) f, \] where the potential \[ V_1(x) = \frac{n-1}{2} A'' A^{-1} + \frac{(n-1)(n-3)}{4} (A')^2 A^{-2}. \] Of course, conjugating the Laplacian by an $L^2$ isometry does not necessarily preserve $H^s$ or $L^q$ spaces.
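As a sanity check on the formula for $V_1$, one can verify symbolically that conjugating the radial part of the Laplacian by $A^{(n-1)/2}$ produces exactly this potential. The following sketch (an independent verification using the computer algebra package sympy, not part of the paper's argument) performs the check for general $n$ and a generic warping function $A$.

```python
# Symbolic sanity check of the conjugated potential
#   V_1 = (n-1)/2 * A''/A + (n-1)(n-3)/4 * (A'/A)^2,
# i.e. that A^{(n-1)/2} Delta (A^{-(n-1)/2} f) = f'' - V_1 f on radial f.
# Illustrative verification only; not taken from the paper's source.
import sympy as sp

x, n = sp.symbols('x n')
A = sp.Function('A')(x)
f = sp.Function('f')(x)
a = (n - 1) / sp.Integer(2)          # conjugation exponent (n-1)/2

def radial_laplacian(u):
    # radial part of the Laplace-Beltrami operator: u'' + (n-1)(A'/A) u'
    return sp.diff(u, x, 2) + (n - 1) * sp.diff(A, x) / A * sp.diff(u, x)

conj = sp.expand(A**a * radial_laplacian(A**(-a) * f))

V1 = (n - 1) / sp.Integer(2) * sp.diff(A, x, 2) / A \
     + (n - 1) * (n - 3) / sp.Integer(4) * (sp.diff(A, x) / A)**2

# the residual should simplify to zero
residual = sp.simplify(conj - (sp.diff(f, x, 2) - V1 * f))
```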
\begin{lemma} \label{L:T-conj-lemma} With the notation $A(x) = (1 + x^{2m})^{1/2m}$ from above, for $s \geqslant 0$, \[ \| Tu \|_{H^s(dx d \sigma)} \leqslant C \| u \|_{H^s(d\text{Vol})}, \] \[ \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^s Tu \|_{L^2(dx d \sigma )} = \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^s u \|_{L^2(d\text{Vol})}, \] and for $q \geqslant 2$, \[ \| u \|_{L^q(d\text{Vol})} \leqslant C \| Tu \|_{L^q(dx d \sigma)}. \] \end{lemma} \begin{proof} The result for $0 < s \leqslant 1$ follows by interpolation from the $L^2$ and $H^1$ cases; the $H^1$ case follows by observing that \[ \partial_x A^{(n-1)/2} u = A^{(n-1)/2} \partial_x u + \frac{(n-1)}{2} A' A^{(n-3)/2} u. \] But since $|A'| \leqslant C A$, the $L^2(dx d \sigma)$ norm of $\partial_x Tu$ is bounded by the $H^1(d\text{Vol})$ norm of $u$. The result for angular derivatives follows by commuting with $A^{(n-1)/2}(x)$. For the $L^q$ result, we compute \[ \int | u(x, \theta) |^q A^{(n-1)}(x) dx d \sigma = \int | A^{(n-1)(1/q - 1/2)} T u(x, \theta) |^q dx d \sigma. \] The function $A^{(n-1)(1/q - 1/2)}(x)$ is bounded for $q \geqslant 2$, so the $L^q$ inequality is true as well. \end{proof} As a consequence of this, to prove Theorem \ref{T:T1a}, it suffices to prove the following Proposition, and apply Lemma \ref{L:T-conj-lemma} with $v = Tu$. \begin{proposition} \label{P:P1a} Suppose $v$ solves \begin{equation} \label{E:Sch-1a} \begin{cases} (D_t + \widetilde{\Delta} ) v = 0, \\ v|_{t=0} = v_0, \end{cases} \end{equation} where $v_0 = H_k v_0$ is localized on the spherical harmonic subspace of order $k$. Then for any $T >0$ and $\beta>0$, there exists $C_{T,\beta}>0$ such that \begin{equation} \label{E:main-str1a} \| v \|_{L^p([0,T]) L^{q}} \leqslant C_{T,\beta} \| \left\langle D_\theta \right\rangle^{(n-2)/pn + \beta} v_0 \|_{L^2}, \end{equation} where \[ \frac{2}{p} + \frac{n}{q} = \frac{n}{2}, \] and $2 \leqslant q < \infty$ if $n = 2$.
\end{proposition} We now separate variables by projecting onto the $k$th spherical harmonic eigenspace. That is, let $\mathcal{H}_k$ be the $k$th eigenspace of spherical harmonics, so that $v \in \mathcal{H}_k$ implies \[ -\Delta_{{\mathbb S}^{n-1}} v = \lambda_k^2 v, \] where \[ \lambda_k^2 = k(k+n-2). \] Let $H_k : L^2({\mathbb S}^{n-1}) \to \mathcal{H}_k$ be the projector. Since $v_0$ is assumed to satisfy $v_0 = H_k v_0$ for some $k$ and the conjugated Laplacian preserves spherical harmonic eigenspaces, we have also $v = H_k v$. Motivated by spectral theory, we compute: \[ (-\widetilde{\Delta}- \lambda^2) v = P_k v, \] where \begin{equation} \label{E:Pk} P_k v = P_k H_k v = (-\frac{\partial^2}{\partial x^2} + k(k+n-2) A^{-2}(x) + V_1(x) - \lambda^2) v. \end{equation} Setting $h = (k(k+n-2))^{-1/2}$ and rescaling, we have the one-dimensional semiclassical operator \[ P(z,h) \psi(x) = (-h^2 \frac{d^2}{dx^2} + V(x) -z) \psi(x), \] where the potential is \[ V(x) = A^{-2}(x) + h^2 V_1(x) \] and the spectral parameter is $z = h^2 \lambda^2$. For our case where $A(x) = (1 + x^{2m})^{1/2m}$, the subpotential $h^2 V_1$ is seen to be lower order in both the semiclassical and scattering sense. Furthermore, the principal potential $A^{-2}(x)$ is even, smooth, decays like $x^{-2}$ at $\pm \infty$ and has a unique degenerate maximum of the form $1 - x^{2m}$ at $x = 0$. \section{Endpoint Strichartz estimates} Before proceeding to the endpoint Strichartz estimates, let us briefly recall the local smoothing estimates which will eventually allow us to glue together Strichartz estimates on semiclassical timescales. \subsection{Local smoothing estimates} In this subsection, we recall the local smoothing estimates from \cite{ChWu-lsm}, as well as the dual versions which we will use in this paper. \begin{theorem}[\cite{ChWu-lsm}] \label{T:lsm} Let $V(x) = A^{-2}(x) + h^2 V_1(x)$ as above. 
Then for any $T >0$, there exists a constant $C = C_T >0$ such that \[ \int_0^T \| | x |^{m-1} \left\langle x \right\rangle^{-m-1-3/2} e^{i t (-\partial_x^2 + h^{-2} V )} u_0 \|_{L^2}^2 dt \leqslant C h \| u_0 \|_{L^2}^2, \] and \[ \int_0^T \| \left\langle x \right\rangle^{-3/2} e^{i t (-\partial_x^2 + h^{-2} V )} u_0 \|_{L^2}^2 dt \leqslant C h^{1/(m+1)} \| u_0 \|_{L^2}^2. \] \end{theorem} The dual versions of these estimates are given in the following Corollary. \begin{corollary} Let $V(x) = A^{-2}(x) + h^2 V_1(x)$ as above. Then for any $T >0$, there exists a constant $C = C_T >0$ such that \[ \left\|\int_0^T | x |^{m-1} \left\langle x \right\rangle^{-m-1-3/2} e^{-i t (-\partial_x^2 + h^{-2} V )} f dt \right\|_{L^2}^2 \leqslant C h \| f \|_{L^2_T L^2}^2, \] and \[ \left\|\int_0^T \left\langle x \right\rangle^{-3/2} e^{-i t (-\partial_x^2 + h^{-2} V )} f dt \right\|_{L^2}^2 \leqslant C h^{(1-m)/(m+1)} \| f \|_{L^2}^2. \] \end{corollary} The purpose of these results is to demonstrate that there is perfect $1/2$ derivative local smoothing away from $x = 0$, or local smoothing with either a loss in derivative or with a vanishing multiplier at $x = 0$. \subsection{The endpoint Strichartz estimates} The endpoint Strichartz estimates are the $L^2_T L^{2^\star}$\footnote{Throughout this manuscript, we use the notation $L_T^p L^q = L^p([0,T]) L^q( M)$ to denote the local in time, global in space Strichartz norm.} estimates, where $2^\star$ is the Strichartz dual: \[ 1 + \frac{n}{2^\star} = \frac{n}{2} , \] which implies $2^\star = 2n/(n-2)$ for $n \geqslant 3$, and $2^\star = \infty$ if $n = 2$. We want to estimate $v$ in $L^{2^\star}(M)$, which we do using the following estimate due to Sogge \cite{Sog-sharm}: \begin{theorem}[\cite{Sog-sharm}] \label{T:Sogge} Let $(M,g)$ be a $d$-dimensional compact Riemannian manifold without boundary, and let $-\Delta$ be the Laplace-Beltrami operator on $M$. 
If $\varphi_j$ are the eigenfunctions, \[ -\Delta \varphi_j = \lambda^2_j \varphi_j \] with $0 = \lambda_1 \leqslant \lambda_2 \leqslant \cdots$ the eigenvalues, then \[ \| \varphi_j \|_{L^{2(d+1)/(d-1)}} \leqslant C \lambda_j^{(d-1)/2(d+1)} \| \varphi_j \|_{L^2}. \] \end{theorem} In particular, for the situation at hand, \begin{align*} \| v \|_{L^{2^\star}({\mathbb R})L^{2^\star}({\mathbb S}^{n-1})} & = \| H_k v \|_{L^{2^\star}({\mathbb R})L^{2^\star}({\mathbb S}^{n-1})} \\ & \leqslant C k^{(n-2)/2n} \| H_k v \|_{L^{2^\star}({\mathbb R})L^2 ({\mathbb S}^{n-1})} . \end{align*} Now let $\Lambda_k$ be an index set for the $k$th harmonic subspace $\mathcal{H}_k$ and write \[ v(t, x, \theta) = \sum_{ l \in \Lambda_k} v_{lk}(t,x) H_{lk}(\theta), \] where $H_{lk}$ are the orthonormal spherical harmonics in $\mathcal{H}_k$. Now if $p \geqslant 2$, $2 \leqslant q \leqslant 2^*$ ($q < \infty$ if $n = 2$), we have \begin{align*} \| v \|_{L^p_T L^q(M)} & \leqslant Ck^{(n-2)/2n} \| v \|_{L^p_T L^q({\mathbb R}) L^2({\mathbb S}^{n-1})} \\ & \leqslant C k^{(n-2)/2n} \left\| \left( \sum_{l \in \Lambda_k} |v_{lk} |^2 \right)^{1/2} \right\|_{L^p_T L^q({\mathbb R}) }, \end{align*} by Plancherel's theorem. 
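For the reader's convenience, the factor $k^{(n-2)/2n}$ appearing above comes from applying Theorem \ref{T:Sogge} on the sphere ${\mathbb S}^{n-1}$, i.e.\ with $d = n-1$; the exponent arithmetic, spelled out, is:

```latex
\[
d = n-1 \quad\Longrightarrow\quad
\frac{2(d+1)}{d-1} = \frac{2n}{n-2} = 2^\star,
\qquad
\frac{d-1}{2(d+1)} = \frac{n-2}{2n},
\]
so that, with $\lambda_k^2 = k(k+n-2) \sim k^2$,
\[
\| H_k v \|_{L^{2^\star}({\mathbb S}^{n-1})}
\leqslant C \lambda_k^{(n-2)/2n} \| H_k v \|_{L^{2}({\mathbb S}^{n-1})}
\leqslant C k^{(n-2)/2n} \| H_k v \|_{L^{2}({\mathbb S}^{n-1})}.
\]
```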
We further estimate using Minkowski's inequality repeatedly: \begin{align*} \left\| \left( \sum_{l \in \Lambda_k} |v_{lk} |^2 \right)^{1/2} \right\|_{L^p_T L^q({\mathbb R}) } & = \left( \int_0^T \left[\left( \int \left( \sum_l | v_{lk}|^2 \right)^{q/2} dx \right)^{2/q} \right]^{p/2} dt \right)^{1/p} \\ & \leqslant C \left( \int_0^T \left[\sum_l \left( \int | v_{lk}|^q dx \right)^{2/q} \right]^{p/2} dt \right)^{1/p} \\ & = C \left( \int_0^T \left[\left(\sum_l \| v_{lk}\|^2_{L^q} \right)^{1/2} \right]^{p} dt \right)^{1/p} \\ & = C \left[ \left( \int_0^T \left(\sum_l \| v_{lk}\|^2_{L^q} \right)^{p/2} dt \right)^{2/p} \right]^{1/2} \\ & \leqslant C \left[ \sum_l \left( \int_0^T \| v_{lk}\|^p_{L^q} dt \right)^{2/p} \right]^{1/2} \\ & = C \left( \sum_{l \in \Lambda_k} \| v_{lk} \|_{L^p_T L^q}^2 \right)^{1/2}. \end{align*} All told then we have \begin{equation} \label{E:str-sum} \| v \|_{L^p_T L^q(M)} \leqslant C k^{(n-2)/2n} \left( \sum_{l \in \Lambda_k} \| v_{lk} \|_{L^p_T L^q}^2 \right)^{1/2}, \end{equation} where $v_{lk}(t,x) = \left\langle v(t,x,\cdot), H_{lk} \right\rangle_{L^2({\mathbb S}^{n-1})}.$ Using \eqref{E:Pk}, we see that $v_{lk}$ satisfies the equation $(D_t + P_k) v_{lk} = 0$, which is a $1$-dimensional Schr\"odinger equation with potential. We want to estimate $v_{lk}$ in the $L^2_T L^{2^\star}({\mathbb R})$ norm when $n \geqslant 3$, and in $L^p_T L^q$ Strichartz duals with $2 \leqslant q < \infty$ in dimension $n = 2$. However, since we are now looking at a solution to a one dimensional Schr\"odinger equation, $2$ and $2^\star$ are not Strichartz duals in $1$ dimension. The Strichartz dual $p$ to $2^\star = 2n/(n-2)$ ($n \geqslant 3$) in one dimension satisfies \[ \frac{2}{p} + \frac{1}{2^\star} = \frac{1}{2}, \] or \[ p = 2n. 
\] We therefore first use H\"older's inequality in $t$, with weights $n$ and $n/(n-1)$ respectively to get \begin{align*} \| v_{lk} \|_{L^2_T L^{2^\star}}^2 & = \int_0^T \| v_{lk} \|_{L^{2^\star}}^2 dt \\ & \leqslant T^{(n-1)/n} \| v_{lk} \|_{L_T^{2n} L^{2^\star}}^2. \end{align*} In dimension $n = 2$, we use $p > 2$, $2 \leqslant q < \infty$, and H\"older's inequality in $t$ to get \begin{align*} \| v_{lk} \|_{L^p_T L^{q}}^2 & = \left( \int_0^T \| v_{lk} \|_{L^{q}}^p \, dt \right)^{2/p} \\ & \leqslant T^{2/p - 1/n} \| v_{lk} \|_{L_T^{2n} L^{q}}^2. \end{align*} We have the following proposition. \begin{proposition} \label{P:mode-str} Suppose $v_{lk}$ solves \[ \begin{cases} (D_t + P_k) v_{lk} = 0, \\ v_{lk}|_{t=0} = v_{lk}^0 , \end{cases} \] where $v_{lk}^0 \in H^s$ for some $s >0$. Then for any $T>0$ and $\beta>0$, there exists a constant $C = C_{T,\beta}>0 $ such that \[ \| v_{lk} \|_{L^{2n}_T L^{2^\star}} \leqslant C \| \left\langle k \right\rangle^\beta v_{lk}^0 \|_{L^2}. \] \end{proposition} That is, even though $v_{lk}$ solves a Schr\"odinger equation with a degenerate potential barrier, $v_{lk}$ nevertheless satisfies Strichartz estimates with an arbitrary $\beta>0$ loss. As a consequence, we have the following estimate on natural semiclassical time scales. \begin{corollary} \label{C:C1a} Suppose $v$ solves \eqref{E:Sch-1a} with initial data $v_0 = v_{lk}^0 H_{lk}$. Then for $\epsilon>0$ sufficiently small and $T = \epsilon k^{-2/(m+1)}$, $v$ satisfies the Strichartz estimate \[ \| v \|_{L^2_T L^{2^\star}} \leqslant C \| \left\langle k \right\rangle^{\eta + \beta} v_0 \|_{L^2}, \] where \[ \eta = \frac{1}{2n(m+1)} \left( m(n-2) - n \right). \] Moreover, if $H_{lk}$ is a zonal spherical harmonic, this estimate is near-sharp, in the sense that no {\it polynomial} derivative improvement is true for every $\beta >0$. \end{corollary} \begin{remark} This corollary shows that on natural semiclassical time scales the Strichartz estimates are improved.
Indeed, in dimension $n = 2$, there is a smoothing effect. The proof of the near-sharpness of this estimate is in Section \ref{SS:saturation}. \end{remark} \subsection{Proof of Proposition \ref{P:P1a} and Corollary \ref{C:C1a}} Assuming Proposition \ref{P:mode-str}, we have from \eqref{E:str-sum} (in dimension $n \geqslant 3$): \begin{align*} \| v \|_{L^2_T L^{2^\star}(M)}^2 & \leqslant C k^{(n-2)/n} \sum_{l \in \Lambda_k} \left\| v_{lk} \right\|_{L^2_T L^{2^\star}({\mathbb R})}^2 \\ & \leqslant C k^{(n-2)/n} T^{(n-1)/n} \sum_{l \in \Lambda_k} \left\| \left\langle k \right\rangle^\beta v_{lk}^0 \right\|_{ L^{2}({\mathbb R})}^2 \\ & \leqslant C T^{(n-1)/n} \| \left\langle k \right\rangle^{(n-2)/2n + \beta } v_0 \|_{L^2(M)}^2, \end{align*} by orthonormality, which is Proposition \ref{P:P1a}, and hence also Theorem \ref{T:T1a}. A similar computation using \eqref{E:str-sum} holds when $n = 2$, and $2 \leqslant q < \infty$. For Corollary \ref{C:C1a}, the sum is over only one term, and $T \sim k^{-2/(m+1)}$. Then in this case, \begin{align*} \| v \|_{L^2_T L^{2^\star}(M)}^2 & \leqslant C k^{-2(n-1)/n(m+1)} k^{(n-2)/n} \| v_{lk} \|_{L^{2n}_T L^{2^\star}({\mathbb R})}^2 \\ & \leqslant C (1 + | k |)^{\frac{1}{n(m+1)} \left( m(n-2) - n \right) } \| \left\langle k \right\rangle^\beta v_{lk}^0 \|_{L^2(M)}^2 \\ & \leqslant C \| \left\langle k \right\rangle^{\eta + \beta} v_0 \|_{L^2(M)}^2, \end{align*} where $\eta$ is as in Corollary \ref{C:C1a}. A similar computation holds for $q < \infty$ in the case $n = 2$. \qed \section{The parametrix} It remains to prove Proposition \ref{P:mode-str}. For that purpose, in this section we construct a parametrix for the separated Schr\"odinger equation: \[ \begin{cases} (D_t + (-\partial_x^2 + \lambda_k^2 A^{-2}(x) + V_1(x)) ) u = 0 , \\ u|_{t=0} = u_0. \end{cases} \] We rescale $h^2 = \lambda_k^{-2}$ to get \[ \begin{cases} (D_t - (-\partial_x^2 + h^{-2} A^{-2}(x) + V_1(x)) ) u = 0 , \\ u|_{t=0} = u_0.
\end{cases} \] Let $v(t, x) = u(ht, x)$, so that \[ \begin{cases} (hD_t + (-h^2 \partial_x^2 + A^{-2}(x) + h^2 V_1(x)) ) v = 0 , \\ v|_{t=0} = u_0. \end{cases} \] For the rest of this section, we consider the one-dimensional semiclassical Schr\"odinger equation with barrier potential: \begin{equation} \label{E:v-sc-eqn} \begin{cases} (hD_t + (-h^2 \partial_x^2 + V(x))) v = 0, \\ v|_{t=0} = v_0. \end{cases} \end{equation} The potential $V(x) = A^{-2}(x) + h^2 V_1(x)$ decays at $| x | = \infty$, is even, and the principal part $A^{-2}(x)$ has a degenerate maximum at $x = 0$ with no other critical points. Denote $P = -h^2 \partial_x^2 + V(x)$. Let us give a brief summary of the steps involved in the construction. We will use a WKB type approximation, although, since we are in dimension $1$, we do not need a particularly good approximation. The first step is to approximate the solution away from the critical point at $(x, \xi) = (0,0)$. Since this is a non-trapping region, standard techniques can be used to construct a parametrix and prove Strichartz estimates on a timescale $t \sim h^{-1}$ for the semiclassical problem, or on a fixed timescale for the classical problem. A similar construction applies for energies away from the trapped set. The remaining regions can be divided into an $h$-dependent strongly trapped region and a ``transition region'', where wave packets propagate, but not at a uniform rate. By restricting attention to a sufficiently small $h$-dependent neighbourhood of $(0,0)$, we can extend a semiclassical parametrix to a timescale $t \sim h^{(1-m)/(1+m)}$, which is a classical timescale of $h^{2/(m+1)}$. We divide the transition region into a logarithmic number of $h$-dependent regions on which a similar parametrix construction works. Summing over all of these regions gives a parametrix construction and corresponding Strichartz estimates in a compact region in phase space with a logarithmic loss due to the number of summands in the transition region.
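For orientation, note that under the substitution $v(t,x) = u(ht,x)$ made above, the semiclassical timescale $t \sim h^{(1-m)/(1+m)}$ corresponds to the classical timescale
\[
t_{\mathrm{cl}} = h \, t_{\mathrm{sc}} \sim h \cdot h^{(1-m)/(1+m)} = h^{2/(m+1)},
\]
which, with $h = \lambda_k^{-1}$ comparable to $k^{-1}$, matches the timescale $T = \epsilon k^{-2/(m+1)}$ of Corollary \ref{C:C1a}.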
These constructions and Strichartz estimates hold for a frequency dependent timescale $\sim h^{2/(m+1) + \beta}$, $\beta>0$, or with a $\beta>0$ loss in derivative on timescale $\sim h^{2/(m+1) }$. We then use the local smoothing estimate from \cite{ChWu-lsm} to glue the estimates on ${\mathcal O}(h^{-2/(m+1)})$ time intervals of length $\sim h^{2/(m+1)}$ to get the Strichartz estimates with a $\beta>0$ loss overall. \subsection{WKB expansion} We make the following WKB ansatz: \[ v = h^{-1/2} \int e^{i \varphi(t, x, \xi)/h} e^{-iy \xi/h} B(t, x, \xi) u_0(y) dy d \xi, \] and compute \[ (hD_x)^2 v = h^{-1/2} \int e^{i \varphi(t, x, \xi)/h} e^{-iy \xi/h} ((\varphi_x)^2 B -i h \varphi_{xx} B -2ih \varphi_x B_x - h^2 B_{xx} ) u_0 (y) dy d \xi, \] and \[ hD_t v = h^{-1/2} \int e^{i \varphi(t, x, \xi)/h} e^{-iy \xi/h} (\varphi_t B -ih B_t) u_0(y) dy d \xi. \] To approximately solve the semiclassical Schr\"odinger equation for $v$, we use a WKB analysis. We begin by constructing $\varphi$ so that \[ \begin{cases} \varphi_t + (\varphi_x)^2 +V(x) = 0, \\ \varphi|_{t = 0} = x \xi. \end{cases} \] Given such $\varphi$, we solve the transport equations for the amplitude using a semiclassical expansion: \[ B = \sum_{j \geqslant 0} h^j B_j(t, x, \xi), \] and \[ -ihB_t -i h \varphi_{xx} B -2ih \varphi_x B_x - h^2 B_{xx} = 0. \] This amounts to solving: \begin{equation} \label{E:WKB-amp-h} \begin{cases} - B_{0,t} -2 \varphi_x B_{0,x} - \varphi_{xx} B_0 = 0, \\ -iB_{j,t} -i \varphi_{xx} B_j -2 i \varphi_x B_{j,x} - B_{j-1,xx} = 0, \,\,\, j \geqslant 1. \end{cases} \end{equation} \subsection{The partition of unity} In this subsection we construct the partition of unity which will be used to glue together the parametrices constructed in the following subsections. Let $\epsilon>0$, $\delta>0$ be sufficiently small, and $\omega>1$, all to be specified in the sequel. Let $\chi \in {\mathcal C}^\infty_c( {\mathbb R})$, $\chi(r) \equiv 1$ for $|r| \leqslant 1$, with support in $\{ | r | \leqslant 2 \}$ and assume $\chi'(r) \leqslant 0$ for $r \geqslant 0$.
For $\sigma >0$, let $\chi_\sigma(r) = \chi(r/\sigma)$. Let $\chi^\pm \in {\mathcal C}^\infty( {\mathbb R})$, $\chi^\pm(r) \equiv 1$ for $\pm r \gg 1$, $\chi^\pm = 0$ for $\pm r \leqslant 0$, and choose $\chi^\pm$ so that $1 = \chi(r) + \chi^+(r) + \chi^-(r)$, and denote also $\chi^\pm_\sigma(r) = \chi^\pm(r/\sigma)$. Choose also $\psi_0 \in {\mathcal C}^\infty_c( {\mathbb R})$ and $\psi \in {\mathcal C}^\infty_c( {\mathbb R}_+)$ with $\psi_0(r) \equiv 1$ near $r = 0$, and $\psi(r) \equiv 1$ in a neighbourhood of $r = \delta$ such that \[ \sum_{0}^{N(h)} \psi(\omega^{-j} x ) \equiv 1 \text{ for } x \in [\delta, 2\epsilon h^{-1/(m+1)} ], \] and \[ \psi_0(x) + \sum_{0}^{N(h)} \left( \psi(\omega^{-j} x ) + \psi(- \omega^{-j} x ) \right) \equiv 1 \text{ for } x \in [-2 \epsilon h^{-1/(m+1)} , 2\epsilon h^{-1/(m+1)} ]. \] We remark for later use that we take, for example \[ \psi(\omega^{-j} x) = \begin{cases} 1, \text{ for } \delta (\omega^j + \omega^{j-2}) \leqslant x \leqslant \delta (\omega^{j+1} - \omega^{j-1} ) \\ 0, \text{ for } x \in [ \delta (\omega^j - \omega^{j-2}) , \delta (\omega^{j+1} + \omega^{j-1} ) ]^\complement, \end{cases} \] so that in particular \[ | \partial_x^k \psi(\omega^{-j} x) | \leqslant C_k (\delta \omega^{j-2})^{-k}. \] We also observe this implies we need $N(h)$ sufficiently large that $\omega^{N(h)} \sim h^{-1/(m+1)}$, so that $N(h) = {\mathcal O}( \log(1/h))$, with constants depending on $\delta, \omega$, and $m$. We write \[ e^{itP/h} = L(t) + S(t) := (1 - \chi_\epsilon(x)) e^{itP/h} + \chi_\epsilon(x) e^{itP/h} \] for the propagator cut off to large and small values of $x$ respectively.
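To make the count of dyadic pieces explicit: the largest scale must satisfy $\delta \omega^{N(h)+1} \gtrsim 2 \epsilon h^{-1/(m+1)}$, so taking logarithms gives
\[
N(h) = \frac{1}{(m+1) \log \omega} \log(1/h) + {\mathcal O}(1) = {\mathcal O}( \log (1/h) ),
\]
with implicit constants depending on $\delta$, $\omega$, $\epsilon$, and $m$.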
The set where the symbol $p = 1$ contains the critical point $(0,0)$, so we further decompose into frequencies $\xi$ which lie above (respectively below) the set where $p = 1$, and frequencies which are bounded: \[ S(t) = S_{\text{hi}}(t) + S_{\text{lo}}(t) , \] where \[ S_{\text{hi}}(t) = \mathds{1}_{\{\pm hD_x \geqslant 1-V(x)\} } (1-\chi_{\epsilon^2} (P-1)) S(t) , \] and $S_{\text{lo}}(t) = S(t) - S_{\text{hi}}(t)$. We decompose yet again to \begin{align*} S_{\text{lo}}(t) & = S_{\text{lo},0,0}(t) + \sum_{j=0}^{N(h)} (S_{\text{lo},j,+}(t) + S_{\text{lo},j,-}(t) ) , \end{align*} where \[ S_{\text{lo},0,0}(t) = \psi_0(x/h^{1/(m+1)}) S_{\text{lo}}(t), \] and \[ S_{\text{lo},j,\pm}(t) = \psi( \pm \omega^{-j} x/h^{1/(m+1)} )S_{\text{lo}}(t). \] The operators $S_{\text{lo},j,\pm}(t)$ are localized to bounded frequencies, and dyadic strips of size $h^{1/(m+1)} \omega^j$. We require one further localization, which is to assume that the operators are also outgoing/incoming. Choose $\tilde{\chi} \in {\mathcal C}^\infty( {\mathbb R})$ so that $\tilde{\chi}(r) = 1$ for $r \geqslant 1$ and $\tilde{\chi}(r) = 0$ for $r \leqslant 0$. For $a, \gamma >0$ to be determined, let \[ S_{\text{lo},j,+}^\pm(t) = \tilde{\chi}((\pm hD_x + ax^m)/\gamma x^m) S_{\text{lo},j,+}(t), \] and \[ S_{\text{lo},j,-}^\pm(t) = \tilde{\chi}((\mp hD_x + ax^m)/\gamma x^m) S_{\text{lo},j,-}(t). \] This has the effect of localizing in phase space to the sets where \[ \pm \xi \geqslant -ax^m \] for $S_{\text{lo},j,+}^\pm(t)$ and similarly for $S_{\text{lo},j,-}^\pm(t)$. By the properties of $\tilde{\chi}$, we have \[ S_{\text{lo},j,+}^\pm(t) = S_{\text{lo},j,+}(t) \] microlocally on the set \[ \{ \pm \xi \geqslant (\gamma -a) x^m \}. \] If $a > \gamma$, these two sets clearly cover the remaining phase space, so if we can estimate each one of the operators above, we have estimated the entire propagator.
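Concretely, on the support of the spatial cutoffs for the $S_{\text{lo},j,+}$ family (where $x > 0$), the two microlocalization regions satisfy
\[
\{ \xi \geqslant (\gamma - a) x^m \} \cup \{ \xi \leqslant (a - \gamma) x^m \} \supseteq \{ x > 0 \} \quad \text{whenever } a > \gamma,
\]
since then $\gamma - a < 0 < a - \gamma$; the two regions overlap in the band $| \xi | \leqslant (a - \gamma) x^m$, which is harmless for the estimates.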
It is clear then that if we can prove Strichartz estimates with a loss of, say, $\beta/2>0$ for $S_{\text{lo},0,0}(t)$, and for each $S_{\text{lo},j,+}^+$ and $S_{\text{lo},j,-}^-(t)$ for $t \geqslant 0$, the Strichartz estimates follow for $S_{\text{lo},j,+}^-$ and $S_{\text{lo},j,-}^+(t)$ by time reversal. We thus have to prove Strichartz estimates for each of these operators, as well as for $S_{\text{hi}}(t)$ and $L(t)$, at which point we can sum up and take a total loss of $\log(1/h) \, h^{-\beta/2} \leqslant C h^{-\beta}$. \subsection{The parametrix for $L(t)$} We recall that the operator $L(t)$ is the propagator localized to large $| x |$. Then the operator $L(t)$ can be decomposed into $L^+(t) + L^-(t)$, supported where $\pm x >0$ respectively. Thus \[ L^+(t) = \chi^+_\epsilon(x) e^{itP/h}. \] By a $T T^*$ argument (see \cite{KT}), in order to show $L^+ : L^2 \to L^p_T L^q$, it suffices to estimate \[ L^+(t) (L^+)^*(s) : L^1 \to L^\infty, \] but \[ L^+(t) (L^+)^*(s) = \chi^+_\epsilon(x) e^{i(t-s)P/h} \chi^+_\epsilon(x). \] That is, we need only construct a parametrix supported for $x \geqslant \epsilon$, and for initial data supported for $x \geqslant \epsilon$. \begin{lemma} \label{L:L} There exist constants $C>0$ and $\alpha>0$ such that for any $u_0 \in L^1 \cap L^2$, we have \[ \| L^+(t) (L^+)^*(s) u_0 \|_{L^\infty_x} \leqslant C (|t-s|h)^{-1/2} \| u_0 \|_{L^1}, \] for $| t | , | s | \leqslant \alpha h^{-1}$. As a consequence, \[ \| L(t) u_0 \|_{L^{p}_{\alpha h^{-1} } L^{q}} \leqslant C h^{-1/p} \| u_0 \|_{L^2} \] for \[ \frac{2}{p}+ \frac{1}{q} = \frac{1}{2}, \,\,\, 2 \leqslant q < \infty. \] \end{lemma} \begin{proof} The proof is simply to observe that $L^+(t) (L^+)^*(s)$ is equal to a non-trapping cut-off propagator, and hence obeys a dispersive estimate and perfect Strichartz estimates according to \cite{BoTz-gstr}. To see this, let \[ \tilde{A}(x)^{-2} = \chi(x/\epsilon) x^{-2} + (1 - \chi(x/\epsilon)) A^{-2}.
\] The function $\tilde{A}$ agrees with $A$ for large $x$ and agrees with $x$ for small $x$. Then \[ \tilde{g} = dx^2 + \tilde{A}^2(x) d \theta^2, \,\,\, x \geqslant 0 \] is an asymptotically Euclidean metric, which agrees with the Euclidean metric near $x = 0$. In fact, since $g$ was a short-range perturbation of the Euclidean metric as $x \to + \infty$, so is $\tilde{g}$. In addition, we claim that for $\epsilon>0$ sufficiently small, $\tilde{g}$ is a {\it non-trapping} perturbation of the Euclidean metric. To see this, we examine the geodesic equations. Let $\tilde{p} = \xi^2 + \tilde{A}^{-2}(x) \eta^2$, and compute the geodesic equations: \[ \begin{cases} \dot{x} = 2 \xi, \\ \dot{\xi} = 2 \tilde{A}' \tilde{A}^{-3} \eta^2, \\ \dot{\theta} = 2 \tilde{A} \eta, \\ \dot{\eta} = 0. \end{cases} \] Consider a unit speed geodesic with $\tilde{p} \equiv 1$. Since $\eta$ remains constant, either $\eta = 0$, in which case $\xi = \pm 1$ and $x \to \pm \infty$ uniformly, or $\eta \neq 0$. If $\xi \equiv 0$, then necessarily $(\tilde{A}^{-2}(x))' = 0$ and $x$ is stationary, but \[ (\tilde{A}^{-2}(x))' = -2 \chi(x/\epsilon) x^{-3} -2 (1 - \chi(x/\epsilon)) A' A^{-3} + \epsilon^{-1} \chi'(x/\epsilon) ( x^{-2} - A^{-2}). \] But $A' >0$ away from $x = 0$, $x^{-2} \gg A^{-2}$ for $x>0$ sufficiently small, and $\chi' \leqslant 0$ for $x>0$, which together imply $(\tilde{A}^{-2}(x))' <0$ for $x >0$. Hence there are no periodic geodesics along the parallels. It remains to show that every other trajectory escapes to infinity. But since $\dot{\xi} \geqslant c^{-1} x^{-3} \eta^2$, comparing to the non-trapping conic metric with \[ \begin{cases} \dot{x} = 2 \xi, \\ \dot{\xi} = c^{-1} x^{-3} \eta^2 \end{cases} \] implies that every other trajectory is non-trapped. Then following Bouclet-Tzvetkov \cite{BoTz-gstr}, we get that \[ L^+ : L^2 \to L^p_{\alpha h^{-1}} L^q , \,\,\, \alpha >0, \] is a bounded operator for $(p,q)$ in the specified range.
A similar estimate holds for $L^-$, and hence $L$, and hence for any $\epsilon>0$ sufficiently small, we can construct a parametrix to get perfect Strichartz estimates for $| x | \geqslant \epsilon$. \end{proof} \subsection{The parametrix for $S_{\text{hi}}(t)$} The operator $S_{\text{hi}}(t)$ is the propagator localized to small $| x | \leqslant 2\epsilon$ and high frequencies $| P - 1 | \geqslant \epsilon^2$, and $\pm \xi \geqslant 1-V(x)$. In order to estimate $S_{\text{hi}}(t)$, we employ a similar argument. We first decompose $S_{\text{hi}}(t) = S_{\text{hi}}^+(t) + S_{\text{hi}}^-(t)$ into parts supported in $\pm \xi >0$ respectively. The point of the next lemma is that singularities propagate out of this region quickly, depending on the initial frequency. \begin{lemma} \label{L:Shi} There exist constants $\alpha, \kappa>0$ such that \[ \chi^+(|t-s| hD_x / \kappa \epsilon ) S_{\text{hi}}^+(t) (S_{\text{hi}}^+)^*(s) = {\mathcal O}(h^\infty) \] in any seminorm, provided $| t | , | s | \leqslant \alpha h^{-1}$. There exist constants $C>0$ and $\alpha>0$ such that for any $u_0 \in L^1 \cap L^2$, we have \[ \| S_{\text{hi}}^+(t) (S_{\text{hi}}^+)^*(s) u_0 \|_{L^\infty_x} \leqslant C (|t-s|h)^{-1/2} \| u_0 \|_{L^1}, \] for $| t|, |s | \leqslant \alpha h^{-1}$. As a consequence, \[ \| S_{\text{hi}}(t) u_0 \|_{L^p_{\alpha h^{-1} } L^q} \leqslant C h^{-1/p} \| u_0 \|_{L^2} \] for \[ \frac{2}{p}+ \frac{1}{q} = \frac{1}{2}, \,\,\, 2 \leqslant q < \infty. \] \end{lemma} \begin{proof} As usual, we consider the Hamiltonian system associated to $p$: \[ \begin{cases} \dot{x} = 2 \xi, \\ \dot{\xi} = -V'(x), \\ x(0) = y, \\ \xi(0) = \eta, \end{cases} \] where now $|x |, | y | \leqslant 2\epsilon$ and $\eta \geqslant \epsilon$. Then a simple computation shows that in this region $|V'(x) | = {\mathcal O}(\epsilon^{2m-1})$ and $| V''(x) | = {\mathcal O}( \epsilon^{2m-2})$.
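These bounds can be seen directly from the model principal part $A^{-2}(x) = (1 + x^{2m})^{-1/m}$ (the form used in the rescaled computation in the proof of Lemma \ref{L:Slooo} below), for which
\[
V'(x) = -2 x^{2m-1} (1 + x^{2m})^{-1/m - 1} + h^2 V_1'(x),
\]
so that on $| x | \leqslant 2\epsilon$ we have $| V'(x) | \leqslant C \epsilon^{2m-1}$, up to the lower-order $h^2 V_1'$ contribution; one more derivative gives $| V''(x) | \leqslant C \epsilon^{2m-2}$.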
Hence if $t = {\mathcal O}(1)$, we have \[ \xi = \eta + {\mathcal O}( \epsilon^{2m-1}) = \eta(1 + {\mathcal O}(\epsilon^{2m-2})), \] since $\eta \geqslant \epsilon$. Hence \[ \dot{x} = 2\eta(1 + {\mathcal O}(\epsilon^{2m-2})), \] so that \[ x = y + 2 t \eta(1 + {\mathcal O}(\epsilon^{2m-2})), \] provided $t = {\mathcal O}(1)$. This implies in particular, that for any $| t | \geqslant C \epsilon / \eta$, we will have $| x | \geqslant 2\epsilon$, so that we have propagated out of the region of interest. Again, by virtue of a $T T^*$ argument, we are interested in both initial data and parametrix localized in $| x | \leqslant 2\epsilon, \xi \geqslant \epsilon$, so we need only check the estimates on the phase function for $| t | \leqslant C \epsilon / \eta$. We check the invertibility of the map $y \mapsto x(t)$: \begin{align*} \sup_{| t | \leqslant C \epsilon / \eta } \left| \frac{\partial x}{\partial y} (t) \right| & \leqslant 1 + 2 \int_0^{C \epsilon / \eta } (C \epsilon / \eta - s) | V''(x)| \left| \frac{\partial x}{\partial y} (s) \right| ds \\ & \leqslant 1 + {\mathcal O}( \epsilon^2 / \eta^2 ) {\mathcal O}( \epsilon^{2m-2} ) \sup_{| s | \leqslant C \epsilon / \eta }\left| \frac{\partial x}{\partial y} (s) \right| , \end{align*} which implies \begin{align*} \sup_{| t | \leqslant C \epsilon / \eta } \left| \frac{\partial x}{\partial y} (t) \right| & \leqslant 1 +{\mathcal O}( \epsilon^{2m-2} ) . \end{align*} Similarly we compute the lower bound: \begin{align*} \inf_{| t | \leqslant C \epsilon / \eta } \left| \frac{\partial x}{\partial y} (t) \right| & \geqslant 1 - 2 \int_0^{C \epsilon / \eta } (C \epsilon / \eta - s) | V''(x)| \left| \frac{\partial x}{\partial y} (s) \right| ds \\ & \geqslant 1 - {\mathcal O}( \epsilon^2 / \eta^2 ) {\mathcal O}( \epsilon^{2m-2} ) \sup_{| s | \leqslant C \epsilon / \eta }\left| \frac{\partial x}{\partial y} (s) \right| \\ & \geqslant 1 - {\mathcal O}( \epsilon^{2m-2}), \end{align*} using our previously computed upper bound. 
Hence in the range in which we are interested, $\partial x / \partial y$ is uniformly bounded above and below by a constant, provided $\epsilon>0$ is chosen sufficiently small. It is now a routine computation to construct the WKB amplitude and compute the dispersive estimate for $| t | \leqslant C \epsilon/ \eta$. After that time, the $h$-wavefront set of a solution is outside the support of the cutoffs in $S_{\text{hi}}(t)$, so that any parametrix approximation is ${\mathcal O}(h^\infty)$. Summing over ${\mathcal O}(h^{-1})$ such parametrices yields the dispersive estimate for $| t | \leqslant \alpha h^{-1}$, and the associated Strichartz estimates. A similar computation works for $S_{\text{hi}}(t)$ localized to $\xi \leqslant - \epsilon$, which proves the lemma for $S_{\text{hi}}(t)$. \end{proof} \subsection{The parametrix for $S_{\text{lo},0,0}(t)$} \label{SS:sloo} The operator $S_{\text{lo},0,0}(t)$ is the propagator localized to small frequencies $| P -1| \leqslant \epsilon^2$ or $|P-1| \geqslant \epsilon^2$ with $| \xi | \leqslant 1 - V(x)$, as well as localized to a small $h$-dependent spatial neighbourhood $| x | \leqslant \delta h^{1/(m+1)}$. This is the region which contains the trapping. We observe that all of the $S_{\text{lo}}$ operators have $| x | \leqslant \epsilon$, which implies in addition that $| \xi | \leqslant 2 \epsilon$, say. We are now interested in constructing a parametrix in the set $\{ | x | \leqslant \delta h^{1/(m+1)}, | \xi | \leqslant \epsilon \}$. For this, we use the following $h$-dependent scaling operator: \[ T_h u(t,x) = h^{-1/(m+1)} u(h^{(m-1)/(m+1)} t , h^{-1/(m+1)}x). \] The purpose of the prefactor of $h^{-1/(m+1)}$, different from the usual scaling prefactor, is to ensure that $\| T_h u \|_{L^1_x} = \| u \|_{L^1_x}$, since in our final dispersion estimate, this is how the initial data will be measured. 
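Explicitly, the change of variables $x \mapsto h^{1/(m+1)} x$ gives, for each fixed $t$,
\begin{align*}
\| T_h u(t, \cdot) \|_{L^1_x} & = h^{-1/(m+1)} \int | u( h^{(m-1)/(m+1)} t, h^{-1/(m+1)} x ) | \, dx = \| u( h^{(m-1)/(m+1)} t, \cdot) \|_{L^1_x}, \\
\| T_h u(t, \cdot) \|_{L^2_x} & = h^{-1/2(m+1)} \| u( h^{(m-1)/(m+1)} t, \cdot) \|_{L^2_x},
\end{align*}
the second identity being the one used in the error estimates for the parametrix below.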
We compute: \begin{align*} T_h^{-1} (hD_t -h^2 \partial_x^2 + V(x) ) T_h & = (h^{(m-1)/(m+1)} hD_t - h^{-2/(m+1)} h^2 \partial_x^2 + V(h^{1/(m+1)} x) ) \\ & = h^{2m/(m+1)} ( D_t - \partial_x^2 + \widetilde{V}(x;h)), \end{align*} where \[ \widetilde{V}(x;h) = h^{-2m/(m+1)} V(h^{1/(m+1)} x). \] \begin{remark} Similar to the techniques in \cite{ChWu-lsm}, conjugation by the scaling operator $T_h$ is an inhomogeneous ``blowup'' procedure. However, the blowdown map $\mathcal{B}$ is now {\it time-dependent} and takes the form \[ \mathcal{B}(t, \tau, x, \xi) = ( h^{(1-m)/(m+1)} t, h^{2m/(m+1)} \tau, h^{1/(m+1)} x, h^{m/(m+1)} \xi ). \] That is, we are blowing up the $(\tau, x, \xi)$ coordinates and blowing down the $t$ coordinate at the same time. Observe that the blowdown in $t$ does not cause a problem with the calculus since the operator $P$ is independent of $t$. Then indeed, according to the calculus developed in \cite{ChWu-lsm}, $\sigma_h(P) = \tau + \xi^2 + V(x)$ in the $h$ calculus, while $T_h^{-1} P T_h$ has symbol \[ \tilde{p}_1 = (h^{2m/(m+1)} \tau ) + ( h^{m/(m+1)} \xi )^2 + V(h^{1/(m+1)}x) \] in the $1$-calculus, or scale-invariant calculus. Factoring out the $h^{2m/(m+1)}$ as above results in a singular symbol in the scale-invariant calculus (see Figure \ref{fig:phase-blowup}). However, the special structure of $V$ allows us to construct a reasonable parametrix where $V'$ is extremely small, and where $V'$ is large, wave packets propagate away in a controlled fashion. This is made rigorous in the following constructions. \end{remark} Denote $\widetilde{P} = D_t - \partial_x^2 + \widetilde{V}(x;h)$, with $\widetilde{V}(x;h) = h^{-2m/(m+1)} V(h^{1/(m+1)} x)$ as above. We break the parametrix construction into two sets, where $\widetilde{V}'$ is small (and hence this region contains the trapping), and where $\widetilde{V}'$ is large, which we reserve for the next subsections where we estimate $S_{\text{lo},j,\pm}^\pm(t)$.
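For the model principal part $A^{-2}(x) = (1 + x^{2m})^{-1/m}$ appearing in the proof of Lemma \ref{L:Slooo} below, Taylor expansion of the rescaled potential for $| x | \leqslant \delta$ gives
\[
\widetilde{V}(x;h) = h^{-2m/(m+1)} - \frac{x^{2m}}{m} + {\mathcal O}\left( h^{2m/(m+1)} x^{4m} \right) + h^{2/(m+1)} V_1(h^{1/(m+1)} x),
\]
so that, up to the $x$-independent term $h^{-2m/(m+1)}$ (which contributes only a time-dependent phase to the evolution), $\widetilde{P}$ behaves like a Schr\"odinger operator with a degenerate potential maximum at $x = 0$ whose variation over $| x | \leqslant \delta$ is ${\mathcal O}(\delta^{2m})$; in particular $- \partial_x \widetilde{V}(x;h) = 2 x^{2m-1} + {\mathcal O}(h^{2m/(m+1)})$ plus the lower-order $V_1$ contribution, which is bounded on this set.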
We now want to construct a parametrix for $\widetilde{P}$ on the set \[ \{ | x | \leqslant \delta, | \xi | \leqslant 2 \epsilon h^{-m/(m+1)}, | t | \leqslant 1 \}, \] but in the $1$-calculus (scale-invariant). Then if $w(t,x)$ is such a parametrix, $v(t,x) = T_h w(t,x)$ is a parametrix for $P$ on the set $\{ | x | \leqslant \delta h^{1/(m+1)}, | \xi | \leqslant \epsilon, | t | \leqslant h^{(1-m)/(1+m)} \}$, as required. \begin{figure}\label{fig:phase-blowup} \end{figure} \begin{lemma} \label{L:Slooo} There exist $\alpha>0$ and a phase function $\varphi(t, x, \eta)$ satisfying \[ \begin{cases} \varphi_t + \varphi_x^2 + \widetilde{V}(x;h) = 0, \\ \varphi(0, x, \eta) = x \eta \end{cases} \] for $| x | \leqslant \delta$, $| \eta | \leqslant 2\epsilon h^{-m/(m+1)}$, and $| t | \leqslant \alpha$. We further have \[ \varphi_{\eta \eta} \sim 2 t(1 + {\mathcal O}(t)), \] and \[ \varphi_{xx} = {\mathcal O}(tx^{2m-2}) \] for $| t | \leqslant \alpha$. \end{lemma} \begin{proof} The proof is by the usual Hamiltonian method. We consider $q = \xi^2 + \widetilde{V}(x;h)$ and the Hamiltonian system associated to $q$: \begin{equation} \label{E:Ham-q} \begin{cases} \dot{x} = 2 \xi, \\ \dot{\xi} = - \partial_x \widetilde{V}(x;h), \\ x(0) = y, \\ \xi(0) = \eta. \end{cases} \end{equation} Now the potential $\widetilde{V}(x;h)$ has been computed above, and satisfies \begin{align*} -\partial_x \widetilde{V}(x;h) & = -\partial_x \left( h^{-2m/(m+1)}(1 + (h^{1/(m+1)}x)^{2m} )^{-1/m} + h^{2/(m+1)} V_1(h^{1/(m+1)} x) \right) \\ & = 2h^{-2m/(m+1)} h^{1/(m+1)} (h^{1/(m+1)}x)^{2m-1} (1 + (h^{1/(m+1)}x)^{2m} )^{-1/m -1} \\ & \quad + {\mathcal O}( h^{3/(m+1)} (h^{1/(m+1)}x)^{2m-3} ). \end{align*} For $| x | \leqslant \delta$ this derivative is bounded, and has the same sign as $x$. Let us denote $B(x) = - \partial_x \widetilde{V}(x;h)$ to avoid cumbersome notation.
In order to apply the usual Hamilton-Jacobi theory, we need to show that $\partial x/ \partial y$ is uniformly bounded above and below by positive constants on some interval $| t | \leqslant \alpha$, so that we can invert the transformation $y \mapsto x(t)$ to get $y = y(t,x)$. Then using $(x, \eta)$ as coordinates instead of $(y, \eta)$ proves the first part of the Lemma. We write \[ x(t) = y + 2 t \eta + \int_0^t (t-s) B(x(s)) ds, \] and compute \[ \frac{\partial x}{\partial y}(t) = 1 + \int_0^t(t-s) B'(x(s)) \frac{\partial x}{\partial y}(s)ds. \] We know \[ \frac{\partial x}{\partial y}(0) = 1, \] and $B'(x)$ is non-negative in a neighbourhood of $x = 0$, so the integral in the above expression is non-negative for $|x| \leqslant \delta$ and $| t | \leqslant \alpha$ sufficiently small. Further, $B'$ is bounded for $|x | \leqslant \delta$, so the integral expression is also bounded above for $| t | \leqslant \alpha$. Hence by restricting $|x|$ and $| t |$ to fixed, bounded ranges, we conclude the map sending $y \mapsto x(t)$ is invertible, and this completes the proof of the first assertion. We observe that, by construction, $\varphi_\eta (t, x, \eta) = y$, so that to compute $\varphi_{\eta \eta}$, we need to compute \[ \frac{\partial y}{\partial \eta} = \frac{ \partial y}{\partial x} \frac{ \partial x}{\partial \eta}. \] We have already shown that $\partial y / \partial x$ is bounded above and below for $| t | \leqslant \alpha$, so we compute \begin{align*} \frac{ \partial x}{\partial \eta} & = 2t + \int_0^t(t-s) B'(x(s)) \frac{\partial x}{\partial \eta}(s)ds \\ & = 2t + {\mathcal O}(t^2) \sup \frac{\partial x}{\partial \eta}. \end{align*} This implies \[ \sup_{|t| \leqslant \alpha} \frac{\partial x}{\partial \eta}(t) \leqslant 2t(1 + {\mathcal O}(t)). \] Plugging this into the integral expression above yields \[ \inf_{|t| \leqslant \alpha} \frac{\partial x}{\partial \eta} \geqslant 2t(1 - {\mathcal O}(t^2)).
\] Finally, since the intertwining relation gives $\varphi_x (t, x, \eta) = \xi$, we have \[ \varphi_{xx} = \partial_y \xi \partial_x y \] in the notation above. We have already shown that $\partial_x y$ is bounded above and below by a positive constant for $| t | \leqslant \alpha$, so we just need to compute \begin{align*} \partial_y \xi & = \partial_y \left( \eta - \int_0^t \widetilde{V}'(x(s)) ds \right) \\ & = -\int_0^t \widetilde{V}''(x(s)) \partial_y x(s) ds \\ & = {\mathcal O}( t x^{2m-2} ). \end{align*} This is the last assertion in the Lemma. \end{proof} We now construct the amplitude for the parametrix for the operator $S_{\text{lo},0,0}(t)$. This, combined with Lemma \ref{L:Slooo}, will be used to compute a dispersion estimate, resulting in a Strichartz estimate. The problem is that, since we are working in a marginal calculus, the error terms in our parametrix are just too large. For example, the error term $\varphi_{xx} \sim t x^{2m-2}$ computed in Lemma \ref{L:Slooo} rescales as \[ T_h \varphi_{xx} \sim h^{(m-1)/(m+1)} t h^{-(2m-2)/(m+1)} x^{2m-2} \sim h^{(1-m)/(1+m)} t x^{2m-2}. \] This operator, when composed with the appropriate oscillatory integral, yields an $L^2$ bounded operator for each $t$, $| t | \leqslant h^{(1-m)/(m+1)}$. However, to apply an energy estimate or a local smoothing estimate, we either have to integrate in time (now an interval of length $\sim h^{(1-m)/(1+m)}$), or pull out a factor of $x^{m-1}$ to apply Theorem \ref{T:lsm}. In either case, we lose a factor of $h^{(1-m)/2(m+1)}$. Hence at this point we must accept a $\beta >0$ loss in regularity by restricting our attention to a slightly smaller time interval. Then the ``lower order'' terms in the amplitude construction will actually gain powers of $h$. We are interested in constructing a parametrix for the operator $S_{\text{lo},0,0}(t) S_{\text{lo},0,0}^*(s)$.
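The gain from shrinking the time interval is easy to quantify: after rescaling by $T_h$, the restriction to classical times $|t| \leqslant \alpha h^{(1-m)/(m+1) + \beta}$ becomes $|t| \leqslant \alpha h^{\beta}$, so a term of size ${\mathcal O}(t^j)$ in the amplitude expansion obeys
\[
| t |^j \leqslant (\alpha h^\beta)^j = {\mathcal O}( h^{j \beta} ),
\]
gaining a factor of $h^\beta$ with each successive term.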
We have constructed a phase function $\varphi(t, x, \xi)$ in rescaled coordinates, assuming appropriate microlocal cutoffs. That is, we have constructed the appropriate phase functions to approximate the operators \[ T_h^{-1} S_{\text{lo},0,0}(t) S_{\text{lo},0,0}^*(s) = T_h^{-1} S_{\text{lo},0,0}(t-s) \chi_\star, \] where $\chi_\star$ is the appropriate microlocal cutoff. We have not yet computed the amplitude. Recalling the transport equations in the $h$-calculus \eqref{E:WKB-amp-h}, the transport equations for the amplitude $B$ in the rescaled $1$-calculus coordinates become \[ D_t B + 2 \varphi_x D_x B -i \varphi_{xx} B - \partial_x^2 B = 0. \] The standard technique here would be to use an asymptotic series; however, there is no small parameter, so we instead modify our ansatz to take advantage of the Frobenius theorem. That is, the Frobenius theorem guarantees the existence of a function $\Gamma(t,x)$, depending implicitly on the frequency $\xi$, satisfying \[ \begin{cases} \partial_t \Gamma + 2 \varphi_x \partial_x \Gamma = 0, \\ \Gamma(0,x) = x. \end{cases} \] We then construct $B = \sum_{j = 0}^K B_j$ for sufficiently large $K$ to be determined (independent of $h$) with \[ \begin{cases} B_0 \equiv 1, \\ B_j = -\int_0^t \left( \varphi_{xx} B_{j-1} + i B_{j-1,xx} \right) \big|_{(s, \Gamma(t-s,x))} \, ds. \end{cases} \] An induction argument shows that $B_j = {\mathcal O}(t^j)$ for each $j$, since we are in the scale invariant calculus. Then \[ w(t, x) = (2 \pi)^{-1} \int e^{i \varphi(t, x, \xi) - i y \xi } B(t, x, \xi) \chi_\star (y, D_y)^* w_0(y) dy d \xi \] solves \[ \begin{cases} \widetilde{P} w = \widetilde{E}, \\ w(0, x) = \chi_\star (x, D_x)^* w_0(x), \end{cases} \] where \[ \chi_\star = T^{-1}_h \psi_0(x/h^{1/(m+1)}) ( 1 - \mathds{1}_{\{\pm hD_x \geqslant 1-V(x)\} } (1-\chi_{\epsilon^2} (P-1)) ) \chi_\epsilon(x) T_h \] is the appropriate microlocal cutoff, and the equation is understood to make sense for $|t| \leqslant \alpha$.
Here, the error $\widetilde{E}$ is given by \[ \widetilde{E} = (2 \pi)^{-1} \int e^{i \varphi(t, x, \xi) - i y \xi} (-\partial_x^2 B_K -i \varphi_{xx} B_K )\chi_\star^*(y, D_y) w_0(y) dy d \xi . \] That is, $\widetilde{E}$ is an oscillatory integral operator with the same phase function as $w$, and amplitude $G(t,x,\xi)$ satisfying \[ | \partial_x^k \partial_\xi^l G| \leqslant C_{kl} t^K, \] and hence, according to the next Lemma, satisfies \[ \| \widetilde{E} \|_{L^2_x} = {\mathcal O} (t^K) \| \chi_\star^* w_0 \|_{L^2}. \] \begin{lemma} Suppose $G(t, x, \xi) \in {\mathcal C}^\infty_{b} {\mathcal S}_{0,0}$ is a smooth family of symbols with bounded derivatives, and let $F(t)$, $0 \leqslant t \leqslant \alpha$, be the operator defined by \[ F(t) g(x) = \int e^{i \varphi(t, x, \xi) - i y \xi } G(t, x, \xi) \chi_\star(y, D_y) g(y) dy d \xi, \] where $\varphi$ is the phase function constructed above and $\chi_\star$ is the appropriate microlocal cutoff. Then \[ \sup_{0 \leqslant t \leqslant \alpha} \| F(t) g \|_{L^2} \leqslant C \| g \|_{L^2}. \] \end{lemma} \begin{proof} Let us work microlocally to avoid continually using microlocal cutoffs, and therefore assume the appropriate microlocal concentration. The $L^2$ boundedness of $F(t)$ is equivalent to the $L^2$ boundedness of $F(t)^*$, which follows from the $L^2$ boundedness of $F(t) F(t)^*$. The operator $F(t) F(t)^*$ is easily seen to have integral kernel \[ K = \int e^{i \varphi(t, x, \xi) - i \varphi(t, x', \xi)} G(t, x, \xi) \bar{G}(t, x', \xi) d \xi, \] where again we are implicitly assuming appropriate microlocal cutoffs. By stationary phase, this integral kernel has singularities when \[ \partial_\xi (\varphi(t, x, \xi) - \varphi(t, x', \xi)) = 0, \] which is when (using the notations from the phase construction) \[ y(t, x, \xi) - y(t, x', \xi) = 0.
\] Let us assume that $x \geqslant x'$, so that we want to compute where \[ (x - x') \left( \partial_x y |_x + {\mathcal O}( \partial_x^2 y (x - x') ) \right) = 0. \] Now due to the microlocal cutoffs $\chi_\star$, we have that $x$ and $x'$ are both small. By the inverse function theorem and the boundedness of $\partial_x y$, we need to estimate $\partial_y^2 x$ in the Hamiltonian systems used to construct the phase functions. We compute \[ \partial_y^2 x = - \int_{0}^t (t-s) \left( \widetilde{V}'''(x) (\partial_y x)^2 + \widetilde{V}''(x) \partial_y^2 x \right) ds, \] and estimating the first term by a small constant $c$ and solving for $\sup \partial_y^2 x$ shows that \[ |{\mathcal O}(\partial_x^2 y (x - x') ) | \leqslant c', \] where $c'>0$ is a small constant depending on our previous choices of $\epsilon$, $\delta$, and $\omega$. Iterating this argument for other powers of $(x-x')$ shows that the singularities of the integral kernel lie on the diagonal $| x - x' | = 0$, so the integral kernel defines a zeroth-order pseudodifferential operator with symbol in the class ${\mathcal S}_{0,0}$. By the Calder\'on-Vaillancourt theorem, the $L^2$ boundedness is established. \end{proof} If we now take $v = T_h w$, we see \begin{align*} (hD_t + P) v & = T_h T_h^{-1} (hD_t + P) T_h w \\ & = h^{2m/(m+1)} T_h \widetilde{P} w \\ & = E, \end{align*} with initial conditions \[ v(0, x) = T_h w(0,x), \] and where \[ E = h^{2m/(m+1)} T_h \widetilde{E}.
\] A simple computation shows that $\| T_h f \|_{L^2} = h^{-1/2(m+1)} \| f \|_{L^2}$, so that if we now restrict attention to the smaller time interval \[ 0 \leqslant t \leqslant \alpha h^{(1-m)/(m+1) + \beta} \] for some small fixed $\beta >0$, we have \begin{align*} \sup_{0 \leqslant t \leqslant \alpha h^{(1-m)/(m+1) + \beta} }\| E \|_{L^2} & = h^{(4m-1)/2(m+1)} \sup_{0 \leqslant t \leqslant \alpha h^\beta } \| \widetilde{E} \|_{L^2} \\ & \leqslant C h^{(4m-1)/2(m+1)} h^{\beta K} \| \chi_\star^*w_0 \|_{L^2} \\ & \leqslant C h^{2m/(m+1)} h^{\beta K}\| \chi_\star^* v_0 \|_{L^2}. \end{align*} Here, in the above computations, we have suppressed the variables of the microlocal cutoffs $\chi_\star$, which are understood to be evaluated in the phase space variables of the appropriate scale. The following lemma contains the dispersion and Strichartz estimates for the operators $S_{\text{lo},0,0}(t)$. \begin{lemma} \label{L:disp-sloo} The parametrix $v(t, x)$ satisfies the dispersion estimate \[ \| \chi_\star v \|_{L^\infty} \leqslant C (ht )^{-1/2} \| \chi_\star v_{0} \|_{L^1}, \] where $0< t \leqslant \alpha h^{(1-m)/(1+m)} $, as well as the corresponding Strichartz estimate \[ \| v \|_{L^p_{\alpha h^{(1-m)/(1+m)} } L^q} \leqslant C h^{-1/p} \| \chi_\star v_{0} \|_{L^2}, \] for \[ \frac{2}{p} + \frac{1}{q} = \frac{1}{2}, \,\,\, q < \infty, \] and constants independent of $h$. The cutoff propagator $S_{\text{lo},0,0}$ satisfies \[ \| S_{\text{lo},0,0} \|_{L^2 \to L^{p}_{\alpha h^{(1-m)/(1+m) + \beta}} L^{q} }\leqslant C h^{-1/p} , \] and \[ \| S_{\text{lo},0,0} \|_{L^2 \to L^{p}_{\alpha h^{(1-m)/(1+m) }} L^{q} }\leqslant C h^{-(1+\beta)/p} , \] for $(p,q)$ in the same range and constants independent of $h$.
\end{lemma} \begin{remark} Observe that the parametrix satisfies good Strichartz estimates all the way up to the critical time scale $t \sim h^{(1-m)/(m+1)}$, but we are only able to conclude that the propagator obeys perfect Strichartz estimates on a slightly shorter time scale, or obeys Strichartz estimates with a small loss on the critical time scale. This is an artifact of working in the marginal calculus and trying to make error terms small in $h$. \end{remark} \begin{proof} We have \begin{align*} v & (t, x) \\ & = T_h w(t, x) \\ & = T_h (2 \pi )^{-1} \int e^{i \varphi(t, x, \xi) - i y \xi } B(t, x, \xi) \chi_\star^* (y, D_y, h) w_0(y) dy d \xi \\ & = h^{-1/(m+1)} (2 \pi )^{-1} \int e^{i \varphi(h^{(m-1)/(m+1)}t, h^{-1/(m+1)} x, \xi) } \\ & \quad \cdot e^{-iy\xi} B( h^{(m-1)/(m+1)}t, h^{-1/(m+1)} x, \xi )\chi_\star^* (y, D_y, h) w_0(y) dy d \xi \\ & = (2 \pi h )^{-1} \int e^{i \varphi_\star (t, x, \xi) - i y \xi /h } B_{\star}(t, x, \xi ) T_h \chi_\star^* (y, D_y, h) w_0(y) dy d \xi, \end{align*} where we use the notation \[ \varphi_\star(t, x, \xi) = \varphi(h^{(m-1)/(m+1)}t, h^{-1/(m+1)} x, h^{-m/(m+1)} \xi), \] and similarly for $B$. We rewrite this expression as \[ v_\star(t, x) = \int_y K_\star(t, x, y; h) \chi_\star v_{\star,0}(y) dy, \] where \[ K_\star(t, x, y;h) = (2 \pi h)^{-1} \int e^{i \varphi_\star(t, x, \xi) - iy \xi/h} B_{\star,0}(t, x, \xi) \tilde{\chi}_\star(y, \xi; h) d \xi, \] and \[ \chi_\star v_{\star,0}(y) = T_h \chi_\star^* (y, D_y, h) w_0(y). \] We have already computed the derivative properties of the functions $\varphi$ and $B$ in order to apply the lemma of stationary phase (with $h$ as small parameter). 
The unique critical point is at \[ \partial_\xi ( h \varphi_\star(t, x, \xi) - y \xi) = 0, \] so the leading asymptotic is \begin{align*} (2 \pi h)^{-1/2} & | \partial_\xi^2 (h \varphi_\star (t, x, \xi) -y \xi ) |^{-1/2} \\ & = (2 \pi h)^{-1/2} \big| h h^{-2m/(m+1)} \varphi_{\xi \xi}\big|^{-1/2}_{ (h^{(m-1)/(m+1)}t, h^{-1/(m+1)} x, h^{-m/(m+1)} \xi)} \\ & \sim h^{-1/2} | h h^{-2m/(m+1)} h^{(m-1)/(m+1)}t |^{-1/2} \\ & = | h t|^{-1/2}, \end{align*} as claimed. The Strichartz estimates follow immediately. We now estimate the difference between the propagator and the parametrix in the $L^\infty_x$ norm to prove that the actual propagator has the correct dispersion, at least on a slightly shorter time scale. Let $u(t,x) = S_{\text{lo},0,0}(t) v_0(x)$, so that \[ \begin{cases} (hD_t + P ) (v-u) = E, \\ (v-u)|_{t = 0 } = 0. \end{cases} \] Since the propagator and the parametrix are essentially compactly supported in frequency on scale $h^{-1}$, we have the endpoint Sobolev embeddings: \[ \sup_{|t| \leqslant \alpha h^{(1-m)/(m+1) + \beta}}\| v-u \|_{L^\infty_x} \leqslant h^{-1/2} \sup_{|t| \leqslant \alpha h^{(1-m)/(m+1) + \beta}}\| v-u \|_{L^2_x}. \] Let the energy $\mathcal{E}(t) = \| v-u\|_{L^2}^2$, and compute \begin{align*} \mathcal{E}'& =2 \,\mathrm{Re}\, \frac{i}{h} \int E \overline{ (v-u)} dx \\ & \leqslant h^{-1} h^{(1-m)/(m+1) + \beta} \| E \|_{L^2_x}^2 + h^{(m-1)/(m+1) + \beta} \mathcal{E}, \end{align*} and hence by Gronwall's inequality, \begin{align*} \mathcal{E}(t) & \leqslant C h^{-2m/(m+1) + \beta} \| E \|_{L^2_t L^2_x}^2 \\ & \leqslant C h^{(1-3m)/(m+1) + 2\beta } \| E \|_{L^\infty_{h^{(1-m)/(m+1) + \beta}} L^2_x }^2 \\ & \leqslant C h^{1 +2(K+1) \beta} \| \chi^*_\star w_0 \|_{L^2}^2 \\ & \leqslant C h^{2(K+1)\beta} \| \chi^*_\star w_0 \|_{L^1_x}^2. 
\end{align*} We finally conclude \begin{align*} \sup_{|t| \leqslant \alpha h^{(1-m)/(m+1) + \beta}}\| v-u \|_{L^\infty_x} & \leqslant C h^{-1/2 + (K+1) \beta} \| \chi^*_\star w_0 \|_{L^1_x} \\ & \leqslant C |ht|^{-1/2} \| \chi^*_\star w_0 \|_{L^1_x} , \end{align*} provided $| t | \leqslant \alpha h^{(1-m)/(m+1) + \beta}$ and $K$ is sufficiently large that \[ -\frac{1}{2} + (K+1) \beta \geqslant -\frac{1}{m+1} - \frac{\beta}{2}. \] The Strichartz estimates for $S_{\text{lo},0,0}(t)$ follow immediately. \end{proof} \subsection{The parametrix for $S_{\text{lo},j,+}^+(t)$} The operators $S_{\text{lo},j,+}^+(t)$ are the propagator localized to outgoing frequencies $-ax^m \leqslant \xi \leqslant 2 \epsilon$ in the spatial interaction region $\{ \delta h^{1/(m+1)} /2 \leqslant \pm x \leqslant 2\epsilon \}$. We have divided the spatial interaction region into $h$-dependent geometric regions; $S_{\text{lo},j,+}^+(t)$ is localized to \[ x \in h^{1/(m+1)} I_j := [h^{1/(m+1)}\delta (\omega^j - \omega^{j-2}) , h^{1/(m+1)}\delta (\omega^{j+1} + \omega^{j-1} )] . \] The symbol $\tilde{\chi}((\xi + ax^m)/\gamma x^m)$ is invariant under the rescaling operation, so after applying the rescaling operators, we are interested in constructing a parametrix in the regions \[ -ax^m \leqslant \xi \leqslant 2 \epsilon h^{-m/(m+1)}, \,\, x \in I_j . \] When the derivative of the effective potential $\widetilde{V}'$ is large, singularities propagate away quickly, however not uniformly so. We introduce a loss by constructing $\log(1/h)$ parametrices, and by eventually restricting our construction to subcritical time scales. We now compute how long it takes a wave packet to exit the interval $I_j$. Write \[ I_j = [y_j^-, y_j^+]:= [\delta (\omega^j - \omega^{j-2}) , \delta (\omega^{j+1} + \omega^{j-1} )] , \] and fix an initial point $(y, \eta)$ with $y \in I_j$, $\eta \geqslant -a(y_j^+)^m$. 
Then recalling the Hamiltonian system \eqref{E:Ham-q}, we have \[ x (t) \geqslant y - 2t a (y^+_{j})^m \geqslant \frac{1}{2} y_j^- \] as long as \[ 0 \leqslant t \leqslant \frac{y_j^- (y_{j}^+)^{-m}}{4a}. \] We have \[ y_j^+ = y_j^-(\omega + {\mathcal O}(\omega^{-1})), \] so that $x(t) \geqslant y_j^-/2$ provided \[ 0 \leqslant t \leqslant \frac{(y_j^-)^{1-m}}{4 a \omega^m} (1 + {\mathcal O}(\omega^{-1})). \] In this case, \[ -\partial_x \widetilde{V} \geqslant (y_j^-/2)^{2m-1}, \] which in turn implies \[ \xi \geqslant -a (y_{j}^+)^m + t(y_j^-/2)^{2m-1} \geqslant b ( y_{j}^-)^m, \] provided \[ t \geqslant 2^{2m-1}( a\omega^{m}(1 + {\mathcal O}(\omega^{-1})) + b) (y_j^-)^{1-m}. \] Choosing $a, b>0$ sufficiently small means we can assume $\eta \geqslant b (y_j^-)^{m}$ after a time comparable to at most $(y_j^-)^{1-m}$. We now compute how long it takes to leave $I_j$ assuming $y \in I_j$ and $\eta \geqslant b (y_j^-)^m$. We have \begin{align*} x & = y + 2 t \eta + \int_0^t (t -s) B(x(s)) ds \\ & \geqslant y_j^- + 2 t b (y_j^-)^m + \int_0^t (t-s) B(y_j^-) ds \\ & \geqslant y_j^- + 2tb (y_j^-)^m + \frac{1}{2} t^2 (y_j^-)^{2m-1} \\ & \geqslant y_{j}^+ \end{align*} provided \[ t \geqslant (y_j^-)^{1-m} \left( -2b + \sqrt{4 b^2 + 2 (y_{j}^+/y_j^- - 1)} \right), \] which is again comparable to $(y_j^-)^{1-m}$. We now estimate for $t = \alpha (y_j^-)^{1-m}$, for $\alpha>0$ to be determined: \begin{align*} \left| \frac{\partial x}{\partial y}(t) \right| & \leqslant 1 + \int_0^t (t-s) (4m-2) (y_{j}^+)^{2m-2} ds \left| \frac{\partial x}{\partial y}(t) \right| \\ & \leqslant 1 + (2m-1)t^2 (y_{j}^+)^{2m-2} \left| \frac{\partial x}{\partial y}(t) \right| \\ & \leqslant 1 + C_{\omega,m, a, b} \alpha^2 \left| \frac{\partial x}{\partial y}(t) \right|. \end{align*} Choosing $\alpha>0$ sufficiently small (but independent of $h$) shows that \[ \left| \frac{\partial x}{\partial y}(t) \right| \leqslant C \] uniformly for $t$ in this range. 
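The exit-time computations above reduce to a scaling property of the flow: the time to cross the dyadic interval $[y, \omega y]$ starting from an outgoing point $(y, b y^m)$ is comparable to $y^{1-m}$. The sketch below checks this numerically for the model flow $\dot x = 2\xi$, $\dot \xi = x^{2m-1}$ (an assumption standing in for the Hamiltonian system \eqref{E:Ham-q}, whose precise form is fixed earlier in the paper; the constants $m$, $\omega$, $b$ are ad hoc).

```python
# Model Hamiltonian flow (a hypothetical stand-in for \eqref{E:Ham-q}):
#   x' = 2*xi,  xi' = x**(2m-1)
m = 2          # potential exponent (ad hoc choice)
omega = 2.0    # dyadic ratio of the interval I_j
b = 0.25       # lower bound on the rescaled outgoing frequency

def exit_time(y, dt=1e-4, tmax=50.0):
    """First time the trajectory starting at (y, b*y^m) leaves [y, omega*y]."""
    x, xi, t = y, b * y**m, 0.0
    f = lambda x, xi: (2.0 * xi, x**(2 * m - 1))
    while t < tmax:
        # one classical RK4 step
        k1 = f(x, xi)
        k2 = f(x + 0.5 * dt * k1[0], xi + 0.5 * dt * k1[1])
        k3 = f(x + 0.5 * dt * k2[0], xi + 0.5 * dt * k2[1])
        k4 = f(x + dt * k3[0], xi + dt * k3[1])
        xn = x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        xin = xi + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        if xn >= omega * y:   # linear interpolation of the crossing time
            return t + dt * (omega * y - x) / (xn - x)
        x, xi, t = xn, xin, t + dt
    raise RuntimeError("no exit before tmax")

# exit time from [y, omega*y] should scale like y^{1-m}
T1, T2 = exit_time(1.0), exit_time(2.0)
assert abs(T1 / T2 - 2.0**(m - 1)) < 0.05
```

By the scale invariance $x \mapsto \lambda x$, $\xi \mapsto \lambda^m \xi$, $t \mapsto \lambda^{1-m} t$ of this model flow, the ratio of exit times is exactly $2^{m-1}$ up to discretization error.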
With this estimate in hand, we can compute $\partial x / \partial \eta = 2t(1 + {\mathcal O}(t))$ as usual, which results in the following lemma. In practice, we need to gain some powers of $h$ in our parametrix construction, so we only construct the parametrix up to time $t \sim h^{\beta/2} (y_j^-)^{1-m}$ for a small $\beta>0$, and then iterate $C h^{-\beta}$ times. After time $t \sim h^{-\beta/2} (y_j^-)^{1-m}$ the wavefront set will be outside the interval $I_j$. Let us state the following lemma for the short $h$-independent time scale $0 \leqslant t \leqslant \alpha (y_j^-)^{1-m}$; we will worry about summing over the $h$-dependent number of time intervals after constructing the amplitude. \begin{lemma} \label{L:Sloojpm} There exist $\alpha, a >0,$ and $\omega>1$ independent of $h$ and $j$ such that for each $0 \leqslant j \leqslant {\mathcal O}( \log(1/h))$, there is a phase function $\varphi(t, x, \xi)$ satisfying \[ \begin{cases} \varphi_t + \varphi_x^2 + \widetilde{V}(x;h) = 0, \\ \varphi(0, x, \eta) = x \eta \end{cases} \] for $x \in I_j$, $-a (y_j^+)^m \leqslant \xi \leqslant 2 \epsilon h^{-m/(m+1)}$, and $| t | \leqslant \alpha (y_j^-)^{1-m}$. We further have \[ \varphi_{\eta \eta} \sim 2 t(1 + {\mathcal O}(t)), \] for $| t | \leqslant \alpha (y_j^-)^{1-m}$. \end{lemma} We now construct the amplitude for the parametrix for the operator $S_{\text{lo},j,+}^+ (t)$. This, combined with Lemma \ref{L:Sloojpm}, will be used to compute a dispersion estimate, resulting in a Strichartz estimate. The problem is that, just as in Subsection \ref{SS:sloo}, we are working in a marginal calculus, so to construct the amplitude as an asymptotic series, we must restrict the range of $t$ to depend mildly on $h$. We again appeal to the Frobenius theorem to get a function $\Gamma(t,x)$ (again implicitly depending on the frequency $\xi$) satisfying \[ \begin{cases} \partial_t \Gamma + 2 \varphi_x \partial_x \Gamma = 0, \\ \Gamma(0,x) = x. 
\end{cases} \] We then construct $B = \sum_{j = 0}^K B_j$ for sufficiently large $K$ to be determined (independent of $h$) with \[ \begin{cases} B_0 \equiv 1, \\ B_j = -\int_0^t \varphi_{xx} B_{j-1} |_{(s, \Gamma(t-s,x))} + i B_{j-1,xx} |_{(s, \Gamma(t-s,x))} \, ds. \end{cases} \] A tedious induction argument shows that $B_j$ satisfies \[ | \partial_x^l B_j | = {\mathcal O} \left( \sum_{k = 1}^j \left| t^{k+j} x^{2km-2j-l} \right| \right). \] Then \[ w(t, x) = (2 \pi)^{-1} \int e^{i \varphi(t, x, \xi) - i y \xi } B(t, x, \xi) \chi(y, D_y)^* w_0(y) dy d \xi \] solves \[ \begin{cases} \widetilde{P} w = \widetilde{E}, \\ w(0, x) = \chi_\star (x, D_x)^* w_0(x), \end{cases} \] where \[ \chi_\star = T^{-1}_h \psi( \pm \omega^j x/h^{1/(m+1)} ) ( 1 - \mathds{1}_{\{\pm hD_x \geqslant 1-V(x)\} } (1-\chi_{\epsilon^2} ((P-1))) ) \chi_\epsilon(x) T_h \] is the appropriate microlocal cutoff, and the equation is understood to make sense for $|t| \leqslant \alpha (y_j^-)^{1-m}$. Here, the error $\widetilde{E}$ is given by \[ \widetilde{E} = (2 \pi)^{-1} \int e^{i \varphi(t, x, \xi) - i y \xi} (-\partial_x^2 B_K -i \varphi_{xx} B_K )\chi^*(y, D_y) w_0(y) dy d \xi . \] That is, $\widetilde{E}$ is an oscillatory integral operator with the same phase function as $w$. Having computed the symbol of the error term $\widetilde{E}$ to be $-\partial_x^2 B_K - i \varphi_{xx} B_K$, in the rescaled coordinates we have for $|t | \leqslant h^{\beta/2} |x|^{1-m}$, \begin{align*} -\partial_x^2 B_K - i \varphi_{xx} B_K & = {\mathcal O} \left( \sum_{l = 1}^{K+1} | t|^{l + K} | x|^{2ml - 2K -2} \right) \\ & = {\mathcal O}\left(\sum_{l = 1}^{K+1} h^{(l+K)\beta/2} | x |^{(m+1)(l-K) -2 } \right) \\ & = {\mathcal O} ( h^{(1+K) \beta/2} |x |^{m-1} ) \end{align*} in the worst case when $l = K+1$. Now since $| x | \leqslant h^{-1/(m+1)}$, this error term is of order ${\mathcal O}(h^{(1 + K ) \beta /2 + (1-m)/(1+m) } )$, which is small as $K$ gets large. 
If we now take $v = T_h w$, we see \begin{align*} P v & = T_h T_h^{-1} P T_h w \\ & = h^{2m/(m+1)} T_h \widetilde{P} w \\ & = E, \end{align*} with initial conditions \[ v(0, x) = T_h w(0,x), \] and where \[ E = h^{2m/(m+1)} T_h \widetilde{E}. \] A computation similar to that of Subsection \ref{SS:sloo} shows \begin{align*} \sup_{0 \leqslant t \leqslant \alpha h^{ \beta/2} |y_j^-|^{1-m} }\| E \|_{L^2} \leqslant C h^{2m/(m+1)} h^{\beta (1+K)/2 +(1-m)/(1+m)}\| \chi_\star^* v_0 \|_{L^2}. \end{align*} The following lemma contains the dispersion and Strichartz estimates for the operators $S_{\text{lo},j,+}^+(t)$. \begin{lemma} \label{L:disp-slojp} The parametrix $v(t, x)$ satisfies the dispersion estimate \[ \| \chi_\star v \|_{L^\infty} \leqslant C (ht )^{-1/2} \| \tilde{\chi} v_{0} \|_{L^1}, \] where $0< t \leqslant \alpha h^{ \beta/2} |y_j^-|^{1-m} $, as well as the corresponding Strichartz estimate \[ \| v \|_{L^p_{ \alpha h^{ \beta/2} |y_j^-|^{1-m} } L^q} \leqslant C h^{-1/p} \| \chi v_{0} \|_{L^2}, \] for \[ \frac{2}{p} + \frac{1}{q} = \frac{1}{2}, \,\,\, q < \infty, \] and constants independent of $h$. The cutoff propagator $S_{\text{lo},j,+}^+$ satisfies \[ \| S_{\text{lo},j,+}^+ \|_{L^2 \to L^{p}_{ \alpha h^{ \beta/2} |y_j^-|^{1-m} } L^{q} }\leqslant C h^{-1/p} , \] and \[ \| S_{\text{lo},j,+}^+ \|_{L^2 \to L^{p}_{\alpha h^{(1-m)/(1+m) }} L^{q} }\leqslant C h^{-(1+\beta)/p} , \] for $(p,q)$ in the same range and constants independent of $h$. \end{lemma} The proof is exactly the same as the proof of Lemma \ref{L:disp-sloo}, with the exception of the different time interval. Summing over $h^{-\beta}$ intervals of length $h^{\beta/2} | y_j^- |^{1-m}$ results in an interval of length $h^{-\beta/2} | y_j^- |^{1-m}$. According to Lemma \ref{L:Sloojpm} (combined with the Egorov theorem in the $h^{-1/2+\beta}$ calculus), after this time the parametrix and the error are both ${\mathcal O}(h^\infty)$. 
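For orientation, the $(ht)^{-1/2}$ decay appearing in these dispersion estimates can be observed directly on the free model $hD_t u + (hD_x)^2 u = 0$, for which the bound $\| u(t) \|_{L^\infty} \leqslant (4\pi h t)^{-1/2} \| u_0 \|_{L^1}$ is exact. The sketch below is an illustration only: the free flow replaces the variable-coefficient propagator, and all grid parameters are ad hoc.

```python
import numpy as np

h, t = 0.1, 50.0                       # semiclassical parameter and time (ad hoc)
N, L = 2**14, 400.0                    # grid size and periodic box [-L/2, L/2)
dx = L / N
x = (np.arange(N) - N // 2) * dx

u0 = np.exp(-x**2)                     # Gaussian initial data
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# free semiclassical flow u_t = i h u_xx: Fourier multiplier e^{-i h xi^2 t}
u = np.fft.ifft(np.exp(-1j * h * xi**2 * t) * np.fft.fft(u0))

sup_u = np.abs(u).max()
bound = (4 * np.pi * h * t) ** -0.5 * np.abs(u0).sum() * dx   # (4 pi h t)^{-1/2} ||u0||_1
assert sup_u <= bound * 1.001          # dispersion: ||u(t)||_inf <= (4 pi h t)^{-1/2} ||u0||_1
```

For Gaussian data the bound is nearly saturated at large time, which is consistent with the sharpness of the $(ht)^{-1/2}$ rate.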
\subsection{Proof of Proposition \ref{P:mode-str}} In this subsection, we see how to use the computed Strichartz estimates plus the local smoothing from \cite{ChWu-lsm} to prove Proposition \ref{P:mode-str}. From the semiclassical Strichartz estimates, if we let $v(t, x) = v_{lk}(th,x)$ as in Proposition \ref{P:mode-str} and rescale appropriately, we get \[ \| \chi v_{lk} \|_{L^{2n}_T L^{2^\star} } \leqslant C_\beta \| \left\langle k \right\rangle^{\beta} v_{lk}^0 \|_{L^2}, \] for $T \leqslant \epsilon k^{-2/(m+1)}$, and where $\chi \in {\mathcal C}^\infty_c$ is any smooth, compactly supported function. Recall that according to Lemma \ref{L:L}, we already have perfect Strichartz estimates for $(1-\chi) v_{lk}$ if $\chi \equiv 1$ near $x = 0$. Further, by Lemma \ref{L:Shi}, we have perfect Strichartz estimates for large frequencies and small $x$: if $\psi(\xi) \equiv 1$ near $0$, $\chi (1-\psi(-h^2 \Delta)) v_{lk}$ obeys perfect Strichartz estimates. Let $\chi$ and $\psi$ be such cutoffs. In order to estimate $\chi \psi v_{lk}$, we employ a duality trick (see \cite{BGH}) together with the local smoothing estimates from \cite{ChWu-lsm}. Let $\varphi(s) \in {\mathcal C}^\infty_c$ be a compactly supported function such that \[ \sum_{j = 0}^{k^{2/(m+1)}} \varphi ( k^{2/(m+1)} t - j ) \equiv 1, \,\,\, 0 \leqslant t \leqslant \epsilon. \] Set $U_j = \varphi ( k^{2/(m+1)} t - j ) \chi \psi v_{lk}$. We have \[ (D_t + P_k) U_j = W_j' + W_j'', \] where \[ W_j' = i k^{2/(m+1)} \varphi' ( k^{2/(m+1)} t - j ) \chi \psi v_{lk}, \] and \[ W_j'' = \varphi ( k^{2/(m+1)} t - j ) ( \chi'' + 2 \chi' \partial_x) \psi v_{lk}. \] The important thing to observe is that $W_j''$ is supported away from $x = 0$, so the standard $1/2$ derivative local smoothing estimates hold (see Theorem \ref{T:lsm}). 
Let $\chi_1 \in {\mathcal C}^\infty_c$ satisfy $\chi_1 \equiv 1$ on $\mathrm{supp}\, \chi$, and $\chi_2 \in {\mathcal C}^\infty_c$ satisfy $\chi_2 \equiv 1$ on $\mathrm{supp}\, \chi'$, $\mathrm{supp}\, \chi_2$ away from $x = 0$. We have $\chi_1 U_j = U_j$, $\chi_1 W_j' = W_j'$, and $\chi_2 W_j'' = W_j''$. Using the Duhamel formula, set \[ U_j' = \chi_1 \int_{(j-1) \epsilon k^{-2/(m+1)} }^t e^{-i(t-s) P_k } \chi_1 W_j'(s) ds, \] and \[ U_j'' = \chi_1 \int_{(j-1) \epsilon k^{-2/(m+1)} }^t e^{-i(t-s) P_k } \chi_2 W_j''(s) ds, \] so that $U_j' + U_j'' = U_j$. By the Christ-Kiselev lemma \cite{CK}, it suffices to consider \[ \overline{U}_j' = \chi_1 \int_{(j-1) \epsilon k^{-2/(m+1)} }^{(j+1) \epsilon k^{-2/(m+1)} } e^{-i(t-s) P_k } \chi_1 W_j'(s) ds, \] and similarly for $W_j''$. Let $I = [(j-1) \epsilon k^{-2/(m+1)} , (j+1) \epsilon k^{-2/(m+1)} ]$ be the time interval in the integral above. We apply the Strichartz estimates to get \[ \| \overline{U}_j' \|_{L^{2n}_I L^{2^\star}} \leqslant C k^{\beta} \left\| \int_{(j-1) \epsilon k^{-2/(m+1)} }^{(j+1) \epsilon k^{-2/(m+1)} } e^{is P_k } \chi_1 W_j'(s) ds \right\|_{L^2}, \] and similarly for $W_j''$. The dual estimates to Theorem \ref{T:lsm} then yield \[ \| \overline{U}_j' \|_{L^{2n}_I L^{2^\star}} \leqslant C k^{\beta-1/(m+1)} \| W_j' \|_{L^2 L^2}, \] and (again because $\chi_2$ is supported away from $x = 0$) \[ \| \overline{U}_j'' \|_{L^{2n}_I L^{2^\star}} \leqslant C k^{\beta-1/2} \| W_j'' \|_{L^2 L^2}. \] By the Christ-Kiselev lemma \cite{CK}, the same estimates hold for $U_j'$ and $U_j''$. 
Squaring and summing in $j$, using $\ell^2 \subset \ell^{2n}$, yields \begin{align*} \| v_{lk} \|_{L^{2n}_{\epsilon} L^{2^\star}}^2 & \leqslant C \sum_{j = 0}^{k^{2/(m+1)}} ( \| U_j' \|_{L^{2n}_{\epsilon} L^{2^\star}}^2 + \| U_j'' \|_{L^{2n}_{\epsilon} L^{2^\star}}^2) \\ & \leqslant C \sum_{j = 0}^{k^{2/(m+1)}} (k^{2 \beta - 2/(m+1)} \| W_j' \|_{L^2 L^2}^2 + k^{2\beta-1} \| W_j'' \|_{L^2 L^2}^2 ) \\ & \leqslant C ( k^{2 \beta + 2/(m+1)} \| \chi v_{lk} \|_{L^2_{\epsilon} L^2}^2 + k^{2\beta-1} \| \chi_2 \left\langle D_x \right\rangle v_{lk} \|_{L^2_\epsilon L^2}^2 ) \\ & \leqslant C k^{2\beta} \| v_{lk}^0 \|_{L^2}^2. \end{align*} This proves Proposition \ref{P:mode-str}. \section{Quasimodes} In this section we construct quasimodes for the model operator near $(0,0)$ in the transversal phase space, and then use these quasimodes to show the Strichartz estimates are near-sharp, in the sense described in Corollary \ref{C:C1a}. Consider the model operator \[ P = -h^2 \partial_x^2 - m^{-1} x^{2m} \] locally near $x = 0$. We will construct quasimodes which are localized very close to $x = 0$, so this should be a decent approximation. It is well-known (see \cite{ChWu-lsm}) that the operator \[ \tilde{Q} = -\partial_x^2 + x^{2m} \] has a unique ground state $\tilde{Q} v_0 = \lambda_0 v_0$, with $\lambda_0>0$, and $v_0$ is a Schwartz class function. Then, by rescaling, we find the function $v(x) = v_0 ( x h^{-1/(m+1)})$ is an un-normalized eigenfunction for the equation \[ (-h^2 \partial_x^2 + x^{2m} ) v = h^{2m/(m+1)} \lambda_0 v. \] Complex scaling then suggests there are resonances with imaginary part $c_0 h^{2m/(m+1)}$. We use a complex WKB approximation to get an explicit formula for a localized approximate resonant state; however, as we shall see, it is not a very good approximation. Nevertheless, since we will eventually be averaging in time, it is sufficient for our applications. Let $E_0 = (\alpha + i \mu)h^{2m/(m+1)} $, $\alpha, \mu>0$ independent of $h$. 
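The ground state of $\tilde{Q}$ and the rescaling just described are easy to confirm numerically. The sketch below (a finite-difference discretization with ad hoc grid parameters) computes the lowest eigenvalue $\lambda_0$ of $-\partial_x^2 + x^{2m}$ and checks that the lowest eigenvalue of the semiclassical operator $-h^2\partial_x^2 + x^{2m}$ is $h^{2m/(m+1)}\lambda_0$.

```python
import numpy as np

def ground_energy(m, hsq=1.0, L=8.0, N=800):
    """Lowest eigenvalue of -hsq * d^2/dx^2 + x^(2m) on [-L, L] (Dirichlet FD)."""
    x = np.linspace(-L, L, N)
    dx = x[1] - x[0]
    lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / dx**2
    H = -hsq * lap + np.diag(x**(2 * m))
    return np.linalg.eigvalsh(H)[0]

m, h = 2, 0.05
lam0 = ground_energy(m)          # ground state energy of -d^2/dx^2 + x^{2m}
assert lam0 > 0

# eigenvalue of -h^2 d^2/dx^2 + x^{2m} should be h^{2m/(m+1)} * lam0,
# with the eigenfunction concentrated on scale h^{1/(m+1)}
lam_h = ground_energy(m, hsq=h**2, L=2.0)
assert abs(lam_h / (h**(2 * m / (m + 1)) * lam0) - 1) < 1e-3
```

The rescaled domain `L=2.0` in the second call is an ad hoc choice, large compared to the concentration scale $h^{1/(m+1)}$, so the Dirichlet truncation error is negligible.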
Let the phase function \[ \varphi(x) = \int_0^x (E + m^{-1} y^{2m})^{1/2} dy, \] where the branch of the square root is chosen to have positive imaginary part. Let \[ u(x) = (\varphi')^{-1/2} e^{i \varphi / h}, \] so that \[ (hD)^2 u = (\varphi')^2 u + f u, \] where \begin{align*} f & = (\varphi')^{1/2} (hD)^2 (\varphi')^{-1/2} \\ & = -h^2 \left( \frac{3}{4} (\varphi')^{-2} (\varphi'')^2 - \frac{1}{2} (\varphi')^{-1} \varphi ''' \right). \end{align*} \begin{lemma} The phase function $\varphi$ satisfies the following properties: \begin{description} \item[(i)] There exists $C>0$ independent of $h$ such that \[ | \,\mathrm{Im}\, \varphi | \leqslant C\begin{cases} h(1 + \log(x/h^{1/2} )), \quad m = 1, \\ h, \quad m \geqslant 2. \end{cases} \] In particular, if $| x | \leqslant C h^{1/(m+1)}$, $| \,\mathrm{Im}\, \varphi| \leqslant C'$ for some $C'>0$ independent of $h$. \item[(ii)] There exists $C>0$ independent of $h$ such that \[ C^{-1} \sqrt{ h^{2m/(m+1)} + x^{2m} } \leqslant | \varphi'(x) | \leqslant C \sqrt{ h^{2m/(m+1)} + x^{2m} } \] \item[(iii)] \[ \begin{cases} \varphi' = (E + m^{-1} x^{2m})^{1/2}, \\ \varphi'' = x^{2m-1} (\varphi')^{-1}, \\ \varphi''' = \left( (1 - m^{-1} ) x^{4m-2} + E (2m-1) x^{2m-2} \right) ( \varphi')^{-3}, \end{cases} \] In particular, \[ f = -h^2 x^{2m-2} \left( \left( \frac{1}{4} + \frac{1}{2 m} \right) x^{2m} - \left( m - \frac{1}{2} \right) E \right) (\varphi' )^{-4}. \] \end{description} \end{lemma} \begin{proof} For (i) we write $\varphi' = s + it$ for $s$ and $t$ real valued, and then \[ E + m^{-1} x^{2m} = s^2 - t^2 + 2 i st. \] Hence \[ s^2 \geqslant s^2 - t^2 = \alpha h^{2m/(m+1)} + m^{-1}x^{2m} , \] so that \[ t = \frac{\mu h^{2m/(m+1)}}{2s} \leqslant \frac{\mu h^{2m/(m+1)}}{2\sqrt{h^{2m/(m+1)} \alpha + m^{-1} x^{2m}}}. 
\] Then \begin{align*} | \,\mathrm{Im}\, \varphi (x) | & \leqslant \int_0^{|x|} \varphi'(y) dy \\ & \leqslant C \int_0^{h^{1/(m+1)}} h^{m/(m+1)} dy + C \int_{h^{1/(m+1)}}^x h^{2m/(m+1)} y^{-m} dy \\ & = \begin{cases} {\mathcal O} ( h(1 + \log (x/h^{1/2}))), \quad m = 1, \\ {\mathcal O}(h), \quad m >1. \end{cases} \end{align*} Parts (ii) and (iii) are simple computations. \end{proof} In light of this lemma, $| u (x) |$ is comparable to $| \varphi' |^{-1/2}$, provided $| x | \leqslant C h^{1/2}$ when $m=1$. We are only interested in sharply localized quasimodes and in the case $m \geqslant 2$, so let $\gamma = h^{1/(m+1)}$, choose $\chi(s) \in {\mathcal C}^\infty_c( {\mathbb R})$ such that $\chi \equiv 1$ for $| s | \leqslant 1$ and $\mathrm{supp}\, \chi \subset [-2,2]$. Let \[ \tilde{u}(x) = \chi(x/\gamma) u(x), \] and compute for $q \geqslant 2$: \begin{align*} \| \tilde{u} \|_{L^q}^q & = \int_{| x | \leqslant 2 \gamma} \chi(x/\gamma)^q | u |^q dx \\ & \sim \int_{|x| \leqslant 2\gamma} \chi(x/\gamma)^q | \varphi' |^{-q/2} dx \\ & \sim h^{1/(m+1)} h^{ -qm/2(m+1)} \\ & \sim h^{(2-qm)/2(1+m)}. \end{align*} In particular, \[ \| \tilde{u} \|_{L^2} \sim h^{(1-m)/2(1+m)}, \] and so \[ \| \tilde{u} \|_{L^q} \sim h^{(2/q -1)/2(m+1)} \| \tilde{u} \|_{L^2}. \] Further, $\tilde{u}$ satisfies the following equation: \begin{align*} (hD)^2 \tilde{u} & = \chi(x/\gamma) (hD)^2 u + [(hD)^2, \chi(x/\gamma)] u \\ & = (\varphi')^2 \tilde{u} + f \tilde{u} + [(hD)^2, \chi(x/\gamma)] u \\ & = (\varphi')^2 \tilde{u} + R, \end{align*} where \[ R = f \tilde{u} + [(hD)^2, \chi(x/\gamma)] u. \] \begin{lemma} The remainder $R$ satisfies \begin{equation} \label{E:R-remainder} \| R \|_{L^2} = {\mathcal O} (h^{2m/(m+1)}) \| \tilde{u} \|_{L^2}. 
\end{equation} \end{lemma} \begin{proof} We have already computed the function $f$, which is readily seen to satisfy \[ \| f \|_{L^\infty(\mathrm{supp}\, (\tilde{u} ))} = {\mathcal O}(h^{2m/(m+1)}), \] since $\mathrm{supp}\, (\tilde{u}) \subset \{ | x | \leqslant 2 h^{1/(m+1)} \}$. On the other hand, since $\| \tilde{u} \|_{L^2} \sim h^{(1-m)/2(1+m)}$, we need only show that \[ \| [(hD)^2, \chi(x/\gamma)] u\|_{L^2} \leqslant C h^{(3m+1)/2(m+1)}. \] We compute: \begin{align*} [(hD)^2, \chi(x/\gamma)] u & = -h^2 \gamma^{-2} \chi'' u + 2\frac{h}{i} \gamma^{-1} \chi' hD u \\ & = -h^2 \gamma^{-2} \chi'' u + 2\frac{h}{i} \gamma^{-1} \chi' \left(-\frac{h}{2i} \frac{\varphi''}{\varphi'} + \varphi' \right) u \\ & = -h^2 \gamma^{-2} \chi'' u + 2\frac{h}{i} \gamma^{-1} \chi' \left( -\frac{h}{2i} \frac{ x^{2m-1}}{(\varphi')^2} + \varphi' \right) u. \end{align*} The first term is estimated: \[ \| h^2 \gamma^{-2} \chi'' u \|_{L^2} = {\mathcal O}(h^{2m/(m+1)}) \| u \|_{L^2(\mathrm{supp}\, ( \tilde{u}))} ={\mathcal O}(h^{(3m+1)/2(m+1)}). \] Similarly, the remaining two terms are estimated: \begin{align*} \Bigg\| 2\frac{h}{i} & \gamma^{-1} \chi' \left( -\frac{h}{2i} \frac{ x^{2m-1}}{(\varphi')^2} + \varphi' \right) u \Bigg\|_{L^2} \\ & = {\mathcal O}(h^{m/(m+1)} h^1 h^{(2m-1)/(m+1)} h^{-2m/(m+1)}) \| u \|_{L^2(\mathrm{supp}\, ( \tilde{u}))} \\ & \quad + {\mathcal O}(h^{m/(m+1)} h^{2m/(m+1)} ) \| u \|_{L^2(\mathrm{supp}\, ( \tilde{u}))} \\ & = {\mathcal O}(h^{(3m+1)/2(m+1)}). \end{align*} \end{proof} \subsection{Saturation of Strichartz estimates} \label{SS:saturation} In this subsection, we study Strichartz estimates for the separated Schr\"odinger equation given the specific choice of initial conditions in the form of quasimodes. 
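Before turning to the saturation argument, we note that the conjugation identity $(hD)^2 u = (\varphi')^2 u + fu$ and the derivative formulas in part (iii) of the lemma of the previous subsection are mechanical computations that can be verified symbolically. The sketch below uses sympy; the first check is generic in the phase $\varphi$, while the second uses the explicit $\varphi' = (E + m^{-1}x^{2m})^{1/2}$ with the ad hoc choice $m = 2$.

```python
import sympy as sp

x, h, E = sp.symbols('x h E', positive=True)

# (1) generic conjugation identity: for u = (phi')^{-1/2} e^{i phi/h},
#     (hD)^2 u = (phi')^2 u + f u  with
#     f = -h^2 ( (3/4)(phi')^{-2}(phi'')^2 - (1/2)(phi')^{-1} phi''' )
phi = sp.Function('phi')(x)
u = sp.diff(phi, x)**sp.Rational(-1, 2) * sp.exp(sp.I * phi / h)
f = -h**2 * (sp.Rational(3, 4) * sp.diff(phi, x)**(-2) * sp.diff(phi, x, 2)**2
             - sp.Rational(1, 2) * sp.diff(phi, x)**(-1) * sp.diff(phi, x, 3))
lhs = -h**2 * sp.diff(u, x, 2)        # (hD)^2 = -h^2 d^2/dx^2
rhs = sp.diff(phi, x)**2 * u + f * u
assert sp.simplify(sp.expand((lhs - rhs) / u)) == 0

# (2) part (iii) with the explicit phase; dphi below is phi', so
#     diff(dphi, x) is phi'' and diff(dphi, x, 2) is phi'''
m = 2
dphi = sp.sqrt(E + x**(2 * m) / m)
assert sp.simplify(sp.diff(dphi, x) - x**(2 * m - 1) / dphi) == 0
d3 = ((1 - sp.Rational(1, m)) * x**(4 * m - 2)
      + E * (2 * m - 1) * x**(2 * m - 2)) / dphi**3
assert sp.simplify(sp.diff(dphi, x, 2) - d3) == 0
```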
Now it is well known that for any $k$, there exists a spherical harmonic $v_k$ of order $k$ which saturates Sogge's bounds (Theorem \ref{T:Sogge}): \[ -\Delta_{{\mathbb S}^d} v_k = k(k+d -1) v_k, \quad \| v_k \|_{L^{2(d+1)/(d-1)}} \sim k^{(d-1)/2(d+1)} \| v_k \|_{L^2}. \] Let $\lambda_k = k (k + n -2)$, $k \gg 1$, $h = \lambda_k^{-1/2}$, let $\tilde{u}$ be the associated transversal quasimode constructed in the previous section, and let \[ \varphi_0(x, \theta) = v_k(\theta) \tilde{u}(x). \] Let $\varphi(t, x, \theta) = e^{it \tau} \varphi_0$ for some $\tau \in {\mathbb C}$ to be determined. Since the support of $\tilde{u}$ is very small, contained in $\{ | x | \leqslant h^{1/(m+1)} / \kappa \}$, we have \[ A^{-2} = (1 + x^{2m} )^{-1/m} = 1 -\frac{1}{m} x^{2m} + {\mathcal O}(h^{4m/(m+1)}) \] on $\mathrm{supp}\, \tilde{u}$. Then \begin{align*} (D_t + \widetilde{\Delta}) \varphi & = P_k \varphi \\ & = ( \tau - D_x^2 - A^{-2} \lambda_k - V_1(x) ) \varphi \\ & = \lambda_k e^{it \tau} v_k \left[\left( \tau \lambda_k^{-1} - (\lambda^{-1}_k D_x^2 + 1 - \frac{1}{m} x^{2m} ) \right) \tilde{u} + {\mathcal O}( k^{-2}) \tilde{u} \right] \\ & = \lambda_k e^{it \tau} v_k \left[ \left( \tau \lambda_k^{-1} - 1 - E_0 \right) \tilde{u} + R + {\mathcal O}( k^{-2}) \tilde{u} \right], \end{align*} where $R$ satisfies the remainder estimate \eqref{E:R-remainder}. Set \[ \tau = \lambda_k (1+ E_0) = \lambda_k (1 + \alpha k^{-2m/(m+1)}) + i \mu k^{2/(m+1)}(1+{\mathcal O}(k^{-1})), \,\,\, \alpha, \mu >0 \] so that we have \[ \begin{cases} (D_t + \widetilde{\Delta}) \varphi = \tilde{R}, \\ \varphi(0, x, \theta) = \varphi_0 \end{cases} \] with \begin{equation} \label{E:tR} \tilde{R} = \lambda_k e^{i t \tau } v_k (R(x, k) + {\mathcal O}(k^{-2}) \tilde{u} ). 
\end{equation} We compute the endpoint Strichartz estimate on an arbitrary time interval $[0,T]$, with $p=2$, $q = 2^\star = 2n/(n-2)$ for $n \geqslant 3$: \begin{align} \| \varphi \|_{L^2([0,T])L^q}^2 & = \int_0^T \| e^{it \tau} \varphi_0 \|^2_{L^q} dt \notag \\ & = \int_0^T e^{-2t\,\mathrm{Im}\, \tau} \| \varphi_0 \|_{L^q}^2 dt \notag \\ & = \frac{1 - e^{-2 T\,\mathrm{Im}\, \tau}}{2 \,\mathrm{Im}\, \tau} \| \varphi_0 \|_{L^q}^2 \notag \\ & \sim \frac{1 - e^{-2 T\,\mathrm{Im}\, \tau}}{ 2 \,\mathrm{Im}\, \tau } k^{(1-2/q)/(m+1)} k^{(n-2)/n} \| \tilde{u} \|_{L^2({\mathbb R})}^2 \| v_k \|_{L^2({\mathbb S}^{n-1})}^2 \notag \\ & \sim k^{2\eta(m,n)} \| \varphi_0 \|_{L^2}^2 \notag \\ & \sim \| (-\Delta_{{\mathbb S}^{n-1}})^{\eta(m,n)} \varphi_0 \|_{L^2}^2, \label{E:sharp-str} \end{align} where \[ \eta(m,n) = \frac{1}{2(m+1)} \left( m\left( 1 - \frac{2}{n} \right) -1 \right). \] Now let $L(t)$ be the unitary Schr\"odinger propagator: \[ \begin{cases} (D_t + \widetilde{\Delta}) L = 0, \\ L(0) = \,\mathrm{id}\,, \end{cases} \] and write using Duhamel's formula: \[ \varphi(t) = L(t) \varphi_0 + i \int_0^t L(t) L^*(s) \tilde{R}(s) ds =: \varphi_{\text{h}} + \varphi_{\text{ih}}, \] where $\varphi_{\text{h}}$ and $\varphi_{\text{ih}}$ are the homogeneous and inhomogeneous parts respectively. We want a lower bound on the homogeneous Strichartz estimates, for which we need an upper bound on the inhomogeneous Strichartz estimates. Let us now assume for the purposes of contradiction that a better Strichartz estimate than that in Corollary \ref{C:C1a} holds for all $\beta >0$. That is, we assume for each $\beta>0$, there exists $C_\beta$ such that \[ \| L(t) u_0 \|_{L^2 ([0,T]) L^{2^\star}} \leqslant C_\beta \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^{r+ \beta} u_0 \|_{L^2}, \] for some $r < \eta(m,n)/2$. 
In dimension $n = 2$, we take as usual $p >2$, $2 \leqslant q < \infty$, and we immediately arrive at a contradiction to the scale-invariant case. For dimension $n \geqslant 3$, we take $\beta>0$ sufficiently small that $r + \beta < \eta(m,n)/2$, and we then have the complementary inhomogeneous Strichartz estimate: if $v$ solves \[ \begin{cases} (D_t + \widetilde{\Delta}) v = F, \\ v(0) = 0, \end{cases} \] then \[ \| v \|_{L^2 ([0,T]) L^{2^\star}} \leqslant C \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^{r + \beta} F \|_{L^1([0,T]) L^2 }. \] For the inhomogeneous part corresponding to our quasimode initial data, we have $F = \tilde{R}$, with $\tilde{R}$ computed in \eqref{E:tR}. Then \begin{align*} \| & \varphi_{\text{ih}} \|_{L^2([0,T]) L^{2^\star}} \\ & \leqslant C T^{1/2} \| \tilde{R} \|_{L^2([0,T]) L^2} \\ & \leqslant C k^2 T^{1/2} \left( \int_0^T e^{-2 t \,\mathrm{Im}\, \tau } \|\left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^{r + \beta} v_k (R(x, k) + {\mathcal O}(k^{-2}) \tilde{u} ) \|_{L^2}^2 dt \right)^{1/2} \\ & \leqslant C k^2 k^{-2m/(m+1)} T^{1/2} \left( \frac{1-e^{-2T \,\mathrm{Im}\, \tau}}{2\,\mathrm{Im}\, \tau} \right)^{1/2} \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^{r + \beta} \varphi_0 \|_{L^2}. \end{align*} Recalling that $\,\mathrm{Im}\, \tau \sim k^{2/(m+1)}$, if $T = \epsilon^2 k^{-2/(m+1)}$, we have \begin{equation} \label{E:ih-sharp-str} \| \varphi_{\text{ih}} \|_{L^2([0,T]) L^{2^\star}} \leqslant C \epsilon \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^{r + \beta} \varphi_0 \|_{L^2} . \end{equation} Now, if $\epsilon >0$ is sufficiently small, but independent of $k$, we have \[ 1 \geqslant 1-e^{-2T \,\mathrm{Im}\, \tau} \geqslant c_0, \] for some $c_0>0$, so that for this choice of $T$, we still have the estimate \eqref{E:sharp-str}. 
Combining \eqref{E:sharp-str} with \eqref{E:ih-sharp-str} we have \begin{align*} C \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^{r + \beta} \varphi_0 \|_{L^2} & \geqslant \| L(t) \varphi_0 \|_{L^2([0,T]) L^{2^\star}} \\ & \geqslant \| \varphi(t) \|_{L^2([0,T]) L^{2^\star}} - \| \varphi_{\text{ih}}\|_{L^2([0,T]) L^{2^\star}} \\ & \geqslant C^{-1} \| \left\langle -\Delta_{{\mathbb S}^{n-1}} \right\rangle^{\eta(m,n)/2} \varphi_0 \|_{L^2} , \end{align*} for some constant $C>0$ independent of $k$. But this is a contradiction, since $r + \beta < \eta(m,n)/2$. This proves the near-sharpness of Corollary \ref{C:C1a}. \end{document}
\begin{document} \title[\bf Partial regularity for fractional harmonic maps]{Partial regularity for\\ fractional harmonic maps into spheres} \author{Vincent Millot} \address{LAMA, Univ Paris Est Creteil, Univ Gustave Eiffel, UPEM, CNRS, F-94010, Cr\'eteil, France} \email{[email protected]} \author{Marc Pegon} \address{Universit\'e de Paris, Laboratoire Jacques-Louis Lions (LJLL), F-75013 Paris, France} \email{[email protected]} \author{Armin Schikorra} \address{University of Pittsburgh, Department of Mathematics, 301 Thackeray Hall, Pittsburgh, PA15260, USA} \email{[email protected]} \begin{abstract} This article addresses the regularity issue for stationary or minimizing fractional harmonic maps into spheres of order $s\in(0,1)$ in arbitrary dimensions. It is shown that such fractional harmonic maps are $C^\infty$ away from a small closed singular set. The Hausdorff dimension of the singular set is also estimated in terms of $s\in(0,1)$ and the stationarity/minimality assumption. \end{abstract} \maketitle \tableofcontents \section{Introduction} The theory of fractional harmonic maps into a manifold is quite recent. It was initiated some years ago by F. Da Lio and T. Rivi\`ere in \cite{DaLiRi1,DaLiRi2}. In those first articles, they introduced and studied $1/2$-harmonic maps from the real line into a smooth and compact closed submanifold $\mathcal{N}\subseteq\mathbb{R}^d$. 
A map $u:\mathbb{R}\to\mathcal{N}$ is said to be a $1/2$-harmonic map into $\mathcal{N}$ if it is a critical point of the $1/2$-Dirichlet energy $$\mathcal{E}_{\frac{1}{2}}(u,\mathbb{R}):=\frac{1}{2}\int_{\mathbb{R}}\big|(-\Delta)^{\frac{1}{4}}u\big|^2\,{\rm d} x=\frac{1}{4\pi}\iint_{\mathbb{R}\times\mathbb{R}}\frac{|u(x)-u(y)|^2}{|x-y|^2}\,{\rm d} x{\rm d} y \,,$$ among all maps with values into $\mathcal{N}$, or equivalently, if it satisfies the Euler-Lagrange equation \begin{equation}\label{eq1/2harmintro1} (-\Delta)^{\frac{1}{2}}u\perp{\rm Tan}(u,\mathcal{N}) \end{equation} in the distributional sense. Here $(-\Delta)^s$ denotes the integro-differential (multiplier) operator associated to the Fourier symbol $(2\pi|\xi|)^{2s}$, $s\in(0,1)$. The notion of $1/2$-harmonic map into $\mathcal{N}$ appears in several geometrical problems, such as free boundary minimal surfaces or Steklov eigenvalue problems, see \cite{DaLi2} and references therein. The special case $\mathcal{N}=\mathbb{S}^{d-1}$ is important for both geometrical and analytical issues. From the analytical point of view, it enlightens the internal structure of equation \eqref{eq1/2harmintro1}. Indeed, the Lagrange multiplier associated to the constraint to be $\mathbb{S}^{d-1}$-valued takes a very simple form, and \eqref{eq1/2harmintro1} reduces to the equation \begin{equation}\label{eq1/2harmsphereintro} (-\Delta)^{\frac{1}{2}}u(x)=\left(\frac{1}{2\pi}\int_{\mathbb{R}}\frac{|u(x)-u(y)|^2}{|x-y|^2}\,{\rm d} y\right)u(x)\,, \end{equation} which is in clear analogy with the equation for usual harmonic maps from a $2$d-domain into the sphere. In particular, there is a similar analytical issue concerning regularity of solutions since the right hand side of \eqref{eq1/2harmsphereintro} has {\sl a priori} no better integrability than $L^1$, and elliptic linear theory does not apply. In their pioneering work \cite{DaLiRi1}, F. Da Lio and T. 
Rivi\`ere proved complete smoothness of $1/2$-harmonic maps through a reformulation of equation \eqref{eq1/2harmsphereintro} in terms of algebraic quantities, the ``3-terms commutators", exhibiting some compensation phenomena. In \cite{DaLiRi2} (dealing with arbitrary targets), smoothness of $1/2$-harmonic maps follows from a more general compensation result for nonlocal systems with antisymmetric potential, in the spirit of~\cite{Riv2}. In the same stream of ideas, K. Mazowiecka and the third author obtained in \cite{MazSchi} a new proof of the regularity of $1/2$-harmonic maps, very close to the original argument of F.~H\'elein~\cite{Hel1} to prove smoothness of harmonic maps from surfaces into spheres (see also~\cite{Hel2}). Once again, the key point in \cite{MazSchi} is to rewrite the right hand side of \eqref{eq1/2harmsphereintro} to discover a suitable ``fractional div-curl structure". From the new form of the equation, they deduce that $(-\Delta)^{\frac{1}{2}}u$ belongs (essentially) to the Hardy space~$\mathcal{H}^1$ by applying their main result \cite[Theorem 2.1]{MazSchi}, a generalization to the fractional setting of the div-curl estimate of R. Coifman, P.L. Lions, Y.~Meyer, and S.~Semmes~\cite{CLMS}. Continuity of solutions is then a consequence of Calder\'on-Zygmund theory, from which it is possible to deduce $C^\infty$-regularity. Several generalizations of the regularity result of \cite{DaLiRi1,DaLiRi2} have been obtained, e.g. for critical points of higher order or/and $p$-power type energies (still in the corresponding critical dimension), see \cite{DaLi3,DaLiSchi1,DaLiSchi2,MazSchi,Schi1,Schi2,Schi3}. The regularity theory for $1/2$-harmonic maps into a manifold in higher dimensions has been addressed in \cite{Moser} and \cite{MS} (see also \cite{MilPeg}). In higher dimensions, the theory provides partial regularity (i.e. regularity away from a ``small'' singular set) for stationary $1/2$-harmonic maps (i.e. 
critical points for both {\sl inner and outer variations}), and energy minimizing $1/2$-harmonic maps. It can be seen as the analogue of the partial regularity theory for harmonic maps by R. Schoen and K. Uhlenbeck \cite{SchUhl1,SchUhl2} in the minimizing case, and by L.C. Evans \cite{Evans} and F. Bethuel \cite{Bet} in the stationary case. In \cite{MS}, the argument consists in considering the harmonic extension to the upper half space in one more dimension provided by the convolution with the Poisson kernel. The extended map is then harmonic and satisfies a nonlinear Neumann boundary condition which fits within the (previously known) theory of harmonic maps with partially free boundary, see \cite{Duz1,Duz2,GJ,HL,Schev}. \vskip3pt The purpose of this article is to extend the regularity theory for fractional harmonic maps in arbitrary dimensions to the context of {\sl $s$-harmonic maps}, i.e., when the operator $(-\Delta)^{\frac{1}{2}}$ is replaced by $(-\Delta)^{s}$ with arbitrary power $s\in(0,1)$. As a first attempt in this direction, we only consider the case where the target manifold $\mathcal{N}$ is the standard unit sphere $\mathbb{S}^{d-1}$ of $\mathbb{R}^d$, $d\geqslant 2$. We now describe the functional setting. Given $s\in(0,1)$ and $\Omega\subseteq\mathbb{R}^n$ a bounded open set, the fractional $s$-Dirichlet energy in $\Omega$ of a measurable map $u:\mathbb{R}^n\to\mathbb{R}^d$ is defined by \begin{equation}\label{defsdirenerg} \mathcal{E}_s(u,\Omega):=\frac{\gamma_{n,s}}{4}\iint_{(\mathbb{R}^n\times\mathbb{R}^n)\setminus(\Omega^c\times\Omega^c)}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,, \end{equation} where $\Omega^c$ denotes the complement of $\Omega$, i.e. $\Omega^c:=\mathbb{R}^n\setminus\Omega$. 
The normalisation constant $\gamma_{n,s}>0$, whose precise value is given by \eqref{defHsandgammans}, is chosen in such a way that $$ \mathcal{E}_s(u,\Omega)=\frac{1}{2}\int_{\mathbb{R}^n}\big|(-\Delta)^{\frac{s}{2}}u\big|^2\,{\rm d} x\qquad\forall u\in \mathscr{D}(\Omega;\mathbb{R}^d)\,.$$ Following \cite{MS,MSK}, we denote by $\widehat H^s(\Omega;\mathbb{R}^d)$ the Hilbert space made of $L^2_{\rm loc}(\mathbb{R}^n)$-maps $u$ such that $\mathcal{E}_s(u,\Omega)<\infty$, and we set $$\widehat H^s(\Omega;\mathbb{S}^{d-1}):= \Big\{u\in \widehat H^s(\Omega;\mathbb{R}^d) : u(x)\in\mathbb{S}^{d-1}\text{ for a.e. }x\in\mathbb{R}^n\Big\}\,.$$ We then define weakly $s$-harmonic maps in $\Omega$ as critical points of $\mathcal{E}_s(\cdot,\Omega)$ in the (nonlinear) space $\widehat H^s(\Omega;\mathbb{S}^{d-1})$. More precisely, we say that a map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is a {\sl weakly $s$-harmonic map} in $\Omega$ into $\mathbb{S}^{d-1}$ if $$\left[\frac{{\rm d}}{{\rm d} t}\mathcal{E}_s\Big(\frac{u+t\varphi}{|u+t\varphi|},\Omega\Big)\right]_{t=0}=0\qquad\forall \varphi\in\mathscr{D}(\Omega,\mathbb{R}^d)\,.$$ Exactly as for \eqref{eq1/2harmsphereintro}, the Euler-Lagrange equation reads \begin{equation}\label{sharmmapeqintro} (-\Delta)^su(x)=\left(\frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y\right)u(x)\quad\text{in $\mathscr{D}^\prime(\Omega)$}\,, \end{equation} where $(-\Delta)^s$ is the integro-differential operator given by $$(-\Delta)^s u(x):={\rm p.v.}\left(\gamma_{n,s}\int_{\mathbb{R}^n}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,{\rm d} y\right)\,, $$ and the notation ${\rm p.v.}$ means that the integral is taken in the Cauchy principal value sense. We refer to Sections \ref{prelim} and \ref{ELandConsLaw} for the precise weak (variational) formulation of equation \eqref{sharmmapeqintro}.
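Let us record why the Lagrange multiplier in \eqref{sharmmapeqintro} takes this explicit form; the following computation is only a sketch, assuming $u$ is regular enough for the pointwise manipulations to be licit.

```latex
% Since |u(x)|=1 a.e., one has, for a.e. x and y,
%    u(x)\cdot\big(u(x)-u(y)\big)=1-u(x)\cdot u(y)=\tfrac{1}{2}|u(x)-u(y)|^2\,.
% Taking the scalar product of the pointwise formula for (-\Delta)^s u with u(x)
% therefore yields
u(x)\cdot(-\Delta)^s u(x)
   =\gamma_{n,s}\,{\rm p.v.}\!\int_{\mathbb{R}^n}
      \frac{u(x)\cdot\big(u(x)-u(y)\big)}{|x-y|^{n+2s}}\,{\rm d} y
   =\frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}
      \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y\,,
% so that, once criticality in the tangential directions forces (-\Delta)^s u
% to be parallel to u, the scalar factor in \eqref{sharmmapeqintro} is determined.
```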
\vskip3pt Once again, the right hand side in \eqref{sharmmapeqintro} has a priori no better integrability than $L^1$, and linear elliptic theory does not apply to determine the regularity of solutions. However, in the case $n\leqslant 2s$, that is $n=1$ and $s\in[1/2,1)$, the equation is {\sl subcritical}. For $n=1$ and $s=1/2$, this is the result of \cite{DaLiRi1,DaLiRi2}. For $n=1$ and $s\in(1/2,1)$, solutions are at least H\"older continuous by the embedding $H^s\hookrightarrow C^{0,s-1/2}$, and this is enough to reach $C^\infty$-smoothness by applying Schauder type estimates for the fractional Laplacian. \begin{theorem}\label{mainthm1} Assume that $n=1$ and $s\in[1/2,1)$. If $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is a weakly $s$-harmonic map in $\Omega$, then $u\in C^\infty(\Omega)$. \end{theorem} On the other hand, the case $n>2s$ is {\sl supercritical}, and by analogy with (usual) weakly harmonic maps in dimension at least $3$, we do not expect any regularity without further assumptions. Indeed, in his groundbreaking article \cite{Riv1}, T. Rivi\`ere has constructed a weakly harmonic map from the $3$-dimensional ball into $\mathbb{S}^2$ which is everywhere discontinuous. A natural extra assumption on a weakly $s$-harmonic map is {\sl stationarity}, that is $$\left[\frac{{\rm d}}{{\rm d} t}\mathcal{E}_s\big(u\circ\phi_{t},\Omega\big)\right]_{t=0}=0\qquad\forall X\in C^1_c(\Omega;\mathbb{R}^n)\,, $$ where $\{\phi_t\}_{t\in\mathbb{R}}$ denotes the integral flow of the vector field $X$. According to the standard terminology in calculus of variations, a weakly $s$-harmonic map in $\Omega$ is a critical point of $\mathcal{E}_s(\cdot,\Omega)$ with respect to outer variations (i.e. in the target), a stationary map is a critical point of $\mathcal{E}_s(\cdot,\Omega)$ with respect to inner variations (i.e.
in the domain), and thus a {\sl stationary weakly $s$-harmonic map} in $\Omega$ is a critical point of $\mathcal{E}_s(\cdot,\Omega)$ with respect to both inner and outer variations. Our second main result provides partial regularity for such maps. In its statement, the {\sl singular set} of $u$ in $\Omega$ is defined as $${\rm sing}(u):=\Omega\setminus\big\{x\in\Omega: \text{$u$ is continuous in a neighborhood of $x$}\big\}\,, $$ ${\rm dim}_{\mathcal{H}}$ denotes the Hausdorff dimension, and $\mathcal{H}^{n-1}$ is the $(n-1)$-dimensional Hausdorff measure. \begin{theorem}\label{mainthm2} Assume that $s\in(0,1)$ and $n> 2s$. If $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is a stationary weakly $s$-harmonic map in $\Omega$, then $u\in C^\infty(\Omega\setminus{\rm sing}(u))$ and \begin{enumerate} \item for $s>1/2$ and $n\geqslant 3$, ${\rm dim}_{\mathcal{H}}\,{\rm sing}(u)\leqslant n-2$; \vskip3pt \item for $s>1/2$ and $n=2$, ${\rm sing}(u)$ is locally finite in $\Omega$; \vskip3pt \item for $s=1/2$ and $n\geqslant 2$, $\mathcal{H}^{n-1}({\rm sing}(u))=0$; \vskip3pt \item for $s<1/2$ and $n\geqslant 2$, ${\rm dim}_{\mathcal{H}}\,{\rm sing}(u)\leqslant n-1$; \vskip3pt \item for $s<1/2$ and $n=1$, ${\rm sing}(u)$ is locally finite in $\Omega$. \end{enumerate} \end{theorem} \vskip3pt The other common assumption to consider is energy minimality. We say that a map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is a minimizing $s$-harmonic map in $\Omega$ if $$\mathcal{E}_s(u,\Omega)\leqslant \mathcal{E}_s(v,\Omega) $$ for every competitor $v\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ such that $v-u$ is compactly supported in $\Omega$. Notice that minimality implies criticality with respect to both inner and outer variations, so that a minimizing $s$-harmonic map in $\Omega$ is in particular a stationary weakly $s$-harmonic map in $\Omega$. However, minimality implies a stronger partial regularity, at least for $s\in(0,1/2)$. 
\begin{theorem}\label{mainthm3} Assume that $s\in(0,1)$ and $n>2s$. If $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is a minimizing $s$-harmonic map in~$\Omega$, then $u\in C^\infty(\Omega\setminus{\rm sing}(u))$ and \begin{enumerate} \item for $n\geqslant 3$, ${\rm dim}_{\mathcal{H}}\,{\rm sing}(u)\leqslant n-2$; \vskip3pt \item for $n=2$, ${\rm sing}(u)$ is locally finite in $\Omega$; \vskip3pt \item for $n=1$, ${\rm sing}(u)=\emptyset$ (i.e., $u\in C^\infty(\Omega)$). \end{enumerate} \end{theorem} Before describing the way we prove Theorem \ref{mainthm2} and Theorem \ref{mainthm3}, let us comment on the sharpness of the results above. \begin{remark}\label{rem1intro} In the case $s\in(0,1/2)$, essentially no better regularity than the one coming from the energy space can be expected from a weakly $s$-harmonic map in $\Omega$. Indeed, for an arbitrary set $E\subseteq \mathbb{R}^n$ such that the characteristic function $\chi_E$ belongs to $\widehat H^s(\Omega)$, consider the function $u:=\chi_E-\chi_{E^c}$. Identifying $\mathbb{R}^2$ with the complex plane $\mathbb{C}$, we can see $u$ as a map from $\mathbb{R}^n$ into~$\mathbb{S}^1$, and it belongs to $\widehat H^s(\Omega;\mathbb{S}^1)$. It has been observed in \cite[Remark 1.7]{MSK} that $u$ is a weakly $s$-harmonic map in $\Omega$ into~$\mathbb{S}^1$, i.e., it satisfies \eqref{sharmmapeqintro}. For $s=1/2$, we believe that, in the spirit of \cite{Riv1}, it should be possible to construct an example of a $1/2$-harmonic map from the $2$-dimensional disc into $\mathbb{S}^1$ which is discontinuous everywhere using the material in \cite{MPis}. However, for $s\in(1/2,1)$ and $n=2$, it remains open whether or not such pathological examples do exist. \end{remark} \begin{remark} For $s\in(0,1/2)$, the partial regularity for stationary weakly $s$-harmonic maps is sharp in the sense that the size of the singular set cannot be improved.
Following Remark \ref{rem1intro} above and \cite[Remark 1.7]{MSK}, for a set $E\subseteq \mathbb{R}^n$ such that $\chi_E\in\widehat H^s(\Omega)$, the map $u:=\chi_E-\chi_{E^c}$ is a weakly $s$-harmonic map in $\Omega$ into $\mathbb{S}^1$, and $$\mathcal{E}_s(u,\Omega)= \gamma_{n,s}P_{2s}(E,\Omega)\,,$$ where $P_{2s}(E,\Omega)$ is the fractional $2s$-perimeter of $E$ in $\Omega$ introduced by L. Caffarelli, J.M. Roquejoffre, and O. Savin in \cite{CRS}, and it is given by $$P_{2s}(E,\Omega)=\left(\iint_{(E\cap\Omega)\times(E^c\cap\Omega)}+\iint_{(E\cap\Omega^c)\times(E^c\cap\Omega)}+\iint_{(E\cap\Omega)\times(E^c\cap\Omega^c)}\right)\frac{{\rm d} x{\rm d} y}{|x-y|^{n+2s}} \,.$$ Therefore, $u$ is a stationary weakly $s$-harmonic map in $\Omega$ if and only if $E$ is stationary in $\Omega$ for the shape functional $P_{2s}(\cdot,\Omega)$ (see \cite{MSK}). This includes the case where $\partial E$ is a nonlocal minimal surface in the sense of \cite{CRS}. In particular, if $E$ is a half space, then $u$ is a stationary weakly $s$-harmonic map in $\Omega$, and ${\rm sing}(u)=\partial E\cap\Omega$ is a hyperplane. \end{remark} \begin{remark} For arbitrary spheres, Theorem \ref{mainthm3} is sharp for $s=1/2$. Indeed, we know from~\cite[Theorem 1.4]{MilPeg} that the map $x/|x|$ is a minimizing $1/2$-harmonic map into $\mathbb{S}^1$ in the unit disc $D_1\subseteq\mathbb{R}^2$. The minimality of $x/|x|$ for $s\not=1/2$ is open, but one can check that it is at least a stationary $s$-harmonic map into $\mathbb{S}^1$ in $D_1$, showing that Theorem \ref{mainthm2} is sharp also for~$s\in[1/2,1)$. For arbitrary $s\in(0,1)$, the following classical example suggests that Theorem \ref{mainthm3} might nevertheless be sharp.
Consider the minimization problem (still in dimension $n=2$), $$\min\Big\{\mathcal{E}_s(u,D_1): u\in \widehat H^s(D_1,\mathbb{S}^1)\,,\;u(x)=x/|x|\text{ in $\mathbb{R}^2\setminus D_1$}\Big\}\,.$$ Existence of solutions follows easily from the direct method of calculus of variations, and any solution is obviously a minimizing $s$-harmonic map in~$D_1$. Since $x/|x|$ does not admit any $\mathbb{S}^1$-valued continuous extension to $D_1$, any solution must have at least one singular point in $\overline{D}_1$. \end{remark} \begin{remark} For $s=1/2$ and $d\geqslant 3$ (i.e., for $\mathbb{S}^2$ or higher dimensional target spheres), the size of the singular set of a minimizing $1/2$-harmonic map can be reduced. It has been proved in \cite[Theorem 1.3]{MilPeg} that in this case, ${\rm sing}(u)=\emptyset$ for $n=2$, it is locally finite for $n=3$, and ${\rm dim}_{\mathcal{H}}{\rm sing}(u)\leqslant n-3$ for $n\geqslant 4$. It would be interesting to know if this improvement persists for $s\not=1/2$. \end{remark} The proofs of Theorems \ref{mainthm1}, \ref{mainthm2}, and \ref{mainthm3} rely on several ingredients that we now briefly describe. The first one consists in applying the so-called {\sl Caffarelli-Silvestre extension} procedure~\cite{CaffSil} to the open half space $\mathbb{R}^{n+1}_+:=\mathbb{R}^n\times(0,+\infty)$. This extension (which may have originated in the probability literature~\cite{MolOs}) allows us to represent $(-\Delta)^s$ as the Dirichlet-to-Neumann operator associated with the degenerate elliptic operator $L_s:=-{\rm div}(z^{1-2s}\nabla\cdot)$, where $z\in(0,+\infty) $ denotes the extension variable. In this way (after extension), we can reformulate the $s$-harmonic map equation as a degenerate harmonic map equation with {\sl partially} free boundary, very much like in \cite{MS,MSK}. 
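In schematic form, the extension procedure just described replaces the nonlocal equation by a local, degenerate elliptic system; the following is a sketch of the Caffarelli--Silvestre identity from \cite{CaffSil}, with the normalisation constant left unspecified (we write it as a generic $c_s>0$).

```latex
% Writing v:=u^{\rm e} for the extension of u to the half space \mathbb{R}^{n+1}_+,
\begin{cases}
\,{\rm div}\big(z^{1-2s}\nabla v\big)=0 & \text{in } \mathbb{R}^{n+1}_+\,,\\
\,v=u & \text{on } \partial\mathbb{R}^{n+1}_+\,,
\end{cases}
\qquad
-c_s\lim_{z\to 0^+} z^{1-2s}\partial_z v(\cdot,z)=(-\Delta)^s u\,,
% where c_s>0 depends only on s. The s-harmonic map equation thus becomes a
% (degenerate) nonlinear Neumann condition on a portion of \partial\mathbb{R}^{n+1}_+,
% i.e. a partially free boundary problem.
```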
Under the stationarity assumption, the extended map satisfies a fundamental monotonicity formula, which in turn implies local control in the space BMO (bounded mean oscillation) of the $s$-harmonic map under consideration in terms of its energy. Probably the main step in the proof is an epsilon-regularity result: we show that, under a (standard) smallness assumption on the energy $\mathcal{E}_s$ in a ball, a (stationary) $s$-harmonic map is H\"older continuous in a smaller ball. The strategy we follow here is largely inspired by the argument of L.C. Evans \cite{Evans} making use of the conservation laws discovered by F.~H\'elein \cite{Hel1} and the duality $\mathcal{H}^1$/BMO. In our fractional setting, we make use of the fractional conservation laws together with the ``fractional div-curl lemma'' of K. Mazowiecka and the third author \cite{MazSchi}. A main difference with \cite{Evans} lies in the fact that an additional ``error term'' appears when rewriting the $s$-harmonic map equation in the suitable form where compensation can be seen. To control this error term in arbitrary dimensions, we make use of a recent embedding result between Triebel-Lizorkin-Morrey type spaces \cite{Ho} and various characterizations of these spaces~\cite{SaYY,YangYuan}. Once H\"older continuity is obtained, we prove Lipschitz continuity in an even smaller ball using an adjustment of the classical ``harmonic replacement'' technique, see \cite{Schoen}. More precisely, using the extension, we adapt an argument due to J. Roberts \cite{Rob} in the case of degenerate harmonic maps with free boundary (i.e., with homogeneous - degenerate - Neumann boundary condition). With Lipschitz continuity in hand, we are then able to derive $C^\infty$-regularity from Schauder estimates for the fractional Laplacian.
To obtain the bounds on the size of the singular set, we broadly follow the usual dimension reduction argument of Almgren \& Federer for harmonic maps (see \cite{Sim}), which is based on the strong compactness of blow-ups around points. Here compactness (for $s\not=1/2$) is obtained as in \cite{MSK}, and it is a consequence of the monotonicity formula together with Marstrand's Theorem (see e.g. \cite{Matti}). Finally, in the minimizing case and $s\in(0,1/2)$, we obtain an improvement on the size of the singular set (compared to the stationary case) from the triviality of the so-called ``tangent maps'' (i.e. blow-up limits), a consequence of the regularity of minimizing $s$-harmonic maps in one dimension proved in \cite{MSY}. \subsection*{Notation} Throughout the paper, $\mathbb{R}^n$ is often identified with $\partial \mathbb{R}^{n+1}_+=\mathbb{R}^n\times\{0\}$. More generally, sets $A\subseteq\mathbb{R}^n$ can be identified with $A\times\{0\}\subseteq\partial \mathbb{R}^{n+1}_+$. Points in $\mathbb{R}^{n+1}$ are written $\mathbf{x}=(x,z)$ with $x\in\mathbb{R}^n$ and $z\in\mathbb{R}$. We shall denote by $B_r(\mathbf{x})$ the open ball in $\mathbb{R}^{n+1}$ of radius $r$ centered at $\mathbf{x}=(x,z)$, while $D_r(x):= B_r(\mathbf{x})\cap\mathbb{R}^{n}$ is the open ball (or disc) in $\mathbb{R}^n$ centered at $x$.
For an arbitrary set $G\subseteq \mathbb{R}^{n+1}$, we write $$G^+:=G\cap \mathbb{R}^{n+1}_+\quad\text{ and }\quad\partial^+ G:=\partial G\cap \mathbb{R}^{n+1}_+\,.$$ If $G\subseteq\mathbb{R}^{n+1}_+$ is a bounded open set, we shall say that $G$ is {\bf admissible} whenever \begin{itemize} \item $\partial G$ is Lipschitz regular; \vskip2pt \item the (relative) open set $\partial^0G\subseteq\partial\mathbb{R}^{n+1}_+$ defined by $$\partial^0G:=\Big\{\mathbf{x}\in\partial G\cap\partial\mathbb{R}^{n+1}_+ : B^+_{r}(\mathbf{x})\subseteq G \text{ for some $r>0$}\Big \}\,,$$ is non empty and has Lipschitz boundary; \vskip2pt \item $\partial G=\partial^+ G\cup\overline{\partial^0G}\,$. \end{itemize} Finally, we usually denote by $C$ a generic positive constant which only depends on the dimension $n$ and $s\in(0,1)$, and possibly changing from line to line. If a constant depends on additional given parameters, we shall write those parameters using the subscript notation. \section{Functional spaces, fractional operators, and compensated compactness } \label{prelim} \subsection{Fractional $H^{s}$-spaces}\label{secHs} For an open set $\Omega\subseteq \mathbb{R}^n$, the Sobolev-Slobodeckij space $H^{s}(\Omega)$ is made of all functions $u\in L^2(\Omega)$ such that\footnote{The normalization constant $\gamma_{n,s}$ is chosen in such a way that $\displaystyle [u]^2_{H^{s}(\mathbb{R}^n)}=\int_{\mathbb{R}^n}(2\pi|\xi|)^{2s}|\widehat u|^2\,{\rm d}\xi\,$, where $\widehat u$ denotes the (ordinary frequency) Fourier transform of $u$.} \begin{equation}\label{defHsandgammans} [u]^2_{H^{s}(\Omega)}:=\frac{\gamma_{n,s}}{2}\iint_{\Omega\times \Omega} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y<\infty\,,\quad \gamma_{n,s}:=s\,2^{2s}\pi^{-\frac{n}{2}}\frac{\Gamma\big(\frac{n+2s}{2}\big)}{\Gamma(1-s)} \,. \end{equation} It is a separable Hilbert space normed by $\|\cdot\|^2_{H^{s}(\Omega)}:= \|\cdot\|^2_{L^2(\Omega)}+[\cdot]^2_{H^{s}(\Omega)}$. 
The space $H^{s}_{\rm loc}(\Omega)$ denotes the class of functions whose restriction to any relatively compact open subset $\Omega'$ of $\Omega$ belongs to $H^{s}(\Omega')$. The linear subspace $H^{s}_{00}(\Omega) \subseteq H^{s}(\mathbb{R}^n)$ is in turn defined by $$H^{s}_{00}(\Omega):=\big\{u\in H^{s}(\mathbb{R}^n) : u=0 \text{ a.e. in } \mathbb{R}^n\setminus\Omega\big\}\,. $$ Endowed with the induced norm, $H^{s}_{00}(\Omega)$ is also a Hilbert space, and \begin{equation*} [u]^2_{H^{s}(\mathbb{R}^n)}=\frac{\gamma_{n,s}}{2}\iint_{(\mathbb{R}^n\times\mathbb{R}^n)\setminus(\Omega^c\times \Omega^c)} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y =2\mathcal{E}_s(u,\Omega)\quad\forall u\in H^{s}_{00}(\Omega)\,, \end{equation*} where $\mathcal{E}_s(\cdot,\Omega)$ is the $s$-Dirichlet energy defined in \eqref{defsdirenerg}. If $\Omega$ is bounded and its boundary is smooth enough (e.g. if $\partial \Omega$ is Lipschitz regular), then \begin{equation}\label{densitysmoothH1/200} H^{s}_{00}(\Omega)= \overline{\mathscr{D}(\Omega)}^{\,\|\cdot\|_{H^{s}(\mathbb{R}^n)}} \end{equation} (see \cite[Theorem~1.4.2.2]{G}). The topological dual space of $H^{s}_{00}(\Omega)$ is denoted by $H^{-s}(\Omega)$.
\vskip5pt We are mostly interested in the class of functions $$\widehat{H}^{s}(\Omega):=\Big\{u\in L^{2}_{\rm loc}(\mathbb{R}^n) : \mathcal{E}_s(u,\Omega)<\infty\Big\} \,.$$ The following properties hold for any open subsets $\Omega$ and $\Omega'$ of $\mathbb{R}^n$: \begin{itemize} \vskip5pt \item $ \widehat{H}^{s}(\Omega)$ is a linear space; \vskip5pt \item $ \widehat{H}^{s}(\Omega) \subseteq \widehat{H}^{s}(\Omega')$ whenever $\Omega'\subseteq\Omega$, and $ \mathcal{E}_s(\cdot,\Omega')\leqslant \mathcal{E}_s(\cdot,\Omega)\,$; \vskip5pt \item if $\Omega^\prime$ is bounded, then $\widehat{H}^{s}(\Omega)\cap H^{s}_{\rm loc}(\mathbb{R}^n) \subseteq \widehat{H}^{s}(\Omega')\,$; \vskip5pt \item if $\Omega$ is bounded, then $H^{s}_{\rm loc}(\mathbb{R}^n)\cap L^\infty(\mathbb{R}^n) \subseteq \widehat{H}^{s}(\Omega)\,$. \end{itemize} From Lemma \ref{adminHchap} below, it follows that $\widehat{H}^{s}(\Omega)$ is a Hilbert space for the scalar product induced by the Hilbertian norm $u\mapsto \|u\|_{\widehat H^s(\Omega)}:=\big(\|u\|^2_{L^2(\Omega)}+ \mathcal{E}_s(u,\Omega)\big)^{1/2}$ (see e.g. \cite{MSK} and \cite[proof of Lemma~2.1]{MS}). \begin{lemma}\label{adminHchap} Let $x_0\in\Omega$ and $\rho>0$ be such that $D_{\rho}(x_0)\subseteq\Omega$. There exists a constant $C_\rho=C_\rho(\rho,n,s)>0$ such that $$\int_{\mathbb{R}^n}\frac{|u(x)|^2}{(|x-x_0|+1)^{n+2s}}\,{\rm d} x\leqslant C_{\rho}\left(\mathcal{E}_s\big(u,D_{\rho}(x_0)\big)+\|u\|^2_{L^2(D_{\rho}(x_0))}\right)$$ for every $u\in \widehat{H}^{s}(\Omega)$. \end{lemma} \begin{remark}\label{remweakcvHhat} From the Hilbertian structure of $\widehat{H}^{s}(\Omega)$, it follows that any bounded sequence $\{u_k\}$ in $\widehat{H}^{s}(\Omega)$ admits a subsequence converging weakly in $\widehat{H}^{s}(\Omega)$. In addition, if $u_k\rightharpoonup u$ weakly in $\widehat{H}^{s}(\Omega)$, then $u_k\to u$ strongly in $L^2(\Omega)$ by the compact embedding $H^{s}(\Omega)\hookrightarrow L^2(\Omega)$. 
In particular, $\|u_k\|_{L^2(\Omega)}\to \|u\|_{L^2(\Omega)}$. Since $\liminf_k\|u_k\|_{\widehat{H}^{s}(\Omega)}\geqslant \|u\|_{\widehat{H}^{s}(\Omega)}$, it follows that $\liminf_k\mathcal{E}_s(u_k,\Omega)\geqslant \mathcal{E}_s(u,\Omega)$. \end{remark} \subsection{Fractional operators and compensated compactness}\label{sectoperandcompcomp} Given an open set $\Omega\subseteq \mathbb{R}^n$, the fractional Laplacian $(-\Delta)^s$ in $\Omega$ is defined as the continuous linear operator $(-\Delta)^s: \widehat{H}^{s}(\Omega)\to (\widehat{H}^{s}(\Omega))^\prime$ induced by the quadratic form $\mathcal{E}_s(\cdot,\Omega)$. In other words, the weak form of the fractional Laplacian $ (-\Delta)^{s} u$ of a given function $u\in \widehat{H}^{s}(\Omega)$ is defined through its action on $ \widehat{H}^{s}(\Omega)$ by \begin{equation}\label{deffraclap} \big\langle (-\Delta)^{s} u, \varphi\big\rangle_\Omega:=\frac{\gamma_{n,s}}{2}\iint_{(\mathbb{R}^n\times\mathbb{R}^n)\setminus(\Omega^c\times\Omega^c)} \frac{\big(u(x)-u(y)\big)\big(\varphi(x)-\varphi(y)\big)}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,. \end{equation} Notice that the restriction of the linear form $ (-\Delta)^{s} u$ to the subspace $H^{s}_{00}(\Omega)$ belongs to $H^{-s}(\Omega)$ with the estimate $\| (-\Delta)^{s} u\|^2_{H^{-s}(\Omega)}\leqslant 2\mathcal E_s(u,\Omega)$. \begin{remark}\label{localityLapls} Notice that the operator $(-\Delta)^s$ has the following local property: if $ u\in\widehat{H}^{s}(\Omega)$ and $\Omega^\prime\subseteq\Omega$ is an open subset, then $$\big\langle(-\Delta)^su,\varphi\big\rangle_\Omega= \big\langle(-\Delta)^su,\varphi\big\rangle_{\Omega^\prime}\quad\forall \varphi\in H^s_{00}(\Omega^\prime)\,.$$ \end{remark} \vskip5pt Following \cite{MazSchi}, we now relate the fractional Laplacian $ (-\Delta)^{s}$ to suitable notions of {\sl fractional gradient} and {\sl fractional divergence}.
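As a consistency check on \eqref{deffraclap}: for $\Omega=\mathbb{R}^n$ and $u$, $\varphi$ smooth with compact support, the weak form matches the pointwise principal value formula recalled in the Introduction. The following is a sketch of the standard symmetrisation argument.

```latex
% Splitting \varphi(x)-\varphi(y) and exchanging the roles of x and y
% in the (symmetric) double integral of \eqref{deffraclap},
\big\langle (-\Delta)^{s} u, \varphi\big\rangle_{\mathbb{R}^n}
  =\gamma_{n,s}\lim_{\varepsilon\to 0}\iint_{\{|x-y|>\varepsilon\}}
     \frac{\big(u(x)-u(y)\big)\varphi(x)}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y
  =\int_{\mathbb{R}^n}\Big({\rm p.v.}\,\gamma_{n,s}\!\int_{\mathbb{R}^n}
     \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,{\rm d} y\Big)\varphi(x)\,{\rm d} x\,,
% recovering the distributional action of the pointwise formula for (-\Delta)^s u.
```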
To this purpose, we first need to recall from \cite{MazSchi} the notion of (fractional) {\sl ``$s$-vector field''} over a domain. The space of $s$-vector fields in $\Omega$, that we shall denote by $L^2_{\rm od}(\Omega)$ (in agreement with \cite{MazSchi}), is defined as the Lebesgue space of $L^2$-{\sl scalar functions} over the open set $(\mathbb{R}^n\times\mathbb{R}^n)\setminus(\Omega^c\times\Omega^c)\subseteq \mathbb{R}^{2n}$ with respect to the measure $|x-y|^{-n}{\rm d} x{\rm d} y$. In other words, $$L^2_{\rm od}(\Omega):=\Big\{F:(\mathbb{R}^n\times\mathbb{R}^n)\setminus(\Omega^c\times\Omega^c)\to\mathbb{R}: \|F\|_{L^2_{\rm od}(\Omega)}<\infty\Big\}\,, $$ with $$\|F\|^2_{L^2_{\rm od}(\Omega)}:= \iint_{(\mathbb{R}^n\times\mathbb{R}^n)\setminus(\Omega^c\times\Omega^c)}\frac{|F(x,y)|^2}{|x-y|^n}\,{\rm d} x{\rm d} y \,.$$ We endow $L^2_{\rm od}(\Omega)$ with the (pointwise) product operator $\odot: L^2_{\rm od}(\Omega)\times L^2_{\rm od}(\Omega) \to L^1(\Omega)$ given by $$F\odot G(x):=\int_{\mathbb{R}^n}\frac{F(x,y)G(x,y)}{|x-y|^{n}}\,{\rm d} y\,. $$ Note that $\odot$ is a continuous bilinear operator thanks to Fubini's theorem, and it plays the role of ``pointwise scalar product'' between two $s$-vector fields. In this respect, we define the (pointwise) ``squared modulus'' of an $s$-vector field $F\in L^2_{\rm od}(\Omega)$ by \begin{equation}\label{modsqsvectfield} |F|^2:=F\odot F \in L^1(\Omega)\,. \end{equation} The (fractional) $s$-gradient is defined in \cite{MazSchi} as a linear operator from the space of scalar-valued functions $\widehat H^s(\Omega)$ into the space of $s$-vector fields over $\Omega$. More precisely, we define it as the continuous linear operator ${\rm d}_s:\widehat H^s(\Omega)\to L^2_{\rm od}(\Omega)$ given by \begin{equation}\label{defsgrad} {\rm d}_su(x,y):= \frac{\sqrt{\gamma_{n,s}}}{\sqrt{2}}\,\frac{u(x)-u(y)}{|x-y|^s}\,.
\end{equation} Obviously, one has $$\|{\rm d}_su\|^2_{L^2_{\rm od}(\Omega)}= 2\mathcal{E}_s(u,\Omega)\quad\text{and}\quad\big\| |{\rm d}_s u|^2\big\|_{L^1(\Omega)}\leqslant 2\mathcal{E}_s(u,\Omega)$$ for every $u\in \widehat H^s(\Omega)$. \vskip3pt In turn, the (fractional) $s$-divergence, denoted by ${\rm div}_s$, is defined by duality as the adjoint operator to the $s$-gradient operator restricted to $H^s_{00}(\Omega)$. To do so, the main observation is that for $F\in L^2_{\rm od}(\Omega)$, we have $$F\odot {\rm d}_s\varphi \in L^1(\mathbb{R}^n) \quad\text{for every $\varphi\in H^s_{00}(\Omega)$}\,,$$ with $$\|F\odot {\rm d}_s\varphi \|_{L^1(\mathbb{R}^n)}\leqslant \|F\|_{L^2_{\rm od}(\Omega)}[\varphi]_{H^s(\mathbb{R}^n)}\,. $$ In this way, we can indeed define ${\rm div}_s: L^2_{\rm od}(\Omega)\to H^{-s}(\Omega)$ as the continuous linear operator given by $$\big\langle {\rm div}_s F,\varphi\big\rangle_\Omega:=\int_{\mathbb{R}^n} F\odot {\rm d}_s\varphi\,{\rm d} x \qquad\forall \varphi\in H^s_{00}(\Omega)\,,$$ which satisfies the estimate $\|{\rm div}_s F\|_{H^{-s}(\Omega)}\leqslant \|F\|_{L^2_{\rm od}(\Omega)}$ for all $F\in L^2_{\rm od}(\Omega)$. \vskip3pt From the definition of ${\rm d}_s$ and ${\rm div}_s$, it readily follows that \begin{proposition}\label{fracintegbypart} We have $(-\Delta)^s={\rm div}_s({\rm d}_s)$, i.e., $$\big\langle (-\Delta)^{s} u, \varphi\big\rangle_\Omega=\int_{\mathbb{R}^n} {\rm d}_s u\odot{\rm d}_s\varphi\,{\rm d} x $$ for every $u \in \widehat H^s(\Omega)$ and every $\varphi\in H^s_{00}(\Omega)$. \end{proposition} One of the main results in \cite{MazSchi} is a compensated compactness result relative to the $s$-gradient and $s$-divergence operators in the spirit of the classical ``div-curl'' lemma \cite{CLMS}. 
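With the notation just introduced, the right hand side of the sphere-valued equation \eqref{sharmmapeqintro} admits a compact rewriting; the identity below is obtained by unwinding the definitions \eqref{defsgrad} and \eqref{modsqsvectfield}, with ${\rm d}_s$ acting componentwise on vector-valued maps.

```latex
% Unwinding \eqref{defsgrad} and \eqref{modsqsvectfield}:
|{\rm d}_s u|^2(x)
  =\frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y\,,
% so that the s-harmonic map equation \eqref{sharmmapeqintro} takes the form
(-\Delta)^s u=|{\rm d}_s u|^2\,u\,,
% in analogy with the harmonic map equation -\Delta u=|\nabla u|^2\,u .
```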
To present this result, let us recall that the space ${\rm BMO}(\mathbb{R}^n)$ is defined as the set of all $u\in L^1_{\rm loc}(\mathbb{R}^n)$ such that $$[u]_{{\rm BMO}(\mathbb{R}^n)}:= \sup_{D_{r}(y)}\, \Xint-_{D_r(y)}|u-(u)_{y,r}|\,{\rm d} x<+\infty\,, $$ where $(u)_{y,r}$ denotes the average of $u$ over the ball $D_r(y)$. The following theorem corresponds to \cite[Proposition 2.4]{MazSchi}. \begin{theorem}\label{divcurlthm} Let $F\in L^2_{\rm od}(\Omega)$ be such that $${\rm div}_s F=0 \quad \text{in $H^{-s}(\Omega)$}\,.$$ There exist a universal constant $\Lambda>1$ and a constant $C=C(n,s)$ such that for every ball $D_r(x_0)$ satisfying $D_{\Lambda r}(x_0)\subseteq \Omega$, $$\left|\int_{\mathbb{R}^n}\big(F\odot{\rm d}_s u\big)\varphi\,{\rm d} x\right|\leqslant C\|F\|_{L^2_{\rm od}(\Omega)}\sqrt{\mathcal{E}_s(u,\Omega)} \Big([\varphi]_{\rm BMO(\mathbb{R}^n)} +r^{-n}\|\varphi\|_{L^1(\mathbb{R}^n)} \Big)$$ for every $u\in \widehat H^s(\Omega)$ and every $\varphi \in\mathscr{D}(D_r(x_0))$. \end{theorem} \begin{remark} In the statement of \cite[Proposition 2.4]{MazSchi}, the $s$-vector field $F$ is assumed to be $s$-divergence free in the whole $\mathbb{R}^n$ and $u\in H^s(\mathbb{R}^n)$. However, a careful reading of the proof reveals that only the assumptions in Theorem \ref{divcurlthm} on $F$ and $u$ are used. \end{remark} \subsection{Weighted Sobolev spaces}\label{secWeightSob} For an open set $G\subseteq \mathbb{R}^{n+1}$, we consider the weighted $L^2$-space $$L^2(G,|z|^a{\rm d} \mathbf{x}):= \Big\{v\in L^1_{\rm loc}(G) : |z|^{\frac{a}{2}} v \in L^2(G)\Big\} \quad \text{with }a:=1-2s\,,$$ normed by $$\|v\|^2_{L^2(G,|z|^a{\rm d} \mathbf{x})}:=\int_G |z|^{a}|v|^2\,{\rm d} \mathbf{x}\,.
$$ Accordingly, we introduce the weighted Sobolev space $$H^1(G,|z|^a{\rm d} \mathbf{x}):= \Big\{v\in L^2(G,|z|^a{\rm d} \mathbf{x}) : \nabla v \in L^2(G,|z|^a{\rm d} \mathbf{x})\Big\} \,,$$ normed by $$\|v\|_{H^1(G,|z|^a{\rm d} \mathbf{x})}:=\|v\|_{L^2(G,|z|^a{\rm d} \mathbf{x})}+\|\nabla v\|_{L^2(G,|z|^a{\rm d} \mathbf{x})}\,. $$ Both $L^2(G,|z|^a{\rm d} \mathbf{x})$ and $H^1(G,|z|^a{\rm d} \mathbf{x})$ are separable Hilbert spaces when equipped with the scalar product induced by their respective Hilbertian norms. On $H^1(G,|z|^a{\rm d} \mathbf{x})$, we define {\sl the weighted Dirichlet energy ${\bf E}_s(\cdot,G)$} by setting \begin{equation}\label{defweightDir} {\bf E}_s(v,G):=\frac{\boldsymbol{\delta}_s}{2}\int_G|z|^a|\nabla v|^2\,{\rm d}{\bf x}\quad\text{with } \boldsymbol{\delta}_s:=2^{2s-1}\frac{\Gamma(s)}{\Gamma(1-s)}\,. \end{equation} The relevance of the normalisation constant $\boldsymbol{\delta}_s>0$ will be revealed in Section \ref{fracharmext} (see \eqref{normexth1/2}). \vskip3pt Some relevant remarks about $H^1(G,|z|^a{\rm d} \mathbf{x})$ are in order. For a bounded admissible open set $G\subseteq \mathbb{R}^{n+1}_+$, the space $L^2(G,|z|^a{\rm d} \mathbf{x})$ embeds continuously into $L^\gamma(G)$ for every $1\leqslant \gamma<\frac{1}{1-s}$ whenever $s\in(0,1/2)$ by H\"older's inequality. For $s\in[1/2,1)$, we have $L^2(G,|z|^a{\rm d} \mathbf{x})\hookrightarrow L^2(G)$ continuously since $a\leqslant 0$. In any case, it implies that \begin{equation}\label{contembedd} H^1(G,|z|^a{\rm d} \mathbf{x})\hookrightarrow W^{1,\gamma}(G) \end{equation} continuously for every $1< \gamma<\min\{\frac{1}{1-s},2\}$. As a first consequence, $H^1(G,|z|^a{\rm d} \mathbf{x})\hookrightarrow L^{1}(G)$ with compact embedding. 
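The restriction $\gamma<\frac{1}{1-s}$ in the embedding above can be traced through H\"older's inequality; the following is a sketch for $s\in(0,1/2)$, i.e. for $a=1-2s>0$.

```latex
% For 1\leqslant\gamma<2, H\"older's inequality with exponents 2/\gamma and 2/(2-\gamma)
% applied to |v|^\gamma = z^{a\gamma/2}|v|^\gamma\, z^{-a\gamma/2} gives
\int_G|v|^\gamma\,{\rm d}{\bf x}
  \leqslant\Big(\int_G z^{a}|v|^{2}\,{\rm d}{\bf x}\Big)^{\frac{\gamma}{2}}
    \Big(\int_G z^{-\frac{a\gamma}{2-\gamma}}\,{\rm d}{\bf x}\Big)^{1-\frac{\gamma}{2}}\,,
% and for bounded G the second factor is finite precisely when
%    \frac{a\gamma}{2-\gamma}<1 \iff \gamma<\frac{2}{1+a}=\frac{1}{1-s}\,.
```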
Secondly, for such $\gamma$'s, the compact linear trace operator \begin{equation}\label{conttracW1} v\in W^{1,\gamma}(G)\mapsto v_{|\partial^0 G}\in L^1(\partial^0G) \end{equation} induces a compact linear trace operator from $H^1(G,|z|^a{\rm d} \mathbf{x})$ into $L^1(\partial^0G)$, extending the usual trace of smooth functions. We shall denote by $v_{|\partial^0G}$ the trace of $v\in H^1(G,|z|^a{\rm d} \mathbf{x})$ on $\partial^0G$, or simply by $v$ if it is clear from the context. We may now recall the following Poincar\'e's inequality, see e.g. \cite[Lemma 2.5]{MSK}. \begin{lemma}\label{poincarebdry} If $v\in H^1(B_r^+,|z|^a{\rm d}{\bf x})$, then $$ \big\| v - (v)_r \big\|_{L^1(D_r)} \leqslant C r^{\frac{n+2s}{2}}\|\nabla v\|_{L^2(B_r^+,|z|^a{\rm d}{\bf x})}\,,$$ for a constant $C=C(n,s)$, where $(v)_r$ denotes the average of $v$ over $D_r$. \end{lemma} The next lemma states that the trace $v_{|\partial^0G}$ has actually $H^s$-regularity, at least locally. \begin{lemma}\label{HsregtraceH1weight} If $v\in H^1(B^+_{2r},|z|^a{\rm d} \mathbf{x})$, then the trace of $v$ on $\partial^0B_r^+\simeq D_r$ belongs to $H^s(D_r)$, and $$[v]^2_{H^s(D_r)} \leqslant C \,{\bf E}_s(v,B^+_{2r})\,,$$ for a constant $C=C(n,s)$. \end{lemma} \begin{proof} The proof follows exactly the one in \cite[Lemma 2.3]{MSY} which is stated only in dimension $n=1$. We reproduce the proof (in arbitrary dimension) for convenience of the reader, slightly anticipating a well-known identity presented in Section \ref{fracharmext} (see \eqref{normexth1/2}). Rescaling variables, we can assume that $r=1$. Moreover, we may assume without loss of generality that $v$ has a vanishing average over the half ball $B^+_{2}$. Let $\zeta\in C^\infty(B_{2};[0,1])$ be a cut-off function such that $\zeta({\bf x})=1$ for $|{\bf x}|\leqslant 1$, $\zeta({\bf x})=0$ for $|{\bf x}|\geqslant 3/2$. 
The function $v_*:=\zeta v$ belongs to $H^1(\mathbb{R}^{n+1}_+, |z|^a{\rm d}{\bf x})$, and Poincar\'e's inequality in $H^1(\mathbb{R}^{n+1}_+, |z|^a{\rm d}{\bf x})$ (see e.g.~\cite{FKS}) yields \begin{equation}\label{esticpa1} \int_{\mathbb{R}^{n+1}_+}z^a|\nabla v_*|^2\,{\rm d} {\bf x}\leqslant C\Big({\bf E}_s(v,B^+_{2})+ \int_{B^+_{2}}z^a|v|^2\,{\rm d}{\bf x}\Big)\leqslant C{\bf E}_s(v,B^+_{2})\,, \end{equation} for a constant $C=C(\zeta,n,s)$. On the other hand, it follows from \eqref{normexth1/2} in Section \ref{fracharmext} below that \begin{equation}\label{esticpa2} \iint_{D_1\times D_1}\frac{|v(x)-v(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\leqslant \iint_{\mathbb{R}^n\times\mathbb{R}^n}\frac{|v_*(x)-v_*(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \leqslant C {\bf E}_s(v_*,\mathbb{R}^{n+1}_+)\,. \end{equation} Gathering \eqref{esticpa1} and \eqref{esticpa2} leads to the announced estimate. \end{proof} \subsection{Fractional harmonic extension and the Dirichlet-to-Neumann operator}\label{fracharmext} Let us consider the so-called fractional Poisson kernel $\mathbf{P}_{n,s}:\mathbb{R}^{n+1}_+\to [0,\infty)$ defined by \begin{equation}\label{defpoisskern} \mathbf{P}_{n,s}(\mathbf{x}):=\sigma_{n,s}\,\frac{z^{2s}}{|\mathbf{x}|^{n+2s}}\qquad \text{with } \sigma_{n,s}:=\pi^{-\frac{n}{2}}\frac{\Gamma(\frac{n+2s}{2})}{\Gamma(s)}\,, \end{equation} where $\mathbf{x}:=(x,z)\in\mathbb{R}^{n+1}_+:=\mathbb{R}^n\times(0,\infty)$. The choice of the constant $\sigma_{n,s}$ is made in such a way that $\int_{\mathbb{R}^n}\mathbf{P}_{n,s}(x,z)\,{\rm d} x=1$ for every $z>0$ (see e.g. the computation in Remark \ref{remcomputmasspoisskern}). As shown in \cite{CaffSil} (see also \cite{MolOs}), the function $\mathbf{P}_{n,s}$ solves $$\begin{cases} {\rm div}(z^{a}\nabla \mathbf{P}_{n,s})= 0 & \text{in $\mathbb{R}^{n+1}_+$}\,,\\ \mathbf{P}_{n,s}=\delta_0 & \text{on $\partial\mathbb{R}^{n+1}_+$}\,, \end{cases}$$ where $\delta_0$ denotes the Dirac distribution at the origin.
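For the reader's convenience, the computation alluded to in Remark \ref{remcomputmasspoisskern} reduces to a Beta integral: by the change of variables $x=zy$, polar coordinates, and the substitution $t=\rho^2$,
$$\int_{\mathbb{R}^n}\mathbf{P}_{n,s}(x,z)\,{\rm d} x=\sigma_{n,s}\int_{\mathbb{R}^n}\frac{{\rm d} y}{(1+|y|^2)^{\frac{n+2s}{2}}}=\sigma_{n,s}\,\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2})}\int_0^{\infty}\frac{t^{\frac{n}{2}-1}}{(1+t)^{\frac{n+2s}{2}}}\,{\rm d} t=\sigma_{n,s}\,\pi^{\frac{n}{2}}\,\frac{\Gamma(s)}{\Gamma(\frac{n+2s}{2})}=1\,,$$
the last integral being the Beta function $B(\frac{n}{2},s)=\frac{\Gamma(\frac{n}{2})\Gamma(s)}{\Gamma(\frac{n+2s}{2})}$. The same computation shows that the measure $\mathfrak{m}_s$ introduced in \eqref{defmeasm} below is a probability measure.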
From now on, for a measurable function $u$ defined over $\mathbb{R}^n$, we shall denote by $u^{\rm e}$ its extension to the half-space $\mathbb{R}^{n+1}_+$ given by the convolution (in the $x$-variable) of $u$ with $\mathbf{P}_{n,s}$, i.e., \begin{equation}\label{poisson} u^{\rm e}(x,z):= \sigma_{n,s}\int_{\mathbb{R}^n}\frac{z^{2s} u(y)}{(|x-y|^2+z^2)^{\frac{n+2s}{2}}}\,{\rm d} y\,. \end{equation} Notice that $u^{\rm e}$ is well defined if $u$ belongs to the Lebesgue space $L^1$ over $\mathbb{R}^n$ with respect to the probability measure \begin{equation}\label{defmeasm} \mathfrak{m}_s:=\sigma_{n,s}(1+|y|^2)^{-\frac{n+2s}{2}}\,{\rm d} y\,. \end{equation} In particular, $u^{\rm e}$ can be defined whenever $u\in \widehat{H}^{s}(\Omega)$ for some open set $\Omega\subseteq\mathbb{R}^n$ by Lemma~\ref{adminHchap}. Moreover, if $u\in L^\infty(\mathbb{R}^n)$, then $u^{\rm e}\in L^\infty(\mathbb{R}_+^{n+1})$ and \begin{equation}\label{bdlinftyext} \|u^{\rm e}\|_{L^\infty(\mathbb{R}_+^{n+1})} \leqslant \|u\|_{L^\infty(\mathbb{R}^n)}\,. \end{equation} For a function $u\in L^1(\mathbb{R}^n,\mathfrak{m}_s)$, the extension $u^{\rm e}$ has a pointwise trace on $\partial\mathbb{R}^{n+1}_+\simeq\mathbb{R}^n$ which is equal to $u$ at every Lebesgue point. In addition, $u^{\rm e}$ solves the equation \begin{equation}\label{eqextharm} \begin{cases} {\rm div}(z^{a}\nabla u^{\rm e}) = 0 & \text{in $\mathbb{R}_+^{n+1}$}\,,\\ u^{\rm e} = u & \text{on $\partial\mathbb{R}^{n+1}_+$}\,. \end{cases} \end{equation} By analogy with the standard case $s=1/2$ (for which \eqref{eqextharm} reduces to the Laplace equation), the map $u^{\rm e}$ is referred to as the {\it fractional harmonic extension} of $u$. \vskip3pt It has been proved in \cite{CaffSil} that $u^{\rm e}$ belongs to the weighted space $H^1(\mathbb{R}_+^{n+1},|z|^a{\rm d}\mathbf{x})$ whenever $u\in H^{s}(\mathbb{R}^n)$. 
Extending a well-known identity for $s=1/2$, the $H^{s}$-seminorm of $u$ coincides up to a multiplicative constant with the weighted $L^2$-norm of $\nabla u^{\rm e}$, and $u^{\rm e}$ turns out to minimize the weighted Dirichlet energy among all possible extensions. In other words, \begin{equation}\label{normexth1/2} [u]^2_{H^{s}(\mathbb{R}^n)}={\bf E}_s(u^{\rm e},\mathbb{R}^{n+1}_+)=\inf\Big\{{\bf E}_s(v,\mathbb{R}^{n+1}_+): v\in H^1(\mathbb{R}^{n+1}_+,|z|^a{\rm d}{\bf x})\,,\; v=u\text{ on }\mathbb{R}^n \Big\} \end{equation} for every $u\in H^s(\mathbb{R}^n)$ (thanks to the choice of the normalisation factor $\boldsymbol{\delta}_s$ in \eqref{defweightDir}). \vskip3pt If $u\in \widehat{H}^{s}(\Omega)$ for some open set $\Omega\subseteq\mathbb{R}^n$, we have the following estimates on $u^{\rm e}$, which extend, in a localized form, the first equality in \eqref{normexth1/2}. \begin{lemma}\label{hatH1/2toH1} Let $\Omega\subseteq \mathbb{R}^n$ be an open set. For every $u\in \widehat{H}^{s}(\Omega)$, the extension $u^{\rm e}$ given by \eqref{poisson} belongs to $H^1(G,|z|^a{\rm d} \mathbf{x})\cap L^2_{{\rm loc}}\big(\overline{\mathbb{R}^{n+1}_+},|z|^a{\rm d} \mathbf{x}\big)$ for every bounded admissible open set $G\subseteq\mathbb{R}^{n+1}_+$ satisfying $\overline{\partial^0G}\subseteq\Omega$. In addition, for every point ${\bf x}_0=(x_0,0)\in\Omega\times\{0\}$ and $r>0$ such that $D_{3r}(x_0)\subseteq\Omega$, \begin{equation}\label{contruextL2} \|u^{\rm e}\|^2_{L^2(B_r^+({\bf x}_0),|z|^a{\rm d} \mathbf{x})}\leqslant C\left(r^{2}\mathcal{E}_s\big(u,D_{2r}(x_0)\big)+r^{2-2s}\|u\|^2_{L^2(D_{2r}(x_0))}\right)\,, \end{equation} and \begin{equation}\label{contruextH1weight} {\bf E}_s\big(u^{\rm e},B_r^+({\bf x}_0)\big)\leqslant C \mathcal{E}_s\big(u,D_{2r}(x_0)\big)\,, \end{equation} for a constant $C=C(n,s)$. \end{lemma} \begin{proof} Translating and rescaling variables, we can assume that $x_0=0$ and $r=1$.
Then \eqref{contruextL2} follows from \cite[Lemma 2.10]{MSK} (which is stated for $s\in(0,1/2)$, but the proof is in fact valid for any $s\in(0,1)$). Denote by $\bar u$ the average of $u$ over $D_2$. Noticing that $(u-\bar u)^{\rm e}=u^{\rm e}-\bar u$, and applying \cite[Lemma 2.10]{MSK} to $u-\bar u$ yields $${\bf E}_s(u^{\rm e},B_1^+)\leqslant C\big( \mathcal{E}_s(u,D_{2})+ \|u-\bar u\|^2_{L^2(D_2)}\big)\,.$$ On the other hand, by Poincar\'e's inequality in $H^s(D_2)$, we have $$ \|u-\bar u\|^2_{L^2(D_2)}\leqslant C [u]^2_{H^s(D_2)}\leqslant C \mathcal{E}_s(u,D_{2})\,,$$ and \eqref{contruextH1weight} follows. \end{proof} \begin{corollary}\label{contextHsH1} Let $\Omega\subseteq \mathbb{R}^n$ be an open set, and $G\subseteq \mathbb{R}^{n+1}_+$ a bounded admissible open set such that $\overline{\partial^0 G}\subseteq\Omega$. The extension operator $u\mapsto u^{\rm e}$ defines a continuous linear operator from $\widehat H^s(\Omega)$ into $H^1(G,|z|^a{\rm d}{\bf x})$. \end{corollary} \begin{proof} Set $\delta:={\rm dist}(\partial^0G,\Omega^c)$, and $$h_1:=\min\Big\{\frac{\delta}{12}\,,\, \inf\big\{{\rm dist}({\bf x},\partial\mathbb{R}^{n+1}_+): {\bf x}=(x,z)\in G\,,\;{\rm dist}((x,0),\partial^0G)\geqslant \delta/2\big\}\Big\}>0\,,$$ $$h_2:=\sup\Big\{{\rm dist}({\bf x},\partial\mathbb{R}^{n+1}_+): {\bf x}=(x,z)\in G\Big\}<+\infty\,.$$ We also consider a large radius $R>0$ in such a way that $G\subseteq D_R\times\mathbb{R}$, and we define $$\omega:=\Big\{x\in\mathbb{R}^n :{\rm dist}((x,0),\partial^0G)< \delta/2\Big\}\,,$$ and $$G_*:=\big(\omega\times(0,h_1]\big)\cup \big(D_R\times(h_1,h_2)\big)\,.$$ By construction, $G_*$ is a bounded admissible open set satisfying $\overline{\partial^0 G_*}\subseteq\Omega$ and $G\subseteq G_*$. Therefore, it is enough to show that the extension operator is continuous from $\widehat H^s(\Omega)$ into $H^1(G_*,|z|^a{\rm d}{\bf x})$. In other words, we can assume without loss of generality that $G=G_*$.
\vskip3pt Covering $\omega\times(0,h_1]$ by finitely many half balls $B^+_{\delta/6}({\bf x}_i)$ with ${\bf x}_i\in \omega\times\{0\}$, and applying Lemma \ref{hatH1/2toH1} in those balls, we infer that $u^{\rm e}\in H^1(\omega\times(0,h_1),|z|^a{\rm d}{\bf x})$, and $$\|u^{\rm e}\|^2_{H^1(\omega\times(0,h_1),|z|^a{\rm d}{\bf x})} \leqslant C_G\big(\mathcal{E}_s(u,\Omega)+\|u\|^2_{L^2(\Omega)}\big)\,, $$ for a constant $C_G=C_G(G,n,s)$. On the other hand, one may derive from formula \eqref{poisson} and Jensen's inequality that $$|u^{\rm e}({\bf x})|^2+ |\nabla u^{\rm e}({\bf x})|^2\leqslant C_G\int_{\mathbb{R}^n}\frac{|u(y)|^2}{(|x-y|^2+h_1^2)^{\frac{n+2s}{2}}}\,{\rm d} y\quad \forall {\bf x}=(x,z)\in D_R\times(h_1,h_2)\,.$$ It then follows from Lemma \ref{adminHchap} that $u^{\rm e}\in H^1( D_R\times(h_1,h_2),|z|^a{\rm d} {\bf x})$ with $$\|u^{\rm e}\|^2_{H^1(D_R\times(h_1,h_2),|z|^a{\rm d}{\bf x})} \leqslant C_G\big(\mathcal{E}_s(u,\Omega)+\|u\|^2_{L^2(\Omega)}\big)\,, $$ which completes the proof. \end{proof} Another useful fact about the extension by convolution with $\mathbf{P}_{n,s}$ is that it preserves some local H\"older continuity. This is classical and follows from the explicit formula (and regularity) of $\mathbf{P}_{n,s}$. Details are left to the reader. \begin{lemma}\label{HoldTransf} If $u\in L^\infty(\mathbb{R}^n)\cap C^{0,\beta}(D_R)$ for some $\beta\in(0,\min(1,2s))$, then $u^{\rm e}\in C^{0,\beta}(B^+_{R/4})$, and \begin{equation}\label{Holdtransfesti} R^{\beta}[u^{\rm e}]_{C^{0,\beta}(B^+_{R/4})}\leqslant C_\beta\big(R^{\beta}[u]_{C^{0,\beta}(D_R)}+\|u\|_{L^\infty(\mathbb{R}^n)}\big)\,, \end{equation} for a constant $C_\beta=C_\beta(\beta,n,s)$. \end{lemma} Let us now assume that $\Omega\subseteq\mathbb{R}^n$ is a bounded open set with Lipschitz boundary. If $u\in \widehat H^{s}(\Omega)$, the divergence free vector field $z^{a}\nabla u^{\rm e}$ admits a distributional normal trace on $\Omega$, that we denote by $\mathbf{\Lambda}^{(2s)}u$.
More precisely, we define $\mathbf{\Lambda}^{(2s)} u$ through its action on a test function $\varphi\in \mathscr{D}(\Omega)$ by setting \begin{equation}\label{defNeumOp} \left\langle \mathbf{\Lambda}^{(2s)} u, \varphi\right\rangle_\Omega := \int_{\mathbb{R}^{n+1}_+}z^{a}\nabla u^{\rm e}\cdot\nabla\Phi\,{\rm d} \mathbf{x}\,, \end{equation} where $\Phi$ is any smooth extension of $\varphi$ compactly supported in $\mathbb{R}_+^{n+1}\cup\Omega$. Note that the right-hand side of \eqref{defNeumOp} is well defined by Lemma~\ref{hatH1/2toH1}. By the divergence theorem, it is routine to check that the integral in \eqref{defNeumOp} does not depend on the choice of the extension $\Phi$. It can be thought of as a {\it fractional Dirichlet-to-Neumann operator}. Indeed, whenever $u$ is smooth, the distribution $\mathbf{\Lambda}^{(2s)}u$ is the pointwise-defined function given by $$\mathbf{\Lambda}^{(2s)} u(x)=-\lim_{z\downarrow0}z^{a}\partial_z u^{\rm e}(x,z)=2s\, \lim_{z\downarrow0} \frac{u^{\rm e}(x,0)-u^{\rm e}(x,z)}{z^{2s}}$$ at each point $x\in \Omega$. \vskip3pt In the case $\Omega=\mathbb{R}^n$, it has been proved in \cite{CaffSil} that $\mathbf{\Lambda}^{(2s)}$ coincides with $ (-\Delta)^{s} $, up to the multiplicative factor $\boldsymbol{\delta}_s$. In the localized setting, this identity still holds, see e.g. \cite[Lemma~2.12]{MSK} and \cite[Lemma 2.9]{MS}. \begin{lemma}\label{repnormderfraclap} If $\Omega\subseteq \mathbb{R}^n$ is a bounded open set with Lipschitz boundary, then $$ (-\Delta)^{s} = \boldsymbol{\delta}_s \mathbf{\Lambda}^{(2s)} \text{ on $\widehat H^{s}(\Omega)$}\,.$$ \end{lemma} One of the main consequences of Lemma \ref{repnormderfraclap} is a local counterpart of \eqref{normexth1/2} concerning the minimality of $u^{\rm e}$. This is the purpose of Corollary \ref{minenergdirchfrac} below, inspired from \cite[Lemma 7.2]{CRS}, and taken from \cite[Corollary 2.13]{MSK}. 
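Before that, let us sketch, for smooth $u$, where the first equality in the pointwise formula for $\mathbf{\Lambda}^{(2s)}$ above comes from (a formal computation). Since ${\rm div}(z^a\nabla u^{\rm e})=0$ in $\mathbb{R}^{n+1}_+$, the divergence theorem applied in $\{z>\varepsilon\}$ yields
$$\int_{\{z>\varepsilon\}}z^{a}\nabla u^{\rm e}\cdot\nabla\Phi\,{\rm d}\mathbf{x}=\int_{\{z>\varepsilon\}}{\rm div}\big(z^{a}\Phi\nabla u^{\rm e}\big)\,{\rm d}\mathbf{x}=-\int_{\mathbb{R}^n}\varepsilon^{a}\,\partial_zu^{\rm e}(x,\varepsilon)\,\Phi(x,\varepsilon)\,{\rm d} x\,,$$
the outer unit normal to $\{z>\varepsilon\}$ on $\{z=\varepsilon\}$ being $-e_{n+1}$. Letting $\varepsilon\downarrow0$ in \eqref{defNeumOp}, we obtain $\langle\mathbf{\Lambda}^{(2s)}u,\varphi\rangle_\Omega=-\lim_{\varepsilon\downarrow0}\int_{\mathbb{R}^n}\varepsilon^{a}\partial_zu^{\rm e}(x,\varepsilon)\varphi(x)\,{\rm d} x$, in accordance with the first limit above.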
\begin{corollary}\label{minenergdirchfrac} Let $\Omega\subseteq \mathbb{R}^n$ be a bounded open set, and $G\subseteq \mathbb{R}^{n+1}_+$ an admissible bounded open set such that $\overline{\partial^0 G}\subseteq \Omega$. Let $u\in \widehat H^{s}(\Omega;\mathbb{R}^d)$, and let $u^{\rm e}$ be its fractional harmonic extension to $\mathbb{R}^{n+1}_+$ given by~\eqref{poisson}. Then, \begin{equation}\label{ineqenergDfrac} {\bf E}_s(v,G)-{\bf E}_s(u^{\rm e},G) \geqslant \mathcal{E}_s(v,\Omega) -\mathcal{E}_s(u,\Omega) \end{equation} for all $v\in H^1(G;\mathbb{R}^d,|z|^a{\rm d}\mathbf{x})$ such that $v-u^{\rm e}$ is compactly supported in $G\cup \partial^0G$. In the right-hand side of \eqref{ineqenergDfrac}, the trace of $v$ on $\partial^0G$ is extended by $u$ outside $\partial^0G$. \end{corollary} \subsection{Inner variations, monotonicity formula, and density functions} In this section, our main goal is to present the {\sl monotonicity formula} satisfied by critical points of $\mathcal{E}_s(\cdot,\Omega)$ under {\sl inner variations}, i.e., by stationary points. We start recalling the notion of first inner variation, and then give an explicit formula to represent it. \begin{definition} Let $\Omega\subseteq \mathbb{R}^n$ be a bounded open set. Given a map $u\in \widehat H^s(\Omega;\mathbb{R}^d)$ and a vector field $X\in C^1(\mathbb{R}^n;\mathbb{R}^n)$ compactly supported in $\Omega$, the first (inner) variation of $\mathcal{E}_s(\cdot,\Omega)$ at $u$ and evaluated at $X$ is defined as $$\delta\mathcal{E}_s(u,\Omega)[X]:= \left[\frac{{\rm d}}{{\rm d} t}\mathcal{E}_s(u\circ\phi_{-t},\Omega)\right]_{t=0}\,,$$ where $\{\phi_t\}_{t\in\mathbb{R}}$ denotes the integral flow on $\mathbb{R}^n$ generated by $X$, i.e., for every $x\in\mathbb{R}^n$, the map $t\mapsto\phi_t(x)$ is defined as the unique solution of the ordinary differential equation $$ \begin{cases} \displaystyle \frac{{\rm d}}{{\rm d} t}\phi_t(x)=X\big(\phi_t(x)\big)\,,\\[5pt] \phi_0(x)=x\,. 
\end{cases} $$ \end{definition} The following representation result for $\delta\mathcal{E}_s$ was obtained in \cite[Corollary 2.14]{MSK} as a direct consequence of Corollary \ref{minenergdirchfrac}. We reproduce here the proof for completeness. \begin{proposition}\label{represfirstvar} Let $\Omega\subseteq \mathbb{R}^n$ be a bounded open set, and $G\subseteq \mathbb{R}^{n+1}_+$ an admissible bounded open set such that $\overline{\partial^0 G}\subseteq \Omega$. For each $u\in \widehat H^{s}(\Omega;\mathbb{R}^d)$, and each $X\in C^1(\mathbb{R}^n;\mathbb{R}^n)$ compactly supported in $\partial^0 G$, we have \begin{multline}\label{calcfirstvar} \delta\mathcal{E}_s(u,\Omega)[X]=\frac{\boldsymbol{\delta}_s}{2}\int_Gz^a\Big(|\nabla u^{\rm e}|^2{\rm div}{\bf X}-2\sum_{i,j=1}^{n+1}(\partial_iu^{\rm e}\cdot\partial_ju^{\rm e})\partial_j{\bf X}_i\Big)\,{\rm d}{\bf x}\\ +\frac{\boldsymbol{\delta}_s a}{2}\int_Gz^{a-1}|\nabla u^{\rm e}|^2{\bf X}_{n+1}\,{\rm d}{\bf x}\,, \end{multline} where ${\bf X}=({\bf X}_1,\ldots,{\bf X}_{n+1})\in C^1(\overline G;\mathbb{R}^{n+1})$ is any vector field compactly supported in $G\cup\partial^0G$, and satisfying ${\bf X}=(X,0)$ on $\partial^0G$. \end{proposition} \begin{proof} Let ${\bf X}\in C^1(\overline G,\mathbb{R}^{n+1})$ be an arbitrary vector field compactly supported in $G\cup\partial^0G$ and satisfying ${\bf X}=(X,0)$ on $\partial^0G$. We consider a compactly supported $C^1$-extension of ${\bf X}$ to the whole space $\mathbb{R}^{n+1}$, still denoted by ${\bf X}$, such that ${\bf X}=(X,0)$ on $\mathbb{R}^n\times\{0\}\simeq\mathbb{R}^n$. We define $\{\boldsymbol{\Phi}_t\}_{t\in\mathbb{R}}$ as the integral flow on $\mathbb{R}^{n+1}$ generated by ${\bf X}$. Observe that $\boldsymbol{\Phi}_t=(\phi_t,0)$ on~$\mathbb{R}^n$, and ${\rm spt}(\boldsymbol{\Phi}_t-{\rm id}_{\mathbb{R}^{n+1}})\cap\overline{\mathbb{R}^{n+1}_+}\subseteq G\cup\partial^0G$. 
Then, $v_t:=u^{\rm e}\circ\boldsymbol{\Phi}_{-t}\in H^1(G;\mathbb{R}^d,|z|^a{\rm d}\mathbf{x})$ and ${\rm spt}(v_t-u^{\rm e})\subseteq G\cup\partial^0G$. By Corollary \ref{minenergdirchfrac}, we have \begin{equation}\label{calcfirstvartrick} {\bf E}_s(v_t,G)-{\bf E}_s(u^{\rm e},G) \geqslant \mathcal{E}_s(v_t,\Omega) -\mathcal{E}_s(u,\Omega)\quad\forall t\in\mathbb{R}\,. \end{equation} Since $v_t=u\circ\phi_{-t}$ on $\mathbb{R}^n$, dividing both sides of \eqref{calcfirstvartrick} by $t\not=0$ (which reverses the inequality for $t<0$), and letting $t\downarrow0$ and $t\uparrow0$ yields two opposite inequalities between the derivatives at $t=0$, whence \begin{equation}\label{equalitfirstvars} \delta\mathcal{E}_s(u,\Omega)[X]= \left[\frac{{\rm d}}{{\rm d} t}\mathbf{E}_s(u^{\rm e}\circ\boldsymbol{\Phi}_{-t},G)\right]_{t=0}\,. \end{equation} On the other hand, standard computations (see e.g. \cite[Chapter 2.2]{Sim}) show that the right-hand side of \eqref{equalitfirstvars} is equal to the right-hand side of \eqref{calcfirstvar}. \end{proof} \begin{definition}\label{defstatmap} Let $\Omega\subseteq \mathbb{R}^n$ be a bounded open set. A map $u\in\widehat H^s(\Omega;\mathbb{R}^d)$ is said to be {\sl stationary} in $\Omega$ if $\delta\mathcal{E}_s(u,\Omega)=0$. \end{definition} As we shall see in the next sections, stationarity is a crucial ingredient in the partial regularity theory since it implies the aforementioned monotonicity formula. This is the purpose of the following proposition whose proof follows exactly \cite[Proof of Lemma 4.2]{MSK} using vector fields in \eqref{calcfirstvar} of the form ${\bf X}=\eta(|{\bf x}-{\bf x}_0|)({\bf x}-{\bf x}_0)$ with $\eta(t)\sim \chi_{[0,r]}(t)$. \begin{proposition}\label{monotformula} Let $\Omega\subseteq \mathbb{R}^n$ be a bounded open set.
If $u\in\widehat H^s(\Omega;\mathbb{R}^d)$ is stationary in $\Omega$, then for every ${\bf x}_0=(x_0,0)\in\Omega\times\{0\}$, the ``density function'' $$r\in(0,{\rm dist}(x_0,\Omega^c))\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,r):=\frac{1}{r^{n-2s}}{\bf E}_s(u^{\rm e},B^+_r({\bf x}_0))$$ is nondecreasing. Moreover, $$\boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,r)-\boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,\rho) = \boldsymbol{\delta}_s\int_{B^+_r({\bf x}_0)\setminus B^+_\rho({\bf x}_0)}z^a\frac{|({\bf x}-{\bf x}_0)\cdot\nabla u^{\rm e}|^2}{|{\bf x}-{\bf x}_0|^{n+2-2s}}\,{\rm d} {\bf x}$$ for every $0<\rho<r<{\rm dist}(x_0,\Omega^c)$. \end{proposition} As a straightforward consequence, we have \begin{corollary}\label{corolmonotform} Let $\Omega\subseteq \mathbb{R}^n$ be a bounded open set. If $u\in\widehat H^s(\Omega;\mathbb{R}^d)$ is stationary in $\Omega$, then for every $x_0\in\Omega$, the limit \begin{equation}\label{deflimitdens} \boldsymbol{\Xi}_s(u,x_0):=\lim_{r\to 0} \boldsymbol{\Theta}_s\big(u^{\rm e}, (x_0,0),r\big) \end{equation} exists, and the function $\boldsymbol{\Xi}_s(u,\cdot):\Omega\to[0,\infty)$ is upper semicontinuous. In addition, for every ${\bf x}_0=(x_0,0)\in\Omega\times\{0\}$, \begin{equation}\label{monotformCor} \boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,r)-\boldsymbol{\Xi}_s(u,x_0) = \boldsymbol{\delta}_s\int_{B^+_r({\bf x}_0)}z^a\frac{|({\bf x}-{\bf x}_0)\cdot\nabla u^{\rm e}|^2}{|{\bf x}-{\bf x}_0|^{n+2-2s}}\,{\rm d} {\bf x} \end{equation} for every $0<r<{\rm dist}(x_0,\Omega^c)$. \end{corollary} \begin{proof} The existence of the limit in \eqref{deflimitdens} and \eqref{monotformCor} are direct consequences of the monotonicity formula established in Proposition \ref{monotformula}. Then the function $\boldsymbol{\Xi}_s(u,\cdot)$ is upper semicontinuous as a pointwise limit of a decreasing family of continuous functions. 
\end{proof} As we previously said, the monotonicity of the density function $r\mapsto\boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,r)$ is one of the most important ingredients to obtain partial regularity. We shall see in the next sections that the density function relative to the nonlocal energy $\mathcal{E}_s$ also plays a role. For $u\in \widehat H^s(\Omega;\mathbb{R}^d)$ and a point $x_0\in\Omega$, we define the density function $r\in(0,{\rm dist}(x_0,\Omega^c))\mapsto \boldsymbol{\theta}_s(u,x_0,r)$ by setting \begin{equation}\label{defnonlocdensit} \boldsymbol{\theta}_s(u,x_0,r):=\frac{1}{r^{n-2s}}\mathcal{E}_s\big(u,D_r(x_0)\big)\,. \end{equation} Now we aim to show that one density function is small if and only if the other one is small at a comparable scale. This is the purpose of the following lemma. \begin{lemma}\label{compardensities} Let $\Omega\subseteq\mathbb{R}^n$ be an open set, and $u\in\widehat H^s(\Omega;\mathbb{R}^d)\cap L^\infty(\mathbb{R}^n)$ be such that $\|u\|_{L^\infty(\mathbb{R}^n)}\leqslant M$. For every $\varepsilon>0$, there exist $\delta=\delta(n,s,M,\varepsilon)>0$ and $\alpha=\alpha(n,s,M,\varepsilon)\in(0,1/4]$ such that $$\boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,r)\leqslant\delta\quad\Longrightarrow\quad \boldsymbol{\theta}_s(u,x_0,\alpha r)\leqslant\varepsilon$$ for every ${\bf x}_0=(x_0,0)\in\Omega\times\{0\}$ and $r>0$ satisfying $\overline D_{r}(x_0)\subseteq\Omega$. \end{lemma} \begin{proof} Without loss of generality, we can assume that $x_0=0$. Fix $\varepsilon>0$; the parameter $\alpha\in(0,1/4]$ will be chosen later on.
Using Lemma \ref{HsregtraceH1weight}, we first estimate \begin{align*} \mathcal{E}_s(u,D_{\alpha r})& \leqslant \frac{\gamma_{n,s}}{4}\iint_{D_{r/2}\times D_{r/2}}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y+ \frac{\gamma_{n,s}}{2}\iint_{D_{\alpha r}\times D^c_{r/2}}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \\ &\leqslant C_1 {\bf E}_s(u^{\rm e},B^+_{r}) + 2M^2\gamma_{n,s}\iint_{D_{\alpha r}\times D^c_{r/2}}\frac{{\rm d} x{\rm d} y}{|x-y|^{n+2s}}\,, \end{align*} where $C_1=C_1(n,s)>0$. Observe that for $(x,y)\in D_{\alpha r}\times D^c_{r/2}$, we have $|x-y|\geqslant |y|-\alpha r\geqslant \frac{1}{2}|y|$, so that $$2\gamma_{n,s}\iint_{D_{\alpha r}\times D^c_{r/2}}\frac{{\rm d} x{\rm d} y}{|x-y|^{n+2s}}\leqslant 2^{n+2s+1}\gamma_{n,s}\iint_{D_{\alpha r}\times D^c_{r/2}}\frac{{\rm d} x{\rm d} y}{|y|^{n+2s}}=C_2\alpha^nr^{n-2s}\,,$$ where $C_2=C_2(n,s)>0$. Consequently, $$ \boldsymbol{\theta}_s(u,0,\alpha r)\leqslant \frac{C_1}{\alpha^{n-2s}} \boldsymbol{\Theta}_s(u^{\rm e},0,r)+ C_2M^2\alpha^{2s}\,. $$ Choosing $$\alpha=\min\Big\{1/4, \Big(\frac{\varepsilon}{2C_2M^2}\Big)^{1/2s}\Big\} \quad\text{and}\quad \delta:=\frac{\alpha^{n-2s}\varepsilon}{2C_1}\,, $$ provides the desired conclusion. \end{proof} \begin{corollary}\label{corequivvanishdensities} Let $\Omega\subseteq\mathbb{R}^n$ be an open set. If $u\in\widehat H^s(\Omega;\mathbb{R}^d)\cap L^\infty(\mathbb{R}^n)$, then $$\lim_{r\to 0} \boldsymbol{\theta}_s(u,x_0,r)=0\quad \Longleftrightarrow \quad \lim_{r\to 0} \boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,r)=0$$ for every ${\bf x}_0=(x_0,0)\in\Omega\times\{0\}$. \end{corollary} \begin{proof} By Lemma \ref{hatH1/2toH1}, we have $$ \boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,r)\leqslant C \boldsymbol{\theta}_s(u,x_0,2r)\,,$$ for a constant $C>0$ depending only on $n$ and $s$, and the implication $\Longrightarrow$ follows. The reverse implication is a straightforward application of Lemma \ref{compardensities}.
\end{proof} \subsection{Energy monotonicity and mean oscillation estimates} In the light of Proposition~\ref{monotformula}, the purpose of this section is to show a mean oscillation estimate for maps having a nondecreasing density function at every point. For $v\in H^1(B^+_R;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$, a point ${\bf x}_0\in\partial^0B^+_R$, and $r\in(0,R-|{\bf x}_0|)$, we keep the notation $$\boldsymbol{\Theta}_s(v,{\bf x}_0,r):=\frac{1}{r^{n-2s}}\mathbf{E}_s\big(v,B^+_r({\bf x}_0)\big)\,.$$ The main estimate is the following. \begin{lemma}\label{cutoffbmo1} Let $v\in H^1(B^+_{R};\mathbb{R}^d,|z|^a{\rm d} \mathbf{x})$ and $\zeta\in \mathscr{D}(D_{5R/8})$ be such that $0\leqslant \zeta\leqslant 1$, $\zeta\equiv 1$ in $D_{R/2}$, and $|\nabla \zeta|\leqslant L R^{-1}$ for some constant $L>0$. Assume that for every ${\bf x}\in\partial^0 B^+_{R}$, the density function $r\in(0,R-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(v,{\bf x},r)$ is nondecreasing. Then $(\zeta v)_{|\mathbb{R}^n}$ belongs to ${\rm BMO}(\mathbb{R}^n)$ and $$[\zeta v]^2_{{\rm BMO}(\mathbb{R}^n)}\leqslant C_L \big(\boldsymbol{\Theta}_s(v,0,R)+R^{2s-2-n}\|v\|^2_{L^2(B_R^+,|z|^a{\rm d} \mathbf{x})}\big) $$ for a constant $C_L=C(L,n,s)$. \end{lemma} Before proving this lemma, let us recall that $u\in L^1(D_R)$ belongs to ${\rm BMO}(D_R)$ if $$[u]_{{\rm BMO}(D_R)}:= \sup_{D_r(y)\subseteq D_R}\Xint-_{D_r(y)}|u-(u)_{y,r}|\,{\rm d} x<+\infty \,,$$ where $(u)_{y,r}$ denotes the average of $u$ over the ball $D_r(y)$. To prove Lemma \ref{cutoffbmo1}, we shall make use of the well-known John-Nirenberg inequality, see e.g. \cite[Section 6.3]{GiaMa}. \begin{lemma}\label{JohnNir} Let $u\in {\rm BMO}(D_R)$.
For every $p\in[1,\infty)$, there exists a constant $C_p=C_p(n,p)$ such that $$ [u]^p_{{\rm BMO}(D_R)}\leqslant \sup_{D_r(y)\subseteq D_R}\, \Xint-_{D_r(y)}|u-(u)_{y,r}|^p\,{\rm d} x\leqslant C_p [u]^p_{{\rm BMO}(D_R)}\,.$$ \end{lemma} \vskip5pt \begin{proof}[Proof of Lemma \ref{cutoffbmo1}] {\it Step 1.} Rescaling variables, we may assume that $R=1$. Let us fix an arbitrary ball $D_r(y)\subseteq D_{1}$ with $y\in D_{7/8}$ and $0<r\leqslant 1/8$. Using the Poincar\'e inequality in Lemma \ref{poincarebdry} and the monotonicity assumption on $\boldsymbol{\Theta}_s(v,{\bf x},\cdot)$, we estimate $$\frac{1}{r^n}\int_{D_r(y)}\big|v-(v)_{y,r}\big|\,{\rm d} x\leqslant C \sqrt{\boldsymbol{\Theta}_s(v,{\bf y},r)}\leqslant C \sqrt{\boldsymbol{\Theta}_s(v,{\bf y},1/8)}\leqslant C \sqrt{\boldsymbol{\Theta}_s(v,0,1)}\,,$$ where ${\bf y}=(y,0)$ and $C=C(n,s)$. In particular, $v_{|D_{7/8}}$ belongs to ${\rm BMO}(D_{7/8})$, and \begin{equation}\label{preestbmo} [v]_{{\rm BMO}(D_{7/8})}\leqslant C \sqrt{\boldsymbol{\Theta}_s(v,0,1)}\,. \end{equation} By the John-Nirenberg inequality in Lemma \ref{JohnNir}, inequality \eqref{preestbmo}, the continuity of the trace operator (see Section \ref{secWeightSob}), and H\"older's inequality, it follows that \begin{multline}\label{Lnthrubmo} \|v\|_{L^n(D_{7/8})}\leqslant \big\|v-(v)_{0,7/8}\big\|_{L^n(D_{7/8})}+ C \|v\|_{L^1(D_{7/8})}\\ \leqslant C\Big( [v]_{{\rm BMO}(D_{7/8})}+ \|v\|_{L^1(D_{1})}\Big)\leqslant C\Big( \sqrt{\boldsymbol{\Theta}_s(v,0,1)}+ \|v\|_{L^2(B_1^+,|z|^a{\rm d} \mathbf{x})}\Big)\,. \end{multline} \noindent{\it Step 2.} Let us now consider a ball $D_r(y)\subseteq D_{7/8}$ with $y\in D_{3/4}$ and $0<r\leqslant 1/8$. 
Since $$|\zeta v - (\zeta v)_{y,r}|\leqslant |\zeta v - \zeta (v)_{y,r}|+|\zeta (v)_{y,r} - (\zeta v)_{y,r}|\leqslant | v - (v)_{y,r}|+ L r\, \Xint-_{D_r(y)}|v|\,{\rm d} x\quad\text{on $D_{7/8}$}\,,$$ we can deduce from \eqref{preestbmo} and \eqref{Lnthrubmo} that \begin{multline*} \frac{1}{r^n} \int_{D_r(y)}\big|\zeta v-(\zeta v)_{y,r}\big|\,{\rm d} x \leqslant C_L\Big( \sqrt{\boldsymbol{\Theta}_s(v,0,1)} +r^{1-n}\|v\|_{L^1(D_r(y))} \Big)\\ \leqslant C_L\Big( \sqrt{\boldsymbol{\Theta}_s(v,0,1)} +\|v\|_{L^n(D_{7/8})} \Big) \leqslant C_L\Big( \sqrt{\boldsymbol{\Theta}_s(v,0,1)} +\|v\|_{L^2(B_1^+,|z|^a{\rm d} \mathbf{x})} \Big)\,, \end{multline*} for a constant $C_L=C(L,n,s)$. Next, for a ball $D_r(y)$ with $y\not\in D_{3/4}$ and $0<r\leqslant 1/8$, we have $$ \frac{1}{r^n} \int_{D_r(y)}\big|\zeta v-(\zeta v)_{y,r}\big|\,{\rm d} x =0\,,$$ since $\zeta$ is supported in $D_{5/8}$. Finally, for a ball $D_r(y)$ with $r>1/8$, we estimate $$\frac{1}{r^n} \int_{D_r(y)}\big|\zeta v-(\zeta v)_{y,r}\big|\,{\rm d} x \leqslant C\int_{D_1}|\zeta v|\,{\rm d} x \leqslant C\|v\|_{L^1(D_1)}\leqslant C\|v\|_{L^2(B_1^+,|z|^a{\rm d} \mathbf{x})}\,,$$ which completes the proof. \end{proof} \begin{corollary}\label{coroBMO} Let $u\in \widehat H^s(D_{2R};\mathbb{R}^d)$ and $\zeta\in \mathscr{D}(D_{5R/8})$ be as in Lemma \ref{cutoffbmo1}. Assume that for every ${\bf x}\in\partial^0 B^+_{R}$, the density function $r\in(0,2R-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x},r)$ is nondecreasing. Then $\zeta u$ belongs to ${\rm BMO}(\mathbb{R}^n)$ and $$[\zeta u]^2_{{\rm BMO}(\mathbb{R}^n)}\leqslant C_L \big(\boldsymbol{\theta}_s(u,0,2R)+R^{-n}\|u\|^2_{L^2(D_{2R})}\big)\,,$$ for a constant $C_L=C(L,n,s)>0$. \end{corollary} \begin{proof} Apply Lemma \ref{cutoffbmo1} to $u^{\rm e}$ in $B_R^+$, and then conclude with the help of Lemma \ref{hatH1/2toH1}.
\end{proof} \section{Fractional harmonic maps and weighted harmonic maps with free boundary}\label{ELandConsLaw} In this section, our goal is to review in detail the notion of weakly $s$-harmonic maps, the associated Euler-Lagrange equation, and, more importantly, to present its characterization in terms of fractional (nonlocal) conservation laws. We shall also prove at the end of this section that the fractional harmonic extension of an $s$-harmonic map satisfies a suitable (degenerate) partially free boundary condition, in the spirit of the classical harmonic map system with partially free boundary. \subsection{Fractional harmonic maps into spheres and conservation laws} \begin{definition} Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set. A map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is said to be {\sl a weakly $s$-harmonic map} in $\Omega$ (with values in $\mathbb{S}^{d-1}$) if $$\left[\frac{{\rm d}}{{\rm d} t} \mathcal{E}_s\Big(\frac{u+t\varphi}{|u+t\varphi|},\Omega\Big)\right]_{t=0}=0\qquad\forall \varphi\in\mathscr{D}(\Omega;\mathbb{R}^d) \,.$$ If $u$ is also stationary in $\Omega$ (in the sense of Definition \ref{defstatmap}), we say that $u$ is {\sl a stationary weakly $s$-harmonic map} in $\Omega$. \end{definition} \begin{definition} Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set. A map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is said to be {\sl a minimizing $s$-harmonic map} in $\Omega$ (with values in $\mathbb{S}^{d-1}$) if $$\mathcal{E}_s(u,\Omega)\leqslant \mathcal{E}_s(w,\Omega) $$ for every $w\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ such that ${\rm spt}(u-w)$ is compactly included in $\Omega$. \end{definition} \begin{remark}\label{implicminstat} A minimizing $s$-harmonic map in $\Omega$ is obviously a critical point with respect to both inner and (constrained) outer variations of the energy. In other words, if $u$ is a minimizing $s$-harmonic map in $\Omega$, then $u$ is also a stationary weakly $s$-harmonic map in $\Omega$.
\end{remark} \begin{remark} If $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is a weakly $s$-harmonic map in $\Omega$ (stationary, minimizing, respectively), then $u$ is also weakly $s$-harmonic in $\Omega^\prime$ (stationary, minimizing, respectively) for any open subset $\Omega^\prime\subseteq\Omega$. It can be directly checked from the definitions, or one can rely on the Euler-Lagrange equation presented below and Remark \ref{localityLapls}. \end{remark} \begin{proposition}\label{ELeqprop} Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set. A map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is weakly $s$-harmonic in $\Omega$ if and only if \begin{equation}\label{ELtangtestfct} \big\langle (-\Delta)^su,\varphi\big\rangle_\Omega=0 \end{equation} for every $\varphi\in H^s_{00}(\Omega;\mathbb{R}^d)$ such that ${\rm spt}(\varphi)\subseteq\Omega$ and $\varphi(x)\in {\rm Tan}(u(x),\mathbb{S}^{d-1})$ for a.e. $x\in\Omega$. Equivalently, \begin{equation}\label{ELeqorig} (-\Delta)^su(x)= \Big(\frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y\Big)u(x)\quad\text{in $\mathscr{D}^\prime(\Omega)$}\,. \end{equation} \end{proposition} \begin{proof} Let $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$, fix $\varphi\in\mathscr{D}(\Omega;\mathbb{R}^d)$, and notice that $$\left[\frac{{\rm d}}{{\rm d} t}\Big(\frac{u+t\varphi}{|u+t\varphi|}\Big) \right]_{t=0}=\varphi-(u\cdot\varphi)u \in H^{s}_{00}(\Omega;\mathbb{R}^d)\,.$$ Hence, $$\left[\frac{{\rm d}}{{\rm d} t} \mathcal{E}_s\Big(\frac{u+t\varphi}{|u+t\varphi|},\Omega\Big)\right]_{t=0}=\big\langle (-\Delta)^s u, \varphi\big\rangle_\Omega-\big\langle (-\Delta)^s u, (u\cdot\varphi)u\big\rangle_\Omega\,. 
$$ On the other hand, since $|u|^2=1$, we have \begin{multline*} \big(u(x)-u(y)\big)\cdot\big((u(x)\cdot\varphi(x))u(x)-(u(y)\cdot\varphi(y))u(y) \big)\\ =\frac{1}{2}|u(x)- u(y)|^2u(x)\cdot\varphi(x) +\frac{1}{2}|u(x)- u(y)|^2u(y)\cdot\varphi(y)\,, \end{multline*} and it follows that \begin{equation}\label{computlagrangmult} \big\langle (-\Delta)^s u, (u\cdot\varphi)u\big\rangle_\Omega=\int_\Omega\Big(\frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y\Big)u(x)\cdot\varphi(x)\,{\rm d} x\,. \end{equation} Consequently, $u$ is weakly $s$-harmonic in $\Omega$ if and only if \eqref{ELeqorig} holds. By approximation, \eqref{ELeqorig} also holds for any test function $\varphi\in H^s_{00}(\Omega;\mathbb{R}^d)\cap L^\infty(\mathbb{R}^n)$ compactly supported in $\Omega$. In view of the right-hand side of \eqref{ELeqorig}, \eqref{ELtangtestfct} clearly holds for every $\varphi\in H^s_{00}(\Omega;\mathbb{R}^d)\cap L^\infty(\mathbb{R}^n)$ compactly supported in $\Omega$ and satisfying $\varphi\cdot u=0$. By a standard truncation argument, it implies that \eqref{ELtangtestfct} holds for every $\varphi\in H^s_{00}(\Omega;\mathbb{R}^d)$ compactly supported in $\Omega$ and satisfying $\varphi\cdot u=0$. The other way around, if \eqref{ELtangtestfct} holds, then the map $\varphi-(u\cdot\varphi)u$ with $\varphi\in \mathscr{D}(\Omega;\mathbb{R}^d)$ is admissible, and \eqref{ELtangtestfct} combined with \eqref{computlagrangmult} shows that \eqref{ELeqorig} holds, i.e., $u$ is weakly $s$-harmonic in $\Omega$. \end{proof} \begin{remark}\label{remsLaplorthog} The variational equation \eqref{ELtangtestfct} corresponds to the weak formulation of the implicit equation $$(-\Delta)^su\perp {\rm Tan}(u,\mathbb{S}^{d-1})\quad\text{in $\Omega$}\,,$$ and in equation \eqref{ELeqorig}, the Lagrange multiplier associated with the $\mathbb{S}^{d-1}$-constraint is made explicit. 
\end{remark} \begin{remark}\label{remsmothhimplstat} A weakly $s$-harmonic map $u$ in $\Omega$ which is smooth in $\Omega$ is stationary in $\Omega$. Indeed, if $X\in C^1(\Omega;\mathbb{R}^n)$ is compactly supported in $\Omega$, the smoothness of $u$ implies that $$\delta\mathcal{E}_s(u,\Omega)[X]= \big\langle (-\Delta)^su,X\cdot\nabla u\big\rangle_\Omega\,.$$ Since $|u|^2=1$, we have $(X\cdot\nabla u)\cdot u=0$, and thus $\delta\mathcal{E}_s(u,\Omega)[X]=0$. \end{remark} Now we rewrite the Euler-Lagrange equation \eqref{ELeqorig} in a more compact form using the fractional $s$-gradient ${\rm d}_su$ defined in Subsection \ref{sectoperandcompcomp}. More precisely, if $u=:(u^1,\ldots,u^d)$, then $$\frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y=\sum_{j=1}^d \frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}\frac{|u^j(x)-u^j(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y= \sum_{j=1}^d|{\rm d}_su^j|^2=:|{\rm d}_su|^2\,,$$ according to \eqref{modsqsvectfield} and \eqref{defsgrad}. We can thus rephrase Proposition \ref{ELeqprop} as follows: $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is weakly $s$-harmonic in $\Omega$ if and only if \begin{equation}\label{ELeqsgrad} (-\Delta)^su=|{\rm d}_su|^2u \quad\text{in $\mathscr{D}^\prime(\Omega)$}\,. \end{equation} Our aim is to further rewrite equation \eqref{ELeqsgrad}, or more precisely its right-hand side, to reveal the fractional ``div-curl structure'' of Section \ref{sectoperandcompcomp} in the spirit of the well-known div-curl structure hidden in the classical equation for harmonic maps into spheres \cite{Hel1}. 
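In the computations below, the unit-length constraint will enter through the following elementary identity, which we record here for later reference: since $|u(x)|=|u(y)|=1$ for a.e. $x,y\in\mathbb{R}^n$,
$$\sum_{j=1}^d\big(u^j(x)-u^j(y)\big)u^j(x)=1-u(x)\cdot u(y)=\frac{1}{2}\,|u(x)-u(y)|^2\,.$$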
Following \cite{MazSchi}, the starting point is to notice that for each $i,j\in\{1,\dots,d\}$, \begin{align}\label{dopethrone} \nonumber |{\rm d}_su^j|^2(x)u^i(x)&=\int_{\mathbb{R}^n} \frac{u^i(x){\rm d}_su^j(x,y){\rm d}_su^j(x,y)}{|x-y|^n}\,{\rm d} y\\ &=\begin{multlined}[t] \int_{\mathbb{R}^n} \frac{u^i(x){\rm d}_su^j(x,y)-u^j(x){\rm d}_su^i(x,y)}{|x-y|^n}\,{\rm d}_su^j(x,y)\,{\rm d} y\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+({\rm d}_su^i\odot{\rm d}_su^j)(x)u^j(x)\,. \end{multlined} \end{align} Then, since $|u|^2=1$, we have \begin{align}\label{dopethrone2} \nonumber\sum_{j=1}^d ({\rm d}_su^i\odot{\rm d}_su^j)(x)u^j(x)&=\sum_{j=1}^d\frac{\gamma_{n,s}}{2}\int_{\mathbb{R}^n}\frac{\big(u^j(x)-u^j(y)\big)u^j(x)}{|x-y|^{n+2s}}\big(u^i(x)-u^i(y)\big)\,{\rm d} y\\ &= \frac{\gamma_{n,s}}{4}\int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\big(u^i(x)-u^i(y)\big)\,{\rm d} y\,. \end{align} We can now introduce for $i,j\in\{1,\ldots,d\}$, \begin{equation}\label{defOmeij} \boldsymbol{\Omega}^{ij}(x,y):=u^i(x){\rm d}_su^j(x,y)-u^j(x){\rm d}_su^i(x,y) \in L^2_{\rm od}(\Omega)\,, \end{equation} and \begin{equation}\label{defTi} T^i(x):=\frac{\gamma_{n,s}}{4} \int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\big(u^i(x)-u^i(y)\big)\,{\rm d} y \in L^1(\Omega)\,, \end{equation} to derive from \eqref{dopethrone} and \eqref{dopethrone2} the following reformulation of equation \eqref{ELeqsgrad}. \begin{lemma}\label{rewritingsharmeq} Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set. A map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is weakly $s$-harmonic in $\Omega$ if and only if \begin{equation}\label{rewriteEL} (-\Delta)^su^i=\Big(\sum_{j=1}^d\boldsymbol{\Omega}^{ij}\odot{\rm d}_su^j \Big) +T^i \quad\text{in $\mathscr{D}^\prime(\Omega)$} \end{equation} for every $i=1,\ldots,d$, where $\boldsymbol{\Omega}^{ij}$ and $T^i$ are given by \eqref{defOmeij} and \eqref{defTi}, respectively. 
\end{lemma} \begin{remark} The presence of the extra term $T^i$ in \eqref{rewriteEL}, compared to the classical harmonic map equation (see \cite{Hel1}), is essentially due to the fact that the $s$-gradient ${\rm d}_su$ is not tangent to the target sphere. \end{remark} The fundamental observation made in \cite[Lemma 3.1]{MazSchi} for $\Omega=\mathbb{R}$ and $s=1/2$ is a characterization of the $1/2$-harmonic map equation in terms of nonlocal conservation laws satisfied by the $\boldsymbol{\Omega}^{ij}$'s (thus extending \cite{Shat} to the fractional setting). In the following proposition, we slightly generalize this result to a domain of arbitrary dimension and $s\in(0,1)$. The proof remains essentially the same, and we provide it for the reader's convenience. \begin{proposition}\label{propconservlaws} Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set with Lipschitz boundary. A map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is weakly $s$-harmonic in $\Omega$ if and only if \begin{equation}\label{conservlaw} {\rm div}_s\,\boldsymbol{\Omega}^{ij}=0\quad\text{in $H^{-s}(\Omega)$} \end{equation} for each $i,j\in\{1,\ldots,d\}$, where $\boldsymbol{\Omega}^{ij}$ is given by \eqref{defOmeij}. \end{proposition} \begin{proof} {\it Step 1.} Assume that $u$ is a weakly $s$-harmonic map in $\Omega$, and let us compute ${\rm div}_s\,\boldsymbol{\Omega}^{ij}$. For $\varphi\in\mathscr{D}(\Omega)$, we have \begin{multline*} \int_{\mathbb{R}^n} \boldsymbol{\Omega}^{ij}\odot {\rm d}_s\varphi\,{\rm d} x=\\ \iint_{(\mathbb{R}^n\times\mathbb{R}^n)\setminus(\Omega^c\times\Omega^c)}\big(u^i(x){\rm d}_su^j(x,y){\rm d}_s\varphi(x,y)-u^j(x){\rm d}_su^i(x,y){\rm d}_s\varphi(x,y)\big)\,\frac{{\rm d} x{\rm d} y}{|x-y|^n}\,. 
\end{multline*} An elementary computation shows that $$\begin{cases} u^i(x){\rm d}_s\varphi(x,y)={\rm d}_s(u^i\varphi)(x,y) - \varphi(y){\rm d}_su^i(x,y)\\ u^j(x){\rm d}_s\varphi(x,y)={\rm d}_s(u^j\varphi)(x,y) - \varphi(y){\rm d}_su^j(x,y) \end{cases},$$ so that $$\int_{\mathbb{R}^n} \boldsymbol{\Omega}^{ij}\odot {\rm d}_s\varphi\,{\rm d} x= \int_{\mathbb{R}^n} {\rm d}_su^j \odot{\rm d}_s(u^i\varphi) \,{\rm d} x- \int_{\mathbb{R}^n} {\rm d}_su^i \odot{\rm d}_s(u^j\varphi) \,{\rm d} x\,.$$ Since $u^j\varphi$ and $u^i\varphi$ belong to $H^s_{00}(\Omega)$, we infer from Proposition \ref{fracintegbypart} and equation \eqref{ELeqsgrad} that \begin{align} \label{eyehategod}\int_{\mathbb{R}^n} \boldsymbol{\Omega}^{ij}\odot {\rm d}_s\varphi\,{\rm d} x&=\big\langle (-\Delta)^s u^j, u^i\varphi\big\rangle_\Omega- \big\langle (-\Delta)^s u^i, u^j\varphi\big\rangle_\Omega\\ \nonumber &=\int_{\Omega}|{\rm d}_su|^2u^ju^i\varphi\,{\rm d} x-\int_{\Omega}|{\rm d}_su|^2u^iu^j\varphi\,{\rm d} x=0\,. \end{align} Therefore ${\rm div}_s\,\boldsymbol{\Omega}^{ij}=0$ in $\mathscr{D}^\prime(\Omega)$, and by approximation also in $H^{-s}(\Omega)$ (see \eqref{densitysmoothH1/200}). \vskip5pt \noindent{\it Step 2.} We assume that \eqref{conservlaw} holds, and we aim to prove that \eqref{ELeqsgrad} holds. We fix $\varphi\in\mathscr{D}(\Omega;\mathbb{R}^d)$, and we set $\psi:=\varphi-(u\cdot\varphi)u\in H^s_{00}(\Omega;\mathbb{R}^d)$, which satisfies $\psi\cdot u=0$ a.e. in $\mathbb{R}^n$. As in the proof of Proposition \ref{ELeqprop}, proving \eqref{ELeqsgrad} reduces to showing that $$\big\langle(-\Delta)^su,\psi\big\rangle_\Omega=0\,. 
$$ Using $|u|^2=1$, we first observe that $$\big\langle(-\Delta)^su,\psi\big\rangle_\Omega=\sum_{i=1}^d\big\langle(-\Delta)^su^i,\psi^i\big\rangle_\Omega= \sum_{i,j=1}^d\big\langle(-\Delta)^su^i,(\psi^iu^j)u^j\big\rangle_\Omega\,.$$ Since $\psi^iu^j\in H^s_{00}(\Omega)$, we obtain as in \eqref{eyehategod}, \begin{multline*} \big\langle(-\Delta)^su^i,(\psi^iu^j)u^j\big\rangle_\Omega=\big\langle(-\Delta)^su^j,(\psi^iu^j)u^i\big\rangle_\Omega-\int_{\mathbb{R}^n} \boldsymbol{\Omega}^{ij}\odot {\rm d}_s(\psi^iu^j)\,{\rm d} x\\ =\big\langle(-\Delta)^su^j,(\psi^iu^j)u^i\big\rangle_\Omega \end{multline*} for every $i,j\in\{1,\ldots,d\}$, thanks to \eqref{conservlaw}. Therefore, $$\big\langle(-\Delta)^su,\psi\big\rangle_\Omega= \sum_{i,j=1}^d\big\langle(-\Delta)^su^j,(\psi^iu^j)u^i\big\rangle_\Omega=\sum_{j=1}^d\big\langle(-\Delta)^su^j,(\psi\cdot u)u^j\big\rangle_\Omega=0\,,$$ and the proof is complete. \end{proof} \subsection{Weighted harmonic maps with free boundary} \begin{definition} Let $G\subseteq\mathbb{R}^{n+1}_+$ be a bounded admissible open set, and $v\in H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ satisfying $v({\bf x})\in\mathbb{S}^{d-1}$ for a.e. ${\bf x}\in\partial^0G$. The map $v$ is said to be {\sl a weighted weakly harmonic map in $G$ with respect to the partially free boundary condition} $v(\partial^0G)\subseteq\mathbb{S}^{d-1}$ if \begin{equation}\label{vareqext} \int_Gz^a\nabla v\cdot\nabla\Phi\,{\rm d}{\bf x}=0 \end{equation} for every $\Phi\in H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ such that $\Phi=0$ on $\partial^+G$ and $\Phi({\bf x})\in{\rm Tan}(v({\bf x}),\mathbb{S}^{d-1})$ for a.e. ${\bf x}\in\partial^0G$. In short, we shall say that $v$ is a weighted weakly harmonic map with free boundary in~$G$. 
\end{definition} \begin{remark} If $v\in H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ is a weighted weakly harmonic map with free boundary in~$G$, then \eqref{vareqext} means that $v$ satisfies in the weak sense \begin{equation}\label{EqHarmmapfreebdry} \begin{cases} {\rm div}(z^a\nabla v)=0 &\text{in $G$}\,,\\[5pt] \displaystyle z^a\frac{\partial v}{\partial \nu} \perp {\rm Tan}(v,\mathbb{S}^{d-1}) & \text{on $\partial^0G$}\,. \end{cases} \end{equation} In particular, $v$ is smooth in $G$ by standard elliptic regularity. \end{remark} In view of Remark \ref{remsLaplorthog}, equation \eqref{EqHarmmapfreebdry} above, and Lemma \ref{repnormderfraclap}, it is clear that weighted weakly harmonic maps with free boundary and weakly $s$-harmonic maps are intimately related. This relation is made precise in the following proposition (see \cite[Proposition 4.6]{MS}). \begin{proposition}\label{equivsharmfreebdry} Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set with Lipschitz boundary. If a map $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ is a weakly $s$-harmonic map in $\Omega$, then its extension $u^{\rm e}$ given by~\eqref{poisson} is a weighted weakly harmonic map with free boundary in every bounded admissible open set $G\subseteq\mathbb{R}^{n+1}_+$ satisfying $\overline{\partial^0G}\subseteq\Omega$. \end{proposition} \begin{proof} Let us assume that $u$ is a weakly $s$-harmonic map in $\Omega$, and let $G\subseteq\mathbb{R}^{n+1}_+$ be a bounded admissible open set such that $\overline{\partial^0G}\subseteq\Omega$. Let $\Phi\in H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ be such that $\Phi=0$ on $\partial^+G$, and $\Phi\cdot u=0$ on $\partial^0G$. We extend $\Phi$ by $0$ to the whole half space $\mathbb{R}^{n+1}_+$, and the resulting map, still denoted by $\Phi$, belongs to $H^1(\mathbb{R}^{n+1}_+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$. In view of \eqref{normexth1/2}, $\Phi_{|\mathbb{R}^n}\in H^s_{00}(\Omega;\mathbb{R}^d)$, and ${\rm spt}(\Phi_{|\mathbb{R}^n})\subseteq\Omega$. 
Since $\Phi_{|\mathbb{R}^n}\cdot u=0$, we conclude from Lemma \ref{repnormderfraclap} and Proposition \ref{ELeqprop} that $$\int_Gz^a\nabla u^{\rm e}\cdot\nabla\Phi\,{\rm d}{\bf x}=\int_{\mathbb{R}^{n+1}_+}z^a\nabla u^{\rm e}\cdot\nabla\Phi\,{\rm d}{\bf x}=\frac{1}{\boldsymbol{\delta}_s}\big\langle(-\Delta)^s u,\Phi_{|\mathbb{R}^n} \big\rangle_\Omega=0\,. $$ Hence, $u^{\rm e}$ is indeed a weighted weakly harmonic map with free boundary in $G$. \end{proof} \section{Small energy H\"older regularity}\label{EpsRegthm} In this section, we present the main epsilon-regularity theorem asserting that, under a suitable smallness assumption on the energy in a ball, a weakly $s$-harmonic map satisfying the monotonicity formula is H\"older continuous in a smaller ball. H\"older regularity will be improved to Lipschitz regularity in the next section with an explicit control on the Lipschitz norm in terms of the energy. \begin{theorem}\label{thmepsregholder} There exist constants $\boldsymbol{\varepsilon}_0=\boldsymbol{\varepsilon}_0(n,s)>0$ and $\beta_0=\beta_0(n,s)\in(0,1)$ such that the following holds. Let $u\in \widehat H^s(D_R;\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_R$ such that the function $r\in(0,R-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x},r)$ is nondecreasing for every ${\bf x}\in\partial^0B_R^+$. If \begin{equation}\label{condeps0} \boldsymbol{\theta}_s(u,0,R)\leqslant\boldsymbol{\varepsilon}_0\,, \end{equation} then $u\in C^{0,\beta_0}(D_{R/2})$ and \begin{equation}\label{controlholdepsnonloc} R^{2\beta_0}[u]^2_{C^{0,\beta_0}(D_{R/2})}\leqslant C \boldsymbol{\theta}_s(u,0,R)\,, \end{equation} for a constant $C=C(n,s)$. \end{theorem} For what follows, it is useful to restate the epsilon-regularity theorem above purely in terms of the extension. This is the purpose of the following corollary. 
\begin{corollary}\label{coroepsreghold} There exist three constants $\boldsymbol{\varepsilon}_1=\boldsymbol{\varepsilon}_1(n,s)>0$, $\boldsymbol{\kappa}_1=\boldsymbol{\kappa}_1(n,s)\in(0,1)$, $\beta_1=\beta_1(n,s)\in(0,1)$ such that the following holds. Let $u\in \widehat H^s(D_{2R};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{2R}$ such that the function $r\in(0,2R-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x},r)$ is nondecreasing for every ${\bf x}\in\partial^0B_{2R}^+$. If \begin{equation}\label{condeps0ext} \boldsymbol{\Theta}_s(u^{\rm e},0,R) \leqslant \boldsymbol{\varepsilon}_1\,, \end{equation} then $u^{\rm e}\in C^{0,\beta_1}(B^+_{\boldsymbol{\kappa}_1R})$ and $$R^{2\beta_1} [u^{\rm e}]^2_{C^{0,\beta_1}(B^+_{\boldsymbol{\kappa}_1R})}\leqslant C\,,$$ for a constant $C=C(n,s)$. \end{corollary} \begin{proof} We consider the constant $\boldsymbol{\varepsilon}_0=\boldsymbol{\varepsilon}_0(n,s)>0$ given by Theorem \ref{thmepsregholder}. Since $|u|\equiv 1$, we obtain from Lemma \ref{compardensities} the existence of $\boldsymbol{\varepsilon}_1= \boldsymbol{\varepsilon}_1(n,s)>0$ and $\alpha=\alpha(n,s)\in(0,1/4]$ such that the condition $\boldsymbol{\Theta}_s(u^{\rm e},0,R) \leqslant \boldsymbol{\varepsilon}_1$ implies $\boldsymbol{\theta}_s(u,0,\alpha R)\leqslant\boldsymbol{\varepsilon}_0$. In turn, Theorem \ref{thmepsregholder} tells us that $u\in C^{0,\beta_0}(D_{\alpha R/2})$. Then Lemma \ref{HoldTransf} implies that $u^{\rm e}\in C^{0,\beta_1}(B^+_{\boldsymbol{\kappa}_1 R})$ with $\beta_1:=\min(\beta_0,s)$ and $\boldsymbol{\kappa}_1:=\alpha/8$. 
Moreover, combining \eqref{Holdtransfesti} and \eqref{controlholdepsnonloc} leads to \begin{multline*} R^{2\beta_1} [u^{\rm e}]^2_{C^{0,\beta_1}(B^+_{\boldsymbol{\kappa}_1R})}\leqslant C\big(R^{2\beta_1} [u]^2_{C^{0,\beta_1}(D_{\alpha R/2})}+1 \big) \leqslant C\big(R^{2\beta_0} [u]^2_{C^{0,\beta_0}(D_{\alpha R/2})}+1 \big)\\ \leqslant C\big( \boldsymbol{\theta}_s(u,0,\alpha R)+1\big)\leqslant C\,, \end{multline*} and the proof is complete. \end{proof} \begin{remark}\label{remarlepsholdregsubcritic} In the case $n\leqslant 2s$, the function $r\in(0,R-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x},r)$ is nondecreasing for every $u\in \widehat H^s(D_{R};\mathbb{R}^d)$. In other words, in the case $n\leqslant 2s$, Theorem \ref{thmepsregholder} and Corollary \ref{coroepsreghold} apply to arbitrary weakly $s$-harmonic maps. Moreover, in the case $n=1$ and $s\in(1/2,1)$ (i.e., $n<2s$), the conclusions of Theorem \ref{thmepsregholder} and Corollary \ref{coroepsreghold} apply even without the smallness assumptions \eqref{condeps0} or \eqref{condeps0ext}, since they follow directly from the classical embedding $H^s(\mathbb{R})\hookrightarrow C^{0,s-1/2}(\mathbb{R})$. For our purposes, it is convenient to state this fact in a suitable quantitative form. This is the object of the proposition below, whose proof is postponed to the end of Section \ref{subsectprfthmhold}. \end{remark} \begin{proposition}\label{propHoldsubcritic} Assume that $n=1$ and $s\in(1/2,1)$. If $u\in \widehat H^s(D_R;\mathbb{R}^d)$, then $u\in C^{0,s-1/2}(D_{R/2})$ and \begin{equation}\label{holdestisubcriticcase1} R^{2s-1}[u]^2_{C^{0,s-1/2}(D_{R/2})}\leqslant C\boldsymbol{\theta}_s(u,0,R)\,, \end{equation} for a constant $C=C(s)$. \end{proposition} \subsection{Proof of Theorem \ref{thmepsregholder} and Proposition \ref{propHoldsubcritic}}\label{subsectprfthmhold} The key point to prove Theorem \ref{thmepsregholder} is to obtain a geometric decay of the energy in small balls. 
Then H\"older continuity follows classically from Campanato's criterion. The purpose of the next proposition, very much inspired from \cite[Proposition 3.1]{Evans}, is exactly to show such decay. \begin{proposition}\label{energimprovprop} Assume that $n\geqslant 2s$. There exist two constants $\boldsymbol{\varepsilon}_*=\boldsymbol{\varepsilon}_*(n,s)>0$ and $\boldsymbol{\tau}=\boldsymbol{\tau}(n,s)\in(0,1/4)$ such that the following holds. Let $u\in \widehat H^s(D_1;\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_1$ such that the function $r\in(0,1-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x},r)$ is non decreasing for every ${\bf x}\in\partial^0B_1^+$. If $$\mathcal{E}_s(u,D_1)\leqslant \boldsymbol{\varepsilon}_*\,,$$ then $$\frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(u,D_{\boldsymbol{\tau}}) \leqslant \frac{1}{2}\mathcal{E}_s(u,D_1)\,.$$ \end{proposition} \begin{proof} We fix the constant $\boldsymbol{\tau}\in(0,1/4)$ that will be specified later on. We proceed by contradiction assuming that there exists a sequence $\{u_k\}$ of stationary weakly $s$-harmonic maps in $D_1$ satisfying $$\varepsilon^2_k:=\mathcal{E}_s(u_k,D_1) \mathop{\longrightarrow}\limits_{k\to\infty} 0\,,$$ and \begin{equation}\label{hypcontrepsregprop} \frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(u_k,D_{\boldsymbol{\tau}}) > \frac{1}{2}\mathcal{E}_s(u_k,D_1)\,. \end{equation} (Note that this later condition ensures that $\varepsilon_k>0$.) Then we consider the (expanded) map $$w_k:=\frac{u_k-(u_k)_{0,1}}{\varepsilon_k}\in \widehat H^s(D_1;\mathbb{R}^{d})\cap L^\infty(\mathbb{R}^n)\,,$$ which satisfies $$\Xint-_{D_1}w_k\,{\rm d} x=0\quad\text{and}\quad \mathcal{E}_s(w_k,D_1)=1 \,.$$ Assumption \eqref{hypcontrepsregprop} also rewrites \begin{equation}\label{newhypcontr} \frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(w_k,D_{\boldsymbol{\tau}}) > \frac{1}{2}\,. 
\end{equation} By Poincar\'e's inequality in $H^s(D_1)$, we have $$\|w_k\|^2_{L^2(D_1)}\leqslant C \mathcal{E}_s(w_k,D_1)\leqslant C\,.$$ Therefore $\{w_k\}$ is bounded in $\widehat H^s(D_1;\mathbb{R}^{d})$, so that we can find a (not relabeled) subsequence and $w\in \widehat H^s(D_1;\mathbb{R}^{d})$ such that $w_k\rightharpoonup w$ weakly in $\widehat H^s(D_1)$ and $w_k\to w$ strongly in $L^2(D_1)$ (see Remark \ref{remweakcvHhat}). In particular, $\|w\|_{L^2(D_1)}\leqslant C$. By lower semicontinuity of the energy $\mathcal{E}_s(\cdot,D_1)$, we also have $\mathcal{E}_s(w,D_1)\leqslant 1$ (see again Remark \ref{remweakcvHhat}). Recalling that $u_k$ satisfies $$\big\langle (-\Delta)^su_k,\varphi\big\rangle_{D_1}= \int_{D_1} |{\rm d}_su_k|^2u_k\cdot\varphi\,{\rm d} x \qquad \forall \varphi\in\mathscr{D}(D_1;\mathbb{R}^d)\,,$$ we obtain in terms of $w_k$, \begin{equation}\label{eqwk} \big\langle (-\Delta)^sw_k,\varphi\big\rangle_{D_1}= \varepsilon_k\int_{D_1} |{\rm d}_sw_k|^2u_k\cdot\varphi\,{\rm d} x \qquad \forall \varphi\in\mathscr{D}(D_1;\mathbb{R}^d)\,. \end{equation} Since $|u_k|\equiv 1$, this leads to \begin{multline*} \Big|\big\langle (-\Delta)^sw_k,\varphi\big\rangle_{D_1}\Big| \leqslant \varepsilon_k\big\| |{\rm d}_s w_k|^2 \big\|_{L^1(D_1)}\|\varphi\|_{L^\infty(D_1)}\\ \leqslant 2\varepsilon_k\mathcal{E}_s(w_k,D_1)\|\varphi\|_{L^\infty(D_1)}= 2\varepsilon_k\|\varphi\|_{L^\infty(D_1)}\mathop{\longrightarrow}\limits_{k\to\infty}0 \end{multline*} for every $\varphi\in\mathscr{D}(D_1;\mathbb{R}^d)$. On the other hand, the weak convergence in $\widehat H^s(D_1)$ of $w_k$ towards $w$ implies that $$\big\langle (-\Delta)^sw_k,\varphi\big\rangle_{D_1}\mathop{\longrightarrow}\limits_{k\to\infty} \big\langle (-\Delta)^sw,\varphi\big\rangle_{D_1}\qquad \forall \varphi\in\mathscr{D}(D_1;\mathbb{R}^d)\,.$$ As a consequence, $w$ satisfies \begin{equation}\label{eqwlim} (-\Delta)^sw=0\quad\text{in $H^{-s}(D_1)$}\,. 
\end{equation} By Lemma \ref{lipestsharmfctlem} in Appendix \ref{AppSharmfct}, $w$ is (locally) smooth in $D_1$, and we have the estimate \begin{equation}\label{lipestiexpandmap} \|w\|^2_{L^\infty(D_{1/2})}+\|\nabla w\|^2_{L^\infty(D_{1/2})} \leqslant C\big(\mathcal{E}_s(w,D_1)+\|w\|^2_{L^2(D_1)}\big)\leqslant C\,. \end{equation} In view of \eqref{lipestiexpandmap}, we have \begin{equation}\label{jeudsoirwhs2211} \iint_{D_{\boldsymbol{\tau}}\times D_{\boldsymbol{\tau}}}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \leqslant C \iint_{D_{\boldsymbol{\tau}}\times D_{\boldsymbol{\tau}}} \frac{{\rm d} x{\rm d} y}{|x-y|^{n+2s-2}}\leqslant C\boldsymbol{\tau}^{n+2-2s}\,. \end{equation} Then, writing \begin{multline}\label{jeudsoirwhs2211bis} \iint_{D_{\boldsymbol{\tau}}\times D^c_{\boldsymbol{\tau}}}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y= \iint_{D_{\boldsymbol{\tau}}\times (D_{1/2}\setminus D_{\boldsymbol{\tau}})}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ +\iint_{D_{\boldsymbol{\tau}}\times D^c_{1/2}}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,, \end{multline} we first estimate, using \eqref{lipestiexpandmap}, \begin{equation}\label{jeudsoirwhs2211bisbis} \iint_{D_{\boldsymbol{\tau}}\times (D_{1/2}\setminus D_{\boldsymbol{\tau}})}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \leqslant C \iint_{D_{\boldsymbol{\tau}}\times (D_{1/2}\setminus D_{\boldsymbol{\tau}})}\frac{{\rm d} x{\rm d} y }{|x-y|^{n+2s-2}} \leqslant C\boldsymbol{\tau}^{n}\,. 
\end{equation} Next we infer from Lemma \ref{adminHchap} and \eqref{lipestiexpandmap} that \begin{multline}\label{jeudsoirwhs2211bisbisbis} \iint_{D_{\boldsymbol{\tau}}\times D^c_{1/2}}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\leqslant 2\iint_{D_{\boldsymbol{\tau}}\times D^c_{1/2}}\frac{|w(x)|^2+|w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ \leqslant C\Big(\int_{D_{\boldsymbol{\tau}}}|w(x)|^2\,{\rm d} x+\boldsymbol{\tau}^n\int_{D^c_{1/2}}\frac{|w(y)|^2}{(|y|+1)^{n+2s}}\,{\rm d} y\Big)\leqslant C\boldsymbol{\tau}^n\,. \end{multline} Gathering \eqref{jeudsoirwhs2211}, \eqref{jeudsoirwhs2211bis}, \eqref{jeudsoirwhs2211bisbis}, and \eqref{jeudsoirwhs2211bisbisbis} yields \begin{equation}\label{energwsmall} \frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(w,D_{\boldsymbol{\tau}}) \leqslant C\boldsymbol{\tau}^{2s}\,. \end{equation} By Lemma \ref{compactexpandmap} -- which is postponed to the end of the proof -- there exists a universal constant $\boldsymbol{\sigma}\in(0,1)$ such that \begin{equation}\label{strongcvwkw} w_k\to w \text{ strongly in } H^s(D_{\boldsymbol{\sigma}})\,. \end{equation} In view of \eqref{energwsmall}, we can choose $\boldsymbol{\tau}$ (depending only on $n$ and $s$) in such a way that \begin{equation}\label{firstchoicetau} 0<\boldsymbol{\tau}<\boldsymbol{\sigma}/2\quad\text{and}\quad \frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(w,D_{\boldsymbol{\tau}}) \leqslant \frac{1}{4}\,. \end{equation} From \eqref{jeudsoirwhs2211} and the strong convergence in \eqref{strongcvwkw}, we first infer that for $k$ large enough, \begin{equation}\label{samcanic1527} \iint_{D_{\boldsymbol{\tau}}\times D_{\boldsymbol{\tau}}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \leqslant \iint_{D_{\boldsymbol{\tau}}\times D_{\boldsymbol{\tau}}}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y+\boldsymbol{\tau}^n\,. 
\end{equation} In the same way, for $k$ large enough, one obtains from \eqref{strongcvwkw}, \begin{align} \nonumber \iint_{D_{\boldsymbol{\tau}}\times D^c_{\boldsymbol{\tau}}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y=&\iint_{D_{\boldsymbol{\tau}}\times (D_{\boldsymbol{\sigma}}\setminus D_{\boldsymbol{\tau}})}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ \nonumber &\qquad\qquad+ \iint_{D_{\boldsymbol{\tau}}\times D^c_{\boldsymbol{\sigma}}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\[3pt] \nonumber \leqslant&\;\boldsymbol{\tau}^n+\iint_{D_{\boldsymbol{\tau}}\times (D_{\boldsymbol{\sigma}}\setminus D_{\boldsymbol{\tau}})}\frac{|w(x)-w(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ \label{samcanic1528} &\qquad\qquad+ \iint_{D_{\boldsymbol{\tau}}\times D^c_{\boldsymbol{\sigma}}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,. \end{align} Then we estimate by means of Lemma \ref{adminHchap}, \begin{multline*} \iint_{D_{\boldsymbol{\tau}}\times D^c_{\boldsymbol{\sigma}}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\leqslant 2\iint_{D_{\boldsymbol{\tau}}\times D^c_{\boldsymbol{\sigma}}}\frac{|w_k(x)|^2+|w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ \leqslant C\Big(\int_{D_{\boldsymbol{\tau}}}|w_k(x)|^2\,{\rm d} x+\boldsymbol{\tau}^n\int_{D^c_{\boldsymbol{\sigma}}}\frac{|w_k(y)|^2}{(|y|+1)^{n+2s}}\,{\rm d} y\Big)\leqslant C\Big(\int_{D_{\boldsymbol{\tau}}}|w_k(x)|^2\,{\rm d} x+\boldsymbol{\tau}^n\Big)\,. \end{multline*} Since $w_k\to w$ strongly in $L^2(D_1)$ and in view of \eqref{lipestiexpandmap}, we deduce that for $k$ large enough, \begin{equation}\label{samcanic1525} \iint_{D_{\boldsymbol{\tau}}\times D^c_{\boldsymbol{\sigma}}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\leqslant C\Big(\int_{D_{\boldsymbol{\tau}}}|w(x)|^2\,{\rm d} x+\boldsymbol{\tau}^n\Big)\leqslant C\boldsymbol{\tau}^n\,. 
\end{equation} Combining \eqref{samcanic1527}, \eqref{samcanic1528}, and \eqref{samcanic1525} together with \eqref{firstchoicetau}, we conclude that for $k$ large enough, $$\frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(w_k,D_{\boldsymbol{\tau}})\leqslant \frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(w,D_{\boldsymbol{\tau}})+C\boldsymbol{\tau}^{2s}\leqslant \frac{1}{4}+ C\boldsymbol{\tau}^{2s}\,.$$ Hence, we can choose $\boldsymbol{\tau}\in (0,1/4)$ small enough (depending only on $n$ and $s$) in such a way that $\frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(w_k,D_{\boldsymbol{\tau}})\leqslant 1/2$ whenever $k$ is large enough, contradicting \eqref{newhypcontr}. \end{proof} As is transparent from the proof above, Proposition \ref{energimprovprop} crucially rests on the strong convergence stated in \eqref{strongcvwkw}, which we now prove. \begin{lemma}\label{compactexpandmap} There exists a universal constant $\boldsymbol{\sigma}\in(0,1)$ such that the weakly converging subsequence $\{w_k\}$ (towards $w$) actually converges strongly in $H^s(D_{\boldsymbol{\sigma}})$. \end{lemma} \begin{proof} We choose the constant $\boldsymbol{\sigma}$ as follows: $$\boldsymbol{\sigma}:=\min\Big\{\frac{4}{5\Lambda},\frac{1}{32}\Big\}\,,$$ where $\Lambda>1$ is the universal constant given by Theorem \ref{divcurlthm}. \vskip3pt \noindent{\it Step 1.} Subtracting \eqref{eqwlim} from equation \eqref{eqwk} leads to \begin{equation}\label{eqdiffwkw} \big\langle (-\Delta)^s(w_k-w),\varphi\big\rangle_{D_1}= \varepsilon_k\int_{D_1} |{\rm d}_sw_k|^2u_k\cdot\varphi\,{\rm d} x \qquad \forall \varphi\in\mathscr{D}(D_1;\mathbb{R}^d)\,. \end{equation} By approximation (see \eqref{densitysmoothH1/200}), this equation also holds for every $\varphi\in H^s_{00}(D_1;\mathbb{R}^d)\cap L^\infty(D_1)$ compactly supported in $D_1$. 
Let us now fix a smooth cut-off function $\zeta\in\mathscr{D}(D_{5\boldsymbol{\sigma}/4})$ such that $0\leqslant \zeta\leqslant 1$, $\zeta=1$ in $D_{\boldsymbol{\sigma}}$. Using the test function $\varphi_k:=\zeta(w_k-w)\in H^s_{00}(D_1;\mathbb{R}^d)\cap L^\infty(D_1)$ in \eqref{eqdiffwkw} yields \begin{equation}\label{identLk=Rk} \big\langle (-\Delta)^s(w_k-w),\varphi_k \big\rangle_{D_1}= \varepsilon_k\int_{D_1}|{\rm d}_sw_k|^2u_k\cdot\varphi_k\,{\rm d} x \,. \end{equation} Setting $$L_k:=\big\langle (-\Delta)^s(w_k-w),\zeta(w_k-w)\big\rangle_{D_1}\quad\text{and}\quad R_k:= \varepsilon_k\int_{D_1} |{\rm d}_sw_k|^2u_k\cdot \varphi_k\,{\rm d} x\,, $$ we claim that \begin{equation}\label{claim1Lk} L_k\geqslant [w_k-w]^2_{H^s(D_{\boldsymbol{\sigma}})}+o(1)\quad\text{as $k\to\infty$}\,, \end{equation} and \begin{equation}\label{claim2Rk} \lim_{k\to\infty} R_k=0\,. \end{equation} Identity \eqref{identLk=Rk} rewrites as $L_k=R_k$, and the two claims above will imply that $[w_k-w]^2_{H^s(D_{\boldsymbol{\sigma}})}\to 0$ as $k\to\infty$, whence the conclusion. \vskip3pt \noindent{\it Step 2.} This step is devoted to the proof of \eqref{claim1Lk}. 
For simplicity, let us denote $$\triangle_k:=w_k-w\,.$$ Since $\zeta= 1$ in $D_{\boldsymbol{\sigma}}$, and $\zeta=0$ in $D^c_{2\boldsymbol{\sigma}}$, we have \begin{equation}\label{decompLk} L_k= [\triangle_k]^2_{H^s(D_{\boldsymbol{\sigma}})}+\frac{\gamma_{n,s}}{2}\big(L_k^{(1)}+L_k^{(2)}+L_k^{(3)}\big)\,, \end{equation} with $$ L_k^{(1)}:=\iint_{(D_1\setminus D_{\boldsymbol{\sigma}} )\times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{(\triangle_k(x)-\triangle_k(y))\cdot(\zeta(x)\triangle_k(x)-\zeta(y)\triangle_k(y))}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,,$$ $$ L_k^{(2)}:=2\iint_{D_{\boldsymbol{\sigma}} \times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{(\triangle_k(x)-\triangle_k(y))\cdot(\zeta(x)\triangle_k(x)-\zeta(y)\triangle_k(y))}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,,$$ and $$L_k^{(3)}:=2\iint_{D_{2\boldsymbol{\sigma}} \times D^c_1}\frac{(\triangle_k(x)-\triangle_k(y))\cdot \triangle_k(x)}{|x-y|^{n+2s}}\, \zeta(x)\,{\rm d} x{\rm d} y\,.$$ Concerning $L_k^{(1)}$, we first rewrite \begin{align*} L_k^{(1)} = & \; \iint_{(D_1\setminus D_{\boldsymbol{\sigma}} )\times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{\big((\triangle_k(x)-\triangle_k(y))\cdot \triangle_k(x)\big) (\zeta(x)-\zeta(y))}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ &\qquad\qquad + \iint_{(D_1\setminus D_{\boldsymbol{\sigma}} )\times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{|\triangle_k(x)-\triangle_k(y)|^2}{|x-y|^{n+2s}}\,\zeta(y)\,{\rm d} x{\rm d} y\\[3pt] \geqslant & \; \iint_{(D_1\setminus D_{\boldsymbol{\sigma}} )\times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{\big((\triangle_k(x)-\triangle_k(y))\cdot \triangle_k(x)\big) (\zeta(x)-\zeta(y))}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,. 
\end{align*} Recalling that $$\mathcal{E}_s(\triangle_k,D_1)\leqslant 2\mathcal{E}_s(w_k,D_1)+2\mathcal{E}_s(w,D_1)\leqslant 4\,,$$ we estimate by means of H\"older's inequality, \begin{multline*} \left|\iint_{(D_1\setminus D_{\boldsymbol{\sigma}} )\times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{\big((\triangle_k(x)-\triangle_k(y))\cdot \triangle_k(x)\big) (\zeta(x)-\zeta(y))}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\right| \\ \leqslant \sqrt{\mathcal{E}_s(\triangle_k,D_1)}\left( \iint_{(D_1\setminus D_{\boldsymbol{\sigma}} )\times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{|\triangle_k(x)|^2|\zeta(x)-\zeta(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\right)^{1/2}\\ \leqslant C\left( \iint_{(D_1\setminus D_{\boldsymbol{\sigma}} )\times(D_1\setminus D_{\boldsymbol{\sigma}})}\frac{|\triangle_k(x)|^2}{|x-y|^{n+2s-2}}\,{\rm d} x{\rm d} y\right)^{1/2} \leqslant C\|\triangle_k\|_{L^2(D_1)}\,. \end{multline*} Since $\|\triangle_k\|_{L^2(D_1)}\to 0$, we conclude that \begin{equation}\label{lowvanL1k} L^{(1)}_k\geqslant o(1)\quad\text{as $k\to\infty$}\,. \end{equation} Exactly in the same way, one derives \begin{equation}\label{lowvanL2k} L^{(2)}_k\geqslant o(1)\quad\text{as $k\to\infty$}\,. \end{equation} For the last term $L_k^{(3)}$, we use again H\"older's inequality to derive \begin{equation}\label{lowvanL3k} \big|L_k^{(3)}\big|\leqslant 2 \sqrt{\mathcal{E}_s(\triangle_k,D_1)} \left(\iint_{D_{2\boldsymbol{\sigma}} \times D^c_1}\frac{ |\triangle_k(x)|^2\zeta^2(x)}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \right)^{1/2}\leqslant C\|\triangle_k\|_{L^2(D_1)}=o(1) \end{equation} as $k\to\infty$. Gathering now \eqref{decompLk} with \eqref{lowvanL1k}, \eqref{lowvanL2k}, and \eqref{lowvanL3k} leads to \eqref{claim1Lk}. \vskip3pt \noindent{\it Step 3.} In order to prove \eqref{claim2Rk}, we need to rewrite $R_k$ in a suitable form. 
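Before doing so, observe that, since $u_k=\varepsilon_kw_k+(u_k)_{0,1}$ and additive constants do not affect differences, we have the scaling identities
$$u_k(x)-u_k(y)=\varepsilon_k\big(w_k(x)-w_k(y)\big)\qquad\text{and}\qquad {\rm d}_su_k=\varepsilon_k\,{\rm d}_sw_k\,,$$
which underlie the computations below.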
First, we rewrite $$R_k=\frac{1}{\varepsilon_k}\int_{D_1}|{\rm d}_su_k|^2u_k\cdot \varphi_k\,{\rm d} x\,,$$ and we recall from Lemma \ref{rewritingsharmeq} that for each $i=1,\ldots,d$, $$ |{\rm d}_su_k|^2u^i_k =\Big(\sum^d_{j=1}\boldsymbol{\Omega}^{ij}_k\odot{\rm d}_su_k^j\Big)+T_k^i= \varepsilon_k \Big(\sum^d_{j=1}\boldsymbol{\Omega}^{ij}_k\odot{\rm d}_sw_k^j\Big)+T_k^i\,, $$ where $\boldsymbol{\Omega}^{ij}_k\in L^2_{\rm od}(D_1)$ is given by $$\boldsymbol{\Omega}^{ij}_k(x,y):=u_k^i(x){\rm d}_su_k^j(x,y)-u_k^j(x){\rm d}_su_k^i(x,y)\,,$$ and \begin{align*} T_k^i(x):=&\,\frac{\gamma_{n,s}}{4}\int_{\mathbb{R}^n}\frac{|u_k(x)-u_k(y)|^2}{|x-y|^{n+2s}} \big(u_k^i(x)-u_k^i(y)\big)\,{\rm d} y\\ =&\, \frac{\gamma_{n,s}\varepsilon_k^3}{4}\int_{\mathbb{R}^n}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}} \big(w_k^i(x)-w_k^i(y)\big)\,{\rm d} y\,. \end{align*} Hence, $$R_k=\Big(\sum_{i,j=1}^d\int_{D_1} \big( \boldsymbol{\Omega}^{ij}_k\odot {\rm d}_sw_k^j\big)\varphi_k^i\,{\rm d} x\Big) + \varepsilon_k^2\int_{D_1}\widetilde T_k\cdot \varphi_k\,{\rm d} x =:R_k^{(1)}+R_k^{(2)}\,,$$ where we have set $$\widetilde T_k(x):= \frac{\gamma_{n,s}}{4}\int_{\mathbb{R}^n}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}} \big(w_k(x)-w_k(y)\big)\,{\rm d} y\,.$$ \vskip3pt \noindent{\it Step 4.} We shall now prove that \begin{equation}\label{vanishofR1kjul} \lim_{k\to\infty}R_k^{(1)}=0\,. \end{equation} First, notice that formula \eqref{poisson} shows that $u_k^{\rm e}=\varepsilon_k w_k^{\rm e}+ (u_k)_{0,1}$, which implies that $$\boldsymbol{\Theta}_s(u_k^{\rm e},{\bf x},r)=\varepsilon_k^2\boldsymbol{\Theta}_s(w_k^{\rm e},{\bf x},r)\quad\text{for every ${\bf x}\in\partial^0B_1^+$ and $r\in(0,1-|{\bf x}|)$}\,.$$ As a consequence, our assumption on $\boldsymbol{\Theta}_s(u_k^{\rm e},{\bf x},r)$ tells us that $r\in(0,1-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(w_k^{\rm e},{\bf x},r)$ is nondecreasing for every ${\bf x}\in\partial^0B_1^+$. 
Applying Corollary \ref{coroBMO} (with $R=2\boldsymbol{\sigma}$), we deduce that $$[\zeta w_k]_{{\rm BMO}(\mathbb{R}^n)}\leqslant C\Big(\mathcal{E}_s(w_k,D_{4\boldsymbol{\sigma}})+ \|w_k\|^2_{L^2(D_{4\boldsymbol{\sigma}})}\Big)^{1/2}\leqslant C\,, $$ for some constant $C$ depending only on $n$, $s$, and $\zeta$. Since $w_k\to w$ strongly in $L^2(D_1)$ and $\zeta$ is supported in $D_{5\boldsymbol{\sigma}/4}$, we have $\zeta w_k\to \zeta w$ strongly in $L^1(\mathbb{R}^n)$ (in other words, $\|\varphi_k\|_{L^1(\mathbb{R}^n)}\to 0$). By lower semi-continuity of the BMO-seminorm with respect to the $L^1$-convergence, we deduce that $\zeta w\in {\rm BMO}(\mathbb{R}^n)$, and then (remember that $\varphi_k:=\zeta(w_k-w)$) $$[\varphi_k]_{{\rm BMO}(\mathbb{R}^n)}\leqslant C \,.$$ Next, we recall from Proposition \ref{propconservlaws} that $u_k$ being weakly $s$-harmonic in $D_1$ yields $${\rm div}_s\,\boldsymbol{\Omega}^{ij}_k=0\quad\text{in $H^{-s}(D_1)$}\,, $$ for each $i,j\in \{1,\ldots, d\}$. Applying Theorem \ref{divcurlthm} (with $x_0=0$ and $r=5\boldsymbol{\sigma}/4$), we infer that \begin{align*} \left|\int_{D_1} \big(\boldsymbol{\Omega}^{ij}_k\odot {\rm d}_sw_k^j\big)\varphi_k^i\,{\rm d} x\right| &\leqslant C\|\boldsymbol{\Omega}_k^{ij}\|_{L^2_{\rm od}(D_1)}\sqrt{\mathcal{E}_s(w_k^j,D_1)} \Big([\varphi^i_k]_{{\rm BMO}(\mathbb{R}^n)}+\|\varphi^i_k\|_{L^1(\mathbb{R}^n)}\Big)\\ &\leqslant C\|\boldsymbol{\Omega}_k^{ij}\|_{L^2_{\rm od}(D_1)}\,. \end{align*} Since $|u_k|\equiv 1$, we have the pointwise estimate $|\boldsymbol{\Omega}_k^{ij}(x,y)|\leqslant |{\rm d}_su_k^j(x,y)| + |{\rm d}_su_k^i(x,y)|$ which leads to $\|\boldsymbol{\Omega}_k^{ij}\|^2_{L^2_{\rm od}(D_1)}\leqslant C\mathcal{E}_s(u_k,D_1)=O(\varepsilon^2_k)$ for each $i,j\in \{1,\ldots, d\}$ (recall that $u_k=\varepsilon_kw_k+(u_k)_{0,1}$, so that $\mathcal{E}_s(u_k,D_1)=\varepsilon_k^2\mathcal{E}_s(w_k,D_1)\leqslant C\varepsilon_k^2$). Consequently, $$R^{(1)}_k=O(\varepsilon_k)\,, $$ and \eqref{vanishofR1kjul} is proved. 
\vskip5pt \noindent{\it Step 5.} We complete the proof of \eqref{claim2Rk} showing now that \begin{equation}\label{vanishingofR2k} \lim_{k\to\infty}R_k^{(2)}=0\,. \end{equation} Using the fact that $\varphi_k$ is supported in $D_{5\boldsymbol{\sigma}/4}\subseteq D_{1/20}\subseteq D_{1/16}$, we first write \begin{equation}\label{decompofR2k} R_k^{(2)}=\varepsilon_k^2\int_{D_1}\widetilde T_k\cdot \varphi_k\,{\rm d} x =\frac{\gamma_{n,s}}{4} \varepsilon_k^2\big(I_k+II_k\big)\,, \end{equation} with \begin{align*} I_k&:=\iint_{D_{1/16}\times D_{1/16}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}} \big(w_k(x)-w_k(y)\big)\cdot\varphi_k(x)\,{\rm d} x{\rm d} y \\ &= \frac{1}{2}\iint_{D_{1/16}\times D_{1/16}}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}} \big(w_k(x)-w_k(y)\big)\cdot\big(\varphi_k(x)-\varphi_k(y)\big)\,{\rm d} x{\rm d} y\,, \end{align*} and \begin{equation}\label{defofIIkinR2k} II_k:=\iint_{D_{1/20}\times D^c_{1/16}} \frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}} \big(w_k(x)-w_k(y)\big)\cdot\varphi_k(x)\,{\rm d} x{\rm d} y\,. \end{equation} We shall estimate separately the two terms $I_k$ and $II_k$. Concerning $I_k$, we apply H\"older's inequality to reach \begin{align} \nonumber |I_k| & \leqslant \frac{1}{2} \iint_{D_{1/16}\times D_{1/16}}\frac{|w_k(x)-w_k(y)|^3|\varphi_k(x)-\varphi_k(y)|}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\[3pt] \label{proutauchocolat}&\leqslant C [w_k]^3_{W^{s/3,6}(D_{1/16})}[\varphi_k]_{H^s(D_{1/16})}\,, \end{align} where $[\cdot]_{W^{s/3,6}(D_{1/16})}$ denotes the $W^{s/3,6}(D_{1/16})$-seminorm (i.e., of the Sobolev-Slobodeckij space, see \eqref{defWspseminorm}). 
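For the reader's convenience, let us make the exponent bookkeeping behind \eqref{proutauchocolat} explicit (with the seminorm convention of \eqref{defWspseminorm}): since $n+6\cdot(s/3)=n+2s$, splitting the kernel as $|x-y|^{n+2s}=|x-y|^{(n+2s)/2}\,|x-y|^{(n+2s)/2}$ and applying the Cauchy--Schwarz inequality gives \begin{multline*} \iint_{D_{1/16}\times D_{1/16}}\frac{|w_k(x)-w_k(y)|^3|\varphi_k(x)-\varphi_k(y)|}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ \leqslant \left(\iint_{D_{1/16}\times D_{1/16}}\frac{|w_k(x)-w_k(y)|^6}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\right)^{1/2} \left(\iint_{D_{1/16}\times D_{1/16}}\frac{|\varphi_k(x)-\varphi_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\right)^{1/2}\\ = [w_k]^3_{W^{s/3,6}(D_{1/16})}\,[\varphi_k]_{H^s(D_{1/16})}\,, \end{multline*} which is precisely \eqref{proutauchocolat}.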
Recalling our notation $\triangle_k:=w_k-w$ and the fact that $0\leqslant \zeta\leqslant 1$, we have \begin{align} \nonumber [\varphi_k]^2_{H^s(D_{1/16})} &\leqslant C\left([\triangle_k]^2_{H^s(D_{1/16})}+ \iint_{D_{1/16}\times D_{1/16}}\frac{|\zeta(x)-\zeta(y)|^2|\triangle_k(x)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\right)\\ \nonumber &\leqslant C\left([\triangle_k]^2_{H^s(D_{1/16})}+ \iint_{D_{1/16}\times D_{1/16}}\frac{|\triangle_k(x)|^2}{|x-y|^{n+2s-2}}\,{\rm d} x{\rm d} y\right)\\ \label{nocluphikhs} &\leqslant C\Big( \mathcal{E}_s(\triangle_k,D_{1/16})+\|\triangle_k\|^2_{L^2(D_{1/16})}\Big)\leqslant C\,. \end{align} To estimate $ [w_k]_{W^{s/3,6}(D_{1/16})}$, we proceed as follows. First, we fix a further cut-off function $\eta\in \mathscr{D}(D_{1/8})$ satisfying $0\leqslant \eta\leqslant 1$, $\eta\equiv 1$ in $D_{1/16}$, and $|\nabla \eta|\leqslant C$. Then we apply Corollary \ref{coroinjQspaces} (in Appendix \ref{appQspaces}) to $\eta w_k$ to derive \begin{equation}\label{mardcpamorreystuff} [w_k]^2_{W^{s/3,6}(D_{1/16})}= [\eta w_k]^2_{W^{s/3,6}(D_{1/16})} \leqslant C\Big(\sup_{D_r(\bar x)\subseteq\mathbb{R}^n}\frac{1}{r^{n-2s}}\,[\eta w_k]^2_{H^s(D_r(\bar x))}\Big)\,, \end{equation} and it remains to estimate the right hand side of \eqref{mardcpamorreystuff}. To this purpose, we need to distinguish different types of balls: \vskip3pt {\sl Case 1:} $\bar x\in D_{3/16}$ and $0< r \leqslant 1/16$. Arguing as in \eqref{nocluphikhs}, we obtain \begin{align*} [\eta w_k]^2_{H^s(D_{r}(\bar x))} & \leqslant C\left([w_k]^2_{H^s(D_{r}(\bar x))} + \iint_{D_{r}(\bar x)\times D_{r}(\bar x)}\frac{|w_k(x)|^2}{|x-y|^{n+2s-2}}\,{\rm d} x{\rm d} y \right)\\ & \leqslant C\Big([w_k]^2_{H^s(D_{r}(\bar x))} + r^{2-2s}\|w_k\|^2_{L^2(D_r(\bar x))} \Big)\,. 
\end{align*} Applying H\"older's inequality in the case $n\geqslant 3$, we obtain \begin{equation}\label{mercrbefiep1559} [\eta w_k]^2_{H^s(D_{r}(\bar x))} \leqslant \begin{cases} C\Big([w_k]^2_{H^s(D_{r}(\bar x))} + r^{n-2s}\|w_k\|^{2}_{L^n(D_r(\bar x))} \Big) & \text{if $n\geqslant 3$}\\[5pt] C\Big([w_k]^2_{H^s(D_{r}(\bar x))} + r^{2-2s}\|w_k\|^2_{L^2(D_r(\bar x))} \Big) & \text{if $n\leqslant 2$}\,. \end{cases} \end{equation} Let us now recall that $r\mapsto\boldsymbol{\Theta}_s(w_k^{\rm e}, {\bf x},r)$ is nondecreasing for every ${\bf x}\in\partial^0B^+_{1}$ (see Step 4). By the proof of Lemma \ref{cutoffbmo1}, Step 1 (applied to $w_k^{\rm e}$), we have \begin{equation}\label{bmowkjuly} [w_k]_{{\rm BMO}(D_{7/16})}\leqslant C\sqrt{\mathbf{E}_s(w^{\rm e}_k,B^+_{1/2})}\leqslant C\sqrt{\mathcal{E}_s(w_k,D_1)}\leqslant C \,, \end{equation} where we have used Lemma \ref{hatH1/2toH1} in the last inequality. In case $n\geqslant 3$, we apply the John-Nirenberg inequality in Lemma \ref{JohnNir} and use the fact that $D_r(\bar x)\subseteq D_{7/16}$, to derive \begin{multline}\label{argbdlebwklundjul} \|w_k\|_{L^n(D_r(\bar x))}\leqslant \|w_k\|_{L^n(D_{7/16})} \leqslant \big\|w_k-(w_k)_{0,7/16}\big\|_{L^n(D_{7/16})}+C\|w_k\|_{L^1(D_{7/16})}\\ \leqslant C \big([w_k]_{{\rm BMO}(D_{7/16})} + \|w_k\|_{L^2(D_{7/16})}\big)\leqslant C\,. \end{multline} Back to \eqref{mercrbefiep1559} and in view of Lemma \ref{HsregtraceH1weight}, we have thus proved that \begin{multline*} [\eta w_k]^2_{H^s(D_{r}(\bar x))} \leqslant C\big( [w_k]^2_{H^s(D_{r}(\bar x))} + r^{n-2s}\big)\\ \leqslant C\Big( \mathbf{E}_s\big(w^{\rm e}_k,B^+_{2r}(\bar {\bf x})\big) + r^{n-2s}\Big)\leqslant Cr^{n-2s}\big(\boldsymbol{\Theta}_s(w_k^{\rm e},\bar {\bf x},2r)+1\big)\,, \end{multline*} with $\bar {\bf x}:=(\bar x,0)$. 
Then the monotonicity of $r\mapsto \boldsymbol{\Theta}_s(w_k^{\rm e},\bar {\bf x},2r)$ together with Lemma \ref{hatH1/2toH1} yields \begin{multline*} \frac{1}{r^{n-2s}} [\eta w_k]^2_{H^s(D_{r}(\bar x))} \leqslant C \big(\boldsymbol{\Theta}_s(w_k^{\rm e},\bar {\bf x},1/8)+1\big)\\ \leqslant C \big({\bf E}_s(w_k^{\rm e},B^+_{1/2})+1\big)\leqslant C \big(\mathcal{E}_s(w_k,D_1)+1\big)\leqslant C\,. \end{multline*} \vskip3pt {\sl Case 2:} $\bar x\not\in D_{3/16}$ and $0< r \leqslant 1/16$. This case is trivial since $\eta w_k \equiv 0$ in $D_r(\bar x)$. \vskip3pt {\sl Case 3:} $\bar x\in \mathbb{R}^n$ and $r>1/16$. Since $\eta w_k$ is supported in $D_{1/8}$ and $0\leqslant \eta\leqslant 1$, we have (recall that $n-2s\geqslant 0$) \begin{align*} \frac{1}{r^{n-2s}} [\eta w_k]^2_{H^s(D_{r}(\bar x))} &\leqslant 16^{2s-n}[\eta w_k]^2_{H^s(\mathbb{R}^n)}\\ &\leqslant C\Big( [\eta w_k]^2_{H^s(D_{1/4})}+\iint_{D_{1/8}\times D^c_{1/4}}\frac{|\eta(x)w_k(x)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\Big)\\ &\leqslant C\big( [\eta w_k]^2_{H^s(D_{1/4})}+\|w_k\|^2_{L^2(D_{1/8})}\big)\,. \end{align*} Arguing as in \eqref{nocluphikhs}, we obtain $$ [\eta w_k]^2_{H^s(D_{1/4})}\leqslant C\big(\mathcal{E}_s(w_k,D_{1/4})+\|w_k\|^2_{L^2(D_{1/4})}\big)\,,$$ and thus \begin{equation}\label{zutbiblioberlin} \frac{1}{r^{n-2s}} [\eta w_k]^2_{H^s(D_{r}(\bar x))} \leqslant C\big(\mathcal{E}_s(w_k,D_{1})+\|w_k\|^2_{L^2(D_{1})}\big)\leqslant C\,. \end{equation} \vskip5pt Gathering Cases 1, 2, and 3 above, we have proved that the right hand side of \eqref{mardcpamorreystuff} remains bounded independently of $k$. We can now conclude from \eqref{mardcpamorreystuff} that $[w_k]_{W^{s/3,6}(D_{1/16})}\leqslant C$. In view of \eqref{proutauchocolat} and \eqref{nocluphikhs}, we have thus obtained that \begin{equation}\label{firstpieceR2k} |I_k|\leqslant C\,, \end{equation} and it only remains to estimate the term $II_k$ (defined in \eqref{defofIIkinR2k}). 
First, we trivially have \begin{align} \nonumber |II_k|&\leqslant \iint_{D_{1/20}\times D^c_{1/16}} \frac{|w_k(x)-w_k(y)|^3}{|x-y|^{n+2s}}|\triangle_k(x)|\,{\rm d} x{\rm d} y \\ \nonumber &\leqslant 4\iint_{D_{1/20}\times D^c_{1/16}} \frac{|w_k(x)|^3}{|x-y|^{n+2s}}|\triangle_k(x)|\,{\rm d} x{\rm d} y \\ \label{almfinlundjul1} &\qquad\qquad\qquad\qquad\qquad\qquad\qquad +4\iint_{D_{1/20}\times D^c_{1/16}} \frac{|w_k(y)|^3}{|x-y|^{n+2s}}|\triangle_k(x)|\,{\rm d} x{\rm d} y\,. \end{align} On the other hand, \begin{multline*} \iint_{D_{1/20}\times D^c_{1/16}} \frac{|w_k(x)|^3}{|x-y|^{n+2s}}|\triangle_k(x)|\,{\rm d} x{\rm d} y\leqslant C\int_{D_{1/20}} |w_k(x)|^3|\triangle_k(x)|\,{\rm d} x\\ \leqslant C\|w_k\|^3_{L^6(D_{1/20})}\|\triangle_k\|_{L^2(D_{1})}\,. \end{multline*} Recalling from \eqref{bmowkjuly} that $\{w_k\}$ is bounded in ${\rm BMO}(D_{7/16})$, we can argue as in \eqref{argbdlebwklundjul} to infer that $\{w_k\}$ is bounded in $L^6(D_{1/20})$. Hence, \begin{equation}\label{almfinlundjul2} \iint_{D_{1/20}\times D^c_{1/16}} \frac{|w_k(x)|^3}{|x-y|^{n+2s}}|\triangle_k(x)|\,{\rm d} x{\rm d} y\leqslant C \|\triangle_k\|_{L^2(D_{1})}\,. \end{equation} Since $|u_k|\equiv1$, we have $|w_k|\leqslant 2/\varepsilon_k$, and consequently \begin{align} \nonumber \iint_{D_{1/20}\times D^c_{1/16}} \frac{|w_k(y)|^3}{|x-y|^{n+2s}}|\triangle_k(x)|\,{\rm d} x{\rm d} y& \leqslant \frac{2}{\varepsilon_k}\int_{D_{1/20}}\left(\int_{D^c_{1/16}}\frac{|w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y\right)|\triangle_k(x)|\,{\rm d} x\\ \nonumber &\leqslant \frac{C}{\varepsilon_k}\int_{D_{1/20}}\left(\int_{\mathbb{R}^n}\frac{|w_k(y)|^2}{(|y|+1)^{n+2s}}\,{\rm d} y\right)|\triangle_k(x)|\,{\rm d} x\\ \label{almfinlundjul3} & \leqslant \frac{C}{\varepsilon_k}\Big(\mathcal{E}_s(w_k,D_1)+\|w_k\|^2_{L^2(D_1)}\Big)\|\triangle_k\|_{L^2(D_{1})}\,, \end{align} where we have used Lemma \ref{adminHchap} in the last inequality. 
Combining \eqref{almfinlundjul1}, \eqref{almfinlundjul2}, and \eqref{almfinlundjul3}, we obtain the estimate \begin{equation}\label{secondpieceR2k} |II_k|\leqslant C\varepsilon_k^{-1}\|\triangle_k\|_{L^2(D_{1})}= o(\varepsilon_k^{-1})\,. \end{equation} In view of \eqref{decompofR2k}, \eqref{firstpieceR2k}, and \eqref{secondpieceR2k}, we have thus proved that $$R_k^{(2)}=o(\varepsilon_k)\,, $$ and thus \eqref{vanishingofR2k} holds, which completes the whole proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thmepsregholder}] Rescaling variables, we can assume that $R=2$. We need to distinguish the two cases $n\geqslant 2s$, and $n=1$ with $s\in(1/2,1)$. \vskip3pt \noindent{\it Case 1: $n\geqslant 2s$.} We choose $\boldsymbol{\varepsilon}_0:=2^{2s-n}\boldsymbol{\varepsilon}_*$ where $\boldsymbol{\varepsilon}_*=\boldsymbol{\varepsilon}_*(n,s)>0$ is the constant provided by Proposition ~\ref{energimprovprop}. We fix an arbitrary point $x_0\in D_1$, and we observe that condition \eqref{condeps0} implies $$\mathcal{E}_s\big(u,D_1(x_0)\big)\leqslant \mathcal{E}_s(u,D_2)=2^{n-2s}\boldsymbol{\theta}_s(u,0,2)\leqslant \boldsymbol{\varepsilon}_*\,.$$ Setting ${\bf e}:= \mathcal{E}_s(u,D_2)$, Proposition \ref{energimprovprop} then leads to \begin{equation}\label{merjul171648} \frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s\big(u,D_{\boldsymbol{\tau}}(x_0)\big)\leqslant \frac{1}{2} \mathcal{E}_s\big(u,D_1(x_0)\big)\leqslant \frac{1}{2}{\bf e}\,, \end{equation} where $\boldsymbol{\tau}=\boldsymbol{\tau}(n,s)\in(0,1/4)$. Considering the rescaled map $u_{\boldsymbol{\tau}}(x):=u(\boldsymbol{\tau}x+x_0)$, one realizes from \eqref{merjul171648} that $u_{\boldsymbol{\tau}}$ satisfies $\mathcal{E}_s(u_{\boldsymbol{\tau}},D_1)\leqslant\frac{1}{2}\boldsymbol{\varepsilon}_* $, and thus Proposition \ref{energimprovprop} applies. 
Scaling back, this yields $$\frac{1}{(\boldsymbol{\tau}^{n-2s})^2}\mathcal{E}_s\big(u,D_{\boldsymbol{\tau}^2}(x_0)\big)=\frac{1}{\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s(u_{\boldsymbol{\tau}},D_{\boldsymbol{\tau}})\leqslant \frac{1}{2} \mathcal{E}_s(u_{\boldsymbol{\tau}},D_1)=\frac{1}{2\boldsymbol{\tau}^{n-2s}}\mathcal{E}_s\big(u,D_{\boldsymbol{\tau}}(x_0)\big) \leqslant \frac{1}{4}{\bf e}\,.$$ Arguing by induction, we infer that \begin{equation}\label{merjul171648bis} \mathcal{E}_s\big(u,D_{\boldsymbol{\tau}^k}(x_0)\big)\leqslant\frac{\boldsymbol{\tau}^{k(n-2s)}}{2^k}{\bf e} \quad\text{for each $k=0,1,2,3,\ldots$}\,. \end{equation} Let us now fix an arbitrary $r\in(0,1)$, and consider the integer $k$ such that $\boldsymbol{\tau}^{k+1}<r\leqslant\boldsymbol{\tau}^k$. From \eqref{merjul171648bis}, we deduce that $$\frac{1}{r^{n-2s}}\mathcal{E}_s\big(u,D_r(x_0)\big)\leqslant \frac{1}{r^{n-2s}}\mathcal{E}_s\big(u,D_{\boldsymbol{\tau}^k}(x_0)\big)\leqslant \frac{\boldsymbol{\tau}^{2s-n}}{2^k}{\bf e}\leqslant 2\boldsymbol{\tau}^{2s-n}{\bf e}\, r^{2\beta_0} \,,$$ with $2\beta_0:=\log(2)/\log(1/\boldsymbol{\tau})$, where the last inequality follows from $r>\boldsymbol{\tau}^{k+1}$, which gives $r^{2\beta_0}>\boldsymbol{\tau}^{2\beta_0(k+1)}=2^{-(k+1)}$. By the Poincar\'e inequality in $H^s(D_r(x_0))$, it yields $$\frac{1}{r^n}\int_{D_r(x_0)}\big|u-(u)_{x_0,r}\big|^2\,{\rm d} x\leqslant \frac{C}{r^{n-2s}}[u]^2_{H^s(D_r(x_0))}\leqslant \frac{C}{r^{n-2s}}\mathcal{E}_s\big(u,D_r(x_0)\big)\leqslant C{\bf e}\,r^{2\beta_0}\,.$$ In view of the arbitrariness of $r$ and $x_0$, we can apply Campanato's criterion (see e.g. \cite[Theorem I.6.1]{Maggi}), and it yields $u\in C^{0,\beta_0}(D_1)$ with $$|u(x)-u(y)|\leqslant C\sqrt{{\bf e}}\,|x-y|^{\beta_0} \qquad\forall x,y\in D_1\,,$$ which completes the proof. \vskip3pt \noindent{\it Case 2: $n=1$ and $s\in (1/2,1)$.} In this case, we simply choose $\boldsymbol{\varepsilon}_0:=1$, and we invoke Proposition \ref{propHoldsubcritic} whose proof is given below. \end{proof} \begin{proof}[Proof of Proposition \ref{propHoldsubcritic}] Rescaling variables, we can assume that $R=1$. 
Without loss of generality, we can also assume that $u$ has a vanishing average over $D_1$. We fix a cut-off function $\zeta\in\mathscr{D}(D_{3/4})$ such that $0\leqslant \zeta\leqslant 1$ and $\zeta=1$ in $D_{1/2}$. Arguing as in \eqref{zutbiblioberlin}, we obtain that $\zeta u\in H^{s}(\mathbb{R};\mathbb{R}^d)$ with \begin{equation}\label{prfholdembed1} [\zeta u]^2_{H^s(\mathbb{R})}\leqslant C \big(\mathcal{E}_s(u,D_1)+\|u\|^2_{L^2(D_1)}\big) \,. \end{equation} On the other hand, by the continuous embedding $H^s(\mathbb{R})\hookrightarrow C^{0,s-1/2}(\mathbb{R})$ for $s>1/2$ (see e.g. \cite[Theorem~1.4.4.1]{G}), we have \begin{equation}\label{prfholdembed2} [\zeta u]^2_{C^{0,s-1/2}(\mathbb{R})}\leqslant C\big([\zeta u]^2_{H^s(\mathbb{R})} + \|\zeta u\|^2_{L^2(\mathbb{R})}\big) \leqslant C\big([\zeta u]^2_{H^s(\mathbb{R})} + \|u\|^2_{L^2(D_1)}\big)\,. \end{equation} Combining \eqref{prfholdembed2} with \eqref{prfholdembed1} and applying Poincar\'e's inequality in $H^s(D_1)$, we derive that $$[u]^2_{C^{0,s-1/2}(D_{1/2})}\leqslant [\zeta u]^2_{C^{0,s-1/2}(\mathbb{R})} \leqslant C \big(\mathcal{E}_s(u,D_1)+\|u\|^2_{L^2(D_1)}\big)\leqslant C \mathcal{E}_s(u,D_1)\,,$$ which completes the proof of \eqref{holdestisubcriticcase1}. \end{proof} \section{Small energy Lipschitz regularity}\label{Highordreg} In this section, our goal is to improve the conclusion of Theorem \ref{thmepsregholder} to Lipschitz continuity, as stated in the following theorem. Higher order regularity will be the object of the next section. \begin{theorem}\label{thmepsregLip} Let $\boldsymbol{\varepsilon}_1=\boldsymbol{\varepsilon}_1(n,s)>0$ be the constant given by Corollary \ref{coroepsreghold}. There exists a constant $\boldsymbol{\kappa}_2=\boldsymbol{\kappa}_2(n,s)\in(0,1)$ such that the following holds. 
Let $u\in \widehat H^s(D_{2R};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{2R}$ such that the function $r\in(0,2R-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x},r)$ is nondecreasing for every ${\bf x}\in\partial^0B_{2R}^+$. If \begin{equation}\label{condeps0Lip} \boldsymbol{\Theta}_s(u^{\rm e},0,R)\leqslant\boldsymbol{\varepsilon}_1\,, \end{equation} then $u\in C^{0,1}(D_{\boldsymbol{\kappa}_2R})$ and $$R^2\|\nabla u\|^2_{L^\infty(D_{\boldsymbol{\kappa}_2R})}\leqslant C \boldsymbol{\Theta}_s(u^{\rm e},0,R)\,, $$ for a constant $C=C(n,s)$. \end{theorem} The proof of Theorem \ref{thmepsregLip} consists in considering the system satisfied by the $\mathbb{S}^{d-1}$-valued map $u^{\rm e}/|u^{\rm e}|$. By Corollary \ref{coroepsreghold}, $u^{\rm e}$ is H\"older continuous, and therefore $|u^{\rm e}|\geqslant 1/2$ in a smaller half ball $B_r^+$. In particular, $v:=u^{\rm e}/|u^{\rm e}|$ is well defined and H\"older continuous in $B_r^+$. We shall see that it satisfies in the weak sense the degenerate system with {\sl homogeneous} Neumann boundary condition \begin{equation}\label{strongfromsystumodu} \begin{cases} -{\rm div}\big(z^a\rho^2\nabla v\big)= z^a\rho^2|\nabla v|^2 v & \text{in $B^+_r$}\,,\\[5pt] \displaystyle z^a\rho^2\frac{\partial v}{\partial\nu}=0 & \text{on $\partial^0B^+_r$}\,, \end{cases} \end{equation} with H\"older continuous weight $\rho^2:=|u^{\rm e}|^2$. Up to the extra weight term $\rho^2$, this system fits into the class of degenerate harmonic map systems with free boundary considered in \cite{Rob}. Adjusting the arguments in \cite{Rob} to take care of the extra weight $\rho^2$, we shall prove that $v$ is Lipschitz continuous in an even smaller half ball. Since $u^{\rm e}=v$ on $\partial^0B^+_r$, the conclusion will follow straight away. \subsection{Proof of Theorem \ref{thmepsregLip}} The aforementioned Lipschitz estimate on the map $u^{\rm e}/|u^{\rm e}|$ is the object of the following proposition. 
\begin{proposition}\label{themholdimplLip} Let $u\in \widehat H^s(D_{2R};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{2R}$. Assume that $u^{\rm e}\in C^{0,\beta}(B^+_R)$ for some exponent $\beta\in(0,1)$, and that $|u^{\rm e}|\geqslant 1/2$ in $B_R^+$. Setting $\eta:=R^\beta[u^{\rm e}]_{C^{0,\beta}(B^+_R)}$, the map $u^{\rm e}/|u^{\rm e}|$ is Lipschitz continuous in $\overline B^+_{R/3}$, and $$R^2 \|\nabla \big(u^{\rm e}/|u^{\rm e}|\big)\|^2_{L^\infty(B^+_{R/3})}\leqslant C_{\eta,\beta} \boldsymbol{\Theta}_s(u^{\rm e},0,R)\,,$$ for a constant $C_{\eta,\beta}=C_{\eta,\beta}(\eta,\beta,n,s)$. \end{proposition} Before proving this proposition, we need to show that $u^{\rm e}/|u^{\rm e}|$ satisfies system \eqref{strongfromsystumodu} in the weak sense. \begin{lemma}\label{lemmodphase} Let $u\in \widehat H^s(D_{2R};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{2R}$. Assume that $\rho:=|u^{\rm e}|$ satisfies $\rho\geqslant 1/2$ a.e. in $B_R^+$. Then the map $v:=u^{\rm e}/\rho$ belongs to $H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ and it satisfies $$\int_{B_R^+}z^a\rho^2\nabla v\cdot\nabla \phi \,{\rm d} {\bf x}=\int_{B_R^+}z^a\rho^2|\nabla v|^2v\cdot\phi\,{\rm d}{\bf x}$$ for every $\phi\in H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R^+)$ such that $\phi=0$ on $\partial^+B_R$. \end{lemma} \begin{proof} First recall from \eqref{bdlinftyext} and Lemma \ref{hatH1/2toH1} that $u^{\rm e}\in H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(\mathbb{R}_+^{n+1})$, and consequently, $\rho\in H^1(B_R^+,|z|^a{\rm d}{\bf x})\cap L^\infty(\mathbb{R}_+^{n+1})$. By assumption $\rho\geqslant 1/2$, so that $1/\rho\in H^1(B_R^+,|z|^a{\rm d}{\bf x})\cap L^\infty(\mathbb{R}_+^{n+1})$. The space $H^1(B_R^+,|z|^a{\rm d}{\bf x})\cap L^\infty(\mathbb{R}_+^{n+1})$ being an algebra, it follows that $v\in H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$, and by definition $|v|=1$ a.e. in $B_R^+$. 
Let us now fix $\Phi\in H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R^+)$ such that $\Phi=0$ on $\partial^+B_R$. Again, $H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R^+)$ being an algebra, $\psi:=\Phi-(\Phi\cdot v)v\in H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R^+)$. It also satisfies $\psi=0$ on $\partial^+B_R$, and by construction, we have $v\cdot\psi=0$ a.e. in $B_R^+$. Now we consider $\xi:=\rho\psi\in H^1(B_R^+;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R^+)$, which still satisfies $\xi=0$ on $\partial^+B_R$, and $u^{\rm e}\cdot\xi=0$ in $B_R^+$. In particular, $u\cdot \xi=0$ on $\partial^0B_R^+$. By Proposition \ref{equivsharmfreebdry}, the map $u^{\rm e}$ is a weighted weakly harmonic map with free boundary in the half ball $B_R^+$, i.e., it satisfies \eqref{vareqext}. Hence, \begin{equation}\label{2334vend19} \int_{B_R^+}z^a\nabla u^{\rm e}\cdot\nabla\xi\,{\rm d} {\bf x}=0\,. \end{equation} On the other hand, $\partial_i u^{\rm e}=\partial_i\rho v + \rho\partial_i v$ and $\partial_i \xi= \partial_i\rho \psi+\rho\partial_i \psi$ in $B_R^+$ for $i=1,\ldots,n+1$. Then we notice that $v\cdot\psi=0$ implies $v\cdot\partial_i\psi=-\partial_iv\cdot\psi$ in $B_R^+$ for $i=1,\ldots,n+1$. In the same way, the fact that $|v|^2=1$ leads to $v\cdot\partial_iv=0$ in $B_R^+$ for $i=1,\ldots,n+1$. As a consequence, $$\partial_iu^{\rm e}\cdot\partial_i\xi=\big(\partial_i\rho v+\rho\partial_iv\big)\cdot\big(\partial_i\rho\psi+\rho\partial_i\psi\big) =\rho^2\partial_i v\cdot\partial_i\psi\quad\text{a.e. in $B_R^+$}\,,$$ for $i=1,\ldots,n+1$. Inserting this identity in \eqref{2334vend19} yields \begin{equation}\label{preeqphasesam20} \int_{B_R^+}z^a\rho^2\nabla v\cdot\nabla\psi\,{\rm d} {\bf x}=0\,. 
\end{equation} To conclude, we notice that $$\partial_iv\cdot\partial_i\psi=\partial_iv\cdot\big(\partial_i\Phi-(v\cdot\Phi)\partial_iv-(\partial_i v\cdot\Phi+v\cdot\partial_i\Phi)v\big)=\partial_iv\cdot\partial_i\Phi -|\partial_iv|^2v\cdot\Phi\quad\text{a.e. in $B_R^+$}\,,$$ for $i=1,\ldots,n+1$. Using this last identity in \eqref{preeqphasesam20} leads to the announced conclusion. \end{proof} As usual, to deal with the homogeneous Neumann boundary condition, we extend the equation to the whole ball by symmetry. In this way, proving estimates up to the boundary reduces to proving interior estimates. \begin{corollary}\label{eqsymtrizedphase} Let $u\in \widehat H^s(D_{2R};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{2R}$. Assume that $|u^{\rm e}|\geqslant 1/2$ a.e. in $B_R^+$. Then the function $\rho$ and the map $v$ defined by \begin{equation}\label{defmodrhosym} \rho({\bf x}):=\begin{cases} |u^{\rm e}(x,z)| & \text{if ${\bf x}=(x,z)\in B_R^+$}\\ |u^{\rm e}(x,-z)| & \text{if ${\bf x}=(x,z)\in B_R^-$} \end{cases} \end{equation} and \begin{equation}\label{defphsevsym} v({\bf x}):= \begin{cases} u^{\rm e}(x,z)/\rho({\bf x}) & \text{if ${\bf x}=(x,z)\in B_R^+$}\\ u^{\rm e}(x,-z)/\rho({\bf x}) & \text{if ${\bf x}=(x,z)\in B_R^-$} \end{cases} \end{equation} belong to $H^1(B_R,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R)$ and $H^1(B_R;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R)$ respectively, and \begin{equation}\label{eqsymmetrizedphse} \int_{B_R}|z|^a\rho^2\nabla v\cdot\nabla\Phi\,{\rm d}{\bf x}=\int_{B_R}|z|^a\rho^2|\nabla v|^2v\cdot\Phi\,{\rm d}{\bf x} \end{equation} holds for every $\Phi\in H^1(B_R;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R)$ such that $\Phi=0$ on $\partial B_R$. 
\end{corollary} \begin{proof} The fact that $\rho$ and $v$ belong to $H^1(B_R,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R)$ and $H^1(B_R;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R)$ respectively follows from Lemma \ref{lemmodphase} together with the symmetry with respect to the hyperplane $\{z=0\}$. We now consider an arbitrary $\Phi\in H^1(B_R;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R)$ satisfying $\Phi=0$ on $\partial B_R$. We split $\Phi$ into its symmetric and anti-symmetric parts defined by $$\Phi^s(x,z):=\frac{\Phi(x,z)+\Phi(x,-z)}{2}\quad\text{and}\quad \Phi^a(x,z):=\frac{\Phi(x,z)-\Phi(x,-z)}{2}\,. $$ Clearly, $\Phi^s,\Phi^a\in H^1(B_R;\mathbb{R}^d,|z|^a{\rm d}{\bf x})\cap L^\infty(B_R)$ and $\Phi^s=\Phi^a=0$ on $\partial B_R$. By construction, we have $\Phi^s(x,-z)=\Phi^s(x,z)$ and $\Phi^a(x,-z)=-\Phi^a(x,z)$, so that $\partial_z \Phi^s(x,z)=-\partial_z \Phi^s(x,-z)$ and $\partial_z \Phi^a(x,z)=\partial_z \Phi^a(x,-z)$. The map $v$ being symmetric with respect to $\{z=0\}$, it also satisfies $\partial_z v(x,z)=-\partial_zv(x,-z)$. Therefore, $$(\nabla v\cdot\nabla\Phi^s)(x,z)=(\nabla v\cdot\nabla\Phi^s)(x,-z) \quad\text{and}\quad (\nabla v\cdot\nabla\Phi^a)(x,z)=-(\nabla v\cdot\nabla\Phi^a)(x,-z) \,.$$ As a first consequence, \begin{equation}\label{antisym1} \int_{B_R}|z|^a\rho^2\nabla v\cdot\nabla\Phi^a\,{\rm d}{\bf x}=0\,. \end{equation} Since $(v\cdot\Phi^a)(x,-z)=-(v\cdot\Phi^a)(x,z)$, we also have \begin{equation}\label{antisym2} \int_{B_R}|z|^a\rho^2|\nabla v|^2v\cdot\Phi^a\,{\rm d}{\bf x} =0\,. \end{equation} Then we infer from Lemma \ref{lemmodphase} that \begin{multline}\label{symparteq} \int_{B_R}|z|^a\rho^2\nabla v\cdot\nabla\Phi^s\,{\rm d}{\bf x}=2\int_{B^+_R}z^a\rho^2\nabla v\cdot\nabla\Phi^s\,{\rm d}{\bf x}\\ = 2\int_{B^+_R}z^a\rho^2|\nabla v|^2v\cdot\Phi^s\,{\rm d}{\bf x} =\int_{B_R}|z|^a\rho^2|\nabla v|^2v\cdot\Phi^s\,{\rm d}{\bf x}\,. 
\end{multline} Gathering \eqref{antisym1}, \eqref{antisym2}, and \eqref{symparteq} leads to \eqref{eqsymmetrizedphse}, and the proof is complete. \end{proof} \begin{proof}[Proof of Proposition \ref{themholdimplLip}] Rescaling variables, we can assume without loss of generality that $R=1$. Throughout the proof, we shall write for a measurable set $A\subseteq\mathbb{R}^{n+1}$, $$|A|_a:=\int_{A}|z|^a\,{\rm d}{\bf x}\,,$$ and we notice that for ${\bf y}\in\mathbb{R}^n\times\{0\}$, \begin{equation}\label{weightvolball} |B_r({\bf y})|_a=|B_r|_a=|B_1|_ar^{n+2-2s}\,. \end{equation} We start by applying Corollary \ref{eqsymtrizedphase} to consider the (symmetrized) modulus function $\rho$ and the (symmetrized) phase map $v$ defined by \eqref{defmodrhosym} and \eqref{defphsevsym}, respectively. Since $u^{\rm e}$ belongs to $C^{0,\beta}(B_1^+)$ and $|u^{\rm e}|\geqslant 1/2$ in $B_1^+$, it follows that $v\in C^{0,\beta}(B_1)$, and $\rho\in C^{0,\beta}(B_1)$ with $\rho\geqslant 1/2$ in $B_1$. By Corollary \ref{eqsymtrizedphase}, $v$ satisfies \eqref{eqsymmetrizedphse}, and from this equation we shall obtain that $v\in C^{0,1}(B_{1/3})$. We proceed in several steps. \vskip3pt \noindent{\it Step 1.} Let us fix ${\bf y}\in D_{1/2}\times\{0\}$ and $r\in(0,1/2]$. We consider the unique weak solution $w\in H^1(B_r({\bf y});\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ of \begin{equation}\label{eqwlipregproof} \begin{cases} {\rm div}(|z|^a\nabla w)=0 &\text{in $B_r({\bf y})$}\,,\\ w=v & \text{on $\partial B_r({\bf y})$}\,, \end{cases} \end{equation} see Appendix \ref{appendweightharm}. The map $v$ being continuous in $\overline B_r({\bf y})$, it follows from Lemma \ref{maxprincip} that $w\in C^0(\overline B_r({\bf y}))$. Moreover, since $v$ is symmetric with respect to the hyperplane $\{z=0\}$, Lemma \ref{symmharmw} tells us that $w$ is also symmetric with respect to $\{z=0\}$. 
\vskip3pt Now we estimate through Minkowski's inequality, \begin{multline}\label{ppffpasideedutou} \left(\frac{1}{|B_{r/2}|_a}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\right)^{1/2} \leqslant \left(\frac{1}{|B_{r/2}|_a}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla w|^2\,{\rm d}{\bf x}\right)^{1/2} \\ + C\left(\frac{1}{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla (v-w)|^2\,{\rm d}{\bf x}\right)^{1/2} \,, \end{multline} and our first aim is to estimate the two terms in the right hand side of this inequality. From the definition of $\eta$ and the fact that $0\leqslant \rho\leqslant 1$, we have \begin{equation}\label{holdestrhosq} |\rho^2({\bf x})-\rho^2({\bf y})|\leqslant 2\eta|{\bf x}-{\bf y}|^\beta\leqslant C\eta r^\beta\quad\forall{\bf x}\in B_r({\bf y})\,. \end{equation} Consequently, \begin{align} \nonumber \int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla w|^2\,{\rm d}{\bf x}&\leqslant \rho^2({\bf y})\int_{B_{r/2}({\bf y})}|z|^a|\nabla w|^2\,{\rm d}{\bf x}+\int_{B_{r/2}({\bf y})}|z|^a|\rho^2-\rho^2({\bf y})||\nabla w|^2\,{\rm d}{\bf x}\\ \label{estiwcomm2steps} &\leqslant (1+C\eta r^\beta)\int_{B_{r/2}({\bf y})}|z|^a|\nabla w|^2\,{\rm d}{\bf x}\,. \end{align} Since $w$ is symmetric with respect to $\{z=0\}$, we infer from Lemma \ref{monotIharmreplac} and \eqref{weightvolball} that the function $$t\in(0,r]\mapsto \frac{1}{|B_{t}|_a}\int_{B_{t}({\bf y})}|z|^a|\nabla w|^2\,{\rm d}{\bf x}$$ is nondecreasing. Hence, \begin{multline*} \frac{1}{|B_{r/2}|_a}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla w|^2\,{\rm d}{\bf x}\leqslant \frac{(1+C\eta r^\beta)}{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a|\nabla w|^2\,{\rm d}{\bf x}\\ \leqslant \frac{(1+C\eta r^\beta)}{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a|\nabla v|^2\,{\rm d}{\bf x}\,, \end{multline*} where we have used the minimality of $w$ stated in Lemma \ref{minimalityharmonw} in the last inequality. 
Using $\rho({\bf y})=1$ and $\rho\geqslant 1/2$, we now estimate as above, \begin{align*} \int_{B_{r}({\bf y})}|z|^a|\nabla v|^2\,{\rm d}{\bf x} & \leqslant \int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}+\int_{B_{r}({\bf y})}|z|^a|\rho^2-\rho^2({\bf y})||\nabla v|^2\,{\rm d}{\bf x}\\ &\leqslant (1+C\eta r^\beta) \int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\,, \end{align*} to reach \begin{equation}\label{firstpiecelipregbdry} \frac{1}{|B_{r/2}|_a}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla w|^2\,{\rm d}{\bf x}\leqslant \frac{(1+C\eta r^\beta)^2}{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\,. \end{equation} Next, we recall that $v-w\in H^1(B_r({\bf y});\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ satisfies $v-w=0$ on $\partial B_r({\bf y})$. Hence, we can apply Corollary \ref{eqsymtrizedphase} to deduce that \begin{align} \nonumber \int_{B_{r}({\bf y})}|z|^a\rho^2&|\nabla (v-w)|^2\,{\rm d}{\bf x}=\int_{B_{r}({\bf y})}|z|^a\rho^2\nabla v\cdot \nabla (v-w)\,{\rm d}{\bf x}-\int_{B_{r}({\bf y})}|z|^a\rho^2\nabla w\cdot \nabla (v-w)\,{\rm d}{\bf x}\\ \label{dim21071936} &=\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2 v\cdot (v-w)\,{\rm d}{\bf x}-\int_{B_{r}({\bf y})}|z|^a\rho^2\nabla w\cdot \nabla (v-w)\,{\rm d}{\bf x}\,. \end{align} On the other hand, the equation \eqref{eqwlipregproof} satisfied by $w$ yields \begin{align} \nonumber\int_{B_{r}({\bf y})}|z|^a\rho^2\nabla w\cdot \nabla (v-w)\,{\rm d}{\bf x}&=\rho^2({\bf y})\int_{B_{r}({\bf y})}|z|^a\nabla w\cdot \nabla (v-w)\,{\rm d}{\bf x}\\ \nonumber &\qquad \qquad\qquad + \int_{B_{r}({\bf y})}|z|^a\big(\rho^2-\rho^2({\bf y})\big)\nabla w\cdot \nabla (v-w)\,{\rm d}{\bf x}\\ \label{ctjrspasdim07}&= \int_{B_{r}({\bf y})}|z|^a\big(\rho^2-\rho^2({\bf y})\big)\nabla w\cdot \nabla (v-w)\,{\rm d}{\bf x}\,. 
\end{align} By \eqref{holdestrhosq} and the minimality of $w$, we have \begin{align} \nonumber\left| \int_{B_{r}({\bf y})}|z|^a\big(\rho^2-\rho^2({\bf y})\big)\nabla w\cdot \nabla (v-w)\,{\rm d}{\bf x}\right| & \leqslant C\eta r^\beta \int_{B_{r}({\bf y})}|z|^a|\nabla w||\nabla (v-w)|\,{\rm d}{\bf x}\\ \nonumber&\leqslant C\eta r^\beta \int_{B_{r}({\bf y})}|z|^a\big(|\nabla w|^2 +|\nabla v|^2\big)\,{\rm d}{\bf x}\\ \nonumber &\leqslant C\eta r^\beta \int_{B_{r}({\bf y})}|z|^a|\nabla v|^2\,{\rm d}{\bf x}\\ \label{pasideedim07}&\leqslant C\eta r^\beta \int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\,, \end{align} where we have used that $\rho\geqslant 1/2$ in the last inequality. Combining \eqref{dim21071936}, \eqref{ctjrspasdim07}, \eqref{pasideedim07}, and using that $|v|=1$, we infer that \begin{equation}\label{ahbahcrottejsp} \int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla (v-w)|^2\,{\rm d}{\bf x} \leqslant \big(\|v-w\|_{L^\infty(B_r({\bf y}))}+C\eta r^\beta\big)\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\,. \end{equation} Let us now bound $\|v-w\|_{L^\infty(B_r({\bf y}))}$. First, notice that for ${\bf x}\in B_r({\bf y})$, \begin{equation}\label{pasunpoildideedim1631} |v({\bf x})-w({\bf x})|\leqslant |v({\bf x})-v({\bf y})|+|w({\bf x})-v({\bf y})|\leqslant C\eta r^\beta+ |w({\bf x})-v({\bf y})|\,. \end{equation} Next we observe that for each $i=1,\ldots,d$, the scalar function $w^i-v^i({\bf y})\in H^1(B_r({\bf y}),|z|^a{\rm d}{\bf x})$ satisfies in the weak sense $$\begin{cases} {\rm div}\big(|z|^a\nabla(w^i-v^i({\bf y}))\big)= 0 & \text{in $B_r({\bf y})$}\,,\\ w^i-v^i({\bf y})= v^i-v^i({\bf y}) & \text{on $\partial B_r({\bf y})$}\,. \end{cases}$$ It then follows from Lemma \ref{maxprincip} that for each $i=1,\ldots,d$, $$\|w^i-v^i({\bf y})\|_{L^\infty(B_r({\bf y}))}\leqslant \|v^i-v^i({\bf y})\|_{L^\infty(\partial B_r({\bf y}))}\leqslant \|v-v({\bf y})\|_{L^\infty(\partial B_r({\bf y}))}\leqslant C\eta r^\beta\,. 
$$ Back to \eqref{pasunpoildideedim1631}, we have thus obtained $$\|w-v\|_{L^\infty(B_r({\bf y}))} \leqslant C\eta r^\beta\,.$$ Using this estimate in \eqref{ahbahcrottejsp}, we derive that \begin{equation}\label{secondpieceestireglipbdry} \int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla (v-w)|^2\,{\rm d}{\bf x} \leqslant C\eta r^\beta\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\,. \end{equation} Now, inserting estimates \eqref{firstpiecelipregbdry} and \eqref{secondpieceestireglipbdry} in \eqref{ppffpasideedutou}, and then squaring both sides of the resulting inequality, we are led to $$ \frac{1}{|B_{r/2}|_a}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\leqslant \frac{(1+C_\eta r^{\beta/2}) }{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \,,$$ for a constant $C_\eta=C_\eta(\eta,n,s)$. Iterating this inequality along dyadic radii $r_k:=2^{-k}$ with $k\geqslant 1$, we deduce that \begin{multline}\label{mard33july} \frac{1}{|B_{r_{k+1}}|_a}\int_{B_{r_{k+1}}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant \Big(\prod_{j=1}^k(1+C_\eta 2^{-j\beta/2})\Big) \frac{1}{|B_{1/2}|_a}\int_{B_{1/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \\ \leqslant C_{\eta,\beta} \int_{B_{1}}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \,, \end{multline} for a constant $C_{\eta,\beta}=C_{\eta,\beta}(\eta,\beta,n,s)$. 
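Let us record why the dyadic product in \eqref{mard33july} is bounded uniformly with respect to $k$: by the elementary inequality $1+x\leqslant e^x$ for $x\geqslant 0$,
$$\prod_{j=1}^k\big(1+C_\eta 2^{-j\beta/2}\big)\leqslant \exp\Big(C_\eta\sum_{j=1}^{\infty}2^{-j\beta/2}\Big)=\exp\Big(\frac{C_\eta}{2^{\beta/2}-1}\Big)\,,$$
and the right-hand side depends only on $\eta$, $\beta$, $n$, and $s$.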
Next, for an arbitrary radius $r\in(0,1/2]$, we consider the integer $k\geqslant 1$ satisfying $r_{k+1} <r\leqslant r_k$, and estimate $$ \frac{1}{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant \frac{2^{n+2-2s}}{|B_{r_k}|_a}\int_{B_{r_k}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \,,$$ to conclude from \eqref{mard33july} and the symmetry of $v$ and $\rho$ with respect to $\{z=0\}$ that $$\frac{1}{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant C_{\eta,\beta} \int_{B^+_{1}}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\quad\forall r\in(0,1/2]\,.$$ Noticing that $|\nabla u^{\rm e}|^2=|\nabla \rho|^2+\rho^2|\nabla v|^2$, and in view of the arbitrariness of ${\bf y}$, we have thus proved that \begin{equation}\label{gfaimadonfpoilonez} \frac{1}{|B_{r}|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant C_{\eta,\beta} \int_{B^+_{1}}|z|^a|\nabla u^{\rm e}|^2\,{\rm d}{\bf x}\quad\forall {\bf y}\in D_{1/2}\times\{0\}\,,\;\forall r\in(0,1/2]\,. \end{equation} \vskip5pt \noindent{\it Step 2.} Our main goal in this step is to obtain an estimate similar to \eqref{gfaimadonfpoilonez} for balls which are not centered at points of $\{z=0\}$. By symmetry of $v$ and $\rho$ with respect to $\{z=0\}$, it is enough to consider balls centered at points of $\mathbb{R}^{n+1}_+$. Let us fix an arbitrary point ${\bf y}=(y,t)\in B^+_{1/3}$, and notice that $\overline B_{t/2}({\bf y})\subseteq B_1^+$ . We also consider an arbitrary radius $r\in(0,t/2]$ (so that $\overline B_r({\bf y})\subseteq B_1^+$). As in Step 1, we introduce the (weak) solution $w\in H^1(B_r({\bf y});\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ of \eqref{eqwlipregproof}. 
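For the reader's convenience, we recall the elementary computation behind the identity $|\nabla u^{\rm e}|^2=|\nabla \rho|^2+\rho^2|\nabla v|^2$ used in Step 1: writing $u^{\rm e}=\rho v$ with $|v|=1$, we have $\partial_i u^{\rm e}=(\partial_i\rho)v+\rho\,\partial_i v$, while differentiating $|v|^2=1$ gives $v\cdot \partial_i v=0$. Hence
$$|\partial_i u^{\rm e}|^2=(\partial_i \rho)^2|v|^2+2\rho\,\partial_i\rho\,(v\cdot\partial_i v)+\rho^2|\partial_i v|^2=(\partial_i \rho)^2+\rho^2|\partial_i v|^2\,,$$
and the identity follows upon summing over $i$.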
Exactly as in \eqref{ppffpasideedutou}, we have \begin{multline}\label{ppffpasideedutouStep2} \left(\Big(\frac{2}{r}\Big)^{n+1}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\right)^{1/2} \leqslant \left(\Big(\frac{2}{r}\Big)^{n+1}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla w|^2\,{\rm d}{\bf x}\right)^{1/2} \\ + C\left(\frac{1}{r^{n+1}}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla (v-w)|^2\,{\rm d}{\bf x}\right)^{1/2} \,. \end{multline} Arguing precisely as in Step 1, we derive that \eqref{secondpieceestireglipbdry} still holds. Then, we estimate as in \eqref{estiwcomm2steps}, \begin{equation}\label{prejecpa1} \int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla w|^2\,{\rm d}{\bf x}\leqslant (\rho^2({\bf y})+C\eta r^\beta)\int_{B_{r/2}({\bf y})}|z|^a|\nabla w|^2\,{\rm d}{\bf x}\,. \end{equation} Applying Lemma \ref{monotIharmreplac2} with $\theta=t/r$ and then the minimality of $w$, we obtain \begin{multline}\label{prejecpa2} \Big(\frac{2}{r}\Big)^{n+1} \int_{B_{r/2}({\bf y})}|z|^a|\nabla w|^2\,{\rm d}{\bf x}\leqslant \Big(1+\frac{Cr}{t}\Big)\frac{1}{r^{n+1}} \int_{B_{r}({\bf y})}|z|^a|\nabla w|^2\,{\rm d}{\bf x}\\ \leqslant \Big(1+\frac{Cr}{t}\Big)\frac{1}{r^{n+1}} \int_{B_{r}({\bf y})}|z|^a|\nabla v|^2\,{\rm d}{\bf x}\,. \end{multline} Combining \eqref{prejecpa1} with \eqref{prejecpa2}, and using again the H\"older continuity of $\rho^2$ (as in \eqref{holdestrhosq}) together with $1/2\leqslant \rho\leqslant 1$, we deduce that \begin{align}\label{presqfinimarjul} \Big(\frac{2}{r}\Big)^{n+1} \int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla w|^2\,{\rm d}{\bf x} \leqslant \Big(1+C(\eta r^\beta +r/t) \Big) \frac{1}{r^{n+1}} \int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\,. 
\end{align} Inserting \eqref{secondpieceestireglipbdry} and \eqref{presqfinimarjul} in \eqref{ppffpasideedutouStep2}, we infer that $$\frac{1}{|B_{r/2}({\bf y})|}\int_{B_{r/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant \frac{1+C_{\eta}(r^{\beta/2}+r/t)}{|B_r({\bf y})|} \int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \,,$$ for a constant $C_{\eta}=C_{\eta}(\eta,n,s)$. Arguing as in Step 1 (using the dyadic radii $r_k:=2^{-k}t$), the arbitrariness of $r\in(0,t/2]$ in this latter estimate implies that \begin{equation}\label{mercrecanic} \frac{1}{|B_r({\bf y})|}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant \frac{C_{\eta,\beta}}{|B_{t/2}({\bf y})|} \int_{B_{t/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\quad\forall r\in(0,t/2]\,, \end{equation} for a constant $C_{\eta,\beta}=C_{\eta,\beta}(\eta,\beta,n,s)$. Then, we notice that for every radius $r\in(0,t/2]$, $$ |B_r({\bf y})|_a\leqslant \begin{cases} t^a(1+r/t)^a|B_r({\bf y})| & \text{if $s\leqslant 1/2$}\,,\\ t^a(1-r/t)^a|B_r({\bf y})| & \text{if $s >1/2$}\,, \end{cases}$$ and $$ |B_r({\bf y})|_a\geqslant \begin{cases} t^a(1-r/t)^a|B_r({\bf y})| & \text{if $s\leqslant 1/2$}\,,\\ t^a(1+r/t)^a|B_r({\bf y})| & \text{if $s >1/2$}\,. \end{cases}$$ Consequently, dividing \eqref{mercrecanic} by $t^a$, we obtain \begin{equation}\label{mercrecanic2} \frac{1}{|B_r({\bf y})|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant \frac{C_{\eta,\beta}}{|B_{t/2}({\bf y})|_a} \int_{B_{t/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\quad\forall r\in(0,t/2]\,. \end{equation} Setting $\widetilde {\bf y}:=(y,0)\in D_{1/3}\times\{0\}$, we now observe that $B_{t/2}({\bf y})\subseteq B^+_{3t/2}(\widetilde{\bf y})$ and $3t/2\leqslant 1/2$.
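Let us briefly justify the passage from \eqref{mercrecanic} to \eqref{mercrecanic2}. Since $r/t\leqslant 1/2$, the two volume estimates above imply that
$$2^{-|a|}\,t^a\,|B_r({\bf y})|\leqslant |B_r({\bf y})|_a\leqslant 2^{|a|}\,t^a\,|B_r({\bf y})|\quad\forall r\in(0,t/2]\,,$$
so that the weighted averages in \eqref{mercrecanic2} are comparable to the unweighted averages in \eqref{mercrecanic} up to a multiplicative factor depending only on $s$, which can be absorbed in the constant $C_{\eta,\beta}$.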
Using the symmetry of $v$ and $\rho$ with respect to $\{z=0\}$ and \eqref{gfaimadonfpoilonez}, we deduce that \begin{align}\label{mercrecanic3} \nonumber \frac{1}{|B_{t/2}({\bf y})|_a} \int_{B_{t/2}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} & \leqslant \frac{C}{|B^+_{3t/2}(\widetilde {\bf y})|_a} \int_{B^+_{3t/2}(\widetilde {\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x}\\ \nonumber & \leqslant \frac{C}{|B_{3t/2}(\widetilde {\bf y})|_a} \int_{B_{3t/2}(\widetilde {\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \\ &\leqslant C_{\eta,\beta} \int_{B^+_{1}}|z|^a|\nabla u^{\rm e}|^2\,{\rm d}{\bf x}\,. \end{align} Combining \eqref{mercrecanic2} and \eqref{mercrecanic3}, and in view of the arbitrariness of ${\bf y}$, we infer that $$ \frac{1}{|B_r({\bf y})|_a}\int_{B_{r}({\bf y})}|z|^a\rho^2|\nabla v|^2\,{\rm d}{\bf x} \leqslant C_{\eta,\beta} \int_{B^+_{1}}|z|^a|\nabla u^{\rm e}|^2\,{\rm d}{\bf x}\quad\forall {\bf y}=(y,t)\in B_{1/3}^+\,,\;\forall r\in(0,t/2]\,.$$ Still by symmetry of $v$ and $\rho$, this estimate actually holds for every ${\bf y}=(y,t)\in B_{1/3}\setminus\{z=0\}$ and $r\in(0,|t|/2)$. By Lebesgue's differentiation theorem, we have thus proved that $$\rho^2|\nabla v|^2\leqslant C_{\eta,\beta} \int_{B^+_{1}}|z|^a|\nabla u^{\rm e}|^2\,{\rm d}{\bf x} \quad\text{a.e. in $B_{1/3}$}\,,$$ and the conclusion follows from the fact that $\rho\geqslant 1/2$. \end{proof} \begin{proof}[Proof of Theorem \ref{thmepsregLip}] Once again, rescaling variables, we can assume that $R=1$. Under condition \eqref{condeps0Lip}, Corollary \ref{coroepsreghold} says that $u^{\rm e}\in C^{0,\beta_1}(B^+_{\boldsymbol{\kappa}_1})$ and $[u^{\rm e}]_{C^{0,\beta_1}(B^+_{\boldsymbol{\kappa}_1})}$ is bounded by a constant depending only on $n$ and $s$. 
Since $|u^{\rm e}|=|u|=1$ on $\partial^0 B^+_{\boldsymbol{\kappa}_1}$, we can thus find a constant $\boldsymbol{\kappa}_2=\boldsymbol{\kappa}_2(n,s)\in(0,1)$ such that $6\boldsymbol{\kappa}_2\leqslant \boldsymbol{\kappa}_1$ and $|u^{\rm e}|\geqslant 1/2$ in $B^+_{3\boldsymbol{\kappa}_2}$. Since $\beta_1=\beta_1(n,s)$, and $(3\boldsymbol{\kappa}_2)^{\beta_1}[u^{\rm e}]_{C^{0,\beta_1}(B^+_{3\boldsymbol{\kappa}_2})}$ is bounded by a constant depending only on $n$ and $s$, Proposition \ref{themholdimplLip} implies that $v:=u^{\rm e}/|u^{\rm e}|$ is Lipschitz continuous in $\overline B^+_{\boldsymbol{\kappa}_2}$ with $$|v({\bf x})-v({\bf y})|\leqslant C \boldsymbol{\Theta}_s(u^{\rm e},0,\boldsymbol{\kappa}_2)|{\bf x}-{\bf y}|\leqslant C\boldsymbol{\Theta}_s(u^{\rm e},0,1)|{\bf x}-{\bf y}|\quad\forall {\bf x},{\bf y}\in \overline B^+_{\boldsymbol{\kappa}_2}\,,$$ for a constant $C=C(n,s)$. Since $v({\bf x})=u(x)$ for every ${\bf x}=(x,0)\in \partial^0B^+_{\boldsymbol{\kappa}_2}$, the conclusion follows. \end{proof} \section{Higher order regularity}\label{highordreg} We have now reached the final stage of our small energy regularity result where it only remains to prove that a Lipschitz continuous $s$-harmonic map is of class $C^\infty$. To achieve this result, we shall apply (local) Schauder type estimates for $(-\Delta)^s$. We only refer to \cite{RosSer} for those estimates as it is best suited to our presentation (see also \cite{Sil}). \begin{theorem}\label{highordthm} Let $u\in \widehat H^s(D_{1};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{1}$. If $u$ is Lipschitz continuous in $D_1$, then $u\in C^\infty(D_{1/2})$. \end{theorem} \begin{proof} The proof of Theorem \ref{highordthm} follows from a bootstrap procedure. The initiation of the induction consists in passing from Lipschitz regularity to $C^{1,\alpha}$-regularity, and it is the object of Proposition \ref{C1alphareg} in the following subsection. 
Then we shall prove in Proposition \ref{Ckalphareg} that $C^{k,\alpha}$-regularity upgrades to $C^{k+1,\alpha}$-regularity for every integer $k\geqslant 1$. In applying this bootstrap argument, we first fix an arbitrary point $x_0\in D_{1/2}$ and an integer $k\geqslant 1$. We translate variables by $x_0$ and rescale suitably in order to apply Proposition \ref{C1alphareg} and Proposition \ref{Ckalphareg}, and then conclude that $u$ is $C^{k,\alpha}$ in a neighborhood of $x_0$. \end{proof} \subsection{H\"older continuity of first order derivatives} \begin{proposition}\label{C1alphareg} Let $u\in \widehat H^s(D_{3};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{3}$. If $u$ is Lipschitz continuous in $D_3$, then $u\in C^{1,\alpha}(D_{r_*})$ for every $\alpha\in(0,1)$ and some $r_*=r_*(n,s)\in(0,1/2)$. \end{proposition} One of the main ingredients to obtain an improved regularity is the following elementary lemma. \begin{lemma}\label{keybootstraplemma} Let $f:D_3\to \mathbb{R}^d$ be a Lipschitz continuous function, $g:D_3\to\mathbb{R}^d$ an H\"older continuous function, and $\zeta:D_1\to [0,1]$ a measurable function. Assume that one of the following items holds: \begin{enumerate} \item[(i)] $s\in(0,1/2)$ and $g\in C^{0,\alpha}(D_3)$ for some $\alpha\in(2s,1]$; \item[(ii)] $s\in(0,1/2)$ and $g\in C^{0,\alpha}(D_3)$ for every $\alpha\in(0,2s)$; \item[(iii)] $s\in[1/2,1)$ and $g\in C^{0,\alpha}(D_3)$ for every $\alpha\in(0,1)$. \end{enumerate} Then the function \begin{equation}\label{defGfunctionreg} G:x\in D_{1}\mapsto \int_{D_1}\frac{\big(f(x+y)-f(x)\big)\cdot\big(g(x+y)-g(x)\big)}{|y|^{n+2s}}\zeta(y)\,{\rm d} y \end{equation} belongs to \begin{enumerate} \item $C^{0,\alpha}(D_1)$ in case (i); \item $C^{0,\alpha^\prime}(D_1)$ for every $\alpha^\prime\in(0,2s)$ in case (ii); \item $C^{0,\alpha^\prime}(D_1)$ for every $\alpha^\prime\in(0,2-2s)$ in case (iii). 
\end{enumerate} \end{lemma} \begin{proof} {\it Step 1.} We first claim that $G$ is well defined in all cases. To simplify the notation, we write \begin{equation}\label{notgamGfct} \Gamma(x,y):= \big(f(x+y)-f(x)\big)\cdot\big(g(x+y)-g(x)\big)\,. \end{equation} Observe that in all cases, we have $1+\alpha>2s$ (it holds for every $\alpha\in(0,2s)$ in case (ii), and we can choose such $\alpha\in(0,1)$ in case (iii)). Since $|\Gamma(x,y)|\leqslant C_{f,g,\alpha}|y|^{1+\alpha}$, we have $$\int_{D_1}\frac{|\Gamma(x,y)|}{|y|^{n+2s}}\,{\rm d} y\leqslant C_{f,g,\alpha}\int_{D_1}\frac{{\rm d} y}{|y|^{n+2s-(1+\alpha)}}\leqslant C_{f,g,\alpha}\quad\forall x\in D_1\,, $$ for a constant $C_{f,g,\alpha}$ depending only on $f$, $g$, $\alpha$, $n$, and $s$. \vskip5pt \noindent{\it Step 2, case (i).} Fix arbitrary points $x,h\in D_1$. Since \begin{equation}\label{debilestiapriorbisbis} \big|\Gamma(x+h,y)- \Gamma(x,y)\big| \leqslant C_{f,g,\alpha}|h|^\alpha|y|^\alpha \quad\forall y\in D_1\,, \end{equation} we have $$|G(x+h)-G(x)|\leqslant C_{f,g,\alpha}|h|^\alpha \int_{D_1}\frac{1}{|y|^{n+2s-\alpha}}\,{\rm d} y \leqslant C_{f,g,\alpha}|h|^\alpha\,, $$ for a constant $C_{f,g,\alpha}$ depending only on $f$, $g$, $\alpha$, $n$, and $s$. \vskip5pt \noindent{\it Step 3, case (ii).} Let us fix an arbitrary $\varepsilon\in(0,s)$. We set $\alpha:=2s-\varepsilon$ and $\beta:=1-2\varepsilon$. Since $$\big|\Gamma(x+h,y)- \Gamma(x,y)\big|\leqslant \big|\Gamma(x+h,y)\big|+\big|\Gamma(x,y)\big| \leqslant C_{f,g,\varepsilon}|y|^{1+\alpha}\,,$$ we can use \eqref{debilestiapriorbisbis} to obtain \begin{equation}\label{vendre02aout1} \big|\Gamma(x+h,y)- \Gamma(x,y)\big|\leqslant C_{f,g,\varepsilon}|y|^{(1+\alpha)(1-\beta)}|h|^{\alpha\beta}|y|^{\alpha\beta}=C_{f,g,\varepsilon}|y|^{2s+\varepsilon}|h|^{\alpha\beta} \quad\forall y\in D_1\,. 
\end{equation} Hence, \begin{equation}\label{vendre02aout2} |G(x+h)-G(x)|\leqslant C_{f,g,\varepsilon}|h|^{\alpha\beta}\int_{D_1}\frac{1}{|y|^{n-\varepsilon}}\,{\rm d} y \leqslant C_{f,g,\varepsilon}|h|^{\alpha\beta}\,, \end{equation} for a constant $C_{f,g,\varepsilon}>0$ depending only on $f$, $g$, $\varepsilon$, $n$, and $s$. \vskip5pt \noindent{\it Step 4, case (iii).} Now we fix an arbitrary $\varepsilon\in(0,1-s)$, and we set $\alpha:=1-\varepsilon$ and $\beta:=2-2s-2\varepsilon$. Then \eqref{vendre02aout1} still holds, and consequently also \eqref{vendre02aout2}. \end{proof} \begin{proof}[Proof of Proposition \ref{C1alphareg}] {\it Step 1.} We start by fixing a radial cut-off function $\zeta\in \mathscr{D}(\mathbb{R}^n)$ such that $0\leqslant \zeta\leqslant 1$, $\zeta=1$ in $D_{1/2}$, and $\zeta=0$ in $\mathbb{R}^n\setminus D_{3/4}$. With $\zeta$ in hand, we rewrite, for $x\in D_1$, \begin{multline}\label{rewriterhs1} \int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y =\int_{\mathbb{R}^n}\frac{|u(x+y)-u(x)|^2}{|y|^{n+2s}}\,{\rm d} y\\ = \int_{D_1}\frac{|u(x+y)-u(x)|^2}{|y|^{n+2s}}\zeta(y)\,{\rm d} y + \int_{D^c_{1/2}}\frac{|u(x+y)-u(x)|^2}{|y|^{n+2s}}(1-\zeta(y))\,{\rm d} y \,, \end{multline} and we set \begin{equation}\label{deffctGu} G_u(x):= \int_{D_1}\frac{|u(x+y)-u(x)|^2}{|y|^{n+2s}}\zeta(y)\,{\rm d} y \,. \end{equation} By Lemma \ref{keybootstraplemma} (applied to $f=g=u$), the function $G_u$ is Lipschitz continuous in $D_1$ for $s\in(0,1/2)$, and it belongs to $C^{0,\alpha}(D_1)$ for every $\alpha\in(0,2-2s)$ for $s\in[1/2,1)$. Concerning the second term in the right-hand side of \eqref{rewriterhs1}, we use the identity $|u|^2=1$ to rewrite it as \begin{multline} \label{rewriterhs2} \int_{D^c_{1/2}}\frac{|u(x+y)-u(x)|^2}{|y|^{n+2s}}(1-\zeta(y))\,{\rm d} y =\int_{\mathbb{R}^n}\frac{2(1-\zeta(y))}{|y|^{n+2s}}\,{\rm d} y\\ -\left(\int_{\mathbb{R}^n}\frac{2(1-\zeta(y))}{|y|^{n+2s}}u(x+y)\,{\rm d} y\right)\cdot u(x)\,.
\end{multline} In view of \eqref{rewriterhs2}, it is convenient to introduce the constant $L_\zeta>0$ and the function $Z\in C^\infty(\mathbb{R}^n)$ given by $$L_\zeta:= \int_{\mathbb{R}^n}\frac{2(1-\zeta(y))}{|y|^{n+2s}}\,{\rm d} y\quad\text{and}\quad Z(x):= \frac{2}{L_\zeta} \frac{(1-\zeta(x))}{|x|^{n+2s}}\,.$$ In this way, the right-hand side of \eqref{rewriterhs2} can be written as \begin{equation}\label{deffctHu} H_u(x):=L_\zeta\big(1-Z*u(x)\cdot u(x)\big) \quad\text{for $x\in D_1$}\,. \end{equation} Notice that $Z*u\in C^{\infty}(\mathbb{R}^n)$, so that $H_u$ is Lipschitz continuous in $D_1$. Summarizing our manipulations in \eqref{rewriterhs1} and \eqref{rewriterhs2}, we have obtained $$ \int_{\mathbb{R}^n}\frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} y=G_u(x)+H_u(x)\qquad\forall x\in D_1\,.$$ Now we introduce the map $F_u:D_1\to \mathbb{R}^d$ given by \begin{equation}\label{deffctFu} F_u(x):=\frac{\gamma_{n,s}}{2}\big(G_u(x)+H_u(x)\big)u(x)\,. \end{equation} Then $F_u\in C^{0,1}(D_1)$ for $s\in(0,1/2)$, and $F_u\in C^{0,\alpha}(D_1)$ for every $\alpha\in(0,2-2s)$ for $s\in[1/2,1)$. \vskip5pt \noindent{\it Step 2.} We consider the map $u_0:\mathbb{R}^n\to \mathbb{R}^d$ given by $u_0:=\zeta u$. Then $u_0 \in C^{0,1}(\mathbb{R}^n)$ and $u_0=0$ in $\mathbb{R}^n\setminus D_1$. In particular, $u_0\in H^s_{00}(D_1;\mathbb{R}^d)$. A lengthy but straightforward computation shows that $$(-\Delta)^su_0= \zeta(-\Delta)^su+\big((-\Delta)^s\zeta\big)u-\gamma_{n,s}\int_{\mathbb{R}^n}\frac{(\zeta(x)-\zeta(y))(u(x)-u(y))}{|x-y|^{n+2s}}\,{\rm d} y\quad\text{in $H^{-s}(D_1;\mathbb{R}^d)$}\,,$$ i.e., in the sense of \eqref{deffraclap}. Since $u$ is a weakly $s$-harmonic map in $D_3$, it satisfies equation \eqref{ELeqorig}. 
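The expression of $(-\Delta)^s u_0$ above can be traced back to the pointwise identity
$$\zeta(x)u(x)-\zeta(y)u(y)=\zeta(x)\big(u(x)-u(y)\big)+\big(\zeta(x)-\zeta(y)\big)u(x)-\big(\zeta(x)-\zeta(y)\big)\big(u(x)-u(y)\big)\,,$$
valid for all $x,y\in\mathbb{R}^n$, as can be checked by expanding the right-hand side. Multiplying by $\gamma_{n,s}|x-y|^{-(n+2s)}$ and integrating in $y$ (in the principal value sense for the first two terms) produces exactly the three terms in the formula for $(-\Delta)^s(\zeta u)$.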
In view of Step 1, we thus have \begin{equation}\label{eqofprodzetau} (-\Delta)^su_0= \zeta F_u+\big((-\Delta)^s\zeta\big)u-\gamma_{n,s}\int_{\mathbb{R}^n}\frac{(\zeta(x)-\zeta(y))(u(x)-u(y))}{|x-y|^{n+2s}}\,{\rm d} y\quad\text{in $H^{-s}(D_1;\mathbb{R}^d)$}\,. \end{equation} The function $(-\Delta)^s\zeta$ being smooth over $\mathbb{R}^n$, we infer from Step 1 that $\zeta F_u+\big((-\Delta)^s\zeta\big)u$ belongs to $C^{0,1}(D_1)$ for $s\in(0,1/2)$, and to $C^{0,\alpha}(D_1)$ for every $\alpha\in(0, 2-2s)$ for $s\in[1/2,1)$. We now determine the regularity of the last term in the right-hand side of \eqref{eqofprodzetau} arguing as in Step 1. We write it as $$\int_{\mathbb{R}^n}\frac{(\zeta(x)-\zeta(y))(u(x)-u(y))}{|x-y|^{n+2s}}\,{\rm d} y=:I(x)+II(x)\,, $$ with $$I(x):=\int_{D_1}\frac{(\zeta(x+y)-\zeta(x))(u(x+y)-u(x))}{|y|^{n+2s}}\zeta(y)\,{\rm d} y\,, $$ and \begin{align*} II(x)& := \int_{\mathbb{R}^n}\frac{(\zeta(x+y)-\zeta(x))(u(x+y)-u(x))}{|y|^{n+2s}}(1-\zeta(y))\,{\rm d} y\\ &= \int_{\mathbb{R}^n}\frac{(\zeta(x)-\zeta(y))(u(x)-u(y))}{|x-y|^{n+2s}}(1-\zeta(x-y))\,{\rm d} y\,. \end{align*} By Lemma \ref{keybootstraplemma}, the term $I$ belongs to $C^{0,1}(D_1)$ for $s\in(0,1/2)$, and to $C^{0,\alpha}(D_1)$ for every $\alpha\in(0,2-2s)$ for $s\in[1/2,1)$. On the other hand, the function $\zeta$ being smooth and equal to $1$ in $D_{1/2}$, the term $II$ clearly has the regularity of $u$ in $D_1$, that is, $C^{0,1}(D_1)$. Summarizing these considerations, we have shown that $u_0\in H^s(\mathbb{R}^n;\mathbb{R}^d)\cap L^\infty(\mathbb{R}^n)$ is a weak solution of $$ \begin{cases} (-\Delta)^s u_0=F_0 & \text{in $D_1$}\,,\\ u_0=0 & \text{in $\mathbb{R}^n\setminus D_1$}\,, \end{cases} $$ for a right-hand side $F_0$ which belongs to $C^{0,1}(D_1)$ for $s\in(0,1/2)$, and to $C^{0,\alpha}(D_1)$ for every $\alpha\in(0,2-2s)$ for $s\in[1/2,1)$. From well-known (by now) regularity estimates for this equation (see e.g.
\cite[Section 2]{RosSer}), the map $u_0$ belongs to $C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,2s)$ for $s\in(0,1/2)$, and to $C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,1)$ for $s\in[1/2,1)$. Since $u_0=u$ in $D_{1/2}$, the proposition is proved in the case $s\in[1/2,1)$, and we have obtained $u\in C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,2s)$ for $s\in (0,1/2)$. \vskip5pt \noindent{\it Step 3.} We now assume that $s\in (0,1/2)$, and it remains to prove that $u$ actually belongs to $C^{1,\alpha}(D_{r_*})$ for every $\alpha\in (0,1)$ and a radius $r_*\in(0,1/2)$ depending only on $s$. To this end, we rescale $u$ by setting $\widetilde u(x):=u(x/6)$, and from Step 2, we infer that $\widetilde u\in C^{1,\alpha}(D_3)$ for every $\alpha\in(0,2s)$. We shall now make use of the following lemma. \begin{lemma}\label{regC1alphafctGreg} Assume that $s\in(0,1/2)$. Let $f:D_3\to \mathbb{R}^d$ and $g:D_3\to \mathbb{R}^d$ be two $C^1$-functions, and $\zeta:D_1\to [0,1]$ a measurable function. Assume that one of the following items holds: \begin{enumerate} \item[(i)] $f,g \in C^{1,\alpha}(D_3)$ for every $\alpha\in(0,2s)$; \item[(ii)] $f,g \in C^{1,\alpha}(D_3)$ for some $\alpha\in(2s,1)$. \end{enumerate} Then the function $G:D_1\to\mathbb{R}$ given by \eqref{defGfunctionreg} belongs to \begin{enumerate} \item $C^{1,\alpha^\prime}(D_1)$ for every $\alpha^\prime\in(0,2s)$ in case (i); \item $C^{1,\alpha}(D_1)$ in case (ii); \end{enumerate} and for $x\in D_1$, \begin{multline}\label{formpartialderGfctreg} \partial_i G(x)= \int_{D_1}\frac{\big(\partial_i f(x+y)-\partial_i f(x)\big)\cdot\big(g(x+y)-g(x)\big)}{|y|^{n+2s}}\zeta(y)\,{\rm d} y\\ +\int_{D_1}\frac{\big(f(x+y)-f(x)\big)\cdot\big(\partial_i g(x+y)-\partial_i g(x)\big)}{|y|^{n+2s}}\zeta(y)\,{\rm d} y\,, \end{multline} for $i=1,\ldots,n$. \end{lemma} \begin{proof} We keep using notation \eqref{notgamGfct}.
First we fix an arbitrary point $x\in D_1$ and we claim that $G$ admits a partial derivative $\partial_i G$ at $x$. Indeed, for $t>0$ small enough, we have $$\big|\Gamma(x+t e_i,y)- \Gamma(x,y)\big| \leqslant C_{f,g} |y| t \quad\forall y\in D_1\,,$$ since $f$ and $g$ are $C^1$ over $D_3$. Hence, $$\frac{|\Gamma(x+t e_i,y)- \Gamma(x,y)|}{|y|^{n+2s}t}\leqslant C_{f,g} |y|^{1-2s-n} \in L^1(D_1)\,,$$ and it follows from the dominated convergence theorem that $G$ admits a partial derivative $\partial_i G$ at $x$ given by formula \eqref{formpartialderGfctreg}. Next we apply Lemma \ref{keybootstraplemma} to the right-hand side of \eqref{formpartialderGfctreg} to deduce that $\partial_i G$ is H\"older continuous, and the conclusion follows. \end{proof} \noindent{\it Proof of Proposition \ref{C1alphareg} completed.} We consider the function $G_{\widetilde u}:D_1\to\mathbb{R}$ as defined in \eqref{deffctGu} with $\tilde u$ in place of $u$. By Lemma \ref{regC1alphafctGreg} (applied to $f=g=\widetilde u$), $G_{\widetilde u}\in C^{1,\alpha}(D_1)$ for every $\alpha\in(0,2s)$. On the other hand, the function $H_{\widetilde u}:D_1\to \mathbb{R}$ as defined in \eqref{deffctHu} clearly belongs to $C^{1,\alpha}(D_1)$ for every $\alpha\in(0,2s)$. Consequently, the map $F_{\widetilde u}:D_1\to\mathbb{R}^d$ as defined in \eqref{deffctFu} also belongs to $C^{1,\alpha}(D_1)$ for every $\alpha\in(0,2s)$. Since $\widetilde u$ is a rescaling of $u$, it is also $s$-harmonic in $D_1$, and thus $(-\Delta)^s\widetilde u=F_{\widetilde u}$ in $\mathscr{D}^\prime(D_1)$. Next, we keep arguing as in Step 2, and we consider the bounded map $\widetilde u_0:=\zeta \widetilde u$. Applying Lemma \ref{regC1alphafctGreg} again, we argue as in Step 2 to infer that $(-\Delta)^s\widetilde u_0=\widetilde F_{0}$ in $H^{-s}(D_1;\mathbb{R}^d)$, for a right-hand side $\widetilde F_0\in C^{1,\alpha}(D_1)$ for every $\alpha\in(0,2s)$. 
By the results in \cite{RosSer}, we have $\widetilde u_0\in C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,4s)$ if $4s<1$, and $\widetilde u_0\in C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,1)$ if $4s\geqslant 1$. Once again, since $\widetilde u_0=\widetilde u$ in $D_{1/2}$, we have $\widetilde u\in C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,4s)$ if $4s<1$, and $\widetilde u\in C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,1)$ if $4s\geqslant 1$. In the case $s\in[1/4,1/2)$, we have thus proved that $u\in C^{1,\alpha}(D_{1/12})$ for every $\alpha\in(0,1)$. Hence it remains to consider the case $s<1/4$. In that case, we repeat the preceding argument considering the rescaling $\widehat u(x):=\widetilde u(x/6)$. Following the same notation as above, Lemma \ref{regC1alphafctGreg} tells us that $G_{\widehat u}$ belongs to $C^{1,\alpha}(D_1)$ for every $\alpha\in(0,4s)$, and hence also $F_{\widehat u}$. Then, applying the results of \cite{RosSer} to $\widehat u_0$, we conclude that $\widehat u\in C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,6s)$ if $6s<1$, and $\widehat u\in C^{1,\alpha}(D_{1/2})$ for every $\alpha\in(0,1)$ if $6s\geqslant 1$. Therefore, if $s\geqslant 1/6$, then $u\in C^{1,\alpha}(D_{1/72})$ for every $\alpha\in(0,1)$, which is the announced regularity. On the other hand, if $s\in (0,1/6)$, then we repeat the argument. It is now clear that repeating this argument a finite number $\ell$ of times, one reaches the conclusion that $u\in C^{1,\alpha}(D_{(6)^{-\ell}/2})$ for every $\alpha\in(0,1)$, and $\ell$ is essentially the integer part of $1/(2s)$. \end{proof} Before closing this subsection, we provide an analogue of Lemma \ref{regC1alphafctGreg} in the case $s\in[1/2,1)$. \begin{lemma}\label{regC1alphafctGregbis} Assume that $s\in[1/2,1)$. Let $f:D_3\to \mathbb{R}^d$ and $g:D_3\to \mathbb{R}^d$ be two $C^1$-functions, and $\zeta:D_1\to [0,1]$ a measurable function.
If $f$ and $g$ belong to $C^{1,\alpha}(D_3)$ for every $\alpha\in(0,1)$, then the function $G:D_1\to\mathbb{R}$ given by \eqref{defGfunctionreg} belongs to $C^{1,\alpha^\prime}(D_1)$ for every $\alpha^\prime\in(0,2-2s)$, and \eqref{formpartialderGfctreg} holds. \end{lemma} \begin{proof} We proceed as in the proof of Lemma \ref{regC1alphafctGreg} using notation \eqref{notgamGfct}. We fix an arbitrary point $x\in D_1$ and we want to show that $G$ admits a partial derivative $\partial_i G$ at $x$. For $t>0$ small, we have \begin{multline*} \Gamma(x+t e_i,y)- \Gamma(x,y)=\Big(\int_0^t\big(\partial_if(x+y+\rho e_i)-\partial_if(x+\rho e_i)\big)\,{\rm d}\rho \Big)\cdot\big(g(x+y+te_i)-g(x+te_i)\big)\\ +\big(f(x+y)-f(x)\big)\cdot \Big(\int_0^t\big(\partial_ig(x+y+\rho e_i)-\partial_i g(x+\rho e_i)\big)\,{\rm d}\rho \Big) \end{multline*} for every $y\in D_1$. Fixing an exponent $\alpha\in(2s-1,1)$, we deduce that $$\big|\Gamma(x+t e_i,y)- \Gamma(x,y)\big|\leqslant C_{f,g,\alpha} |y|^{1+\alpha}t \quad\forall y\in D_1\,.$$ Consequently, $$\frac{|\Gamma(x+t e_i,y)- \Gamma(x,y)|}{|y|^{n+2s} t}\leqslant C_{f,g,\alpha} |y|^{1+\alpha-n-2s} \in L^1(D_1)\,,$$ where the integrability follows from $1+\alpha>2s$. As in the proof of Lemma \ref{regC1alphafctGreg}, it now follows that $G$ admits a partial derivative $\partial_i G$ at $x$ given by \eqref{formpartialderGfctreg}, and the H\"older continuity of the partial derivatives of $G$ is a consequence of Lemma~\ref{keybootstraplemma}. \end{proof} \subsection{H\"older continuity of higher order derivatives} \begin{proposition}\label{Ckalphareg} Let $u\in \widehat H^s(D_{3};\mathbb{S}^{d-1})$ be a weakly $s$-harmonic map in $D_{3}$. If $u\in C^{k,\alpha}(D_3)$ for some integer $k\geqslant 1$ and every $\alpha\in(0,1)$, then $u\in C^{k+1,\alpha}(D_{r_*})$ for every $\alpha\in(0,1)$, where the radius $r_*\in(0,1/2)$ is given by Proposition \ref{C1alphareg}.
\end{proposition} \begin{proof} We proceed as in Step 1 in the proof of Proposition \ref{C1alphareg}, and we consider the function $G_u:D_1\to\mathbb{R}$ given by \eqref{deffctGu}. We claim that $G_u\in C^{k,\alpha}(D_1)$ for every $\alpha\in(0,1)$ if $s\in(0,1/2)$, and that $G_u\in C^{k,\alpha}(D_1)$ for every $\alpha\in(0,2-2s)$ if $s\in[1/2,1)$, together with the formula \begin{equation}\label{multiderivfctGu} \partial^{\beta}G_u(x)=\sum_{\nu\leqslant \beta} {\beta\choose \nu}\int_{D_1}\frac{(\partial^\nu u(x+y)-\partial^\nu u(x))\cdot(\partial^{\beta-\nu}u(x+y)-\partial^{\beta-\nu}u(x))}{|y|^{n+2s}}\zeta(y)\,{\rm d} y \end{equation} for every multi-index $\beta\in\mathbb{N}^n$ of length $|\beta|\leqslant k$. To prove this claim, we distinguish the case $s\in(0,1/2)$ from the case $s\in[1/2,1)$. \vskip3pt \noindent{\it Case $s\in(0,1/2)$.} We proceed by induction. First notice that the fact that $G_u\in C^{1,\alpha}(D_1)$ for every $\alpha\in(0,1)$ follows from Lemma \ref{regC1alphafctGreg}, as well as \eqref{multiderivfctGu} with $|\beta|=1$. Next we assume that $G_u\in C^{\ell,\alpha}(D_1)$ for every $\alpha\in(0,1)$ for some integer $\ell<k$, and that \eqref{multiderivfctGu} holds for every multi-index $\beta$ satisfying $|\beta|=\ell$. Applying Lemma \ref{regC1alphafctGreg} to each term in the right hand side of \eqref{multiderivfctGu}, we infer that $\partial^\beta G_u\in C^{1,\alpha}(D_1)$ for every $\alpha\in(0,1)$ and each $\beta$ satisfying $|\beta|=\ell$, and that \eqref{multiderivfctGu} holds for multi-indices $\beta^\prime$ in place of $\beta$ of length $|\beta^\prime|=|\beta|+1$. The claim is thus proved for $s\in(0,1/2)$. \vskip3pt \noindent{\it Case $s\in[1/2,1)$.} We proceed exactly as in the previous case but using Lemma \ref{regC1alphafctGregbis} instead of Lemma \ref{regC1alphafctGreg}. 
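For the reader's convenience, we point out that \eqref{multiderivfctGu} is nothing but the Leibniz rule applied to the integrand: since $\partial^\nu_x\big(u(x+y)-u(x)\big)=\partial^\nu u(x+y)-\partial^\nu u(x)$ for every multi-index $\nu$, we have
$$\partial^{\beta}_x\Big[\big(u(x+y)-u(x)\big)\cdot\big(u(x+y)-u(x)\big)\Big]=\sum_{\nu\leqslant \beta}{\beta\choose \nu}\big(\partial^\nu u(x+y)-\partial^\nu u(x)\big)\cdot\big(\partial^{\beta-\nu}u(x+y)-\partial^{\beta-\nu}u(x)\big)\,,$$
the differentiation under the integral sign being justified at each step of the induction by Lemma \ref{regC1alphafctGreg} or Lemma \ref{regC1alphafctGregbis}.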
\vskip3pt Now we consider the function $H_u:D_1\to\mathbb{R}$ given by \eqref{deffctHu}, which clearly belongs to $C^{k,\alpha}(D_1)$ for every $\alpha\in(0,1)$ by our assumption on $u$. Consequently, the map $F_u:D_1\to \mathbb{R}^d$ belongs to $C^{k,\alpha}(D_1)$ for every $\alpha\in(0,1)$ if $s\in(0,1/2)$, and to $C^{k,\alpha}(D_1)$ for every $\alpha\in(0,2-2s)$ if $s\in[1/2,1)$. By the results in \cite{RosSer} (together with Lemmas \ref{regC1alphafctGreg} and \ref{regC1alphafctGregbis}), this implies that the map $u_0:=\zeta u$ as defined in Step 2, proof of Proposition \ref{C1alphareg}, belongs to $C^{k+1,\alpha}(D_{1/2})$ for every $\alpha\in(0,2s)$ if $s\in(0,1/2)$, and to $C^{k+1,\alpha}(D_{1/2})$ for every $\alpha\in(0,1)$ if $s\in[1/2,1)$. Since $u_0=u$ in $D_{1/2}$, the proof is thus complete for $s\in[1/2,1)$. In the case $s\in(0,1/2)$, we argue as in the proof of Proposition~\ref{C1alphareg}, Step 3, applying (inductively) Lemma \ref{regC1alphafctGreg} to formula \eqref{multiderivfctGu} with $|\beta|=k$. This leads to the fact that $u\in C^{k+1,\alpha}(D_{r_*})$ for every $\alpha\in(0,1)$, and hence concludes the proof. \end{proof} \section{Partial regularity for stationary and minimizing $s$-harmonic maps}\label{partialreg} In this section, we complete the proof of Theorems \ref{mainthm1}, \ref{mainthm2}, and \ref{mainthm3}. For $n>2s$, we need to prove compactness of stationary / minimizing $s$-harmonic maps to apply Federer's dimension reduction principle. This is the object of the first subsection. \subsection{Compactness properties of $s$-harmonic maps}\label{subsectcompact} \begin{theorem}\label{maincompactthm} Assume that $s\in(0,1)\setminus\{1/2\}$ and $n>2s$. Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set. Let $\{u_k\}\subseteq \widehat H^s(\Omega;\mathbb{S}^{d-1})$ be a sequence of stationary weakly $s$-harmonic maps in $\Omega$. Assume that $\sup_k\mathcal{E}_s(u_k,\Omega)<+\infty$, and that $u_k\to u$ a.e. in $\mathbb{R}^n$.
Then $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$, $u_k\rightharpoonup u$ weakly in $\widehat H^s(\Omega;\mathbb{R}^d)$, and $u$ is a stationary weakly $s$-harmonic map in $\Omega$. In addition, for every open subset $\omega\subseteq \Omega$ and every bounded admissible open set $G\subseteq \mathbb{R}^{n+1}_+$ satisfying $\overline\omega\subseteq \Omega$ and $\overline{\partial^0G}\subseteq \Omega$, \begin{enumerate} \item[(i)] $u_k\to u$ strongly in $\widehat H^s(\omega;\mathbb{R}^d)$; \item[(ii)] $u_k^{\rm e}\to u^{\rm e}$ strongly in $H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$. \end{enumerate} \end{theorem} \begin{theorem}\label{compactthmmins} Assume that $s\in(0,1/2)$. Under the assumptions of Theorem \ref{maincompactthm}, if in addition each $u_k$ is a minimizing $s$-harmonic map in $\Omega$, then the limit $u$ is a minimizing $s$-harmonic map in $\Omega$. \end{theorem} \begin{theorem}\label{compactthmmin1/2} Let $\Omega\subseteq\mathbb{R}^n$ be a bounded open set and $\{u_k\}\subseteq \widehat H^{1/2}(\Omega;\mathbb{S}^{d-1})$ be a sequence of minimizing $1/2$-harmonic maps in $\Omega$. Assume that $\sup_k\mathcal{E}_{\frac{1}{2}}(u_k,\Omega)<+\infty$, and that $u_k\to u$ a.e. in $\mathbb{R}^n$. Then the conclusion of Theorem \ref{maincompactthm} holds and the limit $u$ is a minimizing $1/2$-harmonic map in $\Omega$. \end{theorem} \begin{remark} In the case $s\in(1/2,1)$, we do not know if minimality of the sequence $\{u_k\}$ implies minimality of the limit. We believe this is indeed the case, but we shall not need this fact. \end{remark} \begin{remark} In the case $n=1$ and $s\in(1/2,1)$, sequences of (arbitrary) weakly $s$-harmonic maps with uniformly bounded energy are relatively compact, i.e., the conclusion of Theorem~\ref{maincompactthm} holds. This fact is a consequence of the Lipschitz estimate established in Theorem \ref{thmepsregLip} together with Remark \ref{remarlepsholdregsubcritic}. Since we shall not need this, we leave the details to the reader.
\end{remark} \begin{remark} In the case $s=1/2$, sequences of (stationary or not) $1/2$-harmonic maps are not compact in general, see e.g. \cite{DaLi1,MPis,MirPis,MilPeg}. The prototypical example is the following sequence of smooth $1/2$-harmonic maps from $\mathbb{R}^n$ into $\mathbb{S}^1\subseteq \mathbb{C}$ given by $$u_k(x)=u_k(x_1):=\frac{kx_1-i}{kx_1+i}\,,\quad k\in\mathbb{N}\,,$$ which converges weakly but not strongly to the constant map $1$ in $\widehat H^{1/2}(D_r)$ for every $r>0$. (Recall that, being smooth, each $u_k$ is stationary; see Remark \ref{remsmothhimplstat}.) \end{remark} \begin{proof}[Proof of Theorem \ref{maincompactthm}] {\it Step 1.} We fix two arbitrary admissible bounded open sets $G,G^\prime\subseteq \mathbb{R}^{n+1}_+$ such that $\overline G\subseteq G^\prime\cup\partial^0G^\prime$ and satisfying $\overline{\partial^0G^\prime}\subseteq \Omega$. Since $u_k\to u$ a.e. in $\mathbb{R}^n$ and $|u_k|=1$, we first deduce that $|u|=1$ and $u_k\to u$ strongly in $L^2_{\rm loc}(\mathbb{R}^n;\mathbb{R}^d)$. It then follows from our assumption that $\{u_k\}$ is bounded in $\widehat H^s(\Omega;\mathbb{R}^d)$. Next we derive from Remark \ref{remweakcvHhat} that $u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ and $u_k\rightharpoonup u$ weakly in $\widehat H^s(\Omega;\mathbb{R}^d)$. In view of Corollary \ref{contextHsH1}, $u_k^{\rm e}\rightharpoonup u^{\rm e}$ weakly in $H^1(G^\prime;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$. Since $|u_k|\leqslant 1$, we have $u_k^{\rm e}({\bf x})\to u^{\rm e}({\bf x})$ for every ${\bf x}\in G^\prime$ by dominated convergence. In turn, we have $|u^{\rm e}_k-u^{\rm e}|\leqslant 2$, and it follows by dominated convergence again that $u_k^{\rm e}\to u^{\rm e}$ strongly in $L^2(G^\prime;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$. Recalling that ${\rm div}(z^a\nabla u^{\rm e}_k)=0$ in $G^\prime$, we infer from standard elliptic regularity that $u^{\rm e}_k\to u^{\rm e}$ in $C^1_{\rm loc} (G^\prime)$.
In particular, \begin{equation}\label{strcvloccmpct} u^{\rm e}_k\to u^{\rm e}\quad\text{strongly in $H^1_{\rm loc}(G^\prime;\mathbb{R}^d)$}\,. \end{equation} We aim to show that $u_k^{\rm e}\to u^{\rm e}$ strongly in $H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$. To prove this strong convergence, we consider the finite measures on $G^\prime\cup\partial^0G^\prime$ given by $$\mu_k:= \frac{\boldsymbol{\delta}_s}{2}z^a|\nabla u_k^{\rm e}|^2\mathscr{L}^{n+1}\res G^\prime\,.$$ Since $\sup_k\mu_k(G^\prime\cup\partial^0G^\prime)<+\infty$, we can find a further (not relabeled) subsequence such that \begin{equation}\label{weakcvmuk} \mu_k\rightharpoonup \mu:=\frac{\boldsymbol{\delta}_s}{2}z^a|\nabla u^{\rm e}|^2\mathscr{L}^{n+1}\res G^\prime+\mu_{\rm sing}\quad\text{as $k\to\infty$}\,, \end{equation} weakly* as Radon measures on $G^\prime\cup\partial^0G^\prime$ for some finite nonnegative measure $\mu_{\rm sing}$. In view of \eqref{strcvloccmpct}, the defect measure $\mu_{\rm sing}$ is supported by $\partial^0G^\prime$. Since $u_k$ is stationary in $\Omega$, it satisfies the monotonicity formula in Proposition \ref{monotformula}, and thus \begin{equation}\label{monotmuk} \rho^{2s-n}\mu_k(B_\rho({\bf x}))\leqslant r^{2s-n}\mu_k(B_r({\bf x})) \end{equation} for every ${\bf x}\in\partial^0G^\prime$ and $0<\rho<r<{\rm dist}({\bf x}, \partial^+G^\prime)$. From the weak* convergence of $\mu_k$ towards $\mu$, we then infer that $$\rho^{2s-n}\mu(B_\rho({\bf x}))\leqslant r^{2s-n}\mu(B_r({\bf x}))$$ for every ${\bf x}\in\partial^0G^\prime$ and $0<\rho<r<{\rm dist}({\bf x}, \partial^+G^\prime)$. As a consequence, the $(n-2s)$-dimensional density $$\Theta^{n-2s}(\mu,{\bf x}):=\lim_{r\to 0}\frac{\mu(B_r({\bf x}))}{r^{n-2s}} $$ exists and is finite at every point ${\bf x}\in\partial^0G^\prime$.
More precisely, \eqref{monotmuk} implies that $$ \Theta^{n-2s}(\mu,{\bf x})\leqslant \big({\rm dist}({\bf x}, \partial^+G^\prime)\big)^{2s-n}\sup_k{\bf E}_s(u_k,G^\prime)<+\infty\quad \forall {\bf x}\in\partial^0G^\prime\,.$$ We now consider the ``concentration set'' $$\Sigma:=\Big\{{\bf x}\in \partial^0G^\prime: \inf_r\big\{\liminf_{k\to\infty} r^{2s-n}\mu_k(B_r({\bf x})) : 0<r<{\rm dist}({\bf x}, \partial^+G^\prime)\big\}\geqslant \boldsymbol{\varepsilon}_1\Big\}\,, $$ where the constant $\boldsymbol{\varepsilon}_1>0$ is given by Corollary \ref{coroepsreghold}. From the monotonicity of $\mu_k$ and $\mu$ together with \eqref{weakcvmuk}, we deduce that \begin{multline*} \Sigma=\Big\{{\bf x}\in \partial^0G^\prime: \lim_{r\to 0}\liminf_{k\to\infty} r^{2s-n}\mu_k(B_r({\bf x})) \geqslant \boldsymbol{\varepsilon}_1 \Big\}\\ = \Big\{{\bf x}\in \partial^0G^\prime: \lim_{r\to 0} r^{2s-n}\mu(B_r({\bf x})) \geqslant \boldsymbol{\varepsilon}_1 \Big\}\,, \end{multline*} that is $$\Sigma= \Big\{{\bf x}\in \partial^0G^\prime:\Theta^{n-2s}(\mu,{\bf x})\geqslant \boldsymbol{\varepsilon}_1 \Big\}\,.$$ Observing that ${\bf x}\in \partial^0G^\prime\mapsto \Theta^{n-2s}(\mu,{\bf x})$ is upper semi-continuous, the set $\Sigma$ is a relatively closed subset of $\partial^0G^\prime$. We claim that ${\rm spt}(\mu_{\rm sing})\subseteq\Sigma$. To prove this inclusion, we fix an arbitrary point ${\bf x}_0=(x_0,0)\in\partial^0 G^\prime\setminus\Sigma$. Then we can find a radius $0<r<{\rm dist}({\bf x}_0, \partial^+G^\prime)$ such that $r^{2s-n}\mu(B_r({\bf x}_0))< \boldsymbol{\varepsilon}_1$ and $\mu(\partial B_r({\bf x}_0))=0$. By \eqref{weakcvmuk} and our choice of $r$, we have $\lim_k\mu_k(B_r({\bf x}_0))=\mu(B_r({\bf x}_0))$. 
Therefore, $r^{2s-n}\mu_k(B_r({\bf x}_0))< \boldsymbol{\varepsilon}_1$ for $k$ large enough, and we derive from Theorem \ref{thmepsregLip} that for $k$ large enough, $u_k$ is bounded in $C^{0,1}(D_{\boldsymbol{\kappa}_2r}(x_0))$ (and $u\in C^{0,1}(D_{\boldsymbol{\kappa}_2r}(x_0))$), where the constant $\boldsymbol{\kappa}_2\in(0,1)$ only depends on $n$ and $s$. It then follows by dominated convergence that $$[u_k-u]^2_{H^s(D_{\boldsymbol{\kappa}_2r}(x_0))} \mathop{\longrightarrow}\limits_{k\to\infty} 0\,.$$ Setting $w_k:=u_k-u$, we now estimate \begin{multline*} \mathcal{E}_s(w_k,D_{2\boldsymbol{\kappa}_2r/3}(x_0))\leqslant C\Big([u_k-u]^2_{H^s(D_{\boldsymbol{\kappa}_2r}(x_0))}\\+\iint_{D_{2\boldsymbol{\kappa}_2r/3}(x_0)\times D^c_{\boldsymbol{\kappa}_2r}(x_0)} \frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\Big) \,. \end{multline*} Since $|w_k|\leqslant 2$ and $w_k\to 0$ a.e. in $\mathbb{R}^n$, by dominated convergence we have \begin{equation}\label{pipicacaprout} \iint_{D_{2\boldsymbol{\kappa}_2r/3}(x_0)\times D^c_{\boldsymbol{\kappa}_2r}(x_0)} \frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \mathop{\longrightarrow}\limits_{k\to\infty} 0\,. \end{equation} Hence $\mathcal{E}_s(w_k,D_{2\boldsymbol{\kappa}_2r/3}(x_0))\to 0$, and it follows from Lemma \ref{hatH1/2toH1} that $${\bf E}_s\big(u_k^{\rm e}-u^{\rm e},B^+_{\boldsymbol{\kappa}_2r/3}({\bf x}_0)\big)\leqslant C \mathcal{E}_s(u_k-u,D_{2\boldsymbol{\kappa}_2r/3}(x_0))\to 0\,.$$ Hence, $u_k^{\rm e}\to u^{\rm e}$ strongly in $H^1(B^+_{\boldsymbol{\kappa}_2r/3}({\bf x}_0),|z|^a{\rm d}{\bf x})$, and thus $\mu_{\rm sing}(B_{\boldsymbol{\kappa}_2r/3}({\bf x}_0))=0$. This shows that ${\bf x}_0\not\in {\rm spt}(\mu_{\rm sing})$, and the claim is proved. \vskip3pt Next we claim that $\mu(\Sigma)=0$. Indeed, assume by contradiction that $\mu(\Sigma)>0$. 
Then, at every point ${\bf x}\in \Sigma$, the density $\Theta^{n-2s}(\mu,{\bf x})$ exists, is finite, and is positive (at least $\boldsymbol{\varepsilon}_1$). By Marstrand's theorem (see e.g. \cite[Theorem 14.10]{Matti}), this implies that $n-2s$ is an integer, a contradiction. Knowing that $\mu(\Sigma)=0$, we now deduce that $\mu_{\rm sing}(\Sigma)=0$. Since $\mu_{\rm sing}$ is supported by $\Sigma$, this implies that $\mu_{\rm sing}\equiv0$. As a consequence, ${\bf E}_s(u^{\rm e}_k,G)\to {\bf E}_s(u^{\rm e},G)$, which combined with the weak convergence in $H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$ implies that ${\bf E}_s(u^{\rm e}_k-u^{\rm e},G)\to 0$. We have thus proved that $u_k^{\rm e}\to u^{\rm e}$ strongly in $H^1(G;\mathbb{R}^d,|z|^a{\rm d}{\bf x})$. \vskip5pt \noindent{\it Step 2.} We consider in this step an open subset $\omega\subseteq\Omega$ such that $\overline\omega\subseteq \Omega$, and our goal is to prove that $u_k\to u$ strongly in $\widehat H^s(\omega;\mathbb{R}^d)$. Set $\delta:=\frac{1}{8}{\rm dist}(\omega,\Omega^c)$, and consider a finite covering of $\omega$ by balls $(D_{\delta}(x_i))_{i\in I}$ with $x_i\in\overline{\omega}$. By Lemma \ref{HsregtraceH1weight} and Step 1, we have for each $i\in I$, \begin{equation}\label{pipicacaprout1} [u_k-u]^2_{H^s(D_{2\delta}(x_i))}\leqslant C{\bf E}_s(u^{\rm e}_k-u^{\rm e},B^+_{4\delta}({\bf x}_i)) \mathop{\longrightarrow}\limits_{k\to\infty}0\,, \end{equation} where ${\bf x}_i:=(x_i,0)$.
Writing again $w_k:=u_k-u$, we now estimate \begin{align} \nonumber \mathcal{E}_s(w_k,\omega)&\leqslant C\iint_{\omega\times\mathbb{R}^n}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ \nonumber&\leqslant C\sum_{i\in I} \iint_{D_{\delta}(x_i)\times\mathbb{R}^n}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,\\ \label{pipicacaprout2}&\leqslant C\sum_{i\in I} \Big( [w_k]^2_{H^s(D_{2\delta}(x_i))}+ \iint_{D_{\delta}(x_i)\times D^c_{2\delta}(x_i)}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \Big)\,. \end{align} As in \eqref{pipicacaprout}, by dominated convergence we have \begin{equation}\label{pipicacaprout3} \iint_{D_{\delta}(x_i)\times D^c_{2\delta}(x_i)}\frac{|w_k(x)-w_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y \mathop{\longrightarrow}\limits_{k\to\infty}0 \quad\forall i\in I\,. \end{equation} Combining \eqref{pipicacaprout1}, \eqref{pipicacaprout2}, and \eqref{pipicacaprout3} leads to $\mathcal{E}_s(w_k,\omega)\to 0$, and thus $u_k\to u$ strongly in $\widehat H^s(\omega;\mathbb{R}^d)$. \vskip5pt \noindent{\it Step 3.} Our aim in this step is to show that $u$ is a weakly $s$-harmonic map in $\Omega$, i.e., $u$ satisfies equation \eqref{ELeqorig}, or equivalently \eqref{ELeqsgrad}, by Proposition \ref{ELeqprop}. To this purpose, we fix an arbitrary $\varphi\in\mathscr{D}(\Omega;\mathbb{R}^d)$, and we choose an open subset $\omega\subseteq \Omega$ such that ${\rm spt}(\varphi)\subseteq \omega$ and $\overline\omega\subseteq\Omega$. Writing again $w_k:=u_k-u$, we have proved in Step 2 that $\mathcal{E}_s(w_k,\omega)\to 0$. 
Recalling our notations from Subsection \ref{sectoperandcompcomp}, we observe that $$|{\rm d}_s u_k|^2-|{\rm d}_s u|^2=|{\rm d}_sw_k|^2 +2{\rm d}_sw_k\odot{\rm d}_s u\,,$$ and then estimate \begin{align*} \big\||{\rm d}_s u_k|^2-|{\rm d}_s u|^2\big\|_{L^1(\omega)}& \leqslant \big\| |{\rm d}_sw_k|^2\big\|_{L^1(\omega)}+2 \big\| {\rm d}_sw_k\odot{\rm d}_s u\big\|_{L^1(\omega)}\\ &\leqslant 2 \mathcal{E}_s(w_k,\omega) + 2 \|{\rm d}_sw_k\|_{L^2_{\rm od}(\omega)}\|{\rm d}_su\|_{L^2_{\rm od}(\omega)}\\ &\leqslant 2 \mathcal{E}_s(w_k,\omega) + 2\sqrt{2} \|{\rm d}_su\|_{L^2_{\rm od}(\omega)}\sqrt{\mathcal{E}_s(w_k,\omega)}\,. \end{align*} Therefore $|{\rm d}_s u_k|^2\to |{\rm d}_s u|^2$ in $L^1(\omega)$, and we can find a further (not relabeled) subsequence and $h\in L^1(\omega)$ such that $$|{\rm d}_su_k|^2(x)\to |{\rm d}_su|^2(x)\text{ for a.e. $x\in\omega$, and } |{\rm d}_su_k|^2(x)\leqslant h(x)\text{ for a.e. $x\in\omega$}\,.$$ Since $|u_k|=1$ and $u_k\to u$ a.e. in $\omega$, it follows by dominated convergence that $|{\rm d}_su_k|^2u_k\to |{\rm d}_su|^2u$ in $L^1(\omega)$. Consequently, $$\int_\Omega|{\rm d}_su_k|^2u_k\cdot\varphi\,{\rm d} x\mathop{\longrightarrow}\limits_{k\to\infty}\int_\Omega|{\rm d}_su|^2u\cdot\varphi\,{\rm d} x \,. $$ On the other hand, the weak convergence of $u_k$ to $u$ in $\widehat H^s(\Omega;\mathbb{R}^d)$ implies that $\big\langle (-\Delta)^su_k,\varphi\big\rangle_\Omega$ converges to $\big\langle (-\Delta)^su,\varphi\big\rangle_\Omega$. Hence, $$ \big\langle (-\Delta)^su,\varphi\big\rangle_\Omega=\lim_{k\to\infty} \big\langle (-\Delta)^su_k,\varphi\big\rangle_\Omega=\lim_{k\to\infty} \int_\Omega|{\rm d}_su_k|^2u_k\cdot\varphi\,{\rm d} x=\int_\Omega|{\rm d}_su|^2u\cdot\varphi\,{\rm d} x \,,$$ so that $u$ is indeed weakly $s$-harmonic in $\Omega$ (see \eqref{ELeqsgrad}). \vskip5pt \noindent{\it Step 4.} It now only remains to prove that $u$ is stationary in $\Omega$. 
This is in fact an easy consequence of the strong convergence of $u^{\rm e}_k$ established in Step 1. Indeed, let us fix an arbitrary vector field $X\in C^1(\mathbb{R}^n;\mathbb{R}^n)$ compactly supported in $\Omega$. Combining this strong convergence together with the representation of the first variation $\delta\mathcal{E}_s$ stated in Proposition \ref{represfirstvar}, we obtain that $\delta\mathcal{E}_s(u_k,\Omega)[X]\to \delta\mathcal{E}_s(u,\Omega)[X]$, whence $\delta\mathcal{E}_s(u,\Omega)=0$. \end{proof} \begin{proof}[Proof of Theorem \ref{compactthmmins}] In view of Remark \ref{implicminstat} and Theorem \ref{maincompactthm}, it only remains to prove that the limiting map $u$ is a minimizing $s$-harmonic map in $\Omega$. We follow here the argument in \cite[Theorem 4.1]{MSY}. Let us now consider an arbitrary $\widetilde u\in \widehat H^s(\Omega;\mathbb{S}^{d-1})$ such that ${\rm spt}(u-\widetilde u)\subseteq\Omega$. We select an open subset $\omega\subseteq\Omega$ with Lipschitz boundary such that ${\rm spt}(u-\widetilde u)\subseteq\omega$ and $\overline\omega\subseteq\Omega$. Define $$\widetilde u_k(x):=\begin{cases} \widetilde u(x) & \text{if $x\in\omega$}\,,\\ u_k(x) & \text{otherwise}\,. \end{cases} $$ Since $s\in(0,1/2)$ and $\partial\omega$ is Lipschitz regular, it turns out that $\widetilde u_k\in\widehat H^s(\Omega;\mathbb{S}^{d-1})$ (see e.g. \cite[Section 2.1]{MSK}), and ${\rm spt}(u_k-\widetilde u_k)\subseteq \Omega$. By minimality of $u_k$, we have $\mathcal{E}_s(u_k,\Omega)\leqslant \mathcal{E}_s(\widetilde u_k,\Omega)$.
Since $\widetilde u_k=u_k$ in $\mathbb{R}^n\setminus\omega$, this reduces to $$ \mathcal{E}_s(u_k,\omega)\leqslant \mathcal{E}_s(\widetilde u_k,\omega)=\frac{\gamma_{n,s}}{4}\iint_{\omega\times\omega}\frac{|\widetilde u(x)-\widetilde u(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y+\frac{\gamma_{n,s}}{2}\iint_{\omega\times\omega^c}\frac{|\widetilde u(x)-u_k(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\,.$$ On the other hand, $$\frac{|\widetilde u(x)- u_k(y)|^2}{|x-y|^{n+2s}}\leqslant \frac{4}{|x-y|^{n+2s}}\in L^1(\omega\times\omega^c)\,, $$ since $\omega$ has Lipschitz boundary. Hence, $\mathcal{E}_s(\widetilde u_k,\omega)\to \mathcal{E}_s(\widetilde u,\omega)$ by dominated convergence and the fact that $\widetilde u=u$ in $\mathbb{R}^n\setminus\omega$. By Fatou's Lemma, we have $\liminf_k\mathcal{E}_s(u_k,\omega)\geqslant \mathcal{E}_s(u,\omega)$, and we reach the conclusion that $ \mathcal{E}_s(u,\omega)\leqslant \mathcal{E}_s(\widetilde u,\omega)$. Once again, the fact that $\widetilde u=u$ in $\mathbb{R}^n\setminus\omega$ then implies that $\mathcal{E}_s(u,\Omega)\leqslant \mathcal{E}_s(\widetilde u,\Omega)$. By arbitrariness of $\widetilde u$, we conclude that $u$ is indeed a minimizing $s$-harmonic map in $\Omega$. \end{proof} We now close this subsection with an easy consequence of Theorem \ref{maincompactthm} and Theorem \ref{compactthmmin1/2} in terms of the pointwise density function $\boldsymbol{\Xi}_s(u,\cdot)$ defined in \eqref{deflimitdens}. \begin{corollary}\label{uscdensit} Assume that $n>2s$. Under the assumptions of Theorem \ref{maincompactthm} or Theorem \ref{compactthmmin1/2}, if $\{x_k\}\subseteq\Omega$ is a sequence converging to $x_*\in\Omega$, then $$\limsup_{k\to\infty}\,\boldsymbol{\Xi}_s(u_k,x_k)\leqslant \boldsymbol{\Xi}_s(u,x_*) \,.$$ \end{corollary} \begin{proof} Without loss of generality, we can assume that $x_*=0$.
Applying Corollary \ref{corolmonotform}, we obtain for $r>0$ small enough and $r_k:=|x_k|$, \begin{equation}\label{concertcesoir} \boldsymbol{\Xi}_s(u_k,x_k)\leqslant \boldsymbol{\Theta}(u^{\rm e}_k,{\bf x}_k,r)\leqslant \frac{1}{r^{n-2s}}{\bf E}_s(u^{\rm e}_k,B^+_{r+r_k})\,, \end{equation} where ${\bf x}_k:=(x_k,0)$. By Theorem \ref{maincompactthm} (in the case $s\not=1/2$) and Theorem \ref{compactthmmin1/2} (in the case $s=1/2$), $u_k^{\rm e}\to u^{\rm e}$ strongly in $H^1(B^+_{2r},|z|^a{\rm d}{\bf x})$. Since $r_k\to0$, we deduce from \eqref{concertcesoir} that $$\limsup_{k\to\infty} \, \boldsymbol{\Xi}_s(u_k,x_k)\leqslant \boldsymbol{\Theta}(u^{\rm e},0,r)\,, $$ and the conclusion follows letting $r\to 0$. \end{proof} \subsection{Tangent maps}\label{secttangmap} We assume throughout this subsection that $s\in(0,1)$ and $n>2s$. We consider a bounded open set $\Omega\subseteq\mathbb{R}^n$ and a map $u\in\widehat H^s(\Omega;\mathbb{S}^{d-1})$ that we assume to be \begin{itemize} \item a stationary weakly $s$-harmonic map in $\Omega$ for $s\not=1/2$; \item a minimizing $1/2$-harmonic map in $\Omega$ for $s=1/2$. \end{itemize} We shall apply the results of Subsection \ref{subsectcompact} to define the so-called {\sl tangent maps} of $u$ at a given point. To this purpose, we fix a point of study $x_0\in\Omega$ and a reference radius $\rho_0>0$ such that $D_{2\rho_0}(x_0)\subseteq\Omega$. We introduce the rescaled function $$u_{x_0,\rho}(x):=u(x_0+\rho x)\,,$$ and we observe that $(u_{x_0,\rho})^{\rm e}({\bf x})=u^{\rm e}({\bf x}_0+\rho {\bf x})=u_{x_0,\rho}^{\rm e}({\bf x})$ with ${\bf x}_0=(x_0,0)$. Rescaling variables, $u_{x_0,\rho}$ is a stationary weakly $s$-harmonic map in $(\Omega-x_0)/\rho$ for $s\not=1/2$, or a minimizing $1/2$-harmonic map in $(\Omega-x_0)/\rho$ for $s=1/2$. In addition, \begin{equation}\label{identrescaldens} \boldsymbol{\Theta}_s(u^{\rm e}_{x_0,\rho},0,r)=\boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,\rho r) \quad\forall r\in(0,\rho_0/\rho]\,. 
\end{equation} This identity together with the monotonicity formula in Proposition \ref{monotformula} and Lemma \ref{hatH1/2toH1} yields $$\boldsymbol{\Theta}_s(u^{\rm e}_{x_0,\rho},0,r)\leqslant \boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,\rho_0) \leqslant C\rho_0^{2s-n}\mathcal{E}_s(u,\Omega) \quad\forall r\in(0,\rho_0/\rho]\,,$$ for a constant $C$ depending only on $n$ and $s$. In turn, Lemma \ref{HsregtraceH1weight} implies that $$[u_{x_0,\rho}]^2_{H^s(D_{2r})}\leqslant C\rho_0^{2s-n}r^{n-2s} \mathcal{E}_s(u,\Omega)\quad\forall r\in(0,\rho_0/(4\rho)]\,.$$ Using $|u_{x_0,\rho}|=1$, we can now estimate for $r\in(0,\rho_0/(4\rho)]$, $$\mathcal{E}_s(u_{x_0,\rho},D_r)\leqslant C\Big([u_{x_0,\rho}]^2_{H^s(D_{2r})}+\iint_{D_r\times D^c_{2r}}\frac{{\rm d} x{\rm d} y}{|x-y|^{n+2s}}\Big) \leqslant Cr^{n-2s}\big(\rho_0^{2s-n} \mathcal{E}_s(u,\Omega)+1\big)\,.$$ Given a sequence $\rho_k\to 0$, we deduce from the above estimate that $$\limsup_{k\to\infty} \mathcal{E}_s(u_{x_0,\rho_k},D_r) <+\infty\quad \forall r>0\,.$$ Applying Theorem \ref{maincompactthm}, Theorem \ref{compactthmmins}, and Theorem \ref{compactthmmin1/2}, we can now find a subsequence $\{\rho^\prime_k\}$ and $\varphi\in H^s_{\rm loc}(\mathbb{R}^n;\mathbb{S}^{d-1})$ such that $$u_{x_0,\rho^\prime_k}\to \varphi\text{ strongly in $\widehat H^s(D_r)$, and }u^{\rm e}_{x_0,\rho^\prime_k}\to \varphi^{\rm e}\text{ strongly in $H^1(B_r^+,|z|^a{\rm d}{\bf x})$ for all $r>0$}\,, $$ where \begin{enumerate} \item[(i)] {\it if $s\not=1/2$:} $\varphi$ is a stationary weakly $s$-harmonic map in $D_r$ for all $r>0$; \item[(ii)] {\it if $s\leqslant 1/2$ and $u$ is minimizing:} $\varphi$ is a minimizing $s$-harmonic map in $D_r$ for all $r>0$. \end{enumerate} \begin{definition} Every function $\varphi$ obtained by this process will be referred to as {\sl a tangent map to $u$ at the point $x_0$}. The family of all tangent maps to $u$ at $x_0$ is denoted by $T_{x_0}(u)$.
\end{definition} We now present some classical properties of tangent maps following e.g. \cite{Sim} or \cite[Section~6]{MSK}. \begin{lemma} If $\varphi\in T_{x_0}(u)$, then $$\boldsymbol{\Theta}_s(\varphi^{\rm e},0,r)= \boldsymbol{\Xi}_s(\varphi,0)=\boldsymbol{\Xi}_s(u,x_0)\quad\forall r>0\,,$$ and $\varphi$ is positively $0$-homogeneous, i.e., $\varphi(\lambda x)=\varphi(x)$ for every $\lambda>0$ and $x\in\mathbb{R}^n$. In particular, \begin{equation}\label{homogdenstangmap} \boldsymbol{\Xi}_s(\varphi,\lambda x)= \boldsymbol{\Xi}_s(\varphi,x)\quad\text{for every $x\in\mathbb{R}^n\setminus\{0\}$ and $\lambda>0$}\,. \end{equation} \end{lemma} \begin{proof} From the strong convergence of $u^{\rm e}_{x_0,\rho^\prime_k}$ to $\varphi^{\rm e}$ in $H^1(B_r^+,|z|^a{\rm d}{\bf x})$ and \eqref{identrescaldens}, we first deduce that $$\boldsymbol{\Theta}_s(\varphi^{\rm e},0,r)=\lim_{k\to\infty}\boldsymbol{\Theta}_s(u^{\rm e},{\bf x}_0,\rho_k^\prime r)=\boldsymbol{\Xi}_s(u,x_0)\quad\forall r>0\,.$$ Then, the constancy of $r\mapsto \boldsymbol{\Theta}_s(\varphi^{\rm e},0,r)$ together with the monotonicity formula in Proposition~\ref{monotformula} implies that ${\bf x}\cdot\nabla\varphi^{\rm e}({\bf x})=0$ for every ${\bf x}\in\mathbb{R}^{n+1}_+$. Hence, $\varphi^{\rm e}$ is positively $0$-homogeneous, and the homogeneity of $\varphi$ follows. As a consequence, for $x\in\mathbb{R}^n\setminus\{0\}$ and $\lambda>0$, $$\boldsymbol{\Theta}_s(\varphi^{\rm e},\lambda {\bf x},r)= \boldsymbol{\Theta}_s(\varphi^{\rm e},{\bf x},r/\lambda)\,,$$ where ${\bf x}:=(x,0)$. Letting now $r\to0$ yields \eqref{homogdenstangmap}. 
\end{proof} \begin{lemma}\label{defSphi} If $\varphi\in T_{x_0}(u)$, then $$\boldsymbol{\Xi}_s(\varphi,y)\leqslant \boldsymbol{\Xi}_s(\varphi,0) \quad\forall y\in\mathbb{R}^n\,.$$ In addition, the set $$ S(\varphi):=\Big\{y\in\mathbb{R}^n: \boldsymbol{\Xi}_s(\varphi,y)= \boldsymbol{\Xi}_s(\varphi,0)\Big\}$$ is a linear subspace of $\mathbb{R}^n$, and $\varphi(x+y)=\varphi(x)$ for every $y\in S(\varphi)$ and every $x\in\mathbb{R}^n$. \end{lemma} \begin{proof} {\it Step 1.} By Corollary \ref{corolmonotform}, we have, for every $y\in\mathbb{R}^n$ and $\rho>0$, \begin{equation}\label{yaourtpres} \boldsymbol{\Xi}_s(\varphi,y) +\boldsymbol{\delta}_s\int_{B^+_\rho({\bf y})}z^a\frac{|({\bf x}-{\bf y})\cdot\nabla \varphi^{\rm e}|^2}{|{\bf x}-{\bf y}|^{n+2-2s}}\,{\rm d} {\bf x}=\boldsymbol{\Theta}_s(\varphi^{\rm e},{\bf y},\rho)\,, \end{equation} where ${\bf y}=(y,0)$. On the other hand, by homogeneity of $\varphi$, $$ \boldsymbol{\Theta}_s(\varphi^{\rm e},{\bf y},\rho)\leqslant \frac{(\rho+|y|)^{n-2s}}{\rho^{n-2s}} \boldsymbol{\Theta}_s(\varphi^{\rm e},0,\rho+|y|)=\frac{(\rho+|y|)^{n-2s}}{\rho^{n-2s}}\boldsymbol{\Xi}_s(\varphi,0)\,. $$ Combining this inequality with \eqref{yaourtpres} and letting $\rho\to\infty$ yields $$\boldsymbol{\Xi}_s(\varphi,y) +\boldsymbol{\delta}_s\int_{\mathbb{R}^{n+1}_+}z^a\frac{|({\bf x}-{\bf y})\cdot\nabla \varphi^{\rm e}({\bf x})|^2}{|{\bf x}-{\bf y}|^{n+2-2s}}\,{\rm d} {\bf x}\leqslant \boldsymbol{\Xi}_s(\varphi,0) \,.$$ \vskip5pt \noindent{\it Step 2.} Next, assume that $\boldsymbol{\Xi}_s(\varphi,y)=\boldsymbol{\Xi}_s(\varphi,0)$ for some $y\not=0$. Then $({\bf x}-{\bf y})\cdot\nabla\varphi^{\rm e}({\bf x})=0$ for all ${\bf x}\in\mathbb{R}^{n+1}_+$. By $0$-homogeneity of $\varphi^{\rm e}$, we then have ${\bf y}\cdot\nabla\varphi^{\rm e}({\bf x})=0$ for all ${\bf x}\in\mathbb{R}^{n+1}_+$, and thus \begin{equation}\label{invartransSphi} \varphi(x+y)=\varphi(x) \quad\forall x\in\mathbb{R}^n\,.
\end{equation} Conversely, if \eqref{invartransSphi} holds for some $y\not=0$, then $({\bf x}-{\bf y})\cdot\nabla\varphi^{\rm e}({\bf x})=0$ for all ${\bf x}\in\mathbb{R}^{n+1}_+$ (again by homogeneity). We then infer from \eqref{yaourtpres} and \eqref{invartransSphi} that for $\rho>0$, $$\boldsymbol{\Xi}_s(\varphi,y) =\boldsymbol{\Theta}_s(\varphi^{\rm e},{\bf y},\rho)=\boldsymbol{\Theta}_s(\varphi^{\rm e},0,\rho) =\boldsymbol{\Xi}_s(\varphi,0)\,,$$ i.e., $y\in S(\varphi)$. Hence, \eqref{invartransSphi} characterizes $S(\varphi)$, and the linearity of $S(\varphi)$ follows. \end{proof} \begin{remark}\label{remdimSphinonconst} If there exists $\varphi\in T_{x_0}(u)$ such that ${\rm dim}\,S(\varphi)=n$, then $\varphi$ is clearly constant, and thus $\boldsymbol{\Xi}_s(u,x_0)=\boldsymbol{\Xi}_s(\varphi,0)=0$. By Theorem \ref{thmepsregLip}, $u$ is continuous in a neighborhood of $x_0$, so that $\varphi=u(x_0)$. In other words, $ T_{x_0}(u)=\{u(x_0)\}$. As a consequence, if on the contrary $\boldsymbol{\Xi}_s(u,x_0)>0$, then all tangent maps $\varphi\in T_{x_0}(u)$ must be non constant, and hence satisfy ${\rm dim}\,S(\varphi)\leqslant n-1$. \end{remark} \begin{lemma}\label{rigidlemtangmapsbig1/2} Assume that $s\in[1/2,1)$. If $\varphi\in T_{x_0}(u)$ is not constant, then $${\rm dim}\,S(\varphi)\leqslant n-2\,. $$ \end{lemma} \begin{proof} We proceed by contradiction, assuming that there exists a non constant tangent map $\varphi\in T_{x_0}(u)$ such that ${\rm dim}\,S(\varphi)=n-1$. Rotating coordinates if necessary, we can assume that $S(\varphi)=\{0\}\times\mathbb{R}^{n-1}$. By Lemma \ref{defSphi}, the map $\varphi$ only depends on the $x_1$-variable, that is $\varphi(x)=:\psi(x_1)$ where $\psi\in H^s_{\rm loc}(\mathbb{R};\mathbb{S}^{d-1})$.
Since $\varphi$ is positively $0$-homogeneous and non constant, the map $\psi$ is of the form \begin{equation}\label{shapetangmap} \psi(x_1)=\begin{cases} {\rm a} & \text{if $x_1>0$}\,,\\ {\rm b} & \text{if $x_1<0$}\,, \end{cases} \end{equation} for some points ${\rm a}, {\rm b}\in\mathbb{S}^{d-1}$, ${\rm a}\not={\rm b}$. However, for $s\in(1/2,1)$ the space $H^s_{\rm loc}(\mathbb{R})$ embeds into $C^{0,s-1/2}_{\rm loc}(\mathbb{R})$, while for $s=1/2$ a map with a jump discontinuity fails to belong to $H^{1/2}_{\rm loc}(\mathbb{R})$, its Gagliardo seminorm being infinite on every interval containing the jump. In either case this enforces ${\rm a}={\rm b}$, a contradiction. \end{proof} \begin{lemma}\label{rigidminsharmtangmap} Assume that $n\geqslant 2$, $s\in(0,1/2)$, and that $u$ is a minimizing $s$-harmonic map in $\Omega$. If $\varphi\in T_{x_0}(u)$ is not constant, then $${\rm dim}\,S(\varphi)\leqslant n-2\,. $$ \end{lemma} To prove Lemma \ref{rigidminsharmtangmap}, we shall make use of the following pleasant computation. \begin{remark}\label{remcomputmasspoisskern} For $n\geqslant 2$, we have \begin{equation}\label{computalphans} \alpha_{n,s}:=\int_{\mathbb{R}^{n-1}} \frac{{\rm d} x^\prime}{(1+|x^\prime|^2)^{\frac{n+2s}{2}}}=\frac{\gamma_{1,s}}{\gamma_{n,s}}\,.
\end{equation} Indeed, we easily compute in polar coordinates and setting $t:=r^2$, $$\int_{\mathbb{R}^{n-1}} \frac{{\rm d} x^\prime}{(1+|x^\prime|^2)^{\frac{n+2s}{2}}}=|\mathbb{S}^{n-2}|\int_{0}^{+\infty}\frac{r^{n-2}}{(1+r^2)^{\frac{n+2s}{2}}}\,{\rm d} r =\frac{|\mathbb{S}^{n-2}|}{2}\int_0^{+\infty}\frac{t^{\frac{n-1}{2}-1}}{(1+t)^{\frac{n+2s}{2}}}\,{\rm d} t\,.$$ Recalling the value of $\gamma_{n,s}$ given in \eqref{defHsandgammans}, we thus have \begin{multline}\label{calculpoisskern} \int_{\mathbb{R}^{n-1}} \frac{{\rm d} x^\prime}{(1+|x^\prime|^2)^{\frac{n+2s}{2}}}=\frac{|\mathbb{S}^{n-2}|}{2} {\rm B}\Big(\frac{n-1}{2},\frac{1+2s}{2}\Big)\\ =\frac{|\mathbb{S}^{n-2}|}{2} \frac{\Gamma(\frac{n-1}{2})\Gamma(\frac{1+2s}{2})}{\Gamma(\frac{n+2s}{2})} =\pi^{\frac{n-1}{2}}\frac{\Gamma(\frac{1+2s}{2})}{\Gamma(\frac{n+2s}{2})}=\frac{\gamma_{1,s}}{\gamma_{n,s}}\,, \end{multline} where ${\rm B}(\cdot,\cdot)$ denotes the Euler Beta function. \end{remark} \begin{proof}[Proof of Lemma \ref{rigidminsharmtangmap}] {\it Step 1.} We proceed again by contradiction assuming that there exists a non constant tangent map $\varphi\in T_{x_0}(u)$ such that ${\rm dim}\,S(\varphi)=n-1$. Rotating coordinates if necessary, we can proceed as in the proof of Lemma \ref{rigidlemtangmapsbig1/2} to infer that $\varphi(x)=:\psi(x_1)$ where $\psi\in H^s_{\rm loc}(\mathbb{R};\mathbb{S}^{d-1})$ is of the form \eqref{shapetangmap} for some points ${\rm a}, {\rm b}\in\mathbb{S}^{d-1}$, ${\rm a}\not={\rm b}$. We claim that $\psi$ is a minimizing $s$-harmonic map in the interval $(-1,1)$. Once the claim is proved (which is the object of the next step), we can infer from the regularity result \cite[Theorem 1.2]{MSY} that $\psi$ is continuous in $(-1,1)$, which again enforces ${\rm a}={\rm b}$, a contradiction. \vskip5pt \noindent{\it Step 2.} We now prove that $\psi$ is a minimizing $s$-harmonic map in $(-1,1)$. 
To this purpose, we fix an arbitrary competitor $v\in \widehat H^s((-1,1);\mathbb{S}^{d-1})$ such that ${\rm spt}(v-\psi)\subseteq(-1,1)$. Given $r>1$, we consider the open set $Q_r\subseteq\mathbb{R}^n$ defined by $Q_r:=(-1,1)\times D^\prime_r$ where $D_r^\prime$ denotes the open ball in $\mathbb{R}^{n-1}$ centered at the origin of radius $r$. We define a map $\widetilde v_r\in \widehat H^s(Q_r;\mathbb{S}^{d-1})$ by setting for $x=(x_1,x^\prime)\in\mathbb{R}^n$, $$\widetilde v_r(x):= \begin{cases} v(x_1) & \text{if $|x^\prime|<r$}\,,\\ \psi(x_1) & \text{if $|x^\prime|\geqslant r$}\,. \end{cases} $$ Recalling that $u$ is assumed to be minimizing, $\varphi$ is minimizing in every ball. Since ${\rm spt}(\widetilde v_r-\varphi)\subseteq Q_{r+1}$, we thus have $$\mathcal{E}_s(\varphi,Q_{r+1})\leqslant \mathcal{E}_s(\widetilde v_r,Q_{r+1})\,. $$ Since $\widetilde v_r=\varphi$ in $\mathbb{R}^n\setminus Q_r$, it reduces to \begin{equation}\label{mintesttangmap} \mathcal{E}_s(\varphi,Q_{r})\leqslant \mathcal{E}_s(\widetilde v_r,Q_{r})\,. \end{equation} We claim that \begin{equation}\label{asymptreducdimenerg} \frac{1}{|D^\prime_r|}\,\mathcal{E}_s(\widetilde v_r,Q_{r})\mathop{\longrightarrow}\limits_{r\to\infty} \mathcal{E}_s\big(v,(-1,1)\big)\,, \end{equation} where $|D^\prime_r|$ denotes the volume of $D_r^\prime$ in $\mathbb{R}^{n-1}$. Since we could have taken $v$ to be equal to $\psi$, \eqref{asymptreducdimenerg} also holds with $\varphi$ in place of $\widetilde v_r$ and $\psi$ in place of $v$. Therefore, dividing both sides of \eqref{mintesttangmap} by $|D^\prime_r|$ and letting $r\to\infty$ leads to $$\mathcal{E}_s\big(\psi,(-1,1)\big)\leqslant \mathcal{E}_s\big(v,(-1,1)\big) \,,$$ which proves that $\psi$ is indeed minimizing in $(-1,1)$. Let us now compute $\mathcal{E}_s(\widetilde v_r,Q_{r})$ to prove \eqref{asymptreducdimenerg}. 
First, by Fubini's theorem we have \begin{multline*} \iint_{Q_r\times Q_r}\frac{|\widetilde v_r(x)-\widetilde v_r(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ =\iint_{(-1,1)^2}|v(x_1)-v(y_1)|^2\Big(\iint_{D_r^\prime\times D_r^\prime}\frac{{\rm d} x^\prime{\rm d} y^\prime}{(|x_1-y_1|^2+|x^\prime-y^\prime|^2)^{\frac{n+2s}{2}}}\Big)\,{\rm d} x_1{\rm d} y_1\,. \end{multline*} Then we observe that a change of variables yields \begin{multline*} \iint_{D_r^\prime\times D_r^\prime}\frac{{\rm d} x^\prime{\rm d} y^\prime}{(|x_1-y_1|^2+|x^\prime-y^\prime|^2)^{\frac{n+2s}{2}}}\\ =\iint_{D_r^\prime\times \mathbb{R}^{n-1}}\frac{{\rm d} x^\prime{\rm d} y^\prime}{(|x_1-y_1|^2+|x^\prime-y^\prime|^2)^{\frac{n+2s}{2}}}-A_r(|x_1-y_1|)\\ =\frac{\alpha_{n,s}|D^\prime_r|}{|x_1-y_1|^{1+2s}}-A_r(|x_1-y_1|)\,, \end{multline*} where $\alpha_{n,s}$ is given by \eqref{computalphans}, and $A_r(t)$ is defined for $t>0$ by $$A_r(t):= \iint_{D_r^\prime\times(D^\prime_r)^c}\frac{{\rm d} x^\prime{\rm d} y^\prime}{(t^2+|x^\prime-y^\prime|^2)^{\frac{n+2s}{2}}}\,.$$ Therefore, \begin{multline}\label{calcaoutprout1} \iint_{Q_r\times Q_r}\frac{|\widetilde v_r(x)-\widetilde v_r(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y=\alpha_{n,s}|D^\prime_r|\iint_{(-1,1)^2}\frac{|v(x_1)-v(y_1)|^2}{|x_1-y_1|^{1+2s}}\,{\rm d} x_1{\rm d} y_1\\ -\iint_{(-1,1)^2}|v(x_1)-v(y_1)|^2A_r(|x_1-y_1|)\,{\rm d} x_1{\rm d} y_1\,. 
\end{multline} Similarly, we compute \begin{multline*} \iint_{Q_r\times (Q_r)^c}\frac{|\widetilde v_r(x)-\widetilde v_r(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y\\ =\iint_{(-1,1)\times(-1,1)^c}|v(x_1)-v(y_1)|^2\Big(\iint_{D_r^\prime\times \mathbb{R}^{n-1}}\frac{{\rm d} x^\prime{\rm d} y^\prime}{(|x_1-y_1|^2+|x^\prime-y^\prime|^2)^{\frac{n+2s}{2}}}\Big)\,{\rm d} x_1{\rm d} y_1\\ +\iint_{(-1,1)^2}|v(x_1)-\psi(y_1)|^2A_r(|x_1-y_1|)\,{\rm d} x_1{\rm d} y_1\,, \end{multline*} so that \begin{multline}\label{calcaoutprout2} \iint_{Q_r\times (Q_r)^c}\frac{|\widetilde v_r(x)-\widetilde v_r(y)|^2}{|x-y|^{n+2s}}\,{\rm d} x{\rm d} y = \alpha_{n,s}|D^\prime_r|\iint_{(-1,1)\times(-1,1)^c}\frac{|v(x_1)-v(y_1)|^2}{|x_1-y_1|^{1+2s}}\,{\rm d} x_1{\rm d} y_1\\ +\iint_{(-1,1)^2}|v(x_1)-\psi(y_1)|^2A_r(|x_1-y_1|)\,{\rm d} x_1{\rm d} y_1\,. \end{multline} Combining \eqref{calcaoutprout1} and \eqref{calcaoutprout2} leads to $$\frac{1}{|D^\prime_r|}\,\mathcal{E}_s(\widetilde v_r,Q_{r})= \mathcal{E}_s\big(v,(-1,1)\big)-I_r+II_r\,,$$ where $$I_r:=\frac{\gamma_{1,s}}{4|D^\prime_r|} \iint_{(-1,1)^2}|v(x_1)-v(y_1)|^2A_r(|x_1-y_1|)\,{\rm d} x_1{\rm d} y_1\,,$$ and $$II_r :=\frac{\gamma_{1,s}}{2|D^\prime_r|}\iint_{(-1,1)^2}|v(x_1)-\psi(y_1)|^2A_r(|x_1-y_1|)\,{\rm d} x_1{\rm d} y_1\,.$$ Since $|v|=|\psi|=1$, we have $$I_r+II_r\leqslant Cr^{1-n}\iint_{(-1,1)^2}A_r(|x_1-y_1|)\,{\rm d} x_1{\rm d} y_1\,,$$ and using Fubini's theorem again, we estimate \begin{align*} \iint_{(-1,1)^2}A_r(|x_1-y_1|)\,{\rm d} x_1{\rm d} y_1 &\leqslant \iint_{D^\prime_r\times(D^\prime_r)^c}\Big(\iint_{(-1,1)\times\mathbb{R}}\frac{{\rm d} x_1{\rm d} y_1}{(|x_1-y_1|^2+|x^\prime-y^\prime|^2)^{\frac{n+2s}{2}}}\Big)\,{\rm d} x^\prime{\rm d} y^\prime\\ &\leqslant C \iint_{D^\prime_r\times(D^\prime_r)^c}\frac{{\rm d} x^\prime{\rm d} y^\prime}{|x^\prime-y^\prime|^{n-1+2s}}\\ &\leqslant C r^{n-1-2s}\,.
\end{align*} Therefore, $$\frac{1}{|D^\prime_r|}\,\mathcal{E}_s(\widetilde v_r,Q_{r})= \mathcal{E}_s\big(v,(-1,1)\big)+O(r^{-2s})\,,$$ and the proof is complete. \end{proof} \subsection{Proof of Theorem \ref{mainthm1}, Theorem \ref{mainthm2}, and Theorem \ref{mainthm3}} \begin{proof}[Proof of Theorem \ref{mainthm1}] Let us fix an arbitrary point $x_0\in \Omega$, and set $r_0:=\frac{1}{2}{\rm dist}(x_0,\Omega^c)$. Without loss of generality, we can assume that $x_0=0$, so that our aim is to show that $u$ is smooth in a neighborhood of $x_0=0$. As noticed in Remark \ref{remarlepsholdregsubcritic}, the function $r\in(0,2r_0-|{\bf x}|)\mapsto \boldsymbol{\Theta}_s(u^{\rm e},{\bf x},r)$ is nondecreasing for every ${\bf x}\in\partial^0B^+_{2r_0}$. Moreover, since $2s-n=2s-1\geqslant0$, we have $$\lim_{r\to0}\boldsymbol{\theta}_s(u,0,r)=0\,. $$ Then we deduce from Corollary \ref{corequivvanishdensities} that $$\lim_{r\to0}\boldsymbol{\Theta}_s(u^{\rm e},0,r)=0\,. $$ As a consequence, we can find $r_1\in(0,r_0)$ such that $\boldsymbol{\Theta}_s(u^{\rm e},0,r_1)\leqslant\boldsymbol{\varepsilon}_1$, where the constant $\boldsymbol{\varepsilon}_1$ is given by Corollary \ref{coroepsreghold}. From Theorem \ref{thmepsregLip}, we infer that $u\in C^{0,1}(D_{\boldsymbol{\kappa}_2 r_1})$ for a constant $\boldsymbol{\kappa}_2\in(0,1)$ depending only on $s$. In turn, Theorem \ref{highordthm} tells us that $u\in C^\infty(D_{\boldsymbol{\kappa}_2r_1/2})$. \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm2}, case $s=1/2$] Considering the constant $\boldsymbol{\varepsilon}_1>0$ given by Corollary \ref{coroepsreghold}, we define \begin{equation}\label{defSigmaproofmainthms} \Sigma:=\Big\{x\in\Omega: \boldsymbol{\Xi}_s(u,x)\geqslant \boldsymbol{\varepsilon}_1 \Big\}\,. \end{equation} By Corollary \ref{corolmonotform}, $\Sigma$ is a relatively closed subset of $\Omega$. On the other hand, it is well known that $\mathcal{H}^{n-1}(\Sigma)=0$, see e.g. \cite[Corollary 3.2.3]{Ziem}.
We claim that $u\in C^\infty(\Omega\setminus\Sigma)$. Indeed, if $x_0\in\Omega\setminus\Sigma$, then we can find a radius $r\in(0,\frac{1}{2}{\rm dist}(x_0,\Omega^c))$ such that $\boldsymbol{\Theta}_s(u^{\rm e},(x_0,0),r)\leqslant\boldsymbol{\varepsilon}_1$. Applying Theorem \ref{thmepsregLip} and Theorem \ref{highordthm}, we conclude that $u\in C^\infty(D_{\boldsymbol{\kappa}_2r/2}(x_0))$, and the claim is proved. Obviously, ${\rm sing}(u)\subseteq\Sigma$, and it now only remains to show that ${\rm sing}(u)=\Sigma$. This is in fact a direct consequence of the regularity result in \cite[Theorem 4.1]{GJ}. Indeed, assume by contradiction that there is a point $x_0\in \Sigma\setminus{\rm sing}(u)$. Since ${\rm sing}(u)$ is a relatively closed subset of $\Omega$, we can find $r>0$ such that $D_{2r}(x_0)\subseteq \Omega\setminus{\rm sing}(u)$, i.e., $u$ is continuous in $D_{2r}(x_0)$. Consequently, $u^{\rm e}$ is continuous in $B_r^+({\bf x}_0)\cup\partial^0B_r^+({\bf x}_0)$, where ${\bf x}_0=(x_0,0)$. However, by Proposition \ref{equivsharmfreebdry} (with $s=1/2$), $u^{\rm e}\in H^1(B_r^+({\bf x}_0);\mathbb{R}^d)$ also solves $$\int_{B^+_r({\bf x}_0)}\nabla u^{\rm e}\cdot\nabla\Phi\,{\rm d}{\bf x}=0 $$ for every $\Phi\in H^1(B^+_r({\bf x}_0);\mathbb{R}^d)$ such that $\Phi=0$ on $\partial^+B^+_r({\bf x}_0)$ and $u\cdot\Phi=0$ on $\partial^0B^+_r({\bf x}_0)$. Then \cite[Theorem 4.1]{GJ} tells us that $u^{\rm e}\in C^{1,\alpha}(B^+_{r/2}({\bf x}_0))$ for every $\alpha\in(0,1)$. Consequently, $\boldsymbol{\Xi}_s(u,x_0)=0$, i.e., $x_0\not\in \Sigma$, a contradiction.
On the other hand, if $u$ is continuous in a neighborhood of a point $x_0\in\Omega$, then $T_{x_0}(u)=\{u(x_0)\}$, and thus $\boldsymbol{\Xi}_s(u,x_0)=0$. Hence, $x_0\not\in\Sigma$, and we conclude that ${\rm sing}(u)= \Sigma$. In view of Remark \ref{remdimSphinonconst} and Lemma \ref{rigidlemtangmapsbig1/2}, we have $$\Sigma= \begin{cases} \big\{x\in\Omega: {\rm dim}\,S(\varphi)\leqslant n-1\;\;\forall\varphi\in T_x(u) \big\} &\text{if $s\in(0,1/2)$}\,;\\[5pt] \big\{x\in\Omega: {\rm dim}\,S(\varphi)\leqslant n-2\;\;\forall\varphi\in T_x(u) \big\} &\text{if $s\in(1/2,1)$}\,. \end{cases} $$ We can now apply e.g. \cite[Chapter 3.4, proof of Lemma 1]{Sim} (which only relies on the upper semicontinuity of $\boldsymbol{\Xi}_s$ stated in Corollary \ref{uscdensit}, the strong convergence of blow-ups to tangent maps, and the structure results on tangent maps established in Subsection \ref{secttangmap}) to conclude that ${\rm dim}_{\mathcal{H}}\Sigma\leqslant n-1$ for $s\in(0,1/2)$, ${\rm dim}_{\mathcal{H}}\Sigma\leqslant n-2$ for $s\in(1/2,1)$, and that $\Sigma$ is locally finite in $\Omega$ if $n=1$ with $s\in(0,1/2)$ or $n=2$ with $s\in(1/2,1)$. \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm3}] For $s\in(1/2,1)$, we simply apply Theorem \ref{mainthm2} (recalling that minimality implies stationarity). We thus assume that $s\in(0,1/2]$. Since $u$ is minimizing in $\Omega$, the results in Subsection \ref{secttangmap} apply. Hence, we can repeat the proof of Theorem \ref{mainthm2} to derive that $u\in C^\infty(\Omega\setminus\Sigma)$ and ${\rm sing}(u)=\Sigma$, where $\Sigma$ is still given by \eqref{defSigmaproofmainthms}.
In view of Lemma \ref{rigidlemtangmapsbig1/2} and Lemma \ref{rigidminsharmtangmap}, we now have $$\Sigma=\big\{x\in\Omega: {\rm dim}\,S(\varphi)\leqslant n-2\;\;\forall\varphi\in T_x(u) \big\}\,.$$ Once again, \cite[Chapter 3.4, proof of Lemma 1]{Sim} shows that ${\rm dim}_{\mathcal{H}}\Sigma\leqslant n-2$, and that $\Sigma$ is locally finite in $\Omega$ if $n=2$. \end{proof} \appendix \section{On the degenerate Laplace equation}\label{appendweightharm} In this first appendix, our aim is to recall some of the properties satisfied by weak solutions of the (scalar) degenerate linear elliptic equation \begin{equation}\label{maineqappendA} {\rm div}(|z|^a\nabla w)= 0 \quad\text{in $B_R({\bf x_0})$}\,, \end{equation} with ${\bf x}_0=(x_0,z_0)\in\mathbb{R}^{n+1}$. Those properties are essentially taken from \cite{Rob}, and we reproduce here the statements for the convenience of the reader. The notion of weak solution to this equation corresponds to the variational formulation. In other words, we say that $w\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$ is a weak solution of \eqref{maineqappendA} if $$\int_{B_R({\bf x}_0)}|z|^a\nabla w\cdot\nabla\Phi\,{\rm d}{\bf x}=0$$ for every $\Phi\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$ such that $\Phi=0$ on $\partial B_R({\bf x_0})$. One may complement \eqref{maineqappendA} with a boundary condition of the form $w=v$ on $\partial B_R({\bf x}_0)$ for a given $v\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$. This boundary condition is thus interpreted in the sense of traces. Classically, such a boundary condition uniquely determines the solution of \eqref{maineqappendA}, which can be characterized by energy minimality. \begin{lemma}\label{minimalityharmonw} Let $v\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$.
The equation \begin{equation}\label{maineqappendAwithbdry} \begin{cases} {\rm div}(|z|^a\nabla w)= 0 &\text{in $B_R({\bf x_0})$}\,,\\ w=v & \text{on $\partial B_R({\bf x}_0)$}\,, \end{cases} \end{equation} admits a unique weak solution, which is characterized by $$\int_{B_R({\bf x}_0)}|z|^a|\nabla w|^2\,{\rm d}{\bf x}\leqslant \int_{B_R({\bf x}_0)}|z|^a|\nabla \Phi|^2\,{\rm d}{\bf x}$$ for every $\Phi \in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$ satisfying $\Phi=v$ on $\partial B_R({\bf x_0})$. \end{lemma} As for the usual Laplace equation, energy minimality can be used to prove that $w$ inherits symmetries from the boundary condition. In our case, we make use of the following lemma. \begin{lemma}\label{symmharmw} Let ${\bf x}_0\in\mathbb{R}^n\times\{0\}$ and $v\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$. If $v$ is symmetric with respect to $\{z=0\}$, then the weak solution $w$ of \eqref{maineqappendAwithbdry} is also symmetric with respect to $\{z=0\}$. \end{lemma} Concerning interior regularity of weak solutions, the issue is of course near the hyperplane $\{z=0\}$. Indeed, if the ball $B_R({\bf x}_0)$ is away from $\{z=0\}$, then the operator becomes uniformly elliptic with smooth coefficients, and the classical elliptic theory tells us that weak solutions are $C^\infty$ in the interior. For an arbitrary ball, the general results of \cite{FKS} about degenerate elliptic equations apply, and they provide at least local H\"older continuity in the interior. Using the invariance of the equation with respect to the $x$-variables, the regularity can be further improved (see e.g. \cite[Corollary~2.13]{Rob}). Some boundary regularity and related maximum principles are also known from the general theory in \cite{HKM}. We reproduce here the statement in \cite[Lemma 2.18]{Rob}. \begin{lemma}\label{maxprincip} Let $v\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})\cap C^0\big(\overline B_R({\bf x}_0)\big)$.
The weak solution $w$ of \eqref{maineqappendAwithbdry} belongs to $C^0\big(\overline B_R({\bf x}_0)\big)$. Moreover, $$\min_{\overline B_R({\bf x}_0)} w=\min_{\partial B_R({\bf x}_0)} v \quad\text{and}\quad \max_{\overline B_R({\bf x}_0)} w=\max_{\partial B_R({\bf x}_0)} v\,.$$ \end{lemma} A further fundamental property of weak solutions of \eqref{maineqappendA} is an energy monotonicity in which one has to distinguish balls centered at a point of $\{z=0\}$ from balls lying away from $\{z=0\}$. The following two lemmas are taken from \cite[Lemma 2.8]{Rob} and \cite[Lemma 2.17]{Rob}, respectively. \begin{lemma}\label{monotIharmreplac} Let ${\bf x}_0\in\mathbb{R}^n\times\{0\}$ and let $w\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$ be a weak solution of \eqref{maineqappendA}. Assume that either $s\geqslant 1/2$, or that $s<1/2$ and $w$ is symmetric with respect to the hyperplane $\{z=0\}$. Then, $$\frac{1}{\rho^{n+2-2s}}\int_{B_\rho({\bf x}_0)}|z|^a|\nabla w|^2\,{\rm d}{\bf x}\leqslant \frac{1}{r^{n+2-2s}}\int_{B_r({\bf x}_0)}|z|^a|\nabla w|^2\,{\rm d}{\bf x} $$ for every $0<\rho\leqslant r\leqslant R$. \end{lemma} \begin{lemma}\label{monotIharmreplac2} Let $w\in H^1(B_R({\bf x}_0),|z|^a{\rm d}{\bf x})$ be a weak solution of \eqref{maineqappendA}. If ${\bf x}_0=(x_0,z_0)\in \mathbb{R}^{n+1}_+$ and $R>0$ are such that $B_R({\bf x}_0)\subseteq \mathbb{R}^{n+1}_+$ and $z_0\geqslant \theta R$ for some $\theta\geqslant 2$, then $$\Big(\frac{2}{R}\Big)^{n+1}\int_{B_{R/2}({\bf x}_0)} |z|^a|\nabla w|^2\,{\rm d}{\bf x}\leqslant\Big(1+\frac{C}{\theta-1}\Big)\frac{1}{R^{n+1}}\int_{ B_{R} ( {\bf x}_0 ) } |z|^a|\nabla w|^2\,{\rm d}{\bf x}\,,$$ for a constant $C=C(n)$.
\end{lemma} \section{A Lipschitz estimate for $s$-harmonic functions} \label{AppSharmfct} The purpose of this appendix is to provide an interior Lipschitz estimate for weak solutions $w\in \widehat H^s(D_1)$ of the fractional Laplace equation \begin{equation}\label{sharmfuncteqappend} (-\Delta)^sw=0 \quad\text{in $H^{-s}(D_1)$}\,. \end{equation} The notion of weak solution is understood here according to the weak formulation of the $s$-Laplacian operator, see \eqref{deffraclap}. Interior regularity for weak solutions is known, and it tells us that $w$ is locally $C^\infty$ in $D_1$. The following estimate is probably also well known, but we give a proof for the convenience of the reader. \begin{lemma}\label{lipestsharmfctlem} If $w\in \widehat H^s(D_1)$ is a weak solution of \eqref{sharmfuncteqappend}, then $w\in C^\infty(D_{1/2})$, and \begin{equation}\label{lipestiappend} \|w\|^2_{L^\infty(D_{1/2})}+\|\nabla w\|^2_{L^\infty(D_{1/2})}\leqslant C\big(\mathcal{E}_s(w,D_1)+\|w\|^2_{L^2(D_1)}\big)\,, \end{equation} for a constant $C=C(n,s)$. \end{lemma} \begin{proof} As already mentioned, interior regularity is known, and we take advantage of this fact to only derive the estimate \eqref{lipestiappend}. Let us fix an arbitrary point $x_0\in D_{1/2}$. We consider the extension $w^{\rm e}$, which belongs to $H^1(B^+_{1/4}({\bf x}_0),|z|^a{\rm d}{\bf x})$ with ${\bf x}_0:=(x_0,0)$ by Lemma~\ref{hatH1/2toH1}. In view of Lemma \ref{repnormderfraclap}, it satisfies $$\int_{B_{1/4}^+({\bf x}_0)}z^a\nabla w^{\rm e}\cdot\nabla\Phi\,{\rm d} {\bf x}=0 $$ for every $\Phi\in H^1(B_{1/4}^+({\bf x}_0),|z|^a{\rm d}{\bf x})$ such that $\Phi=0$ on $\partial^+B_{1/4}^+({\bf x}_0)$. Then we consider the even extension of $w^{\rm e}$ to the whole ball $B_{1/4}({\bf x}_0)$, still denoted by $w^{\rm e}$ (i.e. $w^{\rm e}(x,z)=w^{\rm e}(x,-z)$).
Then $w^{\rm e}\in H^1(B_{1/4}({\bf x}_0),|z|^a{\rm d}{\bf x})$, and arguing as in the proof of Corollary \ref{eqsymtrizedphase}, we infer that $w^{\rm e}$ is a weak solution of \eqref{maineqappendA} with $R=1/4$. According to \cite[Corollary 2.13]{Rob}, the weak derivatives $\partial_i w^{\rm e}$ belong to $H^1(B_{1/8}({\bf x}_0),|z|^a{\rm d} {\bf x})$ for $i=1,\ldots,n$, and they are weak solutions of \eqref{maineqappendA} with $R=1/8$. Now, applying \cite[Theorem 2.3.12]{FKS} to $w^{\rm e}$ and $\partial_i w^{\rm e}$, we infer that $w^{\rm e}\in C^{1,\alpha}(B_{1/16}({\bf x}_0))$ for some exponent $\alpha=\alpha(n,s)\in(0,1)$, \begin{equation}\label{holdestisharmfct} [w^{\rm e}]_{C^{0,\alpha}(B_{1/16}({\bf x}_0))}\leqslant C\|w^{\rm e}\|_{L^2(B_{1/8}({\bf x}_0),|z|^a{\rm d}{\bf x})}\,, \end{equation} and \begin{equation}\label{holdestisharmfct2} [\nabla_x w^{\rm e}]_{C^{0,\alpha}(B_{1/16}({\bf x}_0))}\leqslant C\|\nabla_x w^{\rm e}\|_{L^2(B_{1/8}({\bf x}_0),|z|^a{\rm d}{\bf x})}\,, \end{equation} for a constant $C=C(n,s)$. On the other hand, for every ${\bf x}\in B_{1/16}({\bf x}_0)$, we have (recall our notation in \eqref{weightvolball}) \begin{multline*} |w^{\rm e}({\bf x})|\leqslant \Big| w^{\rm e}({\bf x})-\frac{1}{|B_{1/16}|_a}\int_{B_{1/16}({\bf x}_0)}|z|^a w^{\rm e}({\bf y}){\rm d} {\bf y} \Big|+ \frac{1}{|B_{1/16}|_a}\int_{B_{1/16}({\bf x}_0)}|z|^a|w^{\rm e}({\bf y})|{\rm d} {\bf y}\\ \leqslant C\big([w^{\rm e}]_{C^{0,\alpha}(B_{1/16}({\bf x}_0))} +\|w^{\rm e}\|_{L^2(B_{1/16}({\bf x}_0),|z|^a{\rm d}{\bf x})} \big)\,.
\end{multline*} Combining this estimate with \eqref{holdestisharmfct} and Lemma \ref{hatH1/2toH1} leads to $$\|w^{\rm e}\|^2_{L^\infty(B_{1/16}({\bf x}_0))} \leqslant C\big(\mathcal{E}_s(w,D_1)+\|w\|^2_{L^2(D_1)}\big)\,.$$ The same argument applied to $\nabla_xw^{\rm e}$ and using \eqref{holdestisharmfct2} instead of \eqref{holdestisharmfct} yields $$\|\nabla_x w^{\rm e}\|^2_{L^\infty(B_{1/16}({\bf x}_0))} \leqslant C \|\nabla_x w^{\rm e}\|^2_{L^2(B_{1/8}({\bf x}_0),|z|^a{\rm d}{\bf x})} \leqslant C\mathcal{E}_s(w,D_1) \,,$$ thanks to Lemma \ref{hatH1/2toH1} again. Now the conclusion follows from the fact that $w^{\rm e}=w$ and $\nabla_xw^{\rm e}=\nabla w$ on $\partial^0B^+_{1/16}({\bf x}_0)$. \end{proof} \section{An embedding theorem between generalized $\mathcal{Q}_\alpha$-spaces}\label{appQspaces} In this appendix, our goal is to prove one of the crucial estimates used in the proof of Theorem~\ref{thmepsregholder}, Corollary \ref{coroinjQspaces} below. It turns out that this estimate does not explicitly appear in the existing literature (to the best of our knowledge), but it can be readily derived from recent results in harmonic analysis. The purpose of this appendix is thus to explain how to combine those results to reach our goal. First, we need to recall some definitions and notations. The space $\mathscr{S}_\infty(\mathbb{R}^n)$ can be defined as the topological subspace of the Schwartz class $\mathscr{S}(\mathbb{R}^n)$ made of all functions $\varphi$ such that the semi-norm $$\|\varphi\|_M :=\sup_{|\gamma|\leqslant M}\sup_{\xi\in\mathbb{R}^n}\big|\partial^\gamma\widehat{\varphi}(\xi)\big|(|\xi|^M+|\xi|^{-M})$$ is finite for every $M\in\mathbb{N}$, where $\gamma=(\gamma_1,\ldots,\gamma_n)\in\mathbb{N}^n$, $|\gamma|:=\gamma_1+\ldots+\gamma_n$, and $\partial^\gamma:=\partial_1^{\gamma_1}\ldots\partial_n^{\gamma_n}$. Its topological dual is denoted by $\mathscr{S}^\prime_\infty(\mathbb{R}^n)$, and it is endowed with the weak $*$-topology, see e.g.~\cite{Trieb,YangYuan}.
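To fix ideas, let us point out a simple example (ours, for illustration only) of an element of $\mathscr{S}_\infty(\mathbb{R}^n)$: any Schwartz function whose Fourier transform vanishes to infinite order at the origin, e.g. the function $\varphi$ determined by $$\widehat{\varphi}(\xi):={\rm e}^{-|\xi|^{2}-|\xi|^{-2}}\ \text{ for }\xi\neq0\,,\qquad \widehat{\varphi}(0):=0\,.$$ Indeed, $\widehat\varphi$ and all its derivatives decay faster than any power of $|\xi|$ both as $\xi\to0$ and as $|\xi|\to\infty$, so that $\|\varphi\|_M<+\infty$ for every $M\in\mathbb{N}$.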
\vskip3pt The following $\mathcal{Q}^{\alpha,q}_p$-spaces were introduced in \cite{CuYa,YangYuan}, generalizing the notion of $\mathcal{Q}_\alpha$-space (see \cite[Section 1.2.4]{SYY} and references therein), in the sense that $\mathcal{Q}_\alpha(\mathbb{R}^n)=\mathcal{Q}^{\alpha,2}_{n/\alpha}(\mathbb{R}^n)$. \begin{definition}[\cite{CuYa,YangYuan}] Given $\alpha\in(0,1)$, $p\in(0,\infty]$ and $q\in[1,\infty)$, define $\mathcal{Q}^{\alpha,q}_p(\mathbb{R}^n)$ as the space made of elements $f\in\mathscr{S}^\prime_\infty(\mathbb{R}^n)$ such that $f(x)-f(y)$ is a measurable function on $\mathbb{R}^n\times\mathbb{R}^n$ and $$\|f\|_{\mathcal{Q}^{\alpha,q}_p(\mathbb{R}^n)}:=\sup_Q\, |Q|^{\frac{1}{p}-\frac{1}{q}}\left(\iint_{Q\times Q}\frac{|f(x)-f(y)|^q}{|x-y|^{n+\alpha q}}\,{\rm d} x{\rm d} y\right)^{1/q}<+\infty\,,$$ where $Q$ ranges over all cubes of dyadic edge lengths in $\mathbb{R}^n$. \end{definition} \begin{remark}\label{equivseminormQsp} Endowed with $\|\cdot\|_{\mathcal{Q}^{\alpha,q}_p(\mathbb{R}^n)}$, the space $\mathcal{Q}^{\alpha,q}_p(\mathbb{R}^n)$ is a semi-normed vector space, and $$N_{\alpha,p,q}(f):=\sup_{D_r(x_0)\subseteq \mathbb{R}^n} r^{\frac{n}{p}-\frac{n}{q}}\left(\iint_{D_r(x_0)\times D_r(x_0)}\frac{|f(x)-f(y)|^q}{|x-y|^{n+\alpha q}}\,{\rm d} x{\rm d} y\right)^{1/q}$$ provides an equivalent semi-norm. \end{remark} The following embeddings between $\mathcal{Q}^{\alpha,q}_p$-spaces hold. \begin{theorem}\label{prfthminj} If $0<\alpha_1<\alpha_2<1$, $1\leqslant q_2<q_1<\infty$, and $0<\lambda\leqslant n$ are such that \begin{equation}\label{conditionembed} \alpha_1-\frac{\lambda}{q_1}=\alpha_2-\frac{\lambda}{q_2}\,, \end{equation} then $\mathcal{Q}^{\alpha_2,q_2}_{\frac{nq_2}{\lambda}}(\mathbb{R}^n)\hookrightarrow \mathcal{Q}^{\alpha_1,q_1}_{\frac{nq_1}{\lambda}}(\mathbb{R}^n)$ continuously. 
\end{theorem} As we briefly mentioned at the beginning of this appendix, this theorem actually follows quite directly from a more general embedding result between some homogeneous Triebel-Lizorkin-Morrey-Lorentz spaces \cite{Ho} together with an identification result between various definitions of homogeneous Triebel-Lizorkin-Morrey type spaces \cite{SaYY}, and a characterization of the $\mathcal{Q}^{\alpha,q}_p$-spaces within this scale of spaces~\cite{YangYuan}. We refer to the monograph \cite{SYY} for what concerns the spaces involved here, and we limit ourselves to their basic definition. To this purpose, we consider a reference bump function $\psi\in \mathscr{S}(\mathbb{R}^n)$ such that $$ {\rm spt}\,\widehat{\psi} \subseteq \Big\{\xi\in\mathbb{R}^n : \frac{1}{2}\leqslant|\xi|\leqslant 2\Big\}\quad\text{and}\quad |\widehat\psi(\xi)|\geqslant C>0 \textrm{ for }\frac{3}{5}\leqslant |\xi|\leqslant\frac{5}{3}\,. $$ (In particular, $\psi\in \mathscr{S}_\infty(\mathbb{R}^n)$.) For $j\in\mathbb{Z}$, we denote by $\psi_j$ the function defined by $$\psi_j(x):=2^{jn}\psi(2^jx) \,.$$ \begin{definition} Given $p,q\in(0,\infty)$, $s\in\mathbb{R}$, and $\tau\in[0,\infty)$, the homogeneous Triebel-Lizorkin space $\dot F^{s,\tau}_{p,q}(\mathbb{R}^n)$ is defined to be the set of all $f\in\mathscr{S}^\prime_\infty(\mathbb{R}^n)$ such that $$\|f\|_{\dot F^{s,\tau}_{p,q}(\mathbb{R}^n)}:=\sup_Q\, \frac{1}{|Q|^{\tau}}\left(\int_Q\bigg(\sum_{j=j_Q}^\infty \big(2^{js}|\psi_j*f(x)|\big)^q\bigg)^{p/q} {\rm d} x\right)^{1/p}<+\infty\,, $$ where $Q$ ranges over all cubes of dyadic edge lengths in $\mathbb{R}^n$, and $j_Q:=-\log_2\ell(Q)$ with $\ell(Q)$ the edge length of $Q$. 
\end{definition} \begin{definition} Given $0<p\leqslant u<\infty$, $0<q<\infty$, and $s\in\mathbb{R}$, the homogeneous Triebel-Lizorkin-Morrey space $\dot{\mathcal{E}}^s_{p,q,u}(\mathbb{R}^n)$ is defined to be the set of all $f\in\mathscr{S}^\prime_\infty(\mathbb{R}^n)$ such that $$\|f\|_{\dot{\mathcal{E}}^s_{p,q,u}(\mathbb{R}^n)}:=\sup_Q\, |Q|^{\frac{1}{u}-\frac{1}{p}}\left(\int_Q\bigg(\sum_{j\in\mathbb{Z}} \big(2^{js}|\psi_j*f(x)|\big)^p\bigg)^{q/p} {\rm d} x\right)^{1/q}<+\infty\,,$$ where $Q$ ranges over all cubes of dyadic edge lengths in $\mathbb{R}^n$. \end{definition} \begin{proof}[Proof of Theorem \ref{prfthminj}] In \cite{Ho}, the author introduced a more refined scale of homogeneous Triebel-Lizorkin spaces of Morrey-Lorentz type, denoted by $\dot{F}^{s,u}_{M_{p,q,\lambda}}(\mathbb{R}^n)$. In the case $u=p=q$, those spaces coincide with the homogeneous Triebel-Lizorkin-Morrey spaces above, namely $$\dot{F}^{s,p}_{M_{p,p,\lambda}}(\mathbb{R}^n)=\dot{\mathcal{E}}^s_{p,p,\frac{np}{\lambda}}(\mathbb{R}^n) $$ for every $p\in(0,\infty)$, $\lambda\in(0,n]$, and $s\in \mathbb{R}$. More precisely, their defining semi-norms are equivalent (in one case the supremum is taken over all dyadic cubes, while in the other it is taken over balls). By \cite[Theorem 4.1]{Ho}, under condition \eqref{conditionembed} the space $\dot{F}^{\alpha_2,q_2}_{M_{q_2,q_2,\lambda}}(\mathbb{R}^n)$ embeds continuously into $\dot{F}^{\alpha_1,q_1}_{M_{q_1,q_1,\lambda}}(\mathbb{R}^n)$. In other words, \begin{equation}\label{crucialinj} \dot{\mathcal{E}}^{\alpha_2}_{q_2,q_2,\frac{nq_2}{\lambda}}(\mathbb{R}^n)\hookrightarrow \dot{\mathcal{E}}^{\alpha_1}_{q_1,q_1,\frac{nq_1}{\lambda}}(\mathbb{R}^n) \end{equation} continuously.
On the other hand, \cite[Theorem 1.1]{SaYY} tells us that $$\dot{\mathcal{E}}^{\alpha_1}_{q_1,q_1,\frac{nq_1}{\lambda}}(\mathbb{R}^n)= \dot F^{\alpha_1,\frac{n-\lambda}{nq_1}}_{q_1,q_1}(\mathbb{R}^n)\quad\text{and}\quad \dot{\mathcal{E}}^{\alpha_2}_{q_2,q_2,\frac{nq_2}{\lambda}}(\mathbb{R}^n)= \dot F^{\alpha_2,\frac{n-\lambda}{nq_2}}_{q_2,q_2}(\mathbb{R}^n)\,,$$ with equivalent semi-norms. Finally, by \cite[Theorem 3.1]{YangYuan} we have $$ \dot F^{\alpha_1,\frac{n-\lambda}{nq_1}}_{q_1,q_1}(\mathbb{R}^n) = \mathcal{Q}^{\alpha_1,q_1}_{\frac{nq_1}{\lambda}}(\mathbb{R}^n) \quad\text{and}\quad \dot F^{\alpha_2,\frac{n-\lambda}{nq_2}}_{q_2,q_2}(\mathbb{R}^n) = \mathcal{Q}^{\alpha_2,q_2}_{\frac{nq_2}{\lambda}}(\mathbb{R}^n) \,,$$ with equivalent semi-norms. Hence, the conclusion follows from \eqref{crucialinj}. \end{proof} We are now ready to state the important corollary of Theorem \ref{prfthminj} used in the proof of Theorem~\ref{thmepsregholder}. Given $s\in(0,1)$, $p\in[1,\infty)$, and an open set $\Omega\subseteq\mathbb{R}^n$, we recall that the Sobolev-Slobodeckij $W^{s,p}(\Omega)$-semi-norm of a measurable function $f$ is given by \begin{equation}\label{defWspseminorm} [f]_{W^{s,p}(\Omega)}:=\left(\iint_{\Omega\times\Omega}\frac{|f(x)-f(y)|^p}{|x-y|^{n+sp}}\,{\rm d} x{\rm d} y\right)^{1/p}\,. \end{equation} \begin{corollary}\label{coroinjQspaces} Let $s\in(0,1)$ and $f\in L^1(\mathbb{R}^n)$ with compact support. If \begin{equation}\label{condHsQspace} \sup_{D_r(x)\subseteq\mathbb{R}^n} r^{2s-n}[f]^2_{H^s(D_r(x))}<+\infty\,, \end{equation} then, $$\sup_{D_r(x)\subseteq\mathbb{R}^n} r^{\frac{2s-n}{3}}[f]^2_{W^{s/3,6}(D_r(x))}\leqslant C \sup_{D_r(x)\subseteq\mathbb{R}^n} r^{2s-n}[f]^2_{H^s(D_r(x))}\,,$$ for a constant $C=C(n,s)$. \end{corollary} \begin{proof} Since $f\in L^1(\mathbb{R}^n)$ has compact support, it clearly belongs to $\mathscr{S}^\prime_\infty(\mathbb{R}^n)$. 
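For the reader's convenience, let us record the parameters with which Theorem \ref{prfthminj} is applied below: $\alpha_2=s$, $q_2=2$, $\alpha_1=s/3$, $q_1=6$, and $\lambda=2s$. Condition \eqref{conditionembed} indeed holds with these choices, since $$\alpha_1-\frac{\lambda}{q_1}=\frac{s}{3}-\frac{2s}{6}=0=s-\frac{2s}{2}=\alpha_2-\frac{\lambda}{q_2}\,,$$ and the corresponding lower indices are $\frac{nq_2}{\lambda}=\frac{n}{s}$ and $\frac{nq_1}{\lambda}=\frac{3n}{s}$.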
Then, condition \eqref{condHsQspace} implies that $f\in\mathcal{Q}^{s,2}_{n/s}(\mathbb{R}^n)$. On the other hand, $\mathcal{Q}^{s,2}_{n/s}(\mathbb{R}^n)\hookrightarrow \mathcal{Q}^{s/3,6}_{3n/s}(\mathbb{R}^n)$ continuously by Theorem \ref{prfthminj}. The conclusion then follows from the definition of $\mathcal{Q}^{s/3,6}_{3n/s}(\mathbb{R}^n)$ together with Remark \ref{equivseminormQsp}. \end{proof} \vskip10pt \noindent{\it Acknowledgements.} V.M. is supported by the Agence Nationale de la Recherche through the project ANR-14-CE25-0009-01 (MAToS). A.S. is supported by the Simons Foundation through the grant no. 579261. \end{document}
\begin{document} \DeclareGraphicsExtensions{.mps}
\newcommand{\Mathematica}{{\it Mathematica }} \newcommand{\MathReader}{{\it MathReader }} \newcommand{\abbrev}[1]{#1. }
\theoremstyle{theorem} \newtheorem{prop}{Proposition} \newtheorem{lemma}[prop]{Lemma} \newtheorem{theo}[prop]{Theorem} \newtheorem{coroll}[prop]{Corollary} \newcommand{\refprop}[1]{Proposition~\ref{#1}} \newcommand{\refppropag}[1]{\abbrev{Propos}~\ref{#1} \abbrev{p}~\pageref{#1}} \newcommand{\reftpropag}[1]{Theorem~\ref{#1} \abbrev{p}~\pageref{#1}} \newcommand{\refcorol}[1]{Corollary~\ref{#1} \abbrev{p}~\pageref{#1}}
\theoremstyle{definition} \newtheorem{adefn}[prop]{Definition} \newenvironment{defn}{\begin{adefn}}{ $\blacksquare$\end{adefn}}
\theoremstyle{remark} \newtheorem{arem}[prop]{Remark} \newtheorem{aexample}[prop]{Example} \newcommand{\defbf}[1]{{\bf #1}} \newenvironment{solution}{\begin{proof}[Solution]}{\end{proof}} \newenvironment{rem}{\begin{arem}}{ $\triangledown$\end{arem}} \newenvironment{example}{\begin{aexample}}{ $\vartriangle$\end{aexample}}
\newcommand{\algitem}[1]{\item #1} \floatstyle{boxed} \newfloat{Algorithm}{H}{lop}
% The auxiliary macro names \algCaption and \algNote are reconstructed; the original names were lost to macro expansion.
\newenvironment{alg}[5]{ \def\algCaption{{\bf #2}$\quad[$#1$]$} \if 0#5 \def\algNote{\relax} \else \def\algNote{\noindent #5} \fi \begin{Algorithm} \noindent{\bf input: }#3 \noindent{\bf output: }#4 \begin{itemize} } { \end{itemize} \algNote \caption{\algCaption} \end{Algorithm} }
\newenvironment{romanenu} {\begin{enumerate}\renewcommand{\labelenumi}{(\roman{enumi})}} {\end{enumerate}} \newcommand{\rfor}[1]{\quad\text{for}\quad#1}
\newcommand{\origi}[1]{{#1}_0} \newcommand{\define}{:=} \newcommand{\definer}{=:} \newcommand{\compose}{\circ} \newcommand{\frahalf}{{\frac{1}{2}}} \newcommand{\iva}[1]{0 \le i \le #1} \newcommand{\jva}[1]{0 \le j \le #1} \newcommand{\ssum}[3]{{\sum_{#1 = #2}^{#3}}} \newcommand{\ssumi}[1]{{\ssum{i}{1}{#1}}} \newcommand{\RR}{{\mathbb R}} \newcommand{\NN}{{\mathbb N}} \newcommand{\ZZ}{{\mathbb Z}} \newcommand{\CC}{{\mathbb C}}
\newcommand{\dsys}[3]{\left(#1,#2,U,\origi{#3}\right)} \newcommand{\ssys}[4]{\left(#1,#2,#3,U,\origi{#4}\right)} \newcommand{\osys}[4]{\left(#1,#2,#3,U,\origi{#4}\right)} \newcommand{\dsysfgx}{\dsys{f}{g}{x}} \newcommand{\dosysfghx}{\osys{f}{g}{h}{x}} \newcommand{\ssysfgx}{\ssys{f}{g}{\sigma}{x}} \newcommand{\bigdsys}[3]{\left(#1\, , \, #2\, , \, U\, , \, \origi{#3}\right)} \newcommand{\bigssys}[4]{\left(#1\, , \, #2\, , \, #3\, , \, U\, , \, \origi{#4}\right)}
\newcommand{\clssys}{{\mathbb X}} \newcommand{\clssysdet}{\clssys_D}
\newcommand{\smooth}{C^{\infty}} \newcommand{\analytic}{C^{\omega(x)}}
\newcommand{\parby}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\parbyx}[1]{\parby{#1}{x}} \newcommand{\parbyz}[1]{\parby{#1}{z}} \newcommand{\parbysec}[2]{\frac{\partial^2 #1}{\partial #2^2}} \newcommand{\parbyxx}[3]{\frac{\partial^2 #1}{\partial #2 \partial #3}} \newcommand{\parbyxxx}[4]{\frac{\partial^3 #1}{\partial #2 \partial #3 \partial #4}} \newcommand{\parbyxxxx}[5]{\frac{\partial^4 #1}{\partial #2 \partial #3 \partial #4 \partial #5 }}
\newcommand{\lie}[2]{{\mathcal L}_{#1} {#2}} \newcommand{\multilie}[3]{{\mathcal L}_{#1}^{#2} {#3}} \newcommand{\biglie}[2]{\langle d{#2},{#1}\rangle} \newcommand{\ad}[3]{\operatorname{ad}_{{#1}}^{{#2}} {{#3}}} \newcommand{\adfg}[1]{\ad{f}{{#1}}{g}} \newcommand{\lighk}[1]{\lie{g}{\multilie{f}{#1}{h}}} \newcommand{\distrofg}[1]{\left\{\ad{f}{i}{g},\,\iva{{#1}}\right\}} \newcommand{\distroabafg}[1]{\left\{\ad{{\vec f}}{i}{g},\,\iva{{#1}}\right\}}
\newcommand{\travf}[1]{\parbyx{T} #1 \compose T^{-1} (z)} \newcommand{\trav}[1]{\parbyx{T} #1} \newcommand{\flow}[1]{Fl^{#1}}
\newcommand{\ito}[2]{P_#1 #2} \newcommand{\corr}[2]{\operatorname{corr}_#1(#2)} \newcommand{\Corr}[2]{\operatorname{Corr}_#1 #2} \newcommand{\corrxx}[2]{{\frahalf \parby{#1}{#2} #1}} \newcommand{\corrx}{\corrxx{\sigma}{x}} \newcommand{\itox}{{\frahalf \sigma^2 \parbysec{T}{x} \compose T^{-1} (z)}} \newcommand{\itoxs}{{\frahalf \sigma^2 \parbysec{T}{x} }} \newcommand{\tantra}[1]{#1_\ast} \newcommand{\cotantra}[1]{#1^\ast} \newcommand{\sct}[1]{{\mathcal T}_{#1}} \newcommand{\sctt}{\sct{T}} \newcommand{\scti}[1]{\sct{#1}^{I}} \newcommand{\scts}[1]{\sct{#1}^{S}} \newcommand{\feedback}[2]{{\mathcal F}_{#1,#2}} \newcommand{\feedbackab}{\feedback{{\alpha}}{{\beta}}} \newcommand{\combined}[3]{{\mathcal J}_{#1,#2,#3}} \newcommand{\combinedtab}{\combined{T}{{\alpha}}{{\beta}}} \newcommand{\expected}[1]{{\cal E}\left\{#1\right\}} \newcommand{\var}[1]{\operatorname{var}\left\{#1\right\}} \newcommand{\cov}[1]{\operatorname{cov}\left\{#1\right\}}
\newcommand{\tpEmail}{\hyreff{mailto:[email protected]}{[email protected]}} \newcommand{\tpeTitle}{Exact Feedback Linearization of Stochastic Control Systems}
\newcommand{\tpeAbstract}{This paper studies exact linearization methods for stochastic SISO affine controlled dynamical systems. The systems are defined as vectorfield triplets in Euclidean space. The goal is to find, for a given nonlinear stochastic system, a combination of invertible transformations which transforms the system into a controllable linear form. Of course, for most nonlinear systems such a transformation does not exist.
We focus on linearization by state coordinate transformation combined with feedback. The difference between It\^o and Stratonovich systems is emphasized. Moreover, we define three types of linearity of stochastic systems --- $g$-linearity, $\sigma$-linearity, and $g\sigma$-linearity. Six variants of the stochastic exact linearization problem are studied. The most useful problem --- It\^o~$g\sigma$-linearization --- is solved using the correcting term, which proved to be a very useful tool for It\^o systems. The results are illustrated on a numerical example solved with the help of symbolic algebra. } \newcommand{\tpeKeywords}{ exact linearization, feedback linearization, nonlinear dynamical system, It\^o integral, Stratonovich integral, correcting term {\bf MCS classification:} 93B18, 93E03} \title{\tpeTitle} \author{Ladislav Sl\'ade\v cek} \newcommand{\tp}[6]{ \begin{titlepage} \begin{center} {~\\} {\Large #1}\\ {\large Ladislav Sl\'ade\v cek}\\ {\tpEmail}\\ {#2}\\ {\large 2002/10/09}\\ \end{center} \begin{quote}\small {\sc\bf #3:} #4\\ {\sc\bf #5:} #6\\ \end{quote} \end{titlepage} } \newcommand{\tpe}{ \tp {\tpeTitle} {\v R\'\i{}kovice 18, CZ 751 18, Czech Republic} {Abstract} {\tpeAbstract} {Keywords} {\tpeKeywords} } \tpe \tableofcontents \section{Introduction} \pagenumbering{arabic}\count1=0 \label{sec:intro} The theory of exact linearization of deterministic dynamical systems has been thoroughly studied since the seventies. This paper attempts to apply some of the results to the stochastic setting. We emphasize exact linearization by state coordinate transformation combined with feedback (further abbreviated as SFB linearization). Our main goal is to identify the main difficulties of this approach and to assess the applicability of methods known from deterministic systems.
The task of SFB linearization is the following: given a dynamical system~$\Theta$, we look for a combination of coordinate transformation~$\sctt$ and feedback~$\feedbackab$ which will make the resulting system~$\feedbackab \compose \sctt (\Theta)$ linear and controllable. One can also define the feedback-less linearization by coordinate transformation only (here abbreviated as SCT) or several variants of the input-output linearization. These variants are not considered here. \begin{rem} The notation for composition of mappings sometimes differs; the right-to-left convention is used here:~$f \compose g(x) \define f(g(x))$. \end{rem} The subject of exact linearization of stochastic controlled dynamical systems lies at the intersection of three branches of science: differential geometry, control theory, and the theory of stochastic processes. Each of them is very broad and it is virtually impossible to cover all details of their combination. Hence it is necessary to choose a minimalistic simplified model for our problem and to refrain from most technical details. {\em We decided to represent the dynamical systems under investigation by triplets of smooth vectorfields and to concentrate on transformation rules for these triplets\/}. The detailed interpretation of the vectorfield systems (\abbrev{i.e} solvability of the underlying differential equations, properties of flows and trajectories) will be considered only on an informal, motivational level. For simplicity, we shall confine all definitions of geometrical objects to Euclidean space; we will work in a fixed coordinate system using explicit local coordinates, which may be considered to be local coordinates of some manifold. This is mainly because we are unable to capture all consequences of modern, coordinate-free differential geometry for stochastic calculus (see \abbrev{eg}~\citet{kendall86},~\citet{malliavin},~\citet{emery89}). 
We believe that this approach is quite satisfactory for the majority of practical applications. \subsection{Dynamical systems} \begin{defn} In this paper, a stochastic dynamical system~$\Theta \define \ssysfgx$ is defined to be a triplet of smooth and bounded vectorfields~$f$,~$g$, and~$\sigma$ defined on an open neighborhood~$U$ of a point~$\origi{x} \in \RR^n$. We usually call~$U \subset \RR^n$ the \defbf{state space},~$f$ the \defbf{drift vectorfield},~$g$ the \defbf{control vectorfield}, and~$\sigma$ the \defbf{dispersion vectorfield}. \end{defn} From now on, let us assume that all functions, vectorfields, forms, and distributions are smooth and bounded on~$U$. In this paper, we will study almost exclusively SISO systems, but in the case of stochastic MIMO systems with~$m$ control inputs and~$k$-dimensional noise the symbols~$g$ and~$\sigma$ stand for an~$n\times m$ (respectively~$n \times k$) matrix of smooth vectorfields whose rank equals~$m$ (respectively~$k$). The class of all deterministic~$n$-dimensional dynamical systems with~$m$ inputs will be called~$\clssysdet(n,m)$ and the class of stochastic systems with~$k$-dimensional noise will be denoted by~$\clssys(n,m,k)$. Similarly, an autonomous deterministic dynamical system corresponds to a single vectorfield and a controlled deterministic dynamical system to a vectorfield pair. It is obvious that this approach is limited to time-invariant, affine systems. \begin{rem} The acronyms SISO and MIMO are used in the usual meaning even for systems without outputs, where the wording ``scalar-input'' and ``vector-input'' will be appropriate. Stochastic systems with~$m =1$ and~$k=1$ will be considered SISO. 
\end{rem} The definition may be interpreted as follows: there is a stochastic process~$x_t$ defined on~$\RR^n$ which is a strong solution of the stochastic differential equation $dx_t = f(x_t)\,dt + g(x_t) u(t)\,dt + \sigma(x_t)\,dw_t$, with initial condition~$\origi{x}$, where~$u(t)$ is a smooth function with bounded derivatives and~$w_t$ is a one-dimensional Brownian motion. The differential $dw_t$ is just a notational shortcut for the stochastic integral. Details of the theory of stochastic processes are beyond the scope of this article. The reader is referred to \citet{wong84}, \citet{oksendal}, \citet{sagemelsa}, \citet{malliavin}, \citet{kendall86}, \citet{karatzas}; the text of \citet{kohlman} is freely available on the Internet. The theory of stochastic processes offers several alternative definitions of the stochastic integral, among them the It\^o integral and the Stratonovich integral; each of them is used to model different physical problems. Consequently, there are two classes of differential equations and two alternative definitions of a stochastic dynamical system --- It\^o dynamical systems defined by It\^o integrals and Stratonovich systems defined by Stratonovich integrals. Serious differences between these integrals exist, but from our point of view there is a single important one: {\em the rules for coordinate transformations of dynamical systems defined by the It\^o stochastic integral are quite different from the transformation rules which are valid for Stratonovich systems\/}. The definition of the It\^o dynamical system used by us is formally equivalent to the definition of the Stratonovich system; the only difference will be in the corresponding coordinate transformation. If necessary, It\^o and Stratonovich dynamical systems will be distinguished by a subscript: $\Theta_I \in \clssysito(n,m,k)$ and~$\Theta_S \in \clssysstrat(n,m,k)$. \begin{rem} In this paper we will use the adjectives {\em It\^o \/} and {\em Stratonovich} rather freely. 
For example we will speak of ``Stratonovich linearization'' instead of ``exact linearization of a stochastic dynamical system defined by the Stratonovich integral''. \end{rem} \subsection{Transformations} Furthermore, we will study two transformations of dynamical systems: the coordinate transformation~$\sctt$ and the feedback~$\feedbackab$. The definition of these transformations should be in accord with their common interpretation. This can be illustrated on the definition of the \defbf{coordinate transformation of a deterministic dynamical system} $\sctt: \clssysdet(n,m) \to \clssysdet(n,m)$ which is induced by a diffeomorphism~$T \colon U \to \RR^n$ between two coordinate systems on an open set~$U \subset \RR^n$. The mapping~$\sctt$ is defined by: \begin{align} \label{eq:83} \sctt \dsys{f}{g}{x} \define \left( \tantra{T} f , \tantra{T} g ,T(U),T(\origi{x}) \right) .\end{align} Recall that the symbol~$\tantra{T}$ stands for the contravariant transformation~$(\tantra{T} f)_i = \ssum{j}{1}{n} f_j \parby{T_i}{x_j}$. Moreover, we will require that the coordinate transformation~$T$ preserves the equilibrium state of the system \abbrev{i.e}~$T(\origi{x}) = 0$. The definition captures the contravariant transformation rules for differential equations known from basic calculus. Note that the words ``coordinate transformation'' are used in two different meanings: first as the diffeomorphism~$T\colon U\to \RR^n$ between coordinates; second as the mapping between systems~$\sctt: \clssysdet(n,m) \to \clssysdet(n,m)$. Coordinate transformations of stochastic systems must distinguish between It\^o and Stratonovich systems. One of the major complications of linearization problems for It\^o systems is the second-order term in their transformation rules: \begin{defn} \label{def:ctito} Let~$U\subset \RR^n$ be an open set and let~$T\colon U\to \RR^n$ be a diffeomorphism from~$U$ to~$\RR^n$ with bounded first derivative on $U$ such that~$T(\origi{x})=0$. 
The mapping $\sctt\colon \clssysito(n,m,k) \to \clssysito(n,m,k)$ will be called a \defbf{coordinate transformation of an It\^o dynamical system} induced by the diffeomorphism~$T$ if the systems $\Theta_1 \define \ssys{f}{g}{\sigma}{x}$ and $\Theta_2 \define \left(\tilde f,\tilde g,\tilde \sigma,T(U),T(\origi{x})\right)$; $\Theta_2 = \sctt\left(\Theta_1\right)$ are related by: \begin{align} \label{eq:86} \tilde f &= \tantra{T}f + \ito{\sigma}{T} \\ \label{eq:87} \tilde g_i &= \tantra{T}g_i \qquad&&\text{for}\quad 1 \le i \le m\\ \label{eq:89} \tilde \sigma_i &= \tantra{T}\sigma_i \qquad&&\text{for}\quad 1 \le i \le k .\end{align} \end{defn} The symbol~$\ito{\sigma}{T}$ stands for the \defbf{It\^o term}, which is a second-order linear operator defined by the following relation for the~$m$-th component of~$\ito{\sigma}{T}$, $1\le m\le n$: \begin{equation} \label{eq:16} \ito{\sigma}{T_m} \define \frahalf \ssum{{i,j}}{1}{n} \frac{\partial^2 T_m}{\partial x_i \partial x_j} \ssum{l}{1}{k} \sigma_{il} \sigma_{jl} .\end{equation} The transformation rules for Stratonovich systems~$\sctt\colon \clssysstrat(n,m,k) \to \clssysstrat(n,m,k)$, $(f,g,\sigma,U,\origi{x}) \mapsto (\tantra{T}f, \tantra{T}g, \tantra{T}\sigma, T(U), T(\origi{x}))$ are equivalent to the rules valid for deterministic systems; only the rule~\eqref{eq:89} for the dispersion vectorfield must be added. The difference between the coordinate transformation of It\^o and Stratonovich systems should be emphasized: in the Stratonovich case all the vectorfields transform contravariantly; on the other hand, in the It\^o case, the It\^o term~$\ito{\sigma}{T}$ is added to the drift vectorfield of the resulting system. \psfig{fig:introsfee}{Regular State Feedback}{introsfee} Another important transformation of dynamical systems is the regular feedback transformation. 
A feedback transformation is determined by two smooth nonlinear functions $\alpha \colon \RR^n \to \RR^m$ and~$\beta \colon \RR^n \to \RR^m \times \RR^m$ defined on~$U$ with~$\beta $ nonsingular for every~$x \in U$ (see Figure~\ref{fig:introsfee}). Usually, $\alpha $ is written as a column~$m \times 1$ matrix; $\beta $ as a square $m \times m$ matrix. \begin{defn} \label{def:feedback} Let~$\Theta = \ssysfgx \in \clmisys$ be a stochastic dynamical system. A \defbf{regular state feedback} is the transformation $\feedbackab \colon \clssys(n,m,k) \to \clssys(n,m,k)~$, $ (f,g,\sigma,U,\origi{x}) \mapsto \ssys{f + g\alpha }{g\beta }{\sigma}{x} $. \end{defn} A new input variable~$v$ is introduced by the relation~$u=\alpha +\beta v$. Given the feedback~$\feedbackab$ with nonsingular~$\beta $, we can always construct an inverse relation~$\feedback{a}{b} \define \feedbackab^{-1}$ such that~$\feedbackab \compose \feedback{a}{b} = \feedback{a}{b} \compose \feedbackab$ is the identity. The coefficients are related as follows:~$\beta = b^{-1}$, $\alpha = - b^{-1}a$, and~$a = -\beta ^{-1} \alpha $. This definition of feedback transformation can also be used for deterministic systems provided that the dispersion vectorfield~$\sigma$ is assumed to be zero. The symbol~$\combinedtab$ is used to indicate the combination of coordinate transformation with feedback: $\combinedtab \define \feedbackab\compose\sctt$. 
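The relations between~$(\alpha ,\beta )$ and~$(a,b)$ are easy to verify symbolically. The following sketch (hypothetical scalar data with~$m=1$; the helper \texttt{feedback} is ours, not taken from any library) applies the feedback and then its inverse and checks that the original pair~$(f,g)$ is recovered:

```python
# Symbolic check of the inverse-feedback relations for a hypothetical
# scalar example (m = 1): applying F_{alpha,beta} followed by F_{a,b}
# with a = -alpha/beta, b = 1/beta must return the original (f, g).
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)        # hypothetical drift vectorfield
g = 1 + x**2         # hypothetical control vectorfield
alpha = x**3         # hypothetical feedback offset, alpha(0) = 0
beta = sp.exp(x)     # hypothetical feedback gain, nonsingular on R

def feedback(f, g, a, b):
    """Regular state feedback: (f, g) -> (f + g*a, g*b)."""
    return sp.simplify(f + g*a), sp.simplify(g*b)

# forward feedback, then its inverse
f1, g1 = feedback(f, g, alpha, beta)
a, b = -alpha/beta, 1/beta
f2, g2 = feedback(f1, g1, a, b)

assert sp.simplify(f2 - f) == 0 and sp.simplify(g2 - g) == 0
```

The same bookkeeping carries over verbatim to the matrix case, with~$b=\beta ^{-1}$ understood as a matrix inverse.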
\begin{rem} \label{rem:orderinv} Observe that the order of feedback and coordinate transformation in the composed transformation~$\combinedtab \define \feedbackab \compose \sctt$ can be interchanged: \begin{multline} \sctt \compose \feedbackab \bigdsys{f}{g}{x} = \sctt \bigdsys{f+g\alpha }{g\beta }{x}\\ = \bigdsys{\tantra{T} f + \tantra{T} g\alpha }{\tantra{T} g\beta }{z} = \bigdsys{\tantra{T} f + (\tantra{T} g) \alpha }{ (\tantra{T} g) \beta }{z} \\ = \feedback{\alpha '}{\beta '} \compose \sctt \bigdsys{f}{g}{x} .\end{multline} The functions~$\alpha'(z)$, $\beta'(z)$ are equal to~$\alpha(x)$ and~$\beta(x)$ written in the $z$~coordinates: $\alpha'(z)=\alpha \compose T^{-1}(z)$, $\beta'(z)=\beta \compose T^{-1}(z)$. \end{rem} \subsection{Linearity} \label{sub:linearity} The definition of linearity is straightforward in the deterministic case. In contrast, the stochastic case is more complex, because there are two ``input'' vectorfields and therefore several degrees of linearity can be specified. \begin{defn} \label{def:linear} The deterministic dynamical system~$\Theta_D = (f,g,U,0) \in \clssysdet(n,m)$ is \defbf{linear} if the vectorfield~$f$ is a linear mapping without constant term and the vectorfields~$g_i$ are constant; that is, they can be written as $f(x)= Ax$, $g(x)= B$ with~$A$ a square~$n \times n$ matrix and~$B$ an~$n \times m$ matrix. The matrices must be constant on the whole of~$U$. \end{defn} \begin{defn} \label{def:linearsto} The stochastic dynamical system~$\Theta=(f,g,\sigma,U,0)$ is: \begin{itemize} \item \defbf{$g$-linear} if the mapping~$f(x) = Ax$ is linear without constant term and~$g(x) = B$ is constant on~$U$. \item \defbf{$\sigma$-linear} if the mapping~$f(x) = Ax$ is linear without constant term and~$\sigma(x) = S$ is constant on~$U$. \item \defbf{$g\sigma$-linear} if it is both~$g$-linear and $\sigma$-linear. \end{itemize} The matrices~$A$ and~$B$ have the same dimensions as in Definition~\ref{def:linear}; $S$ is an~$n \times k$ matrix. 
\end{defn} \begin{rem} The vectorfield~$g$ is \defbf{constant} on~$U$ if the value of~$g(x)$ is the same for every~$x \in U$. The vectorfield~$f$ on~$U$ is \defbf{linear without constant term} if~$f(\origi{x})=0$ and the superposition principle~$f(x_1+x_2)=f(x_1)+f(x_2)$ holds for every~$x_1$, $x_2 \in U$. \end{rem} We study systems at equilibrium \abbrev{i.e} we require that~$f(\origi{x})=0$ and that all transformations preserve the equilibrium:~$T(\origi{x}) =0$,~$\alpha (\origi{x})=0$, and~$\beta(\origi{x})$ is nonsingular. The It\^o systems require an additional condition~$f(\origi{x})+\corr{\sigma}{\origi{x}}=0$. The non-equilibrium case can be easily handled by extending the linear model with a constant term. Moreover we require that the resulting linear systems are controllable. A controlled deterministic dynamical linear system~$\Theta_D = (Ax,B,\RR^n,0) \in \clssysdet(n,m)$ is \defbf{controllable} if its first $n$ repeated brackets form an $n$-dimensional space \begin{equation} \label{eq:68} \dim \left\{A^kB, 0\le k\le n-1 \right\} = n .\end{equation} Other definitions of controllability of linear systems exist. For example Theorem 3.1 of~\citet{zhou} gives six definitions with proofs of equivalence. The controllability property deserves some attention in the stochastic case. The linear stochastic dynamical system is characterized by two input vectorfields $g(x)=B$ and $\sigma(x)=S$. \begin{enumerate} \item The definition of the controllability for the control vectorfield~$g$ is identical to the deterministic case; \abbrev{i.e}~\eqref{eq:68} must be satisfied. This property will be called \defbf{$g$-controllability}. \item We will also define \defbf{$\sigma$-controllability} as the requirement that the repeated brackets $S,AS,A^2S,\dots,A^{n-1}S$ form an $n$-dimensional space. 
\item Finally, the linear system is \defbf{$g\sigma$-controllable} if \begin{equation} \label{eq:3} \dim \left( \left\{A^kB, 0\le k\le n-1 \right\} \bigcup \left\{A^kS, 0\le k\le n-1 \right\} \right)= n .\end{equation} \end{enumerate} In this paper, we do not deal with the reachability, controllability, accessibility, observability, and similar properties of nonlinear systems. \begin{defn} \label{def:itogssisosfb} Let~$\Theta = \ssysfgx \in \clmisys$ be a dynamical system such that~$f(\origi{x}) =0$. We call the combination of a coordinate transformation~$\sctt$ and a regular feedback~$\feedbackab$ such that $T(\origi{x})=0$, $\alpha (\origi{x})=0$, and $\beta (\origi{x})$ is nonsingular the \defbf{linearizing transformation} of $\Theta$ if the transformation~$\feedbackab \compose \sctt $ converts~$\Theta$ into a~{\em controllable\/} linear system. For stochastic systems we distinguish: \begin{romanenu} \item \defbf{$g$-linearizing transformation} which transforms~$\Theta$ into a~$g$-linear and~$g$-controllable system \item \defbf{$\sigma$-linearizing transformation} which transforms~$\Theta$ into a~$\sigma$-linear and~$\sigma$-controllable system \item \defbf{$g\sigma$-linearizing transformation} which transforms~$\Theta$ into a~$g\sigma$-linear and~$g$-controllable system Note that for $g\sigma$-linearization we require~$g$-controllability. This is a slightly stricter requirement than~$g\sigma$-controllability, but it should be naturally fulfilled by the majority of practical control systems. This requirement rules out many ``uncomfortable'' linear forms. Consider for example the system with a prefilter in Figure~\ref{fig:prefilter}, which is~$g\sigma$-controllable but~$g$-uncontrollable. \end{romanenu}\psfig{fig:prefilter}{Dynamical System with a Prefilter.}{prefilter} The system~$\Theta$ is \defbf{linearizable} if there exists a linearizing transformation of~$\Theta$. 
\end{defn} \subsection{Computational Issues} In most practical circumstances, computational issues are the limiting factor of any application of differential geometric methods in control. The equations of exact linearization algorithms must be dealt with in symbolic form. Even the simplest exact linearization problems are extremely complex from the computational point of view. Therefore, computer algebra tools are often employed. The results presented in this paper were tested by the author on a few simulations of control systems in the symbolic system \Mathematica. Of course, computer algebra has serious limitations and drawbacks. Viability of the symbolic computational approach to problems of nonlinear control is studied by \citet{jager95}. Some very useful theoretical notes about symbolic computation can be found in~\citet{winkler}. Unfortunately, the limited scope of this article does not allow deeper discussion of these subjects. \subsection{Applications} We propose, very briefly, two applications of the theory presented here: \begin{romanenu} \item \defbf{Control} --- a dynamical system~$\Theta$ obtained by exact linearization will be controlled using the linear feedback law: \begin{equation} \label{eq:141} v = Kz + \kappa \nu ,\end{equation} where~$K$ is a row matrix of feedback gains,~$\kappa$ is an input gain, $z$~is the state vector, $v$~is the original control input, and~$\nu$ is a new control input. Two approaches can be studied --- classical linear control methods and the more sophisticated stochastic optimal control approach studied for example by \citet{oksendal}. The~$g\sigma$-linear systems are natural candidates for such an approach because the other linear forms leave a certain part of the resulting system nonlinear. \item \defbf{Filtering} --- the filtering problem is probably the most useful application of the theory of stochastic processes. 
We want to give the best estimate of the state of a dynamical system defined by the stochastic differential equation: \begin{align} \label{eq:118} dx_t = f(x_t)\,dt + \sigma_f(x_t)\,dw^f_t; \end{align} based on observations of the form: \begin{align} \label{eq:119}y_t = h(x_t) + \sigma_h(x_t)\,dw^h_t .\end{align} $x_t$ is an $n$-dimensional stochastic process, $f$, $\sigma_f$, $\sigma_h$ are smooth vectorfields and $h$ is a smooth function. It would be interesting to use exact linearization of the nonlinear system to design an exact linear filter. Unfortunately, this idea has no direct association with the linearization results presented below, because it requires {\em output\/} exact linearization or linearization of an autonomous system. Therefore, it would be helpful to extend our results to these cases in the future. \end{romanenu} \subsection{Previous Work} \label{ssub:laal} The problem of SFB~$g$-linearization of a SISO dynamical system defined in the It\^o formalism has been studied by~\citet{lahdhiri}. The authors derive equations corresponding to~\eqref{eq:50},~\eqref{eq:51} (eq 14, 15, 16 in~\citez{lahdhiri}). These equations are combined and then reduced to a set of PDEs in a single unknown function~$T_1$. Because there is no commuting relation similar to~\eqref{eq:103}, the equations contain partial derivatives of~$T_1$ up to the~$2n$-th order (eq 23 in~\citez{lahdhiri}). Next, the authors propose a lemma (Lemma 1) that identifies the linearity conditions with non-singularity and involutiveness of~$\distrofg{n-2}$. Unfortunately, we disagree with this result. It can be easily verified that for~$\sigma=0$ this statement does not correspond to the deterministic conditions (Theorem~\ref{prop:d1sfb2}), because the deterministic case requires non-singularity up to the~$(n-1)$-th bracket, not only up to the~$(n-2)$-th one. Second, although the method of finding~$T_1$ was given (solving a PDE), we do not think that the existence of~$T_1$ was proved. 
After this paper was finished, we discovered recent works of~\citet{pan02} and~\citez{pan01}. In the article~\citez{pan01} Pan defines and solves the problem of {\em feedback complete linearization of stochastic nonlinear systems}. In our terminology, this problem is equivalent to SFB MIMO input--output It\^o $g\sigma$-linearization, which we did not study. In~\citez{pan02} Pan states and proves the so-called \index{invariance under transformation rule}{\em invariance under transformation rule}, which is exactly equivalent to our Theorem~\ref{prop:corr}, probably the most important result of our paper. Although the problems solved by Pan were slightly different, he uses the same equivalence --- Theorem~\ref{prop:corr}. This supports our conclusions about the applicability of the \index{correcting term}correcting term. In~\citez{pan02} Pan considers three other \index{canonical form}canonical forms of stochastic nonlinear systems, namely the \index{noise-prone strict feedback form}noise-prone strict feedback form, the \index{zero dynamics canonical form}zero dynamics canonical form, and the \index{observer canonical form}observer canonical form, also not studied by us. \section{Deterministic Case} \label{sec:detcase} In this section we recapitulate the results of the SFB and SCT exact linearization theory for SISO systems. For a detailed treatment and proofs we refer to the existing literature, above all the classical monographs of~\citet{isidori85} and~\citet{nijmeijer94}. For a very readable introduction to the field we refer to the seventh chapter of~\citet{vidyasagar93}. The books also contain extensive bibliographies. The monograph of~\citet{isidori85} builds mainly on the concept of relative degree. In contrast, we will emphasize the approach of~\citet{vidyasagar93} because the method is more suitable for the stochastic case. 
\subsection{Useful Relations} The solution of the SFB linearization problem as presented here uses the Leibniz rule \begin{align} \label{eq:63} \lie{\lbrack f,g\rbrack }{\alpha } = \lie{f}{(\lie{g}{\alpha })} - \lie{g}{ (\lie{f}{\alpha })} \end{align} with~$f,g$ smooth vectorfields on~$U$ and $\alpha\colon U\to \RR $ a smooth function. The recursive form of the Leibniz rule allows us to simplify the chains of differential equations for the transformation~$T$. This can be expressed in the form of the following statement: For all $x \in U$, $k \ge 0$ these two sets of conditions are equivalent: \begin{align} \label{eq:2} \text{(i)}&\qquad&\lie{g}{\alpha } = \lie{g}{\lie{f}{\alpha }} = \cdots = \lie{g}{\multilie{f}{k}{\alpha }}=0 \\ \label{eq:10} \text{(ii)}&&\lie{g}{\alpha } = \lie{\adfg{}}{\alpha } = \cdots = \lie{\adfg{k}}{\alpha }=0 .\end{align} Recall that the symbol~$\lie{f}{h}$ stands for the Lie derivative defined by~$\lie{f}{h} = \langle f,{\operatorname{grad}}\,h \rangle = \ssumi{n} f_i(x) \parby{}{x_i} h(x)$. Higher-order Lie derivatives can be defined recursively: $\multilie{f}{0}{h} = h$, $\multilie{f}{k+1}{h} = \lie{f}{\multilie{f}{k}{h}}$ for~$k \ge 0$. The Lie bracket is defined as~$[f,g] \define \parbyx{g}f-\parbyx{f}g$; there is also a recursive definition: \begin{equation} \ad{f}{0}{g} \define g; \quad \ad{f}{k+1}{g} \define \left[f,\ad{f}{k}{g}\right] \quad \text{for } k \ge 0 .\end{equation} Another very important result of differential geometry is the invariance of the Lie bracket under the tangent transformation~$\tantra{T}$ (see~\citet{nijmeijer94} Proposition 2.30 \abbrev{p}~50): Let~$T\colon U\to \RR^n$ be a diffeomorphic coordinate transformation, and~$f$ and~$g$ be smooth vectorfields. 
Then \begin{align} \label{eq:103} \tantra{T} [f,g] = [\tantra{T} f, \tantra{T} g] .\end{align} \subsection{SFB Linearization} Every controllable linear system may be, by a linear coordinate transformation, transformed to the controllable canonical form (\citet{kalman}). Furthermore, this controllable canonical form can always be transformed into the integrator chain by a linear regular feedback. Therefore, the integrator chain is a canonical form for all feedback linearizable systems (see \citet{vidyasagar93}, Section 7.4). Consequently, the equations of the integrator chain can be compared with the equations of the nonlinear systems and the following proposition can be proved: \begin{prop} \label{prop:d1sfb1a} There is an SFB linearizing transformation~$\combinedtab$ of a SISO deterministic dynamical system~$\Theta_D = \dsysfgx \in \clsysdet$ into a controllable linear system if and only if there is a solution~$T_1, T_2, \dots, T_n{\colon\RR^n \to \RR}$ to the set of partial differential equations defined on~$U$ \begin{alignat}{2} \label{eq:150} \lie{f}{T_i} &= T_{i+1} \qquad&&\text{for}\quad 1 \le i \le {n-1}\\ \label{eq:151} \lie{g}{T_i} &= 0 &&\text{for}\quad 1 \le i \le {n-1}\\ \label{eq:152} \lie{g}{T_n} &\ne 0 .\end{alignat} Then the feedback is defined as follows: \begin{align} \label{eq:57} \alpha &=-\frac{\lie{f}{T_n}}{\lie{g}{T_n}}\qquad\qquad \beta =\frac{1}{\lie{g}{T_n}} .\end{align} \end{prop} \begin{proof} See \citet{vidyasagar93} equations 7.4.20--21. 
\end{proof} \begin{prop} \label{prop:d1sfb1} An SFB linearizing transformation~$\combinedtab$ of a SISO deterministic dynamical system~$\Theta_D = \dsysfgx \in \clsysdet$ into a controllable linear system exists if and only if there is a solution~$\lambda {\colon\RR^n \to \RR}$ to the set of partial differential equations: \begin{alignat}3 \label{eq:8} \biglie{\adfg{i}}{\lambda }&=0\qquad&&\text{for}\quad{\iva{n-2}}\\ \label{eq:9} \biglie{\adfg{{n-1}}}{\lambda }&\not=0 .\end{alignat} The linearizing transformation~$T(x)$ is given by: \begin{alignat}3 \label{eq:11} T_{i}&=\multilie{f}{{i-1}}\lambda \qquad&&\text{for}\quad 1\le i\le n\\ \alpha &=\frac{-\multilie{f}{{n}}\lambda }{\lie{g}{\multilie{f}{{n-1}}\lambda }}&\qquad\qquad& \label{eq:12} \beta =\frac{1}{\lie{g}{\multilie{f}{{n-1}}\lambda }} .\end{alignat} \begin{proof} See \citet{vidyasagar93} equations 7.4.23--33 and \citet{nijmeijer94} Corollary 6.16. \end{proof} \end{prop} Finally, the geometrical conditions for the existence of the linearizing transformation are studied. \begin{theo} \label{prop:d1sfb2} A deterministic SFB linearizing transformation of~$\Theta_D = \dsysfgx \in \clsysdet$ into a controllable linear system exists if and only if the distribution~$\Delta_{n} \define \operatorname{span}\left\{{\adfg{i}, \iva{n-1}}\right\}$ is nonsingular on~$U$ and the distribution~$\Delta_{n-1} \define \operatorname{span}\left\{{\adfg{i}, \iva{n-2}}\right\}$ is involutive on~$U$. \end{theo} \begin{proof} See \citet{nijmeijer94} Corollary 6.17, \citet{vidyasagar93} Theorem 7.4.16, \citet{isidori89} Theorem 4.2.3. 
\end{proof} \subsection{SCT Linearization} \begin{theo} \label{prop:s1ctt2t} There is an SCT~linearizing transformation~$\sctt$ of a deterministic MIMO system~$\Theta_D = \dsysfgx \in \clmisysdet$ into a controllable linear system if and only if there exists a reordering of the vectorfields~$g_1 \dots g_m$ and an~$m$-tuple of integers~$\kappa_1 \le \kappa_2 \le \dots \le \kappa_m$ with~$\ssumi{m} \kappa_i = n$ called the \defbf{controllability indexes} such that the following conditions are satisfied for all~$x \in U$: \begin{align} \label{eq:43} \text{(i)}&\qquad\dim\left(\operatorname{span}\left\{ \ad{f}{j}{g_i}(x), \iva{m}, \jva{\kappa_i-1}\right\}\right) = n\\ \label{eq:44} \text{(ii)}&\qquad[\ad{f}{k}{g_i},\ad{f}{l}{g_j}]=0 \qquad\text{for}\qquad 0 \le k+l \le \kappa_i+\kappa_j-1,\, 1 \le i,j \le m . \end{align} \end{theo} \begin{proof} See \citet{nijmeijer94} Theorem 5.3 and Corollary 5.6. \end{proof} The following corollary can be verified for SISO systems: \begin{coroll} \label{prop:s1ctt2a} For a SISO system with~$m=1$ the condition (ii) of Theorem~\ref{prop:s1ctt2t} can be simplified as follows: \begin{align} \label{eq:45} [g,\ad{f}{l}{g}] = 0, \quad l = 1, 3, 5, \dots, 2n-1, \quad \forall x \in U .\end{align} \end{coroll} \begin{proof} See \citet{nijmeijer94} Corollary 5.6 and the text which follows. \end{proof} \section{Transformations of It\^o Dynamical Systems} \label{sec:stotrans} The transformation rules of It\^o systems are motivated by the It\^o differential rule (see \abbrev{eg}~\citet{wong84} Section 3.3), which defines the influence of nonlinear coordinate transformations on It\^o stochastic processes. 
The It\^o differential rule applies to the situation where a scalar-valued stochastic process~$x_t$, defined by a stochastic differential equation~$dx_t = f(x_t)\,dt + \sigma(x_t)\,dw_t$ with~$f{\colon\RR \to \RR}$ and~$\sigma{\colon\RR \to \RR}$ smooth real functions and~$w_t$ a Brownian motion, is transformed by a diffeomorphic coordinate transformation~$T\colon \RR \to \RR$. Then the stochastic process~$z_t\define T(x_t)$ exists and is an It\^o process. Further, the process~$z_t$ is the solution of the stochastic differential equation \begin{equation} \label{eq:126} dz_t = \parbyx{T} {f(x_t)}\,dt + \parbyx{T} {\sigma(x_t)}\,dw_t + \itoxs\,dt .\end{equation} All details together with a proof are available for example in~\citet{karatzas}. The It\^o rule can also be derived for the multidimensional case: for the~$m$-th component of an~$n$-dimensional stochastic process the It\^o rule can be expressed as follows: \begin{multline} dz_m = \ssumi{n} \parby{T_m}{x_i} f_i\,dt + \ssum{i}{1}{n} \ssum{j}{1}{k} \parby{T_m}{x_i} \sigma_{ij}\,dw_j + \frahalf \ssum{{i,j}}{1}{n} \frac{\partial^2 T_m}{\partial x_i \partial x_j} \ssum{l}{1}{k} \sigma_{il} \sigma_{jl}\,dt \\ =\lie{f}{T_m}\,dt + \ssum{j}{1}{k} \lie{\sigma_j}{T_m}\,dw_j + \ito{\sigma}{T_m}\,dt .\end{multline} For the most common case with scalar noise~$k=1$ the equation can be further simplified to: \begin{equation} dz_m =\lie{f}{T_m}\,dt + \lie{\sigma}{T_m}\,dw + \frahalf \ssum{{i,j}}{1}{n} \parbyxx{T_m}{x_i}{x_j} \sigma_{i} \sigma_{j}\,dt .\end{equation} The operator~$\ito{\sigma}{T_m}$ is sometimes written using matrix notation as: \begin{equation} \ito{\sigma}{T_m} = \frahalf {\operatorname{trace}} \left(\sigma^T \parbysec{T_m}{x} \sigma\right) .\end{equation} Generally, $\ito{\sigma}$ vanishes for linear~$T$ or zero~$\sigma$. 
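The observation that the It\^o term vanishes for linear transformations is easy to confirm symbolically. The sketch below (hypothetical data with~$n=2$ and scalar noise~$k=1$; the helper \texttt{ito\_term} is ours) evaluates~$\ito{\sigma}{T_m}$ directly from its definition~\eqref{eq:16}:

```python
# Symbolic evaluation of the Ito term for one component T_m, with
# hypothetical data: n = 2, scalar noise k = 1.
# P_sigma T_m = (1/2) * sum_{i,j} d^2 T_m / (dx_i dx_j) * sigma_i * sigma_j
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]
sigma = [x2, sp.Integer(1)]   # hypothetical dispersion vectorfield

def ito_term(Tm, sigma, X):
    """Second-order Ito correction for one component T_m (k = 1)."""
    n = len(X)
    return sp.Rational(1, 2) * sum(
        sp.diff(Tm, X[i], X[j]) * sigma[i] * sigma[j]
        for i in range(n) for j in range(n))

# the term vanishes for a linear transformation ...
assert ito_term(3*x1 - 2*x2, sigma, X) == 0
# ... and is generally nonzero for a nonlinear one: here it equals x2
assert sp.simplify(ito_term(x1*x2, sigma, X) - x2) == 0
```

For $k>1$ the inner products $\sigma_i\sigma_j$ become the sums $\sum_l \sigma_{il}\sigma_{jl}$, but the structure of the computation is unchanged.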
\subsection{The Correcting Term} \label{sub:corr} In this section we introduce an extremely useful equivalence between It\^o and Stratonovich systems, which allows us to use some Stratonovich linearization techniques for It\^o problems. The motivation is the following: let~$\Theta_I = \ssysfgx \in \clmisysito$ be an It\^o system. We are looking for a Stratonovich system $\Theta_S = \ssys{\vec f}{\vec g}{\vec \sigma}{x}$ such that the trajectories of~$\Theta_I$ and~$\Theta_S$ are identical. The aim is to find equations relating the quantities~$\vec f$,~$\vec g$, and~$\vec \sigma$ to~$f$,~$g$, and~$\sigma$. \begin{defn} \label{def:corr} Let~$\Theta_{1I} = \ssys{f}{g}{\sigma}{x} \in \clssysito(n,m,k)$ be an~$n$-dimensional It\^o dynamical system with~$k$-dimensional Brownian motion~$w$. The vectorfield $\corr{\sigma}{x}$ whose~$r$-th coordinate is equal to \begin{alignat}3 \label{eq:5} (\corr{\sigma}{x})_r &= -\frahalf \ssum{i}{1}{n} \ssum{j}{1}{k} \parby{{\sigma_{rj}}}{{x_i}} \sigma_{ij} \qquad&&\text{for}\quad 1 \le r \le n \end{alignat} is called the \defbf{correcting term}. Note that the derivative is always evaluated in the corresponding coordinate system. Further, let us define the \defbf{correcting mapping} $\Corr{\sigma} \colon \clssysito(n,m,k) \to \clssysstrat(n,m,k)$ by \begin{align} \label{eq:21} \Corr{\sigma} (f,g,\sigma,U,\origi{x}) \define (f+\corr{\sigma}{x},g,\sigma,U,\origi{x}) .\end{align} \end{defn} The general treatment of the subject can be found for example in~\citet{wong84} \abbrev{p}~160 or in \citet{sagemelsa}. The following theorem describes the behavior of the correcting term under the coordinate transformation. \begin{theo} \label{prop:corr} Let~$\Theta_I = \ssysfgx \in \clsysito$ be a one-dimensional It\^o dynamical system. 
Let $T$ be a diffeomorphism defined on~$U$, let the symbols~$\scti{T}$ and $\scts{T}$ denote an It\^o coordinate transformation and a Stratonovich coordinate transformation induced by the same diffeomorphism~$T$, and let~$\tilde\sigma = \tantra{T}{\sigma}$. Then the following diagram commutes: \begin{equation} \begin{CD} \label{eq:4} \Theta_{1I} @>\scti{T}>> \Theta_{2I}\\ @V{\Corr{\sigma}}VV @AA{\Corr{{\tantra{T} \sigma}}^{-1}}A\\ \Theta_{1S} @>\scts{T}>> \Theta_{2S}\\ \end{CD} .\end{equation} In other words: \begin{align} \label{eq:19} \scti{T} &= {\Corr{{\tilde\sigma}}{}}^{-1} \compose \scts{T} \compose \Corr{\sigma}{} \quad\text{and} \\ \label{eq:20} \scts{T} &= {\Corr{\sigma}{}}^{-1} \compose \scti{T} \compose \Corr{{\tilde\sigma}}{} . \end{align} The notation~$\Corr{{\sigma}}{}^{-1}$ is used to denote the inverse mapping \begin{align} \label{eq:23} {\Corr{\sigma}{}}^{-1} (f,g,\sigma,U,\origi{x}) \define (f-\corr{\sigma}{x},g,\sigma,U,\origi{x}) .\end{align} \end{theo} \begin{proof} The correcting term~$\corr{\sigma}{x}{\colon\RR^n \to \RR^n}$ is equal to \begin{equation} \corr{\sigma}{x} = -\corrx .\end{equation} The transformations specified in the diagram~\eqref{eq:4} will be evaluated in the following order: \begin{equation} \begin{CD} \label{eq:6} \Theta_{1I} @>({\operatorname a})>> \Theta_{2I} \\ @V({\operatorname b})VV \\ \Theta_{1S} @>({\operatorname c})>> \Theta_{2S} @>({\operatorname d})>> \Theta_{3I}\\ \end{CD} \end{equation} We want to prove the equivalence of~$\Theta_{2I}$ and~$\Theta_{3I}$. The symbol (a) denotes the It\^o coordinate transformation and (c) the Stratonovich coordinate transformation; the symbols (b) and (d) stand for the correcting mapping and its inverse. Note that the systems~$\Theta_{2I}$, $\Theta_{2S}$, and $\Theta_{3I}$ are defined in the~$z$-coordinate system. 
Further, let \begin{align} \label{eq:243} \tilde\sigma&\define\trav{\sigma}\\ \kappa&\define \tantra{T} \corr{\sigma}{x} = -\trav{(\corrx)}\\ \origi{z}&\define T(\origi{x}) .\end{align} Then \begin{align} \text{(a)}&\qquad \Theta_{2I} =\bigssys{\trav{f} + \itoxs }{ \trav{g}}{ \tilde\sigma }{z}\\ \text{(b)}&\qquad \Theta_{1S} =\bigssys{f - \corrx }{ g }{ \sigma }{x} \\ \text{(c)}&\qquad \Theta_{2S} =\bigssys{\trav{\left(f-\corrx\right)} }{ \trav{g} }{ \trav{\sigma} }{z}\\ \text{(d)}&\qquad \Theta_{3I} =\bigssys{\trav{f} + \kappa + \frahalf \parbyz{\tilde\sigma}{\tilde\sigma} }{ \trav{g} }{ \tilde\sigma }{z} .\end{align} All the terms in (a) are equivalent to the respective terms in (d) except for the drift terms containing functions of~$\sigma$. Therefore, we continue comparing these terms only. For (a): \begin{align} \label{eq:37} L &\define \itoxs .\end{align} For (d): \begin{multline} \label{eq:38} R \define \kappa + \corrxx{{\tilde\sigma}}{z} = \kappa + \frahalf \parbyz{} \left( \parbyx{T} \sigma \right) \parbyx{T}\sigma = \kappa + \frahalf \parbyz{x} \parbyx{} \left( \parbyx{T} \sigma \right) \parbyx{T}\sigma = \\ \kappa + \frahalf \left( \parbysec{T}{x}\sigma + \parbyx{T}\parbyx{\sigma} \right) \sigma = \kappa + \frahalf \parbysec{T}{x}\sigma^2 - \kappa= \itoxs .\end{multline} Thus~$L=R$ and $\Theta_{2I} = \Theta_{3I}$. \end{proof} Theorem~\ref{prop:corr} is valid also for combined transformations: \begin{coroll} \label{prop:corrcorol} Let~$\Theta_I = \ssysfgx \in \clsysito$ , $T$, $\scti{T}$, and $\scts{T}$ have the same meaning as in Theorem~\ref{prop:corr}. 
Then the following diagram commutes for an arbitrary regular feedback~$\feedbackab$: \begin{equation} \begin{CD} \label{eq:4000} \Theta_{1I} @>\scti{T}>> \Theta_{2I} @>\feedbackab>> \Theta_{4I}\\ @V{\Corr{\sigma}}VV @AA{\Corr{{\tantra{T} \sigma}}^{-1}}A @AA{\Corr{{\tantra{T} \sigma}}^{-1}}A\\ \Theta_{1S} @>\scts{T}>> \Theta_{2S} @>\feedbackab>> \Theta_{4S}\\ \end{CD} .\end{equation} \end{coroll} \begin{proof} We want to prove the equivalence of~$\Theta_{4I}$ and~${\Corr{{\tantra{T} \sigma}}^{-1}} \Theta_{4S}$. The control and dispersion vectorfields of~$\Theta_{4I}$ and~$\Theta_{4S}$ are identical and they are not influenced by the correcting mapping. Using the notation of Theorem~\ref{prop:corr} we can express the drift term of~$\Theta_{4I}$ as~$\tantra{T}f + L + g\alpha $. The drift term of~${\Corr{{\tantra{T} \sigma}}^{-1}} \Theta_{4S}$ is~$\tantra{T}f + R + g\alpha $. The effect of the feedback is purely additive and both systems are equal. \end{proof} At first glance the correcting term is rather surprising. How can the second derivative of~$T$ in (\ref{eq:37}) be compensated by the correcting term, which does not contain~$T$ at all? The answer is quite simple: the second derivative is hidden in the correcting term implicitly, because the correcting term depends on the coordinate system in which the system~$\Theta_{1I}$ is defined. The derivative~$\parbyx{\sigma}$ contained in the correcting term is always taken in the appropriate coordinate system. To emphasize the dependence of the correcting term on the coordinate system, we will never omit the independent variable (\abbrev{eg} $x$ or $z$) from the symbol~$\corr{\sigma}{x}$. Theorem~\ref{prop:corr} is valid for general multidimensional systems~$\Theta_I = \ssysfgx \in \clmisysito$ ; the proof is purely mechanical and is not presented here. Let us now turn our attention to several special cases of the correcting mapping. 
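For the scalar case the commutativity of the diagram~\eqref{eq:4} can also be checked numerically. The sketch below (the helper names and the concrete choices of $f$, $\sigma$, $T$ are ours) evaluates both paths of the diagram at a point, writing all $z$-coordinate derivatives in terms of $x$ via $\partial/\partial z = (1/T')\,\partial/\partial x$:

```python
import math

# Scalar Ito system dx = f dt + sigma dw and a diffeomorphism T.
# Concrete choices (ours), so every derivative can be written by hand.
a, b = 0.4, 0.2
f      = lambda x: a * x
sigma  = lambda x: b * x
dsigma = lambda x: b
T, dT, d2T = math.exp, math.exp, math.exp   # z = e^x

def corr(x):
    # correcting term for n = k = 1:  -1/2 * (d sigma/dx) * sigma
    return -0.5 * dsigma(x) * sigma(x)

def ito_drift_direct(x):
    # path (a): Ito coordinate transformation, drift T'f + 1/2 T'' sigma^2
    return dT(x) * f(x) + 0.5 * d2T(x) * sigma(x) ** 2

def ito_drift_via_stratonovich(x):
    # path (b)-(c)-(d): correcting mapping, Stratonovich transformation,
    # inverse correcting mapping, all evaluated at z = T(x)
    f_strat = f(x) + corr(x)                     # (b)
    drift_z = dT(x) * f_strat                    # (c) contravariant rule
    tilde_sigma = dT(x) * sigma(x)
    # d(tilde sigma)/dz = (1/T') d(T' sigma)/dx
    dtilde_dz = (d2T(x) * sigma(x) + dT(x) * dsigma(x)) / dT(x)
    corr_z = -0.5 * dtilde_dz * tilde_sigma      # correcting term in z
    return drift_z - corr_z                      # (d) inverse mapping

for x in (-1.0, 0.0, 0.5):
    assert abs(ito_drift_direct(x) - ito_drift_via_stratonovich(x)) < 1e-12
```

The second derivative of $T$ indeed reappears through the $z$-coordinate correcting term, exactly as explained above.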
\begin{coroll} \label{prop:corrvectr} Let~$\Theta_I = \ssysfgx \in \clssys_I(n,m,1)$ be an~$n$-dimensional stochastic dynamical system with a one-dimensional Brownian motion~$w$. The~$r$-th coordinate of the correcting term~$(\corr{\sigma}{x})_r$ is equal to \begin{equation} \label{eq:214} (\corr{\sigma}{x})_r = -\frahalf \ssum{i}{1}{n} \parby{{\sigma_{r}}}{{x_i}} \sigma_{i} = -\frahalf \lie{\sigma}{\sigma_r} \rfor{1\le r\le n} .\end{equation} \end{coroll} \begin{proof} Substitute~$k = 1$ into~\eqref{eq:5}. \end{proof} \begin{coroll} \label{prop:ic1} For systems with one-dimensional noise ($k=1$) define the matrix valued It\^o term~$\ito{\sigma}{T}$ for~$T{\colon\RR^n \to \RR^n}$ with components~$T_i$, $1\le i\le n$, as a column $n\times 1$ matrix $\ito{\sigma}{T} \define \left[\ito{\sigma}{T_1}, \ito{\sigma}{T_2}, \dots,\ito{\sigma}{T_n}\right]^T$. Then the relations~\eqref{eq:214} can be expressed as \begin{align} \label{eq:215} \ito{\sigma}{T} = \tantra{T} \left(\corr{\sigma}{x}\right) - \corr{{\tilde\sigma}}{z} .\end{align} \end{coroll} \begin{proof} The proof is almost identical to that of the multidimensional variant of Theorem~\ref{prop:corr}. The symbols can be identified as follows: \begin{alignat}2 L_i &= \left(\ito{\sigma}{T}\right)_i \qquad&&\text{for}\quad 1\le i\le n \\ \kappa_i &= \left( \tantra{T} \left(\corr{\sigma}{x}\right)\right)_i \\ R_i &= \left(\tantra{T} \left(\corr{\sigma}{x}\right) \right)_i - \left(\corr{{\tilde\sigma}}{z}\right)_i .\end{alignat} \end{proof} \begin{coroll} \label{prop:ic2} Assume that the conditions of Corollary~\ref{prop:ic1} hold. The relation~\eqref{eq:215} can be written as: \begin{alignat}3 \label{eq:140} \frahalf \lie{\sigma}{\lie{\sigma}{T_i}} &= \ito{\sigma}{T_i} - \lie{\corr{\sigma}{x}}{T_i} \qquad&&\text{for}\quad 1 \le i \le n . 
\end{alignat} \end{coroll} \begin{proof} The formula can be expressed as: \begin{multline} \label{eq:139} \frahalf \lie{\sigma}{{\lie{\sigma}{T_i}}} = \frahalf \lie{\sigma}{ \left( \ssum{j}{1}{n} \parby{T_i}{x_j} \sigma_j \right) } = \frahalf \ssum{k}{1}{n} \sigma_k \parby{}{x_k} \left( \ssum{j}{1}{n} \parby{T_i}{x_j} \sigma_j \right) =\\ \frahalf \ssum{k,j}{1}{n} \left( \sigma_j\sigma_k \parbyxx{T_i}{x_k}{x_j} +\parby{T_i}{x_j} \parby{\sigma_j}{x_k} \sigma_k \right) = \ito{\sigma}{T_i} - \lie{{\corr{\sigma}{x}}}{T_i} . \end{multline} \end{proof} \subsection{Composition of Coordinate Transformations of It\^o Systems} \label{sub:stgroup} The set of all deterministic coordinate transformations~$\sctt$ together with composition~$\sct{RS} \define \sct{S} \compose \sct{R}$ forms a group. Obviously, this fact is a straightforward result of the behavior of the contravariant transformation, and therefore an analogous statement must hold for Stratonovich systems. Surprisingly, this also holds for It\^o systems, as will be shown here. This has an important consequence: we may always find the inverse transformation to a given coordinate transformation of It\^o systems. We will prove the following assertion: \begin{theo} \label{prop:compogr} Let~$\scti{R}$, $\scti{S} \in \clssysito(n,1,1)$ be coordinate transformations of one-dimensional It\^o systems induced by diffeomorphisms~$R$ and~$S$. Then \begin{equation} \label{eq:123} \scti{S} \compose \scti{R} = \scti{S \compose R} .\end{equation} \end{theo} \begin{proof} We will transform the system in two different ways and show that the results are equal. \begin{enumerate} \item In the first method the system~$A=(0,0,a,U,\origi{x})$ which corresponds to a differential equation~$dx = a(x)\,dw$ will be transformed twice: \begin{enumerate} \item first, by~$y=R(x)$ to~$y$ coordinates \item and then the result~$B=(g,0,b,U,\origi{x})$ which corresponds to~$dy = g(y)\,dt + b(y)\,dw$ by~$z=S(y)$ to~$z$ coordinates. 
\end{enumerate} \item The other method transforms the system~$A$ only once by~$z=T(x) = S(R(x)) = (S \compose R)(x)$. \end{enumerate} Without loss of generality, the equation~$dx = a(x)\,dw$ is assumed to have no drift term, because the drift term transforms in the contravariant fashion. The derivatives will be denoted by~$\parbyx{T(x)} \definer T'$,~$\parbyx{R(x)} \definer R'$,~$\parby{S(y)}{y} \definer S'$ and similarly for~$T''$, $R''$ and $S''$. Note that the prime always denotes the derivative with respect to the argument of the function. The transformation by~$R$ gives: \begin{align} \label{eq:64} dx &= a \, dw\\ \label{eq:266} dy &= R' a\,dw + \frahalf a^2 R'' \,dt .\end{align} Thus the coefficients of the second SDE are defined by \begin{align} \label{eq:267} b(y) &\define (R'a) \compose R^{-1}(y)\\ \label{eq:268} g(y) &\define (\frahalf a^2 R'') \compose R^{-1}(y) .\end{align} The second transformation (by~$S$) gives \begin{align} \label{eq:269} dz &= \left( S' g + \frahalf b^2 S'' \right)\,dt + S' b \,dw =\\ \label{eq:270} &= \left(\frahalf a^2 S' R'' + \frahalf (R')^2 a^2 S'' \right) \,dt + S' R' a \,dw = \\ \label{eq:272} &= \frahalf a^2 \left(S'R'' + (R')^2 S'' \right)\,dt + S'R' a\,dw =\\ \label{eq:273} &= \frahalf a^2 T''\,dt + T' a\,dw .\end{align} The last equality follows from the fact that~$T''=S''(R')^2+S'R''$. \end{proof} One can verify the multidimensional case in the same spirit. \subsection{Invariants} \label{sub:stoinvar} In the deterministic case, several useful propositions about invariant properties, for example the Leibniz rule~\eqref{eq:63} and the relation~\eqref{eq:2}, were employed. Unfortunately, we have not found any analogue for It\^o systems yet. To point out the main complications, we will analyze the It\^o equivalent of the Leibniz rule~\eqref{eq:63}, which is essential for reducing the order of partial differential equations in the deterministic exact linearization. 
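The group property~\eqref{eq:123} of the previous subsection can likewise be checked numerically for concrete diffeomorphisms. In the sketch below (our choices: $R(x)=e^x$, $S(y)=y^3$, so $T(x)=e^{3x}$, with all derivatives written by hand) the two-step transformation of $dx=a(x)\,dw$ is compared with the single transformation by $T = S \compose R$:

```python
import math

# dx = a(x) dw transformed either in two steps (by R, then S) or in one
# step by T = S o R. Our concrete choices: R(x) = e^x, S(y) = y^3,
# so T(x) = e^(3x); all derivatives are written by hand.
a = lambda x: math.sin(x) + 2.0

R, dR, d2R = math.exp, math.exp, math.exp
S   = lambda y: y ** 3
dS  = lambda y: 3 * y ** 2
d2S = lambda y: 6 * y
dT  = lambda x: 3 * math.exp(3 * x)
d2T = lambda x: 9 * math.exp(3 * x)

def two_step(x):
    # first transformation: dy = R'a dw + 1/2 a^2 R'' dt, at y = R(x)
    gy = 0.5 * a(x) ** 2 * d2R(x)
    by = dR(x) * a(x)
    y = R(x)
    # second transformation: dz = (S'g + 1/2 b^2 S'') dt + S'b dw
    return dS(y) * gy + 0.5 * by ** 2 * d2S(y), dS(y) * by

def one_step(x):
    # single transformation by T = S o R
    return 0.5 * a(x) ** 2 * d2T(x), dT(x) * a(x)

for x in (-0.5, 0.0, 0.8):
    (gz2, bz2), (gz1, bz1) = two_step(x), one_step(x)
    assert abs(gz1 - gz2) < 1e-9 * max(1.0, abs(gz1))
    assert abs(bz1 - bz2) < 1e-9 * max(1.0, abs(bz1))
```

Both paths agree, which is just the identity $T''=S''(R')^2+S'R''$ in numerical form.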
If the Lie derivative~$\lie{g}{}$ is interpreted as a general first order operator \begin{align} \lie{g}{} = \ssumi{n}g_i \parby{}{x_i} \end{align} then the commutator of two such first order operators $\lie{f}{\lie{g}{}} - \lie{g}{\lie{f}{}}$ is also a first order operator~$\lie{\lbrack f,g\rbrack }{}$ (see~\eqref{eq:63}). Similarly, define the general second order operator as \begin{align} \label{eq:17} O(g,G) &\define \ssumi{n}g_i \parby{}{x_i} + \ssum{i,j}{1}{n} G_{ij}\parbyxx{}{x_i}{x_j} \end{align} where $g_i,G_{ij}{\colon\RR^n \to \RR}$ for $1 \le i,j \le n$. We can compute the commutator \begin{align} \label{eq:213} C(f,F,g,G) \define O(f,F)O(g,G) - O(g,G)O(f,F) \end{align} of such second order operators. If this commutator were also a second order operator (\abbrev{i.e} if there were~$\varphi$ and $\Phi$ such that~$C(f,F,g,G) = O(\varphi,\Phi)$), then we would be able to simplify the PDEs of stochastic transformations (see Proposition~\ref{prop:sfbooo}). Because the operator~$O$ is linear, \abbrev{i.e}~$O(f,F) = O(f,0) + O(0,F)$, we can split the computation into four independent, reusable parts: \begin{multline} C(f,F,g,G) = O(f,F)O(g,G) - O(g,G)O(f,F) = \\ =\lie{[f,g]}{} + C(0,F,0,G) + C(f,0,0,G) + C(0,F,g,0) .\end{multline} The first term is already a first order operator. Only the second and the third terms need to be computed, because the fourth term can be obtained from the third one by formal substitution. 
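Before carrying out the general computation, the obstruction can already be seen in the scalar case $n=1$ with exact polynomial arithmetic. The sketch below (all helper names are ours) verifies the identity $[F\partial^2, G\partial^2]\varphi = (FG''-GF'')\varphi'' + 2(FG'-GF')\varphi'''$ and that the third-order coefficient does not vanish:

```python
# Polynomials as coefficient lists [c0, c1, ...]; just enough machinery to
# check the scalar (n = 1) commutator of two pure second order operators.

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            r[i + j] += ci * cj
    return r

def padd(p, q, sign=1):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [ci + sign * cj for ci, cj in zip(p, q)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def O2(F, phi):                  # O(0, F) phi = F * phi''
    return pmul(F, deriv(deriv(phi)))

F = [0, 1]                       # F(x) = x
G = [0, 0, 1]                    # G(x) = x^2
phi = [1, 2, 0, 3, 1]            # arbitrary test polynomial

# commutator [F d^2, G d^2] applied to phi
lhs = padd(O2(F, O2(G, phi)), O2(G, O2(F, phi)), -1)
# identity: (F G'' - G F'') phi'' + 2 (F G' - G F') phi'''
phi2, phi3 = deriv(deriv(phi)), deriv(deriv(deriv(phi)))
c2 = padd(pmul(F, deriv(deriv(G))), pmul(G, deriv(deriv(F))), -1)
c3 = [2 * c for c in padd(pmul(F, deriv(G)), pmul(G, deriv(F)), -1)]
rhs = padd(pmul(c2, phi2), pmul(c3, phi3))

assert trim(lhs) == trim(rhs)
assert trim(c3) != [0]           # third order coefficient 2(FG'-GF') != 0
```

For $F=x$, $G=x^2$ the third-order coefficient is $2x^2$, so the commutator genuinely leaves the class of second order operators.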
For the third term: \begin{multline} O(f,0) O(0,G) = \ssum{i}{1}{n} f_{i} \parby{}{x_i} \left( \ssum{k,l}{1}{n} G_{kl} \parbyxx{}{x_k}{x_l} \right) = \\ \ssum{i,k,l}{1}{n} \left( f_{i} \parby{G_{kl}}{x_i} \parbyxx{}{x_k}{x_l} + f_{i} G_{kl} \parbyxxx{}{x_i}{x_k}{x_l} \right) .\end{multline} Further, \begin{multline} O(0,G)O(f,0) = \ssum{k,l}{1}{n} G_{kl} \parbyxx{}{x_k}{x_l} \left( \ssum{i}{1}{n} f_i \parby{}{x_i} \right) =\\ \ssum{i,k,l}{1}{n} G_{kl} \left( \parbyxx{f_i}{x_k}{x_l} \parby{}{x_i} + 2 \parby{f_i}{x_l} \parbyxx{}{x_i}{x_k} + f_i \parbyxxx{}{x_i}{x_k}{x_l} \right) .\end{multline} The intermediate results for the third and the fourth terms can be combined into \begin{multline} O(f,0) O(0,G) + O(0,F) O(g,0) - O(g,0) O(0,F) - O(0,G) O(f,0) =\\ \ssum{i,k,l}{1}{n} \left( f_{i} \parby{G_{kl}}{x_i} - g_{i} \parby{F_{kl}}{x_i} + 2 F_{ki} \parby{g_l}{x_i} - 2 G_{ki} \parby{f_l}{x_i} \right) \parbyxx{}{x_k}{x_l} \\ + \ssum{i,k,l}{1}{n} \left( F_{kl} \parbyxx{g_i}{x_k}{x_l} - G_{kl} \parbyxx{f_i}{x_k}{x_l} \right) \parby{}{x_i} .\end{multline} All of these are first and second order operators. 
Now let us evaluate the second term: \begin{multline} O(0,F) O(0,G) = \ssum{i,j}{1}{n} F_{ij} \parbyxx{}{x_i}{x_j} \biggl( \ssum{k,l}{1}{n} G_{kl} \parbyxx{}{x_k}{x_l} \biggr) =\\ \ssum{i,j,k,l}{1}{n} \biggl( F_{ij} \parbyxx{G_{kl}}{x_i}{x_j} \parbyxx{}{x_k}{x_l} + (F_{ij} + F_{ji}) \parby{G_{kl}}{x_j} \parbyxxx{}{x_i}{x_k}{x_l} + F_{ij} G_{kl} \parbyxxxx{}{x_i}{x_j}{x_k}{x_l} \biggr) .\end{multline} \begin{multline} O(0,F) O(0,G) - O(0,G) O(0,F) = \ssum{i,j,k,l}{1}{n} \biggl( \biggl( F_{ij} \parbyxx{G_{kl}}{x_i}{x_j} - G_{ij} \parbyxx{F_{kl}}{x_i}{x_j} \biggr) \parbyxx{}{x_k}{x_l} +\\ \biggl( (F_{ij} + F_{ji}) \parby{G_{kl}}{x_j} - (G_{ij} + G_{ji}) \parby{F_{kl}}{x_j} \biggr) \parbyxxx{}{x_i}{x_k}{x_l} \biggr) .\end{multline} Unfortunately, the last term \begin{align} \label{eq:60} \biggl( (F_{ij} + F_{ji}) \parby{G_{kl}}{x_j} - (G_{ij} + G_{ji}) \parby{F_{kl}}{x_j} \biggr) \parbyxxx{}{x_i}{x_k}{x_l} \end{align} is of third order and, in general, it does not vanish. Thus we have shown that the commutator of two general second order operators is of third order. Consequently, the Leibniz rule simplifications used in the deterministic case cannot be applied to the general stochastic linearization problem. \section{Stochastic Case} \label{sub:stoclass} Since there are two definitions of coordinate transformations of stochastic differential equations (It\^o, Stratonovich) and three definitions of linearity ($g$, $\sigma$, $g\sigma$), we face at least six stochastic problems per deterministic one. In this section we will discuss all of them, giving at least partial solutions to the feedback linearization problem. We consider mainly the SISO problem, except for cases where the MIMO extension is trivial. \subsection{Stratonovich $g$-linearization} \label{sub:stotech} We show that the method known for deterministic systems can be applied without modifications. 
\begin{prop} \label{prop:stratgprop} The Stratonovich dynamical system~$\Theta_S = \ssysfgx \in \clmisysstrat$ is $g$-linearizable if and only if the deterministic system~$\Theta_D = \dsysfgx \in \clmisysdet$ is linearizable. The two linearizing transformations are equal. This holds both for SISO and MIMO systems and both for SFB and SCT linearization. \end{prop} \begin{proof} The comparison of the transformation laws for deterministic and Stratonovich systems shows that the coefficients of $f$ and $g$ transform in the same way. The controllability conditions and the definition of linearity are also identical (compare Definition~\ref{def:linear} with Definition~\ref{def:linearsto}). Identical problems have identical solutions. \end{proof} \subsection{Stratonovich $g\sigma$-linearization} \label{sub:ssgsigma} The Stratonovich problems are not complicated by the second order It\^o term. The transformation laws for Stratonovich systems are the same as the deterministic transformation laws; therefore, many results of the deterministic linearization theory can be used. For example, the stochastic SCT $g\sigma$-linearization of a Stratonovich system is equivalent to the linearization of a deterministic, non-square MIMO system with two inputs and a single output. The SFB problem, which is studied in this section, is not as simple as the SCT one, because the feedback influences only the control input~$u$ (Figure~\ref{fig:picassym}). The ``dispersion input'' is not a part of the feedback. Consequently, in order to solve the Stratonovich SFB $g\sigma$-linearization, we have to deal with a combined deterministic SFB-SCT problem. \psfig{fig:picassym}{Asymmetry of SFB $g\sigma$-Linearization}{picassym} \subsubsection{Canonical Form} Recall that we require~$g$-controllability of the resulting system. Since this is a Stratonovich problem, the transformed vectorfields~$\tilde f$ and~$\tilde g$ do not depend on the dispersion vectorfield~$\sigma$. 
Therefore, the {\em control part\/} and the {\em dispersion part\/} can be studied independently. Any~$g$-linear system can be transformed into an integrator chain by a combination of a linear coordinate transformation and linear feedback. Therefore, if we set~$\sigma=0$, the canonical form is the integrator chain. In general, the dispersion vectorfield~$\tilde \sigma$ is assumed to be an arbitrary constant vectorfield~$\tilde \sigma(x)_i=s_i$, $1\le i\le n$ (see Definition~\ref{def:linearsto}), and this form is preserved by arbitrary linear transformations. Therefore the canonical form can be written as: \begin{align} \label{eq:1000} \tilde f_i(x) &= x_{i+1} \qquad&&\text{for}\quad 1\le i\le n-1\\ \tilde f_n(x) &= 0\\ \tilde g_i(x) &= 0 \qquad&&\text{for}\quad 1\le i\le n-1\\ \tilde g_n(x) &= 1\\ \label{eq:1077} \tilde \sigma_i(x) &= s_i \qquad&&\text{for}\quad 1\le i\le n .\end{align} We can compare this canonical form with the equations which define the transformed system~$\tilde \Theta$. \begin{prop} \label{prop:s1sfb1a} There is a SFB $g\sigma$-linearizing transformation of the SISO Stratonovich system $\Theta_S = \ssysfgx \in \clsysstrat$ into a~$g$-controllable linear system if and only if there is a solution~$\lambda {\colon\RR^n \to \RR}$ of the set of partial differential equations: \begin{alignat}3 \label{eq:7} \biglie{\adfg{i}}{\lambda }&=0\qquad&&\text{for}\quad 0 \le i \le n-2\\ \label{eq:13} \biglie{\adfg{{n-1}}}{\lambda }&\ne 0\\ \label{eq:14} \biglie{\ad{f}{i}{\sigma}}{\lambda }&=s'_{i+1} &&\text{for}\quad 0 \le i \le n-1 \end{alignat} such that~$s'_i \in \RR$ are constants on~$U$ for $1 \le i \le n$. 
Then the linearizing transformation is given by: \begin{alignat}3 \label{eq:22} T_i&=\multilie{f}{{i-1}}\lambda \qquad&&\text{for}\quad 1\le i\le n\\ \alpha &=\frac{-\multilie{f}{{n}}\lambda }{\lie{g}{}\multilie{f}{{n-1}}\lambda } &\qquad\qquad& \label{eq:28} \beta =\frac{1}{\lie{g}{}\multilie{f}{{n-1}}\lambda } .\end{alignat} \end{prop} \begin{proof} Assume that~$\Theta_S$ is transformed by~$\combinedtab$ into $\tilde \Theta \define \left(\tilde f,\tilde g,\tilde \sigma,T(U),T(\origi{x})\right)$ where the~$i$-th components of~$f$,$g$, and~$\sigma$ can be expressed as:~$\tilde f_i = \lie{f}{T_i}$, $\tilde g_i = \lie{g}{T_i}$, $\tilde \sigma_i = \lie{\sigma}{T_i}$. Moreover, the feedback is defined by~$ u = \alpha + \beta v $. The equations of~$\Theta$ can be compared to the equation of the canonical form~\eqref{eq:1000}-\eqref{eq:1077}. \begin{alignat}{3} \label{eq:160} \lie{f}{T_i} &= T_{i+1} \qquad&&\text{for}\quad 1 \le i \le n-1\\ \label{eq:161} \lie{g}{T_i} &= 0 &&\text{for}\quad 1 \le i \le n-1\\ \label{eq:162} \lie{g}{T_n} &= 1/\beta \ne 0 \\ \label{eq:163} \lie{f}{T_n} &= -\alpha /\beta .\end{alignat} The relations~\eqref{eq:7}, \eqref{eq:13}, \eqref{eq:22}, and~\eqref{eq:28} are equivalent to relations~\eqref{eq:8}-\eqref{eq:12} from Proposition~\ref{prop:d1sfb1}. 
The relation~\eqref{eq:14} can be verified in a similar way: \begin{alignat}{3} \label{eq:165} \lie{\sigma}{T_i} &= s_{i} \qquad&&\text{for}\quad 1 \le i \le n \end{alignat} thus by~\eqref{eq:160} \begin{alignat}{3} \label{eq:254} \lie{\sigma}{\lie{f}{T_i}} &= s_{i+1} \qquad&&\text{for}\quad 1 \le i \le n-1 \end{alignat} and by~\eqref{eq:63} \begin{alignat}{3} \label{eq:255} \lie{\sigma}{\lie{f}{T_i}} &= \lie{f}{\lie{\sigma}{T_i}} -\lie{[f,\sigma]}{T_i} \qquad&&\text{for}\quad 1 \le i \le n-1 \end{alignat} since the Lie derivative of a constant is zero: \begin{alignat}{3} \label{eq:256} \lie{f}{\lie{\sigma}{T_i}} &= \lie{f}{s_i} = 0\\ s_{i+1} \define \lie{\sigma}{\lie{f}{T_i}} &= -\lie{[f,\sigma]}{T_i} \qquad&&\text{for}\quad 1 \le i \le n-1 .\end{alignat} The equations~\eqref{eq:14} are obtained by successive application of this relation. The symbols~$s_i$ are equal to~$s'_{i}$ except for the signs. \end{proof} \subsubsection{Conditions for the Control Part} The necessary conditions for linearizability of the {\em control part\/} of~$\Theta$ (\abbrev{i.e} the system~$\left(f,g,0,U,\origi{x} \right)$) can be expressed in geometrical form. We intentionally omit the dispersion part, using the fact that the resulting system must be linear when the noise is zero. Further, the class of all solutions of this subproblem will be called~$C$. This class can be studied to determine whether some member of~$C$ linearizes the {\em dispersion part\/} of the system. We can find a geometrical criterion similar to the conditions of Proposition~\ref{prop:d1sfb2}. In this case these conditions are necessary but not sufficient, since \eqref{eq:14} must also be satisfied. \begin{prop} \label{prop:s1sfb1} A SFB~$g\sigma$-linearizing transformation of the Stratonovich system $\Theta_S$ into a~$g\sigma$-controllable linear system exists only if the distribution~$\distrofg{n-2}$ is involutive and the distribution~$\distrofg{n-1}$ is $n$-dimensional. 
\end{prop} \begin{proof} This theorem is equivalent to Proposition~\ref{prop:d1sfb2}, which is a direct consequence of Proposition~\ref{prop:d1sfb1a}, which in turn corresponds to Proposition~\ref{prop:s1sfb1a}. \end{proof} \subsubsection{Condition for the Dispersion Part} \label{prop:cftdp} The conditions of Proposition~\ref{prop:s1sfb1} can be written in matrix form. We are looking for~$T_1 = \lambda{\colon\RR^n \to \RR}$ such that \begin{align} \label{eq:189} \left[\,\,\begin{matrix} \adfg{0}\\ \adfg{1}\\ \vdots\\ \adfg{n-2}\\ \end{matrix}\,\,\right] \quad \left[\,\,\begin{matrix} \parby{\lambda }{x_1}\\ \parby{\lambda }{x_2}\\ \vdots\\ \parby{\lambda }{x_n}\\ \end{matrix}\,\,\right] = \left[0\right] .\end{align} The vectors~$ \adfg{0} \dots \adfg{n-2}$ are written in coordinates as~$1\times n$ rows. The first matrix is~$(n-1)\times n$. Moreover it is required that \begin{equation} \label{eq:1020} \biglie{\adfg{n-1}}{\lambda} \end{equation} is nonzero. We will use the algorithm for SFB deterministic linearization (see Section~\ref{sec:detcase}) to find such a transformation~$\lambda$. Then we will verify whether the conditions for linearity of the dispersion part of the system~\eqref{eq:14} also hold. There are~$n$ additional linearity conditions ($s_i$ are constants): \begin{align} \label{eq:1001} \left[\,\,\begin{matrix} \ad{f}{0}{\sigma}\\ \ad{f}{1}{\sigma}\\ \vdots\\ \ad{f}{n-1}{\sigma}\\ \end{matrix}\,\,\right] \quad \left[\,\,\begin{matrix} \parby{\lambda }{x_1}\\ \parby{\lambda }{x_2}\\ \vdots\\ \parby{\lambda }{x_n}\\ \end{matrix}\,\,\right]= \left[\,\,\begin{matrix} s_1\\ s_2\\ \vdots\\ s_n\\ \end{matrix}\,\,\right] .\end{align} In the deterministic case we were satisfied with an {\em arbitrary\/} solution~$\lambda $ of the equations~\eqref{eq:189} and~\eqref{eq:1020}. In the stochastic case we must find the class of {\em all\/} solutions and then check whether this class contains a solution for the~$\sigma$ part~\eqref{eq:1001}. 
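To make the conditions concrete, the sketch below runs them on a small hypothetical system ($n=2$, $f=(\sin x_2,0)$, $g=(0,1)$, $\sigma=(1,0)$, for which $\lambda = x_1$ works); Lie derivatives are approximated by central differences, and all helper names and the example system are ours:

```python
import math

h = 1e-6  # step for central differences

def grad(phi, x):
    out = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        out.append((phi(xp) - phi(xm)) / (2 * h))
    return out

def lie(F, phi):
    # Lie derivative of a scalar function phi along the vectorfield F
    return lambda x: sum(Fi * gi for Fi, gi in zip(F(x), grad(phi, x)))

# Hypothetical Stratonovich system (n = 2): f = (sin x2, 0), g = (0, 1),
# sigma = (1, 0); lambda = x1 solves the PDEs (7), (13) and (14).
f     = lambda x: [math.sin(x[1]), 0.0]
g     = lambda x: [0.0, 1.0]
sigma = lambda x: [1.0, 0.0]
lam   = lambda x: x[0]

T1, T2 = lam, lie(f, lam)        # T_i = L_f^(i-1) lambda
x0 = [0.3, 0.4]

assert abs(lie(g, T1)(x0)) < 1e-6            # L_g T_1 = 0
beta = 1.0 / lie(g, T2)(x0)                  # 1 / (L_g L_f lambda)
alpha = -lie(f, T2)(x0) * beta               # -(L_f^2 lambda)/(L_g L_f lambda)
assert abs(beta - 1.0 / math.cos(x0[1])) < 1e-4
assert abs(alpha) < 1e-4

# dispersion conditions: L_sigma T_i must be constant on U
for x in ([0.3, 0.4], [-0.2, 0.1], [1.0, -0.5]):
    assert abs(lie(sigma, T1)(x) - 1.0) < 1e-4
    assert abs(lie(sigma, T2)(x)) < 1e-4
```

Here the control part yields $T=(x_1,\sin x_2)$ with $\alpha=0$, $\beta=1/\cos x_2$, and the dispersion conditions hold with $s_1=1$, $s_2=0$.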
Details depend on the methods used for solving the set of PDEs. This result is summarized in the following algorithm: \begin{itemize} \item{Find $\Delta_k \define \adfg{i}$ for $0\le i \le k-1$.} \item{Verify that $\dim(\Delta_{n})$ is $n$.} \item{Verify that $\Delta_{n-1}$ is involutive (see \citet{nijmeijer94}, Remark following Definition 2.39); otherwise no linearizing transformation exists.} \item{Find all~$\lambda$ satisfying~\eqref{eq:189} by solving the PDEs~\eqref{eq:192}; denote by~$C$ the set of all such functions.} \item{Verify that there is a~$\lambda_1\in C$ such that the conditions~\eqref{eq:1001} are satisfied; otherwise no linearizing transformation exists.} \item{Compute $T$, $\alpha$, $\beta$ from \eqref{eq:160}--\eqref{eq:163}.} \end{itemize} Now we can illustrate one possible practical approach, which worked for several simple problems we solved (see the example in Section~\ref{sub:appcrane}). First we compute the kernel of the matrix~$M_g$ to find the form~$\omega = \left[ \omega_1, \omega_2, \dots, \omega_n \right]^T$ which satisfies~$M_g \omega = 0$, \abbrev{i.e}~$\omega$ is perpendicular to the rows of~$M_g$. In modern computer algebra systems there is a single command for this. Proposition~\ref{prop:s1sfb1} assumes that the~$n$ vectorfields~$\Delta_{n} \define \distrofg{n-1}$ form an~$n$-dimensional space. The vectorfields~$\Delta_{n-1} \define \distrofg{n-2}$ are chosen from them and consequently must form an~$(n-1)$-dimensional space. Thus their kernel~$d\lambda $ is exactly one-dimensional, and any~$\omega' = c(x)\omega(x)$ also belongs to the kernel ($c(x)$ is a scalar function). But not every~$\omega'$ that is perpendicular to~$M_g$ is a solution to the original linearization problem. The form~$\omega'$ must be an exact one-form, \abbrev{i.e} there must be a scalar function~$\lambda $ such that~$d \lambda = c(x)\omega(x)$. 
The Frobenius theorem guarantees that if~$\Delta_{n-1}$ is involutive, then there is always a function~$c(x)$ such that~$c(x)\omega(x)$ is an exact one-form. A necessary condition for a one-form~$\omega = \ssumi{n} \omega_i\,dx_i$ to be exact is \begin{equation} \label{eq:102} \parby{\omega_i}{x_j} = \parby{\omega_j}{x_i} \qquad \text{for} \qquad 1 \le i,j \le n .\end{equation} Hence for every~$1\le i,j \le n$ \begin{alignat}{3} \parby{}{x_j}\left(c(x)\omega_i \right) &= \parby{}{x_i}\left(c(x)\omega_j \right),\end{alignat} thus for every~$1\le i,j \le n$ \begin{alignat}{3} \label{eq:192} \parby{c(x)}{x_i}\omega_j - \parby{c(x)}{x_j}\omega_i + c(x) \left( \parby{\omega_j}{x_i} - \parby{\omega_i}{x_j} \right) = 0 .\end{alignat} The latter condition is a set of linear PDEs with unknown~$c(x)$, which is guaranteed to have a solution by the involutivity of~$\Delta_{n-1}$ (the Frobenius theorem). In our computations the equation~\eqref{eq:192} was in a simple form which allowed us to determine all the solutions easily. More complicated cases will require more sophisticated analysis. \subsection{It\^o $g\sigma$-linearization} \label{sub:isgsigma} In the previous subsection we tried to find~$g\sigma$-linearizations for Stratonovich dynamical systems. Once this is done, the correcting mapping can be used to construct an It\^o $g\sigma$-linearizing transformation. This method works for both the SFB and the SCT case. Given an It\^o system~$\Theta_I$, the corresponding Stratonovich system $\Theta_S$ can be obtained using the correcting mapping $\Theta_S = \Corr{\sigma}\left({\Theta_I}\right)$. Afterward, the Stratonovich $g\sigma$-linearization algorithm can be applied, giving a linear system~$\Theta_{2S}$. Because the dispersion vectorfield~$\tilde \sigma$ of~$\Theta_{2S}$ is constant, the correcting term~$\corr{{\tilde \sigma}}{z}$ of the backward transformation~${\Corr{{\tilde \sigma}}{}}^{-1}$ vanishes. 
\begin{theo} \label{prop:gsigmaprop} The SFB $g\sigma$-linearizing transformation~${\mathcal J}_I$ of the~It\^o dynamical system $\Theta_I = \ssysfgx \in \clmisysito$ , $f(\origi{x})=0$, $\corr{\sigma}{\origi{x}}=0$, into a $g\sigma$-controllable linear system exists if and only if there is a SFB $g\sigma$-linearizing transformation~${\mathcal J}_S$ of the Stratonovich dynamical system \begin{align} \label{eq:24} \Theta_S &= \ssys{\vec f}{g}{\sigma}{x} = \Corr{\sigma}{(\Theta_I)}\\ \vec f &= f + \corr{\sigma}{x} .\end{align} Moreover~${\mathcal J}_I = {\mathcal J}_S \compose \Corr{\sigma}{}$. \end{theo} \begin{proof}[Proof (sufficiency)] We use the properties of the correcting term (Subsection \ref{sub:corr}). Assume that there is a mapping~${\mathcal J}_S$ which transforms $\Theta_S$ into a linear~$g$-controllable system $(Ax,B,S,U,0)$. By~(\ref{eq:19}) \begin{align} \label{eq:248} {\mathcal J}_I = \Corr{{\tilde\sigma}}{}^{-1} \compose {\mathcal J}_S \compose \Corr{\sigma}{} .\end{align} The backward correcting transformation $\Corr{{\tilde \sigma}}^{-1}$ is the identity because the correcting term~$\corr{{\tilde \sigma}}{z}$ of the constant dispersion vectorfield~$S$ is zero. Thus~$ {\mathcal J}_I = {\mathcal J}_S \compose \Corr{\sigma}{} $ and~${\mathcal J}_I (\Theta_{I})$ equals $(Ax,B,S,U,0)$, which is linear and~$g$-controllable by assumption. \end{proof} \begin{proof}[Proof (necessity)] Assume that there is an It\^o transformation~${\mathcal J}_I$ which linearizes $\Theta_I$, and note that by~(\ref{eq:24}) $\Theta_I = {\Corr{\sigma}{}}^{-1}(\Theta_S)$. Construct the Stratonovich linearization as~${\mathcal J}_S = {\mathcal J}_I \compose {\Corr{\sigma}{}}^{-1} $. Hence~${\mathcal J}_I$ linearizes ${\Corr{\sigma}{}}^{-1}(\Theta_S)$, and~${\mathcal J}_S$ linearizes~$\Theta_S$ into the same linear and controllable system as~${\mathcal J}_I$. 
\end{proof} \subsection{It\^o ~$g$-linearization} \label{sub:sfbig} The It\^o $g$-linearization problem is probably the most complicated variant of exact linearization studied in this paper. The drift vectorfield of an It\^o dynamical system transformed by a coordinate transformation~$\sctt$ consists of two terms: the transformed vectorfield~$\tantra{T} f$ and the It\^o term~$\ito{\sigma}$. We require that the sum of these terms is linear; thus the nonlinearity of the transformed drift~$\tantra{T}f$ must compensate for the It\^o term. Since the It\^o term acts on~$T$ as a second order differential operator, this problem generates a set of second order partial differential equations. One can attempt to use simplifications as in the deterministic linearization, namely the recursive Leibniz rule~\eqref{eq:10}. Unfortunately, this approach does not work in the stochastic case. In general, the It\^o equations cannot be easily simplified by commutators, because the commutator of second order operators is not a second order operator but a third order one (see Subsection~\ref{sub:stoinvar}). Nevertheless, there are special cases for which simpler conditions can be found. The most important special case (commuting $g$ and $\sigma$) will be studied here. \subsubsection{Canonical Form ---$n$ unknowns} The canonical form for the $g$-linearization is the integrator chain (the dispersion vectorfield remains unconstrained) \begin{align} \label{eq:1007} \tilde f_i(x) &= x_{i+1} \qquad&&\text{for}\quad 1\le i\le n-1\\ \label{eq:1008} \tilde f_n(x) &= 0\\ \label{eq:1009} \tilde g_i(x) &= 0 \qquad&&\text{for}\quad 1\le i\le n-1\\ \label{eq:1010} \tilde g_n(x) &= 1 .\end{align} Assume that there is a~$g$-linear system~$\Theta_I = (Ax,B,\sigma(x),U,\origi{x})$. Then the drift part of~$\Theta_I$ can be transformed by a {\em linear\/} transformation into the integrator chain. This is because the It\^o term of a linear transformation vanishes. 
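The last remark, that the It\^o term of a linear transformation vanishes, is easy to check numerically. The sketch below (finite-difference Hessian, scalar noise, all names and the concrete $\sigma$ are ours) also evaluates a quadratic component, for which the It\^o term is nonzero:

```python
h = 1e-4  # finite-difference step

def d2(phi, x, i, j):
    # central estimate of the second partial derivative d^2 phi / dx_i dx_j
    def shift(si, sj):
        y = list(x)
        y[i] += si * h
        y[j] += sj * h
        return phi(y)
    return (shift(1, 1) - shift(1, -1) - shift(-1, 1) + shift(-1, -1)) / (4 * h * h)

def ito_term(sigma, phi):
    # Ito operator for scalar noise (k = 1):
    # ito_sigma(phi)(x) = 1/2 sum_ij (d^2 phi/dx_i dx_j) sigma_i sigma_j
    def op(x):
        s = sigma(x)
        n = len(x)
        return 0.5 * sum(d2(phi, x, i, j) * s[i] * s[j]
                         for i in range(n) for j in range(n))
    return op

sigma  = lambda x: [0.5, 2.0]           # hypothetical dispersion vectorfield
T_lin  = lambda x: 3 * x[0] - 2 * x[1]  # component of a linear transformation
T_quad = lambda x: x[0] * x[1]          # a quadratic component

x0 = [0.7, -0.3]
assert abs(ito_term(sigma, T_lin)(x0)) < 1e-6          # vanishes for linear T
assert abs(ito_term(sigma, T_quad)(x0) - 1.0) < 1e-4   # equals sigma_1*sigma_2
```

For the quadratic component the Hessian has unit off-diagonal entries, so the It\^o term equals $\sigma_1\sigma_2 = 1$ at every point.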
The equations which define~$T$ can be obtained by comparing this canonical form with the equations of~$\tilde \Theta$. \begin{prop} \label{prop:s1sfb2} Let~$\Theta_I = \ssysfgx \in \clsysito$ be an It\^o dynamical system with~$f(\origi{x})=0$ such that~$\corr{\sigma}{\origi{x}}=0$. There is a SFB~$g$-linearizing transformation~$\combinedtab$ of the system~$\Theta_{I}$ into a~$g$-controllable linear system if and only if there is a solution $T_i{\colon\RR^n \to \RR}$, $1\le i \le n$, to the set of partial differential equations defined on~$U$: \begin{alignat}{3} \label{eq:50} T_{i+1}&=\lie{f}{T_i}+\ito{\sigma}{T_i} \qquad&&\text{for}\quad 1\le i\le{n-1}\\ \label{eq:51} \lie{g}{T_i}&=0 &&\text{for}\quad 1\le i\le{n-1}\\ \label{eq:52} \lie{g}{T_n}&\not=0 .\end{alignat} The symbol~$\ito{\sigma}$ denotes the It\^o operator (see~\eqref{eq:16}). The feedback can be constructed as: \begin{align} \label{eq:53} \alpha &=-\frac{(\lie{f}{T_n}+\ito{\sigma}{T_n})}{\lie{g}{}T_n} \qquad\qquad \beta =\frac{1}{\lie{g}{T_n}} .\end{align} \end{prop} \begin{proof} The~$i$-th components of~$\tilde f$,~$\tilde g$, and~$\tilde \sigma$ are:~$\tilde f_i = \lie{f}{T_i} + \ito{\sigma}{T_i}$, $\tilde g_i = \lie{g}{T_i}$, $\tilde \sigma_i = \lie{\sigma}{T_i}$. The partial differential equations \eqref{eq:50}, \eqref{eq:51} and \eqref{eq:52} are obtained by comparison of \eqref{eq:86}-\eqref{eq:89} with the equations~\eqref{eq:1007}-\eqref{eq:1010}. \end{proof} \subsubsection{PDEs of a single unknown} One can attempt to reduce the equations~\eqref{eq:50}, \eqref{eq:51}, and~\eqref{eq:52} to a set of equations of a single unknown, similarly to the results of Proposition \ref{prop:d1sfb1}. \begin{coroll} \label{prop:sfbooo} Define the general second order operator~$O(f,F)$ as in~\eqref{eq:17}. The exponential notation for~$O$ will be defined recursively: $O^0(f,F)T \define T$ and $O^{l+1}(f,F)T \define O(f,F)O^l(f,F)T$ for~$l \ge 0$. Next, define \begin{align} F_{ij} \define \frahalf \sigma_i \sigma_j . 
\end{align} Then the set of partial differential equations~\eqref{eq:50}-\eqref{eq:52} of~$n$ unknowns has a solution if and only if there is a solution~$\lambda {\colon\RR^n \to \RR}$ defined on~$U$ to the set of PDEs of a single unknown: \begin{alignat}{3} \label{eq:153} O(g,0)O^i(f,F)\lambda &=0\qquad&&\text{for}\quad 0\le i\le n-2\\ \label{eq:154} O(g,0)O^{n-1}(f,F)\lambda &\ne 0 . \end{alignat} The original solution and the feedback can be found as \begin{alignat}{3} T_i&=O^{i-1}(f,F)\lambda \qquad&&\text{for}\quad 1 \le i \le n\\ \label{eq:58} \alpha &= - \frac{O^{n}(f,F)\lambda }{O(g,0)O^{n-1}(f,F)\lambda } \quad\quad \beta = \frac{1}{O(g,0)O^{n-1}(f,F)\lambda } . \end{alignat} \end{coroll} \begin{proof} Since~$T_1=\lambda $ and by definition of~$\ito{\sigma}{}$ \eqref{eq:16} and~$O$~\eqref{eq:17}: \begin{alignat}3 \label{eq:264} O(f,F)T_i &= \lie{f}{T_i} + \ito{\sigma}{T_i} \qquad&&\text{for}\quad 1\le i\le n\\ \label{eq:265} O(g,0)T_i &=\lie{g}{T_i} &&\text{for}\quad 1\le i\le n ; \end{alignat} then~$T_{i+1} = O(f,F)T_i$ by~\eqref{eq:50} and $T_i = O^{i-1}(f,F)T_1 = O^{i-1}(f,F)\lambda $. Similarly the equation \eqref{eq:53} implies~\refprop{eq:58}. \end{proof} Note that for the deterministic case~$\sigma=0$, the operators~$O(g,0)$ and~$O(g,0)O^i(f,F)$ degenerate to~$\lie{g}{}$ and to~$\lie{g}{}\multilie{f}{i}{}$ respectively; thus the result is the same as that of Proposition~\ref{prop:d1sfb1}. In general, the equations of the system are of an order up to~$2n$ and cannot be reduced to a lower order. The commutator of two second order operators is of third order, as was pointed out in~\eqref{eq:60}. In particular, for~$i=1$ we have to evaluate~$C(g,0,f,F) \definer O(a,A)$, which {\em is\/} of second order due to the fact that~$G=0$. But starting from~$i=2$ the commutator~$C(f,F,a,A) = C(f,F,C(g,0,f,F))$ is of third order. \subsubsection{Correcting Term} The same problem can be reformulated using conversion to the Stratonovich formalism. 
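Before turning to that reformulation, note that the operator~$O$ of Corollary~\ref{prop:sfbooo} is easy to implement symbolically. The sketch below is ours; since the definition~\eqref{eq:17} is not reproduced here, we assume the form that is consistent with~\eqref{eq:264} and~\eqref{eq:265}, i.e.\ a first order drift term plus a second order term with coefficients~$F_{ij}$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]

def O(vec, F, T):
    # O(vec, F) T = sum_i vec_i * dT/dx_i + sum_{i,j} F_ij * d^2 T/(dx_i dx_j),
    # so that O(f, F) T matches L_f T + Ito term and O(g, 0) T = L_g T
    first = sum(vec[i] * sp.diff(T, X[i]) for i in range(len(X)))
    second = sum(F[i][j] * sp.diff(T, X[i], X[j])
                 for i in range(len(X)) for j in range(len(X)))
    return sp.expand(first + second)

# toy data: f = (x2, 0), sigma = (0, 1), hence F_ij = sigma_i * sigma_j / 2
f = [x2, sp.Integer(0)]
F = [[sp.Integer(0), sp.Integer(0)], [sp.Integer(0), sp.Rational(1, 2)]]
T1 = x1 * x2
T2 = O(f, F, T1)   # plays the role of T_{i+1} = O(f, F) T_i in eq:50
```

Iterating this function gives the chain $T_i = O^{i-1}(f,F)\lambda$ directly.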
One can compute the drift vectorfield~$\vec f$ of the equivalent Stratonovich system by applying the correcting term:~$\vec f \define f + \corr{\sigma}{x}$. Then the Stratonovich system~$\ssys{\vec f}{g}{\sigma}{x}$ may be transformed, by a suitable transformation, to such a form~$(\tilde f,\tilde g,\tilde \sigma,T(U),0)$ that after applying the backward correcting term~$-\corr{{\tilde \sigma}}{z}$ the resulting It\^o system will be linear. Compare this formulation with the~$g\sigma$-linearization, where the backward correcting term vanished due to linearity of~$\tilde \sigma$. This does not happen with the~$g$-linearization, and the backward correcting term is a part of the equations. In general, it may be difficult to solve these equations. Nevertheless, there are special cases in which the solution can be obtained. See for example Section~\ref{sub:appcrane}. Another important case (commuting $g$ and~$\sigma$) is studied below. \begin{coroll} \label{prop:s1sfb3} The equations of Proposition~\ref{prop:s1sfb2} are equivalent to~$n$ partial differential equations: \begin{alignat}3 \label{eq:143} T_{i+1}&=\lie{\vec f}{T_i} + \frahalf \lie{\sigma}{\lie{\sigma}{T_i}} \qquad&&\text{for}\quad 1 \le i \le{n-1}\\ \label{eq:144} \lie{g}{T_i}&=0&&\text{for}\quad 1 \le i \le{n-1}\\ \label{eq:145} \lie{g}{T_n}&\not=0 ,\end{alignat} where $\vec f \define f + \corr{\sigma}{x}$. Then the feedback can be constructed as \begin{align} \alpha &=-\frac{\lie{\vec f}{T_n} + \frahalf \lie{\sigma}{\lie{\sigma}{T_n}}}{\lie{g}{T_n}} \qquad\qquad \beta =\frac{1}{\lie{g}{T_n}} .\end{align} 
\end{coroll} \begin{proof} The equations~\eqref{eq:143} can be obtained from~\eqref{eq:50} by applying~\eqref{eq:215} and~\eqref{eq:140}: \begin{multline} \label{eq:216} T_{i+1} = \lie{f}{T_i}+\ito{\sigma}{T_i} = \lie{\left( \vec f - \corr{{\vec \sigma}}{x} \right)}{T_i}+ \ito{\sigma}{T_i} = \\ \lie{\vec f}{T_i} - \lie{\corr{{\vec \sigma}}{x}}{T_i} + \ito{\sigma}{T_i} = \lie{\vec f}{T_i} + \frahalf\lie{\sigma}{\lie{\sigma}{T_i}} .\end{multline} The other equations are adopted from Proposition~\ref{prop:s1sfb2}. The set of PDEs of~$n$ unknowns can be transformed into a set of PDEs of a single unknown~$\lambda =T_1$, but the order of the equation will be~$2n-1$. \end{proof} \begin{rem} \label{prop:s1sfb4} Observe that the set of~$n$ second order partial differential equations~\eqref{eq:143}-~\eqref{eq:145} defined in Proposition~\ref{prop:s1sfb3} can be transformed, by introducing new variables~$S_i = \lie{\sigma}{T_i}$, to the following system of~$2n-1$ first order partial differential equations for~$1 \le i \le {n-1}$: \begin{alignat}3 \label{eq:146} \lie{g}{T_i}&=0\\ \label{eq:147} \lie{\sigma}{T_i}-S_i&=0\\ \label{eq:148} \lie{{\vec f}}{T_i} + \frahalf\lie{\sigma}{S_i}&=T_{i+1}\\ \label{eq:149} \lie{g}{T_n}&\not=0 .\end{alignat} $T_i$ and~$S_i{\colon\RR^n \to \RR}$ are unknown real valued functions defined on $U$. \end{rem} \subsubsection{Systems with Commuting $g$ and $\sigma$} There is a special case of It\^o dynamical systems for which the solution is completely known and can be computed using only first order PDEs. \begin{theo} \label{prop:ssfbco} Let~$\Theta_I = \ssysfgx \in \clsysito$ be a SISO It\^o dynamical system. 
If the vectorfield~$\sigma$ commutes with all vectorfields~$\ad{{\vec f}}{i}{g}$ for~$0\le i\le n-1$, \abbrev{i.e} ~$[\ad{{\vec f}}{i}{g},\sigma]=0$, where $\vec f \define f + \corr{\sigma}{x}$, then the It\^o system is $g$-linearizable if and only if the distribution \begin{align} \vec \Delta_{n} \define \operatorname{span}\left\{{\ad{{\vec f}}{i}{g}, \iva{n-1}}\right\} \end{align} is nonsingular on~$U$ and the distribution \begin{align} \vec \Delta_{n-1} \define \operatorname{span}\left\{{\ad{{\vec f}}{i}{g}, \iva{n-2}}\right\} \end{align} is involutive on~$U$. If these conditions hold, then a solution~$\lambda {\colon\RR^n \to \RR}$ to the set of partial differential equations \begin{align} \label{eq:77} \biglie{\ad{{\vec f}}{i}{g}}{\lambda }&=0\rfor{\iva{n-2}}\\ \label{eq:78} \biglie{\ad{{\vec f}}{{n-1}}{g}}{\lambda }&\not=0 \end{align} exists, and the linearizing transformation is given by: \begin{alignat}3 \label{eq:79} T_1&=\lambda \\ \label{eq:80} T_{i+1}&= \lie{\vec f}{T_i} + \frahalf \lie{\sigma}{\lie{\sigma}{T_i}} \qquad&&\text{for}\quad 1 \le i \le{n-1}\\ \alpha &=\frac{-\multilie{\vec f}{{n}}\lambda }{\lie{g}{}\multilie{\vec f}{{n-1}}\lambda } &\qquad\qquad& \label{eq:81} \beta =\frac{1}{\lie{g}{}\multilie{\vec f}{{n-1}}\lambda } .\end{alignat} \end{theo} \begin{proof} We will apply the Leibniz rule to relation~\eqref{eq:143} of Corollary~\ref{prop:s1sfb3} to expand the term~$\lie{g}{T_{i+1}}$ for~$1\le i\le n-1$: \begin{multline} \label{eq:195} \lie{g}{T_{i+1}} = \lie{g}{ \left( \lie{{\vec f}}{T_i}+ \frahalf \lie{\sigma}{\lie{\sigma}{T_i}} \right) } = \lie{g}{\lie{{\vec f}}{T_i}} + \frahalf \lie{g}{\lie{\sigma}{\lie{\sigma}{T_i}}} =\\ \lie{{\vec f}}{\lie{g}{T_i}} - \lie{[\vec f,g]}{T_i} + \frahalf \lie{\sigma}{\lie{g}{\left(\lie{\sigma}{T_i}\right)}} - \frahalf \lie{[\sigma,g]}{\left(\lie{\sigma}{T_i}\right)} =\\ 0 - \lie{[\vec f,g]}{T_i} + \frahalf \lie{\sigma}{ \left( \lie{\sigma}{\lie{g}{T_i}} - 
\lie{[\sigma,g]}{T_i} \right) } - \frahalf \left( \lie{\sigma}{\lie{[\sigma,g]}{T_i}} - \lie{[\sigma,[\sigma,g]]}{T_i} \right) =\\ -\lie{[\vec f,g]}{T_i} + 0 - \lie{\sigma}{\lie{[\sigma,g]}{T_i}} + \frahalf \lie{[\sigma,[\sigma,g]]}{T_i} .\end{multline} If the vectorfields~$g$ and~$\sigma$ commute, then the second and third terms vanish. If, moreover, $[\sigma,[\vec f,g]] = 0$ then \begin{alignat}{3} \label{eq:257} \lie{g}{{T_{i+2}}} &= - \lie{{[\vec f,g]}}{ \left( \lie{{\vec f}}{T_i} + \frahalf \lie{\sigma}{\lie{\sigma}{T_i}} \right)} = \lie{{[\vec f,[\vec f,g]]}}{T_i} .\end{alignat} In general, if~$[\sigma,\ad{{\vec f}}{i}{g}]=0$ for~$0\le i\le n-1$ then \begin{alignat}{3} \label{eq:258} \lie{g}{T_{k}} &= (-1)^{k-1} \lie{{\ad{{\vec f}}{k-1}{g}}}{T_1} .\end{alignat} Thus the equations~\eqref{eq:143} and \eqref{eq:144} will be equivalent to \begin{alignat}{3} \label{eq:193} \biglie{\ad{{\vec f}}{i}{g}}{\lambda }&=0\qquad&&\text{for}\quad{\iva{n-2}}\\ \label{eq:194} \biglie{\ad{{\vec f}}{{n-1}}{g}}{\lambda }&\not=0 ,\end{alignat} which are of the same form as the equations of Proposition~\ref{prop:d1sfb1} and consequently the conditions from Proposition \ref{prop:d1sfb2} can be used. \end{proof} \subsection{It\^o and Stratonovich $\sigma$-linearization} \label{sub:sfbchsigma} The stochastic SFB $\sigma$-linearization problem is similar to deterministic SCT linearization. The dispersion vectorfield~$\sigma$ transforms in the same way as deterministic drift vectorfields do. Consequently, no It\^o term complicates the transformation. Moreover, the It\^o and Stratonovich cases are equivalent. On the other hand, in the SFB $\sigma$-linearization we are free to choose the feedback~$\feedbackab$ that perturbs the drift vectorfield~$f$ into~$\tilde f = f + g \alpha $. \psfig{fig:sigmasfb}{It\^o and Stratonovich $\sigma$-linearization}{sigmasfb} \begin{prop} \label{prop:sfsigma} Let~$\Theta$ be a SISO stochastic system~$\Theta = \ssysfgx$. 
There is a SFB $\sigma$-linearizing transformation~$\combinedtab$ into a $\sigma$-controllable linear system if and only if there is a smooth feedback function~$\alpha {\colon\RR^n \to \RR}$ such that the deterministic system~$\dsys{f+g\alpha }{\sigma}{x}$ has a SCT linearizing transformation~$\sctt$. Equivalently, there must be an~$\alpha $ such that the modified odd bracket condition: \begin{equation} \label{eq:59} [\sigma,\ad{f+g \alpha }{l}{\sigma}]=0 \qquad\text{for}\qquad l = 1,\dots,2n-1 \end{equation} is satisfied (see~\eqref{eq:45}). The resulting combined transformation consists of the composition of the coordinate transformation~$\sctt$ and the feedback~$\feedbackab$, where~$\beta $ is an arbitrary function of~$x$; for instance~$\beta = 1$. \end{prop} \begin{proof} Compare the definition of linearity of a deterministic system with the definition of $\sigma$-linearity. The system is $\sigma$-linearizable if and only if the deterministic system~$\dsys{f+g\alpha }{\sigma}{x}$ is SCT linearizable (see Corollary~\ref{prop:s1ctt2a}). It is evident that the function~$\beta $ (see Figure~\ref{fig:sigmasfb}) has no effect on the dispersion part and can be chosen arbitrarily. (It should be nonzero, for otherwise the system will be $g$-uncontrollable.) \end{proof} The condition~\eqref{eq:59} can be expressed in terms of derivatives of $\alpha $ using bracket relations known from differential geometry. For example, for~$l=1$: \begin{align} \label{eq:156} [\sigma,[f+g \alpha ,\sigma ]] &= [\sigma, [f,\sigma ]+[g \alpha ,\sigma ]] = [\sigma,[f,\sigma ]]+[\sigma,[g \alpha ,\sigma ]]\\ &=[\sigma,[f,\sigma ]] + \alpha [\sigma,[g,\sigma ]] + 2 (\lie{\sigma}{\alpha }) [g,\sigma ] - (\lie{\sigma}{\lie{\sigma}{\alpha }})\, g .\end{align} The other conditions for~$l=3,5,7,\dots$ can be expressed in a similar way, giving a set of~$n$ partial differential equations of order up to~$2n$, which can be generated, for example, by a computer using symbolic algebra tools. The problem is not very interesting from the practical point of view. 
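Both the commutation hypothesis of Theorem~\ref{prop:ssfbco} and conditions such as~\eqref{eq:59} reduce to Lie bracket computations, which are easy to automate. A minimal sympy sketch (the two-state vectorfields are our own toy choice, not the crane model):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def bracket(f, g):
    # Lie bracket [f, g] = (Dg) f - (Df) g
    return sp.simplify(g.jacobian(X) * f - f.jacobian(X) * g)

f = sp.Matrix([x2, -sp.sin(x1)])           # pendulum-like drift
g = sp.Matrix([0, 1])                      # input vectorfield
sigma = sp.Matrix([0, sp.Rational(1, 5)])  # constant dispersion

ad1 = bracket(f, g)                        # ad_f^1 g
# sigma commutes with g and with ad_f g, as the special case requires
commutes = (bracket(sigma, g) == sp.zeros(2, 1)) and (bracket(sigma, ad1) == sp.zeros(2, 1))
```

Here a constant dispersion trivially satisfies the commutation hypothesis, so the first order criteria apply.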
\section{Example---Crane} \label{sub:appcrane} In this section the methods of stochastic exact linearization are demonstrated on an example --- control of a crane under the influence of random disturbances. The description of the plant was adapted from~\citet{ackermann93}, where a model of a crane linearized by approximative methods was studied. Unlike Ackermann, we control the same system using the exact model. Moreover, the influence of random disturbances is added. \psfig{fig:crane1}{Crane}{crane1} Consider the crane of Figure~\ref{fig:crane1}, which can be used, for example, for loading containers into a ship. The hook must be automatically placed to a given position. Feedback control is needed in order to dampen the motion before the hook is lowered into the ship. The input signal is the force~$u$ that accelerates the crab. The crab mass is~$m_C$, the load mass is~$m_L$, the rope length is~$l$, and the gravity acceleration is~$g$. We assume that the driving motor has no nonlinearities, there is no friction or slip, no elasticity of the rope, and no damping of the pendulum (\abbrev{eg} from air drag). We will define four state variables: the rope angle $x_1$ (in radians), the angular velocity~$x_2 = \dot x_1$, the position of the crab~$x_3$, and the velocity of the crab~$x_4 = \dot x_3$. As shown in~\citet{ackermann93}, the plant is described by two second order differential equations: \begin{align} \label{eq:72} u &= (m_L + m_C) \ddot x_3 + m_L l ( \ddot x_1 \cos x_1 - \dot x_1^2 \sin x_1) \\ \label{eq:73} 0 &=m_L \ddot x_3 \cos x_1 + m_L l \ddot x_1 + m_L g \sin x_1 .\end{align} Additionally, we assume that the load is under the influence of a random disturbance, which can be modeled as a white noise process. The disturbance (wind) is horizontal, has zero mean, and can be described by the It\^o differential~$dw$: \begin{align} \label{eq:74} dx_2 = \frac{F \cos x_1}{m_L l}\,dw,\end{align} where~$F$ is a constant having the physical unit of force. 
We used the symbolic algebra system \Mathematica to handle the computations. The complete \Mathematica worksheet can be downloaded from the web page of the author \hyref{http://www.tenzor.cz/sladecek}. \Mathematica was used to solve the equations of the system for the unknown values~$\dot x_2$ and~$\dot x_4$ (angular and positional acceleration). Values of the vectorfields~$f$, $g$, and $\sigma$ were derived as follows: \begin{align} \label{eq:76} f &= \left[ x_2, -\frac{\sin x_1 \left( g (m_L+m_C) + l m_L x_2^2 \cos x_1 \right) } {l (m_C + m_L - m_L \cos^2 x_1) }, x_4, \frac{m_L \sin x_1 \left( g \cos x_1 + l x_2^2 \right)} {m_C + m_L - m_L \cos^2 x_1} \right]^T\\ g &= \left[ 0, -\frac{\cos x_1 } {l (m_C + m_L - m_L \cos^2 x_1) }, 0, \frac{1}{m_C + m_L - m_L \cos^2 x_1} \right]^T\\ \sigma &= \left[0, \frac{F \cos x_1}{m_L l},0,0 \right]^T .\end{align} The state space model is shown in Figure~\ref{fig:crane2}. We can see that the angular state variables~$x_1$ and~$x_2$ evolve independently of the positional state variables~$x_3$ and~$x_4$. Later, we will concentrate on the angular variables, pretending that the load will be stabilized no matter where the crane is. Consequently, we obtain a two-dimensional system on which the exact linearization techniques can be demonstrated. \psfig{fig:crane2}{The State Space Model of Crane}{crane2} Next, consider the random disturbances. Because the correcting term~$\corr{\sigma}{x}$ is zero, there is no difference in using either the It\^o or the Stratonovich integral. In the case of more ``nonlinear'' noise, one of the integrals must be selected. If the It\^o model is chosen, Theorem~\ref{prop:gsigmaprop} must be applied. Now we evaluate the conditions of Proposition~\ref{prop:s1sfb1} to check that the system is linearizable. In fact, we must only evaluate the non-singularity condition because every one-dimensional distribution is involutive, and the integrability is satisfied automatically. To this end, we compute the null space (kernel) of the matrix~$[[f,g],g]$, which is trivial, and therefore the matrix is nonsingular. 
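The vectorfield derivation above can be reproduced with other computer algebra systems as well; a sympy sketch of ours (symbol names are our own) that solves~\eqref{eq:72} and~\eqref{eq:73} for the accelerations and extracts the drift and input coefficients of the angular equation:

```python
import sympy as sp

x1, x2, mC, mL, l, grav, u = sp.symbols('x1 x2 m_C m_L l g u', positive=True)
a1, a3 = sp.symbols('a1 a3')   # the accelerations \ddot x_1 and \ddot x_3

# eq:72 and eq:73, with \dot x_1 = x2
eq72 = sp.Eq(u, (mL + mC) * a3 + mL * l * (a1 * sp.cos(x1) - x2**2 * sp.sin(x1)))
eq73 = sp.Eq(0, mL * a3 * sp.cos(x1) + mL * l * a1 + mL * grav * sp.sin(x1))

sol = sp.solve([eq72, eq73], [a1, a3])   # linear in a1, a3
f2 = sol[a1].subs(u, 0)                  # drift component of the angular acceleration
g2 = sp.diff(sol[a1], u)                 # input coefficient of the angular acceleration
```

The resulting expressions agree with the second components of~$f$ and~$g$ in~\eqref{eq:76}, up to the identity $m_C + m_L - m_L\cos^2 x_1 = m_C + m_L\sin^2 x_1$.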
We conclude that the {\em deterministic\/} SFB problem is solvable. Notice that the system is already in the integrator chain form and hence~$\lambda = x_1$ satisfies this condition. Therefore, the {\em deterministic\/} system is linearizable by feedback only, with no state space transformation at all, \abbrev{i.e} $z=T(x)=x$. This choice of the output function~$\lambda $ is natural but does not cancel the nonlinearity in the dispersion coefficient~$\sigma$. For this purpose, we must use the algorithm of Section~\ref{prop:cftdp} to construct another nontrivial coordinate transformation~$T$. To obtain this transformation, we must find the space of all functions~$\lambda $ satisfying the conditions for feedback linearity~\eqref{eq:7}. Observe that~$\lie{g}{\lambda }$ must be zero, hence \begin{align} \label{eq:75} \parby{\lambda }{x_1} g_1 + \parby{\lambda }{x_2} g_2 &= 0 .\end{align} Since~$g_1=0$ and~$g_2 \ne 0$ in a neighborhood of~$x_0$, then $\parby{\lambda }{x_2} = 0$ and $\lambda = c_1(x_1)$ is a function of~$x_1$ only (\abbrev{i.e} without~$x_2$). The coordinate transformation is $T = \left[\lambda ,\lie{f}{\lambda } \right]^T$. We want to select such $c_1(x_1)$ that the dispersion vectorfield~$\tilde \sigma \define \tantra{T}\sigma$ in the new coordinate system~$z=T(x)$ will be constant: \begin{align} \label{eq:70} \parby{c_1(x_1)}{x_1} \frac{F \cos x_1}{m_L l} = \text{constant} .\end{align} We decided to define the constant as~${F}/{(m_L l)}$, therefore \begin{align} \label{eq:221} \parby{c_1}{x_1} &= \frac{1}{\cos x_1} \end{align} and \begin{multline} \label{eq:71} T_1 = \lambda = c_1(x_1) = \int \frac{1}{\cos x_1} \, d x_1 =\\ -\ln \left( \cos \frahalf{x_1} - \sin \frahalf{x_1} \right) +\ln \left( \cos \frahalf{x_1} + \sin \frahalf{x_1} \right) .\end{multline} \begin{align} \label{eq:222} T_2 = \lie{f}{\lambda } &= x_2 \sec x_1 .\end{align} Finally, we can compute the feedback from~\eqref{eq:28}. 
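That the logarithmic expression~\eqref{eq:71} is indeed an antiderivative of~$1/\cos x_1$ can be confirmed with a short symbolic check (ours):

```python
import sympy as sp

x1 = sp.symbols('x1')
T1 = (-sp.log(sp.cos(x1 / 2) - sp.sin(x1 / 2))
      + sp.log(sp.cos(x1 / 2) + sp.sin(x1 / 2)))

# d T1 / d x1 - sec(x1) should vanish identically
residual = sp.diff(T1, x1) - 1 / sp.cos(x1)
```

Equivalently, $T_1 = \ln\bigl((1+\sin x_1)/\cos x_1\bigr) = \ln(\sec x_1 + \tan x_1)$.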
In the \Mathematica worksheet we validate the results by computing~$\tilde \Theta = \combinedtab \ssysfgx $. The computation showed that the system~$\ssys{\hat f}{\hat g}{\hat \sigma}{x}$ is in the integrator chain form in the~$z$ coordinate chart. \begin{multline} b=\frac{1}{l \left( m_C + m_L \left( \sin (x_1) \right)^2 \right)}\\ a=\tan ({x_1}) \left( \sec (x_1)\,x_2^2 - b\,g\, ( m_C + m_L ) - b\,l\,m_L\,{x_2^2}\cos ({x_1}) \right) .\end{multline} \section{Conclusion} \subsection{Main Results} \begin{romanenu} \item The structure of the stochastic linearization problem is much richer than the structure of the deterministic one. Two definitions of coordinate transformation exist and there are differences between the~$g$,~$\sigma$, and~$g\sigma$-linearization. \item In the case of It\^o integrals, the coordinate transformation laws are of second order (the It\^o rule). There is a large difference between $g\sigma$-linearization and $g$-linearization. In the former case the effect of the It\^o term can be reduced to a first order operator and consequently the problem is solvable by differential geometry. On the other hand, in the latter case there is no easy method to eliminate the It\^o term, and a set of second order partial differential equations must be solved to get the linearizing transformation. \item We have given (at least partial) solutions to all SISO SFB problems. The results are listed in Table~\ref{tab:sfb}. \begin{table}[htpb] \begin{center} \begin{tabular}{llll} \hline Linearization&$g$&$g\sigma$&$\sigma$\\ \hline Deterministic&\reftpropag{prop:d1sfb2}&&\\ Stratonovich &\refppropag{prop:stratgprop} &\reftpropag{prop:s1sfb1} &\refppropag{prop:sfsigma}\\ It\^o &\refcorol{prop:sfbooo} &\reftpropag{prop:gsigmaprop} &\refppropag{prop:sfsigma}\\ \hline \end{tabular} \caption{Overview of results --- SISO SFB case} \label{tab:sfb} \end{center} \end{table} \item It\^o linearization problems can be approached by means of the correcting term. 
The It\^o differential equation can be converted to the Stratonovich equation, whose behavior under coordinate transformations is simpler. This method is only partially applicable to the $g$-linearization. \item An important special case was identified for the It\^o $g$-linearization. The case is characterized by commuting control vectorfields~$g$ and dispersion vectorfields~$\sigma$. Solutions can be found using first order methods. \item Computer algebra proved to be a useful tool for solving exact linearization problems. \item Industrial applications of the exact linearization in general are still unlikely, mainly due to complexity, sensitivity, and limited robustness of the control laws designed by the method. \end{romanenu} \subsection{Future Research} \begin{romanenu} \item Find a solution to the It\^o ~$g$-linearization problem in the general case, including geometric criteria, using second order geometry. \item Analyze the computability issues; implement a universal symbolic algebra toolbox for the problem. \item Solve the SCT problem. \item Extend the results to MIMO systems. \item Extend the results to input-output problems and to the linearization of autonomous systems. Work out the applications of nonlinear filtering. \item Perhaps some of the results can be used as a starting point for approaching a more general class of problems, such as disturbance decoupling, input invariance of stochastic nonlinear systems, or reachability and observability. \end{romanenu} \end{document}
# Lecture notes: harmonic analysis Russell Brown<br>Department of mathematics<br>University of Kentucky<br>Lexington, KY 40506-0027 12 June 2001 ## Chapter 1 ## The Fourier transform on $L^{1}$ In this section, we define the Fourier transform and give the basic properties of the Fourier transform of an $L^{1}\left(\mathbf{R}^{n}\right)$ function. We will use $L^{1}$ to denote the space of Lebesgue measurable functions for which the norm $\|f\|_{1}=\int_{\mathbf{R}^{n}}|f(x)| d x$ is finite. More generally, $L^{p}\left(\mathbf{R}^{n}\right)$ denotes the space of Lebesgue measurable functions for which $\|f\|_{p}=\left(\int_{\mathbf{R}^{n}}|f(x)|^{p} d x\right)^{1 / p}$ is finite. When $p=\infty$, the space $L^{\infty}\left(\mathbf{R}^{n}\right)$ is the collection of measurable functions which are bounded, after we neglect a set of measure zero. These spaces of functions are examples of Banach spaces. We recall that a vector space $V$ over $\mathbf{C}$ with a function $\|\cdot\|$ is called a normed vector space if $\|\cdot\|: V \rightarrow[0, \infty)$ and satisfies $$ \begin{aligned} \|f+g\| & \leq\|f\|+\|g\|, \quad f, g \in V \\ \|\lambda f\| & =|\lambda|\|f\|, \quad f \in V, \quad \lambda \in \mathbf{C} \\ \|f\| & =0, \quad \text { if and only if } f=0 . \end{aligned} $$ A function $\|\cdot\|$ which satisfies these properties is called a norm. If $\|\cdot\|$ is a norm, then $\|f-g\|$ defines a metric. A normed vector space $V,\|\cdot\|$ is called a Banach space if $V$ is complete in the metric defined using the norm. Throughout these notes, functions are assumed to be complex valued. ### Definition and symmetry properties We define the Fourier transform. In this definition, $x \cdot \xi$ is the inner product of two elements of $\mathbf{R}^{n}, x \cdot \xi=\sum_{j=1}^{n} x_{j} \xi_{j}$. 
Definition 1.1 If $f \in L^{1}\left(\mathbf{R}^{n}\right)$, then the Fourier transform of $f, \hat{f}$, is a function defined on $\mathbf{R}^{n}$ and is given by $$ \hat{f}(\xi)=\int_{\mathbf{R}^{n}} f(x) e^{-i x \cdot \xi} d x $$ The Fourier transform is a continuous map from $L^{1}$ to the bounded continuous functions on $\mathbf{R}^{n}$. Proposition 1.2 If $f \in L^{1}\left(\mathbf{R}^{n}\right)$, then $\hat{f}$ is continuous and $$ \|\hat{f}\|_{\infty} \leq\|f\|_{1} . $$ Proof. If $\xi_{j} \rightarrow \xi$, then $e^{-i x \cdot \xi_{j}} \rightarrow e^{-i x \cdot \xi}$. Hence by the Lebesgue dominated convergence theorem, $\hat{f}\left(\xi_{j}\right) \rightarrow \hat{f}(\xi)$. The inequality in the conclusion of Proposition 1.2 is equivalent to the continuity of the map $f \rightarrow \hat{f}$. This is an application of the conclusion of the following exercise. Exercise 1.3 A linear map $T: V \rightarrow W$ between normed vector spaces is continuous if and only if there exists a constant $C$ so that $$ \|T f\|_{W} \leq C\|f\|_{V} $$ In the following proposition, we use $A^{-t}=\left(A^{-1}\right)^{t}$ for the transpose of the inverse of an $n \times n$ matrix, $A$. Exercise 1.4 Show that if $A$ is an $n \times n$ invertible matrix, then $\left(A^{-1}\right)^{t}=\left(A^{t}\right)^{-1}$. Exercise 1.5 Show that if $A$ is an $n \times n$ matrix, then $A x \cdot y=x \cdot A^{t} y$. Proposition 1.6 If $A$ is an $n \times n$ invertible matrix, then $$ \widehat{f \circ A}=|\operatorname{det} A|^{-1} \hat{f} \circ A^{-t} $$ Proof. If we make the change of variables $y=A x$ in the integral defining $\widehat{f \circ A}$, then we obtain $$ \int_{\mathbf{R}^{n}} f \circ A(x) e^{-i x \cdot \xi} d x=|\operatorname{det} A|^{-1} \int_{\mathbf{R}^{n}} f(y) e^{-i A^{-1} y \cdot \xi} d y=|\operatorname{det} A|^{-1} \int_{\mathbf{R}^{n}} f(y) e^{-i y \cdot A^{-t} \xi} d y . 
$$ A simple application of this proposition is that if we set $f_{\epsilon}(x)=\epsilon^{-n} f(x / \epsilon)$, then $$ \hat{f}_{\epsilon}(\xi)=\hat{f}(\epsilon \xi) $$ Recall that an orthogonal matrix is an $n \times n$-matrix with real entries which satisfies $O^{t} O=I_{n}$ where $I_{n}$ is the $n \times n$ identity matrix. Such matrices are clearly invertible since $O^{-1}=O^{t}$. The group of all such matrices is usually denoted by $O(n)$. Corollary 1.8 If $f \in L^{1}\left(\mathbf{R}^{n}\right)$ and $O$ is an orthogonal matrix, then $\hat{f} \circ O=\widehat{f \circ O}$. Exercise 1.9 If $x \in \mathbf{R}^{n}$, show that there is an orthogonal matrix $O$ so that $O x=$ $(|x|, 0, \ldots, 0)$. Exercise 1.10 Show that an $n \times n$ matrix $O$ on $\mathbf{R}^{n}$ is orthogonal if and only if $O x \cdot O x=x \cdot x$ for all $x \in \mathbf{R}^{n}$. We say that a function $f$ defined on $\mathbf{R}^{n}$ is radial if there is a function $F$ on $[0, \infty)$ so that $f(x)=F(|x|)$. Equivalently, a function is radial if and only if $f(O x)=f(x)$ for all orthogonal matrices $O$. Corollary 1.11 Suppose that $f$ is in $L^{1}$ and $f$ is radial, then $\hat{f}$ is radial. Proof. We fix $\xi$ in $\mathbf{R}^{n}$ and choose $O$ so that $O \xi=(|\xi|, 0, \ldots, 0)$. Since $f \circ O=f$, we have that $\hat{f}(\xi)=\widehat{f \circ O}(\xi)=\hat{f}(O \xi)=\hat{f}(|\xi|, 0, \ldots, 0)$. The main applications of the Fourier transform depend on the fact that it turns operations that commute with translations into multiplication operations. That is, it diagonalizes operations which commute with translations. The first glimpse we will see of this is that the operation of translation by $h$ (which surely commutes with translations) corresponds to multiplying the Fourier transform by $e^{i h \cdot \xi}$. We will use $\tau_{h}$ to denote translation by $h, \tau_{h} f(x)=f(x+h)$. 
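The dilation identity $\hat{f}_{\epsilon}(\xi)=\hat{f}(\epsilon \xi)$ is easy to check numerically. The following sketch (not part of the notes) approximates the one-dimensional Fourier integral of a Gaussian by a midpoint rule:

```python
import numpy as np

def fhat(func, xi, L=30.0, n=200000):
    # midpoint-rule approximation of \int func(x) e^{-i x xi} dx over [-L, L]
    dx = 2 * L / n
    x = -L + (np.arange(n) + 0.5) * dx
    return np.sum(func(x) * np.exp(-1j * x * xi)) * dx

f = lambda x: np.exp(-x**2)
eps = 0.5
f_eps = lambda x: f(x / eps) / eps   # f_eps(x) = eps^{-n} f(x/eps) with n = 1

xi = 1.7
lhs = fhat(f_eps, xi)                # \hat{f_eps}(xi)
rhs = fhat(f, eps * xi)              # \hat{f}(eps * xi)
```

The truncation to $[-30, 30]$ is harmless here because the Gaussian decays so quickly.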
Exercise 1.12 If $f$ is a nice function on $\mathbf{R}^{n}$, show that $$ \frac{\partial}{\partial x_{j}} \tau_{h} f=\tau_{h} \frac{\partial}{\partial x_{j}} f . $$ Proposition 1.13 If $f$ is in $L^{1}\left(\mathbf{R}^{n}\right)$, then $$ \widehat{\tau_{h} f}(\xi)=e^{i h \cdot \xi} \hat{f}(\xi) . $$ Also, $$ \left(e^{i x \cdot h} f\right)^{\wedge}=\tau_{-h}(\hat{f}) $$ Proof. We change variables $y=x+h$ in the integral $$ \widehat{\tau_{h} f}(\xi)=\int f(x+h) e^{-i x \cdot \xi} d x=\int f(y) e^{-i(y-h) \cdot \xi} d y=e^{i h \cdot \xi} \hat{f}(\xi) . $$ The proof of the second identity is just as easy and is left as an exercise. Example 1.15 If $I=\left\{x:\left|x_{j}\right|<1\right\}$, then the Fourier transform of $f=\chi_{I}$ is easily computed, $$ \hat{f}(\xi)=\prod_{j=1}^{n} \int_{-1}^{1} e^{-i x_{j} \xi_{j}} d x_{j}=\prod_{j=1}^{n} \frac{2 \sin \xi_{j}}{\xi_{j}} . $$ In the next exercise, we will need to write integrals in polar coordinates. For our purposes, this means that we have a Borel measure $\sigma$ on the sphere, $\mathbf{S}^{n-1}=\left\{x^{\prime} \in \mathbf{R}^{n}\right.$ : $\left.\left|x^{\prime}\right|=1\right\}$ so that $$ \int_{\mathbf{R}^{n}} f(x) d x=\int_{0}^{\infty} \int_{\mathbf{S}^{n-1}} f\left(r x^{\prime}\right) d \sigma\left(x^{\prime}\right) r^{n-1} d r $$ Exercise 1.16 If $B_{r}(x)=\{y:|x-y|<r\}$ and $f=\chi_{B_{1}(0)}$, compute the Fourier transform $\hat{f}$. Hints: 1. Since $f$ is radial, it suffices to compute $\hat{f}$ at $(0, \ldots, r)$ for $r>0$. 2. Write the integral over the ball as an iterated integral where we integrate with respect to $x^{\prime}=\left(x_{1}, \ldots, x_{n-1}\right)$ and then with respect to $x_{n}$. 3. You will need to know the volume of a ball, see exercise 1.29 below. 4. At the moment, we should only complete the computation in 3 dimensions (or odd dimensions, if you are ambitious). In even dimensions, the answer cannot be expressed in terms of elementary functions. 
See Chapter 13 for the answer in even dimensions. The answer is $$ \hat{f}(\xi)=\frac{\omega_{n-2}}{n-1} \int_{-1}^{1} e^{-i t|\xi|}\left(1-t^{2}\right)^{(n-1) / 2} d t $$ Theorem 1.17 (Riemann-Lebesgue) If $f$ is in $L^{1}\left(\mathbf{R}^{n}\right)$, then $$ \lim _{|\xi| \rightarrow \infty} \hat{f}(\xi)=0 $$ Proof. We let $X \subset L^{1}\left(\mathbf{R}^{n}\right)$ be the collection of functions $f$ for which $\lim _{|\xi| \rightarrow \infty} \hat{f}(\xi)=0$. It is easy to see that $X$ is a vector space. Thanks to Proposition $1.2, X$ is closed in the $L^{1}$-norm. According to Example 1.15, Proposition 1.13 and Proposition 1.6 the characteristic function of every rectangle is in $X$. Since finite linear combinations of characteristic functions of rectangles are dense in $L^{1}, X=L^{1}\left(\mathbf{R}^{n}\right)$. Combining the Riemann-Lebesgue theorem and the first proposition above, we can show that the image of $L^{1}$ under the Fourier transform is contained in $C_{0}\left(\mathbf{R}^{n}\right)$, the continuous functions on $\mathbf{R}^{n}$ which vanish at infinity. This containment is strict. We will see that the Fourier transform of the surface measure on the sphere $\mathbf{S}^{n-1}$ is in $C_{0}\left(\mathbf{R}^{n}\right)$. It is a difficult and unsolved problem to describe the image of $L^{1}$ under the Fourier transform. One of our goals is to relate the properties of $f$ to those of $\hat{f}$. There are two general principles which we will illustrate below. These principles are: If $f$ is smooth, then $\hat{f}$ decays at infinity and If $f$ decays at infinity, then $\hat{f}$ is smooth. We have already seen some weak illustrations of these principles. Proposition 1.2 asserts that if $f$ is in $L^{1}$, which requires decay at infinity, then $\hat{f}$ is continuous. The Riemann-Lebesgue lemma tells us that if $f$ is in $L^{1}$, and thus is smoother than the distributions to be discussed below, then $\hat{f}$ has limit 0 at infinity. 
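Example 1.15 and the Riemann-Lebesgue decay can both be observed numerically; a one-dimensional sketch (ours, not part of the notes):

```python
import numpy as np

def fhat_box(xi, n=200000):
    # midpoint approximation of \int_{-1}^{1} e^{-i x xi} dx
    dx = 2.0 / n
    x = -1.0 + (np.arange(n) + 0.5) * dx
    return np.sum(np.exp(-1j * x * xi)) * dx

xi = 2.3
approx = fhat_box(xi)
exact = 2 * np.sin(xi) / xi       # the n = 1 case of Example 1.15
decay = abs(fhat_box(500.0))      # |2 sin(xi)/xi| <= 2/|xi|, illustrating Theorem 1.17
```

The bound $|\hat{\chi}_{[-1,1]}(\xi)| \leq 2/|\xi|$ makes the decay quantitative in this special case, though Theorem 1.17 gives no rate in general.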
The propositions below give further illustrations of these principles. Proposition 1.18 If $f$ and $x_{j} f$ are in $L^{1}$, then $\hat{f}$ is differentiable and the derivative is given by $$ i \frac{\partial}{\partial \xi_{j}} \hat{f}=\widehat{x_{j} f} $$ Furthermore, we have $$ \left\|\frac{\partial \hat{f}}{\partial \xi_{j}}\right\|_{\infty} \leq\left\|x_{j} f\right\|_{1} $$ Proof. Let $h \in \mathbf{R}$ and suppose that $e_{j}$ is the unit vector parallel to the $x_{j}$-axis. Using the mean-value theorem from calculus, one obtains that $$ \left|\frac{e^{-i x \cdot\left(\xi+h e_{j}\right)}-e^{-i x \cdot \xi}}{h}\right| \leq\left|x_{j}\right| $$ Our hypothesis that $x_{j} f$ is in $L^{1}$ allows us to use the dominated convergence theorem to bring the limit inside the integral to compute the partial derivative $$ \frac{\partial \hat{f}(\xi)}{\partial \xi_{j}}=\lim _{h \rightarrow 0} \int \frac{e^{-i x \cdot\left(\xi+h e_{j}\right)}-e^{-i x \cdot \xi}}{h} f(x) d x=\int\left(-i x_{j}\right) e^{-i x \cdot \xi} f(x) d x $$ The estimate follows immediately from the formula for the derivative. Note that the notation in the previous proposition is not ideal since the variable $x_{j}$ appears multiplying $f$, but not as the argument for $f$. One can resolve this problem by decreeing that the symbol $x_{j}$ stands for the multiplication operator $f \rightarrow x_{j} f$ and the $j$-th component of $x$. For the next proposition, we need an additional definition. We say $f$ has a partial derivative with respect to $x_{j}$ in the $L^{p}$ sense if $f$ is in $L^{p}$ and there exists a function $\partial f / \partial x_{j}$ so that $$ \lim _{h \rightarrow 0}\left\|\frac{1}{h}\left(\tau_{h e_{j}} f-f\right)-\frac{\partial f}{\partial x_{j}}\right\|_{p}=0 $$ Proposition 1.19 If $f$ is differentiable with respect to $x_{j}$ in the $L^{1}$-sense, then $$ i \xi_{j} \hat{f}=\widehat{\frac{\partial f}{\partial x_{j}}} . 
$$ Furthermore, we have $$ \left\|\xi_{j} \hat{f}\right\|_{\infty} \leq\left\|\frac{\partial f}{\partial x_{j}}\right\|_{1} $$ Proof. Let $h>0$ and let $e_{j}$ be a unit vector in the direction of the $x_{j}$-axis. Since the difference quotient converges in $L^{1}$, we have $$ \int_{\mathbf{R}^{n}} e^{-i x \cdot \xi} \frac{\partial f}{\partial x_{j}}(x) d x=\lim _{h \rightarrow 0} \int_{\mathbf{R}^{n}} e^{-i x \cdot \xi} \frac{f\left(x+h e_{j}\right)-f(x)}{h} d x . $$ In the last integral, we can "difference-by-parts" to move the difference quotient over to the exponential. More precisely, we can make a change of variables $y=x+h e_{j}$ to obtain $$ \int_{\mathbf{R}^{n}} \frac{e^{-i\left(x-h e_{j}\right) \cdot \xi}-e^{-i x \cdot \xi}}{h} f(x) d x . $$ Since the difference quotient of the exponential converges pointwise and boundedly (in $x)$ to $i \xi_{j} e^{-i x \cdot \xi}$, we can use the dominated convergence theorem to obtain $\widehat{\partial f / \partial x_{j}}=i \xi_{j} \hat{f}$. Finally, our last result on translation invariant operators involves convolution. Recall that if $f$ and $g$ are measurable functions on $\mathbf{R}^{n}$, then the convolution is defined by $$ f * g(x)=\int_{\mathbf{R}^{n}} f(x-y) g(y) d y $$ provided the integral on the right is defined for a.e. $x$. Some of the basic properties of convolutions are given in the following exercises. The solutions can be found in most real analysis texts. Exercise 1.20 If $f$ is in $L^{1}$ and $g$ is in $L^{p}$, with $1 \leq p \leq \infty$, show that $f * g(x)$ is defined a.e. and $$ \|f * g\|_{p} \leq\|f\|_{1}\|g\|_{p} $$ Exercise 1.21 Show that the convolution is commutative. If $f * g(x)$ is given by a convergent integral, then $$ f * g(x)=g * f(x) . $$ If $f, g$ and $h$ are in $L^{1}$, show that convolution is associative $$ f *(g * h)=(f * g) * h . $$ Hint: Change variables. 
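The discrete analogues of Exercises 1.20 and 1.21 can be checked quickly with `np.convolve` (a sketch of ours, using full discrete convolution):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g = rng.standard_normal(5)
h = rng.standard_normal(3)

# commutativity and associativity of discrete (full) convolution
commutative = np.allclose(np.convolve(f, g), np.convolve(g, f))
associative = np.allclose(np.convolve(f, np.convolve(g, h)),
                          np.convolve(np.convolve(f, g), h))
# discrete analogue of Exercise 1.20 with p = 1: ||f*g||_1 <= ||f||_1 ||g||_1
young = np.abs(np.convolve(f, g)).sum() <= np.abs(f).sum() * np.abs(g).sum() + 1e-12
```

These hold exactly for finite sequences; the continuous statements need the integrability hypotheses of the exercises.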
Exercise 1.22 The map $f \rightarrow f * g$ commutes with translations: $$ \tau_{h}(f * g)=\left(\tau_{h} f\right) * g . $$ Exercise 1.23 (Young's convolution inequality) If the exponents $p$, $q$ and $s$ satisfy $1 / s=1 / p+1 / q-1$, then $$ \|f * g\|_{s} \leq\|f\|_{p}\|g\|_{q} . $$ Proposition 1.24 If $f$ and $g$ are in $L^{1}$, then $$ (f * g)^{\wedge}=\hat{f} \hat{g} . $$ We calculate a very important Fourier transform. The function $W$ in the next proposition gives (a multiple of) the Gaussian probability distribution. Proposition 1.25 Let $W(x)$ be defined by $W(x)=\exp \left(-|x|^{2} / 4\right)$. Then $$ \hat{W}(\xi)=(\sqrt{4 \pi})^{n} \exp \left(-|\xi|^{2}\right) $$ Proof. We use Fubini's theorem to write $\hat{W}$ as a product of one-dimensional integrals $$ \int_{\mathbf{R}^{n}} e^{-|x|^{2} / 4} e^{-i x \cdot \xi} d x=\prod_{j=1}^{n} \int_{\mathbf{R}} e^{-x_{j}^{2} / 4} e^{-i x_{j} \xi_{j}} d x_{j} . $$ To evaluate the one-dimensional integral, we use complex analysis which makes everything trivial. We complete the square in the exponent for the first equality and then use Cauchy's integral theorem to shift the contour of integration in the complex plane. This gives $$ \int_{\mathbf{R}} e^{-x^{2} / 4} e^{-i x \xi} d x=e^{-|\xi|^{2}} \int_{\mathbf{R}} e^{-\left(\frac{x}{2}+i \xi\right)^{2}} d x=e^{-|\xi|^{2}} \int_{\mathbf{R}} e^{-|x|^{2} / 4} d x=\sqrt{4 \pi} e^{-|\xi|^{2}} . $$ Exercise 1.26 Carefully justify the shift of contour in the previous proof. Exercise 1.27 Establish the formula $$ \int_{\mathbf{R}^{n}} e^{-\pi|x|^{2}} d x=1 $$ which, after a change of variables, gives the value $\int_{\mathbf{R}} e^{-x^{2} / 4} d x=\sqrt{4 \pi}$ used above. a) First consider $n=2$ and write the integral over $\mathbf{R}^{2}$ in polar coordinates. b) Deduce the general case from this special case. In the next exercise, we use the $\Gamma$ function, defined for $\operatorname{Re} s>0$ by $$ \Gamma(s)=\int_{0}^{\infty} e^{-t} t^{s} \frac{d t}{t} .
$$ Exercise 1.28 Use the result of the previous exercise and polar coordinates to compute $\omega_{n-1}$, the $(n-1)$-dimensional measure of the unit sphere in $\mathbf{R}^{n}$, and show that $$ \omega_{n-1}=\sigma\left(\mathbf{S}^{n-1}\right)=\frac{2 \pi^{n / 2}}{\Gamma(n / 2)} . $$ For the next exercise, we introduce our notation for the Lebesgue measure of a set $E$, $m(E)$. Exercise 1.29 Use the result of the previous exercise and polar coordinates to find the volume of the unit ball in $\mathbf{R}^{n}$. Show that $$ m\left(B_{1}(0)\right)=\omega_{n-1} / n . $$ ### The Fourier inversion theorem In this section, we show how to recover an $L^{1}$-function from its Fourier transform. A consequence of this result is that we are able to conclude that the Fourier transform is injective. The proof we give depends on the Lebesgue differentiation theorem. We will discuss the Lebesgue differentiation theorem in the chapter on maximal functions, Chapter 4. We begin with a simple lemma. Lemma 1.30 If $f$ and $g$ are in $L^{1}\left(\mathbf{R}^{n}\right)$, then $$ \int_{\mathbf{R}^{n}} \hat{f}(x) g(x) d x=\int_{\mathbf{R}^{n}} f(x) \hat{g}(x) d x . $$ Proof. We consider the integral of $f(x) g(y) e^{-i y \cdot x}$ on $\mathbf{R}^{2 n}$. We use Fubini's theorem to write this as an iterated integral. If we compute the integral with respect to $x$ first, we obtain the integral on the left-hand side of the conclusion of this lemma. If we compute the integral with respect to $y$ first, we obtain the right-hand side. We are now ready to show how to recover a function in $L^{1}$ from its Fourier transform. Theorem 1.31 (Fourier inversion theorem) If $f$ is in $L^{1}\left(\mathbf{R}^{n}\right)$ and we define $f_{t}$ for $t>0$ by $$ f_{t}(x)=\frac{1}{(2 \pi)^{n}} \int_{\mathbf{R}^{n}} e^{-t|\xi|^{2}} e^{i x \cdot \xi} \hat{f}(\xi) d \xi $$ then $$ \lim _{t \rightarrow 0^{+}}\left\|f_{t}-f\right\|_{1}=0 $$ and $$ \lim _{t \rightarrow 0^{+}} f_{t}(x)=f(x), \quad \text { a.e. } x . $$ Proof.
We consider the function $g(x)=e^{-t|x|^{2}+i y \cdot x}$. By Proposition 1.25, (1.7) and (1.14), we have that $$ \hat{g}(x)=(2 \pi)^{n}(4 \pi t)^{-n / 2} \exp \left(-|y-x|^{2} / 4 t\right) $$ Thus applying Lemma 1.30 above, we obtain that $$ \frac{1}{(2 \pi)^{n}} \int_{\mathbf{R}^{n}} \hat{f}(\xi) e^{i y \cdot \xi} e^{-t|\xi|^{2}} d \xi=\int_{\mathbf{R}^{n}} f(x)(4 \pi t)^{-n / 2} \exp \left(-\frac{|y-x|^{2}}{4 t}\right) d x . $$ Thus, $f_{t}(y)$ is the convolution of $f$ with a Gaussian and it is known that $f_{t} \rightarrow f$ in $L^{1}$. That $f_{t}$ converges to $f$ pointwise is a standard consequence of the Lebesgue differentiation theorem. A proof will be given below. It is convenient to have a notation for the inverse operation to the Fourier transform. The most common notation is $\check{f}$. Many properties of the inverse Fourier transform follow easily from the properties of the Fourier transform and the inversion. The following simple formulas illustrate the close connection: $$ \begin{aligned} & \check{f}(x)=\frac{1}{(2 \pi)^{n}} \hat{f}(-x) \\ & \check{f}(x)=\frac{1}{(2 \pi)^{n}} \overline{\hat{\bar{f}}}(x) . \end{aligned} $$ If $\hat{f}$ is in $L^{1}$, then the limit in $t$ in the Fourier inversion theorem can be brought inside the integral (by the dominated convergence theorem) and we have $$ f(x)=\frac{1}{(2 \pi)^{n}} \int_{\mathbf{R}^{n}} \hat{f}(\xi) e^{i x \cdot \xi} d \xi, \quad \text { a.e. } x . $$ Exercise 1.34 Prove the formulae (1.32) and (1.33) above. ## Chapter 2 ## Tempered distributions In this chapter, we introduce the Schwartz space. This is a space of well-behaved functions on which the Fourier transform is invertible. One of the main interests of this space is that other interesting operations such as differentiation are also continuous on this space. Then, we are able to extend differentiation and the Fourier transform to act on the dual space. This dual space is called the space of tempered distributions.
The word tempered means that in a certain sense, the distributions do not grow too rapidly at infinity. Distributions have a certain local regularity: on a compact set, they only involve finitely many derivatives. Given the connection between the local regularity of a function and the growth of its Fourier transform, it seems likely that any space on which the Fourier transform acts should have some restriction on the growth at infinity. ### Test functions and tempered distributions The main notational complication of this chapter is the use of multi-indices. A multi-index is an $n$-tuple of non-negative integers, $\alpha=\left(\alpha_{1}, \ldots, \alpha_{n}\right)$. For a multi-index $\alpha$, we let $$ x^{\alpha}=x_{1}^{\alpha_{1}} \ldots x_{n}^{\alpha_{n}} . $$ We also use this notation for partial derivatives, $$ \frac{\partial^{\alpha}}{\partial x^{\alpha}}=\frac{\partial^{\alpha_{1}}}{\partial x_{1}^{\alpha_{1}}} \cdots \frac{\partial^{\alpha_{n}}}{\partial x_{n}^{\alpha_{n}}} . $$ Several other related notations are $$ |\alpha|=\alpha_{1}+\ldots+\alpha_{n} \quad \text { and } \quad \alpha !=\alpha_{1} ! \ldots \alpha_{n} ! $$ Note that the definition of the length of $\alpha$, $|\alpha|$, appears to conflict with the standard notation for the Euclidean norm. This inconsistency is firmly embedded in analysis and I will not try to change it. Below are a few exercises which illustrate the use of this notation. Exercise 2.1 The multi-nomial theorem. $$ \left(x_{1}+\ldots+x_{n}\right)^{k}=\sum_{|\alpha|=k} \frac{|\alpha| !}{\alpha !} x^{\alpha} . $$ Exercise 2.2 Show that $$ (x+y)^{\alpha}=\sum_{\beta+\gamma=\alpha} \frac{\alpha !}{\beta ! \gamma !} x^{\beta} y^{\gamma} . $$ Exercise 2.3 The Leibniz rule. If $f$ and $g$ have continuous derivatives of order up to $k$ on $\mathbf{R}^{n}$ and $\alpha$ is a multi-index of length $k$, then $$ \frac{\partial^{\alpha}(f g)}{\partial x^{\alpha}}=\sum_{\beta+\gamma=\alpha} \frac{\alpha !}{\beta !
\gamma !} \frac{\partial^{\beta} f}{\partial x^{\beta}} \frac{\partial^{\gamma} g}{\partial x^{\gamma}} . $$ Exercise 2.5 Show for each multi-index $\alpha$, $$ \frac{\partial^{\alpha}}{\partial x^{\alpha}} x^{\alpha}=\alpha ! $$ More generally, show that $$ \frac{\partial^{\beta}}{\partial x^{\beta}} x^{\alpha}=\frac{\alpha !}{(\alpha-\beta) !} x^{\alpha-\beta} . $$ The right-hand side in this last equation is defined to be zero if any component of $\alpha-\beta$ is negative. To define the Schwartz space, we define a family of norms on the collection of $C^{\infty}\left(\mathbf{R}^{n}\right)$ functions which vanish at $\infty$. For each pair of multi-indices $\alpha$ and $\beta$, we let $$ \rho_{\alpha \beta}(f)=\sup _{x \in \mathbf{R}^{n}}\left|x^{\alpha} \frac{\partial^{\beta} f}{\partial x^{\beta}}(x)\right| . $$ We say that a function $f$ is in the Schwartz space on $\mathbf{R}^{n}$ if $\rho_{\alpha \beta}(f)<\infty$ for all $\alpha$ and $\beta$. This space is denoted by $\mathcal{S}\left(\mathbf{R}^{n}\right)$. Recall that a norm was defined in Chapter 1. If a function $\rho: V \rightarrow[0, \infty)$ satisfies $\rho(f+g) \leq \rho(f)+\rho(g)$ for all $f$ and $g$ in $V$ and $\rho(\lambda f)=|\lambda| \rho(f)$, then $\rho$ is called a semi-norm on the vector space $V$. The Schwartz space is given a topology using the norms $\rho_{\alpha \beta}$ in the following way. Let $\rho_{j}$ be some arbitrary ordering of the norms $\rho_{\alpha \beta}$. Let $\bar{\rho}_{j}=\min \left(\rho_{j}, 1\right)$. Then define $$ \rho(f-g)=\sum_{j=1}^{\infty} 2^{-j} \bar{\rho}_{j}(f-g) . $$ Lemma 2.6 The function $\rho$ is a metric on $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and $\mathcal{S}$ is complete in this metric. The vector operations $(f, g) \rightarrow f+g$ and $(\lambda, f) \rightarrow \lambda f$ are continuous. Exercise 2.7 Prove the assertions in the previous Lemma.
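Before moving on, the multi-index calculus above is easy to mechanize. The following sketch (plain Python, standard library only) enumerates the multi-indices of a fixed length and spot-checks the multi-nomial theorem of Exercise 2.1 at a sample point; the coordinates are chosen dyadic so that the floating-point sums are exact.

```python
from itertools import product
from math import factorial

def multi_indices(n, k):
    """All multi-indices alpha in n variables with |alpha| = k."""
    return [a for a in product(range(k + 1), repeat=n) if sum(a) == k]

def multinomial(alpha):
    """The coefficient |alpha|! / alpha!"""
    c = factorial(sum(alpha))
    for a in alpha:
        c //= factorial(a)
    return c

x = (1.5, -2.0, 1.0)   # dyadic coordinates, so all arithmetic below is exact
k = 4
lhs = sum(x) ** k
rhs = sum(multinomial(a) * x[0]**a[0] * x[1]**a[1] * x[2]**a[2]
          for a in multi_indices(3, k))
print(lhs, rhs)
```

The same helpers serve for Exercise 2.2 and the Leibniz rule, since all three statements are sums over multi-indices with the coefficients $\alpha ! / (\beta ! \gamma !)$.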
Note that our definition of the metric involves an arbitrary ordering of the norms $\rho_{\alpha \beta}$. Readers who are obsessed with details might worry that different choices of the order might lead to different topologies on $\mathcal{S}\left(\mathbf{R}^{n}\right)$. The following proposition guarantees that this does not happen. Proposition 2.8 A set $\mathcal{O}$ is open in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ if and only if for each $f \in \mathcal{O}$, there are finitely many semi-norms $\rho_{\alpha_{i} \beta_{i}}$ and $\epsilon_{i}>0, i=1, \ldots, N$ so that $$ \cap_{i=1}^{N}\left\{g: \rho_{\alpha_{i} \beta_{i}}(f-g)<\epsilon_{i}\right\} \subset \mathcal{O} . $$ We will not use this proposition, thus the proof is left as an exercise. It is closely related to Proposition 2.11. Exercise 2.9 Prove Proposition 2.8. Exercise 2.10 The Schwartz space is an example of a Fréchet space. A Fréchet space is a vector space $X$ whose topology is given by a countable family of semi-norms $\left\{\rho_{j}\right\}$ using a metric $\rho(f-g)$ defined by $\rho(f-g)=\sum 2^{-j} \bar{\rho}_{j}(f-g)$. The space $X$ is Fréchet if the resulting topology is Hausdorff and if $X$ is complete in the metric $\rho$. Show that $\mathcal{S}\left(\mathbf{R}^{n}\right)$ is a Fréchet space. Hint: If one of the semi-norms is a norm, then it is easy to see the resulting topology is Hausdorff. In our case, each semi-norm is a norm. Proposition 2.11 A linear map $T$ from $\mathcal{S}\left(\mathbf{R}^{n}\right)$ to $\mathcal{S}\left(\mathbf{R}^{n}\right)$ is continuous if and only if for each semi-norm $\rho_{\alpha \beta}$, there exists a finite collection of semi-norms $\left\{\rho_{\alpha_{i} \beta_{i}}: i=1, \ldots, N\right\}$ and a constant $C$ so that $$ \rho_{\alpha \beta}(T f) \leq C \sum_{i=1}^{N} \rho_{\alpha_{i} \beta_{i}}(f) .
$$ A map $u$ from $\mathcal{S}\left(\mathbf{R}^{n}\right)$ to a normed vector space $V$ is continuous if and only if there exists a finite collection of semi-norms $\left\{\rho_{\alpha_{i} \beta_{i}}: i=1, \ldots, N\right\}$ and a constant $C$ so that $$ \|u(f)\|_{V} \leq C \sum_{i=1}^{N} \rho_{\alpha_{i} \beta_{i}}(f) . $$ Proof. We first suppose that $T: \mathcal{S} \rightarrow \mathcal{S}$ is continuous. Suppose the given semi-norm $\rho_{\alpha \beta}$ is $\rho_{N}$ under the ordering used to define the metric. Then $T$ is continuous at 0 and hence given $\epsilon=2^{-N-1}$, there exists $\delta>0$ so that if $\rho(f)<\delta$, then $\rho(T f)<2^{-N-1}$. We may choose $M$ so that $\sum_{j=M+1}^{\infty} 2^{-j}<\delta / 2$. Given $f$, we set $$ \tilde{f}=\frac{\delta}{2} \frac{f}{\sum_{j=1}^{M} 2^{-j} \rho_{j}(f)} . $$ The function $\tilde{f}$ satisfies $\rho(\tilde{f})<\delta$ and thus $\rho(T \tilde{f})<2^{-N-1}$. This implies that $\rho_{N}(T \tilde{f})<1 / 2$. Thus, by the homogeneity of $\rho_{N}$, we obtain $$ \rho_{N}(T f) \leq \frac{1}{\delta} \sum_{j=1}^{M} \rho_{j}(f) . $$ Now suppose that the second condition of our theorem holds and we verify that the standard $\epsilon-\delta$ formulation of continuity holds. Since the map $T$ is linear, it suffices to prove that $T$ is continuous at 0. Let $\epsilon>0$ and then choose $N$ so that $2^{-N}<\epsilon / 2$. For each $j=1, \ldots, N$, there exists $C_{j}$ and $N_{j}$ so that $$ \rho_{j}(T f) \leq C_{j} \sum_{k=1}^{N_{j}} \rho_{k}(f) . $$ If we set $N_{0}=\max \left(N_{1}, \ldots, N_{N}\right)$ and $C_{0}=\max \left(C_{1}, \ldots, C_{N}\right)$, then we have $$ \begin{aligned} \rho(T f) & \leq \sum_{j=1}^{N} 2^{-j} \rho_{j}(T f)+\frac{\epsilon}{2} \\ & \leq C_{0} \sum_{j=1}^{N}\left(2^{-j} \sum_{k=1}^{N_{0}} \rho_{k}(f)\right)+\frac{\epsilon}{2} . \end{aligned} $$ Now we define $\delta$ by $\delta=2^{-N_{0}} \min \left(1, \epsilon /\left(2 N_{0} C_{0}\right)\right)$.
If we have $\rho(f)<\delta$, then for $k=1, \ldots, N_{0}$ we have $2^{-k} \bar{\rho}_{k}(f) \leq \rho(f)<\delta$, so that $\bar{\rho}_{k}(f)<2^{k-N_{0}} \min \left(1, \epsilon /\left(2 N_{0} C_{0}\right)\right) \leq \min \left(1, \epsilon /\left(2 N_{0} C_{0}\right)\right)$. In particular, $\bar{\rho}_{k}(f)<1$, hence $\rho_{k}(f)=\bar{\rho}_{k}(f)<\epsilon /\left(2 N_{0} C_{0}\right)$. Substituting this into the inequality (2.12) above gives that $\rho(T f)<\epsilon$. The proof of the second part is simpler. Finally, it would be a bit embarrassing if the space $\mathcal{S}\left(\mathbf{R}^{n}\right)$ turned out to contain only the zero function. The following exercise guarantees that this is not the case. Exercise 2.13 a) Let $$ \phi(t)=\left\{\begin{array}{l} \exp (-1 / t), \quad t>0 \\ 0, \quad t \leq 0 . \end{array}\right. $$ Show that $\phi(t)$ is in $C^{\infty}(\mathbf{R})$. That is, $\phi$ has derivatives of all orders on the real line. Hint: Show by induction that $\phi^{(k)}(t)=P_{2 k}(1 / t) e^{-1 / t}$ for $t>0$ where $P_{2 k}$ is a polynomial of order $2 k$. b) Show that $\phi\left(-1 /\left(1-|x|^{2}\right)\right)$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$. Hint: This is immediate from the chain rule and part a). Lemma 2.14 If $1 \leq p<\infty$, then $\mathcal{S}\left(\mathbf{R}^{n}\right)$ is dense in $L^{p}\left(\mathbf{R}^{n}\right)$. Proof. Let $\psi(x)=e \phi\left(1-|x|^{2}\right)$, with $\phi$ the function of the previous exercise, so that $\psi$ is smooth, supported in the closed unit ball, and satisfies $\psi(0)=1$. Let $\eta=c \psi$, where $c$ is chosen so that $\int \eta=1$; then $\eta_{\epsilon}(x)=\epsilon^{-n} \eta(x / \epsilon)$ will also have integral 1, and we set $f_{\epsilon}(x)=\eta_{\epsilon} * f(x)$. It is known from real analysis that if $f \in L^{p}\left(\mathbf{R}^{n}\right)$, then $$ \lim _{\epsilon \rightarrow 0^{+}}\left\|\eta_{\epsilon} * f(x)-f(x)\right\|_{p}=0, \quad 1 \leq p<\infty $$ and that this fails when $p=\infty$ and $f$ is not continuous. Finally, since $\psi(0)=1$, $$ f_{\epsilon_{1}, \epsilon_{2}}(x)=\psi\left(\epsilon_{2} x\right)\left(\eta_{\epsilon_{1}} * f(x)\right)
$$ we can choose $\epsilon_{1}$ and then $\epsilon_{2}$ small so that when $f$ is in $L^{p}, p<\infty$, then $\left\|f-f_{\epsilon_{1}, \epsilon_{2}}\right\|_{p}$ is as small as we like. Since $f_{\epsilon_{1}, \epsilon_{2}}$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$, we have proven the density of $\mathcal{S}\left(\mathbf{R}^{n}\right)$ in $L^{p}$. We define the space of tempered distributions, $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$, as the dual of $\mathcal{S}\left(\mathbf{R}^{n}\right)$. If $V$ is a topological vector space, then the *dual* is the vector space of continuous linear functionals on $V$. We give some examples of tempered distributions. Example 2.15 Each $f \in \mathcal{S}$ gives a tempered distribution by the formula $$ g \rightarrow u_{f}(g)=\int_{\mathbf{R}^{n}} f(x) g(x) d x . $$ Example 2.16 If $f$ is in $L^{p}\left(\mathbf{R}^{n}\right)$ for some $p, 1 \leq p \leq \infty$, then we may define a tempered distribution $u_{f}$ by $$ u_{f}(g)=\int_{\mathbf{R}^{n}} f(x) g(x) d x $$ To see this, note that if $N$ is an integer, then $\left(1+|x|^{2}\right)^{N / 2}|f(x)|$ is bounded by a linear combination of the norms $\rho_{\alpha 0}$ for $|\alpha| \leq N$. Thus, for $f \in \mathcal{S}\left(\mathbf{R}^{n}\right)$, we have $$ \left(\int|f(x)|^{p} d x\right)^{1 / p} \leq C \sum_{|\alpha| \leq N} \rho_{\alpha 0}(f)\left(\int_{\mathbf{R}^{n}}\left(1+|x|^{2}\right)^{-p N / 2} d x\right)^{1 / p} . $$ If $p N>n$, then the integral on the right-hand side of this inequality is finite. Thus, we have an estimate for the $\|f\|_{p}$ norm. As a consequence, if $f$ is in $L^{p}$, then we have $\left|u_{f}(g)\right| \leq\|f\|_{p}\|g\|_{p^{\prime}}$ by Hölder's inequality. Now the inequality (2.17) applied to $g$ and Proposition 2.11 imply that $u_{f}$ is continuous. Exercise 2.18 Show that the map $f \rightarrow u_{f}$ from $\mathcal{S}\left(\mathbf{R}^{n}\right)$ into $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$ is injective.
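The semi-norms $\rho_{\alpha \beta}$ that control estimates such as the one in Example 2.16 are straightforward to approximate on a grid. A small illustration, assuming `numpy`, in one dimension for $f(x)=e^{-x^{2}}$; the derivative is entered in closed form to avoid differencing error, and the grid parameters are ad hoc.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 100001)
f = np.exp(-x**2)
fp = -2.0 * x * f          # f' in closed form

def rho(alpha, g):
    """Grid approximation of sup |x^alpha g(x)|, with g a derivative of f."""
    return float(np.max(np.abs(x**alpha * g)))

r10 = rho(1, f)    # rho_{1,0}(f) = sup |x e^{-x^2}|   = (2e)^{-1/2}
r01 = rho(0, fp)   # rho_{0,1}(f) = sup |-2x e^{-x^2}| = (2/e)^{1/2}
print(r10, r01)
```

Both suprema occur at $x= \pm 1/\sqrt{2}$, and the second is exactly twice the first.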
Example 2.19 The delta function $\delta$ is the tempered distribution given by $$ \delta(f)=f(0) $$ Example 2.20 More generally, if $\mu$ is any finite Borel measure on $\mathbf{R}^{n}$, we have a distribution $u_{\mu}$ defined by $$ u_{\mu}(f)=\int f d \mu . $$ This is a tempered distribution because $$ \left|u_{\mu}(f)\right| \leq|\mu|\left(\mathbf{R}^{n}\right) \rho_{00}(f) $$ Example 2.21 Any polynomial $P$ (or any measurable function of polynomial growth) gives a tempered distribution by $$ u_{P}(f)=\int P(x) f(x) d x . $$ Example 2.22 For each multi-index $\alpha$, a distribution is given by $$ \delta^{(\alpha)}(f)=\frac{\partial^{\alpha} f(0)}{\partial x^{\alpha}} $$ ### Operations on tempered distributions If $T$ is a continuous linear map on $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and $u$ is a tempered distribution, then $f \rightarrow u(T f)$ is also a distribution. The map $u \rightarrow u \circ T$ is called the transpose of $T$ and is sometimes written as $T^{t} u=u \circ T$. This construction is an important part of extending familiar operations on functions to tempered distributions. Our first example considers the map $$ f \rightarrow \frac{\partial^{\alpha} f}{\partial x^{\alpha}} $$ which is clearly continuous on the Schwartz space. Thus if $u$ is a distribution, then we can define a new distribution by $$ v(f)=u\left(\frac{\partial^{\alpha} f}{\partial x^{\alpha}}\right) $$ If we have a distribution $u$ which is given by a Schwartz function $f$, we can integrate by parts and show that $$ (-1)^{|\alpha|} u_{f}\left(\frac{\partial^{\alpha} g}{\partial x^{\alpha}}\right)=u_{\partial^{\alpha} f / \partial x^{\alpha}}(g) . $$ Thus we will define the derivative of a distribution $u$ by the formula $$ \frac{\partial^{\alpha} u}{\partial x^{\alpha}}(g)=(-1)^{|\alpha|} u\left(\frac{\partial^{\alpha} g}{\partial x^{\alpha}}\right) $$ This extends the definition of derivative from smooth functions to distributions.
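The integration-by-parts identity behind this definition can be checked numerically for concrete functions. A sketch with `numpy` (Riemann sums on a truncated grid; the particular Gaussians and grid sizes are ad hoc choices), taking $\alpha=(1)$ in one dimension:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]

f = np.exp(-(x - 1.0)**2)          # plays the role of the Schwartz function defining u_f
fp = -2.0 * (x - 1.0) * f          # f'
g = np.exp(-x**2)                  # test function
gp = -2.0 * x * g                  # g'

lhs = float((fp * g).sum() * dx)   # u_{f'}(g) = int f' g dx
rhs = float(-(f * gp).sum() * dx)  # (-1)^{|alpha|} u_f(g') with alpha = (1)
print(lhs, rhs)
```

Since both integrands decay rapidly, the two Riemann sums agree to essentially machine precision, matching the sign $(-1)^{|\alpha|}$ in the definition.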
When we say extend the definition of an operation $T$ from functions to distributions, this means that we have $$ T u_{f}=u_{T f} $$ whenever $f$ is a Schwartz function. Given a map $T$, we can make this extension if we can find a (formal) transpose of $T, T^{t}$, that satisfies $$ \int_{\mathbf{R}^{n}} T f g d x=\int_{\mathbf{R}^{n}} f T^{t} g d x $$ for all $f, g \in \mathcal{S}\left(\mathbf{R}^{n}\right)$. Then if $T^{t}$ is continuous on $\mathcal{S}\left(\mathbf{R}^{n}\right)$, we can define $T$ on $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$ by $T u(f)=u\left(T^{t} f\right)$. Exercise 2.23 Show that if $\alpha$ and $\beta$ are multi-indices and $u$ is a tempered distribution, then $$ \frac{\partial^{\alpha}}{\partial x^{\alpha}} \frac{\partial^{\beta}}{\partial x^{\beta}} u=\frac{\partial^{\beta}}{\partial x^{\beta}} \frac{\partial^{\alpha}}{\partial x^{\alpha}} u $$ Hint: A standard result of vector calculus tells us when partial derivatives of functions commute. Exercise 2.24 Let $H(t)$ be the Heaviside function on the real line. Thus $H(t)=1$ if $t>0$ and $H(t)=0$ if $t<0$. Find the distributional derivative of $H$. That is, find $H^{\prime}(\phi)$ for $\phi$ in $\mathcal{S}$. We give some additional examples of extending operations from functions to distributions. If $P$ is a polynomial, then $f \rightarrow P f$ defines a continuous map on the Schwartz space. We can define multiplication of a distribution by a polynomial by $P u(f)=u(P f)$. Exercise 2.25 Show that this definition extends the ordinary product of functions in the sense that if $f$ is a Schwartz function, $$ u_{P f}=P u_{f} . $$ Exercise 2.26 Show that if $f$ and $g$ are in $\mathcal{S}\left(\mathbf{R}^{n}\right)$, then $f g$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and that the map $$ f \rightarrow f g $$ is continuous.
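As a concrete instance of these definitions, the distributional derivative in Exercise 2.24 unwinds to $H^{\prime}(\phi)=-u_{H}\left(\phi^{\prime}\right)=-\int_{0}^{\infty} \phi^{\prime}(x) d x$, which can be evaluated numerically. A `numpy` sketch with a Gaussian test function (grid and tolerance ad hoc); the numerical value suggests what the answer to the exercise must be.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

phi = np.exp(-(x - 0.3)**2)             # a test function with phi(0) != 0
phi_prime = -2.0 * (x - 0.3) * phi      # phi' in closed form

# H'(phi) = -u_H(phi') = -int_0^infty phi'(x) dx, approximated by a Riemann sum
Hprime_phi = float(-(phi_prime[x >= 0.0]).sum() * dx)
phi_at_0 = float(np.exp(-0.09))         # phi(0) = e^{-(0-0.3)^2}
print(Hprime_phi, phi_at_0)
```

The computed value of $H^{\prime}(\phi)$ matches $\phi(0)$ to within the quadrature error.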
Exercise 2.27 Show that $1 / x$ defines a distribution on $\mathbf{R}$ by $$ u(f)=\lim _{\epsilon \rightarrow 0^{+}} \int_{\{x:|x|>\epsilon\}} f(x) \frac{1}{x} d x . $$ This way of giving a value to an integral which is not defined as an absolutely convergent integral is called the principal value of the integral. Hint: The function $1 / x$ is odd, thus if we consider $\int_{\{\epsilon<|x|<1\}} f(x) / x d x$, we can subtract a constant from $f$ without changing the value of the integral. Exercise 2.28 Show that we cannot in general define the product between two distributions in such a way that the product is associative. (Vague?) Next we consider the convolution of a distribution and a test function. If $f$ and $g$ are in the Schwartz class, we have by Fubini's theorem that $$ \int_{\mathbf{R}^{n}} f * g(x) h(x) d x=\int_{\mathbf{R}^{n}} f(y) \int_{\mathbf{R}^{n}} h(x) \tilde{g}(y-x) d x d y . $$ The reflection of $g, \tilde{g}$ is defined by $\tilde{g}(x)=g(-x)$. Thus, we can define the convolution of a tempered distribution $u$ and a test function $g, g * u$ by $$ g * u(f)=u(f * \tilde{g}) . $$ This will be a tempered distribution thanks to the following. Exercise 2.29 Show that if $f$ and $g$ are in $\mathcal{S}\left(\mathbf{R}^{n}\right)$, then $f * g \in \mathcal{S}\left(\mathbf{R}^{n}\right)$. Furthermore, show that $f \rightarrow f * g$ is continuous on $\mathcal{S}$. Hint: The simplest way to do this is to use the Fourier transform to convert the problem into a problem about pointwise products. ### The Fourier transform Proposition 2.30 The Fourier transform is a continuous linear map from $\mathcal{S}\left(\mathbf{R}^{n}\right)$ to $\mathcal{S}\left(\mathbf{R}^{n}\right)$ with a continuous inverse, $f \rightarrow \check{f}$. Proof. We use the criterion of Proposition 2.11. 
If we consider the expression in a semi-norm, we have $$ \xi^{\alpha} \frac{\partial^{\beta}}{\partial \xi^{\beta}} \hat{f}(\xi)=i^{-|\alpha|-|\beta|}\left(\frac{\partial^{\alpha}}{\partial x^{\alpha}} x^{\beta} f\right)^{\wedge} $$ where we have used Propositions 1.18 and 1.19; the unimodular factor $i^{-|\alpha|-|\beta|}$ is harmless in the estimates below. By the Leibniz rule in (2.4), we have $$ \left(\frac{\partial^{\alpha}}{\partial x^{\alpha}} x^{\beta} f\right)^{\wedge}=\sum_{\gamma+\delta=\alpha} \frac{\alpha !}{\gamma ! \delta !}\left(\left(\frac{\partial^{\gamma}}{\partial x^{\gamma}} x^{\beta}\right) \frac{\partial^{\delta}}{\partial x^{\delta}} f\right)^{\wedge} $$ Hence, using the observation of (2.17) and Proposition 1.18, we have that $$ \rho_{\alpha \beta}(\hat{f}) \leq C \sum_{\lambda \leq \beta,|\gamma| \leq|\alpha|+n+1} \rho_{\gamma \lambda}(f) . $$ Given (1.32) or (1.33), the continuity of $f \rightarrow \check{f}$ is immediate from the continuity of $f \rightarrow \hat{f}$; in particular, $\check{f}$ lies in the Schwartz space, and hence in $L^{1}$, for $f \in \mathcal{S}\left(\mathbf{R}^{n}\right)$. Then, we can use (1.33) and the Fourier inversion theorem to show that $f \rightarrow \check{f}$ inverts the Fourier transform on $\mathcal{S}\left(\mathbf{R}^{n}\right)$. Next, recall the identity $$ \int \hat{f}(x) g(x) d x=\int f(x) \hat{g}(x) d x $$ of Lemma 1.30 which holds if $f$ and $g$ are Schwartz functions. Using this identity, it is clear that we want to define the Fourier transform of a tempered distribution by $$ \hat{u}(g)=u(\hat{g}) . $$ Then the above identity implies that if $u_{f}$ is a distribution given by a Schwartz function, or an $L^{1}$ function, then $$ u_{\hat{f}}(g)=\hat{u}_{f}(g) . $$ Thus, we have defined a map which extends the Fourier transform. In a similar way, we can define $\check{u}$ for a tempered distribution $u$ by $\check{u}(f)=u(\check{f})$. Theorem 2.31 The Fourier transform is an invertible linear map on $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$. Proof. We know that $f \rightarrow \check{f}$ is the inverse of the map $f \rightarrow \hat{f}$ on $\mathcal{S}\left(\mathbf{R}^{n}\right)$.
Thus, it is easy to see that $u \rightarrow \check{u}$ is an inverse to $u \rightarrow \hat{u}$ on $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$. Exercise 2.32 Show that if $f$ is in $\mathcal{S}$, then $f$ has a derivative in the $L^{1}$-sense. Exercise 2.33 Show from the definitions that if $u$ is a tempered distribution, then $$ \left(\frac{\partial^{\alpha}}{\partial x^{\alpha}} u\right)^{\wedge}=(i \xi)^{\alpha} \hat{u} $$ and that $$ \left((-i x)^{\alpha} u\right)^{\wedge}=\left(\frac{\partial^{\alpha} \hat{u}}{\partial \xi^{\alpha}}\right) $$ ### More distributions In addition to the tempered distributions discussed above, there are two more common families of distributions: the (ordinary) distributions $\mathcal{D}^{\prime}\left(\mathbf{R}^{n}\right)$ and the distributions of compact support, $\mathcal{E}^{\prime}\left(\mathbf{R}^{n}\right)$. The space $\mathcal{D}^{\prime}\left(\mathbf{R}^{n}\right)$ is defined as the dual of $\mathcal{D}\left(\mathbf{R}^{n}\right)$, the set of functions which are infinitely differentiable and have compact support on $\mathbf{R}^{n}$. The space $\mathcal{E}^{\prime}$ is the dual of $\mathcal{E}\left(\mathbf{R}^{n}\right)$, the set of functions which are infinitely differentiable on $\mathbf{R}^{n}$. Since we have the containments, $$ \mathcal{D}\left(\mathbf{R}^{n}\right) \subset \mathcal{S}\left(\mathbf{R}^{n}\right) \subset \mathcal{E}\left(\mathbf{R}^{n}\right) $$ we obtain the containments $$ \mathcal{E}^{\prime}\left(\mathbf{R}^{n}\right) \subset \mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right) \subset \mathcal{D}^{\prime}\left(\mathbf{R}^{n}\right) $$ To see this, observe that (for example) each tempered distribution defines an ordinary distribution by restricting the domain of $u$ from $\mathcal{S}$ to $\mathcal{D}$. The space $\mathcal{D}^{\prime}\left(\mathbf{R}^{n}\right)$ is important because it can also be defined on open subsets of $\mathbf{R}^{n}$ or on manifolds.
The space $\mathcal{E}^{\prime}$ is interesting because the Fourier transform of such a distribution will extend holomorphically to $\mathbf{C}^{n}$. The book of Laurent Schwartz $[12,11]$ is still a good introduction to the subject of distributions. ## Chapter 3 ## The Fourier transform on $L^{2}$. In this chapter, we prove that the Fourier transform acts on $L^{2}$ and that $f \rightarrow(2 \pi)^{-n / 2} \hat{f}$ is an isometry on this space. Each $L^{2}$ function gives a tempered distribution and thus its Fourier transform is defined. Thus, our main accomplishment in this chapter is to prove the Plancherel identity which asserts that $f \rightarrow(2 \pi)^{-n / 2} \hat{f}$ is an isometry. ### Plancherel's theorem Proposition 3.1 If $f$ and $g$ are in the Schwartz space, then we have $$ \int_{\mathbf{R}^{n}} f(x) \bar{g}(x) d x=\frac{1}{(2 \pi)^{n}} \int_{\mathbf{R}^{n}} \hat{f}(\xi) \overline{\hat{g}}(\xi) d \xi $$ Proof. According to the Fourier inversion theorem in Chapter 1, $$ \bar{g}=\frac{1}{(2 \pi)^{n}}\left(\overline{\hat{g}}\right)^{\wedge} $$ Thus, we can use the identity of Lemma 1.30 in Chapter 1 to conclude the Plancherel identity for Schwartz functions. Theorem 3.2 (Plancherel) If $f$ is in $L^{2}$, then $\hat{f}$ is in $L^{2}$ and we have $$ \int|f(x)|^{2} d x=\frac{1}{(2 \pi)^{n}} \int|\hat{f}(\xi)|^{2} d \xi $$ Furthermore, the map $f \rightarrow \hat{f}$ is invertible. Proof. If $f$ is in $L^{2}$, then we may approximate $f$ by Schwartz functions $f_{i}$. Applying the previous proposition with $f=g=f_{i}-f_{j}$ we conclude that the sequence $\hat{f}_{i}$ is Cauchy in $L^{2}$. Since $L^{2}$ is complete, the sequence $\hat{f}_{i}$ has a limit, $F$. Since $f_{i} \rightarrow f$ in $L^{2}$ we also have that $f_{i}$ converges to $f$ as tempered distributions.
To see this, we use the definition of the Fourier transform, and then that $f_{i}$ converges in $L^{2}$, to obtain that $$ u_{\hat{f}}(g)=\int f \hat{g} d x=\lim _{i \rightarrow \infty} \int f_{i} \hat{g} d x=\lim _{i \rightarrow \infty} \int \hat{f}_{i} g d x=\int F g d x . $$ Thus $\hat{f}=F$. The identity holds for $f$ and $\hat{f}$ since it holds for each $f_{i}$. We know that the map $f \rightarrow \hat{f}$ has an inverse on $\mathcal{S}$, $f \rightarrow \check{f}$. The Plancherel identity tells us this inverse extends continuously to all of $L^{2}$. It is easy to see that this extension is still an inverse on $L^{2}$. Recall that a Hilbert space $\mathcal{H}$ is a complete normed vector space where the norm comes from an inner product. An inner product is a map $\langle\cdot, \cdot\rangle: \mathcal{H} \times \mathcal{H} \rightarrow \mathbf{C}$ which satisfies $$ \begin{aligned} \langle x, y\rangle & =\overline{\langle y, x\rangle}, \quad \text { if } x, y \in \mathcal{H} \\ \langle\lambda x, y\rangle & =\lambda\langle x, y\rangle, \quad x, y \in \mathcal{H}, \lambda \in \mathbf{C} \\ \langle x, x\rangle & \geq 0, \quad x \in \mathcal{H} \\ \langle x, x\rangle & =0, \quad \text { if and only if } x=0 \end{aligned} $$ Exercise 3.3 Show that the Plancherel identity holds if $f$ takes values in a finite-dimensional Hilbert space. Hint: Use a basis. Exercise 3.4 Show by example that the Plancherel identity does not always hold if $f$ does not take values in a Hilbert space. Hint: The characteristic function of $(0,1) \subset \mathbf{R}$ should provide an example. Norm the complex numbers by the $\infty$-norm, $\|z\|=\max (|\operatorname{Re} z|,|\operatorname{Im} z|)$. Exercise 3.5 (Heisenberg inequality.) If $f$ is a Schwartz function, show that we have the inequality: $$ n \int_{\mathbf{R}^{n}}|f(x)|^{2} d x \leq 2\|x f\|_{2}\|\nabla f\|_{2} $$ Hint: Write $$ \int_{\mathbf{R}^{n}} n|f(x)|^{2} d x=\int_{\mathbf{R}^{n}}(\operatorname{div} x)|f(x)|^{2} d x $$ and integrate by parts.
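The Gaussian is the extremal case for the inequality in Exercise 3.5 (a standard fact), which makes it a convenient numerical test: both sides can be computed by quadrature and compared, along with the Plancherel identity itself using the closed-form transform $\widehat{e^{-x^{2} / 2}}(\xi)=\sqrt{2 \pi} e^{-\xi^{2} / 2}$. A `numpy` sketch in one dimension (grid parameters ad hoc):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 100001)
dx = x[1] - x[0]

f = np.exp(-x**2 / 2)       # Gaussian: the extremal case
fp = -x * f                 # f'

# Heisenberg inequality with n = 1:  int |f|^2 <= 2 ||x f||_2 ||f'||_2
heis_lhs = float((f**2).sum() * dx)
heis_rhs = float(2 * np.sqrt(((x * f)**2).sum() * dx) * np.sqrt((fp**2).sum() * dx))

# Plancherel: int |f|^2 = (2 pi)^{-1} int |fhat|^2, grid reused as the xi variable
fhat = np.sqrt(2 * np.pi) * np.exp(-x**2 / 2)
plancherel_rhs = float((fhat**2).sum() * dx / (2 * np.pi))
print(heis_lhs, heis_rhs, plancherel_rhs)
```

All three quantities agree with $\sqrt{\pi}$ up to quadrature error; in particular the Heisenberg inequality holds with equality for this $f$.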
Recall that the gradient operator $\nabla$ and the divergence operator, div are defined by $$ \nabla f=\left(\frac{\partial f}{\partial x_{1}}, \ldots, \frac{\partial f}{\partial x_{n}}\right) \text { and } \operatorname{div}\left(f_{1}, \ldots, f_{n}\right)=\sum_{j=1}^{n} \frac{\partial f_{j}}{\partial x_{j}} $$ This inequality has something to do with the Heisenberg uncertainty principle in quantum mechanics. The function $|f(x)|^{2}$ is a probability density and thus has integral 1. The expression $x f$ is related to the position of the particle represented by $f$ and the expression $\nabla f$ is related to the momentum. The inequality gives a lower bound on the product of position and momentum. If we use Plancherel's theorem and Proposition 1.19, we obtain $$ \int_{\mathbf{R}^{n}}|\nabla f|^{2} d x=(2 \pi)^{-n} \int_{\mathbf{R}^{n}}|\xi \hat{f}(\xi)|^{2} d \xi $$ If we use this to replace $\|\nabla f\|_{2}$ in the above inequality, we obtain a quantitative version of the statement "We cannot have $f$ and $\hat{f}$ concentrated near the origin." ### Multiplier operators If $m(\xi)$ is a tempered distribution, then $m$ defines a multiplier operator $T_{m}$ by $$ \left(T_{m} f\right)^{\wedge}=m(\xi) \hat{f} $$ The function $m$ is called the symbol of the operator. It is clear that $T_{m}$ maps $\mathcal{S}$ to $\mathcal{S}^{\prime}$. Note that we cannot determine if this map is continuous, since we have not given the topology on $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$. Our main interest is when $m$ is a locally integrable function. Such a function will be a tempered distribution if there are constants $C$ and $N$ so that $$ \int_{B_{R}(0)}|m(\xi)| d \xi \leq C\left(1+R^{N}\right), \text { for all } R>0 $$ Exercise 3.6 Is this condition necessary for a positive function to give a tempered distribution? There is a simple, but extremely useful condition for showing that a multiplier operator is bounded on $L^{2}$.
Theorem 3.7 Suppose $T_{m}$ is a multiplier operator given by a measurable function $m$. The operator $T_{m}$ is bounded on $L^{2}$ if and only if $m$ is in $L^{\infty}$. Furthermore, $\left\|T_{m}\right\|=\|m\|_{\infty}$. Proof. If $m$ is in $L^{\infty}$, then Plancherel's theorem implies the inequality $$ \left\|T_{m} f\right\|_{2} \leq\|m\|_{\infty}\|f\|_{2} $$ Now consider $E_{t}=\{\xi:|m(\xi)|>t\}$ and suppose this set has positive measure. If we choose $F_{t} \subset E_{t}$ of finite, positive Lebesgue measure, then we have $$ \left\|T_{m}\left(\chi_{F_{t}}\right)\right\|_{2} \geq t\left\|\chi_{F_{t}}\right\|_{2} . $$ Hence, $\left\|T_{m}\right\| \geq t$ whenever $t<\|m\|_{\infty}$, and thus $\left\|T_{m}\right\| \geq\|m\|_{\infty}$. Exercise 3.8 (Open.) Find a necessary and sufficient condition for $T_{m}$ to be bounded on $L^{p}$. Example 3.9 If $s$ is a real number, then we can define $J_{s}$, the Bessel potential operator of order $s$, by $$ \left(J_{s} f\right)^{\wedge}=\left(1+|\xi|^{2}\right)^{-s / 2} \hat{f} . $$ If $s \geq 0$, then Theorem 3.7 implies that $J_{s} f$ lies in $L^{2}$ when $f$ is in $L^{2}$. Furthermore, if $\alpha$ is a multi-index of length $|\alpha| \leq s$, then for some finite constant $C$ we have $$ \left\|\frac{\partial^{\alpha}}{\partial x^{\alpha}} J_{s} f\right\|_{L^{2}} \leq C\|f\|_{2} . $$ The operator $f \rightarrow \frac{\partial^{\alpha}}{\partial x^{\alpha}} J_{s} f$ is a multiplier operator with symbol $(i \xi)^{\alpha} /\left(1+|\xi|^{2}\right)^{s / 2}$, which is bounded. ### Sobolev spaces Example 3.9 motivates the following definition of the Sobolev space $L_{s}^{2}$. Sobolev spaces are so useful that each mathematician has his or her own notation for them. Some of the more common ones are $H^{s}$, $W^{s, 2}$ and $B_{2,2}^{s}$. Definition 3.10 The Sobolev space $L_{s}^{2}\left(\mathbf{R}^{n}\right)$ is the image of $L^{2}\left(\mathbf{R}^{n}\right)$ under the map $J_{s}$.
The norm is given by $$ \left\|J_{s} f\right\|_{2, s}=\|f\|_{2} $$ or, since $J_{s} \circ J_{-s}$ is the identity, we have $$ \|f\|_{2, s}=\left\|J_{-s} f\right\|_{2} . $$ Note that if $s \geq 0$, then $L_{s}^{2} \subset L^{2}$, as observed in Example 3.9. For $s=0$, we have $L_{0}^{2}=L^{2}$. For $s<0$, the elements of the Sobolev space are tempered distributions, which are not, in general, given by functions. The following propositions are easy consequences of the definition and the Plancherel theorem, via Theorem 3.7. Proposition 3.11 If $s \geq 0$ is an integer, then a function $f$ is in the Sobolev space $L_{s}^{2}$ if and only if $f$ and all its derivatives of order up to $s$ are in $L^{2}$. Proof. If $f$ is in the Sobolev space $L_{s}^{2}$, then $f=J_{s} \circ J_{-s} f$. Using the observation of Example 3.9 that $$ f \rightarrow \frac{\partial^{\alpha}}{\partial x^{\alpha}} J_{s} f $$ is bounded on $L^{2}$, we conclude that $$ \left\|\frac{\partial^{\alpha}}{\partial x^{\alpha}} f\right\|_{2}=\left\|\frac{\partial^{\alpha}}{\partial x^{\alpha}} J_{s} \circ J_{-s} f\right\|_{2} \leq C\left\|J_{-s} f\right\|_{2}=C\|f\|_{2, s} $$ If $f$ has all derivatives of order up to $s$ in $L^{2}$, then there is a finite constant $C$ so that $$ \left(1+|\xi|^{2}\right)^{s / 2}|\hat{f}(\xi)| \leq C\left(1+\sum_{j=1}^{n}\left|\xi_{j}\right|^{s}\right)|\hat{f}(\xi)| . $$ Since each term on the right is, up to a constant, the modulus of the Fourier transform of an $L^{2}$ function (namely $f$ and the derivatives $\partial^{s} f / \partial x_{j}^{s}$), the left-hand side is in $L^{2}$ and hence $f$ is in the Sobolev space. The characterization of Sobolev spaces in the above theorem is the more standard definition of Sobolev spaces. It is more common to define the Sobolev space, for $s$ a positive integer, as the set of functions which have (distributional) derivatives of order less than or equal to $s$ in $L^{2}$, because this definition extends easily to give Sobolev spaces on open subsets of $\mathbf{R}^{n}$ and Sobolev spaces based on $L^{p}$.
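The equivalence in Proposition 3.11 can be tested numerically for $s=1$: the Fourier-side quantity $(2\pi)^{-1}\int(1+\xi^{2})|\hat{f}(\xi)|^{2}\,d\xi$ should equal $\|f\|_{2}^{2}+\|f^{\prime}\|_{2}^{2}$. The sketch below uses a Gaussian and computes $\hat{f}$ by direct numerical integration in the convention $\hat{f}(\xi)=\int f(x)e^{-ix\xi}\,dx$ of these notes; the grids are illustrative choices.

```python
import numpy as np

# Check of Proposition 3.11 for s = 1 and f(x) = exp(-x^2/2):
#   (2 pi)^{-1} \int (1 + xi^2) |fhat(xi)|^2 d xi  =  ||f||_2^2 + ||f'||_2^2.
# Both sides should be close to 3*sqrt(pi)/2 = 2.6586...
x = np.linspace(-10, 10, 1001); dx = x[1] - x[0]
xi = np.linspace(-10, 10, 1001); dxi = xi[1] - xi[0]
f = np.exp(-x**2 / 2)
fhat = np.exp(-1j * np.outer(xi, x)) @ f * dx    # fhat(xi_k) by quadrature

fourier_side = np.sum((1 + xi**2) * np.abs(fhat)**2) * dxi / (2 * np.pi)
deriv_side = np.sum(f**2) * dx + np.sum(np.gradient(f, x)**2) * dx

print(fourier_side, deriv_side)
assert abs(fourier_side - deriv_side) < 1e-2
```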
The definition using the Fourier transform provides a nice definition of Sobolev spaces when $s$ is not an integer. Proposition 3.12 If $s<0$ and $-|\alpha| \geq s$, then $\partial^{\alpha} f / \partial x^{\alpha}$ is in $L_{s}^{2}$ and $$ \left\|\frac{\partial^{\alpha} f}{\partial x^{\alpha}}\right\|_{2, s} \leq\|f\|_{2} \text {. } $$ Proof. We have $$ \left(1+|\xi|^{2}\right)^{s / 2}\left(\frac{\partial^{\alpha} f}{\partial x^{\alpha}}\right)^{\wedge}(\xi)=\left[(i \xi)^{\alpha}\left(1+|\xi|^{2}\right)^{s / 2}\right] \hat{f}(\xi) . $$ If $|\alpha| \leq-s$, then the factor in square brackets on the right is a bounded multiplier and hence if $f$ is in $L^{2}$, then the left-hand side is in $L^{2}$. Now Plancherel's theorem tells us that $\partial^{\alpha} f / \partial x^{\alpha}$ is in the Sobolev space $L_{s}^{2}$. Exercise 3.13 Show that for all $s$ in $\mathbf{R}$, the map $$ f \rightarrow \frac{\partial^{\alpha} f}{\partial x^{\alpha}} $$ maps $L_{s}^{2} \rightarrow L_{s-|\alpha|}^{2}$. Exercise 3.14 Show that $L_{-s}^{2}$ is the dual of $L_{s}^{2}$. More precisely, show that if $\lambda: L_{s}^{2} \rightarrow \mathbf{C}$ is a continuous linear map, then there is a distribution $u \in L_{-s}^{2}$ so that $$ \lambda(f)=u(f) $$ for each $f \in \mathcal{S}\left(\mathbf{R}^{n}\right)$. Hint: This is an easy consequence of the theorem that all continuous linear functionals on the Hilbert space $L^{2}$ are given by $f \rightarrow \int f \bar{g}$. ## Chapter 4 ## Interpolation of operators In this chapter, we will say a few things about the theory of interpolation of operators. For a more detailed treatment, we refer the reader to the book of Stein and Weiss [15] and the book of Bergh and Löfström [1]. By interpolation, we mean the following type of result. If $T$ is a linear map which is bounded ${ }^{1}$ on $X_{0}$ and $X_{1}$, then $T$ is bounded on $X_{t}$ for $t$ between 0 and 1.
It should not be terribly clear what we mean by "between" when we are talking about pairs of vector spaces. In the context of $L^{p}$ spaces, "$L^{q}$ is between $L^{p}$ and $L^{r}$" will mean that $q$ is between $p$ and $r$. For these results, we will work on a pair of $\sigma$-finite measure spaces $(M, \mathcal{M}, \mu)$ and $(N, \mathcal{N}, \nu)$. ### The Riesz-Thorin theorem We begin with the Riesz-Thorin convexity theorem. Theorem 4.1 Let $p_{j}, q_{j}, j=0,1$ be exponents in the range $[1, \infty]$ and suppose that $p_{0}<p_{1}$. Suppose that $T$ is a linear operator defined (at least) on simple functions in $L^{1}(M)$, taking values in the measurable functions on $N$, that satisfies $$ \|T f\|_{q_{j}} \leq M_{j}\|f\|_{p_{j}} . $$ If we define $p_{t}$ and $q_{t}$ by $$ \frac{1}{p_{t}}=\frac{1-t}{p_{0}}+\frac{t}{p_{1}} \quad \text { and } \quad \frac{1}{q_{t}}=\frac{1-t}{q_{0}}+\frac{t}{q_{1}} $$ then $T$ extends to a bounded operator from $L^{p_{t}}$ to $L^{q_{t}}$ : $$ \|T f\|_{q_{t}} \leq M_{t}\|f\|_{p_{t}} . $$ The operator norm $M_{t}$ satisfies $M_{t} \leq M_{0}^{1-t} M_{1}^{t}$. ${ }^{1}$ A linear map $T: X \rightarrow Y$ is bounded between normed vector spaces $X$ and $Y$ if the inequality $\|T f\|_{Y} \leq C\|f\|_{X}$ holds for all $f$. The least constant $C$ for which this inequality holds is called the operator norm of $T$. Before giving the proof of the Riesz-Thorin theorem, we look at some applications. Proposition 4.2 (Hausdorff-Young inequality) The Fourier transform satisfies, for $1 \leq p \leq 2$, $$ \|\hat{f}\|_{p^{\prime}} \leq(2 \pi)^{n\left(1-\frac{1}{p}\right)}\|f\|_{p} $$ Proof. This follows by interpolating between the $L^{1}-L^{\infty}$ result of Proposition 1.2 and Plancherel's theorem, Theorem 3.2. The next result appeared as an exercise when we introduced convolution.
Proposition 4.3 (Young's convolution inequality) If $f \in L^{p}\left(\mathbf{R}^{n}\right)$ and $g \in L^{q}\left(\mathbf{R}^{n}\right), 1 \leq p, q, r \leq \infty$ and $$ \frac{1}{r}=\frac{1}{p}+\frac{1}{q}-1 $$ then $$ \|f * g\|_{r} \leq\|f\|_{p}\|g\|_{q} . $$ Proof. We fix $p$, $1 \leq p \leq \infty$, and then apply Theorem 4.1 to the map $g \rightarrow f * g$. Our endpoints are Hölder's inequality, which gives $$ |f * g(x)| \leq\|f\|_{p}\|g\|_{p^{\prime}} $$ so that $g \rightarrow f * g$ maps $L^{p^{\prime}}\left(\mathbf{R}^{n}\right)$ to $L^{\infty}\left(\mathbf{R}^{n}\right)$, and the simpler version of Young's inequality, which tells us that if $g$ is in $L^{1}$, then $$ \|f * g\|_{p} \leq\|f\|_{p}\|g\|_{1} . $$ Thus $g \rightarrow f * g$ also maps $L^{1}$ to $L^{p}$. Thus, this map also takes $L^{q_{t}}$ to $L^{r_{t}}$ where $$ \frac{1}{q_{t}}=\frac{1-t}{1}+t\left(1-\frac{1}{p}\right) \text { and } \quad \frac{1}{r_{t}}=\frac{1-t}{p}+\frac{t}{\infty} . $$ If we subtract the definitions of $1 / q_{t}$ and $1 / r_{t}$, then we obtain the relation $$ \frac{1}{q_{t}}-\frac{1}{r_{t}}=1-\frac{1}{p} . $$ The condition $q \geq 1$ is equivalent to $t \geq 0$ and the condition $r \geq 1$ is equivalent to $t \leq 1$. Thus, we obtain the stated inequality for precisely the exponents $p, q$ and $r$ in the hypothesis. Exercise 4.4 The simple version of Young's inequality used in the proof above can be proven directly using Hölder's inequality. A proof can also be given which uses the Riesz-Thorin theorem. To do this, use Tonelli's and then Fubini's theorem to establish the inequality $$ \|f * g\|_{1} \leq\|f\|_{1}\|g\|_{1} . $$ The other endpoint is Hölder's inequality: $$ \|f * g\|_{\infty} \leq\|f\|_{1}\|g\|_{\infty} . $$ Then, apply Theorem 4.1 to the map $g \rightarrow f * g$. Below is a simple, useful result that is a small generalization of the simple version of Young's inequality.
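Before that, here is a quick grid check of Proposition 4.3. The exponents $p=2$, $q=4/3$ (so that $r=4$), the test functions, and the Riemann-sum norms are all choices made for this sketch.

```python
import numpy as np

# Grid check of Young's convolution inequality, ||f*g||_r <= ||f||_p ||g||_q
# with 1/r = 1/p + 1/q - 1.  Truncating to a finite grid only decreases the
# left-hand side, so the discrete check is a fair one.
x = np.linspace(-10, 10, 2001); dx = x[1] - x[0]
f = np.exp(-x**2)
g = 1.0 / (1 + x**2)

def lp(h, p):  # Riemann-sum approximation of the L^p norm
    return (np.sum(np.abs(h)**p) * dx) ** (1 / p)

conv = np.convolve(f, g, mode="same") * dx     # (f*g) sampled on the grid

p, q = 2.0, 4.0 / 3.0
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)            # r = 4
assert lp(conv, r) <= lp(f, p) * lp(g, q)
```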
Exercise 4.5 a) Suppose that $K: \mathbf{R}^{n} \times \mathbf{R}^{n} \rightarrow \mathbf{C}$ is measurable and that $$ \int_{\mathbf{R}^{n}}|K(x, y)| d y \leq M_{\infty} $$ and $$ \int_{\mathbf{R}^{n}}|K(x, y)| d x \leq M_{1} $$ Show that $$ T f(x)=\int_{\mathbf{R}^{n}} K(x, y) f(y) d y $$ defines a bounded operator $T$ on $L^{p}$ and $$ \|T f\|_{p} \leq M_{1}^{1 / p} M_{\infty}^{1 / p^{\prime}}\|f\|_{p} $$ Hint: Show that $M_{1}$ is an upper bound for the operator norm on $L^{1}$ and $M_{\infty}$ is an upper bound for the operator norm on $L^{\infty}$ and then interpolate with the Riesz-Thorin theorem, Theorem 4.1. b) Use the result of part a) to provide a proof of Young's convolution inequality $$ \|f * g\|_{p} \leq\|f\|_{1}\|g\|_{p} . $$ To do this, write $f * g(x)=\int_{\mathbf{R}^{n}} f(x-y) g(y) d y$ and then let $K(x, y)=f(x-y)$. Our next step is a lemma from complex analysis (which makes everything trivial) that is usually called the three lines theorem. This is one of a family of theorems which state that the maximum modulus theorem continues to hold in unbounded regions, provided we put an extra growth condition at infinity. I believe such results are called Phragmén-Lindelöf theorems, though this may or may not be accurate. This theorem considers analytic functions in the strip $\{z: a \leq \operatorname{Re} z \leq b\}$. Lemma 4.6 (Three lines lemma) If $f$ is analytic in the strip $\{z: a \leq \operatorname{Re} z \leq b\}$, $f$ is bounded and $$ M_{a}=\sup |f(a+i t)| \quad \text { and } \quad M_{b}=\sup |f(b+i t)| $$ then $$ |f(x+i y)| \leq M_{a}^{\frac{b-x}{b-a}} M_{b}^{\frac{x-a}{b-a}} $$ Proof. We consider $f_{\epsilon}(x+i y)=e^{\epsilon(x+i y)^{2}} f(x+i y) M_{a}^{\frac{x+i y-b}{b-a}} M_{b}^{\frac{a-(x+i y)}{b-a}}$ for $\epsilon>0$.
This function satisfies $$ \left|f_{\epsilon}(a+i y)\right| \leq e^{\epsilon a^{2}} \quad \text { and } \quad\left|f_{\epsilon}(b+i y)\right| \leq e^{\epsilon b^{2}} $$ and $$ \lim _{y \rightarrow \pm \infty} \sup _{a \leq x \leq b}\left|f_{\epsilon}(x+i y)\right|=0 . $$ Thus by applying the maximum modulus theorem on sufficiently large rectangles, we can conclude that for each $z$ in the strip, $$ \left|f_{\epsilon}(z)\right| \leq \max \left(e^{\epsilon a^{2}}, e^{\epsilon b^{2}}\right) $$ Letting $\epsilon \rightarrow 0^{+}$ implies the Lemma. Exercise 4.7 If instead of assuming that $f$ is bounded, we assume that $$ |f(x+i y)| \leq e^{M|y|} $$ for some $M>0$, then the above Lemma holds and with the same proof. Show this. What is the best possible growth condition for which the above proof works? What is the best possible growth condition? See [13]. The proof of the Riesz-Thorin theorem will rely on the following family of simple functions. Lemma 4.8 Let $p_{0}, p_{1}$ and $p$ with $p_{0}<p<p_{1}$ be given. Let $s=\sum \alpha_{j} a_{j} \chi_{E_{j}}$ be a simple function, where the $\alpha_{j}$ are complex numbers of modulus 1, $\left|\alpha_{j}\right|=1$, the $a_{j}>0$, and $\left\{E_{j}\right\}$ is a pairwise disjoint collection of measurable sets, each of finite measure. Suppose $\|s\|_{p}=1$. Let $$ \frac{1}{p_{z}}=\frac{1-z}{p_{0}}+\frac{z}{p_{1}} $$ and define $$ s_{z}=\sum \alpha_{j} a_{j}^{p / p_{z}} \chi_{E_{j}} $$ This family satisfies $$ \left\|s_{z}\right\|_{p_{\operatorname{Re} z}}=1, \quad \text { for } 0<\operatorname{Re} z<1 \text {. } $$ Proof. Since $1 / p_{z}$ is an affine function of $z$ with real coefficients, $\operatorname{Re}\left(1 / p_{z}\right)=1 / p_{\operatorname{Re} z}$ and hence $\left|s_{z}\right|=\sum a_{j}^{p / p_{\operatorname{Re} z}} \chi_{E_{j}}$. Thus $$ \int\left|s_{z}\right|^{p_{\operatorname{Re} z}} d \mu=\sum a_{j}^{p} \mu\left(E_{j}\right)=\|s\|_{p}^{p}=1 . $$ Exercise 4.9 State and prove a similar lemma for the family of Sobolev spaces.
Show that if $s_{0}<s<s_{1}$ and $u \in L_{s}^{2}$ with $\|u\|_{L_{s}^{2}}=1$, then we can find a family of distributions $u_{z}$ so that $$ \left\|u_{z}\right\|_{L_{\operatorname{Re} z}^{2}}=1, \quad s_{0}<\operatorname{Re} z<s_{1} . $$ This family will be analytic in the sense that if $f \in \mathcal{S}\left(\mathbf{R}^{n}\right)$, then $u_{z}(f)$ is analytic. We are now ready to give the proof of the Riesz-Thorin theorem, Theorem 4.1. Proof. (Proof of the Riesz-Thorin theorem.) We fix $p=p_{t_{0}}, 0<t_{0}<1$, and consider simple functions $s$ on $M$ and $s^{\prime}$ on $N$ which satisfy $\|s\|_{p_{t_{0}}}=1$ and $\left\|s^{\prime}\right\|_{q_{t_{0}}^{\prime}}=1$. We let $s_{z}$ and $s_{z}^{\prime}$ be the families from the previous Lemma, where $s_{z}$ is constructed using the exponents $p_{j}, j=0,1$, and $s_{z}^{\prime}$ is constructed using the exponents $q_{j}^{\prime}, j=0,1$. Since $T$ is linear and $s_{z}$ is a finite sum whose coefficients $a_{j}^{p / p_{z}}$ are entire functions of $z$, $$ \phi(z)=\int_{N} s_{z}^{\prime}(x) T s_{z}(x) d \nu(x) $$ is an analytic function of $z$. Also, using Lemma 4.8, Hölder's inequality and the assumption on $T$, $$ \sup _{y \in \mathbf{R}}|\phi(j+i y)| \leq M_{j}, \quad j=0,1 . $$ Thus by the three lines theorem, Lemma 4.6, we can conclude that $$ \left|\int_{N} s^{\prime} T s d \nu\right| \leq M_{0}^{1-t_{0}} M_{1}^{t_{0}} . $$ Since $s^{\prime}$ is an arbitrary simple function with norm 1 in $L^{q_{t_{0}}^{\prime}}$, we can conclude that $$ \|T s\|_{q_{t_{0}}} \leq M_{0}^{1-t_{0}} M_{1}^{t_{0}} . $$ Finally, since simple functions are dense in $L^{p_{t_{0}}}$, we may take a limit to conclude that $T$ can be extended to all of $L^{p_{t_{0}}}$ and is bounded. The next exercise may be used to carry out the extension of $T$ from simple functions to all of $L^{p_{t_{0}}}$. Exercise 4.10 Suppose $T: A \rightarrow Y$ is a map defined on a subset $A$ of a metric space $X$ into a metric space $Y$.
Show that if $T$ is uniformly continuous, then $T$ has a unique continuous extension $\bar{T}: \bar{A} \rightarrow Y$ to the closure of $A$, $\bar{A}$. If in addition, $X$ is a vector space, $A$ is a subspace and $T$ is linear, then the extension is also linear. Exercise 4.11 Show that if $T$ is a linear map (say defined on $\mathcal{S}\left(\mathbf{R}^{n}\right)$ ) which maps $L_{s_{j}}^{2}$ into $L_{r_{j}}^{2}$ for $j=0,1$, then $T$ maps $L_{s_{t}}^{2}$ into $L_{r_{t}}^{2}$ for $0<t<1$, where $s_{t}=(1-t) s_{0}+t s_{1}$ and $r_{t}=(1-t) r_{0}+t r_{1}$. ### Interpolation for analytic families of operators The main point of this section is that in the Riesz-Thorin theorem, we might as well let the operator $T$ depend on $z$. This is a very simple idea. We will see below that often a good deal of cleverness is needed in applying this theorem. I do not wish to get involved in the technicalities of analytic operator-valued functions. (And am not even sure if there are any technicalities needed here.) If one examines the above proof, we see that the hypothesis we will need on an operator $T_{z}$ is that for all sets of finite measure, $E \subset M$ and $F \subset N$, we have that $$ z \rightarrow \int_{N} \chi_{F} T_{z}\left(\chi_{E}\right) d \nu \tag{4.12} $$ is an analytic function of $z$. This hypothesis can often be verified by using Morera's theorem, which replaces the problem of determining analyticity by the simpler problem of checking that an integral condition holds. The integral condition can often be checked with Fubini's theorem. Theorem 4.13 (Stein's interpolation theorem) For $z$ in $S=\{z: 0 \leq \operatorname{Re} z \leq 1\}$, let $T_{z}$ be a family of linear operators defined on simple functions for which we have that the function in (4.12) is bounded and analytic in $S$. We assume that for $j=0,1$, $T_{j+i y}$ maps $L^{p_{j}}(M)$ to $L^{q_{j}}(N)$. Also assume that $1 \leq p_{0}<p_{1} \leq \infty$.
We let $p_{t}$ and $q_{t}$ have the meanings as in the Riesz-Thorin theorem and define $$ M_{t}=\sup _{y \in \mathbf{R}}\left\|T_{t+i y}\right\| $$ where $\left\|T_{t+i y}\right\|$ denotes the norm of $T_{t+i y}$ as an operator from $L^{p_{t}}(M)$ to $L^{q_{t}}(N)$. We conclude that $T_{t}$ maps $L^{p_{t}}$ to $L^{q_{t}}$ and we have $$ M_{t} \leq M_{0}^{1-t} M_{1}^{t} $$ The proof of this theorem is the same as the proof of the Riesz-Thorin theorem. Exercise 4.14 (Interpolation with change of measure) Suppose that $T$ is a linear map which maps $L^{p_{j}}(d \mu)$ into $L^{q_{j}}\left(\omega_{j} d \nu\right)$ for $j=0,1$. Suppose that $\omega_{0}$ and $\omega_{1}$ are two non-negative functions which are integrable on every set of finite measure in $N$. Show that $T$ maps $L^{p_{t}}(d \mu)$ into $L^{q_{t}}\left(\omega_{t} d \nu\right)$ for $0<t<1$. Here, $q_{t}$ and $p_{t}$ are defined as in the Riesz-Thorin theorem and $\omega_{t}=\omega_{0}^{1-t} \omega_{1}^{t}$. Exercise 4.15 Formulate and prove a similar theorem where both measures $\mu$ and $\nu$ are allowed to vary. ### Real methods In this section, we give a special case of the Marcinkiewicz interpolation theorem. This is a special case because we assume that the exponents $p_{j}=q_{j}$ are the same. The full theorem includes the off-diagonal case, which is only true when $q \geq p$. To indicate the idea of the proof, suppose that we have a map $T$ which is bounded on $L^{p_{0}}$ and $L^{p_{1}}$. If we take a function $f$ in $L^{p}$, with $p_{0}<p<p_{1}$, then we may truncate $f$ by setting $$ f_{\lambda}= \begin{cases}f, & |f| \leq \lambda \\ 0, & |f|>\lambda\end{cases} \tag{4.16} $$ and then $f^{\lambda}=f-f_{\lambda}$. Since $f^{\lambda}$ is in $L^{p_{0}}$ and $f_{\lambda}$ is in $L^{p_{1}}$ (indeed, $\left|f^{\lambda}\right|^{p_{0}} \leq \lambda^{p_{0}-p}|f|^{p}$ and $\left|f_{\lambda}\right|^{p_{1}} \leq \lambda^{p_{1}-p}|f|^{p}$), we can conclude that $T f=T f_{\lambda}+T f^{\lambda}$ is defined.
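The truncation just described, together with the distribution-function formula of Lemma 4.18 below, can be sketched on a grid. The sample function, the height $\lambda$, and the grids are illustrative choices.

```python
import numpy as np

# Sketch of the truncation f = f_lambda + f^lambda, and of the formula
#   ||f||_p^p = p \int_0^infty  m({|f| > t}) t^{p-1} dt
# that drives the Marcinkiewicz argument, computed on a one-dimensional grid.
x = np.linspace(-20, 20, 20001); dx = x[1] - x[0]
f = (1.0 + np.abs(x)) ** -0.75     # on R this lies in L^p exactly when p > 4/3
lam = 0.5

f_small = np.where(np.abs(f) <= lam, f, 0.0)   # f_lambda: bounded by lam
f_big = f - f_small                            # f^lambda: large values, small support
assert np.abs(f_small).max() <= lam and np.allclose(f_small + f_big, f)

p = 2.0
direct = np.sum(np.abs(f)**p) * dx                              # ||f||_p^p
t = np.linspace(1e-6, np.abs(f).max(), 2000); dt = t[1] - t[0]
measure = np.array([np.sum(np.abs(f) > s) * dx for s in t])     # m({|f| > s})
layer = p * np.sum(measure * t**(p - 1)) * dt

print(direct, layer)
assert abs(direct - layer) < 0.05 * direct
```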
As we shall see, if we are clever, we can do this splitting in such a way that not only is $T f$ defined, but we can also estimate the norm of $T f$ in $L^{p}$. The theorem applies to operators which are more general than bounded linear operators. Instead of requiring the operator $T$ to be bounded, we require the following condition. Let $0<q \leq \infty$ and $0<p<\infty$; we say that $T$ is weak-type $p, q$ if there exists a constant $A$ so that $$ \nu(\{x:|T f(x)|>\lambda\}) \leq\left(\frac{A\|f\|_{p}}{\lambda}\right)^{q} . $$ If $q=\infty$, then an operator is of weak-type $p, \infty$ if there exists a constant $A$ so that $$ \|T f\|_{\infty} \leq A\|f\|_{p} $$ We say that a map $T$ is strong-type $p, q$ if there is a constant $A$ so that $$ \|T f\|_{q} \leq A\|f\|_{p} $$ For linear operators, this condition is the same as boundedness. The justification for introducing the new term "strong-type" is that we are not requiring the operator $T$ to be linear. Exercise 4.17 Show that if $T$ is of strong-type $p, q$, then $T$ is of weak-type $p, q$. Hint: Use Chebyshev's inequality. The condition that $T$ is linear is replaced by the condition that $T$ is sub-linear. This means that for $f$ and $g$ in the domain of $T$, we have $$ |T(f+g)(x)| \leq|T f(x)|+|T g(x)| $$ The proof of the main theorem will rely on the following well-known representation of the $L^{p}$ norm of $f$. Lemma 4.18 Let $p<\infty$ and $f$ be measurable; then $$ \|f\|_{p}^{p}=p \int_{0}^{\infty} \mu(\{x:|f(x)|>\lambda\}) \lambda^{p} \frac{d \lambda}{\lambda} $$ Proof. It is easy to see that this holds for simple functions. Write a general function as an increasing limit of simple functions. Our main result is the following theorem: Theorem 4.19 Let $0<p_{0}<p_{1} \leq \infty$ and let $T$ take measurable functions on $M$ to measurable functions on $N$. Assume also that $T$ is sub-linear and that the domain of $T$ is closed under taking truncations.
If $T$ is of weak-type $p_{j}, p_{j}$ for $j=0,1$, then $T$ is of strong-type $p, p$ for $p_{0}<p<p_{1}$ and we have, when $p_{1}<\infty$, $$ \|T f\|_{p} \leq 2\left(\frac{p A_{0}^{p_{0}}}{p-p_{0}}+\frac{p_{1} A_{1}^{p_{1}}}{p_{1}-p}\right)^{1 / p}\|f\|_{p} . $$ When $p_{1}=\infty$, we obtain $$ \|T f\|_{p} \leq\left(1+A_{1}\right)\left(\frac{A_{0}^{p_{0}} p}{p-p_{0}}\right)^{1 / p}\|f\|_{p} . $$ Proof. We first consider the case when $p_{1}<\infty$. We fix $p=p_{t}$ with $0<t<1$, choose $f$ in the domain of $T$ and let $\lambda>0$. We write $f=f_{\lambda}+f^{\lambda}$ as in (4.16). Since $T$ is sub-linear and then weak-type $p_{j}, p_{j}$, we have that $$ \begin{aligned} \nu(\{x:|T f(x)|>2 \lambda\}) & \leq \nu\left(\left\{x:\left|T f^{\lambda}(x)\right|>\lambda\right\}\right)+\nu\left(\left\{x:\left|T f_{\lambda}(x)\right|>\lambda\right\}\right) \\ & \leq\left(\frac{A_{0}\left\|f^{\lambda}\right\|_{p_{0}}}{\lambda}\right)^{p_{0}}+\left(\frac{A_{1}\left\|f_{\lambda}\right\|_{p_{1}}}{\lambda}\right)^{p_{1}} . \end{aligned} \tag{4.20} $$ We use the representation of the $L^{p}$-norm in Lemma 4.18, the inequality (4.20) and the change of variables $2 \lambda \rightarrow \lambda$ to obtain $$ \begin{aligned} & 2^{-p}\|T f\|_{p}^{p} \leq A_{0}^{p_{0}} p p_{0} \int_{0}^{\infty} \int_{0}^{\infty} \mu\left(\left\{x:\left|f^{\lambda}(x)\right|>\tau\right\}\right) \tau^{p_{0}} \frac{d \tau}{\tau} \lambda^{p-p_{0}} \frac{d \lambda}{\lambda} \\ &+A_{1}^{p_{1}} p p_{1} \int_{0}^{\infty} \int_{0}^{\lambda} \mu\left(\left\{x:\left|f_{\lambda}(x)\right|>\tau\right\}\right) \tau^{p_{1}} \frac{d \tau}{\tau} \lambda^{p-p_{1}} \frac{d \lambda}{\lambda} . \end{aligned} \tag{4.21} $$ Note that the second integral on the right extends only to $\lambda$ since $f_{\lambda}$ satisfies the inequality $\left|f_{\lambda}\right| \leq \lambda$. We consider the second term first.
We use that $\mu\left(\left\{x:\left|f_{\lambda}(x)\right|>\tau\right\}\right) \leq \mu(\{x:|f(x)|>\tau\})$ and thus Tonelli's theorem gives that the second term is bounded by $$ \begin{aligned} p p_{1} \int_{0}^{\infty} \mu(\{x:|f(x)|>\tau\}) \tau^{p_{1}} \int_{\tau}^{\infty} \lambda^{p-p_{1}} \frac{d \lambda}{\lambda} \frac{d \tau}{\tau} & \leq \frac{p p_{1}}{p_{1}-p} \int_{0}^{\infty} \mu(\{x:|f(x)|>\tau\}) \tau^{p} \frac{d \tau}{\tau} \\ & =\frac{p_{1}}{p_{1}-p}\|f\|_{p}^{p} . \end{aligned} \tag{4.22} $$ We now consider the first term on the right of the inequality sign in (4.21). We observe that when $\tau>\lambda$, $\mu\left(\left\{x:\left|f^{\lambda}(x)\right|>\tau\right\}\right)=\mu(\{x:|f(x)|>\tau\})$, while when $\tau \leq \lambda$, we have $\mu\left(\left\{x:\left|f^{\lambda}(x)\right|>\tau\right\}\right)=\mu(\{x:|f(x)|>\lambda\})$. Thus, splitting the inner integral at $\tau=\lambda$ and using Tonelli's theorem in the first piece as before, we have $$ \begin{aligned} p p_{0} & \int_{0}^{\infty} \int_{0}^{\infty} \mu\left(\left\{x:\left|f^{\lambda}(x)\right|>\tau\right\}\right) \tau^{p_{0}} \frac{d \tau}{\tau} \lambda^{p-p_{0}} \frac{d \lambda}{\lambda} \\ &= p p_{0} \int_{0}^{\infty} \int_{\lambda}^{\infty} \mu(\{x:|f(x)|>\tau\}) \tau^{p_{0}} \frac{d \tau}{\tau} \lambda^{p-p_{0}} \frac{d \lambda}{\lambda}+p \int_{0}^{\infty} \mu(\{x:|f(x)|>\lambda\}) \lambda^{p} \frac{d \lambda}{\lambda} \\ &=\left(\frac{p_{0}}{p-p_{0}}+1\right)\|f\|_{p}^{p} . \end{aligned} \tag{4.23} $$ Using the estimates (4.22) and (4.23) in (4.21) gives $$ \|T f\|_{p}^{p} \leq 2^{p}\left(\frac{p A_{0}^{p_{0}}}{p-p_{0}}+\frac{p_{1} A_{1}^{p_{1}}}{p_{1}-p}\right)\|f\|_{p}^{p}, $$ which is what we hoped to prove. Finally, we consider the case $p_{1}=\infty$. Since $T$ is of weak-type $\infty, \infty$, we can conclude that, with $f_{\lambda}$ as above, $\nu\left(\left\{x:\left|T f_{\lambda}(x)\right|>A_{1} \lambda\right\}\right)=0$.
To see how to use this, we write $$ \begin{aligned} \nu\left(\left\{x:|T f(x)|>\left(1+A_{1}\right) \lambda\right\}\right) & \leq \nu\left(\left\{x:\left|T f^{\lambda}(x)\right|>\lambda\right\}\right)+\nu\left(\left\{x:\left|T f_{\lambda}(x)\right|>A_{1} \lambda\right\}\right) \\ & =\nu\left(\left\{x:\left|T f^{\lambda}(x)\right|>\lambda\right\}\right) . \end{aligned} $$ Thus, using Lemma 4.18, the fact that $T$ is of weak-type $p_{0}, p_{0}$, and the calculation in (4.23), we have $$ \begin{aligned} \left(1+A_{1}\right)^{-p}\|T f\|_{p}^{p} & \leq A_{0}^{p_{0}} p p_{0} \int_{0}^{\infty} \int_{0}^{\infty} \mu\left(\left\{x:\left|f^{\lambda}(x)\right|>\tau\right\}\right) \tau^{p_{0}} \frac{d \tau}{\tau} \lambda^{p-p_{0}} \frac{d \lambda}{\lambda} \\ & \leq \frac{A_{0}^{p_{0}} p}{p-p_{0}}\|f\|_{p}^{p} . \end{aligned} $$ ## Chapter 5 ## The Hardy-Littlewood maximal function In this chapter, we introduce the Hardy-Littlewood maximal function and prove the Lebesgue differentiation theorem. This is the missing step of the Fourier uniqueness theorem in Chapter 1. Since the material in this chapter is familiar from real analysis, we will omit some of the details. In this chapter, we will work on $\mathbf{R}^{n}$ with Lebesgue measure. ### The $L^{p}$-inequalities We let $\chi=n \chi_{B_{1}(0)} / \omega_{n-1}$ be a multiple of the characteristic function of the unit ball, normalized so that $\int \chi d x=1$, and then we set $\chi_{r}(x)=r^{-n} \chi(x / r)$. If $f$ is a measurable function, we define the Hardy-Littlewood maximal function by $$ M f(x)=\sup _{r>0}|f| * \chi_{r}(x) . $$ Here and throughout these notes, we use $m(E)$ to denote the Lebesgue measure of a set $E$. Note that the Hardy-Littlewood maximal function is defined as the supremum of an uncountable family of functions. Thus, the sort of person ${ }^{1}$ who is compulsive about details might worry that $M f$ may not be measurable. The following lemma implies the measurability of $M f$.
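Measurability aside, $M f$ is easy to approximate on a grid. The brute-force sketch below (grid, radii, and test function are choices made for the demo) computes $M f$ for $f=\chi_{(-1,1)}$ in one dimension; for $|x|>1$ one can compute the exact value $M f(x)=1 /(1+|x|)$, which already shows that $M f$ of an integrable function need not be integrable (compare Exercises 5.4 and 5.5 below).

```python
import numpy as np

# Brute-force sketch of the one-dimensional maximal function
#   Mf(x) = sup_{r>0} (2r)^{-1} \int_{x-r}^{x+r} |f(y)| dy
# using running sums.  A ball of radius r ~ (k + 1/2)*dx contains 2k+1
# grid points; windows are clipped at the edge of the grid, where f
# vanishes anyway for this example.
x = np.linspace(-8.0, 8.0, 1601); dx = x[1] - x[0]
f = ((x > -1) & (x < 1)).astype(float)          # chi_(-1,1) on the grid

F = np.concatenate([[0.0], np.cumsum(np.abs(f)) * dx])  # running integral of |f|
n = len(f); idx = np.arange(n)
Mf = np.zeros(n)
for k in range(1, n):
    lo = np.clip(idx - k, 0, n - 1)
    hi = np.clip(idx + k, 0, n - 1)
    Mf = np.maximum(Mf, (F[hi + 1] - F[lo]) / ((2 * k + 1) * dx))

i = np.argmin(np.abs(x - 4.0))                  # a point with |x| = 4 > 1
assert abs(Mf[i] - 1.0 / (1.0 + 4.0)) < 0.05    # exact value is 1/(1+|x|)
```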
Lemma 5.1 If $f$ is measurable, then $M f$ is lower semi-continuous. ${ }^{1}$ Though not your instructor. Proof. If $M f(x)>\lambda$, then we can find a radius $r$ so that $$ \frac{1}{m\left(B_{r}(x)\right)} \int_{B_{r}(x)}|f(y)| d y>\lambda . $$ Since this inequality is strict, for $s$ slightly larger than $r$, say $r<s<r+\delta$, we still have $$ \frac{1}{m\left(B_{s}(x)\right)} \int_{B_{r}(x)}|f(y)| d y>\lambda . $$ But then by the monotonicity of the integral, $$ M f(z)>\lambda $$ if $B_{s}(z) \supset B_{r}(x)$, that is, if $|z-x| \leq s-r$. We have shown that the set $\{x: M f(x)>\lambda\}$ is open, which is exactly lower semi-continuity. Exercise 5.2 If $\left\{f_{\alpha}: \alpha \in A\right\}$ is a family of continuous real-valued functions on $\mathbf{R}^{n}$, show that $$ g(x)=\sup _{\alpha \in A} f_{\alpha}(x) $$ is lower semi-continuous. If $f$ is locally integrable, then $\chi_{r} * f$ is continuous for each $r>0$ and the previous exercise can be used to establish the lower semi-continuity of $M f$. Our previous lemma also applies to functions for which the integral over a ball may be infinite. Oops. At this point, I am not sure if I have defined local integrability. We say that a function is locally integrable if it is in $L_{l o c}^{1}\left(\mathbf{R}^{n}\right)$. We say that a function $f$ is in $L_{l o c}^{p}\left(\mathbf{R}^{n}\right)$ if $f \in L^{p}(K)$ for each compact set $K$. If one were interested (and we are not), one could define a topology by defining the semi-norms, $$ \rho_{n}(f)=\|f\|_{L^{p}\left(B_{n}(0)\right)}, \quad \text { for } n=1,2 \ldots $$ and then using this countable family of semi-norms, construct a metric as we did in defining the topology on the Schwartz space. Exercise 5.3 Show that a sequence converges in the metric for $L_{l o c}^{p}\left(\mathbf{R}^{n}\right)$ if and only if the sequence converges in $L^{p}(K)$ for each compact set $K$. Exercise 5.4 Let $f=\chi_{(-1,1)}$ on the real line. Show that $M f(x) \geq 1 /(2|x|)$ if $|x|>1$.
Conclude that $M f$ is not in $L^{1}$. Exercise 5.5 Show that if $M f$ is in $L^{1}\left(\mathbf{R}^{n}\right)$, then $f$ is zero. The first main fact about the Hardy-Littlewood maximal function is that it is finite almost everywhere if $f$ is in $L^{1}$. This is a consequence of the following theorem. Theorem 5.6 If $f$ is measurable and $\lambda>0$, then there exists a constant $C=C(n)$ so that $$ m(\{x: M f(x)>\lambda\}) \leq \frac{C}{\lambda} \int_{\mathbf{R}^{n}}|f(x)| d x . $$ The observant reader will realize that this theorem asserts that the Hardy-Littlewood maximal operator is of weak-type $1,1$. It is easy to see that it is sub-linear and of weak-type $\infty, \infty$ and thus by the diagonal case of the Marcinkiewicz interpolation theorem, Theorem 4.19, we can conclude it is of strong-type $p, p$ for $1<p \leq \infty$. The proof of this theorem depends on a lemma which allows us to extract from a collection of balls a subcollection whose elements are disjoint and whose total measure is relatively large. Lemma 5.7 Let $\beta=1 /\left(2 \cdot 3^{n}\right)$. If $E$ is a measurable set of finite measure in $\mathbf{R}^{n}$ and we have a collection of balls $\mathcal{B}=\left\{B_{\alpha}\right\}_{\alpha \in A}$ so that $E \subset \cup B_{\alpha}$, then we can find a subcollection of the balls $\left\{B_{1}, \ldots, B_{N}\right\}$ which are pairwise disjoint and which satisfy $$ \sum_{j=1}^{N} m\left(B_{j}\right) \geq \beta m(E) . $$ Proof. We may find $K \subset E$ which is compact and with $m(K)>m(E) / 2$. Since $K$ is compact, there is a finite sub-collection of the balls $\mathcal{B}_{1} \subset \mathcal{B}$ which cover $K$. We let $B_{1}$ be the largest ball in $\mathcal{B}_{1}$ and then we let $\mathcal{B}_{2}$ be the balls in $\mathcal{B}_{1}$ which do not intersect $B_{1}$. We choose $B_{2}$ to be the largest ball in $\mathcal{B}_{2}$ and continue until $\mathcal{B}_{N+1}$ is empty. The balls $B_{1}, B_{2}, \ldots, B_{N}$ are disjoint by construction.
If $B$ is a ball in $\mathcal{B}_{1}$, then either $B$ is one of the chosen balls, call it $B_{j_{0}}$, or $B$ was discarded in going from $\mathcal{B}_{j_{0}}$ to $\mathcal{B}_{j_{0}+1}$ for some $j_{0}$. In either case, $B$ intersects one of the chosen balls, $B_{j_{0}}$, and $B$ has radius which is less than or equal to the radius of $B_{j_{0}}$. Hence, we know that $$ K \subset \cup_{B \in \mathcal{B}_{1}} B \subset \cup_{j=1}^{N} 3 B_{j} $$ where if $B_{j}=B_{r}(x)$, then $3 B_{j}=B_{3 r}(x)$. Taking the measure of the sets $K$ and $\cup 3 B_{j}$, we obtain $$ m(E) \leq 2 m(K) \leq 2 \cdot 3^{n} \sum_{j=1}^{N} m\left(B_{j}\right) . $$ Now, we can give the proof of the weak-type 1,1 estimate for $M f$ in Theorem 5.6. Proof. (Proof of Theorem 5.6) We let $E_{\lambda}=\{x: M f(x)>\lambda\}$ and choose a measurable set $E \subset E_{\lambda}$ which is of finite measure. For each $x \in E$, there is a ball $B_{x}$ so that $$ m\left(B_{x}\right)^{-1} \int_{B_{x}}|f(y)| d y>\lambda . \tag{5.8} $$ We apply Lemma 5.7 to the collection of balls $\mathcal{B}=\left\{B_{x}: x \in E\right\}$ to find a sub-collection $\left\{B_{1}, \ldots, B_{N}\right\} \subset \mathcal{B}$ of disjoint balls so that $$ \frac{m(E)}{2 \cdot 3^{n}} \leq \sum_{j=1}^{N} m\left(B_{j}\right) \leq \frac{1}{\lambda} \sum_{j=1}^{N} \int_{B_{j}}|f(y)| d y \leq \frac{\|f\|_{1}}{\lambda} . $$ The first inequality above is part of Lemma 5.7, the second is (5.8) and the last holds because the balls $B_{j}$ are disjoint. Since $E$ is an arbitrary, measurable subset of $E_{\lambda}$ of finite measure, we can take the supremum over all such $E$ and conclude that $E_{\lambda}$ also satisfies $$ m\left(E_{\lambda}\right) \leq \frac{2 \cdot 3^{n}\|f\|_{1}}{\lambda} . $$ Frequently, in analysis it becomes burdensome to keep track of the exact value of the constant $C$ appearing in the inequality.
In the next theorem and throughout these notes, we will give the constant and the parameters it depends on without computing its exact value. In the course of a proof, the value of a constant $C$ may change from one occurrence to the next. Thus, the expression $C=2 C$ is true even if $C \neq 0$!

Theorem 5.9 If $f$ is measurable and $1<p \leq \infty$, then there exists a constant $C=C(n)$ so that

$$
\|M f\|_{p} \leq \frac{C p}{p-1}\|f\|_{p} .
$$

Proof. This follows from the weak-type 1,1 estimate in Theorem 5.6, the elementary inequality $\|M f\|_{\infty} \leq\|f\|_{\infty}$ and Theorem 4.19. The dependence of the constant can be read off from the constant in Theorem 4.19.

### Differentiation theorems

The Hardy-Littlewood maximal function is a gadget which can be used to study the identity operator. At first, this may sound like a silly thing to do: what could be easier to understand than the identity? We will illustrate that the identity operator can be interesting by using the Hardy-Littlewood maximal function to prove the Lebesgue differentiation theorem, which exhibits the identity operator as a pointwise limit of averages on balls. In fact, we will prove a more general result which was used in the proof of the Fourier inversion theorem of Chapter 1. This theorem amounts to a complicated representation of the identity operator. If this does not convince you that the identity operator is interesting, in a few chapters we will introduce approximations of the zero operator, $f \rightarrow 0$.

The maximal function is constructed by averaging using balls; however, it is not hard to see that many radially symmetric averaging processes can be estimated using $M$. The following useful result is lifted from Stein's book [14]. Before stating this proposition, given a function $\phi$ on $\mathbf{R}^{n}$, we define the non-increasing radial majorant of $\phi$ by

$$
\phi^{*}(x)=\sup _{|y|>|x|}|\phi(y)| .
$$

Proposition 5.10 If $\phi$ is in $L^{1}$ and $f$ is in $L^{p}$, then

$$
\sup _{r>0}\left|\phi_{r} * f(x)\right| \leq\left(\int \phi^{*}(x) d x\right) M f(x) .
$$

Proof. It suffices to prove the inequality

$$
\phi_{r} * f(x) \leq\left(\int \phi(x) d x\right) M f(x)
$$

when $\phi$ is non-negative and radially non-increasing, and thus $\phi=\phi^{*}$ a.e. Also, we may assume $f \geq 0$. We begin with the special case when $\phi(x)=\sum_{j} a_{j} \chi_{B_{\rho_{j}}(0)}(x)$; multiplying and dividing by $m\left(B_{r \rho_{j}}(x)\right)$ gives

$$
\begin{aligned}
\phi_{r} * f(x) & =r^{-n} \sum_{j} a_{j} m\left(B_{r \rho_{j}}(x)\right) \frac{1}{m\left(B_{r \rho_{j}}(x)\right)} \int_{B_{r \rho_{j}}(x)} f(y) d y \\
& \leq r^{-n} M f(x) \sum_{j} a_{j} m\left(B_{r \rho_{j}}(x)\right) \\
& =M f(x) \int \phi .
\end{aligned}
$$

The remainder of the proof is a picture. We can write a general, non-increasing, radial function as an increasing limit of sums of characteristic functions of balls. The monotone convergence theorem and the special case already treated imply that $\phi_{r} * f(x) \leq M f(x) \int \phi d x$, and the Proposition follows.

Finally, we give the result that is needed in the proof of the Fourier inversion theorem. We begin with a lemma. Note that this lemma suffices to prove the Fourier inversion theorem in the class of Schwartz functions. The full differentiation theorem is only needed when $f$ is in $L^{1}$.

Lemma 5.11 If $f$ is continuous and bounded on $\mathbf{R}^{n}$ and $\phi \in L^{1}\left(\mathbf{R}^{n}\right)$, then for all $x$,

$$
\lim _{\epsilon \rightarrow 0^{+}} \phi_{\epsilon} * f(x)=f(x) \int \phi .
$$

Proof. Fix $x$ in $\mathbf{R}^{n}$ and $\eta>0$. Recall that $\int \phi_{\epsilon}$ is independent of $\epsilon$ and thus we have

$$
\phi_{\epsilon} * f(x)-f(x) \int \phi(x) d x=\int \phi_{\epsilon}(y)(f(x-y)-f(x)) d y .
$$

Since $f$ is continuous at $x$, there exists $\delta>0$ so that $|f(x-y)-f(x)|<\eta$ if $|y|<\delta$. In the last integral above, we consider $|y|<\delta$ and $|y| \geq \delta$ separately.
We use the continuity of $f$ when $|y|$ is small and the boundedness of $|f|$ for $|y|$ large to obtain:

$$
\left|\phi_{\epsilon} * f(x)-f(x) \int \phi d x\right| \leq \eta \int_{\{y:|y|<\delta\}}\left|\phi_{\epsilon}(y)\right| d y+2\|f\|_{\infty} \int_{\{y:|y|>\delta\}}\left|\phi_{\epsilon}(y)\right| d y .
$$

The first term on the right is at most $\eta\|\phi\|_{1}$, which is finite since $\phi$ is in $L^{1}$, and in the second term a change of variables and the dominated convergence theorem imply that

$$
\lim _{\epsilon \rightarrow 0^{+}} \int_{\{y:|y|>\delta\}}\left|\phi_{\epsilon}(y)\right| d y=\lim _{\epsilon \rightarrow 0^{+}} \int_{\{y:|y|>\delta / \epsilon\}}|\phi(y)| d y=0 .
$$

Thus, we conclude that

$$
\limsup _{\epsilon \rightarrow 0^{+}}\left|\phi_{\epsilon} * f(x)-f(x) \int \phi(y) d y\right| \leq \eta \int|\phi| d y .
$$

Since $\eta>0$ is arbitrary, the conclusion of the lemma follows.

Theorem 5.12 If $\phi$ has a radial non-increasing majorant in $L^{1}$, and $f$ is in $L^{p}$ for some $p, 1 \leq p \leq \infty$, then for a.e. $x \in \mathbf{R}^{n}$,

$$
\lim _{\epsilon \rightarrow 0^{+}} \phi_{\epsilon} * f(x)=f(x) \int \phi d x .
$$

Proof. The proofs for $p=1$, $1<p<\infty$ and $p=\infty$ are each slightly different. Let $\theta(f)(x)=\limsup _{\epsilon \rightarrow 0^{+}}\left|\phi_{\epsilon} * f(x)-f(x) \int \phi\right|$. Our goal is to show that $\theta(f)=0$ a.e. Observe that, according to Lemma 5.11, if $g$ is continuous and bounded, then

$$
\theta(f)=\theta(f-g) .
$$

Also, according to Proposition 5.10, there is a constant $C$ so that with $I=\left|\int \phi\right|$,

$$
\theta(f-g)(x) \leq|f(x)-g(x)| I+C M(f-g)(x) .
$$

If $f$ is in $L^{1}$ and $\lambda>0$, we have for any bounded and continuous $g$ that

$$
\begin{aligned}
m(\{x: \theta(f)(x)>\lambda\}) & \leq m(\{x: \theta(f-g)(x)>\lambda / 2\})+m(\{x: I|f(x)-g(x)|>\lambda / 2\}) \\
& \leq \frac{C}{\lambda} \int_{\mathbf{R}^{n}}|f(x)-g(x)| d x .
\end{aligned}
$$

The first inequality uses (5.13) and the second uses the weak-type 1,1 property of the maximal function and Chebyshev's inequality. Since we can approximate $f$ in the $L^{1}$ norm by functions $g$ which are bounded and continuous, we conclude that $m(\{x: \theta(f)(x)>\lambda\})=0$. Since this holds for each $\lambda>0$, we have that $m(\{x: \theta(f)(x)>0\})=0$.

If $f$ is in $L^{p}, 1<p<\infty$, then we can argue as above and use the fact that the maximal operator is of strong-type $p, p$ to conclude that for any continuous and bounded $g$,

$$
m(\{x: \theta(f)(x)>\lambda\}) \leq \frac{C}{\lambda^{p}} \int|f(x)-g(x)|^{p} d x .
$$

Again, continuous and bounded functions are dense in $L^{p}$ if $p<\infty$, so we can conclude $\theta(f)=0$ a.e.

Finally, if $p=\infty$, we claim that for each natural number $N$, the set $\{x: \theta(f)(x)>0$ and $|x|<N\}$ has measure zero. This implies the theorem. To establish the claim, we write $f=\chi_{B_{2 N}(0)} f+\left(1-\chi_{B_{2 N}(0)}\right) f=f_{1}+f_{2}$. Since $f_{1}$ is in $L^{p}$ for each finite $p$, we have $\theta\left(f_{1}\right)=0$ a.e., and it is easy to see that $\theta\left(f_{2}\right)(x)=0$ if $|x|<2 N$. Since $\theta(f)(x) \leq \theta\left(f_{1}\right)(x)+\theta\left(f_{2}\right)(x)$, the claim follows.

The standard Lebesgue differentiation theorem is a special case of the result proved above.

Corollary 5.14 If $f$ is in $L_{l o c}^{1}\left(\mathbf{R}^{n}\right)$, then for a.e. $x$,

$$
f(x)=\lim _{r \rightarrow 0^{+}} \frac{1}{m\left(B_{r}(x)\right)} \int_{B_{r}(x)} f(y) d y .
$$

Corollary 5.15 If $f$ is in $L_{l o c}^{1}\left(\mathbf{R}^{n}\right)$, then there is a measurable set $E$, with $\mathbf{R}^{n} \backslash E$ of Lebesgue measure 0 , so that

$$
\lim _{r \rightarrow 0^{+}} \frac{1}{m\left(B_{r}(x)\right)} \int_{B_{r}(x)}|f(y)-f(x)| d y=0, \quad x \in E .
$$

We omit the proof of this last Corollary. The set $E$ from the previous theorem is called the Lebesgue set of $f$.
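For a continuous $f$ the conclusion of Corollary 5.14 holds at every point, and the rate of convergence is visible numerically. The function $y \mapsto y^2$, the base point and the radii below are arbitrary choices for the illustration.

```python
# Averages over shrinking intervals converge to f(x) (Corollary 5.14 in
# one dimension, at a point of continuity).  For f(y) = y*y the average
# over (x - r, x + r) is x*x + r*r/3, so the error decays like r**2.

def average(f, x, r, n=2000):
    """Midpoint-rule approximation of the average of f over (x - r, x + r)."""
    h = 2.0 * r / n
    return sum(f(x - r + (k + 0.5) * h) for k in range(n)) * h / (2.0 * r)

f = lambda y: y * y
x = 0.5
errors = [abs(average(f, x, r) - f(x)) for r in (0.4, 0.2, 0.1, 0.05)]
# errors shrink roughly like r*r/3: about 0.053, 0.013, 0.0033, 0.00083
```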
It is clear from the definition that the choice of the representative of $f$ may change $E$ by a set of measure zero.

## Chapter 6

## Singular integrals

In this section, we will introduce a class of symbols for which the multiplier operators introduced in Chapter 3 are also bounded on $L^{p}$. The operators we consider are modelled on the Hilbert transform and the Riesz transforms. They were systematically studied by Calderón and Zygmund in the 1950's and are typically called Calderón-Zygmund operators. These operators are (almost) examples of pseudo-differential operators of order zero. The distinction between Calderón-Zygmund operators and pseudo-differential operators is the viewpoint from which the operators are studied. If one studies the operator as a convolution operator, which seems to be needed to make estimates in $L^{p}$, then one is doing Calderón-Zygmund theory. If one is studying the operator as a multiplier, which is more efficient for computing inverses and compositions, then one is studying pseudo-differential operators. One feature of pseudo-differential operators is that there is a general flexible theory for variable coefficient symbols. Our symbols will only depend on the frequency variable $\xi$.

### Calderón-Zygmund kernels

In this chapter, we will consider linear operators $T: \mathcal{S}\left(\mathbf{R}^{n}\right) \rightarrow \mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$. In addition, we assume that $T$ has a kernel $K: \mathbf{R}^{n} \times \mathbf{R}^{n} \rightarrow \mathbf{C}$ which gives the action of $T$ away from the diagonal. The kernel $K$ is a function which is locally integrable on $\mathbf{R}^{n} \times \mathbf{R}^{n} \backslash\{(x, y): x=y\}$.
That $K$ gives the action of $T$ away from the diagonal means that for any two functions $f$ and $g$ in $\mathcal{D}\left(\mathbf{R}^{n}\right)$ which have disjoint support, we have that

$$
T f(g)=\int_{\mathbf{R}^{2 n}} K(x, y) f(y) g(x) d x d y .
$$

Note that the left-hand side of this equation denotes the distribution $T f$ paired with the function $g$. We say that $K$ is a Calderón-Zygmund kernel if there is a constant $C_{K}$ so that $K$ satisfies the following two estimates:

$$
\begin{aligned}
|K(x, y)| & \leq \frac{C_{K}}{|x-y|^{n}} \\
\left|\nabla_{x} K(x, y)\right|+\left|\nabla_{y} K(x, y)\right| & \leq \frac{C_{K}}{|x-y|^{n+1}} .
\end{aligned}
$$

Exercise 6.4 Show that the kernel is uniquely determined by the operator.

Exercise 6.5 What is the kernel of the identity operator?

Exercise 6.6 Let $\alpha$ be a multi-index. What is the kernel of the operator

$$
T \phi=\frac{\partial^{\alpha} \phi}{\partial x^{\alpha}} ?
$$

Conclude that the operator is not uniquely determined by the kernel.

If an operator $T$ has a Calderón-Zygmund kernel $K$ as described above and $T$ is $L^{2}$-bounded, then $T$ is said to be a Calderón-Zygmund operator. In this chapter, we will prove two main results. We will show that Calderón-Zygmund operators are also $L^{p}$-bounded, $1<p<\infty$, and we will show that a large class of multiplier operators are Calderón-Zygmund operators.

Since Calderón-Zygmund kernels are locally bounded in the complement of $\{(x, y): x=y\}$, if $f$ and $g$ are $L^{2}$ and have disjoint compact supports, then (6.1) continues to hold. To see this we approximate $f$ and $g$ by smooth functions and note that we can arrange that we only increase the support by a small amount when we approximate.

Exercise 6.7 Suppose that $\Omega$ is a smooth function near the sphere $\mathbf{S}^{n-1} \subset \mathbf{R}^{n}$. Show that

$$
K(x, y)=\Omega\left(\frac{x-y}{|x-y|}\right) \frac{1}{|x-y|^{n}}
$$

is a Calderón-Zygmund kernel.
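For the one-dimensional kernel $K(x, y)=1 /(x-y)$ (Exercise 6.7 with $n=1$ and $\Omega(t)=t$), the gradient estimate forces a cancellation: the integral of $|K(z, x)-K(z, y)|$ over $|z-x|>2 d$ stays bounded independently of $d$ when $|x-y| \leq d$. The quadrature parameters below are choices; with $x=0$ and $y=d$ the integral can be computed in closed form to be $\log 3$.

```python
import math

# Numerical check of the cancellation estimate for K(x, y) = 1/(x - y):
# with x = 0, y = t and |t| <= d, integrate |1/z - 1/(z - t)| over
# 2d < |z| < R.  The answer is log 3 when t = d, independently of d.

def cancellation_integral(d, t, R_factor=1e4, n=20000):
    total = 0.0
    for sign in (1.0, -1.0):
        a, b = math.log(2.0 * d), math.log(R_factor * d)
        h = (b - a) / n
        for k in range(n):
            z = sign * math.exp(a + (k + 0.5) * h)          # log-spaced midpoints
            total += abs(1.0 / z - 1.0 / (z - t)) * abs(z) * h  # dz = |z| d(log|z|)
    return total

values = [cancellation_integral(d, d) for d in (0.5, 1.0, 2.0)]
# each value is close to log 3 ~ 1.0986, at every scale d
```

The scale invariance of the kernel makes the three values agree exactly, which is the one-dimensional shadow of the dilation invariance of the class of Calderón-Zygmund kernels.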
Exercise 6.8 If $n \geq 3$ and $j, k$ are in $\{1, \ldots, n\}$, then

$$
\frac{\partial^{2}}{\partial x_{j} \partial x_{k}} \frac{1}{|x-y|^{n-2}}
$$

is a Calderón-Zygmund kernel. Of course, this result is also true for $n=2$, but it is not very interesting. In two dimensions, show that for any $j$ and $k$,

$$
\frac{\partial^{2}}{\partial x_{j} \partial x_{k}} \log |x-y|
$$

is a Calderón-Zygmund kernel.

Theorem 6.9 If $T$ is a Calderón-Zygmund operator, then for $1<p<\infty$ there is a constant $C$ so that

$$
\|T f\|_{p} \leq C\|f\|_{p} .
$$

The constant $C \leq A \max \left(p, p^{\prime}\right)$ where $A$ depends on the dimension $n$, the constant in the estimates for the Calderón-Zygmund kernel and the bound for $T$ on $L^{2}$.

The main step of the proof is to prove a weak-type 1,1 estimate for $T$ and then to interpolate to obtain the range $1<p<2$. The range $2<p<\infty$ follows by applying the first case to the adjoint of $T$.

Exercise 6.10 Let $\mathcal{H}$ be a Hilbert space with inner product $\langle\cdot, \cdot\rangle$. If $T: \mathcal{H} \rightarrow \mathcal{H}$ is a bounded linear map, then the map $x \rightarrow\langle T x, y\rangle$ defines a linear functional on $\mathcal{H}$. Hence, there is a unique element $y^{*}$ so that $\langle T x, y\rangle=\left\langle x, y^{*}\right\rangle$. a) Show that the map $y \rightarrow y^{*}=T^{*} y$ is linear and bounded. b) Suppose now that $T$ is bounded on the Hilbert space $L^{2}$ and that, in addition to being bounded on $L^{2}$, the map $T$ satisfies $\|T f\|_{p} \leq A\|f\|_{p}$, say for all $f$ in $L^{2}$.
Show that $\left\|T^{*} f\right\|_{p^{\prime}} \leq A\|f\|_{p^{\prime}}$.

Exercise 6.11 If $T$ is a Calderón-Zygmund operator, then show that $T^{*}$ is also a Calderón-Zygmund operator and that the kernel of $T^{*}$ is

$$
K^{*}(x, y)=\bar{K}(y, x) .
$$

Exercise 6.12 If $T_{m}$ is a multiplier operator with bounded symbol, show that the adjoint is a multiplier operator with symbol $\bar{m}$, $T_{m}^{*}=T_{\bar{m}}$.

Theorem 6.13 If $T$ is a Calderón-Zygmund operator, $f$ is in $L^{2}\left(\mathbf{R}^{n}\right)$ and $\lambda>0$, then

$$
m(\{x:|T f(x)|>\lambda\}) \leq \frac{C}{\lambda} \int_{\mathbf{R}^{n}}|f(x)| d x .
$$

This result depends on the following decomposition lemma for functions. In this Lemma, we will use cubes on $\mathbf{R}^{n}$. By a cube, we mean a set of the form $Q_{h}(x)=\left\{y:\left|x_{j}-y_{j}\right| \leq h / 2\right\}$. We let $\mathcal{D}_{0}$ be the mesh of cubes with sidelength 1 and whose vertices have integer coordinates. For $k$ an integer, we define $\mathcal{D}_{k}$ to be the cubes obtained by applying the dilation $x \rightarrow 2^{k} x$ to each cube in $\mathcal{D}_{0}$. The cubes in $\mathcal{D}_{k}$ have sidelength $2^{k}$ and are obtained by bisecting each of the sides of the cubes in $\mathcal{D}_{k+1}$. Thus, if we take any two cubes $Q$ and $Q^{\prime}$ in $\mathcal{D}=\cup_{k} \mathcal{D}_{k}$, then either one is contained in the other, or the two cubes have disjoint interiors. Also, given a cube $Q$, we will refer to the $2^{n}$ cubes obtained by dividing $Q$ as the children of $Q$. And of course, if $Q$ is a child of $Q^{\prime}$, then $Q^{\prime}$ is a parent of $Q$. The collection of cubes $\mathcal{D}$ will be called the dyadic cubes on $\mathbf{R}^{n}$.

Lemma 6.14 (Calderón-Zygmund decomposition) If $f \in L^{1}\left(\mathbf{R}^{n}\right)$ and $\lambda>0$, then we can find a family of cubes $Q_{k}$ with disjoint interiors so that $|f(x)| \leq \lambda$ a.e.
in $\mathbf{R}^{n} \backslash \cup_{k} Q_{k}$ and for each cube we have

$$
\lambda<\frac{1}{m\left(Q_{k}\right)} \int_{Q_{k}}|f(x)| d x \leq 2^{n} \lambda .
$$

As a consequence, we can write $f=g+b$ where $|g(x)| \leq 2^{n} \lambda$ a.e. and $b=\sum b_{k}$ where each $b_{k}$ is supported in one of the cubes $Q_{k}$, each $b_{k}$ has mean value zero $\int b_{k}=0$ and satisfies $\left\|b_{k}\right\|_{1} \leq 2 \int_{Q_{k}}|f| d x$. The function $g$ satisfies $\|g\|_{1} \leq\|f\|_{1}$.

Proof. Given $f \in L^{1}$ and $\lambda>0$, we let $\mathcal{E}$ be the collection of cubes $Q \in \mathcal{D}$ which satisfy the inequality

$$
\frac{1}{m(Q)} \int_{Q}|f(x)| d x>\lambda .
$$

Note that because $f \in L^{1}$, if $m(Q)^{-1}\|f\|_{1} \leq \lambda$, then the cube $Q$ will not be in $\mathcal{E}$. That is, $\mathcal{E}$ does not contain cubes of arbitrarily large sidelength. Hence, for each cube $Q^{\prime}$ in $\mathcal{E}$, there is a largest cube $Q$ in $\mathcal{E}$ which contains $Q^{\prime}$. We let these maximal cubes form the collection $\left\{Q_{k}\right\}$, which we index in an arbitrary way. If $Q_{k}^{\prime}$ is the parent of $Q_{k}$, then $Q_{k}^{\prime}$ is not in $\mathcal{E}$ and hence the inequality (6.15) fails for $Q_{k}^{\prime}$. This implies that we have

$$
\int_{Q_{k}}|f(x)| d x \leq \int_{Q_{k}^{\prime}}|f(x)| d x \leq \lambda m\left(Q_{k}^{\prime}\right)=2^{n} m\left(Q_{k}\right) \lambda .
$$

Hence, the stated conditions on the family of cubes hold. For each selected cube $Q_{k}$, we define $b_{k}=f-m\left(Q_{k}\right)^{-1} \int_{Q_{k}} f(x) d x$ on $Q_{k}$ and zero elsewhere. We set $b=\sum_{k} b_{k}$ and then $g=f-b$. It is clear that $\int b_{k}=0$. By the triangle inequality,

$$
\int\left|b_{k}(x)\right| d x \leq 2 \int_{Q_{k}}|f(x)| d x .
$$

It is clear that $\|g\|_{1} \leq\|f\|_{1}$. We verify that $|g(x)| \leq 2^{n} \lambda$ a.e. On each cube $Q_{k}$, this follows from the upper bound for the average of $|f|$ on $Q_{k}$.
For each $x$ in the complement of $\cup_{k} Q_{k}$, there is a sequence of cubes in $\mathcal{D}$, with arbitrarily small sidelength and which contain $x$, where the inequality (6.15) fails. Thus, the Lebesgue differentiation theorem implies that $|g(x)| \leq \lambda$ a.e.

Our next step in the proof is the following Lemma regarding the kernel.

Lemma 6.17 If $K$ is a Calderón-Zygmund kernel and $x, y$ are in $\mathbf{R}^{n}$ with $|x-y| \leq d$, then

$$
\int_{\mathbf{R}^{n} \backslash B_{2 d}(x)}|K(z, x)-K(z, y)| d z \leq C .
$$

The constant depends only on the dimension and the constant appearing in the definition of the Calderón-Zygmund kernel.

Proof. We apply the mean-value theorem of calculus to conclude that if $y \in \bar{B}_{d}(x)$ and $z \in \mathbf{R}^{n} \backslash B_{2 d}(x)$, then by the kernel estimate (6.3),

$$
|K(z, x)-K(z, y)| \leq|x-y| \sup _{y \in B_{d}(x)}\left|\nabla_{y} K(z, y)\right| \leq 2^{n+1} C_{K}|x-y||x-z|^{-n-1} .
$$

The second inequality uses the triangle inequality $|y-z| \geq|x-z|-|y-x|$ and then that $|x-z|-|y-x| \geq|x-z| / 2$ if $|x-y| \leq d$ and $|x-z| \geq 2 d$. Finally, if we integrate the inequality (6.18) in polar coordinates, we find that

$$
\int_{\mathbf{R}^{n} \backslash B_{2 d}(x)}|K(z, x)-K(z, y)| d z \leq d C_{K} 2^{n+1} \omega_{n-1} \int_{2 d}^{\infty} r^{-n-1} r^{n-1} d r=C_{K} 2^{n} \omega_{n-1} .
$$

This is the desired conclusion.

Now, we give the proof of the weak-type 1,1 estimate in Theorem 6.13.

Proof of Theorem 6.13. We may assume that $f$ is in $L^{1} \cap L^{2}$. We let $\lambda>0$. We apply the Calderón-Zygmund decomposition, Lemma 6.14, at $\lambda$ to write $f=g+b$. We have

$$
\{x:|T f(x)|>\lambda\} \subset\{x:|T g(x)|>\lambda / 2\} \cup\{x:|T b(x)|>\lambda / 2\} .
$$

Using Chebyshev's inequality and that $T$ is $L^{2}$-bounded, and then that $|g(x)| \leq C \lambda$, we obtain

$$
m(\{x:|T g(x)|>\lambda / 2\}) \leq \frac{C}{\lambda^{2}} \int_{\mathbf{R}^{n}}|g(x)|^{2} d x \leq \frac{C}{\lambda} \int_{\mathbf{R}^{n}}|g(x)| d x .
$$

Finally, since $\|g\|_{1} \leq\|f\|_{1}$, we have

$$
m(\{x:|T g(x)|>\lambda / 2\}) \leq \frac{C}{\lambda}\|f\|_{1} .
$$

Now, we turn to the estimate of $T b$. We let $O_{\lambda}=\cup B_{k}$ where each ball $B_{k}$ is chosen to have center $x_{k}$, the center of the cube $Q_{k}$, and radius equal to $\sqrt{n}$ multiplied by the sidelength of $Q_{k}$. Thus, if $y \in Q_{k}$, then the distance $\left|x_{k}-y\right|$ is at most half the radius of $B_{k}$. This will be needed to apply Lemma 6.17. We estimate the measure of $O_{\lambda}$ using that

$$
m\left(O_{\lambda}\right) \leq C \sum_{k} m\left(Q_{k}\right) \leq \frac{C}{\lambda} \sum_{k} \int_{Q_{k}}|f| d x \leq \frac{C}{\lambda}\|f\|_{1} .
$$

Next, we obtain an $L^{1}$ estimate for $T b_{k}$. If $x$ is in the complement of $Q_{k}$, we know that

$$
T b_{k}(x)=\int K(x, y) b_{k}(y) d y=\int\left(K(x, y)-K\left(x, x_{k}\right)\right) b_{k}(y) d y
$$

where the second equality uses that $b_{k}$ has mean value zero.
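The splitting $f=g+b$ of Lemma 6.14, which drives the estimates above, can be sketched concretely in one dimension on dyadic subintervals of $[0,1)$. The data and the level $\lambda$ below are illustrative, and we assume the average over the whole interval is at most $\lambda$, mirroring the fact that cubes of large sidelength are never selected when $f \in L^{1}$.

```python
# A one-dimensional sketch of the Calderón-Zygmund decomposition
# (Lemma 6.14): f is given by its values on 2**J equal cells of [0, 1),
# and we select the maximal dyadic intervals whose average exceeds lam.

def cz_decompose(f, lam):
    n = len(f)                              # assumed a power of two
    selected = []                           # (start, length) of chosen intervals

    def visit(start, length):
        if sum(f[start:start + length]) / length > lam:
            selected.append((start, length))   # maximal: parent average <= lam
        elif length > 1:
            half = length // 2
            visit(start, half)
            visit(start + half, half)

    visit(0, n)
    g, b = list(f), [0.0] * n
    for start, length in selected:
        avg = sum(f[start:start + length]) / length
        for i in range(start, start + length):
            g[i], b[i] = avg, f[i] - avg    # g = average on the interval,
    return g, b, selected                   # b has mean zero there

f = [0.0] * 8 + [8.0, 8.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
g, b, selected = cz_decompose(f, 2.0)
# selected == [(8, 4)]; on it the average is 4 = 2**1 * lam, and b has mean 0
```

Outside the selected intervals $g=f$ is at most $\lambda$, on them $g$ is the average (bounded by $2^{n} \lambda$ with $n=1$ here), and each piece of $b$ is supported on one interval with mean value zero, exactly as in the lemma.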
Now, applying Fubini's theorem and Lemma 6.17, we can estimate

$$
\begin{aligned}
\int_{\mathbf{R}^{n} \backslash B_{k}}\left|T b_{k}(x)\right| d x & \leq \int_{Q_{k}}\left|b_{k}(y)\right| \int_{\mathbf{R}^{n} \backslash B_{k}}\left|K(x, y)-K\left(x, x_{k}\right)\right| d x d y \\
& \leq C \int_{Q_{k}}\left|b_{k}(y)\right| d y \leq C \int_{Q_{k}}|f(y)| d y .
\end{aligned}
$$

Thus, if we add on $k$, we obtain

$$
\int_{\mathbf{R}^{n} \backslash O_{\lambda}}|T b(y)| d y \leq \sum_{k} \int_{\mathbf{R}^{n} \backslash B_{k}}\left|T b_{k}(y)\right| d y \leq C\|f\|_{1} .
$$

Finally, we estimate

$$
\begin{aligned}
m(\{x:|T b(x)|>\lambda / 2\}) & \leq m\left(O_{\lambda}\right)+m\left(\left\{x \in \mathbf{R}^{n} \backslash O_{\lambda}:|T b(x)|>\lambda / 2\right\}\right) \\
& \leq m\left(O_{\lambda}\right)+\frac{C}{\lambda}\|f\|_{1},
\end{aligned}
$$

where the last inequality uses Chebyshev's inequality and our estimate (6.19) for the $L^{1}$-norm of $T b$ in the complement of $O_{\lambda}$.

Exercise 6.20 Let $Q$ be a cube in $\mathbf{R}^{n}$ of sidelength $h>0, Q=\left\{x: 0 \leq x_{i} \leq h\right\}$. Compute the diameter of $Q$. Hint: The answer is probably $h \sqrt{n}$.

Proof of Theorem 6.9. Since we assume that $T$ is $L^{2}$-bounded, the result for $1<p<2$ follows immediately from Theorem 6.13 and the Marcinkiewicz interpolation theorem, Theorem 4.19. The result for $2<p<\infty$ follows by observing that if $T$ is a Calderón-Zygmund operator, then the adjoint $T^{*}$ is also a Calderón-Zygmund operator and hence $T^{*}$ is $L^{p}$-bounded, $1<p<2$. Then it follows that $T$ is $L^{p}$-bounded for $2<p<\infty$.

The alert reader might observe that Theorem 4.19 appears to give a bound for the operator norm which grows like $|p-2|^{-1}$ near $p=2$. This growth is a defect of the proof and is not really there.
To see this, one can pick one's favorite pair of exponents, say $4 / 3$ and 4, and interpolate (by either Riesz-Thorin or Marcinkiewicz) between them to see that the norm is bounded for $p$ near 2.

### Some multiplier operators

In this section, we study multiplier operators where the symbol $m$ is smooth in the complement of the origin. For each $k \in \mathbf{R}$, we define a class of multipliers which we call symbols of order $k$. We say $m$ is a symbol of order $k$ if for each multi-index $\alpha$, there exists a constant $C_{\alpha}$ so that

$$
\left|\frac{\partial^{\alpha} m}{\partial \xi^{\alpha}}(\xi)\right| \leq C_{\alpha}|\xi|^{-|\alpha|+k} .
$$

The operator given by a symbol of order $k$ corresponds to a generalization of a differential operator of order $k$. Strictly speaking, these operators are not pseudo-differential operators because we allow symbols which are singular near the origin. The symbols we study transform nicely under dilations. This makes some of the arguments below more elegant; however, the inhomogeneous theory is probably more useful.

Exercise 6.22 a) If $P(\xi)$ is a homogeneous polynomial of degree $k$, then $P$ is a symbol of order $k$. b) The multiplier for the Bessel potential operator $\left(1+|\xi|^{2}\right)^{-s / 2}$ is a symbol of order $-s$ for $s \geq 0$. What if $s<0$?

We begin with a lemma to state some of the basic properties of these symbols.

Lemma 6.23 a) If $m_{j}$ is a symbol of order $k_{j}$ for $j=1,2$, then $m_{1} m_{2}$ is a symbol of order $k_{1}+k_{2}$ and each constant for $m_{1} m_{2}$ depends on finitely many of the constants for $m_{1}$ and $m_{2}$. b) If $\eta \in \mathcal{S}\left(\mathbf{R}^{n}\right)$, then $\eta$ is a symbol of order $k$ for any $k \leq 0$. c) If $m$ is a symbol of order $k$, then for all $\epsilon>0$, $\epsilon^{-k} m(\epsilon \xi)$ is a symbol of order $k$ and the constants are independent of $\epsilon$. d) If $m_{j}, j=1,2$ are symbols of order $k$, then $m_{1}+m_{2}$ is a symbol of order $k$.
Proof. A determined reader armed with the Leibniz rule will find that these results are either easy or false.

Exercise 6.24 a) Use Lemma 6.23 to show that if $m$ is a symbol of order 0 and $\eta \in \mathcal{S}\left(\mathbf{R}^{n}\right)$ with $\eta=1$ near the origin, then $m_{\epsilon}(\xi)=\eta(\epsilon \xi)(1-\eta(\xi / \epsilon)) m(\xi)$ is a symbol of order 0. b) Show that if $\eta(0)=1$, then for each $f \in L^{2}\left(\mathbf{R}^{n}\right)$ the multiplier operators given by $m$ and $m_{\epsilon}$ satisfy

$$
\lim _{\epsilon \rightarrow 0^{+}}\left\|T_{m} f-T_{m_{\epsilon}} f\right\|_{2}=0 .
$$

c) Do we have $\lim _{\epsilon \rightarrow 0^{+}}\left\|T_{m}-T_{m_{\epsilon}}\right\|=0$? Here, $\|T\|$ denotes the operator norm of $T$ as an operator on $L^{2}$.

Exercise 6.25 Show that if $m$ is a symbol of order 0 and there is a $\delta>0$ so that $|m(\xi)| \geq \delta$ for all $\xi \neq 0$, then $m^{-1}$ is a symbol of order 0.

Lemma 6.26 If $m$ is in the Schwartz class and $m$ is a symbol of order $k>-n$, then there is a constant $C$ depending only on finitely many of the constants in (6.21) so that

$$
|\check{m}(x)| \leq C|x|^{-n-k} .
$$

Proof. To see this, introduce a cutoff function $\eta_{0} \in \mathcal{D}\left(\mathbf{R}^{n}\right)$ so that $\eta_{0}(\xi)=1$ if $|\xi|<1$ and $\eta_{0}=0$ if $|\xi|>2$, set $\eta_{\infty}=1-\eta_{0}$, and fix $x \neq 0$. We write

$$
K_{j}(x)=(2 \pi)^{-n} \int e^{i x \cdot \xi} \eta_{j}(\xi|x|) m(\xi) d \xi, \quad j=0, \infty .
$$

For $j=0$, the estimate is quite simple since $\eta_{0}(\xi|x|)=0$ if $|\xi|>2 /|x|$. Thus,

$$
\left|K_{0}(x)\right| \leq C \int_{|\xi|<2 /|x|}|\xi|^{k} d \xi=C|x|^{-k-n} .
$$

For the part near $\infty$, we need to take advantage of the cancellation that results from integrating the oscillatory exponential against the smooth kernel $m$.
Thus, we write $(i x)^{\alpha} e^{i x \cdot \xi}=\frac{\partial^{\alpha}}{\partial \xi^{\alpha}} e^{i x \cdot \xi}$ and then integrate by parts to obtain

$$
(i x)^{\alpha} K_{\infty}(x)=\int\left(\frac{\partial^{\alpha}}{\partial \xi^{\alpha}} e^{i x \cdot \xi}\right) \eta_{\infty}(\xi|x|) m(\xi) d \xi=(-1)^{|\alpha|} \int e^{i x \cdot \xi} \frac{\partial^{\alpha}}{\partial \xi^{\alpha}}\left(\eta_{\infty}(\xi|x|) m(\xi)\right) d \xi .
$$

The boundary terms vanish since the integrand is in the Schwartz class. Using the symbol estimates (6.21) and that $\eta_{\infty}$ is zero for $|\xi|$ near 0, we have, for $k-|\alpha|<-n$, that

$$
\left|(i x)^{\alpha} K_{\infty}(x)\right| \leq C \int_{|\xi|>1 /|x|}|\xi|^{k-|\alpha|} d \xi=C|x|^{-n-k+|\alpha|} .
$$

This implies the desired estimate that $\left|K_{\infty}(x)\right| \leq C|x|^{-n-k}$.

We are now ready to show that the symbols of order 0 give Calderón-Zygmund operators.

Theorem 6.27 If $m$ is a symbol of order 0, then $T_{m}$ is a Calderón-Zygmund operator.

Proof. The $L^{2}$-boundedness of $T_{m}$ is clear since $m$ is bounded, see Theorem 3.7. We will show that the kernel of $T_{m}$ is of the form $K(x-y)$ and that for all multi-indices $\alpha$ there is a constant $C_{\alpha}$ so that $K$ satisfies

$$
\left|\frac{\partial^{\alpha}}{\partial x^{\alpha}} K(x)\right| \leq C_{\alpha}|x|^{-n-|\alpha|} .
$$

The inverse Fourier transform of $m$, $\check{m}$, is not, in general, a function. Thus, it is convenient to approximate $m$ by nice symbols. To do this, we let $\eta \in \mathcal{D}\left(\mathbf{R}^{n}\right)$ satisfy $\eta(x)=1$ if $|x|<1$ and $\eta(x)=0$ if $|x|>2$. We define $m_{\epsilon}(\xi)=\eta(\epsilon \xi)(1-\eta(\xi / \epsilon)) m(\xi)$. By Lemma 6.23, we see that $m_{\epsilon}$ is a symbol with constants independent of $\epsilon$.
Since $m_{\epsilon} \in \mathcal{S}\left(\mathbf{R}^{n}\right)$, by Lemma 6.26 we have that $K_{\epsilon}=\check{m}_{\epsilon}$ satisfies, for each multi-index $\alpha$,

$$
\left|\frac{\partial^{\alpha}}{\partial x^{\alpha}} K_{\epsilon}(x)\right| \leq C_{\alpha}|x|^{-n-|\alpha|} .
$$

This is because the derivative of order $\alpha$ of $K_{\epsilon}$ by Proposition 1.19 is the inverse Fourier transform of $(-i \xi)^{\alpha} m_{\epsilon}(\xi)$, a symbol of order $|\alpha|$. Since the constants in the estimates are uniform in $\epsilon$, we can apply the Arzelà-Ascoli theorem to prove that there is a sequence $\left\{\epsilon_{j}\right\}$ with $\lim _{j \rightarrow \infty} \epsilon_{j}=0$ so that $K_{\epsilon_{j}}$ and all of its derivatives converge uniformly on compact subsets of $\mathbf{R}^{n} \backslash\{0\}$, and of course the limit, which we call $K$, satisfies the estimates (6.28).

It remains to show that $K(x-y)$ is a kernel for the operator $T_{m}$. Let $f$ be in $\mathcal{S}\left(\mathbf{R}^{n}\right)$. By the dominated convergence theorem and the Plancherel theorem, $T_{m_{\epsilon}} f \rightarrow T_{m} f$ in $L^{2}$ as $\epsilon \rightarrow 0^{+}$. By Proposition 1.24, $T_{m_{\epsilon}} f=K_{\epsilon} * f$. Finally, if $f$ and $g$ have disjoint support, then

$$
\begin{aligned}
\int T_{m} f(x) g(x) d x & =\lim _{j \rightarrow \infty} \int T_{m_{\epsilon_{j}}} f(x) g(x) d x \\
& =\lim _{j \rightarrow \infty} \int K_{\epsilon_{j}}(x-y) f(y) g(x) d x d y \\
& =\int K(x-y) f(y) g(x) d x d y .
\end{aligned}
$$

The first equality above holds because $T_{m_{\epsilon_{j}}} f$ converges in $L^{2}$, the second follows from Proposition 1.24 and the third equality holds because of the locally uniform convergence of $K_{\epsilon_{j}}$ in the complement of the origin. This completes the proof that $K(x-y)$ is a kernel for $T_{m}$.

We can now state a corollary which is usually known as the Mikhlin multiplier theorem.
Corollary 6.29 If $m$ is a symbol of order 0, then the multiplier operator $T_{m}$ is bounded on $L^{p}$ for $1<p<\infty$.

We conclude with a few exercises.

Exercise 6.30 If $m$ is infinitely differentiable in $\mathbf{R}^{n} \backslash\{0\}$ and is homogeneous of degree 0, then $m$ is a symbol of order zero.

In the next exercise, we introduce the Laplacian $\Delta=\sum_{j=1}^{n} \frac{\partial^{2}}{\partial x_{j}^{2}}$.

Exercise 6.31 Let $1<p<\infty, n \geq 3$. If $f \in \mathcal{S}\left(\mathbf{R}^{n}\right)$, then we can find a tempered distribution $u$ so that $\Delta u=f$ and we have the estimate

$$
\left\|\frac{\partial^{2} u}{\partial x_{j} \partial x_{k}}\right\|_{p} \leq C\|f\|_{p}
$$

where the constant in the estimate $C$ depends only on $p$ and $n$. Why is $n=2$ different? In two dimensions, show that we can construct $u$ if $\hat{f}(0)=0$. (This construction can be extended to all of the Schwartz class, but it is more delicate when $\hat{f}(0) \neq 0$.)

This exercise gives an estimate for the solution of $\Delta u=f$. This estimate follows immediately from our work so far. We should also prove uniqueness: If $u$ is a solution of $\Delta u=0$ and $u$ has certain growth properties, then $u=0$. This is a version of the Liouville theorem. The above inequality is not true for every solution of $\Delta u=f$. For example, on $\mathbf{R}^{2}$, if $u(x)=e^{x_{1}+i x_{2}}$, then we have $\Delta u=0$, but the second derivatives are not in any $L^{p}\left(\mathbf{R}^{2}\right)$.

Exercise 6.32 Let $\square=\frac{\partial^{2}}{\partial t^{2}}-\Delta$ be the wave operator which acts on functions of $n+1$ variables, $(x, t) \in \mathbf{R}^{n} \times \mathbf{R}$. Can we find a solution of $\square u=f$ and prove estimates like those in Exercise 6.31? Why or why not?
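Exercise 6.30 can be tested numerically for a specific degree-0 homogeneous function. Below, $m(\xi)=\xi_{1}^{2} /|\xi|^{2}$ is a choice; its gradient is homogeneous of degree $-1$, so $|\xi|\,|\nabla m(\xi)|$ is constant along rays, which is exactly the $|\alpha|=1$ case of the symbol estimate (6.21).

```python
import math

# Check the first-derivative symbol estimate |grad m(xi)| <= C/|xi| for
# the degree-0 homogeneous function m(xi) = xi_1**2 / |xi|**2 by central
# differences: |xi| * |grad m(xi)| should be the same at every scale.

def m(x, y):
    return x * x / (x * x + y * y)

def grad_norm(x, y, h=1e-6):
    dx = (m(x + h, y) - m(x - h, y)) / (2.0 * h)
    dy = (m(x, y + h) - m(x, y - h)) / (2.0 * h)
    return math.hypot(dx, dy)

# sample along the ray through (1, 2) at three very different scales
ratios = [math.hypot(x, y) * grad_norm(x, y)
          for (x, y) in [(0.01, 0.02), (1.0, 2.0), (100.0, 200.0)]]
# all three ratios agree (they equal 4/5 for this direction)
```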
Exercise 6.33 Show that if $\lambda \in \mathbf{C}$ is not a negative real number, the operator given by $m(\xi)=\left(\lambda+|\xi|^{2}\right)^{-1}$ is bounded on $L^{p}$ for $1<p<\infty$ and that we have the estimate

$$
\left\|T_{m} f\right\|_{p} \leq C\|f\|_{p} .
$$

Find the dependence of the constant on $\lambda$.

## Chapter 7

## Littlewood-Paley theory

In this chapter, we look at a particular singular integral and see how this can be used to characterize the $L^{p}$ norm of a function in terms of its Fourier transform. The theory discussed here has its roots in the theory of harmonic functions in the disc or the upper half-plane. The expressions $Q_{k} f$ considered below share many properties with $2^{-k} \nabla u\left(x^{\prime}, 2^{-k}\right)$, where $u$ is the harmonic function in the upper half-plane $x_{n}>0$ whose boundary values are $f$. Recently, many of these ideas have become part of the theory of wavelets. The operators $Q_{k} f$ decompose $f$ into pieces which are of frequency approximately $2^{k}$. A wavelet decomposition combines this decomposition in frequency with a spatial decomposition, insofar as this is possible.

### A square function that characterizes $L^{p}$

We let $\psi$ be a real-valued function in $\mathcal{D}\left(\mathbf{R}^{n}\right)$ which is supported in $\{\xi: 1 / 2<|\xi|<4\}$ and which satisfies $\sum_{k=-\infty}^{\infty} \psi_{k}(\xi)^{2}=1$ in $\mathbf{R}^{n} \backslash\{0\}$, where $\psi_{k}(\xi)=\psi\left(\xi / 2^{k}\right)$; we will call such a $\psi$ a Littlewood-Paley function. It is not completely obvious that such a function exists.

Lemma 7.1 A Littlewood-Paley function exists.

Proof. We take a function $\tilde{\psi} \in \mathcal{D}\left(\mathbf{R}^{n}\right)$ which is non-negative, supported in $\{\xi: 1 / 2<|\xi|<4\}$ and which is strictly positive on $\{\xi: 1<|\xi|<2\}$.
We set $$ \psi(\xi)=\tilde{\psi}(\xi) /\left(\sum_{k=-\infty}^{\infty} \tilde{\psi}^{2}\left(\xi / 2^{k}\right)\right)^{1 / 2} . $$ For $f$ in $L^{p}$, say, we can define $Q_{k} f=\check{\psi}_{k} * f=\left(\psi_{k} \hat{f}\right)^{\vee}$. We define the square function $S(f)$ by $$ S(f)(x)=\left(\sum_{k=-\infty}^{\infty}\left|Q_{k}(f)(x)\right|^{2}\right)^{1 / 2} . $$ From the Plancherel theorem, Theorem 3.2, it is easy to see that $$ \|f\|_{2}=\|S(f)\|_{2} $$ and of course this depends on the identity $\sum_{k} \psi_{k}^{2}=1$. We are interested in this operator because we can characterize the $L^{p}$ spaces in a similar way. Theorem 7.3 Let $1<p<\infty$. There is a finite nonzero constant $C=C(p, n, \psi)$ so that if $f$ is in $L^{p}$, then $$ C^{-1}\|f\|_{p} \leq\|S(f)\|_{p} \leq C\|f\|_{p} . $$ This theorem will be proven by considering a vector-valued singular integral. The kernel we consider will be $$ K(x, y)=\left(\ldots, 2^{n k} \check{\psi}\left(2^{k}(x-y)\right), \ldots\right) . $$ Lemma 7.4 If $\psi$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$, then the kernel $K$ defined above is a Calderón-Zygmund kernel. Proof. We write out the norm of $K$ : $$ |K(x, y)|^{2}=\sum_{k=-\infty}^{\infty} 2^{2 n k}\left|\check{\psi}\left(2^{k}(x-y)\right)\right|^{2} . $$ We choose $N$ so that $2^{N} \leq|x-y|<2^{N+1}$ and split the sum above at $-N$. Recall that $\check{\psi}$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and decays faster than any polynomial. Near 0 , that is, for $k \leq-N$, we use that $|\check{\psi}(x)| \leq C$. For $k>-N$, we use that $|\check{\psi}(x)| \leq C|x|^{-n-1}$. Thus, we have $$ |K(x, y)|^{2} \leq C\left(\sum_{k=-\infty}^{-N} 2^{2 n k}+\sum_{k=-N+1}^{\infty} 2^{2 n k}\left(2^{k+N}\right)^{-2(n+1)}\right)=C 2^{-2 n N} . $$ Recalling that $2^{N}$ is approximately $|x-y|$, we obtain the desired upper-bound for $K(x, y)$.
To estimate the gradient, we observe that $\nabla_{x} K(x, y)=\left(\ldots, 2^{(n+1) k}(\nabla \check{\psi})\left(2^{k}(x-y)\right), \ldots\right)$. This time, we will need a higher power of $|x|$ in the decay estimate to make the sum converge. Thus, we use that $|\nabla \check{\psi}(x)| \leq C$ near the origin and $|\nabla \check{\psi}(x)| \leq C|x|^{-n-2}$ away from the origin. This gives that $$ \left|\nabla_{x} K(x, y)\right|^{2} \leq C\left(\sum_{k=-\infty}^{-N} 2^{2 k(n+1)}+\sum_{k=-N+1}^{\infty} 2^{2 k(n+1)}\left(2^{k+N}\right)^{-2(n+2)}\right)=C 2^{-2 N(n+1)} . $$ Recalling that $2^{N}$ is approximately $|x-y|$ finishes the proof. Proof of Theorem 7.3. To establish the right-hand inequality, we fix $N$ and consider the map $f \rightarrow\left(\psi_{-N} \hat{f}, \ldots, \psi_{N} \hat{f}\right)^{\vee}=K_{N} * f$. The kernel $K_{N}$ is a vector-valued function taking values in the vector space $\mathbf{C}^{2 N+1}$. We observe that the conclusion of Lemma 6.17 continues to hold, if we interpret the absolute values as the norm in the Hilbert space $\mathbf{C}^{2 N+1}$, with the standard norm, $\left|\left(z_{-N}, \ldots, z_{N}\right)\right|=\left(\sum_{k=-N}^{N}\left|z_{k}\right|^{2}\right)^{1 / 2}$. As a consequence, we conclude that $K_{N} * f$ satisfies the $L^{p}$ estimate of Theorem 6.9 and we have the inequality $$ \left\|\left(\sum_{k=-N}^{N}\left|Q_{k} f\right|^{2}\right)^{1 / 2}\right\|_{p} \leq C\|f\|_{p} . $$ We can use the monotone convergence theorem to let $N \rightarrow \infty$ and obtain the right-hand inequality in the Theorem. To obtain the other inequality, we argue by duality. First, using the polarization identity, we can show that for $f, g$ in $L^{2}$, $$ \int_{\mathbf{R}^{n}} \sum_{k=-\infty}^{\infty} Q_{k}(f)(x) \overline{Q_{k}(g)}(x) d x=\int_{\mathbf{R}^{n}} f(x) \bar{g}(x) d x . 
$$ Next, we suppose that $f$ is in $L^{2} \cap L^{p}$ and use duality to find the $L^{p}$ norm of $f$, the identity (7.6), and then Cauchy-Schwarz and Hölder to obtain $$ \|f\|_{p}=\sup _{\|g\|_{p^{\prime}}=1} \int_{\mathbf{R}^{n}} f(x) \bar{g}(x) d x=\sup _{\|g\|_{p^{\prime}}=1} \int_{\mathbf{R}^{n}} \sum Q_{k}(f)(x) \overline{Q_{k}(g)}(x) d x \leq\|S(f)\|_{p}\|S(g)\|_{p^{\prime}} . $$ Now, if we use the right-hand inequality, (7.5), which we have already proven, we obtain the desired conclusion. Note that we should assume $g$ is in $L^{2}\left(\mathbf{R}^{n}\right) \cap L^{p^{\prime}}\left(\mathbf{R}^{n}\right)$ to make use of the identity (7.6). A straightforward limiting argument helps to remove the restriction that $f$ is in $L^{2}$ and obtain the inequality for all $f$ in $L^{p}$. ## $7.2 \quad$ Variations In this section, we observe two simple extensions of the result above. These modifications will be needed in a later chapter. For our next proposition, we consider operators $Q_{k}$ which are defined as above, except that we work only in one variable. Thus, we have a function $\psi \in \mathcal{D}(\mathbf{R})$ and suppose that $$ \sum_{k=-\infty}^{\infty}\left|\psi\left(\xi_{n} / 2^{k}\right)\right|^{2}=1 . $$ We define the operator $f \rightarrow Q_{k} f=\left(\psi\left(\xi_{n} / 2^{k}\right) \hat{f}(\xi)\right)^{\vee}$. Proposition 7.7 If $f \in L^{p}\left(\mathbf{R}^{n}\right)$, then for $1<p<\infty$, we have $$ C_{p}^{-1}\|f\|_{p}^{p} \leq\left\|\left(\sum_{k}\left|Q_{k} f\right|^{2}\right)^{1 / 2}\right\|_{p}^{p} \leq C_{p}\|f\|_{p}^{p} . $$ Proof. If we fix $x^{\prime}=\left(x_{1}, \ldots, x_{n-1}\right)$, then we have that $$ C_{p}^{-1}\left\|f\left(x^{\prime}, \cdot\right)\right\|_{L^{p}(\mathbf{R})}^{p} \leq\left\|\left(\sum_{k}\left|Q_{k} f\left(x^{\prime}, \cdot\right)\right|^{2}\right)^{1 / 2}\right\|_{L^{p}(\mathbf{R})}^{p} \leq C_{p}\left\|f\left(x^{\prime}, \cdot\right)\right\|_{L^{p}(\mathbf{R})}^{p} . $$ This is the one-dimensional version of Theorem 7.3. 
If we integrate in the remaining variables, then we obtain the Proposition. We will need the following Corollary for the one-dimensional operators. Of course the same result holds, with the same proof, for the $n$-dimensional operator. Corollary 7.8 If $2 \leq p<\infty$, then we have $$ \|f\|_{p} \leq C\left(\sum_{k=-\infty}^{\infty}\left\|Q_{k} f\right\|_{p}^{2}\right)^{1 / 2} . $$ If $1<p \leq 2$, then we have $$ \left(\sum_{k=-\infty}^{\infty}\left\|Q_{k} f\right\|_{p}^{2}\right)^{1 / 2} \leq C\|f\|_{p} . $$ Proof. To prove the first statement, we apply Minkowski's inequality to bring the sum out through an $L^{p / 2}$ norm and obtain $$ \left(\int_{\mathbf{R}^{n}}\left(\sum_{k=-\infty}^{\infty}\left|Q_{k} f(x)\right|^{2}\right)^{p / 2} d x\right)^{2 / p} \leq \sum_{k=-\infty}^{\infty}\left\|Q_{k} f\right\|_{p}^{2} . $$ The application of Minkowski's inequality requires that $p / 2 \geq 1$. If we take the square root of this inequality and use Proposition 7.7, we obtain the first result of the Corollary. The second result has a similar proof. To see this, we use Minkowski's integral inequality to bring the integral out through the $\ell^{2 / p}$ norm to obtain $$ \left(\sum_{k=-\infty}^{\infty}\left(\int_{\mathbf{R}^{n}}\left|Q_{k} f(x)\right|^{p} d x\right)^{2 / p}\right)^{p / 2} \leq \int_{\mathbf{R}^{n}}\left(\sum_{k=-\infty}^{\infty}\left|Q_{k} f(x)\right|^{2}\right)^{p / 2} d x . $$ Now, we may take the $p$ th root and apply Proposition 7.7 to obtain the second part of our Corollary. ## Chapter 8 ## Fractional integration In this chapter, we study the fractional integration operator or Riesz potentials. To motivate these operators, we consider the following peculiar formulation of the fundamental theorem of calculus: If $f$ is a nice function, then $$ f(x)=\int_{-\infty}^{x} f^{\prime}(t)(x-t)^{1-1} d t . $$ Thus the map $g \rightarrow \int_{-\infty}^{x} g(t) d t$ is a left-inverse to differentiation. 
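The peculiar formulation above is easy to test numerically; a minimal sketch, in which the test function, grid, and tolerance are arbitrary choices:

```python
import numpy as np

# Check numerically that g -> int_{-infty}^x g(t) dt is a left-inverse to
# differentiation: integrating f' up to x recovers f(x).
x = np.linspace(-20.0, 20.0, 200001)
dx = x[1] - x[0]
f = np.exp(-(x**2)) * np.sin(x)        # a Schwartz-class test function
fprime = np.gradient(f, dx)            # centered differences
recovered = np.cumsum(fprime) * dx     # Riemann sums for int_{-20}^{x} f'(t) dt

err = np.max(np.abs(recovered - f))
```

The error is dominated by the Riemann-sum discretization, so it is small but not machine precision.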
A family of fractional integrals in one dimension is given by the operators, defined for $\alpha>0$ by $$ I_{\alpha}^{+} f(x)=\frac{1}{\Gamma(\alpha)} \int_{-\infty}^{x} f(t)(x-t)^{\alpha-1} d t . $$ Exercise 8.1 Show that if $\alpha>0$ and $\beta>0$, then $$ I_{\alpha}^{+}\left(I_{\beta}^{+}(f)\right)=I_{\alpha+\beta}^{+}(f) . $$ In this chapter, we consider a family of similar operators in all dimensions. We will establish the $L^{p}$ mapping properties of these operators. We will also consider the Fourier transform of the distribution given by the function $|x|^{\alpha-n}$. Using these results, we will obtain the Sobolev inequalities. We begin by giving an example where these operators arise in nature. This exercise will be much easier to solve if we use the results proved below. Exercise 8.2 If $f$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and $n \geq 3$, then $$ f(x)=\frac{1}{(2-n) \omega_{n-1}} \int_{\mathbf{R}^{n}} \Delta f(y)|x-y|^{2-n} d y . $$ ### The Hardy-Littlewood-Sobolev theorem The operators we consider in $\mathbf{R}^{n}$ are the family of Riesz potentials $$ I_{\alpha}(f)(x)=\gamma(\alpha, n) \int_{\mathbf{R}^{n}} f(y)|x-y|^{\alpha-n} d y $$ for $\alpha$ satisfying $0<\alpha<n$. The constant $\gamma(\alpha, n)$ is given by $$ \gamma(\alpha, n)=\frac{2^{n-\alpha} \Gamma((n-\alpha) / 2)}{(4 \pi)^{n / 2} \Gamma(\alpha / 2)} . $$ Note the condition $\alpha>0$ is needed in order to guarantee that $|x|^{\alpha-n}$ is locally integrable. Our main goal is to prove the $L^{p}$ mapping properties of the operator $I_{\alpha}$. We first observe that the homogeneity properties of this operator imply that the operator can map $L^{p}$ to $L^{q}$ only if $1 / p-1 / q=\alpha / n$. By homogeneity properties, we mean: If $r>0$ and we let $\delta_{r} f(x)=f(r x)$ be the action of dilations on functions, then we have $$ I_{\alpha}\left(\delta_{r} f\right)=r^{-\alpha} \delta_{r}\left(I_{\alpha} f\right) . $$ This is easily proven by changing variables. 
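The semigroup law of Exercise 8.1 can be verified exactly on the half-line powers $t_{+}^{\gamma}$, for which the Beta integral gives the closed form $I_{\alpha}^{+}\left(t_{+}^{\gamma}\right)(x)=\frac{\Gamma(\gamma+1)}{\Gamma(\gamma+\alpha+1)} x^{\gamma+\alpha}$ for $x>0$ (a standard identity, stated here without proof). A short script exploiting this closed form:

```python
import math

def I_plus_on_power(coef, gamma, alpha):
    """Apply I_alpha^+ to t -> coef * t_+^gamma (with t_+ = max(t, 0)).

    By the Beta integral,
      I_alpha^+(t_+^gamma)(x) = Gamma(gamma+1)/Gamma(gamma+alpha+1) * x^(gamma+alpha)
    for x > 0, so the action on this family is pure bookkeeping.
    Returns (new_coef, new_gamma)."""
    new_coef = coef * math.gamma(gamma + 1.0) / math.gamma(gamma + alpha + 1.0)
    return new_coef, gamma + alpha

# Check I_alpha^+ (I_beta^+ f) = I_(alpha+beta)^+ f on f = t_+^(1/2):
alpha, beta = 0.3, 0.9
c1, g1 = I_plus_on_power(*I_plus_on_power(1.0, 0.5, beta), alpha)
c2, g2 = I_plus_on_power(1.0, 0.5, alpha + beta)
```

Both routes give the coefficient $\Gamma(3/2)/\Gamma(27/10)$ and the exponent $17/10$, up to floating-point roundoff.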
This observation is essential in the proof of the following Proposition. Proposition 8.4 If the inequality $$ \left\|I_{\alpha} f\right\|_{q} \leq C\|f\|_{p} $$ holds for all $f$ in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and a finite constant $C$, then $$ \frac{1}{p}-\frac{1}{q}=\frac{\alpha}{n} . $$ Proof. Observe that we have $\left\|\delta_{r} f\right\|_{p}=r^{-n / p}\|f\|_{p}$. This is proven by a change of variables if $0<p<\infty$ and is obvious if $p=\infty$. (Though we will never refer to the case $p<1$, there is no reason to restrict ourselves to $p \geq 1$.) Next, if $f$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$, then by (8.3) $$ \left\|I_{\alpha}\left(\delta_{r} f\right)\right\|_{q}=r^{-\alpha}\left\|\delta_{r}\left(I_{\alpha} f\right)\right\|_{q}=r^{-\alpha-n / q}\left\|I_{\alpha} f\right\|_{q} . $$ Thus if the hypothesis of our proposition holds, we have, for all Schwartz functions $f$ and all $r>0$, that $$ r^{-\alpha-n / q}\left\|I_{\alpha} f\right\|_{q} \leq C\|f\|_{p} r^{-n / p} . $$ If $\left\|I_{\alpha} f\right\|_{q} \neq 0$, then the truth of the above inequality for all $r>0$ implies that the exponents on each side of the inequality must be equal. If $f \neq 0$ is non-negative, then $I_{\alpha} f>0$ everywhere and hence $\left\|I_{\alpha} f\right\|_{q}>0$ and we can conclude the desired relation on the exponents. Next, we observe that the inequality must fail at the endpoint $p=1$. This follows by choosing a nice function $\phi$ with $\int \phi=1$. Then with $\phi_{\epsilon}(x)=\epsilon^{-n} \phi(x / \epsilon)$, we have that as $\epsilon \rightarrow 0^{+}$ $$ I_{\alpha}\left(\phi_{\epsilon}\right)(x) \rightarrow \gamma(\alpha, n)|x|^{\alpha-n} . $$ If the inequality $\left\|I_{\alpha} \phi_{\epsilon}\right\|_{n /(n-\alpha)} \leq C\left\|\phi_{\epsilon}\right\|_{1}=C$ holds uniformly in $\epsilon$, then Fatou's Lemma will imply that $|x|^{\alpha-n}$ lies in $L^{n /(n-\alpha)}$, which is false. 
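The scaling identity $\left\|\delta_{r} f\right\|_{p}=r^{-n / p}\|f\|_{p}$ used in the proof can be confirmed numerically in one dimension; the Gaussian, exponent, and dilation below are arbitrary choices:

```python
import numpy as np

# Verify ||delta_r f||_p = r^(-n/p) ||f||_p for n = 1 on a fine grid.
x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]

def lp_norm(vals, p):
    # Riemann-sum approximation to the L^p norm on the grid
    return (np.sum(np.abs(vals) ** p) * dx) ** (1.0 / p)

f = np.exp(-(x**2))
p, r = 3.0, 2.5
lhs = lp_norm(np.exp(-((r * x) ** 2)), p)   # ||delta_r f||_p, delta_r f(x) = f(rx)
rhs = r ** (-1.0 / p) * lp_norm(f, p)
```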
Exercise 8.5 Show that $I_{\alpha}: L^{p} \rightarrow L^{q}$ if and only if $I_{\alpha}: L^{q^{\prime}} \rightarrow L^{p^{\prime}}$. Hence, we can conclude that $I_{\alpha}$ does not map $L^{n / \alpha}$ to $L^{\infty}$. Exercise 8.6 Can you use dilations, $\delta_{r}$, to show that the inequality $$ \|f * g\|_{r} \leq\|f\|_{p}\|g\|_{q} $$ can hold only if $1 / r=1 / p+1 / q-1$ ? Exercise 8.7 Show that the estimate $$ \|\nabla f\|_{p} \leq C\|f\|_{q} $$ cannot hold. That is, if we fix $p$ and $q$, there is no constant $C$ so that the above inequality is true for all $f$ in the Schwartz class. Hint: Let $f(x)=\eta(x) e^{i \lambda x_{1}}$ where $\eta$ is a smooth bump. We now give the positive result. The proof we give is due to Lars Hedberg [4]. The result was first considered in one dimension (on the circle) by Hardy and Littlewood. The $n$-dimensional result was considered by Sobolev. Theorem 8.8 (Hardy-Littlewood-Sobolev) If $1 / p-1 / q=\alpha / n$ and $1<p<n / \alpha$, then there exists a constant $C=C(n, \alpha, p)$ so that $$ \left\|I_{\alpha} f\right\|_{q} \leq C\|f\|_{p} . $$ The constant $C$ satisfies $C \leq C(\alpha, n) \max \left((p-1)^{-\left(1-\frac{\alpha}{n}\right)},\left(\frac{1}{p}-\frac{\alpha}{n}\right)^{-\left(1-\frac{\alpha}{n}\right)}\right)$. Proof of the Hardy-Littlewood-Sobolev inequality. We may assume that the $L^{p}$ norm of $f$ satisfies $\|f\|_{p}=1$. We consider the integral defining $I_{\alpha}$ and break the integral into the sets where $|x-y|<R$ and $|x-y| \geq R$ : $$ I_{\alpha} f(x) \leq \gamma(\alpha, n)\left(\int_{B_{R}(x)} \frac{|f(y)|}{|x-y|^{n-\alpha}} d y+\int_{\mathbf{R}^{n} \backslash B_{R}(x)} \frac{|f(y)|}{|x-y|^{n-\alpha}} d y\right) \equiv \gamma(\alpha, n)(I(x, R)+I I(x, R)) . $$ By Proposition 5.10, we can estimate $$ I(x, R) \leq M f(x) \omega_{n-1} \int_{0}^{R} r^{\alpha-n} r^{n-1} d r=M f(x) \frac{R^{\alpha}}{\alpha} \omega_{n-1} $$ where we need that $\alpha>0$ for the integral to converge. 
To estimate the second integral, $I I(x, R)$, we use Hölder's inequality to obtain $$ \begin{aligned} I I(x, R) \leq\|f\|_{p} \omega_{n-1}^{1 / p^{\prime}}\left(\int_{r>R} r^{(\alpha-n) p^{\prime}+n-1} d r\right)^{1 / p^{\prime}} & =\|f\|_{p} \omega_{n-1}^{1 / p^{\prime}}\left(\frac{R^{(\alpha-n) p^{\prime}+n}}{\left|(\alpha-n) p^{\prime}+n\right|}\right)^{1 / p^{\prime}} \\ & =\|f\|_{p} \omega_{n-1}^{1 / p^{\prime}} \frac{R^{\alpha-\frac{n}{p}}}{\left|(\alpha-n) p^{\prime}+n\right|^{1 / p^{\prime}}} \end{aligned} $$ where we need $\alpha<n / p$ for the integral in $r$ to converge. Using the previous two inequalities and recalling that we have set $\|f\|_{p}=1$, we have constants $C_{1}$ and $C_{2}$ so that $$ \left|I_{\alpha}(f)(x)\right| \leq C_{1} R^{\alpha} M f(x)+C_{2} R^{\alpha-\frac{n}{p}} . $$ If we were dedicated analysts, we could obtain the best possible inequality (that this method will give) by differentiating with respect to $R$ and using one-variable calculus to find the minimum value of the right-hand side of (8.9). However, we can obtain an answer which is good enough by choosing $R=M f(x)^{-p / n}$. We substitute this value of $R$ into (8.9) and obtain $$ \left|I_{\alpha} f(x)\right| \leq\left(C_{1}+C_{2}\right) M f(x)^{1-\alpha p / n} $$ and if we raise this inequality to the power $p n /(n-\alpha p)$ we obtain $$ \left|I_{\alpha} f(x)\right|^{n p /(n-\alpha p)} \leq\left(C_{1}+C_{2}\right)^{n p /(n-\alpha p)} M f(x)^{p} . $$ Now, if we integrate and use the Hardy-Littlewood theorem, Theorem 5.9, we obtain the conclusion of this theorem. Exercise 8.10 The dependence of the constants on $p, \alpha$ and $n$ is probably not clear from the proof above. Convince yourself that the statement of the above theorem is correct. Exercise 8.11 It may seem as if the obsession with the behavior of the constants in the Hardy-Littlewood-Sobolev theorem, Theorem 8.8, is an indication that your instructor does not have enough to do. This is false. 
Here is an example to illustrate how a good understanding of the constants can be used to obtain additional information. Suppose that $f$ is in $L^{n / \alpha}$ and $f=0$ outside $B_{1}(0)$. We know that, in general, $I_{\alpha} f$ is not in $L^{\infty}$. The following is a substitute result. Consider the integral $$ \int_{B_{1}(0)} \exp \left(\left[\epsilon\left|I_{\alpha} f(x)\right|\right]^{n /(n-\alpha)}\right) d x=\sum_{k=0}^{\infty} \frac{\epsilon^{n k /(n-\alpha)}}{k !} \int_{B_{1}(0)}\left|I_{\alpha} f(x)\right|^{\frac{k n}{n-\alpha}} d x . $$ Since $f$ is in $L^{n / \alpha}$ and $f$ is zero outside a ball of radius 1 , we have that $f$ is in $L^{p}\left(\mathbf{R}^{n}\right)$ for all $p<n / \alpha$. Thus, $I_{\alpha} f$ is in every $L^{q}$-space for $\infty>q>n /(n-\alpha)$. Hence, each term on the right-hand side is finite. Show that in fact, we can sum the series for $\epsilon$ small. Exercise 8.12 If $\alpha$ is real and $0<\alpha<n$, show by example that $I_{\alpha}$ does not map $L^{n / \alpha}$ to $L^{\infty}$. Hint: Consider a function $f$ with $f(x)=|x|^{-\alpha}(-\log |x|)^{-1}$ if $|x|<1 / 2$. Next, we compute the Fourier transform of the tempered distribution $\gamma(\alpha, n)|x|^{\alpha-n}$. More precisely, we are considering the Fourier transform of the tempered distribution $$ f \rightarrow \gamma(\alpha, n) \int_{\mathbf{R}^{n}}|x|^{\alpha-n} f(x) d x . $$ Theorem 8.13 If $0<\operatorname{Re} \alpha<n$, then $$ \gamma(\alpha, n)\left(|x|^{\alpha-n}\right)^{\wedge}=|\xi|^{-\alpha} . $$ Proof. We let $\eta(|\xi|)$ be a standard cutoff function which is 1 for $|\xi|<1$ and 0 for $|\xi|>2$. We set $m_{\epsilon}(\xi)=\eta(|\xi| \epsilon)(1-\eta(|\xi| / \epsilon))|\xi|^{-\alpha}$. The multiplier $m_{\epsilon}$ is a symbol of order $-\alpha$ uniformly in $\epsilon$. 
Hence, by Lemma 6.26 of Chapter 6, we have that $K_{\epsilon}=\check{m}_{\epsilon}$ satisfies the estimates $$ \left|\frac{\partial^{\beta}}{\partial x^{\beta}} K_{\epsilon}(x)\right| \leq C(\alpha, \beta)|x|^{\alpha-n-|\beta|} . $$ Hence, applying the Arzelà-Ascoli theorem, we can extract a sequence $\left\{\epsilon_{j}\right\}$ with $\epsilon_{j} \rightarrow 0$ so that $K_{\epsilon_{j}}$ converges uniformly to some function $K$ on each compact subset of $\mathbf{R}^{n} \backslash\{0\}$. We choose $f$ in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and recall the definition of the Fourier transform of a distribution to obtain $$ \begin{aligned} \int_{\mathbf{R}^{n}} K(x) \hat{f}(x) d x & =\lim _{j \rightarrow \infty} \int K_{\epsilon_{j}}(x) \hat{f}(x) d x \\ & =\lim _{j \rightarrow \infty} \int m_{\epsilon_{j}}(\xi) f(\xi) d \xi \\ & =\int|\xi|^{-\alpha} f(\xi) d \xi . \end{aligned} $$ The first equality depends on the uniform estimate for $K_{\epsilon}$ in (8.14) and the locally uniform convergence of the sequence $K_{\epsilon_{j}}$. Thus, we have that $\hat{K}(\xi)=|\xi|^{-\alpha}$ in the sense of distributions. Note that each $m_{\epsilon}$ is radial. Hence, $K_{\epsilon}$ and thus $K$ is radial. See Chapter 1. Our next step is to show that the kernel $K$ is homogeneous: $$ K(R x)=R^{\alpha-n} K(x) . $$ To see this, observe that writing $K=\lim _{j \rightarrow \infty} K_{\epsilon_{j}}$ again gives that $$ \begin{aligned} \int_{\mathbf{R}^{n}} K(R x) \hat{f}(x) d x & =\lim _{j \rightarrow \infty} \int K_{\epsilon_{j}}(R x) \hat{f}(x) d x \\ & =R^{-n} \lim _{j \rightarrow \infty} \int m_{\epsilon_{j}}(\xi / R) f(\xi) d \xi=R^{\alpha-n} \int|\xi|^{-\alpha} f(\xi) d \xi \\ & =R^{\alpha-n} \int K(x) \hat{f}(x) d x . \end{aligned} $$ This equality for all $f$ in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ implies that (8.15) holds. If we combine the homogeneity with the rotational invariance of $K$ observed above, we can conclude that $$ K(x)=c|x|^{\alpha-n} . 
$$ It remains to compute the value of $c$. To do this, we only need to find one function where we can compute the integrals explicitly. We use the friendly Gaussian. We consider $$ c \int|x|^{\alpha-n} e^{-|x|^{2}} d x=(4 \pi)^{-n / 2} \int|\xi|^{-\alpha} e^{-|\xi|^{2} / 4} d \xi=2^{n-\alpha}(4 \pi)^{-n / 2} \int|\xi|^{-\alpha} e^{-|\xi|^{2}} d \xi . $$ Writing the integrals in polar coordinates, substituting $s=r^{2}$, and then recalling the definition of the Gamma function, we obtain $$ \begin{aligned} \int_{\mathbf{R}^{n}}|x|^{-\beta} e^{-|x|^{2}} d x & =\omega_{n-1} \int_{0}^{\infty} r^{n-\beta} e^{-r^{2}} \frac{d r}{r} \\ & =\frac{\omega_{n-1}}{2} \int_{0}^{\infty} s^{(n-\beta) / 2} e^{-s} \frac{d s}{s} \\ & =\frac{1}{2} \Gamma\left(\frac{n-\beta}{2}\right) \omega_{n-1} . \end{aligned} $$ Using this to evaluate the two integrals in (8.16) and solving for $c$ gives $$ c=\frac{2^{n-\alpha} \Gamma((n-\alpha) / 2)}{(4 \pi)^{n / 2} \Gamma(\alpha / 2)} . $$ We give a simple consequence. Corollary 8.17 For $f$ in $\mathcal{S}\left(\mathbf{R}^{n}\right)$, we have $$ I_{\alpha}(f)=\left(\hat{f}(\xi)|\xi|^{-\alpha}\right)^{\vee} . $$ A reader who is not paying much attention might be tricked into thinking that this is just an application of Proposition 1.24. Though I like to advocate such sloppiness, it is traditional to be a bit more careful. Note that Proposition 1.24 does not apply to the study of $I_{\alpha} f$ because $I_{\alpha} f$ is not the convolution of two $L^{1}$ functions. A proof could be given based on approximating the multiplier $|\xi|^{-\alpha}$ by nice functions. However, I elect to obtain the result by algebra, that is, by using distributions. This result should, perhaps, have appeared in Chapter 2. However, following the modern "just-in-time" approach to knowledge delivery, we have waited until the result was needed before making a proof. 
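The bookkeeping that produces $c$ can be double-checked by arithmetic: using $\int_{\mathbf{R}^{n}}|x|^{-\beta} e^{-|x|^{2}} d x=\frac{\omega_{n-1}}{2} \Gamma\left(\frac{n-\beta}{2}\right)$, both sides of (8.16) reduce to Gamma functions. A short script, offered as a sanity check rather than part of the proof:

```python
import math

def gamma_const(alpha, n):
    """gamma(alpha, n) = 2^(n-alpha) Gamma((n-alpha)/2)
    / ((4 pi)^(n/2) Gamma(alpha/2)), as in the text."""
    return (2.0 ** (n - alpha) * math.gamma((n - alpha) / 2.0)
            / ((4.0 * math.pi) ** (n / 2.0) * math.gamma(alpha / 2.0)))

def gaussian_moment(beta, n):
    """int_{R^n} |x|^(-beta) e^(-|x|^2) dx = (omega_{n-1}/2) Gamma((n-beta)/2)."""
    omega = 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)   # area of S^(n-1)
    return 0.5 * omega * math.gamma((n - beta) / 2.0)

def check(alpha, n):
    # lhs: c * int |x|^(alpha-n) e^(-|x|^2) dx with c = gamma(alpha, n)
    # rhs: 2^(n-alpha) (4 pi)^(-n/2) int |xi|^(-alpha) e^(-|xi|^2) dxi
    lhs = gamma_const(alpha, n) * gaussian_moment(n - alpha, n)
    rhs = 2.0 ** (n - alpha) * (4.0 * math.pi) ** (-n / 2.0) * gaussian_moment(alpha, n)
    return abs(lhs - rhs) / abs(rhs)
```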
Proposition 8.18 If $u$ is a tempered distribution and $f$ is a Schwartz function, then $$ (f * u)^{\wedge}=\hat{f} \hat{u} . $$ Proof. Recall the definition of convolutions involving distributions that appeared in Chapter 2. By this and the definitions of the Fourier transform and inverse Fourier transform, we have $$ (f * u)^{\wedge}(g)=f * u(\hat{g})=u(\tilde{f} * \hat{g})=\hat{u}\left((\tilde{f} * \hat{g})^{\vee}\right) . $$ Now, we argue as in the proof of Proposition 1.24 and use the Fourier inversion theorem, Theorem 1.31, to obtain $$ (\tilde{f} * \hat{g})^{\vee}(x)=(2 \pi)^{-n} \int_{\mathbf{R}^{2 n}} \tilde{f}(\xi-\eta) \hat{g}(\eta) e^{i x \cdot((\xi-\eta)+\eta)} d \xi d \eta=\hat{f}(x) g(x) . $$ Thus, we have $(f * u)^{\wedge}(g)=\hat{u}(\hat{f} g)=(\hat{f} \hat{u})(g)$. Proof of Corollary 8.17. This is immediate from Theorem 8.13, which gives the Fourier transform of the distribution given by $\gamma(\alpha, n)|x|^{\alpha-n}$, and the previous proposition. ### A Sobolev inequality The next step is to establish an inequality relating the $L^{q}$-norm of a function $f$ with the $L^{p}$-norm of its derivatives. This result, known as a Sobolev inequality, is immediate from the Hardy-Littlewood-Sobolev inequality, once we have a representation of $f$ in terms of its gradient. Lemma 8.19 If $f$ is a Schwartz function, then $$ f(x)=\frac{1}{\omega_{n-1}} \int_{\mathbf{R}^{n}} \frac{\nabla f(y) \cdot(x-y)}{|x-y|^{n}} d y . $$ Proof. 
We let $z^{\prime} \in \mathbf{S}^{n-1}$ and then write $$ f(x)=-\int_{0}^{\infty} \frac{d}{d t} f\left(x+t z^{\prime}\right) d t=-\int_{0}^{\infty} z^{\prime} \cdot(\nabla f)\left(x+t z^{\prime}\right) d t . $$ If we integrate both sides with respect to the variable $z^{\prime}$, and then change from the polar coordinates $t$ and $z^{\prime}$ to $y$ which is related to $t$ and $z^{\prime}$ by $y-x=t z^{\prime}$, we obtain $$ \omega_{n-1} f(x)=-\int_{\mathbf{S}^{n-1}} \int_{0}^{\infty} z^{\prime} \cdot \nabla f\left(x+t z^{\prime}\right) t^{n-1+1-n} d t d z^{\prime}=\int_{\mathbf{R}^{n}} \frac{x-y}{|x-y|} \cdot \nabla f(y) \frac{1}{|x-y|^{n-1}} d y . $$ This gives the conclusion. Theorem 8.20 If $1<p<n$ (and thus $n \geq 2$ ), $f$ is in the Sobolev space $L_{1}^{p}$ and $q$ is defined by $1 / q=1 / p-1 / n$, then there is a constant $C=C(p, n)$ so that $$ \|f\|_{q} \leq C\|\nabla f\|_{p} . $$ Proof. According to Lemma 8.19, we have that for nice functions, ${ }^{1}$ $$ |f(x)| \leq I_{1}(|\nabla f|)(x) . $$ Thus, the inequality of this theorem follows from the Hardy-Littlewood-Sobolev theorem on fractional integration. Since the Schwartz class is dense in the Sobolev space, a routine limiting argument extends the inequality to functions in the Sobolev space. ${ }^{1}$ This assumes that $\omega_{n-1}^{-1}=\gamma(1, n)$, which I have not checked. The Sobolev inequality continues to hold when $p=1$. However, the above argument fails. The standard proof for $p=1$ is an ingenious argument due to Gagliardo, see Stein [14, pp. 128-130] or the original paper of Gagliardo [3]. Exercise 8.21 If $p>n$, then the Riesz potential $I_{1}$ produces functions which are Hölder continuous of order $\gamma=1-(n / p)$. If $0<\gamma<1$, define the Hölder semi-norm by $$ \|f\|_{C^{\gamma}}=\sup _{x \neq y} \frac{|f(x)-f(y)|}{|x-y|^{\gamma}} . $$ a) Show that if $f$ is a Schwartz function, then $\left\|I_{1}(f)\right\|_{C^{\gamma}} \leq C\|f\|_{p}$ provided $p>n$ and $\gamma=1-(n / p)$. 
b) Generalize to $I_{\alpha}$. c) The integral defining $I_{1}(f)$ is not absolutely convergent for all $f$ in $L^{p}$ if $p>n$. Show that the differences $I_{1} f(x)-I_{1} f(y)$ can be expressed as an absolutely convergent integral. Conclude that if $f \in L_{1}^{p}$, then $f \in C^{\gamma}$ for $\gamma$ and $p$ as above. Exercise 8.22 Show by example that the Sobolev inequality, $\|f\|_{\infty} \leq C\|\nabla f\|_{n}$, fails if $p=n$ and $n \geq 2$. Hint: For appropriate $a$, try $f$ with $f(x)=\eta(x)(-\log |x|)^{a}$ with $\eta$ a smooth function which is supported in $|x|<1 / 2$. Exercise 8.23 Show that there is a constant $C=C(n)$ so that if $g=I_{1}(f)$, then $$ \sup _{r>0, x \in \mathbf{R}^{n}} \frac{1}{m\left(B_{r}(x)\right)} \int_{B_{r}(x)}\left|g(y)-(g)_{r, x}\right| d y \leq C\|f\|_{n} . $$ Here, $(g)_{r, x}$ denotes the average of $g$ on the ball $B_{r}(x)$ : $$ (g)_{r, x}=\frac{1}{m\left(B_{r}(x)\right)} \int_{B_{r}(x)} g(y) d y . $$ Exercise 8.24 Show that in one dimension, the inequality $\|f\|_{\infty} \leq\left\|f^{\prime}\right\|_{1}$ is trivial for nice $f$. State precise hypotheses that $f$ must satisfy. ## Chapter 9 ## Singular multipliers In this chapter, we establish estimates for an operator whose symbol is singular. The results we prove here are more involved than the simple $L^{2}$ multiplier theorem that we proved in Chapter 3. However, roughly speaking, what we are doing is taking a singular symbol, smoothing it by convolution and then applying the $L^{2}$ multiplier theorem. As we shall see, this approach gives estimates in spaces of functions where we control the rate of increase near infinity. Estimates of this type were proven by Agmon and Hörmander. The details of our presentation are different, but the underlying ideas are the same. 
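The next section studies the operator $\Delta+2 \zeta \cdot \nabla$ with $\zeta \cdot \zeta=0$, whose symbol $-|\xi|^{2}+2 i \zeta \cdot \xi$ vanishes on a sphere of codimension 2. A small numerical illustration; the particular vectors are arbitrary choices consistent with Exercise 9.1:

```python
import numpy as np

# A vector zeta in C^3 with zeta . zeta = 0: take zeta = xi + i*eta with
# |xi| = |eta| and xi . eta = 0 (cf. Exercise 9.1).
xi = np.array([1.0, 0.0, 0.0])
eta = np.array([0.0, 1.0, 0.0])
zeta = xi + 1j * eta               # zeta . zeta = sum_j zeta_j^2 = 1 - 1 = 0

def symbol(k):
    """Symbol of Delta + 2 zeta . grad at the real frequency k."""
    return -np.dot(k, k) + 2j * np.dot(zeta, k)

# The zero set is {k : Re zeta . k = 0, |k + Im zeta| = |Im zeta|},
# a codimension-two sphere centered at -Im zeta = -eta.
e2, e3 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])
vals = [abs(symbol(-eta + np.cos(t) * e2 + np.sin(t) * e3))
        for t in np.linspace(0.0, 2.0 * np.pi, 9)]
# every sampled point of the circle annihilates the symbol, while a point
# off the sphere, such as k = xi, does not
```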
### Estimates for an operator with a singular symbol For the next several chapters, we will be considering a differential operator, $$ \Delta+2 \zeta \cdot \nabla=e^{-x \cdot \zeta} \Delta e^{x \cdot \zeta} $$ where $\zeta \in \mathbf{C}^{n}$ satisfies $\zeta \cdot \zeta=\sum_{j=1}^{n} \zeta_{j} \zeta_{j}=0$. Exercise 9.1 Show that $\zeta \in \mathbf{C}^{n}$ satisfies $\zeta \cdot \zeta=0$ if and only if $\zeta=\xi+i \eta$ where $\xi$ and $\eta$ are in $\mathbf{R}^{n}$ and satisfy $|\xi|=|\eta|$ and $\xi \cdot \eta=0$. Exercise 9.2 a) Show that $\Delta e^{x \cdot \zeta}=0$ if and only if $\zeta \cdot \zeta=0$. b) Find conditions on $\tau \in \mathbf{R}$ and $\xi \in \mathbf{R}^{n}$ so that $e^{\tau t+x \cdot \xi}$ satisfies $$ \left(\frac{\partial^{2}}{\partial t^{2}}-\Delta\right) e^{t \tau+x \cdot \xi}=0 . $$ The symbol of this operator is $$ -|\xi|^{2}+2 i \zeta \cdot \xi=|\operatorname{Im} \zeta|^{2}-|\operatorname{Im} \zeta+\xi|^{2}+2 i \operatorname{Re} \zeta \cdot \xi . $$ Thus it is clear that this symbol vanishes on a sphere of codimension 2 which lies in the hyperplane $\operatorname{Re} \zeta \cdot \xi=0$ and which has center $-\operatorname{Im} \zeta$ and radius $|\operatorname{Im} \zeta|$. Near this sphere, the symbol vanishes to first order. This means that the reciprocal of the symbol is locally integrable. In fact, we have the following fundamental estimate. Lemma 9.3 If $\eta \in \mathbf{R}^{n}$ and $r>0$, then there exists a constant $C$ depending only on the dimension $n$ so that $$ \int_{B_{r}(\eta)} \frac{1}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} d \xi \leq \frac{C r^{n-1}}{|\zeta|} . $$ Proof. We first observe that we are trying to prove a dilation invariant estimate, and we can simplify our work by scaling out one parameter. 
If we make the change of variables $\xi=|\zeta| x$, we obtain $$ \int_{B_{r}(\eta)}\left|\frac{1}{-|\xi|^{2}+2 i \zeta \cdot \xi}\right| d \xi=|\zeta|^{n-2} \int_{B_{r /|\zeta|}(\eta /|\zeta|)}\left|\frac{1}{-|x|^{2}+2 i \hat{\zeta} \cdot x}\right| d x $$ where $\hat{\zeta}=\zeta /|\zeta|$. Thus, it suffices to consider the estimate when $|\zeta|=1$ and we assume below that we have $|\zeta|=1$. We may also make a rotation $\xi=O x$ so that $O^{t} \operatorname{Re} \zeta=e_{1} / \sqrt{2}$ and $O^{t} \operatorname{Im} \zeta=e_{2} / \sqrt{2}$, with $e_{1}$ and $e_{2}$ the unit vectors in the $x_{1}$ and $x_{2}$ directions. Then, we have that $$ \int_{B_{r}(\eta)}\left|\frac{1}{-|\xi|^{2}+2 i \zeta \cdot \xi}\right| d \xi=\int_{B_{r}\left(O^{t} \eta\right)}\left|\frac{1}{-|x|^{2}+2 i O^{t} \zeta \cdot x}\right| d x . $$ Thus, it suffices to prove the Lemma in the special case when $\zeta=\left(e_{1}+i e_{2}\right) / \sqrt{2}$. We let $\Sigma_{\zeta}=\left\{\xi:-|\xi|^{2}+2 i \zeta \cdot \xi=0\right\}$ be the zero set of the symbol. Case 1. The ball $B_{r}(\eta)$ satisfies $r<1 / 100$ and $\operatorname{dist}\left(\eta, \Sigma_{\zeta}\right)<2 r$. In this case, we make an additional change of variables. We rotate in the variables $\left(\xi_{2}, \ldots, \xi_{n}\right)$ about the center of $\Sigma_{\zeta}$, $-e_{2} / \sqrt{2}$, so that $\eta$ is within $2 r$ units of the origin. We can find a ball $B_{3 r}$ of radius $3 r$ centered at $0 \in \Sigma_{\zeta}$ so that $B_{r}(\eta) \subset B_{3 r}$. Now, we use coordinates $x_{1}=\operatorname{Re} \zeta \cdot \xi$, $x_{2}=|\operatorname{Im} \zeta|^{2}-|\operatorname{Im} \zeta+\xi|^{2}$ and $x_{j}=\xi_{j}$ for $j=3, \ldots, n$. We leave it as an exercise to compute the Jacobian and show that it is bounded above and below on $B_{r}(\eta)$. This gives the bound $$ \int_{B_{3 r}}\left|\frac{1}{-|\xi|^{2}+2 i \zeta \cdot \xi}\right| d \xi \leq C \int_{B_{C r}(0)} \frac{1}{\left|x_{1}+i x_{2}\right|} d x_{1} d x_{2} \ldots d x_{n}=C r^{n-1} . $$ Case 2. 
We have $B_{r}(\eta)$ with $\operatorname{dist}\left(\eta, \Sigma_{\zeta}\right)>2 r$. In this case, we have $$ \sup _{\xi \in B_{r}(\eta)} \frac{1}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} \leq C / r $$ and the Lemma follows in this case. Case 3. The ball $B_{r}(\eta)$ satisfies $r>1 / 100$ and $\operatorname{dist}\left(\eta, \Sigma_{\zeta}\right)<2 r$. In this case, write $B_{r}(\eta)=B_{0} \cup B_{\infty}$ where $B_{0}=B_{r}(\eta) \cap B_{4}(0)$ and $B_{\infty}=B_{r}(\eta) \backslash B_{0}$. By Cases 1 and 2, $$ \int_{B_{0}} \frac{1}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} d \xi \leq C . $$ Since $B_{4}(0)$ contains the set $\Sigma_{\zeta}$, one can show that $$ \frac{1}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} \leq C /|\xi|^{2} $$ on $B_{\infty}$ and integrating this estimate gives $$ \int_{B_{\infty}} \frac{1}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} d \xi \leq C r^{n-2} . $$ Since $r>1 / 100$, the estimates on $B_{0}$ and $B_{\infty}$ imply the estimate of the Lemma in this case. As a consequence of this Lemma, we can define the operator $G_{\zeta}: \mathcal{S}\left(\mathbf{R}^{n}\right) \rightarrow \mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$ by $$ G_{\zeta} f=\left(\frac{\hat{f}(\xi)}{-|\xi|^{2}+2 i \zeta \cdot \xi}\right)^{\vee} . $$ Lemma 9.4 The map $G_{\zeta}$ is bounded from $\mathcal{S}\left(\mathbf{R}^{n}\right)$ to $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$ and we have $$ (\Delta+2 \zeta \cdot \nabla) G_{\zeta} f=G_{\zeta}(\Delta+2 \zeta \cdot \nabla) f=f $$ if $f \in \mathcal{S}\left(\mathbf{R}^{n}\right)$. Proof. According to the previous lemma, the symbol of $G_{\zeta}$ satisfies the growth condition of Example 2.21 in Chapter 2. Hence $G_{\zeta} f$ is in $\mathcal{S}^{\prime}\left(\mathbf{R}^{n}\right)$. The remaining results rely on Proposition 1.18 of Chapter $1^{1}$. It is not enough to know that $G_{\zeta} f$ is a tempered distribution. 
We would also like to know that the map $G_{\zeta}$ is bounded between some pair of Banach spaces. This will be useful when we try to construct solutions of perturbations of the operator $\Delta+2 \zeta \cdot \nabla$. The definition of the spaces we will use appears similar to the Besov spaces and the Littlewood-Paley theory in Chapter 7. However, now we are decomposing $f$ rather than $\hat{f}$. To define these spaces, we let
$$
B_{j}=B_{2^{j}}(0)
$$
and then put $R_{j}=B_{j} \backslash B_{j-1}$. We let $\dot{M}_{q}^{p, s}\left(\mathbf{R}^{n}\right)$ denote the space of functions $u$ for which the norm
$$
\|u\|_{\dot{M}_{q}^{p, s}}=\left(\sum_{k=-\infty}^{\infty}\left[2^{k s}\|u\|_{L^{p}\left(R_{k}\right)}\right]^{q}\right)^{1 / q}<\infty .
$$
Also, we let $M_{q}^{p, s}$ be the space of measurable functions for which the norm
$$
\|u\|_{M_{q}^{p, s}}=\left(\|u\|_{L^{p}\left(B_{0}\right)}^{q}+\sum_{k=1}^{\infty}\left[2^{k s}\|u\|_{L^{p}\left(R_{k}\right)}\right]^{q}\right)^{1 / q}
$$
is finite. These definitions are valid for $0<p \leq \infty$, $s \in \mathbf{R}$ and $0<q<\infty$. We will also need the case when $q=\infty$, which is defined by replacing the $\ell^{q}$ norm of the sequence $2^{k s}\|u\|_{L^{p}\left(R_{k}\right)}$ by the supremum. Our primary interests are the space where $p=2$, $q=1$ and $s=1 / 2$ and the space where $p=2$, $q=\infty$ and $s=-1 / 2$. The following exercises give some practice with these spaces.

${ }^{1}$ There is a sign error in the version of this Proposition handed out in class.

Exercise 9.5 For which $a$ do we have
$$
\left(1+|x|^{2}\right)^{a / 2} \in M_{\infty}^{2,1 / 2}\left(\mathbf{R}^{n}\right) ?
$$

Exercise 9.6 Show that if $r \geq q$, then
$$
M_{q}^{2, s} \subset M_{r}^{2, s} .
$$

Exercise 9.7 Show that if $s>0$, then
$$
M_{1}^{2, s} \subset \dot{M}_{1}^{2, s} .
$$

Exercise 9.8 Let $T$ be the multiplication operator
$$
T f(x)=\left(1+|x|^{2}\right)^{-(1+\epsilon) / 2} f(x) .
$$
Show that if $\epsilon>0$, then
$$
T: M_{\infty}^{2,-1 / 2} \rightarrow M_{1}^{2,1 / 2} .
$$

Exercise 9.9 Show that we have the inclusion $L^{2}\left(\mathbf{R}^{n}, d \mu_{s}\right) \subset M_{1}^{2,1 / 2}$ for $s>1 / 2$, where $d \mu_{s}=\left(1+|x|^{2}\right)^{s} d x$. This means that we need to establish that for some $C$ depending on $n$ and $s$, we have the inequality
$$
\|u\|_{L^{2}\left(B_{0}\right)}+\sum_{k=1}^{\infty} 2^{k / 2}\|u\|_{L^{2}\left(R_{k}\right)} \leq C\left(\int_{\mathbf{R}^{n}}|u(x)|^{2}\left(1+|x|^{2}\right)^{s} d x\right)^{1 / 2} .
$$
Hint: The integral on $\mathbf{R}^{n}$ dominates the integral on each ring. On each ring, the weight changes by at most a fixed factor. Thus, it makes sense to replace the weight by its smallest value. This will give an estimate on each ring that can be summed to obtain the $M_{1}^{2,1 / 2}$ norm.

The main step of our estimate is the following lemma.

Lemma 9.10 Let $\psi$ and $\psi^{\prime}$ be Schwartz functions on $\mathbf{R}^{n}$ and set $\psi_{k}(x)=\psi\left(2^{-k} x\right)$ and $\psi_{j}^{\prime}(x)=\psi^{\prime}\left(2^{-j} x\right)$. We define a kernel $K: \mathbf{R}^{n} \times \mathbf{R}^{n} \rightarrow \mathbf{C}$ by
$$
K\left(\xi_{1}, \xi_{2}\right)=\int_{\mathbf{R}^{n}} \frac{\hat{\psi}_{j}^{\prime}\left(\xi_{1}-\xi\right) \hat{\psi}_{k}\left(\xi-\xi_{2}\right)}{-|\xi|^{2}+2 i \zeta \cdot \xi} d \xi .
$$
Then there is a constant $C$ so that
$$
\begin{aligned}
\sup _{\xi_{1}} \int\left|K\left(\xi_{1}, \xi_{2}\right)\right| d \xi_{2} & \leq \frac{C 2^{j}}{|\zeta|} \quad (9.11) \\
\sup _{\xi_{2}} \int\left|K\left(\xi_{1}, \xi_{2}\right)\right| d \xi_{1} & \leq \frac{C 2^{k}}{|\zeta|} . \quad (9.12)
\end{aligned}
$$
As a consequence, the operator $T_{j, k}$ given by
$$
T_{j, k} f\left(\xi_{1}\right)=\int K\left(\xi_{1}, \xi_{2}\right) f\left(\xi_{2}\right) d \xi_{2}
$$
satisfies
$$
\left\|T_{j, k} f\right\|_{p} \leq \frac{C}{|\zeta|} 2^{k / p} 2^{j / p^{\prime}}\|f\|_{p} .
$$

Proof.
Observe that $\hat{\psi}_{k}(\xi)=2^{k n} \hat{\psi}\left(2^{k} \xi\right)=(\hat{\psi})_{2^{-k}}(\xi)$. Thus, $\|\hat{\psi}_{k}\|_{1}$ is independent of $k$. Since $\psi \in \mathcal{S}\left(\mathbf{R}^{n}\right)$, we have that $\|\hat{\psi}\|_{1}$ is finite. Thus, if we use Tonelli's theorem, we have
$$
\int_{\mathbf{R}^{n}}\left|K\left(\xi_{1}, \xi_{2}\right)\right| d \xi_{2} \leq\|\hat{\psi}\|_{1} \int \frac{\left|\hat{\psi}_{j}^{\prime}\left(\xi_{1}-\xi\right)\right|}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} d \xi .
$$
To estimate the integral on the right of this inequality, we break the integral into rings centered at $\xi_{1}$ and use that $\hat{\psi}^{\prime}$ decays rapidly at infinity so that, in particular, we have $|\hat{\psi}^{\prime}(\xi)| \leq C \min \left(1,|\xi|^{-n}\right)$. Then applying Lemma 9.3 gives us
$$
\begin{aligned}
\int \frac{\left|\hat{\psi}_{j}^{\prime}\left(\xi_{1}-\xi\right)\right|}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} d \xi \leq & \|\hat{\psi}^{\prime}\|_{\infty} 2^{n j} \int_{B_{2^{-j}}\left(\xi_{1}\right)} \frac{1}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} d \xi \\
& +\sum_{l=1}^{\infty} C 2^{n j} 2^{-n l} \int_{B_{2^{-j+l}}\left(\xi_{1}\right) \backslash B_{2^{-j+l-1}}\left(\xi_{1}\right)} \frac{1}{\left|-|\xi|^{2}+2 i \zeta \cdot \xi\right|} d \xi \\
\leq & \frac{C}{|\zeta|} 2^{j} \sum_{l=0}^{\infty} 2^{-l} .
\end{aligned}
$$
This gives the first estimate (9.11). The second is proven by interchanging the roles of $\xi_{1}$ and $\xi_{2}$.

The estimate (9.11) gives a bound for the operator norm on $L^{\infty}$. The estimate (9.12) gives a bound for the operator norm on $L^{1}$. The bound for the operator norm on $L^{p}$ follows by the Riesz-Thorin interpolation theorem, Theorem 4.1. See exercise 4.5.

Exercise 9.14 Show that it suffices to prove the following theorem for $|\zeta|=1$.
That is, show that if the theorem holds when $|\zeta|=1$, then by rescaling, we can deduce that the result holds for all $\zeta$ with $\zeta \cdot \zeta=0$.

Exercise 9.15 The argument given should continue to prove an estimate as long as $\operatorname{Re} \zeta$ and $\operatorname{Im} \zeta$ are both nonzero. Verify this and show how the constants depend on $\zeta$.

Theorem 9.16 The map $G_{\zeta}$ satisfies
$$
\sup _{j} 2^{-j / 2}\left\|G_{\zeta} f\right\|_{L^{2}\left(B_{j}\right)} \leq \frac{C}{|\zeta|}\|f\|_{\dot{M}_{1}^{2,1 / 2}}
$$
and
$$
\sup _{j} 2^{-j / 2}\left\|G_{\zeta} f\right\|_{L^{2}\left(B_{j}\right)} \leq \frac{C}{|\zeta|}\|f\|_{M_{1}^{2,1 / 2}} .
$$

Proof. We first suppose that $f$ is in the Schwartz space. We choose $\psi \geq 0$ as in Chapter 7 so that $\operatorname{supp} \psi \subset\{x: 1 / 2 \leq|x| \leq 2\}$ and, with $\psi_{k}(x)=\psi\left(2^{-k} x\right)$, we have
$$
\sum_{k=-\infty}^{\infty} \psi_{k}^{2}=1, \quad \text { in } \mathbf{R}^{n} \backslash\{0\} .
$$
We let $\phi=1$ if $|x|<1$, $\phi \geq 0$, $\phi \in \mathcal{D}\left(\mathbf{R}^{n}\right)$, and set $\phi_{j}(x)=\phi\left(2^{-j} x\right)$. We decompose $f$ using the $\psi_{k}$ 's to obtain
$$
\phi_{j} G_{\zeta} f=\sum_{k=-\infty}^{\infty} \phi_{j} G_{\zeta} \psi_{k}^{2} f .
$$
The Plancherel theorem implies that
$$
\left\|\phi_{j} G_{\zeta}\left(\psi_{k}^{2} f\right)\right\|_{2}^{2}=(2 \pi)^{-n} \int\left|T_{j, k} \widehat{\psi_{k} f}\right|^{2} d \xi .
$$
Here, the operator $T_{j, k}$ is as in the previous lemma, but with $\psi^{\prime}$ replaced by $\phi$. Hence, from Lemma 9.10 we can conclude that
$$
\left\|\phi_{j} G_{\zeta} \psi_{k}^{2} f\right\|_{2} \leq \frac{C}{|\zeta|} 2^{j / 2} 2^{k / 2} \sum_{|\ell| \leq 1}\left\|\psi_{k+\ell} f\right\|_{2} . \quad (9.17)
$$
Now, using Minkowski's inequality, we have
$$
\left\|G_{\zeta} f\right\|_{L^{2}\left(B_{j}\right)} \leq C \sum_{k=-\infty}^{\infty}\left\|\phi_{j} G_{\zeta} \psi_{k}^{2} f\right\|_{L^{2}\left(\mathbf{R}^{n}\right)} . \quad (9.18)
$$
The first conclusion of the theorem now follows from (9.17) and (9.18). The estimate in the inhomogeneous space follows by using Cauchy-Schwarz to show
$$
\sum_{k=-\infty}^{0} 2^{k / 2}\|f\|_{L^{2}\left(R_{k}\right)} \leq\left(\sum_{k=-\infty}^{0}\|f\|_{L^{2}\left(R_{k}\right)}^{2}\right)^{1 / 2}\left(\sum_{k=-\infty}^{0} 2^{k}\right)^{1 / 2}=\sqrt{2}\|f\|_{L^{2}\left(B_{0}\right)} .
$$
Finally, to remove the restriction that $f$ is in the Schwartz space, we observe that the Lemma below tells us that Schwartz functions are dense in $\dot{M}_{1}^{2,1 / 2}$ and $M_{1}^{2,1 / 2}$.

Lemma 9.19 We have that $\mathcal{S}\left(\mathbf{R}^{n}\right) \cap \dot{M}_{1}^{2,1 / 2}$ is dense in $\dot{M}_{1}^{2,1 / 2}$ and $\mathcal{S}\left(\mathbf{R}^{n}\right) \cap M_{1}^{2,1 / 2}$ is dense in $M_{1}^{2,1 / 2}$.

Proof. To see this, first observe that if we pick $f$ in $\dot{M}_{1}^{2,1 / 2}$ and define
$$
f_{N}(x)= \begin{cases}0, & |x|<2^{-N} \text { or }|x|>2^{N} \\ f(x), & 2^{-N} \leq|x| \leq 2^{N}\end{cases}
$$
then $f_{N}$ converges to $f$ in $\dot{M}_{1}^{2,1 / 2}$. Next, if we regularize with a standard mollifier, then $f_{N, \epsilon}=f_{N} * \eta_{\epsilon}$ converges to $f_{N}$ in $L^{2}$. If we assume that $\eta$ is supported in the unit ball, then for $\epsilon<2^{-N-1}$, $f_{N, \epsilon}$ will be supported in the shell $\left\{x: 2^{-N-1} \leq|x| \leq 2^{N+1}\right\}$. For such functions, we may use Cauchy-Schwarz to obtain
$$
\left\|f_{N}-f_{N, \epsilon}\right\|_{\dot{M}_{1}^{2,1 / 2}} \leq\left(\sum_{k=-N}^{N+1}\left\|f_{N, \epsilon}-f_{N}\right\|_{L^{2}\left(R_{k}\right)}^{2}\right)^{1 / 2}\left(\sum_{k=-N}^{N+1} 2^{k}\right)^{1 / 2} \leq C_{N}\left\|f_{N}-f_{N, \epsilon}\right\|_{2} .
$$
Hence, for functions supported in compact subsets of $\mathbf{R}^{n} \backslash\{0\}$, the $L^{2}$ convergence of $f_{N, \epsilon}$ to $f_{N}$ implies convergence in the space $\dot{M}_{1}^{2,1 / 2}$. Approximation in $M_{1}^{2,1 / 2}$ is easier since we only need to cut off near infinity.

Exercise 9.20 Are Schwartz functions dense in $M_{\infty}^{2,-1 / 2}$?

Exercise 9.21 Use the ideas above to show that
$$
\sup _{j} 2^{-j / 2}\left\|\nabla G_{\zeta} f\right\|_{L^{2}\left(B_{j}\right)} \leq C\|f\|_{\dot{M}_{1}^{2,1 / 2}} .
$$
Hint: One only needs to find a replacement for Lemma 9.3.

Exercise 9.22 Use the ideas above to show that $I_{\alpha}: \dot{M}_{1}^{2, \alpha / 2} \rightarrow \dot{M}_{\infty}^{2,-\alpha / 2}$. Hint: Again, the main step is to find a substitute for Lemma 9.3.

Finally, we establish uniqueness for the equation $\Delta u+2 \zeta \cdot \nabla u=0$ in all of $\mathbf{R}^{n}$. In order to obtain uniqueness, we will need some restriction on the growth of $u$ at infinity.

Theorem 9.23 If $u$ is in $L_{l o c}^{2}$, satisfies
$$
\lim _{j \rightarrow \infty} 2^{-j}\|u\|_{L^{2}\left(B_{2^{j}}(0)\right)}=0
$$
and $\Delta u+2 \zeta \cdot \nabla u=0$, then $u=0$.

The following is taken from Hörmander [6], see Theorem 7.1.27.

Lemma 9.24 If $u$ is a tempered distribution which satisfies
$$
\limsup _{R \rightarrow \infty} R^{-d / 2}\|u\|_{L^{2}\left(B_{R}(0)\right)}=M<\infty
$$
and $\hat{u}$ is supported in a compact surface $S$ of codimension $d$, then there is a function $u_{0} \in L^{2}(S)$ so that
$$
\hat{u}(\phi)=\int_{S} \phi u_{0} d \sigma
$$
and $\left\|u_{0}\right\|_{L^{2}(S)} \leq C M$.

Proof. We choose $\phi \in \mathcal{D}\left(\mathbf{R}^{n}\right)$, $\operatorname{supp} \phi \subset B_{1}(0)$, $\phi$ even, $\int \phi=1$ and consider $\hat{u} * \phi_{\epsilon}$.
By Plancherel's theorem, we have that
$$
\int\left|\hat{u} * \phi_{2^{-j}}\right|^{2} d \xi=\int\left|\hat{\phi}\left(2^{-j} x\right) u(x)\right|^{2} d x \leq C 2^{d j} M^{2} .
$$
To establish this, we break the integral into the integral over the ball $B_{j}$ and integrals over shells. We use that $\hat{\phi}$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and satisfies $|\hat{\phi}(x)| \leq C \min \left(1,|x|^{-(d+1)}\right)$. For $j$ large enough so that $2^{-j d / 2}\|u\|_{L^{2}\left(B_{j}\right)} \leq 2 M$, we have
$$
\begin{aligned}
\int\left|\hat{\phi}\left(2^{-j} x\right)\right|^{2}|u(x)|^{2} d x & \leq \int_{B_{j}}|u(x)|^{2} d x+\sum_{k=j}^{\infty} \int_{R_{k+1}}\left|\hat{\phi}\left(2^{-j} x\right)\right|^{2}|u(x)|^{2} d x \\
& \leq C 2^{j d} 4 M^{2}+C 2^{2 j(d+1)} \sum_{k=j}^{\infty} 2^{-2 k(d+1)} 2^{k d} 4 M^{2} \leq C M^{2} 2^{d j} .
\end{aligned}
$$
If we let $S_{\epsilon}=\{\xi: \operatorname{dist}(\xi, S)<\epsilon\}$ and $\psi$ is in the Schwartz class, then we have
$$
\int_{S}|\psi(x)|^{2} d \sigma=C_{d} \lim _{\epsilon \rightarrow 0^{+}} \epsilon^{-d} \int_{S_{\epsilon}}|\psi(x)|^{2} d x .
$$
Since $\phi_{\epsilon} * \psi \rightarrow \psi$ in $\mathcal{S}$, we have
$$
\hat{u}(\psi)=\lim _{j \rightarrow \infty} \hat{u}\left(\phi_{2^{-j}} * \psi\right) .
$$
Then using Cauchy-Schwarz, the estimate above for $\hat{u} * \phi_{2^{-j}}$, and that $\hat{u} * \phi_{2^{-j}}$ is supported in $S_{2^{-j}}$, we obtain
$$
\left|\hat{u}\left(\psi * \phi_{2^{-j}}\right)\right|=\left|\int_{S_{2^{-j}}} \hat{u} * \phi_{2^{-j}}(x) \psi(x) d x\right| \leq C M 2^{j d / 2}\left(\int_{S_{2^{-j}}}|\psi(x)|^{2} d x\right)^{1 / 2} .
$$
If we let $j \rightarrow \infty$, we obtain that $|\hat{u}(\psi)| \leq C M\|\psi\|_{L^{2}(S)}$. This inequality implies the existence of $u_{0}$.

Now we can present the proof of our uniqueness theorem.

Proof of Theorem 9.23. Since $\Delta u+2 \zeta \cdot \nabla u=0$, we can conclude that the distribution $\hat{u}$ is supported on the zero set of $-|\xi|^{2}+2 i \zeta \cdot \xi$, a sphere of codimension 2.
Now the hypothesis on the growth of the $L^{2}$ norm and the previous lemma, Lemma 9.24, imply that $\hat{u}=0$ and hence $u=0$.

Corollary 9.25 If $f$ is in $M_{1}^{2,1 / 2}$, then there is exactly one solution $u$ of
$$
\Delta u+2 \zeta \cdot \nabla u=f
$$
which lies in $M_{\infty}^{2,-1 / 2}$. This solution satisfies
$$
|\zeta|\|u\|_{M_{\infty}^{2,-1 / 2}}+\|\nabla u\|_{M_{\infty}^{2,-1 / 2}} \leq C\|f\|_{M_{1}^{2,1 / 2}} .
$$

Proof. The existence follows from Theorem 9.16 and exercise 9.21. If $u$ is in $M_{\infty}^{2,-1 / 2}$, then we have that $u$ is in $L_{l o c}^{2}$ and that
$$
\lim _{j \rightarrow \infty} 2^{-\alpha j}\|u\|_{L^{2}\left(B_{2^{j}}(0)\right)}=0
$$
if $\alpha>1 / 2$. Thus, the uniqueness follows from Theorem 9.23.

### A trace theorem.

The goal of this section is to provide another application of the ideas presented above. The result proven will not be used in this course. Also, this argument will serve to introduce a technical tool that will be needed in Chapter 14.

We begin with the definition of an Ahlfors condition. We say that a Borel measure $\mu$ in $\mathbf{R}^{n}$ satisfies an Ahlfors condition if for some constant $A$ it satisfies $\mu\left(B_{r}(x)\right) \leq A r^{n-1}$. This is a property which is satisfied by surface measure on the boundary of a $C^{1}$-domain as well as by surface measure on a graph $\left\{\left(x^{\prime}, x_{n}\right): x_{n}=\phi\left(x^{\prime}\right)\right\}$ provided that $\|\nabla \phi\|_{\infty}<\infty$. Our main result is the following theorem.

Theorem 9.26 If $u$ is in $\mathcal{S}\left(\mathbf{R}^{n}\right)$ and $\mu$ satisfies the Ahlfors condition, then there is a constant $C$ so that
$$
\int_{\mathbf{R}^{n}}|\hat{u}(\xi)|^{2} d \mu \leq C\|u\|_{\dot{M}_{1}^{2,1 / 2}}^{2} .
$$
This may seem peculiar, but as an application, we observe that this theorem implies a trace theorem for Sobolev spaces.
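Before turning to the corollary, note that the Ahlfors condition for a Lipschitz graph is easy to see directly: the portion of the graph inside $B_{r}(x)$ comes from parameters $x^{\prime}$ in a ball of radius $r$, so $\mu\left(B_{r}(x)\right) \leq \omega_{n-1} r^{n-1} \sqrt{1+\|\nabla \phi\|_{\infty}^{2}}$ with $\omega_{n-1}$ the volume of the unit ball in $\mathbf{R}^{n-1}$. A small numerical sketch for $n=2$ (the graph, center and radii below are arbitrary illustrative choices):

```python
# Arc-length measure of B_r(x0) on the graph {(t, phi(t))} in R^2, computed by a
# Riemann sum; we check mu(B_r) <= A r with A = 2 sqrt(1 + L^2), the Ahlfors
# bound for n = 2. The graph phi(t) = sin(t)/2 has Lipschitz constant L = 1/2.
import math

phi = lambda t: 0.5 * math.sin(t)
dphi = lambda t: 0.5 * math.cos(t)
L = 0.5
A = 2 * math.sqrt(1 + L * L)

def mu_ball(x0, r, n_steps=100000):
    # integrate sqrt(1 + phi'(t)^2) over {t : |(t, phi(t)) - x0| < r};
    # any contributing parameter satisfies |t - x0[0]| < r
    t0, dt = x0[0] - r, 2 * r / n_steps
    total = 0.0
    for i in range(n_steps):
        t = t0 + (i + 0.5) * dt
        if (t - x0[0]) ** 2 + (phi(t) - x0[1]) ** 2 < r * r:
            total += math.sqrt(1 + dphi(t) ** 2) * dt
    return total

center = (0.7, phi(0.7))                 # a point on the graph (arbitrary)
for r in (0.1, 0.5, 1.0):
    assert mu_ball(center, r) <= A * r + 1e-6
```

The same one-line argument, run in the $x^{\prime}$ variable, gives the bound for graphs in any dimension.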
Corollary 9.27 If $\mu$ satisfies the Ahlfors condition and $s>1 / 2$, then we have
$$
\int_{\mathbf{R}^{n}}|u|^{2} d \mu \leq C\|u\|_{L_{s}^{2}\left(\mathbf{R}^{n}\right)}^{2} .
$$

Proof. First assume that $u \in \mathcal{S}\left(\mathbf{R}^{n}\right)$. Applying the previous theorem to $\check{u}(x)=(2 \pi)^{-n} \hat{u}(-x)$ gives that
$$
\int|u|^{2} d \mu(x) \leq C\|\hat{u}\|_{\dot{M}_{1}^{2,1 / 2}}^{2} .
$$
It is elementary (see exercise 9.9) to establish the inequality
$$
\|v\|_{M_{1}^{2,1 / 2}}^{2} \leq C_{s} \int_{\mathbf{R}^{n}}|v(x)|^{2}\left(1+|x|^{2}\right)^{s} d x
$$
when $s>1 / 2$. Also, from exercise 9.7 or the proof of Theorem 9.16, we have
$$
\|v\|_{\dot{M}_{1}^{2,1 / 2}} \leq C\|v\|_{M_{1}^{2,1 / 2}} .
$$
Combining the two previous inequalities with $v=\hat{u}$ gives the desired conclusion.

Lemma 9.28 The map $g \rightarrow \int \cdot g \, d x$ is an isomorphism from $\dot{M}_{\infty}^{2,-1 / 2}$ to the dual space of $\dot{M}_{1}^{2,1 / 2}$, which we denote by $\left(\dot{M}_{1}^{2,1 / 2}\right)^{\prime}$.

Proof. It is clear by applying Hölder's inequality twice that
$$
\int_{\mathbf{R}^{n}} f g d x \leq\|f\|_{\dot{M}_{1}^{2,1 / 2}}\|g\|_{\dot{M}_{\infty}^{2,-1 / 2}} .
$$
Thus, our map takes $\dot{M}_{\infty}^{2,-1 / 2}$ into the dual of $\dot{M}_{1}^{2,1 / 2}$. To see that this map is onto, suppose that $\lambda \in\left(\dot{M}_{1}^{2,1 / 2}\right)^{\prime}$. Observe that $L^{2}\left(R_{k}\right) \subset \dot{M}_{1}^{2,1 / 2}$ in the sense that if $f \in L^{2}\left(R_{k}\right)$, then the function which is $f$ in $R_{k}$ and 0 outside $R_{k}$ lies in $\dot{M}_{1}^{2,1 / 2}$. Thus, for such $f$,
$$
\lambda(f) \leq\|\lambda\|_{\left(\dot{M}_{1}^{2,1 / 2}\right)^{\prime}}\|f\|_{\dot{M}_{1}^{2,1 / 2}}=2^{k / 2}\|\lambda\|_{\left(\dot{M}_{1}^{2,1 / 2}\right)^{\prime}}\|f\|_{L^{2}\left(R_{k}\right)} .
$$
Since we know the dual of $L^{2}\left(R_{k}\right)$, we can conclude that there exists $g_{k}$ with
$$
\left\|g_{k}\right\|_{L^{2}\left(R_{k}\right)} \leq 2^{k / 2}\|\lambda\|_{\left(\dot{M}_{1}^{2,1 / 2}\right)^{\prime}} \quad (9.29)
$$
so that
$$
\lambda(f)=\int_{R_{k}} f g_{k} d x \quad (9.30)
$$
for $f \in L^{2}\left(R_{k}\right)$. We set $g=\sum_{k=-\infty}^{\infty} g_{k}$. Note that there can be no question about the meaning of the infinite sum since for each $x$ at most one summand is not zero. The estimate (9.29) implies $\|g\|_{\dot{M}_{\infty}^{2,-1 / 2}} \leq\|\lambda\|_{\left(\dot{M}_{1}^{2,1 / 2}\right)^{\prime}}$. If $f$ is supported in $\cup_{k=-N}^{N} R_{k}$, then summing (9.30) implies that
$$
\lambda(f)=\int f g d x .
$$
Finally, such $f$ are dense in $\dot{M}_{1}^{2,1 / 2}$, so we conclude that $\lambda(f)=\int f g d x$ for all $f$.

We have defined the adjoint of an operator on a Hilbert space earlier. Here, we need a slightly more general notion. If $T: X \rightarrow \mathcal{H}$ is a continuous linear map from a normed vector space into a Hilbert space, then $x \rightarrow\langle T x, y\rangle$ is a continuous linear functional on $X$. Thus, there exists $y^{*} \in X^{\prime}$ so that $y^{*}(x)=\langle T x, y\rangle$. One can show that the map $y \rightarrow y^{*}=T^{*} y$ is linear and continuous. The map $T^{*}: \mathcal{H} \rightarrow X^{\prime}$ is the adjoint of the map $T$. The adjoint discussed here is closely related to the transpose of a map introduced when we discussed distributions. For our purposes, the key distinction is that the transpose satisfies $(T f, g)=\left(f, T^{t} g\right)$ for a bilinear pairing, while the adjoint satisfies $\langle T f, g\rangle=\left\langle f, T^{*} g\right\rangle$ for a sesquilinear pairing (this means linear in the first variable and conjugate linear in the second variable). The map $T \rightarrow T^{t}$ will be linear, while the map $T \rightarrow T^{*}$ is conjugate linear.
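The identity $\|T f\|_{\mathcal{H}}^{2}=\left\langle T f, T f\right\rangle=T^{*} T f(f)$ is what drives the lemma that follows. For a concrete finite-dimensional stand-in, where $T$ is a complex matrix and the pairing is the usual sesquilinear inner product, the identity can be checked directly (the matrix and vector below are arbitrary illustrative choices):

```python
# Toy check of ||Tf||^2 = <T*Tf, f> for a complex matrix T, with the pairing
# <a, b> = sum a_i conj(b_i): linear in the first slot, conjugate linear in
# the second, matching the sesquilinear convention in the text.
T = [[1 + 2j, 0.5], [-1j, 3.0], [2.0, 1 - 1j]]   # arbitrary 3x2 complex matrix
f = [1.0 - 1j, 2.0 + 0.5j]                        # arbitrary vector in X = C^2

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def adjoint(M):                                   # conjugate transpose
    return [[M[i][j].conjugate() for i in range(len(M))] for j in range(len(M[0]))]

def inner(a, b):                                  # sesquilinear pairing
    return sum(x * y.conjugate() for x, y in zip(a, b))

Tf = matvec(T, f)
TstarTf = matvec(adjoint(T), Tf)
lhs = inner(Tf, Tf)        # ||Tf||^2, a nonnegative real number
rhs = inner(TstarTf, f)    # <T*Tf, f>
assert abs(lhs - rhs) < 1e-12
```

In the lemma below, $X$ is a normed space rather than $\mathbf{C}^{2}$ and $T^{*} T f(f)$ denotes the action of the functional $T^{*} T f \in X^{\prime}$ on $f$, but the computation is the same.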
The following lemma is a simple case of what is known to harmonic analysts as the Peter Tomas trick. It was used to prove a restriction theorem for the Fourier transform in [18].

Lemma 9.31 Let $T: X \rightarrow \mathcal{H}$ be a map from a normed vector space $X$ into a Hilbert space $\mathcal{H}$. If $T^{*} T: X \rightarrow X^{\prime}$ satisfies
$$
\left\|T^{*} T f\right\|_{X^{\prime}} \leq A^{2}\|f\|_{X},
$$
then
$$
\|T f\|_{\mathcal{H}} \leq A\|f\|_{X} .
$$

Proof. We have
$$
T^{*} T f(f)=\langle T f, T f\rangle=\|T f\|_{\mathcal{H}}^{2}
$$
and since $\left|T^{*} T f(f)\right| \leq\left\|T^{*} T f\right\|_{X^{\prime}}\|f\|_{X} \leq A^{2}\|f\|_{X}^{2}$, the lemma follows.

Proof of Theorem 9.26. We consider $f$ in $\dot{M}_{1}^{2,1 / 2}$ and let $T$ denote the map $f \rightarrow \hat{f}$, viewed as a map into $L^{2}(\mu)$. The map $T^{*} T$ is given by
$$
T^{*} T f(x)=\int_{\mathbf{R}^{n}} \hat{f}(\xi) e^{-i x \cdot \xi} d \mu(\xi) .
$$
Using the Ahlfors condition on the measure $\mu$, one may repeat word for word our proof of Theorem 9.16 to conclude that $T^{*} T$ maps $\dot{M}_{1}^{2,1 / 2} \rightarrow \dot{M}_{\infty}^{2,-1 / 2}$. Now the two previous Lemmas give that $T: \dot{M}_{1}^{2,1 / 2} \rightarrow L^{2}(\mu)$ is bounded.

Exercise 9.32 Prove a similar result for other codimensions, even fractional ones. That is, suppose that $\mu\left(B_{r}(x)\right) \leq C r^{n-\alpha}$ for $0<\alpha<n$. Then show that
$$
\int_{\mathbf{R}^{n}}|\hat{f}(\xi)|^{2} d \mu(\xi) \leq C\|f\|_{\dot{M}_{1}^{2, \alpha / 2}}^{2} .
$$

## Chapter 10

## MA633, the Cliff notes

In this chapter, we introduce a good deal of the machinery of elliptic partial differential equations. This will be needed in the next chapter to introduce the inverse boundary value problem we are going to study.
### Domains in $\mathbf{R}^{n}$

For $\mathcal{O}$ an open subset of $\mathbf{R}^{n}$, we let $C^{k}(\mathcal{O})$ denote the space of functions on $\mathcal{O}$ which have continuous partial derivatives of all orders $\alpha$ with $|\alpha| \leq k$. We let $C^{k}(\overline{\mathcal{O}})$ be the space of functions for which all derivatives of order up to $k$ extend continuously to the closure of $\mathcal{O}$, $\overline{\mathcal{O}}$. Finally, we let $\mathcal{D}(\mathcal{O})$ denote the space of functions which are infinitely differentiable and compactly supported in $\mathcal{O}$.

We say that $\Omega \subset \mathbf{R}^{n}$ is a domain if $\Omega$ is a bounded connected open set. We say that a domain is of class $C^{k}$ if for each $x \in \partial \Omega$, there is an $r>0$, $\phi \in C^{k}\left(\mathbf{R}^{n-1}\right)$ and coordinates $\left(x^{\prime}, x_{n}\right) \in \mathbf{R}^{n-1} \times \mathbf{R}$ (which we assume are a rotation of the standard coordinates) so that
$$
\begin{aligned}
\partial \Omega \cap B_{2 r}(x) & =\left\{\left(x^{\prime}, x_{n}\right): x_{n}=\phi\left(x^{\prime}\right)\right\} \\
\Omega \cap B_{2 r}(x) & =\left\{\left(x^{\prime}, x_{n}\right): x_{n}>\phi\left(x^{\prime}\right)\right\} .
\end{aligned}
$$
Here, $\partial \Omega$ is the boundary of a set. We will need that the map $x \rightarrow\left(x^{\prime}, 2 \phi\left(x^{\prime}\right)-x_{n}\right)$ maps $\Omega \cap B_{r}(x)$ into $\bar{\Omega}^{c}$. This can always be arranged by decreasing $r$. We also will assume that $\nabla \phi$ is bounded in all of $\mathbf{R}^{n-1}$.

In these coordinates, we can define surface measure $d \sigma$ on the boundary by
$$
\int_{B_{r}(x) \cap \partial \Omega} f(y) d \sigma(y)=\int_{B_{r}(x) \cap\left\{y: y_{n}=\phi\left(y^{\prime}\right)\right\}} f\left(y^{\prime}, \phi\left(y^{\prime}\right)\right) \sqrt{1+\left|\nabla \phi\left(y^{\prime}\right)\right|^{2}} d y^{\prime} .
$$
Also, the vector field $\nu(y)=\left(\nabla \phi\left(y^{\prime}\right),-1\right)\left(1+\left|\nabla \phi\left(y^{\prime}\right)\right|^{2}\right)^{-1 / 2}$ defines a unit outer normal for $y \in B_{r}(x) \cap \partial \Omega$.

Since our domain is bounded, the boundary of $\Omega$ is a bounded, closed set and hence compact. Thus, we may always find a finite collection of balls $\left\{B_{r}\left(x_{i}\right): i=1, \ldots, N\right\}$ as above which cover $\partial \Omega$.

Many of our arguments will proceed more smoothly if we can divide the problem into pieces, choose a convenient coordinate system for each piece, and then make our calculations in this coordinate system. To carry out these arguments, we will need partitions of unity. Given a collection of sets $\left\{A_{\alpha}\right\}$ which are subsets of a topological space $X$, a partition of unity subordinate to $\left\{A_{\alpha}\right\}$ is a collection of real-valued functions $\left\{\phi_{\alpha}\right\}$ so that $\operatorname{supp} \phi_{\alpha} \subset A_{\alpha}$ and so that $\sum_{\alpha} \phi_{\alpha}=1$. Partitions of unity are used to take a problem and divide it into bits that can be more easily solved. For our purposes, the following will be useful.

Lemma 10.1 If $K$ is a compact subset of $\mathbf{R}^{n}$ and $\left\{U_{1}, \ldots, U_{N}\right\}$ is a collection of open sets which cover $K$, then we can find a collection of functions $\phi_{j}$ with each $\phi_{j}$ in $\mathcal{D}\left(U_{j}\right)$, $0 \leq \phi_{j} \leq 1$ and $\sum_{j=1}^{N} \phi_{j}=1$ on $K$.

Proof. By compactness, we can find a finite collection of balls $\left\{B_{k}\right\}_{k=1}^{M}$ so that each $\bar{B}_{k}$ lies in some $U_{j}$ and the balls cover $K$. If we let $\mathcal{F}=\cup_{k} \bar{B}_{k}$ be the union of the closures of the balls $B_{k}$, then the distance between $K$ and $\mathbf{R}^{n} \backslash \mathcal{F}$ is positive.
Hence, we can find finitely many more balls $\left\{B_{M+1}, \ldots, B_{M+L}\right\}$, which we add to our collection, that cover $\partial \mathcal{F}$ and are contained in $\mathbf{R}^{n} \backslash K$. We now let $\tilde{\eta}_{k}$ be the standard bump translated and rescaled to the ball $B_{k}$. Thus if $B_{k}=B_{r}(x)$, then $\tilde{\eta}_{k}(y)=\exp \left(-1 /\left(r^{2}-|y-x|^{2}\right)\right)$ in $B_{k}$ and 0 outside $B_{k}$. Finally, we put $\tilde{\eta}=\sum_{k=1}^{M+L} \tilde{\eta}_{k}$ and then $\eta_{k}=\tilde{\eta}_{k} / \tilde{\eta}$. Each $\eta_{k}$, $k=1, \ldots, M$, is smooth since $\tilde{\eta}$ is strictly positive on a neighborhood of $\mathcal{F}$. Then we have $\sum_{k=1}^{M} \eta_{k}=1$ on $K$ and we may group the $\eta_{k}$ to obtain one $\phi_{j}$ for each $U_{j}$.

The following important result is the Gauss divergence theorem. Recall that for a $\mathbf{C}^{n}$ valued function $F=\left(F_{1}, \ldots, F_{n}\right)$, the divergence of $F$ is defined by
$$
\operatorname{div} F=\sum_{j=1}^{n} \frac{\partial F_{j}}{\partial x_{j}} .
$$

Theorem 10.2 (Gauss divergence theorem) Let $\Omega$ be a $C^{1}$ domain and let $F: \Omega \rightarrow \mathbf{C}^{n}$ be in $C^{1}(\bar{\Omega})$. We have
$$
\int_{\partial \Omega} F(x) \cdot \nu(x) d \sigma(x)=\int_{\Omega} \operatorname{div} F(x) d x .
$$

The importance of this result may be gauged by the following observation: the theory of weak solutions of elliptic pde (and much of distribution theory) relies on making this result an axiom.

An important Corollary is the following version of Green's identity. In this Corollary and below, we should visualize the gradient of $u$, $\nabla u$, as a column vector so that the product $A \nabla u$ makes sense as a matrix product.
Corollary 10.3 If $\Omega$ is a $C^{1}$-domain, $v$ is in $C^{1}(\bar{\Omega})$, $u$ is in $C^{2}(\bar{\Omega})$ and $A(x)$ is an $n \times n$ matrix with $C^{1}(\bar{\Omega})$ entries, then
$$
\int_{\partial \Omega} v(x) A(x) \nabla u(x) \cdot \nu(x) d \sigma(x)=\int_{\Omega} A(x) \nabla u(x) \cdot \nabla v(x)+v(x) \operatorname{div} A(x) \nabla u(x) d x .
$$

Proof. Apply the divergence theorem to $v A \nabla u$.

Next, we define Sobolev spaces on open subsets of $\mathbf{R}^{n}$. Our definition is motivated by the result in Proposition 3.11. For $k$ a positive integer, we say that $u \in L_{k}^{2}(\Omega)$ if $u$ has weak or distributional derivatives of all orders $\alpha$ with $|\alpha| \leq k$ and these derivatives, $\partial^{\alpha} u / \partial x^{\alpha}$, lie in $L^{2}(\Omega)$. This means that for all test functions $\phi \in \mathcal{D}(\Omega)$, we have
$$
\int_{\Omega} u \frac{\partial^{\alpha}}{\partial x^{\alpha}} \phi(x) d x=(-1)^{|\alpha|} \int_{\Omega} \phi \frac{\partial^{\alpha} u}{\partial x^{\alpha}}(x) d x .
$$
The weak derivatives of $u$ are defined as we defined the derivatives of a tempered distribution. The differences are that the test functions are now supported in the open set $\Omega$, and in this instance we require that the derivative be a distribution given by a function. It should be clear how to define the norm in this space. In fact, we have that these spaces are Hilbert spaces with the inner product defined by
$$
\langle u, v\rangle=\int_{\Omega} \sum_{|\alpha| \leq k} \frac{\partial^{\alpha} u}{\partial x^{\alpha}} \frac{\partial^{\alpha} \bar{v}}{\partial x^{\alpha}} d x .
$$
We let $\|u\|_{L_{k}^{2}(\Omega)}$ be the corresponding norm.

Exercise 10.5 Show that if $\Omega$ is a bounded open set, then $C^{k}(\bar{\Omega}) \subset L_{k}^{2}(\Omega)$.
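The defining identity for weak derivatives can be illustrated numerically in one dimension: on $\Omega=(-1,1)$, the function $u(x)=|x|$ has weak derivative $\operatorname{sign}(x)$, since the integration-by-parts identity above (with $|\alpha|=1$) holds for every test function vanishing at the endpoints. A quadrature sketch (the test function and grid size are arbitrary choices):

```python
# Check int u phi' dx = - int u' phi dx on (-1, 1) for u(x) = |x| with weak
# derivative sign(x). The polynomial phi vanishes to second order at the
# endpoints, so the boundary terms vanish just as for a true test function.
import math

u = lambda x: abs(x)
du = lambda x: math.copysign(1.0, x)          # candidate weak derivative
phi = lambda x: (1 - x * x) ** 2 * (x + 0.5)
dphi = lambda x: -4 * x * (1 - x * x) * (x + 0.5) + (1 - x * x) ** 2

def integrate(g, a=-1.0, b=1.0, n=100000):    # midpoint rule
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

lhs = integrate(lambda x: u(x) * dphi(x))
rhs = -integrate(lambda x: du(x) * phi(x))
assert abs(lhs - rhs) < 1e-6
```

The kink of $|x|$ at the origin is exactly why the derivative must be taken in the weak sense; the identity still holds because $|x|$ is continuous there.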
Example 10.6 If $u$ is in the Sobolev space $L_{k}^{2}\left(\mathbf{R}^{n}\right)$ defined in Chapter 3 and $\Omega$ is an open set, then the restriction of $u$ to $\Omega$, call it $r u$, is in the Sobolev space $L_{k}^{2}(\Omega)$. If $\Omega$ has reasonable boundary ($C^{1}$ will do), then the restriction map $r: L_{k}^{2}\left(\mathbf{R}^{n}\right) \rightarrow L_{k}^{2}(\Omega)$ is onto. However, this may fail in general.

Exercise 10.7 a) Prove the product rule for weak derivatives. If $\phi$ is in $C^{k}(\bar{\Omega})$ and all the derivatives of $\phi$, $\partial^{\alpha} \phi / \partial x^{\alpha}$ with $|\alpha| \leq k$, are bounded, then we have that
$$
\frac{\partial^{\alpha}(\phi u)}{\partial x^{\alpha}}=\sum_{\beta+\gamma=\alpha} \frac{\alpha !}{\beta ! \gamma !} \frac{\partial^{\beta} \phi}{\partial x^{\beta}} \frac{\partial^{\gamma} u}{\partial x^{\gamma}} .
$$
b) If $\phi \in C^{k}(\bar{\Omega})$, conclude that the map $u \rightarrow \phi u$ takes $L_{k}^{2}(\Omega)$ to $L_{k}^{2}(\Omega)$ and is bounded.
c) If $\phi \in C^{1}(\bar{\Omega})$, show that the map $u \rightarrow \phi u$ maps $L_{1,0}^{2}(\Omega) \rightarrow L_{1,0}^{2}(\Omega)$.

Lemma 10.8 If $\Omega$ is a $C^{1}$ domain and $u$ is in the Sobolev space $L_{k}^{2}(\Omega)$, then we may write $u=\sum_{j=0}^{N} u_{j}$ where $u_{0}$ has support in a fixed (independent of $u$) compact subset of $\Omega$ and each $u_{j}$, $j=1, \ldots, N$, is supported in a ball $B_{r}(x)$ as in the definition of $C^{1}$ domain.

Proof. We cover the boundary $\partial \Omega$ by balls $\left\{B_{1}, \ldots, B_{N}\right\}$ as in the definition of $C^{1}$ domain. Then, $K=\Omega \backslash \cup_{k=1}^{N} B_{k}$ is a compact set whose distance from $\mathbf{R}^{n} \backslash \Omega$ is positive; call this distance $\delta$. Thus, we can find an open set $U_{0}=\{x \in \Omega: \operatorname{dist}(x, \partial \Omega)>\delta / 2\}$ which contains $K$ and is a positive distance from $\partial \Omega$.
We use Lemma 10.1 to make a partition of unity $1=\sum_{j=0}^{N} \eta_{j}$ for the open cover $\left\{U_{0}, B_{1}, \ldots, B_{N}\right\}$ of $\bar{\Omega}$ and then we decompose $u=\sum_{j=0}^{N} \eta_{j} u$. The product rule of exercise 10.7 allows us to conclude that each term $u_{j}=\eta_{j} u$ is in $L_{k}^{2}(\Omega)$.

Recall that we proved in Chapter 2 that smooth (Schwartz, actually) functions are dense in $L^{p}\left(\mathbf{R}^{n}\right)$, $1 \leq p<\infty$. One step of the argument involved considering the map
$$
\eta * u
$$
where $\eta$ is a Schwartz function with $\int \eta=1$. This approach may appear to break down when $u$ is only defined on an open subset of $\mathbf{R}^{n}$, rather than all of $\mathbf{R}^{n}$. However, we can make sense of the convolution in most of $\Omega$ if we require that $\eta$ have compact support. Thus, we let $\eta \in \mathcal{D}\left(\mathbf{R}^{n}\right)$ be supported in $B_{1}(0)$ and have $\int \eta=1$.

Lemma 10.9 Suppose $u$ is in the Sobolev space $L_{k}^{p}(\Omega)$, $1 \leq p<\infty$, for $k=0,1,2, \ldots$. Set $\Omega_{\epsilon}=\Omega \cap\{x: \operatorname{dist}(x, \partial \Omega)>\epsilon\}$. If we set $u_{\delta}=\eta_{\delta} * u$, then for $|\alpha| \leq k$, we have
$$
\frac{\partial^{\alpha}}{\partial x^{\alpha}} u_{\delta}=\left(\frac{\partial^{\alpha}}{\partial x^{\alpha}} u\right)_{\delta}, \quad \text { for } x \in \Omega_{\epsilon} \text { with } \delta<\epsilon .
$$
Hence, for each $\epsilon>0$, we have
$$
\lim _{\delta \rightarrow 0^{+}}\left\|u-u_{\delta}\right\|_{L_{k}^{p}\left(\Omega_{\epsilon}\right)}=0 .
$$

Proof. We assume that $u$ is defined to be zero outside of $\Omega$.
The convolution $u * \eta_{\delta}(x)$ is smooth in all of $\mathbf{R}^{n}$ and we may differentiate inside the integral and then express the $x$ derivatives as $y$ derivatives to find
$$
\frac{\partial^{\alpha}}{\partial x^{\alpha}}\left(u * \eta_{\delta}\right)(x)=\int_{\Omega} u(y) \frac{\partial^{\alpha}}{\partial x^{\alpha}} \eta_{\delta}(x-y) d y=(-1)^{|\alpha|} \int_{\Omega} u(y) \frac{\partial^{\alpha}}{\partial y^{\alpha}} \eta_{\delta}(x-y) d y .
$$
If we have $\delta<\epsilon$ and $x \in \Omega_{\epsilon}$, then $\eta_{\delta}(x-\cdot)$ will be in the space of test functions $\mathcal{D}(\Omega)$. Thus, we can apply the definition of weak derivative to conclude
$$
(-1)^{|\alpha|} \int_{\Omega} u(y) \frac{\partial^{\alpha}}{\partial y^{\alpha}} \eta_{\delta}(x-y) d y=\int_{\Omega}\left(\frac{\partial^{\alpha}}{\partial y^{\alpha}} u(y)\right) \eta_{\delta}(x-y) d y .
$$

Lemma 10.10 If $\Omega$ is a $C^{1}$-domain and $k=0,1,2, \ldots$, then $C^{\infty}(\bar{\Omega})$ is dense in $L_{k}^{2}(\Omega)$.

Proof. We may use Lemma 10.8 to reduce to the case when $u$ is zero outside $B_{r}(x) \cap \Omega$ for some ball centered at a boundary point $x$ and $\partial \Omega$ is given as a graph, $\left\{\left(x^{\prime}, x_{n}\right): x_{n}=\phi\left(x^{\prime}\right)\right\}$, in $B_{r}(x)$. We may translate $u$ to obtain $u_{\epsilon}(x)=u\left(x+\epsilon e_{n}\right)$. Since $u_{\epsilon}$ has weak derivatives in a neighborhood of $\Omega$, by the local approximation lemma, Lemma 10.9, we may approximate each $u_{\epsilon}$ by functions which are smooth up to the boundary of $\Omega$.

Lemma 10.11 If $\Omega$ and $\Omega^{\prime}$ are bounded open sets and $F: \Omega \rightarrow \Omega^{\prime}$ is $C^{1}(\bar{\Omega})$ and $F^{-1}: \Omega^{\prime} \rightarrow \Omega$ is also $C^{1}\left(\bar{\Omega}^{\prime}\right)$, then we have $u \in L_{1}^{2}\left(\Omega^{\prime}\right)$ if and only if $u \circ F \in L_{1}^{2}(\Omega)$.

Proof.
The result is true for smooth functions by the chain rule and the change of variables formulas of vector calculus. Note that our hypothesis that $F$ is invertible implies that the Jacobian is bounded above and below. The density result of Lemma 10.9 allows us to extend to the Sobolev space. Lemma 10.12 If $\Omega$ is a $C^{1}$-domain, then there exists an extension operator $E: L_{k}^{2}(\Omega) \rightarrow$ $L_{k}^{2}\left(\mathbf{R}^{n}\right)$. Proof. We sketch a proof when $k=1$. We will not use the more general result. The general result requires a more substantial proof. See the book of Stein [14], whose result has the remarkable feature that the extension operator is independent of $k$. For the case $k=1$, we may use a partition of unity to reduce to the case where $u$ is zero outside $B_{r}(x) \cap \Omega$ and $\partial \Omega$ is the graph $\left\{\left(x^{\prime}, x_{n}\right): x_{n}=\phi\left(x^{\prime}\right)\right\}$ in $B_{r}(x)$. By the density result, Lemma 10.10, we may assume that $u$ is smooth up to the boundary. 
Then we can define $E u$ by $$ E u(x)= \begin{cases}u(x), & x_{n}>\phi\left(x^{\prime}\right) \\ u\left(x^{\prime}, 2 \phi\left(x^{\prime}\right)-x_{n}\right), & x_{n}<\phi\left(x^{\prime}\right)\end{cases} $$ If $\psi$ is a test function in $\mathbf{R}^{n}$, then we can apply the divergence theorem in $\Omega$ and in $\mathbf{R}^{n} \backslash \bar{\Omega}$ to obtain that $$ \begin{aligned} \int_{\Omega} E u \frac{\partial \psi}{\partial x_{j}}+\psi \frac{\partial E u}{\partial x_{j}} d x & =\int_{\partial \Omega} \psi E u \nu \cdot e_{j} d \sigma \\ \int_{\mathbf{R}^{n} \backslash \bar{\Omega}} E u \frac{\partial \psi}{\partial x_{j}}+\psi \frac{\partial E u}{\partial x_{j}} d x & =-\int_{\partial \Omega} \psi E u \nu \cdot e_{j} d \sigma \end{aligned} $$ In the above expressions, the difference in sign is due to the fact that the normal which points out of $\Omega$ is the negative of the normal which points out of $\mathbf{R}^{n} \backslash \bar{\Omega}$. Adding these two expressions, we see that $E u$ has weak derivatives in $\mathbf{R}^{n}$. These weak derivatives are given by the (ordinary) derivative $\partial E u / \partial x_{j}$, which is defined except on $\partial \Omega$. In general, $E u$ will not have an ordinary derivative on $\partial \Omega$. Using Lemma 10.11, one can see that this extension operator is bounded. The full extension operator is obtained by taking a function $u$, writing $u=\sum_{j=0}^{N} \eta_{j} u$ as in Lemma 10.8, where the support of $\eta_{0}$ does not meet the boundary. For each $\eta_{j}$ whose support meets the boundary, we apply the local extension operator constructed above and then sum to obtain $E u=\eta_{0} u+\sum_{j=1}^{N} E\left(\eta_{j} u\right)$. Once we have defined the extension operator on smooth functions in $L_{1}^{2}$, then we can use the density result of Lemma 10.10 to define the extension operator on the full space. Next, we define an important subspace of $L_{1}^{2}(\Omega), L_{1,0}^{2}(\Omega)$. 
This space is the closure of $\mathcal{D}(\Omega)$ in the norm of $L_{1}^{2}(\Omega)$. The functions in $L_{1,0}^{2}(\Omega)$ will be defined to be the Sobolev functions which vanish on the boundary. Since a function $u$ in the Sobolev space is initially defined a.e., it is not obvious that we can define the restriction of $u$ to a lower dimensional subset. However, we saw in Chapter 9 that this is possible. We shall present a second proof below. The space $L_{1,0}^{2}(\Omega)$ will be defined as the space of functions which have zero boundary values. Remark: Some of you may be familiar with the spaces $L_{1}^{2}(\Omega)$ as $H^{1}(\Omega)$ and $L_{1,0}^{2}(\Omega)$ as $H_{0}^{1}(\Omega)$. We define the boundary values of a function in $L_{1}^{2}(\Omega)$ in the following way. We say that $u=v$ on $\partial \Omega$ if $u-v \in L_{1,0}^{2}(\Omega)$. Next, we define a space $L_{1 / 2}^{2}(\partial \Omega)$ to be the equivalence classes $[u]=u+L_{1,0}^{2}(\Omega)=\left\{v: v-u \in L_{1,0}^{2}(\Omega)\right\}$. Of course, we need a norm to do analysis. The norm is given by $$ \|u\|_{L_{1 / 2}^{2}(\partial \Omega)}=\inf \left\{\|v\|_{L_{1}^{2}(\Omega)}: u-v \in L_{1,0}^{2}(\Omega)\right\} $$ It is easy to see that this is a norm and the resulting space is a Banach space. It is less clear that $L_{1 / 2}^{2}(\partial \Omega)$ is a Hilbert space. However, if the reader recalls the proof of the projection theorem in Hilbert space, one may see that the space on the boundary, $L_{1 / 2}^{2}(\partial \Omega)$, can be identified with the orthogonal complement of $L_{1,0}^{2}(\Omega)$ in $L_{1}^{2}(\Omega)$ and thus inherits an inner product from $L_{1}^{2}(\Omega)$. This way of defining functions on the boundary may seem unsatisfyingly abstract to the analysts in the audience. The following result gives a concrete realization of the space. Proposition 10.14 Let $\Omega$ be a $C^{1}$-domain. 
The map $$ r: C^{1}(\bar{\Omega}) \rightarrow L^{2}(\partial \Omega) $$ which takes $\phi$ to the restriction of $\phi$ to the boundary, $r \phi$, satisfies $$ \|r \phi\|_{L^{2}(\partial \Omega)} \leq C\|\phi\|_{L_{1}^{2}(\Omega)} $$ and as a consequence extends continuously to $L_{1}^{2}(\Omega)$. Since $r\left(L_{1,0}^{2}(\Omega)\right)=0$, the map $r$ is well-defined on equivalence classes in $L_{1 / 2}^{2}(\partial \Omega)$ and gives a continuous injection $r: L_{1 / 2}^{2}(\partial \Omega) \rightarrow L^{2}(\partial \Omega)$. Exercise 10.15 Prove the above proposition. Exercise 10.16 If $\Omega$ is a $C^{1}$ domain, let $H$ be a space of functions $f$ on $\partial \Omega$ defined as follows. We say that $f \in H$ if for each ball $B_{r}(x)$ as in the definition of $C^{1}$ domains and each $\eta \in \mathcal{D}\left(B_{r}(x)\right)$, we have that $(\eta f)\left(y^{\prime}, \phi\left(y^{\prime}\right)\right)$ is in the space $L_{1 / 2}^{2}\left(\mathbf{R}^{n-1}\right)$ defined in Chapter 3. In the above, $\phi$ is the function whose graph describes the boundary of $\Omega$ near $x$. A norm in the space $H$ may be defined by fixing a covering of the boundary by balls as in the definition of $C^{1}$-domains, and then a partition of unity subordinate to this collection of balls, $\sum \eta_{k}$, and finally taking the sum $$ \sum_{k}\left\|\eta_{k} f\right\|_{L_{1 / 2}^{2}\left(\mathbf{R}^{n-1}\right)} $$ Show that $H=L_{1 / 2}^{2}(\partial \Omega)$. Hint: We do not have the tools to solve this problem. Thus this exercise is an excuse to indicate the connection without providing proofs. Lemma 10.17 If $\Omega$ is a $C^{1}$ domain and $u \in C^{1}(\bar{\Omega})$, then there is a constant $C$ so that $$ \int_{\partial \Omega}|u(x)|^{2} d \sigma(x) \leq C \int_{\Omega}|u(x)|^{2}+|\nabla u(x)|^{2} d x $$ Proof. 
According to the definition of a $C^{1}$-domain, we can find a finite collection of balls $\left\{B_{j}: j=1, \ldots, N\right\}$ and in each of these balls, a unit vector, $\alpha_{j}$, which satisfies $\alpha_{j} \cdot \nu \geq$ $\delta>0$ for some constant $\delta$. To do this, choose $\alpha_{j}$ to be $-e_{n}$ in the coordinate system which is used to describe the boundary near $B_{j}$. The lower bound will be $\delta=\min _{j}\left(1+\left\|\nabla \phi_{j}\right\|_{\infty}^{2}\right)^{-1 / 2}$ where $\phi_{j}$ is the function which defines the boundary near $B_{j}$. Using a partition of unity $\sum_{j} \eta_{j}$ subordinate to the family of balls $B_{j}$ which is 1 on $\partial \Omega$, we construct a vector field $$ \alpha(x)=\sum_{j=1}^{N} \eta_{j}(x) \alpha_{j} . $$ We have $\alpha(x) \cdot \nu(x) \geq \delta$ since each $\alpha_{j}$ satisfies this condition and each $\eta_{j}$ takes values in $[0,1]$. Thus, the divergence theorem gives $$ \begin{aligned} \delta \int_{\partial \Omega}|u(x)|^{2} d \sigma(x) & \leq \int_{\partial \Omega}|u(x)|^{2} \alpha(x) \cdot \nu(x) d \sigma(x) \\ & =\int_{\Omega}|u|^{2}(\operatorname{div} \alpha)+2 \operatorname{Re}(u(x) \alpha \cdot \nabla \bar{u}(x)) d x . \end{aligned} $$ Applying the Cauchy-Schwarz inequality proves the inequality of the lemma. The constant depends on $\Omega$ through the vector field $\alpha$ and its derivatives. Proof of Proposition 10.14. The proposition follows from the lemma. That the map $r$ can be extended from nice functions to all of $L_{1}^{2}(\Omega)$ depends on Lemma 10.10, which asserts that nice functions are dense in $L_{1}^{2}(\Omega)$. Exercise 10.18 Suppose that $\Omega$ is a $C^{1}$ domain. Show that if $\phi \in C^{1}(\bar{\Omega})$ and $\phi(x)=0$ on $\partial \Omega$, then $\phi$ is in the Sobolev space $L_{1,0}^{2}(\Omega)$. Finally, we extend the definition of one of the Sobolev spaces of negative order to domains. 
We define $L_{-1}^{2}(\Omega)$ to be the dual of the space $L_{1,0}^{2}(\Omega)$. As in the case of $\mathbf{R}^{n}$, the following simple lemma gives examples of elements in this space. Proposition 10.19 Assume $\Omega$ is an open set of finite measure, and $g$ and $f_{1}, \ldots, f_{n}$ are functions in $L^{2}(\Omega)$. Then $$ \phi \rightarrow \lambda(\phi)=\int_{\Omega} g(x) \phi(x)+\sum_{j=1}^{n} f_{j}(x) \frac{\partial \phi(x)}{\partial x_{j}} d x $$ is in $L_{-1}^{2}(\Omega)$. Proof. According to the Cauchy-Schwarz inequality, we have $$ |\lambda(\phi)| \leq\left(\int_{\Omega}|\phi(x)|^{2}+|\nabla \phi(x)|^{2} d x\right)^{1 / 2}\left(\int_{\Omega}|g(x)|^{2}+\sum_{j=1}^{n}\left|f_{j}(x)\right|^{2} d x\right)^{1 / 2} . $$ ### The weak Dirichlet problem In this section, we introduce elliptic operators. We let $A(x)$ be a function defined on an open set $\Omega$ and we assume that this function takes values in $n \times n$-matrices with real entries. We assume that each entry is Lebesgue measurable and that $A$ satisfies the symmetry condition $$ A^{t}=A $$ and the ellipticity condition, for some $\lambda>0$, $$ \lambda|\xi|^{2} \leq A(x) \xi \cdot \xi \leq \lambda^{-1}|\xi|^{2}, \quad \xi \in \mathbf{R}^{n}, x \in \Omega . $$ We say that $u$ is a local weak solution of the equation $\operatorname{div} A(x) \nabla u=f$ for $f \in L_{-1}^{2}(\Omega)$ if $u$ is in $L_{1, \mathrm{loc}}^{2}(\Omega)$ and for all test functions $\phi \in \mathcal{D}(\Omega)$, we have $$ -\int A(x) \nabla u(x) \cdot \nabla \phi(x) d x=f(\phi) $$ Since the derivatives of $u$ are locally in $L^{2}$, we can extend to test functions $\phi$ which are in $L_{1,0}^{2}(\Omega)$ and which have a representative that vanishes outside a compact subset of $\Omega$. However, let us resist the urge to introduce yet another space. Statement of the Dirichlet problem. The weak formulation of the Dirichlet problem is the following. 
Let $g \in L_{1}^{2}(\Omega)$ and $f \in L_{-1}^{2}(\Omega)$. We say that $u$ is a solution of the Dirichlet problem if the following three conditions hold: $$ \begin{gathered} u \in L_{1}^{2}(\Omega) \\ u-g \in L_{1,0}^{2}(\Omega) \\ -\int_{\Omega} A(x) \nabla u(x) \nabla \phi(x) d x=f(\phi) \quad \phi \in L_{1,0}^{2}(\Omega) . \end{gathered} $$ Note that both sides of the equation (10.24) are continuous in $\phi$ in the topology of $L_{1,0}^{2}(\Omega)$. Thus, we only need to require that this hold for $\phi$ in a dense subset of $L_{1,0}^{2}(\Omega)$. A more traditional way of writing the Dirichlet problem is, given $g$ and $f$, find $u$ which satisfies $$ \begin{cases}\operatorname{div} A \nabla u=f, & \text { in } \Omega \\ u=g, & \text { on } \partial \Omega\end{cases} $$ Our condition (10.24) is a restatement of the equation, $\operatorname{div} A \nabla u=f$. The condition (10.23) is a restatement of the boundary condition $u=g$. Finally, the condition (10.22) is needed to show that the solution is unique. Theorem 10.25 If $\Omega$ is an open set of finite measure and $g \in L_{1}^{2}(\Omega)$ and $f \in L_{-1}^{2}(\Omega)$, then there is exactly one weak solution to the Dirichlet problem, (10.22-10.24). There is a constant $C(\lambda, n, \Omega)$ so that the solution $u$ satisfies $$ \|u\|_{L_{1}^{2}(\Omega)} \leq C\left(\|g\|_{L_{1}^{2}(\Omega)}+\|f\|_{L_{-1}^{2}(\Omega)}\right) $$ Proof. Existence. If $u \in L_{1,0}^{2}(\Omega)$ and $n \geq 3$ then Hölder's inequality and then the Sobolev inequality of Theorem 8.20 imply $$ \int_{\Omega}|u(x)|^{2} d x \leq\left(\int_{\Omega}|u(x)|^{\frac{2 n}{n-2}} d x\right)^{1-\frac{2}{n}} m(\Omega)^{2 / n} \leq C m(\Omega)^{2 / n} \int_{\Omega}|\nabla u(x)|^{2} d x . 
$$ If $n=2$, the same result holds, though we need to be a bit more persistent and use Hölder's inequality, the Sobolev inequality and Hölder again to obtain: $$ \begin{aligned} \int_{\Omega}|u(x)|^{2} d x \leq\left(\int_{\Omega}|u(x)|^{4} d x\right)^{1 / 2} m(\Omega)^{1 / 2} & \leq C\left(\int_{\Omega}|\nabla u(x)|^{4 / 3} d x\right)^{3 / 2} m(\Omega)^{1 / 2} \\ & \leq C \int_{\Omega}|\nabla u(x)|^{2} d x \, m(\Omega) . \end{aligned} $$ Note that in each case, the application of the Sobolev inequality on $\mathbf{R}^{n}$ is allowed because $L_{1,0}^{2}(\Omega)$ may be viewed as a subspace of $L_{1}^{2}\left(\mathbf{R}^{n}\right)$ by extending functions on $\Omega$ to be zero outside $\Omega$. Thus we have $$ \|u\|_{L^{2}(\Omega)} \leq C m(\Omega)^{1 / n}\|\nabla u\|_{L^{2}(\Omega)} $$ Next, we observe that the ellipticity condition (10.21) implies that $$ \lambda \int_{\Omega}|\nabla u(x)|^{2} d x \leq \int_{\Omega} A(x) \nabla u(x) \nabla \bar{u}(x) d x \leq \lambda^{-1} \int_{\Omega}|\nabla u(x)|^{2} d x . $$ We claim the expression $$ \int_{\Omega} A(x) \nabla u(x) \nabla \bar{v}(x) d x $$ provides an inner product on $L_{1,0}^{2}(\Omega)$ which induces the same topology as the standard inner product on $L_{1,0}^{2}(\Omega) \subset L_{1}^{2}(\Omega)$ defined in (10.4). To see that the topologies are the same, it suffices to establish the inequalities $$ \int_{\Omega}|\nabla u(x)|^{2}+|u(x)|^{2} d x \leq \lambda^{-1}\left(1+C m(\Omega)^{2 / n}\right) \int_{\Omega} A(x) \nabla u(x) \nabla \bar{u}(x) d x $$ and that $$ \int_{\Omega} A(x) \nabla u(x) \nabla \bar{u}(x) d x \leq \lambda^{-1} \int_{\Omega}|\nabla u(x)|^{2} d x \leq \lambda^{-1} \int_{\Omega}|\nabla u(x)|^{2}+|u(x)|^{2} d x . $$ These both follow from the estimates (10.26) and (10.27). As a consequence, standard Hilbert space theory tells us that any continuous linear functional on $L_{1,0}^{2}(\Omega)$ can be represented using the inner product defined in (10.28). 
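As a quick sanity check on the Poincaré-type bound (10.26), one can compare $\|u\|_{L^{2}}$ with $m(\Omega)^{1/n}\|\nabla u\|_{L^{2}}$ numerically. The sketch below is not part of the proof; the one-dimensional setting, the grid, and the test functions are choices made here, and numpy is assumed.

```python
import numpy as np

# Omega = (0, 1) in dimension n = 1, so m(Omega)^(1/n) = 1.  Functions
# vanishing at the endpoints stand in for members of L^2_{1,0}(Omega).
h = 1e-4
x = np.arange(0.0, 1.0 + h, h)

def ratio(u):
    """Discrete ||u||_{L^2(0,1)} / ||u'||_{L^2(0,1)}."""
    du = np.diff(u) / h                   # difference quotients at midpoints
    return np.sqrt(np.sum(u * u) * h) / np.sqrt(np.sum(du * du) * h)

# The first Dirichlet eigenfunction sin(pi x) saturates the sharp constant 1/pi.
r1 = ratio(np.sin(np.pi * x))
r2 = ratio(x * (1.0 - x))
assert abs(r1 - 1.0 / np.pi) < 1e-3
assert r2 < 1.0 / np.pi + 1e-3            # no admissible u beats 1/pi
```

On the interval the sharp constant is $1/\pi$, attained by the first Dirichlet eigenfunction; the crude bound $C\,m(\Omega)^{1/n}$ in the text does not aim for this sharp value.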
We apply this to the functional $$ \phi \rightarrow-\int_{\Omega} A \nabla g \nabla \phi d x-f(\phi) $$ and conclude that there exists $v \in L_{1,0}^{2}(\Omega)$ so that $$ \int A(x) \nabla v(x) \nabla \phi(x) d x=-\int_{\Omega} A(x) \nabla g(x) \nabla \phi(x) d x-f(\phi), \quad \phi \in L_{1,0}^{2}(\Omega) . $$ Rearranging this expression, we can see that $u=g+v$ is a weak solution to the Dirichlet problem. Uniqueness. If we have two solutions of the Dirichlet problem $u_{1}$ and $u_{2}$, then their difference $w=u_{1}-u_{2}$ is a weak solution of the Dirichlet problem with $f=g=0$. In particular, $w$ is in $L_{1,0}^{2}(\Omega)$ and we can use $\bar{w}$ as a test function and conclude that $$ \int A(x) \nabla w(x) \cdot \nabla \bar{w}(x) d x=0 . $$ Thanks to the inequalities (10.26) and (10.27), we conclude that $$ \int_{\Omega}|w(x)|^{2} d x=0 $$ Hence, $u_{1}=u_{2}$. Stability. Finally, we establish the estimate for the solution. We replace the test function $\phi$ in (10.29) by $\bar{v}$. Using the Cauchy-Schwarz inequality gives $$ \int_{\Omega} A \nabla v \cdot \nabla \bar{v} d x \leq \lambda^{-1}\|v\|_{L_{1,0}^{2}(\Omega)}\|\nabla g\|_{L^{2}(\Omega)}+\|f\|_{L_{-1}^{2}(\Omega)}\|v\|_{L_{1,0}^{2}(\Omega)} $$ If we use that the left-hand side of this inequality dominates a multiple of the square of the norm in $L_{1,0}^{2}(\Omega)$ and cancel the common factor of $\|v\|_{L_{1,0}^{2}(\Omega)}$, we obtain that $$ \|v\|_{L_{1,0}^{2}(\Omega)} \leq C\left(\|g\|_{L_{1}^{2}(\Omega)}+\|f\|_{L_{-1}^{2}(\Omega)}\right) . $$ We have $u=v+g$ and the triangle inequality gives $$ \|u\|_{L_{1}^{2}(\Omega)} \leq\|g\|_{L_{1}^{2}(\Omega)}+\|v\|_{L_{1,0}^{2}(\Omega)} $$ so combining the last two inequalities implies the estimate of the theorem. Exercise 10.30 (Dirichlet's principle.) Let $g \in L_{1}^{2}(\Omega)$ and suppose that $f=0$ in the weak formulation of the Dirichlet problem. 
a) Show that the expression $$ I(u)=\int_{\Omega} A(x) \nabla u(x) \cdot \nabla \bar{u}(x) d x $$ attains a minimum value on the set $g+L_{1,0}^{2}(\Omega)=\left\{g+v: v \in L_{1,0}^{2}(\Omega)\right\}$. Hint: Use the foil method. This is a general fact in Hilbert space. b) If $u$ is a minimizer for $I$, then $u$ is a weak solution of the Dirichlet problem, $\operatorname{div} A \nabla u=0$ and $u=g$ on the boundary. c) Can you extend this approach to solve the general Dirichlet problem $\operatorname{div} A \nabla u=f$ in $\Omega$ and $u=g$ on the boundary? ## Chapter 11 ## Inverse Problems: Boundary identifiability ### The Dirichlet to Neumann map In this section, we introduce the Dirichlet to Neumann map. Recall the space $L_{1 / 2}^{2}(\partial \Omega)$ which was introduced in Chapter 10. We let $\Omega$ be a bounded open set and $A$ a matrix which satisfies the ellipticity condition. Given $f$ in $L_{1 / 2}^{2}(\partial \Omega)$, we let $u=u_{f}$ be the weak solution of the Dirichlet problem $$ \begin{cases}\operatorname{div} A \nabla u=0, & \text { on } \Omega \\ u=f, & \text { on } \partial \Omega .\end{cases} $$ Given $u \in L_{1}^{2}(\Omega)$ we can define a continuous linear functional on $L_{1}^{2}(\Omega)$ by $$ \phi \rightarrow \int_{\Omega} A(x) \nabla u(x) \nabla \phi(x) d x $$ If we recall the Green's identity (10.3), we see that if $u$ and $A$ are smooth, then $$ \int_{\partial \Omega} A(x) \nabla u(x) \cdot \nu(x) \phi(x) d \sigma(x)=\int_{\Omega} A(x) \nabla u(x) \nabla \phi(x)+\phi(x) \operatorname{div} A(x) \nabla u(x) d x . $$ Thus, if $u$ solves the equation $\operatorname{div} A \nabla u=0$, then it is reasonable to define $A \nabla u \cdot \nu$ as a linear functional on $L_{1 / 2}^{2}(\partial \Omega)$ by $$ A \nabla u \cdot \nu(\phi)=\int_{\Omega} A(x) \nabla u(x) \cdot \nabla \phi(x) d x . 
$$ We will show that this map is defined on $L_{1 / 2}^{2}(\partial \Omega)$. The expression $A \nabla u \cdot \nu$ is called the conormal derivative of $u$ at the boundary. Note that it is something of a miracle that we can make sense of this expression at the boundary. To appreciate that this is surprising, observe that we are not asserting that the full gradient of $u$ is defined at the boundary, only the particular component $A \nabla u \cdot \nu$. The gradient of $u$ may only be in $L^{2}(\Omega)$ and thus there is no reason to expect that any expression involving $\nabla u$ could make sense on the boundary, a set of measure zero. A potential problem is that this definition may depend on the representative of $\phi$ which is used to define the right-hand side of (11.2). Fortunately, this is not the case. Lemma 11.3 If $u \in L_{1}^{2}(\Omega)$ and $u$ is a weak solution of $\operatorname{div} A \nabla u=0$, then the value of $A \nabla u \cdot \nu(\phi)$ is independent of the extension of $\phi$ from $\partial \Omega$ to $\Omega$. The linear functional defined in (11.2) is a continuous linear functional on $L_{1 / 2}^{2}(\partial \Omega)$. Proof. To establish that $A \nabla u \cdot \nu$ is well defined, we will use that $u$ is a solution of $\operatorname{div} A \nabla u=0$. We choose $\phi_{1}, \phi_{2}$ in $L_{1}^{2}(\Omega)$ and suppose $\phi_{1}-\phi_{2} \in L_{1,0}^{2}(\Omega)$. According to the definition of weak solution, $$ \int_{\Omega} A(x) \nabla u(x) \cdot \nabla\left(\phi_{1}(x)-\phi_{2}(x)\right) d x=0 . $$ To establish the continuity, we need to choose a representative of $\phi$ which is close to the infimum in the definition of the $L_{1 / 2}^{2}$-norm (see (10.13)). Thus we need $\|\phi\|_{L_{1}^{2}(\Omega)} \leq$ $2\|r \phi\|_{L_{1 / 2}^{2}(\partial \Omega)}$. Here, $r \phi$ denotes the restriction of $\phi$ to the boundary. 
With this choice of $\phi$ and Cauchy-Schwarz we have $$ |A \nabla u \cdot \nu(\phi)| \leq C\|\nabla u\|_{L^{2}(\Omega)}\|\nabla \phi\|_{L^{2}(\Omega)} . $$ This inequality implies the continuity. We will define $L_{-1 / 2}^{2}(\partial \Omega)$ as the dual of the space $L_{1 / 2}^{2}(\partial \Omega)$. Now, we are ready to define the Dirichlet to Neumann map. This is a map $$ \Lambda_{A}: L_{1 / 2}^{2}(\partial \Omega) \rightarrow L_{-1 / 2}^{2}(\partial \Omega) $$ defined by $$ \Lambda_{A} f=A \nabla u \cdot \nu $$ where $u$ is the solution of the Dirichlet problem with boundary data $f$. The traditional goal in pde is to consider the direct problem. For example, given the coefficient matrix $A$, show that we can solve the Dirichlet problem. If we were more persistent, we could establish additional properties of the solution. For example, we could show that the map $A \rightarrow \Lambda_{A}$ is continuous on the set of strictly positive definite matrix valued functions on $\Omega$. However, that would be the easy way out. The more interesting and difficult problem is the inverse problem. Given the map $\Lambda_{A}$, can we recover the coefficient matrix $A$? That is, given some information about the solutions to a pde, can we recover the equation? The answer to the problem, as stated, is no, of course not. Exercise 11.4 Let $\Omega$ be a bounded domain and let $F: \Omega \rightarrow \Omega$ be a $C^{1}(\bar{\Omega})$ diffeomorphism that fixes a neighborhood of the boundary. Show that if $A$ gives an elliptic operator $\operatorname{div} A \nabla$ on $\Omega$, then there is an operator $\operatorname{div} B \nabla$ so that $$ \operatorname{div} A \nabla u=0 \Longleftrightarrow \operatorname{div} B \nabla(u \circ F)=0 $$ As a consequence, it is clear that $\Lambda_{A}=\Lambda_{B}$. Hint: See Lemma 11.10 below for the answer. Exercise 11.5 Show that the only obstruction to uniqueness is the change of variables described in the previous problem. 
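To make $\Lambda_{A}$ concrete, it helps to compute it in the one case where everything is explicit: $A=I$ on the unit disk. This example is not worked out in the text; it is the standard separation-of-variables computation. The harmonic extension of $e^{ik\theta}$ is $r^{|k|}e^{ik\theta}$, whose radial derivative at $r=1$ is $|k|e^{ik\theta}$, so $\Lambda$ acts on boundary data as the Fourier multiplier $|k|$. A numerical sketch (assuming numpy's FFT):

```python
import numpy as np

def dtn_disk(f):
    """Dirichlet-to-Neumann map for the Laplacian on the unit disk.

    The harmonic extension of exp(i k theta) is r^|k| exp(i k theta),
    whose normal derivative at r = 1 is |k| exp(i k theta), so Lambda
    is the Fourier multiplier |k| on boundary data.
    """
    fhat = np.fft.fft(f)
    k = np.fft.fftfreq(len(f), d=1.0 / len(f))   # integer frequencies
    return np.real(np.fft.ifft(np.abs(k) * fhat))

n = 256
theta = 2 * np.pi * np.arange(n) / n
f = np.cos(3 * theta) + 0.5 * np.sin(7 * theta)

# Lambda(cos 3t + 0.5 sin 7t) = 3 cos 3t + 3.5 sin 7t
expected = 3 * np.cos(3 * theta) + 3.5 * np.sin(7 * theta)
assert np.allclose(dtn_disk(f), expected)

# Lambda annihilates constants: the harmonic extension of 1 is 1.
assert np.allclose(dtn_disk(np.ones(n)), 0)
```

The multiplier picture already displays two features that the general map $\Lambda_{A}$ shares: it is a nonnegative, symmetric operator, and it loses half a derivative, mapping $L_{1/2}^{2}(\partial\Omega)$ into $L_{-1/2}^{2}(\partial\Omega)$.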
Remark: This has been solved in two dimensions by John Sylvester [16]. In three dimensions and above, this problem is open. Exercise 11.6 Prove that the map $A \rightarrow \Lambda_{A}$ is continuous on the set of strictly positive definite and bounded matrix-valued functions. That is, show that $$ \left\|\Lambda_{A}-\Lambda_{B}\right\|_{\mathcal{L}\left(L_{1 / 2}^{2}, L_{-1 / 2}^{2}\right)} \leq C_{\lambda}\|A-B\|_{\infty} $$ Here, $\|\cdot\|_{\mathcal{L}\left(L_{1 / 2}^{2}, L_{-1 / 2}^{2}\right)}$ denotes the norm on linear operators from $L_{1 / 2}^{2}$ to $L_{-1 / 2}^{2}$. a) As a first step, show that if we let $u_{A}$ and $u_{B}$ satisfy $\operatorname{div} A \nabla u_{A}=\operatorname{div} B \nabla u_{B}=0$ in an open set $\Omega$ and $u_{A}=u_{B}=f$ on $\partial \Omega$, then we have $$ \int_{\Omega}\left|\nabla u_{A}-\nabla u_{B}\right|^{2} d x \leq C\|f\|_{L_{1 / 2}^{2}(\partial \Omega)}^{2}\|A-B\|_{\infty}^{2} $$ Hint: We have $\operatorname{div} B \nabla u_{A}=\operatorname{div}(B-A) \nabla u_{A}$ since $u_{A}$ is a solution. b) Conclude the estimate above on the Dirichlet to Neumann maps. However, there is a restricted version of the inverse problem which can be solved. In the remainder of these notes, we will concentrate on elliptic operators when the matrix $A$ is of the form $A(x)=\gamma(x) I$ where $I$ is the $n \times n$ identity matrix and $\gamma(x)$ is a scalar function which satisfies $$ \lambda \leq \gamma(x) \leq \lambda^{-1} $$ for some constant $\lambda>0$. We change notation a bit and let $\Lambda_{\gamma}$ be the Dirichlet to Neumann map for the operator $\operatorname{div} \gamma \nabla$. Then the inverse conductivity problem can be formulated as the following question: $$ \text { Is the map } \gamma \rightarrow \Lambda_{\gamma} \text { injective? } $$ We will answer this question with a yes, if the dimension $n \geq 3$ and we have some reasonable smoothness assumptions on the domain and $\gamma$. This is a theorem of J. Sylvester and G. 
Uhlmann [17]. Closely related work was done by Henkin and R. Novikov at about the same time $[5,9]$. One can also ask for a more or less explicit construction of the inverse map. A construction is given in the work of Novikov and the work of A. Nachman [7] for three dimensions and [8] in two dimensions. This last paper also gives the first proof of injectivity in two dimensions. My favorite contribution to this story is in [2]. But this is not the place for a complete history. We take a moment to explain the appearance of the word conductivity in the above. For this discussion, we will assume that the functions $u$ and $\gamma$ are smooth. The problem we are considering is a mathematical model for the problem of determining the conductivity $\gamma$ by making measurements of current and voltage at the boundary. To try and explain this, we suppose that $u$ represents the voltage potential in $\Omega$ and then $\nabla u$ is the electric field. The electric field is what makes electrons flow and thus we assume that the current is proportional to the electric field, $J=\gamma \nabla u$ where the conductivity $\gamma$ is the constant of proportionality. Since we assume that charge is conserved, for each subregion $B \subset \Omega$, the net flow of electrons or current through $\partial B$ must be zero. Thus, $$ 0=\int_{\partial B} \gamma \nabla u(x) \cdot \nu(x) d \sigma(x) $$ The divergence theorem gives that $$ 0=\int_{\partial B} \gamma(x) \nabla u(x) \cdot \nu(x) d \sigma(x)=\int_{B} \operatorname{div} \gamma(x) \nabla u(x) d x $$ Finally, since the integral on the right vanishes, say, for each ball $B \subset \Omega$, we can conclude that $\operatorname{div} \gamma \nabla u=0$ in $\Omega$. ### Identifiability Our solution of the inverse conductivity problem has two steps. The first is to show that the Dirichlet to Neumann map determines $\gamma$ on the boundary. 
The second step is to use the knowledge of $\gamma$ on the boundary to relate the inverse conductivity problem to a problem in all of $\mathbf{R}^{n}$ which turns out to be a type of scattering problem. We will use the results of Chapter 9 to study this problem in $\mathbf{R}^{n}$. Theorem 11.8 Suppose that $\partial \Omega$ is $C^{1}$. If $\gamma$ is in $C^{0}(\bar{\Omega})$ and satisfies (11.7), then for each $x \in \partial \Omega$, there exists a sequence of functions $u_{N}$ so that $$ \gamma(x)=\lim _{N \rightarrow \infty} \Lambda_{\gamma} u_{N}\left(\bar{u}_{N}\right) $$ Theorem 11.9 Suppose $\Omega$ and $\gamma$ are as in the previous theorem and also $\partial \Omega$ is $C^{2}$ and $\gamma$ is in $C^{1}(\bar{\Omega})$. If $e$ is a constant vector and $u_{N}$ is as in the previous theorem, then we have $$ \nabla \gamma(x) \cdot e=\lim _{N \rightarrow \infty} \int_{\partial \Omega}\left(\gamma(x)\left|\nabla u_{N}(x)\right|^{2} e \cdot \nu(x)-2 \operatorname{Re} \gamma(x) \frac{\partial u_{N}}{\partial \nu}(x) e \cdot \nabla \bar{u}_{N}(x)\right) d \sigma $$ The construction of the solutions $u_{N}$ proceeds in two steps. The first step is to write down an explicit function which is an approximate solution and show that the conclusion of our Theorem holds for this function. The second step is to show that we really do have an approximate solution. This is not deep, but requires a certain amount of persistence. I say that the result is not deep because it relies only on estimates which are a byproduct of our existence theory in Theorem 10.25. In the construction of the solution, it will be convenient to change coordinates so that in the new coordinates, the boundary is flat. The following lemma keeps track of how the operator $\operatorname{div} \gamma \nabla$ transforms under a change of variables. Lemma 11.10 Let $A$ be an elliptic matrix and let $F: \Omega^{\prime} \rightarrow \Omega$ be a $C^{1}\left(\bar{\Omega}^{\prime}\right)$-diffeomorphism. 
Then we have that $\operatorname{div} A \nabla u=0$ if and only if $\operatorname{div} B \nabla(u \circ F)=0$, where $$ B(y)=|\operatorname{det} D F(y)| D F^{-1}(F(y))^{t} A(F(y)) D F^{-1}(F(y)) . $$ Proof. The proof of this lemma indicates one of the advantages of the weak formulation of the equation. Since the weak formulation only involves one derivative, we only need to use the chain rule once. We use the chain rule to compute $$ \nabla u(x)=\nabla\left(u\left(F\left(F^{-1}(x)\right)\right)\right)=D F^{-1}(x) \nabla(u \circ F)\left(F^{-1}(x)\right) . $$ This is valid for Sobolev functions also by approximation (see Lemma 10.11). We insert this expression for the gradient and make a change of variables $x=F(y)$ to obtain $$ \begin{aligned} \int_{\Omega} & A(x) \nabla u(x) \cdot \nabla \phi(x) d x \\ \quad & =\int_{\Omega^{\prime}} A(F(y)) D F^{-1}(F(y)) \nabla(u \circ F(y)) \cdot D F^{-1}(F(y)) \nabla(\phi \circ F(y))|\operatorname{det} D F(y)| d y \\ & =\int_{\Omega^{\prime}}|\operatorname{det} D F(y)| D F^{-1}(F(y))^{t} A(F(y)) D F^{-1}(F(y)) \nabla(u \circ F(y)) \cdot \nabla(\phi \circ F(y)) d y . \end{aligned} $$ This last integral is the weak formulation of the equation $\operatorname{div} B \nabla(u \circ F)=0$ with the test function $\phi \circ F$. To finish the proof, one must convince oneself that the map $\phi \rightarrow \phi \circ F$ is an isomorphism ${ }^{1}$ from $L_{1,0}^{2}(\Omega)$ to $L_{1,0}^{2}\left(\Omega^{\prime}\right)$. Exercise 11.11 Figure out how to index the matrix $D F^{-1}$ so that in the application of the chain rule in the previous Lemma, the product $D F^{-1} \nabla(u \circ F)$ is matrix multiplication. Assume that the gradient is a column vector. Solution The chain rule reads $$ \frac{\partial}{\partial x_{i}} u \circ G=\frac{\partial G_{j}}{\partial x_{i}} \frac{\partial u}{\partial x_{j}} \circ G . 
$$ Thus, we want $$ (D G)_{i j}=\frac{\partial G_{j}}{\partial x_{i}} $$ In the rest of this chapter, we fix a point $x$ on the boundary and choose coordinates so that $x$ is the origin. Thus, we suppose that we are trying to find the value of $\gamma$ and $\nabla \gamma$ at 0. We assume that $\partial \Omega$ is $C^{1}$ near 0 and thus we have a ball $B_{r}(0)$ so that $B_{2 r}(0) \cap \partial \Omega=\left\{\left(x^{\prime}, x_{n}\right): x_{n}=\phi\left(x^{\prime}\right)\right\} \cap B_{2 r}(0)$. We let $x=F\left(y^{\prime}, y_{n}\right)=\left(y^{\prime}, \phi\left(y^{\prime}\right)+y_{n}\right)$. Note that we assume that the function $\phi$ is defined in all of $\mathbf{R}^{n-1}$ and thus, the map $F$ is invertible on all of $\mathbf{R}^{n}$. In the coordinates $\left(y^{\prime}, y_{n}\right)$, the equation $\operatorname{div} \gamma \nabla u=0$ takes the form $$ \operatorname{div} A \nabla u=0 $$ with $A(y)=\gamma(y) B(y)$. (Strictly speaking, this is $\gamma(F(y))$. However, to simplify the notation, we will use $\gamma(z)$ to represent the value of $\gamma$ at the point corresponding to $z$ in the current coordinate system. This is a fairly common convention. To carry it out precisely would require yet another chapter that we don't have time for...)

${ }^{1}$ An isomorphism for Banach (or Hilbert) spaces is an invertible linear map with continuous inverse. A map which also preserves the norm is called an isometry.

The matrix $B$ depends on $\phi$ and, by the above lemma, takes the form $$ B(y)=\left(\begin{array}{rr} 1_{n-1} & -\nabla \phi\left(y^{\prime}\right) \\ -\nabla \phi\left(y^{\prime}\right)^{t} & 1+\left|\nabla \phi\left(y^{\prime}\right)\right|^{2} \end{array}\right) . $$ Apparently, we are writing the gradient as a column vector. The domain $\Omega^{\prime}$ has 0 on the boundary and near $0, \partial \Omega^{\prime}$ lies in the hyperplane $y_{n}=0$ and $\Omega^{\prime}$ lies in the region $y_{n}>0$. 
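The block form of $B$ can be checked directly against the formula of Lemma 11.10. The sketch below does this numerically at one point, with a randomly chosen value of $\nabla\phi$ standing in for the gradient of the boundary function; numpy is assumed, and we use the index convention of Exercise 11.11, under which $DF$ is the transpose of the usual Jacobian.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
g = rng.standard_normal(n - 1)        # stands in for grad phi(y') at one point

# Usual Jacobian of F(y', y_n) = (y', phi(y') + y_n): last row is (grad phi, 1).
J = np.eye(n)
J[n - 1, :n - 1] = g

# Exercise 11.11 convention: (DF)_{ij} = dF_j / dy_i, so DF = J^t.
DF = J.T
A = np.eye(n)                          # the case gamma = 1
detJ = np.linalg.det(DF)               # equals 1 for this map
Dinv = np.linalg.inv(DF)               # D(F^{-1}) at F(y) is (DF(y))^{-1}
B = detJ * Dinv.T @ A @ Dinv

# The block matrix claimed in the text.
B_claimed = np.eye(n)
B_claimed[:n - 1, n - 1] = -g
B_claimed[n - 1, :n - 1] = -g
B_claimed[n - 1, n - 1] = 1 + g @ g

assert np.allclose(B, B_claimed)
assert np.isclose(detJ, 1.0)
```

Since $\det DF \equiv 1$, flattening the boundary does not change volumes, and $B$ is a rank-one perturbation of the identity built from $\nabla\phi$.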
We introduce a real-valued cutoff function $\eta(y)=f\left(y^{\prime}\right) g\left(y_{n}\right)$ which is normalized so that $$ \int_{\mathbf{R}^{n-1}} f\left(y^{\prime}\right)^{2} d y^{\prime}=1 $$ and so that $g\left(y_{n}\right)=1$ if $\left|y_{n}\right|<1$ and $g\left(y_{n}\right)=0$ if $\left|y_{n}\right|>2$. Our next step is to set $\eta_{N}(y)=N^{(n-1) / 4} \eta\left(N^{1 / 2} y\right)$. We choose a vector $\alpha \in \mathbf{R}^{n}$ which satisfies $$ \begin{aligned} B(0) \alpha \cdot e_{n} & =0 \\ B(0) \alpha \cdot \alpha & =B(0) e_{n} \cdot e_{n} . \end{aligned} $$ We define $E_{N}$ by $$ E_{N}(y)=N^{-1 / 2} \exp \left(-N\left(y_{n}+i \alpha \cdot y\right)\right) $$ and then we put $$ v_{N}(y)=\eta_{N}(y) E_{N}(y) . $$ The function $v_{N}$ is our approximate solution. The main facts that we need to prove about $v_{N}$ are in Lemma 11.16 and Lemma 11.19 below. Lemma 11.16 With $v_{N}$ and $\Omega^{\prime}$ as above, $$ \lim _{N \rightarrow \infty}\left\|\operatorname{div} \gamma B \nabla v_{N}\right\|_{L_{-1}^{2}\left(\Omega^{\prime}\right)}=0 $$ To visualize why this might be true, observe that $E_{N}$ is a solution of the equation with constant coefficients $B(0)$. The cutoff function oscillates less rapidly than $E_{N}$ (consider the relative size of the gradients), so it introduces an error that is negligible for $N$ large; at the same time it localizes $v_{N}$ near the origin, which allows us to disregard the fact that $E_{N}$ fails to solve the variable-coefficient equation away from the origin. Our proof will require yet more lemmas. The function $v_{N}$ is concentrated near the boundary. In the course of making estimates, we will need to consider integrals pairing $v_{N}$ and its derivatives against functions which are in $L_{1,0}^{2}(\Omega)$. To make optimal estimates, we will want to exploit the fact that functions in $L_{1,0}^{2}(\Omega)$ are small near the boundary. The next estimate, a version of Hardy's inequality, makes this precise. 
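Incidentally, the assertion above that $E_{N}$ solves the constant-coefficient equation exactly is a two-line computation (added here for completeness; it is not spelled out in the text). Writing $E_{N}(y)=N^{-1 / 2} e^{w \cdot y}$ with $w=-N\left(e_{n}+i \alpha\right)$, for any constant symmetric matrix $B(0)$ we have

$$ \operatorname{div} B(0) \nabla e^{w \cdot y}=(B(0) w \cdot w)\, e^{w \cdot y}, \qquad B(0) w \cdot w=N^{2}\left(B(0) e_{n} \cdot e_{n}+2 i\, B(0) \alpha \cdot e_{n}-B(0) \alpha \cdot \alpha\right), $$

and the two conditions imposed on $\alpha$ make both the real and imaginary parts of the right-hand side vanish, so $\operatorname{div} B(0) \nabla E_{N}=0$.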
If we have not already made this definition, then we define $$ \delta(x)=\inf _{y \in \partial \Omega}|x-y| $$ The function $\delta$ gives the distance from $x$ to the boundary of $\Omega$. Lemma 11.17 (Hardy's inequality) a) Let $f$ be a $C^{1}$ function on the real line and suppose that $f(0)=0$, then for $1<p \leq \infty$, $$ \int_{0}^{\infty}\left|\frac{f(t)}{t}\right|^{p} d t \leq\left(p^{\prime}\right)^{p} \int_{0}^{\infty}\left|f^{\prime}(t)\right|^{p} d t $$ b) If $f$ is in $L_{1,0}^{2}(\Omega)$, then $$ \int_{\Omega}\left|\frac{f(x)}{\delta(x)}\right|^{2} d x \leq C \int_{\Omega}|\nabla f(x)|^{2}+|f(x)|^{2} d x . $$ Proof. a) We prove the one-dimensional result with $p<\infty$ first. We use the fundamental theorem of calculus to write $$ f(t)=-\int_{0}^{t} f^{\prime}(s) d s $$ Now, we confuse the issue by rewriting this as $$ \begin{aligned} -t^{\frac{1}{p}-1} f(t) & =\int(t / s)^{-1 / p^{\prime}} \chi_{(1, \infty)}(t / s) s^{1 / p} f^{\prime}(s) \frac{d s}{s} \\ & =\int K(t / s) s^{1 / p} f^{\prime}(s) \frac{d s}{s} \end{aligned} $$ where $K(u)=u^{-1 / p^{\prime}} \chi_{(1, \infty)}(u)$. A computation shows that $$ \int_{0}^{\infty} K(t / s) \frac{d s}{s}=\int_{0}^{\infty} K(t / s) \frac{d t}{t}=p^{\prime} $$ which will be finite if $p>1$. Thus, by exercise 4.5 we have that $g \rightarrow \int K(t / s) g(s) d s / s$ maps $L^{p}(d s / s)$ into itself with norm at most $p^{\prime}$. Using this in (11.18) gives $$ \left(\int_{0}^{\infty}\left|\frac{f(t)}{t}\right|^{p} d t\right)^{1 / p} \leq p^{\prime}\left(\int_{0}^{\infty}\left|f^{\prime}(t)\right|^{p} d t\right)^{1 / p} . $$ This is what we wanted to prove. The remaining case $p=\infty$, where the $L^{p}$ norms must be replaced by $L^{\infty}$ norms, is easy and thus omitted. b) Since $\mathcal{D}(\Omega)$ is dense in $L_{1,0}^{2}(\Omega)$, it suffices to consider functions in $\mathcal{D}(\Omega)$. 
By a partition of unity, as in Lemma 10.8, we can further reduce to a function $f$ which is compactly supported in $B_{r}(x) \cap \Omega$, for some ball centered at $x$ on the boundary, or to a function $f$ which is supported at a fixed distance away from the boundary. In the first case, we have that $\partial \Omega$ is given by the graph $\left\{\left(y^{\prime}, y_{n}\right): y_{n}=\phi\left(y^{\prime}\right)\right\}$ near $x$. Applying the one-dimensional result in the $y_{n}$ variable and then integrating in the remaining variables, we may conclude that $$ \int_{\Omega \cap B_{r}(x)} \frac{|f(y)|^{2}}{\left(y_{n}-\phi\left(y^{\prime}\right)\right)^{2}} d y \leq 4 \int_{\Omega \cap B_{r}(x)}\left|\frac{\partial f}{\partial y_{n}}(y)\right|^{2} d y . $$ This is the desired inequality once we convince ourselves that $\left(y_{n}-\phi\left(y^{\prime}\right)\right) / \delta(y)$ is bounded above and below in $B_{r}(x) \cap \Omega$. The second case, where $f$ is supported strictly away from the boundary, is an easy consequence of the Sobolev inequality, Theorem 8.20, because $1 / \delta(x)$ is bounded above on each compact subset of $\Omega$. The following Lemma will be useful in obtaining the properties of the approximate solutions and may serve to explain some of the peculiar normalizations in the definition. Lemma 11.19 Let $v_{N}, E_{N}$ and $\eta_{N}$ be as defined in (11.15). Let $\beta$ be continuous at 0 . Then $$ \lim _{N \rightarrow \infty} N \int_{\Omega^{\prime}} \beta(y)\left|\eta_{N}(y)\right|^{2} e^{-2 N y_{n}} d y=\beta(0) / 2 . $$ If $k>-1$ and $\tilde{\eta} \in \mathcal{D}\left(\mathbf{R}^{n}\right)$, then for $N$ sufficiently large there is a constant $C$ so that $$ \left|\int_{\Omega^{\prime}} \delta(y)^{k} \tilde{\eta}\left(N^{1 / 2} y\right) e^{-2 N y_{n}} d y\right| \leq C N^{\frac{1-n}{2}-1-k} $$ Proof. 
To prove the first statement, we observe that by the definition and the normalization of the cutoff function, $f$, in (11.12) we have that $$ \begin{aligned} \int_{\Omega^{\prime}} \eta_{N}(y)^{2} e^{-2 N y_{n}} d y= & N^{\frac{n-1}{2}} \int_{\left\{y: y_{n}>0\right\}} f\left(N^{1 / 2} y^{\prime}\right)^{2} e^{-2 N y_{n}} d y \\ & +N^{\frac{n-1}{2}} \int_{\left\{y: y_{n}>0\right\}}\left(g\left(N^{1 / 2} y_{n}\right)^{2}-1\right) f\left(N^{1 / 2} y^{\prime}\right)^{2} e^{-2 N y_{n}} d y \end{aligned} $$ The first integral is $1 /(2 N)$ and the second is bounded by a multiple of $(2 N)^{-1} e^{-2 N^{1 / 2}}$. The estimate of the second depends on our assumption that $g(t)=1$ for $t<1$. Thus, we have that $$ \lim _{N \rightarrow \infty} N \int_{\Omega^{\prime}} \eta_{N}(y)^{2} e^{-2 N y_{n}} d y=1 / 2 $$ Using this to express the $\frac{1}{2}$ as a limit gives $$ \begin{aligned} \left|\frac{1}{2} \beta(0)-\lim _{N \rightarrow \infty} N \int_{\Omega^{\prime}} \beta(y) \eta_{N}(y)^{2} e^{-2 N y_{n}} d y\right| \leq & \lim _{N \rightarrow \infty} N \int_{\Omega^{\prime}}|\beta(0)-\beta(y)| \\ & \times \eta_{N}(y)^{2} e^{-2 N y_{n}} d y \\ \leq & \lim _{N \rightarrow \infty} \sup _{\left\{y:|y|<2^{1 / 2} N^{-1 / 2}\right\}} \frac{1}{2}|\beta(0)-\beta(y)| \\ & \times N \int_{\Omega^{\prime}} \eta_{N}(y)^{2} e^{-2 N y_{n}} d y . \end{aligned} $$ Now the continuity of $\beta$ implies that this last limit is 0 . The inequalities in the second statement follow easily, by observing that for $N$ sufficiently large, we have $\delta(y)=y_{n}$ on the support of $\tilde{\eta}\left(N^{1 / 2} y\right)$. 
If $\operatorname{supp} \tilde{\eta} \subset B_{R}(0)$, then we can estimate our integral by $$ \begin{aligned} \left|\int_{\Omega^{\prime}} \delta(y)^{k} \tilde{\eta}\left(N^{1 / 2} y\right) e^{-2 N y_{n}} d y\right| & \leq\|\tilde{\eta}\|_{\infty} \int_{\left\{y^{\prime}:\left|y^{\prime}\right|<N^{-1 / 2} R\right\}} \int_{0}^{\infty} y_{n}^{k} e^{-2 N y_{n}} d y^{\prime} d y_{n} \\ & \leq C N^{\frac{1-n}{2}-1-k} \end{aligned} $$ We can now state and prove Lemma 11.22. Lemma 11.22 With $\Omega^{\prime}$ and $v_{N}$ as above, suppose $\beta$ is a bounded function on $\Omega^{\prime}$ which is continuous at 0 , then $$ \lim _{N \rightarrow \infty} \int_{\Omega^{\prime}} \beta(y) B(y) \nabla v_{N}(y) \cdot \nabla \bar{v}_{N}(y) d y=\beta(0) B(0) e_{n} \cdot e_{n} $$ Proof. Using the product rule, expanding the square, and using that $\eta_{N}$ is real valued gives $$ \begin{aligned} \int_{\Omega^{\prime}} \beta(y) B(y) \nabla v_{N}(y) \cdot \nabla \bar{v}_{N}(y) d y= & N \int \beta(y)\left(B(y) \alpha \cdot \alpha+B(y) e_{n} \cdot e_{n}\right) \eta_{N}(y)^{2} e^{-2 N y_{n}} d y \\ & -2 \int \beta(y)\left(B(y) \nabla \eta_{N}(y) \cdot e_{n}\right) \eta_{N}(y) e^{-2 N y_{n}} d y \\ & +N^{-1} \int_{\Omega^{\prime}} \beta(y) B(y) \nabla \eta_{N}(y) \cdot \nabla \eta_{N}(y) e^{-2 N y_{n}} d y \\ = & I+I I+I I I . \end{aligned} $$ By (11.20) of our Lemma 11.19, we have that $$ \lim _{N \rightarrow \infty} I=\beta(0)\left(B(0) e_{n} \cdot e_{n}\right) $$ where we have used (11.14) to replace $B(0) \alpha \cdot \alpha$ by $B(0) e_{n} \cdot e_{n}$. The integral $I I$ can be bounded by $$ |I I| \leq 2 N^{\frac{n}{2}}\|\beta B\|_{\infty} \int_{\Omega^{\prime}}\left|\left(\nabla \eta_{1}\right)\left(N^{1 / 2} y\right) \eta_{1}\left(N^{1 / 2} y\right)\right| e^{-2 N y_{n}} d y \leq C N^{-1 / 2} $$ Here, we are using the second part of Lemma 11.19, (11.21). The observant reader will note that we have taken the norm of the matrix $B$ in the above estimate. 
The estimate above holds if matrices are normed with the operator norm, though since we do not care about the exact value of the constant, it does not matter so much how matrices are normed. Finally, the estimate for $I I I$ also follows from (11.21) in Lemma 11.19 as follows: $$ I I I \leq N^{\frac{n-1}{2}}\|\beta B\|_{\infty} \int_{\Omega^{\prime}}\left|\left(\nabla \eta_{1}\right)\left(N^{1 / 2} y\right)\right|^{2} e^{-2 N y_{n}} d y \leq C N^{-1} $$ The conclusion of the Lemma follows from (11.23-11.25). Now, we can make precise our assertion that $v_{N}$ is an approximate solution of the equation $\operatorname{div} A \nabla v=0$. Lemma 11.26 With $v_{N}$ and $\Omega^{\prime}$ as above, $$ \lim _{N \rightarrow \infty}\left\|\operatorname{div} A \nabla v_{N}\right\|_{L_{-1}^{2}\left(\Omega^{\prime}\right)}=0 $$ Proof. We compute and use that $\operatorname{div} A(0) \nabla E_{N}=0$ to obtain $$ \begin{aligned} \operatorname{div} A(y) \nabla v_{N}(y)= & \operatorname{div}(A(y)-A(0)) \nabla v_{N}(y)+\operatorname{div} A(0) \nabla v_{N}(y) \\ = & \operatorname{div}(A(y)-A(0)) \nabla v_{N}(y) \\ & \quad+2 A(0) \nabla \eta_{N}(y) \cdot \nabla E_{N}(y)+E_{N}(y) \operatorname{div} A(0) \nabla \eta_{N}(y) \\ = & I+I I+I I I . \end{aligned} $$ In the term $I$, the divergence must be interpreted as a weak derivative. To estimate the norm in $L_{-1}^{2}(\Omega)$, we must pair each of $I$ through $I I I$ with a test function $\psi$. With $I$, we use the definition of weak derivative and recall that $\eta_{N}$ is supported in a small ball to obtain $$ \begin{aligned} |I(\psi)| & =\left|\int(A(y)-A(0)) \nabla v_{N}(y) \cdot \nabla \psi(y) d y\right| \\ & \leq \sup _{|y|<2^{3 / 2} N^{-1 / 2}}|A(y)-A(0)|\left\|\nabla v_{N}\right\|_{L^{2}(\Omega)}\|\nabla \psi\|_{L^{2}(\Omega)} \end{aligned} $$ This last expression goes to zero with $N$ because $A$ is continuous at 0 and, according to Lemma 11.22, the $L^{2}(\Omega)$ norm of the gradient of $v_{N}$ remains bounded as $N \rightarrow \infty$. 
To make estimates for $I I$, we multiply and divide by $\delta(y)$, use the Cauchy-Schwarz inequality, the Hardy inequality, Lemma 11.17, and then (11.21): $$ \begin{aligned} |I I(\psi)| & =\left|\int_{\Omega^{\prime}} 2 A(0) \nabla \eta_{N}(y) \cdot \nabla E_{N}(y) \psi(y) d y\right| \\ & \leq\left(\int_{\Omega^{\prime}}\left|\frac{\psi(y)}{\delta(y)}\right|^{2} d y\right)^{1 / 2}\left(N^{\frac{n+3}{2}} \int_{\Omega^{\prime}} \delta(y)^{2}\left|\left(\nabla \eta_{1}\right)\left(N^{1 / 2} y\right)\right|^{2} e^{-2 N y_{n}} d y\right)^{1 / 2} \\ & \leq C N^{-1 / 2}\|\psi\|_{L_{1,0}^{2}\left(\Omega^{\prime}\right)} . \end{aligned} $$ Finally, we make estimates for the third term $$ \begin{aligned} |I I I(\psi)| & =\left|\int E_{N}(y) \operatorname{div} A(0) \nabla \eta_{N}(y) \psi(y) d y\right| \\ & \leq\left(\int_{\Omega^{\prime}}\left|\frac{\psi(y)}{\delta(y)}\right|^{2} d y\right)^{1 / 2}\left(N^{\frac{n+1}{2}} \int_{\Omega^{\prime}} \delta(y)^{2}\left|\left(\operatorname{div} A(0) \nabla \eta_{1}\right)\left(N^{1 / 2} y\right)\right|^{2} e^{-2 N y_{n}} d y\right)^{1 / 2} \\ & \leq C\|\psi\|_{L_{1,0}^{2}\left(\Omega^{\prime}\right)} N^{-1} . \end{aligned} $$ Now, it is easy to patch up $v_{N}$ to make it a solution, rather than an approximate solution. Lemma 11.27 With $\Omega^{\prime}$ and $B$ as above, we can find a family of solutions, $w_{N}$, of $\operatorname{div} A \nabla w_{N}=0$ with $w_{N}-v_{N} \in L_{1,0}^{2}\left(\Omega^{\prime}\right)$ so that $$ \lim _{N \rightarrow \infty} \int_{\Omega^{\prime}} \beta(y) B(y) \nabla w_{N}(y) \cdot \nabla \bar{w}_{N}(y) d y=\beta(0) B(0) e_{n} \cdot e_{n} $$ Proof. 
According to Theorem 10.25 we can solve the Dirichlet problem $$ \begin{cases}\operatorname{div} A \nabla \tilde{v}_{N}=-\operatorname{div} A \nabla v_{N}, & \text { in } \Omega^{\prime} \\ \tilde{v}_{N}=0, & \text { on } \partial \Omega^{\prime}\end{cases} $$ The solution $\tilde{v}_{N}$ will satisfy $$ \lim _{N \rightarrow \infty}\left\|\nabla \tilde{v}_{N}\right\|_{L^{2}\left(\Omega^{\prime}\right)} \leq \lim _{N \rightarrow \infty} C\left\|\operatorname{div} A \nabla v_{N}\right\|_{L_{-1}^{2}\left(\Omega^{\prime}\right)}=0 $$ by the estimates from the existence theorem, Theorem 10.25, and the estimate of Lemma 11.26. If we set $w_{N}=v_{N}+\tilde{v}_{N}$, then we have a solution with the correct boundary values and, by (11.28) and Lemma 11.22, $$ \begin{aligned} \lim _{N \rightarrow \infty} \int_{\Omega^{\prime}} \beta(y) B(y) \nabla w_{N}(y) \cdot \nabla \bar{w}_{N}(y) d y & =\lim _{N \rightarrow \infty} \int_{\Omega^{\prime}} \beta(y) B(y) \nabla v_{N}(y) \cdot \nabla \bar{v}_{N}(y) d y \\ & =\beta(0) B(0) e_{n} \cdot e_{n} . \end{aligned} $$ We will need another result from partial differential equations; this one will not be proven in this course. This Lemma asserts that solutions of elliptic equations are as smooth as one might expect. Lemma 11.29 If $A$ is a matrix with $C^{1}(\bar{\Omega})$ entries and $\Omega$ is a domain with $C^{2}$-boundary, then the solution of the Dirichlet problem, $$ \begin{cases}\operatorname{div} A \nabla u=0 & \text { in } \Omega \\ u=f & \text { on } \partial \Omega\end{cases} $$ will satisfy $$ \|u\|_{L_{2}^{2}(\Omega)} \leq C\|f\|_{L_{2}^{2}(\Omega)} $$ As mentioned above, this will not be proven. To obtain an idea of why it might be true, let $u$ be a solution as in the theorem. Then we can differentiate and obtain that $v=\partial u / \partial x_{j}$ satisfies an equation of the form $\operatorname{div} \gamma \nabla v=\operatorname{div}\left(\partial \gamma / \partial x_{j}\right) \nabla u$. 
The right-hand side is in $L_{-1}^{2}$ and hence it is reasonable to expect that $v$ satisfies the energy estimates of Theorem 10.25. This argument cannot be right because it does not explain how the boundary data enters into the estimate. To see the full story, take MA633. Finally, we can give the proofs of our main theorems. Proof of Theorem 11.8 and Theorem 11.9. We let $F: \Omega^{\prime} \rightarrow \Omega$ be the diffeomorphism used above and let $u_{N}=w_{N} \circ F^{-1} /\left(1+|\nabla \phi(0)|^{2}\right)$. According to the change of variables lemma, $u_{N}$ will be a solution of the original equation, $\operatorname{div} \gamma \nabla u_{N}=0$ in $\Omega$. Also, the Dirichlet integral is preserved: $$ \int_{\Omega} \beta(x)\left|\nabla u_{N}(x)\right|^{2} d x=\frac{1}{1+|\nabla \phi(0)|^{2}} \int_{\Omega^{\prime}} \beta(y) B(y) \nabla w_{N}(y) \cdot \nabla \bar{w}_{N}(y) d y . $$ Thus, the recovery of $\gamma$ at the boundary follows from the result in $\Omega^{\prime}$ of Lemma 11.27 and we have $$ \gamma(0)=\lim _{N \rightarrow \infty} \int_{\Omega} \gamma(x)\left|\nabla u_{N}(x)\right|^{2} d x=\lim _{N \rightarrow \infty} \Lambda_{\gamma}\left(u_{N}\right)\left(\bar{u}_{N}\right) $$ For the proof of the second theorem, we use the same family of solutions and the Rellich identity [10]: $$ \int_{\partial \Omega} \gamma(x) e \cdot \nu(x)\left|\nabla u_{N}(x)\right|^{2}-2 \operatorname{Re} \gamma(x) \frac{\partial u_{N}}{\partial \nu}(x) e \cdot \nabla \bar{u}_{N}(x) d \sigma(x)=\int_{\Omega} e \cdot \nabla \gamma(x)\left|\nabla u_{N}(x)\right|^{2} d x . $$ This is proven by an application of the divergence theorem. The smoothness result in Lemma 11.29 is needed to justify the application of the divergence theorem: we need to know that $u_{N}$ has two derivatives to carry this out. The full gradient of $u_{N}$ is determined by the boundary values of $u_{N}$ and the Dirichlet to Neumann map. 
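The divergence theorem computation behind the Rellich identity is short. For a real-valued solution $u$ of $\operatorname{div} \gamma \nabla u=0$ and a constant vector $e$, one checks the pointwise identity (a sketch; for complex solutions one inserts real parts and conjugates as in the identity above) $$ \operatorname{div}\left(e\, \gamma(x)|\nabla u(x)|^{2}-2(e \cdot \nabla u(x))\, \gamma(x) \nabla u(x)\right)=e \cdot \nabla \gamma(x)|\nabla u(x)|^{2}, $$ since the terms $\gamma\, e \cdot \nabla|\nabla u|^{2}$ and $2 \gamma \nabla(e \cdot \nabla u) \cdot \nabla u$ cancel and $\operatorname{div} \gamma \nabla u=0$ kills the remaining term. Integrating over $\Omega$ and applying the divergence theorem yields the boundary integral displayed above. 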
By Lemma 11.22, if $\gamma \in C^{1}(\bar{\Omega})$, we can take the limit of the right-hand side and obtain that $$ \frac{\partial \gamma}{\partial x_{j}}(0)=\lim _{N \rightarrow \infty} \int_{\Omega} \frac{\partial \gamma}{\partial x_{j}}(x)\left|\nabla u_{N}(x)\right|^{2} d x $$ Corollary 11.30 If we have a $C^{2}$ domain and for two $C^{1}(\bar{\Omega})$ functions, $\Lambda_{\gamma_{1}}=\Lambda_{\gamma_{2}}$, then $\gamma_{1}=\gamma_{2}$ on the boundary and $\nabla \gamma_{1}=\nabla \gamma_{2}$ on the boundary. Proof. The boundary values of the function $u_{N}$ are independent of $\gamma_{j}$. The expression $\Lambda_{\gamma}\left(u_{N}\right)\left(\bar{u}_{N}\right)$ in Theorem 11.8 clearly depends only on $u_{N}$ and the map $\Lambda_{\gamma}$. The left-hand side of Theorem 11.9 depends only on $\gamma$ and $\nabla u_{N}$, and $\nabla u_{N}$ can be computed from $u_{N}$ and the normal derivative of $u_{N}$. Hence, we can use Theorem 11.9 to determine $\nabla \gamma$ from the Dirichlet to Neumann map. Exercise 11.31 If $\gamma$ and $\partial \Omega$ are regular enough, can we determine the second order derivatives of $\gamma$ from the Dirichlet to Neumann map? It is known that all derivatives of $\gamma$ are determined by the Dirichlet to Neumann map. I do not know if there is a proof in the style of Theorems 11.8 and 11.9 which tells how to compute second derivatives of $\gamma$ by looking at some particular expression on the boundary. Exercise 11.32 If one examines the above proof, one will observe that there is a bit of slop. We made an arbitrary choice for the vector $\alpha$ and used $\alpha$ in the determination of one function, $\gamma$. It is likely that in fact, we can determine $(n-1)$ parameters at the boundary by considering $(n-1)$ linearly independent choices for $\alpha$. Run with this. ## Chapter 12 ## Inverse problem: Global uniqueness The goal of this chapter is to prove the following theorem. 
Theorem 12.1 If $\Omega$ is a $C^{2}$-domain in $\mathbf{R}^{n}, n \geq 3$, and we have two $C^{2}(\bar{\Omega})$ conductivities with $\Lambda_{\gamma_{1}}=\Lambda_{\gamma_{2}}$, then $\gamma_{1}=\gamma_{2}$. The proof of this result relies on converting the problem of the uniqueness of $\gamma$ for the equation $\operatorname{div} \gamma \nabla$ to a similar question about the uniqueness of the potential $q$ for a Schrödinger equation of the form $\Delta-q$ with $q=\Delta \sqrt{\gamma} / \sqrt{\gamma}$. One reason why this chapter is so long is that we spend a great deal of time convincing ourselves that the uniqueness question for one equation is equivalent with the uniqueness question for the other. Most of this chapter is lifted from the paper of Sylvester and Uhlmann [17]. A few of the details are taken from later works that simplify parts of the argument. ### A Schrödinger equation Here, we extend our notion of weak solution to equations with a potential or zeroth order term. We say that $v$ is a weak solution of $$ \begin{cases}\Delta v-q v=0 & \text { on } \Omega \\ v=f & \text { on } \partial \Omega\end{cases} $$ if $v \in L_{1}^{2}(\Omega), v-f \in L_{1,0}^{2}(\Omega)$ and $$ \int_{\Omega} \nabla v(x) \cdot \nabla \phi(x)+q(x) v(x) \phi(x) d x=0, \quad \phi \in L_{1,0}^{2}(\Omega) . $$ If $q \geq 0$ and $q \in L^{\infty}$, then the quadratic form associated with this equation clearly provides an inner product on $L_{1,0}^{2}(\Omega)$ and hence we can prove an existence theorem by imitating the arguments from Chapter 10. However, the potentials that we are studying do not satisfy $q \geq 0$, in general. Still, it is possible that the quadratic form is non-negative even without this bound. That is one consequence of the following Lemma. We will use the Lemma below to relate the existence and uniqueness for $\Delta-q$ to $\operatorname{div} \gamma \nabla$. 
Lemma 12.2 Suppose that $\Omega$ is $C^{1}$, $\gamma$ is $C^{2}(\bar{\Omega})$ and that $\gamma$ is bounded above and below as in (11.7). A function $u$ in $L_{1}^{2}(\Omega)$ satisfies $$ \left\{\begin{array}{l} \operatorname{div} \gamma \nabla u=0, \quad \text { in } \Omega \\ u=f, \quad \text { on } \partial \Omega \end{array}\right. $$ if and only if $v=\sqrt{\gamma} u$ is a weak solution of $$ \left\{\begin{array}{lc} \Delta v-q v=0, & \text { in } \Omega \\ v=\sqrt{\gamma} f, & \text { on } \partial \Omega \end{array}\right. $$ Proof. We let $C_{c}^{1}(\Omega)$ denote the space of functions in $C^{1}(\Omega)$ which are compactly supported in $\Omega$. If $\phi \in C_{c}^{1}(\Omega)$, then $\sqrt{\gamma} \phi$ is also in $C_{c}^{1}(\Omega)$ and hence lies in $L_{1,0}^{2}(\Omega)$. We consider the quadratic expression in the weak formulation of $\operatorname{div} \gamma \nabla u=0$ and then use the product rule twice to obtain $$ \begin{aligned} \int_{\Omega} \gamma(x) \nabla u(x) \cdot \nabla \phi(x) d x= & \int_{\Omega} \nabla(\sqrt{\gamma}(x) u(x)) \cdot \sqrt{\gamma}(x) \nabla \phi(x) \\ & -u(x)(\nabla \sqrt{\gamma}(x)) \cdot(\sqrt{\gamma}(x) \nabla \phi(x)) d x \\ = & \int_{\Omega} \nabla(\sqrt{\gamma}(x) u(x)) \cdot \nabla(\sqrt{\gamma}(x) \phi(x)) \\ & -u(x)(\nabla \sqrt{\gamma}(x)) \cdot(\sqrt{\gamma}(x) \nabla \phi(x)) \\ & -\nabla(\sqrt{\gamma}(x) u(x)) \cdot(\nabla \sqrt{\gamma}(x)) \phi(x) d x \end{aligned} $$ Now, in the middle term, we use the divergence theorem to move the gradient operator from $\phi$ to the remaining terms. 
Since we are not assuming that $\phi$ vanishes on the boundary, we pick up a term on the boundary: $$ \begin{aligned} \int_{\Omega} u(x)(\nabla \sqrt{\gamma(x)}) \cdot(\sqrt{\gamma(x)} \nabla \phi(x)) d x=- & \int_{\Omega} \frac{\Delta \sqrt{\gamma}(x)}{\sqrt{\gamma}(x)}(\sqrt{\gamma}(x) u(x))(\sqrt{\gamma}(x) \phi(x)) \\ & +\nabla(\sqrt{\gamma}(x) u(x)) \cdot(\nabla \sqrt{\gamma}(x)) \phi(x) d x \\ & +\int_{\partial \Omega} u(x) \phi(x) \sqrt{\gamma}(x) \nabla \sqrt{\gamma}(x) \cdot \nu(x) d \sigma(x) . \end{aligned} $$ We use this to simplify the above expression, note that two terms cancel and we obtain, with $q=\Delta \sqrt{\gamma} / \sqrt{\gamma}$, that $$ \begin{aligned} \int_{\Omega} \gamma(x) \nabla u(x) \cdot \nabla \phi(x) d x=\int_{\Omega} & \nabla(\sqrt{\gamma}(x) u(x)) \cdot \nabla(\sqrt{\gamma}(x) \phi(x)) \\ & +q(x) \sqrt{\gamma}(x) u(x) \sqrt{\gamma}(x) \phi(x) d x \\ & -\int_{\partial \Omega} \sqrt{\gamma}(x) u(x) \phi(x) \nabla \sqrt{\gamma}(x) \cdot \nu d \sigma \end{aligned} $$ Since the map $\phi \rightarrow \sqrt{\gamma} \phi$ is invertible on $C_{c}^{1}(\Omega)$ we have that $$ \int_{\Omega} \gamma(x) \nabla u(x) \cdot \nabla \phi(x) d x=0, \quad \text { for all } \phi \in C_{c}^{1}(\Omega), $$ if and only if with $v=\sqrt{\gamma} u$ $$ \int_{\Omega} \nabla v(x) \cdot \nabla \phi(x)+q(x) v(x) \phi(x) d x=0, \quad \text { for all } \phi \in C_{c}^{1}(\Omega) . $$ Corollary 12.4 With $q$ as above, if $f \in L_{1}^{2}(\Omega)$, then there exists a unique weak solution of the Dirichlet problem for $\Delta-q$. Proof. According to Lemma 12.2, solutions of the Dirichlet problem for $\Delta v-q v=0$ with data $f$ are taken to solutions of the Dirichlet problem for $\operatorname{div} \gamma \nabla u=0$ with data $f / \sqrt{\gamma}$ by the map $v \rightarrow v / \sqrt{\gamma}$. Since this map is invertible, the existence and uniqueness for $\Delta-q$ follow from the existence and uniqueness in Theorem 10.25. 
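When $\gamma$ has two continuous derivatives, the correspondence in Lemma 12.2 can also be seen from a pointwise identity. With $v=\sqrt{\gamma} u$ and using $\gamma \nabla\left(\gamma^{-1 / 2}\right)=-\nabla \sqrt{\gamma}$, a direct computation gives $$ \operatorname{div} \gamma \nabla\left(\gamma^{-1 / 2} v\right)=\operatorname{div}\left(\sqrt{\gamma}\, \nabla v-v\, \nabla \sqrt{\gamma}\right)=\sqrt{\gamma}\, \Delta v-v\, \Delta \sqrt{\gamma}=\sqrt{\gamma}(\Delta v-q v), $$ with $q=\Delta \sqrt{\gamma} / \sqrt{\gamma}$, so that $\operatorname{div} \gamma \nabla u=0$ precisely when $\Delta v-q v=0$, at least when $u$ and $v$ are smooth enough to differentiate twice; the weak formulation above makes this rigorous for $L_{1}^{2}$ solutions. 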
Exercise 12.5 We claimed above that if $\Omega$ is bounded and $q \geq 0$ is real and in $L^{\infty}$, then the expression $$ \int_{\Omega} \nabla u \cdot \nabla \bar{v}+q u \bar{v} d x $$ defines an inner product on $L_{1,0}^{2}(\Omega)$ which induces the same topology as the standard inner product. Verify this. Show that this continues to hold for $n \geq 3$, if $q \in L^{n / 2}(\Omega)$. What goes wrong if $n=2$ ? The following Lemma asserts that a function $\beta \in C^{1}(\bar{\Omega})$ defines a multiplier on $L_{1 / 2}^{2}(\partial \Omega)$ which depends only on the boundary values of $\beta$. This should seem obvious. That we need to prove such obvious statements is the price we pay for our cheap definition of the space $L_{1 / 2}^{2}(\partial \Omega)$. Lemma 12.6 Let $\Omega$ be a $C^{1}$-domain. If $\beta_{1}, \beta_{2} \in C^{1}(\bar{\Omega})$ and $\beta_{1}=\beta_{2}$ on $\partial \Omega$, then for each $f \in L_{1}^{2}(\Omega)$, $\left(\beta_{1}-\beta_{2}\right) f \in L_{1,0}^{2}(\Omega)$. As a consequence, for each $f \in L_{1 / 2}^{2}(\partial \Omega)$, $\beta_{1} f=\beta_{2} f$. Proof. First note that the product rule, exercise 10.7, implies that $\beta_{j} f$ is in $L_{1}^{2}(\Omega)$ if $f$ is in $L_{1}^{2}(\Omega)$. To see that $\left(\beta_{1}-\beta_{2}\right) f$ is in $L_{1,0}^{2}(\Omega)$, we will establish the following: Claim. If $\beta \in C^{1}(\bar{\Omega})$ and $\beta=0$ on $\partial \Omega$, then the map $f \rightarrow \beta f$ maps $L_{1}^{2}(\Omega)$ into $L_{1,0}^{2}(\Omega)$. To establish the claim, we may use a partition of unity to reduce to a function $f$ which is supported in a ball $B_{r}\left(x_{0}\right)$ and so that near $x_{0}$, the boundary lies in a graph $\left\{\left(x^{\prime}, x_{n}\right): x_{n}=\phi\left(x^{\prime}\right)\right\}$. We let $\lambda(t)$ be a function which is smooth on all of $\mathbf{R}$, is 0 if $t<1$ and is 1 if $t>2$. 
We let $$ \eta_{\epsilon}(x)=\lambda\left(\left(x_{n}-\phi\left(x^{\prime}\right)\right) / \epsilon\right) . $$ Thus, $\eta_{\epsilon}$ vanishes on $\partial \Omega \cap B_{r}\left(x_{0}\right)$. The product $\eta_{\epsilon}(x) \beta(x) f(x)$ will be compactly supported in $\Omega$, hence we can regularize as in Lemma 10.9 in order to approximate in the $L_{1}^{2}(\Omega)$ norm by functions in $\mathcal{D}(\Omega)$ and conclude that $\eta_{\epsilon}(x) \beta(x) f(x)$ is in $L_{1,0}^{2}(\Omega)$. Now we show that $$ \lim _{\epsilon \rightarrow 0^{+}}\left\|\eta_{\epsilon} \beta f-\beta f\right\|_{L_{1}^{2}(\Omega)}=0 $$ This will imply that $\beta f$ is in $L_{1,0}^{2}(\Omega)$, since $L_{1,0}^{2}(\Omega)$ is (by definition) a closed subspace of $L_{1}^{2}(\Omega)$. We establish (12.7). It is an immediate consequence of the Lebesgue dominated convergence theorem that $\eta_{\epsilon} \beta f \rightarrow \beta f$ in $L^{2}(\Omega)$ as $\epsilon \rightarrow 0^{+}$. Now, we turn to the derivatives. We compute the derivative $$ \frac{\partial}{\partial x_{j}}\left(\eta_{\epsilon}(x) \beta(x) f(x)\right)=\frac{\partial \eta_{\epsilon}}{\partial x_{j}}(x) \beta(x) f(x)+\eta_{\epsilon}(x) \frac{\partial}{\partial x_{j}}(\beta(x) f(x)) . $$ By the dominated convergence theorem, the second term on the right converges in $L^{2}(\Omega)$ to the derivative of $\beta f$. We show the first term on the right goes to zero in $L^{2}$. To see this, we apply the mean value theorem of one variable calculus on the line segment joining $\left(x^{\prime}, \phi\left(x^{\prime}\right)\right)$ to $\left(x^{\prime}, x_{n}\right)$ and use that $\beta\left(x^{\prime}, \phi\left(x^{\prime}\right)\right)=0$ to conclude that $$ |\beta(x)| \leq 2 \epsilon\|\nabla \beta\|_{\infty} . 
$$ Using this, and observing that $\nabla \eta_{\epsilon}$ is supported in a thin strip along the boundary and satisfies $\left|\nabla \eta_{\epsilon}\right| \leq C / \epsilon$, we conclude that $$ \int_{\Omega}\left|\frac{\partial \eta_{\epsilon}}{\partial x_{j}}(x) \beta(x) f(x)\right|^{2} d x \leq C\|\nabla \beta\|_{\infty}^{2} \int_{B_{r}\left(x_{0}\right) \cap\left\{x: 0<x_{n}-\phi\left(x^{\prime}\right)<2 \epsilon\right\}}|f(x)|^{2} d x . $$ The last integral goes to zero as $\epsilon \rightarrow 0^{+}$. Thus the claim follows. It is easy to see that $f \rightarrow \beta_{j} f$ gives a map on $L_{1}^{2}(\Omega)$. Now, if $\beta_{1}$ and $\beta_{2}$ are as in the lemma and $f$ is a representative of an element of $L_{1 / 2}^{2}(\partial \Omega)$, then since $\beta_{1} f-\beta_{2} f \in L_{1,0}^{2}(\Omega)$, we can conclude that $\beta_{1} f$ and $\beta_{2} f$ give the same function in $L_{1 / 2}^{2}(\partial \Omega)$. Our next step is to establish a relation between the Dirichlet to Neumann map for $q$ and that for $\gamma$. Lemma 12.8 If $\gamma$ is in $C^{2}(\bar{\Omega})$ and satisfies the ellipticity condition (11.7), and $\Omega$ is a $C^{1}$-domain, then we have $$ \Lambda_{q}(\cdot)-\frac{1}{\sqrt{\gamma}} \nabla \sqrt{\gamma} \cdot \nu=\frac{1}{\sqrt{\gamma}} \Lambda_{\gamma}\left(\frac{1}{\sqrt{\gamma}} \cdot\right) $$ Proof. We fix $f$ in $L_{1 / 2}^{2}(\partial \Omega)$ and suppose that $u$ is the solution of the Dirichlet problem for $\operatorname{div} \gamma \nabla$ with boundary data $f$. 
According to the identity (12.3) in the proof of Lemma 12.2, we have $$ \begin{aligned} \Lambda_{\gamma}(f)(\phi)= & \int_{\Omega} \gamma(x) \nabla u(x) \cdot \nabla \phi(x) d x \\ = & \int_{\Omega} \nabla(\sqrt{\gamma}(x) u(x)) \cdot \nabla(\sqrt{\gamma}(x) \phi(x))+q(x) \sqrt{\gamma}(x) u(x) \sqrt{\gamma}(x) \phi(x) d x \\ & \quad-\int_{\partial \Omega} \sqrt{\gamma}(x) u(x) \phi(x) \nabla \sqrt{\gamma}(x) \cdot \nu d \sigma \\ = & \sqrt{\gamma} \Lambda_{q}(\sqrt{\gamma} f)(\phi)-\int_{\partial \Omega} \sqrt{\gamma} f \nabla \sqrt{\gamma} \cdot \nu \phi d \sigma . \end{aligned} $$ Making the substitution $f=g / \sqrt{\gamma}$ and dividing by $\sqrt{\gamma}$ gives the desired conclusion. ${ }^{1}$ Remark. A clearer and more direct proof of this lemma can be given if we assume the regularity result of Lemma 11.29. We may choose $f$ which is nice, solve the Dirichlet problem for $\operatorname{div} \gamma \nabla$ with data $f$ to obtain $u$. We have that $v=\sqrt{\gamma} u$ solves the Schrödinger equation $\Delta v-q v=0$. Taking the normal derivative we have $$ \Lambda_{q}(\sqrt{\gamma} f)=\sqrt{\gamma} \frac{\partial u}{\partial \nu}+u \frac{\partial \sqrt{\gamma}}{\partial \nu} $$ We now consider two conductivities $\gamma_{1}$ and $\gamma_{2}$ and the corresponding potentials $q_{j}=\Delta \sqrt{\gamma_{j}} / \sqrt{\gamma_{j}}$. ${ }^{1}$ In the above equation, we are not distinguishing between the multiplication operator that $\sqrt{\gamma}$ gives on $L_{1 / 2}^{2}(\partial \Omega)$ and the transpose of this operator to the dual, $L_{-1 / 2}^{2}(\partial \Omega)$. Did anyone notice? Proposition 12.9 If $\gamma_{1}, \gamma_{2} \in C^{1}(\bar{\Omega})$, $\gamma_{1}=\gamma_{2}$ and $\nabla \gamma_{1}=\nabla \gamma_{2}$ on $\partial \Omega$, then $$ \Lambda_{\gamma_{1}}=\Lambda_{\gamma_{2}} $$ if and only if $$ \Lambda_{q_{1}}=\Lambda_{q_{2}} $$ Proof. This result follows from Lemmas 12.8 and 12.6. 
### Exponentially growing solutions In this section, we consider potentials $q$ which are defined in all of $\mathbf{R}^{n}$ and are bounded and compactly supported. In applications, $q$ will be of the form $\Delta \sqrt{\gamma} / \sqrt{\gamma}$ in $\Omega$ and 0 outside $\Omega$. The assumption that $q$ is bounded is needed in this approach. The assumption that $q$ is compactly supported is too strong. What is needed is that $q$ defines a multiplication operator from $M_{\infty}^{2,-1 / 2} \rightarrow M_{1}^{2,1 / 2}$ and thus there is a constant $M(q)$ so that $$ \|q \phi\|_{M_{1}^{2,1 / 2}} \leq M(q)\|\phi\|_{M_{\infty}^{2,-1 / 2}} $$ This requires that $q$ decay faster than $\left(1+|x|^{2}\right)^{-1 / 2}$ at infinity, which is true if $q$ is bounded and compactly supported. Our goal is to construct solutions of the equation $\Delta v-q v=0$ which are close to the harmonic functions $e^{x \cdot \zeta}$. Recall that such an exponential will be harmonic if $\zeta \cdot \zeta=0$. We will succeed if $\zeta$ is large. Theorem 12.11 Assume $M(q)$ is finite and let $\zeta \in \mathbf{C}^{n}$ satisfy $\zeta \cdot \zeta=0$. There exists a constant $C=C(n)$ so that if $|\zeta|>C(n) M(q)$, then we can find a solution of $$ \Delta v-q v=0 $$ of the form $v(x)=e^{x \cdot \zeta}(1+\psi(x, \zeta))$ which satisfies $$ \|\psi\|_{M_{\infty}^{2,-1 / 2}} \leq \frac{C M(q)}{|\zeta|}\|q\|_{M_{1}^{2,1 / 2}} . $$ Furthermore, the function $\psi$ is the only function in $M_{\infty}^{2,-1 / 2}$ for which $v$ as defined above will satisfy $\Delta v-q v=0$. Proof. Existence. 
If we differentiate we see that $\Delta v-q v=0$ if and only if $$ \Delta \psi+2 \zeta \cdot \nabla \psi-q \psi=q $$ A solution of this equation may be constructed by solving the integral equation $$ \psi-G_{\zeta}(q \psi)=G_{\zeta}(q) $$ A solution of the integral equation (12.12) is given by the series $$ \psi=\sum_{j=1}^{\infty}\left(G_{\zeta} q\right)^{j}(1) $$ To see that the series can be summed, we apply the second estimate of Theorem 9.16 of Chapter 9, and use the estimate for the multiplication operator given by $q$, (12.10), to obtain $$ \left\|\left(G_{\zeta} q\right)^{j}(1)\right\|_{M_{\infty}^{2,-1 / 2}} \leq\left(\frac{C M(q)}{|\zeta|}\right)^{j-1} \frac{C}{|\zeta|}\|q\|_{M_{1}^{2,1 / 2}} . $$ Thus, if $|\zeta|$ is large, this series converges and defines a function $\psi$ in $M_{\infty}^{2,-1 / 2}$. Furthermore, according to exercise 9.21, $\nabla \psi=\nabla G_{\zeta}(q(1+\psi))$ is in $M_{\infty}^{2,-1 / 2}$. Thus, $v$ will be a weak solution of the equation $\Delta v-q v=0$ in $\mathbf{R}^{n}$. Uniqueness. If we have two solutions, $\psi_{1}$ and $\psi_{2}$, of (12.12) which are in $M_{\infty}^{2,-1 / 2}$, then their difference satisfies $$ \Delta\left(\psi_{1}-\psi_{2}\right)+2 \zeta \cdot \nabla\left(\psi_{1}-\psi_{2}\right)-q\left(\psi_{1}-\psi_{2}\right)=0 $$ According to Theorem 9.23 we have $\psi_{1}-\psi_{2}=G_{\zeta}\left(q\left(\psi_{1}-\psi_{2}\right)\right)$. Thus from the estimate in Theorem 9.16 we have $$ \left\|\psi_{1}-\psi_{2}\right\|_{M_{\infty}^{2,-1 / 2}} \leq \frac{C M(q)}{|\zeta|}\left\|\psi_{1}-\psi_{2}\right\|_{M_{\infty}^{2,-1 / 2}} $$ If we have $C M(q) /|\zeta|<1$, then this inequality will imply that $\left\|\psi_{1}-\psi_{2}\right\|_{M_{\infty}^{2,-1 / 2}}=0$. Lemma 12.13 Suppose that $\Omega$ is $C^{1}$ and suppose that each $q_{j}$ is supported in $\bar{\Omega}$ and that $q_{j}$ are of the form $\Delta \sqrt{\gamma_{j}} / \sqrt{\gamma_{j}}$. 
If $\Lambda_{q_{1}}=\Lambda_{q_{2}}$ and $v_{j}=\left(1+\psi_{j}\right) e^{x \cdot \zeta}$ are the solutions for $\Delta-q_{j}$ from Theorem 12.11, then $\psi_{1}(x, \zeta)=\psi_{2}(x, \zeta)$ for $x \in \mathbf{R}^{n} \backslash \Omega$ and all $\zeta$ sufficiently large. Proof. We use a cut-and-paste argument. Define a new function by $$ \tilde{\psi}_{1}(x, \zeta)= \begin{cases}\psi_{2}(x, \zeta), & x \in \mathbf{R}^{n} \backslash \bar{\Omega} \\ \psi(x, \zeta), & x \in \Omega .\end{cases} $$ Here, $\psi(x, \zeta)=e^{-x \cdot \zeta} v(x)-1$ where $v$ is the solution of the Dirichlet problem $$ \begin{cases}\Delta v-q_{1} v=0, & x \in \Omega \\ v(x)=e^{x \cdot \zeta}\left(1+\psi_{2}(x, \zeta)\right), & x \in \partial \Omega .\end{cases} $$ We claim that $\tilde{v}_{1}(x, \zeta)=e^{x \cdot \zeta}\left(1+\tilde{\psi}_{1}\right)$ is a solution of $\Delta v-q_{1} v=0$ in all of $\mathbf{R}^{n}$. This depends on the hypothesis $\Lambda_{q_{1}}=\Lambda_{q_{2}}$. To establish this claim, we let $\phi \in \mathcal{D}\left(\mathbf{R}^{n}\right)$ and consider $$ \int_{\mathbf{R}^{n}} \nabla \tilde{v}_{1} \cdot \nabla \phi+q_{1} \tilde{v}_{1} \phi d x=\int_{\mathbf{R}^{n} \backslash \Omega} \nabla v_{2} \cdot \nabla \phi d x+\int_{\Omega} \nabla v(x) \cdot \nabla \phi(x)+q_{1}(x) v(x) \phi(x) d x . $$ Since $v_{2}$ is a solution of $\Delta v_{2}-q_{2} v_{2}=0$ in $\mathbf{R}^{n}$, we have that $$ \int_{\mathbf{R}^{n} \backslash \Omega} \nabla v_{2} \cdot \nabla \phi d x=-\int_{\Omega} \nabla v_{2} \cdot \nabla \phi+q_{2} v_{2} \phi d x=\Lambda_{q_{2}}\left(v_{2}\right)(\phi) . $$ Since $v_{2}=v$ on the boundary of $\Omega$ and $\Lambda_{q_{1}}=\Lambda_{q_{2}}$, we have $$ \Lambda_{q_{2}}\left(v_{2}\right)(\phi)=\Lambda_{q_{1}}(v)(\phi)=\int_{\Omega} \nabla v \cdot \nabla \phi+q_{1} v \phi d x . $$ Combining these last three equations shows that $\tilde{v}_{1}$ is a weak solution of $\Delta-q_{1}$ in $\mathbf{R}^{n}$. 
By the uniqueness statement in Theorem 12.11, the function $\tilde{\psi}_{1}$ defined by $\tilde{\psi}_{1}=e^{-x \cdot \zeta} \tilde{v}_{1}-1$ must equal $\psi_{1}$. In particular, $\psi_{1}=\psi_{2}$ outside $\Omega$. Lemma 12.14 Let $q$ be a potential for which we can solve the Dirichlet problem. The operator $\Lambda_{q}$ is symmetric. That is, we have $\Lambda_{q}(\phi)(\psi)=\Lambda_{q}(\psi)(\phi)$. Proof. Let $\phi_{1}, \phi_{2}$ be in $L_{1 / 2}^{2}(\partial \Omega)$. We solve the Dirichlet problem for $\Delta-q$ with boundary data $\phi_{j}$ to find functions $u_{j}$. Then we have $$ \Lambda_{q}\left(\phi_{1}\right)\left(\phi_{2}\right)=\int_{\Omega} \nabla u_{1} \cdot \nabla u_{2}+q u_{1} u_{2} d x . $$ The integral on the right-hand side is symmetric in $u_{1}$ and $u_{2}$, so we can conclude $$ \Lambda_{q}\left(\phi_{1}\right)\left(\phi_{2}\right)=\Lambda_{q}\left(\phi_{2}\right)\left(\phi_{1}\right) . $$ Proof of Theorem 12.1. According to Corollary 11.30 and Proposition 12.9, if $\Lambda_{\gamma_{1}}=\Lambda_{\gamma_{2}}$, then $\Lambda_{q_{1}}=\Lambda_{q_{2}}$. We will show that if $\Lambda_{q_{1}}=\Lambda_{q_{2}}$, then the Fourier transforms satisfy $\hat{q}_{1}=\hat{q}_{2}$ (where we are assuming that each $q_{j}$ has been defined to be zero outside $\Omega$). We fix $\xi \in \mathbf{R}^{n}$ and choose two unit vectors $\alpha$ and $\beta$ which satisfy $\alpha \cdot \beta=\alpha \cdot \xi=\beta \cdot \xi=0$. (Here, we use our assumption that $n \geq 3$ in order to find three mutually orthogonal vectors.) Next, for $R>|\xi| / 2$, we define $\zeta_{1}$ and $\zeta_{2}$ by $$ \zeta_{1}=R \alpha+i \beta \sqrt{R^{2}-|\xi|^{2} / 4}-i \xi / 2 \quad \text { and } \quad \zeta_{2}=-R \alpha-i \beta \sqrt{R^{2}-|\xi|^{2} / 4}-i \xi / 2 . $$ These vectors satisfy $\zeta_{j} \cdot \zeta_{j}=0$, $\zeta_{1}+\zeta_{2}=-i \xi$, and $\left|\zeta_{j}\right|=\sqrt{2} R$. 
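To verify the first of these identities directly, write $s=\sqrt{R^{2}-|\xi|^{2} / 4}$ and expand, using $|\alpha|=|\beta|=1$; every cross term vanishes because $\alpha \cdot \beta=\alpha \cdot \xi=\beta \cdot \xi=0$: $$ \zeta_{1} \cdot \zeta_{1}=R^{2}|\alpha|^{2}+(i s)^{2}|\beta|^{2}+\left(-\frac{i}{2}\right)^{2}|\xi|^{2}=R^{2}-\left(R^{2}-\frac{|\xi|^{2}}{4}\right)-\frac{|\xi|^{2}}{4}=0 . $$ A similar computation, separating real and imaginary parts, gives $\left|\zeta_{1}\right|^{2}=R^{2}+s^{2}+|\xi|^{2} / 4=2 R^{2}$; the computation for $\zeta_{2}$ is identical. 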
For $R$ large, we let $v_{j}$ be the solution of $\Delta v_{j}-q_{j} v_{j}=0$ corresponding to $\zeta_{j}$ as in Theorem 12.11. Since the Dirichlet to Neumann maps are equal and using Lemma 12.14, we have $$ 0=\Lambda_{q_{1}}\left(v_{1}\right)\left(v_{2}\right)-\Lambda_{q_{2}}\left(v_{2}\right)\left(v_{1}\right)=\int_{\Omega}\left(q_{1}(x)-q_{2}(x)\right) e^{-i x \cdot \xi}\left(1+\psi_{1}+\psi_{2}+\psi_{1} \psi_{2}\right) d x . $$ Recall that the $\psi_{j}$ depend on the parameter $R$ and that Theorem 12.11 implies that $\psi_{j} \rightarrow 0$ in $L_{l o c}^{2}$ as $R \rightarrow \infty$. Thus, we conclude $$ \hat{q}_{1}=\hat{q}_{2} . $$ The Fourier inversion theorem implies $q_{1}=q_{2}$. Finally, the Lemma below tells us that if $q_{1}=q_{2}$ and $\gamma_{1}=\gamma_{2}$ on the boundary, then $\gamma_{1}=\gamma_{2}$. Lemma 12.15 If $\gamma_{1}$ and $\gamma_{2}$ are in $C^{2}(\bar{\Omega})$ and if $\Delta \sqrt{\gamma_{1}} / \sqrt{\gamma_{1}}=\Delta \sqrt{\gamma_{2}} / \sqrt{\gamma_{2}}$, then $u=\log \left(\gamma_{1} / \gamma_{2}\right)$ satisfies the equation $$ \operatorname{div} \sqrt{\gamma_{1} \gamma_{2}} \nabla u=0 . $$ As a consequence, if $\Omega$ is $C^{1}$ and $\gamma_{1}=\gamma_{2}$ on the boundary, then $\gamma_{1}=\gamma_{2}$. Proof. Let $\phi \in \mathcal{D}(\Omega)$, so that $\phi$ is smooth and compactly supported in $\Omega$. 
We multiply our hypothesis, $\Delta \sqrt{\gamma_{1}} / \sqrt{\gamma_{1}}=\Delta \sqrt{\gamma_{2}} / \sqrt{\gamma_{2}}$, by $\phi$ and integrate by parts to obtain $$ 0=\int_{\Omega}\left(\frac{\Delta \sqrt{\gamma_{1}}}{\sqrt{\gamma_{1}}}-\frac{\Delta \sqrt{\gamma_{2}}}{\sqrt{\gamma_{2}}}\right) \phi d x=-\int_{\Omega} \nabla \sqrt{\gamma_{1}} \cdot \nabla\left(\frac{1}{\sqrt{\gamma_{1}}} \phi\right)-\nabla \sqrt{\gamma_{2}} \cdot \nabla\left(\frac{1}{\sqrt{\gamma_{2}}} \phi\right) d x . $$ If we make the substitution $\phi=\sqrt{\gamma_{1}} \sqrt{\gamma_{2}} \psi$, then we have $$ \int_{\Omega} \sqrt{\gamma_{1} \gamma_{2}} \nabla\left(\log \sqrt{\gamma_{1}}-\log \sqrt{\gamma_{2}}\right) \cdot \nabla \psi d x=0 . $$ If $\gamma_{1}=\gamma_{2}$ on the boundary and $\Omega$ is $C^{1}$, then by Lemma 12.6 we have that $\log \left(\gamma_{1} / \gamma_{2}\right)$ is in $L_{1,0}^{2}(\Omega)$. We can conclude that this function is zero in $\Omega$ from the uniqueness assertion of Theorem 10.25. Exercise 12.16 Suppose that $\Omega$ is a $C^{1}$-domain in $\mathbf{R}^{n}$. Suppose that $u^{+}$ is a weak solution of $\Delta u^{+}=0$ in $\Omega$, that $u^{-}$ is a local weak solution of $\Delta u^{-}=0$ in $\mathbf{R}^{n} \backslash \bar{\Omega}$, and that $\nabla u^{-}$ is in $L^{2}$ of every bounded set in $\mathbf{R}^{n} \backslash \bar{\Omega}$. Set $\gamma=1$ in $\mathbf{R}^{n} \backslash \bar{\Omega}$ and set $\gamma=2$ in $\Omega$. Define $u$ by $$ u(x)= \begin{cases}u^{+}(x), & x \in \Omega \\ u^{-}(x), & x \in \mathbf{R}^{n} \backslash \bar{\Omega} .\end{cases} $$ What conditions must $u^{\pm}$ satisfy in order for $u$ to be a local weak solution of $\operatorname{div} \gamma \nabla$ in all of $\mathbf{R}^{n}$? Hint: There are two conditions. The first is needed to make the first derivatives of $u$ locally in $L^{2}$. The second is needed to make the function $u$ satisfy the weak formulation of the equation. 
Exercise 12.17 Show that the result of Lemma 12.15 continues to hold if we only require that the coefficients $\gamma_{1}$ and $\gamma_{2}$ are elliptic and in $L_{1}^{2}(\Omega)$. In fact, the proof is somewhat simpler because the equation $\Delta \sqrt{\gamma_{1}} / \sqrt{\gamma_{1}}=\Delta \sqrt{\gamma_{2}} / \sqrt{\gamma_{2}}$ and the boundary condition are assumed to hold in a weak formulation. The proof we gave amounts to showing that the ordinary formulation of these conditions implies the weak formulation. Exercise 12.18 (Open) Show that a uniqueness theorem along the lines of Theorem 12.1 holds under the assumption that $\gamma$ is only $C^{1}(\bar{\Omega})$. ## Chapter 13 ## Bessel functions ## Chapter 14 Restriction to the sphere ## Chapter 15 ## The uniform Sobolev inequality In this chapter, we give the proof of a theorem of Kenig, Ruiz and Sogge which can be viewed as giving a generalization of the Sobolev inequality. One version of the Sobolev inequality is that if $1<p<n / 2$ and $1 / p-1 / q=2 / n$, then we have $$ \|u\|_{q} \leq C(n, p)\|\Delta u\|_{p} . $$ This can be proven using the result of exercise 8.2 and the Hardy-Littlewood-Sobolev theorem, Theorem 8.8. In our generalization, we will consider more operators, but fewer exponents $p$. The result is Theorem 15.1 Let $L=\Delta+a \cdot \nabla+b$ where $a \in \mathbf{C}^{n}$ and $b \in \mathbf{C}$, and let $p$ satisfy $1 / p-1 / p^{\prime}=2 / n$. For each $f$ with $f \in L^{p}$ and $D^{2} f \in L^{p}$ we have $$ \|f\|_{p^{\prime}} \leq C\|L f\|_{p} . $$ ## Chapter 16 Inverse problems: potentials in $L^{n / 2}$
\begin{document} \title{Quantum Computing for a Profusion of Postman Problem Variants } \author*[1,2]{\fnm{Joel E.} \sur{Pion}}\email{[email protected]} \author[3]{\fnm{Christian F. A.} \sur{Negre}}\email{[email protected]} \author*[1]{\fnm{Susan M.} \sur{Mniszewski}}\email{[email protected]} \affil[1]{\orgdiv{Computer, Computational, and Statistical Sciences Division}, \orgname{Los Alamos National Laboratory}, \state{NM}, \country{USA}} \affil[2]{\orgdiv{Mathematics Department}, \orgname{University of California, Santa Barbara}, \state{CA}, \country{USA}} \affil[3]{\orgdiv{Theoretical Division}, \orgname{Los Alamos National Laboratory}, \state{NM}, \country{USA}} \abstract{In this paper we study the viability of solving the Chinese Postman Problem, a graph routing optimization problem, and many of its variants on a quantum annealing device. Routing problem variants considered include graph type, directionally varying weights, and number of parties involved in routing, among others. We emphasize how to convert such problems into quadratic unconstrained binary optimization (QUBO) problems, one of the two equivalent natural paradigms for quantum annealing devices. We also expand upon a previously discovered algorithm for solving the Chinese Postman Problem on a closed undirected graph, decreasing the number of constraints and variables used in the problem. Optimal annealing parameter settings and constraint weight values are discussed based on results from implementation on the D-Wave 2000Q and Advantage. 
Results from classical, purely quantum, and hybrid algorithms are compared.} \keywords{D-Wave, Quantum Annealing, QUBO, Routing Problems} \date{Received: date / Accepted: date} \maketitle \section{Introduction} \label{intro} Quantum annealing exploits the quantum-mechanical effects of superposition, entanglement, and tunneling to explore the energy landscape in an efficient manner \cite{Lanting2014,annealingbasics} when sampling from energy-based models. NP-hard combinatorial optimization problems are formulated as either an Ising model or quadratic unconstrained binary optimization (QUBO) problem that can be run on a D-Wave quantum annealer (QA). The Ising model objective function is \begin{equation} O({h,J,s}) = \sum\limits_{i}h_{i}s_i + \sum\limits_{i<j}J_{ij}s_is_j, \label{eq:ising} \end{equation} where $s_{i} \in$ $\{-1,+1\}$ are the spin variables, while $h_{i}$ and $J_{ij}$ are, respectively, biases on, and strengths between, spins. Quantum computers use qubits to encode information. Their behavior is governed by the laws of quantum mechanics. This allows a qubit to be in a ``superposition'' state, which means it can be both a ``$-1$'' and a ``$+1$'' at the same time. An outside event causes it to collapse into one or the other. The annealing process results in a low-energy ground state, $g$, which consists of an Ising spin for each qubit. While solving a combinatorial optimization problem, QAs are typically limited by the number of variables which can be represented and embedded in the hardware graph topology. During the embedding process each logical variable maps to a chain of qubits. The D-Wave 2000Q QA uses a Chimera topology with more than $2000$ qubits and more than $6000$ couplers. Each qubit is connected to $6$ others. This allows a fully connected graph (or clique) of $64$ nodes (or variables) to be embedded in the sparse Chimera graph. The newer D-Wave Advantage uses a Pegasus topology with over $5000$ qubits and more than $35,000$ couplers. 
Each qubit connects to $15$ other qubits, and the largest embeddable clique size is $177$ \cite{McGeoch}. QAs have proved useful for solving NP-hard optimization problems such as those involved in graph theory \cite{GP2017,CD2020,Mniszewski2021} and machine learning \cite{OMalley2018,Dixit2021}, among others. The QUBO formulation is most commonly used for optimization problems. The objective function is \begin{equation} O({Q,x}) = \sum\limits_{i}Q_{ii}x_i + \sum\limits_{i<j}Q_{ij}x_ix_j, \label{eq:qubo} \end{equation} where $x_{i} \in$ $\{0,1\}$ encodes the inputs and results. The symmetric matrix, $Q$, is formulated such that the weights on the diagonal correspond to the linear terms, while the off-diagonal weights are the quadratic terms. Ising and QUBO models are related through the transformation $s = 2x - 1$. Constraints on current D-Wave architectures include limited precision and range on weights and strengths, sparse connectivity, and the number of available qubits. These constraints impact both the size of the problems that can be run and the solver performance. A QUBO matrix is mapped onto the hardware using an embedding algorithm such as \emph{minorminer}~\cite{embedding}. A hybrid quantum-classical approach is required when the number of problem variables is too large to run directly on the D-Wave hardware. In that case, the quantum-classical \emph{qbsolv} sampler is used \cite{Booth}. This paper presents a routing problem known as the Chinese Postman Problem (CPP) as well as many of its variants in the context of finding solutions with a QA. In Section \ref{Preliminaries}, we briefly give an overview of the history of the CPP as well as some definitions necessary to define the problem. In Section \ref{variants}, we define many variants of the Postman Problem along with potential applications. 
Next, in Section \ref{Undirected Explanation}, a CPP QA algorithm first put forth by \cite{Siloi} for solving the Closed Undirected CPP is discussed, along with potential modifications. Then, we introduce a novel algorithm for using a QA to solve a large class of CPP variants in Sections \ref{BGC} and \ref{even more general}. Finally, we show results from our implementations and discuss observations from our experiments in Sections \ref{results} and \ref{discussion}, respectively. \section{Preliminaries}\label{Preliminaries} \subsection{History} \label{History} The CPP was first posed as a combinatorial optimization problem by the then lecturer at Shandong Normal University, Mei-Gu Guan, in 1960. At that time China was trying to modernize itself as a country and mathematicians were encouraged to work on real-world applications. The original phrasing of the CPP is as follows: ``\textit{A postman has to deliver letters to a given neighborhood. He needs to walk through all the streets in the neighborhood and back to the post-office. How can he design his route so that he walks the shortest distance? \cite{Grotschel}}'' The modern phrasings of the question are varied, as different factors and applications are taken into consideration. The unifying factor across these variants is that they are routing problems framed as combinatorial optimization problems over a graph structure. \subsection{Graph Algorithm Terminology} \label{Graph Definitions} The following definitions about graphs and objects used in graph algorithms will be used throughout the paper \cite{graphtextbook}. 
\begin{definition} \label{definition: graph} (Graph) A graph, $G$, is a triple ($V$, $U$, $D$) where $V\subset\mathbb{N}$ is a non-empty finite subset, $U\subset\{([a,b],c) \vert a,b\in V, c\in(\mathbb{R}^+)^2\}$ with $c = [W_{a,b}, W_{b,a}]$, and $D\subset\{((a,b),W_{a,b}) \vert a,b\in V, W_{a,b}\in\mathbb{R}^+\}.$ We shall refer to $V$ as vertices, $U$ as undirected edges, and $D$ as directed edges. Undirected edges and directed edges are labeled as $[a,b], (a,b)$, respectively, while $W_{a,b}$ is the weight of the edge from vertex $a$ to vertex $b$. We refer to $E = U\bigcup D$ as the edges. Note $(*,...,*)$ is used to denote an ordered tuple and $[*,...,*]$ is used to denote an unordered tuple. \end{definition} \begin{definition} (Vertex Adjacency) In a graph, $G$, vertex $a\in V$ is said to be adjacent to vertex $b\in V$ if there exists some $W_{a,b}, W_{b,a}\in\mathbb{R}^+$ so that $([a,b],[W_{a,b},W_{b,a}])\in U$ or there exists some $W_{a,b}\in\mathbb{R}^+$ so that ${((a,b),W_{a,b})\in D}$. \end{definition} \begin{definition} (Edge Adjacency) In a graph, $G$, the edge labeled $[a,b]$ or $(a,b)$ is adjacent to the edge labeled $[c,d]$ or $(c,d)$ if the edges may be written, up to reordering of unordered tuples, so that $b = c$. \end{definition} \begin{definition} (Walk) A walk of length $n$ in a graph, $G$, is a tuple of length $n+1$, $(v_0,...,v_n)$, where $v_i$ is a vertex adjacent to $v_{i+1}$ for all $i$. \end{definition} \begin{definition} (Open/Closed Walk) A walk in a graph is a closed walk if the first and last vertex in the walk are the same. Otherwise the walk is called an open walk. \end{definition} \begin{definition} (Walk Weight) The walk weight of a walk is the sum of all the weights of the edges traversed in the walk. Given a walk $(v_0,...,v_n)$, the walk weight is $\sum\limits_{i=0}^{n-1}W_{v_i,v_{i+1}}$. \end{definition} \begin{definition} (Trail) A trail is a walk for which no edge is repeated within the walk. 
\end{definition} \begin{definition} (Circuit) A circuit is a closed trail. \end{definition} \begin{definition} (Eulerian Circuit) An Eulerian circuit is a circuit which includes every edge in the graph. \end{definition} \begin{definition} (In/Out-Degree) A vertex, $v$, in a graph, $G$, has in-degree equal to the number of vertices adjacent to $v$, and out-degree equal to the number of vertices $v$ is adjacent to. In other words, the in-degree of the vertex $v$ is the number of edges which end in $v$, up to reordering of unordered tuples. The out-degree is similar in reverse. \end{definition} \begin{definition} (Degree) A vertex, $v$, in an undirected graph, $G$, has degree equal to its in-degree and out-degree. \end{definition} \begin{definition} (Strongly Connected) A graph $G$ is said to be strongly connected if for every pair of vertices, $a,b\in V$, there exists a walk in $G$ from vertex $a$ to vertex $b$. \end{definition} \begin{definition} (Partially Ordered Set) A pair ($X$,$\leq$), where $X$ is a set and the relation $\leq$ is given by a subset $S\subset X\times X$ (of the Cartesian product) via $x\leq y$ for $x,y\in X$ if and only if $(x,y)\in S$, is called a partially ordered set if the following hold: \begin{enumerate} \item $x\leq x$ for all $x\in X$\\ \item $x\leq y$ and $y\leq x \implies x = y$\\ \item $x\leq y$ and $y\leq z \implies x\leq z$ \end{enumerate} \end{definition} \begin{definition} (Perfect Pairing) Let $S$ be a finite set with an even number of elements. Then a perfect pairing of $S$ is a collection of subsets of $S$, $A_i$, such that: \begin{enumerate} \item $\vert A_i\vert = 2$ for all $i$\\ \item $A_i\cap A_j = \emptyset$ for all $i\neq j$\\ \item $\bigcup A_i = S$ \end{enumerate} \end{definition} We shall assume henceforth that all our graphs are strongly connected. \section{Methods} \label{methods} \subsection{Variants and Applications} \label{variants} The CPP is a general term for a wide variety of routing problems. 
Each variant of the CPP is often created to optimize a specific application~\cite{Thimbleby,Comaklisokmen}. \begin{variant} \label{undirected variant} (Undirected CPP) Given an undirected graph, $G$, find a walk in $G$ which traverses every edge in $G$ with the minimal walk weight. \end{variant} \begin{application} (Neighborhood Pothole Inspection) Imagine one wished to survey the road conditions in a large neighborhood with bidirectional roads. One could represent the neighborhood as an undirected graph with intersections as vertices and the roads as edges and solve the Undirected CPP. \end{application} \begin{variant} \label{directed variant} (Directed CPP) Given a directed graph, $G$, find a walk in $G$ which traverses every edge in $G$ with the minimal walk weight. \end{variant} \begin{application} (Downtown Pothole Inspection) Imagine one wished to survey the road conditions of a city's downtown containing only one-way streets. One could represent the downtown area as a directed graph with the intersections as vertices and the roads as edges and solve the Directed CPP. \end{application} \begin{variant} \label{mixed variant} (Mixed CPP) Given a mixed graph, $G$, find a walk in $G$ which traverses every edge in $G$ with the minimal walk weight. \end{variant} \begin{application} (Town Pothole Inspection) Imagine one wished to survey the road conditions of an entire town which contained one-way streets, two-way streets (e.g. a highway with lanes), and bidirectional streets (e.g. a residential street with no lanes). One could represent the town as a mixed graph with the intersections as vertices, the one/two-way streets as one/two directed edges, and the bidirectional streets as undirected edges, and solve the Mixed CPP. \end{application} The above variants determine what types of graphs need to be considered for the problem, which can drastically affect the computational complexity of the problem. 
Both the undirected and the directed variants are solvable classically in polynomial time, while the mixed variant is NP-Hard~\cite{Comaklisokmen}. For any CPP one will need to choose a type of graph to work over as well as where the postman will need to start and/or stop their route. When solving the CPP classically, the start/stop choice will change what algorithm is needed~\cite{Thimbleby}. \begin{variant} \label{closed variant} (Closed CPP) Given a graph, $G$, find a walk in $G$ which traverses every edge in $G$ such that the start and stop are on the same vertex, with a minimal walk weight for such a walk. \end{variant} \begin{application} (Tunnel Inspections) Imagine a mine operator wishes to inspect the integrity of the tunnels in her mining operation. The mine has only one entry/exit for its vast and extensive network of tunnels. One could represent the tunnels as edges and the tunnel junctions as vertices and solve the Closed CPP. \end{application} \begin{variant} \label{open variant} (Open CPP) Given a graph, $G$, find a walk in $G$ which traverses every edge in $G$ with the minimal walk weight. \end{variant} \begin{application} (Museum Cleaning Robot) Imagine a museum wishes to clean their floors using a robot and has two docking stations for the robot to start and stop at. The museum wishes to know where to place these docking stations as well as what walk the robot should take so as to clean the museum efficiently. One could represent the rooms as vertices and the hallways between them as edges and solve the Open CPP. \end{application} \begin{variant} \label{endpoint variant} (Open with Endpoints CPP) Given a graph, $G$, and a starting vertex, $v_1$, and/or a stopping vertex, $v_2$, find a walk in $G$ which traverses every edge in $G$ with the minimal walk weight for such walks. This walk starts at $v_1$ if given and ends at $v_2$ if given. 
\end{variant} \begin{application} (Botanical Garden Picnic) Imagine you wish to see every part of the Botanical Garden as efficiently as possible. You may wish to just start at the beginning and finish at the exit, or you may wish to start at the beginning and finish somewhere in the garden (you don't care where) so as to enjoy a picnic. One could represent the paths as edges and the path junctions as vertices and solve the Open with Endpoints CPP. \end{application} Now that all of the required variant choices have been laid out, we will introduce some optional variants to modify the CPP. Inclusion of these variants allows the CPP to be applied in a much wider array of applications. Many of the variants can be applied in conjunction with one another so as to be applicable in an even broader set of use cases. \begin{variant} \label{rural variant} (Rural Postman Problem) Given a graph, $G$, and a subset, $R$, of the edges of $G$, find a walk in $G$ which traverses every edge in $R$ with the minimal walk weight. \end{variant} \begin{application} (Traveling Salesman) Imagine you were a traveling salesman who wished to sell your wares in every capital city in every state of the United States of America. One could consider the graph defined with every capital city as a vertex and with an edge (weighted by cost to travel) between every pair of capital cities. Then one could modify this graph by replacing each vertex with two vertices (each with all of the original vertex's edges) with an edge of weight zero connecting them. Considering only the added edges of weight $0$ as our $R$, one could solve the Rural Postman Problem. \end{application} \begin{variant} \label{windy variant} (Windy Postman Problem) Given a graph, $G$, where $W_{a,b}$ may not equal $W_{b,a}$ for undirected edges, as in Definition \ref{definition: graph}, find a walk in $G$ which traverses every edge of $G$ with minimal walk weight. 
\end{variant} \begin{application} (Injured Hiker) Imagine a rescue team is trying to find an injured hiker on the trails in a mountain range. One could represent the trails as undirected edges and the trail junctions as vertices. One could then account for the differences in difficulties of going uphill versus downhill by assigning different directional weights to the undirected edges and solve the Windy Postman Problem. \end{application} \begin{variant} \label{k variant} ($k$-Postman Problem With Capacity) Given a graph, $G$, $k\in\mathbb{N}$, and $(c_1,...,c_k)\in(\mathbb{R}\cup\{\infty\})^k$, find $k$ walks in $G$ such that each edge in $G$ is covered by at least one of the $k$ walks, the $i$\textsuperscript{th} walk weight is less than or equal to $c_i$ for $i\in\{1,...,k\}$, and the sum of walk weights is minimized. \end{variant} \begin{application} (Postal Service) Imagine you were in charge of your local area postal service and had $10$ postal agents/vehicles in your employ. You need to deliver mail to every street in your region in an efficient way, but no postal worker may work more than an 8-hour workday by law. One could represent every street as an edge and every street intersection as a vertex and solve the $k$-Postman Problem With Capacity. \end{application} \begin{variant} \label{service variant} (Service-Based Traversal Postman Problem) Given a graph, $G$, modify the graph so as to create a duplicate of each edge, without duplicating any vertices. The duplicated edges may have a different weight. Note that each pair of vertices which had only one edge now has two edges between them. All the original edges will be called servicing edges, while all the added edges will be called traversal edges. One should then solve the Rural Postman Problem on all the servicing edges. \end{variant} \begin{application} (Pipe Repairman) Imagine you were a pipe repairman and you had an extensive network of pipes to repair. 
It takes you 1 hour to repair $10$ meters of pipe and $10$ minutes to pull your pipe fixing supplies that same distance. One could represent the pipes as edges and the pipe-splitting junctions as vertices and solve the Service-Based Traversal Postman Problem. \end{application} \begin{variant} \label{turning variant} (Turning Challenge Postman Problem) Given a graph, $G$, and a collection of $3-$tuples in the form (edge-in, edge-out, bonus weight), we shall, with regard to the collection of $3-$tuples, sum the corresponding bonus weights for each instance where edge-in is followed by edge-out in the walk. We will call this sum the extra weight. Find a walk which traverses every edge in $G$ where the sum of the walk weight and extra weight is minimized. \end{variant} \begin{application} (Street Cleaner) Imagine your job was to clean the streets in a North American city where there are lights and stop-signs. In general, it is faster to go right or straight than it is to make a left turn or a u-turn. One could form a collection of $3-$tuples by, for each road, $r$, making $3-$tuples of the form $(r,s,w)$ for each road, $s$, which could follow $r$, with $w$ being the added time it takes to make such a turn. Then one could represent each road as an edge and each intersection as a vertex and solve the Turning Challenge Postman Problem. \end{application} \begin{variant} \label{hierachy variant} (Hierarchical Postman Problem) Given a graph, $G$, for which the edges, $E$, have a partial ordering, find a Service-Based Traversal Postman Problem solution constrained by the edges needing to be serviced in an order congruous to the partial order. \end{variant} \begin{application} (Forgotten Packages) Imagine you were a delivery person in a town and yesterday several packages were delivered to the wrong address. Now you must deliver today's packages as well as pick up and redeliver the misdelivered packages. One could represent roads as edges and the intersections as nodes. 
Then one could place a partial ordering on the edges so that a street with a package which was misdelivered yesterday must come before the street with the intended destination of the package. One could then solve the Hierarchical Postman Problem. \end{application} \subsection{Foundations for Solving a Problem on a Quantum Annealing Device} \label{QUBO foundation} At their core, QAs are machines which are meant to solve one kind of problem extremely well. Fortunately, that problem is NP-Complete~\cite{Lewis} and many useful problems may be efficiently converted into instances of it. Ising model and QUBO formulations of the aforementioned problems can be solved on a QA. We will frame the CPP problem as a Polynomial Version QUBO (see Definition \ref{Polynomial Version QUBO} below). \begin{definition} (Matrix Version: QUBO Problem) Given $Q\in M_n(\mathbb{R})$, find $$\min\{\vec{x}^\top Q\vec{x}\}$$ constrained by $$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \end{pmatrix} \text{ with } x_i\in\{0,1\}\text{ for all } i.$$ \end{definition} \begin{definition} \label{Polynomial Version QUBO} (Polynomial Version: QUBO Problem) Given $q_{ij}\in\mathbb{R}$ for $i,j\in\{1,...,n\}$, find $$\min\{\sum\limits_{i = 1}^n\sum\limits_{j = 1}^n q_{ij}x_ix_j\}$$ constrained by $x_i\in \{0,1\}\text{ for all }i$. \end{definition} Once one realizes that any binary variable $x\in\{0,1\}$ has the property $x^2 = x$~\cite{Glover}, the equivalence of the two versions becomes clear upon inspection. Now that we have defined what a QUBO is, we outline the individual steps involved in solving a problem with a QUBO formulation on a QA (see Figure \ref{workflow diagram}). 
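The equivalence noted above can be checked by brute force on a small instance. The sketch below is our own illustrative code, not the paper's implementation (all function names are ours): it folds linear terms onto the diagonal of an upper-triangular $Q$, and applies the substitution $s = 2x - 1$ to recover an Ising model whose energies agree with the QUBO up to a constant offset.

```python
import itertools

def qubo_energy(Q, x):
    """Polynomial-form energy: sum_i Q[i][i] x_i + sum_{i<j} Q[i][j] x_i x_j."""
    n = len(Q)
    e = sum(Q[i][i] * x[i] for i in range(n))
    e += sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n))
    return e

def matrix_energy(Q, x):
    """Matrix-form energy x^T Q x for an upper-triangular Q; linear terms sit on
    the diagonal, which is valid because x_i^2 = x_i for binary variables."""
    n = len(Q)
    return sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))

def qubo_to_ising(Q):
    """Substitute x = (s + 1)/2 to obtain biases h, couplings J, and a constant offset."""
    n = len(Q)
    h = [Q[i][i] / 2 for i in range(n)]
    J = {}
    offset = sum(Q[i][i] / 2 for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            J[(i, j)] = Q[i][j] / 4
            h[i] += Q[i][j] / 4
            h[j] += Q[i][j] / 4
            offset += Q[i][j] / 4
    return h, J, offset

def ising_energy(h, J, s):
    return sum(h[i] * s[i] for i in range(len(h))) + \
        sum(J[ij] * s[ij[0]] * s[ij[1]] for ij in J)

# A small upper-triangular QUBO; the weights are arbitrary illustrative values.
Q = [[1.0, -2.0, 0.5],
     [0.0, 3.0, -1.0],
     [0.0, 0.0, -0.5]]
h, J, offset = qubo_to_ising(Q)
for x in itertools.product([0, 1], repeat=3):
    s = [2 * xi - 1 for xi in x]
    assert abs(qubo_energy(Q, x) - matrix_energy(Q, x)) < 1e-12
    assert abs(qubo_energy(Q, x) - (ising_energy(h, J, s) + offset)) < 1e-12
```

On D-Wave's software stack such conversions are handled by the tooling; the point here is only that the matrix, polynomial, and Ising forms encode the same optimization problem.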
\tikzstyle{decision} = [diamond, draw, fill=blue!20, text width=6em, text badly centered, node distance=3cm, inner sep=0pt] \tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=5em, text centered, rounded corners, minimum height=4em] \tikzstyle{line} = [draw, -latex'] \tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm, minimum height=2em] \tikzset{every picture/.style={line width=0.75pt}} \begin{figure} \caption{Quantum Annealing Workflow. The specifics of the green and red portions of the diagram are dependent on the problem one is trying to solve. The green portion is the classical work to prepare the input to the QA, while the red portion is the classical work to prepare the output of the QA. The purple block can be done via quantum annealing, simulated annealing, or a quantum-classical approach like \emph{qbsolv}~\cite{Booth}. The orange block is where one needs to tune the problem's input parameters. } \label{workflow diagram} \end{figure} \subsection{Closed Undirected CPP} \label{Undirected Explanation} According to a literature review completed by the authors, the first and only previous work done on the subject of using a QA to solve the CPP in any form was done by Siloi et al. in ``Investigating the Chinese Postman Problem on a Quantum Annealer''~\cite{Siloi}, which solved the Closed Undirected CPP. Below we outline our implementation for solving the Closed Undirected CPP in Algorithm \ref{CUCPPAlgorithm}. That this algorithm works is a consequence of the following two theorems \cite{graphtextbook}. 
\begin{algorithm} \caption{Solving the Closed Undirected CPP on a QA} \label{CUCPPAlgorithm} \begin{algorithmic}[1] \Procedure{Routing}{$G$} \Comment{$G$ is an undirected graph} \State Find all nodes of odd degree in $G$ \State Create QUBO for $G$, QUBO($G$) \State Run Quantum Annealing Process on QUBO($G$) \Comment{Do $N$ times} \State Identify lowest energy solution, $S$ \State Interpret $S$ as perfect pairing amongst odd degree nodes of $G$ \If{$S$ is not a valid solution} \State Modify QUBO \State GOTO Line 4 \EndIf \State Create new graph $G'$ by adding perfect pairing edges to $G$ \State Find Eulerian Circuit in $G'$, $E'$ \State Replace added edges in $E'$ with corresponding path to produce path $E$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{theorem} (Fundamental Theorem of Graph Theory) Given an undirected graph, $G$, the sum of the degrees of every vertex in $G$ is even and is twice the number of edges. \end{theorem} \begin{theorem} (Euler Circuit Criterion) Given an undirected graph, $G$, $G$ contains an Eulerian circuit if and only if every vertex in $G$ is of even degree. \end{theorem} \begin{figure} \caption{Undirected CPP algorithm example. The first graph introduces an example of an undirected weighted graph. Below it is a solution to the Undirected CPP for that graph presented in vertex order notation. The way to read this solution is to start at the first vertex in the list and then traverse the edge which connects to the next vertex in the list. One should continue this process until one reaches the end of the list. The second graph is the same as the first, except one extra edge was added with the weight of the shortest path between those two vertices. 
That edge was added because it makes every vertex have even degree and has the smallest walk weight between the two vertices.} \label{SiloiExample} \end{figure} Upon inspection it should be obvious that if an Eulerian circuit exists in the graph, then that Eulerian circuit is an optimal solution. Additionally, if an Eulerian circuit exists, it can be found in polynomial time \cite{EulerCurcuit}. One good implementation of the algorithm for finding an Eulerian Circuit is in \emph{NetworkX} \cite{networkx}. By the Fundamental Theorem of Graph Theory we conclude that there must be an even number of vertices in $G$ of odd degree. For any closed walk in $G$, considering the walk as its own graph (possibly a multi-graph), we know the in-degree must equal the out-degree for each vertex in the walk, and hence in $G$. This leads us to realize that for every vertex of odd degree, we must reuse one of its edges in any Closed Undirected CPP solution. Let us say we have a solution to the Closed Undirected CPP, and that we are at the first point in our walk where we exit a vertex of odd degree. As there are an even number of vertices of odd degree in $G$, there must be at least one other vertex of odd degree. As our walk traverses every edge in $G$, we must at some point enter another vertex of odd degree; take the first such instance after exiting our previous odd degree node. One could replace the path taken between those two vertices with an edge whose weight is equal to that of the path's walk weight. Then in this modified graph, the two vertices whose degree was odd would now be even. We then repeat this process until all the vertices have even degree. Now we conclude that finding a solution to the Closed Undirected CPP is reduced to finding a perfect pairing of the vertices of odd degree which adds the least amount of weight. It is this problem that we formulate as a QUBO and solve using a QA. 
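The classical preprocessing behind this reduction, collecting the odd-degree vertices and the shortest-path weights between them, might be sketched as follows (a stdlib-only illustration using Floyd--Warshall; in practice a library such as \emph{NetworkX}~\cite{networkx} provides both):

```python
def odd_degree_vertices(edges):
    """Vertices of odd degree in an undirected weighted graph,
    given as a list of (u, v, weight) triples."""
    degree = {}
    for u, v, _ in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    return sorted(v for v, d in degree.items() if d % 2 == 1)

def shortest_path_weights(edges):
    """All-pairs shortest walk weights via Floyd-Warshall, O(|V|^3)."""
    nodes = sorted({n for u, v, _ in edges for n in (u, v)})
    INF = float("inf")
    dist = {(a, b): (0 if a == b else INF) for a in nodes for b in nodes}
    for u, v, w in edges:  # undirected: record both directions
        dist[u, v] = min(dist[u, v], w)
        dist[v, u] = min(dist[v, u], w)
    for k in nodes:
        for a in nodes:
            for b in nodes:
                if dist[a, k] + dist[k, b] < dist[a, b]:
                    dist[a, b] = dist[a, k] + dist[k, b]
    return dist
```

By the Fundamental Theorem of Graph Theory, \texttt{odd\_degree\_vertices} always returns an even number of vertices, so a perfect pairing among them exists.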
Let $G$ be an undirected graph with vertices of odd degree, $\{v_1,...,v_d\}$. Let $x_{i,j}$ be a binary variable for $i,j\in\{1,...,d\}$ with $i<j$. Note we will use $x_{i,j}$ and $x_{j,i}$ to represent the same binary variable. The variable $x_{i,j}$ with value one or zero represents, respectively, vertices $v_i, v_j$ being paired together or not paired together. The symbol $W_{i,j}$ will be a constant representing the weight of the shortest path between $v_i$ and $v_j$. The symbol $P$ will be a positive constant whose use will be described shortly. The QUBO problem is then defined as: $$\min\{(\sum\limits_{i = 1}^{d-1}\sum\limits_{j = i+1}^d W_{i,j}x_{i,j}) + P(\sum\limits_{i=1}^{d}(1-\sum\limits_{\substack{j = 1 \\ j\neq i}}^dx_{i,j})^2)\}.$$ To understand this QUBO, let us examine it term by term. First consider: $$\sum\limits_{i=1}^{d}(1-\sum\limits_{\substack{j = 1 \\ j\neq i}}^dx_{i,j})^2.$$ We shall label this part of the equation as $C$, for constraint. This part of the equation makes sure a perfect pairing is formed. If $C$ equals zero, we may interpret this as: ``for every $i$, the vertex $v_i$ is paired with exactly one vertex $v_j$ with $j\neq i$.'' Now we consider: $$\sum\limits_{i = 1}^{d-1}\sum\limits_{j = i+1}^d W_{i,j}x_{i,j}.$$ We shall label this part of the equation $M$, for minimize. Let us assume that $C = 0$ and thus we have a perfect pairing. Then the minimum value of $M$, and hence of the whole QUBO, will correspond to the perfect pairing amongst the odd degree vertices which adds the least total weight. In other words, the outputs of our binary variables, $x_{i,j}$, which correspond to the minimal value of the QUBO, will in turn correspond to a solution of the Closed Undirected CPP. \textit{Note also the number of variables in this QUBO is easy to compute as it is just ${d \choose 2}$, where once again, $d$ is the number of vertices of odd degree; also, the variables in the QUBO are not fully connected}. 
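To illustrate, the coefficients of this pairing QUBO can be assembled programmatically. The helper below is a hypothetical sketch of ours (not the authors' implementation): it expands $M + P\cdot C$ using $x^2 = x$, encoding linear terms on the diagonal and dropping the constant offset $P\cdot d$:

```python
from itertools import combinations, product

def pairing_qubo(W, P):
    """Hypothetical helper (our naming): QUBO coefficients for the
    minimum-weight perfect pairing of odd-degree vertices.  W maps
    frozenset({i, j}) -> shortest-path weight W_{i,j} for every pair of
    odd vertices; each pair is one binary variable.  Returns a dict
    (var_a, var_b) -> coefficient, where var_a == var_b encodes a linear
    term (valid since x^2 == x).  The constant offset P*d is dropped."""
    vertices = sorted({v for pair in W for v in pair})
    variables = [frozenset(p) for p in combinations(vertices, 2)]
    Q = {}
    # M: the weight of each chosen pairing edge.
    for var in variables:
        Q[var, var] = Q.get((var, var), 0) + W[var]
    # P*C expanded with x^2 == x: each vertex contributes -P per incident
    # variable, and +2P for every pair of variables sharing that vertex.
    for i in vertices:
        touching = [v for v in variables if i in v]
        for v in touching:
            Q[v, v] = Q.get((v, v), 0) - P
        for a, b in combinations(touching, 2):
            Q[a, b] = Q.get((a, b), 0) + 2 * P
    return Q

def brute_force_min(Q):
    """Exhaustive QUBO minimum (exponential; tiny demos only)."""
    variables = sorted({v for key in Q for v in key}, key=sorted)
    return min(
        sum(c * assign[a] * assign[b] for (a, b), c in Q.items())
        for bits in product((0, 1), repeat=len(variables))
        for assign in [dict(zip(variables, bits))]
    )
```

Under this construction, the minimal QUBO energy plus the dropped constant $P\cdot d$ equals the weight added by the best perfect pairing, and the minimizer sets exactly the variables of that pairing to one.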
Now we shall talk about the use and importance of the constant $P$. As we get a perfect pairing if and only if $C = 0$, we must make sure that the QUBO will be, in net, penalized for breaking this condition. Consider for a moment a new graph with two vertices and one edge between them with $W_{1,2} > 2$, and suppose we set $P = 1$. If we set $x_{1,2} = 0$, we won't get a perfect pairing, but we will minimize the QUBO overall. This is because switching $x_{1,2}$ from $1$ to $0$ increases $C$ by $2$ while decreasing $M$ by more than $2$. Thus we see the need for the constant $P$: it increases the cost incurred by breaking the condition $C = 0$. In theory, $P$ should be set equal to some arbitrarily large number as we require that $C = 0$. This does not work in practice because when setting up for the annealing process, all binary variable constants are scaled to fit into an interval. Due to a limit on the sensitivity of the method, values sufficiently close to zero will be treated as zero. In practice the value that should be chosen for $P$ will depend on the graph itself and should be large enough to force $C = 0$, but small enough to not overshadow the $M$ component. As a quick example, the QUBO one would get from the graph in Figure \ref{SiloiExample} would simply be $$9x_{3,5} + P(1-x_{3,5})^2,$$ which if we let $P = 10$, for example, could be written as $$10(x_{3,5})^2 - 11x_{3,5} + 10.$$ Upon inspection one can see that the value of $x_{3,5}$ which minimizes this equation is $x_{3,5} = 1$ for a total value of $9$, as expected. \subsection{A General Approach to the Chinese Postman Problem}\label{BGC} We now outline an algorithm for solving a more general class of CPPs on a QA. We show how to solve variants \ref{undirected variant} through \ref{windy variant} from section \ref{variants} and discuss results for those variants in section \ref{results}. Algorithm \ref{GeneralAlgorithm} is an outline of the general algorithm. 
\begin{algorithm}[h!] \caption{Quantum Annealing for a CPP} \label{GeneralAlgorithm} \begin{algorithmic}[1] \Procedure{Routing}{$G$} \State Collect Graph Data \State Choose maximum length of walk \State Create QUBO for $G$, QUBO($G$) \State Run Quantum Annealing Process on QUBO($G$) \Comment{Do 10-10000 times} \State Identify lowest energy solution, $S$ \State Interpret $S$ as walk in $G$ \If{$S$ is not a valid solution} \State Modify QUBO \State GOTO Line 5 \EndIf \EndProcedure \end{algorithmic} \end{algorithm} Now we shall explain the construction of the QUBO. Let $G$ be a graph, directed, undirected or mixed. Let $V$ be the set of vertices in $G$. Let $i_{max}\in\mathbb{N}$. The constant $i_{max}$ represents the maximum number of edges we will allow to be traversed in the walk and we will discuss how to choose $i_{max}$ later. Our binary variables will be ${}_ie_{j,k}$ and ${}_{2^r}s_{j,k}$ for $j,k\in V$, such that there is an edge going from vertex $j$ to vertex $k$, $i\in\{0,...,i_{max}\}$, and ${r\in\{0,...,ceiling(\log(i_{max}))\}}$. Let $W_{j,k}$ be the weight of the edge going from vertex $j$ to vertex $k$. The mental picture we will use to guide our thinking is that of choosing the steps in our path in an ordered manner. The variable ${}_ie_{j,k}$ taking the value of one in the solution to the QUBO will correspond to an instruction to traverse the edge going from vertex $j$ to vertex $k$ in the $i^{\text{th}}$ step of the walk. The variables, ${}_{2^r}s_{j,k}$, are slack variables, which are helpful in setting up some of our inequality conditions. Recall that we are presetting the number of steps we will take in the walk in our graph. As we do not know a priori how many steps we need for our walk, let us assume we overestimate. To compensate for this overestimation, we will allow repetition in the steps we take in our walk so as to allow a shorter walk than the one predetermined in our set-up. 
For example, ${}_ie_{j,k}$ and ${}_{i+1}e_{j,k}$ will be allowed to simultaneously evaluate to 1. Using this set-up, let us talk about what conditions and constraints need to be met to create a legal path. First, at any given step in our walk, the walk should traverse precisely one edge. This can be phrased as: for all $i$, there is a unique pair, $(j,k)$, such that ${}_ie_{j,k} = 1$. Written as a constraint for a QUBO this is, $$C_{\text{one\_edge}} = \sum\limits_{i}(1-\sum\limits_{j,k}{}_ie_{j,k})^2.$$ The next constraint is that we do not want our walk to `jump' around in our graph; we only want graph walks. To do this we will require that if ${}_ie_{j,k}$ and ${}_{i+1}e_{r,s}$ are both one, then either $k=r$, so the walk is legal at this location, or $j=r$ and $k=s$, giving a repetition edge of the kind explained above. Written as a constraint for a QUBO this is, $$C_{\text{adjacency}} = \sum\limits_i\sum\limits_{j,k}\sum\limits_{\substack{r,s \\ r\neq k \\ \text{and} \\ r \neq j \text{ or } s \neq k}}{}_ie_{j,k}\cdot {}_{(i+1)}e_{r,s}.$$ The next constraint is that we want to make sure that every edge in the graph which is required to be included in the walk is included. This will be phrased as: for any directed edge from vertex $j$ to vertex $k$ which should be included in the walk, there exists at least one $i$ such that ${}_ie_{j,k} = 1$. And for any undirected edge between vertex $j$ and vertex $k$, there exists at least one $i$ such that ${}_ie_{j,k} = 1$ or ${}_ie_{k,j} = 1$. It is for this kind of inequality that we need the slack variables. 
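The role these slack variables play can be seen in isolation. In this toy sketch (our own naming), the penalty for one required edge is $(1 - t + s)^2$, where $t$ counts how many steps traverse the edge and the slack $s$ is encoded in binary; since the annealer is free to choose the slack bits, the penalty can reach zero exactly when the edge is traversed at least once:

```python
from itertools import product

def required_edge_penalty(times_traversed, slack_bits):
    """Penalty (1 - t + s)^2 for one required edge, where t is the number
    of steps traversing the edge and s = sum_r 2^r * slack_bits[r]."""
    s = sum((2 ** r) * bit for r, bit in enumerate(slack_bits))
    return (1 - times_traversed + s) ** 2

def best_penalty(times_traversed, n_bits):
    """Minimum penalty over all slack settings -- what the annealer finds."""
    return min(required_edge_penalty(times_traversed, bits)
               for bits in product((0, 1), repeat=n_bits))
```

With $ceiling(\log(i_{max}))$ slack bits the slack can absorb any over-traversal up to $i_{max}$, so the minimum penalty is zero for one or more traversals and strictly positive for zero traversals.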
Written as a constraint for a QUBO this constraint is: \begin{align*} C_{\text{required\_directed}} =& \sum\limits_{(j,k)}((1-\sum\limits_i ({}_ie_{j,k})) + \sum\limits_r 2^r{}_{2^r}s_{j,k})^2,\\ C_{\text{required\_undirected}} =& \sum\limits_{[j,k]}((1-\sum\limits_i ({}_ie_{j,k} + {}_ie_{k,j})) + \sum\limits_r 2^r{}_{2^r}s_{j,k})^2, \text{and}\\ C_{\text{required}} =& ~C_{\text{required\_directed}} + C_{\text{required\_undirected}}. \end{align*} The next constraint which may occur is a required start and/or stop location. Unlike the constraints above, which increase the required connectivity between variables in the QUBO and thereby make it more difficult to embed on the hardware, this constraint can be used to decrease the number of variables needed and also to decrease the connectivity between variables in the QUBO. We can do this by not creating unneeded variables, which means we would also omit those variables from the constraints above. If a start location is required then we shall not include variables ${}_ie_{j,k}$ for which the edge $(j,k)$ cannot be reached in $i$ steps, accounting for the type of repetition we spoke of previously. This is done similarly at the end of the walk if an end location is specified. Let us consider as an example the first graph in Figure \ref{SiloiExample} and let us specify the starting vertex to be vertex $3$. Then for the variables, ${}_ie_{j,k}$, for which $i = 0$, the first step, we only have ${}_0e_{3,2}$. For $i = 1$ we only have ${}_1e_{3,2}$, ${}_1e_{2,1}$, ${}_1e_{2,3}$, ${}_1e_{2,4}$, and ${}_1e_{2,5}$. In this way we forcibly achieve any start or stop constraint. Let us take a brief pause from talking about constraints to introduce the part of the QUBO which will lead us to picking the best walk amongst all the legal walks which meet our requirements. The constraints above will all be zero if the walk chosen is legal and meets the preset requirements. 
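As a sanity check on the two structural constraints, the following hypothetical scorer (our own naming) evaluates $C_{\text{one\_edge}}$ and $C_{\text{adjacency}}$ for a candidate assignment, where \texttt{steps[i]} is the set of edges $(j,k)$ whose variable ${}_ie_{j,k}$ equals one; both scores are zero precisely for a legal walk with only allowed repetitions:

```python
def one_edge_penalty(steps):
    """C_one_edge: zero iff exactly one edge variable is on per step."""
    return sum((1 - len(chosen)) ** 2 for chosen in steps)

def adjacency_penalty(steps):
    """C_adjacency: penalize consecutive pairs (j,k) -> (r,s) unless
    r == k (a legal continuation) or (r,s) == (j,k) (allowed repetition)."""
    total = 0
    for now, nxt in zip(steps, steps[1:]):
        for (j, k) in now:
            for (r, s) in nxt:
                if r != k and (r, s) != (j, k):
                    total += 1
    return total
```

For instance, a walk that repeats an edge on consecutive steps incurs no adjacency penalty, while a walk that `jumps' between non-adjacent edges does.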
The following part, though, apart from some trivial cases, will be non-zero; it is the `meat' of what we are trying to minimize. The essence of this part of the QUBO is that we want to add up all the weights of all the edges we traverse while not double counting weights when an edge is repeated. This is done as follows, $$M = \sum\limits_{i>0}\sum\limits_{j,k}W_{j,k}({}_ie_{j,k}\cdot (1 - {}_{(i-1)}e_{j,k})) + \sum\limits_{j,k}W_{j,k}({}_0e_{j,k}).$$ Before we proceed further, it is useful to note that we have produced enough constraints to handle all combinations of variants \ref{undirected variant} through \ref{windy variant} for the CPP using the QUBO, $$Q = M + P_{\text{required}}\cdot C_{\text{required}} + P_{\text{adjacency}}\cdot C_{\text{adjacency}} + P_{\text{one\_edge}}\cdot C_{\text{one\_edge}},$$ where the $P$ coefficients are positive real numbers which are used to scale the weight of the constraints. How to choose those values will be discussed later. There is an alternative optimization to the start/stop optimization which will now be described in brief. The modifications to the QUBO related to this change are similar to those at the end of section \ref{even more general}. Rather than using the edge repetition method used to handle $i_{\text{max}}$ over-estimations, one can instead use a terminal vertex method. The idea of the terminal vertex method is to add one additional vertex, the terminal vertex, to the graph which will represent the end of the walk. One then needs to add the appropriate edges. If a variable corresponding to an edge leading to the terminal vertex at step $i$ in the walk takes the value $1$ in the QUBO solution, this is interpreted as the walk ending at step $i$. The edges one should add are a directed edge from any vertex the walk is allowed to end at to the terminal vertex and a directed edge from the terminal vertex to itself. 
The terminal vertex edges should only be included in the above QUBO at time steps greater than or equal to the cardinality of the set of required edges. The terminal vertex method allows us to do two things. One, we may remove the edge repetition from $M$, decreasing the connectivity between variables in the QUBO, and two, we may remove the edge repetition from determining which edges are possible to reach in the $i^{\text{th}}$ step in the start/stop optimization above, which decreases the number of variables needed. However, one must account for the increase in variables and variable connectivity induced by including the terminal vertex itself. Which of the two optimizations does better depends on the graph topology. The authors have implemented both methods and choose for each problem whichever uses the fewest variables. Recall that the above QUBO was implemented with results shown in Section \ref{results}. \subsection{Expanding the General Approach to the CPP} \label{even more general} The following variants, variants \ref{k variant} through \ref{hierachy variant}, have not yet been implemented on a quantum annealing device. We provide QUBO equations for implementing these variants and discuss why the equations are valid. To include variant \ref{turning variant} we need only add one additional constraint. One may recall that variant \ref{turning variant} includes additional information in the form of 3-tuples, (edge-in, edge-out, bonus weight). Similar to the ${}_ie_{j,k}$ variables, let us write the 3-tuple as $((j,k),(k,r),x_{j,k,r})$ where $j,k,r\in V$ such that $(j,k), (k,r)$ are edges in the graph. 
Then the QUBO constraint can be written as, $$C_{\text{turn}} = \sum\limits_{i,j,k,r}x_{j,k,r}({}_ie_{j,k}\cdot{}_{(i+1)}e_{k,r}),$$ which, in conjunction with what we had before, creates the QUBO, $$Q + P_{\text{turn}}\cdot C_{\text{turn}}.$$ Observe that the turning constraint does not add any more variables, but does increase the connectivity between variables in the QUBO. Adding variants \ref{k variant} and \ref{service variant} requires additional modifications to the QUBO construction above. Let us start with variant \ref{service variant}, service-based traversal. We shall replace the variables ${}_ie_{j,k}$ with ${}_ie^s_{j,k}$ and ${}_ie^t_{j,k}$. The ${}_ie^s_{j,k}$ variable equaling 1 corresponds to servicing the edge going from vertex $j$ to vertex $k$ on the $i^{\text{th}}$ step of the walk. Setting ${}_ie^t_{j,k}$ = 1 will correspond to merely traversing the edge going from vertex $j$ to vertex $k$ on the $i^{\text{th}}$ step of the walk. We will also not use the ${}_{2^r}s_{j,k}$ variables. 
The modifications to the constraints are as follows: \begin{align*} C_{\text{one\_edge}} =& \sum\limits_{i}(1-\sum\limits_{j,k}({}_ie^s_{j,k} + {}_ie^t_{j,k}))^2,\\ C_{\text{adjacency}} =& \sum\limits_i\sum\limits_{j,k}(\sum\limits_{\substack{r,s \\ r\neq k \\ \text{and} \\ r \neq j \text{ or } s \neq k}}({}_ie^t_{j,k}\cdot {}_{(i+1)}e^t_{r,s})\\ &+ \sum\limits_{\substack{r,s \\ r\neq k}}({}_ie^s_{j,k}\cdot {}_{(i+1)}e^t_{r,s} + {}_ie^s_{j,k}\cdot {}_{(i+1)}e^s_{r,s} + {}_ie^t_{j,k}\cdot {}_{(i+1)}e^s_{r,s})),\\ C_{\text{required\_directed}} =& \sum\limits_{(j,k)}(1-\sum\limits_i {}_ie^s_{j,k})^2,\\ C_{\text{required\_undirected}} =& \sum\limits_{[j,k]}(1-\sum\limits_i ({}_ie^s_{j,k} + {}_ie^s_{k,j}))^2,\\ C_{\text{required}} =& C_{\text{required\_directed}} + C_{\text{required\_undirected}}, \text{and}\\ C_{\text{turn}} =& \sum\limits_{i,j,k,r}x_{j,k,r}({}_ie^s_{j,k}\cdot{}_{(i+1)}e^s_{k,r} + {}_ie^s_{j,k}\cdot{}_{(i+1)}e^t_{k,r} \\&+ {}_ie^t_{j,k}\cdot{}_{(i+1)}e^s_{k,r} + {}_ie^t_{j,k}\cdot {}_{(i+1)}e^t_{k,r}). \end{align*} Let $W^s_{j,k}$ be the weight corresponding to servicing the edge going from vertex $j$ to vertex $k$, while $W^t_{j,k}$ is the weight corresponding to just traversing that edge. Then, $$M = \sum\limits_{i>0}\sum\limits_{j,k}(W^t_{j,k}\cdot {}_ie^t_{j,k}\cdot (1 - {}_{(i-1)}e^t_{j,k}) + W^s_{j,k}\cdot {}_ie^s_{j,k}) + \sum\limits_{j,k}(W^s_{j,k}\cdot{}_0e^s_{j,k} + W^t_{j,k}\cdot{}_0e^t_{j,k}).$$ One caveat to the service-traversal QUBO set-up is that repetition is only allowed on traversal steps, so if the optimal solution uses only service steps and no traversal steps, we cannot recover it unless we know a priori the precise number of steps the walk needs. This is easily avoided in practice, however, as a walk will use only service steps precisely when there is an Eulerian circuit on the subset of required edges, which, as stated before, is computationally easy to determine. 
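The bookkeeping in this objective $M$, service steps always counted, a traversal repeated on consecutive steps counted once, can be mimicked directly (a hypothetical sketch of ours; a step is an \texttt{(edge, kind)} pair with kind \texttt{'s'} for service or \texttt{'t'} for traversal):

```python
def service_walk_weight(steps, W_service, W_traverse):
    """Objective M for the service-based variant: sum the weights of all
    steps, but skip a traversal that repeats the previous step's traversal
    of the same edge (repetition encodes a walk shorter than i_max)."""
    total = 0
    prev = None
    for edge, kind in steps:
        if kind == 's':                # service steps always count
            total += W_service[edge]
        elif (edge, kind) != prev:     # a new traversal counts once
            total += W_traverse[edge]
        prev = (edge, kind)
    return total
```

A repeated service step would be counted twice, which is why repetition is only permitted on traversal steps in the formulation above.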
One benefit is that it is easy to make the service-based traversal hierarchical, like variant \ref{hierachy variant}, for any partially ordered set (i.e. some edges must be serviced prior to others) without introducing more variables. Solving variant \ref{hierachy variant} is achieved by adding on the following constraint. Let $x_{(j,k),(r,s)} = 1$ if the edge going from vertex $j$ to vertex $k$ must be serviced prior to the edge going from vertex $r$ to vertex $s$, and $0$ otherwise. Note $x_{(j,k),(r,s)}$ is a given value in the problem and not a variable the annealing device solves for. Let the other notation be similar. Recall $(*,*)$ is for directed edges while $[*,*]$ is for undirected edges. Then, \begin{align*} C_{\text{hierarchy}} =& \sum\limits_{i_0}\sum\limits_{i_1<i_0,j,k,r,s}(x_{(j,k),(r,s)}\cdot{}_{i_0}e^s_{j,k}\cdot{}_{i_1}e^s_{r,s})\\ &+ (x_{[j,k],(r,s)}\cdot({}_{i_0}e^s_{j,k}\cdot{}_{i_1}e^s_{r,s} + {}_{i_0}e^s_{k,j}\cdot{}_{i_1}e^s_{r,s}))\\ &+ (x_{(j,k),[r,s]}\cdot({}_{i_0}e^s_{j,k}\cdot{}_{i_1}e^s_{r,s} + {}_{i_0}e^s_{j,k}\cdot{}_{i_1}e^s_{s,r}))\\ &+ (x_{[j,k],[r,s]}\cdot({}_{i_0}e^s_{j,k}\cdot{}_{i_1}e^s_{r,s} + {}_{i_0}e^s_{j,k}\cdot{}_{i_1}e^s_{s,r}\\ &+ {}_{i_0}e^s_{k,j}\cdot{}_{i_1}e^s_{r,s} + {}_{i_0}e^s_{k,j}\cdot{}_{i_1}e^s_{s,r})). \end{align*} The final variant to talk about is variant \ref{k variant}, the $k$-Postman Problem. As we already use $k$, we shall suppose there are $l$ postmen. This variant provides us with the opportunity to introduce a slightly different idea. For variant \ref{k variant} we will need to use a slight modification of the terminal vertex paradigm. We shall use the binary variables ${}_ie^a_{j,k}$, ${}_{2^r}s^a_{j,k}$, and ${}_irest^{a}$ for $a\in\{1,...,l\}$, all else the same. The $a$ index will refer to the $a^{\text{th}}$ postman walk. 
So ${}_ie^a_{j,k} = 1$ will correspond to the $a^{\text{th}}$ postman traversing the edge going from vertex $j$ to vertex $k$ on the $i^{\text{th}}$ step of their walk. The ${}_{2^r}s^a_{j,k}$ binary variable will be used as a slack variable similarly to before. The variable ${}_irest^{a}$ equaling one will correspond to the $a^{\text{th}}$ postman resting at their walk's endpoint on the $i^{\text{th}}$ step; it plays the role of the terminal vertex from the terminal vertex method described in section \ref{BGC}. Let $W^a_{j,k}$ represent the weight corresponding to the $a^{\text{th}}$ postman traversing the edge going from vertex $j$ to vertex $k$. The variations to the QUBO constraints are listed below: \begin{align*} C_{\text{one\_edge}} =& \sum\limits_{i,a}(1-\sum\limits_{j,k}({}_ie^a_{j,k}) - {}_irest^a)^2,\\ C_{\text{adjacency}} =& \sum\limits_{i,a}\sum\limits_{j,k}((\sum\limits_{\substack{r,s \\ r\neq k}}({}_ie^a_{j,k}\cdot {}_{(i+1)}e^a_{r,s})) + ({}_irest^a\cdot {}_{(i+1)}e^a_{j,k})),\\ C_{\text{required\_directed}} =& \sum\limits_{(j,k)}(1-\sum\limits_a(\sum\limits_i ({}_ie^a_{j,k}) - \sum\limits_r 2^r{}_{2^r}s^a_{j,k}))^2,\\ C_{\text{required\_undirected}} =& \sum\limits_{[j,k]}(1-\sum\limits_a(\sum\limits_i ({}_ie^a_{j,k} + {}_ie^a_{k,j}) - \sum\limits_r 2^r{}_{2^r}s^a_{j,k}))^2,\\ C_{\text{required}} =& C_{\text{required\_directed}} + C_{\text{required\_undirected}},\\ C_{\text{turn}} =& \sum\limits_{i,j,k,r,a}x_{j,k,r}({}_ie^a_{j,k}\cdot{}_{(i+1)}e^a_{k,r}), \text{and}\\ M =& \sum\limits_{a,i,j,k}W^a_{j,k}\cdot {}_ie^a_{j,k}. \end{align*} Consider a use case where no two postmen may occupy the same edge going in the same direction at the same time. 
An example of an advantage one gains by using this resting paradigm over the repeated-edge paradigm is that it becomes easy to create a constraint to avoid such collisions: $$C_{\text{collisions}} = \sum\limits_{i,j,k}\sum\limits_{a<b}{}_ie^a_{j,k}\cdot {}_ie^b_{j,k}.$$ Similar constraints can be made for avoiding collisions going in the opposite direction along edges and for avoiding collisions at vertices. Additionally, if $W^a_{j,k}\in\mathbb{N}_0$ for all $j,k,a$, then one may consider the $k$-Postman Problem With Capacity. Suppose one is given $\{c_1,...,c_l\}\subset\mathbb{N}$, where the $a^{\text{th}}$ postman is limited to walks of weight less than $c_a$. Then, by introducing the slack variables ${}_{2^y}slack^a$ for $y\in\{0,...,ceiling(\log(c_a))\}$ for $a\in\{1,...,l\}$, we have the constraint, $$C_{\text{capacitance}} = \sum\limits_{a}(c_a - \sum\limits_{i,j,k}(W^a_{j,k}{}_ie^a_{j,k}) - \sum\limits_y(2^y{}_{2^y}slack^a))^2.$$ The slack variables in the equation allow the postmen to have walks with walk weights less than their maximum capacity without such solutions being penalized. One may also note that it is possible to combine variants \ref{k variant} and \ref{service variant} with a little thought; however, we shall abstain from doing so to avoid the additional notational complexity it would create. \section{Results}\label{results} In this section, we will consider the results of two kinds of experiments for both the Closed Undirected CPP and the general CPP. The first kind is a parameter study of various tuneable parameters for running the problem directly on quantum hardware. The second kind is a comparison of results between various purely quantum, classical, and quantum-classical solutions to the same CPP problem. As a reminder, the goal is to minimize the QUBO value, which in turn minimizes the weight of the walk. 
\subsection{Closed Undirected CPP Parameter Study} Let us begin with a parameter study of the Closed Undirected CPP. The parameters tuned were the constraint weight, the sample number, the intersample correlation, the number of spin reversal transforms, and the annealing time. The constraint weight is the $P$ variable discussed in section \ref{Undirected Explanation}. The constraint weight needs to be large enough to encourage valid solutions to the Closed Undirected CPP, but small enough to not overshadow the rest of the QUBO. \begin{table}[h] \centering \caption{$P$ Value Efficacy Directly on 2000Q} \label{CUCPP P Table} \begin{tabular}{*5c} \toprule & \multicolumn{4}{c}{$P$ Values} \\ \cmidrule(lr){2-5} ~ & 10 & 30 & 50 & 70 \\ \midrule valid (\%) & 27.5 & 33.8 & 37.5 & 50\\ time to solution (avg, s) & 3.11 & 3.05 & 2.89 & 3.11\\ optimal (\%) & 23.8 & 25 & 26.3 & 26.3\\ $\leq$ 10\% above optimum (\%) & 25 & 25 & 26.3 & 27.3 \\ $\leq$ 25\% above optimum (\%) & 25 & 25 & 30 & 33.8 \\ \bottomrule \end{tabular} \end{table} Some guidance is provided for understanding the tables. In Table \ref{CUCPP P Table}, the number $3.05$ in column $30$ and row `time to solution' means that using a $P$ value of $30$, the average wall-clock time to get the solution across all such runs was $3.05$s. The wall-clock time is the time it took the algorithm to find a solution from a given QUBO. Using the Leap based methods this includes over-the-internet communication plus the time it took to validate the lowest energy solution. The number $30$ in column $50$ and row `$\leq$ 25\% above optimum' means that $30\%$ of the runs using a $P$ value of $50$ were valid and achieved a solution at most $25\%$ worse than the optimal solution. As one can see in Table \ref{CUCPP P Table}, the percentage of valid solutions and the percentage of optimal solutions both correlate strongly with the size of the $P$ value. 
In fact, this becomes more apparent when we separate the data by the size of the problem as in Table \ref{CUCPP P Table Split}. The constraint weight for the Closed Undirected CPP runs when done strictly on quantum hardware will be set to $70$. Next, we will consider the effect the sample number has on the result. The sample number is the number of times states are read from the quantum hardware. \begin{table}[h] \centering \caption{$P$ Value Efficacy by Number of Odd Vertices Directly on 2000Q} \label{CUCPP P Table Split} \begin{tabular}{*5c} \toprule & \multicolumn{4}{c}{4 Odd vertices: $P$ Values} \\ \cmidrule(lr){2-5} ~ & 10 & 30 & 50 & 70 \\ \midrule valid (\%) & 100 & 100 & 100 & 100\\ optimal (\%) & 95 & 100 & 95 & 100\\ $\leq$ 25\% above optimum (\%) & 95 & 100 & 95 & 100 \\ \toprule & \multicolumn{4}{c}{6 Odd vertices: $P$ Values} \\ \cmidrule(lr){2-5} ~ & 10 & 30 & 50 & 70 \\ \midrule valid (\%) & 10 & 30 & 50 & 85\\ optimal (\%) & 0 & 0 & 10 & 5\\ $\leq$ 25\% above optimum (\%) & 5 & 0 & 25 & 30 \\ \toprule & \multicolumn{4}{c}{8 Odd vertices: $P$ Values} \\ \cmidrule(lr){2-5} ~ & 10 & 30 & 50 & 70 \\ \midrule valid (\%) & 0 & 5 & 0 & 15\\ optimal (\%) & 0 & 0 & 0 & 0\\ $\leq$ 25\% above optimum (\%) & 0 & 0 & 0 & 5 \\ \toprule & \multicolumn{4}{c}{10 Odd vertices: $P$ Values} \\ \cmidrule(lr){2-5} ~ & 10 & 30 & 50 & 70 \\ \midrule valid (\%) & 0 & 0 & 0 & 0\\ optimal (\%) & 0 & 0 & 0 & 0\\ $\leq$ 25\% above optimum (\%) & 0 & 0 & 0 & 0 \\ \bottomrule \end{tabular} \end{table} \begin{table}[h] \centering \caption{Sample Number Efficacy Directly on 2000Q} \label{CUCPP Sample Table} \begin{tabular}{*6c} \toprule & \multicolumn{5}{c}{Sample Numbers} \\ \cmidrule(lr){2-6} ~ & 10 & 50 & 100 & 500 & 1000 \\ \midrule valid (\%) & 27.9 & 35.3 & 51.5 & 58.8 & 61.8\\ time to solution (avg, s) & 2.85 & 2.87 & 3.06 & 2.76 & 3.08\\ optimal (\%) & 19.1 & 23.5 & 32.4 & 39.7 & 47.1\\ $\leq$ 10\% above optimum (\%) & 19.1 & 26.5 & 32.4 & 44.1 & 48.5 \\ $\leq$ 25\% above 
optimum (\%) & 22.1 & 26.5 & 36.8 & 48.5 & 50 \\ \bottomrule \end{tabular} \end{table} A strong positive relationship between the validity and quality of the solutions and the number of samples is seen in Table \ref{CUCPP Sample Table}. Even when viewed by problem size, this monotone increasing relationship is preserved, with one exception: for a problem of size 8 odd degree vertices, one solution of moderate quality ($\leq$ $25$\% above optimum) was found for the $500$ sample number, but not the $1000$ sample number. A sample number of $1000$ will be used for the Closed Undirected CPP when there is a choice on quantum hardware. \begin{table}[h] \centering \caption{Reduced Intersample Correlation Efficacy Directly on 2000Q} \label{CUCPP Intersample Table} \begin{tabular}{*3c} \toprule & \multicolumn{2}{c}{Intersample Correlation} \\ \cmidrule(lr){2-3} ~ & Not Reduced & Reduced \\ \midrule valid (\%) & 57.5 & 63.8 \\ time to solution (avg, s) & 3.03 & 3.46 \\ optimal (\%) & 43.8 & 40 \\ $\leq$ 10\% above optimum (\%) & 46.3 & 43.8 \\ $\leq$ 25\% above optimum (\%) & 48.8 & 47.5 \\ \bottomrule \end{tabular} \end{table} Next, we will explore the relation that reducing intersample correlation has with solutions in Table \ref{CUCPP Intersample Table}. The effect of reducing intersample correlation is mixed: it slightly increases the validity of solutions while slightly decreasing their quality. The time to solution is on average increased when reducing intersample correlation, so it is not used moving forward when studying the Closed Undirected CPP. 
\begin{table}[h] \centering \caption{Spin Reversal Transforms Efficacy Directly on 2000Q} \label{CUCPP Spin Table} \begin{tabular}{*5c} \toprule & \multicolumn{4}{c}{Number of Spin Reversal Transforms} \\ \cmidrule(lr){2-5} ~ & 0 & 10 & 30 & 100 \\ \midrule valid (\%) & 60 & 61.03 & 58.8 & 60\\ time to solution (avg, s) & 3.31 & 3.36 & 3.91 & 5.04\\ optimal (\%) & 36.3 & 43.8 & 40 & 42.5\\ $\leq$ 10\% above optimum (\%) & 38.8 & 43.8 & 41.3 & 45 \\ $\leq$ 25\% above optimum (\%) & 43.8 & 48.8 & 46.3 & 51.3 \\ \bottomrule \end{tabular} \end{table} We will continue our parameter study of the Closed Undirected CPP by looking at the effect the number of spin reversal transforms has on solutions in Table \ref{CUCPP Spin Table}. There is little relationship between the validity of the solution and the number of spin reversal transformations. The solution quality does improve when using spin reversal transformations, with the largest jump in quality coming from the first $10$ spin reversal transforms. Using a large number of spin reversal transformations, however, significantly increases the time to solution. Only $10$ spin reversal transformations will be used henceforth for the Closed Undirected CPP when on quantum hardware. \begin{table}[h] \centering \caption{Annealing Time Efficacy Directly on 2000Q} \label{CUCPP Annealing Table} \begin{tabular}{*6c} \toprule & \multicolumn{5}{c}{Annealing Time ($\mu$s)}\\ \cmidrule(lr){2-6} ~ & 5 & 10 & 30 & 100 & 500 \\ \midrule valid (\%) & 55.9 & 58.8 & 60.3 & 61.8 & 72.1\\ time to solution (avg, s) & 3.25 & 3.24 & 3.41 & 3.1 & 3.82\\ optimal (\%) & 42.6 & 48.5 & 41.2 & 45.6 & 44.1\\ $\leq$ 10\% above optimum (\%) & 44.1 & 50 & 42.6 & 45.6 & 44.1 \\ $\leq$ 25\% above optimum (\%) & 47.1 & 51.5 & 52.9 & 50 & 48.5 \\ \bottomrule \end{tabular} \end{table} The final piece of our parameter study for the Closed Undirected CPP is to study the effect that annealing time has on solutions. 
There is a positive relationship between the validity of solutions and the annealing time apparent in Table \ref{CUCPP Annealing Table}. The relationship between the annealing time and the solution quality, however, is less clear. To find a middle ground, an annealing time of $100\mu $s was chosen moving forward when running the Closed Undirected CPP on quantum hardware. \subsection{Closed Undirected CPP Comparison Study} In this section we compare the validity and quality of solutions between a brute force solution and various methods for finding solutions to the QUBO corresponding to the problem. The brute force method only runs on problems with a maximum of $14$ vertices of odd degree, while the tabu algorithm (tabu) \cite{dwavedoc} has no limit on size. Note that the tabu algorithm is a modified steepest descent algorithm which keeps track of the locations of the best solutions found and temporarily changes values in the QUBO to promote search diversity. Another method tested was the greedy algorithm (greedy) \cite{dwavedoc}, which has no limit on size and is simply a steepest descent solver. Other methods compared are the tabu algorithm both preprocessed \cite{greedytabu} and post-processed with a greedy algorithm (greedy tabu), with no limit on size, and simulated annealing (SA), with no limit on size. SA is a modified hill-climbing algorithm which improves solution diversity, and often quality, over other hill-climbing algorithms by allowing worse solutions to be picked sometimes during the search \cite{SAref}.
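Since the greedy solver above is described as a pure steepest descent solver, its core loop is easy to sketch. A self-contained single-bit-flip version (our own illustration; the Ocean implementation differs in details such as updating energy deltas incrementally rather than recomputing full energies):

```python
import numpy as np

def qubo_energy(Q, x):
    # Q is an upper-triangular QUBO matrix, x a 0/1 vector
    return x @ Q @ x

def greedy_descent(Q, x):
    """Steepest descent on a QUBO: repeatedly apply the single-bit flip
    that lowers the energy the most; stop when no flip improves."""
    x = x.copy()
    while True:
        gains = []
        for i in range(len(x)):
            y = x.copy()
            y[i] ^= 1  # flip bit i
            gains.append(qubo_energy(Q, x) - qubo_energy(Q, y))
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            return x  # local minimum reached
        x[best] ^= 1
```

Being strictly downhill, this solver terminates in the nearest local minimum, which is exactly the behavior the Discussion section uses to explain why greedy works so well as a post-processor for annealer samples.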
Quantum and quantum-hybrid methods compared include pure 2000Q (2000Q) with a maximum of $10$ vertices of odd degree, 2000Q post-processed with a greedy algorithm (greedy 2000Q) with a maximum of $10$ vertices of odd degree, \emph{qbsolv} on the 2000Q with no limit on size (2000Q qbsolv), pure Advantage4.1 (Advantage) with a maximum of $18$ vertices of odd degree, Advantage4.1 post-processed with a greedy algorithm (greedy Advantage) with a maximum of $18$ vertices of odd degree, and finally \emph{qbsolv} on the Advantage4.1 (Advantage qbsolv) with no limit on size. For some runs we also compared \emph{qbsolv} on both devices when using the fixed embedding composite (f-2000Q qbsolv and f-Advantage qbsolv respectively) versus the embedding composite in the \textit{ocean-dwave-sdk} \cite{dwavedoc}. When running on sufficiently small problems, solution quality is compared to the brute force method which finds the optimal answer. When on problems too large for brute force, we instead compare solutions with greedy tabu which usually only provides approximate answers. \begin{table}[h] \centering \caption{Comparison on graphs with $4$ or $6$ odd degree vertices} \label{CUCPP 4,6 Table} \begin{tabular}{*6c} \toprule & \multicolumn{5}{c}{Solution Quality: \% $\leq$ ---\% above optimum} \\ \cmidrule(lr){2-6} ~ & 0 & 10 & 25 & 100 \\ \midrule \textbf{classical} & ~ & ~ & ~ & ~ \\ \midrule greedy & 27 & 33 & 58 & 95\\ tabu & 100 & 100 & 100 & 100\\ SA & 100 & 100 & 100 & 100\\ \midrule \textbf{quantum} & ~ & ~ & ~ & ~ \\ \midrule 2000Q & 90 & 93 & 97 & 100 \\ Advantage & 77 & 83 & 92 & 100\\ \midrule \textbf{hybrid} & ~ & ~ & ~ & ~ \\ \midrule greedy 2000Q & 100 & 100 & 100 & 100 \\ greedy Advantage & 100 & 100 & 100 & 100\\ 2000Q qbsolv & 100 & 100 & 100 & 100\\ Advantage qbsolv & 100 & 100 & 100 & 100 \\ \bottomrule \end{tabular} \end{table} We first compare these methods on some small problems of $4$ and $6$ odd degree vertices. 
As an example of how Table \ref{CUCPP 4,6 Table} should be read: the number $33$ in the column labeled `$10$' and the row labeled `greedy' means that $33\%$ of the greedy solutions were valid and at most $10\%$ above the optimal solution. One observes from Table \ref{CUCPP 4,6 Table} that on small problems we get perfect results with all methods except 2000Q, Advantage, and greedy. The 2000Q and Advantage still attained strong results on these small problems, while greedy only achieved mediocre results. \begin{table}[h] \centering \caption{Comparison on graphs with $8$ or $10$ odd degree vertices} \label{CUCPP 8,10 Table} \begin{tabular}{*6c} \toprule & \multicolumn{5}{c}{Solution Quality: \% $\leq$ ---\% above optimum} \\ \cmidrule(lr){2-6} ~ & 0 & 10 & 25 & 100 \\ \midrule \textbf{classical} & ~ & ~ & ~ & ~ \\ \midrule greedy & 14 & 20 & 50 & 98\\ tabu & 100 & 100 & 100 & 100\\ SA & 100 & 100 & 100 & 100\\ \midrule \textbf{quantum} & ~ & ~ & ~ & ~ \\ \midrule 2000Q & 4.5 & 4.5 & 11 & 27 \\ Advantage & 2.3 & 2.3 & 4.5 & 6.8\\ \midrule \textbf{hybrid} & ~ & ~ & ~ & ~ \\ \midrule greedy 2000Q & 100 & 100 & 100 & 100 \\ greedy Advantage & 100 & 100 & 100 & 100\\ 2000Q qbsolv & 100 & 100 & 100 & 100\\ Advantage qbsolv & 100 & 100 & 100 & 100 \\ \bottomrule \end{tabular} \end{table} Now in Table \ref{CUCPP 8,10 Table} we compare some slightly larger problems, on graphs with $8$ and $10$ odd degree vertices respectively. On these larger problems we see that all of the methods get optimal results except 2000Q, Advantage, and greedy. This time, however, 2000Q, Advantage, and greedy got poor results, with 2000Q and Advantage drastically decreasing in efficacy. It is interesting to observe that despite the fact that greedy and 2000Q individually were ineffective, when used in concert optimal results were achieved. This holds similarly for greedy and Advantage.
The advantage of a greedy post-processing is not unique to this problem and was also used to improve solution quality in \cite{eigenvector} and \cite{qde}. \begin{table}[h] \centering \caption{Comparison on graphs with $16$ or $18$ odd degree vertices} \label{CUCPP 16,18 Table} \begin{tabular}{*7c} \toprule & \multicolumn{6}{c}{Solution Quality: \% $\leq$ ---\% above greedy tabu} \\ \cmidrule(lr){2-7} ~ & -10 & -5 & 0 & 10 & 25 \\ \midrule \textbf{classical} & ~ & ~ & ~ & ~ \\ \midrule greedy & 0 & 0 & 3.3 & 3.3 & 17\\ tabu & 0 & 10 & 70 & 97 & 100\\ SA & 0 & 0 & 3.3 & 20 & 77 \\ \midrule \textbf{quantum} & ~ & ~ & ~ & ~ \\ \midrule Advantage & 0 & 0 & 0 & 0 & 0\\ \midrule \textbf{hybrid} & ~ & ~ & ~ & ~ \\ \midrule greedy Advantage & 3.3 & 6.7 & 57 & 83 & 100\\ 2000Q qbsolv & 3.3 & 17 & 100 & 100 & 100\\ Advantage qbsolv & 3.3 & 23 & 97 & 100 & 100\\ \bottomrule \end{tabular} \end{table} Table \ref{CUCPP 16,18 Table} shows results for problems which are too large to be solved by brute force or to run directly on 2000Q hardware. In fact, these are the largest problems which can be run directly on the Advantage hardware. Once again, greedy and Advantage both performed poorly alone, but when used together produced results comparable to greedy tabu. The two \emph{qbsolv} methods almost always found solutions which were equal to or better than greedy tabu, with the better results occurring a non-negligible amount of the time. 
\begin{table}[h] \centering \caption{Comparison on graphs with $20$, $30$ and $50$ odd degree vertices} \label{CUCPP 20,30,50 Table} \begin{tabular}{*6c} \toprule & \multicolumn{5}{c}{Solution Quality: \% $\leq$ ---\% above greedy tabu} \\ \cmidrule(lr){2-6} ~ & -5 & 0 & 10 & 25 \\ \midrule \textbf{classical} & ~ & ~ & ~ & ~ \\ \midrule greedy & 0 & 0 & 3.3 & 13\\ tabu & 3.3 & 97 & 100 & 100\\ SA & 0 & 0 & 0 & 6.7\\ \midrule \textbf{hybrid} & ~ & ~ & ~ & ~ \\ \midrule 2000Q qbsolv & 0 & 63 & 87 & 100\\ f-2000Q qbsolv & 0 & 67 & 87 & 100\\ Advantage qbsolv & 0 & 47 & 83 & 100\\ f-Advantage qbsolv & 0 & 60 & 97 & 100\\ \bottomrule \end{tabular} \end{table} In Table \ref{CUCPP 20,30,50 Table} we studied problems which were too large to run directly on quantum hardware. The fixed embedding produced better results than the non-fixed embeddings for \emph{qbsolv}, especially for the Advantage device. However, the fixed embedding took significantly more overhead time for finding the initial embedding and the fixed embedding \emph{qbsolv} used significantly more runs on the quantum hardware than the non-fixed embeddings did. \subsection{General CPP Parameter Study} Let us now examine our parameter study of the General CPP on the Advantage hardware with a greedy algorithm post-processing. 
\begin{table}[h] \centering \caption{$P_{\text{one\_edge}}$ Value Efficacy on Advantage} \label{GCPP Single P Table} \begin{tabular}{*8c} \toprule & \multicolumn{7}{c}{$P_{\text{one\_edge}}$ Values} \\ \cmidrule(lr){2-8} ~ & 30 & 40 & 50 & 60 & 70 & 80 & 90 \\ \midrule valid (\%) & 86.7 & 85.7 & 80 & 75 & 66.7 & 72.7 & 61.5\\ time to solution (avg, s) & 40.8 & 40.8 & 26.3 & 31.7 & 26.221 & 32.002 & 30.928\\ $\leq$ 0\% below SA (\%) & 66.7 & 66.7 & 60 & 55 & 58.3 & 72.7 & 46.2\\ $\leq$ 10\% above SA (\%) & 66.7 & 66.7 & 60 & 55 & 58.3 & 72.7 & 46.2 \\ $\leq$ 25\% above SA (\%) & 73.3 & 71.4 & 64 & 60 & 66.7 & 72.7 & 53.8 \\ \bottomrule \end{tabular} \end{table} In Table \ref{GCPP Single P Table} one can see that overall, there is a negative relationship between $P_{\text{one\_edge}}$ and solution validity and quality, with $P_{\text{one\_edge}} = 80$ being an exception. The parameter value chosen for problems run after this is $P_{\text{one\_edge}} = 40$. While doing strictly worse than $P_{\text{one\_edge}} = 30$ in terms of solution validity and quality overall, $P_{\text{one\_edge}} = 40$ actually did significantly better when looking at the larger end of problems able to run directly on the hardware in the 100-200 variable range. \begin{table}[h] \centering \caption{$P_{adjacency}$ Value Efficacy on Advantage} \label{GCPP P Next Move Table} \begin{tabular}{*4c} \toprule & \multicolumn{3}{c}{$P_{adjacency}$ Values} \\ \cmidrule(lr){2-4} ~ & 60 & 70 & 80 \\ \midrule valid (\%) & 77.3 & 81 & 90.3\\ time to solution (avg, s) & 29.1 & 52.6 & 27.473\\ $\leq$ 0\% below SA (\%) & 59.1 & 52.4 & 61.3\\ $\leq$ 10\% above SA (\%) & 59.1 & 57.1 & 61.3 \\ $\leq$ 25\% above SA (\%) & 63.6 & 57.1 & 67.7 \\ \bottomrule \end{tabular} \end{table} Table \ref{GCPP P Next Move Table} shows a strong positive relationship between the validity of the solution and $P_{adjacency}$.
If one separates the data out by problem size, $P_{adjacency} = 70$ does the best in terms of validity and solution quality for problems in the $0-100$ variable range, while $P_{adjacency} = 80$ does better for problems in the $100-250$ range. So the average of these two values is used moving forward setting $P_{adjacency} = 75$. \begin{table}[h] \centering \caption{$P_{required}$ Value Efficacy on Advantage} \label{GCPP P required Table} \begin{tabular}{*5c} \toprule & \multicolumn{4}{c}{$P_{required}$ Values} \\ \cmidrule(lr){2-5} ~ & 30 & 40 & 50 & 60 \\ \midrule valid (\%) & 96.4 & 85.3 & 82.9 & 85.3 \\ time to solution (avg, s) & 32 & 29.4 & 47.5 & 33.3\\ $\leq$ 10\% below SA (\%) & 0 & 5.88 & 0 & 2.94\\ $\leq$ 0\% below SA (\%) & 71.4 & 73.5 & 54.3 & 64.7\\ $\leq$ 10\% above SA (\%) & 71.4 & 73.5 & 54.3 & 64.7 \\ $\leq$ 25\% above SA (\%) & 82.1 & 76.5 & 62.9 & 64.7 \\ \bottomrule \end{tabular} \end{table} As can be seen in Table \ref{GCPP P required Table} there is a generally negative trend for solution validity and solution quality with respect to $P_{required}$. When one looks at the data by problem size, $P_{required} = 30$ and $P_{required} = 40$ each do better on certain problem sizes. So once again we take the average and set $P_{required} = 35$ moving forward. 
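Chain strength, studied next, is the ferromagnetic coupling that ties together the physical qubits representing a single logical variable after embedding. A toy two-spin brute-force Ising model (our own illustration, not the paper's actual embedding) shows the basic trade-off: too weak a chain lets the two halves of a logical variable disagree, i.e.\ the chain breaks:

```python
import itertools

def energy(s1, s2, chain):
    # Two physical spins forming one chain.  The opposing local fields
    # (+1 and -2) stand in for conflicting pulls from the rest of the
    # embedded problem; -chain * s1 * s2 rewards the spins for agreeing.
    return 1 * s1 - 2 * s2 - chain * s1 * s2

def ground_state(chain):
    return min(itertools.product([-1, 1], repeat=2),
               key=lambda s: energy(*s, chain))

# Weak chain: the ground state has s1 != s2 (a broken chain).
# Strong chain: the spins agree and act as one logical variable.
```

The flip side, visible in the chain strength table below, is that hardware rescales all coefficients into a fixed range, so an overly strong chain crushes the resolution left for the actual problem terms and degrades solution quality.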
\begin{table}[h] \centering \caption{Chain Strength Efficacy on Advantage} \label{GCPP chain strength Table} \begin{tabular}{*5c} \toprule & \multicolumn{4}{c}{Chain Strength} \\ \cmidrule(lr){2-5} ~ & 400 & 500 & 800 & 900 \\ \midrule valid (\%) & 94.7 & 93.8 & 84.2 & 88.9 \\ time to solution (avg, s) & 36.8 & 27.3 & 70.6 & 29\\ $\leq$ 10\% below SA (\%) & 0 & 6.25 & 0 & 0\\ $\leq$ 0\% below SA (\%) & 68.4 & 56.3 & 52.6 & 55.6\\ $\leq$ 10\% above SA (\%) & 68.4 & 62.5 & 57.9 & 55.6 \\ $\leq$ 25\% above SA (\%) & 73.7 & 68.8 & 57.9 & 61.1 \\ \bottomrule \end{tabular} \end{table} Table \ref{GCPP chain strength Table} highlights that having a very large chain strength can degrade both the validity and the quality of solutions. A chain strength of $400$ leads to the highest percentage of valid solutions and mostly the highest quality solutions. However, when one looks at the data broken up by problem size, one finds that a chain strength of $500$ gets generally better results on larger problems. So the chain strength moving forward has been set to $475$. \subsection{General CPP Comparison Study} \label{GCPP Comparison Study} We compare results for the General CPP algorithm using tabu, greedy tabu, SA, Advantage, greedy Advantage, 2000Q qbsolv, and Advantage qbsolv. We compare how each method does on various problem sizes, restricted to the sizes each method can run. The problem sizes are broken up into three categories: small ($0-250$ variables), medium ($250-1000$ variables), and large ($1000-3200$ variables). As a reminder, how these variables are formulated is explained in Section \ref{BGC}. Roughly speaking, small problems correspond to graphs with $3-4$ vertices, $25\%-75\%$ edge saturation, and all kinds of start/stop conditions. Medium problems correspond to graphs with $5-6$ vertices, and similar other data. Large problems were run on graphs with $9-10$ vertices, $25\%-50\%$ edge saturation, and similar other data.
We compare against SA for the small and medium sized problems. However, for the medium sized problems SA took a long time to compute. For the large problems, SA became prohibitively expensive to run. The large problems are compared against greedy tabu. \begin{table} \centering \caption{Comparison on Small Problems} \label{GCPP small comparison} \begin{tabular}{*7c} \toprule & \multicolumn{6}{c}{Solution Quality: \% $\leq$ ---\% above SA} \\ \cmidrule(lr){2-7} ~ & -10 & -5 & 0 & 10 & 25 \\ \midrule \textbf{classical} & ~ & ~ & ~ & ~ \\ \midrule greedy & 0 & 0 & 6.9 & 6.9 & 8.3\\ tabu & 1.4 & 1.4 & 88 & 92 & 96\\ greedy tabu & 0 & 0 & 89 & 93 & 96 \\ \midrule \textbf{quantum} & ~ & ~ & ~ & ~ \\ \midrule Advantage & 0 & 0 & 19 & 19 & 22\\ \midrule \textbf{hybrid} & ~ & ~ & ~ & ~ \\ \midrule greedy Advantage & 0 & 0 & 68 & 69 & 74\\ 2000Q qbsolv & 0 & 0 & 75 & 79 & 85\\ Advantage qbsolv & 0 & 0 & 81 & 81 & 88\\ \bottomrule \end{tabular} \end{table} Let us start with the small problems. When comparing the small problems for the General CPP in Table \ref{GCPP small comparison}, we see once again that greedy and Advantage by themselves do not perform well, but that together get strong results. For these small problems in the $0-250$ variable range it appears that the classical algorithms like SA and tabu are more effective than the quantum/hybrid approaches running on current D-Wave LEAP resources. 
\begin{table} \centering \caption{Comparison on Medium Problems} \label{GCPP medium comparison} \begin{tabular}{*9c} \toprule & \multicolumn{8}{c}{Solution Quality: \% $\leq$ ---\% above SA} \\ \cmidrule(lr){2-9} ~ & -35 & -20 & -10 & -5 & 0 & 10 & 25 \\ \midrule \textbf{classical} & ~ & ~ & ~ & ~ \\ \midrule greedy & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ tabu & 0 & 0 & 21 & 37 & 42 & 63 & 95\\ greedy tabu & 0 & 0 & 26 & 26 & 47 & 63 & 89 \\ \midrule \textbf{hybrid} & ~ & ~ & ~ & ~ \\ \midrule 2000Q qbsolv & 5.3 & 5.3 & 21 & 26 & 47 & 53 & 84\\ Advantage qbsolv & 0 & 11 & 21 & 26 & 42 & 68 & 89\\ \bottomrule \end{tabular} \end{table} When looking at the data for medium sized problems in the range of $250-1000$ variables in Table \ref{GCPP medium comparison}, we see that the hybrid methods are comparable in solution quality to the classical methods and will sometimes get higher quality results. \begin{table} \centering \caption{Comparison on Large Problems} \label{GCPP large comparison} \begin{tabular}{*9c} \toprule & \multicolumn{8}{c}{Solution Quality: \% $\leq$ ---\% above greedy tabu} \\ \cmidrule(lr){2-9} ~ & -99.5 & -75 & -50 & -20 & 0 & 10 & 25 \\ \midrule \textbf{classical} & ~ & ~ & ~ & ~ \\ \midrule greedy & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ tabu & 0 & 0 & 0 & 4 & 20 & 32 & 36\\ \midrule \textbf{hybrid} & ~ & ~ & ~ & ~ \\ \midrule 2000Q qbsolv & 4 & 8 & 28 & 36 & 52 & 56 & 56\\ Advantage qbsolv & 4 & 12 & 40 & 40 & 44 & 56 & 56\\ \bottomrule \end{tabular} \end{table} Once we start to look at larger problems, as shown in Table \ref{GCPP large comparison}, we see that the quantum-classical hybrids start to significantly outperform the classical methods in terms of solution quality. \section{Discussion} \label{discussion} The following insights come from the data in Section \ref{results} and intuition gained from the implementation of the Closed Undirected CPP and variants \ref{undirected variant} through \ref{windy variant} in the generalized algorithm for the CPP. 
The observations come from running the algorithms both directly on the D-Wave 2000Q and Advantage chips, and via the quantum-classical implementation of \emph{qbsolv}~\cite{Booth} on the QAs. The differences between the original algorithm~\cite{Siloi} and our modified version are as follows: First, we use $x_{i,j}$ to represent the same binary variable as $x_{j,i}$, whereas the original version treated them as two separate variables. The advantage of this approach is that it halves the number of variables used. Second, we are able to remove a now unnecessary constraint from the equation. The advantages of this are two-fold. One, removing the constraint makes understanding the QUBO and implementing it easier. Two, the removal of the second constraint reduces the QUBO's variable connectivity, allowing for larger problems to fit directly on the hardware. These changes make it possible to run a problem with $12$ odd-degree vertices directly on the 2000Q, versus the previous maximum of $8$. Now let us talk about the generalized CPP algorithm. While handling a much more general class of problems, this algorithm can use a large number of variables. Depending on which variants are used, the number of variables can grow quadratically with the number of edges in the graph. The bright side is that the variables used are not in general fully connected. This means more variables may be used when running the problem on quantum hardware. For example, on the 2000Q D-Wave chip, 109 variables for the algorithm were successfully embedded on the hardware compared to the 64 variable maximum when fully connected. There are many ways the choice of variants can increase or decrease the QA efficiency. Specifying the start and/or end vertex will decrease the number of variables and somewhat decrease the connectivity between the variables. When implementing a Rural Postman Problem, requiring fewer edges can greatly decrease the connectivity between the variables.
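To make the shared-variable convention concrete, here is a minimal sketch of the odd-vertex pairing QUBO at the heart of the Closed Undirected CPP, with one bit per unordered pair so that $x_{i,j}$ and $x_{j,i}$ are literally the same variable. The distance matrix and penalty value are hypothetical, and the full QUBO used in the paper is the one defined in Section \ref{BGC}:

```python
import itertools
import numpy as np

# Hypothetical shortest-path distances between 4 odd-degree vertices.
D = np.array([[0, 1, 4, 3],
              [1, 0, 5, 2],
              [4, 5, 0, 6],
              [3, 2, 6, 0]], dtype=float)
n = 4
# One variable per unordered pair: x_{i,j} and x_{j,i} are the same bit,
# which halves the variable count relative to ordered pairs.
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
P = 15.0  # penalty weight, comfortably above the largest distance

def qubo_energy(x):
    cost = sum(D[i, j] * x[k] for k, (i, j) in enumerate(pairs))
    for v in range(n):  # each odd vertex must be in exactly one pair
        deg = sum(x[k] for k, (i, j) in enumerate(pairs) if v in (i, j))
        cost += P * (deg - 1) ** 2
    return cost

# Brute-force ground state of the QUBO (what the annealer approximates).
best = min(itertools.product([0, 1], repeat=len(pairs)), key=qubo_energy)
chosen = [pairs[k] for k, b in enumerate(best) if b]
```

Here the ground state is the minimum-weight perfect matching of the odd-degree vertices, which is exactly the pairing the Closed Undirected CPP needs.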
Variants \ref{k variant} through \ref{hierachy variant} all greatly increase the number of variables required and/or increase the connectivity between variables. One of the main determining factors under one's control that affects how many variables are required is $i_{\text{max}}$, the maximum length of the walk allowed. To find a minimal walk weight which meets all criteria, one must allow sufficient steps in the walk to find that minimal walk weight. Roughly $(2\vert U\vert + \vert D\vert)i_{\text{max}}$ variables are required for all variants except \ref{k variant}, \ref{service variant}, and \ref{hierachy variant}, which require approximately some integer multiple more variables. Thus we try to pick a minimal, yet sufficiently large $i_{\text{max}}$. A safe value to pick, in the sense that it will be sufficiently large for any variant, is $i_{\text{max}} = 2\vert E\vert$. If this is too many variables, one may try a smaller $i_{\text{max}}$. It is safer to greatly decrease $i_{\text{max}}$ from $2\vert E\vert$ when there are a large number of undirected edges or when a significant number of edges are not required in the Rural Postman variant. Now let us take a moment to talk about the '$P$' variables from earlier, the ones by which we multiply each constraint when adding it to our QUBOs. This is where our effort becomes a bit more of an art than a science. From a mathematical perspective, one should choose the '$P$' variables to be arbitrarily large. From an implementation perspective this should not be done. When the QUBO is embedded on the hardware, all the values are scaled to fit within a specific range with limited precision and, as such, if the '$P$' variables are chosen too large, then numbers which are not zero may be treated as zero, leading to poor results. There are some general guidelines for the choices. All the '$P$' variables are multiplied with constraints which, if broken, lead to an invalid solution.
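The precision point can be made concrete with a small numpy cartoon, where decimal rounding stands in for the hardware's actual finite coefficient resolution (the names and numbers here are our own illustration):

```python
import numpy as np

weights = np.array([1.0, 2.0, 3.0])   # objective (edge-weight) terms

def scaled(weights, P, decimals=2):
    """Mimic hardware auto-scaling: every coefficient, including the
    penalty P, is rescaled into [-1, 1] and kept to limited precision."""
    coeffs = np.append(weights, P)
    return np.round(coeffs / np.abs(coeffs).max(), decimals)

# With a moderate penalty the weight terms survive the rescaling; with an
# enormous penalty they underflow to zero after scaling, and the annealer
# can no longer distinguish cheap walks from expensive ones.
```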
The '$P$' variables should at the very least be larger than the highest edge weight. The authors often found that setting all such variables between 1.5 and 15 times the highest edge weight worked well. If one tries to implement the algorithms in this paper and gets results which lead to invalid solutions, then the likely culprit is the '$P$' variables. In this case, one should increase the '$P$' value for the constraint which is broken. If, however, one is getting valid but non-optimal results, this may be caused by having '$P$' variables which are too large, and one should try decreasing all of them slightly. One of the surprising results is how effective combining annealing on a QA with greedy post-processing was, even when either method alone achieved poor results. An interpretation of why this occurs is as follows. The energy landscape for our QUBOs, especially the larger ones, is complex with many peaks and valleys of varying heights and depths. The greedy algorithm by itself can only ever go down, and so will descend into the nearest valley, which has a low likelihood of being the deepest valley or even a deep valley. When annealing on a QA, there is a strong likelihood of arriving at the deepest, or at least one of the deepest, valleys, but due to noise and flux errors \cite{DwaveQPUSolver} the anneal has trouble settling to the bottom of these valleys. So when we combine these methods together, the QA finds one of the deepest valleys and then greedy quickly gets us to the bottom of the valley. Another surprising result appears in Section \ref{GCPP Comparison Study}. For the methods tested, the data shows a comparative advantage for classical algorithms on small problems, but as the problems grow in size, the quantum-classical hybrid methods overtake the classical algorithms and achieve superior results. This trend is highlighted in Tables \ref{GCPP small comparison}, \ref{GCPP medium comparison}, and \ref{GCPP large comparison}.
One should note that in this paper we have defined our graphs to not include multi-graphs, i.e.\ graphs which may have more than one edge from vertex $i$ to vertex $j$. This is to make the notation simpler. Everything in this paper may be extended to work with multi-graphs, with the largest obstacle being the notation. For ideas on how to implement this work for multi-graphs one should look at the QUBOs for variant \ref{k variant} and variant \ref{service variant}. In conclusion, the authors have designed and developed a framework for solving a large number of variants of the CPP on a QA. Implementation of the framework for variants \ref{undirected variant} through \ref{windy variant} on the D-Wave 2000Q was successful. Optimal results were achieved for problems which could be embedded on the hardware with only short chains, and optimal results were sometimes achieved for larger problems after tuning the '$P$' variables. Future directions include the following: implementation of the remaining variants outlined; implementation of further variants, as there are more variants which could easily be adapted to the method defined in Sections \ref{BGC} and \ref{even more general} but were not included to keep this paper reasonable in length; translating the CPP algorithm and variants to gate-based quantum architectures; and developing a more efficient way to choose optimal '$P$' variable values given the inputs of the problem. Additionally, there is room to experiment with this algorithm in conjunction with an iterative and/or graph partitioning approach to the CPP. \section*{Funding} This research was supported by the U.S. Department of Energy (DOE) National Nuclear Security Administration (NNSA) Advanced Simulation and Computing (ASC) program at Los Alamos National Laboratory (LANL). This research has been funded by the LANL Laboratory Directed Research and Development (LDRD) under project number 20200056DR. JEP, CFAN, and SMM were funded by LANL LDRD.
JEP was also funded by the U.S. Department of Energy (DOE) through a quantum computing program sponsored by the Los Alamos National Laboratory (LANL) Information Science \& Technology Institute. Assigned: Los Alamos Unclassified Report LA-UR-22-27468. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218NCA000001). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. \section*{Author Information} \subsection*{Author Names and Affiliations} \textbf{Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, NM, USA} \\ \textbf{Mathematics Department, University of California, Santa Barbara, CA, USA} \\ Joel E. Pion \\ \textbf{Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA} \\ Christian F. A. Negre \\ \textbf{Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM, USA} \\ Susan M. Mniszewski \subsection*{Author Contributions} J.E.P. and S.M.M. designed the project. J.E.P. performed the numerical simulations and optimizations. S.M.M. supervised the whole project. C.F.A.N advised on the mathematical formulations. All authors contributed to the discussion, analysis of the results and the writing of the manuscript. \subsection*{Corresponding author} Correspondence to Susan M. Mniszewski \section*{Ethics declarations} \subsection*{Declarations} This work does not involve human participants and presents no ethical concerns. \subsection*{Conflict of interest} The authors declare no competing interests. \subsection*{Human and Animal Ethics} Not Applicable \section*{Consent for publication} All authors agreed to publication of this research. \section*{Availability of data and materials} All author-produced code will be available upon reasonable request. \nocite{*} \input sn-article.bbl \end{document}
\begin{document} \title{The Chasm at Depth Four, and Tensor Rank:\\ Old results, new insights} \author{Suryajith Chillara \thanks{Chennai Mathematical Institute, Research supported in part by a TCS PhD fellowship. Part of the work done while visiting Tel Aviv University. {\tt [email protected] }}\\ \and Mrinal Kumar \thanks{Rutgers University, Research supported in part by a Simons Graduate Fellowship. Part of the work done while visiting Tel Aviv University. {\tt [email protected]}}\\ \and Ramprasad Saptharishi \thanks{Tel Aviv University {\tt [email protected]}. The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement number 257575.}\\ \and V Vinay \thanks{ Limberlink Technologies Pvt Ltd and Chennai Mathematical Institute {\tt [email protected]}} } \maketitle \begin{abstract} \noindent Agrawal and Vinay \cite{av08} showed how any polynomial size arithmetic circuit can be thought of as a depth four arithmetic circuit of subexponential size. The resulting circuit size in this simulation was more carefully analyzed by Koiran \cite{koiran} and subsequently by Tavenas \cite{tav13}. We provide a simple proof of this chain of results. We then abstract the main ingredient to apply it to formulas and constant depth circuits, and show more structured depth reductions for them. In an a priori surprising result, Raz~\cite{raz10} showed that for any $n$ and $d$, such that $ \omega(1) \leq d \leq O\left(\frac{\log n}{\log\log n}\right)$, constructing explicit tensors $T:[n]^d \rightarrow \F$ of high enough rank would imply superpolynomial lower bounds for arithmetic formulas over the field $\F$. Using the additional structure we obtain from our proof of the depth reduction for arithmetic formulas, we give a new and arguably simpler proof of this connection.
We also extend this result for homogeneous formulas to show that, in fact, the connection holds for any $d$ such that $\omega(1) \leq d \leq n^{o(1)}$. \end{abstract} \section{Introduction} Agrawal and Vinay \cite{av08} showed how any polynomial size\footnote{in fact, subexponential size } arithmetic circuit can be thought of as a depth four arithmetic circuit of subexponential size. This provided a new direction to seek lower bounds in arithmetic circuits. A long list of papers attests to increasingly sophisticated lower bound arguments, centered around the idea of shifted partial derivatives due to Kayal, to separate the so-called arithmetic version of P vs NP (cf. \cite{github}). The depth reduction chasm was more carefully analyzed by Koiran \cite{koiran} and subsequently by Tavenas \cite{tav13}. Given the importance of these depth reduction chasms, it is natural to seek new and/or simpler proofs. In this work, we do just that. We use a simple combinatorial property to prove our result. We then show how this can be extended to showing chasms for formulas and constant depth circuits. In the case of formulas, we show that the top layer of multiplication gates has a much larger number of factors and therefore has more structure than a typical depth reduced circuit. We hope that such structural properties lead to better lower bounds for formulas. In fact, we use this additional structure to give a new proof of a result of Raz~\cite{raz10} which shows that for an appropriate range of parameters, constructing explicit tensors of high enough rank implies super-polynomial lower bounds for arithmetic formulas. More formally, let $f \in \F[\vecx_1, \vecx_2, \ldots, \vecx_d]$ be a set multilinear polynomial of degree $d$ in $nd$ variables, where for every $i \in [d]$, $\vecx_i$ is a subset of variables of size $n$. In a natural way, $f$ can be viewed as a tensor $f:[n]^d \rightarrow \F$.
Raz~\cite{raz10} showed that if $\omega(1) \leq d \leq O(\log n/\log\log n)$ and $f$ is computed by an arithmetic formula of size $\poly(n)$, then the rank of $f$ as a tensor is far from $n^{d-1}$ (the trivial upper bound\footnote{We know that there exist tensors $g:[n]^d \rightarrow \F$ of rank $n^{d-1}/d$.}). We use the additional structure obtained from our proof of depth reduction for formulas and constant depth arithmetic circuits to give a very simple proof of this result. As an extension, we also show that, in fact, the tensor rank of $f$ is far from $n^{d-1}$ as long as $f$ is computed by a \emph{homogeneous} formula of polynomial size and $d$ is such that $\omega(1) \leq d \leq n^{o(1)}$. This write-up is organised as follows. We give new proofs of depth reduction for arithmetic circuits (\autoref{sec:depth-reduction-ckts}), for homogeneous arithmetic formulas (\autoref{sec:homformulas}) and for constant depth arithmetic circuits (\autoref{sec:constant-depth-circuits}). We end by applying the new proof of depth reduction for homogeneous formulas to show a simple proof of Raz's upper bound~\cite{raz10} on the tensor rank of polynomials computed by small arithmetic formulas in \autoref{sec:tensor-rank}. For standard definitions concerning arithmetic circuits, arithmetic formulas, etc., we refer the reader to the survey of Saptharishi~\cite{github}. For an introduction to connections between tensor rank and arithmetic circuits, we refer the reader to an excellent summary of such results in Raz's original paper~\cite{raz10}. Throughout this paper, unless otherwise stated, by \emph{depth reduction}, we mean a reduction to homogeneous depth four circuits.
By a $\SPSP^{[b]}$ circuit, we denote a depth four circuit such that the fan-in of every product gate at the bottom level is \emph{at most} $b$, and by a $\mySPSP{a}{b}$ circuit, we denote a $\SPSP^{[b]}$ circuit with the additional property that every product gate adjacent to the output gate has fan-in \emph{at least} $a$, i.e.\ the polynomials computed at the gates adjacent to the output gate have \emph{at least} $a$ non-trivial factors. \section{Depth reduction for arithmetic circuits}\label{sec:depth-reduction-ckts} We shall need the classical depth reduction of \cite{vsbr83, ajmv98}. \begin{theorem}[\cite{vsbr83,ajmv98}]\label{thm:vsbr} Let $f$ be an $n$-variate degree $d$ polynomial computed by an arithmetic circuit $\Phi$ of size $s$. Then there is an arithmetic circuit $\Phi'$ computing $f$ of size $s' = \poly(s,n,d)$ and depth $O(\log d)$. \end{theorem} \noindent Moreover, the reduced circuit $\Phi'$ has the following properties: \begin{enumerate}\itemsep1pt \parskip0pt \parsep0pt \item The circuit is homogeneous. \item All multiplication gates have fan-in at most $5$. \item If $u$ is any multiplication gate of $\Phi'$, all its children $v$ satisfy $\deg(v) \leq \deg(u)/2$. \end{enumerate} \noindent These properties can be inferred from their proof. A simple self-contained proof may be seen in \cite{github}. Agrawal and Vinay \cite{av08} showed that arithmetic circuits can in fact be reduced to depth four, and the result was subsequently strengthened by Koiran \cite{koiran} and by Tavenas~\cite{tav13}. \begin{theorem}[\cite{av08,koiran,tav13}] \label{thm:av} Let $f$ be an $n$-variate degree $d$ polynomial computed by a size $s$ arithmetic circuit. 
Then, for any $0< t \leq d$, $f$ can be computed by a homogeneous $\SPSP^{[t]}$ circuit of top fan-in $s^{O(d/t)}$ and size $s^{O(t + d/t)}$. \end{theorem} To optimize the size of the final depth four circuit, we should choose $t = \sqrt{d}$ to get a $\SPSP^{[t]}$ circuit of size $s^{O(\sqrt{d})}$. Note that this implies that if we could prove a lower bound of $n^{\omega(\sqrt{d})}$ for such $\SPSP^{[\sqrt{d}]}$ circuits, then we would have proved a lower bound for general circuits. In this section, we shall see a simple proof of \autoref{thm:av}. \begin{proofof}{\autoref{thm:av}} Using \autoref{thm:vsbr}, we can assume that the circuit has $O(\log d)$ depth. If $g$ is a polynomial computed at any intermediate node of $C$, then from the structure of $C$ we have a homogeneous expression \begin{equation}\label{eqn:vsbr-expansion} g \spaced{=} \sum_{i=1}^{s} g_{i1} \cdot g_{i2} \cdot g_{i3} \cdot g_{i4} \cdot g_{i5} \end{equation} where each $g_{ij}$ is computed by a node in $C$ as well, and $\deg(g_{ij}) \leq \deg(g)/2$. In particular, if $g$ were the output gate of the circuit, the RHS may be interpreted as a $\SPSP^{[d/2]}$ circuit of top fan-in $s$ computing $f$. To obtain a $\SPSP^{[t]}$ circuit eventually, we shall perform the following steps on the output gate: \begin{quote} 1. For each summand $g_{i1}\dots g_{ir}$ in the RHS, pick the gate $g_{ij}$ with largest degree (if there is a tie, pick the one with smaller index $j$). If $g_{ij}$ has degree greater than $t$, expand $g_{ij}$ in-place using \eqref{eqn:vsbr-expansion}. 2. Repeat this process until all $g_{ij}$'s on the RHS have degree at most $t$. \end{quote} \noindent Each iteration of the above procedure increases the top fan-in by a multiplicative factor of $s$. If we can show that after $O(d/t)$ iterations all terms on the RHS have degree at most $t$, then we obtain a $\SPSP^{[t]}$ circuit of top fan-in $s^{O(d/t)}$ computing $f$. 
Label a term $g_{ij}$ \emph{bad} if its degree is more than $t/8$. To bound the number of iterations, we count the number of bad terms in each summand. Since we always maintain homogeneity, the number of bad terms in any summand is at most $8d/t$ (i.e., not too many). We show that each iteration \emph{increases} the number of bad terms by at least one. This bounds the number of iterations by $8d/t$. In \eqref{eqn:vsbr-expansion}, if $\deg(g) = k$, the largest degree term of any summand on the RHS has degree at least $k/5$ (since the degrees of the five terms must add up to $k$) and so continues to be bad if $k > t$. But the largest degree term can have degree at most $k/2$. Hence the other four terms must together contribute at least $k/2$ to the degree. This implies that the second largest term in each summand has degree at least $k/8$. This term is bad too, if we started with a term of degree greater than $t$. Therefore, as long as we are expanding terms of degree more than $t$ using \eqref{eqn:vsbr-expansion}, we are guaranteed that the replacements have at least one additional bad term. As argued earlier, we can never have more than $8d/t$ such terms in any summand and this bounds the number of iterations by $8d/t$. \begin{figure} \caption{Depth reduction analysis} \label{fig:depth-red-analysis} \end{figure} Observe that the above procedure can be viewed as a tree, as described in \autoref{fig:depth-red-analysis}, where each node represents an intermediate summand in the iterative process. From \eqref{eqn:vsbr-expansion} it is clear that the tree is $s$-ary. Furthermore, the number of ``bad'' terms strictly increases as we go down in the tree (these are marked in red in \autoref{fig:depth-red-analysis}). Since the total number of bad terms in any node can be at most $8(d/t)$, the depth of the tree is at most $8(d/t)$. Therefore, the total number of leaves is at most $s^{\left(8d/t\right)}$. 
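The counting argument above is easy to check empirically. The following is an illustrative sketch (ours, not part of the proof): we split a degree-$k$ factor into five random parts of degree at most $k/2$ each, mimicking the structure of \eqref{eqn:vsbr-expansion}, always expand a largest factor of degree more than $t$, and verify that the number of expansions never exceeds $8d/t$.

```python
import random

def split5(k):
    # Random composition of k into 5 nonnegative parts, each at most k // 2,
    # mimicking the degree bounds of the VSBR multiplication gates.
    while True:
        cuts = sorted(random.randint(0, k) for _ in range(4))
        parts = [cuts[0], cuts[1] - cuts[0], cuts[2] - cuts[1],
                 cuts[3] - cuts[2], k - cuts[3]]
        if max(parts) <= k // 2:
            return [p for p in parts if p > 0]   # drop degree-0 (constant) factors

def expansions_needed(d, t):
    # Follow a single summand: repeatedly expand a largest factor of degree > t,
    # as in the iterative procedure, and count the number of expansions.
    factors = [d]
    rounds = 0
    while max(factors) > t:
        k = max(factors)
        factors.remove(k)
        factors.extend(split5(k))
        rounds += 1
    return rounds

random.seed(0)
d, t = 1024, 32
for _ in range(100):
    # the proof's bound: at most 8d/t expansions along any root-to-leaf path
    assert expansions_needed(d, t) <= 8 * d // t
```

The assertion holds deterministically, not just for these random splits: since the largest part has degree at most $k/2$, the second largest automatically has degree at least $k/8$, which is exactly the step the proof exploits.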
Moreover, since every polynomial with degree at most $t$ can be written as a sum of at most $n^{O(t)}$ monomials, the total size of the resulting $\SPSP^{[t]}$ circuit is at most $s^{O(t + d/t)}$ (since $s\geq n$). \end{proofof} \section{Depth reduction for homogeneous formulas}\label{sec:homformulas} We will show that homogeneous formulas and shallow circuits can be depth-reduced to a more structured depth four circuit. To quickly recap the earlier proof, we began with an equation $f=\sum_i g_{i1} \cdot g_{i2} \cdot g_{i3} \cdot g_{i4} \cdot g_{i5}$ and recursively applied the same expansion on all the large degree $g_{ij}$'s. The only property we really used was that in the above equation, there were at least two $g_{ij}$'s of large degree. For the case of homogeneous formulas and shallow circuits, there are better expansions that we could use as a starting point. \begin{theorem}[\cite{hy11a}]\label{thm:HY} Let $f$ be an $n$-variate degree $d$ polynomial computed by a size $s$ homogeneous formula. Then, $f$ can be expressed as \begin{equation}\label{eqn:HY} f\spaced{=} \sum_{i=1}^s f_{i1} \cdot f_{i2} \cdots f_{ir} \end{equation} where \begin{enumerate} \item the expression is homogeneous, \item for each $i,j$, we have $\inparen{\frac{1}{3}}^j d \leq \deg(f_{ij}) \leq \inparen{\frac{2}{3}}^j d$ and $r = \Theta(\log d)$, \item each $f_{ij}$ is also computed by a homogeneous formula of size at most $s$. \end{enumerate} \end{theorem} With this, we are ready to prove a more structured depth reduction for homogeneous formulas. \begin{theorem}\label{thm:depth-red-hom-formulas} Let $f$ be a homogeneous $n$-variate degree $d$ polynomial computed by a size $s$ homogeneous formula. Then for any $0< t \leq d$, $f$ can be equivalently computed by a homogeneous $\Sigma\Pi^{[a]}\Sigma\Pi^{[t]}$ formula of top fan-in $s^{10(d/t)}$ where \[ a > \frac{1}{10}\frac{d}{t} \log t. 
\] \end{theorem} The resulting depth four circuit is more structured in the sense that the multiplication gates at the second layer have a much larger fan-in (by a factor of $ \log t$). In \autoref{thm:av}, we only know that the polynomials feeding into these multiplication gates have degree at most $t$. The theorem above states that if we were to begin with a homogeneous formula, the degree $t$ polynomials factorize further to give $\Theta((d/t)\log t)$ non-trivial polynomials instead of $\Theta(d/t)$ as obtained in \autoref{thm:av}. \begin{proof} We start with equation \eqref{eqn:HY} which is easily seen to be a homogeneous $\SPSP^{[2d/3]}$ circuit with top fan-in $s$: \[ f\spaced{=} \sum_{i=1}^s f_{i1} \cdot f_{i2} \cdots f_{ir} \] To obtain a $\mySPSP{\Theta((d/t)\log t)}{t}$ circuit eventually, we shall perform the following steps on the output gate: \begin{quote} 1. For each summand $f_{i1}\dots f_{ir}$ in the RHS, pick the gate $f_{ij}$ with largest degree (if there is a tie, pick the one with smaller index $j$). If $f_{ij}$ has degree more than $t$, expand that $f_{ij}$ in-place using \eqref{eqn:HY}. 2. Repeat this process until all $f_{ij}$'s on the RHS have degree at most $t$. \end{quote} \noindent Each iteration again increases the top fan-in by a factor of $s$. Again, as long as we are expanding terms of degree $k > t$ using \eqref{eqn:HY}, we are guaranteed by \autoref{thm:HY} that each new summand has at least one more term of degree at least $k/9 > t/9$. To upper bound the number of iterations, we use a potential function --- the number of factors of degree strictly greater than $t/9$ in a summand. A factor that is of degree $k>t$ and which is expanded using \eqref{eqn:HY} contributes at least two factors of degree $> t/9$ per summand. Thus, the net increase in the potential per iteration is at least $1$. Since this is a homogeneous computation, there can be at most $9d/t$ such factors of degree $>t/9$. 
Thus, the number of iterations must be bounded by $9d/t$, thereby yielding a $\SPSP^{[t]}$ circuit of top fan-in at most $s^{9(d/t)}$ and size $s^{(t +9d/t)}$. This argument is similar to the argument in the proof of \autoref{thm:av}. We now argue that the fan-in of every product gate at the second level in the $\SPSP^{[t]}$ circuit obtained is $\Theta((d/t)\log t)$. To this end, we shall now show that we require $\Theta(d/t)$ iterations to make all the factors have degree at most $t$. This, along with the fact that every iteration introduces a certain number of non-trivial factors in every product, will complete the proof. We will say a factor is \emph{small} if its degree is at most $t$ and \emph{big} otherwise. To prove a lower bound on the number of iterations, we shall use a different potential function --- the total degree of all the big factors. Given the geometric progression of degrees in \autoref{thm:HY}, we can easily see that the total degree of all the small factors in any summand is bounded above by $3t$. Hence, the total degree of all the big factors is at least $d - 3t$. But whenever \eqref{eqn:HY} is applied on a big factor, we introduce several small degree factors with total degree of at most $3t$. Hence, the potential drops by at most $3t$ per iteration. This implies that at least $(d-3t)/3t = \Omega(d/t)$ iterations are required to drive the potential down to zero. Since every expansion via \eqref{eqn:HY} introduces at least $(\log_{3} t)$ non-trivial terms, it would then follow that every summand at the end has $\frac{1}{(3\log 3)}\frac{d}{t}\log t >\frac{1}{10}\frac{d}{t} \log t$ non-trivial factors. \end{proof} \subsection{An alternate proof} While we proved \autoref{thm:depth-red-hom-formulas} along the lines of \autoref{thm:av}, it is possible to provide an alternate proof of it. We provide a sketch. Starting with a homogeneous formula, by \autoref{thm:av} we get a $\SPSP^{[t]}$ circuit of the form \[\sum_{i=1}^{s'} Q_{i1}\dots Q_{ir}\] where $\deg(Q_{ij})\leq t$ and $s' = s^{O(d/t)}$. 
From the innards of this proof, it can be observed that each of the $Q_{ij}$'s is indeed computable by a homogeneous formula (formula, not a circuit) of size at most $s$. By multiplying together several polynomials of degree at most $t/2$ (if necessary), we may assume that there are $\Theta(d/t)$ polynomials $Q_{ij}$ in each summand, with their degrees between $t/2$ and $t$. Each of these polynomials may be expanded using \eqref{eqn:HY}. Since each such expansion adds $O(\log t)$ additional factors and increases the fan-in by a factor of $s$, the overall top fan-in is now $s' \cdot s^{O(d/t)}$. The number of factors, however, increases from $\Theta(d/t)$ to $\Theta((d/t)\log t)$. The resulting circuit is thus a $\mySPSP{\Theta((d/t) \log t)}{t}$ circuit of top fan-in $s^{O(d/t)}$. \section{Depth reduction for constant depth circuits}\label{sec:constant-depth-circuits} In the same vein, a natural question is if we can obtain more structure for a constant depth circuit. For example, is the resulting depth four circuit more structured when we begin with a depth 100 circuit? By suitably adapting the expansion equation, our approach can answer this question. \begin{lemma}\label{lem:exp-eqn-for-shallow} Let $f$ be an $n$-variate degree $d$ polynomial computed by a size $s$ circuit of product-depth\footnote{the product depth is the number of multiplication gates encountered in any path from root to leaf} $\Delta$. Then $f$ can be expressed as \begin{equation}\label{eqn:HY-for-shallow} f\spaced{=} \sum_{i=1}^{s^2} f_{i2} \cdot f_{i3} \cdots f_{ir}\; \cdot \; g_{i1} \cdots g_{i\ell} \end{equation} where \begin{enumerate} \item the expression is homogeneous, \item for each $i,j$, we have $\inparen{\frac{1}{3}}^{j} d \leq \deg(f_{ij}) \leq \inparen{\frac{2}{3}}^{j} d$ and $r = \Theta(\log d)$, \item each $f_{ij}$ and $g_{ij}$ is also computed by homogeneous formulas of size at most $s$ and product-depth $\Delta$. 
\item $\ell = \Omega(d^{1/\Delta})$ \item all $g_{ij}, f_{ij}$ are polynomials of degree at least $1$. \end{enumerate} \end{lemma} \noindent Using this equation for the depth reduction yields the following theorem. \begin{theorem}\label{thm:depth-red-for-shallow} Let $f$ be an $n$-variate degree $d$ polynomial computed by a size $s$ homogeneous formula of product-depth $\Delta$. Then for any parameter $t = o(d)$, we can compute $f$ equivalently by a homogeneous $\mySPSP{\Theta((d/t)\cdot t^{1/\Delta})}{t}$ circuit of top fan-in at most $s^{O(d/t)}$ and size $s^{O(t + d/t)}$. \end{theorem} The multiplication gates at the second layer of the resulting depth four circuit have a much larger fan-in than what is claimed in \autoref{thm:av} or \autoref{thm:depth-red-hom-formulas}. When we begin with additional structure in the circuit, it seems we get additional structure in the resulting depth four circuit. Specifically, let us fix $t = \sqrt{d}$. The fan-in of the outer product gate would be $\Theta(\sqrt{d})$ for a general circuit (\autoref{thm:av}), $\Theta(\sqrt{d} \cdot \log d)$ for a homogeneous formula (\autoref{thm:depth-red-hom-formulas}), and $\Theta(\sqrt{d} \cdot d^{1/100})$ for a circuit of depth $100$ (\autoref{thm:depth-red-for-shallow}). \begin{proofof}{\autoref{lem:exp-eqn-for-shallow}} Let $\Phi$ be the product depth-$\Delta$ formula computing $f$. By \autoref{thm:HY}, we get \begin{equation}\label{eqn:HY-in-shallow} f\spaced{=} \sum_{i=1}^{s} f_{i1} \cdot f_{i2} \cdots f_{ir} \end{equation} with the required degree bounds. From the proof of \autoref{thm:HY}, it follows that each $f_{ij}$ is in fact a product of disjoint sub-formulas of $\Phi$, and hence in particular $f_{i1}$ is computable by size $s$ formulas of product-depth $\Delta$. We shall expand $f_{i1}$ again to obtain the $g_{ij}$s. 
Since $f_{i1}$ is a polynomial of degree at least $d/3$ computed by a size $s$ formula $\Phi'$ of product-depth $\Delta$, there must be some multiplication gate $h$ in $\Phi'$ of fan-in $\Omega(d^{1/\Delta})$. Therefore, \[ f_{i1}\spaced{=} A \cdot [h] \spaced{+} B. \] Here, $[h]$ is the polynomial computed at the gate $h$. Since $B$ is computed by $\Phi'$ with $h=0$, we can induct on $B$ to obtain \[ f_{i1}\spaced{=} A_1 [h_1] + \dots + A_s [h_s] \] where each $h_i$ is a multiplication gate of fan-in $\Omega(d^{1/\Delta})$. Plugging this in \eqref{eqn:HY-in-shallow}, and replacing $[h_i]$'s by the factors, gives \eqref{eqn:HY-for-shallow}. \end{proofof} \section{An Application: Tensor rank and formula lower bounds}\label{sec:tensor-rank} Tensors are a natural \emph{higher dimensional} analogue of matrices. For the purposes of this short note, we shall take the equivalent perspective of \emph{set-multilinear polynomials}. A detailed discussion on this can be seen in \cite{github}. \begin{definition}[Set-multilinear polynomials]\label{defn:set-multilinear} Let $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_d$ be a partition of variables and let $|\vecx_i| = m_i$. A polynomial $f(\vecx)$ is said to be \emph{set-multilinear} with respect to the above partition if every monomial $m$ in $f$ satisfies $\abs{m \intersection \vecx_i} = 1$ for all $i \in [d]$. \end{definition} In other words, each monomial in $f$ picks up one variable from each part in the partition. It is easy to see that many natural polynomials, such as the determinant and the permanent, are set-multilinear for an appropriate partition of variables. With this interpretation, a rank-$1$ tensor is precisely a \emph{set-multilinear} product of linear forms such as \[ f(\vecx) \spaced{=} \ell_1(\vecx_1) \cdots \ell_d(\vecx_d) \] where each $\ell_i(\vecx_i)$ is a linear form in the variables in $\vecx_i$. 
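As a concrete illustration in code (the dimensions and names here are ours, purely for demonstration), the coefficient tensor of such a product of linear forms is exactly the outer product of the coefficient vectors of the $\ell_i$'s, which is why it is a rank-$1$ tensor:

```python
import numpy as np

# Sketch: for n = 4, d = 3, a set-multilinear product l1(x1) * l2(x2) * l3(x3)
# of linear forms has, as its coefficient tensor, the outer product of the
# three coefficient vectors of l1, l2, l3.
n, d = 4, 3
rng = np.random.default_rng(0)
ells = [rng.standard_normal(n) for _ in range(d)]  # coefficients of l1, l2, l3

T = np.einsum('i,j,k->ijk', *ells)                 # the rank-1 tensor f: [n]^3 -> F
# T[a, b, c] is the coefficient of the monomial x1_a * x2_b * x3_c in f
assert T.shape == (n, n, n)
assert np.isclose(T[1, 2, 3], ells[0][1] * ells[1][2] * ells[2][3])
```

A general tensor is then a sum of such outer products, and the tensor rank defined next is the least number of summands needed.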
\begin{definition}[Tensor rank, as set-multilinear polynomials] For a polynomial $f(\vecx)$ that is set-multilinear with respect to $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_d$, the \emph{tensor rank} of $f$ (denoted by $\operatorname{TensorRank}(f)$) is the smallest $r$ for which $f$ can be expressed as a \emph{set-multilinear} $\SPS$ circuit: \[ f(\vecx) \spaced{=} \sum_{i=1}^r \ell_{i1}(\vecx_1) \cdots \ell_{id}(\vecx_d). \] \end{definition} However, even computing the rank of a degree-$3$ tensor is known to be $\NP$-hard \cite{h90}. But one could still ask if one can prove good upper or lower bounds for some specific tensors, or try to find an explicit tensor with large rank. \subsection*{Properties of tensor rank} The following are a couple of basic properties that follow almost immediately from the definitions. \begin{lemma}[Sub-additivity of tensor rank]\label{lem:tensor-subadditivity} Let $f$ and $g$ be two set-multilinear polynomials on $\vecx_1 \sqcup \cdots \sqcup \vecx_d$. Then, $\operatorname{TensorRank}(f+g) \leq \operatorname{TensorRank}(f) + \operatorname{TensorRank}(g)$. \end{lemma} \begin{lemma}[Sub-multiplicativity of tensor rank]\label{lem:tensor-submultiplicativity} Let $f(\vecy)$ be set-multilinear on $\vecy = \vecy_1 \sqcup \cdots \sqcup \vecy_a$ and $g(\vecz)$ be set-multilinear on $\vecz = \vecz_1 \sqcup \cdots \sqcup \vecz_b$ with $\vecy \intersection \vecz = \emptyset$. Then the polynomial $f \cdot g$, which is set-multilinear on $\vecy \union \vecz = \vecy_1 \sqcup \cdots \sqcup \vecy_a \sqcup \vecz_1 \sqcup \cdots \sqcup \vecz_b$, satisfies\footnote{Tensor rank, in general, \textbf{does not} satisfy the relation $\operatorname{TensorRank}(f \cdot g) = \operatorname{TensorRank}(f) \cdot \operatorname{TensorRank}(g)$. For a concrete counterexample, see \cite{CJZ17}.} \[ \operatorname{TensorRank}(f \cdot g) \leq \operatorname{TensorRank}(f) \cdot \operatorname{TensorRank}(g). 
\] \end{lemma} The following is a trivial upper bound for the tensor rank of any degree $d$ set-multilinear polynomial $f$. \begin{lemma}\label{lem:tensor-rank-trivial-upperbound} Let $f$ be a set-multilinear polynomial with respect to $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_d$ and say $n_i = \abs{\vecx_i}$. Then, \[ \operatorname{TensorRank}(f) \spaced{\leq } \frac{\prod_{i=1}^d n_i}{\max_i{n_i}}. \] In particular, if all $n_i=n$, then $\operatorname{TensorRank}(f) \leq n^{d-1}$. \end{lemma} A counting argument implies that there do exist tensors of rank at least $n^{d-1}/d$, as each elementary tensor has $nd$ \emph{degrees of freedom} whereas an arbitrary tensor has $n^d$ \emph{degrees of freedom}. \footnote{ One might think that the above upper bound of $n^{d-1}$ should be tight. Bizarrely, it is not! For example (cf. \cite{p85}), the maximum rank of any tensor of shape $2\times 2 \times 2$ is $3$ and not $4$ as one might expect! Tensor rank also behaves in some strange ways under \emph{limits}, unlike the usual matrix rank. } So it is natural to ask whether we can construct explicit tensors of high rank. Raz \cite{raz10} showed that in certain regimes of the parameters involved, an answer to the above question would yield arithmetic formula lower bounds. We elaborate on this now. \subsection{Tensor rank of small formulas} Henceforth, the variables in $\vecx$ are partitioned as $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_d$ with $\abs{\vecx_i} = n$ for all $i\in [d]$. The main motivating question of Raz \cite{raz10} was the following: \begin{quote} If $f$ is a set-multilinear polynomial that is computed by a small formula, what can one say about its tensor rank? \end{quote} \noindent Raz gave a partial\footnote{Partial in the sense that we do not know if the bound is tight.} answer to this question by showing the following result. 
\begin{theorem}\label{thm:tensor-rk-of-homogeneous-formulas} Let $\Phi$ be a formula of size $s \leq n^c$ computing a set-multilinear polynomial $f(\vecx)$ with respect to $\vecx = \vecx_1\sqcup \cdots \sqcup \vecx_d$. If $d = O(\log n/\log\log n)$, then, \[ \operatorname{TensorRank}(f) \spaced{\leq} \frac{n^d}{n^{d/\exp(c)}}. \] \end{theorem} To prove \autoref{thm:tensor-rk-of-homogeneous-formulas}, Raz~\cite{raz10} first showed that when $d$ is small compared to $n$ (specifically, $d = O(\log n /\log\log n)$), any small formula can be converted to a \emph{set-multilinear} formula with only a polynomial overhead. Formally, he shows the following theorem, which is interesting and surprising in its own right\footnote{Indeed, it was believed that even transforming a formula into a homogeneous formula would cause a superpolynomial blow-up in its size if the degree of the polynomial computed by the formula is growing with $n$.}. \begin{definition}[Set-multilinear formulas] A formula $\Phi$ is said to be a \emph{set-multilinear formula} if every gate in the formula computes a set-multilinear polynomial syntactically. That is, if $f$ and $g$ are polynomials computed by children of a $+$ gate, then both $f$ and $g$ are set-multilinear polynomials of the same degree over $\vecx$, with possibly different partitions. And if $f$ and $g$ are polynomials computed by children of a $\times$ gate, then both $f$ and $g$ are set-multilinear polynomials on disjoint sets of variables. \end{definition} \begin{theorem}[\cite{raz10}]\label{thm:form-to-smlform} Suppose $d = O\pfrac{\log n}{\log\log n}$. If $\Phi$ is a formula of size $s = \poly(n)$ that computes a set-multilinear polynomial $f(\vecx_1,\cdots, \vecx_d)$, then there is a \emph{set-multilinear} formula of $\poly(s)$ size that computes $f$ as well. 
\end{theorem} He then proceeds to show that set-multilinear formulas of polynomial size can only compute polynomials with tensor rank non-trivially far from the upper bound of $n^{d-1}$. More formally, he shows the following theorem. \begin{theorem}[\cite{raz10}] \label{thm:raz-tensorrank-sml} Let $\Phi$ be a set-multilinear formula of size $s \leq n^c$ computing a polynomial $f(\vecx_1,\cdots, \vecx_d)$. Then, \[ \operatorname{TensorRank}(f) \spaced{\leq} \frac{n^d}{n^{d/\exp(c)}}. \] \end{theorem} It is immediately clear that \autoref{thm:raz-tensorrank-sml} and \autoref{thm:form-to-smlform} imply \autoref{thm:tensor-rk-of-homogeneous-formulas}. In this section, we give a simple proof of \autoref{thm:raz-tensorrank-sml} using \autoref{thm:depth-red-hom-formulas}. We refer the reader to Raz's paper~\cite{raz10} or \cite{github} for a full proof of \autoref{thm:form-to-smlform}. \begin{proof}[Proof of \autoref{thm:raz-tensorrank-sml}] We shall start with the set-multilinear formula $\Phi$ of size $n^c$ and reduce it to depth-$4$ via \autoref{thm:depth-red-hom-formulas} for a bottom degree parameter $t$ that shall be chosen shortly. It is fairly straightforward to observe that the depth reduction preserves multilinearity and set-multilinearity as well. Therefore we now have a set-multilinear expression of the form \[ f \spaced{=} T_1 + \cdots + T_{s'} \] where $s' \leq s^{10(d/t)} = n^{10c(d/t)}$ and each $T_i = Q_{i1} \cdots Q_{ia_i}$ is a set-multilinear product. Let us fix one such term $T = Q_{1} \cdots Q_a$ and we know that this is a set-multilinear product with $a \geq \frac{d\log t}{10 t}$ non-trivial factors (by \autoref{thm:depth-red-hom-formulas}). Let $d_i = \deg(Q_i)$. 
By the sub-multiplicativity of tensor rank (\autoref{lem:tensor-submultiplicativity}) and the trivial upper bound (\autoref{lem:tensor-rank-trivial-upperbound}) we have \begin{align*} \operatorname{TensorRank}(T) & \leq n^{d_1 - 1} \cdots n^{d_a - 1}\\ & = n^{d - a}\\ \implies \operatorname{TensorRank}(f) & \leq s' \cdot n^{d-a} & \text{(\autoref{lem:tensor-subadditivity})}\\ & = \frac{n^d}{n^{a - 10c(d/t)}} \end{align*} Let us focus on the exponent of $n$ in the denominator. Using the lower bound on $a$ from \autoref{thm:depth-red-hom-formulas}, we get \[ a - 10c(d/t) \spaced{\geq} \frac{d\log t}{10t} - 10c\frac{d}{t} \spaced{=} \frac{d}{t} \inparen{\frac{\log t}{10} - 10c} \] If we set $\frac{\log t}{10} = 11c$, then we get $a - 10c(d/t) \;\geq\; cd/t \;=\; d/\exp(c)$. Hence, \[ \operatorname{TensorRank}(f) \spaced{\leq} \frac{n^d}{n^{d/\exp(c)}}\qedhere \] \end{proof} We would like to remark that, in spirit, a tensor rank upper bound for formulas is essentially a form of non-trivial reduction to set-multilinear depth three circuits. In this sense, this connection between tensor rank upper bounds and reduction to depth four is perhaps not too unnatural. Also, observe that if instead of a general set-multilinear formula, we had started with a constant depth set-multilinear formula, we would have obtained a slightly better upper bound (better dependence on $c$) on the tensor rank of $f$. The improvement essentially comes from the fact that the depth reduction for formulas with product depth $\Delta$ to $\SPSP^{[t]}$ guarantees that the fan-in of product gates at the second level is at least $\Theta\left( \frac{d}{t}\cdot t^{1/\Delta} \right)$ (\autoref{sec:constant-depth-circuits}). We skip the details for the reader to verify. \subsection{An improvement} The result of Raz \cite{raz10} required $d = O(\log n / \log \log n)$ to be able to \emph{set-multilinearize} the formula without much cost. 
However, with this alternate proof via the improved depth reduction, we can delay the set-multilinearization until a later stage and thus get the same upper bound on the tensor rank for much larger $d$, provided that the formula we started with was homogeneous. \begin{theorem} Let $f$ be a set-multilinear polynomial with respect to $\vecx = \vecx_1 \sqcup \cdots \sqcup \vecx_d$ that is computed by a homogeneous formula (not necessarily set-multilinear) $\Phi$ of size $s = n^c$. If $d$ is \emph{sub-polynomial} in $n$, that is $\log d = o(\log n)$, then \[ \operatorname{TensorRank}(f) \spaced{\leq} \frac{n^d}{n^{d / \exp(c)}}. \] \end{theorem} \begin{proof} As earlier, we shall start with the formula $\Phi$ of size $n^c$ and reduce it to a $\SPSP^{[t]}$ formula $\Phi'$ of size $n^{10c(d/t)}$ for a $t$ that shall be chosen shortly. Again, $\Phi'$ is a sum of terms of the form $T = Q_1 \cdots Q_a$, a product of $a \geq \frac{d \log t}{10t}$ non-trivial factors. The difference here is that this is not necessarily a set-multilinear product. Let $d_i = \deg(Q_i)$. Among the monomials in $Q_i$, there may be some that are divisible by two or more variables from some part $\vecx_j$ and others that are products of variables from distinct parts. For any $S \subset [d]$, let $Q_{i,S}$ be the sum of monomials of $Q_i$ that is a product of exactly one variable from each $\vecx_j$ for $j \in S$. Note that no monomial of $Q_i$ that is divisible by two or more variables from some $\vecx_j$ can contribute to a set-multilinear monomial of $f$. Thus, if $\mathrm{SML}(T)$ is the restriction of $T$ to just the set-multilinear monomials of $T$, then \[ \mathrm{SML}(T)\spaced{=} \sum_{\substack{S_1 \sqcup \cdots \sqcup S_a = [d]\\|S_i| = d_i}}\; Q_{1,S_1} \cdots Q_{a,S_a} \] Here, $S_1, S_2, \ldots, S_a$ form a partition of the set $[d]$. 
We can observe that the tensor rank of each summand is upper bounded by $n^{d_1 - 1}n^{d_2 - 1}\cdots n^{d_a - 1}$ and the number of summands is at most $\binom{d}{d_1}\binom{d-d_1}{d_2}\cdots\binom{d-\sum_{i=1}^{a-1}d_i}{d_a}$. Using \autoref{lem:tensor-subadditivity} and \autoref{lem:tensor-submultiplicativity}, we get the following. \begin{eqnarray*} \operatorname{TensorRank}(\mathrm{SML}(T)) & \leq & \frac{n^d}{n^a}\cdot \binom{d}{d_1\ d_2\ \cdots \ d_a}\\ & \leq & n^{d-a} \cdot d^d\\ & = & n^{d-a} \cdot n^{d \log d / \log n}\\ \implies \quad \operatorname{TensorRank}(f) & \leq & n^d / n^{a - 10c(d/t) - d\log d/\log n} \end{eqnarray*} Again, let us focus on the exponent in the denominator \begin{eqnarray*} a - \frac{10c \cdot d}{t} - \frac{d \log d}{\log n} & \geq & \frac{d}{t} \inparen{\frac{\log t}{10} - 10c - \frac{t \log d}{\log n}} \end{eqnarray*} Once again we shall set $t = 2^{O(c)}$ so that $\frac{\log t}{10} - 10c = c$ and since $\log d = o(\log n)$ it follows that \[ \frac{d}{t} \inparen{\frac{\log t}{10} - 10c - \frac{t \log d}{\log n}} \geq \frac{d}{\exp(c)} \] Hence, \[ \operatorname{TensorRank}(f) \spaced{\leq} \frac{n^d}{n^{d/\exp(c)}}\qedhere \] \end{proof} \end{document}
\begin{definition}[Definition:Vectorial Matroid] Let $V$ be a vector space. Let $S$ be a finite subset of $V$. Let $\struct{S, \mathscr I}$ be the matroid induced by linear independence in $V$ on $S$. From Matroid Induced by Linear Independence in Vector Space is Matroid, $\struct{S, \mathscr I}$ is a matroid. Then any matroid isomorphic to $\struct{S, \mathscr I}$ is called a '''vectorial matroid'''. \end{definition}
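A minimal computational sketch (ours, not part of the definition above): for a finite set $S$ of vectors in $\R^3$, the independent sets of the induced vectorial matroid are exactly the subsets whose vectors form a matrix of full column rank.

```python
import numpy as np
from itertools import combinations

# Sketch of a vectorial matroid: S is a finite subset of the vector space R^3,
# and a subset of S is independent iff its vectors are linearly independent.
S = [(1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]

def independent(idx):
    if not idx:
        return True                      # the empty set is always independent
    A = np.array([S[i] for i in idx], dtype=float).T
    return np.linalg.matrix_rank(A) == len(idx)

# I is the family of independent sets of the matroid (S, I)
I = [set(c) for r in range(len(S) + 1)
     for c in combinations(range(len(S)), r) if independent(c)]

assert {0, 1, 3} in I                    # three linearly independent vectors
assert {0, 1, 2} not in I                # (1,1,0) = (1,0,0) + (0,1,0)
# hereditary property of matroids: subsets of independent sets are independent
assert all(X - {x} in I for X in I for x in X)
```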
\begin{document} \begin{abstract} We show that the set of Finsler metrics on a manifold contains an open everywhere dense subset of Finsler metrics with infinite-dimensional holonomy groups. \end{abstract} \maketitle \noindent {\small \emph{Keywords:} Finsler geometry, algebras of vector fields, holonomy, curvature.} \noindent {\small \emph{2000 Mathematics Subject Classification:} 53C29, 53B40, 22E65, 17B66.} \section{Introduction} \label{sec:1} Finsler metrics appeared already in the inaugural lecture of B.~Riemann in 1854 \cite{Riemann_1854}, under the name \textit{generalized metric}. At the beginning of the XXth century, the intensive study of Finsler metrics was motivated by optimal transport theory. A group of mathematicians led by C.~Carath\'eodory aimed to adapt mathematical tools which were effective in Riemannian geometry (such as affine connections, Jacobi vector fields, sectional curvature) to a more general situation. P.~Finsler was a student of Carath\'eodory, and his dissertation \cite{Finsler_1918} is one of the important steps in this direction. Riemannian geometry is one of the main sources of challenging problems in Finsler geometry: many Riemannian results are not valid in the Finslerian setup and one asks under what additional assumptions they are correct. Our paper studies the holonomy groups of Finsler manifolds. We give precise definitions later; at the present point let us recall that the Berwald connection (introduced by L.~Berwald in 1926 \cite{Berwald_1926}) can be viewed as an Ehresmann connection on the unit tangent bundle $\mathcal{I}M$. Its holonomy group (at $x\in M$) is the subgroup of the group $\diff{\mathcal{I}_x}$ generated by the parallel transports along the loops starting and ending at $x$. For Riemannian metrics, the Berwald connection specializes to the Levi-Civita connection. The study of Riemannian holonomy groups is a prominent topic in Riemannian geometry and mathematical physics. It is known (see e.g. 
A.~Borel and A.~Lichnerowicz \cite{Borel_Lichn_1952}) that the holonomy group is a subgroup of the orthogonal group; in particular, it is always finite-dimensional. Moreover, all possible holonomy groups are described and classified, due in particular to breakthrough results of M.~Berger and J.~Simons \cite{Berger_1955, Simons}. In the Finslerian case, the situation is very different and not much is known. By \cite{Szabo_1981} (see also \cite{Ma2,Matveev1,Vi}), the so-called Berwald manifolds have finite-dimensional holonomy group. Also the so-called Landsberg manifolds have finite-dimensional holonomy group \cite{Kozma_2000}, but it is not yet known whether non-Berwaldian Landsberg manifolds exist \cite{Matveev}. We are not aware of other examples of Finsler metrics with finite-dimensional holonomy group; finding such examples is an interesting problem. On the other hand, there are also not many explicit examples of Finsler manifolds with infinite-dimensional holonomy group \cite{Hubicska_Muzsnay_tangent_2017, Muzsnay_Nagy_max_2015}, and all these examples have constant curvature. A natural and fundamental question in this context is whether for a generic Finsler manifold the holonomy group is infinite-dimensional. Its simplest version was explicitly asked by S.-S.~Chern et al.\ in \cite[page 85]{ChernShen2005}. In our paper we prove that for a generic Finsler manifold the holonomy group is infinite-dimensional: \begin{theorem} \label{main_thm_2} In the set $\F$ of $C^\infty$-smooth Finsler metrics on a manifold $M$ of dimension $n\geq 2$, there exists a subset $\widetilde{\F}$ of Finsler metrics with infinite dimensional holonomy group, which is open and everywhere dense in any $\mathcal{C}^{m}$-topology, $m\geq 8$. 
\end{theorem} What we essentially prove is that one can perturb any Finsler metric $F$ by a $C^\infty$-small perturbation in a neighbourhood of any point $x\in M$ such that for every open nonempty subset $U \subseteq M$ containing $x$, the perturbed Finsler structure $(U, F_t)$ has infinite dimensional holonomy group. Moreover, $F_t$ lies in an open subset of the space of all $C^\infty$-Finsler metrics, equipped with the $C^{m}$-topology for $m\ge 8$, such that every Finsler metric $F''$ in this subset also has infinite dimensional holonomy group. Theorem \ref{main_thm_2} is true microlocally and on the level of germs (see Remark \ref{rem:germs}). The perturbation is given by a formula. We show that for almost every $t \in [0,1]$, the perturbation $F_t$ on its indicatrix $\I_x^{\, t}$ has the full infinity-jet at every point $y\in \I_x^{\, t}$. Based on this, we conjecture that in the generic case, the holonomy group of a Finsler manifold coincides with the full diffeomorphism group of the indicatrix. Our results imply that, in contrast to the Riemannian case, the closure of the holonomy group is not a compact group for most Finsler metrics. Similar results for the \emph{linear holonomy group} (defined via the linear parallel transport) were recently obtained in \cite{Ivanov_Lytchak_2019}. The proof is organised as follows. We first show that the standard Funk metric $F_{Funk}$ has a `sufficiently large' holonomy algebra. For dimension 2, this was known \cite{Hubicska_Muzsnay_tangent_2017, Muzsnay_Nagy_max_2015}; we generalise these results to all dimensions. Next, we employ the trick from \cite[\S 3.1]{relativity} and show that with the help of $F_{Funk}$ one can perturb an arbitrary Finsler metric such that the result also has a `sufficiently large' holonomy algebra. Then, we show that if the holonomy algebra is `sufficiently large' then it is infinite-dimensional. This step is in fact a general statement about algebras of smooth vector fields and possibly can be applied elsewhere. 
Therefore, let us formulate it as Theorem \ref{thm:no_transitive_finite} below; this will also explain what we understand by `sufficiently large'. An algebra $\mathfrak{g}$ of vector fields on $U\subseteq \mathbb{R}^n$ is called \emph{$3$-jet generating} at $x\in U$, if the set of 3rd jets of these vector fields at the point $x$ coincides with the space of all 3rd jets of vector fields at $x$ (see Definition \ref{def:gen}). In other words, every vector field can be approximated at $x$ with order three by a vector field from the algebra. For example, if the algebra is \emph{locally transitive} at $x$, i.e., if the elements of the algebra at $x$ span the whole $T_xU$, then it is $0$-jet generating. \begin{theorem} \label{thm:no_transitive_finite} Let $\mathfrak{g}$ be a Lie algebra of vector fields on a manifold $U$. If there exists a point where it is 3-jet generating, then $\mathfrak{g}$ is infinite-dimensional. \end{theorem} If the dimension of $U$ is 1, the result is known and is due to Sophus Lie, see e.g.~\cite[Theorem 2.70]{Olver}. As examples show (see e.g.~the tables at the back of \cite{Olver}, where vector field algebras of arbitrary finite dimension are listed), the 3-jet generating property is important. \section{Preliminaries} \label{sec:2} Let $M$ be an $n$-dimensional manifold, $TM$ its tangent manifold, and $\pi: TM \to M$ the canonical projection. Local coordinates $(x_i)$ on $M$ induce local coordinates $(x_i, y_i)$ on $TM$. The $k$th order jet of a function $f\in C^\infty(M)$ (resp. smooth vector field $V\in \mathfrak X(M)$) at $x\in M$ will be denoted by $j^k_x(f)$ (resp.~$j^k_x(V)$). In local coordinates, the $k$th order jet can be viewed as the collection of all derivatives of the function or of the vector field up to the order $k$. 
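Collecting such a jet in coordinates is purely mechanical; a small sympy sketch (illustrative only, not part of the paper's formalism):

```python
import sympy as sp

# Illustrative only: the 1-jet of f at a point is the tuple
# (f, df/dx_1, ..., df/dx_n) evaluated at that point.
x1, x2 = sp.symbols('x1 x2')

def jet1(f, variables, point):
    """Value and first partial derivatives of f at `point`."""
    at = dict(zip(variables, point))
    return [f.subs(at)] + [sp.diff(f, v).subs(at) for v in variables]

print(jet1(sp.sin(x1) * x2, (x1, x2), (0, 1)))   # [0, 1, 0]
```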
For example, the $1$st order jet of a function at a point $x$ can be viewed as $n+1$ numbers \begin{math} \left(f, \tfrac{\partial f}{\partial x_1}, \dots , \tfrac{\partial f}{\partial x_n} \right), \end{math} and of a vector field $V$ as $n(n+1)$ numbers \begin{math} \left(V_1, \dots,V_n, \tfrac{\partial V_1}{\partial x_1}, \dots , \tfrac{\partial V_n}{\partial x_n}\right). \end{math} \subsection{Finsler manifolds, connection} \label{sec:finsl} \noindent The function $F \colon TM \to \mathbb{R}_+$ is called a \textit{Finsler metric}, if it is a positively 1-homogeneous continuous function, $C^\infty$-smooth on $\T M=TM\!\setminus\!\{0\}$ and \begin{displaymath} \label{metric_coeff} g_{ij}= \frac{\partial^2 \E} {\partial y_i\partial y_j} \end{displaymath} is positive definite at every $y\in \T_xM$, where $\E:=\frac{1}{2}F^2$ denotes the \textit{energy function} of $F$. A pair $(M, F)$ is called a \textit{Finsler manifold}. The hypersurface of $T_xM$ defined by \begin{equation} \label{eq:2} \I_x \! =\! \halmazpont{y \in T_xM}{F_x(y) \! = \! 1} \end{equation} is called the \emph{indicatrix} at $x \in M$. The \emph{geodesics} of a Finsler manifold $(M, F)$ are given by the solutions of the following system of second order ordinary differential equations \begin{equation} \label{eq:geodesic} \ddot{x}^i + 2G^i(x,\dot x)=0, \end{equation} where the geodesic coefficients $G^i=G^i(x,y)$ are determined by the formula \begin{equation} \label{eq:G_i} \hphantom{\qquad i = 1,\dots, n,} G^i = \frac{1}{4} g^{il}\Bigl(2\frac{\partial g_{jl}}{\partial x_k} - \frac{\partial g_{jk}}{\partial x_l} \Bigr) \, y_jy_k, \qquad i = 1,\dots, n. \end{equation} The (Berwald) parallel translation on a Finsler manifold can be introduced by considering the Ehresmann connection: the horizontal distribution is determined by the image of the horizontal lift $T_xM\to T_{(x,y)}TM$ defined in a local basis as \begin{equation} \label{eq:lift2} \delta_i:= \left(\frac{\partial}{\partial x_i} \right)^{\!\! 
h} = \frac{\partial}{\partial x_i} -G^k_i \frac{\partial}{\partial y_k}, \end{equation} where $y\in \T_xM$ and \begin{math} G^i_j = \frac{\partial G^i}{\partial y_j}. \end{math} We have the decomposition \begin{displaymath} TTM = \mathcal{H} \oplus \mathcal{V}, \end{displaymath} where $\mathcal{V}=\ker \pi_*$ is the vertical distribution. The corresponding projectors are denoted by $h$ and $v$. The \textit{horizontal Berwald covariant derivative} of a vertical vector field $\xi$ with respect to a vector field $X\in \mathfrak{X}(M)$ is defined by \begin{displaymath} \nabla_{X} \xi = [X^h , \xi ]. \end{displaymath} In local coordinates, if \begin{math} \xi=\xi^{i}(x,y) \frac{\partial}{\partial y_{i}} \end{math} and $X(x)=X^{i} \frac{\partial}{\partial x_{i}}$, then \begin{displaymath} \label{eq:covder} \nabla_{X} \xi = \left( \frac{\partial \xi ^{i}}{\partial x_{j}} - G^{k}_j \frac{\partial \xi ^{i}}{\partial y_{k}} +\frac{\partial G^{i}_j }{\partial y_{k}} \xi^{k} \right) X_{j} \frac{\partial}{\partial y_i}. \end{displaymath} \subsection{Parallel translation and curvature} \label{sec:finsler2} \ \noindent Parallel vector fields along a curve $c$ are characterized by the property that their covariant derivative vanishes. Parallel translation can be obtained through the following geometric construction: the horizontal lift of a curve $c\colon [0,1]\to M$ with initial condition $X_0\in T_{c(0)}M$ is a curve $c^h\colon [0,1]\to TM$ such that $\pi \circ c^h \!=\! c$, $\frac{d c^h}{dt}\!=\!(\frac{d c}{dt})^h$ and $c^h(0) \!=\! X_0$. Then the parallel translation of $X_0$ along the curve $c$ from $c(0)$ to $c(1)$ is \begin{equation} \label{eq:parallel_3} \p_{c}(X_0) = c^h(1). \end{equation} The horizontal distribution $\mathcal{H}$ is, in general, non-integrable. 
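In coordinates, computing the horizontal lift $c^h$ amounts to integrating the ODE $\dot y^k = -G^k_i(x, y)\,\dot x^i$ along the curve $c$. The following numerical sketch is illustrative (the function `G`, supplying the matrix of connection coefficients $G^k_i$, is a hypothetical user input); it is checked on the flat case $G \equiv 0$, where transport around any loop is the identity:

```python
import numpy as np

def parallel_transport(curve, dcurve, G, y0, steps=200):
    """Transport y0 along `curve` by integrating the horizontal-lift ODE
    dy^k/dt = -G^k_i(x(t), y(t)) dx^i/dt with a fixed-step RK4 scheme."""
    y = np.array(y0, dtype=float)
    h = 1.0 / steps
    rhs = lambda t, y: -G(curve(t), y) @ dcurve(t)
    for n in range(steps):
        t = n * h
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, y + h/2*k1)
        k3 = rhs(t + h/2, y + h/2*k2)
        k4 = rhs(t + h, y + h*k3)
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return y

# Flat case: G^k_i = 0, so transport around a closed loop is the identity.
flat_G = lambda x, y: np.zeros((2, 2))
loop = lambda t: np.array([np.cos(2*np.pi*t), np.sin(2*np.pi*t)])
dloop = lambda t: 2*np.pi*np.array([-np.sin(2*np.pi*t), np.cos(2*np.pi*t)])
print(parallel_transport(loop, dloop, flat_G, [1.0, 0.5]))  # stays (1.0, 0.5)
```

For a non-flat metric, the failure of this map to be the identity on loops is exactly what the holonomy group measures.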
The obstruction to its integrability is given by the \emph{curvature tensor} \begin{math} R=\frac{1}{2}[h,h] \end{math} which is the Nijenhuis torsion of the horizontal projector $h$ associated to the subspace $\mathcal{H}$. The \emph{curvature tensor} field is defined by \begin{equation} \label{eq:R} R = R^i_{jk}(x,y) \, dx_j \! \otimes dx_k \! \otimes \frac{\partial}{\partial y_i} \end{equation} where \begin{equation} \label{eq:curv_tensor} R^i_{jk} = \frac{\partial G^i_j}{\partial x_k} - \frac{\partial G^i_k}{\partial x_j} + G_j^m G^i_{k m} - G_k^m G^i_{j m} \end{equation} in a local coordinate system with $G^i_{jk}=\frac{\partial G^i_j}{\partial y_k}$. \begin{remark} \label{remark:jet} From formula \eqref{eq:G_i} we get that the geodesic coefficients $G^i(x,y)$ can be calculated in terms of the 3\textsuperscript{rd} order jet of the Finsler function $F$ at $(x,y)$. Therefore, the coefficients $R^i_{jk}(x,y)$ of the curvature tensor and the curvature vector fields \begin{equation} \label{eq:R_map} \mathcal R_{ij} := R(\delta_i,\delta_j), \qquad i,j =1, \dots, n, \end{equation} can be expressed algebraically by the 5\textsuperscript{th} order jet of $F$. More generally, the value of $k$\textsuperscript{th} order covariant derivatives and $k$\textsuperscript{th} successive Lie brackets of curvature vector fields can be expressed algebraically by the $(k+5)$\textsuperscript{th} order jet of $F$. \end{remark} \subsection{The holonomy group, the holonomy algebra and its subalgebras} \label{sec:holonomy} \ \noindent The \emph{holonomy group} $\mathcal{H}ol_x(M,F)$ of a Finsler manifold $(M, F)$ at a point $x\in M$ is the group generated by parallel translations along piece-wise differentiable closed curves starting and ending at $x$. 
Since the parallel translation \eqref{eq:parallel_3} is 1-homogeneous and preserves the norm, one can consider it as a map \begin{equation} \label{eq:hol_elem} \p_{c}:\I_x \to \I_x; \end{equation} therefore, the holonomy group can be seen as a subgroup of the diffeomorphism group of the indicatrix: \begin{displaymath} \mathcal{H}ol_x (M, F) \ \subset \ \diff{\I_x}, \end{displaymath} and its tangent space at the identity is called the holonomy algebra: \begin{equation} \label{eq:hol_x} \mathfrak{hol}\hspace{1pt}_x (M, F) \ \subset \ \X{\I_x}. \end{equation} We list below the most important properties of the \emph{holonomy algebra} (see \cite{Hubicska_Muzsnay_tangent_2017}): \begin{property} \label{prop:hol} \ \begin{enumerate}[topsep=3pt, partopsep=3pt,leftmargin=25pt] \setlength{\itemsep}{1pt} \setlength{\parskip}{1pt} \setlength{\parsep}{1pt} \item \label{it:1} $\mathfrak{hol}\hspace{1pt}_x (M, F)$ is a Lie subalgebra of $\X{\I_x}$, \item \label{it:3} the exponential image of $\mathfrak{hol}\hspace{1pt}_x (M, F)$ is in the topological closure of $\mathcal{H}ol_x(M,F)$. \end{enumerate} \end{property} The \emph{infinitesimal holonomy algebra} $\mathfrak{hol}^*_x(M, F)$ is generated by curvature vector fields and their horizontal Berwald covariant derivatives, that is: \begin{equation} \label{eq:inf_hol_alg} \mathfrak{hol}^*_x(M, F):= \left\langle \nabla_{Z_1} \dots \nabla_{Z_k} R(X^h, Y^h) \ \big| \ X,Y, Z_1, \dots, Z_k\in \X{M} \right\rangle_{Lie}. \end{equation} The infinitesimal holonomy algebra $\mathfrak{hol}_x^{*}(M, F)$ is a Lie subalgebra of $\mathfrak{hol}\hspace{1pt}_x (M, F)$. \begin{remark} \label{rem:infinitesimal} The infinitesimal holonomy algebra is local in nature, that is, for any open neighbourhood $U$ of $x\in M$ we get \begin{math} \mathfrak{hol}^*_x(M, F) = \mathfrak{hol}^*_x \bigl( U, F|_{\pi^{-1}(U)} \bigr). 
\end{math} For that reason, we will simplify the notation \begin{displaymath} \mathfrak{hol}^*_x(F) := \mathfrak{hol}^*_x(M, F) \end{displaymath} by omitting the neighbourhood of the point where the infinitesimal holonomy algebra is determined. Indeed, the curvature vector fields, their horizontal Berwald covariant derivatives and their Lie brackets can be computed on an arbitrarily small neighbourhood of $x$, therefore their value at $x$ can be determined locally. \end{remark} We have the inclusions of Lie algebras: \begin{equation} \label{eq:curv_hol_x} \mathfrak{hol}_x^{*}(F) \subset \mathfrak{hol}\hspace{1pt}_x (M, F) \subset \X{\I_x}, \end{equation} therefore, at the level of groups, we get \begin{equation} \label{eq:group_curv_hol_x} \exp \bigl(\mathfrak{hol}_x^{*}(F) \bigr) \subset \exp \bigl(\mathfrak{hol}\hspace{1pt}_x (M, F)\bigr) \subset \mathcal{H}ol_x^{c} \bigl(M, F\bigr) \subset \diff{\I_x} \end{equation} where $\mathcal{H}ol_x^c (M, F)$ denotes the topological closure of the holonomy group with respect to the $C^\infty$--topology of $\diff{\I_x}$. We call a Lie algebra infinite dimensional if it contains infinitely many $\mathbb R$-linearly independent elements. Clearly, using the tangent property of the holonomy algebra, if $\mathfrak{hol}\hspace{1pt}_x(M, F)$ is infinite dimensional, then the holonomy group cannot be a finite dimensional Lie group. This observation motivates the following \begin{definition} \label{def:inf_dim} The holonomy group of a Finsler manifold $(M, F)$ is called infinite dimensional if its holonomy algebra is infinite dimensional. \end{definition} We refer to \cite{Hubicska_Muzsnay_tangent_2017} for a discussion of the tangent Lie algebras of diffeomorphism groups and of the relation between the holonomy group and the holonomy algebra. \section{On the holonomy of the standard Funk metric} \label{sec:3} \noindent A Funk metric can be described as follows. 
Let $\Omega$ be a bounded convex domain in $\mathbb{R}^n$ and denote its boundary by $\partial \Omega$. We can define a Finsler norm function $F_\Omega (x,y)$ in the interior of $\Omega$ for any vector $y\in T_x\Omega$ by the formulas \begin{displaymath} F_\Omega(x,y)>0, \qquad x+\frac{y}{F_\Omega(x,y)}=z, \end{displaymath} where $z\in \partial \Omega$. This norm function is called the Funk norm function induced by $\Omega$. The Funk norm induced by the origin-centered unit ball $B^n\subset \mathbb R^n$ will be called the \emph{standard Funk norm} and will be denoted by $F_{\B^{^n}}$. We denote by $o=(0, \dots, 0)$ the origin in $\mathbb R^n$. \begin{remark} \label{rem:dif_group} The holonomy of $(\B^{^2}, F_{\B^{^2}})$ was investigated in \cite[Chapter 5]{Muzsnay_Nagy_max_2015}. It was proved that the infinitesimal holonomy algebra $\mathfrak{hol}^*_o (F_{\B^{^2}})$ contains the Fourier algebra $\mathsf{F}(\mathbb S^1)$ whose elements are vector fields $f\frac{d}{dt}$ such that $f(t)$ has finite Fourier series. One has \begin{equation} \label{eq:fourier_hol} \mathsf{F}(\mathbb S^1) \ \subset \ \mathfrak{hol}^*_o (F_{\B^{^2}}) \ \subset \ \X{\mathbb S^1}. \end{equation} Since $\mathsf{F}(\mathbb S^1)$ is dense in $\X{\mathbb S^1}$, it follows from \eqref{eq:fourier_hol} that $\mathfrak{hol}^*_o(F_{\B^2})$ is dense in $\X{\mathbb S^1}$ as well. Using the exponential map, one can obtain from \eqref{eq:group_curv_hol_x} that the closure of the holonomy group of the Finsler surface $(\B^2, F_{\B^2})$ is $\diffp{\mathbb S^1}$, the group of orientation preserving diffeomorphisms of the circle \cite[Theorem 5.2]{Muzsnay_Nagy_max_2015}. \end{remark} \begin{proposition} \label{prop:ind_vf} The infinitesimal holonomy algebra of the standard Funk metric $F_{\B^{^n}}$ at $o \in \mathbb R^n$ is infinite dimensional. 
\end{proposition} \begin{proof} For $n=2$ the proof follows directly from Remark \ref{rem:dif_group} since \eqref{eq:fourier_hol} shows that $\mathfrak{hol}^*_o (F_{\B^{^2}})$ contains the infinite dimensional Lie algebra $\mathsf{F}(\mathbb S^1)$. Let us consider the $n>2$ case. For each tangent $2$-plane $\mathcal K \subset T_o\B^n$ the restriction of $F_{\B^{^n}}$ to $\B^2:= \B^n \cap \mathcal K$ is the standard Funk metric on $\B^2$. One can suppose that $\mathcal K$ is the 2-plane generated by $\frac{\partial}{\partial x_1}$ and $\frac{\partial}{\partial x_2}$. Then, using the totally geodesic property, the curvature vector field and its successive covariant derivatives with respect to the directions $\frac{\partial}{\partial x_1}$ and $\frac{\partial}{\partial x_2}$ on $\B^2$ at $o$ can be obtained as restrictions of the corresponding vector fields of $\B^n$. Consequently, the elements of the Fourier algebra can be obtained as restrictions of elements of the infinitesimal holonomy algebra $\mathfrak{hol}^*(F_{\B^n})$ and we have \begin{displaymath} \mathsf{F}(\mathbb S^1) \ \subset \ \mathfrak{hol}^*_o (F_{\B^{^2}}) \ \simeq \ \mathfrak{hol}^*_o (F_{\B^{^n}}\big|_{\B^{^n}\cap \mathcal K}) \ \subset \ \mathfrak{hol}^*_o (F_{\B^{^n}}). \end{displaymath} It follows that $\mathfrak{hol}^*_o(F_{\B^{^n}})$ contains infinitely many $\mathbb R$-linearly independent vector fields which can be expressed by the curvature vector fields and their covariant derivatives. \end{proof} \begin{definition} \label{def:gen} A set $\mathcal{V} \subset \X{M}$ of vector fields on a manifold $M$ is called \begin{enumerate} \item $k$-jet generating at $x\in M$ if the natural map \begin{math} j^k_{x}\colon \mathcal{V} \to J^k_{x} (\X{M}) \end{math} is surjective, \item jet generating on $M$ if at any $x\in M$ and for any $k\geq 0$ it is $k$-jet generating. 
\end{enumerate} \end{definition} We have the following \begin{proposition} \label{prop:Funk_generating} The infinitesimal holonomy algebra $\mathfrak{hol}^*_o (F_{\B^{^n}})$ of the standard Funk metric at the point $o\in \B^{^n}$ has the jet generating property on the indicatrix $\mathcal{I}_o$. \end{proposition} \begin{proof} According to Definition \ref{def:gen}, we have to show that for any $y \in \I_o$ and $k\in \mathbb N$ the jet-projection \begin{math} \mathfrak{hol}^*_o(F_{\B^{^n}}) \rightarrow J_y^k(\X{\I_o}) \end{math} is onto. In the case $n=2$, we get from Remark \ref{rem:dif_group} that $\mathfrak{hol}^*_o (F_{\B^{^2}})$ is dense in $\X{\I_o}$. It follows that the restriction of the $k$\textsuperscript{th} order jet projection on the infinitesimal holonomy algebra \begin{equation} \label{eq:2_dim_jet_onto} j^k_y: \mathfrak{hol}^*_o(F_{\B^2}) \longrightarrow J^k_y(\X{\I_o}), \end{equation} is onto. In other words, any given $k$\textsuperscript{th} order jet in $J^k_y(\X{\I_o})$ can be realized as the $k$-jet of an element of the infinitesimal holonomy algebra. Clearly we have the jet generating property. Let us consider the $n>2$ case. If $y\in \I_o$ and $v\in T_y(\I_o)$, let $\mathcal K_{y,v}$ be the 2-plane determined by these vectors. Using the argument of the proof of Proposition \ref{prop:ind_vf} and \eqref{eq:2_dim_jet_onto} we get that \begin{equation} j^k_y: \mathfrak{hol}^*_o(F_{\B^n}\big|_{\I_o\cap \mathcal K_{y,v}}) \longrightarrow J^k_y(\X{\I_o\cap \mathcal K_{y,v}}) \end{equation} is onto. It follows that for $y \in \I_o$ and $v\in T_y(\I_o)$, any $k$\textsuperscript{th} order $v$-directional derivative can be realised by elements of the holonomy algebra. 
Using the local coordinate system $y_1, \dots, y_{n-1}$ on the $(n-1)$-dimensional indicatrix $\I_o$ we get that for any given $(z, z_1, \dots, z_k) \in (\mathbb R^{(n-1)})^{(k+1)}$ there exists $\xi\in \mathfrak{hol}^*_o(F_{\B^n})$ such that \begin{equation} \label{eq:derivatives} \xi\big|_y=z, \quad (\mathcal D_v \xi)\big|_y=z_1, \quad \dots \quad (\mathcal D^{(k)}_v \xi)\big|_y=z_k, \end{equation} where we consider a locally constant extension of $v$ when the higher order derivatives are computed. For the completion of the proof, however, we must be able to generate all $k$th order jets at $y$, that is, the terms corresponding to mixed partial derivatives as well. This is possible by using higher order derivatives corresponding to several directions. Indeed, one can use the polarization technique to show that the $k$\textsuperscript{th} order mixed partial derivatives are determined by the $k$\textsuperscript{th} order directional derivatives, in a similar way as a quadratic form determines the corresponding symmetric bilinear form, or more generally, as a homogeneous form of degree $k$ determines the corresponding symmetric multilinear $k$-form. Indeed, considering any $v_1, \dots, v_k \in T_y (\I_o)$ and their constant extension in a neighbourhood of $y$, we get \begin{equation} \label{eq:mixed_der} \mathcal D_{v_1}\bigl(\mathcal D_{v_2} \cdots (\mathcal D_{v_k} \xi)\bigr) = \frac{1}{k!} \sum^k_{s=1} \sum_{\ 1 \leq j_1 < \dots < j_s \leq k} (-1)^{k-s} \mathcal D^{(k)}_{v_{j_1}+ \dots + v_{j_s}} \xi. \end{equation} It follows that any mixed derivative can be realized by appropriately chosen (higher order) directional derivatives, therefore the $k$-jet generating property is satisfied. The argument works for any $y\in \I_o$ and $k\in \mathbb N$, therefore the jet generating property holds. 
\end{proof} \begin{remark}[The jet generating property of curvature vector fields and their derivatives] \label{rem:k_jet_gen} One can easily show that in the 2-dimensional case, at any point $y\in \I_o$, the set of the curvature vector field and its derivatives up to order $k$ contains $k+1$ linearly independent $k$-jets, therefore this set has the $k$-jet generating property. In the higher dimensional cases, from the argument of Proposition \ref{prop:Funk_generating} using 2-dimensional planes, one can obtain that for any point $y\in \I_o$ and any direction $v\in T_y(\I_o)$, the curvature vector fields and their derivatives up to order $k$ can be used to express the directional derivatives \eqref{eq:derivatives}. From formula \eqref{eq:mixed_der} one obtains that any $k$\textsuperscript{th} order derivative can be obtained from the derivatives of the curvature vector fields up to order $k$, that is, the set \begin{equation} \label{eq:k_jet_curv} \left\{\mathcal R_{i j}, \ \nabla_{p_1} \! \mathcal R_{i j}, \ \dots, \ \nabla_{p_1 \dots p_k} \! \mathcal R_{ij} \ | \ 1 \leq i,j,p_1 \dots p_k \leq n \right\} \subset \X{\I_o} \end{equation} has the $k$-jet generating property. \end{remark} \section{The Funk-perturbed Finsler metrics} \label{sec:4} In this section we investigate the holonomy group of a Finsler metric perturbed with the standard Funk metric. We present some technical properties of the Funk deformation which are essential in the proof of Theorem \ref{main_thm_2}. Let $(M, F)$ be a Finsler manifold and $x_0\in M$ be a fixed point. We can choose an $x_0$-centered coordinate system $(U, x)$ such that $x(U)\subset \B^n$. The associated coordinate system on $TM$ will be denoted by $( \pi^{-1}(U), \chi=(x, y))$. We also consider a bump function $\psi\colon M \to \mathbb R$ such that $supp(\psi) \subset U$ and $\psi|_{\tilde{U}} =1$ for some open neighbourhood $\tilde{U} \subset U$ of $x_0$. 
We denote by $\bar{\psi}:= \psi \circ \pi$ the pull-back of $\psi$ by the projection $\pi$. Using the standard Funk norm function $F_{\B^n}$, we introduce the Finsler norm $\bar{F}: TM \to \mathbb R$ by the formula \begin{equation} \label{eq:bar_F} \bar{F}^2 = \psi \cdot (F_{\B^n} \circ \chi) ^2 + (1-\psi) \cdot F^2. \end{equation} We remark that $\bar{F}$ is the pull-back of the standard Funk norm function on $\pi^{-1}(\tilde{U})$. Using \eqref{eq:bar_F} we define a smooth perturbation of the Finsler function $F$ as a 1-parameter family of functions $F_t$, where \begin{equation} \label{def:pert} F_t^2 = (1-t)F^2 + t \bar{F}^2, \quad t\in [0,1]. \end{equation} Then $F_t$ is a 1-parameter family of Finsler metrics. Indeed, $F$ and $\bar{F}$ are positively 1-homogeneous continuous functions, smooth on $\T M$; therefore $F_t$ also satisfies these properties. Moreover, taking the squares in \eqref{def:pert} ensures that the bilinear form \begin{displaymath} g_{ij}^t = (1-t) \, g_{ij} + t \, \bar{g}_{ij} , \quad t\in [0,1] \end{displaymath} of $F_t$ is positive definite as well. \begin{proposition} \label{property:alg_exp} Any element of the infinitesimal holonomy algebra $\mathfrak{hol}^*_{x_0} (F_t)$ can be expressed as an algebraic fraction of polynomials in $t$ whose coefficients are determined by $j^{k}_{x_0}F$ and $j^{k}_{x_0}\bar{F}$ for some $k\in \mathbb N$. \end{proposition} \begin{proof} The geodesic coefficients $G^i_t$, $i=1, \dots, n$ of $F_t$ can be calculated in terms of $j^3_{x_0}(F_t)$, therefore in terms of $t$, $j^3_{x_0} (F)$, and $j^3_{x_0}(\bar{F})$. More precisely, their expressions are algebraic fractions of polynomials in $t$ whose coefficients are determined by the third order jets of $F$ and $\bar{F}$. Similarly, the curvature vector fields of $F_t$ can be expressed as algebraic fractions of polynomials in $t$ whose coefficients are determined by $j^5_{x_0}(F)$ and $j^5_{x_0}(\bar{F})$. 
More generally, using Remark \ref{remark:jet}, the value of $k$\textsuperscript{th} order covariant derivatives and $k$\textsuperscript{th} successive Lie brackets of curvature vector fields of $F_t$ can be expressed as algebraic fractions of polynomials in $t$ whose coefficients are determined by $j^{k+5}_{x_0}(F)$ and $j^{k+5}_{x_0}(\bar{F})$. \end{proof} \begin{proposition} \label{prop:generating_at_t} For any $y_0 \in \I_o$ the set of parameters $t\in [0,1]$ where the 3-jet generating property of the infinitesimal holonomy algebra \begin{math} \mathfrak{hol}^*_{x_0} (F_t) \subset \X{\I_{x_0}^{\, t}} \end{math} of the Funk perturbation \eqref{def:pert} is not satisfied is finite. \end{proposition} \begin{proof} Let us suppose that $y_0\in \mathcal{I}_{x_0}^{\, t} \subset T_{x_0}M$ for every $t \in [0,1]$. If not, then we can consider \begin{math} \widetilde{F}_t(x,y) := F_t(x,y)/F_t(x_0,y_0) \end{math} which, for any given $t$, is just a rescaling by a constant, therefore it does not affect the jet generating property on the indicatrix. From Proposition \ref{prop:Funk_generating} we know that the Funk metric has the jet generating property; therefore for $t=1$ there are vector fields \begin{equation} \label{eq:W_funk} \left\{ W_1,\ldots, W_l \right\} \subset \mathfrak{hol}^*_{x_0} (F_{t=1}), \end{equation} $l:=\dim (J^3_{y_0}(\X{\I_{x_0}^{\, t=1}}))$, in the infinitesimal holonomy algebra, such that any 3\textsuperscript{rd} order jet at $y_0$ can be realized as a combination of them. These vector fields are linear combinations of curvature vector fields, their derivatives and their Lie brackets. Let us consider them for any $F_t$, $t\in [0,1]$. We get a set in the infinitesimal holonomy algebra of $F_t$ at $x_0$: \begin{equation} \label{eq:W} \left\{ W_1(t),\ldots, W_l(t) \right\} \subset \mathfrak{hol}^*_{x_0} (F_t). 
\end{equation} Using Proposition \ref{property:alg_exp}, these vector fields are algebraic fractions of polynomials in $t$ whose coefficients are determined by $j^k_{x_0}(F)$ and $j^k_{x_0}(\bar{F})$ for some $k\in \mathbb N$. It follows that the determinant of the $l\times l$ matrix composed of the 3\textsuperscript{rd} order jet coordinates of \eqref{eq:W} at $y_0$: \begin{equation} \label{eq:P_t} \mathcal P_t:= \det \left( \begin{matrix} j^3_{y_0} (W_1(t)) \\ \vdots \\ j^3_{y_0} (W_l(t)) \end{matrix} \right), \end{equation} is an algebraic fraction of polynomials in $t$ whose coefficients are determined by $j^k_{x_0}(F)$ and $j^k_{x_0}(\bar{F})$ for some $k\in \mathbb N$, with $\mathcal P_{t=1} \neq 0$. Since every non-trivial polynomial has finitely many roots, $\mathcal P_t$ can only be zero at finitely many values $t\in[0,1]$. By continuity, there is a neighbourhood of $y_0$ where this property is satisfied. \end{proof} \section{Density of Finsler metrics with infinite dimensional holonomy group} \label{sec:5} \noindent In this section we prove Theorems \ref{thm:no_transitive_finite} and \ref{main_thm_2}. \subsection{Proof of Theorem \ref{thm:no_transitive_finite}.} We prove the theorem by contradiction: let us suppose that $\mathfrak{g} \subset \X{U}$ is a \emph{finite} dimensional Lie algebra on an $n$-dimensional manifold $U$ and that it generates the third order jets at $x_0\in U$. As before, the last property means that the 3-jet projection \begin{math} \mathfrak{g} \to J^3_{x_0} (\X{U}) \end{math} is onto. We remark that a manifold with a finite dimensional Lie algebra of vector fields with locally transitive action is real analytic (in the sense that there exists a real-analytic atlas such that the vector fields of the Lie algebra are real-analytic). Indeed, a finite dimensional Lie algebra generates a Lie group, and one can provide an analytic atlas on this Lie group so that the group multiplication is analytic. 
Moreover, local Lie subgroups are real analytic submanifolds, because they are images of the exponential map. For more about Lie groups and related topics, we refer to \cite{Kolar_Michor_Slovak_1993, Pontryagin_1954}. It follows that locally, a manifold with transitive action of a Lie group is the quotient of the Lie group by the stabilizer of one element, which is a local Lie subgroup; hence it is also analytic. We say that the order of singularity of an element $v\in \mathfrak{g}$ at $x_0$ is $k \in \mathbb N$, denoted $\mathcal O_{x_0}(v)=k$, if the value and all partial derivatives up to order $k$ at $x_0$ are zero, and $v$ has a non-vanishing $(k+1)$st order derivative. Each nonzero element has finite order by analyticity. Let us consider the set $\mathfrak{g}_1 \subset \mathfrak{g}$ with order of singularity at least one, that is \begin{displaymath} \mathfrak{g}_1:=\Big\lbrace v\in \mathfrak{g}\quad \Big|\quad v(x_0)=0, \ \frac{\partial v}{\partial x_{i}}(x_0)=0, \quad i=1, \ldots, n \Big\rbrace. \end{displaymath} It is easy to see that $\mathfrak{g}_1$ is a Lie subalgebra of $\mathfrak{g}$. Indeed, the $j$th component of the commutator of two vector fields $v,u\in \mathfrak{g}_1$ is given by \begin{displaymath} [ v, u]_j =\sum_i \left(\tfrac{ \partial v_j }{\partial x_i} u_i - \tfrac{ \partial u_j }{\partial x_i} v_i\right) \end{displaymath} and has singularity of order at least two at $x_0$. In fact, for any two vector fields $V, U$ from $\mathfrak{g}_1$ such that $V$ has order of singularity $k$ and $U$ has order of singularity $m$, their commutator has order of singularity at least $k+m$. Since $\mathfrak{g}$ is finite dimensional, so is $\mathfrak{g}_1$. It follows that the order of singularity is bounded on $\mathfrak{g}_1$. Indeed, if not, then there would be a sequence of vectors in $\mathfrak{g}_1$ with strictly increasing order of singularity at $x_0$, which would produce an infinite number of linearly independent elements, which is impossible. 
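The order-of-singularity bookkeeping is easy to check symbolically. A one-dimensional sympy sketch (illustrative only; the sign convention for the bracket does not affect orders): $v = x^3\,d/dx$ has order $2$ at $0$, $u = x^2\,d/dx$ has order $1$, and their bracket has order $3 = 2+1$:

```python
import sympy as sp

x = sp.symbols('x')

def bracket(v, u):
    """Lie bracket of the 1-D vector fields v d/dx and u d/dx."""
    return sp.expand(v * sp.diff(u, x) - u * sp.diff(v, x))

def singularity_order(v, x0=0):
    """Largest k such that v and its derivatives up to order k vanish at x0."""
    k = -1
    while sp.diff(v, x, k + 1).subs(x, x0) == 0:
        k += 1
    return k

v, u = x**3, x**2                       # orders 2 and 1 at the origin
print(singularity_order(bracket(v, u)))  # 3  (= 2 + 1)
```

Here the bracket is $x^3 \cdot 2x - x^2 \cdot 3x^2 = -x^4$, which vanishes to order $3$ at the origin, matching the additivity claim.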
Let $v_1\in \mathfrak{g}_1$ be a non-zero element with maximal order $\mathcal O_{x_0}(v_1)=k$ of singularity at $x_0$. Using the 3-jet generating property, we have $k \geq 2$. Then, for any $v\in \mathfrak{g}_1$ we have $[v,v_1]\in \mathfrak{g}_1$ and \begin{math} \mathcal O_{x_0}([v,v_1]) > k. \end{math} Since $k$ is maximal, it follows that $[v,v_1]=0$, which shows that $v_1$ commutes with every element of $\mathfrak{g}_1$. On the other hand, it is possible to choose a point $\hat{x}_0 \in U$ in a neighbourhood of $x_0$ such that $v_1(\hat{x}_0) \neq 0$ and $\mathfrak{g}_1$ has the 1-jet generating property at $\hat{x}_0$. It follows that one can choose an element $v_2 \in \mathfrak{g}_1$ such that $v_2(\hat{x}_0)=0$ but $\mathcal D_{v_1} v_2 \neq 0$. Therefore the commutator $[v_1,v_2]$ at $\hat{x}_0$ is nonzero. This is a contradiction, since $v_1$ is an element which commutes with every element of $\mathfrak{g}_1$. Theorem \ref{thm:no_transitive_finite} is proved. \subsection{Proof of Theorem \ref{main_thm_2}.} \ \noindent Let $\F$ be the set of $C^\infty$-smooth Finsler metrics on a given manifold $M$ and let us consider the subset $\widetilde{\F} \subset \F$ characterized by the following property: $\tilde{F} \in \widetilde{\F}$ if and only if there exists a point $x_0 \in M$ such that the curvature vector fields and their derivatives \begin{equation} \label{eq:3_jet_curv} \left\{\mathcal R_{i j}, \ \nabla_{k} \mathcal R_{i j}, \ \nabla_{k l} \mathcal R_{ij}, \ \nabla_{klh} \mathcal R_{ij} \ | \ 1 \leq i,j,k,l,h \leq n \right\} \subset \X{\I_{x_0}}, \end{equation} up to order 3 have the $3$-jet generating property at least at one point of the indicatrix at $x_0$. 
Then \begin{enumerate} \item [\emph{i)}] the holonomy group at $x_0$ of any $\tilde{F} \in \widetilde{\F} $ is infinite dimensional, \item [\emph{ii)}] the set $\widetilde{\F}$ is dense in $\F$ with respect to the $C^m$ topology for each $m\geq 8$ (in fact for any $m\ge 0$), \item [\emph{iii)}] the set $\widetilde{\F}$ is open in $\F$ with respect to the $C^{\widetilde{m}}$ topology for $\widetilde{m}\geq 8$. \end{enumerate} Indeed, \emph{i)} follows from Theorem \ref{thm:no_transitive_finite}: the infinitesimal holonomy algebra $\mathfrak{hol}^*_{x_0}(\tilde{F})$ is infinite dimensional; consequently, the holonomy algebra and the holonomy group of $\tilde{F}$ at $x_0$ are infinite dimensional. In order to show \emph{ii)}, let us consider the Funk perturbation $F_t$ given by \eqref{def:pert}, a point $x_0$ in $M$, and a point $y_0$ of the indicatrix at $x_0$. By Proposition \ref{prop:generating_at_t} there exists a sufficiently small $t>0$ such that the curvature vector fields and their derivatives up to order 3 have the $3$-jet generating property at $y_0$. For sufficiently small $t$, the metric $F_t$ is arbitrarily close to $F$ in the $C^m$-topology. In order to prove \emph{iii)}, we observe that the jet-generating condition is an open condition: if it is fulfilled at $y_0\in \mathcal{I}_{x_0}$, then it is fulfilled at any point $y_1\in \mathcal{I}_{x_1}$ sufficiently close to $y_0$, in the $C^{m}$-topology on $TM$ with $m\ge 8$. Theorem \ref{main_thm_2} is proved. \begin{remark} \label{rem:germs} In the proof of Theorem \ref{main_thm_2}, it was shown that for a suitable perturbation the infinitesimal holonomy algebra at a point is infinite dimensional. This remains valid microlocally and on the level of germs. \end{remark} \end{document}
January 12, 2022 (published January 10, 2022) by Eric Busboom.

An isochrone is the area one can travel to from a point in a constant time, which allows for more realistic analysis of retail catchment areas than a simple radius. An isochrone (meaning "equal time") is an area that encloses the points one can travel to in a fixed time. For a bird, an isochrone would be a circle, but humans usually have to follow roads, so human isochrones have shapes that mirror the road network. Despite the name, isochrones are most often computed using a fixed distance. For instance, a 5km isochrone would be all the points that are within 5km of a point, but using distances measured along the road network, not the straight-line distance. So, instead of a 5km-radius circle, a 5km isochrone will have a non-circular shape.

Civic Knowledge is developing a Python program for raster-based spatial analysis. The system can create isochrone regions, rasterize them, and use them in algebraic equations with other rasters, allowing for very powerful spatial analysis. For a quick example, here is a 10km isochrone around Franklin Barbeque in Austin, Texas. The isochrone region is colored by the distance from the central point, in meters. The area is computed by tracing the road network, then finding a region (a concave hull) that encloses all of the nodes (the ends of road segments) that are a specific distance from the central point. The distances are quantized to 500 meters. There are a lot of analyses that we can do with this shape. One that is particularly interesting to a business is to calculate the number of customers in the area around the store, weighted by how likely they are to visit, which is a function of the attractiveness of the location and the distance from each consumer. 
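The road-network tracing step can be sketched with networkx. The graph below is a made-up toy network (edge weights in kilometers), not the real Austin road data, and the real system builds its graph from actual road segments:

```python
# Toy sketch of isochrone tracing: find all road-network nodes within a
# fixed network distance of a central point. A concave hull of these nodes
# would then give the isochrone polygon.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("center", "a", 1.0), ("a", "b", 2.0),
    ("center", "c", 4.5), ("c", "d", 1.0),  # d is 5.5 km away, outside 5 km
])

# network distances from the center, cut off at 5 km
dist = nx.single_source_dijkstra_path_length(G, "center", cutoff=5.0)
reachable = set(dist)   # the nodes of the 5 km isochrone
```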
The most common model for this sort of analysis is the Huff model: $$P_{ij}= \frac{A_j^\alpha D_{ij}^{- \beta}} {\sum_{j=1}^{n}A_j^{\alpha} D_{ij}^{- \beta}}$$ where:

- $A_j$ is a measure of attractiveness of store $j$
- $D_{ij}$ is the distance from the consumer's location, $i$, to store $j$
- $\alpha$ is an attractiveness parameter
- $\beta$ is a distance decay parameter
- $n$ is the total number of stores, including store $j$

The term $A_j^\alpha D_{ij}^{- \beta}$ multiplies the "attractiveness" of the location by the distance-weighted probability of the consumer visiting the location. The attractiveness term, $A_j^\alpha$, can be any of a variety of measures, but retail square footage is a common one. The exponent $\alpha$ accounts for the non-linearity of attractiveness; a store that has twice the square footage is not always twice as attractive. The distance-weighting value, because of its negative exponent, accounts for consumers who are farther away being less likely to visit. If $\beta$ is 1, the term reduces to $1/r$ for distance $r$, and if $\beta$ is 2, the term is $1/r^2$. Because $1/r^2$ is the same law that gravity follows, this model is sometimes called Huff's Gravity Model.

While the Huff model is expressed as the probability that a consumer will visit a location, we can also use it to estimate the likely number of visitors to a location by calculating the value for every person in the retail catchment area. For this analysis, we will use a raster map of population density. Here is the map of population, based on census tracts for 2019, for the area around our location. Each pixel of the map is 100m square and the value at the pixel is the estimated number of people who live in that square, computed by dividing the total estimated population of a census tract by its area. 
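The Huff formula itself fits in a few lines. The function and numbers below are illustrative, not part of the Civic Knowledge package:

```python
# Minimal sketch of the Huff model P_ij for one consumer location i.
# attractiveness[j] = A_j, distances[j] = D_ij; alpha and beta as in the text.
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=1.0):
    utility = attractiveness ** alpha * distances ** (-beta)
    return utility / utility.sum()

# two stores: the second is twice as attractive but twice as far away,
# so the visit probabilities come out equal
p = huff_probabilities(np.array([100.0, 200.0]), np.array([1000.0, 2000.0]))
# p -> [0.5, 0.5]
```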
By calculating the Huff model value for each pixel, multiplying by the number of people in that pixel, and then summing over all pixels, we can get an estimated number of visitors to the location. This calculation involves these steps:

1. Compute the isochrone for the location, with pixel values of distance in meters from the central location.
2. Set ${\beta}=1$ and compute $1/r$ for all pixels.
3. Multiply the $1/r$ values by the population.
4. Sum the values to get an estimate of the number of customers.

For this example, we are only working with one location, and assuming the attractiveness is 1. For the full Huff model we would have to perform this calculation for every location, sum the results, and use that sum as the denominator. The original isochrone values are in meters from the central point and we use ${\beta}=1$, so the weighting for population will be $1/r$. Then we can multiply the weighting by the population to get a weighted population.

The total population of the whole isochrone area is 383,044 but the weighted population within the isochrone is 43,005. The difference is the result of the $1/r$ weighting, which counts population farther away with a value less than population that is closer. 
If we had used the straight-line distance instead of the isochrone (which would be a circle instead of the odd isochrone shape), the $1/r$-weighted population would be 192,230. Here is what the straight-line distances look like versus the isochrone; they are very different.

We can also use the isochrones to count the number of specific sites, such as related businesses or competitors, within a given area. In this case, we will create a raster of the locations of cafes, which will have a value of 1 where there is a cafe and 0 elsewhere. We'll binarize the isochrone areas where the value is less than 5,000, which will produce a raster with 1 for the cells that are 5km or less away from the central point and 0 elsewhere. Multiplying these two rasters will produce a raster with 1 only at the cafes that are within 5km of the central point.

Summing the cells in the last raster gives us the number of cafes within 5km of the location, 39. If we'd used the straight-line distance (a circular area) the number would have been 57.

Isochrone areas are a powerful addition to your spatial analysis techniques, allowing a more accurate assessment of catchment areas. Using isochrones can produce significantly different results than simpler fixed radii, although they can require more effort to use, particularly in a vector-based process. However, when used as part of a raster-based spatial analysis, there is little difference.
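Both raster calculations above can be sketched with plain numpy arrays standing in for the rasters. The numbers here are invented, not the Austin data:

```python
# 2x2 toy rasters: network distance (m) from the store, and population per pixel.
import numpy as np

dist = np.array([[1000.0, 2000.0],
                 [4000.0, 6000.0]])
pop = np.array([[10.0, 20.0],
                [30.0, 40.0]])

weighted_pop = (pop / dist).sum()          # 1/r-weighted population (beta = 1)

mask = (dist < 5000).astype(int)           # binarized 5 km isochrone
cafes = np.array([[1, 0],
                  [1, 1]])                 # 1 where there is a cafe
n_cafes_within_5km = int((mask * cafes).sum())
```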
\begin{document} \title{Evaluating $L$-functions with few known coefficients} \begin{abstract} We address the problem of evaluating an $L$-function when only a small number of its Dirichlet coefficients are known. We use the approximate functional equation in a new way and find that it is possible to evaluate the $L$-function more precisely than one would expect from the standard approach. The method, however, requires considerably more computational effort to achieve a given accuracy than would be needed if more Dirichlet coefficients were available. \end{abstract} \section{Introduction} $L$-functions are central to much of contemporary number theory. Two celebrated conjectures, the Riemann Hypothesis and the Birch and Swinnerton-Dyer Conjecture, are about values of $L$-functions and were discovered as a result of the explicit computation of the Riemann zeta function and the Hasse-Weil $L$-function associated to an elliptic curve, respectively. These $L$-functions are, respectively, of degree one and degree two, and it is interesting to verify analogous conjectures about special values and zeros for higher degree $L$-functions. Conjectures such as B\"ocherer's Conjecture \cite{Bocherer2,RyanTornaria} and the Bloch-Kato Conjecture \cite{BlochKato} are about the central values of degree four and degree three $L$-functions, and the Grand Riemann Hypothesis asserts that all nontrivial zeros of an $L$-function, of any degree, lie along the critical line. In addition to these conjectures, there are a number of other conjectures for the statistical behavior of $L$-functions, arising from the interplay between random matrix theory and number theoretic heuristics \cite{KS,CFKRS,CFZ}. One of the main reasons those conjectures are believable is that large-scale calculations of the value distribution and the zeros of $L$-functions yield data that support those conjectures. The $L$-functions we consider here are associated to Siegel modular forms. 
Our examples will use the first non-lift Siegel modular form on ${\mathrm Sp}(4,\mathbb Z)$. The form has weight 20 and is usually denoted~$\Upsilon_{20}$. Background information beyond what we mention about Siegel modular forms can be found in~\cite{Abook,Kbook,Skoruppa}. For this paper the relevant information is that a Siegel modular form is acted on by Hecke operators $T(n)$, which have eigenvalues~$\lambda(n)$. It is the eigenvalues $\lambda(p)$ and $\lambda(p^2)$, for $p$ prime, which are used to define the $L$-functions associated to the modular form. For $\Upsilon_{20}$ the eigenvalues $\lambda(p)$ have been computed for $p\le 997$, and the eigenvalues $\lambda(p^2)$ for $p\le 79$~\cite{KohnenKuss}. These data are available at~\cite{Skoruppa_site}. There is an $L$-function $L(s,F, \rho)$ of degree $n$ for each $n$-dimensional representation $\rho$ of the dual group of $\textrm{PGSp}(4)$, namely $\textrm{Sp}(4,\mathbb C)$. Associated to a Siegel modular form $F$ is a sequence of $L$-functions, of degrees 4, 5, 10, 14, 16,~etc. The degree 4, 5, and 10 $L$-functions are called, respectively, the spinor, standard, and adjoint, and are denoted $L(s,F,\mathrm{spin})$, $L(s,F,\mathrm{stan})$, and $L(s,F,\mathrm{adj})$. Proposition \ref{prop:3 L-functions}, taken from~\cite{FRS}, summarizes the properties of those $L$-functions for a weight~$k$ Siegel modular form on~${\mathrm Sp}(4,\mathbb Z)$. The degree 4 and 5 $L$-functions were shown by Andrianov~\cite{Aspin} and B\"ocherer~\cite{Bocherer} to have an analytic continuation and satisfy a functional equation. The degree 10 $L$-function was only recently shown by Pitale, Saha, and Schmidt~\cite{PSS} to have an analytic continuation and satisfy a functional equation. Those properties for the $L$-functions of degree~14 and above are still conjectural. \subsection{Evaluating $L$-functions} We are concerned with numerically evaluating $L$-functions. 
The standard tool, which is used in available open-source computational packages \cite{Rub,Dok} is the approximate functional equation. See Proposition~\ref{thm:formula} for the precise formulation. There are two main difficulties in evaluating high degree $L$-functions. The first is that if the $L$-function $L(s)$ has degree $d$, evaluating $L(\frac12 +it)$ using the approximate functional equation requires $\gg (1+|t|)^{d/2}$ Dirichlet series coefficients. Here the implied constant depends on the $L$-function and the desired precision in the answer. For example, estimating the implied constant from the calculations in Section~\ref{sec:optimalappfe}, to find the first 1~million zeros of $L(s,\Upsilon_{20}, \mathrm{adj})$, the degree 10 $L$-function associated to~$\Upsilon_{20}$, would require using the approximate functional equation with around $10^{30}$ Dirichlet series coefficients. Note that 1~million zeros is not even a large sample; for example it is probably not sufficient for testing various conjectures about the lower-order terms in the distribution of spacings between zeros. The second difficulty is that current methods are incapable of producing a large number of Dirichlet coefficients of the standard and adjoint $L$-functions of a Siegel modular form. The Fourier coefficients indexed by quadratic forms with discriminant up to 3000000 have been computed for $\Upsilon_{20}$~\cite{KohnenKuss}. These Fourier coefficients are used to compute the Hecke eigenvalues. Examination of formulas on page 387 of \cite{Skoruppa} shows that to find the eigenvalue $\lambda(n)$ of $T(n)$, for $n=p^2$, requires the Fourier coefficients indexed by quadratic forms of discriminant up to~$n^2=p^4$. It gets worse. By \eqref{eq-satake} and \eqref{eqn:EPs}, the $p$th Dirichlet coefficient of the standard or adjoint $L$-function requires both $\lambda(p)$ and $\lambda({p^2})$. 
Thus, to determine the first $n$ Dirichlet coefficients of those $L$-functions requires Fourier coefficients of the Siegel modular form of index up to approximately~$n^4$. The extensive calculations in~\cite{KohnenKuss} are not even sufficient to determine the 83rd Dirichlet coefficient of the standard or adjoint $L$-functions of~$\Upsilon_{20}$. Of course, one could determine more Dirichlet coefficients by first finding more Fourier coefficients of the cusp form. But the $n^4$ relationship makes this quite expensive, so with current methods it is not feasible to determine many more Dirichlet coefficients than currently known. It is possible that new methods will be developed to determine the Hecke eigenvalues without extensive computation. The real problem will still remain: how to compute high-degree $L$-functions without requiring an enormous number of Dirichlet coefficients. That brings us to the theme of this paper: how accurately can one compute an $L$-function given a limited number of Dirichlet coefficients. As the above discussion indicates, this is a practical problem and there are many $L$-functions for which it is not currently possible to determine a reasonably large number of Dirichlet coefficients. As we describe in this paper, even without knowing many Dirichlet coefficients, we were able to evaluate the $L$-functions to surprisingly high accuracy (surprising to us, anyway). In fact, we were not able to establish that there is an absolute limit to the accuracy one can obtain from only a limited number of coefficients. However, our method is computationally expensive -- much more expensive than evaluating the $L$-function in a straightforward way if more coefficients were available. If one could find an efficient way to determine the unknown parameters in our method, that could make it possible to quickly evaluate high-degree $L$-functions. See Section~\ref{sec:speculation} for a discussion. 
In the next section we describe the $L$-functions we consider here, and in Section~\ref{sec:appfe} we recall the approximate functional equation and how it is used to compute an $L$-function. In Section~\ref{sec:optimalappfe} we state the underlying problem and then describe our experiments evaluating $L(s,\Upsilon_{20},\mathrm{stan})$ and $L(s,\Upsilon_{20},\mathrm{adj})$. In Section~\ref{sec:lp} we describe a second method that performs a little better than the method in Section~\ref{sec:optimalappfe} and show how this second method can be used to approximate unknown Dirichlet series coefficients. We thank the referee for suggesting we include the material in Section~\ref{sec:realitychecks}. \section{The $L$-functions} The $L$-functions associated to a Siegel modular form are most conveniently described as Euler products. The local factors of the Euler product can be expressed in terms of the Hecke eigenvalues $\lambda(p)$ and $\lambda(p^2)$, but it is more convenient to express them in terms of the Satake parameters, $\alpha_{0,p}$, $\alpha_{1,p}$, and $\alpha_{2,p}$, given by \begin{align} p^{2k-3} &= \alpha_0^2\alpha_1\alpha_2\\ A &= \alpha_0^2\alpha_1^2\alpha_2+\alpha_0^2\alpha_1\alpha_2^2+\alpha_0^2\alpha_1 + \alpha_0^2\alpha_2 \notag \\ B &= \alpha_0^2\alpha_1^2\alpha_2^2+\alpha_0^2\alpha_1^2+\alpha_0^2\alpha_2^2+\alpha_0^2, \notag \end{align} where \begin{eqnarray}\label{eq-satake} \lambda(p)^2 &=& 4p^{2k-3}+2A+B\\\notag \lambda(p^2) &=& (2-1/p)p^{2k-3} + A +B. \end{eqnarray} We suppress the $p$ on the Satake parameters when clear from context. See \cite{Ryan} for a discussion of how to solve this polynomial system of three equations for the three unknowns $\alpha_{0,p},\alpha_{1,p},\alpha_{2,p}$ using Gr\"obner bases. We rescale the Satake parameters to have the so-called ``analytic'' normalization $|\alpha_j|=1$, $ \alpha_0^2\alpha_1\alpha_2 =1$, which is possible if we assume the Ramanujan bound on the Hecke eigenvalues. 
This corresponds to a simple change of variables in the $L$-functions, so that all our $L$-functions satisfy a functional equation in the standard form~$s\leftrightarrow 1-s$. As an error check for the reader who may wish to extend our calculations, for $\Upsilon_{20}$ we have $\lambda(2)=-840960$, $\lambda(4)=248256200704$, and the Satake parameters at~2 are approximately \begin{equation} \begin{split} \{\alpha_0&,\alpha_1,\alpha_2\}=\\ &\{-0.901413+0.43296 i,0.630904 - 0.775861 i,-0.211226+0.977437 i\}. \end{split} \end{equation} Here and throughout this paper, decimal values are truncations of the true values. \begin{proposition}\label{prop:3 L-functions} Suppose $F\in M_k({\mathrm Sp}(4,\mathbb Z))$ is a Hecke eigenform. Let $\alpha_{0,p}$, $\alpha_{1,p}$, $\alpha_{2,p}$ be the Satake parameters of $F$ for the prime~$p$, where we suppress the dependence on $p$ in the formulas below. For $\rho\in\{\mathrm{spin},\mathrm{stan},\mathrm{adj}\}$ we have the $L$-functions $L(s,F,\rho):=\prod_{p \text{ prime}} Q_p(p^{-s},F,\rho)^{-1}$ where \begin{align}\label{eqn:EPs} Q_p(X,F,\mathrm{spin}):=\mathstrut & {(1-\alpha_0 X)(1-\alpha_0\alpha_1 X) (1-\alpha_0\alpha_2 X)(1-\alpha_0\alpha_1\alpha_2 X)},\nonumber\\ Q_p(X,F,\mathrm{stan}):=\mathstrut &(1-X)(1-\alpha_{1}X)(1-\alpha_{1}^{-1}X) (1-\alpha_{2}X)(1-\alpha_{2}^{-1}X),\nonumber\\ Q_p(X,F,\mathrm{adj}):=\mathstrut &(1- X)^2(1-\alpha_1 X)(1-\alpha_1^{-1} X) (1-\alpha_2 X)(1-\alpha_2^{-1} X)\nonumber\\ &\phantom{X}(1-\alpha_1\alpha_2 X)(1-\alpha_1^{-1}\alpha_2 X) (1-\alpha_1\alpha_2^{-1} X)(1-\alpha_1^{-1}\alpha_2^{-1} X), \end{align} give the $L$-series of, respectively, the spinor, standard, and adjoint $L$-functions. 
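The quoted normalization can be checked numerically against these truncated values. The snippet below is not from the paper's code; it only verifies $|\alpha_j|\approx 1$ and $\alpha_0^2\alpha_1\alpha_2\approx 1$ up to the displayed truncation:

```python
# Check the analytic normalization for the truncated Satake parameters of
# Upsilon_20 at p = 2 quoted above: |alpha_j| = 1 and alpha_0^2 alpha_1 alpha_2 = 1.
a0 = complex(-0.901413, 0.43296)
a1 = complex(0.630904, -0.775861)
a2 = complex(-0.211226, 0.977437)

moduli = [abs(a) for a in (a0, a1, a2)]
product = a0**2 * a1 * a2   # should be ~1; agreement limited by the truncation
```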
These $L$-functions satisfy the functional equations: \begin{align}\label{eqn:FEs} \Lambda(s,F,\mathrm{spin}):=\mathstrut & \Gamma_\mathbb C(s+\tfrac12)\Gamma_\mathbb C(s+k-\tfrac32)L(s,F,\mathrm{spin})\nonumber \\ =\mathstrut & (-1)^k \Lambda(1-s,F,\mathrm{spin}),\nonumber\\ \Lambda(s,F,\mathrm{stan}):=\mathstrut & \Gamma_\mathbb R( s)\Gamma_\mathbb C(s+k-2)\Gamma_\mathbb C(s+k-1) L(s,F,\mathrm{stan}) \nonumber\\ =\mathstrut & \Lambda(1-s,F,\mathrm{stan})\nonumber,\\ \Lambda(s,F,\mathrm{adj}):=\mathstrut & \Gamma_\mathbb R(s+1)^2\Gamma_\mathbb C(s+1)\nonumber\\ & \times\Gamma_\mathbb C(s+k-2)\Gamma_\mathbb C(s+k-1) \Gamma_\mathbb C(s+2k-3) L(s,F,\mathrm{adj}) \nonumber\\ =\mathstrut &\Lambda(1-s,F,\mathrm{adj}). \end{align} \end{proposition} In \eqref{eqn:FEs}, we use the normalized $\Gamma$-functions \[ \Gamma_\mathbb R(s):=\pi^{-s/2}\Gamma(s)\ \ \ \ \text{ and }\ \ \ \ \Gamma_\mathbb C(s):=2(2\pi)^{-s}\Gamma(s). \] The \emph{degree} of an $L$-function is $r+2c$ where $r$ and $c$ are the number of $\Gamma_\mathbb R$ and $\Gamma_\mathbb C$ factors in the functional equation, respectively. The spin, standard, and adjoint $L$-functions described above are of degree 4, 5, and~10. The Ramanujan bound for a degree~$d$ $L$-function with Dirichlet series $\sum_{n\geq 1} b_nn^{-s}$ is given by: \begin{equation}\label{eqn:ram} |b_{p^j}|\le \left( \genfrac{}{}{0pt}{}{d+j-1}{j} \right). \end{equation} Note that this is equivalent to the assertion that the Satake parameters satisfy~$|\alpha_{j,p}|\le 1$. \section{The approximate functional equation}\label{sec:appfe} In this section we describe the approximate functional equation, which is the primary tool used to evaluate $L$-functions. The approximate functional equation involves a test function which can be chosen with some freedom. This will play a key role in our calculations. \subsection{Smoothed approximate functional equations} The material in this section is taken from Section~3.2 of~\cite{Rub}. 
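The bound \eqref{eqn:ram} is just a binomial coefficient; a one-line helper (illustrative, not from the paper's code) makes the numbers concrete:

```python
# Ramanujan bound |b_{p^j}| <= binom(d + j - 1, j) for a degree-d L-function.
from math import comb

def ramanujan_bound(d, j):
    return comb(d + j - 1, j)

# e.g. for the degree-5 standard L-function, |b_p| <= 5 and |b_{p^2}| <= 15
```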
Let \begin{equation} L(s) = \sum_{n=1}^{\infty} \frac{b_n}{n^s} \end{equation} be a Dirichlet series that converges absolutely in a half plane, $\Re(s) > \sigma_1$. Let \begin{equation} \label{eqn:lambda} \Lambda(s) = Q^s \left( \prod_{j=1}^a \Gamma(\kappa_j s + \lambda_j) \right) L(s), \end{equation} with $Q,\kappa_j \in {\mathbb{R}}^+$, $\Re(\lambda_j) \geq 0$, and assume that: \begin{enumerate} \item $\Lambda(s)$ has a meromorphic continuation to all of ${\mathbb{C}}$ with simple poles at $s_1,\ldots, s_\ell$ and corresponding residues $r_1,\ldots, r_\ell$. \item $\Lambda(s) = \varepsilon \cj{\Lambda(1-\cj{s})}$ for some $\varepsilon \in {\mathbb{C}}$, $|\varepsilon|=1$. \item For any $\sigma_2 \leq \sigma_3$, $L(\sigma +i t) = O(\exp{t^A})$ for some $A>0$, as $\abs{t} \to \infty$, $\sigma_2 \leq \sigma \leq \sigma_3$, with $A$ and the constant in the `Oh' notation depending on $\sigma_2$ and $\sigma_3$. \label{page:condition 3} \end{enumerate} Note that~\eqref{eqn:lambda} expresses the functional equation in more general terms than~\eqref{eqn:FEs}, but it is a simple matter to unfold the definition of~$\Gamma_\mathbb R$ and~$\Gamma_\mathbb C$. To obtain a smoothed approximate functional equation with desirable properties, Rubinstein \cite{Rub} introduces an auxiliary function. Let $g: \mathbb C \to \mathbb C$ be an entire function that, for fixed $s$, satisfies \begin{equation}\label{eqn:gbound} \abs{\Lambda(z+s) g(z+s) z^{-1}} \to 0 \end{equation} as $\abs{\Im{z}} \to \infty$, in vertical strips, $-x_0 \leq \Re{z} \leq x_0$. The smoothed approximate functional equation has the following form. 
\begin{theorem}\label{thm:formula} For $s \notin \cbr{s_1,\ldots, s_\ell}$, and $L(s)$, $g(s)$ as above, \begin{equation}\label{eqn:formula} \Lambda(s) g(s) = \sum_{k=1}^{\ell} \frac{r_k g(s_k)}{s-s_k} + Q^s \sum_{n=1}^{\infty} \frac{b_n}{n^s} f_1(s,n) + \varepsilon Q^{1-s} \sum_{n=1}^{\infty} \frac{\cj{b_n}}{n^{1-s}} f_2(1-s,n) \end{equation} where \begin{align}\label{eqn:mellin} f_1(s,n) &:= \frac{1}{2\pi i} \int_{\nu - i \infty}^{\nu + i \infty} \prod_{j=1}^a \Gamma(\kappa_j (z+s) + \lambda_j) z^{-1} g(s+z) (Q/n)^z dz \notag \\ f_2(1-s,n) &:= \frac{1}{2\pi i} \int_{\nu - i \infty}^{\nu + i \infty} \prod_{j=1}^a \Gamma(\kappa_j (z+1-s) + \cj{\lambda_j}) z^{-1} g(s-z) (Q/n)^z dz \end{align} with $\nu > \max \cbr{0,-\Re(\lambda_1/\kappa_1+s),\ldots,-\Re(\lambda_a/\kappa_a+s)}$. \end{theorem} In our examples $L(s)$ continues to an entire function, so the first sum in \eqref{eqn:formula} does not appear. For fixed $Q,\kappa,\lambda,\varepsilon$, and sequence~$b_n$, and $g(s)$ as described below, the right side of \eqref{eqn:formula} can be evaluated to high precision. A reasonable choice for the weight function is \begin{equation}\label{eqn:test} g(s)=e^{i b s + c s^2}, \end{equation} which by Stirling's formula satisfies \eqref{eqn:gbound} if $c>0$, or if $c=0$ and $|b| < \pi d/4$, where $d$ is the degree of the $L$-function. Rubinstein~\cite{Rub} uses such a weight function with $b$ chosen to balance the size of the terms in the approximate functional equation, minimizing the loss in precision in the calculation. In this paper we exploit the fact that there are many choices of weight function, and so there are many ways to evaluate the $L$-function. We combine those calculations to extract as much information as possible from the known Dirichlet coefficients. This idea is described in the next section. In the computations we carry out below, we find it more convenient to use the Hardy $Z$-function in our computations instead of the $L$-function itself. 
The function $Z$ associated to an $L$-function $L$ is defined by the properties: $Z(\frac12 + it)$ is a smooth function which is real if $t$ is real, and $|Z(\frac12 + it)| = |L(\frac12 + it)|$. \section{Exploiting the test function in the approximate functional equation}\label{sec:optimalappfe} If we let $g(s)=1$ and $s=\frac12 + 10 i$ in the approximate functional equation \eqref{eqn:formula} for the standard (degree 5) $L$-function of $\Upsilon_{20}$, we get \begin{align}\label{eqn:ex1} Z(\tfrac12+10 i, \Upsilon_{20},\mathrm{stan})=\mathstrut & -1835.424 -395.011 \,b_2 + 1012.179 \,b_3 + 1906.603 \, b_4 +\nonumber\\ &\mathstrut + 2226.503 \,b_5 + \cdots + 6.840 \times 10^{-9} \,b_{82} + \nonumber\\ &\mathstrut + 5.132\times 10^{-9} \,b_{83}+\cdots+3.205 \times 10^{-16} \,b_{149} \nonumber\\ &\mathstrut + 2.564 \times 10^{-16} \,b_{150} + \cdots . \end{align} If instead we let $g(s)=e^{-3i s/2}$ and keep $s=\frac12 + 10i$ then we have \begin{align}\label{eqn:ex2} Z(\tfrac12+10 i, \Upsilon_{20},\mathrm{stan})=\mathstrut & 1.66549 + 1.39643 \, b_{2} -0.658439 \, b_{3} + 0.726149 \, b_{4} + \nonumber\\ &\mathstrut -0.88227 \, b_{5} +\cdots + 1.532 \times 10^{-8} \, b_{82} \nonumber\\ &\mathstrut + 1.271 \times 10^{-8} \, b_{83} +\cdots+ 3.514 \times 10^{-14} \, b_{149} \nonumber \\ &\mathstrut + 2.309 \times 10^{-14} \, b_{150} + \cdots . \end{align} Note that neither of the above expressions appears optimal: the first involves large coefficients, which will lead to a loss of precision. In the second, the terms do not decrease as quickly, so one must use more coefficients to achieve a given accuracy. An observation that we exploit is the fact that the above are just two among a large number of expressions for the value of the $L$-function at $\frac12+10i$. Recall that by \cite{KohnenKuss,Skoruppa_site} we know the Satake parameters of $\Upsilon_{20}$ for all $p\le 79$. 
Therefore we recognize two types of terms in the approximate functional equation, as illustrated in the above examples. There are terms for which we know the Dirichlet coefficients, such as $b_5$, $b_{82}$, and $b_{150}$. And there are terms with an unknown Dirichlet coefficient, such as $b_{83}$ or $b_{149}$. Actually, there is a third type of term, such as $b_{166}=b_{2}b_{83}$, which is ``partially unknown''. We can estimate the unknown terms by applying the Ramanujan bound to the Dirichlet coefficient, and evaluate everything else precisely. Thus, once we choose a test function, we can evaluate an $L$-function at a given point as \begin{equation} Z(s) = \text{calculated\_value}(s) \pm \text{error\_estimate}(s), \end{equation} where both the calculated value and the error estimate are functions of the test function and the set of known Dirichlet coefficients. For later use, we write \begin{equation}\label{eqn:error} \text{error\_estimate}(s) = \sum_{n: \ b_n \text{unknown}} \delta_n(g,s) b_n \end{equation} where $\delta_n(g,s)$ is the coefficient of $b_n$ in \eqref{eqn:formula}. The product of $\delta_n(g,s)$ and the Ramanujan bound for $b_n$ is an upper bound for the error contributed to the computation by the unknown coefficient $b_n$. In the calculations described below, we directly evaluate the contributions from the first 2000 Dirichlet coefficients. For those $n\le 2000$ with unknown $b_n$, we use the calculated value of $\delta_n(g,s)$ and the Ramanujan bound for $b_n$ to estimate their contribution. This is the main source of the error term in~\eqref{eqn:error}. While there are rigorous bounds for the contribution of the tail to the error (see, e.g., Propositions 3.7 and 3.9 in \cite{Molin}), we do not make use of them for two reasons. First, those general bounds are much larger than what is actually observed in our examples. 
For instance, in \eqref{eqn:ex2} it appears that by the 150th term the contribution is less than $10^{-13}$, and this is confirmed by further computation (to thousands of terms), showing a steady decrease at the expected rate. But the general bounds of~\cite{Molin} require about 8000 terms before the predicted contribution drops below $10^{-13}$. Second, we consider our method to be experimental and, as such, we did not emphasize rigorous bounds, relying instead on observation and intuition. We believe that the consistency of the values obtained in our calculations of $L$-functions, as illustrated in the examples below, is evidence that the results are correct and that they could, in principle, be made rigorous. Using the known $b_n$ and applying the Ramanujan bound \eqref{eqn:ram} to~\eqref{eqn:ex1} we get \begin{equation} Z(\tfrac12+10 i, \Upsilon_{20},\mathrm{stan}) = 3.03930\,70838 \pm 3.12 \times 10^{-8} . \end{equation} And for~\eqref{eqn:ex2} we get \begin{equation} Z(\tfrac12+10 i, \Upsilon_{20},\mathrm{stan}) = 3.03930\,70808 \pm 7.10 \times 10^{-8} . \end{equation} In Figure~\ref{fig:stan10error} we show the calculated value and error estimate for $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$ when evaluated with test functions of the form~$g(s)=e^{-i\beta s}$. Note that the vertical axis in the figure is on a log scale. \begin{figure} \caption{\sf The solid line is the calculated value and the dashed line is the error estimate in computing $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$ using the available Dirichlet coefficients with the weight function $g(s)=e^{-i \beta s}$ where $\beta$ is given along the horizontal axis. The vertical axis is $\log_{10}$ of (the absolute value of) the actual value. For quite a wide range of test functions, the value of the $Z$-function at $\frac12+10i$ is determined with some accuracy, achieving around 10 decimal digits of accuracy with the optimal choice of $\beta$. 
} \label{fig:stan10error} \end{figure} As Figure~\ref{fig:stan10error} shows, there is a wide range of $\beta$ for which it is possible to determine $\Lambda(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$ with some accuracy. With the optimal choice of $\beta$ the error estimate is approximately $4\times 10^{-10}$. Figure~\ref{fig:adj5error} shows the calculated value and error estimate for the adjoint (degree 10) $L$-function $Z(\frac12+5 i,\Upsilon_{20},\mathrm{adj})$ when evaluated with test functions of the form~$g(s)=e^{-i\beta s+\frac{1}{500}(s-5i)^2}$. \begin{figure}\label{fig:adj5error} \end{figure} As Figure~\ref{fig:adj5error} shows, every test function of the given form leads to an error which is larger than the calculated value. Thus, we can determine that $|Z(\frac12+5 i,\Upsilon_{20},\mathrm{adj})| < 0.25$, but with individual test functions of the given form we cannot even determine if the actual value is positive or negative. We now introduce a new idea for increasing the accuracy of these calculations. \subsection{Optimizing the test function} In Figure~\ref{fig:stan10error} we see that there are many values of the parameters in the test function which give reasonable results. If there is some degree of independence in the errors, then there is hope for obtaining a smaller error by combining the results of those separate calculations. Write $Z(s,\Upsilon_{20},\mathrm{stan},\beta)$ for the output of the approximate functional equation with weight function~$g(s)=e^{-i \beta s}$. Consider \begin{equation} Z(s,\Upsilon_{20},\mathrm{stan}) = \sum_{j=1}^J c_{\beta_j} Z(s,\Upsilon_{20},\mathrm{stan},\beta_j) \end{equation} where $\sum c_{\beta_j} =1$. 
We make the specific choices \begin{align}\label{eqn:5weights} s=\mathstrut & \tfrac12+10 i\nonumber\\ (\beta_1,\beta_2,\beta_3,\beta_4,\beta_5) =\mathstrut & \left(\tfrac{1}{10},\tfrac{2}{10},\tfrac{3}{10},\tfrac{4}{10},\tfrac{5}{10}\right) \nonumber\\ (c_{\beta_1},c_{\beta_2},c_{\beta_3},c_{\beta_4},c_{\beta_5}) =\mathstrut & (0.03150,0.18061,0.36563,0.31421,0.10801) . \end{align} Recall that all decimal numbers are truncations of the actual values; one requires much higher precision than the displayed numbers in order to obtain the answers below. With the choices in~\eqref{eqn:5weights}, after substituting the known Dirichlet coefficients and then using the Ramanujan bound, we find \begin{align}\label{eqn:errorexample} Z&(s,\Upsilon_{20},\mathrm{stan}) = \nonumber \\ \mathstrut & 3.03930\,70864\,89527\,82778 + 2.688\cdot 10^{-19} b_{83} + \cdots - 1.147\cdot 10^{-16} b_{107}+ \nonumber\\ & + \cdots -5.291 \cdot 10^{-18} b_{137} + \cdots + 1.216 \cdot 10^{-23} b_{199} + \cdots \nonumber\\ =&\mathstrut 3.03930\,70864\,89527\,827 \pm 4.73 \cdot 10^{-15}. \end{align} Thus, by averaging only 5 evaluations of the $L$-function, the error decreased by more than six orders of magnitude. The weights $c_{\beta_j}$ in~\eqref{eqn:5weights} were determined by finding the least-squares fit to \begin{equation}\label{eqn:leastsquares} \sum_{n:\ b_n\ \text{unknown}} Ram(b_n)^2 \biggl(\sum_j c_{\beta_j} \delta_n(\beta_j,\tfrac12+ 10 i) \biggr)^2 =0, \end{equation} where $Ram(b_n)$ is the Ramanujan bound~\eqref{eqn:ram} for~$b_n$, subject to $\sum c_{\beta_j} = 1$. Note that in our actual examples the vast majority of unknown coefficients have prime index, so the $Ram(b_n)$ weighting is not important, but we include it for completeness. For the calculations in this paper, we use those $n<1000$ for which $b_n$ is unknown in \eqref{eqn:leastsquares}. 
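As a sketch, the constrained minimization \eqref{eqn:leastsquares} subject to $\sum_j c_{\beta_j}=1$ can be solved in closed form with a Lagrange multiplier (function and variable names are hypothetical; NumPy assumed; a tiny ridge term is added to guard against a singular Gram matrix):

```python
import numpy as np

def least_squares_weights(delta, ram, ridge=1e-12):
    """Weights c minimizing sum_n (Ram(b_n) * sum_j c_j delta[n, j])**2
    subject to sum_j c_j = 1.

    delta: (N, J) array, delta[n, j] = delta_n(beta_j) over the unknown indices n
    ram:   (N,) Ramanujan bounds Ram(b_n)
    """
    M = ram[:, None] * delta           # weighted error matrix
    G = M.T @ M                        # (J, J) Gram matrix of the quadratic objective
    ones = np.ones(G.shape[0])
    # Lagrange condition: G c = lambda * ones; solve and normalize so sum(c) = 1.
    x = np.linalg.solve(G + ridge * np.eye(len(ones)), ones)
    return x / (ones @ x)
```

The returned weights sum to 1 by construction, and among all such weight vectors they (approximately, up to the ridge term) minimize the weighted $L^2$ objective in \eqref{eqn:leastsquares}.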
The error estimate in \eqref{eqn:errorexample} is an $L^1$ estimate, not the $L^2$ estimate as shown in \eqref{eqn:leastsquares}, so actually it is possible to choose slightly better weights than those used in our example. In Section~\ref{sec:lp} we show how to obtain the optimal result that can arise from combining different evaluations of the $L$-function, but for now we merely wish to illustrate the seemingly surprising fact that appropriately combining several evaluations can vastly decrease the error. \subsection{Results} In Figure~\ref{fig:stan_errors} we plot the error obtained by combining varying numbers of weight functions, where we evaluate $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$ with a weight function $g(s)=e^{-i \beta s}$ with $\beta = j/10$ for $-10\le j \le 25$. The horizontal axis is the number of terms averaged, where we start with $\beta=\tfrac12$ and first use those $\beta$ which are closest to~$\frac12$. The vertical axis is the error estimate on a $\log_{10}$ scale. The lowest point on the graph, when we average all 36 evaluations, corresponds to \begin{equation} \begin{split}Z(\tfrac12+&10 i,\Upsilon_{20},\mathrm{stan}) =\\ &3.03930\,70864\,89528\,48108\,24603\,28442\,22509\,10 \pm 2.79 \times 10^{-35} . \end{split} \end{equation} \begin{figure} \caption{\sf The error obtained from a least-squares minimization of the error for combining $n$ evaluations of $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$ using the weight functions $g(s)= e^{- i \beta s}$ with $\beta=j/10$. The horizontal axis is $n$ and the vertical axis is $\log_{10}$ of the resulting error estimate. } \label{fig:stan_errors} \end{figure} It is not clear from Figure~\ref{fig:stan_errors} whether one would expect to obtain an arbitrarily small error by combining sufficiently many test functions in the approximate functional equations. This is discussed in Section~\ref{sec:speculation}. 
We briefly describe the corresponding calculations for $Z(\frac12+5 i,\Upsilon_{20},\mathrm{adj})$. Recall that, as shown in Figure~\ref{fig:adj5error}, with a single test function of the standard form we are not able to determine whether that value is positive or negative. Now we combine five evaluations in the analogous way: \begin{equation} Z(s,\Upsilon_{20},\mathrm{adj}) = \sum_{j=1}^J c_{\beta_j} Z(s,\Upsilon_{20},\mathrm{adj},\beta_j) \end{equation} where $\sum c_{\beta_j} =1$. Here the weight function is $g(s) = e^{-i \beta s + (s-5i)^2/500}$. We make the specific choices \begin{align}\label{eqn:10weights} s=\mathstrut & \tfrac12+5 i\nonumber\\ (\beta_1,\beta_2,\beta_3,\beta_4,\beta_5) =\mathstrut & \left(\tfrac{3}{5},\tfrac{6}{5},\tfrac{9}{5},\tfrac{12}{5},\tfrac{15}{5}\right) \nonumber\\ (c_{\beta_1},c_{\beta_2},c_{\beta_3},c_{\beta_4},c_{\beta_5}) =\mathstrut & (0.035863,0.33504,0.47934,0.13827,0.01146) . \end{align} The result is \begin{equation} Z(\tfrac12 + 5 i,\Upsilon_{20},\mathrm{adj}) = 0.01556 \pm 0.0049. \end{equation} So we have determined that $Z(\tfrac12 + 5 i,\Upsilon_{20},\mathrm{adj})$ is positive, but we can only be certain of one significant figure in its decimal expansion. Using this method, the best result we were able to obtain, by averaging 11 evaluations, is \begin{equation} Z(\tfrac12 + 5 i,\Upsilon_{20},\mathrm{adj}) = 0.01558768 \pm 0.00016. \end{equation} Averaging more weight functions actually makes the result worse. The explanation is simple: since the adjoint $L$-function has high degree, the error terms $\delta_n$ decrease very slowly. The least-squares fit does not properly take into account the contribution of a large number of small terms, so the resulting fit actually has a large $L^1$ norm. In the next section we describe some ``sanity checks'' on our method, as suggested by the referee. Then, in Section~\ref{sec:lp}, we introduce a different method which avoids some of the shortcomings in the least-squares method. 
\subsection{The method applied to known examples}\label{sec:realitychecks} We check that our method gives correct results in some cases where it is possible to evaluate the $L$-function using another method. First we consider $L(s, \Delta)$, where $\Delta$ is the (unique) weight 12 cusp form for $SL(2,\mathbb Z)$, which satisfies the functional equation \begin{equation} \Lambda(s,\Delta) := \Gamma_\mathbb C(s+\tfrac{11}{2}) L(s,\Delta) = \Lambda(1-s,\Delta) . \end{equation} We will evaluate $Z(\tfrac12+100 i,\Delta)$ \emph{without using any Dirichlet coefficients}, other than the leading coefficient~1. That is, all we know about the $L$-function is its functional equation and the fact that its Dirichlet coefficients satisfy the Ramanujan bound. In the approximate functional equation we will use the test function $g_\beta(s)=e^{-i \beta s + (s-100i)^2/100}$, for $\beta=-\frac{30}{20},-\frac{29}{20},...,\frac{69}{20},\frac{70}{20}$. That is a total of 101 evaluations. Using the least-squares method described previously, we find 101 coefficients $c_\beta$ with $\sum c_\beta=1$. Forming the weighted sum of the 101 evaluations of $Z(\tfrac12+100i,\Delta)$ and estimating the unknown terms as described previously, we find \begin{equation} Z(\tfrac12+100i,\Delta) = -0.23390\,65915\,56845\,20570\,65824\,17137\,27923\,81141\,00783 \pm 3.28 \times 10^{-42}. \end{equation} The given digits are correct to the claimed accuracy: the last three digits shown should be 880, and the actual difference between the calculated value and the true value is $9.66\times 10^{-44}$. Next we consider the $L$-function associated to a weight 24 cusp form for $SL(2,\mathbb Z)$. Note that $S_{24}(SL(2,\mathbb Z))$ is two dimensional, and every cusp form $f$ in that vector space satisfies the functional equation \begin{equation}\label{eqn:fe24} \Lambda(s,f) := \Gamma_\mathbb C(s+\tfrac{23}{2}) L(s,f) = \Lambda(1-s,f) . 
\end{equation} We will attempt to evaluate $Z(\frac12+100i,f)$ using as few coefficients as possible. Because there is more than one function satisfying~\eqref{eqn:fe24}, it seems obvious that we cannot evaluate such an $L$-function without knowing any coefficients. If we assume the cusp form $f$ is a Hecke eigenform, then the Dirichlet coefficient $b_2$ determines the coefficients $b_4$, $b_8$, etc., and it also allows us to eliminate every even-index Dirichlet coefficient as an ``unknown.'' Since that does not seem like an adequately strenuous test of the method, instead we will assume nothing about the Dirichlet series except the functional equation~\eqref{eqn:fe24} and a bound on the Dirichlet coefficients. We will assume a Ramanujan bound of the form $|b_n| \le C_f d(n)$, where $d(n)$ is the divisor function and $C_f$ is a constant depending only on the cusp form~$f$. If $f$ is a Hecke eigenform then $C_f=1$, and if $f=A f_1+B f_2$ where $f_1,f_2\in S_{24}(SL(2,\mathbb Z))$ are the Hecke eigenforms, then $C_f = |A|+|B|$. Using the same 101 test functions as in the case of $L(s,\Delta)$, and choosing 101 weights to minimize the contribution of $b_3$, $b_4$, ..., we find \begin{align}\label{eqn:wt24at100i} Z(\tfrac12 + 100i,f) =\mathstrut & 1.87042\,65340\,29268\,89914\,33391\,93910\,89610\,35060\,87410\ b_1 \cr &+ 1.12500\,88863\,02338\,48447\,34844\,21487\,86375\,36206\,60254\ b_2 \cr &\pm C_f \times 2.86 \times 10^{-43}. \end{align} Thus, we can evaluate $Z(\frac12+100i,f)$ to 42 decimal places, knowing (up to a normalizing constant) only one Dirichlet coefficient. The values of $b_2$ for the two Hecke eigenforms, in the analytic normalization, are \begin{equation} b_2 = \frac{540 \pm 12 \sqrt{144169}}{2^{\frac{23}{2}}}, \end{equation} and inserting those values confirms that \eqref{eqn:wt24at100i} is correct. Our third check on the method is to evaluate a degree 10 $L$-function which can also be evaluated in an independent way. 
To give a reasonable match with the case of $L(\frac12+5 i, \Upsilon_{20},\mathrm{adj})$, we consider $L(\frac12+5 i,f)^5$, where $f$ is a weight 24 cusp form for $SL(2,\mathbb Z)$. In other words, the same $L$-function as in the previous example, except that we take its 5th power and evaluate at $\frac12 + 5 i$. Note that this time there are two $L$-functions with the given functional equation, with known values: \begin{align} L(\tfrac12+5 i,f_1)^5=\mathstrut & (-3.0527819)^5 = -265.14223\\ L(\tfrac12+5 i,f_2)^5=\mathstrut & (-0.7404879)^5 = -0.2226331 \ . \end{align} We will assume that we know the Euler factors up through $p=79$, just as in the case of $L(\frac12+5 i, \Upsilon_{20},\mathrm{adj})$. If we only use a single test function of the standard form, then the best error we can obtain is comparable to what we found in the previous degree~10 case: \begin{align} L(\tfrac12+5 i,f_1)^5=\mathstrut & -265.204\pm 0.314 \\ L(\tfrac12+5 i,f_2)^5=\mathstrut & -0.22193 \pm 0.233\ . \end{align} Using the test functions $g_\beta(s) = e^{-i \beta s + (s-5i)^2/500}$, selecting the 8 values of $\beta$ in the set $\{-\frac{10}{10}, -\frac{2}{10},\frac{4}{10},\frac{6}{10},\frac{11}{10}, \frac{17}{10},\frac{18}{10},\frac{25}{10}\}$, and using the least-squares method to find suitable weights, we find \begin{align} L(\tfrac12+5 i,f_1)^5=\mathstrut & -265.14224\pm 0.00117 \\ L(\tfrac12+5 i,f_2)^5=\mathstrut & -0.222664 \pm 0.00186\ . \end{align} Averaging only 8 evaluations decreased the error by a factor of more than 100, and we see that the calculated values are in fact correct. This confirms that our method gives consistent results in cases of comparably complicated $L$-functions for which it is possible to give an independent check on the calculations. \section{Linear programming}\label{sec:lp} In this section we view the evaluation of the $L$-function as an optimization problem. 
For example, we can view the equality of the expressions in~\eqref{eqn:ex1} and~\eqref{eqn:ex2} as a \emph{constraint} on the value of the $L$-function. Thus, the same calculations which were used as input for the least-squares method described in the previous section can also be used as input to a linear programming problem. We set up the linear programming problem in the following way. Let $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan},g_j)$ denote the evaluation of $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$ using the weight function $g_j$ in the approximate functional equation. One evaluation, $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan},g_1)$, is taken as the objective. Each of the remaining evaluations is paired with the first, and \begin{equation}\label{eqn:constraint} Z(\tfrac12+10 i,\Upsilon_{20},\mathrm{stan},g_1) - Z(\tfrac12+10 i,\Upsilon_{20},\mathrm{stan},g_j) = 0 \end{equation} is interpreted as a constraint. The other constraints come from the Ramanujan bound on the unknown coefficients. In the above description there are infinitely many unknowns and constraints. We eliminate the unknown coefficients $b_n$ with $n>1000$ by using the Ramanujan bound, replacing the equality \eqref{eqn:constraint} by a pair of inequalities. Thus, we have a straightforward linear programming problem, which we use to determine the minimum and maximum possible values of $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$. We implemented this idea using the same set of test functions we used for the least-squares method. The calculations were done in Mathematica \cite{mathematica}, using the built-in {\tt LinearProgramming} function with the {\tt Method -> Simplex} option. In all cases the linear programming method gave better results, but not spectacularly better. Figure~\ref{fig:LPstan10} shows the ratio of the errors in the results of the two methods, on a $\log_{10}$ scale. 
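A minimal sketch of this linear programming setup (hypothetical names; NumPy and SciPy assumed; the paper's computations used Mathematica's {\tt LinearProgramming}, and a real run would need high-precision arithmetic rather than machine doubles). Each evaluation $j$ equals a known part plus a linear form in the unknown coefficients, so with variables $(V, b_1,\dots,b_N)$ one imposes $V - \sum_n \delta_{j,n} b_n = \text{known}_j$ and $|b_n| \le \text{ram}_n$, then minimizes and maximizes the objective. Objective index $0$ bounds the value $V$ itself; an index $i\ge 1$ instead bounds the unknown coefficient $b_i$, the variant used for computing unknown coefficients:

```python
import numpy as np
from scipy.optimize import linprog

def lp_range(known, delta, ram, objective=0):
    """Min and max of one variable subject to the evaluation constraints.

    known: (J,) known part of each evaluation
    delta: (J, N) coefficient of unknown b_n in the j-th evaluation
    ram:   (N,) Ramanujan bounds |b_n| <= ram[n]
    Variables are x = (V, b_1, ..., b_N); each evaluation j imposes
    V - sum_n delta[j, n] * b_n = known[j].
    """
    J, N = delta.shape
    A_eq = np.hstack([np.ones((J, 1)), -delta])
    bounds = [(None, None)] + [(-r, r) for r in ram]  # V is free
    c = np.zeros(N + 1)
    c[objective] = 1.0
    lo = linprog(c, A_eq=A_eq, b_eq=known, bounds=bounds, method="highs")
    hi = linprog(-c, A_eq=A_eq, b_eq=known, bounds=bounds, method="highs")
    return lo.fun, -hi.fun
```

The width of the returned interval is the rigorous error in determining the chosen quantity from the given evaluations and the Ramanujan bound.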
For example, when using 30 equations the error from the linear programming approach was approximately $1/10$ the error from the least-squares method. \begin{figure} \caption{\sf The ratio of the errors in the linear programming and least-squares methods for evaluating $Z(\frac12+10 i,\Upsilon_{20},\mathrm{stan})$ using $n$ equations. The horizontal axis is $n$ and the vertical axis is $\log_{10}$ of the ratio of errors. } \label{fig:LPstan10} \end{figure} \subsection{Computing unknown coefficients} An interesting side-effect of the linear programming approach is that it allows us to obtain information about the unknown coefficients. Instead of treating the value of the $L$-function as the objective, we can use an unknown coefficient as the objective. Note that all the other constraints in the problem are unchanged. Using the same method as described above we find the following coefficients of the standard $L$-function of~$\Upsilon_{20}$: \begin{align} b_{83}=\mathstrut& 0.48845\,58312\,724 \pm 2.4\times 10^{-12}\cr b_{89}=\mathstrut& 0.10561\,760640 \pm 2.7\times 10^{-10}\cr b_{97}=\mathstrut& 0.46813\,5808 \pm 1.5 \times 10^{-7} \end{align} Since the eigenvalues of $\Upsilon_{20}$ are integral and of a known size, if we were to know $b_{83}$ to 35 digits, we would determine it exactly. This is computationally expensive but perhaps not as expensive as if we were to compute more Fourier coefficients of $\Upsilon_{20}$. These results can be checked once more Fourier coefficients of $\Upsilon_{20}$ are computed. By the $n^4$ argument given in the introduction, computing exact values of $b_p$ for $p\le 97$ will take about twice as much work as it took to compute those for $p\le 79$. \section{Conclusions and further questions}\label{sec:speculation} We have shown that, at the cost of a lot of computation, one can evaluate an $L$-function to high precision using only a small number of coefficients. 
That this is theoretically possible is not surprising: $L$-functions are very special objects, and the data we have for the $L$-function considered here (the functional equation and the first several coefficients) presumably specify the $L$-function uniquely. Thus, in an abstract sense there is no new information in the missing coefficients. But the question remains as to whether our methods accomplish this in practice. \begin{question} Can the method of calculating an $L$-function by evaluating the approximate functional equation and then averaging to minimize the contributions of the unknown coefficients determine numerical values of the $L$-function to arbitrary accuracy? \end{question} Because the approximate functional equation requires a huge number of terms to evaluate a high-degree $L$-function, it would be significant if the weights we obtained by our methods could be determined without actually calculating all the terms with unknown coefficients. \begin{problem} Devise a method of determining an optimal weight function in the approximate functional equation without first calculating a large number of terms which do not actually contribute to the final answer. \end{problem} Implicit in the above problem is the requirement that one knows the calculated value as well as an estimate of the error. \begin{problem} Is there any meaning to the weights determined by the least-squares method? \end{problem} The weights which appear in our least-squares method depend on the point at which the $L$-function is evaluated. It might be helpful to consider a case where there are two $L$-functions with the same functional equation, such as the spin $L$-functions of ${\mathrm Sp}(4,\mathbb Z)$ Siegel modular forms of weight $k\ge 22$. Without progress on these problems, or a completely new method, there is little hope of making extensive computations of high-degree $L$-functions. \affiliationone{David W. 
Farmer\\ American Institute of Mathematics\\ 360 Portage Ave\\ Palo Alto, CA 94306\\ USA\\ \email{[email protected]} } \affiliationtwo{Nathan C. Ryan\\ Department of Mathematics\\ Bucknell University\\ Lewisburg, PA 17837\\ USA\\ \email{[email protected]} } \end{document}
Why did Muhammad order Muslims to sever all ties with their parents and all other family members until they embraced Islam? One of the main reasons Muhammad ordered Muslims to sever all ties with their parents and all other family members until they embraced Islam was because he wanted to create a sense of community and identity amongst his followers. He wanted them to feel like they were part of something larger than themselves and that they had a duty to protect and promote the interests of the group. Additionally, Muhammad may have believed that by breaking ties with family members, Muslims would be less likely to experience personal conflicts that could potentially undermine the unity of the group. Why do flies try to get into the house even if there is no food inside? A house typically contains many opportunities for a fly to find a mate and lay eggs. Once the fly finds a mate, they can lay their eggs on any number of surfaces inside the house, including garbage, rotting food, or even feces. The eggs hatch into larvae which then mature into adult flies. Should the AAP have thought through basic protocols before entering mainstream politics given that they now want to move back from a democratic system of candidate selection? AAP should have thought through basic protocols before entering mainstream politics given that they now want to move back from a democratic system of candidate selection. Having a clear and concise plan for how the party plans to operate would have helped to avoid any confusion or conflict that may have arisen from making a sudden shift in strategy. Did the Joker film disturb you at all? Why is it such a negative film? The Joker film is disturbing because it portrays violence and mental illness in a very negative light. It is also disturbing because it is a very graphic film. Why do people still use dial up if DSL is available almost everywhere dial up is? There are a number of reasons. 
First, dial-up is still cheaper than DSL or cable for many people, especially if they do not use the Internet very often. Second, DSL and cable require a phone line or cable TV line, respectively, which many people do not have. In contrast, almost everyone has a phone line, so almost everyone can get dial-up service. Third, DSL and cable are not available in all areas. In rural areas, for example, only dial-up is available. Fourth, some people simply prefer dial-up service for its simplicity. I am 15 years old. I want to create a basic iOS app, however I have no idea how to code. How do I learn, and what is a simple genre of app to create? I.e. Basic racing game or some kind of Bluetooth texting app. Please be honest. Thanks! The best way to learn how to code is to find some resources online and start practicing. A simple genre of app to create would be a basic game or a simple utility app. What is the right way to choose an electronic document management system? When choosing an electronic document management system, it is important to consider the specific needs of your organization and what features you need the system to have. Additionally, it is important to compare the different systems available and find one that is within your budget. Where was the first center of agriculture? The first centers of agriculture were in the Fertile Crescent. What exactly is Prince Harry's major malfunction? Each time we see him, he's acting like a smug as**ole, or sullen/moody or seems angry. He doesn't seem present in any situation he does. It's not some kind of compassion gift that he's 'vectored in on his causes' (larger issues) because he can't seem to see anything but himself. Also, he seems high or something. It's like watching a train wreck. Do other people feel like they have lost several months of their life since the pandemic started? This is a difficult question to answer. 
Some people may feel like they have lost several months of their life since the pandemic started, while others might not feel this way at all. It likely depends on a variety of factors, such as how much someone's life has changed since the pandemic began and how they have coped with these changes. Does practicing "Tratak" really activate your pituitary gland, causing it to secrete growth hormone? There is no scientific evidence that supports the claim that "Tratak" activates the pituitary gland and causes it to secrete growth hormone. Does money punishment change bad behaviour or act? There is no definitive answer to this question. Some argue that money punishment does not change bad behaviour or act, while others argue that it may have some effect in deterring criminal behaviour. There is no clear consensus on the matter. I am in love with a 16 year old girl and she loves me too. I want to marry her one day. I am 23 year old. Is it illegal even being in a relationship? No, it is not illegal to be in a relationship. What exactly is A/C current, and why is there still "polarity" associated with one side of an outlet when (I thought) positive and negative current is taking turns running up and down the power cord? What exactly is A/C current? It stands for Alternating Current. Anyway, AC is similar in one phrase as DC (Direct Current) but opposite in the rest cases. As you said, in DC the current always flows in the same direction. On the contrary, AC changes its direction periodically and it always has a frequency associated with it. The might be even casted to Vector by adding frequency phasors to the time domainSignals: For example, we don't talk about an El (positive current) or a Keel (negative current). We talk about an Element with {Meter per Ampere} as its unit [1]. However, despite of this fact that AC is carried througout space, it only change its direction lineally through time. 
It follows a sinusoidal shape as show below [2]: $$i(t)=I_\mathrm{max}\,\sin{\omega t}$$ Where \$\sin{\omega t}\$ define a polarity (Phase), \$I_\mathrm{max}\$ is like El or Keel is defined in DC case but, this time, is called : peak value and finally \$\omega=2\pi f\$ is equal to the periodically time where \$f\$ is the frequency which means cycles per second [3]. For further information check these wiki articles out: 1- Direct current; 2- Alternating Current; and 3- Frequency reference article. You might want to add 'polarity' tag for future readers so that your question will likely be found out looking for it and also mentioned terms like: AC/DC Phasor, Frequency .~. ;) .~. That's because your next question points towards my new edit were I was supposed to explain how phasors can give you such periodic outputs from around zero position without a fully shape staright shape [4]: $$i(t)=I_\mathrm{max}\sin(\omega t + \theta)\tag{1}$$ Because think of phasors' slow down version on time domain !! So if you use phasors to get instanteous values at zero position as an angle reference they What is Doklam issue with India? What Indian government should be done to avoid such tension over border land? The Doklam issue is a border dispute between India and China over the Doklam plateau, which is located in Bhutan. The Indian government should take measures to avoid such tensions by holding talks with the Chinese government and working out a mutually acceptable solution. Can Interpol arrest someone in the United Kingdom because of debt (personal loan less than 500 thousand dirhams) in the UAE? In theory, Interpol cannot make arrests for debt. However, some countries have extradition treaties with the UAE that may allow for Interpol to assist in the arrest of an individual if they are believed to have skipped out on a loan. How would you write an argumentative essay presenting your stand on the topic: globalization does not need media for global integration? 
Globalization does not need media for global integration because globalization is the process of expanding world trade, contacts between societies, and the sharing of ideas around the world. The media is not necessary for this process to occur. Do Taliban fighters watch porn? There is no definitive answer to this question, as the Taliban is a highly conservative Islamic organization and its members likely do not approve of pornography. However, it is possible that some Taliban fighters do watch porn, either out of curiosity or because they are unable to resist temptation. How bad is traffic around LAX? Traffic around LAX is notoriously bad, often causing congestion and delays on the surrounding highways. Is all news on the internet real/true? All news on the internet is not necessarily true. However, there are many credible news sources that publish accurate information. I had been feeling unmotivated, lost, and always sleepy. What should I do to get on track again? It sounds like you might be experiencing depression. Depression is a serious condition that can cause a variety of symptoms, including feeling unmotivated, lost, and always sleepy. If you think you might be depressed, it's important to see a doctor or mental health professional for an evaluation. Treatment for depression can be very effective, so don't hesitate to reach out for help. How easily does IKEA furniture fall apart? IKEA furniture falls apart quite easily. Is the food good on US Naval vessels? There is no one answer to this question as each person's tastes are different. Some people may enjoy the food on US Naval vessels, while others may not. What software tools do professional data recovery places use? There are many software tools used by professional data recovery places including: -Data Recovery software -Recovery Partition software -Partition Recovery software -File Recovery software What can be done to clean up the plastic in the oceans? 
Are there patches where plastic seems to congregate? There are many ways to clean up the plastic in the oceans. One way is to use a device called a "skimmer" which can be towed behind a boat to collect floating debris. Another way is to use a net to collect debris that has sunk to the ocean floor. There are also many organizations that clean up beaches and coastal areas. Why does Qualcomm enjoy near monopoly in smartphone chips, and why do other companies not compete with them? I thought a VPN would anonymize a user and their IP and MAC address. Disregarding their location, can a VPN service keep our identities safe? How determined must a (government/hacker/stalker) be to trace an online presence "secured" behind a VPN? How will Islam make their goal happen? Are you into fantasy baseball, and if so, what are several sources that you believe most helpful to success in the game? Do hidden car antennas work well? How can I avoid those black heads and white heads forever? I have lots on and nose and cheeks parts..and am 18 is it okay not to wash my face without soap or facewash? How much it will take to start a trading business? Is it worth doubting the intention of police in mob lynching incident of Palghar, Maharashtra in which mob killed sadhus in suspicion of theft? What scene from Friends can't you watch enough of? How does Jay Garrick's hat stay on? 
best dog foods for picky eaters order to watch sword art online series applebees or chilis nude move how to attach paddle board to car name something that uses key fax machine and phone on same line hulu add blocker how long to cook frozen chicken in a crockpot pea protein vs whey rutgers easy a classes conservativereview com nietzsche mbti when should you wear a sports bra how to make hopper go into chest westport ct sales tax lodha palava phase 2 knee boots for men men stauer watch can box turtles eat blueberries dribble up activation code sims freeplay cheats that actually work 2022 who sings vivir mi vida what is half of 3 3 8 turmeric heartburn best hashtags on linkedin dop acronym invisalign smile widening
CommonCrawl
# Understanding systems of polynomial equations

Before we dive into solving methods for systems of polynomial equations, let's first make sure we have a good understanding of what a system of polynomial equations is.

A system of polynomial equations is a set of equations in which each equation is a polynomial. The variables typically range over the real numbers, although complex values are also considered. The goal is to find the values of the variables that satisfy all of the equations in the system simultaneously.

For example, consider the following system of polynomial equations:

$$
\begin{align*}
2x + y &= 5 \\
x^2 + y^2 &= 10
\end{align*}
$$

In this system, we have two equations in two variables, $x$ and $y$. The goal is to find the values of $x$ and $y$ that make both equations true at the same time.

Solving systems of polynomial equations is an important problem in mathematics and has many applications in fields such as physics, engineering, and computer science. It allows us to model and analyze real-world phenomena, make predictions, and solve complex problems.

# Solving methods for systems of polynomial equations

1. Substitution method: Solve one equation for one variable and substitute the result into the other equations. It is straightforward but can be time-consuming for large systems.
2. Elimination method: Eliminate one variable at a time by adding or subtracting combinations of equations. It is systematic but can become complex for systems with many variables.
3. Gaussian elimination: Transform the system into an equivalent triangular form that is easier to solve. It applies directly to linear systems (or to linear subproblems arising within a polynomial solver) and is efficient, but may require substantial computational resources for large systems.
4. Cramer's rule: Use determinants to express the solutions of a linear system. It is elegant but computationally expensive for systems with many variables.
5. Newton's method: Iteratively improve an initial guess of the solutions using the Jacobian of the system of equations. It is a fast local method but may not always converge to the desired solutions.

These are just a few of the many methods available for solving systems of polynomial equations. Each method has its own advantages and disadvantages, and the choice of method depends on the specific characteristics of the system and the desired accuracy of the solutions.

# The role of dimension theory in solving equations

Dimension theory plays a crucial role in solving systems of polynomial equations. It provides a framework for understanding the number of solutions a system can have and the complexity of finding those solutions.

In algebraic geometry, the dimension of a variety is a measure of its complexity. A variety is the set of solutions to a system of polynomial equations, and its dimension is the maximum number of independent parameters needed to describe its solutions.

The dimension of a variety can be determined using the concept of transcendence degree. The transcendence degree of a field extension is the maximum number of elements that are algebraically independent over the base field. In the context of solving polynomial equations, the transcendence degree corresponds to the number of independent parameters needed to describe the solutions.

By studying the dimension of a variety, we can gain insight into the number and nature of its solutions. For example, a variety of dimension zero consists of finitely many isolated points, a variety of dimension one is a curve, and a variety of dimension two is a surface, such as a surface sitting in three-dimensional space.

Understanding the dimension of a variety allows us to choose appropriate solving methods. For example, for varieties of dimension zero or one, we can use algebraic methods such as substitution or elimination.
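As a concrete illustration of the substitution method, here is a short Python sketch that solves the introductory system $2x + y = 5$, $x^2 + y^2 = 10$ by substituting $y = 5 - 2x$ and applying the quadratic formula (the reduction to $x^2 - 4x + 3 = 0$ is worked out in the comments):

```python
import math

# Substituting y = 5 - 2x into x^2 + y^2 = 10 gives
#   x^2 + (5 - 2x)^2 = 10  ->  5x^2 - 20x + 15 = 0  ->  x^2 - 4x + 3 = 0
a, b, c = 1.0, -4.0, 3.0
disc = b * b - 4 * a * c                       # discriminant of the quadratic
roots_x = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
solutions = [(x, 5 - 2 * x) for x in roots_x]  # back-substitute to recover y
print(solutions)   # [(3.0, -1.0), (1.0, 3.0)]
```

Both pairs satisfy the original equations, which is easy to verify by substituting them back in.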
However, for varieties of higher dimension, more advanced techniques such as numerical methods or computer algebra systems may be required.

# Bertini's algorithm and its applications

Bertini's algorithm is a powerful tool in real algebraic geometry for solving systems of polynomial equations. The software is named in honor of the classical algebraic geometer Eugenio Bertini; its original version, known as Bertini Classic, was developed by Daniel J. Bates, Jonathan D. Hauenstein, Andrew J. Sommese, and Charles W. Wampler. A more efficient and modularizable version, called Bertini 2.0, was later developed by Danielle Brake.

Bertini's algorithm works by numerically approximating the solutions to a system of polynomial equations, adapting its computations to the size and complexity of the system. The program allows users to adjust settings to optimize its run time, although determining the best settings can be challenging.

The applications of Bertini's algorithm are wide-ranging. It has been used in problems related to robotics, chemical reaction networks, and Global Positioning Systems (GPS). For example, GPS location services rely on solutions to systems of polynomial equations to determine the coordinates of target objects. By solving a system of equations that involves the coordinates of multiple satellites and the speed of light, GPS systems can approximate the position of an object of interest.

# Real-world examples of using Bertini's algorithm

One application of Bertini's algorithm is in robotics. Robots often rely on systems of polynomial equations to perform tasks such as motion planning and control. By solving these equations with Bertini's algorithm, robots can determine the optimal paths and trajectories to achieve their objectives. For example, a robot arm with multiple joints can use the algorithm to find the joint angles that result in a desired end-effector position.

Another application is in chemical reaction networks.
Chemical reactions can be modeled as systems of polynomial equations in which the variables represent the concentrations of the chemical species involved. By solving these equations with Bertini's algorithm, researchers can gain insight into the behavior and dynamics of chemical reactions. This information is crucial for designing and optimizing chemical processes.

Bertini's algorithm also plays a role in Global Positioning Systems (GPS). As mentioned earlier, GPS systems use solutions to systems of polynomial equations to determine the coordinates of target objects. By solving these equations with Bertini's algorithm, GPS systems can accurately locate objects based on the signals received from multiple satellites.

# Advanced techniques for solving systems of polynomial equations

One advanced technique is the use of homotopy continuation methods. Homotopy continuation is a numerical method that starts with an easy-to-solve system of equations and gradually deforms it into the original system, allowing the solutions of the original system to be computed efficiently. Homotopy continuation methods have been successfully applied to a wide range of problems, including robotics, computer vision, and computational biology.

Another advanced technique is numerical algebraic geometry, which combines algebraic geometry with numerical methods to solve polynomial equations. It allows for the computation of approximate solutions and provides insight into the geometry of the solution set. Numerical algebraic geometry has applications in robotics, computer-aided design, and optimization problems.

Computer algebra systems (CAS) are another valuable tool for solving systems of polynomial equations. CAS software, such as Mathematica and Maple, provides powerful algorithms and tools for symbolic and numerical computations.
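To make the idea of homotopy continuation concrete, here is a deliberately simplified, univariate Python sketch: it tracks the known roots of an easy start system $g$ along the homotopy $H(x, t) = (1 - t)\,g(x) + t\,f(x)$ to the roots of the target $f$. Production solvers such as Bertini track many variables at once, work over the complex numbers, and use adaptive step sizes; the fixed step count and the polynomials below are illustrative choices, not part of any particular solver's interface.

```python
def track_root(x, f, fp, g, gp, steps=200, corrections=3):
    """Track one root of H(x, t) = (1-t)*g(x) + t*f(x) from t=0 to t=1."""
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(corrections):           # Newton corrections at each t-step
            H  = (1 - t) * g(x)  + t * f(x)
            Hp = (1 - t) * gp(x) + t * fp(x)
            x -= H / Hp
    return x

# Target f(x) = x^2 - x - 1; easy start system g(x) = x^2 - 1 with known roots +-1.
f,  fp = lambda x: x * x - x - 1, lambda x: 2 * x - 1
g,  gp = lambda x: x * x - 1,     lambda x: 2 * x
roots = [track_root(x0, f, fp, g, gp) for x0 in (1.0, -1.0)]
print(roots)   # approximately [1.618..., -0.618...], the roots of x^2 - x - 1
```

Each start root traces a continuous path to a target root; in higher dimensions the same path-tracking idea applies coordinate-wise, with more care taken near singular points.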
CAS packages of this kind can handle large and complex systems of equations and offer a wide range of functionalities for analyzing and solving polynomial equations.

In addition to these techniques, there are many other advanced methods and tools available for solving systems of polynomial equations. The choice of technique depends on the specific problem and the desired level of accuracy. By combining these advanced techniques with Bertini's algorithm, researchers and practitioners can tackle complex problems in real algebraic geometry.

# Using technology to solve polynomial equations

One way technology can be used is through computer algebra systems (CAS). CAS software, such as Mathematica, Maple, and SageMath, provides powerful tools for symbolic and numerical computations. Such software can handle large and complex systems of equations, perform symbolic manipulations, and offer a wide range of functionalities for analyzing and solving polynomial equations. CAS software can significantly speed up the solving process and provide accurate solutions.

Another way technology can be used is through numerical methods and algorithms. Numerical methods, such as Newton's method and homotopy continuation methods, allow for the computation of approximate solutions to polynomial equations. These methods can handle systems with a large number of variables and equations, making them suitable for complex problems, and technology enables their efficient implementation.

In addition to CAS software and numerical methods, technology also enables the visualization and analysis of polynomial equations. Graphing software, such as Desmos and GeoGebra, can plot the solution sets of polynomial equations, allowing for a better understanding of their geometry. Visualization tools can help identify patterns, symmetries, and critical points, leading to insights into the behavior of the equations.
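As a small illustration of the numerical methods mentioned above, the following Python sketch applies Newton's method to the chapter's introductory system $2x + y = 5$, $x^2 + y^2 = 10$, solving the $2 \times 2$ Jacobian system at each step via Cramer's rule. The starting guess is an arbitrary illustrative choice:

```python
def newton2(F, J, x, y, tol=1e-12, max_iter=50):
    """Newton's method for a 2-variable system: solve J * d = -F each step."""
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = J(x, y)               # Jacobian entries [[a, b], [c, d]]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det      # Cramer's rule for the 2x2 solve
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
    return x, y

F = lambda x, y: (2 * x + y - 5, x * x + y * y - 10)
J = lambda x, y: (2, 1, 2 * x, 2 * y)      # constant linear row, then (2x, 2y)
sol = newton2(F, J, 0.5, 3.0)
print(sol)   # converges to (1.0, 3.0) from this starting guess
```

A different starting guess may converge to the other solution $(3, -1)$, or fail to converge at all, which is exactly the local behavior the text attributes to Newton's method.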
Technology also facilitates collaboration and the sharing of knowledge in the field of polynomial equation solving. Online platforms, such as GitHub and Stack Exchange, provide forums for researchers and practitioners to discuss and exchange ideas. These platforms allow for the dissemination of new techniques, algorithms, and software, fostering innovation and advancements in the field.

# Challenges and limitations in solving systems of polynomial equations

One of the main challenges is the computational complexity of solving systems of polynomial equations. As the number of variables and equations increases, the solving process becomes more computationally intensive, and the time and memory requirements can quickly become prohibitive for large and complex systems. To overcome this challenge, researchers and practitioners often rely on advanced techniques, such as numerical methods and computer algebra systems, to speed up the solving process and handle the computational complexity.

Another challenge is the existence and uniqueness of solutions. Not all systems of polynomial equations have solutions, and even when solutions exist, they may not be unique. Determining the existence and uniqueness of solutions is a non-trivial task and often requires a deep understanding of the underlying mathematics. Researchers and practitioners need to analyze the problem carefully and choose appropriate solving methods based on the characteristics of the system.

The accuracy of solutions is another limitation. Because numerical methods rely on approximations, the solutions obtained may not be exact. The achievable level of accuracy depends on the chosen solving method and the desired precision. Researchers and practitioners need to evaluate the accuracy of the solutions carefully and assess whether they meet the requirements of the problem.
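One simple, commonly used accuracy check is to substitute an approximate solution back into the system and measure the residual. The sketch below does this for the chapter's introductory system; the perturbed approximate solution and the tolerance `1e-5` are illustrative values, and note that a small residual does not by itself guarantee a small error for ill-conditioned systems:

```python
def residual(solution, equations):
    """Largest absolute value attained by any equation at the candidate point."""
    return max(abs(f(*solution)) for f in equations)

eqs = [lambda x, y: 2 * x + y - 5,
       lambda x, y: x * x + y * y - 10]
approx = (1.0000004, 2.9999992)   # a slightly perturbed version of the root (1, 3)
r = residual(approx, eqs)
print(r, r < 1e-5)                # a residual around 4e-6: acceptable here
```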
In addition to these challenges and limitations, there are other factors to consider, such as the availability and accessibility of computational resources, the complexity of the problem domain, and the expertise and experience of the solver. Researchers and practitioners need to navigate these factors and make informed decisions to solve systems of polynomial equations effectively.

# Comparing and contrasting different solving methods

One common method is substitution, where one equation is solved for one variable and the result is substituted into the other equations. This reduces the system to a single equation in one variable, which can be solved using algebraic techniques. Substitution is straightforward and works well for small systems with few variables, but it can be cumbersome and inefficient for large systems with many variables, as it involves solving multiple equations.

Another method is elimination, where one variable at a time is eliminated from the system by combining equations. This reduces the number of variables in the system, making it easier to solve. Elimination is particularly useful for systems with many equations and few variables, but it can be challenging for systems with many variables, as it requires careful manipulation of the equations.

Numerical methods, such as Newton's method and homotopy continuation methods, are another set of techniques for solving systems of polynomial equations. These methods approximate the solutions using numerical computations and are particularly useful for large and complex systems, as they can handle many variables and equations. However, they may not provide exact solutions and require careful consideration of the desired level of accuracy.

Computer algebra systems (CAS) provide another approach to solving systems of polynomial equations.
CAS software, such as Mathematica and Maple, offers powerful algorithms and tools for symbolic and numerical computations. It can handle large and complex systems, perform symbolic manipulations, and provide accurate solutions, which makes it particularly useful for researchers and practitioners who require precise and reliable results.

The choice of solving method depends on various factors, such as the size and complexity of the system, the desired level of accuracy, and the available computational resources. Researchers and practitioners need to evaluate these factors carefully and choose the most appropriate method for their specific problem domain.

# Applications of solving systems of polynomial equations in real algebraic geometry

One application is in curve fitting and interpolation. Given a set of data points, we can find a polynomial that passes through these points by solving a system of equations. This allows us to approximate the underlying function and make predictions for new data points. Curve fitting and interpolation are widely used in fields such as statistics, engineering, and computer graphics.

Another application is in geometric modeling and computer-aided design (CAD). Systems of polynomial equations can be used to represent and manipulate geometric objects, such as curves, surfaces, and solids. By solving these equations, we can determine the properties and relationships of these objects, enabling us to design and analyze complex structures. CAD software often relies on solving systems of polynomial equations to perform operations such as intersection, projection, and deformation.

Real algebraic geometry also has applications in optimization and control. Systems of polynomial equations can be used to model and solve optimization problems, where the goal is to find the values of variables that minimize or maximize a given objective function.
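As a concrete instance of the curve-fitting application above, fitting a quadratic $p(x) = c_0 + c_1 x + c_2 x^2$ through three data points reduces to a linear (Vandermonde) system, which the sketch below solves with a small Gaussian-elimination routine. The data points are made-up illustrative values:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (naive, for small systems)."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]       # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [m - factor * p for m, p in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                       # back-substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

points = [(0, 1), (1, 2), (2, 5)]               # (x, y) data to interpolate
A = [[1, x, x * x] for x, _ in points]          # Vandermonde matrix rows
b = [y for _, y in points]
c0, c1, c2 = solve_linear(A, b)                 # p(x) = c0 + c1*x + c2*x^2
print(c0, c1, c2)   # 1.0 0.0 1.0  ->  p(x) = 1 + x^2
```

The same pattern extends to higher-degree fits, although Vandermonde systems become increasingly ill-conditioned as the degree grows.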
Systems of polynomial equations can also be used to analyze and control dynamical systems, such as robotic arms and chemical reaction networks. By solving the equations, we can determine the optimal values of the variables and design control strategies that achieve desired behaviors.

These are just a few of the many applications of solving systems of polynomial equations in real algebraic geometry. The versatility and power of these equations make them valuable tools for solving practical problems and gaining insight into the geometry of solution sets.

In the final section, we will explore future research and developments in the field of solving systems of polynomial equations, discussing emerging trends, challenges, and potential directions for further exploration. Let's conclude our journey into the world of effective methods for solving systems of polynomial equations!

# Future research and developments in this field

The field of solving systems of polynomial equations is constantly evolving, with new research and developments pushing the boundaries of what is possible. In this final section, we explore some of the future research directions and emerging trends in this field.

One area of research is the development of more efficient and accurate algorithms for solving systems of polynomial equations. Researchers are constantly exploring new numerical methods, homotopy continuation methods, and symbolic computation techniques to improve the solving process. The goal is to reduce the computational complexity and memory requirements while maintaining high accuracy.

Another area of research is the integration of machine learning and artificial intelligence techniques into the solving process. Researchers are exploring how machine learning algorithms can be used to predict the best solving methods and settings for different types of polynomial systems.
By leveraging the power of machine learning, researchers aim to automate and optimize the solving process, making it more accessible to a wider range of users.

The development of specialized hardware and parallel computing techniques is also an area of interest. Researchers are exploring how to leverage the power of GPUs, FPGAs, and other specialized hardware to accelerate the solving process. Parallel computing techniques, such as distributed computing and GPU computing, can significantly speed up the solving process for large and complex systems.

The field of solving systems of polynomial equations is also closely connected to other areas of mathematics and computer science, such as algebraic geometry, optimization, and control theory. Researchers are exploring how to integrate the techniques and methodologies from these fields to tackle more complex and interdisciplinary problems. This interdisciplinary approach can lead to new insights and advancements in the field.

In addition to these research directions, there are also challenges and open questions that need to be addressed. These include the development of efficient algorithms for solving sparse and structured systems, the exploration of new mathematical frameworks for understanding the geometry of solution sets, and the investigation of the connections between polynomial equations and other areas of mathematics, such as number theory and graph theory.

As the field of solving systems of polynomial equations continues to advance, it holds great promise for solving complex problems in various domains, including robotics, computer-aided design, optimization, and control. By pushing the boundaries of what is possible and exploring new frontiers, researchers and practitioners can unlock the full potential of polynomial equations in real algebraic geometry.

Congratulations on completing this textbook on effective methods for solving systems of polynomial equations in real algebraic geometry!
We hope this journey has expanded your knowledge and understanding of this fascinating field.
\begin{document} \title{Quantum Mechanics as a Classical Theory IV:\ The Negative Mass Conjecture} \begin{abstract} The following two papers form a natural development of a previous series of three articles on the foundations of quantum mechanics; they are intended to take the theory developed there to its utmost logical and epistemological consequences. We show in the first paper that relativistic quantum mechanics might accommodate the notion of negative masses without ambiguity. To achieve this, we rewrite all of its formalism for integer and half-integer spin particles and present the world revealed by this conjecture. We also base the theory on the second order Klein-Gordon and Dirac equations and show that they can be stated with only positive definite energies. In the second paper we show that the general relativistic quantum mechanics derived in paper II of this series supports this conjecture. \end{abstract} \section{General Introduction} What is the job of a theoretical physicist? The first answer that comes to us, in a somewhat precipitate manner, is: the theoretical physicist's job is to say how the world is. Despite the obvious philosophical fragility of such an assertion, hardly compatible with science's method of systematic doubt, this answer gives us the key to a more adequate approach. The theoretical physicist's mission is not to say how the world is but, rather, to explain how the world might be. Only experiments have the final word about which one, among all the numerous possible worlds furnished by a theory, is more adequate. The history of science of the last four centuries has shown that we should not underestimate any of the models we uncover with our interpretations of the underlying formal apparatus. This is the spirit underlying the first paper of this series. In this paper, we will show that relativistic quantum mechanics admits an interpretation very different from the one usually accepted.
Since we are not interested in obtaining a new relativistic quantum mechanical formalism, its formal apparatus will be kept intact. Our interest is to uncover another world, equally allowed by this apparatus, and to show that an arbitrary choice has hidden this world. The methodological criterion that one should apply in judging the merits of this work cannot be related to its applicability, since we keep the pure formal apparatus intact and expect the same formal outcomes. This criterion has to do with the world picture that emerges from each of the two theoretical interpretations. It is at this point that they become different theories and expect answers from Nature so distinct as to be mutually exclusive. The corroboration of one choice or the other is left, however, to the experiments... In the second section of this paper, we will introduce, in a rather intuitive way, the main idea of this first paper. We will claim that the relativistic quantum mechanical formalism can accommodate a world with negative masses. We will use the Klein-Gordon theory in elaborating such an argument. In the third section, we will develop the considerations made in the previous one into a more mathematical format. In the fourth section, we will extend the previous formalism to apply it to particles with half-integral spin. We will then use the second order Dirac equation. After the fourth section, the first part of this series of two papers will be complete. We will have demonstrated that all of relativistic quantum mechanics can be rewritten to accommodate negative masses. We then state our conclusions. We devote the appendix to showing that relativistic quantum mechanics based on second order equations can be rewritten to admit only positive probability densities. We then show that the solution of this problem bears some resemblance to the formalism developed in the main text. The next paper is a continuation of the first.
We make an application of the general relativistic quantum mechanical formalism already derived\cite{1,2,3} to a simple, but highly instructive, example. In the second section of that paper, we apply the formalism to the simple problem of a test mass gravitating around a heavy body (we call this problem the quantum Schwarzschild problem). From the results so obtained, we show that this general relativistic quantum theory supports the negative mass conjecture of the first paper. In the third section we make our final conclusions. \section{Introduction} When the Klein-Gordon theory (hereafter KG) was proposed, the possibility of negative probability densities was one of its main deficiencies. The solution adopted was to multiply this density by the modulus of the electric charge and to consider it as a charge, rather than a probability, density. This attitude, however, seems to be based on an arbitrary choice that has hidden other possibilities. The usual interpretation of the relativistic quantum mechanical formalism assumes, as a matter of principle, that there can be no negative masses in Nature\cite{4,5}. We now turn to show that we can eliminate this constraint from the interpretation without incurring inconsistencies. In a previous series of papers, we have shown that the KG probability density, defined as \begin{equation} \label{1}j_0\left( x\right) =\frac{i\hbar }{2mc}\left( \phi ^{*}\left( x\right) \partial _0\phi \left( x\right) -\phi \left( x\right) \partial _0\phi ^{*}\left( x\right) \right) , \end{equation} where $m$ is the particle's mass and $\phi $ is the associated probability amplitude, need not be multiplied by any charge to represent a true probability density if we accept that antiparticles should have negative masses.
We could then write the probability density as \begin{equation} \label{2}\rho _\lambda \left( x\right) =\frac{i\hbar }{2\lambda mc}\left[ \phi ^{*}\left( x\right) \partial _0\phi \left( x\right) -\phi \left( x\right) \partial _0\phi ^{*}\left( x\right) \right] _{+\ }, \end{equation} where $[\ ]_{+}$ indicates that we take only the positive sign of the quantity inside the brackets, and the parameter $\lambda $ defines whether the density refers to particles or antiparticles: \begin{equation} \label{3}\lambda =sign\left( \phi ^{*}\left( x\right) \partial _0\phi \left( x\right) -\phi \left( x\right) \partial _0\phi ^{*}\left( x\right) \right) =\left\{ \begin{array}{c} +1\ \ \ \ \ \quad \mbox{for\ particles} \\ -1\quad \mbox{for\ antiparticles} \end{array} \right. . \end{equation} We might thus interpret a negative probability density as a positive one describing negative mass particles (antiparticles). In such a case, the mass distribution can be written as: \begin{equation} \label{4}\rho _\lambda ^{mass}\left( x\right) =m\rho _\lambda \left( x\right) =\frac{i\hbar }{2\lambda c}\left[ \phi ^{*}\left( x\right) \partial _0\phi \left( x\right) -\phi \left( x\right) \partial _0\phi ^{*}\left( x\right) \right] _{+} \end{equation} and will be positive for particles and negative for antiparticles. From the very definition of the parameter $\lambda $ it is easy to see that the complex conjugation, defined by $\phi \rightarrow \phi ^{*}$, implies $\lambda \rightarrow -\lambda $ and thus the mapping of particles into antiparticles. We can handle electromagnetic fields in a way similar to what is usually done in the literature.
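The sign flip under conjugation can be made explicit (this one-line check is ours, not part of the original argument): writing $Q\left( \phi \right) =\phi ^{*}\partial _0\phi -\phi \partial _0\phi ^{*}$ for the argument of the sign function in (\ref{3}), the substitution $\phi \rightarrow \phi ^{*}$ gives
$$
Q\left( \phi ^{*}\right) =\phi \partial _0\phi ^{*}-\phi ^{*}\partial _0\phi =-Q\left( \phi \right) ,
$$
so that indeed $\lambda \rightarrow -\lambda $ under complex conjugation.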
We then have $$ \rho _\lambda \left( x\right) =\frac{i\hbar }{2\lambda m_0c}\left[ \phi ^{*}\left( x\right) \partial _0\phi \left( x\right) -\phi \left( x\right) \partial _0\phi ^{*}\left( x\right) \right] _{+}-\frac{2e}{m_0c^2}\Phi \left( x\right) \phi ^{*}\left( x\right) \phi \left( x\right) = $$ \begin{equation} \label{5}=\frac{i\hbar }{2\lambda mc}\left[ \phi ^{*}\left( x\right) \partial _0\phi \left( x\right) -\phi \left( x\right) \partial _0\phi ^{*}\left( x\right) \right] _{+}-\frac{2\lambda e}{\lambda mc^2}\Phi \left( x\right) \phi ^{*}\left( x\right) \phi \left( x\right) , \end{equation} where $\Phi $ is the scalar electromagnetic potential. Collecting terms we get $$ \rho _\lambda \left( x\right) = $$ \begin{equation} \label{6}=\frac 1{2\lambda mc^2}\left\{ \left[ \phi ^{*}\left( x\right) i\hbar \frac{\partial \phi \left( x\right) }{\partial t}-\phi \left( x\right) i\hbar \frac{\partial \phi ^{*}\left( x\right) }{\partial t}\right] _{+}-2\lambda e\Phi \left( x\right) \phi ^{*}\left( x\right) \phi \left( x\right) \right\} , \end{equation} which shows that the complex conjugation of the amplitudes also implies the change of the sign of the electric charge (we can also see this by looking directly at the KG equation). We conclude that, in the present theory, the complex conjugation operation has the effect of changing the mass and charge signs. This implies that particles have these properties with signs opposite to those of the associated antiparticles. We know from the experiments that particle-antiparticle pairs, when subjected to homogeneous magnetic fields, move along opposite circular trajectories; this is why the usual interpretation considers the charges of particles and antiparticles to have opposite signs. In the present interpretation, both mass and charge change sign; thus, the ratio $ e/m$ does not change its sign.
It is important to stress, however, that the charge and the mass appear in the expression for the trajectory of the pair together with the velocity of its components. We now turn to see what happens with these velocities in our formalism. Let us consider then the free particle-antiparticle solutions: \begin{equation} \label{7}\phi _\lambda \left( x\right) =\exp \left[ -\lambda i\left( E_pt- {\bf p\cdot r}\right) /\hbar \right] , \end{equation} where the evolution parameter $E_{p\mbox{ }}$and the momentum ${\bf p}$ are given by \begin{equation} \label{8}E_p=mc^2/\sqrt{1-v^2/c^2}\quad ;\quad {\bf p}=m{\bf v}/\sqrt{ 1-v^2/c^2}. \end{equation} The probability density and flux, in the absence of electromagnetic fields, are given by \begin{equation} \label{9}\rho _\lambda \left( x\right) =\frac{E_p}{\lambda mc^2}\quad ;\quad {\bf j}_\lambda \left( x\right) =\frac{{\bf p}}{\lambda mc}. \end{equation} If we put \begin{equation} \label{10}\frac{{\bf p}}m={\bf v}\quad \Rightarrow \left\{ \begin{array}{c} {\bf p}_a=\left( -m\right) {\bf v}_a \\ {\bf p}_p=\left( +m\right) {\bf v}_p \end{array} \right. \quad , \end{equation} where ${\bf v}_p$ and ${\bf v}_a$ are the velocities of the particle and antiparticle, respectively, and ${\bf p}_p$, ${\bf p}_a$ their momenta, we then get \begin{equation} \label{11}{\bf j}_\lambda \left( x\right) =\lambda \frac{{\bf v}}c, \end{equation} which can be interpreted as meaning that the flux of particles in one direction is equivalent to the flux of antiparticles in the opposite direction. In this manner, we expect that, when gravitational forces are present, particles and antiparticles behave in the way shown in figure 1. These forces obviously do not pertain to the framework of the present theory; a theory that takes gravitation into account will be dealt with in the next paper. However it is noteworthy that particles and antiparticles will not respond perversely to homogeneous magnetic fields as one could in principle think\cite{6}. 
Indeed, when electromagnetic fields are present and taking $+e$ and $-e$ as the particle's and the antiparticle's charges, respectively, we have: \begin{equation} \label{12}\rho _\lambda \left( x\right) =\frac 1{\lambda mc^2}\left( E_p-\lambda e\Phi \right) \quad ;\quad {\bf j}_\lambda \left( x\right) =\frac 1{\lambda mc}\left( {\bf p-}\lambda \frac ec{\bf A}\right) , \end{equation} which gives, for the flux, \begin{equation} \label{13}{\bf j}_\lambda \left( x\right) =\frac 1c\left( \lambda {\bf v-} \frac e{mc}{\bf A}\right) . \end{equation} This shows that particles and antiparticles have velocity vectors with opposite signs relative to the vector potential. This property is sufficient to explain their behavior under the influence of a homogeneous magnetic field (figure 2). We now proceed, in the next two sections, to rewrite the mathematical apparatus so as to state our conjecture formally. \section{Klein-Gordon's Theory with Negative Mass} If we start from the hypothesis that Nature can reveal entities with masses of both signs, we then expect to find in it all the combinations shown in table I. We might use the Feshbach-Villars decomposition to relate all the possibilities furnished by Nature with the KG equation.
By means of this decomposition, the KG equation \begin{equation} \label{14}\frac 1{c^2}\left( i\hbar \frac \partial {\partial t}-e\Phi \right) ^2\varphi =\frac 1{2m}\left( \frac \hbar i\nabla -\frac ec{\bf A} \right) ^2\varphi +m^2c^2\varphi , \end{equation} when we use \begin{equation} \label{15}\varphi _0\left( {\bf r},t\right) =\left[ \frac \partial {\partial t}+\frac{ie}\hbar \Phi \left( {\bf r},t\right) \right] \varphi \left( {\bf r} ,t\right) , \end{equation} and \begin{equation} \label{16}\varphi _1=\frac 12\left[ \varphi _0+\frac{i\hbar }{m_0c^2}\varphi \right] \quad ;\quad \varphi _2=\frac 12\left[ \varphi _0-\frac{i\hbar }{ m_0c^2}\varphi \right] , \end{equation} becomes the following system of equations \begin{equation} \label{17}\left[ i\hbar \frac \partial {\partial t}-e\Phi \right] \varphi _1=\frac 1{2m}\left[ \frac \hbar i\nabla -\frac ec{\bf A}\right] ^2\left( \varphi _1+\varphi _2\right) +mc^2\varphi _1; \end{equation} \begin{equation} \label{18}\left[ i\hbar \frac \partial {\partial t}-e\Phi \right] \varphi _2= \frac{-1}{2m}\left[ \frac \hbar i\nabla -\frac ec{\bf A}\right] ^2\left( \varphi _1+\varphi _2\right) -mc^2\varphi _2; \end{equation} together with their complex conjugate \begin{equation} \label{19}\left[ i\hbar \frac \partial {\partial t}+e\Phi \right] \varphi _1^{*}=\frac{-1}{2m}\left[ \frac \hbar i\nabla +\frac ec{\bf A}\right] ^2\left( \varphi _1^{*}+\varphi _2^{*}\right) -mc^2\varphi _1^{*}; \end{equation} \begin{equation} \label{20}\left[ i\hbar \frac \partial {\partial t}+e\Phi \right] \varphi _2^{*}=\frac 1{2m}\left[ \frac \hbar i\nabla +\frac ec{\bf A}\right] ^2\left( \varphi _1^{*}+\varphi _2^{*}\right) +mc^2\varphi _2^{*}. 
\end{equation} With the notation above we note that it is possible to make a connection between the amplitudes and the signs of mass and charge of the particles they represent \begin{equation} \label{21}\varphi _1\Longleftrightarrow \left( +,+\right) \ ;\ \varphi _2\Longleftrightarrow \left( -,+\right) , \end{equation} \begin{equation} \label{22}\varphi _1^{*}\Longleftrightarrow \left( -,-\right) \ ;\ \varphi _2^{*}\Longleftrightarrow \left( +,-\right) , \end{equation} where $(A,B)$ represents an entity with mass and charge signs $A$ and $B$, respectively. We made the {\it choice}, in the last section, to represent antiparticles with the signs of the mass and charge reversed relative to the particle. In agreement with this choice, we shall attribute to pairs of such entities an amplitude and its complex conjugate, as becomes clear from equations (\ref{17}-\ref{20}). We can now define the two-component spinors \begin{equation} \label{23}\Psi =\left( \begin{array}{c} \varphi _1 \\ \varphi _2 \end{array} \right) \quad ;\quad \Psi ^{*}=\left( \begin{array}{c} \varphi _1^{*} \\ \varphi _2^{*} \end{array} \right) , \end{equation} together with the Pauli matrices \begin{equation} \label{24}\sigma _1=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) \ ;\ \sigma _2=\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) \ ;\ \sigma _3=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) \end{equation} and rewrite the system (17-20) as \begin{equation} \label{25}\left( i\hbar \frac \partial {\partial t}-e\Phi \right) \Psi =\left[ \frac 1{2m}\left( \frac \hbar i\nabla -\frac ec{\bf A}\right) ^2\left( \sigma _3+i\sigma _2\right) +mc^2\sigma _3\right] \Psi , \end{equation} or else \begin{equation} \label{26}\Psi _{c1}\left( i\hbar \frac \partial {\partial t}+e\Phi \right) =\Psi _{c1}\left[ \frac{-1}{2m}\left( \frac \hbar i\nabla +\frac ec{\bf A} \right) ^2\left( \sigma _3+i\sigma _2\right) -mc^2\sigma _3\right] , \end{equation} where \begin{equation}
\label{27}\Psi _{c1}=\Psi ^{\dagger }\sigma _3. \end{equation} Using the basis \begin{equation} \label{28}{\bf e}_1=\left( \begin{array}{c} 1 \\ 0 \end{array} \right) \ ;\ {\bf e}_2=\left( \begin{array}{c} 0 \\ 1 \end{array} \right) , \end{equation} we can adopt the convention \begin{equation} \label{29}u_0^{\left( P,+\right) }, \end{equation} where the index zero indicates that we are in the rest frame of reference and the dyad $(P,+)$ indicates that the related spinor describes a particle with positive charge. It is then possible to write the four possible functions (Table 1) as \begin{equation} \label{30}u_0^{\left( P,+\right) }={\bf e}_1e^{-iE_p\tau /\hbar }\ ;\ u_0^{\left( A,-\right) }={\bf e}_1e^{+iE_p\tau /\hbar }; \end{equation} \begin{equation} \label{31}u_0^{\left( A,+\right) }={\bf e}_2e^{+iE_p\tau /\hbar }\ ;\ u_0^{\left( P,-\right) }={\bf e}_2e^{-iE_p\tau /\hbar }; \end{equation} where \begin{equation} \label{32}E_p=mc^2. \end{equation} We might still define a charge conjugation by the operation \begin{equation} \label{33}\Psi _c=\sigma _1\Psi ^{*}, \end{equation} that satisfies a KG equation with the same mass sign but with the charge sign reversed. This spinor, however, cannot now be a candidate to represent the antiparticles related to $\Psi $ since only the charge sign is reversed. We note, however, that the difference between complex conjugation and charge conjugation is relevant only in the realm of a theory that distinguishes mass signs. We can see this by covering the mass column in table I or II and noting that, in this case, those amplitudes are degenerate. The probability density can be immediately obtained and is given by \begin{equation} \label{34}\rho =\Psi ^{\dagger }\sigma _3\Psi =\Psi _{c1}\Psi , \end{equation} where now, when permuting the amplitudes, we keep the sign. This should happen because each amplitude has a component related to a particle and another to an antiparticle (with the same sign of the charge).
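As a consistency check on the matrix form (\ref{25}) (this verification is ours, not part of the original text), note that
$$
\sigma _3+i\sigma _2=\left( 
\begin{array}{cc}
1 & 1 \\ 
-1 & -1
\end{array}
\right) ,
$$
so that the first row of (\ref{25}) reproduces equation (\ref{17}) and the second row reproduces equation (\ref{18}).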
The current or flux density can be easily obtained and is given by \begin{equation} \label{35}{\bf j}=\frac 1{2m}\left[ \Psi _{c1}\Lambda \nabla \Psi -\left( \nabla \Psi _{c1}\right) \Lambda \Psi \right] -\frac{e\hbar }{mc}{\bf A}\Psi _{c1}\Lambda \Psi , \end{equation} where \begin{equation} \label{36}\Lambda =\left( \sigma _3+i\sigma _2\right) . \end{equation} Before we go on with the study of particles with spin, it is interesting to consider particles with null charge. The usual interpretation denies that these particles can be described by the KG formalism (at least if there is no interaction capable of distinguishing them). This is the case, for example, of the pion zero. Since it has null charge, the associated charge density must be identically zero. These particles are then said to be their own antiparticles. We cannot say this in the present theory. Here, the pion zero might manifest itself with two masses of different signs that can be distinguished by a gravitational field. We are then faced with a pion zero and an antipion zero. In the next section, we continue by developing an analogous theory for half-integer spin particles. \section{Dirac's Theory with Negative Mass} We wish to develop a formalism for Dirac's equation similar to the one developed for Klein-Gordon's. As was already mentioned in the first papers of this series\cite{1,2,3}, we shall consider the second order Dirac's equation as the fundamental one rather than the first order equation. We then start from Dirac's second order equation $$ \frac 1{c^2}\left( i\hbar \frac \partial {\partial t}-e\Phi \right) ^2\left( \begin{array}{c} \varphi \\ \chi \end{array} \right) =\left[ \left( \frac \hbar i\nabla -\frac ec{\bf A}\right) ^2{\bf 1} +m^2c^2{\bf 1}+\right. $$ \begin{equation} \label{37}\left.
+\frac{e\hbar }c\left( \begin{array}{cc} \sigma \cdot {\bf H} & {\bf 0} \\ {\bf 0} & \sigma \cdot {\bf H} \end{array} \right) -i\frac{e\hbar }c\left( \begin{array}{cc} {\bf 0} & \sigma \cdot {\bf E} \\ \sigma \cdot {\bf E} & {\bf 0} \end{array} \right) \right] \left( \begin{array}{c} \varphi \\ \chi \end{array} \right) \end{equation} where $\varphi $, $\chi $ are two-component spinors, while ${\bf H}$ and ${\bf E}$ are the magnetic and electric fields, respectively. The expression for the probability density can be easily obtained and is given by \begin{equation} \label{38}\rho _\lambda =\frac 1{2\lambda mc^2}\left\{ \left[ \psi ^{\dagger }\beta i\hbar \frac{\partial \psi }{\partial t}-\left( i\hbar \frac{\partial \psi ^{\dagger }}{\partial t}\right) \beta \psi \right] _{+}-2\lambda e\Phi \psi ^{\dagger }\beta \psi \right\} , \end{equation} where $\psi $ is the four-component spinor \begin{equation} \label{38a}\psi =\left( \begin{array}{c} \varphi \\ \chi \end{array} \right) \end{equation} and $\beta $ is the usual spin parity operator in Dirac's representation. We are now in a position to rewrite Dirac's formalism in the format given in the previous section. We will thus use an analog of the Feshbach-Villars decomposition applied to Dirac's second order equation. Such a decomposition is attained if we define \begin{equation} \label{39}\varphi _0=\left[ \frac \partial {\partial t}+\frac{ie}\hbar \Phi \right] \varphi \ ;\ \chi _0=\left[ \frac \partial {\partial t}+\frac{ie} \hbar \Phi \right] \chi \end{equation} and \begin{equation} \label{40}\left\{ \begin{array}{c} \varphi _1=\frac 12\left( \varphi _0+ \frac{i\hbar }{mc}\varphi \right) \\ \varphi _2=\frac 12\left( \varphi _0- \frac{i\hbar }{mc}\varphi \right) \end{array} \right. \ ;\ \left\{ \begin{array}{c} \chi _1=\frac 12\left( \chi _0+ \frac{i\hbar }{mc}\chi \right) \\ \chi _2=\frac 12\left( \chi _0-\frac{ i\hbar }{mc}\chi \right) \end{array} \right.
, \end{equation} where $\varphi _1$, $\varphi _2$, $\chi _1$, $\chi _2$ are two-component spinors. We are then led to the following equations: $$ \left( i\hbar \frac \partial {\partial t}-e\Phi \right) \varphi _1=\left[ \frac 1{2m}\left( \frac \hbar i\nabla -\frac ec{\bf A}\right) ^2{\bf 1}- \frac{e\hbar }{2mc}\sigma \cdot {\bf H}\right] \left( \varphi _1+\varphi _2\right) + $$ \begin{equation} \label{41}+mc^2\varphi _1+\frac{ie\hbar }{2mc}\sigma \cdot {\bf E}\left( \chi _1+\chi _2\right) ; \end{equation} $$ \left( i\hbar \frac \partial {\partial t}-e\Phi \right) \varphi _2=\left[ \frac{-1}{2m}\left( \frac \hbar i\nabla -\frac ec{\bf A}\right) ^2{\bf 1+} \frac{e\hbar }{2mc}\sigma \cdot {\bf H}\right] \left( \varphi _1+\varphi _2\right) - $$ \begin{equation} \label{42}-mc^2\varphi _2-\frac{ie\hbar }{2mc}\sigma \cdot {\bf E}\left( \chi _1+\chi _2\right) ; \end{equation} $$ \left( i\hbar \frac \partial {\partial t}-e\Phi \right) \chi _1=\left[ \frac 1{2m}\left( \frac \hbar i\nabla -\frac ec{\bf A}\right) ^2{\bf 1}-\frac{ e\hbar }{2mc}\sigma \cdot {\bf H}\right] \left( \chi _1+\chi _2\right) + $$ \begin{equation} \label{43}+mc^2\chi _1+\frac{ie\hbar }{2mc}\sigma \cdot {\bf E}\left( \varphi _1+\varphi _2\right) ; \end{equation} $$ \left( i\hbar \frac \partial {\partial t}-e\Phi \right) \chi _2=\left[ \frac{ -1}{2m}\left( \frac \hbar i\nabla -\frac ec{\bf A}\right) ^2{\bf 1+}\frac{ e\hbar }{2mc}\sigma \cdot {\bf H}\right] \left( \chi _1+\chi _2\right) - $$ \begin{equation} \label{44}-mc^2\chi _2-\frac{ie\hbar }{2mc}\sigma \cdot {\bf E}\left( \varphi _1+\varphi _2\right) . \end{equation} These equations, together with their complex conjugates, cover all the possibilities we expect from Nature when allowing for the existence of negative masses (Table 2).
Defining the eight-component spinor \begin{equation} \label{45}\Psi =\left[ \begin{array}{c} \varphi _1 \\ \varphi _2 \\ \chi _1 \\ \chi _2 \end{array} \right] \ ;\ \varphi _i=\left[ \begin{array}{c} \varphi _{i1} \\ \varphi _{i2} \end{array} \right] \ ;\ \chi _i=\left[ \begin{array}{c} \chi _{i1} \\ \chi _{i2} \end{array} \right] \ , \end{equation} the matrices \begin{equation} \label{46}\Sigma _1=\left[ \begin{array}{cccc} {\bf 0} & {\bf +1} & {\bf 0} & {\bf 0} \\ {\bf +1} & {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 0} & {\bf +1} \\ {\bf 0} & {\bf 0} & {\bf +1} & {\bf 0} \end{array} \right] \ ;\ \Sigma _2=\left[ \begin{array}{cccc} {\bf 0} & -{\bf i} & {\bf 0} & {\bf 0} \\ {\bf i} & {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 0} & - {\bf i} \\ {\bf 0} & {\bf 0} & {\bf i} & {\bf 0} \end{array} \right] \end{equation} \begin{equation} \label{47}\Sigma _3=\left[ \begin{array}{cccc} +{\bf 1} & {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf -1} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf 0} & +{\bf 1} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 0} & {\bf -1} \end{array} \right] \ ;\ \alpha _1=\left[ \begin{array}{cccc} {\bf 0} & {\bf 0} & {\bf 0} & {\bf 1} \\ {\bf 0} & {\bf 0} & {\bf 1} & {\bf 0 } \\ {\bf 0} & {\bf 1} & {\bf 0} & {\bf 0} \\ {\bf 1} & {\bf 0} & {\bf 0} & {\bf 0} \end{array} \right] ; \end{equation} \begin{equation} \label{48}\alpha _2=\left[ \begin{array}{cccc} {\bf 0} & {\bf 0} & {\bf 0} & {\bf -i} \\ {\bf 0} & {\bf 0} & {\bf +i} & {\bf 0} \\ {\bf 0} & {\bf -i} & {\bf 0} & {\bf 0} \\ {\bf +i} & {\bf 0} & {\bf 0} & {\bf 0} \end{array} \right] \ ;\ \alpha _3=\left[ \begin{array}{cccc} {\bf 0} & {\bf 0} & {\bf +1} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 0} & {\bf -1} \\ {\bf +1} & {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf -1} & {\bf 0} & {\bf 0} \end{array} \right] \end{equation} and \begin{equation} \label{49}\beta =\left[ \begin{array}{cccc} {\bf +1} & {\bf 0} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf +1} & {\bf 0} & {\bf 0} \\ {\bf 0} & {\bf 
0} & {\bf -1} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 0} & {\bf -1} \end{array} \right] , \end{equation} where each element is a $2\times 2$ matrix, we can write the above system of equations as: $$ \left( i\hbar \frac \partial {\partial t}-e\Phi \right) \Psi =\left[ \frac 1{2m}\left( \frac \hbar i\nabla -\frac ec{\bf A}\right) ^2{\bf 1}-\frac{ e\hbar }{2mc}\sigma \cdot {\bf H}\right] \left( \Sigma _3+i\Sigma _2\right) \Psi + $$ \begin{equation} \label{50}+mc^2\Sigma _3\Psi +\frac{ie\hbar }{2mc}\sigma \cdot {\bf E}\left( \alpha _3+i\alpha _2\right) \Psi . \end{equation} It is then easy to show that \begin{equation} \label{51}\Psi _{c1}=i\beta \sigma _2\Psi ^{*} \end{equation} is a solution of $$ \left( i\hbar \frac \partial {\partial t}+e\Phi \right) \Psi _{c1}=\left[ \frac{-1}{2m}\left( \frac \hbar i\nabla +\frac ec{\bf A}\right) ^2{\bf 1}- \frac{e\hbar }{2mc}\sigma \cdot {\bf H}\right] \left( \Sigma _3+i\Sigma _2\right) \Psi _{c1}- $$ \begin{equation} \label{52}-mc^2\Sigma _3\Psi _{c1}+\frac{ie\hbar }{2mc}\sigma \cdot {\bf E} \left( \alpha _3+i\alpha _2\right) \Psi _{c1} \end{equation} which is the same equation solved by $\Psi $ with the signs of the mass and the charge inverted, but with the same parity. We can also show that \begin{equation} \label{53}\Psi _{c2}=\Psi ^{\dagger }\Sigma _3i\alpha _3\beta \end{equation} is a solution of $$ \Psi _{c2}\left( i\hbar \frac \partial {\partial t}+e\Phi \right) =\Psi _{c2}\left( \Sigma _3+i\Sigma _2\right) \left[ \frac{-1}{2m}\left( \frac \hbar i\nabla +\frac ec{\bf A}\right) ^2{\bf 1}-\frac{e\hbar }{2mc}\sigma \cdot {\bf H}\right] - $$ \begin{equation} \label{54}-mc^2\Psi _{c2}\Sigma _3-\frac{ie\hbar }{2mc}\sigma \cdot {\bf E} \Psi _{c2}\left( \alpha _3+i\alpha _2\right) \end{equation} which is similar to the one solved by $\Psi $ with the signs of the mass, the charge and the parity inverted, while keeping the signs of the spins. 
Both the above amplitudes are candidates to represent antiparticles of $\Psi $ since we have used, until now, only the criterion of the mass and charge signs. We might write them explicitly as \begin{equation} \label{55}\Psi =\left[ \begin{array}{c} \varphi _{11}(+,+) \\ \varphi _{12}(+,+) \\ \varphi _{21}(-,+) \\ \varphi _{22}(-,+) \\ \chi _{11}(+,+) \\ \chi _{12}(+,+) \\ \chi _{21}(-,+) \\ \chi _{22}(-,+) \end{array} \right] \ \Rightarrow \ \Psi _{c1}=\left[ \begin{array}{c} +\varphi _{12}^{*}(-,-) \\ -\varphi _{11}^{*}(-,-) \\ +\varphi _{22}^{*}(+,-) \\ -\varphi _{21}^{*}(+,-) \\ -\chi _{12}^{*}(-,-) \\ +\chi _{11}^{*}(-,-) \\ -\chi _{22}^{*}(+,-) \\ +\chi _{21}^{*}(+,-) \end{array} \right] \ ;\ \Psi _{c2}=\left[ \begin{array}{c} +\chi _{11}^{*}(-,-) \\ +\chi _{12}^{*}(-,-) \\ +\chi _{21}^{*}(+,-) \\ +\chi _{22}^{*}(+,-) \\ -\varphi _{11}^{*}(-,-) \\ -\varphi _{12}^{*}(-,-) \\ -\varphi _{21}^{*}(+,-) \\ -\varphi _{22}^{*}(+,-) \end{array} \right] , \end{equation} where we also show, inside parenthesis, the signs of the mass and the charge related to each component of the spinors (we took the transpose of the line-spinor). This arrangement shows more clearly what relation particles exhibit with antiparticles by means of the above mentioned functions. We now define the element \begin{equation} \label{56}u_{0\uparrow (+)}^{(P,+)}, \end{equation} as the eight-component spinor where: the index zero denotes that we are in the rest frame, the up arrow indicates the spin up (upon action of operator $ \Sigma _3$), the pair $(P,+)$ implies that we have a particle with positive charge and the lower index $(+)$ denotes that the spin parity is positive (upon action of operator $\beta $). 
With the usual eight canonical basis vectors ${\bf e}_i$, $i=1,\ldots ,8$, which are extensions of the two-dimensional KG case, we can write the eight distinct possibilities for $\Psi $ as $$ u_{0\uparrow (+)}^{(P,+)}={\bf e}_1e^{-iE_p\tau /\hbar }\quad ;\quad u_{0\downarrow (+)}^{(P,+)}={\bf e}_2e^{-iE_p\tau /\hbar }; $$ \begin{equation} \label{59}u_{0\downarrow (+)}^{(A,+)}={\bf e}_3e^{+iE_p\tau /\hbar }\quad ;\quad u_{0\uparrow (+)}^{(A,+)}={\bf e}_4e^{+iE_p\tau /\hbar }; \end{equation} $$ v_{0\uparrow (-)}^{(P,+)}={\bf e}_5e^{-iE_p\tau /\hbar }\quad ;\quad v_{0\downarrow (-)}^{(P,+)}={\bf e}_6e^{-iE_p\tau /\hbar }; $$ \begin{equation} \label{60}v_{0\downarrow (-)}^{(A,+)}={\bf e}_7e^{+iE_p\tau /\hbar }\quad ;\quad v_{0\uparrow (-)}^{(A,+)}={\bf e}_8e^{+iE_p\tau /\hbar }, \end{equation} where \begin{equation} \label{61}E_p=m_0c^2. \end{equation} With the correspondence (\ref{55}) between particle and antiparticle spinors, we can write the spinors for $\Psi _{c1}$ $$ \mu _{0\uparrow (+)}^{(A,-)}={\bf e}_1e^{+iE_p\tau /\hbar }\quad ;\quad \mu _{0\downarrow (+)}^{(A,-)}={\bf e}_2e^{+iE_p\tau /\hbar }; $$ \begin{equation} \label{62}\mu _{0\downarrow (+)}^{(P,-)}={\bf e}_3e^{-iE_p\tau /\hbar }\quad ;\quad \mu _{0\uparrow (+)}^{(P,-)}={\bf e}_4e^{-iE_p\tau /\hbar }, \end{equation} $$ \nu _{0\uparrow (-)}^{(A,-)}={\bf e}_5e^{+iE_p\tau /\hbar }\quad ;\quad \nu _{0\downarrow (-)}^{(A,-)}={\bf e}_6e^{+iE_p\tau /\hbar }; $$ \begin{equation} \label{63}\nu _{0\downarrow (-)}^{(P,-)}={\bf e}_7e^{-iE_p\tau /\hbar }\quad ;\quad \nu _{0\uparrow (-)}^{(P,-)}={\bf e}_8e^{-iE_p\tau /\hbar }, \end{equation} while for $\Psi _{c2}$ $$ \omega _{0\uparrow (-)}^{(A,-)}={\bf e}_5e^{+iE_p\tau /\hbar }\quad ;\quad \omega _{0\downarrow (-)}^{(A,-)}={\bf e}_6e^{+iE_p\tau /\hbar }; $$ \begin{equation} \label{64}\omega _{0\downarrow (-)}^{(P,-)}={\bf e}_7e^{-iE_p\tau /\hbar }\quad ;\quad \omega _{0\uparrow (-)}^{(P,-)}={\bf e}_8e^{-iE_p\tau /\hbar }, \end{equation} $$ \eta _{0\uparrow (-)}^{(A,-)}={\bf e}_1e^{+iE_p\tau /\hbar }\quad ;\quad \eta _{0\downarrow (-)}^{(A,-)}={\bf e}_2e^{+iE_p\tau /\hbar }; $$ \begin{equation} \label{65}\eta _{0\downarrow (-)}^{(P,-)}={\bf e}_3e^{-iE_p\tau /\hbar }\quad ;\quad \eta _{0\uparrow (-)}^{(P,-)}={\bf e}_4e^{-iE_p\tau /\hbar }. \end{equation} We also get the following relations between the antiparticle spinors \begin{equation} \label{66}\omega _{0\uparrow }=\nu _{0\downarrow }\ ;\ \omega _{0\downarrow }=\nu _{0\uparrow }\mbox{ and }\eta _{0\uparrow }=\mu _{0\downarrow }\ ;\ \eta _{0\downarrow }=\mu _{0\uparrow }; \end{equation} together with the spin parity relations \begin{equation} \label{67}\left\{ \begin{array}{c} \beta u_0=+u_0 \\ \beta v_0=-v_0 \end{array} \right. \ ;\ \left\{ \begin{array}{c} \beta \mu _0=+\mu _0 \\ \beta \nu _0=-\nu _0 \end{array} \right. \ ;\left\{ \begin{array}{c} \beta \omega _0=-\omega _0 \\ \beta \eta _0=+\eta _0 \end{array} \right. \ . \end{equation} These results can also be compared with those obtained using Dirac's linear equation\cite{7}. We give the annihilation relations in Table 3. We can now obtain the expression for the densities of probability and current in the present formalism. This is a straightforward extension of what was done in the KG formalism. We get \begin{equation} \label{68}\frac \partial {\partial t}\left( \Psi _{c2}\Psi \right) +\nabla \cdot \left\{ \frac 1{2m}\left[ \Psi _{c2}\Lambda \nabla \Psi -\left( \nabla \Psi _{c2}\right) \Lambda \Psi \right] -\frac{e{\bf A}}{mc}\Psi _{c2}\Lambda \Psi \right\} =0, \end{equation} where \begin{equation} \label{69}\Lambda =\left( \Sigma _3+i\Sigma _2\right) . \end{equation} Equation (\ref{68}) then implies that \begin{equation} \label{70}\rho =\Psi _{c2}\Psi \end{equation} and \begin{equation} \label{71}{\bf j}=\frac 1{2m}\left[ \Psi _{c2}\Lambda \nabla \Psi -\left( \nabla \Psi _{c2}\right) \Lambda \Psi \right] -\frac{e{\bf A}}{mc}\Psi _{c2}\Lambda \Psi .
\end{equation} We can write the expression for the density only in terms of $\Psi $ \begin{equation} \label{72}\rho =\Psi ^{\dagger }\Sigma _3i\alpha _3\beta \Psi , \end{equation} which gives, considering (\ref{68}), the conservation equation \begin{equation} \label{73}\frac{\partial \rho }{\partial t}+\nabla \cdot {\bf j}=0. \end{equation} With straightforward calculations we can show that one might write the probability density, using the $\Psi $ components, as \begin{equation} \label{74}\rho ={\rm Im}\left[ \sum_{i,j=1}^2\varphi _i^{\dagger }\chi _j\right] . \end{equation} Finally, we shall comment on the results of Table 3. So far, we have not imposed any constraint on the parity of annihilating particles. This degree of freedom leads to the possibility represented by the first column of Table 3, where particles and antiparticles with the same parity annihilate each other. The problem with this annihilation process is that physicists have not found, to date, any spin-zero (longitudinal) photon; indeed, there are strong arguments against their existence. We could then postulate that particles and antiparticles should have the spin parity (if any) also reversed. However, we refrain from asserting this for the moment, since here we are interested in uncovering worlds, not in hiding them. \section{Conclusions} We have thus succeeded in showing that our conjecture can be accommodated within the formal apparatus of special relativistic quantum mechanics. When Dirac's theory, based on his first-order equation, revealed the antiparticle (as antiparticles were then defined), many physicists were delighted with the symmetries it brought\cite{4,8}. For each massive particle, Nature provides a counterpart distinguished not by its mass but by the sign of its charge. So the massive proton with positive charge shall have its negative-charge counterpart. Each particle has its antiparticle defined by its charge mirror. This work takes this approach to its utmost limits.
Each particle has its antiparticle defined by its mirror world, where both charge and mass signs are reversed. Moreover, no particle is its own antiparticle, in the sense that only entities with opposite mass signs can annihilate each other. In this case, neutral pions are annihilated by neutral antipions (both, of course, might decay spontaneously). The vacuum that emerges is not a filled structure in which every point of real space is occupied by an infinitude of antiparticles. This picture can be avoided while keeping the important property of vacuum polarization. Moreover, contrary to the usual interpretation, the present theory treats particles and antiparticles in a totally symmetrical way\cite{5}. We shall also stress, considering the present conjecture, that the gravitational field is highly capable of polarizing the vacuum. This property will become relevant in the second paper of this series. This theory does not claim a strict inertial mass conservation law. This is because for mass we have Einstein's equation, $E=mc^2$, which distinguishes mass from charge with respect to conservation behavior. If we also admit, following the discussion at the end of the last section, that creation and annihilation processes shall conserve parity, then we place parity, alongside charge, as a fundamental property of Nature. The possible existence of negative masses has far-reaching cosmological consequences that will be addressed in a future paper. The arguments above, about the higher symmetry of Nature introduced by the concept of negative masses, cannot, of course, prove the conjecture. They carry no character of necessity; they are just a metaphysical constraint we wish to impose upon Nature. The final word will be with the experimental physicists. This formidable task is presently being carried out by several experiments\cite{9}.
Clearly, in the realm of special relativistic quantum mechanics, fixing mass signs is an ad hoc postulate, as we stated in the last paragraph. The next paper of this series will show, however, that the general relativistic quantum mechanical theory derived in paper II of this series supports this conjecture. \appendix \section{Negative Densities} When studying the KG formalism, we are faced with a striking fact. While the amplitudes in (\ref{7}) indicate that we should expect both positive and negative energy densities, the energy density obtained from the energy-momentum tensor is always positive. Moreover, since we sustain\cite{1,2,3} that relativistic quantum mechanics can be derived from classical relativity and statistics, where we impose the positive character of the energy, it is highly desirable to clarify this apparent paradox. We will show in this appendix that this paradoxical situation can be easily resolved. We will use the formalism already developed\cite{1,2,3} that enables us to go from Liouville's equation to the equation for the density function. From the analysis of what happens in phase space, it will be easier to understand this property of the KG equation. In fact, it will be shown that this ``pathology'' is also present in the non-relativistic Schr\"odinger equation. We will thus present both the non-relativistic and relativistic calculations to make our discussion clearer. We have shown\cite{1,2,3} that all non-relativistic and relativistic quantum mechanics can be obtained from the classical Liouville equation \begin{equation} \label{75}\frac{dF_n\left( {\bf x},{\bf p};t\right) }{dt}=0\ ;\ \frac{dF_r\left( x,p\right) }{d\tau }=0, \end{equation} where ${\bf x}$ and ${\bf p}$ are the position and momentum vectors, $x$ and $p$ are the related four-vectors, $\tau $ is the proper time, and $F_n$ and $F_r$ are the non-relativistic and relativistic joint probability densities, respectively.
This was accomplished using the Infinitesimal Wigner-Moyal Transformations \begin{equation} \label{76}\rho _n^{(d)}\left( {\bf x-}\frac{\delta {\bf x}}2,{\bf x}+\frac{ \delta {\bf x}}2;t\right) =\int F_n\left( {\bf x},{\bf p};t\right) \exp \left( \frac i\hbar {\bf p}\cdot \delta {\bf x}\right) d^3p \end{equation} and \begin{equation} \label{77}\rho _r^{(d)}\left( x{\bf -}\frac{\delta x}2,x+\frac{\delta x} 2\right) =\int F_r\left( x,p;t\right) \exp \left( \frac i\hbar p^\alpha \delta x_\alpha \right) d^4p, \end{equation} where $\rho _n$ and $\rho _r$ are the non-relativistic and relativistic density functions, respectively. We also assumed as an axiom that Newton's equation, and its special relativistic counterpart, are valid \begin{equation} \label{78}\frac{d{\bf x}}{dt}=\frac{{\bf p}}m\ ;\ \frac{dx^\alpha }{d\tau }= \frac{p^\alpha }m; \end{equation} to obtain the equations \begin{equation} \label{79}\frac{-\hbar ^2}{2m}\frac{\partial ^2\rho _n^{(d)}}{\partial {\bf x }\partial \left( \delta {\bf x}\right) }=i\hbar \frac{\partial \rho _n^{(d)} }{\partial t}\ ;\ \hbar ^2\frac{\partial ^2\rho _r^{(d)}}{\partial x^\alpha \partial \left( \delta x_\alpha \right) }=0. 
\end{equation} These equations, we showed, can be taken into the Schr\"odinger's and Klein-Gordon's equations (in the absence of external forces and spin, for simplicity) \begin{equation} \label{80}\frac{-\hbar ^2}{2m}\frac{\partial ^2\psi _n}{\partial {\bf x}^2} =i\hbar \frac{\partial \psi _n}{\partial t}\ ;\ \left( \hbar ^2\Box -m^2\right) \psi _r=0, \end{equation} where $\psi _n$ and $\psi _r$ are the non-relativistic and relativistic probability amplitudes, when we use the property that $\delta x$ represents an infinitesimal variation and that the expansions \begin{equation} \label{81}\rho _{n(+)}^{(d)}\left( {\bf x-}\frac{\delta {\bf x}}2,{\bf x}+ \frac{\delta {\bf x}}2;t\right) =\psi _n^{*}\left( {\bf x-}\frac{\delta {\bf x}}2;t\right) \psi _n\left( {\bf x}+\frac{\delta {\bf x}}2;t\right) \end{equation} and \begin{equation} \label{82}\rho _{r(+)}^{(d)}\left( x{\bf -}\frac{\delta x}2,x+\frac{\delta x} 2\right) =\psi _r^{*}\left( x{\bf -}\frac{\delta x}2\right) \psi _r\left( x+ \frac{\delta x}2\right) \end{equation} might be performed. {}From these expressions, we can define the 3 and 4-momentum operators by means of the expressions for their expectation values \begin{equation} \label{83}\left\langle {\bf p}\right\rangle =\lim _{\delta {\bf x} \rightarrow 0}\frac \hbar i\frac \partial {\partial \left( \delta {\bf x} \right) }\int \rho _{n(+)}^{(d)}\left( {\bf x-}\frac{\delta {\bf x}}2,{\bf x} +\frac{\delta {\bf x}}2;t\right) d^3x; \end{equation} \begin{equation} \label{84}\left\langle p\right\rangle =\lim _{\delta x\rightarrow 0}\frac \hbar i\frac \partial {\partial \left( \delta x\right) }\int \rho _{r(+)}^{(d)}\left( x{\bf -}\frac{\delta x}2,x+\frac{\delta x}2;t\right) d^4x. \end{equation} It is noteworthy that we have, however, a freedom of choice in expressions ( \ref{81}) and (\ref{82}). 
We could equally well have chosen \begin{equation} \label{85}\rho _{n(-)}^{(d)}\left( {\bf x-}\frac{\delta {\bf x}}2,{\bf x}+ \frac{\delta {\bf x}}2;t\right) =\psi _n^{*}\left( {\bf x+}\frac{\delta {\bf x}}2;t\right) \psi _n\left( {\bf x-}\frac{\delta {\bf x}}2;t\right) =\rho _{n(+)}^{(d)\dagger } \end{equation} and \begin{equation} \label{86}\rho _{r(-)}^{(d)}\left( x{\bf -}\frac{\delta x}2,x+\frac{\delta x} 2\right) =\psi _r^{*}\left( x+\frac{\delta x}2\right) \psi _r\left( x-\frac{ \delta x}2\right) =\rho _{r(+)}^{(d)\dagger } \end{equation} that is equivalent to the change $\psi \leftrightarrow \psi ^{*},i=n,r$. It is easy to see that, with this new definition, we get \begin{equation} \label{87}\left\langle {\bf p}\right\rangle \rightarrow -\left\langle {\bf p} \right\rangle \ ;\ \left\langle p\right\rangle \rightarrow -\left\langle p\right\rangle . \end{equation} We can interpret these results as representing, in the non-relativistic case, a problem where the particle travels back in space. In the relativistic case it can be understood as if the particle travels back in space-time (with negative momentum and energy). However, if we still want to have an adequate definition of three and four-momentum, as given by (\ref{83}) and (\ref{84}), we shall redefine the 3- and 4-momentum mean values as \begin{equation} \label{88}\left\langle {\bf p}\right\rangle =\lim _{\delta {\bf x} \rightarrow 0}-\frac \hbar i\frac \partial {\partial \left( \delta {\bf x} \right) }\int \rho _{n(-)}^{(d)}\left( {\bf x-}\frac{\delta {\bf x}}2,{\bf x} +\frac{\delta {\bf x}}2;t\right) d^3x; \end{equation} \begin{equation} \label{89}\left\langle p\right\rangle =\lim _{\delta x\rightarrow 0}-\frac \hbar i\frac \partial {\partial \left( \delta x\right) }\int \rho _{r(-)}^{(d)}\left( x{\bf -}\frac{\delta x}2,x+\frac{\delta x}2;t\right) d^4x. 
\end{equation} which give the operators \begin{equation} \label{90}{\bf p}_{op}=\lim _{\delta {\bf x}\rightarrow 0}-\frac \hbar i\frac \partial {\partial \left( \delta {\bf x}\right) }\ ;\ p_{op}=\lim _{\delta x\rightarrow 0}-\frac \hbar i\frac \partial {\partial \left( \delta x\right) }. \end{equation} In general, we have \begin{equation} \label{91}{\bf p}_{op}=\lim _{\delta {\bf x}\rightarrow 0}\lambda \frac \hbar i\frac \partial {\partial \left( \delta {\bf x}\right) }\ ;\ p_{op}=\lim _{\delta x\rightarrow 0}\lambda \frac \hbar i\frac \partial {\partial \left( \delta x\right) }, \end{equation} where $\lambda $ has the same definition given in the main text, when acting upon $\rho _{i,\lambda }$, $i=n,r$. This assures that the energy and momentum have the correct sign when calculated by expressions (\ref{83}-\ref{84}) or (\ref{88}-\ref{89}). It is important to stress that equations (\ref{79}) do not depend upon $\lambda $, since they are quadratic in the relevant quantities (energy and momentum for KG's equation and momentum for Schr\"odinger's). This explains why the energy density obtained from the energy-momentum tensor is always positive. In the same way, the energy in Schr\"odinger's equation is not affected, since time does not enter into the non-relativistic transformation (\ref{76}). With the conventions (\ref{91}), the relativistic energy density\cite{1,2,3} can be written, in the absence of electromagnetic fields, as \begin{equation} \label{92}p_\lambda ^0\left( x\right) =\frac{i\hbar }2\lambda \left[ \psi ^{*}\frac{\partial \psi }{\partial t}-\psi \frac{\partial \psi ^{*}}{\partial t}\right] =\frac{i\hbar }2\left[ \psi ^{*}\frac{\partial \psi }{\partial t}-\psi \frac{\partial \psi ^{*}}{\partial t}\right] _{+}, \end{equation} which is always positive.
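As a quick consistency check (our illustration, not in the original text), one can evaluate (\ref{92}) on the rest-frame plane-wave solutions introduced earlier; both branches indeed yield a positive energy density:

```latex
% Our check: evaluate (92) on plane waves.
% Particle branch: $\lambda =+1$, $\psi =e^{-iE_pt/\hbar }$ with $E_p>0$;
% antiparticle branch: $\lambda =-1$, $\psi =e^{+iE_pt/\hbar }$.
\begin{align*}
\psi ^{*}\frac{\partial \psi }{\partial t}-\psi \frac{\partial \psi ^{*}}{\partial t}
  &= \mp \frac{2iE_p}{\hbar }\qquad (\lambda =\pm 1),\\
p_\lambda ^0 &= \frac{i\hbar }2\,\lambda \left( \mp \frac{2iE_p}{\hbar }\right)
  = \pm \lambda \,E_p = E_p > 0 .
\end{align*}
```

In both cases the $\lambda $ factor compensates the sign of the bracket, which is the content of the $[\,\cdot \,]_{+}$ notation.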
If we impose the possibility of negative masses for the complex conjugate amplitudes, the probability density can be written as \begin{equation} \label{93}\rho _\lambda \left( x\right) =\frac{i\hbar }{2\lambda mc^2}\left[ \psi ^{*}\frac{\partial \psi }{\partial t}-\psi \frac{\partial \psi ^{*}}{\partial t}\right] _{+}, \end{equation} where $\lambda $ now comes from the mass sign. Let us now consider the relativistic situation when electromagnetic fields are present. The energy density is given by \begin{equation} \label{94}p_\lambda ^0\left( x\right) =\frac{i\hbar }2\left[ \psi ^{*}\frac{\partial \psi }{\partial t}-\psi \frac{\partial \psi ^{*}}{\partial t}\right] _{+}-\lambda e\Phi \psi ^{*}\psi , \end{equation} where $e$ is the particle charge and $\Phi $ is the scalar electromagnetic potential. The parameter $\lambda $ appears multiplying the charge, since in the equation that $\psi ^{*}$ solves, the charge changes sign. The probability density can be written as \begin{equation} \label{95}\rho _\lambda \left( x\right) =\frac{i\hbar }{2\lambda mc^2}\left[ \psi ^{*}\frac{\partial \psi }{\partial t}-\psi \frac{\partial \psi ^{*}}{\partial t}\right] _{+}-\frac e{mc^2}\Phi \psi ^{*}\psi , \end{equation} as was presented in the main text. It is important to note that (\ref{95}) is a density that represents particles (positive mass, given by $\psi $) and antiparticles (negative mass, given by $\psi ^{*}$) with positive energy and momentum, as stressed in the main text (\ref{9}). We thus obtain results in agreement with those obtained from the energy-momentum tensor. The final expression for the density, with the normalization \begin{equation} \label{96}\int \rho _\lambda \left( x\right) d^3x=\lambda \end{equation} implies that, without fields, the worlds of particles and antiparticles remain separate. This is a very important property; it allows us to avoid the vacuum picture that emerges from Dirac's theory based on his first-order equation.
This theory is automatically protected from a radiative catastrophe. The above calculations also allow us to understand the picture of particle flow. A positive-mass particle, with negative energy and momentum, traveling backward in time is equivalent to a negative-mass antiparticle, with positive energy and momentum, traveling in the usual time direction. These conventions just reflect equations (\ref{10}-\ref{12}) and agree with the velocities of particles and antiparticles having opposite signs. These considerations will play an important role in the next paper of this series. \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline {\bf mass} & {\bf charge} & {\bf amplitude} \\ \hline + & + & $\chi_1$ \\ \hline + & - & $\chi_{2}^{\dag}$ \\ \hline - & - & $\chi_{1}^{\dag}$ \\ \hline - & + & $\chi_2$ \\ \hline \end{tabular} \end{center} \caption{Possible combinations of mass and charge signs allowed by Nature for spinless particles.} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline {\bf mass} & {\bf charge} & {\bf spin} & {\bf amplitude} \\ \hline + & + & $\uparrow$ & $\phi_1$ \\ \hline + & + & $\downarrow$ & $\chi_{1}$ \\ \hline + & - & $\uparrow$ & $\chi_{2}^{\dag}$ \\ \hline + & - & $\downarrow$ & $\phi_{2}^{\dag}$ \\ \hline - & + & $\uparrow$ & $\phi_{2}$ \\ \hline - & + & $\downarrow$ & $\chi_2$ \\ \hline - & - & $\uparrow$ & $\chi_{1}^{\dag}$ \\ \hline - & - & $\downarrow$ & $\phi_{1}^{\dag}$ \\ \hline \end{tabular} \end{center} \caption{Possible combinations allowed by Nature for particles with spin.} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline ${\bf \Psi}$ & ${\bf \Psi}_{c1}(\hbar\omega)$ & ${\bf \Psi}_{c2}(\hbar\omega)$ \\ \hline $u_{0\uparrow(+)}^{(P,+)}$ & $\mu_{0\downarrow(+)}^{(A,-)}(0)$ & $\omega_{0\uparrow(-)}^{(A,-)}(+1)$ \\ \hline $u_{0\downarrow(+)}^{(P,+)}$ & $\mu_{0\uparrow(+)}^{(A,-)}(0)$ & $\omega_{0\downarrow(-)}^{(A,-)}(-1)$ \\ \hline $u_{0\downarrow(+)}^{(A,+)}$ & $\mu_{0\uparrow(+)}^{(P,-)}(0)$ & $\omega_{0\downarrow(-)}^{(P,-)}(-1)$ \\ \hline $u_{0\uparrow(+)}^{(A,+)}$ & $\mu_{0\downarrow(+)}^{(P,-)}(0)$ & $\omega_{0\uparrow(-)}^{(P,-)}(+1)$ \\ \hline\hline $v_{0\uparrow(-)}^{(P,+)}$ & $\nu_{0\downarrow(-)}^{(A,-)}(0)$ & $\eta_{0\uparrow(+)}^{(A,-)}(+1)$ \\ \hline $v_{0\downarrow(-)}^{(P,+)}$ & $\nu_{0\uparrow(-)}^{(A,-)}(0)$ & $\eta_{0\downarrow(+)}^{(A,-)}(-1)$ \\ \hline $v_{0\downarrow(-)}^{(A,+)}$ & $\nu_{0\uparrow(-)}^{(P,-)}(0)$ & $\eta_{0\downarrow(+)}^{(P,-)}(-1)$ \\ \hline $v_{0\uparrow(-)}^{(A,+)}$ & $\nu_{0\downarrow(-)}^{(P,-)}(0)$ & $\eta_{0\uparrow(+)}^{(P,-)}(+1)$ \\ \hline \end{tabular} \end{center} \caption{Annihilation relations between the particle spinors ($\Psi$) and the antiparticle spinors ($\Psi_{c1}$, $\Psi_{c2}$).} \end{table} \begin{figure} \caption{Particle-antiparticle trajectories in the presence of a gravitational field.} \end{figure} \begin{figure} \caption{Particle-antiparticle trajectories in the presence of a homogeneous magnetic field.} \end{figure} \end{document}
Internal set

In mathematical logic, in particular in model theory and nonstandard analysis, an internal set is a set that is a member of a model.

The concept of internal sets is a tool in formulating the transfer principle, which concerns the logical relation between the properties of the real numbers R and the properties of a larger field denoted *R called the hyperreal numbers. The field *R includes, in particular, infinitesimal ("infinitely small") numbers, providing a rigorous mathematical justification for their use. Roughly speaking, the idea is to express analysis over R in a suitable language of mathematical logic, and then point out that this language applies equally well to *R. This turns out to be possible because at the set-theoretic level, the propositions in such a language are interpreted to apply only to internal sets rather than to all sets (note that the term "language" is used in a loose sense in the above).

Edward Nelson's internal set theory is an axiomatic approach to nonstandard analysis (see also Palmgren's work on constructive nonstandard analysis). Conventional infinitary accounts of nonstandard analysis also use the concept of internal sets.

Internal sets in the ultrapower construction

Relative to the ultrapower construction of the hyperreal numbers as equivalence classes of sequences $\langle u_{n}\rangle $ of reals, an internal subset $[A_{n}]$ of *R is one defined by a sequence of real sets $\langle A_{n}\rangle $, where a hyperreal $[u_{n}]$ is said to belong to the set $[A_{n}]\subseteq \;^{*}\!{\mathbb {R} }$ if and only if the set of indices n such that $u_{n}\in A_{n}$ is a member of the ultrafilter used in the construction of *R. More generally, an internal entity is a member of the natural extension of a real entity. Thus, every element of *R is internal; a subset of *R is internal if and only if it is a member of the natural extension ${}^{*}{\mathcal {P}}(\mathbb {R} )$ of the power set ${\mathcal {P}}(\mathbb {R} )$ of R; etc.
Internal subsets of the reals

Every internal subset of *R that is a subset of (the embedded copy of) R is necessarily finite (see Theorem 3.9.1 in Goldblatt, 1998). In other words, every internal infinite subset of the hyperreals necessarily contains nonstandard elements.

See also

• Standard part function
• Superstructure (mathematics)

References

• Goldblatt, Robert. Lectures on the hyperreals. An introduction to nonstandard analysis. Graduate Texts in Mathematics, 188. Springer-Verlag, New York, 1998.
• Abraham Robinson (1996), Non-standard analysis, Princeton Landmarks in Mathematics and Physics, Princeton University Press, ISBN 978-0-691-04490-3
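To illustrate the finiteness theorem above, here is a standard example (our addition, following Goldblatt): the set of standard natural numbers is external, while its natural extension is internal.

```latex
% Example: $\mathbb{N}$, the standard naturals embedded in ${}^{*}\mathbb{R}$,
% is external.
Suppose $\mathbb{N}$ were internal. Since $\mathbb{N}\subseteq \mathbb{R}$
(the embedded copy) and $\mathbb{N}$ is infinite, the theorem above would
force $\mathbb{N}$ to be finite, a contradiction; hence $\mathbb{N}$ is
external. By contrast, the natural extension
${}^{*}\mathbb{N}=[\langle \mathbb{N},\mathbb{N},\mathbb{N},\dots \rangle ]$
\emph{is} internal, and every element of
${}^{*}\mathbb{N}\setminus \mathbb{N}$ is an unlimited (infinite)
hypernatural, consistent with the fact that every internal infinite
subset of ${}^{*}\mathbb{R}$ contains nonstandard elements.
```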
Research and Practice in Technology Enhanced Learning

Comparing the effects of dynamic computer visualization on undergraduate students' understanding of osmosis with randomized posttest-only control group design

Shannon Hsianghan-Huang Sung, Ji Shen, Shiyan Jiang & Guanhua Chen

Research and Practice in Technology Enhanced Learning, volume 12, Article number: 26 (2017)

This study describes the impact of embedding dynamic computer visualization (DCV) in an online instrument designed to assess students' understanding of osmosis. The randomized posttest-only control group research was designed to compare the effect and the perceived helpfulness of integrating DCV before and after the administration of an osmosis instrument. College students from three large classes (N = 640) were randomly assigned to participate in the research through an online system. Rasch-PCM was applied to determine the psychometric properties of the instrument and to differentiate students' understanding of osmosis. A Welch two-sample t test was applied to examine whether there was a significant difference between groups. Multiple regression analysis was conducted to evaluate the association between predictors and students' understanding level, as reflected in their performance on the online instrument. We found that (a) the psychometric properties of the instrument with DCVs were reliable, with good construct validity; (b) students who viewed DCVs before they took the assessment performed better than those who did not, especially on solvation-related items; (c) students' time spent on the DCVs contributed significantly to their performance; (d) the current data analytics enabled us to study respondents' DCV navigation behavior; and (e) we summarized how participants perceived the DCVs embedded in the assessment. Educational implications and the significance of this study are also discussed.
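The Welch two-sample t test mentioned above does not assume equal group variances. As a rough sketch of the computation (with hypothetical scores, not the study's data; `welch_t` is our helper, not from the paper):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch two-sample t statistic and Welch-Satterthwaite degrees of freedom.

    Unlike Student's t test, the two groups' variances are not pooled,
    so unequal variances and unequal group sizes are handled gracefully.
    """
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical posttest scores: treatment viewed DCVs before the assessment.
treatment = [14, 15, 13, 16, 15, 14, 17]
control = [12, 13, 11, 14, 12, 13, 12]
t, df = welch_t(treatment, control)
print(f"t = {t:.3f}, df = {df:.1f}")   # compare |t| against the t distribution with df
```

With real data one would typically call `scipy.stats.ttest_ind(a, b, equal_var=False)`, which computes the same statistic and also returns the p value.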
A growing number of reforms and studies stress the practice of deepening learners' understanding of dynamic interactions in natural phenomena (Chiu and Linn 2014; NGSS Lead States 2013; Smetana and Bell 2012; Wu et al. 2010). The demand to integrate advanced educational technologies to model or represent natural science systems in action has thus also been increasing in this decade (Cook 2006; Marbach-Ad et al. 2008; Quellmalz et al. 2012; Xie and Tinker 2006). The effects of incorporating dynamic computer visualization (DCV) in instruction on students' science learning have been documented in abundant studies (e.g., Brunye et al. 2004; Chiu and Linn 2014; Jensen et al. 1996; Ryoo and Linn 2012; Sanger et al. 2001; Smetana and Bell 2012). With the development of technology-enhanced activities and curricula, conventional ways of assessing students' knowledge have gradually become inadequate for precisely determining their understanding of the dynamic interactions of science systems (Marbach-Ad et al. 2008; Quellmalz et al. 2012; Wu et al. 2010). Unlike DCV (e.g., Molecular Workbench, Xie and Pallant 2011; PhET, Wieman et al. 2008), the text or static visualizations used in a traditional assessment instrument are less likely to elicit higher-level molecular reasoning about the dynamic nature of phenomena (Jensen et al. 1996; Marbach-Ad et al. 2008; McElhaney et al. 2015; Levy 2013; Pedrosa and Dias 2000; Smetana and Bell 2012). To address this potential assessment gap, emerging technologies have enabled the integration and administration of measurement instruments that capture complex learning processes (Linn and Eylon 2011; Quellmalz and Pellegrino 2009). Technological advances also afford documenting users' behavior with the designated means (e.g., animation, inquiry learning tools, simulation) as they progress through an online assessment (Ryoo and Linn 2012; Ryoo and Bedell 2017).
Nevertheless, very little research has examined the measurement properties and impact of incorporating DCV in assessing students' performance on the same instrument (e.g., the pilot study in Wu et al. 2010), or how exactly students utilize DCV during assessment.

Rationale of using dynamic computer visualization for osmosis

The concept of osmosis is listed as one of the most important and difficult concepts for undergraduate science learning (Shen et al. 2015; Odom and Barrow 1995, 2007; Sanger et al. 2001). Osmosis is the net movement of water through a selectively permeable membrane from a region of lower solute concentration to a region of higher solute concentration. It is a phenomenon involving molecular-level interactions that can often be observed macroscopically (e.g., the U-tube example in Fig. 5). Osmosis is also critical to various biological processes that are essential to plant water intake, maintaining cell shapes, water balance and transport in all types of living creatures, and sustaining a nurturing ecosystem. It is related to many physical and chemical concepts, such as pressure, solutions, and the particulate nature of matter (Friedler et al. 1987; Jensen et al. 1996; Sanger et al. 2001). Osmosis is a poorly understood science concept, despite being an important one (Shen et al. 2014; Fisher et al. 2011; Odom 1995; Odom and Barrow 1995, 2007). Osmosis is often perceived by students to be driven only by life forces or an input of energy (Odom 1995), which pertains to the misunderstanding that the whole process is purpose-driven (e.g., that plant cells undergo osmosis in order to prevent withering, or that humans drink water to quench thirst). In fact, wither prevention and the thirst sensation are not directed by osmosis; rather, they are mechanisms living creatures developed in order to take in water so that osmosis can take place.
However, studies have regularly suggested that students retain such misconceptions concerning the mechanisms and processes of osmosis at all levels (Jensen et al. 1996; Kramer and Myers 2012; Odom 1995; Odom and Barrow 1995, 2007; Sanger et al. 2001). Moreover, students think that the solvent (most commonly water) stops moving once the solution reaches equilibrium and that the solute "absorbs" water from areas of low concentration (Jensen et al. 1996), just like a sponge. Studies have suggested that even with the assistance of dynamic visualization, learners may still have a difficult time comprehending the dynamic and interactive structures of biological systems (Buckley and Quellmalz 2013; Hmelo-Silver and Pfeffer 2004; Rundgren and Tibell 2010; Tasker and Dalton 2006). This learning obstacle could be partly attributed to the lack of explicit demonstration of how osmosis applies across different systems when dynamic visualizations are introduced and used (Buckley and Quellmalz 2013). For instance, in our textbook analysis study (Sung et al. 2015), we found that in many textbooks the classic U-tube experiment was used to demonstrate and explain osmosis as governed by stringent scientific law (e.g., random motion of molecules that takes place without external energy input), always carried out in a well-defined, lifeless scenario. In our study, we sought to capture whether participants find the DCVs helpful in responding to the osmosis items. The learning challenge might also be caused by students' inability to acknowledge "dynamic equilibrium" (Meir et al. 2005), which is closely related to one of the seven crosscutting concepts summarized in the National Research Council's (NRC) Framework for K-12 Science Education (2012): stability and change.
Dynamic equilibrium refers to the ongoing random movement of molecules after the system stabilizes, a concept essential for fully understanding osmotic processes at the microscopic level. Failing to recognize this dynamic process, students might be stuck with the macroscopic, static equilibrium example portrayed in textbooks. The raised column of solution provides visual reinforcement of the idea that osmosis must be sustained by an external input of energy, just like the static equilibrium example where the "stair is leaning against the wall" (National Research Council (NRC) 2012). Many of the misconceptions students have about osmosis will be carried over through the different stages of education. This does not have to be the case, as many of these misconceptions can be addressed by well-designed DCVs (Meir et al. 2005; Rundgren and Tibell 2010). For example, a previous study showed that students with DCV exposure were less likely to perceive that particles stopped moving at equilibrium (Sanger et al. 2001). However, assessment items that include concrete, dynamic representations of abstract concepts (Wu et al. 2010) and incorporate essential variables that could draw attention to the target mechanism (Smetana and Bell 2012) are relatively scarce. In order to address the aforementioned challenges, we developed four short DCVs in an online assessment instrument to assess the effect of dynamic visualizations on college students' interdisciplinary understanding of osmosis (Shen et al. 2014). The study evaluated the effect of integrating DCVs demonstrating molecular movement on students' understanding of osmosis. Specifically, three research questions (RQs) directed the examination of whether these DCVs had any impact on students' performance: What are the psychometric properties of the instrument with DCVs in assessing students' understanding of osmosis?
How does the integration of DCVs impact students' performance on the osmosis assessment? Specifically, the following two sub-questions were investigated: How did the treatment group, which interacted with four DCV clips prior to answering the osmosis survey, compare to the control group? What variable(s) (e.g., gender, language, major, time spent on animation or survey) best predict(s) student understanding of osmosis? How do students use and perceive the DCVs when responding to and reviewing the assessment? Due to the highly dynamic interactions among water and solute molecules during osmosis, we believe that the DCVs can visually simulate the interactive nature of water, solute molecules, and the selectivity of the permeable membrane at the microscopic level. DCVs have significantly enriched the ways in which science instruction is delivered (National Research Council 2014; Scalise et al. 2011; Yarden and Yarden 2010). One of the many benefits of DCVs in science instruction is that they are better than static visualizations at serving as conceptual references for complex, dynamic molecular representations and processes (Marbach-Ad et al. 2008; Smetana and Bell 2012; Savec et al. 2005). Furthermore, recent NRC reports have called for creative ways of incorporating computer technology, including DCVs, in assessing students' science understanding and practices (National Research Council 2014). McElhaney et al. (2015) conducted a meta-analysis documenting how dynamic visualization contributes to conceptual learning and deeper understanding of complicated science topics. They found dynamic visualization to be most effective in showcasing dynamic processes, e.g., "continuity of motion" (p. 62), and enhancing learners' comprehension of the target phenomenon (McElhaney et al. 2015). 
They also assessed the use of static and dynamic visualizations in their analysis and found the latter to be more favorable for inquiry-based and collaborative learning. The merit of dynamic visualization, as summarized in McElhaney et al.'s article, is most frequently found in effective prompts that tasked learners with contrasting several components of a phenomenon. Their meta-analysis of the dynamic-versus-static comparison studies revealed that among the 26 identified articles, only 11 focused their assessment at the small/particulate level. Our work on the effectiveness of DCVs at the microscopic level helps fill this gap. Ever since the call made by the National Research Council (2014), advancement in incorporating DCVs in assessment has gradually been achieved at the classroom level for formative assessment purposes. For instance, researchers have embedded DCVs and associated assessment items within technology-enhanced curricula (e.g., Ryoo and Linn 2015; Shen and Linn 2011). Recently, attempts have been made to incorporate DCVs in evidence-centered assessment design, model-based learning, and large-scale state science assessment (Quellmalz et al. 2012). There are at least two major advantages of incorporating DCVs in science assessment: (1) DCVs can provide more concrete contexts for assessing complex/abstract science phenomena (Quellmalz et al. 2012) and (2) the rich information exhibited in DCVs can facilitate the assessment of complex learning processes (Quellmalz and Pellegrino 2009). The effect of incorporating DCVs on learners' conceptual understanding, however, is contested. The science-simulation literature review of Scalise et al. (2011) suggested that 96% of the relevant studies synthesized for secondary school students' learning outcomes indicated at least partial learning gains. 
When mixed-result studies are considered (i.e., 25.3% of articles reported both gains and no gains in the same study), 29% of reports still indicated at least partially no learning gains. Tversky et al.'s (2002) review likewise suggested that most studies found no apparent advantage of animations. Also, it is difficult to elicit learners' understanding of the dynamic nature of scientific phenomena by administering conventional, text-based, or static visualization assessments. Some studies revealed that proper integration of DCVs during science learning enhanced students' conceptual understanding (Ryoo and Linn 2012; Savec et al. 2005); some found either a small or no effect of DCVs on performance (Byrne et al. 1999; Höffler and Leutner 2007; Kim et al. 2007; Tversky et al. 2002), while others found that the effect was apparent only under certain conditions (e.g., differential spatial ability (Merchant et al. 2013), learners with disabilities (Quellmalz et al. 2012)) or that DCVs enhanced affective attributes not directly related to subject matter performance, such as perceived comprehensibility, interestingness, or motivation (Hwang et al. 2012; Kim et al. 2007). Furthermore, dynamic visualization and static materials may bring about different learning outcomes: some researchers argued that DCVs had no advantage on recall assessments, whereas students' performance on inference assessments was typically significantly better under DCV conditions (e.g., McElhaney et al. 2015). The mixed and disagreeing findings on the effect (or lack of effect) of embedded DCVs on students' conceptual understanding have mostly been approached from curriculum and instruction perspectives, rather than through inspection of the validity of the assessment instrument or of the behavior of the participants engaged with DCV activities (e.g., Kehoe et al. 2001; McElhaney et al. 2015). In light of this, we aimed to develop and validate an assessment instrument that could be used to determine the effectiveness of DCVs. We adopted a randomized posttest-only control group experiment designed to investigate the effect of incorporating DCVs in an assessment instrument on students' performance. We expected that students who viewed the DCVs before taking the osmosis assessment would perform better than those who did not. Context of the study The research was conducted in a large university in the southeast USA. The study used a randomized posttest-only control group design with a convenience sample from three classes: biology, physics, and physiology. The students in these three classes were randomly assigned to two conditions: viewing the animation prior to responding to the osmosis survey and viewing the animation after responding to the osmosis survey. Student participants consisted of 60.8% female and 39.1% male; 30.2% were in their freshman and sophomore years, 64% in their junior and senior years, and 5.8% in their fifth year and beyond; 89.7% of the respondents use English as their first language and 10.2% reported otherwise. Assessment instrument The osmosis survey was constructed by a research team consisting of science content experts, educational researchers, and psychometricians. The current knowledge assessment was adapted from an earlier one. In the current study, we primarily focused on items that require students' deeper understanding of osmosis connecting the molecular and macroscopic levels. The present version included 20 multiple-choice items and 13 constructed-response questions targeting students' interdisciplinary understanding of osmosis. Table 1 lists the five scenarios in the assessment. More details concerning the instrument and item design can be found in our prior study (Shen et al. 2014). 
Table 1 Scenarios of the item sets in the osmosis survey The results of our previous survey suggested that students had difficulty understanding the molecular mechanism for solvation and water movement in osmosis (Shen et al. 2014). Therefore, we developed four short DCVs (total time = 108 s) and incorporated them in the current survey. Dynamic computer visualization design Designing dynamic computer visualization The DCVs are operationally defined as computer-based, visual representations showcasing the dynamic movement and interactions of molecules in the format of animated video clips. Users can adjust the playback (e.g., forward, pause, reverse) of the DCVs, in line with theoretical perspectives on cognitive load (Chandler and Sweller 1991) and multimedia learning (Mayer 2001). There are four DCVs introduced in the study (see Fig. 1 for images of the three 3-D representations made by our team). The first clip (18 s) exemplifies the process of molecular-level solvation (or dissolution), showcasing the attraction and association of solvent molecules (e.g., water) with molecules or ions of a solute; the second clip (49 s) represents diffusion as the random movement of individual particles, as opposed to intentional/directional movement of molecules. The visualization shows that when a dye droplet is added to water, its molecules diffuse from the region of high dye concentration to the region of low concentration and eventually reach dynamic equilibrium. The third clip (16 s) demonstrates osmosis as the net diffusion of water across a selectively permeable membrane. This visualization shows how osmosis is caused by a solute concentration gradient and the differential solvation capacity of the solute to bind water, which creates a concentration gradient of free water across the membrane. 
The last visualization (25 s) shows the differential water-binding capacity of two different solutes with the same molar concentration across a membrane that is only permeable to water. The solute molecules on the right are bigger and can each bind more water molecules than the ones on the left. This leaves less free water on the right and creates a concentration gradient of free water molecules across the membrane. Osmosis still occurs as a result. The static visualizations of the three 3-D DCV clips integrated in the osmosis assessment. a–c exemplify the process of molecular-level solvation (or dissolution), showcasing the attraction and association of solvent and solute molecules; d–f demonstrate osmosis as the net diffusion of water across a selectively permeable membrane; g–i show the differential water-binding capacity of two different solutes with the same molar concentration across a membrane Assessment survey design, implementation, and data collection The osmosis survey consists of knowledge questions and demographic questions surveying students' gender, language use, academic status, etc. At the end of the survey, questions asking students to reflect on the perceived helpfulness of the DCVs were administered. The osmosis assessment was administered in the same semester in three classes (biology, physics, and physiology). It was delivered through the Web-based Inquiry Science Environment (WISE; wise.berkeley.edu), which provides logging capability, allowing researchers to record variables such as time spent on each step, the frequency of steps visited, and the sequence of steps visited. It was administered as a 1-week homework assignment. The students in each class were randomly assigned to two conditions: Visualization Before (VB)—students view the DCVs prior to responding to the osmosis knowledge assessment, and Visualization After (VA)—students respond to the assessment and then view the DCVs. 
A total of 667 students took the survey, but 640 responses were considered valid (i.e., the student agreed to sign the consent form and completed at least 50% of the knowledge items). The multiple-choice items were graded dichotomously (1—correct/0—incorrect). There are up to five levels in the coding rubric for the constructed-response items (see Table 2) (Shen et al. 2014). The inter-rater reliability reached 0.80 after several iterations. Inconsistent coding was resolved during research team meetings. Table 2 Scoring rubric of the constructed-response item on differential height We applied the Rasch model to analyze the dichotomous data and the Partial Credit Model (PCM) to analyze polytomous data (i.e., the constructed-response scores) using the Winsteps 3.0 software (Linacre 2012). In the Rasch model, only item difficulty (b) and student ability (θ) are considered; the probability of respondent n answering a dichotomous question i correctly is: $$ \Pr \left({x}_{n,i}=1|\theta, b\right)=\frac{e^{\left({\theta}_n-{b}_i\right)}}{1+{e}^{\left({\theta}_n-{b}_i\right)}} $$ The term $(\theta_n-b_i)$ is the log odds, or simply the logit. Persons located at the same logit as an item have approximately a 50% chance of getting that item correct. Persons positioned at a higher logit have a greater than 50% chance of answering the item correctly, and vice versa (Glynn 2012). A plot (i.e., a Wright map), which provides information about students' osmosis understanding ($\theta_n$) and item difficulty (b) simultaneously, was constructed. This map is often used to identify the gaps between items with different difficulty levels. Infit and outfit were inspected in this study. Item infit/outfit indicates whether students from a high ability group and a low ability group perform "normally" as predicted. 
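As an illustration of the formula above, the Rasch probability can be computed directly. This is a minimal sketch for clarity (the function name is ours), not the Winsteps implementation:

```python
import math

def rasch_prob(theta, b):
    """Probability that a person with ability theta (in logits) answers
    an item of difficulty b correctly, under the dichotomous Rasch model."""
    logit = theta - b  # the log odds (theta_n - b_i)
    return math.exp(logit) / (1 + math.exp(logit))

# A person whose ability equals the item difficulty has exactly a 50% chance.
print(rasch_prob(1.0, 1.0))                # 0.5
# A person one logit above the item difficulty has about a 73% chance.
print(round(rasch_prob(2.0, 1.0), 2))      # 0.73
```

This makes concrete the statement in the text that persons located at an item's logit have roughly even odds on that item.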
A large infit value for an item implies that responses from persons whose ability is close to that item's difficulty are not consistent with the model's prediction. A large outfit value for an easy item indicates that a high-ability student failed to respond to the question correctly, and vice versa. The item parameters, infit/outfit parameters, and the Wright map are reported. Welch two-sample t test A Welch two-sample t test was conducted to evaluate whether students' performance on the assessment differed between the two conditions. Multiple regression A multiple regression examining which factors contributed to students' success in solving the osmosis problems was conducted (see Table 3 for the list of variables). Outliers (respondents who spent over 6000 s on the assessment) were removed from the multiple regression analyses. The Akaike information criterion (AIC) was used as the criterion to compare model-fitting results in the model selection algorithm. Table 3 Denotation of variables and their basic descriptive statistics Visualize log data To explore how students in each condition interacted with the assessment, the study examined log files generated by WISE, which provide a more in-depth view of how students in the two conditions (VA and VB groups) may have navigated through the activity sequence differently. We selected one representative student from each condition and analyzed their log data. The first criterion we applied was to select students who visited the visualization step more than once and also stated that the visualization was helpful (VB) or could be helpful (VA). We then pulled the individual logging data along with the scores each one received on the assessment portion and identified two respondents with comparable scores, one from VA and one from VB. When respondents interacted with the osmosis assessment items, they had to submit their answers at every step and were not allowed to change their answers after submission. 
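The Welch two-sample t test mentioned above differs from the ordinary t test in that it does not assume equal variances. A minimal pure-Python sketch of the statistic (the sample scores below are made up for illustration, not the study's data):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom; no equal-variance assumption is made."""
    n1, n2 = len(sample_a), len(sample_b)
    m1 = sum(sample_a) / n1
    m2 = sum(sample_b) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in sample_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample_b) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical ability scores (logits) for two conditions.
vb = [0.4, 0.1, -0.2, 0.8, 0.3, 0.0, 0.5, -0.1]
va = [-0.3, 0.2, -0.5, 0.1, -0.2, 0.0, -0.4, 0.3]
t, df = welch_t(vb, va)
print(round(t, 2))  # 2.06 for these made-up samples
```

The p-value would then be looked up against a t distribution with the fractional df returned here, which is what statistical packages do internally.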
The log files show how long students interacted with each step, the sequence in which students visited steps, and the frequency of steps visited. Using the sequence and frequency data from the log files, the study visualized students' navigation behavior in both conditions via the free JavaScript library D3.js. The first visualization depicts the overall, cumulative navigation behavior of the respondents from the VA and VB groups, while the second visualization illustrates a linear navigation pattern of participants. Perceived helpfulness One of the exit questions is a two-tier question consisting of an ordinal-format item eliciting students' perceived helpfulness of the animation for responding to the survey, followed by the explanations they provided for the preceding Likert question. For the animation-after-survey group, the question eliciting their perception of the helpfulness of the animation was worded slightly differently, with a multiple-choice prompt: "How much do you think the visualizations in the previous activity would have helped you answer some of the survey questions?" also followed by their explanations. The self-reported Likert-scale item has three levels: not helpful at all, somewhat, and very much. The variables are listed in Table 4. Table 4 Variables in the perceived-helpfulness question Mann-Whitney U test We analyzed the exit two-tier question inquiring about students' perceptions of the DCVs. The first part was a Likert-scale item with three levels: not helpful at all, somewhat, and very much; the second part was their explanation of why the DCVs (could have) helped or not. The parenthetical in the prompt was phrased slightly differently for the two groups. A Mann-Whitney U test was conducted to evaluate the difference in the Likert-scale responses between the two conditions. The open-ended responses to the aforementioned question were reviewed to triangulate with the students' Likert-scale responses from the VA and VB groups. 
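The Mann-Whitney U statistic used for the ordinal Likert responses can be sketched by counting pairwise wins across the two groups, with ties counted as half. The three Likert levels are coded 0/1/2 here for illustration; the sample responses are invented, not the study's data:

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a versus group_b.
    Each pair (a, b) contributes 1 if a > b and 0.5 on a tie,
    which handles the heavily tied ordinal Likert data."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical Likert codes: 0 = not helpful at all, 1 = somewhat, 2 = very much.
va = [2, 2, 1, 2, 1]
vb = [1, 0, 1, 0, 2]
print(mann_whitney_u(va, vb))  # 19.5 out of a maximum of 25
```

A useful sanity check is that the two directional U values always sum to the number of pairs, len(a) * len(b); the z statistic reported in the results is derived from U with a normal approximation and a tie correction, which statistical software handles.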
Psychometric properties In response to RQ1, the psychometric properties are reported in the following. The Wright map (Fig. 2) shows that student ability in solving osmosis problems and item difficulty matched fairly well, with two outlying items, one on each end of the scale (i.e., items 4.4 and 2.8). The students' abilities ranged from − 3.34 to 2.57 logits. Figure 3 shows a summary of Rasch modeling for the osmosis survey based on a sample of 640 subjects. Overall person separation and reliability were helpful in determining model-data fit. The test differentiated subjects with a person separation index of 2.08 based on the empirical data or 2.28 based on the model expectations. A separation index above 2.0 indicates acceptable sensitivity of the instrument to differentiate high and low performers. The Cronbach's alpha values for those separations were .81 and .84, which also represent acceptable test reliability. The item separation index is 13.17 based on the empirical data or 13.37 based on the model expectations. The Cronbach's alpha values for those separation indices were both .99, indicating satisfactory item reliability. High item separation verifies the item hierarchy, implying that the person sample is large enough to confirm the item difficulty hierarchy, or construct validity, of the osmosis assessment instrument. The Wright map of person-item measure of the osmosis assessment. Each "#" symbol means a subgroup of five people and a "." represents fewer than 5. "M" is the mean, "S" is one standard deviation from the mean, and "T" is two standard deviations from the mean Summary of Rasch modeling statistics The Wright map shows that student osmosis understanding and item difficulty matched fairly well, with two outlying questions, one on each end of the item difficulty scale (i.e., questions 4.4 and 2.8). The Wright map also shows that although overall students' abilities spread over a range from − 3.34 to 2.57 logits, there were two gaps in items. 
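The separation index G and the separation reliability R reported above are linked by the standard Rasch relationship G = sqrt(R / (1 − R)). A quick sketch of this conversion, checked against the values in the text (the function names are ours):

```python
import math

def separation_from_reliability(r):
    """Separation index implied by a separation reliability r (0 <= r < 1)."""
    return math.sqrt(r / (1 - r))

def reliability_from_separation(g):
    """Inverse: separation reliability implied by a separation index g."""
    return g ** 2 / (1 + g ** 2)

# A reliability of .81 corresponds to a separation index of about 2.06,
# consistent with the empirical person separation of 2.08 reported above.
print(round(separation_from_reliability(0.81), 2))  # 2.06
```

The same conversion explains why the item reliability of .99 accompanies a double-digit item separation index (rounding of the reliability makes the exact back-calculation imprecise at that end of the scale).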
For instance, question 2.8 (b = 2.92) was more difficult than any student's ability, as there was no participant at that logit level. Also, there was a large item gap between logits − 2 and − 4, meaning that subjects whose abilities fall within this gap were not clearly differentiated by the osmosis understanding instrument. Several items clustered around logits 0 and 1 measure similar osmosis understanding levels but come from different item sets. All the infit values were within the acceptable range [0.7–1.3] (Wright and Linacre 1994). The most difficult item (i.e., question 2.8 on the Wright map) was set in an innovative context. The prompt reads "Jessie quickly poured some dilute sugar water on the left side and pure water on the right side of the U-tube so that, initially, the left column was higher than the right one. What will happen to the height of each column?" (see Fig. 4) In a follow-up t test, VA and VB students did not differ significantly in their mean scores on this item (t(638) = 1.143, p = 0.253); only 6.4% of the students responded to this question correctly. Rubrics for students' rationale for their height prediction are given in Table 2. The formation of the rubrics was guided by the idea of knowledge integration (Author 2011), where all possible key ideas for responding to the question were first laid out, and then the levels were assigned based on the linkage of ideas found in the responses. The context of the most difficult question in the osmosis survey—the classical U-tube example The second most difficult question (i.e., question 1.5 on the Wright map) involved another innovative assessment context, in which we replaced the classical U-tube example with a horizontal tube divided by a selectively permeable membrane that is only permeable to water. On each side of the tube, a freely movable piston is held fixed initially (see Fig. 5). 
The rationale for designing such a question was to remove the gravitational effect associated with the U-tube example and direct the respondent's attention solely to the osmosis process between the two compartments divided by the membrane. The question 1.5 prompt is shown in Fig. 5. In a follow-up t test, VA and VB students differed significantly in their mean scores on this item (t(596) = 3.75, p < 0.001); only 12.3% of the students responded to this question correctly. The question prompt for question 1.5 eliciting the differential solvation effect between glucose and sucrose Regarding RQ2-a, confirming our hypothesis, the students in the VB condition demonstrated higher understanding of osmosis than those in the VA condition (M VB = 0.056, SD VB = 0.751 and M VA = − 0.083, SD VA = 0.856, Welch two-sample t(608) = 2.17, p = 0.03, d = 0.17). To answer RQ2-b, a multiple regression analysis was conducted to evaluate how well the student-associated attributes predicted osmosis-understanding level. The final model for the multiple regression included the following predictors (Table 3): class, DCV condition, gender, English as first language, time spent on the knowledge assessment, and time spent on the DCVs, while the dependent variable was the estimated student ability in solving osmosis problems. The model was statistically significant (F(7, 562) = 22.99, p < 0.001). The sample multiple correlation coefficient was .472, indicating that approximately 22.3% of the variance of osmosis understanding in the sample can be accounted for by the linear combination of the predictors. Table 5 shows the coefficients of the predictor variables and their significance levels. We found that science class enrollment, time spent on the assessment, and time spent on the DCVs were all significant predictors at the p < 0.001 level, while English usage at home and gender were significant predictors of student ability at p < 0.05 and 0.01, respectively. 
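The reported effect size d = 0.17 can be reproduced from the group summaries given above; this sketch uses the mean of the two variances as the pooled variance, an assumption that is reasonable here because the group sizes are nearly equal:

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d using the mean of the two variances as the pooled
    variance (assumes roughly equal group sizes)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Group summaries from the text: M_VB = 0.056, SD_VB = 0.751;
# M_VA = -0.083, SD_VA = 0.856. Reproduces the reported d = 0.17.
print(round(cohens_d(0.056, 0.751, -0.083, 0.856), 2))  # 0.17
```

By conventional benchmarks this is a small effect, which is consistent with the "slight edge" interpretation given in the discussion.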
The results indicated that there were other variables affecting student ability in addition to the DCV treatment. In particular, in this multiple regression model with seven predictors, the time spent on the assessment and on the DCVs had significant positive regression weights. This indicates that students who invested more time on both the test and the DCVs were expected to have a higher ability score, after controlling for the other variables in the model. Enrollment in the biology and physics classes as well as English used at home as the first language had significantly negative weights. That is, students from families in which English is the first language or those enrolled in biology or physics were expected to have a lower ability score after controlling for other variables. Gender contributed marginally significantly to students' ability score, with female students expected to have higher scores than male students. It is interesting that when the other predictors are considered together, visualization condition did not significantly contribute to the multiple regression model. Table 5 Summary of multiple regression analysis from the predictors for student ability To answer RQ3, first of all, students in the VA group perceived the DCVs to be more helpful than those in the VB group (Mann-Whitney U test, z = − 6.055, p < 0.001). The finding resonated with students' reasoning in the open-ended question responses, as discussed below. According to the log data, time spent on the osmosis assessment ranged from 60 to 22,411 s. Figure 6 illustrates the summative navigation behavior of students majoring in biology in the VB (Fig. 6—top) and VA (Fig. 6—bottom) groups, respectively. Collectively, the students jumped back (red line) to the animation in both situations, and the density of the red lines in the VA group is higher than that in the VB group. 
This implies that the students who viewed the DCVs after finishing the assessment revisited previous question steps more frequently than those who viewed the DCVs first. Note that students could not change the answers they had submitted, and they could not jump back to the DCVs if they had not submitted their response to each step (i.e., 3.1–3.5). The summative navigation behavior of the VB group (top) and the VA group (bottom). Numbers represent the steps on the survey. Steps 1 and 2 are background information items; DCV is the step where students view the visualization. The five osmosis assessment scenarios correspond to steps 3.1–3.5. The white line represents the normal-order sequence. The red line represents the jump-back sequence. The yellow line represents the normal-jump sequence. The size of a node is proportional to the frequency with which the particular step was visited by respondents There were 26 students (6 in the VB group and 20 in the VA group) who visited the DCVs at least twice and reported the visualizations to be useful (see the supplementary material for the logging behavior of the 26 students). We identified student 44394 from VB and student 44821 from VA, who received similar scores, and compared their linear navigation behavior during item review (see Fig. 7 and more detailed webpage information in the supplementary material). With the color-coded linear navigation bar, we can tell that even though student 44821 spent a longer time reviewing the items, when s/he jumped back to the DCV, this student spent less than 10 s reviewing it, just like student 44394. These two visualizations of the log data explored the potential of communicating navigation behavior with advanced data analytics. The linear navigation behavior of students. 
A more interactive version of this summary of data analytics can be retrieved from the link: http://shiyanjiang.com/shan/ Many students in the VA group perceived that the DCVs could have been helpful for them, for instance: I would have been able to answer …correctly after seeing a 3D representation of 'free' water molecules in solutions containing larger solute particles versus those containing smaller solute molecules. Some students from the VB group found certain DCVs to be particularly helpful for conceptual understanding but noted that they did not necessarily help them answer the questions: I already knew 3 of them (DCVs)…water molecules tend to conglomerate around larger organic molecules was a good reminder. … it didn't help as much visualizing the water movement in the stomach questions. Many students in VB were either neutral or negative about the helpfulness of the DCVs because, as they reflected, they relied more on their prior knowledge than on these short, basic DCVs. We speculate that the perceived helpfulness would be enhanced if students were allowed to freely retrieve the DCVs while responding to the osmosis assessment. The lower perceived helpfulness could also be due to the fact that students were not allowed to revisit the DCVs and change their answers. Some preferred narration or audio accompanying the DCVs. For example: I already knew about osmosis…I didn't need them (DCVs) to answer questions. I wish someone would explain what is happening while the video is playing. Their feedback speaks to the diverse learning preferences of students, reflected in their perceived helpfulness of adopting DCVs in the assessment. Our analysis showed that the assessment instrument with the inclusion of DCVs in the assessment items demonstrated acceptable reliability and high construct validity. The most difficult item required students to reason in reverse in order to predict the movement of the solution. 
In order to answer this question correctly, students would need to critically consider the variables given in the target system, analyze the dynamic nature of molecular movement, and then apply their understanding at the macroscopic level to the potential impact on microscopic mechanisms (for more details about student reasoning, see Zhang et al. 2015). Therefore, the integration of DCVs in the osmosis assessment in VB did not provide students with a better chance of answering this innovative assessment item. This is probably because the DCVs stressed microscopic interactions rather than introducing explicit connections between the macroscopic and microscopic levels. We also found that the VB group performed significantly better on the second most difficult question, which was directly related to our DCVs. The idea of designing question 1.5 originated from our faculty research meetings, where we noted that conventional assessments of osmosis do not consider the differential solvation effect of the solute. The DCVs portrayed the differential ability of sugar-water bonding during solvation, and the significantly better performance on this item is indicative of the significant effect of the DCV treatment delivered before students answered their osmosis questions. The results from the Welch two-sample t test showed that the DCVs played a significant role when students were completing the assessment. First, the students who viewed the DCVs first (i.e., the VB group) outperformed those who viewed them later (i.e., the VA group). This result is remarkable, considering that the DCVs took only about 3 min to complete, whereas the average amount of time a student needed to finish the osmosis assessment portion of the entire survey was 47 min. The finding resonates with Kühl et al.'s (2011) study, in which the dynamic visualization condition outperformed the text-only condition. 
In addition, the time spent on watching the DCVs (T DCV) contributed significantly to students' osmosis understanding. This result resonates with O'Day's study, in which learners who had more exposure to the dynamic visualization outperformed those with less exposure (O'Day 2006). However, even though the t test result suggests that the VB group performed significantly better than its counterpart, the mean ability scores of each group show that the winning group had only a slight edge. That is, the average ability score is 0.056 logits for the VB group as opposed to − 0.083 logits for the VA group, within a wide range from − 3.34 to 2.57. That is why, when multiple predictors were considered, the DCV treatment became a non-significant predictor in the model. We did not expect a wide score gap between the groups, given that students from the VB group spent only 180.6 s, on average, watching the animation. However, the results do show the potential of integrating DCVs in science education to enhance learning, and our future work should focus on the nature of the impact of DCVs on student performance and how they could be integrated in an efficient manner. In what follows, we describe how students from the VB group interacted with the DCVs. Students in the VB group did come back to the DCVs during the assessment; however, counterintuitively, we found the reviewing pattern of the VA group more intriguing than the DCV usage pattern of the VB group. Notice that while the VA group reviewed their responses to the assessment, they revisited the visualization section and then resumed reviewing items. This observation might explain why the students in the VA group perceived that the DCVs could have been more helpful than did those in the VB group. The result echoes students' feedback in the open-ended responses for the VA and VB groups. 
Many students in the VB group viewed the DCVs more critically (e.g., regarding design features) than those in the VA group, which might have contributed to their lower perceived helpfulness score on the Likert-scale item. Sometimes students came back to the DCVs directly from certain assessment items during their review. This suggests that students may have been confused about the knowledge component in the item and realized that the video could provide relevant information. In addition, we found that most students who came back to the DCVs did not perform well on the assessment. This provides further evidence that the DCVs might offer additional self-paced learning opportunities for those who were academically underrepresented in science learning. The low perceived helpfulness in students' feedback also resonates with Tversky and colleagues' review study, which found that animated graphics are beneficial mainly for participants with lower spatial ability (Tversky et al. 2002). There are several limitations to this study: (1) The survey was administered only once to capture the direct impact of DCVs on osmosis performance across the two sections; students did not take pre- and posttests, so within-group comparisons could not be made statistically. (2) There was some constructive feedback on the features of the DCVs, such as adding an audio component, narration over the interaction, and markings for the particles in the visualization. (3) The four DCV clips created by our research team were all embedded in one step on WISE, so we were not able to correlate particular navigation behaviors with the different features and design purposes of each DCV. (4) Similarly, the respondents were not able to revisit the DCVs in the midst of answering the assessment items, which limited our ability to capture participants' intentional navigation behavior in revisiting the DCVs in search of useful clues before moving on to the next question. 
Significance of the study and future work After iterative validation of the assessment instrument on osmosis, the psychometric analyses showed the innovative osmosis survey to be valid and reliable. Educators and researchers interested in eliciting students' deeper understanding of osmosis can administer the survey to learn where students stand before teaching the subject matter, and then engage in curricular design that specifically addresses the gaps found in their understanding. Furthermore, we were able to study students' navigation behavior using current data analytics tools to decipher the underlying messages conveyed by the logging data. This application is critical for visualizing and communicating the dynamics of participant-DCV interactions to interested audiences. The findings suggest that the integration of short DCVs has a positive impact on students' performance on the osmosis assessment. Instructors in higher education are encouraged to incorporate DCVs in their (formative) assessments to elicit students' deeper understanding of microscopic, molecular-level reactions. Some modification of the features and operational design of embedding DCVs in the assessment is expected to improve participants' perceptions of the helpfulness of adopting dynamic visualizations in their assessment instrument. Future research on navigation behavior around DCVs could focus on enhancing the planning of data analytics to capture more subtle DCV-usage behavior of respondents, especially when the osmosis assessment is administered via a technology-enhanced environment. Brunye, T, Rapp, DN, Taylor, HA (2004). Building mental models of multimedia procedures: Implications for memory structure and content. In Proceedings of the 26th Annual Meeting of the Cognitive Science Society. Buckley, BC, & Quellmalz, ES (2013). 
Supporting and assessing complex biology learning with computer-based simulations and representations. In Multiple representations in biological education, (pp. 247–267). Dordrecht: Springer https://doi.org/10.1007/978-94-007-4192-8_14. Byrne, MD, Catrambone, R, Stasko, JT. (1999). Evaluating animations as student aids in learning computer algorithms. Computers and Education, 33(4), 253–278 https://doi.org/10.1016/S0360-1315(99)00023-8. Chandler, P, & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332 https://doi.org/10.1207/s1532690xci0804_2. Chiu, JL, & Linn, MC. (2014). Supporting knowledge integration in chemistry with a visualization-enhanced inquiry unit. Journal of Science Education and Technology, 23(1), 37–58 https://doi.org/10.1007/s10956-013-9449-5. Cook, MP. (2006). Visual representations in science education: the influence of prior knowledge and cognitive load theory on instructional design principles. Science Education, 90(6), 1073–1091 https://doi.org/10.1002/sce.20164. Fisher, KM, Williams, KS, Lineback, JE. (2011). Osmosis and diffusion conceptual assessment. CBE Life Sciences Education, 10(4), 418–429 https://doi.org/10.1187/cbe.11-04-0038. Friedler, Y, Amir, R, Tamir, P. (1987). High school students' difficulties in understanding osmosis. International Journal of Science Education, 9(5), 541–551 https://doi.org/10.1080/0950069870090504. Glynn, SM. (2012). International assessment: a Rasch model and teachers' evaluation of TIMSS science achievement items. Journal of Research in Science Teaching, 49(10), 1321–1344. Hmelo-Silver, CE, & Pfeffer, MG. (2004). Comparing expert and novice understanding of a complex system from the perspective of structures, behaviors, and functions. Cognitive Science, 28, 127–138. Höffler, TN, & Leutner, D. (2007). Instructional animation versus static pictures: a meta-analysis. 
Learning and Instruction, 17, 722–738 http://doi.org/10.1016/j.learninstruc.2007.09.013. Hwang, I, Tam, M, Lam, SL, Lam, P. (2012). Review of use of animation as a supplementary learning material of physiology content in four academic years. Electronic Journal of E-Learning, 10(4), 368–377. Jensen, MS, Wilcox, KJ, Hatch, JT. (1996). A computer-assisted instruction unit on diffusion and osmosis with a conceptual change design. Journal of Computers in Mathematics and Science Teaching, 15(1–2), 49–64. Kehoe, C, Stasko, J, Taylor, A. (2001). Rethinking the evaluation of algorithm animations as learning aids. International Journal of Human-Computer Studies, 54(2), 265–284 https://doi.org/10.1006/ijhc.2000.0409. Kim, S, Yoon, M, Whang, S-M, Tversky, B, Morrison, JB. (2007). The effect of animation on comprehension and interest. Journal of Computer Assisted Learning, 23(3), 260–270 https://doi.org/10.1111/j.1365-2729.2006.00219.x. Kramer, EM, & Myers, DR. (2012). Five popular misconceptions about osmosis. American Journal of Physics, 84, 694–699. Kühl, T, Scheiter, K, Gerjets, P, Gemballa, S. (2011). Can differences in learning strategies explain the benefits of learning from static and dynamic visualizations? Computers & Education, 56(1), 176–187 https://doi.org/10.1016/j.compedu.2010.08.008. Levy, D. (2013). How dynamic visualization technology can support molecular reasoning. Journal of Science Education and Technology, 22(5), 702–717 https://doi.org/10.1007/s10956-012-9424-6. Linacre, JM (2012). Winsteps® Rasch measurement computer program user's guide. Beaverton: Winsteps.com http://www.winsteps.com/index.htm. Retrieved 14 Feb 2013. Linn, MC, & Eylon, B-S (2011). Science learning and instruction: taking advantage of technology to promote knowledge integration. New York: Routledge. Marbach-Ad, G, Rotbain, Y, Stavy, R. (2008). Using computer animation and illustration activities to improve high school students' achievement in molecular genetics. 
Journal of Research in Science Teaching, 45(3), 273–292 https://doi.org/10.1002/tea.20222. Mayer, RE (2001). Multimedia learning. New York: Cambridge University Press. McElhaney, KW, Chang, H-Y, Chiu, JL, Linn, MC. (2015). Evidence for effective uses of dynamic visualisations in science curriculum materials. Studies in Science Education, 51(1), 49–85. Meir, E, Perry, J, Stal, D, Maruca, S, Klopfer, E. (2005). How effective are simulated molecular-level experiments for teaching diffusion and osmosis? Cell Biology Education, 4(3), 235–248 https://doi.org/10.1187/cbe.04-09-0049. Merchant, Z, Goetz, ET, Keeney-Kennicutt, W, Cifuentes, L, Kwok, O, Davis, TJ. (2013). Exploring 3-D virtual reality technology for spatial ability and chemistry achievement. Journal of Computer Assisted Learning, 29(6), 579–590 http://doi.org/10.1111/jcal.12018. National Research Council (2014). Developing assessments for the next generation science standards. Committee on developing assessments of science proficiency in K-12. Board on testing and assessment and board on science education. In JW Pellegrino, MR Wilson, JA Koenig, AS Beatty (Eds.), Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press. National Research Council (NRC) (2012). A framework for K-12 science education: practices, crosscutting concepts, and core ideas. Washington, DC: National Academies Press. NGSS Lead States (2013). Next generation science standards: for states, by states. Washington: The National Academies Press. O'Day, DH. (2006). Animated cell biology: a quick and easy method for making effective, high-quality teaching animations. CBE-Life Sciences Education, 5, 255–263. Odom, AL. (1995). Secondary and college biology students' misconceptions about diffusion and osmosis. The American Biology Teacher, 57(7), 409–415 https://doi.org/10.2307/4450030. Odom, AL, & Barrow, LH. (1995). 
Development and application of a two-tier diagnostic test measuring college biology students' understanding of diffusion and osmosis after a course of instruction. Journal of Research in Science Teaching, 32(1), 45–61 https://doi.org/10.1002/tea.3660320106. Odom, AL, & Barrow, LH. (2007). High school biology students' knowledge and certainty about diffusion and osmosis concepts. School Science and Mathematics, 107(3), 94–101 https://doi.org/10.1111/j.1949-8594.2007.tb17775.x. Pedrosa, M, & Dias, M. (2000). Chemistry textbook approaches to chemical equilibrium and student alternative conceptions. Chemistry Education Research and Practice, 1 https://doi.org/10.1039/A9RP90024A. Quellmalz, ES, & Pellegrino, JW. (2009). Technology and testing. Science, 323(5910), 75–79 http://doi.org/10.1126/science.1168046. Quellmalz, ES, Timms, MJ, Silberglitt, MD, Buckley, BC. (2012). Science assessments for all: Integrating science simulations into balanced state science assessment systems. Journal of Research in Science Teaching, 49(3), 363–393 http://doi.org/10.1002/tea.21005. Rundgren, C-J, & Tibell, LAE. (2010). Critical features of visualizations of transport through the cell membrane—an empirical study of upper secondary and tertiary students' meaning-making of a still image and an animation. International Journal of Science and Mathematics Education, 8(2), 223–246 https://doi.org/10.1007/s10763-009-9171-1. Ryoo, K, & Bedell, K. (2017). The effects of visualizations on linguistically diverse students' understanding of energy and matter in life science. Journal of Research in Science Teaching, 54(10), 1274–1301 https://doi.org/10.1002/tea.21405. Ryoo, K, & Linn, MC. (2012). Can dynamic visualizations improve middle school students' understanding of energy in photosynthesis? Journal of Research in Science Teaching, 49(2), 218–243 http://doi.org/10.1002/tea.21003. Ryoo, K, & Linn, MC. (2015). Designing and validating assessments of complex thinking in science. 
Theory Into Practice, 0(ja), 0 http://doi.org/10.1080/00405841.2015.1044374. Sanger, MJ, Brecheisen, DM, Hynek, BM. (2001). Can computer animations affect college biology students' conceptions about diffusion & osmosis? The American Biology Teacher, 63(2), 104–109 https://doi.org/10.2307/4451051. Savec, VF, Vrtacnik, M, Gilbert, JK (2005). Evaluating the educational value of molecular structure representations. In JK Gilbert (Ed.), Visualization in science education, (pp. 269–297). Dordrecht: Springer. https://doi.org/10.1007/1-4020-3613-2_14. Scalise, K, Timms, M, Moorjani, A, Clark, L, Holtermann, K, Irvin, PS. (2011). Student learning in science simulations: design features that promote learning gains. Journal of Research in Science Teaching, 48(9), 1050–1078 http://doi.org/10.1002/tea.20437. Shen, J., Liu, O., & Sung, S. (2014). Designing interdisciplinary assessments in sciences for college students: An example on osmosis. International Journal of Science Education, 36(11), 1773-1793. doi:10.1080/09500693.2013.879224. Shen, J., Sung, S., & Zhang, D. (2015). Toward an analytic framework of interdisciplinary reasoning and communication (IRC) processes in science. International Journal of Science Education, 37(17), 2809–2835. https://doi.org/10.1080/09500693.2015.1106026. Shen, J., & Linn, M. C. (2011). A technology-enhanced unit of modeling static electricity: Integrating scientific explanations and everyday observations. International Journal of Science Education, 33(12), 1597–1623. https://doi.org/10.1080/09500693.2010.514012. Smetana, LK, & Bell, RL. (2012). Computer simulations to support science instruction and learning: a critical review of the literature. International Journal of Science Education, 34(9), 1337–1370 https://doi.org/10.1080/09500693.2011.605182. Sung, S., Shen, J., Stanger-Hall, K. F., Wiegert, C., Wan-I Li, Robertson, T., & Brown, S. (2015). 
Toward Interdisciplinary Perspectives: Using Osmotic Pressure as an Example for Analyzing Textbook Explanations. Journal of College Science Teaching, 44(4), 76–87. Tasker, RF, & Dalton, RM. (2006). Research into practice: visualisation of the molecular world using animations. Chemistry Education Research and Practice, 7, 141–159. Tversky, B, Morrison, JB, Betrancourt, M. (2002). Animation: can it facilitate? International Journal of Human–Computer Studies, 57, 247–262. Wieman, CE, Adams, WK, Perkins, KK. (2008). PhET: simulations that enhance learning. Science, 322(5992), 682–683. Wright, B, & Linacre, JM. (1994). Reasonable mean-square fit values. Rasch Measurement Transactions, 8, 370. Wu, H-C, Yeh, T-K, Chang, C-Y. (2010). The design of an animation-based test system in the area of Earth sciences. British Journal of Educational Technology, 41(3), E53–E57 https://doi.org/10.1111/j.1467-8535.2009.00977.x. Xie, Q, & Pallant, A (2011). The molecular workbench software: an innovative dynamic modeling tool for nanoscience education. In MS Khine, IM Saleh (Eds.), Models and modeling: cognitive tools for scientific enquiry, (pp. 121–132). New York: Springer. Xie, Q, & Tinker, R. (2006). Molecular dynamics simulations of chemical reactions for use in education. Journal of Chemical Education, 83(1), 77 https://doi.org/10.1021/ed083p77. Yarden, H, & Yarden, A. (2010). Learning using dynamic and static visualizations: students' comprehension, prior knowledge and conceptual status of a biotechnological method. Research in Science Education, 40(3), 375–402 https://doi.org/10.1007/s11165-009-9126-0. Zhang, D.M., & Shen, J. (2015). Disciplinary foundations for solving interdisciplinary scientific problems. International Journal of Science Education. 37 (15), 2555-2576. 
Shannon Hsianghan-Huang Sung: Assistant Professor, Education Department, Spelman College, 350 Spelman Lane, Atlanta, GA, 30314, USA. Ji Shen: Associate Professor, Department of Teaching and Learning, University of Miami, 5202 University Drive, Coral Gables, FL, 33124, USA. Shiyan Jiang & Guanhua Chen: Doctoral Students, Department of Teaching and Learning, University of Miami, 5202 University Drive, Coral Gables, FL, 33124, USA. GC performed the Welch t test and multiple regression to help identify the effectiveness of the DCVs and the predictors contributing to student ability on the osmosis assessment. SJ performed the Mann-Whitney U test to determine students' perceptions toward the DCVs; she also assisted in visualizing student log data and discussing students' navigation behavior when reviewing items. JS was the director of the research project and oversaw the writing process of this research paper; he engaged in assessment generation, validation, and theoretical framework identification, and contributed to the review and discussion of the manuscript. SS led the research, including assessment item design, administration, data collection, Rasch-PCA data analysis, instrument validation, and literature review; she also coordinated the findings into the conclusions and discussion. All authors read and approved the final manuscript. Correspondence to Shannon Hsianghan-Huang Sung. Sung, S.H., Shen, J., Jiang, S. et al. Comparing the effects of dynamic computer visualization on undergraduate students' understanding of osmosis with randomized posttest-only control group design. RPTEL 12, 26 (2017) doi:10.1186/s41039-017-0067-3. Accepted: 07 December 2017. Keywords: Dynamic computer visualization; Navigation behavior.
Vandermonde matrix In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an $(m+1)\times (n+1)$ matrix $V=V(x_{0},x_{1},\cdots ,x_{m})={\begin{bmatrix}1&x_{0}&x_{0}^{2}&\dots &x_{0}^{n}\\1&x_{1}&x_{1}^{2}&\dots &x_{1}^{n}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{m}&x_{m}^{2}&\dots &x_{m}^{n}\end{bmatrix}}$ with entries $V_{i,j}=x_{i}^{j}$, the jth power of the number $x_{i}$, for all zero-based indices $i$ and $j$.[1] Most authors define the Vandermonde matrix as the transpose of the above matrix.[2][3] The determinant of a square Vandermonde matrix (when $n=m$) is called a Vandermonde determinant or Vandermonde polynomial. Its value is: $\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$ This is non-zero if and only if all $x_{i}$ are distinct (no two are equal), making the Vandermonde matrix invertible. Applications The polynomial interpolation problem is to find a polynomial $p(x)=a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{n}x^{n}$ which satisfies $p(x_{0})=y_{0},\ldots ,p(x_{m})=y_{m}$ for given data points $(x_{0},y_{0}),\ldots ,(x_{m},y_{m})$. This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix, as follows. 
$V$ computes the values of $p(x)$ at the points $x=x_{0},\ x_{1},\dots ,\ x_{m}$ via a matrix multiplication $Va=y$, where $a=(a_{0},\ldots ,a_{n})$ is the vector of coefficients and $y=(y_{0},\ldots ,y_{m})=(p(x_{0}),\ldots ,p(x_{m}))$ is the vector of values (both written as column vectors): ${\begin{bmatrix}1&x_{0}&x_{0}^{2}&\dots &x_{0}^{n}\\1&x_{1}&x_{1}^{2}&\dots &x_{1}^{n}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{m}&x_{m}^{2}&\dots &x_{m}^{n}\end{bmatrix}}\cdot {\begin{bmatrix}a_{0}\\a_{1}\\\vdots \\a_{n}\end{bmatrix}}={\begin{bmatrix}p(x_{0})\\p(x_{1})\\\vdots \\p(x_{m})\end{bmatrix}}.$ If $n=m$ and $x_{0},\dots ,\ x_{n}$ are distinct, then V is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given V and y, one can find the required $p(x)$ by solving for its coefficients $a$ in the equation $Va=y$:[4] $a=V^{-1}y$. That is, the map from coefficients to values of polynomials is a bijective linear mapping with matrix V, and the interpolation problem has a unique solution. This result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials. In statistics, the equation $Va=y$ means that the Vandermonde matrix is the design matrix of polynomial regression. In numerical analysis, solving the equation $Va=y$ naïvely by Gaussian elimination results in an algorithm with time complexity $O(n^{3})$. Exploiting the structure of the Vandermonde matrix, one can use Newton's divided differences method[5] (or the Lagrange interpolation formula[6][7]) to solve the equation in $O(n^{2})$ time, which also gives the UL factorization of $V^{-1}$. The resulting algorithm produces extremely accurate solutions, even if $V$ is ill-conditioned.[2] (See polynomial interpolation.) 
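The interpolation setup $Va=y$ above can be sketched numerically; the data points below are illustrative, and `numpy` is assumed to be available:

```python
import numpy as np

# Sketch of the interpolation problem Va = y with illustrative data points.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 11.0])

# Vandermonde matrix with increasing powers: V[i, j] = x_i ** j
V = np.vander(x, increasing=True)
a = np.linalg.solve(V, y)  # coefficients a_0, a_1, a_2

# The recovered polynomial p(x) = 1 - x + 3x^2 reproduces the data exactly.
p_at_nodes = np.polynomial.polynomial.polyval(x, a)
assert np.allclose(a, [1.0, -1.0, 3.0])
assert np.allclose(p_at_nodes, y)
```

Note that `np.vander` defaults to decreasing powers, so `increasing=True` is needed to match the $V_{i,j}=x_{i}^{j}$ convention used here.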
The Vandermonde determinant is used in the representation theory of the symmetric group.[8] When the values $x_{i}$ belong to a finite field, the Vandermonde determinant is also called the Moore determinant, and has properties which are important in the theory of BCH codes and Reed–Solomon error correction codes. The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the $x_{i}$ are chosen to be nth roots of unity. The fast Fourier transform computes the product of this matrix with a vector in $O(n\log ^{2}n)$ time.[9] In the physical theory of the quantum Hall effect, the Vandermonde determinant shows that the Laughlin wavefunction with filling factor 1 is equal to a Slater determinant. This is no longer true for filling factors different from 1 in the fractional quantum Hall effect. In the geometry of polyhedra, the Vandermonde matrix gives the normalized volume of arbitrary $k$-faces of cyclic polytopes. Specifically, if $F=C_{d}(t_{i_{1}},\dots ,t_{i_{k+1}})$ is a $k$-face of the cyclic polytope $C_{d}(T)\subset \mathbb {R} ^{d}$ corresponding to $T=\{t_{1}<\cdots <t_{N}\}\subset \mathbb {R} $, then $\mathrm {nvol} (F)={\frac {1}{k!}}\prod _{1\leq m<n\leq k+1}{(t_{i_{n}}-t_{i_{m}})}.$ Determinant The determinant of a square Vandermonde matrix is called a Vandermonde polynomial or Vandermonde determinant. Its value is the polynomial $\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i})$ which is non-zero if and only if all $x_{i}$ are distinct. The Vandermonde determinant was formerly sometimes called the discriminant, but in current terminology the discriminant of a polynomial $p(x)=(x-x_{0})\cdots (x-x_{n})$ is the square of the Vandermonde determinant of the roots $x_{i}$. The Vandermonde determinant is an alternating form in the $x_{i}$, meaning that exchanging two $x_{i}$ changes the sign, and $\det(V)$ thus depends on order for the $x_{i}$. 
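The product formula for the determinant can be checked numerically on a small example (the points are arbitrary; `numpy` assumed):

```python
import numpy as np
from itertools import combinations
from math import prod

# Check det(V) = prod_{i<j} (x_j - x_i) on illustrative points.
x = [2.0, 3.0, 5.0, 7.0]
V = np.vander(x, increasing=True)

# combinations preserves order, so each pair is (x_i, x_j) with i < j.
expected = prod(xj - xi for xi, xj in combinations(x, 2))
assert np.isclose(np.linalg.det(V), expected)  # both equal 240
```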
By contrast, the discriminant $\det(V)^{2}$ does not depend on any order, so that Galois theory implies that the discriminant is a polynomial function of the coefficients of $p(x)$. The determinant formula is proved below in three ways. The first uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof is based on the linear algebra concepts of change of basis in a vector space and the determinant of a linear map. In the process, it computes the LU decomposition of the Vandermonde matrix. The third proof is more elementary but more complicated, using only elementary row and column operations. First proof: polynomial properties By the Leibniz formula, $\det(V)$ is a polynomial in the $x_{i}$, with integer coefficients. All entries of the $i$th column (zero-based) have total degree $i$. Thus, again by the Leibniz formula, all terms of the determinant have total degree $0+1+2+\cdots +n={\frac {n(n+1)}{2}};$ (that is the determinant is a homogeneous polynomial of this degree). If, for $i\neq j$, one substitutes $x_{i}$ for $x_{j}$, one gets a matrix with two equal rows, which thus has a zero determinant. Thus, by the factor theorem, $x_{j}-x_{i}$ is a divisor of $\det(V)$. By the unique factorization property of multivariate polynomials, the product of all $x_{j}-x_{i}$ divides $\det(V)$, that is $\det(V)=Q\prod _{0\leq i<j\leq n}(x_{j}-x_{i}),$ where $Q$ is a polynomial. As the product of all $x_{j}-x_{i}$ and $\det(V)$ have the same degree $n(n+1)/2$, the polynomial $Q$ is, in fact, a constant. 
This constant is one, because the product of the diagonal entries of $V$ is $x_{1}x_{2}^{2}\cdots x_{n}^{n}$, which is also the monomial that is obtained by taking the first term of all factors in $\textstyle \prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$ This proves that $\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$ Second proof: linear maps Let F be a field containing all $x_{i},$ and $P_{n}$ the F vector space of the polynomials of degree less than or equal to n with coefficients in F. Let $\varphi :P_{n}\to F^{n+1}$ be the linear map defined by $p(x)\mapsto (p(x_{0}),p(x_{1}),\ldots ,p(x_{n}))$. The Vandermonde matrix is the matrix of $\varphi $ with respect to the canonical bases of $P_{n}$ and $F^{n+1}.$ Changing the basis of $P_{n}$ amounts to multiplying the Vandermonde matrix by a change-of-basis matrix M (from the right). This does not change the determinant, if the determinant of M is 1. The polynomials $1$, $x-x_{0}$, $(x-x_{0})(x-x_{1})$, …, $(x-x_{0})(x-x_{1})\cdots (x-x_{n-1})$ are monic of respective degrees 0, 1, …, n. Their matrix on the monomial basis is an upper-triangular matrix U (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of $\varphi $ on this new basis is ${\begin{bmatrix}1&0&0&\ldots &0\\1&x_{1}-x_{0}&0&\ldots &0\\1&x_{2}-x_{0}&(x_{2}-x_{0})(x_{2}-x_{1})&\ldots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}-x_{0}&(x_{n}-x_{0})(x_{n}-x_{1})&\ldots &(x_{n}-x_{0})(x_{n}-x_{1})\cdots (x_{n}-x_{n-1})\end{bmatrix}}$. Thus the Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries. This proves the desired equality. Moreover, one gets the LU decomposition of V as $V=LU^{-1}$. Third proof: row and column operations This third proof is based on the fact that if one adds to a column of a matrix the product by a scalar of another column then the determinant remains unchanged. 
So, by subtracting from each column (except the first) the preceding column multiplied by $x_{0}$, the determinant is not changed. (These subtractions must be done starting from the last column, so that one subtracts a column that has not yet been changed.) This gives the matrix ${\begin{bmatrix}1&0&0&0&\cdots &0\\1&x_{1}-x_{0}&x_{1}(x_{1}-x_{0})&x_{1}^{2}(x_{1}-x_{0})&\cdots &x_{1}^{n-1}(x_{1}-x_{0})\\1&x_{2}-x_{0}&x_{2}(x_{2}-x_{0})&x_{2}^{2}(x_{2}-x_{0})&\cdots &x_{2}^{n-1}(x_{2}-x_{0})\\\vdots &\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}-x_{0}&x_{n}(x_{n}-x_{0})&x_{n}^{2}(x_{n}-x_{0})&\cdots &x_{n}^{n-1}(x_{n}-x_{0})\\\end{bmatrix}}$ Applying the Laplace expansion formula along the first row, we obtain $\det(V)=\det(B)$, with $B={\begin{bmatrix}x_{1}-x_{0}&x_{1}(x_{1}-x_{0})&x_{1}^{2}(x_{1}-x_{0})&\cdots &x_{1}^{n-1}(x_{1}-x_{0})\\x_{2}-x_{0}&x_{2}(x_{2}-x_{0})&x_{2}^{2}(x_{2}-x_{0})&\cdots &x_{2}^{n-1}(x_{2}-x_{0})\\\vdots &\vdots &\vdots &\ddots &\vdots \\x_{n}-x_{0}&x_{n}(x_{n}-x_{0})&x_{n}^{2}(x_{n}-x_{0})&\cdots &x_{n}^{n-1}(x_{n}-x_{0})\\\end{bmatrix}}$ As all the entries in the $i$-th row of $B$ have a factor of $x_{i+1}-x_{0}$, one can take these factors out and obtain $\det(V)=(x_{1}-x_{0})(x_{2}-x_{0})\cdots (x_{n}-x_{0}){\begin{vmatrix}1&x_{1}&x_{1}^{2}&\cdots &x_{1}^{n-1}\\1&x_{2}&x_{2}^{2}&\cdots &x_{2}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}&x_{n}^{2}&\cdots &x_{n}^{n-1}\\\end{vmatrix}}=\prod _{1\leq i\leq n}(x_{i}-x_{0})\det(V')$, where $V'$ is a Vandermonde matrix in $x_{1},\ldots ,x_{n}$. Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of $\det(V)$ as the product of all $x_{j}-x_{i}$ such that $i<j$. Rank of the Vandermonde matrix • An m × n rectangular Vandermonde matrix such that m ≤ n has rank m if and only if all $x_{i}$ are distinct. • An m × n rectangular Vandermonde matrix such that m ≥ n has rank n if and only if there are n of the $x_{i}$ that are distinct. 
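The reduction step of this third proof can be checked numerically for one iteration (points chosen arbitrarily; `numpy` assumed):

```python
import numpy as np
from math import prod

# Verify det(V) = prod_{i>=1} (x_i - x_0) * det(V') for one reduction step,
# where V' is the Vandermonde matrix in x_1, ..., x_n.
x = [1.0, 2.0, 4.0, 8.0]
V = np.vander(x, increasing=True)
V_prime = np.vander(x[1:], increasing=True)

lhs = np.linalg.det(V)
rhs = prod(xi - x[0] for xi in x[1:]) * np.linalg.det(V_prime)
assert np.isclose(lhs, rhs)
```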
• A square Vandermonde matrix is invertible if and only if the $x_{i}$ are distinct. An explicit formula for the inverse is known (see below).[10][3][11] Inverse Vandermonde matrix As explained above in Applications, the polynomial interpolation problem for $p(x)=a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{n}x^{n}$ satisfying $p(x_{0})=y_{0},\ldots ,p(x_{n})=y_{n}$ is equivalent to the matrix equation $Va=y$, which has the unique solution $a=V^{-1}y$. There are other known formulas which solve the interpolation problem, which must be equivalent to the unique $a=V^{-1}y$, so they must give explicit formulas for the inverse matrix $V^{-1}$. In particular, Lagrange interpolation shows that the columns of the inverse matrix $V^{-1}={\begin{bmatrix}1&x_{0}&\dots &x_{0}^{n}\\\vdots &\vdots &&\vdots \\[.5em]1&x_{n}&\dots &x_{n}^{n}\end{bmatrix}}^{-1}=L={\begin{bmatrix}L_{00}&\!\!\!\!\cdots \!\!\!\!&L_{0n}\\\vdots &&\vdots \\L_{n0}&\!\!\!\!\cdots \!\!\!\!&L_{nn}\end{bmatrix}}$ are the coefficients of the Lagrange polynomials $L_{j}(x)=L_{0j}+L_{1j}x+\cdots +L_{nj}x^{n}=\prod _{0\leq i\leq n \atop i\neq j}{\frac {x-x_{i}}{x_{j}-x_{i}}}={\frac {f(x)}{(x-x_{j})\,f'(x_{j})}}\,,$ where $f(x)=(x-x_{0})\cdots (x-x_{n})$. This is easily demonstrated: the polynomials clearly satisfy $L_{j}(x_{i})=0$ for $i\neq j$ while $L_{j}(x_{j})=1$, so we may compute the product $VL=[L_{j}(x_{i})]_{i,j=0}^{n}=I$, the identity matrix. Confluent Vandermonde matrices As described before, a Vandermonde matrix describes the linear algebra interpolation problem of finding the coefficients of a polynomial $p(x)$ of degree $n-1$ based on the values $p(x_{1}),\,...,\,p(x_{n})$, where $x_{1},\,...,\,x_{n}$ are distinct points. If $x_{i}$ are not distinct, then this problem does not have a unique solution (and the corresponding Vandermonde matrix is singular). However, if we specify the values of the derivatives at the repeated points, then the problem can have a unique solution. 
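The relation $VL=I$ above means each column of $V^{-1}$ holds the coefficients of one Lagrange polynomial; a small numerical sketch (nodes illustrative, `numpy` assumed):

```python
import numpy as np

# The columns of V^{-1} hold the coefficients of the Lagrange polynomials:
# L_j(x_i) = delta_ij, so evaluating column j at the nodes gives the
# j-th standard basis vector.
x = np.array([0.0, 1.0, 2.0])
V = np.vander(x, increasing=True)
L = np.linalg.inv(V)

for j in range(len(x)):
    values = np.polynomial.polynomial.polyval(x, L[:, j])
    assert np.allclose(values, np.eye(len(x))[:, j])
```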
For example, the problem ${\begin{cases}p(0)=y_{1}\\p'(0)=y_{2}\\p(1)=y_{3}\end{cases}}$ where $p(x)=ax^{2}+bx+c$, has a unique solution for all $y_{1},y_{2},y_{3}$, since the three conditions determine $c=y_{1}$, $b=y_{2}$, and $a=y_{3}-y_{2}-y_{1}$. In general, suppose that $x_{1},x_{2},...,x_{n}$ are (not necessarily distinct) numbers, and suppose for simplicity that equal values are adjacent: $x_{1}=\cdots =x_{m_{1}},\ x_{m_{1}+1}=\cdots =x_{m_{2}},\ \ldots ,\ x_{m_{k-1}+1}=\cdots =x_{m_{k}}$ where $m_{1}<m_{2}<\cdots <m_{k}=n,$ and $x_{m_{1}},\ldots ,x_{m_{k}}$ are distinct. Then the corresponding interpolation problem is ${\begin{cases}p(x_{m_{1}})=y_{1},&p'(x_{m_{1}})=y_{2},&\ldots ,&p^{(m_{1}-1)}(x_{m_{1}})=y_{m_{1}},\\p(x_{m_{2}})=y_{m_{1}+1},&p'(x_{m_{2}})=y_{m_{1}+2},&\ldots ,&p^{(m_{2}-m_{1}-1)}(x_{m_{2}})=y_{m_{2}},\\\qquad \vdots &&&\qquad \vdots \\p(x_{m_{k}})=y_{m_{k-1}+1},&p'(x_{m_{k}})=y_{m_{k-1}+2},&\ldots ,&p^{(m_{k}-m_{k-1}-1)}(x_{m_{k}})=y_{m_{k}}.\end{cases}}$ The corresponding matrix for this problem is called a confluent Vandermonde matrix, given as follows. If $1\leq i,j\leq n$, then $m_{\ell }<i\leq m_{\ell +1}$ for a unique $0\leq \ell \leq k-1$ (denoting $m_{0}=0$). We let $V_{i,j}={\begin{cases}0&{\text{if }}j<i-m_{\ell },\\[6pt]{\dfrac {(j-1)!}{(j-(i-m_{\ell }))!}}x_{i}^{j-(i-m_{\ell })}&{\text{if }}j\geq i-m_{\ell }.\end{cases}}$ This generalization of the Vandermonde matrix makes it non-singular, so that there exists a unique solution to the system of equations, and it possesses most of the other properties of the Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows. Another way to derive this formula is by taking a limit of the Vandermonde matrix as the $x_{i}$'s approach each other. For example, to get the case of $x_{1}=x_{2}$, subtract the first row from the second in the original Vandermonde matrix, divide by $x_{2}-x_{1}$, and let $x_{2}\to x_{1}$: this yields the corresponding row in the confluent Vandermonde matrix. 
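The worked example at the start of this passage can be sketched directly: writing $p(x)=c_{0}+c_{1}x+c_{2}x^{2}$, the conditions $p(0)$, $p'(0)$, $p(1)$ give three linear equations in the coefficients (the y-values below are illustrative; `numpy` assumed):

```python
import numpy as np

# Confluent system for the worked example p(0)=y1, p'(0)=y2, p(1)=y3,
# with p(x) = c0 + c1*x + c2*x^2; the y-values are illustrative.
y = np.array([1.0, 2.0, 5.0])
C = np.array([
    [1.0, 0.0, 0.0],   # row for p(0)  = c0
    [0.0, 1.0, 0.0],   # row for p'(0) = c1
    [1.0, 1.0, 1.0],   # row for p(1)  = c0 + c1 + c2
])
c = np.linalg.solve(C, y)
assert np.allclose(c, [1.0, 2.0, 2.0])  # p(x) = 1 + 2x + 2x^2
```

The second row is the derivative of a Vandermonde row evaluated at 0, matching the description of confluent rows as derivatives of the original rows.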
This derives the generalized interpolation problem with given values and derivatives as a limit of the original case with distinct points: giving $p(x_{i}),p'(x_{i})$ is similar to giving $p(x_{i}),p(x_{i}+\varepsilon )$ for small $\varepsilon $. Geometers have studied the problem of tracking confluent points along their tangent lines, known as compactification of configuration space. See also • Companion matrix § Diagonalizability • Schur polynomial – a generalization • Alternant matrix • Lagrange polynomial • Wronskian • List of matrices • Moore determinant over a finite field • Vieta's formulas References 1. Roger A. Horn and Charles R. Johnson (1991), Topics in matrix analysis, Cambridge University Press. See Section 6.1. 2. Golub, Gene H.; Van Loan, Charles F. (2013). Matrix Computations (4th ed.). The Johns Hopkins University Press. pp. 203–207. ISBN 978-1-4214-0859-0. 3. Macon, N.; A. Spitzbart (February 1958). "Inverses of Vandermonde Matrices". The American Mathematical Monthly. 65 (2): 95–100. doi:10.2307/2308881. JSTOR 2308881. 4. François Viète (1540-1603), Vieta's formulas, https://en.wikipedia.org/wiki/Vieta%27s_formulas 5. Björck, Å.; Pereyra, V. (1970). "Solution of Vandermonde Systems of Equations". American Mathematical Society. 24 (112): 893–903. doi:10.1090/S0025-5718-1970-0290541-1. S2CID 122006253. 6. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 2.8.1. Vandermonde Matrices". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. 7. Inverse of Vandermonde Matrix (2018), https://proofwiki.org/wiki/Inverse_of_Vandermonde_Matrix 8. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. 
Wikipedia
\begin{document} \title{Monogenic pure cubics} \date{September 2020} \begin{abstract} Let $k\geq 2$ be a square-free integer. We prove that the number of square-free integers $m\in [1,N]$ such that $(k,m)=1$ and ${\mathbb Q}(\sqrt[3]{k^2m})$ is monogenic is $\gg N^{1/3}$ and $\ll N/(\log N)^{1/3-\epsilon}$ for any $\epsilon>0$. Assuming ABC, the upper bound can be improved to $O(N^{(1/3)+\epsilon})$. Let $F$ be the finite field of order $q$ with $(q,3)=1$ and let $g(t)\in F[t]$ be non-constant square-free. We prove unconditionally the analogous result that the number of square-free $h(t)\in F[t]$ such that $\deg(h)\leq N$, $(g,h)=1$ and $F(t,\sqrt[3]{g^2h})$ is monogenic is $\gg q^{N/3}$ and $\ll N^2q^{N/3}$. \end{abstract} \maketitle \section{Introduction} A number field $K$ is called monogenic if its ring of integers $\mathcal{O}_K$ is ${\mathbb Z}[\theta]$ for some $\theta\in\mathcal{O}_K$. Number fields that are fundamental to the development of algebraic number theory, such as quadratic and cyclotomic fields, are all monogenic. Certain questions about monogenic number fields (as well as monogenic orders) are closely related to the so-called discriminant form equations, which have been studied extensively by Evertse, Gy\H{o}ry, and other authors. The reader is referred to \cite{EG15_UE,EG16_DE,Ngu17_OM,BN18_SF,Gaa19_DE} and the references there for many interesting results, including those over positive characteristic fields. A pure cubic field is a number field of the form ${\mathbb Q}(\sqrt[3]{n})$ where $n>1$ is cube-free. In a certain sense, pure cubic fields are the ``next'' family of number fields to investigate after quadratic fields, especially from the computational point of view (for example, see \cite{WCS80_CO,WDS83_AR,SS99_VA}, of which the third paper treats the function field analogue of pure cubic fields).
While every quadratic field is monogenic, many pure cubics are not, and the goal of this paper is to study the density of monogenic pure cubic fields and its function field analogue. For instance, the first naive question is whether the set $$\{\text{cube-free $n>1$ such that ${\mathbb Q}(\sqrt[3]{n})$ is monogenic}\}$$ has zero density. It turns out that the answer is negative thanks to the theorem of Dedekind below. Satisfactory results have been obtained by Bhargava, Shankar, and Wang \cite{BSW_SV}, in which they establish the density of monic integer polynomials of degree $n$ having square-free discriminants and, in a certain sense, the density of monogenic number fields of degree $n$ for any $n>1$. The questions considered in this paper are quite different in nature, since we restrict to the 1-parameter family ${\mathbb Q}(\sqrt[3]{n})$ as well as the family of polynomials $X^3-n$, none of which have square-free discriminant. To see why the above question has a negative answer, we start with the following \cite[p.~35--36]{Mar18_NF}: \begin{theorem}[Dedekind]\label{thm:DedekindZ} Let $n>1$ be a cube-free integer, let $\alpha=\sqrt[3]{n}$, let $K={\mathbb Q}(\alpha)$, and write $n=k^2m$ where $k$ and $m$ are square-free positive integers. We have the following: \begin{itemize} \item If $n\not\equiv\pm 1$ mod $9$ then $\{1,\alpha,\alpha^2/k\}$ is an integral basis of $K$. \item If $n\equiv \pm 1$ mod $9$ then $\{1,\alpha,(k^2\pm k^2\alpha+\alpha^2)/(3k)\}$ is an integral basis of $K$. \end{itemize} \end{theorem} An immediate consequence is that ${\mathbb Q}(\sqrt[3]{n})$ is monogenic when $n>1$ is square-free and $n\not\equiv \pm 1$ mod $9$. In fact this is the \emph{only} case when we have a positive density result. For the remaining cases (i.e. $k>1$ or $n\equiv \pm 1$ mod $9$), the conclusion is in stark contrast with the above.
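The square-free case $k=1$ of Dedekind's theorem is easy to explore numerically. A minimal pure-Python sketch (the helper `is_squarefree` is ours) lists the $n$ for which the first bullet applies, so that $\{1,\alpha,\alpha^2\}$ is an integral basis and ${\mathbb Q}(\sqrt[3]{n})$ is monogenic with $\theta=\alpha$; note that $n\equiv\pm 1$ mod $9$ is not excluded from being monogenic, those $n$ simply land in the harder set $\mathcal{T}_1$ studied below:

```python
def is_squarefree(n):
    """True if no prime square divides n (trial division; fine for small n)."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# By Dedekind's first bullet (k = 1): square-free n > 1 with
# n != +-1 (mod 9) give monogenic pure cubic fields Q(cbrt(n)).
monogenic_by_dedekind = [n for n in range(2, 100)
                         if is_squarefree(n) and n % 9 not in (1, 8)]
```

For example $n=2,3,5,6,7$ all qualify, while $n=10\equiv 1$ mod $9$ is excluded from this easy criterion.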
As a side note, a recent paper of Gassert, Smith, and Stange \cite{GSS19_AF} considers the 1-parameter family of quartic fields given by $X^4-6X^2-\alpha X-3$ and shows that a positive density of them are monogenic. Throughout this paper, for each square-free positive integer $k$, let: $$\mathcal{S}_k=\{\text{square-free $m>0$: $(m,k)=1$, $k^2m\not\equiv \pm 1$ mod $9$, ${\mathbb Q}(\sqrt[3]{k^2m})$ is monogenic}\},$$ and if $(k,3)=1$ let $$\mathcal{T}_k=\{\text{square-free $m>0$: $(m,k)=1$, $k^2m\equiv \pm 1$ mod $9$, ${\mathbb Q}(\sqrt[3]{k^2m})$ is monogenic}\}.$$ From now on, whenever $\mathcal{T}_k$ is mentioned, we tacitly assume the condition that $(k,3)=1$. Our main results for pure cubic number fields are the following: \begin{theorem}\label{thm:unconditional} For every $\epsilon>0$ and square-free integer $k\geq 2$, we have $N^{1/3}\ll_k \vert\mathcal{S}_k\cap [1,N]\vert\ll_{k,\epsilon} N/(\log N)^{1/3-\epsilon}$ as $N\to\infty$. For every square-free $k\geq 1$, we have $N^{1/3}\ll_k\vert\mathcal{T}_k\cap [1,N]\vert\ll_{k,\epsilon} N/(\log N)^{1/3-\epsilon}$ as $N\to\infty$. Consequently, the sets $\mathcal{S}_k$ for $k\geq 2$ and the sets $\mathcal{T}_k$ for $k\geq 1$ have zero density. \end{theorem} A table of monogenic pure cubic fields with discriminant up to $12\cdot 10^6$ has been computed by Ga\'al-Szab\'o \cite{GS10_AN,Gaa19_DE} and it is noted in \cite[p.~111]{Gaa19_DE} that ``the frequency of monogenic fields is decreasing''. Our zero density result illustrates this observation. Further investigations and computations involving integral bases and monogenicity of higher degree pure number fields have been done by Ga\'al-Remete \cite{GR17_IB}. Assuming ABC, we can arrive at the much stronger upper bound: \begin{theorem}\label{thm:1/3+epsilon} Assume that the ABC Conjecture holds. Let $\epsilon>0$ and let $k$ be a square-free positive integer. We have $\vert \mathcal{T}_k\cap [1,N]\vert=O_{\epsilon,k}(N^{(1/3)+\epsilon})$.
If $k\geq 2$, we also have $\vert \mathcal{S}_k\cap [1,N]\vert=O_{\epsilon,k}(N^{(1/3)+\epsilon})$. \end{theorem} \begin{remark} From the lower bound in Theorem~\ref{thm:unconditional}, we have that the exponent $1/3$ in Theorem~\ref{thm:1/3+epsilon} is best possible. It is not clear whether we can replace $N^{(1/3)+\epsilon}$ by some $N^{1/3}f(N)$ where $f(N)$ is dominated by $N^{\epsilon}$ for every $\epsilon>0$. \end{remark} \begin{remark} In principle, we can break $\mathcal{T}_k$ into $\mathcal{T}_k^+=\{m\in \mathcal{T}_k:\ k^2m\equiv 1\bmod 9\}$ and $\mathcal{T}_k^-=\{m\in\mathcal{T}_k:\ k^2m\equiv -1\bmod 9\}$. When the $\pm$ signs are chosen appropriately, all results and arguments for $\mathcal{T}_k$ remain valid for each individual $\mathcal{T}_k^+$ and $\mathcal{T}_k^-$. \end{remark} We now consider the function field setting. For the rest of this section, let $F$ be a finite field of order $q$ and characteristic $p\neq 3$. A polynomial $f(t)\in F[t]$ is called square-free (respectively cube-free) if it is not divisible by the square (respectively cube) of a \emph{non-constant} element of $F[t]$. Every cube-free $f(t)$ can be written uniquely as $f(t)=g(t)^2h(t)$ in which $g(t),h(t)\in F[t]$ are square-free and $g(t)$ is monic. We have the analogue of Dedekind's theorem for $F[t]$: \begin{theorem}[function field Dedekind]\label{thm:DedekindFt} Let $f(t)\in F[t]$ be cube-free, let $\alpha=\sqrt[3]{f}$, $K=F(t,\alpha)$, and let $\mathcal{O}_K$ be the integral closure of $F[t]$ in $K$. Express $f(t)=g(t)^2h(t)$ as above. Then $\{1,\alpha,\alpha^2/g\}$ is a basis of $\mathcal{O}_K$ over $F[t]$. \end{theorem} \begin{proof} The proof is a straightforward adaptation of steps in the proof of Theorem~\ref{thm:DedekindZ} given in \cite[p.~35--36]{Mar18_NF}. \end{proof} As before, $K=F(t,\sqrt[3]{f})$ is called monogenic if $\mathcal{O}_K=F[t,\theta]$ for some $\theta\in \mathcal{O}_K$.
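The square-free polynomials central to these statements are cheap to enumerate by brute force when $q$ is small. A minimal sketch over $F_2$ (so $q=2$ and $p=2\neq 3$; the bit-mask encoding, where bit $i$ of an integer is the coefficient of $t^i$, and the helper names are ours) counts the non-constant square-free polynomials of degree at most $N$:

```python
def pmul(a, b):
    """Multiply polynomials over F_2 (bit i = coefficient of t^i)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, b):
    """Remainder of a modulo b over F_2."""
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def is_squarefree(f):
    """True if no non-constant g satisfies g^2 | f (brute force)."""
    for g in range(2, 1 << ((f.bit_length() + 1) // 2)):
        if pmod(f, pmul(g, g)) == 0:
            return False
    return True

N = 5
# Non-constant square-free polynomials of degree <= N over F_2:
count = sum(1 for f in range(2, 1 << (N + 1)) if is_squarefree(f))
```

The count comes out to $(q-1)q^N = q^{N+1}-q^N$ with $q=2$, in line with the density computation for $\mathcal{U}_1$ below.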
For each monic square-free $g(t)\in F[t]$, let $$\mathcal{U}_g=\{\text{square-free $h\in F[t]:$ $(g,h)=1$, $F(t,\sqrt[3]{g^2h})$ is monogenic}\}.$$ For each positive integer $N$, let $F[t]_{\leq N}$ denote the set of polynomials of degree at most $N$. It is easy to show that there are $q^{N+1}-q^N$ non-constant square-free polynomials in $F[t]_{\leq N}$. Therefore, if we define the density of a subset $A$ of $F[t]$ to be $$\lim_{N\to\infty}\frac{\vert A\cap F[t]_{\leq N}\vert}{\vert F[t]_{\leq N}\vert}$$ (assuming the limit exists), then the set $\mathcal{U}_1$ has density $1-1/q$. As before, this is in stark contrast to the case $\deg(g)>0$: \begin{theorem}\label{thm:Ft setting} Let $g$ be a non-constant monic square-free polynomial in $F[t]$. We have $$q^{N/3}\ll \vert \mathcal{U}_g\cap F[t]_{\leq N}\vert \ll N^2q^{N/3}$$ as $N\to\infty$ where the implied constants depend only on $F$ and $g$. \end{theorem} \begin{remark} In Theorem~\ref{thm:Ft setting}, an analogous upper bound to the number field setting would be $q^{(1/3+\epsilon)N}$. The bound $N^2q^{N/3}$ obtained here is much stronger; this is a typical phenomenon thanks to the uniformity of various results over function fields. \end{remark} We end this section with a brief discussion on the methods of the proofs. As mentioned above, it is well-known that monogenicity is equivalent to the fact that a certain discriminant form equation has a solution in ${\mathbb Z}$ (or $F[t]$ if we are in the function field case). For the questions involving pure cubic fields considered here, we end up with an equation of the form $aX^3+bY^3=c$ where $a$ and $c$ are fixed, $b$ varies, and we ask whether the equation has a solution $(X,Y)$. There are several methods to study these Thue equations \cite{EG15_UE,EG16_DE,Gaa19_DE} and we can effectively bound the number of solutions or the size of a possible solution.
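The flavour of these Thue equations can be seen in a toy brute-force search (pure Python; illustration only, since a genuine treatment needs the effective bounds from the literature just cited; the helper name is ours):

```python
def has_small_solution(k, m, c, box=30):
    """Search |X|, |Y| <= box for k*X^3 - m*Y^3 = c (brute force, demo only)."""
    for x in range(-box, box + 1):
        for y in range(-box, box + 1):
            if k * x**3 - m * y**3 == c:
                return True
    return False

# 2*2^3 - 15*1^3 = 1, so m = 15 is a candidate member of S_2, while
# m = 7 can never work: 2*X^3 = 1 (mod 7) forces X^3 = 4 (mod 7),
# but the cubes mod 7 are only {0, 1, 6}.
```

Such local obstructions are exactly what drives the sieving argument in the unconditional upper bound below.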
However, the question considered here is somewhat different: we are estimating for how many $b$ there is at least one solution. The unconditional upper bound $N/(\log(N))^{1/3}$ in the number field case follows from a sieving argument together with a simple instance of the Chebotarev density theorem. The much stronger bound $N^{(1/3)+\epsilon}$ in the number field case as well as the bound $N^2q^{N/3}$ in the function field case follow from the use of ABC together with several combinatorial arguments that might be of independent interest. \textbf{Acknowledgments.} We wish to thank Professors Shabnam Akhtari and Istv\'{a}n Ga\'{a}l for helpful comments. Z.~S.~A. is partially supported by a PIMS Postdoctoral Fellowship. K.~N. is partially supported by an NSERC Discovery Grant and a CRC tier-2 research stipend. \section{The number field case} We start with the following: \begin{proposition}\label{prop:kX^3-mY^3} Let $k$ and $m$ be square-free positive integers. We have: \begin{itemize} \item [(a)] $m\in \mathcal{S}_k$ if and only if $k^2m\not\equiv\pm 1$ mod $9$ and the equation $kX^3-mY^3=1$ has a solution $X,Y\in{\mathbb Z}$. \item [(b)] $m\in\mathcal{T}_k$ if and only if $k^2m\equiv \pm 1$ mod $9$ and the equation $kX^3-mY^3=9$ has a solution $X,Y\in{\mathbb Z}$. \end{itemize} \end{proposition} \begin{proof} For (a), suppose $k^2m\not\equiv \pm 1$ mod $9$, let $\alpha=\sqrt[3]{k^2m}$, $K={\mathbb Q}(\alpha)$, and consider the integral basis $\{1,\alpha,\alpha^2/k\}$. To find $\theta\in \mathcal{O}_K$ such that $\mathcal{O}_K={\mathbb Z}[\theta]$, it suffices to consider $\theta$ of the form $\theta=u\alpha+v(\alpha^2/k)$ with $u,v\in{\mathbb Z}$. Then we have: $$\theta^2=2uvkm+v^2m\alpha+u^2\alpha^2.$$ We represent $(1,\theta,\theta^2)$ in terms of the given integral basis and the corresponding matrix has determinant $ku^3-mv^3$. Therefore $\mathcal{O}_K={\mathbb Z}[\theta]$ if and only if the equation $kX^3-mY^3=1$ has a solution $X,Y\in{\mathbb Z}$.
The proof of part (b) is similar with some tedious algebraic expressions as follows. Suppose $k^2m\equiv \pm 1$ mod $9$, let $\alpha=\sqrt[3]{k^2m}$, $K={\mathbb Q}(\alpha)$, and consider the integral basis $\{1,\alpha,\beta=(k^2\pm k^2\alpha+\alpha^2)/(3k)\}$. As before, consider $\theta=u\alpha+v\beta$ with $u,v\in{\mathbb Z}$. Then depending on whether $k^2m\equiv \pm 1$ mod $9$, we have: \begin{align*} \theta^2 &=u^2\alpha^2+\frac{2uv}{3k}(k^2\alpha\pm k^2\alpha^2+\alpha^3)+\frac{v^2}{9k^2}(k^2\pm k^2\alpha+\alpha^2)^2\\ &=\frac{2uvkm}{3}+\frac{v^2k^2}{9}\pm\frac{2v^2k^2m}{9}+\left(\frac{2uvk}{3}+\frac{v^2m}{9}\pm\frac{2v^2k^2}{9}\right)\alpha\\ &\ +\left(u^2\pm\frac{2uvk}{3}+\frac{v^2k^2}{9}+\frac{2v^2}{9}\right)\alpha^2\\ &=c_1+c_2\alpha+c_3\beta \end{align*} where $\displaystyle c_3=3k\left(u^2\pm \frac{2uvk}{3}+\frac{v^2k^2}{9}+\frac{2v^2}{9}\right)$, $\displaystyle c_2=\frac{2uvk}{3}+\frac{v^2m}{9}\mp k^2u^2-\frac{2uvk^3}{3}\mp\frac{v^2k^4}{9}$, and the precise value of $c_1$ is not needed for our purpose. We represent $(1,\theta,\theta^2)$ in terms of the given integral basis and the corresponding matrix has determinant: $$3ku^3\pm 3k^2u^2v+k^3uv^2-\frac{m\mp k^4}{9}v^3=\frac{1}{9}\left(k(3u\pm kv)^3-mv^3\right).$$ Therefore if $\mathcal{O}_K={\mathbb Z}[\theta]$ then the equation $kX^3-mY^3=9$ has a solution $X,Y\in{\mathbb Z}$. Conversely, if $(X_0,Y_0)$ is a solution, we can choose $v=Y_0$ and $u=(X_0\mp kY_0)/3$ and we need to explain why $u\in{\mathbb Z}$. From $k^2m\equiv \pm 1$ mod $9$, we have $m\equiv \pm k^4$ mod $9$. Using this and the equation $kX_0^3-mY_0^3=9$, we have $X_0^3\equiv \pm k^3Y_0^3$ mod $9$. Hence $X_0\equiv \pm kY_0$ mod $3$. \end{proof} The following establish the upper bounds in Theorem~\ref{thm:unconditional}: \begin{proposition}\label{prop:upper bound} Let $a$ and $b$ be positive integers such that $b/a$ is not the cube of a rational number. 
As $N\to\infty$, the number of integers $m\in [1,N]$ such that the equation $aX^3-mY^3=b$ has an integer solution is $O_{a,b}(N/(\log N)^{1/3})$. \end{proposition} \begin{proof} Let $L={\mathbb Q}(\sqrt[3]{b/a})$ and let $L'$ be its Galois closure. Let $S$ be the set of primes $p\nmid 3ab$ such that $b/a$ is not a cube mod $p$. This means $p$ remains a prime in $L$ and $p\mathcal{O}_L$ splits completely in $L'$; in other words the Frobenius of $p$ with respect to $L'/{\mathbb Q}$ is the conjugacy class of the two elements of order 3. The Chebotarev density theorem gives that $S$ has Dirichlet as well as natural density $1/3$. Put $s(x)=\vert S\cap [1,x]\vert$ so that $s(x)=\displaystyle \frac{\pi(x)}{3}+o(\pi(x))$; put $r(x)=s(x)-\displaystyle\frac{\pi(x)}{3}$. Then partial summation gives: $$\sum_{p\in S, p\leq x}\frac{1}{p}=\int_{2^{-}}^{x}\frac{ds(t)}{t}=\frac{1}{3}\int_{2^{-}}^{x}\frac{d\pi(t)}{t}+\int_{2^{-}}^{x}\frac{dr(t)}{t}\sim \frac{1}{3}\log\log x\ \text{as $x\to\infty$}$$ thanks to the Prime Number Theorem and the fact that $r(t)=o(\pi(t))$. This implies \begin{equation}\label{eq:NF prod p in S} \prod_{p\in S,p\leq x}\left(1-\frac{1}{p}\right)=(\log x)^{-1/3}e^{o(\log\log x)}. \end{equation} Now observe that if $m\in [1,N]$ is divisible by some $p\in S$ then the equation $aX^3-mY^3=b$ cannot have an integer solution since $b/a$ is not a cube mod $p$. By sieving \cite[Chapter~3.2]{MV06_MN}, the number of $m\in [1,N]$ such that $p\nmid m$ for all $p\in S$ is $O\left(\displaystyle\prod_{p\in S,p\leq N}\left(1-\frac{1}{p}\right)N\right)$ and we use \eqref{eq:NF prod p in S} to finish the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:unconditional}] The upper bound in Theorem~\ref{thm:unconditional} follows from Propositions \ref{prop:kX^3-mY^3} and \ref{prop:upper bound}. For the lower bound, first we consider $\mathcal{S}_k$ and the equation $kX^3-mY^3=1$.
We can always take $m=kX_0^3-1$ for $X_0\in [1,(N/k)^{1/3}]$ so that the above equation has a solution $(X_0,1)$. We need that $k^2m\not\equiv\pm 1$ mod $9$ and $m$ is square-free for a positive proportion of such $X_0$. A direct calculation shows that regardless of the residue of $k$ mod $9$, we can always find $r\in\{0,\ldots,8\}$ such that $k^2(kr^3-1)\not\equiv \pm 1$ mod $9$. We now choose $X_0$ of the form $X_0=9t+r$ for $t\in [1,cN^{1/3}]$ where $c$ is a positive constant depending only on $k$. By classical results of Hooley \cite{Hoo67_OT,Hoo68_OT} (also see \cite{Gra98_AB} for a more general result assuming ABC), the irreducible cubic polynomial $f(t)=k(9t+r)^3-1\in{\mathbb Z}[t]$ admits square-free values for at least $c'cN^{1/3}$ many $t$ where $c'>0$ depends only on $k$ and $r$. The proof of $N^{1/3}\ll_k \vert \mathcal{T}_k\cap [1,N]\vert$ is completely similar. \end{proof} We will obtain the stronger upper bound $O(N^{(1/3)+\epsilon})$ assuming the ABC Conjecture: \begin{conjecture}\label{conj:ABC} Let $\epsilon>0$. Then there exists a positive constant $C$ depending only on $\epsilon$ such that the following holds. For all non-zero relatively prime integers $a,b,c\in {\mathbb Z}$ with $a+b=c$, we have: $$\max\{\vert a\vert,\vert b\vert,\vert c\vert\}\leq C\left(\prod_{\text{prime}\ p\mid abc}p\right)^{1+\epsilon}$$ \end{conjecture} Theorem~\ref{thm:1/3+epsilon} follows from Proposition~\ref{prop:kX^3-mY^3} and the following: \begin{proposition} Assume Conjecture~\ref{conj:ABC}. Let $a$ and $b$ be positive integers such that $b/a$ is not the cube of a rational number and let $\epsilon>0$. The number of integers $m$ such that $\vert m\vert\leq N$ and the equation $aX^3-mY^3=b$ has an integer solution $(X,Y)$ is $O_{a,b,\epsilon}(N^{(1/3)+\epsilon})$. \end{proposition} \begin{proof} Let $\delta$ be a small positive number depending on $\epsilon$ that will be specified later. The implicit constants in this proof depend only on $a$, $b$, and $\delta$.
Except for the finitely many $m$ for which $b/m$ is the cube of an integer, any $(m,X_0,Y_0)$ such that $aX_0^3-mY_0^3=b$, $\vert m\vert\leq N$, and $X_0,Y_0\in{\mathbb Z}$ satisfies $mX_0Y_0\neq 0$. An immediate consequence of ABC gives: $$\max\{\vert X_0^3\vert,\vert mY_0^3\vert\}\ll \vert mX_0Y_0\vert^{1+\delta}.$$ From $aX_0^3-mY_0^3=b$, we get $\vert Y_0\vert\ll \vert m^{-1/3}X_0\vert$. Combining with the above, we get: $\vert X_0\vert^3\ll \vert m^{2/3}X_0^2\vert^{1+\delta}$. Put $\delta'=\displaystyle\frac{2(1+\delta)}{3(1-2\delta)}-\frac{2}{3}$ so that we have: $$\vert X_0\vert \ll m^{(2/3)+\delta'}\ \text{and}\ \vert Y_0\vert\ll m^{(1/3)+\delta'}.$$ Therefore, in order to estimate the number of $m$, we estimate the number of pairs $(X_0,Y_0)$ with $X_0=O(N^{(2/3)+\delta'})$ and $Y_0=O(N^{(1/3)+\delta'})$ such that $\displaystyle\frac{aX_0^3-b}{Y_0^3}$ is an integer in $[-N,N]$. Fix such a $Y_0$; we have the obvious bound $\vert X_0\vert\ll N^{1/3}\vert Y_0\vert$ and we now study the congruence $aX_0^3\equiv b$ mod $Y_0^3$. Let $p$ be a prime divisor of $Y_0$ and let $d>0$ such that $p^d\parallel Y_0$. If $p\nmid ab$, the equation $aX^3\equiv b$ mod $p^{3d}$ has at most 3 solutions in ${\mathbb Z}/p^{3d}{\mathbb Z}$ thanks to the structure of $({\mathbb Z}/p^{3d}{\mathbb Z})^*$. If $p\mid ab$ and $3d>\max\{v_p(a),v_p(b)\}$, for the above congruence equation to have a solution, we must have that $v_p(b)-v_p(a)$ is a positive integer divisible by $3$ and any solution must have the form $p^{(v_p(b)-v_p(a))/3}x$ where $x$ satisfies $x^3\equiv u$ mod $p^{3d-(v_p(b)-v_p(a))}$ and $u$ is given by $\displaystyle\frac{b}{a}=p^{v_p(b)-v_p(a)}u$. Again, there are at most 3 solutions in this case. In conclusion, there are $O(3^{\omega(Y_0)})$ many solutions in ${\mathbb Z}/Y_0^3{\mathbb Z}$ of the equation $aX^3\equiv b$ mod $Y_0^3$; here $\omega(n)$ denotes the number of distinct prime factors of $n$.
Overall, the number of pairs $(X_0,Y_0)$ is at most: $$\sum_{Y_0=O(N^{(1/3)+\delta'})}O\left(3^{\omega(Y_0)}\left(\frac{N^{1/3}\vert Y_0\vert}{\vert Y_0^3\vert}+1\right)\right).$$ This is $O(N^{((1/3)+\delta')(1+\delta')})$ since $3^{\omega(Y_0)}$ is dominated by $\vert Y_0\vert^{\delta'}$. Now choose $\delta$ sufficiently small so that $((1/3)+\delta')(1+\delta')<(1/3)+\epsilon$, and we get the desired conclusion. \end{proof} \section{The function field case} Throughout this section, let $F$ be a finite field of order $q$ and characteristic $p\neq 3$. A polynomial $f(t)\in F[t]$ is called square-free (respectively cube-free) if it is not divisible by the square (respectively cube) of a \emph{non-constant} polynomial in $F[t]$. Every cube-free $f(t)$ can be written uniquely as $f(t)=g(t)^2h(t)$ in which $g(t),h(t)\in F[t]$ are square-free and $g(t)$ is monic. We have the function field analogue of Proposition~\ref{prop:kX^3-mY^3}, whose proof is completely similar: \begin{proposition}\label{prop:gX^3-hY^3} Let $f,g,h\in F[t]$ be as above. Then $F(t,\sqrt[3]{f})$ is monogenic if and only if there exist $X,Y\in F[t]$ such that $gX^3-hY^3\in F^*$. \end{proposition} In function fields, the Mason-Stothers theorem plays a similar role to ABC: \begin{theorem}[Mason-Stothers]\label{thm:Mason-Stothers} Let $E$ be a field and let $A,B,C\in E[t]$ be relatively prime polynomials with $A+B=C$. Suppose that at least one of the derivatives $A',B',C'$ is non-zero. Then $$\max\{\deg(A),\deg(B),\deg(C)\}\leq r(ABC)-1$$ where $r(ABC)$ denotes the number of distinct roots of $ABC$ in $\bar{E}$. \end{theorem} In order to guarantee the condition on derivatives in the above theorem, we need: \begin{lemma}\label{lem:one derivative is nonzero} Let $g(t),h(t)\in F[t]$ be non-constant square-free polynomials. Suppose there exist $X,Y\in F[t]$ such that $gX^3-hY^3\in F^*$.
Then there exist $X_1,Y_1\in F[t]$ such that $gX_1^3-hY_1^3\in F^*$ and at least one of the derivatives $(gX_1^3)'$ and $(hY_1^3)'$ is non-zero. \end{lemma} \begin{proof} Write $g=g_1\cdots g_u$ and $h=h_1\cdots h_v$ where the $g_i$'s and $h_j$'s are irreducible over $F$. Let $n$ be the largest non-negative integer such that both $gX^3$ and $hY^3$ are $p^n$-th powers of elements of $F[t]$. Writing $gX^3=\tilde{X}^{p^n}$ and $hY^3=\tilde{Y}^{p^n}$, we have that $\tilde{X}-\tilde{Y}\in F^*$ and at least one of the derivatives $\tilde{X}'$ and $\tilde{Y}'$ is non-zero. Since $p\neq 3$, from $gX^3=\tilde{X}^{p^n}$ and $hY^3=\tilde{Y}^{p^n}$ we can express $\tilde{X}$ and $\tilde{Y}$ as: $$\tilde{X}=g_1^{b_1}\cdots g_u^{b_u}X_0^{3}\ \text{and}\ \tilde{Y}=h_1^{c_1}\cdots h_v^{c_v}Y_0^3$$ where the $b_i$'s and $c_j$'s are positive integers, $\gcd(X_0,g_1\cdots g_u)=\gcd(Y_0,h_1\cdots h_v)=1$, and $b_ip^n-1\equiv c_jp^n-1\equiv 0$ mod $3$ for $1\leq i\leq u$ and $1\leq j\leq v$. Hence the $b_i$'s and $c_j$'s have the same non-zero residue mod $3$. Depending on whether they are $1$ mod $3$ or respectively $2$ mod $3$, we can write $$\tilde{X}=gX_1^3\ \text{and}\ \tilde{Y}=hY_1^3$$ or respectively $$\tilde{X}=g^2X_1^3\ \text{and}\ \tilde{Y}=h^2Y_1^3.$$ We need to rule out the second possibility above. Indeed, suppose it happens; then the Mason-Stothers theorem implies: \begin{align*} \max\{2\deg(g)+3\deg(X_1),2\deg(h)+3\deg(Y_1)\}\leq &\deg(g)+\deg(X_1)+\deg(h)\\ & +\deg(Y_1)-1, \end{align*} a contradiction since the RHS is strictly smaller than the average of the two terms in the LHS. This finishes the proof. \end{proof} For $P(t)\in F[t]\setminus\{0\}$, let $\omega(P)$ denote the number of distinct monic irreducible factors of $P$. As before, we also need an upper bound for $\omega(P)$: \begin{lemma} For every $P(t)\in F[t]$ of degree $N\geq 2$, we have $\omega(P)\ll_q N/\log N$. \end{lemma} \begin{proof} All the implicit constants in this proof depend only on $q$.
For every positive integer $k$, let $d_k$ be the degree of the product of all monic irreducible polynomials of degree at most $k$. Since there are $(q^n+O(q^{n/2}))/n$ monic irreducible polynomials of degree $n$, we have $$d_k=\sum_{n=1}^k (q^{n}+O(q^{n/2}))= \frac{q^{k+1}}{q-1}+O(q^{k/2}).$$ Now we choose the smallest $k$ such that $N\leq d_k$. This implies that $\omega(P)$ is at most the number of monic irreducible polynomials of degree at most $k$: $$\omega(P)\leq \sum_{n=1}^k\frac{q^n+O(q^{n/2})}{n}=O(q^k/k).$$ From the above formula for $d_k$ and the choice of $k$, we have that $q^k\ll N$ and $k\ll \log N$; this finishes the proof. \end{proof} It turns out that we will need an estimate for $\sum 3^{\omega(P)}$ where $\deg(P)\leq N$. Using the above bound $N/\log N$ for each individual $\omega(P)$ would yield $O(q^{N+O(N/\log N)})$ for the above sum which would not be good enough for our purpose. Instead, we have: \begin{lemma} $\displaystyle\sum_{\deg (P)\leq N} 3^{\omega(P)}=O(N^2q^N)$. \end{lemma} \begin{proof} Let $s_N$ be $\sum 3^{\omega(P)}$ where $P$ ranges over all \emph{monic} polynomials of degree equal to $N$. It suffices to show $s_N=O(N^2q^N)$. The generating series $\displaystyle\sum_{n}s_nT^n$ has the Euler product: $$\prod_{P}(1+3T^{\deg(P)}+3T^{2\deg(P)}+\ldots)=\prod_{P}\frac{1+2T^{\deg(P)}}{1-T^{\deg(P)}},$$ where $P$ ranges over all the monic irreducible polynomials over $F$. The product of the denominators is simply the zeta function $1/(1-qT)$ while the coefficients of the numerator are bounded above by the coefficients of $\displaystyle\prod_{P}(1+T^{\deg(P)}+T^{2\deg(P)}+\ldots)^2=\frac{1}{(1-qT)^2}$. Therefore the $s_N$'s are bounded above by the coefficients of $1/(1-qT)^3$ and this finishes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Ft setting}] For the lower bound, we simply study the equation $gX^3-hY^3=1$.
Either by adapting the arguments in \cite{Hoo67_OT,Hoo68_OT,Gra98_AB} or using a general result of Poonen \cite[Theorem~3.4]{Poo03_SV} which is valid for a multivariable polynomial, we have that for a positive proportion of the polynomials $X\in F[t]$ with $\deg(X)\leq (N-\deg(g))/3$, the polynomial $gX^3-1$ is square-free; we now simply take $Y=1$ and $h=gX^3-1$ for those $X$'s. This proves the lower bound. For the upper bound, we prove that for an arbitrary $\alpha\in F^*$, there are $O(N^2q^{N/3})$ many $h$ of degree at most $N$ such that the equation $gX^3-hY^3=\alpha$ has a solution $X,Y\in F[t]$; since $\deg(g)>0$ we must have that $Y\neq 0$. By Lemma~\ref{lem:one derivative is nonzero}, we may assume that at least one of the derivatives $(gX^3)'$ and $(hY^3)'$ is non-zero. The Mason-Stothers theorem yields: $$\deg(h)+3\deg(Y)\leq \deg(g)+\deg(X)+\deg(h)+\deg(Y)-1,\ \text{and}$$ $$\deg(g)+3\deg(X)\leq \deg(g)+\deg(X)+\deg(h)+\deg(Y)-1.$$ The first inequality gives $\deg(Y)\leq \deg(X)/2+O(1)$; then we use this and the second inequality to obtain $\deg(X)\leq \frac{2}{3}\deg(h)+O(1)$. Then it follows that $\deg(Y)\leq \frac{N}{3}+O(1)$. We now count the number of pairs $(X,Y)$ with $Y\neq 0$ such that $\deg(Y)\leq \frac{N}{3}+O(1)$ and $\frac{gX^3-\alpha}{Y^3}$ is a polynomial in $F[t]$ of degree at most $N$. Hence $\deg(X)\leq \deg(Y)+(N/3)-\deg(g)$. Arguing as before, for each prime power factor $P^n$ of $Y$, the congruence equation $gX^3-\alpha=0$ mod $P^{3n}$ has at most 3 solutions mod $P^{3n}$. Therefore by the Chinese Remainder Theorem, the congruence equation $gX^3-\alpha=0$ mod $Y^3$ has at most $3^{\omega(Y)}$ solutions mod $Y^3$. Therefore once $Y$ is fixed, there are at most $$3^{\omega(Y)}\left(q^{\deg(Y)+(N/3)-\deg(g)-3\deg(Y)+1}+1\right)$$ possibilities for $X$.
Hence the number of pairs $(X,Y)$ is at most: \begin{align*} & \sum_{k=0}^{(N/3)+O(1)} \sum_{Y:\ \deg(Y)=k}3^{\omega(Y)}\left(q^{(N/3)-2\deg(Y)} +1\right)\\ & \ll \sum_{k=2}^{(N/3)+O(1)}(q-1)q^k3^{O(k/\log k)}q^{(N/3)-2k}+ \sum_{k=0}^{(N/3)+O(1)} \sum_{Y:\ \deg(Y)=k}3^{\omega(Y)}.\\ \end{align*} The first term is $O(q^{N/3})$ since $\sum_{k=0}^{\infty} 3^{O(k/\log k)}q^{-k}<\infty$ while the second term is $O(N^2q^{N/3})$ thanks to the previous lemma and this finishes the proof. \end{proof} \section{Further Questions} Thanks to the lower bounds in our results, we know that the ``main terms'' $N^{1/3}$ and $q^{N/3}$ in the upper bounds are optimal. However, it seems possible that the ``extra factors'' $N^{\epsilon}$ in the number field case and $N^2$ in the function field case can be improved. This motivates: \begin{question} \begin{itemize} \item [(a)] In Theorem~\ref{thm:1/3+epsilon}, can one replace the bound $O(N^{1/3+\epsilon})$ by $O(N^{1/3}f(N))$ where $f(N)$ is dominated by any $N^{\epsilon}$? \item [(b)] In Theorem~\ref{thm:Ft setting}, can one improve the bound $O(N^2q^{N/3})$? Could this upper bound even be $O(q^{N/3})$? \item [(c)] In the number field case, can one obtain an unconditional power-saving bound $O(N^c)$ where $c<1$? \end{itemize} \end{question} \end{document}
arXiv
ZetaGrid ZetaGrid was at one time the largest distributed computing project, designed to explore the non-trivial roots of the Riemann zeta function, checking over one billion roots a day. Roots of the zeta function are of particular interest in mathematics; a single zero off the critical line would disprove the Riemann hypothesis, with far-reaching consequences for all of mathematics. As of June 2023, no counterexample to the Riemann hypothesis has been found. The project ended in November 2005 due to instability of the hosting provider.[1] More than the first 10^13 zeros were checked.[2] The project administrator stated that after the results were analyzed, they would be posted on the American Mathematical Society website.[3] The official status remains unclear, however, as the result was never published nor independently verified. This is likely because there was no evidence that each zero was actually computed, as there was no process implemented to check each one as it was calculated.[4][5] References 1. Zeta Finished – Free-DC Forum 2. Ed Pegg Jr., "Ten Trillion Zeta Zeros". 3. "ZetaGrid - News". 2010-11-18. Archived from the original on 2010-11-18. Retrieved 2023-06-04. 4. Yannick Saouter, Xavier Gourdon and Patrick Demichel. An improved lower bound for the de Bruijn-Newman constant. Math. Comp. 80 (2011) 2283. MR 2813360. 5. Yannick Saouter and Patrick Demichel. A sharp region where π(x)−li(x) is positive. Math. Comp. 79 (2010) 2398. MR 2684372. External links • Home page (Web archive)
The EdExcel C3 debacle: initial thoughts Written by Colin+ in core 3. These are my barely-edited, initial thoughts on today's controversial EdExcel Core 3 paper. Nothing is meant as an attack on anyone - except, of course, for Mr Gove, who must be used to it by now. UPDATED June 14: The legendary Arsey at TSR has worked solutions for the paper here. There were a few unusual questions, and I rate it as a tough paper that required some thought beyond mechanically applying formulas - which is one of the things C3 is meant to test. At the same time, there were several questions that should have been almost automatic for a candidate with a realistic hope of doing well in the exam. Assuming the grade boundaries are adjusted, I don't think there's much hope for the free-resit brigade. (Update ends) UPDATED again, June 14, evening: I've now seen the original, compromised paper. Mark for mark, there's no contest; the original paper was far, far less demanding than the replacement paper. However, mark-for-mark isn't the right comparison: the right comparison is how hard it is to achieve each grade. The grade boundaries for the original paper would have been much higher than they will be for the replacement paper. It could well turn out to have been 'easier' to get an A on the replacement paper than the original - until the grade boundaries are decided, it's impossible to say. (Update ends) I've not yet seen the full C3 paper, but I've seen some of the questions, including the "why can't Kate get a zoom lens?" one. I have some thoughts: Some students seem to be genuinely devastated by this paper. I suspect there's also a bandwagon effect; for many people, this was an extremely high-stakes exam with the potential to change their future. I feel sorry for students who feel like the system has screwed them over, and wish it hadn't happened. The questions I've seen were, in my opinion, tough. Not impossible, not unfair, but tough. 
I'd be very interested to see the full paper to form a proper judgement on the balance and style of the exam. The lost papers debacle: careless, very careless. For all EdExcel's protests, I find it a bit unlikely that the substitute paper was subject to the same level of scrutiny as the original. Around 60 students sat the 'wrong' exam, out of around 35,000; I find it hard to believe that there was cheating involved. Speaking to some of my students, I gather some of the questions were asked oddly. This will have disadvantaged students who relied on rote learning rather than reading and understanding. Mr Gove will be LIVID. The final question did involve — for three marks — knowing that distance = speed $\times$ time. Technically, that's not in the specification. Neither is knowing there are 60 minutes in an hour; it's not unreasonable to expect an A-level maths student to know either of those facts. The grade boundaries have historically changed from one exam to the next to reflect the difficulty of exams. If, as seems likely, scores on this paper are lower than expected, the UMS conversion will be adjusted to account for that. That said, it's a source of doubt — and a blow to the confidence — that will make for a very uncomfortable summer for many students. I've seen some amusing memes, and I've seen a lot of what can only be called entitled whining. I've seen someone claim a 14-mark question cost them the chance of getting more than 20/75. Frankly, I think trigonometry is the least of their problems. One last criticism about balance, based on what I've seen so far: the two trigonometry questions were both from what I'd consider the hard end of the spectrum. That would appear to be poor practice, although I can't say it would make for an unfair paper on its own. Students who have done poorly in this exam will not currently be able to resit until next summer. This is a ludicrous state of affairs, and there's a petition to resurrect the January exam diet here. 
Overall, my suspicion is that there's an element of EdExcel cock-up, an element of teenage melodrama, and an element of high-stakes exams being a sodding ridiculous way to assess whether someone's a good mathematician. I hope it all comes out in the wash.

14 comments on "The EdExcel C3 debacle: initial thoughts"

m4thsdotcom: @icecolbeveridge @MrsOClee Tough in parts, very tough, but enough marks on the paper. Boundaries will be low. Students still have hope of success.

icecolbeveridge: I've updated my thoughts on #C3, now I've seen the paper: http://t.co/f8ZHAKiwCm TL;DR: tough in places, easy in others, unusual in many.

ImMisterAl: My thoughts on #EdexcelC3: the paper was tough but not unfair. I pretty much agree with @icecolbeveridge here: http://t.co/ZzBgEhtB8b

Dave Gale: I wonder, in other situations where replacement exams have to be used (any exam board), how many centres end up using the original paper. I'd imagine it's a common mistake, or am I being too lenient? I happen to think that Edexcel shouldn't have 'lost' the papers, but what choice did they have but to replace them? As for the difficulty: basically I agree with your very last paragraph. I understand the concerns, but students who can only answer maths questions in formats they are very familiar with should be tested in unusual circumstances.
I haven't seen the questions and perhaps I'm being harsh, but there's a message here for teachers: don't rely too heavily on bashing through all the past papers as the best form of practice. Four centres used the wrong paper: is that Edexcel's fault or the centres' fault?

CathB: I don't agree about the "unlikely that the replacement was subject to the same level of scrutiny". Exam boards always have papers in reserve – it's standard policy, as leaks can always happen, either through exam board incompetence or, much more commonly, things going awry at centres. They'd already have set next summer's, so if they didn't have an official spare, chances are they'd have brought that one forwards. Why do you find it ludicrous that students can't resit until next summer? It's always been the case for some subjects (not everything was offered in January). Although I understand someone who's messed up will wish it was otherwise, I don't think it would be possible to offer resit opportunities without allowing people to do modules early, which then perpetuates the whole modularisation issue.

Colin: Thanks for your comment, Cath. I think students should be allowed to sit the modules as soon as they're ready to. There's quite enough artificial pressure on students as it is without adding to it by making high-stakes exams even higher-stakes. (I don't like exams. I always did well in them, but they're a lousy way of telling a good mathematician from a poor one; good mathematicians are generally good at discussing problems in groups and persisting with them, neither of which is allowed in a short exam). I don't think that modularisation (done right) is a bad thing at all, although I'm not sure modularisation is currently done right. (If anything, the C3 paper was an example of modularisation done right: students who remembered their trig rules from C2 and straight line geometry from C1 were at an advantage).
Section PEE Properties of Eigenvalues and Eigenvectors

The previous section introduced eigenvalues and eigenvectors, and concentrated on their existence and determination. This section will be more about theorems, and the various properties eigenvalues and eigenvectors enjoy. Like a good $4\times 100\text{ meter}$ relay, we will lead off with one of our better theorems and save the very best for the anchor leg.

Subsection BPE Basic Properties of Eigenvalues

Theorem EDELI Eigenvectors with Distinct Eigenvalues are Linearly Independent
Suppose that $A$ is an $n\times n$ square matrix and $S=\set{\vectorlist{x}{p}}$ is a set of eigenvectors with eigenvalues $\scalarlist{\lambda}{p}$ such that $\lambda_i\neq\lambda_j$ whenever $i\neq j$. Then $S$ is a linearly independent set.

There is a simple connection between the eigenvalues of a matrix and whether or not the matrix is nonsingular.

Theorem SMZE Singular Matrices have Zero Eigenvalues
Suppose $A$ is a square matrix. Then $A$ is singular if and only if $\lambda=0$ is an eigenvalue of $A$.

With an equivalence about singular matrices we can update our list of equivalences about nonsingular matrices.

Theorem NME8 Nonsingular Matrix Equivalences, Round 8
Suppose that $A$ is a square matrix of size $n$. The following are equivalent.
1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $\nsp{A}=\set{\zerovector}$.
4. The linear system $\linearsystem{A}{\vect{b}}$ has a unique solution for every possible choice of $\vect{b}$.
5. The columns of $A$ are a linearly independent set.
6. $A$ is invertible.
7. The column space of $A$ is $\complex{n}$, $\csp{A}=\complex{n}$.
8. The columns of $A$ are a basis for $\complex{n}$.
9. The rank of $A$ is $n$, $\rank{A}=n$.
10. The nullity of $A$ is zero, $\nullity{A}=0$.
11. The determinant of $A$ is nonzero, $\detname{A}\neq 0$.
12. $\lambda=0$ is not an eigenvalue of $A$.
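Theorem SMZE lends itself to a quick numerical sanity check. The sketch below uses NumPy rather than this text's Sage cells, and the matrix is an arbitrary illustrative choice:

```python
import numpy as np

# A singular matrix: its second row is twice its first, so det(A) = 0.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

assert abs(np.linalg.det(A)) < 1e-12  # A is singular

# Theorem SMZE: A is singular if and only if 0 is an eigenvalue of A.
eigs = np.linalg.eigvals(A)
assert np.any(np.isclose(eigs, 0.0))
```

Here the eigenvalues are $0$ and $5$, and the zero eigenvalue certifies singularity, in line with the last equivalence of Theorem NME8.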
Sage NME8 Nonsingular Matrix Equivalences, Round 8

Certain changes to a matrix change its eigenvalues in a predictable way.

Theorem ESMM Eigenvalues of a Scalar Multiple of a Matrix
Suppose $A$ is a square matrix, $\lambda$ is an eigenvalue of $A$, and $\alpha$ is a scalar. Then $\alpha\lambda$ is an eigenvalue of $\alpha A$.

Unfortunately, there are not parallel theorems about the sum or product of arbitrary matrices. But we can prove a similar result for powers of a matrix.

Theorem EOMP Eigenvalues Of Matrix Powers
Suppose $A$ is a square matrix, $\lambda$ is an eigenvalue of $A$, and $s\geq 0$ is an integer. Then $\lambda^s$ is an eigenvalue of $A^s$.

While we cannot prove that the sum of two arbitrary matrices behaves in any reasonable way with regard to eigenvalues, we can work with the sum of dissimilar powers of the same matrix. We have already seen two connections between eigenvalues and polynomials, in the proof of Theorem EMHE and the characteristic polynomial (Definition CP). Our next theorem strengthens this connection.

Theorem EPM Eigenvalues of the Polynomial of a Matrix
Suppose $A$ is a square matrix and $\lambda$ is an eigenvalue of $A$. Let $q(x)$ be a polynomial in the variable $x$. Then $q(\lambda)$ is an eigenvalue of the matrix $q(A)$.

Example BDE Building desired eigenvalues

Inverses and transposes also behave predictably with regard to their eigenvalues.

Theorem EIM Eigenvalues of the Inverse of a Matrix
Suppose $A$ is a square nonsingular matrix and $\lambda$ is an eigenvalue of $A$. Then $\lambda^{-1}$ is an eigenvalue of the matrix $\inverse{A}$.

The proofs of the theorems above have a similar style to them. They all begin by grabbing an eigenvalue-eigenvector pair and adjusting it in some way to reach the desired conclusion. You should add this to your toolkit as a general approach to proving theorems about eigenvalues. So far we have been able to reserve the characteristic polynomial for strictly computational purposes.
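Theorems ESMM, EOMP, EPM and EIM can all be illustrated numerically. The NumPy sketch below is not part of this text's Sage material; the matrix and the polynomial are arbitrary choices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # upper triangular, so its eigenvalues are 2 and 3

lam = 2.0  # one eigenvalue of A

# Theorem ESMM: alpha * lam is an eigenvalue of alpha * A (here alpha = 5)
assert np.any(np.isclose(np.linalg.eigvals(5 * A), 5 * lam))

# Theorem EOMP: lam**s is an eigenvalue of A**s (here s = 3)
assert np.any(np.isclose(np.linalg.eigvals(np.linalg.matrix_power(A, 3)), lam ** 3))

# Theorem EPM: q(lam) is an eigenvalue of q(A), for q(x) = x^2 - 4x + 1
qA = A @ A - 4 * A + np.eye(2)
assert np.any(np.isclose(np.linalg.eigvals(qA), lam ** 2 - 4 * lam + 1))

# Theorem EIM: 1/lam is an eigenvalue of the inverse of A
assert np.any(np.isclose(np.linalg.eigvals(np.linalg.inv(A)), 1 / lam))
```

Each assertion follows the pattern of the proofs: take one eigenvalue and track how the transformation of the matrix transforms it.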
However, sometimes a theorem about eigenvalues can be proved easily by employing the characteristic polynomial (rather than using an eigenvalue-eigenvector pair). The next theorem is an example of this.

Theorem ETM Eigenvalues of the Transpose of a Matrix
Suppose $A$ is a square matrix and $\lambda$ is an eigenvalue of $A$. Then $\lambda$ is an eigenvalue of the matrix $\transpose{A}$.

If a matrix has only real entries, then the computation of the characteristic polynomial (Definition CP) will result in a polynomial with coefficients that are real numbers. Complex numbers could result as roots of this polynomial, but they are roots of quadratic factors with real coefficients, and as such, come in conjugate pairs. The next theorem proves this, and a bit more, without mentioning the characteristic polynomial.

Theorem ERMCP Eigenvalues of Real Matrices come in Conjugate Pairs
Suppose $A$ is a square matrix with real entries and $\vect{x}$ is an eigenvector of $A$ for the eigenvalue $\lambda$. Then $\conjugate{\vect{x}}$ is an eigenvector of $A$ for the eigenvalue $\conjugate{\lambda}$.

This phenomenon is amply illustrated in Example CEMS6, where the four complex eigenvalues come in two pairs, and the two basis vectors of the eigenspaces are complex conjugates of each other. Theorem ERMCP can be a time-saver for computing eigenvalues and eigenvectors of real matrices with complex eigenvalues, since the conjugate eigenvalue and eigenspace can be inferred from the theorem rather than computed.

Subsection ME Multiplicities of Eigenvalues

A polynomial of degree $n$ will have exactly $n$ roots, counted with multiplicity. From this fact about polynomial equations we can say more about the algebraic multiplicities of eigenvalues.

Theorem DCP Degree of the Characteristic Polynomial
Suppose that $A$ is a square matrix of size $n$. Then the characteristic polynomial of $A$, $\charpoly{A}{x}$, has degree $n$.
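Theorems ETM, ERMCP and DCP can be checked the same way. Again, a NumPy sketch with an arbitrary illustrative matrix, not part of this text:

```python
import numpy as np

# A real matrix whose characteristic polynomial has complex roots:
# trace 2, determinant 5, so the eigenvalues are 1 + 2i and 1 - 2i.
A = np.array([[1.0, -2.0],
              [2.0,  1.0]])

eigs = np.linalg.eigvals(A)

# Theorem ETM: A and its transpose have the same eigenvalues.
assert np.allclose(np.sort_complex(eigs),
                   np.sort_complex(np.linalg.eigvals(A.T)))

# Theorem ERMCP: the complex eigenvalues of a real matrix come in
# conjugate pairs, so the conjugate of each eigenvalue is also present.
for lam in eigs:
    assert np.any(np.isclose(eigs, np.conj(lam)))

# Theorem DCP: the characteristic polynomial of an n x n matrix has
# degree n, so np.poly returns n + 1 coefficients.
assert len(np.poly(A)) == A.shape[0] + 1
```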
Theorem NEM Number of Eigenvalues of a Matrix
Suppose that $\scalarlist{\lambda}{k}$ are the distinct eigenvalues of a square matrix $A$ of size $n$. Then \begin{equation*} \sum_{i=1}^{k}\algmult{A}{\lambda_i}=n \end{equation*}

Theorem ME Multiplicities of an Eigenvalue
Suppose that $A$ is a square matrix of size $n$ and $\lambda$ is an eigenvalue of $A$. Then \begin{equation*} 1\leq\geomult{A}{\lambda}\leq\algmult{A}{\lambda}\leq n \end{equation*}

Theorem MNEM Maximum Number of Eigenvalues of a Matrix
Suppose that $A$ is a square matrix of size $n$. Then $A$ cannot have more than $n$ distinct eigenvalues.

Subsection EHM Eigenvalues of Hermitian Matrices

Recall that a matrix is Hermitian (or self-adjoint) if $A=\adjoint{A}$ (Definition HM). In the case where $A$ is a matrix whose entries are all real numbers, being Hermitian is identical to being symmetric (Definition SYM). Keep this in mind as you read the next two theorems. Their hypotheses could be changed to "suppose $A$ is a real symmetric matrix."

Theorem HMRE Hermitian Matrices have Real Eigenvalues
Suppose that $A$ is a Hermitian matrix and $\lambda$ is an eigenvalue of $A$. Then $\lambda\in{\mathbb R}$.

Notice the appealing symmetry to the justifications given for the steps of this proof. In the center is the ability to pitch a Hermitian matrix from one side of the inner product to the other. Look back and compare Example ESMS4 and Example CEMS6. In Example CEMS6 the matrix has only real entries, yet the characteristic polynomial has roots that are complex numbers, and so the matrix has complex eigenvalues. However, in Example ESMS4, the matrix has only real entries, but is also symmetric, and hence Hermitian. So by Theorem HMRE, we were guaranteed eigenvalues that are real numbers. In many physical problems, a matrix of interest will be real and symmetric, or Hermitian. Then if the eigenvalues are to represent physical quantities of interest, Theorem HMRE guarantees that these values will not be complex numbers.
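As an illustration of Theorem HMRE, here is a NumPy sketch (not from this text; the Hermitian matrix is an arbitrary choice):

```python
import numpy as np

# A Hermitian matrix: equal to its own conjugate transpose.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)

# Even when computed by the general (complex) eigenvalue solver,
# the eigenvalues have no imaginary part (Theorem HMRE).
eigs = np.linalg.eigvals(A)
assert np.allclose(eigs.imag, 0.0)
```

For this matrix the trace is $5$ and the determinant is $2\cdot 3-(1-i)(1+i)=4$, so the eigenvalues are the real numbers $1$ and $4$.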
The eigenvectors of a Hermitian matrix also enjoy a pleasing property that we will exploit later.

Theorem HMOE Hermitian Matrices have Orthogonal Eigenvectors
Suppose that $A$ is a Hermitian matrix and $\vect{x}$ and $\vect{y}$ are two eigenvectors of $A$ for different eigenvalues. Then $\vect{x}$ and $\vect{y}$ are orthogonal vectors.

Notice again how the key step in this proof is the fundamental property of a Hermitian matrix (Theorem HMIP) — the ability to swap $A$ across the two arguments of the inner product. We will build on these results and continue to see some more interesting properties in Section OD.
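A NumPy sketch for Theorem HMOE, with an arbitrary illustrative Hermitian matrix (not part of this text's Sage cells):

```python
import numpy as np

# A Hermitian matrix with two distinct eigenvalues (1 and 4).
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

# eigh is NumPy's solver for Hermitian matrices: it returns real,
# ascending eigenvalues and eigenvectors as orthonormal columns.
eigenvalues, eigenvectors = np.linalg.eigh(A)
assert not np.isclose(eigenvalues[0], eigenvalues[1])  # distinct eigenvalues

# Theorem HMOE: eigenvectors for distinct eigenvalues are orthogonal
# with respect to the complex inner product (np.vdot conjugates x).
x, y = eigenvectors[:, 0], eigenvectors[:, 1]
assert abs(np.vdot(x, y)) < 1e-10
```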
\begin{document} \title{Bowers-Stephenson's conjecture on the convergence of inversive distance circle packings to the Riemann mapping} \author{Yuxiang Chen, Yanwen Luo, Xu Xu, Siqi Zhang} \address{School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, P.R.China} \email{[email protected]} \address{Department of Mathematics, Rutgers University, New Brunswick NJ, 08817} \email{[email protected]} \address{School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, P.R.China} \email{[email protected]} \address{School of Mathematics and Statistics, Wuhan University, Wuhan, 430072, P.R.China} \email{[email protected]} \thanks{MSC (2020): 52C25, 52C26.} \keywords{Inversive distance circle packings, maximal principles, infinite rigidity, convergence. } \begin{abstract} Bowers and Stephenson \cite{BS} introduced the notion of inversive distance circle packings as a natural generalization of Thurston's circle packings \cite{Th}. They conjectured that discrete conformal maps induced by inversive distance circle packings converge to the Riemann mapping. Motivated by the recent work of Luo-Sun-Wu \cite{LSW}, we prove Bowers-Stephenson's conjecture for Jordan domains by establishing a maximal principle, an infinite rigidity theorem and a solvability theorem of certain prescribing combinatorial curvature problems for inversive distance circle packings. \end{abstract} \maketitle \tableofcontents \section{Introduction} In \cite{Th2}, Thurston proposed a constructive approach to the Riemann mapping theorem by approximating conformal mappings in simply connected domains using circle packings. Thurston conjectured that the discrete conformal maps induced by circle packings converge to the Riemann mapping. Thurston's conjecture has been proved elegantly by Rodin-Sullivan \cite{RS}. Since then, there have been lots of important works on the convergence of discrete conformal maps to the Riemann mapping. See \cite{Bucking, GLW, HS1, HS2, LSW,WZ} and others. 
Motivated by Thurston's circle packings \cite{Th}, Bowers-Stephenson \cite{BS} introduced the notion of inversive distance circle packings and conjectured that the Riemann mapping could be approximated by inversive distance circle packings. In this paper, we prove Bowers-Stephenson's conjecture for Jordan domains as a counterpart of Thurston's conjecture \cite{Th2} in the setting of circle packings. The main idea comes from the recent work of Luo-Sun-Wu \cite{LSW}. Suppose $S$ is a topological surface possibly with boundary and $\mathcal{T}$ is a triangulation of $S$. We use $V = V(\mathcal{T})$, $E = E(\mathcal{T})$ and $F = F(\mathcal{T})$ to denote the set of vertices, edges, and faces of $\mathcal{T}$ respectively. A piecewise linear metric $d$ (PL metric for simplicity) on $(S, \mathcal{T})$ is a flat cone metric on $S$ such that each face in $F$ in the metric $d$ is isometric to a non-degenerate Euclidean triangle. In this case, one can represent the PL metric on $(S, \mathcal{T})$ as a length function $l: E\rightarrow \mathbb{R}_{>0}$, which satisfies the strict triangle inequality for any face in $F$. Conversely, given a function $l: E\rightarrow \mathbb{R}_{>0}$ satisfying the strict triangle inequality, one can construct a PL metric on $(S, \mathcal{T})$ by isometrically gluing Euclidean triangles along edges in pairs. Hence, we also refer to a PL metric on $(S, \mathcal{T})$ as a function $l: E\rightarrow \mathbb{R}_{>0}$ satisfying the strict triangle inequality for any face in $F$. For a PL metric $l: E\rightarrow \mathbb{R}_{>0}$ on $(S, \mathcal{T})$, the combinatorial curvature is a map $K: V\rightarrow (-\infty, 2\pi)$ sending an interior vertex $v\in V$ to $2\pi$ minus the sum of angles at $v$ and a boundary vertex $v\in V$ to $\pi$ minus the sum of angles at $v$. 
The combinatorial curvature $K$ for a PL metric on $(S, \mathcal{T})$ satisfies the discrete Gauss-Bonnet formula \begin{equation}\label{discrete Gauss-Bonnet formula} \sum_{v\in V}K(v)=2\pi\chi(S), \end{equation} where $\chi(S)$ is the Euler characteristic of the surface. A vertex $v$ is flat in a PL metric if $K(v) = 0$. A PL metric is flat if all interior vertices are flat. \begin{definition}[\cite{BS}]\label{discrere conformal for idcp} Suppose $(S, \mathcal{T})$ is a triangulated surface with a weight $I: E\rightarrow (-1, +\infty)$. A PL metric $l: E\rightarrow \mathbb{R}_{>0}$ on $(S, \mathcal{T})$ is an inversive distance circle packing metric on the weighted triangulated surface $(S, \mathcal{T}, I)$ if there exists a function $u: V \to \mathbb{R}$ such that for any edge $e\in E$ with vertices $v$ and $v'$, the length $ l(e)$ is given by \begin{equation} \label{length1 introduction} l(e)=\sqrt{e^{2u(v)}+e^{2u(v')}+2 I(e)e^{u(v)+u(v')}}. \end{equation} The function $u: V \to \mathbb{R}$ is called a label on $(S, \mathcal{T}, I)$. Two inversive distance circle packing metrics $(S, \mathcal{T}, I, l)$ and $(S, \mathcal{T}, \tilde I, \tilde l)$ are conformally equivalent if $I = \tilde I$. In this case, we set $w=\tilde{u}-u$ and denote this relation as $l^*= \tilde l =w*l$. The function $w$ is called a discrete conformal factor on $(S, \mathcal{T}, I, l)$. \end{definition} If we set $r(v)=e^{u(v)}$ for $v\in V$, then the weight $I(e)$ in (\ref{length1 introduction}) is the inversive distance of the two circles centered at $v$ and $v'$ with radii $r(v)$ and $r(v')$ respectively. The map $r: V\rightarrow (0, +\infty)$ is referred as an \textit{inversive distance circle packing} on the weighted triangulated surface $(S, \mathcal{T}, I)$. Thurston's circle packing \cite{Th} is a special type of inversive distance circle packing with $I\in [0,1]$ in (\ref{length1 introduction}). An excellent source for the comprehensive theory of circle packings is \cite{St}. 
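To illustrate the geometry encoded by the weight (this remark is a direct consequence of (\ref{length1 introduction}) and is not part of Bowers-Stephenson's original definition): if $I(e)>1$, then
\begin{equation*}
l(e)^2=r(v)^2+r(v')^2+2I(e)r(v)r(v')>r(v)^2+r(v')^2+2r(v)r(v')=\left(r(v)+r(v')\right)^2,
\end{equation*}
so that $l(e)>r(v)+r(v')$ and the circles centered at $v$ and $v'$ are disjoint, while $I(e)=1$ corresponds to two externally tangent circles as in Thurston's circle packings.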
The main focus of this paper is to provide an affirmative answer to Bowers-Stephenson's conjecture on the convergence of discrete conformal maps induced by inversive distance circle packings to the Riemann mapping for Jordan domains. Specifically, let $\Omega$ be a Jordan domain in the plane with three distinct boundary points $p,q,r$ specified. By the Riemann mapping theorem, there exists a conformal map from $\Omega$ to the interior of an equilateral Euclidean triangle $\triangle ABC$ with unit edge length, which could be uniquely extended to be a homeomorphism $g$ from $\overline{\Omega}$ to $\triangle ABC$ with $p,q,r$ sent to $A, B,C$ respectively by Caratheodory's extension theorem \cite{P book}. The map $g$ and $g^{-1}$ are referred as the \textit{Riemann mapping} for $(\Omega, (p,q,r))$. Let $(D,\mathcal{T}, I)$ be an oriented weighted polygonal disk in the plane with three distinct boundary vertices $p,q,r$ and $l$ be a flat inversive distance circle packing metric on $(D,\mathcal{T}, I)$. Suppose that there exists a function $w: V\rightarrow \mathbb{R}$ such that $l^*=w*l$ is an inversive distance circle packing metric on $(D,\mathcal{T}, I)$ with total area $\frac{\sqrt{3}}{4}$, combinatorial curvature $\frac{2\pi}{3}$ at $p,q,r$, and flat at other vertices. Then $(D,\mathcal{T}, l^*)$ is isometric to a triangulated unit equilateral triangle $(\triangle ABC, \mathcal{T}')$ with some triangulation $\mathcal{T}'$ and the standard flat metric. Let $f$ be the orientation-preserving piecewise linear map induced by the map sending the vertices of $\mathcal{T}$ to the corresponding vertices of $\mathcal{T}'$ such that $f(A) = p$, $f(B) = q$ and $f(C) = r$. The map $f$ is called the \textit{discrete conformal map} associated to $(D,\mathcal{T}, I, l, \{p,q,r\})$. We prove the following theorem on the convergence of discrete conformal maps induced by a specific sequence of inversive distance circle packings on $\Omega$. 
\begin{theorem}\label{conv introduction} Let $\Omega$ be a Jordan domain in the complex plane with three distinct boundary points $p,q, r$ specified. Let $f$ be the Riemann mapping from the equilateral triangle $\triangle ABC$ to $\overline{\Omega}$ such that $f(A) = p$, $f(B) = q$, $f(C) = r$. Then there exists a sequence of weighted triangulated polygonal disks $(\Omega_{n}, \mathcal{T}_n, I_n, (p_n, q_n, r_n))$ with inversive distance circle packing metrics $l_n$, where $\mathcal{T}_n$ is a triangulation of $\Omega_n$, $I_n: E_n\rightarrow (1, +\infty)$ is a weight defined on $E_n = E( \mathcal{T}_n$) and $p_n, q_n, r_n$ are three distinct boundary vertices of $\mathcal{T}_n$, such that \begin{enumerate} \item[(a)] $\Omega=\cup_{n=1}^{\infty} \Omega_n$ with $\Omega_n \subset \Omega_{n+1}$, and $\lim_n p_n =p$, $\lim_n q_n =q$, $\lim_n r_n=r$. \item[(b)] discrete conformal maps $f_n$ from $\triangle ABC$ to $(\Omega_{n}, \mathcal{T}_n,I_n, l_n)$ with $f_n(A) = p_n$, $f_n(B) = q_n$, $f_n(C) = r_n$ exist. \item[(c)] discrete conformal maps $f_n$ converge uniformly to the Riemann mapping $f$. \end{enumerate} \end{theorem} In comparison with Rodin-Sullivan's convergence theorem for circle packings in \cite{RS}, which allows the approximating triangulated polygonal disks to be arbitrarily selected, Theorem \ref{conv introduction} requires that the approximating weighted triangulated polygonal disks should be carefully selected. The key difference is that the discrete conformal map does not exist for general inversive distance circle packings on weighted triangulated polygonal disks with inversive distance $I: E\rightarrow (1, +\infty)$, while Koebe-Andreev-Thurston theorem ensures the existence of discrete conformal maps for any circle packings on triangulated polygonal disks. In the rest of this paper, we assume that $I:E\to (1,+\infty)$ unless otherwise stated. This condition corresponds to the ``S-packings" introduced by Bowers-Stephenson \cite{BS}. 
The paper is organized as follows. In Section \ref{section 2}, we give some preliminaries on inversive distance circle packings and weighted Delaunay triangulations. In Section \ref{section 3}, we derive a maximal principle and a ring lemma for inversive distance circle packings. We also study the properties of inversive distance circle packings on spiral hexagonal triangulations in this section. In Section \ref{section 4}, we prove the rigidity of infinite inversive distance circle packings on the hexagonal triangulated plane. In Section \ref{section 5}, we solve some prescribing combinatorial curvature problem for inversive distance circle packings and prove Theorem \ref{conv introduction}. \textbf{Acknowledgement.} The research of Xu Xu is supported by the Fundamental Research Funds for the Central Universities under Grant No. 2042020kf0199. \section{Inversive distance circle packings and weighted Delaunay triangulations}\label{section 2} In this section, we collect some basic properties of inversive distance circle packings and weighted Delaunay triangulations. We first describe the admissible space of inversive distance circle packings on a triangle and the variation of inner angles in this space. Then we discuss a notion of generalized weighted Delaunay triangulations and their relationships with inversive distance circle packings. \subsection{Basic properties of inversive distance circle packings} Let $(S, \mathcal{T}, I)$ be a weighted triangulated surface. We use $v_i$ to denote a vertex in $V$, $e_{ij} = v_iv_j$ to denote an edge in $E$ and $\triangle v_iv_jv_k$ to denote a face in $F$. We will denote $f_i = f(v_i)$ if $f$ is a function defined on $V$, $f_{ij} = f(v_iv_j) = f(e_{ij})$ if $f$ is a function defined on $E$, and $f_{ijk} = f(\triangle v_iv_jv_k)$ if $f$ is a function defined on $F$. For any function $u: V\rightarrow \mathbb{R}$, the formula (\ref{length1 introduction}) produces a positive function $l$ on $E$. 
However, for a face $\triangle v_iv_jv_k$ in $(S, \mathcal{T}, I)$, the positive numbers $l_{ij}, l_{ik}, l_{jk}$ may not satisfy the \textit{strict triangle inequality} \begin{equation}\label{strict triangle inequality} l_{rs}< l_{rt}+l_{st}, \{r,s,t\}=\{i,j,k\}. \end{equation} The label $u: V\rightarrow \mathbb{R}$ is said to be \textit{admissible} if the function $l: E\rightarrow (0, +\infty)$ determined by $u: V\rightarrow \mathbb{R}$ via the formula (\ref{length1 introduction}) satisfies the strict triangle inequality (\ref{strict triangle inequality}) for every face in $(S, \mathcal{T}, I)$. We also say that the corresponding inversive distance circle packing $r:V\rightarrow \mathbb{R}_{>0}$ on $(S, \mathcal{T}, I)$ with $r_i=e^{u_i}$ is admissible, if it causes no confusion in the context. The admissible space of inversive distance circle packings on $(S, \mathcal{T}, I)$ consists of all the admissible inversive distance circle packings on $(S, \mathcal{T}, I)$. For an admissible inversive distance circle packing $r$ on $(S, \mathcal{T}, I)$, every face in $(S, \mathcal{T}, I)$ is isometric to a \textit{non-degenerate} Euclidean triangle with edge lengths given by (\ref{length1 introduction}). We also say that $r: V\rightarrow \mathbb{R}_{>0}$ generates a PL metric on $(S, \mathcal{T}, I)$ for simplicity in this case. If three positive numbers $l_{ij}, l_{ik}, l_{jk}$ satisfy the \textit{triangle inequality } \begin{equation}\label{triangle inequality} l_{rs}\leq l_{rt}+l_{st}, \{r,s,t\}=\{i,j,k\}, \end{equation} then $l_{ij}, l_{ik}, l_{jk}$ generate a \textit{generalized Euclidean triangle} $\triangle v_iv_jv_k$. If $l_{ij}=l_{ik}+l_{jk}$, the generalized triangle $\triangle v_iv_jv_k$ is flat at $v_k$, and the inner angle at $v_k$ is defined to be $\pi$. In this case, the generalized triangle is referred as a \textit{degenerate triangle}. 
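As a concrete (hypothetical) instance of this failure, take a face $\triangle v_1v_2v_3$ with weights $I_1=I_2=I_3=2$, where $I_k$ denotes the weight on the edge opposite to $v_k$, and radii $(r_1,r_2,r_3)=(1,1,\tfrac{1}{10})$. Then (\ref{length1 introduction}) gives
\begin{equation*}
l_{12}=\sqrt{1+1+4}=\sqrt{6}\approx 2.449,\qquad l_{13}=l_{23}=\sqrt{1+\tfrac{1}{100}+\tfrac{2}{5}}=\sqrt{1.41}\approx 1.187,
\end{equation*}
so $l_{13}+l_{23}\approx 2.375<l_{12}$ and the strict triangle inequality (\ref{strict triangle inequality}) fails. Hence not every label $u: V\rightarrow \mathbb{R}$ is admissible.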
A function $l: E\rightarrow \mathbb{R}_{>0}$ is called a \textit{generalized PL metric} on $(S,\mathcal{T})$ if the triangle inequality (\ref{triangle inequality}) is satisfied for every face in $(S,\mathcal{T})$. A PL metric is a special type of generalized PL metric with the strict triangle inequality (\ref{strict triangle inequality}) for every face in $(S,\mathcal{T})$. The combinatorial curvature of generalized PL metrics is defined the same as that of PL metrics and still satisfies the discrete Gauss-Bonnet formula (\ref{discrete Gauss-Bonnet formula}). A generalized PL metric $l:E\rightarrow \mathbb{R}_{>0}$ is called a \textit{generalized inversive distance circle packing metric} on a weighted triangulated surface $(S, \mathcal{T}, I)$ if there exists a map $u: V\rightarrow \mathbb{R}$ such that $l$ is determined by $u$ via the formula (\ref{length1 introduction}). In this case, the map $r: V\rightarrow \mathbb{R}_{>0}$ with $r_i=e^{u_i}$ is said to be a generalized inversive distance circle packing on $(S, \mathcal{T}, I)$. We will denote it as $(S, \mathcal{T}, I, l)$, $(S, \mathcal{T}, I, u)$, or $(S, \mathcal{T}, I, r)$ interchangeably. We have a characterization of the admissible space of inversive distance circle packings on a weighted triangle and an extension of inner angles for generalized triangles generated by generalized inversive distance circle packings. \begin{lemma}[\cite{Guo, Xu AIM, Xu MRL}]\label{basic property I of IDCP} Let $\triangle v_1v_2v_3$ be a face in $(S, \mathcal{T})$ with three weights $I_1,I_2,I_3\in (1, +\infty)$ defined on edges opposite to the vertices $v_1, v_2, v_3$ respectively. Let $u: \{v_1, v_2, v_3\}\rightarrow \mathbb{R}$ be a function defined on the vertices, inducing edge lengths by \begin{equation} \label{length2} l_{ij}= \sqrt{e^{2u_i}+e^{2u_j}+2e^{u_i+u_j}I_{k}} = \sqrt{r_i^2+r_j^2+2r_ir_jI_{k}}, \end{equation} where $r_i=e^{u_i}$, $\{i,j,k\}=\{1,2,3\}$. 
\begin{enumerate} \item[(a)] $l_{12}, l_{13}, l_{23}$ generate a non-degenerate Euclidean triangle if and only if \begin{equation} \label{definitionQ} Q:=\kappa_1^2(1-I^2_{1})+\kappa_2^2(1-I^2_{2})+\kappa_3^2(1-I^2_{3}) +2\kappa_1\kappa_2\gamma_{3}+2\kappa_1\kappa_3\gamma_{2}+2\kappa_2\kappa_3\gamma_{1}>0, \end{equation} where $\gamma_{i}:=I_{i}+I_{j}I_{k}$ and $\kappa_i:=r_i^{-1}$. They generate a degenerate Euclidean triangle if and only if $Q= 0$. \item[(b)] The admissible space $\Omega_{123}$ of inversive distance circle packings $(r_1, r_2, r_3)\in \mathbb{R}^3_{>0}$ on $\triangle v_1v_2v_3$ is $$\Omega_{123}=\mathbb{R}^3_{>0}\setminus \sqcup_{i=1}^3V_i,$$ where $\sqcup_{i=1}^3V_i$ is a disjoint union of $$V_i=\left\{(r_1, r_2, r_3)\in \mathbb{R}^3_{>0}|\kappa_i\geq \frac{-B_i+\sqrt{\Delta_i}}{2A_i}\right\}$$ with \begin{equation} \label{discriminant} \begin{aligned} A_i=&I^2_{i}-1,\\ B_i=&-2(\kappa_j\gamma_{k}+\kappa_{k}\gamma_j),\\ \Delta_i =&4(I_1^2+I_2^2+I_{3}^2+2I_1I_2I_{3}-1)(\kappa_j^2+\kappa_{k}^2+2\kappa_j\kappa_{k}I_i). \end{aligned} \end{equation} Let $\theta_i$ be the inner angle of $\triangle v_1v_2v_3$ at $v_i$, then the inner angles of $\triangle v_1v_2v_3$ could be uniquely continuously extended by constants as follows \begin{equation*} \begin{aligned} \widetilde{\theta}_i(r_1,r_2,r_3)=\left\{ \begin{array}{ll} \theta_i, & \hbox{if $(r_1,r_2,r_3)\in \Omega_{123}$;} \\ \pi, & \hbox{if $(r_1,r_2,r_3)\in V_i$;} \\ 0, & \hbox{otherwise.} \end{array} \right. \end{aligned} \end{equation*} \end{enumerate} \end{lemma} \begin{corollary}[\cite{Guo, Xu AIM, Xu MRL}]\label{simply connect of admi space with weight} If $v_i$ is the flat vertex of the degenerate triangle $\triangle v_1v_2v_3$ generated by $(r_1,r_2,r_3)\in \mathbb{R}^3_{>0}$, then $(r_1,r_2,r_3)\in \partial V_i$, i.e. 
$\kappa_i=\frac{-B_i+\sqrt{\Delta_i}}{2A_i}.$ \end{corollary} The following lemma describes the variation of the inner angles along PL metrics generated by smooth families of labels on $(S, \mathcal{T}, I)$. \begin{lemma}[\cite{Guo, Xu AIM, Xu MRL}] \label{derivative angles} Let $\triangle v_1v_2v_3$ be a face in $(S, \mathcal{T}, I)$ given by Lemma \ref{basic property I of IDCP}. \begin{enumerate} \item[(a)] Suppose that the label $u\in \mathbb{R}^3$ induces a non-degenerate Euclidean triangle $\triangle v_1v_2v_3$. Then \begin{equation} \label{angle deform} \frac{\partial \theta_i}{\partial u_j}=\frac{\partial \theta_j}{\partial u_i}=\frac{h_{ij,k}}{l_{ij}}, \ \ \ \frac{\partial \theta_i}{\partial u_i}=-\frac{\partial \theta_i}{\partial u_j}-\frac{\partial \theta_i}{\partial u_k}<0, \end{equation} where \begin{equation}\label{h_ij,k} \begin{aligned} h_{ij,k} =\frac{r_1^2r_2^2r_3^2}{A_{123}l_{ij}}[\kappa_k^2(1-I_k^2)+\kappa_j\kappa_k\gamma_{i}+\kappa_i\kappa_k\gamma_{j}] =\frac{r_1^2r_2^2r_3^2}{A_{123}l_{ij}}\kappa_kh_k \end{aligned} \end{equation} with $A_{123}=l_{12}l_{13}\sin\theta_1$ and \begin{equation} \label{h_i} \begin{aligned} h_k=\kappa_k(1-I_k^2)+\kappa_i\gamma_{j}+\kappa_j\gamma_{i}. \end{aligned} \end{equation} \item[(b)] If $u=(u_1, u_2, u_3)\in \mathbb{R}^3$ is not admissible, then one of $h_1, h_2, h_3$ is negative and the other two are positive. In particular, if $u\in \mathbb{R}^3$ generates a degenerate triangle $\triangle v_1v_2v_3$ having $v_3$ as the flat vertex, then $h_1>0, h_2>0, h_3<0$ at $u$. Moreover, in this case, $$h_{12,3}\rightarrow -\infty, h_{13,2}\rightarrow +\infty, h_{23,1}\rightarrow +\infty$$ as $(\tilde{r}_1, \tilde{r}_2, \tilde{r}_3)\in \Omega_{123}$ tends to $(r_1, r_2, r_3)=(e^{u_1}, e^{u_2},e^{u_3})\in \partial \Omega_{123}$. \end{enumerate}
\end{lemma} Note that $h_{ij,k}$ is only defined for non-degenerate inversive distance circle packings $(r_1, r_2, r_3) \in \Omega_{123}\subseteq \mathbb{R}^3_{>0}$, while $h_i$ is defined for any $(r_1, r_2, r_3)\in \mathbb{R}^3_{>0}$. For a non-degenerate inversive distance circle packing metric $l$ on $(S, \mathcal{T}, I)$, set $\eta_{ij}^k = h_{ij,k}/l_{ij}$ and define the \textit{conductance} $\eta: E\rightarrow \mathbb{R}$ for $(S, \mathcal{T}, I, l)$ by \begin{equation} \label{definitioneta} \eta_{ij} = \begin{cases} \eta_{ij}^k + \eta_{ij}^m , & v_iv_j \text{ is an interior edge contained in } \triangle v_iv_jv_k \text{ and } \triangle v_iv_jv_m;\\ \eta_{ij}^k , & v_iv_j \text{ is a boundary edge contained in } \triangle v_iv_jv_k. \end{cases} \end{equation} As a direct corollary of formula (\ref{angle deform}), we have the following variation of combinatorial curvatures. \begin{corollary}[\cite{Guo, Xu AIM, Xu MRL}] Suppose $w(t)*l$ is a family of inversive distance circle packing metrics on $ ({S},\mathcal{T}, I) $ induced by a smooth family of discrete conformal factor $w(t) \in \mathbb{R}^V$. Let $K(t)$ and $\eta(t)$ be the combinatorial curvature and the conductance of $ ({S},\mathcal{T}, I, w(t)*l)$. Then \begin{equation} \label{curvature} \frac{dK_i(t)}{dt} = \sum_{j\sim i}\eta_{ij}(t)(\frac{dw_i}{dt} - \frac{dw_j}{dt}). \end{equation} \end{corollary} We prove the following results on inversive distance circle packings. \begin{proposition} \label{interval} Let $\triangle v_1v_2v_3$ be a face in $(S, \mathcal{T}, I)$ given by Lemma \ref{basic property I of IDCP}. \begin{enumerate} \item[(a)] For any fixed $r_i, r_j\in (0, +\infty)$, the set of $r_k\in (0, +\infty)$ such that $(r_i, r_j, r_k)$ is an admissible inversive distance circle packing on $\triangle v_1v_2v_3$ is an open interval. 
As a result, if $ (r_i, r_j, \hat{r}_k)$ and $ (r_i, r_j, \bar{r}_k)$ are two generalized inversive distance circle packings on $\triangle v_1v_2v_3$ with $\hat{r}_k<\bar{r}_k$, then for any $r_k\in (\hat{r}_k,\bar{r}_k)$, $ (r_i, r_j, r_k)$ generates a non-degenerate triangle $\triangle v_1v_2v_3$. \item[(b)] If $\triangle v_1v_2v_3$ generated by $(r_1,r_2,r_3)\in \mathbb{R}^3_{>0}$ is a degenerate triangle having $v_3$ as the flat vertex, then there exists $\epsilon>0$ such that $(r_1,r_2,r_3+t)\in \Omega_{123}$ and $$\frac{\partial h_{12,3}}{\partial r_3}(r_1, r_2, r_3+t)>0$$ for $t\in (0, \epsilon)$. \end{enumerate} \end{proposition} \begin{proof} To prove part (a), without loss of generality, set $\{i,j\}=\{2,3\}$, $k=1$ and $$ f(\kappa_1)=(1-I^2_{1})\kappa_1^2+2\kappa_1(\kappa_2\gamma_{3}+\kappa_3\gamma_{2})+ \kappa_2^2(1-I^2_{2})+\kappa_3^2(1-I^2_{3})+2\kappa_2\kappa_3\gamma_{1}. $$ By Lemma \ref{basic property I of IDCP} (a), we need to show that the solution of $f(\kappa_1)>0$ with $\kappa_1\in (0, +\infty)$ is an open interval. The inequality $f(\kappa_1)>0$ is equivalent to the following quadratic inequality $$ (I^2_{1}-1)\kappa_1^2-2\kappa_1(\kappa_2\gamma_{3}+\kappa_3\gamma_{2})- \kappa_2^2(1-I^2_{2})-\kappa_3^2(1-I^2_{3})-2\kappa_2\kappa_3\gamma_{1}<0. $$ By $I>1$, we have \begin{equation*} -\frac{b}{2a}=\frac{\kappa_2\gamma_{3}+\kappa_3\gamma_{2}}{I_1^2-1}> 0, \end{equation*} and the discriminant of the quadratic polynomial defined in (\ref{discriminant}) satisfies \begin{equation*} \Delta =4(I_1^2+I_2^2+I_3^2+2I_1I_2I_3-1)(\kappa_2^2+\kappa_3^2+2\kappa_2\kappa_3I_1)>0. \end{equation*} This implies that the solution of $f(\kappa_1)>0$ with $\kappa_1>0$ is an open interval in $(0, +\infty)$. To prove part (b), recall that the triangle $\triangle v_1v_2v_3$ is degenerate if and only if $Q= 0$ by Lemma \ref{basic property I of IDCP} (a), where $Q$ is defined by (\ref{definitionQ}). 
By direct calculations, we have $\frac{\partial Q}{\partial \kappa_3}=2h_3<0$ at $(r_1,r_2,r_3)$ by Lemma \ref{derivative angles} (b), which implies that $\frac{\partial Q}{\partial r_3}=\frac{\partial Q}{\partial \kappa_3}\frac{\partial \kappa_3}{\partial r_3}=-\frac{1}{r_3^2}\frac{\partial Q}{\partial \kappa_3}>0$ around $(r_1, r_2, r_3)$. Therefore, for small $t>0$, $Q(r_1,r_2,r_3+t)>0$ and $(r_1,r_2,r_3+t)$ generates a non-degenerate triangle. Using the identities $Q = \kappa_1h_1+\kappa_2h_2+\kappa_3h_3$ and $A_{123}^2 = r_1^2r_2^2r_3^2Q$, we can deduce from the definition (\ref{h_ij,k}) of $h_{12,3}$ that \begin{equation}\label{derivative of hijk} \frac{\partial h_{12,3}}{\partial \kappa_3}=\frac{r_1^2r_2^2r_3^{2}}{A_{123}^3l_{12}}[r_1^2r_2^2r_3^{2}(\kappa_1h_1+\kappa_2h_2)h_3 -A_{123}^2(\kappa_1\gamma_2+\kappa_2\gamma_1)]. \end{equation} Note that $v_3$ is the flat vertex of the degenerate triangle $\triangle v_1v_2v_3$ generated by $(r_1,r_2,r_3)$, then $A_{123}=0$ and $h_1>0,h_2>0, h_3<0$ at $(r_1,r_2,r_3)$ by Lemma \ref{derivative angles} (b), which implies that $\frac{\partial h_{12,3}}{\partial \kappa_3}<0$ around $(r_1,r_2,r_3)$ in the admissible space $\Omega_{123}$ by (\ref{derivative of hijk}). Note that $\frac{\partial h_{12,3}}{\partial r_3}=\frac{\partial h_{12,3}}{\partial \kappa_3}\frac{\partial \kappa_3}{\partial r_3}=-\frac{1}{r_3^2}\frac{\partial h_{12,3}}{\partial \kappa_3}.$ Therefore, there exists $\epsilon>0$ such that $\frac{\partial h_{12,3}}{\partial r_3}(r_1, r_2, r_3+t)>0$ for $t\in (0, \epsilon)$. \end{proof} \subsection{Weighted Delaunay triangulations} Weighted Delaunay triangulations are natural generalizations of the classical Delaunay triangulations, where the sites generating the corresponding Voronoi decomposition are disks instead of points. It has wide applications in computational geometry. See \cite{AK, Ed} and others. 
In this subsection, we propose an alternative characterization of weighted Delaunay triangulations for inversive distance circle packing metrics and generalize weighted Delaunay triangulations from non-degenerate inversive distance circle packing metrics to generalized inversive distance circle packing metrics. Assume $r: V\rightarrow (0, +\infty)$ is a non-degenerate inversive distance circle packing on a weighted triangulated surface $(S, \mathcal{T}, I)$. Let $\triangle v_1v_2v_3$ be a Euclidean triangle in the plane isometric to a face in $(S, \mathcal{T}, I, r)$. Then there exists a unique geometric center $C_{123}$ such that its power distances to $v_i$, defined by $|C_{123}-v_i|^2 - r_i^2$, are equal for $i = 1,2,3.$ Projections of the geometric center $C_{123}$ to the lines $v_1v_2, v_1v_3, v_2v_3$ give rise to the geometric centers of these edges, which are denoted by $C_{12}, C_{13}, C_{23}$ respectively. Please refer to Figure \ref{figure1}. One can refer to \cite{Glickenstein DCG, Glickenstein JDG, Glickenstein preprint, GT} for more information on the geometric center generated by discrete conformal structures on manifolds. \begin{figure} \caption{Signed distances of the geometric center. } \label{figure1} \end{figure} Denote by $d_{ij}$ the signed distance from $C_{ij}$ to the vertex $v_i$ and by $h_{ij,k}$ the signed distance from $C_{123}$ to the edge $v_iv_j$. Glickenstein \cite{Glickenstein JDG} obtained the following identities \begin{equation}\label{d} d_{ij}=\frac{r_i^2+r_ir_jI_{ij}}{l_{ij}}, \quad h_{ij,k} = \frac{d_{ik} - d_{ij}\cos \theta_i}{\sin \theta_i}. \end{equation} Note that $d_{ij}\in \mathbb{R}_{>0}$ can be defined by (\ref{d}) independently of the existence of the geometric center $C_{ijk}$, and $h_{ij,k}$ is symmetric in the indices $i$ and $j$, while $d_{ij}$ is not.
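The identities (\ref{d}) and the symmetry of $h_{ij,k}$ in $i$ and $j$ admit a quick numerical sanity check. The following sketch uses made-up radii and weights (the data and helper names are illustrative, not from the paper); the inner angles are recovered from the edge lengths by the law of cosines.

```python
import math

# Sanity check (hypothetical data): compute d_ij and h_ij,k from the
# identities d_ij = (r_i^2 + r_i r_j I_ij)/l_ij and
# h_ij,k = (d_ik - d_ij cos(theta_i))/sin(theta_i), and verify that
# h_ij,k is symmetric in i and j while d_ij is not.

r = {1: 1.0, 2: 0.7, 3: 1.3}                 # radii (made up)
I = {frozenset({1, 2}): 1.5,                 # inversive distances, all > 1
     frozenset({1, 3}): 2.0,
     frozenset({2, 3}): 1.2}

def l(i, j):
    # edge length of the inversive distance circle packing
    return math.sqrt(r[i]**2 + r[j]**2 + 2*r[i]*r[j]*I[frozenset({i, j})])

def angle(i, j, k):
    # inner angle at v_i, by the law of cosines
    a, b, c = l(j, k), l(i, j), l(i, k)
    return math.acos((b**2 + c**2 - a**2) / (2*b*c))

def d(i, j):
    return (r[i]**2 + r[i]*r[j]*I[frozenset({i, j})]) / l(i, j)

def h(i, j, k):
    t = angle(i, j, k)
    return (d(i, k) - d(i, j)*math.cos(t)) / math.sin(t)

# C_ij splits the edge v_i v_j: d_ij + d_ji = l_ij
assert abs(d(1, 2) + d(2, 1) - l(1, 2)) < 1e-12
# h_ij,k computed from v_i agrees with the value computed from v_j
assert abs(h(1, 2, 3) - h(2, 1, 3)) < 1e-9
# d_ij is not symmetric
assert abs(d(1, 2) - d(2, 1)) > 1e-3
```

The first assertion reflects that $C_{ij}$ is the point on the edge $v_iv_j$ with equal power distances to the two vertex circles, which is why the perpendiculars through the $C_{ij}$ meet at the common center $C_{123}$.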
For a weighted triangulated surface with a non-degenerate inversive distance circle packing $(S, \mathcal{T}, I, r)$, an interior edge $v_iv_j$ is \textit{weighted Delaunay} if $h_{ij,k} + h_{ij,l}\geq 0,$ where $\triangle v_iv_jv_k$ and $\triangle v_iv_jv_l$ are two triangles in $F$ sharing the common edge $v_iv_j$. And $(S, \mathcal{T}, I, r)$ is weighted Delaunay if all the interior edges are weighted Delaunay. Note that weighted Delaunay triangulations are only defined for non-degenerate inversive distance circle packings. We need to introduce the definition of weighted Delaunay triangulations for generalized inversive distance circle packing metrics. To this end, we introduce the following notion. \begin{definition}\label{theta} Let $r\in \mathbb{R}^V_{>0}$ be a generalized inversive distance circle packing on a weighted triangulated surface $(S, \mathcal{T}, I)$. Suppose that $\triangle v_1v_2v_3$ is a generalized triangle in $(S, \mathcal{T}, I, r)$. If $\triangle v_1v_2v_3$ is non-degenerate, define $$\theta_{ij,k} = \arctan\frac{h_{ij,k}}{d_{ij}}.$$ If $\triangle v_1v_2v_3$ is degenerate, define $\theta_{ij,k}$ as \[ \theta_{ij,k}= \left\{ \begin{array}{cr} \frac{\pi}{2}, & \text{if $v_i$ or $v_{j}$ is the flat vertex,}\\ -\frac{\pi}{2}, & \text{if $v_k$ is the flat vertex.}\\ \end{array} \right. \] \end{definition} Note that for a non-degenerate triangle $\triangle v_1v_2v_3$ in $(S, \mathcal{T}, I, r)$, $\theta_{ij,k}$ is in fact the signed angle $\angle v_jv_iC_{ijk}$, which is negative if $h_{ij,k}<0$ and non-negative otherwise. Please refer to Figure \ref{figure2} for this. \begin{figure} \caption{The angle $\theta_{ij,k}$ when $h_{ij,k}<0$ (left) and $h_{ij,k}>0$ (right).} \label{figure2} \end{figure} For non-degenerate inversive distance circle packings on a weighted triangle $\triangle v_1v_2v_3$, $\theta_{ij,k}$ is a continuous function of $(r_1,r_2,r_3)\in \Omega_{123}$ and satisfies $\theta_{ij,k}+\theta_{ik,j}=\theta_i$. 
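The angle-sum identity $\theta_{ij,k}+\theta_{ik,j}=\theta_i$ can also be checked numerically on a non-degenerate example. The sketch below uses made-up radii and weights (not data from the paper) and recovers the inner angles from the edge lengths via the law of cosines.

```python
import math

# Numerical check (hypothetical data) that theta_{ij,k} = arctan(h_{ij,k}/d_ij)
# satisfies theta_{ij,k} + theta_{ik,j} = theta_i on a non-degenerate triangle.

r = {1: 0.8, 2: 1.1, 3: 0.9}
I = {frozenset({1, 2}): 1.4,
     frozenset({1, 3}): 1.9,
     frozenset({2, 3}): 1.3}

def l(i, j):
    return math.sqrt(r[i]**2 + r[j]**2 + 2*r[i]*r[j]*I[frozenset({i, j})])

def inner_angle(i, j, k):
    a, b, c = l(j, k), l(i, j), l(i, k)   # law of cosines at v_i
    return math.acos((b**2 + c**2 - a**2) / (2*b*c))

def d(i, j):
    return (r[i]**2 + r[i]*r[j]*I[frozenset({i, j})]) / l(i, j)

def theta_signed(i, j, k):
    t = inner_angle(i, j, k)
    h = (d(i, k) - d(i, j)*math.cos(t)) / math.sin(t)   # Glickenstein's identity
    return math.atan(h / d(i, j))

for i, j, k in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
    assert abs(theta_signed(i, j, k) + theta_signed(i, k, j)
               - inner_angle(i, j, k)) < 1e-9
```

Since $d_{ij}>0$, the signed angle $\angle v_jv_iC_{ijk}$ always lies in $(-\frac{\pi}{2},\frac{\pi}{2})$, so the arctangent recovers it exactly and the decomposition of $\theta_i$ at $v_i$ gives the identity.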
We further have the following property on $\theta_{ij,k}$ for generalized inversive distance circle packings on a weighted triangle. \begin{lemma}\label{theta continuous} Suppose $\triangle v_1v_2v_3$ is a face in a weighted triangulated surface $(S, \mathcal{T}, I)$. Then $\theta_{ij,k}(r_1, r_2, r_3)$ is a continuous function defined on $\overline{\Omega_{123}}$ and satisfies \begin{equation}\label{theta++theta-=theta} \theta_{ij,k}+\theta_{ik,j}=\theta_i. \end{equation} \end{lemma} \begin{proof} We just need to prove that $\theta_{ij,k}(r_1,r_2,r_3)\rightarrow \theta_{ij,k}(\bar{r}_1,\bar{r}_2,\bar{r}_3)$ as $(r_1,r_2,r_3)\in \Omega_{123}$ tends to a point $(\bar{r}_1,\bar{r}_2,\bar{r}_3)\in \partial\Omega_{123}$. If $v_k$ is the flat vertex of the degenerate triangle $\triangle v_1v_2v_3$ generated by $(\bar{r}_1,\bar{r}_2,\bar{r}_3)$, then $h_{ij,k}(r_1,r_2,r_3)\rightarrow -\infty$ as $(r_1,r_2,r_3)\rightarrow(\bar{r}_1,\bar{r}_2,\bar{r}_3)$ by Lemma \ref{derivative angles}. As a result, we have $\theta_{ij,k}(r_1,r_2,r_3)=\arctan\frac{h_{ij,k}}{d_{ij}}\rightarrow -\frac{\pi}{2}=\theta_{ij,k}(\bar{r}_1,\bar{r}_2,\bar{r}_3)$ by Definition \ref{theta}. If $v_i$ is the flat vertex of the degenerate triangle $\triangle v_1v_2v_3$ generated by $(\bar{r}_1,\bar{r}_2,\bar{r}_3)$, then $h_{ij,k}(r_1,r_2,r_3)\rightarrow +\infty$ as $(r_1,r_2,r_3)\rightarrow(\bar{r}_1,\bar{r}_2,\bar{r}_3)$. As a result, we have $\theta_{ij,k}(r_1,r_2,r_3)\rightarrow \frac{\pi}{2}=\theta_{ij,k}(\bar{r}_1,\bar{r}_2,\bar{r}_3)$ as $(r_1,r_2,r_3)\rightarrow(\bar{r}_1,\bar{r}_2,\bar{r}_3)$ by Definition \ref{theta}. The same argument applies to the case that $v_j$ is the flat vertex. \end{proof} Weighted Delaunay triangulations for non-degenerate inversive distance circle packings have a simple characterization using $\theta_{ij,k}$. 
\begin{corollary}\label{g-Delaunay lemma} Suppose $r\in \mathbb{R}^V_{>0}$ is a non-degenerate inversive distance circle packing on a weighted triangulated surface $(S, \mathcal{T}, I)$. An edge $v_iv_j\in E$ is shared by two adjacent non-degenerate triangles $\triangle v_iv_jv_k$ and $\triangle v_iv_jv_l$ in $(S, \mathcal{T}, I, r)$. Then the edge $v_iv_j$ is weighted Delaunay in the inversive distance circle packing $r$ if and only if $$\theta_{ij,k}+\theta_{ij,l}\ge0.$$ \end{corollary} \begin{proof} Since $\theta_{ij,k}=\arctan\frac{h_{ij,k}}{d_{ij}}\in (-\frac{\pi}{2},\frac{\pi}{2})$ and $\theta_{ij,l}=\arctan\frac{h_{ij,l}}{d_{ij}}\in (-\frac{\pi}{2},\frac{\pi}{2})$ by Definition \ref{theta}, we have $$\frac{h_{ij,k}+h_{ij,l}}{d_{ij}}=\tan\theta_{ij,k}+\tan\theta_{ij,l}=\frac{\sin(\theta_{ij,k}+\theta_{ij,l})}{\cos\theta_{ij,k}\cos\theta_{ij,l}},$$ which implies that $h_{ij,k}+h_{ij,l}\ge0$ is equivalent to $\theta_{ij,k}+\theta_{ij,l}\ge0$ by $d_{ij}>0$. \end{proof} \begin{remark}\label{h>0 equavalent to theta>0} Under the conditions in Corollary \ref{g-Delaunay lemma}, we further have that $h_{ij,k}+h_{ij,l}>0$ is equivalent to $\theta_{ij,k}+\theta_{ij,l}>0$ for non-degenerate inversive distance circle packings. \end{remark} Note that $h_{ij,k}$ is only defined for non-degenerate inversive distance circle packings, while $\theta_{ij,k}$ can be defined for generalized inversive distance circle packings. We introduce the following definition of weighted Delaunay triangulations for generalized inversive distance circle packings, which generalizes the classical definition of weighted Delaunay triangulations for non-degenerate inversive distance circle packings. \begin{definition}\label{defn of weighted Delaunay for generalized IDCP} Suppose $r: V\rightarrow (0, +\infty)$ is a generalized inversive distance circle packing on a weighted triangulated surface $(S, \mathcal{T}, I)$.
Let $v_iv_j\in E$ be an interior edge shared by two adjacent triangles $\triangle v_iv_jv_k$ and $\triangle v_iv_jv_l$ in $\mathcal{T}$. The edge $v_iv_j$ is weighted Delaunay in the generalized inversive distance circle packing $r$ if $$\theta_{ij,k}+\theta_{ij,l}\ge0.$$ The triangulation $\mathcal{T}$ is weighted Delaunay in the generalized inversive distance circle packing $r$ if every interior edge is weighted Delaunay. \end{definition} For simplicity, we also say that $r$ is a generalized weighted Delaunay inversive distance circle packing on $(S, \mathcal{T}, I)$ if $\mathcal{T}$ is weighted Delaunay in $r$. We further have the following monotonicity for the angle $\theta_{ij,k}$ in Definition \ref{theta}. \begin{lemma}\label{monotonicity} Suppose $\triangle v_1v_2v_3$ is a face in a weighted triangulated surface $(S, \mathcal{T}, I)$. Let $ (r_1, r_2, \hat{r}_3)$ and $ (r_1, r_2, \bar{r}_3)$ be two generalized inversive distance circle packings on $\triangle v_1v_2v_3$ with $\hat{r}_3<\bar{r}_3$. If $r_1$ and $r_2$ are fixed, then $\theta_{12,3}$ is strictly increasing in $r_3\in [\hat{r}_3,\bar{r}_3]$. \end{lemma} \begin{proof} By Proposition \ref{interval} (a), $ (r_1, r_2, r_3)$ generates a non-degenerate triangle $\triangle v_1v_2v_3$ for $r_3\in (\hat{r}_3,\bar{r}_3)$. For $r_3\in (\hat{r}_3,\bar{r}_3)$, $h_{12,3}$ and $\theta_{12,3}$ are smooth functions of $r_3$. By the definition of $h_i$ and $\gamma_i$, we can deduce that \begin{equation}\label{hi-} \begin{aligned} \frac{\partial{h_{12,3}}}{\partial{\kappa_3}}&=\frac{r_1^2r_2^2r_3^2}{A_{123}^3l_{12}}[r_1^2r_2^2r_3^2(\kappa_1h_1+\kappa_2h_2)h_3-A_{123}^2(\kappa_2\gamma_1+\kappa_1\gamma_2)] \notag\\ &=\frac{ r_1^4r_2^4r_3^3}{A_{123}^3l_{12}}(1-I_{12}^2-I_{13}^2-I_{23}^2-2I_{12}I_{13}I_{23})(\kappa_1^2 + \kappa_2^2 + 2\kappa_1\kappa_2I_{12})<0.
\end{aligned} \end{equation} This implies $$\frac{\partial{\theta_{12,3}}}{\partial r_3}=-\frac{d_{12}\kappa_3^{2}}{d_{12}^2+(h_{12,3})^2}\cdot\frac{\partial{h_{12,3}}}{\partial{\kappa_3}}>0,\ \ \forall r_3\in (\hat{r}_3,\bar{r}_3)$$ by the definition of $\theta_{12,3}$. Since $\theta_{12,3}$ is a continuous function of $r_3\in [\hat{r}_3,\bar{r}_3]$ by Lemma \ref{theta continuous}, it follows that $\theta_{12,3}$ is strictly increasing in $r_3\in [\hat{r}_3,\bar{r}_3]$. \end{proof} \section{A maximal principle, a ring lemma and spiral hexagonal triangulations}\label{section 3} \subsection{A maximal principle} Let $P_n$ be a star-shaped $n$-sided polygon in the plane with boundary vertices $v_1,\cdots,v_n$ ordered cyclically ($v_{n+i} = v_i$). Assume $v_0$ is an interior point of $P_n$ and it induces a triangulation $\mathcal{T}$ of $P_n$ with triangles $\triangle v_0v_iv_{i+1}$. Then an assignment of radii $r:V(\mathcal{T}) \to \mathbb{R}_{>0}$ is a vector in $\mathbb{R}^{n+1}$. For any two vectors $x = (x_0,\dots, x_n)$ and $y = (y_0,\dots, y_n)$ in $\mathbb{R}^{n+1}$, we use $x\geq y$ to denote $x_i\geq y_i$ for all $i \in \{0, \dots, n\}$. \begin{figure} \caption{A star triangulation of a polygon.} \label{figure3} \end{figure} We have the following maximal principle for inversive distance circle packings. \begin{theorem}\label{Maximum principle} Let $\mathcal{T}$ be a star triangulation of $P_n$ with boundary vertices $v_1,\dots, v_n$ and a unique interior vertex $v_0$. Let $I:E\rightarrow (1, +\infty)$ be a weight.
Suppose $\overline{r}$ and $r$ are two generalized inversive distance circle packings on $(P_n, \mathcal{T}, I)$ such that \begin{enumerate} \item[(a)] $\overline{r}$ and $r$ are generalized weighted Delaunay inversive distance circle packings, \item[(b)] the combinatorial curvatures $K_0(r)$ and $K_0(\bar{r})$ at the vertex $v_0$ satisfy $K_0(r)\le K_0(\bar{r})$, \item[(c)] $\max \{\frac{r_i}{\overline{r}_i}| i=1,2,\cdots,n\}\leq \frac{r_0}{\overline{r}_0}$. \end{enumerate} Then there exists a constant $c>0$ such that $r=c\overline{r}$. \end{theorem} We use the following notations to prove Theorem \ref{Maximum principle}. For $i\in \{1,\cdots, n\}$, we denote $I_{0i}$ as $I_i$ for simplicity. For two adjacent triangles $\triangle v_0v_jv_{j\pm1}$ in $\mathcal{T}$, set $\theta^0_{j,j\pm 1}$ to be the inner angle at $v_0$ in the triangle $\triangle v_0v_jv_{j\pm 1}$. Moreover, set $$h_j^-=h_{0j,j-1}, h_j^+=h_{0j,j+1},\theta_j^-=\theta_{0j,j-1}, \theta_j^+=\theta_{0j,j+1}.$$ The proof of the maximal principle is based on the following key lemma. \begin{lemma}\label{key lemma} If $r, \overline{r}:\{v_0,v_1,\dots,v_n\}\rightarrow\mathbb{R}_{>0}$ satisfy (a), (b), (c) in Theorem \ref{Maximum principle} and there exists $j\in\{1,2,\dots,n\}$ such that $\frac{r_j}{\overline{r}_j}<\frac{r_0}{\overline{r}_0}$, then there exists $\hat{r}\in\mathbb{R}_{>0}^{n+1}$ such that \begin{enumerate} \item[(a)] $\hat{r}_i\geq r_i$ for $i\in \{1,\cdots, n\}$, \item[(b)] $\frac{\hat{r}_i}{\overline{r}_i} \leq \frac{\hat{r}_0}{\overline{r}_0} = \frac{r_0}{\overline{r}_0} $ for all $i=1,2,\dots,n$, \item[(c)] $\hat{r}$ is a generalized weighted Delaunay inversive distance circle packing on $(P_n, \mathcal{T}, I)$, \item[(d)] if $\alpha(r)$ is the cone angle of the inversive distance circle packing $r$ at $v_0$, then \begin{equation} \alpha(\hat{r})>\alpha(r). \end{equation} \end{enumerate} \end{lemma} \begin{proof} Up to a scaling, we may assume that $r_0=\bar{r}_0$. 
Then the condition (c) in Theorem \ref{Maximum principle} is equivalent to $r_i\le{\bar{r}_i}$ for all $i\in\{1,2,\dots,n\}$. Set \begin{equation*} \begin{aligned} J=&\{j\in\{1,2,\dots,n\}|r_j<{\bar{r}_j}\}, \\ K=&\{k\in\{1,2,\dots,n\}|r_k={\bar{r}_k}\},\\ \gamma(r)=&\sum_{j\in{J}}(\theta_{0j,j+1}+\theta_{0j,j-1})=\sum_{j\in{J}}(\theta_j^++\theta_j^-), \\ \beta(r)=&\sum_{k\in{K}}(\theta_{0k,k+1}+\theta_{0k,k-1})=\sum_{k\in{K}}(\theta_k^++\theta_k^-). \end{aligned} \end{equation*} Then $J \neq \emptyset$ by assumption. By (\ref{theta++theta-=theta}), we have $ \alpha(r)=\beta(r)+\gamma(r), \alpha(\overline{r})=\beta(\overline{r})+\gamma(\overline{r}), $ which further implies \begin{equation}\label{beta+gamma<beta+gamma} \beta(\bar{r})+\gamma(\bar{r})\leq \beta(r)+\gamma(r) \end{equation} by the condition $K_0(r)\le K_0(\bar{r})$. \textbf{Claim 1}: For any $j\in J$, $\theta^{0}_{j-1,j}(r)<\pi$ and $\theta^{0}_{j,j+1}(r)<\pi$. We will prove that for any $j\in J$, $v_0$ is not the flat vertex if the triangle $\triangle v_0v_{j}v_{j-1}$ is degenerate. Otherwise, suppose that for some $j\in J$, $v_0$ is the flat vertex of the degenerate triangle $\triangle v_0v_{j}v_{j-1}$ generated by $r$. By Corollary \ref{simply connect of admi space with weight}, $r$ satisfies $\kappa_0=f(\kappa_{j-1},\kappa_{j})$, where \begin{equation*} \begin{aligned} f(\kappa_{j-1},\kappa_{j}) =&\frac{1}{I_{j,j-1}^2-1}[(\kappa_j\gamma_{j-1}+\kappa_{j-1}\gamma_j)\\ &+ (I_j^2+I_{j-1}^2+I_{j,j-1}^2+2I_jI_{j-1}I_{j,j-1}-1)^{1/2}(\kappa_{j}^2+\kappa_{j-1}^2+2I_{j,j-1}\kappa_j\kappa_{j-1})^{1/2}]. \end{aligned} \end{equation*} Note that $\kappa_j>\overline{\kappa}_j$ and $\kappa_{j-1}\geq\overline{\kappa}_{j-1}$. 
Then we have $$\overline{\kappa}_0=\kappa_0=f(\kappa_{j-1},\kappa_{j})<f(\overline{\kappa}_{j-1},\overline{\kappa}_{j}).$$ This implies that $(\overline{r}_0, \overline{r}_j, \overline{r}_{j-1})$ is in the complement of the space of generalized inversive distance circle packings on $\triangle v_0v_{j}v_{j-1}$ in $\mathbb{R}^3_{>0}$ by Lemma \ref{basic property I of IDCP} (b), which contradicts the assumption that $\overline{r}$ is a generalized inversive distance circle packing on $(P_n, \mathcal{T}, I)$. \textbf{Claim 2}: There exists $j\in J$ such that $\theta_j^+(r)+\theta_j^-(r)>0$. We will consider the two cases $K\ne\emptyset$ and $K=\emptyset$. \textbf{Case 1}: $K\ne\emptyset$. By Lemma \ref{monotonicity}, for any $i\in K$, $\theta_i^-$ and $\theta_i^+$ are strictly increasing in $r_{i-1}$ and $r_{i+1}$ respectively, which implies that $\beta(r)\leq\beta(\bar{r})$. As $J\neq \emptyset$, there exists $i\in K$ such that $i-1$ or $i+1$ is in $J$. Say $i-1\in J$, then $r_{i-1}<\bar{r}_{i-1}$ and then $\theta_i^-(r)<\theta_i^-(\overline{r})$ by Lemma \ref{monotonicity}. Thus, $\beta(r)<\beta(\bar{r})$, which implies $0\leq\gamma(\bar{r})<\gamma(r)$ by (\ref{beta+gamma<beta+gamma}). Therefore, there exists $j\in J$ such that $\theta_j^+(r)+\theta_j^-(r)>0$ by the definition of $\gamma(r)$. \textbf{Case 2}: $K=\emptyset$. If $K=\emptyset$, we have $J=\{1,\dots,n\}$, $$ \gamma(r)=\sum_{j\in{J}}(\theta_j^+(r)+\theta_j^-(r))=\alpha(r)\ge0. $$ If $\alpha(r)>0$, there exists $j\in J$ such that $\theta_j^+(r)+\theta_j^-(r)>0$. If $\alpha(r)=0$, for any triangle $\triangle v_0v_jv_{j-1}$, $j=1,\dots,n$, the inner angle at $v_0$ is equal to zero. Thus all triangles are degenerate. For any triangle $\triangle v_0v_jv_{j-1}$, the flat vertex is $v_j$ or $v_{j-1}$ by Claim 1. Then $\{\theta_j^-(r), \theta_{j-1}^+(r)\}=\{\frac{\pi}{2}, -\frac{\pi}{2}\}, \forall j\in \{1, \cdots, n\}.$ Without loss of generality, we may assume $v_1$ is the flat vertex of $\triangle v_0v_1v_2$. 
Then $\theta_1^+(r)=\frac{\pi}{2}$, $\theta_2^-(r)=-\frac{\pi}{2}$ by Definition \ref{theta} and $l_{02}(r)=l_{01}(r)+l_{12}(r)>l_{01}(r).$ By the weighted Delaunay condition (a) in Theorem \ref{Maximum principle}, $\theta_2^+(r)=\frac{\pi}{2}$, so $v_2$ is the flat vertex of the degenerate triangle $\triangle v_0v_2v_3$ by Claim 1, which implies $\theta_3^-(r)=-\frac{\pi}{2}$ and $l_{03}(r)=l_{02}(r)+l_{23}(r)>l_{02}(r).$ By induction, we have a contradiction $$l_{01}(r)<l_{02}(r)<\dots<l_{0n}(r)<l_{01}(r).$$ This completes the proof of Claim 2. Now we fix $j\in J$ in Claim 2. Then we have \begin{equation} \label{g-Delaunay} \theta_j^+(r)+\theta_j^-(r)>0. \end{equation} In the following, we show that there exists $\epsilon>0$ such that $\hat{r}=(r_0,\dots,r_j+t,\dots,r_n)$ satisfies Lemma \ref{key lemma} for $t\in (0, \epsilon)$. It is easy to check that for $t\in (0, \overline{r}_j-r_j)$, $\hat{r}$ satisfies Lemma \ref{key lemma} (a) and (b). To see part (c) of Lemma \ref{key lemma}, we first show that there exists $\epsilon>0$ such that $\hat{r}$ is a generalized inversive distance circle packing on $(P_n, \mathcal{T}, I)$ for $t\in (0,\epsilon)$. Furthermore, we will show that the triangles $\triangle v_0v_jv_{j\pm1}$ generated by $\hat{r}$ are non-degenerate. The triangle $\triangle v_0v_jv_{j-1}$ generated by $r$ is non-degenerate or degenerate with $v_j$ or $v_{j-1}$ as the flat vertex by Claim 1. By Proposition \ref{interval}, we just need to prove that $v_{j-1}$ is not the flat vertex of the triangle $\triangle v_0v_jv_{j-1}$ generated by $r$ if it is degenerate. Otherwise, we have $\theta_j^-(r)=-\frac{\pi}{2}$ by Definition \ref{theta}, which implies $\theta_j^+(r)>\frac{\pi}{2}$ by (\ref{g-Delaunay}). However, this is impossible since $\theta_j^+(r)\in [-\frac{\pi}{2}, \frac{\pi}{2}]$ by Definition \ref{theta}. Therefore, $v_{j-1}$ can never be the flat vertex of the triangle $\triangle v_0v_jv_{j-1}$ if it is degenerate.
Similar arguments applied to the triangle $\triangle v_0v_jv_{j+1}$ show that $v_{j+1}$ can never be the flat vertex of the triangle $\triangle v_0v_jv_{j+1}$ if it is degenerate. Therefore, by Proposition \ref{interval} (b), there exists $\epsilon>0$ such that for $t\in (0, \epsilon)$, $\hat{r}$ is a generalized inversive distance circle packing on $(P_n, \mathcal{T}, I)$ and the triangles $\triangle v_0v_jv_{j\pm1}$ generated by $\hat{r}$ are non-degenerate. Next, we show that $\hat{r}$ satisfies the weighted Delaunay condition. As $\hat{r}$ differs from $r$ only at the $j$-th position, we just need to consider the edges $v_0v_j$ and $v_0v_{j\pm1}$. For the edge $v_0v_j$, since $\theta_j^+(r)+\theta_j^-(r)>0$, we have $\theta_j^+(\hat{r})+\theta_j^-(\hat{r})>0$ for small $t>0$ by the continuity of $\theta_j^{\pm}$ in Lemma \ref{theta continuous}. For the edge $v_0v_{j-1}$, $\theta_{j-1}^-(r)=\theta_{j-1}^-(\hat{r})$. We further have $\theta_{j-1}^+(r)< \theta_{j-1}^+(\hat{r})$ for $t\in (0, \overline{r}_j-r_j)$ by Lemma \ref{monotonicity}, which implies $\theta_{j-1}^+(\hat{r})+\theta_{j-1}^-(\hat{r})>\theta_{j-1}^+(r)+\theta_{j-1}^-(r)\ge0.$ This implies that the edge $v_0v_{j-1}$ satisfies the weighted Delaunay condition for $\hat{r}$. The same arguments apply to the edge $v_0v_{j+1}$. To see part (d) of Lemma \ref{key lemma}, by the arguments for part (c), there exists $\epsilon>0$ such that the triangles $\triangle v_0v_jv_{j\pm1}$ are non-degenerate in $\hat{r}$ and $\theta_{j}^+(\hat{r})+\theta_{j}^-(\hat{r})>0$ for $t\in (0,\epsilon)$, which implies $h_j^+(\hat{r})+h_j^-(\hat{r})>0$ for $t\in (0,\epsilon)$ by Remark \ref{h>0 equavalent to theta>0}. Since $\alpha(\hat{r})$ is continuous for $t\in[0,\epsilon]$, smooth for $t\in(0,\epsilon)$, and satisfies $$\frac{\partial \alpha}{\partial t}(\hat{r})=\frac{h_j^+(\hat{r})+h_j^-(\hat{r})}{(r_j+t)\,l_{0j}}>0, \quad t\in (0,\epsilon),$$ by Lemma \ref{derivative angles} (a) and the chain rule, we have $\alpha(\hat{r})>\alpha(r)$ for $t\in(0,\epsilon)$.
\end{proof} Now we can prove Theorem \ref{Maximum principle}. \textbf{Proof of Theorem \ref{Maximum principle}:} Without loss of generality, we assume $r_0=\bar{r}_0$ and $r_i\le{\bar{r}_i}$ for all $i=1,2,\dots,n$. We prove the theorem by contradiction. Suppose that there exists a generalized weighted Delaunay inversive distance circle packing $r$ on $(P_n, \mathcal{T}, I)$ such that $r_0=\bar{r}_0$, $r_i\le\bar{r}_i$ for all $i=1,2,\dots,n$ with $r_{i_0}<\bar{r}_{i_0}$ for some $i_0$, and $\alpha(\bar{r})\leq\alpha(r)$. By Lemma \ref{key lemma}, we may assume that \begin{equation} \label{alpha(bar-r)<alpha(r)} \alpha(\bar{r})<\alpha(r). \end{equation} On the other hand, consider the set \begin{equation*} \begin{aligned} X:=\{x\in\mathbb{R}^{n+1}|&r\le x\le \bar{r}, \text{ and $x$ is a generalized weighted Delaunay inversive} \\ &\text{distance circle packing on } (P_n, \mathcal{T}, I) \}. \end{aligned} \end{equation*} Obviously, $r\in X$ and $X$ is bounded. By Lemma \ref{theta continuous}, $X$ is a closed subset of $\mathbb{R}^{n+1}$. Therefore, $X$ is a nonempty compact set and the continuous function $f(x)=\alpha(x)$ attains its maximum on $X$. Let $t\in X$ be a maximum point of $f$ on $X$. If $t\ne\bar{r}$, then by Lemma \ref{key lemma}, we can find a generalized weighted Delaunay inversive distance circle packing $\hat{t}$ on $(P_n, \mathcal{T}, I)$ such that $\hat{t}\ge t$, $\hat t_0=\bar{r}_0$, $\hat{t}\le \bar{r}$ and $\alpha(\hat{t})>\alpha(t)$, which implies that $t$ is not a maximum point of $f$ on $X$. So $t=\bar{r}$, and then $$\alpha(\bar{r})=\alpha(t)\ge\alpha(r)>\alpha(\bar{r}),$$ where the last inequality comes from (\ref{alpha(bar-r)<alpha(r)}). This is a contradiction. {Q.E.D.} \begin{remark} For simplicity, we only present the maximal principle for the inversive distance $I: E\rightarrow (1, +\infty)$, which suffices for the application to the convergence of inversive distance circle packings in this paper.
For a much more general version of the maximal principle for inversive distance circle packings with $I: E\rightarrow (-1, +\infty)$ and generic discrete conformal structures on surfaces \cite{GT, Xu arxiv21}, please refer to \cite{LXZ}. \end{remark} \subsection{A ring lemma} \begin{lemma}\label{ring lemma} Let $\mathcal{T}$ be a star triangulation of an $n$-sided polygon $P_n$ with boundary vertices $v_1,\dots, v_n$ and a unique interior vertex $v_0$. Let $I:E\rightarrow (1, +\infty)$ be a weight and $r$ be a flat generalized inversive distance circle packing on $(P_n,\mathcal{T}, I)$, i.e., one with zero combinatorial curvature at the interior vertex $v_0$. Then there exists a constant $C = C(I, n)>0$ such that $r_0\leq C r_i$ for all $i\in \{1,2, \cdots, n\}$. \end{lemma} \begin{proof} Without loss of generality, we assume $r_0= 1$; otherwise we apply a scaling to the weighted triangulated polygon $(P_n,\mathcal{T}, I, r)$. Then we just need to prove $Cr_i\geq 1$, i.e., $\kappa_i\leq C$, for some $C\in \mathbb{R}_{>0}$ and all $i\in \{1,2, \cdots, n\}$. We prove Lemma \ref{ring lemma} by contradiction. If the result in Lemma \ref{ring lemma} is not true, then there exists a sequence of generalized inversive distance circle packings $\{r^{(m)}\}_{m=1}^\infty$ with $r^{(m)}_0=1$ on $(P_n,\mathcal{T}, I)$ such that $\lim_{m\rightarrow \infty}\kappa^{(m)}_{i}=+\infty$ for some $i\in \{1,2, \cdots, n\}$. Without loss of generality, we can assume $i=1$. For the triangle $\triangle v_0v_1v_2$, we set $I_0=I_{12}, I_1=I_{02}$ and $I_2=I_{01}$ for simplicity.
As $r^{(m)}$ is a generalized inversive distance circle packing on $(P_n,\mathcal{T}, I)$, we have \begin{align*} (I_2^2-1)(\kappa^{(m)}_2)^2+(I_1^2-1)(\kappa^{(m)}_1)^2+(I_0^2-1) -2\kappa^{(m)}_1\gamma_2-2\kappa^{(m)}_2\gamma_1-2\kappa^{(m)}_1\kappa^{(m)}_2\gamma_0\leq 0 \end{align*} by Lemma \ref{basic property I of IDCP} (a), which implies \begin{equation}\label{kappa2 tend infty} \begin{aligned} \kappa^{(m)}_2 \geq& \frac{1}{I_2^2-1}[\kappa^{(m)}_1\gamma_0+\gamma_1 -\sqrt{(I_{0}^2+I_1^2+I_2^2+2I_{0}I_1I_2-1)((\kappa^{(m)}_1)^2+2I_2\kappa^{(m)}_1+1)}]\\ =&\frac{(I_1^2-1)(\kappa^{(m)}_1)^2-2\kappa^{(m)}_1\gamma_2+I_0^2-1}{\kappa^{(m)}_1\gamma_0+\gamma_1 +\sqrt{(I_{0}^2+I_1^2+I_2^2+2I_{0}I_1I_2-1)((\kappa^{(m)}_1)^2+2I_2\kappa^{(m)}_1+1)}}. \end{aligned} \end{equation} Note that $\lim_{m\rightarrow \infty}\kappa^{(m)}_{1}=+\infty$. We have $\lim_{m\rightarrow \infty}\kappa^{(m)}_2=+\infty$ by (\ref{kappa2 tend infty}), which is equivalent to $\lim_{m\rightarrow \infty}r^{(m)}_2=0$. Combining $r^{(m)}_0=1$ with $\lim_{m\rightarrow \infty}r^{(m)}_{1}=\lim_{m\rightarrow \infty}r^{(m)}_2=0$, we have $\lim_{m\rightarrow \infty}(\theta^0_{12})^{(m)}= 0$, where $\theta^0_{12}$ is the inner angle of the triangle $\triangle v_0v_1v_2$ at $v_0$. The same arguments applied to the triangles $\triangle v_0v_iv_{i+1}, i=2, 3, \cdots, n$ subsequently give $\lim_{m\rightarrow \infty}(\theta^0_{i, i+1})^{(m)}= 0$ for all $i=1, 2, \cdots, n$, which implies $\lim_{m\rightarrow \infty}K^{(m)}_0=2\pi$. This contradicts the assumption that $v_0$ is a flat interior vertex of $(P_n,\mathcal{T}, I, r^{(m)})$. \end{proof} \subsection{Spiral hexagonal triangulations and linear discrete conformal factors } We first recall the definition of developing maps in \cite{LSW}. Let $l$ be a flat polyhedral metric on a simply connected triangulated surface $(S, \mathcal{T})$.
Then $S$ is homeomorphic to its universal covering, so the developing map $\phi:(S, \mathcal{T}, l)\to \mathbb{C}$ for this polyhedral metric can be constructed by starting with an isometric embedding in $\mathbb{C}$ of a Euclidean triangle $t\in F$. This defines an initial map $\phi|t: (t, l)\to \mathbb{C}$, which can be extended to any adjacent triangle $s\in F$ with $e = t\cap s \in E$ by isometrically embedding $s$ in $\mathbb{C}$ such that $\phi(e) = \phi(s)\cap \phi(t)$. Since $S$ is simply connected, we can repeat this extension for all triangles, which induces a well-defined developing map up to isometries of $(S, \mathcal{T}, l)$. \begin{proposition}\label{spiral} Let $\mathcal{T}_{st}$ be the standard hexagonal triangulation of the plane. Let $l$ be a weighted Delaunay inversive distance circle packing metric determined by a label $u:V\rightarrow\mathbb{R}$ on $(\mathbb{C}, \mathcal{T}_{st}, I)$ such that the vertex set is a lattice $V= L = \{m\vec v_1 + n\vec v_2\}$, where $\{\vec v_1,\vec v_2\}$ is a geometric basis of the lattice $L$, and $I$ is invariant under the translations generated by $\{\vec v_1,\vec v_2\}$. Suppose $w:V\to\mathbb{R}$ is a non-constant linear function defined by two positive numbers $\lambda$ and $\mu$ via \begin{equation}\label{expression of linear w} w(m\vec v_1 + n\vec v_2) = m\log \lambda + n\log \mu \end{equation} and $w*l$ is a generalized weighted Delaunay inversive distance circle packing metric on $(\mathbb{C}, \mathcal{T}_{st}, I)$. Then the following statements hold. \begin{enumerate} \item[(a)] $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ is flat. \item[(b)] Let $\phi$ be the developing map for $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$. If there exists a non-degenerate triangle in $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$, then there are two different non-degenerate triangles $t_1$ and $t_2$ in $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ such that $\phi(int(t_1))\cap \phi(int(t_2)) \neq \emptyset$.
In other words, $\phi$ does not produce an embedding of $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ in the plane. \item[(c)] If all the triangles in $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ are degenerate, then there exists an automorphism $\psi$ of the triangulation $\mathcal{T}_{st}$ and two positive constants $\bar\lambda=\bar\lambda(I, u)$ and $\bar\mu=\bar\mu(I, u)$ such that $w(\psi(m\vec v_1 + n\vec v_2))=m\ln \bar\lambda +n\ln \bar\mu$. \end{enumerate} \end{proposition} \begin{proof} The proof is a modification of the proof for Proposition 3.4 in \cite{LSW}. For completeness, we include the proof here. \begin{figure} \caption{Angles of spiral triangulations.} \label{figure4} \end{figure} To prove part (a), consider two translations $\tau_1$ and $\tau_2$ of the triangulation $\mathcal{T}_{st}$ defined by $$\tau_1(v)=v+\vec v_1, \ \tau_2(v)=v+\vec v_2, \ v\in V.$$ The lattice $L$ is isomorphic to the abelian subgroup of the automorphism group of $\mathcal{T}_{st}$ generated by $\tau_1$ and $\tau_2$. Up to the action of this subgroup, all the triangles in $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ belong to two equivalence classes, one of which is equivalent to $t_1$ with vertices $0$, $\vec v_1$, $\vec v_2$ and the other of which is equivalent to $t_2$ with vertices $0$, $\vec v_2 - \vec v_1$, $\vec v_2$. Please refer to Figure \ref{figure4}. Since $I$ is invariant under the translations generated by $\{\vec v_1,\vec v_2\}$, $w*l(\tau_1(e))=\lambda w*l(e)$ and $w*l(\tau_2(e))=\mu w*l(e)$ for any edge $e\in E$ by (\ref{expression of linear w}) and (\ref{length1 introduction}). It is straightforward to check that the generalized triangle $\tau_1(t)$ ($\tau_2(t)$ respectively) is a scaling of the generalized triangle $t$ by a factor $\lambda$ ($\mu$ respectively). As a result, the triangles in the same equivalence class are similar to each other. Assume that the inner angles in $t_i$ are $\alpha_i$, $\beta_i$ and $\gamma_i$ for $i = 1, 2$.
Then it is clear from Figure \ref{figure4} that the curvature $K(v) = 0$ for all $v\in V$ by $\alpha_i+\beta_i+\gamma_i=\pi$. Therefore, $(\mathbb{C},\mathcal{T}_{st}, I, w*l)$ is flat. To prove part (b), note that $(\lambda, \mu) \neq (1, 1)$ by the assumption that $w$ is not a constant. Without loss of generality, assume $\lambda\neq 1$. By the similarity of triangles in $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ under the action of $\tau_1$ and $\tau_2$, the two translations $\tau_1$ and $\tau_2$ induce two affine transformations $\eta$ and $\zeta $ of the plane when composed with the developing map $\phi$. Taking $\tau_1$ for example, by the similarities of triangles in $(\mathbb{C},\mathcal{T}_{st}, I, w*l)$ under the action of $\tau_1$ and $\tau_2$, there exists $\theta\in [0,2\pi)$ such that $\phi(\tau_1(v))-\phi(v)=\lambda e^{i\theta}[\phi(v)-\phi(\tau_1^{-1}(v))]$ for any $v\in V$, which implies $\phi(\tau_1(v))-\lambda e^{i\theta}\phi(v)=\phi(v)-\lambda e^{i\theta}\phi(\tau_1^{-1}(v)).$ Therefore, $\phi(\tau_1(v))-\lambda e^{i\theta}\phi(v)$ is a constant for any $v\in V$, denoted by $c\in \mathbb{C}$, which implies $\phi(\tau_1(v))=\lambda e^{i\theta}\phi(v)+c$. Similar arguments apply to $\tau_2$. Therefore, there exists an affine transformation $\eta(z) = \lambda^* z + c$ with $|\lambda^*|=\lambda\neq 1$ such that $\phi\circ\tau_1 = \eta\circ\phi$. Since $\lambda \neq 1$, $\eta$ has a unique fixed point $p\in \mathbb{C}$. By $\tau_1\tau_2=\tau_2\tau_1$, we have $\eta\zeta=\zeta\eta$, which implies $\zeta(p) = p$ by the uniqueness of the fixed point of $\eta$. Set $\tilde{\phi}(v)=\phi(v)-p$, then $\tilde{\phi}$ is still a developing map. For simplicity, we still denote $\tilde{\phi}$ by $\phi$. Then $\phi(\tau_1(v)) =\eta\phi(v)=\lambda^*\phi(v)$ and $\phi(\tau_2(v)) =\zeta\phi(v)=\mu^*\phi(v)$ for some linear transformations $\eta(z)=\lambda^* z$ and $\zeta(z)=\mu^* z$. 
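The similarity arguments above ultimately rest on the scaling relation $w*l(\tau_1(e))=\lambda\, w*l(e)$ (and likewise for $\tau_2$ and $\mu$) from part (a), which follows directly from the edge-length formula. The computation can be checked numerically; a minimal Python sketch, in which the weight value and the factors $\lambda$, $\mu$ are arbitrary choices for illustration only:

```python
import math

# Numerical check of w*l(tau_1(e)) = lambda * w*l(e) for a linear factor
# w(m v1 + n v2) = m log(lambda) + n log(mu).  The weight and factors below
# are hypothetical values, not taken from the text.
I_edge = 1.7          # hypothetical inversive distance on the edge direction v1
lam, mu = 1.5, 0.8    # hypothetical factors lambda and mu

def w(m, n):
    # linear discrete conformal factor as in (expression of linear w)
    return m * math.log(lam) + n * math.log(mu)

def length(a, b, I):
    # inversive distance edge length: l^2 = e^{2w(a)} + e^{2w(b)} + 2 I e^{w(a)+w(b)}
    wa, wb = w(*a), w(*b)
    return math.sqrt(math.exp(2 * wa) + math.exp(2 * wb) + 2 * I * math.exp(wa + wb))

l0     = length((0, 0), (1, 0), I_edge)   # edge e from 0 to v1
l_tau1 = length((1, 0), (2, 0), I_edge)   # tau_1(e): e translated by v1
l_tau2 = length((0, 1), (1, 1), I_edge)   # tau_2(e): e translated by v2

assert abs(l_tau1 - lam * l0) < 1e-12     # w*l(tau_1(e)) = lambda * w*l(e)
assert abs(l_tau2 - mu * l0) < 1e-12      # w*l(tau_2(e)) = mu * w*l(e)
```

The same check succeeds for any positive choice of the constants, since translating by $\vec v_1$ adds $\log\lambda$ to the factor at both endpoints, so every term under the square root acquires the factor $\lambda^2$.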
Notice that the group $G=<\eta, \zeta>$ generated by $\eta$ and $\zeta$ is either a non-trivial cyclic group or an abelian group isomorphic to $\mathbb{Z}^2$. By the assumption of part (b), $U = \phi(\mathbb{C})$ has non-empty interior containing the interior of a triangle $t_1$. If $G=<\eta, \zeta>$ is a cyclic group, then there exists $(k,j) \neq (0, 0)$ such that $\eta^k\zeta^j$ is the identity map. Set $t_2 = \tau_1^k\tau_2^j(t_1)$. Then $\phi(t_2)=\phi(\tau_1^k\tau_2^j(t_1))=\eta^k\zeta^j\phi(t_1)=\phi(t_1)$, which implies $\phi(t_1)\cap\phi(t_2)\neq \emptyset$. If $G=<\eta, \zeta>$ is isomorphic to $\mathbb{Z}^2$, then $\eta^k\zeta^j$ is never the identity and the action of $G$ on the plane is not properly discontinuous. Specifically, for $W := \phi(int(t_1))$, there exists $(k,j) \neq (0, 0)$ so that $\eta^k\zeta^j(W)\cap W\neq \emptyset$. Set $t_2 = \tau_1^k\tau_2^j(t_1)$, then $\phi(t_1) \cap \phi(t_2) = \eta^k\zeta^j(W)\cap W \neq \emptyset.$ To see part (c), since all the triangles in $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ are degenerate, the inner angles of the triangles $t_1$ and $t_2$ are $0$ or $\pi$. Composing with an automorphism of the triangulation $\mathcal{T}_{st}$, we may assume $\alpha_1=\gamma_2=\pi$, where the angles are marked in Figure \ref{figure4}. For the degenerate triangle with vertices $0$, $-\vec v_1$ and $-\vec v_2$, it is flat at $-\vec v_2$ by assumption.
By Corollary \ref{simply connect of admi space with weight}, if we use $\kappa^*$ to denote the reciprocal of radii in the metric $w*l$, then \begin{equation}\label{kappa* equation 1} \begin{aligned} \kappa^*(-\vec v_2)&= \frac{1}{I_{0,-\vec v_1}^2-1}\{\gamma_{-\vec v_1,-\vec v_2,0}\kappa^*(0)+\gamma_{0, -\vec v_1,-\vec v_2}\kappa^*(-\vec v_1)\\ &+\sqrt{\Delta_{0,-\vec v_1,-\vec v_2}[(\kappa^*(0))^2+(\kappa^*(-\vec v_1))^2+2I_{0,-\vec v_1}\kappa^*(0)\kappa^*(-\vec v_1)]}\}, \end{aligned} \end{equation} where $\gamma_{v_i,v_j,v_k}=I_{v_jv_k}+I_{v_iv_k}I_{v_iv_j}$ and $$\Delta_{0,-\vec v_1,-\vec v_2} = I^2_{0,-\vec v_1} + I^2_{0,-\vec v_2} + I^2_{-\vec v_1, -\vec v_2} + 2I_{0,-\vec v_1}I_{0,-\vec v_2} I_{-\vec v_1, -\vec v_2} - 1. $$ Note that $\kappa^*(0)=\kappa(0)$, $\kappa^*(-\vec v_1)=\kappa(-\vec v_1)\lambda$ and $\kappa^*(-\vec v_2)=\kappa(-\vec v_2)\mu$, we have \begin{equation}\label{lambda-mu equation 1} \begin{aligned} \kappa(-\vec v_2)\mu&=\frac{1}{I_{0,-\vec v_1}^2-1}\{\gamma_{-\vec v_1,-\vec v_2,0}\kappa(0)+\gamma_{0, -\vec v_1,-\vec v_2}\kappa(-\vec v_1)\lambda\\ &+\sqrt{\Delta_{0,-\vec v_1,-\vec v_2}[(\kappa(0))^2+\kappa^2(-\vec v_1)\lambda^2+2I_{0,-\vec v_1}\kappa(0)\kappa(-\vec v_1)\lambda]}\} \end{aligned} \end{equation} by (\ref{kappa* equation 1}). Denote the right hand side of the equation (\ref{lambda-mu equation 1}) as $f_1(\lambda)$. Then $f_1(\lambda)$ is a strictly increasing function of $\lambda$. Furthermore, we have $\lim_{\lambda\rightarrow 0+}f_1(\lambda)=C_1>0$ and $\lim_{\lambda\rightarrow +\infty}f_1(\lambda)=+\infty$. 
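The monotonicity and limit claims for $f_1$ can be sanity-checked numerically: every coefficient in the expression is positive, so each term is increasing in $\lambda$. A short Python sketch, where all constants are hypothetical positive values standing in for $\kappa(0)$, $\kappa(-\vec v_1)$, the $\gamma$'s, $\Delta_{0,-\vec v_1,-\vec v_2}$ and $I_{0,-\vec v_1}$:

```python
import math

# Numerical check that f_1 is strictly increasing with f_1(0+) > 0.
# All constants below are arbitrary hypothetical positive values.
k0, k1 = 0.9, 1.3     # kappa(0), kappa(-v1)
g0, g1 = 5.0, 4.2     # gamma_{-v1,-v2,0}, gamma_{0,-v1,-v2}  (positive since I > 1)
Delta  = 30.0         # Delta_{0,-v1,-v2} > 0
I01    = 1.8          # I_{0,-v1} > 1

def f1(lam):
    # right-hand side of (lambda-mu equation 1), viewed as a function of lambda
    root = math.sqrt(Delta * (k0 ** 2 + (k1 * lam) ** 2 + 2 * I01 * k0 * k1 * lam))
    return (g0 * k0 + g1 * k1 * lam + root) / (I01 ** 2 - 1)

samples = [10.0 ** t for t in range(-4, 5)]
values  = [f1(s) for s in samples]
assert all(a < b for a, b in zip(values, values[1:]))  # strictly increasing in lambda
assert f1(1e-9) > 0                                    # lim_{lambda -> 0+} f_1 = C_1 > 0
```

The growth $f_1(\lambda)\rightarrow+\infty$ is visible in the same sketch: the linear term $\gamma_{0,-\vec v_1,-\vec v_2}\kappa(-\vec v_1)\lambda$ dominates for large $\lambda$.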
Dividing both sides of (\ref{lambda-mu equation 1}) by $\lambda$ gives \begin{equation}\label{lambda-mu equation 2} \begin{aligned} \kappa(-\vec v_2)\frac{\mu}{\lambda}&=\frac{1}{I_{0,-\vec v_1}^2-1}\{\gamma_{-\vec v_1,-\vec v_2,0}\kappa(0)\lambda^{-1}+\gamma_{0, -\vec v_1,-\vec v_2}\kappa(-\vec v_1)\\ &+\sqrt{\Delta_{0,-\vec v_1,-\vec v_2}[(\kappa(0))^2\lambda^{-2}+\kappa^2(-\vec v_1)+2I_{0,-\vec v_1}\kappa(0)\kappa(-\vec v_1)\lambda^{-1}]}\}, \end{aligned} \end{equation} which implies that $\frac{\mu}{\lambda}$ is a strictly decreasing function of $\lambda$ with $\lim_{\lambda\rightarrow 0+}\frac{\mu}{\lambda}=+\infty$ and $\lim_{\lambda\rightarrow +\infty}\frac{\mu}{\lambda}=C_2>0$. On the other hand, for the triangle with vertices $0$, $-\vec v_2$ and $\vec v_1-\vec v_2$, it is flat at $-\vec v_2$ by assumption. Applying Corollary \ref{simply connect of admi space with weight} to this triangle gives \begin{equation}\label{kappa* equation 2} \begin{aligned} \kappa^*(-\vec v_2) &=\frac{1}{I_{0,\vec v_1-\vec v_2}^2-1}\{\gamma_{0,-\vec v_2,\vec v_1-\vec v_2}\kappa^*(\vec v_1-\vec v_2)+\gamma_{\vec v_1-\vec v_2, 0,-\vec v_2}\kappa^*(0)\\ &+\sqrt{\Delta_{0,-\vec v_2,\vec v_1-\vec v_2}[(\kappa^*(0))^2+(\kappa^*(\vec v_1-\vec v_2))^2+2I_{0,\vec v_1-\vec v_2}\kappa^*(0)\kappa^*(\vec v_1-\vec v_2)]}\}.
\end{aligned} \end{equation} Note that $\kappa^*(0)=\kappa(0)$, $\kappa^*(-\vec v_2)=\kappa(-\vec v_2)\mu$ and $\kappa^*(\vec v_1-\vec v_2)=\kappa(\vec v_1-\vec v_2)\frac{\mu}{\lambda}$, we have \begin{equation}\label{lambda-mu equation 3} \begin{aligned} \kappa(-\vec v_2)\mu &=\frac{1}{I_{0,\vec v_1-\vec v_2}^2-1}\{\gamma_{0,-\vec v_2,\vec v_1-\vec v_2}\kappa(\vec v_1-\vec v_2)\frac{\mu}{\lambda}+\gamma_{\vec v_1-\vec v_2, 0,-\vec v_2}\kappa(0)\\ &+\sqrt{\Delta_{0,-\vec v_2,\vec v_1-\vec v_2}[(\kappa(0))^2+\kappa^2(\vec v_1-\vec v_2)\frac{\mu^2}{\lambda^2} +2I_{0,\vec v_1-\vec v_2}\kappa(0)\kappa(\vec v_1-\vec v_2)\frac{\mu}{\lambda}]}\} \end{aligned} \end{equation} by (\ref{kappa* equation 2}). Denote the right hand side of the equation (\ref{lambda-mu equation 3}) as $f_2(\lambda)$. Then $f_2(\lambda)$ is a strictly decreasing function of $\lambda$ by the fact that $\frac{\mu}{\lambda}$ is a strictly decreasing function of $\lambda$. Furthermore, $\lim_{\lambda\rightarrow 0+}f_2(\lambda)=+\infty$ and $\lim_{\lambda\rightarrow +\infty}f_2(\lambda)=C_3>0$. Set $f(\lambda)=f_1(\lambda)-f_2(\lambda)$, then $f(\lambda)$ is a strictly increasing continuous function of $\lambda\in \mathbb{R}_{>0}$ with $\lim_{\lambda\rightarrow 0+}f(\lambda)=-\infty$ and $\lim_{\lambda\rightarrow +\infty}f(\lambda)=+\infty$, which implies that there exists a unique number $\bar\lambda=\bar\lambda(I, u)\in \mathbb{R}_{>0}$ such that $f_1(\bar\lambda)=f_2(\bar\lambda)$. As a result, the system (\ref{lambda-mu equation 1}) and (\ref{lambda-mu equation 3}) has a unique solution $\bar\lambda=\bar\lambda(I, u)$ and $\bar\mu=\bar\mu(I, u)$ in $\mathbb{R}_{>0}$. This completes the proof for part (c). \end{proof} We call the weight $I$ in Proposition \ref{spiral} \textit{translation invariant} on $\mathcal{T}_{st}$ since $I(e)=I(e+\delta)$ for any $\delta \in V= L = \{m\vec v_1 + n\vec v_2\}$ and $e\in E$.
It is in fact determined by the three weights on the edges of any triangle in $\mathcal{T}_{st}$. \section{Rigidity of infinite inversive distance circle packings}\label{section 4} The conformality of the limit of discrete conformal maps $f_n$ in Theorem \ref{conv introduction} is a consequence of the rigidity of infinite inversive distance circle packings in the plane, which is also conjectured by Bowers-Stephenson \cite{BS}. The main result of this section confirms this conjecture. \begin{theorem} \label{infrigidity introduction} Let $(\mathbb{C}, \mathcal{T}_{st}, I)$ be a weighted hexagonal triangulated plane such that the weight $I: E\rightarrow (1, +\infty)$ is translation invariant. Assume $l$ is a weighted Delaunay inversive distance circle packing metric on $(\mathbb{C}, \mathcal{T}_{st}, I)$ induced by a constant label. If $(\mathbb{C}, \mathcal{T}_{st}, I, w*l)$ is a weighted Delaunay triangulated surface isometric to an open set in the plane, then $w$ is a constant function. \end{theorem} The rigidity of inversive distance circle packings with prescribed combinatorial curvatures on weighted triangulated compact surfaces has been proved in \cite{Guo,Luo GT,Xu AIM,Xu MRL} based on variational principles. Theorem \ref{infrigidity introduction} provides a result on the rigidity of infinite inversive distance circle packings in the non-compact plane. In the case of Thurston's circle packings, the rigidity of infinite circle packings in the plane has been explored in \cite{H2, RS, Sch}. To prove Theorem \ref{infrigidity introduction}, recall the following definition and properties of embeddable flat polyhedral surfaces in \cite{LSW}. \begin{definition}[\cite{LSW}, Definition 4.1] Suppose $(S,\mathcal{T})$ is a simply connected triangulated surface with a generalized PL metric $l$ and $\phi$ is a developing map for $(S,\mathcal{T}, l)$. 
Then $(S,\mathcal{T}, l,\phi)$ is said to be \it embeddable \rm into $\mathbb{C}$ if for every simply connected finite subcomplex $P$ of $\mathcal{T}$, there exists a sequence of flat PL metrics on $P$ whose developing maps $\phi_n: P \to \mathbb{C}$ are topological embeddings and converge uniformly to $\phi|_P$. \end{definition} \begin{lemma}[\cite{LSW}, Lemma 4.2]\label{embed} Let $(S,\mathcal{T},l)$ be a flat polyhedral metric on a simply connected surface with a developing map $\phi$. \begin{enumerate} \item Suppose $\phi$ is embeddable. If two simplices $s_1$, $s_2$ represent two distinct non-degenerate triangles or two distinct edges in $\mathcal{T}$, then $\phi(int(s_1))\cap \phi(int(s_2)) =\emptyset$. \item If $\phi$ is the pointwise convergent limit $\lim_{n\to \infty}\psi_n$ of the developing maps $\psi_n$ of embeddable flat polyhedral metrics $(X, \mathcal{T}, l_n)$, then $(X,\mathcal{T}, l)$ is embeddable. \end{enumerate} \end{lemma} The standard hexagonal geodesic triangulations of open sets in $\mathbb{C}$ are embeddable. On the other hand, the generic Doyle spirals produce circle packings with overlapping disks, so the corresponding polyhedral metrics are not embeddable. \begin{lemma}\label{flatness preserved by group action} Let $(S, \mathcal{T}_{st}, I)$ be a weighted hexagonal triangulated plane with the weight $I$ translation invariant, and $l_0$ be a weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$ generated by a label $w_0: V\rightarrow \mathbb{R}$ such that the vertex set is a lattice $L=V$. Suppose $(w-w_0)*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on the plane $(S, \mathcal{T}_{st}, I)$. For any $\delta\in V$, set $u(v)=w(v+\delta)-w(v)$. Then $u*((w-w_0)*l_0)=(u+w-w_0)*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$. Furthermore, if $u(v_0)=\max_{v\in V}u(v)$, then $u$ is a constant.
\end{lemma} \proof Suppose $e\in E$ is an edge with vertices $v$ and $v'$. By Definition \ref{discrere conformal for idcp}, we have \begin{equation*} \begin{aligned} &u*((w-w_0)*l_0)(e)\\ =&[e^{2w(v)+2u(v)}+e^{2w(v')+2u(v')}+2I(e)e^{w(v)+u(v)+w(v')+u(v')}]^{1/2}\\ =&[e^{2w(v+\delta)}+e^{2w(v'+\delta)}+2I(e+\delta)e^{w(v+\delta)+w(v'+\delta)}]^{1/2}\\ =&(w-w_0)*l_0(e+\delta), \end{aligned} \end{equation*} where $I(e)=I(e+\delta)$ is used in the third line. By the condition that $(w-w_0)*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$, we have $u*((w-w_0)*l_0)$ is a flat generalized weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$. The rest of the proof is an application of the discrete maximal principle, i.e. Theorem \ref{Maximum principle}. {Q.E.D.} The function $u$ in Lemma \ref{flatness preserved by group action} could be taken as a discrete version of the directional derivative of $w$. \begin{lemma}\label{subsequence with const difference for w0 const} Let $(S, \mathcal{T}_{st}, I)$ be a weighted hexagonal triangulated plane with the weight $I$ translation invariant, and $l_0$ be a weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$ generated by a constant label $w_0: V\rightarrow \mathbb{R}$ such that the vertex set is a lattice $L=V=\{mu_1+nu_2|m,n\in \mathbb{Z}\}$. Suppose $w*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on the plane $(S, \mathcal{T}_{st}, I)$. Then for any $\delta\in \{\pm u_1, \pm u_2, \pm(u_1-u_2)\}$, there exists a sequence $\{v_n\}\subset V$ such that $$w_n(v):=w(v+v_n)-w(v_n)$$ satisfies \begin{enumerate} \item[(a)] for all $v\in V$, the limit $w_\infty(v)=\lim_{n\rightarrow\infty}w_n(v)$ exists. \item[(b)] $w_n*l_0$ and $w_\infty*l_0$ are flat generalized weighted Delaunay inversive distance circle packing metrics on $(S, \mathcal{T}_{st}, I)$. 
\item[(c)] $w_\infty(v+\delta)-w_\infty(v)=a:=\sup \{w(v+\delta)-w(v)|v\in V\}$ for all $v\in V$. \item[(d)] the normalized developing maps $\phi_{w_n*l_0}$ of $w_n*l_0$ converge uniformly on compact subcomplexes of $(S, \mathcal{T}_{st})$ to the normalized developing map $\phi_{w_\infty*l_0}$ of $w_\infty*l_0$. As a result, if $(S, \mathcal{T}_{st}, I, w*l_0)$ is embeddable, then $(S, \mathcal{T}_{st}, I, w_\infty*l_0)$ is embeddable. \end{enumerate} \end{lemma} \proof Since $w_0$ is a constant function, without loss of generality, we can assume that $w_0 = 0$; otherwise we apply a scaling to $l_0$. To see part (a), notice that Lemma \ref{ring lemma} implies that the constant \begin{equation}\label{M} M=M(V, I)=\sup \{|w(v+\delta)-w(v)| \ : \ v\in V, \ \delta\in \{\pm u_1, \pm u_2, \pm(u_1-u_2)\}\} \end{equation} is finite and positive. Then for fixed $\delta\in \{\pm u_1, \pm u_2, \pm(u_1-u_2)\}$, we have $$a:=\sup \{w(v+\delta)-w(v)|v\in V\}\leq M.$$ Therefore, there exists a sequence $\{v_n\}$ in $V$ such that \begin{equation}\label{w_n delta} a-\frac{1}{n}\leq w_n(\delta)=w(v_n+\delta)-w(v_n)\leq a. \end{equation} Furthermore, we have $w_n(0)=0$ and \begin{equation}\label{w_n v+delta-w_n v} w_n(v+\delta)-w_n(v)=w(v+\delta+v_n)-w(v+v_n)\leq a, \forall v\in V \end{equation} by the definition of $w_n$ and $a$. By Lemma \ref{ring lemma}, if $v\in V$ is of combinatorial distance $m$ to $0$, then \begin{equation*} \begin{aligned} |w_n(v)|=&|w_n(v)-w_n(0)|\\ \leq& \sum_{i=1}^m |w_n(v_i)-w_n(v_{i-1})| = \sum_{i=1}^m |w(v_i+v_n)-w(v_{i-1}+v_n)| \leq m M, \end{aligned} \end{equation*} where $v_m=v$, $v_0=0$ and $v_0\sim v_1\sim\cdots\sim v_m$ is a path of combinatorial distance $m$ between $0$ and $v$. By the diagonal argument, there exists a subsequence of $\{v_n\}$, still denoted by $\{v_n\}$ for simplicity, such that $w_\infty(v):=\lim_{n\rightarrow \infty}w_n(v)$ exists for all $v\in V$.
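The estimate $|w_n(v)|\leq mM$ above uses only the triangle inequality and the uniform bound $M$ on one-step increments; a toy Python check with arbitrary increments (for illustration only):

```python
import random

# Toy check of the telescoping estimate |w_n(v)| <= m M: along a combinatorial
# path v_0 ~ v_1 ~ ... ~ v_m with one-step increments bounded by M, the value
# at the endpoint is bounded by m M.  The increments are random placeholders.
random.seed(0)
M, m = 1.0, 12
incs = [random.uniform(-M, M) for _ in range(m)]

w_path = [0.0]                     # w_n(v_0) = w_n(0) = 0
for d in incs:
    w_path.append(w_path[-1] + d)  # w_n(v_i) = w_n(v_{i-1}) + increment

total = abs(w_path[-1] - w_path[0])
assert total <= sum(abs(d) for d in incs) <= m * M
```

This is exactly why the diagonal argument applies: the values $w_n(v)$ are uniformly bounded on each combinatorial ball around $0$.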
To see part (b), for any fixed $n\in \mathbb{N}$ and any edge $e\in E$, we have \begin{equation}\label{(w_n-w_0)*l_0} \begin{aligned} w_n*l_0(e)=e^{-w(v_n)}w*l_0(e+v_n) \end{aligned} \end{equation} by the translation invariance $I(e)=I(e+\delta)$ for the weight $I$. This implies that $w_n*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$ by the assumption that $w*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$. By $w_\infty(v)=\lim_{n\rightarrow \infty}w_n(v)$ and continuity, we have $w_\infty*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$. Moreover, we have $w_\infty(v+\delta)-w_\infty(v)\leq a$ for any $v\in V$ by (\ref{w_n v+delta-w_n v}), which implies \begin{equation}\label{w infty<a} \begin{aligned}\sup \{w_\infty(v+\delta)-w_\infty(v)|v\in V\}\leq a. \end{aligned} \end{equation} To see part (c), by $w_n(0)=0$, (\ref{w_n delta}) and (\ref{w infty<a}), we have $w_\infty(0)=0$ and \begin{equation*} \begin{aligned} w_\infty(\delta)-w_\infty(0)=w_\infty(\delta)=a\geq \sup \{w_\infty(v+\delta)-w_\infty(v)|v\in V\}, \end{aligned} \end{equation*} which implies that $w_\infty(v+\delta)-w_\infty(v)$ attains the maximal value $\sup \{w_\infty(v+\delta)-w_\infty(v)|v\in V\}$ at $v=0$. Note that for a fixed $\delta$ and $u(v):=w_\infty(v+\delta)-w_\infty(v)$, $u*(w_\infty*l_0)$ is a flat generalized weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$ by Lemma \ref{flatness preserved by group action}. By Theorem \ref{Maximum principle}, we have $w_\infty(v+\delta)-w_\infty(v)=a$ for any $v\in V$. If $(S, \mathcal{T}_{st}, I, w*l_0)$ is embeddable, then $(S, \mathcal{T}_{st}, I, w_n*l_0)$ is embeddable by (\ref{(w_n-w_0)*l_0}). The rest of the proof is an application of Lemma \ref{embed}. 
{Q.E.D.} Theorem \ref{infrigidity introduction} follows from the following more general result. \begin{theorem} \label{rigidity general} Let $(S, \mathcal{T}_{st}, I)$ be a weighted hexagonal triangulated plane with the weight $I$ translation invariant, and $l_0$ be a weighted Delaunay inversive distance circle packing metric on $(S, \mathcal{T}_{st}, I)$ generated by a constant label $w_0: V\rightarrow \mathbb{R}$ such that the vertex set is a lattice $L=V=\{mu_1+nu_2|m,n\in \mathbb{Z}\}$. Suppose $w*l_0$ is a flat generalized weighted Delaunay inversive distance circle packing metric on the plane $(S, \mathcal{T}_{st}, I)$ and $(S, \mathcal{T}_{st}, I,w*l_0)$ is embeddable into $\mathbb{C}$. Then $w$ is a constant function. \end{theorem} \begin{proof} The idea of proof can be summarized as follows. Assuming $w$ is not a constant, we construct a sequence of discrete conformal factors $w_n$ by extracting ``directional derivatives" of $w$ at different base points. This construction relies heavily on the symmetric structure of the lattice $V(\mathcal{T}_{st}) = L$ generated by $I$ and $w_0$, which implies that the limit of this sequence produces a linear discrete conformal factor $w_\infty$. By Lemma \ref{subsequence with const difference for w0 const}, $(S, \mathcal{T}_{st},I, w_\infty*l_0)$ is embeddable. However, by Proposition \ref{spiral}, if $w_\infty$ is not a constant, $(S, \mathcal{T}_{st}, I, w_\infty*l_0)$ contains overlapping triangles under the developing maps. This leads to a contradiction. \textbf{Step 1:} construct a linear discrete conformal factor $\hat w$. Since $w$ is assumed to be different from a constant function, there exists a $\delta_1 \in \{\pm u_1, \pm u_2, \pm(u_1-u_2)\}$ such that $a_1 = \sup \{w(v + \delta_1) - w(v) | v\in V\}> 0$. By Lemma \ref{ring lemma}, $a_1 \in (0, \infty)$.
Applying Lemma \ref{subsequence with const difference for w0 const} to $w*l_0$ in the direction $\delta_1$, we can find a function $w_\infty: V \rightarrow \mathbb{R}$ such that $(S, \mathcal{T}_{st},I, w_\infty*l_0)$ is embeddable and $$w_\infty(v+\delta_1)-w_\infty(v)=a_1, \forall v\in V.$$ Further applying Lemma \ref{subsequence with const difference for w0 const} to $w_\infty*l_0$ in the direction $\delta_2\in \{\pm u_1, \pm u_2, \pm(u_1-u_2)\}-\{\pm \delta_1\}$ gives rise to a function $\hat{w}=(w_\infty)_\infty: V\rightarrow \mathbb{R}$ such that $(S, \mathcal{T}_{st},I, \hat{w}*l_0)$ is embeddable. Moreover, $$\hat{w}(v+\delta_1)-\hat{w}(v)=a_1, \hat{w}(v+\delta_2)-\hat{w}(v)=a_2, \forall v\in V,$$ which shows that $\hat{w}(v)$ is an affine function of the form $\hat{w}(n\delta_1+m\delta_2)=na_1+ma_2+a_3$ with $a_1\in (0, +\infty), a_2, a_3\in \mathbb{R}$. Without loss of generality, we can assume $\hat{w}(n\delta_1+m\delta_2)=na_1+ma_2$, because the weighted Delaunay property of generalized inversive distance circle packing metrics is invariant under scaling. Furthermore, $(S, \mathcal{T}_{st},I, \hat{w}*l_0)$ is embeddable. \begin{figure} \caption{Three cases of degenerate triangulations.} \label{figure5} \end{figure} \textbf{Step 2:} Overlapping of $(S, \mathcal{T}_{st}, I, \hat{w}*l_0)$. By Step 1, there are two positive numbers $\bar{\lambda}\in (1,+\infty)$ and $\bar{\mu}\in (0, +\infty)$ so that $$\hat{w}(m\delta_1 + n\delta_2) = m\log \bar{\lambda} + n\log \bar{\mu}$$ and $(S, \mathcal{T}_{st},I, \hat{w}*l_0)$ is embeddable. Then there is no non-degenerate triangle in the image of the developing map $\hat{\phi}$ for $(S, \mathcal{T}_{st}, I, \hat{w}*l_0)$. Otherwise by Proposition \ref{spiral}, there are two triangles with overlapping interiors. Therefore, all the triangles in the image of $(S, \mathcal{T}_{st}, I, \hat{w}*l_0)$ under $\hat{\phi}$ are degenerate. All the inner angles are either $0$ or $\pi$.
Up to obvious automorphisms of $\mathcal{T}_{st}$, there are three cases in Figure \ref{figure5} showing triangles in the star of the origin. Case 1 and Case 2 differ by a rotation $\gamma$, and Case 1 and Case 3 differ by an orientation-reversing automorphism $\rho$ of $\mathcal{T}_{st}$ such that $\rho(0) = 0$, $\rho(\vec v_1) = \vec v_2$, and $\rho(\vec v_2) = \vec v_2 - \vec v_1$. Therefore, we only need to consider Case 1. By Proposition \ref{spiral} (c), the constants $\bar\lambda$ and $\bar\mu$ depend only on $I$ and $w_0$. \begin{figure} \caption{Intersecting edges in the developing maps. } \label{figure6} \end{figure} Consider the edges $e_1 = v_0v_3$, $e_2 = v_0v_6$, $e_3 = v_6v_7$ and their respective lengths $l_1$, $l_2$, $l_3$ in $\hat{w}*l_0$ in Figure \ref{figure6}. Since $l_3 = (\bar\lambda/\bar\mu) l_2$ and $l_1 = (\bar\mu/\bar\lambda) l_2$, the AM--GM inequality gives $$l_1 + l_3 \geq 2l_2 > l_2.$$ Since $(S, \mathcal{T}_{st}, I, \hat{w}*l_0)$ with a developing map $\hat{\phi}$ is embeddable, there exists a sequence of flat polyhedral metrics with developing maps $\phi_n$ which are embeddings such that $\phi_n$ converge to $\hat{\phi}$ uniformly on compact sets. Then for $n$ large enough, the images of $e_1$ and $e_3$ under $\phi_n$ intersect by the inequality above. This is because the angle condition at $v_6$ forces $e_3$ to rotate clockwise and the angle condition at $v_0$ forces $e_1$ to rotate counterclockwise. Then the intersection contradicts the fact that $(S, \mathcal{T}_{st},I, \hat{w}*l_0)$ is embeddable. \end{proof} \section{The convergence of inversive distance circle packings}\label{section 5} \subsection{Proof of the main theorem} Recall the main theorem of this paper. \begin{theorem}\label{conv1} Let $\Omega$ be a Jordan domain in the complex plane bounded by a Jordan curve $\partial\Omega$ with three distinct points $p,q, r\in \partial \Omega$.
Let $f$ be the Riemann mapping from the unit equilateral triangle $\triangle ABC$ to $\overline{\Omega}$ such that $f(A) = p$, $f(B) = q$, $f(C) = r$. There exists a sequence of weighted triangulated polygonal disks $(\Omega_{n}, \mathcal{T}_n, I_n, (p_n, q_n, r_n))$ with inversive distance circle packing metrics $l_n$, where $\mathcal{T}_n$ is an equilateral triangulation of $\Omega_n$, $I_n: E_n\rightarrow (1, +\infty)$ is a weight defined on $E_n = E(\mathcal{T}_n)$ and $p_n, q_n, r_n$ are three boundary vertices of $\mathcal{T}_n$, such that \begin{enumerate} \item $\Omega=\cup_{n=1}^{\infty} \Omega_n$ with $\Omega_n \subset \Omega_{n+1}$, and $\lim_n p_n =p$, $\lim_n q_n =q$ and $\lim_n r_n=r$, \item the discrete conformal maps $f_n$ from $\triangle ABC$ to $(\Omega_{n}, \mathcal{T}_n,I_n, l_n)$ with $f_n(A) = p_n$, $f_n(B) = q_n$, $f_n(C) = r_n$ exist, \item the discrete conformal maps $f_n$ converge uniformly to the Riemann mapping $f$. \end{enumerate} \end{theorem} To prove Theorem \ref{conv1}, we need to establish the existence of discrete conformal maps induced by inversive distance circle packings from a flat polyhedral disk to an equilateral triangle. In the case of Thurston's circle packings, the Koebe-Andreev-Thurston theorem guarantees the existence of a circle packing of the unit disk with any given triangulation of a disk as the nerve of the packing. In the case of vertex scaling \cite{Luo CCM}, the discrete uniformization theorem \cite{GGLSW,GLSW} gives the existence of a discrete conformal factor with prescribed combinatorial curvature on marked closed surfaces. Unfortunately, there is no known existence theorem for inversive distance circle packings on arbitrary triangulations. Theorem \ref{exist} below establishes such an existence theorem for inversive distance circle packings when the triangulations of flat polyhedral disks are subdivided in a scheme as follows.
Let $(\mathcal P, \mathcal{T}, l)$ be a flat polyhedral disk with an equilateral triangulation, i.e. a triangulation in which all triangles are equilateral. Then the length function $l$ is a constant function on $E$. Given an equilateral Euclidean triangle $\triangle$ in the plane, the \it $n$-th standard subdivision \rm of $\triangle$ is the equilateral triangulation of $\triangle$ by $n^2$ equilateral triangles. Applying this subdivision to each triangle in an equilateral triangulation of a flat polyhedral disk $(\mathcal P, \mathcal{T}, l)$, we obtain its \it $n$-th standard subdivision \rm $(\mathcal P, \mathcal{T}_{(n)}, l_{(n)})$. Furthermore, if $l$ is an inversive distance circle packing metric induced by a constant label $u$ and a constant weight $I: E\rightarrow (1, +\infty)$, we require that $l_{(n)}$ is also an inversive distance circle packing metric induced by a constant label $u_{(n)}$ and a constant weight $I_{(n)}: E_{(n)}\rightarrow (1, +\infty)$ taking the same value as $I: E\rightarrow (1, +\infty)$. \begin{theorem} \label{exist} Suppose $(\mathcal P, \mathcal{T}, l)$ is a flat polyhedral disk with an equilateral triangulation $\mathcal{T}$ such that exactly three boundary vertices $p,q,r$ have curvature $\frac{2\pi}{3}$, and the metric $l$ is an inversive distance circle packing metric induced by a constant label $u$ and a constant weight $I:E\rightarrow (1, +\infty)$. Then for sufficiently large $n$, there is a discrete conformal factor $w: V_{(n)}\to \mathbb{R}$ for the $n$-th standard subdivision $(\mathcal P, \mathcal{T}_{(n)}, I_{(n)}, l_{(n)} )$ such that \begin{enumerate} \item $K_i(w*l_{(n)})=0$ for all $v_i \in V_{(n)}-\{p,q,r\}$, \item $K_i(w*l_{(n)})=\frac{2\pi}{3}$ for all $v_i \in \{p,q,r\}$, \item there is a constant $\theta_0 = \theta_0(I)>0$ independent of $n$ such that all inner angles of triangles in $(\mathcal{T}_{(n)}, w*l_{(n)})$ are in the interval $[\theta_0, \pi/2+\theta_0]$.
\end{enumerate} \end{theorem} Note that the underlying metric space of $(\mathcal P, \mathcal{T}_{(n)}, I_{(n)}, w*l_{(n)} )$ is an equilateral triangle, and $(\mathcal P, \mathcal{T}_{(n)}, I_{(n)}, l_{(n)} )$ is weighted Delaunay for each $n$. Assuming Theorem \ref{exist}, the proof of Theorem \ref{conv1} is a standard argument using the properties of quasiconformal maps in the plane. To achieve this aim, we first recall the following three theorems on the extension and convergence of quasiconformal maps. \begin{theorem}[\cite{Al}, Corollary in Page 30] \label{extension} If $f: \mathbb{D} \to \Omega$ is a $K$-quasiconformal map from the open unit disk $\mathbb{D}$ onto a Jordan domain $\Omega$, then $f$ extends continuously to a homeomorphism $\overline{f}: \overline{\mathbb{D}} \to \overline{\Omega}$. \end{theorem} The following theorem is a simple consequence of Lemma 2.1 and Theorem 2.2 in \cite{Leh}. \begin{theorem} \label{compactness} If $f_n:\mathbb{D}\to\Omega_n$ is a sequence of $K$-quasiconformal maps such that the domains $\Omega_n$ are uniformly bounded, then every subsequence of $f_n$ contains a subsequence that converges locally uniformly. Moreover, the limit of this subsequence is a $K$-quasiconformal map or a constant map. \end{theorem} A sequence of Jordan curves $J_n$ in $\mathbb{C}$ converge uniformly to a Jordan curve $J$ in $\mathbb{C}$ if there exist homeomorphisms $\phi_n: \mathbb S^1 \to J_n$ and $\phi: \mathbb S^1 \to J$ such that $\phi_n$ converge uniformly to $\phi$. \begin{theorem}[\cite{Pal}, Corollary 1] \label{pal} Assume that $\Omega_n$ is a sequence of Jordan domains such that $\partial \Omega_n$ converge uniformly to $\partial \Omega$. If $f_n: \mathbb{D} \to \Omega_n $ is a $K$-quasiconformal map for each $n$, and the sequence \{$f_n$\} converges to a $K$-quasiconformal map $f: \mathbb{D} \to \Omega$ uniformly on compact sets of $\mathbb{D}$, then $\overline{f_n}$ converge to $\overline{f}$ uniformly on $\overline{\mathbb{D}}$.
\end{theorem} \begin{proof}[Proof of Theorem \ref{conv1}] By taking the intersection of scalings of the standard hexagonal triangulation in the plane with $\Omega$, we can construct a sequence of nested polygonal disks $\Omega_n$ such that $\partial \Omega_n$ converge uniformly to $\partial \Omega$ and there are three boundary vertices $p_n, q_n, r_n \in \partial \Omega_n$ such that $\lim_n p_n =p$, $\lim_n q_n =q$ and $\lim_n r_n =r$. By adding or removing boundary vertices if necessary, we can assume that the curvatures at $p_n, q_n, r_n\in \partial \Omega_n$ are $\frac{2\pi}{3}$ and the curvatures at all other boundary vertices of $\Omega_n$ are not $\frac{2\pi}{3}$. By Theorem \ref{exist}, we obtain a standard subdivision $\mathcal{T}_n$ of $\Omega_n$ and a discrete conformal factor $w_n$ such that $(\Omega_n, \mathcal{T}_n, w_n*l_{st})$ is isometric to the unit equilateral triangle $(\triangle ABC, \mathcal{T}_n)$, where $A,B,C$ correspond to $p_n,q_n,r_n$ respectively. Let $f_n: (\triangle ABC,\mathcal{T}_n, (A,B,C)) \to (\Omega_n, \mathcal{T}_n, (p_n, q_n, r_n))$ be the discrete conformal map induced by the correspondence of triangulations. Let $\bar{f}$ be the Riemann mapping from $\triangle ABC$ to $\overline{\Omega}$ sending $A, B, C$ to $p, q, r$ respectively. We claim that $f_n$ converges uniformly to $\bar{f}$ on $\triangle ABC$. By Theorem \ref{exist}, all inner angles of triangles in $(\triangle ABC, \mathcal{T}_n, w_n*l_{st})$ are bounded below by a constant $\theta_0>0$ independent of $n$. Then the discrete conformal maps $f_n$ are $K$-quasiconformal from $int(\triangle ABC)$ to $int(\Omega_n)$ for some constant $K$ independent of $n$ and continuous from $\triangle ABC$ to $\Omega_n$. Let $\mathring{f}_n$ be the restriction of $f_n$ to $int(\triangle ABC)$. Theorem \ref{compactness} implies that every convergent subsequence of $\{\mathring f_n\}$ converges to a $K$-quasiconformal map $\mathring{g}$ from $int(\triangle ABC)$ to $int(\Omega)$. 
Since $\Omega = \cup_n\Omega_n$, $\mathring{g}$ is onto $int(\Omega)$. Theorem \ref{extension} implies that $\mathring{g}$ extends to a homeomorphism $g:\triangle ABC \to \Omega$. Theorem \ref{pal} implies that $f_n$ converge uniformly to $g$ on $\triangle ABC$. It is straightforward to check that $g(A) = p$, $g(B) = q$, and $g(C) = r$. Notice that the Riemann mapping $\bar f$ is the only continuous extension of a \textit{conformal} map from $int(\triangle ABC)$ to $\Omega$ with $\bar{f}(A) = p$, $\bar{f}(B) = q$, and $\bar{f}(C) = r$. This means that if we can show $g$ is conformal, then $g = \bar{f}$ and all limits of convergent subsequences of $\{f_n\}$ are $\bar{f}$. This will complete the proof of $f_n\to \bar{f}$ uniformly on $\triangle ABC$. The conformality of $g$ follows from Theorem \ref{rigidity general} by the same argument as the Hexagonal Packing Lemma in \cite{RS}. We briefly repeat the arguments here for completeness. For a vertex $v_0\in \mathcal{T}_{st}$, let $B_n$ be the $n$-ring neighborhood of $v_0$ in $\mathcal{T}_{st}$. Then $B_n$ is a finite simplicial complex whose underlying space is a topological disk $\mathbb{D}$. Assume that $l_n$ is a flat inversive distance circle packing on $B_n$ with the constant weight $I$. Let $s_n$ denote the maximal value of $|r_i/r_j - 1|$ over pairs of adjacent circles in $l_n$ of $B_n$. Lemma \ref{ring lemma} implies that $s_n$ is uniformly bounded by some constant $C(I)$. As $n\to \infty$, we can pick a convergent subsequence of $(\mathbb{D}, B_n, I, l_n)$, still indexed by $n$, such that all circles converge geometrically. We claim that $\lim_n{s_n} = 0$. Otherwise, as $n\to \infty$, the limit produces an inversive distance circle packing on $\mathcal{T}_{st}$ such that circles have different sizes. This contradicts the fact that $w$ is a constant in Theorem \ref{rigidity general}. Applied to $\mathcal{T}_n$, the arguments above show that the corresponding quantity $s_n$ of $\mathcal{T}_n$ goes to zero as $n\to \infty$. 
Equilateral triangles in $\mathcal{T}_n$ contained in a compact subset of $\Omega$ are mapped by $f_n^{-1}$ to triangles in $(\triangle ABC, \mathcal{T}_n)$ which are close to equilateral. Hence $f_n$ restricted to each such triangle converges to a similarity map. The dilatations $K_n$ of $f_n$ converge to $1$. Therefore, $g$ is $1$-conformal, which is equivalent to being conformal. \end{proof} The rest of this paper is devoted to proving Theorem \ref{exist}. To find the discrete conformal factors in Theorem \ref{exist}, we will use a system of ordinary differential equations proposed in \cite{GLW} to deform the discrete conformal factors via the discrete curvatures at vertices. We first consider such a flow on a standard subdivision of an equilateral triangle in Theorem \ref{dcm triangle}, then use the flow to construct the discrete conformal factor required in Theorem \ref{exist}. In the rest of this section, we assume that any initial polyhedral metric $l$ on $(\mathcal{P},\mathcal{T})$ is an inversive distance circle packing metric induced by a constant weight $I>1$ and a constant label, and that it is weighted Delaunay. For any discrete conformal factor $w$ on $V$, denote the angle at $v_k$ of a triangle $\triangle v_iv_jv_k$ in the metric $w*l$ as $\theta_{ij}^k(w)$. Similarly, the conductance of $w*l$ defined by the formula (\ref{definitioneta}) is denoted as $\eta(w)$, and the curvature of $w*l$ is denoted as $K(w)$. The notation $a = O(b)$ refers to the fact that $|a| \leq C|b|$ for some constant $C = C(I)>0$. \subsection{Inversive distance circle packings along flows} In this subsection, we will solve the following prescribing curvature problem: assume $V_0\subset V$ and the initial curvature of $(\mathcal{P},\mathcal{T}, l)$ is $K^0$. Given a prescribed curvature $K^*$ on $V - V_0$, find a discrete conformal factor $w$ such that $w|_{V_0} = 0$ and $K(w) = K^*$. 
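To fix ideas, the discrete curvature entering this problem is the angle defect: at an interior vertex it is $2\pi$ minus the sum of the incident inner angles, with edge lengths determined by the radii and the weight. The following sketch is our own illustrative Python, not part of the paper; it assumes the inversive distance length relation $l_{ij}^2 = r_i^2 + r_j^2 + 2Ir_ir_j$ used in (\ref{length1 introduction}).

```python
import math

def edge_len(ri, rj, I):
    # inversive distance length relation: l_ij^2 = ri^2 + rj^2 + 2*I*ri*rj
    return math.sqrt(ri * ri + rj * rj + 2 * I * ri * rj)

def inner_angle(a, b, c):
    # angle between the sides of lengths a and b, opposite the side of length c
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

def curvature(r_center, ring, I):
    # discrete curvature at an interior vertex: 2*pi minus the cone angle,
    # summed over the fan of triangles formed with the cyclic ring of neighbors
    total = 0.0
    n = len(ring)
    for k in range(n):
        rj, rk = ring[k], ring[(k + 1) % n]
        a = edge_len(r_center, rj, I)
        b = edge_len(r_center, rk, I)
        c = edge_len(rj, rk, I)
        total += inner_angle(a, b, c)
    return 2 * math.pi - total

# constant radii with a full hexagonal ring give a flat vertex (K = 0),
# while a 5-valent vertex with equal radii has positive curvature pi/3
K6 = curvature(1.0, [1.0] * 6, 2.0)
K5 = curvature(1.0, [1.0] * 5, 2.0)
```

With equal radii all edges have the same length, so each triangle is equilateral and the six (resp. five) angles at the center sum to $2\pi$ (resp. $5\pi/3$), consistent with the constant metrics considered above.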
Consider a smooth family of discrete conformal factors $w(t)$ satisfying \begin{equation} \label{flow} \left\{ {\begin{array}{lr} {K_i}(w(t)) = (1 - t)K_i^0 + tK_i^*,&v_i \in V - {V_0},\\ {w_i}(t) = 0, &v_i \in {V_0}, \end{array}} \right.\end{equation} and $w(0) = 0$. This family of $w(t)$, if it exists on the interval $[0,1]$, provides a linear interpolation between the initial curvature $K^0$ and the prescribed curvature $K^*$. Therefore, $w(1)*l$ has curvature $K^*$. Hence, we need to show that this flow exists on $[0,1]$ for suitable standard subdivisions of $\mathcal{T}$. To this end, we recall some basic notions of analysis on graphs. Given a graph $(V, E)$, the set of all oriented edges in $(V,E)$ is denoted by $\bar{E}$. If $v_iv_j$ is an edge in $E$, we denote it as $i\sim j$. A \it conductance \rm on $(V,E)$ is a function $\eta: \bar{E} \to \mathbb{R}_{\geq 0}$ so that $\eta_{ij}=\eta_{ji}$. The following definitions and results are well-known. See \cite{Lov} for details. \begin{definition} \label{definition lap} Given a finite graph $(V,E)$ with a conductance $\eta$, the gradient $\nabla: \mathbb{R}^V \to \mathbb{R}^{\bar{E}}$ is defined by $$ (\nabla f)_{ij} =\eta_{ij}(f_i-f_j),$$ and the Laplace operator associated to $\eta$ is the linear map $\Delta: \mathbb{R}^V \to \mathbb{R}^V$ defined by $$ (\Delta f)_i =\sum_{j \sim i} \eta_{ij}(f_i-f_j).$$ \end{definition} Given a set $V_0\subset V$ and a function $g: V_0\to \mathbb{R}$, the solution to the Dirichlet problem is a function $f:V\to\mathbb{R}$ satisfying $$(\Delta f)_i = 0, \forall v_i\in V-V_0, \text{ and } f|_{V_0} = g.$$ \begin{proposition}\label{mpgraph} Suppose $(V,E)$ is a finite connected graph with a conductance $\eta(e)>0$ for any edge $e\in E$. Given a nonempty $V_0 \subset V$ and $g:V_0\to \mathbb{R}$, the solution $f$ to the Dirichlet problem exists. 
Moreover, (a) (Maximum principle) $ \max_{v_i \in V} f_i =\max_{ v_i \in V_0} f_i.$ (b) (Strong maximum principle) If $V-V_0$ is connected and $\max_{v_i\in V-V_0} f_i =\max_{v_i \in V_0} f_i$, then $f|_{V-V_0}$ is a constant function. \end{proposition} Recall that the formula (\ref{definitioneta}) defines a conductance $\eta$ for any inversive distance circle packing on $(S, \mathcal{T}, I)$. If it is weighted Delaunay, then $\eta_{ij}\geq 0$. In the rest of this paper, we assume that the Laplace operator $\Delta$ is induced from this conductance $\eta$ for an inversive distance circle packing. Differentiating equation (\ref{flow}) with respect to $t$ and using the variational formula of curvatures in (\ref{curvature}), we obtain the following system of ODEs \begin{equation} \label{ode} \left\{ {\begin{array}{lr} {{(\Delta w')}_i} = \sum\limits_{j \sim i} {\eta _{ij}}({w'_i} - {w'_j}) = K_i^* - K_i^0,& v_i \in V - {V_0},\\ {w'_i}(t) = 0, & v_i \in {V_0}, \end{array}} \right. \end{equation} with the initial value $w(0) = 0$, where $w'_i = \frac{dw_i}{dt}$. We will show that the solution to the system (\ref{ode}) exists for all $t \in [0,1]$ if $(\mathcal{P},\mathcal{T}, l)$ is chosen carefully. Before proving existence, we first characterize the maximal interval of existence of the solution to (\ref{ode}). Given a weighted triangulated surface $(S, \mathcal{T}, I)$ with an inversive distance circle packing metric $l$, consider the set of discrete conformal factors $W\subset \mathbb{R}^V$ defined by \begin{equation}\label{space W} \begin{aligned} W = \{w\in \mathbb{R}^V | w*l &\text{ is an inversive distance circle packing metric}\\ &\text{on $(S, \mathcal{T}, I)$ such that $\eta_{ij}>0$ for all edges}\}. \end{aligned} \end{equation} \begin{lemma} \label{odeexist} Let $(\mathcal{P}, \mathcal{T}, I)$ be a weighted triangulated surface with an inversive distance circle packing metric $l$ generated by a label $u$. 
The initial value problem (\ref{ode}) defined on $W$ has a unique solution on a maximal interval $[0, t_0)$ with $t_0>0$, provided $V_0 \neq \emptyset$ and $0\in W$. Moreover, if $t_0<\infty$, then either $\liminf_{t \to t_0^-} \theta^i_{jk}(w(t)) = 0$ for some angle $\theta^i_{jk}$ or $\liminf_{t \to t_0^-} \eta_{ij}(w(t))=0$ for some edge $v_iv_j$. \end{lemma} \begin{proof} The ODE system (\ref{ode}) can be written as \begin{equation*} \left\{ {\begin{array}{*{20}{l}} {A(w) \cdot w'(t) = b, }\\ {w(0) = 0,} \end{array}} \right. \end{equation*} where ${A(w)}$ is a square matrix valued smooth function of $w$, $b$ is a column vector determined by curvature, and ${w'(t)}$ is a column vector. We claim that ${A(w)}$ is an invertible matrix for each fixed $w\in W$. Indeed, consider the following system of linear equations for a fixed $w$ \begin{equation} \label{a*f=0} {A(w) \cdot f = 0.} \end{equation} From (\ref{ode}) we know that equation (\ref{a*f=0}) is equivalent to \begin{equation*} \left\{ {\begin{array}{lr} {{(\Delta f)}_i} = 0,&v_i \in V - {V_0},\\ {f_i} = 0,&v_i \in {V_0}, \end{array}} \right. \end{equation*} where ${\eta _{ij}>0}$ for all edges since $w \in W$. The maximum principle in Proposition \ref{mpgraph} implies that ${f = 0}$. Therefore, ${A(w)}$ is invertible. As a result, (\ref{ode}) can be written as $w'(t)=A(w)^{-1}b$. Picard's existence theorem for ODE systems implies that there exists an interval $[0,{t_0})$ on which (\ref{ode}) has a solution. If $t_0<\infty$ and $t \nearrow t_0$, then $w(t)$ leaves every compact set in $W$. Consider subsets $W_{\delta}=\{ w \in W | \theta^{i}_{jk} \geq \delta, |w_{i}| \leq \frac{1}{\delta},\eta_{ij} \geq \delta \}$. It is straightforward to check that $W_\delta$ is compact. 
Since $w(t)$ leaves every $W_{\delta}$ for each $\delta >0$, one of the following three cases occurs: \begin{enumerate} \item $\liminf_{t \to t_0^-} \theta^i_{jk}(w(t))=0$ for some $\theta^i_{jk}$, or \item $\liminf_{t \to t_0^-} \eta_{ij}(w(t))=0$ for some edge $v_iv_j$, or \item $\limsup_{t \to t_0^-} |w_i(t)|=+\infty$ for some $v_i \in V$. \end{enumerate} We claim that case (3) implies case (1). Otherwise, there exists $\delta>0$ such that $\liminf_{t \to t_0^-} \theta^i_{jk}(w(t))>\delta$ for all $\theta^i_{jk}$. Since $w_i'(t) = 0$ for $v_i \in V_0$ along the flow (\ref{ode}), the radius $r_i = e^{w_i+u_i}$ does not change along the flow. Then for any triangle $\triangle v_iv_jv_k$ with $v_i$ as a vertex, the sine law implies that $$\frac{{l_{ij}^2}}{{l_{ik}^2}} \le \frac{1}{{{{\sin }^2}\delta }}, \quad \frac{{l_{ik}^2}}{{l_{ij}^2}} \le \frac{1}{{{{\sin }^2}\delta }},$$ which further implies that ${r_j} \le \frac{{\sqrt I }}{{\sin \delta }}({r_i} + {r_k})$ and ${r_k} \le \frac{{\sqrt I }}{{\sin \delta }}({r_i} + {r_j}) $ by $I>1$ and (\ref{length1 introduction}). Therefore, ${r_j}$ and ${r_k}$ are of the same order. In particular, $r_j\rightarrow +\infty$ if and only if $r_k\rightarrow +\infty$. If $w_k$ and $w_j$ go to infinity, then $$\cos \theta _{jk}^i = \frac{{l_{ij}^2 + l_{ik}^2 - l_{jk}^2}}{{2{l_{ij}}{l_{ik}}}} = \frac{{r_i^2 + {r_i}{r_j}I + {r_i}{r_k}I - {r_j}{r_k}I}}{{{l_{ij}}{l_{ik}}}} \to -I<-1,$$ which is impossible. Since the $1$-skeleton of $\mathcal{T}$ is a finite connected graph, we can show inductively that $w_i$ is bounded for all $v_i\in V$, which contradicts the assumption in case (3). This completes the proof of the claim. \end{proof} \subsection{Standard subdivisions of an equilateral triangle} In this subsection, we consider the ODE system (\ref{ode}) when the polyhedral surface is an equilateral triangulation of an equilateral triangle. 
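The existence argument above rests on solving graph Dirichlet problems of the kind in Proposition \ref{mpgraph}, and the same mechanism reappears in this subsection. A minimal numerical sketch (our own illustrative Python on a hypothetical path graph, not from the paper) of such a Dirichlet solve and the maximum principle:

```python
# Toy Dirichlet problem on the path graph 0-1-2-3 with unit conductances.
# The boundary set V0 = {0, 3} carries the data g; vertices 1, 2 are interior.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
eta = {(i, j): 1.0 for i in nbrs for j in nbrs[i]}
f = {0: 0.0, 1: 0.0, 2: 0.0, 3: 1.0}  # boundary values fixed, interior initialized to 0

# Gauss-Seidel relaxation: a discrete harmonic function ((Delta f)_i = 0)
# is the eta-weighted average of its neighbors at every interior vertex.
for _ in range(200):
    for i in (1, 2):
        s = sum(eta[(i, j)] for j in nbrs[i])
        f[i] = sum(eta[(i, j)] * f[j] for j in nbrs[i]) / s

# Maximum principle: the interior values stay between the boundary values.
```

Here the exact solution is $f_1 = 1/3$, $f_2 = 2/3$; the extrema of $f$ are attained on $V_0$, as the maximum principle asserts.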
We will prescribe special curvatures at the boundary vertices such that the discrete conformal maps approximate the power functions in complex analysis. To apply the estimates in network theory, we need to bound the conductance of a weighted Delaunay triangulation as follows. \begin{lemma} \label{ratio} Let $\triangle v_1v_2v_3$ be a weighted triangle generated by an inversive distance circle packing $(r_1, r_2, r_3)$ and the weight $I>1$ is a constant. There exists a constant $\theta_0 = \theta_0(I)\in (0, \frac{\pi}{6})$ such that if the three inner angles of the triangle are bounded in $[\pi /6 - \theta_0, \pi/2 + \theta_0]$, then \begin{enumerate} \item[(a)] $r_j/r_i \leq 20$ for any two radii $r_i$ and $r_j$, \item[(b)] $C\leq \eta_{ij}^k \leq M$ for some constants $C = C(I)>0$ and $M = M(I)>0$. \end{enumerate} \end{lemma} \begin{proof} Set $$\theta_{0} = \min \{\frac{\pi}{1000}, \arcsin \frac{1}{10(20+I)}\}.$$ To prove part (a), by the angle bound and the sine law, \begin{equation}\label{lij bdd} l_{ij}/l_{ik}\leq 1 /\sin (\pi /6 - \pi /1000)< \sqrt{5} \end{equation} for any two edges in the triangle. Without loss of generality, assume that $r_i = 1$. We will prove that $r_j\leq 20$ by contradiction in the following two cases. If $r_j>20$ and $r_k/r_j\leq 1/5$, then $$\frac{l_{ij}^2}{l_{ik}^2} \geq \frac{r_j^2/5 + 2Ir_j+ 1+ 4r_j^2/5}{(r_j/5)^2 + 2Ir_j/5+ 1} \geq 5,$$ which contradicts (\ref{lij bdd}). If $r_j>20$ and $r_k/r_j> 1/5$, then $r_k>4$ and the inner angle $\theta_{jk}^i$ at $v_i$ is the largest inner angle in $\triangle v_iv_jv_k$. Note that in this case, we have $I({r_k} + {r_j} - {r_k}{r_j}) + 1<0$, $l_{ij}<\sqrt{I}(r_j+1)$ and $l_{ik}<\sqrt{I}(r_k+1)$. 
As a result, by the cosine law, we have $$ \begin{aligned} \cos \theta _{jk}^i & = \frac{{l_{ij}^2 + l_{ik}^2 - l_{jk}^2}}{{2{l_{ij}}{l_{ik}}}} \\ &= \frac{{I({r_k} + {r_j} - {r_k}{r_j}) + 1}}{{{l_{ij}}{l_{ik}}}} \\ & < \frac{{ - I({r_k} + {r_j} + {r_k}{r_j} + 1) + 2I({r_k} + {r_j}) + I + 1}}{{I({r_k} + {r_j} + {r_k}{r_j} + 1)}} \\ & \le - 1 + \frac{{2I({r_k} + {r_j}) + 2I}}{{I({r_k} + {r_j} + {r_k}{r_j} + 1)}}\\ & \le \frac{{ - 11}}{{21}}.\\ \end{aligned} $$ This contradicts that the angle bound is $[\pi /6 - \theta_0, \pi/2 + \theta_0]$. To prove part (b), the definition of $\eta_{ij}^k$ and the formula (\ref{h_ij,k}) implies $\eta_{ij}^k = \frac{h_{ij,k}}{l_{ij}} = \frac{r_i^2r_j^2r_kh_k}{Al^2_{ij}},$ where $A=l_{ij}l_{ik}\sin \theta^i_{jk}$. The sign of $\eta_{ij}^k$ is determined by $h_k$. We will show \begin{equation} \label{radius} r_kh_k = (1+I)(1 + I(\frac{r_k}{r_i} +\frac{r_k}{r_j} - 1))\geq \frac{1+I}{4}>0. \end{equation} We just need to check the case that $r_k/r_i\leq 1$ and $r_k/r_j\leq 1$. If both $r_k/r_i\geq 1/2$ and $r_k/r_j\geq 1/2$, then $r_kh_k\geq 1+I$. Hence, we only need to consider the situation that $r_k/r_i<1/2$ or $r_k/r_j<1/2$. By the angle bound and cosine law, we have $$-\frac{1}{5(20+I)}l_{ij}^2 \leq l_{jk}^2 + l_{ik}^2 -l_{ij}^2 .$$ This is equivalent to $$I(r_ir_k + r_jr_k - r_ir_j ) \geq - r_k^2 - \frac{1}{10(20+I)}(r_i^2 + r_j^2 + 2Ir_ir_j),$$ which implies $$1 + I(\frac{r_k}{r_i} +\frac{r_k}{r_j} - 1) \geq 1 - \frac{r_k^2}{r_ir_j} - \frac{1}{10(20+I)}(\frac{r_i}{r_j} + \frac{r_j}{r_i} + 2I) \geq \frac{4}{5} - \frac{r_k^2}{r_ir_j},$$ where the results in part (a) of Lemma \ref{ratio} is used in the last inequality. Then by the formula (\ref{h_i}) of $h_k$, we have $$r_kh_k = (1+I)(1 + I(\frac{r_k}{r_i} +\frac{r_k}{r_j} - 1)) \geq (1+I)(\frac{4}{5} - \frac{r_k^2}{r_ir_j}).$$ Therefore, under the assumption that $r_k/r_i\leq 1$ and $r_k/r_j\leq 1$, we have $r_kh_k\geq 3(1+I)/10> (1+I)/4$ when $r_k/r_i<1/2$ or $r_k/r_j<1/2$. 
The sine law implies that $l_{ij}^2/50\leq A\leq 5l_{ij}^2$. Combining with part (a) of Lemma \ref{ratio}, we can find two constants $M=M(I)$ and $C=C(I)$ such that $$M(I) \geq \frac{r_i^2r_j^2 (1+I)(1+40I)}{Al^2_{ij}}\geq \frac{r_i^2r_j^2r_kh_k}{Al^2_{ij}}\geq \frac{r_i^2r_j^2 (1+I)}{4Al^2_{ij}} \geq C(I) > 0.$$ \end{proof} \begin{theorem} \label{dcm triangle} Let $\mathcal{P}=\triangle ABC$ be an equilateral triangle, $\mathcal{T}_{(n)}$ be the $n$-th standard subdivision of $\mathcal{P}$, and $l$ be an inversive distance circle packing metric on $(\mathcal{P}, \mathcal{T}_{(n)})$ induced by a constant weight $I$ and a constant label. Set $$V_0=\{ v \in V | v \text{ is on the edge $BC$ of the triangle }\Delta ABC\}.$$ Given any $\alpha \in [\frac{\pi}{6}, \frac{\pi}{2}]$, there exists a smooth family of discrete conformal factors $w(t) \in \mathbb{R}^V$ for $t\in [0,1]$ such that $w(0)=0$ and $w(t)* l $ is an inversive distance circle packing metric on $\mathcal{T}_{(n)}$ with curvature $K(t)=K(w(t)* l)$ satisfying \begin{enumerate} \item $K_A(t)= -t\alpha+(2+t)\frac{\pi}{3}$, \item $K_i(t)=0$ \text{for all $v_i \in V-(\{A\}\cup V_0)$,} \item $w_i(t)=0$ \text{for all $v_i \in V_0$,} \item all inner angles $ \theta^i_{jk}(t)$ in the metric $w(t)* l$ are in the interval $$ [\frac{\pi}{3}-|\alpha-\frac{\pi}{3}|, \frac{\pi}{3}+|\alpha-\frac{\pi}{3}|] \subset [\frac{\pi}{6}, \frac{\pi}{2}],$$ \item for $v_i\neq A$, $$|K_i(t) -K_i(0)| = O(\frac{1}{\sqrt{ \ln(n)}}).$$ Moreover, \begin{equation}\label{totalcur} \sum_{v_i \in V_0} |K_i(t)-K_i(0)|\leq \frac{\pi}{6}. \end{equation} \end{enumerate} \end{theorem} Notice that the angle at the vertex $A$ is $t\alpha+(1-t)\pi/3$ along $w(t)$, and the curvatures stay zero at all vertices except those on $BC$ and the vertex $A$. 
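In the smooth setting, the effect described in the theorem, opening the corner at $A$ from $\pi/3$ to $\alpha$ while keeping the rest flat, is exactly what the power map $z \mapsto z^{3\alpha/\pi}$ does to a corner at the origin. A quick numerical check of this angle count (our own illustrative Python, not part of the paper):

```python
import cmath
import math

def power_map(z, alpha):
    # z -> z^(3*alpha/pi) opens a corner of angle pi/3 at the origin to angle alpha
    return z ** (3 * alpha / math.pi)

alpha = math.pi / 2
# images of the two boundary rays of a pi/3 corner at the origin
w1 = power_map(cmath.rect(1.0, 0.0), alpha)
w2 = power_map(cmath.rect(1.0, math.pi / 3), alpha)
opened = cmath.phase(w2) - cmath.phase(w1)  # angle between the image rays
```

The opened angle equals $\alpha$, matching the boundary behaviour of the family $w(t)$ at the vertex $A$.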
The piecewise linear map from $(\mathcal{P}, \mathcal{T}_{(n)}, l)$ to $(\mathcal{P}, \mathcal{T}_{(n)}, w*l)$ determined by Theorem \ref{dcm triangle} is a discrete analogue of the analytic function $f(z) = z^{3\alpha/\pi}$. This construction works for the $n$-th standard subdivision of an equilateral triangulation for any $n$. To prove Theorem \ref{dcm triangle}, we need the following two estimates for solutions to the Dirichlet problem when the graph is an equilateral triangulation of a polygonal disk. \begin{lemma}[\cite{LSW}, Lemma 5.8] \label{estimate1} Assume $\Delta ABC,n,\mathcal T,V_0$ are as given in Theorem \ref{dcm triangle}. Let $\tau: \mathcal{T} \to \mathcal{T}$ be the involution induced by the reflection of $\Delta ABC$ about the angle bisector of $\angle BAC$ and $\eta: E \to \mathbb{R}_{\geq 0}$ be a conductance so that $\eta \tau =\eta$ and $\eta_{ij}=\eta_{ji}$. Let $\Delta: \mathbb{R}^V \to \mathbb{R}^V$ be the Laplace operator defined by $(\Delta f)_i =\sum_{ j \sim i} \eta_{ij}(f_i -f_j)$. If $f \in \mathbb{R}^V$ satisfies $(\Delta f)_i=0$ for $v_i \in V-(\{A\}\cup V_0)$ and $f|_{V_0}=0$, then for all edges $v_iv_j$, the gradient $(\nabla f)_{ij}=\eta_{ij}(f_i-f_j)$ satisfies \begin{equation}\label{90e} |(\nabla f)_{ij}| \leq \frac{1}{2}|(\Delta f)_A|. \end{equation} \end{lemma} \begin{lemma}[\cite{LSW}, Lemma 5.9] \label{estimate2} Assume $\Delta ABC,n,\mathcal T,V_0$ are as given in Theorem \ref{dcm triangle}. Let $\eta: E(\mathcal{T}) \to [\frac{1}{M}, M]$ be a conductance function for some $M>0$ and $\Delta$ be the Laplace operator on $\mathbb{R}^V$ associated to $\eta$. 
If $f:V \to \mathbb{R}$ solves the Dirichlet problem $(\Delta f)_i=0, \forall v_i \in V-(\{A\}\cup V_0)$, $f|_{V_0}=0$ and $(\Delta f )_A=1$, then for all $u \in V_0$, $|(\Delta f)_u|\leq \frac{20M}{\sqrt{\ln n}}.$ \end{lemma} \begin{proof}[Proof of Theorem \ref{dcm triangle}] We will prove Theorem \ref{dcm triangle} by considering the ODE system (\ref{ode}) for $(\triangle ABC, \mathcal{T}_{(n)}, l)$ when $n$ is sufficiently large. The prescribed curvature $K^*$ is $$K_A^* = \pi - \alpha, \quad K_i^* = 0, \quad v_i \in V - ({V_0} \cup \{ A\}).$$ The initial curvature ${K^0}$ is $$K_i^0 = K_i(0) = \frac{2\pi }{3}, \quad v_i \in \{ A,B,C\}, \qquad K_i^0 = K_i(0) = 0, \quad v_i \in V - \{ A,B,C\}.$$ Then the ODE system (\ref{ode}) can be written as \begin{equation} \label{flowtriangle} \left\{ {\begin{array}{lr} {K_A}'(t) = {{(\Delta w')}_A} =\frac{\pi }{3} - \alpha, &\quad \\ {K_i}'(t) = {{(\Delta w')}_i} = K_i^* - K_i^0 = 0,&v_i \in V - ({V_0} \cup \{ A\}), \\ {w'_i}(t) = 0,& v_i \in {V_0}, \end{array}} \right. \end{equation} with the initial value ${w_i}(0) = 0$. It is straightforward to check that $w(0)\in W$, where $W$ is defined by (\ref{space W}). Then there exists a maximal ${t_0} > 0$ such that a solution $w(t)$ to (\ref{flowtriangle}) exists on $[0, t_0)$. Moreover, Lemma \ref{odeexist} implies that there exists a maximal time $s_0$ such that $w(t) \in W$ and the statement (4) holds for $t\in [0, s_0)$. We will prove $s_0\geq 1$. Moreover, $w(1)$ exists and $w(1)*l$ is an inversive distance circle packing metric satisfying (1)-(4) in Theorem \ref{dcm triangle}. Without loss of generality, we assume that $s_0<\infty$. \textbf{Claim}: For any inner angle $\theta^i_{jk}$ and $t\in [0, s_0)$, we have \begin{equation}\label{theta bound} |\theta _{jk}^i(t) - \frac{\pi }{3}| \le {t}|\alpha - \frac{\pi }{3}|. \end{equation} To prove this claim, notice that $\alpha\in [\pi/6, \pi/2]$ implies $\theta^i_{jk}(t)\in [\pi/6, \pi/2]$ for $t\in [0, s_0)$ by the statement (4). 
By Lemma \ref{ratio}, $\eta_{ij}^k(t)>C(I)>0$ for any triangle $\triangle v_iv_jv_k$. Then $$|{(\nabla w')_{ij}}| = {\eta _{ij}}|{w'_i} - {w'_j}| \ge \eta _{ij}^k|{w'_i} - {w'_j}|.$$ By Lemma \ref{estimate1} and ${\frac{{d{K_i}}}{{dt}} = {{(\Delta w')}_i}}$, we obtain \begin{equation*} |{(\nabla w')_{ij}}| \le \frac{1}{2}|{(\Delta w')_A}| = \frac{1}{2}\Big|\frac{{d{K_A}}}{{dt}}\Big| = \frac{1}{2}|\alpha - \frac{\pi }{3}|. \end{equation*} By the formula (\ref{angle deform}), we have \begin{equation*} \Big|\frac{{d\theta _{jk}^i}}{{dt}}\Big| \le \eta _{ik}^j|{w'_i} - {{w'}_k}| + \eta _{ij}^k|{w'_i} - {w'_j}| \le |{(\nabla w')_{ik}}| + |{(\nabla w')_{ij}}| \le |\alpha - \frac{\pi }{3}|. \end{equation*} Then for all $t \in [0,s_0)$, \begin{equation*} |\theta _{jk}^i(t) - \frac{\pi }{3}| = |\theta _{jk}^i(t) - \theta _{jk}^i(0)| \le \int_0^t \Big| \frac{{d\theta _{jk}^i(t)}}{{dt}}\Big|dt \le t|\alpha - \frac{\pi }{3}|. \end{equation*} Now it is not hard to show that $s_0\geq 1$ from the claim. Notice that $\liminf_{t\to s_0^-} \theta^i_{jk}(w(t)) \geq \pi/6$ and $ \liminf_{t \to s_0^-} \eta_{ij}(w(t))>0$ for all edges by Lemma \ref{ratio}. Therefore, as $t\to s_0^-$, for some $\theta^i_{jk}$, \begin{equation}\label{theta limit as t tends s0} \limsup_{ t \to s_0^-} |\theta^i_{jk}(w(t))-\frac{\pi}{3}|= |\alpha-\frac{\pi}{3}| \end{equation} by the definition of $s_0$ and Lemma \ref{odeexist}. If $s_0<1$, then for all the inner angles, we have \begin{equation*} \limsup_{ t \to s_0^-}|\theta _{jk}^i(t) - \frac{\pi }{3}| \le s_0|\alpha - \frac{\pi }{3}| < |\alpha - \frac{\pi }{3}| \end{equation*} by (\ref{theta bound}), which contradicts (\ref{theta limit as t tends s0}). Therefore, $s_0\geq 1$. Notice that $\theta_{jk}^i(t) \in [\pi/6, \pi/2]$ for all inner angles and $\eta_{ij}(t)\geq C(I)>0$ by Lemma \ref{ratio} when $t\in [0, s_0)$. This means that $w(t)*l$ is non-degenerate and strictly weighted Delaunay for $t\in [0, s_0)$. 
Therefore, Lemma \ref{odeexist} implies that $1\leq s_0<t_0$. Then $w(1) \in W$. By continuity, the metric $w(1)*l$ is a non-degenerate inversive distance circle packing metric satisfying (1)-(4) in Theorem \ref{dcm triangle}. Finally, we use Lemma \ref{estimate2} to prove the last statement in Theorem \ref{dcm triangle}. Notice that by Lemma \ref{ratio}, $0< C\leq \eta_{ij} \leq M$, where $C=C(I), M=M(I)$. Applying Lemma \ref{estimate2} to the function $f = \frac{{dw(t)}}{{dt}}/|\alpha - \pi /3|$, we obtain \begin{equation*}\Big|\frac{{d{K_i}(t)}}{{dt}}\Big| = |{(\Delta w')_i}| = |\alpha - \pi /3| \cdot |{(\Delta f)_i}| = O(\frac{1}{{\sqrt {\ln (n)} }}),v_i \in V_0 . \end{equation*} Then for $v_i \ne A$, we have for $t\in[0, 1]$, $$|{K_i}(t) - {K_i}(0)| \le \int_0^t \Big| \frac{{d{K_i}(t)}}{{dt}}\Big|dt = O(\frac{1}{{\sqrt {\ln (n)} }}).$$ Moreover, if $\alpha = \pi/3$, then (\ref{totalcur}) is automatically true since the flow would be a constant flow by Proposition \ref{mpgraph}. Hence we assume $\alpha\neq \pi/3$. We claim that $ {{w'}_A}(t) \ne 0$ for $t \in [0,{t_0})$. Otherwise, ${{w'}_A}(s) = 0$ for some $s \in [0,{t_0})$. Applying the maximum principle, i.e. Proposition \ref{mpgraph}, to the following Dirichlet problem $${\left\{ {\begin{array}{lr} {{(\Delta w'(s))}_i= 0}, &v_i \in V - (\{ A\} \cup {V_0}),\\ {w'_i}(s) = 0, &v_i \in \{ A\} \cup {V_0}, \end{array}} \right.}$$ we obtain ${w'_i}(s) = 0$ for all $v_i \in V$, which implies ${(\Delta w')_A} (s)= 0$. This contradicts ${(\Delta w')_A} (s)=\frac{\pi }{3} - \alpha \neq 0$. This completes the proof of the claim. Furthermore, applying the maximum principle, i.e. Proposition \ref{mpgraph}, to (\ref{flowtriangle}) again shows that $w'_A(t)$ and $w'_i(t)$ have the same sign. Note that for $v_i\in V_0$, $w_i'(t) = 0$. We have $$w'_A(t)K'_i(t) =w'_A(t) \sum_{i\sim j}\eta_{ij}(w_i' - w_j') = -\sum_{i\sim j}\eta_{ij}w'_j(t)w'_A(t) \leq 0, $$ which implies $({K_i}(t) - {K_i}(0)){{w'}_A}(t) \le 0$. 
By the discrete Gauss-Bonnet formula (\ref{discrete Gauss-Bonnet formula}), we have $${K_A}(t) + \sum\limits_{v_i \in {V_0}} {{K_i}} (t) = {K_A}(0) + \sum\limits_{v_i \in {V_0}} {{K_i}} (0) = 2\pi . $$ Since ${K_i}(t) - {K_i}(0)$ have the same sign for all $v_i \in {V_0}$, we conclude that for $t\in[0,1]$, $$\sum\limits_{v_i \in {V_0}} | {K_i}(t) - {K_i}(0)| =|\sum\limits_{v_i \in {V_0}} ({K_i}(t) - {K_i}(0))| = |{K_A}(t) - {K_A}(0)| = |t(\alpha - \frac{\pi }{3})| \le \frac{\pi }{6}.$$ \end{proof} \subsection{Proof of Theorem \ref{exist}} There are two steps to find the discrete conformal factor required in Theorem \ref{exist}. In the first step, we use Theorem \ref{dcm triangle} to construct a discrete conformal factor on the triangles containing boundary vertices of nonzero curvature. This step diffuses the curvature at boundary vertices of the polyhedral disk $\mathcal{P}$ to interior vertices, so that the resulting curvatures are small when the subdivision is sufficiently dense. In the second step, we eliminate these small curvatures using a flow similar to (\ref{ode}). We need the following lemma in the second step. \begin{lemma}[\cite{LSW}, Proposition 5.10] \label{estimate4} Suppose $(\mathcal P, \mathcal{T}', l)$ is a polygonal disk with an equilateral triangulation and $\mathcal{T}$ is the $n$-th standard subdivision of the triangulation $\mathcal{T}'$ with $n \geq e^{10^6}$. Let $\eta: E=E(\mathcal{T}) \to [\frac{1}{M}, M]$ be a conductance function with $M>0$ and $\Delta: \mathbb{R}^V \to \mathbb{R}^V$ be the associated Laplace operator. Let $V_0 \subset V(\mathcal{T})$ be a thin subset such that for all $v \in V$ and $m \leq n/2$, $|B_m(v) \cap V_0| \leq 10m$. 
If $f: V \to \mathbb{R}$ satisfies $(\Delta f)_i=0$ for $v_i \in V-V_0$, $|(\Delta f)_i| \leq \frac{M}{\sqrt{\ln(n)}}$ for $v_i \in V_0$ and $\sum_{v_i \in V_0} |(\Delta f)_i| \leq M$, then for all edges $v_jv_k$ in $\mathcal{T}$, $$|f_j-f_k| \leq \frac{200M^3}{\sqrt{\ln(\ln(n))}}.$$ \end{lemma} \begin{proof}[Proof of Theorem \ref{exist}] We call each boundary vertex of $\mathcal{P}$ other than $p, q, r$ a \emph{corner} if it has nonzero curvature. Denote the set of corners as $V_c$. Notice that by the assumption on $\mathcal{P}$, each vertex in $V_c$ has degree $m = 3, 5$ or $6$. Moreover, the standard subdivision of each triangle of $\mathcal{P}$ does not introduce new corners. Thus, the cardinality $|V_c|$ of $V_c$ is independent of the subdivision $\mathcal{T}_{(n)}$ of $\mathcal{T}$. Let ${B_{[n/3]}}(v)$ be the combinatorial ball in $\mathcal{T}_{(n)}$ centered at $v\in V_c$ with radius $[n/3]$, where $[x]$ denotes the integer part of a real number $x$. Notice that these balls are disjoint in $\mathcal{T}_{(n)}$. Each ${B_{[n/3]}}(v)$ consists of $m-1$ copies of the $[n/3]$-th standard subdivision of equilateral triangles $\triangle_1^v, \dots, \triangle_{m-1}^v$. \textbf{Step 1}: For every $v\in V_c$, we will deform its curvature to zero. In particular, we apply Theorem \ref{dcm triangle} to $\triangle_1^v, \dots, \triangle_{m-1}^v$ with $\alpha = \pi/(m-1)\in [\frac{\pi}{6}, \frac{\pi}{2}]$. It produces a discrete conformal factor $w_i$ on $\triangle_i^v$ for each $i = 1, \dots, m-1$. Notice that the discrete conformal factor on $\mathcal{T}_{(n)}$ in Theorem \ref{dcm triangle} depends only on $\alpha$. The discrete conformal factors $w_i$ are then identical on each $\triangle_i^v$. By symmetry, we can glue them together to form a discrete conformal factor $\bar w$ on $\mathcal{T}_{(n)}$. Specifically, the value of $\bar w$ on ${B_{[n/3]}}(v)$ for $v\in V_c$ is determined by Theorem \ref{dcm triangle}, and the values of $\bar w$ on other vertices are zero. 
Let $\bar K$ be the curvature of the inversive distance circle packing metric $\bar l = \bar w*l$. Let $K$ be the curvature of the target equilateral triangle with $K_i=0$ for all $v_i \in V_{(n)}-\{p,q,r\}$ and $K_i=\frac{2\pi}{3}$ for all $v_i \in \{p,q,r\}$. Then Theorem \ref{dcm triangle} implies that \begin{enumerate} \item $\bar K_i = K_i$ for all vertices $v_i$ in the set $V_k := \{ v_i|{d_c}(v_i,v) \ne [n/3],v \in {V_c}\}$, \item $\bar w_i = 0$ for all $v_i \notin { \cup _{v \in {V_c}}}{B_{[n/3]}}(v)$, \item all inner angles at $v \in {V}$ satisfy $\theta _{ij}^v \in [\frac{\pi }{6},\frac{\pi }{2}]$, \item for all vertices $ v_i \notin V_k$, $|{{\bar K}_i} - K_i| = O(\frac{1}{{\sqrt {\ln (n)} }})$, \item $\sum\limits_{v_i \in V} | {{\bar K}_i} - K_i| \le \frac{2\pi N}{3}$, where $N$ denotes the number of corners. \end{enumerate} Notice that the set $V_k$ is the union of the sets $V_0$ given by Theorem \ref{dcm triangle} for each $v\in V_c$. Statements (1) and (2) are immediate from the construction, and statements (3), (4) and (5) follow from Theorem \ref{dcm triangle}. \textbf{Step 2}: We construct a flow to deform the curvatures of vertices in $V_k$ to zero when the subdivision is sufficiently dense. Specifically, consider the following ODE system on $\mathcal{T}_{(n)}$ \begin{equation} \label{odestep2} \left\{ {\begin{array}{lr} \frac{{d{K_i}(w(s)*\bar l)}}{{ds}} = K_i - {{{\bar K}_i}} ,& v_i \in V - \{ p,q,r\},\\ {w_i}(s) = 0,& v_i \in \{ p,q,r\}, \end{array}} \right. \end{equation} with initial value $w(0) = 0$. The idea is the same as that of (\ref{ode}). Namely, we linearly interpolate between the initial curvature $\bar K$ and the target curvature $K$. 
By Lemma \ref{odeexist}, there exists a maximal $s_0>0$ such that the solution $w(s)$ to (\ref{odestep2}) exists on $[0, s_0)$ and all inner angles of $w(s)*\bar l$ satisfy $\theta _{ij}^v \in [\frac{\pi }{6} - \theta_0,\frac{\pi}{2} + \theta_0]$, where $\theta_0$ is the parameter given by Lemma \ref{ratio}. Now we apply Lemma \ref{estimate4} to estimate the angle deformation along the flow (\ref{odestep2}). Set $V_B=V_{(n)}\setminus V_k$. First notice that $V_B$ is a thin subset of $V_{(n)}$. In particular, $|{B_r}(v_i) \cap {V_B}| \le 10r$ for any $v_i\in V_{(n)}$ and any $r \le n/3$. Moreover, by (4) and (5) in Step 1, we obtain $$\sum\limits_{v_i \in {V_B}} | {K_{i}'}| =\sum\limits_{v_i \in {V_B}} | {(\Delta w')_i}| \le \sum\limits_{i\in V} | {{\bar K}_i} - K_i| \le \frac{{2\pi N}}{3},$$ and $$|{K_{i}'}|= |{(\Delta w')_i}| \le |{{\bar K}_i} - K_i| = O(\frac{1}{{\sqrt {\ln (n)} }}) ,\quad v_i \in {V_B} .$$ Lemma \ref{ratio} implies that $f = w'$ along the flow (\ref{odestep2}) satisfies the conditions in Lemma \ref{estimate4}. Therefore, we obtain that if $i \sim j$, then $$ |{w'_i}(s) - {w'_j}(s)| = O(\frac{1}{{\sqrt {\ln(\ln (n))} }}).$$ As a result, $$ \Big|\frac{{d\theta _{ij}^k}}{{ds}}\Big| \le |\eta _{jk}^i({w'_j} - {{w'}_k})| + |\eta _{ik}^j({w'_i} - {{w'}_k})| = O(\frac{1}{{\sqrt {\ln(\ln (n))} }}), $$ where Lemma \ref{ratio} is used in the last equality. For all $s \in [0,{s_0})$ and sufficiently large $n$, \begin{equation*}|\theta _{ij}^k(w(s)) - \theta _{ij}^k(0)| \le \int_0^s \Big| \frac{{d\theta _{ij}^k(w(s))}}{{ds}}\Big|ds = O(\frac{1}{{\sqrt {\ln(\ln (n))} }}) \le \theta_0 s_0. \end{equation*} We claim that ${s_0} > 1$. Otherwise, we can extend the solution to (\ref{odestep2}) to $[0, s_0+\epsilon)$ for some small $\epsilon>0$, which contradicts the maximality of $s_0$. Set $w^*=w(1)$ and $w = \bar w + w^*$. 
Then the curvature of the inversive distance circle packing metric $w*l$ is $$K(0)+\int^1_0 K'(s)ds=\bar{K}+(K-\bar{K})=K.$$ This implies that the discrete conformal factor $w = \bar w +w^*$ produces the discrete conformal map from $(\mathcal{P}, \mathcal{T}_{(n)}, l_{(n)})$ to the equilateral triangle. \end{proof} \end{document}
ATS theorem In mathematics, the ATS theorem is a theorem on the approximation of a trigonometric sum by a shorter one. The application of the ATS theorem in certain problems of mathematical and theoretical physics can be very helpful. History of the problem In some fields of mathematics and mathematical physics, sums of the form $S=\sum _{a<k\leq b}\varphi (k)e^{2\pi if(k)}\qquad (1)$ are under study. Here $\varphi (x)$ and $f(x)$ are real valued functions of a real argument, and $i^{2}=-1.$ Such sums appear, for example, in number theory in the analysis of the Riemann zeta function, in the solution of problems connected with integer points in planar and spatial domains, in the study of Fourier series, and in the solution of such differential equations as the wave equation, the potential equation, and the heat equation. The problem of approximating the series (1) by a suitable function was already studied by Euler and Poisson. We shall define the length of the sum $S$ to be the number $b-a$ (for integers $a$ and $b,$ this is the number of summands in $S$). Under certain conditions on $\varphi (x)$ and $f(x)$ the sum $S$ can be substituted with good accuracy by another sum $S_{1},$ $S_{1}=\sum _{\alpha <k\leq \beta }\Phi (k)e^{2\pi iF(k)},\ \ \ (2)$ where the length $\beta -\alpha $ is far less than $b-a.$ The first relations of the form $S=S_{1}+R,\qquad (3)$ where $S,$ $S_{1}$ are the sums (1) and (2) respectively and $R$ is a remainder term, with concrete functions $\varphi (x)$ and $f(x),$ were obtained by G. H. Hardy and J. E. Littlewood,[1][2][3] when they deduced the approximate functional equation for the Riemann zeta function $\zeta (s),$ and by I. M. Vinogradov,[4] in the study of the number of integer points in planar domains. In general form the theorem was proved by J. Van der Corput[5][6] (for recent results connected with the Van der Corput theorem, see [7]). 
In every one of the above-mentioned works, some restrictions on the functions $\varphi (x)$ and $f(x)$ were imposed. With convenient (for applications) restrictions on $\varphi (x)$ and $f(x),$ the theorem was proved by A. A. Karatsuba in [8] (see also [9][10]). Certain notations [1]. For $B>0,B\to +\infty ,$ or $B\to 0,$ the notation $1\ll {\frac {A}{B}}\ll 1$ means that there are constants $C_{1}>0$ and $C_{2}>0$ such that $C_{1}\leq {\frac {|A|}{B}}\leq C_{2}.$ [2]. For a real number $\alpha ,$ the notation $\|\alpha \|$ means that $\|\alpha \|=\min(\{\alpha \},1-\{\alpha \}),$ where $\{\alpha \}$ is the fractional part of $\alpha .$ ATS theorem Let the real functions $f(x)$ and $\varphi (x)$ satisfy on the segment [a, b] the following conditions: 1) $f''''(x)$ and $\varphi ''(x)$ are continuous; 2) there exist numbers $H,$ $U$ and $V$ such that $H>0,\qquad 1\ll U\ll V,\qquad 0<b-a\leq V$ and ${\begin{array}{rc}{\frac {1}{U}}\ll f''(x)\ll {\frac {1}{U}}\ ,&\varphi (x)\ll H,\\\\f'''(x)\ll {\frac {1}{UV}}\ ,&\varphi '(x)\ll {\frac {H}{V}},\\\\f''''(x)\ll {\frac {1}{UV^{2}}}\ ,&\varphi ''(x)\ll {\frac {H}{V^{2}}}.\\\\\end{array}}$ Then, if we define the numbers $x_{\mu }$ from the equation $f'(x_{\mu })=\mu ,$ we have $\sum _{a<\mu \leq b}\varphi (\mu )e^{2\pi if(\mu )}=\sum _{f'(a)\leq \mu \leq f'(b)}C(\mu )Z(\mu )+R,$ where $R=O\left({\frac {HU}{b-a}}+HT_{a}+HT_{b}+H\log \left(f'(b)-f'(a)+2\right)\right);$ $T_{j}={\begin{cases}0,&{\text{if }}f'(j){\text{ is an integer}};\\\min \left({\frac {1}{\|f'(j)\|}},{\sqrt {U}}\right),&{\text{if }}\|f'(j)\|\neq 0;\\\end{cases}}$ $j=a,b;$ $C(\mu )={\begin{cases}1,&{\text{if }}f'(a)<\mu <f'(b);\\{\frac {1}{2}},&{\text{if }}\mu =f'(a){\text{ or }}\mu =f'(b);\\\end{cases}}$ $Z(\mu )={\frac {1+i}{\sqrt {2}}}{\frac {\varphi (x_{\mu })}{\sqrt {f''(x_{\mu })}}}e^{2\pi i(f(x_{\mu })-\mu x_{\mu })}\ .$ The simplest variant of this theorem is the statement known in the literature as the Van der Corput lemma. 
Van der Corput lemma Let $f(x)$ be a real differentiable function on the interval $a<x\leq b$ such that, inside this interval, its derivative $f'(x)$ is a monotonic and sign-preserving function, and suppose that for some constant $\delta $ with $0<\delta <1$ it satisfies the inequality $|f'(x)|\leq \delta .$ Then $\sum _{a<k\leq b}e^{2\pi if(k)}=\int _{a}^{b}e^{2\pi if(x)}dx+\theta \left(3+{\frac {2\delta }{1-\delta }}\right),$ where $|\theta |\leq 1.$ Remark If the parameters $a$ and $b$ are integers, then it is possible to substitute the last relation by the following one: $\sum _{a<k\leq b}e^{2\pi if(k)}=\int _{a}^{b}e^{2\pi if(x)}\,dx+{\frac {1}{2}}e^{2\pi if(b)}-{\frac {1}{2}}e^{2\pi if(a)}+\theta {\frac {2\delta }{1-\delta }},$ where $|\theta |\leq 1.$ On applications of the ATS theorem to problems of physics, see [11][12]; see also [13][14]. Notes 1. Hardy, G. H.; Littlewood, J. E. (1914). "Some problems of diophantine approximation: Part II. The trigonometrical series associated with the elliptic θ-functions". Acta Mathematica. International Press of Boston. 37: 193–239. doi:10.1007/bf02401834. ISSN 0001-5962. 2. Hardy, G. H.; Littlewood, J. E. (1916). "Contributions to the theory of the riemann zeta-function and the theory of the distribution of primes". Acta Mathematica. International Press of Boston. 41: 119–196. doi:10.1007/bf02422942. ISSN 0001-5962. 3. Hardy, G. H.; Littlewood, J. E. (1921). "The zeros of Riemann's zeta-function on the critical line". Mathematische Zeitschrift. Springer Science and Business Media LLC. 10 (3–4): 283–317. doi:10.1007/bf01211614. ISSN 0025-5874. S2CID 126338046. 4. I. M. Vinogradov. On the average value of the number of classes of purely root form of the negative determinant Communic. of Khar. Math. Soc., 16, 10–38 (1917). 5. van der Corput, J. G. (1921). "Zahlentheoretische Abschätzungen". Mathematische Annalen (in German). Springer Science and Business Media LLC. 84 (1–2): 53–79. doi:10.1007/bf01458693. ISSN 0025-5831. 
S2CID 179178113. 6. van der Corput, J. G. (1922). "Verschärfung der Abschätzung beim Teilerproblem". Mathematische Annalen (in German). Springer Science and Business Media LLC. 87 (1–2): 39–65. doi:10.1007/bf01458035. ISSN 0025-5831. S2CID 177789678. 7. Montgomery, Hugh (1994). Ten lectures on the interface between analytic number theory and harmonic analysis. Providence, R.I: Published for the Conference Board of the Mathematical Sciences by the American Mathematical Society. ISBN 978-0-8218-0737-8. OCLC 30811108. 8. Karatsuba, A. A. (1987). "Approximation of exponential sums by shorter ones". Proceedings of the Indian Academy of Sciences, Section A. Springer Science and Business Media LLC. 97 (1–3): 167–178. doi:10.1007/bf02837821. ISSN 0370-0089. S2CID 120389154. 9. A. A. Karatsuba, S. M. Voronin. The Riemann Zeta-Function. (W. de Gruyter, Verlag: Berlin, 1992). 10. A. A. Karatsuba, M. A. Korolev. The theorem on the approximation of a trigonometric sum by a shorter one. Izv. Ross. Akad. Nauk, Ser. Mat. 71:3, pp. 63—84 (2007). 11. Karatsuba, Ekatherina A. (2004). "Approximation of sums of oscillating summands in certain physical problems". Journal of Mathematical Physics. AIP Publishing. 45 (11): 4310–4321. doi:10.1063/1.1797552. ISSN 0022-2488. 12. Karatsuba, Ekatherina A. (2007-07-20). "On an approach to the study of the Jaynes–Cummings sum in quantum optics". Numerical Algorithms. Springer Science and Business Media LLC. 45 (1–4): 127–137. doi:10.1007/s11075-007-9070-x. ISSN 1017-1398. S2CID 13485016. 13. Chassande-Mottin, Éric; Pai, Archana (2006-02-27). "Best chirplet chain: Near-optimal detection of gravitational wave chirps". Physical Review D. American Physical Society (APS). 73 (4): 042003. arXiv:gr-qc/0512137. doi:10.1103/physrevd.73.042003. hdl:11858/00-001M-0000-0013-4BBD-B. ISSN 1550-7998. S2CID 56344234. 14. Fleischhauer, M.; Schleich, W. P. (1993-05-01). 
"Revivals made simple: Poisson summation formula as a key to the revivals in the Jaynes-Cummings model". Physical Review A. American Physical Society (APS). 47 (5): 4258–4269. doi:10.1103/physreva.47.4258. ISSN 1050-2947. PMID 9909432.
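As an editorial numerical sanity check of the Van der Corput lemma stated above (not part of the original article): the test phase $f(x)=0.3\sqrt{x+1}$ on $(0,100]$ is an arbitrary choice whose derivative $f'(x)=0.15/\sqrt{x+1}$ is positive, decreasing, and bounded by $\delta=0.15<1$, so the lemma's hypotheses hold and the sum–integral difference must not exceed $3+2\delta/(1-\delta)$.

```python
import numpy as np

# Sanity check of the Van der Corput lemma with an arbitrary admissible
# phase f(x) = 0.3*sqrt(x+1) on (a, b] = (0, 100]; here
# f'(x) = 0.15/sqrt(x+1) is positive, decreasing, and |f'| <= delta < 1.
a, b, delta = 0, 100, 0.15

def f(x):
    return 0.3 * np.sqrt(x + 1.0)

# The exponential sum over the integers a < k <= b.
k = np.arange(a + 1, b + 1)
S = np.exp(2j * np.pi * f(k)).sum()

# The corresponding integral, via the composite trapezoidal rule.
x = np.linspace(a, b, 200001)
g = np.exp(2j * np.pi * f(x))
I = (g.sum() - 0.5 * (g[0] + g[-1])) * (x[1] - x[0])

bound = 3 + 2 * delta / (1 - delta)
print(abs(S - I) <= bound)  # the lemma guarantees True
```

The quadrature error of the fine trapezoidal grid is far below the lemma's error term, so the check is meaningful.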
The Maths Behind the Trans-Atlantic Telegraph Cable by Liam Morris Introduction: Thanks to Liam Morris for permission to reproduce this December 2017 blog post at Old Maths, New Science. William Thomson, later Lord Kelvin, was one of the pre-eminent scientists of the 19th century, as well as a forward-thinking businessman who profited greatly from his services to the cable industry. This post illustrates his grasp of the mathematics behind the early Atlantic cables, an insight which, if followed, would surely have resulted in an earlier success for the enterprise. As Silvanus Thompson later remarked in the Journal of the Society of Telegraph Engineers, William Thomson's key point in the 1855 paper analyzed below was "the famous law of squares about which so much dispute arose." The dispute was between William Thomson and Edward Orange Wildman Whitehouse, and Thomson was, of course, eventually proved correct, while Whitehouse's erroneous theories led indirectly to the failure of the 1858 Atlantic cable. In the mid-19th century, the most sought-after feat of engineering and industry was a trans-Atlantic telegraph cable. But when the first successful cable was laid between Britain and America in 1858, the speed of transmission was only about 0.1 words per minute. This post will be an analysis of Thomson's 1855 paper "On the Theory of the Electric Telegraph", published in the Proceedings of the Royal Society of London. Derivation of the PDE Thomson starts by defining the variables that he uses throughout the work. I shall define the same variables using more modern lettering conventions. Let $C$ be the capacitance of the wire, $R$ the resistance of the wire, $V$ the potential at a point $P$ on the wire at a time $t$, and $I$ the current at the same point in the same instant. 
The charge $Q$ at $P$, called "quantity of electricity" by Thomson, is given by \[Q = VC = It\] Thus, in an infinitesimal length $dx$ of wire at $P$, we have a charge of $VCdx$ and, in an infinitesimal period of time, the charge that leaves $P$ is equal to \[dt\,dI = dt\frac{dI}{dx}\,dx\] Using two equations for the electromotive force, Thomson showed that $RI = -\frac{\partial V}{\partial x}$. Combining these formulae gives the all-important simple form of the telegraphist's equation: \[CR\frac{\partial V}{\partial t} = \frac{\partial^2V}{\partial x^2}\] Thomson notes instantly that this is the heat equation. In a real wire, there would be some electrical leakage into the surrounding water. By introducing some coefficient $h$ to measure this loss, Thomson recovers the equation: \[CR\frac{\partial V}{\partial t} = \frac{\partial^2V}{\partial x^2} -hV\] but with the change of variable: \[V = e^{-\frac{h}{RC}t}\phi\] he obtains a PDE in the original form, i.e.: \[CR\frac{\partial \phi}{\partial t} = \frac{\partial^2\phi}{\partial x^2}\] Thomson uses his observation that his derived equation corresponds to the heat equation by stating immediately a solution, obtained by Fourier methods: \[V = \frac{\tilde{V}}{\pi}\int_0^\infty e^{-zn^\frac{1}{2}}\frac{\sin(2nt-zn^\frac{1}{2})-\sin[(t-T)2n-zn^\frac{1}{2}]}{n}dn\] where the wire is understood to be infinitely long, $z:=x\sqrt{RC}$, $\tilde{V}$ is the voltage to which the end $O$ is instantaneously raised and $T$ is the time for which this potential is maintained. 
By cleverly noting that: \begin{align} \int_{t-T}^t \cos(2n\theta - zn^\frac{1}{2})d\theta &= \left[\frac{1}{2n}\sin(2n\theta-zn^\frac{1}{2})\right]_{t-T}^t \\ &= \frac{1}{2n}\left[\sin(2nt-zn^\frac{1}{2})-\sin(2n(t-T)-zn^\frac{1}{2})\right] \end{align} and taking $T$ to be infinitesimal, so that $\int_{t-T}^t \cos(2n\theta - zn^\frac{1}{2})\,d\theta \approx T\cos(2nt-zn^\frac{1}{2})$ with $t > 0$, Thomson could conclude that: \begin{align} V &= T\frac{2\tilde{V}}{\pi}\int_0^\infty e^{-zn^\frac{1}{2}}\cos(2nt-zn^\frac{1}{2})\,dn\\ &= T\frac{\tilde{V}z}{2\pi^\frac{1}{2}t^\frac{3}{2}}e^{-\frac{z^2}{4t}} \end{align} Thomson now moves to finding the solution to the problem when a charge $Q$ is communicated instantaneously to the wire at $O$. He states the strength of the current $I$ at any point in the wire to be equal to $-\frac{1}{R}\frac{dV}{dx}$ and deduces that the "maximum electrodynamic effect" of an impulse into the wire will therefore be found by finding the value of $t$ making $\frac{\partial V}{\partial z}$ (this being proportional to $\frac{\partial V}{\partial x}$) maximal, with $V$ given by an equation similar to the above: \[V = \frac{e^{-\frac{z^2}{4t}}}{\sqrt{t}}\frac{Q}{\sqrt{\pi}}\sqrt{\frac{R}{C}}\] Upon setting $\frac{\partial}{\partial t}\bigl(\frac{\partial V}{\partial z}\bigr) = 0$, we find that $\frac{\partial V}{\partial z}$ is maximal when: \[\frac{3}{2t^\frac{5}{2}} = \frac{z^2}{4t^\frac{7}{2}}\] giving the value: \[t = \frac{z^2}{6} = \frac{RCx^2}{6}\] which is the value Thomson gives. (Setting $\frac{\partial^2 V}{\partial z^2} = 0$ instead, i.e. locating the inflection point in $z$ rather than maximizing over $t$, gives $t = z^2/2$.) Either way, the point of the calculation is that the time one should leave between message pulses due to the "retardations of signals" is proportional to the square of the length of the wire, now taking $x$ to be the length of the cable. Thomson then goes on to discuss the "velocity of transmission" of the signal. 
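The timing of the "maximum electrodynamic effect" can be checked numerically. This is an editorial sketch, assuming (as above) $V \propto t^{-1/2}e^{-z^2/4t}$ at a fixed distance $z$: maximizing $V$ itself over $t$ gives $t = z^2/2$, while maximizing the slope $|\partial V/\partial z|$ (proportional to the current) over $t$ gives $t = z^2/6$; both scale as $z^2$, which is the law of squares.

```python
import numpy as np

# Numerical check of the arrival-time maximization, assuming
# V(z, t) ∝ t**(-1/2) * exp(-z**2 / (4*t)) at fixed distance z.
z = 3.0
t = np.linspace(1e-3, 20.0, 2_000_001)

V = np.exp(-z**2 / (4 * t)) / np.sqrt(t)              # the potential itself
slope = (z / (2 * t**1.5)) * np.exp(-z**2 / (4 * t))  # |dV/dz|, ∝ the current

t_V = t[np.argmax(V)]      # time at which the potential peaks
t_I = t[np.argmax(slope)]  # time at which the current peaks

print(t_V, z**2 / 2)  # both ≈ 4.5
print(t_I, z**2 / 6)  # both ≈ 1.5 (Thomson's value)
```

With $z=3$ the two numerically located maxima land on $z^2/2 = 4.5$ and $z^2/6 = 1.5$ to within the grid spacing.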
He states that, if the potential at $O$ is varied in a sinusoidal way (namely with $\sin(2nt)$), then this velocity is equal to $2\sqrt{\frac{n}{RC}}$. It is interesting that the velocity of transmission and the retardations of the signals bear no mathematical relation to one another, showing again that this problem is, perhaps counter-intuitively, one regarding the length of the wire.

Application to the cable

As stated before, the main obstacle to the efficiency of communication was the length of the wire used. A cursory glance at the equation above will show that using a wire with a lower capacitance and/or resistance would also help to reduce the time taken to send a message. Attempting to use this part of the relation is, however, futile for two reasons. Firstly, since the retardation is proportional to $RCx^2$, reducing the resistance or capacitance by some factor is only as effective as shortening the cable by the square root of that factor. Secondly, the makers of the wire were already using one of the least resistant metals commonly available in their cable: copper. Copper has the second-lowest resistivity of any conductive metal; the lowest, silver, was far too expensive to use in such quantities. Also, the capacitance of the wire can only be decreased in a meaningful way by decreasing the diameter of the wire, eventually rendering it too fragile to use practically. The makers of the cable came to these conclusions too. The wires of the 1858 and 1865 cables were made with the same materials, with very similar insulation; the only difference was a decrease of 304 nautical miles, or about 350 imperial miles, in the length of the cable. This difference in distance is thus clearly what caused the increased efficiency of the 1865 cables and is a physical proof of the validity of Thomson's work. The most striking thing about the paper is not the mathematics itself, ingenious though it is in many parts; it is the surprising and definitive results the mathematics leads to. 
Based on this, Thomson made the following assertion in his second letter to Stokes: "We may be sure beforehand that the American telegraph will succeed." From the point of view of an investor in the early 1860s, this was a very bold claim to make, given the severe limitations of the 1858 model. But, sure enough, by 1866, the mathematics of Thomson had resulted in the laying of two cables, each almost ten times more efficient than the previous. Copyright © 2017 Liam Morris. Last revised: 21 December, 2017.
\begin{document} \begin{frontmatter} \title{From Stein identities to moderate deviations} \runtitle{Moderate deviations} \begin{aug} \author[A]{\fnms{Louis H. Y.} \snm{Chen}\thanksref{t1}\ead[label=e1]{[email protected]}}, \author[B]{\fnms{Xiao} \snm{Fang}\thanksref{t1}\ead[label=e2]{[email protected]}} \and \author[C]{\fnms{Qi-Man} \snm{Shao}\corref{}\thanksref{t2}\ead[label=e3]{[email protected]}} \runauthor{L. H. Y. Chen, X. Fang and Q.-M. Shao} \affiliation{National University of Singapore, National University of Singapore and Hong Kong University of Science and Technology} \address[A]{L. H. Y. Chen\\ Department of Mathematics\\ National University of Singapore\\ 10 Lower Kent Ridge Road\\ Singapore 119076\\ Republic of Singapore\\ \printead{e1}} \address[B]{X. Fang\\ Department of Statistics\\ \quad and Applied Probability\\ National University of Singapore\\ 6 Science Drive 2\\ Singapore 117546 \\ Republic of Singapore\\ \printead{e2}} \address[C]{Q.-M. Shao\\ Department of Mathematics\\ Hong Kong University of Science and Technology\\ Clear Water Bay, Kowloon, Hong Kong\\ China\\ \printead{e3}\\ and\\ Department of Statistics\\ Chinese University of Hong Kong\\ Shatin, N. T., Hong Kong\\ China\\ (after September 1, 2012)} \end{aug} \thankstext{t1}{Supported in part by Grant C-389-000-010-101 from the National University of Singapore.} \thankstext{t2}{Supported in part by Hong Kong RGC CERG---602608 and 603710.} \received{\smonth{11} \syear{2009}} \revised{\smonth{2} \syear{2012}} \begin{abstract} Stein's method is applied to obtain a general Cram\'er-type moderate deviation result for dependent random variables whose dependence is defined in terms of a Stein identity. A corollary for zero-bias coupling is deduced. The result is also applied to a combinatorial central limit theorem, a general system of binary codes, the anti-voter model on a complete graph, and the Curie--Weiss model. 
A general moderate deviation result for independent random variables is also proved. \end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{60F10} \kwd[; secondary ]{60F05} \end{keyword} \begin{keyword} \kwd{Stein's method} \kwd{Stein identity} \kwd{moderate deviations} \kwd{Berry--Esseen bounds} \kwd{zero-bias coupling} \kwd{exchangeable pairs} \kwd{dependent random variables} \kwd{combinatorial central limit theorem} \kwd{general system of binary codes} \kwd{anti-voter model} \kwd{Curie--Weiss model} \end{keyword} \end{frontmatter} \section{Introduction}\label{sec1} Moderate deviations date back to \citet{Cr38} who obtained expansions for tail probabilities for sums of independent random variables about the normal distribution. For independent and identically distributed random variables $X_1, \ldots, X_n$ with $EX_i=0$ and $\operatorname{Var}(X_i)= 1$ such that $Ee^{t_0|X_1|}\le c < \infty$ for some $t_0>0$, it follows from Petrov [(\citeyear{Pet75}), Chapter 8, equation (2.41)] that \begin{equation} \label{00} \frac{P(W_n>x)}{1-\Phi(x)}=1+O(1) \bigl(1+x^3\bigr)/\sqrt{n} \end{equation} for $0\le x\le a_0 n^{1/6}$, where $W_n=(X_1+\cdots+X_n)/ \sqrt{n}$ and $\Phi$ is the standard normal distribution function, $a_0>0$ depends on $c$ and $t_0$ and $O(1)$ is bounded by a constant depending on $c$ and $t_0$. The range $0\le x\le a_0 n^{1/6}$ and the order of the error term $O(1) (1+x^3)/\sqrt{n}$ are optimal. The proof of (\ref{00}) depends on the conjugate method and a Berry--Esseen bound, while the classical proof of Berry--Esseen bound for independent random variables uses the Fourier transform. However, for dependent random \mbox{variables}, Stein's method performs much better than the method of Fourier transform. Stein's method was introduced by Charles Stein in 1972 and further developed by him in 1986. 
Extensive applications of Stein's method to obtain Berry--Esseen-type bounds for dependent random variables can be found in, for example, \citet{Dia77}, \citet{BRS89}, \citet{Bar90}, \citet{DR96}, \citet{GR97}, \citet{CS04}, \citet{Cha08} and \citet{NP09}. Recent applications to concentration of measures and large deviations can be found in, for example, \citet{Cha07} and \citet{ChD10}. Expositions of Stein's method and its applications in normal and other distributional approximations can be found in \citet{DH04} and \citet{CB05}. In this paper we apply Stein's method to obtain a Cram\'er-type moderate deviation result for dependent random variables whose dependence is defined in terms of an identity, called Stein identity, which plays a central role in Stein's method. A corollary for zero-bias coupling is deduced. The result is then applied to a combinatorial central limit theorem, the anti-voter model, a general system of binary codes and the Curie--Weiss model. The bounds obtained in these examples are as in (\ref{00}) and therefore may be optimal (see Remark~\ref{r71}). It is noted that \citet{Raic07} also used Stein's method to obtain moderate deviation results for dependent random variables. However, the dependence structure he considered is related to local dependence and is of a different nature from what we assume through the Stein identity. This paper is organized as follows. Section~\ref{sec2} is devoted to a description of Stein's method and to the construction of Stein identities using zero-bias coupling and exchangeable pairs. Section~\ref{sec3} presents a general Cram\'er-type moderate deviation result and a corollary for zero-bias coupling. The result is applied to the four examples mentioned above in Section \ref{sec4}. 
Although the general Cram\'er-type moderate deviation result cannot be applied directly to unbounded independent random variables, the proof of the general result can be adapted to prove (\ref{00}) under less stringent conditions, thereby extending a result of \citet{Lin61}. These are also presented in Section~\ref{sec4}. The rest of the paper is devoted to proofs. \section{Stein's method and Stein's identity}\label{sec2} Let $W$ be the random variable of interest and $Z$ be another random variable. In approximating ${\cal L}(W)$ by ${\cal L}(Z)$ using Stein's method, the difference between $Eh(W)$ and $Eh(Z)$ for a class of functions $h$ is expressed as \begin{equation}\label{01} Eh(W) - Eh(Z) = E\bigl\{Lf_h(W)\bigr\}, \end{equation} where $ L$ is a linear operator and $f_h$ a bounded solution of the equation $Lf = h - Eh(Z)$. It is known that for $N(0,1)$, $Lf(w) = f'(w) - wf(w)$ [see \citet{Ste72}] and for Poisson($\lambda$), $Lf(w) = \lambda f(w + 1) - w f(w)$; see \citet{Chen75}. However, $L$ is not unique. For example, for normal approximation $L$ can also be the generator of the Ornstein--Uhlenbeck process, and for Poisson approximation $L$, the generator of an immigration-death process. The solution $f_h$ will then be expressed in terms of a Markov process. This generator approach to Stein's method is due to Barbour (\citeyear{Bar88}, \citeyear{Bar90}). By (\ref{01}), bounding $Eh(W) - Eh(Z)$ is equivalent to bounding $E\{Lf_h(W)\}$. To bound the latter one finds another operator $\tilde{L}$ such that $E\{\tilde{L} f(W)\} = 0$, for a class of functions $f$ including $f_h$, and write $\tilde{L}=L-R$ for a suitable operator $R$. The error term $E\{ Lf_h(W)\}$ is then expressed as $E \{R f_h(W)\}$. The equation \begin{equation}\label{02} E\bigl\{ \tilde{L} f(W)\bigr\} = 0 \end{equation} for a class of functions $f$ including $f_h$, is called a Stein identity for ${\cal L}(W)$. 
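As an editorial aside (not part of the original paper): the two characterizing operators quoted above can be verified numerically for a concrete test function. With $f=\sin$, the normal operator satisfies $E\{f'(Z)-Zf(Z)\}=0$ for $Z\sim N(0,1)$, and the Poisson operator satisfies $E\{\lambda f(W+1)-Wf(W)\}=0$ for $W\sim\mathrm{Poisson}(\lambda)$; the sketch below checks both to machine precision.

```python
import math
import numpy as np

# Numerical illustration of the two Stein characterizing operators,
# using the test function f = sin.

# Normal case: E{f'(Z) - Z f(Z)} for Z ~ N(0,1), via Gauss-Hermite
# quadrature: E g(Z) = pi**(-1/2) * sum_i w_i * g(sqrt(2) * x_i).
x, w = np.polynomial.hermite.hermgauss(80)
z = math.sqrt(2.0) * x
normal_term = float((w * (np.cos(z) - z * np.sin(z))).sum() / math.sqrt(math.pi))

# Poisson case: E{lam f(W+1) - W f(W)} for W ~ Poisson(lam), by direct
# summation, truncated where the probabilities are negligible.
lam = 2.5
poisson_term, p = 0.0, math.exp(-lam)   # p = P(W = 0)
for k in range(200):
    poisson_term += p * (lam * math.sin(k + 1) - k * math.sin(k))
    p *= lam / (k + 1)                  # p becomes P(W = k+1)

print(normal_term, poisson_term)  # both ≈ 0
```

The Poisson identity telescopes exactly since $p_k\lambda = (k+1)p_{k+1}$, so the computed value reflects only floating-point roundoff.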
For normal approximation, there are four methods for constructing a Stein identity: the direct method [\citet{Ste72}], zero-bias coupling [\citet{GR97} and \citet{Gol05}], exchangeable pairs [\citet{Ste86}] and Stein coupling [\citet{CR09}]. We discuss below the construction of Stein identities using zero-bias coupling and exchangeable pairs. As proved in \citet{GR97}, for $W$ with $EW=0$ and $\operatorname{Var}(W)=1$, there always exists $W^*$ such that \begin{equation}\label{04} E\bigl(Wf(W)\bigr) = Ef'\bigl(W^*\bigr) \end{equation} for all bounded absolutely continuous $f$ with bounded derivative $f'$. The distribution of $W^*$ is called $W$-zero-biased. If $W$ and $W^*$ are defined on the same probability space (zero-bias coupling), we may write $\Delta= W^* - W$. Then by (\ref{04}), we obtain the Stein identity \begin{equation}\label{05} E\bigl(Wf(W)\bigr) = Ef'(W + \Delta) = E \int_{-\infty}^\infty f'(W+t) \,d \mu(t|W), \end{equation} where $\mu(\cdot|W)$ is the conditional distribution of $\Delta$ given $W$. Here $\tilde{L}(w) = \int_{-\infty}^\infty f'(w+ t) \,d \mu(t |W=w) - w f(w)$.\vspace*{1pt} The method of exchangeable pairs [\citet{Ste86}] consists of constructing $W'$ such that $(W, W')$ is exchangeable. Then for any anti-symmetric function $F(\cdot, \cdot)$, that is, $F(w, w') = - F(w', w)$, \[ EF\bigl(W, W'\bigr) = 0, \] if the expectation exists. Suppose that there exist a constant $\lambda $ $(0 < \lambda< 1)$ and a random variable $R$ such that \begin{equation}\label{ex-1} E\bigl(W- W' | W\bigr) = \lambda\bigl( W - E(R|W)\bigr). \end{equation} Then for all $f$, \[ E\bigl\{ \bigl(W- W'\bigr) \bigl(f(W) + f\bigl(W' \bigr)\bigr)\bigr\} =0, \] provided the expectation exists. 
This gives the Stein identity \begin{eqnarray}\label{ex-2} E\bigl(Wf(W)\bigr) & =& -\frac{ 1 }{2 \lambda} E\bigl\{ \bigl(W- W' \bigr) \bigl(f\bigl(W'\bigr) - f(W)\bigr)\bigr\} + E\bigl(Rf(W)\bigr) \nonumber\\[-8pt]\\[-8pt] & =& E \int_{-\infty}^\infty f'(W+t) \hat{K}(t) \,dt + E\bigl(R f(W)\bigr)\nonumber \end{eqnarray} for all absolutely continuous functions $f$ for which expectations exist, where $\hat{K}(t) = { 1 \over2 \lambda} \Delta(I(0 \leq t \leq\Delta) - I(\Delta\leq t < 0))$ and $\Delta= W'-W$. In this case, $\tilde{L}(w) = \int_{-\infty}^\infty f'(w+t) E(\hat{K}(t) |W=w) \,dt + E(R|W=w) f(w) - w f(w)$. Both Stein identities (\ref{05}) and (\ref{ex-2}) are special cases of \begin{equation}\label{c0} E\bigl(Wf(W)\bigr) = E \int_{-\infty}^\infty f'(W+t) \,d \hat{\mu} (t) + E\bigl(Rf(W)\bigr), \end{equation} where $\hat{\mu}$ is a random measure. We will prove a moderate deviation result by assuming that $W$ satisfies the Stein identity (\ref{c0}). \section{A Cram\'er-type moderate deviation theorem}\label{sec3} Let $W$ be a random variable of interest. Assume that there exist a deterministic positive constant $\delta$, a random positive measure $\hat{\mu}$ with support $[-\delta, \delta]$ and a random variable $R$ such that \begin{equation} \label{c1} E\bigl(Wf(W)\bigr) = E \int_{|t| \leq\delta} f'(W+t) \,d \hat{\mu} (t) + E\bigl(Rf(W)\bigr) \end{equation} for every absolutely continuous function $f$ for which the expectation of either side exists. Let \begin{equation}\label{1-1} D= \int_{|t|\leq\delta} d \hat{\mu} (t). 
\end{equation} \begin{theorem}\label{t1} Suppose that there exist constants $\delta_1, \delta_2$ and $\theta\geq1$ such that \begin{eqnarray} \label{c2} \bigl|E(D|W) -1\bigr| &\leq&\delta_1 \bigl(1+ |W|\bigr), \\ \label{c3} \bigl|E(R|W)\bigr| &\leq&\delta_2 \bigl( 1+ |W|\bigr) \quad\mbox{or}\nonumber\\[-8pt]\\[-8pt] \bigl|E(R|W)\bigr| &\leq& \delta_2 \bigl( 1+ W^2\bigr) \quad\mbox{and}\quad \delta_2 |W|\le\alpha<1\nonumber \end{eqnarray} and \begin{equation}\label{d2-0} E(D |W) \leq\theta. \end{equation} Then \begin{equation}\label{t1a} \frac{ P(W > x) }{1-\Phi(x)} = 1+ O_\alpha(1) \theta^3 \bigl(1+x^3\bigr) (\delta+\delta_1+\delta_2) \end{equation} for $0 \leq x \leq\theta^{-1} \min( \delta^{-1/3}, \delta_1^{-1/3}, \delta_2^{-1/3})$, where $O_\alpha(1)$ denotes a quantity whose absolute value is bounded by a universal constant which depends on $\alpha$ only under the second alternative of (\ref{c3}). \end{theorem} \begin{remark} Theorem~\ref{t1} is intended for bounded random variables but with very general dependence assumptions. For this reason, the support of the random measure $\hat{\mu}$ is assumed to be within $[-\delta, \delta]$ where $\delta$ is typically of the order of $1/\sqrt{n}$ due to standardization. In order for the normal approximation to work, $E(D|W)$ should be close to $1$ and $E(R|W)$ small. This is reflected in $\delta_1$ and $\delta_2$ which are assumed to be small. \end{remark} For zero-bias coupling, $D=1$ and $R=0$, so conditions (\ref{c2}), (\ref{c3}) and (\ref{d2-0}) are satisfied with $\delta_1 = \delta_2 =0$ and $\theta=1$. Therefore, we have: \begin{coro} \label{c21} Let $W$ and $W^*$ be defined on the same probability space satisfying (\ref{04}). Assume that $EW=0$, $EW^2=1$ and $|W- W^*| \leq\delta$ for some constant $\delta $. Then \[ \frac{ P(W \geq x) }{1-\Phi(x)} = 1+ O(1) \bigl(1+x^3\bigr) \delta \] for $0 \leq x \leq\delta^{-1/3}$. 
\end{coro} \begin{remark} \label{r2} For an exchangeable pair $(W, W')$ satisfying (\ref{ex-1}) and $|\Delta| \leq\delta$, (\ref{c1}) is satisfied with $D = \Delta^2 / (2 \lambda)$. \end{remark} \begin{remark} \label{r5} Although one cannot apply Theorem~\ref{t1} directly to unbounded random variables, one can adapt the proof of Theorem~\ref{t1} to give a proof of (\ref{00}) for independent random variables assuming the existence of the moment generating functions of $|X_i|^{1/2}$ thereby extending a result of \citet{Lin61}. This result is given in Proposition~\ref{t72}. The proof also suggests the possibility of extending Theorem~\ref{t1} to the case where the support of $\hat{\mu}$ may not be bounded. \end{remark} \section{Applications}\label{sec4} In this section we apply Theorem~\ref{t1} to four cases of dependent random variables, namely, a combinatorial central limit theorem, the anti-voter model on a complete graph, a general system of binary codes, and the Curie--Weiss model. The proofs of the results for the third and the fourth example will be given in the last section. At the end of this section, we will present a moderate deviation result for sums of independent random variables and the proof will also be given in the last section. \subsection{Combinatorial central limit theorem}\label{sec4.1} Let $\{a_{ij}\}_{i,j=1}^n$ be an array of real numbers satisfying $\sum_{j=1}^n a_{ij}=0$ for all $i$ and $\sum_{i=1}^n a_{ij}=0$ for all $j$. Set $c_0=\max_{i,j} |a_{ij}|$ and $W=\sum_{i=1}^n a_{i\pi(i)}/\sigma$, where $\pi$ is a uniform random permutation of $\{1,2,\ldots,n\}$ and $\sigma^2=E (\sum_{i=1}^n a_{i\pi(i)})^2$. In \citet{Gol05} $W$ is coupled with the zero-biased $W^*$ in such a way that $|\Delta| = |W^*-W|\leq8c_0/\sigma$. Therefore, by Corollary~\ref{c21} with $\delta= 8c_0/\sigma$, we have \begin{equation}\label{010} \frac{ P(W \geq x) }{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr) c_0 / \sigma \end{equation} for $ 0 \leq x \leq(\sigma/c_0)^{1/3}$. 
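As an editorial aside (not part of the original paper), the tail-ratio behaviour in (\ref{010}) can be illustrated by Monte Carlo. The sketch below uses an arbitrary doubly centered array $a_{ij}=b_ib_j$ with $b_i = i-(n-1)/2$, and the standard variance formula $\sigma^2 = \sum_{i,j}a_{ij}^2/(n-1)$ for doubly centered arrays; the sample size and the evaluation point $x=1.5$ are likewise arbitrary choices.

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo illustration of the combinatorial CLT tail ratio, for the
# doubly centered array a_ij = b_i * b_j with b_i = i - (n-1)/2.
rng = np.random.default_rng(0)
n, m = 50, 100_000

b = np.arange(n) - (n - 1) / 2.0   # sum(b) = 0, so rows and columns of a sum to 0
a = np.outer(b, b)

# For a doubly centered array, Var(sum_i a_{i,pi(i)}) = sum(a**2) / (n-1).
sigma = np.sqrt((a**2).sum() / (n - 1))

# m uniform random permutations, one per row.
perms = rng.permuted(np.tile(np.arange(n), (m, 1)), axis=1)
W = a[np.arange(n), perms].sum(axis=1) / sigma

x0 = 1.5
tail = 1 - 0.5 * (1 + erf(x0 / sqrt(2)))  # 1 - Phi(x0)
ratio = (W > x0).mean() / tail
print(ratio)  # should be close to 1
```

The empirical standardized sum has mean $\approx 0$ and variance $\approx 1$ by construction, and the tail ratio at a moderate $x$ is close to $1$, consistent with (\ref{010}).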
\subsection{Anti-voter model on a complete graph}\label{sec4.2} Consider the anti-voter model on a complete graph with $n$ vertices, $1, \ldots, n$ and $(n-1)n/2$ edges. Let $X_i$ be a random variable taking value $1$ or $-1$ at the vertex $i$, $i =1, \ldots, n$, and let $X=(X_1,\ldots, X_n)$. The anti-voter model in discrete time is described as the following Markov chain: in each step, uniformly pick a vertex $I$ and an edge connecting it to $J$, and then change $X_I$ to $-X_J$. Let $U=\sum_{i=1}^n X_i$ and $W=U/\sigma$, where $\sigma^2=\operatorname{Var}(U)$. Let $W'= (U - X_I -X_J) /\sigma$, where the vertex $I$ is uniformly distributed on $\{1, 2,\ldots, n\}$, independent of the other random variables, and $J$ is a uniformly chosen neighbor of $I$. Consider the case where the distribution of $X$ is the stationary distribution. Then as shown in \citet{RR97}, $(W, W')$ is an exchangeable pair and \begin{equation}\label{anti-1} E\bigl(W-W'|W\bigr)=\frac{2}{n}W. \end{equation} According to (\ref{ex-2}), (\ref{c1}) is satisfied with $\delta= 2 /\sigma$ and $R=0$. To check conditions (\ref{c2}) and (\ref{d2-0}), let $T$ denote the number of $1$'s among $X_1,\ldots,X_n$, $a$ be the number of edges connecting two $1$'s, $b$ be the number of edges connecting two $-1$'s and $c$ be the number of edges connecting $1$ and $-1$. Since it is a complete graph, $a=\frac{T(T-1)}{2}$, $b=\frac{(n-T)(n-T-1)}{2}$. 
Therefore [see, e.g., \citet{RR97}] \begin{eqnarray} \label{anti-2} E\bigl[\bigl(W-W'\bigr)^2|X\bigr] & =& \frac{1}{\sigma^2}E\bigl[\bigl(U'-U\bigr)^2|X\bigr]= \frac{4}{\sigma^2}\frac{2a+2b}{n(n-1)} \nonumber\\[-8pt]\\[-8pt] &=&\frac{1}{\sigma^2}\frac{2U^2+2n^2-4n}{n(n-1)}=\frac{2\sigma ^2W^2+2n^2-4n}{\sigma^2n(n-1)},\nonumber \\ \label{anti-3} E(D|W)-1 & = & \frac{n}{4}E\bigl(\bigl(W'-W \bigr)^2|W\bigr)-1 \nonumber\\[-8pt]\\[-8pt] & = & \frac{W^2}{2(n-1)}-\frac{2\sigma^2(n-1)-(n^2-2n)}{2\sigma ^2(n-1)}.\nonumber \end{eqnarray} Noting that $E(E(D|W)-1)=0$ and $EW^2=1$, we have $\sigma^2=\frac{n^2-2n}{2n-3}$. Hence \begin{equation}\label{anti-4} E(D|W)-1=\frac{W^2}{2(n-1)}-\frac{1}{2(n-1)}, \end{equation} which means that (\ref{c2}) is satisfied with $\delta_1=O(n^{-1/2})$. Thus, we have the following moderate deviation result. \begin{prop}\label{p22} We have \[ \frac{ P(W\geq x) }{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr)/\sqrt{n} \] for $0 \leq x \leq n^{1/6}$. \end{prop} \subsection{A general system of binary codes}\label{sec4.3} In \citet{CHSZ11}, a general system of binary codes is defined as follows. Suppose each nonnegative integer $x$ is coded by a binary string consisting of $0$'s and $1$'s. Let $\tilde{S}(x)$ denote the number of $1$'s in the resulting coding string of $x$, and let \begin{equation} \tilde{\mathbf S}=\bigl(\tilde{S}(0), \tilde{S}(1), \ldots\bigr). \end{equation} For each nonnegative integer $n$, define $\tilde{S}_n=\tilde{S}(X)$, where $X$ is a random integer uniformly distributed over the set $\{0,1,\ldots,n\}$. The general system of binary codes introduced by \citet{CHSZ11} is one in which \begin{equation} \label{p21cond} \tilde{S}_{2m-1}=\tilde{S}_{m-1}+\mathcal{I} \qquad\mbox{in distribution}\qquad \mbox{for all } m\ge1, \end{equation} where $\mathcal{I}$ is an independent $\operatorname{Bernoulli}(1/2)$ random variable. \citet{CHSZ11} proved the asymptotic normality of $\tilde{S}_n$. 
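For the plain binary expansion (treated as an example below), condition (\ref{p21cond}) can be verified exactly: writing a uniform $X$ on $\{0,\ldots,2m-1\}$ as $2Y+\varepsilon$ with $Y$ uniform on $\{0,\ldots,m-1\}$ and $\varepsilon$ an independent fair coin gives the digit-sum recursion directly. A small Python enumeration confirms the distributional identity (a sanity check only; the helper name is ours):

```python
# Exact check (illustrative only) that the plain binary expansion satisfies
# the recursion S_{2m-1} = S_{m-1} + Bernoulli(1/2) in distribution:
# compare digit-sum counts over {0,...,2m-1} with the counts over
# {0,...,m-1} convolved with a fair coin.
from collections import Counter

def digit_sum_counts(n):
    """Counter mapping s to #{0 <= x <= n : x has s ones in binary}."""
    return Counter(bin(x).count("1") for x in range(n + 1))

for m in range(1, 200):
    left = digit_sum_counts(2 * m - 1)
    small = digit_sum_counts(m - 1)
    conv = Counter()
    for s, c in small.items():
        conv[s] += c       # coin shows 0
        conv[s + 1] += c   # coin shows 1
    assert left == conv
```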
Here, we apply Theorem~\ref{t1} to obtain the following Cram\'er moderate deviation result. For $n\ge1$, let the integer $k$ be such that $2^{k-1}-1<n\le2^k-1$, and let $\tilde{W}_n=(\tilde{S}_n-k/2)/\sqrt{k/4}$. \begin{prop} \label{p21} Under the assumption (\ref{p21cond}), we have \begin{equation}\label{p21a} \frac{ P( \tilde{W}_n \geq x) }{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr) \frac{1}{\sqrt{k}} \end{equation} for $0 \leq x \leq k^{1/6}$. \end{prop} As an example of this system of binary codes, we consider the binary expansion of a random integer $X$ uniformly distributed over $\{0,1,\ldots,n\}$. For $2^{k-1}-1<n\le 2^k-1$, write $X$ as \[ X= \sum_{i=1}^k X_i 2^{k-i}, \] and let $S_n=X_1 + \cdots + X_k$. Set $W_n=(S_n-k/2)/\sqrt{k/4}$. It is easy to verify that $S_n$ satisfies (\ref{p21cond}). A Berry--Esseen bound for $W_n$ was first obtained by \citet{Dia77}. Proposition~\ref{p21} provides a Cram\'er moderate deviation result for $W_n$. Other examples of this system of binary codes include the binary reflected Gray code and a coding system using translation and complementation. Detailed descriptions of these codes are given in \citet{CHSZ11}. \subsection{Curie--Weiss model}\label{sec4.4} Consider the Curie--Weiss model for $n$ spins $\Sigma=(\sigma_1,\sigma_2,\ldots,\sigma_n)\in\{-1,1\}^n$. The joint distribution of $\Sigma$ is given by \[ Z_{\beta,h}^{-1} \exp\Biggl(\frac{\beta}{n} \sum_{1\le i<j \le n} \sigma_i \sigma_j +\beta h \sum_{i=1}^n \sigma_i \Biggr), \] where $Z_{\beta,h}$ is the normalizing constant, and $\beta>0, h\in\mathbb{R}$ are called the inverse temperature and the external field, respectively. We are interested in the total magnetization $S=\sum_{i=1}^n \sigma_i$. We divide the region $\beta>0, h\in\mathbb{R}$ into three parts, and for each part, we list the concentration property and the limiting distribution of $S$ under proper standardization.
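The case analysis is organized around the mean-field equation $m=\tanh(\beta(m+h))$ introduced next; its stable solutions can be located numerically by fixed-point iteration. A minimal Python sketch (illustrative only; the helper name is ours, and the starting point selects the branch):

```python
# Fixed-point iteration for the mean-field equation m = tanh(beta*(m + h))
# (illustrative only).  The iteration converges to the stable solution on
# the branch selected by the starting point.
import math

def mean_field_solution(beta, h, start=1.0, iters=10_000):
    m = start
    for _ in range(iters):
        m = math.tanh(beta * (m + h))
    return m

# Case 1 (high temperature, beta < 1, h = 0): unique solution m0 = 0.
assert abs(mean_field_solution(0.5, 0.0)) < 1e-8
# Case 2 (beta > 1, h = 0): two nonzero solutions m1 = -m2 < 0 < m2.
m2 = mean_field_solution(2.0, 0.0)
assert m2 > 0.9 and abs(m2 - math.tanh(2.0 * m2)) < 1e-9
# By symmetry of tanh, starting below zero finds m1 = -m2.
assert abs(mean_field_solution(2.0, 0.0, start=-1.0) + m2) < 1e-8
```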
Consider the solution(s) to the equation \begin{equation} \label{CW1} m=\tanh\bigl(\beta(m+h)\bigr). \end{equation} \begin{longlist}[\textit{Case} 1.] \item[\textit{Case} 1.] $0<\beta<1, h\in\mathbb{R}$ or $\beta\ge1, h\ne0$. There is a unique solution $m_0$ to (\ref{CW1}) such that $m_0 h \ge0 $. In this case, $S/n$ is concentrated around $m_0$ and has a Gaussian limit under proper standardization. \item[\textit{Case} 2.] $\beta>1, h=0$. There are two nonzero solutions to (\ref{CW1}), $m_1<0<m_2$, where $m_1 = - m_2$. Conditioned on $S<0$ ($S>0$, resp.), $S/n$ is concentrated around $m_1$ ($m_2$, resp.) and has a Gaussian limit under proper standardization. \item[\textit{Case} 3.] $\beta=1, h=0$. $S/n$ is concentrated around $0$, but the limit distribution is not Gaussian. \end{longlist} We refer to \citet{E85} for the concentration of measure results, and to Ellis and Newman (\citeyear{EN78}, \citeyear{EN782}) for the results on the limiting distributions. See also \citet{ChS11} for a Berry--Esseen-type bound when the limiting distribution is not Gaussian. Here we focus on the Gaussian case and prove the following two Cram\'er moderate deviation results for cases 1 and 2. \begin{prop} \label{p4} In case 1, define \begin{equation} W=\frac{S-n m_0}{\sigma}, \end{equation} where \begin{equation} \sigma^2=\frac{ n( 1-m_0^2)}{1-(1-m_0^2)\beta}.\vadjust{\goodbreak} \end{equation} Then we have \begin{equation}\label{p4a} \frac{P( W \geq x) }{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr)/ \sqrt{n} \end{equation} for $ 0 \leq x \leq n^{1/6}$. \end{prop} \begin{prop} \label{p5} In case 2, define \begin{equation} W_1=\frac{S-n m_{1}}{\sigma_1},\qquad W_2=\frac{S-n m_{2}}{\sigma_2}, \end{equation} where \begin{equation} \sigma_1^2=\frac{n( 1-m_1^2) }{1-(1-m_{1}^2)\beta},\qquad \sigma_2^2= \frac{n( 1-m_2^2) }{1-(1-m_{2}^2)\beta}.
\end{equation} Then we have \begin{equation}\label{p5a} \frac{P( W_1 \geq x|S<0)}{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr)/ \sqrt{n} \end{equation} and \begin{equation}\label{p5b} \frac{P( W_2 \geq x|S>0)}{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr)/ \sqrt{n} \end{equation} for $ 0 \leq x \leq n^{1/6}$. \end{prop} \subsection{Independent random variables}\label{sec4.5} Moderate deviations for independent random variables have been extensively studied in the literature [see, e.g., \citet{Pet75}, Chapter 8] based on the conjugate method. Here, we will adapt the proof of Theorem~\ref{t1} to prove the following moderate deviation result, which is a variant of those in the literature [see again \citet{Pet75}, Chapter~8]. \begin{prop}\label{t71} Let $\xi_i, 1 \leq i \leq n$ be independent random variables with $E\xi_i=0$ and $Ee^{t_n |\xi_i|} < \infty$ for some $t_n$ and for each $1 \leq i \leq n$. Assume that \begin{equation}\label{t71-c} \sum_{i=1}^n E\xi_i^2 =1. \end{equation} Then \begin{equation}\label{t71a} \frac{P( W \geq x)}{1 - \Phi(x)} = 1 + O(1) \bigl(1+ x^3\bigr) \gamma e^{4 x^3 \gamma} \end{equation} for $ 0 \leq x \leq t_n$, where $W=\sum_{i=1}^n \xi_i$ and $\gamma= \sum_{i=1}^n E|\xi_i|^3 e^{x |\xi_i|} $. \end{prop} We deduce (\ref{00}) under less stringent conditions from Proposition \ref{t71} and extend a result of \citet{Lin61} to independent but not necessarily identically distributed random variables.\vadjust{\goodbreak} \begin{prop} \label{t72} Let $X_i, 1 \leq i \leq n $ be a sequence of independent random variables with $EX_i=0$. Put $S_n = \sum_{i=1}^n X_i$ and $B_n^2 = \sum_{i=1}^n EX_i^2$. Assume that there exist positive constants $c_1, c_2$ and $t_0 $ such that \begin{equation}\label{t72-c} B_n^2 \geq c_1^2 n,\qquad Ee^{t_0 \sqrt{|X_i|}} \leq c_2 \qquad\mbox{for } 1 \leq i \leq n.
\end{equation} Then \begin{equation}\label{t72a} \frac{ P( S_n / B_n \geq x)}{1- \Phi(x)} = 1 + O(1) \bigl( 1+x^3\bigr) /\sqrt{n} \end{equation} for $ 0 \leq x \leq(c_1 t_0^2 )^{1/3} n^{1/6}$, where $O (1)$ is bounded by a constant depending on $c_2$ and $c_1 t_0^2 $. In particular, we have \begin{equation} \label{t72b} \frac{P( S_n / B_n \geq x)}{1- \Phi(x)} \to1 \end{equation} uniformly in $ 0 \leq x \leq o(n^{1/6})$. \end{prop} \begin{pf} The main idea is first truncating $X_i$ and then applying Proposition~\ref{t71} to the truncated sequence. Let \[ \tau_n = \bigl( c_1^2 t_0 n \bigr)^{1/3}2^{-2/3},\qquad \bar{X}_i = X_i {\mathbf1}\bigl( |X_i| \leq\tau_n^2\bigr),\qquad \bar{S}_n = \sum_{i=1}^n \bar{X}_i. \] Observe that \begin{eqnarray*} &&\bigl|P( S_n / B_n \geq x) - P( \bar{S}_n / B_n \geq x)\bigr| \\ &&\qquad\leq\sum_{i=1}^n P \bigl(|X_i| \geq\tau_n^2 \bigr) \\ &&\qquad\leq\sum_{i=1}^n e^{-t_0 \tau_n} E e^{t_0 \sqrt{|X_i|}} \leq c_2 n e^{-t_0 \tau_n}\\ &&\qquad= O(1) \bigl(1- \Phi(x)\bigr) \bigl( 1+ x^3\bigr) / \sqrt{n} \end{eqnarray*} for $ 0 \leq x \leq(c_1 t_0^2)^{1/3} n^{1/6} $; here we used the fact that \[ t_0 \tau_n =\bigl(c_1 t_0^2\bigr)^{2/3} n^{1/3} 2^{-2/3}. \] Now let $\xi_i = ( \bar{X}_i - E\bar{X}_i) / \bar{B}_n $, where $\bar{B}_n^2 = \sum_{i=1}^n \operatorname{Var}(\bar{X}_i)$. It is easy to see that \begin{eqnarray} \label{t72-1} \sum_{i=1}^n |E\bar{X}_i | & \le& \sum_{i=1}^n E|X_i| {\mathbf1}\bigl( |X_i| \geq\tau_n^2\bigr) \nonumber \\ & \leq& \sum_{i=1}^n \sup_{s \geq\tau_n} \bigl(s^2 e^{-t_0 s}\bigr) Ee^{t_0 \sqrt{|X_i|}} \\ & \leq& c_2 n c_1 \bigl(c_1 t_0^2\bigr)^{-1} \sup_{s \geq t_0 \tau_n} \bigl(s^2 e^{-s}\bigr) = c_1 o\bigl( n^{-2}\bigr)\nonumber \end{eqnarray} and similarly, $\bar{B}_n = B_n( 1+ o(n^{-2}))$. 
Thus, for $ 0 \leq x \leq(c_1 t_0^2)^{1/3} n^{1/6}$, \begin{eqnarray*} x |\xi_i| &\leq&\frac{2^{1/3} x}{c_1 n^{1/2} } |X_i|{\mathbf1}\bigl( |X_i|\leq\tau_n^2 \bigr) + o(1) \leq \frac{ 2^{1/3} x \tau_n }{ c_1 n^{1/2}} \sqrt{|X_i|} + o(1) \\ &\leq&\frac{t_0}{2^{1/3}} \sqrt{|X_i|} + o(1) \end{eqnarray*} and hence $\gamma= O(n^{-1/2})$. Applying Proposition~\ref{t71} to $\{\xi_i, 1 \leq i \leq n \}$ gives (\ref{t72a}). \end{pf} \begin{remark}\label{r71} As stated previously for (\ref{00}) in the \hyperref[sec1]{Introduction}, the range $0 \le x \le(c_1 t_0^2)^{1/3} n^{1/6} $ and the order of the error term $O(1)(1 + x^3)/\sqrt{n}$ in Proposition~\ref{t72} are optimal. By comparison with (\ref{00}), the results in the four examples discussed above may be optimal. \end{remark} \section{Preliminary lemmas}\label{sec5} To prove Theorem~\ref{t1}, we first need to develop two preliminary lemmas. Our first lemma gives a bound for the moment generating function of $W$. \begin{lemma} \label{l21} Let $W$ be a random variable with $E|W| \le C$. Assume that there exist $\delta>0$, $\delta_1 \geq0, 0 \leq \delta_2 \leq1/4$ and $\theta\geq1$ such that (\ref{c1}) and (\ref{c2})--(\ref{d2-0}) are satisfied. Then for all $0<t \le1/(2\delta)$ satisfying \begin{equation} \label{l21b} t\delta_1 +C_\alpha t\theta \delta_2 \leq1/2, \end{equation} where \begin{equation} \label{Calpha} C_\alpha= \cases{ 12, &\quad under the first alternative of (\ref{c3}), \vspace*{2pt}\cr \dfrac{2(3+\alpha)}{1-\alpha}, &\quad under the second alternative of (\ref{c3}),} \end{equation} we have \begin{equation}\label{l21a} Ee^{tW} \leq\exp\bigl(t^2/2 + c_0(t)\bigr), \end{equation} where \begin{equation}\label{l21c} c_0(t) = c_1(C, C_\alpha) \theta\bigl\{ \delta_2 t + \delta_1 t^2 + (\delta+ \delta_1 + \delta_2) t^3 \bigr\}, \end{equation} and $c_1(C, C_\alpha)$ is a constant depending only on $C$ and $C_\alpha$.
\end{lemma} \begin{pf} Fix $a>0$, $t \in(0, 1/(2\delta)]$ and $s \in(0,t]$, and let $f(w)=e^{s(w\wedge a)}$. Letting $h(s)=Ee^{s(W\wedge a)}$, firstly we prove that $h'(s)$ can be bounded by $s h(s)$ and $EW^2f(W)$. By (\ref{c1}), \begin{eqnarray*} h'(s) & = & E(W\wedge a)e^{s(W\wedge a)} \le E\bigl(Wf(W)\bigr) \\ & = & E \int f'(W+t) \,d \hat{\mu} (t) + E\bigl(Rf(W)\bigr) \\ & =& s E \int e^{s(W+t)} I(W+t \le a) \,d \hat{\mu} (t) + E \bigl(e^{s(W\wedge a)} E(R|W)\bigr) \\[-2pt] & \le& s E \int e^{s[(W+t)\wedge a]} \,d \hat{\mu} (t) + E \bigl(e^{s(W\wedge a)} E(R|W)\bigr) \\[-2pt] & \le& s E \int e^{s(W\wedge a + \delta)} \,d \hat{\mu} (t) + E \bigl (e^{s(W\wedge a)} E(R|W)\bigr) \\[-2pt] & =& s E\int e^{s(W\wedge a)} \,d \hat{\mu} (t) + s E \int e^{s(W\wedge a)} \bigl( e^{s \delta} - 1\bigr) \,d \hat{\mu} (t)\\[-2pt] &&{} + E \bigl(e^{s(W\wedge a)} E(R|W) \bigr) \\[-2pt] & \leq& s Ee^{s(W\wedge a)} D + s E e^{s (W\wedge a)} \bigl|e^{s \delta} - 1\bigr| D + 2 \delta_2 E \bigl(\bigl(1+W^2\bigr)e^{s(W\wedge a)} \bigr), \end{eqnarray*} where we have applied (\ref{1-1}) and (\ref{c3}) to obtain the last inequality. Now, applying the simple inequality \[ \bigl|e^x - 1\bigr| \le2 |x| \qquad\mbox{for $|x| \le1$}, \] and then (\ref{c2}), we find that \begin{eqnarray*} E\bigl(Wf(W)\bigr) &\le& s Ee^{s(W\wedge a)} D + s E e^{s (W\wedge a)} 2 s \delta D+ 2\delta_2 E \bigl(\bigl(1+ W^2\bigr) e^{s(W\wedge a)} \bigr) \\[-2pt] &\leq& s Ee^{s(W\wedge a)} E(D|W) + 2 s^2 \theta\delta Ee^{s(W\wedge a)} + 2\delta_2 E \bigl(\bigl(1+W^2\bigr) e^{s(W\wedge a)} \bigr) \\[-2pt] & =& s Ee^{s(W\wedge a)} + s E e^{s(W\wedge a)} \bigl[E(D|W) -1\bigr] \\[-2pt] &&{} + 2 s^2 \theta\delta Ee^{s(W\wedge a)} + 2\delta_2 E \bigl(\bigl(1+W^2\bigr) e^{s(W\wedge a)} \bigr) \\[-2pt] & \leq& s Ee^{s(W\wedge a)} + s \delta_1 Ee^{s(W\wedge a)} \bigl(1+ |W|\bigr) + 2 s^2 \theta\delta Ee^{s(W\wedge a)} \\[-2pt] &&{} +2 \delta_2 E \bigl(\bigl(1+W^2\bigr) e^{s(W\wedge a)} \bigr). 
\end{eqnarray*} Note that \begin{eqnarray} E|W|e^{s(W\wedge a)}&=&EW e^{s(W\wedge a)}+2 EW^- e^{s(W\wedge a)} \nonumber\\[-9pt]\\[-9pt] &\le& E\bigl(Wf(W)\bigr)+ 2E|W|\le2C+E\bigl(Wf(W)\bigr).\nonumber \end{eqnarray} Collecting terms, we obtain \begin{eqnarray} \label{l21-1} && h'(s)\le E\bigl(Wf(W)\bigr) \nonumber\\[-2pt] &&\qquad\leq\bigl\{ \bigl( s (1+ \delta_1 + 2 t \theta\delta)+ 2 \delta_2\bigr) h(s) + 2 \delta_2 EW^2f(W)+2Cs \delta_1 \bigr\}\\[-2pt] &&\qquad\quad{} /(1-s\delta_1). \nonumber \end{eqnarray} Secondly, we show that $EW^2f(W)$ can be bounded by a function of $h(s)$ and $h'(s)$. Letting $g(w)=we^{s(w\wedge a)}$, and then arguing as for (\ref{l21-1}), \begin{eqnarray} \label{l21-02} EW^2 f(W) &=& EW g(W) \nonumber\\ & =& E\int\bigl( e^{s[(W+t)\wedge a]} + s (W+t) e^{s[(W+t)\wedge a]} I(W+t \le a)\bigr) \,d \hat{\mu} (t) \nonumber\\ &&{}+ E\bigl(RWf(W)\bigr) \nonumber \\ & \le& E\int\bigl( e^{s(W\wedge a)} e^{s\delta} + s \bigl[(W+t)\wedge a \bigr] e^{s(W\wedge a)} e^{s\delta} \bigr) \,d \hat{\mu} (t) \nonumber\\[-8pt]\\[-8pt] &&{} + E\bigl(RWf(W) \bigr) \nonumber \\ & =& e^{s\delta} E\bigl(f(W)+s f(W) \bigl((W\wedge a)+\delta\bigr)\bigr ) D+ E\bigl(RWf(W)\bigr) \nonumber \\ & \le& \theta e^{0.5} (1+0.5) Ef(W) + s\theta e^{s\delta} E(W \wedge a)f(W) +E\bigl(RWf(W)\bigr) \nonumber \\ & \le& 1.5 e^ {0.5}\theta h(s)+2s\theta h'(s) +E \bigl(RWf(W)\bigr). \nonumber \end{eqnarray} Note that under the first alternative of (\ref{c3}), \begin{equation} \bigl|E\bigl(RWf(W)\bigr)\bigr|\le\delta_2 Ef(W) +2\delta_2 EW^2f(W), \end{equation} and under the second alternative of (\ref{c3}), \begin{equation} \bigl|E\bigl(RWf(W)\bigr)\bigr|\le\alpha E f(W) +\alpha EW^2f(W). \end{equation} Thus, recalling $\delta_2\le1/4$ and $\alpha<1$, we have \begin{equation}\label{l21-3} EW^2f(W) \leq\frac{C_\alpha}{2} \bigl(\theta h(s)+s\theta h'(s)\bigr), \end{equation} where $C_\alpha$ is defined in (\ref{Calpha}). We are now ready to prove (\ref{l21a}). 
Substituting (\ref{l21-3}) into (\ref{l21-1}) yields \begin{eqnarray} \label{l21-4} (1-s\delta_1) h'(s) & \leq& \bigl( s (1+ \delta_1 + 2 t \theta\delta)+ 2 \delta_2\bigr) h(s) \nonumber \\ & &{} + \delta_2 C_\alpha\bigl(\theta h(s) +s \theta h'(s)\bigr) +2C s\delta_1 \nonumber \\ & = & \bigl( s (1+ \delta_1 + 2 t \theta\delta)+ 2 \delta_2(1+C_\alpha\theta) \bigr) h(s) \nonumber\\[-8pt]\\[-8pt] & &{} + C_\alpha s \theta\delta_2 h'(s)+2C s \delta_1 \nonumber \\ & \leq& \bigl( s (1+ \delta_1 + 2 t \theta\delta)+ 2 \delta_2(1+C_\alpha\theta) \bigr) h(s) \nonumber \\ & &{} + C_\alpha t \theta\delta_2 h'(s)+2C s \delta_1.\nonumber \end{eqnarray} Solving for $h'(s)$, we obtain \begin{equation} \label{l21-5} h'(s)\leq\bigl( s c_1(t) + c_2(t)\bigr) h(s) +\frac{2C s\delta_1}{1-c_3 (t)}, \end{equation} where \begin{eqnarray*} c_1(t) & = & \frac{ 1+ \delta_1 + 2 t \theta\delta}{1- c_3(t)}, \\ c_2(t) & = & \frac{ 2 \delta_2(1+C_\alpha\theta) }{1- c_3(t)}, \\ c_3(t) & = & t\delta_1 +C_\alpha t\theta \delta_2. \end{eqnarray*} Now taking $t$ to satisfy (\ref{l21b}) yields $c_3(t) \leq1/2$, so in particular, $c_i(t)$ is nonnegative for $i=1,2$ and $1/(1-c_3(t)) \le1+ 2 c_3(t)$. Solving (\ref{l21-5}), we have \begin{equation}\label{l21-6} h(s) \le\exp\biggl(\frac{t^2}{2}c_1(t)+t c_2(t)+2C \delta_1 t^2\biggr). \end{equation} Note that $ c_3(t) \leq1/2$, $\delta_2 \le1/4$ and $\theta\geq1$. Elementary calculations now give \begin{eqnarray*} && \frac{t^2}{2}\bigl(c_1(t)-1\bigr)+tc_2(t) +2C \delta_1 t^2 \\ &&\qquad= \frac{t^2}{2}\frac{\delta_1+2t\theta\delta +c_3(t)}{1-c_3(t)}+\frac{2 t \delta_2(1+C_\alpha\theta )}{1-c_3(t)}+2C \delta_1 t^2 \\ &&\qquad\leq t^2(\delta_1+2t\theta\delta+ t \delta_1+C_\alpha t\theta\delta_2)+4t \delta_2(1+C_\alpha)+2C \delta_1 t^2 \\ &&\qquad\leq c_0(t), \end{eqnarray*} and hence \[ t^2 c_1(t) /2 + t c_2(t) + 2 C \delta_1 t^2 \leq t^2/ 2 + c_0(t), \] thus proving (\ref{l21a}) by letting $a\to\infty$. 
\end{pf} \begin{lemma} \label{l22} Suppose that for some nonnegative $\delta,\delta_1$ and $\delta_2$, satisfying $\max(\delta, \delta_1, \delta_2) \leq1$ and $\theta\ge1$, (\ref{l21a}) is satisfied, with $c_0(t)$ as in (\ref{l21c}), for all \begin{equation} \label{MDranget} t \in\bigl[0,\theta^{-1} \min\bigl( \delta^{-1/3}, \delta_1^{-1/3},\delta_2^{-1/3} \bigr)\bigr]. \end{equation} Then for integers $k \ge1$, \begin{equation} \label{l22a} \int_0^t u^k e^{u^2/2} P( W \geq u) \,du \leq c_2(C, C_\alpha) t^k, \end{equation} where $c_2(C, C_\alpha)$ is a constant depending only on $C$ and $C_\alpha$ defined in Lem\-ma~\ref{l21}. \end{lemma} \begin{pf} For $t$ satisfying (\ref{MDranget}), it is easy to see that $c_0(t) \leq5c_1(C, C_\alpha)$, where $c_1(C, C_\alpha)$ is as in Lemma~\ref{l21}, and (\ref{l21b}) is satisfied. Write \begin{eqnarray*} &&\int_0^t u^k e^{u^2/2} P( W \geq u) \,du \\ &&\qquad= \int_0^{[t]} u^k e^{u^2/2} P( W \geq u) \,du + \int_{[t]}^t u^k e^{u^2/2} P( W \geq u) \,du, \end{eqnarray*} where $[t]$ denotes the integer part of $t$. 
For the first integral, noting that $\sup_{j-1 \leq u \leq j} e^{u^2/2- j u} = e^{(j-1)^2/2 - j(j-1)}$, we have \begin{eqnarray}\label{l22-1} && \int_0^{[t]} u^k e^{u^2/2} P(W\geq u)\,du \nonumber\\ &&\qquad\leq\sum_{j=1}^{[t]}j^k \int_{j-1}^j e^{u^2/2-ju} e^{ju} P(W \geq u)\,du \nonumber \\ &&\qquad\leq\sum_{j=1}^{[t]}j^k e^{(j-1)^2/2-j(j-1)}\int_{j-1}^j e^{ju} P(W \geq u)\,du \\ &&\qquad\leq2 \sum_{j=1}^{[t]}j^k e^{-j^2/2}\int_{-\infty}^\infty e^{ju} P(W\geq u)\,du \nonumber \\ &&\qquad= 2 \sum_{j=1}^{[t]}j^k e^{-j^2/2} (1/j) E e^{jW} \nonumber \\ &&\qquad\leq2 \sum _{j=1}^{[t]}j^{k-1}\exp\bigl(-j^2/2 + j^2/2 + c_0(j)\bigr) \nonumber \\ &&\qquad\leq2 e^{c_0(t)} \sum_{j=1}^{[t]}j^{k-1} \nonumber \\ &&\qquad\leq c_2(C,C_\alpha) t^k.\nonumber \end{eqnarray} Similarly, we have \begin{eqnarray*} && \int_{[t]}^tu^ke^{u^2/2}P(W \geq u)\,du \\ &&\qquad\leq t^k \int_{[t]}^t e^{u^2/2-tu}e^{tu}P(W\geq u)\,du \\ &&\qquad\leq t^k e^{[t]^2/2-t[t]} \int_{[t]}^t e^{tu} P(W\geq u)\,du \\ &&\qquad\leq2t^k e^{-t^2/2} \int_{-\infty}^\infty e^{tu}P(W\geq u)\,du \\ &&\qquad\leq c_2(C, C_\alpha) t^k. \end{eqnarray*} This completes the proof. \end{pf} \section{Proofs of results}\label{sec6} In this section, let $O_\alpha(1)$ denote universal constants which depend on $\alpha$ only under the second alternative of (\ref{c3}). \subsection{\texorpdfstring{Proof of Theorem \protect\ref{t1}}{Proof of Theorem 3.1}}\label{sec6.1} If $\theta^{-1} \min( \delta^{-1/3}, \delta_1^{-1/3}, \delta_2^{-1/3}) \leq O_\alpha(1)$, then $1/(1-\Phi (x)) \leq1/(1-\Phi(O_\alpha(1)))$ for $0 \leq x \leq O_\alpha(1)$. Moreover, $\theta^3 (\delta+\delta_1+\delta_2)\ge O_\alpha(1)$. Therefore, (\ref{t1a}) is trivial. 
Hence, we can assume \begin{equation}\label{t1-01} \theta^{-1} \min\bigl( \delta^{-1/3}, \delta_1^{-1/3}, \delta_2^{-1/3}\bigr) \geq O_\alpha(1) \end{equation} so that $\delta\le1, \delta_2\le1/4, \delta_1+2\delta_2 <1$, and moreover, $\delta_1+\delta_2+\alpha<1$ under the second alternative of (\ref{c3}). Our proof is based on Stein's method. Let $f=f_x$ be the solution to the Stein equation \begin{equation}\label{stein2} wf(w)- f'(w) = I( w\geq x) - \bigl(1-\Phi(x)\bigr). \end{equation} It is known that \begin{eqnarray}\label{t11} f(w) &= & \cases{ \sqrt{2\pi} e^{w^2/2} \bigl( 1- \Phi(w) \bigr)\Phi(x), &\quad $w \geq x$, \vspace*{1pt}\cr \sqrt{2\pi} e^{w^2/2} \bigl( 1- \Phi(x) \bigr)\Phi(w), &\quad $w < x$,} \nonumber \\ & \leq& \frac{ 4 }{1+w} {\mathbf1}(w \geq x) + 3 \bigl(1-\Phi(x)\bigr) e^{w^2/2} {\mathbf1}(0 < w < x) \\ &&{} + 4 \bigl(1-\Phi(x)\bigr) \frac{ 1 }{1+|w| } {\mathbf1}(w \leq0)\nonumber \end{eqnarray} by using the following well-known inequality: \[ \bigl(1- \Phi(w)\bigr) e^{w^2/2} \leq\min\biggl( \frac{ 1}{2}, \frac{ 1 }{ w \sqrt{2 \pi}} \biggr),\qquad w >0. \] It is also known that $wf(w)$ is an increasing function; see \citet{CS05}, Lemma 2.2. By (\ref{c1}) we have \begin{equation} \label{t1-1a} E\bigl(Wf(W)\bigr) - E\bigl(Rf(W)\bigr) = E\int f'(W+t) \,d \hat{\mu} (t), \end{equation} and monotonicity of $w f(w)$ and equation (\ref{stein2}) imply that \begin{equation} \label{t1-1b} f'(W+t) \leq(W+\delta) f(W+\delta) + 1 - \Phi(x) - {\mathbf1}(W \geq x+ \delta). \end{equation} Recall that $\int d \hat{\mu}(t) = D$. 
Thus using nonnegativity of $\hat{\mu}$ and combining (\ref{t1-1a}) and (\ref{t1-1b}), we have \begin{eqnarray}\label{t1-1} && E \bigl(Wf(W)\bigr) - E\bigl(Rf(W)\bigr) \nonumber \\ &&\qquad\leq E \int\bigl((W+\delta) f(W+\delta) - W f(W)\bigr) \,d \hat {\mu} (t) + EWf(W) D \\ &&\qquad\quad{} + E\int\bigl\{ 1-\Phi(x) - {\mathbf1}(W > x+ \delta) \bigr \} \,d \hat{\mu} (t).\nonumber \end{eqnarray} Now, by (\ref{1-1}), the expression above can be written \begin{eqnarray}\label{t1-2} && E \bigl((W+\delta) f(W+\delta) - W f(W)\bigr)D \nonumber \\ & &\quad{} + EWf(W) D + E \bigl\{ 1-\Phi(x) - {\mathbf1}(W > x+ \delta) \bigr\} D \nonumber \\ &&\qquad= 1- \Phi(x) - P(W > x+ \delta) \\ &&\qquad\quad{} + E \bigl((W+\delta) f(W+\delta) - W f(W)\bigr)D + EWf(W) D \nonumber \\ &&\qquad\quad{} + E \bigl\{ 1-\Phi(x) - {\mathbf1}(W > x+ \delta) \bigr\} (D -1).\nonumber \end{eqnarray} Therefore, we have \begin{eqnarray}\label{t1-2a} && P(W > x+\delta) -\bigl(1- \Phi(x)\bigr) \nonumber \\ &&\qquad\leq E \bigl((W+\delta) f(W+\delta) - W f(W)\bigr)D + EWf(W) (D -1) \nonumber \\ &&\qquad\quad{} + E \bigl\{ 1-\Phi(x) - {\mathbf1}(W > x+ \delta) \bigr\} (D -1) + ERf(W) \\ &&\qquad\leq\theta E \bigl((W+\delta) f(W+\delta) - W f(W)\bigr)+ \delta_1 E\bigl(|W| \bigl(1+|W|\bigr) f(W)\bigr) \nonumber \\ &&\qquad\quad{} + \delta_1 E \bigl| 1-\Phi(x) - {\mathbf1}(W > x+ \delta) \bigr| \bigl(1+|W|\bigr) + \delta_2 E\bigl(2+W^2\bigr) f(W),\nonumber \end{eqnarray} where we have again applied the monotonicity of $wf(w)$ as well as (\ref{d2-0}), (\ref{c2}) and (\ref{c3}). Hence we have that \begin{equation} \label{t1-3} P(W > x+\delta) -\bigl(1- \Phi(x)\bigr) \le\theta I_1 + \delta_1 I_2 + \delta_1 I_3 + \delta_2 I_4, \end{equation} where \begin{eqnarray*} I_1 & =& E \bigl((W+\delta) f(W+\delta) - W f(W)\bigr), \\ I_2 & =& E\bigl(|W| \bigl(1+|W|\bigr) f(W)\bigr), \\ I_3 & =& E \bigl| 1-\Phi(x) - {\mathbf1}(W > x+ \delta) \bigr| \bigl(1+|W|\bigr) \end{eqnarray*} and \[ I_4 = E\bigl(2+W^2\bigr) f(W). 
\] By (\ref{t11}) we have \begin{eqnarray}\label{t1-4} Ef(W) & \leq& 4 P(W > x) + 4 \bigl(1- \Phi(x)\bigr) \nonumber\\[-8pt]\\[-8pt] &&{} + 3 \bigl(1- \Phi(x)\bigr) Ee^{W^2/2}{\mathbf1}(0 < W \leq x). \nonumber \end{eqnarray} Note that by (\ref{c1}) with $f(w)=w$, \begin{eqnarray*} EW^2 & =& E\int d \hat{\mu} (t) + E(RW) \\ &= & ED + E(RW). \end{eqnarray*} Therefore, under the first alternative of (\ref{c3}), $EW^2 \le (1+2\delta_1+\delta_2) +(\delta_1+2\delta_2)EW^2$, and under the second alternative of (\ref{c3}), $EW^2 \le(1+2\delta_1+\delta_2) +(\delta_1+\delta_2+\alpha)EW^2$. This shows $EW^2 \leq O_\alpha(1)$. Hence the hypotheses of Lem\-ma~\ref{l21} are satisfied with $C=O_\alpha(1)$, and therefore so is the conclusion of Lem\-ma~\ref{l22}. In particular, \begin{eqnarray}\label{t1-5}\quad Ee^{W^2/2} {\mathbf1}(0 < W \leq x) & \leq& P(0< W \le x)+ \int _0^x y e^{y^2/2} P(W > y) \,dy \nonumber\\[-8pt]\\[-8pt] & \leq& O_\alpha(1) (1+x).\nonumber \end{eqnarray} Similarly, by (\ref{t11}) again, \begin{eqnarray*} EW^2 f(W) &\leq& 4 E|W|{\mathbf1}(W>x) + 4 \bigl(1- \Phi(x)\bigr) E|W| \\ &&{}+ 3 \bigl(1- \Phi(x)\bigr) EW^2 e^{W^2/2}{\mathbf1}(0 < W \leq x) \end{eqnarray*} and by Lemma~\ref{l22}, \begin{eqnarray} \label{t1-6} EW^2 e^{W^2/2}{\mathbf1}(0 < W \leq x) &\le& \int _0^x \bigl( y^3 + 2y \bigr)e^{y^2/2} P(W > y) \,dy \nonumber\\[-8pt]\\[-8pt] & \leq& O_\alpha(1) \bigl( 1+ x^3\bigr).\nonumber \end{eqnarray} As to \[ E|W|{\mathbf1}(W>x)\le P(W>x)+EW^2 I(W>x), \] it follows from Lemma~\ref{l21} that \begin{equation}\label{t1-010} P(W > x)\leq e^{-x^2} E e^{xW}= O_\alpha(1) e^{-x^2/2} \end{equation} and \begin{eqnarray} \label{t1-011} \int_x^{\infty}t P(W\geq t)\,dt &\leq& Ee^{xW} \int_x^{\infty}te^{-xt}\,dt \nonumber\\ &=& Ee^{xW} x^{-2} \bigl(1+ x^2\bigr) e^{-x^2} \nonumber\\[-8pt]\\[-8pt] &\leq& O_\alpha(1) e^{-x^2/2} x^{-2} \bigl(1+ x^2\bigr) \nonumber\\ &\leq& O_\alpha(1) e^{-x^2/2}\nonumber \end{eqnarray} for $ x\geq1$.
Thus we have for $x>1$, \begin{eqnarray}\label{t1-7}\quad EW^2{\mathbf1}(W> x) & =& x^2 P( W > x) + \int _x^\infty2 y P(W > y) \,dy \nonumber\\[-8pt]\\[-8pt] & \leq& O_\alpha(1) \bigl(1+x^2\bigr) e^{-x^2/2} \leq O_\alpha(1) \bigl(1+ x^3\bigr) \bigl(1-\Phi(x)\bigr). \nonumber \end{eqnarray} Clearly, (\ref{t1-7}) remains valid for $ 0 \leq x \leq1$ by the fact that $EW^2 {\mathbf1}(W > x) \leq EW^2 \leq2$. Combining (\ref{t1-5})--(\ref{t1-7}), we have \begin{equation}\label{t1-8} I_2 \leq O_\alpha(1) \bigl(1+x^{3}\bigr) \bigl(1-\Phi(x)\bigr). \end{equation} Similarly, \begin{equation}\label{t1-9} I_4 \leq O_\alpha(1) \bigl(1+x^{3}\bigr) \bigl(1-\Phi(x)\bigr) \end{equation} and \begin{eqnarray}\label{t1-10} I_3 &\leq&\bigl(1-\Phi(x)\bigr) E\bigl(2+W^2\bigr) + E \bigl(2+W^2\bigr) {\mathbf1}(W \geq\delta+x) \nonumber\\[-8pt]\\[-8pt] &\leq& O_\alpha(1) \bigl(1+x^3\bigr) \bigl(1-\Phi(x)\bigr).\nonumber \end{eqnarray} Let $g(w)=(wf(w))'$. Then $I_1=\int_0^\delta Eg(W+t)\,dt$. It is easy to see that [e.g., \citet{CS01}] \begin{equation}\label{t1-11}\quad g(w) = \cases{ \bigl( \sqrt{2\pi}\bigl(1+w^2 \bigr)e^{w^2/2}\bigl(1- \Phi(w)\bigr) - w \bigr)\Phi(x), &\quad $w \geq x$, \cr \bigl( \sqrt{2\pi}\bigl(1+w^2\bigr)e^{w^2/2} \Phi(w) + w \bigr) \bigl(1-\Phi(x)\bigr), &\quad $w< x$,} \end{equation} and \begin{equation}\label{t1-012} 0 \leq\sqrt{2\pi}\bigl(1+w^2\bigr)e^{w^2/2}\bigl(1-\Phi(w) \bigr) -w \leq\frac{ 2 }{ 1+w^3}, \end{equation} and we have for $0\leq t \leq\delta$, \begin{eqnarray} \label{t1-12} && E g(W+t) \nonumber\\ &&\qquad= Eg(W+t){\mathbf1}\{W+t\geq x\} + Eg(W+t){\mathbf1}\{W+t\leq0 \} \nonumber \\ &&\qquad\quad{}+ Eg(W+t){\mathbf1}\{0< W+t < x\} \nonumber \\ &&\qquad\leq\frac{ 2 }{1+x^3} P(W+t\geq x) + 2 \bigl(1-\Phi(x)\bigr )P(W+t\leq0) \\ &&\qquad\quad{} + \sqrt{2\pi} \bigl(1-\Phi(x)\bigr)\nonumber\\ &&\qquad\quad\hspace*{10.5pt}{}\times E \bigl\{ \bigl (1+(W+t)^2+(W+t) \bigr)e^{(W+t)^2/2} {\mathbf1}\{0< W+t < x \} \bigr\} \nonumber \\ &&\qquad= O_\alpha(1) 
\bigl(1+x^3\bigr) \bigl(1-\Phi(x)\bigr) \nonumber \end{eqnarray} and hence \begin{equation} \label{t1-12a} I_1 = O_\alpha(1) \delta\bigl( 1+ x^3\bigr) \bigl(1- \Phi(x)\bigr). \end{equation} Putting (\ref{t1-3}), (\ref{t1-8}), (\ref{t1-9}), (\ref{t1-10}) and (\ref{t1-12a}) together gives \[ P(W \geq x+ \delta) -\bigl(1-\Phi(x)\bigr) \leq O_\alpha(1) \bigl(1-\Phi(x)\bigr) \theta\bigl(1+x^3\bigr) (\delta+ \delta_1+\delta_2) , \] and therefore \begin{equation}\label{t1-14}\qquad P(W \geq x ) -\bigl(1-\Phi(x)\bigr) \leq O_\alpha(1) \bigl(1-\Phi(x) \bigr) \theta\bigl(1+x^3\bigr) (\delta+\delta_1+ \delta_2). \end{equation} As to the lower bound, similarly to (\ref{t1-1b}) and (\ref{t1-2a}), we have \[ f'(W+t) \geq(W-\delta) f(W-\delta) + 1- \Phi(x) - {\mathbf1}( W \geq x - \delta) \] and \begin{eqnarray*} && P(W > x -\delta) -\bigl(1- \Phi(x)\bigr) \\ &&\qquad\geq\theta E \bigl((W-\delta) f(W-\delta) - W f(W)\bigr)- \delta_1 E\bigl(|W| \bigl(1+|W|\bigr) f(W)\bigr) \\ &&\qquad\quad{} - \delta_1 E \bigl| 1-\Phi(x) - {\mathbf1}(W > x- \delta) \bigr| \bigl(1+|W|\bigr) - \delta_2 E\bigl(2+W^2\bigr) f(W). \end{eqnarray*} Now following the same proof as that of (\ref{t1-14}) leads to \[ P(W \geq x ) -\bigl(1-\Phi(x)\bigr) \geq- O_\alpha(1) \bigl(1-\Phi(x) \bigr) \theta\bigl(1+x^3\bigr) (\delta+ \delta_1 + \delta_2). \] This completes the proof of Theorem~\ref{t1}. \subsection{\texorpdfstring{Proof of Proposition \protect\ref{p21}}{Proof of Proposition 4.2}}\label{sec6.2} For $n\ge2$ and $X\sim U\{0,1,\ldots,n\}$, let $\tilde{S}_n=\tilde{S}(X)$ be the number of $1$'s in the binary string of $X$ generated in any system of binary codes satisfying (\ref{p21cond}). Without loss of generality, assume that \begin{equation} \label{42-1} \tilde{S}(0)=0. \end{equation} Condition (\ref{p21cond}) allows $\tilde{S}(X)$ to be represented in terms of the labels of the nodes in a binary tree described as follows. Let $\tilde{T}$ be an infinite binary tree.
For $k\ge0$, the nodes of $\tilde{T}$ in the $k$th generation are denoted by (from left to right) $(V_{k,0},\ldots, V_{k, 2^k-1})$. Each node is labeled by $0$ or $1$. Assume that $\tilde{T}$ satisfies: \begin{longlist}[(C3)] \item[(C1)] the root is labeled by $0$; \item[(C2)] the labels of two siblings are different; \item[(C3)] the infinite binary subtrees of $\tilde{T}$ with roots $\{ V_{k,0}\dvtx k\ge0\}$ are the same as $\tilde{T}$. \end{longlist} For $2^{k-1}-1< n\le2^{k}-1$, represent $0,\ldots,n$ by the nodes $V_{k,0},\ldots, V_{k,n}$, respectively. Then $\tilde{S}(X)$ is the number of $1$'s on the shortest path from $V_{k, X}$ to the root of the tree. Condition (C3) implies that $\tilde{S}(X)$ does not depend on $k$, so that the representation is well defined. We consider two extreme cases. Define a binary tree $T$ by always assigning $0$ to the left sibling and $1$ to the right sibling. Then the number of $1$'s in the binary string of $X$ is that in the binary expansion of $X$. Denote it by $S_n(=S(X))$. Next, define a binary tree $\bar{T}$ by assigning $V_{k,0}=0$, $V_{k,1}=1$ for all $k$ and assigning $1$ to the left sibling and $0$ to the right\vspace*{1pt} sibling for all other nodes. Let the number of $1$'s in the binary string of $X$ on $\bar{T}$ be $\bar{S}_n(=\bar{S}(X))$. Both $T$ and $\bar{T}$ are infinite binary trees satisfying (C1), (C2) and (C3), and both $S_n$ and $\bar{S}_n$ satisfy (\ref{p21cond}). It is easy to see that for all integers $n\ge0$, \begin{equation} S_n\le_{st} \tilde{S}_n \le_{st} \bar{S}_n, \end{equation} where $\le_{st}$ denotes stochastic ordering. Therefore, it suffices to prove Cram\'er moderate deviation results for $W_n$ and $\bar{W}_n$, where $W_n=(S_n-\frac{k}{2})/\sqrt{\frac{k}{4}}$ and $\bar{W}_n=(\bar{S}_n-\frac{k}{2})/\sqrt{\frac{k}{4}}$. We suppress the subscript $n$ in the following and follow \citet{Dia77} in constructing the exchangeable pair $(W, W')$.
Let $I$ be a random variable uniformly distributed over the set $\{1, 2, \ldots, k\}$ and independent of $X$, and let the random variable $X'$ be defined by \[ X' = \sum_{i=1}^k X_i' 2^{k-i}, \] where \begin{equation}\label{p21-1} X_i'= \cases{ X_i, &\quad if $i \not= I$, \vspace*{1pt}\cr 1, &\quad if $i =I,X_I=0$ \mbox{ and } $X+2^{k-I} \leq n$, \vspace*{1pt}\cr 0, &\quad else.} \end{equation} Let $S'=S-X_I+X_I'$, $W'=(S'-k/2)/\sqrt{k/4}$. As proved in \citet{Dia77}, $(W,W')$ is an exchangeable pair and \begin{eqnarray} \label{p21-2} E\bigl(W-W'|W\bigr)&=&\lambda\biggl(W-\biggl(-\frac{E(Q|W)}{\sqrt{k}} \biggr)\biggr), \\ \label{p21-3} \frac{1}{2\lambda}E\bigl(\bigl(W-W'\bigr)^2|W\bigr) -1&=&-\frac{E(Q|W)}{k}, \end{eqnarray} where\vspace*{2pt} $\lambda=2/k$ and $Q = \sum_{i=1}^k I( X_i =0, X+ 2^{k-i} > n)$. From Lemma~\ref{l41} and Theorem \ref{t1} [with $\delta=O(k^{-1/2}), \delta_1 = O(k^{-1}), \delta_2 = O(k^{-1/2})$], \[ \frac{ P( W \geq x) }{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr) \frac {1}{\sqrt{k}} \] for $0 \leq x \leq k^{1/6}$. Repeat the above argument for $-W$, and we have \[ \frac{ P( W \le-x) }{1- \Phi(x)} = 1 + O(1) \bigl(1+x^3\bigr) \frac {1}{\sqrt{k}} \] for $0 \leq x \leq k^{1/6}$. Next, we notice that $S$ and $\bar{S}$ can be written as, with $X\sim U\{0,1,\ldots, n\}$, \[ S=I\bigl(0\le X\le2^{k-1}-1\bigr) S +I\bigl(2^{k-1}\le X \le n \bigr) S \] and \[ \bar{S}=I\bigl(0\le X\le2^{k-1}-1\bigr) \bar{S} +I\bigl(2^{k-1} \le X \le n\bigr)\bar{S}. 
\] Therefore, \begin{eqnarray*} &&-W-\frac{1}{\sqrt{k/4}} \\ &&\qquad= \biggl(-\frac{1}{2}+ I\bigl(0\le X\le2^{k-1}-1\bigr)\biggl(\frac{k-1}{2}-S\biggr) \\ &&\qquad\quad\hspace*{42.5pt}{}+I\bigl(2^{k-1} \le X\le n\bigr) \biggl(\frac{k-1}{2}-S\biggr) \biggr) \Big/{\sqrt{k/4}} \end{eqnarray*} and \begin{eqnarray*} \bar{W}&=&\biggl(-\frac{1}{2}+ I\bigl(0\le X\le2^{k-1}-1\bigr)\biggl(\bar{S}-\frac {k-1}{2}\biggr)\\ &&\hspace*{40.7pt}{} +I\bigl(2^{k-1} \le X\le n\bigr) \biggl(\bar{S}-\frac{k-1}{2}\biggr)\biggr)\Big/{\sqrt{k/4}}. \end{eqnarray*} Conditionally on $0\le X\le2^{k-1}-1$, the distributions of both $S(X)$ and $\bar{S}(X)$ are $\operatorname{Binomial}(k-1,1/2)$, which yields \[ \mathcal{L}\biggl(\frac{k-1}{2}-S\Big| 0\le X\le2^{k-1}-1\biggr) = \mathcal{L} \biggl(\bar{S} -\frac{k-1}{2}\Big| 0\le X\le2^{k-1}-1 \biggr). \] On the other hand, when $2^{k-1} \le X\le n$, $\bar{S}(X)=k-1-S(X)$. Therefore, $\bar{W}$ has the same distribution as $-W-1/\sqrt{\frac {k}{4}}$, which implies that the Cram\'er moderate deviation result also holds for $\bar{W}$. This finishes the proof of Proposition~\ref{p21}. \begin{lemma}\label{l41} We have $E(Q|S)= O(1)(1+|W|)$. \end{lemma} \begin{pf} Write \[ n = \sum_{i \ge1} 2^{k-p_i} \] with $1=p_1<p_2<\cdots<p_{k_1}$ the positions of the ones in the binary expansion of~$n$, where $k_1 \leq k$. Recall that $X$ is uniformly distributed over $\{0, 1,\ldots,n\}$, and that \[ X=\sum_{i=1}^k X_i 2^{k-i} \] with exactly $S$ of the indicator variables $X_1,\ldots,X_k$ equal to 1. We say that $X$ falls in category $i$, $i=1,\ldots,k_1$, when \begin{equation} \label{MDbincati} X_{p_1}=1,\qquad X_{p_2}=1,\ldots,X_{p_{i-1}}=1 \quad\mbox{and}\quad X_{p_i}=0. \end{equation} We say that $X$ falls in category $k_1+1$ if $X=n$. This special category is nonempty only when $S=k_1$, and in this case, $Q=k-k_1$, which gives the last term in (\ref{l41-1}).
Note that if $X$ is in category $i$ for $i\le k_1$, then, since $X$ can be no greater than~$n$, the digits of $X$ and $n$ match up to the $p_i$th, except for the digit in place $p_i$, where $n$ has a one and $X$ a zero. Further, up to this digit, $n$ has $p_i-i$ zeros, and so $X$ has $a_i=p_i-i+1$ zeros. Changing any of these $a_i$ zeros, except possibly the zero in position $p_i$, to a one results in a number $n$ or greater, while changing any other zero does not, since digit $p_i$ of $n$ is one and that of $X$ is zero. Hence $Q$ is at most $a_i$ when $X$ falls in category $i$. Since $X$ has $S$ ones in its expansion, $i-1$ of which are accounted for by (\ref{MDbincati}), the remaining $S-(i-1)$ are uniformly distributed over the\vadjust{\goodbreak} $k-p_i=k-(i-1)-a_i$ remaining digits $\{X_{p_i+1},\ldots,X_k\}$. Thus, we have the inequality \begin{equation} \label{l41-1} E(Q|S) \le\frac{1}{A}\sum_{i \ge1} \pmatrix{k-(i-1)-a_i \cr S-(i-1)}a_i + \frac{I(S=k_1)}{A}(k-k_1), \end{equation} where \[ A = \sum_{i \ge1} \pmatrix{k-(i-1)-a_i \cr S-(i-1)}+I(S=k_1) \] and $1=a_1\leq a_2\leq a_3\leq\cdots\,$. Note that if $k_1=k$, the last term of (\ref{l41-1}) equals $0$. When $k_1< k$, we have \begin{equation} \frac{I(S=k_1)}{A} (k-k_1) \le\pmatrix{k-1 \cr k_1}^{-1} (k-k_1) \le1, \end{equation} so we omit this term in the following argument. We consider two cases. \textit{Case} 1: $S\geq k/2$. As $a_i \ge1$ for all $i$, there are at most $k+1$ nonzero terms in the sum (\ref{l41-1}). Divide the summands into two groups, those for which $a_i\leq2\log_2 k$ and those with $a_i> 2\log_2 k$. The first group can sum to no more than $2\log_2 k$ because the sum is a weighted average of the $a_i$.
For the second group, note that \begin{eqnarray} \label{l41-2} &&\pmatrix{k-(i-1)-a_i \cr S-(i-1)}\Big/A \nonumber \\[-2pt] &&\qquad\leq\pmatrix{k-(i-1)-a_i \cr S-(i-1)}\bigg/\pmatrix{k-1 \cr S} \nonumber\\[-9pt]\\[-9pt] &&\qquad= \prod_{j=1}^{a_i-1} \biggl( \frac{k-S-j}{k-j} \biggr) \prod_{j=0}^{i-2} \biggl( \frac{S-j}{k-(a_i-1)-1-j} \biggr) \nonumber \\[-2pt] &&\qquad\leq\frac{1}{2^{a_i-1}}\leq\frac{1}{k^2},\nonumber \end{eqnarray} where the second inequality follows from $S\geq k/2$, and the last inequality from $a_i>2\log_2 k$. Therefore, the sum of the second group of terms is bounded by $1$. \textit{Case} 2: $S< k/2$. Divide the sum on the right-hand side into two groups according to whether $i \leq2 \log_2 k$ or $i> 2 \log_2 k$. Clearly, \begin{eqnarray*} && \pmatrix{k-(i-1)-a_i \cr S-(i-1)}\Big/A \\[-2pt] &&\qquad \leq \prod _{j=0}^{i-2} \biggl( \frac{S-j}{k-1-j} \biggr) \prod _{j=1}^{a_i-1} \biggl( \frac{k-S-j}{k-(i-1)-j} \biggr) \nonumber \\[-2pt] &&\qquad \leq 1/2^{i-1} \end{eqnarray*} using the assumption $S< k/2$ and the fact that $S\geq i-1$. The above inequality is true for all $i$, so the summation for the part where $i> 2\log_2 k$ is bounded by $1$. Next we consider $i\leq2\log_2 k$. When $S\geq k ( \frac{\log a_i}{a_i-1} ) +2\log_2 k$, we have\break $a_i(\frac{k-S-1}{k-(i-1)-1})^{a_i-1} \leq1$. Solving $S$ from the inequality $a_i(\frac{k-S-1}{k-(i-1)-1})^{a_i-1} \leq1$, we see that it is equivalent to the inequality $S\geq(1-e^{-({\log a_i})/({a_i-1})})k-1+e^{-({\log a_i})/({a_i-1})}i$, which is a result of the above assumption on $S$ when $i< 2\log_2 k$. 
Now we have \begin{eqnarray}\label{l41-3} && a_i\pmatrix{k-(i-1)-a_i \cr S-(i-1)}\Big/A \nonumber \\ &&\qquad\leq a_i\pmatrix{k-(i-1)-a_i \cr S-(i-1)}\bigg/ \pmatrix{k-1 \cr S} \nonumber\\[-8pt]\\[-8pt] &&\qquad= a_i \prod_{j=0}^{i-2} \biggl( \frac{S-j}{k-1-j} \biggr) \prod_{j=1}^{a_i-1} \biggl( \frac{k-S-j}{k-(i-1)-j} \biggr) \nonumber \\ &&\qquad\leq a_i\frac{1}{2^{i-1}} \biggl(\frac{k-S-1}{k-(i-1)-1} \biggr)^{a_i-1}\leq\frac{1}{2^{i-1}}\nonumber \end{eqnarray} using the fact that $a_i(\frac{k-S-1}{k-(i-1)-1})^{a_i-1} \leq1$. On the other hand, if $S< k ( \frac{\log a_i}{a_i-1} ) + 2\log_2 k$, then $a_i S/(k-1)=O(1) \log_2 k$, which implies \begin{eqnarray*} &&a_i \pmatrix{k-(i-1)-a_i \cr S-(i-1)}\Big/A \\ &&\qquad \leq \frac{a_i S}{k-1} \prod_{j=1}^{i-2} \biggl( \frac{S-j}{k-1-j} \biggr) \prod_{j=1}^{a_i-1} \biggl( \frac{k-S-j}{k-(i-1)-j} \biggr) \\ &&\qquad = O(1)\log_2 k/2^{i-2}. \end{eqnarray*} This proves that the right-hand side of (\ref{l41-1}) is bounded by $O(1)\log_2 k$. To complete the proof of the lemma, that is, to prove $E(Q|W)\leq C(1+|W|)$, we only need to show that $E(Q|S)\leq C$ for some universal constant $C$ when $|W|\leq \log_2 k$, that is, when $k/2- \sqrt{k/4} \log_2 k \leq S\leq k/2+ \sqrt{k/4} \log_2 k $. Following the argument in case 2 above, we only need to consider the summands where $i\leq2\log_2 k$ because the other part where $i> 2\log_2 k$ is bounded by $1$ as proved in case 2. When $a_i, k$ are bigger than some universal constant, $k/2- \sqrt{k/4} \log_2 k \geq\frac{\log a_i}{a_i-1}\times k+2\log_2 k$, which implies $(\frac{k-S-1}{k-(i-1)-1})^{a_i-1}\times a_i\leq1$ and ${k-(i-1)-a_i\choose S-(i-1)}\times a_i/A\leq1/2^{i-1}$. Since both parts for $i\leq2\log_2 k$ and $i> 2\log_2 k$ are bounded by some constant, $E(Q|S)\leq C$ when $|W|\leq\log_2 k$, and hence the lemma is proved. 
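Both the bound $Q\le a_i$ within category $i$ (with $Q=k-k_1$ at $X=n$) and the binomial-ratio identity used above can be brute-forced for small parameters (a sketch; all function names are ours, and `i`, `a`, `s` stand for $i$, $a_i$, $S$):

```python
from math import comb

def q_statistic(x: int, n: int, k: int) -> int:
    """Q = number of zero digits of x (k-digit expansion) whose flip to 1 exceeds n."""
    return sum(
        1
        for i in range(1, k + 1)
        if (x >> (k - i)) & 1 == 0 and x + (1 << (k - i)) > n
    )

def category(x: int, n: int, k: int, p: list):
    """Category of x <= n: smallest i with digit p_i of x zero; None when x == n."""
    for i, pi in enumerate(p, start=1):
        if (x >> (k - pi)) & 1 == 0:
            return i
    return None  # x has ones at all ones of n, and x <= n then forces x == n

def binom_ratio(k: int, i: int, a: int, s: int) -> float:
    """Left-hand side C(k-(i-1)-a, s-(i-1)) / C(k-1, s) of the product identity."""
    return comb(k - (i - 1) - a, s - (i - 1)) / comb(k - 1, s)

def binom_ratio_product(k: int, i: int, a: int, s: int) -> float:
    """Right-hand side: prod_{j=1}^{a-1} (k-s-j)/(k-j) * prod_{j=0}^{i-2} (s-j)/(k-(a-1)-1-j)."""
    p = 1.0
    for j in range(1, a):
        p *= (k - s - j) / (k - j)
    for j in range(i - 1):
        p *= (s - j) / (k - (a - 1) - 1 - j)
    return p
```

For $n=45=101101_2$ one has $k=6$, $p=(1,3,4,6)$ and $a=(1,2,2,3)$, and checking all $x\le n$ confirms the category bound.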
\end{pf} \subsection{\texorpdfstring{Proof of Propositions \protect\ref{p4} and \protect\ref{p5}}{Proof of Propositions 4.3 and 4.4}}\label{sec6.3} Let $\tilde{W}$ have the conditional distribution of $W$ ($W_1$, $W_2$, resp.) given $|W|\le c_1 \sqrt{n}$ ($|W_1|, |W_2| \le c_1 \sqrt{n}$, resp.) where $c_1$ is to be determined. If we can prove that \begin{equation} \frac{ P(\tilde{W} \geq x) }{1-\Phi(x)} =1+ O(1) \bigl(1+x^3\bigr) /\sqrt{n} \end{equation} for $0\le x \le n^{1/6}$, then from the fact that [\citet{E85}] \begin{equation} \label{CW0} P\bigl(|W|>K \sqrt{n}\bigr) \le e^{-n C(K)} \end{equation} and \[ P\bigl(|W_1|>K \sqrt{n}| S<0\bigr) \le e^{-n C(K)},\qquad P\bigl(|W_2|>K \sqrt{n}| S>0\bigr) \le e^{-n C(K)} \] for any positive number $K$ where $C(K)$ is a positive constant depending only on~$K$, we have, with $\delta_2=O(1/\sqrt{n})$, \begin{eqnarray*} \frac{ P(W \geq x) }{1-\Phi(x)} &\le&\frac{ P(\tilde{W} \ge x)+P(\delta _2|W|>1/2) }{1-\Phi(x)} \\ &=& 1+ O(1) \bigl(1+x^3\bigr) /\sqrt{n} \end{eqnarray*} for $0\le x \le n^{1/6}$. Similarly, (\ref{p5a}) and (\ref{p5b}) are also true. Therefore, we prove Cram\'er moderate deviation for $\tilde{W}$ (still denoted by $W$ in the following) defined below. Assume the state space of the spins is $\Sigma=(\sigma_1, \sigma_2, \ldots, \sigma_n)\in \{-1, 1\}^n$ such that $\sum_{i=1}^n \sigma_i/n \in[a,b]$ where $[a,b]$ is any interval within which there is only one solution $m$ to (\ref{CW1}). Let $S=\sum_{i=1}^n \sigma_i$, $W=\frac {S-nm}{\sigma}$ and $\sigma^2=n\frac{1-m^2}{1-(1-m^2)\beta}$. Note that in cases 1 and 2, $1-(1-m^2)\beta>0$, thus $\sigma^2$ is well defined. Moreover, $[a,b]$ is chosen such that $|W|\le c_1 \sqrt {n}$. The joint distribution of the spins is \[ Z_{\beta, h}^{-1} \exp\Biggl(\frac{\beta\sum_{1\le i<j \le n} \sigma_i \sigma_j}{n}+\beta h\sum _{i=1}^n \sigma_i\Biggr). 
\] Let $I$ be a random variable uniformly distributed over $\{1, \ldots, n\}$ independent of $\{\sigma_i, 1 \leq i \leq n\}$. Let $\sigma_i'$ be a random sample from the conditional distribution of $\sigma_i$ given $\{\sigma_j, j \not=i, 1 \leq j \leq n\}$. Define $W' = W- ( \sigma_I - \sigma_I')/\sigma$. Then $(W, W')$ is an exchangeable pair. Let \[ A(w)=\frac{\exp(-\beta(m+h)- \beta\sigma w/n+\beta/n)}{\exp (-\beta(m+h)- \beta\sigma w/n+\beta/n)+\exp(\beta(m+h)+ \beta \sigma w/n-\beta/n)} \] and \[ B(w)=\frac{\exp(\beta(m+h)+ \beta\sigma w/n+\beta/n)}{\exp(\beta (m+h)+ \beta\sigma w/n+\beta/n)+\exp(-\beta(m+h)- \beta\sigma w/n-\beta/n)}. \] It is easy to see that \begin{eqnarray*} \hspace*{-4pt}&& \frac{ e^{-\beta(m+h)- \beta\sigma w/n} }{ e^{-\beta(m+h)-\beta \sigma w /n} + e^{\beta(m+h)+\beta\sigma w /n }} \\ \hspace*{-4pt}&&\qquad\leq A(w) = \frac{ \exp(-\beta(m+h)-\beta\sigma w/n) }{\exp (-\beta(m+h)-\beta\sigma w/n)+\exp(\beta(m+h)+\beta\sigma w /n - 2 \beta/n)} \\ \hspace*{-4pt}&&\qquad\leq\frac{ e^{-\beta(m+h)- \beta\sigma w/n} }{ e^{-\beta (m+h)-\beta\sigma w /n} + e^{\beta(m+h)+\beta\sigma w /n }}e^{2\beta /n} \end{eqnarray*} and \begin{eqnarray*} \hspace*{-4pt}&&\frac{ e^{\beta(m+h) + \beta\sigma w/n} }{ e^{\beta(m+h)+\beta \sigma w /n} + e^{-\beta(m+h)-\beta\sigma w /n }} \\ \hspace*{-4pt}&&\qquad\leq B(w) = \frac{ \exp(\beta(m+h)+\beta\sigma w /n) }{\exp (\beta(m+h)+\beta\sigma w /n)+\exp(-\beta(m+h)- \beta\sigma w /n - 2 \beta/n)} \\ \hspace*{-4pt}&&\qquad\leq\frac{ e^{\beta(m+h) + \beta\sigma w/n} }{ e^{\beta (m+h)+\beta\sigma w /n} + e^{-\beta(m+h)-\beta\sigma w /n }}e^{2\beta/n}. \end{eqnarray*} Therefore \[ A(W)+B(W)=1+O(1)\frac{1}{n} \] and \[ A(W)-B(W)=-\tanh\bigl(\beta(m+h)+\beta\sigma W/n\bigr)+O(1) \frac{1}{n}. 
\] Note that \begin{eqnarray*} &&E\bigl(W-W'|\Sigma\bigr) \\ &&\qquad= \frac{1}{\sigma}E\bigl(\sigma_{I}-\sigma_{I}'|\Sigma\bigr) \\ &&\qquad= \frac{2}{\sigma} E\bigl(I\bigl(\sigma_I=1, \sigma_I'=-1\bigr)-I\bigl(\sigma_I=-1, \sigma_I'=1\bigr)|\Sigma\bigr) \\ &&\qquad= \frac{2}{\sigma} \frac{\sigma W+nm+n}{2n} A(W) I(S-2\ge an) \\ &&\qquad\quad{}-\frac{2}{\sigma} \frac{n-\sigma W-nm}{2n} B(W)I(S+2\le bn) \\ &&\qquad= \bigl(A(W)+B(W)\bigr) \biggl(\frac{W}{n}+\frac{m}{\sigma }\biggr)+ \frac{1}{\sigma }\bigl(A(W)-B(W)\bigr) \\ &&\qquad\quad{}-\frac{\sigma W+nm+n}{\sigma n} A(W) I(S-2<an)\\ &&\qquad\quad{}+\frac {n-\sigma W-nm}{\sigma n} B(W) I(S+2>bn) \\ &&\qquad= \biggl(\frac{W}{n}+\frac{m}{\sigma}\biggr) \biggl(1+O\biggl( \frac{1}{n}\biggr)\biggr) -\frac {1}{\sigma} \biggl(\tanh\biggl( \beta(m+h)+\frac{\beta\sigma W}{n}\biggr)+O\biggl(\frac {1}{n}\biggr)\biggr) \\ &&\qquad\quad{}-\frac{S+n}{\sigma n}A(W)I(S-2<an)+\frac{n-S}{\sigma n}B(W) I(S+2>bn) \\ &&\qquad=\lambda(W-R), \end{eqnarray*} where \[ \lambda=\frac{1-(1-m^2)\beta}{n}>0 \] and \begin{eqnarray*} R&=&\frac{1}{\lambda}\frac{\tanh'' (\beta(m+h)+\xi) \beta^2 \sigma}{2n^2} W^2+\frac{1}{\lambda} \frac{S+n}{\sigma n}A(W) I(S-2<an) \\ &&{}-\frac{1}{\lambda}\frac{n-S}{\sigma n}B(W) I(S+2>bn)+O(1) \biggl( \frac {W}{n}+\frac{1}{\sigma}\biggr), \end{eqnarray*} where $\xi$ is between $0$ and $\beta\sigma W/n$. Similarly, \begin{eqnarray*} \hspace*{-4pt}&&E\bigl(\bigl(W-W'\bigr)^2|\Sigma\bigr) \\ \hspace*{-4pt}&&\qquad= \frac{4}{\sigma^2} E\bigl(I\bigl(\sigma_I=1, \sigma_I'=-1\bigr)+I\bigl(\sigma_I=-1, \sigma_I'=1\bigr)|\Sigma\bigr) \\ \hspace*{-4pt}&&\qquad=\frac{2(1-m^2)}{\sigma^2}+O(1)\frac{W}{n \sigma}+O\biggl(\frac{1}{n \sigma^2} \biggr)+O\biggl(\frac{I(S-2<an\mbox{ or } S+2>bn)}{\sigma^2}\biggr). \end{eqnarray*} Therefore, recalling that $\sigma^2=n\frac{1-m^2}{1-(1-m^2)\beta}$, \[ \bigl|E(D|W)-1\bigr|\le O\biggl(\frac{1}{\sqrt{n}}\biggr) \bigl(1+|W|\bigr).
\] For $R$, with $\delta_2=O(1/\sqrt{n})$, \[ \bigl|E(R|W)\bigr|\le\delta_2 \bigl(1+W^2\bigr), \] and if $c_1$ is chosen such that $\delta_2 |W|\le1/2$, the second alternative of (\ref{c3}) is satisfied with $\alpha=1/2$. Thus from Theorem~\ref{t1}, we have the following moderate deviation result for $W$: \[ \frac{ P(W \geq x) }{1-\Phi(x)} = 1+ O(1) \bigl(1+x^3\bigr) \frac {1}{\sqrt{n}} \] for $0\le x\le n^{1/6}$. This completes the proof of (\ref{p4a}) and (\ref{p5a}). \subsection{\texorpdfstring{Proof of Proposition \protect\ref{t71}}{Proof of Proposition 4.5}}\label{sec6.4} Since $(1-\Phi(x)) \geq\frac{1}{\sqrt{2\pi}\,(1+x)} e^{-x^2/2}$ for $x\geq0$, (\ref{t71a}) becomes trivial if $ x\gamma\geq1/8$. Thus we can assume \begin{equation} \label{t71-00} x \gamma\leq1/8. \end{equation} Let $f=f_x$ be the Stein solution to equation (\ref{stein2}). Let $W^{(i)} = W- \xi_i$ and $ K_i(t) = E\xi_i ( I\{ 0 \leq t \leq\xi_i\} - I \{\xi_i \leq t \leq 0\})$. It is known that [see, e.g., (2.18) in \citet{CS05}] \[ EWf(W) = \sum_{i=1}^n E \int _{-\infty}^\infty f'\bigl(W^{(i)} + t\bigr) K_i(t) \,dt. \] Since $\int_{-\infty}^\infty K_i(t) \,dt = E\xi_i^2$, we have \begin{eqnarray}\label{t71-1} &&P(W \geq x) - \bigl(1-\Phi(x)\bigr) \nonumber \\ &&\qquad= EWf(W) - Ef'(W) \nonumber \\ &&\qquad= \sum_{i=1}^n E \int _{-\infty}^\infty\bigl( f' \bigl(W^{(i)} +t\bigr) - f'(W)\bigr) K_i(t) \,dt \nonumber\\[-8pt]\\[-8pt] &&\qquad= \sum_{i=1}^n E \int _{-\infty}^\infty\bigl( \bigl(W^{(i)} + t\bigr) f\bigl(W^{(i)}+t\bigr) - W f(W)\bigr) K_i(t) \,dt \nonumber \\ &&\qquad\quad{} + \sum_{i=1}^n E \int _{-\infty}^\infty\bigl( I\bigl\{W^{(i)} + t \geq x\bigr\} - I\{W \geq x\}\bigr) K_i(t) \,dt \nonumber \\ &&\qquad:= R_1 + R_2.\nonumber \end{eqnarray} It suffices to show that \begin{equation} \label{r1} |R_1| \leq C \bigl(1+x^3\bigr) \gamma\bigl(1- \Phi(x) \bigr) e^{x^3 \gamma} \end{equation} and \begin{equation}\label{R2} |R_2| \leq C \bigl(1+x^2\bigr) \gamma\bigl(1- \Phi(x) \bigr) e^{x^3 \gamma}.
\end{equation} To estimate $R_1$, let $g(w) = (wf(w))'$. It is easy to see that \begin{equation}\label{r1-1} R_1 = \sum_{i=1}^n E \int\!\!\int_{\xi_i}^t g\bigl(W^{(i)} +s\bigr) \,ds K_i(t) \,dt. \end{equation} By (\ref{t1-11}) and (\ref{t1-012}), following the proof of (\ref{t1-12}), we have \begin{eqnarray} \label{t71-2} && E g\bigl(W^{(i)} +s\bigr) \nonumber\\ &&\qquad= Eg\bigl(W^{(i)}+s\bigr)I\bigl\{W^{(i)}+s\geq x\bigr\} + Eg\bigl(W^{(i)}+s\bigr)I\bigl\{W^{(i)}+s\leq0 \bigr\} \nonumber \\ &&\qquad\quad{}+ Eg\bigl(W^{(i)} +s \bigr)I\bigl\{0< W^{(i)} + s < x\bigr\} \nonumber \\ &&\qquad\leq\frac{ 2 }{1+x^3} P\bigl(W^{(i)} +s\geq x\bigr) + 2 \bigl(1- \Phi(x)\bigr)P\bigl(W^{(i)}+s\leq0\bigr) \nonumber \\ &&\qquad\quad{} + \sqrt{2\pi} \bigl(1-\Phi(x)\bigr) \nonumber\\ &&\qquad\quad\hspace*{11pt}{}\times E \bigl\{ \bigl (1+\bigl(W^{(i)}+s \bigr)^2\bigr)e^{(W^{(i)}+s)^2/2} I\bigl\{0< W^{(i)}+s < x\bigr \} \bigr\} \\ &&\qquad\leq\frac{ 2 }{1+x^3} P\bigl(W^{(i)}\geq x-s\bigr) + 2 \bigl(1- \Phi(x)\bigr)P\bigl(W^{(i)}+s \leq0\bigr) \nonumber \\ &&\qquad\quad{} - \sqrt{2\pi} \bigl(1-\Phi(x)\bigr)\int_0^x \bigl(1+y^2\bigr)e^{y^2/2} \,dP\bigl(W^{(i)}+s > y \bigr) \nonumber \\ &&\qquad\leq\frac{ 2 }{1+x^3} P\bigl(W^{(i)}\geq x-s\bigr) + 2 \bigl(1- \Phi(x)\bigr)P\bigl(W^{(i)}+s \leq0\bigr) \nonumber\\ &&\qquad\quad{} + \sqrt{2\pi} \bigl(1-\Phi(x)\bigr)P\bigl(W^{(i)}+s >0\bigr) + \sqrt{2\pi} \bigl(1-\Phi(x)\bigr) J(s) \nonumber \\ &&\qquad\leq\frac{ 2 }{1+x^3} P\bigl(W^{(i)} \geq x-s\bigr) + \sqrt {2\pi} \bigl(1-\Phi(x)\bigr) + \sqrt{2\pi} \bigl(1-\Phi(x)\bigr) J(s), \nonumber \end{eqnarray} where \begin{equation} \label{t71-3} J(s) = \int_0^x \bigl(3y+y^3 \bigr) e^{y^2/2} P\bigl(W^{(i)}+s >y\bigr) \,dy. 
\end{equation} Clearly, for $0 < t \leq x$ \begin{eqnarray*} Ee^{t \xi_j} & =& 1 + t^2 E\xi_j^2 / 2 + \sum_{k=3}^\infty\frac{ ( t \xi_j)^k }{ k!} \\ & \leq& 1 + t^2 E\xi_j^2 /2 + \frac{ t^3 }{6} E |\xi_j|^3 e^{t |\xi _j|} \\ & \leq& \exp\biggl( t^2 E\xi_j^2/2 + \frac{ x^3 }{6} E |\xi_j|^3 e^{ x |\xi_j|} \biggr) \end{eqnarray*} and hence \begin{equation}\label{t71-5} Ee^{t(W^{(i)}+s)} \leq\exp\biggl( t^2/2 + x |s| + \frac{ x^3 }{6} \gamma\biggr) \qquad\mbox{for } 0 \leq t \leq x. \end{equation} By (\ref{t71-5}), following the proof of Lemma~\ref{l22} yields \begin{equation} \label{t71-6} J(s) \leq C \bigl(1+x^3\bigr) e^{ x^3 \gamma+ x |s|}. \end{equation} Noting that (\ref{t71-5}) also implies that \begin{eqnarray*} P\bigl( W^{(i)} \geq x -s\bigr) & \leq& e^{-x^2} Ee^{x ( W^{(i)}+s)} \leq\exp\bigl( - x^2/2 + x|s| + x^3 \gamma\bigr) \\ & \leq& ( 1+ x) \bigl( 1- \Phi(x)\bigr) \exp\bigl( x|s| + x^3 \gamma \bigr), \end{eqnarray*} we have \[ Eg\bigl(W^{(i)}+s\bigr) \leq C \bigl(1+x^3\bigr) \bigl( 1- \Phi(x)\bigr) e^{ x^3 \gamma+ x |s|} \] and therefore by (\ref{r1-1}), \begin{eqnarray} \label{t71-7}\quad |R_1| & \leq& \sum_{i =1}^n E \int_{-\infty}^\infty\biggl|\int_{\xi_i}^t g\bigl( W^{(i)} +s\bigr) \,ds \biggr| K_i(t) \,dt \nonumber \\ & \leq& C \bigl( 1+ x^3\bigr) \bigl(1-\Phi(x)\bigr) e^{x^3 \gamma} \sum_{i=1}^n E\int_{-\infty}^\infty \bigl(|t| e^{x|t|} + |\xi_i| e^{x|\xi_i|}\bigr) K_i(t) \,dt \\ & \leq& C \bigl(1+x^3\bigr) \gamma\bigl(1- \Phi(x)\bigr) e^{x^3 \gamma}.\nonumber \end{eqnarray} This proves (\ref{r1}). 
As to $R_2$, we apply an exponential concentration inequality of \citet{Shao2010} [see Theorem 2.7 in \citet{Shao2010}]: for $a\geq0$ and $b \geq0$, \begin{eqnarray*} && P\bigl( x - a \leq W^{(i)} \le x +b\bigr) \\ &&\qquad\leq C e^{x \gamma+ x a - x^2} \bigl( (\gamma+ b+a) E\bigl|W^{(i)}\bigr| e^{x W^{(i)}} + \bigl( E e^{2x W^{(i)}}\bigr)^{1/2} \exp\bigl( - \gamma^{-2}/32\bigr) \bigr) \\ &&\qquad\leq C e^{x \gamma+ x a - x^2} \bigl( (\gamma+ b+a) \bigl(EW^{(i)} e^{x W^{(i)}} + 1\bigr)\\ &&\qquad\quad\hspace*{57.2pt}{}+ \bigl( E e^{2x W^{(i)}}\bigr)^{1/2} \exp \bigl( - \gamma^{-2}/32\bigr) \bigr) \\ &&\qquad\leq C e^{x \gamma+ x a - x^2} \bigl((\gamma+ b+a) (1+x) e^{x^2/2 + x^3 \gamma} + e^{ x^2 + x^3\gamma} \exp\bigl( - \gamma^{-2}/32\bigr) \bigr) \\ &&\qquad\leq C e^{x^3 \gamma+ x a - x^2/2} \bigl( (\gamma+ b+a) (1+x) + \exp\bigl( x^2/2 - \gamma^{-2}/32\bigr) \bigr) \\ &&\qquad\leq C \bigl(1-\Phi(x)\bigr) e^{x^3 \gamma+ xa} \bigl( (\gamma + b+a) \bigl(1+x^2\bigr) + \exp\bigl( x^2 - \gamma^{-2}/32 \bigr) \bigr). \end{eqnarray*} Here we use the fact that $E W^{(i)} e^{x W^{(i)}} \leq x e^{x^2/2 + x^3 \gamma}$, by following the proof of (\ref{t71-5}). Therefore \begin{eqnarray*} R_2 & \leq& \sum_{i=1}^n E \int_{-\infty}^\infty P\bigl( x - \xi_i \leq W^{(i)} \leq x -t | \xi_i\bigr) K_i(t) \,dt \\ & \leq& C \bigl(1-\Phi(x)\bigr) e^{x^3 \gamma} \sum _{i=1}^n \int_{-\infty}^\infty \bigl\{ \bigl(1+x^2\bigr) E\bigl(\gamma+ |t|+|\xi_i|\bigr) e^{x |\xi_i|}\\ &&\hspace*{159pt}{} + \exp\bigl( x^2 - \gamma^{-2}/32\bigr) \bigr\} K_i(t) \,dt \\ & \leq& C \bigl( 1- \Phi(x)\bigr) e^{x^3 \gamma} \bigl( \bigl(1+x^2 \bigr) \gamma+ \exp\bigl( x^2 - \gamma^{-2}/32\bigr) \bigr) \\ & \leq& C \gamma\bigl(1+x^2\bigr) \bigl( 1- \Phi(x)\bigr) e^{x^3 \gamma} \end{eqnarray*} by (\ref{t71-00}). Similarly, the above bound holds for $-R_2$. This proves (\ref{R2}). 
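A standard Gaussian tail lower bound of the form invoked at the start of this subsection, $1-\Phi(x)\ge e^{-x^2/2}/(\sqrt{2\pi}\,(1+x))$ for $x\ge0$, can be verified numerically (a minimal sketch; the helper names are ours):

```python
import math

def gauss_tail(x: float) -> float:
    """1 - Phi(x), computed via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def tail_lower_bound(x: float) -> float:
    """e^{-x^2/2} / (sqrt(2*pi) * (1 + x)), valid for x >= 0."""
    return math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * (1 + x))
```

Such a bound is what makes the claim trivial once $x\gamma$ is bounded away from $0$, since the Gaussian tail then dominates the error term up to a constant.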
\section*{Acknowledgments} We would like to thank the Associate Editor and the referees for their helpful comments and suggestions which have significantly improved the presentation of this paper. \printaddresses \end{document}
\begin{document} \title{Path Connectivity of Spheres in the Gromov--Hausdorff Class} \author{A.~Ivanov, R.~Tsvetnikov, and A.~Tuzhilin} \maketitle \begin{abstract} The paper is devoted to a geometrical investigation of the Gromov--Hausdorff distance on the classes of all metric spaces and of all bounded metric spaces. The main attention is paid to path connectivity questions. The path connected components of the Gromov--Hausdorff class of all metric spaces are described, and the path connectivity of spheres is proved in several particular cases. {\bf Keywords:} metric geometry, Gromov--Hausdorff distance, metric spaces, bounded metric spaces, compact metric spaces, path connectivity \end{abstract} \section*{Introduction} \markright{Introduction} Comparison of metric spaces is an important problem that is of interest both in pure research and in numerous applications. A natural approach is to define some distance function between metric spaces: The less similar the spaces are, the greater the distance between them should be. Currently, the Gromov--Hausdorff distance and some of its modifications are the most commonly used for these purposes. The history of this distance goes back to the works of F.~Hausdorff~\cite{Hausdorff}. In~1914 he defined a non-negative symmetric function on pairs of subsets of a metric space $X$ equal to the infimum of the reals $r$ such that the first subset is contained in the $r$-neighborhood of the second one, and vice versa. It turns out that this function satisfies the triangle inequality and, moreover, is a metric on the family of all closed bounded subsets of $X$. Later on, D.~Edwards~\cite{Edwards} and, independently, M.~Gromov~\cite{Gromov} generalized Hausdorff's construction to the case of all metric spaces using their isometric embeddings into all possible ambient metric spaces; see the formal definition below. The resulting function is now referred to as the \emph{Gromov--Hausdorff distance}.
Notice that this distance is also symmetric, non-negative, and satisfies the triangle inequality. It always equals zero for a pair of isometric metric spaces, therefore in this context such spaces are usually identified. But generally speaking, the Gromov--Hausdorff distance can be infinite and can also vanish on a pair of non-isometric spaces. Nevertheless, if one restricts oneself to the family $\mathcal{M}$ of isometry classes of all compact metric spaces, then the Gromov--Hausdorff distance satisfies all metric axioms. The set $\mathcal{M}$ endowed with the Gromov--Hausdorff distance is called the \emph{Gromov--Hausdorff space}. The geometry of this metric space turns out to be rather non-trivial and is actively studied. It is well-known that $\mathcal{M}$ is path connected, Polish (i.e., complete and separable), and geodesic~\cite{INT}; moreover, the space $\mathcal{M}$ is not proper and has no non-trivial symmetries~\cite{ITSymm}. The Gromov--Hausdorff distance is effectively used in computer graphics and computational geometry for comparison and transformation of shapes; see for example~\cite{MemSap}. It also has applications in other natural sciences; for example, C.~Sormani used this distance to prove the stability of the Friedmann cosmology model~\cite{Sor}. A detailed introduction to the geometry of the Gromov--Hausdorff distance can be found in~\cite[Ch.~7]{BurBurIva} or in~\cite{ITlectHGH}. The case of arbitrary metric spaces is also very interesting. For such spaces many authors use modifications of the Gromov--Hausdorff distance. For example, pointed spaces, i.e., spaces with marked points, are considered; see~\cite{Jen} and~\cite{Herron}. In the present paper we continue the study, started in~\cite{BIT}, of the classical Gromov--Hausdorff distance on the classes $\operatorname{\mathcal{G\!H}}$ and ${\cal B}$ consisting of representatives of isometry classes of all metric spaces and of all bounded metric spaces, respectively.
Here the term ``class'' is understood in the sense of von Neumann--Bernays--G\"odel set theory (NBG), which permits defining the distance correctly on the proper class $\operatorname{\mathcal{G\!H}}$ and constructing on it an analogue of metric topology based on the so-called filtration by sets with respect to cardinality; see below. In paper~\cite{BIT}, continuous curves in $\operatorname{\mathcal{G\!H}}$ are defined, and it is proved that the Gromov--Hausdorff distance is an interior generalized semi-metric both on $\operatorname{\mathcal{G\!H}}$ and on ${\cal B}$, i.e., the distance between points is equal to the infimum of the lengths of curves connecting these points. In the present paper the questions of path connectivity in $\operatorname{\mathcal{G\!H}}$ are studied; in particular, the path connected components of $\operatorname{\mathcal{G\!H}}$ are described. One of these components is the class ${\cal B}$ of bounded metric spaces. Notice that the geometry of these components is rather tricky, see~\cite{BogaTuz}. Also in the present paper the path connectivity of spheres is proved in several particular cases. Namely, the following results are obtained. \begin{itemize} \item It is shown that all spheres centered at the single-point metric space in $\operatorname{\mathcal{G\!H}}$, in ${\cal B}$, and in $\mathcal{M}$ are path connected. \item For each bounded metric space $X$, there exists a real $R_X$ such that all the spheres centered at $X$ with radius $r\ge R_X$ are path connected in ${\cal B}$. If $X$ is a compact space, then such spheres are path connected in $\mathcal{M}$. \item For each generic metric space $M$ (see the definition below), there exists a real $r_M>0$ such that all the spheres centered at $M$ with radius $r\le r_M$ are path connected in $\operatorname{\mathcal{G\!H}}$. If $M$ is a bounded (respectively, compact) metric space, then such spheres are path connected in ${\cal B}$ (in $\mathcal{M}$, respectively).
\end{itemize} \section{Preliminaries} \markright{\thesection.~Preliminaries} \noindent Let $X$ be an arbitrary set. By $\#X$ we denote the cardinality of $X$, and let $\mathcal{P}_0(X)$ be the set of all its non-empty subsets. A symmetric mapping $d\colon X\times X\to[0,\infty]$ vanishing on pairs of equal elements is called a \emph{distance function on $X$}. If $d$ satisfies the triangle inequality, then $d$ is referred to as a \emph{generalized semi-metric}. If, in addition, $d(x,y)>0$ for all $x\ne y$, then $d$ is called a \emph{generalized metric}. Finally, if $d(x,y)<\infty$ for all $x,y\in X$, then such a distance function is called a \emph{metric}, and sometimes a \emph{finite metric} to emphasize the difference with a generalized metric. A set $X$ endowed with a (generalized) (semi-)metric is called a \emph{(generalized) (semi-)metric space}. We need the following simple properties of metrics. \begin{prop}\label{prop:metric} The following statements are valid. \begin{enumerate} \item \label{prop:metric:2} A non-trivial non-negative linear combination of two metrics given on an arbitrary set is also a metric. \item \label{prop:metric:3} A positive linear combination of a metric and a semi-metric given on an arbitrary set is a metric. \end{enumerate} \end{prop} If $X$ is a set endowed with a distance function, then we usually denote the distance between its points $x$ and $y$ by $|xy|$. To emphasize that the distance between $x$ and $y$ is calculated in the space $X$, we write $|xy|_X$. Further, if $\gamma\colon[a,b]\to X$ is a continuous curve in $X$, then its \emph{length $|\gamma|$} is defined as the supremum of the ``lengths of inscribed polygonal lines'', i.e., of the values $\sum_i\big|\gamma(t_i)\gamma(t_{i+1})\big|$, where the supremum is taken over all possible partitions $a=t_1<\cdots<t_k=b$ of the segment $[a,b]$.
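The ``inscribed polygonal line'' sums from the definition of $|\gamma|$ are easy to compute for curves in Euclidean space (a sketch; the function name is ours, and uniform partitions are used only for illustration, while the definition takes the supremum over all partitions):

```python
import math

def inscribed_length(gamma, a: float, b: float, m: int) -> float:
    """Sum |gamma(t_i) gamma(t_{i+1})| over the uniform partition of [a, b] into m parts."""
    ts = [a + (b - a) * i / m for i in range(m + 1)]
    pts = [gamma(t) for t in ts]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))
```

For a straight segment every inscribed sum already equals the length, while for a quarter of the unit circle the sums increase to $\pi/2$ as the partition refines.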
A distance function on $X$ is called \emph{interior} if the distance between any two of its points $x$ and $y$ equals the infimum of the lengths of curves connecting these points. A curve $\gamma$ whose length differs from $|xy|$ by at most $\varepsilon$ is called \emph{$\varepsilon$-shortest}. If for any pair of points $x$ and $y$ of the space $X$ there exists a curve connecting them whose length equals $|xy|$, then the distance is called \emph{strictly interior}, and the space $X$ is referred to as a \emph{geodesic space}. Let $X$ be a metric space. For each $A,\,B\in\mathcal{P}_0(X)$ and each $x\in X$ we put \begin{flalign*} \indent&|xA|=|Ax|=\inf\bigl\{|xa|:a\in A\bigr\},\qquad |AB|=\inf\bigl\{|ab|:a\in A,\,b\in B\bigr\},&\\ \indent&d_H(A,B)=\max\bigl\{\sup_{a\in A}|aB|,\ \sup_{b\in B}|Ab|\bigr\}=\max\bigl\{\sup_{a\in A}\inf_{b\in B}|ab|,\ \sup_{b\in B}\inf_{a\in A}|ba|\bigr\}. \end{flalign*} The function $d_H\colon\mathcal{P}_0(X)\times\mathcal{P}_0(X)\to[0,\infty]$ is called the \emph{Hausdorff distance}. It is well-known, see for example~\cite{BurBurIva} or~\cite{ITlectHGH}, that $d_H$ is a metric on the family ${\cal H}(X)\subset\mathcal{P}_0(X)$ of all non-empty closed bounded subsets of $X$. Let $X$ and $Y$ be metric spaces. A triple $(X',Y',Z)$ consisting of a metric space $Z$ and two of its subsets $X'$ and $Y'$ isometric to $X$ and $Y$, respectively, is called a \emph{realization of the pair $(X,Y)$}. The \emph{Gromov--Hausdorff distance $d_{GH}(X,Y)$ between $X$ and $Y$} is defined as the infimum of reals $r$ such that there exists a realization $(X',Y',Z)$ of the pair $(X,Y)$ with $d_H(X',Y')\le r$.
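For finite subsets of a metric space, the Hausdorff distance formula above can be transcribed directly (a minimal sketch; the names are ours):

```python
def hausdorff(A, B, dist) -> float:
    """d_H(A, B) = max( sup_{a in A} inf_{b in B} |ab|,
                        sup_{b in B} inf_{a in A} |ab| )
    for finite non-empty A and B with metric dist."""
    d_ab = max(min(dist(a, b) for b in B) for a in A)
    d_ba = max(min(dist(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)
```

On the real line with $|xy|=|x-y|$, for example, $d_H(\{0\},\{0,2\})=2$, whereas the infimum distance $|AB|$ between the same sets is $0$; it is the symmetrized sup-inf, not the inf, that yields a metric on closed bounded subsets.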
Notice that the Gromov--Hausdorff distance can take both finite and infinite values, and always satisfies the triangle inequality, see~\cite{BurBurIva} or~\cite{ITlectHGH}. Besides, this distance always vanishes at each pair of isometric spaces, therefore, due to the triangle inequality, the Gromov--Hausdorff distance does not depend on the choice of representatives of isometry classes. There are examples of non-isometric metric spaces with zero Gromov--Hausdorff distance between them, see~\cite{Ghanaat}. Since each set can be endowed with a metric (for example, one can set all the distances between distinct points equal to $1$), the representatives of isometry classes form a proper class. This class endowed with the Gromov--Hausdorff distance is denoted by $\operatorname{\mathcal{G\!H}}$. Here we use the concept of a \emph{class} in the sense of von Neumann--Bernays--G\"odel set theory (NBG). Recall that in NBG all objects (analogues of ordinary sets) are called \emph{classes}. There are two types of classes: \emph{sets} (the classes that are elements of other classes) and \emph{proper classes} (all the remaining classes). The class of all sets is an example of a proper class. Many standard operations are well-defined for classes. Among them are intersection, complementation, direct product, mapping, etc. Such concepts as a \emph{distance function}, a \emph{(generalized) semi-metric}, and a \emph{(generalized) metric} are defined in the standard way for any class, a set or a proper class alike, because direct products and mappings are defined. But a direct transfer of some other structures, such as topology, leads to contradictions. For example, if we defined a topology on a proper class, then this class would have to be an element of the topology, which is impossible by the definition of proper classes.
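The metric mentioned above, with all distances between distinct points equal to $1$, indeed satisfies the metric axioms; a brute-force check of the triangle inequality on a small set can be sketched as follows (the function names are ours):

```python
from itertools import product

def discrete_metric(x, y) -> float:
    """All distances between distinct points are 1; this turns any set into a metric space."""
    return 0.0 if x == y else 1.0

def satisfies_triangle(points, dist) -> bool:
    """Check dist(x, z) <= dist(x, y) + dist(y, z) over all triples of points."""
    return all(
        dist(x, z) <= dist(x, y) + dist(y, z)
        for x, y, z in product(points, repeat=3)
    )
```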
In paper~\cite{BIT} the following construction is suggested. For each class ${\cal C}$ consider a ``filtration'' by subclasses ${\cal C}_n$, each of which consists of all the elements of ${\cal C}$ of cardinality at most $n$, where $n$ is a cardinal number. Recall that elements of a class are sets, therefore cardinality is defined for them. A class ${\cal C}$ such that all its subclasses ${\cal C}_n$ are sets is said to be \emph{filtered by sets}. Evidently, if a class ${\cal C}$ is a set, then it is filtered by sets. Thus, let ${\cal C}$ be a class filtered by sets. When we say that the class ${\cal C}$ satisfies some property, we mean the following: Each set ${\cal C}_n$ satisfies this property. Let us give several examples. \begin{itemize} \item Let a distance function on ${\cal C}$ be given. It induces an ``ordinary'' distance function on each set ${\cal C}_n$. Thus, for each ${\cal C}_n$ the standard concepts of metric geometry, such as open balls and spheres, are defined and are sets. The latter permits constructing the standard metric topology $\tau_n$ on ${\cal C}_n$ by taking the open balls as a base of the topology. It is clear that if $n\le m$, then ${\cal C}_n\subset{\cal C}_m$, and the topology $\tau_n$ on ${\cal C}_n$ is induced from $\tau_m$. \item More generally, a \emph{topology} on the class ${\cal C}$ is defined as a family of topologies $\tau_n$ on the sets ${\cal C}_n$ satisfying the following \emph{consistency condition}: If $n\le m$, then $\tau_n$ is the topology on ${\cal C}_n$ induced from $\tau_m$. A class endowed with a topology is referred to as a \emph{topological class}. \item The presence of a topology permits defining continuous mappings from a topological space $Z$ to a topological class ${\cal C}$.
Notice that according to the NBG axioms, for any mapping $f\colon Z\to{\cal C}$ from the set $Z$ to the class ${\cal C}$, the image $f(Z)$ is a set, all elements of $f(Z)$ are also sets, and hence the union $\cup f(Z)$ is a set of some cardinality $n$. Therefore, each element of $f(Z)$ is of cardinality at most $n$, and so $f(Z)\subset{\cal C}_n$. The mapping $f$ is called \emph{continuous} if $f$ is a continuous mapping from $Z$ to ${\cal C}_n$. The consistency condition implies that for any $m\ge n$, the mapping $f$ is a continuous mapping from $Z$ to ${\cal C}_m$, and also, for any $k\le n$ such that $f(Z)\subset{\cal C}_k$, the mapping $f$ considered as a mapping from $Z$ to ${\cal C}_k$ is continuous. \item The above arguments allow us to define \emph{continuous curves in a topological class ${\cal C}$}. \item Let a class ${\cal C}$ be endowed with a distance function and the corresponding topology. We say that the distance function is \emph{interior} if it satisfies the triangle inequality, and for any two elements from ${\cal C}$ such that the distance between them is finite, this distance equals the infimum of the lengths of the curves connecting these elements. \item Let a sequence $\{X_i\}$ of elements from a topological class ${\cal C}$ be given. Since the family $\{X_i\}_{i=1}^\infty$ is the image of the mapping $\mathbb{N}\to{\cal C}$, $i\mapsto X_i$, and $\mathbb{N}$ is a set, then, due to the above arguments, the entire family $\{X_i\}$ lies in some ${\cal C}_m$. Thus, the concept of \emph{convergence} of a sequence in a topological class is defined, namely, the sequence converges if it converges with respect to some topology $\tau_m$ such that $\{X_i\}\subset{\cal C}_m$, and hence with respect to any such topology. \end{itemize} Our main examples of topological classes are the classes $\operatorname{\mathcal{G\!H}}$ and ${\cal B}$ defined above. 
Recall that the class $\operatorname{\mathcal{G\!H}}$ consists of representatives of the isometry classes of all metric spaces, and the class ${\cal B}$ consists of representatives of the isometry classes of all bounded metric spaces. Notice that $\operatorname{\mathcal{G\!H}}_n$ and ${\cal B}_n$ are sets for any cardinal number $n$. The most studied subset of $\operatorname{\mathcal{G\!H}}$ is the set of all compact metric spaces. It is called the \emph{Gromov--Hausdorff space} and is often denoted by $\mathcal{M}$. It is well-known, see~\cite{BurBurIva, ITlectHGH, INT}, that the Gromov--Hausdorff distance is an interior metric on $\mathcal{M}$, and the metric space $\mathcal{M}$ is Polish and geodesic. In paper~\cite{BIT} it is shown that the Gromov--Hausdorff distance is interior both on the class $\operatorname{\mathcal{G\!H}}$ and on the class ${\cal B}$. As a rule, it is rather difficult to calculate the Gromov--Hausdorff distance between a pair of given metric spaces, and today this distance is known only for a few pairs of spaces, see for example~\cite{GrigIT_Sympl}. The most effective approach to these calculations is based on the following equivalent definition of the Gromov--Hausdorff distance, see details in~\cite{BurBurIva} or~\cite{ITlectHGH}. Recall that a \emph{relation} between sets $X$ and $Y$ is defined as an arbitrary subset of their direct product $X\times Y$. Thus, $\mathcal{P}_0(X\times Y)$ is the set of all non-empty relations between $X$ and $Y$. \begin{dfn} For any $X,Y\in\operatorname{\mathcal{G\!H}}$ and any $\sigma\in\mathcal{P}_0(X\times Y)$, the \emph{distortion ${\operatorname{dis}}\,\sigma$ of the relation $\sigma$} is defined as the following value: $$ {\operatorname{dis}}\,\sigma=\sup\Bigl\{\bigl||xx'|-|yy'|\bigr|:(x,y),\,(x',y')\in\sigma\Bigr\}. 
$$ \end{dfn} A relation $R\subset X\times Y$ between sets $X$ and $Y$ is called a \emph{correspondence} if the restrictions of the canonical projections $\pi_X\colon(x,y)\mapsto x$ and $\pi_Y\colon(x,y)\mapsto y$ onto $R$ are surjective. Notice that relations can be considered as partially defined multivalued mappings. From this point of view, the correspondences are multivalued surjective mappings. For a correspondence $R\subset X\times Y$ and $x\in X$, we put $R(x)=\big\{y\in Y:(x,y)\in R\big\}$ and call $R(x)$ the \emph{image of the element $x$ under the relation $R$}. By $\mathcal{R}(X,Y)$ we denote the set of all correspondences between $X$ and $Y$. The following result is well-known. \begin{ass}\label{ass:GH-metri-and-relations} For any $X,Y\in\operatorname{\mathcal{G\!H}}$, it holds $$ d_{GH}(X,Y)=\frac12\inf\bigl\{{\operatorname{dis}}\,R:R\in\mathcal{R}(X,Y)\bigr\}. $$ \end{ass} We need the following estimates, which can be easily proved by means of Assertion~\ref{ass:GH-metri-and-relations}. By $\Delta_1$ we denote the single-point metric space. \begin{ass}\label{ass:estim} For any $X,Y\in\operatorname{\mathcal{G\!H}}$, the following relations are valid\rom: \begin{itemize} \item $2d_{GH}(\Delta_1,X)={\operatorname{diam}}\,X$\rom; \item $2d_{GH}(X,Y)\le\max\bigl\{{\operatorname{diam}}\,X,\,{\operatorname{diam}}\,Y\bigr\}$\rom; \item if at least one of $X$ and $Y$ is bounded, then $\bigl|{\operatorname{diam}}\,X-{\operatorname{diam}}\,Y\bigr|\le2d_{GH}(X,Y)$. \end{itemize} \end{ass} For topological spaces $X$ and $Y$, their direct product $X\times Y$ is considered as the topological space endowed with the standard product topology. Therefore, it makes sense to speak about \emph{closed relations} and \emph{closed correspondences}. 
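To make the correspondence-based definition concrete, here is a small illustrative Python sketch (not from the paper; the names \texttt{distortion} and \texttt{gh\_distance} are ours) that computes $d_{GH}$ between tiny finite metric spaces by brute force over all correspondences, i.e., over all relations whose projections onto both factors are surjective. It confirms on examples the first estimate above, $2d_{GH}(\Delta_1,X)={\operatorname{diam}}\,X$.

```python
from itertools import combinations

def distortion(sigma, dX, dY):
    # dis(sigma) = sup | |x x'| - |y y'| | over pairs of elements of sigma
    return max(abs(dX[x][xp] - dY[y][yp])
               for (x, y) in sigma for (xp, yp) in sigma)

def gh_distance(dX, dY):
    # Brute force over all correspondences R between X and Y, i.e. relations
    # whose projections onto both factors are surjective; d_GH = (1/2) inf dis R.
    # Feasible only for tiny spaces given by their distance matrices.
    X, Y = range(len(dX)), range(len(dY))
    pairs = [(x, y) for x in X for y in Y]
    best = float("inf")
    for r in range(1, len(pairs) + 1):
        for R in combinations(pairs, r):
            if {x for x, _ in R} == set(X) and {y for _, y in R} == set(Y):
                best = min(best, distortion(R, dX, dY))
    return best / 2

# 2 d_GH(Delta_1, X) = diam X, here for a two-point space of diameter 3
assert gh_distance([[0]], [[0, 3], [3, 0]]) == 1.5

# two two-point spaces of diameters 1 and 2: d_GH = |1 - 2| / 2
assert gh_distance([[0, 1], [1, 0]], [[0, 2], [2, 0]]) == 0.5
```

The second check also matches the scaling formula $2d_{GH}(\lambda_1X,\lambda_2X)=|\lambda_1-\lambda_2|\,{\operatorname{diam}}\,X$ stated further below, applied to a two-point space of diameter $1$.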
A correspondence $R\in\mathcal{R}(X,Y)$ is called \emph{optimal} if $2d_{GH}(X,Y)={\operatorname{dis}}\,R$. The set of all optimal correspondences between $X$ and $Y$ is denoted by $\mathcal{R}_{\operatorname{opt}}(X,Y)$. \begin{ass}[\cite{IvaIliadisTuz, Memoli}]\label{ass:optimal-correspondence-exists} For any $X,\,Y\in\mathcal{M}$, there exist both a closed optimal correspondence and a realization $(X',Y',Z)$ of the pair $(X,Y)$ at which the Gromov--Hausdorff distance between $X$ and $Y$ is attained. \end{ass} \begin{ass}[\cite{IvaIliadisTuz, Memoli}] For any $X,\,Y\in\mathcal{M}$ and each closed optimal correspondence $R\in\mathcal{R}(X,Y)$, the family $R_t$, $t\in[0,1]$, of compact metric spaces, where $R_0=X$, $R_1=Y$, and for $t\in(0,1)$ the space $R_t$ is the set $R$ endowed with the metric $$ \bigl|(x,y),(x',y')\bigr|_t=(1-t)|xx'|+t\,|yy'|, $$ is a shortest curve in $\mathcal{M}$ connecting $X$ and $Y$, and the length of this curve equals $d_{GH}(X,Y)$. \end{ass} Let $X$ be a metric space and $\lambda>0$ a real number. By $\lambda\,X$ we denote the metric space obtained from $X$ by multiplying all the distances by $\lambda$, i.e., $|xy|_{\lambda X}=\lambda|xy|_X$ for any $x,\,y\in X$. If the space $X$ is bounded, then for $\lambda=0$ we put $\lambda X=\Delta_1$. \begin{ass} \label{ass:l1} For any $X,Y\in\operatorname{\mathcal{G\!H}}$ and any $\lambda>0$, the equality $d_{GH}(\lambda X,\lambda Y)=\lambda\,d_{GH}(X,Y)$ holds. If $X,Y\in{\cal B}$, then, in addition, the same equality is valid for $\lambda=0$. \end{ass} \begin{ass}\label{ass:l1l2} Let $X\in{\cal B}$ and $\lambda_1,\lambda_2\ge0$. Then $2d_{GH}(\lambda_1 X,\lambda_2 X)=|\lambda_1-\lambda_2|\,{\operatorname{diam}}\,X$. \end{ass} \begin{rk} If ${\operatorname{diam}}\,X=\infty$, then Assertion~\ref{ass:l1l2} is not true in general. For example, let $X=\mathbb{R}$. 
The space $\lambda\,\mathbb{R}$ is isometric to $\mathbb{R}$ for any $\lambda>0$; therefore $d_{GH}(\lambda_1\mathbb{R}, \lambda_2\mathbb{R})=0$ for any $\lambda_1,\lambda_2>0$. \end{rk} \section{Path Connectivity} \markright{\thesection.~Path Connectivity} In the present section, the path connected components of $\mathcal{M}$, ${\cal B}$, and $\operatorname{\mathcal{G\!H}}$ are described, and the path connectivity of spheres in these classes is investigated. \subsection{Path Connectivity Components} Let us start with several auxiliary results. \begin{lem}\label{lem:diam} If $\gamma\colon[a,b]\to{\cal B}$ is a continuous curve, then ${\operatorname{diam}}\,\gamma(t)$ is a continuous function. \end{lem} \begin{proof} According to Assertion~\ref{ass:estim}, we have ${\operatorname{diam}}\,\gamma(t)=2d_{GH}\bigl(\gamma(t),\Delta_1\bigr)$. It remains to use the fact that the distance function is continuous. \end{proof} \begin{lem}\label{lem:dist_inf} If $\gamma\colon[a,b]\to\operatorname{\mathcal{G\!H}}$ is a continuous curve, then the distance between its ends is finite. In particular, if the distance between $X$ and $Y$ is infinite, then such $X$ and $Y$ cannot be connected by a continuous curve. \end{lem} \begin{proof} Indeed, the image of the segment $[a,b]$ under the continuous mapping $\gamma$ is a compact subset. Therefore, it can be covered by a finite number of balls of a fixed radius. The distance between any two points of such a ball is finite. Since the Gromov--Hausdorff distance is interior, such points can be connected by a curve of finite length. The latter implies that the endpoints of the curve $\gamma$ can be connected by a ``polygonal line'' of finite length. 
\end{proof} \begin{lem}\label{lem:fin_inf} If $\gamma\colon[a,b]\to\operatorname{\mathcal{G\!H}}$ is a continuous curve, then either ${\operatorname{diam}}\,\gamma(t)=\infty$ for all $t$, or ${\operatorname{diam}}\,\gamma(t)<\infty$ for all $t$; in the latter case, $\gamma$ is a continuous curve in ${\cal B}$. \end{lem} \begin{proof} It is clear that if $X,\,Y\in\operatorname{\mathcal{G\!H}}$, ${\operatorname{diam}}\,X<\infty$, and ${\operatorname{diam}}\,Y=\infty$, then $d_{GH}(X,Y)=\infty$; but the distance between any two points of a continuous curve is finite according to Lemma~\ref{lem:dist_inf}, which implies the lemma's statement. \end{proof} \begin{cor}\label{cor:path_conn_GH} Two spaces $X,\,Y\in\operatorname{\mathcal{G\!H}}$ can be connected by a continuous curve in $\operatorname{\mathcal{G\!H}}$ if and only if $d_{GH}(X,Y)<\infty$. In particular, $\operatorname{\mathcal{G\!H}}$ is not path connected, and its path connected components are the classes of spaces with pairwise finite mutual distances. One of these components coincides with the class ${\cal B}$. \end{cor} \begin{proof} Indeed, if the distance between a pair of points in $\operatorname{\mathcal{G\!H}}$ is infinite, then, due to Lemma~\ref{lem:dist_inf}, there is no continuous curve connecting them; and if the distance is finite, then such a continuous curve exists because the distance is interior. Finally, the distance between any two spaces from ${\cal B}$ is finite according to Assertion~\ref{ass:estim}. \end{proof} \begin{lem}\label{lem:scalar} If $\lambda(t)$, $t\in[a,b]$, is a continuous non-negative function and $\gamma\colon[a,b]\to{\cal B}$ is a continuous curve, then $\lambda(t)\gamma(t)$ is a continuous curve in ${\cal B}$. \end{lem} \begin{proof} Consider the mapping $F\colon[0,\infty)\times{\cal B}\to{\cal B}$ defined as $F\colon(\lambda,X)\mapsto\lambda\,X$. We will show that $F$ is continuous. 
Notice that $$ d_{GH}(\lambda X,\mu Y)\le d_{GH}(\lambda X,\mu X)+d_{GH}(\mu X,\mu Y)=\frac12|\lambda-\mu|\,{\operatorname{diam}}\,X+\mu\,d_{GH}(X,Y). $$ Further, for any $\varepsilon>0$, there exists $\delta>0$ such that for any pair $(\mu,Y)$ with $d_{GH}(X,Y)<\delta$ and $|\lambda-\mu|<\delta$, the inequalities $\frac12|\lambda-\mu|\,{\operatorname{diam}}\,X<\varepsilon/2$ and $\mu\,d_{GH}(X,Y)<\varepsilon/2$ are valid, and hence $d_{GH}(\lambda X,\mu Y)<\varepsilon$; the latter means continuity of $F$. It remains to notice that $\lambda(t)\gamma(t)=F\bigl(\lambda(t),\gamma(t)\bigr)$. \end{proof} \subsection{Path Connectivity of Spheres Centered at the Single-Point Metric Space} For a metric space $X\in\operatorname{\mathcal{G\!H}}$ and a real $r\ge0$, we define the \emph{sphere $S_r(X)$} and the \emph{ball $B_r(X)$ in $\operatorname{\mathcal{G\!H}}$} in the standard way: $$ S_r(X)=\bigl\{Y:d_{GH}(X,Y)=r\bigr\}\quad\text{and}\quad B_r(X)=\bigl\{Y:d_{GH}(X,Y)\le r\bigr\}. $$ As usual, $X$ and $r$ are called the \emph{center} and the \emph{radius} of the sphere $S_r(X)$ and of the ball $B_r(X)$, respectively. Notice that for $r>0$, the spheres and the balls are proper subclasses of $\operatorname{\mathcal{G\!H}}$. Also notice that for $X\in {\cal B}$, the spheres and the balls in the class ${\cal B}$ (in the space $\mathcal{M}$ for $X\in\mathcal{M}$) can be defined as the intersections of $S_r(X)$ and $B_r(X)$ with ${\cal B}$ (with $\mathcal{M}$, respectively). \begin{lem}\label{lem:geo} Let $A$ and $B$ be arbitrary spaces in ${\cal B}$ with nonzero diameters. 
Then for any positive $\varepsilon$ such that $2\varepsilon<\min\bigl\{{\operatorname{diam}}\,A,\,{\operatorname{diam}}\,B\bigr\}$, any $\varepsilon$-shortest curve connecting $A$ and $B$ does not pass through $\Delta_1$. \end{lem} \begin{proof} Assume the contrary, and let $\gamma$ be an $\varepsilon$-shortest curve passing through $\Delta_1$. Denote by $\gamma_1$ and $\gamma_2$ the segments of $\gamma$ between $A$ and $\Delta_1$ and between $\Delta_1$ and $B$, respectively. Then $|\gamma|=|\gamma_1|+|\gamma_2|\ge d_{GH}(A,\Delta_1)+d_{GH}(\Delta_1,B)=\frac12({\operatorname{diam}}\,A+{\operatorname{diam}}\,B)$. On the other hand, $|\gamma|\le d_{GH}(A,B)+\varepsilon\le\frac12\max\bigl\{{\operatorname{diam}}\,A,\,{\operatorname{diam}}\,B\bigr\}+\varepsilon$, and hence ${\operatorname{diam}}\,A+{\operatorname{diam}}\,B\le\max\bigl\{{\operatorname{diam}}\,A,\,{\operatorname{diam}}\,B\bigr\}+2\varepsilon$. The latter inequality is not valid for $2\varepsilon<\min\bigl\{{\operatorname{diam}}\,A,\,{\operatorname{diam}}\,B\bigr\}$, a contradiction. \end{proof} \begin{thm}\label{thm:1} Each sphere centered at the single-point metric space $\Delta_1$ is path connected. \end{thm} \begin{proof} Let $S=S_r(\Delta_1)$ be the sphere of radius $r>0$ centered at the single-point space $\Delta_1$. Then $S\subset{\cal B}$. Consider arbitrary $A,\,B \in S$ and fix an $\varepsilon$, $0<2\varepsilon<\min\bigl\{{\operatorname{diam}}\,A,\,{\operatorname{diam}}\,B\bigr\}$. Consider an $\varepsilon$-shortest curve $\gamma(t)$ connecting $A$ and $B$. 
According to Lemmas~\ref{lem:fin_inf} and~\ref{lem:geo}, the diameter of each space $\gamma(t)$ is finite and does not equal zero. Define a mapping $\delta\colon[a,b]\to{\cal B}$ as follows: $\delta\colon t\mapsto\bigl(2r/{\operatorname{diam}}\,\gamma(t)\bigr)\gamma(t)$. Due to Lemma~\ref{lem:diam}, the function $2r/{\operatorname{diam}}\,\gamma(t)$ is continuous, and so, due to Lemma~\ref{lem:scalar}, we conclude that $\delta(t)$ is a continuous curve. Moreover, ${\operatorname{diam}}\,\delta(t)=2r$, therefore the curve $\delta$ lies in the sphere $S$. The theorem is proved. \end{proof} If $A,\,B\in S_r(\Delta_1)$ are compact spaces, then one can replace the $\varepsilon$-shortest curve in the proof of Theorem~\ref{thm:1} by a shortest curve in $\mathcal{M}$. This implies the following result. \begin{cor} Any sphere $S_r(\Delta_1)$ in $\mathcal{M}$ centered at the single-point compact space $\Delta_1$ is path connected. \end{cor} \subsection{Path Connectivity of Large Spheres Centered at an Arbitrary Bounded Metric Space} We start with several technical results. The next lemma is a special kind of Implicit Function Theorem. \begin{lem}\label{lem:neyavnaya} Let $F\colon[T_0,T_1]\times[S_0,S_1]\to\mathbb{R}$ be a continuous function that satisfies the following conditions\rom: \begin{enumerate} \item For each $s\in[S_0,S_1]$, the function $f_s(t)=F(t,s)$ is strictly monotonic\rom; \item There exists an $r$ such that for all $s\in[S_0,S_1]$ the inequality $F(T_0,s)\le r\le F(T_1,s)$ holds. \end{enumerate} Then the set $\big\{(t,s):F(t,s)=r\big\}$ is the image of an embedded continuous curve of the form $\gamma(s) = \big(t(s),s\big)$. \end{lem} \begin{proof} Since the function $f_s(t)$ is strictly monotonic and continuous, and $f_s(T_0)\le r\le f_s(T_1)$, then for each $s$ there exists a unique $t=t(s)$ such that $f_s\big(t(s)\big)=r$. Put $\gamma(s)=\big(t(s),s\big)$. 
Notice that $F\big(\gamma(s)\big)=r$ by the definition of $\gamma$. It remains to show that the function $t(s)$ is continuous. Assume to the contrary that $t(s)$ is discontinuous at some point $s_0$, and let $t_0=t(s_0)$. Then there exist an $\varepsilon>0$ and a sequence $s_i\to s_0$ as $i\to\infty$ such that $\bigl|t_0-t(s_i)\bigr|\ge\varepsilon$. Due to compactness arguments, the sequence $t_i=t(s_i)$ contains a convergent subsequence. Assume without loss of generality that the entire sequence $t_i$ converges to some $t'$. Since $|t_0-t_i|\ge\varepsilon$ for all $i$, then $t'\ne t_0$. Since $F$ is continuous and $F(t_i,s_i)=r$, then $F(t',s_0)=r$ too. But $F(t_0,s_0)$ is also equal to $r$, which is impossible because $f_{s_0}$ is strictly monotonic, a contradiction. \end{proof} \begin{lem}\label{lem:mon} For a pair $C,G\in{\cal B}$ of bounded metric spaces with $2d_{GH}(G,C)>{\operatorname{diam}}\,G>0$, the function $h(\lambda):=d_{GH}(G,\lambda C)$ is strictly monotonically increasing for $\lambda\ge1$. \end{lem} \begin{proof} Due to the assumptions, for any correspondence $R\in\mathcal{R}(G,C)$ we have ${\operatorname{dis}}\,R>{\operatorname{diam}}\,G$. Consider a sequence $\bigl((g_i,c_i),(g'_i,c'_i)\bigr)\in R\times R$ such that $\bigl||c_ic'_i|-|g_ig'_i|\bigr|\to{\operatorname{dis}}\,R$. Since ${\operatorname{dis}}\,R>{\operatorname{diam}}\,G$, the inequality $|c_ic'_i|>|g_ig'_i|$ is valid for sufficiently large $i$. Assume without loss of generality that it holds for all $i$. By $R_\lambda$ we denote the correspondence from $\mathcal{R}(G,\lambda C)$ that coincides with $R$ as a subset of $G\times C$. 
Then for $\lambda\ge1$ we have $$ {\operatorname{dis}}\,R_\lambda\ge\lambda|c_ic'_i|-|g_ig'_i|=(\lambda-1)|c_ic'_i|+|c_ic'_i|-|g_ig'_i|, $$ and therefore, passing to the limit as $i\to\infty$, we conclude that $$ {\operatorname{dis}}\,R_\lambda\ge(\lambda-1)\liminf_{i\to\infty}|c_ic'_i|+{\operatorname{dis}}\,R. $$ Since ${\operatorname{dis}}\,R>{\operatorname{diam}}\,G>0$, then $|c_ic'_i|>|g_ig'_i|+\frac12{\operatorname{diam}}\,G$ for large $i$; therefore $\liminf_{i\to\infty}|c_ic'_i|\ge\frac12\,{\operatorname{diam}}\,G$, and hence ${\operatorname{dis}}\,R_\lambda\ge{\operatorname{dis}}\,R+\frac{\lambda-1}2\,{\operatorname{diam}}\,G$. Since the latter inequality is valid for all correspondences $R\in\mathcal{R}(G,C)$, we conclude that $d_{GH}(G,\lambda C)\ge d_{GH}(G,C)+\frac{\lambda-1}4\,{\operatorname{diam}}\,G$, and so $d_{GH}(G,\lambda C)>d_{GH}(G,C)$ for $\lambda>1$. In particular, each $\lambda C$, $\lambda\ge1$, taken instead of $C$, satisfies the conditions of the lemma. So, choosing arbitrary $\lambda_1,\,\lambda_2$, $1\le\lambda_1<\lambda_2$, and substituting $\lambda_1 C$ and $\lambda_2/\lambda_1$ into the inequality instead of $C$ and $\lambda$, respectively, we get $d_{GH}(G,\lambda_2C)>d_{GH}(G,\lambda_1C)$. \end{proof} Let us now prove the path connectivity of spheres of sufficiently large radius centered at an arbitrary $G\in{\cal B}$. \begin{thm}\label{thm:bounded} For any bounded metric space $G\in{\cal B}$ and any $r>{\operatorname{diam}}\,G$, the sphere $S_r(G)$ is path connected. \end{thm} \begin{proof} Since ${\operatorname{diam}}\,G<\infty$, then $S_r(G)\subset{\cal B}$, see Assertion~\ref{ass:estim}. 
Since the case ${\operatorname{diam}}\,G=0$ has been considered in Theorem~\ref{thm:1}, assume that ${\operatorname{diam}}\,G>0$. \begin{lem}\label{lem:out} The ball $B_r(G)$ lies inside the ball $B_{3r/2}(\Delta_1)$. \end{lem} \begin{proof} Indeed, if $X\in B_r(G)$, then according to the triangle inequality, we have $$ d_{GH}(\Delta_1,X)\le d_{GH}(\Delta_1,G)+d_{GH}(G,X)\le\frac12\,{\operatorname{diam}}\,G+r<\frac{3r}2, $$ and hence $X\in B_{3r/2}(\Delta_1)$. \end{proof} \begin{lem}\label{lem:in} The sphere $S_r(G)$ lies outside the ball $B_{r/2}(\Delta_1)$. \end{lem} \begin{proof} Indeed, if $X\in S_r(G)$, then, due to the triangle inequality, $$ d_{GH}(\Delta_1,X)\ge d_{GH}(X,G)-d_{GH}(G,\Delta_1)>r-r/2=r/2, $$ and so $d_{GH}(\Delta_1,X)>r/2$. \end{proof} Consider a pair of arbitrary points $A$ and $B$ lying in the sphere $S_r(G)$. Due to Lemma~\ref{lem:in}, ${\operatorname{diam}}\,A>r$ and ${\operatorname{diam}}\,B>r$; therefore, there exist $\lambda_A>0$ and $\lambda_B>0$ such that ${\operatorname{diam}}\,\lambda_A A=r$ and ${\operatorname{diam}}\,\lambda_B B=r$, i.e., the points $A'=\lambda_A A$ and $B'=\lambda_B B$ lie in the sphere $S_{r/2}(\Delta_1)$. Due to Theorem~\ref{thm:1}, there exists a continuous curve $\gamma\colon[a,b]\to{\cal B}$ lying in $S_{r/2}(\Delta_1)$ such that $A'=\gamma(a)$ and $B'=\gamma(b)$. But then, according to Lemma~\ref{lem:scalar}, the points $A''=3A'$ and $B''=3B'$ can be connected by a continuous curve $3\gamma(s)$ lying in $S_{3r/2}(\Delta_1)$. 
Since $$ 2d_{GH}\bigl(G,\gamma(s)\bigr)\ge{\operatorname{diam}}\,\gamma(s)-{\operatorname{diam}}\,G>r>{\operatorname{diam}}\,G, $$ Lemma~\ref{lem:mon} can be applied to the points $\gamma(s)$, and according to this lemma, the function $f_s(t)=d_{GH}\bigl(G,t\,\gamma(s)\bigr)$, $t\in[1,3]$, is strictly monotonically increasing. Due to Lemmas~\ref{lem:out} and~\ref{lem:in}, for each $s$ there exists $t$ such that $f_s(t)=r$. Let us put $F(t,s)=d_{GH}\bigl(G,t\,\gamma(s)\bigr)$. Applying Lemma~\ref{lem:neyavnaya}, we obtain a continuous curve lying in $S_r(G)$ and connecting $A$ and $B$. The theorem is proved. \end{proof} If $G$ is a compact space, then one can consider the sphere $S_r(G)$ in the Gromov--Hausdorff space $\mathcal{M}$. It is easy to see that in this case all the curves constructed in the proof of Theorem~\ref{thm:bounded} lie in $\mathcal{M}$. \begin{cor} For each compact metric space $G\in\mathcal{M}$ and each $r>{\operatorname{diam}}\,G$, the sphere $S_r(G)$ in $\mathcal{M}$ is path connected. \end{cor} \subsection{Generic Metric Spaces} Let $X$ be a metric space, $\#X\ge3$. By $S(X)$ we denote the set of all bijective mappings of the set $X$ onto itself, and let $\operatorname{id}\in S(X)$ be the identity bijection. Put \begin{align*} s(X)&=\inf\bigl\{|xx'|:x\ne x',\ x,x'\in X\bigr\},\\ t(X)&=\inf\bigl\{|xx'|+|x'x''|-|xx''|:x\ne x'\ne x''\ne x\bigr\},\\ e(X)&=\inf\bigl\{{\operatorname{dis}}\,f:f\in S(X),\ f\ne\operatorname{id}\bigr\}. \end{align*} We call a metric space $M\in\operatorname{\mathcal{G\!H}}$, $\#M\ge3$, \emph{generic} if all three of its characteristics $s(M)$, $t(M)$, and $e(M)$ are positive. 
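For a finite metric space given by its distance matrix, the three characteristics above can be computed directly. The following Python sketch is purely illustrative (the function names are ours); it evaluates $s(X)$, $t(X)$, and $e(X)$ by brute force and checks genericity of a three-point example.

```python
from itertools import permutations

def s_char(d):
    # s(X): smallest distance between distinct points
    n = len(d)
    return min(d[i][j] for i in range(n) for j in range(n) if i != j)

def t_char(d):
    # t(X): smallest triangle-inequality defect over pairwise distinct triples
    n = len(d)
    return min(d[i][j] + d[j][k] - d[i][k]
               for i in range(n) for j in range(n) for k in range(n)
               if i != j and j != k and i != k)

def e_char(d):
    # e(X): smallest distortion of a non-identity bijection of X onto itself
    n = len(d)
    return min(max(abs(d[p[i]][p[j]] - d[i][j]) for i in range(n) for j in range(n))
               for p in permutations(range(n)) if p != tuple(range(n)))

# a three-point space with side lengths 2, 3, 4
d = [[0, 2, 3],
     [2, 0, 4],
     [3, 4, 0]]
assert (s_char(d), t_char(d), e_char(d)) == (2, 1, 1)  # all positive: the space is generic
```

Note that a finite space with three pairwise distinct, "incommensurable" side lengths, as above, is generic, while any space with a non-trivial self-isometry has $e(X)=0$ and is not.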
The following generalization of results from~\cite{ITFiniteLoc} and~\cite{Filin} was obtained by A.~Filin in his diploma thesis. \begin{ass}\label{ass:diz} Let $M,X\in\operatorname{\mathcal{G\!H}}$ be metric spaces such that $\#M\ge3$ and $r:=d_{GH}(M,X)<s(M)/2$. Then for any real $d$ such that $r<d\le s(M)/2$, the following statements are valid. \begin{enumerate} \item\label{ass:diz:1} There exists a correspondence $R\in\mathcal{R}(M,X)$ such that ${\operatorname{dis}}\,R<2d\le s(M)$. \item\label{ass:diz:2} For each such $R$, the family $D_R=\bigl\{X_i:=R(i)\bigr\}_{i\in M}$ is a partition of the space $X$. \item\label{ass:diz:3} For any $i,j\in M$, possibly coinciding, and for any $x_i\in X_i$ and $x_j\in X_j$, the estimate $\bigl||x_ix_j|-|ij|\bigr|<2d\le s(M)$ holds. \item\label{ass:diz:4} For all $i\in M$, the inequality ${\operatorname{diam}}\,X_i<2d\le s(M)$ holds. \item\label{ass:diz:5} If $d\le s(M)/4$, then the partition $D_R$ is uniquely defined, i.e., if $R'\in\mathcal{R}(M,X)$ is such that ${\operatorname{dis}}\,R'<2d$, then $D_{R'}=D_R$. \item\label{ass:diz:6} If $d\le\min\bigl\{s(M)/4,\,e(M)/4\bigr\}$, then the correspondence $R\in\mathcal{R}(M,X)$ such that ${\operatorname{dis}}\,R<2d$ is uniquely defined, and hence $R$ is an optimal correspondence. In this case, ${\operatorname{dis}}\,R=2d_{GH}(M,X)$, ${\operatorname{diam}}\,X_i\le{\operatorname{dis}}\,R$, and $\bigl||x_ix'_j|-|ij|\bigr|\le{\operatorname{dis}}\,R$ for all $i,j\in M$, $i\ne j$. \end{enumerate} \end{ass} \begin{proof} (\ref{ass:diz:1}) Since $$ d_{GH}(M,X)=\frac12\inf\bigl\{{\operatorname{dis}}\,R:R\in\mathcal{R}(M,X)\bigr\}<d, $$ there exists a correspondence $R\in\mathcal{R}(M,X)$ such that ${\operatorname{dis}}\,R<2d\le s(M)$. 
(\ref{ass:diz:2}) If for some $x\in X$ one has $\#R^{-1}(x)>1$, then ${\operatorname{dis}}\,R\ge{\operatorname{diam}}\,R^{-1}(x)\ge s(M)$, a contradiction. Therefore, the family $\big\{X_i:=R(i)\big\}_{i\in M}$ is a partition of the space $X$. (\ref{ass:diz:3}) Since ${\operatorname{dis}}\,R<2d$, for any $i,j\in M$ and any $x_i\in R(i)$, $x_j\in R(j)$ we have $$ \bigl||x_ix_j|-|ij|\bigr|\le{\operatorname{dis}}\,R<2d\le s(M). $$ (\ref{ass:diz:4}) Since ${\operatorname{dis}}\,R<2d$, for any $i\in M$ we have ${\operatorname{diam}}\,X_i\le{\operatorname{dis}}\,R<2d\le s(M)$. (\ref{ass:diz:5}) If $d\le s(M)/4$ and $R'\in\mathcal{R}(M,X)$ is another correspondence such that ${\operatorname{dis}}\,R'<2d$, then $\big\{X'_i:=R'(i)\big\}_{i\in M}$ is also a partition of $X$ satisfying the properties already proved above. If $D_R\ne D_{R'}$, then either there exists an $X'_i$ intersecting simultaneously some distinct $X_j$ and $X_k$, or there exists an $X_i$ intersecting simultaneously some distinct $X'_j$ and $X'_k$. Indeed, if there is no such $X'_i$, then each $X'_i$ is contained in some $X_j$, and hence $\{X'_p\}$ is a sub-partition of $\{X_q\}$. Since these partitions are different, some element $X_i$ contains some distinct $X'_j$ and $X'_k$. Thus, assume without loss of generality that $X'_i$ intersects some distinct $X_j$ and $X_k$ simultaneously. Choose arbitrary $x_j\in X_j\cap X'_i$ and $x_k\in X_k\cap X'_i$. Then $$ 2d>{\operatorname{diam}}\,X'_i\ge|x_jx_k|>|jk|-2d\ge s(M)-2d, $$ and hence $d>s(M)/4$, a contradiction. (\ref{ass:diz:6}) Let $R'\in\mathcal{R}(M,X)$ satisfy ${\operatorname{dis}}\,R'<2d$ as in the previous item. Then $D_R=D_{R'}$, as already proved above. It remains to verify that $X_i=X'_i$ for each $i\in M$. 
Assume to the contrary that there exists a non-trivial bijection $f\colon M\to M$ such that $X_i=X'_{f(i)}$. Due to the assumptions, ${\operatorname{dis}}\,f\ge e(M)\ge4d$. The latter means that there exist $i,j\in M$ such that $$ \Bigl||ij|-\bigl|f(i)f(j)\bigr|\Bigr|\ge4d. $$ Let us put $p=f(i)$, $q=f(j)$, and choose arbitrary $x_i\in X_i=X'_p$ and $x_j\in X_j=X'_q$. Then $$ 4d\le\bigl||pq|-|ij|\bigr|=\bigl||pq|-|x_ix_j|+|x_ix_j|-|ij|\bigr|\le\bigl||pq|-|x_ix_j|\bigr|+\bigl||x_ix_j|-|ij|\bigr|<2d+2d, $$ a contradiction. Thus, a correspondence $R$ whose distortion is close enough to $2d_{GH}(M,X)$ is uniquely defined, and so $R$ is an optimal correspondence. \end{proof} \begin{lem}\label{lem:M_sigma} Let $M$ be a metric space with $s(M)>0$. For each pair $\{i,j\}\subset M$, fix a real number $a_{ij}=a_{ji}$ in such a way that $a_{ii}=0$ and $|a_{ij}|<s(M)$. Change the distance function on $M$ by adding the value $a_{ij}$ to each distance $|ij|$, and denote by $\rho$ the resulting function. Put $a=\sup|a_{ij}|$. If $a\le t(M)/3$, then the following statements are valid. \begin{enumerate} \item\label{lem:M_sigma:1} The distance function $\rho$ is a metric on the set $M$. \item\label{lem:M_sigma:2} For the resulting metric space $M_\rho=(M,\rho)$, the inequality $d_{GH}(M,M_\rho)\le a/2$ holds. \item\label{lem:M_sigma:3} If in addition $a/2<\min\bigl\{s(M)/4,\,e(M)/4\bigr\}$, then $d_{GH}(M,M_\rho)=a/2$. 
\item\label{lem:M_sigma:4} Under the assumptions of Item~$(\ref{lem:M_sigma:3})$, if the equality $a_{ij}=\pm a$ holds for all $i\ne j$, then $M_\rho$ lies in the sphere $S_{a/2}(M)$\rom; moreover, if $M_{\rho'}$ is constructed in a similar way from a set $a'_{ij}=\pm a$, $i\ne j$, and $\rho_t=(1-t)\rho+t\,\rho'$, $t\in[0,1]$, then the set $M$ with the distance function $\rho_t$ is a metric space, the mapping $t\mapsto M^t:=(M,\rho_t)$ is a continuous curve in $\operatorname{\mathcal{G\!H}}$, and, if for some pair $\{i,j\}$, $i\ne j$, the equality $a_{ij}=a'_{ij}$ holds, then all the spaces $M^t$ lie in the sphere $S_{a/2}(M)$. \end{enumerate} \end{lem} \begin{proof} (\ref{lem:M_sigma:1}) It is sufficient to verify that the distance function $\rho$ satisfies the triangle inequalities. Choose arbitrary three points $i,j,k\in M$; then $$ \rho(i,j)+\rho(j,k)-\rho(i,k)=|ij|+a_{ij}+|jk|+a_{jk}-|ik|-a_{ik}\ge t(M)-3a\ge0. $$ (\ref{lem:M_sigma:2}) Let $R\in\mathcal{R}(M,M_\rho)$ be the identical mapping; then $$ 2d_{GH}(M,M_\rho)\le{\operatorname{dis}}\,R=\sup\Bigl\{\bigl||ij|-|ij|-a_{ij}\bigr|:i,j\in M\Bigr\}=\sup\bigl\{|a_{ij}|:i,j\in M\bigr\}=a. $$ (\ref{lem:M_sigma:3}) The assumptions imply that Assertion~\ref{ass:diz}, Item~(\ref{ass:diz:6}), can be applied, and hence the identical mapping $R$ is an optimal correspondence. The latter implies the required equality. (\ref{lem:M_sigma:4}) Proposition~\ref{prop:metric} implies that $M^t$ is a metric space. Further, let $R^t\in\mathcal{R}(M,M^t)$ be the identical mapping; then ${\operatorname{dis}}\,R^t\le a$, and so $R^t$ is an optimal correspondence. Let $R'\in\mathcal{R}(M^t,M^s)$ be the identical mapping too. 
Then $$ 2d_{GH}(M^t,M^s)\le{\operatorname{dis}}\,R'=|t-s|\,\sup_{i,j\in M}|a_{ij}-a'_{ij}|\le 2a|t-s|, $$ therefore $M^t$ is a continuous curve in $\operatorname{\mathcal{G\!H}}$. It remains to notice that if $a_{ij}=a'_{ij}$ for some $i\ne j$, then $$ \bigl|\rho_t(i,j)-|ij|\bigr|=\bigl|(1-t)a_{ij}+t\,a'_{ij}\bigr|=a, $$ so ${\operatorname{dis}}\,R^t=a$ for all $t$, and hence $d_{GH}(M,M^t)=a/2$. \end{proof} \subsection{Path Connectivity of Small Spheres Centered at Generic Spaces} We need the following notation. Let $A$ and $B$ be non-empty subsets of a metric space $X$. We put $$ |AB|=\inf\bigl\{|ab|:a\in A,\,b\in B\bigr\},\qquad |AB|'=\sup\bigl\{|ab|:a\in A,\,b\in B\bigr\}. $$ If we need to emphasize that these values are calculated with respect to a metric $\rho$, then we write $|AB|_\rho$ and $|AB|'_\rho$ instead of $|AB|$ and $|AB|'$, respectively. Similarly, for the same reason we sometimes write ${\operatorname{diam}}_\rho A$ instead of ${\operatorname{diam}}\,A$. \begin{thm}\label{thm:small} Let $M$ be a generic metric space and $$ 0<r<\min\bigl\{s(M)/4,\,e(M)/4,\,t(M)/6\bigr\}. $$ Then the sphere $S_r(M)$ in $\operatorname{\mathcal{G\!H}}$ is path connected. \end{thm} \begin{proof} Choose an arbitrary $X\in S_r(M)$. Since the conditions of Assertion~\ref{ass:diz}, Item~(\ref{ass:diz:6}), are valid, there exists a unique optimal correspondence $R\in\mathcal{R}(M,X)$. Moreover, ${\operatorname{dis}}\,R=2r$ and, besides, the correspondence $R$ defines the partition $D_R=\big\{X_i:=R(i)\big\}_{i\in M}$ of the space $X$ into subsets $X_i$ whose diameters are at most $2r$ and such that for any $x_i\in X_i$ and $x_j\in X_j$, the inequality $\bigl||x_ix_j|-|ij|\bigr|\le2r$ holds. 
It is easy to see that the distortion of the correspondence $R$ can be calculated by the following formula: $$ \operatorname{dis} R=\max\Bigl[\sup_i\operatorname{diam} X_i,\ \sup_{i\ne j}\bigl(|X_iX_j|'-|ij|\bigr),\ \sup_{i\ne j}\bigl(|ij|-|X_iX_j|\bigr)\Bigr]. $$ We consider three possibilities depending on which of the three terms the maximum is attained at. {\bf (1)} Let $\operatorname{dis} R=\sup_i\operatorname{diam} X_i$. Define a distance function $\rho$ on the set $X$ as follows: the distances between the points in each $X_i$ do not change, and for $x_i\in X_i$ and $x_j\in X_j$, $i\ne j$, put $\rho(x_i,x_j)=|ij|+2r$. Let us show that $\rho$ is a metric. To do that, it suffices to verify the triangle inequalities for the points $x_i,x'_i\in X_i$ and $x_j\in X_j$, $j\ne i$. Since the triangle $x_ix'_ix_j$ is isosceles, it suffices to show that $\rho(x_i,x'_i)\le2\rho(x_i,x_j)$. We have $$ 2\rho(x_i,x_j)-\rho(x_i,x'_i)=2|ij|+4r-|x_ix'_i|=\bigl(2|ij|-|x_ix'_i|\bigr)+4r>0. $$ Notice that $|X_iX_j|_\rho=|X_iX_j|'_\rho=|ij|+2r$ for any $i\ne j$. Define the functions $\rho_t$, $t\in[0,1]$, that coincide with the initial metric on all $X_i$ and equal $\rho_t(x_i,x_j)=(1-t)|x_ix_j|+t\bigl(|ij|+2r\bigr)$ for all $x_i\in X_i$ and $x_j\in X_j$, $i\ne j$. All the functions $\rho_t$ are metrics according to Proposition~\ref{prop:metric}. By $X^t$ we denote the metric space $(X,\rho_t)$. If $R'\in\mathcal{R}(X^t,X^s)$ is the identical mapping, then $$ \operatorname{dis} R'=|t-s|\sup\Bigl\{\bigl||x_ix_j|-|ij|-2r\bigr|:i,j\in M,\,i\ne j\Bigr\}\le 4r\,|t-s|, $$ and so $t\mapsto X^t$ is a continuous curve in $\operatorname{\mathcal{G\!H}}$. 
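As an informal aside (ours, not part of the original argument), the Case~(1) construction can be sanity-checked on a toy example: the Python sketch below builds a small base space, a clustered space with intra-cluster diameters at most $2r$, and verifies that every interpolated function $\rho_t$ satisfies the triangle inequality. All names and the concrete distances here are illustrative assumptions.

```python
import itertools

# Toy model of Case (1): X is split into clusters X_i indexed by points
# i of a base space M; distances inside clusters are kept, distances
# between clusters are interpolated towards |ij| + 2r.

# base space M on three points (distances chosen so triangle slack is large)
M = {('a', 'b'): 10.0, ('a', 'c'): 11.0, ('b', 'c'): 12.0}

def dM(i, j):
    return 0.0 if i == j else M.get((i, j), M.get((j, i)))

r = 1.0                    # "sphere radius"; cluster diameters must be <= 2r
cluster = {('a', 0): 'a', ('a', 1): 'a',
           ('b', 0): 'b', ('b', 1): 'b',
           ('c', 0): 'c'}
intra = 1.5                # intra-cluster distance, <= 2r

def dX(x, y):
    # the "initial" metric on X, lying within the 2r-band around dM
    if x == y:
        return 0.0
    if cluster[x] == cluster[y]:
        return intra
    return dM(cluster[x], cluster[y]) + 1.0

def rho_t(t, x, y):
    # keep intra-cluster distances, interpolate inter-cluster ones
    if x == y or cluster[x] == cluster[y]:
        return dX(x, y)
    return (1 - t) * dX(x, y) + t * (dM(cluster[x], cluster[y]) + 2 * r)

pts = list(cluster)
for t in (0.0, 0.25, 0.5, 1.0):
    for x, y, z in itertools.permutations(pts, 3):
        # triangle inequality for rho_t, up to floating-point tolerance
        assert rho_t(t, x, y) + rho_t(t, y, z) >= rho_t(t, x, z) - 1e-12
```

At $t=1$ all inter-cluster distances collapse to $|ij|+2r$, exactly as in the proof, while intra-cluster distances are untouched.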
Further, let us show that all the spaces $X^t$ lie in the sphere $S_r(M)$. Since the metric $\rho_t$ coincides with the initial one on each $X_i$, we have $\operatorname{diam}_{\rho_t}X_i=\operatorname{diam} X_i$. Further, for $i\ne j$, the function $\rho_t(x_i,x_j)=|x_ix_j|+t\bigl(|ij|+2r-|x_ix_j|\bigr)$ increases monotonically as $t$ increases, and $$ |x_ix_j|\le\rho_t(x_i,x_j)\le|ij|+2r. $$ Therefore, $$ -2r\le|X_iX_j|'-|ij|\le|X_iX_j|'_{\rho_t}-|ij|\le2r\quad \text{and}\quad 2r\ge|ij|-|X_iX_j|\ge|ij|-|X_iX_j|_{\rho_t}\ge-2r. $$ So, we conclude that $$ \sup_{i\ne j}\bigl(|X_iX_j|'_{\rho_t}-|ij|\bigr)\le2r\quad \text{and}\quad \sup_{i\ne j}\bigl(|ij|-|X_iX_j|_{\rho_t}\bigr)\le2r. $$ But $\sup_i\operatorname{diam}_{\rho_t} X_i=\sup_i\operatorname{diam} X_i=2r$, therefore, if we take the correspondence $R^t\in\mathcal{R}(M,X^t)$ coinciding with $R$, then $$ \operatorname{dis} R^t=\max\Bigl[\sup_i\operatorname{diam}_{\rho_t}X_i,\ \sup_{i\ne j}\bigl(|X_iX_j|'_{\rho_t}-|ij|\bigr),\ \sup_{i\ne j}\bigl(|ij|-|X_iX_j|_{\rho_t}\bigr)\Bigr]=2r. $$ So, each space $X^t$ satisfies the conditions of Assertion~\ref{ass:diz}, Item~(\ref{ass:diz:6}), and hence, $R^t$ is an optimal correspondence. Thus, $2d_{GH}(M,X^t)=\operatorname{dis} R^t=2r$, i.e., the curve $X^t$ lies in $S_r(M)$. Now let us deform the metric on the space $X^1$ as follows: for points $x,\,x'$ lying in the same $X_i$, we put $\nu_t(x,x')=(1-t)|xx'|$, $t\in[0,1]$, and for points lying in different sets $X_i$, we do not change the metric. 
It is easy to see that all $\nu_t$ are metrics, and for the metric space $X^1_t=(X,\nu_t)$, the following relations hold: \begin{gather*} \operatorname{diam}_{\nu_t}X_i=(1-t)\operatorname{diam} X_i\le2r,\\ |X_iX_j|'_{\nu_t}-|ij|=|X_iX_j|'_\rho-|ij|=2r,\qquad |ij|-|X_iX_j|_{\nu_t}=|ij|-|X_iX_j|_\rho=-2r. \end{gather*} Consider the correspondence $R$ as a correspondence between $M$ and $X^1_t$, and denote it by $R^t_1\in\mathcal{R}(M,X^1_t)$. Then $\operatorname{dis} R^t_1=2r$, therefore, $R^t_1$ is an optimal correspondence, and $d_{GH}(M,X^1_t)=r$, i.e., all the spaces $X^1_t$ lie in the sphere $S_r(M)$. The same arguments as above show that the mapping $t\mapsto X^1_t$ is a continuous curve in $\operatorname{\mathcal{G\!H}}$. The space $X^1_1$ is obtained from $M$ by adding the value $2r$ to all the distances $|ij|$, $i\ne j$. This space is denoted by $M^+$. Thus, we have constructed a continuous curve in $S_r(M)\subset\operatorname{\mathcal{G\!H}}$ connecting the space $X\in S_r(M)$ with the space $M^+\in S_r(M)$. {\bf (2)} Let $\operatorname{dis} R=\sup_{i\ne j}\bigl(|X_iX_j|'-|ij|\bigr)$. Consider the same family $\rho_t$ of metrics as in Case~(1). Since $$ |X_iX_j|'-|ij|\le |X_iX_j|'_{\rho_t}-|ij|\le2r, $$ and $\sup_{i\ne j}\bigl(|X_iX_j|'-|ij|\bigr)=2r$, we get $\sup_{i\ne j}\bigl(|X_iX_j|'_{\rho_t}-|ij|\bigr)=2r$ for all $t$. As shown in Case~(1), $\operatorname{diam}_{\rho_t}X_i=\operatorname{diam} X_i\le 2r$ and $\bigl||ij|-|X_iX_j|_{\rho_t}\bigr|\le2r$, therefore, the family $X^t=(X,\rho_t)$ lies in the sphere $S_r(M)$. The arguments from Case~(1) prove the continuity of the curve $t\mapsto X^t$, whose ending point can be connected with the space $M^+\in S_r(M)$ by means of the same construction. 
Thus, again we have constructed a continuous curve in $S_r(M)\subset\operatorname{\mathcal{G\!H}}$ connecting the space $X\in S_r(M)$ with the space $M^+\in S_r(M)$. {\bf (3)} Let $\operatorname{dis} R=\sup\bigl\{|ij|-|X_iX_j|:i\ne j\bigr\}$. In this case, we define the metric $\rho$ in a different way, namely, we put $\rho(x_i,x_j)=|ij|-2r$ for $i\ne j$, and $\rho_t(x_i,x_j)=(1-t)|x_ix_j|+t\bigl(|ij|-2r\bigr)$. Let us show that $\rho$ is a metric too. The value $|ij|-2r$ is positive, because $r<s(M)/4$, and hence $|ij|-2r>|ij|-s(M)/2\ge s(M)/2>0$. It remains to verify the triangle inequalities. If all three points belong to the same $X_i$, then the inequality coincides with the one in $X$, and hence it holds. Further, if $x_i,x'_i\in X_i$ and $x_j\in X_j$, $j\ne i$, then the triangle $x_ix'_ix_j$ is isosceles, and it suffices to show that $\rho(x_i,x'_i)\le2\rho(x_i,x_j)$. Since $r<s(M)/4$, we have $$ 2\rho(x_i,x_j)-\rho(x_i,x'_i)=2|ij|-4r-|x_ix'_i|>2|ij|-s(M)-|x_ix'_i|=\bigl(|ij|-s(M)\bigr)+\bigl(|ij|-|x_ix'_i|\bigr). $$ It remains to notice that $|ij|-s(M)\ge 0$ by the definition of $s(M)$, and $|ij|-|x_ix'_i|>2r$ because $\operatorname{diam} X_i\le 2r$ and $|ij|\ge s(M)>4r$. At last, if $x_i\in X_i$, $x_j\in X_j$, and $x_k\in X_k$ for pairwise distinct $i,\,j,\,k$, then $$ \rho(x_i,x_j)+\rho(x_j,x_k)-\rho(x_i,x_k)=|ij|+|jk|-|ik|-2r\ge t(M)-2r>6r-2r=4r>0. $$ Thus, $\rho$ is a metric. Notice that $|X_iX_j|_\rho=|X_iX_j|'_\rho=|ij|-2r$ for any $i\ne j$. In this case the function $\rho_t(x_i,x_j)=|x_ix_j|-t\bigl(2r+|x_ix_j|-|ij|\bigr)$ decreases monotonically as $t$ increases. Since $$ |ij|-|X_iX_j|\le|ij|-|X_iX_j|_{\rho_t}\le2r, $$ and $\sup_{i\ne j}\bigl(|ij|-|X_iX_j|\bigr)=2r$, we get $\sup_{i\ne j}\bigl(|ij|-|X_iX_j|_{\rho_t}\bigr)=2r$ for all $t$. 
Arguments similar to those in Cases~(1) and~(2) show that $\operatorname{diam}_{\rho_t}X_i=\operatorname{diam} X_i\le 2r$ and $\bigl||X_iX_j|'_{\rho_t}-|ij|\bigr|\le2r$, therefore, the family $X^t=(X,\rho_t)$ forms a continuous curve lying in the sphere $S_r(M)$. Further, arguments similar to the ones given in the previous cases allow us to connect the space $X\in S_r(M)$ with the space $M^-\in S_r(M)$ by a continuous curve lying in the sphere $S_r(M)$. Here the space $M^-$ is obtained from the space $M$ by subtracting the value $2r$ from all the distances $|ij|$, $i\ne j$. To conclude the proof, we apply Lemma~\ref{lem:M_sigma}, Item~(\ref{lem:M_sigma:4}), and construct continuous curves in $S_r(M)$ connecting the spaces $M^+$ and $M^-$ with a space $M^{\pm}$, in which some nonzero distances have the form $|ij|+2r$, $i\ne j$, and the remaining ones have the form $|ij|-2r$, $i\ne j$. That can be done since $\#M\ge3$ according to the assumptions. The theorem is proved. \end{proof} If the space $M$ is finite, and $X$ is a compact metric space, then it is easy to see that all the curves constructed in the proof of Theorem~\ref{thm:small} lie in $\mathcal{M}$. Thus, the following result holds. \begin{cor} Let $M$ be a finite generic space, and $$ 0<r<\min\bigl\{s(M)/4,\,e(M)/4,\,t(M)/6\bigr\}. $$ Then the sphere $S_r(M)$ in $\mathcal{M}$ is path connected. \end{cor} In conclusion, we give several examples of generic spaces. \begin{examp} A finite metric space $M$ with pairwise distinct non-zero distances obviously has $s(M)>0$ and $e(M)>0$. Adding a positive constant $\varepsilon$ to all non-zero distances, we get $t(M)\ge\varepsilon$ in addition. \end{examp} The following elegant construction has been suggested to us by Konstantin Shramov and gives an opportunity to obtain a generic metric space of an arbitrary cardinality. 
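As a side illustration (ours, not part of the original text), the invariants from the finite example above admit a direct numerical check. In the Python sketch below, the helper names `s_M`, `t_M`, `e_M` are our own stand-ins for $s(M)$, $t(M)$, $e(M)$: the smallest non-zero distance, the smallest triangle slack, and the smallest distortion of a non-identical self-bijection.

```python
import itertools

# A finite space with pairwise distinct non-zero distances has s > 0 and
# e > 0; adding a constant eps to all non-zero distances gives t >= eps.

def s_M(d, pts):
    # smallest non-zero distance
    return min(d[(i, j)] for i, j in itertools.combinations(pts, 2))

def t_M(d, pts):
    # smallest triangle slack |ij| + |jk| - |ik| over ordered triples
    return min(d[(i, j)] + d[(j, k)] - d[(i, k)]
               for i, j, k in itertools.permutations(pts, 3))

def e_M(d, pts):
    # smallest distortion of a non-identical bijection of the space
    best = float("inf")
    for perm in itertools.permutations(pts):
        if list(perm) == list(pts):
            continue
        sigma = dict(zip(pts, perm))
        dis = max(abs(d[(i, j)] - d[(sigma[i], sigma[j])])
                  for i, j in itertools.combinations(pts, 2))
        best = min(best, dis)
    return best

pts = [0, 1, 2, 3]
raw = {}
vals = [1.0, 1.1, 1.25, 1.4, 1.6, 1.85]      # pairwise distinct distances
for (i, j), v in zip(itertools.combinations(pts, 2), vals):
    raw[(i, j)] = raw[(j, i)] = v

eps = 0.5
d = {k: v + eps for k, v in raw.items()}     # add eps to all non-zero distances

assert s_M(d, pts) > 0 and e_M(d, pts) > 0
assert t_M(d, pts) >= eps
```

The brute-force `e_M` is feasible only for very small spaces, but it matches the definition directly, which is the point of the illustration.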
\begin{examp} Consider an arbitrary infinite set $X$. It is well known (Zermelo's theorem) that any set can be well-ordered, i.e., there exists a total order such that any non-empty subset contains a least element. It is easy to show, see for example~\cite[Ch.~2]{StRoman}, that every order-preserving bijection of a well-ordered set onto itself is identical. Let us fix some well-order on $X$ and denote it by $\le$. Construct an oriented graph $G_o$ on the vertex set $X$ connecting by an oriented edge $(x,y)$ each pair of vertices $x,\,y\in X$ such that $x\lvertneqq y$. It follows from the above that the automorphism group of this graph is trivial. Now reconstruct $G_o$ into another oriented graph $H_o$ replacing each of its oriented edges $e=(x,y)$ by three vertices $u_e,\,v_e,\,w_e$ and four oriented edges $(x,u_e)$, $(u_e,v_e)$, $(v_e,y)$, and $(v_e,w_e)$. Finally, denote by $H$ the non-oriented graph corresponding to $H_o$. Let us show that the automorphism group of the graph $H=(V_H,E_H)$ is trivial. Let $\sigma\colon V_H\to V_H$ be an automorphism of the graph $H$. Notice that the set $X\subset V_H$ is the set of all vertices of $H$ having infinite degree, therefore the restriction of $\sigma$ to $X$ is a bijection of $X$ onto itself. Further, let $e=(x,y)$ be an oriented edge of the graph $G_o$. Let us show that $\bigl(\sigma(x),\sigma(y)\bigr)$ is also an edge of the graph $G_o$. Assume the contrary; then $f=\bigl(\sigma(y),\sigma(x)\bigr)$ is an edge of $G_o$ because the order is total. The path $x,\,u_e,\,v_e,\,y$ in $H$ is mapped to the path $\sigma(x),\,\sigma(u_e),\,\sigma(v_e),\,\sigma(y)$. On the other hand, in $H$ there exists a unique $3$-edge path corresponding to the edge $f$, namely, the path $\sigma(y),\,u_f,\,v_f,\,\sigma(x)$, therefore $u_f=\sigma(v_e)$ and $v_f=\sigma(u_e)$. But $\deg u_e=\deg u_f=2$, and $\deg v_e=\deg v_f=3$, a contradiction, so $\bigl(\sigma(x),\sigma(y)\bigr)$ is an edge of $G_o$. 
The latter means that $\sigma|_X$ is an automorphism of the oriented graph $G_o$, and hence, $\sigma|_X=\operatorname{id}_X$. But then the vertices of each path $x,\,u_e,\,v_e,\,y$ are also fixed, and hence, $\sigma=\operatorname{id}_{V_H}$. Notice that the cardinalities of $X$ and $V_H$ are the same. Construct a metric on $V_H$ as follows: put the distances between distinct adjacent vertices equal to $1$, and the distances between non-adjacent vertices equal to $1+\varepsilon$, where $\varepsilon\in(0,1)$. Then the resulting metric space $V_H$ is generic, because $s(V_H)=1$, $t(V_H)=1-\varepsilon$, and $e(V_H)=\varepsilon$. \end{examp} This example can be modified easily to obtain an unbounded generic metric space. \begin{examp} Let $X$ be an arbitrary generic metric space of a finite diameter $d$; in particular, $\#X\ge3$. Put $Y=X\sqcup\{y\}$ and extend the metric from $X$ to $Y$ as follows: $|xy|=f$ for all $x\in X$, where $f>d$. It is clear that $s(Y)=s(X)$, and $t(Y)$ remains positive, because all new triangles are isosceles with equal sides of length $f$ and a smaller side of length at least $s(X)>0$. Further, each isometry of $Y$ preserves $y$, and hence preserves $X$; therefore, $Y$ has no non-trivial isometries. Let $\sigma$ be a bijection of $Y$. If $\sigma$ preserves $y$, then $\operatorname{dis}\sigma=\operatorname{dis}\sigma|_X\ge e(X)>0$. Otherwise, take a point $x$ such that $\sigma(x)\ne y$ (such a point exists, because $\#X\ge3$); then $\operatorname{dis}\sigma\ge\bigl||yx|-\bigl|\sigma(y)\sigma(x)\bigr|\bigr|\ge f-d>0$. Thus, $e(Y)>0$ in any case, and hence, $Y$ is a generic metric space as well. This construction permits us to obtain an unbounded generic metric space starting from a bounded one by adding points one by one and taking the distance to each next new point greater by $1$ than the distance to the previous one. Notice that if the initial space is infinite, then the resulting space has the same cardinality. 
\end{examp} The next example is based on another idea and gives a countable generic metric space of infinite diameter. \begin{examp} Consider a geometric progression $\{q^n\}_{n=0}^\infty\subset\mathbb{R}$, where $q>4$, and define a metric space $X=\{x_i\}$ with the distances $|x_ix_j|=|q^i-q^j|+1$. It is clear that $s(X)>4$ and $t(X)=1$. Further, let $\sigma\in S(X)$ be a nontrivial bijection. Then there exists $m$ such that $\sigma(x_m)=x_n$ and $n\ne m$. Now, take $k$ so large that $k$ and $l$, where $\sigma(x_k)=x_l$, are greater than $m$ and $n$. We have: $$ \operatorname{dis}\sigma\ge\bigl||x_mx_k|-|\sigma(x_m)\sigma(x_k)|\bigr|=\bigl||x_mx_k|-|x_nx_l|\bigr|=\bigl||q^m-q^k|-|q^n-q^l|\bigr|. $$ If $k\ne l$, then $\operatorname{dis}\sigma\ge q^{\max\{k,l\}}\bigl|1-3/q\bigr|>q$, because $\max\{k,l\}\ge 3$. Otherwise, $\operatorname{dis}\sigma\ge|q^m-q^n|\ge q^{\max\{m,n\}}\bigl|1-1/q\bigr|\ge q-1$, because $\max\{m,n\}\ge 1$. Thus, $e(X)\ge q-1>3$, and so, $X$ is a generic space. \end{examp} \eject \markright{References} \end{document}
Genevieve M. Knight

Genevieve Madeline Knight (June 18, 1939 – August 19, 2021) was an American mathematics educator.[2][3]

Born: June 18, 1939, Brunswick, Georgia. Died: August 19, 2021 (aged 82), Silver Spring, Maryland.[1] Alma mater: Fort Valley State College (1961), Atlanta University (1963), University of Maryland (1970). Field: mathematics education. Institutions: Hampton Institute; Coppin State College. Thesis: The Effect of a Sub-Culturally Appropriate Language upon Achievement in Mathematical Content. Academic advisors: Henry H. Walbesser, Abdulalim A. Shabazz.

Education and career

Knight was the youngest of three sisters who all became mathematics and science educators, daughters of a seamstress and a civil service radar specialist. As a freshman at Fort Valley State College in 1957, Knight was studying home economics when the Sputnik launch created a national push for more American students to be educated in mathematics and the sciences. Knight transferred to mathematics, "because it had fewer labs than any of the sciences", and graduated in 1961.[4][2][3] She completed a master's degree in 1963 at Atlanta University, under the supervision of Abdulalim A. Shabazz, and took a teaching position at the Hampton Institute, also becoming an NSF fellow, a position that allowed her to travel and meet with other college mathematics teachers. In 1966, she returned to graduate school, and completed a doctorate in mathematics education in 1970 at the University of Maryland, College Park, under the supervision of Henry H. Walbesser.[4][2][3] After her doctorate, Knight remained at the Hampton Institute, where she became chair of mathematics and computer science.[5] In 1985 she moved to Coppin State College as a full professor.[2][3] She retired in 2006.
Recognition

In 1980, the Virginia Council of Teachers of Mathematics named Knight as their College Teacher of the Year.[6] In 1993 she was named Maryland Mathematics Teacher of the Year, and the Mathematical Association of America gave Knight a Distinguished Teaching Award.[5] In 1996 the University System of Maryland named her as that year's Wilson H. Elkins Distinguished Professor.[7] The National Council of Teachers of Mathematics gave her their 1999 lifetime achievement award for her service to mathematics education, outspoken support of equity "regardless of ethnicity, gender, or socioeconomic background", and distinguished teaching.[7] She was the 2013 Cox–Talbot Lecturer of the National Association of Mathematicians, one of the member societies of the Conference Board of the Mathematical Sciences.[8] In 2018 the Association for Women in Mathematics named her as one of their inaugural Fellows.[9]

References

1. "Genevieve M. Knight, a longtime math educator at historically Black colleges and universities, dies", Baltimore Sun, September 1, 2021.
2. Williams, Scott W., "Genevieve Madeline Knight", Black Women in Mathematics, State University of New York at Buffalo, retrieved 2018-02-12.
3. "Genevieve Madeline Knight", Strengthening Underrepresented Minority Mathematics Achievement (SUMMA), Mathematical Association of America, retrieved 2018-02-12.
4. Ross, Kenneth A. (2007), Genevieve Knight (interview) (PDF), Mathematical Association of America.
5. Distinguished Teaching Award citation (PDF), retrieved 2018-02-12.
6. "People", Jet, Johnson Publishing Company, vol. 58, no. 8, p. 21, May 8, 1980.
7. 1999 Lifetime Achievement Award Recipient: Genevieve M. Knight, National Council of Teachers of Mathematics, retrieved 2018-02-12.
8. Cox–Talbot Lecture, National Association of Mathematicians, retrieved 2018-02-12.
9. 2018 Inaugural Class of AWM Fellows, Association for Women in Mathematics, retrieved 9 January 2021.
Is M-theory just a M-yth? Type I' String theory as M-theory compactified on a line segment? I was considering the S-dual of the Type I' String theory (the solitonic Type I string theory). That is the same as the S-dual of the T-Dual of Type I String theory. Then, that means both length scales and coupling constant are inverted. So, since inverting the length scale of the theory before inverting the coupling constant is the same as inverting the coupling constant before the length scale, I think the S-dual of the T-dual of the Type I String theory is the same as the T-dual of the S-dual of the Type I String theory. The S-dual of the Type I string theory is the Type HO String theory. The T-dual of the Type HO string theory is the Type HE String theory. Therefore, the S-dual of the Type I' String theory is the Type HE String theory. But the Type HE String theory is S-dual to M-theory compactified on a line segment. So does this mean that the Type I' String theory is M-theory compactified on a line segment? Type I' string theory is equivalent to M-theory compactified on a line segment times a circle, i.e. M-theory on a cylinder. M-theory on a line segment only is the Hořava-Witten M-theory, a dual description of the $E_8\times E_8$ heterotic string, because every 9+1-dimensional boundary in M-theory has to carry the $E_8$ gauge supermultiplet. The extra compactified circle is needed to break the $E_8\times E_8$ gauge group to a smaller one; and to get the right number of large spacetime dimensions, among other things. Type I' string theory has D8-branes that come from the end-of-the-world branes in M-theory on spaces with boundaries; it also possesses orientifold O8-planes. Interestingly enough, the relative position of O8-planes and D8-branes in type I' string theory may be adjusted. 
This freedom goes away in the M-theory limit; the D8-branes have to be stuck at the orientifold planes, those that become the end-of-the-world domain walls of M-theory, and this obligation is explained by the observation that an O8-plane with the wrong number of D8-branes on it is a source of a running dilaton. In the M-theory limit, the running of the dilaton becomes arbitrarily fast, which sends the maximum tolerable distance between the O8-plane and the D8-branes to zero.

Thanks a lot for the answer, but I have one more question: M-theory compactified on a cylinder definitely isn't equivalent to M-theory compactified on a line segment, so there must be some fallacy in my reasoning. Do the T- and S-dualities not commute? Or is the S-dual of the Type HE string theory not M-theory compactified on a line segment (but instead a cylinder)? Thanks!

There are various fallacies - you use the dualities in a bizarre way. The equivalence of the heterotic strings to M-theory with boundary isn't really a normal S-duality; it's a strong-coupling limit and a general equivalence. Even more importantly, the mistake is in the first S-duality between type I' and HE. Type I' is a 9-dimensional theory (counting large dimensions only), so it can't be equivalent to a 10-dimensional one. It can't be hard to trace the number of large dimensions of spacetime and avoid simple mistakes of the sort, can it?

Thanks. So does that mean that the Type I' string theory is only T-dual to Type I string theory compactified on a circle, rather than to Type I string theory itself? If so, what is the T-dual of the actual 10-dimensional string theory called?

Dear @dimension10, T-duality always requires some dimensions to be compactified on a circle - or, for type I', on a line segment. For 10D string theories, T-duality relates two theories with a circular dimension (of inverse radii) and 8+1 large dimensions. It is nonsense to ask what is the T-dual of a 10-dimensional vacuum.
At most, you may understand it as the infinite $R$ limit of some vacua; the T-dual is formally a singular $R=0$ compactification. Let me also mention that the infinite $R$ limit of type I' = type IA looks like type IIA string theory everywhere away from the orientifold planes.
John H. Walter

John Harris Walter (14 December 1927 – 20 September 2021)[1] was an American mathematician known for proving the Walter theorem in the theory of finite groups. Born in Los Angeles, Walter received his bachelor's degree from the California Institute of Technology in 1951. He received his master's degree from the University of Michigan in 1953 and his Ph.D. in 1954 with the thesis Automorphisms of the Projective Unitary Groups under the supervision of Leonard Tornheim.[2] Walter was a visiting professor in 1960/61 and 1965/66 at the University of Chicago, in 1967/68 at Harvard University, and in 1972/73 at the University of Cambridge, UK. He was a professor emeritus of mathematics at the University of Illinois at Urbana–Champaign,[3] where he became an associate professor in 1961 and a full professor in 1966. In 2012 he was elected a fellow of the American Mathematical Society.[4] He died at the age of 93 in 2021.[5]

Selected publications

• with Daniel Gorenstein: Gorenstein, Daniel; Walter, John H. (1962). "On finite groups with dihedral Sylow 2-subgroups". Illinois J. Math. 6 (4): 553–593. doi:10.1215/ijm/1255632706. MR 0142619.
• with Daniel Gorenstein: "The characterization of finite groups with dihedral Sylow 2-subgroups", Parts I, II, III, J. Algebra, vol. 2, 1964, pp. 85–151, doi:10.1016/0021-8693(65)90027-X; pp. 217–270, doi:10.1016/0021-8693(65)90019-0; pp. 354–393, doi:10.1016/0021-8693(65)90015-3.
• Walter, John H. (1967). "Finite groups with abelian Sylow 2-subgroups of order 8". Inventiones Mathematicae. 2 (5): 332–376. Bibcode:1967InMat...2..332W. doi:10.1007/BF01428899. S2CID 121324944.
• Walter, John H. (1969). "The characterization of finite groups with abelian Sylow 2-subgroups". Annals of Mathematics. 89 (3): 405–514. doi:10.2307/1970648. JSTOR 1970648.

References

1. Biographical information from Pamela Kalte et al., American Men and Women of Science, Thomson Gale, 2004.
2. John H. Walter at the Mathematics Genealogy Project.
3. Department of Mathematics, Directory, UIUC.
4. List of Fellows of the American Mathematical Society, retrieved 2013-09-01.
5. "In Memoriam: John H. Walter". University of Illinois Urbana-Champaign. Retrieved 6 October 2021.
Study on communication channel estimation by improved SOMP based on distributed compressed sensing

Biao Wang1, Yufeng Ge1, Cheng He1, You Wu1 & Zhiyu Zhu1

Wireless communication channels are usually time-varying; at the same time, the channel structures within adjacent time slots are strongly correlated in time. Therefore, designing a channel state information acquisition method that exploits the slowly varying character of the channel is of great significance for further improving the communication performance, i.e., achieving a low bit error rate (BER), in an OFDM communication system. Distributed compressed sensing (DCS) was proposed for the situation in which multiple sparse signals are correlated in time. Within the DCS framework, this article builds a jointly structured time-domain channel estimation method by improving the simultaneous orthogonal matching pursuit (SOMP) algorithm, in order to obtain better channel information acquisition performance. Simulation results demonstrate the effectiveness of the proposed channel estimation method. Compared with conventional compressed-sensing-based channel estimators, which operate on each time instant separately, the proposed method achieves a better BER.

Compressive sensing (CS) [1,2,3] is a new signal processing theory: when a signal is sparse in some orthogonal transform domain, it can be under-sampled at a rate far below the Nyquist sampling rate, and the original sparse signal can then be recovered with high probability by a nonlinear reconstruction algorithm. CS theory provides a new solution for traditional channel estimation and has been applied to the channel estimation of communication systems by many scholars. Cotter and Rao proposed sparse channel estimation based on the matching pursuit (MP) algorithm [4] for single-carrier communication systems [5, 6].
Dongming and Bing extended the above MP-based sparse channel estimation method to the MIMO-OFDM multicarrier communication system [7]. Building on [8], C. R. Berger, S. Zhou, and others studied CS-based sparse channel estimation in depth, proposed an OFDM frequency-domain channel estimation algorithm, and analyzed the advantages and disadvantages of various sparse reconstruction algorithms; their simulations and experiments show that BP-based channel estimation reaches a bit error rate far lower than that of LS channel estimation [9]. In addition, the same team conducted other CS-based channel estimation studies in 2010 and 2011, which all verified the superiority of CS-based algorithms over traditional algorithms [10, 11]. Compared with the traditional pilot-aided method, the CS-based method greatly reduces the number of inserted pilots and improves the communication efficiency while the channel estimation performance remains unchanged. However, in these studies of applying CS theory to the channel, the CS reconstruction algorithms are designed for static sparse signals and only consider the sparse reconstruction of a single observed signal at a certain point in time. The high-speed mobile communication channel, in contrast, is usually a slowly time-varying coherent multipath channel, so traditional channel estimation methods based on static CS seriously reduce the efficiency of channel estimation. At present, some scholars have begun to study the application of compressed sensing to the processing of time-varying sparse signals. Existing results show that, given the dynamic character of time-varying sparse signals, a dynamic reconstruction that exploits this structure obtains a better sparse reconstruction effect.
To sum up, traditional channel estimation methods suffer from high pilot cost and low spectral efficiency; classic static CS channel estimation performs well but has high computational complexity and, in particular, poor performance for time-varying channel estimation. Against this background, and considering that the high-speed mobile communication channel is time-varying, sparse, and statistically dynamic, this paper takes OFDM technology as the basic framework and studies a dynamic CS channel estimation method for high-speed vehicular communication channels, in order to further improve channel estimation performance and reduce the computational complexity of the algorithm.

For DCS theory, the most typical application scenario is multiple sensor nodes observing signals at the same time. These observed signals are usually not sparse themselves, but admit a sparse representation under some sparse basis. More importantly, there is a certain correlation between the signals. When each sensor observes its signal independently, the original signal set can be recovered from the observations of all the sensor nodes, and the joint reconstruction is better than what each sensor node could achieve alone. Thus, the object of study in DCS theory is a signal set with joint sparsity. By analyzing the correlation structure between the sparse signals and adopting a joint signal recovery strategy, a better sparse reconstruction is obtained. This section focuses on the JSM-1 and JSM-2 models, the two most commonly used joint sparsity models (JSM):

JSM-1

In the JSM-1 model, each signal in the observed signal set is composed of a common sparse part and a separate sparse part.
All the signals share a common sparse part, while the separate sparse parts differ from signal to signal; both the common and the separate sparse parts can be sparsely represented in the same sparse basis. The mathematical description of this model is:

$$ {\mathbf{X}}_j={\mathbf{Z}}_c+{\mathbf{Z}}_j,\kern0.5em j\in \left\{1,2,\cdots, J\right\} $$

where Zc = Ψθc is the common sparse part of the signal, with ‖θc‖0 = Kc meaning that Zc is Kc-sparse, and Zj = Ψθj is the separate sparse part of the jth signal, with ‖θj‖0 = Kj meaning that Zj is Kj-sparse. Note that "common sparse part" only means that the coordinates of the non-zero coefficients are the same for all signals, that is, the support set is the same.

JSM-2

Unlike the JSM-1 model, all signals in the JSM-2 model have the same sparse support set but different non-zero coefficients. In this model, all the original signals are constructed from the same sparse basis, and the mathematical representation is:

$$ {\mathbf{X}}_j={\boldsymbol{\Psi} \boldsymbol{\uptheta}}_j,\kern0.5em j\in \left\{1,2,\cdots, J\right\} $$

where ‖θj‖0 = K denotes that the sparsity of all signals is K. Typical application scenarios of the JSM-2 model include MIMO communication, sound localization, etc. The SOMP algorithm is also based on this model.

The system model

The CS-based OFDM channel estimation method transforms the OFDM channel estimation problem into a sparse signal reconstruction problem, which is solved by a CS reconstruction algorithm. It is specifically expressed as

$$ {\mathbf{Y}}_P={\mathbf{X}}_P{\mathbf{F}}_P\mathbf{h}+{\mathbf{W}}_P=\mathbf{Ah}+{\mathbf{W}}_P $$

In the above equation, YP, XP, and FP are all known at the receiver, and h is sparse.
This is therefore a typical CS model, and the time-domain channel response h can be reconstructed with high probability by l1-norm minimization or by a greedy algorithm. On the other hand, the high-speed mobile communication channel is considered to be slowly time-varying, with a coherence time greater than the OFDM symbol period: within each OFDM symbol, the sparse multipath structure of the channel remains unchanged. The CS-based channel estimation above is therefore often carried out symbol by symbol; that is, pilots are inserted into each OFDM symbol, and the receiver estimates the channel impulse response symbol by symbol. Although this method is simple and practical, it does not exploit the slowly varying character of the high-speed mobile communication channel. Because the channel changes more slowly than the OFDM symbol rate, the channel impulse response is strongly correlated over several consecutive OFDM symbols; in other words, the channel responses of these symbols have a common sparse part. The channel estimation performance of the OFDM system can be further improved if this common sparsity of the channel response over time is fully utilized. Based on this, the relevant literature, within the DCS framework, converts channel estimation over the slowly time-varying channel into a joint sparse recovery problem under the JSM-2 model and carries out joint sparse recovery through the SOMP algorithm to obtain the channel response, as follows. According to Eq.
(3), if the channel estimation over T consecutive OFDM symbols is considered, then

$$ \left\{\begin{array}{c}{\mathbf{Y}}_P^{(1)}={\mathbf{Ah}}^{(1)}+{\mathbf{W}}_P^{(1)}\\ {}{\mathbf{Y}}_P^{(2)}={\mathbf{Ah}}^{(2)}+{\mathbf{W}}_P^{(2)}\\ {}\vdots \\ {}{\mathbf{Y}}_P^{(T)}={\mathbf{Ah}}^{(T)}+{\mathbf{W}}_P^{(T)}\end{array}\right. $$

where \( {\mathbf{Y}}_P^{(t)},{\mathbf{h}}^{(t)},{\mathbf{W}}_P^{(t)} \) denote, respectively, the received signal at the pilot positions, the time-domain channel impulse response, and the frequency-domain noise of the tth consecutive OFDM symbol, 1 ≤ t ≤ T. When the channel changes slowly, the channel responses h(t) of the T consecutive OFDM symbols exhibit time correlation and common sparsity. By making full use of this common sparsity, the channel estimation over multiple consecutive symbols can be treated as a whole, and the channel responses h(t) can be restored by joint sparse reconstruction instead of independent channel estimation for each OFDM symbol. For the joint sparse model in Eq. (4), the following optimization problem is constructed to solve the joint channel estimation:

$$ \widehat{\mathbf{H}}=\arg \min \sum \limits_{t=1}^T{\left\Vert {\mathbf{h}}^{(t)}\right\Vert}_1\kern0.5em s.t.\kern0.5em \sum \limits_{t=1}^T{\left\Vert {\mathbf{Y}}_P^{(t)}-{\mathbf{Ah}}^{(t)}\right\Vert}_2^2\le \varepsilon $$

In Eq. (5), ε is a parameter related to the noise. The optimization problem above can be solved by a joint reconstruction algorithm corresponding to the JSM-2 model; this paper mainly discusses the SOMP algorithm.

Time-domain joint channel estimation based on the SOMP algorithm

Signal sets constructed by different JSM models require different joint recovery algorithms. The SOMP algorithm is a classic joint recovery algorithm for the JSM-2 model; it was proposed by Tropp and Gilbert and then introduced into the DCS setting by Baron [12].
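Before describing SOMP itself, the stacked measurement model of Eq. (4) can be illustrated numerically. The sketch below (Python/NumPy; all dimensions, i.e. 32 pilots, channel length 60, sparsity 6, and T = 4 symbols, are assumptions for illustration, not values from the paper) builds T channel vectors that share one support set, as the JSM-2 model requires:

```python
import numpy as np

rng = np.random.default_rng(0)
Np, L, K, T = 32, 60, 6, 4         # pilots, channel length, sparsity, symbols

# Random sensing matrix standing in for the pilot/DFT product X_P F_P
A = rng.standard_normal((Np, L)) / np.sqrt(Np)

# One support set shared by every symbol (JSM-2), independent gains per symbol
support = rng.choice(L, size=K, replace=False)
H = np.zeros((L, T), dtype=complex)
for t in range(T):
    H[support, t] = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

# Stacked pilot-domain observations: Y[:, t] = A h^(t) + w^(t)
W = 0.01 * (rng.standard_normal((Np, T)) + 1j * rng.standard_normal((Np, T)))
Y = A @ H + W
print(Y.shape)   # prints (32, 4)
```

Every column of H has its non-zero entries at the same delays, which is exactly the structure the joint recovery in Eq. (5) exploits.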
As an improved greedy algorithm based on OMP, the SOMP algorithm selects, in each iteration, the atom that best matches the residual to update the support set, and completes the joint recovery of the signal set through a number of fast iterations. The key difference between SOMP and OMP is that OMP selects the atom that best matches the residual of a single signal, while SOMP selects the atom that best matches the residuals of the whole signal group. Combining the basic principle of the SOMP algorithm with the OFDM channel estimation model derived above, the time-domain joint channel estimation based on the SOMP algorithm is given by the following steps.

Input: received pilot signals \( {\mathbf{Y}}_P=\left[{\mathbf{Y}}_P^{(1)},{{\mathbf{Y}}_P}^{(2)},\cdots, {{\mathbf{Y}}_P}^{(T)}\right] \); the sensing matrix A; the maximum sparsity K of the high-speed mobile communication channel.

Initialization: set the signal residuals \( {\mathbf{r}}_0^{(i)}={\mathbf{Y}}_P^{(i)},i=1,2,\cdots, T \), where the superscript denotes the ith OFDM symbol and the subscript the iteration number; the index set Λ0 = ∅; the reconstructed atomic set Φ0 = ∅; the iteration counter t = 1.

Iteration process: the tth iteration.
Step 1: Take the inner products of the sensing matrix A with the residuals \( {\mathbf{r}}_{t-1}^{(i)} \), sum them over the T OFDM symbols, and find the position λt of the maximum of the summed inner products:

$$ {\lambda}_t=\arg {\max}_j\sum \limits_{i=1}^T\left|\left\langle {\mathbf{A}}_j,{\mathbf{r}}_{t-1}^{(i)}\right\rangle \right| $$

where Aj is the jth column atom of the sensing matrix A.

Step 2: Update the index set and the reconstructed atomic set:

$$ {\displaystyle \begin{array}{c}{\Lambda}_t={\Lambda}_{t-1}\cup {\lambda}_t\\ {}{\Phi}_t={\Phi}_{t-1}\cup {\mathbf{A}}_{\lambda_t}\end{array}} $$

Step 3: Use the LS algorithm to calculate the multipath coefficients for each symbol:

$$ {\widehat{\mathbf{h}}}_t^{(i)}=\arg \min {\left\Vert {{\mathbf{Y}}_P}^{(i)}-{\boldsymbol{\Phi}}_t{\mathbf{h}}^{(i)}\right\Vert}_2,i=1,2,\cdots, T $$

Step 4: Update the residuals according to the estimated channel impulse responses:

$$ {\mathbf{r}}_t^{(i)}={{\mathbf{Y}}_P}^{(i)}-{\boldsymbol{\Phi}}_t{\widehat{\mathbf{h}}}_t^{(i)},i=1,2,\cdots, T $$

Step 5: Check whether the condition t = K is satisfied. If it is, stop iterating; if not, let t = t + 1 and go back to step 1.

Output: the estimated channel impulse responses \( \widehat{\mathbf{h}}=\left[{\widehat{\mathbf{h}}}^{(1)},{\widehat{\mathbf{h}}}^{(2)},\cdots, {\widehat{\mathbf{h}}}^{(T)}\right] \) for the T OFDM symbols.

Analysis of the disadvantages of the SOMP algorithm

For multiple sparse signals that are correlated in distribution, DCS theory can further improve the recovery performance over classical CS theory by exploiting the common sparsity between the signals, and it has been widely studied and applied in wireless sensor networks and MIMO communication. However, the path delays of the actual high-speed mobile communication channel may change over multiple OFDM data symbols, and path births and deaths may even occur.
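The SOMP steps above can be turned into a short reference implementation. The following Python/NumPy sketch is illustrative only (the dimensions and the use of `lstsq` for the per-symbol LS step are our assumptions), not the authors' code:

```python
import numpy as np

def somp(Y, A, K):
    """Simultaneous OMP: estimate one shared support for all columns of Y.

    Y : (Np, T) pilot observations, one column per OFDM symbol.
    A : (Np, L) sensing matrix.  Returns H_hat of shape (L, T).
    """
    L = A.shape[1]
    T = Y.shape[1]
    R = Y.copy()                               # residuals r^(i), stacked
    support = []
    for _ in range(K):
        # Step 1: atom maximizing the summed correlation over all T symbols
        corr = np.abs(A.conj().T @ R).sum(axis=1)
        corr[support] = 0                      # never re-pick a chosen atom
        support.append(int(np.argmax(corr)))
        # Steps 2-4: per-symbol least squares on the support, residual update
        Phi = A[:, support]
        coef = np.linalg.lstsq(Phi, Y, rcond=None)[0]
        R = Y - Phi @ coef
    H_hat = np.zeros((L, T), dtype=complex)
    H_hat[support, :] = coef
    return H_hat
```

On noiseless JSM-2 data the shared support is typically recovered exactly; the sparsity K acts as the stopping rule, exactly as in step 5 above.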
The time-domain joint channel estimation based on the SOMP algorithm above assumes that the channel shares the same path delay set over several consecutive data symbols, which is difficult to satisfy exactly and is inconsistent with the actual situation. Moreover, from the description of the channel time correlation above, the support sets of the sparse channel paths in adjacent OFDM symbols are not identical; in other words, there are separate sparse paths between the symbols. The JSM-1 model therefore matches the characteristics of the slowly varying high-speed mobile communication channel more closely.

OFDM channel estimation based on the improved SOMP algorithm

The analysis of the time correlation of the slowly time-varying high-speed mobile communication channel shows that the JSM-1 model of DCS theory fits the characteristics of this channel better than the JSM-2 model. In the JSM-1 model, each signal is composed of a common sparse part and a separate sparse part, so we can divide the channel path delays into two different parts, namely, the common channel taps and the dynamic channel taps. On the basis of this separation, an improved SOMP algorithm is proposed in this paper to reconstruct the time-varying sparse channel. The key idea of the proposed algorithm is to first detect the common channel taps of the sparse channel across all OFDM data symbols simultaneously, and then to use the common-tap delay set as the initialization set of a dynamic channel tap tracking procedure, which tracks the dynamic channel taps and eliminates wrong taps from the initialization set. Combined with the symbol-by-symbol OFDM channel estimation model, the OFDM channel estimation process based on the improved SOMP algorithm is presented.
The steps are as follows:

Part 1: detection of the common channel taps

$$ {\Lambda}_0= SOMP\left({\mathbf{Y}}_P,\mathbf{A},K\right) $$

Part 2: detection of the dynamic channel taps

Initialization: the index set \( {\Gamma}_0^{(i)}={\Lambda}_0 \); the reconstructed atomic set \( {\Phi}_0^{(i)}={\mathbf{A}}_{\Lambda_0} \); the channel impulse response \( {\widehat{\mathbf{h}}}_0^{(i)}={\mathbf{A}}_{\Lambda_0}^{\dagger }{\mathbf{Y}}_P^{(i)} \); the residual \( {\mathbf{r}}_0^{(i)}={\mathbf{Y}}_P^{(i)}-{\Phi}_0^{(i)}{\widehat{\mathbf{h}}}_0^{(i)} \); the iteration counter t = 1. In all the symbols above, the superscript denotes the ith OFDM symbol and the subscript the iteration number.

Step 1: Find the column index λt of the atom most correlated with the residual \( {\mathbf{r}}_{t-1}^{(i)} \) over the sensing matrix A, and update the index set and the reconstructed atomic set:

$$ {\displaystyle \begin{array}{c}{\lambda}_t^{(i)}=\arg {\max}_j\left|\left\langle {\mathbf{A}}_j,{\mathbf{r}}_{t-1}^{(i)}\right\rangle \right|\\ {}{\Gamma}_t^{(i)}={\Gamma}_{t-1}^{(i)}\cup {\lambda}_t^{(i)}\\ {}{\Phi}_t^{(i)}={\Phi}_{t-1}^{(i)}\cup {\mathbf{A}}_{\lambda_t^{(i)}}\end{array}} $$

Step 2: Use the LS algorithm to find the channel response in each symbol:

$$ {\widehat{\mathbf{h}}}_t^{(i)}=\arg \min {\left\Vert {\mathbf{Y}}_P^{(i)}-{\Phi}_t^{(i)}{\mathbf{h}}^{(i)}\right\Vert}_2 $$

Step 3: Retain only the K largest coefficients in \( {\widehat{\mathbf{h}}}_t^{(i)} \), update the index set \( {\Gamma}_t^{(i)} \) and the reconstructed atomic set \( {\Phi}_t^{(i)} \) accordingly, and use the LS algorithm to recalculate \( {\widehat{\mathbf{h}}}_t^{(i)} \).

Step 4: Update the residual:

$$ {\mathbf{r}}_t^{(i)}={\mathbf{Y}}_P^{(i)}-{\Phi}_t^{(i)}{\widehat{\mathbf{h}}}_t^{(i)} $$

Step 5: Judge whether the condition \( {\left\Vert {\mathbf{r}}_t^{(i)}\right\Vert}_2<{\left\Vert {\mathbf{r}}_{t-1}^{(i)}\right\Vert}_2 \) is satisfied. If so, let t = t + 1 and return to step 1.
Otherwise, set \( {\Gamma}_t^{(i)}={\Gamma}_{t-1}^{(i)} \) and \( {\widehat{\mathbf{h}}}_t^{(i)}=\arg \min {\left\Vert {\mathbf{Y}}_P^{(i)}-{\Phi}_t^{(i)}{\mathbf{h}}^{(i)}\right\Vert}_2 \); the iteration process ends.

Output: the channel impulse responses \( \widehat{\mathbf{h}}=\left[{\widehat{\mathbf{h}}}^{(1)},{\widehat{\mathbf{h}}}^{(2)},\cdots, {\widehat{\mathbf{h}}}^{(T)}\right] \)

The core of the improved SOMP algorithm consists of two parts: common channel tap detection and dynamic channel tap tracking. The first part, common channel tap detection, estimates the common path delay set Λ0 of all symbols, which is assumed to contain most of the real common channel taps; in this algorithm, this step is carried out by the classical SOMP algorithm. The delay set Λ0 is then used as the initial support set for dynamic channel tap tracking within each symbol. The second part, dynamic channel tap tracking, tracks the time-varying channel taps that differ between OFDM symbols, namely, the separate sparse part of the JSM-1 model. Within each symbol, the common path delay set Λ0 obtained in the first part initializes the dynamic tracking process. In each iteration, the size K of the path delay set remains unchanged and only the reconstruction is refined: reliable channel taps can be added to the path delay set, and incorrect ones removed. Benefiting from the common channel tap detection, the dynamic channel tap tracking process rapidly approaches the real path delay set, and the process terminates once the residual \( {\mathbf{r}}_t^{(i)} \) no longer decreases, that is, once \( {\left\Vert {\mathbf{r}}_t^{(i)}\right\Vert}_2\ge {\left\Vert {\mathbf{r}}_{t-1}^{(i)}\right\Vert}_2 \).
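The two-part procedure above (common-tap detection, then per-symbol tracking with pruning back to K taps) can be sketched as follows. This is an illustrative Python/NumPy reading of the steps, with assumed dimensions and with `lstsq` standing in for the LS and pseudo-inverse computations; it is not the authors' code:

```python
import numpy as np

def improved_somp(Y, A, K):
    """Improved SOMP sketch: Part 1 detects common taps jointly (plain SOMP);
    Part 2 tracks dynamic taps per symbol, pruning back to K taps and
    stopping when the residual norm no longer decreases."""
    Np, T = Y.shape
    L = A.shape[1]

    # Part 1: common channel taps, Lambda_0 = SOMP(Y_P, A, K)
    R, common = Y.copy(), []
    for _ in range(K):
        corr = np.abs(A.conj().T @ R).sum(axis=1)
        corr[common] = 0
        common.append(int(np.argmax(corr)))
        Phi = A[:, common]
        R = Y - Phi @ np.linalg.lstsq(Phi, Y, rcond=None)[0]

    # Part 2: per-symbol dynamic tap tracking, initialized with Lambda_0
    H_hat = np.zeros((L, T), dtype=complex)
    for i in range(T):
        y = Y[:, i]
        supp = list(common)
        h = np.linalg.lstsq(A[:, supp], y, rcond=None)[0]
        r = y - A[:, supp] @ h
        while True:
            corr = np.abs(A.conj().T @ r)
            corr[supp] = 0
            cand = supp + [int(np.argmax(corr))]      # step 1: add best atom
            h_c = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
            keep = np.argsort(np.abs(h_c))[-K:]       # step 3: prune to K taps
            new_supp = [cand[j] for j in keep]
            h_n = np.linalg.lstsq(A[:, new_supp], y, rcond=None)[0]
            r_n = y - A[:, new_supp] @ h_n
            if np.linalg.norm(r_n) < np.linalg.norm(r):   # step 5
                supp, h, r = new_supp, h_n, r_n
            else:                                     # residual stopped improving
                break
        H_hat[supp, i] = h
    return H_hat
```

The pruning step keeps the support at size K throughout, so wrong taps inherited from the common set can be swapped out for a symbol's own dynamic taps, as the text describes.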
Simulation results and discussion

Parameter settings

To verify the effectiveness of the algorithm, the proposed channel estimation based on the improved SOMP algorithm is simulated, analyzed, and compared with similar algorithms in this section. In the simulation system, the total number of OFDM subcarriers is set to 256, the guard interval is a cyclic prefix of length 64, and 32 pilots are inserted per OFDM symbol. The pilot insertion pattern depends on the channel estimation method: conventional LS channel estimation uses equally spaced comb-type pilots, while CS-based channel estimation uses randomly placed pilots. All simulations use QPSK modulation with no channel coding, and the receiver is assumed to be fully synchronized. Because the proposed algorithm is based on the JSM-1 model, the sparse multipath setting in the channel simulation includes both common sparse multipaths and dynamic sparse multipaths. The common sparse multipaths exist in every symbol; their delay positions remain unchanged and only their amplitudes change with time, while the dynamic sparse multipaths in each symbol are generated randomly. All sparse multipaths are mutually independent, and their gains follow a zero-mean complex Gaussian distribution with exponentially decaying power. The channel length is L = 60, and each multipath delay is an integer multiple of the system sampling interval, so there is no energy leakage. Under the high-speed mobile channel condition, the channel state remains unchanged within an OFDM symbol period, and the OFDM symbols are independent of each other.
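The simulated channel described in this subsection can be sketched as follows (Python/NumPy; the decay constant of the power-delay profile and the default split into 6 common and 4 dynamic taps are our assumptions, since the text fixes only L = 60 and, later, K = 10):

```python
import numpy as np

def gen_jsm1_channel(L=60, T=8, K=10, Lc=6, rng=None):
    """JSM-1 test channel: Lc common taps at fixed delays across all T symbols
    (only their complex gains change), K - Lc dynamic taps redrawn per symbol.
    Gains are zero-mean complex Gaussian, shaped by an exponential power-delay
    profile (decay constant assumed)."""
    if rng is None:
        rng = np.random.default_rng(0)
    pdp = np.exp(-0.1 * np.arange(L))            # assumed exponential decay
    common = rng.choice(L, size=Lc, replace=False)
    free = np.setdiff1d(np.arange(L), common)
    H = np.zeros((L, T), dtype=complex)
    for t in range(T):
        dyn = rng.choice(free, size=K - Lc, replace=False)
        taps = np.concatenate([common, dyn])
        gains = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
        H[taps, t] = gains * np.sqrt(pdp[taps])
    return H, common

H, common = gen_jsm1_channel()
print(np.count_nonzero(H[:, 0]))   # prints 10: K taps in every symbol
```

Each column of H is one OFDM symbol's impulse response: the common taps reappear at the same delays in every column, while the remaining taps move from symbol to symbol.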
Results discussion. Based on the above simulation parameters, the following sets of simulations are carried out in this paper. The effect of the number of common channel taps on the performance of the channel estimation algorithm. Since the proposed algorithm is based on the JSM-1 model, the proportion of common and separate sparse paths in the channel influences the channel estimation algorithm. When the number of common sparse paths increases, the correlation of the channel's sparse multipath structure across adjacent OFDM symbols is strengthened, and vice versa. In this simulation, the maximum channel sparsity K is 10, and the number of common channel taps is represented by the temporal correlation degree L, which varies from 2 to 8. The simulation results are shown in Fig. 1. In addition to the algorithm proposed in this paper, the LS, OMP, SOMP, and Oracle-LS algorithms are used for comparison. Fig. 1 shows that the SOMP algorithm is the one most affected by the temporal correlation L, while the other algorithms are not significantly affected. Because the proposed algorithm includes dynamic channel tap tracking, a change in L only affects its computational complexity, not its performance. The SOMP algorithm, by contrast, assumes a fully shared path delay set by default, so when L increases, the number of shared path delays grows and its performance improves; conversely, as L decreases, its performance degrades. As can be seen from the figure, when L = K = 10 the channel degenerates into the JSM-2 model; at that point the algorithm proposed in this paper also degenerates into the SOMP algorithm, so the estimation performance of the two algorithms is equivalent. The effect of the number of jointly processed symbols on the detection of common channel taps. As an improvement of the SOMP algorithm, the proposed algorithm is also based on the joint sparse model.
As described above, the first part of the algorithm detects the common channel taps by applying the SOMP algorithm jointly to multiple OFDM symbols. Because the delay set of the common channel taps serves as the initial support set for the dynamic channel tap tracking in the second part, the success rate of common channel tap detection directly affects the performance and speed of the second part of the algorithm. Figure 2 simulates the effect of four different numbers of jointly processed symbols on the success rate of common channel tap detection. Clearly, the larger the number of joint symbols, the higher the detection success rate for common channel taps. This can be explained by the fact that the common sparse paths are present in every symbol: when recovery is performed jointly with the SOMP algorithm, the energy of a weak common path is reinforced by superimposing the same path across multiple symbols, making it easier to detect. As the number of symbols involved in the superposition increases, the detection probability therefore also increases. NMSE and BER curves of the various algorithms at different SNR. This group of simulations compares the channel estimation performance of the proposed algorithm, using the normalized mean square error and the bit error rate as evaluation metrics. In this paper, the normalized mean square error (NMSE) is defined as follows: $$ \mathrm{NMSE}=\frac{1}{T}\sum \limits_{t=1}^T\frac{{\left\Vert {\widehat{\mathbf{h}}}^{(t)}-{\mathbf{h}}^{(t)}\right\Vert}_2^2}{{\left\Vert {\mathbf{h}}^{(t)}\right\Vert}_2^2} $$ In Eq. (14), \( {\mathbf{h}}^{(t)} \) denotes the impulse response of the ideal channel and \( {\widehat{\mathbf{h}}}^{(t)} \) the channel estimate obtained by each algorithm. The NMSE and BER simulation results are shown in Fig. 3 and Fig. 4, respectively.
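Eq. (14) translates directly into code; a small sketch, assuming the true and estimated channels are stored with one symbol per row:

```python
import numpy as np

def nmse(H_est, H_true):
    # Eq. (14): average over T symbols of ||h_est - h||^2 / ||h||^2.
    num = np.linalg.norm(H_est - H_true, axis=1) ** 2
    den = np.linalg.norm(H_true, axis=1) ** 2
    return float(np.mean(num / den))
```

A perfect estimate gives 0; an all-zero estimate gives 1.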
In these two simulations, the number of common channel taps and the maximum channel sparsity are set to L = 8 and K = 10, respectively. The channel estimation algorithms used for comparison are LS, OMP, SOMP, and Oracle-LS. To ensure good performance of the LS algorithm, its pilots are inserted as a uniformly spaced comb with spacing 8, while the CS-based algorithms use random pilots. Figures 3 and 4 show that the NMSE and BER performance curves follow essentially the same trends. Among them, the traditional LS algorithm has the worst estimation performance, because the number of pilots is smaller than the channel length and the algorithm itself is strongly affected by noise when estimating sparse paths. Among the CS-based channel estimators, the SOMP algorithm has a clear advantage over the OMP algorithm at low SNR: it can combine multiple symbols for channel reconstruction, so its ability to detect weak paths is better than that of the OMP algorithm, which considers only one symbol. As the SNR increases, however, the inherent mismatch of the SOMP model to time-varying channels limits its performance gain, whereas the OMP algorithm is less affected by noise at high SNR and completes each channel reconstruction in K iterations, so it performs better in the high-SNR regime. The improved SOMP algorithm proposed in this paper outperforms both OMP and SOMP because it additionally tracks the dynamic channel taps, which is consistent with the results of the preceding mathematical analysis. In addition, the Oracle-LS comparison curve in this group of simulations is the LS estimate computed with the multipath delay set of the time-varying sparse channel known in advance, which gives the theoretical performance limit under the LS criterion. Analysis of algorithm complexity under different channel sparsity.
[Figure captions: Fig. 1, the NMSE with different temporal correlation degrees L; Fig. 2, the correct detection probability of common channel taps with different SNR; Fig. 3, the NMSE with different SNR; Fig. 4, the BER with different SNR.] For a greedy algorithm, the number of iterations is generally the signal sparsity K: the higher the sparsity K, the higher the complexity of the greedy algorithm. Since the proposed algorithm is also a greedy algorithm, its complexity is analyzed by measuring the CPU running time of channel estimation under different sparsity values K. In the simulation, the CPU running time is the time taken to estimate the channels of 10 consecutive OFDM symbols, and a total of 100 simulation runs are performed. The comparison algorithms in this group of simulations are LS, OMP, SP, and SOMP. The simulation results are shown in Fig. 5. As can be seen from Fig. 5, except for the LS algorithm, the CPU running time of the other algorithms increases exponentially with K. Among the four greedy algorithms, the OMP and SP channel estimators work symbol by symbol, so their running time is much larger than that of the channel estimation algorithms based on the joint sparse model. In particular, since the SP algorithm selects K column atoms in each iteration, its complexity rises most steeply of all the algorithms as K increases. Comparing the SOMP algorithm with the algorithm proposed in this paper, the improved SOMP adds dynamic channel tap tracking, so its CPU running time is longer; in effect, it trades speed for higher estimation accuracy. [Fig. 5 caption: CPU running time for channels with different sparsity K.] This paper has studied a joint estimation method for high-speed mobile communication channels based on an improved SOMP algorithm. Firstly, the distributed compressed sensing theory and the joint sparse model were introduced.
Secondly, the existing time-domain joint channel estimation method based on the SOMP algorithm was introduced. On this basis, the paper analyzed the defect of existing algorithms, namely that they do not take channel time variation into account, and proposed an improved SOMP algorithm based on the JSM-1 model for joint channel estimation. Simulation results show that, in a time-varying channel environment, the proposed algorithm achieves better estimation performance and lower complexity than the existing symbol-by-symbol CS channel estimation algorithms. Abbreviations: BER: bit error rate; CS: compressive sensing; DCS: distributed compressed sensing; SOMP: simultaneous orthogonal matching pursuit. References: D.L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006). E.J. Candès, M.B. Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. 25(2), 21–30 (2008). S.G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 41(12), 3397–3415 (1993). S.F. Cotter, B.D. Rao, Matching pursuit based decision-feedback equalizers, in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2713–2716 (2000). S.F. Cotter, B.D. Rao, Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 50(3), 374–377 (2002). D. Wang, B. Han, J. Zhao, et al., Channel estimation algorithms for broadband MIMO-OFDM sparse channel, in Proc. IEEE Int. Symp. on Personal, Indoor and Mobile Radio Communications, 1929–1933 (2003). W. Li, J.C. Preisig, Estimation of rapidly time-varying sparse channels. IEEE J. Ocean. Eng. 32(4), 927–939 (2008). S. Mason, C. Berger, S. Zhou, et al.,
An OFDM design for underwater acoustic channels with Doppler spread, in Proc. IEEE Digital Signal Processing Workshop and Signal Processing Education Workshop, 138–143 (2009). C.R. Berger, Z. Wang, J. Huang, et al., Application of compressive sensing to sparse channel estimation. IEEE Commun. Mag. 48(11), 164–174 (2010). J. Huang, S. Zhou, J. Huang, et al., Progressive inter-carrier interference equalization for OFDM transmission over time-varying underwater acoustic channels. IEEE J. Sel. Top. Signal Process. 5(8), 1524–1536 (2011). J.A. Tropp, A.C. Gilbert, M.J. Strauss, Simultaneous sparse approximation via greedy pursuit, in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 721–724 (2005). Acknowledgements: The research presented in this paper was supported by the National Natural Science Foundation of China and the Natural Science Foundation of Jiangsu Province of China. The authors acknowledge the National Natural Science Foundation of China (grants 11574120 and U1636117), the Natural Science Foundation of Jiangsu Province of China (grant BK20161359), and the Science and Technology on Underwater Acoustic Antagonizing Laboratory, Systems Engineering Research Institute of CSSC (grant MB80038). Author affiliation: School of Electronic and Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China: Biao Wang, Yufeng Ge, Cheng He, You Wu, and Zhiyu Zhu. Author contributions: YG and BW are the main writers of this paper; they proposed the main idea, derived the algorithm theory, completed the simulations, and analyzed the results. CH and YW contributed to the simulations. ZZ gave important suggestions for improving the SOMP algorithm. All authors read and approved the final manuscript. Correspondence to Biao Wang. Wang, B., Ge, Y., He, C. et al.
Study on communication channel estimation by improved SOMP based on distributed compressed sensing. J Wireless Com Network 2019, 121 (2019). https://doi.org/10.1186/s13638-019-1464-7 Keywords: channel estimation; compressed sensing; joint sparse model; orthogonal frequency division multiplexing (OFDM).
Are all atoms spherically symmetric? If so, why are atoms with half-filled/filled sub-shells often quoted as 'especially' spherically symmetric? In general, atoms need not be spherically symmetric. The source you've given is flat-out wrong. The wavefunction it mentions, $\varphi=\frac{1}{\sqrt3}[2p_x+2p_y+2p_z]$, is in no way spherically symmetric. This is easy to check: the wavefunction for the $2p_z$ orbital is $\psi_{2p_z}(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:z \:e^{-r/2a_{0}}$ (and similarly for $2p_x$ and $2p_y$), so the wavefunction of the combination is $$\varphi(\mathbf r)=\frac {1}{\sqrt {32\pi a_0^5}}\:\frac{x+y+z}{\sqrt 3} \:e^{-r/2a_{0}},$$ i.e., a $2p$ orbital oriented along the $(\hat{x}+\hat y+\hat z)/\sqrt3$ axis. This is an elementary fact and it can be verified at the level of an undergraduate text in quantum mechanics (and it was also obviously wrong in the 1960s). It is extremely alarming to see it published in an otherwise-reputable journal. On the other hand, there are some states of the hydrogen atom in the $2p$ shell which are spherically symmetric, if you allow for mixed states, i.e., a classical probabilistic mixture $\rho$ of hydrogen atoms prepared in the $2p_x$, $2p_y$ and $2p_z$ states with equal probabilities. It is important to emphasize that it is essential that the mixture be incoherent (i.e. classical and probabilistic, as opposed to a quantum superposition) for the state to be spherically symmetric. As a general rule, if all you know is that you have "hydrogen in the $2p$ shell", then you do not have sufficient information to know whether it is in a spherically-symmetric or an anisotropic state. If that's all the information available, the initial presumption is to take a mixed state, but the next step is to look at how the state was prepared: The $2p$ shell can be prepared through isotropic processes, such as by excitation through collisions with a non-directional beam of electrons of the correct kinetic energy. 
In this case, the atom will be in a spherically-symmetric mixed state. On the other hand, it can also be prepared via anisotropic processes, such as photo-excitation with polarized light. In that case, the atom will be in an anisotropic state, and the direction of this anisotropy will be dictated by the process that produced it. It is extremely tempting to think (as discussed previously e.g. here, here and here, and links therein) that the spherical symmetry of the dynamics (of the nucleus-electron interactions) must imply spherical symmetry of the solutions, but this is obviously wrong $-$ to start with, it would apply equally well to the classical problem! The spherical symmetry implies that, for any anisotropic solution, there exist other, equivalent solutions with complementary anisotropies, but that's it. The hydrogen case is a bit special because the $2p$ shell is an excited state, and the ground state is symmetric. So, in that regard, it is valid to ask: what about the ground states of, say, atomic boron? If all you know is that you have atomic boron in gas phase in its ground state, then indeed you expect a spherically-symmetric mixed state, but this can still be polarized to align all the atoms into the same orientation. As a short quip: atoms can have nontrivial shapes, but the fact that we don't know which way those shapes are oriented does not make them spherically symmetric. So, given an atom (perhaps in a fixed excited state), what determines its shape? In short: its term symbol, which tells us its angular momentum characteristics, or, in other words, how it interacts with rotations. The only states with spherical symmetry are those with vanishing total angular momentum, $J=0$. If this is not the case, then there will be two or more states that are physically distinct and which can be related to each other by a rotation. It's important to note that this anisotropy could be in the spin state, such as with the $1s$ ground state of hydrogen. 
If you want to distinguish the states with isotropic vs anisotropic charge distributions, then you need to look at the total orbital angular momentum, $L$. The charge distribution will be spherically symmetric if and only if $L=0$. A good comprehensive source for term symbols of excited states is the Levels section of the NIST ASD. The reference paper is actually not really wrong, just poorly communicated for this physics audience. If you read the paper this letter is responding to, it becomes clear that the authors are referring to the probability distribution of an incoherent sum of the referenced orbitals. The letter to the editor sloppily writes (emphasis mine) We therefore must describe the electron as (to use chemically familiar language) a "resonance hybrid" of $2p_x, 2p_y,$ and $2p_z$. In more detail, we write if $ \psi = \frac{1}{\sqrt{3}}(2p_x + 2p_y + 2p_z)$ and this is exactly a spherical distribution, as Johnson and Rettew have shown. Where, out of context, every physicist assumes that the author specifically means a coherent sum of wavefunctions, and where both the wavefunction and the probability distribution resulting from it are not spherically symmetric. However, I believe (I'm no chemist) to a chemistry audience the common phrase "resonance hybrid" would immediately imply an incoherent superposition of the given states, as there's nothing particularly coherent about normal chemistry. The word "distribution" also hints that something is funny, as it's not typical to call the wavefunction itself a "distribution". Specifically, Johnson and Rettew showed that $\psi_{2p_x}^2 + \psi_{2p_y}^2 + \psi_{2p_z}^2$ is spherically symmetrical, which it is. Since there is basically only one equation in the referenced article, this is clearly what Cohen was referring to. 
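The distinction is easy to verify symbolically. In the sketch below the shared normalization and radial factor $e^{-r/a_0}$ are dropped, since they affect neither conclusion:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r2 = x**2 + y**2 + z**2                      # r^2 in Cartesian coordinates

# Angular parts of the three 2p densities: |psi_2p_i|^2 ~ (coordinate)^2.
incoherent = x**2 + y**2 + z**2              # classical mixture of the three
coherent = ((x + y + z) / sp.sqrt(3))**2     # |(2p_x + 2p_y + 2p_z)/sqrt(3)|^2

# The incoherent sum depends on position only through r ...
assert sp.simplify(incoherent - r2) == 0
# ... while the coherent combination changes under the rotation x -> -x.
assert sp.simplify(coherent - coherent.subs(x, -x)) != 0
```

So the incoherent sum $\psi_{2p_x}^2 + \psi_{2p_y}^2 + \psi_{2p_z}^2$ reduces to a function of $r$ alone, while the coherent combination does not.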
The phrasing of the letter to the editor could clearly have been better here, but good communication does take effort from both sides, especially when the two sides come from different fields where notation is not so well standardized or understood. For completeness, if a shell is partially filled then there's the possibility of there being a specific angle between the orbits of electrons in the inner shell and the electrons in the outermost shell (think, e.g., of two concentric donuts rotating independently). Even in a mixed state, the outermost electron would be in a mixed state of interacting with various orientations of inner electrons, none of which is isotropic on its own, which suggests that assuming a central field approximation will miss some important physics. No coherent superposition of 2p orbitals is spherically symmetric. Your example $\frac{1}{\sqrt3}[2p_x+2p_y+2p_z]$ is a 2p orbital pointing in the 111 direction and is not spherical. The proper description is by a diagonal density matrix, which states that the atom is in an incoherent superposition of the three states.
Slonimski's Theorem Slonimski's Theorem is an observation by Hayyim Selig Slonimski that the sequence of carry digits in a multiplication table is the Farey sequence. This observation allowed Slonimski to create very compact multiplication tables for use in hand calculations. He received several awards for different devices for presenting these tables. The most common format was Joffe Bars, similar to Napier's Rods. Joffe Bars were popular in Eastern Europe in the late 19th and early 20th century. References • Weiss, Stephan (2011), "Slonimsky's Multiplying Device, an Impressive Example for Applied Mathematics" (PDF), Journal of the Oughtred Society, 20 (1): 23–30. Provides a derivation of Slonimski's theorem, and some details on the calculating machine. • Knight, Henry (1847), Multiplication Tablets: Derived from a theorem of S. Slonimski (PDF), Birmingham: Josiah Allen and Son. Provides a complete set of tables. • Monnier, Valéry; Szrek, Walter; Zalewski, Janusz (2013), "Chaim Selig Slonimski and his adding devices", IEEE Ann. Hist. Comput., 35 (3): 42–53, MR 3111378
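As a rough illustration of the quantity being tabulated (our sketch of the idea, not Slonimski's actual table layout; the precise Farey-sequence statement is derived in the Weiss reference): multiplying a number by a single digit produces a carry at each decimal position, and compact tables of this type encode those carry sequences.

```python
def carry_digits(a: int, k: int):
    # Carries produced when multiplying the decimal number a by the single
    # digit k, working right to left as in hand multiplication.
    carries, carry = [], 0
    for d in map(int, reversed(str(a))):
        carry, _ = divmod(d * k + carry, 10)
        carries.append(carry)
    return carries[::-1]      # most-significant position first
```

For example, 49 × 7 = 343 produces the carries [3, 6]: a carry of 6 out of the units place and 3 out of the tens place.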
\begin{definition}[Definition:Catalan Number] The '''Catalan Numbers''' $C_n$ are a sequence of natural numbers defined by: :$C_n = \dfrac 1 {n + 1} \dbinom {2 n} n $ \end{definition}
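A direct computation from this definition, using exact integer arithmetic (the division by $n + 1$ is always exact):

```python
from math import comb

def catalan(n: int) -> int:
    # C_n = binom(2n, n) / (n + 1); the quotient is always an integer.
    return comb(2 * n, n) // (n + 1)
```

The first values are 1, 1, 2, 5, 14, 42, ...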
\begin{document} \centerline{\Large The Necessary and Sufficient Conditions of Separability} \centerline{\Large for Multipartite Pure States } \footnote{The paper was supported by NSFC(Grant No. 60433050), the fundamental research fund of Tsinghua university NO: JC2003043 and partially by the state key lab. of intelligence technology and system} \centerline{Dafa Li$^{1,*}$, Xiangrong Li$^{2}$, Hongtao Huang$^{3}$, Xinxin Li$^{4}$ } \centerline{$^{1}$ Dept of mathematical sciences, Tsinghua University, Beijing 100084 CHINA} \centerline{email:[email protected]} \centerline{$^{2}$ Department of Mathematics, University of California, Irvine, CA 92697-3875, USA} \centerline{$^{3}$ Electrical Engineering and Computer Science Department} \centerline{University of Michigan, Ann Arbor, MI 48109, USA} \centerline{$^{4}$ Dept. of computer science, Wayne State University, Detroit, MI 48202, USA} Abstract In this paper we present the necessary and sufficient conditions of separability for multipartite pure states. These conditions are very simple, and they don't require Schmidt decomposition or tracing out operations. We also give a necessary condition for a local unitary equivalence class for a bipartite system in terms of the determinant of the matrix of amplitudes and explore a variance as a measure of entanglement for multipartite pure states. Keywords: Entanglement, measure of entanglement, quantum computing, separability. PACS numbers:03.67.Lx, 03.67.Hk. \section{Introduction:} Notation: $M^{+}$ is the complex conjugate of transpose of $M$. Let $|\psi \rangle $ and $|\phi \rangle $ be two pure states of a composite system $AB$ possessed by both Alice and Bob, where system $A$ ($B$) is called Alice's (Bob's) system. By Nielsen's notation $|\psi \rangle $ $\sim $ $|\phi \rangle $ if and only if $|\psi \rangle $ \ and $|\phi \rangle $ are locally unitarily equivalent \cite{Nielsen99}. Let $\rho _{\psi }^{A}$ and $ \rho _{\phi }^{A}$ be the states of Alice's system. 
It is known that $|\psi \rangle $ $\sim $ $|\phi \rangle $ \ if and only if $\rho _{\psi }^{A}$ and $ \rho _{\phi }^{A}$ have the same spectrum of eigenvalues \cite{Nielsen99} \cite{Peres}. A pure state is separable if and only if it can be written as a tensor product of states of different subsystems. It is also known that a state $|\psi \rangle $ of a bipartite system is separable if and only if it has Schmidt number 1 \cite{Nielsen00}. Clearly it is essential to do Schmidt decomposition to find the eigenvalues of $\rho _{\psi }^{A}$ and $\rho _{\phi }^{A}$. To obtain a Schmidt decomposition of a pure state $|\psi \rangle $, we need to compute (1) the density operator $\rho _{\psi }^{AB}$; (2) the reduced density operator $\rho _{\psi }^{A}$ for system $A$; (3) the eigenvalues of $\rho _{\psi }^{A}$. However it is hard to compute roots of a characteristic polynomial of high degree. Peres presented a necessary and sufficient condition for the occurrence of Schmidt decomposition for a tripartite pure state \cite{Peres95}\ and showed that the positivity of the partial transpose of a density matrix is a necessary condition for separability \cite{Peres96}. Thapliyal showed that a multipartite pure state is Schmidt decomposable if and only if the density matrices obtained by tracing out any party are separable \cite{Thapliyal99}. In \cite{Grassl} the local invariants of quantum-bit systems were investigated. In \cite{Sudbery00}\cite{Sudbery01} the local symmetry properties and local invariants of pure three-qubit states were discussed, respectively. In \cite{Acin} the classification of three-qubit states was given. Bennett reported measures of multipartite pure-state entanglement in \cite{Bennett00}. Meyer and Wallach \cite{Meyer}\ proposed a measure of $n-$ qubit pure-state entanglement. Nielsen used the majorization of the eigenvalues of the reduced density operators of a composite system $AB$ to describe the equivalence class under LOCC transformations. 
For a multi ($n$)$-$partite system, in this paper we illustrate the reduced density operators obtained by tracing out the $ith$ subsystem $\rho ^{12...(i-1)(i+1)...n}=tr_{i}(\rho ^{12...n})=M_{i}M_{i}^{+}$, where $ i=1,2,...,n$\ and $M_{i}$ are the $d^{n-1}\times d$\ matrices, of which every entry is an amplitude of the state in question. For a bipartite system $AB$, the reduced density operator $\rho _{\psi }^{A}$ ($\rho _{\psi }^{B}$)$ =MM^{+}$, where $M$ is the matrix of the amplitudes. Hence $\det (\rho _{\psi }^{A})=|\det (M)|^{2}$. However, for a multi ($n$)$-$partite system, $ M_{i}$ are not square. In section 2, we present a necessary and sufficient condition for separability for a bipartite system in terms of the determinants of all the $2\times 2$\ submatrices of the matrix of the amplitudes. Section 3 contains three versions of the necessary and sufficient separability criterion for a $n-$qubit system. Section 4 is devoted to study the separability of multipartite pure states, and two versions of the necessary and sufficient separability criterion are proposed. Section 5 gives a simple necessary criterion for $|\psi \rangle $ $ \sim $ $|\phi \rangle $ for a bipartite system. Section 6 suggests an intuitive measure of multipartite pure-state entanglement. \section{The separability for a bipartite system} Let $|\psi \rangle $ be a pure state of a composite system $AB$ possessed by both Alice and Bob. In this section we give a simple and intuitive criterion for the separability. Let $|i\rangle $ ($|j\rangle $) be the orthonormal basis for system $A$ ($B$). Then we can write $|\psi \rangle =\sum_{i,j}a_{ij}|i\rangle |j\rangle $, where $ \sum_{ij=0}^{n-1}|a_{ij}|^{2}=1$. Let $M$ $=(a_{ij})_{n\times n}$ be the matrix of the amplitudes of $|\psi \rangle $. Then the criterion for the separability is as follows. 
$|\psi \rangle $\textbf{\ }is separable if and only if the determinants of all the $2\times 2$\ submatrices of $M$\ are zero.\textbf{\ } This criterion for the separability avoids Schmidt decomposition. To compute the determinants, it needs $n^{2}(n-1)^{2}/2$ multiplication operations and $ n^{2}(n-1)^{2}/4$ subtraction operations. Proof. Suppose that systems $A$ and $B$ have the same dimension $n$.\ By definition, $|\psi \rangle $ is separable if and only if we can write $|\psi \rangle =(\sum_{i=0}^{n-1}x_{i}|i\rangle )\otimes (\sum_{j=0}^{n-1}y_{j}|j\rangle )$, where $\sum_{i=0}^{n-1}|x_{i}|^{2}=1$ and $\sum_{j=0}^{n-1}|y_{j}|^{2}=1$. By tensor product $|\psi \rangle =\sum_{i,j=0}^{n-1}x_{i}y_{j}|i\rangle |j\rangle $. It means that $|\psi \rangle $ is separable if and only if $x_{i}y_{j}=a_{ij}$, $ i,j=0,1,...,(n-1)......(1)$. Let $m=\left( \begin{tabular}{cc} $a_{il}$ & $a_{ik}$ \\ $a_{jl}$ & $a_{jk}$ \end{tabular} \right) $ be any $2\times 2$ submatrix of $M$. It is easy to check $\det (m)=a_{il}a_{jk}-a_{ik}a_{jl}=x_{i}y_{l}x_{j}y_{k}-x_{i}y_{k}x_{j}y_{l}=0$. Therefore if $|\psi \rangle $ is separable then the determinants of all the $ 2\times 2$ submatrices of $M$\ are zero. Conversely, suppose that the determinants of all the $2\times 2$ submatrices of $M$ are zero. We can write $M$ in the block form, $M=\left( \begin{tabular}{c} $A_{0}$ \\ $A_{1}$ \\ $\vdots $ \\ $A_{n-1}$ \end{tabular} \right) =(B_{0},B_{1},...,B_{n-1})$, where $A_{i}$ is the $ith$ row and $ B_{i}$ is the $ith$ column of $M$, respectively, $i=0,1,...,(n-1)$. Let $ \left| x_{i}\right| ^{2}=A_{i}A_{i}^{+}......(2)$ and $\left| y_{j}\right| ^{2}=B_{j}^{+}B_{j}......(3)$, $i,j=0,1,...,(n-1)$, respectively. Under the supposition we can show that the above $x_{i}$ in (2) and $y_{j}$ in (3) satisfy (1). Let us consider the case in which all the $a_{ij}$ are real. It is not hard to extend the result to the case in which all the $a_{ij}$ are complex.
We only show $\left| x_{0}y_{0}\right| ^{2}=\left| a_{00}\right| ^{2}$ and omit the others. From (2) and (3), $\left| x_{0}y_{0}\right| ^{2}=$ $A_{0}A_{0}^{+}B_{0}^{+}B_{0}=(\sum_{j=0}^{n-1}\left| a_{0j}\right| ^{2})(\sum_{i=0}^{n-1}\left| a_{i0}\right| ^{2})=\sum_{i,j=0}^{n-1}\left| a_{0j}\right| ^{2}\left| a_{i0}\right| ^{2}=\sum_{i,j=0}^{n-1}\left| a_{00}\right| ^{2}\left| a_{ij}\right| ^{2}=\left| a_{00}\right| ^{2}$. In the last but one step we use the equality $\left| a_{0j}\right| ^{2}\left| a_{i0}\right| ^{2}=\left| a_{00}\right| ^{2}\left| a_{ij}\right| ^{2}$, which holds since $\left( \begin{tabular}{cc} $a_{00}$ & $a_{0j}$ \\ $a_{i0}$ & $a_{ij}$ \end{tabular} \right) $ is a $2\times 2$ submatrix of $M$. This completes the proof. {\large Corollary} If $|\psi \rangle $ is separable then $\det (M)=0$. \section{The separability for a $n-$qubit system} Let $|\psi \rangle $ be a pure state of a $n-$qubit system. Then we can write $|\psi \rangle =\sum_{i_{1},i_{2},...,i_{n}\in \{0,1\}}a_{i_{1}i_{2}...i_{n}}|i_{1}i_{2}...i_{n}\rangle $. Let the density operator $\rho ^{12...n}=|\psi \rangle \langle \psi |$ and $\rho ^{12...(i-1)(i+1)...n}$ be the reduced density operator obtained by tracing out the $ith$ qubit. Then $\rho ^{12...(i-1)(i+1)...n}=tr_{i}(\rho ^{12...n})=M_{i}M_{i}^{+}$, where $i=1,2,...,n$ and $M_{i}$ are $ 2^{(n-1)}\times 2$ matrices of the form $\left( a_{b_{1}b_{2}...b_{i-1}0b_{i+1}...b_{n}},a_{b_{1}b_{2}...b_{i-1}1b_{i+1}...b_{n}}\right) $ in which $b_{1}$,$b_{2}$,$...$,$b_{n}\in \{0,1\}$. For example, let $|\psi \rangle $ be a state of a 3-qubit system. Then $ |\psi \rangle $ can be written as $|\psi \rangle =\sum_{i=0}^{7}a_{i}|i\rangle $. $M_{3}$ is a $4\times 2$ matrix $\left( \begin{tabular}{cc} $a_{0}$ & $a_{1}$ \\ $a_{2}$ & $a_{3}$ \\ $a_{4}$ & $a_{5}$ \\ $a_{6}$ & $a_{7}$ \end{tabular} \right) $. Each entry of $M_{3}$ is an amplitude of $|\psi \rangle $. There are three versions of the separability. Version 1. 
$|\psi \rangle $ is separable if and only if the determinants of all the $2\times 2$ submatrices of $M_{1}$,$M_{2}$,.... and $M_{n}$ are zero. The proof of version 1 is similar to the one for a bipartite system in section 2. Version 2. $|\psi \rangle $ is separable if and only if $ a_{i}a_{j}=a_{k}a_{l}$, where $i+j=k+l$ and $i\oplus j=k\oplus l$, with $ 0\leq i,j,k,l\leq 2^{n}-1$ regarded as $n-$bit strings$\ $and $\oplus $ indicating addition modulo 2. For example, $2$, $7$, $5$ and $4$ can be written in binary numbers as $ 010,111,101$ and $100$, respectively. It is well known $010+111$(modulo 2)$ =101$, $101$ $+$ $100=001$(modulo 2). Therefore $2+7\neq 5+4$(modulo 2) though $2+7=5+4=9$. Using this condition it is easy to verify that states $|W\rangle =1/\sqrt{n} (|2^{0}\rangle +$ $|2^{1}\rangle +...+|2^{n-1}\rangle )$ and $|GHZ\rangle =1/ \sqrt{2}(|0^{(n)}\rangle +|1^{(n)}\rangle )$ for a $n-$qubit system \cite {Dur}\ are entangled. Let $i_{1}i_{2}...i_{n}$, $j_{1}j_{2}...j_{n}$, $k_{1}k_{2}...k_{n}$ and $ l_{1}l_{2}...l_{n}$ be $n-$bit strings of $i$,$j,k$ and $l$, respectively. Then version 3 is phrased below. Version 3. $|\psi \rangle $ is separable if and only if $ a_{i}a_{j}=a_{k}a_{l}$, where $\{i_{t},j_{t}\}=\{k_{t},l_{t}\}$, $ t=1,2,...,n $. The following lemma 1 shows that versions 2 and 3 are equivalent to each other. Lemma 1. $i+j=k+l$ and $i\oplus j=k\oplus l$ if and only if $ \{i_{t},j_{t}\}=\{k_{t},l_{t}\}$, $t=1,2,...,n$. The proof of lemma 1 is put in appendix A. We argue version 3 next. Assume that $|\psi \rangle =(x_{0}^{(1)}|0\rangle +x_{1}^{(1)}|1\rangle )\otimes (x_{0}^{(2)}|0\rangle +x_{1}^{(2)}|1\rangle )\otimes ...\otimes (x_{0}^{(n)}|0\rangle +x_{1}^{(n)}|1\rangle )$. By tensor product $ x_{i_{1}}^{(1)}x_{i_{2}}^{(2)}....x_{i_{n}}^{(n)}=a_{i_{1}i_{2}...i_{n}}$, where $i_{t}=0,1$, $t=1,2,...,n$.
Then $ a_{i}a_{j}=x_{i_{1}}^{(1)}x_{j_{1}}^{(1)}x_{i_{2}}^{(2)}x_{j_{2}}^{(2)}....x_{i_{n}}^{(n)}x_{j_{n}}^{(n)} $ and $ a_{k}a_{l}=x_{k_{1}}^{(1)}x_{l_{1}}^{(1)}x_{k_{2}}^{(2)}x_{l_{2}}^{(2)}....x_{k_{n}}^{(n)}x_{l_{n}}^{(n)} $. Hence $a_{i}a_{j}=a_{k}a_{l}$ whenever $\{i_{t}$,$ j_{t}\}=\{k_{t},l_{t}\}$, $t=1,2,...,n$. Conversely, suppose that $a_{i}a_{j}=a_{k}a_{l}$ whenever $ \{i_{t},j_{t}\}=\{k_{t},l_{t}\}$, $t=1,2,...,n$. Let $\left| x_{i_{t}}^{(t)}\right| ^{2}=\sum_{i_{1},..,i_{t-1},i_{t+1},..,i_{n}\in \{0,1\}}\left| a_{i_{1}i_{2},...,i_{n}}\right| ^{2}$, where $t=1,2,...,n$. We can show $ |x_{i_{1}}^{(1)}x_{i_{2}}^{(2)}....x_{i_{n}}^{(n)}|^{2}=|a_{i_{1}i_{2}...i_{n}}|^{2} $. We only demonstrate the cases of $n=2$ and $3$ to give the essential ideas of the general case. When $n=2$, see section 2. When $n=3$, see appendix B. The two cases suggest that it is simpler to prove $ |x_{i_{1}}^{(1)}x_{i_{2}}^{(2)}....x_{i_{n}}^{(n)}|^{2}=|a_{i_{1}i_{2}...i_{n}}|^{2}\left( \sum |a_{i_{1}i_{2}...i_{n}}|^{2}\right) ^{n-1} $, which reduces to the desired identity since the amplitudes are normalized. This finishes the argument for the real-number case; it is not hard to extend the result to the complex-number case. \section{The separability for a multipartite ($n$-partite) system} Assume that each subsystem has the same dimension $d$. Let $|i_{t}\rangle $ range over the orthonormal basis $|0\rangle $,$|1\rangle $,...,$|(d-1)\rangle $ of the $t$th subsystem. Then any pure state $|\psi \rangle $\ can be written as $|\psi \rangle =\sum_{i_{1},i_{2},...,i_{n}=0}^{d-1}a_{i_{1}i_{2}...i_{n}}|i_{1}i_{2}...i_{n}\rangle $. Assume that $|\psi \rangle $ is separable. Then we can write $|\psi \rangle =\left( \sum_{i_{1}=0}^{d-1}x_{i_{1}}^{(1)}|i_{1}\rangle \right) \otimes \left( \sum_{i_{2}=0}^{d-1}x_{i_{2}}^{(2)}|i_{2}\rangle \right) \otimes ...\otimes \left( \sum_{i_{n}=0}^{d-1}x_{i_{n}}^{(n)}|i_{n}\rangle \right) $. 
By the tensor product structure, $ x_{i_{1}}^{(1)}x_{i_{2}}^{(2)}....x_{i_{n}}^{(n)}=a_{i_{1}i_{2}...i_{n}}$, where $i_{1}$,$i_{2}$,$...$,$i_{n}\in \{0,1,...,(d-1)\}$. Let the density operator be $\rho ^{12...n}=|\psi \rangle \langle \psi |$ and let $ \rho ^{12...(i-1)(i+1)...n}$ be the reduced density operator obtained by tracing out the $i$th subsystem. Then $\rho ^{12...(i-1)(i+1)...n}=tr_{i}(\rho ^{12...n})=M_{i}M_{i}^{+}$, where $ i=1,2,...,n$ and $M_{i}$ are $d^{n-1}\times d$ matrices of amplitudes whose rows are of the form \noindent $\left( a_{k_{1}k_{2}...k_{i-1}0k_{i+1}...k_{n}},a_{k_{1}k_{2}...k_{i-1}1k_{i+1}...k_{n}},...,a_{k_{1}k_{2}...k_{i-1}(d-1)k_{i+1}...k_{n}}\right) $, where $k_{1}$,$k_{2}$,...,$k_{i-1}$,$k_{i+1}$,...,$k_{n}\in \{0,1,...,(d-1)\}$. There are two versions of the separability criterion. Version 1. $|\psi \rangle $ is separable if and only if the determinants of all the $2\times 2$ submatrices of $M_{1}$, $M_{2}$, ... and $M_{n}$ are zero. Version 2. $|\psi \rangle $ is separable if and only if $ a_{i_{1}i_{2}...i_{n}}a_{j_{1}j_{2}...j_{n}}=a_{k_{1}k_{2}...k_{n}}a_{l_{1}l_{2}...l_{n}} $ whenever $\{i_{t},j_{t}\}=\{k_{t},l_{t}\}$, $t=1,2,...,n$. The proof of version 1 is similar to the one for a bipartite system. The proof of version 2 is similar to the one for an $n$-qubit system. When $n=2$, the criterion reduces to the one for a bipartite system. When $d=2$, the criterion reduces to the one for an $n$-qubit system. \section{A necessary condition for a local unitary equivalence class for a bipartite system} We use the following lemma 2 to establish the necessary condition. Lemma 2. Let $|\psi \rangle $ be a pure state of a composite system $AB$ possessed by both Alice and Bob. Let $M$ $=(a_{jk})_{n\times n}$ be the matrix of the amplitudes of $|\psi \rangle $. Let $\rho ^{AB}=$ $|\psi \rangle \langle \psi |$ and $\rho ^{A}$ $=tr_{B}(\rho ^{AB})$. Then $\left| \det (M)\right| ^{2}$ is just the product of the eigenvalues of $\rho ^{A}$. The proof is given in appendix C. 
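Lemma 2 can be checked numerically. The following sketch (ours, not part of the paper) draws a random normalized amplitude matrix $M$ and verifies that $|\det(M)|^{2}$ equals the product of the eigenvalues of $\rho^{A}=MM^{+}$:

```python
import numpy as np

# Numerical sanity check of Lemma 2: for a bipartite pure state with
# amplitude matrix M, the reduced density operator is rho_A = M M^+,
# so |det M|^2 = det(rho_A) = product of the eigenvalues of rho_A.
rng = np.random.default_rng(0)

n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M /= np.linalg.norm(M)            # normalize: sum of |a_ij|^2 equals 1

rho_A = M @ M.conj().T            # tracing out system B gives M M^+
eigs = np.linalg.eigvalsh(rho_A)  # eigenvalues of the reduced state

lhs = abs(np.linalg.det(M)) ** 2
rhs = float(np.prod(eigs))
assert np.isclose(lhs, rhs)
assert np.isclose(eigs.sum(), 1.0)  # trace of rho_A is 1
```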
Lemma 2 reveals the relation between the determinant of the matrix of the amplitudes and the eigenvalues of $\rho ^{A}$ for a bipartite system. {\large The corollary of lemma 2} Let $M_{\psi }$ ($M_{\phi }$) be the matrix of the amplitudes of a pure state $|\psi \rangle $ ($|\phi \rangle $) of a composite system $AB$. Then $ \left| \det (M_{\psi })\right| =\left| \det (M_{\phi })\right| $ whenever $ |\psi \rangle $ $\sim $ $|\phi \rangle $. That is, $\left| \det (M_{\psi })\right| $ is invariant under local unitary operators. Computing $\left| \det (M)\right| $ needs only $O(n^{3})$ multiplication operations, instead of performing the Schmidt decomposition as in \cite{Nielsen99}\cite{Peres}. For a two-qubit system, let $|\psi \rangle =a|00\rangle +b|01\rangle +c|10\rangle +d|11\rangle $ and $\rho ^{12}=|\psi \rangle \langle \psi |$. By lemma 2, $|ad-bc|^{2}$ is the product of the eigenvalues of $\rho ^{1}$. Let $|ad-bc|=\epsilon $. We can show that $\epsilon $ satisfies $0\leq \epsilon \leq \frac{1}{2}$ and that the eigenvalues are $\lambda _{\pm }=\frac{1\pm \sqrt{1-4\epsilon ^{2}}}{2}$. Hence, $|\psi \rangle \sim \sqrt{\lambda _{+}}|00\rangle +\sqrt{\lambda _{-}} |11\rangle $ or $|\psi \rangle \sim \sqrt{\lambda _{-}}|00\rangle +\sqrt{ \lambda _{+}}|11\rangle $.\ \section{The variance as a measure of entanglement} We obtained necessary and sufficient conditions for separability in sections 2, 3 and 4. Apparently, $\left| a_{i_{1}i_{2}...i_{n}}a_{j_{1}j_{2}...j_{n}}-a_{k_{1}k_{2}...k_{n}}a_{l_{1}l_{2}...l_{n}}\right| $, where $\{i_{t},j_{t}\}=\{k_{t},l_{t}\}$, $t=1,2,...,n$, is just a deviation from a product state. It is therefore natural to suggest the variance $ \sum \left| a_{i_{1}i_{2}...i_{n}}a_{j_{1}j_{2}...j_{n}}-a_{k_{1}k_{2}...k_{n}}a_{l_{1}l_{2}...l_{n}}\right| ^{2} $, where $\{i_{t},j_{t}\}=\{k_{t},l_{t}\}$, $t=1,2,...,n$, as a measure of entanglement of $|\psi \rangle $. Let $D_{E}(|\psi \rangle )$ denote this measure of entanglement. 
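In the two-qubit case the measure reduces to $D_{E}(|\psi\rangle)=|ad-bc|^{2}$ and is trivial to compute; the following toy check (ours, not from the paper) evaluates it on a product state and on a Bell state:

```python
import numpy as np

# D_E for |psi> = a|00> + b|01> + c|10> + d|11> (notation as in the text)
def d_e(a, b, c, d):
    return abs(a * d - b * c) ** 2

s = 1 / np.sqrt(2)

# Product state (|0> + |1>)/sqrt(2) (x) |0>: separable, so D_E = 0
assert np.isclose(d_e(s, 0.0, s, 0.0), 0.0)

# Bell state (|00> + |11>)/sqrt(2): attains the maximum D_E = 1/4
assert np.isclose(d_e(s, 0.0, 0.0, s), 0.25)
```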
$D_{E}(|\psi \rangle )$ has the following properties. Property 1. $D_{E}(|\psi \rangle )=0$ if and only if $|\psi \rangle $ is separable. {\large The properties for a two-qubit system} For a two-qubit system, let $|\psi \rangle =a|00\rangle +b|01\rangle +c|10\rangle +d|11\rangle $. Then $D_{E}(|\psi \rangle )=\left| ad-bc\right| ^{2}$. Property 2. $D_{E}(|\psi \rangle )=\left| ad-bc\right| ^{2}\leq (\left| ad\right| +\left| bc\right| )^{2}\leq (\frac{\left| a\right| ^{2}+\left| d\right| ^{2}}{2}+\frac{\left| b\right| ^{2}+\left| c\right| ^{2}}{2})^{2}=\frac{1}{4}$, so the maximum of $D_{E}(|\psi \rangle )$ is $\frac{1}{4}$. When $a,b,c$ and $d$ are real, by computing the extremum it is derived that the maximally entangled states must be of the forms $x|00\rangle +y|01\rangle -y|10\rangle +x|11\rangle $ or $x|00\rangle +y|01\rangle +y|10\rangle -x|11\rangle $. Property 3. $|\psi \rangle $ $\sim $ $|\psi ^{\prime }\rangle $ if and only if $D_{E}(|\psi \rangle )=D_{E}(|\psi ^{\prime }\rangle )$. Let $|\psi \rangle =a|00\rangle +b|01\rangle +c|10\rangle +d|11\rangle $ and $|\psi ^{\prime }\rangle =a^{\prime }|00\rangle +b^{\prime }|01\rangle +c^{\prime }|10\rangle +d^{\prime }|11\rangle $. Suppose that $|\psi \rangle \sim |\psi ^{\prime }\rangle $. By the necessary condition in section 5, $ D_{E}(|\psi \rangle )=D_{E}(|\psi ^{\prime }\rangle )$. Conversely, suppose $D_{E}(|\psi \rangle )=D_{E}(|\psi ^{\prime }\rangle )$. Let us show $|\psi \rangle \sim |\psi ^{\prime }\rangle $. Using the Schmidt decomposition, we can write $|\psi \rangle \sim \sqrt{\lambda _{1}} |00\rangle +\sqrt{\lambda _{2}}|11\rangle $, where $\lambda _{1}+\lambda _{2}=1$. As discussed above, $|ad-bc|=\sqrt{\lambda _{1}}\sqrt{\lambda _{2}}$. Likewise, using the Schmidt decomposition\ we can write $|\psi ^{\prime }\rangle \sim \sqrt{\rho _{1}}|00\rangle +\sqrt{\rho _{2}}|11\rangle $, where $\rho _{1}+\rho _{2}=1$, and $|a^{\prime }d^{\prime }-b^{\prime }c^{\prime }|= \sqrt{\rho _{1}}\sqrt{\rho _{2}}$. Thus $\lambda _{1}\lambda _{2}=\rho _{1}\rho _{2}$. 
Then $\lambda _{1}(1-\lambda _{1})=\rho _{1}(1-\rho _{1})$. There are two cases. Case 1. $\lambda _{1}=\rho _{1}$. Then $\lambda _{2}=\rho _{2}$. Case 2. $\lambda _{1}+\rho _{1}=1$. In this case $\lambda _{2}=\rho _{1}$ and $\lambda _{1}=\rho _{2}$. In either case $|\psi \rangle $ and $|\psi ^{\prime }\rangle $ have the same Schmidt coefficients. By fact 5 in \cite{Nielsen99}\cite{Peres}, $|\psi \rangle \sim |\psi ^{\prime }\rangle $. Nielsen in \cite{Nielsen99} showed $|\psi ^{\prime }\rangle $ $\sim $ $|\psi ^{\prime \prime }\rangle $ by calculating eigenvalues, where $|\psi ^{\prime }\rangle =\sqrt{\alpha _{+}}|00\rangle +\sqrt{\alpha _{-}}|11\rangle $, and $ |\psi ^{\prime \prime }\rangle =(|00\rangle +|1\rangle (\cos \gamma |0\rangle +\sin \gamma |1\rangle ))/\sqrt{2}$. By property 3 one only needs to check $\sqrt{\alpha _{+}}\sqrt{\alpha _{-}}=\sin \gamma /2$. {\large Conclusion} In this paper we have presented necessary and sufficient conditions of separability for multipartite pure states. These conditions do not require the Schmidt decomposition or tracing-out operations. Using the conditions it is easy to check whether or not a multipartite pure state is entangled. Appendix A. The proof of lemma 1 Let $\alpha _{1}\alpha _{2}...\alpha _{n}$, $\beta _{1}\beta _{2}...\beta _{n}$, $\delta _{1}\delta _{2}...\delta _{n}$ and $\gamma _{1}\gamma _{2}...\gamma _{n}$ be the $n$-bit strings of $\alpha $, $\beta $, $\delta $ and $\gamma $, respectively. Lemma 1. \{$\alpha _{i}$, $\beta _{i}$\}=\{$\delta _{i}$, $\gamma _{i}$\}$,$ $i=1,2,...,n$, if and only if $\alpha +\beta =\delta $ $+$ $\gamma $ and $ \alpha \oplus \beta =\delta $ $\oplus $ $\gamma ,$ where $\oplus $ indicates bitwise addition modulo 2. Proof. Suppose \{$\alpha _{i}$, $\beta _{i}$\}=\{$\delta _{i}$, $\gamma _{i}$ \}$,$ $i=1,2,...,n$. 
Since $\alpha +\beta =(\alpha _{1}+\beta _{1})2^{n-1}+(\alpha _{2}+\beta _{2})2^{n-2}+...+(\alpha _{n}+\beta _{n})$ and $\delta $ $+$ $\gamma =(\delta _{1}+\gamma _{1})2^{n-1}+(\delta _{2}+\gamma _{2})2^{n-2}+...+(\delta _{n}+\gamma _{n})$, by the supposition it is easy to see $\alpha +\beta =\delta $ $+$ $\gamma $. It is also straightforward to obtain $\alpha _{1}\alpha _{2}...\alpha _{n}\oplus \beta _{1}\beta _{2}...\beta _{n}=\delta _{1}\delta _{2}...\delta _{n}\oplus \gamma _{1}\gamma _{2}...\gamma _{n}$. Conversely, suppose $\alpha +\beta =\delta $ $+$ $\gamma $ and $\alpha \oplus \beta =\delta $ $\oplus $ $\gamma $. We proceed by induction on $n$. First let us consider the base case $n=1$. There are three cases. Case 1. $\alpha _{1}+\beta _{1}=\delta _{1}+\gamma _{1}=0$. This means $ \alpha _{1}=\beta _{1}=\delta _{1}=\gamma _{1}=0$. Case 2. $\alpha _{1}+\beta _{1}=\delta _{1}+\gamma _{1}=1$. This implies $ \{\alpha _{1},\beta _{1}\}=\{\delta _{1},\gamma _{1}\}=\{1,0\}$. Case 3. $\alpha _{1}+\beta _{1}=\delta _{1}+\gamma _{1}=2$. This says $ \alpha _{1}=\beta _{1}=\delta _{1}=\gamma _{1}=1$. Whichever of the above three cases occurs, it yields \{$\alpha _{1}$, $\beta _{1}$\}=\{$\delta _{1}$, $\gamma _{1}$\}. Now consider the case $n$. Since $\alpha +\beta =\delta $ $+$ $\gamma $, $ (\alpha _{1}+\beta _{1})2^{n-1}+(\alpha _{2}+\beta _{2})2^{n-2}+...+(\alpha _{n}+\beta _{n})=(\delta _{1}+\gamma _{1})2^{n-1}+(\delta _{2}+\gamma _{2})2^{n-2}+...+(\delta _{n}+\gamma _{n})$. Again since $\alpha \oplus \beta =\delta $ $\oplus $ $\gamma $, that is, $\alpha _{1}\alpha _{2}...\alpha _{n}\oplus \beta _{1}\beta _{2}...\beta _{n}=\delta _{1}\delta _{2}...\delta _{n}\oplus \gamma _{1}\gamma _{2}...\gamma _{n}$, we obtain $ \alpha _{i}\oplus \beta _{i}=\delta _{i}\oplus \gamma _{i}$, $i=1,2,...,n$. There are two cases. Case 1. $\alpha _{n}\oplus \beta _{n}=\delta _{n}\oplus \gamma _{n}=1$. In this case $\{\alpha _{n}$, $\beta _{n}\}=\{\delta _{n}$, $\gamma _{n}\}=\{0,1\}$. 
Then $(\alpha _{1}+\beta _{1})2^{n-2}+(\alpha _{2}+\beta _{2})2^{n-3}+...+(\alpha _{n-1}+\beta _{n-1})=(\delta _{1}+\gamma _{1})2^{n-2}+(\delta _{2}+\gamma _{2})2^{n-3}+...+(\delta _{n-1}+\gamma _{n-1})$ and $\alpha _{i}\oplus \beta _{i}=\delta _{i}\oplus \gamma _{i} $, $i=1,2,...,n-1$. By the induction hypothesis $\{\alpha _{i},\beta _{i}\}=\{\delta _{i},\gamma _{i}\}$, $i=1,2,...,n-1$. Case 2. $\alpha _{n}\oplus \beta _{n}=\delta _{n}\oplus \gamma _{n}=0$. There are two subcases. Subcase 2.1. $\alpha _{n}=\beta _{n}=\delta _{n}=\gamma _{n}=0$ or $\alpha _{n}=\beta _{n}=\delta _{n}=\gamma _{n}=1$. As discussed in case 1, we obtain $\{\alpha _{i},\beta _{i}\}=\{\delta _{i},\gamma _{i}\}$, $i=1,2,...,n-1$, by the induction hypothesis. Subcase 2.2. $\alpha _{n}=\beta _{n}=1$ and $\delta _{n}=\gamma _{n}=0$, or $\alpha _{n}=\beta _{n}=0$ and $\delta _{n}=\gamma _{n}=1$. Let us consider the former case. In this case $(\alpha _{1}+\beta _{1})2^{n-2}+(\alpha _{2}+\beta _{2})2^{n-3}+...+(\alpha _{n-2}+\beta _{n-2})2+(\alpha _{n-1}+\beta _{n-1}+1)=$ $(\delta _{1}+\gamma _{1})2^{n-2}+(\delta _{2}+\gamma _{2})2^{n-3}+...+(\delta _{n-2}+\gamma _{n-2})2+(\delta _{n-1}+\gamma _{n-1}) $. Since $\alpha _{n-1}\oplus \beta _{n-1}=\delta _{n-1}\oplus \gamma _{n-1}$, whether this common value is $0$ or $1$, one of $(\alpha _{n-1}+\beta _{n-1}+1)$ and $(\delta _{n-1}+\gamma _{n-1})$ is odd and the other is even. This contradicts $\alpha +\beta =\delta $ $+$ $\gamma $. \ Appendix B. The separability for an $n$-qubit system When $n=3,$\ let us show $ |x_{i_{1}}^{(1)}x_{i_{2}}^{(2)}x_{i_{3}}^{(3)}|^{2}=|a_{i_{1}i_{2}i_{3}}|^{2} $ when $a_{i}a_{j}=a_{k}a_{l}$, where $\{i_{t},j_{t}\}=\{k_{t},l_{t}\}$, $ t=1,2,3$. We only illustrate $ |x_{0}^{(1)}x_{0}^{(2)}x_{0}^{(3)}|^{2}=|a_{000}|^{2}$. The other cases then follow readily. 
Experience with the cases $n=2$ and $3$ suggests that it is simpler to prove $ |x_{0}^{(1)}x_{0}^{(2)}x_{0}^{(3)}|^{2}$ $=|a_{000}|^{2}(\sum_{i,j,k\in \{0,1\}}|a_{ijk}|^{2})(\sum_{i,j,k\in \{0,1\}}|a_{ijk}|^{2})$, where $ |x_{0}^{(1)}|^{2}=\sum_{i,j\in \{0,1\}}|a_{0ij}|^{2}$, $|x_{0}^{(2)}|^{2}= \sum_{k,l\in \{0,1\}}|a_{k0l}|^{2}$ and $|x_{0}^{(3)}|^{2}=\sum_{p,q\in \{0,1\}}|a_{pq0}|^{2}$. First we show that $a_{0ij}a_{k0l}a_{pq0}$ can be rewritten as $ a_{000}a_{\alpha _{1}\alpha _{2}\alpha _{3}}a_{\delta _{1}\delta _{2}\delta _{3}}$. There are the following four cases. Case 1. Consider $a_{0ij}a_{k0l}$ and the pairs $\{0,k\},\{i,0\}$ and $ \{j,l\}$. If $j\ast l=0$, then $a_{0ij}a_{k0l}=a_{000}a_{ki(j+l)}$ since $ \{j,l\}=\{0,j+l\}$. Case 2. Consider $a_{0ij}a_{pq0}$ and the pairs $\{0,p\},\{i,q\}$ and $ \{j,0\}$. If $i\ast q=0$, then $a_{0ij}a_{pq0}=a_{000}a_{p(i+q)j}$ since $ \{i,q\}=\{0,i+q\}$. Case 3. Consider $a_{k0l}a_{pq0}$ and the pairs $\{k,p\},\{0,q\}$ and $ \{l,0\}$. If $k\ast p=0$, then $a_{k0l}a_{pq0}=a_{000}a_{(k+p)ql}$ since $ \{k,p\}=\{0,k+p\}$. Case 4. Otherwise $i=j=l=k=p=q=1$. It is not hard to derive $ a_{3}a_{5}a_{6}=a_{1}a_{7}a_{6}=a_{0}a_{7}^{2}$. Second, let us show that $a_{000}a_{\alpha _{1}\alpha _{2}\alpha _{3}}a_{\delta _{1}\delta _{2}\delta _{3}}$\ can be rewritten\ as $ a_{0ij}a_{k0l}a_{pq0}$. If $a_{000}a_{\alpha _{1}\alpha _{2}\alpha _{3}}a_{\delta _{1}\delta _{2}\delta _{3}}$ is of one of the forms $ a_{000}a_{0ij}a_{k0l}$,\ $a_{000}a_{0ij}a_{pq0}$ or $a_{000}a_{k0l}a_{pq0}$, we are done. Otherwise $a_{000}a_{\alpha _{1}\alpha _{2}\alpha _{3}}a_{\delta _{1}\delta _{2}\delta _{3}}$\ must be $ a_{0}a_{6}a_{6}$, $a_{0}a_{3}a_{3},$ $a_{0}a_{5}a_{5}$ or\ of the form $ a_{0}a_{7}a_{rst}$, which can be rewritten as $a_{2}a_{4}a_{6}$, $ a_{1}a_{2}a_{3}$, $a_{1}a_{4}a_{5}$, $a_{1}a_{6}a_{rst}$, respectively. Here $ a_{2}a_{4}a_{6}$, $a_{1}a_{2}a_{3}$ and $a_{1}a_{4}a_{5}$ are of the desired form, and $a_{1}a_{6}a_{rst}$ is rewritten further as follows. 
There are three cases. Case 1. If $r=0$ or $s=0$, this is of the desired form. Case 2. If $r=s=t=1$, then $a_{1}a_{6}a_{7}=a_{3}a_{5}a_{6}$, as desired. Case 3. If $r=s=1$ and $t=0$, then $a_{1}a_{6}a_{6}=a_{2}a_{5}a_{6}$, as desired. Appendix C. The proof of lemma 2 Proof. Suppose that systems $A$ and $B$ have the same dimension $n$. Let $ |\psi \rangle =\sum_{i,j=0}^{n-1}a_{ij}|i\rangle |j\rangle $. Then $ M=(a_{ij})_{n\times n}$. Let the density operator be $\rho ^{AB}=|\psi \rangle \langle \psi |$. Then $\rho ^{AB}=(\sum_{i,j=0}^{n-1}a_{ij}|i\rangle |j\rangle )(\sum_{l,k=0}^{n-1}a_{lk}^{\ast }\langle l|\langle k|)=\sum_{i,j=0}^{n-1}\sum_{l,k=0}^{n-1}a_{ij}a_{lk}^{\ast }|i\rangle |j\rangle \langle l|\langle k|$ $=\sum_{i,l=0}^{n-1}\sum_{j,k=0}^{n-1}a_{ij}a_{lk}^{\ast }|i\rangle |j\rangle \langle l|\langle k|$. The reduced density operator for system $A$ is defined by $\rho ^{A}=tr_{B}(\rho ^{AB})$. Let us compute $\rho ^{A}$. $\rho ^{A}=\sum_{i,l=0}^{n-1}\sum_{j,k=0}^{n-1}a_{ij}a_{lk}^{\ast }|i\rangle \langle l|\delta _{kj}$ (where $\delta _{kj}=1$ when $k=j$, and $0$ otherwise) $ =\sum_{i,l=0}^{n-1}\sum_{j=0}^{n-1}a_{ij}a_{lj}^{\ast }|i\rangle \langle l|=\sum_{i,l=0}^{n-1}(\sum_{j=0}^{n-1}a_{ij}a_{lj}^{\ast })|i\rangle \langle l|$. Let $A_{i}=(a_{i0},a_{i1},....a_{i(n-1)})$, that is, the $i$th row of $ M $. Then $\sum_{j=0}^{n-1}a_{ij}a_{lj}^{\ast }=A_{i}A_{l}^{+}$. Finally $ \rho ^{A}=\sum_{i,l=0}^{n-1}A_{i}A_{l}^{+}|i\rangle \langle l|=\left( \begin{tabular}{c} $A_{0}$ \\ $A_{1}$ \\ $\vdots $ \\ $A_{n-1}$ \end{tabular} \right) (A_{0}^{+},A_{1}^{+},...,A_{n-1}^{+})=MM^{+}$. Thus $\det (\rho ^{A})=|\det (M)|^{2}$. Hence $\left| \det (M)\right| ^{2}$ is just the product of the eigenvalues of $\rho ^{A}$. Q.E.D. \end{document}
Conformal Predictor 1. Definition Suppose we have a training sequence of pairs $\displaystyle{ (z_1,z_2,\ldots,z_n) = ((x_1, y_1), (x_2, y_2), \ldots, (x_n,y_n)) }$ called observations. The objects $x_i$ are elements of a measurable space $\mathbf{X}$ and the labels $y_i$ are elements of a measurable space $\mathbf{Y}$. We call $\mathbf{Z}:=\mathbf{X}\times\mathbf{Y}$ the observation space, $\epsilon\in(0,1)$ the significance level, and the complementary value $1 - \epsilon$ the confidence level. A conformity measure is a measurable mapping \[ A: \mathbf{Z}^{(*)} \times \mathbf{Z} \to [-\infty, +\infty], \] where $\mathbf{Z}^{(*)}$ is the set of all bags (multisets) of elements of $\mathbf{Z}$. Intuitively, this function assigns a numerical score (sometimes called the conformity score) indicating how similar a new observation is to a multiset of old ones. The conformal predictor determined by the conformity measure $A$ is a confidence predictor $\Gamma$ whose value on the training sequence $z_1,\ldots,z_n$ and a test object $x_{n+1}\in\mathbf{X}$ at a significance level $\epsilon$ is obtained by setting \begin{equation} \Gamma^\epsilon(z_1,\ldots,z_n,x_{n+1}) := \left\{ y\in\mathbf{Y} \mid \frac{|\{i = 1, \ldots, n+1: \alpha_i^y \le \alpha^y_{n+1} \}|}{n+1} > \epsilon \right\},\end{equation} where \[ \alpha^y_i := A(\{z_1,\ldots,z_{i-1}, z_{i+1},\ldots,z_n,(x_{n+1},y)\}, z_i), \quad i = 1,\ldots,n,\] \[ \alpha^y_{n+1} := A(\{z_1,\ldots,z_n\}, (x_{n+1},y)),\] and $\{\ldots\}$ designates the bag (multiset) of observations. The standard assumption for conformal predictors is the randomness assumption (also called the i.i.d. assumption). 
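For a finite label set, the definition above can be implemented directly. The sketch below is illustrative only (the nearest-neighbour conformity measure and all names are our own choices, not part of the definition):

```python
# Conformal predictor for a finite label set, following the definition:
# for each candidate label y, score every example in the augmented bag
# and keep y if its p-value exceeds the significance level epsilon.
def conformal_set(train, x_new, labels, conformity, epsilon):
    region = []
    for y in labels:
        augmented = train + [(x_new, y)]
        scores = [
            conformity(augmented[:i] + augmented[i + 1:], augmented[i])
            for i in range(len(augmented))
        ]
        # fraction of examples conforming no better than the new one
        p = sum(s <= scores[-1] for s in scores) / len(augmented)
        if p > epsilon:
            region.append(y)
    return region

# Toy conformity measure: minus the distance to the nearest
# training object with the same label (larger = more conforming).
def nn_conformity(bag, z):
    x, y = z
    same = [abs(x - xi) for xi, yi in bag if yi == y]
    return -min(same) if same else float("-inf")

train = [(0.0, "a"), (0.1, "a"), (1.0, "b"), (1.1, "b")]
print(conformal_set(train, 0.05, ["a", "b"], nn_conformity, 0.2))  # → ['a']
```

With $\epsilon=0.2$ the label "b" has p-value exactly $0.2$ and is excluded, while "a" is kept.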
Conformal predictors can be generalized by inductive conformal predictors or Mondrian conformal predictors to a wider class of confidence predictors. 2. Desiderata 2.1 Validity All the statements in this section are given under the randomness assumption in the on-line prediction protocol. The statement of validity is easiest for smoothed conformal predictors, for which \[ \Gamma^\epsilon(z_1,\ldots,z_n,x_{n+1}) := \left\{ y\in\mathbf{Y} \mid \frac{|\{i=1,\ldots,n+1:\alpha^y_i<\alpha^y_{n+1}\}|+\eta_y|\{i=1,\ldots,n+1:\alpha^y_i=\alpha^y_{n+1}\}|}{n+1} > \epsilon \right\},\] where the conformity scores $\alpha^y_i$ are defined as before and $\eta_y\in[0,1]$ is a random number (chosen from the uniform distribution on $[0,1]$). In the rest of this section we will assume that the random numbers $\eta_y$ used at different trials $n$ are independent among themselves and of the observations. Theorem All smoothed conformal predictors are exactly valid, in the sense that, for any exchangeable probability distribution $P$ on $\mathbf{Z}^{\infty}$ and any significance level $\epsilon\in(0,1)$, the random variables $\text{err}_n^{\epsilon}(\Gamma)$, $n = 1, 2, \ldots$, are independent Bernoulli variables with parameter $\epsilon$, where $\text{err}_{n} ^{\epsilon}(\Gamma)$ is the random variable $\displaystyle{ \text{err}_{n}^{\epsilon}(\Gamma)(z_1,z_2,\ldots) := \begin{cases} 1 & \text{if } y_{n+1} \not \in \Gamma^\epsilon(z_1,\ldots,z_n,x_{n+1})\\ 0 & \text{otherwise}. \end{cases}}$ The idea of the proof is quite simple. Corollary All smoothed conformal predictors are asymptotically exact, in the sense that for any exchangeable probability distribution $P$ on $\mathbf{Z}^{\infty}$ and any significance level $\epsilon$, \[ \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} \text{err}_i^{\epsilon}(\Gamma) = \epsilon\] with probability one. 
Corollary All conformal predictors are asymptotically conservative, in the sense that for any exchangeable probability distribution $P$ on $\mathbf{Z}^{\infty}$ and any significance level $\epsilon$, \[ \limsup_{n\to \infty} \frac{1}{n} \sum_{i=1}^{n} \text{err}_i^{\epsilon}(\Gamma) \le \epsilon\] with probability one. To put it simply, in the long run the frequency of erroneous predictions does not exceed $\epsilon$ at each confidence level $1 - \epsilon$. One can also give a formal notion of validity for conformal predictors, although the usefulness of this notion is limited: it is intuitively clear that conformal predictors are valid or "more than valid", since the number of errors made by a conformal predictor never exceeds the number of errors made by the corresponding smoothed conformal predictor. 2.2 Efficiency As conformal predictors are automatically valid, the main goal is to improve their efficiency (also known as predictive efficiency): to make the prediction sets conformal predictors output as small as possible. In classification problems, a natural measure of efficiency of conformal predictors is the number of multiple predictions: the number of prediction sets containing more than one label. In regression problems, the prediction set is often an interval of values; hence, a natural measure of efficiency of such predictors is the length of the interval. There are many possible criteria of efficiency. There are separate articles about validity, efficiency, and a third desideratum, conditionality. 3. Conformalizing specific machine-learning algorithms Suppose we have a prediction algorithm that outputs a prediction $\hat y\in\mathbf{Y}$ for a test object $x\in\mathbf{X}$ given a training set $\{z_1,\ldots,z_n\}\in\mathbf{Z}^{(*)}$ (notice that it is assumed that $\hat y$ does not depend on the ordering of $(z_1,\ldots,z_n)$). 
The function \[ A(\{z_1,\ldots,z_n\},(x,y)) := y - \hat y\] is a conformity measure, for any method of computing $\hat y$ from $\{z_1,\ldots,z_n\}$, $x$, and $y$. The conformal predictor determined by this conformity measure can be referred to as the conformalization of the original algorithm. More generally, we can set \[ A(\{z_1,\ldots,z_n\},(x,y)) := (y - \hat y) / \sigma_y,\] where $\sigma_y$ is a measure of precision of $\hat y$ (computed from $\{z_1,\ldots,z_n\}$, $x$, and $y$). For many non-trivial machine-learning algorithms, their conformalizations will be computationally inefficient (and then alternative methods such as inductive conformal predictors or aggregated conformal predictors should be used). However, for some important algorithms their conformalizations can be computed efficiently (at least to some degree): see, e.g., the Ridge Regression Confidence Machine and the conformalized LASSO. It is also known that K-nearest neighbours classification and regression are conformalizable. 4. Conformal transducers and predictive systems In many cases, it is more convenient to consider conformal transducers, which output, for each training set $z_1,\ldots,z_n$, each test object $x_{n+1}\in\mathbf{X}$, and each potential label $y\in\mathbf{Y}$ for $x_{n+1}$, the p-value given by \[ p^y := \frac{|\{i=1,\ldots,n+1:\alpha^y_i<\alpha^y_{n+1}\}|+\eta_y|\{i=1,\ldots,n+1:\alpha^y_i=\alpha^y_{n+1}\}|}{n+1}\] (cf. the definition of the smoothed conformal predictor). Conformal predictors and conformal transducers can be regarded as different ways of packaging the same object. For further details, see Vovk et al. (2005). Conformal transducers can be used for solving problems of probabilistic regression, giving rise to conformal predictive systems. Vladimir Vovk, Alexander Gammerman, and Glenn Shafer (2005). Algorithmic learning in a random world. Springer, New York.
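As a toy illustration of the conformalization recipe of section 3 (ours, not from the page): take the simplest point predictor, the mean of the labels, use the conformity measure $A(\{z_1,\ldots,z_n\},(x,y)):=-(y-\hat y)^2$, and scan a grid of candidate labels to obtain a conformal prediction set.

```python
import numpy as np

# Conformalizing a trivial regressor (the label mean) with the squared-
# residual conformity measure; for simplicity the mean is computed on
# the full augmented bag.
def conformalized_mean(train_y, y_grid, epsilon):
    region = []
    for y in y_grid:
        ys = np.append(train_y, y)
        y_hat = ys.mean()              # the point prediction (ignores x)
        scores = -(ys - y_hat) ** 2    # larger = more conforming
        p = np.mean(scores <= scores[-1])
        if p > epsilon:
            region.append(float(y))
    return region

train_y = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
grid = np.round(np.arange(0.0, 2.01, 0.1), 1)
interval = conformalized_mean(train_y, grid, epsilon=0.34)
assert 1.0 in interval          # a plausible label is kept
assert 0.0 not in interval      # implausible labels are rejected
```

The resulting region is a (discretized) prediction interval around the observed labels; shrinking $\epsilon$ enlarges it.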
Recent results of search for solar axions using resonant absorption by $^{83}$Kr nuclei (1711.03354) A.V. Derbin, I.S. Drachnev, A.M. Gangapshev, Yu.M. Gavrilyuk, V.V. Kazalov, V.V.Kobychev, V.V. Kuzminov, V.N. Muratova, S.I. Panasenko, S.S. Ratkevich, D.A. Tekueva, E.V. Unzhakov, S.P. Yakimenko Nov. 9, 2017 hep-ph, hep-ex, nucl-ex A search for resonant absorption of the solar axion by $^{83}\rm{Kr}$ nuclei was performed using the proportional counter installed inside the low-background setup at the Baksan Neutrino Observatory. The obtained model independent upper limit on the combination of isoscalar and isovector axion-nucleon couplings $|g_3-g_0|\leq 8.4\times 10^{-7}$ allowed us to set the new upper limit on the hadronic axion mass of $m_{A}\leq 65$ eV (95\% C.L.) with the generally accepted values $S$=0.5 and $z$=0.56. Comparative study of the double $K$-shell-vacancy production in single- and double-electron capture decay (1707.07171) S.S. Ratkevich, A.M. Gangapshev, Yu.M. Gavrilyuk, F.F. Karpeshin, V.V. Kazalov, V.V. Kuzminov, S.I. Panasenko, M.B. Trzhaskovskaya, S.P. Yakimenko Sept. 6, 2017 nucl-ex We carried out the comparative study of the signal from the decay of double $K$-shell vacancy production that follows after single $K$-shell electron capture of $^{81}$Kr and double $K$-shell electron capture of $^{78}$Kr. The radiative decay of a the double $1s$ vacancy state was identified by detecting the triple coincidence of two $K$ X-rays and several Auger electrons in the $ECEC$-decay, or by detecting two $K$ X-rays and (Auger electrons + ejected $K$-shell electron) in the $EC$ decay. The number of $K$-shell vacancies per the $K$-electron capture, produced as a result of the shake-off process, has been measured for the decay of $^{81}$Kr. The probability for this decay was found to be $P_{KK}=(5.7\pm0.8)\times10^{-5}$ with a systematic error of $(\Delta P_{KK})_{syst}=\pm0.4 \times10^{-5}$. 
For the $^{78}{\rm{Kr}}(2\nu2K)$ decay, the comparative study of single- and double-capture decays allowed us to obtain the signal-to-background ratio to be 15/1. The half-life $T_{1/2}^{2\nu2K}(g.s. \rightarrow g.s.) = [1.9^{+1.3}_{-0.7}(stat)\pm0.3(syst)]\times 10^{22}$ y is determined from the analysis of data that have been accumulated over 782 days of live measurements in the experiment that used samples consisted of 170.6 g of $^{78}$Kr. Technical Design Report for the AMoRE $0\nu\beta\beta$ Decay Search Experiment (1512.05957) V. Alenkov, P. Aryal, J. Beyer, R.S. Boiko, K. Boonin, O. Buzanov, N. Chanthima, M.K. Cheoun D.M. Chernyak, J. Choi, S. Choi, F.A. Danevich, M. Djamal, D. Drung, C. Enss, A. Fleischmann, A.M. Gangapshev, L. Gastaldo, Yu.M. Gavriljuk, A.M. Gezhaev, V.I. Gurentsov, D.H Ha, I.S. Hahn, J.H. Jang, E.J. Jeon, H.S. Jo, H. Joo, J. Kaewkhao, C.S. Kang, S.J. Kang, W.G Kang, S. Karki, V.V. Kazalov, N. Khanbekov, G.B. Kim, H.J. Kim, H.L. Kim, H.O. Kim, I. Kim, J.H. Kim, K. Kim, S.K. Kim, S.R. Kim, Y.D. Kim, Y.H. Kim, K. Kirdsiri, V.V. Kobychev, V. Kornoukhov, V.V. Kuzminov, H.J. Lee, H.S. Lee, J.H. Lee, J.M. Lee, J.Y. Lee, K.B. Lee, M.H. Lee, M.K. Lee, D.S. Leonard, J. Li, J. Li, Y.J. Li, P. Limkitjaroenporn, K.J. Ma, O.V. Mineev, V.M. Mokina, S.L. Olsen, S.I. Panasenko, I. Pandey, H.K. Park, H.S. Park, K.S. Park, D.V. Poda, O.G. Polischuk, P. Polozov, H. Prihtiadi, S.J. Ra, S.S. Ratkevich, G. Rooh, K. Siyeon, N. Srisittipokakun, J.H. So, J.K. Son, J.A. Tekueva, V.I. Tretyak, A.V. Veresnikova, R. Wirawan, S.P. Yakimenko, N.V. Yershov, W.S. Yoon, Y.S. Yoon, Q. Yue Dec. 18, 2015 hep-ex, physics.ins-det The AMoRE (Advanced Mo-based Rare process Experiment) project is a series of experiments that use advanced cryogenic techniques to search for the neutrinoless double-beta decay of \mohundred. The work is being carried out by an international collaboration of researchers from eight countries. 
These searches involve high precision measurements of radiation-induced temperature changes and scintillation light produced in ultra-pure \Mo[100]-enriched and \Ca[48]-depleted calcium molybdate ($\mathrm{^{48depl}Ca^{100}MoO_4}$) crystals that are located in a deep underground laboratory in Korea. The \mohundred nuclide was chosen for this \zeronubb decay search because of its high $Q$-value and favorable nuclear matrix element. Tests have demonstrated that \camo crystals produce the brightest scintillation light among all of the molybdate crystals, both at room and at cryogenic temperatures. $\mathrm{^{48depl}Ca^{100}MoO_4}$ crystals are being operated at milli-Kelvin temperatures and read out via specially developed metallic-magnetic-calorimeter (MMC) temperature sensors that have excellent energy resolution and relatively fast response times. The excellent energy resolution provides good discrimination of signal from backgrounds, and the fast response time is important for minimizing the irreducible background caused by random coincidence of two-neutrino double-beta decay events of \mohundred nuclei. Comparisons of the scintillating-light and phonon yields and pulse shape discrimination of the phonon signals will be used to provide redundant rejection of alpha-ray-induced backgrounds. An effective Majorana neutrino mass sensitivity that reaches the expected range of the inverted neutrino mass hierarchy, i.e., 20-50 meV, could be achieved with a 200~kg array of $\mathrm{^{48depl}Ca^{100}MoO_4}$ crystals operating for three years. Characteristics of a thermal neutrons scintillation detector with the [ZnS(Ag)+$^6$LiF] at different conditions of measurements (1510.09002) V.V. Alekseenko, I.R. Barabanov, R.A. Etezov, Yu.M. Gavrilyuk, A.M. Gangapshev, A.M. Gezhaev, V.V. Kazalov, A.Kh. Khokonov, V.V. Kuzminov, S.I. Panasenko, S.S. Ratkevich Oct. 
30, 2015 nucl-ex, physics.ins-det A construction of a thermal neutron testing detector with a thin [ZnS(Ag)+$^6$LiF] scintillator is described. Results of an investigation of sources of the detector pulse origin and the pulse features in a ground and underground conditions are presented. Measurements of the scintillator own background, registration efficiency and a neutron flux at different objects of the BNO INR RAS were performed. The results are compared with the ones measured by the $^3$He proportional counter. Results of measurements of an environment neutron background at BNO INR RAS objects with the helium proportional counter (1510.05109) A method of measurements of the environmental neutron background at the Baksan Neutrino Observatory of the INR RAS are described. Measurements were done by using of a proportional counter filled with mixture of Ar(2 at)+$^3$He(4 at). The results obtained at the surface and the underground laboratory of the BNO INR RAS are presented. It is shown that a neutron background in the underground laboratory at the 4900 m w.e. depth is decreased by $\sim 260$ times without any special shield in a comparison with the Earth surface. A neutron flux density in the 5-1323.5~cm air height region is constant within the determination error and equal to $(7.1\pm0.1_{\rm{stat}}\pm0.3_{\rm{syst}})\times10^{-3}$ s$^{-1}\cdot$cm$^{-2}$. High-resolution ion pulse ionization chamber with air filling for the Rn-222 decays detection (1508.04295) Yu.M. Gavrilyuk, A.M. Gangapshev, A.M. Gezhaev, R.A. Etezov, V.V. Kazalov, V.V. Kuzminov, S.I. Panasenko, S.S. Ratkevich, D.A. Tekueva, S.P. Yakimenko Aug. 18, 2015 physics.ins-det The construction and characteristics of the cylindrical ion pulse ionization chamber (CIPIC) with a working volume of 3.2 L are described. The chamber is intended to register alpha-particles from the $^{222}$Rn and its daughter's decays in the filled air sample. 
The detector is less sensitive to electromagnetic pickup and mechanical noise. A digital pulse-processing method is proposed to improve the energy resolution of the ion pulse ionization chamber. An energy resolution of 1.6% has been achieved for the 5.49 MeV alpha line. The dependence of the energy resolution on the high voltage and the working-medium pressure has been investigated, and the results are presented.

Search for 2K(2$\nu$)-capture of Xe-124 (1507.04520)
Yu.M. Gavrilyuk, A.M. Gangapshev, V.V. Kazalov, V.V. Kuzminov, S.I. Panasenko, S.S. Ratkevich, D.A. Tekueva, S.P. Yakimenko. July 16, 2015 (nucl-ex, physics.ins-det).
The results of a search for the two-neutrino mode of double K-capture of Xe-124 using a large copper low-background proportional counter are presented. Data collected during 3220 hours of measurements with 58.6 g of $^{124}$Xe allow us to set a new limit on the half-life of Xe-124 with regard to 2K-capture: $T_{1/2} \geq 2.0\times10^{21}$ years at a 90\% confidence level.

The origin of the background radioactive isotope Xe-127 in the sample of Xe enriched in Xe-124 (1507.04181)
The results of an investigation of the production of the radioactive isotope Xe-127 in a xenon sample enriched in Xe-124, Xe-126, and Xe-128 are presented. This isotope is believed to be the source of background events in the low-background experiment searching for 2K-capture of Xe-124. In this work we consider two channels of Xe-127 production: neutron knock-out from a Xe-128 nucleus by cosmogenic muons, and neutron capture by a Xe-126 nucleus. For the first channel, the upper limit on the Xe-127 production cross section was found to be $\sigma \leq 0.007$ barn at 95\% C.L. For the second channel, the cross section was found to be $\sigma = (2.74\pm0.4)$ barn, which agrees well, within the statistical error, with the reference value.

Results of a search for daily and annual variations of the Po-214 half-life at the two year observation period (1505.01752)
E.N. Alexeyev, Yu.M. Gavrilyuk, A.M. Gangapshev, V.V. Kazalov, V.V. Kuzminov, S.I. Panasenko, S.S. Ratkevich. May 7, 2015 (nucl-ex, physics.ins-det, astro-ph.SR).
A brief description of the TAU-2 installation, intended for long-term monitoring of the $^{214}$Po half-life value $\tau$ ($\tau_{1/2}$), is presented. The methods of measurement and of processing the collected data are reported. The results of an analysis of time series of $\tau$ values with different time steps are presented. The total measurement time was 590 days. The averaged value of the $^{214}$Po half-life was found to be $\tau=163.46\pm0.04$ $\mu$s. An annual variation with an amplitude $A=(8.9\pm2.3)\cdot10^{-4}$, a solar-daily variation with an amplitude $A_{So}=(7.5\pm1.2)\cdot10^{-4}$, a lunar-daily variation with an amplitude $A_L=(6.9\pm2.0)\cdot10^{-4}$, and a sidereal-daily variation with an amplitude $A_S=(7.2\pm1.2)\cdot10^{-4}$ were found in the series of $\tau$ values. The maximal amplitudes are observed at the moments when the projections of the velocity vector of the installation's position on the Earth toward the source of the possible variation reach their maximal magnitudes.

New limit on the mass of 9.4-keV solar axions emitted in an M1 transition in $^{83}$Kr nuclei (1501.02944)
A.V. Derbin, A.M. Gangapshev, Yu.M. Gavrilyuk, V.V. Kazalov, H.J. Kim, Y.D. Kim, V.V. Kobychev, V.V. Kuzminov, Luqman Ali, V.N. Muratova, S.I. Panasenko, S.S. Ratkevich, D.A. Semenov, D.A. Tekueva, S.P. Yakimenko, E.V. Unzhakov. Jan. 13, 2015 (hep-ph, hep-ex).
A search for resonant absorption of solar axions by $^{83}\rm{Kr}$ nuclei was performed using a proportional counter installed inside the low-background setup at the Baksan Neutrino Observatory. The obtained model-independent upper limit on the combination of isoscalar and isovector axion-nucleon couplings, $|g_3-g_0|\leq 1.69\times 10^{-6}$, allowed us to set a new upper limit on the hadronic axion mass of $m_{A}\leq 130$ eV (95\% C.L.)
with the generally accepted values $S$=0.5 and $z$=0.56.

First result of the experimental search for the 9.4 keV solar axion reactions with Kr-83 in the copper proportional counter (1405.1271)
Yu.M. Gavrilyuk, A.M. Gangapshev, A.V. Derbin, V.V. Kazalov, H.J. Kim, Y.D. Kim, V.V. Kobychev, V.V. Kuzminov, Luqman Ali, V.N. Muratova, S.I. Panasenko, S.S. Ratkevich, D.A. Semenov, D.A. Tekueva, S.P. Yakimenko, E.V. Unzhakov. May 6, 2014 (nucl-ex, physics.ins-det).
An experimental search for solar hadronic axions has started at the Baksan Neutrino Observatory of the Institute for Nuclear Research of the Russian Academy of Sciences. It is assumed that axions are created in the Sun during the M1 transition between the first thermally excited level at 9.4 keV and the ground state of Kr-83. The experiment is based on axion detection via the resonant absorption process by the same nucleus in the detector. A big copper proportional counter filled with krypton is used to detect signals from axions. The experimental setup is situated in a deep underground low-background laboratory. No evidence of axion detection was found after 26.5 days of data collection. The resulting new upper limit on the axion mass is $m_{A} < 130$ eV at 95% C.L.

Sources of the systematic errors in measurements of Po-214 decay half-life time variations at the Baksan deep underground experiments (1404.5769)
April 23, 2014 (nucl-ex, physics.ins-det).
The design changes of the Baksan low-background TAU-1 and TAU-2 setups that allowed the sensitivity of Po-214 half-life ($\tau$) measurements to be improved to $2.5\times10^{-4}$ are described. Different possible sources of systematic errors influencing the $\tau$-value are studied. An annual variation of the Po-214 half-life measurements with an amplitude $A=(6.9 \pm 3)\times10^{-4}$ and a phase $\phi=(93 \pm 10)$ days was found in a sequence of week-collected $\tau$-values obtained from the TAU-2 data sample with a total duration of 480 days.
A 24-hour variation of the $\tau$-value measurements with an amplitude $A=(10.0 \pm 2.6)\times10^{-4}$ and a phase $\phi=(1 \pm 0.5)$ hours was found in a solar-day, 1-hour-step $\tau$-value sequence formed from the same data sample. The Po-214 half-life averaged over 480 days was found to be $(163.45 \pm 0.04)$ $\mu$s.

First result of the experimental search for the 2K-capture of Xe-124 with the copper proportional counter (1404.5530)
The first result of the experiment searching for 2K-capture of Xe-124 with a large-volume copper proportional counter is given. A 12-litre sample with 63.3% (44 g) of Xe-124 was used in the measurements. The limit on the half-life of Xe-124 with regard to 2K(2$\nu$)-capture to the ground state of Te-124 has been found: $T_{1/2} > 4.67\times10^{20}$ y (90% C.L.). A sample with a volume of 52 L comprising Xe-124 (10.6 L, 58.6 g) and Xe-126 (14.1 L, 79.3 g) will be used at the next step of the experiment to increase the sensitivity of the Xe-124 2K-capture registration. In this case the sensitivity to the investigated process will be at the level of $S=1.46\times10^{21}$ y (90% C.L.) for one year of measurement.

Background radioactivity of construction materials, raw substance and ready-made CaMoO4 crystals (1312.1041)
O.A. Busanov, R.A. Etezov, Yu.M. Gavriljuk, A.M. Gezhaev, V.V. Kazalov, V.N. Kornoukhov, V.V. Kuzminov, P.S. Moseev, S.I. Panasenko, S.S. Ratkevich, S.P. Yakimenko. Dec. 4, 2013 (physics.ins-det).
The results of measurements of the content of natural radioactive isotopes in different source materials of natural and enriched composition used for growing CaMoO4 scintillation crystals are presented. The crystals are to be used in the experiment to search for neutrinoless double beta-decay of Mo-100.

Radiative Strength Functions for Dipole Transitions in Zr-90 (1212.1802)
I.D. Fedorets, S.S. Ratkevich. Dec. 8, 2012 (nucl-ex, nucl-th).
Partial cross sections for the (p,$\gamma$) reaction on the Y-89 nucleus, measured previously at proton energies between 2.17 and 5.00 MeV and averaged over resonances, were used to determine the absolute values and the energy distribution of the strength of dipole transitions from compound-nucleus states to low-lying levels of the Zr-90 nucleus. The data obtained in this way were compared with the predictions of various models.

Working characteristics of the New Low-Background Laboratory (DULB-4900, Baksan Neutrino Observatory) (1204.6424)
Ju.M. Gavriljuk, A.M. Gangapshev, A.M. Gezhaev, V.V. Kazalov, V.V. Kuzminov, S.I. Panasenko, S.S. Ratkevich, S.P. Yakimenko. April 28, 2012 (physics.ins-det).
A concise technical characterization of the new low-background laboratory DULB-4900 of the BNO INR RAS is presented. The technique and the results of background measurements in the Hall, in an ordinary box, and in a low-background box are presented. The Rn-222 contamination in the laboratory air has been measured by direct detection of the gamma-radiation of its daughter Bi-214 distributed over the volume of the low-background box. The results of the data analysis are presented.

Next stage of search for 2K(2$\nu$)-capture of $^{78}$Kr (nucl-ex/0510070)
Ju.M. Gavriljuk, V.N. Gavrin, A.M. Gangapshev, V.V. Kazalov, V.V. Kuzminov, N.Ya. Osetrova, I.I. Pul'nikov, A.V. Ryabukhin, A.N. Shubin, G.M. Skorynin, S.I. Panasenko, S.S. Ratkevich. Oct. 26, 2005 (nucl-ex).
A technique to search for 2K-capture of $^{78}$Kr with a large low-background proportional counter filled with a krypton sample enriched in $^{78}$Kr up to 99.8%, at a pressure of 4.51, is described in this paper. The results of the first measurements are presented. An analysis of data collected during 159 hours yielded a new limit on the half-life of $^{78}$Kr with regard to 2K-capture: T$_{1/2}\geq6\cdot10^{21}$ yr (90% C.L.).
The sensitivity of the facility to the process for one year of measurement was evaluated to be $S=1.0\cdot10^{22}$ yr (90% C.L.).

Results of a search for 2$\beta$-decay of $^{136}$Xe with high-pressure copper proportional counters in Baksan Neutrino Observatory (nucl-ex/0510071)
Ju.M. Gavriljuk, A.M. Gangapshev, V.V. Kuzminov, S.I. Panasenko, S.S. Ratkevich
The experiment searching for 2$\beta$-decay of $^{136}$Xe with two high-pressure copper proportional counters has been carried out at the Baksan Neutrino Observatory. The search for the process is based on a comparison of spectra measured with natural and with enriched xenon. No evidence has been found for either 2$\beta$(2$\nu$)- or 2$\beta$(0$\nu$)-decay. The half-life limits based on data measured during 8000 h are T$_{1/2}\geq8.5\cdot10^{21}$ yr for the 2$\nu$-mode and T$_{1/2}\geq3.1\cdot10^{23}$ yr for the 0$\nu$-mode (90% C.L.).
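Several of the entries above quote half-life lower limits derived from counting experiments. A minimal sketch of how such a limit scales, assuming the standard relation $T_{1/2} \geq \ln 2 \cdot N \cdot \varepsilon \cdot t / S$ (with $N$ source atoms, detection efficiency $\varepsilon$, live time $t$, and $S$ the upper limit on the number of signal events). The efficiency and excluded-event count used here are illustrative assumptions, not the published inputs of any of the experiments listed:

```python
import math

N_A = 6.022e23  # Avogadro's number, atoms/mol


def half_life_limit(mass_g, molar_mass, live_time_yr, efficiency, s_up):
    """Lower limit on T_1/2 (years) when at most s_up signal events
    are compatible with the observed counts."""
    n_atoms = mass_g / molar_mass * N_A
    return math.log(2) * n_atoms * efficiency * live_time_yr / s_up


# Illustrative numbers only: 58.6 g of Xe-124 and 3220 h of live time
# (as in the 2K-capture search above), with an assumed efficiency of 0.8
# and an assumed 2.44 excluded events, to show the scale of the formula.
t_yr = 3220 / (24 * 365.25)
print(f"T_1/2 >= {half_life_limit(58.6, 124.0, t_yr, 0.8, 2.44):.2e} yr")
```

The dominant factor is the number of source atoms times the live time (the "exposure"); this is why the entries emphasize enriched samples and long measurement campaigns.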