--- author: - | Zhitao Xing$^{1,2},$ Ercai Chen$^{1,3}.$\ 1 School of Mathematical Science, Nanjing Normal University,\ Nanjing 210023, Jiangsu, P.R. China\ e-mail: [email protected]\ e-mail: [email protected]\ 2 School of Mathematics and Statistics, Zhaoqing University,\ Zhaoqing 526061, Guangdong, P.R. China\ 3 Center of Nonlinear Science, Nanjing University,\ Nanjing 210093, Jiangsu, P.R. China\ title: Induced topological pressure for topological dynamical systems --- [ In this paper, inspired by the article \[5\], we introduce the induced topological pressure for a topological dynamical system. In particular, we prove a variational principle for the induced topological pressure. ]{} 0.5cm [ Induced pressure, dynamical system, variational principle.]{}0.5cm INTRODUCTION AND MAIN RESULT ============================= The present paper is devoted to the study of the induced topological pressure for topological dynamical systems. Before stating our main result, we first give some notation and background about the induced topological pressure. By a topological dynamical system (TDS) $(X,f)$, we mean a compact metric space $(X,d)$ together with a continuous map $f:X\rightarrow X.$ Recall that $C(X,\mathbb{R})$ is the Banach algebra of real-valued continuous functions on $X$ equipped with the supremum norm. For $\varphi \in C(X,\mathbb{R}), n\geq 1$, let $(S_{n}\varphi)(x):=\sum \limits_{i=0}^{n-1}\varphi(f^{i}x)$ and for $\psi \in C(X,\mathbb{R})$ with $\psi >0$, let $m:=\min\{\psi(x): x\in X\}$. We denote by $M(X,f)$ the set of all $f$-invariant Borel probability measures on $X$ endowed with the weak-star topology. Topological pressure is a basic notion of the thermodynamic formalism. It was first introduced by Ruelle \[11\] for expansive topological dynamical systems, and later by Walters \[1,9,10\] for the general case.
The variational principle established by Walters can be stated as follows: Let $(X,f)$ be a TDS, let $\varphi \in C(X,\mathbb{R})$, and let $P(\varphi)$ denote the topological pressure of $\varphi.$ Then $$\label{tag-1} P(\varphi)=\sup\{h_{\mu}(f)+\int \varphi d\mu: \mu \in M(X,f)\},$$ where $h_{\mu}(f)$ denotes the measure-theoretical entropy of $\mu.$ The theory of topological pressure and its variational principle plays a fundamental role in statistical mechanics, ergodic theory, and the theory of dynamical systems \[3,9,13\]. Since the works of Bowen \[4\] and Ruelle \[12\], topological pressure has become a basic tool in the dimension theory of dynamical systems \[8,14\]. Recently Jaerisch, Kesseböhmer and Lamei \[5\] introduced the notion of the induced topological pressure of a countable Markov shift, and established a variational principle for it. One important feature of this pressure is the freedom in choosing a scaling function, which can be applied in large deviation theory and fractal geometry. In this paper we present the induced topological pressure for a topological dynamical system and study its relation with the topological pressure. We establish a variational principle for the induced topological pressure. As an application, we point out that the BS dimension is a special case of the induced topological pressure. Let $(X,f)$ be a TDS. For $n\in \mathbb{N}$, the $n$th Bowen metric $d_{n}$ on $X$ is defined by $$d_{n}(x,y)=\max \{d(f^{i}(x),f^{i}(y)): i=0,1,\ldots, n-1 \}.$$ For every $\epsilon >0$, we denote by $B_{n}(x,\epsilon),\overline{B}_{n}(x,\epsilon) $ the open (resp. closed) ball of radius $\epsilon$ and order $n$ in the metric $d_{n}$ around $x$, i.e., $$B_{n}(x,\epsilon)= \{y\in X : d_{n}(x,y)<\epsilon\} \text{ and } \overline{B}_{n}(x,\epsilon)= \{y\in X : d_{n}(x,y)\leq \epsilon\}.$$ Let $Z\subseteq X$ be a non-empty set.
A subset $F_{n}\subset X$ is called an $(n, \epsilon)$-spanning set of $Z$ if for any $y\in Z$, there exists $x \in F_{n} $ with $d_{n}(x,y)\leq \epsilon$. A subset $E_{n}\subset Z$ is called an $(n,\epsilon)$-separated set of $Z$ if $x,y\in E_{n}, x\neq y$ implies $d_{n}(x,y)>\epsilon$. Now we define a new notion, the *induced topological pressure*, which extends the definition in \[5\] for topological Markov shifts in the case where the Markov shift is compact, as follows. Let $(X,f)$ be a TDS and $ \varphi,\psi \in C(X,\mathbb{R})$ with $\psi>0$. For $ T>0$, define $$S_{T}=\{n\in \mathbb{N}: \exists x\in X \text { such that } S_{n}\psi(x)\leq T \text{ and }S_{n+1}\psi(x)>T\}.$$ For $n\in S_{T}$, define $$X_{n}=\{x\in X: S_{n}\psi(x)\leq T \text{ and }S_{n+1}\psi(x)>T \}.$$ Let $$Q_{\psi ,T}(f,\varphi, \epsilon)= \inf\left\{\sum\limits_{n\in S_{T}}\sum \limits_{x\in F_{n}}\exp (S_{n}\varphi)(x): F_{n} \text{ is an } (n,\epsilon)\text{-spanning set of } X_{n},\ n\in S_{T} \right\}.$$ We define the $\psi$-induced topological pressure of $\varphi $ (with respect to $f$) by $$\label{tag-1} P_{\psi}(\varphi)=\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log Q_{\psi ,T}(f,\varphi, \epsilon).$$ [**Remarks.**]{}\ ($\romannumeral1$) Let $[\frac{T}{m}]$ denote the integer part of $\frac{T}{m}$. Then for $n\in S_{T}$, $n\leq [\frac{T}{m}]+1$, i.e., $S_T$ is a finite set.\ ($\romannumeral2$) If $0<\epsilon_{1}<\epsilon_{2}$, then $Q_{\psi ,T}(f,\varphi, \epsilon_{1})\geq Q_{\psi ,T}(f,\varphi, \epsilon_{2})$, which implies that the limit in (1.2) exists and that $P_{\psi}(\varphi)> -\infty$.\ ($\romannumeral3$) $P_1(\varphi)=P(\varphi)$. The variational principle for the induced topological pressure is stated as follows. Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi> 0$.
Then $$\label{tag-1} P_{\psi}(\varphi)=\sup\left\{\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}: \nu \in M(X ,f )\right\}.$$ This paper is organized as follows. In Section 2, we provide an equivalent definition of induced topological pressure. We prove Theorem 1.1 in Section 3. We point out that the BS dimension is a special case of the induced topological pressure in Section 4. In Section 5, we study the equilibrium measures for the induced topological pressure. AN EQUIVALENT DEFINITION ========================= In this section, we obtain an equivalent definition of the induced topological pressure by using separated sets (from now on, we omit the word ‘topological’ if no confusion can arise). Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. For $T>0$, define $$P_{\psi ,T}(f,\varphi, \epsilon)= \sup\left\{\sum\limits_{n\in S_{T}}\sum \limits_{x\in E_{n}}\exp (S_{n}\varphi)(x): E_{n} \text{ is an } (n,\epsilon)\text{-separated set of } X_n,\ n\in S_T \right\}.$$ Then $$\label{tag-1}P_{\psi}(\varphi)=\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon).$$ **Proof.** We note that since the map $\epsilon\mapsto \limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon)$ is nonincreasing, the limit in (2.4) exists as $\epsilon\rightarrow 0$. For $n\in S_{T}$, let $E_{n}$ be an $(n,\epsilon)$-separated set of $X_{n}$ which fails to be $(n,\epsilon)$-separated when any point of $X_{n}$ is added. Then $E_{n}$ is an $(n,\epsilon)$-spanning set of $X_{n}$. Therefore $$Q_{\psi ,T}(f,\varphi, \epsilon)\leq P_{\psi ,T}(f,\varphi, \epsilon)$$ and $$P_{\psi}(\varphi)\leq \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon).$$ To show the reverse inequality, for any $\epsilon>0$, we choose $\delta>0$ small enough so that $d(x,y)\leq \frac{\delta}{2}$ implies $|\varphi(x)-\varphi(y)|<\epsilon$.
For $n\in S_{T}$, let $E_{n}$ be an $(n,\delta)$-separated set of $X_{n}$ and $F_{n}$ an $(n, \frac{\delta}{2})$-spanning set of $X_{n}$. Define $\phi:E_{n}\rightarrow F_{n}$ by choosing, for each $x\in E_{n}$, some point $\phi(x)\in F_{n}$ with $d_{n}(\phi(x),x)\leq \frac{\delta}{2}$. Then $\phi$ is injective.\ Therefore, $$\begin{aligned} \sum\limits_{n\in S_{T}}\sum\limits_{y\in F_{n}}\exp (S_{n}\varphi)(y) &\geq \sum\limits_{n\in S_{T}}\sum\limits_{y\in \phi(E_{n})}\exp (S_{n}\varphi)(y)\\ &\geq \sum\limits_{n\in S_{T}}\Big(\min\limits_{x\in E_{n}}\exp\big((S_{n}\varphi)(\phi(x))-(S_{n}\varphi)(x)\big)\Big)\sum\limits_{x\in E_{n}}\exp (S_{n}\varphi)(x)\\ &\geq \exp\Big(-\Big(\Big[\frac{T}{m}\Big]+1\Big)\epsilon\Big)\sum\limits_{n\in S_{T}}\sum\limits_{x\in E_{n}}\exp (S_{n}\varphi)(x).\end{aligned}$$ We conclude that $$\lim \limits_{\delta\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log Q_{\psi ,T}(f,\varphi, \frac{\delta}{2})\geq -\frac{1}{m}\epsilon+\lim \limits_{\delta\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \delta).$$ As $\epsilon\rightarrow 0 $, we have $$P_{\psi}(\varphi)\geq \lim \limits_{\delta\rightarrow 0}\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \delta).$$ THE PROOF OF THEOREM 1.1 ========================= In this section, we give the proof of Theorem 1.1. Firstly, we study the relation between $P_{\psi}(\varphi)$ and $P(\varphi)$, which will be needed for the proof of Theorem 1.1. The following Theorem 3.1 is very similar to Theorem 2.1 of \[5\], and it is a generalization of that theorem in the case of a compact topological Markov shift. Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi> 0$. For $T>0$, define $$G_{T}=\{n\in \mathbb{N}: \exists x\in X \text { such that } S_{n}\psi(x)> T\}.$$ For $n\in G_{T}$, define $$Y_{n}=\{x\in X: S_{n}\psi(x)>T\}.$$ Let $$R_{\psi ,T}(f,\varphi, \epsilon)= \sup\left\{\sum\limits_{n\in G_{T}}\sum \limits_{x\in E^{'}_{n}}\exp (S_{n}\varphi)(x): E^{'}_{n} \text{ is an } (n,\epsilon)\text{-separated set of } Y_{n},\ n\in G_T \right \}.$$ We have $$P_{\psi}(\varphi)=\inf\left\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta\psi, \epsilon)<\infty\right\}.$$
Here we make the convention that $\inf \emptyset =\infty$. **Proof.** For $n\in \mathbb{N},x\in X$, we define $m_n(x)$ to be the unique positive integer such that $$(m_n(x)-1)\|\psi\|<S_{n}\psi(x)\leq m_n(x)\|\psi\|.$$ Observe that $$\exp(-\beta \|\psi\| m_n(x))\exp(-|\beta|\|\psi\|)\leq \exp(-\beta S_{n}\psi(x))\leq \exp(-\beta \|\psi\| m_n(x))\exp(|\beta|\|\psi\|)$$ for all $ x\in X$. For a family $\xi_T=\{\xi_n: X\to\mathbb{R}\}_{n\in G_T}$, we define $$\begin{aligned} &R_{\psi ,T}(f,\varphi, \xi_T, \epsilon)\\ =&\ \sup\Big\{\sum\limits_{n\in G_{T}}\sum\limits_{x\in E^{'}_{n}}\exp\big((S_{n}\varphi)(x)-\xi_n(x)\big): E^{'}_{n} \text{ is an } (n,\epsilon)\text{-separated set of } Y_{n},\ n\in G_T\Big\}.\end{aligned}$$ We conclude that $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty$$ if and only if $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty.$$ Hence, it suffices to verify that $$P_{\psi}(\varphi)=\inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_{T}}, \epsilon)<\infty\}.$$ By the equivalent definition of $P_{\psi}(\varphi),$ for every $\delta>0$ and $\beta\in \mathbb{R}$ with $\beta<P_{\psi}(\varphi)-\delta$, there exists an $\epsilon_{0}>0$ with $$\beta+\delta<\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon)\leq P_{\psi}(\varphi), \ \ \forall \epsilon \in (0, \epsilon_{0}),$$ and we can find a sequence $\{T_{j}\}_{j\in \mathbb{N}}$ with $T_{j+1}-T_{j}>2\|\psi\|$ such that for each $j\in \mathbb{N}$ there exists an $E_{T_{j}}=\bigcup \limits_{n\in S_{T_{j}}}E_{n}$ with $$\sum\limits_{n\in S_{T_{j}}}\sum \limits_{x\in E_{n}}\exp (S_{n}\varphi)(x)\geq \exp(T_{j}(\beta+\frac{\delta}{2})).$$ Since for $j\in \mathbb{N}, n\in S_{T_{j}}, x\in E_{n}$, $T_j-\|\psi\|<S_{n}\psi(x)\leq T_j$, we have $$S_{T_i}\cap S_{T_j}=\emptyset,i\neq j$$ and $$|\|\psi\|
m_n(x)-T_{j}|<2\|\psi\|.$$ It follows that $$\begin{aligned} &R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)\\ \geq&\ \sum\limits_{j\in\mathbb{N},\, T_{j}>T}\sum\limits_{n\in S_{T_{j}}}\sum\limits_{x\in E_{n}}\exp\big((S_{n}\varphi)(x)-\beta \|\psi\| m_n(x)\big)\\ \geq&\ \exp(-2|\beta|\|\psi\|)\sum\limits_{j\in\mathbb{N},\,T_{j}>T}\sum\limits_{n\in S_{T_{j}}}\sum\limits_{x\in E_{n}}\exp\big((S_{n}\varphi)(x)-\beta T_{j}\big)\\ \geq&\ \exp(-2|\beta|\|\psi\|)\sum\limits_{j\in\mathbb{N},\,T_{j}>T}\exp\big((\beta+\tfrac{\delta}{2})T_{j}-\beta T_{j}\big)\\ =&\ \infty.\end{aligned}$$ Therefore, for all $\beta<P_{\psi}(\varphi)-\delta$, $$\label{tag-1} \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)=\infty.$$ This argument is not only valid for $P_{\psi}(\varphi)\in \mathbb{R}$, but also for $P_{\psi}(\varphi)=\infty$, in which case (3.7) holds for every $\beta\in \mathbb{R}$. Then $$\label{tag-1} P_{\psi}(\varphi)\leq \inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty\}.$$ Next, we establish the reverse inequality. We consider the case $P_{\psi}(\varphi)\in \mathbb{R}$ and show that for any $\delta>0,$ $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-(P_{\psi}(\varphi)+\delta) \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty.$$ Again, by the equivalent definition of $P_{\psi}(\varphi)$, we have, for any $\epsilon>0$, $$\limsup \limits_{T \rightarrow \infty}\frac{1}{T}\log P_{\psi ,T}(f,\varphi, \epsilon)< P_{\psi}(\varphi)+\frac{\delta}{2},$$ and we can find an $l_{0}\in \mathbb{N}$ such that for all $l \in \mathbb{N}$ with $l \geq l_{0}$, $$P_{\psi,lm}(f,\varphi,\epsilon)\leq \exp(lm(P_{\psi}(\varphi)+\frac{2\delta}{3})).$$ Note that for $n\in S_{lm}, x\in E_{n}$, we have $$|\|\psi\| m_n(x)-lm|<2\|\psi\|$$ and $$-(P_{\psi}(\varphi)+\delta)\|\psi\| m_n(x)\leq -lm(P_{\psi}(\varphi)+\delta)+2|P_{\psi}(\varphi)+\delta||\psi\|.$$ Moreover, for sufficiently large $T>0, n\in G_{T}, x\in E^{'}_{n} \subset Y_n$, there exists a unique $l\in \mathbb{N}$ such that
$(l-1)m<S_{n}\psi(x)\leq lm$. Obviously $S_{n+1}\psi(x)>lm$. Hence, we obtain $$\begin{aligned} &R_{\psi ,T}(f,\varphi,\{-(P_{\psi}(\varphi)+\delta) \|\psi\| m_n\}_{n\in G_T}, \epsilon)\\ \leq&\ \sum\limits_{l\geq l_{0}}\sup\Big\{\sum\limits_{n\in S_{lm}}\sum\limits_{x\in E_{n}}\exp\big((S_{n}\varphi)(x)-(P_{\psi}(\varphi)+\delta)\|\psi\| m_n(x)\big):\\ &\qquad E_n \text{ is an } (n,\epsilon)\text{-separated set of } X_n,\ n\in S_{lm}\Big\}\\ \leq&\ \exp\big(2|P_{\psi}(\varphi)+\delta|\|\psi\|\big)\sum\limits_{l\geq l_{0}}\exp\big(-(P_{\psi}(\varphi)+\delta)lm\big)P_{\psi ,lm}(f,\varphi,\epsilon)\\ \leq&\ \exp\big(2|P_{\psi}(\varphi)+\delta|\|\psi\|\big)\sum\limits_{l\geq l_{0}}\exp\big(-\tfrac{\delta}{3}lm\big)\\ <&\ \infty.\end{aligned}$$ This implies $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-(P_{\psi}(\varphi)+\delta) \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty,$$ and hence, $$\label{tag-1} P_{\psi}(\varphi)\geq \inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi,\{-\beta \|\psi\| m_n\}_{n\in G_T}, \epsilon)<\infty\}.$$ Combining (3.8) and (3.9) we obtain (3.6). Let $(X,f)$ be a TDS, and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. We have $$\label{tag-1} P_{\psi}(\varphi)\geq \inf \{\beta \in \mathbb{R}: P(\varphi-\beta \psi)\leq 0\}.$$ **Proof.** Let $\beta\in \{\beta \in \mathbb{R}:\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty\}$ and $$\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)=a.$$ Then for any $\epsilon>0$, $$\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<a+1 .$$ We can find a $T_{0}>0$ such that for all $T>T_{0}$, $$R_{\psi ,T}(f,\varphi-\beta\psi, \epsilon)<a+2.$$ Now, for sufficiently large $n\in\mathbb{N}$, $$S_{n}\psi(x)>T, \ \ \forall x\in X,$$ and hence, for such $n\in G_{T}$, $E_{n}$ is an $(n, \epsilon)$-separated set of $X$ and $$\sum\limits_{x\in E_{n}}\exp (S_{n}(\varphi-\beta \psi))(x)<a+2.$$ It follows from this that $$P(\varphi-\beta \psi)\leq 0.$$ Since $$\begin{aligned} &\Big\{\beta\in\mathbb{R}:\lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta\psi, \epsilon)<\infty\Big\}\\ \subseteq&\ \{\beta\in\mathbb{R}:P(\varphi-\beta\psi)\leq 0\},\end{aligned}$$ the inequality (3.10) follows from Theorem 3.1.
Let $(X,f)$ be a TDS, and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. We have $$P_{\psi}(\varphi)=\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\leq 0\}=\sup\{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\geq 0\}.$$ **Proof.** If there exists a $\beta \in \mathbb{R}$ such that $P(\varphi-\beta \psi)=\infty$, then $P(\varphi-\beta \psi)=\infty$ for all $\beta \in \mathbb{R}$. By Corollary 3.1, we have $$P_{\psi}(\varphi)=\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\leq 0\}=\sup\{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\geq 0\}.$$ Suppose for any $\beta \in \mathbb{R}$, $P(\varphi-\beta \psi)<\infty$. By (1.1) we have $$P(\varphi-\beta \psi)=\sup \{h_{\nu}(f)+\int \varphi d\nu- \beta\int\psi d\nu: \nu\in M(X,f)\}.$$ Then for each $\beta_{1},\beta_{2} \in \mathbb{R}, \beta_{1}<\beta_{2}$ and $0<\epsilon<\frac{m(\beta_{2}-\beta_{1})}{2}$, there exists a $\mu \in M(X,f)$ such that $$\begin{aligned} &\sup\Big\{h_{\nu}(f)+\int\varphi d\nu- \beta_{2}\int\psi d\nu: \nu\in M(X,f)\Big\}\\ <&\ h_{\mu}(f)+\int\varphi d\mu- \beta_{2}\int\psi d\mu +\epsilon\\ =&\ h_{\mu}(f)+\int\varphi d\mu- \beta_{1}\int\psi d\mu +\epsilon-(\beta_{2}-\beta_{1})\int\psi d\mu\\ <&\ h_{\mu}(f)+\int\varphi d\mu- \beta_{1}\int\psi d\mu -(\beta_{2}-\beta_{1})\Big(\int\psi d\mu-\frac{m}{2}\Big)\\ \leq&\ \sup\Big\{h_{\nu}(f)+\int\varphi d\nu- \beta_{1}\int\psi d\nu: \nu\in M(X,f)\Big\}-(\beta_{2}-\beta_{1})\Big(\int\psi d\mu-\frac{m}{2}\Big).\end{aligned}$$ Thus, the map $\beta \mapsto P(\varphi-\beta \psi)$ is strictly decreasing. Next, we prove that $$P(\varphi-\beta\psi)< 0\Longrightarrow R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty.$$ Let $P(\varphi-\beta\psi)=2a< 0$. For any $\epsilon>0$, we can find $N\in \mathbb{N}$ such that for all $n\in \mathbb{N}$ with $n\geq N$, $$\sup\limits_{E_{n}}\sum \limits_{x\in E_{n}}\exp (S_{n}(\varphi-\beta\psi))(x)\leq \exp (na),$$ where the supremum is taken over all $(n,\epsilon)$-separated sets of $X$.
Consequently, for sufficiently large $T>0$, we have $$R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)\leq \sum\limits_{n\geq N}\sup\limits_{E_{n}}\sum\limits_{x\in E_{n}}\exp (S_{n}(\varphi-\beta\psi))(x)\leq \frac{1}{1-\exp(a)}<\infty,$$ and the conclusion holds.\ Since $$\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)< 0\}\geq \inf\{\beta \in \mathbb{R}: \lim \limits_{\epsilon\rightarrow 0}\limsup \limits_{T \rightarrow \infty}R_{\psi ,T}(f,\varphi-\beta \psi, \epsilon)<\infty\},$$ by Theorem 3.1 and Corollary 3.1, we conclude that $$\begin{aligned} P_{\psi}(\varphi)=\inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\leq 0\}&= \inf \{\beta \in \mathbb{R}: P(\varphi-\beta\psi)< 0\}\\ &=\sup\{\beta \in \mathbb{R}: P(\varphi-\beta\psi)\geq 0\}.\end{aligned}$$ Let $(X,f)$ be a TDS and $\varphi, \psi \in C(X,\mathbb{R})$ with $\psi>0$. Suppose that for each $\beta\in \mathbb{R}$ we have $P(\varphi-\beta\psi)\in \mathbb{R}$. Then $P(\varphi-P_{\psi}(\varphi)\psi)=0$. **Proof.** By the proof of Corollary 3.2, the map $\beta \mapsto P(\varphi-\beta \psi)$ is a strictly decreasing, continuous map on $ \mathbb{R}$. Hence $P(\varphi-P_{\psi}(\varphi)\psi)=0$. We are now ready to prove Theorem 1.1.\ *Proof of Theorem 1.1*. Firstly, we show $$P_{\psi}(\varphi)\geq \sup \left\{\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}: \nu \in M(X,f)\right\}.$$ By Corollary 3.1 we have $0\geq P(\varphi-\beta\psi)$ for $\beta> P_{\psi}(\varphi)$. It follows from (1.1) that $$\begin{aligned} 0&\geq P(\varphi-\beta\psi)\\ &= \sup\Big\{h_{\nu}(f)+\int \varphi d\nu- \beta\int\psi d\nu: \nu\in M(X,f)\Big\}\\ &=\sup\Big\{\int\psi d\nu\Big(\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}-\beta\Big): \nu\in M(X,f)\Big\},\end{aligned}$$ and hence (3.11) holds. Next, we establish the reverse inequality. Similarly, by Corollary 3.2 we have $ P(\varphi-\beta\psi)\geq 0$ for $ \beta<P_{\psi}(\varphi)$. Then $$\begin{aligned} &P(\varphi-\beta\psi)\\ =&\ \sup\Big\{h_{\nu}(f)+\int \varphi d\nu- \beta\int\psi d\nu: \nu\in M(X,f)\Big\}\\ =&\ \sup\Big\{\int\psi d\nu\Big(\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}-\beta\Big): \nu\in M(X,f)\Big\}\\ \geq&\ 0.\end{aligned}$$ It is easy to see that $$P_{\psi}(\varphi)\leq\sup \left\{\frac{h_{\nu}(f)}{\int \psi d\nu}+\frac{\int \varphi d\nu}{\int \psi d\nu}: \nu \in M(X,f)\right\}.$$ Combining (3.11) and (3.12), we obtain (1.3). A SPECIAL CASE (BS-DIMENSION) ============================== In this section we will show that the BS dimension, defined via a Carathéodory structure, is a special case of the induced pressure.
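Before recalling the definition of the BS dimension, it may help to see the characterization of Corollary 3.3 in a computable example (this sketch is illustrative and not part of the paper). For the full shift on $k$ symbols with potentials depending only on the first symbol, $P(\varphi-\beta\psi)=\log\sum_{i}e^{\varphi_i-\beta\psi_i}$, so $P_{\psi}(\varphi)$ is the unique zero in $\beta$ of this strictly decreasing map, and it can be located by bisection; the particular values of $\varphi_i,\psi_i$ below are arbitrary choices. Taking $\varphi\equiv 0$ yields Bowen's equation $\sum_i e^{-\alpha\psi_i}=1$, whose root is the BS dimension by the Proposition of this section.

```python
import math

def full_shift_pressure(phi, psi, beta):
    """P(phi - beta*psi) for the full shift when phi, psi depend only on
    the first symbol: log sum_i exp(phi_i - beta * psi_i)."""
    return math.log(sum(math.exp(p - beta * q) for p, q in zip(phi, psi)))

def induced_pressure(phi, psi, lo=-100.0, hi=100.0, iters=200):
    """Corollary 3.3: P_psi(phi) is the unique zero of the strictly
    decreasing map beta -> P(phi - beta*psi); locate it by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if full_shift_pressure(phi, psi, mid) > 0.0:
            lo = mid          # pressure still positive: root is to the right
        else:
            hi = mid          # pressure nonpositive: root is to the left
    return 0.5 * (lo + hi)

# Arbitrary sample potentials on a 3-symbol full shift.
phi = [0.1, -0.3, 0.2]
psi = [1.0, 2.0, 1.5]
beta_star = induced_pressure(phi, psi)

# phi = 0 recovers the BS dimension via Bowen's equation: with
# psi = (log 2, log 3) it solves 2^(-a) + 3^(-a) = 1.
dim_bs = induced_pressure([0.0, 0.0], [math.log(2.0), math.log(3.0)])
```

For $\psi\equiv 1$ the same routine returns $\log k$, the topological entropy of the full shift, consistent with Remark (iii) that $P_{1}(\varphi)=P(\varphi)$.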
The BS dimension was first defined by Barreira and Schmeling \[2\] as follows. For $n\geq1, \epsilon>0$, we put $$\mathcal{W}_{n}(\epsilon)=\{B_{n}(x,\epsilon):x\in X\}.$$ For any $B_{n}(x,\epsilon)\in\mathcal{W}_{n}(\epsilon)$ and $\psi\in C(X,\mathbb{R})$ with $\psi>0$, the function $\psi$ induces a set function via $$\psi(B)=\sup\limits_{x\in B}(S_{n}\psi)(x).$$ We say that $\mathcal{G}\subset \bigcup_{j\geq N}\mathcal{W}_{j}(\epsilon)$ covers $X$ if $\bigcup\limits_{B\in\mathcal{G}}B=X$. Let $(X,f)$ be a TDS. For any $\alpha>0, N\in\mathbb{N}$ and $\epsilon>0$, we define $$M(\alpha,\epsilon,N)=\inf \limits_{\mathcal{G}}\{\sum\limits_{B\in\mathcal{G}}\exp(-\alpha \psi(B))\},$$ where the infimum is taken over all finite $\mathcal{G}\subset \bigcup_{j\geq N}\mathcal{W}_{j}(\epsilon)$ that cover $X.$ Obviously $M(\alpha,\epsilon,N)$ is a finite outer measure on $X$ and increases as $N$ increases. Define $$m(\alpha,\epsilon)=\lim\limits_{N \rightarrow \infty}M(\alpha,\epsilon,N)$$ and $$\dim_{BS}(X,\epsilon)=\inf \{\alpha:m(\alpha,\epsilon)=0\}=\sup\{\alpha:m(\alpha,\epsilon)=\infty\}.$$ The BS dimension is $\dim_{BS}X=\lim \limits_{\epsilon\rightarrow 0}\dim_{BS}(X,\epsilon)$; this limit exists because given $\epsilon_{1}<\epsilon_{2},$ we have $m(\alpha,\epsilon_{1})\geq m(\alpha,\epsilon_{2})$, so $\dim_{BS}(X,\epsilon_{1})\geq\dim_{BS}(X,\epsilon_{2})$. For a TDS, we have $P_{\psi}(0)=\dim_{BS}X$. **Proof.** By \[2, Proposition 6.4\], we have $P(-\psi\dim_{BS}X)=0$. Now it follows from Corollary 3.3 that $P_{\psi}(0)=\dim_{BS}X$. EQUILIBRIUM MEASURES AND GIBBS MEASURES ======================================== In this section we consider the problem of the existence of equilibrium measures for the induced pressure. We also study the relation between Gibbs measures and equilibrium measures for the induced pressure in the particular case of symbolic dynamics. Let $(X,f)$ be a TDS and $\varphi,\psi\in C(X,\mathbb{R})$ with $\psi>0$.
A member $\mu$ of $M(X,f)$ is called an equilibrium measure for $\psi$ and $\varphi$ if $P_{\psi}(\varphi)=\frac{h_{\mu}(f)+\int \varphi d\mu}{\int \psi d\mu}.$ We will write $M_{\psi,\varphi}(X,f)$ for the collection of all equilibrium measures for $\psi$ and $\varphi$. Let $(X,f)$ be a TDS. Then $f$ is said to be positively expansive if there exists $\epsilon>0$ such that $x=y$ whenever $d(f^{n}(x),f^{n}(y))<\epsilon$ for every $n\in\mathbb{N}\cup \{0\}$. The entropy map of a TDS is the map $\mu\mapsto h_{\mu}(f)$, which is defined on $M(X,f)$ and has values in $[0,\infty]$. The entropy map $\mu\mapsto h_{\mu}(f)$ is called upper semi-continuous if given a measure $\mu \in M(X,f)$ and $\delta>0$, we have $h_{\nu}(f)<h_{\mu}(f)+\delta$ for any measure $\nu\in M(X,f)$ in some open neighborhood of $\mu$. Now we show that any positively expansive map has equilibrium measures. Let $(X,f)$ be a TDS and $\varphi,\psi\in C(X,\mathbb{R})$ with $\psi>0$. Then [*($\romannumeral1$)*]{} If $f$ is a positively expansive map, then $M_{\psi,\varphi}(X,f)$ is compact and non-empty. [*($\romannumeral2$)*]{} If $\varphi,\phi,\psi\in C(X,\mathbb{R})$ with $\psi>0$ and if there exists a $c\in \mathbb{R}$ such that $$\varphi-\phi-c\int\psi d\mu\in \overline{\{\tau\circ f -\tau:\tau\in C(X,\mathbb{R})\}}$$ for each $\mu\in M(X,f)$, then $M_{\psi,\varphi}(X,f)=M_{\psi,\phi}(X,f)$. **Proof.** ($\romannumeral1$) For a positively expansive map $f$, it follows from the proof in \[1,9\] that the map $\mu \mapsto h_{\mu}(f)$ is upper semi-continuous. Then $\mu\mapsto \frac{h_{\mu}(f)}{\int \psi d\mu}$ is upper semi-continuous. Since the map $$\mu\mapsto \frac{\int \varphi d\mu}{\int\psi d\mu}$$ is continuous for each $\varphi\in C(X,\mathbb{R}),$ the map $$\mu\mapsto \frac{h_{\mu}(f)+\int\varphi d\mu}{\int\psi d\mu}$$ is upper semi-continuous. Since an upper semi-continuous map attains a maximum on any compact set, it follows from Theorem 1.1 that $M_{\psi,\varphi}(X,f)\neq \emptyset$.
The upper semi-continuity also implies $M_{\psi,\varphi}(X,f)$ is compact because if $\mu_{n}\in M_{\psi,\varphi}(X,f)$ and $\mu_{n}\rightarrow \mu \in M(X,f)$, then $$\frac{h_{\mu}(f)+\int \varphi d\mu}{\int \psi d\mu}\geq \limsup\limits_{n\rightarrow\infty}\frac{h_{\mu_{n}}(f)+\int \varphi d\mu_{n}}{\int \psi d\mu_{n}}=P_{\psi}(\varphi),$$ so $\mu\in M_{\psi,\varphi}(X,f)$.\ ($\romannumeral2$) Note that for each $\mu\in M(X,f)$ $$\frac{h_{\mu}(f)+\int\varphi d\mu}{\int\psi d\mu}=\frac{h_{\mu}(f)+\int\phi d\mu}{\int\psi d\mu}+c;$$ therefore $P_{\psi}(\varphi)=P_{\psi}(\phi)+c$, and hence $M_{\psi,\varphi}(X,f)=M_{\psi,\phi}(X,f)$. Next, we consider symbolic dynamics. Let $(\Sigma_{A} ,\sigma)$ be a one-sided *topological Markov shift* (TMS, for short) over a finite set $S=\{1,2,\ldots, k\}$. This means that there exists a matrix $A=(t_{ij})_{k\times k}$ of zeros and ones (with no row or column made entirely of zeros) such that $$\Sigma_{A} =\{{\omega =(i_{1},i_{2},\ldots)\in S^{\mathbb{N}}:t_{i_{j}i_{j+1}}=1 \text{ for every } j\in \mathbb{N}}\}.$$ The *shift map* $\sigma :\Sigma_{A} \rightarrow\Sigma_{A}$ is defined by $(i_{1},i_{2},i_{3},\ldots)\mapsto (i_{2},i_{3},\ldots)$. We call $C_{i_{1}\ldots i_{n}}=\{(j_{1}j_{2}\ldots) \in \Sigma_{A} :j_{l}=i_{l} \text{ for }l=1,\ldots,n\}$ the *cylinder set* generated by $i_{1}\ldots i_{n}$. We equip $\Sigma_{A}$ with the topology generated by the cylinder sets. The topology of a TMS is metrizable and may be given by the metric $d_{\alpha}(\omega ,\omega')=e^{-\alpha|\omega\wedge\omega'|}, \alpha>0$, where $|\omega\wedge\omega'|$ denotes the length of the longest common initial block of $\omega ,\omega'\in \Sigma_{A}$. The shift map $\sigma$ is continuous with respect to this metric. A TMS $(\Sigma_{A},\sigma)$ is called topologically mixing if for every $a, b \in S$, there exists an $N_{ab}\in\mathbb{N}$ such that for every $n>N_{ab}$, we have $C_{a}\cap \sigma^{-n}C_{b}\neq\emptyset$.
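The mixing condition just stated can be checked mechanically: for a TMS whose transition matrix has no zero row or column, topological mixing is equivalent to some power of $A$ being entrywise positive (i.e., $A$ being primitive). The following small sketch (an illustration, not from the paper) performs this check with Boolean matrix powers.

```python
def is_topologically_mixing(A, max_power=64):
    """Return True iff some power A^n (n <= max_power) is entrywise
    positive, which for a TMS with no zero row or column is equivalent
    to topological mixing."""
    k = len(A)
    M = [row[:] for row in A]
    for _ in range(max_power):
        if all(M[i][j] > 0 for i in range(k) for j in range(k)):
            return True
        # Boolean matrix product: M <- M * A
        M = [[1 if any(M[i][l] and A[l][j] for l in range(k)) else 0
              for j in range(k)] for i in range(k)]
    return False
```

For example, the golden-mean shift with $A=\begin{pmatrix}1&1\\1&0\end{pmatrix}$ is topologically mixing (already $A^2$ is positive), while the period-two matrix $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ is not, since its powers alternate and never become positive.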
Let $(\Sigma_{A} ,\sigma)$ be a TMS and $\varphi,\psi\in C(\Sigma_{A},\mathbb{R})$ with $\psi>0$. We say that a probability measure $\mu$ on $\Sigma_{A}$ is a Gibbs measure for $\psi$ and $\varphi$ if there exists a $K>1$ such that $$K^{-1}\leq \frac{\mu(C_{i_{1}\ldots i_{n}})}{\exp [-(S_{n}\psi)(\omega) P_{\psi}(\varphi)+(S_{n}\varphi)(\omega)]}\leq K$$ for each $(i_{1},i_{2},\ldots)\in \Sigma_{A}, n\in \mathbb{N}$ and $\omega\in C_{i_{1}\ldots i_{n}}$. We show that $\sigma$-invariant Gibbs measures are equilibrium measures. By an argument similar to that of \[1, Theorem 3.4.2\], we obtain the following statement: If a probability measure $\mu$ on $(\Sigma_{A},\sigma)$ is a $\sigma$-invariant Gibbs measure for $\psi$ and $\varphi$, then it is also an equilibrium measure for $\psi$ and $\varphi$. Now we establish the existence of Gibbs measures. Let $(\Sigma_{A},\sigma)$ be a topologically mixing TMS. Suppose that $\varphi$ and $\psi$ are Hölder continuous functions and $\psi>0$. Then there exists at least one $\sigma$-invariant Gibbs measure for $\psi$ and $\varphi$. **Proof.** By Corollary 3.3 we have $$P(\varphi-P_{\psi}(\varphi)\psi)=0.$$ As $\varphi-P_{\psi}(\varphi)\psi$ is Hölder continuous, it follows from \[1, Theorem 3.4.4\] that there exist a $\sigma$-invariant probability measure $\mu$ and a $K>1$ such that $$K^{-1}\leq \frac{\mu(C_{i_{1}\ldots i_{n}})}{\exp [-n P(\varphi-P_{\psi}(\varphi)\psi)-(S_{n}\psi)(\omega) P_{\psi}(\varphi)+(S_{n}\varphi)(\omega)]}\leq K.$$ Since $P(\varphi-P_{\psi}(\varphi)\psi)=0$, this is exactly the Gibbs property above, so $\mu$ is a $\sigma$-invariant Gibbs measure for $\psi$ and $\varphi$. [**ACKNOWLEDGEMENTS.**]{} This research was supported by the National Natural Science Foundation of China (Grant No. 11271191) and the National Basic Research Program of China (Grant No. 2013CB834100). We would like to thank the referee for very useful comments and helpful suggestions. The first author would like to thank Dr. Zheng Yin for useful discussions. [50]{} L. Barreira. *Thermodynamic Formalism and Applications to Dimension Theory*. (Springer, 2011). L. Barreira and J. Schmeling.
“Sets of ‘non-typical’ points have full topological entropy and full Hausdorff dimension,” Israel J. Math. [**116**]{}, 29–70 (2000). R. Bowen. *Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms*. Lecture Notes in Math. Vol. 470, (Springer-Verlag, 1975). R. Bowen. “Hausdorff dimension of quasicircles," Inst. Hautes Études Sci. Publ. Math. No. [**50**]{}, 11–25 (1979). J. Jaerisch, M. Kesseböhmer and S. Lamei. “Induced topological pressure for countable state Markov shifts,” Stochastics and Dyn. [**14**]{} (2014) Y. B. Pesin. *Dimension Theory in Dynamical Systems. Contemporary Views and Applications*. (University of Chicago Press, 1998). Y. B. Pesin and B. S. Pitskel. “Topological Pressure and the Variational Principle for Non-Compact Sets,” Functional Anal. and Appl. [**18:4**]{}, 307–318 (1984). Y. B. Pesin. *Dimension theory in Dynamical Systems. Contemporary Views and Applications.* (University of Chicago Press, 1997). P. Walters. *An Introduction to Ergodic Theory*. (Springer-Verlag, 1982). P. Walters. “A variational principle for the pressure of continuous transformations,” Amer. J. Math. [**97**]{}, 937–971 (1975). D. Ruelle. “Statistical mechanics on a compact set with $\mathbb{Z}^{\nu}$ action satisfying expansiveness and specification,” Trans. Amer. Math. Soc. [**187**]{}, 237–251 (1973). D. Ruelle. “Repellers for real analytic maps," Ergod. Theory Dynam. Syst. [**2**]{}, 99–107 (1982). G. Keller. *Equilibrium States in Ergodic Theory.* (Cambridge University Press, 1998). M. Zinsmeister. *Thermodynamic Formalism and Holomorphic Dynamical Systems.* (Translated from the 1996 French original by C. Greg Anderson. American Mathematical Society, Providence, RI; Société Mathématique de France, Paris, 2000).
--- abstract: 'Magnetic resonance imaging (MRI) with multiple protocols is commonly used for diagnosis, but it suffers from a long acquisition time, which leaves the image quality vulnerable to, say, motion artifacts. To accelerate, various methods have been proposed to reconstruct full images from undersampled k-space data. However, these algorithms are inadequate for two main reasons. Firstly, aliasing artifacts generated in the image domain are structural and non-local, so that sole image domain restoration is insufficient. Secondly, though MRI comprises multiple protocols during one exam, almost all previous studies only consider the reconstruction of an individual protocol using a highly distorted undersampled image as input, leaving the use of a fully-sampled short protocol (say T1) as complementary information highly underexplored. In this work, we address the above two limitations by proposing a Dual Domain Recurrent Network (DuDoRNet) with a deep T1 prior embedded to simultaneously recover k-space and images for accelerating the acquisition of MRI with a long imaging protocol. Specifically, a Dilated Residual Dense Network (DRDNet) is customized for dual domain restorations from undersampled MRI data. Extensive experiments on different sampling patterns and acceleration rates demonstrate that our method consistently outperforms state-of-the-art methods, and can achieve SSIM up to 0.99 at $6 \times$ acceleration.' author: - | Bo Zhou\ Department of Biomedical Engineering\ Yale University\ [[email protected]]{} - | S. Kevin Zhou\ Institute of Computing Technology\ Chinese Academy of Sciences\ [[email protected]]{} bibliography: - 'egbib.bib' title: 'DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction with Deep T1 Prior' --- Introduction ============ ![image](Figures/Figure_pipeline.pdf){width="100.00000%"} MRI is one of the most widely applied imaging procedures for disease diagnosis and treatment planning.
As a non-invasive, radiation-free, and in-vivo imaging modality, it provides significantly better soft-tissue contrast than many other imaging modalities and offers accurate measurements of both anatomical and functional signals. However, its long acquisition time, owing to sampling full k-space, especially under multiple protocols requiring long repetition and echo times, could lead to significant artifacts in the reconstructed image caused by patient or physiological motion during acquisition, such as cardiac motion and respiration. Furthermore, a long acquisition time also limits the availability of MR scanners for patients and causes delayed patient care in the medical system. To accelerate the acquisition, various efforts have been made to improve the reconstruction image quality from undersampled k-space data. Previously, Compressed Sensing (CS) and Parallel Imaging have achieved significant progress in fast MRI [@otazo2010combination; @griswold2002generalized; @huang2005k]. In CS-MRI, one assumes that images have a sparse representation in certain domains [@lustig2007sparse; @murphy2012fast; @liang2009accelerating]. In conventional CS-MRI, previous works focus on using sparse coefficients in sparsifying transforms, such as the wavelet transform [@qu2010combined; @zhang2015exponential] and the contourlet transform [@qu2010iterative], combined with regularization terms, such as total variation, to solve the ill-posed inverse problems in an iterative manner. However, these iterative minimization processes based on sparsifying transforms tend to generate smaller sparse coefficient values, leading to loss of details and unwanted artifacts in the reconstruction when the undersampling rate is high [@ravishankar2010mr]. Thus, current CS-MRI is limited to an acceleration rate of $2 \sim 3$ [@lustig2007sparse; @ravishankar2010mr]. Furthermore, reconstruction based on these iterative algorithms is time-consuming; thus it is challenging to deploy them in near real-time MRI scenarios, such as cardiac and functional MRI.
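The structural, non-local character of undersampling artifacts is easy to reproduce. The toy sketch below (illustrative only; real MRI reconstruction operates on 2D multi-coil data with FFTs) builds a length-8 signal, keeps every other k-space sample (a 2$\times$ uniform Cartesian pattern), zero-fills the rest, and reconstructs: the result is the average of the signal and a copy shifted by half the field of view, i.e., a coherent ghost rather than local noise.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (the toy stand-in for k-space)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT with 1/N normalization."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

# A 1D "image" with a single bright pixel, and its k-space.
x = [0.0, 0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0]
k_space = dft(x)

# 2x uniform undersampling: keep even k-space lines, zero-fill the rest.
k_under = [k_space[k] if k % 2 == 0 else 0.0 for k in range(len(k_space))]
x_zero_filled = idft(k_under)
# The bright pixel (amplitude 4) now appears at half amplitude in two
# positions: the true location 3 and an aliased ghost at 3 + N/2 = 7.
```

The zero-filled reconstruction equals $\frac{1}{2}\big(x[n]+x[(n+N/2)\bmod N]\big)$, which is why purely local image-domain restoration struggles with such artifacts.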
More recently, triggered by the success of deep learning in computer vision [@lecun2015deep], deep learning based algorithms have been developed for fast MRI reconstruction and have demonstrated significant advantages [@wang2016accelerating; @schlemper2017deep; @sun2016deep; @hammernik2018learning; @liu2019theoretically; @lonning2019recurrent; @han2019k; @mardani2018deep; @zhang2019reducing; @zhu2018image]. Wang et al. [@wang2016accelerating] first proposed to train a multi-layer CNN to recover the fully sampled MRI image from the undersampled MRI image using supervised training with paired data. Similarly, Jin et al. [@jin2017deep] proposed to use U-Net [@ronneberger2015u] to solve the inverse problem in imaging. Yang et al. [@yang2017dagan] developed a De-Aliasing GAN that uses a U-Net as the generator with a loss function consisting of four components: an image domain loss, a frequency domain loss, a perceptual loss, and an adversarial loss. Quan et al. [@quan2018compressed] introduced RefineGAN, which uses a U-Net structure based generator with a cyclic loss for MRI de-aliasing. However, individual training is required for different sampling patterns and undersampling rates. Schlemper et al. [@schlemper2017deep; @qin2018convolutional] came up with the data consistency layer in a deep cascade of CNNs, which ensures the consistency of the reconstructed image in k-space and potentially reduces the issue of overfitting in deep training. Other than learning from pre-defined undersampling patterns, Zhang et al. [@zhang2019reducing] proposed to use active learning to determine the 1D Cartesian undersampling pattern on the fly during the acquisition. These pioneering deep learning based methods surpass conventional algorithms owing to the high nonlinearity of data-driven feature extraction, and achieve significantly lower computation during run-time. However, existing deep learning based methods have three major limitations. In this work, we aim to break these limitations. 
Firstly, the above-mentioned algorithms principally learn in the image domain alone, with a few amendments that use the frequency domain in the loss design [@yang2017dagan] or in the data consistency layer [@schlemper2017deep]. All of these deep learning based algorithms are designed to receive an image reconstructed from undersampled k-space as input and output an image as if reconstructed from fully sampled k-space; but unfortunately, in the input images to the CNNs, detailed structures are likely distorted or may even disappear. While there are recent attempts at restoring the fully sampled image from undersampled k-space [@eo2018kiki], the large number of trainable parameters means the model can only be trained in an incremental fashion, which leaves room for improvement. As the first step to tackle this issue, we develop a dual domain learning scheme in MRI, which allows the network to restore the data in both the image and frequency domains in a recurrent fashion. Previous works on CT metal artifact reduction and limited angle CT reconstruction also demonstrated the advantages of cross domain learning [@lin2019dudonet; @zhou2019limited]. Secondly, previous studies are limited to conventional network designs, such as multi-layer CNNs and U-Net, and there are few attempts to design a customized network structure for undersampled reconstruction. Inspired by the recent development of super-resolution imaging techniques [@dong2015image; @kim2016accurate; @zhang2018residual; @hu2018squeeze], we propose a Dilated Residual Dense Network (DRD-Net) with a Squeeze-and-Excitation Dilated Residual Dense Block (SDRDB) as the building module. The DRD-Net is used for both image and frequency domain restorations. Our SDRDB is customized for the reconstruction task. In fast MRI acquisitions, we observe significant sparsity in the undersampled k-space, especially when the undersampling rate is high. 
Previous studies have demonstrated that better k-space recovery can be achieved by interpolating with non-local information, as in GRAPPA [@griswold2002generalized]. Based on this observation, the first motivation of our DRD Block design in k-space is to synthesize missing k-space data from a large receptive field by utilizing non-local k-space data, thus bringing more robustness and reliability. In the image domain, human organ anatomy is correlated across different regions. Our DRD Block with a large receptive field in the image domain can better capture this correlation between anatomical regions and synthesize the missing anatomical information even when the signal is highly distorted. Thirdly, previous studies have not fully explored the use of a protocol that requires a short acquisition time as a deep prior to guide the reconstruction process. In clinical routines, the typical total scanning time for T1, T2, and FLAIR is $\sim 20$ mins, in which T2 and FLAIR take the majority due to their long repetition and echo times. However, when using an undersampled image, in which detailed structures may have already disappeared, as input, existing methods can synthesize artificial structures that do not belong to the patient. Recently, Xiang et al. explored the merits of using T1 as an additional channel input in the image domain to aid reconstruction [@xiang2018ultra]. To further address this issue, we propose to use T1 as a deep prior in both the image domain and the k-space domain to improve the reconstruction fidelity, given that the structural information in T1 is highly correlated with that in other protocols. In summary, we propose a **Du**al **Do**main **R**ecurrent **Net**work (DuDoRNet) embedded with T1 priors to address these problems by learning two DRD-Nets on dual domains in a recurrent fashion to restore k-space and image domains simultaneously. 
Our method (Figure \[fig:network\]) consists of three major parts: recurrent blocks comprised of an image restoration network (iDRD-Net) and a k-space restoration network (kDRD-Net), recurrent T1 priors embedded in the image and k-space domains, and recurrent data consistency regularization. A correct reconstruction should ensure the consistency in both domains, which are linked by the linear operation of the Fourier transform. Our intuition is that image domain restoration can be enhanced by fusing signal back-propagated from the k-space restoration, and vice versa. Given sparse signal in both domains, a DRD-Net with a large receptive field can sense more signal for a better restoration. Our recurrent learning can better avoid the overfitting incurred by directly optimizing restoration networks in dual domains. Extensive experiments on MRI patients with different sampling patterns and acceleration rates demonstrate that our DuDoRNet generates superior reconstructions. Problem Formulation =================== Denoting a 2D k-space with complex values as $k$, and a 2D image reconstructed from $k$ as $x$, we need to reconstruct a fully sampled image $x_f$ from both the undersampled k-space ($k_u$) and the image reconstructed from it ($x_u$). The relationship between $x$ and $k$ can be written as: $$k_u = M \odot k_f = M \odot \mathcal{F}(x_f) ,$$ $$x_u = \mathcal{F}^{-1} (k_u) = \mathcal{F}^{-1} (M \odot \mathcal{F}(x_f)) ,$$ where $M$ represents the binary undersampling mask used for acceleration; $k_f$ and $k_u$ denote the fully sampled and the undersampled k-space, respectively; $x_f$ and $x_u$ denote the images reconstructed from fully sampled and undersampled k-space, respectively; $\odot$ is the element-wise multiplication operation; and $\mathcal{F}$ and $\mathcal{F}^{-1}$ are the 2D Fourier transform and its inverse. To tackle the ill-posed inverse problem of reconstructing $x_f$ from limited sampled data, we propose to restore both the image domain and the k-space domain. 
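The forward model above is straightforward to simulate with NumPy's FFT routines. The following sketch (function and variable names are ours, for illustration only, not from the paper's code) produces the undersampled pair ($k_u$, $x_u$) from a fully sampled image and a binary mask:

```python
import numpy as np

def undersample(x_f, mask):
    """Simulate an undersampled acquisition:
    k_u = M .* F(x_f), x_u = F^{-1}(k_u)."""
    k_f = np.fft.fft2(x_f)    # fully sampled k-space
    k_u = mask * k_f          # apply the binary undersampling mask M
    x_u = np.fft.ifft2(k_u)   # zero-filled (aliased) reconstruction
    return k_u, x_u
```

With an all-ones mask, $x_u$ recovers $x_f$ up to numerical precision; zeroing out k-space lines introduces the structural, non-local aliasing artifacts discussed above.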
In image domain restoration, the optimization can be expressed as minimizing the prediction error: $$\label{eqn:img_opt_x} \underset{\tilde x}{\arg\min} || x_f - \tilde x ||^2 = \underset{\theta_x}{\arg\min} || x_f - \mathcal{P}_x (x_u;\theta_x) ||^2 ,$$ where $\tilde x$ is the prediction of the fully sampled image ($x_f$) generated by the estimation function ($\mathcal{P}_x$) with its parameters ($\theta_x$) using the undersampled image ($x_u$) as input. Furthermore, the data consistency constraint [@schlemper2017deep] is often used in addition to the prediction error: $$\begin{aligned} \underset{\theta_x}{\arg\min} &( || x_f - \mathcal{P}_x (x_u;\theta_x) ||^2 \label{eqn:img_opt_dc1}\\ & + ||k_u - M \odot \mathcal{F} (\mathcal{P}_x (x_u;\theta_x)) ||^2 ), \label{eqn:img_opt_dc2}\end{aligned}$$ where \[eqn:img\_opt\_dc1\] is the same as \[eqn:img\_opt\_x\], and \[eqn:img\_opt\_dc2\] is the regularization term for data consistency. Similarly, we can formulate the optimization target for k-space restoration as: $$\label{eqn:img_opt_k} \underset{\tilde k}{\arg\min} || k_f - \tilde k ||^2 = \underset{\theta_k}{\arg\min} || k_f - \mathcal{P}_k (k_u;\theta_k) ||^2 ,$$ where $\tilde k$ is the prediction of the fully sampled k-space ($k_f$) generated by the estimation function ($\mathcal{P}_k$) with its parameters ($\theta_k$) using the undersampled k-space ($k_u$) as input. 
Combining the image domain and k-space domain restoration with data consistency, the target function can thus be formulated as: $$\begin{aligned} \underset{\theta_k,\theta_x}{\arg\min} ( || k_f - \mathcal{P}_k (\mathcal{F}(\mathcal{P}_x (x_u;\theta_x));\theta_k) ||^2 &\\ + || x_f - \mathcal{P}_x (\mathcal{F}^{-1}(\mathcal{P}_k (k_u;\theta_k));\theta_x) ||^2 &\\ + || k_u - M \odot \mathcal{F} (\mathcal{P}_x (\mathcal{F}^{-1} (\mathcal{P}_k (k_u;\theta_k)) ; \theta_x)) ||^2 &) .\end{aligned}$$ Directly optimizing multiple terms in the above target function is challenging in traditional network design, owing to its high computational complexity, overfitting, and local optima problems. Thus, we propose a dual-domain recurrent learning strategy that optimizes $\theta_x$ and $\theta_k$ recurrently. Our proposed approach is illustrated in detail in the following sections. Methods ======= The overall pipeline of our network is illustrated in Figure \[fig:pipline\]. It consists of three major parts: 1) the dual domain recurrent network consisting of recurrent blocks with image and k-space restoration networks in them; 2) the deep prior information generated from T1 data in the image and k-space domains, fed into the recurrent blocks of the network; and 3) the recurrent data consistency regularization using sampled k-space data. In each recurrent block, we propose to use a Dilated Residual Dense Network (DRD-Net) for both image and k-space restorations. Dilated Residual Dense Network ------------------------------ ![The architecture of our **D**ilated **R**esidual **D**ense Network (**DRD-Net**) with building modules of SDRDB shown in Figure \[fig:network\_block\]. The input can be either $x_u$ in the image domain or $k_u$ in the k-space domain. Each convolution operation is followed by ReLU.[]{data-label="fig:network"}](Figures/Figure_Network.pdf){width="46.00000%"} We develop a network structure for both image and k-space restoration, called the Dilated Residual Dense Network (DRD-Net). 
The idea is to use a Squeeze-and-Excitation Dilated Residual Dense Block (SDRDB) as the backbone of our DRD-Net. The DRD-Net design and SDRDB are shown in Figure \[fig:network\] and Figure \[fig:network\_block\], respectively. Compared to RDN [@zhang2018residual], we customize the local and global structure design for our reconstruction task. ### Global Structure Our DRD-Net consists of three parts: initial feature extraction (IFE) via two sequential $3 \times 3$ convolution layers, multiple SDRDBs followed by global feature fusion, and global residual learning. The overall pipeline is as follows: $$F_{-1} = \mathcal{P}_{IFE_{1}} (X_u) ,$$ $$F_{0} = \mathcal{P}_{IFE_{2}} (F_{-1}) ,$$ where $\mathcal{P}_{IFE_{1}}$ and $\mathcal{P}_{IFE_{2}}$ denote the first and second convolutional operations in IFE, respectively. The first extracted feature $F_{-1}$ is used for global residual learning in the third part. The second extracted feature $F_{0}$ is used as SDRDB input. If there are $n$ SDRDBs, the $n$-th output $F_{n}$ can be written as: $$\label{eq:sdrdb} F_{n} = \mathcal{P}_{SDRDB_{n}} (F_{n-1}) ,$$ where $\mathcal{P}_{SDRDB_{n}}$ represents the $n$-th SDRDB operation with $n \geq 1$. Given the extracted local features from a set of SDRDBs, we apply global feature fusion (GFF) to extract the global feature via: $$F_{GF} = \mathcal{P}_{GFF} (\{F_{1}, F_{2}, \dots, F_{n}\}) ,$$ where $\{ \}$ denotes the concatenation operation along the feature channel. Our global feature fusion function $\mathcal{P}_{GFF}$ consists of a $1 \times 1$ and a $3 \times 3$ convolution layer to fuse the extracted local features from different levels of SDRDB. The GFF output is used as input for global residual learning: $$X_f = \mathcal{P}_{final} (F_{GF} + F_{-1}) ,$$ where the element-wise addition of the global feature and the initial feature is fed into our final $3 \times 3$ convolution layer to produce the reconstruction output. 
### SDRD Block ![The structure of our **S**queeze-and-excitation **D**ilated **R**esidual **D**ense Block (**SDRDB**).[]{data-label="fig:network_block"}](Figures/Figure_Network_Block.pdf){width="47.00000%"} The design of our SDRDB is shown in Figure \[fig:network\_block\]. It contains four densely connected atrous convolution layers [@chen2017deeplab], local feature fusion, and local residual learning. Expanding the expression of the SDRDB, the $t$-th convolution output in the $n$-th SDRDB can be written as: $$F_{n}^{t} = \mathcal{H}_{n}^{t} \{F_{n-1}, F_{n}^{1}, \dots ,F_{n}^{t-1}\} ,$$ where $\mathcal{H}_{n}^{t}$ denotes the $t$-th convolution followed by Leaky-ReLU in the $n$-th SDRDB, $\{ \}$ is the concatenation operation along the feature channel, and the number of convolutions is limited to $t \leq 4$. Our SDRDB begins by composing a feature pyramid using 4 atrous convolution layers with dilation rates of 1, 2, 4, and 4. For an atrous convolution layer with kernel size ($K$) and kernel dilation ($D$), the receptive field ($R$) can be written as $R = K + (K-1) \times (D-1)$. The combination of two atrous convolution layers creates a new receptive field of $R_{comb} = R_1 + R_2 -1$. Hence, the dense connection over 4 atrous layers enables diverse combinations of layers with various receptive fields, which can extract features from different scales more efficiently than traditional dilation approaches. Figure \[fig:pyramid\] demonstrates the feature pyramid from our 4 densely connected atrous convolution layers. ![Illustration of the scale pyramid using our densely connected atrous layers with kernel size of $3 \times 3$ and dilation factors of 1, 2, 4, and 4. Our dense connection setting creates diverse kernel sizes with much larger receptive fields (blue + green) than naive sequential dilated layers (blue only). 
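The receptive field arithmetic above can be checked with a few lines of Python (the helper names are ours, for illustration only):

```python
def receptive_field(k, d):
    """R = K + (K-1)*(D-1) for kernel size K and dilation D."""
    return k + (k - 1) * (d - 1)

def combined_field(fields):
    """Sequentially composed layers: R_comb = R_1 + R_2 - 1, folded left to right."""
    r = fields[0]
    for f in fields[1:]:
        r = r + f - 1
    return r

# 3x3 kernels with dilation rates 1, 2, 4, 4 as in the SDRDB:
fields = [receptive_field(3, d) for d in (1, 2, 4, 4)]  # [3, 5, 9, 9]
print(combined_field(fields))  # 23
```

So a single pass through all four dilated layers already covers a 23-pixel-wide receptive field, while the dense connections additionally expose every partial combination of these layers.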
R denotes the size of receptive field created from the dilation (D) combination in each block.[]{data-label="fig:pyramid"}](Figures/Figure_pyramid.pdf){width="45.00000%"} \[htb!\]

  Cartesian                ZP       GRAPPA [@griswold2002generalized]   TV [@ma2008efficient]   Wang [@wang2016accelerating]   DeepCas [@schlemper2017deep]   RefGAN [@quan2018compressed]   Ours w/o Prior   UF-T2 [@xiang2018ultra]   Ours
  ------------------------ -------- ----------------------------------- ----------------------- ------------------------------ ------------------------------ ------------------------------ ---------------- ------------------------- --------
  PSNR \[dB\]              22.498   23.318                              23.026                  26.061                         26.993                         25.848                         27.834           30.594                    32.511
  SSIM                     0.667    0.730                               0.725                   0.828                          0.859                          0.819                                            0.929
  MSE ($\times 10^2$)      0.622    0.508                               0.555                   0.306                          0.221                          0.340                                            0.097

  Radial                   ZP       GRAPPA                              TV                      Wang                           DeepCas                        RefGAN                         Ours w/o Prior   UF-T2                     Ours
  ------------------------ -------- ----------------------------------- ----------------------- ------------------------------ ------------------------------ ------------------------------ ---------------- ------------------------- --------
  PSNR \[dB\]              24.294   29.548                              27.222                  32.586                         34.955                         35.110                         37.27            33.318                    40.815
  SSIM                     0.581    0.822                               0.727                   0.889                          0.957                          0.959                                            0.939
  MSE ($\times 10^2$)      0.412    0.122                               0.216                   0.102                          0.025                          0.023                                            0.049

  Spiral                   ZP       GRAPPA                              TV                      Wang                           DeepCas                        RefGAN                         Ours w/o Prior   UF-T2                     Ours
  ------------------------ -------- ----------------------------------- ----------------------- ------------------------------ ------------------------------ ------------------------------ ---------------- ------------------------- --------
  PSNR \[dB\]              26.181   31.893                              33.890                  35.987                         43.867                         36.049                         48.418           37.793                    49.186
  SSIM                     0.675    0.882                               0.909                   0.932                          0.972                          0.960                                            0.961
  MSE ($\times 10^2$)      0.266    0.076                               0.046                   0.029                          0.0046                         0.024                                            0.019

  : T2 reconstruction evaluations on three sampling patterns at an acceleration rate of $R=5$.[]{data-label="tab:scan_metrics"}

Then, we apply our local feature fusion (LFF), consisting of a $1 \times 1$ convolution layer and an SE layer, to fuse the output from the last SDRDB and all convolution layers in the current SDRDB. 
Thus, the LFF output can be expressed as: $$F_{LF,n} = \mathcal{P}_{LFF,n} ( \{F_{n-1}, F_{n}^{1}, F_{n}^{2}, F_{n}^{3} ,F_{n}^{4}\} ) ,$$ where $\mathcal{P}_{LFF,n}$ denotes the LFF operation. Finally, we apply local residual learning to the LFF output by adding the residual connection from the SDRDB input. Thus, the SDRDB output is: $$F_{n} = F_{LF,n} + F_{n-1} .$$ Dual-Domain Recurrent Learning ------------------------------ In this section, we present the details of our proposed DuDoRNet. As illustrated in Figure \[fig:pipline\], each recurrent block of DuDoRNet contains one image restoration DRD-Net (iDRD-Net), one k-space restoration DRD-Net (kDRD-Net), and two interleaved data consistency layers. The T1 deep prior provides data in both the image and k-space domains that are fed into the DuDoRNet recurrent blocks. In the $n$-th recurrent block, denoting the image input as $x_{u_{n}}$, the image restoration optimization target can be written as: $$\label{eq:img_opt_idrd} \underset{\theta_{iDRD}}{\arg\min} ( || x_{f} - \mathcal{P}_{iDRD} (x_{u_{n}},x_{T1};\theta_{iDRD}) ||^2$$ $$+ || k_{u_{n}} - M \odot \mathcal{F} (\mathcal{P}_{iDRD} (x_{u_{n}},x_{T1};\theta_{iDRD})) ||^2 ) , \nonumber$$ where $\mathcal{P}_{iDRD}$ is the image restoration network based on DRD-Net with parameters $\theta_{iDRD}$. The image output from the previous recurrent block, $x_{u_{n}}$, and the T1 image prior $x_{T1}$ are concatenated channel-wise as input to iDRD-Net. $M$ is the binary undersampling mask used for acceleration. The k-space values at the sampled locations $z$ of $M$ can be altered after inference through iDRD-Net, since it only optimizes the first term. To maintain the k-space fidelity at the sampled locations, we add data consistency as the second term. 
Denoting the output of iDRD-Net as $x_{iDRD_{n}}$ and its k-space as $k_{iDRD_{n}} = \mathcal{F}(x_{iDRD_{n}})$, the corresponding output from the data consistency layer can thus be formulated as: $$k_{DC_{n}}(z)= \begin{cases} \frac{\lambda k_{iDRD_{n}}(z) + k_{u_{n}}(z)}{\lambda + 1} & \text{if $M(z) = 1$} \\[6pt] \quad\quad k_{iDRD_{n}}(z) & \text{if $M(z) = 0$} \end{cases}$$ where $\lambda$ controls the level of linear combination between sampled k-space values and predicted values. When $\lambda=0$, the sampled k-space directly substitutes the prediction at $z$ in k-space. Denoting this output as $k_{u_{n}}=k_{DC_{n}}$, $k_{u_{n}}$ and the T1 k-space prior $k_{T1}$ are then concatenated channel-wise and fed into the kDRD-Net for k-space restoration. Similarly, the k-space restoration optimization target can be written as: $$\label{eq:img_opt_kdrd} \underset{\theta_{kDRD}}{\arg\min} ( || k_{f} - \mathcal{P}_{kDRD} (k_{u_{n}},k_{T1};\theta_{kDRD}) ||^2$$ $$+ || k_{u_{n}} - M \odot \mathcal{P}_{kDRD} (k_{u_{n}},k_{T1};\theta_{kDRD}) ||^2 ) , \nonumber$$ where $\mathcal{P}_{kDRD}$ is the k-space restoration network based on DRD-Net with network parameters $\theta_{kDRD}$. Similarly here, the second term ensures the data consistency of the restored k-space. Thus, the loss function for each recurrent block is $\mathcal{L}_{i_n} + \mathcal{L}_{k_n}$. The final loss is the summation of the recurrent block losses: $\sum_{n=1}^{N_{rec}} (\mathcal{L}_{i_n} + \mathcal{L}_{k_n})$, where $N_{rec}$ denotes the number of recurrent blocks. In our experiments, $\lambda$ is set to $0.01$, $N_{rec}$ is set to $5$, and the number of SDRDBs is set to $2$. Each recurrent block in our DuDoRNet shares the same network parameters. The final reconstruction output during testing is obtained by applying the inverse Fourier transform to the last kDRD-Net output after the data consistency operation. 
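A data consistency layer of the kind described above amounts to a masked linear blend in k-space. A minimal NumPy sketch (the names are ours; in the actual network this would operate on tensors inside the computation graph):

```python
import numpy as np

def data_consistency(k_pred, k_sampled, mask, lam=0.01):
    """At sampled locations (mask == 1), linearly blend the predicted and the
    acquired k-space values; elsewhere keep the prediction.
    lam = 0 substitutes the acquired values exactly."""
    blended = (lam * k_pred + k_sampled) / (lam + 1.0)
    return np.where(mask == 1, blended, k_pred)
```

Setting `lam=0` reproduces the hard substitution case discussed in the text, while a small positive `lam` keeps a fraction of the network's prediction at the sampled locations.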
Experiments =========== ![image](Figures/Figure_comparison_Traditional_T2.pdf){width="92.00000%"} ![Comparison of reconstructions using the radial trajectory at an acceleration rate of $R=5$. Three enlarged sub-regions and corresponding difference images are shown on the right. SSIM is indicated on the bottom-left.[]{data-label="fig:comp_image2"}](Figures/Figure_comparison_DP_T2.pdf){width="42.00000%"} Experimental Settings --------------------- **Dataset and Training** We acquired an in-house MRI dataset consisting of 20 patients. We scanned each patient using three different protocols (T1, T2, and FLAIR) with full k-space sampled, resulting in three 3D volumes of $320 \times 230 \times 18$ for each patient in both image and k-space domains. 360 2D images are generated for each protocol. We split the dataset patient-wise into training/validation/test sets with a ratio of $7:1:2$. As a result, our dataset consists of 252 training images, 36 validation images, and 72 test images for each protocol. Three different k-space undersampling patterns are examined in our experiments, including Cartesian, radial, and spiral trajectories. Examples of the sampling patterns are illustrated in Figure \[fig:comp\_image1\] (green box). The acceleration factor ($R$) is set to a value between 1 and 6 for all three patterns, corresponding to the same factor of acceleration in acquisition time. Using these patterns, randomly undersampled T2/FLAIR and fully-sampled T1 images are used as input to our model for training. During training, we also randomly augment the image data by flipping horizontally or vertically, and by rotating at different angles. **Performance Evaluation** The final evaluation is performed on our 72 test images. For quantitative evaluation, we evaluate our image reconstruction results using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mean Square Error (MSE). 
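Of these metrics, MSE and PSNR are simple enough to define inline; a minimal sketch for magnitude images scaled to a known data range (SSIM, which involves local image statistics, is typically taken from a library such as scikit-image rather than hand-written):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(x) - np.asarray(y)) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB; data_range is the maximum possible
    pixel value (1.0 for images normalized to [0, 1])."""
    return 10.0 * np.log10(data_range ** 2 / mse(x, y))
```

For example, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and a PSNR of 20 dB.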
For the comparative study, we compared our results with the following algorithms from previous works: total variation penalized compressed sensing (TV-CS) [@lustig2007sparse], GRAPPA [@griswold2002generalized], and four CNN-based algorithms: a sequential convolutional model [@wang2016accelerating], a cascaded image restoration model called DeepCas [@schlemper2017deep], a structural refinement GAN model called RefGAN [@quan2018compressed], and a DenseUNet model with complementary T1 image called UF-T2 [@xiang2018ultra]. All CNN-based methods are trained using the same settings as our method. Results ------- **Image Quality Evaluation and Comparison** We evaluated our reconstruction results by computing their SSIM, PSNR, and MSE, where the fully sampled images are used as ground truth for these calculations. In Table \[tab:scan\_metrics\], we demonstrate our T2 reconstruction evaluations on three sampling patterns with a high acceleration rate of $R=5$. The first sub-table summarizes the image quality when Cartesian-based acceleration is applied. Our DuDoRNet without T1 prior achieved PSNR up to $27.834$ dB, boosted to $32.511$ dB when the T1 prior is given. Compared to the previous state-of-the-art method with T1 prior, called UF-T2 [@xiang2018ultra], we improved the reconstruction from $30.594$ dB to $32.511$ dB. Similarly for radial-based acceleration, our DuDoRNet without T1 prior achieved PSNR up to $37.27$ dB, further boosted to $40.815$ dB when the T1 prior is provided. Compared to the best result without T1 prior, PSNR $=35.11$ dB from RefGAN [@quan2018compressed], our DuDoRNet achieves significantly better results. In the last sub-table, we found that our DuDoRNet with the spiral pattern yields the best image quality among all sampling patterns and reconstruction methods. Under the spiral pattern, our DuDoRNet without T1 prior achieved PSNR $=48.418$ dB, which further increased to $49.186$ dB when the T1 prior is present. 
The qualitative comparison with non-CNN based methods is shown in Figure \[fig:comp\_image1\]. As we can see, reconstructions with zero padding (ZP) at a high acceleration rate exhibit significant aliasing artifacts and lose anatomical details. Non-CNN based methods improve the reconstruction compared to ZP, but it is hard to see a significant improvement when a significant level of aliasing artifact is present. In comparison, our reconstructions are robust to aliasing artifacts and structural loss in the input. The qualitative comparison with CNN based methods is shown in Figure \[fig:comp\_image2\]. At a high acceleration rate, the CNN based methods achieve better results than non-CNN based methods. Among them, our method, which restores information in both image and k-space, better preserves the important anatomical details, as demonstrated in Figure \[fig:comp\_image2\] with arrows and ellipses. More results on FLAIR reconstruction are summarized in the supplemental materials, with similar performance. **Reconstruction Stability Evaluation** To evaluate the performance stability at different acceleration rates, we recorded the reconstruction performance while varying $R$ from 2 to 6 for all three patterns. The evaluation results are summarized in Figure \[fig:recon\_stab\]. For the Cartesian pattern, our method can consistently maintain the SSIM above 0.95 even when an aggressive acceleration rate of $R=6$ is used. Due to the large aliasing artifacts created by the Cartesian sampling pattern as the acceleration rate increases, the reconstruction performance is more challenging to keep stable without the T1 prior, but we were still able to maintain SSIM above 0.89 and consistently outperform the other methods at all undersampling rates. For the radial pattern, all methods performed approximately the same at low acceleration rates $R<4$. 
However, for more aggressive undersampling rates, $R>4$, our DuDoRNet is able to reduce the structural loss by a considerable margin. At $R=6$, our DuDoRNet with and without T1 prior can consistently maintain the SSIM above 0.98 and 0.97, respectively. Lastly, we found the best performance stability when the spiral pattern is applied. Our DuDoRNet keeps the SSIM above 0.99 over the whole undersampling range regardless of the T1 prior. Under the same acceleration rate, radial and spiral patterns sample the k-space more uniformly than random Cartesian sampling, thus leading to fewer aliasing artifacts in the initial reconstruction input to the models, as demonstrated in Figure \[fig:comp\_image1\]. As a result, radial and spiral patterns generate inputs with fewer aliasing artifacts and lead to more stable reconstructions in our experiments. Ablation Studies ----------------

  SSIM                 Cartesian   Radial   Spiral   Average
  -------------------- ----------- -------- -------- ---------
  (A) Net-baseline     0.839       0.902    0.951    0.897
  (B) Net-Rec          0.860       0.959    0.973    0.931
  (C) Net-DD           0.851       0.929    0.962    0.914
  (D) Net-DIL          0.844       0.909    0.956    0.903
  (E) Net-Rec-DD       0.891       0.968    0.989    0.949
  (F) Net-Rec-DIL      0.869       0.962    0.978    0.936
  (G) Net-DD-DIL       0.859       0.935    0.969    0.921
  (H) Net-Rec-DD-DIL   0.898       0.974    0.991    0.954

  : Quantitative evaluations for the Rec, DD, and DIL components in DuDoRNet.[]{data-label="tab:component_analysis"}

Firstly, we evaluated the effectiveness of different components in our DuDoRNet. Without loss of generality, the reconstruction performance is evaluated at $R=5$ for all three patterns. We evaluate three key components: dual domain learning (DD), dilated residual dense learning (DIL), and recurrent learning (Rec), all without T1 prior. $N_{rec}$ in Rec is set to 5. The component analysis is summarized in Table \[tab:component\_analysis\]. 
As we can observe, recurrent learning (B) and dual domain learning (C) improve the average performance over the baseline (A) by 0.034 and 0.017, respectively, which is more significant than dilated residual dense learning (D). Combining recurrent learning and dual domain learning (E) achieves the largest boost compared to the other two component combinations (F and G). Our DuDoRNet, equipped with all components (H), produces the best reconstruction results. Overall, all three components help DuDoRNet enhance its performance. ![The effect of increasing the number of recurrent blocks ($n$) in our DuDoRNet under three sampling patterns.[]{data-label="fig:as_nrecurrent"}](Figures/AS_NRecurrent.png){width="42.00000%"} Secondly, we evaluated the effect of $N_{rec}$ in our DuDoRNet. As shown in Figure \[fig:as\_nrecurrent\], the reconstruction performance, measured by SSIM, increases monotonically as $N_{rec}$ increases, while the rate of improvement starts to converge after $N_{rec}=3$ for all three patterns. We also found that our DuDoRNet achieves the best performance with the spiral pattern regardless of the value of $N_{rec}$. Conclusion ========== We present a dual domain recurrent network for fast MRI reconstruction with a T1 prior embedded. Specifically, we propose to restore both image and k-space domains recurrently through DRD-Nets with large receptive fields. The T1 prior is embedded at each recurrent block to deeply guide the restorations in both domains. Extensive experimental results demonstrate that while previous fast MRI methods operating on a single domain for an individual protocol have limited capability of directly reducing aliasing artifacts in the image domain, our DuDoRNet can efficiently restore the reconstruction, and the T1 prior can further significantly improve the structural recovery. Future work includes exploring the application of DuDoRNet to other signal recovery tasks, such as noise reduction and image super-resolution.
--- abstract: | Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g. multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in *decoding* or *encoding* settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g. resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain. Keywords: ========= machine learning, statistical learning, neuroimaging, scikit-learn, Python author: - bibliography: - 'biblio.bib' title: 'Machine Learning for Neuroimaging with Scikit-Learn' --- Introduction ============ Interest in applying statistical machine learning to neuroimaging data analysis is growing. Neuroscientists use it as a powerful, albeit complex, tool for statistical inference. The tools are developed by computer scientists who may lack a deep understanding of the neuroscience questions. This paper aims to fill the gap between machine learning and neuroimaging by demonstrating how a general-purpose machine-learning toolbox, scikit-learn, can provide state-of-the-art methods for neuroimaging analysis while keeping the code simple and understandable by both worlds. Here, we focus on software; for a more conceptual introduction to machine learning methods in fMRI analysis, see [@pereira2009] or [@mur2009], while [@hastie2001] provides a good reference on machine learning. 
We discuss the use of the scikit-learn toolkit as it is a reference machine learning tool with a breadth of algorithms matched by few packages, but also because it is implemented in Python and thus dovetails nicely with the rich neuroimaging Python ecosystem. This paper explores a few applications of statistical learning to common neuroimaging needs, detailing the corresponding code, the choice of the methods, and the underlying assumptions. We discuss not only prediction scores, but also the interpretability of the results, which leads us to explore the internal models of various methods. Importantly, the GitHub repository of the paper[^1] provides complete scripts to generate the figures. The scope of this paper is not to present a neuroimaging-specific library, but rather code patterns related to scikit-learn. However, the nilearn library –<http://nilearn.github.io>– is a software package under development that seeks to simplify the use of scikit-learn for neuroimaging. Rather than relying on an immature and black-box library, we prefer here to unravel simple and didactic examples of code that enable readers to build their own analysis strategies. The paper is organized as follows. After introducing the *scikit-learn* toolbox, we show how to prepare the data to apply *scikit-learn* routines. Then we describe the application of *supervised learning* techniques to learn the links between brain images and stimuli. Finally we demonstrate how *unsupervised learning* techniques can extract useful structure from the images. Our tools: scikit-learn and the Python ecosystem ================================================ Basic scientific Python tools for the neuroimager ------------------------------------------------- With its mature scientific stack, Python is a growing contender in the landscape of neuroimaging data analysis with tools such as Nipy [@millman2007analysis] or Nipype [@gorgolewski2011]. 
The scientific Python libraries used in this paper are:

- [**NumPy**]{}: provides the `ndarray` data type to Python, an efficient $n$-dimensional data representation for array-based numerical computation, similar to that used in Matlab [@vanderwalt2011]. It handles efficient array persistence (input and output) and provides basic operations such as the dot product. Most scientific Python libraries, including scikit-learn, use NumPy arrays as input and output data type.

- [**SciPy**]{}: higher-level mathematical functions that operate on ndarrays for a variety of domains including linear algebra, optimization and signal processing. SciPy is linked to compiled libraries to ensure high performance (BLAS, Arpack and MKL for linear algebra and mathematical operations). Together, NumPy and SciPy provide a robust scientific environment for numerical computing, and they are the elementary bricks that we use in all our algorithms.

- [**Matplotlib**]{}: a plotting library tightly integrated into the scientific Python stack [@hunter2007]. It offers publication-quality figures in different formats and is used to generate the figures in this paper.

- [**Nibabel**]{}: to access data in neuroimaging file formats. We use it at the beginning of all our scripts.

Scikit-learn and the machine learning ecosystem
-----------------------------------------------

Scikit-learn [@pedregosa2011] is a general-purpose machine learning library written in Python. It provides efficient implementations of state-of-the-art algorithms, accessible to non-machine-learning experts, and reusable across scientific disciplines and application fields. It also takes advantage of Python's interactivity and modularity to support fast and easy prototyping. There is a variety of other learning packages. For instance, in Python, PyBrain [@schaul2010pybrain] is best at neural networks and reinforcement learning approaches, but its models are fairly black-box, and do not match our need to interpret the results.
Beyond Python, Weka [@hall2009weka] is a rich machine learning framework written in Java; however, it is more oriented toward data mining.

Some higher-level frameworks provide a full pipeline to apply machine learning techniques to neuroimaging. PyMVPA [@hanke2009pymvpa] is a Python package that handles data preparation, loading and analysis, as well as result visualization. It performs multi-variate pattern analysis and can make use of external tools such as R, scikit-learn or Shogun [@sonnenburg2010]. PRoNTo [@schrouff2013pronto] is written in Matlab and can easily interface with SPM but does not propose many machine learning algorithms. Here, rather than full-blown neuroimaging analysis pipelines, we discuss lower-level patterns that break down how neuroimaging data is input to scikit-learn and processed with it. Indeed, the breadth of machine learning techniques in scikit-learn and the variety of possible applications are too wide to be fully exposed in a high-level interface. Note that a package like PyMVPA, which can rely on scikit-learn for neuroimaging data analysis, implements similar patterns behind its high-level interface.

Scikit-learn concepts {#scikitlearn}
---------------------

In [*scikit-learn*]{}, all objects and algorithms accept input data in the form of 2-dimensional arrays of size samples $\times$ features. This convention makes it generic and domain-independent. Scikit-learn objects share a uniform set of methods that depends on their purpose: *estimators* can fit models from data, *predictors* can make predictions on new data and *transformers* convert data from one representation to another.

- [**Estimator**]{}. The *estimator* interface, the core of the library, exposes a `fit` method for learning model parameters from training data. All supervised and unsupervised learning algorithms (e.g., for classification, regression or clustering) are available as objects implementing this interface.
Machine learning tasks such as feature selection or dimensionality reduction are also provided as estimators.

- [**Predictor**]{}. A *predictor* is an estimator with a `predict` method that takes an input array `X_test` and makes predictions for each sample in it. We denote this input parameter “`X_test`” in order to emphasize that `predict` generalizes to new data. In the case of supervised learning estimators, this method typically returns the predicted labels or values computed from the estimated model.

- [**Transformer**]{}. As it is common to modify or filter data before feeding it to a learning algorithm, some estimators, named *transformers*, implement a `transform` method. Preprocessing, feature selection and dimensionality reduction algorithms are all provided as transformers within the library. If the transformation can be inverted, a method called `inverse_transform` also exists.

When testing an estimator or setting hyperparameters, one needs a reliable metric to evaluate its performance. Using the same data for training and testing is not acceptable because it leads to overly optimistic performance estimates, a phenomenon linked to *overfitting*. Cross-validation is a technique that allows one to reliably evaluate an estimator on a given dataset. It consists in iteratively fitting the estimator on a fraction of the data, called the *training set*, and testing it on the left-out unseen data, called the *test set*. Several strategies exist to partition the data. For example, $k$-fold cross-validation consists in dividing (randomly or not) the samples into $k$ subsets: each subset is then used once as a test set while the other $k - 1$ subsets are used to train the estimator. This is one of the simplest and most widely used cross-validation strategies. The parameter $k$ is commonly set to 5 or 10. Another strategy, sometimes called Monte-Carlo cross-validation, uses many random partitions of the data.
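As a minimal, self-contained sketch of $k$-fold cross-validation (on synthetic data rather than the fMRI matrices used later; the module path `sklearn.model_selection` is that of current scikit-learn versions):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

# Synthetic (samples x features) matrix and binary labels
X, y = make_classification(n_samples=100, n_features=20, random_state=0)

# 5-fold cross-validation: each subset is used once as the test set
cv = KFold(n_splits=5)
scores = cross_val_score(SVC(kernel='linear'), X, y, cv=cv)
print(scores.mean())
```

Each entry of `scores` is the accuracy on one left-out fold; averaging them gives the cross-validation score used below for model selection.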
For a given model and some fixed value of the hyperparameters, the scores on the various test sets can be averaged to give a quantitative score to assess how good the model is. Maximizing this cross-validation score offers a principled way to set hyperparameters and allows one to choose between different models. This procedure is known as *model selection*. In [*scikit-learn*]{}, hyperparameter tuning can be conveniently done with the `GridSearchCV` estimator. It takes as input an estimator and a set of candidate hyperparameters. Cross-validation scores are then computed for all hyperparameter combinations, possibly in parallel, in order to find the best one. In this paper, we set the regularization coefficient with grid search in section \[kamitani\].

Data preparation: from MR volumes to a data matrix {#data_preparation}
==================================================

Before applying statistical learning to neuroimaging data, standard preprocessing must be applied. For fMRI, this includes motion correction, slice timing correction, coregistration with an anatomical image and, if necessary, normalization to a common template like the MNI (Montreal Neurological Institute) one. Reference software packages for these tasks are SPM [@friston2007] and FSL [@smith2004]. A Python interface to these tools is available in the Nipype library [@gorgolewski2011]. Below we discuss shaping preprocessed data into a format that can be fed to scikit-learn. For the machine learning settings, we need a data matrix, which we will denote $X$, and optionally a target variable to predict, $y$.

Spatial resampling {#resampling}
------------------

Neuroimaging data often come as Nifti files: 4-dimensional data (3D scans with a time series at each location, or voxel) along with a transformation matrix (called affine) used to map voxel locations from array indices to world coordinates.
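As a small sketch of this index-to-world bookkeeping and of resampling (the volume here is synthetic and built in memory rather than loaded from a Nifti file; the 3 mm voxel size is an arbitrary assumption for illustration):

```python
import numpy as np
from scipy import ndimage

# Hypothetical 3mm-isotropic volume (synthetic, for illustration)
data = np.random.RandomState(0).rand(40, 48, 34)
affine = np.diag([3., 3., 3., 1.])   # as stored in a Nifti header

# The affine maps array indices to world (mm) coordinates
voxel_index = np.array([10, 20, 5, 1.])
world_coords = affine.dot(voxel_index)[:3]   # -> [30., 60., 15.]

# Downsampling by a factor of 2 with scipy's affine_transform: the
# matrix passed maps output indices back to input indices
resampled = ndimage.affine_transform(data, np.diag([2., 2., 2.]),
                                     output_shape=(20, 24, 17))
```

Note that after downsampling, the affine of the resampled volume would have to be updated accordingly (6 mm voxels here).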
When working with several subjects, each individual dataset is registered to a common template (MNI, Talairach...), hence to a common affine, during preprocessing. The affine matrix can express data anisotropy, when the distance between two voxels is not the same depending on the direction. This information is used by algorithms relying on the spatial structure of the data, for instance the Searchlight. The SciPy routine `scipy.ndimage.affine_transform` can be used to perform image resampling: changing the spatial resolution of the data[^2]. This is an interpolation and alters the data, which is why it should be used carefully. Downsampling is commonly used to reduce the size of the data to process. Typical sizes are 2mm or 3mm resolution, but scan spatial resolution is increasing with progress in MR physics. The affine matrix can encode the scaling factors for each direction.

Signal cleaning
---------------

Due to its complex and indirect acquisition process, neuroimaging data often have a low signal-to-noise ratio. They contain trends and artifacts that must be removed to ensure that machine learning algorithms perform at their best. Signal cleaning includes:

- [**Detrending**]{} removes a linear trend over the time series of each voxel. This is a useful step when studying fMRI data, as the voxel intensity itself has no meaning and we want to study its variation and correlation with other voxels. Detrending can be done with SciPy (`scipy.signal.detrend`).

- [**Normalization**]{} consists in setting the time series variance to 1. This harmonization is necessary as some machine learning algorithms are sensitive to different value ranges.

- [**Frequency filtering**]{} consists in removing high- or low-frequency signals. Low-frequency signals in fMRI data are caused by physiological mechanisms or scanner drifts. Filtering can be done with a Fourier transform (`scipy.fftpack.fft`) or a Butterworth filter (`scipy.signal.butter`).
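The three cleaning steps above can be sketched as follows on a synthetic (time points $\times$ voxels) matrix; the cutoff frequency and filter order are arbitrary illustration values:

```python
import numpy as np
from scipy import signal

# Synthetic time series: 200 time points, 10 voxels, plus a linear trend
rng = np.random.RandomState(0)
ts = rng.randn(200, 10) + np.linspace(0, 5, 200)[:, np.newaxis]

# Detrending: remove the linear trend of each voxel's time series
cleaned = signal.detrend(ts, axis=0)

# Normalization: zero mean, unit variance per voxel
cleaned -= cleaned.mean(axis=0)
cleaned /= cleaned.std(axis=0)

# Frequency filtering: 5th-order Butterworth low-pass filter,
# cutoff given as a fraction of the Nyquist frequency
b, a = signal.butter(5, 0.3, btype='low')
filtered = signal.filtfilt(b, a, cleaned, axis=0)
```

`filtfilt` applies the filter forward and backward, which avoids introducing a phase shift in the time series.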
From 4-dimensional images to 2-dimensional array: masking {#sec:unmasking}
---------------------------------------------------------

Neuroimaging data are represented in 4 dimensions: 3 spatial dimensions, and one dimension to index time or trials. Scikit-learn algorithms, on the other hand, only accept 2-dimensional samples $\times$ features matrices (see Section \[scikitlearn\]). Depending on the setting, voxels and time series can be considered as features or samples. For example, in spatial independent component analysis (ICA), voxels are samples. The reduction process from 4D images to feature vectors comes with a loss of spatial structure. It does, however, make it possible to discard uninformative voxels, such as the ones outside of the brain. Such voxels, which only carry noise and scanner artifacts, would reduce SNR and affect the quality of the estimation. The selected voxels form a *brain mask*. Such a mask is often given along with the datasets or can be computed with software tools such as FSL or SPM.

![Conversion of brain scans into 2-dimensional data[]{data-label="fig:niimg"}](niimgs.jpg){width=".5\linewidth"}

Applying the mask is made easy by NumPy advanced indexing using boolean arrays. Two-dimensional masked data will be referred to as `X` to follow scikit-learn conventions:

    mask = nibabel.load('mask.nii').get_data()
    func_data = nibabel.load('epi.nii').get_data()
    # Ensure that the mask is boolean
    mask = mask.astype(bool)
    # Apply the mask, X = timeseries * voxels
    X = func_data[mask].T
    # Unmask data
    unmasked_data = numpy.zeros(mask.shape, dtype=X.dtype)
    unmasked_data[mask] = X

Data visualisation
------------------

Across all our examples, voxels of interest are represented on an axial slice of the brain. Some transformations of the original matrix data are required to match the matplotlib data format. The following snippet of code shows how to load and display an axial slice overlaid with an activation map.
The background is an anatomical scan and its highest voxels are used as synthetic activations.

    # Load image
    bg_img = nibabel.load('bg.nii.gz')
    bg = bg_img.get_data()
    # Keep values over 6000 as artificial activation map
    act = bg.copy()
    act[act < 6000] = 0.
    # Display the background
    plt.imshow(bg[..., 10].T, origin='lower', interpolation='nearest',
               cmap='gray')
    # Mask background values of activation map
    masked_act = np.ma.masked_equal(act, 0.)
    plt.imshow(masked_act[..., 10].T, origin='lower',
               interpolation='nearest', cmap='hot')
    # Cosmetics: disable axis
    plt.axis('off')
    plt.show()

Note that a background is needed to display partial maps. Overlaying two images can be done with the `numpy.ma.masked_array` data structure. Several options exist to enhance the overall aspect of the plot. Some of them can be found in the full scripts provided with this paper. It generally boils down to a good knowledge of Matplotlib. Note that the Nipy package provides a `plot_map` function that is tuned to display activation maps (a background is even provided if needed).

Decoding the mental representation of objects in the brain
==========================================================

In the context of neuroimaging, *decoding* refers to learning a model that predicts behavioral or phenotypic variables from brain imaging data. The alternative that consists in predicting the imaging data given external variables, such as stimuli descriptors, is called *encoding* [@naselaris2011]. It is further discussed in the next section. First, we illustrate decoding with a simplified version of the experiment presented in [@haxby2001]. In the original work, visual stimuli from 8 different categories are presented to 6 subjects during 12 sessions. The goal is to predict the category of the stimulus presented to the subject given the recorded fMRI volumes.
This example has already been widely analyzed [@hanson2004combinatorial; @detre2006multi; @otoole2007; @hanson2008brain; @hanke2009pymvpa] and has become a reference example for decoding. For the sake of simplicity, we restrict the example to one subject and to two categories, faces and houses. As there is a *target* variable $y$ to predict, this is a supervised learning problem. Here $y$ represents the two object categories, a.k.a. *classes* in machine-learning terms. In such settings, where $y$ takes discrete values, the learning problem is known as *classification*, as opposed to *regression*, where the variable $y$ can take continuous values, such as age.

Classification with feature selection and linear SVM
----------------------------------------------------

Many classification methods are available in scikit-learn. In this example we chose to combine the use of univariate feature selection and Support Vector Machines (SVM). Such a classification strategy is simple yet efficient when used on neuroimaging data. After applying a brain mask, the data consist of 40000 voxels, here the features, for only 1400 volumes, here the samples. Machine learning with many more features than samples is challenging, due to the so-called *curse of dimensionality*. Several strategies exist to reduce the number of features. A first one is based on prior neuroscientific knowledge. Here one could restrict the mask to occipital areas, where the visual cortex is located. Feature selection is a second, data-driven, approach that relies on a univariate statistical test for each individual feature. Variables with high individual discriminative power are kept. Scikit-learn offers a panel of strategies to select features. In supervised learning, the most popular feature selection method is the F-test. The null hypothesis of this test is that the feature takes the same value independently of the value of $y$ to predict.
In scikit-learn, `sklearn.feature_selection` proposes a panel of feature selection strategies. One can choose to keep a percentile of the features (`SelectPercentile`), or a fixed number of features (`SelectKBest`). All these objects are implemented as transformers (see section \[scikitlearn\]). The code below uses the `f_classif` function (ANOVA F-test) along with the selection of a fixed number of features. On the reduced feature set, we use a linear SVM classifier, `sklearn.svm.SVC`, to find the hyperplane that maximally separates the samples belonging to the different classes. Classifying a new sample boils down to determining on which side of the hyperplane it lies. With a linear kernel, the separating hyperplane is defined in the input data space and its coefficients can be related to the voxels. Such coefficients can therefore be visualized as an image (after the unmasking step described in \[sec:unmasking\]) where voxels with high values have more influence on the prediction than the others (see figure \[fig:haxby\]).

    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    feature_selection = SelectKBest(f_classif, k=500)
    clf = SVC(kernel='linear')
    X_reduced = feature_selection.fit_transform(X, y)
    clf.fit(X_reduced, y)
    ### Look at the discriminating weights
    coef = clf.coef_
    ### Reverse feature selection
    coef = feature_selection.inverse_transform(coef)

Searchlight
-----------

Searchlight [@kriegeskorte2006] is a popular algorithm in the neuroimaging community. It runs a predictive model on a spatial neighborhood of each voxel and uses the out-of-sample prediction performance as a proxy measure of the link between the local brain activity and the target behavioral variable. In practice, it entails performing cross-validation of the model, most often an SVM, on voxels contained in balls centered on each voxel of interest. The procedure requires solving a large number of SVMs and is computationally expensive. Detailing an efficient implementation of this algorithm is beyond the scope of this paper.
However, code for the searchlight and to generate figure \[fig:haxby\] is available in the GitHub repository accompanying the paper.

Results
-------

![Maps derived by different methods for face versus house recognition in the Haxby experiment – *left*: standard analysis; *center*: SVM weights after screening voxels with an ANOVA; *right*: Searchlight map. The masks derived from standard analysis in the original paper [@haxby2001] are displayed in blue and green.[]{data-label="fig:haxby"}](haxby){width="\linewidth"}

Results are shown in figure \[fig:haxby\]: first the F-score, which is the standard analysis in brain mapping but also the statistic used to select features; second the SVC weights after feature selection; and last the Searchlight map. Note that the voxels with larger weights roughly match for all methods and are located in the house-responsive areas as defined by the original paper. The Searchlight map is more spread out and blurry than the other methods as it iterates over a ball around the voxels. These results match neuroscientific knowledge, as they highlight the high-level regions of the ventral visual cortex, which is known to contain category-specific visual areas. While the Searchlight only gives a score to each voxel, the SVC can be used afterward to classify unseen brain scans. Most of the final example script (`haxby_decoding.py` on GitHub) is for data loading and result visualization. Only 5 lines are needed to run a scikit-learn classifier. In addition, thanks to scikit-learn's modularity, the SVC can easily be replaced by any other classifier in this example. As all linear models share the same interface, replacing the SVC by another linear model, such as ElasticNet or LogisticRegression, requires changing only one line. Gaussian Naive Bayes is a non-linear classifier that should perform well in this case; the display code then needs `coef_` replaced by `theta_`.
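This modularity can be sketched on synthetic data (standing in for the `X_reduced` and `y` of the example above; the particular set of classifiers is an arbitrary choice for illustration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the reduced Haxby data
X_reduced, y = make_classification(n_samples=100, n_features=50,
                                   random_state=0)

weight_maps = {}
for clf in (SVC(kernel='linear'), LogisticRegression(), GaussianNB()):
    clf.fit(X_reduced, y)
    # Linear models expose `coef_`; Gaussian Naive Bayes exposes `theta_`
    w = clf.theta_ if isinstance(clf, GaussianNB) else clf.coef_
    weight_maps[type(clf).__name__] = w
```

All three classifiers are fit with the same two lines; only the attribute holding the per-feature weights differs.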
Encoding brain activity and decoding images {#kamitani} =========================================== In the previous experiment, the category of a visual stimulus was inferred from brain activity measured in the visual cortex. One can go further by inferring a direct link between the image seen by the subject and the associated fMRI data. In the experiment of [@miyawaki2008] several series of $10{\times}10$ binary images are presented to two subjects while activity on the visual cortex is recorded. In the original paper, the training set is composed of random images (where black and white pixels are balanced) while the testing set is composed of structured images containing geometric shapes (square, cross...) and letters. Here, for the sake of simplicity, we consider only the training set and use cross-validation to obtain scores on unseen data. In the following example, we study the relation between stimuli pixels and brain voxels in both directions: the reconstruction of the visual stimuli from fMRI, which is a decoding task, and the prediction of fMRI data from descriptors of the visual stimuli, which is an encoding task. Decoding -------- In this setting, we want to infer the binary visual stimulus presented to the subject from the recorded fMRI data. As the stimuli are binary, we will treat this problem as a classification problem. This implies that the method presented here cannot be extended as-is to natural stimuli described with gray values. In the original work, [@miyawaki2008] uses a Bayesian logistic regression promoting sparsity along with a sophisticated multi-scale strategy. As one can indeed expect the number of predictive voxels to be limited, we compare the $\ell_2$ SVM used above with a logistic regression and a SVM penalized with the $\ell_1$ norm known to promote sparsity. The $\ell_1$ penalized SVM classifier compared here uses a square-hinge loss while the logistic regression uses a logit function. 
| $C$ value | 0.0005 | 0.001 | 0.005 | 0.01 | 0.05 | 0.1 |
|---|---|---|---|---|---|---|
| $\ell_1$ Logistic Regression | 0.50 $\pm$ .02 | 0.50 $\pm$ .02 | 0.57 $\pm$ .13 | 0.63 $\pm$ .11 | **0.70** $\pm$ .12 | 0.70 $\pm$ .12 |
| $\ell_2$ Logistic Regression | 0.60 $\pm$ .11 | 0.61 $\pm$ .12 | 0.63 $\pm$ .13 | 0.63 $\pm$ .13 | **0.64** $\pm$ .13 | 0.64 $\pm$ .13 |
| $\ell_1$ SVM classifier (SVC) | 0.50 $\pm$ .06 | 0.55 $\pm$ .12 | 0.69 $\pm$ .11 | **0.71** $\pm$ .12 | 0.69 $\pm$ .12 | 0.68 $\pm$ .12 |
| $\ell_2$ SVM classifier (SVC) | 0.67 $\pm$ .12 | **0.67** $\pm$ .12 | 0.67 $\pm$ .12 | 0.66 $\pm$ .12 | 0.65 $\pm$ .12 | 0.65 $\pm$ .12 |

Table \[fig:miyawaki\_cv\] reports the performance of the different classifiers for various values of $C$ using 5-fold cross-validation. We first observe that setting the parameter $C$ is crucial, as performance drops for inappropriate values of $C$. This is particularly true for $\ell_1$-regularized models. Both $\ell_1$ logistic regression and $\ell_1$ SVM yield similar performance, which is not surprising as they implement similar models.

    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import Pipeline
    from sklearn.linear_model import LogisticRegression as LR
    from sklearn.cross_validation import cross_val_score

    pipeline_LR = Pipeline([('selection', SelectKBest(f_classif, k=500)),
                            ('clf', LR(penalty='l1', C=0.05))])
    scores_lr = []
    # y_train = n_samples x n_pixels
    # To iterate on pixels, we transpose it.
    for pixel in y_train.T:
        score = cross_val_score(pipeline_LR, X_train, pixel, cv=5)
        scores_lr.append(score)

Encoding
--------

Given an appropriate model of the stimulus, e.g. one which can provide an approximately linear representation of BOLD activation, an encoding approach allows one to quantify, for each voxel, to what extent its variability is captured by the model. A popular evaluation method is the predictive $r^2$ score, which uses a prediction on left-out data to quantify the decrease in residual norm brought about by fitting a regression function as opposed to fitting a constant.
The remaining variance consists of potentially unmodelled, but reproducible, signal and spurious noise. On the Miyawaki dataset, we can observe that mere black and white pixel values can explain a large part of the BOLD variance in many visual voxels. Sticking to the notation that $X$ represents the BOLD signal and $y$ the stimulus, we can write an encoding model using the ridge regression estimator:

    from sklearn.linear_model import Ridge
    from sklearn.cross_validation import KFold

    cv = KFold(len(y_train), 10)
    # Fit ridge model, calculate predictions on left-out data
    # and evaluate the r^2 score for each voxel
    scores = []
    for train, test in cv:
        pred = (Ridge(alpha=100.).fit(y_train[train], X_train[train])
                                 .predict(y_train[test]))
        X_true = X_train[test]
        scores.append(
            1. - ((X_true - pred) ** 2).sum(axis=0) /
                 ((X_true - X_true.mean(axis=0)) ** 2).sum(axis=0))
    mean_scores = np.mean(scores, axis=0)

Note here that the Ridge can be replaced by a Lasso estimator, which can give better prediction performance at the cost of computation time.

### Receptive fields

Given the retinotopic structure of early visual areas, it is expected that the voxels well predicted by the presence of a black or white pixel are strongly localized in so-called population receptive fields (*prf*). This suggests that only very few stimulus pixels should suffice to explain the activity in each brain voxel of the posterior visual cortex. This information can be exploited by using a sparse linear regression –the Lasso [@tibshirani:96]– to find the receptive fields. Here we use the *LassoLarsCV* estimator, which relies on the LARS algorithm [@Efron04leastangle] and cross-validation to set the Lasso parameter.
    from sklearn.linear_model import LassoLarsCV

    # choose the number of voxels to treat; set to None for all voxels
    n_voxels = 50
    # choose the best voxels
    indices = mean_scores.argsort()[::-1][:n_voxels]
    lasso = LassoLarsCV(max_iter=10)
    receptive_fields = []
    for index in indices:
        lasso.fit(y_train, X_train[:, index])
        receptive_fields.append(lasso.coef_.reshape(10, 10))

Results {#sec:miyawaki_results}
-------

Figure \[fig:miyawaki\] gives encoding and decoding results: the relationship between a given image pixel and four voxels of interest in the brain. In the decoding setting, figures \[fig:miyawaki\]*a* and \[fig:miyawaki\]*c* show the classifier’s weights as brain maps for both methods. They both give roughly the same results and we can see that the weights are centered in V1 and nearby retinotopic areas. Figures \[fig:miyawaki\]*b* and \[fig:miyawaki\]*d* show the reconstruction accuracy score using Logistic Regression (LR) and SVM (variable `mean_scores` in the code above). Both methods give almost identical results. As in the original work [@miyawaki2008], reconstruction is more accurate in the fovea. This is explained by the higher density of neurons dedicated to foveal representation in the primary visual area. In the encoding setting, figure \[fig:miyawaki\]*e* shows the classifier weights, which we interpret as receptive fields. We can see that the receptive fields of neighboring voxels are neighboring pixels, which is expected from retinotopy: the primary visual cortex maps the visual field in a topologically organized manner. Both encoding and decoding analyses show a link between the selected pixel and brain voxels. In the absence of ground truth, seeing that different methods come to the same conclusion provides face validity.

![ Miyawaki results in both decoding and encoding. The relation between one pixel and four brain voxels is highlighted for both methods. **Top: Decoding.** Classifier weights for the pixel highlighted (*a.* Logistic regression, *c.* SVM).
Reconstruction accuracy per pixel (*b.* Logistic regression, *d.* SVM). **Bottom: Encoding.** *e*: receptive fields corresponding to the voxels with the highest scores and their neighbors. *f*: reconstruction accuracy depending on pixel position in the stimulus. — Note that the pixels and voxels highlighted are the same in both the decoding and encoding figures, and that encoding and decoding roughly match, as both approaches highlight a relationship between the same pixel and voxels. []{data-label="fig:miyawaki"}](miyawaki){width="\linewidth"}

Resting-state and functional connectivity analysis
==================================================

Even in the absence of external behavioral or clinical variables, studying the structure of brain signals can reveal interesting information. Indeed, [@biswal1995] have shown that brain activation exhibits coherent spatial patterns during rest. These correlated voxel activations form functional networks that are consistent with known task-related networks [@smith2009]. Biomarkers found via predictive modeling on resting-state fMRI would be particularly useful, as they could be applied to impaired subjects who cannot execute a specific task. Here we use a dataset containing resting-state data from controls and ADHD (Attention Deficit Hyperactivity Disorder) patients (subjects are scanned without being given any specific task, to capture the cerebral background activity). Resting-state fMRI is unlabeled data in the sense that the brain activity at a given instant in time cannot be related to an output variable. In machine learning, this class of problems is known as *unsupervised learning*. To extract functional networks or regions, we use methods that group together similar voxels by comparing their time series. In neuroimaging, the most popular method is ICA, which is the subject of our first example. We then show how to obtain functionally homogeneous regions with clustering methods.
Independent Component Analysis (ICA) to extract networks
--------------------------------------------------------

ICA is a blind source separation method. Its principle is to separate a multivariate signal into several components by maximizing their non-Gaussianity. A typical example is the *cocktail party problem*, where ICA is able to separate the voices of several people using the signals from microphones located across the room.

### ICA in neuroimaging

ICA is the reference method to extract networks from resting-state fMRI [@kiviniemi2003]. Several strategies have been used to aggregate ICA results across several subjects. [@calhoun2001a] propose a dimension reduction (using PCA) followed by a concatenation of the time series (used in this example). [@varoquaux2010] use dimension reduction and canonical correlation analysis to aggregate subject data. Melodic [@beckmann2004], the ICA tool in the FSL suite, uses a concatenation approach not detailed here.

### Application

As data preparation steps, we not only center, but also detrend the time series to avoid capturing linear trends with the ICA. Applying the FastICA algorithm [@Hyvarinen:2000vk] to the resulting time series is straightforward with scikit-learn thanks to the transformer concept. The data matrix must be transposed, as we are using *spatial* ICA: in other words, the direction considered as random is that of the voxels and not the time points. The maps obtained capture different components of the signal, including noise components as well as resting-state functional networks. To produce the figures, we extract only 10 components, as we are interested here in exploring only the main signal structures.
    # Here we start with Xs: a list of subject-level data matrices
    # First we concatenate them in the time direction, thus implementing
    # a concat-ICA
    X = np.vstack(Xs)
    from sklearn.decomposition import FastICA
    ica = FastICA(n_components=10)
    components_masked = ica.fit_transform(X.T).T

### Results {#results-1}

On fig. \[fig:ica\] we compare the simple concat-ICA implemented by the code above to more sophisticated multi-subject methods, namely Melodic’s concat-ICA and CanICA – also implemented using scikit-learn, although we do not discuss the code here. We display here only the default mode network as it is a well-known resting-state network. It is hard to draw conclusions from a single map but, at first sight, it seems that both the CanICA and Melodic approaches are less subject to noise and give similar results.

![Default mode network extracted using different approaches: *left*: the simple Concat-ICA approach detailed in this article; *middle*: CanICA, as implemented in nilearn; *right*: Melodic’s concat-ICA. Data have been normalized (set to unit variance) for display purposes.[]{data-label="fig:ica"}](ica){width=".9\linewidth"}

Scikit-learn proposes several other matrix decomposition strategies, listed in the module `sklearn.decomposition`. A good alternative to ICA is dictionary learning, which applies an $\ell_1$ regularization to the extracted components [@varoquaux2011]. This leads to sparser and more compact components than those of ICA, which are full-brain and require thresholding.

Learning functionally homogeneous regions with clustering {#clustering}
---------------------------------------------------------

From a machine learning perspective, a clustering method aggregates samples into groups (called clusters), maximizing a measure of similarity between samples within each cluster.
If we consider the voxels of a functional brain image as samples, this measure can be based on functional similarity, leading to clusters of voxels that form functionally homogeneous regions [@thirion2006].

### Approaches

Several clustering approaches exist, each one having its own pros and cons. Most require setting the number of clusters extracted. This choice depends on the application: a large number of clusters will give a more fine-grained description of the data, with a higher fidelity to the original signal, but also a higher model complexity. Some clustering approaches can make use of spatial information and yield spatially contiguous clusters, *i.e.* parcels. Here we will describe two clustering approaches that are simple and fast.

#### Ward clustering

This method uses a bottom-up hierarchical approach: voxels are progressively agglomerated together into clusters. In scikit-learn, structural information can be specified via a connectivity graph given to the Ward clustering estimator. This graph is used to allow only merges between neighboring voxels, thus readily producing contiguous parcels. We will rely on the `sklearn.feature_extraction.image.grid_to_graph` function to construct such a graph from the neighbor structure of an image grid, with optionally a brain mask.

#### K-Means

This is a more top-down approach, seeking cluster centers that evenly explain the variance of the data. Each voxel is then assigned to the nearest center, thus forming clusters. As imposing a spatial model in K-Means is not easy, it is often advisable to spatially smooth the data.

To apply the clustering algorithms, we run the common data preparation steps and produce a data matrix. As both Ward clustering and K-Means rely on second-order statistics, we can speed up the algorithms by reducing the dimensionality while preserving these second-order statistics with a PCA. Note that clustering algorithms group samples, and that here we want to group voxels.
So if the data matrix is, as previously, a (time points $\times$ voxels) matrix, we need to transpose it before running the scikit-learn clustering estimators. Scikit-learn provides a `WardAgglomeration` object to do this *feature agglomeration* with Ward clustering [@michel2012supervisedclustering], but this is not the case when using K-Means.

    from sklearn.cluster import WardAgglomeration
    from sklearn.feature_extraction.image import grid_to_graph

    connectivity = grid_to_graph(n_x=mask.shape[0], n_y=mask.shape[1],
                                 n_z=mask.shape[2], mask=mask)
    ward = WardAgglomeration(n_clusters=1000, connectivity=connectivity)
    ward.fit(X)

    # The maps of cluster assignment can be retrieved and unmasked
    cluster_labels = np.zeros(mask.shape, dtype=int)
    cluster_labels[mask] = ward.labels_

### Results {#results-2}

Clustering results are shown in figure \[fig:clustering\]. While clustering extracts some known large-scale structure, such as the calcarine sulcus in fig. \[fig:clustering\].a, it is not guaranteed to delineate functionally specific brain regions. Rather, it can be considered as a compression, that is, a useful way of summarizing information, as it groups together similar voxels. Note that, as K-Means does not extract spatially contiguous clusters, it can give a number of regions much larger than the number of clusters specified, although some of these regions may be very small. By contrast, spatially constrained Ward clustering directly creates contiguous regions. As it is a bottom-up process, it tends to perform best with a large number of clusters. There exist many more clustering techniques exposed in scikit-learn. Determining which is the best one to process fMRI time series requires a more precise definition of the target application. Ward clustering and K-Means are among the simplest approaches proposed in scikit-learn. [@craddock2011] applied spectral clustering to neuroimaging data; a similar application is available in nilearn as an example.

![Brain parcellations extracted by clustering.
Colors are random.[]{data-label="fig:clustering"}](clustering){width="\linewidth"}

Conclusion
==========

In this paper we have illustrated with simple examples how machine learning techniques can be applied to fMRI data using the scikit-learn Python toolkit in order to tackle neuroscientific problems. Encoding and decoding can rely on supervised learning to link brain images with stimuli. Unsupervised learning can extract structure such as functional networks or brain regions from resting-state data. The accompanying Python code for the machine learning tasks is straightforward. Difficulties lie in applying proper preprocessing to the data, choosing the right model for the problem, and interpreting the results. Tackling these difficulties while providing the scientists with simple and readable code requires building a domain-specific library, dedicated to applying scikit-learn to neuroimaging data. This effort is underway in a nascent project, nilearn, that aims to facilitate the use of scikit-learn on neuroimaging data.

The examples covered in this paper only scratch the surface of applications of statistical learning to neuroimaging. The tool stack presented here is particularly well suited to such applications, as it opens the door to any combination of the wide range of machine learning methods present in scikit-learn with neuroimaging-related code. For instance, sparse inverse covariance can extract the functional interaction structure from fMRI time series [@varoquaux2013] using the graph-lasso estimator. Modern neuroimaging data analysis entails fitting rich models on limited data quantities. These are high-dimensional statistics problems which call for statistical-learning techniques. We hope that bridging a general-purpose machine learning tool, scikit-learn, to domain-specific data preparation code will foster new scientific advances.
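As a minimal sketch of this last possibility, the graph-lasso estimator (named `GraphicalLasso` in current scikit-learn releases) can be applied to synthetic stand-ins for region-averaged time series; the data and variable names below are illustrative, not from the accompanying code:

```python
# Minimal sketch of sparse inverse covariance estimation with the
# graph-lasso. The signals are synthetic stand-ins for region-averaged
# fMRI time series.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.RandomState(0)
n_timepoints, n_regions = 200, 10

# Independent noise plus one shared component, inducing a correlation
# between the first two "regions"
signals = rng.randn(n_timepoints, n_regions)
shared = rng.randn(n_timepoints)
signals[:, 0] += shared
signals[:, 1] += shared

estimator = GraphicalLasso(alpha=0.2)
estimator.fit(signals)

# Zero entries of the estimated precision (inverse covariance) matrix
# correspond to conditionally independent regions
precision = estimator.precision_
```

The $\ell_1$ penalty `alpha` controls the sparsity of the recovered interaction structure.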
Disclosure/Conflict-of-Interest Statement {#disclosureconflict-of-interest-statement .unnumbered}
=========================================

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

#### Funding

We acknowledge funding from the NiConnect project and NIDA R21 DA034954, as well as the SUBSample project from the DIGITEO Institute, France.

[^1]: <http://www.github.com/AlexandreAbraham/frontiers2013>

[^2]: An easy-to-use implementation is proposed in nilearn
---
abstract: 'We consider divergence form uniformly parabolic SPDEs with bounded and measurable leading coefficients and possibly growing lower-order coefficients in the deterministic part of the equations. We look for solutions which are summable to the second power with respect to the usual Lebesgue measure along with their first derivatives with respect to the spatial variable.'
address: '127 Vincent Hall, University of Minnesota, Minneapolis, MN, 55455, USA'
author:
- 'N.V. Krylov'
title: 'On divergence form SPDEs with growing coefficients in $W^{1}_{2}$ spaces without weights'
---

[^1]

Introduction
============

We consider divergence form uniformly parabolic SPDEs with bounded and measurable leading coefficients and possibly growing lower-order coefficients in the deterministic part of the equation. We look for solutions which are summable to the second power with respect to the usual Lebesgue measure along with their first derivatives with respect to the spatial variable.

To the best of our knowledge our results are new even for deterministic PDEs when one deletes all stochastic terms in the results below. If there are no stochastic terms and the coefficients are nonrandom and time independent, our results allow one to obtain the corresponding results for elliptic divergence-form equations, which also seem to be new. A sample result in this case is the following. Consider the equation $$D_{i}\big( a^{ij}( x)D_{j}u (x) +{\mathfrak{b}}^{i} (x)u (x)\big) + b^{i} (x)D_{i} u (x)$$ $$\label{6.9.1} -(c (x)+\lambda) u (x)= D_{i}f^{i}(x)+f^{0}(x)$$ in ${\mathbb{R}}^{d}$, which is the Euclidean space of points $x=(x^{1},...,x^{d})$. Here and below the summation convention is enforced and $$D_{i}=\frac{\partial}{\partial x^{i}}.$$ Assume that (\[6.9.1\]) is uniformly elliptic, $a^{ij}$ are bounded, and $c\geq0$.
Also assume that $f^{j}\in {\mathcal{L}}_{2}={\mathcal{L}}_{2}({\mathbb{R}}^{d})$, $j=0,...,d$, and $$\sup_{|x-y|\leq1}(|b(x)-b(y)|+|{\mathfrak{b}}(x)-{\mathfrak{b}}(y)|+|c(x)-c(y)|)<\infty$$ and that the constant $\lambda>0$ is large enough. Then equation (\[6.9.1\]) has a unique solution in the class of functions $u\in W^{1}_{2}=W^{1}_{2}({\mathbb{R}}^{d})$. Notice that the above condition on ${\mathfrak{b}},b$, and $c$ allows them to grow linearly as $|x|\to\infty$.

As in [@CV], one of the main motivations for studying SPDEs with growing first-order coefficients is filtering theory for partially observable diffusion processes. It is generally believed that introducing weights is the most natural setting for equations with growing coefficients. When the coefficients grow, it is quite natural to consider the equations in function spaces with weights that would restrict the set of solutions in such a way that all terms in the equation will be from the same space as the free terms. The present paper seems to be the first one treating the unique solvability of these equations with growing lower-order coefficients in the usual Sobolev spaces $W^{1}_{2}$ without weights and without imposing any [*special*]{} conditions on the relations between the coefficients or on their [*derivatives*]{}.

The theory of SPDEs in Sobolev-Hilbert spaces [*with*]{} weights attracted some attention in the past. We do not use weights and only mention a few papers about [*stochastic*]{} PDEs in ${\mathcal{L}}_{p}$-spaces with weights in which one can find further references: [@AM] (mild solutions, general $p$), [@CV], [@Gy93], [@Gy97], [@GK] ($p=2$ in the four last articles). Many more papers are devoted to the theory of [*deterministic*]{} PDEs with growing coefficients in Sobolev spaces with weights. We cite only a few of them, sending the reader to the references therein, again because we neither deal with weights nor use the results of these papers.
It is also worth saying that our results do not generalize the results of the above cited papers. In most of these papers the coefficients are time independent, see [@CV1], [@ChG], [@FL], [@Lu], [@MP1], part of the results of which are extended in [@GL] to time-dependent Ornstein-Uhlenbeck operators.

It is worth noting that many issues for [*deterministic*]{} divergence-type equations with time independent growing coefficients in ${\mathcal{L}}_{p}$ spaces with arbitrary $p \in(1,\infty)$ [*without*]{} weights were also treated previously in the literature. This was done mostly by using the semigroup approach, which excludes time-dependent coefficients and makes it almost impossible to use the results in the more or less general filtering theory. We briefly mention only a few recent papers, sending the reader to them for additional information.

In [@LV] a strongly continuous in ${\mathcal{L}}_{p}$ semigroup is constructed corresponding to elliptic operators with measurable leading coefficients and Lipschitz continuous drift coefficients. In [@MP] it is assumed that if, for $| x|\to\infty$, the drift coefficients grow, then the zeroth-order coefficient should grow, basically, as the square of the drift. There is also a condition on the divergence of the drift coefficient. In [@PR] there is no zeroth-order term and the semigroup is constructed under some assumptions, one of which translates into the monotonicity of $\pm b(x)-Kx$, for a constant $K$, if the leading term is the Laplacian. In [@CF] the drift coefficient is assumed to be globally Lipschitz continuous if the zeroth-order coefficient is constant. Some conclusions in the above cited papers are quite similar to ours, but the corresponding assumptions are not as general in what concerns the regularity of the coefficients.
However, these papers contain a lot of additional important information not touched upon in the present paper (in particular, it is shown in [@LV] that the corresponding semigroup is not analytic).

The technique we apply originated in [@KP] and [@Kr09_1] and uses special cut-off functions whose support evolves in time in a manner adapted to the drift. We do not make any regularity assumptions on the coefficients and are restricted to only treat equations in $W^{1}_{2}$. Similar techniques could be used to consider equations in the spaces $W^{1}_{p}$ with any $p\geq2$. This time one can use the results of [@Ki1] and [@Kr09_2], where some regularity of the coefficients in the $x$ variable is needed, like, say, the condition that the second order coefficients be in VMO uniformly with respect to the time variable (see [@Kr09_2]). However, for the sake of brevity and clarity we concentrate only on $p=2$.

The main emphasis here is that we allow the first-order coefficients to grow as $|x|\to\infty$ and still measure the size of the derivatives with respect to Lebesgue measure, thus avoiding using weights. It is worth noting that considering divergence form equations in ${\mathcal{L}}_{p}$-spaces is quite useful in the treatment of filtering problems (see, for instance, [@Kr_10]), especially when the power of summability is taken large, and we intend to treat this issue in a subsequent paper.

The article is organized as follows. In Section \[section 6.9.1\] we describe the problem. Section \[section 2.15.1\] contains the statements of the two main results: Theorem \[theorem 3.11.1\] on an a priori estimate providing, in particular, uniqueness of solutions, and Theorem \[theorem 3.16.1\] about the existence of solutions. Theorem \[theorem 3.11.1\] is proved in Section \[section 6.9.3\] after we prepare the necessary tools in Section \[section 6.9.2\]. Theorem \[theorem 3.16.1\] is proved in the last Section \[section 6.9.5\].
As usual when we speak of “a constant" we always mean “a finite constant".

Setting of the problem
======================

\[section 6.9.1\]

Let $(\Omega,{\mathcal{F}},P)$ be a complete probability space with an increasing filtration $\{{\mathcal{F}}_{t},t\geq0\}$ of complete with respect to $({\mathcal{F}},P)$ $\sigma$-fields ${\mathcal{F}}_{t}\subset{\mathcal{F}}$. Denote by ${\mathcal{P}}$ the predictable $\sigma$-field in $\Omega\times(0,\infty)$ associated with $\{{\mathcal{F}}_{t}\}$. Let $w^{k}_{t}$, $k=1,2,...$, be independent one-dimensional Wiener processes with respect to $\{{\mathcal{F}}_{t}\}$. Finally, let $\tau$ be a stopping time with respect to $\{{\mathcal{F}}_{t}\}$.

We consider the second-order operator $L$ $$\label{lu} L_{t} u_{t}(x) =D_{i}\big( a^{ij}_{t}( x)D_{j}u_{t}(x) +{\mathfrak{b}}^{i}_{t}(x)u_{t}(x)\big) + b^{i} _{t}(x) D_{i} u_{t}(x) -c _{t}(x) u_{t}(x),$$ and the first-order operators $$\Lambda^{k}_{t} u_{t}(x)=\sigma^{ik}_{t}(x)D_{i}u_{t}(x) +\nu^{k}_{t}(x)u_{t}(x)$$ acting on functions $u_{t}(x)$ defined on $\Omega\times{\mathbb{R}}^{d+1}_{+}$, where ${\mathbb{R}}^{d+1}_{+}= [0, \infty) \times {\mathbb{R}}^d$, and given for $k=1,2,...$ (the summation convention is enforced throughout the article). We set ${\mathbb{R}}_{+}=[0,\infty)$.

Our main concern is proving the unique solvability of the equation $$\label{2.6.4} du_{t}=(L_{t}u_{t}-\lambda u_{t}+D_{i}f^{i}_{t}+f^{0}_{t})\,dt +(\Lambda^{k}_{t}u_{t}+g^{k}_{t})\,dw^{k}_{t}, \quad t\leq\tau,$$ with an appropriate initial condition at $t=0$, where $\lambda>0$ is a constant. The precise assumptions on the coefficients, free terms, and initial data will be given later. First we introduce appropriate function spaces.
Denote $C^{\infty}_{0}=C^{\infty}_{0}({\mathbb{R}}^{d})$, ${\mathcal{L}}_{2}={\mathcal{L}}_{2}({\mathbb{R}}^{d})$, and let $W^{1}_{2} =W^{1}_{2}({\mathbb{R}}^{d})$ be the Sobolev space of functions $u$ of class ${\mathcal{L}}_{2}$ such that $Du\in {\mathcal{L}}_{2}$, where $Du$ is the gradient of $u$. Introduce $${\mathbb{L}}_{2}( \tau)={\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}},\bar{{\mathcal{P}}},{\mathcal{L}}_{2} ), \quad {\mathbb{W}}^{1}_{2}( \tau)={\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}},\bar{{\mathcal{P}}}, W^{1}_{2} ),$$ where $\bar{{\mathcal{P}}}$ is the completion of ${\mathcal{P}}$ with respect to the product measure.

Remember that the elements of ${\mathbb{L}}_{2}( \tau)$ need only belong to ${\mathcal{L}}_{2}$ on a predictable subset of ${\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$ of full measure. For the sake of convenience we will always assume that they are defined everywhere on ${\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$, at least as generalized functions. A similar situation occurs in the case of ${\mathbb{W}}^{1}_{2}( \tau)$.

We also use the same notation ${\mathbb{L}}_{2}(\tau)$ for $\ell_{2}$-valued functions like $g_{t}=(g^{k}_{t} )$. For such a function, naturally, $$\|g\|_{{\mathcal{L}}_{2}}=\|\,|g|_{\ell_{2}} \,\|_{{\mathcal{L}}_{2}}=\big\|\big(\sum_{k=1}^{\infty} (g^{k})^{2}\big)^{1/2}\big\|_{{\mathcal{L}}_{2}} =\big(\sum_{k=1}^{\infty}\int_{{\mathbb{R}}^{d}} |g^{k}|^{2}\,dx\big)^{1/2}.$$

The following definition turns out to be useful if the coefficients of $L$ and $\Lambda^{k}$ are bounded.
\[definition 3.16.1\] We introduce the space ${\mathcal{W}}^{1}_{2}(\tau)$, which is the space of functions $u_{t} =u_{t}(\omega,\cdot)$ on $\{(\omega,t): 0\leq t\leq\tau,t<\infty\}$ with values in the space of generalized functions on ${\mathbb{R}}^{d}$ and having the following properties:

\(i) We have $u_{0}\in {\mathcal{L}}_{2}(\Omega,{\mathcal{F}}_{0},{\mathcal{L}}_{2})$;

\(ii) We have $u \in {\mathbb{W}}^{1}_{2}(\tau )$;

\(iii) There exist $f^{i}\in {\mathbb{L}}_{2}(\tau)$, $i=0,...,d$, and $g=(g^{1},g^{2},...)\in {\mathbb{L}}_{2}(\tau)$ such that for any $\phi\in C^{\infty}_{0}$ with probability 1 for all $t\in{\mathbb{R}}_{+}$ we have $$(u_{t\wedge\tau},\phi)=(u_{0},\phi) +\sum_{k=1}^{\infty}\int_{0}^{t}I_{s\leq\tau} (g^{k}_{s},\phi)\,dw^{k}_{s}$$ $$\label{1.2.1} +\int_{0}^{t}I_{s\leq\tau}\big( (f^{0}_{s},\phi)-(f^{i}_{s},D_{i}\phi)\big)\,ds.$$

In particular, for any $\phi\in C^{\infty}_{0}$, the process $(u_{t\wedge\tau},\phi)$ is ${\mathcal{F}}_{t}$-adapted and (a.s.) continuous. In case that property (iii) holds, we write $$du_{t}=(D_{i}f^{i}_{t}+f^{0}_{t})\,dt+g^{k}_{t}\,dw^{k}_{t}, \quad t\leq\tau.$$ It is a standard fact that for $g\in{\mathbb{L}}_{2}(\tau)$ and any $\phi\in C^{\infty}_{0}$ the series in (\[1.2.1\]) converges uniformly on ${\mathbb{R}}_{+}$ in probability.

Similarly to this definition, we understand equation (\[2.6.4\]) in the general case as the requirement that for any $\phi\in C^{\infty}_{0}$ with probability one the relation $$(u_{t\wedge\tau},\phi)=(u_{0},\phi) +\sum_{k=1}^{\infty}\int_{0}^{t}I_{s\leq\tau} (\sigma^{ik}_{s}D_{i}u_{s}+\nu^{k}_{s}u_{s} +g^{k}_{s},\phi)\,dw^{k}_{s}$$ $$\label{3.16.7} + \int_{0}^{t}I_{s\leq\tau}\big[(b^{i}_{s}D_{i}u_{s} -(c_{s}+\lambda)u_{s}+f^{0}_{s},\phi) -(a^{ij}_{s}D_{j}u_{s}+{\mathfrak{b}}^{i}_{s}u_{s}+ f^{i}_{s},D_{i}\phi) \big]\,ds$$ holds for all $t\in{\mathbb{R}}_{+}$. Observe that at this moment it is not clear that the right-hand side makes sense.
Also notice that, if the coefficients of $L$ and $\Lambda^{k}$ are bounded, then any $u\in{\mathcal{W}}^{1}_{2}(\tau)$ is a solution of (\[2.6.4\]) with appropriate free terms, since if (\[1.2.1\]) holds, then (\[3.16.7\]) holds as well with $$f^{i}_{t}-a^{ij}_{t}D_{j}u_{t}-{\mathfrak{b}}^{i}_{t}u_{t}, \quad i=1,...,d,\quad f^{0}_{t}+(c_{t}+\lambda)u_{t}-b^{i}_{t}D_{i}u_{t},$$ $$g^{k}_{t}-\sigma^{ik}_{t}D_{i}u_{t}-\nu^{k}_{t}u_{t}$$ in place of $f^{i}_{t}$, $i=1,...,d$, $f^{0}_{t}$, and $g^{k}_{t}$, respectively.

Main results
============

\[section 2.15.1\]

For $\rho>0$ denote $B_{\rho}(x)=\{y\in{\mathbb{R}}^{d}:|x-y|<\rho\}$, $B_{\rho}=B_{\rho}(0)$.

\[assumption 2.7.2\] (i) The functions $a^{ij}_{t}(x)$, ${\mathfrak{b}}^{i}_{t}(x)$, $b^{i}_{t}(x)$, $c_{t}(x)$, $\sigma^{ik}_{t}(x)$, $\nu^{k}_{t}(x)$ are real valued, measurable with respect to ${\mathcal{F}}\otimes{\mathcal{B}}({\mathbb{R}}^{d+1}_{+})$, ${\mathcal{F}}_{t}$-adapted for any $x$, and $c\geq 0$.

\(ii) There exist constants $K,\delta>0$ such that for all values of arguments and $\xi\in{\mathbb{R}}^{d}$ $$(a^{ij} - \alpha^{ij} ) \xi^{i} \xi^{j}\geq\delta|\xi|^{2},\quad |a^{ij}|\leq \delta^{-1} , \quad |\nu|_{\ell_{2}}\leq K,$$ where $\alpha^{ij} =(1/2)(\sigma^{i\cdot},\sigma^{j\cdot}) _{\ell_{2}}$. Also, the constant $\lambda>0$.

\(iii) For any $x\in {\mathbb{R}}^{d}$ (and $\omega$) the function $$\int_{B_{1}}(|{\mathfrak{b}}_{t}(x+y)|+|b_{t}(x+y)|+c_{t}(x+y))\,dy$$ is locally square integrable on ${\mathbb{R}}_{+}=[0,\infty)$.

Notice that the matrix $a=(a^{ij})$ need not be symmetric. Also notice that in Assumption \[assumption 2.7.2\] (iii) the ball $B_{1}$ can be replaced with any other ball without changing the set of admissible coefficients ${\mathfrak{b}},b,c$.
We take some $f^{j},g\in{\mathbb{L}}_{2}(\tau)$ and, before we give the definition of a solution of (\[2.6.4\]), we remind the reader that, if $u\in{\mathbb{W}}^{1}_{2}(\tau)$, then owing to the boundedness of $\nu$ and $\sigma$ and the fact that $Du,u,g\in{\mathbb{L}}_{2}(\tau)$, the first series on the right in (\[3.16.7\]) converges uniformly in probability and the series is a continuous local martingale.

\[definition 3.20.01\] By a solution of (\[2.6.4\]) for $t\leq\tau$ with initial condition $u_{0}\in{\mathcal{L}}_{2}(\Omega,{\mathcal{F}}_{0},{\mathcal{L}}_{2})$ we mean a function $u\in {\mathbb{W}}^{1}_{2}(\tau) $ (not ${\mathcal{W}}^{1}_{2}(\tau))$ such that

\(i) For any $\phi\in C^{\infty}_{0} $ with probability one the integral with respect to $ds$ in (\[3.16.7\]) is well defined and is finite for all $t\in{\mathbb{R}}_{+}$;

\(ii) For any $\phi\in C^{\infty}_{0} $ with probability one equation (\[3.16.7\]) holds for all $t\in{\mathbb{R}}_{+}$.

For $d\ne2$ define $$q =d\vee 2,$$ and if $d=2$ let $q $ be a fixed number such that $q >2$. The following assumption contains a parameter $ \gamma \in(0,1]$, whose value will be specified later.

\[assumption 3.11.1\] There exists a $\rho_{0}\in(0,1]$ such that, for any $\omega\in\Omega$ and $(t,x)\in{\mathbb{R}}^{d+1}_{+}$, with $ {\mathfrak{b}}:=({\mathfrak{b}}^{1} ,...,{\mathfrak{b}}^{d} )$ and $ b:=(b^{1} ,...,b^{d} )$, we have $$\rho_{0}^{- d}\int_{B_{\rho_{0}}}\int_{B_{\rho_{0}}}|{\mathfrak{b}}_{t}( x+y) -{\mathfrak{b}}_{t}( x+z)|^{q }\,dydz \leq\gamma ,$$ $$\rho_{0}^{- d}\int_{B_{\rho_{0}}}\int_{B_{\rho_{0}}}|b_{t}( x+y) -b_{t}( x+z)|^{q}\,dydz \leq\gamma ,$$ $$\rho_{0}^{- d}\int_{B_{\rho_{0}}}\int_{B_{\rho_{0}}}|c_{t}( x+y) -c_{t}( x+z)|^{q}\,dydz \leq\gamma .$$

Obviously, Assumption \[assumption 3.11.1\] is satisfied with any $\gamma\in(0,1]$ if $b$, ${\mathfrak{b}}$, and $c$ are independent of $x$. Also notice that Assumption \[assumption 3.11.1\] allows $b$, ${\mathfrak{b}}$, and $c$ to grow linearly in $x$.
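For instance, if $b_{t}(x)=Mx$ with a constant $M>0$, then for $y,z\in B_{\rho_{0}}$ $$|b_{t}(x+y)-b_{t}(x+z)|=M|y-z|\leq 2M\rho_{0},$$ so that, since $|B_{\rho_{0}}|=N(d)\rho_{0}^{d}$, $$\rho_{0}^{- d}\int_{B_{\rho_{0}}}\int_{B_{\rho_{0}}}|b_{t}( x+y) -b_{t}( x+z)|^{q}\,dydz \leq N(d)(2M)^{q}\rho_{0}^{d+q},$$ which does not exceed $\gamma$ once $\rho_{0}$ is taken small enough.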
\[theorem 3.11.1\] There exist $$\gamma =\gamma (d,\delta,K )\in(0,1],$$ $$N=N(d,\delta,K ), \quad \lambda_{0}=\lambda_{0}(d,\delta,K, \rho_{0})\geq0$$ such that, if the above assumptions are satisfied, $\lambda\geq \lambda_{0}$, and $u $ is a solution of (\[2.6.4\]) with initial condition $u_{0}$ and some $f^{j},g\in{\mathbb{L}}_{2}(\tau)$, then $$\|u\sqrt{\lambda +c}\|^{2}_{{\mathbb{L}}_{2}(\tau)}+\|Du\|^{2}_{{\mathbb{L}}_{2}(\tau)} \leq N\big(\sum_{i=1}^{d}\|f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)}$$ $$\label{3.11.2} +\|g\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\lambda^{-1}\|f^{0}\|^{2}_{{\mathbb{L}}_{2}(\tau)}+ E\|u_{0}\|^{2}_{{\mathcal{L}}_{2}}\big).$$

This theorem provides an a priori estimate implying uniqueness of solutions $u $. Observe that the assumption that such a solution exists is quite nontrivial because if ${\mathfrak{b}}_{t}(x)\equiv x$, it is not true that ${\mathfrak{b}}u\in{\mathbb{L}}_{2}(\tau)$ for arbitrary $u\in {\mathbb{W}}^{ 1}_{2}(\tau) $.

To prove the existence we need stronger assumptions because, generally, under only the above assumptions the term $$D_{i}({\mathfrak{b}}^{i}_{t}u_{t})+b^{i}_{t}D_{i}u_{t}$$ cannot be written even locally as $D_{i}\hat{f}^{i}_{t}+\hat{f}^{0}_{t}$ with $\hat{f}^{j}\in{\mathbb{L}}_{2}(\tau)$ if we only know that $u\in{\mathbb{W}}^{1}_{2}(\tau)$, even if ${\mathfrak{b}}$ and $b$ are independent of $x$. We can only prove our crucial Lemma \[lemma 3.16.5\] if such a representation is possible.

\[assumption 3.16.1\] For any $T,R\in{\mathbb{R}}_{+}$ and $\omega\in\Omega$ we have $$\sup_{t\leq T}\int_{B_{R}}(|{\mathfrak{b}}_{t}(x)| +|b_{t}(x)|+ c_{t}(x) ) \,dx<\infty.$$

\[theorem 3.16.1\] Let the above assumptions be satisfied with $\gamma $ taken from Theorem \[theorem 3.11.1\]. Take $\lambda\geq\lambda_{0}$, where $\lambda_{0}$ is defined in Theorem \[theorem 3.11.1\], and take $u_{0}\in{\mathcal{L}}_{2}(\Omega,{\mathcal{F}}_{0},{\mathcal{L}}_{2})$. Then there exists a unique solution of (\[2.6.4\]) with initial condition $u_{0}$.
If the stopping time $\tau$ is bounded, then in the above theorem one can take $\lambda=0$. To show this, take a large $\lambda>0$ and replace the unknown function $u_{t}$ with $v_{t} e^{ \lambda t}$. This leads to an equation for $v_{t}$ with the additional term $-\lambda v_{t}\,dt$ and the free terms multiplied by $e^{-\lambda t}$. The existence of $v\in{\mathcal{W}}^{1}_{2}(\tau)$ will then be equivalent to the existence of $u\in{\mathcal{W}}^{1}_{2}(\tau)$ if $\tau$ is bounded.

A version of the Itô-Wentzell formula
=====================================

\[section 6.9.2\]

Let ${\mathcal{D}}$ be the space of generalized functions on ${\mathbb{R}}^{d}$. We recall a definition and a result from [@Kr09_4]. Recall that for any $v\in{\mathcal{D}}$ and $\phi\in C^{\infty}_{0}$ the function $(v,\phi(\cdot-x))$ is infinitely differentiable with respect to $x$, so that the sup in (\[11.16.2\]) below is predictable.

Denote by ${\mathfrak{D}}$ \[def 10.25.1\] the set of all ${\mathcal{D}}$-valued functions $u$ (written as $u_{t}(x)$ in a common abuse of notation) on $\Omega\times{\mathbb{R}}_{+}$ such that, for any $\phi\in C_{0}^{\infty} :=C_{0}^{\infty}({\mathbb{R}}^{d})$, the restriction of the function $(u_{t},\phi)$ to $\Omega\times(0,\infty)$ is ${\mathcal{P}}$-measurable and $(u_{0},\phi)$ is ${\mathcal{F}}_{0}$-measurable.
For $p=1,2$ denote by $\mathfrak{D}^{p}$ the subset of ${\mathfrak{D}}$ consisting of $u$ such that, for any $\phi\in C_{0}^{\infty}$ and $T ,R \in{\mathbb{R}}_{+}$, we have $$\label{11.16.2} \int_{0}^{T}\sup_{ |x|\leq R}|(u_{t} , \phi(\cdot-x))|^{ p}\,dt<\infty \quad\hbox{ (a.s.)}.$$

In the same way, considering $\ell_{2}$-valued distributions $g$ on $C_{0}^{\infty}$, that is, linear $\ell_{2}$-valued functionals such that $(g,\phi)$ is continuous as an $\ell_{2}$-valued function with respect to the standard convergence of test functions, we define ${\mathfrak{D}}(\ell_{2})$ and $\mathfrak{D}^{ 2} (\ell_{2})$ replacing $|\cdot|$ in (\[11.16.2\]) with $p=2$ by $|\cdot|_{\ell_{2}}$.

Observe that if $g\in\mathfrak{D}^{2}(\ell_{2})$ then for any $\phi\in C_{0}^{\infty}$ and $T\in{\mathbb{R}}_{+}$ $$\sum_{k=1}^{\infty}\int_{0}^{T}(g^{k}_{t},\phi)^{2}\,dt =\int_{0}^{T}|(g _{t},\phi)|_{\ell_{2}}^{2}\,dt<\infty \quad\hbox{(a.s.)},$$ which, by well known theorems about convergence of series of martingales, implies that the series in (\[12.23.40\]) below converges uniformly on $[0,T]$ in probability for any $T\in{\mathbb{R}}_{+}$.

\[def 10.25.3\] Let $f,u\in\mathfrak{D}$, $g\in\mathfrak{D}(\ell_{2})$.
We say that the equality $$\label{11.16.3} du_{t}(x)=f_{t}( x)\,dt+ g_{t}^{k}( x)\,dw^{k}_{t},\quad t\leq\tau,$$ holds [*in the sense of distributions*]{} if $ fI_{{\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}}\in\mathfrak{D}^{1}$, $gI_{{\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}}\in\mathfrak{D}^{ 2}(\ell_{2})$, and for any $\phi\in C_{0}^{\infty}$, with probability one we have for all $t\in{\mathbb{R}}_{+}$ $$\label{12.23.40} (u_{t\wedge\tau} ,\phi)=(u_{0} ,\phi)+\int_{0}^{t}I_{ s\leq\tau} (f_{s},\phi)\,ds+\sum_{k=1}^{\infty} \int_{0}^{t}I_{ s\leq\tau}(g^{k}_{s},\phi)\,dw^{k}_{s}.$$

Let $x_{t}$ be an ${\mathbb{R}}^{d}$-valued stochastic process given by $$x^{i}_{t}=\int_{0}^{t}\hat b^{i}_{s}\,ds +\sum_{k=1}^{\infty}\int_{0}^{t}\hat \sigma^{ik}_{s}\,dw^{k}_{s},$$ where $\hat b_{t}=(\hat b^{i}_{t}),\hat \sigma^{k}_{t} =(\hat \sigma^{ik}_{t})$ are predictable ${\mathbb{R}}^{d}$-valued processes such that for all $\omega$ and $s,T\in{\mathbb{R}}_{+}$ we have ${\text{\rm tr}\,}\hat\alpha_{s}<\infty$ and $$\int_{0}^{T}(|\hat b_{t}|+{\text{\rm tr}\,}\hat\alpha_{t}) \,dt<\infty,$$ where $\hat\alpha_{t}=(\hat\alpha^{ij}_{t})$ and $2\hat \alpha^{ij}_{t}=(\hat \sigma^{i\cdot}, \hat \sigma^{j\cdot})_{\ell_{2}}$.

Finally, before stating the main result of [@Kr09_4] we remind the reader that for a generalized function $v$ and any $\phi\in C^{\infty}_{0} $ the function $(v,\phi(\cdot-x))$ is infinitely differentiable and for any derivative operator $D $ of order $n$ with respect to $x$ we have $$\label{4.3.2} D (v,\phi(\cdot-x)) =(-1)^{n}(v,(D \phi)(\cdot-x))=:(D v,\phi(\cdot-x))=: ((D v)(\cdot+x),\phi)$$ implying, in particular, that $D u \in{\mathfrak{D}}$ if $u \in{\mathfrak{D}}$.

\[theorem 11.16.5\] Let $f,u\in\mathfrak{D}$, and $g\in\mathfrak{D}(\ell_{2})$. Introduce $$v_{t}(x)=u_{t}(x+x_{t})$$ and assume that (\[11.16.3\]) holds (in the sense of distributions).
Then $$dv_{t}(x)= [f_{t}( x+x_{t})+\hat L_{t}v_{t}(x) +(D_{i}g_{t}( x+x_{t}) ,\hat \sigma^{i\cdot}_{t} )_{\ell_{2}}]\,dt$$ $$\label{4.4.5} + [g^{k}_{t}( x+x_{t})+D_{i}v_{t}( x) \hat \sigma^{ik}_{t} ]\,dw_{t}^{k}, \quad t\leq\tau$$ (in the sense of distributions), where $\hat L_{t}v_{t}= \hat \alpha^{ij}_{t} D_{i}D_{j}v_{t}( x)+\hat b^{i}_{t}D_{i}v_{t}(x)$. In particular, the terms on the right in (\[4.4.5\]) belong to the right class of functions.

We remind the reader that the summation convention over the repeated indices $i,j=1,...,d$ (and $k=1,2,...$) is enforced throughout the article. In the main part of this paper we are going to use Theorem \[theorem 11.16.5\] only for $ \hat\sigma \equiv0$.

\[corollary 7.3.1\] Under the assumptions of Theorem \[theorem 11.16.5\] for any $\eta\in C^{\infty}_{0}$ we have $$d[u_{t}(x)\eta(x-x_{t})]=[g^{k}_{t}(x)\eta(x-x_{t})- u_{t}(x)\hat\sigma^{ik}_{t}(D_{i}\eta)(x-x_{t})]\,dw^{k}_{t}$$ $$+[f_{t}(x)\eta(x-x_{t})+u_{t}(x) (\hat{L}_{t}^{*}\eta)(x-x_{t})-(g_{t}(x),\hat\sigma^{i\cdot}_{t}(D_{i}\eta) (x-x_{t}))_{\ell_{2}}]\,dt,\quad t\leq\tau,$$ where $\hat{L}_{t}^{*}$ is the formal adjoint to $\hat{L}_{t}$.

Indeed, what we claim is that for any $\phi\in C^{\infty}_{0}$ with probability one $$((u_{t\wedge\tau} \phi)(\cdot+x_{t\wedge\tau}),\eta)=( u_{0} \phi ,\eta)$$ $$+\int_{0}^{t}I_{s\leq\tau} \big (\big[ g^{k}_{s}\phi + \hat{\sigma}^{ik}_{s} D_{i} (u_{s}\phi)\big](\cdot+x_{s}),\eta\big) \,dw^{k}_{s}$$ $$+\int_{0}^{t}I_{s\leq\tau}\big(\big[f_{s}\phi + \hat{L}_{s}(u_{s}\phi) + (\hat{\sigma}^{i\cdot}_{s},D_{i}(g _{s}\phi))_{\ell_{2}}\big] (\cdot+x_{s}),\eta\big)\,ds$$ for all $t$. However, to obtain this result it suffices to write down an obvious equation for $u_{t}\phi$, then use Theorem \[theorem 11.16.5\] and, finally, use Definition \[def 10.25.3\] to interpret the result.
Proof of Theorem \[theorem 3.11.1\]
===================================

\[section 6.9.3\]

Throughout this section we suppose that the assumptions of Theorem \[theorem 3.11.1\] are satisfied and start with analyzing the second integral in (\[3.16.7\]). Recall that $q$ was introduced before Assumption \[assumption 3.11.1\].

\[lemma 6.27.1\] Let $h\in{\mathcal{L}}_{q}$, $v\in {\mathcal{L}}_{2}$, and $u\in W^{1}_{2}$. Then there exist $V^{j}\in{\mathcal{L}}_{2}$, $j=0,1,...,d$, such that $$h v=D_{i}V^{i}+V^{0},\quad \sum_{j=0}^{d}\|V^{j}\|_{{\mathcal{L}}_{2}}\leq N\|h\|_{{\mathcal{L}}_{q}}\|v\|_{{\mathcal{L}}_{2}},$$ where $N$ is independent of $h$ and $v$. In particular, $$\label{7.1.1} |(hv,u)|\leq N\|h\|_{{\mathcal{L}}_{q}}\|v\|_{{\mathcal{L}}_{2}}\|u\|_{W^{1}_{2}}.$$ Furthermore, if $\rho>0$, then for any ball $B$ of radius $\rho$ we have $$\label{6.27.7} \|I_{B}hu\|_{{\mathcal{L}}_{2}} \leq N\|h\| _{{\mathcal{L}}_{q}}\big (\rho^{ 1-d/q }\|I_{B}Du\| _{{\mathcal{L}}_{2}} +\rho^{- d/q}\|I_{B}u\| _{{\mathcal{L}}_{2}}\big),$$ where $N$ is independent of $h$, $u$, $\rho$, and $B$.

Proof. Observe that by Hölder’s inequality for $r=2q/(2+q)$ ($\in[1,2)$) we have $$\|h v\|_{{\mathcal{L}}_{r}}\leq \|h\|_{{\mathcal{L}}_{q}}\|v\|_{{\mathcal{L}}_{2}}.$$ Next we use the classical theory and introduce $V \in W^{2}_{r}$ (note that $r>1$ if $d\ne1$ and $r=1$ if $d=1$) as the unique solution of $$\Delta V - V =hv.$$ We know that for a constant $N=N(d,r)$ we have $$\|V \|_{W^{2}_{r}}\leq N \|h v\|_{{\mathcal{L}}_{r}}, \quad \|V \|_{W^{1}_{2}}\leq N\|V \|_{W^{2}_{r}},$$ where the last inequality follows by embedding theorems ($2-d/r\geq1-d/2$). Now to prove the first assertion of the lemma it only remains to combine the above estimates and notice that for $V^{i}=D_{i}V$, $i=1,...,d$, $V^{0}=-V$ it holds that $h v=D_{i}V^{i}+V^{0}$.

To prove the second assertion, first let $q>2$.
Observe that by Hölder’s inequality $$\|I_{B}hu\|_{{\mathcal{L}}_{2}} \leq\|h\| _{{\mathcal{L}}_{q}}\|I_{B}u\| _{{\mathcal{L}}_{s}},$$ where $s=2q/(q-2)$. By embedding theorems (we use the fact that $d/s\geq d/2-1$) $$\|I_{B}u\| _{{\mathcal{L}}_{s}} \leq N\big(\rho^{ 1-d/q }\|I_{B}Du\| _{{\mathcal{L}}_{2}} +\rho^{- d/q}\|I_{B}u\| _{{\mathcal{L}}_{2}}\big)$$ and the result follows. The remaining case, $q=2$, can happen only if $d=1$; in that case the above estimates remain true if we set $s=\infty$. The lemma is proved. Before we extract some consequences from the lemma we take a nonnegative $ \xi\in C^{\infty}_{0}(B_{\rho_{0}})$ with unit integral and define $$\bar{b}_{s}(x)=\int_{B_{\rho_{0}}}\xi(y) b_{s}(x-y) \,dy,\quad \bar{{\mathfrak{b}}}_{s}(x) =\int_{B_{\rho_{0}}}\xi(y) {\mathfrak{b}}_{s}(x-y) \,dy,$$ $$\label{6.28.3} \bar{c}_{s}(x)=\int_{B_{\rho_{0}}}\xi(y) c_{s}(x-y) \,dy.$$ We may assume that $|\xi|\leq N(d)\rho_{0}^{-d}$. One obtains the first two assertions of the following corollary from and by performing estimates like $$\|I_{B_{\rho_{0}}(x_{t})}(b_{t}-\bar{b}_{t}(x_{t}))\|_{{\mathcal{L}}_{q}}^{q} = \int_{B_{\rho_{0}}(x_{t})}| b_{ t}-\bar{b}_{t}(x_{t})|^{q}\,dx$$ $$=\int_{B_{\rho_{0}}(x_{t})}\big|\int_{B_{\rho_{0}}(x_{t})} [b_{t}(x)-b_{t}(y)]\xi(x_{t}-y)\,dy\big|^{q}\,dx$$ $$\leq N\int_{B_{\rho_{0}}(x_{t})}\big|\rho_{0}^{-d} \int_{B_{\rho_{0}}(x_{t})} |b_{t}(x)-b_{t}(y)|\,dy\big|^{q}\,dx$$ $$\label{6.11.2} \leq N\rho_{0}^{-d}\int_{B_{\rho_{0}}(x_{t})} \int_{B_{\rho_{0}}(x_{t})} |b _{t}(x)-b_{t}(y)|^{q}\,dy \,dx\leq N\gamma ,$$ \[corollary 6.27.2\] Let $u\in{\mathbb{W}}^{1}_{2}(\tau)$, let $x_{s}$ be an ${\mathbb{R}}^{d}$-valued predictable process, and let $\eta\in C^{\infty}_{0}(B_{\rho_{0}})$. Set $\eta_{s}(x)=\eta(x-x_{s})$. 
Then on ${\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$ \(i) For any $v\in W^{1}_{2}$ $$(|b^{i}_{s}-\bar{b}^{i}_{s}(x_{s})|I_{B_{\rho_{0}}(x_{s})} | D_{i}u_{s}|,|v|) \leq N(d)\gamma^{1/q}\|I_{B_{\rho_{0}}(x_{s})}Du_{s}\| _{{\mathcal{L}}_{2}} \|v\|_{W^{1}_{2}} ;$$ \(ii) We have $$\|I_{B_{\rho_{0}}(x_{s})} |{\mathfrak{b}}_{s}-\bar{{\mathfrak{b}}}_{s}(x_{s})|\,u_{s}\|_{{\mathcal{L}}_{2}} +\|I_{B_{\rho_{0}}(x_{s})}|c_{s}-\bar{c}_{s}(x_{s})|\,u_{s}\|_{{\mathcal{L}}_{2}}$$ $$\leq N(d)\gamma^{1/q} \big (\rho_{0}^{ 1-d/q }\|I_{B_{\rho_{0}}(x_{s})}Du_{s}\| _{{\mathcal{L}}_{2}} +\rho_{0}^{- d/q}\|I_{B_{\rho_{0}}(x_{s})}u_{s}\| _{{\mathcal{L}}_{2}}\big);$$ \(iii) Almost everywhere on ${\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$ we have $$\label{7.1.3} (b^{i}_{s}-\bar{b}^{i}_{s}(x_{s}))\eta_{s} D_{i}u_{s} =D_{i}V^{i}_{s}+V^{0}_{s} ,$$ $$\label{6.11.3} \sum_{j=0}^{d}\|V^{j}_{s}\|_{{\mathcal{L}}_{2}} \leq N(d)\gamma^{1/q}\|I_{B_{\rho_{0}}(x_{s})}Du_{s}\|_{{\mathcal{L}}_{2}} \sup_{B_{\rho_{0}}}|\eta|,$$ where $V^{j}_{s} $, $j=0,...,d$, are some predictable ${\mathcal{L}}_{2}$-valued functions on ${\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$. To prove (iii) observe that one can find a predictable set $A\subset{\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$ of full measure such that $I_{A}D_{i}u$, $i=1,...,d$, are well defined as ${\mathcal{L}}_{2}$-valued predictable functions. Then with $I_{A}D_{i}u$ in place of $D_{i}u$ and follow from , the first assertion of Lemma \[lemma 6.27.1\], and the fact that the way $V^{j}$ are constructed uses bounded hence continuous operators and translates the measurability of the data to the measurability of the result. 
Since we are interested in and holding only almost everywhere on ${\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$, there is no actual need for the replacement. \[corollary 6.27.1\] Let $u\in{\mathbb{W}}^{1}_{2}(\tau)$. Then for almost any $(\omega,s)$ the mappings $$\label{6.27.2} \phi\,\to\,I_{s\leq\tau}(b^{i}_{s}D_{i}u_{s},\phi),\quad I_{s\leq\tau}({\mathfrak{b}}^{i}_{s} u_{s},D_{i}\phi),\quad I_{s\leq\tau}(c_{s}u_{s},\phi)$$ are generalized functions on ${\mathbb{R}}^{d}$. Furthermore, for any $T\in{\mathbb{R}}_{+}$ almost surely $$\label{6.27.3} \int_{0}^{T} I_{s\leq\tau}(|(b^{i}_{s}D_{i}u_{s},\phi)|+ |({\mathfrak{b}}^{i}_{s} u_{s},D_{i}\phi)|+ |(c_{s}u_{s},\phi)|)\,ds<\infty,$$ so that requirement (i) in Definition \[definition 3.20.01\] can be dropped. Proof. By having in mind partitions of unity we convince ourselves that it suffices to prove that the mappings are generalized functions on any ball $B$ of radius $\rho_{0}$ and that holds if $\phi\in C^{\infty}_{0}(B)$. Let $x_{0}$ be the center of $B$ and set $x_{s}\equiv x_{0}$. Then to prove the first assertion concerning the last two functions in it suffices to use the first assertion of Corollary \[corollary 6.27.2\] along with the observation that, say, $$({\mathfrak{b}}^{i}_{s} u_{s},D_{i}\phi)=(({\mathfrak{b}}^{i}_{s}-\bar{{\mathfrak{b}}}^{i}_{s}(x_{0})) u_{s},D_{i}\phi) +\bar{{\mathfrak{b}}}^{i}_{s}(x_{0})( u_{s},D_{i}\phi).$$ Similar transformation and Corollary \[corollary 6.27.2\] (i) prove that the first function in is also a generalized function. Assumption \[assumption 2.7.2\] (iii) and the estimates from Corollary \[corollary 6.27.2\] also easily imply thus finishing the proof of the corollary. 
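The partition-of-unity reduction invoked at the beginning of the proof can be made explicit as follows (a sketch; the cover and the functions $\varphi_{j}$ below are auxiliary objects introduced here for illustration):

```latex
% Choose points x_j so that the balls B_j = B_{\rho_0}(x_j) cover R^d with
% locally finite overlap, and take \varphi_j \in C^\infty_0(B_j),
% \varphi_j \ge 0, with \sum_j \varphi_j = 1. For \phi \in C^\infty_0 only
% finitely many products \varphi_j\phi are nonzero, and, say,
(b^{i}_{s}D_{i}u_{s},\phi)=\sum_{j}(b^{i}_{s}D_{i}u_{s},\varphi_{j}\phi),
% with each \varphi_j\phi supported in a ball of radius \rho_0; the other
% two functionals are treated in the same way.
```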
Before we continue with the proof of Theorem \[theorem 3.11.1\], we notice that, if $u \in {\mathcal{W}}^{ 1}_{2 }(\tau)$, then as we know (see, for instance, Theorem 2.1 of [@Kr09_3]), there exists an event $\Omega'$ of full probability such that $u_{t\wedge\tau}I_{\Omega'}$ is a continuous ${\mathcal{L}}_{2}$-valued ${\mathcal{F}}_{t}$-adapted process on ${\mathbb{R}}_{+}$. Substituting $u_{t\wedge\tau}I_{\Omega'}$ in place of $u$ in our assumptions and assertions does not change them. Furthermore, replacing $\tau$ with $\tau\wedge n$ and then sending $n$ to infinity allows us to assume that $\tau$ is bounded. Therefore, without losing generality we assume that \(H) If we are considering a $u \in {\mathcal{W}}^{ 1}_{2 }(\tau)$, the process $u_{t\wedge\tau}$ is a continuous ${\mathcal{L}}_{2}$-valued ${\mathcal{F}}_{t}$-adapted process on ${\mathbb{R}}_{+}$. The stopping time $\tau$ is bounded. Now we are ready to prove Theorem \[theorem 3.11.1\] in a particular case. \[lemma 3.11.1\] Let $ \nu^{k}\equiv0$ and let ${\mathfrak{b}}^{i}$, $b^{i}$, and $c$ be independent of $x$. Assume that $u $ is a solution of with some $f^{j},g\in{\mathbb{L}}_{2}(\tau)$ and $\lambda>0$. Then holds with $N=N(d,\delta,K)$. Proof. We want to use Theorem \[theorem 11.16.5\] to get rid of the first order terms. 
Observe that reads as $$du_{t}=(\sigma^{ik}_{t}D_{i}u_{t}+g^{k}_{t})\,dw^{k}_{t}$$ $$\label{6.28.1} +\big(D_{i}(a^{ij}_{t}D_{j}u_{t}+[{\mathfrak{b}}^{i}_{t} + b^{i}_{t}]u_{t}+f^{i}_{t})+f^{0}_{t}-(c_{t} +\lambda) u_{t}\big)\,dt, \quad t\leq\tau.$$ One can find a predictable set $A\subset{\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}$ of full measure such that $I_{A}f^{j}$, $j=0,1,...,d$, and $I_{A}D_{i}u$, $i=1,...,d$, are well defined as ${\mathcal{L}}_{2}$-valued predictable functions satisfying $$\int_{0}^{\infty}I_{A}\big(\sum_{j=0}^{d}\|f^{j}_{t}\|^{2}_{{\mathcal{L}}_{2}} + \|Du_{t}\|^{2}_{{\mathcal{L}}_{2}}\big)\,dt<\infty.$$ Replacing $f^{j}$ and $D_{i}u$ in with $I_{A}f^{j}$ and $I_{A}D_{i}u$, respectively, will not affect . Similarly, one can handle the function $g$ and the terms $h_{t}=I_{{\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}}[{\mathfrak{b}}^{i} + b^{i} ]u ,I_{{\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,\tau{\text{$]$\kern-.15em$]$}}}c u $ for which $$\int_{0}^{T}\|h_{t}\|_{{\mathcal{L}}_{1}}\,dt<\infty\quad\text{(a.s.)}$$ for each $T\in{\mathbb{R}}_{+}$ owing to Assumption \[assumption 2.7.2\] (iii) and the fact that $u\in{\mathbb{W}}^{1}_{2}(\tau)$. After these replacements all terms in will be of class ${\mathfrak{D}}^{1}$ or ${\mathfrak{D}}^{2}(\ell_{2})$ as appropriate since $a$ and $\sigma$ are bounded. 
This allows us to apply Theorem \[theorem 11.16.5\] and for $$B_{t}^{i}=\int_{0}^{t} ({\mathfrak{b}}^{i}_{s}+b^{i}_{s})\,ds,\quad \hat{u}_{t}(x)=u_{t}(x-B_{t})$$ obtain that $$d\hat{u}_{t}=\big(D_{i}(\hat{a}^{ij}_{t}D_{j}\hat{u}_{t} ) -(c_{t}+\lambda) \hat{u}_{t}+D_{i}\hat{f}^{i}_{t}+\hat{f}^{0}_{t}\big)\,dt$$ $$\label{6.28.8} +\big(\hat{\sigma}^{ik}_{t}D_{i}\hat{u}_{t}+ \hat{g}^{k}_{t}\big)\,dw^{k}_{t},\quad t\leq\tau,$$ where $$(\hat{a}^{ij}_{t},\hat{\sigma}^{ik}_{t}, \hat{f}^{j}_{t}, \hat{g}^{k}_{t})(x)=(a^{ij}_{t},\sigma^{ik}_{t}, f^{j}_{t}, g^{k}_{t})(x-B_{t}).$$ Obviously, $\hat{u}$ is in ${\mathbb{W}}^{1}_{2}(\tau)$ and its norm coincides with that of $u$. Moreover, having in mind that $c_{t}$ is independent of $x$ and is locally (square) integrable, one can find stopping times $\tau_{n}\uparrow\tau$ such that $I_{\tau_{n}\ne\tau} \downarrow0$ and $$\xi_{\tau_{n}}\leq n,\quad\text{where}\quad \xi_{t}:=\exp\Big(\int_{0}^{t}c_{s}\,ds\Big).$$ Then it follows from the equation $$d(\xi_{t}\hat{u}_{t})=\big(D_{i}( \xi_{t}\hat{a}^{ij}_{t}D_{j}\hat{u}_{t} ) -\lambda \xi_{t}\hat{u}_{t}+D_{i}\xi_{t}\hat{f}^{i}_{t}+ \xi_{t}\hat{f}^{0}_{t}\big)\,dt$$ $$+\big(\hat{\sigma}^{ik}_{t}\xi_{t}D_{i}\hat{u}_{t}+ \xi_{t}\hat{g}^{k}_{t}\big)\,dw^{k}_{t},\quad t\leq\tau_{n}$$ that $\xi\hat{u}\in{\mathcal{W}}^{1}_{2}(\tau_{n})$ and hence $\xi_{t\wedge\tau_{n}} \hat{u} _{t\wedge\tau_{n}}$ is a continuous ${\mathcal{L}}_{2}$-valued function and so are $u_{t\wedge\tau_{n}}$ and $u_{t\wedge\tau}$. 
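The cancellation of $c_{t}$ in the drift of the equation for $\xi_{t}\hat{u}_{t}$ is the product rule; assuming, as that equation indicates, that $\xi$ satisfies $d\xi_{t}=c_{t}\xi_{t}\,dt$, i.e. $\xi_{t}=\exp\big(\int_{0}^{t}c_{s}\,ds\big)$ (our reading of the definition of $\xi$), one computes:

```latex
d(\xi_{t}\hat{u}_{t})
 =\xi_{t}\,d\hat{u}_{t}+\hat{u}_{t}\,d\xi_{t}
 =\xi_{t}\,d\hat{u}_{t}+c_{t}\xi_{t}\hat{u}_{t}\,dt ,
% so the term -(c_t+\lambda)\xi_t\hat u_t coming from \xi_t d\hat u_t
% combines with +c_t\xi_t\hat u_t to leave just -\lambda\xi_t\hat u_t
% in the drift, as displayed.
```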
Furthermore, since $\tau$ is bounded and $u_{t\wedge\tau}$ is a continuous ${\mathcal{L}}_{2}$-valued function and $c_{t}$ is independent of $x$ and is locally square integrable, we have $$\label{6.29.2} \int_{0}^{\tau}\|c_{t}\hat{u}_{t}\|^{2}_{{\mathcal{L}}_{2}}\,dt =\int_{0}^{\tau}c^{2}_{t}\|u_{t}\|^{2}_{{\mathcal{L}}_{2}}\,dt \leq\sup_{t\leq\tau}\|u_{t}\|^{2}_{{\mathcal{L}}_{2}}\int_{0}^{\tau} c^{2}_{t}\,dt<\infty$$ and there is a sequence of stopping times $\tau_{n}\uparrow\tau$, perhaps different from the above ones, such that for each $n$ $$\label{6.29.1} E\int_{0}^{\tau_{n}}\|c_{t}\hat{u}_{t}\|^{2}_{{\mathcal{L}}_{2}}\,dt<\infty.$$ Then implies that $\hat{u}\in{\mathcal{W}}^{1}_{2}(\tau_{n})$ for each $n$. Also observe that if we can prove with $\tau_{n}$ in place of $\tau$, then we can let $n\to\infty$ and use the monotone convergence theorem to get as is. Therefore, in the rest of the proof we assume that holds with $\tau$ in place of $\tau_{n}$, that is, $\hat{u}\in{\mathcal{W}}^{1}_{2}(\tau )$. The next argument is standard (see, for instance, Lemma 3.3 and Corollary 3.2 of [@Kr09_2]). Itô’s formula implies that $$\label{3.11.3} E\|u_{0}\|^{2}_{{\mathcal{L}}_{2}}+ E\int_{0}^{\tau}\int_{{\mathbb{R}}^{d}}I_{t}\,dxdt\geq0,$$ where $$I_{t}:=2\hat{u}_{t}(\hat{f}^{0}_{t}-\lambda \hat{u}_{t}-c_{t} \hat{u}_{t}) -2(\hat{a}^{ij}_{t}D_{j}\hat{u}_{t}+\hat{f}^{i}_{t})D_{i}\hat{u}_{t} +|\hat{\sigma}^{i\cdot}_{t}D_{i}\hat{u}_{t}+ \hat{g}_{t}|_{\ell_{2}}^{2}.$$ We use the inequality $$|\hat{\sigma}^{i\cdot}_{t}D_{i}\hat{u}_{t}+ \hat{g}_{t}|_{\ell_{2}}^{2}\leq (1+\varepsilon) | \hat\sigma^{i\cdot}_{t}D_{i}\hat{u}_{t} |^{2} _{\ell_{2}} +2\varepsilon^{-1}|\hat{g}_{t}|^{2}_{\ell_{2}}, \quad\varepsilon\in(0,1],$$ and Assumption \[assumption 2.7.2\]. 
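The inequality just used is elementary; for completeness, with $a=\hat\sigma^{i\cdot}_{t}D_{i}\hat{u}_{t}$ and $b=\hat{g}_{t}$ in $\ell_{2}$:

```latex
|a+b|_{\ell_{2}}^{2}
 =|a|_{\ell_{2}}^{2}+2(a,b)_{\ell_{2}}+|b|_{\ell_{2}}^{2}
 \le(1+\varepsilon)|a|_{\ell_{2}}^{2}+(1+\varepsilon^{-1})|b|_{\ell_{2}}^{2}
 \le(1+\varepsilon)|a|_{\ell_{2}}^{2}+2\varepsilon^{-1}|b|_{\ell_{2}}^{2},
% using Young's inequality 2(a,b) <= eps|a|^2 + eps^{-1}|b|^2 and the bound
% 1 + eps^{-1} <= 2 eps^{-1}, valid for eps in (0,1].
```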
Then for $\varepsilon=\varepsilon(\delta)>0$ small enough we find $$I_{t}\leq-\delta|D\hat{u}_{t}|^{2}-2(c_{t}+\lambda)\hat{u}^{2}_{t} +2\hat{u}_{t}\hat{f}^{0}_{t}-2\hat{f}^{i}_{t} D_{i}\hat{u}_{t} +N|\hat{g}_{t}|^{2}_{\ell_{2}}.$$ Once again using $2\hat{u}_{t}\hat{f}^{0}_{t}\leq\lambda \hat{u}^{2}_{t}+\lambda^{-1}|\hat{f}^{0}_{t}|^{2}$ and similarly estimating $2\hat{f}^{i}_{t} D_{i}\hat{u}_{t}$ we conclude that $$I_{t}\leq-(\delta/2)|D\hat{u}_{t}|^{2}-(c_{t}+ \lambda)\hat{u}^{2}_{t} +N\big(\sum_{i=1}^{d}|\hat{f}^{i}_{t}|^{2}+|\hat{g}_{t}| _{\ell_{2}}^{2}\big)+N\lambda^{-1}|\hat{f}^{0}_{t}|^{2}.$$ By coming back to we obtain $$\|\hat{u}\sqrt{c+ \lambda}\|^{2}_{{\mathbb{L}}_{2}(\tau)}+\|D\hat{u}\|^{2} _{{\mathbb{L}}_{2}(\tau)}\leq N \big(\sum_{i=1}^{d} \|\hat{f}^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)}+\|\hat{g}\|^{2}_{{\mathbb{L}}_{2}(\tau)} \big)$$ $$+N\lambda^{-1 }\|\hat{f}^{0}\|^{2}_{{\mathbb{L}}_{2}(\tau)}+N E\|u_{0}\|^{2}_{{\mathcal{L}}_{2}}.$$ This is equivalent to and the lemma is proved. To proceed further we need a construction. Take $\bar{{\mathfrak{b}}}, \bar{b}$, and $\bar{c}$ from . From Lemma 4.2 of [@Kr09_1] and Assumption \[assumption 3.11.1\] it follows that, for $h_{ t}=\bar{{\mathfrak{b}}}_{ t},\bar{b}_{ t}, \bar{c}_{ t}$, it holds that $|D^{n}h_{ t}|\leq \kappa_{n} $, where $\kappa_{n}=\kappa_{n} (n,\gamma ,d,\rho_{0})\geq1$ and $ D^{n}h_{ t }$ is any derivative of $h_{ t}$ of order $n\geq1$ with respect to $x$. By Corollary 4.3 of [@Kr09_1] we have $|h_{ t}( x )|\leq K(t)(1+|x|)$, where for each $\omega$ the function $K(t)=K(\omega,t)$ is locally (square) integrable with respect to $t$ on ${\mathbb{R}}_{+}$. Owing to these properties the equation $$\label{2.8.1} x_{t}=x_{0}-\int_{t_{0}}^{t}(\bar{{\mathfrak{b}}}_{ s}+\bar{b}_{ s}) ( x_{s})\,ds,\quad t \geq t_{0} ,$$ for any ($\omega$ and) $ (t_{0},x_{0}) \in {\mathbb{R}}^{d+1 }_{+} $ has a unique solution $x_{t}=x_{t_{0},x_{0},t} $. 
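The unique solvability of this ODE is a standard consequence of the bounds just listed; a sketch: the right-hand side is Lipschitz in $x$ with constant $2\kappa_{1}$, so for two solutions $x_{t}$, $y_{t}$ with the same data Grönwall's inequality gives

```latex
|x_{t}-y_{t}|
 \le\int_{t_{0}}^{t}\big|(\bar{{\mathfrak{b}}}_{s}+\bar{b}_{s})(x_{s})
  -(\bar{{\mathfrak{b}}}_{s}+\bar{b}_{s})(y_{s})\big|\,ds
 \le 2\kappa_{1}\int_{t_{0}}^{t}|x_{s}-y_{s}|\,ds
 \quad\Longrightarrow\quad x_{t}\equiv y_{t},
```

while the linear growth bound $|h_{t}(x)|\leq K(t)(1+|x|)$ with $K$ locally integrable in $t$ rules out blow-up in finite time, so the solution exists for all $t\geq t_{0}$.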
Obviously, the process $x_{t_{0},x_{0},t}$, $t\geq t_{0}$, is ${\mathcal{F}}_{t}$-adapted. Next, for $i=1,2$ set $\chi^{(i)}(x)$ to be the indicator function of $B_{\rho_{0}/i}$ and introduce $$\chi^{(i)}_{t_{0},x_{0},t}(x)=\chi^{(i)}(x-x_{t_{0},x_{0},t}) I_{t\geq t_{0}}.$$ Here is a crucial estimate. \[lemma 3.14.1\] Assume that $u $ is a solution of with some $f^{j},g\in{\mathbb{L}}_{2}(\tau)$. Then for $ (t_{0},x_{0}) \in {\mathbb{R}}^{d+1 }_{+} $ and $\lambda>0$ we have $$\|\chi^{(2)}_{t_{0},x_{0}}u\sqrt{c+\lambda}\|^{2}_{{\mathbb{L}}_{2}(\tau)}+ \|\chi^{(2)}_{t_{0},x_{0}}Du\|^{2}_{{\mathbb{L}}_{2}(\tau)}$$ $$\leq N\big(\sum_{i=1}^{d}\|\chi^{(1)}_{t_{0},x_{0}} f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|\chi^{(1)}_{t_{0},x_{0}}g\|^{2}_{{\mathbb{L}}_{2}(\tau)}\big)$$ $$+N\lambda^{-1 }\|\chi^{(1)}_{t_{0},x_{0}}f^{0}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +NE\|u_{t_{0}}I_{B_{\rho_{0}}(x_{0})} I_{t_{0}\leq\tau}\|^{2} _{{\mathcal{L}}_{2}}$$ $$+N\gamma ^{2/q} \| \chi^{(1)}_{t_{0},x_{0}} Du \|_{{\mathbb{L}}_{2}(\tau)}^{2}+ N^{*} \lambda^{-1}\| \chi^{(1)}_{t_{0},x_{0}} Du \|_{{\mathbb{L}}_{2}(\tau)}^{2}$$ $$\label{3.14.2} + N^{*}(1+\lambda^{-1}) \| \chi^{(1)}_{t_{0},x_{0}} u \|_{{\mathbb{L}}_{2}(\tau)}^{2} +N^{*}\lambda^{-1}\sum_{i=1}^{d}\|\chi^{(1)}_{t_{0},x_{0}} f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)},$$ where and below in the proof by $N$ we denote generic constants depending only on $d,\delta$, and $K$ and by $N^{*}$ constants depending only on the same objects and $\rho_{0}$. Proof. Since we are only concerned with the values of $u_{t}$ if $t_{0}\leq t\leq\tau$, we may start considering on $[t_{0},\tau\vee t_{0})$ and then shifting time allows us to assume that $t_{0}=0$. Obviously, we may also assume that $x_{0}=0$. With these stipulations we will drop the subscripts $t_{0}, x_{0}$. Then, we can include the term $\nu^{k}u$ into $g^{k}$ and obtain by the triangle inequality if we assume that this estimate is true in case $ \nu^{k}\equiv0$. 
Thus, without losing generality we assume $$t_{0}=0,\quad x_{0}=0,\quad \nu^{k}\equiv0.$$ Fix a $ \zeta\in C^{\infty}_{0} $ with support in $B_{\rho_{0}}$ and such that $\zeta =1$ on $B_{\rho_{0}/2}$ and $0\leq\zeta \leq1$. Set $x_{t}=x_{0,0,t}$, $$\hat{{\mathfrak{b}}}_{ t} =\bar{{\mathfrak{b}}}_{ t}( x_{ t}) ,\quad \hat{b}_{ t} =\bar{b}_{ t}( x_{ t}),\quad \hat{c}_{ t} =\bar{c}_{ t}( x_{ t})$$ $$\eta_{ t}( x)=\zeta(x-x_{ t} ), \quad v_{ t}( x)= u_{t}( x) \eta_{ t}( x).$$ The most important property of $\eta_{t}$ is that $$d\eta_{t}=(\hat{{\mathfrak{b}}}^{i}_{t}+\hat{b}^{i}_{t})D_{i}\eta_{t}\,dt.$$ Also observe for later use that we may assume that $$\label{6.16.1} \chi^{(2)}_{t}\leq\eta_{ t}\leq \chi^{(1)}_{t},\quad |D\eta_{ t}|\leq N\rho_{0}^{-1 }\chi^{(1)}_{t},$$ where $\chi^{(i)}_{t}=\chi^{(i)}_{0,0,t}$ and $N=N(d)$. By Corollary \[corollary 7.3.1\] (also see the argument before ) we obtain that for $t\leq\tau$ $$dv_{ t} =\big[D_{i}(\eta_{ t}a^{ij}_{t}D_{j}u_{t}+{\mathfrak{b}}^{i}_{t}v_{ t}) -(a^{ij}_{t}D_{j}u_{t}+{\mathfrak{b}}^{i}_{t}u_{t}) D_{i}\eta_{ t}$$ $$+b^{i}_{t}\eta_{ t}D_{i} u_{t} -(c_{t}+\lambda) v_{ t} +D_{i}(f^{i}_{t}\eta_{ t})-f^{i}_{t}D_{i}\eta_{ t} +f^{0}_{t}\eta_{ t}$$ $$+ (\hat{{\mathfrak{b}}}^{i}_{ t} +\hat{b}^{i}_{ t} )u_{t} D_{i} \eta_{ t}\big]\,dt+\big[\sigma^{ik}D_{i}v_{ t} -\sigma^{ik}u_{t}D_{i}\eta_{ t}+g^{k}_{t}\eta_{ t} \big]\,dw^{k}_{t}.$$ We transform this further by noticing that $$\eta_{ t}a^{ij}_{t}D_{j}u_{t}= a^{ij}_{t}D_{j}v_{ t}- a^{ij}_{t}u_{t}D_{j}\eta_{ t}.$$ To deal with the term $b^{i}_{t}\eta_{ t}D_{i} u_{t} $ we use Corollary \[corollary 6.27.2\] and find the corresponding functions $V^{j}_{t}$. 
Then simple arithmetics show that $$dv_{ t}=(\sigma^{ik}D_{i}v_{ t} +\hat{g}^{k}_{t} )\,dw^{k}_{t}$$ $$+\big[D_{i}\big(a^{ij}_{t}D_{j}v_{ t}+ \hat{{\mathfrak{b}}}^{i}_{ t}v_{ t}\big)-(\hat{c}_{ t}+\lambda) v_{ t} +\hat{b}^{i}_{ t} D_{i}v_{ t}+D_{i}\hat{f}^{i}_{ t} +\hat{f}^{0}_{ t}\big]\,dt,$$ where $$\hat{f}^{0}_{ t}=f^{0}_{t}\eta_{ t}-f^{i}_{t}D_{i}\eta_{ t} -a^{ij}_{t}(D_{j}u_{t})D_{i}\eta_{ t} +( \hat{{\mathfrak{b}}}^{i}_{ t}-{\mathfrak{b}}^{i}_{t}) u_{t}D_{i} \eta_{ t}+(\hat{c}_{ t}-c_{t})u_{t}\eta_{t} +V^{0}_{t},$$ $$\hat{f}^{i}_{ t}=f^{i}_{t}\eta_{ t}- a^{ij}_{t}u_{t}D_{j}\eta_{ t}+({\mathfrak{b}}^{i}_{t}- \hat{{\mathfrak{b}}}^{i}_{ t}) u_{ t}\eta_{ t}+V^{i}_{t},\quad i=1 ,..,d,$$ $$\hat{g}^{k}_{ t}= -\sigma^{ik}u_{t}D_{i}\eta_{ t}+g^{k}_{t}\eta_{ t}.$$ It follows by Lemma \[lemma 3.11.1\] that for $\lambda>0$ $$\|v \sqrt{\hat{c}+\lambda}\|^{2}_{{\mathbb{L}}_{2}(\tau)}+ \|Dv \|^{2}_{{\mathbb{L}}_{2}(\tau)}\leq N\lambda^{-1}\|\hat{f}^{0} \|^{2}_{{\mathbb{L}}_{2}(\tau)}$$ $$\label{3.13.1} +N\big( \sum_{i=1}^{d}\|\hat{f}^{i} \|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|\hat{g} \|^{2}_{{\mathbb{L}}_{2}(\tau)}+E\|v_0\|^{2}_{{\mathcal{L}}_{2}}\big).$$ Recall that here and below by $N$ we denote generic constants depending only on $d,\delta$, and $K$. Now we start estimating the right-hand side of . First we deal with $\hat{f}^{i}_{ t}$ and $\hat{g}^{k}_{ t}$. Recall and observe that obviously, if $\eta_{ t} (x)\ne0$, then $|x -x_{ t}|\leq\rho_{0}$. Therefore, $$\label{3.14.02} \|\hat{g} \|^{2}_{{\mathbb{L}}_{2}(\tau)} \leq N^{*} \|u \chi^{(1)}_{\cdot}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +N\|g \chi^{(1)}_{\cdot}\|^{2}_{{\mathbb{L}}_{2}(\tau)}$$ (we remind the reader that by $N^{*}$ we denote generic constants depending only on $d,\delta, K$, and $\rho_{0}$). 
By Corollary \[corollary 6.27.2\] $$\label{3.14.1} \| ({\mathfrak{b}}^{i}_{t}- \hat{{\mathfrak{b}}}^{i}_{ t}) u_{ t}\eta_{ t}\|^{2}_{{\mathcal{L}}_{2}} \leq N \gamma^{2/q}(\rho_{0}^{2(1-d/q)}\|\chi^{(1)}_{t} Du_{t} \|^{2}_{{\mathcal{L}}_{2}}+ \rho_{0}^{-2d/q}\|\chi^{(1)}_{t} u_{t}\|^{2}_{{\mathcal{L}}_{2}}).$$ Here $\rho_{0}^{2(1-d/q)}\leq1$ since $q\geq d$. By adding that $$\|a^{ij} u D_{j}\eta \|^{2} _{{\mathbb{L}}_{2}(\tau)}\leq N^{*} \|\chi^{(1)}_{\cdot}u \|^{2}_{{\mathbb{L}}_{2}(\tau)},$$ we derive from , , and that $$\sum_{i=1}^{d}\|\hat{f}^{i} \|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|\hat{g} \|^{2}_{{\mathbb{L}}_{2}(\tau)} \leq N\big(\sum_{i=1}^{d}\|\chi^{(1)}_{\cdot}f^{i} \|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|\chi^{(1)}_{\cdot}g \|^{2}_{{\mathbb{L}}_{2}(\tau)}\big)$$ $$\label{3.14.4} +N\gamma ^{2/q} \| \chi^{(1)}_{\cdot} Du \|_{{\mathbb{L}}_{2}(\tau)}^{2}+ N^{*}\| \chi^{(1)}_{\cdot} u \|_{{\mathbb{L}}_{2}(\tau)}^{2}.$$ While estimating $\hat{f}^{0}$ we use again and observe that we can deal with $( \hat{{\mathfrak{b}}}^{i}_{ t}-{\mathfrak{b}}^{i}_{t}) u_{t}D_{i} \eta_{ t}$ as in this time without paying much attention to the dependence of our constants on $\rho_{0}$ and obtain that $$\|( \hat{{\mathfrak{b}}}^{i} -{\mathfrak{b}}^{i} ) u D_{i} \eta \|_{{\mathbb{L}}_{2}(\tau)}^{2} \leq N^{*}(\|\chi^{(1)}_{\cdot} Du \|_{{\mathbb{L}}_{2}(\tau)}^{2}+\|\chi^{(1)}_{\cdot}u\|_{{\mathbb{L}}_{2}(\tau)}^{2}).$$ By estimating also roughly the remaining terms in $\hat{f}^{0}$ and combining this with and , we see that the left-hand side of is less than the right-hand side of . 
However, $$|\chi^{(2)}_{t}Du_{t}|\leq|\eta_{t}Du_{t}|\leq |Dv_{t}|+|u_{t}D\eta_{t}|\leq|Dv_{t}|+ N\rho_{0}^{-1}|u_{t} \chi^{(1)}_{t}|$$ and also $$|\chi^{(2)}_{t}u_{t}|^{2}(c_{t}+\lambda)\leq |\eta_{t} u_{t}|^{2}(c_{t}+\lambda) \leq |v_{t}|^{2}(\hat{c}_{t}+\lambda)+|\eta_{t} u_{t}|^{2}(1+|c_{t} -\hat{c}_{t}|^{2}).$$ By combining this with the fact that by Corollary \[corollary 6.27.2\] $$\|( \hat{c} -c ) u \eta \|_{{\mathbb{L}}_{2}(\tau)}^{2} \leq N\gamma^{2/q}\|\chi^{(1)}_{\cdot} Du \|_{{\mathbb{L}}_{2}(\tau)}^{2}+N^{*} \|\chi^{(1)}_{\cdot} u\|_{{\mathbb{L}}_{2}(\tau)}^{2}$$ we obtain . The lemma is proved. Next, from the result giving “local” in space estimates we derive global in space estimates but for functions having, roughly speaking, small “future” support in the time variable. \[lemma 3.14.3\] Assume that $u $ is a solution of with some $f^{j},g\in{\mathbb{L}}_{2}(\tau)$ and assume that $u_{t}=0$ if $t_{0}+\kappa_{1}^{-1} \leq t\leq\tau$ with $\kappa_{1}=\kappa_{1}(\gamma ,d,\rho_{0})\geq1$ introduced before and some (nonrandom) $t_{0}\geq 0$ (nothing is required for those $\omega$ for which $\tau<t_{0}+\kappa_{1}^{-1}$). 
Then for $\lambda>0$ and $I_{t_{0}}:=I_{[t_{0},\infty)}$ $$\| I_{t_{0}}u\sqrt{c+\lambda }\|^{2}_{{\mathbb{L}}_{2}(\tau)}+ \|I_{t_{0}} Du\|^{2}_{{\mathbb{L}}_{2}(\tau)} \leq N\big(\sum_{i=1}^{d}\| I_{t_{0}}f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|I_{t_{0}} g\|^{2}_{{\mathbb{L}}_{2}(\tau)}\big)$$ $$+N\lambda^{-1 }\|I_{t_{0}} f^{0}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +N E\|u_{t_{0}}I_{t_{0}\leq\tau}\|^{2}_{{\mathcal{L}}_{2}}$$ $$+N\gamma ^{2/q} \| I_{t_{0}} Du\|_{{\mathbb{L}}_{2}(\tau)}^{2} + N^{*} \lambda^{-1}\| I_{t_{0}} Du\|_{{\mathbb{L}}_{2}(\tau)}^{2}$$ $$\label{3.14.5} + N^{*}(1+\lambda^{-1})\| I_{t_{0}} u\|_{{\mathbb{L}}_{2}(\tau)}^{2} +N^{*}\lambda^{-1}\sum_{i=1}^{d}\| I_{t_{0}}f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)},$$ where and below in the proof by $N$ we denote generic constants depending only on $d,\delta$, and $K$ and by $N^{*}$ constants depending only on the same objects and $\rho_{0}$. Proof. Take $x_{0}\in{\mathbb{R}}^{d}$ and use the notation introduced before Lemma \[lemma 3.14.1\]. One knows that for each $t\geq t_{0}$, the mapping $x_{0}\to x_{t_{0},x_{0},t} $ is a diffeomorphism with Jacobian determinant given by $$\bigg|\frac{\partial x_{t_{0},x_{0},t} }{ \partial x_{0}}\bigg| =\exp\big(-\int_{t_{0}}^{t}\sum_{i=1}^{d} D_{i} [\bar{{\mathfrak{b}}} _{ s}^{i}+\bar{ b} _{ s}^{i}] ( x_{t_{0},x_{0},s}) \,ds\big).$$ By the way the constant $\kappa_{1}$ is introduced, we have $$e^{-N\kappa_{1}(t-t_{0})}\leq \bigg|\frac{\partial x_{t_{0},x_{0},t}}{ \partial x_{0}}\bigg| \leq e^{N\kappa_{1}(t-t_{0})},$$ where $N$ depends only on $d$. 
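The Jacobian formula above is the classical Liouville identity: for a flow $\dot{y}_{t}=F_{t}(y_{t})$ one has $\frac{d}{dt}\det\frac{\partial y_{t}}{\partial y_{0}}=({\operatorname{div}}\,F_{t})(y_{t})\det\frac{\partial y_{t}}{\partial y_{0}}$, and here $F_{t}=-(\bar{{\mathfrak{b}}}_{t}+\bar{b}_{t})$, so that (a side derivation, included for convenience)

```latex
\det\frac{\partial x_{t_{0},x_{0},t}}{\partial x_{0}}
 =\exp\Big(-\int_{t_{0}}^{t}\sum_{i=1}^{d}
   D_{i}[\bar{{\mathfrak{b}}}^{i}_{s}+\bar{b}^{i}_{s}]
   (x_{t_{0},x_{0},s})\,ds\Big).
% Since |D_i \bar{\frak b}^i_s| and |D_i \bar b^i_s| are bounded by kappa_1,
% the exponent is at most 2 d kappa_1 (t - t_0) in absolute value, which
% yields the two-sided bound e^{\pm N kappa_1 (t - t_0)} with N = N(d).
```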
Therefore, for any nonnegative Lebesgue measurable function $w(x)$ it holds that $$e^{-N\kappa_{1}(t-t_{0})} \int_{{\mathbb{R}}^{d}}w(y)\,dy\leq \int_{{\mathbb{R}}^{d}}w(x_{t_{0},x_{0},t})\,dx_{0}\leq e^{N\kappa_{1}(t-t_{0})}\int_{{\mathbb{R}}^{d}}w(y)\,dy .$$ In particular, since $$\int_{{\mathbb{R}}^{d}}|\chi^{(i)}_{t_{0},x_{0},t}( x)|^{2}\,dx_{0}= \int_{{\mathbb{R}}^{d}}|\chi^{(i)}(x-x_{t_{0},x_{0},t})|^{2}\,dx_{0} ,$$ we have $$e^{-N\kappa_{1}(t-t_{0})}=N^{*}_{i} e^{-N\kappa_{1}(t-t_{0})} \int_{{\mathbb{R}}^{d}}|\chi^{(i)}(x-y)|^{2}\,dy$$ $$\leq N^{*}_{i} \int_{{\mathbb{R}}^{d}}|\chi^{(i)}_{t_{0},x_{0},t}( x)|^{2}\,dx_{0} \leq N^{*}_{i} e^{N\kappa_{1}(t-t_{0})} \int_{{\mathbb{R}}^{d}}|\chi^{(i)}(x-y)|^{2}\,dy=e^{N\kappa_{1}(t-t_{0})} ,$$ where $N^{*}_{i}=|B_{1}|^{-1} \rho_{0}^{-d}i^{d}$ and $|B_{1}|$ is the volume of $B_{1}$. It follows that $$\int_{{\mathbb{R}}^{d}}|\chi^{(1)}_{t_{0},x_{0},t}( x)|^{2}\,dx_{0} \leq (N^{*}_{1})^{-1}e^{N\kappa_{1}(t-t_{0})},$$ $$(N^{*}_{2})^{-1}e^{-N\kappa_{1}(t-t_{0})}\leq \int_{{\mathbb{R}}^{d}}|\chi^{(2)}_{t_{0},x_{0},t}( x)|^{2}\,dx_{0}.$$ Furthermore, since $u_{t}=0$ if $\tau\geq t\geq t_{0}+\kappa^{-1}_{1}$ and $\chi^{(i)}_{t_{0},x_{0},t}=0$ if $t< t_{0}$, in evaluating the norms in we need not integrate with respect to $t$ such that $\kappa_{1}(t-t_{0})\geq 1$, so that for all $t$ really involved we have $$\int_{{\mathbb{R}}^{d}}|\chi^{(1)}_{t_{0},x_{0},t}( x)|^{2}\,dx_{0} \leq (N^{*}_{1})^{-1}e^{N },\quad (N^{*}_{2})^{-1}e^{-N }\leq \int_{{\mathbb{R}}^{d}}|\chi^{(2)}_{t_{0},x_{0},t}( x)|^{2}\,dx_{0}.$$ After this observation it only remains to integrate through with respect to $x_{0}$ and use the fact that $N^{*}_{1}=2^{-d}N^{*}_{2}$. The lemma is proved. [**Proof of Theorem \[theorem 3.11.1\]**]{}. First we show how to choose $\gamma =\gamma (d,\delta,K)>0$. Call $N_{0}$ the constant factor of $\gamma ^{2/q} \| I_{t_{0}} Du\|_{{\mathbb{L}}_{2}(\tau)}^{2}$ in . 
We know that $N_{0}=N_{0}(d,\delta,K)$ and we choose $\gamma \in(0,1]$ so that $N_{0}\gamma ^{2/q}\leq1/2$. Then under the conditions of Lemma \[lemma 3.14.3\] for $\lambda\geq1$ we have $$\|I_{t_{0}} u\sqrt{c+\lambda }\|^{2}_{{\mathbb{L}}_{2}(\tau)}+ \|I_{t_{0}} Du\|^{2}_{{\mathbb{L}}_{2}(\tau)} \leq N\big(\sum_{i=1}^{d}\| I_{t_{0}}f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|I_{t_{0}} g\|^{2}_{{\mathbb{L}}_{2}(\tau)}\big)$$ $$+N\lambda^{-1 }\|I_{t_{0}} f^{0}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +NE\|u_{t_{0}}I_{t_{0}\leq\tau}\|^{2}_{{\mathcal{L}}_{2}} + N^{*} \lambda^{-1}\| I_{t_{0}} Du\|_{{\mathbb{L}}_{2}(\tau)}^{2}$$ $$\label{3.14.6} + N^{*}\| I_{t_{0}} u\|_{{\mathbb{L}}_{2}(\tau)}^{2} +N^{*}\lambda^{-1}\sum_{i=1}^{d}\| I_{t_{0}}f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)}.$$ After $\gamma $ has been fixed we have $\kappa_{1}=\kappa_{1} (d,\delta,K,\rho_{0})$ and we take a $\zeta\in C^{\infty}_{0}({\mathbb{R}})$ with support in $(0,\kappa_{1}^{-1})$ such that $$\label{3.15.1} \int_{-\infty}^{\infty}\zeta^{2}(t)\,dt=1.$$ For $s\in{\mathbb{R}}$ define $\zeta^{s}_{t}=\zeta(t-s)$, $u^{s}_{t}( x)=u_{t}(x)\zeta^{s}_{t}$. Obviously $u^{s}_{t}=0$ if $s_{+}+\kappa_{1}^{-1}\leq t\leq\tau$. 
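The support claim can be verified directly (a one-line check): since $\zeta$ is supported in $(0,\kappa_{1}^{-1})$,

```latex
\zeta^{s}_{t}=\zeta(t-s)\ne0
 \quad\Longrightarrow\quad s<t<s+\kappa_{1}^{-1}.
% If s >= 0 then s_+ = s, and t >= s_+ + kappa_1^{-1} forces zeta^s_t = 0;
% if s < 0 then s_+ = 0, and t >= kappa_1^{-1} > s + kappa_1^{-1} again
% forces zeta^s_t = 0. In both cases u^s_t = u_t zeta^s_t = 0 there.
```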
Therefore, we can apply to $u^{s}_{t}$ with $t_{0}=s_{+}$ observing that $$du^{s}_{t}=(\sigma^{ik}_{t}D_{i}u^{s}_{t}+\nu^{k}_{t}u^{s}_{t} +\zeta^{s}_{t}g^{k}_{t})\,dw^{k}_{t}$$ $$+\big(D_{i}(a^{ij}_{t}D_{j}u^{s}_{t} +{\mathfrak{b}}^{i}_{t}u^{s}_{t})+b^{i}_{t}D_{i}u^{s}_{t}-(c_{t}+\lambda) u^{s}_{t}+D_{i}(\zeta^{s}_{t}f^{i}_{t})+\zeta^{s}_{t}f^{0}_{t}+(\zeta^{s}_{t})'u_{t}\big)\,dt.$$ Then from for $\lambda\geq1$ we obtain $$\|I_{s_{+}}\zeta^{s} u\sqrt{c+\lambda }\|^{2}_{{\mathbb{L}}_{2}(\tau)}+ \|I_{s_{+}}\zeta^{s} Du\|^{2}_{{\mathbb{L}}_{2}(\tau)}$$ $$\leq N\big(\sum_{i=1}^{d}\| I_{s_{+}} \zeta^{s} f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|I_{s_{+}}\zeta^{s} g\|^{2}_{{\mathbb{L}}_{2}(\tau)}\big)$$ $$+N\lambda^{-1 }\big(\|I_{s_{+}}\zeta^{s}f^{0}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\|I_{s_{+}}(\zeta^{s} )'u \|^{2}_{{\mathbb{L}}_{2}(\tau)}\big) +NE\|u_{s_{+}}\zeta^{s}_{s_{+}}I_{s_{+}\leq\tau}\|^{2}_{{\mathcal{L}}_{2}}$$ $$\label{3.14.7} + N^{*} \lambda^{-1}\|I_{s_{+}}\zeta^{s}Du\|_{{\mathbb{L}}_{2}(\tau)}^{2} + N^{*}\| I_{s_{+}}\zeta^{s} u\|_{{\mathbb{L}}_{2}(\tau)}^{2} +N^{*}\lambda^{-1}\sum_{i=1}^{d}\| I_{s_{+}}\zeta^{s} f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)}.$$ Here $I_{s_{+}}$ can be dropped since $I_{s_{+}}I_{[0,\tau)}=I_{s}I_{[0,\tau)}$ and $I_{s}\zeta^{s}=\zeta^{s}$. 
After dropping $I_{s_{+}}$ we integrate through with respect to $s\in{\mathbb{R}}$, use , and observe that, since $\kappa_{1}$ depends only on $d,\delta,K,\rho_{0}$, we have $$\int_{-\infty}^{\infty}|\zeta'(s)|^{2}\,ds=N^{*}.$$ We also use the fact that $\zeta^{s}_{s_{+}}\ne0$ only if $s_{+}=0$ and $-\kappa_{1}^{-1}\leq s\leq 0$ whereas $$\int_{-\kappa_{1}^{-1}}^{0}(\zeta^{s}_{0})^{2}\,ds=1.$$ Then we conclude $$\lambda \| u\|^{2}_{{\mathbb{L}}_{2}(\tau)}+\| u\sqrt{c} \|^{2}_{{\mathbb{L}}_{2}(\tau)}+ \| Du\|^{2}_{{\mathbb{L}}_{2}(\tau)}$$ $$\leq N_{1}\big(\sum_{i=1}^{d}\| f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\| g\|^{2}_{{\mathbb{L}}_{2}(\tau)}+E\|u_{0}\|^{2}_{{\mathcal{L}}_{2}}\big)$$ $$+N_{1}\lambda^{-1 }\big(\| f^{0}\|^{2}_{{\mathbb{L}}_{2}(\tau)} +\| u \|^{2}_{{\mathbb{L}}_{2}(\tau)}\big) + N_{1}^{*} \lambda^{-1}\| Du\|_{{\mathbb{L}}_{2}(\tau)}^{2}$$ $$+ N_{1}^{*}\| u\|_{{\mathbb{L}}_{2}(\tau)}^{2} +N_{1}^{*}\lambda^{-1}\sum_{i=1}^{d}\| f^{i}\|^{2}_{{\mathbb{L}}_{2}(\tau)}.$$ Without losing generality we assume that $N_{1}\geq1$ and we show how to choose $\lambda_{0}=\lambda_{0}( d,\delta,K,\rho_{0})$. We take it so that $\lambda_{0}\geq 4N^{*}_{1} $, $\lambda_{0}^{2}\geq 4N_{1}$. Then we obviously come to with $N=4N_{1}$. The theorem is proved. Proof of Theorem \[theorem 3.16.1\] =================================== \[section 6.9.5\] We may assume in this section that ${\mathcal{F}}_{t}={\mathcal{F}}_{t+}$ for all $t\in{\mathbb{R}}_{+}$. This does not restrict generality because replacing ${\mathcal{F}}_{t}$ with ${\mathcal{F}}_{t+}$ makes our assumptions weaker and does not affect our assertions because the solutions are continuous in time. Furthermore, having in mind setting all data equal to zero for $t>\tau$, we see that without loss of generality we may assume that $\tau=\infty$. 
Set $${\mathbb{L}}_{2}={\mathbb{L}}_{2}( \infty),\quad {\mathbb{W}}^{1}_{2}={\mathbb{W}}^{1}_{2}( \infty),\quad {\mathcal{W}}^{1}_{2}={\mathcal{W}}^{1}_{2}( \infty).$$ We need a few auxiliary results. \[lemma 6.28.1\] For any $T,R\in{\mathbb{R}}_{+}$, and $\omega\in\Omega$ we have $$\label{6.28.5} \sup_{t\leq T}\int_{B_{R}}(|{\mathfrak{b}}_{t}(x)|^{q} +|b_{t}(x)|^{q}+ c^{q}_{t}(x) ) \,dx<\infty.$$ Proof. Obviously it suffices to prove with $B_{\rho_{0}}(x_{0})$ in place of $B_{R}$ for any $x_{0}$. In that case, for instance, $$\int_{B_{\rho_{0}} (x_{0})} |{\mathfrak{b}}_{t}(x) |^{q}\,dx\leq 2^{q} \int_{B_{\rho_{0}}(x_{0})} |{\mathfrak{b}}_{t}(x)-\bar{{\mathfrak{b}}}_{t}(x_{0})|^{q}\,dx +N|\bar{{\mathfrak{b}}}_{t}(x_{0})|^{q}$$ and we conclude estimating the left-hand side as in also relying on Assumption \[assumption 3.16.1\]. Similarly, $b_{t}$ and $c_{t}$ are treated. The lemma is proved. \[lemma 3.16.1\] For any $R\in{\mathbb{R}}_{+}$ there exists a sequence of stopping times $\tau_{n}\uparrow\infty$ such that for any $n=1,2,...$ and $\omega$ for almost any $t\leq\tau_{n}$ we have $$\label{4.19.1} \int_{B_{R}}(|{\mathfrak{b}}_{t}|^{q}+|b_{t}|^{q}+|c_{t}|^{q}) \,dx\leq n.$$ Proof. For each $t,R>0$, and $\omega$ define $$\beta_{t,R}= \int_{B_{R}}(|{\mathfrak{b}}_{t}|^{q} +|b_{t}|^{q}+|c_{t}|^{q})\,dx,$$ $$\psi_{t,R}={\operatornamewithlimits{\overline{lim}}}_{\substack{0\leq s_{1}<s_{2}\leq t,\\ s_{2}-s_{1}\to0}}\frac{1}{s_{2}-s_{1}}\int_{s_{1}}^{s_{2}} \beta_{s,R}\,ds.$$ As is easy to see, $\psi_{t,R}$ is an increasing, left-continuous, and ${\mathcal{F}}_{t}$-adapted process. It follows that $$\tau_{n}:=\inf\{t\geq0:\psi_{t,R}> n\}$$ are stopping times with respect to ${\mathcal{F}}_{t+}$ ($={\mathcal{F}}_{t}$) and $\psi_{t,R}\leq n$ for $t< \tau_{n}$. Furthermore, by Lemma \[lemma 6.28.1\] we have $\tau_{n}\uparrow\infty$ as $n\to\infty$. By Lebesgue differentiation theorem we conclude that (for any $\omega$) for almost all $t\leq\tau_{n}$ we have . 
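For completeness, the stopping-time property of $\tau_{n}$ can be checked directly from the monotonicity and adaptedness of $\psi_{\cdot,R}$: since $\psi_{t,R}$ is increasing in $t$,

```latex
\{\tau_{n}<t\}
 =\bigcup_{s<t}\{\psi_{s,R}>n\}
 =\bigcup_{r\in\mathbb{Q}\cap[0,t)}\{\psi_{r,R}>n\}
 \in{\mathcal{F}}_{t},
% hence {tau_n <= t} = \bigcap_m {tau_n < t + 1/m} lies in F_{t+},
% which equals F_t by the standing assumption of this section.
```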
This proves the lemma. By combining this lemma with Lemma \[lemma 6.27.1\] we obtain the following. \[corollary 6.12.1\] If $\psi\in C^{\infty}_{0}$ has support in $B_{R}$, then for $\tau_{n}$ from Lemma \[lemma 3.16.1\] for each $n=1,2,...$, for almost all $t\leq\tau_{n}$, for any $u\in W^{1}_{2}$ and $v\in W^{1}_{2} $ we have $$|({\mathfrak{b}}^{i}_{t}D_{i}(v\psi),u)|\leq N\|v\|_{W^{1}_{2}} \|u\|_{W^{1}_{2}},\quad |(b^{i}_{t}D_{i}u,v\psi )|\leq N\|v\|_{W^{1}_{2}} \|u\|_{W^{1}_{2}},$$ $$\label{6.12.4} |(c_{t}v\psi,u)|\leq N\|v\|_{{\mathcal{L}}_{2}} \|u\|_{W^{1}_{2}},$$ where the constant $N=N(n,d)$. Since bounded linear operators are continuous we obtain the following. \[corollary 3.23.1\] If $\phi\in C^{\infty}_{0}$ has support in $B_{R}$, then for $\tau_{n}$ from Lemma \[lemma 3.16.1\] and each $n$ the operators $$u_{\cdot}\to (b^{i}_{\cdot}D_{i}u_{\cdot},\phi),\quad u_{\cdot}\to ({\mathfrak{b}}^{i}_{\cdot}u_{\cdot},D_{i}\phi), \quad u_{\cdot}\to (c_{\cdot}u_{\cdot}, \phi),$$ $$u_{\cdot}\to \int_{0}^{\cdot} (b^{i}_{t}D_{i}u_{t},\phi)\,dt,\quad u_{\cdot}\to \int_{0}^{\cdot} ({\mathfrak{b}}^{i}_{t}u_{t},D_{i}\phi)\,dt,\quad u_{\cdot}\to \int_{0}^{\cdot}(c_{t}u_{t}, \phi)\,dt$$ are continuous as operators from ${\mathbb{W}}^{1}_{2}$ to ${\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,n\wedge\tau_{n}{\text{$]$\kern-.15em$]$}})= {\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,n\wedge\tau_{n}{\text{$]$\kern-.15em$]$}},{\mathbb{R}})$. In the proof of Theorem \[theorem 3.16.1\] we are going to use sequences which converge weakly in ${\mathbb{W}}^{1}_{2}$. Therefore, the following result is relevant. \[lemma 3.16.5\] Assume that for some $f^{j}\in{\mathbb{L}}_{2}$, $j=0,...,d$, $g=(g^{k})\in{\mathbb{L}}_{2}$, $u\in{\mathbb{W}}^{1}_{2}$, and any $\phi\in C^{\infty}_{0}$ equation with $u_{0}\in{\mathcal{L}}_{2}(\Omega,{\mathcal{F}}_{0}, {\mathcal{L}}_{2})$ holds [*for almost all*]{} $(\omega,t)$. 
Then there exists a function $\tilde{u}\in{\mathcal{W}}^{1}_{2}$ solving equation (for all $t$) with initial data $u_{0}$ in the sense of Definition \[definition 3.20.01\]. Proof. We split the proof into two steps. [*Step 1. Modifying $u_{t}\psi$*]{}. We recall some facts from the theory of Itô stochastic integrals in a separable Hilbert space, say $H$ and some other results, which can be found, for instance, in [@KR] and [@Anal]. Integrating $H$-valued processes with respect to a one-dimensional Wiener process presents no difficulties and leads to strongly continuous $H$-valued locally square-integrable martingales with natural isometry. If $g=(g^{k})\in{\mathbb{L}}_{2}$, then by Doob’s inequality $$E\sup_{t}\big\|\sum_{k=n}^{m} \int_{0}^{t} g^{k}_{s}\,dw^{k}_{s}\big\|^{2}_{{\mathcal{L}}_{2} } \leq 4E \int_{0}^{\infty} \sum_{k=n}^{m}\| g^{k}_{s} \|^{2}_{{\mathcal{L}}_{2} }\,ds\to0$$ as $m\geq n\to\infty$. Therefore, $$m_{t}=\sum_{k=1}^{\infty}\int_{0}^{t} g^{k}_{s}\,dw^{k}_{s}$$ is well defined as a continuous ${\mathcal{L}}_{2}$-valued square-integrable martingale. Furthermore, for any $\phi\in C^{\infty}_{0}$ with probability one we have $$(m_{t},\phi)=\sum_{k=1}^{\infty}\int_{0}^{t} (g^{k}_{s},\phi)\,dw^{k}_{s}$$ for all $t$ and the series on the right converges uniformly in probability on ${\mathbb{R}}_{+}$. If $g\in{\mathbb{L}}_{2} (\tau_{n})$, $n=1,2,...$, and stopping times $ \tau_{n}\uparrow\infty$, then $$m_{t}=\sum_{k=1}^{\infty}\int_{0}^{t} g^{k}_{s}\,dw^{k}_{s}$$ is well defined as a locally square-integrable ${\mathcal{L}}_{2}$-valued continuous martingale. Again for any $\phi\in C^{\infty}_{0}$ with probability one we have $$\label{3.23.1} (m_{t},\phi)=\sum_{k=1}^{\infty}\int_{0}^{t} (g^{k}_{s},\phi)\,dw^{k}_{s}$$ for all $t$ and the series on the right converges uniformly in probability on every finite interval of time. 
We fix a $\psi \in C^{\infty}_{0}$ and apply the above to $$h^{\psi}_{t}:=\sum_{k=1}^{\infty}\int_{0}^{t} \psi( \sigma^{ik}_{s}D_{i}u_{s}+\nu^{k}_{s}u_{s}+ g^{k}_{s})\,dw^{k}_{s}.$$ Observe that, by assumption, for any $v\in C^{\infty}_{0}$ for almost all $(\omega,t) $ $$\label{3.17.3} (u_{t}\psi,v)=(u_{0}\psi,v)+\int_{0}^{t}\langle F_{s},v\rangle\,ds +(h^{\psi}_{t},v),$$ where $$\langle F_{t},v\rangle= (b^{i}_{t}D_{i}u_{t} -(c_{t}+\lambda)u_{t}+f^{0}_{t},v\psi) -(a^{ij}_{t}D_{j}u_{t}+{\mathfrak{b}}^{i}_{t}u_{t}+ f^{i}_{t},D_{i}(v\psi)).$$ We also define $V=W^{1}_{2}$, and notice that if $\|v\|_{V}\leq1$, then by Corollary \[corollary 6.12.1\] for any $T\in{\mathbb{R}}_{+}$ for almost any $(\omega,t)\in\Omega\times[0,T]$ we have $$|\langle F_{t},v\rangle|\leq N \big( \sum_{j=0}^{d}\|f^{j}_{t}\|_{{\mathcal{L}}_{2}} +\|u_{t}\|_{W^{1}_{2}}\big),$$ where $N$ is independent of $v,t$ (but may depend on $\omega$ and $T$). It follows that, for $V^{*}$ defined as the dual of $V$, the $V^{*}$-norm of $F_{t}$ is in ${\mathcal{L}}_{2}([0,T])$ (a.s.) for every $T\in{\mathbb{R}}_{+}$. It also follows that holds for almost all $(\omega,t)$ for each $v\in V$ rather than only for $v\in C^{\infty}_{0}$. By Theorem 3.1 of [@KR] there exists a set $\Omega_{\psi}$ of full probability and an ${\mathcal{L}}_{2}$-valued function $\tilde{u}^{\psi}_{t}$ on $\Omega\times{\mathbb{R}}_{+}$ such that $\tilde{u}^{\psi}_{t} $ is ${\mathcal{F}}_{t}$-measurable, $\tilde{u}^{\psi}_{t}$ is ${\mathcal{L}}_{2}$-continuous in $t$ for every $\omega$ and $\tilde{u}^{\psi}_{t}=u_{t}\psi$ for almost all $(\omega,t)$. Furthermore, for $\omega\in\Omega_{\psi}$, $t\geq0$, and $\phi\in C^{\infty}_{0}$ we have $$(\tilde{u}^{\psi}_{t},\phi)=(h^{\psi}_{t},\phi) +\int_{0}^{t} (b^{i}_{s}D_{i}u_{s} -(c_{s}+\lambda)u_{s}+f^{0}_{s},\phi\psi)\,ds$$ $$\label{3.19.3} -\int_{0}^{t} \big(a^{ij}_{s}D_{j}u_{s}+{\mathfrak{b}}^{i}_{s}u_{s}+ f^{i}_{s},D_{i}(\phi\psi)\big)\,ds.$$ [*Step 2. Constructing $\tilde{u}_{t}$*]{}. 
Let $\psi\in C^{\infty}_{0}$ be such that $\psi=1$ on $B_{1}$ and set $\psi_{n}(x)=\psi(x/n)$, $n=1,2,...$. Define $\tilde{u}^{n}_{t}= \tilde{u}^{\psi_{n}}_{t}$ and notice that by the above for $m\geq n$ and almost all $(\omega,t)$ $$\tilde{u}^{m}_{t}I_{B_{n}}=u_{t}\psi_{m}I_{B_{n}} =u_{t}I_{B_{n}}=\tilde{u}^{n}_{t}I_{B_{n}}$$ as ${\mathcal{L}}_{2}$-elements. Since the extreme terms are ${\mathcal{L}}_{2}$-continuous functions of $t$, there exist sets $\Omega_{nm}$, $m\geq n$, of full probability such that for $\omega\in\Omega_{nm}$ we have $\tilde{u}^{m}_{t}I_{B_{n}}=\tilde{u}^{n}_{t} I_{B_{n}}$ as ${\mathcal{L}}_{2}$-elements for all $t$. Then for $t\geq0$ and $\omega\in\Omega':=\bigcap_{m\geq n}\Omega_{nm}$ the formula $$\tilde{u}_{t}=I_{\Omega'} \sum_{n=0}^{\infty}\tilde{u}_{t}^{n+1} I_{B_{n+1}\setminus B_{n}}$$ defines a distribution such that $\tilde{u}_{t}I_{B_{n}}= \tilde{u}^{n}_{t}I_{B_{n}}$ as ${\mathcal{L}}_{2}$-elements for any $\omega\in\Omega'$, $t\geq0$, and $n$. It follows that $\tilde{u}_{t}=u_{t}$ as distributions for almost any $(\omega,t)$, hence, $\tilde{u}\in{\mathbb{W}}^{1}_{2}$ and there exists an event $\Omega''\subset\Omega'$ of full probability such that for any $\omega\in\Omega''$ and almost any $t\geq0$ we have $\tilde{u}_{t}=u_{t}$. Now implies that if $\phi\in C^{\infty}_{0}$ is such that $\phi(x)=0$ for $|x|\geq n$, then for $\omega\in\Omega''\cap\Omega_{\psi_{n}}$ and all $t\geq0$ we have $$(\tilde{u}_{t},\phi)=(\tilde{u}^{n}_{t},\phi) =(h^{\psi_{n}}_{t},\phi) +\int_{0}^{t} (b^{i}_{s}D_{i}\tilde{u}_{s} -(c_{s}+\lambda)\tilde{u}_{s}+f^{0}_{s},\phi)\,ds$$ $$\label{3.19.03} -\int_{0}^{t} \big(a^{ij}_{s}D_{j}\tilde{u}_{s}+ {\mathfrak{b}}^{i}_{s}\tilde{u}_{s}+ f^{i}_{s},D_{i}\phi \big)\,ds.$$ By recalling what was said about and using Corollary \[corollary 6.12.1\], we see that indeed the requirements of Definition \[definition 3.20.01\] are satisfied with $\tilde{u}$ and $\infty$ in place of $u$ and $\tau$, respectively. 
The lemma is proved. \[lemma 3.23.2\] Let $\phi\in C^{\infty}_{0}$ be supported in $B_{R}$ and take $\tau_{n}$ from Lemma \[lemma 3.16.1\]. Let $u^{n}$, $u\in {\mathbb{W}}^{1}_{2}$, $n=1,2,...$, be such that $u^{n}\to u$ weakly in ${\mathbb{W}}^{1}_{2}$. For $n=1,2,...$ define $\chi_{n}(t)=(-n)\vee t\wedge n$, ${\mathfrak{b}}^{i}_{nt}=\chi_{n}({\mathfrak{b}}^{i}_{t})$, $b^{i}_{nt}=\chi_{n}(b^{i}_{t})$ and set $c_{ns}=n\wedge c_{s}$. Then for any $m=1,2,...$ $$\int_{0}^{t}[(b^{i}_{ns}D_{i}u^{n}_{s},\phi) -({\mathfrak{b}}^{i}_{ns}u^{n}_{s},D_{i}\phi)-(c_{ns}u^{n}_{s},\phi)]\,ds$$ $$\label{4.19.6} \to \int_{0}^{t}[(b^{i}_{s}D_{i}u_{s},\phi) -({\mathfrak{b}}^{i}_{s}u_{s},D_{i}\phi)-(c_{ s}u _{s},\phi)]\,ds$$ weakly in the space ${\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m\wedge\tau_{m}{\text{$]$\kern-.15em$]$}})$ as $n\to\infty$ . Proof. By Corollary \[corollary 3.23.1\] and by the fact that (strongly) continuous operators are weakly continuous we obtain that $$\int_{0}^{t}[(b^{i}_{s}D_{i}u^{n}_{s},\phi) -({\mathfrak{b}}^{i}_{s}u^{n}_{s},D_{i}\phi)-(c_{ s}u^{n}_{s},\phi)]\,ds$$ $$\to \int_{0}^{t}[(b^{i}_{s}D_{i}u_{s},\phi) -({\mathfrak{b}}^{i}_{s}u_{s},D_{i}\phi)-(c_{ s}u _{s},\phi)]\,ds$$ as $n\to\infty$ weakly in the space ${\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m\wedge\tau_{m}{\text{$]$\kern-.15em$]$}})$ for any $m$. Therefore, it suffices to show that $$\int_{0}^{t}[(D_{i}u^{n}_{s},(b^{i}_{s}-b^{i}_{ns})\phi) -(u^{n}_{s},({\mathfrak{b}}^{i}_{s}-{\mathfrak{b}}^{i}_{ns})D_{i}\phi +(c_{s}-c_{ns})\phi)]\,ds \to0$$ weakly in ${\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m\wedge\tau_{m}{\text{$]$\kern-.15em$]$}})$ for any $m$. 
In other words, it suffices to show that for any $\xi\in {\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0, m\wedge\tau_{m}{\text{$]$\kern-.15em$]$}})$ $$E\int_{0}^{m\wedge\tau_{m}}\xi_{t} \big(\int_{0}^{t}[(D_{i}u^{n}_{s},(b^{i}_{s}-b^{i}_{ns})\phi)$$ $$-(u^{n}_{s},({\mathfrak{b}}^{i}_{s}-{\mathfrak{b}}^{i}_{ns})D_{i}\phi +(c_{s}-c_{ns})\phi)]\,ds \big)\,dt\to0.$$ This relation is rewritten as $$E\int_{0}^{m\wedge\tau_{m}} [(D_{i}u^{n}_{s},\eta_{s}(b^{i}_{s}-b^{i}_{ns})\phi)$$ $$\label{4.21.1} -(\eta_{s}u^{n}_{s}, ({\mathfrak{b}}^{i}_{s}-{\mathfrak{b}}^{i}_{ns})D_{i}\phi +(c_{s}-c_{ns})\phi)]\,ds\to0,$$ where the process $$\eta_{s}:=\int_{s}^{m\wedge\tau_{m}}\xi_{t}\,dt$$ is of class ${\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m\wedge\tau_{m}{\text{$]$\kern-.15em$]$}})$ since $m\wedge\tau_{m}$ is bounded ($\leq m$). However, by the choice of $\tau_{m}$ and the dominated convergence theorem, $$\eta_{s}({\mathfrak{b}}^{i}_{s}-{\mathfrak{b}}^{i}_{ns})D_{i}\phi\to0,\quad \eta_{s}(b^{i}_{s}-b^{i}_{ns})\phi\to0,\quad \eta_{s}(c_{s}-c_{ns})\phi\to0$$ as $n\to\infty$ strongly in ${\mathbb{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m\wedge\tau_{m}{\text{$]$\kern-.15em$]$}})$ (use the fact that $q\geq2$) and by assumption $u^{ n}\to u$ and $Du^{n}\to Du$ weakly in ${\mathbb{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0, \tau_{m}{\text{$]$\kern-.15em$]$}})$. This implies for any $m$ and the lemma is proved. [**Proof of Theorem \[theorem 3.16.1\]**]{}. Define ${\mathfrak{b}}_{nt}$, $b_{nt}$, and $c_{nt}$ as in Lemma \[lemma 3.23.2\] and consider equation with ${\mathfrak{b}}_{nt}$, $b_{nt}$, and $c_{nt}$ in place of ${\mathfrak{b}}_{t}$, $b_{t}$, and $c_{t}$, respectively, and with $\tau=n$. By a classical result there exists a unique $u^{n}\in{\mathcal{W}}^{1}_{2}(n)$ satisfying the modified equation with initial condition $u_{0}$. 
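The truncated coefficients introduced here are elementary cutoffs. As a side illustration only (ours, not the authors'), the two truncations $\chi_{n}(t)=(-n)\vee t\wedge n$ and $c_{ns}=n\wedge c_{s}$ can be written out as follows.

```python
def chi(n, t):
    """Two-sided truncation chi_n(t) = max(-n, min(t, n))."""
    return max(-n, min(t, n))

def cut_c(n, c):
    """One-sided truncation n ∧ c = min(n, c), applied to the nonnegative c_s."""
    return min(n, c)

assert chi(2, 5.0) == 2      # values above n are clipped to n
assert chi(2, -7.0) == -2    # values below -n are clipped to -n
assert chi(2, 1.5) == 1.5    # values inside [-n, n] are unchanged
assert cut_c(3, 10.0) == 3
```

The point of these cutoffs is that the truncated coefficients are bounded, so the classical solvability theory applies, while the dominated convergence argument above removes the truncation in the limit.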
Obviously, ${\mathfrak{b}}_{nt}$, $b_{nt}$, and $c_{nt}$ satisfy Assumption \[assumption 3.11.1\] with the same $\gamma $ as ${\mathfrak{b}}_{t}$, $b_{t}$, and $c_{t}$ do. By Theorem \[theorem 3.11.1\] for $\lambda\geq\lambda_{0}(d,\delta,K,\rho_{0})$ we have $$\|u^{n}\|_{{\mathbb{L}}_{2}(n)}+\|Du^{n}\|_{{\mathbb{L}}_{2}(n)}\leq N,$$ where $N$ is independent of $n$. Hence the sequence of functions $u^{n}_{t}I_{t\leq n}$ is bounded in the Hilbert space ${\mathbb{W}}^{1}_{2}$ and consequently has a weak limit point $u\in {\mathbb{W}}^{1}_{2}$. For simplicity of presentation we assume that the whole sequence $u^{n}_{t}I_{t\leq n}$ converges weakly to $u$. Take a $\phi\in C^{\infty}_{0}$. Then by Lemma \[lemma 3.23.2\] for appropriate $\tau_{m}$ we have that holds weakly in ${\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m\wedge\tau_{m}{\text{$]$\kern-.15em$]$}})$ for any $m$. Since the map $$u_{\cdot}\to\sum_{k=1}^{\infty} \int_{0}^{t}(\Lambda^{k}_{s}u_{s},\phi)\,dw^{k}_{s}$$ is a continuous operator from ${\mathbb{W}}^{1}_{2}$ to ${\mathbb{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m{\text{$]$\kern-.15em$]$}})$, it is weakly continuous, so that $$\sum_{k=1}^{\infty} \int_{0}^{t}(\Lambda^{k}_{s}u^{n}_{s},\phi)\,dw^{k}_{s} \to \sum_{k=1}^{\infty} \int_{0}^{t}(\Lambda^{k}_{s}u_{s},\phi)\,dw^{k}_{s}$$ weakly in ${\mathcal{L}}_{2}({\text{\raise.2ex\hbox{${\scriptstyle | }$}\kern-.34em$($} }0,m{\text{$]$\kern-.15em$]$}})$ for any $m$. Obviously, the same is true for $(u^{n}_{ t},\phi)\to(u_{t},\phi)$ and the remaining terms entering the equation for $ u^{n}_{ s}$. Hence by passing to the weak limit in the equation for $u^{n}_{ t}$ we see that $u$ satisfies the assumptions of Lemma \[lemma 3.16.5\], an application of which finishes the proof of the theorem. [mm]{} S. Assing and R. Manthey, [*Invariant measures for stochastic heat equations with unbounded coefficients*]{}, Stochastic Process. Appl., Vol. 103 (2003), No. 2, 237-256. P. 
Cannarsa and V. Vespri, [*Generation of analytic semigroups by elliptic operators with unbounded coefficients*]{}, SIAM J. Math. Anal., Vol. 18 (1987), No. 3, 857-872. P. Cannarsa and V. Vespri, [ *Existence and uniqueness results for a nonlinear stochastic partial differential equation*]{}, in Stochastic Partial Differential Equations and Applications Proceedings, G. Da Prato and L. Tubaro (eds.), Lecture Notes in Math., Vol. 1236, pp. 1-24, Springer Verlag, 1987. A. Chojnowska-Michalik and B. Goldys, [*Generalized symmetric Ornstein-Uhlenbeck semigroups in $L^p$: Littlewood-Paley-Stein inequalities and domains of generators*]{}, J. Funct. Anal., Vol. 182 (2001), 243-279. G. Cupini and S. Fornaro, [*Maximal regularity in $L^ p({\mathbb{R}}^N)$ for a class of elliptic operators with unbounded coefficients*]{}, Differential Integral Equations, Vol. 17 (2004), No. 3-4, 259-296. M. Geissert and A. Lunardi, [*Invariant measures and maximal $L\sp 2$ regularity for nonautonomous Ornstein-Uhlenbeck equations*]{}, J. Lond. Math. Soc. (2), Vol. 77 (2008), No. 3, 719-740. B. Farkas and A. Lunardi, [*Maximal regularity for Kolmogorov operators in $L^2$ spaces with respect to invariant measures*]{}, J. Math. Pures Appl., Vol. 86 (2006), 310-321. I. Gyöngy, [*Stochastic partial differential equations on manifolds, I*]{}, Potential Analysis, Vol. 2 (1993), 101-113. I. Gyöngy, [*Stochastic partial differential equations on manifolds, II. Nonlinear filtering*]{}, Potential Analysis, Vol. 6 (1997), 39-56. I. Gyöngy and N.V. Krylov, [ *On stochastic partial differential equations with unbounded coefficients*]{}, Potential Analysis, Vol. 1 (1992), No. 3, 233-256. Kyeong-Hun Kim, [ *On $L_p$-theory of stochastic partial differential equations of divergence form in $C^1$ domains*]{}, Probab. Theory Related Fields, Vol. 130 (2004), No. 4, 473-492. N.V. Krylov, [*An analytic approach to SPDEs*]{}, pp. 
185-242 in Stochastic Partial Differential Equations: Six Perspectives, Mathematical Surveys and Monographs, Vol. 64, AMS, Providence, RI, 1999. N.V. Krylov, [*On linear elliptic and parabolic equations with growing drift in Sobolev spaces without weights*]{}, Problemy Matematicheskogo Analiza, Vol. 40 (2009), 77-90, in Russian; English version in Journal of Mathematical Sciences, Vol. 159 (2009), No. 1, 75-90, Springer. N.V. Krylov, [*On divergence form SPDEs with VMO coefficients*]{}, SIAM J. Math. Anal. Vol. 40 (2009), No. 6, 2262-2285. N.V. Krylov, [*Itô’s formula for the $L_{p}$-norm of stochastic $W^{1}_{p}$-valued processes*]{}, to appear in Probab. Theory Related Fields, http://arxiv.org/abs/0806.1557 N.V. Krylov, [*On the Itô-Wentzell formula for distribution-valued processes and related topics*]{}, submitted to Probab. Theory Related Fields, http://arxiv.org/abs/0904.2752 N.V. Krylov, [*Filtering equations for partially observable diffusion processes with Lipschitz continuous coefficients*]{}, to appear in “The Oxford Handbook of Nonlinear Filtering", Oxford University Press, http://arxiv.org/abs/0908.1935 N.V. Krylov and E. Priola, [*Elliptic and parabolic second-order PDEs with growing coefficients*]{}, to appear in Comm. in PDEs, http://arXiv.org/abs/0806.3100 N.V. Krylov and B.L. Rozovskii, [*Stochastic evolution equations*]{}, pp. 71-146 in “Itogi Nauki i Tekhniki”, Vol. 14, VINITI, Moscow, 1979, in Russian; English translation: J. Soviet Math., Vol. 16 (1981), No. 4, 1233-1277. A. Lunardi, [*Schauder estimates for a class of degenerate elliptic and parabolic operators with unbounded coefficients in ${\mathbb{R}}^n$*]{}, Ann. Sc. Norm. Super Pisa, Ser. IV., Vol. 24 (1997), 133-164. A. Lunardi and V. Vespri, [*Generation of strongly continuous semigroups by elliptic operators with unbounded coefficients in $L^p({\mathbb{R}}^n)$*]{}, Rend. Istit. Mat. Univ. Trieste 28 (1996), suppl., 251-279 (1997). G. Metafune, J. Prüss, A. Rhandi, and R. 
Schnaubelt, [*The domain of the Ornstein-Uhlenbeck operator on an $L^p$-space with invariant measure*]{}, Ann. Sc. Norm. Super. Pisa, Cl. Sci., (5) 1 (2002), 471-485. G. Metafune, J. Prüss, A. Rhandi, and R. Schnaubelt, [*$L\sp p$-regularity for elliptic operators with unbounded coefficients*]{}, Adv. Differential Equations, Vol. 10 (2005), No. 10, 1131-1164. J. Prüss, A. Rhandi, and R. Schnaubelt, [*The domain of elliptic operators on $L^p({\mathbb{R}}^d)$ with unbounded drift coefficients*]{}, Houston J. Math., Vol. 32 (2006), No. 2, 563-576. [^1]: The work was partially supported by NSF grant DMS-0653121
--- abstract: 'The influence of poor solvent quality on fluid demixing of a model mixture of colloids and nonadsorbing polymers is investigated using density functional theory. The colloidal particles are modelled as hard spheres and the polymer coils as effective interpenetrating spheres that have hard interactions with the colloids. The solvent is modelled as a two-component mixture of a primary solvent, regarded as a background theta-solvent for the polymer, and a cosolvent of point particles that are excluded from both colloids and polymers. Cosolvent exclusion favors overlap of polymers, mimicking the effect of a poor solvent by inducing an effective attraction between polymers. For this model, a geometry-based density functional theory is derived and applied to bulk fluid phase behavior. With increasing cosolvent concentration (worsening solvent quality), the predicted colloid-polymer binodal shifts to lower colloid concentrations, promoting demixing. For sufficiently poor solvent, a reentrant demixing transition is predicted at low colloid concentrations.' author: - 'Matthias Schmidt[^1] and Alan R. Denton' bibliography: - 'cpps.bib' date: - 3 October 2001 - 6 October 2001 - 26 October 2001 - 10 November 2001 - 4 December 2001 - 11 December 2001 - 6 March 2002 - 5 April 2002 title: ' Demixing of colloid-polymer mixtures in poor solvents ' --- Introduction ============ Solvents play a crucial role in the thermodynamic behavior of macromolecular solutions. Over the past half-century, effects of solvent quality on the physical properties of polymer solutions have been extensively studied [@flory69; @deGennes79]. Polymer-solvent and solvent-solvent interactions were first incorporated into the classic Flory-Huggins mean-field theory of polymer solutions [@flory71]. Subsequently, excluded-volume interactions between polymer segments were identified as the key determinants of solvent quality. 
Polymer segments sterically repel one another in a good solvent, attract in a poor solvent, and behave as though ideal (noninteracting) in a theta-solvent. Interactions between polymer segments strongly influence chain conformations and, in turn, phase separation and other macroscopic phenomena. Compared to solvent effects in pure polymer solutions, much less is known about the role of solvent quality in colloid-polymer mixtures. The simplest and most widely-studied theoretical model of colloid-polymer mixtures is the Asakura-Oosawa (AO) model [@asakura54; @vrij76]. This treats the colloids as hard spheres and the polymers as effective spheres that are mutually noninteracting but have hard interactions with the colloids. The thermodynamic phase diagram of the AO model has been mapped out by thermodynamic perturbation theory [@gast83], free volume theory [@lekkerkerker92], density functional (DF) theory [@schmidt00cip], and Monte Carlo simulation [@dijkstra99]. By assuming ideal polymers, however, the AO model is implicitly limited to theta-solvents. Recently, by incorporating polymer-polymer repulsion into the AO model, the influence of a good solvent on phase behavior has been explored via perturbation theory [@warren95] and DF theory [@schmidt02intpol]. All of these studies assume an effective penetrable-sphere model for the polymer coils, which is supported by explicit Monte Carlo simulations of interacting segmented-chain polymers [@louis00; @bolhuis01jcp; @bolhuis01pre]. An alternative, more microscopic, theoretical approach is the PRISM integral-equation theory [@ramakrishnan02], which models polymers on the segment level. The purpose of the present paper is to investigate the effect of a [*poor*]{} solvent on the bulk phase behavior of colloid-polymer mixtures. To this end, we consider a variation of the AO model that explicitly includes the solvent as a distinct component. 
Specifically, the solvent is treated as a binary mixture of a primary solvent, which alone acts as a theta-solvent for the polymer, and a cosolvent, which acts as a poor solvent for the polymer. The primary solvent is regarded as a homogeneous background that freely penetrates the polymer, but is excluded by the colloids. The cosolvent is modelled simply as an ideal gas of point-like particles that penetrate neither colloids nor polymers. In the absence of colloids, the polymer-cosolvent subsystem is the Widom-Rowlinson (WR) model of a binary mixture [@widom70; @rowlinson82], in which particles of unlike species interact with hard cores and particles of like species are noninteracting. The WR model can be shown to be equivalent to a one-component system of penetrable spheres that interact via a many-body interaction potential, proportional to the cosolvent pressure and the volume covered by the spheres (with overlapping portions counted only once). Hence, in the polymer-cosolvent subsystem, the volume occupied by the polymer spheres costs interaction energy, inducing an effective attraction between polymers reminiscent of that caused by a poor solvent. By varying cosolvent concentration, the solvent quality can be tuned. Here we investigate whether and how added hard colloidal spheres mix with such effectively interacting polymers. In Sec. \[SECmodels\], we define more explicitly the model colloid-polymer-cosolvent mixture. In Sec. \[SECtheory\], we develop a general geometry-based DF theory, which may be applied to both homogeneous and inhomogeneous states of the model system. The general theory provides the foundation for an application to bulk phase behavior in Sec. \[SECresults\]. Readers who are interested only in bulk properties may wish to skip Sec. \[SECtheory\] and turn directly to Sec. \[SECresults\]. We finish with concluding remarks in Sec. \[SECdiscussion\]. 
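The many-body WR energy just described is proportional to the volume covered by the polymer spheres, with overlapping portions counted only once. As a purely illustrative aside (not part of the model's formalism; function and variable names are ours), this covered volume can be estimated by Monte Carlo sampling and checked against the exact two-sphere union volume.

```python
import math
import random

def union_volume_mc(centers, R, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the volume covered by spheres of radius R
    centered at `centers`, counting overlapping regions only once; this is
    the geometric quantity entering the WR many-body potential."""
    rng = random.Random(seed)
    # Axis-aligned bounding box enclosing all spheres.
    lo = [min(c[k] for c in centers) - R for k in range(3)]
    hi = [max(c[k] for c in centers) + R for k in range(3)]
    box = math.prod(h - l for l, h in zip(lo, hi))
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        if any(sum((p[k] - c[k]) ** 2 for k in range(3)) <= R * R for c in centers):
            hits += 1
    return box * hits / n_samples

# Two overlapping unit spheres at distance d: union = 2V - V_lens, where the
# lens (intersection) volume of equal spheres is (pi/12)(4R + d)(2R - d)^2.
R, d = 1.0, 1.0
exact = 2.0 * (4.0 * math.pi / 3.0) * R**3 \
    - (math.pi / 12.0) * (4.0 * R + d) * (2.0 * R - d) ** 2
est = union_volume_mc([(0.0, 0.0, 0.0), (d, 0.0, 0.0)], R, n_samples=50_000)
assert abs(est - exact) / exact < 0.05
```

The merged ("dimer") configuration covers less volume than two disjoint spheres, which is exactly why cosolvent exclusion rewards polymer overlap.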
The Model {#SECmodels} ========= We consider a ternary mixture of colloidal hard spheres (species $C$) of radius $R_C$, globular polymers (species $P$) of radius $R_P$, and point-like cosolvent particles (species $S$), as illustrated in Fig. \[FIGmodel\]. The respective number densities are $\rho_C({{\bf r}})$, $\rho_P({{\bf r}})$, and $\rho_S({{\bf r}})$, where ${{\bf r}}$ is the spatial coordinate. The primary solvent is regarded as a homogeneous background for the polymer and is not explicitly included. All particles experience only pairwise interactions, $V_{ij}(r)$, $i,j=C,P,S$, where $r$ is the separation distance between particle centers. Colloids behave as hard spheres: $V_{CC}(r)=\infty$, if $r<2R_C$, and zero otherwise. Colloids and polymers interact as hard bodies via $V_{CP}(r)=\infty$, if $r<R_C+R_P$, and zero otherwise, and both exclude cosolvent particles: $V_{CS}(r)=\infty$, if $r<R_C$, $V_{PS}(r)=\infty$, if $r<R_P$, and zero otherwise. The polymers and cosolvent particles behave as ideal gases: $V_{PP}(r)=0$, $V_{SS}(r)=0$, for all $r$. In essence, this is the AO model with additional point particles that cannot penetrate either colloids or polymers. ![Model ternary mixture of colloidal hard spheres of diameter $\sigma_C$, polymer effective spheres of diameter $\sigma_P$, and point-like solvent particles. []{data-label="FIGmodel"}](fig1.ps){width="\columnwidth"} We denote the sphere diameters by $\sigma_C=2R_C$ and $\sigma_P=2R_P$, the bulk packing fractions by $\eta_C=4\pi R_C^3 \rho_C/3$ and $\eta_P=4\pi R_P^3 \rho_P/3$, and define a dimensionless solvent bulk density ${\rho_{S}^{\ast}}=\rho_S \sigma_P^3.$ The polymer-colloid size ratio, $q=\sigma_P/\sigma_C$, is regarded as a control parameter. 
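The reduced variables above follow directly from the definitions $\eta_i=4\pi R_i^3\rho_i/3$ and ${\rho_{S}^{\ast}}=\rho_S\sigma_P^3$. A small illustrative helper (ours, not from the paper):

```python
import math

def packing_fraction(R, rho):
    """eta = (4/3) pi R^3 rho for spheres of radius R at number density rho."""
    return 4.0 * math.pi * R**3 * rho / 3.0

def reduced_solvent_density(rho_S, sigma_P):
    """rho_S* = rho_S * sigma_P^3, with sigma_P = 2 R_P."""
    return rho_S * sigma_P**3

# At rho = 3/(4 pi R^3) the sphere volumes would fill space exactly once (eta = 1).
R = 0.5
rho_close = 3.0 / (4.0 * math.pi * R**3)
assert abs(packing_fraction(R, rho_close) - 1.0) < 1e-12

# Size ratio q = sigma_P / sigma_C; e.g. R_P = 0.25, R_C = 0.5 gives q = 0.5.
q = (2.0 * 0.25) / (2.0 * 0.5)
assert q == 0.5
```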
Density functional theory {#SECtheory} ========================= We develop a geometry-based DF theory for the excess Helmholtz free energy of the model system, expressed as an integral over an excess free energy density, $${F_{\rm exc}}[ \rho_C, \rho_P, \rho_S ] = {k_{\rm B} T}\int {\rm d}^3 x\ {\Phi_{}} \left( \{{n_{\nu}^{i}}\} \right), \label{EQfexc}$$ where ${k_{\rm B}}$ is Boltzmann’s constant, $T$ is absolute temperature, and the (local) reduced excess free energy density, ${\Phi_{}}$, is a simple function (not a functional) of weighted densities, ${n_{\nu}^{i}}$. The weighted densities are smoothed averages of the possibly highly inhomogeneous density profiles, $\rho_i({{\bf r}})$, expressed as convolutions, $${n_{{\nu}}^{i}}({{\bf r}}) = \rho_i({{\bf r}}) * {w_{{\nu}}^{i}}({{\bf r}}) = \int{\rm d}{{\bf r}}'\ \rho_i({{\bf r}}') w_{\nu}^i({{\bf r}}-{{\bf r}}'), \label{EQnfirst}$$ with respect to weight functions, ${w_{{\nu}}^{i}}({{\bf r}})$, where $i=C,P,S$ and $\nu=$0,1,2,3,v1,v2,m2. The usual weight functions [@Rosenfeld89; @tarazona00] are $${w_{2}^{i}}( {{\bf r}}) = \delta( R_i - r), \hspace{3mm} {w_{3}^{i}}( {{\bf r}}) = \theta( R_i - r), \label{EQwsfirst}$$ $${{\bf w}_{{{\rm v}2}}^{i}}( {{\bf r}}) = {w_{2}^{i}}({{\bf r}}) \, \frac{{{\bf r}}}{r}, \hspace{3mm} {{\bf \hat{w}}_{{{\rm m}2}}^{i}}( {{\bf r}}) = {w_{2}^{i}}({{\bf r}}) \left( \frac{{{\bf r}}{{\bf r}}}{r^2} - \frac{{{\hat{\mathbf{1}}}}}{3} \right), \label{EQwssecond}$$ where $r=|{{\bf r}}|$, $\delta(r)$ is the Dirac distribution, $\theta(r)$ is the step function, and ${{\hat{\mathbf{1}}}}$ is the identity matrix. Further linearly dependent weight functions are ${w_{1}^{i}}({{\bf r}}) = {w_{2}^{i}}({{\bf r}})/(4 \pi R), {{\bf w}_{{{\rm v}1}}^{i}}({{\bf r}}) = {{\bf w}_{{{\rm v}2}}^{i}}({{\bf r}})/(4 \pi R)$, and ${w_{0}^{i}}({{\bf r}}) = {w_{1}^{i}}({{\bf r}})/R$. 
The weight functions for $\nu=3,2,1,0$ represent geometrical measures of the particles in terms of volume, surface area, integral mean curvature, and Euler characteristic, respectively [@Rosenfeld89]. Note that the weight functions differ in tensorial rank: ${w_{0}^{i}}$, ${w_{1}^{i}}$, ${w_{2}^{i}}$, and ${w_{3}^{i}}$ are scalars, ${{\bf w}_{{{\rm v}1}}^{i}}$ and ${{\bf w}_{{{\rm v}2}}^{i}}$ are vectors, and ${{\bf \hat{w}}_{{{\rm m}2}}^{i}}$ is a (traceless) matrix. The excess free energy density can be expressed in the general form $$\Phi = \Phi_{C} + \Phi_{CP} + \Phi_{CS} + \Phi_{CPS},$$ where the four contributions have forms motivated by consideration of the appropriate exact zero-dimensional limits. The colloid contribution, $\Phi_{C}$, is the same as that for the pure hard-sphere (HS) system [@Rosenfeld89; @tarazona00]: $$\begin{aligned} {\Phi_{C}} &=& -{n_{0}^{C}} \ln (1 - {n_{3}^{C}})+ \frac{{n_{1}^{C}}\,{n_{2}^{C}} - {{\bf n}_{{\rm v}1}^{C}} \cdot {{\bf n}_{{\rm v}2}^{C}}}{1 - {n_{3}^{C}}} \nonumber\\ & &+ \left[{\frac{1}{3}{\left({n_{2}^{C}}\right)^3}} - {n_{2}^{C}}\,{\left({{\bf n}_{{\rm v}2}^{C}}\right)^2} + \frac{3}{2}\left({{\bf n}_{{\rm v}2}^{C}} \cdot {{{{\hat{\mathbf{n}}}}_{{\rm m}2}}^{C}} \cdot {{\bf n}_{{\rm v}2}^{C}} \right.\right.\nonumber\\ && \left.\left.\phantom{{\left({n_{2}^{C}}\right)^3}/3}\left.\hspace{-8mm} -\,3\det {{{{\hat{\mathbf{n}}}}_{{\rm m}2}}^{C}}\right) \right]\right/[8\pi(1-{n_{3}^{C}})^2].\label{EQphic}\end{aligned}$$ The colloid-polymer interaction contribution, ${\Phi_{CP}}$, is the same as in the pure AO case [@schmidt00cip], $${\Phi_{CP}} = \sum_\nu \frac{\partial {\Phi_{C}}}{\partial {n_{\nu}^{C}}} {n_{\nu}^{P}},$$ while the colloid-solvent interaction contribution [@schmidt01rsf] is $${\Phi_{CS}} = -{n_{0}^{S}} \ln (1 - {n_{3}^{C}})\label{EQphics}.$$ Finally, in order to model the WR-type interaction between polymers and cosolvent particles in the presence of the colloidal spheres, we assume $${\Phi_{CPS}} = \frac{ 
{n_{0}^{S}} {n_{3}^{P}} }{1-{n_{3}^{C}}}\label{EQphipn}, \label{EQphicps}$$ which takes into account the volume excluded to the polymer and cosolvent by the colloids. It is instructive to compare the current theory to geometry-based DF theories previously formulated for two related ternary model systems. One starting point is a ternary AO model that combines a binary HS mixture and one polymer species [@schmidt02cip]. Letting the radius of the smaller HS component go to zero, one obtains the cosolvent species. The other starting point is a recently-introduced model [@schmidt02cpn] for a ternary mixture of colloids, polymers and hard vanishingly thin needles of length $L$, where the needles are ideal amongst themselves but cannot penetrate the polymers (hard-core interaction). In the limit $L\to 0$, the needles become identical to the cosolvent particles. We have explicitly checked that the DF theories for both systems reduce to the theory described above, demonstrating the internal consistency of the geometry-based approach. Results and Discussion {#SECresults} ====================== Bulk Limit ---------- For bulk fluid phases the density profiles are homogeneous: $\rho_i({{\bf r}}) =$ const. In this case, the integrations in Eq. (\[EQnfirst\]) are trivial, and simple expressions for the weighted densities can be obtained. Inserting these expressions into the excess free energy density \[Eqs. (\[EQphic\])-(\[EQphicps\])\] yields the bulk excess free energy in analytic form. 
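In the bulk, the convolutions of Eq. (\[EQnfirst\]) reduce to the densities multiplied by the integrated weights: $n_3^i=\tfrac{4}{3}\pi R_i^3\rho_i$, $n_2^i=4\pi R_i^2\rho_i$, $n_1^i=R_i\rho_i$, $n_0^i=\rho_i$, while the vector and tensor weighted densities vanish by symmetry. A minimal sketch of these bulk expressions (illustrative only):

```python
import math

def bulk_weighted_densities(R, rho):
    """Scalar weighted densities of a homogeneous fluid of spheres of radius R
    at number density rho; vector and tensor weights average to zero by symmetry."""
    n3 = 4.0 * math.pi * R**3 * rho / 3.0   # volume measure (= packing fraction)
    n2 = 4.0 * math.pi * R**2 * rho         # surface measure
    n1 = R * rho                            # integral mean-curvature measure
    n0 = rho                                # Euler-characteristic measure
    return n0, n1, n2, n3

n0, n1, n2, n3 = bulk_weighted_densities(R=1.0, rho=0.1)
# The linear dependences w_1 = w_2/(4 pi R) and w_0 = w_1/R survive in bulk:
assert abs(n1 - n2 / (4.0 * math.pi * 1.0)) < 1e-12
assert abs(n0 - n1 / 1.0) < 1e-12
```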
The HS contribution, which is equal to the Percus-Yevick compressibility (and scaled-particle) result, is given as $$\Phi_{C} = \frac{3 \eta_C [3 \eta_C (2-\eta_C) - 2(1-\eta_C)^2\ln(1-\eta_C)]} {8 \pi R_C^3 (1-\eta_C)^2}.$$ The colloid-polymer contribution is equal to that predicted by free volume theory [@lekkerkerker92], and subsequently rederived by DFT [@schmidt00cip]: $$\begin{aligned} \Phi_{CP} &=& \frac{\eta_P/(8\pi R_P^3)}{(1-\eta_C)^3} \left\{ 3 q \eta_C \left[ 6(1-\eta_C)^2 \right.\right. \nonumber\\ &+&\left. 3q(2-\eta_C -\eta_C^2) +2q^2(1+\eta_C+\eta_C^2) \right] \nonumber\\ &-&\left. 6(1-\eta_C)^3 \ln (1 - \eta_C) \right\}.\end{aligned}$$ This contribution is linear in the polymer density and has a form that arises, as in the original free volume theory [@lekkerkerker92], from treating the polymers as an ideal gas occupying the free volume between the colloids. The colloid-cosolvent contribution is given by $$\Phi_{CS} = -\rho_S\ln(1-\eta_C).$$ This contribution can be similarly interpreted as the free energy of an ideal gas in the free volume of the colloids. In this case, however, the ideal gas consists of point-like cosolvent particles, considerably simplifying the analytical form of the free volume. In fact, by letting $q\to 0$ in Eq. (11), and identifying species $P$ and $S$, $\Phi_{CP}$ reduces to $\Phi_{CS}$. The remaining contribution couples the densities of all three species, and is given by $$\Phi_{CPS} = \frac{\rho_S \eta_P}{1-\eta_C}.$$ In the absence of colloids ($\eta_C=0$), this is equivalent to the mean-field free energy of the WR-model. Eq. (13) is a non-trivial generalization thereof to the case of non-vanishing $\eta_C$. For completeness, the reduced ideal-gas free energy is $$\Phi_{\rm id} = \sum_{i=C,P,S} \rho_i [\ln(\rho_i \Lambda_i^3) -1],$$ where the $\Lambda_i$ are (irrelevant) thermal wavelengths of species $i$. 
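The analytic bulk expressions above can be evaluated directly. The following sketch (a numerical illustration with our own variable names, not the authors' code) implements $\Phi_C$, $\Phi_{CS}$, and $\Phi_{CPS}$ and checks two limits: the hard-sphere second-virial behavior $\Phi_C\to 4\eta_C\rho_C$ at low density, and the reduction of $\Phi_{CPS}$ to the WR mean-field form at $\eta_C=0$.

```python
import math

def Phi_C(eta, R):
    """Hard-sphere excess free energy density (Percus-Yevick compressibility /
    scaled-particle form), in units of kT per volume."""
    return (3.0 * eta * (3.0 * eta * (2.0 - eta)
            - 2.0 * (1.0 - eta)**2 * math.log(1.0 - eta))
            / (8.0 * math.pi * R**3 * (1.0 - eta)**2))

def Phi_CS(eta_C, rho_S):
    """Colloid-cosolvent term: ideal gas of points in the colloids' free volume."""
    return -rho_S * math.log(1.0 - eta_C)

def Phi_CPS(eta_C, eta_P, rho_S):
    """Three-species coupling; at eta_C = 0 it is the WR mean-field term rho_S*eta_P."""
    return rho_S * eta_P / (1.0 - eta_C)

# Low-density check: Phi_C -> B2 rho^2 = 4 eta rho (hard-sphere second virial).
R, eta = 1.0, 1e-4
rho = eta / (4.0 * math.pi * R**3 / 3.0)
assert abs(Phi_C(eta, R) / (4.0 * eta * rho) - 1.0) < 1e-3

# Without colloids the coupling term reduces to the Widom-Rowlinson mean field.
assert Phi_CPS(0.0, 0.2, 0.3) == 0.3 * 0.2
```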
This puts us in a position to obtain the reduced total free energy density, $\Phi_{\rm tot}=\Phi_{\rm id}+\Phi$, of any given fluid state characterized by the bulk densities of the three components and the size ratio $q$. Phase Diagrams -------------- The conditions for phase coexistence are equality of the total pressures, $p_{\rm tot}$, and of the chemical potentials, $\mu_i$, in the coexisting phases. For phase equilibrium between phases I and II, $p_{\rm tot}^{\rm I} = p_{\rm tot}^{\rm II}$ and $\mu_i^{\rm I} = \mu_i^{\rm II}, i=C,P,S$, yielding four equations for six unknowns (two state-points, each characterized by three densities). In our case, a set of analytical expressions is obtained from $$\frac{p_{\rm tot}}{k_BT}=-\Phi_{\rm tot}+\sum_{i=C,P,S} \rho_i ~\frac{\partial \Phi_{\rm tot}}{\partial \rho_i}$$ and $$\mu_i=k_B T~\frac{\partial \Phi_{\rm tot}}{\partial \rho_i},$$ the numerical solution of which is straightforward. In order to graphically represent the ternary phase diagrams, we choose the system reduced densities, ${\eta_{{C}}},{\eta_{{P}}}$, and ${\rho_{S}^{\ast}}$ as basic variables. For given $q$, these span a three-dimensional (3d) phase space. Each point in this space corresponds to a possible bulk state. Two-phase coexistence is indicated by a pair of points joined by a straight tie-line. We imagine controlling the system directly with ${\eta_{{C}}}$ and ${\eta_{{P}}}$, but indirectly via coupling to a cosolvent reservoir, whose chemical potential, $\mu_S$, tunes the solvent quality. Note that, because the cosolvent is treated as an ideal gas, the reservoir’s density is simply proportional to its activity. Thus, the reduced density, ${\rho_{S}^{\ast r}}= \exp(\mu_S/k_BT)$, may be equivalently taken as a control parameter, which is equal in coexisting phases. 
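The pressure and chemical potentials defined above can be evaluated numerically for any model free energy density. The sketch below (illustrative only, with thermal wavelengths set to unity) uses central differences for the derivatives and verifies the ideal-gas limit $p_{\rm tot}=\sum_i\rho_i k_BT$.

```python
import math

def pressure_and_mu(Phi_tot, rhos, h=1e-6):
    """p/kT = -Phi + sum_i rho_i dPhi/drho_i and mu_i/kT = dPhi/drho_i,
    with derivatives computed by central finite differences."""
    def dPhi(i):
        up = list(rhos); up[i] += h
        dn = list(rhos); dn[i] -= h
        return (Phi_tot(up) - Phi_tot(dn)) / (2.0 * h)
    mus = [dPhi(i) for i in range(len(rhos))]
    p = -Phi_tot(rhos) + sum(r * m for r, m in zip(rhos, mus))
    return p, mus

# Ideal-gas check (Lambda_i = 1): Phi_id = sum_i rho_i (ln rho_i - 1), p = sum_i rho_i.
def Phi_ideal(rhos):
    return sum(r * (math.log(r) - 1.0) for r in rhos)

p, mus = pressure_and_mu(Phi_ideal, [0.1, 0.2, 0.3])
assert abs(p - 0.6) < 1e-6
assert abs(mus[0] - math.log(0.1)) < 1e-6
```

With $\Phi_{\rm tot}=\Phi_{\rm id}+\Phi$, a coexistence solver would equate $p$ and the three $\mu_i$ between two candidate phases, four equations for the six densities.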
To make contact with Flory-Huggins theory, we are implicitly considering here the case in which the Flory interaction parameter, $\chi$, falls in the range $0.5 < \chi < 1$, corresponding to a negative excluded-volume parameter, $v \propto (1-2\chi)$. We initially consider colloids and polymers of equal size ($\sigma_C=\sigma_P$). For this case, Fig. \[FIGps1\] shows projections of constant-${\rho_{S}^{\ast r}}$ surfaces onto the three sides of the coordinate system, namely the ${\eta_{{C}}}-{\rho_{S}^{\ast}}$, ${\eta_{{C}}}-{\eta_{{P}}}$, and ${\eta_{{P}}}-{\rho_{S}^{\ast}}$ planes, as well as a perspective 3d view. For reference, the phase diagram without cosolvent is shown in Fig. \[FIGps1\]a. This is identical to the common free volume demixing curve of the AO model [@lekkerkerker92; @schmidt00cip]. For ${\rho_{S}^{\ast r}}=0$, in which case ${\rho_{S}^{\ast}}=0$, the ${\eta_{{C}}}-{\rho_{S}^{\ast}}$ and ${\eta_{{P}}}-{\rho_{S}^{\ast}}$ planes are inaccessible, [*i.e.*]{}, all accessible states lie completely within the ${\eta_{{C}}}-{\eta_{{P}}}$ plane. Upon increasing the cosolvent reservoir density to ${\rho_{S}^{\ast r}}=0.5$, and thus worsening the solvent quality, the demixed region grows, as seen in Fig. \[FIGps1\]b. The critical point shifts towards lower ${\eta_{{C}}}$ and higher ${\eta_{{P}}}$, the tie lines become steeper, and the area beneath the colloid-polymer binodal in the ${\eta_{{C}}}-{\eta_{{P}}}$ plane (a measure of miscibility) decreases. \ As a physical interpretation of the results, one can imagine the polymer spheres as tending to merge (overlap) to avoid contact with the solvent. The resulting polymer “dimers,” “trimers,” etc., act as larger depleting agents, increasing the range of the effective depletion potential between colloids. At the same time, the lower effective concentration of depletants reduces the osmotic pressure and thus the depth of the potential. 
Comparing the phase diagrams for different cosolvent reservoir densities, we can conclude that the net effect of merging polymers is to increase the integrated strength of the depletion potential and thus to promote demixing. Eventually, at ${\rho_{S}^{\ast r}}=0.64894$, the colloid-polymer critical point meets the ${\eta_{{P}}}-{\rho_{S}^{\ast}}$ plane (where ${\eta_{{C}}}=0$), as seen in Figs. \[FIGps1\]c and (on a larger scale) \[FIGps1\]d. Polymers and cosolvent here begin to demix already in the absence of colloids (the critical point of the WR model). For still higher cosolvent reservoir densities (beyond the WR critical point), the critical point vanishes from the phase diagram and a polymer-cosolvent miscibility gap opens up at ${\eta_{{C}}}=0$. It is tempting to interpret this demixing as aggregation of the polymer spheres, although it must be emphasized that the WR model can only crudely describe polymer aggregation. Another intriguing prediction is the reentrant colloid-polymer mixing evident in Fig. \[FIGps1\]d. For sufficiently low colloid concentrations and high cosolvent reservoir densities (poor solvent), colloids and polymers initially demix with increasing ${\eta_{{P}}}$. Upon increasing ${\eta_{{P}}}$ further, miscibility returns over a small range before demixing again occurs at higher ${\eta_{{P}}}$. Such a phenomenon could conceivably result from the complex interplay between range and depth of the depletion potential arising from solvent-induced overlap of polymers. For smaller polymer-to-colloid size ratios, the above scenario persists. Figure \[FIGps2\] shows qualitatively similar results for $q=0.5$ and cosolvent reservoir densities ${\rho_{S}^{\ast r}}=0$ (Fig. \[FIGps2\]a) and 0.5 (Fig. \[FIGps2\]b). Conclusions {#SECdiscussion} =========== In summary, we have investigated the bulk fluid demixing behavior of model mixtures of colloids and nonadsorbing polymers in poor solvents. 
Our model combines the Asakura-Oosawa model of hard-sphere colloids plus ideal penetrable-sphere polymers with a binary solvent model. The solvent comprises a primary theta-solvent and a cosolvent of point particles that are excluded from both colloids and polymers. Cosolvent exclusion energetically favors overlapping configurations of polymers. Although somewhat idealized, the model exhibits the essential feature of solvent-induced effective attraction between polymers, mimicking the effect of a poor solvent. To study the equilibrium phase behavior of this model, we have derived a geometry-based density functional theory that combines elements of previous theories for the AO and Widom-Rowlinson models. Applying the theory to bulk fluid phases, we have calculated phase diagrams for cosolvent densities spanning a range from theta-solvent to poor solvent. With increasing cosolvent concentration (worsening solvent quality), the predicted colloid-polymer binodal shifts to lower colloid concentrations, destabilizing the mixed phase. Beyond a threshold cosolvent concentration, a reentrant colloid-polymer demixing transition is predicted at low colloid concentrations. Predictions of the theory could be tested by comparison with simulations of the model. Qualitative comparison with experiment also may be possible, but would require a relation between the cosolvent concentration (as a measure of solvent quality) and the Flory interaction parameter. In principle, such a relation could be established by calculating the effective second virial coefficient of the polymer in the polymer-cosolvent subsystem. Although here we have approximated the polymers as mutually noninteracting, their effective attractions being driven only by cosolvent exclusion, future work should include non-ideality between polymers, arising fundamentally from excluded-volume repulsion between polymer segments. 
For this purpose, a reasonable model is an effective-sphere description based on a repulsive, penetrable pair interaction (finite at the origin), [*e.g.*]{}, of step-function or Gaussian shape [@louis00]. The competition between such intrinsic repulsion and the solvent-induced attraction considered in this work is likely to produce rich phase behavior. As a further outlook, our approach also could be applied to effects of solvent quality on polymer brushes adsorbed onto surfaces of colloidal particles. [^1]: Permanent address: Institut f[ü]{}r Theoretische Physik II, Heinrich-Heine-Universit[ä]{}t D[ü]{}sseldorf, Universit[ä]{}tsstra[ß]{}e 1, D-40225 D[ü]{}sseldorf, Germany.
--- abstract: 'In this paper we discuss the problem of generically finding near-collisions for cryptographic hash functions in a memoryless way. A common approach is to truncate several output bits of the hash function and to look for collisions of this modified function. In two recent papers, an enhancement to this approach was introduced which is based on classical cycle-finding techniques and covering codes. This paper investigates two aspects of the problem of memoryless near-collisions. Firstly, we give a full treatment of the trade-off between the number of truncated bits and the success-probability of the truncation based approach. Secondly, we demonstrate the limits of cycle-finding methods for finding near-collisions by showing that, opposed to the collision case, a memoryless variant cannot match the query-complexity of the “memory-full” birthday-like near-collision finding method.' address: - 'Institute for Applied Information Processing and Communications, Graz University of Technology, Inffeldgasse 16a, A–8010 Graz, Austria.' - 'Mathematisches Institut, Eberhard Karls Universität Tübingen, Auf der Morgenstelle 10, D–72076 Tübingen, Germany.' author: - Mario Lamberger - Elmar Teufl bibliography: - 'nearcolls30-IPL.bib' date: 19 September 2012 title: 'Memoryless Near-Collisions, Revisited' --- Introduction {#sec:intro} ============ The field of hash function research has developed significantly in the light of the attacks on some of the most frequently used hash functions like MD4, MD5 and SHA-1. As a consequence, academia and industry started to evaluate alternative hash functions, e.g., in the SHA-3 initiative organized by NIST [@sha3COMP]. During this ongoing evaluation, not only the three classical security requirements *collision resistance*, *preimage resistance* and *second preimage resistance* are considered. Researchers look at (semi-)free-start collisions, near-collisions, distinguishers, etc. 
A ‘behavior different from that expected of a random oracle’ for the hash function is undesirable as are weaknesses that are demonstrated only for the compression function and not for the full hash function. Coding theory and hash function cryptanalysis have gone hand in hand for quite some time now, where a crucial part of the attacks is based on the search for low-weight code words in a linear code (see [@cryptoBihamC04; @cryptoChabaudJ98; @imaPramstallerRR05] among others). In this paper, we want to elaborate on a newly proposed application of coding theory to hash function cryptanalysis. In [@dcc_nc; @sacryptLambergerR10], it is demonstrated how to use covering codes to find near-collisions for hash functions in a memoryless way. We also want to refer to the recent paper [@Gordon2010Optimal] which considers similar concepts from the viewpoint of locality sensitive hashing. In all of the following, we will work with binary values, where we identify $\{0,1\}^n$ with $\Z_2^n$. Let “$+$” denote the $n$-bit exclusive-or operation. The Hamming weight of a vector $v\in\Z_2^n$ is denoted by $\wt{v} = \card{\{i \setsep v_i = 1\}}$ and the Hamming distance of two vectors by $\dist{u}{v} = \wt{u + v}$. The Handbook of Applied Cryptography [@bookMenezesOV96 page 331] defines *near-collision resistance* of a hash function $\Hash$ as follows: \[d:nearcollision\] It should be hard to find any two inputs $m$, $m^*$ with $m\neq m^*$ such that $\Hash(m)$ and $\Hash(m^*)$ differ in only a small number of bits: $$\label{eq:NC_def} \dist{\Hash(m)}{\Hash(m^*)} \leq \epsilon.$$ For ease of later use we also give the following definition: \[d:eps-near\] A message pair $m, m^*$ with $m\neq m^*$ is called an *$\epsilon$-near-collision* for $\Hash$ if \eqref{eq:NC_def} holds. Collisions can be considered a special case of near-collisions with the parameter $\epsilon = 0$. 
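Representing hash values as integers, the weight and distance just defined take one line each in Python (a sketch; the helper names are our own):

```python
def wt(v):
    """Hamming weight of an integer-encoded bit vector."""
    return bin(v).count("1")

def dist(u, v):
    """Hamming distance d(u, v) = wt(u + v), where "+" is the bitwise XOR."""
    return wt(u ^ v)

def is_near_collision(h1, h2, eps):
    """Two messages with hash values h1, h2 form an eps-near-collision
    iff dist(h1, h2) <= eps; eps = 0 is the ordinary collision case."""
    return dist(h1, h2) <= eps
```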
The generic method for finding collisions for a given hash function is based on the *birthday paradox* and attributed to Yuval [@Yuval1979HowToSwindle]. There are well established cycle-finding techniques (due to Floyd, Brent, Nivasch, etc. [@Brent1980Improved; @Knuth1997TheArtOf2; @Nivasch2004CycleDetection]) that remove the memory requirements from an attack based on the birthday paradox (see also [@jocOorschotW99]). These methods work by repeated iteration of the underlying hash function where in all of these applications the function is considered to behave like a random mapping (cf. [@FlajoletO1989Random; @Harris1960Probability]). In [@dcc_nc; @sacryptLambergerR10], the question is raised whether or not the above mentioned cycle-finding techniques are also applicable to the problem of finding near-collisions. We now briefly summarize the ideas of [@dcc_nc; @sacryptLambergerR10]. Since Definitions \[d:nearcollision\] and \[d:eps-near\] include collisions as well, the task of finding near-collisions is easier than finding collisions. We now want to have a look at generic methods to construct near-collisions which are more efficient than the generic methods to find collisions. In the following, let $B_{r}(x) := \{y\in\Z_2^n \setsep \dist{x}{y} \le r\}$ denote the *Hamming ball* (or *Hamming sphere*) around $x$ of radius $r$. Furthermore, we denote by $S_n(r) := \card{B_r(x)} = \sum_{i=0}^r\binom{n}{i}$ the cardinality of any $n$-dimensional Hamming ball of radius $r$. A simple adaption of the classical table-based birthday attack for finding $\epsilon$-near-collisions is to start with an empty table, randomly select a message $m$ and compute $\Hash(m)$ and then test whether the table contains an entry $(\Hash(m)+\delta,m^*)$ for some $\delta \in B_\epsilon(0)$ and arbitrary $m^*$. If so, the pair $(m, m^*)$ is an $\epsilon$-near-collision. If not, $(\Hash(m),m)$ is added to the table and the process is repeated. 
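The table-based attack just described can be sketched in a few lines of Python; the function names and the truncated-SHA-256 stand-in hash are our own illustrative choices, not part of the papers under discussion:

```python
import itertools, hashlib

def near_collision_table(n, eps, hash_fn):
    """Table-based birthday search for an eps-near-collision of an n-bit hash:
    for each new message, probe the table at H(m) + delta for all delta in
    the Hamming ball B_eps(0)."""
    def ball(radius):                       # enumerate all delta with wt(delta) <= radius
        for w in range(radius + 1):
            for pos in itertools.combinations(range(n), w):
                yield sum(1 << p for p in pos)
    table = {}                              # hash value -> message
    for m in itertools.count():
        h = hash_fn(m)
        for delta in ball(eps):
            if h ^ delta in table:          # previously stored message, so m* != m
                return m, table[h ^ delta]
        table[h] = m

def toy_hash(m, n=16):
    """Stand-in n-bit hash: the top n bits of SHA-256 of the decimal message."""
    d = hashlib.sha256(str(m).encode()).digest()
    return int.from_bytes(d, "big") >> (256 - n)
```

For $n=16$ and $\epsilon=2$ the expected number of hash computations is roughly $2^{8}/\sqrt{S_{16}(2)}\approx 22$, in line with the lemma below.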
Then, we know the following: \[lem:memory\_NC\] Let $\Hash$ be an $n$-bit hash function. If we assume that $\Hash$ acts like a random mapping, the average number of messages that we need to hash and store in a table-based birthday-like attack before we find an $\epsilon$-near-collision is $O( 2^{n/2} S_n(\epsilon)^{-1/2} )$. We want to note that in this paper we are measuring the complexity of a problem by counting (hash) function invocations. This constitutes an adequate measure in the case of the memoryless algorithms in this paper, however the real computational complexity of the table-based algorithm above is dominated by the memory access, as the problem of searching for an $\epsilon$-near-collision in the table is much harder than testing for a collision. The first straight-forward approach to apply the cycle-finding algorithms to the problem of finding near-collisions is a truncation based approach. \[lem:plain\_trunc\] Let $\Hash$ be an $n$-bit hash function. Let $\tau_\epsilon\colon\Z_2^n\to\Z_2^{n-\epsilon}$ be a map that truncates $\epsilon$ bits from its input at predefined positions. If we assume that $\tau_\epsilon\circ \Hash$ acts like a random mapping, we can apply a cycle-finding algorithm to the map $\tau_\epsilon\circ \Hash$ to find an $\epsilon$-near-collision in a memoryless way with an expected complexity of about $2^{(n-\epsilon)/2}$. Under the assumptions of the lemma, the results from [@FlajoletO1989Random; @Harris1960Probability] are applied to a random mapping with output length $n-\epsilon$. A Thorough Analysis of the Truncation Approach {#sec:trunc} ============================================== As indicated in [@dcc_nc], a simple idea to improve the truncation based approach is to truncate more than $\epsilon$ bits. That is, in order to find an $\epsilon$-near-collision we simply truncate $\mu$ bits with $\mu > \epsilon$. 
A cycle-finding method applied to $\tau_\mu\circ\Hash$ has an expected complexity of $2^{(n-\mu)/2}$ and deterministically finds two messages $m,m^*$ such that $\dist{\Hash(m)}{\Hash(m^*)} \le \mu$. However, we can look at the probability that these two messages $m,m^*$ satisfy $\dist{\Hash(m)}{\Hash(m^*)} \le \epsilon$ which is $2^{-\mu}\sum_{i=0}^{\epsilon} \binom{\mu}{i} = 2^{-\mu} S_\mu(\epsilon)$. For a truly memoryless approach, multiple runs of the cycle-finding algorithm are interpreted as independent events. Therefore, the expected complexity to find an $\epsilon$-near-collision can be obtained as the product of the expected complexity to find a cycle, and the expected number of repetitions of the cycle-finding algorithm, i.e., the reciprocal value of the probability that a single run finds an $\epsilon$-near-collision. In other words, we end up with an expected complexity of $$\label{eq:opt_proj} 2^{(n+\mu)/2} S_\mu(\epsilon)^{-1} = 2^{(n+\mu)/2} \, \left( \sum_{i=0}^{\epsilon} \binom{\mu}{i} \right)^{-1}.$$ \[rem:trunc\_dcc\] In [@dcc_nc], the above approach was already proposed with $\mu=2\epsilon+1$. In this case \eqref{eq:opt_proj} results in a complexity of $$2^{(n+2\epsilon+1)/2} S_{2\epsilon+1}(\epsilon)^{-1} = 2^{(n+1)/2-\epsilon},$$ which clearly improves upon Lemma \[lem:plain\_trunc\]. Here we have used that $S_{2\epsilon+1}(\epsilon) = \frac12 S_{2\epsilon+1}(2\epsilon+1) = 2^{2\epsilon}$. An interesting question that now arises is to find the number of truncated bits $\mu$ that constitutes the best trade-off between a larger $\mu$, i.e., a faster cycle-finding part, and a higher number of repetitions for this probabilistic approach. In other words, we would like to determine the value of $\mu$ which minimizes \eqref{eq:opt_proj} for a given $\epsilon$. Analogously, we can search for an integer $\mu > \epsilon$ such that for a given $\epsilon$ the expression $2^{-\mu/2} S_\mu(\epsilon)$ is maximized. 
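The trade-off can be tabulated directly; the following Python sketch (function names ours) evaluates the base-2 logarithm of the expected complexity of repeated memoryless cycle finding on $\tau_\mu\circ\Hash$:

```python
from math import comb, log2

def S(mu, eps):
    """Truncated binomial sum S_mu(eps) = sum_{i=0}^{eps} C(mu, i)."""
    return sum(comb(mu, i) for i in range(eps + 1))

def log2_complexity(n, mu, eps):
    """log2 of the expected cost 2^{(n+mu)/2} * S_mu(eps)^{-1} of the
    probabilistic truncation approach (mu > eps bits truncated)."""
    return (n + mu) / 2 - log2(S(mu, eps))
```

For $n=160$, $\epsilon=3$ and $\mu=2\epsilon+1=7$ this gives a $\log_2$-complexity of $(160+1)/2-3 = 77.5$, matching the remark above since $S_7(3)=2^6$.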
For small values of $\epsilon$, values for $\mu$ were already computed in [@dcc_nc] by an exhaustive search. In this section, we want to solve this problem analytically. We first show a result that tells us something about the behavior of the sequence of real numbers $$\label{eq:seq} a_\mu := 2^{-\mu/2} S_\mu(\epsilon) = 2^{-\mu/2} \sum_{i=0}^{\epsilon} \binom{\mu}{i}.$$ We want to note that based on the origin of the problem, we are only interested in values $a_\mu$ for $\mu > \epsilon$. Our analysis is still valid starting with $\mu=1$. We will need the following two properties of sequences: \[def:log\_unimod\] Let $a_\mu$ be a real-valued sequence. (i) A sequence $a_\mu$ is called *unimodal* in $\mu$, if there exists an index $t$ such that $a_1\le a_2 \le \dots \le a_t$ and $a_t \ge a_{t+1} \ge a_{t+2} \ge \dots$ The index $t$ is called a *mode* of the sequence. (ii) A sequence $a_\mu$ is called *log-concave*, if $a_\mu^2 \ge a_{\mu-1}a_{\mu+1}$ holds for every $\mu$. If $\ge$ is replaced by $>$, we speak of a *strictly log-concave* sequence. \[prop:unimod\] The sequence $a_\mu$ defined in \eqref{eq:seq} is strictly log-concave and therefore also unimodal. It is a well known fact that a log-concave sequence is also unimodal, see for example [@Stanley1986Unimodal]. So in order to show that \eqref{eq:seq} is strictly log-concave we have to show that for any $\epsilon\ge 1$, $$\label{eq:start_ineq} \sum_{i=0}^\epsilon \sum_{j=0}^\epsilon \binom\mu i \binom\mu j > \sum_{i=0}^\epsilon \sum_{j=0}^\epsilon \binom{\mu-1}{i}\binom{\mu+1}{j}$$ holds. 
By using the recursion for the binomial coefficient twice, we can transform the inequality into $$\begin{aligned} \sum_{i=0}^\epsilon\sum_{j=0}^\epsilon \Biggl[\binom{\mu-1}{i}+\binom{\mu-1}{i-1}\Biggr] \binom\mu j > \sum_{i=0}^\epsilon\sum_{j=0}^\epsilon \binom{\mu-1}{i} \Biggl[\binom\mu j+\binom{\mu}{j-1}\Biggr],\end{aligned}$$ which boils down to the inequality $$\binom\mu\epsilon \sum_{i=0}^{\epsilon-1} \binom{\mu-1}{i} > \binom{\mu-1}{\epsilon}\sum_{i=0}^{\epsilon-1} \binom{\mu}{i}.$$ By direct computation using the definition of the binomial coefficient, it is easy to see that each summand on the left is strictly larger than the respective summand on the right, simply because $\epsilon > i$. The strict log-concavity guarantees us the existence of at most two adjacent indices for which the sequence $a_\mu$ attains its global maximum. But if there were an index $t$ such that $a_t=a_{t+1}$ is maximal, the definition of the sequence $a_\mu$ in \eqref{eq:seq} shows that this would imply the existence of two positive integers $a,b$ such that $a = \sqrt2\,b$, which is clearly not possible. Therefore, the mode of the sequence is indeed unique. In order to find the mode of $a_\mu$, we have to investigate some properties of truncated sums of binomial coefficients. There are well known bounds for the sum $S_\mu(\epsilon)$, which yield upper and lower bounds for the optimal value of $\mu$. As we are interested in an asymptotically correct approximation for the optimal $\mu$, we need to derive an asymptotic expansion of $S_\mu(\epsilon)$ which seems to be hard to find in the literature. Notationally, we use $f(\mu) \sim g(\mu)$ if $\lim_{\mu\to\infty} f(\mu)/g(\mu) = 1$ and $f(\mu) \asymp g(\mu)$ if there exist positive $c_1,c_2,\mu_0$ such that $c_1\cdot\abs{g(\mu)} \le \abs{f(\mu)} \le c_2\cdot\abs{g(\mu)}$ for all $\mu\ge\mu_0$. \[prop:asym\_sum\] Let $S_\mu(\epsilon) = \sum_{k=0}^\epsilon \binom \mu k$ and define $\alpha := \frac\epsilon\mu$. 
If we assume that there exist constants $c_1,c_2$ such that $0 < c_1 \le \alpha \le c_2 < \frac 12$, then we have $$\label{eq:twoterm} S_\mu(\epsilon) = \binom\mu\epsilon \cdot \biggl( \frac{\mu-\epsilon}{\mu-2\epsilon} - \frac{2\epsilon(\mu-\epsilon)}{(\mu-2\epsilon)^3} + O(\mu^{-2}) \biggr),$$ for $\epsilon,\mu \to \infty$ and thus $$S_\mu(\epsilon) \sim \frac{\mu-\epsilon}{\mu-2\epsilon} \cdot \binom\mu\epsilon.$$ For $k\le \epsilon$ we have $$\label{eq:binq} \binom\mu k = \binom\mu\epsilon \prod_{i=0}^{\epsilon-k-1} \frac{\epsilon-i}{\mu-k-i} \le \binom\mu\epsilon \cdot \biggl(\frac{\epsilon}{\mu-\epsilon}\biggr)^{\epsilon-k}.$$ Because of the requirements in the proposition we have $$\frac{\epsilon}{\mu-\epsilon} = \frac{\alpha}{1-\alpha} \le \frac{c_2}{1-c_2} < 1.$$ For the sake of notation we set $\beta:=\frac{\alpha}{1-\alpha}$ and $c:=\frac{c_2}{1-c_2}$. This then leads to $$\label{eq:sm} \begin{aligned} \binom\mu\epsilon &\le S_\mu(\epsilon) \le \binom\mu\epsilon \sum_{k=0}^\epsilon \biggl(\frac{\epsilon}{\mu-\epsilon}\biggr)^{\epsilon-k} \\ &\le \binom\mu\epsilon \sum_{j=0}^\infty \biggl(\frac{\epsilon}{\mu-\epsilon}\biggr)^j = \frac{\mu-\epsilon}{\mu-2\epsilon} \cdot \binom\mu\epsilon \le \frac{1}{1-c} \cdot \binom\mu\epsilon. \end{aligned}$$ From \eqref{eq:sm} we learn that $S_\mu(\epsilon) \asymp \binom\mu\epsilon$. The following can be seen as a discrete version of Laplace’s method to approximate integrals (cf. [@deBruijnAsymptotic1981]). $$\begin{aligned} S_\mu(\epsilon) = \sum_{k=0}^\epsilon \binom \mu k = \sum_{0\le k\le \epsilon-r} \binom \mu k + \sum_{\epsilon-r<k\le\epsilon} \binom \mu k = S_\mu(\epsilon-r) + \sum_{0\le k<r} \binom{\mu}{\epsilon-k},\end{aligned}$$ where $r = r(\mu)$ is such that $r = o(\mu)$ for $\mu\to\infty$. We will determine $r$ later. 
Because of \eqref{eq:binq} and \eqref{eq:sm} we obtain $$S_\mu(\epsilon-r) \asymp \binom{\mu}{\epsilon-r} = \binom\mu\epsilon \cdot O(c^r).$$ This implies $$S_\mu(\epsilon) = \binom\mu\epsilon \cdot \Biggl( \sum_{0\le k<r} \prod_{i=0}^{k-1} \frac{\epsilon-i}{\mu-\epsilon+k-i} + O(c^r) \Biggr).$$ We now have a closer look at the product above: $$\prod_{i=0}^{k-1} \frac{\epsilon-i}{\mu-\epsilon+k-i} = \exp\Biggl( \sum_{i=0}^{k-1} \log\frac{\alpha-\frac i \mu}{1-\alpha+\frac k \mu-\frac i \mu} \Biggr).$$ For $x,y$ close to $0$ we have $$\log\frac{\alpha+x}{1-\alpha+y} = \log\beta + \frac1\alpha\cdot x - \frac1{1-\alpha}\cdot y + O(x^2 + y^2).$$ Since $0\le i<k<r$ and $r=o(\mu)$ we conclude $$\begin{aligned} \log\frac{\alpha-\frac i \mu}{1-\alpha+\frac k \mu-\frac i \mu} = \log\beta - \frac{1}{(1-\alpha)}\cdot\frac k \mu - \frac{(1-2\alpha)}{\alpha(1-\alpha)}\cdot\frac i \mu + O\biggl(\frac{k^2}{\mu^2}\biggr),\end{aligned}$$ where the error term is uniform in $0\le k< r$. With this we get $$\begin{aligned} \prod_{i=0}^{k-1} \frac{\epsilon-i}{\mu-\epsilon+k-i} &= \beta^k \exp\biggl( \frac{1-2\alpha}{2\alpha(1-\alpha)} \cdot \frac k \mu - \frac1{2\alpha(1-\alpha)} \cdot \frac{k^2}{\mu} + O\biggl(\frac{k^3}{\mu^2}\biggr) \biggr) \\ &= \beta^k \, \biggl( 1 + \frac{1-2\alpha}{2\alpha(1-\alpha)} \cdot \frac k \mu - \frac1{2\alpha(1-\alpha)} \cdot \frac{k^2}{\mu} + O\biggl(\frac{k^3}{\mu^2}\biggr) \biggr).\end{aligned}$$ In total we obtain that $S_\mu(\epsilon)\big/\binom\mu\epsilon$ is equal to $$\sum_{0\le k<r} \beta^k \, \biggl( 1 + \frac{1-2\alpha}{2\alpha(1-\alpha)} \cdot \frac k \mu - \frac1{2\alpha(1-\alpha)} \cdot \frac{k^2}{\mu} + O\biggl(\frac{k^3}{\mu^2}\biggr) \biggr)$$ up to an error term which is bounded by $O(c^r)$. 
Since $$\sum_{0\le k<r} \beta^k \cdot \frac{k^3}{\mu^2} = O(\mu^{-2})$$ and $$\sum_{k\ge r} \beta^k \, \biggl( 1 + \frac{1-2\alpha}{2\alpha(1-\alpha)} \cdot \frac k \mu - \frac1{2\alpha(1-\alpha)} \cdot \frac{k^2}{\mu} \biggr) = O(r^2 c^r),$$ it follows that $S_\mu(\epsilon)\big/\binom\mu\epsilon$ is equal to $$\sum_{k\ge0} \beta^k \, \biggl( 1 + \frac{1-2\alpha}{2\alpha(1-\alpha)} \cdot \frac k \mu - \frac1{2\alpha(1-\alpha)} \cdot \frac{k^2}{\mu} \biggr) + O(\mu^{-2} + r^2 c^r).$$ Simplifying the infinite sum above yields $$S_\mu(\epsilon) = \binom\mu\epsilon \cdot \biggl( \frac{1-\alpha}{1-2\alpha} - \frac{2\alpha(1-\alpha)}{(1-2\alpha)^3} \cdot \frac 1 \mu + O(\mu^{-2} + r^2 c^r) \biggr).$$ We now choose $r=r(\mu)=(\log \mu)^2$, since then $r^2 c^r = o(\mu^{-2})$, which readily implies the statement using the definition of $\alpha$. The results of Prop. \[prop:unimod\] and Prop. \[prop:asym\_sum\] can now be combined in the following way. We are interested in the behavior of $a_\mu$, that is, $$a_\mu = 2^{-\mu/2} S_\mu(\epsilon) = 2^{-\mu/2} \sum_{i=0}^\epsilon \binom \mu i.$$ We have already seen that there will be a unique mode $t$ for the sequence. Until this index, we have $a_{\mu+1}/a_{\mu} \ge 1$ and for all following values of $\mu$, we have $a_{\mu+1}/a_{\mu} \le 1$. If we evaluate the fraction, we get $a_{\mu+1}/a_{\mu} = S_{\mu+1}(\epsilon)/(\sqrt2\,S_{\mu}(\epsilon))$. From the recurrence relation of the binomial coefficient we get the analogous recurrence relation for $S_\mu(\epsilon)$, namely $S_{\mu+1}(\epsilon) = S_{\mu}(\epsilon) + S_{\mu}(\epsilon-1) = 2S_{\mu}(\epsilon)-\binom\mu\epsilon$. If we use this in the above equation we end up with $$\label{eq:eq_for_proof} \frac{a_{\mu+1}}{a_{\mu}} = \sqrt2\, \Biggl( 1-\frac{\binom\mu\epsilon}{2S_\mu(\epsilon)} \Biggr).$$ If we now use the asymptotic expansion in \eqref{eq:twoterm} we can compute an approximation for $\mu = \mu(\epsilon)$ such that an optimum for \eqref{eq:opt_proj} is found. 
\[th:trunc\_opt\] Let $\Hash$ be a hash function producing an $n$-bit hash value and let $\epsilon\ge1$ be given. Let $\tau_\mu\colon\Z_2^n \to \Z_2^{n-\mu}$ be a map that truncates $\mu$ fixed bits from an $n$-bit value, and suppose we apply a cycle-finding algorithm to $\tau_\mu\circ\Hash$, which is assumed to act like a random mapping. Then, there exists a unique optimal choice $\mu = \mu(\epsilon) > \epsilon$ to find an $\epsilon$-near-collision. For large $\epsilon$, we have $$\label{eq:opt_mu} \mu(\epsilon) = (2+\sqrt2\,)(\epsilon-1) + O(\epsilon^{-1}).$$ Substituting the lower bound $$S_\mu(\epsilon) \ge \binom\mu\epsilon + \binom\mu{\epsilon-1} = \frac{\mu+1}{\mu+1-\epsilon} \binom\mu\epsilon$$ and the upper bound of \eqref{eq:sm} in \eqref{eq:eq_for_proof} implies that the mode $t$ of the sequence $a_\mu$ is bounded by $(1+\sqrt2)\epsilon - 1 \le t \le (2+\sqrt2) \epsilon$. For values of $\mu$ in the domain above we may use Prop. \[prop:asym\_sum\], since the quotient $\epsilon/\mu$ is easily seen to be bounded in the right way. Furthermore, $\mu\asymp\epsilon$ and $\mu-2\epsilon\asymp\epsilon$. For large values of $\epsilon$ we infer from $$S_\mu(\epsilon) = \binom\mu\epsilon \cdot \biggl( \frac{\mu-\epsilon}{\mu-2\epsilon} + O(\epsilon^{-1}) \biggr),$$ that the mode $t$ must satisfy the equation $$1 = (2-\sqrt2) \biggl( \frac{t-\epsilon}{t-2\epsilon} + O(\epsilon^{-1}) \biggr).$$ Solving this equation yields $t = (2+\sqrt2)\epsilon + O(1)$. Now let us try to obtain further terms of the asymptotic expansion of $t$ using bootstrapping (see for instance [@deBruijnAsymptotic1981]). Using the full strength of Prop. \[prop:asym\_sum\] implies that the equation $$1 = (2-\sqrt2) \biggl( \frac{t-\epsilon}{t-2\epsilon} - \frac{2\epsilon(t-\epsilon)}{(t-2\epsilon)^3} + O(\epsilon^{-2}) \biggr)$$ must be satisfied by the mode $t$. 
Using the ansatz $t = (2+\sqrt2)\epsilon + r$, where $r = O(1)$, yields $$2\bigl(1+\sqrt2\bigr) \Bigl( (3-2\sqrt2)r + (2-\sqrt2) \Bigr) \epsilon^2 + O(\epsilon) = 0.$$ Hence we get $r=-(2+\sqrt2) + O(\epsilon^{-1})$ and $t=(2+\sqrt2\,)(\epsilon-1) + O(\epsilon^{-1})$ which corresponds to $\mu(\epsilon)$. We want to note that in both Prop. \[prop:asym\_sum\] and Th. \[th:trunc\_opt\], it is possible to compute an arbitrary number of terms of the asymptotic expansions \eqref{eq:twoterm} and \eqref{eq:opt_mu}. We end this section with Table \[t:mu\] demonstrating the quality of the approximation of \eqref{eq:opt_mu}. The actual values for $\mu(\epsilon)$ are produced by an exhaustive search and for simplicity, \eqref{eq:opt_mu} is replaced with $\lceil(2+\sqrt2)(\epsilon-1)\rceil$. $\epsilon$ & 1 & 2 & 3 & 4 & …& 8 & 9 & 10 & …& 98 & 99 & 100\ $\mu(\epsilon)$ & 2 & 5 & 8 & 11 & …& 25 & 28 & 32 & …& 332 & 335 & 339\ $\mu^*(\epsilon)$ & 0 & 4 & 7 & 11 & …& 24 & 28 & 31 & …& 332 & 335 & 339\ Limitations of Memoryless Near-Collisions {#sec:limits} ========================================= A drawback to the truncation based solution is of course that we can only find $\epsilon$-near-collisions of a limited shape (depending on the fixed bit positions), so only a fraction of all possible $\epsilon$-near-collisions can be detected, namely $S_{\mu}(\epsilon)/S_n(\epsilon)$. To improve upon this, [@dcc_nc; @sacryptLambergerR10] had the idea to replace the projection $\tau_\epsilon$ by a more complicated function $g$, where $g$ is the decoding operation of a certain *covering code* $\mathcal{C}$. Let $R=R(\mathcal{C})$ be the *covering radius* of a code $\mathcal{C}$, that is $R(\mathcal{C}) = \max_{x \in \Z_2^{n}} \min_{c \in \mathcal{C}} \dist{x}{c}$. \[th:coding\] Let $\Hash$ be a hash function of output size $n$. 
Let $\mathcal{C}$ be a covering code of the same length $n$, size $K$ and covering radius $R(\mathcal{C})$ and assume there exists an efficiently computable map $g$ such that $g\colon\Z_2^n \to \mathcal{C}$, where $x \mapsto c$ with $\dist{x}{c} \leq R(\mathcal{C})$. If we further assume that $g\circ\Hash$ acts like a random mapping, in the sense that the expected cycle and tail lengths are the same as for the iteration of a truly random mapping on a space of size $K$, then we can find $2R(\mathcal{C})$-near-collisions for $\Hash$ with a complexity of about $\sqrt{K}$ and with virtually no memory requirements. An extensive amount of work in the theory of covering codes is devoted to derive upper and lower bounds for $K$ (when $n$ and $R$ are given) and to construct codes achieving these bounds (see, e.g., [@bookCoveringCodes1997; @Struik1994AnImprovementOf; @Wee1988Improved]). The authors of [@sacryptLambergerR10] have investigated a class of efficient codes suitable for the approach outlined in Th. \[th:coding\]. The approach via covering codes constitutes an improvement over the purely truncation based approach. However, (depending on $\epsilon$) the query-complexity of the approach outlined in Th. \[th:coding\] is larger than the expected query-complexity of the table-based birthday method, cf. Lem. \[lem:memory\_NC\]. \[rem:code\_prob\] We briefly want to mention the possibility of considering a probabilistic version of the covering code approach in an analogous manner to the approach in Sec. \[sec:trunc\]. In other words, what is the probability to find a $(2R-1)$-near-collision if the covering radius is $R$? This problem has also been studied in [@dcc_nc] with the outcome that in general, finding a closed expression like \eqref{eq:opt_proj} is beyond reach. Numerical experiments for relevant values of $n$ and $\epsilon = 2R$ show that increasing the covering radius is rarely bringing an improvement. We use [@dcc_nc Eq. 
(20)] together with the optimal solution from [@sacryptLambergerR10] to compute complexities for small values of $\epsilon$ in Table \[t:rho\_general\]. The limitations of the covering code approach are inherent to the *sphere covering bound*, which states that $K \ge 2^{n}/S_n(R)$ (cf. [@bookCoveringCodes1997]). Since we use codes with covering radius $R$ to find $2R$-near-collisions, that is, $\epsilon = 2R$, the sphere covering bound implies that the size $K$ of the code has to be larger than $K \ge 2^n/S_n(R) \gg 2^n / S_n(2R)$, where the latter would be the desired quantity to match the complexity of Lem. \[lem:memory\_NC\] to find an $\epsilon$-near-collision. In the following, we want to investigate whether there are other possibilities to choose a mapping $g$ such that collisions for $g\circ\Hash$ imply $\epsilon$-near-collisions for $\Hash$. In [@dcc_nc] it was shown that the “perfect” mapping $g$ is beyond reach: \[lem:no\_g\] Let $\epsilon \ge 1$, let $\Hash$ be a hash function and let $g$ be a function such that $$\dist{\Hash(m)}{\Hash(m^*)} \leq \epsilon \Leftrightarrow g(\Hash(m)) = g(\Hash(m^*))$$ holds. Then, $g$ is a constant map and $\dist{\Hash(m)}{\Hash(m^*)} \leq \epsilon$ for all $m, m^*$. So the best we can hope for is a mapping $g\colon\Z_2^n \to \Z_2^k$ that satisfies $$\label{eq:eps_inj} g(y) = g(y') \Rightarrow \dist{y}{y'} \leq \epsilon,$$ for all $y,y' \in \Z_2^n$. If we recall the requirements of Th. \[th:coding\], it was stated that $g\circ\Hash$ should act like a random mapping in order to have the expected cycle and tail lengths of the iteration of $g\circ\Hash$ be the same as for a truly random mapping on a space of size $2^k$. We formalize this in the following lemma. For this, we assume that the hash function $\Hash$ acts like a random mapping from a large domain $D \simeq \Z_2^\ell$ to $\Z_2^n$ (since most hash standards define a maximum input length). 
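The balancedness notion formalized next is cheap to verify by brute force for small $n$. The sketch below (helper name ours) confirms that truncating $\mu$ bits yields a balanced map, every image point having exactly $2^{\mu}$ preimages, while an arbitrary compressing map need not be balanced:

```python
from collections import Counter

def is_balanced(g, n, k):
    """Brute-force check that g: Z_2^n -> Z_2^k is balanced, i.e. that
    |g^{-1}(z)| = 2^{n-k} for every z in Z_2^k."""
    sizes = Counter(g(y) for y in range(2**n))
    return len(sizes) == 2**k and set(sizes.values()) == {2**(n - k)}
```

For example, `is_balanced(lambda y: y >> 3, 10, 7)` holds (truncation of $\mu=3$ bits), whereas a map that sends everything except $0$ to $1$ is not balanced.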
First, we need yet another definition: \[def:balanced\] Let $D,I$ be finite domains. We call a function $g\colon D \to I$ *balanced* if $\card{I}$ divides $\card{D}$ and for all $z\in I$ we have $\card{g^{-1}(z)} = \card{D}/\card{I}$. \[lem:balanced\] Let $\Hash\colon \Z_2^\ell \to \Z_2^n$ be a random mapping. Furthermore, consider a function $g\colon \Z_2^n \to \Z_2^k$ with $k \le n$. Then, $g$ is balanced if and only if $g\circ\Hash\colon \Z_2^\ell \to \Z_2^k$ is a random mapping. Let $g$ be balanced, that is, for all $z\in\Z_2^k$ we have $\card{g^{-1}(z)} = 2^{n-k}$. The sets $P_z := g^{-1}(z)$ for all $z\in\Z_2^k$ define a disjoint partition of $\Z_2^n$ into sets of size $\card{P_z}=2^{n-k}$, and $g$ is constant on each set $P_z$. Now let $\Hash$ be drawn uniformly at random from the set of all functions $\Z_2^\ell \to \Z_2^n$, that is, for any function $h\colon\Z_2^\ell \to \Z_2^n$ we have $\mathbb{P}(\Hash = h) = 2^{-n 2^\ell}$. For a given $h'\colon \Z_2^\ell \to \Z_2^k$, we now want to compute the probability $\mathbb{P}(g\circ\Hash = h')$, for which we get $$\label{eq:probs} \begin{aligned} \mathbb{P}(g\circ\Hash = h') &= 2^{-n 2^\ell} \card{\{h\colon\Z_2^\ell \to \Z_2^n \setsep g(h(x)) = h'(x) \text{ for all } x\in\Z_2^\ell \}} \\ &= 2^{-n 2^\ell} \card{\{h\colon\Z_2^\ell \to \Z_2^n \setsep h(x) \in P_{h'(x)} \text{ for all } x\in\Z_2^\ell \}} \\ &= 2^{-n 2^\ell} 2^{(n-k) 2^\ell} = 2^{-k 2^\ell}, \end{aligned}$$ because $\card{P_z}=2^{n-k}$ for all $z$. In other words, $g\circ\Hash$ is a random mapping. Now assume that $g\circ\Hash$ is a random mapping. This means that for every $h'\colon\Z_2^\ell \to \Z_2^k$ we have $\mathbb{P}(g\circ\Hash = h') = 2^{-k 2^\ell}$. This stays true if we choose $h'$ to be one of the $2^k$ constant functions. Arguing along the same lines as above, we get $$2^{-k 2^\ell} = 2^{-n 2^\ell} \card{\{h\colon \Z_2^\ell \to \Z_2^n \setsep g(h(x)) = c \text{ for all } x\in\Z_2^\ell \}}$$ for all $c \in \Z_2^k$.
Again, with $P_c = g^{-1}(c)$, we have $$2^{(n-k) 2^\ell} = \card{\{h\colon \Z_2^\ell \to \Z_2^n \setsep h(x) \in P_c \text{ for all } x\in\Z_2^\ell \}}.$$ This leaves us with $\card{P_c} = \card{g^{-1}(c)} = 2^{n-k}$ for all $c\in\Z_2^k$, and thus, $g$ is balanced. Lem. \[lem:balanced\] teaches us that in a memoryless near-collision algorithm based on iterating the concatenation of the hash function $\Hash$ with a function $g$, we need $g$ to be balanced in addition to the above requirement. In the remaining part of this section, we want to show that this limits our choices basically to the known candidates for $g$. For the proof of the next proposition, we will need a lemma which goes back to a conjecture by Erdős. The solution of this problem by Kleitman in [@Kleitman1966OnACombinatorial] was further investigated in [@Bezrukov1987OnTheDescription]. Let $\diam{A}$ denote the diameter of a set $A \subset \Z_2^n$, that is, $\diam{A} := \max_{x,y\in A} \dist{x}{y}$. We now collect the results of Th. 1 and Th. 2 of [@Bezrukov1987OnTheDescription] in the following lemma: \[lem:diam\] Let $s$ be a non-negative integer. 1. The Hamming balls $B_s(x)$ for any $x \in \Z_2^n$ are the sets of maximal size among all sets $A\subset \Z_2^n$ with $\diam{A} = 2s < n-1$. 2. The sets $B_s(x) \cup B_s(y)$ for any $x,y \in \Z_2^n$ with $\dist{x}{y}=1$ are sets of maximal size among all sets $A\subset \Z_2^n$ with $\diam{A} = 2s+1 < n-1$. With this auxiliary result, we can now formulate the main result of this section. \[th:impossible\] Let $1 \le \epsilon < \frac n2$ be given and let $g\colon \Z_2^n \to I$ be a balanced function satisfying the requirement $$g(y) = g(y') \Rightarrow \dist{y}{y'} \le \epsilon$$ for all $y,y'\in\Z_2^n$. Then, $\card{I}$ must satisfy $$\label{eq:bound_k} \card{I} \ge 2^n / S_n(\lceil \epsilon/2\rceil).$$ In the proof of Lem.
\[lem:balanced\] we have seen that the balancedness of $g$ implies a disjoint partition $\bigcup_z P_z$ of $\Z_2^n$, where each set $P_z$ has size $2^n/\card{I}$. The sets $P_z$ are exactly the sets on which $g$ is constant, that is, $g(x) = z$ for all $x\in P_z$. Taking the requirement on $g$ into account, we need $\diam{P_z} \le \epsilon$. Therefore, Lem. \[lem:diam\] shows that for even $\epsilon$ we have $2^n/\card{I} \le S_n(\epsilon/2)$, and for odd $\epsilon$ we have $2^n/\card{I} \le S_n(\frac{\epsilon-1}{2}) + \binom{n-1}{(\epsilon-1)/2}$. Since $S_n(\frac{\epsilon-1}{2}) + \binom{n-1}{(\epsilon-1)/2} \le S_n(\frac{\epsilon+1}{2})$, we can unify both cases into the bound stated in the theorem. As a consequence we get the following corollary: \[cor:limits\] Let $\Hash$ be an $n$-bit hash function, let $1 \le \epsilon < \frac n 2$ and let $g$ be a balanced function satisfying the above requirement. Then, the complexity of finding an $\epsilon$-near-collision by applying a cycle-finding algorithm to the concatenation $g\circ\Hash$ is bounded from below by $\Omega(2^{n/2} S_n(\lceil \epsilon/2 \rceil)^{-1/2})$.
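To get a feeling for the gap quantified in Cor. \[cor:limits\], one can tabulate $S_n(R)=\sum_{i=0}^{R}\binom{n}{i}$ and compare the lower bound $2^{n/2} S_n(\lceil\epsilon/2\rceil)^{-1/2}$ for balanced memoryless mappings with the table-based complexity $2^{n/2} S_n(\epsilon)^{-1/2}$. A minimal sketch with the illustrative choice $n=160$ (not a parameter from the text):

```python
import math

def S(n: int, R: int) -> int:
    """Volume of the Hamming ball of radius R in Z_2^n."""
    return sum(math.comb(n, i) for i in range(R + 1))

n = 160                                   # illustrative output size
for eps in (2, 4, 6, 8):
    memoryless = 2 ** (n / 2) / math.sqrt(S(n, (eps + 1) // 2))
    table_based = 2 ** (n / 2) / math.sqrt(S(n, eps))
    # the balanced memoryless bound is always the larger complexity
    assert memoryless > table_based
    print(eps, round(math.log2(memoryless), 1), round(math.log2(table_based), 1))
```

The printed base-2 logarithms make the growing gap between the two complexities visible as $\epsilon$ increases.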
| **short explanation** | **memory** | **complexity** | **remarks** |
|---|---|---|---|
| cycle-finding approach applied to an $\epsilon$-truncation of $H$ | negligible (memory is only required for cycle finding) | $2^{(n-\epsilon)/2}$ | see Lemma \[lem:plain\_trunc\] and [@Harris1960Probability] |
| cycle-finding approach applied to a $(2\epsilon+1)$-truncation of $H$ | negligible (memory is only required for cycle finding) | $2^{(n+1)/2-\epsilon}$ | see Remark \[rem:trunc\_dcc\] and [@dcc_nc]; (A) in Table \[t:rho\_general\] |
| cycle-finding approach applied to an optimized $\mu$-truncation of $H$ ($\mu>\epsilon$) | negligible (memory is only required for cycle finding) | $2^{(n+\mu)/2} S_\mu(\epsilon)^{-1}$ | the optimal $\mu=\mu(\epsilon)$ is unique and $\mu\sim(2+\sqrt2)(\epsilon-1)$, see Theorem \[th:trunc\_opt\]; (B) in Table \[t:rho\_general\] |
| table-based approach | a table of exponential size in $n$ for the pairs $(m,H(m))$ | $2^{n/2} S_n(\epsilon)^{-1/2}$ | see Lemma \[lem:memory\_NC\] and [@dcc_nc]; (C) in Table \[t:rho\_general\] |
| coding-based approach | negligible (memory is only required for coding and cycle finding) | for even $\epsilon=2R$: $2^{(n-\ell R-r)/2}$, where $\ell := \lfloor \log_2(n/R+1) \rfloor$, $r := \lfloor (n-R(2^\ell-1))/2^\ell \rfloor$ | see [@dcc_nc; @sacryptLambergerR10]; (D) in Table \[t:rho\_general\]; for odd $\epsilon$ the coding-based approach for $\epsilon+1$ is repeated until an $\epsilon$-near-collision is found, see Remark \[rem:code\_prob\] |

Conclusion {#sec:conclusion}
==========

At the moment, a lot of effort is dedicated to the cryptanalysis of concrete hash function designs. From a theoretical perspective it is still very important to investigate generic aspects of non-random properties of hash functions. In this paper, we have analyzed several aspects of the question of finding near-collisions in a memoryless way. This problem has recently been investigated in [@dcc_nc; @sacryptLambergerR10].
All these methods rely on the application of a cycle-finding technique to an alteration (that is, a concatenation with a new mapping) of the hash function. We have investigated in full detail the complexity of a probabilistic version of the simple truncation-based approach. Furthermore, we have shown that the approach is in general limited in its capabilities, in the sense that if $g$ is such that finding a collision for $g\circ\Hash$ implies a near-collision for $\Hash$, then the query-complexity of this approach is always higher than the query-complexity of a birthday-like method using a table of exponential size. A comparison of the known methods is compiled in Tables \[t:methods\] and \[t:rho\_general\]. It has to be noted that in practice the real complexity of the table-based method will be dominated by the table queries and not by the hash computations. Acknowledgements {#acknowledgements .unnumbered} ================ The authors wish to thank the anonymous referee for valuable comments. The work in this paper has been supported in part by the Austrian Science Fund (FWF), project P21936-N23 and by the European Commission under contract ICT-2007-216646 (ECRYPT II).
--- abstract: 'The dominant production mechanism for heavy quark-antiquark bound states in very high energy processes is fragmentation, the splitting of a high energy parton into a quarkonium state and other partons. We show that the fragmentation functions $D(z,\mu)$ describing these processes can be calculated using perturbative QCD. We calculate the fragmentation functions for a gluon to split into S-wave quarkonium states to leading order in the QCD coupling constant. The leading logarithms of $\mu/m_Q$, where $\mu$ is the factorization scale and $m_Q$ is the heavy quark mass, are summed up using Altarelli-Parisi evolution equations.' --- Eric Braaten *Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208* Tzu Chiang Yuan *Davis Institute for High Energy Physics* *Department of Physics, University of California, Davis, CA 95616* Quantitative evidence for quantum chromodynamics (QCD) as the fundamental field theory describing the strong interactions has come primarily from high energy processes involving leptons and the electroweak gauge bosons. Such processes are simpler than most purely hadronic processes, because leptons and electroweak gauge bosons do not have strong interactions. The next simplest particles as far as the strong interactions are concerned are heavy quarkonia, the bound states of a heavy quark and antiquark. While not pointlike, the lowest states in the charmonium and bottomonium systems have typical radii that are significantly smaller than those of hadrons containing light quarks. They have simple internal structure, consisting primarily of a nonrelativistic quark and antiquark only. The charmonium and bottomonium systems exhibit a rich spectrum of orbital and angular excitations.
Thus in addition to being simple enough to be used as probes of the strong interactions, heavy quarkonia are also a potentially much richer source of information than leptons and electroweak gauge bosons. In most previous studies of the production of heavy quarkonia in high energy processes, it was implicitly assumed that they are produced by [*short distance*]{} mechanisms, in which the heavy quark and antiquark are created with transverse separations of order $1/E$, where $E$ is the characteristic energy scale of the process. In this paper, we point out that the dominant mechanism at very high energies is [*fragmentation*]{}, the production of a high energy parton followed by its splitting into the quarkonium state and other partons. The $Q {\bar Q}$ pair are created with a separation of order $1/m_Q$, where $m_Q$ is the mass of the heavy quark $Q$. The fragmentation mechanism is often of higher order in the QCD coupling constant $\alpha_s$ than the short distance mechanism, but it is enhanced by a factor of $(E/m_Q)^2$ and thus dominates at high energies $E >> m_Q$. The fragmentation of a parton into a quarkonium state is described by a fragmentation function $D(z,\mu)$, where $z$ is the longitudinal momentum fraction of the quarkonium state and $\mu$ is a factorization scale. We calculate to leading order in $\alpha_s$ the fragmentation functions $D(z,m_Q)$ for gluons to split into S-wave quarkonium states at energy scales $\mu$ of order $m_Q$. The fragmentation functions at larger scales $\mu$ are then determined by Altarelli-Parisi evolution equations which sum up the leading logarithms of $\mu/m_Q$. One of the quarkonium processes that is important in hadron collider physics is the production of charmonium at large transverse momentum $p_T$. A charmonium state can either be produced directly at large $p_T$ or it can be produced indirectly by the decay of a large $p_T$ $B$-meson or a higher charmonium state with large $p_T$. 
In previous calculations of the rate for direct production of charmonium at large $p_T$ [@br], the dominant mechanisms were assumed to be short distance processes, in which a collinear $c {\bar c}$ pair in a color-singlet S-wave state is created with transverse separation on the order of $1/p_T$. A typical Feynman diagram which contributes to the production of the $^1S_0$ charmonium state $\eta_c$ at order $\alpha_s^3$ is the diagram for $g g \rightarrow c {\bar c} g$ shown in Figure 1. The order-$\alpha_s^4$ radiative corrections to this process include the Feynman diagram for $g g \rightarrow c {\bar c} g g$ shown in Figure 2. In most regions of phase space, the virtual gluons in Figure 2 are off their mass shells by amounts of order $p_T$, and the contribution from this diagram is suppressed relative to the diagram in Figure 1 by a power of the running coupling constant $\alpha_s(p_T)$. But there is a part of the phase space in which the virtual gluon attached to the $c {\bar c}$ pair in Figure 2 is off-shell by an amount of order $m_c$. The propagator of this virtual gluon enhances the cross section by a factor of $p_T^2/m_c^2$. At large enough $p_T$, this easily overwhelms the extra power of the coupling constant $\alpha_s$. The enhancement is due to the fact that the $c {\bar c}$ pair can be produced with transverse separation of order $1/m_c$ instead of $1/p_T$. A more thorough analysis of the amplitude for $g g \rightarrow \eta_c g g$ reveals that the term that is enhanced by $p_T^2/m_c^2$ can be written in a factored form. The first factor is the amplitude for the production of a virtual gluon $g^*$ with high-$p_T$ but low invariant mass $q^2$ via the process $g g \rightarrow g g^*$. In the limit $q^2 << p_T^2$, it reduces to the on-shell scattering amplitude for $g g \rightarrow g g$. The second factor is the propagator $1/q^2$ for the virtual gluon. 
The third and final factor is the amplitude for the process $g^* \rightarrow \eta_c g$, in which an off-shell gluon fragments into an $\eta_c$ and a gluon. The factoring of the amplitude allows the fragmentation contribution to the differential cross section $d\sigma_{\eta_c}(E)$ for producing an $\eta_c$ with energy $E >> m_c$ to be written in a factorized form: $${ d\sigma_{\eta_c}(E) \;\approx\; \int_0^1 dz \; d{\widehat \sigma}_g(E/z) \; D_{g \rightarrow \eta_c}(z,m_c) \;, } \label{fac0}$$ where $d{\widehat \sigma}_g(E)$ is the differential cross section for producing a real gluon of energy $E$. All of the dependence on the energy $E$ appears in the subprocess cross section $d{\widehat \sigma}_g$, while all the dependence on the quark mass $m_c$ is in the fragmentation function $D(z, m_c)$. The variable $z$ is the longitudinal momentum fraction of the $\eta_c$ relative to the gluon. The physical interpretation of (\[fac0\]) is that an $\eta_c$ of energy $E$ can be produced by first producing a gluon of larger energy $E/z$ which subsequently splits into an $\eta_c$ carrying a fraction $z$ of the gluon energy. The generalization of the leading order formula (\[fac0\]) to all orders in $\alpha_s$ is straightforward. At higher orders in $\alpha_s$, the gluon that splits into a quarkonium state ${\cal O}$ can itself arise from the splitting of a higher energy parton into a collinear gluon. This splitting process gives rise to logarithms of $E/m_Q$. In order to maintain the factorization of the dependences on $E$ and $m_Q$, it is necessary to introduce a factorization scale $\mu$: $\log(E/m_Q) = \log(E/\mu) + \log(\mu/m_Q)$. 
To all orders in $\alpha_s$, the fragmentation contribution to the differential cross section for producing a quarkonium state ${\cal O}$ with energy $E$ can be written in the factorized form $${ d\sigma_{\cal O}(E) \;=\; \sum_i \int_0^1 dz \; d{\widehat \sigma}_i(E/z,\mu) \; D_{i \rightarrow {\cal O}}(z,\mu) \;, } \label{fac}$$ where the sum is over all parton types $i$. The scale $\mu$ is arbitrary, but large logarithms of $E/\mu$ in the parton cross section $d{\widehat \sigma}_i$ can be avoided by choosing $\mu$ on the order of $E$. Large logarithms of $\mu/m_Q$ then necessarily appear in the fragmentation functions $D_{i \rightarrow {\cal O}}(z,\mu)$, but they can be summed up by solving the evolution equations [@rdf] $${ \mu {\partial \ \over \partial \mu} D_{i \rightarrow {\cal O}}(z,\mu) \;=\; \sum_j \int_z^1 {dy \over y} \; P_{i\rightarrow j}(z/y,\mu) \; D_{j \rightarrow {\cal O}}(y,\mu) \;, } \label{evol}$$ where $P_{i\rightarrow j}(x,\mu)$ is the Altarelli-Parisi function for the splitting of the parton of type $i$ into a parton of type $j$ with longitudinal momentum fraction $x$. For many applications, calculations to leading order in $\alpha_s$ require only the $g \rightarrow g$ splitting function, which is $${ P_{g \rightarrow g}(x,\mu) \;=\; {3 \alpha_s(\mu) \over \pi} \left( {1-x \over x} + {x \over (1-x)_+} + x(1-x) \;+\; {33 - 2 n_f \over 36} \delta(1-x) \right) \;, } \label{split}$$ where $n_f$ is the number of light quark flavors. The boundary condition on this evolution equation is the fragmentation function $D_{i \rightarrow {\cal O}}(z,m_Q)$ at the scale $m_Q$. It can be calculated perturbatively as a series in $\alpha_s(m_Q)$. We proceed to calculate the fragmentation function $D_{g \rightarrow \eta_c}(z,m_c)$ for a gluon to split into the $^1S_0$ charmonium state $\eta_c$ to leading order in $\alpha_s(m_c)$. 
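The $\delta$-term coefficient $(33-2n_f)/36$ in (\[split\]) can be checked numerically: with the plus-prescription, the second moment $\int_0^1 dx\,x\,P_{g\rightarrow g}(x)$ must come out to $-n_f\alpha_s/(6\pi)$, the standard value that is compensated by the $g \rightarrow q\bar q$ splitting channel (this target value is standard Altarelli-Parisi lore, not derived in the text above). A minimal sketch:

```python
import math

alpha_s, nf = 0.26, 3          # illustrative values
N = 200_000                    # midpoint-rule subdivisions

def x_P_regular(x: float) -> float:
    # x * [ (1-x)/x + x/(1-x)_+ + x(1-x) ]; under the integral over [0,1]
    # the plus part satisfies  x * x/(1-x)_+  ->  (x**2 - 1)/(1 - x) = -(1 + x)
    return (1 - x) - (1 + x) + x * x * (1 - x)

# midpoint rule over (0,1) plus the delta-function term at x = 1
integral = sum(x_P_regular((i + 0.5) / N) for i in range(N)) / N
integral += (33 - 2 * nf) / 36

moment = 3 * alpha_s / math.pi * integral
# momentum sum rule: this must equal -nf * alpha_s / (6 pi),
# the piece balanced by the g -> q qbar channel (not shown here)
assert abs(moment + nf * alpha_s / (6 * math.pi)) < 1e-6
```

The regular part integrates to $-11/12$, so the $\delta$-term coefficient $(33-2n_f)/36 = 11/12 - n_f/18$ is exactly what is needed to leave the residual $-n_f/18$.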
A process (such as $g g \rightarrow g g$) that produces a real gluon of 4-momentum $q$ has a matrix element of the form ${\cal M}_\alpha \epsilon^\alpha(q)$, where $\epsilon^\alpha(q)$ is the polarization 4-vector of the on-shell ($q^2 = 0$) gluon. In the corresponding fragmentation process (such as Figure 2), a virtual gluon is produced with large energy $q_0 >> m_c$ but small invariant mass $s = q^2$ of order $m_c^2$, and it subsequently fragments into an $\eta_c$ and a real gluon. The fragmentation probability $\int_0^1 dz D(z,m_c)$ is the ratio of the rates for these two processes. In Feynman gauge, the fragmentation term in the matrix element for the $\eta_c$ production has the form ${\cal M}_\alpha (-i g^{\alpha \beta} / q^2) {\cal A}_\beta$, where ${\cal M}_\alpha$ is the matrix element for the production of the virtual gluon and ${\cal A}_\beta$ is the amplitude for $g^* \rightarrow \eta_c g$. The fragmentation term is distinguished from the short distance terms in the matrix element by the presence of the small denominator $q^2$ of order $m_c^2$. The amplitude ${\cal A}_\beta$ can be written down using standard Feynman rules for quarkonium processes [@kks]. Multiplying ${\cal A}_\alpha$ by its complex conjugate and summing over final colors and spins, we get $$\begin{aligned} \sum {\cal A}_\alpha {\cal A}_\beta^* &=& {16 \pi \over 3} \alpha_s^2 {|R(0)|^2 \over 2 m_c } {1 \over (s - 4 m_c^2)^2} \Bigg( - (s - 4 m_c^2)^2 g_{\alpha \beta} \nonumber \\ && \;+\; 2 (s + 4 m_c^2) (p_\alpha q_\beta + q_\alpha p_\beta) \;-\; 4 s p_\alpha p_\beta \;-\; 16 m_c^2 q_\alpha q_\beta \Bigg) \;, \label{Asq} \end{aligned}$$ where $p$ is the 4-momentum of the $\eta_c$. Terms proportional to $q_\alpha$ or $q_\beta$ are gauge artifacts and can be dropped. In an appropriate axial gauge, $q_\alpha$ and $q_\beta$ are of order $m_c^2/q_0$ when contracted with the numerator of the propagator of the virtual gluon. 
In covariant gauges, the $q_\alpha$ and $q_\beta$ terms are not suppressed but are cancelled by other diagrams. In the $p_\alpha p_\beta$ term, we can set $p = zq + p_\perp$ up to corrections of order $m_c^2/q_0$, where $z$ is the longitudinal momentum fraction and $p_\perp$ is the transverse part of the 4-vector $p$. In a frame where $q = (q_0,0,0,q_3)$, $z = (p_0 + p_3)/(q _0 + q_3)$ and $p_\perp = (0,p_1,p_2,0)$. After averaging over the directions of the transverse momentum, $p_\perp^\alpha p_\perp^\beta$ can be replaced by $-g^{\alpha \beta} {{\vec p}_\perp}^{\;2}/2$, up to terms that are suppressed in axial gauge. The terms in (\[Asq\]) that contribute to fragmentation then reduce to $${ \sum {\cal A}_\alpha {\cal A}_\beta^* \;=\; {16 \pi \over 3} \alpha_s^2 {|R(0)|^2 \over 2 m_c } {1 \over (s - 4 m_c^2)^2} \left( (s - 4 m_c^2)^2 \;-\; 2 (1-z) (zs - 4 m_c^2) s \right) \left( - g_{\alpha \beta} \right) \;. } \label{Asqfrag}$$ We have used the conservation of the $q_0 - q_3$ component of the 4-momentum in the form $s = ({{\vec p}_\perp}^{\;2} + 4 m_c^2)/z + {{\vec p}_\perp}^{\;2}/(1-z)$. At this point, it is easy to calculate the rate for production of $\eta_c g$ in the limit $q_0^2 >> q^2 \sim m_c^2$ and divide it by the rate for production of an on-shell gluon. The resulting fragmentation probability is $${ \int_0^1 dz \; D_{g \rightarrow \eta_c}(z) \;=\; {\alpha_s^2 \over 3 \pi} {|R(0)|^2 \over 2 m_c } \int_{4 m_c^2}^\infty ds \int_{4 m_c^2/s}^1 dz \; {s^2 + 16 m_c^4 - 2 z (s + 4 m_c^2) s + 2 z^2 s^2 \over s^2(s - 4 m_c^2)^2} \;, } \label{Peta}$$ where $R(0)$ is the nonrelativistic radial wavefunction at the origin for the S-wave bound state. We have increased the upper endpoint of the integration over $s$ to infinity, because the resulting error is of order $m_c^2/q_0^2$, which we have been consistently neglecting. 
Interchanging orders of integration, we can read off the fragmentation function: $${ D_{g \rightarrow \eta_c}(z,2 m_c) \;=\; {1 \over 3 \pi} \alpha_s(2 m_c)^2 {|R(0)|^2 \over M_{\eta_c}^3 } \; \Bigg( 3 z - 2 z^2 + 2 (1-z) \log(1-z) \Bigg)\;. } \label{Deta}$$ We have set the scale in the fragmentation function and in the running coupling constant to $\mu = 2 m_c$, which is the minimum value of the invariant mass $\sqrt{s}$ of the fragmenting gluon. In the denominator, we have set $2 m_c = M_{\eta_c}$, which takes into account the correct phase space limitations and is accurate up to relativistic corrections. The value of the S-state wavefunction at the origin $R(0)$ is determined from the $\psi$ electronic width to be $|R(0)|^2 = (0.8 \; {\rm GeV})^3$. We use the value $\alpha_s(2 m_c) = 0.26$ for the strong coupling constant. Given the initial fragmentation function (\[Deta\]), the fragmentation function is determined at larger values of $\mu$ by solving the evolution equation (\[evol\]) with (\[Deta\]) as a boundary condition. The $z$-dependence of $D_{g \rightarrow \eta_c}(z,\mu)$ at the energy scales $\mu = 2 m_c$ and $\mu = 20 m_c$ is illustrated in Figure 3. The evolution causes the fragmentation function to decrease at large $z$ and to diverge at $z = 0$. A physical cross section like (\[fac\]) will still be well-behaved, because phase space limitations will place an upper bound on the parton energy $E/z$ which translates into a lower bound on $z$. It is evident from Figure 3 that taking into account the evolution of the fragmentation function can significantly increase the rate for the production process, particularly at small values of $z$. The fragmentation function for a gluon into $J/\psi$ can be calculated to leading order in $\alpha_s$ from the Feynman diagrams for $g^* \rightarrow \psi g g$. 
The square of the amplitude $\sum {\cal A}_\alpha {\cal A}_\beta^*$ for this process can be extracted from a calculation of the matrix element for $e^+ e^- \rightarrow \psi g g$ [@ks]. The calculation of the fragmentation function is rather involved and we present only the final result: $$\begin{aligned} D_{g \rightarrow \psi}(z, 2 m_c) \;=\; {5 \over 144 \pi^2} \alpha_s(2m_c)^3 {|R(0)|^2 \over M_\psi^3 } \int_0^z dr \int_{(r+z^2)/2z}^{(1+r)/2} dy \; {1 \over (1-y)^2 (y-r)^2 (y^2-r)^2} \nonumber \\ \sum_{i=0}^2 z^i \left( f_i(r,y) \;+\; g_i(r,y) {1+r-2y \over 2 (y-r) \sqrt{y^2-r}} \log{y-r + \sqrt{y^2-r} \over y-r - \sqrt{y^2-r}} \right) \;. \label{Dpsi} \end{aligned}$$ where the integration variables are $r = 4 m_c^2/s$ and $y = p \cdot q/s$. The functions $f_i$ and $g_i$ are $$\begin{aligned} f_0(r,y) &=& r^2(1+r)(3+12r+13r^2) \;-\; 16r^2(1+r)(1+3r)y \nonumber \\ &-& 2r(3-9r-21r^2+7r^3)y^2 \;+\; 8r(4+3r+3r^2)y^3 \;-\; 4r(9-3r-4r^2)y^4 \nonumber \\ &-& 16(1+3r+3r^2)y^5 \;+\; 8(6+7r)y^6 \;-\; 32 y^7 \;, \label{f0} \\ f_1(r,y) &=& -2r(1+5r+19r^2+7r^3)y \;+\; 96r^2(1+r)y^2 \;+\; 8(1-5r-22r^2-2r^3)y^3 \nonumber \\ &+& 16r(7+3r)y^4 \;-\; 8(5+7r)y^5 \;+\; 32y^6 \;, \label{f1} \\ f_2(r,y) &=& r(1+5r+19r^2+7r^3) \;-\; 48r^2(1+r)y \;-\; 4(1-5r-22r^2-2r^3)y^2 \nonumber \\ &-& 8r(7+3r)y^3 \;+\; 4(5+7r)y^4 \;-\; 16y^5 \;, \label{f2} \\ g_0(r,y) &=& r^3(1-r)(3+24r+13r^2) \;-\; 4r^3(7-3r-12r^2)y \;-\; 2r^3(17+22r-7r^2)y^2 \nonumber \\ &+& 4r^2(13+5r-6r^2)y^3 \;-\; 8r(1+2r+5r^2+2r^3)y^4 \;-\; 8r(3-11r-6r^2)y^5 \nonumber \\ &+& 8(1-2r-5r^2)y^6 \;, \label{g0} \\ g_1(r,y) &=& -2r^2(1+r)(1-r)(1+7r)y \;+\; 8r^2(1+3r)(1-4r)y^2 \nonumber \\ &+& 4r(1+10r+57r^2+4r^3)y^3 \;-\; 8r(1+29r+6r^2)y^4 \;-\; 8(1-8r-5r^2)y^5 , \label{g1} \\ g_2(r,y) &=& r^2(1+r)(1-r)(1+7r) \;-\; 4r^2(1+3r)(1-4r)y \nonumber \\ &-& 2r(1+10r+57r^2+4r^3)y^2 \;+\; 4r(1+29r+6r^2)y^3 \;+\; 4(1-8r-5r^2)y^4 . 
\label{g2} \end{aligned}$$ The integrals over $r$ and $y$ in (\[Dpsi\]) must be evaluated numerically to obtain the fragmentation function at the energy scale $\mu = 2 m_c$. At larger values of $\mu$, it is found by solving the evolution equation (\[evol\]) with (\[Dpsi\]) as a boundary condition. The $z$-dependence of $D_{g \rightarrow \psi}(z,\mu)$ at the energy scales $\mu = 2 m_c$ and $\mu = 20 m_c$ is illustrated in Figure 4. The evolution causes the fragmentation function to decrease at large $z$ and to diverge at $z = 0$. An order of magnitude estimate of the gluon fragmentation contribution to quarkonium production in any high energy process can be obtained by multiplying the cross section for producing gluons of energy $E > 2 m_Q$ by the initial fragmentation probability $\int_0^1 dz D(z,2m_Q)$. For the $\eta_c$ and $\psi$, these probabilities are $4.6 \cdot 10^{-5}$ and $2.8 \cdot 10^{-6}$. Thus, we can expect the asymptotic production rate of $\psi$ to be more than an order of magnitude smaller than $\eta_c$. The initial fragmentation function for the splitting of a gluon into the $^3S_1$ bottomonium state $\Upsilon$ is also given by (\[Dpsi\]), except that the mass $M_\psi$ is replaced by $M_\Upsilon$, the scale $2 m_c$ is replaced by $2 m_b$, and $R(0)$ is the appropriate wavefunction at the origin. The initial fragmentation probability is about $5.3 \cdot 10^{-7}$, which is smaller than for $\psi$ by about a factor of 5. In hadron colliders, short distance processes dominate the direct production of charmonium at small $p_T$, because fragmentation processes are suppressed by powers of $\alpha_s(m_c)$. At sufficiently large $p_T$, fragmentation processes must dominate, because the short distance processes are suppressed by powers of $m_c^2/p_T^2$. 
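The quoted $\eta_c$ fragmentation probability can be reproduced directly from (\[Deta\]): the bracket integrates to exactly $1/3$, so $\int_0^1 dz\, D_{g \rightarrow \eta_c}(z,2m_c) = \alpha_s^2 |R(0)|^2/(9\pi M_{\eta_c}^3)$. A short numerical check (the value $M_{\eta_c}\approx 2.98$ GeV for $2m_c$ is our assumption):

```python
import math

alpha_s = 0.26                 # alpha_s(2 m_c), as quoted in the text
R0_sq = 0.8 ** 3               # |R(0)|^2 in GeV^3, from the psi electronic width
M = 2.98                       # M_{eta_c} in GeV (our assumption for 2 m_c)

def bracket(z: float) -> float:
    # the z-dependence of D_{g -> eta_c}(z, 2 m_c)
    return 3 * z - 2 * z * z + 2 * (1 - z) * math.log(1 - z)

N = 100_000                    # midpoint rule avoids the z = 1 endpoint
prob = alpha_s ** 2 / (3 * math.pi) * R0_sq / M ** 3 \
     * sum(bracket((i + 0.5) / N) for i in range(N)) / N

# the bracket integrates to exactly 1/3, so the probability is
# alpha_s^2 |R(0)|^2 / (9 pi M^3), about 4.6e-5 as quoted in the text
exact = alpha_s ** 2 * R0_sq / (9 * math.pi * M ** 3)
assert abs(prob - exact) / exact < 1e-3
assert 4.4e-5 < prob < 4.9e-5
```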
We can make a quantitative estimate of the $p_T$ at which the crossover occurs by comparing the differential cross section for the short distance process with the differential cross section for the gluon scattering process $g g \rightarrow g g$ multiplied by the appropriate fragmentation probability. For simplicity, we consider $90^\circ$ scattering in the $g g$ center of mass frame. In terms of $p_T$, we have $s = 4 p_T^2$. In the limit $p_T >> m_c$, the differential cross section for $g g \rightarrow g \eta_c$ is [@br] $d \sigma/dt = 81 \pi \alpha_s^3 |R(0)|^2/(256 M_{\eta_c} p_T^6)$. The differential cross section for $g g \rightarrow g g$ is $d \sigma/dt = 243 \pi \alpha_s^2 / (128 p_T^4)$. To allow for fragmentation of either of the two outgoing gluons, we multiply by twice the initial fragmentation probability $\alpha_s^2 |R(0)|^2 / (9 \pi M_{\eta_c}^3)$. The resulting differential cross section exceeds that for the short distance process at $p_T \approx \sqrt{3 \pi/4 \alpha_s} M_{\eta_c} \approx 3 M_{\eta_c}$. The differential cross section for $g g \rightarrow g \psi$ is [@br] $d \sigma/dt = 5 \pi \alpha_s^3 |R(0)|^2 M_\psi / ( 128 p_T^8 )$ and for this case we estimate the crossover point to be $p_T \approx \sqrt{1.05/\alpha_s} M_\psi \approx 2 M_\psi$. Thus fragmentation should dominate over short distance production at values of $p_T$ that are being measured in present collider experiments. The importance of fragmentation for charmonium production in high energy processes can also be seen from the surprisingly large rate [@bck] for $Z^0 \rightarrow \psi c {\bar c}$, which is two orders of magnitude larger than that of $Z^0 \rightarrow \psi g g$. The explanation for this is that $Z^0 \rightarrow \psi g g$ is a short distance process, while $Z^0 \rightarrow \psi c {\bar c}$ includes a fragmentation contribution enhanced by $(M_Z/M_\psi)^2$.
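The $\eta_c$ crossover estimate can be reproduced by equating the two differential cross sections above: $|R(0)|^2$ drops out and $p_T^2 = (3\pi/4\alpha_s)\,M_{\eta_c}^2$ follows. A small sketch (the numerical values of $M$ and $|R(0)|^2$ below are only used to verify the algebra, not physical inputs):

```python
import math

alpha_s = 0.26                 # alpha_s(2 m_c), as quoted in the text

# eta_c crossover: equating 81 pi a^3 |R|^2 / (256 M pT^6) with
# 2 * [a^2 |R|^2 / (9 pi M^3)] * 243 pi a^2 / (128 pT^4)
# gives pT^2 = (3 pi / 4 a) M^2:
ratio_eta = math.sqrt(3 * math.pi / (4 * alpha_s))   # pT / M_{eta_c}
assert 2.9 < ratio_eta < 3.1                          # "about 3 M_{eta_c}"

# psi crossover, using the quoted estimate pT ~ sqrt(1.05 / alpha_s) M_psi:
ratio_psi = math.sqrt(1.05 / alpha_s)
assert 1.9 < ratio_psi < 2.1                          # "about 2 M_psi"

# verify that both cross sections really coincide at the eta_c crossover
M, R2 = 3.0, 0.512             # arbitrary check values; they cancel out
pT = ratio_eta * M
short = 81 * math.pi * alpha_s ** 3 * R2 / (256 * M * pT ** 6)
frag = 2 * (alpha_s ** 2 * R2 / (9 * math.pi * M ** 3)) \
     * 243 * math.pi * alpha_s ** 2 / (128 * pT ** 4)
assert abs(short - frag) / short < 1e-9
```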
The enhanced contribution arises from the decay $Z^0 \rightarrow c {\bar c}$, followed by the splitting $c \rightarrow \psi c$ or ${\bar c} \rightarrow \psi {\bar c}$. With this insight, the lengthy calculation presented in Ref. [@bck] can be reduced to a simple calculation of the fragmentation function $D_{c \rightarrow \psi}(z,M_Z)$. This calculation will be presented elsewhere [@bcy]. The probability for a virtual gluon to decay into a $\psi$ was calculated by Hagiwara, Martin and Stirling [@hms] and used to study $J/\psi$ production from gluon jets at the LEP collider. They did not calculate the fragmentation function $D_{g \rightarrow \psi}(z,\mu)$ and thus were unable to sum up large logarithms of $M_Z/m_c$. Furthermore, their expression for the fragmentation probability is missing a factor of $1/(16 \pi^2)$, which explains the surprisingly large rate that they found for this $\psi$-production mechanism. A complete calculation of the fragmentation contribution to $\psi$ production in high energy processes must include the production of the P-wave charmonium states $\chi_{cJ}$, followed by their radiative decays into $\psi$. In calculating the fragmentation functions for gluons to split into P-wave charmonium states, there are two distinct contributions that must be included at leading order in $\alpha_s$. The P-wave state can arise either from the production of a collinear $c {\bar c}$ pair in a color-singlet P-wave state, or from the production of a collinear $c {\bar c}$ pair in a color-octet S-wave state [@bbly]. Calculations of the P-wave fragmentation functions will be presented elsewhere [@by]. We have shown in this letter that the dominant production mechanism for quarkonium in high energy processes is fragmentation, the production of a high energy parton followed by its splitting into the quarkonium state. We calculated the fragmentation functions $D(z,\mu)$ for gluons to split into S-wave quarkonium states to leading order in $\alpha_s$. 
The fragmentation functions satisfy Altarelli-Parisi evolution equations which can be used to sum up large logarithms of $\mu/m_Q$. Most previous calculations of quarkonium production have considered only short-distance production mechanisms, because the fragmentation mechanism is often of higher order in $\alpha_s$. At high energies $E$, the fragmentation mechanism dominates because it is enhanced by a factor of $(E/m_Q)^2$. In the case of charmonium production at large $p_T$ in hadron colliders, we estimated the transverse momentum at which fragmentation begins to dominate to be less than 10 GeV. All previous calculations of quarkonium production from this and other high energy processes must therefore be reexamined, taking into account the possibility of production by fragmentation. This work was supported in part by the U.S. Department of Energy, Division of High Energy Physics, under Grant DE-FG02-91-ER40684. We would like to acknowledge G.P. Lepage for suggesting the importance of fragmentation in the production of quarkonium in high energy processes. We thank M.L. Mangano for pointing out an error in an earlier version of this paper. We would also like to acknowledge useful conversations with G.T. Bodwin, K. Cheung, J. Gunion, and W.J. Stirling. [**Figure Captions**]{} 1. A Feynman diagram for $g g \rightarrow c {\bar c} g$ that contributes to $\eta_c$ production at order $\alpha_s^3$. 2. A Feynman diagram for $g g \rightarrow c {\bar c} g g$ that contributes to $\eta_c$ production at order $\alpha_s^4$. 3. The fragmentation function $D_{g \rightarrow \eta_c}(z,\mu)$ as a function of $z$ for $\mu = 2 m_c$ (solid line) and $\mu = 20 m_c$ (dotted line). 4. The fragmentation function $D_{g \rightarrow \psi}(z,\mu)$ as a function of $z$ for $\mu = 2 m_c$ (solid line) and $\mu = 20 m_c$ (dotted line).
--- abstract: 'The problem of the evaluation of the two-photon decay width of excited states in hydrogen is considered. Two different approaches to the evaluation of the width including cascade channels are employed: the summation of the transition probabilities for various decay channels and the evaluation of the imaginary part of the Lamb shift. As an application, the decay channels for the $3s$ level of the hydrogen atom are evaluated, including the cascade transition probability $3s-2p-1s$ as well as the “pure” two-photon decay probability and the interference between both channels. An important role should be assigned to the “pure” two-photon probability in the astrophysical context, since processes of this kind provide a possibility for the decoupling of radiation and matter in the early Universe. We demonstrate the ambiguity of the separation of the “pure” two-photon contribution and criticize the existing attempts at such a separation.' author: - 'L. Labzowsky$^{1),2)}$, D. Solovyev$^{1)}$ and G. Plunien$^{3)}$' title: 'Two-photon decay of excited levels in hydrogen: the ambiguity of the separation of cascades and pure two-photon emission' --- Introduction ============ During recent years the two-photon decay processes in hydrogen have attracted special attention due to new and very accurate observations of the cosmic microwave background temperature and polarization anisotropies [@Hinshaw; @Page]. In view of these observations it becomes important to understand the hydrogen recombination history with high precision. In the early Universe the strong Lyman-alpha $2p-1s$ transition did not permit the atoms to remain in their ground states: each photon released in such a transition in one atom was immediately absorbed by another one. However, due to the very weak $2s-1s$ two-photon decay process the radiation could finally decouple from the interaction with matter and thus permit a successful recombination.
The role of the $2s-1s$ two-photon decay was first established in [@Zeldovich; @Peebles]. The other two-photon channels, i.e. $ns-1s$, $nd-1s$ decays, were also discussed in [@cea86]-[@Wong]. At the present level of accuracy reached in the astrophysical observations these contributions also become important. A crucial difference between the decay of the $ns$ (with $n>2$, similarly for $nd$) and the $2s$ levels consists in the presence of the cascade transitions as the dominant decay channels. In the case of the $2s$ level the cascade transitions are absent. Since the cascade photons can be effectively reabsorbed, the problem of separating the “pure” two-photon contribution from the cascade contribution arises. The interference between the two decay channels, i.e. the “pure” two-photon decay and the cascade, should also be taken into account. A similar problem arose in the theory of two-electron highly charged ions (HCI) [@Drake]-[@LabShon]. Drake [@Drake] evaluated for the first time the two-photon E1M1 transition in the presence of a cascade transition in the He-like uranium ion ($Z=92$). Later Savukov and Johnson performed a similar calculation for a variety of He-like HCI ($50\leq Z\leq 92$) [@Savukov]. In [@Drake; @Savukov] the “pure” two-photon contribution was obtained by subtracting a Lorentzian fit for the cascade contribution from the total two-photon decay frequency distribution. The existence of the interference terms was recognized in [@Drake; @Savukov], but only approximately included in the Lorentzian fit as an asymmetric deviation from the pure Lorentzian. A rigorous QED approach for the evaluation of the two-photon decay width with cascades was developed in [@LabShon] (see also [@AndrLab]). This approach was based on the standard evaluation of the decay probability as a transition probability to the lower levels.
In the case of cascades the integral over the emitted photon frequency distribution becomes divergent due to the singular terms corresponding to the cascade resonances. To avoid such a singularity, the resummation of an infinite series of the electron self-energy insertions was performed in [@LabShon]. This resummation sums to a geometric progression, and in this way the self-energy insertion (and the level width as its imaginary part) enters the energy denominator and shifts the pole on the real axis into the complex energy plane, thus making the integral finite. With this approach F. Low first derived the Lorentz profile from QED [@Low]. In [@Drake; @Savukov] the level widths in the singular energy denominators were also introduced, though without special justification. Similarly, i.e. by introducing the level widths in the singular energy denominators, the two-photon decay of $ns$ and $nd$ excited levels in hydrogen was evaluated most recently in the astrophysical papers [@Chluba; @Hirata]. In [@LabShon] the ambiguity of the separation of the “pure” two-photon decay and cascades was first revealed for HCI, where it was shown that the interference terms can contribute essentially to the total decay probability. The full QED treatment of the two-photon decay process with inclusion of cascades and interference terms was performed in [@Chluba; @Hirata] for the hydrogen atom. However, in [@Chluba; @Hirata] the ambiguity of the separation of the “pure” two-photon decay was neither emphasized nor demonstrated explicitly. This will be the subject of the present paper. Our numerical results for the $3s$-level in hydrogen are in agreement with the most accurate recent calculations in [@Chluba] (see Section 11 below). However, our results disagree strongly with the value obtained in [@jas08]. The result in [@jas08] follows from the “alternative” approach to the evaluation of the two-photon decay, developed by Jentschura [@Jent1; @Jent2; @Jent3].
This “alternative” approach is based on the evaluation of the imaginary part of the two-loop Lamb shift for the $3s$-level. The idea to use the imaginary part of the Lamb shift for calculating the radiative corrections to the one-photon transitions is due to Barbieri and Sucher [@bas78] and is definitely adequate for this purpose. Later this approach was used in [@Sapir] for HCI. Still, the extension of this approach to the evaluation of the two-photon decay width, when the cascade transitions are also present, is more involved and requires special care. In [@Jent1; @Jent2; @Jent3] it was claimed that the singularity in the integration over the photon frequency distribution for the two-photon decay is absent and thus the finite integral represents directly the “pure” two-photon decay rate. This statement forced us to perform a careful revision of the evaluation of the two-photon decay width via the imaginary part of the second-order Lamb shift, which will be presented below in Sections 6-10. In our approach we employed the consequences of the “optical theorem” for the $S$-matrix and the Gell-Mann and Low adiabatic formula [@Gell] for the evaluation of the energy level shift via the $S$-matrix. The results of our analysis, unlike Jentschura’s derivations, reveal that the evaluation of the two-photon decay width via the imaginary part of the Lamb shift yields expressions identical to those of the standard QED description via summation of transition probabilities. The integration over the emitted photon frequency for the two-photon decay with cascades remains divergent and requires the introduction of the level widths in the singular energy denominators (resummation of QED radiative corrections). The detailed derivations employing both methods (summation of transition probabilities and evaluation of the imaginary part of the Lamb shift) are given and numerical results for the $3s$-level in hydrogen are provided.
These results demonstrate clearly the ambiguity of the separation of the “pure” two-photon and cascade contributions at the accuracy required for the modern astrophysical investigations (i.e. 1%). Our paper is organized as follows. In Section 2 we shortly review the standard derivation of the one-photon width of an excited atomic level via the summation over transition probabilities to the lower levels. This presentation of formulas, though available in textbooks on quantum electrodynamics, is necessary for the comparison with the results obtained by the other approaches. It is given for the reader's convenience, to make the paper self-contained and to allow our derivations to be followed without consulting additional literature. The same is done (in an even more compressed way) in Section 3 for the two-photon decay width. In Section 4 we repeat (again very briefly) the standard understanding of the situation for the two-photon decay with cascades and present the formulas employed for the appropriate calculations in the numerous works on the subject, beginning from [@Drake]. For the justification of this procedure we follow [@LabShon]. In Section 5 we evaluate the one-photon decay width via the imaginary part of the Lamb shift. This is again a well-known result, but it will also be necessary to confirm our further derivations. In this Section we follow the derivations of [@LabKlim] in a more condensed way. In Section 6 we evaluate the imaginary part of the energy level shift employing the adiabatic formula of Gell-Mann and Low. The key idea of our approach - the application of the “optical theorem” - is formulated in Section 7. Actually, we employ here the unitarity of the adiabatic $S$-matrix. The formulated approach is applied to the evaluation of the one-photon width in Section 8, to the evaluation of the two-photon width (without cascades) in Section 9 and to the evaluation of the two-photon width (with cascades) in Section 10.
In these sections we compare our derivations with the standard QED derivations and analyze the discrepancy with Jentschura’s results in Section 10. In Section 11 the numerical results for the $3s$-level in hydrogen are given and the ambiguity of separating the “pure” two-photon and cascade contributions is demonstrated. Finally, in Section 12 we formulate our version of the problem of the cascade separation in astrophysics and present our recommendations for the resolution of this problem. Relativistic units $\hbar=c=1$ are employed. One-photon decay width via summation of transition probabilities ================================================================ Transition probability ---------------------- The first-order matrix element of the $S$-matrix describing the one-photon emission in a one-electron atom (see Fig. 1) is given by $$\begin{aligned} \label{1} \langle A'|\hat{S}^{(1)}|A\rangle = e \int d^4 x\, \bar{\psi}_{A'}(x)\gamma_{\mu}A^*_{\mu}(x)\psi_A(x)\,.\end{aligned}$$ Here $\hat{S}^{(1)}$ is the first-order $S$-matrix, $e$ is the electron charge, $\psi_A(x) = \psi_A(\vec{r})e^{-i E_A t}$, $\psi_A(\vec{r})$ is the solution of the Dirac equation for the atomic electron, $E_A$ is the Dirac energy, $\bar{\psi}_{A'} = \psi_{A'}^\dagger \gamma_0$ is the Dirac conjugated wave function with $\psi_{A'}^{\dagger}$ being its Hermitian conjugate and $\gamma_{\mu} = (\gamma_0, \vec\gamma)$ are the Dirac matrices.
The photon field $A_{\mu}(x)$ appears in terms of eigenmodes of the form $$\begin{aligned} \label{2} A^{(\vec e,\,\vec k)}_{\mu}(x) = \sqrt{\frac{2\pi}{\omega}}\,e^{(\lambda)}_{\mu}e^{i(\vec{k}\vec{r}-\omega t)} = \sqrt{\frac{2\pi}{\omega}}\,e^{(\lambda)}_{\mu}e^{-i\omega t}\,A^{(\vec e,\,\vec k)}_{\mu}(\vec r\,) \, ,\end{aligned}$$ where $e^{(\lambda)}_{\mu}$ is the photon polarization 4-vector, $k=(\vec{k},\omega)$ is the photon momentum 4-vector ($\vec{k}$ is the wave vector, $\omega=|\vec{k}|$ is the photon frequency), $x\equiv(\vec{r},t)$ is the coordinate 4-vector ($\vec{r}, t$ are the space- and time-coordinates). Describing emitted (real) photons in the Coulomb gauge implies the transversality condition $$\begin{aligned} \label{3} \gamma_{\mu}e^{(\lambda)}_{\mu}= \vec{e}^{\,(\lambda)} \vec{\gamma}\, ,\end{aligned}$$ where $ \vec{e}^{\,(\lambda)}$ is the 3-vector of the photon polarization. Then integrating over the time variable yields $$\begin{aligned} \label{4} \langle A'|\hat{S}^{(1)}|A\rangle &=& 2\pi\,\left(\vec{A}^*_{\vec e, \vec k} \vec{\alpha}\right)_{A'A}\, \delta\left(\omega-E_A+E_{A'}\right) \nonumber \\ &=& 2\pi\,e\sqrt{\frac{2\pi}{\omega}}\,\left(( \vec{e}^{\,*}\vec{\alpha})\, e^{-i\vec{k}\vec{r}}\right)_{A'A}\,\delta\left(\omega-E_A+E_{A'}\right)\, .\end{aligned}$$ Here $\vec{\alpha} = \gamma_0\vec\gamma$ are the Dirac matrices and $(\dots )_{A'A}$ denotes the spatial matrix element $\langle A'|\dots |A\rangle$ with the wave functions $\psi^{\dagger}_{A'}(\vec{r}) = \langle A'|\vec r\rangle$ and $\psi_{A}(\vec{r}) = \langle\vec r|A\rangle$, respectively. The transition amplitude $U_{A'A}$ is defined as $$\begin{aligned} \label{5} \langle A'|\hat{S}^{(1)}|A\rangle = -2\pi\, i\delta\left(\omega-E_A+E_{A'}\right)U_{A'A}^{(1)}\, .\end{aligned}$$ The transition probability cannot be defined straightforwardly as $\left|\langle A'|\hat{S}^{(1)}|A\rangle \right|^2$, since the square of a $\delta$-function is actually meaningless.
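The finite-time regularization introduced next can be checked numerically: the finite-time representation works out to $\delta_T(E) = \sin(ET/2)/(\pi E)$, and its square integrates to exactly $T/(2\pi)$, which is the origin of the factor $T$ that makes the probability proportional to the observation time. A minimal numerical sketch (the value $T = 40$ and the grid are arbitrary choices for illustration):

```python
import numpy as np
from scipy.integrate import trapezoid

T = 40.0                                    # observation time, arbitrary units
E = np.linspace(-400.0, 400.0, 2_000_001)   # dense, wide energy grid

dT = np.empty_like(E)
nz = E != 0.0
dT[nz] = np.sin(E[nz] * T / 2) / (np.pi * E[nz])
dT[~nz] = T / (2 * np.pi)                   # limiting value of delta_T at E = 0

# Integrating the square of delta_T over E reproduces the factor T/(2*pi),
# so |S|^2 grows linearly with T and the rate W = |S|^2 / T is well defined.
norm = trapezoid(dT**2, E)
```

The small deficit with respect to $T/(2\pi)$ comes from truncating the $1/E^2$ tail at the edges of the grid.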
The standard way to overcome this technical difficulty is to replace one of the two $\delta$-functions, which arise in Eq. (\[1\]) after integration over time $t$ as the Fourier integral $$\begin{aligned} \label{6} \delta(E)=\frac{1}{2\pi}\int\limits_{-\infty}^{\infty}e^{iEt}dt\, ,\end{aligned}$$ by a representation $\delta_T(E)$. The latter is introduced by restricting the integration to the finite time interval $(-T/2,+T/2)$ with the result $$\begin{aligned} \label{7} \delta_T(E) = \frac{1}{2\pi}\int\limits_{-T/2}^{T/2}e^{iEt}dt\, .\end{aligned}$$ Multiplying $\delta_T(E)$ by the second $\delta$-function $\delta(E)$ results in the substitution $$\begin{aligned} \nonumber \delta (E) \delta_T(E) = \delta (E)\delta_T(0) =\delta (E) \frac{T}{2\pi}\, .\end{aligned}$$ Thus, the probability of the process appears to be proportional to the observation time interval $T$. It is natural then to introduce the transition probability per unit time (transition rate) by the definition $$\begin{aligned} \label{8} W_{A'A}=\lim_{T\rightarrow\infty}\,\frac{1}{T}\left|\langle A'|\hat{S}^{(1)}(T)|A\rangle\right|^2 = 2\pi\left|U_{A'A}^{(1)}\right|^2\delta\left(\omega-E_A+E_{A'}\right)\, .\end{aligned}$$ We will compare this definition with another one in Section 8. If the final state belongs to the continuous spectrum (due to the emitted photon in our case) the differential transition probability should be introduced: $$\begin{aligned} \label{9} dW_{A'A}(\vec{k},\vec{e})=2\pi\left|U_{A'A}^{(1)}\right|^2\delta\left(\omega-E_A+E_{A'}\right) \frac{d\vec k}{(2\pi)^3}\, ,\end{aligned}$$ where $d\vec k\equiv d^3k = \omega^2 d\Omega_{\vec\nu}d\omega$. Integration in Eq.
(\[9\]) over $\omega$ gives the probability of the photon emission with polarization $\vec{e}$ in the direction $\vec{\nu}\equiv\vec{k}/\omega$ per unit time (and solid angle $d\Omega_{\vec\nu}\equiv d\vec\nu$): $$\begin{aligned} \label{10} dW_{A'A}=\frac{e^2}{2\pi}\omega_{A'A}\left|\left(( \vec{e}^{\,*}\vec{\alpha})e^{-i\vec{k}\vec{r}}\right)_{A'A}\right|^2d\vec{\nu}\, ,\end{aligned}$$ where $\omega_{A'A}=E_A-E_{A'}$. Note that due to the presence of the $\delta$-function we should formally integrate over the frequency interval $(-\infty, \infty)$, while physically the frequency of real photons lies within the interval $(0, \infty)$. The total transition probability follows from Eq. (\[10\]) after integration over angles and summation over the polarization: $$\begin{aligned} \label{11} W_{A'A}=\frac{e^2}{2\pi}\omega_{A'A}\sum\limits_{\vec{e}}\int d\vec{\nu}\left|\left(( \vec{e}^{\,*}\vec{\alpha})e^{-i\vec{k}\vec{r}}\right)_{A'A}\right|^2\end{aligned}$$ This summation and integration will be carried out in Section 5. Nonrelativistic limit --------------------- For the atomic electron the characteristic scales for $|\vec{r}|$ and $|\vec{k}|=\omega$ are: $|\vec{r}|\sim 1/m\alpha Z$, $\omega=E_{A}-E_{A'}\sim m(\alpha Z)^2$, where $m$ is the electron mass, $\alpha$ is the fine structure constant, $Z$ is the charge of the nucleus. Then in the nonrelativistic case, in particular for the hydrogen atom ($Z=1$), the exponential function in the matrix element in Eq. (\[11\]) can be replaced by 1, since $\vec{k}\vec{r}\sim \alpha$. In the nonrelativistic limit the matrix element involving the Dirac matrices $\vec{\alpha}$ (the electron velocity operator $\hat{\vec{v}}$ in the relativistic theory) can be substituted by $\hat{\vec{p}}/m$, where $\hat{\vec{p}}$ is the electron momentum operator. Then Eq.
(\[11\]) takes the form $$\begin{aligned} \label{12} W_{A'A}=\frac{e^2}{2\pi m^2}\omega_{A'A}\sum\limits_{\vec{e}}\int d\vec{\nu}\left|\left(\vec{e}\vec{p}\right)_{A'A}\right|^2\, ,\end{aligned}$$ where the notation $(...)_{A'A}$ now also implies evaluation of the matrix element with Schrödinger wave functions. Performing summation over the polarization with the help of the known formula $$\begin{aligned} \label{13} F=\sum\limits_{\vec{e}}( \vec{e}^{\,*}\vec{a})(\vec{e}\vec{b})=(\vec{a}\times\vec{\nu})(\vec{b}\times{\vec{\nu}})\end{aligned}$$ with $\vec{a}$, $\vec{b}$ being two arbitrary vectors, we arrive at the standard expression for the one-photon probability in the “velocity” form after integrating over $\vec{\nu}$: $$\begin{aligned} \label{14} W_{A'A}=\frac{4}{3}\frac{e^2}{m^2}\omega_{A'A}\left|\left(\vec{p}\right)_{A'A}\right|^2\, .\end{aligned}$$ The so-called “length” form $$\begin{aligned} \label{16} W_{A'A}=\frac{4}{3}\omega_{A'A}^3\left|(\vec{d})_{A'A}\right|^2\end{aligned}$$ involving the electric dipole moment operator $\vec{d}=e\vec{r}$ of the electron can be obtained via the quantum mechanical relation $$\begin{aligned} \label{15} \omega_{A'A}(\vec{r})_{A'A}=\frac{i}{m}(\vec{p})_{A'A}\, .\end{aligned}$$ Eq. (\[16\]) reveals that in the non-relativistic limit the atomic radiation is essentially the radiation of the electric dipole. In the nonrelativistic limit the total one-photon width of the atomic level $A$ results as $$\begin{aligned} \label{17} \Gamma_{A}^{(1)}=\frac{4}{3}\omega_{A'A}^3\sum\limits_{E_{A'}<E_A}\left|(\vec{d})_{A'A}\right|^2,\end{aligned}$$ where the summation runs over all levels $A'$ with energy $E_{A'}$ lower than that of the level $A$ (provided the transitions $A\rightarrow A'$ are allowed in the nonrelativistic limit). 
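As an illustrative numerical check of Eq. (\[17\]) (not part of the derivation above), one can evaluate the width of the $2p$ level, whose only E1 channel is $2p\rightarrow 1s$, in atomic units. The radial functions below are the standard hydrogen ones; the factor $1/3$ is the angular average over the $2p$ magnetic substates, and the factor $\alpha^3$ converts $\omega^3$ to $(\omega/c)^3$ in atomic units:

```python
import numpy as np
from scipy.integrate import quad

# Hydrogen radial wave functions in atomic units (Z = 1)
R_1s = lambda r: 2.0 * np.exp(-r)
R_2p = lambda r: r * np.exp(-r / 2) / (2.0 * np.sqrt(6.0))

# Radial dipole integral <R_1s| r |R_2p>, as in Eq. (25) below
I, _ = quad(lambda r: r**3 * R_1s(r) * R_2p(r), 0.0, np.inf)

alpha = 1.0 / 137.035999    # fine-structure constant
omega = 0.5 - 0.125         # E_2p - E_1s = 3/8 a.u.

# Length form, Eq. (17): Gamma = (4/3) (omega/c)^3 |d|^2, with the angular
# factor 1/3 from averaging over the 2p magnetic substates
Gamma_2p = (4.0 / 3.0) * alpha**3 * omega**3 * I**2 / 3.0

tau_2p = 2.418884e-17 / Gamma_2p  # lifetime in seconds (1 a.u. of time = 2.4189e-17 s)
```

The result reproduces the well-known $2p$ lifetime of about $1.6$ ns.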
Two-photon decay width via summation of transition probabilities ================================================================ The two-photon transition probability $A\rightarrow A'+2\gamma$ corresponds to the following second-order $S$-matrix element (see Fig. 2) $$\begin{aligned} \label{18} \langle A'|\hat{S}^{(2)}|A\rangle = e^2\int d^4x_1d^4x_2\left(\bar{\psi}_{A'}(x_1)\gamma_{\mu_1}A^*_{\mu_1}(x_1)S(x_1x_2)\gamma_{\mu_2}A^*_{\mu_2}(x_2)\psi_A(x_2)\right),\end{aligned}$$ where $S(x_1x_2)$ is the Feynman propagator for the atomic electron. In the Furry picture the eigenmode decomposition reads (e.g. [@Akhiezer]) $$\begin{aligned} \label{19} S(x_1x_2)=\frac{1}{2\pi i}\int\limits_{-\infty}^{\infty}d\omega_1e^{i\omega_1(t_1-t_2)}\sum\limits_n\frac{\psi_n(x_1)\bar{\psi}_n(x_2)}{E_n(1-i0)+\omega_1}\, ,\end{aligned}$$ where the summation in Eq. (\[19\]) extends over the entire Dirac spectrum of electron states $n$ in the field of the nucleus. Using again Eqs. (\[5\]) and (\[9\]) for the two-photon transition and integrating over time and frequency variables in Eq. (\[18\]) for the sum of the contributions of both Feynman graphs (see Figs. 2a, 2b), we find $$\begin{aligned} \label{20} dW_{A'A}=2\pi\delta\left(E_A-E_{A'}-\omega-\omega'\right)\left|U_{A'A}^{(2)}\right|^2\frac{d\vec{k}}{(2\pi)^3}\frac{d\vec{k}'}{(2\pi)^3}\, ,\end{aligned}$$ $$\begin{aligned} \label{21} U_{A'A}^{(2)} = \frac{2\pi e^2}{\sqrt{\omega\omega'}}\left[\sum\limits_n\frac{\left(\vec{\alpha}\vec{A}^*_{\vec{e},\vec{k}}\right)_{A'n}\left(\vec{\alpha}\vec{A}^*_{ \vec{e}\,',\vec{k}'}\right)_{nA}}{E_n-E_A+\omega'}+\sum\limits_n\frac{\left(\vec{\alpha}\vec{A}^*_{ \vec{e}\,',\vec{k}'}\right)_{A'n}\left(\vec{\alpha}\vec{A}^*_{\vec{e},\vec{k}}\right)_{nA}}{E_n-E_A+\omega}\right] \end{aligned}$$ with the shorthand notation $\vec{A}_{\vec{e}, \vec{k}} = \vec{e}\,e^{i\vec{k}\vec{r}}$.
In what follows, we will be interested in the decay width of the $ns$ levels ($A\equiv ns \rightarrow A'\equiv 1s$) in hydrogen. In this section we focus on the case $n=2$, when the cascades are absent. In the nonrelativistic limit, after the integration over frequencies $\omega'$, over photon directions $d\vec{\nu}$, $d\vec{\nu}'$ and summation over all polarizations $\vec{e}$, $ \vec{e}\,'$ is performed, we obtain for the photon frequency distribution: $$\begin{aligned} \label{23} dW_{ns,1s}(\omega)=\frac{8\omega^3(\omega_0-\omega)^3}{27\pi}e^4\left|S_{1s,ns}(\omega)+S_{1s,ns}(\omega_0-\omega)\right|^2d\omega\, ,\end{aligned}$$ $$\begin{aligned} \label{24} S_{1s,ns}(\omega)=\sum\limits_{n'p}\frac{\langle R_{1s}|r|R_{n'p}\rangle\langle R_{n'p}|r|R_{ns}\rangle}{E_{n'p}-E_{ns}+\omega}\, ,\end{aligned}$$ $$\begin{aligned} \label{25} \langle R_{n'l'}|r|R_{nl}\rangle=\int\limits_{0}^{\infty}r^3R_{n'l'}(r)R_{nl}(r)dr\, ,\end{aligned}$$ where $\omega_0=E_{ns}-E_{1s}$, $R_{nl}(r)$ are the radial parts of the nonrelativistic hydrogen wave functions, and $E_{nl}$ are the hydrogen electron energies. Here we have used again the quantum-mechanical relation Eq. (\[15\]); Eq. (\[23\]) is written in the “length” form. The decay rate for the two-photon transition can be obtained by integration of Eq. (\[23\]) over the entire frequency interval $$\begin{aligned} \label{26} W_{ns,1s}=\frac{1}{2}\int\limits_0^{\omega_0}dW_{ns,1s}(\omega).\end{aligned}$$ In the case $n=2$ the cascade transitions are absent, the frequency distribution Eq. (\[23\]) is not singular and the integral Eq. (\[26\]) is convergent. Two-photon decay with cascades ============================== In the case of cascade transitions ($n> 2$), some terms in Eq. (\[24\]) become singular and the integral Eq. (\[26\]) diverges. This divergence has a physical origin: the emitted photon meets a resonance.
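The nature of this divergence, and of the standard cure described next, is easy to exhibit numerically: a bare pole $|\omega-\omega_1|^{-2}$ is not integrable, whereas a denominator shifted by $i\Gamma/2$ gives a Lorentzian whose frequency integral over the resonance is finite and equal to $2\pi/\Gamma$. The values of $\omega_1$ and $\Gamma$ below are arbitrary toy numbers, not the actual hydrogen widths:

```python
import numpy as np
from scipy.integrate import quad

omega1 = 0.0625   # toy resonance position (illustrative only)
Gamma = 1.0e-4    # toy total width of the resonance

# |1/(w - omega1 + i*Gamma/2)|^2 is a Lorentzian of total area 2*pi/Gamma;
# with Gamma = 0 the integrand diverges non-integrably at w = omega1.
lorentz = lambda w: 1.0 / ((w - omega1)**2 + (Gamma / 2)**2)
area, _ = quad(lorentz, omega1 - 1.0, omega1 + 1.0, points=[omega1])
```

The `points` argument tells `quad` where the narrow peak sits; the computed area agrees with $2\pi/\Gamma$ up to the small tails cut off at the interval edges.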
So the divergence can be avoided only by introducing the width of this resonance. This situation was studied in [@LabShon] for HCI. The same recipe can also be used in the case of the hydrogen atom. Following the prescriptions given in [@LabShon] we separate out the resonant terms (corresponding to cascades) in the sum over the intermediate states Eq. (\[24\]) and apply Low’s procedure [@Low] for the regularization of the corresponding expressions in the vicinity of the resonance frequency values. In practice this leads to the appearance of the energy level widths in the energy denominators. Then the Lorentz profiles arise for the resonant terms in the expression for the probability. However, the Lorentz profile is valid only in the vicinity of the resonance and cannot be extended too far from the resonance frequency value. As for any multichannel process, such a separation is an approximate procedure due to the existence of the interference terms. The integration over the entire frequency interval $[0,\omega_0]$ in Eq. (\[26\]) should be split into several subintervals, e.g. 5 in the case of the two-photon emission profile for the $3s$-level decay, see Fig. 3. The first interval (I) extends from $\omega=0$ up to the lower boundary of the second interval (II). The latter one encloses the resonance frequency value $\omega_1=E_{3s}-E_{2p}$. Within the interval (II) the resonant term $n=2$ in Eq. (\[24\]) should be subtracted from the sum over intermediate states and replaced by the term with a modified energy denominator. This modified denominator is $E_{2p}-E_{3s}+\omega+\frac{i}{2}\Gamma$, where $\Gamma=\Gamma_{2p}+\Gamma_{3s}$. The third interval (III) extends from the upper boundary of interval (II) up to the lower boundary of the interval (IV), the latter one enclosing another resonance frequency value $\omega_2=E_{2p}-E_{1s}$. Within the interval (IV) again the resonant term $n=2$ in Eq.
(\[24\]) should be replaced by the term with the modified denominator $E_{2p}-E_{1s}-\omega-\frac{i}{2}\Gamma_{2p}$. Finally, a fifth interval (V) ranges from the upper boundary of the interval (IV) up to the maximum frequency value $\omega_0$. Note that the frequency distribution $dW_{3s,1s}(\omega)$ is symmetric with respect to $\omega=\omega_0/2$ to within 1% accuracy (the asymmetry is due to the difference between $\Gamma=\Gamma_{2p}+\Gamma_{3s}$ and $\Gamma_{2p}$, respectively). The discussion on the choice of the size of the intervals (II) and (IV), which also defines the size of the other intervals, as well as on further approximations will be postponed until Section 11. In both Sections 3 and 4 the total width of the levels $\Gamma_{ns}$ is defined by the sum of the one-photon and two-photon transition rates to the lower levels. The cascade transitions yield the dominant contribution to $\Gamma_{ns}$ except for the case $n=2$. One-photon decay width via the imaginary part of the Lamb shift: direct evaluation ================================================================================== In this Section we will briefly recall the well-known derivation of the one-photon level width arising from pure one-photon transitions via the imaginary part of the Lamb shift. This may serve as a lucid introduction to the further derivations employing the adiabatic $S$-matrix theory. Let us consider a (free) one-electron ion or atom in the excited state $|A\rangle$ interacting with the vacuum $|0_\gamma\rangle$ of the quantized radiation field (with no additional external electromagnetic fields present). The initial state of the total system “atom + radiation field” $|I\rangle = |A\rangle|0_\gamma\rangle \equiv |A, 0_\gamma\rangle$ is described as a pure number eigenstate.
Since the one-loop vacuum-polarization contribution to the Lamb shift is real, the pure one-photon width of the excited level $A$ is given by the imaginary part of the one-loop electron self-energy contribution to the Lamb shift for the level $A$ (see Fig. 4): $$\begin{aligned} \label{27} \Delta E^{(2)}_A = Re\Delta E^{(2)}_A+i Im \Delta E^{(2)}_A = Re \Delta E^{(2)}_A - \frac{i}{2}\Gamma^{(1)}_A.\end{aligned}$$ Note that the superscript at $\Delta E^{(2)}$ refers to powers of the coupling constant $e$, while the superscript at $ \Gamma^{(1)}_A$ implies the number of emitted photons. The second-order $S$-matrix element corresponding to Fig. 4 reads $$\begin{aligned} \label{28} \langle A|S^{(2)}|A\rangle = \int d^4x_1d^4x_2\left(\bar{\psi}_A(x_1)\gamma_{\mu_1} S(x_1x_2)\gamma_{\mu_2}\psi_A(x_2)\right)D_{\mu_1\mu_2}(x_1x_2)\, ,\end{aligned}$$ where $D_{\mu_1\mu_2}(x_1x_2)$ is the photon propagator in the Feynman gauge: $$\begin{aligned} \label{29} D_{\mu_1\mu_2}(x_1x_2)=\frac{1}{2\pi i}\frac{\delta_{\mu_1\mu_2}}{r_{12}}\int\limits_{-\infty}^{\infty}d\omega_1e^{i\omega_1(t_1-t_2)+i|\omega_1|r_{12}},\end{aligned}$$ $r_{12}=|\vec{r}_1-\vec{r}_2|$. For the evaluation of this energy shift corresponding to an “irreducible” Feynman graph (i.e. a graph that cannot be cut into subgraphs of lower order by cutting only the electron lines; the graph in Fig. 4 belongs to the “irreducible” ones) the following formula can be used: $$\begin{aligned} \label{30} \Delta E^{(2)}_A=\langle A|U^{(2)}|A\rangle _{irr},\end{aligned}$$ where the amplitude $U^{(2)}$ is defined by the relation $$\begin{aligned} \label{31} \langle A'|S^{(2)}|A\rangle =-2\pi i\delta\left(E_{A'}-E_{A}\right)\langle A'|U^{(2)}|A\rangle .\end{aligned}$$ A general proof of Eqs. (\[30\]), (\[31\]) can be found in [@AndrLab]. Inserting the expressions for the electron (Eq. (\[19\])) and photon (Eq. (\[29\])) propagators into Eq.
(\[28\]) and integrating over time and frequency variables yields $$\begin{aligned} \label{32} \Delta E^{(2)}_A=\frac{e^2}{2\pi} \sum\limits_n\left(\frac{1-\vec{\alpha}_1\vec{\alpha}_2}{r_{12}}I_{nA}(r_{12})\right)_{AnnA}\, ,\end{aligned}$$ $$\begin{aligned} \label{33} I_{nA}(r_{12}) = \int\limits_{-\infty}^{\infty}\frac{e^{i|\omega_1|r_{12}}d\omega_1}{E_n(1-i0)-E_A+\omega_1}\, ,\end{aligned}$$ where $\vec{\alpha}_i$ ($i=1,2$) are the Dirac matrices acting on the different electron variables. The exact evaluation of the integral (\[33\]) in the complex $\omega_1$ plane results in [@LabKlim]: $$\begin{aligned} \label{34} I_{nA}(r_{12}) = \pi i \left(1+\frac{E_n}{|E_n|}\right)\left(1-\frac{\beta_{nA}}{|\beta_{nA}|}\right)e^{i|\beta_{nA}|r_{12}}+2i\frac{\beta_{nA}}{|\beta_{nA}|}\, \left[ci\left(|\beta_{nA}|r_{12}\right)\,\sin\left(|\beta_{nA}|r_{12}\right)-si\left(|\beta_{nA}|r_{12}\right)\,\cos\left(|\beta_{nA}|r_{12}\right)\right]\, ,\nonumber \\\end{aligned}$$ where $\beta_{nA}=E_n-E_A$ and the notations $si(x)$ and $ci(x)$ for the sine and cosine integral functions are employed. Then, according to Eq. (\[27\]) the pure one-photon width $\Gamma^{(1)}_A$ can be represented in the closed form [@LabKlim] $$\begin{aligned} \label{35} \Gamma^{(1)}_A=-\frac{e^2}{2}\sum\limits_n\left(1-\frac{\beta_{nA}}{|\beta_{nA}|}\right)\left(1+\frac{E_n}{|E_n|}\right)\left(\frac{1-\vec{\alpha}_1\vec{\alpha}_2}{r_{12}}\sin\left(|\beta_{nA}|r_{12}\right)\right)_{AnnA}\, ,\end{aligned}$$ where the summation over $\vec{e}$ and integration over $\vec{\nu}$, remaining in Eq. (\[11\]), is now performed. Note that the summation in Eq. (\[35\]) extends only over the electron states with energy $-m< E_n<E_A$. The variable $\beta_{nA}r_{12}$ is of the order $\alpha Z$, so that in the nonrelativistic limit $\beta_{nA}r_{12}\ll 1$. Then expanding $\sin\left(|\beta_{nA}|r_{12}\right)$ in Eq. (\[35\]) we can again make contact with the nonrelativistic expression Eq. (\[14\]), or Eq.
(\[17\]) for $\Gamma^{(1)}_A$. Decay width via the imaginary part of the Lamb shift: application of the adiabatic theory ========================================================================================= In this Section we will apply the Gell-Mann and Low adiabatic formula [@Gell] for the energy shift $\Delta E_A$ (Lamb shift) of an excited atomic state $A$ due to the interaction with the vacuum of the radiation field, $$\begin{aligned} \label{36} \Delta E_A = \lim_{\eta\rightarrow 0}\frac{1}{2}i\eta\frac{e\frac{\partial}{\partial e}\langle A|\hat{S}_{\eta}|A\rangle }{\langle A|\hat{S}_{\eta}|A\rangle }\, ,\end{aligned}$$ to the evaluation of the imaginary part of the Lamb shift. The adiabatic $S$-matrix $\hat{S}_{\eta}$ differs from the usual $S$-matrix by the presence of the adiabatic (exponential) factor $e^{-\eta|t|}$ in each (interaction) vertex. It refers to the concept of adiabatically switching the interaction on and off, introduced formally by the replacement $\hat{H}_{{\rm int}}(t) \longrightarrow \hat{H}^\eta_{{\rm int}}(t) = e^{-\eta|t|}\,\hat{H}_{{\rm int}}(t)$. The symmetrized version of the adiabatic formula containing $S_{\eta}(\infty,-\infty)$, which is more convenient for the QED calculations, was proposed by Sucher [@Sucher]. The first application of the formula (\[36\]) to calculations within bound-state QED was made in [@Lab]. In [@Lab] it was shown how to deal with the adiabatic exponential factor when evaluating the real part of corrections to the energy levels Eq. (\[36\]) (see also [@LabKlim]). In this paper we will employ the same methods for evaluating the imaginary part of Eq. (\[36\]). For a free atom (or ion) in the state $|A\rangle$ interacting with the photon vacuum $|0_\gamma\rangle$ (i.e. $|A, 0_\gamma\rangle =|A\rangle|0_\gamma\rangle$ in the absence of external fields) the complex energy correction Eq.
(\[36\]) contains only diagonal $S$-matrix elements of even order, since $\langle 0_\gamma|\hat{S}^{(1)}_{\eta}|0_\gamma\rangle = \langle 0_\gamma|\hat{S}^{(3)}_{\eta}|0_\gamma\rangle = 0$ etc. For the separation of the imaginary part of the energy shift $\Delta E_A^{(2i)}$ of a given order $2i$, it is more convenient to represent Eq. (\[36\]) in terms of a perturbation series of the form (up to terms $e^4$) [@LabKlim] $$\begin{aligned} \label{37} \Delta E_A = \lim_{\eta\rightarrow 0} i\eta\, \left[\langle A|\hat{S}^{(2)}_{\eta}|A\rangle + \left(2 \langle A|\hat{S}^{(4)}_{\eta}|A\rangle - \langle A|\hat{S}^{(2)}_{\eta}|A\rangle^2\right) \dots \right]\, .\end{aligned}$$ For the adiabatic $\hat{S}_{\eta}$-matrix we use the standard expansion in powers of the interaction constant $e$ $$\begin{aligned} \label{38} \hat{S}_{\eta}(\infty,-\infty)=1+\sum\limits_{i=1}^{\infty}\hat{S}^{(i)}_{\eta}(\infty,-\infty)\end{aligned}$$ and can separate real and imaginary parts of the matrix elements at any given order of perturbation theory $$\begin{aligned} \label{39} \langle A|\hat{S}^{(i)}_{\eta}|A\rangle = Re\langle A|\hat{S}^{(i)}_{\eta}|A\rangle +iIm\langle A|\hat{S}^{(i)}_{\eta}|A\rangle\, .\end{aligned}$$ The only second-order term describes the pure one-photon decay width $$\begin{aligned} \label{40} Im\Delta E_A^{(2)}=\lim_{\eta\rightarrow 0}\eta Re\langle A|\hat{S}_{\eta}^{(2)}|A\rangle\, .\end{aligned}$$ Arranging all the terms of fourth order, which describe the pure two-photon decay width including - as we will see below - a part of the (one-loop) radiative corrections to the one-photon width, one obtains $$\begin{aligned} \label{41} Im\Delta E_A^{(4)} = \lim_{\eta\rightarrow 0} \eta \left[2 Re\langle A|\hat{S}_{\eta}^{(4)}|A\rangle +\left|\langle
A|\hat{S}_{\eta}^{(2)}|A\rangle \right|^2-2\left(Re\langle A|\hat{S}_{\eta}^{(2)}|A\rangle \right)^2\right]\, ,\end{aligned}$$ where the last two terms result from the expression $\langle A|\hat{S}_{\eta}^{(2)}|A\rangle^2$. The total width $\Gamma_A$ of an excited electron state $A$ (specifying the initial state as $|A, 0_\gamma\rangle \equiv |A\rangle$) should follow (by definition) from the imaginary part of the total energy shift via $$\begin{aligned} \label{41bb} \Gamma_A = -2 Im\Delta E_A\, ,\end{aligned}$$ or, after perturbation expansion of $\Delta E_A$ (up to order $e^4$), $$\begin{aligned} \label{41b} \Gamma_A = \lim_{\eta\rightarrow 0} -2\eta \left[ Re \langle A|\hat{S}_{\eta}^{(2)}|A\rangle + 2 Re \langle A|\hat{S}_{\eta}^{(4)}|A\rangle + \left|\langle A|\hat{S}_{\eta}^{(2)}|A\rangle \right|^2 - 2\left(Re\langle A|\hat{S}_{\eta}^{(2)}|A\rangle \right)^2\right]\, .\end{aligned}$$ The formulas (\[40\]), (\[41\]) and (\[41b\]) will be employed in the next Sections for evaluating the pure one-photon and two-photon widths $\Gamma_A^{(1)}$ and $\Gamma_A^{(2)}$, respectively. Application of the “optical theorem” ==================================== Formulation of the “optical theorem” for the $S$-matrix elements ---------------------------------------------------------------- The “optical theorem” is a consequence of the unitarity of the $S$-matrix and in the most general case can be formulated as follows (see, for example, [@bas59]). First, we introduce the $\hat{T}$-matrix via the definition $$\begin{aligned} \label{42} \hat{S}=1+i\hat{T}\, .\end{aligned}$$ For the diagonal matrix element of Eq. (\[42\]) it follows that $$\begin{aligned} \label{43} \langle I|\hat{S}|I\rangle = 1+i\langle I|\hat{T}|I\rangle \end{aligned}$$ and $$\begin{aligned} \label{43b} Re \langle I|(1-\hat{S})|I\rangle = Im \langle I|\hat{T}|I\rangle \, .\end{aligned}$$ Here $|I\rangle$ denotes the initial state of the (decaying) system “atom + radiation field”.
The wave function $|I\rangle $ is assumed to be normalized; in our later case of interest it refers to the wave function for the excited atom state and the photon vacuum. As a consequence of the unitarity relation $\hat{S}^\dagger\hat{S} = \hat{S}\hat{S}^\dagger = 1$ for the $S$-matrix, the “optical theorem” states $$\begin{aligned} \label{44b} i \left(\hat{T} - \hat{T}^\dagger \right) = - \hat{T}^\dagger \hat{T} = - \hat{T}\hat{T}^\dagger\, ,\end{aligned}$$ or, for the matrix elements, $$\begin{aligned} \label{44} 2Im\langle I|\hat{T}|I\rangle =\sum\limits_F \left|\langle F|\hat{T}|I\rangle \right|^2\, .\end{aligned}$$ The summation in Eq. (\[44\]) runs over the complete set of final states $F$ allowed by the energy conservation law. Formally, the state $F=I$ is also included in this summation. The latter circumstance will be important for our further derivations. The initial and final states $|I\rangle$ and $|F\rangle$ are by definition eigenstates of the electron and photon number operators. We also mention at this point that the “optical theorem” strictly holds for arbitrary problems without explicit time dependence. Specific modifications are required in the presence of time-dependent external fields, which allow for electron-positron pair creation out of the Dirac vacuum. Under such conditions the $S$-matrix will become nonunitary [@fgs91].
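The algebra behind Eqs. (\[44b\]) and (\[44\]) can be illustrated with a minimal numerical sketch (our own illustration, not part of the original derivation): for any unitary matrix $S$, the $T$-matrix defined by $S = 1 + iT$ satisfies the optical theorem exactly, with the state $F = I$ included in the sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unitary S-matrix: S = exp(iH) with H Hermitian, built via the
# eigendecomposition of H (a stand-in for the full QED S-matrix).
n = 6
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
w, V = np.linalg.eigh(H)
S = V @ np.diag(np.exp(1j * w)) @ V.conj().T

# T-matrix from S = 1 + iT, Eq. (42).
T = (S - np.eye(n)) / 1j

# Optical theorem, Eq. (44): 2 Im T_II = sum_F |T_FI|^2.
I = 0
lhs = 2 * T[I, I].imag
rhs = sum(abs(T[F, I]) ** 2 for F in range(n))
```

The identity holds to machine precision for any choice of the Hermitian generator, which is the matrix form of the statement that unitarity alone implies Eq. (\[44\]).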
Performing a perturbation expansion of the $T$- (respectively, the $S$-) matrix one derives immediately from (\[44b\]) and (\[44\]) the “optical theorem” for the $T$-matrix elements up to any (even) order $2i$ of perturbation theory $$\begin{aligned} \label{44c} 2 Im \langle I|\hat{T}^{(2i)}|I\rangle &=& \sum\limits_F \left|\langle F|\hat{T}^{(i)}|I\rangle \right|^2 + \sum\limits_{F}\, \sum\limits_{j < i} 2 Re\langle I|\hat{T}^{(j)\,\dagger}|F\rangle\langle F|\hat{T}^{(2i-j)}|I\rangle \, .\end{aligned}$$ Depending on the physical process (respectively, the scenario) under consideration one has to fix the number of electrons (atomic state) and photons (radiation field at zero temperature) in the initial and final states. However, the quantum numbers of electrons ($A$) and photons ($\vec k, \vec e$) will of course vary. In case of the one-photon decay the summation over $F$ includes the summation over the final atomic states $A'$ as well as over the quantum numbers $\vec k, \vec e$ of the emitted photon. In case of the two-photon decay the summation over $F$ includes, apart from the summation over $A'$, also the summation (integration) over the characteristic quantum numbers of the two emitted photons. Expanding both sides of Eq. (\[43b\]) into a perturbation series (with respect to $e$), we find for arbitrary orders $i=1, 2, \dots$ the relation $$\begin{aligned} \label{45} Re\langle I|\hat{S}^{(i)}|I\rangle =-Im\langle I|\hat{T}^{(i)}|I\rangle \, .\end{aligned}$$ and thus the $S$-matrix form of the “optical theorem” corresponding to Eq. (\[44c\]) $$\begin{aligned} \label{44d} - 2 Re \langle I|\hat{S}^{(2i)}|I\rangle &=& \sum\limits_F \left|\langle F|\hat{S}^{(i)}|I\rangle \right|^2 + \sum\limits_{F}\, \sum\limits_{j < i} 2 Re \langle I|\hat{S}^{(j)\,\dagger}|F\rangle\langle F|\hat{S}^{(2i-j)}|I\rangle \, .\end{aligned}$$ Then, collecting the second-order terms in Eqs.
(\[44\]) and (\[44d\]) yields $$\begin{aligned} \label{46} -2Re\langle I|\hat{S}^{(2)}|I\rangle =\sum\limits_{F\neq I}\left|\langle F|\hat{S}^{(1)}|I\rangle \right|^2.\end{aligned}$$ Again, only nondiagonal matrix elements, like $\langle F|\hat{S}^{(1)}|I\rangle$, contribute. Collecting now the fourth-order terms, we find $$\begin{aligned} \label{47} -2Re\langle I|\hat{S}^{(4)}|I\rangle =\left|\langle I|\hat{S}^{(2)}|I\rangle \right|^2+\sum\limits_{F\neq I}\left|\langle F|\hat{S}^{(2)}|I\rangle \right|^2+ \sum\limits_{F\neq I} 2Re\langle I|\hat{S}^{(1)}|F\rangle \langle F|\hat{S}^{(3)}|I\rangle .\end{aligned}$$ The last term in Eq. (\[47\]) represents, evidently, the radiative corrections to the one-photon width. These corrections were evaluated by Barbieri and Sucher via direct evaluation of the corresponding imaginary part of the two-loop Lamb shift [@bas78]. Here we will not repeat these calculations within our approach. Note that the term $F=I$ in the sum over $F$ is absent in this contribution, since $\langle I|\hat{S}^{(1)}|I\rangle =\langle I|\hat{S}^{(3)}|I\rangle =0$. Application of the “optical theorem” to the adiabatic $S$-matrix elements ------------------------------------------------------------------------- As indicated above, the adiabatic $S$-matrix $\hat{S}_\eta$ arises after introduction of the adiabatic switching function $f_{\eta}(t) = e^{-\eta|t|}$ in the QED interaction Hamiltonian. Assuming that no dynamic excitations of the system take place during the switching on and off of the interaction, the adiabatic $S$-matrix remains unitary [@fgs91], [@Berest]. Moreover, all observables calculated on the basis of the adiabatic approach should not depend on the specific form used for the adiabatic factor after the limiting process $\eta\rightarrow 0$ has been performed. Therefore, we will apply the “optical theorem” relations (\[46\]) and (\[47\]) to the adiabatic formulas (\[40\]), (\[41\]) and (\[41b\]).
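The order-by-order bookkeeping of Eqs. (\[46\]) and (\[47\]) can be checked numerically in a toy model of our own (an illustration only; the conjugation appearing in Eq. (\[44d\]) is kept explicit in the last term). A Hermitian generator coupling an "even-photon" block to an "odd-photon" block makes all odd-order diagonal elements vanish, mimicking the QED situation $\langle I|\hat{S}^{(1)}|I\rangle = \langle I|\hat{S}^{(3)}|I\rangle = 0$:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)

# Toy model: H couples an "even-photon" sector to an "odd-photon" sector,
# so the diagonal elements of all odd perturbative orders vanish.
m = 3
B = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
Z = np.zeros((m, m))
H = np.block([[Z, B], [B.conj().T, Z]])   # Hermitian, block-off-diagonal
n = 2 * m

# Perturbative orders of S(e) = exp(i e H):  S^(k) = (iH)^k / k!  (e = 1).
Sk = [np.linalg.matrix_power(1j * H, k) / factorial(k) for k in range(5)]

I = 0
# Second order, analogue of Eq. (46).
lhs2 = -2 * Sk[2][I, I].real
rhs2 = sum(abs(Sk[1][F, I]) ** 2 for F in range(n) if F != I)

# Fourth order, analogue of Eq. (47), with the dagger of Eq. (44d)
# written out as an explicit complex conjugate in the last sum.
lhs4 = -2 * Sk[4][I, I].real
rhs4 = (abs(Sk[2][I, I]) ** 2
        + sum(abs(Sk[2][F, I]) ** 2 for F in range(n) if F != I)
        + sum(2 * (np.conj(Sk[1][F, I]) * Sk[3][F, I]).real
              for F in range(n) if F != I))
```

Both identities hold to machine precision, since they are nothing but the $e^2$ and $e^4$ coefficients of $S^{\dagger}(e)S(e) = 1$.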
In what follows, it will be necessary to fix not only the state of an electron in an atom, but also the number of photons. Then from Eqs (\[40\]), (\[41b\]) and (\[47\]) it follows for $I=A,0_\gamma$ (excited state, no photons) for the pure one-photon width $$\begin{aligned} \label{48} \Gamma_A^{(1)}= \lim_{\eta\rightarrow 0} \eta\,\sum\limits_{F\neq A,0_\gamma}\left|\langle F|\hat{S}^{(1)}_{\eta}|A,0_\gamma\rangle\right|^2 \end{aligned}$$ and for the two-photon width $$\begin{aligned} \label{49} \Gamma_A^{(2)}= \lim_{\eta\rightarrow 0}\eta \left\lbrace 2\sum\limits_{F\neq A,0_\gamma}\left|\langle F|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2+4\left(Re\langle A, 0_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right)^2\right\rbrace \, .\end{aligned}$$ The remaining term up to order $e^4$ containing radiative-correction effects $$\begin{aligned} \Gamma_A^{{\rm rad}} = \lim_{\eta\rightarrow 0}\eta \,\sum\limits_{F\neq A,0_\gamma}\, 2 Re \langle A, 0_\gamma|\hat{S}^{(1)}|F\rangle\langle F|\hat{S}^{(3)}|A, 0_\gamma\rangle\end{aligned}$$ will not be considered further, since we are aiming at the two-photon decay. Employing now Eq. (\[46\]), we can finally rewrite Eq. (\[49\]) in the form $$\begin{aligned} \label{50} \Gamma_A^{(2)}=\lim_{\eta\rightarrow 0} \eta \left\lbrace 2\sum\limits_{F\neq A,0_\gamma}\left|\langle F|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2+ 2\sum\limits_{2_\gamma}\left|\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2+ \left(\sum\limits_{F'\neq A,0_\gamma}\left|\langle F'|\hat{S}^{(1)}_{\eta} |A, 0_\gamma\rangle\right|^2\right)^2 \right\rbrace .\end{aligned}$$ In Eq. (\[50\]) we have to distinguish between the final states ($F$) and ($F'$) for the two-photon and one-photon transitions, respectively. It is important that the term $\left|\langle A, 0_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2$ has canceled out in Eq. (\[49\]).
The last but one term in (\[50\]), corresponding to the apparently nonphysical transition $A\rightarrow A+2\gamma$ but formally present in the sum over $F$ states, will indeed cancel out in the final expression (see Section 9). The notation $\sum\limits_{2_\gamma}$ denotes here the integration over the frequencies of the two photons. In the next Section we will evaluate the one- and two-photon decay widths using Eqs (\[48\]) and (\[50\]). One-Photon decay width via the “optical theorem” ================================================ We start with the evaluation of the decay width $\Gamma^{(1)}_A$ using Eq. (\[48\]). First, we evaluate the matrix element $\langle A',\vec{k},\vec{e}|\hat{S}^{(1)}_{\eta}|A, 0_\gamma\rangle$ for the emission of a photon with momentum $\vec{k}$ and polarization $\vec{e}$. This matrix element for the “normal” $S$-matrix was evaluated in Section 2. The corresponding adiabatic $S_{\eta}$-matrix element reads $$\begin{aligned} \label{51} \langle A', \vec{k},\vec{e}|\hat{S}^{(1)}_{\eta}|A, 0_\gamma\rangle=e\int d^4x\bar{\psi}_{A'}(x)\gamma_{\mu}A^{*}_{\mu}(x)\psi_A(x)e^{-\eta |t|}.\end{aligned}$$ Now the integration over the time variable yields essentially a representation of the $\delta$-function $$\begin{aligned} \label{52} \int\limits_{-\infty}^{\infty}dte^{i(E_A-E_{A'}-\omega)t-\eta |t|}=\frac{2\eta}{(\omega_{AA'}-\omega)^2+\eta^2} \equiv 2\pi\,\delta_\eta (\omega_{AA'}-\omega),\end{aligned}$$ where $\lim\limits_{\eta\rightarrow 0}\delta_{\eta}(x)=\delta (x)$. As the next step we perform the integration over the photon frequency. Taking Eq.
(\[52\]) in square modulus, multiplying by $\omega$ and integrating, we obtain $$\begin{aligned} \label{53} 4\eta^2\int\limits_0^{\infty}\frac{\omega d\omega}{\left[(\omega_{AA'}-\omega)^2+\eta^2\right]^2}=4\eta^2\left\lbrace \frac{\pi\omega_{AA'}}{4\eta^3}+\frac{1}{2\eta^2}+\frac{\omega_{AA'}}{2\eta^3}\arctan\left(\frac{\omega_{AA'}}{\eta}\right) \right\rbrace\, . \end{aligned}$$ Having in mind the limit $\eta\rightarrow 0$ we can replace Eq. (\[53\]) by $$\begin{aligned} \label{54} 4\eta^2\int\limits_0^{\infty}\frac{\omega d\omega}{\left[(\omega_{AA'}-\omega)^2+\eta^2\right]^2}=\frac{2\pi\omega_{AA'}}{\eta}\, .\end{aligned}$$ It remains to multiply the result by the factor $(2\pi)^{-3}$ (see Eq. (\[9\])), by the factor $(\sqrt{2\pi})$ (see Eq. (\[2\])) and by the factor $\eta$ from Eq. (\[48\]). The matrix element will be the same as in Eq. (\[4\]) and we again arrive at Eq. (\[11\]) with the summation running over the electron states lower in energy than the state $A$: $$\begin{aligned} \label{55} \Gamma_A^{(1)}=\frac{e^2}{2\pi} \sum\limits_{A'\,(E_{A'}<E_A)} \omega_{AA'} \sum\limits_{\vec{e}}\int d\vec{\nu} \left|\left( ( \vec{e}^{\,*}\vec{\alpha})e^{-i\vec{k}\vec{r}} \right)_{A'A} \right|^2\, .\end{aligned}$$ In the derivation above, manipulations with $\delta$-functions like those in Section 2 have been avoided. Multiplying the result by the adiabatic parameter $\eta$ in Eq. (\[48\]) plays the same role as dividing the result by the time $T$ in Section 2: the adiabatic factor $\eta$ has the dimensionality $s^{-1}$. Note that in this approach the automatic exclusion (like in Eq. (\[35\])) of the transitions to the states higher than $A$ in the summation over $F$ in Eq. (\[48\]) does not occur and we have to refer to the energy conservation law to avoid them.
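The $\eta$-scaling in Eqs. (\[53\])-(\[54\]) is easy to confirm numerically (an illustrative check in arbitrary units, not part of the original text):

```python
import numpy as np
from scipy.integrate import quad

omega0 = 1.0   # stands for omega_AA'
eta = 1e-3     # adiabatic parameter, small but finite

f = lambda w: w / ((omega0 - w) ** 2 + eta ** 2) ** 2

# The integrand is sharply peaked at w = omega0: integrate piecewise,
# telling quad about the peak, then add the smooth tail.
peak, _ = quad(f, 0.0, 2 * omega0, points=[omega0], limit=200)
tail, _ = quad(f, 2 * omega0, np.inf)
result = 4 * eta ** 2 * (peak + tail)

# Closed form of Eq. (53) and its eta -> 0 limit, Eq. (54).
exact = 4 * eta ** 2 * (np.pi * omega0 / (4 * eta ** 3) + 1 / (2 * eta ** 2)
                        + omega0 / (2 * eta ** 3) * np.arctan(omega0 / eta))
limit_form = 2 * np.pi * omega0 / eta
```

For $\eta = 10^{-3}$ the numerical integral, the closed form (\[53\]) and the limiting form (\[54\]) agree to better than one part in $10^{6}$, which is the content of the replacement made in the text.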
Two-photon decay width via “optical theorem” in the absence of cascades ======================================================================= Evaluation of the two-photon decay width ---------------------------------------- In this Section we will evaluate the two-photon decay width $\Gamma^{(2)}_A$ using Eq. (\[50\]). We start with the first term in the curly brackets in Eq. (\[50\]). The $S$-matrix elements corresponding to the emission of the two photons $\vec{k}$, $\vec{e}$ and $\vec{k}'$, $ \vec{e}\,'$ look like $$\begin{aligned} \label{56} \langle A', \vec{k}', \vec{e}\,';\vec{k},\vec{e}|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle_a=e^2\int d^4x_1d^4x_2\left(\bar{\psi}_{A'}(x_1)\gamma_{\mu_1}A^{\vec{k}', \vec{e}\,'\,\,*}_{\mu_1}(x_1)e^{-\eta |t_1|}S(x_1x_2)\gamma_{\mu_2}A^{\vec{k},\vec{e}\,\,*}_{\mu_2}(x_2)e^{-\eta |t_2|}\psi_A(x_2)\right),\end{aligned}$$ $$\begin{aligned} \label{57} \langle A', \vec{k},\vec{e};\vec{k}', \vec{e}\,'|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle_b=e^2\int d^4x_1d^4x_2\left(\bar{\psi}_{A'}(x_1)\gamma_{\mu_1}A^{\vec{k},\vec{e}\,\,*}_{\mu_1}(x_1)e^{-\eta |t_1|}S(x_1x_2)\gamma_{\mu_2}A^{\vec{k}', \vec{e}\,'\,\,*}_{\mu_2}(x_2)e^{-\eta |t_2|}\psi_A(x_2)\right),\end{aligned}$$ where the electron propagator $S(x_1x_2)$ is defined by Eq. (\[19\]) and the indices $a$, $b$ correspond to the Feynman graphs Figs 2a, 2b, respectively. The integration over $t_2$ in Eq. (\[56\]) results in $$\begin{aligned} \label{58} \int\limits_{-\infty}^{\infty}dt_2e^{-i\left(\omega_1+E_A-\omega\right)t_2-\eta |t_2|}=\frac{2\eta}{(\omega_1+E_A-\omega)^2+\eta^2}\end{aligned}$$ and the integration over $t_1$ in Eq. (\[56\]) looks similar: $$\begin{aligned} \label{59} \int\limits_{-\infty}^{\infty}dt_1e^{i\left(\omega_1+E_{A'}+\omega'\right)t_1-\eta |t_1|}=\frac{2\eta}{(\omega_1+E_{A'}+\omega')^2+\eta^2}\, .\end{aligned}$$ The next step is the integration over $\omega_1$ in Eq. (\[56\]), which has to be performed in the complex $\omega_1$-plane.
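Before turning to the $\omega_1$ integration, the elementary time integrals (\[58\])-(\[59\]) can be verified directly (illustrative parameter values of our own choosing):

```python
import numpy as np
from scipy.integrate import quad

# Check of the Lorentzian time integral appearing in Eqs. (58)-(59):
#   int e^{i a t - eta |t|} dt = 2 eta / (a^2 + eta^2),
# where a stands for the frequency combination in the exponent.
# The sine part is odd in t and drops out, so the result is real.
a, eta = 0.7, 0.5

value = 2 * quad(lambda t: np.cos(a * t) * np.exp(-eta * t), 0.0, np.inf)[0]
lorentzian = 2 * eta / (a ** 2 + eta ** 2)
```

This is the same regularized $\delta$-function structure as in Eq. (\[52\]), now appearing once per interaction vertex.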
We have to evaluate the integral (considering the $i0$-prescription in the energy denominator of the electron propagator) $$\begin{aligned} \label{60} I_{\eta}\equiv 4\eta^2\int\limits_{-\infty}^{\infty}\frac{d\omega_1}{\left[(\omega_1+E_{A}-\omega)^2+\eta^2\right]\left[(\omega_1+E_{A'}+\omega')^2+\eta^2\right] \left[E_n(1-i0)+\omega_1\right]}.\end{aligned}$$ In what follows in this Section we will restrict ourselves to the nonrelativistic limit of Eqs (\[56\]), (\[57\]), since we are most interested in the hydrogen case. Then we can fully neglect the sum over the negative-energy states in the electron propagator (\[19\]). Note that this is possible because we use the “velocity” form for the matrix elements of the photon emission operator; in the “length” form this would not be the case [@Akhiezer]. So we can close the integration contour in the lower half-plane, where only two poles are located: $\omega_1^{(1)}=-E_A+\omega-i\eta $ and $\omega_1^{(2)}=-E_{A'}-\omega'-i\eta $. In the absence of cascades, the energy denominators $(E_n-E_A+\omega-i\eta )^{-1}$ and $(E_n-E_{A'}-\omega'-i\eta )^{-1}$ are nonsingular and we can omit the imaginary parts $i\eta$ in these denominators. Moreover, using the energy conservation condition $E_{A'}+\omega'=E_A-\omega$, we can consider both denominators as equal. Then, collecting the factors ($1/2\pi$ from Eq. (\[19\]), $-2\pi$ from the Cauchy formula for the contour integration in the clockwise direction) yields for the amplitude Eq. (\[56\]) $$\begin{aligned} \label{61} \langle A', \vec{k}', \vec{e}\,';\vec{k},\vec{e}|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle_a=-\frac{4\eta}{\left[(\omega_0-\omega-\omega')^2+4\eta^2\right]}\sum\limits_n\frac{\langle A'|\vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|n \rangle\langle n| \vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|A\rangle }{E_n-E_A+\omega}\, .\end{aligned}$$ Adding the similar contribution from Eq.
(\[57\]) leads to $$\begin{aligned} \label{62} \langle A'|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle_{a+b}=\hspace{5cm} \\ \nonumber =-\frac{4\eta}{\left[(\omega_0-\omega-\omega')^2+4\eta^2\right]} \left[\sum\limits_n \frac{\langle A'|\vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|n \rangle\langle n| \vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|A\rangle }{E_n-E_A+\omega}+\sum\limits_n \frac{\langle A'|\vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|n \rangle\langle n| \vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|A\rangle }{E_n-E_{A'}-\omega}\right]\, .\end{aligned}$$ In the matrix elements of the photon emission operators, unlike the $S$-matrix elements, we retain the shorthand notation $|A\rangle $ instead of $|A, 0_\gamma\rangle$, since in this case it cannot lead to any misunderstandings. Insertion of Eq. (\[62\]) into the first term on the right-hand side of Eq. (\[50\]) and summation (integration) over quantum numbers of the final-state particles yields in the nonrelativistic limit $$\begin{aligned} \label{63} \Gamma^{(2)}_{A}(1st\,\, term)=\lim_{\eta\rightarrow 0}\, 2\eta\, (\sqrt{2\pi})^4\frac{e^4}{(2\pi)^6}\sum\limits_{\vec{e}}\sum\limits_{ \vec{e}\,'}\int d\vec{\nu}d\vec{\nu}'\int\omega d\omega\int\omega'd\omega'\times \\ \nonumber \times\frac{(4\eta)^2}{\left[(\omega_0-\omega-\omega')^2+4\eta^2\right]^2}\left|\sum\limits_n \frac{\langle A'|\vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|n \rangle\langle n| \vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|A\rangle }{E_n-E_A+\omega}+\sum\limits_n \frac{\langle A'|\vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|n \rangle\langle n| \vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|A\rangle }{E_n-E_{A'}-\omega}\right|^2\, .\end{aligned}$$ The integrations over $\omega$, $\omega'$ in Eq. (\[63\]) can be performed using exactly the same standard QED procedure as in Section III. We first integrated over $\omega'$, using the $\delta$-function in Eq. (\[20\]), i.e.
over the interval $(0, \infty)$, or even over $(-\infty,\infty)$, which is actually equivalent in this case. The second integration over $\omega$ was performed over the interval $(0,\omega_0)$ (see Eq. (\[26\])). Let us adopt this procedure here within the adiabatic $S_{\eta}$-matrix approach. Integrating Eq. (\[63\]) over $\omega'$ according to Eqs (\[53\]) and (\[54\]) leads to $$\begin{aligned} \label{64} (4\eta)^2\int\limits_0^{\infty}\frac{\omega'd\omega'}{\left[(\omega_0-\omega-\omega')^2+4\eta^2\right]^2}=\frac{\pi(\omega_0-\omega)}{\eta}.\end{aligned}$$ Then the integration over the emitted photon directions and summation over the polarizations yields in the nonrelativistic limit (in the “velocity” form): $$\begin{aligned} \label{65} \Gamma^{(2)}_{A}(1st\,\, term)=\frac{4e^4}{9\pi}\int\limits_0^{\omega_0}\omega (\omega_0-\omega)d\omega \sum\limits_{i,k=1}^3\left|\left(U_{ik}(\omega)\right)_{A'A}\right|^2,\end{aligned}$$ $$\begin{aligned} \label{66} \left(U_{ik}\right)_{A'A}=\sum\limits_n \frac{\langle A'|p_i|n \rangle\langle n| p_k|A\rangle }{E_n-E_A+\omega}+\sum\limits_n \frac{\langle A'|p_k|n \rangle\langle n| p_i|A\rangle }{E_n-E_{A'}-\omega},\end{aligned}$$ where $p_i\equiv(\vec{p})_i$. For $A=2s$, $A'=1s$ Eqs. (\[65\]), (\[66\]) go over to Eqs (\[23\])-(\[25\]), if we use again the quantum mechanical relation (\[15\]). Cancellation of singularities ----------------------------- Apart from the first term in Eq. (\[50\]) there are two additional terms which contain singularities with respect to the adiabatic parameter $\eta$ in the limit $\eta\rightarrow 0$. In this Subsection we will show that these singularities exactly cancel. We start with the last term in the right-hand side of Eq. (\[50\]). In the absence of cascades, i.e. when there are no energy levels between the initial state $A$ and the final state $A'$, the only term in the sum over $F'$ is $F'=A',1_\gamma$.
Then, after summation over the emitted photon polarizations and integration over the emitted photon directions, repeating the derivations performed in Section 8, we obtain the result $$\begin{aligned} \label{67} \lim_{\eta\rightarrow 0}\,\eta\,\left(\sum\limits_{F'\neq A, 0}\left|\langle F'|\hat{S}^{(1)}|A, 0_\gamma\rangle\right|^2\right)^2=\lim_{\eta\rightarrow 0}\frac{1}{\eta}\left(\Gamma_A^{(1)}\right)^2.\end{aligned}$$ This divergent term can be canceled only by the “unphysical” contribution $2\sum\limits_{2\gamma}\left|\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2$ in the right-hand side of Eq. (\[50\]). The latter one looks exactly like Eq. (\[63\]) if $A'$ is replaced by $A$. Setting $n=A'$ in the sum over $n$ in the expression $2\langle A, 2_\gamma|\hat{S}_{\eta}^{(2)}|A, 0_\gamma\rangle$ will give the same set of matrix elements as in Eq. (\[67\]). This contribution also appears to be divergent as $\eta^{-1}$ in the limit $\eta\rightarrow 0$ and cancels the divergency of Eq. (\[67\]). To demonstrate this, we write down the expression $$\begin{aligned} \label{67a} \lim_{\eta\rightarrow 0}\,2\eta\,\sum\limits_{2\gamma}\left|\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2=\lim_{\eta\rightarrow\, 0}2\eta\,(2\pi)^2\frac{e^4}{(2\pi)^6}\sum\limits_{\vec{e}}\sum\limits_{ \vec{e}\,'}\int d\vec{\nu}d\vec{\nu}'\int\omega d\omega\int\omega'd\omega'\times \nonumber \\ \frac{(4\eta)^2}{\left[(\omega_0-\omega-\omega')^2+4\eta^2\right]^2}\left| \frac{\langle A|\vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|A' \rangle\langle A'|\vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|A\rangle }{E_{A'}-E_A+\omega+i\eta}+ \frac{\langle A|\vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|A' \rangle\langle A'|\vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|A\rangle }{E_{A'}-E_{A}-\omega+i\eta}\right|^2\, .\end{aligned}$$ In this case, unlike in Eq. (\[62\]), we now keep $i\eta$ in the energy denominators, in order to keep track of all the divergences.
The integration over $\omega$ is performed exactly as in Eq. (\[64\]) with the result $$\begin{aligned} \label{67b} \lim_{\eta\rightarrow 0}\, 2\eta\,\sum\limits_{2\gamma}\left|\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2 = \lim_{\eta\rightarrow 0}\, 2\eta\, \frac{e^4}{(2\pi)^4}\sum\limits_{\vec{e}, \vec{e}\,'}\int d\vec{\nu}d\vec{\nu}'4 \left|\langle A|\vec{\alpha}\vec{A}_{\vec{k},\vec{e}}^{*}|A' \rangle\right|^2\left|\langle A'|\vec{\alpha}\vec{A}_{\vec{k}', \vec{e}\,'}^{*}|A\rangle \right|^2\times \nonumber \\ \times\left(-\frac{\pi}{\eta}\right)\int\omega^2d\omega\left|\frac{1}{E_{A'}-E_A+\omega+i\eta}+\frac{1}{E_{A'}-E_A-\omega+i\eta}\right|^2\hspace{2cm}\end{aligned}$$ After summation over the polarizations and integration over the emission angles, and transforming the expression in $|...|^2$, we get $$\begin{aligned} \label{67c} \lim_{\eta\rightarrow 0}\, 2\eta\, \sum\limits_{2\gamma}\left|\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle\right|^2 = -\lim_{\eta\rightarrow 0}\frac{2}{\eta}\left(\Gamma^{(1)}_A\right)^2F(\eta)\, ,\end{aligned}$$ where the function $$\begin{aligned} \label{67d} F(\eta)=\int\frac{\omega^2d\omega}{(\omega_0^2-\eta^2-\omega^2)^2+4\eta^2\omega^2_0}\end{aligned}$$ remains to be calculated. In this case we have to evaluate the integral over $\omega$ in the same way as for deriving the expression (\[67\]), i.e. integrating over the frequency interval $(0,\infty)$. However, it will be more convenient to extend formally the integration over the entire interval $(-\infty,\infty)$ as it was done, for example, in [@Berest].
Then the integration can be performed in the complex plane: $$\begin{aligned} \label{67da} F(\eta)&=&\int\limits_{-\infty}^{\infty}\frac{\omega^2d\omega}{(\omega_0^2-\eta^2-\omega^2+2i\eta\omega_0)(\omega_0^2-\eta^2-\omega^2-2i\eta\omega_0)}\nonumber \\ &=& \int\limits_{-\infty}^{\infty}\frac{\omega^2d\omega}{[(\omega_0 + i\eta + \omega) (\omega_0 + i\eta - \omega)][(\omega_0 - i\eta + \omega)(\omega_0 - i\eta - \omega)]}\, .\end{aligned}$$ The denominator in Eq. (\[67da\]) contains four simple poles: one in each quadrant of the complex plane. However, we have to recall that two of these poles originate from the negative (i.e. unphysical) $\omega$ values. Therefore we have to omit their contribution. Then the evaluation of the integral (\[67da\]) yields with the aid of Cauchy’s theorem $$\begin{aligned} \label{67g} F(\eta)=\frac{2\pi i}{4 i \eta}=\frac{\pi}{2\eta}\, .\end{aligned}$$ Inserting this result into Eq. (\[67c\]) we find that it exactly cancels the divergent contribution of Eq. (\[67\]). As for the terms with $n\neq A'$ in the sum over $n$ in the expression for $2\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle$, we avoid them by referring to the energy conservation law, exactly as in Section 8 for the one-photon transition. Aiming to cancel the contributions from Eq. (\[67\]) and from $2\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle$ we have to treat both contributions in the same way. Thus, the second and the third terms in Eq. (\[50\]) in the absence of cascades cancel each other and the two-photon width is given exclusively by the first term in Eq. (\[50\]): $\Gamma^{(2)}_A(1st\,\, term)=\Gamma_A^{(2)}$. The expressions (\[65\]), (\[66\]) for $\Gamma_A^{(2)}$ coincide exactly with the standard QED expression for the two-photon decay width in the absence of cascades. In case of the $2s$-level in hydrogen we return to the expression (\[23\]) for $n=2$.
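The value $F(\eta)=\pi/(2\eta)$ in Eqs. (\[67d\])-(\[67g\]) can be confirmed by direct numerical integration over the formally extended interval (an illustrative check in arbitrary units, our own addition):

```python
import numpy as np
from scipy.integrate import quad

omega0, eta = 1.0, 1e-3

# Integrand of Eq. (67d); sharp resonances sit at w = +/- omega0.
def f(w):
    return w ** 2 / ((omega0 ** 2 - eta ** 2 - w ** 2) ** 2
                     + 4 * eta ** 2 * omega0 ** 2)

# The integrand is even in w: integrate the half line piecewise,
# giving quad the resonance position, and double the result.
peak, _ = quad(f, 0.0, 2 * omega0, points=[omega0], limit=200)
tail, _ = quad(f, 2 * omega0, np.inf)
F = 2 * (peak + tail)

expected = np.pi / (2 * eta)   # Eq. (67g)
```

The agreement holds for any value of $\eta$, since the residue sum behind Eq. (\[67g\]) is exact rather than a small-$\eta$ approximation.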
Two-photon decay width via “optical theorem” in the presence of cascades ======================================================================== The derivation of the expression for the two-photon decay width in the presence of the cascades does not differ, in principle, from the derivation of Section IX. The expression (\[63\]) holds in this case as well. We will assume that there exists only one cascade channel (for example, $3s-2p-1s$ in the case of the decay of the $3s$-level in hydrogen). The only difference is the existence of an additional resonance at $n=R$ in the sum over $n$ in Eq. (\[63\]): $\omega=E_A-E_R$ ($R$ is the resonance state). Due to the energy conservation this implies also the existence of another resonance (lower branch of the cascade): $\omega'=E_R-E_{A'}$. Now Eq. (\[67\]) contains two terms divergent like $\eta^{-1}$ as $\eta\rightarrow 0$: for $F'=A', 1_\gamma$ (this divergency is the same as in the absence of cascades) and for $F'=R,1_\gamma$ (this is the additional divergency connected with the existence of the cascade). The former divergency is compensated by the term $n=A'$ in the “unphysical contribution” $2\langle A, 2_{\gamma}|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle$ and the latter one is compensated in the same way by the term $n=R$ in the expression for $2\langle A, 2_\gamma|\hat{S}^{(2)}_{\eta}|A, 0_\gamma\rangle$. However, in the presence of the cascade a third divergency arises directly in the summation over $n$ (for $n=R$) in the expression (\[67a\]). This divergency cannot be canceled by any counterterm since it is proportional to a special product of matrix elements: $\left|\langle A'|p_i|R\rangle\langle R|p_k|A\rangle \right|^2$. No counterterm in Eq. (\[50\]) contains such a product. This remaining divergency can only be removed by taking into account (at the given order of perturbation theory) the radiative corrections to the level width.
Equivalently, one may introduce the level width as it was done in Section IV within the framework of standard QED. Thus, evaluating the two-photon decay width in the presence of the cascades, we return to the same expressions and the same problems which were discussed in Section IV. Our analysis of the evaluation of the two-photon decay width in the presence of cascades via the imaginary part of the Lamb shift, performed in Sections IX and X, contradicts Jentschura’s “alternative” approach [@Jent1]-[@Jent3]. First, the expression (\[65\]) contains the square modulus $\left|\left(U_{ik}(\omega)\right)_{A'A}\right|^2$, not the squared matrix element $\left(\left(U_{ik}(\omega)\right)_{A'A}\right)^2$ as used in Jentschura’s calculation. Second, there is no chance to compensate the cascade divergency in Eq. (\[63\]) within the evaluation of the imaginary part of the Lamb shift via the adiabatic $S$-matrix approach. In Jentschura’s evaluation of the two-photon decay width in the presence of cascades the integral remains finite without introduction of the level width. We claim that the cascade contribution to the two-photon decay width remains infinite without the introduction of level widths in the energy denominators (via partial resummation of radiative corrections). Evaluation of the two-photon width for the $3s$ level in hydrogen ================================================================= As a consequence of the studies presented in Sections 5-10, we return now to the standard QED expression for the two-photon decay of the $3s$-level in hydrogen (Eqs (\[23\])-(\[25\]) with $n=3$) and to the prescription for the employment of the formulas (\[23\])-(\[25\]) given in Section 4. Inserting Eq. (\[23\]) into Eq. (\[26\]) and retaining only the resonant term within the second and fourth frequency intervals yields the cascade contribution to the total two-photon decay rate of the $3s$-level.
Taking the ratio to the total width of the $3s$-level $\Gamma_{3s}$ we will obtain the absolute probability or branching ratio $W^{(cascade)}_{3s;1s}/\Gamma_{3s}\equiv b^{(cascade)}_{3s-2p-1s}$ for the cascade transition. The contributions to $b^{(cascade)}_{3s-2p-1s}$ from the intervals (I), (III), (V) are assumed to be zero. The cascade contribution for the $3s$-level reads (in the “length” form) $$\begin{aligned} \label{68} W^{({\rm cascade}\, 1\gamma)}_{3s;1s}=\frac{4}{27\pi}\int\limits_{({\bf II})}\omega^3(\omega_0-\omega)^3\left|\frac{\langle R_{3s}(r)|r|R_{2p}(r)\rangle\langle R_{2p}(r')|r'|R_{1s}(r')\rangle}{E_{2p}-E_{3s}+\omega-\frac{i}{2}\Gamma}\right|^2d\omega + \\ \nonumber +\frac{4}{27\pi}\int\limits_{({\bf IV})}\omega^3(\omega_0-\omega)^3\left|\frac{\langle R_{3s}(r)|r|R_{2p}(r)\rangle\langle R_{2p}(r')|r'|R_{1s}(r')\rangle}{E_{2p}-E_{1s}-\omega-\frac{i}{2}\Gamma_{2p}}\right|^2d\omega .\end{aligned}$$ According to the discussion in Section 4 the “pure” two-photon decay probabilities within each interval, defined in Section 4, look like $$\begin{aligned} \label{69} dW_{3s;1s}^{(\rm{pure} 2\gamma)}&=&\frac{4}{27\pi}\omega^3(\omega_0-\omega)^3\left[S_{1s;3s}^{(2p)}(\omega)+S_{1s;3s}(\omega_0-\omega)\right]^2d\omega, \,\, \omega\in {\bf II}\end{aligned}$$ $$\begin{aligned} \label{70} dW_{3s;1s}^{(\rm{pure} 2\gamma)}&=&\frac{4}{27\pi}\omega^3(\omega_0-\omega)^3\left[S_{1s;3s}(\omega)+S_{1s;3s}^{(2p)}(\omega_0-\omega)\right]^2d\omega, \,\, \omega\in {\bf IV}\end{aligned}$$ $$\begin{aligned} \label{71} dW^{(\rm{pure} 2\gamma)}_{3s;1s} &=& \frac{4}{27\pi}\omega^3(\omega_0-\omega)^3\left[S_{1s;3s}(\omega)+S_{1s;3s}(\omega_0-\omega)\right]^2d\omega, \,\, \omega \in {\bf I, III, V} \, .\end{aligned}$$ Here $S_{1s;3s}^{(2p)}(\omega)$ is the expression (\[2\]) with the $n=2$ term being excluded. Unlike the cascade, all the intervals contribute to the “pure” two-photon transition.
The branching ratio for this transition $3s\rightarrow 2\gamma +1s$ appears to be $$\begin{aligned} \label{72} b^{(\rm{pure} 2\gamma)}_{3s-1s} = \frac{1}{2}\frac{1}{\Gamma_{3s}}\int\limits_0^{\omega_0}dW^{(pure 2\gamma)}_{3s;1s}(\omega)\, .\end{aligned}$$ It remains to introduce the interference contribution. We consider this contribution only for the second and fourth intervals. The corresponding frequency distribution functions are given by $$\begin{aligned} \label{73} dW^{(\rm{inter})1}_{3s;1s}=\frac{4\omega^3(\omega_0-\omega)^3}{27\pi}Re\left[\frac{\langle R_{3s}(r)|r|R_{2p}(r)\rangle\langle R_{2p}(r')|r'|R_{1s}(r')\rangle}{E_{2p}-E_{3s}+\omega-\frac{i}{2}\Gamma}\right]\left[S_{1s;3s}^{(2p)}(\omega)+S_{1s;3s}(\omega_0-\omega)\right]d\omega\end{aligned}$$ $$\begin{aligned} \label{74} dW^{(\rm{inter})2}_{3s;1s}=\frac{4\omega^3(\omega_0-\omega)^3}{27\pi}Re\left[\frac{\langle R_{3s}(r)|r|R_{2p}(r)\rangle\langle R_{2p}(r')|r'|R_{1s}(r')\rangle}{E_{2p}-E_{1s}-\omega-\frac{i}{2}\Gamma}\right]\left[S_{1s;3s}(\omega)+S_{1s;3s}^{(2p)}(\omega_0-\omega)\right]d\omega\end{aligned}$$ and the branching ratio results as $$\begin{aligned} \label{75} b^{(\rm{inter})}_{3s;1s} = \frac{1}{2\Gamma_{3s}}\int\limits_{({\bf II})}dW^{(\rm{inter})1}_{3s;1s}+\frac{1}{2\Gamma_{3s}}\int\limits_{({\bf IV})}dW^{(\rm{inter})2}_{3s;1s}.\end{aligned}$$ The results of our calculations are presented in Table 1. It is convenient to define the size $\Delta \omega$ of the second interval as a multiple $l$ of the width $\Gamma$, i.e. $\Delta \omega = 2l\Gamma$, and for the fourth interval as $\Delta \omega = 2l\Gamma_{2p}$, respectively. In Table 1 numbers are given for different values of $l$ ranging from $l\simeq 10^5$ up to $l\simeq 10^7$. The upper bound of interval [**II**]{} equals $\omega_1+l\Gamma=\frac{5}{72}+l\Gamma$ (in a.u.), while the lower bound of interval [**IV**]{} equals $\omega_2-l\Gamma_{2p}=\frac{3}{8}-l\Gamma_{2p}$.
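The interval bounds quoted above follow from the nonrelativistic hydrogen spectrum $E_n=-1/(2n^2)$ a.u.; a small sketch of the bookkeeping (the value of $\Gamma_{2p}$ below is an assumed literature number used for both bounds for simplicity, not a value taken from this paper):

```python
from fractions import Fraction

def E(n):
    # Nonrelativistic hydrogen energy in atomic units.
    return Fraction(-1, 2 * n * n)

omega0 = E(3) - E(1)   # total 3s -> 1s transition energy: 4/9 a.u.
omega1 = E(3) - E(2)   # upper cascade branch 3s -> 2p:    5/72 a.u.
omega2 = E(2) - E(1)   # lower cascade branch 2p -> 1s:    3/8 a.u.

# Bounds of intervals II and IV for a given l.  Gamma_2p ~ 6.27e8 1/s,
# converted to a.u. with 1 a.u. of time ~ 2.42e-17 s (assumed values).
Gamma_2p = 6.27e8 * 2.419e-17
l = 1e5
upper_II = float(omega1) + l * Gamma_2p
lower_IV = float(omega2) - l * Gamma_2p
```

The two resonance frequencies add up exactly to $\omega_0$, which is the energy-conservation statement $\omega_1+\omega_2=\omega_0$ underlying the cascade.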
The different lines of Table 1 present branching ratios and transition rates of the “pure” two-photon and the “interference” channel, respectively. For a more detailed analysis, the contributions of the “pure” two-photon transition rate for each frequency interval are also compiled. The branching ratio and the transition rate for the cascade contribution can be obtained from the relation $b^{(\rm{cascade})}_{3s-2p-1s} + b^{(\rm{pure} 2\gamma)}_{3s-1s} + b^{(\rm{inter})}_{3s;1s}=1$. This relation is satisfied with high accuracy, since the only decay channel neglected is the very weak direct one-photon $M1$ transition $3s\rightarrow 1s+\gamma$. From Table 1 we can draw the following conclusions: as in the case of the HCI [@LabShon], the “pure” two-photon and cascade contributions to the total decay rate appear to be inseparable. Changing the interval size $\Delta\omega$, we obtain quite different values for $dW^{(\rm{pure} 2\gamma)}_{3s;1s}$, ranging from $202.16\, s^{-1}$ (for $l = 10^4$) down to $7.9385\, s^{-1}$ (for $l = 1.00256\cdot 10^7$). Moreover, in our calculations - depending on the size of the interval - the interference contribution can also become quite large, comparable in magnitude with the “pure” two-photon contribution. Thus, we have demonstrated that even the order of magnitude of the “pure” two-photon decay rate for the $3s$-state in hydrogen cannot be predicted reliably. Earlier the result $8.2196$ $s^{-1}$ for the “pure” two-photon decay of the $3s$-level was reported in [@cea86] and confirmed in [@fsm88]. However, as was pointed out in [@Chluba], in both papers [@cea86], [@fsm88] the summation over the intermediate states was not performed properly. The “nonresonant” contribution $10.556$ $s^{-1}$ deduced in [@Chluba], which plays the role of the “pure” two-photon decay rate, is well within the range of our values given in Table 1.
However, the result $2.08$ $s^{-1}$ obtained for the “pure” two-photon decay rate in [@jas08] is in strong contradiction with the present analysis. Very recently a paper [@Amaro] appeared where both the standard QED approach, based on the line profile theory ([@Drake]-[@LabShon]), and the “alternative” approach, based on the two-loop Lamb shift theory ([@Jent1]-[@Jent3]), were applied to the calculation of the two-photon transition in hydrogen. A reasonable agreement between the two methods was found. However, from the derivations in our present paper it follows that the employment of the imaginary part of the Lamb shift gives exactly the same results as the standard QED approach. The difference between the “standard” and the “alternative” methods is due to the use of the squared amplitude instead of the square modulus in Jentschura’s calculation. To our mind this replacement is unacceptable and cannot be justified within QED.

Conclusion
==========

In this paper we developed a method for the calculation of two-photon decay rates, based on the evaluation of the imaginary part of the Lamb shift, employing the adiabatic $S$-matrix theory and the “optical theorem”. We have shown that the results of such calculations coincide exactly with those of the standard QED approach, also in the presence of cascades. We demonstrated that a strict separation of the “pure” two-photon and cascade contributions for the $3s$-level decay in hydrogen is impossible. Moreover, we showed that even an approximate separation of these two decay channels cannot be achieved with the accuracy required in modern astrophysical investigations (i.e. at the 1% level) of the recombination history of hydrogen in the early Universe. As a possible solution of the problem with respect to astrophysical needs, we suggest rewriting the basic evolution equation for the number of hydrogen atoms in a certain excited state (i.e. Eq.
(2) in [@Hirata]) in a way which does not distinguish between “pure” two-photon decays and cascades.

Acknowledgments
===============

The authors are grateful to R. A. Sunyaev and J. Chluba for stimulating interest in the problem and for many valuable discussions. The authors acknowledge financial support from DFG and GSI. The work was also supported by RFBR grant Nr. 08-02-00026. The work of D. S. was supported by the Non-profit Foundation “Dynasty” (Moscow). L. L. and D. S. also acknowledge the support by the Program of development of scientific potential of High School, Ministry of Education and Science of Russian Federation, grant $\aleph$2.1.1/1136.

G. Hinshaw, M. R. Nolta, C. L. Bennett et al., ApJS ${\bf 170}$, 288 (2007)

L. Page, G. Hinshaw, E. Komatsu et al., ApJS ${\bf 170}$, 335 (2007)

Ya. B. Zeldovich, V. G. Kurt and R. A. Sunyaev, Zh. Eksp. Teor. Fiz. ${\bf 55}$, 278 (1968) \[Engl. Transl. Sov. Phys. JETP ${\bf 28}$, 146 (1969)\]

P. J. E. Peebles, Astrophys. Journ. ${\bf 153}$, 1 (1968)

J. D. Cresser, A. Z. Tang, G. J. Salamo, F. T. Chan, Phys. Rev. A${\bf 33}$, 1677 (1986)

V. Florescu, I. Schneider, I. N. Mihailescu, Phys. Rev. A${\bf 38}$, 2189 (1988)

V. K. Dubrovich and S. I. Grachev, Astronomy Letters ${\bf 31}$, 359 (2006)

W. Y. Wong and D. Scott, Mon. Not. Roy. Astron. Soc. ${\bf 375}$, 1441 (2007)

G. W. F. Drake, Nucl. Instr. Meth. Phys. Res. B${\bf 9}$, 465 (1985)

I. M. Savukov, W. R. Johnson, Phys. Rev. A${\bf 66}$, 062507 (2002)

L. N. Labzowsky and A. V. Shonin, Phys. Rev. A${\bf 69}$, 012503 (2004)

O. Yu. Andreev, L. N. Labzowsky, G. Plunien and D. A. Solovyev, Phys. Rep. ${\bf 455}$, 135 (2008)

F. Low, Phys. Rev. ${\bf 88}$, 53 (1952)

J. Chluba and R. A. Sunyaev, Astronomy and Astrophysics ${\bf 480}$, 629 (2008)

C. M. Hirata, arXiv: 0808, v.2 \[astro-ph\], 20 May 2008

U. D. Jentschura, A. Surzhykov, Phys. Rev. A${\bf 77}$, 042507 (2008)

U. D. Jentschura, J. Phys. A${\bf 40}$, F223 (2007)

U. D. Jentschura, J. Phys.
A${\bf 41}$, 155307 (2008)

U. D. Jentschura, Phys. Rev. A${\bf 79}$, 022510 (2009)

R. Barbieri and J. Sucher, Nucl. Phys. B${\bf 134}$, 155 (1978)

J. Sapirstein, K. Pachucki and K. T. Chang, Phys. Rev. A${\bf 69}$, 022113 (2004)

M. Gell-Mann and F. Low, Phys. Rev. ${\bf 84}$, 350 (1951)

L. Labzowsky, G. Klimchitskaya and Yu. Dmitriev, “Relativistic Effects in the Spectra of Atomic Systems”, IOP Publishing, Bristol and Philadelphia, 1993

A. I. Akhiezer and V. B. Berestetskii, “Quantum Electrodynamics”, Wiley, New York, 1965

J. Sucher, Phys. Rev. ${\bf 107}$, 1448 (1957)

L. Labzowsky, Zh. Eksp. Teor. Fiz. ${\bf 59}$, 167 (1970) \[Engl. Transl. Sov. Phys. JETP ${\bf 32}$, 94 (1970)\]

N. N. Bogoliubov and D. V. Shirkov, “Introduction to the Theory of Quantized Fields”, Interscience Publishers, New York, 1959

E. S. Fradkin, D. M. Gitman, S. M. Shvartsman, “Quantum Electrodynamics with Unstable Vacuum”, Springer, Berlin, 1991

V. Berestetskii, E. Lifshits, L. Pitaevski, “Quantum Electrodynamics”, Pergamon, London, 1983

P. Amaro, J. P. Santos, F. Parente, A. Surzhykov and P. Indelicato, arXiv:0904.0708v1 \[physics.atom-ph\] (2009)
<span style="font-variant:small-caps;">Barbados Lectures on Complexity Theory, Game Theory, and Economics</span> *Foreword* This monograph is based on lecture notes from my mini-course “Complexity Theory, Game Theory, and Economics,” taught at the Bellairs Research Institute of McGill University, Holetown, Barbados, February 19–23, 2017, as the 29th McGill Invitational Workshop on Computational Complexity. The goal of this mini-course is twofold: - to explain how complexity theory has helped illuminate several barriers in economics and game theory; and - to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several very recent breakthroughs. It consists of two five-lecture sequences: the [*Solar Lectures,*]{} focusing on the communication and computational complexity of computing equilibria; and the [*Lunar Lectures,*]{} focusing on applications of complexity theory in game theory and economics.[^1] No background in game theory is assumed. Thanks are due to many people: Denis Therien and Anil Ada for organizing the workshop and for inviting me to lecture; Omri Weinstein, for giving a guest lecture on simulation theorems in communication complexity; Alex Russell, for coordinating the scribe notes; the scribes[^2], for putting together a terrific first draft; and all of the workshop attendees, for making the experience so unforgettable (if intense!). I also thank Yakov Babichenko, Mika Göös, Aviad Rubinstein, Eylon Yogev, and an anonymous reviewer for numerous helpful comments on earlier drafts of this monograph. The writing of this monograph was supported in part by NSF award CCF-1524062, a Google Faculty Research Award, and a Guggenheim Fellowship. I would be very happy to receive any comments or corrections from readers. Tim Roughgarden\ Bracciano, Italy\ December 2017\ (Revised December 2019) *Overview* There are 5 solar lectures and 5 lunar lectures.
The solar lectures focus on the communication and computational complexity of computing an (approximate) Nash equilibrium. The lunar lectures are less technically intense and meant to be understandable even after consuming a rum punch; they focus on applications of computational complexity theory to game theory and economics. The Solar Lectures: Complexity of Equilibria {#the-solar-lectures-complexity-of-equilibria .unnumbered} -------------------------------------------- #### Lecture 1: Introduction and wish list. The goal of the first lecture is to get the lay of the land. We’ll focus on the types of positive results about equilibria that we want, like fast algorithms and quickly converging distributed processes. Such positive results are possible in special cases (like zero-sum games), and the challenge for complexity theory is to prove that they cannot be extended to the general case. The topics in this lecture are mostly classical. #### Lectures 2 and 3: The communication complexity of Nash equilibria. These two lectures cover the main ideas in the recent paper of @BR17, which proves strong communication complexity lower bounds for computing an approximate Nash equilibrium. Discussing the proof also gives us an excuse to talk about “simulation theorems” in the spirit of @DBLP:journals/combinatorica/RazM99, which lift query complexity lower bounds to communication complexity lower bounds and have recently found a number of exciting applications. #### Lecture 4: ${\mathsf{TFNP}}$, ${\mathsf{PPAD}}$, and all that. In this lecture we begin our study of the [*computational*]{} complexity of computing a Nash equilibrium, where we want conditional but super-polynomial lower bounds. 
Proving analogs of ${\mathsf{NP}}$-completeness results requires developing customized complexity classes appropriate for the study of equilibrium computation.[^3] This lecture also discusses the existing evidence for the intractability of these complexity classes, including some very recent developments. #### Lecture 5: The computational complexity of computing an approximate Nash equilibrium of a bimatrix game. The goal of this lecture is to give a high-level overview of Rubinstein’s recent breakthrough result [@R16] that an ETH-type assumption for ${\mathsf{PPAD}}$ implies a quasi-polynomial-time lower bound for the problem of computing an approximate Nash equilibrium (which is tight, by Corollary \[cor:lmm2\]). The Lunar Lectures: Complexity-Theoretic Barriers in Economics {#the-lunar-lectures-complexity-theoretic-barriers-in-economics .unnumbered} -------------------------------------------------------------- Most of the lunar lectures have the flavor of “applied complexity theory.”[^4] While the solar lectures build on each other to some extent, the lunar lectures are episodic and can be read independently of each other. #### Lecture 1: The 2016 FCC Incentive Auction. The recent FCC Incentive Auction is a great case study of how computer science has influenced real-world auction design. This lecture provides our first broader glimpse of the vibrant field called [ *algorithmic game theory*]{}, at most 10% of which concerns the complexity of computing equilibria. #### Lecture 2: Barriers to near-optimal equilibria. This lecture concerns the “price of anarchy,” meaning the extent to which the Nash equilibria of a game approximate an optimal outcome. It turns out that nondeterministic communication complexity lower bounds can be translated, in black-box fashion, to lower bounds on the price of anarchy. We’ll see how this translation enables a theory of “optimal simple auctions.” #### Lecture 3: Barriers in markets. 
You’ve surely heard of the idea of “market-clearing prices,” which are prices in a market such that supply equals demand. When the goods are divisible (milk, wheat, etc.), market-clearing prices exist under relatively mild technical assumptions. With indivisible goods (houses, spectrum licenses, etc.), market-clearing prices may or may not exist. It turns out that complexity considerations can be used to explain when such prices exist and when they do not. This is cool and surprising because the issue of equilibrium existence seems to have nothing to do with computation (in contrast to the Solar Lectures, where the questions studied are explicitly about computation). #### Lecture 4: The borders of Border’s theorem. Border’s theorem is a famous result in auction theory from 1991, about single-item auctions. Despite its fame, no one has been able to extend it to significantly more general settings. We’ll see that complexity theory explains this mystery: significantly generalizing Border’s theorem would imply that the polynomial hierarchy collapses! #### Lecture 5: Tractable relaxations of Nash equilibria. With the other lectures focused largely on negative results for computing Nash equilibria, for an epilogue we’ll conclude with positive algorithmic results for relaxations of Nash equilibria, such as correlated equilibria. Introduction, Wish List, and Two-Player Zero-Sum Games ====================================================== Nash Equilibria in Two-Player Zero-Sum Games -------------------------------------------- ### Preamble To an algorithms person (like the author), complexity theory is the science of why you can’t get what you want. So what is it we want? Let’s start with some cool positive results for a very special class of games—two-player zero-sum games—and then we can study whether or not they extend to more general games. 
For the first positive result, we’ll review the famous Minimax theorem, and see how it leads to a polynomial-time algorithm for computing a Nash equilibrium of a two-player zero-sum game. Then we’ll show that there are natural “dynamics” (basically, a distributed algorithm) that converge rapidly to an approximate Nash equilibrium. ### Rock-Paper-Scissors Recall the game of rock-paper-scissors (or roshambo, if you like)[^5]: there are two players, each simultaneously picks a strategy from $\{ \text{rock}, \text{paper}, \text{scissors} \}$. If both players choose the same strategy then the game is a draw; otherwise, rock beats scissors, scissors beats paper, and paper beats rock.[^6] Here’s an idea: how about we play rock-paper-scissors, and you go first? This is clearly unfair—no matter what strategy you choose, I have a response that guarantees victory. But what if you only have to commit to a [*probability distribution*]{} over your three strategies (called a *mixed strategy*)? To be clear, the order of operations is: (i) you pick a distribution; (ii) I pick a response; (iii) nature flips coins to sample a strategy from your distribution. Now you can protect yourself—by picking a strategy uniformly at random, no matter what I do, you have an equal chance of a win, a loss, or a draw. The [*Minimax theorem*]{} states that, in any game of “pure competition” like rock-paper-scissors, a player can always protect herself with a suitable randomized strategy—there is no disadvantage of having to move first. The proof of the Minimax theorem also gives as a byproduct a polynomial-time algorithm for computing a Nash equilibrium (by linear programming). ### Formalism We specify a two-player zero-sum game with an $m \times n$ payoff matrix $A$ of numbers. The rows correspond to the possible choices of Alice (the “row player”) and the columns correspond to possible choices for Bob (the “column player”). 
Entry $A_{ij}$ contains Alice’s payoff when Alice chooses row $i$ and Bob chooses column $j$. In a zero-sum game, Bob’s corresponding payoff is automatically defined to be $-A_{ij}$. Throughout the solar lectures, we normalize the payoff matrix so that $|A_{ij}| \leq 1$ for all $i$ and $j$.[^7] For example, the payoff matrix corresponding to rock-paper-scissors is: $$\begin{array}{r|ccc} & \text{R} & \text{P} & \text{S} \\ \hline \text{R} & 0 & -1 & 1 \\ \text{P} & 1 & 0 & -1 \\ \text{S} & -1 & 1 & 0 \end{array}$$ Mixed strategies for Alice and Bob correspond to probability distributions $x$ and $y$ over rows and columns, respectively.[^8] When speaking about Nash equilibria, one always assumes that players randomize independently. For a two-player zero-sum game $A$ and mixed strategies $x,y$, we can write Alice’s expected payoff as $$x^{\top} A y = \sum_{i,j} A_{ij} x_i y_j\,.$$ Bob’s expected payoff is the negative of this quantity, so his goal is to minimize the expression above. ### The Minimax Theorem The question that the Minimax theorem addresses is the following: > If two players make choices *sequentially* in a zero-sum game, is it better to go first or second? In a zero-sum game, there can only be a first-mover disadvantage. Going second gives a player the opportunity to adapt to what the other player does first. And the second player always has the option of choosing whatever mixed strategy she would have chosen had she gone first. But does going second ever strictly help? The Minimax theorem gives an amazing answer to the question above: [*it doesn’t matter!*]{} \[t:minmax\] Let $A$ be the payoff matrix of a two-player zero-sum game. Then $$\label{eq:min-max} \max_x \left ( \min_y \; x^{\top} A y \right ) = \min_y \left ( \max_x \; x^{\top} A y \right )\,,$$ where $x$ and $y$ range over probability distributions over the rows and columns of $A$, respectively. On the left-hand side of \[eq:min-max\], the row player moves first and the column player second.
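The expected-payoff formula above is easy to check numerically (a minimal sketch using NumPy): with the rock-paper-scissors matrix, the uniform mixed strategy earns expected payoff $0$ against every column Bob might play.

```python
import numpy as np

# Rock-paper-scissors payoff matrix A for Alice (rows and columns R, P, S).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

u = np.ones(3) / 3  # Alice's uniform mixed strategy

# u^T A gives Alice's expected payoff against each pure column of Bob;
# all entries are 0, so u "protects" Alice regardless of Bob's response.
print(u @ A)          # -> [0. 0. 0.]
print(u @ A @ u)      # expected payoff x^T A y at (u, u): 0.0
```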
The column player plays optimally given the strategy chosen by the row player, and the row player plays optimally anticipating the column player’s response. On the right-hand side of \[eq:min-max\], the roles of the two players are reversed. The Minimax theorem asserts that, under optimal play, the expected payoff of each player is the same in both scenarios. The first proof of the Minimax theorem was due to von Neumann [@vN28] and used fixed-point-type arguments (which we’ll have much more to say about later). von Neumann and Morgenstern [@vNM44], inspired by Ville [@V38], later realized that the Minimax theorem can be deduced from strong linear programming duality.[^9] The idea is to formulate the problem faced by the first player as a linear program. The theorem will then follow from linear programming duality. First, the player who moves second always has an optimal pure (i.e., deterministic) strategy—given the probability distribution chosen by the first player, the second player can simply play the strategy with the highest expected payoff. This means the inner $\min$ and $\max$ in \[eq:min-max\] may as well range over columns and rows, respectively, rather than over all probability distributions. The expression in the left-hand side of \[eq:min-max\] then translates to the following linear program: $$\begin{aligned} &&\max_{x,v} &\quad v \\ &&\text{s.t.} &\quad v \le \sum_{i=1}^m A_{ij} x_i \quad \text{for all columns $j$}, \\ && & \quad x \text{ is a probability distribution over rows.}\end{aligned}$$ If the optimal point is $(v^*, x^*)$, then $v^*$ equals the left-hand side of \[eq:min-max\] and $x^*$ belongs to the corresponding arg-max. In plain terms, $x^*$ is what Alice should play if she has to move first, and $v^*$ is the consequent expected payoff (assuming Bob responds optimally).
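Alice’s linear program above can be solved directly with an off-the-shelf LP solver. The following sketch (assuming SciPy’s `linprog` is available) stacks the variables as $(x_1,\dots,x_m,v)$ and recovers a min-max strategy and the value for rock-paper-scissors:

```python
import numpy as np
from scipy.optimize import linprog

def alice_lp(A):
    """Solve  max v  s.t.  v <= sum_i A[i,j] x[i] for all columns j,
    x a probability distribution.  Variables stacked as (x_1,...,x_m, v)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(m), [-1.0]])          # minimize -v = maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])          # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]])[None]   # sum_i x_i = 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]          # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])  # rock-paper-scissors
x_star, v_star = alice_lp(A)
print(x_star, v_star)  # uniform strategy, value 0 (up to solver tolerance)
```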
Similarly, we can write a second linear program that computes the optimal point $(w^*, y^*)$ from Bob’s perspective, where $w^*$ equals the right-hand side of \[eq:min-max\] and $y^*$ is in the corresponding arg-min: $$\begin{aligned} && \min_{y,w} &\quad w \\ && \text{s.t.} &\quad w \ge \sum_{j=1}^n A_{ij} y_j \quad \text{for all rows $i$}, \\ && &\quad y \text{ is a probability distribution over columns.}\end{aligned}$$ It is straightforward to verify that these two linear programs are in fact duals of each other (left to the reader, or see Chvátal [@chvatal]). By strong linear programming duality, we know that the two linear programs have equal optimal objective function value and hence $v^* = w^*$. This means that the payoff that Alice can guarantee herself when she goes first is the same as when Bob goes first (and plays optimally), completing the proof. Let $A$ be the payoff matrix of a two-player zero-sum game. The *value* of the game is defined as the common value of $$\max_x \left ( \min_y \; x^{\top} A y \right) \text{~~and~~} \min_y \left ( \max_x \; x^{\top} A y \right ).$$ A [*min-max strategy*]{} is a strategy $x^*$ in the arg-max of the left-hand side or a strategy $y^*$ in the arg-min of the right-hand side. A *min-max pair* is a pair $(x^*,y^*)$ where $x^*$ and $y^*$ are both min-max strategies. For example, the value of the rock-paper-scissors game is $0$ and $(u,u)$ is its unique min-max pair, where $u$ denotes the uniform probability distribution. The min-max pairs are the optimal solutions of the two linear programs in the proof of Theorem \[t:minmax\]. Because the optimal solution of a linear program can be computed in polynomial time, so can a min-max pair. ### Nash Equilibrium In zero-sum games, a min-max pair is closely related to the notion of a Nash equilibrium, defined next.[^10] \[d:ne\] Let $A$ be the payoff matrix of a two-player zero-sum game.
The pair $(\hat{x}, \hat{y})$ is a *Nash equilibrium* if: - $\hat{x}^{\top} A \hat{y} \geq x^{\top} A \hat{y} \;$ for all $x$ (given that Bob plays $\hat{y}$, Alice cannot increase her expected payoff by deviating unilaterally to a strategy different from $\hat{x}$, i.e., $\hat{x}$ is optimal given $\hat{y}$); - $\hat{x}^{\top} A \hat{y} \leq \hat{x}^{\top} A y \;$ for all $y$ (given $\hat{x}$, $\hat{y}$ is an optimal strategy for Bob). The pairs in Definition \[d:ne\] are sometimes called *mixed* Nash equilibria, to stress that players are allowed to randomize. (As opposed to a [*pure*]{} Nash equilibrium, where both players play deterministically.) Unless otherwise noted, we will always be concerned with mixed Nash equilibria. \[claim:ne\] In a two-player zero-sum game, a pair $(x^*,y^*)$ is a min-max pair if and only if it is a Nash equilibrium. Suppose $(x^*,y^*)$ is a min-max pair, and so Alice’s expected payoff is $v^*$, the value of the game. Because Alice plays her min-max strategy, Bob cannot make her payoff smaller than $v^*$ via some other strategy. Because Bob plays his min-max strategy, Alice cannot make her payoff larger than $v^*$. Neither player can do better with a unilateral deviation, and so $(x^*,y^*)$ is a Nash equilibrium. Conversely, suppose $(x^*,y^*)$ is not a min-max pair with, say, Alice not playing a min-max strategy. If Alice’s expected payoff is less than $v^*$, then $(x^*,y^*)$ is not a Nash equilibrium (she could do better by deviating to a min-max strategy). Otherwise, because $x^*$ is not a min-max strategy, Bob has a response $y$ such that Alice’s expected payoff would be strictly less than $v^*$. Here, Bob could do better by deviating unilaterally to $y$. In any case, $(x^*,y^*)$ is not a Nash equilibrium. There are several interesting consequences of Theorem \[t:minmax\] and Proposition \[claim:ne\]: 1. 
The set of all Nash equilibria of a two-player zero-sum game is convex, as the optimal solutions of a linear program form a convex set. 2. All Nash equilibria $(x,y)$ of a two-player zero-sum game lead to the same value of $x^{\top} A y$. That is, each player receives the same expected payoff across all Nash equilibria. 3. Most importantly, because the proof of Theorem \[t:minmax\] provides a polynomial-time algorithm to compute a min-max pair $(x^*, y^*)$, we have a polynomial-time algorithm to compute a Nash equilibrium of a two-player zero-sum game. \[cor:zerosum\] A Nash equilibrium of a two-player zero-sum game can be computed in polynomial time. ### Beyond Zero-Sum Games (Computational Complexity) {#ss:beyond} Can we generalize Corollary \[cor:zerosum\] to more general classes of games? After all, while two-player zero-sum games are important—von Neumann was largely focused on them, with applications ranging from poker to war—most game-theoretic situations are not purely zero-sum.[^11] For example, what about [ *bimatrix games*]{}, in which there are still two players but the game is not necessarily zero-sum?[^12] Solar Lectures 4 and 5 are devoted to this question, and provide evidence that there is no polynomial-time algorithm for computing a Nash equilibrium (even an approximate one) of a bimatrix game. ### Who Cares? {#ss:whocares} Before proceeding to our second cool fact about two-player zero-sum games, let’s take a step back and be clear about what we’re trying to accomplish. Why do we care about computing equilibria of games, anyway? 1. We might want fast algorithms to use in practice. The demand for equilibrium computation algorithms is significantly less than that for, say, linear programming solvers, but the author regularly meets researchers who would make good use of better off-the-shelf solvers for computing an equilibrium of a game. 2. 
Perhaps most relevant for this monograph’s audience, the study of equilibrium computation naturally leads to interesting and new complexity theory (e.g., definitions of new complexity classes, such as ${\mathsf{PPAD}}$). We will see that the most celebrated results in the area are quite deep and draw on ideas from all across theoretical computer science. 3. Complexity considerations can be used to support or critique the practical relevance of an equilibrium concept such as the Nash equilibrium. It is tempting to interpret a polynomial-time algorithm for computing an equilibrium as a plausibility argument that players can figure one out quickly, and an intractability result as evidence that players will not generally reach an equilibrium in a reasonable amount of time. Of course, the real story is more complex. First, computational intractability is not necessarily first on the list of the Nash equilibrium’s issues. For example, its non-uniqueness in non-zero-sum games already limits its predictive power.[^13] Second, it’s not particularly helpful to critique a definition without suggesting an alternative. Lunar Lecture 5 partially addresses this issue by discussing two tractable equilibrium concepts, correlated equilibria and coarse correlated equilibria. Third, does an arbitrary polynomial-time algorithm, such as one based on solving a non-trivial linear program, really suggest that independent play by strategic players will actually converge to an equilibrium? Algorithms for linear programming do not resemble how players typically make decisions in games. A stronger positive result would involve a behaviorally plausible distributed algorithm that players can use to efficiently converge to a Nash equilibrium through repeated play over time. We discuss such a result for two-player zero-sum games next. 
Uncoupled Dynamics {#s:uncoupled} ------------------ In the first half of the lecture, we saw that a Nash equilibrium of a two-player zero-sum game can be computed in polynomial time using linear programming. It would be more compelling, however, to come up with a definition of a plausible process by which players can learn a Nash equilibrium. Such a result requires a behavioral model for what players do when not at equilibrium. The goal is then to investigate whether or not the process converges to a Nash equilibrium (for an appropriate notion of convergence), and if so, how quickly. ### The Setup [*Uncoupled dynamics*]{} refers to a class of processes with the properties mentioned above. The idea is that each player initially knows only her own payoffs (and not those of the other players), à la the number-in-hand model in communication complexity.[^14] The game is then played repeatedly, with each player picking a strategy in each time step as a function only of her own payoffs and what transpired in the past. At each time step $t=1,2,3,\ldots$: 1. Alice chooses a strategy $x^t$ as a function only of her own payoffs and the previously chosen strategies $x^1,\ldots,x^{t-1}$ and $y^1,\ldots,y^{t-1}$. 2. Bob simultaneously chooses a strategy $y^t$ as a function only of his own payoffs and the previously chosen strategies $x^1,\ldots,x^{t-1}$ and $y^1,\ldots,y^{t-1}$. 3. Alice learns $y^t$ and Bob learns $x^t$. Uncoupled dynamics have been studied at length in both the game theory and computer science literatures (often under different names). Specifying such dynamics boils down to a definition of how Alice and Bob choose strategies as a function of their payoffs and the joint history of play. Let’s look at some famous examples. ### Fictitious Play One natural idea is to best respond to the observed behavior of your opponent. 
In *fictitious play*, each player assumes that the other player will mix according to the relative frequencies of their past actions (i.e., the empirical distribution of their past play), and plays a best response.[^15] At each time step $t=1,2,3,\ldots$: 1. Alice chooses a strategy $x^t$ that is a best response against ${\hat{y}}^{t-1} = \tfrac{1}{t-1} \sum_{s=1}^{t-1} y^s$, the past actions of Bob (breaking ties arbitrarily). 2. Bob simultaneously chooses a strategy $y^t$ that is a best response against ${\hat{x}}^{t-1} = \tfrac{1}{t-1} \sum_{s=1}^{t-1} x^s$, the past actions of Alice (breaking ties arbitrarily). 3. Alice learns $y^t$ and Bob learns $x^t$. Note that each player picks a pure strategy in each time step (modulo tie-breaking in the case of multiple best responses). One way to interpret fictitious play is to imagine that each player assumes that the other is using the same mixed strategy every time step, and estimates this time-invariant mixed strategy with the empirical distribution of the strategies chosen in the past. Fictitious play has an interesting history: 1. It was first proposed by G. W. Brown in 1949 (published in 1951 [@B51]) as a computer algorithm to compute a Nash equilibrium of a two-player zero-sum game. This is not so long after the birth of either game theory or computers! 2. In 1951, Julia Robinson (better known for her contributions to the resolution of Hilbert’s tenth problem about Diophantine equations) proved that, in two-player zero-sum games, the time-averaged payoffs of the players converge to the value of the game [@Rob51]. Robinson’s proof gives only an exponential (in the number of strategies) bound on the number of iterations required for convergence. In 1959, Karlin [@K59] conjectured that a polynomial bound should be possible (for two-player zero-sum games). 
Fast forward to 2014, and @DP14 refuted Karlin’s conjecture and proved an exponential lower bound for the case of adversarial (and not necessarily consistent) tie-breaking. 3. It is still an open question whether or not fictitious play converges quickly in two-player zero-sum games for natural (or even just consistent) tie-breaking rules! The goal here would be to show that ${\mathrm{poly}}(n,1/\epsilon)$ time steps suffice for the time-averaged payoffs to be within $\epsilon$ of the value of the game (where $n$ is the total number of rows and columns). 4. The situation for non-zero-sum games was murky until 1964, when Lloyd Shapley discovered a $3 \times 3$ game (a non-zero-sum variation on rock-paper-scissors) where fictitious play never converges to a Nash equilibrium [@S64]. Shapley’s counterexample foreshadowed future separations between the tractability of zero-sum and non-zero-sum games. Next we’ll look at a different choice of dynamics with better convergence properties. ### Smooth Fictitious Play Fictitious play is “all-or-nothing”—even if two strategies have almost the same expected payoff against the opponent’s empirical distribution, the slightly worse one is completely ignored in favor of the slightly better one. A more stable approach, and perhaps a more behaviorally plausible one, is to assume that players randomize, biasing their decision toward the strategies with the highest expected payoffs (again, against the empirical distribution of the opponent). In other words, each player plays a “noisy best response” against the observed play of the other player. For example, already in 1957 @Han57 considered dynamics where each player chooses a strategy with probability proportional to her expected payoff (against the empirical distribution of the other player’s past play), and proved polynomial convergence to the Nash equilibrium payoffs in two-player zero-sum games. 
Even better convergence properties are possible if poorly performing strategies are abandoned more aggressively, corresponding to a “softmax” version of fictitious play. In time $t$ of *smooth fictitious play*, a player (Alice, say) computes the empirical distribution ${\hat{y}}^{t-1} = \tfrac{1}{t-1} \sum_{s=1}^{t-1} y^s$ of the other player’s past play, computes the expected payoff $\pi^t_i$ of each pure strategy $i$ under the assumption that Bob plays ${\hat{y}}^{t-1}$, and chooses $x^t$ by playing each strategy with probability proportional to $e^{\eta^t \pi^t_i}$. (When $t=1$, interpret the $\pi^t_i$’s as 0 and hence the player chooses the uniform distribution.) Here $\eta^t$ is a tunable parameter that interpolates between always playing uniformly at random (when $\eta = 0$) and fictitious play with random tie-breaking (when $\eta = +\infty$). The choice $\eta^t \approx \sqrt{t}$ is often the best one for proving convergence results. **Given:** parameter family $\{ \eta^t \in [0,\infty) \,:\, t=1,2,3,\ldots\}$. At each time step $t=1,2,3,\ldots$: 1. Alice chooses a strategy $x^t$ by playing each strategy $i$ with probability proportional to $e^{\eta^t\pi^t_i}$, where $\pi^t_i$ denotes the expected payoff of strategy $i$ when Bob plays the mixed strategy ${\hat{y}}^{t-1} = \tfrac{1}{t-1} \sum_{s=1}^{t-1} y^s$. 2. Bob simultaneously chooses a strategy $y^t$ by playing each strategy $j$ with probability proportional to $e^{\eta^t\pi^t_j}$, where $\pi^t_j$ is the expected payoff of strategy $j$ when Alice plays the mixed strategy ${\hat{x}}^{t-1} = \tfrac{1}{t-1} \sum_{s=1}^{t-1} x^s$. 3. Alice learns $y^t$ and Bob learns $x^t$. Versions of smooth fictitious play have been studied independently in the game theory literature (beginning with @FL95) and the computer science literature (beginning with @FS99). It converges extremely quickly.
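To make the dynamics concrete, here is a minimal Python sketch of smooth fictitious play on a zero-sum game (function names are ours). For simplicity it tracks the players’ mixed strategies $x^t, y^t$ directly, as in the convergence proof sketched later, rather than sampled pure actions; Alice (the row player) maximizes $x^{\top}Ay$ and Bob minimizes it.

```python
import numpy as np

def _softmax(v):
    w = np.exp(v - v.max())   # subtract max for numerical stability
    return w / w.sum()

def smooth_fictitious_play(A, T):
    """T steps of smooth fictitious play (with eta^t = sqrt(t)) on the
    zero-sum game with payoff matrix A.  Tracks the mixed strategies
    x^t, y^t directly; returns the time-averaged strategies."""
    m, n = A.shape
    x_sum, y_sum = np.zeros(m), np.zeros(n)
    for t in range(1, T + 1):
        if t == 1:
            pi_x, pi_y = np.zeros(m), np.zeros(n)  # no history: play uniformly
        else:
            pi_x = A @ (y_sum / (t - 1))        # payoffs vs. Bob's empirical play
            pi_y = -(A.T @ (x_sum / (t - 1)))   # Bob's payoffs (zero-sum)
        eta_t = np.sqrt(t)
        x_sum += _softmax(eta_t * pi_x)
        y_sum += _softmax(eta_t * pi_y)
    return x_sum / T, y_sum / T
```

On a small $2 \times 2$ game the time-averaged strategies quickly approach an approximate equilibrium, matching the guarantee of Theorem \[t:sfp\].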
\[t:sfp\] For a zero-sum two-player game with $m$ rows and $n$ columns and a parameter $\epsilon > 0$, after $T=O(\log(n+m)/\epsilon^2)$ time steps of smooth fictitious play with $\eta^t=\Theta(\sqrt{t})$ for each $t$, the empirical distributions ${\hat{x}}= \tfrac{1}{T} \sum_{t=1}^T x^t$ and ${\hat{y}}= \tfrac{1}{T} \sum_{t=1}^T y^t$ constitute an $\epsilon$-approximate Nash equilibrium. The ${\epsilon}$-approximate Nash equilibrium condition in Theorem \[t:sfp\] is exactly what it sounds like: neither player can improve their expected payoff by more than $\epsilon$ via a unilateral deviation (see also Definition \[d:ene\], below).[^16] There are two steps in the proof of Theorem \[t:sfp\]: (i) the noisy best response in smooth fictitious play is equivalent to the “Exponential Weights” algorithm, which has “vanishing regret”; and (ii) in a two-player zero-sum game, vanishing-regret guarantees translate to (approximate) Nash equilibrium convergence. The optional Sections \[ss:sfp1\]–\[ss:sfp3\] provide more details for the interested reader. ### Beyond Zero-Sum Games (Communication Complexity) {#ss:implycomm} Theorem \[t:sfp\] implies that smooth fictitious play can be used to define a randomized $O(\log^2 (n+m)/\epsilon^2)$-bit communication protocol for computing an $\epsilon$-${\mathsf{NE}}$ of a two-player zero-sum game.[^17] The goal of Solar Lectures 2 and 3 is to prove that there is no analogously efficient communication protocol for computing an approximate Nash equilibrium of a general bimatrix game.[^18] Ruling out low-communication protocols will in particular rule out any type of quickly converging uncoupled dynamics.[^19] ### Proof of Theorem \[t:sfp\], Part 1: Exponential Weights (Optional) {#ss:sfp1} To elaborate on the first step of the proof of Theorem \[t:sfp\], we need to explain the standard setup for online decision-making.
At each time step $t=1,2,\ldots,T$:

1. a decision-maker picks a probability distribution $\p^t$ over her actions $\Lambda$;

2. an adversary picks a reward vector $\r^t:\Lambda \rightarrow [-1,1]$;

3. an action $a^t$ is chosen according to the distribution $\p^t$, and the decision-maker receives reward $r^t(a^t)$;

4. the decision-maker learns $\r^t$, the entire reward vector.

In smooth fictitious play, each of Alice and Bob is in effect solving the online decision-making problem (with actions corresponding to the game’s strategies). For Alice, the reward vector $\r^t$ is induced by Bob’s action at time step $t$ (if Bob plays strategy $j$, then $r^t$ is the $j$th column of the game matrix $A$), and similarly for Bob (if Alice plays strategy $i$, then Bob’s reward vector is the $i$th row of $A$ multiplied by $-1$). Next we interpret Alice’s and Bob’s behavior in smooth fictitious play as algorithms for online decision-making. An [*online decision-making algorithm*]{} specifies for each $t$ the probability distribution $\p^t$, as a function of the reward vectors $\r^1,\ldots,\r^{t-1}$ and realized actions $a^1,\ldots,a^{t-1}$ of the first $t-1$ time steps. An [*adversary*]{} for such an algorithm $\Alg$ specifies for each $t$ the reward vector $\r^t$, as a function of the probability distributions $\p^1,\ldots,\p^t$ used by $\Alg$ on the first $t$ days and the realized actions $a^1,\ldots,a^{t-1}$ of the first $t-1$ days. Here is a famous online decision-making algorithm, the “Exponential Weights (EW)” algorithm (see [@LW94; @FS97]).[^20] Initialize $w^1(a) = 1$ for every $a \in \Lambda$; at each time step $t$, choose an action according to the distribution $\p^t(a) \propto w^t(a)$, and once the reward vector $\r^t$ is revealed, set $w^{t+1}(a) = w^t(a) \cdot e^{\eta r^t(a)}$ for every $a \in \Lambda$, where $\eta > 0$ is a learning-rate parameter. The EW algorithm maintains a weight, intuitively a “credibility,” for each action. At each time step the algorithm chooses an action with probability proportional to its current weight. The weight of each action evolves over time according to the action’s past performance.
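The EW algorithm is only a few lines to implement. Here is a minimal sketch (class and method names are ours); we treat the learning rate $\eta$ as a fixed parameter, whereas the regret guarantee below requires a suitable choice such as $\eta \approx \sqrt{(\ln n)/T}$ for a known horizon $T$.

```python
import numpy as np

class ExponentialWeights:
    """Minimal EW sketch: each action keeps a weight (its "credibility"),
    play is proportional to the weights, and each weight is multiplied
    by e^{eta * r(a)} once the reward vector r is revealed."""
    def __init__(self, n_actions, eta):
        self.w = np.ones(n_actions)  # initialize w^1(a) = 1 for every action
        self.eta = eta

    def distribution(self):
        return self.w / self.w.sum()

    def update(self, rewards):
        self.w *= np.exp(self.eta * np.asarray(rewards, dtype=float))
        self.w /= self.w.max()  # rescale to avoid overflow; play is unaffected
```

Against a fixed reward vector, the distribution concentrates exponentially fast on the best action, which is exactly the “softmax” behavior described above.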
Inspecting the descriptions of smooth fictitious play and the EW algorithm, we see that we can rephrase the former as follows: **Given:** parameter family $\{ \eta^t \in [0,\infty) \,:\, t=1,2,3,\ldots\}$. At each time step $t=1,2,3,\ldots$: 1. Alice uses an instantiation of the EW algorithm to choose a mixed strategy $x^t$. 2. Bob uses a different instantiation of the EW algorithm to choose a mixed strategy $y^t$. 3. Alice learns $y^t$ and Bob learns $x^t$. 4. Alice feeds her EW algorithm a reward vector $r^t$ with $r^t(i)$ equal to the expected payoff of playing row $i$, given Bob’s mixed strategy $y^t$ over columns; and similarly for Bob. How should we assess the performance of an online decision-making algorithm like the EW algorithm, and do guarantees for the algorithm have any implications for smooth fictitious play? ### Proof of Theorem \[t:sfp\], Part 2: Vanishing Regret (Optional) {#ss:sfp2} One of the big ideas in online learning is to compare the time-averaged reward earned by an online algorithm with that earned by the best [*fixed action*]{} in hindsight.[^21] \[d:regreta\] Fix reward vectors $\r^1,\ldots,\r^T$. The [*regret*]{} of the action sequence $a^1,\ldots,a^T$ is $$\label{eq:regreta} \underbrace{\frac{1}{T} \max_{a \in \Lambda} \sum_{t=1}^T r^t(a)}_{\text{best fixed action}}- \underbrace{\frac{1}{T} \sum_{t=1}^T r^t(a^t)}_{\text{our algorithm}}.$$ Note that, by linearity, there is no difference between considering the best fixed action and the best fixed distribution over actions (there is always an optimal pure action in hindsight). We aspire to an online decision-making algorithm that achieves low regret, as close to 0 as possible. Because rewards lie in $[-1,1]$, the regret can never be larger than 2. We think of regret $\Omega(1)$ (as $T \rightarrow \infty$) as an epic fail for an algorithm. 
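Regret is straightforward to compute from a transcript of play. A short sketch of Definition \[d:regreta\] (function name ours):

```python
def regret(rewards, actions):
    """Regret of an action sequence: the best fixed action's time-averaged
    reward minus the algorithm's time-averaged reward.
    rewards: list of per-action reward vectors r^1, ..., r^T
    actions: list of chosen action indices a^1, ..., a^T"""
    T = len(rewards)
    n = len(rewards[0])
    best_fixed = max(sum(r[a] for r in rewards) for a in range(n)) / T
    ours = sum(r[a] for r, a in zip(rewards, actions)) / T
    return best_fixed - ours
```

Note that `regret` can be negative for a clairvoyant action sequence; the benchmark is only the best *fixed* action in hindsight.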
It turns out that the EW algorithm has the best-possible worst-case regret guarantee (up to constant factors).[^22] \[t:noregret\] For every adversary, the EW algorithm has expected regret $O(\sqrt{(\log n)/T})$, where $n=|\Lambda|$. See e.g. the book of @CBL06 for a proof of Theorem \[t:noregret\], which is not overly difficult. An immediate corollary is that the number of time steps needed to drive the expected regret down to a small constant is only logarithmic in the number of actions—this is surprisingly fast! \[cor:noregret\] There is an online decision-making algorithm that, for every adversary and ${\epsilon}> 0$, has expected regret at most ${\epsilon}$ after $O((\log n)/{\epsilon}^2)$ time steps, where $n=|\Lambda|$. ### Proof of Theorem \[t:sfp\], Part 3: Vanishing Regret Implies Convergence (Optional) {#ss:sfp3} Consider a zero-sum game $A$ with payoffs in $[-1,1]$ and some ${\epsilon}> 0$. Let $n$ denote the number of rows or the number of columns, whichever is larger, and set $T = \Theta((\log n)/{\epsilon}^2)$ so that the guarantee in Corollary \[cor:noregret\] holds with error ${\epsilon}/2$. Let $x^1,\ldots,x^T$ and $y^1,\ldots,y^T$ be the mixed strategies used by Alice and Bob throughout $T$ steps of smooth fictitious play. Let $\hat{\x} = \tfrac{1}{T} \sum_{t=1}^T \x^t$ and $\hat{\y} = \tfrac{1}{T} \sum_{t=1}^T \y^t$ denote the time-averaged strategies of Alice and Bob, respectively. We claim that $({\hat{x}},{\hat{y}})$ is an ${\epsilon\text{-}\mathsf{NE}}$. In proof, let $$v = \frac{1}{T} \sum_{t=1}^T (\x^t)^{\top}A\y^t$$ denote Alice’s time-averaged payoff. 
Alice and Bob both used (in effect) the EW algorithm to choose their strategies, so we can apply the vanishing regret guarantee in Corollary \[cor:noregret\] once for each player and use linearity to obtain $$\label{eq:noregret1} v \ge \left( \max_{\x} \frac{1}{T} \sum_{t=1}^T \x^{\top}A\y^t \right) - \frac{{\epsilon}}{2} = \left( \max_{\x} \x^{\top}A\hat{\y} \right) - \frac{{\epsilon}}{2}$$ and $$\label{eq:noregret2} v \le \left( \min_{\y} \frac{1}{T} \sum_{t=1}^T ({\x}^t)^{\top}A\y \right) + \frac{{\epsilon}}{2} = \left( \min_{\y} \hat{\x}^{\top}A\y \right) + \frac{{\epsilon}}{2}.$$ In particular, taking $\x = {\hat{x}}$ in \eqref{eq:noregret1} and $\y = {\hat{y}}$ in \eqref{eq:noregret2} shows that $$\label{eq:noregret3} {\hat{x}}^{\top}A{\hat{y}}\in \left[ v - \frac{{\epsilon}}{2}, v + \frac{{\epsilon}}{2} \right].$$ Now consider a (pure) deviation from $({\hat{x}},{\hat{y}})$, say by Alice to the row $i$. Denote this deviation by $e_i$. By inequality \eqref{eq:noregret1} (with $\x = e_i$) we have $$\label{eq:noregret4} e_i^{\top}A{\hat{y}}\le v+\frac{{\epsilon}}{2}.$$ Because Alice receives expected payoff at least $v-\tfrac{{\epsilon}}{2}$ in $({\hat{x}},{\hat{y}})$ (by \eqref{eq:noregret3}) and at most $v+\tfrac{{\epsilon}}{2}$ from any deviation (by \eqref{eq:noregret4}), her ${\epsilon\text{-}\mathsf{NE}}$ conditions are satisfied. A symmetric argument applies to Bob, completing the proof. General Bimatrix Games {#s:bimatrix} ---------------------- A general bimatrix game is defined by two independent payoff matrices, an $m \times n$ matrix $A$ for Alice and an $m \times n$ matrix $B$ for Bob. (In a zero-sum game, $B=-A$.)
The definition of an (approximate) Nash equilibrium is what you’d think it would be: \[d:ene\] For a bimatrix game $(A,B)$, row and column mixed strategies $\hat{x}$ and $\hat{y}$ constitute an $\epsilon$-${\mathsf{NE}}$ if $$\begin{aligned} \hat{x}^{\top} A\hat{y}\ &\geq\ x^{\top} A\hat{y}- \epsilon \qquad \forall x \,, \text{ and }\\ \hat{x}^{\top} B\hat{y}\ &\geq\ \hat{x}^{\top} By - \epsilon\qquad \forall y\,.\end{aligned}$$ It has long been known that many of the nice properties of zero-sum games break down in general bimatrix games.[^23] \[ex:bimatrix\] Suppose two friends, Alice and Bob, want to go for dinner, and are trying to agree on a restaurant. Alice prefers Italian over Thai, and Bob prefers Thai over Italian, but both would rather eat together than eat alone.[^24] Supposing the rows and columns are indexed by Italian and Thai, in that order, and Alice is the row player, we get the following payoff matrices: $$A=\left[\begin{matrix}2&0\\0&1\end{matrix}\right], \qquad B=\left[\begin{matrix}1&0\\0&2\end{matrix}\right], \qquad \text{or, in shorthand, }\quad (A,B)=\left[\begin{matrix}(2,1)&(0,0)\\(0,0)&(1,2)\end{matrix}\right]\enspace .$$ There are two obvious Nash equilibria, both pure: either Alice and Bob go to the Italian restaurant, or they both go to the Thai restaurant. But there’s a third Nash equilibrium, a mixed one[^25]: Alice chooses Italian over Thai with probability $\tfrac23$, and Bob chooses Thai over Italian with probability $\tfrac23$. This is an undesirable Nash equilibrium, with Alice and Bob eating alone more than half the time. Example \[ex:bimatrix\] shows that, unlike in zero-sum games, different Nash equilibria can result in different expected player payoffs. Similarly, the Nash equilibria of a bimatrix game do not generally form a convex set (unlike in the zero-sum case). Nash equilibria of bimatrix games are not completely devoid of nice properties, however. For starters, we have guaranteed existence. 
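Before moving on, the equilibria of Example \[ex:bimatrix\] can be verified mechanically. The sketch below checks the conditions of Definition \[d:ene\]; by linearity, it suffices to check pure deviations (function names are ours).

```python
import numpy as np

def is_eps_ne(A, B, x, y, eps=0.0, tol=1e-9):
    """Check Definition [d:ene]: neither player can gain more than eps by a
    unilateral (pure, hence by linearity any mixed) deviation."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return (max(A @ y) <= x @ A @ y + eps + tol and
            max(x @ B) <= x @ B @ y + eps + tol)

# The payoff matrices of Example [ex:bimatrix]:
A = np.array([[2., 0.], [0., 1.]])
B = np.array([[1., 0.], [0., 2.]])
```

Running the checks confirms the two pure equilibria and the undesirable mixed one, and rejects the miscoordinated pure profiles.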
\[t:nash\] Every bimatrix game has at least one (mixed) Nash equilibrium. The proof is a fixed-point argument that we will have more to say about in Solar Lecture 2.[^26] Nash’s theorem holds more generally for games with any finite number of players and strategies. Nash equilibria of bimatrix games have nicer structure than those in games with three or more players. First, in bimatrix games with integer payoffs, there is a Nash equilibrium in which all probabilities are rational numbers with bit complexity polynomial in that of the game.[^27] Second, there is a simplex-type pivoting algorithm, called the [*Lemke-Howson algorithm*]{} [@LH64], which computes a Nash equilibrium of a bimatrix game in a finite number of steps (see @vS07 for a survey). Like the simplex method, the Lemke-Howson algorithm takes an exponential number of steps in the worst case [@M94; @SvS04]. The similarities between Nash equilibria of bimatrix games and optimal solutions of linear programs initially led to some optimism that computing the former might be as easy as computing the latter (i.e., might be a polynomial-time solvable problem). Alas, as we’ll see, this does not seem to be the case. Approximate Nash Equilibria in Bimatrix Games --------------------------------------------- The last topic of this lecture is some semi-positive results about [*approximate*]{} Nash equilibria in general bimatrix games. While simple, these results are important and will show up repeatedly in the rest of the lectures. ### Sparse Approximate Nash Equilibria Here is a crucial result for us: there are always [*sparse*]{} approximate Nash equilibria.[^28][^29] \[t:lmm\] For every ${\epsilon}> 0$ and every $n\times n$ bimatrix game, there exists an $\epsilon$-${\mathsf{NE}}$ in which each player randomizes uniformly over a multi-set of $O((\log n)/\epsilon^2)$ pure strategies.[^30] Fix an $n \times n$ bimatrix game $(A,B)$. 1. Let $(x^*,y^*)$ be an exact Nash equilibrium of $(A,B)$. 
(One exists, by Theorem \[t:nash\].) 2. As a thought experiment, sample $\Theta((\log n)/{\epsilon}^2)$ pure strategies for Alice i.i.d. (with replacement) from $x^*$, and similarly for Bob i.i.d. from $y^*$. 3. Let ${\hat{x}},{\hat{y}}$ denote the empirical distributions of the samples (with probabilities equal to frequencies in the sample)—equivalently, the uniform distributions over the two multi-sets of pure strategies. 4. Use Chernoff bounds to argue that $({\hat{x}},{\hat{y}})$ is an ${\epsilon\text{-}\mathsf{NE}}$ (with high probability). Specifically, because of our choice of the number of samples, the expected payoff of each row strategy w.r.t. ${\hat{y}}$ differs from that w.r.t. $y^*$ by at most ${\epsilon}/2$ (w.h.p.). Because every strategy played with non-zero probability in $x^*$ is an exact best response to $y^*$, every strategy played with non-zero probability in ${\hat{x}}$ is within ${\epsilon}$ of a best response to ${\hat{y}}$. (The same argument applies with the roles of ${\hat{x}}$ and ${\hat{y}}$ reversed.) This is a sufficient condition for being an ${\epsilon\text{-}\mathsf{NE}}$.[^31] ### Implications for Communication Complexity Theorem \[t:lmm\] immediately implies the existence of an ${\epsilon\text{-}\mathsf{NE}}$ of an $n \times n$ bimatrix game with description length $O((\log^2 n)/{\epsilon}^2)$, with $\approx \log n$ bits used to describe each of the $O((\log n)/{\epsilon}^2)$ pure strategies in the multi-sets promised by the theorem. Moreover, if an all-powerful prover writes down an alleged such description on a publicly observable blackboard, then Alice and Bob can privately verify that the described pair of mixed strategies is indeed an ${\epsilon\text{-}\mathsf{NE}}$. 
For example, Alice can use the (publicly viewable) description of Bob’s mixed strategy to compute the expected payoff of her best response and check that it is at most ${\epsilon}$ more than her expected payoff when playing the mixed strategy suggested by the prover. Summarizing: \[cor:lmm1\] The nondeterministic communication complexity of computing an ${\epsilon\text{-}\mathsf{NE}}$ of an $n \times n$ bimatrix game is $O((\log^2 n)/{\epsilon}^2)$. Thus, if there [*is*]{} a polynomial lower bound on the deterministic or randomized communication complexity of computing an approximate Nash equilibrium, the only way to prove it is via techniques that don’t automatically apply also to the problem’s nondeterministic communication complexity. This observation rules out many of the most common lower bound techniques. In Solar Lectures 2 and 3, we’ll see how to thread the needle using a [*simulation theorem*]{}, which lifts a deterministic or random query (i.e., decision tree) lower bound to an analogous communication complexity lower bound. ### Implications for Computational Complexity The second important consequence of Theorem \[t:lmm\] is a limit on the worst-possible computational hardness we could hope to prove for the problem of computing an approximate Nash equilibrium of a bimatrix game: at worst, the problem is quasi-polynomial-hard. \[cor:lmm2\] There is an algorithm that, given as input a description of an $n \times n$ bimatrix game and a parameter ${\epsilon}$, outputs an ${\epsilon\text{-}\mathsf{NE}}$ in $n^{O((\log n)/{\epsilon}^2)}$ time. The algorithm enumerates all $n^{O((\log n)/{\epsilon}^2)}$ possible choices for the multi-sets promised by Theorem \[t:lmm\]. It is easy to check whether or not the mixed strategies induced by such a choice constitute an ${\epsilon\text{-}\mathsf{NE}}$—just compute the expected payoffs of each strategy and of the players’ best responses, as in the proof of Corollary \[cor:lmm1\]. 
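The sampling thought experiment behind Theorem \[t:lmm\] is also easy to simulate (function and parameter names are ours; the multiset size `k` plays the role of $\Theta((\log n)/{\epsilon}^2)$):

```python
import numpy as np

def sparsify(x_star, y_star, k, seed=0):
    """Theorem [t:lmm] thought experiment: draw k pure strategies i.i.d.
    from each player's exact-equilibrium mixed strategy and return the
    empirical (uniform-over-multiset) distributions."""
    rng = np.random.default_rng(seed)
    rows = rng.choice(len(x_star), size=k, p=x_star)
    cols = rng.choice(len(y_star), size=k, p=y_star)
    x_hat = np.bincount(rows, minlength=len(x_star)) / k
    y_hat = np.bincount(cols, minlength=len(y_star)) / k
    return x_hat, y_hat
```

Applied to the mixed equilibrium of Example \[ex:bimatrix\], the resulting empirical distributions are (with high probability) an approximate Nash equilibrium, exactly as in step 4 of the proof sketch.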
Because of the apparent paucity of natural problems with quasi-polynomial complexity, the quasi-polynomial-time approximation scheme (QPTAS) in Corollary \[cor:lmm2\] initially led to optimism that there should be a PTAS for the problem. Also, if there [*were*]{} a reduction showing quasi-polynomial-time hardness for computing an approximate Nash equilibrium, what would be the appropriate complexity assumption, and what would the reduction look like? Solar Lectures 4 and 5 answer this question. Communication Complexity Lower Bound for Computing an Approximate Nash Equilibrium of a Bimatrix Game (Part I) ============================================================================================================== This lecture and the next consider the communication complexity of computing an approximate Nash equilibrium, culminating with a proof of the recent breakthrough polynomial lower bound of @BR17. This lower bound rules out the possibility of quickly converging uncoupled dynamics in general bimatrix games (see Section \[s:uncoupled\]). Preamble {#s:ccpreamble} -------- Recall the setup: there are two players, Alice and Bob, each with their own payoff matrices $A$ and $B$. Without loss of generality (by padding), the two players have the same number $N$ of strategies. We consider a two-party model where, initially, Alice knows only $A$ and Bob knows only $B$. The goal is then for Alice and Bob to compute an approximate Nash equilibrium (Definition \[d:ene\]) with as little communication as possible. 
This lecture and the next explain all of the main ideas behind the following result: \[t:br17\] There is a constant $c > 0$ such that, for all sufficiently small constants ${\epsilon}> 0$ and sufficiently large $N$, the randomized communication complexity of computing an ${\epsilon\text{-}\mathsf{NE}}$ is $\Omega(N^c)$.[^32] For our purposes, a randomized protocol with communication cost $b$ always uses at most $b$ bits of communication, and terminates with at least one player knowing an ${\epsilon\text{-}\mathsf{NE}}$ of the game with probability at least $\tfrac{1}{2}$ (over the protocol’s coin flips). Thus, while there are lots of obstacles to players reaching an equilibrium of a game (see also Section \[ss:whocares\]), communication alone is already a significant bottleneck. A corollary of Theorem \[t:br17\] is that there can be no uncoupled dynamics (Section \[s:uncoupled\]) that converge to an approximate Nash equilibrium in a sub-polynomial number of rounds in general bimatrix games (cf., the guarantee in Theorem \[t:sfp\] for smooth fictitious play in zero-sum games). This is because uncoupled dynamics can be simulated by a randomized communication protocol with logarithmic overhead (to communicate which strategy gets played each round).[^33] This corollary should be regarded as a fundamental contribution to pure game theory and economics. The goal of this and the next lecture is to sketch a full proof of the lower bound in Theorem \[t:br17\] for deterministic communication protocols. We do really care about randomized protocols, however, as these are the types of protocols induced by uncoupled dynamics (see Section \[ss:implycomm\]). The good news is that the argument for the deterministic case will already showcase all of the conceptual ideas in the proof of Theorem \[t:br17\]. 
Extending the proof to randomized protocols requires substituting a simulation theorem for randomized protocols (we’ll use only a simulation theorem for deterministic protocols, see Theorem \[t:rm\]) and a few other minor tweaks.[^34] Naive Approach: Reduction From [[Disjointness]{}]{} {#s:naive} --------------------------------------------------- To illustrate the difficulty of proving a result like Theorem \[t:br17\], consider a naive attempt that tries to reduce, say, the [[Disjointness]{}]{}problem to the problem of computing an $\epsilon$-${\mathsf{NE}}$, with YES-instances mapped to games in which all equilibria have some property $\Pi$, and NO-instances mapped to games in which no equilibrium has property $\Pi$ (Figure \[f:naive\]).[^35] For the reduction to be useful, $\Pi$ needs to be some property that can be checked with little to no communication, such as “Alice plays her first strategy with positive probability” or “Bob’s strategy has full support.” The only problem is that [*this is impossible!*]{} The reason is that the problem of computing an approximate Nash equilibrium has polylogarithmic [*nondeterministic*]{} communication complexity (because of the existence of sparse approximate equilibria, see Theorem \[t:lmm\] and Corollary \[cor:lmm1\]), while the [[Disjointness]{}]{}function does not (for 1-inputs). A reduction of the proposed form would translate a nondeterministic lower bound for the latter problem to one for the former, and hence cannot exist.[^36] ![A naive attempt to reduce the [[Disjointness]{}]{}problem to the problem of computing an approximate Nash equilibrium.[]{data-label="f:naive"}](disj){width=".6\textwidth"} Our failed reduction highlights two different challenges. 
The first is to resolve the typechecking error that we encountered between a standard decision problem, where there might or might not be a witness (like [[Disjointness]{}]{}, where a witness is an element in the intersection), and a total search problem where there is always a witness (like computing an approximate Nash equilibrium, which is guaranteed to exist by Nash’s theorem). The second challenge is to figure out how to prove a strong lower bound on the deterministic or randomized communication complexity of computing an approximate Nash equilibrium without inadvertently proving the same (non-existent) lower bound for nondeterministic protocols. To resolve the second challenge, we’ll make use of simulation theorems that lift query complexity lower bounds to communication complexity lower bounds (see Section \[s:cceol\]); these are tailored to a specific computational model, like deterministic or randomized protocols. For the first challenge, we need to identify a total search problem with high communication complexity. That is, what is the analog of [3SAT]{} or [[Disjointness]{}]{}for total search problems? The correct answer turns out to be [*fixed-point computation*]{}. Finding Brouwer Fixed Points (The [[${\epsilon}$-BFP]{}]{}Problem) {#s:bfp} ------------------------------------------------------------------ This section and the next describe reductions from computing Nash equilibria to computing fixed points, and from computing fixed points to a path-following problem. These reductions are classical. The content of the proof of Theorem \[t:br17\] is [*reductions in the opposite direction*]{}; these are discussed in Solar Lecture 3. ### Brouwer’s Fixed-Point Theorem [*Brouwer’s fixed-point theorem*]{} states that whenever you stir your coffee, there will be a point that ends up exactly where it began.
Or if you prefer a more formal statement: \[t:bfp\] If $C$ is a compact convex subset of ${{\mathbb R}}^d$, and $f\colon C\to C$ is continuous, then there exists a [*fixed point*]{}: a point $x\in C$ with $f(x)=x$. All of the hypotheses are necessary.[^37] We will be interested in a computational version of Brouwer’s fixed-point theorem, the [*[[${\epsilon}$-BFP]{}]{}problem*]{}: given a description of a compact convex set $C \subseteq {{\mathbb R}}^d$ and a continuous function $f:C \rightarrow C$, output an [*${\epsilon}$-approximate fixed point*]{}, meaning a point $x \in C$ such that ${ {\| {f(x)-x} \|} } < {\epsilon}$. The [[${\epsilon}$-BFP]{}]{}problem, in its many different forms, plays a starring role in the study of equilibrium computation. The set $C$ is typically fixed in advance, for example to the $d$-dimensional hypercube. While much of the work on the [[${\epsilon}$-BFP]{}]{}problem has focused on the $\ell_{\infty}$ norm (e.g. [@HPV89]), one innovation in the proof of Theorem \[t:br17\] is to instead use a normalized version of the $\ell_2$ norm (following @R16). Nailing down the problem precisely requires committing to a family of succinctly described continuous functions $f$. The description of the family used in the proof of Theorem \[t:br17\] is technical and best left to Section \[s:ccbfp\]. Often (and in these lectures), the family of functions considered contains only $O(1)$-Lipschitz functions.[^38] In particular, this guarantees the existence of an ${\epsilon}$-approximate fixed point with description length polynomial in the dimension and $\log \tfrac{1}{{\epsilon}}$ (by rounding an exact fixed point to its nearest neighbor on a suitably defined grid). 
### From Brouwer to Nash {#ss:nashpf} Fixed-point theorems have long been used to prove equilibrium existence results, including the original proofs of the Minimax theorem (Theorem \[t:minmax\]) and Nash’s theorem (Theorem \[t:nash\]).[^39] Analogously, algorithms for computing (approximate) fixed points can be used to compute (approximate) Nash equilibria. Existence/computation of $\epsilon$-${\mathsf{NE}}$ reduces to that of $\epsilon$-BFP. To provide further details, let’s sketch why Nash’s theorem (Theorem \[t:nash\]) reduces to Brouwer’s fixed-point theorem (Theorem \[t:bfp\]), following the version of the argument in @G03.[^40] Consider a bimatrix game $(A,B)$ and let $S_1,S_2$ denote the strategy sets of Alice and Bob (i.e., the rows and columns). The relevant convex compact set is $C = \Delta_1 \times \Delta_2$, where $\Delta_i$ is the simplex representing the mixed strategies over $S_i$. We want to define a continuous function $f:C \rightarrow C$, from mixed strategy profiles to mixed strategy profiles, such that the fixed points of $f$ are the Nash equilibria of this game. We define $f$ separately for each component $f_i:C \rightarrow \Delta_i$ for $i=1,2$. A natural idea is to set $f_i$ to be a best response of player $i$ to the mixed strategy of the other player. This does not lead to a continuous, or even well defined, function. We can instead use a “regularized” version of this idea, defining $$\begin{aligned} \label{eq:nash1} f_1({x}_1,{x}_2) = \underset{{x}'_1 \in \Delta_1}{\operatorname{argmax}} \,\, g_1({x}'_1,x_2),\end{aligned}$$ where $$\begin{aligned} \label{eq:nash2} g_1({x}'_1,x_2) = \underbrace{(x'_1)^{\top}Ax_2}_{\text{linear in ${x}'_1$}} - \underbrace{\|{x}'_1 - {x}_1 \|^2_2}_{\text{strictly convex}},\end{aligned}$$ and similarly for $f_2$ and $g_2$ (with Bob’s payoff matrix $B$). The first term of the function $g_i$ encourages a best response while the second “penalty term” discourages big changes to player $i$’s mixed strategy. 
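Completing the square in the definition of $g_1$ shows that $f_1$ has a simple closed form: maximizing $(x'_1)^{\top}Ax_2 - \|x'_1 - x_1\|_2^2$ over $\Delta_1$ is the same as taking the Euclidean projection of $x_1 + \tfrac{1}{2}Ax_2$ onto $\Delta_1$. A sketch using the standard sort-based projection algorithm (function names are ours):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.max(np.nonzero(u * idx > css - 1)[0])
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def f1(A, x1, x2):
    """Alice's component of the Brouwer map: by completing the square,
    argmax_{x' in Delta} [(x')^T A x2 - ||x' - x1||^2] is the projection
    of x1 + (1/2) A x2 onto the simplex."""
    return project_simplex(np.asarray(x1, dtype=float) + 0.5 * (A @ x2))
```

At a Nash equilibrium, each player’s strategy is (by construction) a fixed point of the corresponding component map.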
Because the function $g_i$ is strictly concave in ${x}'_i$, $f_i$ is well defined. The function $f=(f_1,f_2)$ is continuous (as you should check). By definition, every Nash equilibrium of the given game is a fixed point of $f$. For the converse, suppose that $(x_1,x_2)$ is not a Nash equilibrium, with Alice (say) able to increase her expected payoff by deviating unilaterally from ${x}_1$ to ${x}'_1$. A simple computation shows that, for sufficiently small ${\epsilon}> 0$, $g_1((1-{\epsilon}){x}_1 + {\epsilon}{x}'_1,{x}_2) > g_1({x}_1,{x}_2)$, and hence $(x_1,x_2)$ is not a fixed point of $f$ (as you should check). Summarizing, an oracle for computing a Brouwer fixed point immediately gives an oracle for computing a Nash equilibrium of a bimatrix game. The same argument applies to games with any (finite) number of players. The same argument also shows that an oracle for computing an ${\epsilon}$-approximate fixed point in the $\ell_{\infty}$ norm can be used to compute an $O({\epsilon})$-approximate Nash equilibrium of a game. The first high-level goal of the proof of Theorem \[t:br17\] is to reverse the direction of the reduction—to show that the problem of computing an approximate Nash equilibrium is as general as computing an approximate fixed point, rather than merely being a special case. [[${\epsilon}$-BFP]{}]{}$\leq$ $\epsilon$-${\mathsf{NE}}$ This goal follows in the tradition of a sequence of celebrated computational hardness results from the last decade for computing an exact Nash equilibrium (or an ${\epsilon}$-approximate Nash equilibrium with ${\epsilon}$ polynomial in $\tfrac{1}{n}$) [@DGP09; @CDT09]. There are a couple of immediate issues. First, it’s not clear how to meaningfully define the [[${\epsilon}$-BFP]{}]{}problem in a two-party communication model—what are Alice’s and Bob’s inputs? We’ll address this issue in Section \[s:ccbfp\].
Second, even if we figure out how to define the [[${\epsilon}$-BFP]{}]{}problem and implement goal \#1, so that the ${\epsilon\text{-}\mathsf{NE}}$ problem is at least as hard as the [[${\epsilon}$-BFP]{}]{}problem, what makes us so sure that the latter is hard? This brings us to our next topic—a “generic” total search problem that is hard almost by definition and can be used to transfer hardness to other problems (like [[${\epsilon}$-BFP]{}]{}) via reductions.[^41] The End-of-the-Line ([[EoL]{}]{}) Problem {#s:eol} ----------------------------------------- ### Problem Definition For equilibrium and fixed-point computation problems, it turns out that the appropriate “generic” problem involves following a path in a large graph; see also Figure \[f:ppad\]. ![An instance of the [[EoL]{}]{}problem corresponds to a directed graph with all in- and out-degrees at most 1. Solutions correspond to sink vertices and source vertices other than the given one.[]{data-label="f:ppad"}](ppad){width=".85\textwidth"} **The [[EoL]{}]{}problem:** given a description of a directed graph $G$ with maximum in- and out-degree 1, and a source vertex $s$ of $G$, find either a sink vertex of $G$ or a source vertex other than $s$. The restriction on the in- and out-degrees forces the graph $G$ to consist of vertex-disjoint paths and cycles, with at least one path (starting at the source $s$). The [[EoL]{}]{}problem is a total search problem—there is always a solution, if nothing else the other end of the path that starts at $s$. Thus an instance of [[EoL]{}]{}can always be solved by rotely following the path from $s$; the question is whether or not there is a more clever algorithm that always avoids searching the entire graph. It should be plausible that the [[EoL]{}]{}problem is hard, in the sense that there is no algorithm that always improves over rote path-following; see also Section \[s:eollb\]. But what does it have to do with the [[${\epsilon}$-BFP]{}]{}problem? A lot, it turns out.
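Before turning to that connection, note that the rote path-following baseline is only a few lines (the representation is ours: a successor function that returns `None` where a vertex has no out-edge):

```python
def solve_eol(successor, s):
    """Rote path-following for EoL: starting from the given source s,
    follow out-edges until a vertex with no out-edge (a sink) is found.
    successor maps a vertex to its unique out-neighbor, or None."""
    v = s
    while successor(v) is not None:
        v = successor(v)
    return v  # a sink is always a valid solution
```

The point of the hardness results discussed later is that, for succinctly described exponential-size graphs, no algorithm does substantially better than this in the worst case.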
The problem of computing an approximate Brouwer fixed point reduces to the [[EoL]{}]{}problem (i.e., $\epsilon$-BFP $\leq$ EoL). ### From [[EoL]{}]{}to Sperner’s Lemma The basic reason that fixed-point computation reduces to path-following is [*Sperner’s lemma*]{}, which we recall next (again borrowing from [@f13 Lecture 20]). Consider a subdivided triangle in the plane (Figure \[f:sperner\]). A [*legal coloring*]{} of its vertices colors the top corner vertex red, the left corner vertex green, and the right corner vertex blue. A vertex on the boundary must have one of the two colors of the endpoints of its side. Internal vertices are allowed to possess any of the three colors. A small triangle is [*trichromatic*]{} if all three colors are represented at its vertices. ![A subdivided triangle in the plane.[]{data-label="f:sperner"}](sperner2.eps){width=".3\textwidth"} Sperner’s lemma then asserts that for every legal coloring, there is at least one trichromatic triangle.[^42] \[t:sperner\] For every legal coloring of a subdivided triangle, there is an odd number of trichromatic triangles. The proof is constructive. Define an undirected graph $G$ that has one vertex corresponding to each small triangle, plus a source vertex that corresponds to the region outside the big triangle. The graph $G$ has one edge for each pair of small triangles that share a side with one red and one green endpoint. Every trichromatic small triangle corresponds to a degree-one vertex of $G$. Every small triangle with one green and two red corners or two green and one red corner corresponds to a vertex with degree two in $G$. The source vertex of $G$ has degree equal to the number of red-green segments on the left side of the big triangle, which is an odd number. Because every undirected graph has an even number of vertices with odd degree, there is an odd number of trichromatic triangles.
The proof of Sperner’s lemma shows that following a path from a canonical source vertex in a suitable graph leads to a trichromatic triangle. Thus, computing a trichromatic triangle of a legally colored subdivided triangle reduces to the [[EoL]{}]{}problem.[^43] ### From Sperner to Brouwer Next we’ll use Sperner’s lemma to prove Brouwer’s fixed-point theorem for a 2-dimensional simplex $\Delta$; higher-dimensional versions of Sperner’s lemma (see footnote \[foot:sperner\]) similarly imply Brouwer’s fixed-point theorem for simplices of arbitrary dimension.[^44] Let $f:\Delta \rightarrow \Delta$ be a $\lambda$-Lipschitz function (with respect to the $\ell_2$ norm, say). 1. Subdivide $\Delta$ into sub-triangles with side length at most ${\epsilon}/\lambda$. Think of the points of $\Delta$ as parameterized by three coordinates $(x,y,z)$, with $x,y,z \ge 0$ and $x+y+z=1$. 2. Associate each of the three coordinates with a distinct color. To color a point $(x,y,z)$, consider its image $(x',y',z')$ under $f$ and choose the color of a coordinate that strictly decreased (if there are none, then $(x,y,z)$ is a fixed point and we’re done). Note that the conditions of Sperner’s lemma are satisfied. 3. We claim that the center $({\bar{x}},{\bar{y}},{\bar{z}})$ of a trichromatic triangle must be an $O(\epsilon)$-fixed point (in the $\ell_{\infty}$ norm). Because some corner of the triangle has its $x$-coordinate go down under $f$, $({\bar{x}},{\bar{y}},{\bar{z}})$ is at distance at most ${\epsilon}/\lambda$ from this corner, and $f$ is $\lambda$-Lipschitz, the $x$-coordinate of $f({\bar{x}},{\bar{y}},{\bar{z}})$ is at most ${\bar{x}}+O({\epsilon})$. The same argument applies to ${\bar{y}}$ and ${\bar{z}}$, which implies that each of the coordinates of $f({\bar{x}},{\bar{y}},{\bar{z}})$ is within $\pm O({\epsilon})$ of the corresponding coordinate of $({\bar{x}},{\bar{y}},{\bar{z}})$. 
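The coloring rule in step 2 can be sketched directly; the example map below is ours and serves only to illustrate the rule:

```python
def sperner_color(point, f):
    """Color a point (x, y, z) of the simplex by the index of a
    coordinate that strictly decreases under f; None means no
    coordinate decreased, i.e., the point is a fixed point."""
    image = f(point)
    for i in range(3):
        if image[i] < point[i]:
            return i
    return None

# Example map: shift half of the first coordinate onto the second.
shift = lambda p: (p[0] / 2, p[1] + p[0] / 2, p[2])
```

Returning the first strictly decreasing coordinate is an arbitrary tie-breaking choice; any decreasing coordinate works for the argument.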
Brouwer’s fixed-point theorem now follows by taking the limit ${\epsilon}\rightarrow 0$ and using the continuity of $f$. The second high-level goal of the proof of Theorem \[t:br17\] is to reverse the direction of the above reduction from [[${\epsilon}$-BFP]{}]{}to [[EoL]{}]{}. That is, we would like to show that the problem of computing an approximate Brouwer fixed point is as general as every path-following problem (of the form in [[EoL]{}]{}), rather than merely being a special case. [[EoL]{}]{}$\leq$ [[${\epsilon}$-BFP]{}]{} If we succeed in implementing goals \#1 and \#2, and also prove directly that the [[EoL]{}]{}problem is hard, then we’ll have proven hardness for the problem of computing an approximate Nash equilibrium. Road Map for the Proof of Theorem \[t:br17\] {#s:map} -------------------------------------------- The high-level plan for the proof in the rest of this and the next lecture is to show that $$\text{a low-cost communication protocol for ${\epsilon\text{-}\mathsf{NE}}$}$$ implies $$\text{a low-cost communication protocol for {${\epsilon}$-{\sc 2BFP}\xspace}},$$ where [${\epsilon}$-[2BFP]{}]{}is a two-party version of the problem of computing a fixed point (to be defined), which then implies $$\text{a low-cost communication protocol for {{\sc 2EoL}\xspace}},$$ where [[2EoL]{}]{}is a two-party version of the [[EoL]{}]{}problem (to be defined), which then implies $$\text{a low-query algorithm for {{\sc EoL}\xspace}}.$$ Finally, we’ll prove directly that the [[EoL]{}]{}problem does not admit a low-query algorithm. This gives us four things to prove (hardness of [[EoL]{}]{}and the three implications); we’ll tackle them one by one in reverse order: - [**Step 1:**]{} Query lower bound for [[EoL]{}]{}. - [**Step 2:**]{} Communication complexity lower bound for [[2EoL]{}]{} via a simulation theorem. - [**Step 3:**]{} [[2EoL]{}]{}reduces to [${\epsilon}$-[2BFP]{}]{}. - [**Step 4:**]{} [${\epsilon}$-[2BFP]{}]{}reduces to ${\epsilon\text{-}\mathsf{NE}}$. 
The first step (Section \[s:eollb\]) is easy. The second step (Section \[s:cceol\]) follows directly from one of the simulation theorems alluded to in Section \[s:ccpreamble\]. The last two steps, which correspond to goals \#2 and \#1, respectively, are harder and deferred to Solar Lecture 3. Most of the ingredients in this road map were already present in a paper by Roughgarden and Weinstein [@RW16], which was the first paper to define and study two-party versions of fixed-point computation problems, and to propose the use of simulation theorems in the context of equilibrium computation. One major innovation in @BR17 is the use of the generic [[EoL]{}]{}problem as the base of the reduction, thereby eluding the tricky interactions in [@RW16] between simulation theorems (which seem inherently combinatorial) and fixed-point problems (which seem inherently geometric). @RW16 applied a simulation theorem directly to a fixed-point problem (relying on strong query complexity lower bounds for finding fixed points [@HPV89; @B16]), which yielded a hard but unwieldy version of a two-party fixed-point problem. It is not clear how to reduce this version to the problem of computing an approximate Nash equilibrium. @BR17 instead apply a simulation theorem directly to the [[EoL]{}]{}problem, which results in a reasonably natural two-party version of the problem (see Section \[s:cceol\]). There is significant flexibility in how to interpret this problem as a two-party fixed-point problem, and the interpretation in @BR17 (see Section \[s:ccbfp\]) yields a version of the problem that is hard and yet structured enough to be solved using approximate Nash equilibrium computation. 
A second innovation in [@BR17] is the reduction from [${\epsilon}$-[2BFP]{}]{}to ${\epsilon\text{-}\mathsf{NE}}$ (see Section \[s:mt06\]) which, while not difficult, is both new and clever.[^45] Step 1: Query Lower Bound for [[EoL]{}]{} {#s:eollb} ----------------------------------------- We consider the following “oracle” version of the [[EoL]{}]{}problem. The vertex set $V$ is fixed to be ${\{0,1\}}^n$. Let $N = |V| = 2^n$. Algorithms are allowed to access the graph only through vertex queries. A query to the vertex $v$ reveals its alleged predecessor $pred(v)$ (if any, otherwise $pred(v)$ is NULL) and its alleged successor $succ(v)$ (or NULL if it has no successor). The interpretation is that the directed edge $(v,w)$ belongs to the implicitly defined directed graph $G=(V,E)$ if and only if both $succ(v)=w$ and $pred(w)=v$. These semantics guarantee that the graph has in- and out-degree at most 1.[^46] We also assume that $pred(0^n)=NULL$, and interpret the vertex $0^n$ as the a priori known source vertex of the graph. The version of the [[EoL]{}]{}problem for this oracle model is: given an oracle as above, find a vertex $v \in V$ that satisfies one of the following: - $succ(v)$ is NULL; - $pred(v)$ is NULL and $v \neq 0^n$; - $v \neq pred(succ(v))$; or - $v \neq succ(pred(v))$ and $v \neq 0^n$. According to our semantics, cases (iii) and (iv) imply that $v$ is a sink and source vertex, respectively. A solution is guaranteed to exist—if nothing else, the other end of the path of $G$ that originates with the vertex $0^n$. It will sometimes be convenient to restrict ourselves to a “promise” version of the [[EoL]{}]{}problem (which can only be easier), where the graph $G$ is guaranteed to be a single Hamiltonian path. Even in this special case, because every vertex query reveals information about at most three vertices, we have the following. 
\[c:eol\] Every deterministic algorithm that solves the [[EoL]{}]{}problem requires $\Omega(N)$ queries in the worst case, even for instances that consist of a single Hamiltonian path. Slightly more formally, consider an adversary that always responds with values of $succ(v)$ and $pred(v)$ that are never-before-seen vertices (except as necessary to maintain the consistency of all of the adversary’s answers, so that cases (iii) and (iv) never occur). After only $o(N)$ queries, the known parts of $G$ constitute a bunch of vertex-disjoint paths, and $G$ could be any Hamiltonian path of $V$ consistent with these. The end of this Hamiltonian path could be any of $\Omega(N)$ different vertices, and the algorithm has no way of knowing which one.[^47] Step 2: Communication Complexity Lower Bound for [[2EoL]{}]{}via a Simulation Theorem {#s:cceol} ------------------------------------------------------------------------------------- Our next step is to use a “simulation theorem” to transfer our query lower bound for the [[EoL]{}]{}problem to a communication lower bound for a two-party version of the problem, [[2EoL]{}]{}.[^48] The exact definition of the [[2EoL]{}]{}problem will be determined by the output of the simulation theorem. ### The Query Model Consider an arbitrary function $f:\Sigma^N \rightarrow \Sigma$, where $\Sigma$ denotes a finite alphabet. There is an input $\bfz = (z_1,\ldots,z_N) \in \Sigma^N$, initially unknown to an algorithm. The algorithm can query the input $\bfz$ adaptively, with each query revealing $z_i$ for a coordinate $i$ of the algorithm’s choosing. It is trivial to evaluate $f(\bfz)$ using $N$ queries; the question is whether or not there is an algorithm that always does better (for some function $f$ of interest). 
For example, the query version of the [[EoL]{}]{}problem in Proposition \[c:eol\] can be viewed as a special case of this model, with $\Sigma = {\{0,1\}}^n \times {\{0,1\}}^n$ (to encode $pred(v)$ and $succ(v)$) and $f(\bfz)$ encoding the (unique) vertex at the end of the Hamiltonian path. ### Simulation Theorems We now describe how a function $f:\Sigma^N \rightarrow \Sigma$ as above induces a two-party communication problem. The idea is to “factor” the input $\bfz=(z_1,\ldots,z_N)$ to the query version of the problem between Alice and Bob, so that neither player can unilaterally figure out any coordinate of $\bfz$. We use an [[Index]{}]{}gadget for this purpose, as follows. (See also Figure \[f:rm\].) **Alice’s input:** $N$ “blocks” $A_1,\dots,A_N$. Each block has $M={\mathrm{poly}}(N)$ entries (with each entry in $\Sigma$). (Say, $M=N^{20}$.) **Bob’s input:** $N$ indices $y_1,\dots,y_N\in [M]$. **Communication problem:** compute $f(A_1[y_1],\dots,A_N[y_N])$. ![A query problem induces a two-party communication problem. Alice receives $N$ blocks, each containing a list of possible values for a given coordinate of the input. Bob receives $N$ indices, specifying where in Alice’s blocks the actual values of the input reside.[]{data-label="f:rm"}](rm){width=".3\textwidth"} Note that the $y_i$th entry of $A_i$—Bob’s index into Alice’s block—is playing the role of $z_i$ in the original problem. Thus each block $A_i$ of Alice’s input can be thought of as a “bag of garbage,” which tells Alice a huge number of possible values for the $i$th coordinate of the input without any clue about which is the real one. Meanwhile, Bob’s indices tell him the locations of the real values, without any clues about what these values are. If $f$ can be evaluated with a query algorithm that always uses at most $q$ queries, then the induced two-party problem can be solved using $O(q \log N)$ bits of communication.
For Alice can just simulate the query algorithm; whenever it needs to query the $i$th coordinate of the input, Alice asks Bob for his index $y_i$ and supplies the query algorithm with $A_i[y_i]$. Each of the at most $q$ questions posed by Alice can be communicated with $\approx \log N$ bits, and each answer from Bob with $\approx \log M = O(\log N)$ bits. There could also be communication protocols for the two-party problem that look nothing like such a straightforward simulation. For example, Alice and Bob could send each other the exclusive-or of all of their input bits. It’s unclear why this would be useful, but it’s equally unclear how to prove that it [*can’t*]{} be useful. The remarkable [*Raz-McKenzie simulation theorem*]{} asserts that there are no communication protocols for the two-party problem that improve over the straightforward simulation of a query algorithm. \[t:rm\] If every deterministic query algorithm for $f$ requires at least $q$ queries in the worst case, then every deterministic communication protocol for the induced two-party problem has cost $\Omega(q \log N)$. The proof, which is not easy but also not unreadable, shows how to extract a good query algorithm from an arbitrary low-cost communication protocol (essentially by a potential function argument). The original Raz-McKenzie theorem [@DBLP:journals/combinatorica/RazM99] and the streamlined version by @GPW18 are both restricted to deterministic algorithms and protocols, and this is the version we’ll use in this monograph. 
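The straightforward simulation and its bit accounting can be sketched as follows (a toy sketch; the cost bookkeeping and the names are ours):

```python
import math

def simulate(query_alg, blocks, indices):
    """Alice simulates the query algorithm. Each query to coordinate
    i costs ~log N bits (Alice names i) plus ~log M bits (Bob sends
    y_i); Alice then feeds blocks[i][indices[i]] to the algorithm.
    Returns the algorithm's answer and the total bits exchanged."""
    N, M = len(blocks), len(blocks[0])
    cost = 0

    def oracle(i):
        nonlocal cost
        cost += max(1, math.ceil(math.log2(N)))  # Alice names coordinate i
        cost += max(1, math.ceil(math.log2(M)))  # Bob answers with y_i
        return blocks[i][indices[i]]

    return query_alg(oracle), cost

# Toy function: query coordinate 0 and return its value.
answer, cost = simulate(lambda q: q(0), [["a", "b"], ["c", "d"]], [1, 0])
```

A query algorithm making $q$ queries thus yields a protocol of cost $O(q \log N)$, matching the upper bound in the text.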
Recently, @GPW17 and @A+17 proved the analog of Theorem \[t:rm\] for randomized query algorithms and randomized communication protocols (with two-sided error).[^49] This randomized simulation theorem simplifies the original proof of Theorem \[t:br17\] (which pre-dated [@GPW17; @A+17]) to the point that it’s almost the same as the argument given here for the deterministic case.[^50] The Raz-McKenzie theorem provides a generic way to generate a hard communication problem from a hard query problem. We can apply it in particular to the [[EoL]{}]{}problem, and we call the induced two-party problem [[2EoL]{}]{}.[^51] - Let $V={\{0,1\}}^n$ and $N=|V|=2^n$. - Alice’s input consists of $N$ blocks, one for each vertex of $V$, and each block $A_v$ contains $M$ entries, each encoding a possible predecessor-successor pair for $v$. - Bob’s input consists of one index $y_v \in \{1,2,\ldots,M\}$ for each vertex $v \in V$, encoding the entry of the corresponding block holding the “real” predecessor-successor pair for $v$. - The goal is to identify a vertex $v \in V$ that satisfies one of the following: - the successor in $A_v[y_v]$ is NULL; - the predecessor in $A_v[y_v]$ is NULL and $v \neq 0^n$; - $A_v[y_v]$ encodes the successor $w$ but $A_w[y_w]$ does not encode the predecessor $v$; or - $A_v[y_v]$ encodes the predecessor $u$ but $A_u[y_u]$ does not encode the successor $v$, and $v \neq 0^n$. The next statement is an immediate consequence of Proposition \[c:eol\] and Theorem \[t:rm\]. \[cor:cceol\] The deterministic communication complexity of the [[2EoL]{}]{}problem is $\Omega(N \log N)$, even for instances that consist of a single Hamiltonian path. A matching upper bound of $O(N \log N)$ is trivial, as Bob always has the option of sending Alice his entire input. Corollary \[cor:cceol\] concludes the second step of the proof of Theorem \[t:br17\] and furnishes a generic hard total search problem. 
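The four solution conditions above can be phrased as a predicate on a vertex (a minimal sketch; `None` stands in for NULL and the dictionary encoding is ours):

```python
def is_2eol_solution(v, A, y, source):
    """Check whether v solves a 2EoL instance in which A[v][y[v]]
    is the alleged (predecessor, successor) pair of v."""
    pred, succ = A[v][y[v]]
    if succ is None:                                   # (i) alleged sink
        return True
    if pred is None and v != source:                   # (ii) extra source
        return True
    if A[succ][y[succ]][0] != v:                       # (iii) successor disowns v
        return True
    if pred is not None and v != source and A[pred][y[pred]][1] != v:
        return True                                    # (iv) predecessor disowns v
    return False

# Toy instance: the path 0 -> 1 -> 2, with M = 2 entries per block.
A = {0: [(None, 1), (None, None)],
     1: [(0, 2), (2, 0)],
     2: [(1, None), (None, 0)]}
y = {0: 0, 1: 0, 2: 0}
```

Here only the end of the path, vertex 2, is a solution; the canonical source 0 and the interior vertex 1 are not.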
The next order of business is to transfer this communication complexity lower bound to the more natural [[${\epsilon}$-BFP]{}]{}and ${\epsilon\text{-}\mathsf{NE}}$ problems via reductions. Communication Complexity Lower Bound for Computing an Approximate Nash Equilibrium of a Bimatrix Game (Part II) =============================================================================================================== This lecture completes the proof of Theorem \[t:br17\]. As a reminder, this result states that if Alice’s and Bob’s private inputs are the two payoff matrices of an $N \times N$ bimatrix game, and ${\epsilon}$ is a sufficiently small constant, then $N^{\Omega(1)}$ communication is required to compute an ${\epsilon}$-approximate Nash equilibrium (Definition \[d:ene\]), even when randomization is allowed. In terms of the proof road map in Section \[s:map\], it remains to complete steps 3 and 4. This corresponds to implementing Goals \#1 and \#2 introduced in the last lecture—reversing the direction of the classical reductions from the [[${\epsilon}$-BFP]{}]{}problem to path-following and from the ${\epsilon\text{-}\mathsf{NE}}$ problem to (a two-party version of) the [[${\epsilon}$-BFP]{}]{}problem. Step 3: [[2EoL]{}]{}$\leq$ [${\epsilon}$-[2BFP]{}]{} {#s:ccbfp} ---------------------------------------------------- ### Preliminaries We know from Corollary \[cor:cceol\] that [[2EoL]{}]{}, the two-party version of the [End-of-the-Line]{} problem defined in Section \[s:cceol\], has large communication complexity. This section transfers this lower bound to a two-party version of an approximate fixed point problem, by reducing the [[2EoL]{}]{}problem to it. We next define our two-party version of the [[${\epsilon}$-BFP]{}]{}problem, the [${\epsilon}$-[2BFP]{}]{}problem. The problem is parameterized by the dimension $d$ and an approximation parameter ${\epsilon}$. The latter should be thought of as a sufficiently small constant (independent of $d$). 
- Let $H = [0,1]^d$ denote the $d$-dimensional hypercube. - Alice and Bob possess private inputs that, taken together, implicitly define a continuous function $f:H \rightarrow H$. - The goal is to identify an [*${\epsilon}$-approximate fixed point*]{}, meaning a point $x \in H$ such that ${ {\| {f(x)-x} \|} } < {\epsilon}$, where $\|\cdot\|$ denotes the normalized $\ell_2$ norm: $${ {\| {a} \|} } = \sqrt{\frac{1}{d} \sum_{i=1}^d a_i^2}.$$ The normalized $\ell_2$ norm of a point in the hypercube (or the difference between two such points) is always between 0 and 1. If a point $x \in H$ is [*not*]{} an ${\epsilon}$-approximate fixed point with respect to this norm, then $f(x)$ and $x$ differ by a constant amount in a constant fraction of the coordinates. This version of the problem can only be easier than the more traditional version, which uses the $\ell_{\infty}$ norm. To finish the description of the [${\epsilon}$-[2BFP]{}]{}problem, we need to explain how Alice and Bob interpret their inputs as jointly defining a continuous function. ### Geometric Intuition Our reduction from [[2EoL]{}]{}to [${\epsilon}$-[2BFP]{}]{}will use no communication—Alice and Bob will simply reinterpret their [[2EoL]{}]{}inputs as [${\epsilon}$-[2BFP]{}]{}inputs in a specific way, and a solution to the [[2EoL]{}]{}instance will be easy to recover from any approximate fixed point. Figure \[f:hpv\] shows the key intuition: graphs of paths and cycles naturally lead to continuous functions, where the gradient of the function “follows the line” and fixed points correspond to sources and sinks of the graph. Following the line (i.e., “gradient ascent”) guarantees discovery of an approximate fixed point; the goal will be to show that no cleverer algorithm is possible. ![Directed paths and cycles can be interpreted as a continuous function whose gradient “follows the line.” Points far from the path are moved by $f$ in some canonical direction. 
(Figure courtesy of Yakov Babichenko.)[]{data-label="f:hpv"}](pic.png){width=".7\textwidth"} This idea originates in @HPV89, who considered approximate fixed points in the $\ell_{\infty}$ norm. @R16 showed how to modify the construction so that it works even for the normalized $\ell_2$ norm. @BR17 used the construction from [@R16] in their proof of Theorem \[t:br17\]; our treatment here includes some simplifications. ### Embedding a Graph in the Hypercube {#ss:embed1} Before explaining exactly how to interpret graphs as continuous functions, we need to set up an embedding of every possible graph on a given vertex set into the hypercube. Let $V = {\{0,1\}}^n$ and $N=|V|=2^n$. Let $K$ denote the complete undirected graph with vertex set $V$—all edges that could conceivably be present in an [[EoL]{}]{}instance (ignoring their orientations). Decide once and for all on an embedding $\sigma$ of $K$ into $H=[0,1]^d$, where $d = \Theta(n) = \Theta(\log N)$, with two properties:[^52] - The images of the vertices are well separated: for every $v,w \in V$ (with $v \neq w$), ${ {\| {\sigma(v)-\sigma(w)} \|} }$ is at least some constant (say $\tfrac{1}{10}$). - The images of the edges are well separated. More precisely, a point $x \in H$ is close (within distance $10^{-3}$, say) to the images $\sigma(e)$ and $\sigma(e')$ of distinct edges $e$ and $e'$ only if $x$ is close to the image of a shared endpoint of $e$ and $e'$. (In particular, if $e$ and $e'$ have no endpoints in common, then no $x \in H$ is close to both $\sigma(e)$ and $\sigma(e')$.) Property (P1) asserts that the images of two different vertices differ by a constant amount in a constant fraction of their coordinates.[^53] One natural way to achieve this property is via an error-correcting code with constant rate. The simplest way to achieve both properties is to take a random straight-line embedding. 
Each vertex $v \in V$ is mapped to a point in $\{ \tfrac{1}{4}, \tfrac{3}{4} \}^d$, with each coordinate set to $\tfrac{1}{4}$ or $\tfrac{3}{4}$ independently with 50/50 probability.[^54] Each edge is mapped to a straight line between the images of its endpoints. Provided $d=cn$ for a sufficiently large constant $c$, properties (P1) and (P2) both hold with high probability.[^55] The point of properties (P1) and (P2) is to classify the points of $H$ into three categories: (i) those close to the image of a (unique) vertex of $K$; (ii) those not close to the image of any vertex but close to the image of a (unique) edge of $K$; and (iii) points not close to the image of any vertex or edge of $K$. Accordingly, each point $x \in H$ can be “decoded” to a unique vertex $v$ of $K$, a unique edge $(v,w)$ of $K$, or $\bot$. Don’t forget that this classification of points of $H$ is made in advance of receiving any particular [[2EoL]{}]{}input. In the [${\epsilon}$-[2BFP]{}]{}problem, because Alice and Bob both know the embedding in advance, they can decode points at will without any communication.[^56] ### Interpreting Paths as Continuous Functions {#ss:embed2} Given the embedding above, we can now describe how to interpret a directed graph $G=(V,E)$ induced by an instance of [[EoL]{}]{}as a continuous function on the hypercube, with approximate fixed points of the function corresponding only to sources and sinks of $G$. Write a function $f:H \rightarrow H$ as $f(x) = x + g(x)$ for the “displacement function” $g:H \rightarrow [-1,1]^d$. (The final construction will take care to define $g$ so that $x+g(x) \in H$ for every $x \in H$.) An ${\epsilon}$-approximate fixed point is a point $x$ with ${ {\| {g(x)} \|} } < {\epsilon}$, so it’s crucial for our reduction that our definition of $g$ satisfies ${ {\| {g(x)} \|} } \ge {\epsilon}$ whenever $x$ is not close to the image of a source or sink of $G$. 
Consider for simplicity a directed graph $G=(V,E)$ of an [[EoL]{}]{}instance that has no 2-cycles and no isolated vertices.[^57] For a (directed) edge $(u,v) \in E$, define $$\gamma_{uv} = \frac{\sigma(v)-\sigma(u)}{{ {\| {\sigma(v)-\sigma(u)} \|} }}$$ as the unit vector with the same direction as the embedding of the corresponding undirected edge of $K$, oriented from $u$ toward $v$. A rough description of the displacement function $g(x)$ corresponding to $G$ is as follows, where $\delta > 0$ is a parameter (cf., Figure \[f:hpv\]): 1. For $x$ close to the embedding $\sigma(e)$ of the (undirected) edge $e \in K$ with endpoints $u$ and $v$, but not close to $\sigma(u)$ or $\sigma(v)$, define $$g(x)=\delta \cdot \left\{ \begin{array}{ll} \gamma_{uv} & \text{if edge $(u,v) \in E$}\\ \gamma_{vu} & \text{if edge $(v,u) \in E$}\\ \text{some default direction} & \text{otherwise}. \end{array} \right. $$ 2. For $x$ close to $\sigma(v)$ for some $v \in V$, 1. if $v$ has an incoming edge $(u,v) \in E$ and an outgoing edge $(v,w) \in E$, then define $g(x)$ by interpolating between $\delta \cdot \gamma_{uv}$ and $\delta \cdot \gamma_{vw}$ (i.e., “turn slowly” as in Figure \[f:hpv\]); 2. otherwise (i.e., $v$ is a source or sink of $G$), define $g(x)$ by interpolating between the all-zero vector and the displacement vector (as defined in case 1) associated with $v$’s (unique) incoming or outgoing edge in $G$. 3. For $x$ that are not close to any $\sigma(v)$ or $\sigma(e)$, define $g(x)$ as $\delta$ times the default direction. For points $x$ “in between” the three cases (e.g., almost but not quite close enough to the image $\sigma(v)$ of a vertex $v \in V$), $g(x)$ is defined by interpolation (e.g., a weighted average of the displacement vector associated with $v$ in case 2 and $\delta$ times the default direction, with the weights determined by $x$’s proximity to $\sigma(v)$). 
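The unit directions $\gamma_{uv}$ can be computed as follows (a sketch using the normalized $\ell_2$ norm; the helper name is ours):

```python
def gamma(sigma_u, sigma_v):
    """Unit vector (in the normalized l2 norm) pointing from the
    embedded image sigma(u) toward the embedded image sigma(v)."""
    d = len(sigma_u)
    diff = [b - a for a, b in zip(sigma_u, sigma_v)]
    norm = (sum(x * x for x in diff) / d) ** 0.5
    return [x / norm for x in diff]
```

Near the image of an edge $(u,v) \in E$, the displacement is then $\delta$ times this direction.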
The default direction can be implemented by doubling the number of dimensions to $2d$, and defining the displacement direction as the vector $(0,0,\ldots,0,1,1,\ldots,1)$. Special handling (not detailed here) is then required at points $x$ with value close to 1 in one of these extra coordinates, to ensure that $x+g(x)$ remains in $H$ while also not introducing any unwanted approximate fixed points. Similarly, special handling is required for the source vertex $0^n$, to prevent $\sigma(0^n)$ from being a fixed point. Roughly, this can be implemented by mapping the vertex $0^n$ to one corner of the hypercube and defining $g$ to point in the opposite direction. The parameter $\delta$ is a constant, bigger than ${\epsilon}$ by a constant factor. (For example, one can assume that ${\epsilon}\le 10^{-12}$ and take $\delta \approx 10^{-6}$.) This ensures that whenever the normalized $\ell_2$ norm of a direction vector $y$ is at least a sufficiently large constant, $\delta \cdot y$ has norm larger than ${\epsilon}$. This completes our sketch of how to interpret an instance of [[EoL]{}]{}as a continuous function on the hypercube. ### Properties of the Construction {#ss:props} Properly implemented, the construction in Sections \[ss:embed1\] and \[ss:embed2\] has the following properties: 1. Provided ${\epsilon}$ is at most a sufficiently small constant, a point $x \in H$ satisfies ${ {\| {g(x)} \|} } < {\epsilon}$ only if it is close to the image of a source or sink of $G$ different from the canonical source $0^n$. (Intuitively, this should be true by construction.) 2. There is a constant $\lambda$, independent of $d$, such that the function $f(x)=x+g(x)$ is $\lambda$-Lipschitz. In particular, $f$ is continuous. (Intuitively, this is because we take care to linearly interpolate between regions of $H$ with different displacement vectors.) 
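As a numerical sanity check on the random corner embedding of Section \[ss:embed1\], one can sample it and verify the separation in property (P1) (a toy sketch; the vertex count, dimension, and seed are ours, and the $\tfrac{1}{10}$ separation holds only with high probability over the sample):

```python
import random

def random_embedding(num_vertices, d, seed=0):
    """Map each vertex to an independent uniform point of {1/4, 3/4}^d."""
    rng = random.Random(seed)
    return [[rng.choice((0.25, 0.75)) for _ in range(d)]
            for _ in range(num_vertices)]

def nl2(a, b):
    """Normalized l2 distance between two points of [0,1]^d."""
    d = len(a)
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / d) ** 0.5

sigma = random_embedding(16, d=200)
min_dist = min(nl2(sigma[u], sigma[v])
               for u in range(16) for v in range(u + 1, 16))
```

Two independent samples disagree in roughly half their coordinates, each disagreement contributing $\tfrac{1}{2}$, so the typical pairwise distance is about $0.35$, comfortably above the required constant.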
Sections \[ss:embed1\] and \[ss:embed2\], together with Figure \[f:hpv\], provide a plausibility argument that a construction with these two properties is possible along the proposed lines. Readers interested in further details should start with the carefully written two-dimensional construction in @HPV89 [Section 4]—where many of these ideas originate—before proceeding to the general case in [@HPV89 Section 5] for the $\ell_{\infty}$ norm and finally @BR17 for the version tailored to the normalized $\ell_2$ norm (which is needed here). ### The [${\epsilon}$-[2BFP]{}]{}Problem and Its Communication Complexity We can now formally define the two-party version of the [[${\epsilon}$-BFP]{}]{}problem that we consider, denoted [${\epsilon}$-[2BFP]{}]{}. The problem is parameterized by a positive integer $n$ and a constant ${\epsilon}> 0$. - Alice and Bob begin with private inputs to the [[2EoL]{}]{}problem: Alice with $N=2^n$ “blocks” $A_1,\dots,A_N$, each with $M={\mathrm{poly}}(N)$ entries from the alphabet $\Sigma = {\{0,1\}}^n \times {\{0,1\}}^n$, and Bob with $N$ indices $y_1,\dots,y_N\in [M]$. - Let $G$ be the graph induced by these inputs (with $V = {\{0,1\}}^n$ and $A_v[y_v]$ encoding $(pred(v),succ(v))$). - Let $f$ denote the continuous function $f:H \rightarrow H$ induced by $G$, as per the construction in Sections \[ss:embed1\] and \[ss:embed2\], where $H = [0,1]^d$ is the $d$-dimensional hypercube with $d = \Theta(n)$. - The goal is to compute a point $x \in H$ such that ${ {\| {f(x)-x} \|} } < {\epsilon}$, where ${ {\| {\cdot} \|} }$ denotes the normalized $\ell_2$ norm. The first property in Section \[ss:props\] implies a communication complexity lower bound for the [${\epsilon}$-[2BFP]{}]{}problem, which implements step 3 of the road map in Section \[s:map\]. (The second property is important for implementing step 4 of the road map in the next section.)
\[t:ccbfp\] For every sufficiently small constant ${\epsilon}> 0$, the deterministic communication complexity of the [${\epsilon}$-[2BFP]{}]{}problem is $\Omega(N \log N)$. If there is a deterministic communication protocol with cost $c$ for the [${\epsilon}$-[2BFP]{}]{}problem, then there is also one for the [[2EoL]{}]{}problem: Alice and Bob interpret their [[2EoL]{}]{}inputs as inputs to the [${\epsilon}$-[2BFP]{}]{} problem, run the assumed protocol to compute an ${\epsilon}$-approximate fixed point $x \in H$ of the corresponding function $f$, and (using no communication) decode $x$ to a source or sink vertex of $G$ (that is different from $0^n$). The theorem follows immediately from Corollary \[cor:cceol\]. ### Local Decodability of [${\epsilon}$-[2BFP]{}]{}Functions {#ss:local} There is one more important property of the functions $f$ constructed in Sections \[ss:embed1\] and \[ss:embed2\]: they are [*locally decodable*]{} in a certain sense. Suppose Alice and Bob want to compute the value of $f(x)$ at some commonly known point $x \in H$. If $x$ decodes to $\bot$ (i.e., is not close to the image of any vertex or edge of the complete graph $K$ on vertex set $V$), then Alice and Bob know the value of $f(x)$ without any communication whatsoever: $f(x)$ is $x$ plus $\delta$ times the default direction (or a known customized displacement if $x$ is too close to certain boundaries of $H$). If $x$ decodes to the edge $e=(u,v)$ of the complete graph $K$, then Alice and Bob can compute $f(x)$ as soon as they know whether or not edge $e$ belongs to the directed graph $G$ induced by their inputs, along with its orientation. This requires Alice and Bob to exchange predecessor-successor information about only two vertices ($u$ and $v$). Analogously, if $x$ decodes to the vertex $v$ of $K$, then Alice and Bob can compute $f(x)$ after exchanging information about at most three vertices ($v$, $pred(v)$, and $succ(v)$). 
Step 4: [${\epsilon}$-[2BFP]{}]{}$\le {\epsilon\text{-}\mathsf{NE}}$ {#s:mt06} -------------------------------------------------------------------- This section completes the proof of Theorem \[t:br17\] by reducing the [${\epsilon}$-[2BFP]{}]{}problem to the ${\epsilon\text{-}\mathsf{NE}}$ problem, where ${\epsilon}$ is a sufficiently small constant. ### The McLennan-Tourky Analytic Reduction {#ss:mt06} The starting point for our reduction is a purely analytic reduction of @MT06, which reduces the existence of (exact) Brouwer fixed points to the existence of (exact) Nash equilibria.[^58] Subsequent sections explain the additional ideas needed to implement this reduction for approximate fixed points and Nash equilibria in the two-party communication model. \[t:mt06\] Nash’s theorem (Theorem \[t:nash\]) implies Brouwer’s fixed-point theorem (Theorem \[t:bfp\]). Consider an arbitrary continuous function $f:H \rightarrow H$, where $H = [0,1]^d$ is the $d$-dimensional hypercube (for some positive integer $d$).[^59] Define a two-player game as follows. The pure strategies of Alice and Bob both correspond to points of $H$. For pure strategies $x,z \in H$, Alice’s payoff is defined as $$\label{eq:apayoff} 1 - { {\| {x-z} \|} }^2 = 1 - \frac{1}{d} \sum_{i=1}^d (x_i-z_i)^2$$ and Bob’s payoff as $$\label{eq:bpayoff} 1 - { {\| {z-f(x)} \|} }^2 = 1 - \frac{1}{d} \sum_{i=1}^d (z_i-f(x)_i)^2.$$ Thus Alice wants to imitate Bob’s strategy, while Bob wants to imitate the image of Alice’s strategy under the function $f$. For any mixed strategy $\sigma$ of Bob (i.e., a distribution over points of the hypercube), Alice’s unique best response is the corresponding center of gravity $\mathbf{E}_{z \sim \sigma}\!\left[z\right]$ (as you should check). Thus, in any Nash equilibrium, Alice plays a pure strategy $x$. Bob’s unique best response to such a pure strategy is the pure strategy $z = f(x)$.
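These best-response claims can be checked on a toy instance (a minimal sketch; the payoff functions follow the two displays above, while the two-dimensional map $f$ is our own example):

```python
def alice_payoff(x, z):
    """Alice wants to imitate Bob: 1 - ||x - z||^2 (normalized l2)."""
    d = len(x)
    return 1 - sum((a - b) ** 2 for a, b in zip(x, z)) / d

def bob_payoff(x, z, f):
    """Bob wants to imitate f applied to Alice's point."""
    d = len(x)
    return 1 - sum((a - b) ** 2 for a, b in zip(z, f(x))) / d

# Example map with fixed point (0.5, 0.5).
f = lambda x: (0.5 * x[0] + 0.25, 0.5 * x[1] + 0.25)
x_star = (0.5, 0.5)
```

At the fixed point, mutual imitation gives both players their maximum possible payoff of 1.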
That is, every Nash equilibrium is pure, with $x = z = f(x)$ a fixed point of $f$. Because a Nash equilibrium exists, so does a fixed point of $f$.[^60] An extension of the argument above shows that, for $\lambda$-Lipschitz functions $f$, an ${\epsilon}'$-approximate fixed point (in the normalized $\ell_2$ norm) can be extracted easily from any ${\epsilon}$-approximate Nash equilibrium, where ${\epsilon}'$ is a function of ${\epsilon}$ and $\lambda$ only.[^61]

### The Two-Party Reduction: A Naive Attempt

We now discuss how to translate the McLennan-Tourky analytic reduction to an analogous reduction in the two-party model. First, we need to discretize the hypercube. Define $\discH$ as the set of $\approx \left(\tfrac{1}{{\epsilon}}\right)^d$ points of $[0,1]^d$ for which all coordinates are multiples of ${\epsilon}$. Every $O(1)$-Lipschitz function $f$—including every function arising in an [${\epsilon}$-[2BFP]{}]{}instance (Section \[ss:props\])—is guaranteed to have an $O({\epsilon})$-approximate fixed point at some point of this discretized hypercube (by rounding an exact fixed point to its nearest neighbor in $\discH$). This also means that the corresponding game (with payoffs defined as in \[eq:apayoff\] and \[eq:bpayoff\]) has an $O({\epsilon})$-approximate Nash equilibrium in which each player deterministically chooses a point of $\discH$. The obvious attempt at a two-party version of the McLennan-Tourky reduction is:

1. Alice and Bob start with inputs to the [${\epsilon}$-[2BFP]{}]{}problem.

2. The players interpret these inputs as a two-player game, with strategies corresponding to points of the discretized hypercube $\discH$, and with Alice’s payoffs given by \[eq:apayoff\] and Bob’s payoffs by \[eq:bpayoff\].

3. The players run the assumed low-cost communication protocol for computing an approximate Nash equilibrium.

4. The players extract an approximate fixed point of the [${\epsilon}$-[2BFP]{}]{}function from the approximate Nash equilibrium.

Just one problem: [*this doesn’t make sense*]{}.
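(We identify the problem next; as an aside, the discretization step itself is easy to verify in code. In this sketch the contraction $f(x) = (x+c)/2$ is my own toy stand-in for a Lipschitz function, not the [${\epsilon}$-[2BFP]{}]{}function.)

```python
# Toy example: f(x) = (x + c)/2 is (1/2)-Lipschitz with exact fixed point c.
eps = 0.1
c = (0.37, 0.81)
lam = 0.5

def f(x):
    return tuple((xi + ci) / 2 for xi, ci in zip(x, c))

def round_to_grid(x, eps):
    # Nearest point of the discretized hypercube: coordinates are multiples of eps.
    return tuple(round(xi / eps) * eps for xi in x)

x = round_to_grid(c, eps)   # the exact fixed point, snapped to the grid

# Rounding moves the point by <= eps/2 per coordinate, so by the Lipschitz
# property the residual ||f(x) - x|| is at most (1 + lambda) * eps/2 = O(eps).
residual = max(abs(fi - xi) for fi, xi in zip(f(x), x))
assert residual <= (1 + lam) * eps / 2
```

The bound is in the sup norm, which only over-estimates the normalized $\ell_2$ norm used in the text.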
The issue is that Bob needs to be able to compute $f(x)$ to evaluate his payoff function in \[eq:bpayoff\], and his [${\epsilon}$-[2BFP]{}]{}input (a bunch of indices into Alice’s blocks) does not provide sufficient information to do this. Thus, the proposed reduction does not produce a well-defined input to the ${\epsilon\text{-}\mathsf{NE}}$ problem.

### Description of the Two-Party Reduction {#ss:step4}

The consolation prize is that Bob can compute the function $f$ at a point $x$ after a brief conversation with Alice. Recall from Section \[ss:local\] that computing $f$ at a point $x \in H$ requires information about at most three vertices of the [[2EoL]{}]{}input that underlies the [${\epsilon}$-[2BFP]{}]{}input (in addition to $x$). Alice can send $x$ to Bob, who can then send the relevant indices to Alice (after decoding $x$ to some vertex or edge of $K$), and Alice can respond with the corresponding predecessor-successor pairs. This requires $O(\log N)$ bits of communication, where $N=2^n$ is the number of vertices in the underlying [[2EoL]{}]{}instance. (We are suppressing the dependence on the constant ${\epsilon}$ in the big-O notation.) Denote this communication protocol by $P$. At this point, it’s convenient to restrict the problem to the hard instances of [[2EoL]{}]{}used to prove Corollary \[cor:cceol\], where in particular, $succ(v) = w$ if and only if $v = pred(w)$. (I.e., cases (iii) and (iv) in the definition of the [[2EoL]{}]{}problem in Section \[s:cceol\] never come up.) For this special case, $P$ can be implemented as a two-round protocol where Alice and Bob exchange information about one relevant vertex $v$ (if $x$ decodes to $v$) or two relevant vertices $u$ and $v$ (if $x$ decodes to the edge $(u,v)$).[^62] How can we exploit the local decodability of [${\epsilon}$-[2BFP]{}]{}functions?
The idea is to enlarge the strategy sets of Alice and Bob, beyond the discretized hypercube $\discH$, so that the players’ strategies at equilibrium effectively simulate the protocol $P$. Alice’s pure strategies are the pairs $(x,\alpha)$, where $x \in \discH$ is a point of the discretized hypercube and $\alpha$ is a possible transcript of Alice’s communication in the protocol $P$. Thus $\alpha$ consists of at most two predecessor-successor pairs. Bob’s pure strategies are the pairs $(z,\beta)$, where $z \in \discH$ and $\beta$ is a transcript that could be generated by Bob in $P$—a specification of at most two different vertices and his corresponding indices for them.[^63] Crucially, because the protocol $P$ has cost $O(\log N)$, there are only $N^{O(1)}$ possible $\alpha$’s and $\beta$’s. There are also only $N^{O(1)}$ possible choices of $x$ and $z$—since ${\epsilon}$ is a constant and $d=\Theta(n)$ in the [${\epsilon}$-[2BFP]{}]{}problem, $|\discH|\approx \left(\tfrac{1}{{\epsilon}}\right)^d$ is polynomial in $N=2^n$. We conclude that the size of the resulting game is polynomial in the length of the given [${\epsilon}$-[2BFP]{}]{}(or [[2EoL]{}]{}) inputs. We still need to define the payoffs of the game. Let $A_1,\ldots,A_N$ and $y_1,\ldots,y_N$ denote Alice’s and Bob’s private inputs in the given [${\epsilon}$-[2BFP]{}]{}(equivalently, [[2EoL]{}]{}) instance and $f$ the corresponding function. Call an outcome $(x,\alpha,z,\beta)$ *consistent* if $\alpha$ and $\beta$ are the transcripts generated by Alice and Bob when they honestly follow the protocol $P$ to compute $f(x)$. 
Precisely, a consistent outcome is one that meets the following two conditions:

- (i) for each of the (zero, one, or two) vertices $v$ and corresponding indices ${\hat{y}}_v$ announced by Bob in $\beta$, $\alpha$ contains the correct response $A_v[{\hat{y}}_v]$;

- (ii) $\beta$ specifies the names of the vertices relevant for Alice’s announced point $x \in \discH$, and for each such vertex $v$, $\beta$ specifies the correct index $y_v$.

Observe that Alice can privately check if condition (i) holds (using her private input $A_1,\ldots,A_N$ and the vertex names and indices in Bob’s announced strategy $\beta$), and Bob can privately check condition (ii) (using his private input $y_1,\ldots,y_N$ and the point $x$ announced by Alice). For an outcome $(x,\alpha,z,\beta)$, we define Alice’s payoffs by $$\label{eq:apayoff2} \left\{ \begin{array}{cl} -1-\frac{1}{d} \sum_{i=1}^d (x_i-z_i)^2 &\mbox{ if (i) fails}\\ 1-\frac{1}{d} \sum_{i=1}^d (x_i-z_i)^2 &\mbox{ otherwise.} \end{array}\right.$$ (Compare \[eq:apayoff2\] with \[eq:apayoff\].) This definition makes sense because Alice can privately check whether or not (i) holds and hence can privately compute her payoff.[^64] For Bob’s payoffs, we need a preliminary definition. Let $f_{\alpha}(x)$ denote the value that the induced function $f$ would take on if $\alpha$ were consistent with $x$ and with Alice’s and Bob’s private inputs. That is, to compute $f_{\alpha}(x)$:

1. Decode $x$ to a vertex or an edge (or $\bot$).

2. Interpret $\alpha$ as the predecessor-successor pairs for the vertices relevant for evaluating $f$ at $x$.

3. Output $x$ plus the displacement $g_{\alpha}(x)$ defined as in Sections \[ss:embed1\] and \[ss:embed2\] (with $\alpha$ supplying any predecessor-successor pairs that are necessary).

To review, $f$ is the [${\epsilon}$-[2BFP]{}]{}function that Alice and Bob want to find a fixed point of, and $f(x)$ generally depends on the private inputs $A_1,\ldots,A_N$ and $y_1,\ldots,y_N$ of both Alice and Bob.
The function $f_{\alpha}$ is a speculative version of $f$, predicated on Alice’s announced predecessor-successor pairs in her strategy $\alpha$. Crucially, the definition of $f_{\alpha}$ does not depend at all on Alice’s private input, only on Alice’s [*announced strategy*]{}. Thus given $\alpha$, Bob can privately execute the three steps above and evaluate $f_{\alpha}(x)$ for any $x \in \discH$. The other crucial property of $f_{\alpha}$ is that, if $\alpha$ happens to be the actual predecessor-successor pairs $\{ A_v[y_v] \}$ for the vertices relevant for $x$ (given Alice’s and Bob’s private inputs), then $f_{\alpha}(x)$ agrees with the value $f(x)$ of the true [${\epsilon}$-[2BFP]{}]{}function. We can now define Bob’s payoffs as follows (compare with \[eq:bpayoff\]): $$\label{eq:bpayoff2} \left\{ \begin{array}{cl} -1&\mbox{ if (ii) fails}\\ 1- \frac{1}{d} \sum_{i=1}^d (z_i-f_{\alpha}(x)_i)^2 &\mbox{ otherwise.} \end{array}\right.$$ Because Bob can privately check condition (ii) and compute $f_{\alpha}(x)$ (given $x$ and $\alpha$), Bob can privately compute his payoff. This completes the description of the reduction from the [${\epsilon}$-[2BFP]{}]{}problem to the ${\epsilon\text{-}\mathsf{NE}}$ problem. Alice and Bob can carry out this reduction with no communication—by construction, their [${\epsilon}$-[2BFP]{}]{}inputs fully determine their payoff matrices. As noted earlier, because ${\epsilon}$ is a constant, the sizes of the produced ${\epsilon\text{-}\mathsf{NE}}$ inputs are polynomial in those of the [${\epsilon}$-[2BFP]{}]{}inputs.

### Analysis of the Two-Party Reduction

Finally, we need to show that the reduction “works,” meaning that Alice and Bob can recover an approximate fixed point of the [${\epsilon}$-[2BFP]{}]{}function $f$ from any approximate Nash equilibrium of the game produced by the reduction.
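Before diving into the analysis, it may help to see the two payoff rules \[eq:apayoff2\] and \[eq:bpayoff2\] side by side in code. This is an illustrative paraphrase only: the consistency checks (i) and (ii) are abstracted into boolean flags, since in the real game each player evaluates them privately from their own input.

```python
def sq_dist(x, z):
    # Normalized squared l2 distance: (1/d) * sum_i (x_i - z_i)^2.
    d = len(x)
    return sum((xi - zi) ** 2 for xi, zi in zip(x, z)) / d

def alice_payoff(x, z, cond_i_holds):
    # \[eq:apayoff2\]: failing condition (i) costs a flat penalty of 2.
    base = 1 - sq_dist(x, z)
    return base if cond_i_holds else base - 2

def bob_payoff(z, f_alpha_x, cond_ii_holds):
    # \[eq:bpayoff2\]: f_alpha_x is the speculative value f_alpha(x).
    return 1 - sq_dist(z, f_alpha_x) if cond_ii_holds else -1

# In a consistent outcome both conditions hold and f_alpha(x) = f(x), so the
# payoffs coincide with the original McLennan-Tourky payoffs.
x, z = (0.2, 0.4), (0.2, 0.4)
assert alice_payoff(x, z, True) == 1.0
assert bob_payoff(z, z, True) == 1.0
assert alice_payoff(x, z, False) == -1.0
assert bob_payoff(z, z, False) == -1
```

The flat penalties are what make honesty dominant: inconsistency costs more than any gain from the distance terms.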
For intuition, let’s think first about the case where Alice’s and Bob’s strategies are points of the hypercube $H$ (rather than the discretized hypercube $\discH$) and the case of exact fixed points and Nash equilibria. (Cf. Theorem \[t:mt06\].) What could a Nash equilibrium of the game look like? Consider mixed strategies by Alice and Bob.

1. Alice’s payoff in \[eq:apayoff2\] includes a term $-\frac{1}{d} \sum_{i=1}^d (x_i-z_i)^2$ that is independent of her choice of $\alpha$ or Bob’s choice of $\beta$, and the other term (either 1 or -1) is independent of her choice of $x$ (since condition (i) depends only on $\alpha$ and $\beta$). Thus, analogous to the proof of Theorem \[t:mt06\], in every one of Alice’s best responses, she deterministically chooses $x = {\mathbf{E}\ifthenelse{\not\equal{}{z \sim \sigma}}{_{z \sim \sigma}}{}\!\left[z\right]}$, where $\sigma$ denotes the marginal distribution of $z$ in Bob’s mixed strategy.

2. Given that Alice is playing deterministically in her $x$-coordinate, in every one of Bob’s best responses, he deterministically chooses $\beta$ to name the vertices relevant for Alice’s announced point $x$ and his indices for these vertices (to land in the second case of \[eq:bpayoff2\] with probability 1).

3. Given that Bob is playing deterministically in his $\beta$-coordinate, Alice’s unique best response is to choose $x$ as before and also deterministically choose the (unique) message $\alpha$ that satisfies condition (i), so that she will be in the more favorable second case of \[eq:apayoff2\] with probability 1.

4. Given that Alice is playing deterministically in both her $x$- and $\alpha$-coordinates, Bob’s unique best response is to choose $\beta$ as before and set $z=f_{\alpha}(x)$ (to maximize his payoff in the second case of \[eq:bpayoff2\]).
These four steps imply that every (exact) Nash equilibrium $(x,\alpha,z,\beta)$ of the game is pure, with $\alpha$ and $\beta$ consistent with $x$ and Alice’s and Bob’s private information about the corresponding relevant vertices, and with $x=z=f_{\alpha}(x) = f(x)$ a fixed point of $f$. As with Theorem \[t:mt06\], a more technical version of the same argument implies that an approximate fixed point—a point $x$ satisfying ${ {\| {f(x)-x} \|} } < {\epsilon}'$ with respect to the normalized $\ell_2$ norm—can be easily extracted by Alice and Bob from any ${\epsilon}$-approximate Nash equilibrium, where ${\epsilon}'$ depends only on ${\epsilon}$ (e.g., ${\epsilon}' = O({\epsilon}^{1/4})$ suffices). For example, the first step of the proof becomes: in an ${\epsilon}$-approximate Nash equilibrium, Alice must choose a point $x \in \discH$ that is close to ${\mathbf{E}\ifthenelse{\not\equal{}{}}{_{}}{}\!\left[z\right]}$ except with small probability (otherwise she could increase her expected payoff by more than ${\epsilon}$ by switching to the point of $\discH$ closest to ${\mathbf{E}\ifthenelse{\not\equal{}{}}{_{}}{}\!\left[z\right]}$). And so on. Carrying out approximate versions of all four steps above, while keeping careful track of the epsilons, completes the proof of Theorem \[t:br17\].[^65] We conclude that computing an approximate Nash equilibrium of a general bimatrix game requires a polynomial amount of communication, and in particular there are no uncoupled dynamics guaranteed to converge to such an equilibrium in a polylogarithmic number of iterations. ${\mathsf{TFNP}}$, ${\mathsf{PPAD}}$, & All That ================================================ Having resolved the communication complexity of computing an approximate Nash equilibrium of a bimatrix game, we turn our attention to the [*computational*]{} complexity of the problem. 
Here, the goal will be to prove a super-polynomial lower bound on the amount of computation required, under appropriate complexity assumptions. The techniques developed in the last two lectures for our communication complexity lower bound will again prove useful for this goal, but we will also need several additional ideas. This lecture identifies the appropriate complexity class for characterizing the computational complexity of computing an exact or approximate Nash equilibrium of a bimatrix game, namely ${\mathsf{PPAD}}$. Solar Lecture 5 sketches some of the ideas in Rubinstein’s recent proof [@R16] of a quasi-polynomial-time lower bound for the problem, assuming an analog of the Exponential Time Hypothesis for ${\mathsf{PPAD}}$. Section \[s:preamble\] explains why customized complexity classes are needed to reason about equilibrium computation and other total search problems. Section \[s:tfnp\] defines the class ${\mathsf{TFNP}}$ and some of its syntactic subclasses, including ${\mathsf{PPAD}}$.[^66] Section \[s:ppad\] reviews a number of ${\mathsf{PPAD}}$-complete problems. Section \[s:evidence\] discusses the existing evidence that ${\mathsf{TFNP}}$ and its important subclasses are hard, and proves that the class ${\mathsf{TFNP}}$ is hard on average assuming that ${\mathsf{NP}}$ is hard on average.

Preamble {#s:preamble}
--------

We consider two-player (bimatrix) games, where each player has (at most) $n$ strategies. The $n \times n$ payoff matrices $A$ and $B$ for Alice and Bob are described explicitly, with $A_{ij}$ and $B_{ij}$ indicating Alice’s and Bob’s payoffs when Alice plays her $i$th strategy and Bob his $j$th strategy. Recall from Definition \[d:ene\] that an $\epsilon$-${\mathsf{NE}}$ is a pair ${\hat{x}},{\hat{y}}$ of mixed strategies such that neither player can increase their payoff with a unilateral deviation by more than ${\epsilon}$. What do we know about the complexity of computing an $\epsilon$-${\mathsf{NE}}$ of a bimatrix game?
Let’s start with the exact case (${\epsilon}=0$), where no subexponential-time (let alone polynomial-time) algorithm is known for the problem. (This contrasts with the zero-sum case, see Corollary \[cor:zerosum\].) It is tempting to speculate that no such algorithm exists. How would we amass evidence that the problem is intractable? As we’re interested in super-polynomial lower bounds, communication complexity is of no direct help. Could the problem be ${\mathsf{NP}}$-complete?[^67] The following theorem by @MP91 rules out this possibility (unless ${\mathsf{NP}}= {\mathsf{co}\mbox{-}\mathsf{NP}}$). \[t:mp91\] The problem of computing a Nash equilibrium of a bimatrix game is ${\mathsf{NP}}$-hard only if ${\mathsf{NP}}= {\mathsf{co}\mbox{-}\mathsf{NP}}$.

![A reduction from the search version of the SAT problem to the problem of computing a Nash equilibrium of a bimatrix game would yield a polynomial-time verifier for the unsatisfiability problem.[]{data-label="f:mp"}](mp){width="\textwidth"}

The proof is short but a bit of a mind-bender, analogous to the argument back in Section \[s:naive\]. Suppose there is a reduction from, say, (the search version of) satisfiability to the problem of computing a Nash equilibrium of a bimatrix game. By definition, the reduction comprises two algorithms:

1. A polynomial-time algorithm $\A_1$ that maps every SAT formula $\phi$ to a bimatrix game $\A_1(\phi)$.

2. A polynomial-time algorithm $\A_2$ that maps every Nash equilibrium ${({\hat{x}},{\hat{y}})}$ of a game $\A_1(\phi)$ to a satisfying assignment $\A_2{({\hat{x}},{\hat{y}})}$ of $\phi$, if one exists, and to the string “no” otherwise.

We claim that the existence of these algorithms $\A_1$ and $\A_2$ implies that ${\mathsf{NP}}= {\mathsf{co}\mbox{-}\mathsf{NP}}$ (see also Figure \[f:mp\]).
In proof, consider an unsatisfiable SAT formula $\phi$, and an arbitrary Nash equilibrium ${({\hat{x}},{\hat{y}})}$ of the game $\A_1(\phi)$.[^68] We claim that ${({\hat{x}},{\hat{y}})}$ is a short, efficiently verifiable proof of the unsatisfiability of $\phi$, implying that ${\mathsf{NP}}= {\mathsf{co}\mbox{-}\mathsf{NP}}$. Given an alleged certificate ${({\hat{x}},{\hat{y}})}$ that $\phi$ is unsatisfiable, the verifier performs two checks: (1) compute the game $\A_1(\phi)$ using algorithm $\A_1$ and verify that ${({\hat{x}},{\hat{y}})}$ is a Nash equilibrium of $\A_1(\phi)$; (2) use the algorithm $\A_2$ to verify that $\A_2{({\hat{x}},{\hat{y}})}$ is the string “no.” This verifier runs in time polynomial in the description lengths of $\phi$ and ${({\hat{x}},{\hat{y}})}$. If ${({\hat{x}},{\hat{y}})}$ passes both of these tests, then correctness of the algorithms $\A_1$ and $\A_2$ implies that $\phi$ is unsatisfiable. ${\mathsf{TFNP}}$ and Its Subclasses {#s:tfnp} ------------------------------------ ### ${\mathsf{TFNP}}$ What’s really going on in the proof of Theorem \[t:mp91\] is a mismatch between the search version of an ${\mathsf{NP}}$-complete problem like SAT, where an instance may or may not have a witness, and a problem like computing a Nash equilibrium, where every instance has at least one witness. While the correct answer to a SAT instance might well be “no,” a correct answer to an instance of Nash equilibrium computation is always a Nash equilibrium. It seems that if the problem of computing a Nash equilibrium is going to be complete for some complexity class, it must be a class smaller than ${\mathsf{NP}}$. The subset of ${\mathsf{NP}}$ (search) problems for which every instance has at least one witness is called ${\mathsf{TFNP}}$, for “total functional ${\mathsf{NP}}$.” The proof of Theorem \[t:mp91\] shows more generally that if [*any*]{} ${\mathsf{TFNP}}$ problem is ${\mathsf{NP}}$-complete, then ${\mathsf{NP}}= {\mathsf{co}\mbox{-}\mathsf{NP}}$. 
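The verifier at the heart of this argument is mechanical enough to write out. In the sketch below, `A1`, `A2`, and `is_nash` are hypothetical stand-ins for the reduction's two algorithms and a polynomial-time equilibrium checker; the toy implementations at the bottom exist only to exercise the control flow (the real $\A_1$ and $\A_2$ are assumed for contradiction, not constructed).

```python
def unsat_certificate_verifier(phi, cert, A1, A2, is_nash):
    """Accept iff cert proves that the SAT formula phi is unsatisfiable."""
    game = A1(phi)                  # step (1): rebuild the game, poly time
    if not is_nash(game, cert):     # step (1): cert must be a Nash equilibrium
        return False
    return A2(game, cert) == "no"   # step (2): the reduction found no assignment

# Toy stand-ins (NOT the real reduction): the "game" is the formula itself,
# every certificate counts as an equilibrium, and A2 says "no" exactly when
# the formula (here modeled as a bare boolean) is unsatisfiable.
A1 = lambda phi: phi
is_nash = lambda game, cert: True
A2 = lambda game, cert: "no" if game is False else "sat-assignment"

assert unsat_certificate_verifier(False, object(), A1, A2, is_nash)
assert not unsat_certificate_verifier(True, object(), A1, A2, is_nash)
```

Correctness of $\A_1$ and $\A_2$ is what makes acceptance equivalent to unsatisfiability, which is exactly the ${\mathsf{co}\mbox{-}\mathsf{NP}}$ verifier the theorem forbids (unless ${\mathsf{NP}}= {\mathsf{co}\mbox{-}\mathsf{NP}}$).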
Thus a fundamental barrier to ${\mathsf{NP}}$-completeness is the guaranteed existence of a witness. Since computing a Nash equilibrium does not seem to be ${\mathsf{NP}}$-complete, the sensible refined goal is to prove that the problem is ${\mathsf{TFNP}}$-complete—as hard as any other ${\mathsf{NP}}$ problem with a guaranteed witness. ### Syntactic vs. Semantic Complexity Classes Unfortunately, ${\mathsf{TFNP}}$-completeness is also too ambitious a goal. The reason is that ${\mathsf{TFNP}}$ does not seem to have complete problems. Think about the complexity classes that [*are*]{} known to have complete problems—${\mathsf{NP}}$ of course, and also classes like ${\mathsf{P}}$ and ${\mathsf{PSPACE}}$. What do these complexity classes have in common? They are “syntactic,” meaning that membership can be characterized via acceptance by some concrete computational model, such as polynomial-time or polynomial-space deterministic or nondeterministic Turing machines. In this sense, there is a generic reason for membership in these complexity classes. Syntactically defined complexity classes always have a “generic” complete problem, where the input is a description of a problem in terms of the accepting machine and an instance of the problem, and the goal is to solve the given instance of the given problem. For example, the generic ${\mathsf{NP}}$-complete problem takes as input a description of a verifier, a polynomial time bound, and an encoding of an instance, and the goal is to decide whether or not there is a witness, meaning a string that causes the given verifier to accept the given instance in at most the given number of steps. ${\mathsf{TFNP}}$ has no obvious generic reason for membership, and as such is called a “semantic” class.[^69] For example, the problem of computing a Nash equilibrium of a bimatrix game belongs to ${\mathsf{TFNP}}$ because of the topological arguments that guarantee the existence of a Nash equilibrium (see Section \[s:bfp\]). 
Another problem in ${\mathsf{TFNP}}$ is factoring: given a positive integer, output its factorization. Here, membership in ${\mathsf{TFNP}}$ has a number-theoretic explanation.[^70] Can the guaranteed existence of a Nash equilibrium of a game and of a factorization of an integer be regarded as separate instantiations of some “generic” ${\mathsf{TFNP}}$ argument? No one knows the answer.

### Syntactic Subclasses of ${\mathsf{TFNP}}$

Given that the problem of computing a Nash equilibrium appears too specific to be complete for ${\mathsf{TFNP}}$, we must refine our goal again, and try to prove that the problem is complete for a still smaller complexity class. @P94 initiated the search for syntactic subclasses of ${\mathsf{TFNP}}$ that contain interesting problems not known to belong to ${\mathsf{P}}$. His proposal was to categorize ${\mathsf{TFNP}}$ problems according to the type of mathematical proof used to guarantee the existence of a witness. Interesting subclasses include the following:

- ${\mathsf{PPAD}}$ (for polynomial parity argument, directed version): Problems that can be solved by path-following in an (exponential-size) directed graph with in- and out-degree at most 1 and a known source vertex (specifically, the problem of identifying a sink or source vertex other than the given one).

- ${\mathsf{PPA}}$ (for polynomial parity argument, undirected version): Problems that can be solved by path-following in an undirected graph (specifically, given an odd-degree vertex, the problem of identifying a different odd-degree vertex).

- ${\mathsf{PLS}}$ (for polynomial local search): Problems that can be solved by path-following in a directed acyclic graph (specifically, given such a graph, the problem of identifying a sink vertex).[^71]

- ${\mathsf{PPP}}$ (for polynomial pigeonhole principle): Problems that reduce to the following: given a function $f$ mapping $\{1,2,\ldots,n\}$ to $\{1,2,\ldots,n-1\}$, find $i \neq j$ such that $f(i)=f(j)$.
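The pigeonhole principle behind ${\mathsf{PPP}}$ is easy to make concrete: because $n$ values land in $n-1$ slots, a collision must exist, and the brute-force scan below (my own illustration, with $f$ given as a black box) finds one in at most $n$ evaluations. The point, of course, is that for succinctly described $f$ this scan takes exponential time.

```python
def find_collision(f, n):
    """Given f: {1,..,n} -> {1,..,n-1}, return i != j with f(i) == f(j)."""
    seen = {}                   # value -> first index that produced it
    for i in range(1, n + 1):
        v = f(i)
        if v in seen:
            return seen[v], i   # pigeonhole guarantees we reach this line
        seen[v] = i
    raise AssertionError("f's range was not contained in {1,..,n-1}")

i, j = find_collision(lambda k: max(1, k - 1), n=5)   # f: 1, 1, 2, 3, 4
assert i != j and max(1, i - 1) == max(1, j - 1)
```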
All of these complexity classes can be viewed as intermediate to ${\mathsf{P}}$ and ${\mathsf{NP}}$. The conjecture, supported by oracle separations [@B+98], is that all four of these classes are distinct (Figure \[fig:belief\]).

*(Figure \[fig:belief\]: ${\mathsf{PPAD}}$ and ${\mathsf{PLS}}$ drawn as overlapping ellipses inside a larger ellipse labeled ${\mathsf{TFNP}}$.)*

Section \[s:bfp\] outlined the argument that the guaranteed existence of Nash equilibria reduces to the guaranteed existence of Brouwer fixed points, and Section \[s:eol\] showed (via Sperner’s lemma) that Brouwer’s fixed-point theorem reduces to path-following in a directed graph with in- and out-degrees at most 1. Thus, ${\mathsf{PPAD}}$ would seem to be the subclass of ${\mathsf{TFNP}}$ with the best chance of capturing the complexity of computing a Nash equilibrium.

${\mathsf{PPAD}}$ and Its Complete Problems {#s:ppad}
-------------------------------------------

### [[EoL]{}]{}: The Generic Problem for ${\mathsf{PPAD}}$ {#ss:seol}

We can formally define the class ${\mathsf{PPAD}}$ by defining its generic problem. (A problem is then in ${\mathsf{PPAD}}$ if it reduces in polynomial time to the generic problem.) Just as the [End-of-the-Line ([[EoL]{}]{})]{} problem served as the starting point of our communication complexity lower bound (see Section \[s:eol\]), a succinct version of the problem will be the basis for our computational hardness results.
Given two circuits $S$ and $P$ (for “successor” and “predecessor”), each mapping ${\{0,1\}}^n$ to ${\{0,1\}}^n \cup \{ NULL\}$ and with size polynomial in $n$, and with $P(0^n) = NULL$, find an input $v \in {\{0,1\}}^n$ that satisfies one of the following: - $S(v)$ is NULL; - $P(v)$ is NULL and $v \neq 0^n$; - $v \neq P(S(v))$; or - $v \neq S(P(v))$ and $v \neq 0^n$. Analogous to Section \[s:eol\], we can view the circuits $S$ and $P$ as defining a graph $G$ with in- and out-degrees at most 1 (with edge $(v,w)$ in $G$ if and only if $S(v) = w$ and $P(w) = v$), and with a given source vertex $0^n$. The [[EoL]{}]{}problem then corresponds to identifying either a sink vertex of $G$ or a source vertex other than $0^n$.[^72] A solution is guaranteed to exist—if nothing else, the other end of the path of $G$ that originates with the vertex $0^n$. Thus [[EoL]{}]{}does indeed belong to ${\mathsf{TFNP}}$, and ${\mathsf{PPAD}}\subseteq {\mathsf{TFNP}}$. Note also that the class is syntactic and by definition has a complete problem, namely the [[EoL]{}]{}problem. ### Problems in ${\mathsf{PPAD}}$ The class ${\mathsf{PPAD}}$ contains several natural problems (in addition to the [[EoL]{}]{}problem). For example, it contains a computational version of Sperner’s lemma—given a succinct description (e.g., polynomial-size circuits) of a legal coloring of an exponentially large triangulation of a simplex, find a sub-simplex such that its vertices showcase all possible colors. This problem can be regarded as a special case of the [[EoL]{}]{}problem (see Section \[s:eol\]), and hence belongs to ${\mathsf{PPAD}}$. Another example is the problem of computing an approximate fixed point. Here the input is a succinct description of a $\lambda$-Lipschitz function $f$ (on the hypercube in $d$ dimensions, say) and a parameter ${\epsilon}$, and the goal is to compute a point $x$ with ${ {\| {f(x)-x} \|} } < {\epsilon}$ (with respect to some norm). 
The description length of $x$ should be polynomial in that of the function $f$. Such a point is guaranteed to exist provided ${\epsilon}$ is not too small relative to $\lambda$.[^73] The reduction from Brouwer’s fixed-point theorem to Sperner’s lemma (with colors corresponding to directions of movement, see Section \[s:bfp\]) shows that computing an approximate fixed point can also be regarded as a special case of the [[EoL]{}]{}problem, and hence belongs to ${\mathsf{PPAD}}$. The problem of computing an exact or approximate Nash equilibrium of a bimatrix game also belongs to ${\mathsf{PPAD}}$. For the problem of computing an ${\epsilon}$-approximate Nash equilibrium (with ${\epsilon}$ no smaller than inverse exponential in $n$), this follows from the proof of Nash’s theorem outlined in Section \[ss:nashpf\]. That proof shows that computing an ${\epsilon\text{-}\mathsf{NE}}$ is a special case of computing an approximate fixed point (of the regularized best-response function defined in  and ), and hence the problem belongs to ${\mathsf{PPAD}}$. The same argument shows that this is true more generally with any finite number of players (i.e., not only for bimatrix games). The problem of computing an exact Nash equilibrium (${\epsilon}=0$) also belongs to ${\mathsf{PPAD}}$ in the case of two-player (bimatrix) games.[^74] One way to prove this is via the Lemke-Howson algorithm [@LH64] (see also Section \[s:bimatrix\]), which reduces the computation of an (exact) Nash equilibrium of a bimatrix game to a path-following problem, much in the way that the simplex method reduces computing an optimal solution of a linear program to following a path of improving edges along the boundary of the feasible region. The proof of the Lemke-Howson algorithm’s inevitable convergence uses parity arguments akin to the one in the proof of Sperner’s lemma, and shows that the problem of computing a Nash equilibrium of a bimatrix game belongs to ${\mathsf{PPAD}}$. 
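Path-following is the common thread in all of these membership arguments. For concreteness, here is a minimal brute-force solver for the generic [[EoL]{}]{}problem of Section \[ss:seol\]: it treats $S$ and $P$ as black boxes and literally walks forward from $0^n$ until it finds a sink (it does not search for extra sources, which the problem also accepts). With succinct circuits the walk can take $2^n$ steps, which is exactly why this is no efficient algorithm. The strings and dictionaries below are my own toy encoding.

```python
def solve_eol(S, P, n):
    """Follow successor pointers from 0^n until a sink; may take 2^n steps."""
    v = "0" * n
    while True:
        w = S(v)
        # v is a solution if S(v) is NULL (case (i)) or v != P(S(v)) (case (iii)).
        if w is None or P(w) != v:
            return v
        v = w

# A 3-vertex path 000 -> 001 -> 011, with dict.get returning None for NULL.
succ = {"000": "001", "001": "011", "011": None}
pred = {"000": None, "001": "000", "011": "001"}
assert solve_eol(succ.get, pred.get, n=3) == "011"
```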
### ${\mathsf{PPAD}}$-Complete Fixed-Point Problems {#ss:ppadbfp} The [[EoL]{}]{}problem is ${\mathsf{PPAD}}$-complete by construction. What about “more natural” problems? Papadimitriou [@P94] built evidence that ${\mathsf{PPAD}}$ is a fundamental complexity class by showing that fixed-point problems are complete for it. To be precise, let [[Brouwer]{}]{}$({ {\| {\cdot} \|} }, d, \F, \epsilon)$ denote the following problem: given a (succinct description of a) function $f \in \F$, with $f:[0,1]^d \rightarrow [0,1]^d$, compute a point $x \in [0,1]^d$ such that ${ {\| {f(x)-x} \|} } < {\epsilon}$. The original hardness result from [@P94] is the following. \[t:brouwer1\] The [[Brouwer]{}]{}$({ {\| {\cdot} \|} }, d, \F, \epsilon)$ problem is ${\mathsf{PPAD}}$-complete, even when $d=3$, the functions in $\F$ are $O(1)$-Lipschitz, ${ {\| {\cdot} \|} }$ is the $\ell_{\infty}$ norm, and ${\epsilon}$ is exponentially small in the description length $n$ of a function $f \in \F$. The high-level idea of the proof is similar to the construction in Section \[s:ccbfp\] that shows how to interpret [[EoL]{}]{}instances as implicitly defined Lipschitz functions on the hypercube. Given descriptions of the circuits $S$ and $P$ in an instance of the generic [[EoL]{}]{}problem, it is possible to define an (efficiently computable) function whose gradient “follows the line” of an embedding of the induced directed graph into the hypercube. Three dimensions are needed in the construction in [@P94] to ensure that the images of different edges do not intersect (except at a shared endpoint). 
Some time later, @CD09 used a somewhat different approach to prove that Theorem \[t:brouwer1\] holds even when $d=2$.[^75] Much more recently, with an eye toward hardness results for ${\epsilon}$-approximate Nash equilibria with constant ${\epsilon}$ (see Solar Lecture 5), @R16 proved the following.[^76] \[t:brouwer2\] The [[Brouwer]{}]{}$({ {\| {\cdot} \|} }, d, \F, \epsilon)$ problem is ${\mathsf{PPAD}}$-complete even when the functions in $\F$ are $O(1)$-Lipschitz functions, $d$ is linear in the description length $n$ of a function in $\F$, ${ {\| {\cdot} \|} }$ is the normalized $\ell_2$ norm (with ${ {\| {x} \|} } = \sqrt{\tfrac{1}{d} \sum_{i=1}^d x_i^2}$), and ${\epsilon}$ is a sufficiently small constant. The proof of Theorem \[t:brouwer2\] is closely related to the third step of our communication complexity lower bound (Section \[s:ccbfp\]), and in particular makes use of a similar embedding of graphs into the hypercube with the properties (P1) and (P2) described in Section \[ss:embed1\].[^77] One major difference is that our proof of existence of the embedding in Section \[s:ccbfp\] used the probabilistic method and hence is not constructive (which is not an issue in the two-party communication model), while the computational lower bound in Theorem \[t:brouwer2\] requires an efficiently computable embedding. In particular, the reduction from [[EoL]{}]{}to [[Brouwer]{}]{}$({ {\| {\cdot} \|} }, d, \F, \epsilon)$ must efficiently produce a succinct description of the function $f$ induced by an instance of [[EoL]{}]{}, and it should be possible to efficiently evaluate $f$, presumably while using the given [[EoL]{}]{}circuits $S$ and $P$ only as black boxes. For example, it should be possible to efficiently decode points of the hypercube (to a vertex, edge, or $\bot$, see Section \[ss:embed1\]). Conceptually, the fixes for these problems are relatively simple. 
First, rather than mapping the vertices randomly into the hypercube, the reduction in the proof of Theorem \[t:brouwer2\] embeds the vertices using an error-correcting code (with constant rate and efficient encoding and decoding algorithms). This enforces property (P1) of Section \[ss:embed1\]. Second, rather than using a straight-line embedding, the reduction is more proactive about making the images of different edges stay far apart (except for at shared endpoints). Specifically, an edge of the directed graph induced by the given [[EoL]{}]{}instance is now mapped to 4 straight line segments, and along each line segment, two-thirds of the coordinates stay fixed. (This requires blowing up the number of dimensions by a constant factor.) For example, the directed edge $(u,v)$ can be mapped to the path $$(\sigma(u),\sigma(u),\mathbf{\tfrac{1}{4}}) \mapsto (\sigma(u),\sigma(v),\mathbf{\tfrac{1}{4}}) \mapsto (\sigma(u),\sigma(v),\mathbf{\tfrac{3}{4}}) \mapsto (\sigma(v),\sigma(v),\mathbf{\tfrac{3}{4}}) \mapsto (\sigma(v),\sigma(v),\mathbf{\tfrac{1}{4}}),$$ where $\sigma$ denotes the error-correcting code used to map the vertices to the hypercube and the boldface $\mathbf{\tfrac{1}{4}}$ and $\mathbf{\tfrac{3}{4}}$ indicate the value of the last third of the coordinates. This maneuver enforces property (P2) of Section \[ss:embed1\]. It also ensures that it is easy to decode points of the hypercube that are close to the image of an edge of the graph—at least one of the edge’s endpoints can be recovered from the values of the frozen coordinates, and the other endpoint can be recovered using the given predecessor and successor circuits.[^78] ### ${\mathsf{PPAD}}$-Complete Equilibrium Computation Problems @P94 defined the class ${\mathsf{PPAD}}$ in large part to capture the complexity of computing a Nash equilibrium, conjecturing that the problem is in fact ${\mathsf{PPAD}}$-complete. Over a decade later, a flurry of papers confirmed this conjecture. 
First, Daskalakis, Goldberg, and Papadimitriou [@DGP06; @GP06] proved that computing an ${\epsilon\text{-}\mathsf{NE}}$ of a four-player game, with ${\epsilon}$ inverse exponential in the size of the game, is ${\mathsf{PPAD}}$-complete. This approach was quickly refined [@CD05; @DP05], culminating in the proof of Chen and Deng [@CD06] that computing a Nash equilibrium (or even an ${\epsilon\text{-}\mathsf{NE}}$ with exponentially small ${\epsilon}$) of a bimatrix game is ${\mathsf{PPAD}}$-complete. Thus the nice properties possessed by Nash equilibria of bimatrix games (see Section \[s:bimatrix\]) are not enough to elude computational intractability. @CDT06 strengthened this result to hold even for values of ${\epsilon}$ that are only inverse polynomial in the size of the game.[^79] The papers by @DGP09 and @CDT09 give a full account of this breakthrough sequence of results. \[t:cdt\] The problem of computing an ${\epsilon\text{-}\mathsf{NE}}$ of an $n \times n$ bimatrix game is ${\mathsf{PPAD}}$-complete, even when ${\epsilon}= 1/{\mathrm{poly}}(n)$. The proof of Theorem \[t:cdt\], which is a tour de force, is also outlined in the surveys by @J07, @P07, @DGPcacm, and @et. Fundamentally, the proof shows how to define a bimatrix game so that every Nash equilibrium effectively performs a gate-by-gate simulation of the circuits of a given [[EoL]{}]{}instance. Theorem \[t:cdt\] left open the possibility that, for every constant ${\epsilon}> 0$, an ${\epsilon\text{-}\mathsf{NE}}$ of a bimatrix game can be computed in polynomial time. (Recall from Corollary \[cor:lmm2\] that it can be computed in [ *quasi-polynomial*]{} time.) A decade later, @R16 ruled out this possibility (under suitable complexity assumptions) by proving a quasi-polynomial-time hardness result for the problem when ${\epsilon}$ is a sufficiently small constant. We will have much more to say about this result in Solar Lecture 5. Are ${\mathsf{TFNP}}$ Problems Hard? 
{#s:evidence} ------------------------------------ It’s all fine and good to prove that a problem is as hard as any other problem in ${\mathsf{PPAD}}$, but what makes us so sure that ${\mathsf{PPAD}}$ problems (or even ${\mathsf{TFNP}}$ problems) can be computationally difficult? ### Basing the Hardness of ${\mathsf{TFNP}}$ on Cryptographic Assumptions The first evidence of hardness of problems in ${\mathsf{TFNP}}$ came in the form of exponential lower bounds for functions given as “black boxes,” or equivalently query complexity lower bounds, as in Proposition \[c:eol\] for the [[EoL]{}]{}problem or @HPV89 for the [[Brouwer]{}]{} problem. Can we relate the hardness of ${\mathsf{TFNP}}$ and its subclasses to other standard complexity assumptions? Theorem \[t:mp91\] implies that we can’t base hardness of ${\mathsf{TFNP}}$ on the assumption that ${\mathsf{P}}\neq {\mathsf{NP}}$, unless ${\mathsf{NP}}= {\mathsf{co}\mbox{-}\mathsf{NP}}$. What about cryptographic assumptions? After all, the problem of inverting a one-way permutation belongs to ${\mathsf{TFNP}}$ (and even the subclass ${\mathsf{PPP}}$). Thus, sufficiently strong cryptographic assumptions imply hardness of ${\mathsf{TFNP}}$. Can we prove hardness also for all of the other interesting subclasses of ${\mathsf{TFNP}}$, or can we establish the hardness of ${\mathsf{TFNP}}$ under weaker assumptions (like the existence of one-way functions)? Along the former lines, a recent sequence of papers (not discussed here) show that sufficiently strong cryptographic assumptions imply that ${\mathsf{PPAD}}$ is hard [@BPR15; @GPS16; @RSS17; @HY17; @C+19]. The rest of this lecture covers a recent result in the second direction by @HNY17, who show that the average-case hardness of ${\mathsf{TFNP}}$ can be based on the average-case hardness of ${\mathsf{NP}}$. (Even though the worst-case hardness of ${\mathsf{TFNP}}$ [*cannot*]{} be based on that of ${\mathsf{NP}}$, unless ${\mathsf{NP}}={\mathsf{co}\mbox{-}\mathsf{NP}}$!) 
Note that assuming that ${\mathsf{NP}}$ is hard on average is a weaker assumption than the existence of one-way functions (one-way functions imply the existence of hard-on-average problems in ${\mathsf{NP}}$). \[theorem:average\_hard\] If there exists a hard-on-average language in ${\mathsf{NP}}$, then there exists a hard-on-average search problem in ${\mathsf{TFNP}}$. There is some fine print in the precise statement of the result (see Remarks \[rem:public\] and \[rem:uniform\]), but the statement in Theorem \[theorem:average\_hard\] is the gist of it.[^80] ### Proof Sketch of Theorem \[theorem:average\_hard\] Let $L$ be a language in ${\mathsf{NP}}$ that is hard on average w.r.t. some family of distributions $D_n$ on input strings of length $n$. Average-case hardness of $(L, D_n)$ means that no polynomial-time algorithm achieves an advantage of $1/{\mathrm{poly}}(n)$ over random guessing, for any polynomial, when the input is sampled according to $D_n$. Each $D_n$ should be efficiently sampleable, so that hardness cannot be baked into the input distribution. Can we convert such a problem into one that is total while retaining its average-case hardness? Here’s an initial attempt: [**Input:**]{} $l$ independent samples $x_1, x_2, \ldots, x_l$ from $D_n$.\ [**Output:**]{} a witness for some $x_i \in L$. For sufficiently large $l$, this problem is “almost total.” Because $(L,D_n)$ is hard on average, random instances are nearly equally likely to be “yes” or “no” instances (otherwise a constant response would beat random guessing). Thus, except with probability $\approx 2^{-l}$, at least one of the sampled instances $x_i$ is a “yes” instance and has a witness. Taking $l$ polynomial in $n$, we get a problem that is total except with exponentially small probability. How can we make it “totally total”? The idea is to sample the $x_i$’s in a correlated way, using a random shifting trick reminiscent of Lautemann’s proof that ${\mathsf{BPP}}\subseteq \Sigma_2 \cap \Pi_2$ [@L83]. 
This will give a non-uniform version of Theorem \[theorem:average\_hard\]; Remark \[rem:uniform\] sketches the changes necessary to get a uniform version. Fix $n$. Let $D_n(r)$ denote the output of the sampling algorithm for $D_n$, given the random seed $r \in {\{0,1\}}^n$. (By padding, we can assume that the input length and the random seed length both equal $n$.) Call a set of strings $\{s_1, s_2, \ldots, s_l\} \subseteq {\{0,1\}}^n$ [*good*]{} if for every seed $r\in \{0,1\}^n$ there exists an index $i\in [l]$ such that $D_n(r\oplus s_i) \in L$. We can think of the $s_i$’s as masks; goodness then means that there is always a mask whose application yields a “yes” instance. \[claim:goodness\] If $s_1, s_2, \ldots, s_{2n} \sim \{0,1\}^n$ are sampled uniformly and independently, then $\{ s_1,\ldots,s_{2n} \}$ is good except with exponentially small probability. Fix a seed $r\in \{0,1\}^n$. The distribution of $r \oplus s_i$ (over $s_i$) is uniform, so $D_n(r \oplus s_i)$ has a roughly 50% chance of being a “yes” instance (since $(L,D_n)$ is hard on average). Thus the probability (over $s_1,\ldots,s_{2n}$) that $D_n(r \oplus s_i)$ is a “no” instance for [*every*]{} $s_i$ is $\approx 2^{-2n}$. Taking a union bound over the $2^n$ choices for $r$ completes the proof. Consider now the following reduction, from the assumed hard-on-average ${\mathsf{NP}}$ problem $(L,D_n)$ to a hopefully hard-on-average ${\mathsf{TFNP}}$ problem. [**Chosen in advance:**]{} A good set of strings $\{s_1, s_2, \ldots, s_{2n}\}$.\ [**Input:**]{} an instance $x$ of $(L,D_n)$, in the form of the random seed $\hat{r}$ used to generate $x = D_n(\hat{r})$.\ [**Output:**]{} a witness for one of the instances $D_n(\hat{r}\oplus s_1),\ldots,D_n(\hat{r}\oplus s_{2n})$. 
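In code, the goodness condition and the reduction's masked instances look as follows (a toy sketch: seeds and masks are $n$-bit integers, and `is_yes` stands in for the predicate $D_n(\cdot) \in L$, which for a genuinely hard-on-average problem is of course not efficiently computable):

```python
def is_good(masks, n, is_yes):
    """Check goodness: for every seed r in {0,1}^n, some masked seed
    r XOR s_i yields a "yes" instance.

    Seeds and masks are n-bit integers; is_yes is an illustrative
    stand-in for the predicate D_n(.) in L.
    """
    return all(any(is_yes(r ^ s) for s in masks)
               for r in range(2 ** n))


def masked_instances(r_hat, masks):
    # The seeds of the 2n instances produced by the reduction on input
    # seed r_hat: one instance per mask.
    return [r_hat ^ s for s in masks]
```

Claim \[claim:goodness\] says that $2n$ uniformly random masks pass `is_good` (over all $2^n$ seeds) except with exponentially small probability.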
By the definition of a good set of strings, there is always at least one witness of the desired form, and so this reduction produces a ${\mathsf{TFNP}}$ problem (or more accurately, a ${\mathsf{TFNP}}/{\mathrm{poly}}$ problem, with $s_1,\ldots,s_{2n}$ given as advice). Let $D'$ denote the distribution over instances of this problem induced by the uniform distribution over $\hat{r}$. It remains to show how a (non-uniform) algorithm that solves this ${\mathsf{TFNP}}/{\mathrm{poly}}$ problem (with respect to $D'$) can be used to beat random guessing (with inverse polynomial advantage) for $(L,D_n)$ in a comparable amount of time. Given an algorithm $A$ for the former problem (and the corresponding good set of strings), consider the following algorithm $B$ for $(L,D_n)$. [**Input:**]{} A random instance $x$ of $(L,D_n)$ and the random seed $\hat{r}$ that generated it (so $x = D_n(\hat{r})$). 1. Choose $i\in [2n]$ uniformly at random. 2. Set $r^\star = \hat{r} \oplus s_i$. 3. Use the algorithm $A$ to generate a witness $w$ for one of the instances $$D_n(r^\star\oplus s_1), D_n(r^\star\oplus s_2), \ldots, D_n(r^\star\oplus s_{2n}).$$ (Note that the $i$th instance is precisely the one we want to solve.) 4. If $w$ is a witness for $D_n(r^\star\oplus s_i)$, then output “yes.” 5. Otherwise, randomly answer “yes” or “no” (with 50/50 probability). Consider a “yes” instance $D_n(\hat{r})$ of $L$. If algorithm $A$ happens to output a witness to the $i$th instance $D_n(r^\star\oplus s_i) = D_n(\hat{r})$, then algorithm $B$ correctly decides the problem. The worry is that the algorithm $A$ somehow conspires to always output a witness for an instance other than the “real” one. Suppose algorithm $A$, when presented with the instances $D_n(r^\star\oplus s_1), D_n(r^\star\oplus s_2), \ldots, D_n(r^\star\oplus s_{2n})$, exhibits a witness for the $j$th instance $D_n(r^\star\oplus s_j)$. 
This collection of instances could have been produced by the reduction in exactly $2n$ different ways: with $i=1$ and $\hat{r}=r^{\star} \oplus s_1$, with $i=2$ and $\hat{r}=r^{\star} \oplus s_2$, and so on. Since $i$ and $\hat{r}$ were chosen independently and uniformly at random, each of these $2n$ outcomes is equally likely, and algorithm $A$ has no way of distinguishing between them. Thus whatever $j$ is, $A$’s witness has at least a $1/2n$ chance of being a witness for the true problem $D_n(\hat{r})$ (where the probability is over both $\hat{r}$ and $i$). We conclude that, for “yes” instances of $L$, algorithm $B$ has advantage $\tfrac{1}{2n}$ over random guessing. Since roughly 50% of the instances $D_n(\hat{r})$ are “yes” instances (since $(L,D_n)$ is average-case hard), algorithm $B$ has advantage roughly $\tfrac{1}{4n}$ over random guessing for $(L,D_n)$. This contradicts our assumption that $(L,D_n)$ is hard on average. We have completed the proof of Theorem \[theorem:average\_hard\], modulo two caveats. \[rem:public\] The algorithm $B$ used in the reduction above beats random guessing for $(L,D_n)$, provided the algorithm receives as input the random seed $\hat{r}$ used to generate an instance of $(L,D_n)$. That is, our current proof of Theorem \[theorem:average\_hard\] assumes that $(L,D_n)$ is hard on average [*even with public coins*]{}. While there are problems in ${\mathsf{NP}}$ conjectured to be average-case hard in this sense (like random SAT near the phase transition), it would be preferable to have a version of Theorem \[theorem:average\_hard\] that allows for private coins. Happily, @HNY17 prove that there exists a private-coin average-case hard problem in ${\mathsf{NP}}$ only if there is also a public-coin such problem. This implies that Theorem \[theorem:average\_hard\] holds also in the private-coin case. 
\[rem:uniform\] Our proof of Theorem \[theorem:average\_hard\] only proves hardness for the non-uniform class ${\mathsf{TFNP}}/{\mathrm{poly}}$. (The good set $\{ s_1,\ldots,s_{2n} \}$ of strings is given as “advice” separately for each $n$.) It is possible to extend the argument to (uniform) ${\mathsf{TFNP}}$, under some additional (reasonably standard) complexity assumptions. The idea is to use techniques from derandomization. We already know from Claim \[claim:goodness\] that almost all sets of $2n$ strings from ${\{0,1\}}^n$ are good. Also, the problem of checking whether or not a set of strings is good is a $\Pi_2$ problem (for all $r \in {\{0,1\}}^n$ there exists $i \in [2n]$ such that $D_n(r \oplus s_i)$ has a witness). Assuming that there is a problem in ${\mathsf{E}}$ with exponential-size $\Pi_2$ circuit complexity, it is possible to derandomize the probabilistic argument and efficiently compute a good set $\{ s_1,\ldots,s_l\}$ of strings (with $l$ larger than $2n$ but still polynomial in $n$), à la @IW97. An important open research direction is to extend Theorem \[theorem:average\_hard\] to subclasses of ${\mathsf{TFNP}}$, such as ${\mathsf{PPAD}}$. [**Open Problem:**]{} Does an analogous average-case hardness result hold for ${\mathsf{PPAD}}$? The Computational Complexity of Computing an Approximate Nash Equilibrium ========================================================================= Introduction ------------ Last lecture we stated without proof the result by @DGP09 and @CDT09 that computing an ${\epsilon}$-approximate Nash equilibrium of a bimatrix game is ${\mathsf{PPAD}}$-complete, even when ${\epsilon}$ is an inverse polynomial function of the game size (Theorem \[t:cdt\]). Thus, it would be surprising if there were a polynomial-time (or even subexponential-time) algorithm for this problem. 
Recall from Corollary \[cor:lmm2\] in Solar Lecture 1 that the story is different for constant values of ${\epsilon}$, where an ${\epsilon}$-approximate Nash equilibrium can be computed in quasi-polynomial (i.e., $n^{O(\log n)}$) time. The Pavlovian response of a theoretical computer scientist to a quasi-polynomial-time algorithm is to conjecture that a polynomial-time algorithm must also exist. (There are only a few known natural problems that appear to have inherently quasi-polynomial time complexity.) But recall that the algorithm in the proof of Corollary \[cor:lmm2\] is just exhaustive search over all probability distributions that are uniform over a multi-set of logarithmically many strategies (which is good enough, by Theorem \[t:lmm\]). Thus the algorithm reveals no structure of the problem other than the fact that the natural search space for it has quasi-polynomial size. It is easy to imagine that there are no “shortcuts” to searching this space, in which case a quasi-polynomial amount of time would indeed be necessary. How would we ever prove such a result? Presumably by a non-standard super-polynomial reduction from some ${\mathsf{PPAD}}$-complete problem like succinct [[EoL]{}]{}(defined in Section \[ss:seol\]). This might seem hard to come by, but in a recent breakthrough, @R16 provided just such a reduction! \[thm:Aviad\] For all sufficiently small constants ${\epsilon}>0$, for every constant $\delta > 0$, there is no $n^{O(\log^{1-\delta} n)}$-time algorithm for computing an ${\epsilon}$-approximate Nash equilibrium of a bimatrix game, unless the succinct [[EoL]{}]{}problem has a $2^{O(n^{1-\delta'})}$-time algorithm for some constant $\delta' > 0$. 
In other words, assuming an analog of the Exponential Time Hypothesis (ETH) [@IPZ01] for ${\mathsf{PPAD}}$, the quasi-polynomial-time algorithm in Corollary \[cor:lmm2\] is essentially optimal![^81][^82] Three previous papers that used an ETH assumption (for ${\mathsf{NP}}$) along with PCP machinery to prove quasi-polynomial-time lower bounds for ${\mathsf{NP}}$ problems are: 1. @AaronsonImMo14, for the problem of computing the value of free games (i.e., two-prover proof systems with stochastically independent questions), up to additive error ${\epsilon}$; 2. @BravermanYoWe15, for the problem of computing the ${\epsilon}$-approximate Nash equilibrium with the highest expected sum of player payoffs; and 3. @BKRW17, for the problem of distinguishing graphs with a $k$-clique from those that only have $k$-vertex subgraphs with density at most $1-{\epsilon}$. In all three cases, the hardness results apply when ${\epsilon}> 0$ is a sufficiently small constant. Quasi-polynomial-time algorithms are known for all three problems. The main goal of this lecture is to convey some of the ideas in the proof of Theorem \[thm:Aviad\]. The proof is a tour de force and the paper [@R16] is 57 pages long, so our treatment will necessarily be impressionistic. We hope to explain the following: 1. What the reduction in Theorem \[thm:Aviad\] must look like. (Answer: a blow-up from size $n$ to size $\approx 2^{\sqrt{n}}$.) 2. How a $n \mapsto \approx 2^{\sqrt{n}}$-type blowup can naturally arise in a reduction to the problem of computing an approximate Nash equilibrium. 3. Some of the tricks used in the reduction. 4. Why these tricks naturally lead to the development and application of PCP machinery. 
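For intuition about the first item above, the blow-up $f(n) \approx 2^{\sqrt{n}}$ is forced by the requirement $2^n \approx f(n)^{\log f(n)}$: taking base-2 logarithms gives $n \approx (\log_2 f(n))^2$, i.e., $\log_2 f(n) \approx \sqrt{n}$. A quick numerical sanity check (illustrative only):

```python
import math


def game_size(n):
    # Candidate blow-up: a length-n succinct EoL instance becomes a
    # 2^sqrt(n) x 2^sqrt(n) bimatrix game.
    return 2 ** math.sqrt(n)


# Quasi-polynomial time in the game size N is N^{log2 N} = 2^{(log2 N)^2};
# with N = 2^sqrt(n), the exponent (log2 N)^2 recovers n exactly, i.e.,
# quasi-polynomial in N is exponential in n.
n = 10_000
N = game_size(n)
exponent = math.log2(N) ** 2  # should equal n
```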
Proof of Theorem \[thm:Aviad\]: An Impressionistic Treatment ------------------------------------------------------------ ### The Necessary Blow-Up The goal is to reduce length-$n$ instances of the succinct [[EoL]{}]{}problem to length-$f(n)$ instances of the problem of computing an ${\epsilon}$-approximate Nash equilibrium with constant ${\epsilon}$, so that a sub-quasi-polynomial-time algorithm for the latter implies a subexponential-time algorithm for the former. Thus the mapping $n \mapsto f(n)$ should satisfy $2^n \approx f(n)^{\log f(n)}$ and hence $f(n) \approx 2^{\sqrt{n}}$. That is, we should be looking to encode a length-$n$ instance of succinct [[EoL]{}]{}as a $2^{\sqrt{n}} \times 2^{\sqrt{n}}$ bimatrix game. The $\sqrt{n}$ will essentially come from the “birthday paradox,” with random subsets of $[n]$ of size $s$ likely to intersect once $s$ exceeds $\sqrt{n}$. The blow-up from $n$ to $2^{\sqrt{n}}$ will come from PCP-like machinery, as well as a game-theoretic gadget (“Althöfer games,” see Section \[ss:althofer\]) that forces players to randomize nearly uniformly over size-$\sqrt{n}$ subsets of $[n]$ in every approximate Nash equilibrium. ### The Starting Point: [[${\epsilon}$-BFP]{}]{} The starting point of the reduction is the ${\mathsf{PPAD}}$-complete version of the [[${\epsilon}$-BFP]{}]{}problem in Theorem \[t:brouwer2\]. We restate that result here. \[t:brouwer3\] The [[Brouwer]{}]{}$({ {\| {\cdot} \|} }, d, \F, \epsilon)$ problem is ${\mathsf{PPAD}}$-complete when the functions in $\F$ are $O(1)$-Lipschitz functions from the $d$-dimensional hypercube $H=[0,1]^d$ to itself, $d$ is linear in the description length $n$ of a function in $\F$, ${ {\| {\cdot} \|} }$ is the normalized $\ell_2$ norm (with ${ {\| {x} \|} } = \sqrt{\tfrac{1}{d} \sum_{i=1}^d x_i^2}$), and ${\epsilon}$ is a sufficiently small constant. 
The proof is closely related to the reduction from [[2EoL]{}]{}to [${\epsilon}$-[2BFP]{}]{}outlined in Section \[s:ccbfp\], and Section \[ss:ppadbfp\] describes the additional ideas needed to prove Theorem \[t:brouwer3\]. As long as the error-correcting code used to embed vertices into the hypercube (see Section \[ss:ppadbfp\]) has linear-time encoding and decoding algorithms (as in [@S97], for example), the reduction can be implemented in linear time. In particular, our assumption that the succinct [[EoL]{}]{}problem has no subexponential-time algorithms automatically carries over to this version of the [[${\epsilon}$-BFP]{}]{}problem. In addition to the properties of the functions in $\F$ that are listed in the statement of Theorem \[t:brouwer3\], the proof of Theorem \[thm:Aviad\] crucially uses the “locally decodable” properties of these functions (see Section \[ss:local\]). ### [[${\epsilon}$-BFP]{}]{}$\le {\epsilon\text{-}\mathsf{NE}}$ (Attempt \#1): Discretize McLennan-Tourky One natural starting point for a reduction from [[${\epsilon}$-BFP]{}]{}to ${\epsilon\text{-}\mathsf{NE}}$ is the McLennan-Tourky analytic reduction in Section \[ss:mt06\]. Given a description of an $O(1)$-Lipschitz function $f:[0,1]^d \rightarrow [0,1]^d$, with $d$ linear in the length $n$ of the function’s description, the simplest reduction would proceed as follows. Alice and Bob each have a strategy set corresponding to the discretized hypercube $H_{{\epsilon}}$ (points of $[0,1]^d$ such that every coordinate is a multiple of ${\epsilon}$). Alice’s and Bob’s payoffs are defined as in the proof of Theorem \[t:mt06\]: for strategies $x,y \in H_{{\epsilon}}$, Alice’s payoff is $$\label{eq:apayoff3} 1 - { {\| {x-y} \|} }^2 = 1 - \frac{1}{d} \sum_{i=1}^d (x_i-y_i)^2$$ and Bob’s payoff is $$\label{eq:bpayoff3} 1 - { {\| {y-f(x)} \|} }^2 = 1 - \frac{1}{d} \sum_{j=1}^d (y_j-f(x)_j)^2.$$ (Here ${ {\| {\cdot} \|} }$ denotes the normalized $\ell_2$ norm.) 
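In code, these two payoffs are straightforward (an illustrative sketch; `f` is any candidate Brouwer function, and the points `x`, `y` are coordinate lists from the discretized hypercube):

```python
def alice_payoff(x, y):
    # 1 - ||x - y||^2 in the normalized l2 norm: Alice is rewarded for
    # matching Bob's point coordinate by coordinate.
    d = len(x)
    return 1 - sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / d


def bob_payoff(x, y, f):
    # 1 - ||y - f(x)||^2: Bob is rewarded for matching the image f(x).
    d = len(x)
    fx = f(x)
    return 1 - sum((yj - fj) ** 2 for yj, fj in zip(y, fx)) / d
```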
Thus Alice wants to imitate Bob’s strategy, while Bob wants to imitate the image of Alice’s strategy under the function $f$. This reduction is correct in that in every ${\epsilon}$-approximate Nash equilibrium of this game, Alice’s and Bob’s strategies are concentrated around an $O({\epsilon})$-approximate fixed point of the given function $f$ (in the normalized $\ell_2$ norm). See also the discussion in Section \[ss:mt06\]. The issue is that the reduction is not efficient enough. Alice and Bob each have $\Theta((1/{\epsilon})^d)$ pure strategies; since $d = \Theta(n)$, this is exponential in the size $n$ of the given [[${\epsilon}$-BFP]{}]{}instance, rather than exponential in $\sqrt{n}$. This exponential blow-up in size means that this reduction has no implications for the problem of computing an approximate Nash equilibrium. ### Separable Functions {#ss:separable} How can we achieve a blow-up exponential in $\sqrt{n}$ rather than in $n$? We might guess that the birthday paradox is somehow involved. To build up our intuition, we’ll discuss at length a trivial special case of the [[${\epsilon}$-BFP]{}]{}problem. It turns out that the hard functions used in Theorem \[t:brouwer3\] are in some sense surprisingly close to this trivial case. For now, we consider only instances $f$ of [[${\epsilon}$-BFP]{}]{}where $f$ is [ *separable*]{}. That is, $f$ has the form $$\label{eq:separable} f(x_1,\ldots,x_d) = (f_1(x_1),\ldots,f_d(x_d))$$ for efficiently computable functions $f_1,\ldots,f_d : [0,1]\rightarrow [0,1]$. Separable functions enjoy the ultimate form of “local decodability”—to compute the $i$th coordinate of $f(x)$, you only need to know the $i$th coordinate of $x$. Finding a fixed point of a separable function is easy: the problem decomposes into $d$ one-dimensional fixed point problems (one per coordinate), and each of these can be solved efficiently by a form of binary search. 
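To make the coordinate-wise binary search explicit, here is a minimal sketch (assuming only that each $f_i : [0,1] \to [0,1]$ is continuous, so that $g(x) = f_i(x) - x$ satisfies $g(0) \geq 0$ and $g(1) \leq 0$ and a sign change is always bracketed):

```python
def fixed_point_1d(f, eps=1e-6):
    """Bisection for an eps-approximate fixed point of continuous
    f: [0,1] -> [0,1].  The invariant f(lo) >= lo and f(hi) <= hi
    brackets a fixed point throughout."""
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(mid) >= mid:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2


def fixed_point_separable(fs, eps=1e-6):
    # A separable map f(x) = (f_1(x_1), ..., f_d(x_d)) decomposes into
    # d independent one-dimensional problems, solved coordinate by
    # coordinate.
    return [fixed_point_1d(fi, eps) for fi in fs]
```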
The hard functions used in Theorem \[t:brouwer3\] possess a less extreme form of “local decodability,” in that each coordinate of $f(x)$ can be computed using only a small amount of “advice” about $f$ and $x$ (cf. the [${\epsilon}$-[2BFP]{}]{}$\le {\epsilon\text{-}\mathsf{NE}}$ reduction in Section \[ss:step4\]). ### [[${\epsilon}$-BFP]{}]{}$\le {\epsilon\text{-}\mathsf{NE}}$ (Attempt \#2): Coordinatewise Play {#ss:coordinatewise} Can we at least compute fixed points of separable functions via approximate Nash equilibria, using a reduction with only subexponential blow-up? The key idea is that, instead of Alice and Bob each picking one of the (exponentially many) points of the discretized hypercube $H_{{\epsilon}}$, each will pick [*only a single coordinate*]{} of points $x$ and $y$. Thus a pure strategy of Alice comprises an index $i \in [d]$ and a number $x_i \in [0,1]$ that is a multiple of ${\epsilon}$, and similarly Bob chooses $j \in [d]$ and $y_j \in [0,1]$. Given choices $(i,x_i)$ and $(j,y_j)$, Alice’s payoff is defined as $$\begin{cases} 1 - (x_i-y_i)^2 & \quad \text{if $i=j$} \\ 0 & \quad \text{if $i \neq j$} \\ \end{cases}$$ and Bob’s payoff is $$\begin{cases} 1 - (y_i-f_i(x_i))^2 & \quad \text{if $i=j$} \\ 0 & \quad \text{if $i \neq j$.} \\ \end{cases}$$ Thus Alice and Bob receive payoff 0 unless they “interact,” meaning choose the same coordinate to play in, in which case their payoffs are analogous to \[eq:apayoff3\] and \[eq:bpayoff3\]. Note that Bob’s payoff is well defined only because we have assumed that $f$ is separable (Bob only knows the coordinate $x_i$ proposed by Alice, but this is enough to compute the $i$th coordinate of the output of $f$ and hence his payoff). Each player has only $\approx \tfrac{d}{{\epsilon}}$ strategies, so this is a polynomial-time reduction, with no blow-up. The good news is that (approximate) fixed points give rise to (approximate) Nash equilibria of this game. 
Specifically, if $\hat{x}=\hat{y}=f(\hat{x})$ is a fixed point of $f$, then the following is a Nash equilibrium (as you should check): Alice and Bob pick their coordinates $i,j$ uniformly at random and set $x_i=\hat{x}_i$ and $y_j=\hat{y}_j$. The problem is that the game also has equilibria other than the intended ones, for example where Alice and Bob choose pure strategies with $i=j$ and $x_i=y_i=f_i(x_i)$. ### [[${\epsilon}$-BFP]{}]{}$\le {\epsilon\text{-}\mathsf{NE}}$ (Attempt \#3): Gluing Althöfer Games {#ss:althofer} Our second attempt failed because Alice and Bob were not forced to randomize their play over all $d$ coordinates. We can address this issue with a game-theoretic gadget called an [*Althöfer game*]{} [@A94].[^83] For a positive and even integer $k$, this $k \times \binom{k}{k/2}$ game is defined as follows. - Alice chooses an index $i \in [k]$. - Bob chooses a subset $S \sse [k]$ of size $k/2$. - Alice’s payoff is 1 if $i\in S$, and -1 otherwise. - Bob’s payoff is -1 if $i\in S$, and 1 otherwise. For example, here is the payoff matrix for the $k=4$ case (with only Alice’s payoffs shown): $$\left( \begin{array}{cccccc} 1 & 1 & 1 & -1 & -1 & -1\\ 1 & -1 & -1 & -1 & 1 & 1\\ -1 & 1 & -1 & 1 & -1 & 1\\ -1 & -1 & 1 & 1 & 1 & -1\\ \end{array} \right)$$ Every Althöfer game is a zero-sum game with value 0: for both players, choosing a uniformly random strategy guarantees expected payoff 0. The following claim proves a robust converse for Alice’s play. Intuitively, if Alice deviates much from the uniform distribution, Bob is well-positioned to punish her.[^84] \[claim:Althofer\] In every ${\epsilon}$-approximate Nash equilibrium of an Althöfer game, Alice’s strategy is ${\epsilon}$-close to uniformly random in statistical distance (a.k.a. total variation distance). Suppose that Alice plays strategy $i \in [k]$ with probability $p_i$. 
After sorting the coordinates so that $p_{i_1}\leq p_{i_2}\leq \cdots \leq p_{i_k}$, Bob’s best response is to play the subset $S=\{i_1,i_2,\ldots,i_{k/2}\}$. We must have either $p_{i_{k/2}} \leq 1/k$ or $p_{i_{k/2+1}}\geq 1/k$ (or both). Suppose that $p_{i_{k/2}} \leq 1/k$; the other case is similar. Bob’s expected payoff from playing $S$ is then: $$\begin{aligned} \sum_{j>k/2} p_{i_j} - \sum_{j\leq k/2} p_{i_j} &=& \sum_{j>k/2} (p_{i_j}-1/k) + \sum_{j\leq k/2} (1/k-p_{i_j})\\ &=& \sum_{j : p_{i_j}> 1/k} (p_{i_j}-1/k) + \sum_{j > k/2 : p_{i_j}\leq 1/k} (p_{i_j}-1/k) + \sum_{j\leq k/2} (1/k-p_{i_j})\\ &\geq& \sum_{j: p_{i_j}> 1/k} (p_{i_j}-1/k),\end{aligned}$$ where the last inequality holds because the $p_{i_j}$’s are sorted in increasing order and $p_{i_{k/2}} \leq 1/k$. The final expression above equals the statistical distance between Alice’s mixed strategy $\vec{p}$ and the uniform distribution. The claim now follows from the fact that Bob cannot achieve a payoff larger than ${\epsilon}$ in any ${\epsilon}$-approximate Nash equilibrium (otherwise, Alice could increase her expected payoff by more than ${\epsilon}$ by switching to the uniform distribution). In Claim \[claim:Althofer\], it’s important that the loss in statistical distance (as a function of ${\epsilon}$) is independent of the size $k$ of the game. For example, straightforward generalizations of rock-paper-scissors fail to achieve the guarantee in Claim \[claim:Althofer\]. ##### Gluing Games. We incorporate Althöfer games into our coordinatewise play game as follows. 
Let - $G_1 = \text{the $\tfrac{d}{{\epsilon}} \times \tfrac{d}{{\epsilon}}$ coordinatewise game of Section~\ref{ss:coordinatewise}}$; - $G_2 = \text{a $d\times {d \choose d/2}$ Alth\"ofer game;}$ and - $G_3 = \text{a ${d \choose d/2} \times d$ Alth\"ofer game, with the roles of Alice and Bob reversed.}$ Consider the following game, where Alice and Bob effectively play all three games simultaneously: - A pure strategy of Alice comprises an index $i\in [d]$, a multiple $x_i$ of ${\epsilon}$ in $[0,1]$, and a set $T\subseteq [d]$ of size $d/2$. The interpretation is that she plays $(i,x_i)$ in $G_1$, $i$ in $G_2$, and $T$ in $G_3$. - A pure strategy of Bob comprises an index $j\in [d]$, a multiple $y_j$ of ${\epsilon}$ in $[0,1]$, and a set $S\subseteq [d]$ of size $d/2$, interpreted as playing $(j,y_j)$ in $G_1$, $S$ in $G_2$ and $j$ in $G_3$. - Each player’s payoff is a weighted average of their payoffs in the three games: $\tfrac{1}{100} \cdot G_1 + \tfrac{99}{200}\cdot G_2 + \tfrac{99}{200}\cdot G_3$. The good news is that, in every exact Nash equilibrium of the combined game, Alice and Bob mix uniformly over their choices of $i$ and $j$. Intuitively, because deviating from the uniform strategy can be punished by the other player at a rate linear in the deviation (Claim \[claim:Althofer\]), it is never worth doing (no matter what happens in $G_1$). Given this, à la the McLennan-Tourky reduction (Theorem \[t:mt06\]), the $x_i$’s and $y_j$’s must correspond to a fixed point of $f$ (for each $i$, Alice must set $x_i$ to the center of mass of Bob’s distribution over $y_i$’s, and then Bob must set $y_i = f_i(x_i)$). The bad news is that this argument breaks down for ${\epsilon}$-approximate Nash equilibria with constant ${\epsilon}$. The reason is that, even when the distributions of $i$ and $j$ are perfectly uniform, the two players interact (i.e., choose $i=j$) only with probability $1/d$. 
This means that the contribution of the game $G_1$ to the expected payoffs is at most $1/d \ll {\epsilon}$, freeing the players to choose their $x_i$’s and $y_j$’s arbitrarily. Thus we need another idea to force Alice and Bob to interact more frequently. A second problem is that the sizes of the Althöfer games are too big—exponential in $d$ rather than in $\sqrt{d}$. ### [[${\epsilon}$-BFP]{}]{}$\le {\epsilon\text{-}\mathsf{NE}}$ (Attempt \#4): Blockwise Play {#ss:blockwise} To solve both of the problems with the third attempt, we force Alice and Bob to play larger sets of coordinates at a time. Specifically, we view $[d]$ as a $\sqrt{d} \times \sqrt{d}$ grid, and any $x,y\in [0,1]^d$ as $\sqrt{d}\times \sqrt{d}$ matrices. Now Alice and Bob will play a row and column of their matrices, respectively, and their payoffs will be determined by the entry where the row and column intersect. That is, we replace the coordinatewise game of Section \[ss:coordinatewise\] with the following [*blockwise game*]{}: - A pure strategy of Alice comprises an index $i\in \left[\sqrt{d}\right]$ and a row $x_{i*}\in [0,1]^{\sqrt{d}}$. (As usual, every $x_{ij}$ should be a multiple of ${\epsilon}$.) - A pure strategy of Bob comprises an index $j\in \left[\sqrt{d}\right]$ and a column $y_{*j}\in [0,1]^{\sqrt{d}}$. - Alice’s payoff in the outcome $(x_{i*},y_{*j})$ is $$1-(x_{ij}-y_{ij})^2.$$ - Bob’s payoff in the outcome $(x_{i*},y_{*j})$ is $$\label{eq:bpayoff4} 1-(y_{ij}-f_{ij}(x_{ij}))^2.$$ Now glue this game together with $k\times {k \choose k/2}$ and $\binom{k}{k/2} \times k$ Althöfer games with $k=\sqrt{d}$, as in Section \[ss:althofer\]. (For example, Alice’s index $i \in \left[\sqrt{d}\right]$ is identified with a row in the first Althöfer game, and now Alice also picks a subset $S \sse \left[\sqrt{d}\right]$ in the second Althöfer game, in addition to $i$ and $x_{i*}$.) 
This construction yields exactly what we want: a game of size $\exp({\tilde{O}}(k)) = \exp({\tilde{O}}(\sqrt{d}))$ in which every ${\epsilon}$-approximate Nash equilibrium can be easily translated to a $\delta$-approximate fixed point of $f$ (in the normalized $\ell_2$ norm), where $\delta$ depends only on ${\epsilon}$.[^85][^86] ### Beyond Separable Functions {#ss:nonseparable} We now know how to use an ${\epsilon}$-approximate Nash equilibrium of a subexponential-size game (with constant ${\epsilon}$) to compute a $\delta$-approximate fixed point of a function that is separable in the sense of . This is not immediately interesting, because a fixed point of a separable function is easy to find by doing binary search independently in each coordinate. The hard Brouwer functions identified in Theorem \[t:brouwer3\] have lots of nice properties, but they certainly aren’t separable. Conceptually, the rest of the proof of Theorem \[thm:Aviad\] involves pushing in two directions: first, identifying hard Brouwer functions that are even “closer to separable” than the functions in Theorem \[t:brouwer3\]; and second, extending the reduction in Section \[ss:blockwise\] to accommodate “close-to-separable” functions. We already have an intuitive feel for what the second step looks like, from Step 4 of our communication complexity lower bound (Section \[ss:step4\] in Solar Lecture 3), where we enlarged the strategy sets of the players so that they could smuggle “advice” about how to decode a hard Brouwer function $f$ at a given point. We conclude the lecture with one key idea for the further simplification of the hard Brouwer functions in Theorem \[t:brouwer3\]. ### [Local [[EoL]{}]{}]{} Recall the hard Brouwer functions constructed in our communication complexity lower bound (see Section \[s:ccbfp\]), which “follow the line” of an embedding of an [[EoL]{}]{}instance, as well as the additional tweaks needed to prove Theorem \[t:brouwer3\] (see Section \[ss:ppadbfp\]). 
We are interested in the “local decodability” properties of these functions. That is, if Bob needs to compute the $j$th coordinate of $f(x)$ (to evaluate the $j$th term in his payoff), how much does he need to know about $x$? For a separable function $f=(f_1,\ldots,f_d)$, he only needs to know $x_j$. For the hard Brouwer functions in Theorem \[t:brouwer3\], Bob needs to know whether or not $x$ is close to an edge (of the embedding of the succinct [[EoL]{}]{} instance into the hypercube) and, if so, which edge (or pair of edges, if $x$ is close to a vertex). Ultimately, this requires evaluating the successor circuit $S$ and predecessor circuit $P$ of the succinct [[EoL]{}]{} instance that defines the hard Brouwer function. It is therefore in our interest to force $S$ and $P$ to be as simple as possible, subject to the succinct [[EoL]{}]{} problem remaining ${\mathsf{PPAD}}$-complete. In a perfect world, minimal advice (say, $O(1)$ bits) would be enough to compute $S(v)$ and $P(v)$ from $v$.[^87] The following lemma implements this idea. It shows that a variant of the succinct [[EoL]{}]{} problem, called [Local [[EoL]{}]{}]{}, remains ${\mathsf{PPAD}}$-complete even when $S$ and $P$ are guaranteed to change only $O(1)$ bits of the input, and when $S$ and $P$ are $\mathrm{NC}^0$ circuits (and hence each output bit depends on only $O(1)$ input bits). \[l:local\] The following [Local [[EoL]{}]{}]{} problem is ${\mathsf{PPAD}}$-complete: 1. the vertex set $V$ is a subset of ${\{0,1\}}^n$, with membership in $V$ specified by a given $\mathrm{AC}^0$ circuit; 2. the successor and predecessor circuits $S,P$ are computable in $\mathrm{NC}^0$; 3. for every vertex $v \in V$, $S(v)$ and $P(v)$ differ from $v$ in $O(1)$ coordinates.
The proof idea is to start from the original circuits $S$ and $P$ of a succinct [[EoL]{}]{} instance and form circuits $S'$ and $P'$ that operate on partial computation transcripts, carrying out the computations performed by the circuits $S$ or $P$ one gate/line at a time (with $O(1)$ bits changing in each step of the computation). The vertex set $V$ then corresponds to the set of valid partial computation transcripts. The full proof is not overly difficult; see [@R16 Section 5] for the details. This reduction from succinct [[EoL]{}]{} to [Local [[EoL]{}]{}]{} can be implemented in linear time, so our assumption that the former problem admits no subexponential-time algorithm carries over to the latter problem. In the standard succinct [[EoL]{}]{} problem, every $n$-bit string $v \in {\{0,1\}}^n$ is a legitimate vertex. In the [Local [[EoL]{}]{}]{} problem, only elements of ${\{0,1\}}^n$ that satisfy the given $\mathrm{AC}^0$ circuit are legitimate vertices. In our reduction, we need to produce a game that also incorporates checking membership in $V$, with only a $d^{o(1)}$ blow-up in how much of $x$ we need to access. This is the reason why @R16 needs to develop customized PCP machinery in his proof of Theorem \[thm:Aviad\]. These PCP proofs can then be incorporated into the blockwise play game (Section \[ss:blockwise\]), analogous to how we incorporated a low-cost interactive protocol into the game in our reduction from [[2EoL]{}]{} to ${\epsilon\text{-}\mathsf{NE}}$ in Section \[ss:step4\]. How Computer Science Has Influenced Real-World Auction Design.\ Case Study: The 2016–2017 FCC Incentive Auction =============================================================== Preamble {#preamble-1} -------- Computer science is changing the way auctions are designed and implemented.
For over 20 years, the US and other countries have used [*spectrum auctions*]{} to sell licenses for wireless spectrum to the highest bidder. What’s different this decade, and what necessitated a new auction design, is that in the US the juiciest parts of the spectrum for next-generation wireless applications are already accounted for, owned by over-the-air television broadcasters. This led Congress to authorize the FCC in the fall of 2012 to design a novel auction (the [*FCC Incentive Auction*]{}) that would repurpose spectrum—procuring licenses from television broadcasters (a relatively low-value activity) and selling them to parties that would put them to better use (e.g., telecommunication companies who want to roll out the next generation of wireless broadband services). Thus the FCC Incentive Auction is really a [*double auction*]{}, comprising two stages: a *reverse auction*, where the government buys back licenses for spectrum from their current owners; and then a *forward auction*, where the government sells the procured licenses to the highest bidder. Computer science techniques played a crucial role in the design of the new reverse auction. The main aspects of the forward auction have been around a long time; here, theoretical computer science has contributed on the analysis side, and to understanding when and why such forward auctions work well. Sections \[s:reverse\] and \[s:forward\] give more details on the reverse and forward parts of the auction, respectively. The FCC Incentive Auction finished around the end of March 2017, and so the numbers are in. The government spent roughly 10 billion USD in the reverse part of the auction buying back licenses from television broadcasters, and earned roughly 20 billion USD of revenue in the forward auction. 
Most of the 10 billion USD profit was used to reduce the US debt![^88] Reverse Auction {#s:reverse} --------------- ### Descending Clock Auctions The reverse auction is the part of the FCC Incentive Auction that was totally new, and where computer science techniques played a crucial role in the design. The auction format, proposed by Milgrom and Segal [@MS20], is what’s called a [*descending clock auction*]{}. By design, the auction is very simple from the perspective of any one participant. The auction is iterative, and operates in rounds. In each round of the auction, each remaining broadcaster is asked a question of the form: “Would you or would you not be willing to sell your license for (say) 1 million dollars?” The broadcaster is allowed to say “no,” with the consequence of getting kicked out of the auction forevermore (the station will keep its license and remain on the air, and will receive no compensation from the government). The broadcaster is also allowed to say “yes” and accept the buyout offer. In the latter case, the government will not necessarily buy the license for 1 million dollars—in the next round, the broadcaster might get asked the same question, with a lower buyout price (e.g., 950,000 USD). If a broadcaster is still in the auction when it ends (more on how it ends in a second), then the government does indeed buy their license, at the most recent (and hence lowest) buyout offer. Thus all a broadcaster has to do is answer a sequence of “yes/no” questions for some decreasing sequence of buyout offers. The obvious strategy for a broadcaster is to formulate the lowest acceptable offer for their license, and to drop out of the auction once the buyout price drops below this threshold. The auction begins with very high buyout offers, so that every broadcaster would be ecstatic to sell their license at the initial price. 
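The round structure just described is easy to simulate. The sketch below is illustrative only: it uses a single uniform price and a simple count as the stopping rule, whereas the real auction used station-specific prices and the repacking feasibility check discussed below.

```python
def descending_clock_auction(thresholds, start_price, decrement, num_to_buy):
    """thresholds: station -> lowest acceptable buyout offer (private).
    Offer a descending uniform price; a station exits for good once the offer
    drops below her threshold. Stop when num_to_buy (or fewer) stations remain;
    they are bought out at the last price they accepted."""
    price = start_price
    remaining = dict(thresholds)
    while len(remaining) > num_to_buy:
        price -= decrement
        # each remaining station answers the yes/no question at the new price
        remaining = {s: t for s, t in remaining.items() if t <= price}
    return sorted(remaining), price
```

Note that each station's decision in each round depends only on its own threshold, which is what makes the format so simple from a participant's perspective.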
Intuitively, the auction then tries to reduce the buyout prices as much as possible, subject to clearing a target amount of spectrum. Spectrum is divided into channels which are blocks of 6 MHz each. For example, one could target broadcasters assigned to channels 38–51, and insist on clearing 10 out of these 14 channels (60 MHz overall).[^89] By “clearing a channel,” we mean clearing it [*nationwide*]{}. Of course, in the descending clock auction, bidders will drop out in an uncoordinated way—perhaps the first station to drop out is channel 51 in Arizona, then channel 41 in western Massachusetts, and so on. To clear several channels nationwide without buying out essentially everybody, it was essential for the government to use its power to [*reassign*]{} the channels of the stations that remain on the air. Thus while a station that drops out of the auction is guaranteed to retain its license, it is not guaranteed to retain its channel—a station broadcasting on channel 51 before the auction might be forced to broadcast on channel 41 after the auction. The upshot is that the auction maintains the invariant that the stations that have dropped out of the auction (and hence remain on the air) can be assigned channels so that at most a target number of channels are used (in our example, 4 channels). This is called the [*repacking problem*]{}. Naturally, two stations with overlapping broadcasting regions cannot be assigned the same channel (otherwise they would interfere with each other). See Figure \[f:stations\]. ![Different TV stations with overlapping broadcasting areas must be assigned different channels (indicated by shades of gray). 
Checking whether or not a given subset of stations can be assigned to a given number of channels without interference is an ${\mathsf{NP}}$-hard problem.[]{data-label="f:stations"}](stations){width=".8\textwidth"} ### Solving the Repacking Problem Any properly trained computer scientist will recognize the repacking problem as the ${\mathsf{NP}}$-complete graph coloring problem in disguise.[^90] For the proposed auction format to be practically viable, it must quickly solve the repacking problem. Actually, make that thousands of repacking problems every round of the auction![^91] The responsibility of quickly solving repacking problems fell to a team led by Kevin Leyton-Brown (see [@FNL17; @LMS17]). The FCC gave the team a budget of one minute per repacking problem, ideally with most instances solved within one second. The team’s approach was to build on state-of-the-art solvers for the satisfiability (SAT) problem. As you can imagine, it’s straightforward to translate an instance of the repacking problem into a SAT formula (even with the idiosyncratic constraints).[^92] Off-the-shelf SAT solvers did pretty well, but still timed out on too many representative instances.[^93] Leyton-Brown’s team added several new innovations, including taking advantage of problem structure specific to the application and implementing a number of caching techniques (reusing work done solving previous instances to quickly solve closely related new instances). In the end, they were able to solve more than 99% of the relevant repacking problems in under a minute. Hopefully the high-level point is clear: > without cutting-edge techniques for solving ${\mathsf{NP}}$-complete problems, [*the FCC would have had to use a different auction format*]{}. ### Reverse Greedy Algorithms One final twist: the novel reverse auction format motivates some basic algorithmic questions (and thus ideas flow from computer science to auction theory and back).
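Returning to the repacking check for a moment: stripped of the idiosyncratic interference constraints, it is exactly graph coloring, and a brute-force version fits in a few lines. This is an illustrative sketch only; as described above, the production approach encoded such instances as SAT and handed them to engineered solvers.

```python
def repackable(stations, conflicts, num_channels):
    """Can every station be assigned one of num_channels channels so that no
    two conflicting (mutually interfering) stations share a channel?
    Simple backtracking over stations in a fixed order."""
    order = list(stations)
    channel = {}

    def assign(idx):
        if idx == len(order):
            return True
        s = order[idx]
        for c in range(num_channels):
            if all(channel.get(t) != c for t in conflicts.get(s, ())):
                channel[s] = c
                if assign(idx + 1):
                    return True
                del channel[s]
        return False

    return assign(0)
```

Three pairwise-interfering stations are repackable into three channels but not into two, mirroring the triangle in graph coloring.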
We can think of the auction as an algorithm, a heuristic that tries to maximize the value of the stations that remain on the air, subject to clearing the target amount of spectrum. Milgrom and Segal [@MS20] prove that, ranging over all ways of implementing the auction (i.e., of choosing the sequences of descending prices), the corresponding algorithms are exactly the [ *reverse greedy algorithms*]{}.[^94] This result gives the first extrinsic reason to study the power and limitations of reverse greedy algorithms, a research direction explored by @DGR14 and @GMR17. Forward Auction {#s:forward} --------------- Computer science did not have an opportunity to influence the design of the forward auction used in the FCC Incentive Auction, which resembles the formats used over the past 20+ years. Still, the theoretical computer science toolbox turns out to be ideally suited for explaining when and why these auctions work well.[^95] ### Bad Auction Formats Cost Billions {#ss:bad} Spectrum auction design is stressful, because small mistakes can be extremely costly. One cautionary tale is provided by an auction run by the New Zealand government in 1990 (before governments had much experience with auctions). For sale were 10 essentially identical national licenses for television broadcasting. For some reason, lost to the sands of time, the government decided to sell these licenses by running 10 second-price auctions in parallel. A [*second-price*]{} or [*Vickrey*]{} auction for a single good is a sealed-bid auction that awards the item to the highest bidder and charges her the highest bid by someone else (the second-highest bid overall). When selling a single item, the Vickrey auction is often a good solution. In particular, each bidder has a dominant strategy (always at least as good as all alternatives), which is to bid her true maximum willingness-to-pay.[^96][^97] The nice properties of a second-price auction evaporate if many of them are run simultaneously. 
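For reference, the single-item Vickrey rule in code (a minimal sketch; ties broken by bidder name):

```python
def second_price_auction(bids):
    """bids: bidder -> sealed bid. The highest bidder wins and pays the
    second-highest bid, which is what makes truthful bidding a dominant
    strategy."""
    ranked = sorted(bids, key=lambda b: (-bids[b], b))
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price
```

With bids of \$100,000 and \$6 on the same license, the winner pays \$6, which is exactly the New Zealand pathology described next.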
A bidder can now submit up to one bid in each auction, with each license awarded to the highest bidder (on that license) at a price equal to the second-highest bid (on that license). With multiple simultaneous auctions, it is no longer clear how a bidder should bid. For example, imagine you want one of the licenses, but only one. How should you bid? One legitimate strategy is to pick one of the licenses—at random, say—and go for it. Another strategy is to bid less aggressively on multiple licenses, hoping that you get one at a bargain price, and that you don’t inadvertently win extra licenses that you don’t want. The difficulty is trading off the risk of winning too many licenses with the risk of winning too few. The challenge of bidding intelligently in simultaneous sealed-bid auctions makes the auction format prone to poor outcomes. The revenue in the 1990 New Zealand auction was only \$36 million, a paltry fraction of the projected \$250 million. On one license, the high bid was \$100,000 while the second-highest bid (and selling price) was \$6! On another, the high bid was \$7 million and the second-highest was \$5,000. To add insult to injury, the winning bids were made available to the public, who could then see just how much money was left on the table! ### Simultaneous Ascending Auctions Modern spectrum auctions are based on [*simultaneous ascending auctions (SAAs)*]{}, following 1993 proposals by McAfee and by Milgrom and Wilson. You’ve seen—in the movies, at least—the call-and-response format of an ascending single-item auction, where an auctioneer asks for takers at successively higher prices. Such an auction ends when there’s only one person left accepting the currently proposed price (who then wins, at this price). Conceptually, SAAs are like a bunch of single-item English auctions being run in parallel in the same room, with one auctioneer per item. 
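A single ascending (English) auction of the kind just described, in code. This is an illustrative sketch with a fixed bid increment and truthful bidders who stay in while the price is at most their value.

```python
def english_auction(values, increment=1.0):
    """Raise the price until at most one truthful bidder remains; she wins at
    the final price, which lands near the second-highest value."""
    price = 0.0
    active = set(values)
    while len(active) > 1:
        price += increment
        active = {b for b in active if values[b] >= price}
    winner = active.pop() if active else None  # None if the last bidders drop together
    return winner, price
```

An SAA, conceptually, runs one such loop per item in parallel, with bidders free to move between items as prices evolve.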
The primary reason that SAAs work better than sequential or sealed-bid auctions is [*price discovery*]{}. As a bidder acquires better information about the likely selling prices of licenses, she can implement mid-course corrections—abandoning licenses for which competition is fiercer than anticipated, snapping up unexpected bargains, and rethinking which packages of licenses to assemble. The format typically resolves the miscoordination problems that plague simultaneous sealed-bid auctions. ### Inefficiency in SAAs {#ss:exposure} SAAs have two big vulnerabilities. The first problem is [*demand reduction*]{}, and this is relevant even when items are substitutes.[^98] Demand reduction occurs when a bidder asks for fewer items than she really wants, to lower competition and therefore the prices paid for the items that she gets. To illustrate, suppose there are two identical items and two bidders. By the [*valuation*]{} of a bidder for a given bundle of items, we mean her maximum willingness to pay for that bundle. Suppose the first bidder has valuation 10 for one of the items and valuation 20 for both. The second bidder has valuation 8 for one of the items and does not want both (i.e., her valuation remains 8 for both). The socially optimal outcome is to give both licenses to the first bidder. Now consider how things play out in an SAA. The second bidder would be happy to have either item at any price less than 8. Thus, the second bidder drops out only when the prices of both items exceed 8. If the first bidder stubbornly insists on winning both items, her utility is $20-16=4$. An alternative strategy for the first bidder is to simply concede the second item and never bid on it. The second bidder takes the second item and (because she only wants one license) withdraws interest in the first, leaving it for the first bidder. Both bidders get their item essentially for free, and the utility of the first bidder has jumped to 10.
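The arithmetic of the demand-reduction story can be checked directly (values and drop-out prices exactly as in the example above):

```python
# Bidder 1 values one item at 10 and both at 20; bidder 2 values one item at 8.
v1_one, v1_both, v2_one = 10, 20, 8

# Fight for both items: bidder 2 only exits once both prices reach her value 8,
# so bidder 1 ends up paying 8 for each item.
utility_fight = v1_both - 2 * v2_one   # 20 - 16 = 4

# Concede one item: each bidder wins one item at a price of (essentially) 0.
utility_concede = v1_one - 0           # 10

assert utility_concede > utility_fight
```

The gap between 10 and 4 is what makes underbidding, at the expense of social welfare, individually rational here.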
The second big problem with SAAs is relevant when items can be complements, and is called the [*exposure problem*]{}.[^99] As an example, consider two bidders and two nonidentical items. The first bidder only wants both items—they are complementary items for the bidder—and her valuation is 100 for them (and 0 for anything else). The second bidder is willing to pay 75 for either item but only wants one item. The socially optimal outcome is to give both items to the first bidder. But in an SAA, the second bidder will not drop out until the price of both items reaches 75. The first bidder is in a no-win situation: to get both items she would have to pay 150, more than her value. The scenario of winning only one item for a nontrivial price could be even worse. Thus the exposure problem leads to economically inefficient allocations for two reasons. First, an overly aggressive bidder might acquire unwanted items. Second, an overly tentative bidder might fail to acquire items for which she has the highest valuation. ### When Do SAAs Work Well? {#ss:when} If you ask experts who design or consult for bidders in real-world SAAs, a rough consensus emerges about when they are likely to work well. Without strong complements, [*SAAs work pretty well. Demand reduction does happen, but it is not a deal-breaker because the loss of efficiency appears to be small.*]{} With strong complements, [*simple auctions like SAAs are not good enough. The exposure problem is a deal-breaker because it can lead to very poor outcomes (in terms of both economic efficiency and revenue).*]{} There are a number of beautiful and useful theoretical results about spectrum auctions in the economics literature, but none map cleanly to these two folklore beliefs. A possible explanation: translating these beliefs into theorems seems to fundamentally involve approximate optimality guarantees, a topic that is largely avoided by economists but right in the wheelhouse of theoretical computer science. 
In the standard model of [*combinatorial auctions*]{}, there are $n$ bidders (e.g., telecoms) and $m$ items (e.g., licenses).[^100] Bidder $i$ has a nonnegative valuation $v_i(S)$ for each subset $S$ of items she might receive. Note that, in general, describing a bidder’s valuation function requires $2^m$ parameters. Each bidder wants to maximize her utility, which is the value of the items received minus the total price paid for them. From a social perspective, we’d like to award bundles of items $T_1,\ldots,T_n$ to the bidders to maximize the [*social welfare*]{} $\sum_{i=1}^n v_i(T_i)$. To make the first folklore belief precise, we need to commit to a definition of “without strong complements” and to a specific auction format. We’ll focus on simultaneous first-price auctions (S1As), where each bidder submits a separate bid for each item, each item is awarded to its highest bidder, and winning bidders pay their bid on each item won.[^101] One relatively permissive definition of “complement-free” is to restrict bidders to have [*subadditive valuations*]{}. This means what it sounds like: if $A$ and $B$ are two bundles of items, then bidder $i$’s valuation $v_i(A \cup B)$ for their union should be at most the sum $v_i(A)+v_i(B)$ of her valuations for each bundle separately. Observe that subadditivity is violated in the exposure problem example in Section \[ss:exposure\]. We also need to define what we mean by “the outcome of an auction” like S1As. Remember that bidders are strategic, and will bid to maximize their utility (value of items won minus the price paid). Thus we should prove approximation guarantees for the [*equilibria*]{} of auctions. Happily, computer scientists have been working hard since 1999 to prove approximation guarantees for game-theoretic equilibria, also known as bounds on [*the price of anarchy*]{} [@KP99; @book; @RT00].[^102] In the early days, price-of-anarchy bounds appeared somewhat ad hoc and problem-specific.
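In code, the subadditivity condition reads as follows (a brute-force check, feasible only for small $m$; the encoding of a valuation as a dict over frozensets is our own choice):

```python
from itertools import chain, combinations

def bundles(items):
    """All subsets of the item set, as frozensets."""
    items = list(items)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1)))

def is_subadditive(v, items):
    """Check v(A | B) <= v(A) + v(B) for every pair of bundles A, B."""
    all_bundles = list(bundles(items))
    return all(v[A | B] <= v[A] + v[B] for A in all_bundles for B in all_bundles)
```

The exposure-problem valuation of Section \[ss:exposure\] (value only for the full pair of items) fails this check, while any additive valuation passes.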
Fast forwarding to the present, we now have a powerful and user-friendly theory for proving price-of-anarchy bounds, which combine “extension theorems” and “composition theorems” to build up bounds for complex settings (including S1As) from bounds for simple settings.[^103] In particular, @FFGL13 proved the following translation of Folklore Belief \#1.[^104] \[t:ffgl\] When every bidder has a subadditive valuation, every equilibrium of an S1A has social welfare at least 50% of the maximum possible. One version of Theorem \[t:ffgl\] concerns (mixed) Nash equilibria in the full-information model (in which bidders’ valuations are common knowledge), as studied in the Solar Lectures. Even here, the bound in Theorem \[t:ffgl\] is tight in the worst case [@CKST16]. The approximation guarantee in Theorem \[t:ffgl\] holds more generally for [*Bayes-Nash equilibria*]{}, the standard equilibrium notion for games of incomplete information.[^105] Moving on to the second folklore belief, let’s now drop the subadditivity restriction. S1As no longer work well. \[t:hkmn\] When bidders have arbitrary valuations, an S1A can have a mixed Nash equilibrium with social welfare arbitrarily smaller than the maximum possible. Thus for S1As, the perspective of worst-case approximation confirms the dichotomy between the cases of substitutes and complements. But the lower bound in Theorem \[t:hkmn\] applies only to one specific auction format. Could we do better with a different natural auction format? Folklore Belief \#2 asserts the stronger statement that [*no*]{} “simple” auction works well with general valuations. This stronger statement can also be translated into a theorem (using nondeterministic communication complexity), and this will be the main subject of Lunar Lecture 2. \[t:condpoa\] With general valuations, *every* simple auction can have an equilibrium with social welfare arbitrarily smaller than the maximum possible. 
The definition of “simple” used in Theorem \[t:condpoa\] is quite generous: it requires only that the number of strategies available to each player is [*sub-doubly-exponential*]{} in the number of items $m$. For example, running separate single-item auctions provides each player with only an exponential (in $m$) number of strategies (assuming a bounded number of possible bid values for each item). Thus Theorem \[t:condpoa\] makes use of the theoretical computer science toolbox to provide solid footing for Folklore Belief \#2. Communication Barriers to Near-Optimal Equilibria ================================================= This lecture is about the communication complexity of the welfare-maximization problem in combinatorial auctions and its implications for the price of anarchy of simple auctions. Section \[s:camodel\] defines the model, Section \[s:cclb\] proves lower bounds for nondeterministic communication protocols, and Section \[s:condpoa\] gives a black-box translation of these lower bounds to equilibria of simple auctions. In particular, Section \[s:condpoa\] provides the proof of Theorem \[t:condpoa\] from last lecture. Section \[s:open\] concludes with a juicy open problem on the topic.[^106] Welfare Maximization in Combinatorial Auctions {#s:camodel} ---------------------------------------------- Recall from Section \[ss:when\] the basic setup in the study of combinatorial auctions. 1. There are $k$ players. (In a spectrum auction, these are the telecoms.) 2. There is a set $M$ of $m$ items. (In a spectrum auction, these are the licenses.) 3. Each player $i$ has a [*valuation*]{} $v_i:2^M \rightarrow {{\mathbb R}}_+$. The number $v_i(T)$ indicates $i$’s value, or willingness to pay, for the items $T \subseteq M$. The valuation is the private input of player $i$, meaning that $i$ knows $v_i$ but none of the other $v_j$’s. (I.e., this is a number-in-hand model.) 
We assume that $v_i(\emptyset) = 0$ and that the valuations are [*monotone*]{}, meaning $v_i(S) \le v_i(T)$ whenever $S \subseteq T$. (The more items, the better.) To avoid bit complexity issues, we’ll also assume that all of the $v_i(T)$’s are integers with description length polynomial in $k$ and $m$. We sometimes impose additional restrictions on the valuations to study special cases of the general problem. Note that we may have more than two players—more than just Alice and Bob. (For example, you might want to think of $k$ as $\approx m^{1/3}$.) Also note that the description length of a player’s valuation is exponential in the number of items $m$. In the [*welfare-maximization problem*]{}, the goal is to partition the items $M$ into sets $T_1,\ldots,T_k$ to maximize, at least approximately, the social welfare $$\label{eq:welfare} \sum_{i=1}^k v_i(T_i),$$ using communication polynomial in $k$ and $m$. Note that this amount of communication is polylogarithmic in the sizes of the private inputs. Maximizing social welfare \[eq:welfare\] is the most commonly studied objective in combinatorial auctions, and it is the one we will focus on in this lecture. Communication Lower Bounds for Approximate Welfare Maximization {#s:cclb} --------------------------------------------------------------- This section studies the communication complexity of computing an approximately welfare-maximizing allocation in a combinatorial auction. For reasons that will become clear in Section \[s:condpoa\], we are particularly interested in the problem’s nondeterministic communication complexity.[^107] ### Lower Bound for General Valuations We begin with a result of Nisan [@Nis02] showing that, alas, computing even a very weak approximation of the welfare-maximizing allocation requires exponential communication. To make this precise, it is convenient to turn the optimization problem of welfare maximization into a decision problem.
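As a baseline, with communication constraints ignored, the optimization problem itself is conceptually trivial: brute force over all $k^m$ allocations. A sketch (useless beyond tiny instances, which is precisely the point):

```python
from itertools import product

def max_welfare(valuations, m):
    """valuations: list of k functions mapping frozenset-of-items -> value.
    Award each of the m items to one of the k players in every possible way
    and return the best achievable social welfare."""
    k = len(valuations)
    best = 0
    for owner in product(range(k), repeat=m):
        bundles = [frozenset(j for j in range(m) if owner[j] == i) for i in range(k)]
        best = max(best, sum(v(T) for v, T in zip(valuations, bundles)))
    return best
```

The question below is how well this objective can be approximated with communication polynomial in $k$ and $m$, rather than with exhaustive search.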
In the [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} problem, the goal is to correctly identify inputs that fall into one of the following two cases: - (1) Every partition $(T_1,\ldots,T_k)$ of the items has welfare at most 1. - (0) There exists a partition $(T_1,\ldots,T_k)$ of the items with welfare at least $k$. Arbitrary behavior is permitted on inputs that fail to satisfy either (1) or (0). Clearly, communication lower bounds for [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} apply to the more general problem of obtaining a better-than-$k$-approximation of the maximum welfare.[^108] \[thm\_nisan\] The nondeterministic communication complexity of [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} is\ $\exp \{ \Omega(m/k^2) \}$, where $k$ is the number of players and $m$ is the number of items. This lower bound is exponential in $m$, provided that $m = \Omega(k^{2+{\epsilon}})$ for some ${\epsilon}> 0$. Since communication complexity lower bounds apply even to players who cooperate perfectly, this impossibility result holds even when all of the (tricky) incentive issues are ignored. ### The [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{} Problem The plan for the proof of Theorem \[thm\_nisan\] is to reduce a multi-party version of the [[Disjointness]{}]{} problem to the [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} problem. There is some ambiguity about how to define a version of [[Disjointness]{}]{} for three or more players. For example, suppose there are three players, and among the three possible pairings of them, two have disjoint sets while the third has intersecting sets. Should this count as a “yes” or “no” instance?
We’ll skirt this issue by worrying only about unambiguous inputs, which are either “totally disjoint” or “totally intersecting.” Formally, in the [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{} problem, each of the $k$ players $i$ holds an input $\bfx_i \in {\{0,1\}}^n$. (Equivalently, a set $S_i \subseteq \{1,2,\ldots,n\}$.) The task is to correctly identify inputs that fall into one of the following two cases: - (1) “Totally disjoint,” with $S_i \cap S_{i'} = \emptyset$ for every $i \neq i'$. - (0) “Totally intersecting,” with $\cap_{i=1}^k S_i \neq \emptyset$. When $k=2$, this is the standard [[Disjointness]{}]{} problem. When $k > 2$, there are inputs that are neither 1-inputs nor 0-inputs. We let protocols off the hook on such ambiguous inputs—they can answer “1” or “0” with impunity. The following communication complexity lower bound for [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{} is credited to Jaikumar Radhakrishnan and Venkatesh Srinivasan in [@Nis02]. (The proof is elementary, and for completeness is given in Section \[s:mdisj\].) \[t:mdisj\] The nondeterministic communication complexity of [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}, with $k$ players with $n$-bit inputs, is $\Omega(n/k)$. This nondeterministic lower bound is for verifying a 1-input. (It is easy to verify a 0-input—the prover just suggests the index of an element $r$ in $\cap_{i=1}^k S_i$.)[^109] ### Proof of Theorem \[thm\_nisan\] The proof of Theorem \[thm\_nisan\] relies on Theorem \[t:mdisj\] and a combinatorial gadget. We construct this gadget using the probabilistic method. Consider $t$ random partitions $P^1,\ldots,P^t$ of $M$, where $t$ is a parameter to be defined later. By a random partition $P^j = (P^j_1,\ldots,P^j_k)$, we mean that each of the $m$ items is assigned to exactly one of the $k$ players, independently and uniformly at random.
We are interested in the probability that two classes of different partitions intersect: for all $i \neq i'$ and $j \neq \ell$, because the probability that a given item is assigned to $i$ in $P^j$ and also to $i'$ in $P^{\ell}$ is $\tfrac{1}{k^2}$, we have $$\mathbf{Pr}\!\left[P^j_i \cap P^{\ell}_{i'} = \emptyset\right] = \left( 1 - \frac{1}{k^2} \right)^m \le e^{-m/k^2}.$$ Taking a union bound over the $k$ choices for $i$ and $i'$ and the $t$ choices for $j$ and $\ell$, we have $$\label{eq:int} \mathbf{Pr}\!\left[\exists i \neq i', j \neq \ell \text{ s.t.\ } P^j_i \cap P^{\ell}_{i'} = \emptyset\right] \le k^2t^2e^{-m/k^2}.$$ Call $P^1,\ldots,P^t$ an [*intersecting family*]{} if $P^j_i \cap P^{\ell}_{i'} \neq \emptyset$ whenever $i \neq i'$, $j \neq \ell$. By \[eq:int\], the probability that our random experiment fails to produce an intersecting family is less than 1 provided $t < \tfrac{1}{k}e^{m/2k^2}$. The following lemma is immediate. \[l:int\] For every $m,k \ge 1$, there exists an intersecting family of partitions $P^1,\ldots,P^t$ with $t = \exp \{ \Omega(m/k^2) \}$. A simple combination of Theorem \[t:mdisj\] and Lemma \[l:int\] now proves Theorem \[thm\_nisan\]. (of Theorem \[thm\_nisan\]) The proof is a reduction from [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}. Fix $k$ and $m$. (To be interesting, $m$ should be significantly bigger than $k^2$.) Let $(S_1,\ldots,S_k)$ denote an input to [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{} with $t$-bit inputs, where $t = \exp \{ \Omega(m/k^2) \}$ is the same value as in Lemma \[l:int\]. We can assume that the players have coordinated in advance on an intersecting family of $t$ partitions of a set $M$ of $m$ items.
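The intersecting-family condition, and the random experiment behind Lemma \[l:int\], are easy to state in code (a sketch; the encoding of a partition as a list mapping items to players is our own):

```python
import random

def random_partitions(m, k, t, seed=0):
    """t independent random partitions of items {0,...,m-1}: partitions[j][item]
    is the player to whom the item belongs in partition P^j."""
    rng = random.Random(seed)
    return [[rng.randrange(k) for _ in range(m)] for _ in range(t)]

def is_intersecting_family(partitions, k):
    """Check that P^j_i and P^l_{i'} share an item whenever i != i' and j != l."""
    t = len(partitions)
    return all(
        any(pj == i and pl == i2 for pj, pl in zip(partitions[j], partitions[l]))
        for j in range(t) for l in range(t) if j != l
        for i in range(k) for i2 in range(k) if i != i2
    )
```

For $m$ large relative to $k^2 \log(kt)$, a random draw passes this check with overwhelming probability, which is all the probabilistic method needs.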
Each player $i$ uses this family and her input $S_i$ to form the following valuation: $$v_i(T) = \left \{ \begin{array}{cl} 1 & \text{if $T \supseteq P^j_i$ for some $j \in S_i$}\\ 0 & \text{otherwise.} \end{array} \right.$$ That is, player $i$ is either happy (value 1) or unhappy (value 0), and is happy if and only if she receives all of the items in the corresponding class $P^j_i$ of some partition $P^j$ with index $j$ belonging to her input to [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}. The valuations $v_1,\ldots,v_k$ define an input to [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{}. Forming this input requires no communication between the players. Consider the case where the input to [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{} is a 1-input, with $S_i \cap S_{i'} = \emptyset$ for every $i \neq i'$. We claim that the induced input to [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} is a 1-input, with maximum welfare at most 1. To see this, consider a partition $(T_1,\ldots,T_k)$ in which some player $i$ is happy (with $v_i(T_i) = 1$). For some $j \in S_i$, player $i$ receives all the items in $P^j_i$. Since $j \not\in S_{i'}$ for every $i' \neq i$, the only way to make a second player $i'$ happy is to give her all the items in $P^{\ell}_{i'}$ in some other partition $P^{\ell}$ with $\ell \in S_{i'}$ (and hence $\ell \neq j$). Since $P^1,\ldots,P^t$ is an intersecting family, this is impossible — $P^j_i$ and $P^{\ell}_{i'}$ overlap for every $\ell \neq j$. When the input to [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{} is a 0-input, with an element $r$ in the mutual intersection $\cap_{i=1}^k S_i$, we claim that the induced input to [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} is a 0-input, with maximum welfare at least $k$.
This is easy to see: for $i=1,2,\ldots,k$, assign the items of $P^r_i$ to player $i$. Since $r \in S_i$ for every $i$, this makes all $k$ players happy. This reduction shows that a (deterministic, nondeterministic, or randomized) protocol for [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} yields one for [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}(with $t$-bit inputs) with the same communication. We conclude that the nondeterministic communication complexity of [<span style="font-variant:small-caps;">Welfare-Maximization</span>($k$)]{} is $\Omega(t/k) = \exp \{ \Omega(m/k^2) \}$. ### Subadditive Valuations To an algorithms person, Theorem \[thm\_nisan\] is depressing, as it rules out any non-trivial positive results. A natural idea is to seek positive results by imposing additional structure on players’ valuations. Many such restrictions have been studied. We consider here the case of [*subadditive*]{} valuations (see also Section \[ss:when\] of the preceding lecture), where each $v_i$ satisfies $v_i(S \cup T) \le v_i(S) + v_i(T)$ for every pair $S,T \subseteq M$. Our reduction in Theorem \[thm\_nisan\] easily implies a weaker inapproximability result for welfare maximization with subadditive valuations. Formally, define the [<span style="font-variant:small-caps;">Welfare-Maximization</span>($2$)]{} problem as that of identifying inputs that fall into one of the following two cases: - Every partition $(T_1,\ldots,T_k)$ of the items has welfare at most $k+1$. - There exists a partition $(T_1,\ldots,T_k)$ of the items with welfare at least $2k$. Communication lower bounds for [<span style="font-variant:small-caps;">Welfare-Maximization</span>($2$)]{} apply also to the more general problem of obtaining a better-than-$2$-approximation of the maximum social welfare. 
\[t:wm2\] The nondeterministic communication complexity of [<span style="font-variant:small-caps;">Welfare-Maximization</span>($2$)]{} is $\exp \{ \Omega(m/k^2) \}$, even when all players have subadditive valuations. This theorem follows from a modification of the proof of Theorem \[thm\_nisan\]. The 0-1 valuations used in that proof are not subadditive, but they can be made subadditive by adding 1 to each bidder’s valuation $v_i(T)$ of each non-empty set $T$. The maximum social welfare in the inputs corresponding to 1- and 0-inputs of [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}becomes $k+1$ and $2k$, respectively, and this completes the proof of Theorem \[t:wm2\]. There is also a quite non-trivial deterministic and polynomial-communication protocol that guarantees a 2-approximation of the social welfare when bidders have subadditive valuations [@F06]. Lower Bounds on the Price of Anarchy of Simple Auctions {#s:condpoa} ------------------------------------------------------- The lower bounds of the previous section show that every protocol for the welfare-maximization problem that interacts with the players and then explicitly computes an allocation has either a bad approximation ratio or high communication cost. Over the past decade, many researchers have considered shifting the work from the protocol to the players, by analyzing the equilibria of simple auctions. Can such equilibria bypass the communication complexity lower bounds proved in Section \[s:cclb\]? The answer is not obvious, because equilibria are defined non-constructively, and not through a low-cost communication protocol. ### Auctions as Games What do we mean by a “simple” auction? For example, recall the [ *simultaneous first-price auctions (S1As)*]{} introduced in Section \[ss:when\] of the preceding lecture.
Each player $i$ chooses a strategy $b_{i1},\ldots,b_{im}$, with one bid per item.[^110] Each item is sold separately in parallel using a “first-price auction”—the item is awarded to the highest bidder on that item, with the selling price equal to that bidder’s bid.[^111] The payoff of a player $i$ in a given outcome (i.e., given a choice of strategy for each player) is then her utility: $$\underbrace{v_i(T_i)}_{\text{value of items won}} - \underbrace{\sum_{j \in T_i} b_{ij}}_{\text{price paid for them}},$$ where $T_i$ denotes the items on which $i$ is the highest bidder (given the bids of the others). Bidders must strategize even in a first-price auction for a single item—a bidder certainly doesn’t want to bid her actual valuation (this would guarantee utility 0), and instead will “shade” her bid down to a lower value. (How much to shade is a tricky question, and depends on what the other bidders are doing.) Thus it makes sense to assess the performance of an auction by its equilibria. As usual, a Nash equilibrium comprises a (randomized) strategy for each player, so that no player can increase her expected payoff through a unilateral deviation to some other strategy (given how the other players are bidding). ### The Price of Anarchy {#ss:poa} So how good are the equilibria of various auction games, such as S1As? To answer this question, we use an analog of the approximation ratio, adapted for equilibria.
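Returning to the S1A outcome computation described above: it is straightforward to simulate directly from the bids. A sketch (function names are ours; ties broken toward the lower-indexed player for concreteness):

```python
def s1a_outcome(bids, valuations):
    """Simulate a simultaneous first-price auction.

    bids[i][j] is player i's bid on item j; each item goes to its highest
    bidder (ties toward the lower index), who pays her bid for it.
    """
    k, m = len(bids), len(bids[0])
    won = [set() for _ in range(k)]
    for j in range(m):
        winner = max(range(k), key=lambda i: (bids[i][j], -i))
        won[winner].add(j)
    utilities = [valuations[i](won[i]) - sum(bids[i][j] for j in won[i])
                 for i in range(k)]
    return won, utilities
```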
Given a game $G$ (like an S1A) and a nonnegative maximization objective function $f$ on the outcomes (like the social welfare), @KP99 defined the [*price of anarchy (POA)*]{} of $G$ as the ratio between the objective function value of an optimal solution, and that of the worst equilibrium: $$\mathsf{PoA}(G):= \frac{f(OPT(G))}{\min_{\text{$\rho$ is an equilibrium of $G$}} f(\rho)},$$ where $OPT(G)$ denotes the optimal outcome of $G$ (with respect to $f$).[^112] Thus the price of anarchy of a game quantifies the inefficiency of selfish behavior.[^113] The POA of a game and a maximization objective function is always at least 1. We can identify “good performance” of a system with strategic participants as having a POA close to 1.[^114] The POA depends on the choice of equilibrium concept. For example, the POA with respect to approximate Nash equilibria can only be worse (i.e., bigger) than for exact Nash equilibria (since there are only more of the former). ### The Price of Anarchy of S1As As we saw in Theorem \[t:ffgl\] of the preceding lecture, the equilibria of simple auctions like S1As can be surprisingly good.[^115] We restate that result here.[^116] \[t:ffgl2\] In every S1A with subadditive bidder valuations, the POA is at most 2. This result is particularly impressive because achieving an approximation factor of 2 for the welfare-maximization problem with subadditive bidder valuations by any means (other than brute-force search) is not easy (see [@F06]). As mentioned last lecture, a recent result shows that the analysis of [@FFGL13] is tight. \[t:ckst14\] The worst-case POA of S1As with subadditive bidder valuations is at least 2. The proof of Theorem \[t:ckst14\] is an ingenious explicit construction—the authors exhibit a choice of subadditive bidder valuations and a Nash equilibrium of the corresponding S1A so that the welfare of this equilibrium is only half of the maximum possible. 
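To make the price-of-anarchy definition concrete, here is a brute-force PoA computation for a tiny normal-form game (the classic Prisoner's Dilemma rather than an auction, with welfare equal to the sum of payoffs; the helper names are ours, and for simplicity only pure equilibria are enumerated, while the definition above allows mixed ones):

```python
import itertools

def pure_nash_equilibria(strategies, payoff):
    """All pure profiles at which no player gains by a unilateral deviation."""
    eqs = []
    for prof in itertools.product(*strategies):
        if all(payoff(i, prof) >= payoff(i, prof[:i] + (d,) + prof[i + 1:])
               for i in range(len(strategies)) for d in strategies[i]):
            eqs.append(prof)
    return eqs

def price_of_anarchy(strategies, payoff):
    """PoA w.r.t. social welfare (= sum of payoffs), over pure Nash equilibria."""
    def welfare(prof):
        return sum(payoff(i, prof) for i in range(len(strategies)))
    opt = max(welfare(p) for p in itertools.product(*strategies))
    return opt / min(welfare(p) for p in pure_nash_equilibria(strategies, payoff))

# classic Prisoner's Dilemma: the only equilibrium (D, D) has welfare 2, OPT is 4
PAYOFFS = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
           ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}
assert price_of_anarchy([('C', 'D'), ('C', 'D')],
                        lambda i, prof: PAYOFFS[prof][i]) == 2.0
```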
One reason that proving results like Theorem \[t:ckst14\] is challenging is that it can be difficult to solve for a (bad) equilibrium of a complex game like a S1A. ### Price-of-Anarchy Lower Bounds from Communication Complexity Theorem \[t:ffgl2\] motivates an obvious question: can we do better? Theorem \[t:ckst14\] implies that the analysis in [@FFGL13] cannot be improved, but can we reduce the POA by considering a different auction? Ideally, the auction would still be “reasonably simple” in some sense. Alternatively, perhaps no “simple” auction could be better than S1As? If this is the case, it’s not clear how to prove it directly—proving lower bounds via explicit constructions auction-by-auction does not seem feasible. Perhaps it’s a clue that the POA upper bound of 2 for S1As (Theorem \[t:ffgl2\]) gets stuck at the same threshold for which there is a lower bound for protocols that use polynomial communication (Theorem \[t:wm2\]). It’s not clear, however, that a lower bound for low-communication protocols has anything to do with equilibria. Can we extract a low-communication protocol from an equilibrium? \[t:condpoa2\] Fix a class $\V$ of possible bidder valuations. Suppose that, for some $\alpha \ge 1$, there is no nondeterministic protocol with subexponential (in $m$) communication for the 1-inputs of the following promise version of the welfare-maximization problem with bidder valuations in $\V$: - Every allocation has welfare at most $W^*/\alpha$. - There exists an allocation with welfare at least $W^*$. Let ${\epsilon}$ be bounded below by some inverse polynomial function of $k$ and $m$. Then, for every auction with sub-doubly-exponential (in $m$) strategies per player, the worst-case POA of ${\epsilon}$-approximate Nash equilibria with bidder valuations in $\V$ is at least $\alpha$. 
Theorem \[t:condpoa2\] says that lower bounds for nondeterministic protocols carry over to all “sufficiently simple” auctions, where “simplicity” is measured by the number of strategies available to each player. These POA lower bounds follow automatically from communication complexity lower bounds, and do not require any new explicit constructions. To get a feel for the simplicity constraint, note that S1As with integral bids between 0 and $B$ have $(B+1)^m$ strategies per player—singly exponential in $m$. On the other hand, in a “direct-revelation” auction, where each bidder is allowed to submit a bid on each bundle $S \subseteq M$ of items, each player has a doubly-exponential (in $m$) number of strategies.[^117] The POA lower bound promised by Theorem \[t:condpoa2\] is only for approximate Nash equilibria; since the POA is a worst-case measure and the set of ${\epsilon\text{-}\mathsf{NE}}$ is nondecreasing with ${\epsilon}$, this is weaker than a lower bound for exact Nash equilibria. It is an open question whether or not Theorem \[t:condpoa2\] holds also for the POA of exact Nash equilibria.[^118] Theorem \[t:condpoa2\] has a number of interesting corollaries. First, consider the case where $\V$ is the set of subadditive valuations. Since S1As have only a singly-exponential (in $m$) number of strategies per player, Theorem \[t:condpoa2\] applies to them. Thus, combining it with Theorem \[t:wm2\] recovers the POA lower bound of Theorem \[t:ckst14\]—modulo the exact vs. approximate Nash equilibria issue—and shows the optimality of the upper bound in Theorem \[t:ffgl2\] without an explicit construction. Even more interestingly, this POA lower bound of 2 applies not only to S1As, but more generally to all auctions in which each player has a sub-doubly-exponential number of strategies. Thus, S1As are in fact [*optimal*]{} among the class of all such auctions when bidders have subadditive valuations (w.r.t. 
the worst-case POA of ${\epsilon}$-approximate Nash equilibria). We can also take $\V$ to be the set of all (monotone) valuations, and then combine Theorem \[t:condpoa2\] with Theorem \[thm\_nisan\] to deduce that no “simple” auction gives a non-trivial (i.e., better-than-$k$) approximation for general bidder valuations. We conclude that with general valuations, complexity is essential to any auction format that offers good equilibrium guarantees. This completes the proof of Theorem \[t:condpoa\] from the preceding lecture and formalizes the second folklore belief in Section \[ss:when\]; we restate that result here. With general valuations, *every* simple auction can have equilibria with social welfare arbitrarily worse than the maximum possible. ### Proof of Theorem \[t:condpoa2\] Presumably, the proof of Theorem \[t:condpoa2\] extracts a low-communication protocol from a good POA bound. The hypothesis of Theorem \[t:condpoa2\] offers the clue that we should be looking to construct a nondeterministic protocol. So what could we use an all-powerful prover for? We’ll see that a good role for the prover is to suggest a Nash equilibrium to the players. Unfortunately, it can be too expensive for the prover to write down the description of a Nash equilibrium, even in S1As. Recall that a mixed strategy is a distribution over pure strategies, and that each player has an exponential (in $m$) number of pure strategies available in a S1A. Specifying a Nash equilibrium thus requires an exponential number of probabilities. To circumvent this issue, we resort to approximate Nash equilibria, which are guaranteed to exist even if we restrict ourselves to distributions with small descriptions. We proved this for two-player games in Solar Lecture 1 (Theorem \[t:lmm\]); the same argument works for games with any number of players. 
\[l:lmm\] For every ${\epsilon}> 0$ and every game with $k$ players with strategy sets $A_1,\ldots,A_k$, there exists an ${\epsilon}$-approximate Nash equilibrium with description length polynomial in $k$, $\log (\max_{i=1}^k |A_i|)$, and $\tfrac{1}{{\epsilon}}$. In particular, every game with a sub-doubly-exponential number of strategies admits an approximate Nash equilibrium with subexponential description length. We now proceed to the proof of Theorem \[t:condpoa2\]. (of Theorem \[t:condpoa2\]) Fix an auction with at most $A$ strategies per player, and a value for ${\epsilon}= \Omega(1/{\mathrm{poly}}(k,m))$. Assume that, no matter what the bidder valuations $v_1,\ldots,v_k \in \V$ are, the POA of ${\epsilon}$-approximate Nash equilibria of the auction is at most $\rho < \alpha$. We will show that $A$ must be doubly-exponential in $m$. Consider the following nondeterministic protocol for verifying a 1-input of the welfare-maximization problem—for convincing the $k$ players that every allocation has welfare at most $W^*/\alpha$. See also Figure \[f:condpoa\]. The prover writes on a publicly visible blackboard an ${\epsilon}$-approximate Nash equilibrium $(\sigma_1,\ldots,\sigma_k)$ of the auction, with description length polynomial in $k$, $\log A$, and $\tfrac{1}{{\epsilon}} = O({\mathrm{poly}}(k,m))$ as guaranteed by Lemma \[l:lmm\]. The prover also writes down the expected welfare contribution ${\mathbf{E}\!\left[v_i(S_i)\right]}$ of each bidder $i$ in this equilibrium, where $S_i$ denotes the (random) bundle that $i$ wins. ![Proof of Theorem \[t:condpoa2\]. How to extract a low-communication nondeterministic protocol from a good price-of-anarchy bound.[]{data-label="f:condpoa"}](condpoa.pdf){width=".6\textwidth"} Given this advice, each player $i$ verifies that $\sigma_i$ is indeed an ${\epsilon}$-approximate best response to the other $\sigma_j$’s and that her expected welfare is as claimed when all players play the mixed strategies $\sigma_1,\ldots,\sigma_k$.
Crucially, player $i$ is fully equipped to perform both of these checks without any communication—she knows her valuation $v_i$ (and hence her utility in each outcome of the game) and the mixed strategies used by all players, and this is all that is needed to verify her ${\epsilon}$-approximate Nash equilibrium conditions and compute her expected contribution to the social welfare.[^119] Player $i$ accepts if and only if the prover’s advice passes these two tests, and if the expected welfare of the equilibrium is at most $W^*/\alpha$. For the protocol correctness, consider first the case of a 1-input, where every allocation has welfare at most $W^*/\alpha$. If the prover writes down the description of an arbitrary ${\epsilon}$-approximate Nash equilibrium and the appropriate expected contributions to the social welfare, then all of the players will accept (the expected welfare is obviously at most $W^*/\alpha$). We also need to argue that, for the case of a 0-input—where some allocation has welfare at least $W^*$—there is no proof that causes all of the players to accept. We can assume that the prover writes down an ${\epsilon}$-approximate Nash equilibrium and its correct expected welfare $W$, as otherwise at least one player will reject. Because the maximum-possible welfare is at least $W^*$ and (by assumption) the POA of ${\epsilon}$-approximate Nash equilibria is at most $\rho < \alpha$, the expected welfare of the given ${\epsilon}$-approximate Nash equilibrium must satisfy $W \ge W^*/\rho > W^*/\alpha$. The players will reject such a proof, so we can conclude that the protocol is correct. Our assumption then implies that the protocol has communication cost exponential in $m$. Since the cost of the protocol is polynomial in $k$, $m$, and $\log A$, $A$ must be doubly exponential in $m$. 
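The per-player verification at the heart of this protocol is just an expected-utility computation over the product distribution of the mixed strategies, which we can sketch as follows (helper names are ours; strategy sets are assumed small enough to enumerate explicitly):

```python
import itertools

def expected_utility(i, sigma, payoff):
    """Expected payoff of player i when players mix according to sigma.

    sigma[p] maps pure strategies of player p to probabilities;
    payoff(i, profile) gives i's payoff at a pure strategy profile.
    """
    total = 0.0
    for profile in itertools.product(*(s.keys() for s in sigma)):
        prob = 1.0
        for s, a in zip(sigma, profile):
            prob *= s[a]
        total += prob * payoff(i, profile)
    return total

def is_eps_best_response(i, sigma, payoff, strategies, eps):
    """True iff no unilateral pure deviation gains player i more than eps."""
    current = expected_utility(i, sigma, payoff)
    for a in strategies:
        deviation = list(sigma)
        deviation[i] = {a: 1.0}
        if expected_utility(i, deviation, payoff) > current + eps:
            return False
    return True
```

For instance, in matching pennies the uniform mixed profile passes this check with $\epsilon = 0$, since every deviation also yields expected payoff 0.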
Conceptually, the proof of Theorem \[t:condpoa2\] argues that, when the POA of ${\epsilon}$-approximate Nash equilibria is small, every ${\epsilon}$-approximate Nash equilibrium provides a privately verifiable proof of a good upper bound on the maximum-possible welfare. When such upper bounds require large communication, the equilibrium description length (and hence the number of available strategies) must be large. An Open Question {#s:open} ---------------- While Theorems \[t:wm2\], \[t:ffgl2\], and \[t:condpoa2\] pin down the best-possible POA achievable by simple auctions with subadditive bidder valuations, open questions remain for other valuation classes. For example, a valuation $v_i$ is [ *submodular*]{} if it satisfies $$v_i(T \cup \{j\}) - v_i(T) \le v_i(S \cup \{j\}) - v_i(S)$$ for every $S \subseteq T \subset M$ and $j \notin T$. This is a “diminishing returns” condition for set functions. Every monotone submodular function is also subadditive, so welfare-maximization with the former valuations is only easier than with the latter. The worst-case POA of S1As is exactly $\tfrac{e}{e-1} \approx 1.58$ when bidders have submodular valuations. The upper bound was proved by @ST13, the lower bound by @CKST16. It is an open question whether or not there is a simple auction with a smaller worst-case POA. The best lower bound known—for nondeterministic protocols and hence, by Theorem \[t:condpoa2\], for the POA of ${\epsilon}$-approximate Nash equilibria of simple auctions—is $\tfrac{2e}{2e-1} \approx 1.23$ [@DV13]. Intriguingly, there is an upper bound (very slightly) better than $\tfrac{e}{e-1}$ for polynomial-communication protocols [@FV06]—can this better upper bound also be realized as the POA of a simple auction? What is the best-possible approximation guarantee, either for polynomial-communication protocols or for the POA of simple auctions? 
Resolving this question would require either a novel auction format (better than S1As), a novel lower bound technique (better than Theorem \[t:condpoa2\]), or both. Appendix: Proof of Theorem \[t:mdisj\] {#s:mdisj} -------------------------------------- The proof of Theorem \[t:mdisj\] proceeds in three easy steps. **Step 1:** [*Every nondeterministic protocol with communication cost $c$ induces a cover of the 1-inputs of $M(f)$ by at most $2^c$ monochromatic boxes.*]{} By “$M(f)$,” we mean the $k$-dimensional array in which the $i$th dimension is indexed by the possible inputs of player $i$, and an array entry contains the value of the function $f$ on the corresponding joint input. By a “box,” we mean the $k$-dimensional generalization of a rectangle—a subset of inputs that can be written as a product $A_1 \times A_2 \times \cdots \times A_k$. By “monochromatic,” we mean a box that does not contain both a 1-input and a 0-input. (Recall that for the [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}problem there are also inputs that are neither 1 nor 0—a monochromatic box can contain any number of these.) The proof of this step is the same as the standard one for the two-party case (see e.g. [@KN96]). **Step 2:** [*The number of $1$-inputs in $M(f)$ is $(k+1)^n$.*]{} In a $1$-input $(\bfx_1,\ldots,\bfx_k)$, for every coordinate $\ell$, at most one of the $k$ inputs has a 1 in the $\ell$th coordinate. This yields $k+1$ options for each of the $n$ coordinates, thereby generating a total of $(k+1)^n$ 1-inputs. **Step 3:** [*The number of $1$-inputs in a monochromatic box is at most $k^n$.*]{} Let $B = A_1 \times A_2 \times \cdots \times A_k$ be a 1-box. The key claim here is: for each coordinate $\ell=1,\ldots,n$, there is a player $i \in \{1,\ldots,k\}$ such that, for every input $\bfx_i \in A_i$, the $\ell$th coordinate of $\bfx_i$ is 0. That is, to each coordinate we can associate an “ineligible player” that, in this box, never has a 1 in that coordinate.
This is easily seen by contradiction: otherwise, there exists a coordinate $\ell$ such that, for every player $i$, there is an input $\bfx_i \in A_i$ with a 1 in the $\ell$th coordinate. As a box, $B$ contains the input $(\bfx_1,\ldots,\bfx_k)$. But this is a 0-input, contradicting the assumption that $B$ is a 1-box. The claim implies the stated upper bound. Every 1-input of $B$ can be generated by choosing, for each coordinate $\ell$, an assignment of at most one “1” in this coordinate to one of the $k-1$ eligible players for this coordinate. With only $k$ choices per coordinate, there are at most $k^n$ 1-inputs in the box $B$. **Conclusion:** Steps 2 and 3 imply that covering the 1s of the $k$-dimensional array of the [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}function requires at least $(1+\tfrac{1}{k})^n$ 1-boxes. By the discussion in Step 1, this implies a lower bound of $n \log_2 (1 + \tfrac{1}{k}) = \Theta(n/k)$ on the nondeterministic communication complexity of the [<span style="font-variant:small-caps;">Multi-Disjointness</span>]{}function (and output 1). This concludes the proof of Theorem \[t:mdisj\]. Why Prices Need Algorithms ========================== You’ve probably heard about “market-clearing prices,” which equate the supply and demand in a market. When are such prices guaranteed to exist? In the classical setting with divisible goods (milk, wheat, etc.), market-clearing prices exist under reasonably weak conditions [@AD54]. But with indivisible goods (houses, spectrum licenses, etc.), such prices may or may not exist. As you can imagine, many papers in the economics and operations research literatures study necessary and sufficient conditions for existence. The punchline of today’s lecture, based on joint work with Inbal Talgam-Cohen [@priceeq], is that computational complexity considerations in large part govern whether or not market-clearing prices exist in a market of indivisible goods.
This is cool and surprising because the question (existence of equilibria) seems to have nothing to do with computation (cf., the questions studied in the Solar Lectures). Markets with Indivisible Items ------------------------------ The basic setup is the same as in the preceding lecture, when we were studying price-of-anarchy bounds for simple combinatorial auctions (Section \[s:camodel\]). To review, there are $k$ players, a set $M$ of $m$ items, and each player $i$ has a valuation $v_i:2^M \rightarrow {{\mathbb R}}_+$ describing her maximum willingness to pay for each bundle of items. For simplicity, we also assume that $v_i(\emptyset)=0$ and that $v_i$ is monotone (with $v_i(S) \le v_i(T)$ whenever $S \subseteq T$). As in last lecture, we will often vary the class $\V$ of allowable valuations to make the setting more or less complex. ### Walrasian Equilibria Next is the standard definition of “market-clearing prices” in a market with multiple indivisible items. \[d:we\] A [*Walrasian equilibrium*]{} is an allocation $S_1,\ldots,S_k$ of the items of $M$ to the players and nonnegative prices $p_1,p_2,...,p_m$ for the items such that: - All buyers are as happy as possible with their respective allocations, given the prices: for every $i=1,2,\ldots,k$, $S_i\in \text{argmax}_T \{ v_i(T)-\sum_{j\in T} p_j \}$. - Feasibility: $S_i \cap S_j = \emptyset$ for $i \neq j$. - The market clears: for every $j \in M$, $j \in S_i$ for some $i$.[^120] Note that $S_i$ might be the empty set, if the prices are high enough for (W1) to hold for player $i$. Also, property (W3) is crucial for the definition to be non-trivial (otherwise set $p_j = +\infty$ for every $j$). Walrasian equilibria are remarkable: even though each player optimizes independently (modulo tie-breaking) and gets exactly what she wants, somehow the global feasibility constraint is respected. 
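Definition \[d:we\] can be checked directly by brute force in tiny markets. A sketch (function names are ours; the check is exhaustive over all $2^m$ bundles):

```python
import itertools

def utility(v, T, prices):
    # quasilinear utility of a player with valuation v for bundle T
    return v(set(T)) - sum(prices[j] for j in T)

def is_walrasian_eq(valuations, alloc, prices, items):
    # (W2) + (W3): the assigned bundles form a partition of all the items
    assigned = sorted(j for S in alloc for j in S)
    if assigned != sorted(items):
        return False
    bundles = [b for r in range(len(items) + 1)
               for b in itertools.combinations(items, r)]
    # (W1): every player's bundle maximizes her utility at these prices
    for v, S in zip(valuations, alloc):
        if utility(v, S, prices) < max(utility(v, T, prices) for T in bundles):
            return False
    return True
```

On a one-item market with values 3 and 2, any price between 2 and 3 together with the allocation to the high-value player passes this check, as discussed below.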
### The First Welfare Theorem Recall from last lecture that the [*social welfare*]{} of an allocation $S_1,\ldots,S_k$ is defined as $\sum_{i=1}^k v_i(S_i)$. Walrasian equilibria automatically maximize the social welfare, a result known as the “First Welfare Theorem.” \[t:fwt\] If the prices $p_1,p_2,\ldots,p_m$ and allocation $S_1,S_2,\ldots,S_k$ of items constitute a Walrasian equilibrium, then $$(S_1,S_2,...,S_k)\in\textnormal{argmax}_{(T_1,T_2,...,T_k)}\sum_{i=1}^k v_i(T_i),$$ where $(T_1,\ldots,T_k)$ ranges over all feasible allocations (with $T_i \cap T_j = \emptyset$ for $i \neq j$). If one thinks of a Walrasian equilibrium as the natural outcome of a market, then Theorem \[t:fwt\] can be interpreted as saying “markets are efficient.”[^121] There are many versions of the “First Welfare Theorem,” and all have this flavor. Let $(S^*_1,\ldots,S^*_k)$ denote a welfare-maximizing feasible allocation. We can apply property (W1) of Walrasian equilibria to obtain $$v_i(S_i) - \sum_{j \in S_i} p_j \ge v_i(S^*_i) - \sum_{j \in S^*_i} p_j$$ for each player $i=1,2,\ldots,k$. Summing over $i$, we have $$\label{eq:we} \sum_{i=1}^k v_i(S_i) - \sum_{i=1}^k \left( \sum_{j \in S_i} p_j \right) \ge \sum_{i=1}^k v_i(S^*_i) - \sum_{i=1}^k \left( \sum_{j \in S^*_i} p_j\right).$$ Properties (W2) and (W3) imply that the second term on the left-hand side of \[eq:we\] equals the sum $\sum_{j=1}^m p_j$ of all the item prices. Since $(S^*_1,\ldots,S^*_k)$ is a feasible allocation, each item is awarded at most once and hence the second term on the right-hand side is at most $\sum_{j=1}^m p_j$. Adding $\sum_{j=1}^m p_j$ to both sides gives $$\sum_{i=1}^k v_i(S_i) \ge \sum_{i=1}^k v_i(S^*_i),$$ which proves that the allocation $(S_1,\ldots,S_k)$ is also welfare-maximizing. ### Existence of Walrasian Equilibria The First Welfare Theorem says that Walrasian equilibria are great when they exist. But when do they exist? Suppose $M$ contains only one item.
Consider the allocation that awards the item to the player $i$ with the highest value for it, and a price that is between player $i$’s value and the highest value of some other player (the second-highest overall). This is a Walrasian equilibrium: the price is low enough that bidder $i$ prefers receiving the item to receiving nothing, and high enough that all the other bidders prefer the opposite. A simple case analysis shows that these are all of the Walrasian equilibria. \[ex:nonex\] Consider a market with two items, $A$ and $B$. Suppose the valuation of the first player is $$v_1(T) = \begin{cases} 3 & \quad \text{for } T=\{A,B\} \\ 0 & \quad \text{otherwise} \\ \end{cases}$$ and that of the second player is $$v_2(T) = \begin{cases} 2 & \quad \text{for } T \neq \emptyset\\ 0 & \quad \text{otherwise.} \\ \end{cases}$$ The first bidder is called a “single-minded” or “AND” bidder, and is happy only if she gets both items. The second bidder is called a “unit-demand” or “OR” bidder, and effectively wants only one of the items.[^122] We claim that there is no Walrasian equilibrium in this market. From the First Welfare Theorem, we know that such an equilibrium must allocate the items to maximize the social welfare, which in this case means awarding both items to the first player. For the second player to be happy getting neither item, the price of each item must be at least 2. But then the first player pays 4 and has negative utility, and would prefer to receive nothing. These examples suggest a natural question: under what conditions is a Walrasian equilibrium guaranteed to exist? There is a well-known literature on this question in economics (e.g. [@KC82; @GS99; @M00]); here are the highlights. 1. If every player’s valuation $v_i$ satisfies the “gross substitutes (GS)” condition, then a Walrasian equilibrium is guaranteed to exist. We won’t need the precise definition of the GS condition in this lecture. 
GS valuations are closely related to weighted matroid rank functions, and hence are a subclass of the submodular valuations defined at the end of last lecture in Section \[s:open\].[^123] A unit-demand (a.k.a. “OR”) valuation, like that of the second player in Example \[ex:nonex\], satisfies the GS condition (corresponding to the 1-uniform matroid). It follows that single-minded (a.k.a. “AND”) valuations, like that of the first player in Example \[ex:nonex\], do not in general satisfy the GS condition (otherwise the market in Example \[ex:nonex\] would have a Walrasian equilibrium). 2. If $\V$ is a class of valuations that contains all unit-demand valuations and also some valuation that violates the GS condition, then there is a market with valuations in $\V$ that does not possess a Walrasian equilibrium. These results imply that GS valuations are a maximal class of valuations subject to the guaranteed existence of Walrasian equilibria. These results do, however, leave open the possibility of guaranteed existence for classes $\V$ that contain non-GS valuations but not all unit-demand valuations, and a number of recent papers in economics and operations research have pursued this direction (e.g. [@BLN13; @COP15; @COP17; @SY06]). All of the non-existence results in this line of work use explicit constructions, like in Example \[ex:nonex\]. Complexity Separations Imply Non-Existence of Walrasian Equilibria {#s:priceeq} ------------------------------------------------------------------ ### Statement of Main Result Next we describe a completely different approach to ruling out the existence of Walrasian equilibria, based on complexity theory rather than explicit constructions. The main result is the following. \[t:priceeq\] Let $\V$ denote a class of valuations. Suppose the welfare-maximization problem for $\V$ does not reduce to the utility-maximization problem for $\V$. Then, there exists a market with all player valuations in $\V$ that has no Walrasian equilibrium.
In other words, a necessary condition for the guaranteed existence of Walrasian equilibria is that welfare-maximization is no harder than utility-maximization. This connects a purely economic question (when do equilibria exist?) to a purely algorithmic one. To fill in some of the details in the statement of Theorem \[t:priceeq\], by “does not reduce to,” we mean that there is no polynomial-time Turing reduction from the former problem to the latter. By “the welfare-maximization problem for $\V$,” we mean the problem of, given player valuations $v_1,\ldots,v_k \in \V$, computing an allocation that maximizes the social welfare $\sum_{i=1}^k v_i(S_i)$.[^124] By “the utility-maximization problem for $\V$,” we mean the problem of, given a valuation $v \in \V$ and nonnegative prices $p_1,\ldots,p_m$, computing a utility-maximizing bundle $S \in \textnormal{argmax}_{T \subseteq M} \{v(T) - \sum_{j \in T} p_j\}$. The utility-maximization problem, which involves only one player, can generally only be easier than the multi-player welfare-maximization problem. Thus the two problems either have the same computational complexity, or welfare-maximization is strictly harder. Theorem \[t:priceeq\] asserts that whenever the second case holds, Walrasian equilibria need not exist. ### Examples Before proving Theorem \[t:priceeq\], let’s see how to apply it. For most natural valuation classes $\V$, a properly trained theoretical computer scientist can identify the complexity of the utility- and welfare-maximization problems in a matter of minutes. \[ex:and\] Let $\V_m$ denote the class of “AND” valuations for markets where $|M|=m$. 
That is, each $v \in \V_m$ has the following form, for some $\alpha \ge 0$ and $T \subseteq M$: $$v(S)= \begin{cases} \alpha & \quad \text{if } S\supseteq T\\ 0 & \quad \text{otherwise.} \\ \end{cases}$$ The utility-maximization problem for $\V_m$ is trivial: for a single player with an AND valuation with parameters $\alpha$ and $T$, the better of $\emptyset$ or $T$ is a utility-maximizing bundle. The welfare-maximization problem for $\V_m$ is essentially set packing and is ${\mathsf{NP}}$-hard (with $m \rightarrow \infty$).[^125] We conclude that the welfare-maximization problem for $\V_m$ does not reduce to the utility-maximization problem for $\V_m$ (unless ${\mathsf{P}}= {\mathsf{NP}}$). Theorem \[t:priceeq\] then implies that, assuming ${\mathsf{P}}\neq {\mathsf{NP}}$, there are markets with AND valuations that do not have any Walrasian equilibria.[^126] Of course, Example \[ex:nonex\] already shows, without any complexity assumptions, that markets with AND bidders do not generally have Walrasian equilibria.[^127] Our next example addresses a class of valuations for which the status of Walrasian equilibrium existence was not previously known. A [*capped additive*]{} valuation $v$ is parameterized by $m+1$ numbers $c, \alpha_1,\alpha_2,\ldots,\alpha_m$ and is defined as $$v(S) = \min \left\{ c, \sum_{j \in S} \alpha_j \right\}.$$ The $\alpha_j$’s indicate each item’s value, and $c$ the “cap” on the maximum value that can be attained. Capped additive valuations were proposed in @LLN06 as a natural subclass of submodular valuations, and have been studied previously from a welfare-maximization standpoint. Let $\V_{m,d}$ denote the class of capped additive valuations in markets with $|M|=m$ and with $c$ and $\alpha_1,\ldots,\alpha_m$ restricted to be positive integers between 1 and $m^d$. (Think of $d$ as fixed and $m \rightarrow \infty$.)
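For valuations in $\V_{m,d}$, utility-maximization admits a small Knapsack-style dynamic program over capped value totals: at most $c+1$ states, each tracking the cheapest bundle achieving that capped value, for $O(mc)$ time overall. The sketch below is our reconstruction of this idea (helper name is ours):

```python
def max_utility_capped_additive(c, alpha, prices):
    """Maximize min(c, sum_{j in S} alpha_j) - sum_{j in S} p_j over bundles S.

    Knapsack-style DP: dp[s] = minimum total price of a bundle whose additive
    value, capped at c, equals s. Runs in O(m * c) time.
    """
    INF = float('inf')
    dp = [INF] * (c + 1)
    dp[0] = 0.0                      # the empty bundle costs nothing
    for a, p in zip(alpha, prices):
        for s in range(c, -1, -1):   # descending: each item used at most once
            if dp[s] < INF:
                t = min(s + a, c)    # value contributions beyond c are wasted
                dp[t] = min(dp[t], dp[s] + p)
    return max(s - dp[s] for s in range(c + 1) if dp[s] < INF)
```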
A Knapsack-type dynamic programming algorithm shows that the utility-maximization problem for $\V_{m,d}$ can be solved in polynomial time (using that $c$ and the $\alpha_j$’s are polynomially bounded). For $d$ a sufficiently large constant, however, the welfare-maximization problem for $\V_{m,d}$ is ${\mathsf{NP}}$-hard (it includes the strongly ${\mathsf{NP}}$-hard Bin Packing problem). Theorem \[t:priceeq\] then implies that, assuming ${\mathsf{P}}\neq {\mathsf{NP}}$, there are markets with valuations in $\V_{m,d}$ with no Walrasian equilibrium. Proof of Theorem \[t:priceeq\] {#s:priceeqpf} ------------------------------ ### The Plan Here’s the plan for proving Theorem \[t:priceeq\]. Fix a class $\V$ of valuations, and assume that a Walrasian equilibrium exists in every market with player valuations in $\V$. We will show, in two steps, that the welfare-maximization problem for $\V$ (polynomial-time Turing) reduces to the utility-maximization problem for $\V$. **Step 1:** The “fractional” version of the welfare-maximization problem for $\V$ reduces to the utility-maximization problem for $\V$. **Step 2:** A market admits a Walrasian equilibrium if and only if the fractional welfare-maximization problem has an optimal integral solution. (We’ll only need the “only if” direction.) Since every market with valuations in $\V$ admits a Walrasian equilibrium (by assumption), these two steps imply that the integral welfare-maximization problem reduces to utility-maximization. ### Step 1: Fractional Welfare-Maximization Reduces to Utility-Maximization This step is folklore, and appears for example in @NS06. 
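To make the algorithmic claim concrete, here is a minimal sketch of the Knapsack-type dynamic program for utility-maximization with a capped additive valuation. The function name and interface are our own; it assumes the cap $c$ and the $\alpha_j$'s are positive integers (so the table has $c+1$ entries) and runs in time $O(mc)$, polynomial when the parameters are polynomially bounded.

```python
# Utility-maximization for a capped additive valuation v(S) = min(c, sum_{j in S} a_j)
# under item prices p_j. Sketch only; assumes c and the a_j's are positive integers.

def max_utility_capped_additive(c, a, p):
    """Return max over bundles S of min(c, sum of a_j over S) minus total price.

    best[t] = minimum total price of a bundle whose (capped) additive value is t;
    value beyond the cap is truncated to c, since it contributes no extra utility.
    """
    INF = float("inf")
    best = [INF] * (c + 1)
    best[0] = 0.0
    for aj, pj in zip(a, p):
        # Iterate t downward so each item is used at most once (0/1 knapsack).
        for t in range(c, -1, -1):
            if best[t] < INF:
                t2 = min(c, t + aj)
                best[t2] = min(best[t2], best[t] + pj)
    # The empty bundle (t = 0, price 0) guarantees nonnegative utility.
    return max(t - best[t] for t in range(c + 1) if best[t] < INF)
```

For instance, with cap $c=5$, item values $[3,3,3]$, and unit prices, the best bundle takes two items, for utility $\min(5,6)-2=3$.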
Consider the following linear program (often called the [ *configuration LP*]{}), with one variable $x_{iS}$ for each player $i$ and bundle $S \subseteq M$: $$\begin{aligned} \max & \sum_{i=1}^k \sum_{S \subseteq M} v_i(S)x_{iS} \\ \text{s.t.} & \sum_{i=1}^k \sum_{S \subseteq M \,:\, j \in S} x_{iS} \le 1 \qquad \text{for $j=1,2,\ldots,m$} \\ & \sum_{S \subseteq M} x_{iS} = 1 \qquad \text{for $i=1,2,\ldots,k$.} \\ \end{aligned}$$ The intended semantics are $$x_{iS} = \begin{cases} 1 & \quad \text{if $i$ gets the bundle $S$}\\ 0 & \quad \text{otherwise.} \\ \end{cases}$$ The first set of constraints enforces that each item is awarded only once (perhaps fractionally), and the second set enforces that every player receives one bundle (perhaps fractionally). Every feasible allocation induces a 0-1 feasible solution to this linear program according to the intended semantics, and the objective function value of this solution is exactly the social welfare of the allocation. This linear program has an exponential (in $m$) number of variables. The good news is that it has only a polynomial number of constraints. This means that the dual linear program will have a polynomial number of variables and an exponential number of constraints, which is right in the wheelhouse of the ellipsoid method. Precisely, the dual linear program is: $$\begin{aligned} \min &\sum_{i=1}^k u_i + \sum_{j=1}^m p_j \\ \text{s.t.} \quad & u_i + \sum_{j \in S} p_j \ge v_i(S) \qquad \text{for all $i=1,2,\ldots,k$ and $S \subseteq M$}\\ & p_j \ge 0 \qquad \text{for $j=1,2,\ldots,m$}, \\ \end{aligned}$$ where $u_i$ and $p_j$ correspond to the primal constraints that bidder $i$ receives one bundle and that item $j$ is allocated at most once, respectively.
Recall that the ellipsoid method [@K79] can solve a linear program in time polynomial in the number of variables, as long as there is a polynomial-time [*separation oracle*]{} that can verify whether or not a given point is feasible and, if not, produce a violated constraint. For the dual linear program above, this separation oracle boils down to solving the following problem: for each player $i=1,2,\ldots,k$, check that $$u_i \ge \max_{S \subseteq M} \left[ v_i(S) - \sum_{j \in S} p_j \right].$$ But this reduces immediately to the utility-maximization problem for $\V$! Thus the ellipsoid method can be used to solve the dual linear program to optimality, using a polynomial number of calls to a utility-maximization oracle. The optimal solution to the original fractional welfare-maximization problem can then be efficiently extracted from the optimal dual solution.[^128] ### Step 2: Walrasian Equilibria and Exact Linear Programming Relaxations {#ss:step2} We now proceed with the second step, which is based on @BM97 and follows from strong linear programming duality. Recall from linear programming theory (see e.g. [@chvatal]) that a pair of primal and dual feasible solutions are both optimal if and only if the “complementary slackness” conditions hold.[^129] These conditions assert that every non-zero decision variable in one of the linear programs corresponds to a tight constraint in the other. For our primal-dual pair of linear programs, these conditions are: - (i) $x_{iS} > 0$ implies that $u_i = v_i(S) - \sum_{j \in S} p_j$ (i.e., only utility-maximizing bundles are used); - (ii) $p_j > 0$ implies that $\sum_i \sum_{S : j \in S} x_{iS} = 1$ (i.e., an item can go unsold only if its price is zero).
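In code, the separation oracle amounts to one demand query per player. The sketch below is our own illustration: `demand_oracle` stands in for the black-box utility-maximization oracle of the reduction (here implemented by brute force, so it is only sensible for tiny $m$), and `separation_oracle` returns a violated dual constraint $(i, S)$ if one exists.

```python
from itertools import chain, combinations

def demand_oracle(v, p, m):
    """Brute-force stand-in for the utility-maximization oracle:
    return a bundle S maximizing v(S) - sum of p_j over S."""
    bundles = chain.from_iterable(combinations(range(m), r) for r in range(m + 1))
    return max(bundles, key=lambda S: v(set(S)) - sum(p[j] for j in S))

def separation_oracle(valuations, u, p, m):
    """Return a violated dual constraint (i, S) if one exists, else None.

    The dual constraint for (i, S) reads u_i + sum_{j in S} p_j >= v_i(S);
    it suffices to check it at each player's utility-maximizing bundle.
    """
    for i, v in enumerate(valuations):
        S = set(demand_oracle(v, p, m))
        if v(S) - sum(p[j] for j in S) > u[i]:
            return (i, S)  # this constraint is violated at (u, p)
    return None
```

For an AND valuation on $m=2$ items with $\alpha=3$, the point $u_1=0$, $p=(1,1)$ is cut off by the bundle $\{0,1\}$ (utility $1 > 0$), while $u_1=1$ is dual feasible.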
Comparing the definition of Walrasian equilibria (Definition \[d:we\]) with conditions (i) and (ii), we see that a 0-1 primal feasible solution $\bfx$ (corresponding to an allocation) and a dual solution $\boldp$ (corresponding to item prices) constitute a Walrasian equilibrium if and only if the complementary slackness conditions hold (where $u_i$ is understood to be set to $\max_{S \subseteq M} v_i(S) - \sum_{j \in S} p_j$). Thus a Walrasian equilibrium exists if and only if there is a feasible 0-1 solution to the primal linear program and a feasible solution to the dual linear program that satisfy the complementary slackness conditions, which in turn holds if and only if the primal linear program has an optimal 0-1 feasible solution.[^130] We conclude that a Walrasian equilibrium exists if and only if the fractional welfare-maximization problem has an optimal integral solution. This completes the proof of Theorem \[t:priceeq\]. Beyond Walrasian Equilibria --------------------------- For valuation classes $\V$ that do not always possess Walrasian equilibria, is it possible to define a more general notion of “market-clearing prices” so that existence is guaranteed? For example, what if we use prices that are more complex than item prices? This section shows that complexity considerations provide an explanation of why interesting generalizations of Walrasian equilibria have been so hard to come by. Consider a class $\V$ of valuations, and a class ${\mathcal{P}}$ of [*pricing functions*]{}. A pricing function, just like a valuation, is a function $p:2^M \rightarrow {{\mathbb R}}_+$ from bundles to nonnegative numbers. The item prices $p_1,\ldots,p_m$ used to define Walrasian equilibria correspond to additive pricing functions, with $p(S) = \sum_{j \in S} p_j$. The next definition articulates the appropriate generalization of Walrasian equilibria to more general classes of pricing functions. \[d:pe\] A [*price equilibrium*]{} (w.r.t.
pricing functions ${\mathcal{P}}$) is an allocation $S_1,\ldots,S_k$ of the items of $M$ to the players and a pricing function $p \in {\mathcal{P}}$ such that: - (P1) All buyers are as happy as possible with their respective allocations, given the prices: for every $i=1,2,\ldots,k$, $S_i\in \text{argmax}_T \{v_i(T)-p(T)\}$. - (P2) Feasibility: $S_i \cap S_j = \emptyset$ for $i \neq j$. - (P3) Revenue maximization, given the prices: $(S_1,S_2,\ldots,S_k)\in\textnormal{argmax}_{(T_1,T_2,\ldots,T_k)} \{ \sum_{i=1}^k p(T_i) \}$. Condition (P3) is the analog of the market-clearing condition (W3) in Definition \[d:we\]. It is not enough to assert that all items are sold, because with a general pricing function, different ways of selling all of the items can lead to different amounts of revenue. Under conditions (P1)–(P3), the First Welfare Theorem (Theorem \[t:fwt\]) still holds, with essentially the same proof, and so every price equilibrium maximizes the social welfare. For which choices of valuations $\V$ and pricing functions ${\mathcal{P}}$ is Definition \[d:pe\] interesting? Ideally, the following properties should hold. 1. Guaranteed existence: for every set $M$ of items and valuations $v_1,\ldots,v_k \in \V$, there exists a price equilibrium with respect to ${\mathcal{P}}$. 2. Efficient recognition: there is a polynomial-time algorithm for checking whether or not a given allocation and pricing function constitute a price equilibrium. This boils down to assuming that utility-maximization (with respect to $\V$ and ${\mathcal{P}}$) and revenue-maximization (with respect to ${\mathcal{P}}$) are polynomial-time solvable problems (to check (P1) and (P3), respectively). 3. Markets with valuations in $\V$ do not always have a Walrasian equilibrium. (Otherwise, why bother generalizing item prices?) We can now see why there are no known natural choices of $\V$ and ${\mathcal{P}}$ that meet these three requirements.
The first two requirements imply that the welfare-maximization problem belongs to ${\mathsf{NP}}\cap {\mathsf{co}\mbox{-}\mathsf{NP}}$. To certify a lower bound of $W^*$ on the maximum social welfare, one can exhibit an allocation with social welfare at least $W^*$. To certify an upper bound of $W^*$, one can exhibit a price equilibrium that has welfare at most $W^*$—this is well defined by the first condition, efficiently verifiable by the second condition, and correct by the First Welfare Theorem. Problems in $({\mathsf{NP}}\cap {\mathsf{co}\mbox{-}\mathsf{NP}}) \setminus {\mathsf{P}}$ appear to be rare, especially in combinatorial optimization. The preceding paragraph gives a heuristic argument that interesting generalizations of Walrasian equilibria are possible only for valuation classes for which welfare-maximization is polynomial-time solvable. For every natural such class known, the linear programming relaxation in Section \[s:priceeqpf\] has an optimal integral solution; in this sense, solving the configuration LP appears to be a “universal algorithm” for polynomial-time welfare-maximization. But the third requirement asserts that a Walrasian equilibrium does not always exist in markets with valuations in $\V$ and so, by the second step of the proof of Theorem \[t:priceeq\] (in Section \[ss:step2\]), there are markets for which the configuration LP sometimes has only fractional optimal solutions. The upshot is that interesting generalizations of Walrasian equilibria appear possible only for valuation classes where a non-standard algorithm is necessary and sufficient to solve the welfare-maximization problem in polynomial time. 
It is not clear if there are any natural valuation classes for which this algorithmic barrier can be overcome.[^131] The Borders of Border’s Theorem =============================== Border’s theorem [@B91] is a famous result in auction theory about the design space of single-item auctions, and it provides an explicit linear description of the single-item auctions that are “feasible” in a certain sense. Despite the theorem’s fame, there have been few generalizations of it. This lecture, based on joint work with Parikshit Gopalan and Noam Nisan [@GNR18], uses complexity theory to explain why: if there [*were*]{} significant generalizations of Border’s theorem, the polynomial hierarchy would collapse! Optimal Single-Item Auctions ---------------------------- ### The Basics of Single-Item Auctions {#ss:basics} Single-item auctions have made brief appearances in previous lectures; let’s now study the classic model, due to @V61, in earnest. There is a single seller of a single item. There are $n$ bidders, and each bidder $i$ has a valuation $v_i$ for the item (her maximum willingness to pay). Valuations are [ *private*]{}, meaning that $v_i$ is known a priori to bidder $i$ but not to the seller or the other bidders. Each bidder wants to maximize the value obtained from the auction ($v_i$ if she wins, 0 otherwise) minus the price she has to pay. In the presence of randomization (either in the input or internal to the auction), we assume that bidders are risk-neutral, meaning they act to maximize their expected utility. This lecture is our only one on the classical [*Bayesian*]{} model of auctions, which can be viewed as a form of average-case analysis. The key assumption is that each valuation $v_i$ is drawn from a distribution $F_i$ that is known to the seller and possibly the other bidders. The actual realization $v_i$ remains unknown to everybody other than bidder $i$. 
For simplicity we’ll work with discrete distributions, and let $V_i$ denote the support of $F_i$ and $f_i(v_i)$ the probability that bidder $i$’s valuation is $v_i \in V_i$. Typical examples include (discretized versions of) the uniform distribution, the lognormal distribution, the exponential distribution, and power-law distributions. We also assume that bidders’ valuations are stochastically independent. When economists speak of an “optimal auction,” they usually mean the auction that maximizes the seller’s expected revenue with respect to a known prior distribution.[^132] Before identifying optimal auctions, we need to formally define the design space. The auction designer needs to decide who wins and how much they pay. Thus the designer must define two (possibly randomized) functions of the bid vector $\vec{b}$: an *allocation rule* $\vec{x}(\vec{b})$ which determines which bidder wins the item, where $x_i=1$ if $i$ wins and $x_i=0$ otherwise, and a *payment rule* $\vec{p}(\vec{b})$ where $p_i$ is how much $i$ pays. We impose the constraint that whenever bidder $i$ bids $b_i$, the expected payment $\mathbf{E}\!\left[p_i(\vec{b})\right]$ of the bidder is at most $b_i$ times the probability $x_i(\vec{b})$ that she wins. (The randomization is over the bids by the other bidders and any randomness internal to the auction.) This participation constraint ensures that a bidder who does not overbid will obtain nonnegative expected utility from the auction. (Without it, an auction could just charge $+\infty$ to every bidder.) The [*revenue*]{} of an auction on the bid vector $\vec{b}$ is $\sum_{i=1}^n p_i(\vec{b})$. For example, in the [*Vickrey*]{} or [*second-price auction*]{}, the allocation rule awards the item to the highest bidder, and the payment rule charges the second-highest bid.
This auction is *(dominant-strategy) truthful*, meaning that for each bidder, truthful bidding (i.e., setting $b_i=v_i$) is a *dominant strategy* that maximizes her utility no matter what the other bidders do. With such a truthful auction, there is no need to assume that the distributions $F_1,\ldots,F_n$ are known to the bidders. The beauty of the Vickrey auction is that it delegates underbidding to the auctioneer, who determines the optimal bid for the winner on their behalf. A [*first-price auction*]{} has the same allocation rule as a second-price auction (give the item to the highest bidder), but the payment rule charges the winner her bid. Bidding truthfully in a first-price auction guarantees zero utility, so strategic bidders will underbid. Because bidders do not have dominant strategies—the optimal amount to underbid depends on the bids of the others—it is non-trivial to reason about the outcome of a first-price auction. The traditional solution is to assume that the distributions $F_1,\ldots,F_n$ are known in advance to the bidders, and to consider Bayes-Nash equilibria. Formally, a [*strategy*]{} of a bidder $i$ in a first-price auction is a predetermined plan for bidding—a function $b_i(\cdot)$ that maps a valuation $v_i$ to a bid $b_i(v_i)$ (or a distribution over bids). The semantics are: “when my valuation is $v_i$, I will bid $b_i(v_i)$.” We assume that bidders’ strategies are common knowledge, with bidders’ valuations (and hence induced bids) private as usual. 
A strategy profile $b_1(\cdot),\cdots,b_n(\cdot)$ is a [*Bayes-Nash equilibrium*]{} if every bidder always bids optimally given her information—if for every bidder $i$ and every valuation $v_i$, the bid $b_i(v_i)$ maximizes $i$’s expected utility, where the expectation is with respect to the distribution over the bids of other bidders induced by $F_1,\ldots,F_n$ and their bidding strategies.[^133] Note that the set of Bayes-Nash equilibria of an auction generally depends on the prior distributions $F_1,\ldots,F_n$. An auction is called [*Bayesian incentive compatible (BIC)*]{} if truthful bidding (with $b_i(v_i)=v_i$ for all $i$ and $v_i$) is a Bayes-Nash equilibrium. That is, as a bidder, if all other bidders bid truthfully, then you also want to bid truthfully. A second-price auction is BIC, while a first-price auction is not.[^134] However, for every choice of $F_1,\ldots,F_n$, there is a BIC auction that is equivalent to the first-price auction. Specifically: given bids $a_1,\ldots,a_n$, implement the outcome of the first-price auction with bids $b_1(a_1),\ldots,b_n(a_n)$, where $b_1(\cdot),\ldots,b_n(\cdot)$ denotes a Bayes-Nash equilibrium of the first-price auction (with prior distributions $F_1,\ldots,F_n$). Intuitively, this auction makes the following pact with each bidder: “you promise to tell me your true valuation, and I promise to bid on your behalf as you would in a Bayes-Nash equilibrium.” More generally, this simulation argument shows that for [*every*]{} auction $A$, distributions $F_1,\ldots,F_n$, and Bayes-Nash equilibrium of $A$ (w.r.t. $F_1,\ldots,F_n$), there is a BIC auction $A'$ whose (truthful) outcome (and hence expected revenue) matches that of the chosen Bayes-Nash equilibrium of $A$. 
This result is known as the [*Revelation Principle.*]{} This principle implies that, to identify an optimal auction, there is no loss of generality in restricting to BIC auctions.[^135] ### Optimal Auctions {#ss:m81} In optimal auction design, the goal is to identify an expected revenue-maximizing auction, as a function of the prior distributions $F_1,\ldots,F_n$. For example, suppose that $n=1$, and we restrict attention to truthful auctions. The only truthful auctions are take-it-or-leave-it offers (or a randomization over such offers). That is, the selling price must be independent of the bidder’s bid, as any dependence would result in opportunities for the bidder to game the auction. The optimal truthful auction is then the take-it-or-leave-it offer at the price $r$ that maximizes $$\underbrace{r}_{\text{revenue of a sale}} \cdot \underbrace{(1-F(r))}_{\text{probability of a sale}},$$ where $F$ denotes the bidder’s valuation distribution. Given a distribution $F$, it is usually a simple matter to solve for the best $r$. An optimal offer price is called a [ *monopoly price*]{} of the distribution $F$. For example, if $F$ is the uniform distribution on $[0,1]$, then the monopoly price is $\tfrac{1}{2}$. Myerson [@myerson] gave a complete solution to the optimal single-item auction design problem, in the form of a generic compiler that takes as input prior distributions $F_1,\ldots,F_n$ and outputs a closed-form description of the optimal auction for $F_1,\ldots,F_n$. The optimal auction is particularly easy to interpret in the symmetric case, in which bidders’ valuations are drawn i.i.d. from a common distribution $F$. Here, the optimal auction is simply a second-price auction with a reserve price $r$ equal to the monopoly price of $F$ (i.e., an eBay auction with a suitably chosen opening bid).[^136][^137] For example, with any number $n$ of bidders with valuations drawn i.i.d. 
from the uniform distribution on $[0,1]$, the optimal single-item auction is a second-price auction with a reserve price of $\tfrac{1}{2}$. This is a pretty amazing confluence of theory and practice—we optimized over the space of all imaginable auctions (which includes some very strange specimens), and discovered that the theoretically optimal auction format is one that is already in widespread use![^138] Myerson’s theory of optimal auctions extends to the asymmetric case where bidders have different distributions (where the optimal auction is no longer so simple), and also well beyond single-item auctions.[^139] The books by @hartline and the author [@f13 Lectures 3 and 5] describe this theory from a computer science perspective. Border’s Theorem ---------------- ### Context Border’s theorem identifies a tractable description of [*all*]{} BIC single-item auctions, in the form of a polytope in polynomially many variables. (See Section \[ss:basics\] for the definition of a BIC auction.) This goal is in some sense more ambitious than merely identifying the optimal auction; with this tractable description in hand, one can efficiently compute the optimal auction for any given set $F_1,\ldots,F_n$ of prior distributions. Economists are interested in Border’s theorem because it can be used to extend the reach of Myerson’s optimal auction theory (Section \[ss:m81\]) to more general settings, such as the case of risk-averse bidders studied by @MR84. @M84 conjectured the precise result that was proved by @B91. Computer scientists have used Border’s theorem for orthogonal extensions to Myerson’s theory, like computationally tractable descriptions of the expected-revenue maximizing auction in settings with multiple non-identical items [@A+19; @CDW12].
While there is no hope of deriving a closed-form solution to the optimal auction design problem with risk-averse bidders or with multiple items, Border’s theorem at least enables an efficient algorithm for computing a description of an optimal auction (given descriptions of the prior distributions). ### An Exponential-Size Linear Program As a lead-in to Border’s theorem, we show how to formulate the space of BIC single-item auctions as an (extremely big) linear program. The decision variables of the linear program encode the allocation and payment rules of the auction (assuming truthful bidding, as appropriate for BIC auctions). There is one variable $x_i(\vec{v}) \in [0,1]$ that describes the probability (over any randomization in the auction) that bidder $i$ wins the item when bidders’ valuations (and hence bids) are $\vec{v}$. Similarly, $p_i(\vec{v}) \in {{\mathbb R}}_+$ denotes the expected payment made by bidder $i$ when bidders’ valuations are $\vec{v}$. Before describing the linear program, we need some odd but useful notation (which is standard in game theory and microeconomics). For an $n$-vector $\vec{z}$ and a coordinate $i \in [n]$, let $\vec{z}_{-i}$ denote the $(n-1)$-vector obtained by removing the $i$th component from $\vec{z}$. We also identify $(z_i,\vec{z}_{-i})$ with $\vec{z}$. Also, recall that $V_i$ denotes the possible valuations of bidder $i$, and that we assume that this set is finite. Our linear program will have three sets of constraints. The first set enforces the property that truthful bidding is in fact a Bayes-Nash equilibrium (as required for a BIC auction).
For every bidder $i$, possible valuation $v_i \in V_i$ for $i$, and possible false bid $v'_i \in V_i$, $$\label{eq:bic1} \underbrace{v_i \cdot \mathbf{E}_{\vec{v}_{-i} \sim \vec{F}_{-i}}\!\left[x_i(\vec{v})\right] - \mathbf{E}_{\vec{v}_{-i} \sim \vec{F}_{-i}}\!\left[p_i(\vec{v})\right]}_{\text{expected utility of truthful bid $v_i$}} \ge \underbrace{v_i \cdot \mathbf{E}_{\vec{v}_{-i} \sim \vec{F}_{-i}}\!\left[x_i(v'_i,\vec{v}_{-i})\right] - \mathbf{E}_{\vec{v}_{-i} \sim \vec{F}_{-i}}\!\left[p_i(v'_i,\vec{v}_{-i})\right]}_{\text{expected utility of false bid $v'_i$}}.$$ The expectation is over both the randomness in $\vec{v}_{-i}$ and internal to the auction. Each of the expectations in \[eq:bic1\] expands to a sum over all possible $\vec{v}_{-i} \in \vec{V}_{-i}$, weighted by the probability $\prod_{j \neq i} f_j(v_j)$. Because all of the $f_j(v_j)$’s are numbers known in advance, each of these constraints is linear (in the $x_i(\vec{v})$’s and $p_i(\vec{v})$’s). The second set of constraints encodes the participation constraints from Section \[ss:basics\], also known as the [ *interim individually rational (IIR)*]{} constraints.
For every bidder $i$ and possible valuation $v_i \in V_i$, $$\label{eq:iir1} v_i \cdot \mathbf{E}_{\vec{v}_{-i} \sim \vec{F}_{-i}}\!\left[x_i(\vec{v})\right] - \mathbf{E}_{\vec{v}_{-i} \sim \vec{F}_{-i}}\!\left[p_i(\vec{v})\right] \ge 0.$$ The final set of constraints asserts that, with probability 1, the item is sold to at most one bidder: for every $\vec{v} \in \vec{V}$, $$\label{eq:feas1} \sum_{i=1}^n x_i(\vec{v}) \le 1.$$ By construction, feasible solutions to the linear system \[eq:bic1\]–\[eq:feas1\] correspond to the allocation and payment rules of BIC auctions with respect to the distributions $F_1,\ldots,F_n$. This linear program has an exponential number of variables and constraints, and is not immediately useful. ### Reducing the Dimension with Interim Allocation Rules Is it possible to re-express the allocation and payment rules of BIC auctions with a small number of decision variables? Looking at the constraints \[eq:bic1\] and \[eq:iir1\], a natural idea is to use only the decision variables $\{ y_i(v_i) \}_{i \in [n], v_i \in V_i}$ and $\{ q_i(v_i) \}_{i \in [n], v_i \in V_i}$, with the intended semantics that $$y_i(v_i) = \mathbf{E}_{\vec{v}_{-i}}\!\left[x_i(v_i,\vec{v}_{-i})\right] \quad \text{and} \quad q_i(v_i) = \mathbf{E}_{\vec{v}_{-i}}\!\left[p_i(v_i,\vec{v}_{-i})\right].$$ In other words, $y_i(v_i)$ is the probability that bidder $i$ wins when she bids $v_i$, and $q_i(v_i)$ is the expected amount that she pays; these were the only quantities that actually mattered in \[eq:bic1\] and \[eq:iir1\]. (As usual, the expectation is over both the randomness in $\vec{v}_{-i}$ and internal to the auction.)
In auction theory, the $y_i(v_i)$’s are called an [*interim allocation rule*]{}, the $q_i(v_i)$’s an [*interim payment rule*]{}.[^140] There are only $2 \sum_{i=1}^n |V_i|$ such decision variables, far fewer than the $2 \prod_{i=1}^n |V_i|$ variables in \[eq:bic1\]–\[eq:feas1\]. We’ll think of the $|V_i|$’s (and hence the number of decision variables) as polynomially bounded. For example, $V_i$ could be the multiples of some small ${\epsilon}$ that lie in some bounded range like $[0,1]$. We can then express the BIC constraints \[eq:bic1\] in terms of this smaller set of variables by $$\label{eq:bic2} \underbrace{v_i \cdot y_i(v_i) - q_i(v_i)}_{\text{expected utility of truthful bid $v_i$}} \ge \underbrace{v_i \cdot y_i(v'_i) - q_i(v'_i)}_{\text{expected utility of false bid $v'_i$}}$$ for every bidder $i$ and $v_i,v'_i \in V_i$. Similarly, the IIR constraints \[eq:iir1\] become $$\label{eq:iir2} v_i \cdot y_i(v_i) - q_i(v_i) \ge 0$$ for every bidder $i$ and $v_i \in V_i$. Just one problem. What about the feasibility constraints \[eq:feas1\], which reference the individual $x_i(\vec{v})$’s and not merely their expectations? The next definition articulates what feasibility means for an interim allocation rule. An interim allocation rule $\{ y_i(v_i) \}_{i \in [n], v_i \in V_i}$ is [*feasible*]{} if there exist nonnegative values for $\{ x_i(\vec{v}) \}_{i \in [n], \vec{v} \in \vec{V}}$ such that $$\sum_{i=1}^n x_i(\vec{v}) \le 1$$ for every $\vec{v}$ (i.e., the $x_i(\vec{v})$’s constitute a feasible allocation rule), and $$y_i(v_i) = \underbrace{\sum_{\vec{v}_{-i} \in \vec{V}_{-i}} \left( \prod_{j \neq i} f_j(v_j) \right) \cdot x_i(v_i,\vec{v}_{-i})}_{\mathbf{E}_{\vec{v}_{-i}}\!\left[x_i(v_i,\vec{v}_{-i})\right]}$$ for every $i \in [n]$ and $v_i \in V_i$ (i.e., the intended semantics are respected). In other words, the feasible interim allocation rules are exactly the projections (onto the $y_i(v_i)$’s) of the feasible (ex post) allocation rules.
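As a quick numerical illustration of the interim constraints (our own sanity check, not part of the original development): for two bidders with valuations i.i.d. uniform on $\{1,2\}$, the second-price auction (with ties split evenly, a modeling choice of ours) satisfies the BIC and IIR constraints under truthful bidding, as it must.

```python
from fractions import Fraction as F
from itertools import product

V = [1, 2]                       # common support; each value has probability 1/2
f = {1: F(1, 2), 2: F(1, 2)}

def second_price(bids, i):
    """Win probability and expected payment of bidder i; ties split evenly.
    Written for this two-bidder example."""
    high = max(bids)
    winners = [j for j, b in enumerate(bids) if b == high]
    if i not in winners:
        return F(0), F(0)
    x = F(1, len(winners))
    return x, x * sorted(bids)[-2]   # winner pays the second-highest bid

def interim(i, bid):
    """Interim rule (y_i(bid), q_i(bid)): average over the other bidder's value."""
    y = q = F(0)
    for other in V:
        bids = [bid, other] if i == 0 else [other, bid]
        x, p = second_price(bids, i)
        y += f[other] * x
        q += f[other] * p
    return y, q

# Check IIR and BIC for both bidders and all true values / false bids.
for i, v in product([0, 1], V):
    y, q = interim(i, v)
    assert v * y - q >= 0                          # IIR
    for v_false in V:
        yf, qf = interim(i, v_false)
        assert v * y - q >= v * yf - qf            # BIC
```

For instance, $y_1(2) = \tfrac{3}{4}$ and $q_1(2) = 1$ here, so a bidder with value 2 earns expected utility $\tfrac{1}{2}$ from a truthful bid, versus $\tfrac{1}{4}$ from deviating to a bid of 1.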
The big question is: how can we translate interim feasibility into our new, more economical vocabulary?[^141] As we’ll see, Border’s theorem [@B91] provides a crisp and computationally useful solution. ### Examples To get a better feel for the issue of checking the feasibility of an interim allocation rule, let’s consider a couple of examples. A necessary condition for interim feasibility is that the item is awarded to at most one bidder in expectation (over the randomness in the valuations and internal to the auction): $$\label{eq:nec} \sum_{i=1}^n \underbrace{\sum_{v_i \in V_i} f_i(v_i) y_{i}(v_i) }_{\mathbf{Pr}\!\left[\text{$i$ wins}\right]} \le 1.$$ Could this also be a sufficient condition? That is, is every interim allocation rule $\{ y_i(v_i) \}_{i \in [n], v_i \in V_i}$ that satisfies \[eq:nec\] induced by a bona fide (ex post) allocation rule? \[ex:ex1\] Suppose there are $n=2$ bidders. Assume that $v_1,v_2$ are independent and each is equally likely to be 1 or 2. Consider the interim allocation rule given by $$\label{eq:ex1} y_1(1) = \tfrac{1}{2}, y_1(2) = \tfrac{7}{8}, y_2(1) = \tfrac{1}{8}, \text{ and } y_2(2) = \tfrac{1}{2}.$$ Since $f_i(v) = \tfrac{1}{2}$ for all $i=1,2$ and $v=1,2$, the necessary condition in \[eq:nec\] is satisfied. Can you find an (ex post) allocation rule that induces this interim rule? Answering this question is much like solving a Sudoku or KenKen puzzle—the goal is to fill in the table entries in Table \[t:blank\] so that each row sums to at most 1 (for feasibility) and the constraints \[eq:ex1\] are satisfied. For example, the average of the top two entries in the first column of Table \[t:blank\] should be $y_1(1) = \tfrac{1}{2}$. In this example, there are a number of such solutions; one is shown in Table \[t:ex1\]. Thus, the given interim allocation rule is feasible.
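The averaging in Example \[ex:ex1\] can be verified mechanically. The sketch below (our own check) hard-codes the solution from Table \[t:ex1\] and recovers the interim rule \[eq:ex1\] by projecting out the other bidder's valuation:

```python
from fractions import Fraction as F

# Ex post allocation rule from Table [t:ex1]: x[i][(v1, v2)] for bidders i = 0, 1.
x = [
    {(1, 1): F(1), (1, 2): F(0), (2, 1): F(3, 4), (2, 2): F(1)},   # bidder 1
    {(1, 1): F(0), (1, 2): F(1), (2, 1): F(1, 4), (2, 2): F(0)},   # bidder 2
]

def y(i, v):
    """Interim winning probability of bidder i at valuation v:
    average x_i over the other bidder's valuation (1 or 2, each w.p. 1/2)."""
    total = F(0)
    for other in (1, 2):
        profile = (v, other) if i == 0 else (other, v)
        total += F(1, 2) * x[i][profile]
    return total

# Feasibility: each row of the table sums to at most 1.
assert all(x[0][p] + x[1][p] <= 1 for p in x[0])
# The projection reproduces the interim rule of eq. [eq:ex1].
assert (y(0, 1), y(0, 2), y(1, 1), y(1, 2)) == (F(1, 2), F(7, 8), F(1, 8), F(1, 2))
```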
$(v_1, v_2)$ $x_1(v_1, v_2)$ $x_2(v_1, v_2)$ -------------- ----------------- ----------------- $(1,1)$ $(1,2)$ $(2,1)$ $(2,2)$ : Certifying feasibility of an interim allocation rule is analogous to filling in the table entries while respecting constraints on the sums of certain subsets of entries.[]{data-label="t:blank"} $(v_1, v_2)$ $x_1(v_1, v_2)$ $x_2(v_1, v_2)$ -------------- ----------------- ----------------- $(1,1)$ 1 0 $(1,2)$ 0 1 $(2,1)$ 3/4 1/4 $(2,2)$ 1 0 : One solution to Example \[ex:ex1\].[]{data-label="t:ex1"} \[ex:ex2\] Suppose we change the interim allocation rule to $$y_1(1) = \tfrac{1}{4}, y_1(2) = \tfrac{7}{8}, y_2(1) = \tfrac{1}{8}, \text{ and } y_2(2) = \tfrac{3}{4}.$$ The necessary condition \[eq:nec\] remains satisfied. Now, however, the interim rule is not feasible. One way to see this is to note that $y_1(2) = \tfrac{7}{8}$ implies that $x_1(2,2) \ge \tfrac{3}{4}$ and hence $x_2(2,2) \le \tfrac{1}{4}$. Similarly, $y_2(2) = \tfrac{3}{4}$ implies that $x_2(2,2) \ge \tfrac{1}{2}$, a contradictory constraint. The first point of Examples \[ex:ex1\] and \[ex:ex2\] is that it is not trivial to check whether or not a given interim allocation rule is feasible—the problem corresponds to solving a big linear system of equations and inequalities. The second point is that \[eq:nec\] is not a sufficient condition for feasibility. In hindsight, trying to summarize the exponentially many ex post feasibility constraints \[eq:feas1\] with a single interim constraint \[eq:nec\] seems naive. Is there a larger set of linear constraints—possibly an exponential number—that characterizes interim feasibility? ### Border’s Theorem Border’s theorem states that a collection of “obvious” necessary conditions for interim feasibility are also sufficient.
To state these conditions, assume for notational convenience that the valuation sets $V_1,\ldots,V_n$ are disjoint.[^142] Let $\{ x_i(\vec{v}) \}_{i \in [n], \vec{v} \in \vec{V}}$ be a feasible (ex post) allocation rule and $\{ y_i(v_i) \}_{i \in [n], v_i \in V_i}$ the induced (feasible) interim allocation rule. Fix for each bidder $i$ a set $S_i \sse V_i$ of valuations. Call the valuations $\cup_{i=1}^n S_i$ the [*distinguished*]{} valuations. Consider first the probability, over the random valuation profile $\vec{v} \sim \vec{F}$ and any coin flips of the ex post allocation rule, that the winner of the auction (if any) has a distinguished valuation. By linearity of expectation, this probability can be expressed in terms of the interim allocation rule: $$\label{eq:lhs} \sum_{i=1}^n \sum_{v_i \in S_i} f_i(v_i) y_i(v_i).$$ The expression \[eq:lhs\] is linear in the $y_i(v_i)$’s. The second quantity we study is the probability, over $\vec{v} \sim \vec{F}$, that there is a bidder with a distinguished valuation. This has nothing to do with the allocation rule, and is a function of the prior distributions only: $$\label{eq:rhs} 1 - \prod_{i=1}^n \left( 1 - \sum_{v_i \in S_i} f_i(v_i) \right).$$ Because there can only be a winner with a distinguished valuation if there is a bidder with a distinguished valuation, the quantity in \[eq:lhs\] can be at most \[eq:rhs\]. Border’s theorem asserts that these conditions, ranging over all choices of $S_1 \sse V_1,\ldots,S_n \sse V_n$, are also sufficient for the feasibility of an interim allocation rule.
\[t:border\] An interim allocation rule $\{ y_i(v_i) \}_{i \in [n], v_i \in V_i}$ is feasible if and only if for every choice $S_1 \sse V_1,\ldots,S_n \sse V_n$ of distinguished valuations, $$\label{eq:border} \sum_{i=1}^n \sum_{v_i \in S_i} f_i(v_i)y_i(v_i) \le 1 - \prod_{i=1}^n \left( 1 - \sum_{v_i \in S_i} f_i(v_i) \right).$$ Border’s theorem can be derived from the max-flow/min-cut theorem (following [@B07; @CKM13]); we include the proof in Section \[s:borderpf\] for completeness. Border’s theorem yields an explicit description as a linear system of the feasible interim allocation rules induced by BIC single-item auctions. To review, this linear system is $$\begin{aligned} \label{eq:b1} v_i \cdot y_i(v_i) - q_i(v_i) &\ge v_i \cdot y_i(v'_i) - q_i(v'_i) & \forall i \text { and } v_i, v_i' \in V_i \\ \label{eq:b2} v_i \cdot y_i(v_i) - q_i(v_i) &\ge 0 & \forall i \text { and } v_i \in V_i \\ \label{eq:b3} \sum_{i=1}^n \sum_{v_i \in S_i} f_i(v_i) y_i(v_i) &\le 1 - \prod_{i=1}^n \left( 1 - \sum_{v_i \in S_i} f_i(v_i) \right) & \forall S_1 \sse V_1,\ldots,S_n \sse V_n.\end{aligned}$$ For example, optimizing the objective function $$\label{eq:rev} \max \sum_{i=1}^n f_i(v_i) \cdot q_i(v_i)$$ over the linear system – computes the expected revenue of an optimal BIC single-item auction for the distributions $F_1,\ldots,F_n$. The linear system – has only a polynomial number of variables (assuming the $|V_i|$’s are polynomially bounded), but it does have an exponential number of constraints of the form . One solution is to use the ellipsoid method, as the linear system does admit a polynomial-time separation oracle [@A+19; @CDW12].[^143] Alternatively, @A+19 provide a polynomial-size extended formulation of the polytope of feasible interim allocation rules (with a polynomial number of additional decision variables and only polynomially many constraints). 
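Although the system  has exponentially many constraints, it is small enough to check by brute force in toy cases. The sketch below (Python; uniform priors $f_i(v) = \tfrac{1}{2}$ are assumed for the running two-bidder example, and the function names are illustrative) enumerates all choices of distinguished valuations, confirming that the rule of Example \[ex:ex2\] violates  at $S_1 = S_2 = \{2\}$ while the interim rule induced by Table \[t:ex1\] passes:

```python
from fractions import Fraction as F
from itertools import chain, combinations, product
from math import prod

def subsets(vals):
    return list(chain.from_iterable(combinations(vals, r)
                                    for r in range(len(vals) + 1)))

def border_violation(V, f, y):
    # Brute-force Border's condition over all S_1 x ... x S_n; return the
    # first violated choice of distinguished valuations, or None if none is.
    n = len(V)
    for S in product(*(subsets(V[i]) for i in range(n))):
        lhs = sum(f[i][v] * y[i][v] for i in range(n) for v in S[i])
        rhs = 1 - prod(1 - sum(f[i][v] for v in S[i]) for i in range(n))
        if lhs > rhs:
            return S
    return None

V = [(1, 2), (1, 2)]                                  # two bidders
f = [{1: F(1, 2), 2: F(1, 2)}, {1: F(1, 2), 2: F(1, 2)}]

# Example ex:ex2's interim rule: infeasible, caught by S_1 = S_2 = {2}.
y_ex2 = [{1: F(1, 4), 2: F(7, 8)}, {1: F(1, 8), 2: F(3, 4)}]
print(border_violation(V, f, y_ex2))                  # -> ((2,), (2,))

# The interim rule induced by the ex post rule of Table t:ex1: feasible.
x = {(1, 1): (F(1), F(0)), (1, 2): (F(0), F(1)),
     (2, 1): (F(3, 4), F(1, 4)), (2, 2): (F(1), F(0))}
y_ex1 = [{v: sum(F(1, 2) * x[(v, w) if i == 0 else (w, v)][i]
                 for w in (1, 2)) for v in (1, 2)} for i in (0, 1)]
print(border_violation(V, f, y_ex1))                  # -> None
```

The brute force is exponential in $\sum_i |V_i|$, which is exactly why the separation-oracle and extended-formulation results mentioned above matter for larger instances.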
In any case, we conclude that there is a computationally tractable description of the feasible interim allocation rules of BIC single-item auctions.

Beyond Single-Item Auctions: A Complexity-Theoretic Barrier
-----------------------------------------------------------

Myerson’s theory of optimal auctions (Section \[ss:m81\]) extends beyond single-item auctions to all “single-parameter” settings (see footnote \[foot:sparam\] for discussion and Section \[ss:pp\] for two examples). Can Border’s theorem be likewise extended? There are analogs of Border’s theorem in settings modestly more general than single-item auctions, including $k$-unit auctions with unit-demand bidders [@A+19; @CDW12; @CKM13], and approximate versions of Border’s theorem exist fairly generally [@CDW12; @CDW12b]. Can this state-of-the-art be improved upon? We next use complexity theory to develop evidence for a negative answer.

\[t:gnr15\] (Informal) There is no exact Border’s-type theorem for settings significantly more general than the known special cases (unless ${{\mathsf{PH}}}$ collapses).

We proceed to define what we mean by “significantly more general” and a “Border’s-type theorem.”

### Two Example Settings {#ss:pp}

The formal version of Theorem \[t:gnr15\] conditionally rules out “Border’s-type theorems” for several specific settings that are representative of what a more general version of Border’s theorem might cover. We mention two of these here (more are in [@GNR18]). In a [*public project*]{} problem, there is a binary decision to make: whether or not to undertake a costly project (like building a new school). Each bidder $i$ has a private valuation $v_i$ for the outcome where the project is built, and valuation 0 for the outcome where it is not. If the project is built, then everyone can use it. In this setting, feasibility means that all bidders receive the same allocation: $x_1(\vec{v}) = x_2(\vec{v}) = \cdots = x_n(\vec{v}) \in [0,1]$ for every valuation profile $\vec{v}$.
In a [*matching*]{} problem, there is a set $M$ of items, and each bidder is only interested in receiving a specific pair $j,\ell \in M$ of items. (Cf., the AND bidders of the preceding lecture.) For each bidder, the corresponding pair of items is common knowledge, while the bidder’s valuation for the pair is private as usual. Feasible outcomes correspond to (distributions over) matchings in the graph with vertices $M$ and edges given by bidders’ desired pairs. The public project and matching problems are both “single-parameter” problems (i.e., each bidder has only one private parameter). As such, Myerson’s optimal auction theory (Section \[ss:m81\]) can be used to characterize the expected revenue-maximizing auction. Do these settings also admit analogs of Border’s theorem?

### Border’s-Type Theorems

What do we actually mean by a “Border’s-type theorem?” Because we aim to prove impossibility results, we should adopt a definition that is as permissive as possible. Border’s theorem (Theorem \[t:border\]) gives a characterization of the feasible interim allocation rules of a single-item auction as the solutions to a finite system of linear inequalities. This by itself is not impressive—the set is a polytope, and as such is guaranteed to have such a characterization. The appeal of Border’s theorem is that the characterization uses only the “nice” linear inequalities in . Our “niceness” requirement is that the characterization use only linear inequalities that can be efficiently recognized and tested. This is a weak necessary condition for such a characterization to be computationally useful.

\[d:gbt\] A [*Border’s-type theorem*]{} holds for an auction design setting if, for every instance of the setting (specifying the number of bidders and their prior distributions, etc.), there is a system of linear inequalities such that the following properties hold.

1.
(Characterization) The feasible solutions of the linear system are precisely the feasible interim allocation rules of the instance.

2. (Efficient recognition) There is a polynomial-time algorithm that can decide whether or not a given linear inequality (described as a list of coefficients) belongs to the linear system.

3. (Efficient testing) The bit complexity of each linear inequality is polynomial in the description of the instance. (The number of inequalities can be exponential.)

For example, consider the original Border’s theorem, for single-item auctions (Theorem \[t:border\]). The recognition problem is straightforward: the left-hand side of  encodes the $S_i$’s, from which the right-hand side can be computed and checked in polynomial time. It is also evident that every inequality in  has a polynomial-length description.[^144]

### Consequences of a Border’s-Type Theorem

The high-level idea behind the proof of Theorem \[t:gnr15\] is to show that a Border’s-type theorem puts a certain computational problem low in the polynomial hierarchy, and then to show that this problem is ${{\mathsf{\#P}}}$-hard for the public project and matching settings defined in Section \[ss:pp\].[^145] The computational problem is: given a description of an instance (including the prior distributions), compute the maximum-possible expected revenue that can be obtained by a feasible and BIC auction.[^146] What use is a Border’s-type theorem? For starters, it implies that the problem of testing the feasibility of an interim allocation rule is in ${\mathsf{co}\mbox{-}\mathsf{NP}}$. To prove the infeasibility of such a rule, one simply exhibits an inequality of the characterizing linear system that the rule fails to satisfy. Verifying this failure reduces to the recognition and testing problems, which by Definition \[d:gbt\] are polynomial-time solvable.
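The recognition step for the single-item system  can be written out directly. The sketch below is illustrative only (the function name and the encoding of an inequality as per-bidder coefficient dictionaries are assumptions): an inequality belongs to the system exactly when each bidder's nonzero coefficients equal the corresponding prior probabilities (which reveals $S_i$) and the right-hand side matches.

```python
def recognize_border_inequality(coeffs, rhs, V, f, tol=1e-9):
    # Does "sum_i sum_v coeffs[i][v] * y_i(v) <= rhs" belong to the system?
    # Bidder i's nonzero coefficients must equal f_i(v), revealing S_i,
    # and rhs must be the matching right-hand side.
    S = []
    for i, Vi in enumerate(V):
        Si = [v for v in Vi if abs(coeffs[i][v]) > tol]
        if any(abs(coeffs[i][v] - f[i][v]) > tol for v in Si):
            return False
        S.append(Si)
    prod_term = 1.0
    for i, Si in enumerate(S):
        prod_term *= 1.0 - sum(f[i][v] for v in Si)
    return abs(rhs - (1.0 - prod_term)) <= tol

# Two bidders with uniform priors; the inequality for S_1 = S_2 = {2}
# has coefficient f_i(2) = 1/2 on each y_i(2) and right-hand side 3/4.
V = [(1, 2), (1, 2)]
f = [{1: 0.5, 2: 0.5}, {1: 0.5, 2: 0.5}]
coeffs = [{1: 0.0, 2: 0.5}, {1: 0.0, 2: 0.5}]
print(recognize_border_inequality(coeffs, 0.75, V, f))  # -> True
print(recognize_border_inequality(coeffs, 0.80, V, f))  # -> False
```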
\[prop:conp\] If a Border’s-type theorem holds for an auction design setting, then the membership problem for the polytope of feasible interim allocation rules belongs to ${\mathsf{co}\mbox{-}\mathsf{NP}}$.

Combining Proposition \[prop:conp\] with the ellipsoid method puts the problem of computing the maximum-possible expected revenue in ${\mathsf{P}}^{{\mathsf{NP}}}$.

\[t:main\] If a Border’s-type theorem holds for an auction design setting, then the maximum expected revenue of a feasible BIC auction can be computed in ${\mathsf{P}}^{{\mathsf{NP}}}$.

We compute the optimal expected revenue of a BIC auction via linear programming, as follows. The decision variables are the same $y_i(v_i)$’s and $q_i(v_i)$’s as in –, and we retain the BIC constraints  and the IIR constraints . By assumption, we can replace the single-item interim feasibility constraints  with a linear system that satisfies the properties of Definition \[d:gbt\]. The maximum expected revenue of a feasible BIC auction can then be computed by optimizing a linear objective function (in the $q_i(v_i)$’s, as in ) subject to these constraints. Using the ellipsoid method [@K79], this can be accomplished with a polynomial number of invocations of a separation oracle (which either verifies feasibility or exhibits a violated constraint). Proposition \[prop:conp\] implies that we can implement this separation oracle in ${\mathsf{co}\mbox{-}\mathsf{NP}}$, and thus compute the maximum expected revenue of a BIC auction in ${\mathsf{P}}^{{\mathsf{NP}}}$.[^147]

### Impossibility Results from Computational Intractability {#ss:imp}

Theorem \[t:main\] concerns the problem of computing the maximum expected revenue of a feasible BIC auction, given a description of an instance. It is easy to classify the complexity of this problem in the public project and matching settings introduced in Section \[ss:pp\] (and several other settings, see [@GNR18]).
\[prop:pp\] Computing the maximum expected revenue of a feasible BIC auction of a public project instance is a ${{\mathsf{\#P}}}$-hard problem.

Proposition \[prop:pp\] is a straightforward reduction from the ${{\mathsf{\#P}}}$-hard problem of computing the number of feasible solutions to an instance of the <span style="font-variant:small-caps;">Knapsack</span> problem.[^148]

\[prop:match\] Computing the maximum expected revenue of a feasible BIC auction of a matching instance is a ${{\mathsf{\#P}}}$-hard problem.

Proposition \[prop:match\] is a straightforward reduction from the ${{\mathsf{\#P}}}$-hard [Permanent]{} problem. We reiterate that Myerson’s optimal auction theory applies to the public project and matching settings, and in particular gives a polynomial-time algorithm that outputs a description of an optimal auction (for given prior distributions). Moreover, the optimal auction can be implemented as a polynomial-time algorithm. Thus it’s not hard to figure out what the optimal auction is, nor to implement it—what’s hard is figuring out exactly how much revenue it makes on average! Combining Theorem \[t:main\] with Propositions \[prop:pp\] and \[prop:match\] gives the following corollaries, which indicate that there is no Border’s-type theorem significantly more general than the ones already known.

\[cor:pp\] If ${{\mathsf{\#P}}}\not\subseteq {{\mathsf{PH}}}$, then there is no Border’s-type theorem for the setting of public projects.

\[cor:match\] If ${{\mathsf{\#P}}}\not\subseteq {{\mathsf{PH}}}$, then there is no Border’s-type theorem for the matching setting.

Appendix: A Combinatorial Proof of Border’s Theorem {#s:borderpf}
---------------------------------------------------

(of Theorem \[t:border\]) We have already argued the “only if” direction, and now prove the converse. The proof is by the max-flow/min-cut theorem—given the statement of the theorem and this hint, the proof writes itself.
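Indeed, the construction can also be run as a feasibility oracle on the earlier two-bidder examples. The sketch below assumes two bidders with valuations $\{1,2\}$ and uniform priors $f_i(v) = \tfrac{1}{2}$, uses textbook Edmonds–Karp for max-flow, and uses illustrative names throughout: the interim rule induced by Table \[t:ex1\] achieves flow value 1, while Example \[ex:ex2\]'s rule falls short.

```python
from collections import deque
from fractions import Fraction as F

def max_flow(cap, s, t):
    # Edmonds-Karp on a nested-dict capacity map cap[u][v].
    value = F(0)
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:          # BFS for an augmenting path
            u = queue.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)       # bottleneck capacity
        for u, v in path:                         # augment, add residual arcs
            cap[u][v] -= b
            rev = cap.setdefault(v, {})
            rev[u] = rev.get(u, F(0)) + b
        value += b

def border_max_flow(y1, y2):
    # Four-layer network: source -> valuation profiles -> winner-valuation
    # pairs (plus a "no winner" vertex) -> sink. The interim rule (y1, y2)
    # is feasible iff the maximum s-t flow value equals 1.
    f, INF = F(1, 2), F(10)  # total source capacity is 1, so 10 = "infinity"
    cap = {'s': {}}
    for v1 in (1, 2):
        for v2 in (1, 2):
            cap['s'][('X', v1, v2)] = f * f
            cap[('X', v1, v2)] = {('Y', 1, v1): INF, ('Y', 2, v2): INF,
                                  'no winner': INF}
    for v in (1, 2):
        cap[('Y', 1, v)] = {'t': f * y1[v]}
        cap[('Y', 2, v)] = {'t': f * y2[v]}
    cap['no winner'] = {'t': 1 - sum(f * (y1[v] + y2[v]) for v in (1, 2))}
    return max_flow(cap, 's', 't')

# Interim rule induced by Table t:ex1 (under uniform priors), then ex:ex2's.
flow_ex1 = border_max_flow({1: F(1, 2), 2: F(7, 8)}, {1: F(1, 8), 2: F(1, 2)})
flow_ex2 = border_max_flow({1: F(1, 4), 2: F(7, 8)}, {1: F(1, 8), 2: F(3, 4)})
print(flow_ex1, flow_ex2)   # feasible rule: 1; infeasible rule: < 1
```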
Suppose the interim allocation rule $\{ y_i(v_i) \}_{i \in [n], v_i \in V_i}$ satisfies  for every $S_1 \sse V_1,\ldots,S_n \sse V_n$. Form a four-layer $s$-$t$ directed flow network $G$ as follows (Figure \[f:border\](a)). The first layer is the source $s$, the last the sink $t$. In the second layer $X$, vertices correspond to valuation profiles $\vec{v}$. We abuse notation and refer to vertices of $X$ by the corresponding valuation profiles. There is an arc $(s,\vec{v})$ for every $\vec{v} \in X$, with capacity $\prod_{i=1}^n f_i(v_i)$. Note that the total capacity of these edges is 1. In the third layer $Y$, vertices correspond to winner-valuation pairs; there is also one additional “no winner” vertex. We use $(i,v_i)$ to denote the vertex representing the event that bidder $i$ wins the item and also has valuation $v_i$. For each $i$ and $v_i \in V_i$, there is an arc $((i,v_i),t)$ with capacity $f_i(v_i) y_i(v_i)$. There is also an arc from the “no winner” vertex to $t$, with capacity $1 - \sum_{i=1}^n \sum_{v_i \in V_i} f_i(v_i) y_i(v_i) $.[^149] Finally, each vertex $\vec{v} \in X$ has $n+1$ outgoing arcs, all with infinite capacity, to the vertices $(1,v_1),(2,v_2),$ $\ldots,$ $(n,v_n)$ of $Y$ and also to the “no winner” vertex. By construction, $s$-$t$ flows of $G$ with value 1 correspond to ex post allocation rules with induced interim allocation rule $\{ y_i(v_i) \}_{i \in [n], v_i \in V_i}$, with $x_i(\vec{v})$ equal to the amount of flow on the arc $(\vec{v},(i,v_i))$ times $(\prod_{j=1}^n f_j(v_j))^{-1}$. To show that there exists a flow with value 1, it suffices to show that every $s$-$t$ cut has value at least 1 (by the max-flow/min-cut theorem). So fix an $s$-$t$ cut. Let this cut include the vertices $A$ from $X$ and $B$ from $Y$. Note that all arcs from $s$ to $X \setminus A$ and from $B$ to $t$ are cut (Figure \[f:border\](b)).
For each bidder $i$, define $S_i \sse V_i$ as the possible valuations of $i$ that are [*not*]{} represented among the valuation profiles in $A$. Then, for every valuation profile $\vec{v}$ containing at least one distinguished valuation, the arc $(s,\vec{v})$ is cut. The total capacity of these arcs is the right-hand side  of Border’s condition. Next, we can assume that every vertex of the form $(i,v_i)$ with $v_i \notin S_i$ is in $B$, as otherwise an (infinite-capacity) arc from $A$ to $Y \setminus B$ is cut. Similarly, unless $A = \emptyset$—in which case the cut has value at least 1 and we’re done—we can assume that the “no winner” vertex lies in $B$. Thus, the only edges of the form $((i,v_i),t)$ that are not cut involve a distinguished valuation $v_i \in S_i$. It follows that the total capacity of the cut edges incident to $t$ is at least 1 minus the left-hand side  of Border’s condition. Given our assumption that  is at most , this $s$-$t$ cut has value at least 1. This completes the proof of Border’s theorem.

Tractable Relaxations of Nash Equilibria
========================================

Preamble {#preamble-2}
--------

Much of this monograph is about impossibility results for the efficient computation of exact and approximate Nash equilibria. How should we respond to such rampant computational intractability? What should be the message to economists—should they change the way they do economic analysis in some way?[^150] One approach, familiar from coping with ${\mathsf{NP}}$-hard problems, is to look for tractable special cases. For example, Solar Lecture 1 proved tractability results for two-player zero-sum games. Some interesting tractable generalizations of zero-sum games have been identified (see [@CCDP] for a recent example), and polynomial-time algorithms are also known for some relatively narrow classes of games (see e.g. [@graphicalgames]).
Still, for the lion’s share of games that we might care about, no polynomial-time algorithms for computing exact or approximate Nash equilibria are known. A different approach, which has been more fruitful, is to continue to work with general games and look for an [*equilibrium concept*]{} that is more computationally tractable than exact or approximate Nash equilibria. The equilibrium concepts that we’ll consider—the correlated equilibrium and the coarse correlated equilibrium—were originally invented by game theorists, but computational complexity considerations are now shining a much brighter spotlight on them. Where do these alternative equilibrium concepts come from? They arise quite naturally from the study of uncoupled dynamics, which we last saw in Solar Lecture 1.

Uncoupled Dynamics Revisited
----------------------------

Section \[s:uncoupled\] of Solar Lecture 1 introduced uncoupled dynamics in the context of two-player games. In this lecture we work with the analogous setup for a general number $k$ of players. We use $S_i$ to denote the (pure) strategies of player $i$, $s_i \in S_i$ a specific strategy, $\sigma_i$ a mixed strategy, $\vec{s}$ and $\vec{\sigma}$ for profiles (i.e., $k$-vectors) of pure and mixed strategies, and $u_i(\vec{s})$ for player $i$’s payoff in the outcome $\vec{s}$.

At each time step $t=1,2,3,\ldots$:

1. Each player $i=1,2,\ldots,k$ simultaneously chooses a mixed strategy $\sigma_i^t$ over $S_i$ as a function only of her own payoffs and the strategies chosen by players in the first $t-1$ time steps.

2. Every player observes all of the strategies $\vec{\sigma}^{t}$ chosen at time $t$.

“Uncoupled” refers to the fact that each player initially knows only her own payoff function $u_i(\cdot)$, while “dynamics” means a process by which players learn how to play in a game. One of the only positive algorithmic results that we’ve seen concerned [*smooth fictitious play (SFP)*]{}. The $k$-player version of SFP is as follows.
**Given:** parameter family $\{ \eta^t \in [0,\infty) \,:\, t=1,2,3,\ldots\}$.

At each time step $t=1,2,3,\ldots$:

1. Every player $i$ simultaneously chooses the mixed strategy $\sigma_i^t$ by playing each strategy $s_i$ with probability proportional to $e^{\eta^t\pi^t_i}$, where $\pi^t_i$ is the time-averaged expected payoff player $i$ would have earned by playing $s_i$ at every previous time step. Equivalently, $\pi^t_i$ is the expected payoff of strategy $s_i$ when the other players’ strategies $\vec{s}_{-i}$ are drawn from the joint distribution $\tfrac{1}{t-1} \sum_{h=1}^{t-1} \vec{\sigma}^h_{-i}$.

2. Every player observes all of the strategies $\vec{\sigma}^{t}$ chosen at time $t$.

A typical choice for the $\eta^t$’s is $\eta^t \approx \sqrt{t}$. In Theorem \[t:sfp\] in Solar Lecture 1 we proved that, in an $m \times n$ two-player zero-sum game, after $O(\log (m+n)/{\epsilon}^2)$ time steps, the empirical distributions of the two players constitute an ${\epsilon}$-approximate Nash equilibrium.[^151] An obvious question is: what is the outcome of a logarithmic number of rounds of smooth fictitious play in a non-zero-sum game? Our communication complexity lower bound in Solar Lectures 2 and 3 implies that it cannot in general be an ${\epsilon}$-approximate Nash equilibrium. Does it have some alternative economic meaning? The answer to this question turns out to be closely related to some classical game-theoretic equilibrium concepts, which we discuss next.

Correlated and Coarse Correlated Equilibria {#s:ce}
-------------------------------------------

### Correlated Equilibria

The correlated equilibrium is a well-known equilibrium concept defined by @A74.
We define it, then explain the standard semantics, and then offer an example.[^152]

\[d:ce\] A joint distribution $\rho$ on the set $S_1 \times \cdots \times S_k$ of outcomes of a game is a [*correlated equilibrium*]{} if for every player $i \in \{1,2,\ldots,k\}$, strategy $s_i \in S_i$, and deviation $s'_i \in S_i$, $$\label{eq:ce} \mathbf{E}_{\vec{s} \sim \rho}\!\left[u_i(\vec{s}) \,|\, s_i\right] \ge \mathbf{E}_{\vec{s} \sim \rho}\!\left[u_i(s_i',\vec{s}_{-i}) \,|\, s_i\right].$$

Importantly, the distribution $\rho$ in Definition \[d:ce\] need not be a product distribution; in this sense, the strategies chosen by the players are correlated. The Nash equilibria of a game correspond to the correlated equilibria that are product distributions. The usual interpretation of a correlated equilibrium involves a trusted third party. The distribution $\rho$ over outcomes is publicly known. The trusted third party samples an outcome $\vec{s}$ according to $\rho$. For each player $i=1,2,\ldots,k$, the trusted third party privately suggests the strategy $s_i$ to $i$. The player $i$ can follow the suggestion $s_i$, or not. At the time of decision making, a player $i$ knows the distribution $\rho$ and one component $s_i$ of the realization $\vec{s}$, and accordingly has a posterior distribution on others’ suggested strategies $\vec{s}_{-i}$. With these semantics, the correlated equilibrium condition  requires that every player maximizes her expected payoff by playing the suggested strategy $s_i$. The expectation is conditioned on $i$’s information—$\rho$ and $s_i$—and assumes that other players play their recommended strategies $\vec{s}_{-i}$. Definition \[d:ce\] is a bit of a mouthful. But you are intimately familiar with a good example of a correlated equilibrium that is not a mixed Nash equilibrium—a traffic light!
Consider the following two-player game, with each matrix entry listing the payoffs of the row and column players in the corresponding outcome:

         Stop   Go
  ------ ------ -------
  Stop   0,0    0,1
  Go     1,0    -5,-5

This game has two pure Nash equilibria, the outcomes (Stop, Go) and (Go, Stop). Define $\rho$ by randomizing uniformly between these two Nash equilibria. This is not a product distribution over the game’s four outcomes, so it cannot correspond to a Nash equilibrium of the game. It is, however, a correlated equilibrium.[^153]

### Coarse Correlated Equilibria

The outcome of smooth fictitious play in non-zero-sum games relates to a still more permissive equilibrium concept, the [*coarse correlated equilibrium*]{}, which was first studied by @MV78.

\[d:cce\] A joint distribution $\rho$ on the set $S_1 \times \cdots \times S_k$ of outcomes of a game is a [*coarse correlated equilibrium*]{} if for every player $i \in \{1,2,\ldots,k\}$ and every unilateral deviation ${s}'_i \in S_i$, $$\label{eq:cce} \mathbf{E}_{\vec{s} \sim \rho}\!\left[u_i(\vec{s}) \right] \ge \mathbf{E}_{\vec{s} \sim \rho}\!\left[u_i(s_i',\vec{s}_{-i})\right].$$

The condition  is the same as that for the Nash equilibrium (Definition \[d:ne\]), except without the restriction that $\rho$ is a product distribution. In this condition, when a player $i$ contemplates a deviation $s_i'$, she knows only the distribution $\rho$ and [*not*]{} the component $s_i$ of the realization. That is, a coarse correlated equilibrium only protects against unconditional unilateral deviations, as opposed to the unilateral deviations conditioned on $s_i$ that are addressed in Definition \[d:ce\]. It follows that every correlated equilibrium is also a coarse correlated equilibrium (Figure \[f:venn\]).
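Condition  can be checked mechanically for the little Stop/Go game. In the sketch below (illustrative names; conditioning on the suggestion is implemented by weighting with the unnormalized probabilities, which is equivalent), the traffic-light distribution passes while the uniform distribution over all four outcomes does not:

```python
from fractions import Fraction as F

S = ('Stop', 'Go')
u = {('Stop', 'Stop'): (0, 0), ('Stop', 'Go'): (0, 1),
     ('Go', 'Stop'): (1, 0), ('Go', 'Go'): (-5, -5)}

def is_correlated_eq(rho):
    # Check the CE condition for both players and every suggested strategy.
    for i in (0, 1):
        for s in S:                       # suggested strategy for player i
            supp = [(o, p) for o, p in rho.items() if o[i] == s and p > 0]
            follow = sum(p * u[o][i] for o, p in supp)
            for dev in S:                 # deviation played whenever s is suggested
                deviate = sum(p * u[(dev, o[1]) if i == 0 else (o[0], dev)][i]
                              for o, p in supp)
                if deviate > follow:
                    return False
    return True

traffic_light = {('Stop', 'Go'): F(1, 2), ('Go', 'Stop'): F(1, 2)}
uniform = {o: F(1, 4) for o in u}
print(is_correlated_eq(traffic_light), is_correlated_eq(uniform))  # -> True False
```

For instance, a row player who is suggested "Go" infers that the column player was told "Stop", so following earns 1 while deviating to "Stop" earns 0.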
(Figure \[f:venn\] depicts the inclusions NE $\subseteq$ CE $\subseteq$ CCE as three nested ellipses.)

As you would expect, [*${\epsilon}$-approximate*]{} correlated and coarse correlated equilibria are defined by adding a “$-{\epsilon}$” to the right-hand sides of  and , respectively. We can now answer the question about smooth fictitious play in general games: the time-averaged history of joint play under smooth fictitious play converges to the set of coarse correlated equilibria.

\[prop:cce\] For every $k$-player game in which every player has at most $m$ strategies, after $T=O((\log m)/\epsilon^2)$ time steps of smooth fictitious play, the time-averaged history of play $\tfrac{1}{T} \sum_{t=1}^T \vec{\sigma}^t$ is an $\epsilon$-approximate coarse correlated equilibrium.

Proposition \[prop:cce\] follows straightforwardly from the definition of ${\epsilon}$-approximate coarse correlated equilibria and the vanishing regret guarantee of smooth fictitious play that we proved in Solar Lecture 1. Precisely, by Corollary \[cor:noregret\] of that lecture, after $O((\log m)/\epsilon^2)$ time steps of smooth fictitious play, every player has at most ${\epsilon}$ regret (with respect to the best fixed strategy in hindsight, see Definition \[d:regreta\] in Solar Lecture 1). This regret guarantee is equivalent to the conclusion of Proposition \[prop:cce\] (as you should check). What about correlated equilibria? While the time-averaged history of play in smooth fictitious play does not in general converge to the set of correlated equilibria, @FV97 and @HM00 show that the time-averaged play of other reasonably simple types of uncoupled dynamics is guaranteed to be an ${\epsilon}$-approximate correlated equilibrium after a polynomial (rather than logarithmic) number of time steps.
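The regret computation behind Proposition \[prop:cce\] is easy to simulate. The sketch below runs smooth fictitious play with $\eta^t = \sqrt{t}$ on the Stop/Go game from the previous section and checks that each player's time-averaged regret against the best fixed strategy in hindsight is small; since the deviation gain of a fixed strategy against the time-averaged joint play equals this regret, small regret certifies an approximate coarse correlated equilibrium. The horizon and threshold are illustrative choices.

```python
import math

# The Stop/Go game: rows/columns indexed (Stop, Go).
A = [[0.0, 0.0], [1.0, -5.0]]    # row player's payoffs u_1
B = [[0.0, 1.0], [0.0, -5.0]]    # column player's payoffs u_2

def softmax(scores, eta):
    top = max(scores)                                  # stable exponentiation
    w = [math.exp(eta * (s - top)) for s in scores]
    return [x / sum(w) for x in w]

T = 20000
cum_row, cum_col = [0.0, 0.0], [0.0, 0.0]  # cumulative payoff per pure strategy
realized_row = realized_col = 0.0
for t in range(1, T + 1):
    eta = math.sqrt(t)
    # pi^t: time-averaged historical payoff of each pure strategy (zero at t=1)
    p = softmax([c / max(t - 1, 1) for c in cum_row], eta)
    q = softmax([c / max(t - 1, 1) for c in cum_col], eta)
    realized_row += sum(p[i] * q[j] * A[i][j] for i in (0, 1) for j in (0, 1))
    realized_col += sum(p[i] * q[j] * B[i][j] for i in (0, 1) for j in (0, 1))
    for i in (0, 1):
        cum_row[i] += sum(A[i][j] * q[j] for j in (0, 1))
        cum_col[i] += sum(B[j][i] * p[j] for j in (0, 1))

# Time-averaged regret with respect to the best fixed strategy in hindsight.
regret_row = max(cum_row) / T - realized_row / T
regret_col = max(cum_col) / T - realized_col / T
print(regret_row, regret_col)
```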
Computing an Exact Correlated or Coarse Correlated Equilibrium {#s:exact}
--------------------------------------------------------------

### Normal-Form Games

Solar Lecture 1 showed that approximate Nash equilibria of two-player zero-sum games can be learned (and hence computed) efficiently (Theorem \[t:sfp\]). Proposition \[prop:cce\] and the extensions in [@FV97; @HM00] show analogs of this result for approximate correlated and coarse correlated equilibria of general games. Solar Lecture 1 also showed that an exact Nash equilibrium of a two-player zero-sum game can be computed in polynomial time by linear programming (Corollary \[cor:zerosum\]). Is the same true for an exact correlated or coarse correlated equilibrium of a general game?

Consider first the case of coarse correlated equilibria, and introduce one decision variable $x_{\vec{s}}$ per outcome $\vec{s}$ of the game, representing the probability assigned to $\vec{s}$ in a joint distribution $\rho$. The feasible solutions to the following linear system are then precisely the coarse correlated equilibria of the game: $$\begin{aligned} \label{eq:cce1} \sum_{\vec{s}} u_i(\vec{s})x_{\vec{s}} \ge \sum_{\vec{s}} u_i(s_i',\vec{s}_{-i})x_{\vec{s}} & \quad\quad\text{for every $i \in [k]$ and $s'_i \in S_i$}\\ \label{eq:cce2} \sum_{\vec{s} \in \vec{S}} x_{\vec{s}} = 1 &\\ \label{eq:cce3} x_{\vec{s}} \ge 0 & \quad\quad\text{for every $\vec{s} \in \vec{S}$.}\end{aligned}$$ Similarly, correlated equilibria are captured by the following linear system: $$\begin{aligned} \label{eq:ce1} \sum_{\vec{s} \,:\, s_i = j} u_i(\vec{s})x_{\vec{s}} \ge \sum_{\vec{s} \,:\, s_i = j} u_i(s_i',\vec{s}_{-i})x_{\vec{s}} & \quad\quad\text{for every $i \in [k]$ and $j,s_i' \in S_i$}\\ \label{eq:ce2} \sum_{\vec{s} \in \vec{S}} x_{\vec{s}} = 1 &\\ \label{eq:ce3} x_{\vec{s}} \ge 0 & \quad\quad\text{for every $\vec{s} \in \vec{S}$.}\end{aligned}$$ The following proposition is immediate.
\[prop:lp\] An exact correlated or coarse correlated equilibrium of a game can be computed in time polynomial in the number of outcomes of the game. More generally, any linear function (such as the sum of players’ expected payoffs) can be optimized over the set of correlated or coarse correlated equilibria in time polynomial in the number of outcomes.

For games described in [*normal form*]{}, with each player $i$’s payoffs $\{ u_i(\vec{s}) \}_{\vec{s} \in \vec{S}}$ given explicitly in the input, Proposition \[prop:lp\] provides an algorithm with running time polynomial in the input size. However, the number of outcomes of a game scales exponentially with the number $k$ of players.[^154] The computationally interesting multi-player games, and the multi-player games that naturally arise in computer science applications, are those with a [*succinct description*]{}. Can we compute an exact correlated or coarse correlated equilibrium in time polynomial in the size of a game’s description?

### Succinctly Represented Games

For concreteness, let’s look at one example of a class of succinctly represented games: [*graphical games*]{} [@KLS01; @KM03]. A graphical game is described by an undirected graph $G=(V,E)$, with players corresponding to vertices, and a local payoff matrix for each vertex. The local payoff matrix for vertex $i$ specifies $i$’s payoff for each possible choice of its strategy and the strategies chosen by its neighbors in $G$. By assumption, the payoff of a player is independent of the strategies chosen by non-neighboring players. When the graph $G$ has maximum degree $\Delta$, the size of the game description is exponential in $\Delta$ but polynomial in the number $k$ of players. The most interesting cases are when $\Delta=O(1)$ or perhaps $\Delta=O(\log k)$.
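For a graphical game, a vertex's expected payoff under given mixed strategies can be computed by brute force over its local neighborhood; this is the primitive that the reduction below relies on. A minimal sketch (the names and the encoding of local payoff matrices as dictionaries are illustrative):

```python
from itertools import product

def expected_utility(i, nbrs, local_payoff, sigma):
    # Sum over the joint pure strategies of player i and its neighbors,
    # weighted by the product of mixed-strategy probabilities. Time is
    # exponential in the degree but polynomial in the number of players.
    players = (i,) + tuple(nbrs)
    total = 0.0
    for profile in product(*(sigma[j].keys() for j in players)):
        weight = 1.0
        for j, s in zip(players, profile):
            weight *= sigma[j][s]
        total += weight * local_payoff[profile]
    return total

# A coordination edge: player 1's local payoff is 1 when its strategy
# matches its single neighbor (player 0), and 0 otherwise.
payoff = {(a, b): float(a == b) for a in (0, 1) for b in (0, 1)}
sigma = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}
print(expected_utility(1, [0], payoff, sigma))  # -> 0.5
```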
In these cases, the number of outcomes (and hence the size of the game’s normal-form description) is exponential in the size of the succinct description of the game, and solving the linear system – or – does not result in a polynomial-time algorithm. We next state a result showing that, quite generally, an exact correlated (and hence coarse correlated) equilibrium of a succinctly represented game can be computed in polynomial time. The key assumption is that the following [Expected Utility]{} problem can be solved in time polynomial in the size of the game’s description.[^155]

Given a succinct description of a player’s payoff function $u_i$ and mixed strategies $\sigma_1,\ldots,\sigma_k$ for all of the players, compute the player’s expected utility: $$\mathbf{E}_{\vec{s} \sim \vec{\sigma}}\!\left[u_i(\vec{s})\right].$$

For most of the succinctly represented multi-player games that come up in computer science applications, the [Expected Utility]{} problem can be solved in polynomial time. For example, in a graphical game it can be solved by brute force—summing over the entries in player $i$’s local payoff matrix, weighted by the probabilities in the given mixed strategies. This algorithm takes time exponential in $\Delta$ but polynomial in the size of the game’s succinct representation. Tractability of solving the [Expected Utility]{} problem is a sufficient condition for the tractability of computing an exact correlated equilibrium.

\[t:pr\] There is a polynomial-time Turing reduction from the problem of computing a correlated equilibrium of a succinctly described game to the [Expected Utility]{} problem.

Theorem \[t:pr\] applies to a long list of succinctly described games that have been studied in the computer science literature, with graphical games serving as one example.[^156] The starting point of the proof of Theorem \[t:pr\] is the exponential-size linear system –.
We know that this linear system is feasible (by Nash’s Theorem, since the system includes all Nash equilibria). With exponentially many variables, however, it’s not clear how to efficiently compute a feasible solution. The dual linear system, meanwhile, has a polynomial number of variables (corresponding to the constraints in ) and an exponential number of inequalities (corresponding to game outcomes). By Farkas’s Lemma—or, equivalently, strong linear programming duality (see e.g. [@chvatal])—we know that this dual linear system is infeasible. The key idea is to run the ellipsoid algorithm [@K79] on the infeasible dual linear system—called the “ellipsoid against hope” in [@PR08]. A polynomial-time separation oracle must produce, given an alleged solution (which we know is infeasible), a violated inequality. It turns out that this separation oracle reduces to solving a polynomial number of instances of the [Expected Utility]{} problem (which is polynomial-time solvable by assumption) and computing the stationary distribution of a polynomial number of polynomial-size Markov chains (also polynomial-time solvable, e.g. by linear programming). The ellipsoid against hope terminates after a polynomial number of invocations of its separation oracle, necessarily with a proof that the dual linear system is infeasible. 
To recover a primal feasible solution (i.e., a correlated equilibrium), one can retain only the primal decision variables corresponding to the (polynomial number of) dual constraints generated by the separation oracle, and directly solve this polynomial-size reduced version of the primal linear system.[^157]

The Price of Anarchy of Coarse Correlated Equilibria
----------------------------------------------------

### Balancing Computational Tractability with Predictive Power

We now understand senses in which Nash equilibria are computationally intractable (Solar Lectures 2–5) while correlated equilibria are computationally tractable (Sections \[s:ce\] and \[s:exact\]). From an economic perspective, these results suggest that it could be prudent to study the correlated equilibria of a game, rather than restricting attention only to its Nash equilibria.[^158] Passing from Nash equilibria to the larger set of correlated equilibria is a two-edged sword. Computational tractability increases, and with it the plausibility that actual game play will conform to the equilibrium notion. But whatever criticisms we had about the Nash equilibrium’s predictive power (recall Section \[ss:whocares\] in Solar Lecture 1), they are even more severe for the correlated equilibrium (since there are only more of them). The worry is that games typically have far too many correlated equilibria to say anything interesting about them. Our final order of business is to dispel this worry, at least in the context of price-of-anarchy analyses.
Recall from Lunar Lecture 2 that the [*price of anarchy (POA)*]{} is defined as the ratio between the objective function value of an optimal solution and that of the worst equilibrium: $$\mathsf{PoA}(G):= \frac{f(OPT(G))}{\min_{\text{$\rho$ is an equilibrium of $G$}} f(\rho)},$$ where $G$ denotes a game, $f$ denotes a maximization objective function (with $f(\rho) = {\mathbf{E}\ifthenelse{\not\equal{}{\vec{s} \sim \rho}}{_{\vec{s} \sim \rho}}{}\!\left[f(\vec{s})\right]}$ when $\rho$ is a probability distribution), and $OPT(G)$ is the optimal outcome of $G$ with respect to $f$. Thus the POA of a game is always at least 1, and the closer to 1, the better. The POA of a game depends on the choice of equilibrium concept. Because it is defined with respect to the worst equilibrium, the POA only degrades as the set of equilibria grows larger. Thus, the POA with respect to coarse correlated equilibria is only worse (i.e., larger) than that with respect to correlated equilibria, which in turn is only worse than the POA with respect to Nash equilibria (recall Figure \[f:venn\]). The hope is that there’s a “sweet spot” equilibrium concept—permissive enough to be computationally tractable, yet stringent enough to allow good worst-case approximation guarantees. Happily, the coarse correlated equilibrium is just such a sweet spot! ### Smooth Games and Extension Theorems After the first ten years of price-of-anarchy analyses (roughly 1999–2008), it was clear to researchers in the area that many such analyses across different application domains share a common architecture (in routing games, facility location games, scheduling games, auctions, etc.). The concept of “proofs of POA bounds that follow the standard template” was made precise in the theory of smooth games [@robust].[^159][^160] One can then define the [*robust price of anarchy*]{} of a game as the best (i.e., smallest) bound on the game’s POA that can be proved by following the standard template.
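As a concrete instance of the POA definition (a toy example of my own, not one from the lectures): in a $2\times 2$ Prisoner’s Dilemma with welfare objective $f(\vec{s}) = \sum_i u_i(\vec{s})$, the welfare-optimal outcome is mutual cooperation with welfare $6$, while the unique pure Nash equilibrium is mutual defection with welfare $2$, so the pure-strategy POA is $3$:

```python
import itertools

# Payoff tables for a 2x2 Prisoner's Dilemma; strategies: 0 = cooperate, 1 = defect.
payoffs = {
    (0, 0): (3, 3), (0, 1): (0, 4),
    (1, 0): (4, 0), (1, 1): (1, 1),
}

def is_pure_nash(profile):
    """A profile is a pure Nash equilibrium if no unilateral deviation helps."""
    for player in (0, 1):
        for dev in (0, 1):
            alt = list(profile)
            alt[player] = dev
            if payoffs[tuple(alt)][player] > payoffs[profile][player]:
                return False
    return True

welfare = lambda s: sum(payoffs[s])
profiles = list(itertools.product((0, 1), repeat=2))
opt = max(welfare(s) for s in profiles)          # welfare of (0, 0) is 6
eq = [s for s in profiles if is_pure_nash(s)]    # only (1, 1) survives
poa = opt / min(welfare(s) for s in eq)          # 6 / 2 = 3.0
```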
The proof template formalized by smooth games superficially appears relevant only for the POA with respect to [*pure*]{} Nash equilibria, as the definition involves no randomness (let alone correlation). The good news is that the template’s simplicity makes it relatively easy to use. One would expect the bad news to be that bounds on the POA of more permissive equilibrium concepts require different proof techniques, and that the corresponding POA bounds would be much worse. Happily, this is not the case—every POA bound proved using the canonical template automatically applies not only to the pure Nash equilibria of a game, but more generally to all of the game’s coarse correlated equilibria (and hence all of its correlated and mixed Nash equilibria).[^161] \[t:robust\] In every game, the POA with respect to coarse correlated equilibria is bounded above by its robust POA. For ${\epsilon}$-approximate coarse correlated equilibria—as guaranteed by a logarithmic number of rounds of smooth fictitious play (Proposition \[prop:cce\])—the POA bound in Theorem \[t:robust\] degrades by an additive $O({\epsilon})$ term. S. Aaronson, R. Impagliazzo, and D. Moshkovitz. A[M]{} with multiple [M]{}erlins. In *Proceedings of the 29th IEEE Conference on Computational Complexity (CCC)*, pages 44–55, 2014. I. Adler. The equivalence of linear programs and zero-sum games. *International Journal of Game Theory*, 42(1): 165–177, 2013. S. Alaei, H. Fu, N. Haghpanah, J. D. Hartline, and A. Malekian. Efficient computation of optimal auctions via reduced forms. *Mathematics of Operations Research*, 44(3): 1058–1086, 2019. I. Alth[ö]{}fer. On sparse approximations to randomized strategies and convex combinations. *Linear Algebra and Its Applications*, 199(1): 339–355, 1994. A. Anshu, N. Goud, R. Jain, S. Kundu, and P. Mukhopadhyay. Lifting randomized query complexity to randomized communication complexity.
Technical Report TR17-054, ECCC, 2017. K. J. Arrow and G. Debreu. Existence of an equilibrium for a competitive economy. *Econometrica*, 22: 265–290, 1954. R. J. Aumann. Subjectivity and correlation in randomized strategies. *Journal of Mathematical Economics*, 1(1): 67–96, 1974. Y. Babichenko. Query complexity of approximate [N]{}ash equilibria. *Journal of the ACM*, 63(4): 36, 2016. Y. Babichenko and A. Rubinstein. Communication complexity of approximate [Nash]{} equilibria. In *Proceedings of the 49th Annual [ACM]{} Symposium on Theory of Computing (STOC)*, pages 878–889, 2017. P. Beame, S. Cook, J. Edmonds, R. Impagliazzo, and T. Pitassi. The relative complexity of [NP]{} search problems. *Journal of Computer and System Sciences*, 57(1): 3–19, 1998. O. Ben-Zwi, R. Lavi, and I. Newman. Ascending auctions and [W]{}alrasian equilibrium. Working paper, 2013. S. Bikhchandani and J. W. Mamer. Competitive equilibrium in an exchange economy with indivisibilities. *Journal of Economic Theory*, 74: 385–413, 1997. N. Bitansky, O. Paneth, and A. Rosen. On the cryptographic hardness of finding a [N]{}ash equilibrium. In *Proceedings of the 56th Annual Symposium on Foundations of Computer Science [(FOCS)]{}*, pages 1480–1498, 2015. A. Blum, M. T. Hajiaghayi, K. Ligett, and A. Roth. Regret minimization and the price of total anarchy. In *Proceedings of the 40th Annual [ACM]{} Symposium on Theory of Computing [(STOC)]{}*, pages 373–382, 2008. K. C. Border. *Fixed point theorems with applications to economics and game theory*. Cambridge University Press, 1985. K. C. Border. Implementation of reduced form auctions: A geometric approach. *Econometrica*, 59(4): 1175–1187, 1991. K. C. Border. Reduced form auctions revisited. *Economic Theory*, 31: 167–181, 2007. M. Braverman, Y. Kun Ko, and O. Weinstein. Approximating the best [N]{}ash equilibrium in [$n^{o(\log n)}$]{}-time breaks the [E]{}xponential [T]{}ime [H]{}ypothesis.
In *Proceedings of the 26th [A]{}nnual [ACM]{}-[SIAM]{} [S]{}ymposium on [D]{}iscrete [A]{}lgorithms (SODA)*, pages 970–982, 2015. M. Braverman, Y. Kun Ko, A. Rubinstein, and O. Weinstein. [ETH]{} hardness for densest-[$k$]{}-subgraph with perfect completeness. In *Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, pages 1326–1341, 2017. G. W. Brown. Iterative solutions of games by fictitious play. In T. C. Koopmans, editor, *Activity Analysis of Production and Allocation*, Cowles Commission Monograph No. 13, chapter XXIV, pages 374–376. Wiley, 1951. Y. Cai, C. Daskalakis, and S. M. Weinberg. An algorithmic characterization of multi-dimensional mechanisms. In *Proceedings of the 44th Symposium on Theory of Computing (STOC)*, pages 459–478, 2012. Y. Cai, C. Daskalakis, and S. M. Weinberg. Optimal multi-dimensional mechanism design: Reducing revenue to welfare maximization. In *Proceedings of the 53rd Annual Symposium on Foundations of Computer Science (FOCS)*, pages 130–139, 2012. Y. Cai, O. Candogan, C. Daskalakis, and C. H. Papadimitriou. Zero-sum polymatrix games: A generalization of minmax. *Mathematics of Operations Research*, 41(2): 648–655, 2016. O. Candogan, A. Ozdaglar, and P. Parrilo. Iterative auction design for tree valuations. *Operations Research*, 63(4): 751–771, 2015. O. Candogan, A. Ozdaglar, and P. Parrilo. Pricing equilibria and graphical valuations. *ACM Transactions on Economics and Computation*, 6(1): 2, 2018. N. Cesa-Bianchi and G. Lugosi. *Prediction, Learning, and Games*. Cambridge University Press, 2006. N. Cesa-Bianchi, Y. Mansour, and G. Stolz. Improved second-order bounds for prediction with expert advice. *Machine Learning*, 66(2–3): 321–352, 2007. Y.-K. Che, J. Kim, and K. Mierendorff. Generalized reduced form auctions: A network flow approach. *Econometrica*, 81: 2487–2520, 2013. X. Chen and X. Deng. 3-[N]{}ash is [PPAD]{}-complete. Technical Report TR05-134, ECCC, 2005. X. Chen and X. Deng.
Settling the complexity of two-player [N]{}ash equilibrium. In *Proceedings of the 47th Annual Symposium on Foundations of Computer Science [(FOCS)]{}*, pages 261–270, 2006. X. Chen and X. Deng. On the complexity of [2D]{} discrete fixed point problem. *Theoretical Computer Science*, 410(44): 4448–4456, 2009. X. Chen, X. Deng, and S.-H. Teng. Computing [N]{}ash equilibria: Approximation and smoothed complexity. In *Proceedings of the 47th Annual Symposium on Foundations of Computer Science [(FOCS)]{}*, pages 603–612, 2006. X. Chen, X. Deng, and S.-H. Teng. Sparse games are hard. In *Proceedings of the Second Annual International Workshop on Internet and Network Economics [(WINE)]{}*, pages 262–273, 2006. X. Chen, X. Deng, and S.-H. Teng. Settling the complexity of computing two-player [Nash]{} equilibria. *Journal of the ACM*, 56(3): 14, 2009. Journal version of [[@CD05]]{}, [[@CD06]]{}, [[@CDT06]]{}, and [[@CDT06b]]{}. A. R. Choudhuri, P. [Hubáček]{}, C. Kamath, K. Pietrzak, A. Rosen, and G. N. Rothblum. Finding a [N]{}ash equilibrium is no easier than breaking [F]{}iat-[S]{}hamir. In *Proceedings of the 51st Annual [ACM]{} Symposium on Theory of Computing (STOC)*, pages 1103–1114, 2019. G. Christodoulou and E. Koutsoupias. On the price of anarchy and stability of correlated equilibria of linear congestion games. In *Proceedings of the 13th Annual European Symposium on Algorithms [(ESA)]{}*, pages 59–70, 2005. G. Christodoulou, A. Kovács, and M. Schapira. Bayesian combinatorial auctions. *Journal of the ACM*, 63(2): 11, 2016. G. Christodoulou, A. Kov[á]{}cs, A. Sgouritsa, and B. Tang. Tight bounds for the price of anarchy of simultaneous first price auctions. *ACM Transactions on Economics and Computation*, 4(2): 9, 2016. V. Chv[á]{}tal. *Linear Programming*. Freeman, 1983. V. Conitzer and T. Sandholm. Communication complexity as a lower bound for learning in games.
In *Proceedings of the Twenty-first International Conference on Machine Learning (ICML)*, 2004. G. B. Dantzig. A proof of the equivalence of the programming problem and the game problem. In T. C. Koopmans, editor, *Activity Analysis of Production and Allocation*, Cowles Commission Monograph No. 13, chapter XX, pages 330–335. Wiley, 1951. G. B. Dantzig. Reminiscences about the origins of linear programming. Technical Report SOL 81-5, Systems Optimization Laboratory, Department of Operations Research, Stanford University, 1981. C. Daskalakis and Q. Pan. A counter-example to [Karlin’s]{} strong conjecture for fictitious play. In *Proceedings of the 55th Annual Symposium on Foundations of Computer Science (FOCS)*, pages 11–20, 2014. C. Daskalakis and C. H. Papadimitriou. Three-player games are hard. Technical Report TR05-139, ECCC, 2005. C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a [N]{}ash equilibrium. In *Proceedings of the 38th Annual [ACM]{} Symposium on Theory of Computing [(STOC)]{}*, pages 71–78, 2006. C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a [Nash]{} equilibrium. *SIAM Journal on Computing*, 39(1): 195–259, 2009. Journal version of [[@DP05]]{}, [[@DGP06]]{}, and [[@GP06]]{}. C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a [N]{}ash equilibrium. *Communications of the ACM*, 52(2): 89–97, 2009. S. Dobzinski and J. Vondr[á]{}k. Communication complexity of combinatorial auctions with submodular valuations. In *Proceedings of the 24th Annual ACM-SIAM Symposium on Discrete Algorithms [(SODA)]{}*, pages 1205–1215, 2013. S. Dobzinski, N. Nisan, and M. Schapira. Approximation algorithms for combinatorial auctions with complement-free bidders. *Mathematics of Operations Research*, 35(1): 1–13, 2010. P. Dütting, V. Gkatzelis, and T. Roughgarden. The performance of deferred-acceptance auctions.
*Mathematics of Operations Research*, 42(4): 897–914, 2017. K. Etessami and M. Yannakakis. On the complexity of [N]{}ash equilibria and other fixed points. *SIAM Journal on Computing*, 39(6): 2531–2597, 2010. U. Feige. On maximizing welfare where the utility functions are subadditive. *SIAM Journal on Computing*, 39(1): 122–142, 2009. U. Feige and J. Vondr[á]{}k. The submodular welfare problem with demand queries. *Theory of Computing*, 6(1): 247–290, 2010. M. Feldman, H. Fu, N. Gravin, and B. Lucier. Simultaneous auctions are (almost) efficient. In *Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing*, pages 201–210, 2013. D. P. Foster and R. Vohra. Calibrated learning and correlated equilibrium. *Games and Economic Behavior*, 21(1–2): 40–55, 1997. A. Fr[é]{}chette, N. Newman, and K. Leyton-Brown. Solving the station repacking problem. In *Handbook of Spectrum Auction Design*, chapter 38, pages 813–827. Cambridge University Press, 2017. Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. *Journal of Computer and System Sciences*, 55(1): 119–139, 1997. Y. Freund and R. E. Schapire. Adaptive game playing using multiplicative weights. *Games and Economic Behavior*, 29(1–2): 79–103, 1999. D. Fudenberg and D. K. Levine. Consistency and cautious fictitious play. *Journal of Economic Dynamics and Control*, 19(5): 1065–1089, 1995. D. Gale, H. W. Kuhn, and A. W. Tucker. Linear programming and the theory of games. In T. C. Koopmans, editor, *Activity Analysis of Production and Allocation*, Cowles Commission Monograph No. 13, chapter XIX, pages 317–329. Wiley, 1951. A. Ganor, C. S. Karthik, and D. P[á]{}lv[ö]{}lgyi. On communication complexity of fixed point computation. arXiv:1909.10958, 2019. S. Garg, O. Pandey, and A. Srinivasan. Revisiting the cryptographic hardness of finding a [N]{}ash equilibrium.
In *Proceedings of the 36th Annual International Cryptology Conference on Advances in Cryptology (CRYPTO)*, pages 579–604, 2016. J. Geanakoplos. [N]{}ash and [W]{}alras equilibrium via [B]{}rouwer. *Economic Theory*, 21(2/3): 585–603, 2003. I. Gilboa and E. Zemel. Nash and correlated equilibria: Some complexity considerations. *Games and Economic Behavior*, 1(1): 80–93, 1989. V. Gkatzelis, E. Markakis, and T. Roughgarden. Deferred-acceptance auctions for multiple levels of service. In *Proceedings of the 18th Annual ACM Conference on Economics and Computation (EC)*, pages 21–38, 2017. P. W. Goldberg and C. H. Papadimitriou. Reducibility among equilibrium problems. In *Proceedings of the 38th Annual [ACM]{} Symposium on Theory of Computing [(STOC)]{}*, pages 61–70, 2006. M. G[öö]{}s. Lower bounds for clique [vs. independent]{} set. In *Proceedings of the 56th Annual Symposium on Foundations of Computer Science (FOCS)*, pages 1066–1076, 2015. M. [Göös]{} and T. Pitassi. Communication lower bounds via critical block sensitivity. *SIAM Journal on Computing*, 47(5): 1778–1806, 2018. M. [Göös]{} and A. Rubinstein. Near-optimal communication lower bounds for approximate [N]{}ash equilibria. In *Proceedings of the 59th Annual Symposium on Foundations of Computer Science [(FOCS)]{}*, pages 397–403, 2018. M. [Göös]{}, S. Lovett, R. Meka, T. Watson, and D. Zuckerman. Rectangles are nonnegative juntas. *SIAM Journal on Computing*, 45(5): 1835–1869, 2016. M. [Göös]{}, T. Pitassi, and T. Watson. Query-to-communication lifting for [BPP]{}. In *Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science*, pages 132–143, 2017. M. G[ö]{}[ö]{}s, T. Pitassi, and T. Watson. Deterministic communication vs. partition number. *SIAM Journal on Computing*, 47(6): 2435–2450, 2018. P. Gopalan, N. Nisan, and T. Roughgarden. Public projects, [Boolean]{} functions, and the borders of [Border’s]{} theorem.
*ACM Transactions on Economics and Computation*, 6(3–4): 18, 2018. F. Gul and E. Stacchetti. Walrasian equilibrium with gross substitutes. *Journal of Economic Theory*, 87: 95–124, 1999. J. Hannan. Approximation to [B]{}ayes risk in repeated play. In M. Dresher, A. W. Tucker, and P. Wolfe, editors, *Contributions to the Theory of Games*, volume 3, pages 97–139. Princeton University Press, 1957. S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. *Econometrica*, 68(5): 1127–1150, 2000. J. D. Hartline. Mechanism design and approximation. Book draft, July 2017. A. Hassidim, H. Kaplan, Y. Mansour, and N. Nisan. Non-price equilibria in markets of discrete goods. In *Proceedings of the 12th Annual ACM Conference on Economics and Computation (EC)*, pages 295–296, 2011. M. D. Hirsch, C. H. Papadimitriou, and S. A. Vavasis. Exponential lower bounds for finding [B]{}rouwer fix points. *Journal of Complexity*, 5(4): 379–416, 1989. P. [Hubáček]{} and E. Yogev. Hardness of continuous local search: Query complexity and cryptographic lower bounds. In *Proceedings of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)*, pages 1352–1371, 2017. P. [Hubáček]{}, M. Naor, and E. Yogev. The journey from [NP]{} to [TFNP]{} hardness. In *Proceedings of the 8th Conference on Innovations in Theoretical Computer Science (ITCS)*, 2017. Article 60. R. Impagliazzo and A. Wigderson. [P]{} = [BPP]{} if [E]{} requires exponential circuits: Derandomizing the [XOR]{} lemma. In *Proceedings of the 29th Annual [ACM]{} Symposium on Theory of Computing [(STOC)]{}*, pages 220–229, 1997. R. Impagliazzo, R. Paturi, and F. Zane. Which problems have strongly exponential complexity? *Journal of Computer and System Sciences*, 63(4): 512–530, 2001. A. X. Jiang and K. Leyton-Brown. Polynomial-time computation of exact correlated equilibrium in compact games. *Games and Economic Behavior*, 91: 347–359, 2015. D. S. Johnson. The [NP]{}-completeness column: Finding needles in haystacks. *ACM Transactions on Algorithms*, 3(2): 24, 2007.
D. S. Johnson, C. H. Papadimitriou, and M. Yannakakis. How easy is local search? *Journal of Computer and System Sciences*, 37(1): 79–100, 1988. S. Kakade, M. Kearns, J. Langford, and L. Ortiz. Correlated equilibria in graphical games. In *Proceedings of the 4th [ACM]{} Conference on Electronic Commerce*, pages 42–47, 2003. B. Kalyanasundaram and G. Schnitger. The probabilistic communication complexity of set intersection. *SIAM Journal on Discrete Mathematics*, 5(4): 545–557, 1992. S. Karlin. *Mathematical Methods and Theory in Games, Programming, and Economics*. Addison-Wesley, 1959. M. Kearns. Graphical games. In N. Nisan, T. Roughgarden, [É]{}. Tardos, and V. Vazirani, editors, *Algorithmic Game Theory*, chapter 7, pages 159–180. Cambridge University Press, 2007. M. Kearns, M. L. Littman, and S. Singh. Graphical models for game theory. In *Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI)*, pages 253–260, 2001. A. S. Kelso and V. P. Crawford. Job matching, coalition formation, and gross substitutes. *Econometrica*, 50(6): 1483–1504, 1982. L. G. Khachiyan. A polynomial algorithm in linear programming. *Soviet Mathematics Doklady*, 20(1): 191–194, 1979. T. H. Kjeldsen. John von [N]{}eumann’s conception of the [M]{}inimax theorem: A journey through different mathematical contexts. *Archive for History of Exact Sciences*, 56: 39–68, 2001. D. Koller and B. Milch. Multi-agent influence diagrams for representing and solving games. *Games and Economic Behavior*, 45(1): 181–221, 2003. S. Kopparty, O. Meir, N. Ron-Zewi, and S. Saraf. High-rate locally correctable and locally testable codes with sub-polynomial query complexity. *Journal of the ACM*, 64(2): 11, 2017. E. Koutsoupias and C. H. Papadimitriou. Worst-case equilibria. In *Proceedings of the 16th Annual Conference on Theoretical Aspects of Computer Science (STACS)*, pages 404–413, 1999. E. Kushilevitz and N. Nisan. *Communication Complexity*.
Cambridge University Press, 1996. C. Lautemann. [BPP]{} and the polynomial hierarchy. *Information Processing Letters*, 17(4): 215–217, 1983. B. Lehmann, D. Lehmann, and N. Nisan. Combinatorial auctions with decreasing marginal utilities. *Games and Economic Behavior*, 55(2): 270–296, 2006. C. E. Lemke and J. T. Howson, Jr. Equilibrium points of bimatrix games. *SIAM Journal*, 12(2): 413–423, 1964. K. Leyton-Brown, P. Milgrom, and I. Segal. Economics and computer science of a radio spectrum reallocation. *Proceedings of the National Academy of Sciences (PNAS)*, 114(28): 7202–7209, 2017. R. J. Lipton and N. E. Young. Simple strategies for large zero-sum games with applications to complexity theory. In *Proceedings of the 26th Annual [ACM]{} Symposium on Theory of Computing (STOC)*, pages 734–740, 1994. R. J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. In *Proceedings of the 4th ACM Conference on Electronic Commerce (EC)*, pages 36–41, 2003. N. Littlestone and M. K. Warmuth. The weighted majority algorithm. *Information and Computation*, 108(2): 212–261, 1994. E. Maskin and J. Riley. Optimal auctions with risk-averse buyers. *Econometrica*, 52(6): 1473–1518, 1984. S. A. Matthews. On the implementability of reduced form auctions. *Econometrica*, 52(6): 1519–1522, 1984. A. [McLennan]{}. *Advanced Fixed Point Theory for Economics*. Springer, 2018. A. [McLennan]{} and R. Tourky. From imitation games to [K]{}akutani. Unpublished manuscript, 2006. N. Megiddo and C. H. Papadimitriou. On total functions, existence theorems and computational complexity. *Theoretical Computer Science*, 81(2): 317–324, 1991. P. Milgrom. Putting auction theory to work: The simultaneous ascending auction. *Journal of Political Economy*, 108(2): 245–272, 2000. P. Milgrom. *Putting Auction Theory to Work*. Churchill Lectures in Economics. Cambridge University Press, 2004. P. Milgrom and I. Segal. Clock auctions and radio spectrum reallocation.
*Journal of Political Economy*, 2020. To appear. W. D. [Morris, Jr.]{} Lemke paths on simple polytopes. *Mathematics of Operations Research*, 19(4): 780–789, 1994. H. Moulin and J. P. Vial. Strategically zero-sum games: The class of games whose completely mixed equilibria cannot be improved upon. *International Journal of Game Theory*, 7(3–4): 201–221, 1978. R. Myerson. Optimal auction design. *Mathematics of Operations Research*, 6(1): 58–73, 1981. S. Nasar. *A Beautiful Mind: a Biography of John Forbes Nash, Jr., Winner of the Nobel Prize in Economics, 1994*. Simon [&]{} Schuster, 1998. J. F. Nash, Jr. Equilibrium points in [$N$]{}-person games. *Proceedings of the National Academy of Sciences*, 36(1): 48–49, 1950. J. F. Nash, Jr. Non-cooperative games. *Annals of Mathematics*, 54(2): 286–295, 1951. N. Nisan. The communication complexity of approximate set packing and covering. In *Proceedings of the 29th International Colloquium on Automata, Languages and Programming (ICALP)*, pages 868–875, 2002. N. Nisan and I. Segal. The communication requirements of efficient allocations and supporting prices. *Journal of Economic Theory*, 129(1): 192–224, 2006. C. H. Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. *Journal of Computer and System Sciences*, 48(3): 498–532, 1994. C. H. Papadimitriou. The complexity of finding [N]{}ash equilibria. In N. Nisan, T. Roughgarden, [É]{}. Tardos, and V. V. Vazirani, editors, *Algorithmic Game Theory*, chapter 2, pages 29–51. Cambridge, 2007. C. H. Papadimitriou and T. Roughgarden. Computing correlated equilibria in multi-player games. *Journal of the ACM*, 55(3): 14, 2008. R. Pass and M. Venkitasubramaniam. A round-collapse theorem for computationally-sound protocols; or, [TFNP]{} is hard (on average) in [P]{}essiland. arXiv:1906.10837, 2019. R. Raz and P. McKenzie. Separation of the monotone [NC]{} hierarchy. *Combinatorica*, 19(3): 403–435, 1999. R.
Raz and A. Wigderson. Monotone circuits for matching require linear depth. *Journal of the ACM*, 39(3): 736–744, 1994. A. A. Razborov. On the distributional complexity of disjointness. *Theoretical Computer Science*, 106(2): 385–390, 1992. J. Robinson. An iterative method of solving a game. *Annals of Mathematics*, pages 296–301, 1951. A. Rosen, G. Segev, and I. Shahaf. Can [PPAD]{} hardness be based on standard cryptographic assumptions? In *Proceedings of the 15th International Conference on Theory of Cryptography (TCC)*, pages 173–205, 2017. T. Roughgarden. *Selfish Routing and the Price of Anarchy*. MIT Press, 2005. T. Roughgarden. Computing equilibria: A computational complexity perspective. *Economic Theory*, 42(1): 193–236, 2010. T. Roughgarden. Barriers to near-optimal equilibria. In *Proceedings of the 55th Annual Symposium on Foundations of Computer Science (FOCS)*, pages 71–80, 2014. T. Roughgarden. Lecture notes, Stanford University, 2014. T. Roughgarden. Intrinsic robustness of the price of anarchy. *Journal of the ACM*, 62(5): 32, 2015. T. Roughgarden. *Twenty Lectures on Algorithmic Game Theory*. Cambridge University Press, 2016. T. Roughgarden. Communication complexity (for algorithm designers). *Foundations and Trends in Theoretical Computer Science*, 11(3–4): 217–404, 2016. T. Roughgarden and I. [Talgam-Cohen]{}. Why prices need algorithms. In *Proceedings of the 16th Annual ACM Conference on Economics and Computation (EC)*, pages 19–36, 2015. T. Roughgarden and [É]{}. Tardos. How bad is selfish routing? *Journal of the ACM*, 49(2): 236–259, 2002. T. Roughgarden and O. Weinstein. On the communication complexity of approximate fixed points. In *Proceedings of the 57th Annual Symposium on Foundations of Computer Science (FOCS)*, pages 229–238, 2016. T. Roughgarden, V. Syrgkanis, and [É]{}. Tardos. The price of anarchy in auctions. *Journal of Artificial Intelligence Research*, 59: 59–101, 2017. A. Rubinstein.
Settling the complexity of computing approximate two-player [N]{}ash equilibria. In *Proceedings of the 57th Annual IEEE Symposium on Foundations of Computer Science*, pages 258–265, 2016. R. Savani and B. von Stengel. Hard-to-solve bimatrix games. *Econometrica*, 74(2): 397–429, 2006. A. Schrijver. *Theory of Linear and Integer Programming*. Wiley, 1986. L. S. Shapley. Some topics in two-person games. In M. Dresher, L. S. Shapley, and A. W. Tucker, editors, *Advances in Game Theory*, pages 1–28. Princeton University Press, 1964. E. Solan and R. Vohra. Correlated equilibrium payoffs and public signalling in absorbing games. *International Journal of Game Theory*, 31(1): 91–121, 2002. E. Sperner. Neuer [B]{}eweis für die [I]{}nvarianz der [D]{}imensionszahl und des [G]{}ebietes. *Abhandlungen aus dem Mathematischen Seminar der Universit[ä]{}t Hamburg*, 6(1): 265–272, 1928. D. A. Spielman. The complexity of error-correcting codes. In *Proceedings of the 11th International Symposium on Fundamentals of Computation Theory*, pages 67–84, 1997. D. A. Spielman and S.-H. Teng. Smoothed analysis: Why the simplex algorithm usually takes polynomial time. *Journal of the ACM*, 51(3): 385–463, 2004. N. Sun and Z. Yang. Equilibria and indivisibilities: Gross substitutes and complements. *Econometrica*, 74(5): 1385–1402, 2006. V. Syrgkanis and [É]{}. Tardos. Composable and efficient mechanisms. In *Proceedings of the 45th ACM Symposium on Theory of Computing (STOC)*, pages 211–220, 2013. S. Toda. [PP]{} is as hard as the polynomial-time hierarchy. *SIAM Journal on Computing*, 20(5): 865–877, 1991. A. Vetta. Nash equilibria in competitive societies, with applications to facility location, traffic routing and auctions. In *Proceedings of the 43rd Annual Symposium on Foundations of Computer Science (FOCS)*, pages 416–425, 2002. W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. *Journal of Finance*, 16(1): 8–37, 1961. J. Ville. Sur la th[é]{}orie g[é]{}n[é]{}rale des jeux ou intervient l’habilet[é]{} des joueurs.
Fascicule 2 in Volume 4 of [É.]{} Borel, *Trait[é]{} du Calcul des probabilit[é]{}s et de ses applications*, pages 105–113. Gauthier-Villars, 1938. J. von Neumann. Zur [T]{}heorie der [G]{}esellschaftsspiele. *Mathematische Annalen*, 100: 295–320, 1928. J. von Neumann and O. Morgenstern. *Theory of Games and Economic Behavior*. Princeton University Press, 1944. B. von Stengel. Equilibrium computation for two-player games in strategic and extensive form. In N. Nisan, T. Roughgarden, [É]{}. Tardos, and V. Vazirani, editors, *Algorithmic Game Theory*, chapter 3, pages 53–78. Cambridge University Press, 2007. [^1]: Cris Moore: “So when are the [*stellar*]{} lectures?” [^2]: Anil Ada, Amey Bhangale, Shant Boodaghians, Sumegha Garg, Valentine Kabanets, Antonina Kolokolova, Michal Koucký, Cristopher Moore, Pavel Pudlák, Dana Randall, Jacobo Torán, Salil Vadhan, Joshua R. Wang, and Omri Weinstein. [^3]: Why can’t we use the tried-and-true theory of ${\mathsf{NP}}$-completeness? Because the guaranteed existence (Theorem \[t:nash\]) and efficient verifiability of a Nash equilibrium imply that computing one is an easier task than solving an ${\mathsf{NP}}$-complete problem, under appropriate complexity assumptions (see Theorem \[t:mp91\]). [^4]: Not an oxymoron! [^5]: <https://en.wikipedia.org/wiki/Rock-paper-scissors> [^6]: Here are some fun facts about rock-paper-scissors. There’s a World Series of RPS every year, with a top prize of at least \$50K. If you watch some videos from the event, you will see pure psychological warfare. Maybe this explains why some of the same players seem to end up in the later rounds of the tournament every year. There’s also a robot hand, built at the University of Tokyo, that plays rock-paper-scissors with a winning probability of 100% (check out the video). No surprise, a very high-speed camera is involved. [^7]: This is without loss of generality, by scaling.
[^8]: A [*pure strategy*]{} is the special case of a mixed strategy that is deterministic (i.e., allots all its probability to a single strategy). [^9]: Dantzig [@D81 p.5] describes meeting John von Neumann on October 3, 1947: “In under a minute I slapped the geometric and the algebraic version of the \[linear programming\] problem on the blackboard. Von Neumann stood up and said ‘Oh that!’ Then for the next hour and a half, he proceeded to give me a lecture on the mathematical theory of linear programs. “At one point seeing me sitting there with my eyes popping and my mouth open (after all I had searched the literature and found nothing), von Neumann said: ‘I don’t want you to think I am pulling all this out of my sleeve on the spur of the moment like a magician. I have just recently completed a book with Oskar Morgenstern on the Theory of Games. What I am doing is conjecturing that the two problems are equivalent.’’ This equivalence between strong linear programming duality and the Minimax theorem is made precise in Dantzig [@D51], Gale et al. [@GKT51], and Adler [@A13]. [^10]: If you think you learned this definition from the movie *A Beautiful Mind*, it’s time to learn the correct definition! [^11]: Games can even have a collaborative aspect, for example if you and I want to meet at some intersection in Manhattan. Our strategies are intersections, and either we both get a high payoff (if we choose the same strategy) or we both get a low payoff (otherwise). [^12]: Notice that three-player zero-sum games are already more general than bimatrix games—to turn one of the latter into one of the former, add a dummy third player with only one strategy whose payoff is the negative of the combined payoff of the original two players. Thus the most compelling negative results would be for the case of bimatrix games. [^13]: Recall our “meeting in Manhattan” example—every intersection is a Nash equilibrium! 
[^14]: If a player knows the game is zero-sum and also her own payoff matrix, then she automatically knows the other player’s payoff matrix. Nonetheless, it is non-trivial and illuminating to investigate the convergence properties of general-purpose uncoupled dynamics in the zero-sum case, thereby identifying an aspiration point for the analysis of general games. [^15]: In the first time step, Alice and Bob both choose a default strategy, such as the uniform distribution. [^16]: Recall our assumption that payoffs have been scaled to lie in $[-1,1]$. [^17]: This communication bound applies to the variant of smooth fictitious play where Alice (respectively, Bob) learns only a random sample from $y^t$ (respectively, $x^t$); see footnote \[foot:feedback\]. Each such sample can be communicated to the other player in $\log (n+m)$ bits. Theorem \[t:sfp\] continues to hold (with high probability over the samples) for this variant of smooth fictitious play [@FL95; @FS99].\[foot:uncoupled\] [^18]: The communication complexity of computing anything about a two-player zero-sum game is zero—Alice knows the entire game at the beginning (as Bob’s payoff is the negative of hers) and can unilaterally compute whatever she wants. But it still makes sense to ask if the communication bound implied by smooth fictitious play can be replicated in non-zero-sum games (where Alice and Bob initially know only their own payoff matrices). [^19]: The relevance of communication complexity to fast learning in games was first pointed out by @CS04. [^20]: Also known as the “Hedge” algorithm. The closely related “Multiplicative Weights” algorithm uses the update rule $w^{t+1}(a) = w^t(a) \cdot (1 + \eta^t r^t(a))$ instead of $w^{t+1}(a) = w^t(a) \cdot (e^{\eta^t r^t(a)})$ [@CBMS07]. [^21]: There is no hope of competing with the best action [*sequence*]{} in hindsight: consider two actions and an adversary that flips a coin each time step to choose between the reward vectors $(1,0)$ and $(0,1)$.
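The Hedge update rule $w^{t+1}(a) = w^t(a) \cdot e^{\eta^t r^t(a)}$ quoted in the footnote above can be sketched in a few lines of code (a toy sketch with a fixed step size `eta`; the function name and reward encoding are mine, not from the cited papers):

```python
import math
import random

def hedge(rewards, n_actions, eta):
    """Run the Hedge (exponential-weights) update rule on a reward sequence:
    play from the normalized weights, then set w(a) <- w(a) * exp(eta * r(a))."""
    w = [1.0] * n_actions
    cumulative = 0.0
    for r in rewards:                       # r is this round's reward vector
        total = sum(w)
        p = [wi / total for wi in w]        # mixed strategy for this round
        a = random.choices(range(n_actions), weights=p)[0]
        cumulative += r[a]
        w = [wi * math.exp(eta * ri) for wi, ri in zip(w, r)]
    return cumulative, w

random.seed(0)
# Action 1 always earns reward 1; the weights concentrate on it exponentially fast.
T, eta = 50, 0.1
reward, w = hedge([[0.0, 1.0]] * T, n_actions=2, eta=eta)
```

After $T$ rounds the weight on the always-good action is $e^{\eta T}$ while the other stays at $1$, which is the mechanism behind the no-regret guarantee.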
[^22]: For the matching lower bound, with $n$ actions, consider an adversary that sets the reward of each action uniformly at random from $\{-1,1\}$ at each time step. Every online algorithm earns expected cumulative reward 0, while the expected cumulative reward of the best action in hindsight is $\Theta(\sqrt{T} \cdot \sqrt{\log n})$. [^23]: We already mentioned Shapley’s 1964 example showing that fictitious play need not converge [@S64]. [^24]: In older game theory texts, this example is called the “Battle of the Sexes.” [^25]: Fun fact: outside of degenerate cases, every game has an [*odd*]{} number of Nash equilibria (see also Solar Lecture 4). [^26]: Von Neumann’s alleged reaction when Nash told him his theorem [@nasar P.94]: “That’s trivial, you know. That’s just a fixed point theorem.” [^27]: Exercise: prove this by showing that, after you’ve guessed the two support sets of a Nash equilibrium, you can recover the exact probabilities using two linear programs. [^28]: @A94 and @LY94 independently proved a precursor to this result in the special case of zero-sum games. The focus of the latter paper is applications in complexity theory (like “anticheckers”). [^29]: Exercise: there are arbitrarily large games where every exact Nash equilibrium has full support. Hint: generalize rock-paper-scissors. Alternatively, see Section \[ss:althofer\] of Solar Lecture 5. [^30]: By a padding argument, there is no loss of generality in assuming that Alice and Bob have the same number of strategies. [^31]: This sufficient condition has its own name: a [ *well-supported ${\epsilon\text{-}\mathsf{NE}}$.*]{} [^32]: This $\Omega(N^c)$ lower bound was recently improved to $\Omega(N^{2-o(1)})$ by Göös and Rubinstein [@GR18] (for constant ${\epsilon}> 0$ and $N \rightarrow \infty$). The proof follows the same high-level road map used here (see Section \[s:map\]), with a number of additional optimizations. [^33]: See also footnote \[foot:uncoupled\] in Solar Lecture 1. 
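For the simplest instance of footnote 27’s exercise, once the guessed supports are full the indifference conditions become linear and can be solved directly (no linear programming machinery is even needed in the $2 \times 2$ case). A sketch with payoff matrices invented for illustration:

```python
def fully_mixed_ne_2x2(A, B):
    """Solve the indifference conditions for a fully mixed 2x2 equilibrium:
    the row player's mix (p, 1-p) makes the column player indifferent between
    her two actions, and the column player's mix (q, 1-q) does the same for
    the row player. A sketch of footnote 27's idea in its simplest case."""
    # column indifferent: p*B[0][0] + (1-p)*B[1][0] = p*B[0][1] + (1-p)*B[1][1]
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # row indifferent: q*A[0][0] + (1-q)*A[0][1] = q*A[1][0] + (1-q)*A[1][1]
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Coordination game with asymmetric preferences (payoffs assumed for
# illustration): each player prefers a different meeting point.
A = [[2, 0], [0, 1]]  # row player's payoffs
B = [[1, 0], [0, 2]]  # column player's payoffs
p, q = fully_mixed_ne_2x2(A, B)
print(p, q)  # row mixes (2/3, 1/3); column mixes (1/3, 2/3)
```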
[^34]: When @BR17 first proved their result (in late 2016), the state-of-the-art in simulation theorems for randomized protocols was much more primitive than for deterministic protocols. This forced @BR17 to use a relatively weak simulation theorem for the randomized case (by @G+16), which led to a number of additional technical details in the proof. Amazingly, a full-blown randomized simulation theorem was published shortly thereafter [@A+17; @GPW17]! With this in hand, extending the argument here for deterministic protocols to randomized protocols is relatively straightforward. [^35]: Recall the [[Disjointness]{}]{} function: Alice and Bob have input strings $a,b \in \{0,1\}^n$, and the output of the function is “0” if there is a coordinate $i \in \{1,2,\ldots,n\}$ with $a_i = b_i = 1$ and “1” otherwise. One of the first things you learn in communication complexity is that the nondeterministic communication complexity of [[Disjointness]{}]{} (for certifying 1-inputs) is $n$ (see e.g. [@KN96; @w15]). And of course one of the most famous and useful results in communication complexity is that the function’s randomized communication complexity (with two-sided error) is $\Omega(n)$ [@KS92; @R92]. [^36]: Mika Göös (personal communication, January 2018) points out that there are more clever reductions from [[Disjointness]{}]{}, starting with Raz and Wigderson [@RW90], that [*can*]{} imply strong lower bounds on the randomized communication complexity of certain problems with low nondeterministic communication complexity; and that it is plausible that a Raz-Wigderson-style proof, such as that for search problems in Göös and Pitassi [@GP18], could be adapted to give an alternative proof of Theorem \[t:br17\]. [^37]: If convexity is dropped, consider rotating an annulus centered at the origin. If boundedness is dropped, consider $x \mapsto x+1$ on ${{\mathbb R}}$. If closedness is dropped, consider $x \mapsto \tfrac{x}{2}$ on $(0,1]$.
If continuity is dropped, consider $x \mapsto (x + \tfrac{1}{2}) \bmod 1$ on $[0,1]$. Many more general fixed-point theorems are known, and find applications in economics and elsewhere; see e.g. [@B85; @M15]. [^38]: Recall that a function $f$ mapping a metric space $(X,d)$ to itself is [*$\lambda$-Lipschitz*]{} if $d(f(x),f(y)) \le \lambda \cdot d(x,y)$ for all $x,y \in X$. That is, the function can only amplify distances between points by a $\lambda$ factor. [^39]: In fact, the story behind von Neumann’s original proof of the Minimax theorem is a little more complicated and nuanced; see @K01 for a fascinating and detailed discussion. [^40]: This discussion is borrowed from [@f13 Lecture 20]. [^41]: For an analogy, a “generic” hard decision problem for the complexity class ${\mathsf{NP}}$ is: given a description of a polynomial-time verifier, does there exist a witness (i.e., an input accepted by the verifier)? [^42]: The same result and proof extend by induction to higher dimensions. Every subdivided simplex in ${{\mathbb R}}^n$ with vertices legally colored with $n+1$ colors has an odd number of panchromatic subsimplices, with a different color at each vertex.\[foot:sperner\] [^43]: We’re glossing over some details. The graph in an instance of [[EoL]{}]{} is directed, while the graph $G$ defined in the proof of Theorem \[t:sperner\] is undirected. There is, however, a canonical way to direct the edges of the graph $G$. Also, the canonical source vertex in an [[EoL]{}]{} instance has out-degree 1, while the source of the graph $G$ has degree $2k-1$ for some positive integer $k$. This can be rectified by splitting the source vertex of $G$ into $k$ vertices, a source vertex with out-degree 1 and $k-1$ vertices with in- and out-degree 1.
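The one-dimensional base case behind footnote 42 can be checked mechanically: on a subdivided segment whose endpoints are colored 0 and 1, the number of bichromatic edges is always odd, whatever the interior coloring. A small sketch (the helper is illustrative, not from the text):

```python
import random

def bichromatic_edges(coloring):
    """Count edges of a subdivided segment whose endpoints have different colors."""
    return sum(1 for a, b in zip(coloring, coloring[1:]) if a != b)

# Legal colorings fix the two endpoints at 0 and 1; interior vertices are free.
random.seed(0)
for _ in range(100):
    interior = [random.randint(0, 1) for _ in range(20)]
    coloring = [0] + interior + [1]
    # the color must flip an odd number of times to get from 0 to 1
    assert bichromatic_edges(coloring) % 2 == 1
print("all colorings have an odd number of bichromatic edges")
```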
[^44]: Every compact convex subset of finite-dimensional Euclidean space is homeomorphic to a simplex of the same dimension (by scaling and radial projection, essentially), and homeomorphisms preserve fixed points, so Brouwer’s fixed-point theorem carries over from simplices to all compact convex subsets of Euclidean space.\[foot:radial\] [^45]: Very recently, @GKP19 showed how to implement directly the road map of @RW16, thereby giving an alternative proof of Theorem \[t:br17\]. [^46]: For the proof of Theorem \[t:br17\], we could restrict attention to instances that are consistent in the sense that $succ(v)=w$ if and only if $pred(w)=v$. The computational hardness results in Solar Lectures 4 and 5 require the general (non-promise) version of the problem stated here. [^47]: A similar argument, based on choosing a Hamiltonian path of $V$ at random, implies an $\Omega(N)$ lower bound for the randomized query complexity as well. [^48]: This monograph does not reflect a beautiful lecture given by Omri Weinstein at the associated workshop on the history and applications of simulation theorems (e.g., to the first non-trivial lower bounds for the clique vs. independent set problem [@G15]). Contact him for his slides! [^49]: Open question: prove a simulation theorem for quantum computation. [^50]: For typechecking reasons, the argument for randomized protocols needs to work with a decision version of the [[EoL]{}]{}problem, such as “is the least significant bit of the vertex at the end of the Hamiltonian path equal to 1?” [^51]: @DBLP:journals/combinatorica/RazM99 stated their result for the binary alphabet and for total functions. @GPW18 note that it applies more generally to arbitrary alphabets and partial functions, which is important for its application here. For further proof details of these extensions, see @RW16. 
[^52]: By an [*embedding*]{}, we mean a function $\sigma$ that maps each edge $(v,w)$ of $K$ to a continuous path in $H$ with endpoints $\sigma(v)$ and $\sigma(w)$. [^53]: In the original construction of @HPV89, vertices of $K$ could potentially be mapped to points of $H$ that differ significantly in only one coordinate. This construction is good enough to prevent spurious approximate fixed points in the $\ell_{\infty}$ norm, but not in the normalized $\ell_2$ norm. [^54]: For reasons related to the omitted technical details, it’s convenient to have a “buffer zone” between the embedding of the graph and the boundary of the hypercube. [^55]: In the two-party communication model, we need not be concerned about efficiently constructing such an embedding. Because Alice and Bob have unbounded computational power, they can both compute the lexicographically first such embedding in advance of the protocol. When we consider computational lower bounds in Solar Lecture 5, we’ll need an efficient construction. [^56]: As suggested by Figure \[f:hpv\], in the final construction it’s important to use a more nuanced classification that “interpolates” between points in the three different categories. It will still be the case that Alice and Bob can classify any point $x \in H$ appropriately without any communication. [^57]: Recall from Corollary \[cor:cceol\] that the [[2EoL]{}]{} problem is already hard in the special case where the encoded graph $G$ is guaranteed to be a Hamiltonian path. [^58]: This reduction was popularized in a Leisure of the Theory Class blog post by Eran Shmaya (<https://theoryclass.wordpress.com/2012/01/05/brouwer-implies-nash-implies-brouwer/>), who heard about the result from Rida Laraki. [^59]: If fixed points are guaranteed for hypercubes in every dimension, then they are also guaranteed for all compact convex subsets of finite-dimensional Euclidean space; see footnote \[foot:radial\] in Solar Lecture 2.
[^60]: Strictly speaking, we’re assuming a more general form of Nash’s theorem that asserts the existence of a pure Nash equilibrium whenever every player has a convex compact strategy set (like $H$) and a continuous concave payoff function (like  and ). (The version in Theorem \[t:nash\] corresponds to the special case where each strategy set corresponds to a finite-dimensional simplex of mixed strategies, and where all payoff functions are linear.) Most proofs of Nash’s theorem—including the one outlined in Section \[ss:nashpf\]—are straightforward to generalize in this way. [^61]: It is not clear how to easily extract an approximate fixed point in the $\ell_{\infty}$ norm from an approximate Nash equilibrium without losing a super-constant factor in the parameters. The culprit is the “$\tfrac{1}{d}$” factor in  and —needed to ensure that payoffs are bounded—which allows each player to behave in an arbitrarily crazy way in a few coordinates without violating the ${\epsilon}$-approximate Nash equilibrium conditions. (Recall ${\epsilon}> 0$ is constant while $d \rightarrow \infty$.) This is one of the primary reasons why @R16 and @BR17 needed to modify the construction in @HPV89 to obtain their results. [^62]: If $x$ decodes to the edge $(u,v)$, then Alice and Bob exchange information about $u$ and $v$ in two rounds. If $x$ decodes to the vertex $v$, they exchange information about $v$ in two rounds. This reveals $v$’s opinion of its predecessor $u$ and successor $w$. In the general case, Alice and Bob would still need to exchange information about $u$ and $w$ using two more rounds of communication to confirm that $succ(u)=pred(w)=v$. (Recall our semantics: directed edge $(v,w)$ belongs to $G$ if and only if both $succ(v)=w$ and $pred(w)=v$.) In the special case of instances where $succ(v) = w$ if and only if $v = pred(w)$, these two extra rounds of communication are redundant. 
[^63]: In the protocol $P$, Bob does not need to communicate the names of any vertices—Alice can decode $x$ privately. But it’s convenient for the reduction to include the names of the vertices relevant for $x$ in the $\beta$ component of Bob’s strategy. [^64]: If you want to be a stickler and insist on payoffs in $[0,1]$, then shift and scale the payoffs in  appropriately. [^65]: The fact that $f$ is $O(1)$-Lipschitz is important for carrying out the last of these steps. [^66]: Some of the discussion in these two sections is drawn from [@f13 Lecture 20]. [^67]: Technically, we’re referring to the [*search*]{} version of ${\mathsf{NP}}$ (sometimes called ${\mathsf{FNP}}$, where the “F” stands for “functional”), where the goal is to either exhibit a witness or correctly deduce that no witness exists. [^68]: Crucially, $\A_1(\phi)$ has at least one Nash equilibrium, including one whose description length is polynomial in that of the game (see Theorem \[t:nash\] and the subsequent discussion). [^69]: There are many other interesting examples of classes that appear to be semantic in this sense, such as ${\mathsf{RP}}$ and ${\mathsf{NP}}\cap {\mathsf{co}\mbox{-}\mathsf{NP}}$. [^70]: There are many other natural examples of ${\mathsf{TFNP}}$ problems, including computing a local minimum of a function, computing an approximate Brouwer fixed point, and inverting a one-way permutation. [^71]: ${\mathsf{PLS}}$ was actually defined prior to ${\mathsf{TFNP}}$, by @JPY88. [^72]: The undirected version of the problem can be used to define the class ${\mathsf{PPA}}$. The version of the problem where only sink vertices count as witnesses seems to give rise to a different (larger) complexity class called ${\mathsf{PPADS}}$. [^73]: For example, for the $\ell_{\infty}$ norm, existence of such a point is guaranteed with ${\epsilon}$ as small as $\tfrac{(\lambda+1)}{2^n}$, where $n$ is the description length of $f$. 
This follows from rounding each coordinate of an exact fixed point to its nearest multiple of $2^{-n}$. [^74]: @EY07 proved that, with 3 or more players, the problem of computing an exact Nash equilibrium of a game appears to be strictly harder than any problem in ${\mathsf{PPAD}}$. [^75]: The one-dimensional case can be solved in polynomial time, essentially by binary search. [^76]: Theorem \[t:brouwer1\] proves hardness in the regime where $d$ and ${\epsilon}$ are both small, Theorem \[t:brouwer2\] when both are large. This is not an accident; if $d$ is small (i.e., constant) and ${\epsilon}$ is large (i.e., constant), the problem can be solved in polynomial time by exhaustively checking a constant number of evenly spaced grid points. [^77]: We have reversed the chronology; Theorem \[t:br17\] was proved after Theorem \[t:brouwer2\] and used the construction in [@R16] more or less as a black box. [^78]: This embedding is defined only for the directed edges that are present in the given [[EoL]{}]{} instance, rather than for all possible edges (in contrast to the embedding in Sections \[ss:embed1\] and \[ss:embed2\]). [^79]: In particular, under standard complexity assumptions, this rules out an algorithm for computing an exact Nash equilibrium of a bimatrix game that has smoothed polynomial complexity in the sense of @ST04. Thus the parallels between the simplex method and the Lemke-Howson algorithm (see Section \[s:bimatrix\]) only go so far. [^80]: The amount of fine print was reduced very recently by @PV19. [^81]: To obtain a quantitative lower bound like the conclusion of Theorem \[thm:Aviad\], it is necessary to make a quantitative complexity assumption (like an analog of ETH). This approach belongs to the tradition of “fine-grained” complexity theory. [^82]: How plausible is the assumption that the ETH holds for ${\mathsf{PPAD}}$, even after assuming that the ETH holds for ${\mathsf{NP}}$ and that ${\mathsf{PPAD}}$ has no polynomial-time algorithms?
The answer is far from clear, although there are exponential query lower bounds for ${\mathsf{PPAD}}$ problems (e.g. [@HPV89]) and no known techniques that show promise for a subexponential-time algorithm for the succinct [[EoL]{}]{} problem. [^83]: Similar ideas have been used previously, including in the proofs that computing an ${\epsilon}$-approximate Nash equilibrium with ${\epsilon}$ inverse polynomial in $n$ is a ${\mathsf{PPAD}}$-complete problem [@DGP09; @CDT09]. [^84]: The statement and proof here include a constant-factor improvement, due to Salil Vadhan, over those in [@R16]. [^85]: The ${\tilde{O}}(\cdot)$ notation suppresses logarithmic factors. [^86]: In more detail, in every ${\epsilon}$-approximate Nash equilibrium of the game, Alice and Bob both randomize nearly uniformly over $i$ and $j$; this is enforced by the Althöfer games as in Section \[ss:althofer\]. Now think of each player as choosing its strategy in two stages, first the index $i$ or $j$ and then the corresponding values $x_{i*}$ or $y_{*j}$ in the row or column. Whenever Alice plays $i$, her best response (conditioned on $i$) is to play $\mathbf{E}\!\left[y_{ij}\right]$ in every column $j$, where the expectation is over the distribution of $y_{ij}$ conditioned on Bob choosing index $j$. In an ${\epsilon}$-approximate Nash equilibrium, in most coordinates, Alice must usually choose $x_{ij}$’s that are close to this best response. Similarly, for most indices $j \in \left[\sqrt{d}\right]$, whenever Bob chooses $j$, he must usually choose a value of $y_{ij}$ that is close to $\mathbf{E}\!\left[f_{ij}(x_{ij})\right]$ (for each $i$). It can be shown that these facts imply that Alice’s strategy corresponds to a $\delta$-approximate fixed point (in the normalized $\ell_2$ norm), where $\delta$ is a function of ${\epsilon}$ only.
[^87]: It is also important that minimal advice suffices to translate between points $x$ of the hypercube and vertices $v$ of the underlying succinct [[EoL]{}]{} instance (as $f$ is defined on the former, while $S$ and $P$ operate on the latter). This can be achieved by using a state-of-the-art locally decodable error-correcting code (with query complexity $d^{o(1)}$, similar to that in @KMRS17) to embed the vertices into the hypercube (as described in Section \[ss:ppadbfp\]). Incorporating the advice that corresponds to local decoding into the game produced by the reduction results in a further blow-up of $2^{d^{o(1)}}$. This is effectively absorbed by the $2^{\sqrt{d}}$ blow-up that is already present in the reduction in Section \[ss:blockwise\]. [^88]: This was the plan all along, which is probably one of the reasons the bill didn’t have trouble passing a notoriously partisan Congress. Another reason might be the veto-proof title of the bill: “The Middle Class Tax Relief and Job Creation Act.” [^89]: The FCC Incentive Auction wound up clearing 84 MHz of spectrum (14 channels). [^90]: The actual repacking problem was more complicated—overlapping stations cannot even be assigned adjacent channels, and there are idiosyncratic constraints at the borders with Canada and Mexico. See @LMS17 for more details. But the essence of the repacking problem really is graph coloring. [^91]: Before the auction makes a lower offer to some remaining broadcaster in the auction, it needs to check that it would be OK for the broadcaster to decline and drop out of the auction. If a station’s dropping out would render the repacking problem infeasible, then that station’s buyout price remains frozen until the end of the auction. [^92]: A typical representative instance would have thousands of variables and tens of thousands of constraints.
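Footnote 90’s point, that repacking is essentially graph coloring, can be sketched as a feasibility check of the kind footnote 91 describes. The stations, conflicts, and channel set below are invented toy data, and plain backtracking stands in for the SAT-based solvers used in the actual auction:

```python
def repackable(stations, conflicts, channels):
    """Backtracking check: can every station be assigned a channel so that no
    two conflicting (interfering) stations share one? A toy model of the
    repacking feasibility check; the real problem has many more constraints."""
    assign = {}
    def place(i):
        if i == len(stations):
            return True
        s = stations[i]
        for c in channels:
            if all(assign.get(t) != c for t in conflicts.get(s, ())):
                assign[s] = c
                if place(i + 1):
                    return True
                del assign[s]  # backtrack
        return False
    return place(0)

# Three mutually interfering stations need three distinct channels.
conflicts = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
print(repackable(["A", "B", "C"], conflicts, [1, 2]))     # False
print(repackable(["A", "B", "C"], conflicts, [1, 2, 3]))  # True
```

In the auction's loop, a `False` answer is exactly the event that freezes a station's buyout offer.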
[^93]: Every time the repacking algorithm fails to find a repacking when one exists, money is left on the table—the auction has to conservatively leave the current station’s buyout offer frozen, even though it could have safely lowered it. [^94]: For example, Kruskal’s algorithm for the minimum spanning tree problem (start with the empty set, go through the edges of the graph from cheapest to most expensive, adding an edge as long as it doesn’t create a cycle) is a standard (forward) greedy algorithm. The reverse version is: start with the entire edge set, go through the edges in reverse sorted order, and remove an edge whenever it doesn’t disconnect the graph. For the minimum spanning tree problem (and more generally for finding the minimum-weight basis of a matroid), the reverse greedy algorithm is just as optimal as the forward one. In general (and even for e.g. bipartite matching), the reverse version of a good forward greedy algorithm can be bad [@DGR14]. [^95]: Much of the discussion in Sections \[ss:bad\]–\[ss:exposure\] is from [@f13 Lecture 8], which in turn takes inspiration from @Milgrom:2004jx. [^96]: Intuitively, a second-price auction shades your bid optimally after the fact, so there’s no reason to try to game it. [^97]: For a more formal treatment of single-item auctions, see Section \[ss:basics\] in Lunar Lecture 4. [^98]: Items are substitutes if they provide diminishing returns—having one item only makes others less valuable. For two items $A$ and $B$, for example, the substitutes condition means that a bidder’s value for the bundle of $A$ and $B$ is at most the sum of her values for $A$ and $B$ individually. In a spectrum auction context, two licenses for the same area with equal-sized frequency ranges are usually substitute items. [^99]: Items are complements if there are synergies between them, so that possessing one makes others more valuable. 
With two items $A$ and $B$, this translates to a bidder’s valuation for the bundle of $A$ and $B$ exceeding the sum of her valuations for $A$ and $B$ individually. Complements arise naturally in wireless spectrum auctions, as some bidders want a collection of licenses that are adjacent, either in their geographic areas or in their frequency ranges. [^100]: This model is treated more thoroughly in the next lecture (see Section \[s:camodel\]). [^101]: Similar results hold for other auction formats, like simultaneous second-price auctions. Directly analyzing what happens in iterative auctions like SAAs when there are multiple items appears difficult. [^102]: See Section \[ss:poa\] of the next lecture for a formal definition. [^103]: We will say more about this theory in Lunar Lecture 5. See also @RST17 for a recent survey. [^104]: To better appreciate this result, we note that multi-item auctions like S1As are so strategically complex that they have historically been seen as unanalyzable. For example, we have no idea what their equilibria look like in general. Nevertheless, we can prove good approximation guarantees for them! [^105]: In more detail, in this model there is a commonly known prior distribution over bidders’ valuations. In a Bayes-Nash equilibrium, every bidder bids to maximize her expected utility given her information at the time: her own valuation, her posterior belief about other bidders’ valuations, and the bidding strategies (mapping valuations to bids) used by the other bidders. Theorem \[t:ffgl\] continues to hold for every Bayes-Nash equilibrium of an S1A, as long as bidders’ valuations are independently (and not necessarily identically) distributed.\[foot:bn\] [^106]: Much of this lecture is drawn from [@w15 Lecture 7]. [^107]: For basic background on nondeterministic multi-party communication protocols, see @KN96 or @w15. 
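The pairwise conditions in footnotes 98 and 99 are directly checkable. A minimal sketch, with license values invented for illustration:

```python
def bundle_test(v):
    """Classify a two-item valuation using the pairwise conditions from the
    text: substitutes if v(AB) <= v(A) + v(B), complements otherwise."""
    ab, a, b = v[frozenset("AB")], v[frozenset("A")], v[frozenset("B")]
    if ab <= a + b:
        return "substitutes (subadditive on {A,B})"
    return "complements (superadditive on {A,B})"

# Two equivalent licenses for the same area: the second adds little value.
licenses = {frozenset("A"): 10, frozenset("B"): 10, frozenset("AB"): 12}
# Two adjacent licenses that are valuable mainly together.
adjacent = {frozenset("A"): 1, frozenset("B"): 1, frozenset("AB"): 10}
print(bundle_test(licenses))
print(bundle_test(adjacent))
```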
[^108]: Achieving a $k$-approximation is trivial: every player communicates her value $v_i(M)$ for the whole set of items, and the entire set of items is awarded to the bidder with the highest value for them. [^109]: In proving Theorem \[thm\_nisan\], we’ll be interested in the case where $k$ is much smaller than $n$, such as $k = \Theta(\log n)$. Intuition might suggest that the lower bound should be $\Omega(n)$ rather than $\Omega(n/k)$, but this is incorrect—a slightly non-trivial argument shows that Theorem \[t:mdisj\] is tight for nondeterministic protocols (for all small enough $k$, like $k = O(\sqrt{n})$). This factor-$k$ difference won’t matter for our applications, however. [^110]: To keep the game finite, let’s agree that each bid has to be an integer between 0 and some known upper bound $B$. [^111]: In the preceding lecture we mentioned the [*Vickrey*]{} or [*second-price*]{} auction, where the winner does not pay their own bid, but rather the highest bid by someone else (the second-highest overall). We’ll stick with S1As for simplicity, but similar results are known for simultaneous second-price auctions, as well. [^112]: If $\rho$ is a probability distribution over outcomes, as in a mixed Nash equilibrium, then $f(\rho)$ denotes the expected value of $f$ w.r.t. $\rho$. [^113]: Games generally have multiple equilibria. Ideally, we’d like an approximation guarantee that applies to [*all*]{} equilibria, so that we don’t need to worry about which one is reached—this is the point of the POA. [^114]: One caveat is that it’s not always clear that a system will reach an equilibrium in a reasonable amount of time. A natural way to resolve this issue is to relax the notion of equilibrium enough so that it becomes relatively easy to reach an equilibrium. See Lunar Lecture 5 for more on this point. [^115]: The first result of this type, for simultaneous second-price auctions and bidders with submodular valuations, is due to @CKS16.
[^116]: For a proof, see the original paper [@FFGL13] or course notes by the author [@w14 Lecture 17.5]. [^117]: Equilibria can achieve the optimal welfare in a direct-revelation auction, so some bound on the number of strategies is necessary in Theorem \[t:condpoa2\]. [^118]: Arguably, Theorem \[t:condpoa2\] is good enough for all practical purposes—a POA upper bound that holds for exact Nash equilibria and does not hold (at least approximately) for approximate Nash equilibria with very small ${\epsilon}$ is too brittle to be meaningful. [^119]: These computations may take a super-polynomial amount of time, but they do not contribute to the protocol’s cost. [^120]: The most common definition of a Walrasian equilibrium asserts instead that an item $j$ is not awarded to any player only if $p_j=0$. With monotone valuations, there is no harm in insisting that every item is allocated. [^121]: Needless to say, much blood and ink have been spilled over this interpretation over the past couple of centuries. [^122]: More formally, a [*unit-demand*]{} valuation $v$ is characterized by nonnegative values $\{ \alpha_j \}_{j \in M}$, with $v(S) = \max_{j \in S} \alpha_j$ for each $S \sse M$. Intuitively, a bidder with a unit-demand valuation throws away all her items except her favorite. [^123]: A weighted matroid rank function $f$ is defined using a matroid $(E,\mathcal{I})$ and nonnegative weights on the elements $E$, with $f(S)$ defined as the maximum weight of an independent set (i.e., a member of $\mathcal{I}$) that lies entirely in the subset $S \sse E$. [^124]: For concreteness, think about the case where every valuation $v_i$ has a succinct description and can be evaluated in polynomial time. Analogous results hold when an algorithm has only oracle access to the valuations.
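Footnotes 122 and 123 define valuations that are easy to instantiate. Below, the unit-demand valuation follows footnote 122 exactly, while the matroid example specializes footnote 123 to a rank-$k$ uniform matroid (an illustrative choice, since there the maximum-weight independent set is just the $k$ heaviest elements):

```python
def unit_demand(alpha):
    """Unit-demand valuation (footnote 122): v(S) = max_{j in S} alpha_j."""
    return lambda S: max((alpha[j] for j in S), default=0)

def uniform_matroid_rank(weights, k):
    """Weighted rank function of a rank-k uniform matroid, an illustrative
    special case of footnote 123: f(S) = sum of the k largest weights in S."""
    return lambda S: sum(sorted((weights[e] for e in S), reverse=True)[:k])

v = unit_demand({"a": 3, "b": 5, "c": 2})
print(v({"a", "c"}))       # 3
print(v({"a", "b", "c"}))  # 5 -- items beyond the favorite add nothing

f = uniform_matroid_rank({"a": 3, "b": 5, "c": 2}, k=2)
print(f({"a", "b", "c"}))  # 8 -- the two heaviest elements, 5 + 3
```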
[^125]: For example, given an instance $G=(V,E)$ of the [Independent Set]{} problem, take $M=E$, make one player for each vertex $i \in V$, and give player $i$ an AND valuation with parameters $\alpha = 1$ and $T$ equal to the edges that are incident to $i$ in $G$. [^126]: It probably seems weird to have a conditional result ruling out equilibrium existence. A conditional non-existence result can of course be made unconditional through an explicit example. A proof that the welfare-maximization problem for $\V$ is ${\mathsf{NP}}$-hard will generally suggest candidate markets to check for non-existence. The following analogy may help: consider computationally tractable linear programming relaxations of ${\mathsf{NP}}$-hard optimization problems. Conditional on ${\mathsf{P}}\neq {\mathsf{NP}}$, such relaxations cannot be exact (i.e., have no integrality gap) for all instances. ${\mathsf{NP}}$-hardness proofs generally suggest instances that can be used to prove directly (and unconditionally) that a particular linear programming relaxation has an integrality gap.\[foot:gap\] [^127]: Replacing the OR bidder in Example \[ex:nonex\] with an appropriate pair of AND bidders extends the example to markets with only AND bidders. [^128]: In more detail, consider the (polynomial number of) dual constraints generated by the ellipsoid method when solving the dual linear program. Form a reduced version of the original primal problem, retaining only the (polynomial number of) variables that correspond to this subset of dual constraints. Solve this polynomial-size reduced version of the primal linear program using your favorite polynomial-time linear programming algorithm. [^129]: If you’ve never seen or have forgotten about complementary slackness, there’s no need to be afraid. To derive them, just write down the usual proof of weak LP duality (which is a chain of inequalities), and back out the conditions under which all the inequalities hold with equality. 
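Footnote 129’s recipe can be carried out in a few lines. For a generic primal/dual pair in standard form (an illustrative form, not the specific welfare-maximization LP from the text), $\max\{c^\top x : Ax \le b,\ x \ge 0\}$ and $\min\{b^\top y : A^\top y \ge c,\ y \ge 0\}$, the usual proof of weak duality is the chain of inequalities

```latex
c^\top x \;\le\; (A^\top y)^\top x \;=\; y^\top (A x) \;\le\; y^\top b ,
% valid for any pair of feasible solutions x, y. Equality throughout is
% equivalent to
(A^\top y - c)^\top x = 0 \qquad \text{and} \qquad y^\top (b - A x) = 0 ,
% and since every summand in these inner products is nonnegative, they say:
x_j > 0 \implies (A^\top y)_j = c_j , \qquad y_i > 0 \implies (A x)_i = b_i .
```

These are the complementary slackness conditions: both hold exactly when $x$ and $y$ are simultaneously optimal.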
[^130]: This argument re-proves the First Welfare Theorem (Theorem \[t:fwt\]). It also proves the Second Welfare Theorem, which states that for every welfare-maximizing allocation, there exist prices that render it a Walrasian equilibrium—any optimal solution to the dual linear program furnishes such prices. [^131]: See [@priceeq Section 5.3.2] for an unnatural such class. [^132]: One advantage of assuming a distribution over inputs is that there is an unequivocal way to compare the performance of different auctions (by their expected revenues), and hence an unequivocal way to define an optimal auction. One auction generally earns more revenue than another on some inputs and less on others, so in the absence of a prior distribution, it’s not clear which one to prefer. [^133]: Straightforward exercise: if there are $n$ bidders with valuations drawn i.i.d. from the uniform distribution on $[0,1]$, then setting $b_i(v_i) = \tfrac{n-1}{n} \cdot v_i$ for every $i$ and $v_i$ yields a Bayes-Nash equilibrium. [^134]: The second-price auction is in fact [ *dominant-strategy incentive compatible (DSIC)*]{}—truthful bidding is a dominant strategy for every bidder, not merely a Bayes-Nash equilibrium. [^135]: Of course, non-BIC auctions like first-price auctions are still useful in practice. For example, the description of the first-price auction does not depend on bidders’ valuation distributions $F_1,\ldots,F_n$ and can be deployed without knowledge of them. This is not the case for the simulating auction. [^136]: Intuitively, a reserve price of $r$ acts as an extra bid of $r$ submitted by the seller. In a second-price auction with a reserve price, the winner is the highest bidder who clears the reserve (if any). The winner (if any) pays either the reserve price or the second-highest bid, whichever is higher. [^137]: Technically, this statement holds under a mild “regularity” condition on the distribution $F$, which holds for all of the most common parametric distributions. 
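The auction format of footnote 136 is simple enough to state as code. A sketch, assuming ties are broken by lowest index; the monopoly reserve for valuations uniform on $[0,1]$ works out to $1/2$, which motivates the choice of $r$ in the examples:

```python
def second_price_with_reserve(bids, r):
    """Second-price auction with reserve r (footnote 136): the winner is the
    highest bidder clearing the reserve (if any); she pays the larger of the
    reserve and the highest bid by someone else."""
    clearing = [i for i, b in enumerate(bids) if b >= r]
    if not clearing:
        return None, 0.0  # no bid clears the reserve; the item goes unsold
    winner = max(clearing, key=lambda i: bids[i])
    others = [bids[i] for i in range(len(bids)) if i != winner]
    return winner, max(r, max(others, default=0.0))

print(second_price_with_reserve([0.9, 0.6, 0.3], r=0.5))  # (0, 0.6)
print(second_price_with_reserve([0.9, 0.2, 0.3], r=0.5))  # (0, 0.5)
print(second_price_with_reserve([0.4, 0.2], r=0.5))       # (None, 0.0)
```

The second example shows the reserve acting as the seller's "extra bid": the winner pays $0.5$ even though the second-highest bid is only $0.3$.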
[^138]: In particular, there is always an optimal auction in which truthful bidding is a dominant strategy (as opposed to merely being a BIC auction). This is also true in the asymmetric case. [^139]: The theory applies more generally to “single-parameter problems.” These include problems in which in each outcome a bidder is either a “winner” or a “loser” (with multiple winners allowed), and each bidder $i$ has a private valuation $v_i$ for winning (and value 0 for losing).\[foot:sparam\] [^140]: Auction theory generally thinks about three informational scenarios: [*ex ante*]{}, where each bidder knows the prior distributions but not even her own valuation; [*interim*]{}, where each bidder knows her own valuation but not those of the others; and [*ex post*]{}, where all of the bidders know everybody’s valuation. Bidders typically choose their bids at the interim stage. [^141]: In principle, we know this is possible. The feasible (ex post) allocation rules form a polytope, the projection of a polytope is again a polytope, and every polytope can be described by a finite number of linear inequalities. So the real question is whether or not there’s a [*computationally useful*]{} description of interim feasibility. [^142]: This is without loss of generality, since we can simply “tag” each valuation $v_i \in V_i$ with the “name” $i$ (i.e., view each $v_i \in V_i$ as the set $\{ v_i,i\}$). [^143]: This is not immediately obvious, as the max-flow/min-cut argument in Section \[s:borderpf\] involves an exponential-size graph. [^144]: The characterization in Theorem \[t:border\] and the extensions in [@A+19; @CDW12; @CKM13] have additional features not required or implied by Definition \[d:gbt\], such as a polynomial-time separation oracle (and even a compact extended formulation in the single-item case [@A+19]). The impossibility results in Section \[ss:imp\] rule out analogs of Border’s theorem that merely satisfy Definition \[d:gbt\], let alone these stronger properties. 
[^145]: Recall that Toda’s theorem [@toda] implies that a ${{\mathsf{\#P}}}$-hard problem is contained in the polynomial hierarchy only if ${{\mathsf{PH}}}$ collapses. [^146]: Sanity check: this problem turns out to be polynomial-time solvable in the setting of single-item auctions [@GNR18]. [^147]: One detail: Proposition \[prop:conp\] only promises solutions to the “yes/no” question of feasibility, while a separation oracle needs to produce a violated constraint when given an infeasible point. But under mild conditions (easily satisfied here), an algorithm for the former problem can be used to solve the latter problem as well [@S86 P.189]. [^148]: An aside for aficionados of the analysis of Boolean functions: Proposition \[prop:pp\] is essentially equivalent to the ${{\mathsf{\#P}}}$-hardness of checking whether or not given Chow parameters can be realized by some bounded function on the hypercube. See [@GNR18] for more details on the surprisingly strong correspondence between Myerson’s optimal auction theory (in the context of public projects) and the analysis of Boolean functions. [^149]: If $\sum_{i=1}^n \sum_{v_i \in V_i} f_i(v_i) y_i(v_i) > 1$, then the interim allocation rule is clearly infeasible (recall ). Alternatively, this would violate Border’s condition for the choice $S_i = V_i$ for all $i$. [^150]: Recall the discussion in Section \[ss:whocares\] of Solar Lecture 1: a critique of a widely used concept like the Nash equilibrium is not particularly helpful unless accompanied by a proposed alternative. [^151]: Recall the proof idea: smooth fictitious play corresponds to running the vanishing-regret “exponential weights” algorithm (with reward vectors induced by the play of others), and in a two-player zero-sum game, the vanishing-regret guarantee (i.e., with time-averaged payoff at least that of the best fixed action in hindsight, up to $o(1)$ error) implies the ${\epsilon}$-approximate Nash equilibrium condition. 
[^152]: This section draws from [@f13 Lecture 13]. [^153]: For example, consider the row player. If the trusted third party (i.e., the traffic light) recommends the strategy “Go” (i.e., is green), then the row player knows that the column player was recommended “Stop” (i.e., has a red light). Assuming the column player plays her recommended strategy and stops at the red light, the best strategy for the row player is to follow her recommendation and to go. [^154]: This fact should provide newfound appreciation for the distributed learning algorithms that compute an approximate coarse correlated equilibrium (in Proposition \[prop:cce\]) and an approximate correlated equilibrium (in [@FV97; @HM00]), where the total amount of computation is only [*polynomial*]{} in $k$ (and in $m$ and $\tfrac{1}{{\epsilon}}$). [^155]: Some kind of assumption is necessary to preclude baking an ${\mathsf{NP}}$-complete problem into the game’s description. [^156]: For the specific case of graphical games, @KKLO03 were the first to develop a polynomial-time algorithm for computing an exact correlated equilibrium. [^157]: As a bonus, this means that the algorithm will output a “sparse” correlated equilibrium, with support size polynomial in the size of the game description. [^158]: This is not a totally unfamiliar idea to economists. According to @SV02, Roger Myerson, winner of the 2007 Nobel Prize in Economics, asserted that “if there is intelligent life on other planets, in a majority of them, they would have discovered correlated equilibrium before Nash equilibrium.” [^159]: The formal definition is a bit technical, and we won’t need it here. Roughly, it requires that the best-response condition is invoked in an equilibrium-independent way and that a certain restricted type of charging argument is used. [^160]: There are several important precursors to this theory, including @B+08, @CK05b, and Vetta [@V02]. See [@robust] for a detailed history. 
[^161]: Smooth games and the “extension theorem” in Theorem \[t:robust\] are the starting point for the modular and user-friendly toolbox for proving POA bounds in complex settings mentioned in Section \[ss:when\] of Lunar Lecture 1. Generalizations of this theory to incomplete-information games (like auctions) and to the composition of smooth games (like simultaneous single-item auctions) lead to good POA bounds for simple auctions [@ST13]. (These generalizations also brought together two historically separate subfields of algorithmic game theory, namely algorithmic mechanism design and price-of-anarchy analyses.) See [@RST17] for a user’s guide to this toolbox.
\ [Herbert Spohn]{} Zentrum Mathematik and Physik Department, TUM,\ Boltzmannstraße 3, 85747 Garching, Germany, **Abstract**: The Toda lattice is an integrable system and its natural space-time stationary states are the generalized Gibbs ensembles (GGE). Of particular physical interest are then the space-time correlations of the conserved fields. To leading order they scale ballistically. We report on the exact solution of the respective generalized hydrodynamic equations linearized around a GGE as background state. Thereby we obtain a concise formula for the family of scaling functions. 25.11.2019 Introduction {#sec1} ============ With the discovery of the Toda lattice as an integrable classical field theory, an immediate hope was to be able to compute time-correlations in thermal equilibrium. During the 1980s considerable efforts were invested [@SchS80; @D81; @Sch83; @TM83; @T84; @Opper; @TI86; @GM88; @CSTV93; @JE00], but mostly a better understanding of static properties was accomplished [@TM83; @T84; @Opper; @TI86]. A few pioneering molecular dynamics simulations became available at the time [@SchS80; @Sch83]. On the theoretical side, low-order continued-fraction expansions were tried, as well as expansions close to the two solvable limits, namely the harmonic chain and hard rods [@D81; @CSTV93]. More recently, with improved computational power, molecular dynamics simulations with approximately 1000 particles have been performed [@KD16]. Time-displaced correlations such as stretch-stretch (the same as free volume) and energy-energy are measured with high precision, which requires not only numerically solving a system of Newton’s equations of motion but also generating roughly $10^7$ samples so as to keep the noise level small. As one main conclusion from these studies, the mentioned correlations scale ballistically over the accessible time span, except for very early times. Thus the challenging goal is to predict, respectively compute, the scaling function for a specified correlation. 
Another important advance happened on the theoretical side. Starting with integrable quantum field theories, the formalism of generalized hydrodynamics (GHD) has been developed [@CDY16; @PNCBF17], which in fact also applies to classical counterparts, in particular to the Toda lattice [@D19; @BCM19]. In the context of time correlations only the equations linearized around thermal equilibrium are of relevance. Some preliminary results are reported in [@D19; @S19]. In this contribution we provide the exact solution of linearized hydrodynamics. For quantitative results one still has to numerically determine the density of states (DOS) of the Lax matrix and solve the linear integral equation linked to the dressing transformation. To fully exploit the linear structure, all conserved quantities have to be treated on equal footing. Also, thermal equilibrium is just one particular case of a generalized Gibbs ensemble (GGE). The connection between linear response and microscopic correlation functions is standard wisdom. In [@D19b] this theme is studied in detail for a general class of integrable systems governed by GHD. For the one-dimensional sinh-Gordon model, molecular dynamics is compared with the predictions from linearized GHD [@BDWY18]. While our arguments follow a somewhat different route, our final result on time-correlations of the conserved fields is in complete agreement with the general theory developed in [@D19b]. For the Toda lattice, the thermal, or more generally GGE, average of the conserved fields is most concisely encoded through the DOS of the Lax matrix. But on top there is the conserved stretch, which does not seem to be connected to the Lax matrix and plays a special role. The stretch current equals minus the momentum, hence is itself conserved, which is not the case for the currents of the other conserved fields. As a consequence generalized hydrodynamics consists of two equations, one for the stretch and one for the Lax DOS. 
The linearized equations then inherit a $2\times 2$ matrix structure (with operator entries) and one difficulty in the analysis is to properly handle this feature. Such a property also appears, for example, in the XXZ chain, in which case the exceptional role is played by the magnetization. The insight from the Toda lattice might thus be helpful for other integrable models. A basic claim of generalized hydrodynamics is to be valid over the entire parameter range, no exceptions. This claim is particularly intriguing for the Toda chain. In thermal equilibrium, for large $N$, the length of the system behaves as $\nu N$ with some coefficient $\nu(P)$, which depends on the applied pressure $P$. $\nu$ can have either sign. With high probability, for small $P$ particles are ordered, while for large $P$ they are anti-ordered. But this means that there is a specific pressure $P_\mathrm{c}$ at which $\nu(P_\mathrm{c}) = 0$ and the typical distance between particles is of order $1/\sqrt{N}$ only. So the picture that the interactions can be broken up into small groups of particles, which are isolated from the rest of the system and move from an incoming configuration to an outgoing one, becomes questionable. On the Euler level the propagation speed of a perturbation is proportional to $\nu^{-1}$, thus formally diverges. It is an open problem to understand the dynamical behavior close to $\nu =0$. Our predictions are valid for the ballistic scale and one may wonder how much microscopic information is still visible. We recall that for a simple fluid the Euler equations always have the same structure. The particular fluid under consideration is encoded by the thermodynamic pressure as a function of density and internal energy. For an integrable many-body system, apparently the system-specific character comes from the two-particle phase shift, whereas the mathematical structure persists over a large variety of models, both quantum and classical. 
For the Toda lattice the phase shift is given by $2\log|v-v'|$, depending on the incoming velocities $v,v'$. The entire set of equations of generalized hydrodynamics could be written down by substituting some other function of the momentum transfer. Of course, there is then no reason to have an underlying microscopic particle model, the only confirmed cases being the Toda chain and hard rods, for which the phase shift equals the hard rod length $a$ independently of $v,v'$. In Sect. \[sec2\] we discuss some background material on the Toda chain. A summary of the generalized free energy is deferred to Appendix \[sec8\]. In Sect. \[sec3\] we work out a convenient form of the static GGE charge-charge and charge-current correlations. This then allows us to state the exact solution of linearized hydrodynamics with GGE random initial conditions. The required intermediate steps are reported in Sect. \[sec4\]. There is an alternative way to obtain the static charge-charge correlator, which is related to Dyson’s Brownian motion, see Sect. \[sec5\]. Hard rods turn out to be a useful guiding example, well covered in the literature [@BDS83; @S91; @BS97; @DS17a]. To be closer to the Toda lattice, in Sect. \[sec6\] we study hard rods as coming from an anharmonic chain with an infinite hard core potential of core size $a$. Thereby the structure of the corresponding generalized hydrodynamic equations resembles more closely that of Toda. Conserved charges, currents, and normal form {#sec2} ============================================ The Toda lattice is an anharmonic chain, for which the interaction potential is specified to be exponential. The corresponding Hamiltonian is written as $$\label{2.1} H = \sum_{j\in\mathbb{Z}}\big( \tfrac{1}{2}p_j^2 + \mathrm{e}^{-r_j}\big),\quad r_j = q_{j+1} - q_j.$$ Here $q_j,p_j$ are the position and momentum of the $j$-th particle. The positional increments, $r_j$, are called stretches. 
We consider the infinitely extended lattice and indicate suitable finite number approximations whenever required. Then the equations of motion read $$\label{2.2} \frac{d}{dt}q_j = p_j, \qquad \frac{d}{dt}p_j = \mathrm{e}^{-r_{j-1}} -\mathrm{e}^{-r_j},\quad j \in \mathbb{Z},$$ which is viewed as a discrete nonlinear wave equation. Let us introduce the Flaschka variables [@F74], $$\label{2.3} a_j = \mathrm{e}^{-r_j/2},\quad b_j = p_j.$$ In fact, the original Flaschka variables both carry a prefactor $\tfrac{1}{2}$ and, in principle, any prefactor could be used. However, only with our choice does the generalized free energy have a particularly simple form, see Appendix \[sec8\]. The Lax matrix is the tridiagonal real symmetric matrix with matrix elements $$\label{2.4} L_{j,j} = b_j, \quad L_{j,j+1} = L_{j+1,j}= a_j.$$ For the finite lattice $[1,\ldots,N]$ the $N$ eigenvalues of $L$ are conserved. As functions on phase space they are non-local, their local version being $\mathrm{tr}[L^n]$, $n =1,2,...$ [@H74]. In generalized hydrodynamics such conserved fields are usually called charges or conserved charges and it is convenient to follow this practice. The locally conserved charges of the Toda lattice have a strictly local density given by $$\label{2.5} Q^{[n]}_j = (L^n)_{j,j},\quad n= 1,2,...,$$ with $j \in \mathbb{Z}$. In addition, the stretch is conserved with density $$\label{2.5a} Q^{[0]}_j = r_j.$$ The respective current densities are of the form $$\label{2.6} J^{[0]}_j = - Q^{[1]}_j , \qquad J^{[n]}_j = \tfrac{1}{2}(L^nL^\mathrm{off})_{j,j},$$ where $L^\mathrm{off}$ denotes the off-diagonal part of $L$ [@S19]. For the Lax matrix we adopted the convention $L= 2 L_\mathrm{Flaschka}$. This may look like an arbitrary choice, but it is, so to speak, dictated by the structure of the Toda generalized free energy. This point was slightly overlooked in [@S19], see also the arXiv version 4, and for convenience we briefly repeat the argument in Appendix \[sec8\]. 
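The conservation of the Lax spectrum is easy to probe numerically. The following minimal sketch (not from the paper; chain length, initial data, and time step are illustrative choices) integrates the equations of motion (2.2) for a small open chain with a standard Runge-Kutta scheme, builds the Lax matrix (2.4) in the convention $a_j = \mathrm{e}^{-r_j/2}$, $b_j = p_j$, and checks that its eigenvalues stay put.

```python
import numpy as np

def toda_rhs(state, N):
    # Equations of motion (2.2): dq_j/dt = p_j, dp_j/dt = e^{-r_{j-1}} - e^{-r_j},
    # for an open chain (boundary bonds simply absent).
    q, p = state[:N], state[N:]
    r = q[1:] - q[:-1]            # stretches r_j = q_{j+1} - q_j
    f = np.exp(-r)
    dp = np.zeros(N)
    dp[1:] += f                   # + e^{-r_{j-1}}
    dp[:-1] -= f                  # - e^{-r_j}
    return np.concatenate([p, dp])

def rk4_step(state, dt, N):
    k1 = toda_rhs(state, N)
    k2 = toda_rhs(state + 0.5 * dt * k1, N)
    k3 = toda_rhs(state + 0.5 * dt * k2, N)
    k4 = toda_rhs(state + dt * k3, N)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def lax_matrix(q, p):
    # Tridiagonal Lax matrix (2.4) with a_j = e^{-r_j/2}, b_j = p_j.
    a = np.exp(-(q[1:] - q[:-1]) / 2.0)
    return np.diag(p) + np.diag(a, 1) + np.diag(a, -1)

rng = np.random.default_rng(0)
N = 8
q = np.cumsum(rng.uniform(0.5, 1.5, N))   # positions with positive stretches
p = rng.normal(0.0, 1.0, N)
state = np.concatenate([q, p])

ev0 = np.sort(np.linalg.eigvalsh(lax_matrix(q, p)))
for _ in range(2000):                     # integrate up to t = 10
    state = rk4_step(state, 0.005, N)
ev1 = np.sort(np.linalg.eigvalsh(lax_matrix(state[:N], state[N:])))
print(np.max(np.abs(ev1 - ev0)))          # drift limited only by integrator accuracy
```

Total momentum, being the trace of $L$, is conserved as well; the residual eigenvalue drift is set by the $O(dt^4)$ accuracy of the integrator, not by the model.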
As a next step we have to introduce a few standard items from generalized hydrodynamics. A generalized Gibbs state (GGE) of the Toda lattice is characterized by the pressure $P > 0$ and the chemical potential $V(w)$, see Appendix \[sec8\]. $P$ is the thermodynamic dual to the stretch and $V(w)$ the dual to the collection of conserved charges, where instead of the discrete label $n$ a continuous label $w \in \mathbb{R}$ will be more convenient. To compute averages of the charges and their currents one starts from the free energy functional $$\label{2.7} \mathcal{F}(\varrho) = \int _\mathbb{R}\mathrm{d}w \varrho(w) V(w) - \int _\mathbb{R}\mathrm{d}w\int _\mathbb{R}\mathrm{d}w' \log|w - w'|\varrho(w) \varrho(w') + \int _\mathbb{R}\mathrm{d}w \varrho(w) \log \varrho(w),$$ to be minimized under the constraint $$\label{2.8} \varrho(w) \geq 0,\quad \int _\mathbb{R}\mathrm{d}w\varrho(w) =P.$$ Introducing the Lagrange multiplier, $\mu$, the Euler-Lagrange equation for the minimizer, $\rho_\mu$, of $\mathcal{F}$ reads $$\label{2.9} V(w) - 2 \int_\mathbb{R} \mathrm{d}w' \log|w-w'| \rho_\mu(w') +\log \rho_\mu(w) - \mu = 0.$$ Setting $\rho_\mu= \mathrm{e}^{-\varepsilon}$ with quasi-energies $\varepsilon$ turns Eq. (2.9) into the classical version of the TBA equation, $$\label{2.10} \varepsilon(w) = V(w) - \mu - 2 \int_\mathbb{R} \mathrm{d}w' \log|w-w'| \mathrm{e}^{-\varepsilon(w')}.$$ Let us define the integral operator $$\label{2.11} T\psi(w) = 2 \int_\mathbb{R} \mathrm{d}w' \log |w-w'| \psi(w'),\quad w \in \mathbb{R}.$$ Then the TBA equation is rewritten as $$\label{2.12} \varepsilon (w) = V(w) - \mu - (T \mathrm{e}^{-\varepsilon})(w)$$ and one introduces the dressing of a function $\psi$ by $$\label{2.13} \psi^\mathrm{dr} = \psi + T \rho_\mu \psi^\mathrm{dr},\quad \psi^\mathrm{dr} = \big(1 - T\rho_\mu\big)^{-1} \psi.$$ On the right hand side $\rho_\mu$ is regarded as a multiplication operator, $(\rho_\mu\psi)(w) = \rho_\mu(w)\psi(w) $. 
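As a numerical illustration (not part of the paper), the TBA equation (2.10) can be solved on a grid by a damped fixed-point iteration, and the dressing (2.13) then reduces to a linear solve. The potential $V(w) = w^2$, the value $\mu = 0$, and all grid parameters below are arbitrary illustrative choices; the log singularity of the kernel is handled by integrating it exactly over the diagonal cell. The last line checks the push-through identity $(1-\rho_\mu T)^{-1}\rho_\mu = \rho_\mu(1-T\rho_\mu)^{-1}$ applied to the constant function, which follows from (2.13).

```python
import numpy as np

# Grid for the spectral variable w; all parameters are illustrative.
M, wmax = 201, 8.0
w = np.linspace(-wmax, wmax, M)
dw = w[1] - w[0]

# Kernel matrix for (T psi)(w) = 2 int dw' log|w-w'| psi(w'):
# off-diagonal entries by the midpoint rule, the diagonal cell integrated
# exactly, int_{-dw/2}^{dw/2} 2 log|u| du = 2 dw (log(dw/2) - 1).
W = 2.0 * np.log(np.abs(w[:, None] - w[None, :]) + np.eye(M)) * dw
np.fill_diagonal(W, 2.0 * dw * (np.log(dw / 2.0) - 1.0))

V, mu = w**2, 0.0
eps = V - mu                          # initial guess for the quasi-energy
for _ in range(2000):                 # damped iteration of the TBA equation (2.10)
    eps = 0.75 * eps + 0.25 * (V - mu - W @ np.exp(-eps))
residual = np.max(np.abs(eps - (V - mu - W @ np.exp(-eps))))

rho_mu = np.exp(-eps)
D = np.diag(rho_mu)
one_dr = np.linalg.solve(np.eye(M) - W @ D, np.ones(M))  # dressing (2.13) of the constant function
lhs = np.linalg.solve(np.eye(M) - D @ W, rho_mu)         # (1 - rho_mu T)^{-1} rho_mu
print(residual, np.max(np.abs(lhs - rho_mu * one_dr)))   # both small
```

The corresponding pressure of this state is recovered a posteriori as $P = \int \mathrm{d}w\,\rho_\mu(w)$, here approximated by `rho_mu.sum() * dw`.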
The DOS (density of states) of the Lax matrix under GGE is given by $\nu\rho_\mathrm{p}$, such that $$\label{2.14} \partial_\mu \rho_\mu = \rho_\mathrm{p}, \quad \nu\langle\rho_\mathrm{p}\rangle = 1, \quad \nu = \langle r_0\rangle_{P,V},$$ where $\langle r_0\rangle_{P,V}$ stands for the average with respect to the GGE state labeled by the parameters $P,V$, while $\langle f\rangle$ is merely a short hand for $\int_\mathbb{R} \mathrm{d}w f(w)$. Differentiating the TBA equation with respect to $\mu$ we conclude $$\label{2.15} \rho_\mathrm{p}= (1 - \rho_\mu T)^{-1} \rho_\mu = \rho_\mu(1 - T\rho_\mu)^{-1}[1] = \rho_\mu[1]^\mathrm{dr}.$$ Here $[1]$ stands for the constant function, $\psi(w) = 1$, and similarly $[w^n]$ for the $n$-th power, $\psi(w) = w^n$. We also use $[w^n]^{\mathrm{dr}2} = ([w^n]^\mathrm{dr})^2$. A tracer pulse with bare velocity $v$ in fact travels with an effective velocity, which is determined through the integral equation $$\label{2.16} v^\mathrm{eff}(v) = v + (T\rho_\mathrm{p} v^\mathrm{eff})(v) - (T\rho_\mathrm{p})(v)\,v^\mathrm{eff}(v).$$ More concisely the effective velocity can be written as $$\label{2.17} v^\mathrm{eff} = \frac{ \,[w]^\mathrm{dr}}{\, [1]^\mathrm{dr}}.$$ With this input the GGE averaged charge densities are $$\label{2.18} \langle r_0\rangle_{P,V} = \nu, \quad \langle Q^{[n]}_0 \rangle_{P,V} = \nu\langle \rho_\mathrm{p}w^n\rangle = q_n, \quad n = 1,2,...\,,$$ with the corresponding average current densities $$\label{2.19} \langle J^{[0]}_0\rangle_{P,V} = -\langle Q^{[1]}_0\rangle_{P,V}, \quad \langle J^{[n]}_0 \rangle_{P,V} = \langle (v^\mathrm{eff} - q_1) \rho_\mathrm{p}w^n\rangle, \quad n = 1,2,...\,. \medskip$$ The status of both identities is distinctly different. Under suitable conditions on the confining potential, Eq. (2.18) is established in [@D19; @S19], while Eq. (2.19) should be viewed as a conjecture, however with strongly supporting numerical evidence [@BCS19].\ *Remark on notations*. In [@D19] B. 
Doyon investigates the generalized hydrodynamics for the Toda lattice, both in the fluid picture and the chain picture, the latter being adopted here. Our notations differ only minimally. More precisely, the definitions of $\nu$, $T$, $v^\mathrm{eff}$, and $\rho_\mathrm{p}$ are identical. $n$ in [@D19] becomes $\rho_\mu$ here, because $n$ is already used otherwise.$\Box$ Note that through Eq. (2.16) $v^\mathrm{eff}$ becomes a functional of $\rho_\mathrm{p}$. Thus the Euler-type equations for the Toda chain read $$\label{2.20} \partial_t \nu -\partial_x q_1 = 0,\qquad \partial_t(\nu \rho_\mathrm{p}) + \partial_x\big((v^\mathrm{eff} - q_1)\rho_\mathrm{p}\big) = 0.$$ We claim that the nonlinear transformation $\rho_\mathrm{p} \mapsto \rho_\mu$, defined by $$\label{2.21} \rho_\mu = \rho_\mathrm{p}(1+ T\rho_\mathrm{p})^{-1},$$ results in the normal form $$\label{2.22} \nu \partial_t \rho_\mu + (v^\mathrm{eff} - q_1)\partial_x \rho_\mu = 0.\medskip$$ *Proof*: Inserting the continuity equation $\partial_t \nu = \partial_x q_1$ in $$\label{2.23} \rho_\mathrm{p}\partial_t\nu + \nu\partial_t\rho_\mathrm{p} + \partial_x\big((v^\mathrm{eff} - q_1)\rho_\mathrm{p}\big) = 0,$$ yields $$\label{2.24} \nu\partial_t \rho_\mathrm{p} + \partial_x(v^\mathrm{eff} \rho_\mathrm{p}) - q_1 \partial_x \rho_\mathrm{p}= 0.$$ Using this identity together with Eq. (2.21), we obtain $$\begin{aligned} \label{2.25} &&\hspace{-20pt} \nu\partial_t \rho_\mu = \frac{\rho_\mu}{\rho_\mathrm{p}}\big(-\partial_x(v^\mathrm{eff}\rho_\mathrm{p}) +q_1\partial_x \rho_\mathrm{p}\big) -\frac{\rho_\mu^2}{\rho_\mathrm{p}}\big(T(-\partial_x(v^\mathrm{eff}\rho_\mathrm{p}) + q_1\partial_x \rho_\mathrm{p})\big)\nonumber\\ &&\hspace{12pt} = - \frac{\rho_\mu}{\rho_\mathrm{p}}\partial_x\big((1 - \rho_\mu T)(v^\mathrm{eff}\rho_\mathrm{p})\big) + q_1 \partial_x\rho_\mu.\end{aligned}$$ The effective velocity is the solution of the integral equation $$\label{2.26} v^\mathrm{eff} = v + T(v^\mathrm{eff}\rho_\mathrm{p}) - (T\rho_\mathrm{p})v^\mathrm{eff}.$$ Hence $$\label{2.27} \partial_x \big((1- \rho_\mu T)(v^\mathrm{eff}\rho_\mathrm{p})\big) = -\frac{\rho_\mathrm{p}}{\rho_\mu}v^\mathrm{eff}\partial_x \rho_\mu.$$ Adding all terms yields Eq. (2.22). $\Box$ GGE space-time covariance {#sec3} ========================= The static charge-charge and charge-current correlator {#sec3.1} ------------------------------------------------------ The space-time two-point function of the conserved charges under a prescribed GGE state is defined by $$\label{3.1} S_{m,n}(j,t) = \big\langle Q_j^{[m]}(t)Q_0^{[n]}\big\rangle_{P,V} - \big\langle Q_0^{[m]}\big\rangle_{P,V}\langle Q_0^{[n]}\big\rangle_{P,V}, \quad m,n\geq 0.$$ Here $t$ refers to the Toda time evolution with the convention $Q_j^{[n]}= Q_j^{[n]}(t=0)$. Through a Galilei transformation one can always achieve $q_1 = 0$, which we adopt from now on. An exact solution is out of reach and we focus on the hydrodynamic prediction for this correlator. On the Euler scale the initial randomness is merely propagated. The noise resulting from the dynamics becomes visible only on the diffusive scale. Therefore the linearized equation has to be solved with random initial data as deduced from the GGE. On the hydrodynamic scale they are delta-correlated in space with non-trivial charge correlations as determined through the static charge-charge covariance $$\label{3.2} C_{m,n} = \big\langle Q^{[m]};Q^{[n]}\big\rangle_{P,V}, \quad C_{m,n} = C_{n,m}.$$ Here we use the shorthand $$\label{3.3} \big\langle Q;Q'\big\rangle_{P,V} = \sum_{j \in \mathbb{Z}} \big(\big\langle Q_0 Q'_j\big\rangle_{P,V} - \big\langle Q_0\big \rangle_{P,V} \big\langle Q'_0\big\rangle_{P,V}\big).$$ As to be explained in Sect. 
\[sec4\], for all $m,n\geq1$, $$\begin{aligned} \label{3.4} &&\hspace{-20pt} C_{0,0} =\nu^3 \langle \rho_\mathrm{p} [1]^\mathrm{dr} [1]^\mathrm{dr} \rangle,\nonumber\\[0.5ex] &&\hspace{-20pt}C_{0,n}= C_{n,0}= -\nu^2 \langle \rho_\mathrm{p} [1]^\mathrm{dr} ([w^n]- q_n[1])^\mathrm{dr} \rangle,\nonumber\\[0.5ex] &&\hspace{-20pt} C_{m,n}= \nu \langle \rho_\mathrm{p} ([w^m]- q_m[1])^\mathrm{dr} ([w^n]- q_n[1])^\mathrm{dr} \rangle.\end{aligned}$$ For later computations the basis consisting of moments is not so convenient. In addition the index $0$ plays a special role. For these reasons we introduce the two-vector $(r,\phi)$ with $r\in\mathbb{R}$ and $\phi$ a real-valued function, formally $\phi(w) = \sum_{n=1}^\infty a_n w^n$ with some real coefficients $a_n$. The matrix $C$ then inherits a two-block structure. We define $$\label{3.5} h = \nu\rho_\mathrm{p}, \quad \langle h \rangle = 1, \quad h \geq 0 ,$$ and $$\label{3.6} F\phi = (\phi - \langle h\phi\rangle)^\mathrm{dr} = \phi^\mathrm{dr} - [1]^\mathrm{dr}\langle h\phi\rangle = (1 - T\rho_\mu)^{-1}(\phi - \langle h\phi\rangle).$$ Then $$\label{3.7} C = \begin{pmatrix} \nu^2 \langle h [1]^{\mathrm{dr}2}\rangle &-\nu \big\langle F^\mathrm{T}h[1]^\mathrm{dr}\big|\\[0.5ex] -\nu \big| F^\mathrm{T}h[1]^\mathrm{dr}\big\rangle& F^\mathrm{T}h F \end{pmatrix},$$ where $^\mathrm{T}$ stands for transpose and we freely use the Dirac notation for row and column vectors. $F$ is a linear operator acting on the functions to the right. For example $F^\mathrm{T}h F\phi$ means first to compute $F\phi$, then multiply the result by the function $h$, and finally act with $F^\mathrm{T}$. Note that $F[1] = 0$. We also introduce the static charge-current correlator $$\label{3.8} B_{m,n} = \big\langle Q^{[m]};J^{[n]}\big\rangle_{P,V}.$$ On abstract grounds, see Sect. \[sec4\], the matrix $B$ is symmetric. In addition the column $m=0$ is already determined by $C$, since $J^{[0]} = -Q^{[1]}$. 
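The two-block form (3.7) can be sanity-checked numerically. Completing the square (an observation not spelled out in the text), the quadratic form of $C$ in the variables $(r,\phi)$ equals $\langle h\,(\nu [1]^\mathrm{dr} r - F\phi)^2\rangle$, so the assembled block matrix must be positive semidefinite, and $F[1] = 0$ must hold. The sketch below uses a smooth stand-in kernel and an arbitrary positive $\rho_\mu$ in place of a genuine TBA solution; all parameters are illustrative.

```python
import numpy as np

# Discretized stand-in: smooth kernel and Gaussian rho_mu (illustrative choices,
# not an actual TBA solution).
M = 41
w = np.linspace(-3.0, 3.0, M)
dw = w[1] - w[0]
W = -np.exp(-(w[:, None] - w[None, :])**2) * dw   # stand-in for the operator T
rho_mu = 0.3 * np.exp(-w**2)
D = np.diag(rho_mu)
I = np.eye(M)

one_dr = np.linalg.solve(I - W @ D, np.ones(M))   # [1]^dr via (2.13)
rho_p = rho_mu * one_dr
nu = 1.0 / (rho_p.sum() * dw)
h = nu * rho_p                                    # normalized: <h> = 1, cf. (3.5)

# F phi = (1 - T rho_mu)^{-1} (phi - <h phi>), as a matrix, cf. (3.6)
F = np.linalg.solve(I - W @ D, I - np.outer(np.ones(M), h * dw))

# Assemble the two-block matrix (3.7), with the grid weight dw
Crr = nu**2 * np.sum(h * one_dr**2 * dw)
Crphi = -nu * (F.T @ (h * one_dr * dw))
Cphiphi = F.T @ np.diag(h * dw) @ F
C = np.block([[np.array([[Crr]]), Crphi[None, :]],
              [Crphi[:, None], Cphiphi]])

print(np.max(np.abs(F @ np.ones(M))),               # F[1] = 0
      np.min(np.linalg.eigvalsh(0.5 * (C + C.T))))  # PSD: lowest eigenvalue ~ 0
```

The positive semidefiniteness confirms that (3.7) is a genuine static covariance, as it must be.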
As to be discussed, one arrives at $$\begin{aligned} \label{3.9} &&\hspace{-20pt} B_{0,0} = - \langle r;Q^{[1]}\rangle_{P,V} = \nu^2 \langle \rho_\mathrm{p} [1]^\mathrm{dr} [w]^\mathrm{dr}\rangle,\nonumber\\[0.5ex] &&\hspace{-20pt}B_{0,n}= B_{n,0}= -\big\langle Q^{[n]};Q^{[1]}\big\rangle_{P,V} = -\nu \langle \rho_\mathrm{p} [w]^\mathrm{dr} ([w^n]- q_n[1])^\mathrm{dr}\rangle,\nonumber\\[0.5ex] &&\hspace{-20pt} B_{m,n} = B_{n,m} = \langle \rho_\mathrm{p} [w]^\mathrm{dr} ([w^m]- q_m[1])^\mathrm{dr} ([w^n]- q_n[1])^\mathrm{dr}\rangle.\end{aligned}$$ Using $[w]^\mathrm{dr} = v^\mathrm{eff} [1]^\mathrm{dr} $, one obtains the more concise operator form $$\label{3.10} B = \nu^{-1}\begin{pmatrix} \nu^2 \langle h v^\mathrm{eff} [1]^{\mathrm{dr}2}\rangle &-\nu \big\langle F^\mathrm{T}h v^\mathrm{eff}[1]^\mathrm{dr}\big|\\[0.5ex] -\nu \big| F^\mathrm{T}hv^\mathrm{eff}[1]^\mathrm{dr}\big\rangle& F^\mathrm{T}hv^\mathrm{eff} F \end{pmatrix}.$$ Main result {#sec3.2} ----------- The particular structure of the matrices $C,B$ allows for an educated guess of the full space-time correlators in the hydrodynamic approximation. As can be seen from the following section, it will actually require some effort to confirm this guess. Let us start with a generic example of $\kappa$ conservation laws linearized around a spatially homogeneous equilibrium state, which are governed by $$\label{3.11} \partial_t u_n(x,t) + \sum_{m=1}^{\kappa} \mathsf{A}_{n,m} \partial_x u_m(x,t) = 0,\quad n = 1,...,\kappa.$$ Here $x\in\mathbb{R}$ is the continuum approximation for the lattice $\mathbb{Z}$. The $\kappa\times \kappa$ linearization matrix $\mathsf{A}$ is $x$-independent, generically not symmetric, but is ensured to have a right and left system of eigenvectors, $|\psi_\alpha\rangle, \langle\tilde{\psi}_\alpha |, \alpha = 1,...,\kappa$, and has real eigenvalues $c_\alpha,\alpha = 1,...,\kappa$. 
Eq. (3.11) has to be solved with random initial conditions with mean zero and covariance $$\label{3.12} \mathbb{E}\big(u_m(x,0)u_n(0,0)\big) = \delta(x) \mathsf{C}_{m,n},$$ where, to distinguish from other averages, we use $\mathbb{E}(\cdot)$ for the expectation over the initial noise. As before, $\mathsf{C}$ denotes the static correlator and $\delta(x)$ reflects the exponential decay of spatial correlations for the underlying microscopic model. Since the spatial part in Eq. (3.11) is merely a translation proportional to $t$, the space-time correlator is given by $$\label{3.13} \mathbb{E}(u_m(x,t)u_n(0,0)) = \sum_{\alpha = 1}^{\kappa} \delta(x-c_\alpha t) \big(|\psi_\alpha\rangle \langle\tilde{\psi}_\alpha |\mathsf{C}\big)_{m,n},$$ which has a simple interpretation: the $\alpha$-th peak travels with velocity $c_\alpha$ and has a weight depending on $\mathsf{A}$ and $\mathsf{C}$. The static field-current correlator equals $$\label{3.14} \mathsf{B} = \mathsf{A}\mathsf{C} = \tfrac{d}{dt} \mathrm{e}^{\mathsf{A}t}\mathsf{C}\big |_{t=0} = \sum_{\alpha = 1}^{\kappa} c_\alpha |\psi_\alpha\rangle \langle \tilde{\psi}_\alpha | \mathsf{C}.$$ Returning to the Toda chain, the charge-charge correlator $C$ has been computed in Eq. (3.7) and the charge-current correlator $B$ in Eq. (3.10). To be in agreement with Eq. (3.13), the natural guess for the full solution reads $$\label{3.15} S(j,t) \simeq \begin{pmatrix} \nu^2 \langle h \,\delta(x - t\nu^{-1}v^\mathrm{eff}) [1]^{\mathrm{dr}2}\rangle &-\nu \big\langle F^\mathrm{T}h\,\delta(x - t\nu^{-1}v^\mathrm{eff})[1]^\mathrm{dr}\big|\\[0.5ex] -\nu \big| F^\mathrm{T}h\,\delta(x - t\nu^{-1}v^\mathrm{eff})[1]^\mathrm{dr}\big\rangle& F^\mathrm{T}h\delta(x - t\nu^{-1}v^\mathrm{eff}) F \end{pmatrix}.$$ This expression is our **main result**, which is exact on the ballistic Euler scale. The right hand side for $t=0$ equals $\delta(x) C$ and differentiating with respect to $t$ at $t=0$ yields the required identity $B = AC$. 
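The mode decomposition (3.13), (3.14) is elementary to verify in finite dimensions. The sketch below (illustrative, with randomly generated matrices) builds a symmetric $\mathsf{B}$ and a positive definite $\mathsf{C}$ and sets $\mathsf{A} = \mathsf{B}\mathsf{C}^{-1}$, which guarantees a real spectrum, since $\mathsf{A}$ is then similar to the symmetric matrix $\mathsf{C}^{-1/2}\mathsf{B}\mathsf{C}^{-1/2}$. It then checks that the peak weights $|\psi_\alpha\rangle\langle\tilde{\psi}_\alpha|\mathsf{C}$ resolve $\mathsf{C}$ at $t=0$ and reproduce $\mathsf{B} = \mathsf{A}\mathsf{C}$.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4
M = rng.normal(size=(k, k))
C = M @ M.T + k * np.eye(k)          # static covariance, positive definite
B = rng.normal(size=(k, k))
B = 0.5 * (B + B.T)                  # symmetric charge-current correlator
A = B @ np.linalg.inv(C)             # linearization matrix, real spectrum

c, R = np.linalg.eig(A)              # columns of R: right eigenvectors
c = c.real
L = np.linalg.inv(R)                 # rows of L: left eigenvectors, L @ R = I

# Weight of the alpha-th ballistic peak: |psi_a><psi~_a| C, cf. (3.13)
Wts = [np.outer(R[:, a], L[a, :]).real @ C for a in range(k)]
print(np.max(np.abs(sum(Wts) - C)),                              # t = 0: weights resolve C
      np.max(np.abs(sum(c[a] * Wts[a] for a in range(k)) - B)))  # B = A C, cf. (3.14)
```

The same bookkeeping underlies the Toda guess (3.15), with the discrete sum over $\alpha$ replaced by an integral over the spectral parameter $w$.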
Still to be shown is the exponential form, $\mathrm{e}^{At}C$, of the correlator, see Section \[sec4\]. Manifestly, the right hand side of Eq. (3.15) scales ballistically. The left hand side is the true correlator of the Toda chain, which exhibits ballistic scaling only for sufficiently large $j,t$, both of the same order. This is the meaning of $\simeq$, where one should identify $x$ with $j$ up to a suitable scale factor. Eq. (3.15) is an operator identity. For specific correlations one has to compute the respective matrix elements. As examples, we list the time-dependent stretch-stretch and momentum-momentum correlations, $$\begin{aligned} \label{3.16} &&S_{0,0}(j,t) \simeq \nu^2 \int_\mathbb{R}\mathrm{d}w h(w) \delta(x- t\nu^{-1}v^\mathrm{eff}(w)) [1]^{\mathrm{dr}2}(w),\\ &&S_{1,1}(j,t) \simeq \int_\mathbb{R}\mathrm{d}w h(w) \delta(x- t\nu^{-1}v^\mathrm{eff}(w)) [w]^{\mathrm{dr}2}(w),\end{aligned}$$ where in the second line $q_1 = 0$ has been used. Towards the exact solution {#sec4} ========================== The $C$- and $B$-matrix {#sec4.1} ----------------------- We first collect a few identities, which will be used to compute the derivatives of the average charges and currents with respect to the pressure $P$ and the chemical potential $V(w)$. The most basic identity is $$\label{4.1} \partial_{\odot} \partial_\mu \rho_\mu= (1- \rho_\mu T)^{-1}\partial_\odot \rho_\mu(1- T\rho_\mu)^{-1},$$ which follows from expanding $\rho_\mathrm{p} = \partial_\mu \rho_\mu = (1- \rho_\mu T)^{-1} \rho_\mu$ in a power series. 
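Identity (4.1) becomes a matrix identity once $T$ is discretized, and can be checked against a central finite difference. The matrices below are random stand-ins (small enough that all inverses exist), and $\partial_\odot$ is realized as a directional derivative along a random direction $\delta$; this is a sketch for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
K = rng.normal(size=(n, n))
K = 0.25 * (K + K.T)                 # stand-in for the symmetric kernel T
rho = rng.uniform(0.05, 0.3, n)      # stand-in for rho_mu on a grid
delta = rng.normal(size=n)           # direction of the perturbation of rho_mu
I = np.eye(n)

# rho_p as a function of rho_mu: (1 - rho_mu T)^{-1} rho_mu, cf. (2.15),
# with rho_mu acting as a multiplication operator
F = lambda r: np.linalg.solve(I - np.diag(r) @ K, np.diag(r))

eps = 1e-5
numeric = (F(rho + eps * delta) - F(rho - eps * delta)) / (2 * eps)
# right hand side of (4.1): (1 - rho_mu T)^{-1} (d rho_mu) (1 - T rho_mu)^{-1}
exact = np.linalg.solve(I - np.diag(rho) @ K,
                        np.diag(delta) @ np.linalg.inv(I - K @ np.diag(rho)))
print(np.max(np.abs(numeric - exact)))   # agrees to finite-difference accuracy
```

The agreement reflects the push-through structure: differentiating the resolvent produces one factor of $(1-\rho_\mu T)^{-1}$ on the left and $(1-T\rho_\mu)^{-1}$ on the right.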
Hence, setting $\odot = \mu$, $$\label{4.2} \partial_\mu \langle \rho_\mathrm{p} \rangle= \langle(1- \rho_\mu T)^{-1}\partial_\mu \rho_\mu(1- T\rho_\mu)^{-1} \rangle =\langle \rho_\mathrm{p}[1]^{\mathrm{dr} 2}\rangle.$$ Furthermore, for a general function $f$, $$\label{4.3} \partial_P \langle \rho_\mathrm{p}f\rangle = \partial_\mu\langle \rho_\mathrm{p}f\rangle \partial_P\mu = \nu\langle \rho_\mathrm{p}[1]^\mathrm{dr}[f]^\mathrm{dr}\rangle.$$ The variational derivative with respect to $V(w)$ is slightly more complicated, since the constraint (2.8) has to be respected. The corresponding directional derivative, oriented along $g(w)$, is denoted by $\mathcal{D}_g = \int \mathrm{d} w g(w) \delta/ \delta V(w)$. Then, for general functions $g,f,\tilde{f}$, one obtains $$\label{4.4} -\mathcal{D}_g \langle \tilde{f} (1- \rho_\mu T)^{-1} \rho_\mu f\rangle = \langle \rho_\mathrm{p} g^\mathrm{dr} \tilde{f}^\mathrm{dr} f^\mathrm{dr}\rangle - \nu \langle \rho_\mathrm{p}g^\mathrm{dr} \rangle \langle \rho_\mathrm{p} \tilde{f}^\mathrm{dr} f^\mathrm{dr}\rangle.$$ The average charges are $$\label{4.5} \big\langle r_0\big\rangle_{P,V} = \nu, \quad \big\langle Q^{[n]}_0 \big\rangle_{P,V} = \nu\langle\rho_\mathrm{p}w^n\rangle.$$ By suitable linear combinations the powers $w^n$ are replaced by a general function $f(w)$. The covariance matrix $C$ is obtained by taking derivatives of Eq. (4.5) with respect to $P$ and $V$. 
For $C_{0,0}$ we note $$\label{4.6} -\partial_P \nu = \nu^2 \partial_P \langle \rho_\mathrm{p} \rangle = \nu^3 \langle \rho_\mathrm{p}[1]^{\mathrm{dr} 2}\rangle$$ and for $C_{0,n}, n \geq 1$, we use Eq. (4.3) to arrive at $$\begin{aligned} \label{4.7} &&\hspace{-40pt}- \partial_P\big( \nu\langle \rho_\mathrm{p} f\rangle\big) = -(\partial_P\nu)\langle \rho_\mathrm{p} f\rangle - \nu \partial_P \langle \rho_\mathrm{p} f\rangle \nonumber\\ &&\hspace{32pt}= \nu^3 \langle \rho_\mathrm{p}[1]^{\mathrm{dr} 2}\rangle\langle \rho_\mathrm{p} f\rangle - \nu^2 \langle \rho_\mathrm{p}[1]^\mathrm{dr} [f]^\mathrm{dr}\rangle =-\nu \langle F^\mathrm{T}h[1]^\mathrm{dr}|f\rangle,\end{aligned}$$ as claimed in Eq. (3.4). For $m,n\geq 1$ we use Eq. (4.4) for the special choice $\tilde{f} = 1$. Then $$\begin{aligned} \label{4.8} &&\hspace{-36pt}-\mathcal{D}_g(\nu\langle \rho_\mathrm{p} f \rangle) = \nu^2\big( \mathcal{D}_g\langle \rho_\mathrm{p} \rangle\big) \langle \rho_\mathrm{p} f\rangle -\nu\mathcal{D}_g \langle \rho_\mathrm{p} f\rangle = \nu^2\big(- \langle \rho_\mathrm{p} [1]^{\mathrm{dr} 2} g^\mathrm{dr} \rangle\nonumber\\ &&\hspace{46pt} + \,\nu \langle \rho_\mathrm{p} g \rangle\langle \rho_\mathrm{p} [1]^{\mathrm{dr} 2} \rangle\big) \langle \rho_\mathrm{p} f \rangle - \nu \big( - \langle \rho_\mathrm{p} [1]^\mathrm{dr}g^\mathrm{dr}f^\mathrm{dr} \rangle + \nu \langle \rho_\mathrm{p} g \rangle \langle \rho_\mathrm{p} [1]^\mathrm{dr}f^\mathrm{dr} \rangle\big)\nonumber\\ &&\hspace{34pt} = \nu \langle \rho_\mathrm{p} (g- [1] \nu \langle \rho_\mathrm{p} g\rangle)^\mathrm{dr} (f- [1]\nu \langle \rho_\mathrm{p}f \rangle)^\mathrm{dr} \rangle,\end{aligned}$$ in agreement with Eq. (3.4). The $B$-matrix is symmetric. Since this property is a useful control check, we briefly repeat the argument in our specific context, see also [@S91]. 
By stationarity $S_{m,n}(j,t) = S_{n,m}(-j,-t)$ and hence $$\label{4.9} \sum_{j \in \mathbb{Z}} jS_{m,n}(j,t) = - \sum_{j \in \mathbb{Z}} jS_{n,m}(j,-t).$$ Because of the conservation law, $$\begin{aligned} \label{4.10} &&\hspace{-30pt} \frac{d}{dt}\sum_{j \in \mathbb{Z}} jS_{m,n}(j,t) = \sum_{j \in \mathbb{Z}} j \langle J^{[m]}_{j-1}(t) - J^{[m]}_{j}(t));Q^{[n]}_0(0)\rangle = \sum_{j \in \mathbb{Z}} \langle J^{[m]}_j(t);Q^{[n]}_0(0)\rangle \nonumber\\ &&\hspace{59pt} = \sum_{j \in \mathbb{Z}} \langle J^{[m]}_0(0);Q^{[n]}_{-j}(-t) \rangle = \sum_{j \in \mathbb{Z}} \langle J^{[m]}_0(0); Q^{[n]}_{j}(0) \rangle,\end{aligned}$$ where we used stationarity again and in the last step the conservation of $Q^{[n]}$. Thus, $$\label{4.11} \sum_{j \in \mathbb{Z}} jS_{m,n}(j,t) = B_{m,n}t +\sum_{j \in \mathbb{Z}} jS_{m,n}(j,0) \,.$$ Upon inserting in one arrives at $B_{m,n}t = B_{n,m}t$. Returning to Toda, the average currents are $$\label{4.12} \big\langle J^{[n]}_0 \big\rangle_{P,V} = \langle (v^\mathrm{eff} - q_1) \rho_\mathrm{p}w^n\rangle = \langle w (1- \rho_\mu T)^{-1} \rho_\mu w^n\rangle - \nu\langle \rho_\mathrm{p}w\rangle \langle \rho_\mathrm{p}w^n\rangle.$$ The second version will be more convenient for us. Note that while one can shift to $q_1 = 0$, when taking derivatives this term has to be kept. As before the powers $w^n$ are replaced by a general function $f(w)$. Since $J^{[0]} = - Q^{[1]}$, the border matrix elements are already determined. 
As control we still repeat with the result $$\begin{aligned} \label{4.13} &&\hspace{-30pt}-\partial_P \big( \langle w (1- \rho_\mu T)^{-1} \rho_\mu f \rangle - \nu \langle\rho_\mathrm{p}f\rangle \langle \rho_\mathrm{p}w\rangle\big)\nonumber\\ &&\hspace{20pt}=- \big(\nu \langle \rho_\mathrm{p}[w]^\mathrm{dr}f^\mathrm{dr}\rangle + \nu^3\langle \rho_\mathrm{p}[1]^{\mathrm{dr} 2}\rangle\langle\rho_\mathrm{p}f\rangle \langle \rho_\mathrm{p}w\rangle \nonumber\\ &&\hspace{32pt}-\nu^2\langle \rho_\mathrm{p}[1]^\mathrm{dr}f^\mathrm{dr}\rangle \langle \rho_\mathrm{p}w\rangle - \nu^2 \langle\rho_\mathrm{p}f\rangle \langle \rho_\mathrm{p}[w]^\mathrm{dr}[1]^\mathrm{dr}\rangle \big).\end{aligned}$$ The two middle terms vanish, since $q_1 = 0$, and the remainder agrees with . Finally, using with $\tilde{f}(w) = w$, we turn to $$\begin{aligned} \label{4.14} &&\hspace{-30pt}-\mathcal{D}_g \big( \langle w (1- \rho_\mu T)^{-1} \rho_\mu f \rangle - \nu \langle\rho_\mathrm{p}w\rangle \langle \rho_\mathrm{p}f\rangle\big)\nonumber\\ && = \langle \rho_\mathrm{p}( [w]^\mathrm{dr} - \nu\langle \rho_\mathrm{p}w\rangle)(g - \nu \langle \rho_\mathrm{p} g\rangle [1])^\mathrm{dr}(f - \nu \langle \rho_\mathrm{p} f\rangle[1] )^\mathrm{dr} \rangle.\end{aligned}$$ Since $q_1 = 0$, we have obtained the $(2,2)$ matrix element of . Linearized transformation to normal modes {#sec4.2} ----------------------------------------- In we introduced a nonlinear map which transforms the system of conservation laws into its quasi-linear version in such a way that the linearized operator $A$ is manifestly diagonal. One would expect this property to persist under linearization. In the $\nu,h$ variables the map is $$\label{4.15} \rho_\mu = h(\nu + Th)^{-1}.$$ We linearize on both sides as $\rho_\mu +\epsilon g$, $\nu +\epsilon r$, $h +\epsilon \phi$, $\langle \phi \rangle = 0$. 
To first order in $\epsilon$ this yields the linear map $R: g \mapsto (r,\phi)$ given by $$\label{4.16} Rg = \nu \begin{pmatrix} - \nu \langle g [1]^{\mathrm{dr}2}\rangle\\[0.5ex] F^\mathrm{T}g [1]^\mathrm{dr} \end{pmatrix},$$ where $$\label{4.17} F^{\mathrm{T}} \psi = (1 - \rho_\mu T)^{-1}\psi - h \langle [1]^\mathrm{dr} \psi\rangle.$$ Note that indeed $\langle F^{\mathrm{T}} \psi\rangle = 0$. can be inverted as $$\label{4.18} \nu = \langle \rho_\mathrm{p} \rangle^{-1},\quad h = \langle \rho_\mathrm{p} \rangle^{-1} \rho_\mathrm{p}, \quad \rho_\mathrm{p} = (1- \rho_\mu T)^{-1}\rho_\mu,$$ thereby deriving, by a similar argument as before, $$\label{4.19} R^{-1} \begin{pmatrix} r\\ \phi \end{pmatrix} = (\nu [1]^\mathrm{dr})^{-1}(- \rho_\mu r + (1-\rho_\mu T )\phi).$$ Indeed one checks that $$\label{4.20} RR^{-1} = 1,\quad R^{-1}R = 1,$$ the first $1$ standing for the identity operator as a $2\times 2$ block matrix and the second $1$ for the identity operator in the space of scalar functions. The next step is to write the $C, B$ matrices in the new basis. 
Using $$\label{4.21} (1-\rho_\mu T) F^\mathrm{T}\psi = \psi - \nu \rho_\mu \langle \psi[1]^{\mathrm{dr}}\rangle,$$ one arrives at $$\begin{aligned} \label{4.22} &&\hspace{-47pt}R^{-1}CRg = \nu R^{-1} \begin{pmatrix} \nu^2 \langle h [1]^{\mathrm{dr}2}\rangle &-\nu \big\langle F^\mathrm{T}h[1]^\mathrm{dr}\big|\nonumber\\[0.5ex] -\nu \big| F^\mathrm{T}h[1]^\mathrm{dr}\big\rangle& F^\mathrm{T}h F \end{pmatrix} \begin{pmatrix} - \nu \langle g [1]^{\mathrm{dr}2}\rangle\\[0.5ex] F^\mathrm{T}g [1]^\mathrm{dr} \end{pmatrix}\\ [1ex] &&\hspace{0pt}= ([1]^\mathrm{dr})^{-1}\big(\nu^3 \rho_\mu \langle h [1]^{\mathrm{dr}2}\rangle \langle g[1]^{\mathrm{dr}2}\rangle +\nu \rho_\mu \langle (F^\mathrm{T}h[1]^\mathrm{dr})(F^\mathrm{T}g[1]^\mathrm{dr})\rangle\nonumber\\[0.5ex] &&\hspace{30pt}+ \nu^2 (1-\rho_\mu T) F^\mathrm{T}h[1]^\mathrm{dr}\langle g[1]^{\mathrm{dr}2}\rangle + (1-\rho_\mu T) F^\mathrm{T}hFF^\mathrm{T}g[1]^\mathrm{dr}\big)\nonumber\\ [1ex] &&\hspace{0pt} = \nu^2\big|([1]^\mathrm{dr})^{-2}h\big\rangle \langle h [1]^{\mathrm{dr}2}\rangle \langle g[1]^{\mathrm{dr}2}\rangle + \big|([1]^\mathrm{dr})^{-2}h\big\rangle \langle (F^\mathrm{T}h[1]^\mathrm{dr})(F^\mathrm{T}g[1]^\mathrm{dr})\rangle\nonumber\\[0.5ex] &&\hspace{30pt}+ \nu^2 \big|h\big\rangle\langle g[1]^{\mathrm{dr}2} \rangle - \nu^2\big|([1]^\mathrm{dr})^{-2}h\big\rangle \langle h [1]^{\mathrm{dr}2}\rangle \langle g[1]^{\mathrm{dr}2}\rangle\nonumber\\[0.5ex] &&\hspace{30pt}+\big|([1]^\mathrm{dr})^{-1} h FF^\mathrm{T}g[1]^\mathrm{dr} \big\rangle - \big|([1]^\mathrm{dr})^{-2} h\big\rangle \langle [1]^\mathrm{dr}hFF^\mathrm{T}g[1]^\mathrm{dr} \rangle. 
\end{aligned}$$ There are two cancellations which yield, as operators, $$\label{4.23} R^{-1}CR = \nu^2 \big| h \big\rangle \big\langle[1]^{\mathrm{dr}2}\big| + ([1]^\mathrm{dr})^{-1}h FF^\mathrm{T}[1]^\mathrm{dr}.$$ Correspondingly for the $B$-matrix, $$\begin{aligned} \label{4.24} &&\hspace{-24pt}R^{-1}BRg = R^{-1} \begin{pmatrix} \nu^2 \langle h v^\mathrm{eff} [1]^{\mathrm{dr}2}\rangle &-\nu \big\langle F^\mathrm{T}h v^\mathrm{eff}[1]^\mathrm{dr}\big|\nonumber\\[0.5ex] -\nu \big| F^\mathrm{T}hv^\mathrm{eff}[1]^\mathrm{dr}\big\rangle& F^\mathrm{T}hv^\mathrm{eff} F \end{pmatrix} \begin{pmatrix} - \nu \langle g [1]^{\mathrm{dr}2}\rangle\\[0.5ex] F^\mathrm{T}g [1]^\mathrm{dr} \end{pmatrix}\\ [1ex] &&\hspace{14pt}= (\nu[1]^\mathrm{dr})^{-1}\big( \nu^3 \rho_\mu \langle h v^\mathrm{eff} [1]^{\mathrm{dr}2}\rangle \langle g[1]^{\mathrm{dr}2}\rangle + \nu\rho_\mu \langle (F^\mathrm{T}h v^\mathrm{eff} [1]^\mathrm{dr})(F^\mathrm{T}g[1]^\mathrm{dr})\rangle\nonumber\\[0.5ex] &&\hspace{44pt}+ \nu^2 (1-\rho_\mu T) F^\mathrm{T}hv^\mathrm{eff}[1]^\mathrm{dr}\langle g[1]^{\mathrm{dr}2}\rangle + (1-\rho_\mu T) F^\mathrm{T}hv^\mathrm{eff}FF^\mathrm{T}g[1]^\mathrm{dr}\big)\nonumber\\ [1ex] &&\hspace{14pt}= \nu^{-1}\big(\nu^2 \big|([1]^\mathrm{dr})^{-2} h \big\rangle \langle h v^\mathrm{eff} [1]^{\mathrm{dr}2}\rangle \langle g[1]^{\mathrm{dr}2}\rangle + \big|([1]^\mathrm{dr})^{-2}h\big\rangle \langle (F^\mathrm{T}h v^\mathrm{eff} [1]^\mathrm{dr})(F^\mathrm{T}g[1]^\mathrm{dr})\rangle\nonumber\\[0.5ex] &&\hspace{44pt}+\nu^2\big| hv^\mathrm{eff}\big\rangle\langle g[1]^{\mathrm{dr}2}\rangle - \nu^2 \big|([1]^\mathrm{dr})^{-2}h\big\rangle\langle hv^\mathrm{eff}[1]^{\mathrm{dr}2}\rangle\langle g[1]^{\mathrm{dr}2}\rangle \nonumber \\[0.5ex] &&\hspace{44pt}+ \big|([1]^\mathrm{dr})^{-1}hv^\mathrm{eff}FF^\mathrm{T}g[1]^\mathrm{dr} \big\rangle - \big|([1]^\mathrm{dr})^{-2}h\big\rangle \langle hv^\mathrm{eff}[1]^\mathrm{dr}FF^\mathrm{T}g[1]^\mathrm{dr}\rangle\big).\end{aligned}$$ As before there 
are two cancellations yielding $$\label{4.25} R^{-1}BR = \nu^{-1}\big(\nu^2\big| hv^\mathrm{eff}\big\rangle\big\langle [1]^{\mathrm{dr}2}\big| + ([1]^\mathrm{dr})^{-1}hv^\mathrm{eff}FF^\mathrm{T}[1]^\mathrm{dr} \big) = \nu^{-1} v^\mathrm{eff} R^{-1}CR.$$ In the new basis, as anticipated, $R^{-1}AR$ is simply multiplication by $\nu^{-1} v^\mathrm{eff}$. We conclude that $$\label{4.26} \mathrm{e}^{At}C = R\mathrm{e}^{(v^\mathrm{eff}/\nu)t}R^{-1}C.$$ Working out the algebra, one arrives at $$\label{4.27} \mathrm{e}^{At}C = \begin{pmatrix} \nu^2 \langle h\mathrm{e}^{(v^\mathrm{eff}/\nu)t} [1]^{\mathrm{dr}2}\rangle &-\nu \big\langle F^\mathrm{T}h\mathrm{e}^{(v^\mathrm{eff}/\nu)t} [1]^\mathrm{dr}\big|\\[0.5ex] -\nu \big| F^\mathrm{T}h\mathrm{e}^{(v^\mathrm{eff}/\nu)t} [1]^\mathrm{dr}\big\rangle& F^\mathrm{T}h\mathrm{e}^{(v^\mathrm{eff}/\nu)t } F \end{pmatrix}.$$ As explained in the beginning of Sect. \[sec3.2\], with this input one can write the solution to the hydrodynamic equations linearized relative to some prescribed GGE and with random initial conditions having covariance matrix $\delta(x)C$. The result is the right hand side of . While not used, just for completeness the matrix $A$ is recorded as $$\label{4.28} A = \nu^{-1}Rv_\mathrm{eff}R^{-1} = \begin{pmatrix} \langle \rho_\mu v^\mathrm{eff} [1]^{\mathrm{dr}}\rangle &- \big\langle v^\mathrm{eff}[1]^\mathrm{dr}(1 -\rho_\mu T)\big|\\[0.5ex] -\nu^{-1} \big| F^\mathrm{T}\rho_\mu v^\mathrm{eff}\big\rangle& \nu^{-1}F^\mathrm{T}v^\mathrm{eff} (1 -\rho_\mu T) \end{pmatrix}.$$ The $C$-matrix from Dyson’s Brownian motion {#sec5} =========================================== The variational problem is closely linked to Dyson’s Brownian motion with confining potential $V$. To make this section self-contained, some background material is required. The precise connection to Toda will be at the end of this section.
We consider the stochastic particle system on $\mathbb{R}$ governed by $$\label{5.1} dx_j(t) = -V'(x_j(t))dt + \frac{1}{N}\sum_{i = 1,i\neq j}^N \frac{2\alpha}{x_j(t) - x_i(t)} dt + \sqrt{2} db_j(t), \quad j = 1,...,N, \quad\alpha \geq 0 ,$$ with $\{b_j(t), j = 1,...,N\}$ a collection of independent standard Brownian motions. This is Dyson’s Brownian motion in an external potential $V$. The interaction has strength $\alpha/N$, which corresponds to a standard mean-field limit. (As to be discussed, the proper identification will be $\alpha = P$.) Let us introduce the empirical density $$\label{5.2} \rho_N(x,t) = \frac{1}{N} \sum_{j = 1}^N \delta(x_j(t)-x).$$ If at the initial time $t = 0$, $\rho_N(x,0)$ converges in the limit $N \to \infty$ to a non-random density $\rho(x,0)$, then such a limit will hold for any $t >0$ and the limit density satisfies the nonlinear Fokker-Planck equation $$\label{5.3} \partial_t \rho(x,t) = \partial_x\big( V_\mathrm{eff}'(x,t)\rho(x,t) + \partial_x \rho(x,t)\big),$$ where the effective potential is defined by $$\label{5.4} V_\mathrm{eff}(x,t) = V(x) - \alpha (T \rho)(x,t)$$ with $T$ the linear operator defined in . For this convergence result even a rigorous proof is available [@CL97]. If $V$ is suitably confining, then Eq. has a unique stationary solution $\rho_\mathrm{s}$, which can also be characterized as the minimizer of $$\label{5.5} \mathcal{F}^\alpha(\varrho) = \int _\mathbb{R}\mathrm{d}w \varrho(w) V(w) -\alpha \int _\mathbb{R}\mathrm{d}w\int _\mathbb{R}\mathrm{d}w' \log|w - w'|\varrho(w) \varrho(w') + \int _\mathbb{R}\mathrm{d}w \varrho(w) \log \varrho(w)$$ under the constraints $\varrho (x) \geq 0$, $\int \mathrm{d}x \varrho (x) = 1$. Let us now consider the stationary dynamics with $\rho(x,0) = \rho_\mathrm{s}(x)$ and study the fluctuations of the density. It is convenient to integrate against some smooth test function $f$.
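As a quick numerical aside, the particle system above can be integrated with a naive Euler–Maruyama scheme. In the Python sketch below every parameter value, and in particular the harmonic choice $V(w)=\tfrac{1}{2}w^2$, is an illustrative assumption rather than a value taken from the text.

```python
import numpy as np

# Euler-Maruyama discretization of the Dyson SDE above; N, alpha, dt, the
# number of steps, and the potential are all illustrative assumptions.
rng = np.random.default_rng(0)
N, alpha, dt, steps = 50, 1.0, 1e-4, 2000

def V_prime(x):
    return x                 # harmonic confinement V(w) = w**2 / 2 (an assumption)

x = np.sort(rng.normal(size=N))
for _ in range(steps):
    diff = x[:, None] - x[None, :]      # diff[j, i] = x_j - x_i
    np.fill_diagonal(diff, np.inf)      # exclude the i = j term (1/inf = 0)
    drift = -V_prime(x) + (2.0 * alpha / N) * np.sum(1.0 / diff, axis=1)
    x = x + drift * dt + np.sqrt(2.0 * dt) * rng.normal(size=N)

print(x.min(), x.max())
```

For large $N$ the empirical density of `x` should relax to the stationary density of the nonlinear Fokker-Planck equation.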
Then the scaled density fluctuations are $$\label{5.6} \phi_N(f,t) = \frac{1}{\sqrt{N}} \sum_{j=1}^N \Big(f(x_j(t)) - \int _\mathbb{R}\mathrm{d}x \rho_\mathrm{s}(x) f(x)\Big) = \int_\mathbb{R} \mathrm{d}x f(x) \phi(x,t).$$ For them a standard central limit theorem holds, compare with [@I01], $$\label{5.7} \lim_{N\to \infty} \phi_N(f,t) = \phi(f,t),$$ where $\phi(x,t)$ is governed by the linear Langevin equation $$\label{5.8} \frac{d}{dt} \phi(x,t) = \partial_x D\phi(x,t) + \sqrt{2\rho_\mathrm{s}(x)}\xi(x,t).$$ Here $\xi(x,t)$ is normalized space-time white noise and $\partial_x D $ the linearized evolution operator with $$\label{5.9} D = V'_\mathrm{eff} - \alpha \rho_\mathrm{s} T\partial_x.$$ $V'_\mathrm{eff}$ is defined as in with $\rho(x,t)$ substituted by $\rho_\mathrm{s}(x)$. For the following argument $V'_\mathrm{eff}$ and $\rho_\mathrm{s}$ act as multiplication operators, while $\partial = \partial_x$ is the differentiation operator, which commutes with $T$, $T\partial = \partial T$. The stationary solution to Eq. is a Gaussian measure, its covariance denoted by $C^\sharp$, which is determined by $$\label{5.10} \langle D^\mathrm{T}\partial f,C^\sharp g\rangle + \langle f,C^\sharp D^\mathrm{T}\partial g\rangle = 2\langle \partial f, \rho_\mathrm{s} \partial g\rangle,$$ where $\langle \cdot,\cdot \rangle$ denotes the standard $L^2$ inner product, $\langle f,g \rangle = \int\mathrm{d}x f(x) g(x)$.
We claim that as an operator $$\label{5.11} C^\sharp = (1 - \alpha \rho_\mathrm{s} T)^{-1}\rho_\mathrm{s} - \frac{1}{\langle [1],(1 - \alpha \rho_\mathrm{s} T)^{-1}\rho_\mathrm{s}[1]\rangle} \big|(1 - \alpha \rho_\mathrm{s} T)^{-1}\rho_\mathrm{s}\big\rangle \big\langle (1 - \alpha\rho_\mathrm{s}T)^{-1}\rho_\mathrm{s}\big|.$$ The second term ensures that there are no fluctuations in the number of particles, $C^\sharp [1] = 0$.\ *Proof*: We consider only the leftmost term, the other one following by symmetry, and have to show that $$\label{5.12} \langle \partial f,D \rho_\mathrm{s} (1 -\alpha T \rho_\mathrm{s})^{-1}g\rangle = \langle \partial f, \rho_\mathrm{s} \partial g\rangle,$$ and, upon replacing $g$ by $(1 -\alpha T \rho_\mathrm{s})g$, $$\label{5.13} \langle \partial f,D \rho_\mathrm{s} g\rangle = \langle \partial f, \rho_\mathrm{s} \partial (1 -\alpha T \rho_\mathrm{s})g\rangle.$$ Since $\rho_\mathrm{s}$ is stationary, we have $$\label{5.14} \big(V' -\alpha \rho_\mathrm{s} T\partial + \partial\big)\rho_\mathrm{s} = 0,$$ which when inserted in leads to the condition $$\label{5.15} \rho_\mathrm{s} \partial g -\alpha \rho_\mathrm{s} T\partial ( \rho_\mathrm{s}g) = \rho_\mathrm{s} \partial (1 -\alpha T \rho_\mathrm{s})g.$$ The latter identity follows using $T\partial = \partial T$, thereby confirming our claim. $\Box$ To see the connection to the $(2,2)$ matrix element of , one recalls that because of the linear ramp argument the covariance for the conserved charges is given by $\partial_\alpha(\alpha C^\sharp)$, an expression depending only on $\alpha \rho_\mathrm{s}$. Thus differentiating with respect to $\alpha$ becomes identical to differentiating with respect to $P$. Using this observation in , one arrives at the four terms defining the $(2,2)$ matrix element, thus providing an alternative derivation for this particular contribution.
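The projection structure of $C^\sharp$, in particular $C^\sharp[1]=0=[1]C^\sharp$, is easy to confirm on a grid. In the Python sketch below the grid, the density $\rho_\mathrm{s}$, the coupling $\alpha$, and the diagonal cutoff of the logarithmic kernel are all illustrative assumptions; the cancellation itself is an algebraic identity and holds for any such choice.

```python
import numpy as np

# Discretize (1 - alpha rho_s T)^{-1} rho_s and the rank-one subtraction in C-sharp.
n = 200
w = np.linspace(-3.0, 3.0, n)
dw = w[1] - w[0]
rho_s = np.exp(-w**2)
rho_s /= rho_s.sum() * dw                 # normalized density (illustrative choice)
alpha = 1.5

# log|w - w'| kernel; adding the identity puts log(1) = 0 on the diagonal (crude cutoff).
K = np.log(np.abs(w[:, None] - w[None, :]) + np.eye(n))
R = np.linalg.solve(np.eye(n) - alpha * rho_s[:, None] * K * dw, np.diag(rho_s))

one = np.ones(n)
ket = R @ one                             # (1 - alpha rho_s T)^{-1} rho_s applied to [1]
bra = R.T @ one                           # adjoint applied to [1]
norm = one @ ket * dw
C_sharp = R - np.outer(ket, bra) * dw / norm

print(np.abs(C_sharp @ one).max(), np.abs(one @ C_sharp).max())
```

Both residuals vanish to machine precision, so $[1]$ lies in the kernel of the discretized $C^\sharp$ on both sides, as claimed.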
The stretch does not seem to be explicitly linked to the Lax matrix and for the $(1,2)$ matrix element of $C$ one has to rely on the thermodynamic reasoning. Hard rods viewed as lattice model {#sec6} ================================= Hard rods are more naturally viewed as a one-dimensional fluid. To make the connection with the Toda chain, we discuss here the chain point of view. The rod length is denoted by $a$. The $r_j$’s change linearly in time at constant $p_j$’s. At the instant when $r_j = a$ with incoming momenta $p_j ,p_{j+1}$ the momenta are simply interchanged, resulting in $\tfrac{d}{dt} r_j >0$. The Euler equations could be easily guessed. But in our context it is more instructive to follow the systematic route outlined in Sect. \[sec2\]. $\nu$ denotes the mean distance between hard rods, $\nu > a$, and $h(v)$ is the normalized velocity distribution. Thus $\rho_\mathrm{p} = \nu^{-1}h$ and $u = \langle h v \rangle$. The phase shift equals $-a$ and hence $$\label{6.1} T = -a |1\rangle \langle 1|, \qquad (1 - T\rho_\mu)^{-1} = 1 - (1 + a \langle \rho_\mu \rangle)^{-1} |1\rangle \langle a\rho_\mu|.$$ Since $\rho_\mathrm{p} = [1]^\mathrm{dr}\rho_\mu$, one arrives at $$\label{6.2} \rho_\mu = (\nu - a)^{-1}h$$ and, using , $$\label{6.3} v^\mathrm{eff} = (\nu - a)^{-1}(\nu v -a u).$$ Thus the equations of GHD read $$\label{6.4} \partial_t \nu - \partial_x u = 0,\quad \partial_t h +\partial_x \big((\nu - a)^{-1}(v-u)h\big) = 0$$ with the normal mode transform $$\label{6.5} \partial_t \rho_\mu + (\nu - a)^{-1}(v-u) \partial_x\rho_\mu = 0.$$ The factor $(\nu - a)^{-1}$ is the equilibrium density at contact.
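Because the hard-rod $T$ is rank one, all dressing operations are explicit, which makes this a convenient numerical test case. The following Python sketch (grid, Maxwellian $h$, and the values of $a,\nu$ are illustrative assumptions) checks the Sherman–Morrison inverse of $1 - T\rho_\mu$ and recovers the effective velocity, assuming the standard dressing relation $v^\mathrm{eff} = [v]^\mathrm{dr}/[1]^\mathrm{dr}$.

```python
import numpy as np

# Velocity grid and normalized velocity distribution h (illustrative choices).
n = 400
v = np.linspace(-4.0, 4.0, n)
dv = v[1] - v[0]
h = np.exp(-v**2 / 2)
h /= h.sum() * dv                    # <h> = 1
a, nu = 0.3, 1.0                     # rod length and mean inter-particle distance
rho_mu = h / (nu - a)

# T = -a |1><1| as a matrix acting with measure dv; M = 1 - T rho_mu.
one = np.ones(n)
M = np.eye(n) + a * np.outer(one, rho_mu) * dv
# Sherman-Morrison form of the inverse of M:
mean_rho = rho_mu.sum() * dv         # <rho_mu> = 1/(nu - a)
M_inv = np.eye(n) - np.outer(one, a * rho_mu) * dv / (1 + a * mean_rho)
print(np.abs(M_inv @ M - np.eye(n)).max())

one_dr = M_inv @ one                 # [1]^dr, constant in v, equal to (nu - a)/nu
v_dr = M_inv @ v                     # [v]^dr
u = (h * v).sum() * dv               # mean velocity
v_eff = v_dr / one_dr
print(np.abs(v_eff - (nu * v - a * u) / (nu - a)).max())
```

Both residuals are at machine precision, consistent with the closed-form effective velocity quoted above.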
The static covariance equals $$\label{6.6} C = \begin{pmatrix} (\nu - a)^2 &0\\[1ex] 0& h - |h\rangle\langle h| \end{pmatrix}$$ and the charge-current correlator, setting $u = 0$, $$\label{6.7} B = \begin{pmatrix} 0 &- \langle hv|\\[1ex] -| hv\rangle& (\nu- a)^{-1}\big(hv - | hv\rangle\langle h| - | h\rangle\langle hv|\big) \end{pmatrix}.$$ The linearized operator $A$ is explicit, $$\label{6.8} A = \begin{pmatrix} 0 &- \langle v|\\[1ex] -(\nu- a)^{-2}| vh\rangle& (\nu- a)^{-1}\big(v - | h\rangle\langle v| \big) \end{pmatrix}.$$ Linearizing , as before, yields the similarity transform $$\label{6.9} Rg = (\nu- a) \begin{pmatrix} - (\nu - a) \langle g \rangle\\ g - h\langle g\rangle \end{pmatrix},\quad R^{-1} \begin{pmatrix} r\\ \phi \end{pmatrix} = (\nu- a)^{-1} ( \phi - (\nu - a)^{-1}h r).$$ With this input $$\begin{aligned} \label{6.10} &&\hspace{ -8pt}\mathrm{e}^{At}C = \\[0.5ex] &&\hspace{ -12pt}\begin{pmatrix} (\nu - a)^{2}\langle \mathrm{e}^{v^\mathrm{eff}t}h\rangle &(\nu - a)\big(- \langle \mathrm{e}^{v^\mathrm{eff}t}h | + \langle\mathrm{e}^{v^\mathrm{eff}t}h\rangle \langle h|\big) \\[1ex] (\nu - a)\big(-|\mathrm{e}^{v^\mathrm{eff}t}h\rangle + | h\rangle\langle\mathrm{e}^{v^\mathrm{eff}t}h \rangle\big)& \mathrm{e}^{v^\mathrm{eff}t}h - |\mathrm{e}^{v^\mathrm{eff}t}h\rangle\langle h| - |h\rangle\langle \mathrm{e}^{v^\mathrm{eff}t}h| + \langle\mathrm{e}^{v^\mathrm{eff}t}h\rangle |h\rangle \langle h| \end{pmatrix}. \nonumber \end{aligned}$$ The limit $t \to 0$ and its first derivative yield $C$ and $B$, respectively. Free energy {#sec8} =========== We briefly explain how the Toda generalized free energy is linked to the variational problem , more details being provided in [@S19]. 
For free boundary conditions, $(L_N)_{1,N} = 0$, and in the variables of , the Toda partition function reads $$\begin{aligned} \label{8.1} &&\hspace{-29pt} Z_\mathrm{toda} = \int_{\mathbb{R}^N} \prod_{j=1}^{N} \mathrm{d}b_j \int_{(\mathbb{R}_+)^{(N-1)}} \prod_{j=1}^{N-1} \mathrm{d}a_j \frac{2}{a_j} (a_j)^{2P} \mathrm{e}^{-\mathrm{tr}[V(L_N)]}\nonumber\\ && = \int_{\mathbb{R}^N} \prod_{j=1}^{N} \mathrm{d}b_j \int_{(\mathbb{R}_+)^{(N-1)}} \prod_{j=1}^{N-1} \mathrm{d}a_j \frac{2}{a_j} (a_j)^{2P} \mathrm{e}^{-(\frac{1}{2}b_j^2 + (a_j)^2 ) } \mathrm{e}^{-\mathrm{tr}[\tilde{V}(L_N)]},\end{aligned}$$ where $\tfrac{1}{2}w^2 + \tilde{V}(w) = V(w)$. Here $P$ is the pressure and $V$ the chemical potential. More precisely, the $n$-th charge is controlled by the chemical potential $\mu_n$. The grand-canonical weight is therefore the exponential of $\sum_{n=1}^\infty\mu_n Q^{[n]}= \sum_{n=1}^\infty \mu_n \mathrm{tr}[L^n]$. Introducing $V(w) = -\sum_{n=1}^\infty\mu_n w^n$ the exponent can be written more concisely as $-\mathrm{tr}[V(L_N)]$. We compare $Z_\mathrm{toda}$ with the Dumitriu-Edelman partition function, $$\begin{aligned} \label{8.2} &&\hspace{-30pt}Z_\mathrm{dued} = \int_{\mathbb{R}^N} \prod_{j=1}^{N} \mathrm{d}b_j \int_{(\mathbb{R}_+)^{(N-1)}} \prod_{j=1}^{N-1} \mathrm{d}a_j \frac{2}{a_j} (a_j)^{2P(j/N)} \mathrm{e}^{-(\frac{1}{2}b_j^2 + (a_j)^2 ) } \mathrm{e}^{-\mathrm{tr}[\tilde{V}(L_N)]}\\ && = D_N(P)\int_{\mathbb{R}^N} \prod_{j=1}^{N}\mathrm{d} \lambda_j \exp\Big[- \sum_{j=1}^N\big( \tfrac{1}{2}(\lambda_j)^2 + \tilde{V}(\lambda_j)\big)- P \frac{1}{N}\sum_{i,j=1,i \neq j}^N \log|\lambda_i - \lambda_j|\Big] \nonumber\\ && = D_N(P)\int_{\mathbb{R}^N} \prod_{j=1}^{N}\mathrm{d} \lambda_j \exp\Big[- \sum_{j=1}^N\big( V(\lambda_j)\big)- P \frac{1}{N}\sum_{i,j=1,i \neq j}^N \log|\lambda_i - \lambda_j|\Big].
\end{aligned}$$ This identity follows from the Dumitriu-Edelman theorem [@DE02] with the choice $\beta = 2P/N$ and exploiting that $\mathrm{tr}[\tilde{V}(L_N)]$ depends only on the eigenvalues of $L_N$. Only with the multiplicative factor chosen as in , does the potential term on the left recombine into $V(\lambda_j)$. The prefactor is given by $$\label{8.3} D_N(P) = \Gamma(P)^{-1}\Gamma(1+ \tfrac{P}{N})^N \prod_{j=1}^N\frac{\Gamma(\tfrac{jP}{N})}{\Gamma(1+\tfrac{jP}{N})}$$ and $$\label{8.4} \lim_{N \to \infty} N^{-1} \log D_N(P) = -\log P +1.$$ On the left of Eq. we now use the slow ramp of the pressure and on the right the convergence to the mean field free energy of the log gas. Thus $$\label{8.5} \int_0^1 \mathrm{d}u F_\mathrm{toda}(uP) = \mathcal{F}^\mathrm{MF}_P(\rho^*) +\log P -1$$ with $\rho^*$ the minimizer of the mean field free energy functional. It is advantageous to absorb $P$ into $\varrho$ through $P \mathcal{F}_P^\mathrm{MF}(P^{-1}\varrho)= \mathcal{F}(\varrho) -P\log P$. Then $$\label{8.6} F_\mathrm{toda}(P) = \partial_P \mathcal{F}(\varrho^*(P)) - 1$$ with $\mathcal{F}$ as in . Further properties are discussed in [@S19]. In particular $$\label{8.8} F_\mathrm{toda}(P) = \mu(P).$$ [99]{} T. Schneider and E. Stoll, Excitation spectrum of the Toda lattice: a molecular-dynamics study, Phys. Rev. Lett. **45**, 997 – 1002 (1980). S. Diederich, A conventional approach to dynamic correlations in the Toda lattice, Phys. Lett. A **85**, 233 – 235 (1981). T. Schneider, Classical statistical mechanics of lattice dynamic model systems: transfer integral and molecular-dynamics studies. In: Statics and Dynamics of Nonlinear Systems, eds.: G. Benedek, H. Bilz, R. Zeyher, p. 212 – 241. Proceedings Ettore Majorana Centre, Erice, Italy, 1983. N. Theodorakopoulos and F.G. Mertens, Dynamics of the Toda lattice: a soliton-phonon phase-shift analysis, Phys. Rev. B **28**, 3512 (1983). N. Theodorakopoulos, Finite-temperature excitations of the classical Toda chain, Phys. Rev.
Lett. **53**, 871 – 874 (1984). M. Opper, Analytical solution of the classical Bethe-ansatz solution for the Toda chain, Phys. Lett. A **112**, 201 – 203 (1985). H. Takayama and M. Ishikawa, Classical thermodynamics of the Toda lattice as a classical limit of the two-component Bethe ansatz scheme, Progr. Theor. Phys. **76**, 820 – 836 (1986). P. Gruner-Bauer and F.G. Mertens, Excitation spectrum of the Toda lattice for finite temperatures, Z. Physik B, Cond. Mat. **80**, 435 – 447 (1988). A. Cuccoli, M. Spicci, V. Tognetti, and R. Vaia, Dynamic correlations of the classical and quantum Toda lattices, Phys. Rev. B **47**, 7859 (1993). M. Jenssen and W. Ebeling, Distribution functions and excitation spectra of Toda systems at intermediate temperatures, Physica D **141**, 117 – 132 (2000). A. Kundu and A. Dhar, Equilibrium dynamical correlations in the Toda chain and other integrable models, Phys. Rev. E **94**, 062130 (2016). O. A. Castro-Alvaredo, B. Doyon, and T. Yoshimura, Emergent hydrodynamics in integrable quantum systems out of equilibrium, Phys. Rev. X **6**, 041065 (2016). L. Piroli, J. De Nardis, M. Collura, B. Bertini, and M. Fagotti, Transport in out-of-equilibrium XXZ chains: Nonballistic behavior and correlation functions, Phys. Rev. B **96**, 115124 (2017). B. Doyon, Generalised hydrodynamics of the classical Toda system,\ `arXiv`:1902.07624. V. Bulchandani, Xiangyu Cao, and J. Moore, Kinetic theory of quantum and classical Toda lattices, `arXiv`:1902.10121. H. Spohn, Generalized Gibbs ensembles of the classical Toda chain,\ `arXiv`:1902.07751v3, to appear J. Stat. Phys. (2019). B. Doyon, Exact large-scale correlations in integrable systems out of equilibrium, SciPost Phys. **5**, 054 (2018). A. Bastianello, B. Doyon, G. Watts, and T. Yoshimura, Generalized hydrodynamics of classical integrable field theory: the sinh-Gordon model, SciPost Phys. **4**, 045 (2018). C. Boldrighini, R.L. Dobrushin and Yu. M. 
Sukhov, One-dimensional hard rod caricature of hydrodynamics, J. Stat. Phys. [**31**]{}, 577 (1983). H. Spohn, Large Scale Dynamics of Interacting Particles, Springer-Verlag, Heidelberg, 1991. C. Boldrighini and Yu. M. Suhov, One-dimensional hard rod caricature of hydrodynamics: Navier-Stokes correction for locally-equilibrium initial states, Commun. Math. Phys. **189**, 577 (1997). B. Doyon and H. Spohn, Dynamics of hard rods with initial domain wall state, J. Stat. Mech. (2017) 073210. H. Flaschka, The Toda lattice. II. Existence of integrals, Phys. Rev. B **9**, 1924 – 1925 (1974). M. Henon, Integrals of the Toda lattice, Phys. Rev. B **9**, 1921 – 1923 (1974). V. Bulchandani, Xiangyu Cao, and H. Spohn, The GGE averaged currents of the classical Toda chain, `arXiv`:1905.04548. I. Dumitriu and A. Edelman, Matrix models for beta ensembles, J. Math. Phys. **43**, 5830 – 5847 (2002). E. Cépa and D. Lépingle, Diffusing particles with electrostatic repulsion, Probab. Theory Rel. Fields **107**, 429 – 449 (1997). S. Israelsson, Asymptotic fluctuations of a particle system with singular interaction, Stoch. Process. Appl. **93**, 25 – 56 (2001).
--- address: - 'Laboratoire de Mathématiques Appliquées de Compiègne, Département de Génie Informatique, Université de Technologie de Compiègne, BP 20529, 60205 COMPIEGNE CEDEX, FRANCE' author: - Stéphane Mottelet title: Fast computation of gradient and sensitivity in 13C metabolic flux analysis instationary experiments using the adjoint method --- metabolic engineering,metabolic flux analysis,carbon labeling experiments,isotopomer labeling systems,XML,computer code generation,adjoint method Motivation ========== The overall dynamics of a carbon labeling experiment (CLE) can be described by a cascade of differential equations of the following form (see e.g. [@wiechert]): $${\mathbf{X}}_k({\mathbf{m}}){\mathbf{\dot x}}_k = {\mathbf{f}}_k({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input}),\;k=1\dots n,\;t\in[0,T]$$ where the states ${\mathbf{x}}_k$ are functions of time $t$ and take their values in $\mathbb{R}^{n_k}$ (they represent the cumomer fractions of each metabolite) and the constant vectors ${\mathbf{x}}_k^{input}$ are vectors of $\mathbb{R}^{n^{input}_k}$ which depend on the labeling of the input substrates. The vector ${\mathbf{v}}\in\mathbb{R}^m$ denotes the unknown fluxes and ${\mathbf{X}}_k$ are diagonal matrices containing unknown pool sizes corresponding to cumomer fractions of weight $k$. The particular form taken by the functions ${\mathbf{f}}_k$ depends on the transition pattern of carbon atoms occurring for each reaction in the metabolic network. Writing down by hand the expression of these functions is quite easy for a small sample network but becomes intractable for a realistic network. As far as numerical computations are concerned (direct problem solving or identification) the real concern is to write some specific computer code computing these functions and their exact derivatives with respect to the states ${\mathbf{x}}_1,\dots,{\mathbf{x}}_n$ and ${\mathbf{v}}$, in a target language.
The formal expression of the overall system could be interesting for testing the identifiability of the flux vector ${\mathbf{v}}$ and the pool sizes, but previous work shows that the size of realistic networks prevents the use of classical algorithms based on symbolic computations. Nowadays, the most efficient way of describing a metabolic network is to use the Systems Biology Markup Language (SBML, see [@sbml], [@sbml2]), as it has become the de facto standard, used by a growing number of commercial or open source applications. The SBML markup language is a dedicated dialect of XML (see e.g. [@xml]) with a specific structure which allows one to describe the different compartments, species, and the kinetics of reactions occurring between these species. Transformations, described in another XML dialect, the eXtensible Stylesheet Language (XSL), can be applied to the SBML file describing the network; the kind of transformations we are interested in are those which allow us to generate the specific numerical code we need to solve the identification problem (stationary and instationary). The generated computer code is specific to each particular metabolic network and associated CLE, and thus more efficient, readable and reusable than a general application written to cover all possible cases. The target language which has been chosen is Scilab (see [@scilabbook; @scilabweb]) because it is an open-source Matlab compatible language, allowing high-level programming together with compiled libraries with performant differential equation solvers, optimization routines and efficient sparse matrix algebra. The GUI of the final application is also described in XML, using another specific dialect called XMLlab (see [@xmllab]), which is available under the form of an official Scilab Toolbox (see Scilab www site). The description of the GUI is obtained with another pass of XSL transformations on the original SBML file describing the network.
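Because SBML is plain XML, the network structure can be extracted with any XML tooling, which is what makes the code-generation approach practical. The Python sketch below (standard library only; the fragment is a stripped-down, namespace-free stand-in for a real SBML file, whose actual schema carries namespaces, compartments, kinetic laws and notes) illustrates this kind of traversal; an XSL stylesheet performs the same traversal declaratively, emitting Scilab source instead of Python data structures.

```python
import xml.etree.ElementTree as ET

# Stripped-down, namespace-free SBML-like fragment (illustrative only).
sbml = """<sbml><model id="branching">
  <listOfSpecies>
    <species id="A"/><species id="D"/><species id="F"/><species id="G"/>
  </listOfSpecies>
  <listOfReactions>
    <reaction id="v1">
      <listOfReactants><speciesReference species="A"/></listOfReactants>
      <listOfProducts><speciesReference species="F"/></listOfProducts>
    </reaction>
  </listOfReactions>
</model></sbml>"""

root = ET.fromstring(sbml)
species = [s.get("id") for s in root.iter("species")]
reactions = {r.get("id"): [sr.get("species") for sr in r.iter("speciesReference")]
             for r in root.iter("reaction")}
print(species)     # ['A', 'D', 'F', 'G']
print(reactions)   # {'v1': ['A', 'F']}
```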
This GUI allows the biologist to enter the input data (label input of the substrate, known fluxes or a priori relationships between them, label observation) and launch the optimization process solving the identification problem. Throughout this paper, we will use a very small example to illustrate our approach. This is the branching network used by Wiechert and Isermann in [@wiechert2] (see Figure \[branching\]): $$\includegraphics[width=4cm]{branching.pdf}$$ with reactions and carbon atom transitions $v_1$: A $\to$ F (\#ij $\to$ \#ij), $v_2$: A $\to$ D + D (\#ij $\to$ \#i + \#j), $v_3$: A $\to$ F (\#ij $\to$ \#ji), $v_4$: D + D $\to$ F (\#i + \#j $\to$ \#ij), $v_5$: F $\to$ G (\#ij $\to$ \#ij), $v_6$: A\_out $\to$ A (\#ij $\to$ \#ij). The SBML file corresponding to this network can be found in Figures \[branching:listing1\] and \[branching:listing2\] in the appendix section. As can be seen, the description is very verbose. The added material describing some information on the CLE (label input, label observation, carbon atom mapping) is entered in the species and reaction notes, directly from the CellDesigner interface. In the future, we plan to develop a plugin to directly enter this information from within CellDesigner and create XML annotations in the SBML file.
Mathematical modelling in the stationary case ============================================== It has been shown in [@wiechert] that the actual state equation is in fact a succession of linear ordinary differential equations, where for each $k$ the non-homogeneous part ${\mathbf{b}}_k$ of the right-hand side depends on ${\mathbf{x}}_1,\dots,{\mathbf{x}}_{k-1}$, giving the following cascade $${\mathbf{X}}_k({\mathbf{m}}){\mathbf{\dot x}}_k(t) = {\mathbf{M}}_k({\mathbf{v}}){\mathbf{x}}_k(t) +{\mathbf{b}}_k({\mathbf{v}},{\mathbf{x}}_{k-1}(t),\dots,{\mathbf{x}}_1(t),{\mathbf{x}}_k^{input}),\;k=1\dots n,\;t>0. \label{state2}$$ Each component of the vectors ${\mathbf{x}}_k(t)$ represents a cumomer fraction of weight $k$ of a given species (for a proper definition of cumomer and cumomer weight see [@wiechert]). The constant vectors ${\mathbf{x}}_k^{input}$ contain the cumomer fractions of weight $k$ of species which are input metabolites. The diagonal matrices ${\mathbf{X}}_k({\mathbf{m}})$ depend on the stationary concentrations of metabolites. The matrix ${\mathbf{M}}_k({\mathbf{v}})$ and the vector ${\mathbf{b}}_k$ are constructed by considering the balance equation for each cumomer of the vector ${\mathbf{x}}_k$. Constructing these matrices by hand is a very tedious task but can be automated if adequate data structures are used, in order to represent the metabolic network and the carbon transition map for each reaction (we will explain later how we deal with this particular information). In the stationary case, the CLE considers the asymptotic behaviour of the system (\[state2\]), $$0 = {\mathbf{M}}_k({\mathbf{v}}){\mathbf{x}}_k +{\mathbf{b}}_k({\mathbf{v}},{\mathbf{x}}_{k-1},\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input}),\;k=1\dots n. \label{state:stat}$$ In this case the states ${\mathbf{x}}_k$ do not depend on time anymore and the identification problem is restricted to the determination of the flux vector ${\mathbf{v}}$ such that some cost function is minimized.
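Numerically, the cascade structure means the stationary system is solved as a sequence of small linear solves, one per weight, each right-hand side assembled from lower-weight solutions. A minimal Python/NumPy sketch with invented $2\times 2$ matrices (the paper generates Scilab code, and in practice the matrices are produced automatically from the network; everything below is illustrative):

```python
import numpy as np

# Toy instance of the cascade 0 = M_k x_k + b_k: weight 1 feeds weight 2.
M1 = np.array([[-2.0, 1.0], [0.5, -1.5]])
b1 = np.array([0.3, 0.2])              # depends only on input cumomers
x1 = np.linalg.solve(M1, -b1)          # solve 0 = M1 x1 + b1

M2 = np.array([[-1.0, 0.2], [0.1, -3.0]])
b2 = np.array([x1[0] * x1[1], 0.1])    # weight-2 source built from weight-1 fractions
x2 = np.linalg.solve(M2, -b2)          # solve 0 = M2 x2 + b2

print(x1, x2)
```

For this toy choice the weight-1 solution is $x_1 = (0.26, 0.22)$, values that lie in $[0,1]$ as cumomer fractions should.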
For a given flux vector ${\mathbf{v}}$, this cost function can classically be defined as the squared norm of the difference between an observation vector ${\mathbf{y}}^{meas}\in \mathbb{R}^{n_{meas}}$ and the corresponding synthetic observation ${\mathbf{y}}({\mathbf{v}})$, computed by solving the state equation (\[state:stat\]) for the given value of the fluxes. In the following, we will consider that this observation is composed of isotopomer and cumomer fractions of given species, which can always be computed as linear combinations of cumomers, i.e. there exist $n$ nonzero matrices ${\mathbf{C}}_1,{\mathbf{C}}_2,\dots,{\mathbf{C}}_n$ such that $${\mathbf{y}}({\mathbf{v}})=\sum_{k=1}^n {\mathbf{C}}_k {\mathbf{x}}_k({\mathbf{v}}),$$ where we have used ${\mathbf{x}}_k({\mathbf{v}})$ to denote the solutions of the state equation (\[state:stat\]) for a given flux vector ${\mathbf{v}}$. We also have natural constraints on the flux vector, which result from the particular structure of the metabolic network (the stoichiometric balances), and some other kinds of linear constraints on the fluxes, which can express that some of the fluxes are fixed (for example the flux of input substrate) or some more specific information, e.g. that some linear combination of fluxes should be zero. Thus, the constraints on ${\mathbf{v}}$ take an affine form $${\mathbf{A}}{\mathbf{v}}={\mathbf{w}}.$$ There are usually some measured extracellular fluxes ${\mathbf{v}}^{meas}$, which have to be compared with the actual values of these fluxes, which can always be expressed as linear functions of ${\mathbf{v}}$.
The most commonly used cost function is the chi-square function $$J({\mathbf{v}})=\frac{1}{2}\left\Vert {\mathbf{\sigma}}^{-1}\left({\mathbf{y}}({\mathbf{v}})-{\mathbf{y}}^{meas}\right)\right\Vert^2 + \frac{1}{2}\left\Vert{\mathbf{\alpha}}^{-1}\left({\mathbf{E}}{\mathbf{v}}-{\mathbf{v}}^{meas}\right)\right\Vert^2,$$ where ${\mathbf{E}}$ is the matrix expressing the measured fluxes as linear functions of ${\mathbf{v}}$, and ${\mathbf{\sigma}}$ and ${\mathbf{\alpha}}$ are diagonal positive definite matrices containing the standard deviations of the label and flux observations, respectively. Minimizing the chi-square function, under the hypothesis of a Gaussian distribution, is equivalent to maximizing the likelihood of the measurements. Adding a Tikhonov regularization term $\frac{\varepsilon}{2}\Vert{\mathbf{v}}\Vert^2$ to $J$, which gives the function $J_{\varepsilon}$ (the role of this term is discussed in the section on identifiability below), the identification problem reads $$\left\{ \begin{array}{rcl} {\mathbf{{\mathbf{\hat v}}}}&=&\operatorname{arg}\min_{{\mathbf{v}}\in\mathbb{R}^m} J_{\varepsilon}({\mathbf{v}}),\\ {\mathbf{A}}{\mathbf{v}}&=&{\mathbf{w}},\\ {\mathbf{v}} &\geq &0, \end{array} \right. \tag{$P_\varepsilon$}$$ where ${\mathbf{y}}({\mathbf{v}})=\sum_{k=1}^n {\mathbf{C}}_k {\mathbf{x}}_k({\mathbf{v}})$ and ${\mathbf{x}}_k({\mathbf{v}})$, $k=1\dots n$ are the solutions of the state equation (\[state:stat\]). Parametrisation of the admissible fluxes subspace ------------------------------------------------- The subspace of admissible fluxes is determined by the system of equalities and inequalities $$\left\{ \begin{array}{rcl} {\mathbf{A}}{\mathbf{v}}&=&{\mathbf{w}},\\ {\mathbf{v}} &\geq &0. \end{array} \right. \label{sys:constr}$$ In order to detect any redundancies or incompatibilities due to possible complementary constraints added by the user, an admissibility test is performed on this system. We do it by solving a trivial linear program, which allows us to test whether ${\mathbf{w}}$ is in the range of ${\mathbf{A}}$ and then whether the subspace (\[sys:constr\]) is non-void.
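The range part of this test can be sketched with a least-squares residual check (numpy only; the nonnegativity part of the admissibility test would additionally require the trivial linear program mentioned above, e.g. one with a zero objective, which is not shown here):

```python
import numpy as np

# Sketch of the first part of the admissibility test: w is in the range of A
# iff the least-squares residual of A v = w is (numerically) zero.
def w_in_range(A, w, tol=1e-10):
    v, *_ = np.linalg.lstsq(A, w, rcond=None)
    return np.linalg.norm(A @ v - w) <= tol

A = np.array([[1.0, 1.0], [2.0, 2.0]])      # rank-1 constraint matrix
print(w_in_range(A, np.array([1.0, 2.0])))  # True: w is in span([1, 2])
print(w_in_range(A, np.array([1.0, 0.0])))  # False: incompatible constraints
```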
There are then two ways to obtain a parametrization. The first is to compute an orthonormal basis $\{{\mathbf{V}}^1,\dots,{\mathbf{V}}^{m-p}\}$ of the kernel of ${\mathbf{A}}$ (where $p=\operatorname{rank} {\mathbf{A}}$) and the minimum norm solution ${\mathbf{v}}_0$ of ${\mathbf{A}}{\mathbf{v}}={\mathbf{w}}$; any ${\mathbf{v}}$ satisfying (\[sys:constr\]) can then be expressed as $${\mathbf{v=Vq+v}}_0,$$ where ${\mathbf{q}}$ is a vector of size $m-p$. The second, classical parametrization, using the free fluxes, can be found by computing the ${\mathbf{Q}}{\mathbf{R}}$ factorization of ${\mathbf{A}}$. There exist an $m\times m$ permutation matrix ${\mathbf{P}}=[{\mathbf{P}}_1,{\mathbf{P}}_2]$, an orthogonal square matrix ${\mathbf{Q}}=[{\mathbf{Q}}_1,{\mathbf{Q}}_2]$, where ${\mathbf{P}}_1$ and ${\mathbf{Q}}_1$ contain the first $p$ columns of ${\mathbf{P}}$ and ${\mathbf{Q}}$, and a full rank $p\times p$ upper-triangular matrix ${\mathbf{R}}_1$ such that $${\mathbf{A}}{\mathbf{P}}={\mathbf{Q}}\left[\begin{array}{c|c}{\mathbf{R}}_1 & {\mathbf{R}}_2\\\hline 0 & 0\end{array}\right],$$ where the lower right zero block is absent if ${\mathbf{A}}$ has full rank. The free fluxes are given by ${\mathbf{q}}={\mathbf{P}}_2^\top {\mathbf{v}}$ and the complementary dependent fluxes are given by ${\mathbf{P}}_1^\top {\mathbf{v}}$. Straightforward computations give the parametrization ${\mathbf{v=Vq+v}}_0$, where ${\mathbf{q}}$ has $m-p$ components and $${\mathbf{V}}={\mathbf{P}}_2-{\mathbf{P}}_1 {\mathbf{R}}_1^{-1}{\mathbf{R}}_2,~{\mathbf{v}}_0={\mathbf{P}}_1 {\mathbf{R}}_1^{-1} {\mathbf{Q}}_1^\top {\mathbf{w}}.$$ Such parametrizations remove any redundancy in the constraints, and allow us to identify which constraints in ${\mathbf{V}}{\mathbf{q}}+{\mathbf{v}}_0\geq 0$ are equality constraints blocking the value of some fluxes.
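The first (orthonormal) parametrization can be sketched with an SVD-based kernel basis; this is an illustration of the idea, not the actual implementation:

```python
import numpy as np

# Sketch of the orthonormal parametrization v = V q + v0: V is an orthonormal
# basis of ker(A) taken from the SVD, and v0 is the minimum norm solution of
# A v = w (assuming w is in the range of A).
def parametrize(A, w):
    p = np.linalg.matrix_rank(A)
    _, _, Vt = np.linalg.svd(A)
    V = Vt[p:].T                      # kernel basis: m - p orthonormal columns
    v0 = np.linalg.pinv(A) @ w        # minimum norm solution
    return V, v0

A = np.array([[1.0, -1.0, 0.0]])      # one stoichiometric balance, m = 3, p = 1
V, v0 = parametrize(A, np.array([0.0]))
# Any v = V q + v0 satisfies A v = w, for every q in R^{m-p} = R^2:
print(np.allclose(A @ (V @ np.array([1.0, 2.0]) + v0), 0.0))  # True
```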
Some of them are not specified in the initial system (\[sys:constr\]) but are added [*de facto*]{} in the parametrization because of the implicit flux balancing constraints. This situation can be detected when a row ${\mathbf{V}}_i$ of ${\mathbf{V}}$ is equal to zero. Hence, if we define the set $I=\{i,\;{\mathbf{V}}_i\neq 0\}$ and the set $D$ containing the indices of the dependent fluxes, the parametrized optimization problem takes the form $$\left\{ \begin{array}{rcl} {\mathbf{\hat q}}&=&\operatorname{arg}\min_{{\mathbf{q}}\in\mathbb{R}^{m-p}} J_{\varepsilon}({\mathbf{V}}{\mathbf{q}}+{\mathbf{v}}_0),\\ q_i&\geq& 0,\;i=1\dots m-p,\\ {\mathbf{V}}_i{\mathbf{q}}+({\mathbf{v}}_0)_i&\geq& 0,\;i\in I\cap D, \end{array} \right. \label{pparam}$$ and we have ${\mathbf{{\mathbf{\hat v}}}}={\mathbf{V}}{\mathbf{\hat q}}+{\mathbf{v}}_0$. The gradient of the parametrized cost function can be expressed via the chain rule as $$\left(\frac{d}{d{\mathbf{q}}}J_{\varepsilon}({\mathbf{V}}{\mathbf{q}}+{\mathbf{v}}_0)\right)^\top={\mathbf{V}}^\top\nabla J_\varepsilon({\mathbf{V}}{\mathbf{q}}+{\mathbf{v}}_0).$$ The type of parametrization (orthogonal or free fluxes) used in the optimization does not seem to influence the conditioning of the algorithm, so the free fluxes parametrization is used, because of its biological interpretation. Identifiability and regularization ---------------------------------- When $\varepsilon=0$, the constraints on ${\mathbf{q}}$ are not enough to ensure the existence of a solution, because the problem may be unbounded. In fact, existence and uniqueness of a solution will occur if the fluxes are identifiable. A general discussion of this subject can be found in [@wiechert2], where the authors propose an algorithm based on integer arithmetic to test the structural identifiability of metabolic networks.
The most frequently encountered problematic situation corresponds to bidirectional reactions, such as $$v_1:\mathrm{A\to B},~v_2:\mathrm{B\to A},$$ where $v_1-v_2$ (the net flux) is identifiable but $v_1$ and $v_2$ are not individually identifiable. The counterpart of such a situation is that the optimal $v_1$ and $v_2$ tend to infinity when $\varepsilon\to 0$, and the cost function $J_{\varepsilon}$ is ill-conditioned when $\varepsilon$ is too small, leading to convergence problems in the optimization phase. The change of variables proposed in [@wiechertnonstat] considers the net flux $v_{net}=v_1-v_2$ and the exchange flux $v_{xch}=\min(v_1,v_2)$, together with a compactification of $v_{xch}$ defined by $$v_{[0,1]}=\frac{v_{xch}}{\beta+v_{xch}},$$ where $\beta>0$. This change of variables maps $[0,+\infty[$ to $[0,1[$ and is thus interesting from a numerical point of view. Although these new variables make sense from a metabolic point of view, the overall mapping from $(v_1,v_2)$ to $(v_{net},v_{[0,1]})$ is not differentiable and thus needs to be approximated. A more systematic approach is proposed in [@yang], where all free fluxes $q_i\geq 0$ are mapped to $r_i\in [0,1[$ with the change of variables ${\mathbf{q}}={\mathbf{q}}({\mathbf{r}})$, where $$q_i=\beta\frac{r_i}{1-r_i},\;i=1\dots m-p,$$ and $\beta>0$ is a scaling parameter. In this case, the inequality constraints in (\[pparam\]) become nonlinear and the new optimization problem is $$\left\{ \begin{array}{rcl} {\mathbf{\hat r}}&=&\operatorname{arg}\min_{{\mathbf{r}}\in\mathbb{R}^{m-p}} J_{\varepsilon}({\mathbf{V}}{\mathbf{q(r)}}+{\mathbf{v}}_0),\\ 1-\delta\geq r_i&\geq& 0,\;i=1\dots m-p,\\ {\mathbf{V}}_i{\mathbf{q(r)}}+({\mathbf{v}}_0)_i&\geq& 0,\;i\in I\cap D, \end{array} \right. \label{pparamcompact}$$ where $\delta>0$ can be arbitrarily small.
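The compactifying change of variables, its inverse, and its derivative (the diagonal of ${\mathbf{q'(r)}}$ needed for the chain rule in the gradient) can be sketched as:

```python
import numpy as np

# Sketch of the change of variables q_i = beta * r_i / (1 - r_i), which maps
# r in [0,1[ to q in [0,+inf[; beta > 0 is the scaling parameter.
def q_of_r(r, beta=1.0):
    return beta * r / (1.0 - r)

def r_of_q(q, beta=1.0):            # inverse map: q >= 0  ->  r in [0,1[
    return q / (beta + q)

def dq_dr(r, beta=1.0):             # diagonal of q'(r), used in the chain
    return beta / (1.0 - r) ** 2    # rule for the parametrized gradient

r = np.array([0.0, 0.5, 0.9])
print(q_of_r(r))            # [0. 1. 9.]
print(r_of_q(q_of_r(r)))    # recovers r
```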
The gradient of the cost function is given by $$\left(\frac{d}{d{\mathbf{r}}}J_{\varepsilon}({\mathbf{V}}{\mathbf{q(r)}}+{\mathbf{v}}_0)\right)^\top={\mathbf{q'(r)}}^{\top}{\mathbf{V}}^\top\nabla J_\varepsilon({\mathbf{V}}{\mathbf{q(r)}}+{\mathbf{v}}_0).$$ Multiple experiments -------------------- In the following we will also consider the case where multiple CLEs are performed with the same metabolic network but with different labelings of the input metabolites, given by ${\mathbf{x}}^{input,i}$ for $i=1\dots n_{exp}$. We will thus consider the cost function $$J_{\varepsilon}({\mathbf{v}})=\frac{1}{2}\sum_{i=1}^{n_{exp}}\left(\left\Vert {\mathbf{\sigma}}^{-1}\left({\mathbf{y}}({\mathbf{v}},{\mathbf{x}}^{input,i})-{\mathbf{y}}^{meas,i}\right)\right\Vert^2 + \left\Vert {\mathbf{\alpha}}^{-1}\left({\mathbf{E}}{\mathbf{v}}-{\mathbf{v}}_{obs,i}\right)\right\Vert^2\right) + \frac{\varepsilon}{2} \Vert{\mathbf{v}}\Vert^2,$$ where ${\mathbf{y}}^{meas,i}$ is the observation of labeled material for experiment $i$, ${\mathbf{v}}_{obs,i}$ is the vector of measured extracellular fluxes, and $${\mathbf{y}}({\mathbf{v}},{\mathbf{x}}^{input,i})=\sum_{k=1}^n {\mathbf{C}}_k {\mathbf{x}}_k({\mathbf{v}},{\mathbf{x}}^{input,i}),$$ where ${\mathbf{x}}_k({\mathbf{v}},{\mathbf{x}}^{input,i})$, $k=1\dots n$ is the solution of $$0 = {\mathbf{M}}_k({\mathbf{v}}){\mathbf{x}}_k +{\mathbf{b}}_k({\mathbf{v}},{\mathbf{x}}_{k-1},\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i}),\;k=1\dots n. \label{state:stat:mult}$$ Computation of the gradient of the cost function ------------------------------------------------ The computation of the gradient of $J({\mathbf{v}})$ requires the derivative of ${\mathbf{x}}({\mathbf{v}})$ with respect to ${\mathbf{v}}$. In the stationary case, it makes sense to compute this derivative by implicit differentiation of the state equation (\[state:stat\]).
For this purpose, we adopt the notation $${\mathbf{f}}_k({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i})={\mathbf{M}}_k({\mathbf{v}}){\mathbf{x}}_k +{\mathbf{b}}_k({\mathbf{v}},{\mathbf{x}}_{k-1},\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i}), \label{notation:fk}$$ and we denote by ${\mathbf{x}}^i({\mathbf{v}})$ the solution of $${\mathbf{f}}_k({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i})=0,\;k=1\dots n.$$ By differentiating these equations with respect to ${\mathbf{v}}$, when ${\mathbf{x}}={\mathbf{x}}^i({\mathbf{v}})$, we obtain for $k=1\dots n$ $$0=\frac{d{\mathbf{f}}_k}{d{\mathbf{v}}}({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i})= \frac{\partial {\mathbf{f}}_k}{\partial{\mathbf{v}}}({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i}) +\sum_{l=1}^{k}\frac{\partial {\mathbf{f}}_k}{\partial {\mathbf{x}}_l}({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i}) \frac{\partial {\mathbf{x}}^i_l}{\partial{\mathbf{v}}}.$$ Since ${\mathbf{f}}_k$ is linear with respect to ${\mathbf{x}}_k$ for fixed ${\mathbf{v}}$, we can determine $\frac{\partial {\mathbf{x}}^i_k}{\partial{\mathbf{v}}}$ as the solution of a linear system of equations, whose right hand side is a function of ${\mathbf{x}}_l$ and $\frac{\partial {\mathbf{x}}^i_l}{\partial{\mathbf{v}}}$ for $l=1\dots k-1$: $${\mathbf{M}}_k({\mathbf{v}})\frac{\partial {\mathbf{x}}^i_k}{\partial{\mathbf{v}}}=-\frac{\partial {\mathbf{f}}_k}{\partial{\mathbf{v}}}({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i}) - \sum_{l=1}^{k-1}\frac{\partial {\mathbf{b}}_k}{\partial {\mathbf{x}}_l}({\mathbf{v}},{\mathbf{x}}_k,\dots,{\mathbf{x}}_1,{\mathbf{x}}_k^{input,i}) \frac{\partial {\mathbf{x}}^i_l}{\partial{\mathbf{v}}},\;k=1\dots n.
\label{cascade:deriv}$$ Hence, the key ingredients in the computation of the derivatives of ${\mathbf{x}}_k$ are the derivatives $\frac{\partial {\mathbf{b}}_k}{\partial {\mathbf{x}}_l}$ for $l<k$ and the derivatives $\frac{\partial {\mathbf{f}}_k}{\partial{\mathbf{v}}}$. This will be one of the main tasks of the automatically generated computer code, together with the assembly of the matrices ${\mathbf{M}}_k({\mathbf{v}})$. Since the gradient of the cost function $J({\mathbf{v}})$ will be required at each iteration of the optimization algorithm, these matrices will be assembled as sparse matrices in order to speed up the computations, particularly the resolution of the linear systems (\[cascade:deriv\]). The final computation of the derivative gives $$\frac{d J_\varepsilon({\mathbf{v}})}{d{\mathbf{v}}}=\varepsilon {\mathbf{v}}^\top+\sum_{i=1}^{n_{exp}}\left(\left({\mathbf{E}}{\mathbf{v}}-{\mathbf{v}}_{obs,i}\right)^\top {\mathbf{\alpha}}^{-2}{\mathbf{E}}+\left({\mathbf{y}}({\mathbf{v}},{\mathbf{x}}^{input,i})-{\mathbf{y}}^{meas,i}\right)^\top{\mathbf{\sigma}}^{-2}\sum_{k=1}^n{\mathbf{C}}_k\frac{\partial {\mathbf{x}}^i_k}{\partial{\mathbf{v}}}\right).$$ Architecture of the computer code generation algorithms ======================================================= The most innovative aspect of this work is the choice of the techniques used to generate the code: from the original SBML file edited under CellDesigner (or any other SBML compliant software), only XSL (eXtensible Stylesheet Language) transformations are used to generate the Scilab code computing the specific objects for a given Carbon Labeling Experiment. The way the transformations are done is described in XSL stylesheets, written in another XML dialect. XSL is very different from the typical programming languages in use today. A frequently asked question is: what kind of programming language is XSLT, actually?
Until now, the authoritative answer from some of the best specialists is that XSLT is a declarative (as opposed to imperative) language. The XSL stylesheets are thus very explicit, human readable, and easy to debug and maintain. In the whole process which maps the original SBML file to the computer code and the graphical user interface, successive XSL transformations occur. The first set of transformations aims to translate all the specific information about the CLE into XML markup which can be used later, for example the carbon atom mapping of each reaction (this step is described in Appendix A). Then two main paths are followed: 1. The first series of transformations is dedicated to the computer code generation: 1. A main assembly loop is processed, which for each weight $k$ generates, for $j=1\dots n_j$, some intermediate XML markup declaring the contribution of each cumomer fraction $({\mathbf{x}}_k)_j$ to the matrices $${\mathbf{M}}_k({\mathbf{v}}),\,{\mathbf{b}}_k({\mathbf{v}}),\frac{\partial {\mathbf{f}}_k}{\partial{\mathbf{v}}},\left(\frac{\partial{\mathbf{b}}_k}{\partial {\mathbf{x}}_l}\right)_{l=1\dots k-1}.$$ The contributions to ${\mathbf{M}}_k({\mathbf{v}})$ are functions of the flux vector ${\mathbf{v}}$ only, but the contributions to the other matrices are functions of lower weight cumomer components and possibly of ${\mathbf{x}}_k^{input}$, respecting the weight preservation property (see [@wiechert]). This intermediate step is event-driven, i.e. contributions to the matrices are dumped in the order in which they occur. The contributions are gathered for each matrix in the subsequent transformation, which produces the Scilab code. 2. The Scilab code solving the cascaded linear systems is generated. For each weight $k$, the matrix ${\mathbf{M}}_k({\mathbf{v}})$ is stored as a sparse matrix and its sparse $LU$ factorization is computed (the sparse triangular factors are retained because they are also needed to compute the derivatives).
The linear system giving ${\mathbf{x}}_k$ is then solved by using the precomputed sparse $LU$ factors. We then compute the matrices ${\partial {\mathbf{f}}_k}/{\partial{\mathbf{v}}}$ and $\left({\partial{\mathbf{b}}_k}/{\partial {\mathbf{x}}_l}\right)_{l=1\dots k-1}$, which need the previously computed cumomer vectors ${\mathbf{x}}_1,\dots,{\mathbf{x}}_{k-1}$, and finally solve the linear system giving ${\partial {\mathbf{x}}_k}/{\partial{\mathbf{v}}}$. 2. The second series of transformations aims to build the specific graphical user interface for the given metabolic network. For this purpose, an XML file conforming to the XMLlab DTD is generated. The structure of the interface is described in a high level way: the interface is divided into sections, each dedicated to a different purpose. The first section hosts all the fluxes; the second hosts the flux observations with their associated standard deviations; the third gathers the label measurements and the label output corresponding to the current fluxes. This is the place where the user can compare the original measurements and the reconstructed measurements after the identification process. The fourth section displays all components of the cumomer vector, sorted by weight and by species name. The last section is reserved for the parameters of the identification method (maximum number of iterations, regularization parameter, and so on). The structure of the original SBML file, enriched with the specific annotations in the sbml namespace (see Appendix A), allows us to perform this step in a very straightforward way. The generated Scilab code computing the cumomer vector ${\mathbf{x}}$, the derivative matrices, the cost function and its gradient is given in Figures \[fig:branchingScilab1\] and \[fig:branchingScilab2\] in Appendix A.
Numerical results ================= Mathematical modelling in the non-stationary case ================================================= In the non-stationary case, the CLE is performed before the system (\[state2\]) has reached its asymptotic behaviour. The measurements $${\mathbf{y}}^{meas,j},\;j=1\dots n_t,$$ are taken at different time values $t_j$, and we can make the hypothesis that the final time $T$ in (\[state2\]) is equal to the final observation time, i.e. $T=t_{n_t}$. The fundamental difference with the stationary case is that the stationary concentrations of the metabolites are also unknown, i.e. the vector ${\mathbf{m}}$ is also to be determined. The cost function takes the form $$J_\varepsilon({\mathbf{v}},{\mathbf{m}})=\frac{1}{2}\sum_{j=1}^{n_t} \left\Vert {\mathbf{\sigma}}^{-1}\left({\mathbf{y}}(t_j,{\mathbf{v}},{\mathbf{m}})-{\mathbf{y}}^{meas,j}\right)\right\Vert^2 + \frac{1}{2}\left\Vert{\mathbf{\alpha}}^{-1}\left({\mathbf{E}}{\mathbf{v}}-{\mathbf{v}}^{meas}\right)\right\Vert^2,$$ where $${\mathbf{y}}(t_j,{\mathbf{v}},{\mathbf{m}})=\sum_{k=1}^n {\mathbf{C}}_k {\mathbf{x}}_k(t_j,{\mathbf{v}},{\mathbf{m}}),$$ and ${\mathbf{x}}_k(t,{\mathbf{v}},{\mathbf{m}})$, $k=1\dots n$ are the solutions of the state equation for a given pair $({\mathbf{v}},{\mathbf{m}})$ of fluxes and pool sizes: $${\mathbf{X}}_k({\mathbf{m}}){\mathbf{\dot x}}_k(t) = {\mathbf{M}}_k({\mathbf{v}}){\mathbf{x}}_k(t) +{\mathbf{b}}_k({\mathbf{v}},{\mathbf{x}}_{k-1}(t),\dots,{\mathbf{x}}_1(t),{\mathbf{x}}_k^{input}),\;k=1\dots n,\;t>0. \label{cascadenonstat}$$ The minimization problem is then the following: $$\left\{ \begin{array}{rcl} ({\mathbf{\hat v}},{\mathbf{\hat m}})&=&\operatorname{arg}\min_{{\mathbf{v}},{\mathbf{m}}} J_{\varepsilon}({\mathbf{v}},{\mathbf{m}}),\\ {\mathbf{A}}{\mathbf{v}}&=&{\mathbf{w}},\\ {\mathbf{v}} &\geq &0,\\ {\mathbf{m}} & \geq& 0. \end{array} \right.
\tag{$P^u_\varepsilon$}$$ The main difficulty is the computation of the gradient of $J$, which can be done by using the sensitivity matrices $\frac{\partial {\mathbf{x}}_k(t)}{\partial{\mathbf{v}}}$, computed as the solutions of a system of differential equations obtained by implicit differentiation of (\[cascadenonstat\]), as in the stationary case. The problem is that this approach is computationally intensive (see e.g. [@wiechertnonstat; @Noh; @Noh2006554]), because it involves a cascade of differential equations where the state is a matrix (instead of a vector). A more suitable approach for non-stationary problems is to use the adjoint state method (see [@plessix; @chavent; @lionsmagenes]). If the number of parameters of interest (the fluxes) exceeds the number of model outputs for which the sensitivity is desired, the adjoint method is more efficient than traditional direct methods of calculating sensitivities. Since the gradient of $J$ is the sensitivity of a single scalar output, it can be computed at the same cost as one state equation (\[cascadenonstat\]); hence the adjoint method will always outperform the sensitivity method for the computation of the gradient. As far as the statistical evaluation of the identified fluxes is concerned, the sensitivity of the model output ${\mathbf{y}}(t,{\mathbf{v}},{\mathbf{m}})$ with respect to ${\mathbf{v}}$ can be obtained at the cost of $n_{meas}$ state equations; since the dimension of the observations ${\mathbf{y}}(t,{\mathbf{v}},{\mathbf{m}})$ is generally lower than the number of fluxes, the same method should be used here as well. The adjoint state method is best understood in continuous time, and the next section is devoted to its presentation. Section \[sect:adjdiscr\] will detail its practical implementation in discrete time.
The adjoint equation in continuous time --------------------------------------- In order to simplify the presentation, we will adopt the block notation ${\mathbf{x}}=\left({\mathbf{x}}_1;{\mathbf{x}}_2;\dots;{\mathbf{x}}_n\right)$ for the overall cumomer vector, and write the state equation as $${\mathbf{X}}({\mathbf{m}}){\mathbf{\dot x}}(t)-{\mathbf{f}}({\mathbf{x}}(t),{\mathbf{v}})=0,\;t\in[0,T[, \label{state}$$ where ${\mathbf{f}}({\mathbf{x}},{\mathbf{v}})=\left({\mathbf{f}}_1({\mathbf{x}},{\mathbf{v}});\dots;{\mathbf{f}}_n({\mathbf{x}},{\mathbf{v}})\right)$, with ${\mathbf{f}}_k$ defined by (\[notation:fk\]). We also define ${\mathbf{C}}=[{\mathbf{C}}_1,{\mathbf{C}}_2,\dots,{\mathbf{C}}_n]$ so that ${\mathbf{y}}(t)={\mathbf{C}}{\mathbf{x}}(t)$. Without loss of generality, we consider only one measurement, at final time $T$. The adjoint state method allows us to compute the total derivative with respect to ${\mathbf{v}}$ and ${\mathbf{m}}$ of a given function $I({\mathbf{x}}({\mathbf{v}},{\mathbf{m}}))\in\mathbb{R}$, where ${\mathbf{x}}({\mathbf{v}},{\mathbf{m}})$ is the solution of the state equation (\[state\]). If the gradient of $J_{\varepsilon}$ is to be computed, then we will take $$I({\mathbf{x}})=\frac{1}{2}\left\Vert {\mathbf{\sigma}}^{-1}\left({\mathbf{Cx}}(T)-{\mathbf{y}}^{meas}\right)\right\Vert^2. \label{I_for_J}$$ In the following, we do not consider the quantities in $J_{\varepsilon}$ depending explicitly on ${\mathbf{v}}$; hence we define $J({\mathbf{v}},{\mathbf{m}})=I({\mathbf{x}}({\mathbf{v}},{\mathbf{m}}))$.
Let us define the Lagrangian $$\begin{split} L({\mathbf{x}},{\mathbf{p}},{\mathbf{v}},{\mathbf{m}})&=I({\mathbf{x}})+\int_0^T {\mathbf{p}}(t)^\top\left({\mathbf{X}}({\mathbf{m}}){\mathbf{\dot x}}(t)-{\mathbf{f}}({\mathbf{x}}(t),{\mathbf{v}})\right)dt, \end{split}$$ where the adjoint state ${\mathbf{{\mathbf{p}}}}=\left({\mathbf{p}}_1;{\mathbf{p}}_2;\dots;{\mathbf{p}}_n\right)$ has the same block structure as ${\mathbf{x}}$. A first remark is that when ${\mathbf{x}}$ is the solution of the state equation (\[state\]), we have $$L({\mathbf{x}}({\mathbf{v}},{\mathbf{m}}),{\mathbf{p}},{\mathbf{v}},{\mathbf{m}})=J({\mathbf{v}},{\mathbf{m}}),$$ and when we express the total derivative of $J({\mathbf{v}},{\mathbf{m}})$, e.g. with respect to ${\mathbf{v}}$, we have $$\frac{dJ({\mathbf{v}},{\mathbf{m}})}{d{\mathbf{v}}}=\frac{\partial L}{\partial {\mathbf{x}}}({\mathbf{x}}({\mathbf{v}},{\mathbf{m}}),{\mathbf{p}},{\mathbf{v}},{\mathbf{m}})\frac{\partial {\mathbf{x}}({\mathbf{v}},{\mathbf{m}})}{\partial{\mathbf{v}}}+\frac{\partial L}{\partial{\mathbf{v}}}({\mathbf{x}}({\mathbf{v}},{\mathbf{m}}),{\mathbf{p}},{\mathbf{v}},{\mathbf{m}}). \label{relat:adj}$$ The idea of the adjoint state technique is to compute ${\mathbf{p}}$ such that $\frac{\partial L}{\partial {\mathbf{x}}}=0$; the remaining part of the derivative can then be computed in a straightforward way. This adjoint equation is a (backward in time) differential equation given by $$\begin{aligned} {\mathbf{X}}({\mathbf{m}}){\mathbf{\dot p}}(t)&=-\left(\frac{\partial{{\mathbf{f}}}}{\partial {\mathbf{x}}}({\mathbf{x}}(t),{\mathbf{v}})\right)^\top {\mathbf{p}}(t),\;t\in[0,T[, \label{adj}\end{aligned}$$ with the final condition $${\mathbf{X}}({\mathbf{m}}){\mathbf{p}}(T)=-{\mathbf{C}}^\top{\mathbf{\sigma}}^{-2}\left({\mathbf{C}}{\mathbf{x}}(T)-{\mathbf{y}}^{meas}\right).
\label{finalcondadj}$$ Because of the block triangular structure of $\frac{\partial{{\mathbf{f}}}}{\partial {\mathbf{x}}}$, the adjoint equation also has a cascade structure, but in reverse order, i.e. ${\mathbf{p}}_n$ is obtained first and ${\mathbf{p}}_1$ last: $$\begin{aligned} \label{finalcondk}{\mathbf{X}}_k({\mathbf{m}}){\mathbf{p}}_k(T)&=-{\mathbf{C}}_k^\top{\mathbf{\sigma}}^{-2}\left({\mathbf{C}}{\mathbf{x}}(T)-{\mathbf{y}}^{meas}\right),\\ {\mathbf{X}}_k({\mathbf{m}}) {\mathbf{{\mathbf{\dot p}}}}_k(t)&=-{\mathbf{M}}_k^\top{\mathbf{p}}_k(t)-\sum_{l=k+1}^{n}\left(\frac{\partial {\mathbf{b}}_l}{\partial {\mathbf{x}}_k}\right)^\top {\mathbf{p}}_l(t),\;t\in]0,T[, \label{adjk}\end{aligned}$$ for $k=1\dots n$. Once the state and the adjoint state equations are solved for a given pair $({\mathbf{v}},{\mathbf{m}})$, the gradient of $J$ can be readily computed by using (\[relat:adj\]): $$\frac{dJ({\mathbf{v}},{\mathbf{m}})}{d{\mathbf{v}}}=\frac{\partial L}{\partial{\mathbf{v}}}({\mathbf{x}},{\mathbf{p}},{\mathbf{v}},{\mathbf{m}})=-\sum_{k=1}^n\int_0^T {\mathbf{p}}_k(t)^\top\frac{\partial {\mathbf{f}}_k}{\partial{\mathbf{v}}}({\mathbf{v}},{\mathbf{x}}(t))\,dt.$$ We will give the derivative with respect to ${\mathbf{m}}$ in the next section. The output sensitivity $\frac{\partial {\mathbf{y}}(T)}{\partial{\mathbf{v}}}$ can be computed in the same way by taking $I({\mathbf{x}})={\mathbf{C}}{\mathbf{x}}(T)$. In this case, we have $$\frac{\partial {\mathbf{y}}(T)}{\partial{\mathbf{v}}}=-\sum_{k=1}^n\int_0^T {\mathbf{p}}_k(t)^\top\frac{\partial {\mathbf{f}}_k}{\partial{\mathbf{v}}}({\mathbf{v}},{\mathbf{x}}(t))\,dt,$$ where the final condition (\[finalcondk\]) is replaced by ${\mathbf{X}}_k({\mathbf{m}}){\mathbf{p}}_k(T)=-{\mathbf{C}}_k^\top$ and the adjoint equation (\[adjk\]) is unchanged, but ${\mathbf{p}}_k(t)$ is now a matrix of size $n_k\times n_{meas}$.
The adjoint equation in discrete time {#sect:adjdiscr} ------------------------------------- The previous section shows that once the state equation is solved, the gradient of $J$ can be computed at the cost of one more differential equation (\[adj\]), which has to be compared with the cost of computing the sensitivity functions. But the practical implementation needs to reconsider this approach in discrete time, since we cannot just discretize the continuous state and the continuous adjoint state equations independently: the discretized adjoint must be the adjoint of the discretized state. This is one reason why high order integration schemes are seldom used in adjoint codes written by hand (otherwise automatic differentiation can be used, see [@Bischof]), since the discrete adjoint is obtained by formal differentiation of the state integration scheme. Since we have to consider that the state equation could be stiff, because of possibly large values of the fluxes, a good compromise is the implicit trapezoidal rule, which is of order 2. Hence, we consider a discretization of the interval $[0,T]$ defined by $t_i=(i-1)h$, for $i=1\dots N$ and $h=T/(N-1)$, and we denote by ${\mathbf{x}}^i$ the approximation of ${\mathbf{x}}(t_i)$. The implicit trapezoidal rule applied to equation (\[state\]) gives $${\mathbf{X}}({\mathbf{m}})({\mathbf{x}}^{i+1}-{\mathbf{x}}^{i})-\frac{h}{2}({\mathbf{f}}({\mathbf{x}}^{i+1},{\mathbf{v}})+{\mathbf{f}}({\mathbf{x}}^{i},{\mathbf{v}}))=0,\;i=1\dots N-1, \label{state:discr}$$ and we still denote by ${\mathbf{x}}({\mathbf{v}},{\mathbf{m}})$ the solution of (\[state:discr\]).
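For a linear right-hand side ${\mathbf{f}}({\mathbf{x}},{\mathbf{v}})={\mathbf{M}}{\mathbf{x}}+{\mathbf{b}}$, each trapezoidal step solves $({\mathbf{X}}-\frac{h}{2}{\mathbf{M}}){\mathbf{x}}^{i+1}=({\mathbf{X}}+\frac{h}{2}{\mathbf{M}}){\mathbf{x}}^{i}+h{\mathbf{b}}$. A minimal sketch, with a scalar toy problem as an illustrative choice:

```python
import numpy as np

# Sketch of the implicit trapezoidal rule for one weight of the linear
# cascade X xdot = M x + b (constant b for simplicity): each step solves
# (X - h/2 M) x^{i+1} = (X + h/2 M) x^i + h b.
def trapezoidal(X, M, b, x0, T, N):
    h = T / (N - 1)
    A_impl = X - 0.5 * h * M
    A_expl = X + 0.5 * h * M
    xs = [x0]
    for _ in range(N - 1):
        xs.append(np.linalg.solve(A_impl, A_expl @ xs[-1] + h * b))
    return np.array(xs)

# Scalar test problem: xdot = -x + 1, x(0) = 0, exact x(t) = 1 - exp(-t).
X = np.array([[1.0]]); M = np.array([[-1.0]]); b = np.array([1.0])
xs = trapezoidal(X, M, b, np.array([0.0]), T=1.0, N=101)
print(abs(xs[-1, 0] - (1.0 - np.exp(-1.0))) < 1e-4)  # True (order-2 accuracy)
```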
We consider that for each measurement time $\left\{\tau_j\right\}_{1\leq j\leq n_t}$ there exists $\theta(j)$ such that $\tau_j=t_{\theta(j)}$, with $\theta(n_t)=N$, and we define the cost function $J({\mathbf{v}},{\mathbf{m}})=I({\mathbf{x}}({\mathbf{v}},{\mathbf{m}}))$ where $$I({\mathbf{x}})=\frac{1}{2}\sum_{j=1}^{n_t}\left\Vert {\mathbf{\sigma}}^{-1}\left({\mathbf{C}}{\mathbf{x}}^{\theta(j)}-{\mathbf{y}}^{meas,j}\right)\right\Vert^2.$$ The discrete Lagrangian is defined by $$L({\mathbf{x}},{\mathbf{p}},{\mathbf{v}},{\mathbf{m}})=I({\mathbf{x}})+\sum_{i=1}^{N-1}({\mathbf{p}}^i)^\top ({\mathbf{X}}({\mathbf{m}})({\mathbf{x}}^{i+1}-{\mathbf{x}}^{i})-\frac{h}{2}({\mathbf{f}}({\mathbf{x}}^{i+1},{\mathbf{v}})+{\mathbf{f}}({\mathbf{x}}^{i},{\mathbf{v}}))),$$ where ${\mathbf{p}}^i$ is the adjoint state for time $i$. Straightforward computations show that the adjoint equation is given by $$\left({\mathbf{X}}({\mathbf{m}})-\frac{h}{2}\frac{\partial {\mathbf{f}}}{\partial {\mathbf{x}}}({\mathbf{x}}^i,{\mathbf{v}})^\top\right){\mathbf{p}}^{i-1}= \left({\mathbf{X}}({\mathbf{m}})+\frac{h}{2}\frac{\partial {\mathbf{f}}}{\partial {\mathbf{x}}}({\mathbf{x}}^i,{\mathbf{v}})^\top\right){\mathbf{p}}^i-\left(\frac{\partial I}{\partial {\mathbf{x}}^i}\right)^{\top},\;1<i<N,$$ with the final condition $$\left({\mathbf{X}}({\mathbf{m}})-\frac{h}{2}\frac{\partial {\mathbf{f}}}{\partial {\mathbf{x}}}({\mathbf{x}}^N,{\mathbf{v}})^\top\right){\mathbf{p}}^{N-1}=-\left(\frac{\partial I}{\partial {\mathbf{x}}^N}\right)^{\top},$$ where $\frac{\partial I}{\partial {\mathbf{x}}^i}=0$ if $\theta(j)\neq i$ for all $j=1\dots n_t$ and $$\frac{\partial I}{\partial {\mathbf{x}}^i}=\left({\mathbf{C}}{\mathbf{x}}^i-{\mathbf{y}}^{meas,\theta^{-1}(i)}\right)^\top{\mathbf{\sigma}}^{-2}{\mathbf{C}}$$ otherwise.
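Putting the discrete state and adjoint recursions together, the following scalar toy model (an illustrative stand-in for the cumomer cascade, with a single measurement at final time) checks the adjoint gradient, obtained as $\partial L/\partial {\mathbf{v}}$ of the discrete Lagrangian above, against a centered finite difference:

```python
import numpy as np

# Toy discrete adjoint check for the scalar model xdot = f(x,v) = v(1-x),
# discretized with the implicit trapezoidal rule, with cost
# J(v) = 1/2 (x^N - y)^2. This is an illustrative stand-in, not the
# generated code; all names here are local to the example.
def solve_state(v, x0, T, N):
    h = T / (N - 1)
    xs = [x0]
    for _ in range(N - 1):
        # (1 + h/2 v) x^{i+1} = (1 - h/2 v) x^i + h v
        xs.append(((1 - 0.5 * h * v) * xs[-1] + h * v) / (1 + 0.5 * h * v))
    return np.array(xs)

def gradient_adjoint(v, x0, T, N, y):
    h = T / (N - 1)
    xs = solve_state(v, x0, T, N)
    dfdx = -v                                      # df/dx, constant here
    p = np.zeros(N - 1)                            # adjoint states p^1..p^{N-1}
    p[-1] = -(xs[-1] - y) / (1 - 0.5 * h * dfdx)   # final condition
    for i in range(N - 2, 0, -1):                  # backward recursion
        p[i - 1] = (1 + 0.5 * h * dfdx) * p[i] / (1 - 0.5 * h * dfdx)
    dfdv = 1 - xs                                  # df/dv at each time step
    # dJ/dv = dL/dv = -h/2 sum_i (df/dv(x^{i+1}) + df/dv(x^i)) p^i
    return -0.5 * h * np.sum((dfdv[1:] + dfdv[:-1]) * p)

v, x0, T, N, y = 2.0, 0.0, 1.0, 201, 0.5
g = gradient_adjoint(v, x0, T, N, y)
eps = 1e-6
J = lambda v: 0.5 * (solve_state(v, x0, T, N)[-1] - y) ** 2
g_fd = (J(v + eps) - J(v - eps)) / (2 * eps)
print(abs(g - g_fd) < 1e-8)  # adjoint gradient matches the finite difference
```

The agreement is exact up to finite-difference roundoff, because the discrete adjoint differentiates the integration scheme itself rather than discretizing the continuous adjoint.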
Once the state and the adjoint state equations are solved, the gradient is given by $$\left(\frac{dJ({\mathbf{v}},{\mathbf{m}})}{d{\mathbf{v}}}\right)^\top=-\frac{h}{2}\sum_{i=1}^{N-1}\left(\frac{\partial {\mathbf{f}}}{\partial {\mathbf{v}}}({\mathbf{x}}^{i+1},{\mathbf{v}})+\frac{\partial {\mathbf{f}}}{\partial {\mathbf{v}}}({\mathbf{x}}^{i},{\mathbf{v}})\right)^\top {\mathbf{p}}^i,$$ and $$\left(\frac{dJ({\mathbf{v}},{\mathbf{m}})}{d{\mathbf{m}}}\right)^\top=\sum_{i=1}^{N-1} \left(\frac{\partial}{\partial{\mathbf{m}}}\left({\mathbf{X}}({\mathbf{m}})\left({\mathbf{x}}^{i+1}-{\mathbf{x}}^{i}\right)\right)\right)^{\top}{\mathbf{p}}^i,$$ where for a given vector ${\mathbf{z}}$ the matrix $\frac{\partial}{\partial {\mathbf{m}}}({\mathbf{X}}({\mathbf{m}}){\mathbf{z}})$ is defined by $$\left(\frac{\partial}{\partial {\mathbf{m}}}({\mathbf{X}}({\mathbf{m}}) {\mathbf{z}})\right)_{ij}=\left\{\begin{array}{rl} z_i,&\mbox{ if the cumomer fraction }z_i\mbox{ belongs to metabolite }j,\\ 0,&\mbox{ otherwise.} \end{array} \right. \label{def:derivm}$$ W. Wiechert, M. Wurzel, Metabolic isotopomer labeling systems. Part [I]{}: global dynamic behavior, Mathematical Biosciences 169 (2) (2001) 173–205. M. Hucka, A. Finney, H. M. Sauro, H. Bolouri, J. C. Doyle, H. Kitano, et al., [The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models]{}, Bioinformatics 19 (4) (2003) 524–531. A. Finney, M. Hucka, [Systems biology markup language: Level 2 and beyond.]{}, Biochem Soc Trans 31 (Pt 6) (2003) 1472–1473. T. Bray, J. Paoli, C. Sperberg-McQueen, Extensible markup language (XML) 1.0, available via the World Wide Web at <http://www.w3.org/TR/2004/REC-xml-20040204>. C. Bunks, J. Chancelier, F. Delebecque, C. Gomez, M. Goursat, R. Nikoukhah, S. Steer, Engineering and Scientific Computing with Scilab, Birkhäuser, Boston, 1999. Scilab web site, <http://www.scilab.org>. S. Mottelet, A.
Pauss, [XMLlab]{} : multimedia publication of simulations applets using [XML]{} and [Scilab]{}, arXiv:1102.5711v1, [//http://arxiv.org/abs/1102.5711](//http://arxiv.org/abs/1102.5711). N. Isermann, W. Wiechert, Metabolic isotopomer labeling systems. part [II]{}: structural flux identifiability analysis, Mathematical Biosciences 183 (2) (2003) 175 – 214. W. Wiechert, K. N[ö]{}h, From stationary to instationary metabolic flux analysis, in: U. Kragl (Ed.), Technology Transfer in Biotechnology, Vol. 92 of Advances in Biochemical Engineering/Biotechnology, Springer Berlin / Heidelberg, 2005, pp. 145–172. T. H. Yang, O. Frick, E. Heinzle, Hybrid optimization for 13c metabolic flux analysis using systems parametrized by compactification, BMC Systems Biology 2 (1) (2008) 29. K. N[ö]{}h, W. Wiechert, Parallel solution of cascaded ode systems applied to [13C]{}-labeling experiments, in: M. Bubak, G. D. v. Albada, P. M. A. Sloot, J. J. Dongarra (Eds.), Computational Science - ICCS 2004, Vol. 3037 of Lecture Notes in Computer Science, Springer Berlin / Heidelberg, 2004, pp. 594–597. K. N[ö]{}h, A. Wahl, W. Wiechert, Computational tools for isotopically instationary 13c labeling experiments under metabolic steady state conditions, Metabolic Engineering 8 (6) (2006) 554 – 577. R.-E. Plessix, A review of the adjoint-state method for computing the gradient of a functional with geophysical applications, Geophysical Journal International 167 (2006) 495–503. G. Chavent, Identification of function parameters in partial differential equations, in: R. Goodson, N.-Y. Polis (Eds.), Identification of parameter distributed systems, ASME, 1974. J. L. Lions, E. Magenes, Non-homogeneous boundary value problems and applications, Springer-Verlag, Berlin, New York, 1972. C. H. Bischof, H. M. B[ü]{}cker, P. D. Hovland, U. Naumann, J. Utke (Eds.), Advances in Automatic Differentiation, Vol. 64 of Lecture Notes in Computational Science and Engineering, Springer, Berlin, 2008. L.-E. Quek, C. 
Wittmann, L. Nielsen, J. Kromer, Openflux: efficient modelling software for 13c-based metabolic flux analysis, Microbial Cell Factories 8 (1). W. Wiechert, A. A. de Graaf, Bidirectional reaction steps in metabolic networks: I. modeling and simulation of carbon isotope labeling experiments, Biotechnology and Bioengineering 55 (1) (1997) 101–117. Workflow of SBML markup to Scilab code ====================================== <?xml version="1.0" encoding="UTF-8"?> <sbml level="2" version="1" xmlns="http://www.sbml.org/sbml/level2" xmlns:ns="http://www.sbml.org/sbml/level2" xmlns:celldesigner="http://www.sbml.org/2001/ns/celldesigner"> <model id="branching"> <listOfCompartments> <compartment id="default" /> </listOfCompartments> <listOfSpecies> <species compartment="default" id="A"/> <species compartment="default" id="D"/> <species compartment="default" id="F"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> LABEL_MEASUREMENT 1x,x1,11 </body> </html> </notes> </species> <species compartment="default" id="G"/> <species compartment="default" id="A_out"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> LABEL_INPUT 01,10,11 </body> </html> </notes> </species> </listOfSpecies> <listOfReactions> <reaction id="v1" reversible="false"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> IJ &gt; IJ </body> </html> </notes> <listOfReactants> <speciesReference species="A"/> </listOfReactants> <listOfProducts> <speciesReference species="F"/> </listOfProducts> </reaction> <reaction id="v2" reversible="false"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> IJ &gt; I+J </body> </html> </notes> <listOfReactants> <speciesReference species="A" /> </listOfReactants> <listOfProducts> <speciesReference species="D" stoichiometry="2.0" /> </listOfProducts> </reaction> ``` {startFrom="62"} <reaction id="v3" reversible="false"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> IJ &gt; JI </body> </html> </notes> <listOfReactants> <speciesReference 
species="A"/> </listOfReactants> <listOfProducts> <speciesReference species="F"/> </listOfProducts> </reaction> <reaction id="v4" reversible="false"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> I+J &gt; IJ </body> </html> </notes> <listOfReactants> <speciesReference species="D" stoichiometry="2"/> </listOfReactants> <listOfProducts> <speciesReference species="F"/> </listOfProducts> </reaction> <reaction id="v5" reversible="false"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> IJ &gt; IJ </body> </html> </notes> <listOfReactants> <speciesReference species="F"/> </listOfReactants> <listOfProducts> <speciesReference species="G"/> </listOfProducts> </reaction> <reaction id="v6" reversible="false"> <notes> <html xmlns="http://www.w3.org/1999/xhtml"> <body> IJ &gt; IJ </body> </html> </notes> <listOfReactants> <speciesReference species="A_out"/> </listOfReactants> <listOfProducts> <speciesReference species="A" /> </listOfProducts> </reaction> </listOfReactions> </model> </sbml> ``` The specific annotation, in the private namespace `xmlns:smtb="http://www.utc.fr/sysmetab"`, concerning the carbon atom mapping of each reaction is done as follows: for example, for the reaction corresponding to flux $v_2$, $$v_2:\quad \mathrm{A}\to\mathrm{D}+\mathrm{D},\qquad \#ij\to\#i+\#j,$$ some specific markup is generated from the string `IJ &gt; I+J` found in the reaction `<notes>` element, as depicted in Figure \[fig:atom:mapping\].
<reaction position="2" id="v2" name="v2" reversible="false"> <listOfReactants> <speciesReference species="A"> <smtb:carbon position="2" destination="1" occurence="1" species="D"/> <smtb:carbon position="1" destination="1" occurence="2" species="D"/> </speciesReference> </listOfReactants> <listOfProducts> <speciesReference species="D"> <smtb:carbon position="1" destination="2" occurence="1" species="A"/> </speciesReference> <speciesReference species="D"> <smtb:carbon position="1" destination="1" occurence="1" species="A"/> </speciesReference> </listOfProducts> </reaction> For each intermediate species, we also add some markup specifying the exhaustive list of its cumomers. For example, for species A in the branching network, we have the cumomers $\mathrm{A}_{x1}$, $\mathrm{A}_{1x}$, $\mathrm{A}_{11}$ (we do not take into account $\mathrm{A}_{xx}$, which is equal to 1), and the `<species>` element corresponding to A is enriched as depicted in Figure \[fig:cumomer:species:list\]. Each `<smtb:cumomer>` element has an id of the form $\mathrm{A}_n$, where $n$ is the integer whose base-2 representation is obtained from the cumomer pattern by replacing the x's with zeros. Each `<smtb:carbon>` element denotes a carbon-13 ($^{13}$C) atom at the position given by the `position` attribute in the molecule.
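The id computation just described is a one-liner; the following helper (hypothetical, for illustration only) reproduces the numeric suffixes of $\mathrm{A}_1$, $\mathrm{A}_2$, $\mathrm{A}_3$:

```python
def cumomer_index(pattern):
    """Integer whose base-2 representation is the cumomer pattern
    with the x's replaced by zeros, e.g. "x1" -> "01" -> 1."""
    return int(pattern.replace("x", "0"), 2)
```

For the two-carbon species A this gives `cumomer_index("x1") == 1`, `cumomer_index("1x") == 2`, and `cumomer_index("11") == 3`, matching the ids `A_1`, `A_2`, `A_3` above.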
<species compartment="default" id="A" name="A" type="intermediate" carbons="2"> <smtb:cumomer id="A_1" species="A" weight="1" pattern="x1"> <smtb:carbon position="1"/> </smtb:cumomer> <smtb:cumomer id="A_2" species="A" weight="1" pattern="1x"> <smtb:carbon position="2"/> </smtb:cumomer> <smtb:cumomer id="A_3" species="A" weight="2" pattern="11"> <smtb:carbon position="1"/> <smtb:carbon position="2"/> </smtb:cumomer> </species> When we consider the vectors of intermediate species cumomer fractions ${\mathbf{x}}_k$ for weights up to 2 for the branching network, we have $${\mathbf{x}}_1=\left(\begin{array}{c}\mathrm{A}_{x1},\mathrm{A}_{1x},\mathrm{D}_{1},\mathrm{F}_{x1},\mathrm{F}_{1x}\end{array}\right)^\top,\; {\mathbf{x}}_2=\left(\mathrm{A}_{11},\mathrm{F}_{11}\right)^\top.$$ A redundant enumeration is also generated (see the `<smtb:listOfIntermediateCumomers>` element in Figure \[fig:cumomer:species:globallist\]) giving the ordering of all cumomers sorted by weight, making it possible to keep the correspondence between the components of the vectors ${\mathbf{x}}_1$, ${\mathbf{x}}_2$ and the corresponding species cumomers (this information is needed in the subsequent transformations).
A similar enumeration is also generated for input species cumomers in the `<smtb:listOfInputCumomers>` element, giving the correspondence between the components of the vectors ${\mathbf{x}}_k^{input}$ and the corresponding cumomers: $${\mathbf{x}}_1^{input}=\left(\mathrm{A\_out}_{x1},\mathrm{A\_out}_{1x}\right)^\top,\; {\mathbf{x}}_2^{input}=\left(\mathrm{A\_out}_{11}\right)^\top$$ <smtb:listOfIntermediateCumomers xmlns:smtb="http://www.utc.fr/sysmetab"> <smtb:listOfCumomers weight="1"> <smtb:cumomer id="A_1" species="A" weight="1" pattern="x1" position="1"> <smtb:carbon position="1"/> </smtb:cumomer> <smtb:cumomer id="A_2" species="A" weight="1" pattern="1x" position="2"> <smtb:carbon position="2"/> </smtb:cumomer> <smtb:cumomer id="D_1" species="D" weight="1" pattern="1" position="3"> <smtb:carbon position="1"/> </smtb:cumomer> <smtb:cumomer id="F_1" species="F" weight="1" pattern="x1" position="4"> <smtb:carbon position="1"/> </smtb:cumomer> <smtb:cumomer id="F_2" species="F" weight="1" pattern="1x" position="5"> <smtb:carbon position="2"/> </smtb:cumomer> </smtb:listOfCumomers> <smtb:listOfCumomers weight="2"> <smtb:cumomer id="A_3" species="A" weight="2" pattern="11" position="1"> <smtb:carbon position="1"/> <smtb:carbon position="2"/> </smtb:cumomer> <smtb:cumomer id="F_3" species="F" weight="2" pattern="11" position="2"> <smtb:carbon position="1"/> <smtb:carbon position="2"/> </smtb:cumomer> </smtb:listOfCumomers> </smtb:listOfIntermediateCumomers> <smtb:listOfInputCumomers xmlns:smtb="http://www.utc.fr/sysmetab"> <smtb:listOfCumomers weight="1"> <smtb:cumomer id="A_out_1" species="A_out" weight="1" pattern="x1" position="1"> <smtb:carbon position="1"/> </smtb:cumomer> <smtb:cumomer id="A_out_2" species="A_out" weight="1" pattern="1x" position="2"> <smtb:carbon position="2"/> </smtb:cumomer> </smtb:listOfCumomers> <smtb:listOfCumomers weight="2"> <smtb:cumomer id="A_out_3" species="A_out" weight="2" pattern="11" position="1"> <smtb:carbon position="1"/> <smtb:carbon
position="2"/> </smtb:cumomer> </smtb:listOfCumomers> </smtb:listOfInputCumomers> function [x1,x2,dx1_dv,dx2_dv]=solveCumomers(v,x1_input,x2_input) n1=5; // Weight 1 cumomers M1_ijv=[1,1,-(v(1)+v(2)+v(3)) 2,2,-(v(1)+v(2)+v(3)) 3,2,v(2) 3,1,v(2) 3,3,-(v(4)+v(4)) 4,1,v(1) 4,2,v(3) 4,3,v(4) 4,4,-v(5) 5,2,v(1) 5,1,v(3) 5,3,v(4) 5,5,-v(5)]; M1=sparse(M1_ijv(:,1:2),M1_ijv(:,3),[n1,n1]); b1_ijv=[1,1,v(6).*x1_input(1,:) 2,1,v(6).*x1_input(2,:)]; b1_1=s_full(b1_ijv(:,1:2),b1_ijv(:,3),[n1,1]); [M1_handle,M1_rank]=lufact(M1); x1=lusolve(M1_handle,-[b1_1]); df1_dv_ijv=[1,6,x1_input(1,:) 1,1,-x1(1,:) 1,2,-x1(1,:) 1,3,-x1(1,:) 2,6,x1_input(2,:) 2,1,-x1(2,:) 2,2,-x1(2,:) 2,3,-x1(2,:) 3,2,x1(2,:) 3,2,x1(1,:) 3,4,-x1(3,:) 3,4,-x1(3,:) 4,1,x1(1,:) 4,3,x1(2,:) 4,4,x1(3,:) 4,5,-x1(4,:) 5,1,x1(2,:) 5,3,x1(1,:) 5,4,x1(3,:) 5,5,-x1(5,:)]; df1_dv_1=s_full(df1_dv_ijv(:,1:2),df1_dv_ijv(:,3),[n1,6]); dx1_dv(:,:,1)=lusolve(M1_handle,-df1_dv_1); ludel(M1_handle); n2=2; // Weight 2 cumomers M2_ijv=[1,1,-(v(1)+v(2)+v(3)) 2,1,v(1) 2,1,v(3) 2,2,-v(5)]; M2=sparse(M2_ijv(:,1:2),M2_ijv(:,3),[n2,n2]); b2_ijv=[1,1,v(6).*x2_input(1,:) 2,1,v(4).*x1(3,:).*x1(3,:)]; b2_1=s_full(b2_ijv(:,1:2),b2_ijv(:,3),[n2,1]); [M2_handle,M2_rank]=lufact(M2); x2=lusolve(M2_handle,-[b2_1]); ``` {startFrom="61"} df2_dv_ijv=[1,6,x2_input(1,:) 1,1,-x2(1,:) 1,2,-x2(1,:) 1,3,-x2(1,:) 2,1,x2(1,:) 2,3,x2(1,:) 2,4,x1(3,:).*x1(3,:) 2,5,-x2(2,:)]; df2_dv_1=s_full(df2_dv_ijv(:,1:2),df2_dv_ijv(:,3),[n2,6]); db2_dx1_ijv=[2,3,x1(3,:).*v(4) 2,3,x1(3,:).*v(4)]; db2_dx1_1=sparse(db2_dx1_ijv(:,1:2),db2_dx1_ijv(:,3),[n2,n1]); dx2_dv=zeros(n2,6,1); dx2_dv(:,:,1)=lusolve(M2_handle,-(df2_dv_1+db2_dx1_1*dx1_dv(:,:,1))); ludel(M2_handle); endfunction function [cost,grad]=costAndGrad(v) [x1,x2,dx1_dv,dx2_dv]=solveCumomers(v,x1_input,x2_input); e_label=(C1*x1+C2*x2)-yobs; e_flux=E*v-vobs; cost=0.5*(sum(delta.*e_flux.^2)+sum(alpha(:,1).*e_label(:,1).^2)); grad=(delta.*e_flux)'*E+(alpha(:,1).*e_label(:,1))'*(C1*dx1_dv(:,:,1)+C2*dx2_dv(:,:,1)); 
endfunction ```

Computation of the gradient in the non-stationary case
=======================================================

The discrete state equation in its cascade form is easily obtained from (\[state:discr\]) and the definition of ${\mathbf{f}}$ as $$\left({\mathbf{X}}_k-\frac{h}{2}{\mathbf{M}}_k\right){\mathbf{x}}_k^{i+1}= \left({\mathbf{X}}_k+\frac{h}{2}{\mathbf{M}}_k\right){\mathbf{x}}_k^i +\frac{h}{2} \left({\mathbf{b}}_k({\mathbf{x}}^i) +{\mathbf{b}}_k({\mathbf{x}}^{i+1}) \right),\;1\leq i< N, \label{state:discr:casc}$$ for $k=1\dots n$. We recall that ${\mathbf{b}}_k({\mathbf{x}})$ only depends on ${\mathbf{x}}_l$ for $l<k$, so that the right-hand side of (\[state:discr:casc\]) is already known at stage $k$. To obtain ${\mathbf{x}}_k^{i+1}$ at each time step $i$, we just have to solve a sparse linear system with a matrix whose ${\mathbf{LU}}$ factors need to be determined only once, before the time iterations. The cascade structure of the discretized state and adjoint state equations is easily recovered. The discretized adjoint state equations associated with (\[state:discr\]) are given, for weights $k=1\dots n$, by $$\begin{aligned} \left({\mathbf{X}}_k-\frac{h}{2}{\mathbf{M}}_k^\top\right){\mathbf{p}}_k^{N-1}&=&\frac{h}{2}\sum_{l=k+1}^n \left(\frac{\partial {\mathbf{b}}_l}{\partial {\mathbf{x}}_k}({\mathbf{x}}^N)\right)^\top {\mathbf{p}}_l^{N-1} -\frac{\partial I}{\partial {\mathbf{x}}_k^N},\\ \left({\mathbf{X}}_k-\frac{h}{2}{\mathbf{M}}_k^\top\right){\mathbf{p}}_k^{i-1}&=& \left({\mathbf{X}}_k+\frac{h}{2}{\mathbf{M}}_k^\top\right){\mathbf{p}}_k^i +\frac{h}{2}\sum_{l=k+1}^n \left(\frac{\partial {\mathbf{b}}_l}{\partial {\mathbf{x}}_k}({\mathbf{x}}^i)\right)^\top({\mathbf{p}}_l^{i}+{\mathbf{p}}_l^{i-1})-\frac{\partial I}{\partial {\mathbf{x}}_k^i},\;1< i< N.\end{aligned}$$ As in the continuous case, the adjoint states ${\mathbf{p}}_k$ are obtained in decreasing weight order.
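The factor-once strategy for the cascade can be sketched in a few lines of Python on a toy two-weight problem (random stable matrices standing in for $\mathbf{M}_1,\mathbf{M}_2$ and a linear coupling standing in for $\mathbf{b}_2$; not the paper's model). For brevity the step matrices are inverted once with dense algebra, where production code would store sparse LU factors and perform a triangular solve at each step.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, h, N = 4, 3, 0.01, 5000
M1 = -np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
M2 = -np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))
B = rng.standard_normal((n2, n1))   # coupling: b_2(x) = B @ x_1
b1 = rng.standard_normal(n1)        # constant input term for weight 1

# Step matrices are factored ("inverted") once, outside the time loop.
inv1 = np.linalg.inv(np.eye(n1) - 0.5 * h * M1)
inv2 = np.linalg.inv(np.eye(n2) - 0.5 * h * M2)
A1p = np.eye(n1) + 0.5 * h * M1
A2p = np.eye(n2) + 0.5 * h * M2

x1 = np.zeros((N + 1, n1))
x2 = np.zeros((N + 1, n2))
for i in range(N):
    # weight 1 first: its right-hand side is fully known
    x1[i + 1] = inv1 @ (A1p @ x1[i] + h * b1)
    # weight 2: b_2 depends only on weight-1 states, already available
    rhs2 = A2p @ x2[i] + 0.5 * h * B @ (x1[i] + x1[i + 1])
    x2[i + 1] = inv2 @ rhs2
```

Each time step then costs only a few matrix-vector products per weight, which is what makes long labeling experiments tractable.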
The two components of the gradient are finally obtained by $$\left(\frac{dJ({\mathbf{v}},{\mathbf{m}})}{d{\mathbf{v}}}\right)^\top=-\frac{h}{2}\sum_{i=1}^{N-1}\sum_{k=1}^n\left(\frac{\partial {\mathbf{f}}_k}{\partial {\mathbf{v}}}({\mathbf{x}}^{i+1},{\mathbf{v}})+\frac{\partial {\mathbf{f}}_k}{\partial {\mathbf{v}}}({\mathbf{x}}^{i},{\mathbf{v}})\right)^\top {\mathbf{p}}_k^i,$$ and $$\left(\frac{dJ({\mathbf{v}},{\mathbf{m}})}{d{\mathbf{m}}}\right)^\top=\sum_{i=1}^{N-1}\sum_{k=1}^n \left(\frac{\partial}{\partial{\mathbf{m}}}\left({\mathbf{X}}_k({\mathbf{m}})\left({\mathbf{x}}_k^{i+1}-{\mathbf{x}}_k^{i}\right)\right)\right)^{\top}{\mathbf{p}}_k^i,$$ where for a given vector ${\mathbf{z}}\in\mathbb{R}^{n_k}$ the matrix $\frac{\partial}{\partial {\mathbf{m}}}({\mathbf{X}}_k({\mathbf{m}}){\mathbf{z}})$ is defined by $$\left(\frac{\partial}{\partial {\mathbf{m}}}({\mathbf{X}}_k({\mathbf{m}}) {\mathbf{z}})\right)_{ij}=\left\{\begin{array}{rl} z_i,&\mbox{ if the cumomer fraction }z_i\mbox{ belongs to metabolite }j,\\ 0,&\mbox{ otherwise.} \end{array} \right. \label{def:derivm2}$$
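The matrix $\frac{\partial}{\partial {\mathbf{m}}}({\mathbf{X}}_k({\mathbf{m}}){\mathbf{z}})$ has at most one nonzero entry per row, so it can be assembled directly from a cumomer-to-metabolite membership map. A small Python sketch (hypothetical names, dense for clarity):

```python
import numpy as np

def dXm_z(z, metabolite_of):
    """Assemble (d/dm)(X(m) z): entry (i, j) equals z[i] when cumomer
    fraction i belongs to metabolite j, and 0 otherwise."""
    z = np.asarray(z, dtype=float)
    n, q = len(z), max(metabolite_of) + 1
    D = np.zeros((n, q))
    D[np.arange(n), metabolite_of] = z   # one nonzero per row
    return D
```

In a real implementation this matrix would of course be kept in sparse triplet form, as the Scilab listings above do for the other system matrices.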
--- abstract: 'In general there exists no relationship between the fixed point sets of the composition and of the average of a family of nonexpansive operators in Hilbert spaces. In this paper, we establish an asymptotic principle connecting the cycles generated by under-relaxed compositions of nonexpansive operators to the fixed points of the average of these operators. In the special case when the operators are projectors onto closed convex sets, we prove a conjecture by De Pierro which has so far been established only for projections onto affine subspaces.' author: - | J.-B. Baillon,$^1$ P. L. Combettes,$^{2}$ and R. Cominetti$^3$\ $\!^1$Université Paris 1 Panthéon-Sorbonne\ SAMM – EA 4543\ 75013 Paris, France ([[email protected]]{})\ $\!^2$UPMC Université Paris 06\ Laboratoire Jacques-Louis Lions – UMR 7598\ 75005 Paris, France ([[email protected]]{})\ $\!^3$Universidad de Chile\ Departamento de Ingeniería Industrial\ Santiago, Chile ([[email protected]]{}) date:   title: 'Asymptotic behavior of compositions of under-relaxed nonexpansive operators' --- [**Keywords.**]{} Cyclic projections, De Pierro’s conjecture, fixed point, nonexpansive operator, projection operator, under-relaxed cycles. [**2010 Mathematics Subject Classification.**]{} 47H09, 47H10, 47N10, 65K15 Introduction ============ Fixed points of compositions and averages of nonexpansive operators arise naturally in diverse settings; see for instance [@Banf11; @Livre1; @Byrn08; @Cegi12] and the references therein. In general there is no simple relationship between the fixed point sets of such operators. In this paper we investigate the connection of the fixed points of the average operator with the limits of a family of under-relaxed compositions. More precisely, we consider the framework described in the following standing assumption. 
*\[h:1\] ${\ensuremath{{\mathcal H}}}$ is a real Hilbert space, $D$ is a nonempty, closed, convex subset of ${\ensuremath{{\mathcal H}}}$, $m{\ensuremath{\geqslant}}2$ is an integer, $I=\{1,\ldots,m\}$, $(T_i)_{i\in I}$ is a family of nonexpansive operators from $D$ to $D$, and $({\ensuremath{\operatorname{Fix}}}T_i)_{i\in I}$ are the associated fixed point sets. Moreover, we set $$\label{e:Reps} \begin{cases} T={\displaystyle{\frac{1}{m}}}\sum_{i\in I}T_i\\ R=T_m\circ\cdots\circ T_1\\ (\forall\varepsilon\in{\ensuremath{\left]0,1\right[}})\;\; R^{\varepsilon}= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_m-{\ensuremath{\operatorname{Id}}})\big)\circ\cdots\circ \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_{1}-{\ensuremath{\operatorname{Id}}})\big). \end{cases}$$* When the operators $(T_i)_{i\in I}$ have common fixed points, ${\ensuremath{\operatorname{Fix}}}T=\bigcap_{i=1}^m{\ensuremath{\operatorname{Fix}}}T_i\neq{\ensuremath{{\varnothing}}}$ [@Livre1 Proposition 4.32]. If, in addition, they are strictly nonexpansive in the sense that $$\label{e:2012-11-20b} (\forall i\in I)(\forall x\in D\smallsetminus{\ensuremath{\operatorname{Fix}}}T_i) (\forall y\in{\ensuremath{\operatorname{Fix}}}T_i)\quad\|T_ix-y\|<\|x-y\|,$$ it also holds that ${\ensuremath{\operatorname{Fix}}}R=\bigcap_{i\in I}{\ensuremath{\operatorname{Fix}}}T_i$ [@Livre1 Corollary 4.36], and therefore ${\ensuremath{\operatorname{Fix}}}R={\ensuremath{\operatorname{Fix}}}T$. However, in the general case when $\bigcap_{i\in I}{\ensuremath{\operatorname{Fix}}}T_i={\ensuremath{{\varnothing}}}$, the question of the relationship between ${\ensuremath{\operatorname{Fix}}}R$ and ${\ensuremath{\operatorname{Fix}}}T$ has been long standing and remains open even for convex projection operators; see, e.g., [@Byrn08 Section 8.3.2] and [@Depi01].
Even when $m=2$ and $T_1$ and $T_2$ are resolvents of maximally monotone operators, there does not seem to exist a simple relationship between ${\ensuremath{\operatorname{Fix}}}R$ and ${\ensuremath{\operatorname{Fix}}}T$ [@Wang11], except for convex projection operators, in which case ${\ensuremath{\operatorname{Fix}}}T=(1/2)({\ensuremath{\operatorname{Fix}}}R+{\ensuremath{\operatorname{Fix}}}R')$, with $R'=T_1\circ T_2$ (see [@Baus93; @Sign94] for related results, and [@Baus12] for the case of $m{\ensuremath{\geqslant}}3$ resolvents). When $(T_i)_{i\in I}=(P_i)_{i\in I}$ are projection operators onto nonempty closed convex sets $(C_i)_{i\in I}$, ${\ensuremath{\operatorname{Fix}}}T$ is the set of minimizers of the average square-distance function [@Baus93; @Sign94; @Depi85] $$\label{e:1994} \Phi\colon{\ensuremath{{\mathcal H}}}\to{\ensuremath{\mathbb{R}}}\colon x\mapsto\frac{1}{2m} \sum_{i\in I}d_{C_i}^2(x),$$ while ${\ensuremath{\operatorname{Fix}}}R$ is related to the set of Nash equilibria of a cyclic projection game. Indeed, the fixed point equation $x=Rx$ can be restated as a system of equations in $(x_1,\ldots,x_m)\in{\ensuremath{{\mathcal H}}}^m$, namely $$\label{e:2010-12-21cc} \begin{cases} x_1&=P_1x_m\\ x_2&=P_2x_1\\ &~\vdots\\ x_m&=P_mx_{m-1}, \end{cases}$$ which characterize the Nash equilibria of a game in which each player $i\in I$ selects a strategy $x_i\in C_i$ to minimize the payoff $x\mapsto\|x-x_{i-1}\|$, with the convention $x_{0}=x_m$. It is worth noting that, for $m{\ensuremath{\geqslant}}3$, these Nash equilibria cannot be characterized as minimizers of any function $\Psi\colon{\ensuremath{{\mathcal H}}}^m\to{\ensuremath{\mathbb{R}}}$ over $C_1\times\cdots\times C_m$ [@Bail12], which further reinforces the lack of hope for simple connections between ${\ensuremath{\operatorname{Fix}}}R$ and ${\ensuremath{\operatorname{Fix}}}T$. 
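For intuition, ${\ensuremath{\operatorname{Fix}}}T$ is easy to compute numerically in simple cases. A toy Python illustration (three intervals on the real line standing in for the sets $C_i$; not an example from the text): iterating the averaged operator $T$ converges to its unique fixed point, the minimizer of the average square-distance function $\Phi$.

```python
# Toy sets C_1 = [0, 1], C_2 = [2, 3], C_3 = [5, 6]; projections are clips.
proj = [lambda s: min(max(s, 0.0), 1.0),
        lambda s: min(max(s, 2.0), 3.0),
        lambda s: min(max(s, 5.0), 6.0)]

def T(s):
    """Average of the three projections."""
    return sum(P(s) for P in proj) / 3.0

s = 0.0
for _ in range(200):
    s = T(s)
# s is now (numerically) the fixed point of T, i.e. the minimizer of
# Phi(x) = (1/6) * sum_i dist(x, C_i)^2; here Fix T = {3} since
# T(3) = (1 + 3 + 5)/3 = 3.
```

Note that no such direct iteration recovers ${\ensuremath{\operatorname{Fix}}}R$ from ${\ensuremath{\operatorname{Fix}}}T$ in general, which is precisely the difficulty discussed above.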
It was shown in [@Gubi67] that, if one of the sets is bounded, for every $y_0\in{\ensuremath{{\mathcal H}}}$, the sequence $(y_{km+1},\ldots,y_{km+m})_{k\in{\ensuremath{\mathbb N}}}$ generated by the periodic best-response dynamics $$\label{e:pocs} (\forall k\in{\ensuremath{\mathbb N}})\quad \begin{array}{l} \left\lfloor \begin{array}{ll} y_{km+1}&=P_1y_{km}\\ y_{km+2}&=P_2y_{km+1}\\ &\;\vdots\\ y_{km+m}&=P_my_{km+m-1}, \end{array} \right.\\[2mm] \end{array}$$ converges weakly to a solution $(x_1,\ldots,x_m)$ to (\[e:2010-12-21cc\]) (see Fig. \[fig:1\]). \[fig:1\] Working in a similar direction, and motivated by the work of [@Cens83] on under-relaxed projection methods for solving inconsistent systems of affine inequalities, De Pierro considered in [@Depi01] an under-relaxed version of (\[e:pocs\]), namely $$\label{e:2012-04-09g} (\forall k\in{\ensuremath{\mathbb N}})\quad \begin{array}{l} \left\lfloor \begin{array}{ll} y^\varepsilon_{km+1}&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_1-{\ensuremath{\operatorname{Id}}})\big)y^\varepsilon_{km}\\ y^\varepsilon_{km+2}&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_2-{\ensuremath{\operatorname{Id}}})\big) y^\varepsilon_{km+1}\\ &\;\vdots\\ y^\varepsilon_{km+m}&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_m-{\ensuremath{\operatorname{Id}}})\big) y^\varepsilon_{km+m-1}.
\end{array} \right.\\[2mm] \end{array}$$ Under mild conditions the resulting sequence $(y^\varepsilon_{km+1},y^\varepsilon_{km+2},\ldots,y^\varepsilon_{km+m})_{k\in{\ensuremath{\mathbb N}}}$ converges weakly to a limit cycle that satisfies the coupled equations $$\label{e:pocs_limit} (\forall\varepsilon\in{\ensuremath{\left]0,1\right[}})\quad \begin{cases} x_1^\varepsilon&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_1-{\ensuremath{\operatorname{Id}}})\big)x_{m}^\varepsilon\\ x_2^\varepsilon&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_2-{\ensuremath{\operatorname{Id}}})\big)x_{1}^\varepsilon\\ &~\vdots\\ x_m^\varepsilon&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_m-{\ensuremath{\operatorname{Id}}})\big)x_{m-1}^\varepsilon. \end{cases}$$ In [@Depi01 Conjecture I], De Pierro conjectured that as $\varepsilon\to 0$ these limit cycles $({x^\varepsilon}_1,\ldots,{x^\varepsilon}_m)_{\varepsilon\in{\ensuremath{\left]0,1\right[}}}$ shrink towards a single point which is a minimizer of $\Phi$, i.e., a fixed point of $T$. In contrast with (\[e:2010-12-21cc\]), the solutions of which do not satisfy any optimality criteria, this conjecture suggests an asymptotic variational principle for the cycles obtained as limits of the under-relaxed version of (\[e:pocs\]). An important contribution was made in [@Baus05], where it was shown that De Pierro’s conjecture is true for families of closed affine subspaces which satisfy a certain regularity condition.
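The shrinking behavior described by the conjecture is easy to observe numerically. A toy Python illustration (three intervals on the real line, not sets from the paper): for each $\varepsilon$ we run the under-relaxed sweeps until the limit cycle settles, then let $\varepsilon$ decrease.

```python
# Toy sets C_1 = [0, 1], C_2 = [2, 3], C_3 = [5, 6]; projections are clips.
proj = [lambda s: min(max(s, 0.0), 1.0),
        lambda s: min(max(s, 2.0), 3.0),
        lambda s: min(max(s, 5.0), 6.0)]

def limit_cycle(eps, s=0.0, n_sweeps=20000):
    """Run the under-relaxed periodic process; return the last sweep
    (x_1^eps, x_2^eps, x_3^eps) once the cycle has settled."""
    for _ in range(n_sweeps):
        cycle = []
        for P in proj:
            s = s + eps * (P(s) - s)   # one under-relaxed projection step
            cycle.append(s)
    return cycle

for eps in (0.5, 0.1, 0.01, 0.001):
    c = limit_cycle(eps)
    print(eps, [round(x, 4) for x in c], "spread:", round(max(c) - min(c), 4))
```

As $\varepsilon$ decreases, the spread of the cycle shrinks and all cycle points coalesce at $3$, the unique fixed point of the average operator $T$ in this toy case, as the conjecture predicts.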
In this paper we investigate the asymptotic behavior of the under-relaxed cycles $$\label{e:2011-12-06} \begin{cases} x_1^\varepsilon&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_1-{\ensuremath{\operatorname{Id}}})\big)x_{m}^\varepsilon\\ x_2^\varepsilon&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_2-{\ensuremath{\operatorname{Id}}})\big)x_{1}^\varepsilon\\ &~\vdots\\ x_m^\varepsilon&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_m-{\ensuremath{\operatorname{Id}}})\big)x_{m-1}^\varepsilon \end{cases}$$ as $\varepsilon\to 0$ in the general setting of Assumption \[h:1\]. In Section \[s2\] we present a first general convergence result, which establishes conditions under which the limits as $\varepsilon\to 0$ of the $m$ curves $(x_i^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,1\right[}}}$ ($i\in I$) exist and all coincide with a fixed point of $T$. This result not only gives conditions under which De Pierro’s conjecture is true, but also extends its scope from projection operators to arbitrary nonexpansive operators. In Section \[s3\] we revisit the problem from a constructive angle. Given an initial point $y_0\in D$ and $\varepsilon\in{\ensuremath{\left]0,1\right[}}$, it is known [@Livre1 Theorem 5.22] that the cycles in (\[e:2011-12-06\]) can be constructed iteratively as the weak limit of the periodic process $$\label{e:2012-04-09a} (\forall k\in{\ensuremath{\mathbb N}})\quad \begin{array}{l} \left\lfloor \begin{array}{ll} y^\varepsilon_{km+1}&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_1-{\ensuremath{\operatorname{Id}}})\big)y^\varepsilon_{km}\\ y^\varepsilon_{km+2}&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_2-{\ensuremath{\operatorname{Id}}})\big) y^\varepsilon_{km+1}\\ &\;\vdots\\ y^\varepsilon_{km+m}&= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_m-{\ensuremath{\operatorname{Id}}})\big) y^\varepsilon_{km+m-1}.
\end{array} \right.\\[2mm] \end{array}$$ We analyze the connection between this iterative process and the trajectories of the evolution equation $$\label{e:2012-04-09b} \begin{cases} x'(t)+x(t)=Tx(t)\;\,\text{on}\;{\ensuremath{\left]0,+\infty\right[}}\\ x(0)=y_0, \end{cases}$$ and then establish extended versions of De Pierro’s conjecture under various assumptions. [**Notation.**]{} The scalar product of ${\ensuremath{{\mathcal H}}}$ is denoted by ${{\left\langle{{\cdot}\mid{\cdot}}\right\rangle}}$ and the associated norm by $\|\cdot\|$. The symbols ${\ensuremath{\:\rightharpoonup\:}}$ and $\to$ denote, respectively, weak and strong convergence, and ${\ensuremath{\operatorname{Id}}}$ denotes the identity operator. The closed ball of center $x\in{\ensuremath{{\mathcal H}}}$ and radius $\rho\in{\ensuremath{\left]0,+\infty\right[}}$ is denoted by $B(x;\rho)$. Given a nonempty closed convex subset $C\subset {\ensuremath{{\mathcal H}}}$, the distance function to $C$ and the projection operator onto $C$ are respectively denoted by $d_C$ and $P_C$.

Convergence of general families of under-relaxed cycles {#s2}
=======================================================

We investigate the asymptotic behavior of the cycles $(x_i^\varepsilon)_{i\in I}$ defined by (\[e:2011-12-06\]) when $\varepsilon\to 0$. Let us remark that such a cycle $(x_i^\varepsilon)_{i\in I}$ is in bijection with the fixed points of the composition $R^\varepsilon$ of (\[e:Reps\]). Indeed, $z^\varepsilon=x_m^\varepsilon$ is a fixed point of $R^\varepsilon$; conversely, each $z^\varepsilon\in{\ensuremath{\operatorname{Fix}}}R^\varepsilon$ generates a cycle by setting, for every $i\in I$, $x_i^\varepsilon=({\ensuremath{\operatorname{Id}}}+\varepsilon(T_i-{\ensuremath{\operatorname{Id}}}))x_{i-1}^\varepsilon$, where $x_{0}^\varepsilon=z^\varepsilon$. This motivates our second standing assumption.
\[h:2\] For every $\varepsilon\in{\ensuremath{\left]0,1\right[}}$, $R^\varepsilon$ is given by (\[e:Reps\]) and $$\label{e:H} ({\ensuremath{\exists\,}}\eta\in\left]0,1\right])({\ensuremath{\exists\,}}\beta\in{\ensuremath{\left]0,+\infty\right[}}) (\forall\varepsilon\in\left]0,\eta\right[) ({\ensuremath{\exists\,}}z^{\varepsilon}\in{\ensuremath{\operatorname{Fix}}}R^{\varepsilon})\quad \|z^{\varepsilon}\|{\ensuremath{\leqslant}}\beta.$$ For later reference, we record the fact that under this assumption the cycles in (\[e:2011-12-06\]) can be obtained as weak limits of the iterative process (\[e:2012-04-09a\]). \[p:2013-04-22\] Suppose that Assumptions \[h:1\] and \[h:2\] are satisfied. Let $y_0\in D$ and $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$. Then the sequence $(y^\varepsilon_{km+1},\ldots,y^\varepsilon_{km+m})_{k\in{\ensuremath{\mathbb N}}}$ produced by (\[e:2012-04-09a\]) converges weakly to an $m$-tuple $(x^\varepsilon_{1},\ldots,x^\varepsilon_{m})$ which satisfies (\[e:2011-12-06\]). This follows from [@Livre1 Theorem 5.22]. The following result provides sufficient conditions for Assumption \[h:2\] to hold. \[p:zenzile\] Suppose that Assumption \[h:1\] holds, together with one of the following.

1. \[p:zenzilei\] For some $j\in I$, $T_j$ has bounded range.

2. \[p:zenzileii\] $D$ is bounded.

Then Assumption \[h:2\] is satisfied. It is clear that \[p:zenzileii\] is a special case of \[p:zenzilei\]. Suppose that \[p:zenzilei\] holds. Fix $\varepsilon\in{\ensuremath{\left]0,1\right]}}$ and $y\in D$, and take $\rho\in\left[\max_{i\in I\smallsetminus\{j\}} \|T_iy-y\|,{\ensuremath{{+\infty}}}\right[$ such that $T_j(D)\subset B(y;\rho)$. Furthermore, let $x\in D$, set $x_0=x$, and define recursively $x_i=(1-\varepsilon)x_{i-1}+\varepsilon T_ix_{i-1}$, so that $x_m=R^\varepsilon x$.
Then $$\begin{aligned} \label{e:kj1} (\forall i\in I\smallsetminus\{j\})\quad \|x_i-y\| &= \|(1-\varepsilon)(x_{i-1}-y)+\varepsilon( T_ix_{i-1}-y)\|\nonumber\\ &{\ensuremath{\leqslant}}(1-\varepsilon)\|x_{i-1}-y\|+ \varepsilon\|T_ix_{i-1}-T_iy\|+\varepsilon\|T_iy-y\|\nonumber\\ &{\ensuremath{\leqslant}}\|x_{i-1}-y\|+\varepsilon\rho\end{aligned}$$ and $$\begin{aligned} \label{e:kj2} \|x_{j}-y\| &{\ensuremath{\leqslant}}(1-\varepsilon)\|x_{j-1}-y\|+ \varepsilon\|T_{j}x_{j-1}-y\|\nonumber\\ &{\ensuremath{\leqslant}}(1-\varepsilon)\|x_{j-1}-y\|+\varepsilon\rho.\end{aligned}$$ By applying (\[e:kj1\]) and (\[e:kj2\]) inductively to majorize $\|x_m-y\|$, we obtain $$\label{e:bluemask1982} \|R^\varepsilon x-y\|=\|x_m-y\|{\ensuremath{\leqslant}}(1-\varepsilon)\|x-y\| +\varepsilon m\rho.$$ This implies that $R^{\varepsilon}$ maps $D\cap B(y;m\rho)$ to itself. Hence, the Browder–Göhde–Kirk theorem (see [@Livre1 Theorem 4.19]) asserts that $R^\varepsilon$ has a fixed point in $B(y;m\rho)$. Moreover, if $x$ is a fixed point of $R^{\varepsilon}$, (\[e:bluemask1982\]) gives $\|x-y\|{\ensuremath{\leqslant}}m\rho$, which shows that (\[e:H\]) holds with $\eta=1$ and $\beta=\|y\|+m\rho$. To illustrate Assumption \[h:2\], it is instructive to consider the following examples.
\[ex:2013-04-06\] The following variant of the example discussed in [@Depi01 Section 3] shows that (\[e:H\]) is a nontrivial assumption: ${\ensuremath{{\mathcal H}}}$ is the Euclidean plane, $m=3$, $\alpha\in{\ensuremath{\mathbb{R}}}$, $\beta\in{\ensuremath{\mathbb{R}}}$, $\gamma\in{\ensuremath{\left]0,+\infty\right[}}$, $\varepsilon\in{\ensuremath{\left]0,1\right[}}$, and $(T_i)_{1{\ensuremath{\leqslant}}i{\ensuremath{\leqslant}}3}$ are, respectively, the projection operators onto the sets $$\label{e:2013-04-05} C_1={\ensuremath{\mathbb{R}}}\times\{\alpha\},\quad C_2={\ensuremath{\mathbb{R}}}\times\{\beta\},\quad\text{and}\quad C_3={\big\{{(\xi_1,\xi_2)\in{\ensuremath{\left]0,+\infty\right[}}^2}~\big |~{\xi_1\xi_2{\ensuremath{\geqslant}}\gamma}\big\}}.$$ Then we have $$\begin{cases} {\ensuremath{\operatorname{Fix}}}T={\big\{{(\xi_1,\xi_2)\in C_3}~\big |~{\xi_2=(\alpha+\beta)/2}\big\}}\\ {\ensuremath{\operatorname{Fix}}}R ={\big\{{(\xi_1,\xi_2)\in C_3}~\big |~{\xi_2=\beta}\big\}}\\ {\ensuremath{\operatorname{Fix}}}R^\varepsilon={\big\{{(\xi_1,\xi_2)\in C_3}~\big |~{\xi_2=\big((1-\varepsilon)\alpha+\beta\big)/(2-\varepsilon)}\big\}}. \end{cases}$$ Thus, depending on the values of $\alpha$ and $\beta$, we can have ${\ensuremath{\operatorname{Fix}}}T={\ensuremath{\operatorname{Fix}}}R\neq{\ensuremath{{\varnothing}}}$, ${\ensuremath{\operatorname{Fix}}}T={\ensuremath{\operatorname{Fix}}}R={\ensuremath{{\varnothing}}}$, ${\ensuremath{\operatorname{Fix}}}T\neq{\ensuremath{\operatorname{Fix}}}R={\ensuremath{{\varnothing}}}$, ${\ensuremath{\operatorname{Fix}}}R\neq{\ensuremath{\operatorname{Fix}}}T={\ensuremath{{\varnothing}}}$, or ${\ensuremath{{\varnothing}}}\neq{\ensuremath{\operatorname{Fix}}}R\neq{\ensuremath{\operatorname{Fix}}}T\neq{\ensuremath{{\varnothing}}}$. Now set $\eta=1+\beta/\alpha$.
Then, under the assumption that $\alpha+\beta<0<\beta$, we have $\eta\in{\ensuremath{\left]0,1\right[}}$ and ${\ensuremath{\operatorname{Fix}}}R^\varepsilon={\ensuremath{{\varnothing}}}$ if $\varepsilon{\ensuremath{\leqslant}}\eta$, while ${\ensuremath{\operatorname{Fix}}}R^\varepsilon\neq{\ensuremath{{\varnothing}}}$ if $\varepsilon>\eta$. On the other hand, under the assumption that $\beta<0<\alpha+\beta$, $\eta\in{\ensuremath{\left]0,1\right[}}$ and ${\ensuremath{\operatorname{Fix}}}R^\varepsilon\neq{\ensuremath{{\varnothing}}}$ if $\varepsilon<\eta$, while ${\ensuremath{\operatorname{Fix}}}R^\varepsilon={\ensuremath{{\varnothing}}}$ if $\varepsilon{\ensuremath{\geqslant}}\eta$. Moreover, setting $$\label{e:2013-04-17} (\forall\varepsilon\in{\ensuremath{\left]0,\eta\right[}})\quad \begin{cases} y^\varepsilon=\bigg({\displaystyle{\frac{2\gamma}{(1-\varepsilon)\alpha+\beta}}}+\frac{1}{\varepsilon}\,, {\displaystyle{\frac{(1-\varepsilon)\alpha+\beta}{2-\varepsilon}}}\bigg) \in{\ensuremath{\operatorname{Fix}}}R^\varepsilon\\[4mm] z^\varepsilon=\bigg({\displaystyle{\frac{(2-\varepsilon)\gamma}{(1-\varepsilon)\alpha+\beta}}}\,, {\displaystyle{\frac{(1-\varepsilon)\alpha+\beta}{2-\varepsilon}}}\bigg) \in{\ensuremath{\operatorname{Fix}}}R^\varepsilon, \end{cases}$$ we see that $(y^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ is an unbounded curve, while $(z^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ is bounded. In Example \[ex:2013-04-06\] the sets $({\ensuremath{\operatorname{Fix}}}T_i)_{1{\ensuremath{\leqslant}}i{\ensuremath{\leqslant}}3}$ are nonempty, and one may ask whether this plays a role in the nonemptiness of ${\ensuremath{\operatorname{Fix}}}R$, ${\ensuremath{\operatorname{Fix}}}T$, or ${\ensuremath{\operatorname{Fix}}}R^\varepsilon$.
To see that such is not the case, define $T_3$ as in Example \[ex:2013-04-06\], and consider the modified operators $T_1\colon(\xi_1,\xi_2)\mapsto(\xi_1+\mu,\alpha)$ and $T_2\colon (\xi_1,\xi_2)\mapsto(\xi_1-\mu,\beta)$, where $\mu>0$. Although now the nonexpansive operators $T_1$ and $T_2$ have no fixed points, the operators $T$, $R$, and $R^\varepsilon$ remain unchanged. \[ex:2013-04-08b\] By considering products of sets of the form one can build an example in which ${\ensuremath{\operatorname{Fix}}}T$ is nonempty but the sets $({\ensuremath{\operatorname{Fix}}}R^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,1\right[}}}$ are empty. More precisely, let ${\ensuremath{{\mathcal H}}}=\ell^2({\ensuremath{\mathbb N}})$, and let $(\alpha_n)_{n\in{\ensuremath{\mathbb N}}}$, $(\beta_n)_{n\in{\ensuremath{\mathbb N}}}$, and $(\gamma_n)_{n\in{\ensuremath{\mathbb N}}}$ be sequences in $\ell^2({\ensuremath{\mathbb N}})$ such that $({\gamma_n}/(\alpha_n+\beta_n))_{n\in{\ensuremath{\mathbb N}}}\in\ell^2({\ensuremath{\mathbb N}})$ and $(\forall n\in{\ensuremath{\mathbb N}})$ $\beta_n<0<\alpha_n+\beta_n$ and $\gamma_n>0$. Set $$\label{e:2013-04-08} \begin{cases} C_1={\big\{{(\xi_n)_{n\in{\ensuremath{\mathbb N}}}\in\ell^2({\ensuremath{\mathbb N}})}~\big |~{(\forall n\in{\ensuremath{\mathbb N}})\;\; \xi_{2n}=\alpha_{n}}\big\}}\\ C_2={\big\{{(\xi_n)_{n\in{\ensuremath{\mathbb N}}}\in\ell^2({\ensuremath{\mathbb N}})}~\big |~{(\forall n\in{\ensuremath{\mathbb N}})\;\; \xi_{2n}=\beta_{n}}\big\}}\\ C_3={\big\{{(\xi_n)_{n\in{\ensuremath{\mathbb N}}}\in\ell^2({\ensuremath{\mathbb N}})}~\big |~{(\forall n\in{\ensuremath{\mathbb N}})\;\;\xi_n>0\;\;\text{and}\;\; \xi_{2n-1}\xi_{2n}{\ensuremath{\geqslant}}\gamma_{n}}\big\}}. 
\end{cases}$$ Then ${\ensuremath{\operatorname{Fix}}}T\neq{\ensuremath{{\varnothing}}}$ but, for $\varepsilon\in{\ensuremath{\left]0,1\right[}}$, we have ${\ensuremath{\operatorname{Fix}}}R^{\varepsilon}\neq{\ensuremath{{\varnothing}}}$ if and only if $(\forall n\in{\ensuremath{\mathbb N}})$ $\varepsilon<1+\beta_{2n+1}/\alpha_{2n+1}$. In particular if we take, for every $n\in{\ensuremath{\mathbb N}}\smallsetminus\{0\}$, $\alpha_n=(n+1)/n^2$, $\beta_n=-1/n$, and $\gamma_n=1/n^3$, then ${\ensuremath{\operatorname{Fix}}}R^\varepsilon={\ensuremath{{\varnothing}}}$ for every $\varepsilon\in{\ensuremath{\left]0,1\right[}}$. [([@Baus05 Example 4.1])]{} \[ex:2013-04-08c\] Let $m=2$, and let $T_1$ and $T_2$ be the projection operators onto closed affine subspaces $C_1\subset{\ensuremath{{\mathcal H}}}$ and $C_2\subset{\ensuremath{{\mathcal H}}}$, respectively. If ${\ensuremath{{\mathcal H}}}$ is finite-dimensional, the sets ${\ensuremath{\operatorname{Fix}}}R$, $({\ensuremath{\operatorname{Fix}}}R^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,1\right[}}}$, and ${\ensuremath{\operatorname{Fix}}}T$ are nonempty; if ${\ensuremath{{\mathcal H}}}$ is infinite-dimensional, there exist $C_1$ and $C_2$ such that these sets are all empty. However, if the vector subspace $(C_1-C_1)+(C_2-C_2)$ is closed, then ${\ensuremath{\operatorname{Fix}}}T\neq{\ensuremath{{\varnothing}}}$ and $(\forall\varepsilon\in{\ensuremath{\left]0,1\right[}})$ ${\ensuremath{\operatorname{Fix}}}R^\varepsilon\neq{\ensuremath{{\varnothing}}}$. The next result establishes conditions for the convergence of the cycles of when the relaxation parameter $\varepsilon$ vanishes. \[t:1\] Suppose that Assumptions \[h:1\] and \[h:2\] are satisfied. Then ${\ensuremath{\operatorname{Fix}}}T\neq{\ensuremath{{\varnothing}}}$. 
Now let $(x_m^{\varepsilon})_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}= (z^{\varepsilon})_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ be the bounded curve provided by and denote by $(x_1^\varepsilon,\ldots,x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ the associated family of cycles arising from . Then $(x_1^\varepsilon,\ldots,x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ is bounded and each of its weak sequential cluster points is of the form $(x,\ldots,x)$, where $x\in{\ensuremath{\operatorname{Fix}}}T$. Moreover, $$\label{e:2013-03-24} (\forall i\in I)\quad \lim_{\varepsilon\to 0}\|x^\varepsilon_{i}-x^\varepsilon_{i-1}\|=0, \quad\text{where}\quad (\forall\varepsilon\in{\ensuremath{\left]0,\eta\right[}}) \quad x^\varepsilon_{0}=x^\varepsilon_{m}.$$ In addition, suppose that one of the following holds. 1. \[t:1i\] $(\forall x\in{\ensuremath{\operatorname{Fix}}}T)(\forall y\in{\ensuremath{\operatorname{Fix}}}T)$ ${{\left\langle{{x_m^\varepsilon}\mid{x-y}}\right\rangle}}$ converges as $\varepsilon\to 0$. 2. \[t:1ii\] $(\forall x\in{\ensuremath{\operatorname{Fix}}}T)$ $\|x_m^\varepsilon-x\|$ converges as $\varepsilon\to 0$. 3. \[t:1iv\] ${\ensuremath{\operatorname{Fix}}}T$ is a singleton. Then there exists $\overline{x}\in{\ensuremath{\operatorname{Fix}}}T$ such that, for every $i\in I$, $x_i^\varepsilon{\ensuremath{\:\rightharpoonup\:}}\overline{x}$ as $\varepsilon\to 0$. Finally, suppose that ${\ensuremath{\operatorname{Id}}}-T$ is demiregular on ${\ensuremath{\operatorname{Fix}}}T$, i.e., $$\label{e:condi2} \big(\forall (y_k)_{k\in{\ensuremath{\mathbb N}}}\in D^{{\ensuremath{\mathbb N}}}\big) \big(\forall y\in{\ensuremath{\operatorname{Fix}}}T\big)\quad \begin{cases} y_k{\ensuremath{\:\rightharpoonup\:}}y\\ y_k-Ty_k\to 0 \end{cases} \quad\Rightarrow\quad y_k\to y.$$ Then, for every $i\in I$, $x_i^\varepsilon\to\overline{x}$ as $\varepsilon\to 0$. Fix $z\in D$. 
By nonexpansiveness of the operators $(T_i)_{i\in I}$, we have $$\begin{aligned} \label{e:2012-04-08x} (\forall i\in I)\quad \|T_ix^\varepsilon_{i-1}-x^\varepsilon_{i-1}\| &{\ensuremath{\leqslant}}\|T_ix^\varepsilon_{i-1}-T_iz\|+\|T_iz-z\| +\|z-x^\varepsilon_{i-1}\|\nonumber\\ &{\ensuremath{\leqslant}}2\|x^\varepsilon_{i-1}-z\|+\|T_iz-z\|.\end{aligned}$$ In particular, for $i=1$, it follows from the boundedness of $(x_m^{\varepsilon})_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ that $(T_1x_m^{\varepsilon}-x_m^{\varepsilon})_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ is bounded. In turn, we deduce from that $(x_1^{\varepsilon})_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ is bounded. Continuing this process, we obtain the boundedness of $(x_1^\varepsilon,\ldots,x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ and the fact that $$\label{e:2013-04-08c} (\forall i\in I)\quad (T_ix_{i-1}^{\varepsilon}- x_{i-1}^{\varepsilon})_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}} \quad\text{is bounded.}$$ On the other hand, adding all the equalities in , we get $$\label{e:2011-12-08b} (\forall\varepsilon\in{\ensuremath{\left]0,\eta\right[}})\quad \sum_{i\in I}T_ix_{i-1}^\varepsilon= \sum_{i\in I}x_i^\varepsilon,$$ from which it follows that $$\begin{aligned} \label{e:2011-12-09} (\forall\varepsilon\in{\ensuremath{\left]0,\eta\right[}})\quad Tx_m^\varepsilon-x_m^\varepsilon &=\frac1m\sum_{i=1}^m T_ix_m^\varepsilon-x_m^\varepsilon \nonumber\\ &=\frac1m\sum_{i=1}^m T_ix_{i-1}^\varepsilon+ \frac1m\sum_{i=2}^{m}\big(T_ix_m^\varepsilon- T_ix_{i-1}^\varepsilon\big)-x_m^\varepsilon\nonumber\\ &=\frac1m\sum_{i=1}^m x_i^\varepsilon+ \frac1m\sum_{i=2}^{m}\big(T_ix_m^\varepsilon- T_ix_{i-1}^\varepsilon\big)-x_m^\varepsilon\nonumber\\ &=\frac1m\sum_{i=1}^{m-1}(x_i^\varepsilon-x_m^\varepsilon)+ \frac1m\sum_{i=1}^{m-1}\big(T_{i+1}x_m^\varepsilon- T_{i+1}x_i^\varepsilon\big).\end{aligned}$$ Hence, using the nonexpansiveness of the operators 
$(T_i)_{i\in I}$, we obtain $$\begin{aligned} \label{e:2012-01-03} (\forall\varepsilon\in{\ensuremath{\left]0,\eta\right[}})\quad \|Tx_m^\varepsilon-x_m^\varepsilon\| &{\ensuremath{\leqslant}}\frac2m\sum_{i=1}^{m-1}\big\|x_m^\varepsilon- x_i^\varepsilon\big\|.\end{aligned}$$ Consequently, since and together imply that $$\label{e:2011-12-07} (\forall i\in I)\quad \|x_i^\varepsilon-x_{i-1}^\varepsilon\| =\varepsilon\|T_ix_{i-1}^\varepsilon-x_{i-1}^\varepsilon\| \to 0\quad\text{as}\quad\varepsilon\to 0,$$ which proves , the triangle inequality gives $\|x_m^\varepsilon-x_i^\varepsilon\|\to 0$, which, combined with , yields $$\label{e:2013} Tx_m^\varepsilon-x_m^\varepsilon\to 0.$$ Hence, we can invoke the demiclosed principle [@Livre1 Corollary 4.18] to deduce that every weak sequential cluster point of the bounded curve $(x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ belongs to ${\ensuremath{\operatorname{Fix}}}T$, which is therefore nonempty. In view of , we therefore deduce that every weak sequential cluster point of $(x_1^\varepsilon,\ldots,x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ is of the form $(x,\ldots,x)$, where $x\in{\ensuremath{\operatorname{Fix}}}T$. It remains to show that under any of the conditions \[t:1i\], \[t:1ii\], or \[t:1iv\], the curve $(x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ is weakly convergent. Clearly \[t:1iv\] implies \[t:1i\], and the same holds for \[t:1ii\] since $$\label{e:2012-11-28} (\forall(x,y)\in{\ensuremath{{\mathcal H}}}^2)(\forall\varepsilon\in{\ensuremath{\left]0,\eta\right[}})\quad {{\left\langle{{x_m^\varepsilon}\mid{x-y}}\right\rangle}}=\frac12\big(\|x_m^\varepsilon-y\|^2 -\|x_m^\varepsilon-x\|^2+\|x\|^2-\|y\|^2\big).$$ Thus, it suffices to show that under \[t:1i\] the curve $(x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ has a unique weak sequential cluster point.
Let $x$ and $y$ be two weak sequential cluster points and choose sequences $(\varepsilon_n)_{n\in{\ensuremath{\mathbb N}}}$ and $(\varepsilon'_n)_{n\in{\ensuremath{\mathbb N}}}$ in ${\ensuremath{\left]0,\eta\right[}}$ converging to $0$ such that $x_m^{\varepsilon_n}{\ensuremath{\:\rightharpoonup\:}}x$ and $x_m^{\varepsilon'_n}{\ensuremath{\:\rightharpoonup\:}}y$ as $n\to{\ensuremath{{+\infty}}}$. As shown above, both $x$ and $y$ lie in ${\ensuremath{\operatorname{Fix}}}T$ and, therefore, it follows from \[t:1i\] that ${{\left\langle{{x}\mid{x-y}}\right\rangle}}=\lim_{n\to{\ensuremath{{+\infty}}}} {{\left\langle{{x_m^{\varepsilon_n}}\mid{x-y}}\right\rangle}}=\lim_{n\to{\ensuremath{{+\infty}}}} {{\left\langle{{x_m^{\varepsilon'_n}}\mid{x-y}}\right\rangle}}={{\left\langle{{y}\mid{x-y}}\right\rangle}}$. This yields $\|x-y\|^2=0$, proving our claim. Finally, let us establish the strong convergence assertion. To this end, let $(\varepsilon_n)_{n\in{\ensuremath{\mathbb N}}}$ be a sequence in ${\ensuremath{\left]0,\eta\right[}}$ converging to $0$. Then, as just proved, $x_m^{\varepsilon_n}{\ensuremath{\:\rightharpoonup\:}}\overline{x}\in{\ensuremath{\operatorname{Fix}}}T$ as $n\to{\ensuremath{{+\infty}}}$. On the other hand, yields $x_m^{\varepsilon_n}-Tx_m^{\varepsilon_n}\to 0$ as $n\to{\ensuremath{{+\infty}}}$. Hence, we derive from that $x_m^{\varepsilon_n}\to\overline{x}$ as $n\to{\ensuremath{{+\infty}}}$. This shows that $x_m^{\varepsilon}\to\overline{x}$ as $\varepsilon\to 0$. In view of , the proof is complete. \[r:2013-04-23\] The demiregularity condition is a specialization of a notion introduced in [@Sico10 Definition 2.3] for set-valued operators (see also [@Zeid90 Definition 27.1]). It follows from [@Sico10 Proposition 2.4] that is satisfied in each of the following cases. 1. \[p:2009-09-20i\] ${\ensuremath{\operatorname{Id}}}-T$ is uniformly monotone at every $y\in{\ensuremath{\operatorname{Fix}}}T$. 2.
\[r:2013-04-23ii\] ${\ensuremath{\operatorname{Id}}}-T$ is strongly monotone at every $y\in{\ensuremath{\operatorname{Fix}}}T$. 3. \[p:2009-09-20i+\] $T={\ensuremath{\operatorname{Id}}}-\nabla f$, where $f\in\Gamma_0({\ensuremath{{\mathcal H}}})$ is uniformly convex at every $y\in{\ensuremath{\operatorname{Fix}}}T$. 4. \[p:2009-09-20iv\] $D$ is boundedly compact: its intersection with every closed ball is compact. 5. \[p:2009-09-20vi\] $D={\ensuremath{{\mathcal H}}}$ and ${\ensuremath{\operatorname{Id}}}-T$ is invertible. 6. \[p:2009-09-20vii\] $T$ is demicompact [@Petr66]: for every bounded sequence $(y_n)_{n\in{\ensuremath{\mathbb N}}}$ in $D$ such that $(y_n-Ty_n)_{n\in{\ensuremath{\mathbb N}}}$ converges strongly, $(y_n)_{n\in{\ensuremath{\mathbb N}}}$ admits a strongly convergent subsequence. In the special case when $(T_i)_{i\in I}$ is a family of projection operators onto closed convex sets, Theorem \[t:1\] asserts that De Pierro’s conjecture is true under any of conditions \[t:1i\]–\[t:1iv\]. In particular, we obtain weak convergence of each point in the cycle to the point in ${\ensuremath{\operatorname{Fix}}}T$ if this set is a singleton, which can be considered as a generic situation in many practical instances when $\bigcap_{i\in I}{\ensuremath{\operatorname{Fix}}}T_i\neq{\ensuremath{{\varnothing}}}$. The following example illustrates a degenerate case in which weak convergence of the cycles can fail. \[ex:2012\] Suppose that in Theorem \[t:1\] we have $\bigcap_{i\in I}{\ensuremath{\operatorname{Fix}}}T_i\neq{\ensuremath{{\varnothing}}}$. 
Then it follows from the results of [@Livre1 Section 4.5] that $$\label{e:2012-01-30} (\forall\varepsilon\in{\ensuremath{\left]0,1\right[}})\quad {\ensuremath{\operatorname{Fix}}}R^\varepsilon=\bigcap_{i\in I}{\ensuremath{\operatorname{Fix}}}\big((1-\varepsilon){\ensuremath{\operatorname{Id}}}+\varepsilon T_i\big)= \bigcap_{i\in I}{\ensuremath{\operatorname{Fix}}}T_{i}={\ensuremath{\operatorname{Fix}}}T.$$ Now suppose $y$ and $z$ are two distinct points in ${\ensuremath{\operatorname{Fix}}}T$ and set $$\label{e:2012-01-30b} (\forall\varepsilon\in{\ensuremath{\left]0,1\right[}})\quad x_{m}^{\varepsilon}= \begin{cases} y,&\text{if}\;\;\lfloor 1/\varepsilon \rfloor\;\text{is even};\\ z,&\text{if}\;\;\lfloor 1/\varepsilon \rfloor\;\text{is odd}. \end{cases}$$ Then $(x_{m}^{\varepsilon})_{\varepsilon\in{\ensuremath{\left]0,1\right[}}}$ has two distinct weak cluster points and therefore it does not converge weakly, although Assumptions \[h:1\] and \[h:2\] are trivially satisfied. Convergence of limit cycles of under-relaxed iterations {#s3} ======================================================= As illustrated in Example \[ex:2012\], in general one cannot expect every solution cycle $(x_1^\varepsilon,\ldots,x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ in to converge, as there are cases that oscillate. Theorem \[t:1\] provided conditions that rule out multiple clustering and ensure the weak convergence of the cycles as $\varepsilon\to 0$. An alternative approach, inspired by [@Depi01], is to focus on solutions of that arise as limit cycles of the under-relaxed periodic iteration started from the same initial point $y_0\in D$ for every $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$. This arbitrary but fixed initial point is intended to act as an anchor that avoids multiple cluster points of the resulting family of limit cycles $(x_1^\varepsilon,\ldots,x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$.
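The oscillation in Example \[ex:2012\] is elementary but worth making concrete. In the sketch below (illustrative only; $y$ and $z$ are two hypothetical distinct fixed points), the selection oscillates between $y$ and $z$ along the sequences $\varepsilon_n=1/(2n)$ and $\varepsilon'_n=1/(2n+1)$:

```python
import math
from fractions import Fraction

# Two hypothetical distinct fixed points of T (any distinct values serve,
# since Fix R^eps = Fix T in the consistent case).
y, z = 0.0, 1.0

def x_m(eps):  # the selection defined in the example
    return y if math.floor(1 / eps) % 2 == 0 else z

# Exact rationals avoid floating-point issues in floor(1/eps).
evens = [x_m(Fraction(1, 2 * n)) for n in range(1, 6)]      # floor(1/eps) = 2n
odds = [x_m(Fraction(1, 2 * n + 1)) for n in range(1, 6)]   # floor(1/eps) = 2n+1
assert evens == [y] * 5 and odds == [z] * 5
print("two cluster points:", y, "and", z)
```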
As mentioned in the Introduction, for convex projection operators De Pierro conjectured that, as $\varepsilon\to 0$, the limit cycles shrink to a least-squares solution, namely $(x_1^\varepsilon,\ldots,x_m^\varepsilon){\ensuremath{\:\rightharpoonup\:}}(\overline{x},\ldots,\overline{x})$, where $\overline{x}$ is a minimizer of the function $\Phi$ of . In [@Baus05 Theorem 6.4] the conjecture was proved for closed affine subspaces satisfying a regularity condition, in which case the limit $\overline{x}$ exists in the strong topology and is in fact the point in $S={\ensuremath{\operatorname{Argmin}}}\Phi={\ensuremath{\operatorname{Fix}}}T$ closest to the initial point $y_0$, namely $\overline{x}=P_Sy_0$. However, for general convex sets the conjecture remains open. We revisit this question in the general framework delineated by Assumptions \[h:1\] and \[h:2\] with a different strategy than that adopted in Section \[s2\]. Our approach consists in showing that, for $\varepsilon$ small, the iterates closely follow the orbit of the semigroup generated by $A={\ensuremath{\operatorname{Id}}}-T$, i.e., the semigroup associated with the autonomous Cauchy problem $$\label{gradflow_T} \begin{cases} x'(t)=-Ax(t)\;\,\text{on}\;{\ensuremath{\left]0,+\infty\right[}}\\ x(0)=y_0. \end{cases}$$ This allows us to relate the limit cycles $(x_1^\varepsilon,\ldots,x_m^\varepsilon)_{\varepsilon\in{\ensuremath{\left]0,\eta\right[}}}$ to the limit of $x(t)$ when $t\to{\ensuremath{{+\infty}}}$. Note that, since $y_0\in D={\ensuremath{\operatorname{dom}}}A$ and $A$ is Lipschitz, has a unique solution $x\in{\EuScript C}^1({\ensuremath{\left]0,+\infty\right[}};D)$; see, e.g., [@Brez73 Theorem I.1.4]. In addition, if there exists $x_\infty\in{\ensuremath{{\mathcal H}}}$ such that $x(t){\ensuremath{\:\rightharpoonup\:}}x_\infty$ as $t\to{\ensuremath{{+\infty}}}$, then $x_\infty\in{\ensuremath{\operatorname{Fix}}}T$.
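As a toy illustration of the Cauchy problem (with data assumed here, not taken from the text: ${\ensuremath{{\mathcal H}}}={\ensuremath{\mathbb{R}}}$ and $T$ the mean of the projections onto the disjoint intervals $[-2,-1]$ and $[1,2]$, so that ${\ensuremath{\operatorname{Fix}}}T=\{0\}$), an explicit Euler discretization of $x'=-Ax$ drives the orbit to the fixed point:

```python
def clip(x, lo, hi):
    return min(max(x, lo), hi)

def T(x):  # mean of the projections onto C1 = [-2,-1] and C2 = [1,2]
    return 0.5 * (clip(x, -2.0, -1.0) + clip(x, 1.0, 2.0))

def flow(y0, h=1e-3, t_end=20.0):
    """Explicit Euler discretization of x'(t) = -(Id - T)x(t), x(0) = y0."""
    x = y0
    for _ in range(int(t_end / h)):
        x -= h * (x - T(x))
    return x

# Here Fix T = {0}: for x in ]-1,1[ the flow is simply x' = -x.
assert abs(flow(0.8)) < 1e-6 and abs(flow(1.5)) < 1e-6
print("the orbit converges to the unique point of Fix T")
```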
In the case of convex projections, reduces to the gradient flow $$\label{gradflow} \begin{cases} x'(t)=-\nabla\Phi(x(t))\;\,\text{on}\;{\ensuremath{\left]0,+\infty\right[}}\\ x(0)=y_0, \end{cases}$$ which converges weakly to some point $x_\infty\in S$ as $t\to{\ensuremath{{+\infty}}}$ [@Bruc75 Theorem 4], and one may therefore expect De Pierro’s conjecture to hold with $\overline{x}=x_\infty$ under suitable assumptions. Note, however, that for non-affine convex sets the limit $x_\infty$ might not coincide with the projection $P_Sy_0$. Under-relaxed cyclic iterations and semigroup flows {#s32_T} --------------------------------------------------- In order to study for a fixed $\varepsilon\in{\ensuremath{\left]0,1\right[}}$, it suffices to consider the iterates modulo $m$, that is, the sequence $(y^\varepsilon_{km})_{k\in{\ensuremath{\mathbb N}}}=((R^{\varepsilon})^ky_0)_{k\in{\ensuremath{\mathbb N}}}$, which converges weakly towards some point ${x^\varepsilon}_m\in{\ensuremath{\operatorname{Fix}}}R^{\varepsilon}$ (see Proposition \[p:2013-04-22\]). The key to establishing a formal connection between the iteration and the semigroup associated with is the following approximation lemma, which relates $R^{\varepsilon}$ to $A={\ensuremath{\operatorname{Id}}}-T$. \[lema31\_T\] Set $A={\ensuremath{\operatorname{Id}}}-T$, fix $z\in D$, and set $\rho=\max_{i\in I}\|T_iz-z\|/2$. Then $$\label{ReA} (\forall\varepsilon\in [0,1])(\forall x\in D) \quad\|R^{\varepsilon} x-x+\varepsilon m Ax\|{\ensuremath{\leqslant}}\varepsilon^2(3^m-2m-1)(\|x-z\|+\rho).$$ Since the case $\varepsilon=0$ is trivial, we take $\varepsilon\in{\ensuremath{\left]0,1\right]}}$.
Define operators on $D$ by $$(\forall j\in I)\quad R^\varepsilon_j= \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_j-{\ensuremath{\operatorname{Id}}})\big)\circ\cdots\circ \big({\ensuremath{\operatorname{Id}}}+\varepsilon(T_{1}-{\ensuremath{\operatorname{Id}}})\big)$$ and $$\label{e:rc12} (\forall j\in I)\quad E^\varepsilon_j= \frac{1}{\varepsilon^2}(R^\varepsilon_j-{\ensuremath{\operatorname{Id}}}) +\frac{1}{\varepsilon}\sum_{i=1}^j({\ensuremath{\operatorname{Id}}}-T_i).$$ Then $R^{\varepsilon}=R^\varepsilon_m$ and therefore $R^{\varepsilon}-{\ensuremath{\operatorname{Id}}}+\varepsilon m A=\varepsilon^2 E_m^\varepsilon$. Thus, the result boils down to showing that $(\forall x\in D)$ $\|E_m^\varepsilon x\|{\ensuremath{\leqslant}}(3^m-2m-1)(\|x-z\|+\rho)$. We derive from that $$(\forall j\in\{1,\ldots,m-1\})\quad E^\varepsilon_{j+1}=E^\varepsilon_j+\frac{1}{\varepsilon} \Big(({\ensuremath{\operatorname{Id}}}-T_{j+1})-({\ensuremath{\operatorname{Id}}}-T_{j+1})\circ R^\varepsilon_j\Big).$$ Now let $x\in D$. Since the operators $({\ensuremath{\operatorname{Id}}}-T_{j})_{1{\ensuremath{\leqslant}}j{\ensuremath{\leqslant}}m-1}$ are 2-Lipschitz, we have $$\begin{aligned} (\forall j\in\{1,\ldots,m-1\})\quad \|E^\varepsilon_{j+1}x\| &{\ensuremath{\leqslant}}\|E^\varepsilon_j x\|+\frac{2}{\varepsilon} \|x-R^\varepsilon_jx\|\nonumber\\ &=\|E^\varepsilon_j x\|+2\bigg\|\sum_{i=1}^j({\ensuremath{\operatorname{Id}}}-T_i)x-\varepsilon E^\varepsilon_j x\bigg\|\nonumber\\ &{\ensuremath{\leqslant}}(1+2\varepsilon)\|E^\varepsilon_j x\|+ 2\sum_{i=1}^j(\|x-z\|+\|z-T_iz\|+\|T_iz-T_ix\|)\nonumber\\ &{\ensuremath{\leqslant}}(1+2\varepsilon)\|E^\varepsilon_j x\|+4j(\|x-z\|+\rho). 
\label{recur_T}\end{aligned}$$ Using recursively, and observing that $E_1^\varepsilon x=0$, it follows that $$\label{cotita_T} \|E_m^\varepsilon x\|{\ensuremath{\leqslant}}4(\|x-z\|+\rho)\sum_{j=1}^{m-1} j(1+2\varepsilon)^{m-1-j}.$$ Upon applying the identity $\sum_{j=1}^{m-1}j\alpha^j= ((m-1)\alpha^{m+1}-m\alpha^m+\alpha)/(1-\alpha)^2$ to $\alpha=(1+2\varepsilon)^{-1}\in{\ensuremath{\left]0,1\right[}}$, we see that the sum in is equal to $((1+2\varepsilon)^m-1-2m\varepsilon)/(4\varepsilon^2)$, which increases with $\varepsilon$, attaining its maximum $(3^m-2m-1)/4$ at $\varepsilon=1$. This, combined with , yields the announced bound. For firmly nonexpansive operators, such as projection operators onto closed convex sets, the operators $({\ensuremath{\operatorname{Id}}}-T_i)_{i\in I}$ are nonexpansive and the previous proof can be modified to derive a tighter bound in , namely $$\label{firm} (\forall\varepsilon\in [0,1])(\forall x\in D) \quad\|R^{\varepsilon} x-x+\varepsilon m Ax\|{\ensuremath{\leqslant}}\varepsilon^2(2^m-m-1)(\|x-z\|+2\rho).$$ We proceed with the announced connection between and . This will be used later to establish De Pierro’s conjecture in several alternative settings. \[sd\_T\] Let $y_0\in D$, let $x$ be the solution of , and suppose that Assumptions \[h:1\] and \[h:2\] are satisfied.
For every $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$, set $(z_k^\varepsilon)_{k\in{\ensuremath{\mathbb N}}}=((R^{\varepsilon})^ky_0)_{k\in{\ensuremath{\mathbb N}}}$ and let $\psi^{\varepsilon}$ be the linear interpolation of $(z^{\varepsilon}_k)_{k\in{\ensuremath{\mathbb N}}}$ given by $$\label{e:poi} \big(\forall k\in{\ensuremath{\mathbb N}}\big) \big(\forall t\in[km\varepsilon,(k+1)m\varepsilon[\big)\quad \psi^{\varepsilon}(t)=z^{\varepsilon}_k+\frac{t-km\varepsilon} {m\varepsilon}(z^{\varepsilon}_{k+1}-z^{\varepsilon}_k).$$ Then $(\forall \bar t\in{\ensuremath{\left]0,+\infty\right[}})$ $\sup_{0{\ensuremath{\leqslant}}t{\ensuremath{\leqslant}}\bar t}\|\psi^{\varepsilon}(t)-x(t)\|\to 0$ when $\varepsilon\to 0$. Set $A={\ensuremath{\operatorname{Id}}}-T$, let $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$, and fix $z\in D$. The function $\psi^{\varepsilon}$ is differentiable except at the breakpoints ${\big\{{km\varepsilon}~\big |~{k\in{\ensuremath{\mathbb N}}}\big\}}$. Now set $(\forall k\in{\ensuremath{\mathbb N}})$ $J_k=\left]km\varepsilon,(k+1)m\varepsilon\right[$. According to Lemma \[lema31\_T\], we have $$(\forall k\in{\ensuremath{\mathbb N}})(\forall t\in J_k)\quad (\psi^\varepsilon)'(t)= \frac{1}{m\varepsilon}(z^{\varepsilon}_{k+1}-z^{\varepsilon}_k) =\frac{1}{m\varepsilon}(R^{\varepsilon}z^{\varepsilon}_k -z^{\varepsilon}_k) =-Az^{\varepsilon}_k+\varepsilon h^\varepsilon_k,$$ where $\|h^\varepsilon_k\|{\ensuremath{\leqslant}}(3^m-2m-1) (\|z^{\varepsilon}_k-z\|+\rho)/m$. 
Now set $$(\forall k\in{\ensuremath{\mathbb N}})(\forall t\in J_k)\quad h^\varepsilon(t)= A\psi^{\varepsilon}(t)-Az^{\varepsilon}_k+\varepsilon h^\varepsilon_k.$$ Then $$(\forall k\in{\ensuremath{\mathbb N}})(\forall t\in J_k)\quad (\psi^{\varepsilon})'(t) =-A\psi^{\varepsilon}(t)+h^\varepsilon(t).$$ Moreover, it follows from that there exists a constant $\alpha\in{\ensuremath{\left]0,+\infty\right[}}$ independent from $\varepsilon$ such that $(\forall k\in{\ensuremath{\mathbb N}})$ $\|h^\varepsilon_k\|{\ensuremath{\leqslant}}\alpha$. Hence, since $A$ is 2-Lipschitz, there exists $\gamma\in{\ensuremath{\left]0,+\infty\right[}}$ such that $$\begin{aligned} (\forall k\in{\ensuremath{\mathbb N}})(\forall t\in J_k)\quad \|h^\varepsilon(t)\| &{\ensuremath{\leqslant}}2\|\psi^{\varepsilon}(t)-z^{\varepsilon}_k\|+ \varepsilon\|h^\varepsilon_k\|\nonumber\\ &{\ensuremath{\leqslant}}2\|z^{\varepsilon}_{k+1}-z^{\varepsilon}_k\|+ \varepsilon\|h^\varepsilon_k\| \nonumber\\ &=2\varepsilon m \|-Az^{\varepsilon}_k+\varepsilon h^\varepsilon_k\|+\varepsilon\|h^\varepsilon_k\| \nonumber\\ &{\ensuremath{\leqslant}}\varepsilon \gamma.\end{aligned}$$ Next, consider the function $\theta\colon{\ensuremath{\left[0,+\infty\right[}}\to{\ensuremath{\left[0,+\infty\right[}}$ defined by $\theta(t)=\|x(t)-\psi^{\varepsilon}(t)\|^2$. 
Then it follows from the monotonicity of $A$ that $$\begin{aligned} (\forall t\in{\ensuremath{\left[0,+\infty\right[}}\smallsetminus{\big\{{km\varepsilon}~\big |~{k\in{\ensuremath{\mathbb N}}}\big\}})\quad \theta'(t)&=2{{\left\langle{{x(t)-\psi^{\varepsilon}(t)}\mid{x'(t)- (\psi^{\varepsilon})'(t)}}\right\rangle}}\nonumber\\ &=2{{\left\langle{{x(t)-\psi^{\varepsilon}(t)}\mid{A\psi^{\varepsilon}(t)-h^\varepsilon(t)-Ax(t)}}\right\rangle}}\nonumber\\ &{\ensuremath{\leqslant}}2{{\left\langle{{x(t)-\psi^{\varepsilon}(t)}\mid{-h^\varepsilon(t)}}\right\rangle}} \nonumber\\ &{\ensuremath{\leqslant}}2\|x(t)-\psi^{\varepsilon}(t)\|\,\|h^\varepsilon(t)\| \nonumber\\ &{\ensuremath{\leqslant}}2\varepsilon \gamma\sqrt{\theta(t)}.\end{aligned}$$ Integrating this inequality and noting that $\theta(0)=0$, we obtain $(\forall t\in{\ensuremath{\left[0,+\infty\right[}})$ $\|\psi^{\varepsilon}(t)-x(t)\|= \sqrt{\theta(t)}{\ensuremath{\leqslant}}\varepsilon\gamma t$. Now let $\bar t\in{\ensuremath{\left]0,+\infty\right[}}$. Then $\sup_{0{\ensuremath{\leqslant}}t{\ensuremath{\leqslant}}\bar t}\|\psi^{\varepsilon}(t)-x(t)\|{\ensuremath{\leqslant}}\varepsilon\gamma\bar t\to 0$ as $\varepsilon\to 0$. Strong convergence under stability of approximate cycles -------------------------------------------------------- In this section, we investigate the strong convergence of the cycles defined in when a stability condition holds. \[p33\_T\] Suppose that Assumptions \[h:1\] and \[h:2\] are satisfied, and that $$\label{e:stab} (\forall z\in{\ensuremath{\operatorname{Fix}}}T)\quad \lim_{\varepsilon\to 0}d_{{\ensuremath{\operatorname{Fix}}}R^{\varepsilon}}(z)=0.$$ In addition, let $y_0\in D$, and suppose that the orbit of $y_0$ in the Cauchy problem converges strongly, say $x(t)\to\overline{x}\in D$ as $t\to{\ensuremath{{+\infty}}}$. For every $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$, let $(x_i^\varepsilon)_{i\in I}$ be the cycle obtained as the weak limit of in Proposition \[p:2013-04-22\]. 
Then $\overline{x}\in{\ensuremath{\operatorname{Fix}}}T$ and $(\forall i\in I)$ $x_i^\varepsilon\to\overline{x}$ when $\varepsilon\to 0$. Since $x(t)\to\overline{x}$, implies that $x'(t)=-Ax(t)\to-A\overline{x}$; since $x(t)$ converges, we must also have $x'(t)\to 0$, and therefore $A\overline{x}=0$. Hence, $\overline{x}\in{\ensuremath{\operatorname{Fix}}}T$. Now fix $\delta\in{\ensuremath{\left]0,+\infty\right[}}$ and $\bar t\in{\ensuremath{\left]0,+\infty\right[}}$ such that $(\forall t\in[\bar t,{\ensuremath{{+\infty}}}[)$ $\|x(t)-\overline{x}\|{\ensuremath{\leqslant}}\delta$. For every $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$, set $(z_k^\varepsilon)_{k\in{\ensuremath{\mathbb N}}}=(y^{\varepsilon}_{km})_{k\in{\ensuremath{\mathbb N}}} =((R^{\varepsilon})^ky_0)_{k\in{\ensuremath{\mathbb N}}}$ and define the function $\psi^{\varepsilon}$ as in . By Proposition \[sd\_T\], there exists $\varepsilon_0\in{\ensuremath{\left]0,\eta\right[}}$ such that $$(\forall\varepsilon\in\left]0,\varepsilon_0\right[) (\forall t\in\left[0,\bar t+m\right])\quad \|\psi^{\varepsilon}(t)-x(t)\|{\ensuremath{\leqslant}}\delta.$$ Now let $\varepsilon\in\left]0,\varepsilon_0\right[$, choose $k_0\in{\ensuremath{\mathbb N}}$ such that $k_0 m\varepsilon\in[\bar t,\bar t+m]$, and set $\bar{x}^\varepsilon=P_{{\ensuremath{\operatorname{Fix}}}R^{\varepsilon}}\overline{x}$ (recall that, since $D$ is closed and convex and $R^{\varepsilon}$ is nonexpansive, ${\ensuremath{\operatorname{Fix}}}R^{\varepsilon}$ is closed and convex [@Livre1 Corollary 4.15]). Then $\|z^{\varepsilon}_{k_0}-x(k_0 m\varepsilon)\|= \|\psi^{\varepsilon}(k_0 m \varepsilon)-x(k_0m\varepsilon)\| {\ensuremath{\leqslant}}\delta$ and therefore $\|z^{\varepsilon}_{k_0}-\overline{x}\|{\ensuremath{\leqslant}}2\delta$. Since $R^{\varepsilon}$ is nonexpansive, we have $(\forall k\in{\ensuremath{\mathbb N}})$ $\|z^{\varepsilon}_{k+1}-\bar{x}^\varepsilon\| {\ensuremath{\leqslant}}\|z^{\varepsilon}_k-\bar{x}^\varepsilon\|$.
Hence, for every integer $k{\ensuremath{\geqslant}}k_0$, we have $$\|z^{\varepsilon}_k-\bar{x}^\varepsilon\|{\ensuremath{\leqslant}}\|z^{\varepsilon}_{k_0}-\bar{x}^\varepsilon\|{\ensuremath{\leqslant}}\|z^{\varepsilon}_{k_0}-\overline{x}\|+ \|\overline{x}-\bar{x}^\varepsilon\|{\ensuremath{\leqslant}}2\delta +d_{{\ensuremath{\operatorname{Fix}}}R^\varepsilon}(\overline{x})$$ and therefore $$\|y^{\varepsilon}_{km}-\overline{x}\|= \|z^{\varepsilon}_k-\overline{x}\|{\ensuremath{\leqslant}}2\delta +2d_{{\ensuremath{\operatorname{Fix}}}R^\varepsilon}(\overline{x}).$$ Since Proposition \[p:2013-04-22\] asserts that $y^{\varepsilon}_{km}{\ensuremath{\:\rightharpoonup\:}}{x^\varepsilon}_m$, we get $$\label{e:2013-04-22h} \|{x^\varepsilon}_m-\overline{x}\|{\ensuremath{\leqslant}}\varliminf_{k\to{\ensuremath{{+\infty}}}} \|y^{\varepsilon}_{km}-\overline{x}\| {\ensuremath{\leqslant}}2\delta+2d_{{\ensuremath{\operatorname{Fix}}}R^\varepsilon}(\overline{x}),$$ and yields $$\varlimsup_{\varepsilon\to 0} \|x^\varepsilon_m-\overline{x}\|{\ensuremath{\leqslant}}2\delta.$$ Letting $\delta\to 0$, we deduce that ${x^\varepsilon}_m\to\overline{x}$ as $\varepsilon\to 0$. In turn, it follows from that $(\forall i\in I)$ ${x^\varepsilon}_i\to\overline{x}$ as $\varepsilon\to 0$. The following corollary settles entirely De Pierro’s conjecture in the case of $m=2$ closed convex sets in Euclidean spaces. 
In Assumption \[h:1\], suppose that ${\ensuremath{{\mathcal H}}}$ is finite-dimensional, $D={\ensuremath{{\mathcal H}}}$, and $m=2$, and let $T_1=P_1$ and $T_2=P_2$ be the projection operators onto nonempty closed convex sets $C_1$ and $C_2$, respectively, such that $$\label{e:S1} {\ensuremath{\operatorname{Fix}}}T=S=\operatorname{Argmin}\Phi\neq{\ensuremath{{\varnothing}}}, \quad\text{where}\quad\Phi=\frac{1}{4}\big(d_{C_1}^2+d_{C_2}^2\big).$$ Let $y_0\in{\ensuremath{{\mathcal H}}}$ and let $\overline{x}\in S$ be the limit of the solution $x$ of the Cauchy problem $$\begin{cases} x'(t)+x(t)=\frac{1}{2}\big(P_1x(t)+P_2x(t)\big) \;\,\text{on}\;{\ensuremath{\left]0,+\infty\right[}}\\ x(0)=y_0. \end{cases}$$ For every $\varepsilon\in{\ensuremath{\left]0,1\right[}}$, let $x_1^\varepsilon=\lim_{k\to{\ensuremath{{+\infty}}}}y^\varepsilon_{2k+1}$ and $x_2^\varepsilon=\lim_{k\to{\ensuremath{{+\infty}}}}y^\varepsilon_{2k+2}$, where $$\label{e:2013-04-23a} (\forall k\in{\ensuremath{\mathbb N}})\quad \begin{array}{l} \left\lfloor \begin{array}{ll} y^\varepsilon_{2k+1}&=\big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_{1}-{\ensuremath{\operatorname{Id}}})\big) y^\varepsilon_{2k}\\[2mm] y^\varepsilon_{2k+2}&=\big({\ensuremath{\operatorname{Id}}}+\varepsilon(P_{2}-{\ensuremath{\operatorname{Id}}})\big) y^\varepsilon_{2k+1}. \end{array} \right. \end{array}$$ Then $x_1^\varepsilon\to\overline{x}$ and $x_2^\varepsilon\to\overline{x}$ when $\varepsilon\to 0$. Fix $z\in S$, and set $a=P_1z$ and $b=P_2z$. Then $z=(a+b)/2$ and $(\forall\varepsilon\in{\ensuremath{\left]0,1\right[}})$ $z^{\varepsilon}=((1-\varepsilon) a+b)/(2-\varepsilon)\in {\ensuremath{\operatorname{Fix}}}R^{\varepsilon}$. Thus $$\label{m=2} d_{{\ensuremath{\operatorname{Fix}}}R^{\varepsilon}}(z){\ensuremath{\leqslant}}\|z-z^{\varepsilon}\|= \frac{\varepsilon\|b-a\|}{2(2-\varepsilon)}\to 0 \quad\text{as}\quad\varepsilon\to 0,$$ and the conclusion follows from Theorem \[p33\_T\].
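The corollary can be observed numerically in the simplest one-dimensional instance (an illustration with sets assumed here, not from the text): $C_1=[-2,-1]$ and $C_2=[1,2]$, so that $S=\{0\}$, $a=P_1(0)=-1$, $b=P_2(0)=1$, and the cycle point is $z^\varepsilon=\varepsilon/(2-\varepsilon)$:

```python
def clip(x, lo, hi):
    return min(max(x, lo), hi)

P1 = lambda x: clip(x, -2.0, -1.0)  # projection onto C1 = [-2,-1]
P2 = lambda x: clip(x, 1.0, 2.0)    # projection onto C2 = [1,2]

def cycle(eps, y0=0.8, n=5000):
    """Iterate the under-relaxed alternating steps and return x_2^eps."""
    y = y0
    for _ in range(n):
        y = y + eps * (P1(y) - y)
        y = y + eps * (P2(y) - y)
    return y

# With a = P1(0) = -1 and b = P2(0) = 1, the proof gives
# z^eps = ((1-eps)a + b)/(2-eps) = eps/(2-eps), which tends to 0 in S.
for eps in (0.4, 0.2, 0.1, 0.05):
    assert abs(cycle(eps) - eps / (2 - eps)) < 1e-8
print("x_2^eps = eps/(2-eps) -> 0 as eps -> 0")
```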
![An example in which the condition fails.[]{data-label="fig:2"}](contra2.eps){width="7cm"} We conclude this section by showing that, in contrast with , the condition can fail in the case of projection operators in the presence of $m=3$ sets. \[contraejemplo\] Suppose that ${\ensuremath{{\mathcal H}}}={\ensuremath{\mathbb{R}}}^3$ and $m=3$, and let $T_1$, $T_2$, and $T_3$ be, respectively, the projection operators onto the bounded closed convex sets (see Fig. \[fig:2\]) $$\begin{cases} C_1=[-1,1]\times\{-1\}\times\{1\}\\ C_2=[-1,1]\times\{1\}\times\{1\}\\ C_3={\big\{{(\xi_1,\xi_2,\xi_3)\in{\ensuremath{\mathbb{R}}}^3}~\big |~{\xi_1\in[-1,1],\,\xi_3\in[0,1],\, (1-\xi_3)(\xi_1^2-1)+\xi_2^2{\ensuremath{\leqslant}}0}\big\}}. \end{cases}$$ Then the set of least-squares solutions is $S={\ensuremath{\operatorname{Fix}}}T=[-1,1]\times\{0\}\times\{1\}\subset C_3$. Moreover, $$(\forall\varepsilon\in{\ensuremath{\left]0,1\right[}})\quad {\ensuremath{\operatorname{Fix}}}R^\varepsilon=\{z^\varepsilon\}= \left\{\left(0,\frac{w_\varepsilon+\varepsilon(1-\varepsilon)} {3(1-\varepsilon)+\varepsilon^2},1-\frac{w_\varepsilon^2} {3(1-\varepsilon)+\varepsilon^2}\right)\right\},$$ where $w_\varepsilon$ is the unique real solution of $2w^3+w=\varepsilon/(2-\varepsilon)$. Clearly $z^\varepsilon\to(0,0,1)\in S$ as $\varepsilon\to 0$, but $(\forall z\in S\smallsetminus\{(0,0,1)\})$ $d_{{\ensuremath{\operatorname{Fix}}}R^\varepsilon}(z)\not\to 0$ as $\varepsilon\to 0$. 
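Taking the displayed description of ${\ensuremath{\operatorname{Fix}}}R^\varepsilon$ at face value, the following numerical sketch (illustrative only) computes $w_\varepsilon$ by bisection (the map $w\mapsto 2w^3+w$ is strictly increasing) and confirms both claims: $z^\varepsilon\to(0,0,1)$, while the distance from any other point of $S$, say $(1/2,0,1)$, to ${\ensuremath{\operatorname{Fix}}}R^\varepsilon$ stays bounded below by $1/2$:

```python
def w_of(eps, tol=1e-14):
    """Solve 2w^3 + w = eps/(2-eps) by bisection (the left side is increasing)."""
    c, lo, hi = eps / (2.0 - eps), 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 2 * mid**3 + mid < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def z_of(eps):  # the unique point of Fix R^eps, per the displayed formula
    w, d0 = w_of(eps), 3 * (1 - eps) + eps**2
    return (0.0, (w + eps * (1 - eps)) / d0, 1.0 - w**2 / d0)

dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

z_bar = (0.5, 0.0, 1.0)  # a point of S other than (0,0,1)
for eps in (1e-2, 1e-4, 1e-6):
    z = z_of(eps)
    assert abs(z[1]) + abs(z[2] - 1.0) < 10 * eps  # z^eps -> (0,0,1)
    assert dist(z_bar, z) >= 0.5                   # d(z_bar, Fix R^eps) >= 1/2
print("z^eps -> (0,0,1); the stability condition fails at z_bar")
```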
Strong convergence under local strong monotonicity -------------------------------------------------- Another situation covered by Theorem \[p33\_T\] is when the operator $T$ has a unique fixed point $\overline{x}$ and $A={\ensuremath{\operatorname{Id}}}-T$ is locally strongly monotone around $\overline{x}$, namely $$\label{M_T} ({\ensuremath{\exists\,}}\alpha\in{\ensuremath{\left]0,+\infty\right[}})({\ensuremath{\exists\,}}\delta\in{\ensuremath{\left]0,+\infty\right[}}) (\forall x\in D\cap B(\overline{x};\delta))\quad {{\left\langle{{x-\overline{x}}\mid{x-Tx}}\right\rangle}}{\ensuremath{\geqslant}}\alpha\|x-\overline{x}\|^2.$$ In the case of convex projection operators, $A=\nabla\Phi$ and, if $\Phi$ is twice differentiable at $\overline{x}$, then is equivalent to the positive-definiteness of $\nabla^2\Phi(\overline{x})$. Another case in which is satisfied, with $\alpha=1-\rho$, is when $T$ is a local strict contraction with constant $\rho\in{\ensuremath{\left]0,1\right[}}$ at the fixed point $\overline{x}$, namely, for all $x$ in some ball $B(\overline{x};\delta)$, $\|Tx-T\overline{x}\|{\ensuremath{\leqslant}}\rho\|x-\overline{x}\|$. If $T$ is differentiable at $\overline{x}$, this amounts to $\|T'(\overline{x})\|<1$. \[p38\_T\] Suppose that Assumptions \[h:1\] and \[h:2\] are satisfied, together with , and let ${\ensuremath{\operatorname{Fix}}}T=\{\overline{x}\}$. In addition, let $y_0\in D$ and, for every $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$, let $(x_i^\varepsilon)_{i\in I}$ be the cycle obtained as the weak limit of in Proposition \[p:2013-04-22\]. Then $(\forall i\in I)$ $x_i^\varepsilon\to\overline{x}$ as $\varepsilon\to 0$. It suffices to check the assumptions of Theorem \[p33\_T\]. Set $A={\ensuremath{\operatorname{Id}}}-T$ and let $x$ be the solution to . 
- $d_{{\ensuremath{\operatorname{Fix}}}R^{\varepsilon}}(\overline{x})\to 0$ as $\varepsilon\to 0$:\ Let $\varepsilon\in\left]0,\min\{\eta,\alpha/(2m)\}\right[$, set $Q^\varepsilon={\ensuremath{\operatorname{Id}}}-m\varepsilon A$ and $\gamma(\varepsilon)=1-m\varepsilon(\alpha-2m\varepsilon)$, and let $y\in D\cap B(\overline{x};\delta)$. Since $A\overline{x}=0$ and $A$ is 2-Lipschitz, we have $$\begin{aligned} \label{e:KJ2013a} \|Q^\varepsilon y-\overline{x}\|^2 &=\|y-\overline{x}\|^2-2m\varepsilon{{\left\langle{{y-\overline{x}}\mid{Ay-A\overline{x}}}\right\rangle}}+(m\varepsilon)^2\|Ay-A\overline{x}\|^2\nonumber\\ &{\ensuremath{\leqslant}}(1-2m\varepsilon(\alpha-2m\varepsilon))\|y-\overline{x}\|^2 \nonumber\\ &{\ensuremath{\leqslant}}\gamma(\varepsilon)^2\|y-\overline{x}\|^2.\end{aligned}$$ On the other hand, setting $\rho=\max_{i\in I}\|T_i\overline{x}-\overline{x}\|/2$ and $\beta=3^m-2m-1$, Lemma \[lema31\_T\] gives $$\label{e:KJ2013b} \|R^{\varepsilon} y-Q^\varepsilon y\|{\ensuremath{\leqslant}}\varepsilon^2 \beta(\|y-\overline{x}\|+\rho)$$ which, combined with , yields $$\label{e:KJ2013c} \|R^{\varepsilon}y-\overline{x}\|{\ensuremath{\leqslant}}\|R^{\varepsilon}y-Q^\varepsilon y\|+\|Q^\varepsilon y-\overline{x}\| {\ensuremath{\leqslant}}\varepsilon^2\beta(\|y-\overline{x}\|+\rho) +\gamma(\varepsilon)\|y-\overline{x}\|.$$ From this estimate it follows that given $\delta'\in\left]0,\delta\right]$, for every $\varepsilon{\ensuremath{\leqslant}}m\alpha\delta'/(\beta(\delta'+\rho)+2m^2\delta')$, we have $R^{\varepsilon}(D\cap B(\overline{x};\delta'))\subset D\cap B(\overline{x};\delta')$. Therefore $R^\varepsilon$ has a fixed point in $B(\overline{x};\delta')$ and hence $d_{{\ensuremath{\operatorname{Fix}}}R^{\varepsilon}}(\overline{x}){\ensuremath{\leqslant}}\delta'$. Since $\delta'$ can be arbitrarily small, this proves that $d_{{\ensuremath{\operatorname{Fix}}}R^{\varepsilon}}(\overline{x})\to 0$ as $\varepsilon\to 0$. 
- $x(t)\to\overline{x}$ as $t\to{\ensuremath{{+\infty}}}$:\ Let $\theta\colon{\ensuremath{\left[0,+\infty\right[}}\to{\ensuremath{\left[0,+\infty\right[}}$ be defined by $\theta(t)=\|x(t)-\overline{x}\|^2/2$, and let us show that $\lim_{t\to{\ensuremath{{+\infty}}}}\theta(t)=0$. We note that this holds whenever the orbit enters the ball $B(\overline{x};\delta)$ at some instant $t_0$. Indeed, the monotonicity of $A$ implies that $\theta$ is decreasing so that, for every $t\in[t_0,{\ensuremath{{+\infty}}}[$, $x(t)\in D\cap B(\overline{x};\delta)$ and hence and give $$\theta'(t)={{\left\langle{{x(t)-\overline{x}}\mid{x'(t)}}\right\rangle}} ={{\left\langle{{\overline{x}-x(t)}\mid{x(t)-Tx(t)}}\right\rangle}}{\ensuremath{\leqslant}}-\alpha\|x(t)- \overline{x}\|^2=-2\alpha\theta(t).$$ Consequently, $\theta(t){\ensuremath{\leqslant}}\theta(t_0)\exp(-2\alpha(t-t_0))\to 0$ as $t\to{\ensuremath{{+\infty}}}$. It remains to prove that $x(t)$ enters the ball $B(\overline{x};\delta)$. If this were not the case, we would have $\mu=\lim_{t\to{\ensuremath{{+\infty}}}}\sqrt{\theta(t)}{\ensuremath{\geqslant}}\delta$. Choose $t_0$ large enough so that $\sqrt{\theta(t_0)}{\ensuremath{\leqslant}}\mu+{\delta}/{2}$ and let $\tilde x$ be the solution to the Cauchy problem $$\begin{cases} \tilde{x}'(t)=-A\tilde{x}(t)\;\,\text{on}\;[t_0,{\ensuremath{{+\infty}}}[\\ \tilde{x}(t_0)=\tilde x_0, \end{cases}$$ where $\tilde x_0=\overline{x}+\delta(x(t_0)-\overline{x})/ \|x(t_0)-\overline{x}\|\in D\cap B(\overline{x};\delta)$. 
By monotonicity of $A$, $t\mapsto\|x(t)-\tilde x(t)\|$ is decreasing and hence $$\begin{aligned} (\forall t\in[t_0,{\ensuremath{{+\infty}}}[)\quad \|x(t)-\overline{x}\| &{\ensuremath{\leqslant}}\|x(t)-\tilde x(t)\|+\|\tilde x(t)-\overline{x}\| \nonumber\\ &{\ensuremath{\leqslant}}\|x(t_0)-\tilde x(t_0)\|+\|\tilde x(t)-\overline{x}\| \nonumber\\ &{\ensuremath{\leqslant}}(\mu-\delta/2)+\|\tilde x(t)-\overline{x}\|.\end{aligned}$$ Since by the previous argument $\|\tilde x(t)-\overline{x}\|\to 0$, we reach a contradiction with the fact that $(\forall t\in{\ensuremath{\left[0,+\infty\right[}})$ $\|x(t)-\overline{x}\|{\ensuremath{\geqslant}}\mu$. Altogether, the conclusion follows from Theorem \[p33\_T\]. If ${\ensuremath{\operatorname{Id}}}-T$ were globally (rather than just locally as in ) strongly monotone at every point in ${\ensuremath{\operatorname{Fix}}}T$, we could derive Theorem \[p38\_T\] directly from Theorem \[t:1\] and Remark \[r:2013-04-23\]\[r:2013-04-23ii\]. Theorem \[p38\_T\] can also be applied when the local strong monotonicity or the local contraction properties hold up to an affine subspace (see below). This is relevant in the case studied in [@Baus05] when $(T_i)_{i\in I}$ is a family of projection operators onto closed affine subspaces $(x_i+E_i)_{i\in I}$, where $(E_i)_{i\in I}$ is a family of closed vector subspaces of ${\ensuremath{{\mathcal H}}}$, and more generally for unbounded closed convex cylinders of the form $(B_i+E_i)_{i\in I}$, where $B_i$ is a nonempty bounded closed convex subset of $E_i^\perp$. \[corf\] Suppose that Assumptions \[h:1\] and \[h:2\] are satisfied, that $D={\ensuremath{{\mathcal H}}}$, and that $(T_i)_{i\in I}$ is a family of projection operators onto nonempty closed convex subsets $(C_i)_{i\in I}$ of ${\ensuremath{{\mathcal H}}}$. 
In addition, suppose that the set $S$ of minimizers of $\Phi$ in is a closed affine subspace, say $S=z+E$, where $z\in{\ensuremath{{\mathcal H}}}$ and $E$ is a closed vector subspace of ${\ensuremath{{\mathcal H}}}$. Let $y_0\in D$, set $\overline{x}=P_Sy_0$, and, for every $\varepsilon\in{\ensuremath{\left]0,\eta\right[}}$, let $(x_i^\varepsilon)_{i\in I}$ be the cycle obtained as the weak limit of in Proposition \[p:2013-04-22\]. Then the following hold. 1. \[corfi\] $(\forall i\in I)$ $x_i^\varepsilon{\ensuremath{\:\rightharpoonup\:}}\overline{x}$ as $\varepsilon\to 0$. 2. \[corfii\] Suppose that $$\label{paralelo_T} (\forall y\in S)({\ensuremath{\exists\,}}\rho\in [0,1[)({\ensuremath{\exists\,}}\delta\in{\ensuremath{\left]0,+\infty\right[}}) (\forall x\in B(0;\delta)\cap E^\perp)\;\; \|T(x+y)-Ty\|{\ensuremath{\leqslant}}\rho\|x\|.$$ Then $(\forall i\in I)$ $x_i^\varepsilon\to\overline{x}$ as $\varepsilon\to 0$. Let $i\in I$. Since $S=z+E$, we have $C_i+E\subset C_i$ and the iterates $(y_{k}^\varepsilon)_{k\in{\ensuremath{\mathbb N}}}$ in move parallel to $E^\perp$ and remain in $y_0+E^\perp$. Hence, since $\{\overline{x}\}=S\cap (y_0+E^\perp)$, \[corfi\] follows by applying Theorem \[t:1\] in the space $y_0+E^\perp$, while \[corfii\] follows by applying Theorem \[p38\_T\] in this same space. We conclude the paper by revisiting De Pierro’s conjecture in the affine setting investigated in [@Baus05]. More precisely, we shall derive an alternative proof of the main result of [@Baus05] from Corollary \[corf\]. For this purpose, we need the following notion of regularity. 
A finite family $(E_i)_{i\in I}$ of closed vector subspaces of ${\ensuremath{{\mathcal H}}}$ with intersection $E$ is regular if $$\big(\forall (y_k)_{k\in{\ensuremath{\mathbb N}}}\in{\ensuremath{{\mathcal H}}}^{{\ensuremath{\mathbb N}}}\big)\quad \max_{i\in I}d_{E_i}(y_k)\to 0\quad\Rightarrow\quad d_E(y_k)\to 0.$$ Let $(E_i)_{i\in I}$ be a regular family of closed vector subspaces of ${\ensuremath{{\mathcal H}}}$ with intersection $E$ and, for every $i\in I$, let $\overline{x}_i\in{\ensuremath{{\mathcal H}}}$ and let $P_i$ be the projection operator onto the affine subspace $C_i=\overline{x}_i+E_i$. Let $y_0\in{\ensuremath{{\mathcal H}}}$ and set $S=\operatorname{Argmin}\sum_{i\in I}d_{C_i}^2$. Then there exists $z\in{\ensuremath{{\mathcal H}}}$ such that $S=z+E$. Moreover, for every $\varepsilon\in{\ensuremath{\left]0,1\right]}}$, the cycle $(x_i^\varepsilon)_{i\in I}$ obtained as the weak limit of in Proposition \[p:2013-04-22\] exists, and $(\forall i\in I)$ $x_i^\varepsilon\to P_Sy_0$ as $\varepsilon\to 0$. We have $(\forall i\in I)$ $P_i\colon x\mapsto\overline{x}_i+P_{E_i}(x-\overline{x}_i)$. Hence $Tx=a+Lx$, where $a=(1/m)\sum_{i\in I}(\overline{x}_i- P_{E_i}\overline{x}_i)$ and $L=(1/m)\sum_{i\in I}P_{E_i}$. According to [@Baus05 Theorem 5.4], the subspaces $(E_i)_{i\in I}$ are regular if and only if $\rho=\|L\circ P_{E^\perp}\|<1$, which implies that $T$ is a strict contraction on $y_0+E^\perp$. From this we get simultaneously that $T$ has a fixed point $z$, that the least-squares solution set is of the form $S=z+E$, and that holds. Hence, the result will follow from Corollary \[corf\] provided that $(x_i^\varepsilon)_{i\in I}$ exists for every $\varepsilon\in{\ensuremath{\left]0,1\right]}}$. This was proved in [@Baus05 Theorem 5.6] by noting that $R^\varepsilon|_{y_0+E^\perp}$ is a strict contraction. 
Indeed, $R^\varepsilon$ is a composition of affine maps and an inductive calculation reveals that it can be written as $R^\varepsilon x=a^\varepsilon+L^\varepsilon x$, where $a^\varepsilon\in{\ensuremath{{\mathcal H}}}$ and $L^\varepsilon$ a linear operator which is a convex combination of nonexpansive linear maps, one of which is the strict contraction $L\circ P_{E^\perp}$. Corollary \[corf\]\[corfi\] seems to be new even for affine subspaces $(C_i)_{i\in I}$. Also new in Corollary \[corf\]\[corfii\] is the fact that strong convergence holds for more general convex sets than just translates of regular subspaces. [**Acknowledgement.**]{} The research of P. L. Combettes was supported in part by the European Union under the 7th Framework Programme “FP7-PEOPLE-2010-ITN”, grant agreement number 264735-SADCO. The research of R. Cominetti was supported by Fondecyt 1130564 and Núcleo Milenio Información y Coordinación en Redes ICM/FIC P10-024F. [99]{} H. Attouch, L. M. Briceño-Arias, and P. L. Combettes, A parallel splitting method for coupled monotone inclusions, [*SIAM J. Control Optim.*]{}, vol. 48, pp. 3246–3270, 2010. J.-B. Baillon, P. L. Combettes, and R. Cominetti, There is no variational characterization of the cycles in the method of periodic projections, [*J. Funct. Anal.,*]{} vol. 262, pp. 400–408, 2012. H. H. Bauschke and J. M. Borwein, On the convergence of von Neumann’s alternating projection algorithm for two sets, [*Set-Valued Anal.,*]{} vol. 1, pp. 185–212, 1993. H. H. Bauschke, R. Burachik, P. L. Combettes, V. Elser, D. R. Luke, H. Wolkowicz, Eds., [*Fixed-Point Algorithms for Inverse Problems in Science and Engineering.*]{} Springer-Verlag, New York, 2011. H. H. Bauschke and P. L. Combettes, [*Convex Analysis and Monotone Operator Theory in Hilbert Spaces.*]{} Springer, New York, 2011. H. H. Bauschke and M. R. Edwards, A conjecture by De Pierro is true for translates of regular subspaces, [*J. Nonlinear Convex Anal.,*]{} vol. 6, pp. 93–116, 2005. H. H. 
Bauschke, X. Wang, and C. J. S. Wylie, Fixed points of averages of resolvents: Geometry and algorithms, [*SIAM J. Optim.,*]{} vol. 22, pp. 24–40, 2012. H. Brézis, [*Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert.*]{} North-Holland/Elsevier, New York, 1973. R. E. Bruck, Asymptotic convergence of nonlinear contraction semigroups in Hilbert space, [*J. Funct. Anal.,*]{} vol. 18, pp. 15–26, 1975. C. L. Byrne, [*Applied Iterative Methods.*]{} A. K. Peters, Wellesley, MA, 2008. Y. Censor, P. P. B. Eggermont, and D. Gordon, Strong under-relaxation in Kaczmarz’s method for inconsistent systems, [*Numer. Math.,*]{} vol. 41, pp. 83–92, 1983. A. Cegielski, [*Iterative Methods for Fixed Point Problems in Hilbert Spaces,*]{} Lecture Notes in Mathematics, vol. 2057. Springer, Heidelberg, 2012. P. L. Combettes, Inconsistent signal feasibility problems: Least-squares solutions in a product space, [*IEEE Trans. Signal Process.,*]{} vol. 42, pp. 2955–2966, 1994. A. R. De Pierro, From parallel to sequential projection methods and vice versa in convex feasibility: Results and conjectures, in: D. Butnariu, Y. Censor, and S. Reich (Eds.), [*Inherently Parallel Algorithms for Feasibility and Optimization*]{}, pp. 187–201. Elsevier, New York, 2001. A. R. De Pierro and A. N. Iusem, A parallel projection method for finding a common point of a family of convex sets, [*Pesquisa Operacional,*]{} vol. 5, pp. 1–20, 1985. L. G. Gubin, B. T. Polyak, and E. V. Raik, The method of projections for finding the common point of convex sets, [*Comput. Math. Math. Phys.,*]{} vol. 7, pp. 1–24, 1967. W. V. Petryshyn, Construction of fixed points of demicompact mappings in Hilbert space, [*J. Math. Anal. Appl.,*]{} vol. 14, pp. 276–284, 1966. X. Wang and H. H. Bauschke, Compositions and averages of two resolvents: Relative geometry of fixed points sets and a partial answer to a question by C. Byrne, [*Nonlinear Anal.*]{}, vol. 74, pp. 4550–4572, 2011. E. 
Zeidler, [*Nonlinear Functional Analysis and Its Applications II/B*]{}, Springer-Verlag, New York, 1990.
--- abstract: | The radial momentum operator in quantum mechanics is usually obtained through canonical quantization of the (symmetrical form of the) classical radial momentum. We show that the well known connection between the Hamiltonian of a free particle and the radial momentum operator $\hat{H}=\hat{P}_{r}^2/2m+ \mbox{\boldmath $ \hat{L}^2$}/2mr^{2}$ is true **only in 1 or 3 dimensions. In general, an extra term of the form $\hbar^{2}(n-1)(n-3)/(2m \cdot 4r^{2})$ has to be added to the Hamiltonian.** author: - | Gil Paz $^{a)}$\ *Physics Department, Technion-Israel Institute of Technology, 32000 Haifa, Israel* title: ' [**On the Connection between The Radial Momentum Operator and the Hamiltonian in $ n$ Dimensions**]{}' ---
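Written out, the claim of the abstract is that in $n$ dimensions the free-particle Hamiltonian decomposes as

```latex
\hat{H} \;=\; \frac{\hat{P}_{r}^{\,2}}{2m}
      \;+\; \frac{\mbox{\boldmath $\hat{L}^2$}}{2mr^{2}}
      \;+\; \frac{\hbar^{2}(n-1)(n-3)}{2m\cdot 4r^{2}},
```

where the last term vanishes exactly when $n=1$ or $n=3$, recovering the familiar two-term formula.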
--- author: - | Owais Gilani\ Bucknell University\ Lewisburg, PA, USA Lisa A. McKay\ Yale University\ New Haven, CT, USA Timothy G. Gregoire\ Yale University\ New Haven, CT, USA Yongtao Guan\ University of Miami\ Miami, FL, USA Brian P. Leaderer\ Yale University\ New Haven, CT, USA Theodore R. Holford\ Yale University\ New Haven, CT, USA bibliography: - 'biblio.bib' title: Spatiotemporal Calibration of Atmospheric Nitrogen Dioxide Concentration Estimates From an Air Quality Model for Connecticut --- Introduction {#Paper2intro} ============ Nitrogen dioxide (NO$_2$) is a highly reactive gas that contributes to the formation of ground-level ozone and fine particle pollution, and is believed to be associated with adverse respiratory health effects [@ISA2008]. A detailed analysis of the effects of atmospheric pollutants such as NO$_2$ on various health outcomes requires access to data on the concentration of the pollutant on a fine spatial and temporal scale, which is rarely available. However, we often have data on the concentration of atmospheric pollutants for a given region and time period from different sources that differ in their spatial and temporal resolutions, as well as in their measurement accuracy. Fixed site air quality monitoring stations, such as the US Environmental Protection Agency’s (EPA) monitoring stations, record pollutant concentration data on a dense temporal scale (hourly), but the network of monitoring sites is generally spatially very sparse (e.g. only four sites in Connecticut (CT)), which does not allow for accurate modeling at sites far away from the monitoring sites. On the other hand, data collected at many different spatial locations using passive sampling as part of environmental epidemiologic studies, such as the [Acid/Aerosol ]{}study [@triche2002], generally provide an aggregate measure of the pollutant concentration over relatively long time periods (1-2 weeks), resulting in spatially dense but temporally sparse data. 
Such data sources do not allow accurate estimation of pollutant concentrations at a fine temporal scale. In the absence of a single source of observed data on the concentration of a pollutant that is both spatially and temporally dense, deterministic meteorological air quality models, such as the Community Multiscale Air Quality (CMAQ) model, provide an alternative source of pollutant concentration. Predictions from such models are provided either at the centroids of pixels or as an aggregate measure over the pixel on a regular square grid format, which generally span an extended spatial domain on a dense temporal scale (hourly or daily). However, the grid-cell or pixel sizes are often fairly large (typically 12 km x 12 km), providing crude spatial resolution. Additionally, these complex models do not use any observed measurements of the pollutant in the modeling process, and can often have significant bias associated with them. To account for these potential biases, various spatiotemporal modeling techniques have been developed that seek to calibrate output from such deterministic models using observed data on the pollutants. Most spatiotemporal calibration methods require temporal alignment between the observed data source and output from the deterministic model that needs to be calibrated, in addition to a somewhat dense spatial network of observed sites [@meiring1998; @brown2001; @li2008]. Additionally, these models do not address the issue of improving the spatial resolution of the large pixel sizes of deterministic model outputs, known as the “change of support” problem [@cressie1993]. While these methods work well when the pixel sizes are smaller, or if the process being modeled does not exhibit large variability over short distances, they are inadequate in modeling a process such as NO$_2$, which is known to vary considerably over short distances [@jerrett2004; @who2003]. 
Other models do address this issue, but they are either purely spatial models [@fuentes2005], or require a fairly large number of spatial locations for accurately addressing the downscaling issue [@berrocal2010; @alkuwari2013; @chang2014]. @gilani2016 developed a two-step modeling strategy, the Spatiotemporal Calibration and Resolution Refinement (SCARR) model, that allows calibrating estimates of a pollutant from a deterministic air quality model available in the form of grid-cell data, while also refining its spatial resolution, using two different sources of measured data that differ in their spatial and temporal resolutions. The modeling strategy was demonstrated by developing a space-time model using partial observations from three sources of data on the concentration of ambient [NO$_2$ ]{}over Connecticut in 1994, and its performance was tested using the remaining observations. Additionally, for simplicity in the demonstrative example, the first step of the model was developed as a purely spatial model, without accounting for season as a predictor in the model. In this paper, we extend the SCARR model and fit it to the same data sources using the complete set of observations to develop a space-time model to estimate the concentration of [NO$_2$ ]{}at a fine spatial and temporal resolution over the state of Connecticut for 1994 and 1995. Specifically, estimates of [NO$_2$ ]{}from the CMAQ model available in a grid-cell format with relatively large pixel sizes are calibrated in space and time while also refining their spatial resolution using observations from two sources ([Acid/Aerosol ]{}epidemiologic study data [@triche2002], and US Environmental Protection Agency monitoring data) measured at different spatial and temporal resolutions. 
In this analysis, the SCARR model is extended in three ways: (a) the first step of the model is developed as a space-time model instead of a purely spatial model; (b) a parameter is included in the second step of the model that controls the influence of the estimated spatiotemporal calibration bias from the first step; and (c) additional covariates potentially correlated with atmospheric [NO$_2$ ]{}are included. The model is then used to predict the daily concentration of ambient [NO$_2$ ]{}for 1994 and 1995 over the entire state of Connecticut on a grid with a pixel size of 300 x 300 m. The remainder of the paper is organized as follows. Section 2 provides a description of the three different sources of available data on concentrations of NO$_2$, and the additional local covariates included in the model. Section 3 gives details on the two-step SCARR modeling strategy. Results for the fitted model are given in Section 4, while Section 5 provides predictions of [NO$_2$ ]{}for CT for 1994 and 1995 obtained from the fitted SCARR model. Finally, Section 6 provides some discussion and directions for future work. Data {#Paper2data} ==== Sources of Data on [NO$_2$ ]{}Concentration ------------------------------------------- Data on the outdoor concentration of [NO$_2$ ]{}for the state of Connecticut (CT) for 1994 and 1995 are available from three different sources - predictions from the Community Multiscale Air Quality (CMAQ) model in a grid-cell format, and observed data from the [Acid/Aerosol ]{}study and from EPA monitoring sites, both measured at different spatial and temporal resolutions. #### {#cmaqDesc} Data from the Community Multiscale Air Quality (CMAQ) model version 4.7.1 were provided by the Atmospheric Sciences Research Center in Albany, New York. 
The model uses data from a meteorological forecast model, source emission inventories and chemistry transport modeling to predict hourly [NO$_2$ ]{}concentration on a regular grid over CT with each pixel of size 12 km x 12 km. These data have an extensive spatial coverage (over the entire state of CT) and are temporally dense, but provide estimates at the centroids of pixels with rather large sizes. Additionally, these estimates have not been calibrated to actual observed measurements of NO$_2$, and have systematic errors associated with them. #### {#acidDesc} In the [Acid/Aerosol ]{}study, 138 families were recruited from mothers delivering babies at seven Connecticut hospitals between 1993 and 1996 [@triche2002]. Of these, 129 families had outdoor [NO$_2$ ]{}concentrations measured at their residences by passive sampling using Palmes Tubes [@palmes1976]. At the enrollment home visit, the [NO$_2$ ]{}monitoring tube was placed in an inverted funnel-shaped metal weather protector and hung from a tree branch or outdoor clothes line at least 5 ft above the ground and as close to the home as possible. The monitor was left in place for 10-14 days, and the cumulative concentration during that period was recorded. Point locations for the residences were obtained by geocoding each address against ESRI’s database [@ESRIstreet]; geocoding was unsuccessful for five locations, while two samples were excluded due to equipment contamination. The final analysis utilized 122 (94.6 percent) samples, all of which were collected at various times between March - December, 1994. These data are spatially dense but temporally sparse as they provide measurements for each residence only once a year, aggregated over a 10-14 day period. The index $[t_{\mathbf{s}}]$ for $Y_2$ indicates that data were observed over different lengths of time and at different time points for different [Acid/Aerosol ]{}locations, ${\mathbf{s}}$. 
#### {#epaDesc} The US Environmental Protection Agency (EPA) monitors atmospheric [NO$_2$ ]{}levels over four locations in CT (Bridgeport, East Hartford, New Haven, and Tolland) and two locations in southern Massachusetts (MA) (Chicopee and Springfield) on an hourly basis, which provides a rather accurate estimate of ambient [NO$_2$ ]{}at these locations. While these data are temporally dense, they are spatially very sparse, with only six locations in CT and southern Massachusetts. For each of the six EPA sites, the 24-hour mean daily [NO$_2$ ]{}concentration was calculated for the two years. The spatial distribution of the [Acid/Aerosol ]{}and EPA sites with the CMAQ grid overlaid is given in Figure \[fig:SiteMap\], while Figure \[fig:EHEPACMAQJ1994\] shows the variation in [NO$_2$ ]{}concentration over a 30 day period for the EPA monitor in New Haven, CT along with the CMAQ estimate and observations from a nearby [Acid/Aerosol ]{}site for March/April 1994 (reproduced from @gilani2016). ![Spatial and temporal distribution of study data.](Figures2/CMAQ_EPA_ACID_Color4.png "fig:"){width="\textwidth"} ![Spatial and temporal distribution of study data.](Figures2/March_1994_CMAQ_EPA_ACID_NewHaven.png){width="\textwidth"} Variables Used for Model Fitting {#Paper2Variables} -------------------------------- #### {#Paper2CmaqAtAcid} For each Acid/Aerosol and EPA site, the closest CMAQ pixel centroid was identified, and a 24-hour mean CMAQ [NO$_2$ ]{}concentration was calculated for the exact days that [NO$_2$ ]{}was measured at the [Acid/Aerosol ]{}sites, and for 730 days in 1994 and 1995 at the EPA sites. #### {#Paper2traffic} The Departments of Transportation for Connecticut, Massachusetts and New York record annual traffic volume in the form of average daily traffic (ADT) for interstates and numbered highways [@ctdot2000; @madot; @nydot]. 
Following the approach outlined by @holford2010 and @skene2010 and using traffic data for 1994 and 1995 provided by the Connecticut, Massachusetts and New York Departments of Transportation, we divided a line file of interstates and numbered highways into approximately 50-meter segments, each of which had an associated ADT count. We defined the midpoint of each segment and calculated a measure of traffic volume (TV) on that segment as the product of segment length and ADT. The contribution of a segment to [NO$_2$ ]{}concentration at any location can be expressed as the product of TV and a dispersion function of distance and direction between the segment midpoint and the site. @holford2010 found that the dispersion function can be effectively estimated by a step-function with steps of distance 0-0.5 km, 0.5-1 km, 1-2 km, 2-3 km, 3-4 km, 4-5 km and 5-6 km. Beyond 6 km, the effect of vehicular traffic on [NO$_2$ ]{}was not statistically significant [@holford2010; @skene2010]. To estimate the dispersion function, circular buffers of radii 500 m, 1 km, 2 km, 3 km, 4 km, 5 km and 6 km were created around each [Acid/Aerosol ]{}and EPA site, and the total traffic volume (TTV) within each concentric buffer ring was calculated by summing the contribution of all point sources within the buffer ring and dividing by 10,000. This gave a measure of 10,000 vehicle-kilometers per day for a given distance range - e.g., a total traffic volume value of 3 at an [Acid/Aerosol ]{}site for a buffer ring of 0.5-1 km would indicate an average of 30,000 vehicle-kilometers traveled per day within that buffer ring. #### {#Paper2LU} Land use data for Connecticut were obtained from the United States Geological Survey (USGS) ‘National Land Cover Dataset’ (NLCD) for the year 1992 [@lu]. 
The data were stored as a raster file with 30-meter pixels and classified into 17 categories: open water, low intensity residential, high intensity residential, commercial/industrial, bare rock, quarries, transitional barren, deciduous forest, evergreen forest, mixed forest, shrubland, orchards, pastures, row crops, urban/recreational grasses, woody wetlands and emergent herbaceous wetlands. For each [Acid/Aerosol ]{}site, we used the “Intersect Points with Raster” tool from the Geostatistical Modeling Environment [@gme; @R] to count the number of pixels of each land use category within circular buffer rings of size 0-0.5 km, 0.5-1 km and 1-2 km. This total was then multiplied by 900 (area in m$^2$ of a 30 m pixel) and divided by 10,000 to give the area in hectares of each land use category (LUC) within the three buffer rings. #### {#Paper2PopDen} The 1990 mid-year census tract population and polygon shape file for census tracts were obtained from the US Census Bureau [@censusShape; @census1990]. Population density (per square mile) was assigned to each residence as the mid-year population of the census tract divided by the area of the census tract in square miles. #### {#Paper2Elevation} Elevation above sea level, in meters, for each residence was extracted as the raster cell value from the USGS ‘National Map’ for the year 2005 [@gesch2007; @gesch2002]. #### {#Paper2Season} Concentrations of [NO$_2$ ]{}follow a seasonal cycle, and are generally higher in the winter months as compared to the summer. To capture the effect of season, a trigonometric and linear function of date was included in the model. Day of the Year Ratio (DYR) was defined as the midpoint of the days that [NO$_2$ ]{}was monitored at an [Acid/Aerosol ]{}residence divided by 365, and four covariates were calculated: sin($2\pi\cdot$DYR), cos($2\pi\cdot$DYR), sin($4\pi\cdot$DYR), cos($4\pi\cdot$DYR). 
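As a small sketch of the seasonal covariates just described, the four harmonic terms can be computed from the midpoint of a monitoring window (the window dates below are hypothetical):

```python
import math

def seasonal_covariates(day_start, day_end):
    # Four harmonic terms evaluated at the midpoint of a monitoring window
    dyr = 0.5 * (day_start + day_end) / 365.0  # Day of the Year Ratio
    return [math.sin(2 * math.pi * dyr), math.cos(2 * math.pi * dyr),
            math.sin(4 * math.pi * dyr), math.cos(4 * math.pi * dyr)]

print(seasonal_covariates(80, 93))  # e.g. a 14-day window in late March
```

Because the pairs of sine and cosine terms are harmonics of a single phase, each pair lies on the unit circle, which makes them numerically well-behaved regressors.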
Model {#Paper2ModelingStrategy} ===== As outlined by @gilani2016, the spatiotemporal calibration and resolution refinement (SCARR) model to calibrate the predictions of the concentration of ambient [NO$_2$ ]{}from the CMAQ model and to improve its spatial resolution is developed in two steps. The first step, Calibration and Spatial Refinement, uses the spatially dense but temporally sparse [Acid/Aerosol ]{}data along with other publicly available data on various local covariates related to the modification and dispersion of NO$_2$ to calibrate the CMAQ predictions over space and time, while also improving their spatial resolution. This step provides a continuous space representation of the artificially discrete CMAQ data, and estimates the additive and multiplicative calibration bias of the CMAQ data. The second step, Spatiotemporal Calibration using Dynamic Space-Time modeling, improves the temporal resolution of Step I by using the temporally dense but spatially sparse EPA data for estimating the temporal evolution of the additive and multiplicative calibration constants at a finer temporal resolution (daily). The SCARR model assumes that there is large scale spatial variation in the concentration of the pollutant over the region $\mathcal{S}$ of interest as well as small scale variation. The CMAQ data capture the large scale variation but not the small scale variation. It also assumes that the small scale spatial variation in pollutant concentration is due to local factors such as land use type, traffic density, population density, and elevation, and that this small scale spatial variability is similar between the shorter time interval (1 day) of the EPA data and the longer time durations (10-14 days) of the [Acid/Aerosol ]{}data. Step I {#Paper2StepIModel} ------ We fitted a model to calibrate and refine the granularity of the daily mean CMAQ [NO$_2$ ]{}concentration using [Acid/Aerosol ]{}[NO$_2$ ]{}data. 
A spatiotemporal calibration and refinement model, as described by @gilani2016, is specified by: $$\label{eqn:Paper2IEM} \overline{Y}\!_2({\mathbf{s}},t) = \frac{\bigintsss_{[t_{\mathbf{s}}]} \big(\mathbf{X}'({\mathbf{s}},t)\boldsymbol{\beta} + \mathbf{G}'({\mathbf{s}},t)\boldsymbol{\lambda} + \ddot{Y}\!_1({\mathbf{s}},t)\gamma \big)~dt}{\|[t_{\mathbf{s}}]\|} + \nu({\mathbf{s}},t) + \epsilon ({\mathbf{s}},t),$$ where $\overline{Y}\!_2({\mathbf{s}},t)$ is the concentration of [NO$_2$ ]{}from the [Acid/Aerosol ]{}data observed at location ${\mathbf{s}}$ averaged over the 10-14 day time interval $[t_{\mathbf{s}}]$; $\mathbf{X}$ are the covariates not related to dispersion, and includes a column of ones for the intercept; $\mathbf{G}$ is the integral of the product of the dispersion related covariates and the intensity $\mathbf{Z}$ at the source located within a local neighborhood; $\ddot{Y}\!_1({\mathbf{s}},t)$ is the estimate from the CMAQ data at the pixel centroid nearest to location ${\mathbf{s}}$; and $\epsilon ({\mathbf{s}},t)$ are independent mean zero Gaussian random errors. Residual spatiotemporal dependence between the sites is modeled using the hierarchical error term $\nu({\mathbf{s}},t)$, with space-time covariance matrix $\boldsymbol{\Sigma}$. The covariates $\mathbf{G}$ related to dispersion include total traffic volume (TTV) and land use categories (LUC), where the dispersion is modeled using a step function, which is estimated through the model. For the covariates $\mathbf{X}$ not related to dispersion, @gilani2016 included only one variable, population density, in their model. In our analysis, we further include elevation, as well as a trigonometric function of time to capture the seasonal variation in [NO$_2$ ]{}concentrations. Given the temporal sparsity of the [Acid/Aerosol ]{}data, it was difficult to accurately estimate a space-time covariance matrix $\boldsymbol{\Sigma}$.
Additionally, aggregation over the 10-14 day time interval accounted for the short-range temporal dependence at a given site ${\mathbf{s}}$. We therefore assumed that most of the residual dependence between sites was spatial in nature. We considered both spatially dependent and independent error models. For the spatially dependent error models, spherical, exponential and Matérn covariance functions were used to model $\mathbf{\Sigma}$, whereas for the spatially independent error model, the hierarchical error term was removed and a linear regression framework was utilized. The initial model included all the variables, and the traffic covariates were selected by a backward approach, giving preference to buffer rings closer to the residences. For example, to include the TTV for the buffer 1.0-2.0 km, the buffers 0-0.5 km and 0.5-1.0 km must also be included in the model. The same strategy was adopted for land use categories. Nested traffic and land use buffer models were compared using $F$-tests, and buffer variables not contributing significantly to the model (at the 5% level) were removed. Leave-one-out cross-validation was also performed on the best few models, and preference was given to models for which the prediction sum of squares was closer to the error sum of squares.

Step II {#Paper2StepIIModel}
-------

We fitted a dynamic space-time model for temporal calibration of the spatially refined CMAQ estimates from Step I. Assumption 3 of the modeling strategy allows us to generalize the results of Step I to calibrate estimates of the CMAQ data on a finer temporal resolution in Step II using the temporally dense EPA data. Let ${\mathbf{s}}_1, \ldots, {\mathbf{s}}_6$ represent the spatial locations of the six EPA monitor sites.
The spatiotemporal additive calibration bias $\widetilde{C}({\mathbf{s}}_i,t)$ at location ${\mathbf{s}}_i, i = 1, \ldots 6,$ from Step I is defined as $\widetilde{C}({\mathbf{s}}_i,t) = \mathbf{X}'({\mathbf{s}}_i,t)\boldsymbol{\hat{\beta}} + \mathbf{G}'({\mathbf{s}}_i,t)\boldsymbol{\hat{\lambda}}$, where $\boldsymbol{\hat{\beta}}$ and $\boldsymbol{\hat{\lambda}}$ are vectors of parameters estimated in the first step. Note that the tilde ( $\widetilde{}$ ) on $C$ signifies that this variable was calculated using parameters estimated in Step I of the model. We further define $\mathbf{Y}\!_3(t) = \big(Y_3({\mathbf{s}}_1,t), \ldots, Y_3({\mathbf{s}}_6,t)\big)'$, the vector of EPA [NO$_2$ ]{}observations;   $\mathbf{\ddot{Y}}\!_1(t)$ a diagonal matrix with diag$\big(\mathbf{\ddot{Y}}\!_1(t)\big)= \big(\ddot{Y}\!_1({\mathbf{s}}_1,t), \ldots, \ddot{Y}\!_1({\mathbf{s}}_6,t)\big)$; and $\mathbf{\widetilde{C}}(t) = \big(\widetilde{C}({\mathbf{s}}_1,t), \ldots, \widetilde{C}({\mathbf{s}}_6,t)\big)'$. The *observation equation* for a dynamic spatiotemporal model describing the relationship between $\mathbf{Y}\!_3(t)$ and $\mathbf{\ddot{Y}}\!_1(t)$ in @gilani2016 is extended by adding a parameter $\beta_c$, which controls the influence of $\mathbf{\widetilde{C}}(t)$ in the temporal calibration of Step II. The modified observation equation is thus given by $$\label{eqn:Paper2obs} \mathbf{Y}\!_3(t) = \mathbf{A}(t) + \beta_c\mathbf{\widetilde{C}}(t) + \hat{\gamma}\mathbf{\ddot{Y}}\!_1(t) + \mathbf{Z}(t), \hspace{1in} t = 1, 2, \ldots, 730,$$ where $\mathbf{Z}(t) = \big(Z({\mathbf{s}}_1,t), \ldots, Z({\mathbf{s}}_6,t)\big)'$ is a multivariate white noise time series with the $Z({\mathbf{s}}_i,t),$ $i=1,\ldots,6$ mutually independent $N\big(0, \sigma^2_{Z}({\mathbf{s}})\big)$ variates.
$\mathbf{A}(t) = \big(A({\mathbf{s}}_1,t), \ldots, A({\mathbf{s}}_6,t)\big)'$ is a stochastic process whose evolution over time is described by the *state equation* $$\label{eqn:Paper2state1} \mathbf{A}(t) - \mu_{A} = \boldsymbol{\Psi}_{A}(\mathbf{A}(t-1)- \mu_{A}) + \mathbf{F}\boldsymbol{\xi}(t),$$ where $\boldsymbol{\xi}(t) = \big(\xi({\mathbf{s}}_1,t), \ldots, \xi({\mathbf{s}}_6,t)\big)'$ is a multivariate stochastic process with $\xi({\mathbf{s}}_i,t), i=1,\ldots,6,$ mean zero Gaussian variates with variance $\sigma^2_{A}({\mathbf{s}})$. $\boldsymbol{\Psi}_A$ is a diagonal matrix with $diag(\boldsymbol{\Psi}_A) = \big(\psi_A({\mathbf{s}}_1), \ldots, \psi_A({\mathbf{s}}_6)\big)$, where $\psi_A({\mathbf{s}}_i), i=1,\ldots,6$ are autoregressive parameters lying in the range 0 – 1. $\beta_c$ is a parameter that is constant in space and time. In the model described above, $\mathbf{A}(t) + \beta_c\mathbf{\widetilde{C}}(t)$ provides the additive calibration bias on the finer temporal scale, while $\hat{\gamma}$, estimated in Step I, provides the multiplicative calibration bias. The matrix $\mathbf{F}$ in equation \[eqn:Paper2state1\] can be used to model the spatial correlation in the additive bias to arrive at an integrated space-time model. However, for this example, with only six locations spread over relatively large distances, accurately estimating the spatial correlation structure of $\mathbf{A}(t)$ is not possible. We therefore assumed spatial independence for $A({\mathbf{s}}_i,t)$ between the six sites. Additionally, we assumed that the six sites have identical parameters $\sigma^2_Z,~\sigma^2_A,~ \psi_A,$ and $\mu_A$, allowing us to pool across the six sites to estimate $A(t)$ common to the entire study region. 
In this setting, $\beta_c\, \widetilde{C}({\mathbf{s}},t)$ captures the spatial and two-week temporal variability in the additive calibration bias, while $A(t)$ provides the finer-scale temporal evolution of the additive calibration bias that is common for all sites. Under these assumptions, equations \[eqn:Paper2obs\] and \[eqn:Paper2state1\] can be rewritten in vector notation as $$\begin{aligned} \label{eqn:Paper2obsExVec} \mathbf{Y}_{3}(t)& = &A(t) \mathbb{1}_6+ \beta_c\mathbf{\widetilde{C}}(t) + \hat{\gamma}\mathbf{\ddot{Y}}\!_{1}(t) + \sigma_zZ(t)\mathbf{I}_6 \\ \label{eqn:Paper2stateExVec} A(t) - \mu_A &=& \psi_A (A(t-1) - \mu_A) + \sigma_A\xi(t) \hspace{0.5in} t = 1, 2, \ldots, 730,\end{aligned}$$ where $Z(t)$ and $\xi(t)$ are standard Gaussian variates, $\mathbb{1}$ a vector of 1’s and $\mathbf{I}$ the identity matrix. The dynamic space-time model outlined in equations \[eqn:Paper2obsExVec\] and \[eqn:Paper2stateExVec\] was fitted to the data using the Kalman filter [@kalman1960] [see @gilani2016 for details]. Both maximum likelihood and Bayesian techniques using Markov Chain Monte Carlo simulations were utilized to estimate the parameters $\sigma^2_{Z}$, $\sigma^2_{A}$, $\psi_{A}$, $\mu_{A}$ and $\beta_c$, using the package of @petris2010. Estimates produced by the two methods for the full model were quite similar, and due to the faster computational speed of maximum likelihood estimation (MLE), further model fitting was conducted using MLE. To test the fit of the two-step SCARR model, we calculated the coefficient of correlation and the empirical mean squared error (MSE) for each of the six sites, comparing the EPA observations with predictions from the SCARR and CMAQ models.
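After pooling across sites, equations \[eqn:Paper2obsExVec\] and \[eqn:Paper2stateExVec\] describe a univariate state-space model, so the filtering recursion is short. The sketch below is an illustrative re-implementation under the final-fit assumption $\mu_A = 0$, not the authors' actual code; `offset` stands for the known regression part $\beta_c\widetilde{C} + \hat{\gamma}\ddot{Y}_1$ at each time point:

```python
import numpy as np

def kalman_filter(y, offset, psi, sigma_a, sigma_z, a0=0.0, p0=1e6):
    """Kalman filter for the univariate state-space model

        y[t] = a[t] + offset[t] + N(0, sigma_z^2)   (observation)
        a[t] = psi * a[t-1]     + N(0, sigma_a^2)   (state, mu_A = 0)

    Returns the filtered state estimates; NaN observations (missing
    EPA days) are skipped in the update step.
    """
    a, p = a0, p0
    filtered = []
    for t in range(len(y)):
        # predict one step ahead
        a_pred = psi * a
        p_pred = psi ** 2 * p + sigma_a ** 2
        if np.isnan(y[t]):
            a, p = a_pred, p_pred          # no update on missing days
        else:
            k = p_pred / (p_pred + sigma_z ** 2)   # Kalman gain
            a = a_pred + k * (y[t] - offset[t] - a_pred)
            p = (1 - k) * p_pred
        filtered.append(a)
    return np.array(filtered)
```

A smoother pass over the filtered output would give the smoothed $A(t)$ shown later; the filter above is the forward half of that computation.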
We also fit the final SCARR model at the 122 [Acid/Aerosol ]{}locations (details on fitting the full model at a new location ${\mathbf{s}}'$ are given in Section 5) and calculated the mean square prediction error (MSPE) for the SCARR model predictions and the CMAQ model estimates at these sites. Diagnostic plots were used to check for violations of model assumptions.

Results {#Paper2Results}
=======

Step I {#Paper2StepIResults}
------

Table \[tab:Paper2Sum1\] gives a summary of all covariates used to develop the spatial calibration and refinement model. Figures \[fig:Paper2popden\] and \[fig:Paper2elevation\] show the geographic distribution of population density in Connecticut for 1990 and elevation in 1992, respectively, while Figure \[fig:Paper2adt\] shows the distribution of ADT for interstates and numbered highways for 1994.

[0.49]{} ![Geographical distribution of spatial covariates.[]{data-label="Paper2Maps"}](Figures2/ADT.png "fig:"){width="\textwidth"}

[0.49]{} ![Geographical distribution of spatial covariates.[]{data-label="Paper2Maps"}](Figures2/PopDensityMile.png "fig:"){width="\textwidth"}

[0.49]{} ![Geographical distribution of spatial covariates.[]{data-label="Paper2Maps"}](Figures2/Elevation.png "fig:"){width="\textwidth"}

[0.49]{} ![Geographical distribution of spatial covariates.[]{data-label="Paper2Maps"}](Figures2/LandUse3.png "fig:"){width="\textwidth"}

[l c c]{}
**Variable** & **Mean (SD)$^*$** & **Range**\
[Acid/Aerosol ]{}[NO$_2$ ]{}(ppb)$^a$ & 13.6 (5.28) & 4.39 - 33.1\
CMAQ [NO$_2$ ]{}(ppb)$^b$ & 10.6 (4.75) & 2.51 - 24.0\
Traffic Density (traffic volume/km$^2$)\
   *Buffer ring (km)*$^c$\
      0.0 - 0.5 & 0.89 (1.92) & 0.00 - 12.5\
      0.5 - 1.0 & 1.00 (1.52) & 0.00 - 8.00\
      1.0 - 2.0 & 1.14 (1.31) & 0.00 - 5.95\
      2.0 - 3.0 & 0.99 (1.02) & 0.02 - 4.40\
      3.0 - 4.0 & 1.11 (1.08) & 0.05 - 5.38\
      4.0 - 5.0 & 0.91 (0.71) & 0.06 - 3.12\
      5.0 - 6.0 & 0.89 (0.62) & 0.05 - 2.50\
Land Use Density (hectares/km$^2$)\
*Category/Buffer ring (km)*$^d$\
   Developed$^e$\
      0.0 - 0.5 & 19.89 (15.82) & 0.17 - 49.27\
      0.5 - 1.0 & 53.07 (42.34) & 0.40 - 146.7\
      1.0 - 2.0 & 186.65 (141.35) & 2.64 - 558.3\
   Forest$^f$\
      0.0 - 0.5 & 8.39 (5.01) & 0.08 - 15.91\
      0.5 - 1.0 & 26.78 (13.88) & 0.55 - 48.89\
      1.0 - 2.0 & 113.99 (47.69) & 8.54 - 193.9\
   Other$^g$\
      0.0 - 0.5 & 0.41 (0.47) & 0.00 - 3.07\
      0.5 - 1.0 & 1.38 (1.39) & 0.12 - 10.47\
      1.0 - 2.0 & 5.95 (5.67) & 0.50 - 42.84\
Population Density (population/mi$^2$) & 1682 (2284) & 86.3 - 11529\
Elevation (m) & 106 (68.5) & 1.60 - 331.57\

We explored the shape of the dispersion function relating [NO$_2$ ]{}to total traffic volume by fitting a model using just the TTV buffer covariates. The estimated dispersion function is given in Figure \[fig:Paper2dispIso\]. To check for small-scale spatial anisotropy in the dispersion of [NO$_2$ ]{}related to traffic, we divided the traffic buffers into four quadrants and calculated the total traffic volume within each quadrant for each buffer ring. Figure \[fig:Paper2dispQuad\] shows the step dispersion function in each direction, estimated by fitting the directional traffic buffer covariates in separate models for each direction. The shapes of the four directional dispersion functions appear very similar to each other and to the estimated isotropic dispersion function, suggesting small scale spatial isotropy in the dispersion of NO$_2$. We therefore used the omnidirectional total traffic volume covariates in developing the model.
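A step dispersion function of this kind is piecewise constant in distance: a source contributes a fixed coefficient within each buffer ring and nothing beyond the outermost ring. A minimal sketch, using for illustration the three TTV ring coefficients reported for the final Step I model (the function name and interface are mine):

```python
def step_dispersion(distance_km, rings):
    """Evaluate a step (piecewise constant) dispersion function.

    `rings` is a list of ((lo, hi), coefficient) pairs giving the
    effect of a traffic source lying between lo and hi km from the
    residence; sources beyond the outermost ring contribute nothing.
    """
    for (lo, hi), coef in rings:
        if lo <= distance_km < hi:
            return coef
    return 0.0

# TTV ring coefficients (per 10,000 v-km) from the final Step I model
ttv_rings = [((0.0, 0.5), 0.851), ((0.5, 1.0), -0.138), ((1.0, 2.0), 0.084)]
```

Summing `step_dispersion` over sources, weighted by their traffic intensity, yields the $\mathbf{G}'\boldsymbol{\lambda}$ contribution in the Step I model.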
[0.7]{} ![Estimated (a) isotropic and (b) anisotropic step dispersion functions for total traffic volume (TTV) based on distance $d$ and direction from a residence.](Figures2/ADT_dispersion_iso.png "fig:"){width="\textwidth"}

[0.7]{} ![Estimated (a) isotropic and (b) anisotropic step dispersion functions for total traffic volume (TTV) based on distance $d$ and direction from a residence.](Figures2/ADT_dispersion_aniso.png "fig:"){width="\textwidth"}

Based on previous results [@skene2010] and exploratory analysis, we reclassified the land use categories into developed (including low intensity residential, high intensity residential and commercial/industrial), forest (including deciduous, evergreen and mixed forests, pastures, row crops, and urban/recreational grasses), and other (all other categories). Figure \[fig:Paper2lu\] shows the spatial distribution of the reclassified land use categories for Connecticut in 1992. An initial analysis of the land use categories displayed a high degree of negative correlation between the “developed” and “forest” categories (e.g. -0.92 between the “developed” and “forest” 0.5-1 km rings). Additionally, within each of the three categories, we observed very high positive correlations between the buffer rings (e.g. 0.91 between adjacent “forest” rings). This raised the concern of multicollinearity if all the land use category covariates were included in the model together. In fact, inclusion of all of these covariates in the model resulted in unstable and unusual parameter estimates. Due to the high degree of correlation between the different buffer rings within the categories, the buffer rings for each category were combined to derive a single covariate for each category from 0-2 km. The new categories “developed 0-2 km” and “forest 0-2 km” had a correlation coefficient of -0.90, and including both categories in the model did not provide a significant improvement in the model as compared to including just one covariate.
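Comparisons like these (whether a correlated covariate or an outer buffer ring improves the fit) were made with nested-model $F$-tests. A generic sketch of such a comparison on ordinary least squares fits; this is not the authors' code, and the helper name and simulated design are illustrative:

```python
import numpy as np

def nested_f_stat(y, X_small, X_big):
    """Partial F statistic for comparing nested OLS models.

    X_small's columns must be a subset of X_big's. Returns (F, df_num,
    df_den); compare F with the upper 5% point of F(df_num, df_den) to
    decide whether the extra covariates contribute significantly.
    """
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)
    rss_small, rss_big = rss(X_small), rss(X_big)
    df_num = X_big.shape[1] - X_small.shape[1]
    df_den = len(y) - X_big.shape[1]
    f_stat = ((rss_small - rss_big) / df_num) / (rss_big / df_den)
    return f_stat, df_num, df_den
```

The same routine applies whether the added columns are an extra buffer ring or a second land use category.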
Including these two covariates separately in the model, while keeping all other covariates fixed, provided almost exactly the same parameter estimates, with the sign reversed. Therefore, “forest 0-2 km” alone was included in the final model. The “other” land use category did not significantly improve the model in the presence of “forest 0-2 km” and was therefore not included in the final model. We fit a spatially independent error model using a linear regression framework to identify the best subset of variables to include in the model. Omnidirectional and four-directional variograms of the residuals (not shown) revealed no evidence of spatial correlation in the residuals, which justified the assumption of spatial independence of the errors. However, we fitted spatially dependent error models as well. The covariates from the final regression model were included in a spatially dependent error model, with the spherical, exponential, and Matérn covariance functions used to model the spatial dependence in the errors in SAS (SAS Institute, Cary, NC). These models provided parameter estimates similar to those of the linear regression model, reaching the maximum likelihood when the estimated covariance parameters implied an effective range of zero. In the presence of the other covariates, elevation did not appear to improve the fit of the model. CMAQ NO$_2$, population density and the trigonometric and linear function of date were statistically significant, and were included in the final model. TTV buffer rings farther than 2 km from the [Acid/Aerosol ]{}sites were not statistically significant, and were removed from the model. A summary of the covariates in the final model and their parameter estimates is given in the table below, while Figure \[fig:Paper2Season\] shows the observed concentrations of [NO$_2$ ]{}from the [Acid/Aerosol ]{}data, along with the estimated trigonometric and linear function of date.
--------------------------------------------------------- -- --
**Parameter** & **Estimate** & **(95% CI)**\
$\boldsymbol{\beta}$:\
  Intercept & 11.891 & (8.894, 14.888)\
  Population Density (10,000) & 5.508 & (2.656, 8.359)\
  Season\
     sin($2.\pi.$DYR) & 1.245 & (0.424, 2.065)\
     cos($2.\pi.$DYR) & 1.727 & (-0.207, 3.662)\
     sin($4.\pi.$DYR) & 1.792 & (0.883, 2.702)\
     cos($4.\pi.$DYR) & 2.706 & (1.685, 3.728)\
$\boldsymbol{\lambda}$:\
  Total Traffic Vol. (10,000 v-km)\
        0.0 - 0.5 km & 0.851 & (0.496, 1.207)\
        0.5 - 1.0 km & -0.138 & (-0.310, 0.035)\
        1.0 - 2.0 km & 0.084 & (0.031, 0.136)\
  Land Use (1,000 hectares) & &\
     *Forest*\
        0.0 - 2.0 km & -5.430 & (-7.606, -3.254)\
$\gamma$:\
  CMAQ [NO$_2$ ]{} & 0.487 & (0.333, 0.642)\
\
$R^2$ & 0.777 &\
$R^2$ (Adjusted) & 0.757 &\
RMSE & 2.60\
RMSPE$^*$ & 2.77\
--------------------------------------------------------- -- --

![Observed concentrations of [NO$_2$ ]{}along with the estimated trigonometric and linear function of date to represent season.[]{data-label="fig:Paper2Season"}](Figures2/sineCurve.png)

Step II {#Paper2StepIIResults}
-------

& **EPA** & & **CMAQ** & & & $\mathbf{\widetilde{C}}$\
**Site** & **Mean (SD)** & **Range** & **Mean (SD)** & **Range** & $r^a$ & **Mean**\
Bridgeport & 25.3 (10.5) & 5.3 - 74.2 & 20.2 (9.7) & 3.9 - 55.7 & 0.693 & 16.7\
Chicopee & 15.9 (9.8) & 1.0 - 71.7 & 9.1 (7.6) & 1.0 - 50.1 & 0.749 & 10.8\
E.
Hartford & 18.5 (9.8) & 1.0 - 63.4 & 16.1 (8.6) & 2.3 - 53.3 & 0.763 & 13.6\
New Haven & 27.5 (10.2) & 5.9 - 74.9 & 19.7 (9.0) & 3.5 - 55.0 & 0.658 & 24.6\
Springfield & 26.0 (11.1) & 4.1 - 84.0 & 14.6 (9.0) & 2.0 - 9.90 & 0.681 & 20.2\
Tolland & 9.5 (6.5) & 1.0 - 41.9 & 9.1 (7.0) & 1.1 - 46.6 & 0.819 & 6.13\

: $^a$ Coefficient of correlation between observed EPA data and CMAQ data.

Table \[tab:Paper2summaryEPA\] provides summary statistics for [NO$_2$ ]{}concentrations at the six sites for the EPA and CMAQ data, along with the correlation between these two sources of data at each location. It also gives the estimate of the spatiotemporal additive bias $\widetilde{C}({\mathbf{s}},t)$ at each site, calculated using parameter estimates from Step I. The EPA site at Springfield had observations for all 730 days, while Tolland had observations only for 369 days between October 1994 and November 1995. The other four sites had EPA data missing for a few days, with the available data ranging between 697 and 727 days. The CMAQ data generally underestimate [NO$_2$ ]{}concentrations, with the difference most pronounced for New Haven and Springfield. The additive bias $\widetilde{C}({\mathbf{s}},t)$ from Step I captures this difference, providing the highest estimates for these two sites. The MLE of $\mu_A$ was not significantly different from zero, and it was therefore removed from the model. The estimates (and their standard errors) for the remaining parameters in the final model were $\sigma_Z = 22.53~(0.559), \psi_A= 0.593 ~(0.033), \sigma_A=30.32 ~(1.868),$ and $\beta_c=0.713 ~(0.013)$. Figure \[fig:Paper2at\] shows the smoothed estimate of $A(t)$, calculated using the Kalman filter, along with its 95% confidence interval.
Figure \[fig:Paper2fittedNewHavenCI\] shows the observed EPA and SCARR fitted concentration of [NO$_2$ ]{}for New Haven, along with the 95% confidence interval, while Figure \[fig:Paper2fittedNewHavenCMAQ\] gives a comparison between the observed EPA, SCARR model predictions, and CMAQ model estimates of [NO$_2$ ]{}concentration at this site. While the CMAQ model clearly underestimates [NO$_2$ ]{}concentrations in the summer, the SCARR model estimates more closely follow the observed concentrations. Similar plots for the remaining five EPA sites are given in Figures S1–S4 in the Supplement.

![Smoothed estimate of $A(t)$ (solid blue) with 95% CI (dotted red).[]{data-label="fig:Paper2at"}](Figures2/a_t_color.png)

![image](Figures2/Predicted_CI_9495_newHaven.png)

![image](Figures2/Predicted_CMAQ_9495_newHaven.png)

Table \[tab:Paper2corr\] gives a comparison of the coefficient of correlation and mean squared error (MSE) between the EPA data and estimates from the SCARR and CMAQ models. For all sites except Tolland, the SCARR model provides a better fit than CMAQ, with a marked improvement observed at New Haven and Springfield. Figure \[fig:Paper2ValidationAcid\] shows a comparison of the observed [NO$_2$ ]{}concentration with the SCARR and CMAQ model predictions at 20 randomly selected [Acid/Aerosol ]{}sites. The black horizontal line is the mean [NO$_2$ ]{}recorded over the 10-14 day period at each site, while the blue and red lines show the daily SCARR and CMAQ model predictions, respectively, for those sites for the same time duration. It also shows the approximate month in which the data were collected. The SCARR model predictions are generally closer to the observed [NO$_2$ ]{}concentrations and provide a much smaller MSPE (36.30) as compared to the CMAQ estimates (75.83). A similar plot for all 122 [Acid/Aerosol ]{}sites is given in Figure S5 in the Supplement.
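The per-site comparison reduces to a correlation coefficient and an empirical MSE computed over the days with data. A generic sketch (the function name is mine); the NaN mask handles the days on which EPA observations were missing:

```python
import numpy as np

def fit_metrics(observed, predicted):
    """Coefficient of correlation and empirical MSE between an observed
    series and model predictions, ignoring days with missing data."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    ok = ~np.isnan(obs) & ~np.isnan(pred)     # keep only complete days
    r = float(np.corrcoef(obs[ok], pred[ok])[0, 1])
    mse = float(np.mean((obs[ok] - pred[ok]) ** 2))
    return r, mse
```

Applying this to each site's EPA series against the SCARR and CMAQ series gives the two columns compared in Table \[tab:Paper2corr\].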
[l c c c c]{}
& **Correlation** & & **MSE** &\
**Site** & **SCARR** & **CMAQ** & **SCARR** & **CMAQ**\
Bridgeport & 0.700 & 0.693 & 60.62 & 88.68\
Chicopee & 0.789 & 0.749 & 45.00 & 88.59\
E. Hartford & 0.797 & 0.763 & 37.05 & 46.78\
New Haven & 0.687 & 0.658 & 55.83 & 123.8\
Springfield & 0.766 & 0.681 & 64.36 & 197.4\
Tolland & 0.726 & 0.819 & 21.38 & 18.91\

![Observed [Acid/Aerosol ]{}(black), SCARR predicted (blue), and CMAQ estimated (red) [NO$_2$ ]{}concentration at 20 randomly selected [Acid/Aerosol ]{}sites.[]{data-label="fig:Paper2ValidationAcid"}](Figures2/acid_cmaq_pred_compare_random.png)

A plot of the standardized residuals against the fitted values from the model as well as a $QQ$ plot of the standardized residuals (not shown) did not reveal any violations of the assumption of normality of the errors. An autocorrelation plot of the residuals over various temporal lags also suggested that the AR(1) assumption for $A(t)$ was justified.

[NO$_2$ ]{}Prediction for Connecticut, 1994-1995 {#Paper2CTpred}
================================================

The final model was used to predict the daily concentration of [NO$_2$ ]{}for the state of Connecticut in 1994 and 1995 over a fine grid with pixels of size 300 m. A raster with 300 m square pixels was created that covered the entire state of Connecticut, and the latitude and longitude of the centroid of each pixel were extracted. For each of the 143,050 centroids, the corresponding CMAQ pixels were identified, and the covariates $\ddot{Y}\!_{1}({\mathbf{s}},t)$ and $\widetilde{C}({\mathbf{s}},t)$ were calculated at each location, as detailed in Sections \[Paper2Variables\] and \[Paper2StepIIModel\].
To predict a calibrated concentration at a new location ${\mathbf{s}}'$, the vectors $\mathbf{\widetilde{C}}(t)$ and $\mathbf{\ddot{Y}}\!_{1}(t)$ from equation \[eqn:Paper2obsExVec\] are augmented to include $\widetilde{C}({\mathbf{s}}',t)$ and $\ddot{Y}\!_{1}({\mathbf{s}}',t)$, while the EPA observation for this location $Y_{3}({\mathbf{s}}',t)$ is treated as missing. Then, equation \[eqn:Paper2obsExVec\] is rewritten as $$\label{eqn:Paper2Fit} \mathbf{Y}_{3}(t) = A(t) \mathbb{1}_7+ \beta_c\mathbf{\widetilde{C}}(t) + \hat{\gamma}\mathbf{\ddot{Y}}\!_{1}(t) + \sigma_zZ(t)\mathbf{I}_7 \hspace{0.5in} t = 1, 2, \ldots, 730,$$ and, $$E\big(Y_{3}({\mathbf{s}}',t)| \mathbf{Y}_{3}(1), \ldots, \mathbf{Y}_{3}(t)\big) = E\big(A({\mathbf{s}}',t)| \mathbf{Y}_{3}(1), \ldots, \mathbf{Y}_{3}(t)\big) + E(\beta_c) \widetilde{C}({\mathbf{s}}',t) + \hat{\gamma}\ddot{Y}\!_{1}({\mathbf{s}}',t),$$ where $E(\beta_c)$ is given by its MLE and $E\big(A({\mathbf{s}}',t)| \mathbf{Y}_{3}(1), \ldots, \mathbf{Y}_{3}(t)\big)$ is evaluated by rerunning the Kalman filter on the augmented data vectors using the MLE of the model parameters estimated earlier. The predicted values for each pixel for each day were reassigned to the original raster grid to create 729 raster images. Prediction maps of the daily concentration of [NO$_2$ ]{}for Connecticut are displayed for a summer, winter and fall day (a summer day in 1994; December 27, 1994; and October 19, 1995, respectively) in Figure \[fig:Paper2SCARRpred\] alongside the corresponding maps created using the predictions from the CMAQ model. An animation that shows the change over time in the spatial distribution of ambient [NO$_2$ ]{}in CT for 1994 and 1995 is available online at “https://ogilani.shinyapps.io/CTNO2”.
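Once the filtered $A(t)$ is in hand, the conditional-mean prediction at a new location is a simple linear combination of three known quantities. A sketch using the reported point estimates ($\hat{\gamma} = 0.487$ from Step I, $\hat{\beta}_c = 0.713$ from Step II); the argument values in the test are placeholders, not data from the paper:

```python
GAMMA_HAT = 0.487   # multiplicative calibration coefficient (Step I)
BETA_C_HAT = 0.713  # weight on the Step I additive bias (Step II MLE)

def predict_no2(a_t, c_tilde_st, y1_cmaq_st,
                beta_c=BETA_C_HAT, gamma=GAMMA_HAT):
    """Conditional-mean NO2 prediction at a new location s' on day t:
    filtered common additive bias A(t), plus the scaled Step I spatial
    bias C~(s',t), plus the multiplicatively calibrated CMAQ estimate.
    """
    return a_t + beta_c * c_tilde_st + gamma * y1_cmaq_st
```

Evaluating this at every raster centroid for every day is what produces the daily 300 m prediction maps.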
![image](Figures2/ThreeTogether7.png)

Discussion {#Paper2discussion}
==========

In the first step of the model, we spatially calibrated and refined the CMAQ [NO$_2$ ]{}estimates using observed concentrations available from the [Acid/Aerosol ]{}data, borrowing spatial information from various local covariates while controlling for time in the model. However, while the resultant model improved the spatial resolution of the CMAQ estimates, the calibration was done over very few time points, which were aggregated over long durations of 10-14 days. Therefore, it did not provide effective temporal calibration of the daily variability in the CMAQ data. Given the availability of another source of observed data on the concentration of [NO$_2$ ]{}(the EPA data) that is temporally dense, in the second step we developed a dynamic spatiotemporal model to use the EPA data for temporal calibration of the spatially refined CMAQ estimates from the first step. The final model from Step I includes population density, total traffic volume buffers up to 2 km, the “forest 0-2 km” buffer for land use type, and a trigonometric function of date. The model explains about 76% of the variability in the observed data, with a leave-one-out cross validation PRESS statistic (936) close to the residual sum of squares (752), suggesting it does a reasonable job of predicting the concentration of [NO$_2$ ]{}at a new location. The coefficient for the total traffic volume buffer 0.5-1 km was less than zero, suggesting that traffic volume within that buffer reduces the concentration of NO$_2$, which is unexpected. However, the estimate is not statistically significant, and it was retained in the final model to allow the next outer buffer ring, 1-2 km, which was statistically significant, to be included.
Similar “U shaped” estimates for the dispersion function of NO$_2$ have been observed in previous studies [@holford2010], and may be explained by the complex process that produces NO$_2$ from nitrogen oxide (NO) and ozone (O$_3$) in the atmosphere. NO$_2$ is not produced directly by combustion in automobiles, and elevated levels of NO$_2$ are often observed at farther distances, perhaps reflecting the time needed to convert NO to NO$_2$. The estimated trigonometric function (Figure \[fig:Paper2Season\]) shows two peaks within a 12-month cycle: a higher peak in the winter and a lower one in the summer. While the fit of the curve seems appropriate for these data, comparing the time trend of [NO$_2$ ]{}for the EPA data suggests that the summer peak estimated by the trigonometric function might be artificially high in the [Acid/Aerosol ]{}data. However, an advantage of the two stage modeling strategy is that this possible overestimation in the summer months in Step I is countered during the temporal calibration in Step II, as seen by a dip in the mean trend of the estimate of $A(t)$ (Figure \[fig:Paper2at\]) during the corresponding summer period. The MLE for the parameter $\mu_A$ in Step II of the model was not statistically significantly different from zero. Given the presence of $\widetilde{C}({\mathbf{s}},t)$ in the model, this was to be expected, as $\widetilde{C}({\mathbf{s}},t)$ captures the mean spatial additive calibration bias for the CMAQ data. The parameter $\beta_c$ controls the influence of $\widetilde{C}({\mathbf{s}},t)$ in Step II. The MLE for $\beta_c$ was 0.713, which suggests a slight mitigation of the spatial additive calibration bias $\widetilde{C}({\mathbf{s}},t)$, estimated in Step I of the model, when evaluating the temporal evolution of the additive calibration bias $A(t)$ in the second step.
As discussed in @gilani2016, the CMAQ model does a reasonable job of predicting the ambient [NO$_2$ ]{}concentrations in the winter months when the concentrations are generally high. But it does not provide very accurate predictions during the low concentration periods in the summer, when it generally underpredicts the true concentration. As seen in the site-level comparisons, the SCARR model improves the prediction during the summer months and, overall, appears to provide better predictions than the CMAQ model. The coefficients of correlation comparing the EPA observations with predictions from the SCARR and CMAQ models, and the empirical mean squared errors (MSE) for these two models (Table \[tab:Paper2corr\]), show that the SCARR model provides better predictions at five of the six sites, with a remarkable improvement in the predictions for New Haven and Springfield. The CMAQ model, on the other hand, appears to provide better predictions at Tolland. However, as mentioned in §\[Paper2StepIIResults\], the site at Tolland was missing EPA data for the summer of 1994, and the correlation and MSE for this site reflect the performance of the two models only for the winter period, during which time the CMAQ model generally performs well. For almost all of the [Acid/Aerosol ]{}sites, predictions from the SCARR model more accurately reflect the truth than the CMAQ model (Figures \[fig:Paper2ValidationAcid\] and S5). The empirical MSPE for the SCARR model predictions (36.30) at the [Acid/Aerosol ]{}sites was much smaller than the MSPE for the CMAQ model predictions (75.83). There are a few limitations to the modeling strategy presented here. The model assumes that the [NO$_2$ ]{}observations recorded at the [Acid/Aerosol ]{}and EPA sites accurately reflect the true ambient concentrations. However, in practice that might not necessarily be the case, as different monitoring equipment records [NO$_2$ ]{}concentrations at varying levels of accuracy.
Additionally, the EPA monitors are typically placed in high concentration locations near major roadways. However, the two step modeling strategy may actually help in balancing this effect by also including observations from the [Acid/Aerosol ]{}data, whose locations were sampled independently of pollutant concentrations and were therefore more evenly distributed between high and low concentration areas. Another limitation is due to the fact that Step II of the model includes variables that were estimated in Step I ($\widetilde{C}$), and the resulting estimates of the model errors do not capture the additional uncertainty of including these estimated variables, leading to somewhat understated prediction errors. Methods developed to account for measurement error in models can be applied here to provide more accurate error estimates [@carroll2006]. Given the Markov property of Kalman filters, a Bayesian approach can also be utilized to explicitly account for the uncertainty in the estimated variables included in the model. However, prediction of [NO$_2$ ]{}on a fine spatiotemporal resolution, with 143,050 spatial locations and 730 time points, using a Bayesian approach can impose a substantial computational burden. The [NO$_2$ ]{}prediction maps for CT for 1994 and 1995 (Figure \[fig:Paper2SCARRpred\]) show a clear improvement in the spatial resolution as compared to the estimates provided by the CMAQ model. The effect of local covariates is evident in the finer spatial resolution map, where the contribution of traffic on major highways to ambient [NO$_2$ ]{}concentration stands out. These maps provide more accurate estimates for points within the CMAQ pixel.
For example, the concentration of [NO$_2$ ]{}appears rather high at all points in the pan-handle of CT (south-west corner of the map) in the CMAQ model estimates, whereas the SCARR model predictions show that the concentration is high primarily along the major highway (Interstate 95), while locations away from the highway have a significantly lower concentration. These maps with more accurate estimates at the centroid and finer spatial resolution can be very useful in assigning mean daily exposure to [NO$_2$ ]{}for participants in epidemiologic studies, which is usually not possible to do using available observed sources of data on the concentration of NO$_2$. Predictions from this model significantly contribute to exposure assessment at a fine spatial and temporal resolution for CT in 1994 and 1995. Acknowledgement {#acknowledgement .unnumbered} =============== The authors thank Dr. Lance Waller for useful feedback on the manuscript. This research was partially funded by grant R01ES017416 from the National Institutes of Health.
--- author: - | \ Department of Physics, Columbia University, New York, NY 10025, USA\ E-mail: - RBC and UKQCD collaborations bibliography: - 'LD.bib' title: 'Computing the long-distance contribution to second order weak amplitudes' --- Introduction ============ Lattice QCD has been very successful at computing the effects of the electroweak interactions on the properties of the strongly interacting particles. For many processes the large mass of the $W^\pm$ and $Z$ bosons causes their interactions with the quarks and gluons of the hadrons to take place in a very small space-time region. These short distance interactions can be evaluated using electroweak and QCD perturbation theory and their low energy effects on hadrons described by effective four quark operators. For example, this approach provides a good description of both first-order decays and even some second order processes such as the CP violating effects in $K^0 - \overline{K}^0$ and $B^0 - \overline{B}^0$ mixing. However, for general second order processes, in which two $W^\pm$ and/or $Z$ bosons appear, it is possible that while each $W^\pm$ or $Z$ exchange will appear to take place at a point, the points locating these two exchanges may be separated by a much larger distance $\sim 1/\Lambda_{\rm QCD}$. Such long distance effects are believed to contribute to the CP violation seen in $K^0 - \overline{K}^0$ mixing at the 5% level [@Buras:2010pza] but at least at the 20% level [@Herrlich:1993yv] for the CP conserving $K_L - K_S$ mass difference.[^1] Here we present a method to compute such long distance effects using lattice QCD, focused on the case of the $K_L - K_S$ mass difference. There are three complications which must be overcome. First, we need to devise a Euclidean space expectation value which can be evaluated in lattice QCD and which contains the second order energy shift of interest.
Second, such a lattice quantity will involve a product of two, first-order weak Hamiltonian densities, ${\cal H}_W(x_i)_{i=1,2}$, each corresponding to one of the $W^\pm$ or $Z$ exchanges. The short distance behavior of this product as $|x_1-x_2| \rightarrow 0$ will not describe the actual behavior of the exchange of two $W^\pm$ or $Z$ bosons at nearby space-time points. Thus, this incorrect short distance behavior must be removed and replaced by the known, physical, short distance behavior described above. Third, the effects of finite volume, necessary in a lattice calculation, must be removed. These appear especially significant since the infinite volume expression contains continuous integrals, often with vanishing energy denominators evaluated as principal parts, while the finite volume quantity is a simple sum of discrete finite volume states. Here a generalization of the method of Lellouch and Luscher [@Lellouch:2000pv] can be used. We will now discuss how each of these obstacles may be overcome in a calculation of the $K_L - K_S$ mass difference, $\Delta m_K$. Second order lattice amplitude ============================== The standard description of $K^0 - \overline{K}^0$ mixing provides an expression for the $K_L - K_S$ mass difference which we will write as $$\Delta m_K = 2{\cal P} \sum_\alpha \frac{\langle \overline{K}^0 | H_W|\alpha\rangle \langle \alpha|H_W|K^0\rangle} {m_K - E_\alpha}. \label{eq:delta_m_IV}$$ Here CP violating effects, at the 0.1% level, have been neglected, we are summing over intermediate states $|\alpha\rangle$ with energy $E_\alpha$ and normalization factors associated with the conserved total momentum are suppressed. This generalized sum includes an integral over intermediate state energies and the $\cal P$ indicates the principal part of the integral over the $E_\alpha=m_K$ singularity. 
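As a toy illustration of Eq. \[eq:delta\_m\_IV\], the sum over a discrete spectrum takes only a few lines; all couplings, energies, and units below are invented, and in this discrete stand-in the continuum principal part reduces to simply excluding the degenerate state (the state $|n_0\rangle$ treated separately below):

```python
# Toy illustration of Eq. (delta_m_IV): a discrete stand-in for
# 2 P sum_alpha <Kbar|H_W|alpha><alpha|H_W|K> / (m_K - E_alpha).
# All couplings and energies are invented, for illustration only.

m_K = 1.0  # kaon "mass", arbitrary units

# (product of matrix elements, intermediate-state energy E_alpha)
states = [
    (0.10, 0.30),
    (0.05, 0.80),
    (0.04, 1.00),   # degenerate with the kaon: excluded from this sum
    (0.03, 1.70),
    (0.02, 2.50),
]

def delta_m(states, m_K, tol=1e-9):
    """2 * sum of couplings/(m_K - E_alpha), skipping (near-)degenerate states,
    which must be treated separately (here: the two-pion state |n_0>)."""
    total = 0.0
    for coupling, energy in states:
        if abs(m_K - energy) < tol:
            continue  # degenerate state handled by degenerate perturbation theory
        total += 2.0 * coupling / (m_K - energy)
    return total
```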
One possible way to capture a similar expression in a Euclidean space lattice calculation is to evaluate the time-integrated second-order product that, if evaluated in Minkowski space, would yield the $\Delta m_K$ contribution to the time evolution over a time interval $[t_a,t_b]$: $${\cal A} = \frac{1}{2}\langle \overline{K}^0(t_f) \int_{t_a}^{t_b} d t_2 \int_{t_a}^{t_b} d t_1 H_W(t_2) H_W(t_1) K^0(t_i) \rangle. \label{eq:lattice_amplitude}$$ Here the initial $K^0$ is created by a source $\overline{K}^0(t_i)$ at the time $t_i$ and the final $\overline{K}^0$ state destroyed by the sink $\overline{K}^0(t_f)$ at time $t_f$. This amplitude is represented schematically in Fig. \[fig:lattice\_amplitude\]. Equation \[eq:lattice\_amplitude\] can be evaluated as a standard Euclidean space path integral with $t_f \gg t_b \gg t_a \gg t_i$. If the time extent of this Euclidean path integral is sufficiently large, then when converted to an operator expression, Eq. \[eq:lattice\_amplitude\] becomes the vacuum expectation value of the time-ordered product of Heisenberg operators. Assuming that $t_f-t_b$ and $t_a-t_i$ are sufficiently large to project onto the $\overline{K}^0$ and $K^0$ states, substituting a sum over energy eigenstates $|n\rangle$, and integrating over $t_2$ and $t_1$, one obtains: $$\begin{aligned} {\cal A} &=& -\sum_{n \ne n_0} \frac{\langle \overline{K}^0 |H_W|n\rangle \langle n|H_W|K^0\rangle} {m_K - E_n} \left\{t_b-t_a - \frac{e^{-(E_n-m_K)(t_b-t_a)} - 1}{m_K-E_n}\right\} e^{-(t_f-t_i)m_K} \nonumber \\ &&\quad -\frac{1}{2}(t_b-t_a)^2\langle\overline{K}^0|H_W|n_0\rangle \langle n_0|H_W| K^0\rangle e^{-(t_f-t_i)m_K}. \label{eq:lattice_amplitude_explicit}\end{aligned}$$ Anticipating a result from Sec. \[sec:finite\_volume\], we have assumed that a single two-pion intermediate state $|n_0\rangle$ is degenerate with the kaon and treated that state separately in the time integrations. ![One type of diagram contributing to $\cal A$ of Eq. \[eq:lattice\_amplitude\].
Here $t_2$ and $t_1$ are integrated over the interval $[t_a,t_b]$, represented by the shaded region between the two vertical lines. In addition to this connected quark flow there will also be disconnected diagrams in which no quark lines connect $H_W(t_2)$ and $H_W(t_1)$.[]{data-label="fig:lattice_amplitude"}](Lattice_PT.EPS "fig:"){width="70.00000%"} -0.1in The coefficient of the $(t_b-t_a)$ term in Eq. \[eq:lattice\_amplitude\_explicit\] is then a finite volume approximation to $\Delta m_K$: $$\Delta m_K^{\rm FV} = 2\sum_{n \ne n_0} \frac{\langle \overline{K}^0 |H_W|n\rangle \langle n|H_W|K^0\rangle} {m_K - E_n}. \label{eq:delta_m_FV}$$ The other terms in Eq. \[eq:lattice\_amplitude\_explicit\] fall into four categories: i) The term independent of $t_b-t_a$ within the large curly brackets. This constant must be distinguished from the desired term proportional to $t_b-t_a$. ii) Exponentially decreasing terms coming from states $|n\rangle$ with $E_n > m_K$. These should be negligible if $t_b-t_a$ is sufficiently large. iii) Exponentially increasing terms coming from states $|n\rangle$ with $E_n < m_K$. These will be the dominant contributions and must be accurately determined and removed as discussed in the paragraph below[^2]. iv) The final term proportional to $(t_b-t_a)^2$ arises because our choice of volume makes one $\pi-\pi$ state, $|n_0\rangle$, degenerate with the kaon. The exponentially growing terms pose a significant challenge. Fortunately, we have some freedom to reduce their number and complexity. The two leading terms corresponding to the vacuum and single pion states can be computed separately and subtracted. Two pion states lying below $m_K$ can be eliminated using the same techniques that have been developed to evade the Maiani-Testa theorem and force the lowest energy $\pi-\pi$ state to be the on-shell $K \rightarrow \pi\pi$ decay product. 
Either choosing the kaon to have a non-zero laboratory momentum of 753 MeV or introducing G-parity boundary conditions to force non-zero pion momentum can eliminate all $\pi-\pi$ states with energy below $m_K$, at least for those lattice volumes that will be accessible within the next few years. Short distance correction {#sec:short_distance} ========================= The product of operators appearing in Eq. \[eq:lattice\_amplitude\] accurately describes the second order weak effects when the corresponding Hamiltonian densities ${\cal H}(x_i)_{i=1,2}$ are evaluated at space-time points separated by a few lattice units $a$: $|x_2 - x_1| \gg a$. However, as $|x_2 - x_1| \rightarrow 0$ the behavior is unphysical, being dominated by lattice artifacts rather than revealing the short distance structure of $W^\pm$ and $Z$ exchange. Fortunately, non-perturbative Rome-Southampton methods can be applied here to accurately remove this incorrect behavior and replace it with the correct short distance behavior, that portion of the process that has been traditionally computed using lattice methods. This can be done by identifying the short distance part of the amplitude by evaluating the four-quark, off-shell Green’s function $$\Gamma_{\alpha\beta\gamma\delta}(p_i) =\langle \widetilde{\overline{d}}_\alpha(p_4) \widetilde{s}_\beta(p_3) \int d^4x_1 d^4 x_2 {\cal H}_W(x_2){\cal H}_W(x_1) \widetilde{s}_\gamma(p_2)\widetilde{\overline{d}}_\delta(p_1)\rangle. \label{eq:NPR}$$ Here the quark fields are Fourier transformed and the gauge is fixed. A class of connected contributions to this Green’s function is shown in Fig. \[fig:NPR\]. A standard application of Weinberg’s theorem demonstrates that if the external momenta $p_i$ obey a condition such as $p_i \cdot p_j = \mu^2(1-4\delta_{ij})$, then for $\mu^2 \gg \Lambda_{\rm QCD}^2$ all of the internal momenta contributing to $\Gamma(p_i)$ will have the scale $\mu$, up to terms of order $\Lambda_{\rm QCD}^2/\mu^2$. 
![Diagram representing a class of connected contributions to the off-shell, four-quark Green’s function defined in Eq. \[eq:NPR\]. In a non-perturbative evaluation of the Green’s function $\Gamma(p_i)$, graphs of this sort including all possible gluon exchanges would be included.[]{data-label="fig:NPR"}](Lattice_PT_NPR_2o.EPS){width="30.00000%"} At low energies this high momentum part of the integrated product ${\cal H}_W(x_2){\cal H}_W(x_1)$ can be represented as a linear combination of four-quark operators $\{O_s\}_{1 \le s \le S}$. These operators are typically normalized by imposing conditions on off-shell Green’s functions similar to that in Eq. \[eq:NPR\] in which the product of ${\cal H}_W$ operators is replaced by $O_s$ and the same kinematic point evaluated. The result is an alternative expression for the short distance part of the amplitude $\cal A$: $${\cal A}_{\rm SD} = \langle \overline{K}^0(t_f) \int_{t_a}^{t_b} d x_0 \int d^3 x \sum_{s=1}^S c_s^{\rm lat}(\mu^2) O_s(\vec x, x_0) K^0(t_i) \rangle. \label{eq:lattice_amplitude_SD}$$ The functions $c_s^{\rm lat}(\mu^2)$ are Wilson coefficients for the lattice-regularized operator product expanded in operators normalized using the regularization invariant (RI) Rome-Southampton scheme. Thus, we can replace the incorrect short distance part of our lattice operator product by the correct continuum contribution by adding to the integrated operator product in Eq. \[eq:lattice\_amplitude\] the operator: $$\int_{t_a}^{t_b}d x_0 \int d^3 x \sum_{s=1}^S \left\{c_s^{\rm cont}(\mu^2) - c_s^{\rm lat}(\mu^2)\right\}O_s(\vec x,x_0). 
\label{eq:SD_correction}$$ Here the $\{c_s^{\rm cont}\}_{1 \le s \le S}$ are the usual continuum Wilson coefficients that are computed from electroweak and QCD perturbation theory to represent the correct short distance part of the physical second order weak process while the lattice coefficients $c_s^{\rm lat}$ can be computed from the somewhat elaborate but well defined lattice RI/MOM calculation of the Green’s functions in Eq. \[eq:NPR\]. An important issue on which the above argument depends is the degree to which the dimension-6, four-quark operators introduced above capture the entire short distance part of the lattice amplitude. Since the degree of divergence of the diagram shown in Fig. \[fig:NPR\] is +2, the integration that remains after the “subtraction” of the operator in Eq. \[eq:SD\_correction\] will still receive $O(1)$ contributions from the lattice scale. This difficulty can be avoided by including dimension eight terms in the Wilson expansion employed in Eq. \[eq:SD\_correction\]. A more physical and more practical approach includes the charm quark in the lattice calculation so that GIM suppression makes the integration more convergent. Controlling finite volume errors {#sec:finite_volume} ================================ We now turn to the heart of this proposal: a demonstration that the potentially large volume dependence coming from those energy denominators in Eq. \[eq:lattice\_amplitude\_explicit\] with $E_n \sim m_K$ can be removed, leaving $O(1/L^4)$ finite-volume errors. This important conclusion is a consequence of a generalization of the original method of Lellouch and Luscher. The starting point is Luscher’s relation [@Luscher:1990ux] between an allowed, finite-volume, two-particle energy, $E = 2\sqrt{k^2 + m_\pi^2}$ and the two-particle scattering phase shift $\delta(E)$: $$\phi(kL/2\pi) +\delta(E) = n\pi \label{eq:Luscher}$$ where $n$ is an integer and the known function $\phi(q)$ is defined in Ref. [@Luscher:1990ux]. 
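To illustrate how Eq. \[eq:Luscher\] determines the allowed finite-volume energies, the sketch below solves $\phi(kL/2\pi)+\delta(E)=n\pi$ for $k$ by bisection. Both $\phi$ and $\delta$ here are invented smooth stand-ins (the true $\phi(q)$ involves the zeta function $Z_{00}$ of Ref. [@Luscher:1990ux]), so the numbers are illustrative only:

```python
import math

m_pi = 0.14   # pion mass in GeV (illustrative)
L = 30.0      # box size in GeV^-1 (illustrative)

def phi_toy(q):
    """Smooth toy stand-in for Luscher's phi(q); the real phi(q) involves
    the zeta function Z_00 and is defined in Luscher (1990)."""
    return math.pi * q * q

def delta_toy(E):
    """Invented smooth s-wave phase shift, for illustration only."""
    return 0.3 * math.atan(4.0 * (E - 2.0 * m_pi))

def quantization(k, n):
    """phi(kL/2pi) + delta(E) - n*pi, with E = 2 sqrt(k^2 + m_pi^2)."""
    E = 2.0 * math.sqrt(k * k + m_pi * m_pi)
    return phi_toy(k * L / (2.0 * math.pi)) + delta_toy(E) - n * math.pi

def solve_level(n, k_lo=1e-6, k_hi=2.0, tol=1e-12):
    """Bisection; for these toy inputs the condition is monotonic in k."""
    f_lo = quantization(k_lo, n)
    for _ in range(200):
        k_mid = 0.5 * (k_lo + k_hi)
        if quantization(k_mid, n) * f_lo > 0.0:
            k_lo = k_mid   # same sign as lower end: move lower end up
        else:
            k_hi = k_mid
        if k_hi - k_lo < tol:
            break
    return 0.5 * (k_lo + k_hi)

k1 = solve_level(1)                            # finite-volume momentum, n = 1
E1 = 2.0 * math.sqrt(k1 * k1 + m_pi * m_pi)    # corresponding two-pion energy
```

With a realistic $\phi(q)$ in place of `phi_toy`, the same root-finding yields the spectrum whose distortion by $H_W$ is exploited in the next section.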
Following Lellouch and Luscher we consider the $s$-wave, $\pi-\pi$ scattering phase shift as modified by the weak interactions and use Eq. \[eq:Luscher\] to connect this to the finite volume energies, determined using degenerate perturbation theory for the $K_S \leftrightarrow \pi-\pi$ finite volume system. For simplicity we will limit our discussion to the larger $\Delta I = 1/2$ part of $H_W$ and the $I=0$ $\pi-\pi$ state.[^3] The relation between the finite and infinite volume second order mass shift is obtained by imposing Eq. \[eq:Luscher\], accurate through second order in $H_W$. We begin by examining the energies, accurate through second order in the strangeness changing, $\Delta I = 1/2$ weak Hamiltonian, $H_W$, of the finite volume system made up of a $K_S$ meson, an $I=0$ two-pion state $|n_0\rangle$ with energy $E_{n_0}$ nearly degenerate with $m_K$ and other single and multi-particle states coupled to $K_S$ and $|n_0\rangle$ by $H_W$. Following second order degenerate perturbation theory, we can obtain the energies of the $K_S$ and two-pion state $|n_0\rangle$ as the eigenvalues of the $2 \times 2$ matrix: $$\left( \begin{array}{cc} m_K + \sum_{n \ne n_0} \frac{|\langle n|H_W|K_S\rangle|^2}{m_K -E_n} & \langle K_S|H_W|n_0\rangle \\ \langle n_0 |H_W | K_S\rangle & E_{n_0}+\sum_{n \ne K_S} \frac{|\langle n|H_W|n_0\rangle|^2}{E_{n_0} -E_n} \end{array}\right). \label{eq:fv_2x2}$$ Finite and infinite volume quantities can then be related by requiring that the eigenvalues of the $2 \times 2$ matrix in Eq. \[eq:fv\_2x2\] solve Eq. \[eq:Luscher\] where the phase shift $\delta(E)$ is the sum of that arising from the strong interactions, $\delta_{\,0}(E)$, a resonant contribution from the $K_S$ pole and more familiar second-order Born terms: $$\delta(E) = \delta_{\,0}(E) + \arctan(\frac{\Gamma(E)/2}{m_K+\Delta m_{K_S}-E}) -\pi\sum_{\beta \ne K_S} \frac{|\langle \beta|H_W|n_0\rangle|^2}{E -E_\beta}. 
\label{eq:delta_second_order}$$ Here $\Gamma(E)$ is proportional to the square of the $K_S$ - two pion vertex which becomes the $K_S$ width when evaluated at $E=m_K$: $$\Gamma(E) = 2\pi |\langle \pi\pi(E)|H_W|K_S\rangle|^2,$$ where for the infinite volume, $I=0$, $s$-wave, 2-pion state we choose the convenient normalization $\langle \pi\pi(E)|\pi\pi(E')\rangle = \delta(E-E')$. The three terms in Eq. \[eq:delta\_second\_order\] are shown in Fig. \[fig:resonance\]. ![Diagrams showing the three contributions to the $\pi-\pi$ phase shift when both strong and second order weak effects are included. The states $\beta$ are multi-particle states with $S=\pm 1$.[]{data-label="fig:resonance"}](pi-pi_scattering.eps){width="100.00000%"} The easiest case to examine is that in which the volume is chosen to make $E_{n_0}-m_K$ very small on the scale of $\Lambda_{\rm QCD}$ but large compared to $\Gamma$ or $\Delta m_K$, so that $m_K$ and $E_{n_0}$ are not “degenerate”. Expanding Eq. \[eq:Luscher\] and the $\pi-\pi$ energy eigenvalue from Eq. \[eq:fv\_2x2\] in $H_W$ and collecting all terms of second order in $H_W$ we find: $$\begin{aligned} \mbox{\ }\hskip -0.2in \left.\frac{\partial \Bigl(\phi+\delta_{\,0}\Bigr)}{\partial E}\right|_{E=E_{n_0}} \hskip -0.1in \left\{\frac{|\langle K_S|H_W|n_0\rangle|^2}{E_{n_0}-m_K} + \sum_{n \ne K_S} \frac{|\langle n|H_W|n_0\rangle|^2}{E_{n_0} -E_n} \right\} = \frac{\Gamma(E_{n_0})/2}{E_{n_0}-m_K} + \hskip -0.1in \sum_{\beta \ne K_S} \frac{\pi|\langle \beta|H_W|\pi\pi\rangle|^2}{E_{n_0} -E_\beta}. \label{eq:non-degenerate}\end{aligned}$$ This relation has two useful consequences. First we can equate the residues of the kaon poles, $E_{n_0}=m_K$ on the left- and right-hand sides. This gives us the original Lellouch-Luscher relation. Second we can subtract the pole terms and equate the remaining parts of Eq. \[eq:non-degenerate\] evaluated at $E_{n_0}=m_K$. This second result will be used below to remove the second-order Born terms. 
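The level repulsion encoded in the $2 \times 2$ matrix of Eq. \[eq:fv\_2x2\] can be illustrated numerically; the following minimal sketch diagonalizes that matrix with invented values for the shifted diagonal entries and for the mixing element $\langle K_S|H_W|n_0\rangle$ (all units arbitrary):

```python
import math

# Invented inputs, illustration only: the diagonal entries stand for m_K and
# E_{n_0} after their own second-order shifts; h is <K_S|H_W|n_0>.
m_K  = 1.0
E_n0 = 1.002
h    = 1.0e-3

def eigenvalues_2x2(a, b, d):
    """Eigenvalues of the real symmetric matrix [[a, b], [b, d]]."""
    mean = 0.5 * (a + d)
    disc = math.hypot(0.5 * (a - d), b)
    return mean - disc, mean + disc

lo, hi = eigenvalues_2x2(m_K, h, E_n0)  # the two perturbed levels
```

In the degenerate limit $E_{n_0}=m_K$ the two levels split symmetrically by $\pm|h|$, while for $|E_{n_0}-m_K|\gg|h|$ the lower level reproduces the familiar second-order shift $-|h|^2/(E_{n_0}-m_K)$.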
Finally, closer to the original spirit of Lellouch and Luscher, we substitute the phase shift $\delta(E)$ from Eq. \[eq:delta\_second\_order\] into Eq. \[eq:Luscher\] and require that the resulting equation be valid at the energy eigenvalues $E_\pm$ of the $2\times 2$ matrix in Eq. \[eq:fv\_2x2\] for a box chosen to make $E_{n_0}=m_K$. To zeroth order in $H_W$, this relation is the usual Luscher relation between $\delta_{\,0}(E)$ and the allowed, finite volume, $\pi-\pi$ energy. When Eq. \[eq:Luscher\] is expanded to first order, we reproduce the standard derivation of Lellouch and Luscher’s relation. Expanding to second order in $H_W$ yields the desired relation between the finite and infinite volume expressions for the second order weak contribution to the $K_S$ mass shift: $$\begin{aligned} \Delta m_{K_S} &=& \sum_{n \ne n_0}\frac{|\langle n|H_W|K_S\rangle|^2}{m_K-E_n} +\frac{1}{\frac{\partial (\phi+\delta_{\,0})}{\partial E}} \Bigg[\frac{1}{2}\frac{\partial^2 (\phi+\delta_{\,0})}{\partial E^2} |\langle n_0|H_W|K_S\rangle|^2 \nonumber \\ && -\frac{\partial}{\partial E_{n_0}}\left\{ \left.\frac{\partial(\phi+\delta_{\,0})}{\partial E}\right|_{E=E_{n_0}}\hskip -0.1in |\langle n_0|H_W|K_S\rangle|^2\right\}\Bigg] \label{eq:result}\end{aligned}$$ where Eq. \[eq:non-degenerate\], evaluated at $E_{n_0}=m_K$ with the pole terms subtracted, has been used to eliminate the second-order Born terms. Note that the $\partial/\partial E_{n_0}$ appearing in the final term in Eq. \[eq:result\] must be evaluated by varying the spatial volume which determines $E_{n_0}$.[^4] To obtain the $K_L-K_S$ mass difference, we first observe that the $K_L$ second order mass shift is given by a formula similar to Eq. \[eq:result\] in which $K_L$ replaces $K_S$ and all but the first term on the right-hand side is omitted since $K_L$ does not couple to two pions, assuming CP conservation. Second, if this new equation is subtracted from Eq. \[eq:result\], the result is similar to Eq.
\[eq:result\] with $\Delta m = \Delta m_{K_S} - \Delta m_{K_L}$ on the left-hand side, the first term on the right-hand side is simply $\Delta m_K^{FV}$ of Eq. \[eq:delta\_m\_FV\] and the remaining two $O(1/L^3)$ correction terms on the right hand side of Eq. \[eq:result\] are unchanged. Conclusion ========== We have proposed a lattice method to compute the $K_L-K_S$ mass difference in which all errors can be controlled at the percent level. Both short and long distance effects are represented, including a possibly $\Delta I = 1/2$-enhanced contribution from $I=0$ two pion states. Given the complexity of the analysis, the importance of physical kinematics and the difficulty of the disconnected diagrams this calculation is not practical today but may be possible in the next few years. The author thanks his RBC/UKQCD collaborators for important contributions to this work and Laurent Lellouch, Guido Martinelli and Stephen Sharpe for very helpful discussions. This work was supported in part by U.S. DOE grant DE-FG02-92ER40699. [^1]: For a discussion of the lattice QCD calculation of long distance effects in different decay processes see Ref. [@Isidori:2005tv]. [^2]: The author thanks Guido Martinelli and Stephen Sharpe for pointing out this behavior which had been overlooked when this talk was presented. [^3]: Treating the general case is not difficult: $H_W^{\Delta I=3/2}\cdot H_W^{\Delta I=3/2}$ and $H_W^{\Delta I=1/2}\cdot H_W^{\Delta I=1/2}$ can be analyzed in the same way while the combination $H_W^{\Delta I=1/2}\cdot H_W^{\Delta I=3/2}$ contains no two-pion intermediate states. [^4]: In a one-dimensional example, these derivative terms come naturally from a generalization of the usual contour integration relation between finite volume sums and infinite volume momentum integrals which includes the effects of a double pole arising from the vanishing on-shell energy denominator.
--- author: - 'Chieh-An Lin' - Martin Kilbinger bibliography: - 'Bibliographie\_Linc.bib' date: 'Received 20 October 2014 / Accepted 20 January 2015' subtitle: 'I. Comparison with $N$-body simulations[^1]' title: 'A new model to predict weak-lensing peak counts' --- Introduction {#sec:intro} ============ Weak gravitational lensing (WL) probes matter structures in the Universe. It contains information from the linear growth of structures to the recent highly nonlinear evolution, going from scales of hundreds of Mpc down to sub-Mpc levels. Until now, most studies have focused on two-point-correlation functions, but the non-Gaussianity of WL cannot be ignored if one aims for a deep understanding of cosmology. One simple way to extract higher order WL information is peak counting. Peaks are defined as local maxima of the projected mass measurement. They are particularly interesting for at least two reasons. First, peaks are tracers of high-density regions. While other tracers of halo mass such as optical richness, X-ray luminosity or temperature, or the SZ Compton-$y$ parameter depend on scaling relations and often require assumptions about the dynamical state of galaxy clusters such as isothermal equilibrium and relaxedness, lensing does not. It therefore provides us with a direct way to study cosmology with the cluster mass function. Second, the lensing signal is highly non-Gaussian, and two-point-function-only studies deprive one of the information richness beyond second order. For example, [@Dietrich_Hartlap_2010] show that parameter constraints can be highly improved by combining peak counts with second-order statistics, and [@Pires_etal_2012] find that peak counts capture more than the convergence skewness and kurtosis of the non-Gaussian information. Another advantage of WL peaks is the information they carry about the halo profile.
@Mainini_Romano_2014 showed that combining peak information with other cosmological probes provides an interesting way to study the mass-concentration relation. For studies of the mass function via X-ray or the SZ effect, most works have adopted a reverse-fitting approach. This means that from diverse observables, one first establishes the observed mass function and then fits it with a theoretical model. To extract the mass function, this process needs to reverse the effect of selection functions, to use scaling relations, and to make further assumptions about sample properties. Alternatively, one can proceed with a forward-modeling approach: starting from an analytical mass function, we compute predicted values for observables and compare them to the data to carry out parameter fits ([Fig. \[fig:diagram\]]{}). The corresponding forward application of selection functions is typically much simpler than its reverse. Moreover, instrumental effects can be easily included, and model uncertainties can be marginalized over. Forward modeling requires well-motivated models of physical phenomena, which is challenging in the case of observables derived from baryonic physics, yet [@Clerc_etal_2012] still provide a forward analysis from X-ray observations. For WL peak counts, however, computing the observable prediction is more straightforward, provided that some appropriate assumptions are made. ![image](Forward_reverse_modeling_1.pdf){width="16cm"} One of the difficulties of predicting WL peak counts is that peaks can come from several mass overdensities at various redshifts due to projection effects [@Jain_VanWaerbeke_2000; @Hennawi_Spergel_2005; @Kratochvil_etal_2010]. This makes counting nonadditive even in the linear regime, and the prediction becomes less trivial. To overcome this ambiguity, some previous works have used $N$-body simulations, e.g., [@Dietrich_Hartlap_2010].
They perform peak counts from $N$-body runs with different parameter sets to obtain confidence contours for constraints. However, since $N$-body simulations are very costly in terms of computation time, input parameter sets should be carefully chosen, and an interpolation of results is needed. Thus the resolution in the parameter space is limited, and the Fisher matrix is only available for the fiducial parameters. Alternatively, there have been several attempts at peak-count modeling. [@Maturi_etal_2010] propose to study contiguous areas of high-signal regions instead of peaks, and provide a model that predicts the amount of this alternative observable. Meanwhile, @Fan_etal_2010 [hereafter ] propose a model for convergence peaks by supposing that at most one halo exists on each line of sight. Both models are analytical and based on calculations from Gaussian random field theory. A comparison of the model with observation has been shown by [@Shan_etal_2014], using the data from the Canada-France-Hawaii Telescope Stripe 82 Survey. However, these models encounter difficulties with additional complications and subtleties. On the one hand, both models require Gaussian noise and linear filters; otherwise, the Gaussian random field theory becomes invalid. As a result, non-linear, optimized reconstruction methods of the projected overdensity are automatically excluded. On the other hand, realistic scenarios, such as mask effects and intrinsic ellipticity alignment, introduce asymmetric changes into the peak counts. The impact of these additional effects is unpredictable in purely analytical models. This encourages us to propose a new model for WL peak counts. In this paper, we adopt a probabilistic approach to forecasting peak counts. This can be handled by our <span style="font-variant:small-caps;">Camelus</span> algorithm (Counts of Amplified Mass Elevations from Lensing with Ultrafast Simulation).
Unlike $N$-body simulations which are very time-consuming, we create “fast simulations” by sampling halos from the mass function. The only requirement is a cosmology with a known mass function and halo mass profiles. To validate this method and to justify various hypotheses that our model makes, we compare results from our fast simulations to those from $N$-body runs. This approach is similar to the sGL model of [@Kainulainen_Marra_2009; @Kainulainen_Marra_2011; @Kainulainen_Marra_2011a], where they show that the stochastic process provides a quick and accurate way to recover the lensing signal distribution. The outline of this paper is as follows. In [Sect. \[sec:theory\]]{}, we recall some of the WL formalism and theoretical support for our model. In [Sect. \[sec:model\]]{}, a full description of our model is given. In [Sect. \[sec:simu\]]{}, we give the details concerning the $N$-body and the ray-tracing simulations. Finally, the results are presented in [Sect. \[sec:results\]]{}, before we summarize and conclude in [Sect. \[sec:conclu\]]{}. Theoretical basics {#sec:theory} ================== In this section, we define the formalism necessary for our analysis. To model the convergence field lensed by halos, we need to specify their profile, projected mass, and distribution in mass and redshift, which is the mass function. Weak lensing convergence {#subsec:WL} ------------------------ Observationally, galaxy shape distortions can be displayed at linear order in the form of the lensing distortion matrix $\mathcal{A}$. For an angular position $\btheta$, $\mathcal{A}(\btheta)$ is given by $$\begin{aligned} \mathcal{A}(\btheta) = \begin{pmatrix} 1-\kappa - \gamma_1 & - \gamma_2\\ - \gamma_2 & 1-\kappa + \gamma_1 \end{pmatrix},\end{aligned}$$ which defines two WL observables: convergence $\kappa$ and shear $\gamma$. The latter is a complex number given by $\gamma = \gamma_1 + \iiii \gamma_2$. 
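The parametrization of $\mathcal{A}$ can be inverted directly: $\kappa$ is one minus half the trace, $\gamma_1$ half the difference of the diagonal entries, and $\gamma_2$ minus the off-diagonal entry. A minimal round-trip sketch (toy numbers, no lensing computation involved):

```python
def distortion_matrix(kappa, gamma1, gamma2):
    """Lensing distortion matrix A(theta) built from (kappa, gamma1, gamma2)."""
    return [[1.0 - kappa - gamma1, -gamma2],
            [-gamma2, 1.0 - kappa + gamma1]]

def read_off(A):
    """Invert the parametrization: recover (kappa, gamma1, gamma2) from A."""
    kappa  = 1.0 - 0.5 * (A[0][0] + A[1][1])   # one minus half the trace
    gamma1 = 0.5 * (A[1][1] - A[0][0])         # half the diagonal difference
    gamma2 = -A[0][1]                          # minus the off-diagonal entry
    return kappa, gamma1, gamma2

# Round trip with invented values:
A = distortion_matrix(0.05, 0.02, -0.01)
```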
This linearization of the light distortion can be calculated explicitly in general relativity. Accordingly, the matrix elements are linked to second derivatives of the Newtonian gravitational potential $\phi$ by $$\begin{aligned} \mathcal{A}_{ij}(\btheta) = \delta_{ij} - \frac{2}{\cccc^2}\int_0^w \dddd w'\ \frac{f_K(w-w')f_K(w')}{f_K(w)}\ \phi_{,ij}\big(f_K(w')\btheta, w'\big),\end{aligned}$$ where $f_K$ is the comoving transverse distance and $\delta_{ij}$ the Kronecker delta. In particular, an explicit expression of $\kappa$ is given as follows [see, e.g., @Schneider_etal_1998], $$\begin{aligned} \label{for:WL_4} \kappa(\btheta, w) = \frac{3H^2_0 \Omega_\mmmm}{2\cccc^2} \int_0^w \dddd w'\ \frac{f_K(w-w')f_K(w')}{f_K(w)} \frac{\delta\big( f_K(w')\btheta, w' \big)}{a(w')},\end{aligned}$$ where $H_0$ is the Hubble parameter, $\Omega_\mmmm$ the matter density, $\cccc$ the speed of light, $a(w')$ represents the scale factor at the epoch to which the comoving distance from now is $w'$, and $\delta$ is the matter density contrast. Halo density profile and its projected mass {#subsec:NFW} ------------------------------------------- Consider now a dark matter (DM) halo with a Navarro-Frenk-White (NFW) density profile [@Navarro_etal_1996; @Navarro_etal_1997], given by $$\begin{aligned} \label{for:WL_5} \rho(r) = \frac{\rho_\ssss}{(r/r_\ssss)^\alpha (1+r/r_\ssss)^{3-\alpha}},\end{aligned}$$ where $\rho_\ssss$ and $r_\ssss$ are the characteristic mass density and the scale radius of the halo, respectively, and $\alpha$ is the inner slope parameter. The concentration parameter $\cNFW$ is defined as the ratio of the virial radius to the scale radius, $\cNFW = r_\vir/r_\ssss$. 
We assume the following expression [proposed by @Takada_Jain_2002]: $$\begin{aligned} \cNFW(z,M) = \frac{c_0}{1+z}\left(\frac{M}{M_\star}\right)^{-\beta},\end{aligned}$$ where $M$ is the halo mass and $M_\star$ the pivot mass such that $\delta_\cccc(z=0) = \sigma(M_\star)$, with $\delta_\cccc$ the threshold overdensity for the spherical collapse model, and $\sigma^2(M)$ is the variance of the density contrast fluctuation smoothed with a top-hat sphere with radius $R$ such that $M=\bar{\rho}_0(4\pi/3)R^3$. In this paper, we take $c_0 = 8$, $\alpha = 1$, and $\beta = 0.13$. The value of $\alpha$ corresponds to the classical NFW profile. The value of $\beta$ is provided by [@Bullock_etal_2001], and $c_0$ corresponds to the best-fit value, using $r_\vir$, $r_\ssss$, $z$, $M$ derived from the $N$-body simulations that we use and fixing $\beta$. For $\delta_\cccc$, we use the fitting formula of @Weinberg_Kamionkowski_2003 with $$\begin{aligned} \delta_\cccc(z) = \frac{3(12\pi)^{2/3}}{20} \left(1+\alpha\log_{10}\Omega_\mmmm(z)\right),\end{aligned}$$ and $$\begin{aligned} \alpha = 0.353w^4 + 1.044w^3 + 1.128w^2 + 0.555w + 0.131.\end{aligned}$$ Lensing by an NFW halo is characterized by its projected mass. 
More precisely, defining the scale angle $\theta_\ssss = r_\ssss/\Dl$ as the ratio of the scale radius to the angular diameter distance $\Dl$ between lens and observer, we get [following @Bartelmann_1996; @Takada_Jain_2003a] [^2] [^3] $$\begin{aligned} \label{for:WL_1} \kappa_\proj(\btheta) = \frac{2\rho_\ssss r_\ssss}{\Sigma_\cccc} G\left(\frac{\theta}{\theta_\ssss}\right),\end{aligned}$$ with $$\begin{aligned} \label{for:WL_2} \Sigma_\cccc = \frac{\cccc^2}{4\pi\GGGG} \frac{\Ds}{\Dl\Dls},\end{aligned}$$ where the quantities $\Ds$ and $\Dls$ are the angular diameter distances between source and observer, and lens and source, respectively, and $$\begin{aligned} \label{for:WL_3} G(x) =\left\{ \begin{array}{l} \hspace{-0.5em}\displaystyle -\frac{1}{1-x^2}\frac{\scalebox{0.8}{$\sqrt{\cNFW^2-x^2}$}}{\cNFW+1} + \frac{1}{(1-x^2)^{3/2}} \arcosh\left[ \frac{x^2+\cNFW}{\scalebox{0.9}{$x(\cNFW+1)$}} \right]\\[3ex] \hfill \text{if $x<1$;}\\ \displaystyle\frac{\scalebox{0.8}{$\sqrt{\cNFW^2-1}$}}{\cNFW+1} \cdot \frac{\cNFW+2}{3(\cNFW+1)} \hfill \text{if $x=1$;}\\[3ex] \displaystyle\frac{1}{x^2-1}\frac{\scalebox{0.8}{$\sqrt{\cNFW^2-x^2}$}}{\cNFW+1} - \frac{1}{(x^2-1)^{3/2}}\arccos\left[ \frac{x^2+\cNFW}{\scalebox{0.9}{$x(\cNFW+1)$}} \right]\\[3ex] \hfill \text{if $1<x\leq\cNFW$;}\\[1ex] \displaystyle 0 \hfill \text{if $x > \cNFW$.} \end{array}\right.\end{aligned}$$ We have truncated the projected mass distribution at $\theta = \cNFW\theta_\ssss$. Equation (\[for:WL\_1\]) is used and computed for the ray-tracing simulations with NFW halos. 
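The piecewise function $G(x)$ of Eq. (\[for:WL\_3\]) translates almost directly into code; the following sketch (our own, with hypothetical names) can be checked for continuity at $x = 1$ and for the truncation at $x = \cNFW$:

```python
import numpy as np

def g_nfw(x, c):
    """Truncated-NFW convergence profile G(x) of Eq. (for:WL_3)."""
    if x > c:
        return 0.0
    if x < 1.0:
        return (-np.sqrt(c*c - x*x) / ((1.0 - x*x) * (c + 1.0))
                + np.arccosh((x*x + c) / (x*(c + 1.0))) / (1.0 - x*x)**1.5)
    if x == 1.0:
        return np.sqrt(c*c - 1.0)/(c + 1.0) * (c + 2.0)/(3.0*(c + 1.0))
    return (np.sqrt(c*c - x*x) / ((x*x - 1.0) * (c + 1.0))
            - np.arccos((x*x + c) / (x*(c + 1.0))) / (x*x - 1.0)**1.5)
```

Both one-sided limits $x \to 1^{\mp}$ agree with the $x = 1$ value, and $G$ vanishes continuously at the truncation radius $x = \cNFW$.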
Halo mass function {#subsec:massFct}
------------------

The halo mass function $n(z,\lessM)$ gives the number density of halos with mass less than $M$ at redshift $z$ [^4], often characterized by a function $f(\sigma, z)$ as $$\begin{aligned} f(\sigma, z) \equiv \frac{M}{\bar{\rho}_0}\frac{\dddd n(z,\lessM)}{\dddd\ln\sigma\inv(z,M)},\end{aligned}$$ where $\bar{\rho}_0$ is the current matter density, and $\sigma(z,M)$ is defined as $\sigma(M)$ multiplied by the growth factor $D(z)$. In this study, we adopt the model proposed by [@Jenkins_etal_2001], in which a fit for $f$ is given as $$\begin{aligned} f(\sigma) = 0.315 \exp\left[ -\left|\ln\sigma\inv + 0.61\right|^{3.8} \right].\end{aligned}$$

A new model for WL peak counts {#sec:model}
==============================

Probabilistic approach: fast simulations {#subsec:ours}
----------------------------------------

Our model is based on the idea that we can replace $N$-body simulations with an alternative random process, such that the relevant observables are preserved, but the computation time is drastically reduced. We call this alternative process “fast simulations”, which are produced by the following steps:

1. generate halo masses by sampling from a mass function,
2. assign density profiles to the halos,
3. place the halos randomly on the field of view,
4. perform a ray-tracing simulation.

One can notice that we have made two major hypotheses. First, we assume that diffuse, unbound matter, for example cosmological filaments, does not significantly contribute to peak counts. Second, we suppose that the spatial correlation of halos has a minor influence, since this correlation is broken down in fast simulations. Previous work has shown that correlated structures influence the number and height of peaks by only a few percentage points [@Marian_etal_2010]. Furthermore, assuming a stochastic distribution of halos can lead to accurate predictions of the convergence probability distribution function [@Kainulainen_Marra_2009].
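The first of these steps, sampling halo masses from the mass function of Sect. \[subsec:massFct\], might be sketched schematically as follows. This is an inverse-transform sketch with a toy tabulated mass function, not the actual Camelus implementation (which is in C):

```python
import numpy as np

def f_jenkins(sigma):
    """Jenkins et al. (2001) fit for the multiplicity function."""
    return 0.315 * np.exp(-np.abs(np.log(1.0/sigma) + 0.61)**3.8)

def sample_masses(ln_m, dn_dlnm, n_halos, rng):
    """Inverse-transform sampling of a tabulated mass function dn/dlnM."""
    cdf = np.cumsum(dn_dlnm)
    cdf = cdf / cdf[-1]
    u = rng.uniform(size=n_halos)
    return np.exp(np.interp(u, cdf, ln_m))
```

In a full implementation, `dn_dlnm` would be built from $f(\sigma)$ and a $\sigma(M)$ relation following the definition above; here we only demonstrate the sampling step.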
One may also notice that halos can overlap in 3D space, and indeed we do not exclude this possibility. We test and validate these hypotheses in [Sect. \[sec:results\]]{}, and discuss possible improvements to our model in [Sect. \[sec:conclu\]]{}. Although we have chosen NFW profiles for the density of DM halos, any halo profile model for which the projected mass is known can of course be used, such as triaxial halos or profiles modified by baryonic feedback [@Yang_etal_2013]. In addition, our prediction model is completely independent of the method by which peaks are extracted from the weak-lensing data. The same analysis can be applied to data (or $N$-body simulations + ray-tracing) and to fast simulations. Moreover, survey characteristics, such as masks, photometric redshift errors, PSF residuals, and other systematics, can be incorporated and forward-propagated as model uncertainties. Furthermore, the halo sampling technique is much faster than a full $N$-body run. For instance, it only takes a dozen seconds on a single-CPU desktop computer to generate a box that is large enough for our use (see specifications in [Sect. \[subsec:fastSimu\]]{}). This is a probabilistic approach to forecasting peak counts, and we compare the convergence peaks obtained with those from full $N$-body runs in order to validate our forward model. This is described in [Sect. \[subsec:validation\]]{}.

Peak selection {#subsec:peak}
--------------

In this paper, we focus on convergence peaks. We follow a classical analysis used in former studies [e.g., @Hamana_etal_2004; @Wang_etal_2009; @Fan_etal_2010; @Yang_etal_2011] to extract peaks. First, we should highlight that $\kappa$ and $\kappa_\proj$ (respectively given by Eqs. and ) do not follow the same definition. Actually, [Eq. (\[for:WL\_1\])]{} can be recovered by replacing $\delta$ with $\rho/\bar{\rho}$ in [Eq. (\[for:WL\_4\])]{}.
This means that $\kappa_\proj$ does not take lensing by underdense regions into account and is shifted by a constant value, which corresponds to the mass-sheet degeneracy. To obtain a model that is consistent with a zero-mean convergence field, we subtract the mean value of $\kappa_\proj$ from our convergence maps, so that $$\begin{aligned} \kappa(\btheta) = \kappa_\proj(\btheta) - \overline{\kappa_\proj}.\end{aligned}$$ We use this approximation throughout this study when ray-tracing is done with projected mass. Consider now a reconstructed convergence field $\kappa_n(\btheta)$ in the absence of intrinsic ellipticity alignment. The presence of galaxy shape noise leads to the true lensing field $\kappa(\btheta)$ being contaminated by a linear additive noise field $n(\btheta)$, such that $$\begin{aligned} \label{for:model_1} \kappa_n(\btheta) = \kappa(\btheta) + n(\btheta).\end{aligned}$$ In general, $\kappa$ is dominated by $n$, and one way to suppress the noise is to apply a smoothing: $$\begin{aligned} K_N(\btheta) \equiv (\kappa_n\ast W)(\btheta) = \int\dddd\btheta'\ \kappa_n(\btheta-\btheta')W(\btheta'),\end{aligned}$$ where $W(\btheta)$ is a window function, chosen to be Gaussian in this study as $$\begin{aligned} W(\btheta) = \frac{1}{\pi\theta_\GGGG^2}\exp\left( -\frac{\theta^2}{\theta_\GGGG^2} \right),\end{aligned}$$ which is specified by the smoothing scale $\theta_\GGGG$ and normalized to unit integral. We denote by $K_N(\btheta)$, $K(\btheta)$, and $N(\btheta)$ the smoothed fields corresponding to [Eq. (\[for:model\_1\])]{}, such that $$\begin{aligned} K_N(\btheta) = K(\btheta) + N(\btheta),\end{aligned}$$ and set $\theta_\GGGG = 1$ arcmin in the following. If intrinsic ellipticities are uncorrelated between source galaxies, $N(\btheta)$ can be described as a Gaussian random field [@Bardeen_etal_1986; @Bond_Efstathiou_1987] for which the variance is related to the number of galaxies contained in the filter.
This is given by [@VanWaerbeke_2000] as $$\begin{aligned} \label{for:model_2} \sigma_\noise^2 = \frac{\sigma_\epsilon^2}{2}\frac{1}{2\pi n_\gggg \theta_\GGGG^2}.\end{aligned}$$ Here, $n_\gggg$ is the source galaxy number density, and $\sigma_\epsilon^2 = \langle\epsilon_1^2\rangle + \langle\epsilon_2^2\rangle$ is the variance of the intrinsic ellipticity distribution. We then define the lensing S/N as $$\begin{aligned} \nu(\btheta) \equiv \frac{K_N(\btheta)}{\sigma_\noise},\end{aligned}$$ and the peaks are extracted from the $\nu$ field, defined as pixels that have an S/N value higher than their eight neighbors. This implies that peak analyses require S/N values on a well-defined grid (e.g., a HEALPix grid). Furthermore, we suppose that source galaxies are uniformly distributed in this study, so $\sigma_\noise$ is a constant. However, this does not have to be true in general. In summary, convergence peaks are selected by the following steps:

1. compute the projected mass $\kappa_\proj(\btheta)$ by ray-tracing,
2. subtract the mean to obtain $\kappa(\btheta)$,
3. add the noise to obtain $\kappa_n(\btheta)$,
4. smooth the field and acquire $K_N(\btheta)$,
5. determine the S/N $\nu(\btheta)$, and
6. select local maxima and compute the density $n_\peak(\nu)$.

Only positive peaks are selected, and the analysis is based on the abundance histograms from peak counts. From fast simulations, through ray-tracing, to peak selection, the calculation is carried out by our <span style="font-variant:small-caps;">Camelus</span> algorithm.

Simulations {#sec:simu}
===========

$N$-body simulations {#subsec:aardvark}
--------------------

The $N$-body simulations “Aardvark”, provided by A. Evrard, have been used in this study. They were generated by <span style="font-variant:small-caps;">LGadget-2</span>, a DM-only version of <span style="font-variant:small-caps;">Gadget-2</span> [@Springel_2005].
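Before describing the simulation inputs further, steps 4–6 of the peak selection in Sect. \[subsec:peak\] can be sketched as follows. This is a simplified, single-scale illustration with our own function names (periodic boundaries, no masks); the Camelus code itself is in C:

```python
import numpy as np

def smooth(field, theta_g, pix):
    """Convolve a square map with the Gaussian window of scale theta_g
    (same units as the pixel size `pix`), via FFT with periodic boundaries.
    The Fourier transform of exp(-theta^2/theta_g^2)/(pi theta_g^2) is
    exp(-(pi theta_g)^2 k^2) with k in cycles per unit length."""
    n = field.shape[0]
    k = np.fft.fftfreq(n, d=pix)
    kx, ky = np.meshgrid(k, k)
    w_hat = np.exp(-(np.pi * theta_g)**2 * (kx**2 + ky**2))
    return np.fft.ifft2(np.fft.fft2(field) * w_hat).real

def find_peaks(nu):
    """Pixels with S/N strictly higher than all eight neighbors
    (border pixels are excluded)."""
    peaks = []
    for i in range(1, nu.shape[0] - 1):
        for j in range(1, nu.shape[1] - 1):
            patch = nu[i-1:i+2, j-1:j+2].flatten()
            if nu[i, j] > np.max(np.delete(patch, 4)):
                peaks.append((i, j))
    return peaks
```

The smoothing kernel has unit integral, so it preserves the mean of the map while suppressing small-scale noise; the peak finder then returns the grid positions of local maxima of the $\nu$ field.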
The Aardvark parameters were chosen to represent a WMAP-like $\Lambda$CDM cosmology, with $\Omega_\mathrm{m} = 0.23$, $\Omega_\Lambda = 0.77$, $\Omega_\mathrm{b} = 0.047$, $\sigma_8 = 0.83$, $h = 0.73$, $n_\mathrm{s} = 1.0$, and $w_0 = -1.0$. The DM halos in Aardvark were identified using the <span style="font-variant:small-caps;">Rockstar</span> friends-of-friends code [@Behroozi_etal_2013]. The field of view is 859 deg$^2$. This corresponds to a HEALPix patch with $n_\mathrm{side} = 2$ [for HEALPix, see @Gorski_etal_2005].

![image](massComp_aardvark_fast1.pdf){width="17cm"}

Fast simulations {#subsec:fastSimu}
----------------

As described in Sect. \[subsec:ours\], our model requires a mass function as input. We chose the model of @Jenkins_etal_2001 [see [Sect. \[subsec:massFct\]]{}] to sample halos. This is done in ten redshift bins from $z =$ 0 to 1. We set the sample mass range to the interval between $\dix{12}$ and $\dix{17}\ M_\odot/h$. For each halo, the NFW parameters were set to be $(c_0, \alpha, \beta) = (8.0, 1.0, 0.13)$. See [Sect. \[subsec:NFW\]]{} for their definitions. [Figure \[fig:massComp\]]{} shows an example of our halo samples, compared to the original mass function, and mass histograms established from the Aardvark simulations. Although halos with high mass can be $\dix{3}$–$\dix{5}$ times less populated than low-mass halos, our sampling is still in excellent agreement with the original mass function. One may notice a shift and a tilt in the Aardvark halo mass function for low and high redshifts; however, in these regimes the lensing efficiency is low because of the distance weight term $\Dl\Dls/\Ds$, so the impact of this mismatch is small.

Ray-tracing simulations {#subsec:RT}
-----------------------

For the Aardvark simulations, ray-tracing was performed with CALCLENS [@Becker_2013]. Galaxies were generated using ADDGALS (by M. Busha and R. Wechsler [^5]).
Ray-tracing information is available only on a subset of 53.7 deg$^2$ (a HEALPix patch with $n_\mathrm{side}=8$), which is 16 times smaller than the halo field. In this study, only galaxies at redshift between 0.9 and 1.1 were chosen for drawing the convergence map. This led to an irregular map, and in order to clearly define eight neighbors to identify peaks, we used a 2D-linear interpolation to obtain $\kappa$ values on a grid. This was done after carrying out a projection to Cartesian coordinates. For computational purposes, in order not to handle too many galaxies at a time, we split the field into four “ray-tracing patches”, the size of which is 13.4 deg$^2$ each (corresponding to $n_\mathrm{side}=16$). We then project the coordinates with respect to the center of each patch using the Gnomonic projection. The side lengths of the ray-tracing patches are between 3.5 and 6.2 deg, which is small enough to retain a good approximation. For the fast simulations and the two intermediate cases that we study in [Sect. \[subsec:validation\]]{}, source galaxies have a fixed redshift $z_\ssss=1.0$. They are regularly distributed on a HEALPix grid and placed at the center of pixels. Each ray-tracing pixel is a HEALPix patch with $n_\mathrm{side} =$ 16,384, for which the characteristic size is $\theta_\pix \approx$ 0.215 arcmin. Thus, the galaxy number density is $n_\gggg=1/\theta_\pix^2=21.7$ arcmin$\invSq$. Ray-tracing for fast simulations is carried out after splitting and projection to Cartesian coordinates. There are 64 ray-tracing patches in a halo field, and each patch contains 1024 $\times$ 1024 pixels. The convergence was computed using Eqs. , , and . Note that no mask was applied in this study.

Adding noise {#subsec:noise}
------------

Shape noise $n(\btheta)$ is added to each pixel after we obtain $\kappa(\btheta)$ from $N$-body runs or fast simulations.
It is modeled as a Gaussian random field, with a top-hat filter whose size corresponds to the pixel area $A_\pix$. Its variance is given by [@VanWaerbeke_2000] as $$\begin{aligned} \sigma_\pix^2 = \frac{\sigma_\epsilon^2}{2}\frac{1}{n_\gggg A_\pix}.\end{aligned}$$

![image](HPMap_patch0368_fast1_noise1_gauss1.pdf){width="17cm"}

We choose $\sigma_\epsilon=0.4$, which corresponds to a CFHTLenS-like survey, and $n_\gggg A_\pix$ is chosen to be 1 so that each pixel represents one galaxy. This leads to $\sigma_\pix\approx0.283$. We can also estimate $\sigma_\noise$ with [Eq. (\[for:model\_2\])]{} and obtain $\sigma_\noise \approx 0.024$. This shows that a real map is in general dominated by the noise ([Fig. \[fig:HPMap\]]{}). Even for a peak at $\nu=5$, the lensing signal is only on the order of $\kappa=0.12$, less than half of the pixel noise amplitude.

Results {#sec:results}
=======

Validation of our model: comparison to $N$-body runs {#subsec:validation}
----------------------------------------------------

To validate our model, we compare it to the $N$-body simulations. We compute peak abundance histograms from both simulations, together with two intermediate steps. This results in four cases in total:

- full $N$-body runs;
- replacing $N$-body halos with NFW profiles with the same masses;
- randomizing angular positions of halos from Case 2;
- fast simulations, corresponding to our model.

These cases form a progressive transition from full $N$-body runs toward our model. More precisely, Case 2 tests the hypothesis corresponding to the second step of our model (see [Sect. \[subsec:ours\]]{}); i.e., diffuse, unbound matter contributes little to peak counts. Case 3 additionally tests the assumption made in the third step (halo clustering plays a minor role). Finally, Case 4 completes our model with the missing first step. As a result, the halo population and their redshifts are identical to $N$-body runs in Cases 2 and 3.
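As a quick numerical cross-check of the sampling and noise levels quoted in Sects. \[subsec:RT\] and \[subsec:noise\] (our own back-of-the-envelope sketch), the HEALPix geometry and the two noise formulas reproduce the stated values:

```python
import numpy as np

nside = 16384
# HEALPix: 12 * nside^2 equal-area pixels on the sphere
pix_area = 4.0*np.pi / (12 * nside**2) * (180.0/np.pi * 60.0)**2  # arcmin^2
theta_pix = np.sqrt(pix_area)      # characteristic size, ~0.215 arcmin
n_gal = 1.0 / pix_area             # one galaxy per pixel -> ~21.7 arcmin^-2

sigma_eps = 0.4                    # intrinsic-ellipticity dispersion
sigma_pix = np.sqrt(sigma_eps**2 / 2.0)              # n_g * A_pix = 1
theta_g = 1.0                      # smoothing scale in arcmin
sigma_noise = np.sqrt(sigma_eps**2 / 2.0 / (2.0*np.pi * n_gal * theta_g**2))
```

This recovers $\theta_\pix \approx 0.215$ arcmin, $n_\gggg \approx 21.7$ arcmin$^{-2}$, $\sigma_\pix \approx 0.283$, and $\sigma_\noise \approx 0.024$, in line with the text.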
![image](peakHist_smallField.pdf){width="17cm"}

[Figure \[fig:small\_field\]]{} shows the peak abundance histograms for all four cases. In this section, the field of view is 53.7 deg$^2$, since we are limited by the available ray-tracing information for the $N$-body runs. For Cases 1 and 2, we compute the average in each histogram bin for eight noise maps. For Cases 3 and 4, this is done with eight realizations (of randomization and of fast simulations, respectively) and eight noise maps, thus 64 maps in total. The error bars therefore refer to the combination of the statistical fluctuation due to the random process and the shape noise uncertainty. For low peaks with $\nu\leq 3.75$, we observe that $n_\peak(\nu)$ remains almost unchanged between the different cases. This is not surprising because in this regime, $n_\peak(\nu)$ mainly originates from noise. This argument is supported by the noise-only peak histogram. The lower panel of [Fig. \[fig:small\_field\]]{} shows that there are some systematic overcounts in this regime on the order of 10%. The cause of this bias is unclear. One possibility might be the use of NFW profiles for ray-tracing simulations. It might also come from the subtraction of the mean $\kappa$ value from the maps. We leave this to future studies. Another observation in this regime is that by adding the signal to the noise field, the number of peaks with $\nu\leq2.75$ decreases. This shows that the effect of noise is not additive for peak counts. In the regime of $\nu\geq3.75$, we observe that replacement by NFW profiles enhances the peak counts, while position randomization introduces an opposite effect of a similar order of magnitude. The enhancement from Case 2 may be explained by the halo triaxiality.
A spherical profile such as the NFW model may lead to an overestimation of the projected mass at the center of halos if the major axis is not aligned with the line of sight, and this would probably be the case for most of the $N$-body halos. It could also be an effect of the $M$-$c$ relation: we might overestimate $\cNFW$ for large $M$. Comparing Cases 2 and 3, we discover that position randomization decreases peak counts by 10% to 50%. Apparently, decorrelating angular positions breaks down the two-halo term, so that halos overlap less on the field of view, which decreases high-peak counts. [@Yang_etal_2011] shows that high peaks with $\nu\geq4.8$ are mainly contributed by a single halo, while about 12% of total high-peak counts are contributed by multiple halos. This number agrees with the undercount from our hypothesis of randomization. Combining this step with the former one, we confirm that considering the lensing contribution from spatially decorrelated clusters is a good approximation for peak counts. The impact of the mass function is shown by comparing Case 3 to Case 4. Peak counts are more numerous in our forward model based on the mass function of [@Jenkins_etal_2001]. This excess compensates for the deficit from randomization. However, as shown by [Fig. \[fig:massComp\]]{}, the real mass function in $N$-body runs is consistent with the analytical model that we use, except for the low-mass deficit tails from the $N$-body runs. To test the impact of this, we ran fast simulations with different lower limits for the halo sampling, and we find that peak counts do not depend on the lower sampling limit $M_\min$ when $M_\min$ remains lower than $10^{13}\ \Msol/h$. This shows that the deficit tails are not the cause of the peak count enhancement. Lacking an explanation, we may have to test another $N$-body simulation set to understand the origin of this effect.

![Similar plot to [Fig. \[fig:small\_field\]]{}, but in a larger field.
Cases 2, 3, and 4 are carried out for 859 deg$^2$. Case 1 should only be taken as an indication, since its field size is the same as in [Fig. \[fig:small\_field\]]{}, and therefore 16 times smaller than Cases 2–4. The fluctuation in high $\nu$ bins is much reduced compared to [Fig. \[fig:small\_field\]]{}.[]{data-label="fig:large_field"}](peakHist_largeField.pdf){width="\columnwidth"}

[Figure \[fig:large\_field\]]{} shows a similar study of Cases 2, 3, and 4 for a larger field of 859 deg$^2$. One can recover the same effects: compensation of effects deriving from NFW profiles and randomization. Therefore, the difference between our model and $N$-body simulations is of the same order of magnitude as that between the analytical and the $N$-body mass functions. We would like to point out that the Poisson fluctuation has been largely suppressed. A quick calculation shows that, for a given peak density $n$ and a survey area $A$, the ratio of the Poisson noise to peak density is $1/\sqrt{nA}$. The error bars for high peaks in both [Fig. \[fig:small\_field\]]{} and [Fig. \[fig:large\_field\]]{} stay within 50% of the values given by this formula. As a result, we argue that to reduce the Poisson fluctuation to the level of 10%, a survey of more than 150 deg$^2$ is preferable using WL peaks with $\nu \lesssim 5.25$ and 800 deg$^2$ using peaks with $\nu \lesssim 6.25$.

![Comparison of the FSL model (orange triangles) to our model (cyan diamonds). The full $N$-body peak histogram is shown as a blue line. In the lower panel, we draw the difference between the FSL model and $N$-body data within an orange dashed line. The cyan-colored zone represents the error bars for our model. The field of view for fast simulations is 859 deg$^2$. The $N$-body data is only indicative.[]{data-label="fig:Fan_vs_ours"}](peakHist_Fan_vs_ours.pdf){width="\columnwidth"}

Comparison to an analytical model {#subsec:Fan}
---------------------------------

In [Fig.
\[fig:Fan\_vs\_ours\]]{}, we draw peak histograms obtained from the FSL analytical model and from our model. The computation for the FSL model is done with the same halo profiles and parameters, and the same mass function. For our model, we use our large-field result as mentioned in the previous section. Both models are computed with the same parameter set as the Aardvark $N$-body simulation inputs. We observe that the FSL model is also in good agreement with $N$-body runs. The prediction from the FSL model is more consistent with $N$-body values for high-peak counts, whereas our model performs better in the low-peak regime. In general, the deviation of both models for $\nu\leq5.25$ stays under 25%.

Sensitivity tests on cosmological parameters {#subsec:param}
--------------------------------------------

![image](peakHist_param.pdf){width="17cm"}

Finally, we show how our model depends on cosmological parameters. Weak lensing is particularly sensitive to $\Omega_\mmmm$ and $\sigma_8$, hence we carry out nine series of fast simulations for which $(\Omega_\mmmm, \sigma_8)$ is chosen from $\left\{\Omega_\mmmm^{(N)}, \Omega_\mmmm^{(N)}\pm \Delta\Omega_\mmmm\right\}$ $\times$ $\left\{\sigma_8^{(N)}, \sigma_8^{(N)}\pm \Delta\sigma_8\right\}$, where $\Omega_\mmmm^{(N)}$ and $\sigma_8^{(N)}$ are input from our $N$-body runs. The values of $\Delta\Omega_\mmmm$ and $\Delta\sigma_8$ are chosen to be 0.03 and 0.05, respectively, and the remaining parameters are identical to the $N$-body simulations. Each scenario is the average over 16 combinations of four fast simulation realizations and four noise maps. [Figure \[fig:param\]]{} shows four plots that correspond to four variation directions on the $\Omega_\mmmm$-$\sigma_8$ parameter plane, with respect to $(\Omega_\mmmm^{(N)}, \sigma_8^{(N)})$. Both upper panels show the variation of only one parameter.
They reveal that our model produces a clear, progressive difference in peak abundance in every bin from $\nu = 4$ to 6. We notice that the differences between cyan diamonds (higher value of $\Omega_\mmmm$ or $\sigma_8$) and red squares ($N$-body value) are always narrower than those between green circles (lower value of $\Omega_\mmmm$ or $\sigma_8$) and red squares. This is due to the banana-shaped constraint on the $\Omega_\mmmm$-$\sigma_8$ plane, for which a horizontal or vertical cut results in an asymmetric confidence level for a single parameter. The two lower panels are variations in the diagonal and anti-diagonal directions. As expected, the diagonal variation is the most efficient discriminant of $\Omega_\mmmm$-$\sigma_8$. In contrast, peak counts for different parameter sets merge completely in the lower right-hand panel, since the anti-diagonal direction corresponds roughly to the degeneracy lines. Furthermore, all error bars (for $3.75 \leq \nu \leq 6.25$) remain smaller than 5%, which shows the robustness of our model. We recall that blue solid lines correspond to a small 53.7 deg$^2$ field, such that the Poisson noise might bias high peak counts, as explained in [Sect. \[subsec:validation\]]{}. Overall, the ability of our model to distinguish different cosmological models is confirmed. [Figure \[fig:param\]]{} also shows that systematic biases of our model could lead to parameter biases. A simple interpolation for the bin $\nu=5$ shows that $N$-body peak counts correspond to a cosmology with $\Omega_\mmmm \approx 0.212$ if the knowledge of $\sigma_8$ is perfect. The bias is then $\Delta\Omega_\mmmm \approx 0.018$. Similarly, the bias on $\sigma_8$ is $\Delta\sigma_8 \approx 0.030$ if $\Omega_\mmmm$ is known. The origin of the biases of our model is complex. We discuss a list of possible improvements to reduce potential systematics in the following section.
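As a side note, the survey-area estimates given at the end of Sect. \[subsec:validation\] follow from the $1/\sqrt{nA}$ scaling; schematically (the peak densities below are our own illustrative inversions of the quoted areas, not measured values):

```python
def poisson_ratio(n_peak, area):
    """Ratio of the Poisson noise to the peak density: 1/sqrt(n*A),
    with n_peak in deg^-2 and area in deg^2."""
    return 1.0 / (n_peak * area)**0.5

def area_needed(n_peak, target=0.1):
    """Survey area that pushes the Poisson fluctuation below `target`."""
    return 1.0 / (n_peak * target**2)
```

A 10% fluctuation requires $nA = 100$; the quoted 150 and 800 deg$^2$ would then correspond to cumulative peak densities of roughly 0.67 and 0.13 deg$^{-2}$, respectively.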
Summary and perspectives {#sec:conclu}
========================

WL peaks probe cosmological structures in a straightforward way, since they are directly induced by total-mass gravitational effects, and they especially probe the high-mass part of the mass function. Unlike other tracers, WL peaks provide a forward-fitting approach to study the mass function and cosmology. This makes WL peaks a very competitive candidate for improving our knowledge about structure formation. In this paper, we presented a new model that predicts weak-lensing peak counts. We generated fast simulations by sampling halos from analytical expressions. By assuming that halos in these simulations are randomly distributed on the sky, we count peaks from ray-tracing maps obtained from these simulations to predict number counts. In this model, we have supposed that unbound matter contributes little to the lensing and that halo clustering has little impact on peak counts. We validated our approach by comparing number counts with $N$-body results. In particular, we focused on peaks with $\nu \approx$ 4–6, since lower $\nu$ are dominated by shape noise, and higher $\nu$ are dominated by the Poisson fluctuation. We showed how the three steps corresponding to the main assumptions of our model influence convergence peak abundance. First, NFW profiles tend to shift some medium peaks to higher values, despite the absence of unbound matter. Second, the number of peaks decreases when halo positions are randomized. Last, the difference between the $N$-body mass function and the analytical one is observable in the resulting peak counts. In summary, our model is in good agreement with results from full $N$-body runs. We also tested the dependence of our model on $\Omega_\mmmm$ and $\sigma_8$. For an 859 deg$^2$ sky area, the Poisson fluctuation is reduced to a reasonable level for peaks with $\nu\lesssim 6$.
It turns out that different scenarios are discernible for $\nu \gtrsim 4$, with a degeneracy direction corresponding roughly to the anti-diagonal in the $\Omega_\mmmm$-$\sigma_8$ plane. Tests on a large set of different parameters are feasible with our model thanks to the short computation time. Our probabilistic model has other potential advantages. Repeated simulations for the same cosmological parameters generate the distribution of observables. This allows us to compare observations with our model without the need to define a likelihood function or to assume any Gaussian distribution. For example, model discrimination can be carried out using the false discovery rate method [FDR, @Benjamini_Hochberg_1995 an application can be found in @Pires_etal_2009a], approximate Bayesian computation [ABC, see for example @Cameron_Pettitt_2012; @Weyant_etal_2013], or other statistical techniques. Another powerful advantage of our model is its flexibility. Additional effects such as intrinsic ellipticity alignment, alternative methods such as nonlinear filters, and realistic survey settings, such as mask effects, magnification bias [@Liu_etal_2014a], shape measurement errors [@Bard_etal_2013], and photo-$z$ errors, can all be modeled in this peak-counting framework. The forward-modeling approach allows for a straightforward inclusion and marginalization of model uncertainties and systematics. Several improvements to our model are possible. Using perturbation theory, we may take halo clustering into account in fast simulations. This can be done with fast algorithms, such as <span style="font-variant:small-caps;">PTHalos</span> [@Scoccimarro_Sheth_2002], <span style="font-variant:small-caps;">Pinocchio</span> [@Monaco_etal_2002 see also @Heisenberg_etal_2011], and remapping LPT [@Leclercq_etal_2013]. In addition, we can go beyond the idealized setting considered in this work by including a realistic source distribution, intrinsic alignment, mask effects, etc.
We also expect that nonlinear filters and tomography studies may yield more refined cosmological results from peak counting. Finally, peak counts can be supplemented with additional WL observables, such as magnification and flexion. The <span style="font-variant:small-caps;">Camelus</span> algorithm is implemented in the C language. It requires the <span style="font-variant:small-caps;">Nicaea</span> library for cosmological computations. The <span style="font-variant:small-caps;">Camelus</span> source code is released via the website [^6]. This study is supported by Région d’Île-de-France under grant DIM-ACAV and the French national program for cosmology and galaxies (PNCG). The authors acknowledge the anonymous referee for useful comments and suggestions. We would like to thank August Evrard for providing $N$-body simulations. We also thank Zuhui Fan, Xiangkun Liu, and Chuzhong Pan for constructive comments on the preprint. Chieh-An Lin is very grateful for inspiring discussions with François Lanusse, Yohan Dubois, and Michael Vespe.

[^1]: The <span style="font-variant:small-caps;">Camelus</span> source code is released via the website <http://www.cosmostat.org/software/camelus/>

[^2]: The convention of [@Takada_Jain_2003a] is different from ours. Their $d_\AAAA$ is actually $f_K$ in our notation, and they also express the virial radius $r_\vir$ in comoving coordinates.

[^3]: For computational purposes, $2\rho_\ssss r_\ssss = (Mf\cNFW^2) / (2\pi r_\vir^2)$, where $f = [\ln(1+\cNFW) - \cNFW/(1+\cNFW)]\inv$.

[^4]: Some papers define the mass function as $\tilde{n}(z,M)$, where $\tilde{n}(z,M) = \dddd n(z,\lessM) / \dddd M$.

[^5]: <http://bitbucket.org/mbusha/addgals>

[^6]: <http://www.cosmostat.org/software/camelus/>
--- abstract: 'Automatic charge prediction aims to predict appropriate final charges according to the fact descriptions for a given criminal case. Automatic charge prediction plays a critical role in assisting judges and lawyers to improve the efficiency of legal decisions, and thus has received much attention. Nevertheless, most existing works on automatic charge prediction perform adequately on high-frequency charges but are not yet capable of predicting few-shot charges with limited cases. In this paper, we propose a **S**equence **E**nhanced **Caps**ule model, dubbed as SECaps model, to relieve this problem. Specifically, following the work of capsule networks, we propose the seq-caps layer, which considers sequence information and spatial information of legal texts simultaneously. Then we design an attention residual unit, which provides auxiliary information for charge prediction. In addition, our SECaps model introduces focal loss, which relieves the problem of imbalanced charges. Compared with state-of-the-art methods, our SECaps model obtains considerable absolute improvements of 4.5% and 6.4% in Macro F1 on Criminal-S and Criminal-L, respectively. The experimental results consistently demonstrate the superiority and competitiveness of our proposed model.' author: - Congqing He - 'Li Peng^()^' - Yuquan Le - Jiawei He - Xiangyu Zhu bibliography: - 'mybibliography.bib' title: 'SECaps: A Sequence Enhanced Capsule Model for Charge Prediction' --- Conclusion ========== In this paper, we focus on the few-shot problem of charge prediction according to the fact descriptions of criminal cases. To alleviate the problem, we propose a Sequence Enhanced Capsule model for charge prediction. In particular, our SECaps model employs the seq-caps layer, which captures sequence characteristics and abstracts high-level semantic features simultaneously, combined with focal loss, which handles the imbalance of charges.
Experiments on the real-world datasets show that our SECaps model achieves $69.4\%$, $69.6\%$, $79.5\%$ Macro F1 on three datasets respectively, surpassing existing state-of-the-art methods by a considerable margin.
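For readers unfamiliar with the focal loss mentioned above, the sketch below gives a minimal NumPy version of the standard multi-class focal loss of Lin et al.; the paper does not specify its exact $\gamma$/$\alpha$ settings or framework, so the function name, defaults, and interface here are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=1.0, eps=1e-12):
    """Multi-class focal loss: FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t).

    probs   : (N, C) array of predicted class probabilities.
    targets : (N,) array of integer class labels.
    The (1 - p_t)^gamma factor down-weights well-classified examples,
    so rare (few-shot) charges contribute relatively more to the loss.
    """
    p_t = probs[np.arange(len(targets)), targets]
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```

Setting `gamma=0` recovers ordinary cross-entropy; increasing `gamma` progressively suppresses the contribution of easy (typically high-frequency) charges, which is the mechanism by which focal loss relieves charge imbalance.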
--- abstract: 'The local structure of the REOFeAs (RE=La, Pr, Nd, Sm) system has been studied as a function of chemical pressure, varied through the rare-earth size. Fe K-edge extended X-ray absorption fine structure (EXAFS) measurements in the fluorescence mode have permitted a systematic comparison of the inter-atomic distances and their mean square relative displacements (MSRD). We find that the Fe-As bond length and the corresponding MSRD hardly show any change, suggesting the strongly covalent nature of this bond, while the Fe-Fe and Fe-RE bond lengths decrease with decreasing rare-earth size. The results provide important information on the atomic correlations that could have direct implications for the superconductivity and magnetism of the REOFeAs system, with the chemical pressure being a key ingredient.' address: - '$^{1}$ Dipartimento di Fisica, Università di Roma “La Sapienza", P. le Aldo Moro 2, 00185 Roma, Italy' - '$^{2}$ Laboratoire CRISMAT, CNRS UMR 6508, ENSICAEN, Boulevard du Marechal Juin, 14050 Caen, France' - '$^{3}$ Dept of Physics and Astronomy, Vrije Universiteit, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands' - '$^{4}$ European Synchrotron Radiation Facility, 6 RUE Jules Horowitz BP 220 38043 Grenoble Cedex 9 France' - '$^{5}$ Department of Materials Science and Engineering, State University of New York at Stony Brook, Stony Brook, NY 11794, USA' - '$^{6}$ Brookhaven National Laboratory, Upton, NY 11973, USA' author: - 'A. Iadecola$^{1}$, S. Agrestini$^{2}$, M. Filippi$^{3}$, L. Simonelli$^{4}$, M. Fratini$^{1}$, B. Joseph$^{1}$, D.
Mahajan$^{5,6}$ and N.L.Saini$^{1}$' title: 'Local structure of $RE$FeAsO ($RE$=La, Pr, Nd, Sm) oxypnictides studied by Fe K-edge EXAFS' --- The recent discovery of high T$_{c}$ superconductivity in LaOFeAs [@Kamihara] has triggered intensive research activities on REOFeAs (RE=rare earth) oxypnictides, producing a large number of publications focusing on different aspects of these materials [@JPSJIssue; @NewJP; @Izyumov]. One of the interesting aspects of these materials is the competition between the spin density wave (SDW) and the superconductivity [@JZhaoNatP; @HuangPRB; @JZhaoPRB]. Indeed, the undoped compound REOFeAs is antiferromagnetically ordered (albeit a poor metal) and shows a structural phase transition [@JZhaoNatP; @HuangPRB; @JZhaoPRB; @DCruzNat; @Fratini; @Margadonna; @Karolina]. With doping, the system becomes superconducting, and both the structural transition and the SDW transition disappear [@JPSJIssue; @NewJP; @Izyumov]. In addition, while the maximum T$_{c}$ of the doped system increases with decreasing rare-earth ion size [@JPSJIssue; @NewJP; @Izyumov; @REnEPL; @CHLee], the structural transition temperature of the undoped system decreases [@JZhaoNatP; @HuangPRB; @JZhaoPRB; @DCruzNat; @Fratini; @Margadonna; @Karolina]. These observations show an interesting interplay between structure, magnetism and superconductivity, with the chemical pressure and structural topology being important parameters. It is known that a mere knowledge of the long-range ordered structure is generally insufficient to describe the electronic functions of a system with interplaying electronic degrees of freedom. Indeed, this has been shown for transition metal oxides, in which electronic functions like superconductivity, colossal magnetoresistance and metal-insulator transitions are related to interplaying charge-spin-lattice degrees of freedom [@StrBond].
Therefore, a detailed knowledge of the atomic structure of the REOFeAs oxypnictides could provide timely feedback to theoretical models correlating structure, magnetism and superconductivity in these materials. Extended X-ray absorption fine structure (EXAFS) is a site-selective method, providing information on the local atomic distribution around a selected absorbing atom through photoelectron scattering [@Konings]. Recently, Zhang et al [@Oyanagi] have studied the local structure of doped and undoped LaOFeAs by Fe K-edge and As K-edge measurements, reporting a temperature-dependent anomaly in the Fe-As correlations at low temperature. This study was followed by Tyson et al [@Tyson], who measured the same system at the Fe K-edge and found no evidence of such anomalies. Here we address a different aspect and exploit Fe K-edge EXAFS to explore the local structure of REOFeAs with varying rare-earth size (RE=La (1.16 Å), Pr (1.13 Å), Nd (1.11 Å), Sm (1.08 Å)). The results reveal the strongly covalent nature of the Fe-As bond, which hardly shows any change with rare-earth size, while the Fe-Fe and Fe-RE bonds show a systematic size dependence. On the other hand, the mean square relative displacements (MSRD), determined from the correlated Debye-Waller (DW) factors, of the Fe-Fe bond length decrease with decreasing rare-earth size, while those of Fe-RE seem to increase. Again, the MSRD of the Fe-As bond remains almost unchanged with the chemical pressure, underlining the stiffness of this bond. Fe K-edge X-ray absorption measurements were performed on powder samples of REOFeAs (RE=La, Pr, Nd, Sm) prepared using the solid state reaction method [@RenEPL2]. Prior to the absorption measurements, the samples were characterized for phase purity and average structure by X-ray diffraction measurements [@Fratini].
The X-ray absorption measurements were made at the beamline BM29 of the European Synchrotron Radiation Facility (ESRF), Grenoble, where the synchrotron radiation emitted by a bending magnet source at the 6 GeV ESRF storage ring was monochromatized using a double crystal Si(311) monochromator. The Fe K$_{\alpha}$ fluorescence yield was collected using a multi-element Ge detector array. A simultaneous transmission signal was measured to ensure that the observed signal represents true X-ray absorption; however, it was not possible to obtain an absorption signal in transmission mode without a contribution of the rare-earth L$_{I}$-edge (6.267 keV, 6.835 keV, 7.126 keV and 7.737 keV respectively for La, Pr, Nd and Sm, to be compared with the Fe K-edge at 7.112 keV), and hence the choice was to opt for the partial absorption signal measured by fluorescence detection for a systematic comparison. The samples were mounted in a continuous-flow He cryostat to perform the measurements at low temperature (40 K). The sample temperature was controlled and monitored within an accuracy of $\pm$1 K. Several absorption scans were measured to ensure reproducibility of the spectra and a high signal-to-noise ratio. A standard procedure was used to extract the EXAFS signal from the absorption spectrum [@Konings], followed by the X-ray fluorescence self-absorption correction before the analysis. Figure 1 shows the Fe K-edge EXAFS oscillations of the REOFeAs samples at low temperature (40 K), extracted from the X-ray absorption spectra measured on the powder samples. The EXAFS oscillations are weighted by k$^{2}$ to highlight the higher k-region. There are evident differences between the EXAFS oscillations due to the differing local structure of REOFeAs with different RE atoms (see e.g. the oscillation around k=6-8 $\AA^{-1}$ and in the k range above $\sim$10-14 $\AA^{-1}$).
The differences in the local structure can be better appreciated in the Fourier transforms of the EXAFS oscillations, providing real space information. Figure 2 shows the magnitude of the Fourier transforms, $|$FT(k$^{2}\chi$(k))$|$. The Fourier transforms are not corrected for the phase shifts due to the photoelectron back-scattering and represent raw experimental data. The main peak at $\sim$2.4 $\AA$ is due to the Fe-As (4 As atoms at a distance $\sim$2.4 $\AA$) and Fe-Fe (4 Fe atoms at a distance $\sim$2.8 $\AA$) bond lengths, while the peak at $\sim$3.6 $\AA$ corresponds to the Fe-RE bond length (4 RE atoms at a distance $\sim$3.72 $\AA$). While the main Fourier transform peak at $\sim$2.4 $\AA$ appears to shift towards higher R-values, the Fe-RE peak appears with a decreased amplitude with decreasing rare-earth size. The evident shift of the main peak is due to the increased amplitude of the Fe-Fe scattering, derived from the decreasing Fe-Fe bond length and corresponding MSRD (discussed later). The EXAFS amplitude depends on several factors and is given by the following general equation [@Konings]: $$\chi(k)= \sum_{i}\frac{N_{i}S_{0}^{2}}{kR_{i}^{2}}f_{i}(k,R_{i}) e^{-\frac{2R_{i}}{\lambda}} e^{-2k^{2}\sigma_{i}^{2}} \sin[2kR_{i}+\delta_{i}(k)]\nonumber$$ Here N$_{i}$ is the number of neighboring atoms at a distance R$_{i}$, S$_{0}^{2}$ is the passive electrons reduction factor, f$_{i}$(k,R$_{i}$) is the backscattering amplitude, $\lambda$ is the photoelectron mean free path, and $\sigma_{i}^{2}$ is the correlated Debye-Waller (DW) factor, measuring the mean square relative displacements (MSRD) of the photoabsorber-backscatterer pairs. Apart from these, the photoelectron energy origin E$_{0}$ and the phase shifts $\delta_{i}$ should be known. We have used a conventional procedure to analyze the EXAFS signal [@Konings] due to three shells, i.e., the Fe-As, Fe-Fe and Fe-RE scatterings.
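As an illustrative sketch of the amplitude equation above, the following evaluates a single shell's contribution to $\chi(k)$. The backscattering amplitude and phase-shift functions are placeholders (real analyses, e.g. with EXCURVE, use calculated functions), and the default parameter values are only loosely representative of the Fe-As shell.

```python
import numpy as np

def shell_chi(k, N=4, S0sq=1.0, R=2.4, sigma2=0.003, lam=6.0,
              f=lambda k, R: 1.0, delta=lambda k: 0.0):
    """Single-shell EXAFS signal chi(k) following the standard formula:
    chi(k) = N*S0^2/(k*R^2) * f(k,R) * exp(-2R/lam) * exp(-2 k^2 sigma2)
             * sin(2kR + delta(k)).
    N backscatterers at distance R (angstrom), correlated DW factor sigma2
    (angstrom^2), photoelectron mean free path lam (angstrom); f and delta
    are placeholder backscattering amplitude and phase-shift functions.
    """
    return (N * S0sq / (k * R**2) * f(k, R)
            * np.exp(-2.0 * R / lam)
            * np.exp(-2.0 * k**2 * sigma2)
            * np.sin(2.0 * k * R + delta(k)))
```

A larger DW factor `sigma2` (larger MSRD) damps the oscillations more strongly at high k, which is exactly the sensitivity exploited in the fits discussed below.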
Except for the radial distances R$_{i}$ and the corresponding DW factors $\sigma_{i}^{2}$, all other parameters were kept fixed in the least squares fit (S$_{0}^{2}$=1). The EXCURVE9.275 code was used for the model fit with calculated backscattering amplitudes and phase shift functions [@excurve]. The number of independent data points, N$_{ind}\sim$(2$\Delta$k$\Delta$R)/$\pi$ [@Konings], was 16 for the present analysis ($\Delta$k=11 Å$^{-1}$ (k=3-14 Å$^{-1}$) and $\Delta$R=2.5 Å). Starting parameters were taken from the diffraction studies [@JZhaoNatP; @HuangPRB; @JZhaoPRB; @DCruzNat; @Fratini; @Margadonna; @Karolina; @Ozawa]. A representative three-shell model fit is shown with the experimental Fourier transform as an inset in Figure 2. The average radial distances as a function of the rare earth atom are shown in Figure 3. There is a gradual decrease of the average Fe-Fe and Fe-RE distances (two upper panels) with decreasing rare-earth size, consistent with the diffraction studies showing decreasing lattice parameters (the a-axis and c-axis as a function of the rare earth atom are shown as insets) [@Ozawa]. On the other hand, the Fe-As distance (lower middle panel) does not show any appreciable change with the rare-earth atom size, revealing the strongly covalent nature of this bond. Within experimental uncertainties, this appears to be consistent with the diffraction results [@JZhaoNatP; @HuangPRB; @JZhaoPRB; @DCruzNat; @Fratini; @Margadonna; @Karolina; @CHLee]. Using the bond lengths measured by EXAFS, we can directly determine the opening angle at the top of the Fe$_{4}$As tetrahedron (the Fe-As-Fe angle $\theta_{3}$), considered to be the key to the superconductivity in these materials [@JZhaoNatP]. The Fe-As-Fe angle $\theta_{3}$ has been calculated using the formula $\theta_{3}$=$\pi$-2cos$^{-1}$($\frac{d_{Fe-Fe}}{\sqrt{2}d_{Fe-As}}$). The Fe-As-Fe angle $\theta_{3}$ is shown in Fig. 3.
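The $\theta_{3}$ formula above is straightforward to evaluate; a short sketch follows (bond lengths in Å; the inputs are illustrative values close to the distances quoted in the text).

```python
import math

def theta3(d_fe_fe, d_fe_as):
    """Fe-As-Fe apex angle of the Fe4As tetrahedron, in degrees:
    theta3 = pi - 2*arccos(d_FeFe / (sqrt(2) * d_FeAs))."""
    return math.degrees(
        math.pi - 2.0 * math.acos(d_fe_fe / (math.sqrt(2.0) * d_fe_as)))
```

For $d_{Fe-Fe}/d_{Fe-As}=2/\sqrt{3}$ (e.g. $d_{Fe-Fe}\simeq 2.77$ Å with $d_{Fe-As}\simeq 2.4$ Å) the formula returns the ideal tetrahedral angle 109.47$^{o}$, and the angle grows as the Fe-Fe distance shrinks less than the Fe-As distance would require.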
The Fe-As-Fe angle $\theta_{3}$ is consistent with the earlier studies, revealing a perfect Fe$_{4}$As tetrahedron [@CHLee; @JZhaoNatP] for SmOFeAs. Figure 4 shows the correlated DW factors as a function of the rare earth atom, measuring the MSRD of the different bond lengths. The MSRD of the Fe-Fe (middle panel) and Fe-RE pairs (upper panel) appear to depend on the rare-earth size, while we could hardly see any change in that of the Fe-As pairs, indicating again the stiffness of the latter. The MSRD of the Fe-Fe pair shows a clear decrease with decreasing rare-earth size, as does the Fe-Fe bond length (Fig. 3). Incidentally, the MSRD of the Fe-RE pair appears to show a small increase with decreasing rare-earth size, albeit the change is smaller than that of the Fe-Fe bond length (upper panel). Recently, Tyson et al [@Tyson] have reported the temperature dependence of the Fe-As MSRD for doped and undoped LaOFeAs, showing that, while the Einstein frequency of the Fe-As mode does not change with doping, there is a small decrease of the static contribution to the MSRD. The results of Tyson et al [@Tyson] are consistent with the strongly covalent nature of the Fe-As bond length. On the other hand, the same authors have shown an increased Einstein frequency with doping for the Fe-Fe pair, indicating enhanced Fe-Fe correlations. In summary, we have measured the local structure of REOFeAs with variable rare-earth ion (RE), revealing the highly covalent nature of the Fe-As bond length. In addition, the Fe-Fe and Fe-RE local atomic correlations show a systematic change with the rare earth ion size, evidenced by the MSRD of the respective bond lengths.
Considering the conventional superconductivity mechanism in the strong coupling limit [@McMillan; @Santi], the electron-phonon interaction parameter is inversely proportional to the phonon frequency, i.e., proportional to the MSRD (the zero point motion dominates at low temperature and hence $\sigma^{2}\approx \hbar/(2\omega_{E}m_{r})$, where m$_{r}$ is the reduced mass and $\omega_{E}$ is the Einstein frequency of the pair). Since the T$_c$ of the REOFeAs (if doped) increases with decreasing rare-earth size, it is reasonable to think that the Fe-Fe phonon modes may not have a direct role in the superconductivity (the Fe-Fe MSRD decreases with decreasing rare-earth size). In contrast, the Fe-RE MSRD tends to show a small increase (or remains constant) with decreasing rare-earth size, and may be somehow contributing to the superconductivity; however, more experiments are needed to address this issue. Although it is difficult to quantify the role of local electron-phonon coupling in correlating magnetism and superconductivity, the presented results certainly provide timely experimental information on the local atomic fluctuations, which could be important feedback for new models describing the fundamental properties of REOFeAs with doping and chemical pressure. Acknowledgments {#acknowledgments .unnumbered} =============== The authors thank the ESRF staff for the help and cooperation during the experimental run. We also acknowledge Zhong-Xian Zhao (Beijing) for providing high quality samples for the present study, and Antonio Bianconi for stimulating discussions and encouragement. One of us (DM) would like to acknowledge $\prime$La Sapienza$\prime$ University of Rome for the financial assistance and hospitality. This research has been supported by COMEPHS (under the FP6 STREP Controlling mesoscopic phase separation).\ [0]{} Y. Kamihara, T. Watanabe, M. Hirano, and H. Hosono, J. Am. Chem. Soc., 130, 3296 (2008). see e.g. special issue of J. Phys. Soc. Jpn. Suppl.
77 SC (2008). see e.g., focus issue on iron based superconductors, New J Phys 11, 025003 (2009). see e.g., a short review by Yu A Izyumov, E.Z. Kurmaev, Phys. Usp. 51 1261 (2008). J. Zhao, Q. Huang, C. de la Cruz, S. Li, J. W. Lynn, Y. Chen, M. A. Green, G. F. Chen, G. Li, Z. Li, J. L. Luo, N. L. Wang and P. Dai, Nature Materials 7, 953 (2008). Q. Huang, Jun Zhao, J. W. Lynn, G. F. Chen, J. L. Luo, N. L. Wang, P. Dai, Phys. Rev. B 78 054529 (2008) Jun Zhao, Q. Huang, C. de la Cruz, J. W. Lynn, M. D. Lumsden, Z. A. Ren, Jie Yang, X. Shen, X. Dong, Z.X. Zhao, P. Dai, Phys. Rev. B 78 132504 (2008) C. De la Cruz, Q. Huang, J.W. Lynn, J. Li, W. Ratcliff II, J.L. Zarestky, H.A. Mook, G.F. Chen, J.L. Luo, N.L. Wang, P. Dai, Nature 453, 899 (2008). M. Fratini, R. Caivano, A. Puri, A. Ricci, Z.A. Ren, X.L Dong, J. Yang, W. Lu, Z.X. Zhao, L. Barba, G. Arrighetti, M. Polentarutti, A. Bianconi, Supercond. Sci. Technol. 21 092002 (2008); M. Fratini et al (unpublished). S. Margadonna, Y. Takabayashi, M. T. McDonald, M. Brunelli, G. Wu, R. H. Liu, X. H. Chen, K. Prassides, Phys. Rev. B 79, 014503 (2009). K. Kasperkiewicz, J.W. G. Bos, A. N. Fitch, K. Prassides, S. Margadonna, Chem. Commun., 707 (2009). Z.A. Ren, G.C. Che, X.L. Dong, J. Yang, W. Lu, W. Yi, X.L. Shen, Z.C. Li, L.L. Sun, F. Zhou and Z. X. Zhao, Europhys. Lett. 83 17002 (2008). C.-H. Lee, A. Iyo, H. Eisaki, H. Kito, M.T. Fernanadez-Diaz, T. Ito, K. Kihou, H. Matsuhata, M. Braden, K. Yamada, J. Phys. Soc. of Japan 77, 083704 (2008); K. Miyazawa, K. Kihou, P. M. Shirage, C-H. Lee, H. Kito, H. Eisaki, and A. Iyo, J. Phys. Soc. of Japan, 78, 034712 (2009);C.-H. Lee, A. Iyo, H. Eisaki, H. Kito, M.T. Fernandez-diaz, R. Kumai, K. Miyazawa, K. Kihou, H. Matsuhata, M. Braden and K. Yamada, J. Phys. Soc. of Japan, 77, 44-46 (2008). see e.g. a review by A. Bianconi, N .L. Saini, STRUCTURE AND BONDING 114: 287-330 (2005). X-ray Absorption: Principles, Applications, Techniques of EXAFS, SEXAFS, XANES, edited by R. Prinz and D. 
Koningsberger (Wiley, New York, 1988). C. J. Zhang, H. Oyanagi, Z. H. Sun, Y. Kamihara, H. Hosono, Phys. Rev. B 78, 214513 (2008) T. A. Tyson, T. Wu, J. Woicik, B. Ravel, A. Ignatov, C. L. Zhang, Z. Qin, T. Zhou, S.-W. Cheong, arXiv:0903.3992. Z-A. Ren, J. Yang, W. Lu, W. Yi, X-L. Shen, Z-C. Li, G-C. Che, X-L. Dong, L.L. Sun, F. Zhou, and Z.X. Zhao, Europhys. Lett. 82, 57002 (2008). S.J. Gurman, J. Synch. Rad. 2, 56-63 (1995). Tadashi C. Ozawa, Susan M. Kauzlarich Sci. Technol. Adv. Mater. 9, 033003 (2008). W. L. McMillan, Phys. Rev. 167, 331 (1968); P. B. Allen and R. C. Dynes, Phys. Rev. B 12, 905 (1975). G. Santi, S. B. Dugdale, and T. Jarlborg, Phys. Rev. Lett. 87, 247004 (2001) ![\[fig:epsart\] EXAFS oscillations (multiplied by k$^{2}$) extracted from the Fe K-edge absorption spectra measured on REOFeAs system (RE =La, Pr, Nd, Sm) at low temperature (40 K) and corrected for the fluorescence self-absorption effect.](fig1a.eps){width="120mm"} ![\[fig:epsart\]Fourier transforms of the Fe K-edge EXAFS oscillations showing partial atomic distribution around the Fe in the REOFeAs system. The Fourier transforms are performed between k$_{min}$=3 $\AA^{-1}$ and k$_{max}$=14 $\AA^{-1}$ using a Gaussian window. The peak positions do not represent the real distances as the FTs are not corrected for the phase shifts. The inset shows a phase corrected Fourier transform (symbols) (for the LaOFeAs) with a fit over three shells, i.e., Fe-As, Fe-Fe and Fe-RE.](fig2a.eps){width="120mm"} ![\[fig:epsart\] Fe-As (lower middle), Fe-Fe (upper middle) and Fe-RE (upper) distances at 40 K as a function of rare-earth atom. While the Fe-As bond lengths hardly show any change, the Fe-Fe and Fe-RE bonds change with rare earth size. The insets (two upper panels) show shrinkage of the lattice parameters with the decreasing rare-earth size [@Ozawa; @Fratini]. The bond lengths derived from the diffraction are included in the three panels for the comparison. 
Error bars represent the average uncertainties estimated by creating correlation maps. The Fe-As-Fe angle $\theta_{3}$ determined using the EXAFS data (circles) is also shown (lower) with the dotted line at $\theta_{3}$=109.5$^{o}$ corresponding to a perfect tetrahedron. The $\theta_{3}$ determined from the diffraction data is shown for comparison (triangles). The vertical bars in the lower panel represent the span of the $\theta_{3}$ measured by diffraction experiments on the REOFeAs [@Ozawa; @CHLee; @JZhaoNatP; @Fratini]. The inset shows the cartoon picture of Fe-As-Fe angles[@CHLee; @JZhaoNatP].](fig3a.eps){width="120mm"} ![\[fig:epsart\]Mean square relative displacements (MSRD) of the Fe-As (lower), Fe-Fe (middle) and Fe-RE (upper) pairs at 40 K as a function of rare-earth size. As the bond length (Fig. 3), the MSRD of Fe-As hardly show any change indicating strongly covalent nature of this bond. On the other hand, the MSRD of Fe-Fe shows a decrease while that of the Fe-RE tending to increase from LaOFeAs to SmOFeAs.](fig4a.eps){width="120mm"}
--- abstract: | Internal gravity waves play a primary role in geophysical fluids: they contribute significantly to mixing in the ocean and they redistribute energy and momentum in the middle atmosphere. Until recently, most studies were focused on plane wave solutions. However, these solutions are not a satisfactory description of most geophysical manifestations of internal gravity waves, and it is now recognized that internal wave beams with a confined profile are ubiquitous in the geophysical context. We will discuss the reason for the ubiquity of wave beams in stratified fluids, related to the fact that they are solutions of the nonlinear governing equations. We will focus more specifically on situations with a constant buoyancy frequency. Moreover, in light of recent experimental and analytical studies of internal gravity beams, it is timely to discuss the two main mechanisms of instability for those beams. i) The Triadic Resonant Instability generating two secondary wave beams. ii) The streaming instability corresponding to the spontaneous generation of a mean flow. author: - 'Thierry Dauxois, Sylvain Joubaud, Philippe Odier and Antoine Venaille' title: Instabilities of Internal Gravity Wave Beams --- internal waves, instability, mean-flow INTRODUCTION ============ Internal gravity waves play a primary role in geophysical fluids [@SutherlandBook]: they contribute significantly to mixing in the ocean [@wunsch2004] and they redistribute energy and momentum in the middle atmosphere [@fritts2003]. The generation and propagation mechanisms are fairly well understood, as for instance in the case of oceanic tidal flows [@GarrettKunze2007]. By contrast, the dissipation mechanisms, together with the understanding of observed energy spectra resulting from nonlinear interactions between those waves, are still debated [@Johnstonetal2003; @MacKinnonWinters2005; @RainvillePinkel2006; @CalliesFerrariBuhler; @Alford2015; @SarkarScotti2016]. 
Several routes towards dissipation have been identified, from wave-mean flow interactions to cascade processes, but this remains a fairly open subject from both theoretical [@Craik; @NazarenkoBook] and experimental points of view [@StaquetSommeria2002]. The objective of this review is to present important recent progress that sheds new light on the nonlinear destabilization of internal wave beams, bridging part of the gap between our understanding of their generation mechanisms based mostly on linear analysis, and their subsequent evolution through nonlinear effects. Until recently, most studies were focused on plane wave solutions, which are introduced in classical textbooks [@GillBook]. Strikingly, such plane waves are not only solutions of the linearized dynamics, but also of the nonlinear equations [@McEwan1973; @Akylas2003]. However, spatially and temporally monochromatic internal wave trains are not a satisfactory description of most geophysical internal gravity waves [@Sutherland2013]. Indeed, oceanic field observations have rather reported internal gravity beams with a confined profile [@LienGregg2001; @Coleetal2009; @Johnstonetal2011]. In the atmosphere, gravity waves due to thunderstorms also often form beam-like structures [@Alexander2003]. Oceanic wave beams arise from the interaction of the barotropic tide with sea-floor topography, as has been recently studied theoretically and numerically [@Khatiwala2003; @Lamb2004; @MaugeGerkema], taking into account transient, finite-depth and nonlinear effects, ignored in the earlier seminal work by [@Bell1975]. The importance of those beams has also been emphasized recently in quantitative laboratory experiments [@GostiauxDauxois2007; @ZhankKingSwinney2007; @PeacockEcheverriBalmforth2008]. From these different works, it is now recognized that internal wave beams are ubiquitous in the geophysical context. 
The interest in internal gravity wave beams resonates with the usual pedagogical introduction to internal waves, the Saint Andrew’s cross, which comprises four beams generated by oscillating a cylinder in a stratified fluid [@MowbrayRarity1967]. Thorough studies of internal wave beams can be found in [@voisin2003]. Moreover, [@Akylas2003] have realized that an inviscid uniformly stratified Boussinesq fluid supports time-harmonic plane waves invariant in one transverse horizontal direction, propagating along a direction determined by the frequency [(and the medium through the buoyancy frequency)]{}, with a general spatial profile in the cross-beam direction. These wave beams are not only fundamental to the linearized dynamics but, like sinusoidal wavetrains, happen to be exact solutions of the nonlinear governing equations. Remarkably, [@Akylas2003] showed that the steady-state similarity linear solution for a viscous beam [@ThomasStevenson1972] is also valid in the nonlinear regime. In light of the recent experimental and analytical studies of those internal gravity [wave]{} beams, it is thus timely to study their stability properties. The structure of the review is the following. First, in section \[sectionIntroductive\], we introduce the subject by presenting concepts, governing equations and approximations that lead to the description of gravity waves in stratified fluids. We place special emphasis on the peculiar role of nonlinearities to explain why internal gravity wave beams are ubiquitous solutions in oceans and middle atmospheres. Then, in section \[TriadicResonanceInstability\], we discuss the classic Triadic Resonant Instability that corresponds to the destabilization of a primary wave with the spontaneous emission of two secondary waves, of lower frequencies and different wave vectors. In addition to the simple case of plane waves, we discuss in detail the generalization to wave beams with a finite width.
Section \[StreamingInstability\] is dedicated to the streaming instability, the second important mechanism for the instability of internal gravity wave beams, through the generation of a mean flow. Finally, in section \[ConclusionsPerspectives\], we draw some conclusions and discuss the main future issues. THE DYNAMICS OF STRATIFIED FLUIDS AND ITS SOLUTIONS {#sectionIntroductive} =================================================== Basic Equations --------------- Let us consider an incompressible non-rotating stratified Boussinesq fluid in Cartesian coordinates ($x$,$y$,$z$), where $z$ is the direction opposite to gravity. The Boussinesq approximation amounts to neglecting density variations with respect to a constant reference density $\rho_{\mathrm{ref}}$, except when those variations are associated with the gravity term $g$. The relevant field to describe the effect of density variations is then the buoyancy field $b_{\mathrm{tot}}=g\left(\rho_{\mathrm{ref}}-\rho\right)/\rho_{\mathrm{ref}}$, with $\rho(\mathbf{r},t)$ the full density field, $\mathbf{r}$=($x$,$y$,$z$) the space coordinates and $t$ the time coordinate. Let us call $\rho_0(z)$ the density of the flow at rest, with buoyancy frequency $N(z)=(-g\left(\partial_z \rho_0\right)/\rho_{\mathrm{ref}})^{1/2}$. The corresponding buoyancy profile $g\left(\rho_{\mathrm{ref}}-\rho_0\right)/\rho_{\mathrm{ref}}$ is denoted $b_0$. The buoyancy frequency $N$ varies in principle with the depth $z$. In the ocean, $N$ is rather large in the thermocline and weaker in the abyss. For the sake of simplicity, however, $N$ will be taken constant in the remainder of the paper. This approximation, which looks drastic at first sight, can be relaxed when $N$ varies smoothly by relying on the WKB approximation, which greatly eases the theoretical analysis.
The equations of motion can be written as a dynamical system for the perturbed buoyancy field $b=b_{\mathrm{tot}}-b_0$ and the three components of the velocity field $\mbox{\boldmath $u$}$ = ($u_x$,$u_y$,$u_z$): $$\begin{aligned} \nabla\cdot \mbox{\boldmath $u$} &=& 0, \label{eq:div_u}\\ \partial_t \mbox{\boldmath $u$} + \mbox{\boldmath $u$}\cdot \nabla \mbox{\boldmath $u$}&=& -\frac{1}{\rho_{\mathrm{ref}}}\nabla p +b \mbox{\boldmath $e$}_z + \nu \nabla^2 \mbox{\boldmath $u$}, \label{eq:NS_strat}\\ \partial_t b + \mbox{\boldmath $u$}\cdot \nabla b +u_z N^2 &=&0 , \label{eq:cons_masse}\end{aligned}$$ with $p(\mbox{\boldmath $r$},t)$ the pressure variation with respect to the hydrostatic equilibrium pressure $P_0(z)=P_{0}(0)-\int_{0}^z \rho_{0}(z') g \mathrm{d} z'$, and $\nu$ the kinematic viscosity. We have neglected the molecular diffusivity, which would imply a term $D\nabla^2 b$ on the right-hand side of Equation (\[eq:cons\_masse\]), with $D$ the diffusion coefficient of the stratifying element (molecular diffusivity for salt, thermal diffusivity for temperature). The importance of the dissipative terms with respect to the nonlinear ones is described by the Reynolds number $UL/\nu$ and the Peclet number $UL/D$, with $U$ and $L$ typical velocity and length scales, or equivalently by the Reynolds number and the Schmidt number $\nu/D$. In many geophysical situations, both Reynolds and Peclet numbers are large, and molecular effects can be neglected at lowest order. In such cases, the results do not depend on the Schmidt number. In laboratory settings, the Peclet number is often also very large, at least when the stratification agent is salt, in which case $D\approx10^{-9}$ m$^2\cdot$s$^{-1}$. However, the viscosity of water is $\nu\approx 10^{-6}$ m$^2\cdot$s$^{-1}$, and the corresponding Reynolds numbers are such that viscous effects can play an important role, as we will see later. Let us first consider the simplest case of a two-dimensional flow, which is invariant in the transverse $y$-direction.
The non-divergent two-dimensional velocity field is then conveniently expressed in terms of a streamfunction $\psi(x,z)$ as $\mbox{\boldmath $u$}=(\partial_z\psi,0,-\partial_x\psi)$. Introducing the Jacobian $J(\psi,b)=\partial_x \psi\, \partial _zb - \partial_x b\, \partial _z \psi$, the dynamical system (\[eq:div\_u\]), (\[eq:NS\_strat\]) and (\[eq:cons\_masse\]) is expressed as $$\begin{aligned} \partial_{t}\nabla^2 \psi + J(\nabla^2 \psi , \psi) &=& -\partial_x b+\nu \nabla^4 \psi,\label{equationenpsi} \\ \partial_t b+ J(b,\psi) - {N^2 }\partial_x \psi &=& 0. \label{equationenrho}\end{aligned}$$ Differentiating Equation (\[equationenpsi\]) with respect to time and Equation (\[equationenrho\]) with respect to the spatial variable $x$, and subtracting the latter from the former, one gets finally $$\begin{aligned} \partial_{tt}\nabla^2 \psi +N^2 \partial_{xx} \psi &=& \nu \nabla^4 \partial_t\psi + \partial_t J( \psi , \nabla^2 \psi) + \partial_x J(b,\psi),\label{equationenpsietrho}\end{aligned}$$ describing the nonlinear dynamics of non-rotating non-diffusive viscous stratified fluids in two dimensions. Linear Approximation -------------------- In the linear approximation, assuming vanishing viscosity, the right-hand side of Equation (\[equationenpsietrho\]) immediately vanishes leading to the following wave equation for the streamfunction $$\begin{aligned} \partial_{tt}\nabla^2 \psi + N^2 \partial_{xx}\psi &=& 0. \label{eq_disp_gravity_non_viscous}\end{aligned}$$ This equation is striking for several reasons. First, its mathematical structure is clearly different from the traditional d’Alembert equation. Indeed, the spatial differentiation appears at second order in both terms. 
Time-harmonic plane waves with frequency $\omega$, wave vector $\mbox{\boldmath $k$}=(\ell,0,m)$ and wavenumber $k=|\mbox{\boldmath $k$}|=(\ell^2+m^2)^{1/2}$ are solutions of Equation (\[eq\_disp\_gravity\_non\_viscous\]), if the dispersion relation for internal gravity waves $$\begin{aligned} \omega=\pm N \frac{\ell}{k} = \pm N \sin \theta, \label{eq_disp_gravity_theta}\end{aligned}$$ is satisfied. $\theta$ is the angle between wavenumber $\mbox{\boldmath $k$}$ and the vertical. The second important remark is that contrary to the usual concentric waves emitted from the source of excitation when considering the d’Alembert equation, here four different directions of propagation are possible depending on the sign of $\ell$ and $m$. This is an illustration of the anisotropic propagation due to the vertical stratification. The third remarkable property is that the dispersion relation features the angle of propagation rather than the wavelength, emphasizing a clear difference between internal waves and surface waves. This is also a crucial property for this review since it will allow us to define beams with a general profile, rather than with a single wavenumber. Nonlinear Terms {#NLterms} --------------- ### Plane Wave Solutions It is striking and pretty unusual that plane waves are solutions of the [inviscid]{} nonlinear equation (\[equationenpsietrho\]) even for large amplitudes. Indeed, the streamfunction of the plane wave solution is a Laplacian eigenmode, with $\nabla^2 \psi=-k^2\psi$. Consequently, the first Jacobian term vanishes in Equation (\[equationenpsietrho\]). Equation (\[equationenpsi\]) leads [therefore]{} to the so-called polarization relation $b=-\left( N^2\ell /\omega\right) \psi \equiv{\cal P}\psi$, with ${\cal P}$ the polarization prefactor. Consequently, the second Jacobian in (\[equationenpsietrho\]) vanishes: $J(\psi,{\cal P}\psi)=0$. 
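For the plane-wave ansatz, the linear wave equation reduces to the algebraic condition $\omega^{2}k^{2}=N^{2}\ell^{2}$, so the dispersion relation $\omega=N\sin\theta$ can be checked numerically. The following minimal sketch (with illustrative parameter values) does exactly that.

```python
import math

def dispersion_omega(N, theta):
    """Internal-wave frequency from the dispersion relation omega = N*sin(theta)."""
    return N * math.sin(theta)

def wave_equation_residual(N, ell, m, omega):
    """For psi = exp(i(ell*x + m*z - omega*t)), the operator
    d_tt Laplacian(psi) + N^2 d_xx psi reduces, per unit psi, to
    omega^2*(ell^2 + m^2) - N^2*ell^2; this residual vanishes on the
    dispersion relation."""
    return omega**2 * (ell**2 + m**2) - N**2 * ell**2
```

Note that the residual depends on the propagation angle $\theta$ only through $\ell/k=\sin\theta$, not on the wavelength, which is the anisotropy property emphasized in the text.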
To conclude, both nonlinear terms in Equation (\[equationenpsietrho\]) vanish for plane wave solutions, which are therefore solutions of the nonlinear equation for any amplitude.

### Internal Wave Beams

Since the frequency $\omega$ is independent of the wavenumber, it is possible to devise more general solutions, time-harmonic with the same frequency $\omega$, by superposing several linear solutions associated with the same angle of propagation, but with different wavenumbers $k$ [@McEwan1973; @Akylas2003]. Introducing the along-beam coordinate $\xi = x \cos \theta - z \sin \theta$, defined along the direction of propagation, and the cross-beam coordinate $\eta = x \sin \theta + z \cos \theta$ (see **Figure \[profilselonetabb\]**), the plane wave solution can be written as $$\begin{aligned} \psi(x,y,z,t) &=& \psi_0 \, e^{i(\ell x+mz- \omega t)}+\textrm{c.c.}= \psi_0\, e^{ik \eta }\,e^{-i \omega t}+\textrm{c.c.}\,,\label{ondesplanesbis}\end{aligned}$$ since $\ell=k \sin\theta$ and $m=k\cos\theta$. If one introduces $Q(\eta)=ik \psi_0 e^{ik\eta }$, one obtains the velocity field $\mbox{\boldmath $u$}= Q(\eta) (\cos\theta ,{0},-\sin \theta) e^{-i\omega t } +{c.c.}$ and the buoyancy perturbation $b=-i({{\cal P}}{/k})Q(\eta) e^{-i\omega t } +{c.c.}\,.$ One can actually obtain a wider class of solutions by considering an arbitrary complex amplitude $Q(\eta)$. Indeed, the fields $\mbox{\boldmath $u$}$ and $b$ do not depend on the longitudinal variable $\xi$. Consequently, after the change of variables, the Jacobians, which read $J(\psi,b)=\partial_\xi \psi\, \partial _\eta b - \partial_\xi b\, \partial_\eta \psi$, simply vanish, making the governing equations linear. As discussed in [@Akylas2005], note that uni-directional beams, in which energy propagates in one direction, involve plane waves with wavenumbers of the same sign only: $Q(\eta)=\int_0^{+\infty}A(k)e^{ik\eta} \mbox{d}k$ or $Q(\eta)=\int_{-\infty}^0A(k)e^{ik\eta} \mbox{d}k$.
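The invariance argument above can be illustrated with a short numerical sketch (Python; the angle, the number of modes and the random spectrum $A(k)$ are arbitrary choices, not taken from any particular experiment). A uni-directional profile is built by superposing wavenumbers of the same sign sharing the angle $\theta$, and the resulting streamfunction snapshot is checked to depend only on the cross-beam coordinate $\eta$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.radians(25.0)

# superpose modes sharing the propagation angle theta (hence the same
# frequency omega = N sin(theta)) but with different wavenumbers k > 0
ks = rng.uniform(1.0, 10.0, size=20)                   # wavenumber moduli
amps = rng.normal(size=20) + 1j * rng.normal(size=20)  # spectrum A(k)

def psi(x, z, t=0.0, N=1.0):
    """Snapshot of the streamfunction of a uni-directional wave beam."""
    eta = x * np.sin(theta) + z * np.cos(theta)        # cross-beam coordinate
    phase = 1j * (ks * eta - N * np.sin(theta) * t)
    return float(np.real(np.sum(amps * np.exp(phase))))

# moving along the beam direction changes xi but not eta, so the
# streamfunction is unchanged: the beam is uniform along xi
x0, z0, d = 0.3, -0.7, 2.5
x1, z1 = x0 + d * np.cos(theta), z0 - d * np.sin(theta)
assert np.isclose(psi(x0, z0), psi(x1, z1))
```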
We see that the class of propagating waves that are solutions of the nonlinear dynamics in a Boussinesq stratified fluid is much more general than plane wave solutions: there is a whole family of solutions corresponding to uniform plane waves in the longitudinal direction $\xi$, but with a general profile in the cross-beam direction $\eta$, as represented in **Figure \[profilselonetabb\]**. ![ (a) Schematic representation of an internal wave beam and definition of the longitudinal and cross-beam coordinates $\xi$ and $\eta$, of the angle of inclination $\theta$, and finally of the group and phase velocities $c_g$ and $c_\varphi$. (b) Geometry of a uniform (along $\xi$) internal wave beam inclined at an angle $\theta$ with respect to the horizontal. The beam profile varies in the cross-beam $\eta$ direction, and the associated flow velocity is in the along-beam direction $\xi$. The transverse horizontal direction is denoted by $y$. []{data-label="profilselonetabb"}](./figure1.pdf "fig:"){width="70.00000%"} -.5truecm [@Akylas2003] have generalized those results by computing asymptotic solutions for a slightly viscous nonlinear wave beam with amplitude slowly modulated along $\xi$ and in time. After considerable manipulation, it turns out that all leading-order nonlinear advective-acceleration terms in the governing equations of motion vanish, and a uniform (along $\xi$) beam, regardless of its profile (along $\eta$), represents an exact nonlinear solution in an unbounded, inviscid, uniformly stratified fluid. This result not only extends the validity of the [@ThomasStevenson1972] steady-state similarity solution to the nonlinear regime, but emphasizes how nonlinearity has only relatively weak consequences. This has profound and useful outcomes for the applicability of results obtained with linear theory in comparisons with field observations, laboratory experiments or numerical simulations.
The vanishing of the nonlinear contributions is really unexpected and results from the combination of numerous different terms. [@Akylas2003] noticed, however, that the underlying reason for the seemingly miraculous cancellation of the resonant nonlinear terms was the very same one that had already been pointed out by [@DauxoisYoung1999]. After lengthy calculations, in both cases, the reason is a special case of the Jacobi identity $J\left[A,J(B,C) \right]+J\left[C,J(A,B) \right] +J\left[J(A,C),B \right]=0$. [@DauxoisYoung1999] were studying near-critical reflection of a finite amplitude internal wave on a slope to heal the singularity occurring in the solution of [@Phillips1966]. Using matched asymptotics, they took a distinguished limit in which the amplitude of the incident wave, the dissipation, and the departure from criticality are all small. In the end, although the reconstructed fields do contain nonlinearly driven second harmonics, they obtained the striking and unusual result that the final amplitude equation happens to be a linear equation. The underlying reason was again this Jacobi identity.[^1] To conclude, the effects of nonlinearities on plane waves or wave beams exhibit very peculiar properties. There are two important points to keep in mind. First, plane waves and internal wave beams are solutions of the full equation. Second, identifying a solution does not mean that it is a stable one. This remark is at the core of the present review: we will focus in the following on the behavior of wave beams with respect to the triadic resonant and the streaming instabilities.

TRIADIC RESONANT INSTABILITY {#TriadicResonanceInstability}
============================

Introduction
------------

It was first realized fifty years ago that internal gravity plane waves are unstable to infinitesimal perturbations, which grow to form temporal and spatial resonant triads [@DavisAcrivos1967; @McEwan1971; @Mied1976].
This nonlinear instability produces two secondary waves that extract energy from the primary one. Energy transfer rates due to this instability are now well established for plane waves [@StaquetSommeria2002]. The instability was observed in several laboratory experiments [@BenielliSommeria1998; @ClarkSutherland2010; @Pairaudetal2010; @Joubaudetal2012] and numerical experiments on propagating internal waves [@Koudella2006; @Wienkers2015] or reflecting internal tides on a horizontal or sloping boundary [@GerkemaStaquetBouruet-Aubertot2006; @Pairaudetal2010; @ZhouDiamessis2013; @GayenSarkar2013]. Oceanic field observations have also confirmed the importance of this instability, especially close to the critical latitude, where the Coriolis frequency is half of the tidal frequency [@HibiyaNagasawaNiwa2002; @MacKinnonetal2013; @Sun2013]. Recent experiments by [@BDJO2013], however, followed by a simple model and numerical simulations by [@BSDBOJ2014] as well as a theory by [@Karimi2014], have shown that finite-width internal gravity wave beams exhibit a much more complex behavior than expected in the case of interacting plane waves. This is what will be discussed in this section.

The Triadic Resonant Instability (TRI) versus the Parametric Subharmonic Instability (PSI)
==========================================================================================

The classic Triadic Resonant Instability corresponds to the destabilization of a primary wave through the spontaneous emission of two secondary waves. The frequencies and wave vectors of these three waves are related by the spatial, $\mbox{\boldmath $k$}_0 = \mbox{\boldmath $k$}_++\mbox{\boldmath $k$}_-$, and the temporal, $ \omega_0 = \omega_++\omega_-$, resonance conditions, where the indices 0 and $\pm$ refer respectively to the primary and secondary waves.
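The two resonance conditions can be combined with the dispersion relation in a short numerical check (Python, $N=1$; the primary angle is an arbitrary illustrative choice). It verifies that antiparallel, increasingly long secondary wave vectors inclined at the angle $\phi$ defined by $\sin\phi=\omega_0/(2N)$ approach exact temporal resonance; this limiting case defines PSI in the inviscid theory:

```python
import numpy as np

def omega(k, N=1.0):
    """omega = N |l| / |k| for a wave vector k = (l, m)."""
    l, m = k
    return N * abs(l) / np.hypot(l, m)

def temporal_residual(k0, kp):
    """omega_0 - omega_+ - omega_- for a spatially resonant triad,
    with k_- fixed by the spatial condition k_- = k_0 - k_+."""
    return omega(k0) - omega(kp) - omega(k0 - kp)

theta = np.radians(40.0)
k0 = np.array([np.sin(theta), np.cos(theta)])    # primary wave, |k0| = 1
phi = np.arcsin(np.sin(theta) / 2.0)             # angle giving omega = omega0/2

# longer and longer antiparallel secondary wave vectors come closer
# and closer to exact temporal resonance (the PSI limit)
res = [abs(temporal_residual(k0, K * np.array([np.sin(phi), np.cos(phi)])))
       for K in (10.0, 100.0, 1000.0)]
assert res[0] > res[1] > res[2] and res[2] < 1e-3
```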
In the inviscid case, the most unstable triad corresponds to antiparallel, infinitely long secondary wave vectors associated with frequencies which are both half of the primary wave frequency: $ \omega_+\simeq\omega_-\simeq\omega_0/2$. Because of the direct analogy with the parametric oscillator, this particular case defines the Parametric Subharmonic Instability (PSI). This special case applies to many geophysical situations, especially oceanic applications. In laboratory experiments, viscosity plays an important role and the two secondary wave frequencies are different. By abuse of language, some authors have sometimes extended the use of the name PSI to cases for which secondary waves do not oscillate at half the forcing frequency. To avoid confusion, in the general case, it is presumably more appropriate to use the acronym TRI.

The Simplest Case of Plane Wave Solutions
------------------------------------------

### Derivation of the Equations and Plane Wave Solutions {#derivEquPlaneWaves}

Looking for solutions of the basic equations (\[equationenpsi\]) and (\[equationenrho\]) as a sum of three plane waves, $b=\sum_{j}^{} R_j(t) e^{i \left(\mbox{\boldmath $k$}_j \cdot \mbox{\boldmath $r$} - \omega_j t\right)} +c.c.$ and $\psi =\sum_{j}^{} \Psi_j(t) e^{i \left(\mbox{\boldmath $k$}_j \cdot \mbox{\boldmath $r$} - \omega_j t\right)} +c.c.$, with $j=0$ for the primary wave and $j=\pm$ for the secondary ones, and denoting $\dot R$ the derivative of the amplitude $R$, one gets (see for example [@Hasselman1967]) $$\begin{aligned} \sum_{j}^{}[- k_j^2 (\dot \Psi_j - i \omega_j \Psi_j) + i \ell_j R_j - \nu k_j ^4 \Psi_j] e^{i \left(\mbox{\boldmath $k$}_j \cdot \mbox{\boldmath $r$} - \omega_j t\right)} +c.c. &=& - J(\nabla^2\psi, \psi)\,,\label{Eqpsi2}\\ \sum_{j}^{} [\dot R_j - i \omega_j R_j - i N^2 \ell_j \Psi_j] e^{i \left(\mbox{\boldmath $k$}_j \cdot \mbox{\boldmath $r$} - \omega_j t\right)} + c.c.
&=& -J(b, \psi)\,,\label{Eqrho2}\end{aligned}$$ The left-hand sides represent the linear parts of the dynamics. Neglecting the nonlinear terms, as well as the viscous terms and the temporal evolution of the amplitudes, one recovers the polarization expression $R_j = -({N^2 \ell_j}/{\omega_j}) \Psi_j$ and the dispersion relation $\omega_j=N |\ell_j|/\sqrt{\ell_j^2+m_j^2} $. This linear system is resonantly forced by the Jacobian nonlinear terms on the right-hand side when the waves fulfill a spatial resonance condition $$\mbox{\boldmath $k$}_0 = \mbox{\boldmath $k$}_++\mbox{\boldmath $k$}_- \label{spatialcondition}$$ and a temporal resonance condition $$\omega_0 = \omega_++\omega_-\,. \label{temporalcondition}$$ The Jacobian terms in Equations (\[Eqpsi2\]) and (\[Eqrho2\]) can then be written as the sum of a resonant term that will drive the instability, plus some unimportant non-resonant terms. Introducing this result into Equation (\[Eqpsi2\]), one obtains three relations between $\Psi_j$ and $R_j$ for each mode $\exp[{i(\mbox{\boldmath $k$}_j \cdot \mbox{\boldmath $r$} - \omega_j t)}]$ with $j = 0, +$ or $-$. One gets $$\begin{aligned} R_\pm &=&\frac{1}{i\ell_\pm} \left[ k_\pm^2(\dot \Psi_\pm - i\omega_\pm\Psi_\pm) + \nu k_\pm^4\Psi_\pm +\alpha_\pm \Psi_0 \Psi^*_\mp\right]\,, \label{eq:Rpm}\end{aligned}$$ where $\alpha_\pm = (\ell_0 m_\mp - m_0 \ell_\mp) (k_0^2 - k_\mp^2)$.
Here, one traditionally uses the “pump-wave” approximation, which assumes that over the initial critical growth period of the secondary waves, the primary wave amplitude, $\Psi_0$, remains constant, and that the amplitudes vary slowly with respect to the period of the wave ($\dot \Psi_j\ll\omega_j\Psi_j$). After differentiating the polarization expression, cumbersome but straightforward calculations [@BDJO2013] lead, at first order, to $$\begin{aligned} \frac{{\rm d}\Psi_\pm}{{\rm d}t} & =& |I_\pm|\Psi_0\Psi_\mp^*-\frac{\nu}{2} k_\pm^2\Psi_\pm ,\label{equation1z}\end{aligned}$$ where $I_\pm =({\ell_0 m_\mp - m_0 \ell_\mp})[\omega_\pm(k_0^2 - k_\mp^2)+\ell_\pm N^2({\ell_0}/{\omega_0}-{\ell_\mp}/{\omega_\mp}) ]/({2\omega_\pm k_\pm^2})$. Differentiating Equation (\[equation1z\]), one gets $$\begin{aligned} {\ddot \Psi_\pm} = I_{+}I_{-} |\Psi_0|^2 \Psi_\pm - \frac{\nu^2}{4}k_+^2k_-^2 \Psi_\pm- \frac{\nu}{2}(k_+^2+k_-^2){\dot \Psi_\pm}\,. \label{eqfinal} \end{aligned}$$ The general solution is $\Psi_{\pm}(t)=A_{1,2}\,\exp{(\sigma t)} +B_{1,2}\, \exp{(\sigma' t)},$ with $\sigma = -{\nu}(k_+^2 +k_-^2)/4 + \sqrt{({\nu}/{4})^2(k_+^2 -k_-^2)^2+I_+I_-|\Psi_0|^2}$ and $\sigma'<0<\sigma$. In conclusion, vanishingly small amplitude noise induces the growth of two secondary waves by a triadic resonant mechanism. Since their sum gives the primary frequency (see Equation (\[temporalcondition\])), $\omega_+$ and $\omega_-$ are subharmonic waves. The growth rate of the instability depends on the characteristics of the primary wave, namely its wave vector, its frequency and its amplitude $\Psi_0$, but also on the viscosity $\nu$.
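The growth rate given above can be checked directly against the characteristic polynomial of Equation (\[eqfinal\]). In the Python sketch below, all triad coefficients are hypothetical values chosen for illustration only:

```python
import numpy as np

def growth_rate(I_p, I_m, k_p, k_m, Psi0, nu):
    """Unstable root sigma of Equation (eqfinal): solutions grow as exp(sigma t)."""
    return (-nu * (k_p**2 + k_m**2) / 4
            + np.sqrt((nu / 4)**2 * (k_p**2 - k_m**2)**2
                      + I_p * I_m * abs(Psi0)**2))

# hypothetical triad parameters (illustration only)
I_p, I_m, k_p, k_m, Psi0, nu = 0.8, 1.2, 3.0, 2.0, 0.1, 1e-3
sigma = growth_rate(I_p, I_m, k_p, k_m, Psi0, nu)

# sigma must solve s^2 + (nu/2)(k+^2 + k-^2) s + (nu^2/4) k+^2 k-^2
#                  - I+ I- |Psi0|^2 = 0,
# the characteristic polynomial obtained by inserting Psi ~ exp(s t)
residual = (sigma**2 + nu / 2 * (k_p**2 + k_m**2) * sigma
            + nu**2 / 4 * k_p**2 * k_m**2 - I_p * I_m * abs(Psi0)**2)
assert abs(residual) < 1e-12
assert sigma > 0          # the triad is unstable for this amplitude
```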
### Triads, Resonance Loci and Growth Rates

Using the dispersion relation for internal waves, the temporal resonance condition leads to [@BDJO2013] $$\begin{aligned} \frac{ |\ell_0|}{\sqrt{{\ell_0^2+m_0^2}}} & = & \frac{{|\ell_+|}}{\sqrt{{\ell_+^2+m_+^2}}} + \frac{{|\ell_0 {-}\ell_+|}}{\sqrt{(\ell_0{-}\ell_+)^2+(m_0{-}m_+)^2}}\,,\label{equation_k1m1}\end{aligned}$$ whose solutions are presented in **Figure \[dessindelacacouete\]**a. Once the primary wave vector $\mbox{\boldmath $k$}_0$ is defined, any point of the solid curve corresponds to the tip of the $\mbox{\boldmath $k$}_+$ vector, while $\mbox{\boldmath $k$}_-$ is obtained by closing the triangle. The choice between the labels + and - is essentially arbitrary, and this leads to the symmetry $\mbox{\boldmath $k$}\rightarrow \mbox{\boldmath $k$}_0-\mbox{\boldmath $k$}$ in **Figure \[dessindelacacouete\]**a. Without loss of generality, we will always call $\mbox{\boldmath $k$}_+$ the largest wavenumber. ![(a) Resonance locus for the unstable wave vectors $(\ell_+,m_+)$ satisfying Equation (\[equation\_k1m1\]) once the primary wave vector $\mbox{\boldmath $k$}_0=(\ell_0,m_0)$ is given. Two examples of vector triads ($\mbox{\boldmath $k$}_0$, $\mbox{\boldmath $k$}_+$, $\mbox{\boldmath $k$}_-$) are shown. The dotted curve is defined by $k_+=k_0$. The solid green curves correspond to the central branch, while the dashed and dash-dotted black curves correspond to the external branch. (b) and (c) Corresponding growth rates $\sigma/\max(\sigma)$ as a function of the normalized wave vector modulus $k_+/k_0$. (b) presents the inviscid case while (c) presents a viscous case corresponding to $\Psi_0/\nu=100$. []{data-label="dessindelacacouete"}](./figure2.pdf "fig:"){width="\textwidth"} -.5truecm One can observe two distinct parts of this resonance locus, characterized by the position of $k_+/k_0$ with respect to 1.
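The resonance locus is easy to compute numerically: for a given $\ell_+$, Equation (\[equation\_k1m1\]) is a one-dimensional root-finding problem for $m_+$. In the Python sketch below, $N=1$ and the primary angle and bracketing interval were chosen by inspection for this particular case; they are not general:

```python
import numpy as np

def omega(l, m, N=1.0):
    return N * abs(l) / np.hypot(l, m)

def locus_residual(m_p, l_p, k0):
    """omega_0 - omega_+ - omega_-, zero on the resonance locus."""
    l0, m0 = k0
    return omega(l0, m0) - omega(l_p, m_p) - omega(l0 - l_p, m0 - m_p)

def solve_m_plus(l_p, k0, lo, hi, tol=1e-12):
    """Bisection for m_+ on the locus; [lo, hi] must bracket a sign change."""
    f_lo = locus_residual(lo, l_p, k0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_lo * locus_residual(mid, l_p, k0) <= 0:
            hi = mid
        else:
            lo = mid
            f_lo = locus_residual(lo, l_p, k0)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

theta = np.radians(30.0)
k0 = (np.sin(theta), np.cos(theta))            # primary wave vector, |k0| = 1
m_p = solve_m_plus(1.0, k0, lo=3.0, hi=5.0)    # one point of the locus

# the triad closes and the two subharmonic frequencies sum to omega_0
assert abs(locus_residual(m_p, 1.0, k0)) < 1e-9
assert np.isclose(omega(1.0, m_p) + omega(k0[0] - 1.0, k0[1] - m_p), omega(*k0))
```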
The wavelengths of the secondary waves generated by the instability can be

- both smaller than the primary wavelength: this case corresponds to the external branch of the resonance locus and implies an energy transfer towards smaller scales (represented by black curves in **Figure \[dessindelacacouete\]**);

- one larger and the other one smaller: this case corresponds to the central branch of the resonance locus and implies an energy transfer towards both smaller and larger scales (represented by solid green curves in **Figure \[dessindelacacouete\]**).

Among the different possible solutions on the resonance locus, the one expected to be seen experimentally or numerically is the one associated with the largest growth rate. In the inviscid case, the most unstable growth rate occurs for $k\rightarrow\infty$, with essentially $\mbox{\boldmath $k$}_+\simeq-\mbox{\boldmath $k$}_-$, and therefore $ \omega_+=\omega_-=\omega_0/2$. This ultraviolet catastrophe is healed in the presence of viscosity, which selects a finite wavelength for the maximum growth rate [@Hazewinkel2011], as shown in **Figure \[dessindelacacouete\]**c. For typical laboratory scale experiments, the values of $k_+$ corresponding to significant growth rates are of the same order of magnitude as the primary wavenumber $k_0$, as can be seen in **Figure \[dessindelacacouete\]**c, with $k_{1}/k_{0}\simeq 1.5$ and $k_{2}/k_{0}\simeq 2.3$. Thus, TRI corresponds to a direct energy transfer from the primary wave to small scales where viscous effects come into play, without the need for a turbulent cascade process. The fact that viscosity has a significant effect on the selection of the excited resonant triad, preventing secondary waves with large wave numbers from growing from the instability, has been observed by [@BDJO2013] in laboratory experiments on wave beams. However, they also found a type of triad different from those predicted by the previous theoretical arguments.
This will be discussed in more detail in the following sections.

### Amplitude Threshold for Plane Wave Solutions {#threshold}

The expression for the growth rate $\sigma$ implies that the amplitude of the streamfunction has to be larger than the critical value $|\Psi_c(\ell_+,m_+)|={\nu k_+k_-}/{\sqrt{4I_+I_-}}$ to get a strictly positive growth rate [@Koudella2006; @BDJO2013]. The threshold for the instability is thus given by the global minimum of this function of several variables. Let us focus on the particular case where $\mbox{\boldmath $k$}_+$ tends to $\mbox{\boldmath $k$}_0$ by considering the following description of the wave vector components, $\ell_+=\ell_0(1+\mu_0\varepsilon^{\alpha})$ and $ m_+=m_0(1+\varepsilon)$, where $\varepsilon\ll 1$ and $\alpha\geq 1$, while $\varepsilon$ and $\mu_0$ are positive quantities. Using the dispersion relation together with the temporal and spatial resonance conditions, [@BSDBOJ2014] have shown that $\alpha=2$ is the only acceptable value to balance the lowest order terms. Plugging these relations into the expression of $I_\pm$, one gets $I_+=-\ell_0m_0\varepsilon+o(\varepsilon)$ and $I_-=-\ell_0m_0+o(1)$, which leads to $|\Psi_c|=\sqrt{\varepsilon}\, {\nu N}/{(2\omega_0)}+o(\varepsilon^{1/2})$. Since the minimum of this positive expression is zero, there is no threshold for an infinitely wide wave beam, even when considering a viscous fluid. Plane wave solutions are thus always unstable to this Triadic Resonant Instability.

Why does the Finite Width of Internal Wave Beams Matter?
--------------------------------------------------------

The above theory for the TRI does not take into account the finite width of the experimental beam. Qualitatively, the subharmonic waves can extract substantial energy from the primary wave only if they do not leave the primary beam too quickly [@BSDBOJ2014].
The group velocity of the primary wave is aligned with the beam, but the group velocity of the secondary waves is definitely not, and these secondary waves eventually leave the primary wave beam, as illustrated in **Figure \[dessindelavitessedegroupe\]**. This is a direct consequence of the dispersion relation, which relates the direction of propagation to the frequency: a different frequency, smaller for subharmonic waves, will lead to a shallower angle. Three comments are in order:

i) The angles between primary and secondary waves strongly influence the interaction time, and thus the instability.

ii) Secondary waves with small wave vectors, having a larger group velocity $c_{g,{\pm}}=(N^2-\omega_{\pm}^2)^{1/2}/k_{\pm}$, leave the primary wave beam more rapidly and have less time to grow in amplitude. Such solutions will therefore be less likely to develop, opening the door to stabilization of the primary wave by the finite width effect. This clarifies why experiments with the most unstable secondary waves on the central branch (small wave vector case) of the resonance locus (**Figure \[dessindelavitessedegroupe\]**) were found to be stable [@BDJO2013], contrary to the prediction for plane waves. This decisive role of the group velocity of the short-scale subharmonic waves was identified long ago by [@McEwanPlumb1977].

iii) At the other end of the spectrum, small wavelengths are more affected by dissipation and will also be less likely to be produced by TRI.

Consequently, only a window of secondary wavelengths can be produced by TRI. ![ Sketch of the experimental set-up showing the wave generator lying horizontally at the top of the wave tank with a superimposed snapshot of the vertical density gradient field. (a) The internal wave beam is propagating downward. (b) The instability of the propagating internal wave beam is visible [@BourgetPhD].
The tilted dashed rectangle corresponds to the control area for the energy approach of section \[energyapproach\]. (c) The vector triad with the three arrows representing the primary wave vector $\mbox{\boldmath $k$}_0$ (black) and the two secondary wave vectors $\mbox{\boldmath $k$}_+$ (red) and $\mbox{\boldmath $k$}_-$ (blue). From this triad, it is possible to deduce the orientation of the group velocities of the three different waves as shown in panels (a) and (b). []{data-label="dessindelavitessedegroupe"}](./figure3.pdf "fig:"){width="99.00000%"} -.5truecm

Energy Approach {#energyapproach}
---------------

A simple energy balance proposed by [@BSDBOJ2014] enables an insightful and more quantitative estimate of the most unstable triad. We introduce the tilted rectangle shown in **Figure \[dessindelavitessedegroupe\]** as a control area (denoting by $W$ the perpendicular beam width) and we neglect the spatial attenuation of the primary wave in this region (“pump-wave” approximation). Since secondary waves do not propagate parallel to the primary beam, they exit the control area through the lateral boundaries without compensation. Equation (\[equation1z\]) is thus modified as follows $$\begin{aligned} \frac{{\rm d}\Psi_\pm}{{\rm d}t} & =&|I_\pm|\Psi_0\Psi_\mp^*-\frac{\nu}{2} k_\pm^2\Psi_\pm-\frac{|{\mbox{\boldmath $c$}_{g,\pm}}\cdot{\mbox{\boldmath $e$}_{k_0}}|}{2W}\Psi_\pm \ . \label{equation3}\end{aligned}$$ The first term represents the interaction with the other plane waves of the triadic resonance, the second term is due to viscous damping, while the third one accounts for the energy leaving the control area.
Here also, one finds exponentially growing solutions, with a positive growth rate slightly modified as $\sigma ^*=-\left(\Sigma_++\Sigma_-\right)/4 +\sqrt{\left(\Sigma_+-\Sigma_-\right)^2/16+|I_+||I_-||\Psi_0|^2}, $ in which the effective viscous term now reads $\Sigma_\pm=\nu k_\pm^2+ {|{\mbox{\boldmath $c$}_{g,\pm}}\cdot{\mbox{\boldmath $e$}_{k_0}}|}/{W}$. The finite width of the beam is responsible for a new term characterizing the transport of the secondary wave energy out of the interaction region. For infinitely wide wave beams ($W\rightarrow+\infty$), one recovers the growth rate $\sigma$ obtained in the plane wave case. In contrast, when the beam becomes narrow ($W\rightarrow0$), the growth rate decreases to zero, leading to a stabilization. The finite width of a wave beam therefore increases its stability, owing to the transport of the secondary waves out of the triadic interaction zone of the primary wave beam before they can extract substantial energy. This interaction time scales directly with the perpendicular beam width $W$, as can be seen from the expression of $\sigma^*$.

Theory in the Nearly Inviscid Limit {#TheoryintheNearlyInviscidLimit}
-----------------------------------

A beautiful weakly nonlinear asymptotic analysis of the finite width effect on TRI has been recently proposed by [@Karimi2014]. Mostly interested in oceanic applications, they look for subharmonic perturbations in the form of fine-scale, nearly monochromatic wavepackets with frequency close to one half of the primary frequency; in this limit, usually called Parametric Subharmonic Instability (see the sidebar distinguishing TRI vs. PSI), $\omega_\pm\simeq \omega_0/2={N}(\sin\theta)/2={N}\sin\phi$, which defines the angle $\phi$ with the vertical of the wave vectors $\mbox{\boldmath $k$}_\pm$ of opposite directions.
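The behavior of $\sigma^*$ is easily illustrated numerically. In the Python sketch below, the triad coefficients and the projected group velocities are hypothetical values for illustration only; the sketch recovers the plane wave growth rate as $W\rightarrow\infty$ and shows the stabilization when the beam narrows:

```python
import numpy as np

def sigma_star(W, I_p, I_m, k_p, k_m, Psi0, nu, cg_p, cg_m):
    """Finite-width growth rate: Sigma_pm combines viscous damping and
    the transport of secondary waves out of a beam of width W
    (cg_p, cg_m stand for the projections c_g,pm . e_k0)."""
    S_p = nu * k_p**2 + abs(cg_p) / W
    S_m = nu * k_m**2 + abs(cg_m) / W
    return (-(S_p + S_m) / 4
            + np.sqrt((S_p - S_m)**2 / 16 + abs(I_p) * abs(I_m) * Psi0**2))

# hypothetical parameters (illustration only)
pars = dict(I_p=0.8, I_m=1.2, k_p=3.0, k_m=2.0, Psi0=0.1, nu=1e-3,
            cg_p=0.05, cg_m=0.08)

# W -> infinity: the plane wave growth rate sigma is recovered
plane = (-1e-3 * (3.0**2 + 2.0**2) / 4
         + np.sqrt((1e-3 / 4)**2 * (3.0**2 - 2.0**2)**2 + 0.8 * 1.2 * 0.1**2))
assert np.isclose(sigma_star(1e9, **pars), plane)

# narrowing the beam monotonically lowers the growth rate,
# until the primary wave is completely stabilized (sigma* < 0)
widths = [10.0, 1.0, 0.5, 0.2]
rates = [sigma_star(W, **pars) for W in widths]
assert all(a > b for a, b in zip(rates, rates[1:]))
assert rates[-1] < 0
```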
The key ingredient in the analytical derivation is to take advantage of the scale difference between the width of the primary beam $W$ and the very small carrier wavelength of the subharmonic wave packets $\lambda_\pm=2\pi/k_\pm$. The small amplitude expansion is thus characterized by the small parameter $\mu=\lambda_\pm/(2\pi W)$. They consider an expansion with not only the underlying wave beam but also the superimposed subharmonic wavepackets, which appear at order $\mu$. The derivation of the wave-interaction equations leads to six coupled equations for the two primary beam envelopes (two because of the two phases) and the four ($2\times2$) subharmonic wavepacket envelopes. Fortunately, this system can be reduced at leading order to only three coupled equations with three unknowns. Taking a distinguished limit in which not only the amplitude of the primary wave, but also the nonlinear, dispersive and viscous terms are all small, they obtain a reduced description of the dynamics. The strategy is, as usual, to choose the scaling in order to get comparable magnitudes of the different terms. Interestingly, as only the quadratic interaction is potentially destabilizing for the primary beam, they compare it with the advection term for subharmonic waves: the former being smaller, this confirms that the resonant interaction cannot feed the instability in the limited time during which perturbations are in contact with the underlying beam. Beams with a general profile of finite width are thus stable to TRI. Next, they consider the case of beams with profiles in the form of a monochromatic carrier with $O(1)$ wavelength, modulated by a confined envelope. The complex envelopes $\Psi_0(\eta,\tau)$ and $\Psi_\pm(\eta,\tau)$ of the primary and secondary waves, functions of the cross-beam direction $\eta$ (see **Figure \[profilselonetabb\]**) and of the appropriate slow time $\tau$, are thus generalizations of the plane wave solutions considered in section \[derivEquPlaneWaves\].
We recover these solutions with an envelope function independent of the cross-beam coordinate $\eta$, while an internal wave beam (such as the one in **Figure \[dessindelavitessedegroupe\]**) will correspond to $\Psi_0(\eta)=1/2$ for $|\eta|<1/2$ and zero otherwise. Introducing the appropriate change of variables and rescaling of the relevant variables, the beam envelope $\Psi_0$ and the complex subharmonic envelopes $\Psi_\pm$ are linked through the following three coupled dimensionless equations $$\begin{aligned} \frac{\partial \Psi_\pm}{\partial\tau}&=&-\mbox{\boldmath $c$}_{g,\pm}\cdot\mbox{\boldmath $e$}_{k_0}\frac{\partial \Psi_\pm}{\partial\eta}-\overline{\nu} \kappa^2 \Psi_\pm +i\frac{{N}\kappa^2}{\omega_0}\sin^2\chi \, |\Psi_0|^2\Psi_\pm+\varpi \Psi_0\Psi^*_\mp,\label{equationfora}\\ \frac{\partial \Psi_0}{\partial\tau}&=&-2\varpi \Psi_+ \Psi_-, \label{equationforq}\end{aligned}$$ where $\overline{\nu}$ is the renormalized viscous dissipation and $\kappa=2/\mu$ a rescaled wavenumber modulus. $\chi=\theta-\phi$ and $\varpi=\sin\chi \cos^2\left({\chi}{/2}\right)$ are two geometrical parameters, while $\tau$ is the appropriate slow time for the evolution of the subharmonic wave packets and $\eta$ the appropriate cross-beam coordinate. In the appropriate distinguished limit identified by [@Karimi2014], the nonlinear term is balanced, as in section \[energyapproach\], by the viscous term, but also by the transport term. The coupling between the evolution equations occurs through the nonlinear terms, which allow energy exchange between the underlying beam and subharmonic perturbations. The subsequent behavior of the complex envelopes $\Psi_\pm$ determines the stability of the beam: if they are able to extract energy via nonlinear interaction with $\Psi_0$ at a rate exceeding the speed of linear transport and viscous decay, the beam is unstable. From this system, it is in principle possible to study the stability of any profile: 1.
For example, a time-independent beam $(\Psi_0(\eta),\Psi_\pm=0)$ is a steady-state solution of this system of three equations. The study of its stability relies on looking for the normal mode solutions $\Psi_\pm\propto\exp(\sigma\tau)$. For a plane wave, one obtains the growth rate $\sigma=\sin\chi\cos^2\left({\chi}/{2}\right)/2-\overline{\nu} \kappa^2$. A subtle point was carefully emphasized by [@Karimi2014]. The above expression of $\sigma$ seems independent of the disturbance wavenumber $\kappa$ in the inviscid limit, but the derivation has extensively used the hypothesis of fine-scale disturbances, which will of course break down for $\kappa\ll1$. The maximum growth rate is indeed attained for finite but small $\kappa$. Uniform beams (internal plane waves with a confined envelope) are unstable if the beam is wide enough.

2. [@Karimi2014] also provide the solution of the initial value problem for a beam with $\Psi_0(\eta)$ tending towards zero as $\eta$ tends to infinity. They show the existence of a minimum value for the unstable perturbation wavenumber, $\kappa_{\mbox{\tiny min}}=\pi c_{g,\pm}/(2\varpi W\int_{-\infty}^{+\infty}\Psi_0(\eta)\mbox{d}\eta)$, corresponding to a maximum wavelength. Therefore, the possible spatial scale window for secondary wavelengths shrinks towards smaller scales as the beam is made narrower. Outside this range, no instability is possible, even in the inviscid case.

3. They also derive analytically the minimum width for the top-hat profile used in the experiments by [@BDJO2013] ($\Psi_0(\eta)=1/2$ for $|\eta|<1/2$ and zero otherwise, as shown in **Figure \[dessindelavitessedegroupe\]**). They argue that the existence of a minimum width is valid for a general profile; this minimum depends on the beam shape.

To summarize, internal plane waves with a confined envelope are unstable if the beam is wide enough, while weakly nonlinear beams with a general but confined profile (i.e.
without any dominant carrier wavenumber) are stable to short-scale subharmonic waves.

Effect of a mean advective flow
-------------------------------

[@LerissonChomaz2017] have recently studied theoretically and numerically the Triadic Resonant Instability of an internal gravity wave beam in the presence of a mean advective flow. They keep the local wave vector and wave frequency constant in the frame moving with the fluid, in order to encompass both tidal flows and lee waves. Their main result is that, by impacting the group velocity of the primary and secondary waves, the mean advection velocity modifies the most unstable triads. They have predicted and confirmed numerically that a strong enough advective flow enhances the instability of the central branch (leading to large-scale modes, since one secondary wavelength is larger than the primary one) with respect to the external branch. However, the model is not able to explain the existence of an interesting stable region at intermediate velocities in their numerical simulations. To go beyond, it would be necessary to take into account the spatial growth of the secondary waves within the internal wave beam. Such a theory, relying on an extension of the classical absolute or convective instability analysis, is still to be derived.

Effect of rotation
----------------------

### Theoretical study

When one includes Coriolis effects due to Earth’s rotation at a rate $\Omega_C$, the dispersion relation of internal waves is modified, and this has of course consequences on the group velocity, which we showed to be intimately associated with the stability of internal wave beams. [@BordesFast2012] have reported experimental signatures of Triadic Resonant Instability of inertia-gravity wave beams in a homogeneous rotating fluid. It is thus expected that TRI will also show up when considering a stratified rotating fluid.
Assuming invariance in the transverse $y$ direction, the flow field may be written as $(u_x,u_y,u_z) = ( \partial_z \psi,u_y,-\partial_x\psi )$, with $\psi(x,z,t)$ the streamfunction of the non-divergent flow in the vertical plane and $u_y(x,z,t)$ the transverse velocity. Introducing the Coriolis parameter $f=2 \Omega_C\sin\beta$, where $\beta$ is the latitude, the dynamics of the flow field is given by the following system of three equations $$\begin{aligned} \partial_t b + J(b,\psi) - N^2 \partial_x \psi &=& 0, \label{eq_gravity_1rot}\\ \partial_{t}\nabla^{2} \psi + J(\nabla^{2} \psi , \psi) -f\partial_z u_y&=& - \partial_x b+\nu \nabla^{4} \psi,\label{equationenpsirot} \\ \partial_t u_y+ J(u_y,\psi) +f\partial_z \psi &=&\nu \nabla^{2} u_y, \label{equationenrhorot}\end{aligned}$$ in which the equation for the buoyancy perturbation is not modified, while Equation (\[equationenpsi\]) has been modified and is now coupled to the dynamics of the transverse velocity $u_y$. As previously, one can study beams of general spatial profile, corresponding to the superposition of time-harmonic plane waves with the dispersion relation (\[eq\_disp\_gravity\_theta\]) modified into $ \omega^2= N^2\sin^2\theta+f^2\cos^2\theta.$ The next step is again to look for subharmonic perturbations in the form of fine-scale (with respect to the width of the beam), nearly monochromatic wavepackets with frequency close to half the primary frequency. It is straightforward to see that subharmonic waves propagate with an inclination $\phi$ given by $\sin\phi=((\omega_0^2/4-f^2)/(N^2-f^2))^{1/2}$, which vanishes when $\omega_0/2= f$, i.e. at the critical latitude $\beta\simeq28.8^\circ$ [@MacKinnonWinters2005]. The modulus of the group velocity of subharmonic waves, $c_{g,\pm}=(N^2-f^2) \sin(2\phi)/(\omega_0 k_\pm)$, will thus also vanish at this latitude. Since rotation dramatically reduces the ability of subharmonic waves to escape, it may seriously reinforce the instability.
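The vanishing of the subharmonic group velocity at the critical latitude can be illustrated numerically (Python sketch; $N=1$, and the values of $\omega_0$ and $k_\pm$ are illustrative assumptions):

```python
import numpy as np

def phi_sub(omega0, f, N=1.0):
    """Inclination of the subharmonic wave vectors in the rotating case."""
    return np.arcsin(np.sqrt((omega0**2 / 4 - f**2) / (N**2 - f**2)))

def cg_sub(omega0, f, k, N=1.0):
    """Modulus of the subharmonic group velocity,
    c_g = (N^2 - f^2) sin(2 phi) / (omega0 k)."""
    return (N**2 - f**2) * np.sin(2 * phi_sub(omega0, f, N)) / (omega0 * k)

omega0, k = 0.5, 10.0          # illustrative primary frequency and wavenumber
f_crit = omega0 / 2            # critical latitude: f = omega0 / 2

# increasing rotation lowers the subharmonic group velocity...
for f in (0.0, 0.1, 0.2, 0.24):
    assert cg_sub(omega0, f, k) > cg_sub(omega0, f + 0.009, k)

# ...until it vanishes, together with phi, exactly at the critical latitude
assert np.isclose(phi_sub(omega0, f_crit), 0.0)
assert np.isclose(cg_sub(omega0, f_crit, k), 0.0)
```

Near the critical latitude the subharmonic perturbations thus barely leave the beam, consistent with the reinforcement of the instability discussed above.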
[@Karimi2017] have shown that it is possible to reproduce the asymptotic analysis of the Triadic Resonant Instability with the inclusion of Earth’s rotation (see also [@KarimiPhD2015]). One ends up with an unchanged equation (\[equationforq\]) for the dynamics of the primary wave, while the coupled dynamics of subharmonic waves is modified as follows $$\begin{aligned} \frac{\partial \Psi_\pm}{\partial\tau}= -\mbox{\boldmath $c$}_{g,\pm}\cdot\mbox{\boldmath $e$}_{k_0} \frac{\partial\Psi_\pm}{\partial\eta} +\frac{i}{2}\frac{3f}{\kappa^2 N}\frac{\partial^2 \Psi_\pm}{\partial\eta^2}-\overline{\nu}\kappa^2 \Psi_\pm +i\delta\kappa^2 \, \left|\frac{\partial \Psi_0}{\partial\eta} \right|^2\Psi_\pm-\gamma \frac{\partial^2\Psi_0}{\partial\eta^2}\Psi^*_\mp,\label{equationforarot}\end{aligned}$$ with $\delta$ and $\gamma$ two parameters depending on the Coriolis parameter $f$, which vanish when $f$ tends to zero. The important modification is the appearance, on the right-hand side, of the second linear term due to dispersion. It is important here because the first one may disappear, since the projection of the respective group velocity $\textbf c_{g,\pm}$ of the subharmonic envelopes $\Psi_\pm$ onto the across-beam direction $\textbf e_{k_0}$ may vanish. [@Karimi2017] first consider weakly nonlinear sinusoidal wavetrains, emphasizing two interesting limits: the case far from the critical latitude allows one to recover the results of section \[TheoryintheNearlyInviscidLimit\], in which there is no preferred wavelength of instability in the inviscid limit. On the other hand, when the group velocity of perturbations vanishes at the critical latitude, energy transport is due solely to second-order dispersion. This process of energy transport leads to the selection of a preferred wavenumber that is independent of damping effects; damping may nevertheless suppress the instability for a sufficiently large damping factor, permitting the underlying wave to survive the instability.
They obtained an expression for the growth rate identical to the result in the inviscid limit of [@YoungTsangBalmforth2008]. This explains why, at the critical latitude, additional physical factors, such as scale-selective dissipation, must become important. This result has been confirmed numerically by [@Hazewinkel2011], in agreement with in situ measurements [@Alford2007]. For beams, there is always a competition between energy extraction from the beam, which varies with the beam profile, and the proximity to the critical latitude, without forgetting the viscous effects on the fine-scale structure of the disturbances. Relying on numerical computations, it is possible to predict the stability properties for a given profile. In general, it turns out that rotation plays a significant role in dictating energy transfer from an internal wave to fine-scale disturbances via TRI under resonant configurations.

### Experimental study

Until now, the only laboratory experiment studying the influence of rotation on the triadic instability of inertia-gravity waves in a rotating stratified fluid was performed recently by [@Maureretal2016]. In this study, the set-up used by [@BDJO2013; @BSDBOJ2014] was placed on a rotating platform, with rotation rates ranging from 0 to 2.16 rpm, allowing the dimensionless Coriolis parameter $f/N$ to vary from 0 to 0.45. One of their main findings is that the TRI threshold in frequency is lowered (by about 20%) compared to the non-rotating case. An extension of the energy approach developed in section \[energyapproach\] to the rotating case confirms this observation, by showing that the finite-size effect of the beam width is reduced when rotation increases.
This enhancement of TRI applies only to a limited range of rotation rates since, when the rotation rate exceeds half the primary wave frequency, TRI is forbidden: the dispersion relation no longer allows the existence of the lowest-frequency secondary wave. The competition between this limit and the finite-size effect reduction by rotation results in a minimum value for the frequency threshold. The position of this minimum, observed around $f/\omega_{0}\simeq 0.35$ in the experiment, depends on the Reynolds number, defined as $Re=\Psi_{0}/\nu$. The transposition of this result to high Reynolds number situations such as the ocean shows that the TRI enhancement is then localized in a narrow range of Coriolis parameters around $f=\omega_{0}/2$, thus recovering the critical latitude phenomenon. When global rotation is applied to the fluid, another interesting feature is that it creates an amplitude threshold for TRI. Indeed, it was discussed in section \[threshold\] that, in the absence of rotation, plane waves are always unstable. However, as shown in this section, the instability at very low amplitude occurs when $\mbox{\boldmath $k$}_{+}$ tends towards $\mbox{\boldmath $k$}_{0}$, which implies $\omega_{+}\rightarrow \omega_{0}$ and therefore $\omega_{-}\rightarrow 0$. But when there is rotation in the system, a zero-frequency subharmonic wave is no longer allowed, hence the appearance of an amplitude threshold, which increases with $f/N$.

STREAMING INSTABILITY {#StreamingInstability}
=====================

Introduction
------------

Another important mechanism for the instability of internal gravity wave beams is the generation of a mean flow, also called streaming instability[^2] by [@KataokaAkylas2015]. [@LighthillBook] and [@andrews1978] noticed early on that internal gravity wave beams share several properties with acoustic wave beams. In particular, both kinds of waves may be subject to streaming in the presence of dissipative effects.
Streaming refers here to the emergence of a slowly evolving, non-oscillating, Eulerian flow forced by the nonlinear interactions of the oscillating wave beam with itself [@nyborg1965acoustic; @lighthill1978]. As reviewed in [@riley2001], it is now recognized that streaming actually occurs in a variety of flow models; it remains an active field of research from both the theoretical [@xie2014boundary] and experimental [@Squires2013; @moudjed2014scaling; @Wunenburger] points of view. The fact that dissipative effects are required to irreversibly generate a mean flow through the nonlinear interactions of a wave beam with itself can be thought of as a direct consequence of the “non-acceleration” arguments that came up in the geophysical fluid dynamics context fifty years ago, with important contributions from [@charney1961propagation; @eliassen1961transfer; @andrews1976], among others. [@plumb1977interaction] used those ideas to propose an idealized model for the quasi-biennial oscillation (QBO), together with an experimental simulation of the phenomenon [@PlumbMcEwan1978]. The oscillations require more than one wave beam, but [@plumb1977interaction] first discussed how a single wave beam propagating in a vertical plane could generate a mean flow. He predicted the vertical shape of this mean flow, emphasizing the important role played by wave attenuation through dissipative effects. The experiment by [@PlumbMcEwan1978] may be thought of as the first quantitative observation of streaming in stratified fluids. Those examples correspond, however, to a very peculiar instance of streaming, with no production of vertical vorticity. By contrast, most applications of acoustic streaming since the early works of [@eckart1948vortices] and [@westervelt1953theory] involve the production of vorticity by an irrotational wave.
As far as vortical flows are concerned, [@LighthillBook] noticed important analogies between acoustic waves and internal gravity waves: in both cases, vortical flows and propagating waves are decoupled at a linear level [in the inviscid limit]{}, and steady streaming results from viscous attenuation [of the wave amplitude. In particular, [@LighthillBook] noticed that streaming could generate a flow with vertical vorticity]{}. However, experimental observation of the emergence of a vortical flow in stratified fluids through this mechanism remained elusive until recently. While studying the internal wave generation process via a tidal flow over 3D seamounts in a stratified fluid, [@ZhankKingSwinney2007] observed a strong flow in the plane perpendicular to the oscillating tidal flow. For low forcing, this flow was found to be proportional to the square of the forcing amplitude. That led them to invoke nonlinear interactions, either between the internal wave beam and itself, or between internal waves and the viscous boundary layer. The analysis was not pursued further, and the sign of the generated vorticity, opposite to the one discussed in the next subsections, remains puzzling. A few years later, studying the reflection of an internal wave beam on a sloping bottom, Grisouard and his collaborators also discovered this mean-flow generation in experiments [@GrisouardPhD; @Leclairetal2011; @Grisouardetal2013]. The basic configuration was a uniform beam reflecting onto a simple slope in a uniformly stratified fluid. As predicted [@DauxoisYoung1999; @Gostiaux2006], the interaction between the incident and reflected waves produced harmonic waves, thereby reducing the amplitude of the reflected wave. However, more surprisingly, they found that the reflected wave was nearly absent because a wave-induced mean flow appeared in the superposition region of the incident and reflected waves, progressively growing in amplitude.
Comparing two- and three-dimensional numerical simulations, they showed that this mean flow is of dissipative origin[^3] and three-dimensional. Its presence totally modifies the two-dimensional view considered in the literature for the reflection of internal waves. Indeed, there have been many interesting theoretical studies of internal gravity wave–mean flow interactions [@Bretherton1969; @LelongRiley1991; @Akylas2003], but none of them considered the effect of dissipation in three dimensions. A complete theoretical understanding of the generation of a slowly evolving vortical flow by an internal gravity wave beam was made possible using an even simpler set-up, which we describe in the following section. [@Bordes2012] reported observations of a strong mean flow accompanying a time-harmonic internal gravity wave beam, freely propagating in a tank significantly wider than the beam. We describe below in detail the experimental set-up and the observations, together with two related theories by [@Bordes2012] and [@KataokaAkylas2015], which describe the experimental results well, by providing the spatial structure and temporal evolution of the mean flow and illuminating the mechanism of instability. Those approaches bear strong similarities with the result obtained by [@GrisouardBuhler2012], who used generalized Lagrangian mean theory to describe the emergence of a vortical flow in the presence of an oscillating flow of a barotropic tide above topography variations.

Experimental Observations
-------------------------

[@Bordes2012] have studied an internal gravity wave beam of limited lateral extent propagating in a significantly wider stratified fluid tank. Previously, most experimental studies using the same internal wave generator [@Gostiaux2007; @Mercieretal2010] were quasi-two-dimensional (beam and tank of equal width) and therefore without significant transverse variations.
**Figure \[SetupmanipGuilhem\]a** presents a schematic view of the experimental set-up in which one can see the generator, the tank and the representation of the generated internal wave beam (see [@BordesPhD] for additional details).

![(a) Schematic representation of the experimental set-up with the generator on the left of the tank. (b) Top view of the particle flow in a horizontal plane at intermediate depth. []{data-label="SetupmanipGuilhem"}](./figure4.pdf){width="\textwidth"}

Direct inspection of the flow field shows an unexpected, spontaneously generated pair of vortices, emphasized in **Figure \[SetupmanipGuilhem\]b** by the tracer particles dispersed in the tank to visualize the flow field using particle image velocimetry. This structure is actually a consequence of the generation of a strong mean flow. This experiment therefore provides an excellent set-up to carefully study the mean-flow generation and to propose a theoretical understanding that explains the salient features of the experimental observations.

![Experimental (a,c,e,g) and theoretical (b,d,f,h) horizontal velocity fields $u_x$ for the primary wave (top plots, obtained by filtering [@Mercieretal2008] the velocity field at the forcing frequency) and the mean flow (bottom plots, obtained by low-pass filtering the velocity field) as reported respectively in [@Bordes2012] and [@KataokaAkylas2015]. The four left panels present the side view, while the right ones show the top view. The wave generator is represented in grey with its moving part in black.[]{data-label="ResultmanipGuilhem"}](./figure5.pdf){width="\textwidth"}

These observations are summarized in **Figure \[ResultmanipGuilhem\]**, which shows side and top views not only of the generated internal wave beam but also of its associated mean flow. One sees that the wave part of the flow is monochromatic, propagating at an angle $\theta$ and with an amplitude varying slowly in space compared to the wavelength $\lambda$.
These waves are accompanied by a mean flow with a jet-like structure, [in the direction of the horizontal propagation of waves,]{} together with a weak recirculation outside the wave beam. Initially produced inside the wave beam, this dipolar structure corresponds to the spontaneously generated vortex shown in **Figure \[SetupmanipGuilhem\]b**. Moreover, the feedback of the mean flow on the wave leads to a transverse bending of the wave beam crests that is apparent in **Figures \[ResultmanipGuilhem\][e]{}** and **\[ResultmanipGuilhem\][f]{}**.

Analytical Descriptions
-----------------------

### A preliminary multiple scale analysis

Taking advantage of the physical insights provided by the experiments, [@Bordes2012] have proposed an approximate description that uses a time-harmonic wave flow with a slowly varying amplitude in space. The problem contains two key non-dimensional numbers: the Froude number $U/\lambda N$ and the ratio $\nu/\lambda^2 N$ between the wavelength $\lambda$ and the attenuation length scale of the wave beam due to viscosity, $\lambda^{3}N/\nu$ [@Mercieretal2008]. For analytical convenience, they considered a distinguished limit with the small parameter $\varepsilon=Fr^{1/3}$, together with the scaling $\nu/\lambda^2 N=\varepsilon /\lambda_\nu$ where $\lambda_\nu\sim 1$. As usual, the appropriate scaling in the small parameter $\varepsilon$ is deduced from a mix of physical intuition and analytical handling of the calculations. In their case, they were looking for a regime with small nonlinearity and dissipation.
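For orientation, these two non-dimensional numbers can be evaluated for laboratory-scale values; all numbers below are illustrative assumptions, not the parameters of [@Bordes2012]:

```python
import numpy as np

# Illustrative laboratory-scale values (assumed)
N = 1.0        # buoyancy frequency [rad/s]
lam = 0.05     # wavelength [m]
U = 0.005      # velocity amplitude [m/s]
nu = 1e-6      # kinematic viscosity of water [m^2/s]

Fr = U / (lam * N)               # Froude number
eps = Fr ** (1.0 / 3.0)          # small parameter of the multiple scale expansion
L_nu = lam**3 * N / nu           # viscous attenuation length of the beam [m]
ratio = nu / (lam**2 * N)        # second small number, equal to lam / L_nu
```

For such values the attenuation length $\lambda^{3}N/\nu$ is much larger than the wavelength, consistent with the weak-dissipation regime sought by the expansion.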
In terms of the velocity components $(u_x,u_y,u_z)$, the buoyancy $b$ and the vertical vorticity $\Omega=\partial_x u_y -\partial_yu_x$, the governing dimensionless equations in this three-dimensional setting read now $$\nabla_{H}\cdot\mbox{\boldmath $u$}_{H}=-\partial_z u_{z}, \label{eq:divAD}$$ $$\partial_{t} b+\varepsilon^{3}\left(\mbox{\boldmath $u$}\cdot\nabla b\right)+u_z=0, \label{eq:BAD}$$ $$\partial_{t}\Omega+\varepsilon^{3}\left(\mbox{\boldmath $u$}_{H}\cdot\nabla_{H}\Omega+\left(\nabla_{H}\cdot\mbox{\boldmath $u$}_{H}\right)\Omega+\partial_{x}\left(u_z \partial_z u_{y} \right)-\partial_{y}\left(u_z \partial_z u_x\right)\right)=\varepsilon\lambda_{\nu}^{-1}\nabla^2 \Omega,\label{eq:OmegaAD-1}$$ $$\nabla^2\partial_{tt} u_z+\nabla^2_{H}u_z=\varepsilon\lambda_{\nu}^{-1}\nabla^4 \partial_tu_{z}-\varepsilon^{3}\left(\partial_{t}\left(\nabla^2_{H}\left(\mbox{\boldmath $u$}\cdot\nabla {u_z}\right)-\partial_{z}\nabla_{H}\left(\mbox{\boldmath $u$}\cdot\nabla\mbox{\boldmath $u$}_{H}\right)\right)+\nabla^2_{H}\left(\mbox{\boldmath $u$}\cdot\nabla b\right)\right), \label{eq:WAD}$$ in which the index $H$ in $\mbox{\boldmath $u$}_{H}$ and $\nabla_{H}$ denotes the horizontal velocity field, gradient or Laplacian operator. Introducing rescaled spatial and time coordinates, a multiple scale analysis is now at hand.
Looking for a flow field in perturbation series $u_r=u_r^{0}+\varepsilon u_r^{1}+o(\varepsilon)$ for $r=x$, $y$ or $z$ with a priori $u_y^{0}=0$ as suggested by the structure of the beam, together with the vertical vorticity field $\Omega=\varepsilon^{2}\Omega_{2} +\varepsilon^{4}\Omega_{4} +o(\varepsilon^{4}),$ a tedious but straightforward application of the multiple scale framework (with $x_i=\varepsilon ^i x$ and $t_i=\varepsilon ^i t$) then gives, to the first three orders, the structure of the beam: the first order $\varepsilon^0$ provides the expressions for $u_x^0$ and $u_z^0$, the second order $\varepsilon^1$ gives the expression for $u_y^1$ and finally order $\varepsilon^2$ shows that $u_z^0$ does not depend on the slow timescale $t_2$. Nonlinear terms contribute a priori for the first time to order $\varepsilon^3$, but one interestingly finds again (see section \[NLterms\]) that they vanish to this order. To order $\varepsilon^4$, one obtains that the term independent of the slow time $t_0$ vanishes and, thus, nonlinear terms do not induce a mean flow to this order either. It is only to order $\varepsilon^5$ that nonlinear terms directly contribute to the mean-flow generation. The governing equation of the vortical mean flow induced by the wave beam is then given in the original dimensional units by $$\partial_t \overline\Omega =\frac{\partial_{xy}\,{\cal U}^2}{(2\cos\theta)^2}+\nu \nabla^2\overline\Omega\label{equationmeanflowGuilhem}$$ where the overline stands for averaging over one wave period and ${\cal U}(x,y)$ is the amplitude of the wave envelope. Several conclusions can be directly inferred from this analysis: i\) As emphasized by the first term on the right-hand side, nonlinear terms are crucial as a source of vertical vorticity. Note that one recovers that the amplitude of the mean flow is proportional to the square of the wave amplitude as has been invoked from experimental [@ZhankKingSwinney2007] or theoretical [@BuhlerBook] results.
ii\) The variations of the wave field in the $y$-direction (implying $\partial_y\neq0$) are necessary for nonlinearities to be a source of vertical vorticity. This illuminates why three-dimensional effects are crucial and therefore why no mean-flow generation was noticed in two dimensions [@Mercieretal2010; @Grisouardetal2013]. iii\) Finally, the viscous attenuation of the wave field in the $x$-direction (implying $\partial_x\neq0$) is also necessary to produce vertical vorticity. In actual experiments, variations of the amplitude in the $x$-direction can also come from finite size effects, but are not sufficient to generate a mean flow. One drawback of the above approach, however, is that it does not describe the feedback of the mean flow on the waves. For this reason, the approach becomes inconsistent at long times in the far-field region. The above combined experimental and analytical proof of the key role played by viscous attenuation and lateral variation of the wave beam amplitude in the generation of the observed mean flow has therefore motivated a more careful asymptotic expansion by [@KataokaAkylas2015], taking into account the two-way coupling between waves and mean flow. This two-way coupling accounts for the horizontal bending of the wave field observed in the [@Bordes2012] experiments, as explained in section \[completemodel\].

### Stability to three-dimensional modulations {#Kataoka2013}

Initially, [@KataokaAkylas2013] were interested in three-dimensional perturbations of internal wave beams. Specifically, they studied the stability of uniform beams subject to oblique modulations which vary slowly in the along-beam and the horizontal transverse directions. Results turned out to be fundamentally different from those of purely longitudinal modulations considered in [@Akylas2003]. Because of the presence of transverse variations, a resonant interaction becomes possible between the primary beam and three-dimensional perturbations.
Moreover, their analysis revealed that three-dimensional perturbations are accompanied by circulating horizontal mean flows at large distances from the vicinity of the beam. They studied the linear stability of uniform internal wave beams with confined spatial profile by introducing infinitesimal disturbances to the basic state, in the form of normal modes, not only in the along-beam direction $\xi$ but also in the horizontal transverse direction $y$ (see **Figure \[profilselonetabb\]**). They used an asymptotic approach, valid for perturbations of long wavelength relative to the beam thickness. The boundary conditions combined with the matching conditions between the solution near and far from the beam ensure that the primary-harmonic and mean-flow perturbations are confined in the cross-beam direction. The analysis brings out the coupling of the primary-harmonic and mean-flow perturbations to the underlying internal wave beam: the interaction of the primary-harmonic perturbation with the beam induces a mean flow, which in turn feeds back to the primary harmonic via interactions with the beam. Whether this primary-harmonic-mean flow interaction mechanism can extract energy from the basic beam, causing instability, depends upon finding modes which remain confined in the cross-beam direction.

### Complete model for the 3D propagation of small-amplitude internal wave beams {#completemodel}

In a second stage, [@KataokaAkylas2015] have derived a complete matched asymptotic analysis of the experiment performed by [@Bordes2012] for a weakly nonlinear, viscous, uniformly stratified 3D Boussinesq fluid.
From their prior experience [@Akylas2003; @KataokaAkylas2013], they chose the stretched along-beam spatial coordinate ${\Xi}=\varepsilon^2 \xi$, the slow time $T=\varepsilon^2 t$ and the transverse variations $Y=\varepsilon y$ so that along-beam and transverse dispersions are comparable, together with variations in the cross-beam direction $\eta$ (see **Figure \[profilselonetabb\]**). Combining this choice with a small nonlinearity [scaling as $\varepsilon^{1/2}$]{} and a weak viscous dissipation ${\bar\nu} \varepsilon^{2}$ that carry equal weight, they were able to fully analyze the mean flow, separately near and far from the beam, before matching both solutions. They derived a closed system of two coupled equations linking the amplitude of the primary time harmonic ${\cal U}$ and the mean-flow component $\overline V_\infty$ of the cross-beam velocity field. The latter appears to be necessary for matching with the mean flow far from the beam. The equation governing the dynamics of the mean flow reads $$\partial_T \overline V_\infty = \cos\theta\, \partial_Y {\mathcal H} \left(\int_{-\infty}^{+\infty} \mbox{d} \eta \ {\cal U}^* \left( \frac{\partial \cal U}{\partial{\Xi}}+ \frac{\cot\theta}{2} \int^{\eta} \mbox{d} \eta \,\frac{\partial^2 \cal U}{\partial{Y^2}}\right)\right),\label{equationformeanflowinit}$$ where ${\mathcal H}(.)$ stands for the Hilbert transform in the transverse coordinate $Y$. This immediately shows that transverse variations $(\partial_Y\neq0)$ of the beam are essential for having a nonzero source term. Since the generated mean vertical vorticity is given at leading order by $\overline \Omega =\cos\theta \, \partial_Y\overline {\cal U}=(\cos^2\theta/\sin\theta) \, \partial_Y {\overline V_\infty }+{O(\varepsilon^{1/2})}$, a direct comparison with Equation (\[equationmeanflowGuilhem\]) is possible.
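Numerically, the Hilbert transform ${\mathcal H}$ acting on the transverse coordinate in Equation (\[equationformeanflowinit\]) can be evaluated spectrally; a minimal sketch on a periodic grid, with the sign convention ${\mathcal H}[e^{ikY}]=-i\,\mathrm{sgn}(k)\,e^{ikY}$ assumed here (so that ${\mathcal H}[\cos]=\sin$):

```python
import numpy as np

def hilbert_transform(g):
    """FFT-based Hilbert transform along a periodic coordinate.

    Assumed convention: H[exp(ikY)] = -1j * sign(k) * exp(ikY)."""
    G = np.fft.fft(g)
    k = np.fft.fftfreq(g.size)          # only the sign of k matters
    return np.fft.ifft(-1j * np.sign(k) * G).real

Y = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
# With this convention, H[cos Y] = sin Y
err = np.max(np.abs(hilbert_transform(np.cos(Y)) - np.sin(Y)))
```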
The first term in Equation (\[equationformeanflowinit\]), which involves derivatives in both horizontal coordinates, corresponds to the term identified by [@Bordes2012], while this more complete analysis sheds light on an additional term deriving from purely transverse variations. Using an intermediate equation, [@KataokaAkylas2015] finally end with the alternative and more elegant form $$\partial_T \overline V_\infty = i \partial_Y {\mathcal H} \left(\int_{-\infty}^{+\infty} \mbox{d} \eta \left[ \left({\cal U}^*\partial_\eta {\cal U}\right)_T+{\bar\nu}\partial_\eta {\cal U}^*\partial_{\eta\eta} {\cal U}\right]\right).\label{equationformeanflow}$$ Moreover, they show that, to match inner and outer solutions, this induced mean flow turns out to be purely horizontal to leading order and also dominant over the other harmonics. This theoretical description agrees very well with the experimental results, as beautifully emphasized by the different panels presented in **Figure \[ResultmanipGuilhem\]**. As far as a comparison with the experimental results of [@Bordes2012] is concerned, a common caveat of the predictions by [@Bordes2012] and [@KataokaAkylas2015] is the assumption of a small wavelength compared to the length scale of the wave envelope, which is only marginally satisfied in the experiments. One may for instance wonder whether the horizontal structure of the observed waves is primarily due to the feedback of the mean flow on the wave, or to the sole diffraction pattern of the wave due to this absence of scale separation. This needs to be addressed in future work.

Forcing of Oceanic Mean Flows
-----------------------------

Using an analysis based on the Generalized-Lagrangian-Mean (GLM) theory, [@GrisouardBuhler2012] have also studied the role of dissipating oceanic internal tides in forcing mean flows.
[For analytical convenience, they model wave dissipation as a linear damping term $-\gamma_b b$ in the buoyancy equation (\[eq:cons\_masse\]), and neglect the viscous term in the momentum equation (\[eq:NS\_strat\]).]{} Within this framework, they discuss in detail the range of situations in which a strong, secularly growing mean-flow response can be expected. Their principal results include the derivation of an expression for the effective mean force exerted by small-amplitude internal tides on oceanic mean flows. At leading order, taking into account the background rotation and using a perturbation series in small wave amplitude, they derive the following explicit expression $$\partial_t\overline \Omega+\frac{\gamma_b f}{N^2}\partial_z \overline b=\frac{-i{\gamma_b}N^2}{2(\omega^2+\gamma_b^2)\omega} \left({\mbox{\boldmath $\nabla$} u_z^*\times\mbox{\boldmath $\nabla$} u_z}\right) \cdot {\mbox{\boldmath $e$}_z},\label{EquationbulhlerGrisouard}$$ for the average over the tidal period of the vertical vorticity. It is remarkable that one recovers in the presence of rotation a forcing term on the right-hand side that is analogous to the forcing terms obtained by [@Bordes2012] and [@KataokaAkylas2015] in the non-rotating case. In inviscid rotating flows, vortical modes are at geostrophic equilibrium, and there is a frequency gap separating those geostrophic modes from inertia-gravity waves. This frequency gap generally precludes interactions between geostrophic modes and wave modes. The work of [@GrisouardBuhler2012], however, shows that the combination of nonlinear and dissipative effects allows for a one-way energy transfer from inertia-gravity wave modes to geostrophic modes, through a genuinely three-dimensional mechanism. [Using]{} Equation (\[EquationbulhlerGrisouard\]), [@GrisouardBuhler2012] compute the effective mean force numerically in a number of idealized examples with simple topographies.
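The geometric content of the forcing term in Equation (\[EquationbulhlerGrisouard\]) can be checked on a toy wave field: $(\mbox{\boldmath $\nabla$} u_z^*\times\mbox{\boldmath $\nabla$} u_z)\cdot\mbox{\boldmath $e$}_z$ is purely imaginary, so the right-hand side (with its $-i$ prefactor) is real, and it vanishes identically when the wave field has no transverse variation, confirming that the mechanism is genuinely three-dimensional. The wave field below is an illustrative assumption, not one of the topographic examples of [@GrisouardBuhler2012]:

```python
import numpy as np

# Illustrative (assumed) wave field: plane-wave phase with a Gaussian transverse envelope
kx, kz, sigma = 2.0, -3.0, 0.5
n = 64
x = np.linspace(0.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
z = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
u_z = np.exp(-Y**2 / (2.0 * sigma**2)) * np.exp(1j * (kx * X + kz * Z))

gx = np.gradient(u_z, x, axis=0)
gy = np.gradient(u_z, y, axis=1)
# Vertical component of grad(u_z*) x grad(u_z): dx(u*) dy(u) - dy(u*) dx(u)
cross_z = np.conj(gx) * gy - np.conj(gy) * gx

# Same computation for a wave with no transverse (y) variation: the term vanishes
u_2d = np.exp(1j * (kx * X + kz * Z))
cross_z_2d = (np.conj(np.gradient(u_2d, x, axis=0)) * np.gradient(u_2d, y, axis=1)
              - np.conj(np.gradient(u_2d, y, axis=1)) * np.gradient(u_2d, x, axis=0))
```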
Although a complete formulation with dissipative terms in the momentum equation is necessary, the conclusion of this important work by [@GrisouardBuhler2012] is that energy [of inertia-gravity waves in rotating fluids ]{}can be transferred to a horizontal mean flow by a similar resonance mechanism as described in the experiment by [@Bordes2012]. One understands therefore that mean flows can be generated in regions of wave dissipation, and not necessarily near the topographic wave source.

CONCLUSIONS AND FUTURE DIRECTIONS {#ConclusionsPerspectives}
=================================

We have presented several recent experimental and theoretical works that have renewed interest in internal wave beams. After emphasizing the reason for their ubiquity in stratified fluids – they are solutions of the nonlinear governing equations – this review has presented the two main mechanisms of instability for those beams: i\) Triadic Resonant Instability. We have shown that this instability produces a direct transfer of energy from large scales (primary waves) to smaller scales (subharmonic ones) for inviscid plane waves, but this is no longer true for internal wave beams, since the most unstable triad may combine subharmonic waves with larger and smaller wavelengths. Moreover, the effects of finite size and envelope shape on the onset of Triadic Resonant Instability have long been overlooked. These features have to be taken into account to safely reproduce the complete nonlinear transfer of energy between scales in the ocean interior or in experimental analogs [@SED2013; @BrouzetEPL2016], and therefore to find its stationary state, the so-called Garrett and Munk spectrum [@GarrettMunk1975] or its possible theoretical analog, the Zakharov spectrum of wave turbulence theory [@NazarenkoBook]. ii\) Streaming Instability.
Now that the mechanism underlying streaming instability and the conditions for its occurrence have been identified, several other examples will probably be reported in the coming years. For example, such a mean-flow generation has also been observed in a recent experiment [@Brouzetetal2016] in which the reflection of internal gravity waves in closed domains leads to an internal wave attractor. Two lateral Stokes boundary layers indeed generate a fully three-dimensional interior velocity field that provides the condition for the mean flow to appear. With a perturbation approach, [@Beckebanze2016] confirmed this theoretically and showed that the generated 3D velocity field damps the wave beam at high wavenumbers, thereby providing a new mechanism to establish an energetic balance for steady-state wave attractors. [@SeminFauve] have also recently studied experimentally the generation of a mean flow by a progressive internal gravity wave in a simple two-dimensional geometry, revisiting an experimental analog of the quasi-biennial oscillation [@PlumbMcEwan1978]. They study the feedback of the mean flow on the wave, an essential ingredient of the quasi-biennial oscillation. Which is the dominant mechanism? [@KataokaAkylas2016] have recently suggested that streaming instability is central to three-dimensional internal gravity wave beam dynamics, in contrast with the TRI of sinusoidal wave trains, relevant to uniform beams, the special case of an internal plane wave with a confined spatial profile. This review therefore reinforces the need for more three-dimensional experiments studying wave-induced mean flow. In particular, the conditions that favor mean-flow generation with respect to triadic resonant interaction remain largely unknown. Angles of propagation? Three-dimensionality? Stratification? This is an important question that needs to be addressed.

In an incompressible non-rotating linearly stratified Boussinesq fluid:

1. Plane waves are solutions of the linear and nonlinear equations for any amplitude.

2. Internal wave beams, which correspond to the superposition of plane waves with wave vectors of different magnitudes but pointing in the same direction, are solutions of the linear and nonlinear equations.

3. Plane wave solutions are always unstable by TRI.

4. General localized internal wave beams are stable, while (quasi) spatially harmonic internal wave beams are unstable if the beam is wide enough.

5. In the presence of rotation, beams of general spatial profile are more vulnerable to TRI, especially close to the critical latitude, where nearly stationary wavepackets remain in the interaction region for extended durations, facilitating energy transfer.

6. Internal gravity wave beams with a confined spatial profile are linearly unstable to three-dimensional modulations.

7. When the wave beam is attenuated along its direction of propagation and the wave envelope varies in the transverse horizontal direction, nonlinear interactions of the wave beam with itself induce the emergence of a horizontal mean flow with vertical vorticity.

DISCLOSURE STATEMENT {#disclosure-statement .unnumbered}
====================

The authors are not aware of any biases that might be perceived as affecting the objectivity of this review.

ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

This work was supported by the LABEX iMUST (ANR-10-LABX-0064) of Université de Lyon, within the program “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). This work was achieved thanks to the resources of PSMN from ENS de Lyon. We acknowledge the contributions of G. Bordes, B. Bourget, C. Brouzet, P.-P. Cortet, E. Ermanyuk, M. Le Bars, P. Maurer, F. Moisy, J. Munroe, H. Scolan, and A. Wienkers to our research on this topic. We thank T. Akylas, V. Botton, N. Grisouard, H. Karimi, T. Kataoka, L. Maas, T. Peacock, C. Staquet, and B. Voisin for helpful discussions.
[00]{} Alexander M. 2003. *Encyclopedia of the Atmospheric Sciences*. Chapter Parametrization of Physical Process: Gravity wave Momentum Flux, pp. 1669–1705. London, Academic/Elsevier. Alford M., Peacock T, MacKinnon JA, Nash JD, Buijsman MC, Centuroni LR, Chao SH, Chang MH, Farmer DM, Fringer OB, Fu KH, Gallacher PC, Graber HC, Helfrich KH, Jachec SM, Jackson CR, Klymak JM, Ko DS, Jan S, Johnston TMS, Legg S, Lee IH, Lien RC, Mercier MJ, Moum JN, Musgrave R., Park JH, Pickering AI, Pinkel R, Rainville L, Ramp ST, Rudnick DL, Sarkar S, Scotti A, Simmons HL, St Laurent LC, Venayagamoorthy SK, Wang YH, Wang J, Yang YJ, Paluszkiewicz T, Tang TY. (2015). The formation and fate of internal waves in the South China Sea. *Nature* 521:65-69 Alford M., MacKinnon JA, Zhao Z., Pinkel R., Klymak J., Peacock T. 2007. Internal waves across the Pacific. *Geophys. Res. Lett.* 34:L24601 Andrews DG, McIntyre ME. 1978. An exact theory of nonlinear waves on a lagrangian-mean flow. *J. Fluid Mech.* 89:609–646 Andrews DG, McIntyre ME. 1976. Planetary waves in horizontal and vertical shear: The generalized Eliassen-Palm relation and the mean zonal acceleration. *J. Atmos. Sci.* 33:2031–2048 Beckebanze F, Maas LRM. 2016. Damping of 3D internal wave attractors by lateral walls *Proceedings of the VIII International Symposium on Stratified Flows*, San Diego, 29 August-1 September 2016, Editor: University of California at San Diego. Bell TH. 1975. Lee waves in stratified flows with simple harmonic time dependence. *J. Fluid Mech.* 67:705–722 Benielli D, Sommeria J. 1998. Excitation and breaking of internal gravity waves by parametric instability. *Journal of Fluid Mechanics* 374:117–144 Bordes G. 2012. *Interactions non-linéaires d’ondes et tourbillons en milieu stratifié ou tournant*. Ph.D. dissertation, ENS de Lyon (https://tel.archives-ouvertes.fr/tel-00733175/en) Bordes G, Moisy F, Dauxois T, Cortet PP. 2012a. 
Experimental evidence of a triadic resonance of plane inertial waves in a rotating fluid *Phys. Fluids* 24:014105 Bordes G, Venaille A, Joubaud S, Odier P, Dauxois. 2012b. Experimental observation of a strong mean flow induced by internal gravity waves. *Phys. Fluids* 24:086602 Bourget B. 2014. *Ondes internes, de l’instabilité au mélange. Approche expérimentale.*. Ph.D. dissertation, ENS de Lyon (https://tel.archives-ouvertes.fr/tel-01073663/en) Bourget B, Dauxois T, Joubaud S, Odier P. 2013. Experimental study of parametric subharmonic instability for internal plane waves. *J. Fluid Mech.* 723:1–20 Bourget B, Scolan H, Dauxois T, Le Bars M, Odier P, Joubaud, S. 2014. Finite-size effects in parametric subharmonic instability. *J. Fluid Mech.* 759:739–750 Bretherton FP. 1969. On the mean motion induced by internal gravity waves. *J. Fluid Mech.* 36:785–803 Brouzet C, Ermanyuk EV, Joubaud S, Sibgatullin IN, Dauxois T. 2016a Energy cascade in internal wave attractors. *Europhysics Letters* 113:44001 Brouzet C, Sibgatullin IN, Scolan H, Ermanyuk EV, Dauxois T. 2016b Internal wave attractors examined using laboratory experiments and 3D numerical simulations. *J. Fluid Mech.* 793:109–131 Buhler OK. 2009. *Waves and Mean Flows*. London, UK: Cambridge University Press Callies J, Ferrari R, & Bühler O. 2014. Transition from geostrophic turbulence to inertia–gravity waves in the atmospheric energy spectrum. *Proceedings of the National Academy of Sciences* 111:17033–17038 Charney JG, Drazin PG. 1961. Propagation of planetary-scale disturbances from the lower into the upper atmosphere. *J. Geophys. Res.* 66:83–109 Chraibi H, Wunenburger R, Lasseux D, Petit J, Delville JP. Eddies and interface deformations induced by optical streaming. *J. Fluid Mech.* 688:195–218 Clark HA, Sutherland BR. 2010. Generation, propagation, and breaking of an internal wave beam. *Phys. Fluids* 22, 076601 Cole ST, Rudnick DL, Hodges BA, Martin JP. 2009. 
Observations of tidal internal wave beams at Kauai Channel, Hawaii. *J. Phys. Oceanogr.* 39:421–436 Craik AD. 1988. *Wave Interactions and Fluid Flows*. London, UK: Cambridge University Press Dauxois T, Young WR. 1999. Near critical reflection of internal waves. *J. Fluid Mech.* 390:271–295 Davis RE, Acrivos A. 1967. The stability of oscillatory internal waves. *J. Fluid Mech.* 30:723–736 Eckart C. 1948. Vortices and streams caused by sound waves. *Phys. Rev.* 73:68-76. Eliassen A., Palm E. 1961. On the transfer of energy in stationary mountain waves. *Geofysiske Publikasjoner* 22:1–23 Fritts, D. C., Alexander, M.J. 2003. Gravity wave dynamics and effects in the middle atmosphere. *Reviews of Geophysics* 41:1003 Garrett C, Kunze E. 2007. Internal tide generation in deep ocean *Annu. Rev. Fluid Mech.* 39:57–87 Garrett C, Munk W. 1975. Space-time scales of internal waves: A progress report *J. Geophys. Res.* 80:291–297 Gayen B., Sarkar S. 2013. Degradation of an internal wave beam by parametric subharmonic instability in an upper ocean pycnocline. *J. Geophys. Res. Oceans* 118:4689–98 Gerkema T., Staquet C., Bouruet-Aubertot P. 2006. Decay of semi-diurnal internal-tide beams due to subharmonic resonance. *Geophys. Res. Lett.* 33:L08604 Gill A. E. 1982. Atmosphere-ocean dynamics *Int. Geophys. Series, Academic press* 30 Gostiaux L, Dauxois T., Didelle H., Sommeria J., Viboud S. 2006. Quantitative laboratory observations of internal wave reflection on ascending slopes *Phys. Fluids* 18:056602 Gostiaux L, Dauxois T. 2007. Laboratory experiments on the generation of internal tidal beams over steep slopes. *Phys. Fluids* 19:028102 Gostiaux L, Didelle H, Mercier S, Dauxois T. 2007. A novel internal waves generator. *Experiments in Fluids* 42:123–130 Grisouard N. 2010. *Réflexions et réfractions non-linéaires d’ondes de gravitée internes*. Ph.D. 
dissertation, Université de Grenoble (http://tel.archives-ouvertes.fr/tel-00540608/en/) Grisouard N, Leclair M, Gostiaux L, Staquet C. 2007. Large scale energy transfer from an internal gravity wave reflecting on a simple slope. *Proc. IUTAM* 8:119–128 Grisouard N, Bühler O. 2012. Forcing of oceanic mean flows by dissipating internal tides. *J. Fluid Mech.* 708:250–278 Hasselmann K. 1967. A criterion for nonlinear wave instability. *J. Fluid Mech.* 30:737–739 Hazewinkel J, Winters KB. 2011. PSI on the Internal Tide on a $\beta$ Plane: Flux Divergence and Near-Inertial Wave Propagation. *Journal of Physical Oceanography* 41:1673–1682 Hibiya T, Nagasawa M, Niwa Y. 2002. Nonlinear energy transfer within the oceanic internal wave spectrum at mid and high latitudes. *J. Geophys. Res.* 107:3207 Joubaud S, Munroe J, Odier P, Dauxois T. 2012. Experimental parametric subharmonic instability in stratified fluids. *Phys. Fluids* 24:041703 Johnston TMS, Merrifield MA, Holloway PE. 2003. Internal tide scattering at the Line Islands Ridge. *J. Geophys. Res.* 108:3365 Johnston TMS, Rudnick DL, Carter GS, Todd RE, Cole ST. 2011. Internal tidal beams and mixing near Monterey Bay. *J. Geophys. Res.* 116:C03017 Karimi HH. 2015. Doctoral dissertation. Department of Mechanical Engineering, MIT Karimi HH, Akylas TR. 2014 Parametric subharmonic instability of internal waves: locally confined beams versus monochromatic wave trains. *J. Fluid Mech.* 757:381–402 Karimi HH, Akylas TR. 2017 Near-inertial parametric subharmonic instability of internal gravity wave beams. *Physical Review Fluids* submitted. Kataoka T, Akylas TR. 2013 Stability of internal gravity wave beams to three-dimensional modulations. *J. Fluid Mech.* 736:67–90 Kataoka T, Akylas TR. 2015 On three-dimensional internal gravity wave beams and induced large-scale mean flows *J. Fluid Mech.* 769:621–634 Kataoka T, Akylas TR. 2016 Three-dimensional instability of internal gravity wave beams.
*Proceedings of the VIII International Symposium on Stratified Flows*, San Diego, 29 August-1 September 2016, Editor: University of California at San Diego. Khatiwala S. 2003 Generation of internal tides in an ocean of finite depth: analytical and numerical calculations. *Deep-Sea Res.* 50:3–21 King B, Zhang HP, Swinney HL. 2009. Tidal flow over three-dimensional topography in a stratified fluid. *Phys. Fluids* 21:116601 Koudella CR, Staquet C. 2006. Instability mechanisms of a two-dimensional progressive internal gravity wave. *J. Fluid Mech.* 548:165–196 Lamb KG. 2004 Nonlinear interaction among internal wave beams generated by tidal flow over supercritical topography. *Geophys. Res. Lett.* 31:L09313 Leclair M, Grisouard N, Gostiaux L, Staquet C, Auclair F. 2011. Reflexion of a plane wave onto a slope and wave-induced mean flow. *Proceedings of the VII International Symposium on Stratified Flows*, Rome, 22–26 August 2011, Editor: Sapienza Università di Roma Lelong M-P, Riley J. 1991. Internal wave-vortical mode interactions in strongly stratified flows. *J. Fluid Mech.* 232:1:19 Lerisson G, Chomaz JM. 2017. Global stability of internal gravity wave. *Phys. Rev. Fluids* submitted. Liang Y, Zareei A, Alam MR. 2017. Inherently unstable internal gravity waves due to resonant harmonic generation. *J. Fluid Mech.* 811:400–420 Lien R-C, Gregg MC. 2001. Observations of turbulence in a tidal beam and across a coastal ridge. *J. Geophys. Res.* 106:4575–4591 Lighthill J. 1978a. *Waves In Fluids*. London, UK: Cambridge University Press Lighthill J. 1978b. Acoustic streaming. *J. Sound Vib.* 61:391–418 Lighthill J. 1996. Internal waves and related initial-value problems. *Dyn. Atmos. Oceans* 23:3–17 MacKinnon JA, Winters KB. 2005. Subtropical catastrophe: Significant loss of low-mode tidal energy at 28.9. *Geophys. Res. Lett.* 43:5 MacKinnon JA, Alford MH, Sun O, Pinkel R, Zhao Z, Klymak J. 2013. Parametric subharmonic instability of the internal tide at 29$^\circ$N. *J. 
Phys. Oceanogr.* 43:17–28 Maugé R, Gerkema T. 2008. Generation of weakly nonlinear nonhydrostatic internal tides over large topography: a multi-modal approach. *Nonlin. Processes Geophys.* 15:233–244 Maurer P, Joubaud S, Odier P. 2016. Generation and stability of inertia–gravity waves *J. Fluid Mech* 808:539–561 McEwan AD. 1971. Degeneration of resonantly excited standing internal gravity waves. [*J. Fluid Mech.*]{} 50:431–448 McEwan AD. 1973. Interactions between internal gravity wave and their traumatic effect on a continuous stratification. *Boundary-Layer Meteorol.* 5:159–175 McEwan AD, Plumb RA. 1977. Off-resonant amplification of finite internal wave packets. *Dyn. Atmos. Oceans* 2:83–105 Mercier MJ, Garnier N, Dauxois T. 2008. Reflection and diffraction of internal waves analyzed with the Hilbert transform. *Physics of Fluids* 20:0866015 Mercier MJ, Martinand D, Mathur M, Gostiaux L, Peacock T, Dauxois T. 2010. New wave generation *J. Fluid Mech.* 657:308–334 Moudjed B, Botton V, Henry D, Ben Hadid H, Garandet JP. 2014. Scaling and dimensional analysis of acoustic streaming jets. *Phys. Fluids* 26:093602 Mowbray DE, Rarity BS. 1967. A theoretical and experimental investigation of the phase configuration of internal waves of small amplitude in a density stratified fluid. *J. Fluid Mech.* 28:1–16 Mied RP. 1976. The occurrence of parametric instabilities in finite-amplitude internal gravity waves. *J. Fluid Mech.* 78:763–784 Nazarenko S. 2011. *Wave Turbulence*. Springer-Verlag, Berlin Heidelberg Nyborg WL. 1965. Acoustic streaming. *Phys. Acoustics* 2:265 Pairaud I, Staquet C, Sommeria J, Mahdizadeh M. 2010. Generation of harmonics and sub-harmonics from an internal tide in a uniformly stratified fluid: numerical and laboratory experiments. In IUTAM Symposium on Turbulence in the Atmosphere and Oceans (ed. D. Dritschel), vol. 28, pp. 51–62. Springer Peacock T, Echeverri P, Balmforth NJ. 2008. 
An experimental investigation of internal tide generation by two-dimensional topography. *J. Phys. Oceanogr.* 38:235–242 Phillips OM. 1966. *The Dynamics of the Upper Ocean*. [Cambridge University Press, New York]{} Plumb RA. 1977. The interaction of two internal waves with the mean flow: Implications for the theory of the quasi-biennial oscillation. *Journal of the Atmospheric Sciences* 34:1847–1858 Plumb R, McEwan A. 1978. The instability of a forced standing wave in a viscous stratified fluid : A laboratory analogue of the quasi-biennial oscillation. *J. Atmos. Sci.* 35:1827–1839 Rainville L, Pinkel R. 1978. Propagation of low-mode internal waves through the ocean. *J. Phys. Oceanogr.* 36:1220 Riley N. 2001. Steady streaming. *Annual Review of Fluid Mechanics* 33:43–65 Sarkar S, Scotti A. 2016. Turbulence During Generation of Internal Tides in the Deep Ocean and Their Subsequent Propagation and Reflection. *Annual Review of Fluid Mechanics* 49:1 Scolan H, Ermanyuk E, Dauxois T. 2013. Nonlinear fate of internal waves attractors. *Physical Review Letters.* 110:234501 Semin B, Facchini G, Pétrélis F, Fauve S. 2016. Generation of a mean flow by an internal wave *Physics of Fluids.* 28:096601 Staquet C, Sommeria J. 2002. Internal gravity waves: from instabilities to turbulence. *Annu. Rev. Fluid Mech.* 34:559–593 Squires, T.M. and Quake, S.R. 2013 Microfluidics: Fluid physics at the nanoliter scale. *Reviews of Modern Physics*, 77:3-977 Sun O, Pinkel R. 2013. Subharmonic energy transfer from the semi-diurnal internal tide to near-diurnal motions over Kaena Ridge, Hawaï. *Journal of Physical Oceanography* 43:766–789 Sutherland BR. 2010 *Internal Gravity Waves*. London, UK: Cambridge University Press Sutherland BR. 2013. The wave instability pathway to turbulence *J. Fluid Mech.* 724:1–4 Tabaei A, Akylas TR. 2003. Nonlinear internal gravity wave beams. *J. Fluid Mech.* 482:141–161 Tabaei A, Akylas TR, Lamb KG. 2005. 
Nonlinear effects in reflecting and colliding internal wave beams. *J. Fluid Mech.* 526:217-243 Thomas NH, Stevenson TN. 1972. A similarity solution for viscous internal waves. *J. Fluid Mech.* 54:495–506. van den Bremer TS, Sutherland BR. 2014. The mean flow and long waves induced by two-dimensional internal gravity wavepackets. *Physics of Fluids* 26:106601. Voisin B. 2003. Limit states of internal wave beams. *J. Fluid Mech.* 496:243–293. Westervelt PJ. 1953. The theory of steady rotational flow generated by a sound field. *The Journal of the Acoustical Society of America* 25:60–67 Wienkers AF. 2015. A critical amplitude for finite size parametric subharmonic instability: numerical simulations. Master Report. Berkeley University and ENS de Lyon. Wunsch C., Ferrari R. (2004). Vertical mixing, energy, and the general circulation of the oceans. *Annu. Rev. Fluid Mech.*, 36:281-314. Xie JH, Vanneste J. 2014. Boundary streaming with Navier boundary condition. *Phys. Rev. E* 89:063010 Young WR, Tsang Y-K, Balmforth NJ. 2008. Near-inertial parametric subharmonic instability. *J. Fluid Mech.* 607:25–49 Zhou Q, Diamessis PJ. 2013. Reflection of an internal gravity wave beam off a horizontal free-slip surface. *Phys. Fluids* 25:036601 [^1]: Studying the mechanism of superharmonic generation, [@Alam2016] reported recently another situation for which the nonlinear terms vanish in the domain bulk. Interestingly, however, they play a pivotal role through the free surface boundary condition. [^2]: This should not be mixed-up with the mechanism for planetesimal formation in astrophysics. [^3]: Note however that transient mean flows can be generated by inviscid motion in the wake of a propagating internal wave packet [@Bretherton1969; @vandenBremerSutherland2014].
---
abstract: 'We present a consistent theoretical approach for calculating effective nonlinear susceptibilities of metamaterials taking into account both frequency and spatial dispersion. Employing the discrete dipole model, we demonstrate that effects of spatial dispersion become especially pronounced in the vicinity of the effective permittivity resonance where nonlinear susceptibilities reach their maxima. In that case spatial dispersion may enable simultaneous generation of two harmonic signals with the same frequency and polarization but different wave vectors. We also prove that the derived expressions for nonlinear susceptibilities transform into the known form when spatial dispersion effects are negligible. In addition to revealing new physical phenomena, our results provide useful theoretical tools for analysing resonant nonlinear metamaterials.'
author:
- 'Maxim A. Gorlach'
- 'Tatiana A. Voytova'
- Mikhail Lapine
- 'Yuri S. Kivshar'
- 'Pavel A. Belov'
bibliography:
- 'NonlinearLib.bib'
title: Nonlocal homogenization for nonlinear metamaterials
---

Introduction {#sec:Introduction}
============

The field of nonlinear metamaterials attracts significant research interest due to its numerous fascinating applications [\[\]]{}. The use of the large nonlinearities available in resonant nonlinear metamaterials [\[\]]{} opens the possibility of all-optical signal processing [\[\]]{} and provides rich opportunities to implement tunable and reconfigurable photonic devices [\[\]]{}. A fundamental task in the field is the characterization of the electromagnetic properties of nonlinear metamaterials, i.e. homogenization. For many nonlinear scenarios, the nonlinearity of metamaterials is described in a perturbative way in terms of nonlinear susceptibilities [\[\]]{}. This allows one to exploit the framework of nonlinear optics and directly compare the nonlinearities of artificially structured media with those occurring in natural crystals.
To date, a number of approaches to homogenize nonlinear composites and metamaterials have been reported [\[\]]{}. However, most of these approaches do not take spatial dispersion into account. Spatial dispersion (or nonlocality) implies the dependence of the polarization of a physically small volume on the fields existing in the neighboring regions of space. A number of theoretical and experimental studies demonstrate that nonlocality is pronounced in a wide class of linear metamaterials [\[\]]{}. In linear structures nonlocality is often described in terms of an effective permittivity tensor depending on both frequency and wave vector. Comprehensive theoretical models describing nonlocality in linear artificial composites were developed [\[\]]{} and, in particular, a generalization of the Clausius-Mossotti formula for the case of a discrete three-dimensional (3D) metamaterial was proposed [\[\]]{}. However, to the best of our knowledge, the consistent theoretical description of spatial dispersion effects in nonlinear metamaterials has so far remained an open problem.

![(Color online) Schematic representation of a three-dimensional metamaterial composed of nonlinear uniaxial inclusions.[]{data-label="ris:Uniaxial"}](Uniaxial-str.eps){width="0.7\linewidth"}

In this paper, we derive nonlocal nonlinear susceptibilities of a 3D nonlinear metamaterial composed of uniaxial scatterers (Fig. \[ris:Uniaxial\]). We employ the discrete dipole model [\[\]]{} describing the field of the scatterer in the dipole approximation, whereas the properties of the individual meta-atom are characterized by linear and nonlinear polarizability tensors. The rest of the paper is organized as follows. In Sec. \[sec:Homogenization\] we derive general expressions for nonlocal nonlinear susceptibilities of a discrete metamaterial. Sec.
\[sec:Local\] demonstrates that in the limiting case when spatial dispersion effects in the structure can be neglected our results reduce to the known expressions for local field corrections. In Sec. \[sec:Numerical\] we illustrate the main features of the developed approach by providing a numerical example for a structure composed of short wires loaded with varactor diodes. In particular, we highlight the essential role of spatial dispersion effects at frequencies in the vicinity of the effective permittivity resonance. Finally, in Sec. \[sec:Discussion\] we discuss the obtained results. Calculation of the linear and nonlinear polarizabilities of a short varactor-loaded wire is provided in the Appendix.

Nonlocal homogenization of discrete nonlinear metamaterial {#sec:Homogenization}
==========================================================

We consider an array of nonlinear uniaxial meta-atoms arranged in a cubic 3D lattice with the period $a$ (Fig. \[ris:Uniaxial\]). Note that a chaotic arrangement of similar nonlinear dipoles was studied in Ref. [\[\]]{}. However, in that work mutual interactions of meta-atoms were not considered. In the present derivation we use the CGS system of units and assume $e^{-i\omega t}$ time dependence of monochromatic fields. We consider excitation of the nonlinear structure by an incident wave with frequency $\omega$, denoting the wave vector of the fundamental wave propagating in the metamaterial by $\vec{k}$. Due to the nonlinear nature of the inclusions, the incident wave generates polarization not only at the fundamental frequency $\omega$, but also at frequencies $2\,\omega$, $3\,\omega$, etc. This nonlinear polarization becomes a source of harmonics at frequencies $2\,\omega$, $3\,\omega$, etc. In the present analysis we consider only the second and third harmonics, omitting nonlinear contributions of higher order. This is a typical assumption for many nonlinear metamaterials [\[\]]{}.
Furthermore, for the sake of simplicity we assume that the scatterers can be polarized only along the $z$ axis. Then the only essential components of the linear and nonlinear susceptibility tensors are $\chi^{(1)}_{zz}$, $\chi^{(2)}_{zzz}$ and $\chi^{(3)}_{zzzz}$. Accordingly, the subscript $z$ is hereafter omitted in the designations of vector and tensor components. Under these assumptions the dipole moment $d$ of the individual meta-atom is given by the equations: $$\begin{aligned} & d(\omega)=\alpha_1(\omega)\,E_{\rm{tot}}(\omega)+\notag\\ & 2\,\alpha_2(\omega;2\,\omega,-\omega)\,E_{\rm{tot}}(2\,\omega)E_{\rm{tot}}^*(\omega)+\label{d1}\\ & 3\,\alpha_3(\omega;\omega,\omega,-\omega)\,\left|E_{\rm{tot}}(\omega)\right|^2\,E_{\rm{tot}}(\omega)\:,\notag\\ \notag \\ & d(2\,\omega)=\alpha_1(2\,\omega)\,E_{\rm{tot}}(2\,\omega)+\notag\\ & \alpha_2(2\,\omega;\omega,\omega)\,E_{\rm{tot}}^2(\omega)\:,\label{d2}\\ \notag \\ & d(3\,\omega)=\alpha_1(3\,\omega)\,E_{\rm{tot}}(3\,\omega)+\notag\\ & 2\,\alpha_2(3\,\omega;2\,\omega,\omega)\,E_{\rm{tot}}(2\,\omega)\,E_{\rm{tot}}(\omega)+\label{d3}\\ & \alpha_3(3\,\omega;\omega,\omega,\omega)\,E_{\rm{tot}}^3(\omega)\:.\notag\end{aligned}$$ Here $\alpha_1$, $\alpha_2$ and $\alpha_3$ stand for the linear, second- and third-order nonlinear polarizabilities. The total field $E_{\rm{tot}}(\omega)$ is the sum of the external field acting on the scatterer, $E(\omega)$, and the field associated with radiation friction [\[\]]{} $E_s(\omega)=2i\omega^3\,d(\omega)/(3\,c^3)$. In this case, the polarizabilities introduced in Eqs. , , are so-called [*bare*]{} polarizabilities, i.e. they do not contain the radiation loss contribution [\[\]]{}. Note that the bare polarizability of a lossless scatterer is purely real. An alternative description of radiation losses incorporates an imaginary part into the scatterer polarizability [\[\]]{}. The latter approach, however, turns out to be less convenient for nonlinear structures.
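To make the bookkeeping of the dipole-moment hierarchy above concrete, here is a minimal numerical sketch. The function name, the dictionary keys labeling the nonlinear processes, and all numerical values are our own illustrative conventions, not quantities for any real scatterer.

```python
# Sketch of the dipole-moment hierarchy for a single uniaxial meta-atom:
# d(w), d(2w), d(3w) from the linear (a1), second-order (a2) and
# third-order (a3) polarizabilities and the total fields E1, E2, E3
# at frequencies w, 2w, 3w. Keys such as 'w;2w,-w' label the processes.

def dipole_moments(a1, a2, a3, E1, E2, E3):
    d_w = (a1['w'] * E1
           + 2 * a2['w;2w,-w'] * E2 * E1.conjugate()
           + 3 * a3['w;w,w,-w'] * abs(E1) ** 2 * E1)
    d_2w = a1['2w'] * E2 + a2['2w;w,w'] * E1 ** 2
    d_3w = (a1['3w'] * E3
            + 2 * a2['3w;2w,w'] * E2 * E1
            + a3['3w;w,w,w'] * E1 ** 3)
    return d_w, d_2w, d_3w
```

Setting the nonlinear polarizabilities to zero recovers the purely linear response at each frequency, which is a convenient consistency check.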
The linear polarizability $\alpha_1(\omega)$, along with the nonlinear polarizabilities $\alpha_2$ and $\alpha_3$, can be calculated for a particular scatterer. For example, the calculation of the nonlinear polarizabilities of a varactor-loaded short wire is provided in the Appendix, whereas the analysis of the nonlinear properties of varactor-loaded split-ring resonators is carried out in Refs. [\[\]]{}. It should be emphasized that the field $E(\omega)\equiv E_{\rm{tot}}(\omega)-E_s(\omega)$ appearing in Eqs. , , is the [*local field*]{}, i.e. the electric field at the point where the scatterer is located. On the other hand, in the definition of effective material parameters the [*average field*]{} appears. The average fields are defined as [\[\]]{} $$\begin{gathered} \left<\vec{E}(\Omega)\right>=\frac{1}{V_0}\,\int\limits_{V_0}\,\vec{E}(\Omega;\vec{r})\,e^{-i\vec{K}\cdot\vec{r}}\,dV\:,\label{Eav}\\ \left<\vec{P}(\Omega)\right>=\frac{1}{V_0}\,\int\limits_{V_0}\,\vec{P}(\Omega;\vec{r})\,e^{-i\vec{K}\cdot\vec{r}}\,dV\:,\label{Pav}\end{gathered}$$ where $V_0=a^3$ is the unit cell volume, $\Omega$ is an arbitrary frequency, and the vector function $\vec{K}=\vec{K}(\Omega)$ is specified later in this section.
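As a quick sanity check on the averaging definition above, the sketch below (reduced to one dimension along $z$, with grid parameters of our own choosing) confirms that the weight $e^{-i\vec{K}\cdot\vec{r}}$ makes the cell average of a plane wave return its amplitude exactly, whereas a plain unweighted average suppresses it by a sinc-type factor.

```python
import cmath

# 1D sketch of the cell average: for a plane wave E(z) = E0 exp(i K z),
# the weighted average recovers E0, while the unweighted average is
# reduced by a sinc-like factor. Resolution n and all numbers are
# illustrative assumptions.

def weighted_average(E0, K, a=1.0, n=200):
    dz = a / n
    total = 0j
    for j in range(n):
        z = (j + 0.5) * dz                      # midpoint rule
        total += E0 * cmath.exp(1j * K * z) * cmath.exp(-1j * K * z) * dz
    return total / a

def unweighted_average(E0, K, a=1.0, n=200):
    dz = a / n
    total = 0j
    for j in range(n):
        z = (j + 0.5) * dz
        total += E0 * cmath.exp(1j * K * z) * dz
    return total / a
```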
The average structure polarization is $$\label{AvP} \left<P(\Omega)\right>=d(\Omega)/V_0\:,$$ and the average electric field can be related to the average polarization by [\[\]]{} $$\label{AvField} \left<E(\Omega)\right>=\Gamma_{k}(\Omega,\vec{K})\,d(\Omega)\:,$$ with $$\label{Gamma} \Gamma_k(\Omega;\vec{K})=-\frac{4\pi}{a^3}\,\frac{\Omega^2-K_z^2\,c^2}{\Omega^2-K^2\,c^2}\:.$$ The local field acting on the scatterer at the coordinate origin can be evaluated via the dyadic Green’s function [\[\]]{} $\widehat{G}(\Omega;\vec{r})$ and the dipole moments $d_{mnl}$ of the meta-atoms as $$E(\Omega)=\sum\limits_{(m,n,l)\not=(0,0,0)}\,G_{zz}(\Omega;-\vec{r}_{mnl})\,d_{mnl}(\Omega)$$ where the indices $m,n,l$ enumerate the lattice sites and the dipole moments of the scatterers in the unbounded structure are distributed as $$d_{mnl}(\Omega)=d(\Omega)\,e^{i\vec{K}(\Omega)\cdot\vec{r}_{mnl}}\:.$$ The distribution of polarization at the fundamental frequency is determined by the incident wave, and in this case $\vec{K}(\omega)=\vec{k}$, where $\vec{k}$ is the wave vector of the structure eigenmode. Since the second-order nonlinear polarization is a quadratic function of the fundamental wave, $\vec{K}(2\,\omega)=2\,\vec{k}$ and, similarly, $\vec{K}(3\,\omega)=3\,\vec{k}$. Thus, the expression for the local field can be represented as $$\label{LocField} E(\Omega)=G_k\left(\Omega;\vec{K}(\Omega)\right)\,d(\Omega)\:,$$ with the lattice sum defined as $$G_k(\Omega;\vec{K})\equiv\sum\limits_{(m,n,l)\not=(0,0,0)}\,G_{zz}(\Omega;\vec{r}_{mnl})\,e^{-i\vec{K}\cdot\vec{r}_{mnl}}\:.$$ Therefore, the total field appearing in Eqs. , , is $$\label{TotField} E_{\rm{tot}}(\Omega)=G_k'\left(\Omega,\vec{K}(\Omega)\right)\,d(\Omega)$$ with $G_k'(\Omega,\vec{K})=G_k(\Omega,\vec{K})+2i\,\Omega^3/(3\,c^3)$. Efficient algorithms for rapid evaluation of this lattice sum were developed earlier [\[\]]{}. Importantly, it can be proved that for real $\Omega$ and $\vec{K}$, ${\rm Im}\, G_k'(\Omega,\vec{K})=0$ [\[\]]{}. Making use of Eqs.
and it is easy to see that the dispersion equation for the linear structure with the metaatom’s polarizability $\alpha_1(\omega)$ has the form [\[\]]{} $$\label{DispEq} \alpha_1^{-1}(\omega)-G_k'(\omega;\vec{k})=0\:.$$ Using Eqs.  and we obtain $$\label{TotAvField} E_{\rm{tot}}(\Omega)=\Phi(\Omega;\vec{K})\,\left<E(\Omega)\right>$$ where $\Phi(\Omega;\vec{K})=G_k'(\Omega;\vec{K})/\Gamma_k(\Omega;\vec{K})$ is introduced for convenience. Making use of Eqs.  and , the relation between local and averaged fields can be also represented in the alternative form $$\label{TotAvField2} E_{\rm{tot}}(\Omega)=\left<E(\Omega)\right>+C_k(\Omega;\vec{K})\,d(\Omega)$$ where $C_k(\Omega;\vec{K})=G_k'(\Omega;\vec{K})-\Gamma_k(\Omega;\vec{K})$ is the lattice interaction constant. Finally, inserting the expressions Eqs. , , into Eqs. , , , we obtain the relation between the averaged structure polarization and the averaged field: $$\begin{aligned} & \left<P(\omega)\right>=\chi^{(1)}(\omega,\vec{k})\,\left<E(\omega)\right>+\notag\\ & 2\,\chi^{(2)}(\omega;2\,\omega,-\omega,\vec{k})\,\left<E(2\,\omega)\right>\,\left<E(\omega)\right>^*+\label{P1}\\ & 3\,\chi^{(3)}(\omega;\omega,\omega,-\omega,\vec{k})\,\left|\left<E(\omega)\right>\right|^2\,\left<E(\omega)\right>\:,\notag\\ \notag\\ & {\left< P(2\,\omega) \right>}=\chi^{(1)}(2\,\omega,2\,\vec{k})\,{\left< E(2\,\omega) \right>}+\notag\\ & \chi^{(2)}(2\,\omega;\omega,\omega,2\,\vec{k})\,{\left< E(\omega) \right>}^2\:,\label{P2}\\ \notag\\ & {\left< P(3\,\omega) \right>}=\chi^{(1)}(3\,\omega,3\,\vec{k})\,{\left< E(3\,\omega) \right>}+\notag\\ & 2\,\chi^{(2)}(3\,\omega;2\,\omega,\omega,3\,\vec{k})\,{\left< E(2\,\omega) \right>}\,{\left< E(\omega) \right>}+\label{P3}\\ & \chi^{(3)}(3\,\omega;\omega,\omega,\omega,3\,\vec{k})\,{\left< E(\omega) \right>}^3\:.\notag\end{aligned}$$ Nonlocal nonlinear susceptibilities in Eqs. 
, , can be written in a compact form as follows: $$\begin{gathered} \chi^{(1)}(\Omega,\vec{K})=\frac{1}{a^3}\,\left[\alpha_1^{-1}(\Omega)-C_k(\Omega,\vec{K})\right]^{-1}\:,\label{Chi1}\\ \chi^{(2)}(\omega_3;\omega_2,\omega_1,\vec{K}(\omega_3))=\frac{\alpha_2(\omega_3;\omega_2,\omega_1)}{a^3\,\alpha_1(\omega_3)}\,\Phi(\omega_2,\vec{K}(\omega_2))\,\Phi(\omega_1,\vec{K}(\omega_1))\,\left[\alpha_1^{-1}(\omega_3)-C_k(\omega_3,\vec{K}(\omega_3))\right]^{-1}\:,\label{Chi2}\\ \chi^{(3)}(\omega_4;\omega_3,\omega_2,\omega_1,\vec{K}(\omega_4))=\frac{\alpha_3(\omega_4;\omega_3,\omega_2,\omega_1)}{a^3\,\alpha_1(\omega_4)}\Phi(\omega_3,\vec{K}(\omega_3))\,\Phi(\omega_2,\vec{K}(\omega_2))\,\Phi(\omega_1,\vec{K}(\omega_1))\,\left[\alpha_1^{-1}(\omega_4)-C_k(\omega_4,\vec{K}(\omega_4))\right]^{-1}\:.\label{Chi3}\end{gathered}$$ In Eq. , $\Omega=\omega$, $2\,\omega$ or $3\,\omega$ and $\vec{K}(\Omega)=\vec{k}$, $2\,\vec{k}$ or $3\,\vec{k}$, respectively. In Eq. , $\omega_3=\omega_2+\omega_1$, the pair $(\omega_2,\omega_1)$ acquires the values $(\omega,\omega)$, $(2\,\omega,\omega)$ and $(2\,\omega,-\omega)$. In Eq. , $\omega_4=\omega_3+\omega_2+\omega_1$, the triplet $(\omega_3,\omega_2,\omega_1)$ acquires the values $(\omega,\omega,\omega)$ and $(\omega,\omega,-\omega)$. Note that Eqs. , and are valid for negative frequencies, in which case $\Phi(-\Omega,\vec{K}(-\Omega))\equiv \Phi^*(\Omega,\vec{K}(\Omega))$. Expressions , and depend implicitly on $\vec{k}$, which is the solution of the dispersion equation for the linear structure, Eq. . Therefore, one may easily calculate nonlinear susceptibilities for a given direction of wave propagation and fundamental frequency $\omega$. Eqs. , , suggest that nonlinear susceptibilities depend on the direction of wave propagation. Such an effect is one of the manifestations of spatial dispersion. It is discussed in detail in Sec. \[sec:Numerical\]. As additional evidence of the validity of our approach, we notice that the obtained expression Eq.
for the linear susceptibility of the structure coincides with the result derived in Ref. [\[\]]{}. Furthermore, in the case of lossless scatterers and for the propagating mode with real $\omega$ and $\vec{k}$, the effective nonlinear susceptibilities turn out to be purely real because the quantities $G_k'(\Omega,\vec{K})$ and $C_k(\Omega,\vec{K})$ are both real. This result is consistent with the energy conservation law.

Comparison with the local effective medium model {#sec:Local}
================================================

In this section we demonstrate that in the limiting case when $K\,a\ll 1$ and $\Omega\,a/c\ll 1$, i.e. when spatial dispersion effects in the structure are negligible, our results can be reduced to the local nonlinear susceptibilities well-known from nonlinear optics. In this limit, the interaction constant for a cubic lattice [\[\]]{} is $$\label{Cappr} C_k(\Omega,\vec{K})=4\,\pi/(3\,a^3)$$ for any real and sufficiently small $\Omega$ and $\vec{K}$. As a result, the effective permittivity of the structure is given by the Clausius-Mossotti formula [\[\]]{} $$\label{Clausius} \varepsilon_{\rm{loc}}(\omega)\equiv 1+4\,\pi\,\chi_{\rm{loc}}^{(1)}(\omega)=\frac{1+8\,\pi\,\alpha_1(\omega)/(3\,a^3)}{1-4\,\pi\,\alpha_1(\omega)/(3\,a^3)}\:.$$ Equations  and also yield that $$\label{Prefactor} \begin{split} &\alpha_1^{-1}(\Omega)\,\left[\alpha_1^{-1}(\Omega)-C_k(\Omega,\vec{K}(\Omega))\right]^{-1}=\\ &\frac{1}{1-4\,\pi\,\alpha_1(\Omega)/(3\,a^3)}=\frac{\varepsilon_{\rm{loc}}(\Omega)+2}{3}\:. \end{split}$$ Furthermore, from Eq.  it is straightforward that $$\label{Gamma2} \Gamma(\Omega,\vec{K}(\Omega))=\Gamma(\omega,\vec{k})\:,$$ and as a consequence of Eqs. , and $$\label{GkTr} \begin{split} & G_k'(\Omega,\vec{K}(\Omega))\equiv\Gamma_k(\Omega,\vec{K}(\Omega))+C_k(\Omega,\vec{K}(\Omega))=\\ & \Gamma_k(\omega,\vec{k})+C_k(\omega,\vec{k})=G_k'(\omega,\vec{k})=\alpha_1^{-1}(\omega)\:.
\end{split}$$ Expression for the factor $\Phi(\Omega,\vec{K}(\omega))$ can be transformed using Eqs.  and as follows: $$\label{Phi} \Phi(\Omega,\vec{K}(\Omega))\equiv\frac{G_k'(\Omega,\vec{K}(\Omega))}{\Gamma(\Omega,\vec{K}(\Omega))}=\frac{\alpha_1^{-1}(\omega)}{\alpha_1^{-1}(\omega)-4\pi/(3\,a^3)}\:.$$ Thus, taking into account Eq.  we derive the simplified expression for the factor $\Phi$: $$\label{Phi2} \Phi(\Omega,\vec{K}(\Omega))=\frac{\varepsilon_{\rm{loc}}(\omega)+2}{3}$$ for any $\Omega=\omega,\,2\,\omega$ and $3\,\omega$. Applying the simplified expressions Eqs.  and to the general formulas Eqs.  and we finally obtain: $$\begin{gathered} \chi^{(2)}_{\rm{loc}}(\omega_3;\omega_2,\omega_1)=\frac{\alpha_2(\omega_3;\omega_2,\omega_1)}{a^3}\,\frac{\varepsilon_{\rm{loc}}(\omega_3)+2}{3}\,\frac{\varepsilon_{\rm{loc}}(\omega_2)+2}{3}\,\frac{\varepsilon_{\rm{loc}}(\omega_1)+2}{3}\:,\label{Chi2Loc}\\ \chi^{(3)}_{\rm{loc}}(\omega_4;\omega_3,\omega_2,\omega_1)=\frac{\alpha_3(\omega_4;\omega_3,\omega_2,\omega_1)}{a^3}\,\frac{\varepsilon_{\rm{loc}}(\omega_4)+2}{3}\,\frac{\varepsilon_{\rm{loc}}(\omega_3)+2}{3}\,\frac{\varepsilon_{\rm{loc}}(\omega_2)+2}{3}\,\frac{\varepsilon_{\rm{loc}}(\omega_1)+2}{3}\:.\label{Chi3Loc}\end{gathered}$$ In Eq.  $\omega_3=\omega_1+\omega_2$, in Eq.  $\omega_4=\omega_3+\omega_2+\omega_1$. Both equations are also applicable for negative frequency values in accordance with $\varepsilon(-\Omega)\equiv \varepsilon^*(\Omega)$. Essentially, factors $\alpha_2/a^3$ and $\alpha_3/a^3$ describe second- and third-order nonlinear susceptibilities in the case when interaction of the scatterers is negligible. Factors $(\varepsilon_{\rm{loc}}+2)/3$ thus represent a local field correction to the nonlinear susceptibilities. The presented form of local field corrections is well-known in nonlinear optics [\[\]]{}. 
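A minimal numerical sketch of this local limit: the local-field identity $(\varepsilon_{\rm{loc}}+2)/3=\left[1-4\,\pi\,\alpha_1/(3\,a^3)\right]^{-1}$ derived above is checked, and the local $\chi^{(2)}$ is assembled as the bare value $\alpha_2/a^3$ dressed by one such factor per frequency involved. All numerical values are arbitrary illustrative assumptions, not parameters of any particular structure.

```python
import math

# Local-limit check: chi^(2)_loc is the bare value alpha2/a^3 multiplied
# by one local-field factor (eps_loc + 2)/3 per frequency involved.

def eps_loc(alpha1, a):
    """Clausius-Mossotti permittivity for linear polarizability alpha1."""
    x = 4 * math.pi * alpha1 / (3 * a**3)
    return (1 + 2 * x) / (1 - x)

def chi2_loc(alpha2, alpha1_by_freq, a):
    """Local chi^(2)(w3; w2, w1): one factor (eps + 2)/3 per frequency,
    with alpha1_by_freq listing alpha_1 at w3, w2 and w1."""
    L = 1.0
    for alpha1 in alpha1_by_freq:
        L *= (eps_loc(alpha1, a) + 2) / 3
    return alpha2 / a**3 * L
```

With vanishing linear polarizability (no inter-particle interaction) every local-field factor reduces to unity and the bare value $\alpha_2/a^3$ survives, as expected.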
Therefore, in the limit of negligible spatial dispersion our results are consistent with the previous studies based on the local effective medium approach. Numerical example {#sec:Numerical} ================= ![(Color online) (a) Calculated dispersion diagram for the structure composed of short varactor-loaded wires. (b) Enlarged branches $\Gamma$X and $\Gamma$M of the dispersion diagram in the vicinity of the high-$\varepsilon$ mixed regime.[]{data-label="ris:LinearDisp"}](LinearDispb.eps){width="0.9\linewidth"} ![(Color online) Comparison of the developed theoretical approach (solid line) with the local effective medium model (dashed line). (a) Real part of the metamaterial linear susceptibility in the vicinity of the resonance $f_r$. (b, c) Real part of the metamaterial second-order nonlinear susceptibility $\chi^{(2)}(2\omega;\omega,\omega)$ in the vicinity of the resonances (b) $f_r$ and (c) $f_r/2$.[]{data-label="ris:Chi"}](Chi-b.eps){width="0.9\linewidth"} ![(Color online) Comparison of the developed theoretical approach (solid line) with the local effective medium model (dashed line). Real part of the metamaterial third-order nonlinear susceptibility $\chi^{(3)}(3\,\omega;\omega,\omega,\omega)$ in the vicinity of the resonances (a) $f_r$; (b) $f_r/2$; (c) $f_r/3$.[]{data-label="ris:Chi3"}](Chi3b.eps){width="0.9\linewidth"} ![(Color online) Dependence of the real part of the linear and nonlinear susceptibilities on the angle $\theta$ between the wave vector of the fundamental harmonic and the $\Gamma X$ direction. Here, $f=f_r=0.616$ GHz, $\lambda/a=48$.[]{data-label="ris:Chi-Angle"}](Chi-Angle-b.eps){width="0.9\linewidth"} Now we proceed to demonstrate the main features of the developed theoretical model for a particular example. To this end, we consider a 3D structure with period $a=1$ cm composed of short wires (much shorter than the wavelength) loaded with Skyworks SMV 1231-079 varactors [\[\]]{} possessing a nonlinear capacitance as well as associated linear parameters.
The varactor is inserted in a gap of size $\Delta l=1.3$ mm in the middle of a wire with radius $r=1$ mm and half-length $l=3$ mm. The total inductance of the varactor inclusion is $L_{t}=42.5$ nH and the capacitance determining the input impedance of the wire (without the varactor capacitance) is $C_t=0.2$ pF. Other varactor parameters are set to the values specified by the manufacturer [\[\]]{}. For clarity, the dispersion diagram is shown for zero dissipation. However, the nonlinear susceptibilities are calculated with realistic losses according to the specifications [\[\]]{}. The dispersion diagram of the described metamaterial in the linear regime of operation, calculated with Eq. , is presented in Fig. \[ris:LinearDisp\]. Analysis of the calculated diagram reveals that there are two frequency intervals where spatial dispersion effects are most pronounced. These frequency intervals correspond to the so-called mixed dispersion regime studied in detail for linear uniaxial metamaterials in Ref. [\[\]]{}. The mixed dispersion regime arises at frequencies in the vicinity of zeros [\[\]]{} (low-$\varepsilon$ mixed regime) and poles [\[\]]{} (high-$\varepsilon$ mixed regime) of the local permittivity of the structure, Eq. . We expect the most significant deviations of the nonlinear susceptibilities from the local effective medium model to occur in the mixed dispersion regime. Note that in the low-$\varepsilon$ mixed regime the main manifestation of spatial dispersion is the emergence of longitudinal waves propagating close to the $\Gamma$Z direction [\[\]]{}. Excitation of such longitudinal modes is difficult in experiment. Therefore, we concentrate on the analysis of transverse modes, which are strongly affected by nonlocality in the high-$\varepsilon$ mixed regime. In this example, the high- and low-$\varepsilon$ mixed regimes correspond to the spectral ranges $0.565<f<0.670$ GHz and $0.992<f<1.024$ GHz, respectively, and the frequency of the linear permittivity resonance is $f_r=0.616$ GHz.
Perturbative analysis of the nonlinear oscillator model [\[\]]{} suggests that the nonlinear susceptibilities $\chi^{(2)}$ and $\chi^{(3)}$ are subject to resonant enhancement not only at the frequency $f_r$ but also at the frequencies $f_r/2$ ($\chi^{(2)}$, $\chi^{(3)}$) and $f_r/3$ ($\chi^{(3)}$ only), as explained in more detail in the Appendix. Therefore, we also studied the nonlinear properties of the metamaterial in spectral intervals around $f_r/2$ and $f_r/3$. Linear and nonlinear susceptibilities calculated for the $\Gamma$M direction of propagation by Eqs. , , are compared with the predictions of the local effective medium model Eqs. , , in Figs. \[ris:Chi\], \[ris:Chi3\]. The results show that significant deviations from the local effective medium model indeed occur in the high-$\varepsilon$ mixed regime, whereas at the lower frequency resonances $f_r/2$ and $f_r/3$ the resonant enhancement of the nonlinear susceptibilities is accurately captured by the local model. Furthermore, it can be noticed that spatial dispersion damps the resonance, which leads to a decrease of the maximal achievable values of the nonlinear susceptibility. Therefore, we conclude that spatial dispersion must necessarily be taken into account when describing nonlinearities of metamaterials in the vicinity of the permittivity resonance. Another interesting manifestation of spatial dispersion is the dependence of the nonlinear susceptibilities on the direction of propagation of the fundamental wave with respect to the sample crystallographic axes. In Fig. \[ris:Chi-Angle\] we plot the dependence of the linear and nonlinear susceptibilities on the angle between the wave vector $\vec{k}$ and the $\Gamma$X direction in the first Brillouin zone of the crystal. Even though a dependence of nonlinear susceptibilities on the propagation direction is also known for photonic crystals [\[\]]{}, it should be stressed that the metamaterial operates in a deeply subwavelength regime with the ratio $\lambda/a\approx 48$ at the frequency $f_r$.
Nevertheless, the variation of the susceptibility with the direction of the wave vector $\vec{k}$ reaches $4\%$ for $\chi^{(1)}$ and $\chi^{(3)}$ and $1\%$ for $\chi^{(2)}$ at the resonance frequency $f_{\rm{r}}=0.616$ GHz. In general, at frequencies around $f_r$ the maximal variation of the susceptibilities $\chi^{(1)}$, $\chi^{(2)}$ and $\chi^{(3)}$ with the direction of the wave vector is $4$-$5\%$. Finally, as Fig. \[ris:LinearDisp\]b clearly shows, in the high-$\varepsilon$ mixed regime there are two solutions of the dispersion equation corresponding to a given frequency and the $\Gamma$M or $\Gamma$X directions of propagation. As a result, the nonlinear susceptibilities Eqs. , are multivalued functions of frequency in this spectral range. Consequently, one may expect that one TM-polarized incident beam can produce [*two*]{} second-harmonic (third-harmonic) beams with the same polarization and frequency. Importantly, the described effect arises purely due to spatial dispersion and cannot be explained in the framework of the local effective medium model. Conclusions {#sec:Discussion} =========== We have developed a consistent theoretical approach for calculating effective nonlinear susceptibilities of nonlinear discrete metamaterials taking into account both frequency and spatial dispersion. We have modelled the nonlinear metamaterial as a lattice of nonlinear uniaxial electric dipoles, and obtained closed-form expressions for the effective nonlinear parameters. It has been demonstrated that spatial dispersion strongly affects the nonlinear properties of metamaterials in the vicinity of the effective permittivity resonance, damping the frequency variation of the nonlinear susceptibilities in comparison to local effective medium models. We have predicted that, due to spatial dispersion effects, one incident light beam may produce two harmonic beams with the same polarization.
Additionally, we have demonstrated the dependence of the nonlinear susceptibilities on the direction in which the fundamental harmonic propagates with respect to the crystallographic axes. Our results suggest that nonlocality is important in metamaterials even if they operate in a deeply subwavelength regime. Furthermore, we have verified that our results yield an accurate form of the so-called local field corrections to the nonlinear susceptibilities when spatial dispersion effects become negligible. Our conclusions are also valid for three-dimensional arrays of uniaxial magnetic scatterers such as varactor-loaded split-ring resonators. We believe that our study provides important insights into the characterization of nonlinear metamaterials exhibiting large resonant nonlinearities. Acknowledgments =============== This work was supported by the Government of the Russian Federation (Grant No. 074-U01), the Dynasty Foundation, a grant of the President of the Russian Federation (MD-7841.2015.2), the Ministry of Education and Science of the Russian Federation (Projects No. 14.584.21.0009 10, GZ No. 2014/190, GZ No. 3.561.2014/K), Russian Foundation for Basic Research (Project No. 15-02-08957 A), and the Australian Research Council. M.G. acknowledges a visiting appointment at the University of Technology Sydney. Appendix. Linear and nonlinear polarizabilities of a short varactor-loaded wire {#appendix.-linear-and-nonlinear-polarizabilities-of-a-short-varactor-loaded-wire .unnumbered} =============================================================================== ![(Color online) A schematic representation of the short varactor-loaded wire used as a building block of the metamaterial.[]{data-label="ris:Varactor"}](Varactor.eps){width="0.7\linewidth"} In this appendix we derive expressions characterizing the linear and nonlinear properties of a short ($2\,l\ll \lambda$) varactor-loaded wire. The varactor is described by its nonlinear capacitance as well as the associated linear parameters.
Nonlinearity in such a meta-atom arises due to the voltage-dependent varactor capacitance, which is well approximated by the formula [\[\]]{}: $$\label{CU-approximation} C(U)=\frac{C_{J0}}{(1+U/U_J)^M}$$ where $U$ is deemed positive in the case of reverse varactor bias. In the case of the varactor SMV 1231-079, the parameters in Eq.  are as follows [\[\]]{}: $M=4.999$, $C_{J0}=1.88$ pF and $U_J=10.13$ V. Making use of the definition $C=dq/dU$ one can relate the reverse voltage on the varactor $U_V$ to its charge $q$: $$\label{UNonlinear} U_V(q)=U_J\,\left[\left(1+\frac{1-M}{C_{J0}\,U_J}\,q\right)^{1/(1-M)}-1\right]\:.$$ Under the assumption $|q|\ll C_{J0}\,U_J$ the right-hand side of Eq.  can be expanded in a series with respect to $q$: $$\label{UNonlinearAppr} U_V(q)=\frac{1}{C_{J0}}\,\left[q+\frac{M\,q^2}{2\,C_{J0}\,U_J}+\frac{M\,(2M-1)}{6\,C_{J0}^2\,U_J^2}\,q^3\right]\:.$$ Taking into account the parasitic linear capacitance $C_p$ connected in parallel with the varactor, Eq.  can be further rearranged to yield $$\label{UNonlinearAppr2} \begin{split} & U_V(Q)=\frac{1}{C_{0}}\,\left[Q+\frac{M\,C_{J0}\,Q^2}{2\,C_{0}^2\,U_J}+\right.\\ & \left.\frac{C_{J0}\,Q^3}{C_0^4\,U_J^2}\,\left(C_0\,\frac{M(2M-1)}{6}-C_p\,\frac{M^2}{2}\right)\right] \end{split}$$ where $Q$ is the total charge stored by both the varactor and the parasitic capacitance $C_p$, and $C_0=C_{J0}+C_p$.
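The cubic truncation of the series for $U_V(q)$ can be checked numerically against the exact expression, using the SMV 1231-079 parameters quoted above; the sketch below does this for a charge well inside the regime $|q|\ll C_{J0}U_J$.

```python
# Numerical check of the cubic expansion of U_V(q) against the exact
# expression, using the varactor parameters quoted in the text
# (M = 4.999, C_J0 = 1.88 pF, U_J = 10.13 V).
M, CJ0, UJ = 4.999, 1.88e-12, 10.13

def UV_exact(q):
    """Exact reverse voltage on the varactor as a function of its charge."""
    return UJ * ((1.0 + (1.0 - M) * q / (CJ0 * UJ)) ** (1.0 / (1.0 - M)) - 1.0)

def UV_cubic(q):
    """Series expansion of U_V(q) truncated at cubic order in q."""
    return (q + M * q ** 2 / (2.0 * CJ0 * UJ)
            + M * (2.0 * M - 1.0) * q ** 3 / (6.0 * CJ0 ** 2 * UJ ** 2)) / CJ0

q = 0.01 * CJ0 * UJ       # well inside the regime |q| << C_J0 * U_J
rel_err = abs(UV_cubic(q) - UV_exact(q)) / abs(UV_exact(q))
```

For this charge the truncation error is far below the per-mille level, confirming that the cubic expansion is adequate for small oscillation amplitudes.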
Denoting the inductance and resistance of the entire varactor inclusion by $L_t$ and $R_t$, respectively, we obtain the nonlinear oscillator equation for the total charge stored in the system: $$\label{NonlinOscEq} \ddot{Q}+2\,\beta_0\,\dot{Q}+\omega_0^2\,Q+\beta_2\,Q^2+\beta_3\,Q^3=\mathfrak{E}(t)$$ where $\mathfrak{E}(t)=\mathfrak{\epsilon}(t)/L_t$, $\mathfrak{\epsilon}(t)$ is electromotive force, $\beta_0=R_t/(2\,L_t)$, $\omega_0=1/\sqrt{L_t\,C_0}$, $$\label{Beta2} \beta_2=\frac{\omega_0^2\,M\,C_{J0}}{2\,C_{0}^2\,U_J}$$ and $$\label{Beta3} \beta_3=\frac{\omega_0^2\,C_{J0}}{C_0^4\,U_J^2}\,\left[C_0\,\frac{M(2M-1)}{6}-C_p\,\frac{M^2}{2}\right]\:.$$ To calculate linear and nonlinear polarizabilities, we consider the system composed of nonlinear varactor with associated linear parameters and wires. Denoting the input impedance of the entire wire by $Z_{\rm{inp}}(\omega)=-1/(i\omega\,C_t)$ we obtain $$\label{NonlinOscEq2} \begin{split} & \ddot{Q}+2\,\beta_0\,\dot{Q}+\omega_0^2\,Q+\beta_2\,Q^2+\beta_3\,Q^3=\\ & \text{Re}\left\lbrace\frac{\xi\,e^{-i\omega\,t}}{L_t}\,\left[l\,E(\omega)-I_0(\omega)\,Z_{\rm{inp}}(\omega)\right]+\right.\\ & \left.\frac{\xi^2\,e^{-2i\,\omega t}}{L_t}\,\left[l\,E(2\omega)-I_0(2\omega)\,Z_{\rm{inp}}(2\,\omega)\right]+\right.\\ & \left.\frac{\xi^3\,e^{-3i\,\omega t}}{L_t}\,\left[l\,E(3\omega)-I_0(3\omega)\,Z_{\rm{inp}}(3\,\omega)\right]\right\rbrace-\\ &-\frac{\xi^2\,U_l(0)}{L_t}\:, \end{split}$$ where $I_0(\Omega)=\dot{Q}_\Omega$, $U_l(0)=\lim\limits_{\omega\rightarrow 0} I_0(\omega)\,Z_{\rm{inp}}(\omega)$ is a static voltage arising on varactor inclusion and $\xi$ is an auxiliary dimensionless parameter that is usually introduced for the perturbative solution of nonlinear oscillator equation and which will be set to $1$ at the end of the calculation [\[\]]{}. We consider the incident field $E(\omega)$ as sufficiently small. In this case the steady-state solution for Eq.  
may be sought as a power series in $\xi$ $$\label{Ansatz} Q(t)=\xi\,Q^{(1)}(t)+\xi^2\,Q^{(2)}(t)+\xi^3\,Q^{(3)}(t)\:.$$ Plugging the ansatz Eq.  into Eq.  yields the set of equations: $$\label{SetNonlinEq1} \begin{split} & \ddot{Q}^{(1)}+2\,\beta_0\,\dot{Q}^{(1)}+\omega_0^2\,Q^{(1)}=\\ & \text{Re}\left\lbrace\frac{e^{-i\omega\,t}}{L_t}\,\left[l\,E(\omega)-I_0'(\omega)\,Z_{\rm{inp}}(\omega)\right]\right\rbrace\:, \end{split}$$ $$\label{SetNonlinEq2} \begin{split} & \ddot{Q}^{(2)}+2\,\beta_0\,\dot{Q}^{(2)}+\omega_0^2\,Q^{(2)}+\beta_2\,\left[Q^{(1)}\right]^2=\\ & \text{Re}\left\lbrace\frac{e^{-2i\omega\,t}}{L_t}\,\left[l\,E(2\,\omega)-I_0(2\,\omega)\,Z_{\rm{inp}}(2\,\omega)\right]\right\rbrace-\frac{U_l(0)}{L_t}\:, \end{split}$$ $$\label{SetNonlinEq3} \begin{split} & \ddot{Q}^{(3)}+2\,\beta_0\,\dot{Q}^{(3)}+\omega_0^2\,Q^{(3)}+2\,\beta_2\,Q^{(1)}\,Q^{(2)}+\\ & \beta_3\,\left[Q^{(1)}\right]^3=\text{Re}\left\lbrace\frac{e^{-3i\omega\,t}}{L_t}\,\left[l\,E(3\,\omega)-I_0(3\,\omega)\,Z_{\rm{inp}}(3\,\omega)\right]-\right.\\ &\left.\frac{e^{-i\omega\,t}}{L_t}\,I_0''(\omega)\,Z_{\rm{inp}}(\omega)\right\rbrace \end{split}$$ where $I_0'(\omega)=-i\omega\,Q^{(1)}(\omega)$ and $I_0''(\omega)=-i\omega\,Q^{(3)}(\omega)$, $I_0'(\omega)+I_0''(\omega)=I_0(\omega)$. Each of Eqs. , , is a linear differential equation with a single unknown function ($Q^{(1)}(t)$, $Q^{(2)}(t)$ and $Q^{(3)}(t)$, respectively).
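For orientation, the oscillator parameters defined above can be evaluated numerically. The parasitic capacitance $C_p$ and the resistance $R_t$ are not specified in the text, so the values $C_p = 0$ and $R_t = 1\,\Omega$ used below are assumptions made purely for illustration.

```python
# Illustrative evaluation of the nonlinear oscillator parameters beta_0,
# omega_0, beta_2, beta_3.  C_p = 0 and R_t = 1 Ohm are ASSUMED values,
# not taken from the text; all other numbers are quoted there.
import math

Lt = 42.5e-9                      # total inductance of the inclusion (text)
Rt = 1.0                          # assumed series resistance, Ohm
M, CJ0, UJ = 4.999, 1.88e-12, 10.13
Cp = 0.0                          # assumed parasitic capacitance
C0 = CJ0 + Cp

omega0 = 1.0 / math.sqrt(Lt * C0)
beta0 = Rt / (2.0 * Lt)
beta2 = omega0 ** 2 * M * CJ0 / (2.0 * C0 ** 2 * UJ)
beta3 = (omega0 ** 2 * CJ0 / (C0 ** 4 * UJ ** 2)
         * (C0 * M * (2.0 * M - 1.0) / 6.0 - Cp * M ** 2 / 2.0))

f0_GHz = omega0 / (2.0 * math.pi) / 1e9   # ~0.56 GHz with these assumptions
```

With these assumptions the bare oscillator frequency falls in the sub-GHz range considered in Sec. \[sec:Numerical\], and the oscillator is strongly underdamped ($\beta_0\ll\omega_0$).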
The solutions of these equations are as follows: $$\begin{gathered} Q^{(1)}(t)=\text{Re}\left(x_1\,e^{-i\omega\,t}\right)\:,\label{SetSolutions1}\\ Q^{(2)}(t)=x_0+\text{Re}\left(x_2\,e^{-2i\omega\,t}\right)\:,\label{SetSolutions2}\\ Q^{(3)}(t)=\text{Re}\left(x_1'\,e^{-i\omega\,t}+x_3\,e^{-3i\omega\,t}\right)\label{SetSolutions3}\end{gathered}$$ with the amplitudes $x$ defined as $$\begin{gathered} x_1=\frac{l\,E(\omega)}{F(\omega)}\:,\label{x1}\\ x_0=-\frac{\beta_2\,l^2\,L_t\,\left|E(\omega)\right|^2}{2\,F(0)\,\left|F(\omega)\right|^2}\:,\label{x0}\\ x_2=\frac{l\,E(2\,\omega)}{F(2\,\omega)}-\frac{\beta_2\,l^2\,L_t\,E^2(\omega)}{2\,F^2(\omega)\,F(2\,\omega)}\:,\label{x2}\\ x_1'=-\frac{\beta_2\,l^2\,L_t}{\left|F(\omega)\right|^2\,F(2\,\omega)}\,E^*(\omega)\,E(2\,\omega)+\notag\\ \frac{l^3\,\left|E(\omega)\right|^2\,E(\omega)}{\left|F(\omega)\right|^2\,F^2(\omega)}\,\left[-\frac{3\,\beta_3\,L_t}{4}+\frac{\beta_2^2\,L_t^2}{F(0)}+\frac{\beta_2^2\,L_t^2}{2\,F(2\,\omega)}\right]\:,\label{x12}\\ x_3=\frac{l\,E(3\,\omega)}{F(3\,\omega)}-\frac{\beta_2\,l^2\,L_t\,E(\omega)\,E(2\,\omega)}{F(\omega)\,F(2\,\omega)\,F(3\,\omega)}+\notag\\ \left[\frac{\beta_2^2\,l^3\,L_t^2}{2\,F^3(\omega)\,F(2\,\omega)\,F(3\,\omega)}-\frac{\beta_3\,l^3\,L_t}{4\,F^3(\omega)\,F(3\,\omega)}\right]\,E^3(\omega)\:.\label{x3}\end{gathered}$$ where we use the designation $$\begin{gathered} F(\Omega)=L_t\,D(\Omega)-i\Omega\,Z_{\rm{inp}}(\Omega)=L_t\,D(\Omega)+\frac{1}{C_t}\:,\\ D(\Omega)=\omega_0^2-2i\,\beta_0\,\Omega-\Omega^2\:.\end{gathered}$$ Having solved the nonlinear oscillator equation and assuming the current distribution as in a short symmetric antenna [\[\]]{}, we are able to calculate the meta-atom dipole moment as $d(\omega)=l\,\left(x_1+x_1'\right)$, $d(2\,\omega)=l\,x_2$ and $d(3\,\omega)=l\,x_3$. It is now straightforward to prove the validity of Eqs. , , used in Sec. \[sec:Homogenization\] for the meta-atom characterization. 
Linear and nonlinear meta-atom polarizabilities are given by the formulas: $$\begin{gathered} \alpha_1(\omega)=\frac{l^2}{F(\omega)}\:,\label{Al1}\\ \alpha_2(\omega;2\,\omega,-\omega)=-\frac{1}{2}\,\frac{\beta_2\,l^3\,L_t}{\left|F(\omega)\right|^2\,F(2\,\omega)}\:,\label{Al21}\\ \alpha_2(2\,\omega;\omega,\omega)=-\frac{\beta_2\,l^3\,L_t}{2\,F^2(\omega)\,F(2\,\omega)}\:,\label{Al2}\\ \alpha_2(3\,\omega;2\,\omega,\omega)=-\frac{\beta_2\,l^3\,L_t}{2\,F(3\,\omega)\,F(2\,\omega)\,F(\omega)}\:,\label{Al23}\\ \alpha_3(\omega;\omega,\omega,-\omega)=\frac{l^4}{\left|F(\omega)\right|^2\,F^2(\omega)}\times\notag\\ \left[-\frac{\beta_3\,L_t}{4}+\frac{\beta_2^2\,L_t^2}{3\,F(0)}+\frac{\beta_2^2\,L_t^2}{6\,F(2\,\omega)}\right]\:,\label{Al31}\\ \alpha_3(3\,\omega;\omega,\omega,\omega)=\frac{l^4}{F(3\,\omega)\,F^3(\omega)}\,\left[-\frac{\beta_3\,L_t}{4}+\frac{\beta_2^2\,L_t^2}{2\,F(2\,\omega)}\right]\:.\end{gathered}$$ The derived expressions suggest that the nonlinear polarizabilities exhibit resonant enhancement at frequencies satisfying one of the conditions $F(\omega)=0$, $F(2\,\omega)=0$ or $F(3\,\omega)=0$, a fact that was used in Sec. \[sec:Numerical\].
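The resonance condition can be illustrated with a short numerical sketch of $\alpha_1(\omega)=l^2/F(\omega)$. As before, $C_p = 0$ and $R_t = 1\,\Omega$ are assumed values, not taken from the text, so the resulting resonance frequency is only indicative.

```python
# Sketch of the resonance of alpha_1(omega) = l^2 / F(omega) with
# F(Omega) = L_t*D(Omega) + 1/C_t and D(Omega) = omega_0^2 - 2i*beta_0*Omega
# - Omega^2.  C_p = 0 and R_t = 1 Ohm are ASSUMED for illustration.
import math

Lt, Ct = 42.5e-9, 0.2e-12     # inductance and wire input capacitance (text)
CJ0, Cp = 1.88e-12, 0.0       # C_p assumed zero
C0 = CJ0 + Cp
l = 3e-3                      # wire half-length (text)
Rt = 1.0                      # assumed resistance
omega0 = 1.0 / math.sqrt(Lt * C0)
beta0 = Rt / (2.0 * Lt)

def F(Om):
    D = omega0 ** 2 - 2j * beta0 * Om - Om ** 2
    return Lt * D + 1.0 / Ct

def alpha1(om):
    return l ** 2 / F(om)

# Neglecting damping, F(omega) = 0 at omega_r^2 = 1/(Lt*C0) + 1/(Lt*Ct),
# i.e. the series combination of C0 and Ct resonating with Lt.
omega_r = math.sqrt(1.0 / (Lt * C0) + 1.0 / (Lt * Ct))
oms = [omega_r * (0.5 + 0.001 * i) for i in range(1001)]
peak = max(oms, key=lambda om: abs(alpha1(om)))
```

A scan of $|\alpha_1(\omega)|$ over the interval $[0.5\,\omega_r,\,1.5\,\omega_r]$ peaks at the zero of the real part of $F$, as expected for a weakly damped resonance.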
--- author: - 'T. Sperling' - 'W. B[ü]{}hrer' - 'C.M. Aegerter' - 'G. Maret' title: Direct determination of the transition to localization of light in three dimensions --- [**In the diffusive transport of waves in three dimensional media, there should be a phase transition with increasing disorder to a state where no transport occurs. This transition was first discussed by Anderson in 1958 [@anderson58] in the context of the metal-insulator transition, but as was realized later it is generic for all waves [@anderson85; @john84]. However, the quest for the experimental demonstration of “Anderson” or “strong” localization of waves in 3D has been a challenging task. For electrons [@bergmann] and cold atoms [@kondov11], the challenge lies in the possibility of bound states in a disordered potential well. Therefore, electromagnetic and acoustic waves have been the prime candidates for the observation of Anderson localization [@kuga; @albada; @wolf; @drakegenack; @wiersma97; @scheffold99; @fiebig08; @stoerzer06; @stoerzer06n2; @acousticexp; @hu08]. The main challenge using light lies in the distinction between effects of absorption and localization [@wiersma97; @scheffold99]. Here we present measurements of the time dependence of the transverse width of the intensity distribution of the transmitted waves, which provides a direct measure of the localization length and is independent of absorption. From this we find direct evidence for a localization transition in three dimensions and determine the corresponding localization lengths.**]{} In the diffusive regime ($kl^* \gg 1$) the mean square width $\sigma^2$ of the transmitted pulse, i.e. the spread of the photon cloud, is described by a linear increase in time $\sigma^2 = 4 D t$ [@lenke00]. Here, $D$ is the diffusion coefficient for light, $k$ the wave-vector and $l^*$ the transport mean free path. When considering interference effects of the diffusive light, Anderson et
al [@abrahams] predicted a transition to localization in three dimensional systems at high enough turbidity $(kl^*)^{-1}$. The criterion for where this transition should occur is known as the Ioffe-Regel criterion, namely $kl^* \lesssim 1$ [@ioffe60]. At such high turbidities, light will be localized to regions of a certain length scale, namely the localization length $\xi$, which diverges at the transition to localization. This implies that $\sigma^2$ initially increases linearly with time, but saturates at a later time $t_{\text{loc}}$ (localization time) towards a constant value given by $\sigma^2 = \xi^2$, where $\xi$ is the localization length. In this work we present measurements of light propagation in 3D open, highly scattering TiO$_2$ powders. Given the high turbidity of the samples studied and the large slab thickness ($L$ varying from 0.6 mm to 1.5 mm), the transmitted light typically undergoes a few million scattering events in any of the three spatial directions before leaving the sample. Thus our samples present a true bulk 3D medium for light transport. The great advantage of determining the time dependence of the width of the transmission profile lies in the fact that since the width is obtained at a specified time, absorption effects are present on all paths equally. This means that the width of the profile at a given time is [*independent of absorption*]{}. This can be seen from the general definition of the width in terms of the spatial dependence of the photon density $T(\rho,t)$, where $\rho$ is a vector in the 2D transmission plane with the origin at the center of the beam: $\sigma^2(t) = \int\rho^2 T(\boldsymbol{\rho},t)\text{d}^2\boldsymbol{\rho}/\int T(\boldsymbol{\rho},t)\text{d}^2\boldsymbol{\rho}.$ In this definition, an exponential decrease due to absorption enters $T(\rho,t)$ both in the numerator and in the denominator and thus cancels out.
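This cancellation is easy to verify numerically: multiplying the diffusive Gaussian profile by a $\rho$-independent absorption factor leaves the ratio of the two integrals unchanged. A minimal sketch in arbitrary units:

```python
# Numerical illustration that a rho-independent absorption factor cancels
# in <rho^2> = ∫ rho^2 T d^2rho / ∫ T d^2rho.  For the diffusive Gaussian
# T ∝ exp(-rho^2/(8Dt)) the ratio is 8Dt, i.e. 4Dt per transverse axis.
import math

def mean_rho_sq(D, t, tau_a=None, n=4000):
    s2 = 8.0 * D * t
    absorb = math.exp(-t / tau_a) if tau_a else 1.0
    rmax = 12.0 * math.sqrt(s2)          # integration cutoff, far in the tail
    dr = rmax / n
    num = den = 0.0
    for i in range(1, n + 1):
        r = i * dr
        T = absorb * math.exp(-r * r / s2)
        num += r ** 3 * T * dr           # d^2rho = 2*pi*rho*drho; 2*pi cancels
        den += r * T * dr
    return num / den

D, t = 15.0, 2.0                          # illustrative units
w_free = mean_rho_sq(D, t)                # no absorption
w_abs = mean_rho_sq(D, t, tau_a=0.5)      # strong absorption, same width
```

The two widths agree to machine precision, since the absorption factor multiplies numerator and denominator alike.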
In the diffusive regime, the profile will be given by a Gaussian: $T(\rho) \propto \exp(-\frac{\rho^2}{8Dt})$, i.e. with a width $\sigma^2 = 4Dt$. Hence we fit a 2D Gaussian to the intensity profile at a given time (see Fig. \[fig:raw\], which shows the gated intensity profile at three different time points, demonstrating the increase in width with time). This fit yields the width of the Gaussian in both the x- and y-direction. In localizing samples, the intensity distribution is expected to be exponential, with a characteristic length scale $\xi$. This can be seen in our samples; however, at small distances $\rho$ the profile can be well approximated by a Gaussian (see supplementary material). Hence, we fit a Gaussian to all our samples, which gives qualitatively similar fits to an exponential function in the localized case (see supplementary material). The fitted widths are then plotted as a function of time to yield the results shown in Fig. \[fig:tmax\]. In the case of a diffusive sample, Aldrich anatase, with $kl^*_{AA} = 6.4$, the square of the width increases linearly over the whole timespan (see Fig. \[fig:tmax\] a), as expected. The small deviation from linearity around the diffusion time $\tau_{max}$ is a result of the gating of the high rate intensifier (HRI) [@HRI] (see supplementary material). The slope of the increase is in very good accord with the diffusion coefficient determined from time-dependent transmission experiments [@stoerzer06]. Note also that the time-dependent width can exceed the thickness of the sample, which is a consequence of the fact that we are studying the transmission profile at specific times. The width $\sigma^2$ of the transmitted pulse gives a direct measure of the localization length $\xi$ in the localizing regime. This is because the 2D transmission profile of the photon cloud is confined to within a localization length.
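A toy version of this fitting procedure, sketched below under the assumption of clean, noise-free Gaussian data, recovers $D$ from a synthetic profile by a linear fit of $\ln T$ versus $\rho^2$:

```python
# Toy sketch of the Gaussian fit described above: for T ∝ exp(-rho^2/(8Dt)),
# ln T is linear in rho^2 with slope -1/(8Dt), so a least-squares line
# through (rho^2, ln T) recovers the diffusion coefficient D.
def fit_D(t, D_true):
    s2 = 8.0 * D_true * t
    x = [(0.1 * i) ** 2 for i in range(1, 200)]   # rho^2 sample points
    y = [-xi / s2 for xi in x]                    # ln T for clean data
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return -1.0 / (8.0 * slope * t)

D_fit = fit_D(t=2.0, D_true=15.0)
```

On noise-free data the fit is exact; in the experiment the same linear regression is applied per gated frame, and the slope of the fitted $\sigma^2(t)$ yields $D$.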
When considering an effective diffusion coefficient corresponding to the slope of the temporal increase in width, one thus obtains an effective decrease of the diffusion coefficient with time as $D(t) \propto 1/t$ after a time scale corresponding to the localization length [@berkovits]. In this picture, for large $L$, one expects a time dependence of the width which is linear up to the localization length and then remains constant as time goes on. Numerical calculations of self-consistent theory [@skipetrov06; @cherroret10] give a different increase at short times, as $\sigma^2 \propto t^{1/2}$, and a plateau value of $\sigma^2 = 2L\xi$ for $L \gg \xi$. These predictions can be directly tested with data of samples with high turbidity, which show non-classical diffusion in time-dependent transmission measurements. This is shown in Fig. \[fig:tmax\] b) and c). Taking a closer look at the short-time behavior, one can see that $\sigma^2$ increases linearly in time, contrary to the self-consistent theory calculation. This is similar to the behavior found in acoustic waves [@hu08]. However, in contrast to the diffusive sample, a plateau of the width can be clearly seen. This is in good accord with the theoretical prediction and a direct sign of Anderson localization. This plateau can also be seen directly from the transmission profiles shown in Fig. \[fig:raw\], where the normalized intensity profile is shown for three different time points. At late times, the width no longer increases, indicating localization of light. The data shown in Fig. \[fig:tmax\] also show results for samples of different thickness. These samples of different thickness are made from the same particles but may vary slightly in terms of filling fraction. However, as checked by coherent backscattering, samples made up from the same particles have very comparable turbidity (see supplementary material).
If the thickness $L$ of the sample becomes comparable to the localization length, a decrease of the width of the photon distribution with time can be observed. This surprising fact can be understood in a statistical picture of localization, where a range of localization lengths exists in the sample, corresponding to different sizes of closed loops of photon transport. In finite slabs, larger localized loops will be cut off by the surfaces, leading to a lower population of such localized states at longer times. Thus, on average, the observed width will correspond to increasingly shorter localization lengths, and a decrease of $\sigma^2$ with time can be observed. This is schematically illustrated in Fig. \[fig:tmax\] d). Such a peak in the width of the intensity distribution has also been seen in calculations of self-consistent theory, albeit in thicker samples [@cherroret10]. When the thickness decreases even more, such that it is shorter than the localization length, the plateau in the width is lost altogether and $\sigma^2$ increases over the whole time window. In fact, the behavior then corresponds to that predicted for the mobility edge [@berkovits], where a sub-linear increase of $\sigma^2 \propto t^{2/3}$ is predicted. At the transition one observes a kink in $\sigma^2$, and the ratio of the initial slope to that at the kink corresponds to the sub-diffusive exponent $a$. In addition, this thickness dependence can be used as an alternative determination of the localization length. The evaluation of the plateaus of the localizing samples for different thicknesses yields a localization length independent of $L$. In case the time dependence showed a maximum rather than a plateau, the maximum value was used. Thus we identify $\sigma_\infty^2 = \xi^2$ and obtain $\xi_\text{R104} = 717(6) \mu\text{m}$ for R104, $\xi_\text{R902} = 717(9)\mu\text{m}$ for R902 and $\xi_\text{R700} = 670(9)\mu\text{m}$ for R700. These are mean values for all thicknesses investigated.
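The extraction of $\xi$ from the plateau can be caricatured by a simple piecewise model (an assumption for illustration, not the self-consistent theory result), in which $\sigma^2$ grows as $4Dt$ and saturates at $\xi^2$:

```python
# Crude piecewise caricature (an ASSUMED model, not self-consistent theory):
# sigma^2(t) grows diffusively as 4Dt and saturates at the plateau xi^2,
# so the plateau value directly yields the localization length.
import math

def sigma_sq(t, D, xi):
    return min(4.0 * D * t, xi * xi)

D = 1.0                          # arbitrary units
xi = 0.717                       # mm, the plateau value quoted above
t_loc = xi * xi / (4.0 * D)      # crossover (localization) time
plateau = sigma_sq(10.0 * t_loc, D, xi)
xi_recovered = math.sqrt(plateau)
```

Evaluating the model well past $t_{\text{loc}}$ and taking the square root of the plateau recovers $\xi$, mirroring the identification $\sigma_\infty^2 = \xi^2$ used above.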
As expected, sample R700 has the smallest localization length $\xi$, as had already been concluded from time of flight experiments [@stoerzer06], and corresponds to the lowest value of $kl^*_{R700} = 2.7$ in this sample. In terms of localization, R104 and R902 are very similar, which again is in good accord with the fact that their turbidities are very similar, $kl^*_{R104} = 3.7$ and $kl^*_{R902} = 3.4$ respectively, even though their other sample properties are rather different. As stated above, this determination of $\xi$ is in good accord with that from the thickness dependence of the occurrence of a plateau. As seen in Fig. \[fig:tmax\], R104 with a thickness of $L = 0.71\text{mm}$ behaves sub-diffusively, but the sample with $L = 0.75\text{mm}$ shows a plateau, indicating a localization length of $\xi = 0.73(2)\text{mm}$. The same transitional behavior can be seen for R902 between $0.7\text{mm}<L<0.8\text{mm}$ as well. So far, we have shown that for different samples covering a range of $kl^*$ close to unity, a qualitative change in the transport properties occurs that is consistent with the transition to Anderson localization. In order to show that these are not sample-intrinsic properties, we now study one and the same sample at different incoming wavelengths. The turbidity depends quite strongly on the wavelength $\lambda$ of light, which we tuned from [$550 \, \text{nm}$]{} to [$650 \, \text{nm}$]{}. For these wavelengths, we have determined that the turbidity changes from $kl^*_\text{550nm} \approx 2.1$ up to $kl^*_\text{650nm} \approx 3.45$, thus spanning a range similar to that of the different samples above. At the highest and lowest wavelengths, the values of $kl^*$ were interpolated from the accessible values, which is a good approximation, since in the investigated region $kl^*$ is found to scale linearly with $\lambda$ (see supplementary material).
The result of such a spectral measurement of a R700 sample ($L = 0.98\, \text{mm}$ and $m = 377\, \text{mg}$) is shown in Fig. \[fig:spectral\]. For the wavelengths of [$640 \, \text{nm}$]{} and [$650 \, \text{nm}$]{}, corresponding to the largest values of $kl^*$, $\sigma^2$ does not saturate, which shows that the mobility edge is approached. This allows a direct characterization of the localization transition with a continuous change of the control parameter. We have also determined the same spectral information from a R104 sample, which is closer to the mobility edge at a wavelength of 590 nm, and for a rutile sample from Aldrich, which shows classical diffusion at 590 nm. For all of these samples, we have determined the value of $kl^*$ [@gross07]. With the value of $\xi$ and the scattering strength $kl^*$, we are able to determine the approach to the mobility edge at $kl^*_\text{crit}$, as shown in Fig. \[fig:kl\*-sigma\]. At the mobility edge, we can determine the qualitative change in behavior from the ratio of the slopes of $\sigma^2$ as a function of time in the localized or sub-linear regime and the initial diffusive regime (see supplementary material). This gives a direct estimate of the exponent $a$ with which the width increases with time, $\sigma^2 \propto t^{a}$, shown in Fig. \[fig:transition\]. There is a clear transition in the behavior with $kl^*$, showing a critical value of $kl^*_\text{crit}=4.5(4)$, above which $a = 1$ and below which $a = 0$. This is in good accord with the determination from time of flight measurements on similar samples yielding $kl^*_\text{crit,ToF}=4.2(2)$ [@stoerzer06n2]. Note that with an effective refractive index of the samples of $n_\text{eff} \simeq 1.75$, a critical value of $kl^*_\text{crit} = 4.2$ corresponds to an onset of localization at the point of $l^*/\lambda_\text{eff} = 1$, which is a reasonable expectation for the onset of localization. The dependence of the inverse width on the turbidity, as shown in Fig.
\[fig:kl\*-sigma\], also indicates the critical behavior around the transition. Below the critical turbidity, $\sigma^2$ increases at all times and the corresponding inverse localization length is zero. At the mobility edge, the localization length is limited by the sample thickness, which in the case shown here was approximately 1 mm, so that a more detailed determination of the intrinsic localization length is not possible. For highly turbid samples, well below the transition, the inverse localization length seems to increase linearly with decreasing $kl^*$, indicating an exponent of unity. However, there is insufficient dynamic range close to the transition for a full determination of a critical exponent. In conclusion, we have shown direct evidence for localization of light in three dimensions and the corresponding transition at the mobility edge. This has been achieved using the time dependence of the mean square width $\sigma^2$ of the transmission profile, which is an excellent measure for the onset of localization of light. In contrast to other measures, it is completely independent of absorption and allows a [*direct*]{} determination of the localization length for samples close to the mobility edge. We find that for highly turbid samples, $\sigma^2$ shows a plateau, which changes to a sub-linear increase for critical turbidities and becomes linear for purely diffusive samples. This allows a detailed characterization of the behavior of transport close to the transition, which is not possible with other techniques. By evaluating the plateau $\sigma^2_\infty$ of localizing samples one can directly access the localization length $\xi$. For sample thicknesses close to the localization length, we moreover observe a decrease in the width of the photon cloud, which we associate with a statistical distribution of microscopic localization lengths.
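The classification by the growth exponent $a$ can be sketched as a log-log slope estimate; the example below uses synthetic, noise-free data for the diffusive ($a=1$) and mobility-edge ($a=2/3$) cases.

```python
# Estimating the growth exponent a in sigma^2 ∝ t^a from a log-log slope,
# mirroring the classification used above (a = 1 diffusive, a = 2/3 at the
# mobility edge, a = 0 localized).  Synthetic, noise-free data.
import math

def growth_exponent(ts, s2s):
    x = [math.log(t) for t in ts]
    y = [math.log(s) for s in s2s]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

ts = [1.0 + 0.1 * i for i in range(50)]
a_diffusive = growth_exponent(ts, [4.0 * t for t in ts])          # a = 1
a_critical = growth_exponent(ts, [t ** (2.0 / 3.0) for t in ts])  # a = 2/3
```

On measured data the same regression would be restricted to the late-time window past the initial diffusive regime, as described above for the slope-ratio estimate of $a$.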
These data will stimulate further theoretical work; a comparison with quantitative theories, such as self-consistent theory [@cherroret10] or direct numerical simulation [@gentilini10], can then yield valuable information about the statistical distribution of localization lengths close to the transition. In addition, we have shown that the transition to localization can be observed in one and the same sample using spectral measurements, thus continuously varying the control parameter of turbidity through the transition. For highly turbid samples, the width of the transmission profile saturates at a value that increases with decreasing turbidity until the localization length is comparable to the sample thickness. At this point the width increases at all times, albeit with a sub-linear increase at long times. This behavior is expected from the diffusion coefficient at the mobility edge [@berkovits]. Such measurements close to the transition between Anderson localization and diffusion allow a determination of the critical turbidity $kl^*_\text{crit} = 4.5(4)$, which is in good agreement with an indirect determination using time of flight measurements. In addition, our determination of the localization length during the approach to the localization transition allows an estimate of the critical exponent of the transition. Well away from the critical regime, we find a value close to unity, which is not incompatible with theoretical determinations [@abrahams; @john84; @numeric]. A complete description of the transition in open media taking finite size effects into account will be a great challenge for future theoretical descriptions of Anderson localization. [**Methods**]{} The samples are slabs made up of nanoparticles of sizes ranging from 170 to 540 nm in diameter with polydispersities ranging between 25 and 45 $\%$. Powders were provided by DuPont and Sigma Aldrich. 
These samples are slightly compressed and have been used previously [@stoerzer06] to demonstrate non-classical transport behavior in time dependent transmission. TiO$_2$ has a relatively high refractive index in the visible of $n=2.7$ in the rutile phase and 2.5 in the anatase phase. The extremely high turbidity of the samples requires a high power laser system to measure the transmitted light. We use a frequency doubled Nd:YAG laser (Verdi V18), operated at [$18 \, \text{W}$]{} output power, to pump a titanium sapphire laser (HP Mira). The HP Mira runs mode locked with a repetition rate of [$75 \, \text{MHz}$]{} at a maximum of about [$4 \, \text{W}$]{}. To convert the laser light from about [$790 \, \text{nm}$]{} to orange laser light ([$590 \, \text{nm}$]{}), a frequency doubled OPO is used. The laser wavelength emitted by the OPO can be tuned from approx. [$550 \, \text{nm}$]{} to [$650 \, \text{nm}$]{}. To approximate a point-like source, the laser beam was focused onto the flat front surface of the sample with a waist of 100 $\mu$m. The transmitted light was imaged from the flat backside by a magnifying lens ($f =$[$25 \, \text{mm}$]{}, mounted in reverse position) onto a high rate intensifier (HRI, LaVision PicoStar). The HRI can be gated on a time scale of about [$1 \, \text{ns}$]{} and the gate can be shifted in time steps of [$0.25 \, \text{ns}$]{}. The HRI is made of gallium arsenide phosphide, which has a maximum quantum efficiency of 40.6 % at about [$590 \, \text{nm}$]{}. A fluorescent screen images the signal onto a [$16 \, \text{bit}$]{} CCD camera with a resolution of 512 $\times$ 512 pixels. With this system we were able to record the transmitted profile with a time resolution below a nanosecond. To measure the turbidity of a sample we used a backscattering set-up described elsewhere [@fiebig08; @gross07]. 
With this setup covering the full angular range, it is possible to determine $kl^*$ from the inverse width of the backscattering cone. Since this system uses different laser sources, the spectral range of the set-up is more limited in wavelength ([$568 \, \text{nm}$]{} to [$619 \, \text{nm}$]{} and [$660 \, \text{nm}$]{}). This work was funded by DFG, SNSF, as well as the Land Baden-Württemberg, via the Center for Applied Photonics. Furthermore, we would like to thank Nicolas Cherroret for his support and fruitful discussions. [99]{} Anderson, P.W., ’Absence of diffusion in certain random lattices.’, Phys. Rev. [**109**]{}, 1492 (1958). Anderson, P.W., ’The question of classical localization: a theory of white paint?’, Philosophical Magazine B [**52**]{}, 505 (1985). John, S., ’Electromagnetic Absorption in a Disordered Medium near a Photon Mobility Edge.’, Phys. Rev. Lett. [**53**]{}, 2169 (1984). Altshuler, B.L. et al., Mesoscopic Phenomena in Solids (North-Holland, Amsterdam, 1991). Kondov, S.S., [*et al.*]{}, ’Three-Dimensional Anderson Localization of Ultracold Matter.’, Science [**334**]{}, 66 (2011). Kuga, Y. and Ishimaru, A., ’Retroreflectance from a dense distribution of spherical particles.’, J. Opt. Soc. Am. A [**1**]{}, 831 (1984). van Albada, M.P. and Lagendijk, A., ’Observation of weak localization of light in a random medium.’, Phys. Rev. Lett. [**55**]{}, 2692 (1985). Wolf, P.E. and Maret, G., ’Weak localization and coherent backscattering of photons in disordered media.’, Phys. Rev. Lett. [**55**]{}, 2696 (1985). Drake, J.M. and Genack, A.Z., ’Observation of nonclassical optical diffusion.’, Phys. Rev. Lett. [**63**]{}, 259 (1989). Wiersma, D.S., Bartolini, P., Lagendijk, A., and Righini, R., ’Localization of light in a disordered medium.’, Nature [**390**]{}, 671 (1997). Scheffold, F., Lenke, R., Tweer, R., and Maret, G., ’Localization or classical diffusion of light?’, Nature [**398**]{}, 206 (1999). 
Fiebig, S., Aegerter, C.M., Bührer, W., Störzer, M., Akkermans, E., Montambaux, G. and Maret, G., ’Conservation of energy in coherent backscattering at large angles.’, EPL [**81**]{}, 64004 (2008). Störzer, M., Gross, P., Aegerter, C.M. and Maret, G., ’Observation of the critical regime in the approach to Anderson localization of light.’, Phys. Rev. Lett. [**96**]{}, 063904 (2006). Aegerter, C.M., Störzer, M. and Maret, G., ’Experimental determination of critical exponents in Anderson localization of light.’, Europhys. Lett. [**75**]{}, 562 (2006). Bayer, G. and Niederdränk, T., ’Weak localization of acoustic waves in strongly scattering media.’, Phys. Rev. Lett. [**70**]{}, 3884 (1993). Hu, H., Strybulevych, A., Page, J.H., Skipetrov, S.E. and van Tiggelen, B.A., ’Localization of ultrasound in a three-dimensional elastic network.’, Nature Phys. [**4**]{}, 945 (2008). Lenke, R. and Maret, G., [*Multiple Scattering of Light: Coherent Backscattering and Transmission*]{}, Gordon and Breach Science Publishers (2000). Abrahams E., Anderson P.W., Licciardello D.C., and Ramakrishnan T.V., ’Scaling theory of localization: absence of quantum diffusion in two dimensions.’, Phys. Rev. Lett. [**42**]{}, 673 (1979). Ioffe, A.F. and Regel, A.R., ’Non-crystalline, amorphous and liquid electronic semiconductors.’, Progress in Semiconductors [**4**]{}, 237 (1960). Berkovits, R. and Kaveh, M., ’Propagation of waves through a slab near the Anderson transition: a local scaling approach.’, J. Phys. C: Cond. Mat. [**2**]{}, 307 (1990). Skipetrov, S.E. and van Tiggelen, B.A., ’Dynamics of Anderson localization in open 3D media.’, Phys. Rev. Lett. [**96**]{}, 043902 (2006). Cherroret, N., Skipetrov, S.E. and van Tiggelen, B.A., ’Transverse confinement of waves in random media.’, Phys. Rev. E [**82**]{}, 056603 (2010). Gross, P., Störzer, M., Fiebig, S., Clausen, M., Maret, G. 
and Aegerter, C.M., ’A precise method to determine the angular distribution of backscattered light to high angles.’, Rev. Sci. Instrum. [**78**]{}, 033105 (2007). The finite gating window leads to an averaging of the time-dependent width weighted by the time-dependent transmitted intensity. Due to the non-linear time-dependence of this intensity, the averaged width can be different around the maximum intensity for longer gating times. Gentilini, S., Fratalocchi, A., and Conti, C., ’Signatures of Anderson localization excited by an optical frequency comb.’, Physical Review B [**81**]{}, 014209 (2010). MacKinnon, A. and Kramer, B., ’One-parameter scaling of localization length and conductance in disordered systems.’, Phys. Rev. Lett. [**47**]{}, 1546 (1981). ![\[fig:tmax\] The mean square width scaled with the sample size $\frac{\sigma^2}{L^2}$ is shown for different samples. The time axis is scaled with the diffusion time $\tau_\text{max}$ (see supplementary material). In a) Aldrich anatase is shown, which behaves diffusively. Samples showing localizing effects are b) R104 and c) R700. The legends show the slab thickness $L$ in mm. d) Schematic illustration of the expectation for the time dependence of the width in the presence of statistically distributed localization lengths as discussed in the text. The decreasing population of the different grey lines at late times for larger localization lengths leads to an overall decrease of the width, in particular for sample thicknesses close to the average localization length, because big loops are leaking out of the sample. The different coloured lines correspond to the time dependence of the width with increasing microscopic localization length from small (green) to large (red). ](all_tmax4.pdf){width="1\linewidth"} ![\[fig:spectral\] The spectral measurement of a R700 sample ranging from [$550 \, \text{nm}$]{} to [$650 \, \text{nm}$]{}, corresponding to $kl^*$ values between 2.1 and 3.6, is shown. 
With decreasing wavelength $\lambda$ the turbidity increases ($kl^*$ decreases) and localizing effects become stronger. This can be seen from the smaller mean square width $\sigma^2_\infty$ of the plateaus. For wavelengths above [$640 \, \text{nm}$]{} one observes a breakdown of localization to a sub-diffusive behavior. The legend shows the wavelength of light in nm.](spectral3.pdf){width="\linewidth"} ![\[fig:kl\*-sigma\] The inverse of the mean square width $\sigma^2_\infty$ of the plateau against $kl^*$ for different samples. As can be seen, the width, corresponding to the localization length, diverges at a value of $kl^* \simeq 4.5$, indicating the transition from a localized to a non-localized state. The increase of the localization length approaching the critical turbidity can also be used to estimate the critical exponent. ](spectral_kl_sigma.pdf){width="\linewidth"} ![\[fig:transition\] The value of the exponent $a$ describing the temporal increase of the mean square width. In the diffusive regime, the exponent should be unity, whereas in the fully localized regime a value of zero is expected. At the mobility edge the sub-diffusive increase corresponds to intermediate values. This allows a determination of the critical turbidity. ](transition.pdf){width="\linewidth"}
--- abstract: 'Given a compact metric space $X$ and a probability measure on the $\sigma-$algebra of Borel subsets of $X$, we will establish a dominated convergence theorem for ultralimits of sequences of integrable maps and apply it to deduce a non-standard ergodic-like theorem for any probability measure.' address: - | Maria Carvalho\ Centro de Matemática da Universidade do Porto\ Rua do Campo Alegre 687\ 4169-007 Porto\ Portugal - | Fernando Jorge Moreira\ Centro de Matemática da Universidade do Porto\ Rua do Campo Alegre 687\ 4169-007 Porto\ Portugal author: - Maria Carvalho - Fernando Jorge Moreira title: Ultralimits of Birkhoff averages --- Introduction ============ Let $(X,\mathcal{B}, \mu)$ be a measure space, where $\mathcal{B}$ is a $\sigma-$algebra of subsets of $X$ and $\mu$ is a $\sigma-$finite measure on $\mathcal{B}$, and consider a measurable map $T:X \to X$. One says that $T:X\to X$ preserves $\mu$ (or that $\mu$ is $T-$invariant) if $\mu(T^{-1}(A)) = \mu(A)$ for any $A \in \mathcal{B}$. The measure $\mu$ is said to be ergodic with respect to $T$ if, given $A\in \mathcal{B}$ with $T^{-1}(A)=A$, we have $\mu(A)\times \mu(X\setminus A)=0$. For each $x \in X$, its orbit by $T$ is defined by the sequence of iterates $\left(T^n(x)\right)_{n \,\in \, \mathbb{N}}$, where $T^0 = Id$. If $T$ preserves $\mu$ and $\mu$ is finite, the Recurrence Theorem of Poincaré asserts that, for every $A \in \mathcal{B}$ with $\mu(A) > 0$, the orbit of almost every point in $A$ returns to $A$ infinitely many times. If, additionally, $\mu$ is ergodic with respect to $T$, then the expected time for the first return, as estimated by Kac (cf. [@Petersen]), is of the order of $\frac{1}{\mu(A)}$. Besides, by the Ergodic Theorem of Birkhoff [@Walters; @Katznelson-Weiss] we may also evaluate the mean sojourn of almost every orbit in $A$, and it is asymptotically close to $\mu(A)$. 
The statement of the Ergodic Theorem is in fact more general, asserting that, if $T$ preserves $\mu$ and $\varphi:X \to \mathbb{R}$ belongs to $L^1(X)$ (as happens with the characteristic map $\chi_A$ for every set $A \in \mathcal{B}$ whenever $\mu$ is a finite measure), then there exists $\widetilde{\varphi}\in L^1(X)$ such that, at $\mu-\text{a.e.} \,x \in X$, we have $\lim_{n \to +\infty}\, \frac 1 n \,\sum_{k=1}^n\,\varphi \circ T^k(x) = \widetilde{\varphi}(x)$ and $\widetilde{\varphi}\circ T(x) = \widetilde{\varphi}(x)$, and $\int_X \, \widetilde{\varphi}\,d\mu = \int_X \,\varphi d\mu.$ Several generalizations of this theorem are known, demanding less either from the observable $\varphi$ or from the probability measure $\mu$ (cf. [@Krengel; @CM2014; @CV-M-Marinacci]). The aim of this work is to set up an abstract framework for these generalizations through a non-standard dominated convergence theorem whenever $X$ is a compact metric space and $\mu$ is a Borel probability measure. Its application to Birkhoff averages of measurable bounded potentials, with respect either to dynamical systems without invariant measures (such as $T:[0,1] \to [0,1]$ given by $T(x)=\frac{x}{2}$ if $x \neq 0$, $T(0)=1$) or to those whose invariant measures have relevant sets of points with historical behavior (as the ones described in [@Takens2; @Kiriki-Soma]), conveys more information on the accumulation points of such averages. The main tools in this non-standard approach are the notions of ultrafilter and ultralimit, besides the ultrapower construction in order to produce extensions of relevant structures and transformations to the non-standard realm. (Concerning non-standard analysis, we refer the reader to [@Cutland; @Goldblatt].) The paper is organized as follows. 
After recalling a few basic properties of ultralimits, ultraproducts, the shadow map and integrability, we will prove a Dominated Convergence Theorem for ultralimits, from which we will deduce an ergodic-like theorem for a Borel probability measure in a compact Hausdorff space, where a dynamical system $T$ is acting, and a measurable bounded function. Using the shift map in the space of ultrafilters, we will also show the existence of a space mean of the Birkhoff limits when we take into account all the possible choices of the ultrafilter. Basic definitions ================= In this section we will give a brief though comprehensive list of the non-standard concepts and results we will use in the sequel. More information may be found in [@Goldblatt]. Filters ------- A filter on a set $X$ is a non-empty family $\mathcal{F}$ of subsets of $X$ such that: - $A, B \in \mathcal{F} \quad \Rightarrow \quad A \cap B \in \mathcal{F}$. - $A \in \mathcal{F} \text{ and } A \subseteq B \subseteq X \quad \Rightarrow \quad B \in \mathcal{F}$. - $\emptyset \notin \mathcal{F}$. A filter $\mathcal{U}$ on $X$ is said to be an *ultrafilter* if for every $A \subseteq X$ either $A \in \mathcal{U}$ or $X\setminus A \in \mathcal{U}$ (but not both due to conditions (i) and (iii)). Ultrafilters are maximal filters with respect to the inclusion, and provide a useful criterion to establish which sets are considered large. Given $m \in \mathbb{N}$, the family $\mathcal{U}_{m}=\{A \subseteq\mathbb{N} \colon m \in A\}$ is an ultrafilter in $\mathbb{N}$, called *principal*. We are interested in non-principal ultrafilters as a measure of largeness of sets. For instance, take the Fréchet filter $\mathcal{F}_{\text{cf}} = \{A \subseteq \mathbb{N}\colon \mathbb{N}\setminus A \text{ is finite}\}$, that is, the collection of subsets of $\mathbb{N}$ whose complement is finite. 
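The filter axioms (i)–(iii) and the ultrafilter dichotomy are finitely checkable on a finite set; a minimal Python sketch (helper names are ours) verifying them for a principal ultrafilter:

```python
from itertools import chain, combinations

def powerset(X):
    # all subsets of X, as frozensets
    return [frozenset(c) for c in
            chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]

def is_ultrafilter(F, X):
    # filter axioms (i)-(iii) plus the dichotomy "A in F xor X\A in F"
    F = {frozenset(A) for A in F}
    return (all(A & B in F for A in F for B in F)                       # (i)
            and all(B in F for A in F for B in powerset(X) if A <= B)   # (ii)
            and frozenset() not in F                                    # (iii)
            and all((A in F) != (frozenset(X) - A in F) for A in powerset(X)))

X = {0, 1, 2}
principal = [A for A in powerset(X) if 2 in A]   # the principal ultrafilter U_2
assert is_ultrafilter(principal, X)
assert not is_ultrafilter([A for A in powerset(X) if len(A) >= 2], X)
```

Non-principal ultrafilters, in contrast, cannot be exhibited by any such finite computation; their existence rests on Zorn's Lemma.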
There exists an ultrafilter $\mathcal{U}_{\text{cf}}$ containing $\mathcal{F}_{\text{cf}}$: since the union of any chain of proper filters is again a proper filter, by Zorn’s Lemma the filter $\mathcal{F}_{\text{cf}}$ is contained in a maximal proper filter $\mathcal{U}_{\text{cf}}$. It is not hard to see that $\mathcal{U}_{\text{cf}}$ is a non-principal ultrafilter. One advantage of using this kind of ultrafilter is the fact that an ultrafilter is non-principal if and only if it contains the Fréchet filter of co-finite subsets. Assuming that $\mathbb{N}$ has the discrete topology, we will denote by $\beta \mathbb{N}$ the Stone–Čech compactification of $\mathbb{N}$, which is a non-metrizable Hausdorff compact space (cf. [@Willard]). The space $\beta \mathbb{N}$ is homeomorphic to the collection $\mathbb{U}_\mathbb{N}$ of all the ultrafilters of subsets of $\mathbb{N}$ endowed with the topology generated by the (open and closed) sets $\left\{\mathbb{O}_F\right\}_{F \subseteq \mathbb{N}}$ where $$\mathbb{O}_F=\{\mathcal{U} \in \mathbb{U}_\mathbb{N} \colon F \in \mathcal{U}\}.$$ Ultralimits {#sse:ultralimits} ----------- Let $X$ be a compact Hausdorff space and $\mathcal{U}$ be a non-principal ultrafilter on $\NN$. We say that a sequence $(x_n)_{n \in \mathbb{N}}$ in $X$ is $\mathcal{U}-$convergent in $X$ to $\ell$, and denote its limit by $\mathcal{U} \text{-} \lim_n \,x_n$, if, for any neighborhood $V_\ell$ of $\ell$, we have $\{n\in \NN \colon x_n \in V_\ell\} \in \mathcal{U}$. Observe that the $\mathcal{U} \text{-} \lim_n \,x_n$ always exists since $X$ is compact. Indeed, suppose otherwise: for every $\ell \in X$ we may find an open neighborhood $V_\ell$ such that the set $\mathcal{C}_\ell = \{n \in \mathbb{N} \colon x_n \in V_\ell\}$ does not belong to $\mathcal{U}$. As $\mathcal{U}$ is an ultrafilter, the complement $\mathbb{N} \setminus \mathcal{C}_{\ell}$ must be in $\mathcal{U}$. 
Besides, as $X$ is compact, we may take a finite subcover $\left\{V_{\ell_1}, \ldots, V_{\ell_k}\right\}$ of the cover $\left(V_\ell\right)_{\ell \in X}$. The finite intersection $\bigcap_{j=1}^k\,\Big(\mathbb{N} \setminus \mathcal{C}_{\ell_j}\Big)$ is in $\mathcal{U}$ as well. Hence, as the empty set is not in $\mathcal{U}$, we conclude that $\bigcap_{j=1}^k\,\Big(\mathbb{N} \setminus \mathcal{C}_{\ell_j}\Big) \neq \emptyset$. However, if $m \in \bigcap_{j=1}^k\,\Big(\mathbb{N} \setminus \mathcal{C}_{\ell_j}\Big)$, then $x_m$ does not belong to $V_{\ell_j}$ for any $1 \leq j \leq k$. This contradicts the fact that $\left\{V_{\ell_1}, \ldots, V_{\ell_k}\right\}$ is a cover of $X$. Additionally, as $X$ is Hausdorff, the ultralimit is unique. Given a sequence $(x_n)_{n \in \mathbb{N}}$, the ultralimits for all the possible choices of non-principal ultrafilters in $\mathbb{N}$ are precisely the cluster points of this sequence. In particular, if the sequence is convergent in $X$ to a limit $\ell$, then $\mathcal{U} \text{-} \lim_n \, x_n = \ell$ for every non-principal ultrafilter $\mathcal{U}$ in $\mathbb{N}$. If a real-valued sequence is not bounded, its $\UU-$limit always exists, though it may be either $+\infty$ or $-\infty$. Ultrapower construction ----------------------- Let $X$ be a compact metric space, $\mu$ be a probability measure defined on the $\sigma-$algebra $\mathcal{B}$ of the Borel subsets of $X$ and $\mathcal{U}$ be an ultrafilter in $\mathbb{N}$. 
Given two sequences $\left(a_n\right)_{n \,\in\, \mathbb{N}}$ and $\left(b_n\right)_{n \,\in\, \mathbb{N}}$ of elements of $X$, define the equivalence relation $$\left(a_n\right)_{n \,\in\, \mathbb{N}} \quad \backsim \quad \left(b_n\right)_{n \,\in\, \mathbb{N}} \quad \quad \Leftrightarrow \quad \quad \Big\{n \in \NN \colon \,a_n \,= \, b_n\Big\} \,\, \in \mathcal{U}.$$ Denote by $\widehat X$ the *ultrapower* of $X$ made by the equivalence classes of sequences of elements of $X$, that is, $$\widehat X \,\,:= \,\,X^\NN_{\diagup \backsim} \,\,= \,\,\Big\{[\left(x_n\right)_{n \,\in \,\NN}] \colon x_n \in X \,\,\, \forall n \in \NN\Big\}.$$ In what follows we will denote by $[x_n]$ the equivalence class of the sequence $\left(x_n\right)_{n \, \in \, \NN}$ in $\widehat X$. In a natural way, $X$ is embedded in $\widehat X$ by the inclusion map $\iota: \,X \hookrightarrow \widehat X$ given by $\iota(x)=[x]$, where $[x]$ stands for the equivalence class of the constant sequence with all terms equal to $x$. The shadow map -------------- Part of the usefulness of the ultrapower structures relies on the possibility of transferring information from its universe to the standard realm. To do so, one often uses the *shadow map* $\operatorname{sh}_{_{\mathcal{U},\,X}}\colon \widehat X \to X$ defined by $$\operatorname{sh}_{_{\mathcal{U},\,X}}([\left(a_n\right)_{n \, \in \, \NN}])\,\, := \,\,\mathcal{U} \text{-} \lim_n \, a_n.$$ \[le:openclosedsets\] $\,$ 1. $C\subset X$ is closed if and only if $ \widehat C \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C)$. 2. $O\subset X$ is open if and only if $\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(O)\subset \widehat O$. Let $C$ be a closed subset of $X$ and suppose that there exists $\widehat{x}:=[\left(x_n\right)_{n \, \in \, \NN}] \in \widehat C$ such that $y=\operatorname{sh}_{_{\mathcal{U},\,X}}(\widehat{x}) \not\in C$. 
Since $C$ is closed we can find $\epsilon >0$ such that the open ball $B_\epsilon(y)$ of radius $\epsilon$ centered at $y$ is contained in $X \setminus C$. By definition $y=\ulim_n x_n$, and thus $\left\{n \in \NN\; :\ x_n \in B_\epsilon(y) \right\} \in \mathcal{U}$. Therefore, $\left\{n \in \NN\; :\ x_n \in X \setminus C \right\} \in \mathcal{U}$, or equivalently, $\widehat{x} \in \widehat{X \setminus C}$. However, $\widehat{X \setminus C} = \widehat{X} \setminus \widehat{C}$, so $\widehat{x} \notin \widehat C$, which is a contradiction. Assume now that $\widehat C \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C)$. If $C$ were not closed, then one could find $y \in X \setminus C$ such that, for every $n \in \NN$, we would have $B_{1/n}(y) \cap C \ \not= \ \emptyset.$ For each $n \in \NN$, take $c_n \in B_{1/n}(y) \,\cap\, C$. Since $y=\lim_n c_n$, then $y=\operatorname{sh}_{_{\mathcal{U},\,X}}([\left(c_n\right)_{n \, \in \, \NN}])$, so $\widehat{c}:=[\left(c_n\right)_{n \, \in \, \NN}] \in \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(X \setminus C)$. Now, $\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(X \setminus C)= \widehat{X}\setminus \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C)$, thus $\widehat{c} \notin \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C)$. As $\widehat C \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C)$, we have $\widehat{X} \setminus \widehat C \supset \widehat{X} \setminus \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C)$, hence $\widehat{c} \not \in \widehat{C}$. Yet, by construction, $\widehat{c}=[\left(c_n\right)_{n \, \in \, \NN}]$ belongs to $\widehat{C}$. 
Concerning item (2), we are left to notice that $O \subset X$ is open if and only if $X \setminus O$ is closed, to recall that $\widehat{X \setminus O} \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(X \setminus O)$ by the previous item, and that $$\widehat{X \setminus O} \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(X \setminus O) \quad \quad \Longleftrightarrow \quad \quad \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(O) \subset \widehat{O}.$$ Ultraproducts and a finitely additive measure --------------------------------------------- A set $\Lambda \subset \widehat X$ is said to be an *ultraproduct set* (which we abbreviate to *UP-set*) if there exists a sequence $\left(A_n\right)_{n \,\in\, \mathbb{N}}$ of subsets of $X$ such that $$\Lambda = \Big(\prod_{n \,\in \,\NN} \,A_n\Big)_{\diagup \backsim}.$$ For UP-sets that are determined by sequences of Borel sets, which form an algebra that we denote by $\widehat{\mathcal{B}}$, we define a finitely additive measure as follows: if $\Lambda = \Big(\prod_{n \,\in \,\NN} \,A_n\Big)_{\diagup \backsim}$ where $A_n \in \mathcal{B}$ for every $n \in \mathbb{N}$, then $$\label{def:medida Loeb} \widehat \mu_{_\mathcal{U}}(\Lambda) \,:= \,\,\mathcal{U}\text{-}\lim_n \,\mu(A_n).$$ The previous computation does not depend on the sequence $\left(A_n\right)_{n \,\in\, \mathbb{N}}$ of measurable subsets of $X$ whose product builds $\Lambda$ (cf. [@Goldblatt]). $\UU-$integrability {#sse:U-integrable} ------------------- Let $X$ be a compact metric space, $\mu$ be a probability measure defined on the $\sigma-$algebra $\mathcal{B}$ of the Borel subsets of $X$ and $\mathcal{U}$ be a non-principal ultrafilter in $\mathbb{N}$. A sequence $(f_n)_{n \, \in \, \mathbb{N}}$ of $\mathcal{B}-$measurable functions $f_n: X \to \RR$ is said to be $\UU-$integrable if the following conditions hold: 1. For every $n \in \NN$, the map $f_n$ is $\mu-$integrable. 2. $\mathcal{U} \text{-} \lim_n \,\int_X\,|f_n|\,d\mu$ is finite. 3. 
If $\left(B_n\right)_{n \, \in \, \NN}$ is a sequence of elements of $\mathcal{B}$, then $$\mathcal{U} \text{-} \lim_n \,\mu(B_n)=0 \quad \quad \Rightarrow \quad \quad \mathcal{U} \text{-} \lim_n \, \int_{B_n}\,|f_n|\,d\mu = 0.$$ For example, take $B \in \mathcal{B}$, the characteristic map $\chi_{_B}$ of $B$ and, for each natural number $n$, consider $f_n=\chi_{_B} \circ T^n$. Then $$\mathcal{U} \text{-} \lim_n \,\int_X\,f_n\,d\mu = \mathcal{U} \text{-} \lim_n \, \mu(T^{-n}(B)) \in [0,1]$$ and, for every sequence $\left(B_n\right)_{n \, \in \, \NN}$ of elements of $\mathcal{B}$ satisfying $\mathcal{U} \text{-} \lim_n \,\mu(B_n)=0$, we have $$0 \leq \mathcal{U} \text{-} \lim_n \, \int_{B_n}\,f_n\,d\mu = \mathcal{U} \text{-} \lim_n \, \mu(T^{-n}(B) \cap B_n) \leq \mathcal{U} \text{-} \lim_n \, \mu(B_n) = 0.$$ By a similar argument, we conclude that if $\varphi: X \to \RR$ is a measurable and bounded map (so, for every $n \in \NN_0$, the map $\varphi \circ T^n$ is $\mu-$integrable), then the sequence $(f_n)_{n \, \in \, \mathbb{N}}$ defined by $$f_n := \frac{1}{n}\Big(\varphi + \varphi \circ T + \cdots + \varphi \circ T^{n-1}\Big)$$ is $\UU-$integrable. In contrast, the unbounded sequence of maps $f_n:[0,1] \to \RR$ defined by $f_n(x)= n$ if $x < 1/n$ and $f_n(x)=0$ otherwise is not $\UU-$integrable if we consider in $[0,1]$ the Lebesgue measure $m$. Indeed, for every $n \in \NN$, the map $f_n$ is Lebesgue-integrable and $\lim_n \,\int |f_n|\,d\,m = 1$. However, if $B_n=[0, \frac{1}{n}]$ for every $n \in \NN$, then $\lim_n \,m(B_n)=0$ but $\int_{B_n}\,|f_n|\,d\,m = 1.$ We observe that, in this case, $\int \,\lim_n f_n \,d\,m \neq \lim_n \int f_n\,d\,m$. 
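The failure in the last example can be checked numerically; in the sketch below (grid resolution and names are ours) midpoint Riemann sums confirm that $\int_{B_n} |f_n|\,dm$ stays equal to $1$ while $m(B_n) = 1/n$ tends to $0$:

```python
def riemann_fn(n, grid=10**5):
    # midpoint Riemann sum over [0,1] of f_n(x) = n if x < 1/n else 0;
    # the support of f_n is exactly B_n = [0, 1/n]
    h = 1.0 / grid
    return h * sum(n if (i + 0.5) * h < 1.0 / n else 0 for i in range(grid))

for n in (10, 100, 1000):
    assert abs(riemann_fn(n) - 1.0) < 1e-6   # the integral of |f_n| stays at 1 ...
    assert 1.0 / n <= 0.1                    # ... while m(B_n) = 1/n shrinks to 0
```

The escaping unit of mass is precisely what condition 3 of $\UU-$integrability rules out.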
Main results ============ After selecting a non-principal ultrafilter in $\mathbb{N}$ and extending the finitely additive measure to a measure defined on a $\sigma-$algebra on the ultrapower $\widehat{X}$ that contains $\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(\mathcal{B})$, we will consider integrable maps and the ultralimits of their Birkhoff averages in order to prove a non-standard pointwise convergence theorem for any probability measure. There are two main difficulties to establish such a result. The first one is the lack of some version of the Dominated Convergence Theorem with respect to ultralimits. We will establish the following one. \[teo:main\] If $g: X \to \RR$ is $\mu-$integrable and $(f_n)_{n \, \in \, \mathbb{N}}$ is a sequence of real-valued $\mu-$integrable functions with $|f_n| \leq g$ for every $n \in \NN$, then the map $$[x_n] \,\in \,\widehat{X} \quad \mapsto \quad \mathcal{U}\text{-}\lim_n \,[f_n(x_n)]$$ is $\widehat \mu_{{_\mathcal{U}}}-$integrable and satisfies $$\int_{\widehat X} \,\mathcal{U}\text{-}\lim_n \,[f_n(x_n)]\,d\,\widehat \mu_{{_\mathcal{U}}}([x_n]) \,\,=\, \,\mathcal{U}\text{-}\lim_n \, \int_X \,f_n(x) \,\,d\mu(x).$$ This result is a consequence of a more general statement concerning bounded $\UU-$integrable sequences $(f_n)_{n \, \in \, \mathbb{N}}$ of real-valued functions which we will prove in Section \[se:prova-teorema-A\]. 
\[teo:second-main\] If $(f_n)_{n \, \in \, \mathbb{N}}$ is a $\UU-$integrable sequence of real-valued functions, then the map $$[x_n] \,\in \,\widehat{X} \quad \mapsto \quad \mathcal{U}\text{-}\lim_n \,[f_n(x_n)]$$ is $\widehat \mu_{{_\mathcal{U}}}-$integrable and satisfies $$\int_{\widehat X} \,\mathcal{U}\text{-}\lim_n \,[f_n(x_n)]\,d\,\widehat \mu_{{_\mathcal{U}}}([x_n]) \,\,=\, \,\mathcal{U}\text{-}\lim_n \, \int_X \,f_n(x) \,\,d\mu(x).$$ The second difficulty concerns the measurability of the ultralimit: although the pointwise convergence with respect to an ultrafilter of a sequence of measurable functions is guaranteed, its ultralimit may not be measurable. Let us see an example. Consider a compact metric space $X$, a $\sigma-$algebra of subsets of $X$, an ultrafilter $\mathcal{U}$ and a sequence $(f_n)_{n \, \in \, \mathbb{N}}$ of measurable functions $f_n:X \to \mathbb{R}$. If $\mathcal{U}$ is principal, generated by $\{n_0\}$, then the $\mathcal{U} \text{-} \lim_n \,f_n$ is $f_{n_0}$, so it is measurable. Otherwise, if $\mathcal{U}$ is non-principal, consider $X=[0,1]$ with the usual topology and the Lebesgue measure. For each $n \in \mathbb{N}$ and $x \in X$, define $$f_n(x) = \text{the $n$th digit in the infinite binary expansion of $x$}.$$ Then the $\mathcal{U} \text{-} \lim_n \,f_n$ sends $x \in X$ to $1$ if and only if the set $$\{n \in \mathbb{N} \colon \text{the $n$th bit in the infinite binary expansion of $x$ is 1}\}$$ is in $\mathcal{U}$. In other words, if we identify $x$ via its binary expansion with a sequence of $0$’s and $1$’s and if we then regard that sequence as the characteristic function of a subset of $\mathbb{N}$, then the $\mathcal{U} \text{-} \lim_n \,f_n$, mapping subsets of $\mathbb{N}$ to $\{0,1\}$, is just the characteristic function of $\mathcal{U}$. However, a theorem of Sierpinski [@Sierpinski] [^1] asserts that this is never Lebesgue measurable when $\mathcal{U}$ is a non-principal ultrafilter. 
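The digit functions $f_n$ of this example are easily computed; a small sketch (helper name ours) using exact rational arithmetic:

```python
from fractions import Fraction

def binary_digit(x, n):
    # n-th digit (n >= 1) in the binary expansion of x in [0, 1);
    # exact rationals avoid floating-point artefacts in the expansion
    return int(x * 2 ** n) % 2

x = Fraction(1, 3)                                  # 0.010101... in binary
digits = [binary_digit(x, n) for n in range(1, 9)]
assert digits == [0, 1, 0, 1, 0, 1, 0, 1]
```

For $x = 1/3$ the set $\{n \colon f_n(x) = 1\}$ is the set of even integers, so $\mathcal{U} \text{-} \lim_n \,f_n(1/3)$ equals $1$ precisely when that set belongs to $\mathcal{U}$: different non-principal ultrafilters genuinely produce different ultralimits. (For dyadic rationals this formula returns the terminating expansion rather than the infinite one used in the text, but such points form a null set.)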
To overcome this problem, we summon the ultrapower extension of the space $X$ with respect to a fixed ultrafilter. This way, we are able to prove the following property of the Birkhoff averages of bounded measurable potentials $\varphi$ as a direct consequence of Theorem \[teo:main\] when applied to the sequence $$\begin{aligned} \label{eq:averages} (f_n)_{n \, \in \, \NN} = \Big(\frac 1 n \,\sum_{j=0}^{n-1}\,\varphi \circ T^j\Big)_{n \, \in \, \NN}\end{aligned}$$ whenever $(f_n)_{n \, \in \, \NN}$ is $\UU-$integrable. \[cor:main\] Let $X$ be a compact metric space, $\mu$ be a probability measure defined on the Borel subsets of $X$ and $\mathcal{U}$ be a non-principal ultrafilter in $\NN$. Consider a measurable map $T: X \to X$ and a measurable bounded function $\varphi: X \to \RR$. Then there exists a $\widehat \mu_{_\mathcal{U}}-$integrable map $\widehat \varphi_{_\mathcal{U}}: \widehat X \to \RR$ satisfying: 1. $\mathcal{U} \text{-} \lim_n\,\frac 1 n \,\sum_{j=0}^{n-1}\,\varphi \circ T^j(x) = \widehat \varphi_{_\mathcal{U}} \circ \iota(x)$ for every $x \in X$. 2. $\widehat \varphi_{_\mathcal{U}}\circ \iota(T(x)) = \widehat \varphi_{_\mathcal{U}}\circ \iota(x)$ for every $x \in X$. 3. $\ulim_n \,\frac 1 n \,\sum_{j=0}^{n-1}\, \int_{_{X}} \,\varphi\circ T^j \, d\mu = \int_{_{\widehat X}}\,\widehat \varphi_{_\mathcal{U}}\,d\,\widehat \mu_{_\mathcal{U}}.$ As an immediate consequence of Corollary \[cor:main\] and [@Katznelson-Weiss] we deduce that, if $\mu$ is $T-$invariant, then, given a measurable bounded function $\varphi: X \to \RR$ and an ultrafilter $\mathcal{U}$, we have: - The maps $\widehat \varphi_{_\mathcal{U}}\circ \iota$ and $\widetilde{\varphi}$ (given by the Ergodic Theorem applied to $T$ and $\varphi$) coincide $\mu$ almost everywhere. 
- $\int_{_{\widehat X}}\,\widehat \varphi_{_\mathcal{U}}\,d\,\widehat \mu_{_\mathcal{U}} = \int_{_{X}} \,\varphi \, d\mu.$ - $\int_X \,\mathcal{U} \text{-} \lim_n\,\,\,\frac 1 n \,\sum_{j=0}^{n-1}\,\varphi \circ T^j(x)\,\,d\mu(x) = \int_X \,\varphi(x) \,\,d\mu(x).$ A natural question we may now address concerns the impact of the choice of the ultrafilter $\UU$. Using the shift map on the space $\beta \NN$ of ultrafilters in $\NN$ and a suitable Borel shift-invariant probability measure on $\beta \NN$, we will show in Section \[se:proof-of-Corollary-B\] that there exists in $\beta \NN$ a space mean of all the ultralimits of the Birkhoff averages. Construction of an ultrapower measure ===================================== In this section we recall how to extend $\widehat \mu_{_\mathcal{U}}$ to a $\sigma-$algebra containing $\widehat{\mathcal{B}}$. [@Goldblatt Theorem 11.10.1]\[le:fip\] Let $(\Lambda_n)_{n\, \in \,\mathbb{N}}$ be a decreasing sequence of non-empty UP-sets $\Lambda_n$. Then $\bigcap_{n \,\in \,\NN}\,\,\Lambda_n \not= \emptyset.$ Therefore, any cover of a UP-set by countably many UP-sets has a finite subcover. Since $\widehat \mu_{_\mathcal{U}}$ is finitely additive and for any disjoint union $\Lambda \ = \ \dot{\bigcup}_{k \, \in \, \mathbb{N}} \,\Lambda_k$ of UP-sets $\Lambda_k$ there is $k_0 \in \NN$ such that $\Lambda \ = \ \Lambda_1\dot{\cup} \ldots \dot{\cup}\,\Lambda_{k_0}$, with $\Lambda_k =\emptyset$ for every $k > k_0$, we conclude that the compatibility condition $$\label{caratheodor_cond} \Lambda = \dot{\bigcup}_{k \, \in \, \mathbb{N}} \,\Lambda_k \quad \quad \Rightarrow \quad \quad \widehat \mu_{_\mathcal{U}}(\Lambda) = \sum_{k=1}^\infty\,\,\widehat \mu_{_\mathcal{U}}(\Lambda_k)$$ is valid. 
Consequently, [@AB1998]\[LBcomplete\] The finitely additive measure $\widehat \mu_{_\mathcal{U}}$ can be extended to a measure, which we will keep denoting by $\widehat {\mu}_{_\mathcal{U}}$, on a $\sigma-$algebra $L(\widehat{\mathcal{B}})$ which satisfies: 1. $L(\widehat{\mathcal{B}}) \supset \widehat{\mathcal{B}}$. 2. $L(\widehat{\mathcal{B}})$ is a complete $\sigma-$algebra, that is, given $\Gamma \subset \widehat X$, if there are $\Gamma_1, \, \Gamma_2 \in L(\widehat{\mathcal{B}})$ such that $\Gamma_1 \subset \Gamma \subset \Gamma_2$ and $\widehat {\mu}_{_\mathcal{U}}(\Gamma_1)=\widehat {\mu}_{_\mathcal{U}}(\Gamma_2)$, then $\Gamma \in L(\widehat{\mathcal{B}}).$ Under the assumption that $X$ is a metric space and $\mu$ a Borel probability measure, we know that $\mu$ is a regular measure (cf. [@Walters Corollary 6.1.1]). More precisely, for any Borel set $B \in \mathcal{B}$ there is a decreasing sequence $(O_n)_{n \, \in \,\NN}$ of open sets and an increasing sequence $(C_n)_{n \, \in \,\NN}$ of closed sets such that $$\label{caractborelian} C_n \subset B \subset O_n \quad \quad \text{and} \quad \quad \mu(O_n)-\mu(C_n) < \frac 1n \quad \quad \forall \,\,n \in \NN.$$ From this property we deduce that the $\sigma-$algebra $\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(\mathcal{B})$ is contained in $L(\widehat{\mathcal{B}})$ and that $\operatorname{sh}_{_{\mathcal{U},\,X}}$ is a measure preserving map. \[sh-measurability\] For every Borel set $B \in \mathcal{B}$ we have $\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(B) \in L(\widehat{\mathcal{B}})$ and $\widehat {\mu}_{_\mathcal{U}}(\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(B)) = \mu(B).$ For each $B \in \mathcal{B}$, consider sequences $(O_n)_{n \, \in \,\NN}$ and $(C_n)_{n \, \in \,\NN}$ as in \[caractborelian\]. 
Then $$\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C_n) \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(B) \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(O_n) \quad \quad \forall \,n \,\in \,\mathbb{N}.$$ By Lemma \[le:openclosedsets\], we have $\widehat C_n \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(C_n)$ and $\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(O_n) \subset \widehat O_n$ for every $n \in \NN$. Thus, $$\widehat C_n \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(B) \subset \widehat O_n \quad \quad \forall \,n \,\in \,\mathbb{N}.$$ Set $$\widehat{C} := \bigcap_{n \,\in \,\NN} \,\widehat C_n \quad \quad \text{and} \quad \quad \widehat{O} := \bigcap_{n\, \in\, \NN}\, \widehat O_n.$$ Then, $$\widehat{C} \subset \operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(B) \subset \widehat{O}.$$ Moreover, by Definition \[def:medida Loeb\] we have $$\widehat \mu_{{_\mathcal{U}}}(\widehat C_n) = \mu(C_n) \quad \quad \text{and} \quad \quad \widehat \mu_{{_\mathcal{U}}}(\widehat O_n) = \mu(O_n)$$ so, as $\mu(O_n)-\mu(C_n) < \frac 1n$, $$\widehat \mu_{{_\mathcal{U}}}(\widehat{C})=\widehat \mu_{{_\mathcal{U}}}(\widehat{O}).$$ Hence, with Proposition \[LBcomplete\] we confirm that $\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(B) \in L(\widehat{\mathcal{B}})$. Additionally, we deduce that $$\widehat {\mu}_{{_\mathcal{U}}}(\operatorname{sh}_{_{\mathcal{U},\,X}}^{-1}(B)) = \widehat {\mu}_{{_\mathcal{U}}}(\widehat{C}) = \mu(B).$$ We will now prove that any element of $L(\widehat{\mathcal{B}})$ differs from an element of $\widehat{\mathcal{B}}$ by a $\widehat \mu_{{_\mathcal{U}}}-$null set. 
\[LBapproximation\] For any set $\Gamma \in L(\widehat{\mathcal{B}})$ there is $\Lambda \in \widehat{\mathcal{B}}$ such that $\widehat \mu_{{_\mathcal{U}}}(\Gamma \,\Delta \,\Lambda)=0.$ According to the Carathéodory Extension Theorem, we can find an increasing sequence of sets $\Sigma_n \in \widehat{\mathcal{B}}$ and a decreasing sequence of sets $\Upsilon_n \in \widehat{\mathcal{B}}$ such that, for every $n \in \NN$, $$\label{defCnDn1} \Sigma_n \subset \Gamma \subset \Upsilon_n$$ and $$\label{defCnDn2} \widehat {\mu}_{{_\mathcal{U}}}(\Upsilon_n)-\widehat {\mu}_{{_\mathcal{U}}}(\Sigma_n) < \frac 1n.$$ By definition, for each $n \in \mathbb{N}$ there are sequences $(A_{n,k})_{k \,\in\,\mathbb{N}}$ and $(B_{n,k})_{k \,\in\,\mathbb{N}}$ of Borel subsets of $X$ such that $$\Sigma_n=\Big(\prod_{k \, \in \, \mathbb{N}} \,A_{n,k}\Big)_{\diagup \backsim} \quad \quad \text{and} \quad \quad \Upsilon_n=\Big(\prod_{k \, \in \, \mathbb{N}} \,B_{n,k}\Big)_{\diagup \backsim}.$$ Given $n \in \NN$, denote by $K_n$ the set of $k \in \mathbb{N}$ such that $$A_{n-1,k} \subset A_{n,k} \subset B_{n,k} \subset B_{n-1,k} \quad \quad \text{and} \quad \quad \mu(B_{n,k})-\mu(A_{n,k}) < \frac 1n.$$ We observe that $K_n \in \mathcal{U}$ for every $n \in \NN$. Since $\widehat{\mathcal{B}}$ satisfies the finite intersection property stated in Lemma \[le:fip\], we may assume that the sequence $(K_n)_{n \,\in\,\NN}$ is decreasing. Moreover, as all co-finite sets belong to $\mathcal{U}$, we may also suppose that, if $k \in K_n$, then $k \geq n$. By setting $K_0=\NN$ and, for $n \in \mathbb{N}$, $$J_n = K_{n-1} \setminus K_n$$ we obtain a sequence $(J_n)_{n \, \in \, \mathbb{N}}$ of disjoint subsets of positive integers with the property $$\label{defInbyJn} K_n = \bigcup_{m\,>\,n}\,J_m.$$ Define the UP-set $$\Lambda := \Big(\prod_{k \, \in \, \mathbb{N}} \,\Lambda_{k}\Big)_{\diagup \backsim}$$ where $\Lambda_k := B_{m,k}$ for $k \in J_m$. 
We are left to prove that the set $\Lambda$ satisfies $$\label{eq:inclusion} \Sigma_n \subset \Lambda \subset \Upsilon_n \quad \quad \forall \,\,n \in \NN$$ because then both $\Gamma$ and $\Lambda$ lie between $\Sigma_n$ and $\Upsilon_n$ for every $n \in \mathbb{N}$, hence $\Gamma \,\Delta\, \Lambda \subset \Upsilon_n \setminus \Sigma_n$, and the assertion $\widehat \mu_{{_\mathcal{U}}}(\Gamma \,\Delta\, \Lambda)=0$ is a straightforward consequence of \[defCnDn2\]. To prove \[eq:inclusion\], it is enough to show that, for any $n \in \NN$, we have $$M_n = \left\{k \in \NN \colon A_{n,k} \subset \Lambda_k \subset B_{n,k}\right\} \in \mathcal{U}.$$ Let $n \in \NN$ and let $m > n$ be an arbitrary integer. For $k \in J_m$ we have, by definition, $\Lambda_k = B_{m,k}$. Besides, as $B_{m,k} \subset B_{m-1,k}\subset \cdots \subset B_{n,k}$, for $k \in J_m$ we have $$\Lambda_k \subset B_{n,k}.$$ Therefore, $\Lambda_k \subset B_{n,k}$ for any $k \in \bigcup_{m\,>\,n} \, J_m$. So, by \[defInbyJn\], we deduce that $\Lambda_k \subset B_{n,k}$ for every $k \in K_n$. As $K_n \in \mathcal{U}$, we finally obtain $\Lambda \subset \Upsilon_n$. In an analogous way one shows that $\Sigma_n \subset \Lambda$. If we endow the space $\widehat X$ with the quotient topology of the product topology in $\prod_{n \, \in \,\NN}\,X$, it will have the indiscrete topology because any nonempty open set in $\prod_{n \, \in \,\NN}\,X$ depends on only finitely many coordinates. Although with this choice $\widehat X$ would be compact, we are not interested in such a non-Hausdorff space. If, instead, we choose to give $\widehat X$ the so called ultraproduct topology (cf. [@Bankston; @Bankston2]), which is generated by the sets $\Big(\prod_{n \, \in \,\mathbb{N}}\,O_n\Big)_{\diagup \backsim}$ where each $O_n$ is open in $X$, and for which the subsets $\Big(\prod_{n \, \in \,\mathbb{N}}\,F_n\Big)_{\diagup \backsim}$, where $F_n$ is closed in $X$ for every $n$, are closed, then the Borel sets are included in $L(\widehat{\mathcal{B}})$. 
Notice that the $\sigma-$algebra we are considering in $\widehat X$ contains the Borel sets of $\widehat X$. Moreover, by Proposition \[LBapproximation\], given $\Gamma \in L(\widehat{\mathcal{B}})$ there is $\Lambda=\Big(\prod_{n \, \in \,\mathbb{N}}\,A_n\Big)_{\diagup \backsim} \in \widehat{\mathcal{B}}$ such that $\widehat \mu_{{_\mathcal{U}}}(\Gamma \,\Delta \,\Lambda)=0.$ As $\mu$ is regular, for each $A_n$ there exists a closed subset $K_n \subset A_n$ such that $\mu(A_n) - \mu(K_n) < \frac{1}{n}$. Take $\Sigma = \Big(\prod_{n \, \in \,\mathbb{N}}\,K_n\Big)_{\diagup \backsim}$; as $\ulim_n \,\mu(K_n) = \ulim_n \,\mu(A_n)$, we get $$\label{eq:approx-compact} \widehat \mu_{{_\mathcal{U}}}(\Sigma \,\Delta \,\Gamma) = 0.$$ So, although with this topology $\widehat X$ may not be locally compact (cf. [@Bankston]), the probability measure $\widehat \mu_{{_\mathcal{U}}}$ is inner regular. *We observe that the inclusion map $\iota$ may be non-measurable.[^2]* Proof of Theorem \[teo:main\] {#se:prova-teorema-A} ============================= As mentioned previously, for a sequence of measurable maps $f_n: X \to \RR$ we do not necessarily have that $f=\ulim_n f_n$ is a measurable function. However, we can extend $f$ to a map $\widehat f: \widehat X \to \RR$ using the shadow map $\operatorname{sh}_{_{\mathcal{U},\,\RR}}: \widehat{\RR} \to \RR$ by defining $$\label{def:mathfrak f} \widehat f([x_n]) \ = \ \ulim_n \,[f_n(x_n)] \ = \ \operatorname{sh}_{_{\mathcal{U},\,\RR}}([f_n(x_n)])$$ (thus, if we identify $x$ with its equivalence class $[x]$, one has $\widehat f([x])= \ \ulim_n \, f_n(x)$). One advantage of performing this extension is the following. \[measurability\_ulim\_fn\] The map $\widehat f$ is $L(\widehat{\mathcal{B}})-$measurable. Denote by $\mathcal{B}_\RR$ the Borel sets of $\RR$. 
It is straightforward to verify that the map $F:\widehat X \to \widehat{\RR}$ defined by $$F([x_n])=[f_n(x_n)]$$ is a measurable function from $(\widehat X,\,L(\widehat{\mathcal{B}}))$ to $(\widehat{\RR},\, L(\widehat{\mathcal{B}}_\RR))$. Indeed, we have $F^{-1}(\widehat{\mathcal{B}_\RR})\subset \widehat{\mathcal{B}}$ since, for each $n \in \mathbb{N}$, the map $f_n$ is measurable and so $f_n^{-1}(\mathcal{B}_\RR) \subset \mathcal{B}$. To end the proof we are left to take into account that $\operatorname{sh}_{_{\mathcal{U},\,\RR}}$ is measurable (cf. Lemma \[sh-measurability\]) and that $\widehat f= \operatorname{sh}_{_{\mathcal{U},\,\RR}}\,\circ\, F$. Given a $\mu-$integrable map $g: X \to \RR$ and a sequence $(f_n)_{n \, \in \, \mathbb{N}}$ of $\mu-$integrable real-valued functions satisfying $|f_n| \leq g$ for every $n \in \NN$, the sequence $(f_n)_{n \, \in \, \mathbb{N}}$ is $\UU-$integrable. Indeed, conditions (2) and (3) of Subsection \[sse:U-integrable\] are immediate consequences of the domination $|f_n| \leq g$ by a $\mu-$integrable map $g$. \[prop:auxiliary\] If $(f_n)_{n \, \in \, \mathbb{N}}$ is a sequence of real-valued bounded $\UU-$integrable functions, then $\widehat f$ is $\widehat {\mu}_{{_\mathcal{U}}}-$integrable and $$\int_{\widehat X} \,\widehat f \,\,d\,\widehat {\mu}_{{_\mathcal{U}}} \,\,=\, \,\mathcal{U}\text{-}\lim_n \, \int_X \,f_n \,\,d\mu.$$ As usual, we will verify the statement for sequences of characteristic functions of measurable subsets of $X$, then proceed to bounded sequences of simple functions and finally bounded sequences of measurable functions. ### ***Sequences of measurable characteristic maps*** {#sequences-of-measurable-characteristic-maps .unnumbered} We start by considering a sequence of maps $(f_n)_{n \, \in \, \NN} = (\chi_{_{A_n}})_{n \, \in \, \NN}$, where $\chi_{_{A_n}}$ stands for the characteristic function of a measurable subset $A_n$ of $X$. As $\widehat f$ is well defined (cf. 
Subsection \[sse:ultralimits\]), measurable and bounded, it is $\widehat {\mu}_{{_\mathcal{U}}}-$integrable. Our aim is to prove that $$\int_{\widehat X}\,\ulim_n \,[\chi_{_{A_n}}(x_n)] \,\,d\,\widehat {\mu}_{{_\mathcal{U}}}([x_n]) \ = \ \ulim_n\,\int_{X}\,\chi_{_{A_n}}(x)\,\,d\mu(x).$$ Notice that, by definition, if $\Lambda = \Big(\prod_{n\, \in\, \mathbb{N}} A_n\Big)_{\diagup \backsim}$, then $$\label{ulimchi_n} \ulim_n \,[\chi_{_{A_n}}(x_n)] \ = \ \chi_{_\Lambda}([x_n]) \quad \quad \forall \, [\left(x_n\right)_{n \, \in \, \NN}] \in \widehat X.$$ Consequently, by Definition \[def:medida Loeb\], $$\begin{aligned} \label{eq:caracteristic} \int_{\widehat X}\,\ulim_n \,[\chi_{_{A_n}}(x_n)] \,\,d\,\widehat {\mu}_{{_\mathcal{U}}}([x_n]) & = & \int_{\widehat X}\,\chi_{_\Lambda}([x_n])\,\,d\,\widehat {\mu}_{{_\mathcal{U}}}([x_n]) = \widehat {\mu}_{{_\mathcal{U}}}(\Lambda) \nonumber \\ &=& \ulim_n\,\mu(A_n) \nonumber \\ &=& \ulim_n\,\int_{X}\,\chi_{_{A_n}}(x)\,\,d\mu(x).\end{aligned}$$ ### ***Bounded sequences of simple measurable maps*** {#bounded-sequences-of-simple-measurable-maps .unnumbered} We proceed by considering a bounded $\UU-$integrable sequence of simple functions $f_n: X \to \RR$ defined by $$f_n=\sum_{k=1}^{p_n} a_{n,k} \,\chi_{_{A_{n,k}}}$$ where $a_{n,k} \in \RR$, $A_{n,k} \in \mathcal{B}$ and, by assumption (2) of Subsection \[sse:U-integrable\], $$\ulim_n\,\Big(\sum_{k=1}^{p_n} a_{n,k} \,\int_{X}\,\chi_{_{A_{n,k}}}\,\,d\mu\Big) < + \infty.$$ The corresponding ultralimit $\widehat f:\widehat X \to \RR$ is measurable (cf. Lemma \[measurability\_ulim\_fn\]) and bounded, hence $\widehat {\mu}_{_\mathcal{U}}-$integrable. 
Using \[eq:caracteristic\] and the linearity of both the $\ulim$ and the integral operator, we obtain $$\int_{\widehat X} \,\ulim_n\,\Big(\sum_{k=1}^{p_n} a_{n,k}\,[\chi_{_{A_{n,k}}}]\Big)\,\,d\,\widehat {\mu}_{{_\mathcal{U}}} \ = \ \ulim_n\,\Big(\sum_{k=1}^{p_n} a_{n,k} \,\int_{X}\,\chi_{_{A_{n,k}}}\,\,d\mu\Big).$$ ### ***Bounded sequences of measurable maps*** {#sse:bounded .unnumbered} Take now a sequence of bounded measurable and $\UU-$integrable functions $f_n: X\to \RR$, and consider the corresponding ultralimit $\widehat f:\widehat X \to \RR$, which is $\widehat {\mu}_{_\mathcal{U}}-$integrable because it is measurable and bounded. \[simple\_func\_for\_hat\_mu\_integral\] Given a bounded measurable map $\mathfrak{g}: \widehat X \to \RR$ and $\varepsilon>0$, there exists a simple function $\mathfrak{s}^\varepsilon:\widehat X \to \RR$ supported on UP-sets such that $$|\mathfrak{g}(z)-\mathfrak{s}^\varepsilon(z)|< \varepsilon \quad \quad \text{for} \quad \widehat {\mu}_{_\mathcal{U}}-\text{almost every}\quad z \,\in\, \widehat X.$$ Fix $\mathfrak{g}$ and $\varepsilon >0$. There is a simple map $\mathfrak{r}^\varepsilon = \sum_{k=1}^p \, a_k\,\chi_{_{\Gamma_k}}$, where $a_k \in \RR$ and $\Gamma_k \in L(\widehat{\mathcal{B}})$, such that $$|\mathfrak{g}(z)-\mathfrak{r}^\varepsilon(z)|< \varepsilon \quad \quad \text{for}\quad \widehat {\mu}_{_\mathcal{U}}-\text{almost every}\quad z \,\in\, \widehat X.$$ By Proposition \[LBapproximation\], for each $k \in \{1, 2, \cdots, p\}$, we can find $\Upsilon_k \in \widehat{\mathcal{B}}$ such that $\widehat \mu_{{_\mathcal{U}}}(\Gamma_k \,\Delta \,\Upsilon_k)=0$. Rewriting, if needed, the sum that defines $\mathfrak{r}^\varepsilon$, we may assume that, up to a $\widehat \mu_{{_\mathcal{U}}}-$null set, $\widehat X$ is the disjoint union of the sets $\Gamma_k$; that is, $\widehat \mu_{{_\mathcal{U}}}\left(\widehat X \setminus \bigcup_{k=1}^p \,\Gamma_k\right)=0$ and $\widehat \mu_{{_\mathcal{U}}}\left(\Gamma_i \cap \Gamma_j\right)=0$ for $i\not=j$. 
Define $$\Gamma_{p + 1} =\widehat X \quad \quad \text{and} \quad \quad \Omega_1 \ = \ \Gamma_1$$ and, recursively, for $1 \leq k \leq p$, set $$\Omega_{k+1}=\Gamma_{k+1} \setminus \bigcup_{i=1}^k \, \Omega_i.$$ Then, $\widehat \mu_{{_\mathcal{U}}}\left(\Omega_{p+1}\right)=0$ and $\mathfrak{s}^\varepsilon = \sum_{k=1}^p \,a_k\,\chi_{_{\Omega_k}}$ is a simple function supported on UP-sets such that $\mathfrak{r}^\varepsilon(z)=\mathfrak{s}^\varepsilon(z)$ for $\widehat {\mu}_{_\mathcal{U}}-$almost every $z \in \widehat X$ and $\int_{\widehat X} \,\mathfrak{r}^\varepsilon\; d\,\widehat {\mu}_{_\mathcal{U}} \ = \ \int_{\widehat X} \, \mathfrak{s}^\varepsilon \, d\,\widehat {\mu}_{_\mathcal{U}}$. For a simple function $\mathfrak{s}:\widehat X \to \RR$ given by $\mathfrak{s}=\sum_{k=1}^p \,a_k\,\chi_{_{\Lambda_k}}$, where $a_k \in \RR$ for all $k$, $\Lambda_k = \Big(\prod_{n \, \in \,\mathbb{N}}\,A_{k,n}\Big)_{\diagup \backsim}$ and $A_{k,n} \in \mathcal{B}$ for every $n$, we can consider the induced sequence of simple functions $(s_n: X \to \RR)_{n \in \NN}$ by setting $$\label{def:sn} s_n=\sum_{k=1}^p \,a_k\,\chi_{_{A_{k,n}}}.$$ Then, from \[ulimchi\_n\] we obtain, using the linearity properties of the $\ulim$, $$\label{ulim_sim_func_1} \mathfrak{s} \ = \ \ulim_n \,s_n$$ and, applying the result of the previous subsection, we get $$\label{ulim_sim_func_2} \int_{\widehat X} \,\mathfrak{s} \,\,d\,\widehat {\mu}_{_\mathcal{U}} \ = \ \ulim_n\,\int_{X}\,s_n\,d\mu.$$ Take $\varepsilon >0$ arbitrary. 
By Lemma \[simple\_func\_for\_hat\_mu\_integral\], we may find a simple function $\mathfrak{s}^\varepsilon =\sum_{k=1}^p \,a_k\,\chi_{_{\Lambda_k}}$, where $a_k \in \RR$ and $\Lambda_k=\Big(\prod_{\ell \, \in \,\mathbb{N}}\,A_{k,\ell}\Big)_{\diagup \backsim}$ is a UP-set, such that $$\label{eq:aprox} \left|\widehat f([x_n])- \mathfrak{s}^\varepsilon([x_n])\right| < \frac{\varepsilon}{2} \quad \quad \text{for} \quad \widehat {\mu}_{_\mathcal{U}}-\text{almost every}\quad [x_n] \,\in\, \widehat X$$ and $$\label{def_sepsilon} \left|\int_{\widehat X}\,\widehat f\:d\,\widehat {\mu}_{_\mathcal{U}} -\int_{\widehat X}\,\mathfrak{s}^\varepsilon\:d\,\widehat {\mu}_{_\mathcal{U}} \right| < \frac{\varepsilon}{2}.$$ Let $\Big(s^\varepsilon_n\Big)_{n \, \in \, \mathbb{N}}$ be the sequence of simple functions induced by $\mathfrak{s}^\varepsilon$ as explained in \[def:sn\], obeying \[ulim\_sim\_func\_1\] and \[ulim\_sim\_func\_2\]. \[f\_n\_s\_n\_relat\] $\quad \ulim_n \,\int_X\,\left|f_n - s^\varepsilon_n\right|\:d\mu \ < \ \frac{\varepsilon}{2}$. Suppose this is not true. Then the set $$E_n\ = \ \left\{x \in X \colon \left|f_n(x)- s^\varepsilon_n(x)\right| \geq \frac{\varepsilon}{2} \right\}$$ satisfies $$\ulim_n\,\mu(E_n)>0.$$ Therefore, $\widehat {\mu}_{_\mathcal{U}}(E)>0$ where $E$ is the UP-set $\Big(\prod_{n \, \in \, \mathbb{N}}\,E_n\Big)_{\diagup \backsim}$. But, from the definition of $E_n$, we deduce that, for every $[x_n] \in E$, $$\left|\widehat f([x_n])- \mathfrak{s}^\varepsilon([x_n])\right| \geq \frac{\varepsilon}{2}$$ which contradicts \[eq:aprox\]. 
As an immediate consequence of Lemma \[f\_n\_s\_n\_relat\], we have $$\label{approxfn_to_sn} \ulim_n\,\left|\int_X f_n\:d\mu - \int_X s^\varepsilon_n\:d\mu\right| = \ulim_n \,\left|\int_X (f_n - s^\varepsilon_n)\:d\mu\right| \leq \ulim_n \,\int_X \left|f_n - s^\varepsilon_n\right|\:d\mu < \frac{\varepsilon}{2}.$$ On the other hand, by the continuity of the absolute value map, $$\begin{aligned} \ulim_n\,\left|\int_X f_n\:d\mu-\int_X s^\varepsilon_n\:d\mu\right| & = & \left|\ulim_n\,\int_X f_n\:d\mu-\ulim_n\,\int_X s^\varepsilon_n\:d\mu\right| \nonumber \\ &=& \left|\ulim_n\,\int_X f_n\:d\mu-\int_{\widehat X} \mathfrak{s}^\varepsilon\:d\,\widehat {\mu}_{_\mathcal{U}}\right|.\end{aligned}$$ Applying \[def\_sepsilon\] and \[approxfn\_to\_sn\], we finally get $$\left|\ulim_n\,\int_X f_n\:d\mu-\int_{\widehat X} \widehat f\:d\,\widehat {\mu}_{_\mathcal{U}}\right| \ < \ \frac{\varepsilon}{2}+\frac{\varepsilon}{2} \ = \ \varepsilon.$$ Finally, as $\varepsilon>0$ may be chosen arbitrarily small, we conclude that $$\ulim_n\,\int_X f_n\:d\mu \ = \ \int_{\widehat X} \widehat f\:d\,\widehat {\mu}_{_\mathcal{U}}.$$ We proceed to prove the assertion of Theorem \[teo:main\] about dominated sequences. Dominated sequences of measurable maps -------------------------------------- The natural decomposition $f_n=f_n^+ - f_n^-$, with $f_n^+ = \max\,\{f_n,0\}$ and $f_n^- = \max\,\{-f_n,0\}$, reduces the proof to the case of a $\UU-$integrable sequence of non-negative $\mu$-integrable functions $f_n: X \to \RR$ dominated by a $\mu-$integrable function $g: X\to \RR$. Given $M \in \NN$, let $f_n \wedge M: X \to \RR$ be the function $$(f_n \wedge M)(x)= \left\{\begin{array}{ll} f_n(x) & \text{if } \,\,f_n(x)\leq M \\ \\ M & \text{otherwise}. \end{array} \right.$$ In an analogous way, define $g \wedge M$. Notice that, for every $M \in \NN$, both maps $f_n \wedge M$ and $g \wedge M$ are measurable and $\mu-$integrable. 
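The truncation operator can be tried out on a toy discrete measure space. The sketch below (our own example, with hypothetical values and weights, not taken from the paper) checks that $\int (f \wedge M)\,d\mu$ increases to $\int f\,d\mu$ as $M \to +\infty$, which is exactly the monotone-convergence behaviour the truncation is introduced to exploit.

```python
# Toy check of truncation f ∧ M on a discrete measure space (our own
# example values): the truncated integrals increase monotonically to the
# full integral, as the Monotone Convergence Theorem predicts.
values = [0.5, 3.0, 10.0, 40.0]     # values of a non-negative function f
weights = [0.4, 0.3, 0.2, 0.1]      # point masses of a probability measure mu

def integral(f_vals):
    """Integral of a function (given by its values) against mu."""
    return sum(v * w for v, w in zip(f_vals, weights))

full = integral(values)                                            # = 7.1
truncs = [integral([min(v, M) for v in values]) for M in (1, 5, 20, 100)]
print(truncs, full)   # truncated integrals approach 7.1 from below
```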
Besides, as we assume that $\widehat f$ is defined, we have $$\widehat{f \wedge M} = \ulim_n \,(f_n \wedge M) = \widehat f \wedge M.$$ We already know that the ultralimit $\widehat f: \widehat X \to \RR$ of the sequence $(f_n)_{n \, \in \, \NN}$ is measurable (cf. Lemma \[measurability\_ulim\_fn\]). Moreover, the sequence $(f_n \wedge M)_{n \, \in \, \NN}$ is $\UU-$integrable since $$\begin{aligned} \mathcal{U} \text{-} \lim_n \,\int_X\,|f_n \wedge M|\,d\mu &\leq& M \\ \mathcal{U} \text{-} \lim_n \,\mu(B_n)=0 \quad &\Rightarrow& \quad \mathcal{U} \text{-} \lim_n \, \int_{B_n}\,|f_n \wedge M|\,d\mu \,\,\leq\,\, \mathcal{U} \text{-} \lim_n \,M\mu(B_n)=0.\end{aligned}$$ Therefore, by Proposition \[prop:auxiliary\], $$\label{dominated_result_truncated} \ulim_n\,\int_X (f_n \wedge M)\:d\mu \ = \ \int_{\widehat X} (\widehat f \wedge M)\:d\,\widehat {\mu}_{_\mathcal{U}} \quad \quad \forall \,\,M \in \NN.$$ Besides, by the Monotone Convergence Theorem, $$\int_{\widehat X} \widehat f\:d\,\widehat {\mu}_{_\mathcal{U}} = \lim_{M \to +\infty}\,\int_{\widehat X} (\widehat f \wedge M)\:d\,\widehat {\mu}_{_\mathcal{U}}$$ thus, taking the limit as $M$ goes to $+\infty$ on both sides of \[dominated\_result\_truncated\], we obtain $$\lim_{M \to +\infty} \,\ulim_n\,\int_X (f_n \wedge M)\:d\mu \ = \ \lim_{M \to +\infty}\, \int_{\widehat X} (\widehat f \wedge M)\:d\,\widehat {\mu}_{_\mathcal{U}} = \int_{\widehat X} \widehat f\:d\,\widehat {\mu}_{_\mathcal{U}}.$$ To finish the proof of Theorem \[teo:main\] we are left to show the following property. 
Under the assumptions of Theorem \[teo:main\], $$\lim_{M \to +\infty} \,\ulim_n\,\int_X (f_n \wedge M)\:d\mu \ = \ \ulim_n\,\int_X f_n\:d\mu.$$ Consider, for each $M \in \NN$, the sets $$A_{n,M}=f_n^{-1}([M,\infty[) \quad \quad \text{and} \quad \quad B_{M}=g^{-1}([M,\infty[).$$ Notice that, as each $f_n$ is $\mu-$integrable, we have for every $n \in \NN$ $$\lim_{M \to +\infty} \mu(A_{n,M})=0= \lim_{M \to +\infty} \,\int_{A_{n,M}}f_n\;d\mu$$ and, again by the Monotone Convergence Theorem, $$\lim_{M \to +\infty}\,\int_X (f_n \wedge M)\:d\mu =\int_X f_n\:d\,\mu.$$ Observe now that, for every $n, \,M \in \NN$, $$\label{eq:fn wedge M} \int_X (f_n \wedge M)\:d\mu \ = \ \int_X f_n\:d\mu -\int_{A_{n,M}} f_n\;d\mu + M\mu(A_{n,M})$$ and so, taking into account that $f_n \geq 0$ and $A_{n,M} \subset B_M$, we conclude that $$\int_X (f_n \wedge M)\:d\mu \leq \int_X f_n\:d\mu + M\mu(A_{n,M}) \leq \int_X f_n\:d\mu + M\mu(B_M).$$ Consequently, since $\lim_{M \to +\infty} \, M\mu(B_M)=0$ due to the $\mu-$integrability of $g$, we obtain $$\begin{aligned} \label{eq:metade da prova} \lim_{M \to +\infty} \,\ulim_n\,\int_X (f_n \wedge M)\:d\mu & \leq & \lim_{M \to +\infty} \,\ulim_n\, \Big(\int_X f_n\:d\mu + M\mu(B_M)\Big) \nonumber\\ &=& \ulim_n\, \int_X f_n\:d\mu + \lim_{M \to +\infty} \,M\mu(B_M) \nonumber \\ &=& \ulim_n\, \int_X f_n\:d\mu.\end{aligned}$$ Conversely, from \[eq:fn wedge M\] we also deduce that $$\int_X (f_n \wedge M)\:d\mu \ \geq \ \int_X f_n\:d\mu -\int_{A_{n,M}} f_n\;d\mu$$ thus, as $0 \leq f_n \leq g$ and $A_{n,M} \subset B_M$, $$\begin{aligned} \int_X (f_n \wedge M)\:d\mu &\geq& \int_X f_n\:d\mu -\int_{A_{n,M}} f_n\;d\mu \nonumber \\ &\geq& \int_X f_n\:d\mu -\int_{B_M} f_n\;d\mu \\ &\geq& \int_X f_n\:d\mu -\int_{B_M} g\;d\mu.\end{aligned}$$ Therefore, $$\begin{aligned} \label{eq:a outra metade da prova} \lim_{M \to +\infty} \,\ulim_n\,\int_X (f_n \wedge M)\:d\mu &\geq& \lim_{M \to +\infty} \,\ulim_n\, \Big(\int_X f_n\:d\mu -\int_{B_M} g\;d\mu\Big) \nonumber \\ &=& \ulim_n\, \int_X 
f_n\:d\mu.\end{aligned}$$ The proof ends by considering both \[eq:metade da prova\] and \[eq:a outra metade da prova\]. *The key ingredient of the previous argument is the uniformity in the variable $n$ of the $\UU-$limit of both sequences $(\mu(A_{n,M}))_{n,\,M \, \in \, \NN}$ and $(\int_{A_{n,M}} f_n\;d\mu)_{n,\,M \, \in \, \NN}$ as $M$ goes to $+\infty$. Thus, Theorem \[teo:main\] is still valid if, instead of domination, we assume any other property of the sequence $(f_n)_{n \, \in \, \NN}$ which ensures that the previous $\UU-$limits are uniform in $n$. See Section \[se:proof-of-Theorem-B\].* Proof of Theorem \[teo:second-main\] {#se:proof-of-Theorem-B} ==================================== As in the previous section, we will assume that $f_n \geq 0$ for every $n \in \NN$. To prove Theorem \[teo:second-main\] we just need to conclude that the $\UU-$integrability (cf. Subsection \[sse:U-integrable\]) of $(f_n)_{n \, \in \, \NN}$ implies that the $\UU-$limit, as $M$ goes to $+\infty$, of both sequences $(\mu(A_{n,M}))_{n,\,M \, \in \, \NN}$ and $(\int_{A_{n,M}} f_n\;d\mu)_{n,\,M \, \in \, \NN}$ is $0$ uniformly in $n$. Concerning the first sequence, we notice that the assumption (2) of the definition of $\UU-$integrability guarantees that there exists $K>0$ such that $$\Big\{n \in \NN \colon \,\int_X \,f_n \,d\mu < K\Big\} \quad \in \,\,\mathcal{U}.$$ Therefore, for the elements of this set we have $$0 \leq \mu(A_{n,M}) \leq \frac{\int_X \,f_n \, d\mu}{M} < \frac{K}{M}$$ and so $\ulim_M \, \mu(A_{n,M}) = 0$ uniformly in $n$. From the $\mu-$integrability assumption of each $f_n$ we know that $\lim_{M \, \to \, +\infty} \,\int_{A_{n,M}} f_n\;d\mu=0$ for every $n \in \NN$. Given $n \in \NN$ and $\varepsilon > 0$, consider the set $$W_n \,:=\,\Big\{M \in \NN \colon \, \int_{A_{n,M}} f_n \; d\mu < \varepsilon\Big\}.$$ It turns out that each $W_n$ is a co-finite set since $A_{n,M+1} \subset A_{n,M}$ and so, as $f_n \geq 0$, if $M \in W_n$ then $M+1 \in W_n$. 
This means that $W_n = [\gamma_n,+\infty[$ for some $\gamma_n \in \NN$ which is the least element of $W_n$. We note that one cannot have $\ulim_n \,\gamma_n=+\infty$, otherwise $\ulim_n \,(\gamma_n-1) = +\infty$ although, by the minimality of $\gamma_n$, we must also have $$\Big\{n \in \NN \colon \, \gamma_n-1 \in W_n \Big\} \quad \notin \,\,\UU.$$ Yet, these two properties contradict the condition (3) of the definition of $\UU-$integrability of $(f_n)_{n}$. Indeed, for every sequence $(L_n)_{n \, \in \, \NN}$ of elements of $\NN$ satisfying $\lim_{n \, \to \, +\infty}\, L_n = + \infty$, we have $\lim_{n \, \to \, +\infty}\, \mu(A_{n,L_n}) =0$, which, due to condition (3), yields $\ulim_n \int_{A_{n,L_n}} f_n \; d\mu = 0$. Therefore, there exists $M_0 \in \NN$ such that $\ulim_n \,\gamma_n = M_0$. Consequently, $$\Big\{n \in \NN \colon \, M \in W_n \quad \forall \, M \geq M_0 \Big\} \quad \in \,\ \UU.$$ This is the uniformity we were looking for. Changing the ultrafilter {#se:proof-of-Corollary-B} ======================== Consider a compact metric space $X$, a Borel probability measure $\mu$ in $X$, a measurable map $T: X \to X$, a non-principal ultrafilter $\mathcal{U}$ in $\NN$ and a measurable bounded function $\varphi: X \to \RR$. We will show that there is a Borel shift-invariant probability measure $\eta_{_\mathcal{U}}$ in $\beta \NN$ such that, for every $x \in X$, the space mean in $\beta \NN$ given by $$\int_{\beta \NN}\,\plim_n \, \Big(\frac{\varphi(x) + \varphi(T(x)) + \cdots + \varphi(T^{n-1}(x))}{n}\Big) \,d\eta_{_\mathcal{U}}(p)$$ is well defined. A natural dynamics in $\beta \NN$ is determined by the extension to the Stone-\v{C}ech compactification $\beta \NN$ of the map $s\colon \mathbb{N} \to \mathbb{N}$ such that $s(n)=n+1$. 
More precisely, we have defined a continuous map (which we will also call the shift) $$\begin{aligned} S \colon \beta \NN \quad &\to& \quad \beta \NN \\ p \quad &\mapsto& \quad S(p) = \{A \colon A - 1 \in p\}\end{aligned}$$ where $A-1=\{n \in \NN \colon n+1 \in A\}.$ It is easy to verify that $S$ commutes with the map $\sigma \colon \ell_\infty(\mathbb{R}) \to \ell_\infty(\mathbb{R})$ given by $\sigma\left((b_n)_{n \, \in \, \mathbb{N}}\right) = (b_{n+1})_{n \, \in \, \mathbb{N}}$; that is, given a bounded sequence $(b_n)_{n \, \in \, \NN}$, we have $\splim_n \,\,b_n = \plim_n \,\,b_{n+1}.$ Therefore, by the linearity of the ultralimits we have, for every $k \in \NN$, $$\label{eq:ultra-average} \frac{1}{k} \,\,\Big(p + S(p) + \cdots + S^{k-1}(p)\Big)\text{-}\lim_n \,\,b_n = \plim_n \,\,\Big(\frac{b_n + b_{n+1}+ \cdots + b_{n+k-1}}{k}\Big)$$ where the left-hand side stands for the average $\frac{1}{k}\,\sum_{i=0}^{k-1}\, S^i(p)\text{-}\lim_n \,b_n$. The space $C^0(\beta \NN)$ of all continuous maps $f \colon \beta \NN \to \mathbb{R}$ (which have compact support) is isometrically isomorphic to the space $\ell_\infty(\mathbb{R})$ of bounded sequences of $\mathbb{R}$. Indeed, if $\tau \colon \mathbb{N} \to \beta \NN$ is the inclusion that takes $n_0 \in \mathbb{N}$ to the principal ultrafilter $\mathcal{U}_{n_0}$, then the maps $$\begin{aligned} G: \ell_\infty(\mathbb{R})\, \to\, C^0(\beta \NN) \quad &\rightarrowtail & \quad G\Big(\overline{b}:=(b_n)_{n \, \in \, \mathbb{N}}\Big) = \psi_{\overline{b}}, \,\,\,\text{ where } \,\,\,\psi_{\overline{b}}\,(p) = \plim_n\,b_n \\ H: C^0(\beta \NN)\, \to \,\ell_\infty(\mathbb{R}) \quad &\rightarrowtail & \quad H(\psi) = \psi \circ \tau\end{aligned}$$ are linear norm-preserving isomorphisms between the two spaces. 
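For principal ultrafilters everything in the previous paragraph is computable, which gives a quick sanity check of the identity $\splim_n \, b_n = \plim_n \, b_{n+1}$. The sketch below (our own encoding, not from the paper: the principal ultrafilter $\mathcal{U}_{n_0}$ is represented by its generator $n_0$) uses the fact that $S(\mathcal{U}_{n_0}) = \mathcal{U}_{n_0+1}$, so the shifted ultralimit picks up the next term of the sequence.

```python
# Principal ultrafilter U_{n0} encoded by its generator n0 (our encoding).
def ulim(n0, b):
    """U_{n0}-lim of a sequence b: N -> R is simply b(n0)."""
    return b(n0)

def shift(n0):
    """S(U_{n0}) = {A : A - 1 in U_{n0}} = U_{n0 + 1}."""
    return n0 + 1

b = lambda n: (-1) ** n          # a bounded sequence
for n0 in range(10):
    # S(p)-lim_n b_n  ==  p-lim_n b_{n+1}
    assert ulim(shift(n0), b) == ulim(n0, lambda n: b(n + 1))
print("shift identity verified on principal ultrafilters")
```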
Having fixed the ultrafilter $\mathcal{U} \in \beta \NN$, consider the operator $\mathcal{L} \colon C^0(\beta \NN) \to \RR$ which assigns to each $\psi \in C^0(\beta \NN)$ (which, after the identification of $C^0(\beta \NN)$ with $\ell_\infty(\mathbb{R})$, may be seen as a bounded sequence $(a_n)_{n\, \in \,\mathbb{N}}$) the real number $$\mathcal{L}(\psi) = \ulim_n \,\,\Big(\frac{a_1 + a_2 + \cdots + a_{n}}{n}\Big).$$ The operator $\mathcal{L}$ is linear, positive, and $\mathcal{L}(\textbf{1})=1$. Moreover, given $\psi \in C^0(\beta \NN)$, the map $\psi \circ S$ is represented by the bounded sequence $(a_{n+1})_{n\, \in\, \mathbb{N}}$, and so $$\begin{aligned} \mathcal{L}(\psi \circ S) &=& \ulim_n \,\,\Big(\frac{a_2 + a_3 + \cdots + a_{n+1}}{n}\Big) \\ &=& \ulim_n \,\,\Big(\frac{a_1 + a_2 + \cdots + a_{n}}{n} - \frac{a_1}{n} + \frac{a_{n+1}}{n}\Big) \\ &=& \ulim_n \,\,\Big(\frac{a_1 + a_2 + \cdots + a_{n}}{n} \Big) = \mathcal{L}(\psi).\end{aligned}$$ Therefore, by the Riesz-Markov-Kakutani Representation Theorem there is a unique regular Borel probability measure $\eta_{_\mathcal{U}}$ on $\beta \NN$ such that $$\mathcal{L}(\psi) = \int_{\beta \NN}\,\psi\,d\eta_{_\mathcal{U}} \quad \quad \forall \,\,\psi \,\in \,C^0(\beta \NN).$$ For instance, if we take $x_0 \in X$, a bounded map $\varphi \colon X \to \RR$ and the bounded sequence $$(a_n)_{n \, \in \, \NN}\,\,:=\,\, \Big(\varphi(T^{n}(x_0))\Big)_{n \, \in \, \NN}$$ then we conclude that $$\ulim_n \,\,\Big(\frac{\varphi(x_0) + \varphi(T(x_0)) + \cdots + \varphi(T^{n-1}(x_0))}{n}\Big) = \int_{\beta \NN}\,\plim_n \,\varphi(T^n(x_0)) \,d\eta_{_\mathcal{U}}(p).$$ We recall that the map $p \in \beta \NN \mapsto \plim_n \,\varphi(T^n(x_0))$ is the continuous Stone-\v{C}ech extension of the continuous map $n \in \NN \mapsto \varphi(T^n(x_0))$, and so it is $\eta_{_\mathcal{U}}-$integrable. 
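When the Cesàro averages that $\mathcal{L}$ acts on converge classically, every ultrafilter limit, and hence $\mathcal{L}(\psi)$ itself, equals the ordinary limit. The sketch below (our own example, with hypothetical choices not from the paper: the uniquely ergodic irrational rotation $T(x) = x + \alpha \bmod 1$ with $\varphi(x) = x$) illustrates this: the Birkhoff averages converge for every starting point to $\int_0^1 x \, dx = 1/2$, independently of the ultrafilter.

```python
import math

# Birkhoff averages for the irrational rotation T(x) = x + alpha (mod 1)
# with observable phi(x) = x (an illustrative example of ours).  Unique
# ergodicity makes the averages converge for every starting point, so all
# ultralimits agree with the classical limit 1/2.
alpha = math.sqrt(2) - 1
x, n = 0.1, 200_000

total = 0.0
for _ in range(n):
    total += x                  # phi(x) = x
    x = (x + alpha) % 1.0       # apply T
birkhoff_average = total / n
print(birkhoff_average)         # close to 0.5
```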
An important consequence of the way the probability measure $\eta_{_\mathcal{U}}$ was obtained is the fact that $\eta_{_\mathcal{U}}$ is $S-$invariant. Indeed, in normal Hausdorff spaces (such as the compact Hausdorff $\beta \NN$) the characterization of invariant probabilities may be done using continuous maps (cf. [@Walters Theorem 6.2]); and, for every $\psi \in C^0(\beta \NN)$, $$\int_{\beta \NN}\,\psi \circ S\,d\eta_{_\mathcal{U}} = \mathcal{L}(\psi \circ S) = \mathcal{L}(\psi) = \int_{\beta \NN}\,\psi\,d\eta_{_\mathcal{U}}.$$ Consequently, by Birkhoff's Ergodic Theorem, given an $L(\widehat{\mathcal{B}})-$measurable and $\eta_{_\mathcal{U}}-$integrable map $\psi: \beta \NN \to \RR$, the sequence of averages $$\Big(\frac{\psi(p) + \psi(S(p)) + \psi(S^2(p)) + \cdots + \psi(S^{n-1}(p))}{n}\Big)_{n \, \in \, \mathbb{N}}$$ converges at $\eta_{_\mathcal{U}}$ almost every $p \in \beta \NN$, thus defining an $L(\widehat{\mathcal{B}})-$measurable and $\eta_{_\mathcal{U}}-$integrable map $\widetilde{\psi}$ such that $\widetilde{\psi}\circ S=\widetilde{\psi}$.
Moreover, $\int_{\beta \NN}\,\widetilde{\psi} \,d\eta_{_\mathcal{U}} = \int_{\beta \NN}\,\psi \,d\eta_{_\mathcal{U}}$, that is, $$\int_{\beta \NN}\,\lim_n \frac{\psi(p) + \psi(S(p)) + \cdots + \psi(S^{n-1}(p))}{n} \,d\eta_{_\mathcal{U}}(p) = \int_{\beta \NN}\,\psi \,d\eta_{_\mathcal{U}}.$$ Considering the unique bounded sequence $(a_m)_{m \in \mathbb{N}}$ which represents $\psi$ and equation \[eq:ultra-average\], the previous equality may be rewritten as $$\int_{\beta \NN}\,\lim_n \frac{\Big(\plim_m \, a_m\Big) + \cdots + \Big(\plim_m \,a_{m + n-1}\Big)}{n} \,d\eta_{_\mathcal{U}}(p) = \int_{\beta \NN}\,\plim_m \, a_m \,d\eta_{_\mathcal{U}}(p).$$ In particular, if $x_0 \in X$, $\varphi \colon X \to \RR$ is a bounded map and we consider the bounded sequence $$(a_m)_{m \, \in \, \NN}\,:=\,\Big(\frac{\varphi(x_0) + \varphi(T(x_0)) + \cdots + \varphi(T^{m-1}(x_0))}{m}\Big)_{m \, \in \, \NN}\,:=\,\psi_{\{x_0,\, \varphi,\, T\}} \,\,\in C^0(\beta \NN)$$ then we deduce that the space mean in $\beta \NN$ of the ultralimits of the Birkhoff averages along the orbit of $x_0$ by $T$, namely $$\int_{\beta \NN}\,\plim_m \, \Big(\frac{\varphi(x_0) + \varphi(T(x_0)) + \cdots + \varphi(T^{m-1}(x_0))}{m}\Big) \,d\eta_{_\mathcal{U}}(p)$$ is well defined and is given by $\int_{\beta \NN}\,\widetilde{\psi}_{\{x_0,\, \varphi,\, T\}}(p) \,d\eta_{_\mathcal{U}}(p)$, where $\widetilde{\psi}_{\{x_0,\, \varphi,\, T\}}$ stands for the Birkhoff limit of the averages of the observable $\psi_{\{x_0,\, \varphi,\, T\}} \in C^0(\beta \NN)$ with respect to the dynamics $S$ and the probability measure $\eta_{_\mathcal{U}}$, that is, $$\begin{aligned} \widetilde{\psi}_{\{x_0,\, \varphi,\, T\}}(p)=\lim_n \,\frac{\Big(\plim_m \, a_m\Big) + \Big(\plim_m \, a_{m+1}\Big) + \cdots + \Big(\plim_m \,a_{m + n-1}\Big)}{n}\end{aligned}$$ at $\eta_{_\mathcal{U}}$ almost every $p \in \beta \NN$.
Example ======= Take $X=[0,1]$ and consider the dynamics $T(x) = \frac{x}{2}$ if $x \neq 0$, $T(0)=0$, the Dirac measure $\mu=\delta_0$ supported on $\{0\}$ and the map $\varphi = \text{Identity}_{[0,1]}$. Observe that the non-wandering set of $T$ is $\{0\}$ and that $\mathcal{U} \text{-} \lim_n\,\,\,\frac 1 n \,\sum_{j=0}^{n-1}\,T^j(x)=0$ for every $x \in \,[0,1]$ and any ultrafilter $\UU$, since the sequence $(T^n(x))_{n \in \NN}$ converges to $0$. Therefore, by Corollary \[cor:main\], the map $\widehat \varphi_{_\mathcal{U}}$ is $0$ at $\widehat \mu_{_\mathcal{U}}$ almost every point of $\widehat{[0,1]}$, because $\widehat \varphi_{_\mathcal{U}} \geq 0$ and $$\int_{_{\widehat X}}\,\widehat \varphi_{_\mathcal{U}}\,d\,\widehat \mu_{_\mathcal{U}} \,= \,\ulim_n \,\frac 1 n \,\sum_{j=0}^{n-1}\, \int_{_{X}} \,\varphi\circ T^j \, d\mu \,=\, \mathcal{U} \text{-} \lim_n\,\,\,\frac 1 n \,\sum_{j=0}^{n-1}\,T^j(0) = 0.$$ We note that $\int_X \,\varphi \,\,d\mu = \varphi(0)=0$ as well. If we consider instead the Dirac measure $\mu=\delta_1$ supported on $\{1\}$, then the previous argument also proves that, for every ultrafilter $\UU$, the map $\widehat \varphi_{_\mathcal{U}}$ is $0$ at $\widehat \mu_{_\mathcal{U}}$ almost every point of $\widehat{X}$. However, now we get $\int_X \,\varphi \,\,d\mu = \varphi(1)=1$. [10]{} C.D. Aliprantis, O. Burkinshaw. *Principles of Real Analysis.* Academic Press, San Diego, 3rd edition, 1998. R.M. Anderson. *Star-finite representations of measure spaces.* Trans. Amer. Math. Soc. 271:2 (1982) 667–687. P. Bankston. *Ultraproducts in topology.* General Top. Appl. 7 (1977) 283–308. P. Bankston. *Topological reduced products via good ultrafilters.* General Top. Appl. 10:2 (1979) 121–137. M. Carvalho, F. Moreira. *A note on the Ergodic Theorem.* Qual. Theory Dyn. Syst. 13:2 (2014) 253–268. S. Cerreia-Vioglio, F. Maccheroni, M. Marinacci. *Ergodic theorems for lower probabilities.* Proc. Amer. Math. Soc. 144:8 (2016) 3381–3396. N. Cutland. 
*Nonstandard measure theory and its applications.* Bull. London Math. Soc. 15 (1983) 529–589. R. Goldblatt. *Lectures on the Hyperreals.* Springer Verlag, 1998. Y. Katznelson, B. Weiss. *A simple proof of some ergodic theorems.* Israel J. Math. 42:4 (1982) 291–296. S. Kiriki, T. Soma. *Takens’ last problem and existence of non-trivial wandering domains.* Adv. Math. 306 (2017) 524–588. U. Krengel. *Ergodic Theorems.* De Gruyter Studies in Mathematics 6, 1985. K. Petersen. *Ergodic Theory.* Cambridge Studies in Advanced Mathematics 2, Cambridge University Press, 1989. W. Sierpinski. *Fonctions additives non complétement additives et fonctions non mesurables.* Fund. Math. 30:1 (1938) 96–99. F. Takens. *Orbits with historic behaviour, or non-existence of averages.* Nonlinearity 21 (2008) T33–T36. P. Walters. *An Introduction to Ergodic Theory.* Springer-Verlag, 1982. S. Willard. *General Topology.* Addison-Wesley, 1970. [^1]: We thank Andreas Blass for calling our attention to this reference. [^2]: https://terrytao.wordpress.com/2008/10/14/non-measurable-sets-via-non-standard-analysis/
--- abstract: 'First-principles calculations of polar semiconductor nanorods reveal that their dipole moments are strongly influenced by Fermi level pinning. The Fermi level for an isolated nanorod is found to coincide with a significant density of electronic surface states at the end surfaces, which are either mid-gap states or band-edge states. These states pin the Fermi level, and therefore fix the potential difference across the rod. We provide evidence that this effect can have a determining influence on the polarity of nanorods, with consequences for the way a rod responds to changes in its surface chemistry, the scaling of its dipole moment with its size, and the dependence of polarity on its composition.' author: - 'Philip W.' - 'Nicholas D. M.' - Paul - 'Peter D.' title: Fermi level pinning can determine polarity in semiconductor nanorods --- Introduction {#sec:introduction} ============ Semiconductor nanostructures in solution are a very exciting class of material due to our growing ability to manipulate their shapes and sizes, and the superstructures into which they assemble, to produce a wide range of technologically useful properties.[@smallisdifferent; @X.Michalet01282005; @NirTessler02222002; @kazesetal; @Wendy; @Nieetal; @shevchenko] Nanocrystals of binary semiconductors, such as those of ZnO, have been observed to exhibit very large dipole moments[@PhysRevLett.79.865; @shim:6955; @PhysRevLett.90.097402] which affect their internal electronic structure (and therefore their optical properties) as well as their interactions with their environment. 
The latter may influence the kinetics of self-assembly and the stability of the structures formed.[@talapin] A detailed understanding of the factors contributing to this polarity in nanocrystals has proven elusive[@Goniakowski] for two main reasons: first, many factors are involved, ranging from surface chemistry, to the non-centrosymmetric nature of the underlying crystal, to quantum confinement, to long-range electrostatics, to interactions with the solvent and considerations of thermodynamic stability; and second, the limitations of current experimental techniques, which do not allow the level of control over, or knowledge of, the state of the system that is necessary to disaggregate these factors. Computer simulation is an ideal tool for addressing this problem.[@rabani:5355; @rabani:1493; @Shanbhag:2006ix; @wang] Recent developments in linear-scaling density-functional theory (LS-DFT) make accurate quantum-mechanical methods applicable to nanocrystals of realistic sizes. In our earlier work[@PhysRevB.83.241402] we presented LS-DFT calculations, performed with the <span style="font-variant:small-caps;">onetep</span> code,[@skylaris:084119; @onetep-forces] of the ground-state charge distributions in GaAs nanorods of sizes comparable to those found in experiment. We found that the dipole moment of such a rod depends strongly on its surface termination, particularly that of its polar surfaces, with full hydrogen termination of the polar surfaces strongly reversing its direction. A common feature of all of the nanorods studied was that the Fermi energy was found to coincide with a significant density of states located at the end surfaces of the rods. [*Fermi level pinning*]{} (FLP) is known to occur in semi-infinite semiconductor surfaces when states are found at the Fermi energy, and in this work we show that a finite-surface version of FLP plays a crucial role in determining the polar characteristics of such nanorods.
In section \[sec:methods\] we outline the simulation details and methodology. In section \[sec:FLP\] we show that mid-gap states on the end surfaces of the rod can pin the Fermi energy, which in turn determines the potential difference across the nanorod, and therefore its dipole moment. In section \[sec:ionic-charge\] we take up an important observation from our previous work, namely that nanorods terminated on their ends with ions of very different ionic charge can nevertheless have very similar dipole moments. This observation is particularly problematic for simple ionic or bond-electron counting models,[@Goniakowski] which can fail to predict the dipole moments as a result. These models are not able to explain the magnitudes of the differences in polarity between nanorods of different surface terminations. We show that our FLP model can rationalize these observations. In section \[sec:NR-size\] we calculate the variation of nanorod polarization with rod length and cross-sectional area. The dipole moment is found to increase with nanorod size in a manner consistent with maintaining a ‘pinned’ Fermi level at the end polar surfaces of the nanorod. Finally, in section \[sec:NR-type\] we study the variation in polarity between nanorods of different compositions (specifically GaAs, GaN and AlN), again illustrating the determining role of FLP for the rod polarizations. 
\[sec:methods\] Simulation Methodology ====================================== This work uses linear-scaling density-functional theory (LS-DFT) as implemented in the <span style="font-variant:small-caps;">onetep</span> code.[@skylaris:084119; @onetep-forces] This method combines the benefits of linear scaling, in that computational resources for calculating the total energy of an $N$-atom system scale as $O(N)$, with the accuracy of plane-wave methods.[@onetep-pwaccuracy] In <span style="font-variant:small-caps;">onetep</span> the single-particle density matrix is represented by an optimized set of non-orthogonal, strictly localized, Wannier-like orbitals $\{\phi_{\alpha}(\mathbf{r})\}$, and is written $$\rho(\mathbf{r},\mathbf{r'})=\sum_{\alpha\beta}\phi_{\alpha}(\mathbf{r})K^{\alpha\beta}\phi_{\beta}^{*}(\mathbf{r'})$$ where $K^{\alpha\beta}$ is the *density kernel* representing a generalization of the occupation numbers to a non-orthogonal basis. Both the local orbitals and the density kernel are optimized during the calculation. The three tuneable parameters controlling the quality of the representation are:[@0953-8984-17-37-012] the ‘plane-wave’ cutoff energy $E_{\text{cut}}$, defining the spacing of the grid on which the local orbitals are represented; the local-orbital cutoff radius $R_{\phi}$ for each atomic species; and the density kernel cutoff radius $R_{K}$. Exchange and correlation is treated within the local density approximation (LDA).
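As a minimal illustration of this representation (not of the <span style="font-variant:small-caps;">onetep</span> algorithm itself), the sketch below builds the density from Gaussian stand-in orbitals and a toy kernel on a 1D grid, and verifies the standard non-orthogonal-basis identity $N = \mathrm{Tr}(KS)$ for the electron number, with $S$ the orbital overlap matrix. All numerical choices are placeholders.

```python
import numpy as np

# Illustrative 1D sketch of the density-matrix representation above (not
# ONETEP's actual optimized orbitals): three Gaussian 'orbitals' phi_a and a
# toy symmetric kernel K on a grid.  The integrated density n(r) = rho(r,r)
# must equal Tr(KS), where S_ab = <phi_a|phi_b> is the overlap matrix.
r = np.linspace(-8.0, 8.0, 4001)
dr = r[1] - r[0]
phi = np.array([np.exp(-(r - c) ** 2) for c in (-1.0, 0.0, 1.0)])

S = phi @ phi.T * dr                      # overlap matrix on the grid
K = np.array([[1.0, 0.1, 0.0],            # toy density kernel K^{ab}
              [0.1, 1.0, 0.1],
              [0.0, 0.1, 1.0]])

n_r = np.einsum('ar,ab,br->r', phi, K, phi)   # n(r) = sum_ab phi_a K^{ab} phi_b
N_grid = n_r.sum() * dr                       # electron number by integration
N_trace = np.trace(K @ S)                     # electron number as Tr(KS)
assert abs(N_grid - N_trace) < 1e-8
print(N_grid)
```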
Errors resulting from the supercell approximation, which can be large in systems with a monopole or a strong dipole, are eliminated using a truncated Coulomb potential.[@PhysRevB.73.205119; @cutoff-coulomb] Basis set superposition error that could affect the treatment, within a local-orbital framework, of surface adsorption is eliminated by the optimization procedure.[@bsse] A further advantage of our method over other computational methods that have been used to study nanocrystals[@wang] is that the whole of the nanostructure is included in the calculation in a way which allows the electrons throughout the nanostructure to reach a global equilibrium. We are therefore able accurately to account for any coupling that may (and in fact does, as we shall show) occur between different regions of the nanostructure. We caution that this method presupposes integer occupations, which precludes partial occupancies of states which might otherwise occur in a traditional calculation where the system is treated as metallic. We have also performed test calculations which permit fractional occupancies (albeit with cubic-scaling computational cost) on representative smaller systems, which confirm that the states presented here are indeed lowest in energy. Primarily, we study nanorods of wurtzite GaAs (though we also model GaN and AlN), since it exhibits all of the important characteristics of a polar semiconductor, i.e. elements of both ionic and covalent bonding character and a non-centrosymmetric lattice structure. Ion cores are represented using norm-conserving pseudopotentials. It has been shown in previous work[@GaAs-nlcc] that an adequate description of the geometry of systems containing Ga requires either the explicit inclusion of the Ga $3d$ electrons in the calculation or, if the $3d$ electrons are frozen into the pseudopotential, the application of non-linear core corrections.[@nlcc]
To reduce the computational cost, we have chosen the latter approach for both the Ga and As pseudopotentials. An effectively infinite kernel cutoff radius $R_{K}$ was used in order to treat insulators and metals on an equal footing. Calculations using plane-wave DFT, as implemented in the <span style="font-variant:small-caps;">castep</span> code,[@castep] show that setting $E_{\text{cut}}=400$ eV is sufficient to converge bond-lengths, bond-angles and total energies of bulk GaAs, Ga$_{2}$ and As$_{2}$ dimers to within 0.02% of their 800 eV values, using our pseudopotentials. We find that bond-lengths are underestimated by 1.3%, which is typical for LDA. <span style="font-variant:small-caps;">onetep</span> is known to require a $10$-$20\%$ larger $E_{\text{cut}}$ than <span style="font-variant:small-caps;">castep</span> for the same level of convergence,[@onetep-forces] thus, the calculations in this work use $E_{\text{cut}}=480$ eV and a generous local orbital radius of $R_{\phi}=0.53$ nm. For analysis of the dipole moment, we calculate the quantity $\mathbf{d}=-\int\! n(\mathbf{r})\mathbf{r}\, d\mathbf{r}+\sum_{I}Z_{I}\mathbf{R}_{I}$ from the density $n(\mathbf{r})$ in the whole simulation cell, and the positions $\mathbf{R}_{I}$ of the ions of charge $Z_{I}$. The internal electric field is calculated from the gradient of the value of the local effective potential smoothed over a volume equivalent to one primitive cell of the underlying material, as in our previous work[@PhysRevB.83.241402]. Fermi level pinning in nanorods {#sec:FLP} =============================== We first consider the ground-state electronic structure of a structurally relaxed nanorod of length 12.8 nm and cross-sectional area 3.56 nm$^{2}$, comprising 2862 atoms. The rod (represented schematically in Fig. 
\[fig:H/H-r\_LDOS\]) is labelled H/H-r, where the first three symbols (H/H) denote that the lateral/end surfaces are terminated with hydrogen atoms, and ‘-r’ denotes that it is structurally relaxed. This rod has a large negative dipole moment of $-600$ D and a large internal field of $+0.1$ V/nm in the center of the rod. We adopt the convention that a negative dipole moment is one whose direction opposes that of the spontaneous polarization of the underlying wurtzite crystal lattice (the wurtzite $[0001]$ direction, which is referred to as the $z$ direction in this work). In Fig. \[fig:H/H-r\_LDOS\] we plot the ‘slab-wise’ local density of electronic energy states (LDOS) for this rod. We define a slab LDOS as follows: the rod is nominally divided into 20 slabs along its length (the $z$-direction), each consisting of four planes of atoms: two each of Ga and As. The slab LDOS is the sum of the contributions to the total DOS from the local orbitals centered on those atoms. In Fig. \[fig:H/H-r\_LDOS\] we superpose these slab LDOS. It is clear that the electric field shifts the individual slab LDOS with respect to one another. ![(Color online) Structurally relaxed, fully hydrogen terminated GaAs nanorod (left) and the LDOS (right) for each ‘slab’, consisting of four planes of atoms (two As and two Ga). The filled curves indicate the occupied (valence) states at each slab. The band-edge states at opposite ends of the rod are seen to coincide in energy. []{data-label="fig:H/H-r_LDOS"}](figure1){width="86mm"} The Fermi energy can thus be considered to coincide with a significant density of states on both polar surfaces of the nanorod. On the Ga(-H) polar surface these states are mid-gap states, and on the As(-H) surface, these mid-gap surface states are adjacent to the conduction band edge. 
These are very stable positions for the Fermi level because small deviations from these positions would cause changes in occupancy of the surface states, resulting in a redistribution of charge and a potential opposing the redistribution. This is analogous to the Fermi level pinning exhibited by some semiconductor surfaces, in which a group of mid-gap states fixes the Fermi level at the surface at their average energy: the surface states act as donors or acceptors, filling or emptying to compensate for any change that would otherwise shift the relative position of the Fermi level (e.g. the application of a voltage). We see this principle in action in Fig. \[fig:H/H-r\_LDOS\], in that any significant occupancy of the lowest-energy empty state on the As(-H) surface (which appears to lie below the Fermi level) would in fact bring it above the Fermi level due to the change in the electric field produced by the charge redistribution. Of course, although this filling and emptying of states can occur unaided in a DFT calculation, it would, in real systems, depend on the availability of free charges in the environment, implying an important role for the solvent. There are at least two important differences between FLP on semi-infinite surfaces and the finite end surfaces of nanorods: first, on surfaces of area $A$, changes in surface charge density $\Delta\sigma$ due to changes in occupancy of surface states come in discrete amounts (i.e. $\Delta\sigma=e/A$), meaning that the continuous variability of the surface charge density on semi-infinite surfaces gives way to a discrete variability on finite surfaces; second, the analogue of the *depletion region* associated with FLP is the charged region on the opposite end of the nanorod, meaning that the two surfaces are coupled.
This second effect may confer an important role on the environment surrounding the nanorod, which may mediate the interaction between the coupled ends by facilitating the transfer of electrons between them as the system is perturbed. In our previous work,[@PhysRevB.83.241402] we studied rods with a range of different polar surface terminations, and with dipole moments ranging from $+330$ D to -$614$ D. In all cases, the nanorods exhibited this same feature of having Fermi levels coinciding with the energies of large densities of mid-gap states on the end polar surfaces of the nanorods. The arguments made here about FLP apply to all nanorods with this feature. One immediately obvious consequence of this picture is that the dipole moment and internal field of a nanorod are dependent on the energies of the pinning states on both ends of the rod, relative to their local (slab) band edges. The difference between these relative energies defines how much the energy spectrum is shifted between the top and bottom ends of the nanorod i.e. the potential difference $\Delta V$ between the ends. If the Fermi level is pinned on both ends of the rod, then the potential difference $\Delta V$ must also be pinned. We will find, in each of the subsequent sections in this work, that this pinning of $\Delta V$ plays a crucial role in determining the polarity of a nanorod. The pinning states in rod H/H-r on both ends of the rod are mid-gap states, though they are adjacent to the band-edges in this case. Different surface reconstructions on the polar surfaces may remove these mid-gap states or change their positions relative to the local energy spectra. This could change the potential difference across the rod and, therefore, the dipole moment. 
Effect of surface chemistry on dipole moment {#sec:ionic-charge} ============================================ Another implication of the picture presented above is that it is overly simplistic to cast the problem of nanorod polarity in terms of an ionic model, or a simple bond-electron counting model, since these models do not include constraints on the potential difference across a nanorod imposed by FLP. In previous work,[@PhysRevB.83.241402] we found that the dipole moment $d_{z}$, the charge on the bottom (As-rich) end $Q_{\text{b}}$, and the electric field in the middle of the rod $E_{\text{m}}$ for two unrelaxed nanorods (labelled H/H and H/P) were all very similar, despite having surface terminating species of very different ionic charge. Rod H/H is fully hydrogen terminated on both the lateral ($\parallel$ to $z$) surfaces and the polar ($\perp$ to $z$) surfaces. Rod H/P, on the other hand, is terminated with hydrogen atoms on the lateral surfaces, while on the polar surfaces there are pseudo-hydrogen[@PhysRevB.71.165328] atoms of two different varieties. These pseudo-hydrogen atoms are used to passivate the dangling bonds of their respective surfaces: those on the Ga polar surface have an ionic charge of $+1.25e$, while those used to terminate the As polar surface have an ionic charge of $+0.75e$. These pseudo-atoms are intended to passivate dangling bonds on the polar surfaces, without adding charge to them, and they have been shown in other work to render the surfaces electronically inert.[@PhysRevB.71.165328] A simple bond-electron and ion counting argument predicts that the Ga polar surface on H/P should have an additional charge of $+0.25e$ for each of the 27 bound pseudo-atoms, compared to H/H – a total change of $+6.75e$ for each end. Similarly, the As polar surface should have a reduced charge of $-0.25e$ per pseudo-atom – a total change of $-6.75e$. 
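The counting argument above can be tabulated directly. A minimal sketch, using only the numbers quoted in the text (27 terminating atoms per polar surface, pseudo-hydrogen charges $+1.25e$ and $+0.75e$, and $+1.00e$ for real hydrogen):

```python
# Ion-counting bookkeeping for the comparison above.  Numbers from the text:
# 27 terminating atoms per polar surface; pseudo-hydrogen charges +1.25e
# (Ga end) and +0.75e (As end) versus +1.00e for a real hydrogen atom.
sites = 27
dq_ga_end = sites * (1.25 - 1.00)   # predicted extra charge on the Ga surface
dq_as_end = sites * (0.75 - 1.00)   # predicted extra charge on the As surface
assert (dq_ga_end, dq_as_end) == (6.75, -6.75)
# Naive counting therefore predicts a +/-6.75e change in surface charge per end.
print(dq_ga_end, dq_as_end)
```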
Nanorod H/P should therefore have a greatly reduced dipole moment and potential difference across it. In fact, we observed $d_{z}$, $Q_{\text{b}}$, and $E_{\text{m}}$ change from $-614$ D, $1.00e$ and 0.100 V/nm respectively in H/H, to $-531$ D, $0.95e$ and 0.105 V/nm in H/P – a much smaller change. ![(Color online) The difference in laterally-integrated electron density profile between H/H and H/P. The standard deviation of the Gaussian used to smooth the data parallel to the nanorod axis is 0.32 nm. There has been a shift of 6.70 electrons from left to right. We show that the majority of this redistribution is attributable to changes in surface state occupancy.[]{data-label="fig:HHminusHP"}](figure2){width="0.9\columnwidth"} We plot the electron density difference between rods H/H and H/P in Fig. \[fig:HHminusHP\]. The densities have been integrated in the $x$- and $y$-directions and convolved with a Gaussian of standard deviation 0.32 nm in the $z$-direction. The latter process smooths out variations on length-scales smaller than a unit cell length. By integrating the resulting curve from each end to the center of the rod, we find that there has been a transfer of 6.70 electrons from one end of the rod to the other between rods H/H and H/P, which almost entirely cancels the change in ionic charge. In Fig. \[fig:HH\_HP\_LDOS\], we plot the LDOS of only the top (the Ga-rich polar end surface) and bottom (the As-rich polar end surface) slabs of both rods H/H and H/P. By summing the occupations of the states plotted in this figure, we find that there has been a change of six in the number of occupied states on each end of the rod between H/H and H/P. The remaining charge transfer of $0.70e$ must be associated with the polarization of occupied states in slabs far away from the ends. This polarization of the electron density can be observed in the inset to Fig. \[fig:HHminusHP\].
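The profile analysis just described (lateral integration, Gaussian smoothing with $\sigma = 0.32$ nm, integration from each end to the rod center) can be sketched as follows. The two opposite Gaussian lobes below are synthetic stand-in data, not the actual H/H $-$ H/P density difference; only the rod length and the 6.70-electron transfer are taken from the text.

```python
import numpy as np

# Sketch of the profile analysis described above: a laterally integrated
# density difference is convolved with a Gaussian (sigma = 0.32 nm) and then
# integrated from one end to the rod centre to count transferred electrons.
# The two opposite lobes are synthetic stand-in data, not the real H/H - H/P
# difference; L and the 6.70-electron transfer are taken from the text.
L, Q = 12.8, 6.70
z = np.linspace(0.0, L, 2561)
dz = z[1] - z[0]

def lobe(z0, s):
    return np.exp(-(z - z0) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

dn = -Q * lobe(1.6, 0.3) + Q * lobe(L - 1.6, 0.3)   # depletion / accumulation

kern_z = np.arange(-2.0, 2.0 + dz, dz)              # smoothing kernel grid
kern = np.exp(-kern_z ** 2 / (2 * 0.32 ** 2))
kern /= kern.sum()
dn_smooth = np.convolve(dn, kern, mode='same')

# Integrating the smoothed profile over the left half recovers the charge
transferred = -dn_smooth[z <= L / 2].sum() * dz
assert abs(transferred - Q) < 0.05
print(transferred)
```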
The potential difference between the ends $\Delta V$ is very similar for both rods – $1.8$ eV for H/H and $1.5$ eV for H/P. ![(Color online) Local densities of states for the slab of atoms on the Ga-rich (top) and As-rich (bottom) ends of nanorods H/H and H/P. The potential difference between the two ends $\Delta V\sim1.8$ eV for H/H and $1.5$ eV for H/P.[]{data-label="fig:HH_HP_LDOS"}](figure3){width="86mm"} It is instructive to consider a fictitious adiabatic process in which the ionic charge of the polar terminating species is slowly tuned so as to go from rod H/P to H/H. The LDOS on the rod ends begins with the Fermi level at the local band edge on each end of the rod, adjacent to the electronic states. Therefore, we have $\Delta V\approx E_{\text{g}}$ in this case. As the charges of the terminating pseudo-atoms decrease on the Ga end, and increase on the As end, the energy of nearby electronic states on the Ga end of the rod must increase, pulling some of those that lay just below the Fermi level above it, and vice-versa on the As end. This causes these states to change occupancy and compensate some of the change in ionic charge. The higher the density of states at the Fermi level, the less mobile is the Fermi level (i.e. the more strongly the Fermi level is pinned). To effect a given shift in the Fermi level, a larger change in surface ionic charge is required if the density of states is high. That is to say, energies coinciding with a high density of states (like the band edges) represent regions of high stability for the Fermi level. The transition from H/P to H/H causes the Fermi energy to run into the (local) band edges, which is why there is very little change in the pinned position of the Fermi level on both ends, and therefore very little change in the potential difference $\Delta V$ between the ends of the rod.
The general conclusion from this section is that changes in nanorod polarity due to changes in ionic charge at the surfaces of nanorods can be screened out due to FLP occurring at the ends of the nanorod. This effect tends to preserve the potential difference between the ends of the nanorod $\Delta V$, and consequently, the dipole moment. The band-gap $E_{\text{g}}$, in effect, imposes an approximate upper limit on $\Delta V$, since the density of states within the bands is so high that the Fermi level would be very strongly pinned at its edges. Effect of length and cross-sectional area on dipole moment {#sec:NR-size} ========================================================== In this section, we look at how the dipole moment of nanorod H/H varies with rod length $L$ and cross-sectional area $A$, and show how it can be explained using our FLP model. ![The magnitude of the dipole moment increases linearly with nanorod length for nanorods of cross-sectional area $A=3.56\ \text{nm}^{2}$. []{data-label="fig:dvslength"}](figure4){width="86mm"} ![The magnitude of the dipole moment increases with nanorod cross-sectional area for nanorods of length $L=12.8\ \text{nm}$. Curves are fitted to the data, with functional forms $\sigma(A)=c_{1}/(\sqrt{A}+c_{2}-\sqrt{A+c_{2}^{2}})$ and $d_{z}=c_{3}A/(\sqrt{A}+c_{2}-\sqrt{A+c_{2}^{2}})$, derived from Eq. \[eq:sigma\]. Over this range $\sqrt{A}\ll c_{2}$, placing these rods firmly in the “thin” regime.[]{data-label="fig:RvsA"}](figure5){width="86mm"} We find, from Fig. \[fig:dvslength\], that the dipole moment increases roughly linearly with $L$ over the range studied, for rods of $A=3.56$ nm$^{2}$. This implies that the excess polar surface ground-state charge density on each end surface is independent of nanorod length over this range. In Fig.
\[fig:RvsA\] we show how both the dipole moment and the polar surface charge *density* $\sigma$ on the bottom (As) end surface of the rod change with $A$ for a fixed nanorod length of $L=12.8$ nm. The charge density $\sigma$ on the polar end surfaces decreases rapidly with cross-sectional area, asymptotically approaching a constant value that may well be slightly above zero for nanorods of this length (because surfaces of polar thin-films, unlike semi-infinite surfaces, can support a non-zero charge[@Goniakowski]). We turn to consider the causes of these scaling relationships, focusing first on the variation in rod polarization with respect to $A$. The slab-LDOS plots in Fig. \[fig:LDOSvsA\] show that for all of the cross-sectional areas studied, the occupied states on the top surface align closely with the unoccupied conduction band edge on the bottom surface. In the previous section we argued that the local band-edges represented an effective upper and lower limit for the Fermi energy on the ends of a nanorod, and that the polarization of rod H/H, in particular, is constrained by these band-edges (evidenced by the fact that going from H/P to H/H does not change the dipole moment very much, because the Fermi level touches the band-edges at both ends of the rod). ![(Color online) Slab-wise local densities of states for rods of four different cross-sectional areas, sampled at three positions on the rod: (top) the slab on the Ga-rich end, (middle) the slab in the middle of the rod, (bottom) the slab on the As-rich end. For ease of comparison, we have shifted the energy of the highest occupied state for each rod to zero.
The Fermi level can be imagined to remain adjacent to the band edges for all rods, and the band-gap is larger for thinner rods due to quantum confinement.[]{data-label="fig:LDOSvsA"}](figure6){width="86mm"} In such a rod, the potential difference between its ends, $\Delta V$, is determined mostly by its band-gap $E_{\text{g}}$, so that $\Delta V\approx E_{\text{g}}$. We will argue that this observation alone can qualitatively account for the observed trends in $d_{z}$ and $\sigma$ with $A$ in Fig. \[fig:RvsA\]. While we do not expect the band-gap of equivalent real nanostructures to match exactly with the DFT gaps we observe (due to the well-known band-gap error of DFT), we expect qualitatively the same behavior to emerge. We can analyse this behaviour in terms of a simple electrostatic model, and compare this to the results in Fig. \[fig:RvsA\]. The electrostatic potential due to a circular disk of radius $a$ and area charge density $\sigma$ at a distance $z$ along its axis is given by $$V(z)=2\pi\sigma\left(\sqrt{a^{2}+z^{2}}-|z|\right)$$ This expression simplifies to the familiar results for a point charge in the limit that $z\gg a$ and infinite slab when $z\ll a$. Assuming equal and opposite densities at the two ends of the rod, $z=0$ and $z=L$, the total potential difference is $\Delta V=2\left[V(0)-V(L)\right]$ ($\approx E_{\text{g}}$ in this case), which rearranges to give $$\sigma\approx\frac{E_{\text{g}}}{4\pi\left(a+L-\sqrt{a^{2}+L^{2}}\right)}\label{eq:sigma}$$ For “thick” rods, $a\gg L$, $\sigma\sim E_{\text{g}}/L$, independent of $a$ to leading order, whereas for “thin” rods, $a\ll L$, $\sigma\sim E_{\text{g}}/a\propto E_{\text{g}}/\sqrt{A}$. The rod dipole moment $d_{z}=\sigma AL$ therefore scales as $d_{z}\sim E_{\text{g}}A$ for thick rods but as $d_{z}\sim E_{\text{g}}\sqrt{A}L$ for thin rods. Substituting $a\propto\sqrt{A}$ into Eq. \[eq:sigma\] yields a general expression for $\sigma(A)$, which we fit to the data in Fig. \[fig:RvsA\]. 
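Equation \[eq:sigma\] and its two limiting regimes can be checked numerically. The sketch below is an illustration of the electrostatic model only, not of the DFT data, and uses an arbitrary energy scale $E_{\text{g}}=1$:

```python
import numpy as np

# Numerical check of the disk model above,
# sigma = E_g / (4 pi (a + L - sqrt(a^2 + L^2)))  (Gaussian units, as in the
# text).  E_g = 1 is an arbitrary energy scale chosen for illustration.
def sigma(a, L, Eg=1.0):
    return Eg / (4 * np.pi * (a + L - np.hypot(a, L)))

Eg, L = 1.0, 12.8

# Thin-rod limit (a << L): sigma ~ Eg / (4 pi a)
a_thin = 0.05
assert abs(sigma(a_thin, L) * 4 * np.pi * a_thin / Eg - 1.0) < 0.01

# Thick-rod limit (a >> L): sigma ~ Eg / (4 pi L), independent of a
a_thick = 5000.0
assert abs(sigma(a_thick, L) * 4 * np.pi * L / Eg - 1.0) < 0.01

# d_z = sigma * A * L grows with cross-sectional area A = pi a^2
areas = np.array([1.0, 2.0, 4.0, 8.0])
d_z = sigma(np.sqrt(areas / np.pi), L) * areas * L
assert np.all(np.diff(d_z) > 0)
print(d_z)
```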
We also fit the curve given by the expression $d_{z}=\sigma AL$ to the data for $d_{z}$. Evidence of deviation between our model and the data can be seen at smaller values of $A$ in the data for $d_{z}$. The smaller $A$ is, the larger the error in our model. This is not surprising because the model assumes that charge is localized on planes at the ends of the rod, but we know that as $A$ becomes smaller, the surface charge becomes increasingly delocalized along $z$. Furthermore, at small $A$ the rod cross-section is increasingly dominated by edge atoms rather than atoms truly belonging to the polar surface. For these reasons, a breakdown of the model is expected at very small values of $A$. Despite this complication it is clear from the fitting parameters that our rods are in the “thin” regime, as defined above, as the model form correlates well with the observed behavior. In summary, thinner nanorods exhibit stronger decay of their internal potential due to finite-width effects; thinner rods therefore require a larger charge density on their ends than do thicker rods in order to generate the required potential difference $\Delta V$. There is a second, less significant feature in the LDOS plots of Fig. \[fig:LDOSvsA\] that slightly complicates the picture described above. From the data sets in the middle window of Fig. \[fig:LDOSvsA\], thinner nanorods are found to exhibit a larger local band-gap than thicker rods. The local band-gap in the middle of the rod is found to be 1.3 eV in the thinnest rod, and 0.9 eV in the thickest. This is due to quantum confinement of electronic states in the lateral direction, which is stronger in thinner rods. As the band-gap increases, the potential difference between the ends of the rod can increase, which further increases the amount of charge density required on the end surfaces of the thinner nanorods in order to meet the resulting increased pinned potential difference. Although both of these effects (i.e.
loss of the internal field due to finite size effects, and the increase in the band-gap due to quantum confinement) play a role in generating the behavior seen in Fig. \[fig:RvsA\], the first is more significant, since quantum confinement produces only a 44% increase in the band-gap over the range of rods studied, which does not come close to accounting for the 740% increase in the polar surface charge density over the same range. We return now to the variation in nanorod polarization with $L$. We did not observe the quantum-confinement-related variation here that was observed over the range of $A$. Presumably, this is due to the large extent of the rods in the $z$-direction. However, just as over the range of $A$, we found that the Fermi level remains pinned close to the band edges over the range of $L$, resulting in the potential difference between the nanorod ends remaining constant. The rods in Fig. \[fig:dvslength\] are able to maintain the charge on their ends as $L$ increases, without incurring a significant change in the potential difference across the rod, because the rod is very thin and the internal potential decays very strongly: the fields in the centers of rods of length 12.8 nm, 25.6 nm and 51.2 nm are found to decay to 0.1 V/nm, 0.035 V/nm and 0.009 V/nm respectively. If the rods were thicker, we would expect this decay to be weaker, and the amount of charge on the ends to be reduced with $L$ to maintain the pinned potential difference, thus reducing the rate at which the dipole moment increases with length. In summary, FLP plays a determining role in the scaling of the dipole moment of the nanorods studied with both length and cross-sectional area.
This effect manifests itself in different scaling behavior, the details of which depend primarily on the rate of decay of the internal electric field (which is a function of $A$), the length $L$ of the rods, and the pinned potential difference $\Delta V$, which is close to the size of the band-gap for rods in which the Fermi level is pinned near the local band-edges, as is the case in the particular rods studied in this section. Quantum confinement may also have some influence on this scaling by affecting $\Delta V$.

Effect of nanorod composition {#sec:NR-type}
=============================

In this section we investigate how the polar behavior of nanorods depends on composition. We calculate the charge distribution in three rods – one composed of GaAs, another of GaN, and a third of AlN. These are all III-V semiconductors, so their chemistry and response to terminating ligands can be expected to be similar. We therefore terminate the rods with the same atoms as in previous sections, as type H/P (lateral surfaces fully covered with hydrogen atoms, and polar surfaces fully covered with the appropriate passivating pseudo-hydrogen atoms). All have the same number of atoms (2862), and are constructed of the same number of unit cells in each direction. Atoms are located at their bulk equilibrium positions, as calculated with the CASTEP plane-wave DFT code, meaning that the GaAs rod is longer than the GaN rod, which in turn is longer than the AlN rod, because of the differences in bulk lattice parameters. The main characteristics of these rods and their charge distributions are summarized in Table \[tab:GaAs-GaN-AlN\], along with reference information about the bulk properties of these semiconductors.
                                                       AlN     GaN      GaAs
  -------------------------------------------------- ------- -------- --------
  DFT lattice param $a$ (Å)                           3.075   3.154    3.935
  DFT lattice param $c$ (Å)                           4.941   5.132    6.486
  DFT polarization (C/m$^{2}$)                        0.073   0.029    0.005
  DFT (LDA) bandgap (eV)                              4.5     2.7      0.9
  Experimental bandgap (eV)                           6.2     3.3      1.5
  Experimental permittivity, $\epsilon_{\text{r}}$    8.5     9.7      13.1
  Length, $L$ (nm)                                    9.66    10.01    12.61
  Cross-sectional area, $A$ (nm$^{2}$)                2.26    2.33     3.62
  $d_{z}$ (D)                                         -713    -682     -531
  Polarization (C/m$^{2}$)                            -0.11   -0.098   -0.039
  $\Delta V$ (eV)                                     4.2     3.2      1.5
  $Q_{\text{b}}$ $(e)$                                1.61    1.50     0.95
  $\sigma_{\text{b}}$ $(e/\text{nm}^{2})$             0.711   0.645    0.262
  $Q_{\text{b}}$ decay constant (nm$^{-1}$)           1.02    0.80     0.48

  : Some properties of AlN, GaN, and GaAs in nanorod and in bulk. Experimental data for AlN obtained from Refs. , for GaN obtained from Refs.  and for GaAs obtained from Ref. .[]{data-label="tab:GaAs-GaN-AlN"}

Figure \[fig:GaAs\_vs\_GaN\_vs\_AlN\] shows the distributions of charge along the lengths of the three nanorods, integrated over the $x$ and $y$ directions and, in the $z$ direction, convolved with a Gaussian of standard deviation $c/2$ so as to smooth out variations on length-scales shorter than half the unit cell length $c$ (N.B. $c$ is different for each of the rods, as summarized in Table \[tab:GaAs-GaN-AlN\]). ![(Color online) Laterally averaged and Gaussian-smoothed charge distributions along the lengths of nanorods of AlN, GaN and GaAs. The ordinate has been magnified in the lower panel.[]{data-label="fig:GaAs_vs_GaN_vs_AlN"}](figure7){width="86mm"} ![(Color online) Local densities of states of the cation-rich (top three data sets) and anion-rich (bottom three data sets) polar surfaces of AlN, GaN, and GaAs. For ease of comparison, we have shifted the energy of the highest occupied state of each rod to zero.[]{data-label="fig:GaN_AlN_LDOS"}](figure8){width="86mm"} In Fig.
\[fig:GaN\_AlN\_LDOS\] we plot the LDOS for the polar surfaces of the three rods. In all cases, the Fermi level can be imagined as being pinned by surface states near the band-edges, for reasons outlined in previous sections. The polarization of the rod appears to increase proportionally with the potential difference across the rod $\Delta V$, which is positively correlated with the bulk semiconductor band-gap. The nanorod of the largest band-gap semiconductor, AlN, supports the largest polarization, and the nanorod with the lowest, GaAs, supports the smallest. However, $\Delta V$ is not equal to, or proportional to, the bulk band-gap. This is due to two factors: first, the effect of quantum confinement, described in Sec. \[sec:NR-size\], increases the band-gap by an amount which varies depending on the type of material; and second, the polar surface states responsible for pinning the Fermi level, particularly on the bottom surface of the rod, can be seen in Fig. \[fig:GaN\_AlN\_LDOS\] to lie at different positions relative to the local band edges in all three rods. The amount of excess charge on the bottom ends of the rods $Q_{\text{b}}$ is also positively correlated with the semiconductor band-gap. However, it is not proportional to $d_{z}$, so there must be a significant difference in how this charge is distributed along the rods. We measure the decay rate of the long-range tails of excess charge which can be seen in the magnified plot in Fig. \[fig:GaAs\_vs\_GaN\_vs\_AlN\]. Nanorods of higher band-gap materials exhibit a larger decay constant (Table \[tab:GaAs-GaN-AlN\]), and therefore, stronger localization of their excess surface charges. This stronger localization is indicative of the fact that rods of lower permittivity materials more strongly concentrate the field lines associated with surface charge, and therefore exhibit a weaker long-range decay of their internal electric fields for a given finite cross-sectional area. 
Therefore, rods of lower permittivity materials require less excess charge density on their ends to attain a particular potential difference $\Delta V$ (and polarization) than do rods of higher permittivity materials. This is a similar argument to the one in Sec. \[sec:NR-size\], which also concluded that rods exhibiting weaker decay of their internal fields (i.e. thick rods) require less excess surface charge density to attain a particular $\Delta V$. This effect can be partially incorporated into our model in Sec. \[sec:NR-size\] by introducing a material-dependent constant of proportionality which determines the effective cross-sectional area seen by the electrons for a given geometrical cross-sectional area. This effective cross-sectional area is larger in materials of lower permittivity.

Summary and conclusion {#sec:conclusion}
======================

The potential difference across a nanorod due to its large dipole moment shows up in the LDOS as a shifting of the energy of the states as one moves along the length of the rod. In this work and in our previous work,[@PhysRevB.83.241402] it has been found that nanorods of a variety of surface terminations have Fermi levels which coincide with a high LDOS at their polar end surfaces. These are either mid-gap states or states close to the band-edges. In the latter case, this means that the potential difference across the rod is approximately equal to its local band-gap. These are very stable positions for the Fermi level because small deviations from these energies result in changes in occupancy and a redistribution of charge, which generates a potential that opposes the initial change. This phenomenon is a generalization of the FLP effect on semi-infinite surfaces to structures of small dimensions. In this work, we provide evidence that FLP plays a determining role for the polarity of nanorods.
Pinning of the Fermi level results in a pinning of the potential difference $\Delta V$ across the nanorod, and hence its dipole moment. We demonstrate that simple ionic or bond-electron counting models can be inadequate for describing, even qualitatively, differences in polarity between nanorods of different surface termination. In particular, we have shown that the effect of varying the ionic charge on the ends of a rod can be screened out, due to pinning at the nanorod ends, so as to maintain its polarity. We show that FLP can play a determining role for the scaling of the dipole moment with nanorod size. It is also able to account for differences in polarity between nanorods of different composition. A particularly striking consequence of this effect is that it implies a crucial role for the solvent in determining the properties of a nanorod. Not only does the choice of solvent determine whether charge can be transferred between the ends of the nanorod, because it mediates this transfer, but it can also alter the LDOS on the nanorod ends by changing the surface chemistry. We propose that this latter effect, coupled with FLP, could have a dramatic effect on the dipole moment, and hence the optical properties. Clearly, the picture discussed in this work could have important consequences for the response properties of nanorods in applied electric fields, and in the fields of neighboring polar nanorods. This could be important, not only for their optical properties, but also for the energetics of self-assembly of polar semiconductor nanostructures. This work was supported by EPSRC (UK) under Grant No. EP/G05567X/1, the EC under Contract No. MIRG-CT-2007-208858, and a Royal Society University Research Fellowship (PDH). All calculations were run on the Imperial College HPC Service. [36]{} M. A. El-Sayed, Accounts Chem. Res. **37**, 326 (2004). X. Michalet, F. F. Pinaud, L. A. Bentolila, J. M. Tsay, S. Doose, J. J. Li, G. Sundaresan, A. M. Wu, S. S. Gambhir, and S.
Weiss, Science **307**, 538 (2005). N. Tessler, V. Medvedev, M. Kazes, S. Kan, and U. Banin, Science **295**, 1506 (2002). M. Kazes, D. Y. Lewis, Y. Ebenstein, T. Mokari, and U. Banin, Adv. Mater. **14**, 317 (2002). W. U. Huynh, J. J. Dittmer and A. P. Alivisatos, Science **295**, 2425 (2002). Z. Nie, A. Petukhova, and E. Kumacheva, Nat. Nanotechnol. **5**, 15 (2010). E. V. Shevchenko, D. V. Talapin, N. A. Kotov, S. O’Brien, and C. B. Murray, Nature **439**, 55 (2006). S. A. Blanton, R. L. Leheny, M. A. Hines, and P. Guyot-Sionnest, Phys. Rev. Lett. **79**, 865 (1997); M. Shim and P. Guyot-Sionnest, J. Chem. Phys. **111**, 6955 (1999). L.-S. Li and A. P. Alivisatos, Phys. Rev. Lett. **90**, 097402 (2003); D. V. Talapin, E. V. Shevchenko, C. B. Murray, A. V. Titov, and P. Král, Nano Lett. **7**, 1213 (2007). J. Goniakowski, F. Finocchi, and C. Noguera, Rep. Prog. Phys. **71**, 016501 (2008). E. Rabani, B. Hetényi, B. J. Berne, and L. E. Brus, J. Chem. Phys. **110**, 5355 (1999). E. Rabani, J. Chem. Phys. **115**, 1493 (2001). S. Shanbhag and N. A. Kotov, J. Phys. Chem. B **110**, 12211 (2006). S. Dag, S. Wang, and L.-W. Wang, Nano Lett. **11**, 2348 (2011). P. W. Avraam, N. D. M. Hine, P. Tangney, and P. D. Haynes, Phys. Rev. B **83**, 241402 (2011). C.-K. Skylaris, P. D. Haynes, A. A. Mostofi, and M. C. Payne, J. Chem. Phys. **122**, 084119 (2005). N. D. M. Hine, M. Robinson, P. D. Haynes, C.-K. Skylaris, M. C. Payne, and A. A. Mostofi, Phys. Rev. B **83**, 195102 (2011). C.-K. Skylaris and P. D. Haynes, J. Chem. Phys. **127**, 164712 (2007). C. A. Rozzi, D. Varsano, A. Marini, E. K. U. Gross, and A. Rubio, Phys. Rev. B **73**, 205119 (2006). N. D. M. Hine, J. Dziedzic, P. D. Haynes, and C.-K. Skylaris, J. Chem. Phys. **135**, 204103 (2011). P. D. Haynes, C.-K. Skylaris, A. A. Mostofi and M. C. Payne, Chem. Phys. Lett. **422**, 345 (2006). A. Qteish and R. J. Needs, Phys. Rev. B **43**, 4229 (1991). S. G. Louie, S. Froyen, and M. L. Cohen, Phys. Rev.
B **26**, 1738 (1982). S. J. Clark, M. D. Segall, C. J. Pickard, P. J. Hasnip, M. I. J. Probert, K. Refson, and M. C. Payne, Z. Kristallogr. **220**, 567-570 (2005). C.-K. Skylaris, P. D. Haynes, A. A. Mostofi, and M. C. Payne, J. Phys.: Condens. Matter **17**, 5757 (2005). X. Huang, E. Lindgren, and J. R. Chelikowsky, Phys. Rev. B **71**, 165328 (2005). H. Yamashita, K. Fukui, S. Misawa, and S. Yoshida, J. Appl. Phys. **50**, 896 (1979). D. Brunner, H. Angerer, E. Bustarret, F. Freudenberg, R. Hopler, R. Dimitrov, O. Ambacher, and M. Stutzmann, J. Appl. Phys. **82**, 5090 (1997). Y. Goldberg, in *Properties of Advanced Semiconductor Materials GaN, AlN, InN, BN, SiC, SiGe*, Eds. M. E. Levinshtein, S. L. Rumyantsev, M. S. Shur (John Wiley & Sons, Inc., New York, 2001), pp. 31-47. H. Teisseyre, P. Perlin, T. Suski, I. Grzegory, S. Porowski, J. Jun, A. Pietraszko, and T. D. Moustakas, J. Appl. Phys. **76**, 2429 (1994). I. Vurgaftman and J. R. Meyer, J. Appl. Phys. **94**, 3675 (2003). V. Bougrov, M. E. Levinshtein, S. L. Rumyantsev, and A. Zubrilov, in *Properties of Advanced Semiconductor Materials GaN, AlN, InN, BN, SiC, SiGe*, Eds. M. E. Levinshtein, S. L. Rumyantsev, M. S. Shur (John Wiley & Sons, Inc., New York, 2001), pp. 1-30. H. C. Casey, D. D. Sell, and K. W. Wecht, J. Appl. Phys. **46**, 250 (1975). S. Adachi, J. Appl. Phys. **53**, 8775-8792 (1982).
{ "pile_set_name": "ArXiv" }
---
abstract: 'We study the frustration-induced enhancement of the incommensurate correlation for a bond-alternating quantum spin chain in a magnetic field, which is associated with a quasi-one-dimensional organic compound F$_5$PNN. We investigate the temperature dependence of the staggered susceptibilities by using the density matrix renormalization group, and then find that the incommensurate correlation becomes dominant in a certain range of the magnetic field. We also discuss the mechanism of this enhancement on the basis of the mapping to the effective S=1/2 XXZ chain and a possibility of the field-induced incommensurate long range order.'
author:
- Nobuya Maeshima
- Kouichi Okunishi
- Kiyomi Okamoto
- Tôru Sakai
title: 'Frustration-induced $\eta$ inversion in the S=1/2 bond-alternating spin chain'
---

The one-dimensional (1D) $S=1/2$ antiferromagnetic bond-alternating spin chain has been an important issue in condensed matter physics, since it exhibits some typical quantum effects. The magnetic susceptibility clearly reflects the existence of the dimer spin gap. A more important aspect is that the Tomonaga-Luttinger (TL) liquid, which is an undoubtedly essential concept in 1D quantum critical systems, is realized between the field $H_{c1}$ at which the spin gap vanishes and the saturation field $H_{c2}$, where the low-energy behavior of the system is characterized by the TL exponents $\eta_x$ ($\eta_z$) associated with the correlation function for the transverse (longitudinal) staggered mode (see Eq. (\[correlation\])) [@sakai-takahashi2; @sakai1]. A good example of such a bond-alternating spin chain is a quasi-1D organic compound F$_5$PNN [@hosokoshi]. However, a recent precise analysis of F$_5$PNN suggests that the frustration effect, which has recently been attracting considerable attention, induces various anomalous properties [@goto; @izumi1; @izumi]. In the magnetization process, the bond alternation ratio $\alpha$ (defined in Eq.
(\[hamil\])) shows a crossover from $0.4$ in the low field region to $0.5$ in the high field region [@goto]. Moreover, the temperature dependence of the NMR relaxation rate in a magnetic field exhibits anomalous enhancement of the TL exponent $\eta_z$; in contrast to the usual behavior of the TL exponents $\eta_x < \eta_z$ for the simple bond-alternating system [@sakai1], $\eta_x > \eta_z$ may be realized in a certain range of the magnetic field [@izumi1; @izumi], which we shall call “$\eta$-inversion” in this paper. Motivated by such interesting experimental suggestions of the frustration effect, we study the bond-alternating spin chain with the frustration in a magnetic field $H$: $$\begin{aligned} {\cal H} &=& J\sum_{i}[ \vec{S}_{2i} \cdot \vec{S}_{2i+1} + \alpha \vec{S}_{2i+1} \cdot \vec{S}_{2i+2} ] \nonumber \\ &+& J' \sum_i \vec{S}_{i} \cdot \vec{S}_{i+2} - H\sum_i S^z_i, \label{hamil}\end{aligned}$$ where $\vec{S}$ is the $S=1/2$ spin operator and $J'$ is the frustrating coupling. In the following we use the normalization $J=1$ for simplicity. The particular importance of this model is that it enables us to capture the interesting physics cooperatively generated by the magnetic field and frustration. In fact, the frustration-induced plateau formation has been studied intensively [@Tone2; @totsuka], and recently a remarkable enhancement of the incommensurate correlation $\eta_x > \eta_z$ has been suggested by the numerical diagonalization analysis in some intermediate magnetic field [@suga]. However, a systematic investigation of the frustration effect for the TL liquid behavior in the magnetization curve is essentially difficult, and a detailed study is clearly desired for a more thorough understanding of its impact on the TL exponents.
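For very small chains, the Hamiltonian of Eq. (\[hamil\]) can be written down explicitly as a check on its structure. The sketch below is our own toy construction in the $S^z$ product basis for an open chain (it is not part of the paper's DMRG machinery); it verifies hermiticity and conservation of total $S^z$.

```python
def hamiltonian(N, alpha, Jp, H):
    """Dense matrix of Eq. (1) for an open chain of N spins; bit i of a
    basis integer encodes spin i (1 = up).  J = 1 as in the text."""
    dim = 1 << N
    mat = [[0.0] * dim for _ in range(dim)]
    bonds = [(i, i + 1, 1.0 if i % 2 == 0 else alpha) for i in range(N - 1)]
    bonds += [(i, i + 2, Jp) for i in range(N - 2)]   # frustrating J'
    for s in range(dim):
        sz = [1 if (s >> i) & 1 else -1 for i in range(N)]
        mat[s][s] += sum(0.25 * J * sz[i] * sz[j] for i, j, J in bonds)
        mat[s][s] += -H * 0.5 * sum(sz)               # Zeeman term
        for i, j, J in bonds:                         # transverse part
            if sz[i] != sz[j]:
                t = s ^ (1 << i) ^ (1 << j)           # flip the pair
                mat[t][s] += 0.5 * J
    return mat

H4 = hamiltonian(4, 0.45, 0.05, 0.0)
# Hermitian, and every nonzero element conserves total S^z.
assert all(abs(H4[a][b] - H4[b][a]) < 1e-12
           for a in range(16) for b in range(16))
assert all(H4[a][b] == 0.0 or bin(a).count("1") == bin(b).count("1")
           for a in range(16) for b in range(16))
```

Such a dense matrix is only useful for a handful of spins; the thermodynamic results below rely on the DMRG instead.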
In addition, a precise analysis for the $\eta$-inversion provides an essential viewpoint for the material physics; as an interesting consequence of such enhancement of the incommensurate correlation, a novel type of incommensurate order can be induced in the magnetic field through the inter-chain coupling. In this paper, we reveal the effect of the frustrating coupling $J'$ on the observable quantities, using the finite temperature density matrix renormalization group (DMRG) [@moukouri; @wang; @shibata]. We first calculate the magnetization curve and further investigate the temperature dependence of the staggered susceptibility $\chi_\perp$ ($\chi_\parallel$) perpendicular (parallel) to the uniform magnetic field. We then find that the $\eta$-inversion actually occurs in a certain range of the magnetic field, where the gap formation at the $M=1/4$ plateau plays a crucial role. Here $M$ is the magnetization per spin. Finally we discuss the possibility of the field-induced incommensurate order assisted by the frustration. On the basis of the obtained phase diagram, we make a comment on the NMR experiment of F$_5$PNN. In order to analyze the crossover of $\alpha$ in F$_5$PNN, we first calculate the magnetization process by the finite temperature DMRG with $64$ retained basis states. Figure \[fig:mhcurve\] shows the comparison of the obtained curves at $T=0.085$. The parameters in the figure correspond to the F$_5$PNN experiment in Ref. [@goto]. We can clearly see that the curve of $\alpha=0.45$ with the frustrating coupling $J'=0.05$ explains the crossover of the magnetization process from $\alpha\simeq 0.4$ in the low field region to $\alpha \simeq 0.5$ in the high field region. This gives clear evidence of the frustration effect in F$_5$PNN.
![Magnetization curves obtained with the DMRG.[]{data-label="fig:mhcurve"}](mhcurve.eps){width="6.0cm"} Let us next discuss the TL exponents $\eta_x$ and $\eta_z$, the precise definitions of which are given by power-law decay of the spin-spin correlation functions at zero temperature: $$\begin{aligned} \langle S_0^x S_r^x \rangle &\sim& (-1)^r r^{-\eta_x}, \nonumber \\ \langle S_0^z S_r^z \rangle - M^2 &\sim& \cos(2k_Fr)r^{-\eta_z}, \label{correlation}\end{aligned}$$ where $k_F=\pi(1/2 - M)$. It is well known that these exponents provide essential information on the observable quantities. For instance, the low temperature behaviors of the staggered susceptibilities are characterized by these exponents [@chitra]: $$\chi_\perp(T) \sim T^{-(2-\eta_x)} \quad {\rm and} \quad \chi_\parallel(T) \sim T^{-(2-\eta_z)}.$$ Since the relation $\eta_x \eta_z =1$ is satisfied for the TL liquid, the smaller exponent yields the dominant spin fluctuation at low temperatures. Since the bond-alternating chain usually has $\eta_x < \eta_z$, $\chi_\perp$ shows the stronger divergence in the $T\to 0$ limit. However, if the incommensurate spin correlation along the magnetic field is enhanced, $\chi_\parallel$ can be dominant in the low temperature region. In order to see how the frustrating coupling $J'$ affects the incommensurate correlation, we directly calculate the staggered susceptibilities $\chi_\perp(T)$ and $\chi_\parallel(T)$ with the finite temperature DMRG [@moukouri; @wang; @shibata], which is the most reliable method to investigate an infinitely long chain with the frustration [@maisinger; @klumper; @maeshima]. In actual computations, we selectively employ the infinite size and finite size algorithms of the DMRG. For a calculation of $\chi_\perp$, we perform the infinite size DMRG with a weak commensurate staggered field along the $x$ direction; $\chi_\perp$ is obtained as a numerical derivative with respect to the staggered field.
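The exponents quoted below are extracted by power-law fitting of $\chi(T)$ in the low-temperature region. A minimal sketch of such a fit (our own illustrative helper, assuming clean power-law data) reads:

```python
import math

def tl_exponent(T, chi):
    """Extract a TL exponent eta from susceptibility data assuming
    chi(T) ~ T^{-(2 - eta)}: a linear least-squares fit of log(chi)
    against log(T) gives slope -(2 - eta)."""
    x = [math.log(t) for t in T]
    y = [math.log(c) for c in chi]
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
             / sum((xi - xb) ** 2 for xi in x))
    return 2.0 + slope  # slope = -(2 - eta)

# Synthetic check: chi ~ T^{-1.2} corresponds to eta = 0.8.
T = [0.02 * (i + 1) for i in range(10)]
chi = [t ** (-1.2) for t in T]
assert abs(tl_exponent(T, chi) - 0.8) < 1e-8
```

In practice the fit window must be restricted to temperatures where the TL power law holds, below any crossover scale.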
In contrast, it is generally difficult to treat directly the $2k_F$-oscillating magnetic field with the finite temperature DMRG. Thus we start with the linear-response formula: $$\chi_\parallel(T)= \sum_{|r|\le r_{\rm c}} e^{i2k_Fr} \int_{0}^{\beta}d\tau \langle {\cal S}^z(\tau,r) {\cal S}^z(0,0) \rangle, \label{eq:chidmrg}$$ where $\beta$ is the inverse temperature, $r_{\rm c}$ is a cut-off in the real-space direction, and ${\cal S}^z(\tau,r)$ is the spin operator in the Heisenberg representation at an imaginary time $\tau$ and position $r$. By using the finite size algorithm of the DMRG for the quantum transfer matrix ${\cal T}$, we calculate the correlation function $\langle {\cal S}^z(\tau,r) {\cal S}^z(0,0) \rangle$. After obtaining the maximum eigenvalue of $\cal T$ and the corresponding eigenvector $ |\psi_{\rm max}\rangle$, we calculate $\langle {\cal S}^z(\tau,r) {\cal S}^z(0,0) \rangle$ as $$\langle {\cal S}^z(\tau,r) {\cal S}^z(0,0) \rangle = \frac{\langle \psi_{\rm max}| S^z_{\tau,r} {\cal T}^r S^z_{0,0} |\psi_{\rm max}\rangle }{\langle \psi_{\rm max}|{\cal T}^r |\psi_{\rm max}\rangle }. \label{cordmrg}$$ Here $S^\alpha_{\tau, r}$ is the spin operator at a position $(\tau,r)$ on the checkerboard lattice obtained via the Suzuki-Trotter decomposition [@ST]. In the numerical computation, the integral for $\tau$ in Eq. (\[eq:chidmrg\]) is replaced by a summation with a finite imaginary-time slice $\epsilon=\beta/N$, where $N$ is the Trotter number. In the following results, we have set $r_{\rm c}=200$ and $N=80$, and confirmed sufficient convergence with respect to $r_{\rm c}$ and $N$. Here, it should be noted that, especially at $M=1/4$, the consistency of the finite size algorithm can be directly checked with the numerical derivative obtained by the infinite-size DMRG. This is because the renormalization process for the quantum transfer matrix is compatible with the periodicity of $k_F=\pi/2$ at $M=1/4$.
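Once the correlation function has been tabulated, the discretized version of Eq. (\[eq:chidmrg\]) reduces to a weighted double sum. The sketch below assembles it from precomputed data; the layout `corr[r][tau]` and the use of only the real part of $e^{i2k_Fr}$ are our assumptions for illustration, not the paper's actual implementation.

```python
import math

def chi_parallel(corr, kF, beta, N):
    """Discretized Eq. (5): corr[idx] holds N imaginary-time samples of
    <S^z(tau, r) S^z(0,0)> for r = idx - r_c, idx = 0..2*r_c; the tau
    integral becomes a Riemann sum with slice eps = beta / N."""
    eps = beta / N
    r_c = (len(corr) - 1) // 2
    chi = 0.0
    for idx, samples in enumerate(corr):
        r = idx - r_c
        phase = math.cos(2.0 * kF * r)   # real part of e^{i 2 kF r}
        chi += phase * eps * sum(samples)
    return chi
```

For a toy check, a correlation concentrated at $r=0$ with constant value $c$ gives $\chi_\parallel = c\,\beta$, as expected from the formula.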
We have also confirmed that both results are in good agreement with each other. In Fig. \[fig:kais\], we show comparisons of $\chi_\perp$ and $\chi_\parallel$ for $(\alpha,J',H)=(0.45,0.05,1.16)$ and $(\alpha, J', H)=(0.45,0.15,1.19)$. In both cases, the corresponding magnetizations at zero temperature are slightly lower than $M=1/4$. We can see that $\chi_\perp$ for $J'=0.05$ is always larger than $\chi_\parallel$, implying that the commensurate fluctuation is still dominant. The estimated exponents are $\eta_x=0.78$ and $\eta_z=1.3$, which yields $\eta_x\eta_z=1.0$. In contrast, for $J'=0.15$, the susceptibilities clearly show the crossover around $T\sim0.1$; as the temperature is decreased below $T\simeq 0.1$, the divergence of $\chi_\perp$ is reduced, while that of $\chi_\parallel$ is enhanced. The TL exponents in the low temperature region are extracted as $\eta_x=1.3$ and $\eta_z=0.8$, which also gives $\eta_x \eta_z \sim 1.0$. We can thus verify that the $\eta$-inversion is actually realized by the frustration effect. In addition, we have found that, as $T \rightarrow 0$, $\chi_\parallel$ in the $M=1/4$ plateau diverges exponentially while $\chi_\perp$ converges to a finite value. This fact suggests that the appearance of the plateau gap plays an important role for the $\eta$-inversion. ![Staggered susceptibilities $\chi_\parallel$ and $\chi_\perp$ (a) for $(\alpha,J',H)=(0.45,0.05,1.16)$ and (b) for $(\alpha, J', H)=(0.45,0.15,1.19)$. The dotted (thin solid) lines show the results of power-law fitting for $\chi_{\perp} (\chi_\parallel)$. []{data-label="fig:kais"}](kais0.45-n0.05-h1.16268.eps "fig:"){width="6.0cm"} ![Staggered susceptibilities $\chi_\parallel$ and $\chi_\perp$ (a) for $(\alpha,J',H)=(0.45,0.05,1.16)$ and (b) for $(\alpha, J', H)=(0.45,0.15,1.19)$. The dotted (thin solid) lines show the results of power-law fitting for $\chi_{\perp} (\chi_\parallel)$.
[]{data-label="fig:kais"}](kais0.45-n0.15-h1.19006.eps "fig:"){width="6.0cm"} In order to discuss the microscopic origin of the $\eta$-inversion, let us recall that the Hamiltonian (\[hamil\]) around the plateau can be mapped to the effective S=1/2 XXZ chain [@totsuka], where the anisotropy and magnetic field of the effective model are given by $\Delta=\frac{1}{2}\frac{2J'+ \alpha}{|2J'-\alpha|}$ and $H_{\rm eff}=H-1-(2J'+\alpha)/4$ respectively. The exact critical exponents of the XXZ chain in a magnetic field can be derived from the Bethe ansatz integral equation for the dressed charge [@BA]. Since the $\eta_z$ of the S=1/2 XXZ chain is a monotonically increasing function of $|H_{\rm eff}|$, it is sufficient to investigate $\eta_z$ at $H_{\rm eff}=0$ for the purpose of understanding the appearance of the $\eta$-inversion. For the XXZ model, the following fact is well known: as long as $\Delta \le 1$, the XXZ chain is in the critical TL phase with $\eta_z>1$, while, for $\Delta >1$, the excitation gap opens and, at the same time, $\eta_z <1$ appears only near zero magnetic field. We therefore find that [*the criterion for $\eta_x > \eta_z$ is equivalent to the one for the gap formation*]{}. Turning to the original model, we can see that the condition for $\eta_x > \eta_z$ is deduced as $\alpha /6 <J' < 3\alpha/2$, which is satisfied by the parameters used in Fig. \[fig:kais\] (b). Since the effective anisotropy $\Delta$ becomes large as $J'$ approaches $\alpha /2$, we can also understand that the region $\eta_x >\eta_z$ extends as $J'\to \alpha/2$. Although the mapping to the XXZ chain is based on perturbation theory from the $\alpha,J'\to 0$ limit, the precise study by the level spectroscopy method [@LSM] gives a good estimation for the $1/4$ plateau, where the effective XXZ model picture is basically maintained [@Tone2].
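The effective anisotropy and the resulting criterion can be checked numerically. The few lines below are our own sketch (the function names are ours); they confirm that the closed-form window $\alpha/6 < J' < 3\alpha/2$ coincides with the $\Delta > 1$ condition.

```python
def xxz_anisotropy(alpha, Jp):
    """Effective XXZ anisotropy from the mapping around the 1/4
    plateau: Delta = (1/2)(2J' + alpha)/|2J' - alpha|."""
    return 0.5 * (2.0 * Jp + alpha) / abs(2.0 * Jp - alpha)

def eta_inverted(alpha, Jp):
    """Delta > 1 (gapped effective XXZ chain, eta_z < 1 near
    H_eff = 0) signals the eta-inversion."""
    return xxz_anisotropy(alpha, Jp) > 1.0

# Parameters of Fig. 2: J' = 0.05 stays commensurate, J' = 0.15 inverts.
assert not eta_inverted(0.45, 0.05)
assert eta_inverted(0.45, 0.15)
# The closed-form window agrees with the Delta > 1 criterion.
for Jp in (0.08, 0.2, 0.5, 0.6, 0.7):
    assert eta_inverted(0.45, Jp) == (0.45 / 6 < Jp < 3 * 0.45 / 2)
```

Note that $\Delta$ diverges at $J' = \alpha/2$, consistent with the region $\eta_x > \eta_z$ extending as $J' \to \alpha/2$.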
In fact, $\eta_z$ of the effective XXZ model for $(\alpha,J',H)=(0.45,0.15,1.19)$ is calculated as $\eta_z=0.735$, which is consistent with the fitting result in Fig. \[fig:kais\] (b). Here we should note that, for $\alpha=0.45$, the $1/4$ plateau at $T=0$ emerges for $0.08<J'<0.5$ [@Tone2], implying that F$_5$PNN is located at a subtle position near the plateau phase boundary. On the basis of the results mentioned above, let us discuss the effect of the inter-chain interaction $J_{\rm int}$, which induces the 3D long-range order corresponding to the dominant spin correlation [@sakai2]. Indeed, a sharp peak of the specific heat was observed for F$_5$PNN in a magnetic field [@yoshida]. For the weakly frustrating bond-alternating chain, the antiferromagnetic long-range order is realized for $H_{c1}<H<H_{c2}$. However, when the strong frustrating coupling $J'$ induces the $\eta$-inversion, the 3D order should also change from the ordinary Néel order perpendicular to $H$ into the incommensurate order along $H$ at some field. In order to clarify whether such a change of the order occurs or not, we calculate the theoretical phase diagram of the frustrated bond-alternating chain within the inter-chain mean field approximation combined with the DMRG [@klumper; @nishiyama]; using $\chi_\parallel(T)$ and $\chi_\perp(T)$ obtained by the finite temperature DMRG, we can extract the phase boundary from the equation $\chi_\gamma (T_c)=(zJ_{\rm int})^{-1}$ [@scalapino], where $z$ is a coordination number and $\gamma = \perp$ or $\parallel$. Here we assume that the inter-chain coupling is weak and is not frustrating. In Fig. \[fig:phase\], we show the $H$-$T$ phase diagrams for $\alpha=0.45$ with $J'=0.05$ corresponding to F$_5$PNN, and with $J'=0.15$. In this figure, the phase boundary for a fixed $zJ_{\rm int}$ is indicated by a line.
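If the dominant susceptibility follows a clean power law $\chi(T)=C\,T^{-(2-\eta)}$ down to $T_c$, the mean-field condition $\chi_\gamma(T_c)=(zJ_{\rm int})^{-1}$ can be solved in closed form. The sketch below works under that assumption; the prefactor $C$ is not given in the text and is treated as a free parameter.

```python
def mean_field_tc(C, eta, zJint):
    """Solve chi(T_c) = 1/(z*J_int) for chi(T) = C * T^{-(2 - eta)},
    giving T_c = (C * z * J_int)^{1/(2 - eta)}."""
    return (C * zJint) ** (1.0 / (2.0 - eta))

# The smaller exponent (stronger divergence of chi) yields the higher
# T_c, so the dominant fluctuation selects the ordered phase.
assert mean_field_tc(1.0, 0.8, 0.05) > mean_field_tc(1.0, 1.3, 0.05)
```

This makes explicit why the $\eta$-inversion shifts the ordering from the commensurate to the incommensurate channel: whichever of $\chi_\perp$ and $\chi_\parallel$ diverges faster reaches $(zJ_{\rm int})^{-1}$ first.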
For $\alpha=0.45$ and $J'=0.05$, the transition to the commensurate order always appears, reflecting the fact that $\chi_\perp > \chi_\parallel$ in Fig. \[fig:kais\] (a). Moreover, we can see that the phase boundary for F$_5$PNN [@yoshida] is well reproduced by the theoretical curve for $zJ_{\rm int}=1/20$. The estimated coupling $J/k_{\rm B}=6$ K is also consistent with the one obtained in Ref. [@goto]. This fact provides another verification for $J'=0.05$ determined from the magnetization curve. If we assume $z=4$, the inter-chain coupling is estimated as $J_{\rm int}/k_{\rm B}\simeq0.07$ K. As $J'$ is increased, the usual commensurate fluctuation is suppressed around $H\simeq 1$ and, at the same time, the fluctuation associated with $\eta_z$ is increased. Accordingly, the phase boundary of the commensurate Néel order is shifted to the low temperature side and, as long as $\chi_{\parallel}= 1/zJ_{\rm int}$ is satisfied, the incommensurate order can appear on the left side of the bold line in Fig. \[fig:phase\] (b), which becomes identical to the commensurate-incommensurate transition line for sufficiently large $J_{\rm int}$. As was seen before, the $\eta$-inversion appears simultaneously with the plateau formation. Thus we can understand that the incommensurate order develops in the intermediate field region. Finally, we comment on the NMR relaxation rate of F$_5$PNN, for which the theoretical $H$-$T$ phase diagram does not show the $\eta$-inversion. Since the temperature region used for the estimation of $\eta_z$ in Ref. [@izumi1; @izumi] is close to the transition temperature, the experimentally observed crossover of $\eta_z$ for F$_5$PNN is possibly due to the anomaly originating from the transition to the commensurate Néel order. However, the $\eta$-inversion certainly occurs in the large $J'$ region. ![ Phase diagram determined by the inter-chain mean field approximation (a) for $(\alpha,J')=(0.45,0.05)$ and (b) for $(\alpha,J')=(0.45,0.15)$.
Pluses show the critical temperature $T_{\rm c}(H)$ of F$_5$PNN with $J/k_{\rm B}=6$ K. The bold line in (b) indicates the $\chi_\parallel=\chi_\perp$ line. Dotted lines represent the critical fields ($H_{\rm c1}$ and $H_{\rm c2}$), and the lower (upper) edge of the plateau $H_{\rm p1}$ ($H_{\rm p2}$) for the single chain.[]{data-label="fig:phase"}](Tc0.45-n0.05.eps "fig:"){width="6.4cm"} ![ Phase diagram determined by the inter-chain mean field approximation (a) for $(\alpha,J')=(0.45,0.05)$ and (b) for $(\alpha,J')=(0.45,0.15)$. Pluses show the critical temperature $T_{\rm c}(H)$ of F$_5$PNN with $J/k_{\rm B}=6$ K. The bold line in (b) indicates the $\chi_\parallel=\chi_\perp$ line. Dotted lines represent the critical fields ($H_{\rm c1}$ and $H_{\rm c2}$), and the lower (upper) edge of the plateau $H_{\rm p1}$ ($H_{\rm p2}$) for the single chain.[]{data-label="fig:phase"}](Tc0.45-n0.15.eps "fig:"){width="6.4cm"} To summarize, we have studied the bond-alternating spin chain with the frustrating interaction in a magnetic field. What we want to emphasize in the present study is that the combination of the magnetic field and frustration gives rise to various exotic phenomena. We have actually shown that the frustration explains the crossover of $\alpha$ in the magnetization curve of F$_5$PNN. We have also clarified that the $\eta$-inversion of the TL exponents is induced in a certain range of the magnetic field by the frustration. The mechanism of the $\eta$-inversion is explained on the basis of the effective XXZ model associated with the 1/4 plateau formation. Moreover, we discussed the possibility of the field-induced incommensurate order. Although the nature of the transition between the commensurate and incommensurate orders is not clear within the mean field theory, we believe that we have elucidated the importance of the incommensurate correlation.
Recently, several frustrated spin systems have been studied for which a low-energy effective XXZ model often works successfully [@totsuka; @XXZ; @MILA]. In particular, a similar enhancement of $\eta_x$ has also been reported in Ref. [@MILA]. This suggests that the mechanism for the $\eta$-inversion is applicable to a class of realistic frustrated spin systems. We hope that the present theory stimulates further research on such interesting physics caused by the incommensurate correlation. We would like to thank S. Suga, T. Suzuki, Y. Yoshida, K. Izumi, Tsuneaki Goto and Takao Goto for fruitful discussions. This work was partially supported by Grants-in-Aid for Scientific Research on Priority Areas (B), for Scientific Research (C) (No.14540329), and for Creative Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, Japan.

[00]{}

T. Sakai and M. Takahashi, J. Phys. Soc. Jpn. [**60**]{}, 3615 (1991).

T. Sakai, J. Phys. Soc. Jpn. [**64**]{}, 251 (1995).

Y. Hosokoshi [*et al.*]{}, Physica B [**201**]{}, 497 (1994).

M. Takahashi [*et al.*]{}, Mol. Cryst. Liq. Cryst. [**306**]{}, 111 (1997).

T. Goto (private communication).

K. Izumi, T. Goto, Y. Hosokoshi, and J.-P. Boucher, The Physical Society of Japan, 58th Annual Meeting.

K. Totsuka, Phys. Rev. B [**57**]{}, 3454 (1998).

T. Tonegawa, T. Hikihara, K. Okamoto, and M. Kaburagi, Physica B [**294-295**]{}, 39 (2001), and references therein.

N. Haga and S. Suga, J. Phys. Soc. Jpn. [**69**]{}, 2431 (2000).

S. Moukouri and L. G. Caron, Phys. Rev. Lett. [**77**]{}, 4640 (1996).

X. Wang and T. Xiang, Phys. Rev. B [**56**]{}, 5061 (1997).

N. Shibata, J. Phys. Soc. Jpn. [**66**]{}, 2221 (1997).

R. Chitra and T. Giamarchi, Phys. Rev. B [**55**]{}, 5816 (1997). See also Ref. [@MILA].

K. Maisinger and U. Schollw[ö]{}ck, Phys. Rev. Lett. [**81**]{}, 445 (1998).

A. Kl[ü]{}mper, R. Raupach, and F. Sch[ö]{}nfeld, Phys. Rev. B [**59**]{}, 3612 (1999).

N. Maeshima and K. Okunishi, Phys. Rev. B [**62**]{}, 934 (2000).

M. Suzuki, Prog. Theor. Phys. [**56**]{}, 1454 (1976).

N. M. Bogoliubov, A. G. Izergin, and V. E. Korepin, Nucl. Phys. B [**275**]{}, 687 (1986).

K. Okamoto and K. Nomura, Phys. Lett. A [**169**]{}, 433 (1992).

T. Sakai, Phys. Rev. B [**62**]{}, R9240 (2000).

Y. Yoshida [*et al.*]{}, Physica B [**329**]{}, 979 (2003).

Z. Honda, K. Katsumata, Y. Nishiyama, and I. Harada, Phys. Rev. B [**63**]{}, 064420 (2001).

D. J. Scalapino, Y. Imry, and P. Pincus, Phys. Rev. B [**11**]{}, 2042 (1975).

K. Okamoto, N. Okazaki, and T. Sakai, J. Phys. Soc. Jpn. [**70**]{}, 636 (2001).

F. Mila, Eur. Phys. J. B [**6**]{}, 201 (1998).
--- abstract: | Let $F: \mathbb{C}^n\rightarrow \mathbb{C}^m$ be a polynomial map with $\deg F=d\geq 2$. We prove that $F$ is invertible if $m = n$ and $\sum^{d-1}_{i=1}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$, which is trivially the case for invertible quadratic maps. More generally, we prove that for affine lines $L = \{\beta + \mu \gamma \mid \mu \in {{\mathbb C}}\} \subseteq {{\mathbb C}}^n$ ($\gamma \ne 0$), $F|_L$ is linearly rectifiable if and only if $\sum^{d-1}_{i=1}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \cdot \gamma \ne 0$ for all $\alpha_i\in L$. This appears to be the case for all affine lines $L$ when $F$ is injective and $d \le 3$. We also prove that if $m = n$ and $\sum^{n}_{i=1}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$, then $F$ is a composition of an invertible linear map and an invertible polynomial map $X+H$ with linear part $X$, such that the subspace generated by $\{{({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha}} \mid \alpha\in\mathbb{C}^n\}$ consists of nilpotent matrices. author: - | Hongbo Guo, Michiel de Bondt, Xiankun Du[^1], Xiaosong Sun\ School of Mathematics, Jilin University, Changchun 130012, China\ Radboud University, Nijmegen, The Netherlands[^2]\ Email: [email protected], [email protected], [email protected],\ [email protected] title: 'Polynomial maps with invertible sums of Jacobian matrices and of directional derivatives[^3]' --- **Keywords:** Jacobian matrices, Jacobian conjecture, polynomial embedding, linearly rectifiable MSC(2000): 14R10, 14R15 Introduction ============ Denote by ${{\mathcal J}}F$ the Jacobian matrix of a polynomial map $F:\mathbb{C}^n\rightarrow\mathbb{C}^n$.
The Jacobian conjecture states that $F$ is invertible if ${{\mathcal J}}F$ is invertible, or equivalently if ${({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha}}$ is invertible for all $\alpha \in \mathbb{C}^n$. The conjecture has been reduced to polynomial maps of the form $F=X+H$, where $H$ is homogeneous (of degree 3) and ${{\mathcal J}}H$ is nilpotent, by Bass, Connell and Wright in [@Bass], and independently by Yagzhev in [@Yagzhev]. Subsequent reductions are to the case where for the polynomial map $F = X + H$ above, each component of $H$ is a cube of a linear form, by Drużkowski in [@Druzkowski], and to the case where ${{\mathcal J}}H$ is symmetric, by De Bondt and Van den Essen in [@Bondt], but these reductions cannot be applied simultaneously, see also [@BondtEssen]. More details about the Jacobian conjecture can be found in [@Arno] and [@homokema]. Invertibility of a polynomial map $F$ has been examined by several authors under certain conditions on the evaluated Jacobian matrices ${({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha}}, ~\alpha\in\mathbb{C}^n$. With an extra assumption that $F-X$ is cubic homogeneous, Yagzhev proved in [@Yagzhev] that if ${({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_1}}+{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_2}}$ is invertible for all $\alpha_1,\alpha_2\in \mathbb{C}^n$, then the polynomial map $F$ is invertible. The Jacobian matrix ${{\mathcal J}}H$ of a polynomial map $H$ is called strongly nilpotent if ${({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_1}}\cdot{({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_2}}\cdot\cdots\cdot {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_n}}=0$ for all $\alpha_i \in \mathbb{C}^n$. 
Van den Essen and Hubbers proved in [@Essen] that ${{\mathcal J}}H$ is strongly nilpotent if and only if there exists $T\in GL_n(\mathbb{C})$ such that $T^{-1}({{\mathcal J}}H)T$ is strictly upper triangular, if and only if the polynomial map $F=X+H$ is linearly triangularizable (so $F$ is invertible). This result was generalized by Yu in [@Yu], where he additionally observed that ${{\mathcal J}}H$ is already strongly nilpotent if ${({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_1}}\cdot{({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_2}}\cdot\cdots\cdot {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_m}}=0$ for some $m \in {{\mathbb N}}$ and all $\alpha_i \in \mathbb{C}^n$. In [@Sun], Sun extended the notion of strong nilpotency and proved that a polynomial map $F=X+H$ is invertible if the Jacobian matrix ${{\mathcal J}}H$ is *additive-nilpotent*, i.e. $\sum_{i=1}^m {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is nilpotent for each positive integer $m$ and all $\alpha_i\in \mathbb{C}^n$, which generalizes results in [@Essen; @wang; @Yagzhev; @Yu]. Instead of looking at polynomial maps $F=X+H$ such that ${{\mathcal J}}H$ is nilpotent, we look at polynomial maps $F$ in general, and assume that $\det \sum_{i=1}^{d-1} {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \ne 0$ for all $\alpha_i\in \mathbb{C}^n$, where $d = \deg F$. More generally, we only assume that $\sum_{i=1}^{d-1} {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \cdot \gamma \ne 0$, and only for $\alpha_i\in \mathbb{C}^n$ which are collinear, where $\gamma \ne 0$ is the direction of the line.
Observe that if $F=X+H$ is a polynomial map such that ${{\mathcal J}}H$ is additive-nilpotent, then $\sum_{i=1}^{m}{({{\mathcal J}}{\tilde{F}})|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $m \in {{\mathbb N}}$ and all $\alpha_i\in \mathbb{C}^n$, where ${\tilde{F}}=L_1\circ F\circ L_2$ is a composition of $F$ and invertible linear maps $L_1$ and $L_2$. Conversely, it is interesting to describe the polynomial maps such that sums of the evaluated Jacobian matrices are invertible. In this paper, we first prove that a polynomial map $F$ of degree $d$ is invertible if $\sum_{i=1}^{d-1} {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$. This generalizes results of Wang in [@wang], Yagzhev in [@Yagzhev], Van den Essen and Hubbers in [@Essen] and Sun in [@Sun]. Then we prove the invertibility of a polynomial map $F$ such that $\sum_{i=1}^{n}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$, and finally characterize such a polynomial map as a composition of an invertible linear map and an invertible polynomial map $X+H$ such that ${{\mathcal J}}H$ is additive-nilpotent. Additive properties of the derivative on lines ============================================== \[addexp1\] Assume $\lambda_1 ,\lambda_2, \ldots, \lambda_{d-1} \in {{\mathbb C}}$ such that $\sum_{i\in I} \lambda_i \ne 0$ for all nonempty $I \subseteq \{1,2,\ldots,d-1\}$, and $P \in {{\mathbb C}}[[T]]$ with constant term $\lambda_1 + \lambda_2 + \cdots + \lambda_{d-1}$. Then there are $r_1, r_2, \ldots, \allowbreak r_{d-1} \in {{\mathbb C}}$ such that $$P - \sum_{i=1}^{d-1} \lambda_i \exp(r_i T)$$ is divisible by $T^d$, where $\exp(T) = \sum_{j=0}^{\infty} \frac{1}{j!} T^j$. 
Write $$P = \sum_{j=0}^{\infty} \frac{p_j}{j!} T^j$$ Then we must find a solution $(Y_1,Y_2,\ldots, Y_{d-1}) = (r_1,r_2,\ldots,r_{d-1}) \in {{\mathbb C}}^{d-1}$ of $$\sum_{i=1}^{d-1} \lambda_i Y_i^j = p_j \quad (j = 0,1,\ldots,d-1) \label{eq1}$$ The equation for $j = 0$ is fulfilled by assumption, and finding a solution of (\[eq1\]) is the same as finding a solution $(Y_1,Y_2,\ldots, Y_d) = (r_1,r_2,\ldots,r_d)$ of $$\sum_{i=1}^{d-1} \lambda_i Y_i^j = p_j Y_d^j \quad (j = 1,\ldots,d-1) \label{eq2}$$ for which $r_d = 1$. Since $(Y_1,Y_2,\ldots, Y_d) = 0$ is a solution of (\[eq2\]), it follows from Krull’s Height Theorem that the dimension of the set of solutions $(r_1,r_2,\ldots,r_d) \in {{\mathbb C}}^d$ of (\[eq2\]) is at least one. Hence there exists a nonzero solution $(r_1,r_2,\ldots,r_d) \in {{\mathbb C}}^d$ of (\[eq2\]). If $r_d \ne 0$, then $r_d^{-1}(r_1,r_2,\ldots,r_d)$ is a solution of (\[eq2\]) as well, because the equations of (\[eq2\]) are homogeneous. Hence $r_d^{-1}(r_1,r_2,\ldots,r_{d-1})$ is a solution of (\[eq1\]) in that case. So assume that $r_d = 0$. Then $\sum_{i=1}^{d-1} \lambda_i r_i^j = 0$ for all $j$. Take $e \le d-1$ and pairwise distinct nonzero $s_1, s_2, \ldots, s_e$ such that $\{0,r_1, r_2, \ldots, r_{d-1}\} = \{0,s_1, s_2, \ldots,s_e\}$. Then $e \ge 1$ because $(r_1,r_2,\ldots,r_d) \ne 0$, and $$0 = \sum_{i=1}^{d-1} \lambda_i r_i^j = \sum_{k=1}^e s_k^j \sum_{r_i = s_k} \lambda_i$$ for all $j$ such that $1 \le j \le e$. This means that the vector $v$ defined by $v_k := \sum_{r_i = s_k} \lambda_i$ for all $k$ satisfies $M v = 0$, where $M$ is the Vandermonde matrix with entries $M_{jk} = s_k^j$. Since each $v_k$ is a sum of the $\lambda_i$ over a nonempty index set and hence nonzero by assumption, this contradicts $\det M \ne 0$. Let $f \in {{\mathbb C}}[X] = {{\mathbb C}}[X_1,X_2,\ldots, X_n]$ be a polynomial of degree $d$ and $\beta, \gamma \in {{\mathbb C}}^n$.
Set $g(T) := f(\beta + T \gamma)$ and $D := \sum_{i=1}^n \gamma_i { \frac{\partial \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \partial X_i \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}}$. Notice that $T \mapsto D$ induces an isomorphism of ${{\mathbb C}}[T]$ and ${{\mathbb C}}[D]$. By the chain rule, $$\begin{aligned} { \frac{\mathrm{d} \ifthenelse{\equal{i}{Default}}{}{^{i}}}{ \mathrm{d} T \ifthenelse{\equal{i}{Default}}{}{^{i}}}} \big(f(\beta + T \gamma)\big) &= { \frac{\mathrm{d} \ifthenelse{\equal{i-1}{Default}}{}{^{i-1}}}{ \mathrm{d} T \ifthenelse{\equal{i-1}{Default}}{}{^{i-1}}}} \big({({{\mathcal J}}f)|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta + T \gamma}} \cdot \gamma\big) \\ &= { \frac{\mathrm{d} \ifthenelse{\equal{i-1}{Default}}{}{^{i-1}}}{ \mathrm{d} T \ifthenelse{\equal{i-1}{Default}}{}{^{i-1}}}} \big((D f) (\beta + T \gamma)\big) = (D^i f) (\beta + T \gamma)\end{aligned}$$ follows for all $i \in {{\mathbb N}}$ by induction on $i$. Using the Taylor series at $0$ of $g$, we see that for all $c \in {{\mathbb C}}$, $$\begin{aligned} f(\beta + c \gamma) = g(c) &= \sum_{i=0}^{\infty} \frac{(c-0)^i}{i!} {\bigg({ \frac{\mathrm{d} \ifthenelse{\equal{i}{Default}}{}{^{i}}}{ \mathrm{d} T \ifthenelse{\equal{i}{Default}}{}{^{i}}}} g(T) \bigg)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} \nonumber \\ &= \sum_{i=0}^{\infty} \frac{c^i}{i!} {\bigg({ \frac{\mathrm{d} \ifthenelse{\equal{i}{Default}}{}{^{i}}}{ \mathrm{d} T \ifthenelse{\equal{i}{Default}}{}{^{i}}}} f(\beta + T \gamma) \bigg)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} \nonumber \\ &= {\sum_{i=0}^{\infty} \frac{c^i}{i!} \big((D^i f)(\beta + T \gamma) \big)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} \nonumber \\ &= {{\big((\exp c D) f\big)\big|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta + T \gamma}}\Big|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} = {\big((\exp c D) f\big)\big|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta}} \label{taylor}\end{aligned}$$ \[lineinj\] Let $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^m$ be 
a polynomial map of degree $d$ and $\lambda_i \in {{\mathbb C}}$ for all $i$, such that $\sum_{i\in I} \lambda_i \ne 0$ for all nonempty $I \subseteq \{1,2,\ldots,d-1\}$. Assume $\beta, \gamma \in {{\mathbb C}}^n$ such that $\gamma \ne 0$. If every sum of $d-1$ directional derivatives of $F|_{\beta + {{\mathbb C}}\gamma}$ along $\gamma$ is nonzero ($\lambda_i = 1$ for all $i$ below), or more generally, $$\sum_{i=1}^{d-1} \lambda_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \cdot \gamma \ne 0$$ for all $\alpha_i \in \{ \beta + \mu \gamma \mid \mu \in {{\mathbb C}}\}$, then $F(\beta) \ne F(\beta + \gamma)$. Set $D := \sum_{i=1}^n \gamma_i { \frac{\partial \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \partial X_i \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}}$ and $P(T) := \big(\sum_{i=1}^{d-1} \lambda_i\big) T^{-1}(\exp(T)-1)$. By (\[taylor\]), $$\begin{aligned} \bigg(\sum_{i=1}^{d-1} \lambda_i\bigg) \cdot \big(F_j(\beta + \gamma) - F_j(\beta)\big) &= \bigg(\sum_{i=1}^{d-1} \lambda_i\bigg) \cdot {\Big(\big(\exp(D) - 1\big) F_j\Big)\Big|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta}} \\ &= {\big( D P(D) F_j\big)\big|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta}} = {\big( P(D) (D F_j)\big)\big|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta}}\end{aligned}$$ for all $j$. Choose $r_i$ as in Lemma \[addexp1\] for all $i$. From the definition of $D$ and (\[taylor\]) with $c = r_i$ and $f = D F_j$, $$\begin{aligned} \sum_{i=1}^{d-1} \lambda_i \cdot {({{\mathcal J}}F_j)|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta + r_i \gamma}} \cdot \gamma &= \sum_{i=1}^{d-1} \lambda_i(D F_j) (\beta + r_i \gamma) \\ &= {\bigg(\sum_{i=1}^{d-1} \lambda_i \exp(r_i D) (D F_j) \bigg)\bigg|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta}}\end{aligned}$$ follows for all $j$. Since $P(T) - \sum_{i=1}^{d-1} \lambda_i \exp(r_i T)$ is divisible by $T^d$ and $D F_j$ has degree at most $d - 1$, we have $$P(D) (D F_j) = \sum_{i=1}^{d-1} \lambda_i \exp(r_i D) (D F_j)$$ for all $j$.
By substituting $X = \beta$ on both sides, we obtain $$\bigg(\sum_{i=1}^{d-1} \lambda_i\bigg) \cdot \big(F_j(\beta + \gamma) - F_j(\beta)\big) = \sum_{i=1}^{d-1} \lambda_i {({{\mathcal J}}F_j)|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta+r_i \gamma}} \cdot \gamma$$ for all $j$, which gives the desired result. \[allinj\] Let $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^m$ be a polynomial map of degree $d$ and $\lambda_i \in {{\mathbb C}}$ for all $i$, such that $\sum_{i\in I} \lambda_i \ne 0$ for all nonempty $I \subseteq \{1,2,\ldots,d-1\}$. If ${\operatorname{rk}}(\sum_{i=1}^{d-1} \lambda_i {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}) = n$ for all $\alpha_i \in {{\mathbb C}}^n$, then $F$ is injective. If additionally $n = m$, then $F$ is an invertible polynomial map. Assume $F(\beta) = F(\beta + \gamma)$ for some $\beta, \gamma \in {{\mathbb C}}^n$ with $\gamma \ne 0$. By Proposition \[lineinj\], there are $\alpha_i \in {{\mathbb C}}^n$ such that $$\sum_{i=1}^{d-1} \lambda_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \cdot \gamma = 0$$ and in particular ${\operatorname{rk}}\big(\sum_{i=1}^{d-1} \lambda_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\big) \ne n$. If $n = m$, then a special case of the Cynk-Rusek Theorem in [@Cynk] (see also [@Yagzhev Lemma 3] and [@Borel]) tells us that $F$ is an invertible polynomial map in case it is injective, which is the case here. When $d=2$ or $d=3$, Corollary \[allinj\] gives a result of Wang [@wang Theorem 1.2.2] and one of Yagzhev [@Yagzhev Theorem 1(ii)], respectively. Corollary \[allinj\] also generalizes [@Sun Theorem 2.2.1, Corollary 2.2.2]. Now you might think that for Theorem \[lineinj\], the condition that there are $d-1$ collinear $\alpha_i$’s with the additive property therein is weaker than a similar property for $s$ $\alpha_i$’s, where $s \in {{\mathbb N}}$ is arbitrary. This is however not the case.
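For $d = 3$ and $\lambda_1 = \lambda_2 = 1$, the $r_i$ produced by Lemma \[addexp1\] are $(1 \pm 1/\sqrt{3})/2$, the two-point Gauss-Legendre nodes on $[0,1]$, and the identity at the end of the proof of Proposition \[lineinj\] becomes $F(\beta+\gamma) - F(\beta) = \frac12\big({({{\mathcal J}}F)|_{\beta+r_1\gamma}} + {({{\mathcal J}}F)|_{\beta+r_2\gamma}}\big)\cdot\gamma$ for every cubic $F$. A quick numerical check of this identity (the cubic map below is our own arbitrary choice):

```python
import numpy as np

# An arbitrary cubic map F: R^2 -> R^2 (deg F = 3; our own illustrative choice)
# together with its Jacobian matrix.
def F(x):
    x1, x2 = x
    return np.array([x1 + x2**3 + x1 * x2,
                     x2 + x1**3])

def JF(x):
    x1, x2 = x
    return np.array([[1 + x2, 3 * x2**2 + x1],
                     [3 * x1**2, 1.0]])

# The r_i from Lemma [addexp1] for d = 3, lambda_1 = lambda_2 = 1:
# the two-point Gauss-Legendre nodes on [0, 1].
r1 = (1 - 1 / np.sqrt(3)) / 2
r2 = (1 + 1 / np.sqrt(3)) / 2

def both_sides(beta, gamma):
    """Left and right hand side of
    F(beta+gamma) - F(beta) = (JF|_{beta+r1*gamma} + JF|_{beta+r2*gamma})/2 . gamma."""
    lhs = F(beta + gamma) - F(beta)
    rhs = 0.5 * (JF(beta + r1 * gamma) + JF(beta + r2 * gamma)) @ gamma
    return lhs, rhs
```

The identity holds exactly (up to rounding) for any choice of $\beta$ and $\gamma$, since the directional derivative along the line has degree at most $2$ in the line parameter.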
\[lineadd\] Let $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^m$ be a polynomial map of degree $\le d$ and $\beta, \gamma \in {{\mathbb C}}^n$. Then the following statements are equivalent. 1. There exist $\lambda_1 ,\lambda_2, \ldots, \lambda_{d-1} \in {{\mathbb C}}$ satisfying $\sum_{i\in I} \lambda_i \ne 0$ for all nonempty $I \subseteq \{1,2,\ldots,d-1\}$, such that $$\sum_{i=1}^{d-1} \lambda_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \cdot \gamma \ne 0$$ for all $\alpha_i \in \{\beta + \mu \gamma \mid \mu \in {{\mathbb C}}\}$. 2. $F|_{\beta + {{\mathbb C}}\gamma}$ is linearly rectifiable (in particular injective), i.e. there exists a vector $v \in {{\mathbb C}}^m$ such that $$\label{lineaddv} \sum_{j=1}^m v_j \cdot{ \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} \big(F_j(\beta+T \gamma)\big) = 1$$ 3. For all $s \in {{\mathbb N}}$, $$\sum_{i=1}^s \lambda_i\cdot{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \cdot \gamma \ne 0$$ for all $\lambda_i \in {{\mathbb C}}$ such that $\lambda_1 + \lambda_2 + \cdots + \lambda_s \ne 0$, and all $\alpha_i \in \{\beta + \mu \gamma \mid \mu \in {{\mathbb C}}\}$. Since (3) $\Rightarrow$ (1) is trivial, only two implications remain. (2) [[$\Rightarrow$]{} ]{}(3) : Assume that (2) is satisfied. Take $s \in {{\mathbb N}}$, $\lambda_1, \lambda_2, \ldots, \lambda_s \in {{\mathbb C}}$ such that $\lambda_1 + \lambda_2 + \cdots + \lambda_s \ne 0$, and $\alpha_i \in \{\beta + \mu \gamma \mid \mu \in {{\mathbb C}}\}$. Each $\alpha_i$ is of the form $\alpha_i = \beta + r_i \gamma$ for some $r_i \in {{\mathbb C}}$.
By the chain rule, $$\begin{aligned} v{{^{\mathrm t}}}\cdot \bigg( \sum_{i=1}^s \lambda_i\cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} \cdot \gamma \bigg) &= \sum_{i=1}^s \lambda_i\cdot\bigg(\sum_{j=1}^m v_j \cdot {({{\mathcal J}}F_j)|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta+r_i\gamma}} \cdot \gamma \bigg) \\ &= \sum_{i=1}^s \lambda_i\cdot {\bigg(\sum_{j=1}^m v_j { \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} \big(F_j (\beta + T \gamma)\big)\bigg)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}r_i}} \\ &= \sum_{i=1}^s \lambda_i \cdot {1|_{\ifthenelse{\equal{T}{X}}{}{T=}r_i}} = \sum_{i=1}^s \lambda_i \ne 0\end{aligned}$$ which gives (3). (1) [[$\Rightarrow$]{} ]{}(2) : Assume that (2) does not hold. We will derive a contradiction by showing that (1) does not hold either. Since $\deg_T { \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} F_j(\beta + T \gamma) \le d-1$ for all $j$, the ${{\mathbb C}}$-space $U$ that is generated by $${ \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} F_1(\beta + T \gamma), { \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} F_2(\beta + T \gamma), \ldots, { \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} F_m(\beta + T \gamma)$$ has dimension $s \le d-1$, since $1 \notin U$. Take a basis of $U$ consisting of monic $u_1, u_2, \allowbreak \ldots, \allowbreak u_s \in {{\mathbb C}}[T]$ such that $0 < \deg u_1 < \deg u_2 < \cdots < \deg u_s < d$. Write $u_{ji}$ for the coefficient of $T^i$ of $u_j$. Next, define $p_i$ for $i = 0,1,\ldots,d-1$ as follows.
$$p_i := \left\{ \begin{array}{ll} -\sum_{k=0}^{i-1} p_k u_{jk} & \mbox{if $u_j$ has degree $i$,} \\ \lambda_1 + \lambda_2 + \cdots + \lambda_{d-1} & \mbox{if no $u_j$ has degree $i$.} \end{array} \right.$$ Set $P := \sum_{k=0}^{d-1} \frac{p_k}{k!} T^k$ and choose $r_i$ as in Lemma \[addexp1\] for all $i$. Looking at the term expansion of $u_j$, we see that $$P\Big({ \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}}\Big)u_j = \sum_{k=0}^{\infty} \frac{p_k}{k!} \cdot \sum_{l=0}^{\infty} \frac{(k+l)!}{l!} u_{j,k+l} T^l$$ whence for $i = \deg u_j$ $$\begin{aligned} {\bigg(P\Big({ \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}}\Big) u_j\bigg)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} &= \sum_{k=0}^{\infty} p_k u_{jk} = p_{i} + \sum_{k=0}^{i-1} p_k u_{jk} = 0 \intertext{and similarly for each $i$} {\bigg(\exp\Big(r_i { \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}}\Big)u_j\bigg)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} &= \sum_{k=0}^{\infty} r_i^k u_{jk} = u_j(r_i) = {u_j\big|_{\ifthenelse{\equal{T}{X}}{}{T=}r_i}}\end{aligned}$$ follow for all $j$. By Lemma \[addexp1\], $P - \sum_{i=1}^{d-1} \lambda_i \exp(r_i T)$ is divisible by $T^d$.
Since $\deg u_j < d$ for all $j$, $$0 = {\bigg(P\Big({ \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}}\Big)u_j\bigg)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} = \sum_{i=1}^{d-1} \lambda_i \cdot {\bigg(\exp\Big(r_i { \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}}\Big)u_j\bigg)\bigg|_{\ifthenelse{\equal{T}{X}}{}{T=}0}} = \sum_{i=1}^{d-1} \lambda_i {u_j\big|_{\ifthenelse{\equal{T}{X}}{}{T=}r_i}}$$ Since ${ \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} F_j(\beta + T \gamma)$ is a ${{\mathbb C}}$-linear combination of $u_1, u_2, \ldots, u_s$ for all $j$, we have $$\begin{aligned} 0 &= \sum_{i=1}^{d-1} \lambda_i \cdot {\Big({{\mathcal J}}_T \big(F(\beta + T \gamma)\big)\Big)\Big|_{\ifthenelse{\equal{T}{X}}{}{T=}r_i}} \\ &= \sum_{i=1}^{d-1} \lambda_i \cdot {\big({({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta + T \gamma}} \cdot \gamma\big)\big|_{\ifthenelse{\equal{T}{X}}{}{T=}r_i}} \\ &= \sum_{i=1}^{d-1} \lambda_i \cdot {\big({{\mathcal J}}F\big)\big|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta + r_i \gamma}} \cdot \gamma\end{aligned}$$ which is a contradiction. For the map $F = (X_1 + (X_2 + X_1^2)^2, X_2 + X_1^2)$, only images of lines parallel to the $X_2$-axis are linearly rectifiable. But all images of lines are linearly rectifiable when $F = (X_1 + (X_2 + X_1^2)^2 - (X_3 + X_1^2)^2, X_2 + X_1^2, X_3 + X_1^2)$ or any other invertible cubic map over ${{\mathbb C}}$. This follows from the proposition below. \[cubicrectif\] Let $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^m$ be a polynomial map of degree $\le 3$, and $\beta, \gamma \in {{\mathbb C}}^n$ such that $\gamma \ne 0$. 
If $F|_{\beta + {{\mathbb C}}\gamma}$ is injective and $({{\mathcal J}}F)|_{X = \alpha} \cdot \gamma \ne 0$ for all $\alpha \in \{\beta + \mu \gamma \mid \mu \in {{\mathbb C}}\}$, then $F|_{\beta + {{\mathbb C}}\gamma}$ is linearly rectifiable, i.e. there exists a $v \in {{\mathbb C}}^m$ such that (\[lineaddv\]) holds. Assume $F|_{\beta + {{\mathbb C}}\gamma}$ is not linearly rectifiable. Then there exist monic $u_1, u_2 \in {{\mathbb C}}[T]$ such that $\deg u_i = i$ and for all $j$, ${ \frac{\mathrm{d} \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}{ \mathrm{d} T \ifthenelse{\equal{Default}{Default}}{}{^{Default}}}} F_j(\beta + T \gamma)$ is a ${{\mathbb C}}$-linear combination of $u_1$ and $u_2$. If the constant term $u_{10}$ of $u_1$ is nonzero, then $u_{10}$ will become zero after replacing $\beta$ by $\beta - u_{10}\gamma$ and adapting $u_1$ and $u_2$ accordingly. So assume $u_{10} = 0$ and let $u_{20}$ be the constant term of $u_2$. By taking the integral of $u_1$ and $u_2$ from $T = - \sqrt{-3u_{20}}$ to $T = + \sqrt{-3u_{20}}$, we see that $F(\beta - \sqrt{-3u_{20}} \gamma) = F(\beta + \sqrt{-3u_{20}} \gamma)$, thus either $F|_{\beta + {{\mathbb C}}\gamma}$ is not injective or $u_{20} = 0$. If $u_{20} = 0$, then $({{\mathcal J}}F)|_{X = \beta} \cdot \gamma = 0$ because both $u_1$ and $u_2$ are divisible by $T$. This completes the proof of Proposition \[cubicrectif\]. Assume $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^n$ is a polynomial map of degree $\le 3$ which satisfies the Keller condition $\det {{\mathcal J}}F \in {{\mathbb C}}^{*}$. Then $F$ is invertible, if and only if $F|_{L}$ is linearly rectifiable for every affine line $L \subseteq {{\mathbb C}}^n$, if and only if $\big({({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha}}+{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta}}\big)(\alpha-\beta)\neq 0$ for all $\alpha,\beta\in {{\mathbb C}}^n$ with $\alpha \neq \beta$.
By Proposition \[cubicrectif\], $F$ is invertible, if and only if $F|_{L}$ is linearly rectifiable for every affine line $L \subseteq {{\mathbb C}}^n$. By Proposition \[lineadd\], the latter is equivalent to $\big({({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha}}+{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\beta}}\big)(\alpha-\beta)\neq 0$ for all $\alpha,\beta\in {{\mathbb C}}^n$ with $\alpha \neq \beta$, as desired. Notice that in the proof of Lemma \[addexp1\], we solve $d-1$ equations in $d-1$ variables to obtain $r_1, r_2, \ldots, r_{d-1}$. In case $\lambda_1 = \lambda_2 = \cdots = \lambda_{d-1}$, it suffices to solve only one equation in only one variable to obtain $r_1, r_2, \ldots, r_{d-1}$. \[addexp2\] Let $P \in {{\mathbb C}}[[T]]$ with constant term $d-1$. Then there are $r_1, r_2, \ldots, \allowbreak r_{d-1} \in {{\mathbb C}}$, which are roots of a polynomial whose coefficients are polynomials in those of $P$, such that $$P - \sum_{i=1}^{d-1} \exp(r_i T)$$ is divisible by $T^d$, where $\exp(T) = \sum_{j=0}^{\infty} \frac{1}{j!} T^j$. Write $$P = \sum_{j=0}^{\infty} \frac{p_j}{j!} T^j$$ Then we must find a solution $(Y_1, Y_2, \ldots, Y_{d-1}) = (r_1, r_2, \ldots, r_{d-1})$ of $$\sum_{i=1}^{d-1} Y_i^j = p_j \quad (j = 0,1,\ldots,d-1)$$ By Newton’s identities for symmetric polynomials, there exists a polynomial $f \in {{\mathbb C}}[T][X_1,X_2,\ldots,X_{d-1}]$ which is injective as a function from ${{\mathbb C}}^{d-1}$ to ${{\mathbb C}}[T]$, such that $$f \left(\sum_{i=1}^{d-1} X_i, \sum_{i=1}^{d-1} X_i^2, \ldots, \sum_{i=1}^{d-1} X_i^{d-1}\right) = \prod_{i=1}^{d-1} (T + X_i)$$ Notice that $g := f(p_1, \ldots, p_{d-1})$ is a monic polynomial of degree $d-1$ in $T$. Hence we can decompose $g$ as $$g = \prod_{i=1}^{d-1} (T + r_i) = f\left(\sum_{i=1}^{d-1} r_i, \sum_{i=1}^{d-1} r_i^2, \ldots, \sum_{i=1}^{d-1} r_i^{d-1}\right)$$ and the injectivity of $f$ gives the desired result.
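The proof of Lemma \[addexp2\] is effective: Newton's identities convert the prescribed power sums $p_1, \ldots, p_{d-1}$ into elementary symmetric functions, and the $r_i$ are then the roots of the resulting monic polynomial. A numerical sketch of this procedure (the function name is ours):

```python
import numpy as np

def roots_from_power_sums(p):
    """Given power sums p[0], ..., p[k-1] (interpreted as p_1, ..., p_k),
    return r_1, ..., r_k with sum_i r_i^j = p_j, via Newton's identities."""
    k = len(p)
    e = [1.0]  # elementary symmetric functions, e[0] = 1
    for m in range(1, k + 1):
        # Newton's identity: m*e_m = sum_{i=1}^{m} (-1)^(i-1) * e_{m-i} * p_i
        s = sum((-1) ** (i - 1) * e[m - i] * p[i - 1] for i in range(1, m + 1))
        e.append(s / m)
    # The r_i are the roots of x^k - e_1 x^(k-1) + e_2 x^(k-2) - ... + (-1)^k e_k.
    coeffs = [(-1) ** m * e[m] for m in range(k + 1)]
    return np.roots(coeffs)
```

For $d-1 = 3$ and, say, $(p_1,p_2,p_3) = (2,3,1)$, the recovered (generally complex) roots reproduce the prescribed power sums up to rounding.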
Additive properties of the Jacobian determinant =============================================== \[quadr\] Let $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^n$ be a quadratic polynomial map such that $\det {{\mathcal J}}F \in {{\mathbb C}}$. Then for all $s \in {{\mathbb N}}$, $$\det\bigg(\sum_{i=1}^s b_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \det \bigg(\sum_{i=1}^s b_i \cdot {{\mathcal J}}F \bigg) = \bigg(\sum_{i=1}^s b_i\bigg)^n \cdot \det {{\mathcal J}}F$$ for all $\alpha_1, \alpha_2, \ldots, \alpha_s \in {{\mathbb C}}^n$ and all $b_1, b_2, \ldots, b_s \in {{\mathbb C}}$. Since the entries of ${{\mathcal J}}F$ are affinely linear, we have $$\sum_{i=1}^s b_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}} = \sigma \cdot {\bigg({{\mathcal J}}F\bigg)\bigg|\Big._{\ifthenelse{\equal{\textstyle X}{X}}{}{\textstyle X=}\sigma^{-1} \sum_{i=1}^s b_i \alpha_i}}$$ for all $\alpha_1, \alpha_2, \ldots, \alpha_s \in {{\mathbb C}}^n$ and all $b_1, b_2, \ldots, b_s \in {{\mathbb C}}$, in case $\sigma := \sum_{i=1}^s b_i \ne 0$. Taking determinants on both sides, it follows from $\det {{\mathcal J}}F \in {{\mathbb C}}$ that $$\det\bigg(\sum_{i=1}^s b_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \det (\sigma \cdot {{\mathcal J}}F) = \sigma^n \cdot \det {{\mathcal J}}F$$ when $\sigma \ne 0$, and by continuity also in case $\sigma = 0$, as desired. \[lm3.2\] Assume $f \in {{\mathbb C}}[X]$ has degree $\le d$. If $f$ vanishes on the set $S := \{a \in {{\mathbb N}}^n \mid a_1 + a_2 + \cdots + a_n \le d \}$, then $f = 0$. Write $f = (f|_{X_n = 0}) + X_n \cdot (g|_{X_n=X_n-1})$. By induction on $n$, $(f|_{X_n = 0}) = 0$. Furthermore, if $a \in S$ and $a_n \ge 1$, then $$g(a_1,a_2,\ldots,a_{n-1},a_n-1) = (g|_{X_n=X_n-1})(a) = \frac{f(a) - (f|_{X_n=0})(a)}{a_n} = 0$$ thus by induction on $d$, $g = 0$. Hence $f=0$ as well. \[cor3.2\] Let $f \in {{\mathbb C}}[X]$ be a polynomial of degree $\le d$. 
If $f(a) = 0$ for all $a \in {{\mathbb N}}^n$ such that $\sum_{i=1}^n a_i = d$, then $\sum_{i=1}^n X_i - d \mid f$. If additionally $f$ is homogeneous, then $f = 0$. If we substitute $X_n = d - \sum_{i=1}^{n-1} X_i$ in $f$, then we get a polynomial of degree $\le d$ which is zero on account of Lemma \[lm3.2\]. Hence $X_n = d - \sum_{i=1}^{n-1} X_i$ is a zero of $f \in {{\mathbb C}}(X_1,X_2,\ldots,X_{n-1})[X_n]$ and $f$ is divisible over ${{\mathbb C}}(X_1,X_2,\ldots,X_{n-1})$ by $\sum_{i=1}^n X_i - d$. By Gauss’ Lemma, $f$ is divisible over ${{\mathbb C}}[X]$ by $\sum_{i=1}^n X_i - d$, a factor which is homogeneous only if $d = 0$. Hence $f = 0$ when $f$ is homogeneous. \[lm3.3\] Let $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^m$ be a polynomial map and $P: {\operatorname{Mat}}_{m,n}({{\mathbb C}}) \rightarrow {{\mathbb C}}$ be a polynomial of degree $\le d$ in the entries of its input matrix. Fix $\mu \in {{\mathbb C}}$ and assume that $$P\bigg(\sum_{i=1}^d {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \mu$$ for all $\alpha_1, \alpha_2, \ldots, \alpha_d \in {{\mathbb C}}^n$. Then for all $s \in {{\mathbb N}}$ $$P\bigg(\sum_{i=1}^s b_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \mu = P(d {{\mathcal J}}F)$$ for all $\alpha_1, \alpha_2, \ldots, \alpha_s \in {{\mathbb C}}^n$ and all $b_1, b_2, \ldots, b_s \in {{\mathbb C}}$ such that $\sum_{i=1}^s b_i = d$. If additionally $P$ is homogeneous, then $$P\bigg(\sum_{i=1}^s b_i\cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \bigg(\frac1d \sum_{i=1}^s b_i\bigg)^{\deg P} \mu = \bigg(\sum_{i=1}^s b_i\bigg)^{\deg P} P({{\mathcal J}}F)$$ for all $\alpha_1, \alpha_2, \ldots, \alpha_s \in {{\mathbb C}}^n$ and all $b_1, b_2, \ldots, b_s \in {{\mathbb C}}$.
Since $P\big(\sum_{i=1}^d {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\big) = \mu$ is constant, $$\mu = P\bigg(\sum_{i=1}^d {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = P(d {{\mathcal J}}F)$$ for all $\alpha_i \in {{\mathbb C}}^n$. Take $\alpha_1, \alpha_2, \ldots, \alpha_s \in {{\mathbb C}}^n$ and let $$f(Y_1,Y_2,\ldots,Y_s) := P\bigg(\sum_{i=1}^s Y_i\cdot{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) - \mu$$ Then $\deg f(Y_1,Y_2,\ldots,Y_s) \le d$, and for all $b \in {{\mathbb N}}^s$ such that $\sum_{i=1}^s b_i = d$, we have $$\begin{aligned} f(b) &= P\bigg(\sum_{i=1}^s b_i\cdot{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) - \mu \\ &= P\bigg(\sum_{i=1}^s \sum_{j=1}^{b_i} {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) - \mu = 0\end{aligned}$$ By Corollary \[cor3.2\], $\sum_{i=1}^s Y_i - d \;\big|\; f(Y_1,Y_2,\ldots,Y_s)$, whence $$P\bigg(\sum_{i=1}^s b_i\cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) - \mu = 0$$ for all $b \in {{\mathbb C}}^s$ such that $\sum_{i=1}^s b_i = d$. This gives the first assertion of Lemma \[lm3.3\]. Assume $P$ is homogeneous. Then $$\begin{aligned} g(Y_1, Y_2, \ldots, Y_s) &:= P\bigg(\sum_{i=1}^s Y_i\cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) - \bigg(\frac1d\sum_{i=1}^s Y_i\bigg)^{\deg P} \mu \\ &\;= P\bigg(\sum_{i=1}^s Y_i\cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) - \bigg(\sum_{i=1}^s Y_i\bigg)^{\deg P} P({{\mathcal J}}F)\end{aligned}$$ is homogeneous as well. Since $g$ vanishes at $b$ for all $b \in {{\mathbb N}}^s$ such that $\sum_{i=1}^s b_i = d$, we obtain from Corollary \[cor3.2\] that $g = 0$, which completes the proof of Lemma \[lm3.3\].
\[th2\] Let $m \ge n$ and $F: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^n$ be a polynomial map such that for a fixed $\mu \in {{\mathbb C}}$, we have $$\det\bigg(\sum_{i=1}^m {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \mu$$ for all $\alpha_1, \alpha_2, \ldots, \alpha_m \in {{\mathbb C}}^n$. Then $\mu = \det (m {{\mathcal J}}F) = m^n \det({{\mathcal J}}F)$ and for all $s \in {{\mathbb N}}$ $$\det\bigg(\sum_{i=1}^s b_i\cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \bigg(\frac1m \sum_{i=1}^s b_i\bigg)^n \mu = \bigg(\sum_{i=1}^s b_i\bigg)^n \det ({{\mathcal J}}F)$$ for all $\alpha_1, \alpha_2, \ldots, \alpha_s \in {{\mathbb C}}^n$ and all $b_1, b_2, \ldots, b_s \in {{\mathbb C}}$. Furthermore, $F$ is an invertible polynomial map in case $\det {{\mathcal J}}F \ne 0$. To obtain the first assertion, take $P = \det$, $d = m$ and $m = n$ in Lemma \[lm3.3\]. By taking $s = \deg F - 1$ and $b_i = 1$ for all $i$ in this assertion, it follows from Corollary \[allinj\] that $F$ is an invertible polynomial map in case $\det {{\mathcal J}}F \ne 0$. 
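The determinant identity of Proposition \[quadr\] (and the conclusion $\mu = m^n \det {{\mathcal J}}F$ of Theorem \[th2\]) can be checked numerically on a concrete quadratic map. The map $F(X_1,X_2) = (X_1 + X_2^2,\, X_2)$ used below is our own illustrative example, chosen because $\det {{\mathcal J}}F = 1$ identically:

```python
import numpy as np

def JF(a):
    """Jacobian of F(X1, X2) = (X1 + X2**2, X2) at the point a;
    the entries are affinely linear and det JF = 1 identically."""
    return np.array([[1.0, 2.0 * a[1]], [0.0, 1.0]])

rng = np.random.default_rng(0)
alphas = rng.standard_normal((4, 2))  # four random points alpha_i
b = rng.standard_normal(4)            # four random coefficients b_i

lhs = np.linalg.det(sum(bi * JF(ai) for bi, ai in zip(b, alphas)))
rhs = b.sum() ** 2 * 1.0              # (sum_i b_i)^n * det JF, with n = 2
print(abs(lhs - rhs) < 1e-9)          # True
```

Running the check with other random points and coefficients gives the same agreement, as the proposition predicts.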
Assume $H: {{\mathbb C}}^n \rightarrow {{\mathbb C}}^n$ is a polynomial map and define $$M(\alpha_1,\alpha_2,\ldots,\alpha_s) := {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_1}} + {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_2}} + \cdots + {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_s}}$$ If for some $m \ge d$, the sum of the principal minors of size $d$ of $M(\alpha_1,\alpha_2,\ldots,\alpha_m)$ is zero for all $\alpha_i \in {{\mathbb C}}^n$, then for all $s \in {{\mathbb N}}$, the sum of the principal minors of size $d$ of $$b_1 \cdot {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_1}} + b_2 \cdot {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_2}} + \cdots + b_s \cdot {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_s}} \label{bsum}$$ is zero as well, for all $b_i \in {{\mathbb C}}$ and all $\alpha_i \in {{\mathbb C}}^n$. If for some $m \ge d$, the trace of $M(\alpha_1,\alpha_2,\ldots,\allowbreak \alpha_m)^d$ is zero for all $\alpha_i \in {{\mathbb C}}^n$, then for all $s \in {{\mathbb N}}$, the trace of the $d$-th power of (\[bsum\]) is zero as well, for all $b_i \in {{\mathbb C}}$ and all $\alpha_i \in {{\mathbb C}}^n$. Take for $P$ the sum of the principal minors of size $d$ or the trace of the $d$-th power, respectively. By Lemma \[lm3.3\], applied with $m$ in the role of $d$ (note that $\deg P \le d \le m$), $P((\ref{bsum}))$ is a multiple of $\mu := P(m {{\mathcal J}}H) = P(M(\alpha_1,\alpha_2,\ldots,\allowbreak \alpha_m)) = 0$, hence zero. Let $F=X+H$ such that the Jacobian matrix ${{\mathcal J}}H$ is additive-nilpotent. Then for all $m \in {{\mathbb N}}$, $\sum^m_{i=1}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$. We shall show below that the converse holds when $H$ does not have linear terms. But the converse is not true in general. For example, let $F(X)=X+H$, where $H=(-X_1+X_2,X_1-X_2+X_2^2)$.
Then $${{\mathcal J}}H=\begin{pmatrix} -1&1\\1&2X_2-1 \end{pmatrix} \qquad \mbox{and} \qquad {{\mathcal J}}F=\begin{pmatrix} 0&1\\1&2X_2 \end{pmatrix}$$ such that ${{\mathcal J}}H$ is not even nilpotent and $\sum^{2}_{i=1}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_1, \alpha_2\in \mathbb{C}^2$. \[dpr3\] Assume $F: \mathbb{C}^n\rightarrow \mathbb{C}^n$ is a polynomial map of the form $F = L + H$, such that $L$ is invertible and $\deg L = 1$. Then for all $s \in {{\mathbb N}}$, all $b_i \in {{\mathbb C}}$, and all $\alpha_i \in {{\mathbb C}}^n$, the following statements are equivalent. 1. For all $\mu \in {{\mathbb C}}$, we have $$\det\bigg(\mu \cdot {{\mathcal J}}L + \sum_{i=1}^s b_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \bigg(\mu + \sum_{i=1}^s b_i\bigg)^n \cdot \det ({{\mathcal J}}L)$$ 2. $\sum_{i=1}^s b_i \cdot {\big({{\mathcal J}}(L^{-1} \circ H)\big)\big|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is nilpotent. Assume (1). Since the equality of (1) holds for all $\mu \in {{\mathbb C}}$, we obtain $$\det\bigg(T \cdot {{\mathcal J}}L + \sum_{i=1}^s b_i \cdot {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = \bigg(T + \sum_{i=1}^s b_i\bigg)^n \cdot \det ({{\mathcal J}}L)$$ which is equivalent to $$\det\bigg(\bigg(T - \sum_{i=1}^s b_i\bigg) \cdot {{\mathcal J}}L + \sum_{i=1}^s b_i \cdot {({{\mathcal J}}L + {{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = T^n \cdot \det ({{\mathcal J}}L)$$ and $$\det\bigg(T \cdot {{\mathcal J}}L + \sum_{i=1}^s b_i \cdot {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = T^n \cdot \det ({{\mathcal J}}L)$$ By dividing both sides by $\det ({{\mathcal J}}L)$, we obtain $$\det\bigg(T \cdot I_n + \sum_{i=1}^s b_i \cdot ({{\mathcal J}}L)^{-1} \cdot {({{\mathcal J}}H)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}\bigg) = T^n$$ which implies (2). The converse is similar.
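The counter-example above is easy to verify directly; a sketch (illustrative code, not part of the text):

```python
import numpy as np

def JH(a):
    """Jacobian of H = (-X1 + X2, X1 - X2 + X2**2) at the point a."""
    return np.array([[-1.0, 1.0], [1.0, 2.0 * a[1] - 1.0]])

def JF(a):
    """JF = I + JH for F = X + H."""
    return np.eye(2) + JH(a)

# JH is not nilpotent: already at the origin it has eigenvalue -2
print(sorted(np.linalg.eigvals(JH(np.zeros(2))).real))  # approximately [-2, 0]

# yet JF(a) + JF(b) = [[0, 2], [2, 2*a2 + 2*b2]] has determinant -4,
# so the sum of two evaluated Jacobians of F is always invertible
rng = np.random.default_rng(1)
for _ in range(100):
    a, b = rng.standard_normal(2), rng.standard_normal(2)
    assert abs(np.linalg.det(JF(a) + JF(b)) + 4.0) < 1e-9
```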
Let $F=X+H$ such that ${{\mathcal J}}H$ is additive-nilpotent. Then $\sum_{i=1}^{m}{({{\mathcal J}}{\tilde{F}})|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$ and all positive integers $m$, where ${\tilde{F}}=L_1\circ F\circ L_2$ for invertible linear maps $L_1$ and $L_2$. We next prove that the converse holds. \[dth3\] For a polynomial map $F: \mathbb{C}^n\rightarrow \mathbb{C}^n$ the following statements are equivalent. 1. $\sum^{n}_{i=1}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$; 2. $F=L\circ(X+H)$, where $H$ has no linear terms, the linear part $L$ of $F$ is invertible and ${{\mathcal J}}H$ is additive-nilpotent; 3. $F=(X+H)\circ L$, where $H$ has no linear terms, the linear part $L$ of $F$ is invertible and ${{\mathcal J}}H$ is additive-nilpotent; 4. $F=L_1\circ(X+H)\circ L_2$, where $L_1$ and $L_2$ are invertible maps of degree one and ${{\mathcal J}}H$ is additive-nilpotent. Since (3) $\Rightarrow$ (4) is trivial, the following three implications remain to be proved. (4) [[$\Rightarrow$]{} ]{}(1) : Assume (4). Since ${{\mathcal J}}H$ is additive-nilpotent, (1) holds with $X + H$ instead of $F$. Since (1) is not affected by compositions with translations and invertible linear maps, and $F$ can be obtained from $X + H$ in that manner, (1) follows. (1) [[$\Rightarrow$]{} ]{}(2) : Assume (1). By the fundamental theorem of algebra, the determinant of $\sum^{n}_{i=1}{({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is a nonzero constant which does not depend on $\alpha_1,\alpha_2,\ldots,\alpha_n$. Let $L$ be the linear part of $F$. By Theorem \[th2\], we obtain that $\det {{\mathcal J}}F = \det {({{\mathcal J}}F)|_{\ifthenelse{\equal{X}{X}}{}{X=}0}} = \det {{\mathcal J}}L$ and that (1) of Proposition \[dpr3\] holds for all $s \in {{\mathbb N}}$, all $b_i \in {{\mathbb C}}$, and all $\alpha_i \in {{\mathbb C}}^n$.
Hence the Jacobian of $H := L^{-1} \circ (F-L)$ is additive-nilpotent on account of Proposition \[dpr3\], which gives the desired result. (2) [[$\Rightarrow$]{} ]{}(3) : This follows from the fact that $F=L\circ (X+H)= \big(X+(L\circ H\circ L^{-1})\big)\circ L$ and the Jacobian of $L\circ H\circ L^{-1}$ is also additive-nilpotent. A polynomial map $F=(F_1,\ldots,F_n)$ is called triangular if its Jacobian matrix is triangular, i.e., either above or below the main diagonal, all entries of ${{\mathcal J}}F$ are zero. The Jacobian matrix ${{\mathcal J}}F$ of a triangular invertible polynomial map $F$ can only have nonzero constants on the main diagonal, and thus for all invertible linear maps $L_1$ and $L_2$, $\sum_{i=1}^{n} {\big({{\mathcal J}}(L_1\circ F\circ L_2)\big)\big|{}_{\ifthenelse{\equal{X}{X}}{}{X=}\alpha_i}}$ is invertible for all $\alpha_i\in \mathbb{C}^n$. However, a polynomial map satisfying the conditions of Theorem \[dth3\] is not necessarily a composition of a triangular map and two linear maps. Indeed, in [@Meisters], it was shown that in dimension 5 and up, Keller maps $X + H$ with $H$ quadratic homogeneous do not necessarily have the property that ${{\mathcal J}}H$ is strongly nilpotent. But on account of Proposition \[quadr\], such maps satisfy property (1) of Theorem \[dth3\]. In [@homokema], all such maps in dimension 5 for which ${{\mathcal J}}H$ is not strongly nilpotent are determined.
$H$ is either of the form $$H = L^{-1} \circ \left(\left(\begin{array}{c} 0 \\ \lambda X_1^2 \\ X_2 X_4 \\ X_1 X_3 - X_2 X_5 \\ X_1 X_4 \end{array} \right) + \left( \begin{array}{c} 0 \\ 0 \\ p(X_1,X_2) \\ q(X_1,X_2) \\ r(X_1,X_2) \end{array} \right) \right) \circ L$$ where $\lambda \in \{0,1\}$, $L$ is linear and $p,q,r \in {{\mathbb C}}[X_1,X_2]$, or of the form $$H = L^{-1} \circ \left(\left(\begin{array}{c} 0 \\ X_1 X_3 \\ X_2^2 - X_1 X_4 \\ 2 X_2 X_3 - X_1 X_5 \\ X_3^2 \end{array} \right) + \left( \begin{array}{c} 0 \\ \lambda_2 X_1^2 \\ \lambda_3 X_1^2 \\ \lambda_4 X_1^2 \\ \lambda_5 X_1^2 \end{array} \right) \right) \circ L$$ where $L$ is linear and $\lambda_i \in {{\mathbb C}}$. One can show that in both cases, the columns of ${{\mathcal J}}(L \circ H \circ L^{-1})$ are linearly independent over ${{\mathbb C}}$, a property that cannot be undone by composition with invertible linear maps. Hence the columns of ${{\mathcal J}}(L_1\circ H\circ L_2)$ are linearly independent over ${{\mathbb C}}$ for all invertible linear maps $L_i$. ${{\mathcal J}}(L_1\circ H\circ L_2)$ is exactly the linear part of ${{\mathcal J}}(L_1\circ F\circ L_2)$, thus ${{\mathcal J}}(L_1\circ F\circ L_2)$ can only be triangular if its main diagonal is not constant on one of its ends. This is however not possible since $L_1\circ F\circ L_2$ is invertible. [000]{} Bass, H., Connell, E., Wright, D. (1982). The Jacobian conjecture: Reduction of degree and formal expansion of inverse. *Bull. Amer. Math. Soc.* 7: 287–330. De Bondt, M., Van den Essen, A. (2005). A reduction of the Jacobian conjecture to the symmetric case. *Proc. Amer. Math. Soc.* 133(8): 2201–2205. De Bondt, M., Van den Essen, A. (2005). The Jacobian conjecture for symmetric Drużkowski mappings. *Ann. Polon. Math.* 86(1): 43–46. De Bondt, M. (2009). *Homogeneous Keller maps*. Ph.D. thesis. Nijmegen: Radboud University.\ `http://webdoc.ubn.ru.nl/mono/b/bondt_m_de/homokema.pdf` Borel, A. (1969).
Injective endomorphisms of algebraic varieties. *Arch. Math. (Basel)* 20: 531–537. Cynk, S., Rusek, K. (1991). Injective endomorphisms of algebraic and analytic sets. *Ann. Polon. Math.* 56: 29–35. Drużkowski, L. M. (1983). An effective approach to the Jacobian conjecture. *Math. Ann.* 264: 303–313. Van den Essen, A. (2000). *Polynomial Automorphisms and the Jacobian Conjecture*. Progress in Mathematics, Vol. 190. Basel, Boston, Berlin: Birkhäuser. Van den Essen, A., Hubbers, E. (1996). Polynomial maps with strongly nilpotent Jacobian matrix and the Jacobian conjecture. *Linear Algebra Appl.* 247: 121–132. Meisters, G. H., Olech, C. (1991). Strong nilpotence holds in dimensions up to five only. *Linear and Multilinear Algebra* 30(4): 231–255. Sun, X. (2009). *Polynomial Maps with Additive-nilpotent Jacobian Matrices.* Ph.D. thesis. Changchun: Jilin University. Wang, S. (1980). A Jacobian criterion for separability. *J. Algebra* 65(2): 453–494. Yagzhev, A. (1980). On Keller’s problem. *Siberian Math. J.* 21(5): 747–754. Yu, J. (1996). On generalized strongly nilpotent matrices. *Linear Multilinear Algebra* 41(1): 19–22. [^1]: Corresponding author [^2]: Institute of second author [^3]: Supported by NSF of China (No.11071097, No.11026039) and “211 Project" and “985 Project" of Jilin University
--- abstract: '> Many optimization tasks have to be handled in noisy environments, where we cannot obtain the exact evaluation of a solution but only a noisy one. For noisy optimization tasks, evolutionary algorithms (EAs), a kind of stochastic metaheuristic search algorithm, have been widely and successfully applied. Previous work mainly focuses on empirically studying and designing EAs for noisy optimization, while the theoretical counterpart has been little investigated. In this paper, we investigate a largely ignored question, i.e., whether an optimization problem will always become harder for EAs in a noisy environment. We prove that the answer is negative, with respect to the measurement of the expected running time. The result implies that, for optimization tasks that are already quite hard to solve, the noise may not have a negative effect, and the easier a task is, the more negatively it is affected by the noise. On a representative problem where the noise has a strong negative effect, we examine two mechanisms commonly employed in EAs for dealing with noise, the *re-evaluation* and the *threshold selection* strategies. The analysis discloses, however, that the two strategies are both ineffective, i.e., they do not make the EA more noise tolerant. We then find that a small modification of the threshold selection allows it to be proven as an effective strategy for dealing with the noise in the problem.' address: | National Key Laboratory for Novel Software Technology\ Nanjing University, Nanjing 210023, China author: - Chao Qian - Yang Yu - 'Zhi-Hua Zhou' bibliography: - 'ectheory.bib' title: Analyzing Evolutionary Optimization in Noisy Environments --- Noisy optimization, evolutionary algorithms, re-evaluation, threshold selection, running time, computational complexity Introduction ============ Optimization tasks often encounter noisy environments.
For example, in airplane design, every prototype is evaluated by simulations so that the evaluation result may not be perfect due to the simulation error; and in machine learning, a prediction model is evaluated only on a limited amount of data so that the estimated performance is shifted from the true performance. Noisy environments could change the property of an optimization problem, and thus traditional optimization techniques may have low efficacy. Evolutionary algorithms (EAs) [@back:96], however, have been widely and successfully adopted for noisy optimization tasks [@freitas2003survey; @ma2006evolutionary; @chang2006new; @chang2006automated]. EAs are randomized metaheuristic optimization algorithms, inspired by natural phenomena including the evolution of species, swarm cooperation, immune systems, etc. EAs typically involve a cycle of three stages: the reproduction stage produces new solutions based on the currently maintained solutions; the evaluation stage evaluates the newly generated solutions; the selection stage wipes out bad solutions. An inspiration of using EAs for noisy optimization is that the corresponding natural phenomena have proceeded successfully in noisy environments, and hence the algorithmic simulations are also likely to be able to handle noise. Besides, improved mechanisms have been invented for better noise handling.
Two representative strategies are *re-evaluation* and *threshold selection*: by the re-evaluation strategy [@jin2005evolutionary; @goh2007investigation], whenever the fitness (also called cost or objective value) of a solution is required, EAs make an independent evaluation of the solution regardless of whether the solution has been evaluated before, such that the fitness is smoothed; by the threshold selection strategy [@markon2001thresholding; @beielstein2002threshold; @bartz2005new], in the selection stage EAs accept a newly generated solution only if its fitness is larger than the fitness of the old solution by at least a threshold, such that the risk of accepting a bad solution due to noise is reduced. An assumption implied by using a noise handling mechanism in EAs is that the noise makes the optimization harder, so that a better handling mechanism can reduce the negative effect of the noise [@fitzpatrick1988genetic; @beyer2000evolutionary; @rudolph2001partial; @arnold2003comparison]. This paper first investigates whether this assumption is true. We start by presenting experimental evidence using the (1+1)-EA optimizing the hardest case in the pseudo-Boolean function class [@qian2012algorithm]. Experimental results indicate that the noise, however, makes the optimization easier rather than harder, under the measurement of expected running time. Following the experimental evidence, we then derive sufficient theoretical conditions, under which the noise will make the optimization easier or harder. By verifying these conditions, we prove that, for the (1+$\lambda$)-EA (a class of EAs employing offspring population size $\lambda$), the noise will make the optimization easier on the hardest case in the pseudo-Boolean function class, while harder on the easiest case. The proofs imply that we need to take care of the noise only when the optimization is moderately or less complex, and can ignore this issue when the optimization task itself is quite hard.
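The two noise-handling strategies just described amount to modifications of the selection step of an EA. The following sketch is ours (the function name and signature are illustrative, not taken from the cited literature):

```python
def select(fitness, x, x_new, f_old_cached=None, tau=0):
    """One selection step of a (1+1)-EA under possibly noisy evaluation.

    If f_old_cached is None, the parent x is (re-)evaluated in every
    comparison -- the re-evaluation strategy; otherwise its cached noisy
    fitness is reused.  tau is the threshold of threshold selection
    (tau = 0 recovers plain elitist selection).  Returns the surviving
    solution together with its fitness value."""
    f_old = fitness(x) if f_old_cached is None else f_old_cached
    f_new = fitness(x_new)
    return (x_new, f_new) if f_new >= f_old + tau else (x, f_old)
```

With a noiseless fitness, `select(sum, [0, 1], [1, 1], tau=2)` rejects an improvement of only one bit, while `tau=1` accepts it — illustrating how the threshold trades progress for robustness against noise.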
For situations where the noise needs to be handled, this paper examines the re-evaluation and the threshold selection strategies for their *polynomial noise tolerance* (PNT). For a kind of noise, the PNT of an EA is the maximum noise level such that the expected running time of the algorithm is polynomial. The closer the PNT is to 1, the better the noise tolerance is. Taking the easiest pseudo-Boolean function case as the representative problem, we analyze the PNT for different configurations of the (1+1)-EA with respect to the one-bit noise, whose level is characterized by the noise probability. For the (1+1)-EA (without any noise handling strategy), we prove that the PNT has a lower bound $1-\frac{1}{\Omega(poly(n))}$ and an upper bound $1-\frac{1}{O(2^npoly(n))}$. Since the (1+1)-EA with re-evaluation has the PNT $\Theta(\frac{\log n}{n})$ [@droste2004analysis], it is surprising that the re-evaluation makes the PNT much worse. We further prove that for the (1+1)-EA with re-evaluation using threshold selection, when the threshold is 1, the PNT is not less than $\frac{1}{2e}$, and when the threshold is 2, the PNT has a lower bound $1-\frac{1}{\Omega(poly(n))}$ and an upper bound $1-\frac{1}{O(2^npoly(n))}$. The PNT bounds indicate that threshold selection improves the re-evaluation strategy; however, no improvement over the plain (1+1)-EA is found. We then introduce a small modification into the threshold selection strategy to turn the original hard threshold into a smooth threshold. We prove that with the smooth threshold selection strategy the PNT is $1$, i.e., the (1+1)-EA is always a polynomial algorithm regardless of the probability of one-bit noise on the problem. The rest of this paper is organized as follows. Section 2 introduces some background. Section 3 shows that the noise may not always be bad, and presents a sufficient condition for that. Section 4 analyzes noise handling strategies. Section 5 concludes.
Background ========== Noisy Optimization ------------------ A general optimization problem can be represented as $ \arg\max\nolimits_{x} f(x)$, where the objective $f$ is also called fitness in the context of evolutionary computation. In real-world optimization tasks, the fitness evaluation for a solution is usually disturbed by noise, and consequently we cannot obtain the exact fitness value but only a noisy one. In this paper, we will consider the following kinds of noise, and we will always denote $f^N(x)$ and $f(x)$ as the noisy and true fitness of a solution $x$, respectively. additive noise : $f^N(x)=f(x)+ \delta$, where $\delta$ is uniformly selected from $[\delta_1,\delta_2]$ at random. multiplicative noise : $f^N(x)=f(x)\cdot \delta$, where $\delta$ is uniformly selected from $[\delta_1,\delta_2]$ at random. one-bit noise : $f^N(x)=f(x)$ with probability $(1-p_n)$ $(0\leq p_n \leq 1)$; otherwise, $f^N(x)=f(x')$, where $x'$ is generated by flipping a uniformly randomly chosen bit of $x \in \{0,1\}^n$. This noise is for problems where solutions are represented as binary strings. Additive and multiplicative noise have often been used for analyzing the effect of noise [@beyer2000evolutionary; @jin2005evolutionary]. One-bit noise is specifically for optimizing pseudo-Boolean problems over $\{0,1\}^n$, and is also the noise investigated in the only previous work analyzing the running time of EAs in noisy optimization [@droste2004analysis]. For one-bit noise, $p_n$ controls the noise level. In this paper we assume that the parameters of the environment (i.e., $p_n$, $\delta_1$ and $\delta_2$) do not change over time. It is possible that a large noise could make an optimization problem extremely hard for particular algorithms. We are interested in the noise level under which an algorithm could be “tolerant”, i.e., retain a polynomial running time.
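The three noise models can be written down directly; a hedged sketch (our own naming), with one-bit noise acting on bit strings in $\{0,1\}^n$:

```python
import random

def additive_noise(f, x, d1, d2):
    """f^N(x) = f(x) + delta, with delta uniform on [d1, d2]."""
    return f(x) + random.uniform(d1, d2)

def multiplicative_noise(f, x, d1, d2):
    """f^N(x) = f(x) * delta, with delta uniform on [d1, d2]."""
    return f(x) * random.uniform(d1, d2)

def one_bit_noise(f, x, p_n):
    """With probability p_n, evaluate f on a copy of x with one
    uniformly chosen bit flipped; otherwise return the true f(x)."""
    if random.random() < p_n:
        x = list(x)
        i = random.randrange(len(x))
        x[i] = 1 - x[i]
    return f(x)
```

Note that one-bit noise perturbs the *solution* that is evaluated rather than the fitness value itself, which is exactly why it can shift a noisy fitness by at most the change of one bit.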
We define the polynomial noise tolerance (PNT) as Definition \[PNT\], which characterizes the maximum noise level that allows a polynomial expected running time. Note that the noise level can be measured by the adjustable parameter, e.g., $\delta_1, \delta_2$ for the additive and multiplicative noise, and $p_n$ for the one-bit noise. We will study the PNT of EAs for analyzing the effectiveness of noise handling strategies. \[PNT\] The polynomial noise tolerance of an algorithm on a problem, with respect to a kind of noise, is the maximum noise level such that the algorithm has expected running time polynomial in the problem size. Evolutionary Algorithms ----------------------- Evolutionary algorithms (EAs) [@back:96] are population-based metaheuristic optimization algorithms. Although there exist many variants, the common procedure of EAs can be described as follows:\ 1. Generate an initial set of solutions (called population);\ 2. Reproduce new solutions from the current population;\ 3. Evaluate the newly generated solutions;\ 4. Update the population by removing bad solutions;\ 5. Repeat steps 2-5 until some criterion is met. The (1+1)-EA, as in Algorithm \[(1+1)-EA\], is a simple EA for maximizing pseudo-Boolean problems over $\{0,1\}^n$, which reflects the common structure of EAs. It maintains only one solution, and repeatedly improves the current solution by using bit-wise mutation (i.e., the 3rd step of Algorithm \[(1+1)-EA\]). It has been widely used for the running time analysis of EAs, e.g., [@YaoAI01; @droste2002analysis]. \[(1+1)-EA\] Given pseudo-Boolean function $f$ with solution length $n$, it consists of the following steps:\ ---- --------------------------------------------------- 1. $x:=$ randomly selected from $\{0,1\}^{n}$. 2. Repeat until the termination condition is met 3. $x':=$ flip each bit of $x$ with probability $p$. 4. if [$f(x') \geq f(x)$]{} 5. $x:=x'$.
---- --------------------------------------------------- \ where $p \in (0,0.5)$ is the mutation probability. The (1+$\lambda$)-EA, as in Algorithm \[(1+lambda)-EA\], applies an offspring population size $\lambda$. In each iteration, it first generates $\lambda$ offspring solutions by independently mutating the current solution $\lambda$ times, and then selects the best solution from the current solution and the offspring solutions as the next solution. It has been used to disclose the effect of offspring population size by running time analysis [@jansen2005choice; @neumann2007randomized]. Note that the (1+1)-EA is a special case of the (1+$\lambda$)-EA with $\lambda=1$. \[(1+lambda)-EA\] Given pseudo-Boolean function $f$ with solution length $n$, it consists of the following steps:\ ---- --------------------------------------------------------- 1. $x:=$ randomly selected from $\{0,1\}^{n}$. 2. Repeat until the termination condition is met 3. $i:=1$. 4. Repeat until $i>\lambda$. 5. $x_i:=$ flip each bit of $x$ with probability $p$. 6. $i:=i+1$. 7. $x=\arg\max_{x'\in\{x,x_1,\ldots,x_{\lambda}\}} f(x').$ ---- --------------------------------------------------------- \ where $p \in (0,0.5)$ is the mutation probability. The running time of EAs is usually defined as the number of fitness evaluations (i.e., computing $f(\cdot)$) until an optimal solution is found for the first time, since the fitness evaluation is the computational process with the highest cost of the algorithm [@YaoAI01; @Yu:Zhou:08]. Markov Chain Modeling --------------------- We will analyze EAs by modeling them as Markov chains in this paper. Here, we first give some preliminaries. EAs generate solutions only based on their currently maintained solutions; thus, they can be modeled and analyzed as Markov chains, e.g., [@YaoAI01; @Yu:Zhou:08]. A Markov chain $\{\xi_t\}^{+\infty}_{t=0}$ modeling an EA is constructed by taking the EA’s population space $\mathcal{X}$ as the chain’s state space, i.e.
$\xi_t \in \mathcal{X}$. Let $\mathcal{X}^* \subset \mathcal{X}$ denote the set of all optimal populations, which contains at least one optimal solution. The goal of the EA is to reach $\mathcal{X}^*$ from an initial population. Thus, the process of an EA seeking $\mathcal{X}^*$ can be analyzed by studying the corresponding Markov chain. A Markov chain $\{\xi_t\}_{t=0}^{+\infty}$ $(\xi_t \in \mathcal{X})$ is a random process, where $\forall t \geq 0$, $\xi_{t+1}$ depends only on $\xi_t$. A Markov chain $\{\xi_t\}^{+\infty}_{t=0}$ is said to be homogeneous if $\forall t \geq 0,\forall x,y \in \mathcal{X}$: $$\begin{aligned}\label{homogeneous} &P(\xi_{t+1}=y|\xi_t=x)=P(\xi_1=y|\xi_0=x). \end{aligned}$$ In this paper, we always denote $\mathcal{X}$ and $\mathcal{X}^*$ as the state space and the optimal state space of a Markov chain, respectively. Given a Markov chain $\{\xi_t\}^{+\infty}_{t=0}$ and $\xi_{\hat{t}}=x$, we define the first hitting time (FHT) of the chain as a random variable $\tau$ such that $\tau=\min\{t|\xi_{\hat{t}+t} \in \mathcal{X}^*,t\geq0\}$. That is, $\tau$ is the number of steps needed to reach the optimal state space for the first time starting from $\xi_{\hat{t}}=x$. The mathematical expectation of $\tau$, ${\mathbb{E}[\kern-0.15em[ \tau | \xi_{\hat{t}}=x ]\kern-0.14em]}=\sum\nolimits^{\infty}_{i=0} iP(\tau=i)$, is called the expected first hitting time (EFHT) of this chain starting from $\xi_{\hat{t}}=x$. If $\xi_{0}$ is drawn from a distribution $\pi_{0}$, ${\mathbb{E}[\kern-0.15em[ \tau | \xi_{0}\sim \pi_0 ]\kern-0.14em]} = \sum\nolimits_{x\in \mathcal{X}} \pi_{0}(x){\mathbb{E}[\kern-0.15em[ \tau | \xi_{0}=x ]\kern-0.14em]}$ is called the expected first hitting time of the Markov chain over the initial distribution $\pi_0$. For the corresponding EA, the running time is the number of calls to the fitness function until meeting an optimal solution for the first time.
Thus, the *expected running time* starting from $\xi_0$ and that starting from $\xi_0 \sim \pi_0$ are respectively equal to $$\begin{aligned}\label{runtime} N_1+N_2\cdot {\mathbb{E}[\kern-0.15em[ \tau | \xi_{0} ]\kern-0.14em]} && \text{and} && N_1+N_2\cdot {\mathbb{E}[\kern-0.15em[ \tau | \xi_{0} \sim \pi_0 ]\kern-0.14em]}, \end{aligned}$$ where $N_1$ and $N_2$ are the number of fitness evaluations for the initial population and each iteration, respectively. For example, for the (1+1)-EA, $N_1=1$ and $N_2=1$; for the (1+$\lambda$)-EA, $N_1=1$ and $N_2=\lambda$. Note that when we refer to the expected running time of an EA on a problem in this paper, if the initial population is not specified, we mean the expected running time starting from the uniform initial distribution $\pi_u$, i.e., $N_1+N_2 \cdot {\mathbb{E}[\kern-0.15em[ \tau | \xi_{0} \sim \pi_u ]\kern-0.14em]}=N_1+N_2 \cdot\sum\nolimits_{x\in \mathcal{X}} \frac{1}{|\mathcal{X}|}{\mathbb{E}[\kern-0.15em[ \tau | \xi_{0}=x ]\kern-0.14em]}$. The following two lemmas on the EFHT of Markov chains [@Freidlin:97] will be used in this paper. \[lem\_onestep\] Given a Markov chain $\{\xi_t\}^{+\infty}_{t=0}$, we have $$\begin{aligned} &\forall x \in \mathcal{X}^*: {\mathbb{E}[\kern-0.15em[ \tau | \xi_t=x ]\kern-0.14em]}=0; \\ &\forall x\notin \mathcal{X}^*: {\mathbb{E}[\kern-0.15em[ \tau | \xi_t=x ]\kern-0.14em]}=1+\sum\nolimits_{y\in \mathcal{X}} P(\xi_{t+1}=y | \xi_t=x){\mathbb{E}[\kern-0.15em[ \tau | \xi_{t+1}=y ]\kern-0.14em]}. \end{aligned}$$ \[lem\_homo\] Given a homogeneous Markov chain $\{\xi_t\}^{+\infty}_{t=0}$, it holds that $$\forall t_1, t_2 \geq 0, x \in \mathcal{X}: {\mathbb{E}[\kern-0.15em[ \tau |\xi_{t_1}=x ]\kern-0.14em]} = {\mathbb{E}[\kern-0.15em[ \tau| \xi_{t_2}=x ]\kern-0.14em]}.$$ For analyzing the EFHT of Markov chains, drift analysis [@YaoAI01; @he2004study] is a commonly used tool, which will also be used in this paper.
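Lemma \[lem\_onestep\] also gives a direct way to compute the EFHT of a small chain numerically: over the non-optimal states it reads $E = \mathbf{1} + QE$, where $Q$ is the transition matrix restricted to those states. A sketch on an illustrative three-state chain of our own:

```python
import numpy as np

# a three-state chain: states 0 and 1 are non-optimal, state 2 is
# optimal and absorbing
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])

# Lemma [lem_onestep]: E[tau | x] = 1 + sum_y P(x, y) E[tau | y] for
# non-optimal x, and E[tau | x] = 0 for optimal x; restricting to the
# non-optimal states gives the linear system (I - Q) E = 1
Q = P[:2, :2]
E = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(E)  # approximately [4.118, 3.529]
```

For EAs the state space is exponentially large, which is why the drift-analysis bounds below are used instead of solving this system directly.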
To use drift analysis, one needs to construct a function $V(x)\;(x \in \mathcal{X})$ to measure the distance of a state $x$ to the optimal state space $\mathcal{X}^*$. The distance function $V(x)$ satisfies $V(x \in \mathcal{X}^*)=0$ and $V(x \notin \mathcal{X}^*)>0$. Then, by investigating the progress on the distance to $\mathcal{X}^*$ in each step, i.e., ${\mathbb{E}[\kern-0.15em[ V(\xi_t)-V(\xi_{t+1}) | \xi_t ]\kern-0.14em]}$, an upper (lower) bound of the EFHT can be derived by dividing the initial distance by a lower (upper) bound of the progress. \[drift\] Given a Markov chain $\{\xi_t\}^{+\infty}_{t=0}$ and a distance function $V(x)$, if it satisfies that for any $t \geq 0$ and any $\xi_t$ with $V(\xi_t) > 0$, $$0<c_l \leq {\mathbb{E}[\kern-0.15em[ V(\xi_t)-V(\xi_{t+1}) | \xi_t ]\kern-0.14em]} \leq c_u,$$ then the EFHT of this chain satisfies $$V(\xi_0)/c_u \leq {\mathbb{E}[\kern-0.15em[ \tau | \xi_0 ]\kern-0.14em]} \leq V(\xi_0)/c_l,$$ where $c_l,c_u$ are constants. Pseudo-Boolean Functions ------------------------ The pseudo-Boolean function class in Definition \[def\_Boolean\] is a large function class which only requires the solution space to be $\{0,1\}^n$ and the objective space to be $\mathbb{R}$. Many well-known NP-hard problems (e.g., the vertex cover problem and the 0-1 knapsack problem) belong to this class. Diverse pseudo-Boolean problems with different structures and difficulties have been used for analyzing the running time of EAs, and thereby for disclosing properties of EAs, e.g., [@droste:jansen:wegener:98; @YaoAI01; @droste2002analysis]. Note that we consider only maximization problems in this paper, since minimizing $f$ is equivalent to maximizing $-f$. \[def\_Boolean\] A function in the pseudo-Boolean function class has the form: $ f:\{0,1\}^n \rightarrow \mathbb{R}.
$ I$_{hardest}$ (also called Trap) problem in Definition \[def\_trap\] is a special instance in this class, which is to maximize the number of 0 bits of a solution, except that the global optimum is $11\ldots1$ (briefly denoted as $1^n$). Its optimal function value is $2n$, and the function value of any non-optimal solution is not larger than 0. It has been widely used in the theoretical analysis of EAs, and the expected running time of (1+1)-EA with mutation probability $\frac{1}{n}$ on it has been proved to be $\Theta(n^n)$ [@droste2002analysis]. It has also been recognized as the hardest instance in the pseudo-Boolean function class with a unique global optimum for the (1+1)-EA [@qian2012algorithm]. \[def\_trap\] I$_{hardest}$ Problem of size $n$ is to find an $n$-bit binary string $x^*$ such that $$x^*=\mathop{\arg\max}\nolimits_{x \in \{0,1\}^n} \big( f(x)=3n\prod\nolimits^n_{i=1}x_i -\sum\nolimits^{n}_{i=1} x_i\big),$$ where $x_i$ is the $i$-th bit of a solution $x \in \{0,1\}^n$. I$_{easiest}$ (also called OneMax) problem in Definition \[def\_onemax\] is to maximize the number of 1 bits of a solution. The optimal solution is $1^n$, which has the maximal function value $n$. The running time of EAs has been well studied on this problem [@YaoAI01; @droste2002analysis; @sudholt2011new]. In particular, the expected running time of (1+1)-EA with mutation probability $\frac{1}{n}$ on it has been proved to be $\Theta(n \log n)$ [@droste2002analysis]. It has also been recognized as the easiest instance in the pseudo-Boolean function class with a unique global optimum for the (1+1)-EA [@qian2012algorithm]. \[def\_onemax\] I$_{easiest}$ Problem of size $n$ is to find an $n$-bit binary string $x^*$ such that $$x^*=\mathop{\arg\max}\nolimits_{x \in \{0,1\}^n} \big( f(x)=\sum\nolimits^{n}_{i=1} x_i\big),$$ where $x_i$ is the $i$-th bit of a solution $x \in \{0,1\}^n$.
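The two benchmark functions can be transcribed directly into code; a hedged Python sketch (function names are ours):

```python
# Hedged Python transcriptions (function names ours) of the two benchmark
# fitness functions from Definitions [def_trap] and [def_onemax].

def trap(x):        # I_hardest: f(x) = 3n * prod_i x_i - sum_i x_i
    n = len(x)
    return 3 * n * (1 if all(x) else 0) - sum(x)

def onemax(x):      # I_easiest: f(x) = sum_i x_i
    return sum(x)

n = 5
print(trap([1] * n))    # optimum: 3n - n = 2n = 10
print(trap([0] * n))    # best non-optimal value: 0
print(onemax([1] * n))  # optimum: n = 5
```

Note how, on non-optimal solutions, `trap` decreases with the number of 1 bits, which is exactly the deceptive gradient discussed above.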
Noise is Not Always Bad ======================= Empirical Evidence ------------------ It has been observed that noisy fitness evaluation can make an optimization problem harder for EAs, since it may make a bad solution appear to have a “better" fitness, and then mislead the search direction of EAs. Droste [@droste2004analysis] proved that the running time of (1+1)-EA can increase from polynomial to exponential due to the presence of noise. However, when studying the running time of (1+1)-EA solving the hardest case I$_{hardest}$ in the pseudo-Boolean function class, we observe, on the contrary, that noise can also make an optimization problem easier for EAs, i.e., the presence of noise decreases the running time of EAs for finding the optimal solution. For I$_{hardest}$ problem over $\{0,1\}^n$, there are $2^n$ possible solutions, which are denoted by their corresponding integer values $0,1,\ldots,2^n-1$, respectively. Then, we estimate the expected running time of (1+1)-EA maximizing I$_{hardest}$ when starting from every solution. For each initial solution, we perform 1000 independent runs, and record the average running time as an estimate of the expected running time (briefly called ERT). We run (1+1)-EA without noise, with additive noise and with multiplicative noise, respectively. For the mutation probability of (1+1)-EA, we use the common setting $p=\frac{1}{n}$. For additive noise, $\delta_1=-n$ and $\delta_2=n$, and for multiplicative noise, $\delta_1=0.1$ and $\delta_2=10$. The results for $n=3,4,5$ are plotted in Figure \[fig\_ERT\_helpful1\]. We can observe that the curves for these two kinds of noise always lie below the curve without noise, which shows that I$_{hardest}$ problem becomes easier for (1+1)-EA in a noisy environment. Note that the three curves meet at the last point, since the initial solution $2^n-1$ is the optimal solution and thus ERT $=1$.
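The experiment just described can be sketched as follows. This is our own reading of the setup, not the paper's exact code: we assume single-evaluation (as in the analysis of this section), additive noise drawn uniformly from $[\delta_1,\delta_2]$, and we count iterations rather than fitness evaluations.

```python
import random

# A sketch of the experiment above (ours: single-evaluation assumed, and
# additive noise drawn uniformly from [delta_1, delta_2] = [-n, n]).

def trap(x):
    n = len(x)
    return 3 * n * (1 if all(x) else 0) - sum(x)

def one_plus_one_ea(n, noisy, rng, max_steps=10**6):
    x = [rng.randint(0, 1) for _ in range(n)]
    noise = lambda: rng.uniform(-n, n) if noisy else 0.0
    fx = trap(x) + noise()                  # single-evaluation of the parent
    for t in range(1, max_steps + 1):
        if all(x):
            return t                        # optimum 1^n reached
        y = [b ^ (rng.random() < 1 / n) for b in x]   # bit-wise mutation, p = 1/n
        fy = trap(y) + noise()
        if fy >= fx:
            x, fx = y, fy
    return max_steps

rng = random.Random(0)
n, runs = 3, 2000
clean = sum(one_plus_one_ea(n, False, rng) for _ in range(runs)) / runs
noisy = sum(one_plus_one_ea(n, True, rng) for _ in range(runs)) / runs
print(clean, noisy)   # the noisy average tends to be the smaller one
```

Averaged over a uniform initial solution, the noisy variant tends to hit the optimum sooner, consistent with the curves in Figure \[fig\_ERT\_helpful1\].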
![image](compare_n3){width="0.8\linewidth" height="0.65\linewidth"} ![image](compare_n4){width="0.8\linewidth" height="0.65\linewidth"} ![image](compare_n5){width="0.8\linewidth" height="0.65\linewidth"} \ \(a) $n=3$ \(b) $n=4$ \(c) $n=5$ \ A Sufficient Condition ---------------------- In this section, by comparing the expected running time of EAs with and without noise, we derive a sufficient condition under which noise makes an optimization problem easier for EAs. Most practical EAs employ time-invariant operators, thus an EA without noise can be modeled by a homogeneous Markov chain, while an EA with noise, whose noise may change over time, can only be modeled by a general Markov chain. Note that the two EAs with and without noise differ only in whether the fitness evaluation is disturbed by noise; thus, they must have the same values of $N_1$ and $N_2$ in the running time Eq.\[runtime\]. Then, comparing their expected running time is equivalent to comparing the EFHT of their corresponding Markov chains. We first define a partition of the state space of a homogeneous Markov chain based on the EFHT, and then define a jumping probability of a Markov chain from one state to a state space in one step. It is easy to see that $\mathcal{X}_0$ in Definition \[def\_partition\] is just $\mathcal{X}^*$, since ${\mathbb{E}[\kern-0.15em[ \tau|\xi_0 \in \mathcal{X}^* ]\kern-0.14em]}=0$.
\[def\_partition\] For a homogeneous Markov chain $\{\xi_t\}^{+\infty}_{t=0}$, the EFHT-Partition is a partition of $\mathcal{X}$ into non-empty subspaces $\{\mathcal{X}_0,\mathcal{X}_1,\ldots,\mathcal{X}_m\}$ such that $$\begin{aligned} &(1) \quad \forall x,y \in \mathcal{X}_i, {\mathbb{E}[\kern-0.15em[ \tau|\xi_0=x ]\kern-0.14em]}={\mathbb{E}[\kern-0.15em[ \tau|\xi_0=y ]\kern-0.14em]};\\ &(2) \quad {\mathbb{E}[\kern-0.15em[ \tau|\xi_0 \in \mathcal{X}_0 ]\kern-0.14em]}<{\mathbb{E}[\kern-0.15em[ \tau|\xi_0 \in \mathcal{X}_1 ]\kern-0.14em]}< \ldots < {\mathbb{E}[\kern-0.15em[ \tau|\xi_0 \in \mathcal{X}_m ]\kern-0.14em]}. \end{aligned}$$ \[def\_jump\] For a Markov chain $\{\xi_t\}^{+\infty}_{t=0}$, $P^t_{\xi}(x,\mathcal{X}')=\sum_{y \in \mathcal{X}'} P(\xi_{t+1}=y|\xi_{t}=x)$ is the probability of jumping from state $x$ to state space $\mathcal{X}'\subseteq \mathcal{X}$ in one step at time $t$. \[analysis\_approach\] Given an EA $\mathcal{A}$ and a problem $f$, let a Markov chain $\{\xi_t\}^{+\infty}_{t=0}$ and a homogeneous Markov chain $\{\xi'_t\}^{+\infty}_{t=0}$ model $\mathcal{A}$ running on $f$ with noise and without noise respectively, and denote $\{\mathcal{X}_0,\mathcal{X}_1,\ldots,\mathcal{X}_m\}$ as the EFHT-Partition of $\{\xi'_t\}^{+\infty}_{t=0}$, if for all $t\geq 0$, $x \in \mathcal{X}-\mathcal{X}_0$, and for all integers $i\in [0,m-1]$, $$\begin{aligned}\label{analysis_condition} &\sum\nolimits^i_{j=0}P^t_{\xi}(x,\mathcal{X}_j) \geq \sum\nolimits^{i}_{j=0} P^t_{\xi'}(x,\mathcal{X}_j), \end{aligned}$$ then noise makes $f$ easier for $\mathcal{A}$, i.e., for all $x \in \mathcal{X}$, $${\mathbb{E}[\kern-0.15em[ \tau | \xi_{0}=x ]\kern-0.14em]} \leq {\mathbb{E}[\kern-0.15em[ \tau' | \xi'_{0}=x ]\kern-0.14em]}.$$ The condition of this theorem (i.e., Eq.\[analysis\_condition\]) intuitively means that the presence of noise leads to a larger probability of jumping into good states (i.e., $\mathcal{X}_j$ with small $j$ values), starting from which the EA 
needs less time for finding the optimal solution. For the proof, we need the following lemma, which is proved in the Appendix. \[lemma\_analysis\_condition\] Let $m\;(m \geq 1)$ be an integer. If it satisfies that $$\begin{aligned} & (1)\quad \forall 0 \leq i \leq m, P_i,Q_i\geq 0,\; \text{and} \; \sum\nolimits^{m}_{i=0}P_i=\sum\nolimits^{m}_{i=0}Q_i=1;\\ & (2)\quad 0\leq E_0<E_1<\ldots<E_m;\\ & (3)\quad \forall 0 \leq k \leq m-1, \sum\nolimits^k_{i=0} P_i \leq \sum\nolimits^k_{i=0} Q_i, \end{aligned}$$ then it holds that $$\sum\nolimits^{m}_{i=0}P_i\cdot E_i \geq \sum\nolimits^{m}_{i=0}Q_i\cdot E_i.$$ We use Lemma \[drift\] to derive a bound on ${\mathbb{E}[\kern-0.15em[ \tau|\xi_0 ]\kern-0.14em]}$, from which this theorem follows. To apply Lemma \[drift\] to ${\mathbb{E}[\kern-0.15em[ \tau|\xi_0 ]\kern-0.14em]}$, we first construct a distance function $V(x)$ as $$\begin{aligned}\label{distance} & \forall x \in \mathcal{X}, V(x)={\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=x ]\kern-0.14em]}, \end{aligned}$$ which satisfies $V(x \in \mathcal{X}^*)=0$ and $V(x \notin \mathcal{X}^*)>0$ by Lemma \[lem\_onestep\]. Then, we investigate ${\mathbb{E}[\kern-0.15em[ V(\xi_t)-V(\xi_{t+1}) | \xi_t=x ]\kern-0.14em]}$ for any $x$ with $V(x)>0$ (i.e., $x \notin \mathcal{X}^*$).
$$\begin{aligned} &{\mathbb{E}[\kern-0.15em[ V(\xi_t)-V(\xi_{t+1}) | \xi_t=x ]\kern-0.14em]}=V(x)-{\mathbb{E}[\kern-0.15em[ V(\xi_{t+1})|\xi_t=x ]\kern-0.14em]}\\ &=V(x)-\sum\nolimits_{y \in \mathcal{X}} P(\xi_{t+1}=y|\xi_t=x) V(y)\\ &= {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=x ]\kern-0.14em]}-\sum\nolimits_{y \in \mathcal{X}} P(\xi_{t+1}=y|\xi_t=x) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=y ]\kern-0.14em]} \quad (\text{by Eq.\ref{distance}})\\ &=1+\sum\nolimits_{y \in \mathcal{X}} P(\xi'_{1}=y|\xi'_0=x) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_1=y ]\kern-0.14em]}-\sum\nolimits_{y \in \mathcal{X}} P(\xi_{t+1}=y|\xi_t=x) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=y ]\kern-0.14em]}\quad (\text{by Lemma \ref{lem_onestep}})\\ &=1+\sum\nolimits_{y \in \mathcal{X}} P(\xi'_{t+1}=y|\xi'_t=x) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=y ]\kern-0.14em]}-\sum\nolimits_{y \in \mathcal{X}} P(\xi_{t+1}=y|\xi_t=x) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=y ]\kern-0.14em]}\\ &\quad (\text{by Eq.\ref{homogeneous} and Lemma \ref{lem_homo}, since $\{\xi'_t\}^{+\infty}_{t=0}$ is homogeneous})\\ &=1+\sum\nolimits^m_{j=0} (P^t_{\xi'}(x,\mathcal{X}_j)-P^t_{\xi}(x,\mathcal{X}_j)) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0\in \mathcal{X}_j ]\kern-0.14em]}.\quad (\text{by Definitions \ref{def_partition} and \ref{def_jump}}) \end{aligned}$$ Since $\sum^m_{j=0} P^t_{\xi}(x,\mathcal{X}_j)=\sum^m_{j=0} P^t_{\xi'}(x,\mathcal{X}_j)=1$, ${\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0 \in \mathcal{X}_j ]\kern-0.14em]}$ increases with $j$, and Eq.\[analysis\_condition\]
holds, by Lemma \[lemma\_analysis\_condition\], we have $$\sum\nolimits^{m}_{j=0} P^t_{\xi'}(x,\mathcal{X}_j) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0 \in \mathcal{X}_j ]\kern-0.14em]} \geq \sum\nolimits^{m}_{j=0} P^t_{\xi}(x,\mathcal{X}_j) {\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0 \in \mathcal{X}_j ]\kern-0.14em]}.$$ Thus, we have, for all $t \geq 0$ and all $x \notin \mathcal{X}^*$, $${\mathbb{E}[\kern-0.15em[ V(\xi_t)-V(\xi_{t+1}) | \xi_t=x ]\kern-0.14em]}\geq 1.$$ Thus, by Lemma \[drift\], we get for all $x \in \mathcal{X}$, $${\mathbb{E}[\kern-0.15em[ \tau|\xi_0=x ]\kern-0.14em]} \leq V(x)={\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=x ]\kern-0.14em]}, \quad \text{(the `$=$' is by Eq.\ref{distance})}$$ which implies that noise leads to less time for finding the optimal solution, i.e., noise makes optimization easier. We prove below that the experimental example satisfies this sufficient condition. We consider (1+$\lambda$)-EA, which covers (1+1)-EA and is much more general. Let $\{\xi_t\}^{+\infty}_{t=0}$ and $\{\xi'_t\}^{+\infty}_{t=0}$ model (1+$\lambda$)-EA with and without noise for maximizing I$_{hardest}$ problem, respectively. The I$_{hardest}$ problem is to maximize the number of 0 bits, except that the optimal solution is $1^n$. It is not hard to see that the EFHT ${\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=x ]\kern-0.14em]}$ only depends on $|x|_0$ (i.e., the number of 0 bits). We denote by $\mathbb{E}_1(j)$ the value of ${\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=x ]\kern-0.14em]}$ with $|x|_0=j$. The order of $\mathbb{E}_1(j)$ is shown in Lemma \[CFHT\_Trap\], the proof of which is in the Appendix. \[CFHT\_Trap\] For any mutation probability $0<p<0.5$, it holds that $\mathbb{E}_1(0)< \mathbb{E}_1(1)< \mathbb{E}_1(2)< \ldots < \mathbb{E}_1(n).$ \[theo\_helpful\_case1\] Either additive noise with $\delta_2-\delta_1 \leq 2n$ or multiplicative noise with $\delta_2> \delta_1 >0$ makes I$_{hardest}$ problem easier for (1+$\lambda$)-EA with mutation probability less than 0.5.
The proof is by showing that the condition of Theorem \[analysis\_approach\] (i.e., Eq.\[analysis\_condition\]) holds here. By Lemma \[CFHT\_Trap\], the EFHT-Partition of $\{\xi'_t\}^{+\infty}_{t=0}$ is $\mathcal{X}_i=\{x \in \{0,1\}^n | |x|_0=i\} \;(0\leq i\leq n)$, and $m$ in Theorem \[analysis\_approach\] equals $n$ here. Let $f^N(x)$ and $f(x)$ denote the noisy and the true fitness, respectively. For any $x \in \mathcal{X}_{k}\;(k \geq 1)$, we denote by $P(0)$ and $P(j)\;(1 \leq j \leq n)$ the probabilities that, for the $\lambda$ offspring solutions $x_1,\ldots,x_{\lambda}$ generated by bit-wise mutation on $x$, $\min\{|x_1|_0,\ldots,|x_{\lambda}|_0\}=0$ (i.e., the least number of 0 bits is 0), and $\min\{|x_1|_0,\ldots,|x_{\lambda}|_0\}>0 \wedge \max\{|x_1|_0,\ldots,|x_{\lambda}|_0\}=j$ (i.e., the largest number of 0 bits is $j$ while the least number of 0 bits is larger than 0), respectively. Then, we analyze the one-step transition probabilities from $x$ for both $\{\xi'_t\}^{+\infty}_{t=0}$ (i.e., without noise) and $\{\xi_t\}^{+\infty}_{t=0}$ (i.e., with noise). For $\{\xi'_t\}^{+\infty}_{t=0}$, because only the optimal solution or the solution with the largest number of 0 bits among the parent solution and the $\lambda$ offspring solutions will be accepted, we have $$\begin{aligned}\label{onestep1} &P^t_{\xi'}(x,\mathcal{X}_0)=P(0);&& \forall\; 1\leq j \leq k-1: P^t_{\xi'}(x,\mathcal{X}_j)=0;\\ &P^t_{\xi'}(x,\mathcal{X}_k)=\sum\nolimits^{k}_{j=1}P(j); && \forall\; k+1 \leq j \leq n: P^t_{\xi'}(x,\mathcal{X}_j)=P(j). \end{aligned}$$ For $\{\xi_t\}^{+\infty}_{t=0}$ with additive noise, since $\delta_2-\delta_1 \leq 2n$, we have $$\begin{aligned} &f^N(1^n) \geq f(1^n)+\delta_1 \geq 2n+\delta_2-2n=\delta_2;\\ &\forall y\neq 1^n, f^N(y)\leq f(y)+\delta_2 \leq \delta_2. \end{aligned}$$ For multiplicative noise, since $\delta_2>\delta_1 >0$, we have $$\begin{aligned} &f^N(1^n) >0; && \forall y\neq 1^n, f^N(y) \leq 0.
\end{aligned}$$ Thus, for these two kinds of noise, we have $\forall y \neq 1^n: f^N(1^n) \geq f^N(y)$, which implies that if the optimal solution $1^n$ is generated, it will always be accepted. Thus, noting that $\mathcal{X}_0=\{1^n\}$, we have $$\begin{aligned}\label{onestep2} &P^t_{\xi}(x,\mathcal{X}_0)=P(0). \end{aligned}$$ Because the fitness evaluation is disturbed by noise, the solution with the largest number of 0 bits among the parent solution and the $\lambda$ offspring solutions may be rejected. Thus, we have $$\begin{aligned}\label{onestep3} &\forall \; k+1 \leq i \leq n: \sum^{n}_{j=i}P^t_{\xi}(x,\mathcal{X}_j) \leq \sum^{n}_{j=i} P(j). \end{aligned}$$ By combining Eq.\[onestep1\], Eq.\[onestep2\] and Eq.\[onestep3\], we have $$\begin{aligned} &\forall \; 1\leq i \leq n: \sum^{n}_{j=i}P^t_{\xi}(x,\mathcal{X}_j)\leq \sum^{n}_{j=i}P^t_{\xi'}(x,\mathcal{X}_j). \end{aligned}$$ Since $\sum^{n}_{j=0}P^t_{\xi}(x,\mathcal{X}_j)= \sum^{n}_{j=0}P^t_{\xi'}(x,\mathcal{X}_j)=1$, the above inequality is equivalent to $$\begin{aligned} &\forall \; 0\leq i \leq n-1: \sum^{i}_{j=0}P^t_{\xi}(x,\mathcal{X}_j)\geq \sum^{i}_{j=0}P^t_{\xi'}(x,\mathcal{X}_j), \end{aligned}$$ which implies that the condition Eq.\[analysis\_condition\] of Theorem \[analysis\_approach\] holds. Thus, we get that I$_{hardest}$ problem becomes easier for (1+$\lambda$)-EA under these two kinds of noise. Theorem \[analysis\_approach\] gives a sufficient condition for noise to make optimization easier. If the inequality direction in its condition Eq.\[analysis\_condition\] is reversed, which implies that noise leads to a smaller probability of jumping to good states, it obviously becomes a sufficient condition for noise to make optimization harder. We show this in Theorem \[analysis\_approach\_harmful\], the proof of which is similar to that of Theorem \[analysis\_approach\], except that the inequality directions are reversed.
\[analysis\_approach\_harmful\] Given an EA $\mathcal{A}$ and a problem $f$, let a Markov chain $\{\xi_t\}^{+\infty}_{t=0}$ and a homogeneous Markov chain $\{\xi'_t\}^{+\infty}_{t=0}$ model $\mathcal{A}$ running on $f$ with noise and without noise respectively, and denote $\{\mathcal{X}_0,\mathcal{X}_1,\ldots,\mathcal{X}_m\}$ as the EFHT-Partition of $\{\xi'_t\}^{+\infty}_{t=0}$. If for all $t\geq 0$, $x \in \mathcal{X}-\mathcal{X}_0$, and for all integers $i\in [0,m-1]$, $$\begin{aligned}\label{analysis_condition_harmful} &\sum\nolimits^i_{j=0}P^t_{\xi}(x,\mathcal{X}_j) \leq \sum\nolimits^{i}_{j=0} P^t_{\xi'}(x,\mathcal{X}_j), \end{aligned}$$ then noise makes $f$ harder for $\mathcal{A}$, i.e., for all $x \in \mathcal{X}$, $${\mathbb{E}[\kern-0.15em[ \tau | \xi_{0}=x ]\kern-0.14em]} \geq {\mathbb{E}[\kern-0.15em[ \tau' | \xi'_{0}=x ]\kern-0.14em]}.$$ We then apply this condition to the case where (1+$\lambda$)-EA is used for optimizing the easiest case I$_{easiest}$ in the pseudo-Boolean function class. Let $\{\xi_t\}^{+\infty}_{t=0}$ and $\{\xi'_t\}^{+\infty}_{t=0}$ model (1+$\lambda$)-EA with and without noise for maximizing I$_{easiest}$ problem, respectively. It is not hard to see that the EFHT ${\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=x ]\kern-0.14em]}$ only depends on $|x|_0$. We denote by $\mathbb{E}_2(j)$ the value of ${\mathbb{E}[\kern-0.15em[ \tau'|\xi'_0=x ]\kern-0.14em]}$ with $|x|_0=j$. The order of $\mathbb{E}_2(j)$ is shown in Lemma \[CFHT\_OneMax\], the proof of which is in the Appendix. \[CFHT\_OneMax\] For any mutation probability $0<p<0.5$, it holds that $\mathbb{E}_2(0)<\mathbb{E}_2(1)<\mathbb{E}_2(2)<\ldots<\mathbb{E}_2(n).$ \[theo\_harmful\_case\] Any noise makes I$_{easiest}$ problem harder for (1+$\lambda$)-EA with mutation probability less than 0.5. We use Theorem \[analysis\_approach\_harmful\] to prove it. By Lemma \[CFHT\_OneMax\], the EFHT-Partition of $\{\xi'_t\}^{+\infty}_{t=0}$ is $\mathcal{X}_i=\{x \in \{0,1\}^n | |x|_0=i\} \;(0\leq i\leq n)$.
For any non-optimal solution $x \in \mathcal{X}_k \;(k>0)$, we denote by $P(j)\;(0 \leq j \leq n)$ the probability that the least number of 0 bits among the $\lambda$ offspring solutions generated by bit-wise mutation on $x$ is $j$. For $\{\xi'_t\}^{+\infty}_{t=0}$, because the solution with the least number of 0 bits among the parent solution and the $\lambda$ offspring solutions will be accepted, we have $$\begin{aligned} &\forall\; 0\leq j \leq k-1: P^t_{\xi'}(x,\mathcal{X}_j)=P(j); && P^t_{\xi'}(x,\mathcal{X}_k)=\sum\nolimits^{n}_{j=k}P(j); && \forall\; k+1 \leq j \leq n: P^t_{\xi'}(x,\mathcal{X}_j)=0. \end{aligned}$$ For $\{\xi_t\}^{+\infty}_{t=0}$, because the fitness evaluation is disturbed by noise, the solution with the least number of 0 bits among the parent solution and the $\lambda$ offspring solutions may be rejected. Thus, we have $$\begin{aligned} &\forall\; 0\leq i \leq k-1: \sum^{i}_{j=0}P^t_{\xi}(x,\mathcal{X}_j)\leq \sum^{i}_{j=0}P(j). \end{aligned}$$ Then, we can get $$\begin{aligned} &\forall \; 0\leq i \leq n-1: \sum^{i}_{j=0}P^t_{\xi}(x,\mathcal{X}_j)\leq \sum^{i}_{j=0}P^t_{\xi'}(x,\mathcal{X}_j). \end{aligned}$$ This implies that the condition Eq.\[analysis\_condition\_harmful\] of Theorem \[analysis\_approach\_harmful\] holds. Thus, by Theorem \[analysis\_approach\_harmful\], we get that noise makes I$_{easiest}$ problem harder for (1+$\lambda$)-EA. Discussion ---------- We have shown that noise makes I$_{hardest}$ and I$_{easiest}$ problems easier and harder, respectively, for (1+$\lambda$)-EA. These two problems are known to be the hardest and the easiest instances, respectively, in the pseudo-Boolean function class with a unique global optimum for the (1+1)-EA [@qian2012algorithm]. We can intuitively interpret the discovered effect of noise for EAs on these two problems.
For I$_{hardest}$ problem, the EA searches along a deceptive direction, while noise adds randomness that gives the EA some chance of moving in the right direction; for I$_{easiest}$ problem, the EA already searches along the right direction, so noise can only harm the optimization process. We thus hypothesize that the noise needs to be taken care of only when the optimization problem is moderately or less complex. To further verify our hypothesis, we employ the Jump$_{m,n}$ problem, which is a problem with adjustable difficulty and can be configured as I$_{easiest}$ when $m=1$ and I$_{hardest}$ when $m=n$. \[def\_jump\_mn\] Jump$_{m,n}$ Problem of size $n$ with $1 \leq m \leq n$ is to find an $n$-bit binary string $x^*$ such that $$x^*=\mathop{\arg\max}\nolimits_{x \in \{0,1\}^n}\left( f(x)=\begin{cases} m+\sum\nolimits^n_{i=1} x_i & \text{if } \sum\nolimits^n_{i=1} x_i \leq n-m \text{ or } \sum\nolimits^n_{i=1} x_i=n,\\ n-\sum\nolimits^{n}_{i=1} x_i & \text{otherwise,} \end{cases}\right)$$ where $x_i$ is the $i$-th bit of a solution $x \in \{0,1\}^n$. We test (1+1)-EA with mutation probability $\frac{1}{n}$ on Jump$_{m,n}$. It is known that the expected running time of the (1+1)-EA on Jump$_{m,n}$ is $\Theta(n^m+n \log n)$ [@droste2002analysis], which implies that Jump$_{m,n}$ with a larger value of $m$ is harder. In the experiment, we set $n=5$, and for noise, we use the additive noise with $\delta_1=-0.5n \wedge \delta_2=0.5n$, the multiplicative noise with $\delta_1=1 \wedge \delta_2=2$, and the one-bit noise with $p_n=0.5$, respectively. We record the expected running time gap starting from each initial solution, $$gap=({\mathbb{E}[\kern-0.15em[ \tau ]\kern-0.14em]}-{\mathbb{E}[\kern-0.15em[ \tau' ]\kern-0.14em]})/{\mathbb{E}[\kern-0.15em[ \tau' ]\kern-0.14em]},$$ where ${\mathbb{E}[\kern-0.15em[ \tau ]\kern-0.14em]}$ and ${\mathbb{E}[\kern-0.15em[ \tau' ]\kern-0.14em]}$ denote the expected running time of the EA optimizing the problem with and without noise, respectively. A larger gap means that the noise has a more negative effect, while a smaller gap means that the noise has a less negative effect.
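Before turning to the results, the Jump$_{m,n}$ fitness used in this experiment can be sketched as a straightforward transcription of Definition \[def\_jump\_mn\] (function name is ours):

```python
from itertools import product

# A straightforward transcription (ours) of Definition [def_jump_mn]:
# f(x) = m + |x|_1 if |x|_1 <= n - m or |x|_1 = n, and n - |x|_1 otherwise.

def jump(x, m):
    n, ones = len(x), sum(x)
    return m + ones if ones <= n - m or ones == n else n - ones

n = 5
# m = 1: f(x) = 1 + |x|_1 for every x, i.e. a shifted I_easiest.
print(all(jump(x, 1) == 1 + sum(x) for x in product([0, 1], repeat=n)))
# m = n: the optimum 1^n has value 2n, while every other solution is
# rewarded for 0 bits, the deceptive I_hardest-like landscape.
print(jump((1,) * n, n))   # 2n = 10
```

The two extreme settings of $m$ recover the easiest and hardest instances, which is why $m$ serves as a difficulty knob in the experiment.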
For each initial solution and each configuration of noise, we run the (1+1)-EA independently 1000 times, estimate the expected running time by the average running time, and thus estimate the gap. The results are plotted in Figure \[fig\_ratio\]. ![image](ratio_additive){width="0.8\linewidth" height="0.65\linewidth"} ![image](ratio_multi){width="0.8\linewidth" height="0.65\linewidth"} ![image](ratio_onebit){width="0.8\linewidth" height="0.65\linewidth"} \ \(a) additive noise \(b) multiplicative noise \(c) one-bit noise \ We can observe that the gaps for larger $m$ are lower (i.e., the negative effect of noise decreases as the problem hardness increases), and the gaps for large $m$ tend to be 0 or negative (i.e., noise can have no effect or a positive effect when the optimization is quite hard). These empirical observations support our hypothesis that the noise should be handled carefully only when the optimization is moderately or less complex. On the Usefulness of Noise Handling Strategies ============================================== Re-evaluation ------------- There are naturally two fitness evaluation options for EAs [@arnold2002local; @jin2005evolutionary; @goh2007investigation; @heidrich2009hoeffding]: - **single-evaluation**: a solution is evaluated once, and the evaluated fitness is reused for this solution in the future. - **re-evaluation**: every time the fitness of a solution is accessed, it is evaluated anew. For example, for (1+1)-EA in Algorithm \[(1+1)-EA\], if using re-evaluation, both $f(x')$ and $f(x)$ will be recalculated in each iteration; if using single-evaluation, only $f(x')$ will be calculated and the previously obtained fitness $f(x)$ will be reused. Intuitively, re-evaluation can smooth noise and thus could be better for noisy optimization, but it also increases the fitness evaluation cost and thus the running time. Its usefulness has not been clear.
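The two evaluation options can be contrasted in a small sketch of our own (we count iterations rather than fitness evaluations, assume uniform random initialization, and the parameter values below are illustrative only):

```python
import random

# A sketch (ours) contrasting single-evaluation and re-evaluation for the
# (1+1)-EA on I_easiest under one-bit noise with noise level p_n.

def noisy_onemax(x, p_n, rng):
    v = sum(x)
    if rng.random() < p_n:                 # one-bit noise: flip one random bit
        i = rng.randrange(len(x))
        v += 1 if x[i] == 0 else -1
    return v

def run(n, p_n, reevaluate, rng, max_steps=10**5):
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = noisy_onemax(x, p_n, rng)
    for t in range(1, max_steps + 1):
        if sum(x) == n:
            return t                       # optimum 1^n reached
        y = [b ^ (rng.random() < 1 / n) for b in x]
        if reevaluate:
            fx = noisy_onemax(x, p_n, rng) # re-evaluate the parent too
        fy = noisy_onemax(y, p_n, rng)
        if fy >= fx:
            x, fx = y, fy
    return max_steps

rng = random.Random(1)
n, p_n, runs = 5, 0.2, 2000
single = sum(run(n, p_n, False, rng) for _ in range(runs)) / runs
re_eval = sum(run(n, p_n, True, rng) for _ in range(runs)) / runs
print(single, re_eval)
```

Note that in practice re-evaluation roughly doubles the number of fitness calls per iteration, which is exactly the cost trade-off discussed above.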
Note that the analysis in the previous section assumes single-evaluation. In this section, we take the I$_{easiest}$ problem, on which noise has been proved to have a strong negative effect in the previous section, as the representative problem, and compare the two options for (1+1)-EA with mutation probability $\frac{1}{n}$ solving this problem under one-bit noise, to show whether re-evaluation is useful. Note that for one-bit noise, $p_n$ controls the noise level, that is, noise becomes stronger as $p_n$ gets larger; it is also the variable of the PNT. \[runtime\_without\] The PNT of (1+1)-EA using single-evaluation with mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem is lower bounded by $1-1/\Omega(poly(n))$ and upper bounded by $1-1/O(2^npoly(n))$, where $poly(n)$ indicates any polynomial of $n$, with respect to one-bit noise. The theorem is straightforwardly derived from the following lemma. \[runtime\_single\] For (1+1)-EA using single-evaluation with mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem under one-bit noise, the expected running time is $O(n^2+n/(1-p_n))$ and $\Omega(np_n/(2^n(1-p_n)))$. Let $L$ denote the noisy fitness value $f^N(x)$ of the current solution $x$. Because (1+1)-EA does not accept a solution with a smaller fitness (i.e., the 4th step of Algorithm \[(1+1)-EA\]) and it does not re-evaluate the fitness of the current solution $x$, $L\;(0\leq L\leq n)$ will never decrease. We first analyze the expected number of steps until $L$ increases when starting from $L=i$ (denoted by ${\mathbb{E}[\kern-0.15em[ i ]\kern-0.14em]}$), and then sum them up to get an upper bound $\sum\nolimits^{n-1}_{i=0} {\mathbb{E}[\kern-0.15em[ i ]\kern-0.14em]}$ for the expected number of steps until $L$ reaches the maximum value $n$. For ${\mathbb{E}[\kern-0.15em[ i ]\kern-0.14em]}$, we analyze the probability $P$ that $L$ increases in two steps when $L=i$; then ${\mathbb{E}[\kern-0.15em[ i ]\kern-0.14em]}\leq 2 \cdot \frac{1}{P}$.
Note that one-bit noise can make $L$ be $|x|_1-1$, $|x|_1$ or $|x|_1+1$, where $|x|_1=\sum^{n}_{i=1} x_i$ is the number of 1 bits. When analyzing the noisy fitness $f^N(x')$ of the offspring $x'$ in each step, we need to first consider bit-wise mutation on $x$ and then one random bit flip for noise. When $0<L<n-1$, $|x|_1=L-1$, $L$ or $L+1$.\ (1) For $|x|_1=L-1$, $P\geq\frac{n-L+1}{n}(1-\frac{1}{n})^{(n-1)}p_n\frac{n-L}{n}+\frac{n-L+1}{n} (1-\frac{1}{n})^{(n-1)}(1-p_n)\frac{n-L}{n}(1-\frac{1}{n})^{(n-1)}(1-p_n)$, since it is sufficient to flip one 0 bit for mutation and one 0 bit for noise in the first step, or to flip one 0 bit for mutation and no bit for noise in the first step and then flip one 0 bit for mutation and no bit for noise in the second step.\ (2) For $|x|_1=L$, $P\geq(1-\frac{1}{n})^np_n\frac{n-L}{n}+\frac{n-L}{n}(1-\frac{1}{n})^{n-1}(1-p_n),$ since it is sufficient to flip no bit for mutation and one 0 bit for noise, or to flip one 0 bit for mutation and no bit for noise in the first step.\ (3) For $|x|_1=L+1$, $P\geq(1-\frac{1}{n})^{n}(1-p_n+p_n\frac{n-L-1}{n}),$ since it is sufficient to flip no bit for mutation and no bit or one 0 bit for noise in the first step.\ Thus, for these three cases, we have $$\begin{aligned}\label{one-step-probability} P&\geq p_n(1-\frac{1}{n})^{(n-1)}\frac{n-L}{n}\frac{n-L-1}{n}+(1-\frac{1}{n})^{2(n-1)} (1-p_n)^2 \frac{n-L}{n}\frac{n-L-1}{n}\\ &\geq^1 (p_n+(1-p_n)^2) \frac{(n-L)(n-L-1)}{e^2n^2} \geq^2 \frac{3(n-L)(n-L-1)}{4e^2n^2}, \end{aligned}$$ where the ‘$\geq^1$’ is by $(1-\frac{1}{n})^{n-1} \geq \frac{1}{e}$ and the ‘$\geq^2$’ is by $0\leq p_n\leq 1$. When $L=0$, $|x|_1=0$ or 1. By considering cases (2) and (3), we can get the same lower bound for $P$. When $L=n-1$ and the optimal solution $1^n$ has not been found, $|x|_1=n-2$ or $n-1$. By considering cases (1) and (2), we can get $P \geq 3/(2e^2n^2)$.
Based on the above analysis, we can get that the expected number of steps until $L=n$ is at most $$\sum\nolimits^{n-1}_{i=0} {\mathbb{E}[\kern-0.15em[ i ]\kern-0.14em]}\leq 2 \cdot (\sum^{n-2}_{L=0}\frac{4e^2n^2}{3(n-L)(n-L-1)}+\frac{2e^2n^2}{3}),\; \text{i.e.}, O(n^2).$$ When $L=n$, $|x|_1=n-1$ or $n$ (i.e., the optimal solution has been found). If $|x|_1=n-1$, the optimal solution will be generated and accepted in one step with probability $\frac{1}{n}(1-\frac{1}{n})^{n-1}(1-p_n)\geq \frac{(1-p_n)}{en}$, because it needs to flip the unique 0 bit for mutation and no bit for noise. This implies that the expected number of steps for finding the optimal solution is at most $\frac{en}{(1-p_n)}$. Thus, we get the upper bound $O(n^2+\frac{n}{1-p_n})$ for the expected running time of the whole process. We now analyze the lower bound. Assume that the initial solution $x_{init}$ has $n-1$ one bits, i.e., $|x_{init}|_1=n-1$. If the fitness of $x_{init}$ is evaluated as $n$, which happens with probability $p_n\frac{1}{n}$, then before finding the optimal solution, the solution will always have $n-1$ one bits and its fitness will always be $n$. From the above analysis, we know that in such a situation, the probability of generating and accepting the optimal solution in one step is $\frac{1}{n}(1-\frac{1}{n})^{n-1}(1-p_n) \leq \frac{(1-p_n)}{n}$. Thus, the expected running time for finding the optimal solution when starting from $|x_{init}|_1=n-1$ is at least $p_n\frac{1}{n} \cdot \frac{n}{(1-p_n)}=\frac{p_n}{(1-p_n)}$. Because the initial solution is uniformly distributed over $\{0,1\}^n$, the probability that the algorithm starts from $|x_{init}|_1=n-1$ is $n/2^n$. Thus, we get the lower bound $\Omega(\frac{np_n}{2^n(1-p_n)})$ for the expected running time of the whole process. \[runtime\_with\] The PNT of (1+1)-EA using re-evaluation with mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem is $\Theta(\frac{\log(n)}{n})$, with respect to one-bit noise.
The theorem is straightforwardly derived from the following lemma. \[[@droste2004analysis]\] For (1+1)-EA using re-evaluation with mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem under one-bit noise, the expected running time is polynomial when $p_n\in O(\log(n)/n)$, and the running time is polynomial only with super-polynomially small probability when $p_n\in \omega(\log(n)/n)$. Threshold Selection ------------------- During the process of evolutionary optimization, most of the improvements in one generation are small. When using re-evaluation, due to noisy fitness evaluation, a considerable portion of these improvements are not real: a worse solution appears to have a “better" fitness and then survives to replace the truly better solution, which has a “worse" fitness. This may mislead the search direction of EAs, and then reduce the efficiency of EAs or make EAs get trapped in a local optimum, as observed in Section 4.1. To deal with this problem, a selection strategy for EAs handling noise was proposed [@markon2001thresholding]. - **threshold selection**: an offspring solution will be accepted only if its fitness is larger than that of the parent solution by at least a predefined threshold $\tau \geq 0$. For example, for (1+1)-EA with threshold selection as in Algorithm \[(1+1)-EA-threshold\], its 4th step becomes “if [$f(x') \geq f(x)+\tau$]{}" rather than “if [$f(x') \geq f(x)$]{}" as in Algorithm \[(1+1)-EA\]. Such a strategy can reduce the risk of accepting a bad solution due to noise. Although the good local performance (i.e., the progress of one step) of EAs with threshold selection has been shown on some problems [@markon2001thresholding; @beielstein2002threshold; @bartz2005new], its usefulness for the global performance (i.e., the running time until finding the optimal solution) of EAs under noise is not yet clear.
\[(1+1)-EA-threshold\] Given pseudo-Boolean function $f$ with solution length $n$, and a predefined threshold $\tau \geq 0$, it consists of the following steps:\ ---- --------------------------------------------------- 1. $x:=$ randomly selected from $\{0,1\}^{n}$. 2. Repeat until the termination condition is met 3. $x':=$ flip each bit of $x$ with probability $p$. 4. if [$f(x') \geq f(x)+\tau$]{} 5. $x:=x'$. ---- --------------------------------------------------- \ where $p \in (0,0.5)$ is the mutation probability. In this section, we compare the running time of (1+1)-EA with and without threshold selection solving I$_{easiest}$ problem under one-bit noise to show whether threshold selection will be useful. Note that, the analysis here assumes re-evaluation. Algorithm \[random\_walk\] shows a random walk on a graph. Lemma \[theo\_randwalk\] gives an upper bound on the expected steps for a random walk to visit each vertex of a graph at least once, which will be used in the following analysis. \[random\_walk\] Given an undirected connected graph $G=(V,E)$ with vertex set $V$ and edge set $E$, it consists of the following steps:\ ---- ---------------------------------------------------------- 1. start at a vertex $v \in V$. 2. Repeat until the termination condition is met 3. choose a neighbor $u$ of $v$ in $G$ uniformly at random. 4. set $v:=u$. ---- ---------------------------------------------------------- \ \[theo\_randwalk\] Given an undirected connected graph $G=(V,E)$, the expected cover time of a random walk on $G$ is upper bounded by $2|E|(|V|-1)$, where the cover time of a random walk on $G$ is the number of steps until each vertex $v \in V$ has been visited at least once. \[runtime\_theo\_threshold\] The PNT of (1+1)-EA using re-evaluation with threshold selection $\tau=1$ and mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem is not less than $\frac{1}{2e}$, with respect to one-bit noise. 
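The cover-time bound of Lemma \[theo\_randwalk\] can be checked empirically. The sketch below simulates Algorithm \[random\_walk\] on a small path graph (the graph that the later proofs reduce to) and compares the empirical average cover time with $2|E|(|V|-1)$; the code and parameter choices are illustrative only.

```python
import random

def cover_time_on_path(n_vertices, rng, start=0):
    """Run Algorithm [random_walk] on the path 0-1-...-(n_vertices-1),
    returning the number of steps until every vertex has been visited."""
    v = start
    visited = {v}
    steps = 0
    while len(visited) < n_vertices:
        nbrs = [u for u in (v - 1, v + 1) if 0 <= u < n_vertices]
        v = rng.choice(nbrs)      # uniformly random neighbor
        visited.add(v)
        steps += 1
    return steps

rng = random.Random(1)
n = 10                            # |V| = 10 vertices, |E| = 9 edges
bound = 2 * 9 * (n - 1)           # Lemma [theo_randwalk]: 2|E|(|V|-1) = 162
mean = sum(cover_time_on_path(n, rng) for _ in range(2000)) / 2000
print(mean, bound)                # the empirical mean stays below the bound
```

Starting from an endpoint of the path, the expected cover time equals the hitting time of the other endpoint, which is well below the lemma's worst-case bound.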
The theorem can be directly derived from the following lemma. \[theo\_threshold\] For (1+1)-EA using re-evaluation with threshold selection $\tau=1$ and mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem under one-bit noise, the expected running time is $O(n^3)$ when $p_n \leq \frac{1}{2e}$. We denote the number of one bits of the current solution $x$ by $L\;(0\leq L \leq n)$. Let $P_d$ denote the probability that the offspring solution $x'$ by bit-wise mutation on $x$ has $L+d \;(-L\leq d \leq n-L)$ number of one bits, and let $P'_d$ denote the probability that the next solution after bit-wise mutation and selection has $L+d$ number of one bits. Then, we analyze $P'_d$. We consider $0 \leq L \leq n-1$. Note that one-bit noise can change the true fitness of a solution by at most 1, i.e., $|f^N(x)-f(x)|\leq 1$.\ (1) When $d \leq -2$, $f^N(x') \leq L+d+1 \leq L-1 \leq f^N(x)$. Because an offspring solution will be accepted only if $f^N(x') \geq f^N(x)+1$, the offspring solution $x'$ will be discarded in this case, which implies that $\forall d \leq -2: P'_d=0$.\ (2) When $d=-1$, the offspring solution $x'$ will be accepted only if $f^N(x')=L \wedge f^N(x)=L-1$, the probability of which is $p_n\frac{n-L+1}{n}\cdot p_n\frac{L}{n}$, since it needs to flip one 0 bit of $x'$ and flip one 1 bit of $x$. Thus, $P'_{-1}=P_{-1}\cdot(p_n\frac{L}{n}p_n\frac{n-L+1}{n})$.\ (3) When $d=1$, if $f^N(x)=L-1$, the probability of which is $p_n\frac{L}{n}$, the offspring solution $x'$ will be accepted, since $f^N(x') \geq L+1-1=L>f^N(x)$; if $f^N(x)=L \wedge f^N(x')\geq L+1$, the probability of which is $(1-p_n)\cdot(1-p_n+p_n\frac{n-L-1}{n})$, $x'$ will be accepted; if $f^N(x)=L+1 \wedge f^N(x')=L+2$, the probability of which is $p_n\frac{n-L}{n}\cdot p_n\frac{n-L-1}{n}$, $x'$ will be accepted; otherwise, $x'$ will be discarded. 
Thus, $P'_{1}=P_{1}\cdot(p_n\frac{L}{n}+(1-p_n)(1-p_n+p_n\frac{n-L-1}{n})+p_n\frac{n-L}{n}p_n\frac{n-L-1}{n})$.\ (4) When $d \geq 2$, it is easy to see that $P'_d>0$. Because we are to get the upper bound of the expected running time for finding the optimal solution $1^n$ for the first time, we pessimistically assume that $\forall d \geq 2: P'_d=0$. Then, we compare $P'_1$ with $P'_{-1}$. $$\begin{aligned} & P'_1\geq P_1 p_n \frac{L}{n} \geq \frac{n-L}{n}(1-\frac{1}{n})^{n-1}p_n\frac{L}{n}\geq p_n\frac{L(n-L)}{en^2}, \end{aligned}$$ where the second inequality is by $P_1 \geq \frac{n-L}{n}(1-\frac{1}{n})^{n-1}$ since it is sufficient to flip just one 0 bit, and the last inequality is by $(1-\frac{1}{n})^{n-1}\geq \frac{1}{e}$. $$\begin{aligned} & P'_{-1}=P_{-1}(p_n\frac{L}{n}p_n\frac{n-L+1}{n})\leq\frac{L}{n}(p_n\frac{L}{n}p_n\frac{n-L+1}{n}) \leq p_n\frac{L}{en^2} \cdot \frac{L(n-L+1)}{2n}\leq p_n\frac{L(n-L)}{en^2}, \end{aligned}$$ where the first inequality is by $P_{-1}\leq \frac{L}{n}$ since it is necessary to flip at least one 1 bit, the second inequality is by $p_n\leq \frac{1}{2e}$, and the last inequality is by $\frac{L(n-L+1)}{2n}\leq n-L$. Thus, we have for all $0\leq L\leq n-1$, $P'_{1} \geq P'_{-1}$. Because we are to get the upper bound of the expected running time for finding $1^n$, we can pessimistically assume that $P'_{1} = P'_{-1}$. Then, we can view the evolutionary process as a random walk on the path $\{0,1,2,\ldots,n\}$. We call a step that jumps to the neighbor state a relevant step. Thus, by Lemma \[theo\_randwalk\], it needs at most $2n^2$ expected relevant steps to find $1^n$. Because the probability of a relevant step is at least $P'_{1} \geq P_1(1-p_n)^2\geq \frac{n-L}{n}(1-\frac{1}{n})^{n-1}(1-\frac{1}{2e})^2 \geq (1-\frac{1}{2e})^2/en$, the expected running time for a relevant step is $O(n)$. Thus, the expected running time of (1+1)-EA with $\tau=1$ on I$_{easiest}$ problem with $p_n \leq \frac{1}{2e}$ is upper bounded by $O(n^3)$. 
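The comparison of $P'_1$ and $P'_{-1}$ above can also be checked numerically, by evaluating the proof's lower bound on $P'_1$ against its upper bound on $P'_{-1}$ for every $L$ and several noise levels $p_n \leq \frac{1}{2e}$. The script below is only a sanity check of the inequality used in the proof, not part of the proof itself.

```python
import math

def bounds_hold(n, p_n):
    """For threshold tau = 1, compare the proof's lower bound on P'_1
    (P_1 >= ((n-L)/n)(1-1/n)^(n-1), times p_n*L/n) with its upper bound
    on P'_{-1} (P_{-1} <= L/n, times (p_n*L/n)*(p_n*(n-L+1)/n))."""
    for L in range(n):                      # 0 <= L <= n-1
        p1_lb = (n - L) / n * (1 - 1 / n) ** (n - 1)
        pm1_ub = L / n
        lower_P1 = p1_lb * (p_n * L / n)
        upper_Pm1 = pm1_ub * (p_n * L / n) * (p_n * (n - L + 1) / n)
        if lower_P1 < upper_Pm1:
            return False
    return True

ok = all(bounds_hold(n, p_n)
         for n in (5, 10, 50, 200)
         for p_n in (0.01, 0.05, 1 / (2 * math.e)))
print(ok)   # True: the lower bound on P'_1 dominates for p_n <= 1/(2e)
```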
\[runtime\_theo\_threshold2\] The PNT of (1+1)-EA using re-evaluation with threshold selection $\tau=2$ and mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem is lower bounded by $1-1/\Omega(poly(n))$ and upper bounded by $1-1/O(2^npoly(n))$, where $poly(n)$ indicates any polynomial of $n$, with respect to one-bit noise. The theorem can be directly derived from the following lemma. \[theo\_threshold2\] For (1+1)-EA using re-evaluation with threshold selection $\tau=2$ and mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem under one-bit noise, the expected running time is $O(n\log n/(p_n(1-p_n)))$ and\ $\Omega(n^2/(2^np_n(1-p_n)))$. Let $L\;(0\leq L \leq n)$ denote the number of one bits of the current solution $x$. Here, an offspring solution $x'$ will be accepted only if $f^N(x')-f^N(x) \geq 2$. As in the proof of Lemma \[theo\_threshold\], we can derive $$\begin{aligned} &\forall d \leq -1: P'_d=0; \\ &P'_1=P_{1}\big(p_n\frac{L}{n}((1-p_n)+p_n\frac{n-L-1}{n})+(1-p_n)(p_n\frac{n-L-1}{n})\big);\\ &\forall d \geq 2: P'_d>0. \end{aligned}$$ Thus, $L$ will never decrease in the evolution process, and it can increase in one step with probability $$\begin{aligned} P'_{d>0} &> P'_1 \geq \frac{n-L}{n}(1-\frac{1}{n})^{(n-1)}((1-p_n)p_n(1-\frac{1}{n})+p^2_n\frac{L(n-L-1)}{n^2})\\ & \geq \frac{1}{2e} (1-p_n)p_n \frac{n-L}{n}. \end{aligned}$$ Then, we can get that the expected number of steps until $L=n$ (i.e., the optimal solution is found) is at most $$\sum^{n-1}_{L=0} \frac{2en}{(1-p_n)p_n (n-L)},\; \text{i.e.}, O(\frac{n \log n}{p_n(1-p_n)}).$$ Next, we analyze the lower bound. Assume that the initial solution $x_{init}$ has $n-1$ number of 1 bits. Before finding the optimal solution, the solution $x$ in the population will always satisfy $|x|_1=n-1$ because $\forall d \leq -1: P'_d=0$.
The optimal solution (i.e., $|x|_1=n$) will be found in one step with probability $P'_1=P_1p_n(1-p_n)(1-\frac{1}{n})=\frac{1}{n}(1-\frac{1}{n})^{(n-1)}p_n(1-p_n)(1-\frac{1}{n}) \leq \frac{p_n(1-p_n)}{en}$. Thus, the expected number of steps for finding the optimal solution when starting from $|x_{init}|_1=n-1$ is at least $\frac{en}{p_n(1-p_n)}$. By the uniform distribution of the initial solution, the probability that $|x_{init}|_1=n-1$ is $n/2^n$. Thus, we get the lower bound $\Omega(\frac{n^2}{2^np_n(1-p_n)})$ on the expected running time of the whole process. Smooth Threshold Selection -------------------------- We propose the smooth threshold selection as in Definition \[smooth\], which modifies the original threshold selection by changing the hard threshold value to a smooth one. We will show that, with such a small modification, the PNT of (1+1)-EA on I$_{easiest}$ problem is improved to 1, which means that the expected running time of (1+1)-EA is always polynomial regardless of the one-bit noise level. \[smooth\] Let $\delta$ be the gap between the fitness of the offspring solution $x'$ and the parent solution $x$, i.e., $\delta=f(x')-f(x)$. Then, the selection process will behave as follows:\ (1) if $\delta \leq 0$, $x'$ will be rejected;\ (2) if $\delta=1$, $x'$ will be accepted with probability $\frac{1}{5n}$;\ (3) if $\delta>1$, $x'$ will be accepted. \[theo\_threshold\_smart\] The PNT of (1+1)-EA using re-evaluation with smooth threshold selection and mutation probability $\frac{1}{n}$ on I$_{easiest}$ problem is 1, with respect to one-bit noise. We first analyze $P'_{d}$ as in the proof of Lemma \[theo\_threshold\]. The only difference is that when the fitness gap between the offspring and the parent solution is 1, the offspring solution will be accepted with probability $\frac{1}{5n}$ here, while it would always be accepted in the proof of Lemma \[theo\_threshold\].
Thus, for smooth threshold selection, we can similarly derive $$\begin{aligned} &\forall d \leq -2: P'_d=0; \\ &P'_{-1}=P_{-1}(p_n\frac{L}{n}p_n\frac{n-L+1}{n}) \cdot \frac{1}{5n};\\ &P'_1=P_{1}\big(p_n\frac{L}{n}(p_n\frac{L+1}{n} \cdot \frac{1}{5n}+(1-p_n)+p_n\frac{n-L-1}{n})+(1-p_n)((1-p_n)\cdot \frac{1}{5n}+p_n\frac{n-L-1}{n})\\ &\qquad +p_n\frac{n-L}{n}p_n\frac{n-L-1}{n}\cdot \frac{1}{5n}\big);\\ & \forall d \geq 2: P'_d>0. \end{aligned}$$ Note that $L$ ($0 \leq L \leq n$) denotes the number of one bits of the current solution $x$. Our goal is to reach $L=n$. If starting from $L=n-1$, $L$ will reach $n$ in one step with probability $$\begin{aligned} &P'_1 \geq P_1 (p_n\frac{L}{n} p_n\frac{L+1}{n} \cdot \frac{1}{5n}+(1-p_n)(1-p_n)\cdot \frac{1}{5n})\\ & \geq \frac{n-L}{n}(1-\frac{1}{n})^{n-1} (p_n\frac{L}{n} p_n\frac{L+1}{n} \cdot \frac{1}{5n}+(1-p_n)(1-p_n)\cdot \frac{1}{5n})\\ & \geq \frac{1}{5en^2}(\frac{n-1}{n}p_n^2+(1-p_n)^2) \quad (\text{by $L=n-1$ and $(1-\frac{1}{n})^{n-1} \geq \frac{1}{e}$})\\ & \geq \frac{1}{5en^2} \cdot \frac{n-1}{2n-1} \in \Omega(\frac{1}{n^2}). \quad (\text{by $0 \leq p_n \leq 1$}) \end{aligned}$$ Thus, for reaching $L=n$, we need to reach $L=n-1$ for $O(n^2)$ times in expectation. Then, we analyze the expected running time until $L=n-1$. In this process, we can pessimistically assume that $L=n$ will never be reached, because our final goal is to get the upper bound on the expected running time for reaching $L=n$. For $0 \leq L \leq n-2$, we have $$\begin{aligned} &\frac{P'_1}{P'_{-1}} \geq \frac{P_1 \cdot (p_n\frac{L}{n} p_n\frac{n-L-1}{n})}{P_{-1}\cdot (p_n\frac{L}{n}p_n\frac{n-L+1}{n}) \cdot \frac{1}{5n}} \geq \frac{\frac{n-L}{n}(1-\frac{1}{n})^{n-1} \cdot (p_n\frac{L}{n} p_n\frac{n-L-1}{n})}{\frac{L}{n}\cdot (p_n\frac{L}{n}p_n\frac{n-L+1}{n}) \cdot \frac{1}{5n}}\\ &\geq \frac{5n(n-L)(n-L-1)}{eL(n-L+1)}=\frac{5n(\frac{n}{L}-1)}{e(1+\frac{2}{n-L-1})} > 1. 
\end{aligned}$$ Again, we can pessimistically assume that $P'_1=P'_{-1}$ and $\forall d \geq 2, P'_{d}=0$, because we are to get the upper bound on the expected running time until $L=n-1$. Then, we can view the evolutionary process for reaching $L=n-1$ as a random walk on the path $\{0,1,2,\ldots,n-1\}$. We call a step that jumps to the neighbor state a relevant step. Thus, by Lemma \[theo\_randwalk\], it needs at most $2(n-1)^2$ expected relevant steps to reach $L=n-1$. Because the probability of a relevant step is at least $$\begin{aligned} & P'_{1} \geq P_1((1-p_n)(1-p_n)\cdot \frac{1}{5n}+ p_n\frac{n-L}{n}p_n\frac{n-L-1}{n}\cdot \frac{1}{5n})\\ & \geq \frac{n-L}{5en^2}((1-p_n)^2 + p^2_n\frac{(n-L)(n-L-1)}{n^2})\\ & \geq \frac{2}{5en^2}((1-p_n)^2+\frac{2}{n^2}p^2_n) \geq \frac{2}{5en^2} \cdot \frac{2}{n^2+2}, \end{aligned}$$ the expected running time for a relevant step is $O(n^4)$. Then, the expected running time for reaching $L=n-1$ is $O(n^6)$. Thus, the expected running time of the whole optimization process is $O(n^8)$ for any $p_n \in [0,1]$, and then this theorem holds. From the proof of Theorem \[theo\_threshold\_smart\], we can draw an intuitive understanding of why the smooth threshold selection can be better than the original threshold selection. By changing the hard threshold to a smooth one, it not only makes the probability of accepting a false better solution in one step small enough, i.e., $P'_1 \geq P'_{-1}$, but also keeps the probability of producing progress in one step large enough, i.e., $P'_1$ is not small. Discussions and Conclusions =========================== This paper studies theoretical issues of noisy optimization by evolutionary algorithms. First, we discover that an optimization problem may become easier instead of harder in a noisy environment. We then derive a sufficient condition under which noise makes optimization easier or harder.
By verifying this condition, we have shown that for (1+$\lambda$)-EA, noise makes the optimization of the hardest and the easiest case in the pseudo-Boolean function class easier and harder, respectively. We also hypothesize that we need to take care of noise only when the optimization problem is moderately or less complex. Experiments on the Jump$_{m,n}$ problem, which has an adjustable difficulty parameter, supported our hypothesis. For problems where the noise has a negative effect, we then study the usefulness of two commonly employed noise-handling strategies, re-evaluation and threshold selection. The study takes the easiest case in the pseudo-Boolean function class as the representative problem, where the noise significantly harms the expected running time of the (1+1)-EA. We use the polynomial noise tolerance (PNT) level as the performance measure, and analyze the PNT of each EA. The re-evaluation strategy seems to be a reasonable method for reducing random noise. However, we derive that the (1+1)-EA with single-evaluation has a PNT lower bound $1-1/\Omega(poly(n))$ from Theorem 5, which is close to $1$, whilst the (1+1)-EA with re-evaluation has the PNT $\Theta(\log(n)/n)$, which can be quite close to zero when $n$ is large. It is surprising to see that the re-evaluation strategy leads to a much worse noise tolerance than using no noise handling method at all. The re-evaluation with threshold selection strategy has a better PNT compared with re-evaluation alone. When the threshold is 1, we derive a PNT lower bound $\frac{1}{2e}$ from Theorem 7, and when the threshold is 2, we obtain $1-1/\Omega(poly(n))$ from Theorem 8. The improvement over re-evaluation alone can be explained by the fact that the threshold selection filters out fake progress caused by the noise. However, it still shows no improvement over the (1+1)-EA without any noise handling method.
We then proposed the smooth threshold selection, which acts like the threshold selection with threshold 2 but accepts improvements of 1 with a certain probability. We proved that the (1+1)-EA with the smooth threshold selection has the PNT 1 from Theorem 9, which exceeds that of (1+1)-EA without any noise handling method. Our explanation is that, like the original threshold selection, the proposed one filters out fake progress, while it also keeps some chance of accepting real progress. Although the investigated EAs and problems in this paper are simple and specifically used for the theoretical analysis of EAs, the analysis still disclosed counter-intuitive results and, particularly, demonstrated that theoretical investigation is essential in designing better noise handling strategies. We are optimistic that our findings may be helpful for practical uses of EAs, which will be studied in the future. Acknowledgements ================ to be added ... Appendix {#appendix .unnumbered} ======== [Lemma \[lemma\_analysis\_condition\]]{} We prove it by induction on $m$. [**(a) Initialization**]{} is to prove that it holds when $m=1$. $$\begin{aligned} &\sum\nolimits^{1}_{i=0}P_iE_i =\sum\nolimits^{1}_{i=0}Q_iE_i+(P_0-Q_0)E_0+(P_1-Q_1)E_1\\ &\mathop{=}^{1}\sum\nolimits^{1}_{i=0}Q_iE_i+(P_0-Q_0)E_0+(1-P_0-(1-Q_0))E_1\\ &=\sum\nolimits^{1}_{i=0}Q_iE_i+(P_0-Q_0)(E_0-E_1)\geq \sum\nolimits^{1}_{i=0}Q_iE_i, \end{aligned}$$ where the ‘$\mathop{=}\limits^{1}$’ is by $P_0+P_1=Q_0+Q_1=1$, and the ‘$\geq$’ is by $P_0 \leq Q_0$ and $E_0<E_1$. [**(b) Inductive Hypothesis**]{} assumes that this lemma holds when $1\leq m \leq k$. Then, we consider $m=k+1$. The proof idea is to combine the first two terms of $\sum^{k+1}_{i=0}P_iE_i$, and then apply the inductive hypothesis.
\(1) When $P_0=P_1=0$, we can get $$\begin{aligned} &\sum\nolimits^{k+1}_{i=0}P_iE_i = (P_0+P_1)E_1 +\sum\nolimits^{k+1}_{i=2}P_iE_i\\ &\mathop{=}^{1}\sum\nolimits^{k}_{i=0} P'_i E'_i \geq^1 \sum\nolimits^{k}_{i=0} Q'_i E'_i\\ &\mathop{=}^{2}(Q_0+Q_1)E_1+ \sum\nolimits^{k+1}_{i=2}Q_iE_i \geq^2 \sum\nolimits^{k+1}_{i=0}Q_iE_i, \end{aligned}$$ where the ‘$\mathop{=}\limits^{1}$’ and ‘$\mathop{=}\limits^{2}$’ are by letting $E'_i=E_{i+1}$, $P'_0=P_0+P_1$, $Q'_0=Q_0+Q_1$ and $\forall i \geq 1, P'_i=P_{i+1}, Q'_i=Q_{i+1}$; the ‘$\mathop{\geq}^{1}$’ is by applying the inductive hypothesis, because for $P'_i, Q'_i, E'_i$, the three conditions of this lemma hold and $m=k$; and the ‘$\mathop{\geq}^{2}$’ is by $E_1>E_0$ and $Q_0 \geq 0$. \(2) When $P_0+P_1>0$, we consider two cases.\ (2.1) If $P_1> Q_1$, we have $$\begin{aligned} &\sum\nolimits^{k+1}_{i=0}P_iE_i=(P_0+P_1)\frac{P_0E_0+P_1E_1}{P_0+P_1}+\sum\nolimits^{k+1}_{i=2}P_iE_i\\ &\geq^1 (Q_0+Q_1)\frac{P_0E_0+P_1E_1}{P_0+P_1}+\sum\nolimits^{k+1}_{i=2}Q_iE_i\\ &\geq^2 (Q_0+Q_1)\frac{Q_0E_0+Q_1E_1}{Q_0+Q_1}+\sum\nolimits^{k+1}_{i=2}Q_iE_i=\sum\nolimits^{k+1}_{i=0}Q_iE_i, \end{aligned}$$ where the ‘$\mathop{\geq}\nolimits^{1}$’ is by applying the inductive hypothesis as the ‘$\mathop{\geq}^{1}$’ in case (1) except $E'_0=\frac{P_0E_0+P_1E_1}{P_0+P_1}$ here, and the ‘$\mathop{\geq}\nolimits^{2}$’ can be easily derived by $Q_0\geq P_0, P_1>Q_1, E_1>E_0$.\ (2.2) If $P_1\leq Q_1$, we have $$\begin{aligned} &\sum\nolimits^{k+1}_{i=0}P_iE_i=(P_0+P_1)\frac{P_0E_0+P_1E_1}{P_0+P_1}+\sum\nolimits^{k+1}_{i=2}P_iE_i\\ &\geq^1 (P_0+P_1)\frac{P_0E_0+P_1E_1}{P_0+P_1}+(Q_0-P_0+Q_1-P_1+Q_2) E_2+\sum\nolimits^{k+1}_{i=3}Q_iE_i\\ &\geq^2 (P_0+P_1)\frac{P_0E_0+P_1E_1}{P_0+P_1}+(Q_0-P_0)E_0+(Q_1-P_1)E_1+\sum\nolimits^{k+1}_{i=2}Q_iE_i\\ &=\sum\nolimits^{k+1}_{i=0}Q_iE_i, \end{aligned}$$ where the ‘$\mathop{\geq}^1$’ is by applying the inductive hypothesis as the ‘$\mathop{\geq}^{1}$’ in case (1) except $E'_0=\frac{P_0E_0+P_1E_1}{P_0+P_1}$, $Q'_0=P_0+P_1$,
$Q'_1=Q_0-P_0+Q_1-P_1+Q_2$ here, and the ‘$\mathop{\geq}^2$’ is by $Q_0 \geq P_0$, $Q_1 \geq P_1$ and $E_2 >E_1>E_0$. [**[(c) Conclusion]{}**]{} According to (a) and (b), the lemma holds. [Lemma \[CFHT\_Trap\]]{} First, $\mathbb{E}_1(0)< \mathbb{E}_1(1)$ trivially holds, because $\mathbb{E}_1(0)=0$ and $\mathbb{E}_1(1)>0$. Then, we prove $\forall\; 0 < j <n:\mathbb{E}_1(j)<\mathbb{E}_1(j+1)$ inductively on $j$. [**(a) Initialization**]{} is to prove $\mathbb{E}_1(n-1) < \mathbb{E}_1(n)$. For $\mathbb{E}_1(n)$, because the next solution can be only $1^n$ or $0^n$, we have $\mathbb{E}_1(n)=1+(1-(1-p^n)^{\lambda})\mathbb{E}_1(0)+(1-p^n)^{\lambda}\mathbb{E}_1(n)$, then, $\mathbb{E}_1(n)=1/(1-(1-p^n)^{\lambda})$. For $\mathbb{E}_1(n-1)$, because the next solution can be $1^n$, $0^n$ or a solution with $n-1$ number of 0 bits, we have $ \mathbb{E}_1(n-1)=1+(1-(1-p^{n-1}(1-p))^{\lambda})\mathbb{E}_1(0)+P\cdot\mathbb{E}_1(n)+((1-p^{n-1}(1-p))^{\lambda}-P)\mathbb{E}_1(n-1)$, where $P$ denotes the probability that the next solution is $0^n$. Then, $\mathbb{E}_1(n-1)=(1+P\mathbb{E}_1(n))/(1-(1-p^{n-1}(1-p))^{\lambda}+P)$. Thus, we have $$\frac{\mathbb{E}_1(n-1)}{\mathbb{E}_1(n)}=\frac{1-(1-p^n)^{\lambda}+P}{1-(1-p^{n-1}(1-p))^{\lambda}+P}< 1,$$ where the inequality is by $0<p<0.5$. [**(b) Inductive Hypothesis**]{} assumes that $$\forall\; K< j \leq n-1 (K\geq 1): \mathbb{E}_1(j)< \mathbb{E}_1(j+1).$$ Then, we consider $j=K$. Let $x$ and $x'$ be a solution with $K+1$ number of 0 bits and that with $K$ number of 0 bits, respectively. Then, we have $\mathbb{E}_1(K+1)={\mathbb{E}[\kern-0.15em[ \tau'\mid \xi'_0=x ]\kern-0.14em]}$ and $\mathbb{E}_1(K)={\mathbb{E}[\kern-0.15em[ \tau'\mid \xi'_0=x' ]\kern-0.14em]}$. For the solution $x$, we divide the mutation on $x$ into two parts: mutation on one 0 bit and mutation on the $n-1$ remaining bits. The $n-1$ remaining bits contain $K$ number of 0 bits since $|x|_0=K+1$. 
Let $P^j_0$ and $P^j_i$ $(1 \leq i \leq n)$ be the probability that for the $\lambda$ offspring solutions under the condition that the 0 bit in the first mutation part is flipped by $j$ ($0\leq j \leq \lambda$) times in the $\lambda$ mutations, the least number of 0 bits is $0$, and the largest number of 0 bits is $i$ while the least number of 0 bits is larger than $0$, respectively. By considering the mutation and selection behavior of the (1+$\lambda$)-EA on the I$_{hardest}$ problem, we have, assuming that $\lambda$ is even, $$\begin{aligned} \mathbb{E}_1(K+1) &=1\\ j: 0 \rightarrow \frac{\lambda}{2}-1&\begin{cases} +& \cdots\\ +& \binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}\cdot (P^{j}_0\mathbb{E}_1(0)+\sum\nolimits^{K}_{i=1}P^{j}_{i}\mathbb{E}_1(K+1)+\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_1(i))\\ +& \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}\cdot (P^{\frac{\lambda}{2}}_0\mathbb{E}_1(0)+\sum\nolimits^{K}_{i=1}P^{\frac{\lambda}{2}}_{i}\mathbb{E}_1(K+1)+\sum\nolimits^{n}_{i=K+1}P^{\frac{\lambda}{2}}_{i}\mathbb{E}_1(i))\\ j: \frac{\lambda}{2}-1 \rightarrow 0&\begin{cases} +& \cdots\\ +& \binom{\lambda}{\lambda-j}p^{\lambda-j}(1-p)^{j}\cdot (P^{\lambda-j}_0\mathbb{E}_1(0)+\sum\nolimits^{K}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K+1)+\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_1(i))\\ +& \cdots, \end{cases} \end{aligned}$$ where the term $\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}$ ($0\leq j \leq \lambda$) is the probability that the 0 bit in the first mutation part is flipped by $j$ times in the $\lambda$ mutations. For the solution $x'$, we also divide the mutation on $x'$ into two parts: mutation on one 1 bit and mutation on the $n-1$ remaining bits. The $n-1$ remaining bits also contain $K$ number of 0 bits since $|x'|_0=K$. 
Note that, the $P^j_0$ and $P^j_i$ $(1 \leq i \leq n)$ defined above are actually also the probability that for the $\lambda$ offspring solutions under the condition that the 1 bit in the first mutation part is flipped by $\lambda-j$ ($0\leq j \leq \lambda$) times in the $\lambda$ mutations, the least number of 0 bits is $0$, and the largest number of 0 bits is $i$ while the least number of 0 bits is larger than $0$, respectively. Then, we have $$\begin{aligned} \mathbb{E}_1(K) &=1\\ j: 0 \rightarrow \frac{\lambda}{2}-1&\begin{cases} +& \cdots\\ +& \binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}\cdot (P^{\lambda-j}_0\mathbb{E}_1(0)+\sum\nolimits^{K}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K)+\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_1(i))\\ +& \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}\cdot (P^{\frac{\lambda}{2}}_0\mathbb{E}_1(0)+\sum\nolimits^{K}_{i=1}P^{\frac{\lambda}{2}}_{i}\mathbb{E}_1(K)+\sum\nolimits^{n}_{i=K+1}P^{\frac{\lambda}{2}}_{i}\mathbb{E}_1(i))\\ j: \frac{\lambda}{2}-1 \rightarrow 0&\begin{cases} +& \cdots\\ +& \binom{\lambda}{\lambda-j}p^{\lambda-j}(1-p)^{j}\cdot (P^{j}_0\mathbb{E}_1(0)+\sum\nolimits^{K}_{i=1}P^{j}_{i}\mathbb{E}_1(K)+\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_1(i))\\ +& \cdots, \end{cases} \end{aligned}$$ where the term $\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}$ ($0\leq j \leq \lambda$) is the probability that the 1 bit in the first mutation part is flipped by $j$ times in the $\lambda$ mutations.
From the above two equalities, we have $$\begin{aligned} &\mathbb{E}_1(K+1)-\mathbb{E}_1(K)=\\ j: 0 \rightarrow \frac{\lambda}{2}-1 &\begin{cases} & \cdots\\ +&\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j} (P^j_0 \mathbb{E}_1(0)-P^{\lambda-j}_0 \mathbb{E}_1(0)+\sum\nolimits^{n}_{i=K+1}P^{j}_i\mathbb{E}_1(i)-\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_i\mathbb{E}_1(i)\\ &+\sum\nolimits^{K}_{i=1}P^{j}_{i}\mathbb{E}_1(K+1)-\sum\nolimits^{K}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K+1)+\sum\nolimits^{K}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K+1)-\sum\nolimits^{K}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K) )\\ & \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}(\sum\nolimits^{K}_{i=1}P^{\frac{\lambda}{2}}_i(\mathbb{E}_1(K+1)-\mathbb{E}_1(K)))\\ j: \frac{\lambda}{2}-1 \rightarrow 0 &\begin{cases} & \cdots\\ +&\binom{\lambda}{\lambda-j}p^{\lambda-j}(1-p)^{j}(P^{\lambda-j}_0 \mathbb{E}_1(0)-P^{j}_0 \mathbb{E}_1(0)+\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_i\mathbb{E}_1(i)-\sum\nolimits^{n}_{i=K+1}P^{j}_i\mathbb{E}_1(i)\\ &+\sum\nolimits^{K}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K+1)-\sum\nolimits^{K}_{i=1}P^{j}_{i}\mathbb{E}_1(K+1)+\sum\nolimits^{K}_{i=1}P^{j}_{i}\mathbb{E}_1(K+1)-\sum\nolimits^{K}_{i=1}P^{j}_{i}\mathbb{E}_1(K) )\\ & \cdots\\ \end{cases}\\ &=\\ j: 0 \rightarrow \frac{\lambda}{2}-1 &\begin{cases} & \cdots\\ +&(\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}-\binom{\lambda}{j}p^{\lambda-j}(1-p)^{j}) (P^j_0 \mathbb{E}_1(0)+\sum\nolimits^{K+1}_{i=1}P^{j}_{i}\mathbb{E}_1(K+1)\\ &+\sum\nolimits^{n}_{i=K+2}P^{j}_i\mathbb{E}_1(i)-P^{\lambda-j}_0 \mathbb{E}_1(0)-\sum\nolimits^{K+1}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K+1)-\sum\nolimits^{n}_{i=K+2}P^{\lambda-j}_i\mathbb{E}_1(i))\\ +&\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}(\sum\nolimits^{K}_{i=1}P^{\lambda-j}_{i}(\mathbb{E}_1(K+1)-\mathbb{E}_1(K)))\\ +&\binom{\lambda}{j}p^{\lambda-j}(1-p)^{j}(\sum\nolimits^{K}_{i=1}P^{j}_{i}(\mathbb{E}_1(K+1)-\mathbb{E}_1(K)))\\ & \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}(\sum\nolimits^{K}_{i=1}P^{\frac{\lambda}{2}}_i(\mathbb{E}_1(K+1)-\mathbb{E}_1(K))). \end{aligned}$$ Then, we are to investigate the relation between $\sum^k_{i=0} P^j_i$ and $\sum^k_{i=0} P^{\lambda-j}_i$ for $0\leq j\leq \frac{\lambda}{2}-1$. Let $m$ ($0 \leq m\leq n-1$) denote the number of 0 bits after bit-wise mutation on a Boolean string of length $n-1$ with $K$ number of 0 bits. For the $\lambda$ independent mutations, we use $m_1,\ldots,m_{\lambda}$, respectively. By the definition of $P^j_i$, we know that there are $j$ number of 1 bits in the first mutation part, since $j$ 0 bits are flipped in the $\lambda$ mutations.
Under this condition, $\sum^k_{i=0} P^j_i$ is the probability that for the $\lambda$ offspring solutions, the least number of 0 bits is 0, or the least number of 0 bits is larger than 0 while the largest number of 0 bits is not larger than $k$. We assume that the $j$ number of 1 bits in the first mutation part correspond to $m_1,\ldots,m_j$. Thus, we have $$\begin{aligned} \sum^k_{i=0} P^j_i=&P\big( m_1=0 \vee \ldots \vee m_j=0\\ &\vee(0 <m_1 \leq k \wedge \ldots \wedge 0<m_j \leq k \wedge m_{j+1} \leq k-1 \wedge \ldots \wedge m_{\lambda} \leq k-1)\big), \end{aligned}$$ and $$\begin{aligned} \sum^k_{i=0} P^{\lambda-j}_i=&P\big(m_1=0 \vee \ldots \vee m_{\lambda-j}=0\\ & \vee(0<m_1 \leq k \wedge \ldots \wedge 0<m_{\lambda-j} \leq k \wedge m_{\lambda-j+1} \leq k-1 \wedge \ldots \wedge m_{\lambda} \leq k-1)\big)\\ \geq&P\big( m_1=0 \vee \ldots \vee m_j=0 \vee (0 <m_1 \leq k \wedge \ldots \wedge 0<m_j \leq k\\ & \wedge m_{j+1} \leq k \wedge \ldots \wedge m_{\lambda-j} \leq k \wedge m_{\lambda-j+1} \leq k-1 \wedge \ldots \wedge m_{\lambda} \leq k-1)\big). \end{aligned}$$ Then, we have \[trap-prob1\] $$\forall\; 0 \leq k \leq n-1, \quad \sum\nolimits^k_{i=0} P^j_i\leq\sum\nolimits^k_{i=0} P^{\lambda-j}_i.$$ By Lemma \[lemma\_analysis\_condition\], we can get $$P^j_0 \mathbb{E}_1(0)+\sum\limits^{K+1}_{i=1}P^{j}_{i}\mathbb{E}_1(K+1)+\sum\limits^{n}_{i=K+2}P^{j}_i\mathbb{E}_1(i)\geq P^{\lambda-j}_0\mathbb{E}_1(0)+\sum\limits^{K+1}_{i=1}P^{\lambda-j}_{i}\mathbb{E}_1(K+1)+\sum\limits^{n}_{i=K+2}P^{\lambda-j}_i\mathbb{E}_1(i).$$ The three conditions of Lemma \[lemma\_analysis\_condition\] can be easily verified, because $\mathbb{E}_1(0)=0<\mathbb{E}_1(K+1)<\ldots<\mathbb{E}_1(n)$ by inductive hypothesis; $\sum^{n}_{i=0} P^j_i= \sum^{n}_{i=0} P^{\lambda-j}_i=1$; and Eq. holds. By the above inequality and $p<0.5$, we have $$\mathbb{E}_1(K+1)-\mathbb{E}_1(K)> \Big(\sum\nolimits^{\lambda}_{j=0}\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}\sum\nolimits^{K}_{i=1}P^{\lambda-j}_i\Big)\big(\mathbb{E}_1(K+1)-\mathbb{E}_1(K)\big).$$ Because $\sum^{\lambda}_{j=0}\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}\sum\limits^{K}_{i=1}P^{\lambda-j}_i<\sum^{\lambda}_{j=0}\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}=1$, we have $\mathbb{E}_1(K+1)>\mathbb{E}_1(K)$. For the case that $\lambda$ is odd, we can prove it similarly. [**[(c) Conclusion]{}**]{} According to (a) and (b), the lemma holds. [Lemma \[CFHT\_OneMax\]]{} We prove $\forall \;0 \leq j <n:\mathbb{E}_2(j)<\mathbb{E}_2(j+1)$ inductively on $j$.
[**(a) Initialization**]{} is to prove $\mathbb{E}_2(0) < \mathbb{E}_2(1)$, which trivially holds since $\mathbb{E}_2(1)>0=\mathbb{E}_2(0)$. [**(b) Inductive Hypothesis**]{} assumes that $$\forall \; 0 \leq j < K (K\leq n-1): \mathbb{E}_2(j)<\mathbb{E}_2(j+1).$$ Then, we consider $j=K$. When comparing $\mathbb{E}_2(K+1)$ with $\mathbb{E}_2(K)$, we use the similar analysis method as that in the proof of Lemma \[CFHT\_Trap\]. Let $P^j_i \; (0 \leq i \leq n)$ be the probability that the least number of 0 bits for the $\lambda$ offspring solutions is $i$ under the condition that the 0 bit in the first mutation part is flipped by $j$ ($0 \leq j \leq \lambda$) times in the $\lambda$ mutations. Then, by considering the mutation and selection behavior of the (1+$\lambda$)-EA on the I$_{easiest}$ problem, we have, assuming that $\lambda$ is even, $$\begin{aligned} \mathbb{E}_2(K+1) &=1\\ j: 0 \rightarrow \frac{\lambda}{2}-1&\begin{cases} +& \cdots\\ +& \binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}(\sum\nolimits^{K}_{i=0}P^{j}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_2(K+1))\\ +& \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}(\sum\nolimits^{K}_{i=0}P^{\frac{\lambda}{2}}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{\frac{\lambda}{2}}_{i}\mathbb{E}_2(K+1))\\ j: \frac{\lambda}{2}-1 \rightarrow 0&\begin{cases} +& \cdots\\ +& \binom{\lambda}{\lambda-j}p^{\lambda-j}(1-p)^{j}(\sum\nolimits^{K}_{i=0}P^{\lambda-j}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_2(K+1))\\ +& \cdots, \end{cases} \end{aligned}$$ and $$\begin{aligned} \mathbb{E}_2(K)&=1\\ j: 0 \rightarrow \frac{\lambda}{2}-1&\begin{cases} +& \cdots\\ +& \binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}(\sum\nolimits^{K}_{i=0}P^{\lambda-j}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_2(K))\\ +& \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}(\sum\nolimits^{K}_{i=0}P^{\frac{\lambda}{2}}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{\frac{\lambda}{2}}_{i}\mathbb{E}_2(K))\\ j: \frac{\lambda}{2}-1 \rightarrow 0&\begin{cases} +& \cdots\\ +& \binom{\lambda}{\lambda-j}p^{\lambda-j}(1-p)^{j}(\sum\nolimits^{K}_{i=0}P^{j}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_2(K))\\ +& \cdots. \end{cases} \end{aligned}$$
From the above two equalities, we have $$\begin{aligned} &\mathbb{E}_2(K+1)-\mathbb{E}_2(K)=\\ j: 0 \rightarrow \frac{\lambda}{2}-1 &\begin{cases} & \cdots\\ +&\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j} (\sum\nolimits^{K}_{i=0}P^{j}_i\mathbb{E}_2(i)-\sum\nolimits^{K}_{i=0}P^{\lambda-j}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_2(K+1)\\ &-\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_2(K)+\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_2(K)-\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_2(K) )\\ & \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}(\sum\nolimits^{n}_{i=K+1}P^{\frac{\lambda}{2}}_i(\mathbb{E}_2(K+1)-\mathbb{E}_2(K)))\\ j: \frac{\lambda}{2}-1 \rightarrow 0 &\begin{cases} & \cdots\\ +&\binom{\lambda}{\lambda-j}p^{\lambda-j}(1-p)^{j}(\sum\nolimits^{K}_{i=0}P^{\lambda-j}_i\mathbb{E}_2(i)-\sum\nolimits^{K}_{i=0}P^{j}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_2(K+1)\\ &-\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_2(K)+\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}\mathbb{E}_2(K)-\sum\nolimits^{n}_{i=K+1}P^{j}_{i}\mathbb{E}_2(K) )\\ & \cdots\\ \end{cases}\\ &=\\ j: 0 \rightarrow \frac{\lambda}{2}-1 &\begin{cases} & \cdots\\ +&(\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}-\binom{\lambda}{j}p^{\lambda-j}(1-p)^{j}) (\sum\nolimits^{K-1}_{i=0}P^{j}_i\mathbb{E}_2(i)+\sum\nolimits^{n}_{i=K}P^{j}_{i}\mathbb{E}_2(K)\\ &-\sum\nolimits^{K-1}_{i=0}P^{\lambda-j}_i\mathbb{E}_2(i)-\sum\nolimits^{n}_{i=K}P^{\lambda-j}_{i}\mathbb{E}_2(K))\\ +&\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}(\sum\nolimits^{n}_{i=K+1}P^{j}_{i}(\mathbb{E}_2(K+1)-\mathbb{E}_2(K)))\\ +&\binom{\lambda}{j}p^{\lambda-j}(1-p)^{j}(\sum\nolimits^{n}_{i=K+1}P^{\lambda-j}_{i}(\mathbb{E}_2(K+1)-\mathbb{E}_2(K)))\\ & \cdots\\ \end{cases}\\ &+\binom{\lambda}{\lambda/2}p^{\frac{\lambda}{2}}(1-p)^{\frac{\lambda}{2}}(\sum\nolimits^{n}_{i=K+1}P^{\frac{\lambda}{2}}_i(\mathbb{E}_2(K+1)-\mathbb{E}_2(K))). \end{aligned}$$ Then, we are to investigate the relation between $\sum^k_{i=0} P^j_i$ and $\sum^k_{i=0} P^{\lambda-j}_i$ for $0\leq j\leq \frac{\lambda}{2}-1$. Let $m$ ($0 \leq m\leq n-1$) denote the number of 0 bits after bit-wise mutation on a Boolean string of length $n-1$ with $K$ number of 0 bits. For the $\lambda$ independent mutations, we use $m_1,\ldots,m_{\lambda}$, respectively. By the definition of $P^j_i$, we know that there are $j$ number of 1 bits in the first mutation part. Under this condition, $\sum^k_{i=0} P^j_i$ is the probability that the least number of 0 bits for the $\lambda$ offspring solutions is not larger than $k$. We assume that the $j$ number of 1 bits in the first mutation part correspond to $m_1,\ldots,m_j$.
Thus, we have $$\sum^k_{i=0} P^j_i=P(m_1 \leq k \vee \ldots \vee m_j \leq k \vee m_{j+1} \leq k-1 \vee \ldots \vee m_{\lambda} \leq k-1),$$ and $$\sum^k_{i=0} P^{\lambda-j}_i=P(m_1 \leq k \vee \ldots \vee m_{\lambda-j} \leq k \vee m_{\lambda-j+1} \leq k-1 \vee \ldots \vee m_{\lambda} \leq k-1).$$ Since $0\leq j\leq \frac{\lambda}{2}-1$, we have \[onemax-prob1\] $$\forall\; 0 \leq k \leq n-1, \quad \sum\nolimits^k_{i=0} P^j_i\leq\sum\nolimits^k_{i=0} P^{\lambda-j}_i.$$ By Lemma \[lemma\_analysis\_condition\], we can get $$\sum\limits^{K-1}_{i=0}P^{j}_i\mathbb{E}_2(i)+\sum\limits^{n}_{i=K}P^{j}_{i}\mathbb{E}_2(K) \geq \sum\limits^{K-1}_{i=0}P^{\lambda-j}_i\mathbb{E}_2(i)+\sum\limits^{n}_{i=K}P^{\lambda-j}_{i}\mathbb{E}_2(K).$$ The three conditions of Lemma \[lemma\_analysis\_condition\] can be easily verified, because $\mathbb{E}_2(0)<\mathbb{E}_2(1)<\ldots<\mathbb{E}_2(K)$ by inductive hypothesis; $\sum^{n}_{i=0} P^j_i= \sum^{n}_{i=0} P^{\lambda-j}_i=1$; and Eq. holds. By the above inequality and $p<0.5$, we have $$\mathbb{E}_2(K+1)-\mathbb{E}_2(K)> \Big(\sum\nolimits^{\lambda}_{j=0}\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}\sum\nolimits^{n}_{i=K+1}P^j_i\Big)\big(\mathbb{E}_2(K+1)-\mathbb{E}_2(K)\big).$$ Because $\sum^{\lambda}_{j=0}\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}\sum\limits^{n}_{i=K+1}P^j_i<\sum^{\lambda}_{j=0}\binom{\lambda}{j}p^{j}(1-p)^{\lambda-j}=1$, we have $\mathbb{E}_2(K+1)>\mathbb{E}_2(K)$. For the case that $\lambda$ is odd, we can prove it similarly. [**[(c) Conclusion]{}**]{} According to (a) and (b), the lemma holds.
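Lemma \[lemma\_analysis\_condition\], as it is applied throughout this appendix, is a first-order stochastic dominance statement: if $E_0<E_1<\ldots<E_m$, $(P_i)$ and $(Q_i)$ are probability vectors, and every prefix sum of $P$ is at most the corresponding prefix sum of $Q$, then $\sum_i P_iE_i \geq \sum_i Q_iE_i$. A randomized sanity check of this statement is sketched below; the exact hypotheses are inferred from how the lemma is applied above, so the script is illustrative only.

```python
import random

def prefix_dominated(P, Q, eps=1e-12):
    """Condition of the lemma: sum_{i<=k} P_i <= sum_{i<=k} Q_i for all k."""
    return all(sum(P[:k + 1]) <= sum(Q[:k + 1]) + eps for k in range(len(P)))

def random_dist(m, rng):
    """A random probability vector of length m+1."""
    w = [rng.random() for _ in range(m + 1)]
    s = sum(w)
    return [v / s for v in w]

rng = random.Random(2)
checked = 0
for _ in range(5000):
    m = rng.randrange(1, 6)
    P, Q = random_dist(m, rng), random_dist(m, rng)
    E = sorted(rng.random() for _ in range(m + 1))   # E_0 < E_1 < ... < E_m
    if prefix_dominated(P, Q):
        # conclusion of the lemma: the P-expectation dominates
        assert sum(p * e for p, e in zip(P, E)) >= \
               sum(q * e for q, e in zip(Q, E)) - 1e-9
        checked += 1
print(checked)   # number of random instances where the condition held
```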
--- abstract: 'Recommendation systems usually involve exploiting the relations among known features and content that describe items (content-based filtering) or the overlap of similar users who interacted with or rated the target item (collaborative filtering). To combine these two filtering approaches, current model-based hybrid recommendation systems typically require extensive feature engineering to construct a user profile. Statistical Relational Learning (SRL) provides a straightforward way to combine the two approaches. However, due to the large scale of the data used in real world recommendation systems, little research exists on applying SRL models to hybrid recommendation systems, and essentially none of that research has been applied to real big-data-scale systems. In this paper, we propose a way to adapt state-of-the-art SRL learning approaches to construct a real hybrid recommendation system. Furthermore, in order to satisfy a common requirement in recommendation systems (i.e. that false positives are more undesirable and therefore penalized more harshly than false negatives), our approach also allows tuning the trade-off between the precision and recall of the system in a principled way. Our experimental results demonstrate the efficiency of our proposed approach as well as its improved performance on recommendation precision.' author: - 'Shuo Yang\*' - Mohammed Korayem - Khalifeh AlJadda - Trey Grainger - | Sriraam Natarajan\*\ \* School of Informatics and Computing, Indiana University Bloomington, IN 47408\ CareerBuilder, Norcross, GA 30092 bibliography: - 'bib.bib' title: Application of Statistical Relational Learning to Hybrid Recommendation Systems --- Conclusion ========== We proposed an efficient statistical relational learning approach to construct a hybrid job recommendation system which can also satisfy the unique cost requirements regarding precision and recall of a specific domain.
The experimental results show the ability of our model to reduce the rate of inappropriate job recommendations. Our contributions include: i. we are the first to apply statistical relational learning models to a real-world large-scale job recommendation system; ii. our proposed model not only proves to be the most efficient SRL learning approach, but also demonstrates its ability to further reduce false positive predictions; iii. the experimental results reveal a promising direction for future hybrid recommendation systems: with proper utilization of first-order predicates, an SRL-model-based hybrid recommendation system can not only remove the need for exhaustive feature engineering or pre-clustering, but can also provide a robust way to solve the cold-start problem.
--- author: - 'Hossein Hosseini[^1]' - Sungrack Yun - Hyunsin Park - Christos Louizos - Joseph Soriaga - Max Welling - '`{hhossein,sungrack,hyunsinp,clouizos,jsoriaga,mwelling}@qti.qualcomm.com`' bibliography: - 'main.bib' title: Federated Learning of User Authentication Models --- [^1]: Qualcomm AI Research, an initiative of Qualcomm Technologies, Inc.
--- abstract: 'Hibi rings are a kind of graded toric ring on a finite distributive lattice $D = J(P)$, where $P$ is a partially ordered set. In this paper, we compute diagonal $F$-thresholds and $F$-pure thresholds of Hibi rings.' address: - 'Graduate School of Mathematics, Nagoya University, Nagoya 464–8602, Japan' - 'Graduate School of Mathematics, Nagoya University, Nagoya 464–8602, Japan' author: - Takahiro Chiba - Kazunori Matsuda title: 'Diagonal F-thresholds and F-pure thresholds of Hibi rings' --- Introduction {#introduction .unnumbered} ============ In this paper, we study two invariants of commutative Noetherian rings of positive characteristic, that is, the $F$-threshold and the $F$-pure threshold. In [@MTW], Mustaţă, Takagi and Watanabe defined the notion of $F$-thresholds for $F$-finite regular local rings, and in [@HMTW], Huneke, Mustaţă, Takagi and Watanabe generalized it to a more general setting. In higher dimensional algebraic geometry over a field $k$ of characteristic $0$, the log canonical thresholds are important objects. In [@TW], Takagi and Watanabe introduced the notion of the $F$-pure threshold, which is an analogue of the log canonical threshold. Firstly, we recall the definition of the $F$-threshold. Let $(R, {{\mathfrak{m}}})$ be an $F$-finite $F$-pure local domain or a standard graded $k$-algebra with the unique graded maximal ideal ${{\mathfrak{m}}}$, of characteristic $p > 0$. Then the following limit value exists (see [@HMTW]): $$c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = \lim_{e \to \infty} \frac{\max\{r \in {{\mathbb{N}}} \mid {{\mathfrak{m}}}^{r} \not\subset {{\mathfrak{m}}}^{[p^{e}]}\}}{p^{e}},$$ where ${{\mathfrak{m}}}^{[p^{e}]} = (x^{p^{e}} \mid x \in {{\mathfrak{m}}})$. We call it the [*diagonal F-threshold*]{} of $R$. Secondly, we recall the definition of the $F$-pure threshold. Let $t \ge 0$ be a real number and ${{\mathfrak{a}}}$ a nonzero ideal of $R$.
The pair $(R, {{\mathfrak{a}}}^{t})$ is said to be [*F-pure*]{} if for all large $q = p^{e} \gg 0$, there exists an element $d \in {{\mathfrak{a}}}^{{\lfloor t(q - 1) \rfloor}}$ such that $R \to R^{1/q} (1 \mapsto d^{1/q})$ splits as an $R$-linear map. Then the [*F-pure threshold*]{}, denoted by $\operatorname{fpt}({{\mathfrak{a}}})$, is defined by $\operatorname{fpt}({{\mathfrak{a}}}) = \sup\{t \in {{\mathbb{R}}}_{\ge 0} \mid$ the pair $(R, {{\mathfrak{a}}}^{t})$ is $F$-pure$\}$. Only a few examples of these invariants are known. Hence it seems to be important to compute $F$-thresholds and $F$-pure thresholds concretely for several rings. In [@MOY], the second author, Ohtani and Yoshida computed diagonal $F$-thresholds and $F$-pure thresholds for binomial hypersurfaces. In this paper, we treat Hibi rings. We compute diagonal $F$-thresholds $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$ and $F$-pure thresholds $\operatorname{fpt}({{\mathfrak{m}}})$ of such rings and describe these invariants in terms of the poset $P$. Let $P$ be a finite partially ordered set (poset for short), and $\mathcal{R}_{k}[D]$ the Hibi ring over a field $k$ of characteristic $p > 0$ on a distributive lattice $D = J(P)$, where $J(P)$ is the set of all poset ideals of $P$. The main theorem in this paper is the following: (see Theorem 2.4, Theorem 3.9 and Corollary 4.2) Let $R = \mathcal{R}_{k}[D]$ be a Hibi ring, and ${{\mathfrak{m}}} = R_{+}$ the unique graded maximal ideal of $R$. Then $$\begin{aligned} c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) &= \operatorname{rank}^{*} P + 2,\\ -a(R) &= \operatorname{rank}P + 2,\\ \operatorname{fpt}({{\mathfrak{m}}}) &= \operatorname{rank}_{*} P + 2. \end{aligned}$$ In particular, 1. $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$, $\operatorname{fpt}({{\mathfrak{m}}}) \in {{\mathbb{N}}}$, 2.
$\operatorname{fpt}({{\mathfrak{m}}}) \le \min\{\operatorname{length}C \mid C \in \mathcal{C}\} + 2 \le -a(R) = \max\{\operatorname{length}C \mid C \in \mathcal{C}\} + 2$ $\le c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$, where $a(R)$ denotes the $a$-invariant of $R$ (see [@GW]) and $\mathcal{C}$ denotes the set of all maximal chains of $P$. Recently, this inequality $\operatorname{fpt}({{\mathfrak{m}}}) \le -a(R) \le c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$ was proved by Hirose, Watanabe and Yoshida for any homogeneous toric ring $R$ (see [@HWY]). In [@Hir], Hirose gave formulae of $F$-thresholds and $F$-pure thresholds for any homogeneous toric ring $R$. However, it seems to be difficult for us to construct many examples by his formula. For Hibi rings, we give formulae of $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$ and $\operatorname{fpt}({{\mathfrak{m}}})$ in terms of poset, that is, the upper rank (denoted by $\operatorname{rank}^{*}$) and the lower rank (denoted by $\operatorname{rank}_{*}$). Thanks to these formulae, we can find enough examples. More precisely, for given integers $a \ge b\ge c \ge 1$, we can find a connected poset $P$ such that $\operatorname{rank}^{*} P = a, \operatorname{rank}P = b$ and $\operatorname{rank}_{*} P = c$ (see Example 4.4). As a corollary, we give formulae of $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$ and $\operatorname{fpt}({{\mathfrak{m}}})$ of Segre products of two polynomial rings. Segre products are important objects in commutative ring theory and combinatorics. Let $k$ be a perfect field of positive characteristic, and let $m,n \ge 2$ be integers. Let $R=k[X_1, \ldots, X_m], S=k[Y_1, \ldots, Y_n]$ be polynomial rings, and let $R \# S$ be the Segre product of $R$ and $S$. Let ${{\mathfrak{m}}}$ be the unique graded maximal ideal of $R \# S$. Then $$\begin{aligned} c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = -a(R \# S) &= \max\{m, n\},\\ \operatorname{fpt}({{\mathfrak{m}}}) &= \min\{m, n\}. 
\end{aligned}$$ In particular, $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = \operatorname{fpt}({{\mathfrak{m}}})$ if and only if $m = n$. Let us explain the organization of this paper. In Section 1, we set up the notions of posets, and define the Hibi ring and $\operatorname{rank}^{*} P$ and $\operatorname{rank}_{*} P$ in order to state our main theorem. In Section 2, we recall the definition and several basic results of $F$-threshold and give a formula of diagonal $F$-thresholds $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$ for Hibi rings. In Section 3, we recall the definition of $F$-pure threshold and give a formula of $F$-pure thresholds $\operatorname{fpt}({{\mathfrak{m}}})$ for Hibi rings. In Section 4, we compute $a$-invariants $a(R)$ for Hibi rings and compare $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$ and $\operatorname{fpt}({{\mathfrak{m}}})$ with $-a(R)$. Moreover, for given integers $a \ge b\ge c \ge 1$, we find a connected poset $P$ such that $\operatorname{rank}^{*} P = a, \operatorname{rank}P = b$ and $\operatorname{rank}_{*} P = c$. Preliminaries ============= First, we set up the notions of posets and define the Hibi ring. Let $P = \{p_{1}, p_{2}, \ldots, p_{N}\}$ be a finite partially ordered set (poset for short). Let $J(P)$ be the set of all poset ideals of $P$, where a poset ideal of $P$ is a subset $I$ of $P$ such that if $x \in I$, $y \in P$ and $y \le x$ then $y \in I$. By Birkhoff's structure theorem (see [@Bir]), for a distributive lattice $D$, there exists a poset $P$ such that $D \cong J(P)$ ordered by inclusion. A [*chain*]{} $X$ of $P$ is a totally ordered subset of $P$. The [*length*]{} of a chain $X$ of $P$ is $\#X - 1$, where $\#X$ is the cardinality of $X$. The [*rank*]{} of $P$, denoted by $\operatorname{rank}P$, is the maximum of the lengths of chains in $P$. A poset is called [*pure*]{} if all of its maximal chains have the same length.
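The notions of chains, rank and purity can be made concrete with a few lines of code. The following sketch is ours (the four-element poset is a made-up example, not one from the paper); it lists all maximal chains directly from the covering relations.

```python
def maximal_chains(covers):
    """All maximal chains of a finite poset given by its covering relations
    (covers[x] = set of elements covering x)."""
    minimal = [x for x in covers
               if all(x not in ups for ups in covers.values())]
    chains = []
    def extend(chain):
        ups = covers[chain[-1]]
        if not ups:                 # reached a maximal element
            chains.append(chain)
        for y in ups:
            extend(chain + [y])
    for x in minimal:
        extend([x])
    return chains

# a < b, a < c < d   (a made-up example poset)
covers = {"a": {"b", "c"}, "b": set(), "c": {"d"}, "d": set()}
chains = maximal_chains(covers)
lengths = sorted(len(c) - 1 for c in chains)
rank_P = lengths[-1]                # rank P = length of a longest chain
is_pure = len(set(lengths)) == 1    # pure iff all maximal chains have equal length
```

For this poset the maximal chains are $(a, b)$ and $(a, c, d)$, of lengths $1$ and $2$, so its rank is $2$ and it is not pure.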
For $x, y \in P$, we say that $y$ [*covers*]{} $x$, denoted by $x \lessdot y$, if $x < y$ and there is no $z \in P$ such that $x < z < y$. Let the notation be as above. We consider the following map: $$\varphi : D(= J(P)) \longrightarrow k[T, X_{1}, \ldots, X_{N}], \qquad I \longmapsto T \prod_{p_{i} \in I} X_{i}.$$ Then we define the [*Hibi ring*]{} $\mathcal{R}_{k}[D]$ as follows: $$\mathcal{R}_{k}[D] = k[\varphi(I) \mid I \in J(P)].$$ Consider the poset $P$ on $\{1, 2, 3, 4\}$ with relations $1 \le 3, 2 \le 3 \ {\rm and}\ 2 \le 4$, so that $$J(P) = \{\emptyset, \{1\}, \{2\}, \{1, 2\}, \{2, 4\}, \{1, 2, 3\}, \{1, 2, 4\}, \{1, 2, 3, 4\}\}.$$ Then we have $$\mathcal{R}_{k}[D] = k[T, TX_{1}, TX_{2}, TX_{1}X_{2}, TX_{2}X_{4}, TX_{1}X_{2}X_{3}, TX_{1}X_{2}X_{4}, TX_{1}X_{2}X_{3}X_{4}].$$ Consider the poset $P$ which is the disjoint union of two chains $p_{1} < p_{2} < \cdots < p_{m - 1}$ and $q_{1} < q_{2} < \cdots < q_{n - 1}$; then $\mathcal{R}_{k}[D] \cong k[X]/I_{2}(X)$, where $X$ is an $m
\times n$-matrix all of whose entries are indeterminates. 1. ([@Hib]) Hibi rings are toric ASL, thus normal Cohen-Macaulay domains. 2. $\dim \mathcal{R}_{k}[D] = \# P + 1$. 3. ([@Hib]) $\mathcal{R}_{k}[D]$ is Gorenstein if and only if $P$ is pure. Finally, we define $\operatorname{rank}^{*} P$ and $\operatorname{rank}_{*} P$ for a poset $P$ in order to state our main theorem. A sequence $C = (q_{1}, \ldots, q_{t})$ is called a [*path*]{} of $P$ if $C$ satisfies the following conditions: 1. $q_{1}, \ldots, q_{t}$ are distinct elements of $P$, 2. $q_{1}$ is a minimal element of $P$ and $q_{t - 1} \lessdot q_{t}$, 3. $q_{i} \lessdot q_{i + 1}$ or $q_{i + 1} \lessdot q_{i}$. In short, we regard the Hasse diagram of $P$ as a graph, and consider paths on it. In particular, if $q_{t}$ is a maximal element of $P$, then we call $C$ a [*maximal*]{} path. For a path $C = (q_{1}, \ldots, q_{t})$, we denote $C = q_{1} \to q_{t}$. For a path $C = (q_{1}, \ldots, q_{t})$, $q_{i}$ is said to be a [*locally maximal element*]{} of $C$ if $q_{i - 1} \lessdot q_{i}$ and $q_{i + 1} \lessdot q_{i}$, and a [*locally minimal element*]{} of $C$ if $q_{i} \lessdot q_{i - 1}$ and $q_{i} \lessdot q_{i + 1}$. For convenience, we consider that $q_{1}$ is a locally minimal element and $q_{t}$ is a locally maximal element of $C$. For a path $C = (q_{1}, \ldots, q_{t})$, if $q_{1} \le \cdots \le q_{t}$ then we call $C$ an [*ascending chain*]{} and if $q_{1} \ge \cdots \ge q_{t}$ then we call $C$ a [*descending chain*]{}. We denote an ascending chain by the symbol $A$ and a descending chain by the symbol $D$. For an ascending chain $A = (q_{1}, \ldots, q_{t})$, we put $t(A) = q_{t}$ and $<A> = \{q \in P \mid q \le t(A)\}$. Since $<A>$ is the poset ideal of $P$ generated by $A$, we note that $<A> \in J(P)$. Let $C = (q_{1}, \ldots, q_{t})$ be a path. We now introduce the notion of the [*decomposition*]{} of $C$.
We decompose $V(C)$ as follows: $$V(C) = V(A_{1}) \cup V(D_{1}) \cup V(A_{2}) \cup \cdots \cup V(D_{n - 1}) \cup V(A_{n})$$ such that $$\begin{aligned} V(A_{1}) &= \{q_{1}, \ldots, q_{a(1)}\}, \\ V(D_{1}) &= \{q^{\prime}_{1}, \ldots, q^{\prime}_{d(1)}\}, \\ V(A_{2}) &= \{q_{a(1) + 1}, \ldots, q_{a(2)}\}, \\ &\ \ \vdots \\ V(D_{n - 1}) &= \{q^{\prime}_{d(n - 2) + 1}, \ldots, q^{\prime}_{d(n - 1)}\}, \\ V(A_{n}) &= \{q_{a(n - 1) + 1}, \ldots, q_{a(n)} = q_{t}\},\end{aligned}$$ where $\{q_{a(1)}, \ldots, q_{a(n)}\}$ is the set of locally maximal elements and $\{q_{1}, q^{\prime}_{d(1)}, \ldots, q^{\prime}_{d(n - 1)}\}$ is the set of locally minimal elements of $C$. Then $A_{i}$ are ascending chains and $D_{j}$ are descending chains. This decomposition is denoted by $C = A_{1} + D_{1} + A_{2} + \cdots + D_{n - 1} + A_{n}$. For a path $C = (q_{1}, \ldots, q_{t})$, we define [*the upper length*]{} by $$\operatorname{length}^{*} C = \#\{(q_{i}, q_{i + 1}) \in E(C) \mid q_{i} \lessdot q_{i + 1}\},$$ where $E(C)$ is the set of edges of $C$. 1. If $C$ is a chain, then $\operatorname{length}^{*} C = \operatorname{length}C$. 2. Consider a path $C$ which goes up along two covering relations, down along one, and then up along two more covering relations (Hasse diagram omitted). Then $\operatorname{length}^{*} C = 4$. Next, we introduce the condition (\*).
For a path $C = (q_{1}, \ldots, q_{t})$, we say that $C$ [*satisfies a condition*]{} (\*) if $C$ satisfies the following conditions: For the decomposition $C = A_{1} + D_{1} + \ldots + D_{n - 1} + A_{n}$, \(1) $V(D_{i}) \cap \left(\bigcup^{i - 1}_{m = 1} <A_{m}> \cup <A_{i} \setminus t(A_{i})> \cup \{t(A_{i})\}\right) = \emptyset$, \(2) $V(A_{i + 1}) \cap \left(\bigcup^{i}_{m = 1} <A_{m}> \right) = \emptyset$ for all $1 \le i \le n - 1$. In the above definition, the condition (\*) means the following: assume that $C = (q_{1}, \ldots, q_{r - 1}, q_{r}, q_{r + 1}, \ldots)$ and $q_{r}$ is a locally maximal element or a locally minimal element of $C$. Then for all $s > r$ and $r > t$, $q_{s} \not\le q_{t}$. For a path $C = (q_{1}, \ldots, q_{t})$ such that $C$ satisfies a condition (\*) and $q_{t}$ is a locally maximal element, we can extend $C$ to a path $\tilde{C} = (q_{1}, \ldots, q_{t}, \ldots, q_{t^{\prime}})$ such that $\tilde{C}$ is a maximal path which satisfies a condition (\*). Indeed, if $q_{t}$ is not a maximal element of $P$, then there exists $q_{t + 1}$ such that $q_{t} \lessdot q_{t + 1}$. We decompose $C = A_{1} + D_{1} + \ldots + D_{n - 1} + A_{n}$. If $q_{t + 1} \in \ <A_{i}>$ for some $i$, then $q_{t} \in \ <A_{i}>$ as well. This means that $C$ does not satisfy a condition (\*), a contradiction. Hence a path $C^{\prime} = (q_{1}, \ldots, q_{t}, q_{t + 1})$ also satisfies a condition (\*). Therefore, by repeating this operation, we can extend $C$ to a path $\tilde{C} = (q_{1}, \ldots, q_{t}, \ldots, q_{t^{\prime}})$ such that $\tilde{C}$ is a maximal path which satisfies a condition (\*).
Consider the following poset $P$ on $\{q_{1}, \ldots, q_{6}\}$, whose covering relations are $q_{1} \lessdot q_{2}$, $q_{2} \lessdot q_{3}$, $q_{4} \lessdot q_{3}$, $q_{5} \lessdot q_{4}$, $q_{5} \lessdot q_{2}$ and $q_{5} \lessdot q_{6}$ (Hasse diagram omitted). Then, $C_{1} = (q_{1}, q_{2}, q_{5}, q_{6})$ satisfies the condition (\*), but $C_{2} = (q_{1}, q_{2}, q_{3}, q_{4}, q_{5}, q_{6})$ does not satisfy the condition (\*) because $q_{2} \ge q_{5}$.
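Checks of this kind are easy to automate. The sketch below is an experiment of ours on a made-up four-element poset (not one of the paper's examples); it enumerates all maximal paths from the covering relations, filters them by the informal restatement of the condition (\*) quoted in the remark above, and records their upper lengths.

```python
covers = {"a": {"b"}, "b": set(), "c": {"b", "d"}, "d": set()}  # a < b, c < b, c < d

def leq(x, y):
    """x <= y in the partial order generated by the covering relations."""
    return x == y or any(leq(z, y) for z in covers[x])

def up(x, y):
    """Is (x, y) an ascending edge, i.e. is x covered by y?"""
    return y in covers[x]

def length_star(path):
    """Upper length: the number of ascending edges of the path."""
    return sum(up(x, y) for x, y in zip(path, path[1:]))

def satisfies_star(path):
    """Informal form of the condition (*): around every locally maximal or
    locally minimal element q_r, no later element lies below an earlier one."""
    for r in range(1, len(path) - 1):
        if up(path[r - 1], path[r]) != up(path[r], path[r + 1]):  # turning point
            if any(leq(path[s], path[t])
                   for s in range(r + 1, len(path)) for t in range(r)):
                return False
    return True

minimal = [x for x in covers if all(x not in ups for ups in covers.values())]
maximal = [x for x in covers if not covers[x]]

paths = []   # maximal paths: start at a minimal element, end ascending at a maximal one
def walk(path):
    if len(path) >= 2 and path[-1] in maximal and up(path[-2], path[-1]):
        paths.append(path)
    down = [y for y in covers if path[-1] in covers[y]]
    for y in list(covers[path[-1]]) + down:
        if y not in path:
            walk(path + [y])
for x in minimal:
    walk([x])

star_lengths = sorted(length_star(p) for p in paths if satisfies_star(p))
```

For this poset the maximal paths satisfying (\*) have upper lengths $1, 1, 1, 2$; their maximum and minimum are precisely the two quantities turned into $\operatorname{rank}^{*} P$ and $\operatorname{rank}_{*} P$ in the definition below.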
Consider the following poset $P$ on $\{q_{1}, \ldots, q_{6}\}$, whose covering relations are $q_{1} \lessdot q_{2}$, $q_{2} \lessdot q_{3}$, $q_{4} \lessdot q_{2}$, $q_{4} \lessdot q_{5}$ and $q_{5} \lessdot q_{6}$ (Hasse diagram omitted). Then, the following maximal paths satisfy the condition (\*): $$(q_{1}, q_{2}, q_{3}),\quad (q_{1}, q_{2}, q_{4}, q_{5}, q_{6}),\quad (q_{4}, q_{2}, q_{3}),\quad (q_{4}, q_{5}, q_{6}).$$ Hence we have $\operatorname{rank}^{*} P = 3$ and $\operatorname{rank}_{*} P = \operatorname{rank}P = 2$. Diagonal F-thresholds of Hibi rings =================================== In this section, we recall the definition and several basic results of $F$-threshold and give a proof of the main theorem. Definition and basic properties ------------------------------- Let $R$ be a Noetherian ring of characteristic $p > 0$ with $\dim R = d \ge 1$. Let ${{\mathfrak{m}}}$ be a maximal ideal of $R$. Suppose that ${{\mathfrak{a}}}$ and $J$ are ${{\mathfrak{m}}}$-primary ideals of $R$ such that ${{\mathfrak{a}}} \subseteq \sqrt{J}$ and ${{\mathfrak{a}}} \cap R^{\circ} \neq \emptyset$, where $R^{\circ}$ is the set of elements of $R$ that are not contained in any minimal prime ideal of $R$. Let $R, {{\mathfrak{a}}}, J$ be as above.
For each nonnegative integer $e$, put $\nu_{{{\mathfrak{a}}}}^{J}(p^{e}) = \max \{ r \in {{\mathbb{N}}} \mid {{\mathfrak{a}}}^{r} \not\subseteq J^{[p^{e}]}\}$, where $J^{[p^{e}]} = (a^{p^{e}} \mid a \in J)$. Then we define $$c^{J}({{\mathfrak{a}}}) = \lim_{e \to \infty} \frac{\nu_{{{\mathfrak{a}}}}^{J}(p^{e})}{p^{e}}$$ if it exists, and call it the $F$-[*threshold*]{} of the pair $(R, {{\mathfrak{a}}})$ with respect to $J$. Moreover, we call $c^{{{\mathfrak{a}}}}({{\mathfrak{a}}})$ the [*diagonal*]{} $F$-[*threshold*]{} of $R$ with respect to ${{\mathfrak{a}}}$. For convenience, we put $$c_{+}^{J}({{\mathfrak{a)}}} = \limsup_{e \to \infty} \frac{\nu_{{{\mathfrak{a}}}}^{J}(p^{e})}{p^{e}}, \hspace{6mm} c_{-}^{J}({{\mathfrak{a)}}} = \liminf_{e \to \infty} \frac{\nu_{{{\mathfrak{a}}}}^{J}(p^{e})}{p^{e}}.$$ About basic properties and examples of $F$-thresholds, see [@HMTW]. In this section, we summarize basic properties of the diagonal $F$-thresholds $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$. 1. Let $(R, {{\mathfrak{m}}})$ be a regular local ring of positive characteristic. Then $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = \dim R.$ 2. Let $k[X_{1}, \ldots, X_{d}]^{(r)}$ be the $r$-th Veronese subring of a polynomial ring $S = k[X_{1}, \ldots, X_{d}]$. Put ${{\mathfrak{m}}} = (X_{1}, \ldots, X_{d})^{r}R$. Then $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = \frac{r + d - 1}{r}$. 3. (\[MOY, Corollary 2.4\]) If $(R, {{\mathfrak{m}}})$ is a local ring with $\dim R = 1$, then $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = 1$. (\[MOY, Theorem 2\]) Let $S = k[X_{1}, \ldots, X_{m}, Y_{1}, \ldots, Y_{n}]$ be a polynomial ring over $k$ in $m + n$ variables, and put ${{\mathfrak{n}}} = (X_{1}, \ldots, X_{m}, Y_{1}, \ldots, Y_{n})S$. Take a binomial $f = X_{1}^{a_{1}} \cdots X_{m}^{a_{m}} - Y_{1}^{b_{1}} \cdots Y_{n}^{b_{n}} \in S$, where $a_{1} \ge \cdots \ge a_{m}, b_{1} \ge \cdots \ge b_{n}$. 
Let $R = S_{{{\mathfrak{n}}}}/(f)$ be a binomial hypersurface local ring with the unique maximal ideal ${{\mathfrak{m}}}$. Then $$c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = m + n - 2 + \frac{\max\{a_{1} + b_{1} - \min\{\sum_{i = 1}^{m} a_{i}, \sum_{j = 1}^{n} b_{j}\}, 0\}}{\max\{a_{1}, b_{1}\}}.$$ Proof of the main theorem ------------------------- In this subsection, we give a proof of the main theorem. Recall Theorem 1: Let $P$ be a finite poset, and $D = J(P)$ the distributive lattice. Put $R = \mathcal{R}_{k}[D]$. Let ${{\mathfrak{m}}} = R_{+}$ be the graded maximal ideal of $R$. Then $$c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = \operatorname{rank}^{*} P + 2.$$ $c_{-}^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) \ge \operatorname{rank}^{*} P + 2$. First of all, we note that for all $Q = p^{e}$, $${{\mathfrak{m}}}^{[Q]} = (\varphi(I)^{Q} \mid I \in J(P))R$$ and $$R = \bigoplus^{+\infty}_{r = 0}\left( T^{r}X_{1}^{s(p_{1})} \cdots X_{N}^{s(p_{N})} \mid 0 \le s(p_{i}) \le r, p_{i} \le p_{j} \Rightarrow s(p_{i}) \ge s(p_{j}) \right).$$ Take a path $C$ such that $\operatorname{length}^{*} C = \operatorname{rank}^{*} P$ and decompose $C = A_{1} + D_{1} + \cdots + D_{n - 1} + A_{n}$, where $$\begin{aligned} V(A_{1}) &= \{q_{1}, \ldots, q_{a(1)}\}, \\ V(D_{1}) &= \{q^{\prime}_{1}, \ldots, q^{\prime}_{d(1)}\}, \\ V(A_{2}) &= \{q_{a(1) + 1}, \ldots, q_{a(2)}\}, \end{aligned}$$ $\rotatebox{90}{$\cdots$}$ $$\begin{aligned} \hspace{8mm} V(D_{n - 1}) &= \{q^{\prime}_{d(n - 2) + 1}, \ldots, q^{\prime}_{d(n - 1)}\}, \\ V(A_{n}) &= \{q_{a(n - 1) + 1}, \ldots, q_{a(n)} = q_{m}\}.\end{aligned}$$ Then we note that $m = \operatorname{rank}^{*} P + 1$. 
Next, we define an increasing sequence of poset ideals as follows: $$I_{1} = \{q_{k(1)}\},$$ $$I_{i} = <\{q_{k(i)}\}> \cup I_{i - 1} \ \ (2 \le i \le m).$$ To prove Lemma 2.5, it is enough to show that $$M = \prod_{I = \emptyset, I_{1}, \ldots, I_{m}} \varphi(I)^{Q - 1} \in {{\mathfrak{m}}}^{(m + 1)(Q - 1)} \setminus {{\mathfrak{m}}}^{[Q]}.$$ [*Proof of Claim 2.6.*]{} Put $M = T^{r} X^{s(p_{1})}_{1} \cdots X^{s(p_{n})}_{n}$. Then, by the construction of $M$, we have $r = (m + 1)(Q - 1)$. Hence $M \in {{\mathfrak{m}}}^{(m + 1)(Q - 1)}$. Moreover, since $C$ satisfies a condition (\*), $s(q_{k(1)}) = m(Q - 1)$, $s(q_{k(2)}) = (m - 1)(Q - 1)$, $\ldots$, $s(q_{k(m)}) = Q - 1$. We assume that $M \in {{\mathfrak{m}}}^{[Q]}$. Then there exists $I \in J(P)$ such that $M/\varphi(I)^{Q} \in R$. Put $M/\varphi(I)^{Q} = T^{r^{\prime}} X^{s^{\prime}(p_{1})}_{1} \cdots X^{s^{\prime}(p_{n})}_{n}$. Then $0 \le s^{\prime}(p_{i}) \le r^{\prime}$ and $s^{\prime}(p_{i}) \le s^{\prime}(p_{j})$ if $p_{i} \ge p_{j}$. By the construction of $M$, we have $r^{\prime} = m(Q - 1) - 1$. Moreover, $s^{\prime}(p_{i}) = s(p_{i}) - Q$ if $p_{i} \in I$ and $s^{\prime}(p_{i}) = s(p_{i})$ if $p_{i} \not\in I$. Hence, if $q_{k(1)} \not\in I$, then $s^{\prime}(q_{k(1)}) = s(q_{k(1)}) = m(Q - 1) > r^{\prime}$, a contradiction. Therefore $q_{k(1)} \in I$. Also if $q_{k(2)} \not\in I$, then $s^{\prime}(q_{k(2)}) = s(q_{k(2)}) = (m - 1)(Q - 1) > (m - 1)(Q - 1) - 1 = s^{\prime}(q_{k(1)})$. This contradicts $q_{k(2)} \ge q_{k(1)}$. Hence $q_{k(2)} \in I$. In the same way, $q_{k(3)}, \ldots, q_{k(m)} \in I$. This contradicts $s(q_{k(m)}) = Q - 1 < Q$. Therefore $M \not\in {{\mathfrak{m}}}^{[Q]}$. Hence we have $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) \ge \operatorname{rank}^{*} P + 2$. Next, we prove the opposite inequality. For all large $Q = p^{e} \gg 0$, if $r \ge (\operatorname{rank}^{*} P + 2)(Q - 1) + 1$ then ${{\mathfrak{m}}}^{r} \subseteq {{\mathfrak{m}}}^{[Q]}$.
We note that $${{\mathfrak{m}}}^{r} = (T^{r}X_{1}^{s(p_{1})} \cdots X_{N}^{s(p_{N})} \mid 0 \le s(p_{i}) \le r, p_{i} \ge p_{j} \Rightarrow s(p_{i}) \le s(p_{j}))R.$$ We will show that for each $M = T^{r}X_{1}^{s(p_{1})} \cdots X_{N}^{s(p_{N})} \in {{\mathfrak{m}}}^{r}$, there exists $I \in J(P)$ such that $\frac{M}{\varphi(I)^{Q}} \in \mathcal{R}_{k}[D]$. Case 1: For all minimal elements $p \in P$, $r - s(p) \ge Q$. Put $I = \emptyset$. Then $\frac{M}{\varphi(I)^{Q}} \in \mathcal{R}_{k}[D]$. Indeed, put $\frac{M}{\varphi(I)^{Q}} = T^{r^{\prime}} X^{s^{\prime}(p_{1})}_{1} \cdots X^{s^{\prime}(p_{n})}_{n}$, then $r^{\prime} = r - Q$ and $s^{\prime}(p_{i}) = s(p_{i})$. Hence $0 \le s^{\prime}(p_{i}) \le r^{\prime}$ and $s^{\prime}(p_{i}) \le s^{\prime}(p_{j})$ if $p_{i} \ge p_{j}$. Therefore $\frac{M}{\varphi(I)^{Q}} \in \mathcal{R}_{k}[D]$. Case 2: There exists a minimal element $p \in P$ such that $r - s(p) \le Q - 1$. For each $p \in P$, we define a function $d_{M}:P \to \{0, 1\}$ as follows: We define $d_{M}(p) = 1$ if there exists a path $C = p_{\min} \to p$ such that $C$ satisfies the following conditions, and $d_{M}(p) = 0$ otherwise: 1. $r - s(p_{\min}) \le Q - 1$. 2. $C$ satisfies a condition (\*). 3. We decompose $C = A_{1} + D_{1} + A_{2} + \cdots + D_{n^{\prime} - 1} + A_{n^{\prime}}$. Let $q_{1}, \ldots, q_{m^{\prime}}$ be the elements of $V(A_{1}). \ldots, V(A_{n^{\prime}})$ as in Lemma 2.5. Then for all $i = 1, \ldots, m^{\prime}$, $s(q_{k(i)}) - s(q_{k(i + 1)}) \le Q - 1$. <!-- --> 1. If $d_{M}(p) = 1$ then $s(p) \ge Q$. 2. If $p^{\prime} \gtrdot p$, $d_{M}(p) = 1$ and $d_{M}(p^{\prime}) = 0$, then $s(p) - s(p^{\prime}) \ge Q$. \(1) For all $p \in P$ such that $d_{M}(P) = 1$, by definition of $d_{M}$, there exists a path $C = p_{\min} \to p$ such that $C$ satisfies the following conditions: 1. $r - s(p_{\min}) \le Q - 1$. 2. $C$ satisfies a condition (\*). 3. We decompose $C = A_{1} + D_{1} + A_{2} + \cdots + D_{n^{\prime} - 1} + A_{n^{\prime}}$. 
Let $q_{1}, \ldots, q_{m^{\prime}}$ be the elements of $V(A_{1}). \ldots, V(A_{n^{\prime}})$ as in Lemma 2.5. Then for all $i = 1, \ldots, m^{\prime}$, $s(q_{k(i)}) - s(q_{k(i + 1)}) \le Q - 1$. Then we note that $\operatorname{length}^{*} C = m^{\prime} + 1 \le m + 1$. We put $$\begin{aligned} V(A_{1}) &= \{q_{1}, \ldots, q_{a(1)}\}, \\ V(D_{1}) &= \{q^{\prime}_{1}, \ldots, q^{\prime}_{d(1)}\}, \\ V(A_{2}) &= \{q_{a(1) + 1}, \ldots, q_{a(2)}\}, \end{aligned}$$ $\rotatebox{90}{$\cdots$}$ $$\begin{aligned} \hspace{8mm} V(D_{n - 1}) &= \{q^{\prime}_{d(n - 2) + 1}, \ldots, q^{\prime}_{d(n - 1)}\}, \\ V(A_{n}) &= \{q_{a(n - 1) + 1}, \ldots, q_{a(n)} = q_{m}\}.\end{aligned}$$ Since $r \ge (m + 1)(Q - 1) + 1$ by assumption, we get $$\begin{aligned} s(q_{m}) &\ge r - m^{\prime}(Q - 1) \\ &\ge (m + 1)(Q - 1) + 1 - m^{\prime}(Q - 1) \\ &\ge Q.\end{aligned}$$ \(2) Assume that $p^{\prime} \gtrdot p$, $d_{M}(p) = 1$ and $d_{M}(p^{\prime}) = 0$. If $s(p) - s(p^{\prime}) \le Q - 1$, then there exists a path $C = p_{\min} \to p$ since $d_{M}(p) = 1$. By Remark 1.8, we can extend $C$ to a path $\tilde{C} = p_{\min} \to p^{\prime}$ satisfying a condition (\*). Hence $d_{M}(p^{\prime}) = 1$, a contradiction. Therefore $s(p) - s(p^{\prime}) \ge Q$. We return to the proof of Theorem 2.4. Put $I = \{p \in P \mid$ there exists $p^{\prime} \ge p$ such that $d_{M}(p^{\prime}) = 1\} \in J(P)$. We prove that $M/\varphi(I)^{Q} \in \mathcal{R}_{k}[D]$. Put $M/\varphi(I)^{Q} = T^{r^{\prime}} X^{s^{\prime}(p_{1})}_{1} \cdots X^{s^{\prime}(p_{N})}_{N}$. Then $r^{\prime} = r - Q$. Moreover, $s^{\prime}(p_{i}) = s(p_{i}) - Q$ if $p_{i} \in I$ and $s^{\prime}(p_{i}) = s(p_{i})$ if $p_{i} \not\in I$. Firstly, we prove that $s^{\prime}(p_{i}) \le s^{\prime}(p_{j})$ if $p_{i} \ge p_{j}$. We may assume that $p_{i} \gtrdot p_{j}$. We note that $s^{\prime}(p_{i}) = s(p_{i}) - Q$ if $p_{i} \in I$ and $s^{\prime}(p_{i}) = s(p_{i})$ if $p_{i} \not\in I$. 
Hence, if $p_{i}, p_{j} \in I$, or $p_{i}, p_{j} \not\in I$, then $M/\varphi(I)^{Q} \in \mathcal{R}_{k}[D]$. Therefore, we may assume that $p_{i} \not\in I$ and $p_{j} \in I$. Then we have $d_{M}(p_{i}) = 0$. If $d_{M}(p_{j}) = 1$, then $s^{\prime}(p_{i}) =s(p_{i}) \le s(p_{j}) - Q = s^{\prime}(p_{j})$ by Fact 2.8(2). If $d_{M}(p_{j}) = 0$, then there exists $p_{k} \in P$ such that $d_{M}(p_{k}) = 1$ and $p_{k} \ge p_{j}$. If $p_{k} \ge p_{i}$, then $p_{i} \in I$, a contradiction. Hence $p_{k} \not\ge p_{i}$. Since $d_{M}(p_{k}) = 1$, there exists a path $C = p_{\min} \to p_{k}$. Case 2-1: We can extend $C$ to a path $\tilde{C} = p_{\min} \to p_{i}$ satisfying a condition (\*). If $s(p_{k}) - s(p_{i}) \le Q - 1$, then $d_{M}(p_{i}) = 1$, a contradiction. Hence $s(p_{k}) - s(p_{i}) \ge Q$. Therefore, we have $s^{\prime}(p_{j}) - s^{\prime}(p_{i}) = s(p_{j}) - Q - s(p_{i}) \ge s(p_{k}) - s(p_{i}) - Q \ge 0$. Case 2-2: We cannot extend $C$ as Case 2-1. In this case, a path $\tilde{C} = p_{\min} \to p_{i}$ does not satisfy a condition (\*). Hence there exists $p_{\ell} \in V(C)$ such that $p_{\ell} \ge p_{k}, p_{i}$. This contradicts $d_{M}(p_{j}) = 0$. Therefore, we have that $s^{\prime}(p_{i}) \le s^{\prime}(p_{j})$ if $p_{i} \ge p_{j}$. Secondly, we prove that $0 \le s^{\prime}(p_{i}) \le r^{\prime}$. By Fact 2.8(1), $0 \le s^{\prime}(p_{i})$. To prove $s^{\prime}(p_{i}) \le r^{\prime}$, it is enough to show that $s^{\prime}(p_{\min}) \le r^{\prime}$ for all minimal element $p_{\min}$. If $p_{\min} \in I$, $s^{\prime}(p_{\min}) = s(p_{\min}) - Q \le r - Q = r^{\prime}$. Assume that $p_{\min} \not\in I$. If $r - s(p_{\min}) \le Q - 1$, then $d_{M}(p) = 1$, a contradiction. Hence $r - s(p_{\min}) \ge Q$, and thus $r^{\prime} - s^{\prime}(p_{\min}) = r - Q - s(p_{\min}) \ge 0$. $c_{+}^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) \le \operatorname{rank}^{*} P + 2$. As a result, we have $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = \operatorname{rank}^{*} P + 2$. 
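For very small posets, Theorem 2.4 can be checked by brute force. The sketch below is a numerical illustration of ours: it hard-codes the two-element chain $p_{1} < p_{2}$ (for which $\operatorname{rank}^{*} P = 1$), treats $Q$ simply as an integer parameter, and computes $\nu_{{{\mathfrak{m}}}}^{{{\mathfrak{m}}}}(Q) = \max\{r \mid {{\mathfrak{m}}}^{r} \not\subseteq {{\mathfrak{m}}}^{[Q]}\}$ directly from the monomial membership criteria used in the proof.

```python
from itertools import combinations

P = [1, 2]                      # the chain p1 < p2 (rank* P = 1)
below = {1: set(), 2: {1}}      # elements strictly below each element

def is_ideal(S):
    """A poset ideal is a downward-closed subset."""
    return all(below[x] <= S for x in S)

ideals = [set(c) for k in range(len(P) + 1)
          for c in combinations(P, k) if is_ideal(set(c))]
# exponent vectors of phi(I) = T * prod_{p in I} X_p (the T-degree is 1)
phis = [tuple(int(p in I) for p in P) for I in ideals]

def in_ring(r, s1, s2):
    """T^r X1^s1 X2^s2 lies in R iff 0 <= s2 <= s1 <= r (since p1 <= p2)."""
    return 0 <= s2 <= s1 <= r

def in_frobenius_power(r, s1, s2, Q):
    """M lies in m^[Q] iff M / phi(I)^Q lies in R for some poset ideal I."""
    return any(in_ring(r - Q, s1 - Q * e1, s2 - Q * e2) for e1, e2 in phis)

def nu(Q, r_max):
    """max{ r <= r_max : some degree-r monomial of m^r avoids m^[Q] }."""
    best = 0
    for r in range(1, r_max + 1):
        if any(not in_frobenius_power(r, s1, s2, Q)
               for s1 in range(r + 1) for s2 in range(s1 + 1)):
            best = r
    return best
```

For $Q = 4$ and $Q = 8$ the search returns $3(Q - 1)$, so $\nu_{{{\mathfrak{m}}}}^{{{\mathfrak{m}}}}(Q)/Q \to 3 = \operatorname{rank}^{*} P + 2$, as the theorem predicts for this chain.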
$F$-pure thresholds of Hibi rings ================================= The $F$-pure threshold, which was introduced by [@TW], is an invariant of an ideal of an $F$-finite $F$-pure ring. The $F$-pure threshold can be calculated by computing generalized test ideals (see [@HY]), and [@Bl] showed how to compute generalized test ideals in the case of toric rings and their monomial ideals. Since Hibi rings are toric rings, we can compute the $F$-pure threshold of the homogeneous maximal ideal of an arbitrary Hibi ring, and it will be described in terms of the poset. Let $R$ be an $F$-finite $F$-pure ring of characteristic $p > 0$, ${{\mathfrak{a}}}$ a nonzero ideal of $R$, and $t$ a non-negative real number. The pair $(R, {{\mathfrak{a}}}^t)$ is said to be $F$-pure if for all large $q = p^e$, there exists an element $d \in {{\mathfrak{a}}}^{{\lceil t(q-1) \rceil}}$ such that the map $R \longrightarrow R^{1/q} \ (1 \mapsto d^{1/q})$ splits as an $R$-linear map. Then the $F$-pure threshold $\operatorname{fpt}({{\mathfrak{a}}})$ is defined as follows: $$\begin{aligned} \operatorname{fpt}({{\mathfrak{a}}}) = \sup\{t \in \mathbb{R}_{\geq 0} \mid (R, {{\mathfrak{a}}}^t) \text{ is } F \text{-pure}\}. \end{aligned}$$ Hara and Yoshida [@HY] introduced the generalized test ideal $\tau({{\mathfrak{a}}}^t)$ ($t$ a non-negative real number). Then $\operatorname{fpt}({{\mathfrak{a}}})$ can be calculated as the minimal jumping number of the family $\tau({{\mathfrak{a}}}^t)$, that is, $$\begin{aligned} \operatorname{fpt}({{\mathfrak{a}}}) = \sup \{ t \in \mathbb{R}_{\geq 0} \mid \tau({{\mathfrak{a}}}^t) = R \}.\end{aligned}$$ In particular, [@Bl] showed how to calculate $\tau({{\mathfrak{a}}}^c)$ in the case of a monomial ideal ${\mathfrak{a}}$ in a toric ring $R$. Now, we recall the following theorem of [@Bl]. Setting for toric rings ----------------------- Let $k$ be a perfect field, $N = M^{\vee} \cong \mathbb{Z}^n$ a dual pair of lattices. 
Let $\sigma \subset N_{\mathbb{R}} = N \otimes_{\mathbb{Z}} \mathbb{R}$ be a strongly convex rational polyhedral cone given by $\sigma = \{r_1u_1 + \cdots + r_su_s \mid r_i \in \mathbb{R}_{\geq 0} \}$ for some $u_1, \dots, u_s$ in $N$. The dual cone $\sigma^{\vee}$ is a (rational convex polyhedral) cone in $M_{\mathbb{R}}$ defined by $\sigma^{\vee} = \{m \in M_{\mathbb{R}} \mid (m,v) \geq 0, \forall v \in \sigma\}$. The lattice points in $\sigma^{\vee}$ give a sub-semigroup ring of the Laurent polynomial ring $k[X_1^{\pm 1}, \dots, X_n^{\pm 1}]$, generated by $X^m = X_1^{m_1} \cdots X_n^{m_n} \ (m \in \sigma^{\vee} \cap M)$. This affine semigroup ring is denoted by $$\begin{aligned} R_{\sigma} = k[\sigma^{\vee} \cap M].\end{aligned}$$ $R_{\sigma}$ is said to be the toric ring defined by ${\sigma}$. For a monomial ideal ${{\mathfrak{a}}} = (X^{\alpha_1}, \dots, X^{\alpha_s}) \subset R_{\sigma}$, $P({{\mathfrak{a}}})$ denotes the convex hull of $\alpha_1, \dots, \alpha_s$ in $M_{\mathbb{R}}$. Let $R_{\sigma}$ be a toric ring defined by ${\sigma}$ over a field of positive characteristic and ${{\mathfrak{a}}}$ a monomial ideal of $R_{\sigma}$. Let $v_1, \dots, v_s \in \mathbb{Z}^n$ be the primitive generators of $\sigma$. Then a monomial $X^m \in R_{\sigma}$ is in $\tau({{\mathfrak{a}}}^c)$ if and only if there exists $w \in M_{\mathbb{R}}$ with $(w,v_i) \leq 1$ for all $i$, such that $$\begin{aligned} m + w \in \text{relint } cP({{\mathfrak{a}}}), \end{aligned}$$ where $cP({{\mathfrak{a}}}) = \{cm \in M_{\mathbb{R}} \mid m \in P({{\mathfrak{a}}})\}$. In particular, $\tau({{\mathfrak{a}}}^c) = R$ if and only if $X^0 \in \tau({{\mathfrak{a}}}^c)$. Let $$\mathcal{O} = \{ w \in M_{\mathbb{R}} \mid (w,v_i) \geq 1 \ (\exists i)\}.$$ Then we get the following corollary. \[hi\] $\operatorname{fpt}({{\mathfrak{a}}}) = \sup \{ c \in \mathbb{R}_{\geq 0} \mid (\check{\sigma} \setminus \mathcal{O}) \cap cP({{\mathfrak{a}}}) \neq \emptyset \}$. 
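As a quick sanity check (an example added here, not part of the original statement), Corollary \[hi\] recovers the classical value of the $F$-pure threshold of the maximal ideal of a polynomial ring, using the standard toric description of $k[x_1, \dots, x_n]$:

```latex
% k[x_1,...,x_n] is the toric ring of the cone spanned by the standard basis
% of N = Z^n, so v_i = e_i, and we take a = m = (x_1,...,x_n).
\begin{gather*}
\mathcal{O} = \{w \in M_{\mathbb{R}} \mid w_i \ge 1 \ (\exists i)\}, \qquad
\check{\sigma} \setminus \mathcal{O} = [0,1)^n,\\
P(\mathfrak{m}) = \operatorname{conv}(e_1, \dots, e_n), \qquad
cP(\mathfrak{m}) = \{w \in \mathbb{R}^n_{\ge 0} \mid w_1 + \cdots + w_n = c\}.
\end{gather*}
% The point (c/n, ..., c/n) lies in cP(m) with all coordinates < 1 exactly when c < n, so
\[
\operatorname{fpt}(\mathfrak{m})
  = \sup\{\, c \ge 0 \mid [0,1)^n \cap cP(\mathfrak{m}) \ne \emptyset \,\} = n.
\]
```

This agrees with the well-known formula $\operatorname{fpt}((x_1, \dots, x_n)) = n$ for a polynomial ring.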
$F$-pure threshold of Hibi rings -------------------------------- Since Hibi rings are toric rings, we can compute the $F$-pure threshold of the homogeneous maximal ideal of any Hibi ring using Corollary \[hi\]. Recall that Hibi rings have the structure of toric rings. Let $P$ be a finite poset, $R = R_k(D)$, and ${{\mathfrak{m}}}$ the unique homogeneous maximal ideal of $R$, where $D=J(P)$. $\mathbb{R}^P$ denotes the $\#P$-dimensional $\mathbb{R}$-vector space whose coordinates are indexed by $P$. $\mathbb{Z}^P$ denotes the set of lattice points in $\mathbb{R}^P$. For a monomial $T^{u_T}\prod_{p \in P}X_p^{u_p} \in R$, $u = (u_T, u_p)_{p \in P}$ is the corresponding vector in $\mathbb{Z} \oplus \mathbb{Z}^P$. It is known that $R$ is a toric ring defined by a strongly convex rational polyhedral cone generated from “the order polytope of $P$”. Let $P, \mathbb{R}^P$ be as above, and describe an element of $\mathbb{R}^P$ as $(u_p)_{p \in P}$. The order polytope of $P$ is the subset of $\mathbb{R}^P$ defined by the following conditions. 1. $0 \leq u_p \leq 1$ for all $p \in P$. 2. $u_p \leq u_{p'}$ if $p \geq p'$. Note that the condition 2) is slightly different from the original one; it is adjusted to the construction of Hibi rings in this paper. Let $\mathfrak{m}$ be the maximal homogeneous ideal of $R$. From the construction of $R$, $P(\mathfrak{m}) - (1, \overrightarrow{0}) \subset (0) \oplus \mathbb{R}^P$ is the order polytope of $P$, and $$R = k[\mathbb{R}_{\geq 0} P(\mathfrak{m}) \cap (\mathbb{Z} \oplus \mathbb{Z}^P)].$$ Hence, if we put $\sigma^{\vee} = \mathbb{R}_{\geq 0} P(\mathfrak{m})$, then $R$ is the toric ring defined by $\sigma$. Now, the primitive generators of $\check{\sigma}$ are the following: $$\begin{aligned} \left\{ u = (u_i, u_T)_{i \in P} \left| \begin{array}{cc} u_T = 1 \\ u_i = 1 & (i \in I)\\ u_i = 0 & (i \not \in I) \end{array} ,I \in J(P) \right. \right\}. \end{aligned}$$ Moreover, $R$ is represented as $k[X^u \mid u \in \check{\sigma} \cap \mathbb{Z}^{\#P +1}]$. 
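To make the construction concrete (an example added here, not in the original), consider the smallest nontrivial case, the one-point poset $P = \{p\}$, so that $J(P) = \{\emptyset, \{p\}\}$:

```latex
% The family of primitive generators above has one member per order ideal I in J(P):
\[
I = \emptyset \ \mapsto\ (u_T, u_p) = (1, 0), \qquad
I = \{p\} \ \mapsto\ (u_T, u_p) = (1, 1).
\]
% The lattice points of the cone spanned by (1,0) and (1,1) are exactly the pairs
% with 0 <= u_p <= u_T, so
\[
R = k[X^u \mid u \in \check{\sigma} \cap \mathbb{Z}^{2}] = k[\,T,\ T X_p\,],
\]
% a polynomial ring in two variables, as expected for the Hibi ring of a one-point poset.
```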
Since $P({{\mathfrak{m}}}) = \check{\sigma} \cap (u_T = 1)$, we can obtain the following lemma from Corollary \[hi\]. $$\begin{aligned} \label{Hfpt} \operatorname{fpt}({{\mathfrak{m}}}) = \sup \{ \deg_T u \mid u \in \check{\sigma} \setminus \mathcal{O} \}.\end{aligned}$$ Let $\overline{P}$ be $P \cup \{-\infty, \infty\}$, and let $\Sigma$ be the set of real-valued functions $\psi$ on $\overline{P}$ satisfying the following properties. 1) : $\psi(\infty)=0$. 2) : $x \lessdot y \Longrightarrow \psi(y) - \psi(x) \leq 1$ \[hhh\] Let $R = \mathcal{R}_k(D)$ be the Hibi ring corresponding to a finite poset $P$, and ${{\mathfrak{m}}}$ its homogeneous maximal ideal. Then $$\begin{aligned} \operatorname{fpt}({{\mathfrak{m}}}) = \max \{\psi(-\infty) \mid \psi \in \Sigma \}. \end{aligned}$$ Note that $\check{\sigma}$ is a rational polyhedral cone given by the following conditions: $$\begin{aligned} \begin{array}{ccc} 0 \leq & u_i & (i : \text{maximal}), \\ 0 \leq & u_i - u_j & (j \lessdot i), \\ 0 \leq & u_i - u_T & (i : \text{minimal}). \end{array} \end{aligned}$$ Then, $\check{\sigma} \setminus \mathcal{O}$ is the region defined by the following conditions: $$\begin{aligned} \begin{array}{ccccc} 0 \leq & u_i & \leq & 1 & (i : \text{maximal}), \\ 0 \leq & u_i - u_j & \leq & 1 & (j \lessdot i), \\ 0 \leq & u_i - u_T & \leq & 1 & (i : \text{minimal}). \end{array} \end{aligned}$$ Thus we obtain the required assertion. From Theorem \[hhh\], we get the following inequality: $$\begin{aligned} \operatorname{fpt}({{\mathfrak{m}}}) &\leq \min \{\operatorname{length}C \mid C : \text{maximal chain in} \ \overline{P} \} \\ &= \min \{\operatorname{length}C \mid C : \text{maximal chain in} \ P \}+2. \\ \end{aligned}$$ We have another description of $\operatorname{fpt}(\mathfrak{m})$ in terms of $\operatorname{rank}_*P$. 
\[fptrank\] Under the same notation as in Theorem \[hhh\], $$\operatorname{fpt}(\mathfrak{m})=\operatorname{rank}_* \overline{P} = \operatorname{rank}_*P + 2.$$ In particular, $$\operatorname{fpt}({{\mathfrak{m}}}) \in \mathbb{N}.$$ The second equality is clear. First, we prove that $\operatorname{fpt}(\mathfrak{m}) \leq \operatorname{rank}_*P +2$. If $A = (p_0, \dots, p_r)$ is a chain in $\overline{P}$ and $\psi \in \Sigma$, then $\psi(p_0) \leq \psi(p_r)+r$ by the condition 2). If a path $C$ in $\overline{P}$ satisfying (\*) has a decomposition into $A_1 + D_1 + \dots +A_n \ (A_i = (p_{i_0}, \dots , p_{r_i}))$ and $ \psi \in \Sigma$, then $\psi(-\infty) \leq \psi(\infty) + \sum_{i=1}^{n}r_i \leq \operatorname{rank}_* \overline{P}$. Hence $\operatorname{fpt}(\mathfrak{m}) \leq \operatorname{rank}_*P + 2$. Next, we will prove that $\operatorname{fpt}(\mathfrak{m}) \geq \operatorname{length}^*C$ for some path $C$ in $\overline{P}$. If we can find such a path, we will get $\operatorname{fpt}(\mathfrak{m}) \ge \operatorname{length}^*C \ge \operatorname{rank}_{*} \overline{P}$. In general, we can calculate $\operatorname{fpt}(\mathfrak{m})$ by the following procedure. We will define $\lambda_i$ and $\Lambda_i \ (i \in \mathbb{N})$ inductively as subsets of $\overline{P}$. 1. $\Lambda_0 = \lambda_0 = \{\infty\}$. 2. $\lambda_i = \{ p \in \overline{P} \setminus \bigcup_{j=0}^{i-1} \Lambda_j \mid \exists q \in \Lambda_{i-1} \ s.t. \ p \lessdot q\}$. 3. $\Lambda_i = \{ p \in \overline{P} \setminus \bigcup_{j=0}^{i-1} \Lambda_j \mid \exists q \in \lambda_i \ s.t. \ p \geq q\}$. Note that $\lambda_i$ is a subset of $\Lambda_i$, and $\overline{P}$ is the disjoint union of the $\Lambda_i$’s. \[hoge\] Suppose that $p \in \Lambda_i$ and $p' \in \Lambda_j$. If $i > j$ then $p \not > p'$. Suppose that $p > p'$. Because $p' \in \Lambda_j$, we can take $q \in \lambda_j$ such that $q < p$. Since $i > j$ and $q < p$, $p \not \in \bigcup_{l=1}^{j-1}\Lambda_l$, and $p \in \Lambda_j$ by the definition. 
This is a contradiction. Now let us construct a function $\psi$ given by $\psi(p) = i$ if $p$ is in $\Lambda_i$. Then $\psi$ is in $\Sigma$ because of Claim \[hoge\], and $\psi(-\infty) = \max\{ i \mid \Lambda_i \neq \emptyset \}$. Next, we will find a path $C$ such that $\operatorname{length}^*C = \operatorname{fpt}(\mathfrak{m})$. Let $p_0 = -\infty$. Then $p_0 \in \lambda_l$ for some $l$. If $p_i \in \lambda_l$, there exists $p_{i+1} \in \Lambda_{l-1}$ which covers $p_i$. If $p_i \in \Lambda_l \setminus \lambda_l$, then there exists $p \in \lambda_l$ such that $p \leq p_i$, and a sequence $p_i, p_{i+1}, \dots, p_j = p \ $($p_{k+1} \lessdot p_k$ for $i \leq k < j$) in $\Lambda_l$. At the end, $p_s$ becomes $\infty$; we define a path $C = (p_0 , \dots , p_s)$, and then $C$ satisfies condition (\*). Indeed, it is clear by construction that if $p \in \Lambda_k$ and $p' \in \Lambda_l$, then $k \geq l$; if $k = l$, then $p' \leq p$, which shows that $p$ and $p'$ belong to the same $D_i$, and if $k > l$, then $p \not > p'$ because of Claim \[hoge\]. Now $\operatorname{length}^*C$ is the number of pairs $p_i \lessdot p_{i+1}$, which by the construction of $C$ equals the number of non-empty $\Lambda_i$’s, that is, $\psi(-\infty)$. For this $\psi$ and $C$, $\operatorname{rank}_*\overline{P}~\leq~\operatorname{length}^*C~=~\psi(-\infty)~\leq~\operatorname{fpt}(\mathfrak{m})$. Application and $-a(R)$ of Hibi rings ===================================== In this section, we recall the definition of the $a$-invariant $a(R)$ and compare $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}})$ and $\operatorname{fpt}({{\mathfrak{m}}})$ with $-a(R)$ for Hibi rings. First, we recall that the definition of the $a$-invariant is $$a(R) = \max\{n \in {{\mathbb{Z}}} \mid [H_{{{\mathfrak{m}}}}^{\dim R}(R)]_{n} \neq 0\}$$ (see [@GW]). Bruns and Herzog computed $a(R)$ for an ASL (\[BH, Theorem 1.1\]). By their theorem, we can obtain the following fact. 
(\[BH, Theorem 1.1\]) Let $R = \mathcal{R}_{k}[D]$ be the Hibi ring associated with a distributive lattice $D = J(P)$, where $P$ is a finite poset. Then $$-a(R) = \operatorname{rank}P + 2.$$ In particular, Theorems 2.4 and 3.7 imply the following corollary. Under the same notation as in Theorem 2.4, we have $$\operatorname{fpt}({{\mathfrak{m}}}) \le -a(R) \le c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}).$$ Segre products of two polynomial rings are among the most important examples of Hibi rings. Since the Segre product of $k[X_{1}, \ldots, X_{m}]$ and $k[Y_{1}, \ldots, Y_{n}]$ is isomorphic to the determinantal ring $k[X]/I_{2}(X)$, where $X$ is an $m \times n$ matrix all of whose entries are indeterminates, we obtain the following corollary by Example 1.3. Let $k$ be a perfect field of positive characteristic, and let $m,n \ge 2$ be integers. Let $R=k[X_1, \ldots, X_m], S=k[Y_1, \ldots, Y_n]$ be polynomial rings, and let $R \# S$ be the Segre product of $R$ and $S$. Let ${{\mathfrak{m}}}$ be the unique graded maximal ideal of $R \# S$. Then $$\begin{aligned} c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = -a(R \# S) &= \max\{m, n\},\\ \operatorname{fpt}({{\mathfrak{m}}}) &= \min\{m, n\}. \end{aligned}$$ In particular, $c^{{{\mathfrak{m}}}}({{\mathfrak{m}}}) = \operatorname{fpt}({{\mathfrak{m}}})$ if and only if $m = n$. Finally, we construct general examples. For given integers $a \ge b\ge c \ge 1$, we can find a connected poset $P$ such that $\operatorname{rank}^{*} P = a, \operatorname{rank}P = b$ and $\operatorname{rank}_{*} P = c$. We put $a = db + r$, $e = \lceil \frac{a}{b} \rceil$ and $f = \max\{c - (b - r + 1), 0\} + 1$, where $0 \le r \le b - 1$. Case 1: $d \ge 2$. 
(Figure: the Hasse diagram for Case 1, with $e$ ascending chains $q_{i1} \lessdot q_{i2} \lessdot \cdots \lessdot q_{i\,b+1}$ for $i = 1, \ldots, e$, consecutive chains being joined through descending chains of vertices $q^{\prime}_{ij}$; the original xy-pic figure is omitted here.) Case 2: $d = 1$ and $c \ge b - r$. Put $g = c - b + r$. 
(Figure: the Hasse diagram for Case 2, with two ascending chains $q_{11} \lessdot \cdots \lessdot q_{1\,b+1}$ and $q_{21} \lessdot \cdots \lessdot q_{2\,b+1}$ joined through a descending chain $q^{\prime}_{1g}, \ldots, q^{\prime}_{11} = q_{21}$ starting at $q_{1\,r+1} = q^{\prime}_{1\,g+1}$; the original xy-pic figure is omitted here.) Case 3: $d = 1$ and $c < b - r$. (Figure: the Hasse diagram for Case 3, with ascending chains $q_{11} \lessdot \cdots \lessdot q_{1\,b+1}$ and $q_{21} \lessdot \cdots \lessdot q_{2\,b+1}$, a descending chain $q^{\prime}_{12}, \ldots, q^{\prime}_{1\,c+1}$ attached at $q^{\prime}_{11} = q_{11}$, and $q_{21}$ attached below $q_{1\,r+1}$; the original xy-pic figure is omitted here.) ${{{\mathbf{Acknowledgement.}}}}$ The authors wish to thank Professor Ken-ichi Yoshida for many valuable comments and his encouragement. The second author was partially supported by Nagoya University Scholarship for Outstanding Graduate Students. [99]{} [G. 
Birkhoff]{}, [*Lattice Theory*]{}, 3rd ed., [Amer. Math. Soc. Colloq. Publ.]{} [No. 25, Amer. Math. Soc., Providence]{}, [R. I., 1967]{}. , [*On the computation of a-invariants*]{}, [manuscripta math.]{}, **77** (1992), 201–213. , [*Multiplier ideals and modules on toric varieties*]{}, [Math. Z.]{}, **248** (2004), 113–121. , [*On graded rings, I*]{}, [J. Math. Soc. Japan]{} **30**(2) (1978), 179–213. , [*Distributive lattices, affine semigroup rings and algebras with straightening laws*]{}, [in “Commutative Algebra and Combinatorics” (M. Nagata and H. Matsumura, eds.) Adv. Stud. Pure Math. 11, North Holland, Amsterdam]{}, (1987), 93–109. , [*Formulas of F-thresholds and F-jumping coefficients on toric rings*]{}, [Kodai Math. J.]{}, **32** (2009), 238–255. , [*F-thresholds, tight closure, integral closure, and multiplicity bounds*]{}, [Michigan Math. J.]{}, **57** (2008), 461–480. , [*A generalization of tight closure and multiplier ideals*]{}, [Trans. Amer. Math. Soc.]{}, **355** (2003), 3143–3174. , [*F-thresholds vs. a-invariants for homogeneous toric rings*]{}, [preprint]{}. , [*Diagonal F-thresholds on binomial hypersurfaces*]{}, [Communications in Algebra]{}, **38** (2010), 2992–3013. , [*F-thresholds and Bernstein-Sato polynomials*]{}, [European Congress of Mathematics]{}, 341–364, Eur. Math. Soc., Zürich. , [*Two Poset Polytopes*]{}, [Discrete & Computational Geometry]{}, **1** (1986), 9–23. , [*On F-pure thresholds*]{}, [J. Algebra]{}, **282** (2004), 278–297.
--- author: - | Wei Wang, Zheng Dang, Yinlin Hu, Pascal Fua, Mathieu Salzmann\ School of Computer and Communication Sciences\ CVLab, EPFL\ CH-1015 Lausanne, Switzerland\ `{wei.wang zheng.dang yinlin.hu pascal.fua mathieu.salzmann}@epfl.ch`\ bibliography: - 'ms.bib' title: 'Backpropagation-Friendly Eigendecomposition' ---
--- author: - Dominic Verdon bibliography: - 'bibliography.bib' date: | School of Mathematics\ University of Bristol\ title: Unitary pseudonatural transformations --- Introduction ============ Overview -------- Natural transformations between functors are a crucial element of category theory. We recall the basic definition. Let $\mathcal{C},\mathcal{D}$ be categories and let $F,G: \mathcal{C} \to \mathcal{D}$ be functors. Then a *natural transformation* $\alpha:F \to G$ is a set of morphisms $\{\alpha_X:F(X) \to G(X)\}_{X \in \Obj(\mathcal{C})}$ such that for any $f: X \to Y$ in $\mathcal{C}$ the following diagram commutes: $$\begin{array}{ccc} F(X) & \xrightarrow{\ F(f)\ } & F(Y)\\ {\scriptstyle \alpha_X}\downarrow & & \downarrow{\scriptstyle \alpha_Y}\\ G(X) & \xrightarrow{\ G(f)\ } & G(Y) \end{array}$$ We say that a natural transformation is *invertible* if its components $\{\alpha_X\}$ are invertible in $\mathcal{D}$. If $\mathcal{D}$ is a dagger category, then we say that an invertible natural transformation is *unitary* if its components are additionally unitary in $\mathcal{D}$. Perhaps more naturally, these notions of invertibility may be defined with respect to the category $\Fun(\mathcal{C},\mathcal{D})$ of functors and natural transformations. An invertible natural transformation is just an invertible morphism in this category. If $\mathcal{C}, \mathcal{D}$ are dagger and the functors unitary, the category $\Fun(\mathcal{C},\mathcal{D})$ inherits a dagger structure; a unitary natural transformation is a unitary morphism in this dagger category. Just as natural transformations between functors are a fundamental part of category theory, pseudonatural transformations between pseudofunctors (Definition \[def:pnt\]) are an important part of 2-category theory, which includes monoidal category theory. This short paper makes the elementary step of generalising the above-mentioned notions of invertibility to pseudonatural transformations. 
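The naturality square has a familiar programming incarnation (an illustration added here, not taken from the paper): a polymorphic function between two container functors is natural in its argument. The sketch below, with all names chosen purely for illustration, checks the square for the "first element" transformation from the list functor to a `Maybe`-style functor:

```python
def fmap_list(f, xs):
    """The list functor F on morphisms: F(f) maps f over a list."""
    return [f(x) for x in xs]

def fmap_maybe(f, m):
    """A Maybe-style functor G on morphisms; None plays the role of 'nothing'."""
    return None if m is None else f(m)

def safe_head(xs):
    """Component alpha_X : F(X) -> G(X) of the candidate natural transformation."""
    return xs[0] if xs else None

# The naturality square: G(f) after alpha_X equals alpha_Y after F(f), for f : X -> Y.
f = lambda n: n * n
for xs in ([], [3], [1, 2, 3]):
    assert fmap_maybe(f, safe_head(xs)) == safe_head(fmap_list(f, xs))
```

Note that `safe_head` is not invertible, so this natural transformation is neither invertible nor unitary in the sense above; it only witnesses the commuting square.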
Let $\mathcal{C},\mathcal{D}$ be 2-categories, and let $\Fun(\mathcal{C},\mathcal{D})$ be the 2-category of pseudofunctors $\mathcal{C} \to \mathcal{D}$, pseudonatural transformations and modifications. We first consider invertibility. The most general notion of an invertible 1-morphism in a 2-category is duality, or adjunction; the right (resp. left) ‘inverse’ 1-morphism is called a right (resp. left) dual. A 2-category is said to ‘have right (resp. left) duals’ when every 1-morphism has a right (resp. left) dual. A coherent choice of left and right duals for every 1-morphism is called a pivotal structure; a 2-category with a pivotal structure is called pivotal. Here we unpack the notion of duality for pseudonatural transformations (Definition \[def:dualpnt\]) and show the following facts. - If $\mathcal{C}$ has left (resp. right) duals and $\mathcal{D}$ has right (resp. left) duals, then $\Fun(\mathcal{C},\mathcal{D})$ has right (resp. left) duals (Corollary \[cor:dualsexistence\]). - If $\mathcal{C}, \mathcal{D}$ are pivotal, then $\Fun_{p}(\mathcal{C},\mathcal{D})$ is also pivotal, where the subscript $p$ represents restriction to pivotal functors (Theorem \[thm:pivinduced\]). If the 2-categories $\mathcal{C},\mathcal{D}$ additionally have a dagger structure, we restrict $\Fun(\mathcal{C},\mathcal{D})$ to unitary pseudofunctors. At this point we need a notion of unitarity for pseudonatural transformations. This requirement arises either physically, by the desire that the components of the transformation should be unitary in $\mathcal{D}$; or categorically, by the desire that the 2-category $\Fun(\mathcal{C},\mathcal{D})$ should itself inherit a dagger structure (for general pseudonatural transformations, there is no obvious dagger structure on $\Fun(\mathcal{C},\mathcal{D})$). We could say that a pseudonatural transformation is unitary when all its 2-morphism components are unitary in $\mathcal{D}$. 
However, the more categorically natural way of specifying unitarity of a pseudonatural transformation is to say that its dagger is equal to its inverse — i.e. its right dual. When $\mathcal{C},\mathcal{D}$ are pivotal dagger (i.e. possessing compatible pivotal and dagger structures), we observe that there is a notion of the *dagger* of a pseudonatural transformation such that this definition makes sense and gives the same result. Indeed, for $\mathcal{C},\mathcal{D}$ pivotal dagger, we have the following: - The following definitions of *unitary pseudonatural transformation* are equivalent (Proposition \[prop:unitaritydefsequiv\]): - All 2-morphism components of a pseudonatural transformation are unitary. - The dual of a pseudonatural transformation is equal to its dagger. - Upon restriction to unitary pseudonatural transformations, the 2-category $\Fun(\mathcal{C},\mathcal{D})$ inherits a dagger structure. Moreover, $\Fun_p(\mathcal{C},\mathcal{D})$ inherits a pivotal dagger structure (Theorem \[thm:funcddagger\]). Our main motivation for this work is the study of unitary pseudonatural transformations between fibre functors on representation categories of compact quantum groups, which is the subject of a companion paper [@Verdon2020]. As a 2-categorical example, we remark, but do not show here, that Jones’ biunitaries [@Jones1999 §2.11] can be understood as examples of unitary pseudonatural transformations between pseudofunctors embedding a planar subalgebra. Acknowledgements ---------------- The author thanks Ashley Montanaro, David Reutter, Changpeng Shao and Jamie Vicary for useful discussions related to this work. The work was supported by EPSRC. Structure --------- In Section \[sec:background\] we introduce necessary background material for the rest of this paper. In Section \[sec:pnts\] we recall the basic theory of pseudonatural transformations. In Section \[sec:duals\] we discuss dualisability of pseudonatural transformations. 
In Section \[sec:unitary\] we introduce unitary pseudonatural transformations. Background: Pivotal dagger 2-categories {#sec:background} ======================================= Diagrams for 2-categories ------------------------- A 2-category is a generalisation of a category. While a category has objects, morphisms, and composition laws, a 2-category has objects, morphisms, and morphisms between the morphisms, called 2-morphisms, obeying composition laws. The general ‘weak’ definition of 2-category can be found in e.g. [@Leinster1998]. Roughly, a 2-category $\mathcal{C}$ is defined by a set of objects $r,s,\dots$, together with a category of morphisms $\mathcal{C}(r,s)$ for every pair of objects, and functors $\mathcal{C}(r,s) \times \mathcal{C}(s,t) \to \mathcal{C}(r,t)$ defining composition of these Hom-categories, with various coherence data. Fortunately, 2-categories are much more manageable than the general definition might suggest. Recall that every monoidal category is equivalent to a strict monoidal category [@MacLane1963]. This allows us to assume our monoidal categories are strict, allowing the use of a convenient and well-known diagrammatic calculus [@Selinger2010]. In 2-category theory, a similar strictification result holds — every weak 2-category is equivalent to a strict 2-category [@Leinster1998]. We can therefore also use a diagrammatic calculus in this case. A monoidal category is precisely a 2-category with a single object, where 1-morphisms are the ‘objects’ of the monoidal category, 2-morphisms are the ‘morphisms’, and composition of 1-morphisms is the ‘monoidal product’. The 2-categorical diagrammatic calculus is nothing more than the diagrammatic calculus for monoidal categories enhanced with region labels. We briefly summarise this calculus now, closely following the exposition in [@Marsden2014]. More information can be found in e.g. [@Hummon2012]. 
Objects $r,s, \cdots$ of a 2-category are represented by labelled regions: ![image](Figures/svg/2cats/object.png) 1-morphisms $X: r \to s$ are represented by edges, separating the region $r$ on the left from the region $s$ on the right: ![image](Figures/svg/2cats/1morphism.png) Edges corresponding to identity 1-morphisms $\id_r: r \to r$ are invisible in the diagrammatic calculus. 1-morphisms compose from left to right. That is, for 1-morphisms $X:r \to s, Y: s \to t$, the composite[^1] $X \circ Y: r \to t$ is represented as: ![image](Figures/svg/2cats/1morphismcomp.png) For two parallel 1-morphisms $X, Y: r \to s$, a 2-morphism $\alpha: X \to Y$ is represented by a vertex in the diagram, drawn as a box: ![image](Figures/svg/2cats/2morphism.png) 2-morphisms can compose in two ways, depending on their type. For parallel 1-morphisms $X,Y,Z: r \to s$, 2-morphisms $\alpha:X \to Y, \beta: Y \to Z$ can be composed ‘vertically’ to obtain a 2-morphism $\alpha \circ_V \beta: X \to Z$. This is represented by vertical juxtaposition in the diagram: ![image](Figures/svg/2cats/2morphismvcomp.png) For 1-morphisms $X,X': r \to S$ and $Y,Y': s \to t$, 2-morphisms $\alpha: X \to X'$ and $\beta: Y \to Y'$ can be composed ‘horizontally’ to obtain a 2-morphism $\alpha \circ_H \beta: X \circ Y \to X' \circ Y'$. This is represented by horizontal juxtaposition in the diagram: ![image](Figures/svg/2cats/2morphismhcomp.png) As with 1-morphisms, the identity 2-morphisms $\id_X: X \to X$ are invisible in the diagrammatic calculus. 2-categories satisfy the *interchange law*. 
For any 1-morphisms $X,X',X'': r \to s$ and $Y,Y',Y'': s \to t$, and 2-morphisms $\alpha: X \to X'$, $\alpha': X' \to X''$, $\beta: Y \to Y',\beta':Y' \to Y''$: $$(\alpha \circ_V \alpha') \circ_H (\beta \circ_V \beta') = (\alpha \circ_H \beta) \circ_V (\alpha' \circ_H \beta')$$ This corresponds to the well-definedness of the following diagram: \[eq:interchange\] ![image](Figures/svg/2cats/interchange.png) We also have the following *sliding equalities*, which may be obtained by taking some morphisms to be the identity in \[eq:interchange\]: ![image](Figures/svg/2cats/slide1.png)   =   ![image](Figures/svg/2cats/2morphismhcomp.png)   =   ![image](Figures/svg/2cats/slide3.png) These equalities allow us to move 2-morphism boxes past each other provided there are no obstructions. Before moving onto pseudofunctors, we give a first definition from 2-category theory. Equivalence is a strong notion of invertibility of a 1-morphism in a 2-category. From now on we will not draw an enclosing box around diagrams. \[def:equiv\] Let $\mathcal{C}$ be a 2-category and let $X: r \to s$ be a 1-morphism in $\mathcal{C}$. We say that $X$ is an *equivalence* if there exists a 1-morphism $X^{-1}: s \to r$, and invertible[^2] 2-morphisms $\alpha: \id_r \to X \circ X^{-1}$ and $\beta: \id_s \to X^{-1} \circ X$. In diagrams, the equations for invertibility of $\alpha, \beta$ are as follows, where $\alpha^{-1},\beta^{-1}$ are the inverse 2-morphisms: ![image](Figures/svg/2cats/equiva11.png) = ![image](Figures/svg/2cats/equiva12.png), &   ![image](Figures/svg/2cats/equiva21.png) = ![image](Figures/svg/2cats/equiva22.png), && ![image](Figures/svg/2cats/equivb11.png) = ![image](Figures/svg/2cats/equivb12.png), &   ![image](Figures/svg/2cats/equivb21.png) = ![image](Figures/svg/2cats/equivb22.png) If there exists an equivalence $X: r \to s$ we say that the objects $r$ and $s$ are *equivalent* in $\mathcal{C}$. 
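Since a monoidal category is precisely a one-object 2-category, the interchange law can be probed concretely (an illustration added here, not drawn from the paper). In the monoidal category of finite-dimensional vector spaces, vertical composition of 2-morphisms is matrix multiplication, horizontal composition is the Kronecker product, and interchange becomes the mixed-product property $(A \otimes B)(C \otimes D) = AC \otimes BD$. A small pure-Python check, with hypothetical helper names:

```python
def matmul(A, B):
    """Vertical composition: the ordinary matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    """Horizontal composition: the Kronecker (tensor) product of matrices."""
    m, n = len(B), len(B[0])
    return [[A[i // m][j // n] * B[i % m][j % n]
             for j in range(len(A[0]) * n)] for i in range(len(A) * m)]

# Interchange: (alpha .V alpha') .H (beta .V beta') == (alpha .H beta) .V (alpha' .H beta')
alpha, alpha2 = [[1, 2], [3, 4]], [[0, 1], [1, 1]]
beta,  beta2  = [[2, 0], [1, 3]], [[1, 1], [0, 2]]
lhs = kron(matmul(alpha, alpha2), matmul(beta, beta2))
rhs = matmul(kron(alpha, beta), kron(alpha2, beta2))
assert lhs == rhs
```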
Diagrams for pseudofunctors --------------------------- While our 2-categories are strictified, allowing us to use the diagrammatic calculus, we will consider functors between them which are not strict. For this, we use a graphical calculus of *functorial boxes* previously applied in the special case of monoidal functors [@Mellies2006]. Let $\mathcal{C}, \mathcal{D}$ be 2-categories. A *pseudofunctor* $F: \mathcal{C} \to \mathcal{D}$ consists of the following data. - For each object $r$ of $\mathcal{C}$, an object $F(r)$ of $\mathcal{D}$. - For each hom-category $\mathcal{C}(r,s)$ of $\mathcal{C}$, a functor $F_{r,s}: \mathcal{C}(r,s) \to \mathcal{D}(F(r),F(s))$. In the graphical calculus, we represent the effect of the functor $F_{r,s}$ by drawing a shaded box around 1- and 2-morphisms in $\mathcal{C}(r,s)$. For example, let $X,Y: r \to s$ be 1-morphisms and $f: X \to Y$ a 2-morphism in $\mathcal{C}$. Then the 2-morphism $F(f): F(X) \to F(Y)$ in $\mathcal{D}(F(r), F(s))$ is represented as: ![image](Figures/svg/monoidalfunctors/boxnotation1.png)   =   ![image](Figures/svg/monoidalfunctors/boxnotation2.png) - For every pair of composable 1-morphisms $X:r \to s$, $Y: s \to t$ of $\mathcal{C}$, an invertible *multiplicator* 2-morphism $m_{X,Y}:F(X) \circ_D F(Y) \to F(X \circ_C Y)$. In the graphical calculus, these 2-morphisms and their inverses are represented as follows: ![image](Figures/svg/monoidalfunctors/mXY.png) ![image](Figures/svg/monoidalfunctors/mXYdag.png) \[eq:multiplicator\] $$m_{X,Y}: F(X) \circ_D F(Y) \to F(X \circ_C Y) \qquad m_{X,Y}^{-1}: F(X \circ_C Y) \to F(X) \circ_D F(Y)$$ - For every object $r$ of $\mathcal{C}$, an invertible *unitor* 2-morphism $u_r: \id_{F(r)} \to F(\id_{r})$. 
In the diagrammatic calculus, these 2-morphisms and their inverses are represented as follows (recall that identity 1-morphisms are invisible): ![image](Figures/svg/monoidalfunctors/u.png) ![image](Figures/svg/monoidalfunctors/udag.png) \[eq:unitor\] $$u_r: \id_{F(r)} \to F(\id_{r}) \qquad u_r^{-1}: F(\id_{r}) \to \id_{F(r)}$$ The multiplicators and unitors must obey the following coherence equations: - *Naturality*. For any objects $r,s,t$, 1-morphisms $X,X': r \to s$, $Y,Y': s \to t$, and 2-morphisms $f:X \to X', g: Y \to Y'$ in $\mathcal{C}$: \[eq:psfctnat\] ![image](Figures/svg/monoidalfunctors/natural1.png)   =   ![image](Figures/svg/monoidalfunctors/natural2.png) - *Associativity*. For any objects $r,s,t,u$ and 1-morphisms $X:r \to s$, $Y: s \to t$, $Z: t \to u$ of $\mathcal{C}$: \[eq:psfctassoc\] ![image](Figures/svg/monoidalfunctors/assoc1.png)   =   ![image](Figures/svg/monoidalfunctors/assoc2.png) - *Unitality*. For any objects $r,s$ and 1-morphism $X:r \to s$ of $\mathcal{C}$: \[eq:psfctunital\] ![image](Figures/svg/monoidalfunctors/unitality1.png)   =   ![image](Figures/svg/monoidalfunctors/unitality2.png)   =   ![image](Figures/svg/monoidalfunctors/unitality3.png) We say that a pseudofunctor $F: \mathcal{C} \to \mathcal{D}$ is an *equivalence* if every object in $\mathcal{D}$ is equivalent to an object in the image of $F$ and the functors $F_{r,s}: \mathcal{C}(r,s) \to \mathcal{D}(F(r),F(s))$ are equivalences. We observe that the analogous *conaturality*, *coassociativity* and *counitality* equations for the inverses $\{m_{X,Y}^{-1}\},\{u_r^{-1}\}$, obtained by reflecting (\[eq:psfctnat\]-\[eq:psfctunital\]) in a horizontal axis, are already implied by (\[eq:psfctnat\]-\[eq:psfctunital\]). To give some idea of the calculus of functorial boxes, we explicitly prove the following lemma and proposition. From now on we will unclutter the diagrams by omitting region and 1-morphism labels, unless adding the labels seems to significantly aid comprehension. 
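The naturality equation for multiplicators has a simple programming analogue (added here for illustration, not drawn from the paper): for the list functor, componentwise pairing `zip` plays the role of $m_{X,Y}$ — note it is only lax, i.e. not invertible, unlike the pseudofunctor case — and naturality says that pairing commutes with mapping:

```python
def fmap(f, xs):
    """The list functor on morphisms."""
    return [f(x) for x in xs]

def mult(xs, ys):
    """Lax 'multiplicator' m_{X,Y} : F(X) x F(Y) -> F(X x Y), i.e. zip."""
    return list(zip(xs, ys))

# Naturality of m: applying f x g inside the pairing equals pairing after
# applying F(f) and F(g) separately (the analogue of eq:psfctnat).
f = lambda n: n + 10
g = str
xs, ys = [1, 2, 3], [4, 5, 6]
lhs = fmap(lambda p: (f(p[0]), g(p[1])), mult(xs, ys))
rhs = mult(fmap(f, xs), fmap(g, ys))
assert lhs == rhs  # [(11, '4'), (12, '5'), (13, '6')]
```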
\[lem:pushpast\] For any objects $r,s,t,u$ and 1-morphisms $X: r \to s$, $Y: s \to t$, $Z: t \to u$, the following equations are satisfied: ![image](Figures/svg/monoidalfunctors/pushpast11.png)   =   ![image](Figures/svg/monoidalfunctors/pushpast12.png) & ![image](Figures/svg/monoidalfunctors/pushpast21.png)   =   ![image](Figures/svg/monoidalfunctors/pushpast22.png) We prove the left equation; the right equation is proved similarly. ![image](Figures/svg/monoidalfunctors/pushpastproof1.png)   =   ![image](Figures/svg/monoidalfunctors/pushpastproof2.png)   =   ![image](Figures/svg/monoidalfunctors/pushpastproof3.png)   =   ![image](Figures/svg/monoidalfunctors/pushpastproof4.png) Here the first and third equalities are by invertibility of $m_{X,Y}$, and the second is by coassociativity. With Lemma \[lem:pushpast\], the equations (\[eq:psfctnat\]-\[eq:psfctunital\]) are sufficient to deform functorial boxes topologically as required. From now on we will do this mostly without comment. Pivotal 2-categories -------------------- In a 2-category the most general notion of invertibility of a 1-morphism is *duality*, also known as *adjunction*. \[def:duals\] Let $X: r \to s$ be a 1-morphism in a 2-category. A *right dual* $[X^*,\eta,\epsilon]$ for $X$ is: - A 1-morphism $X^*: s \to r$. - Two 2-morphisms $\eta: \id_s \to X^* \circ X$ and $\epsilon: X \circ X^* \to \id_r$ satisfying the following *snake equations*: \[eq:rightsnakes\] ![image](Figures/svg/2cats/2catsnake11.png)   =   ![image](Figures/svg/2cats/2catsnake12.png) & ![image](Figures/svg/2cats/2catsnake21.png)   =   ![image](Figures/svg/2cats/2catsnake22.png) A *left dual* $[{}^*X,\eta,\epsilon]$ is defined similarly, with 2-morphisms $\eta: \id_r \to X \circ {}^*X$ and $\epsilon: {}^*X \circ X \to \id_s$ satisfying the analogues of the snake equations. We say that a 2-category $\mathcal{C}$ *has right duals* (resp. *has left duals*) if every 1-morphism $X$ in $\mathcal{C}$ has a chosen right dual $[X^*,\eta,\epsilon]$ (resp. 
a chosen left dual). To represent duals in the graphical calculus, we draw an upward-facing arrow on the $X$-wire and a downward-facing arrow on the $X^*$- or ${}^*X$-wire, and write $\eta$ and $\epsilon$ as a cup and a cap, respectively. Then the snake equations become purely topological: ![image](Figures/svg/2cats/2catsnaketop11.png)   =   ![image](Figures/svg/2cats/2catsnaketop12.png)          ![image](Figures/svg/2cats/2catsnaketop21.png)   =   ![image](Figures/svg/2cats/2catsnaketop22.png) && ![image](Figures/svg/2cats/2catsnaketopL11.png)   =   ![image](Figures/svg/2cats/2catsnaketopL12.png)          ![image](Figures/svg/2cats/2catsnaketopL21.png)   =   ![image](Figures/svg/2cats/2catsnaketopL22.png) Since the graphical calculus for 2-categories is just a ‘region-labelled’ version of the graphical calculus for monoidal categories, various statements about duals in monoidal categories immediately generalise to duals in 2-categories. We recall some of these statements now. \[prop:nestedduals\] If $[X^*,\eta_X,\epsilon_X]$ and $[Y^*,\eta_Y,\epsilon_Y]$ are right duals for $X:r \to s$ and $Y: s \to t$ respectively, then $[Y^* \circ X^*, \eta_{X \circ Y},\epsilon_{X \circ Y}]$ is right dual to $X \circ Y$, where $\eta_{X \circ Y}$ and $\epsilon_{X \circ Y}$ are defined by: ![image](Figures/svg/2cats/etatensordual.png) && ![image](Figures/svg/2cats/epsilontensordual.png)\ \[eq:nestedduals\] $$\eta_{X \circ Y} \qquad\qquad \epsilon_{X \circ Y}$$ Moreover, for any object $r$, $[\id_r,\id_{\id_r},\id_{\id_r}]$ is right dual to $\id_r$. Analogous statements hold for left duals. \[prop:relateduals\] Let $X: r \to s$ be a 1-morphism, and let $[X^*,\eta,\epsilon],[X^*{}',\eta',\epsilon']$ be right duals. 
Then there is a unique 2-isomorphism $\alpha: X^* \to X^*{}'$ such that \[eq:relateduals\] ![image](Figures/svg/2cats/dualsisoeta1.png)   =   ![image](Figures/svg/2cats/dualsisoeta2.png) & ![image](Figures/svg/2cats/dualsisoepsilon1.png)   =   ![image](Figures/svg/2cats/dualsisoepsilon2.png) An analogous statement holds for left duals. In a 2-category with duals, we can define a notion of transposition for 2-morphisms. Let $X,Y: r \to s$ be 1-morphisms with chosen right duals $[X^*,\eta_X,\epsilon_X]$ and $[Y^*,\eta_Y,\epsilon_Y]$. For any 2-morphism $f:X \to Y$, we define its *right transpose* $f^*: Y^* \to X^*$ as follows: \[eq:rtranspose\] ![image](Figures/svg/2cats/rtranspose1sq.png)   =   ![image](Figures/svg/2cats/rtranspose2sq.png) For left duals ${}^*X,{}^*Y$, a *left transpose* may be defined analogously. In this work we are mostly interested in categories with compatible left and right duals. Such categories are called *pivotal*. The definition of pivotality requires a notion of monoidal natural isomorphism between pseudofunctors, which we will not introduce until Definition \[def:pnt\]. However, we will not need the full definition until after that point; for now we will only require its consequences. Let $\mathcal{C}$ be a 2-category with right duals. It is straightforward to check that the following defines an identity-on-objects pseudofunctor $\mathcal{C} \to \mathcal{C}$, which we call the *double duals* pseudofunctor: - 1-morphisms $X:r \to s$ are taken to the double dual $X^{**}:=(X^*)^*$. - 2-morphisms $f: X \to Y$ are taken to the double transpose $f^{**}:=(f^*)^*$. - The multiplicators $m_{X,Y}$ and unitors $u_r$ are defined using the isomorphisms of Proposition \[prop:relateduals\]. \[def:pivcat\] We say that a 2-category $\mathcal{C}$ with right duals is *pivotal* if the double duals pseudofunctor is monoidally naturally isomorphic to the identity pseudofunctor. 
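For reference, the unique comparison 2-isomorphism of Proposition \[prop:relateduals\] and the right transpose defined above admit the following symbolic expressions. This is a sketch, assuming vertical composites $\circ_V$ are read in diagrammatic order; the bracketing is immaterial by strictness.

```latex
% Unique 2-isomorphism comparing two right duals
% [X^*, \eta, \epsilon] and [X^*{}', \eta', \epsilon'] of X: r -> s:
\alpha \;=\; (\eta' \circ_H \id_{X^*}) \circ_V (\id_{X^*{}'} \circ_H \epsilon)
  \;:\; X^* \to X^*{}'

% Right transpose (eq:rtranspose) of f: X -> Y:
f^* \;=\; (\eta_X \circ_H \id_{Y^*})
  \circ_V (\id_{X^*} \circ_H f \circ_H \id_{Y^*})
  \circ_V (\id_{X^*} \circ_H \epsilon_Y)
  \;:\; Y^* \to X^*
```

Both expressions are the usual ‘bend with a cup, then close with a cap’ composites that the diagrams depict.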
Roughly, the existence of a monoidal natural isomorphism in Definition \[def:pivcat\] comes down to the following statement: - For every 1-morphism $X: r \to s$, there is a 2-isomorphism $\iota_X: X^{**} \to X$. - These $\{\iota_X\}$ can be chosen compatibly with composition in $\mathcal{C}$. In a pivotal 2-category, for any $X: r \to s$ the right dual $X^*$ is also a left dual for $X$ by the following cup and cap (here we have drawn a double upwards arrow on the double dual): \[eq:pivldualdef\] ![image](Figures/svg/2cats/pivldualcup1.png)   :=   ![image](Figures/svg/2cats/pivldualcup2.png) & ![image](Figures/svg/2cats/pivldualcap1.png)   :=   ![image](Figures/svg/2cats/pivldualcap2.png) With these left duals, the left transpose of a 2-morphism is equal to the right transpose. Whenever we refer to a pivotal 2-category from now on, we suppose that the left duals are chosen in this way. There is a very useful graphical calculus for these compatible dualities in a pivotal 2-category. To represent the transpose, we make our 2-morphism boxes asymmetric by tilting the right vertical edge. We now write the transpose by rotating the boxes, as though we had ‘yanked’ both ends of the wire in the RHS of : ![image](Figures/svg/2cats/rtranspose3.png) := ![image](Figures/svg/2cats/rtranspose1.png) Using this notation, 2-morphisms now freely slide around cups and caps. \[prop:sliding\] Let $\mathcal{C}$ be a pivotal 2-category and $f:X \to Y$ a 2-morphism. 
Then: ![image](Figures/svg/2cats/pivslideldualcup1.png)   =   ![image](Figures/svg/2cats/pivslideldualcup2.png) & ![image](Figures/svg/2cats/pivsliderdualcup1.png)   =   ![image](Figures/svg/2cats/pivsliderdualcup2.png) & ![image](Figures/svg/2cats/pivslideldualcap1.png)   =   ![image](Figures/svg/2cats/pivslideldualcap2.png) & ![image](Figures/svg/2cats/pivsliderdualcap1.png)   =   ![image](Figures/svg/2cats/pivsliderdualcap2.png) The diagrammatic calculus is summarised by the following theorem, which to our knowledge has only been proved in special cases but is almost certainly true. \[thm:graphcalcpiv\] Two diagrams for a 2-morphism in a pivotal 2-category represent the same 2-morphism if there is a planar isotopy between them, which may include sliding of 2-morphisms as in Proposition \[prop:sliding\]. #### Pivotal functors. We now consider pseudofunctors between pivotal 2-categories. We first observe that the duals in $\mathcal{C}$ induce duals in $\mathcal{D}$ under a pseudofunctor $F: \mathcal{C} \to \mathcal{D}$. \[prop:indduals\] Let $X: r \to s$ be a 1-morphism in $\mathcal{C}$ and $[X^*,\eta,\epsilon]$ a right dual. Then $F(X^*)$ is a right dual of $F(X)$ in $\mathcal{D}$ with the following cup and cap: ![image](Figures/svg/2cats/inddualcup.png) & ![image](Figures/svg/2cats/inddualcap.png)\ F() & F() The analogous statement holds for left duals. We show one of the snake equations  in the case of right duals; the others are all proved similarly. ![image](Figures/svg/monoidalfunctors/dualsproof1.png)   =   ![image](Figures/svg/monoidalfunctors/dualsproof2.png)   =   ![image](Figures/svg/monoidalfunctors/dualsproof3.png)   =   ![image](Figures/svg/monoidalfunctors/dualsproof4.png) Here the first equality is by Lemma \[lem:pushpast\], the second by  and the third by . 
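The induced duality data of Proposition \[prop:indduals\] can also be written out as composites. The following is a symbolic sketch of the pictured cup and cap, assuming vertical composites $\circ_V$ are read in diagrammatic order:

```latex
% Induced right duality on F(X), for X: r -> s with right dual
% [X^*, \eta, \epsilon] in C:
\tilde{\eta} \;=\; u_s \circ_V F(\eta) \circ_V m_{X^*\!,\,X}^{-1}
  \;:\; \id_{F(s)} \to F(X^*) \circ F(X)

\tilde{\epsilon} \;=\; m_{X,\,X^*} \circ_V F(\epsilon) \circ_V u_r^{-1}
  \;:\; F(X) \circ F(X^*) \to \id_{F(r)}
```

That is, the cup is $F(\eta)$ bracketed by a unitor and an inverse multiplicator, and the cap is $F(\epsilon)$ bracketed by a multiplicator and an inverse unitor, exactly as the functorial boxes in the diagrams indicate.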
For any 1-morphism $X$ of $\mathcal{C}$, then, we have two sets of left and right duals on $F(X)$; the first from the pivotal structure in $\mathcal{C}$ by Proposition \[prop:indduals\], and the second from the pivotal structure in $\mathcal{D}$. To depict both dualities in the graphical calculus, we here introduce elements of the graphical syntax which allow us to ‘zoom in’ and ‘zoom out’, representing $F(X)$ as a directed coloured wire rather than as a boxed wire: \[eq:zoominout\] ![image](Figures/svg/monoidalfunctors/fXzoomout.png) & ![image](Figures/svg/monoidalfunctors/fXzoomin.png) We emphasise that these elements of the graphical calculus are semantically empty, simply switching between two ways of representing $F(X)$. We can now represent the duality corresponding to the pivotal structure in $\mathcal{D}$ in the usual way on the directed coloured wire, writing $F(X)^*$ and $F(X)^{**}$ with a downwards and a double upwards arrow respectively. We now define a pivotal pseudofunctor. Let $\mathcal{C},\mathcal{D}$ be pivotal 2-categories, and let $F: \mathcal{C} \to \mathcal{D}$ be a pseudofunctor. By Proposition \[prop:relateduals\], for every 1-morphism $X: r \to s$ in $\mathcal{C}$ we obtain two 2-isomorphisms $F_l,F_r: F(X^*) \to F(X)^*$, the first from the left duality and the second from the right duality: \[eq:pivlrisos\] ![image](Figures/svg/2cats/pivfuncisor1.png)   =   ![image](Figures/svg/2cats/pivfuncisor2.png) & ![image](Figures/svg/2cats/pivfuncisol1.png)   =   ![image](Figures/svg/2cats/pivfuncisol2.png) \[def:pivfct\] Let $\mathcal{C},\mathcal{D}$ be pivotal 2-categories, let $F: \mathcal{C} \to \mathcal{D}$ be a pseudofunctor, and let $F_l,F_r: F(X^*) \to F(X)^*$ be the isomorphisms (\[eq:pivlrisos\]). We say that $F$ is *pivotal* if $F_l = F_r=:P$. 
In the graphical calculus we again here write these isomorphisms $P$ and their inverses as ‘zoom ins’ and ‘zoom outs’, which this time are not semantically empty: ![image](Figures/svg/monoidalfunctors/fXdualzoomout.png)   =   ![image](Figures/svg/monoidalfunctors/fXdualzoomoutdef.png) & ![image](Figures/svg/monoidalfunctors/fxdualzoomin.png)   =   ![image](Figures/svg/monoidalfunctors/fXdualzoomindef.png) Pivotal dagger 2-categories --------------------------- The final structure we will consider on a 2-category is a *dagger*. In this section we define a dagger 2-category and discuss compatibility with the various notions already introduced. \[def:dagcat\] A 2-category $\mathcal{C}$ is *dagger* if: - For each pair of objects $r,s$ there is a contravariant identity-on-objects functor $\dagger_{r,s}:\mathcal{C}(r,s) \to \mathcal{C}(r,s)$, which is *involutive*: for any morphism $f: X \to Y$ in $\mathcal{C}(r,s)$, $\dagger_{r,s}(\dagger_{r,s}(f)) = f$. (This is to say that $\mathcal{C}(r,s)$ is a *dagger category*.) - The dagger is compatible with composition of 1-morphisms: for any 1-morphisms $X,X': r \to s$ and $Y,Y': s \to t$, and 2-morphisms $\alpha: X \to X'$ and $\beta: Y \to Y'$ we have $(\alpha \circ_H \beta)^{\dagger_{r,t}} = \alpha^{\dagger_{r,s}} \circ_H \beta^{\dagger_{s,t}}$. We call the image of a 2-morphism $f:X \to Y$ under $\dagger_{r,s}$ its *dagger*, and write it as $f^{\dagger_{r,s}}$. In the graphical calculus, we represent the dagger of a 2-morphism by reflection in a horizontal axis, preserving the direction of any arrows: \[eq:graphcalcdagger\] ![image](Figures/svg/2cats/daggerflip2.png)   :=   ![image](Figures/svg/2cats/daggerflip1.png) Let $\mathcal{C}$ be a dagger 2-category. We say that a 2-morphism $\alpha: X \to Y$ in $\mathcal{C}(r,s)$ is an *isometry* if $\alpha \circ_V \alpha^{\dagger_{r,s}} = \id_X$. We say that it is *unitary* if it is an isometry and additionally $\alpha^{\dagger_{r,s}} \circ_V \alpha = \id_Y$. 
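For reference, the axioms of Definition \[def:dagcat\] read as follows in symbols, for 2-morphisms $f: X \to Y$, $\alpha: X \to Y$, $\beta: Y \to Z$ (suppressing the subscripts on $\dagger$, and reading $\circ_V$ in diagrammatic order, consistent with $\alpha \circ_V \alpha^{\dagger} = \id_X$ for an isometry):

```latex
% Involutivity, identity preservation, contravariance, and
% compatibility with horizontal composition:
(f^{\dagger})^{\dagger} = f, \qquad
\id_X^{\dagger} = \id_X, \qquad
(\alpha \circ_V \beta)^{\dagger} = \beta^{\dagger} \circ_V \alpha^{\dagger}, \qquad
(\alpha \circ_H \beta)^{\dagger} = \alpha^{\dagger} \circ_H \beta^{\dagger}
```

The first three conditions say each hom-category is a dagger category; the last is the compatibility with composition of 1-morphisms stated above.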
\[def:daggerequiv\] Let $\mathcal{C}$ be a dagger 2-category and let $r,s$ be objects. We say that a 1-morphism $X: r \to s$ is a *dagger equivalence* if it is an equivalence (Definition \[def:equiv\]) and the invertible 2-morphisms $\alpha: \id_r \to X \circ X^{-1}$ and $\beta: \id_s \to X^{-1} \circ X$ are unitary. We now give the condition for compatibility of dagger and pivotal structure. \[def:pivdagcat\] Let $\mathcal{C}$ be a pivotal 2-category which is also a dagger 2-category. We say that $\mathcal{C}$ is *pivotal dagger* when, for all 1-morphisms $X: r \to s$: ![image](Figures/svg/2cats/dagdualrcup.png)   =   ( ![image](Figures/svg/2cats/dagdualrcupflip.png))\^ & ![image](Figures/svg/2cats/dagduallcap.png)   =   (![image](Figures/svg/2cats/dagduallcapflip.png))\^ Clearly Definition \[def:pivdagcat\] implies compatibility between the graphical calculus of the duality and the graphical calculus of the dagger. Finally, we consider the right notion of a pseudofunctor between dagger 2-categories. Let $\mathcal{C},\mathcal{D}$ be dagger 2-categories and let $F: \mathcal{C} \to \mathcal{D}$ be a pseudofunctor. We say that $F$ is *unitary* if the following hold: - For any 2-morphism $f$, $F(f^{\dagger}) = F(f)^{\dagger}$: ![image](Figures/svg/monoidalfunctors/unitaryfunctor.pdf) - The multiplicators $\{m_{X,Y}\}$ and unitors $\{u_r\}$ are all unitary 2-morphisms in $\mathcal{D}$. The latter condition implies that our depiction of the inverses $\{m^{-1}_{X,Y}\}$ and $\{u_r^{-1}\}$ by reflection in a horizontal axis (\[eq:multiplicator\], \[eq:unitor\]) is consistent with the diagrammatic calculus of the dagger in $\mathcal{D}$. Pseudonatural transformations {#sec:pnts} ============================= Having run through the necessary background on 2-category theory, we recall the definition of a pseudonatural transformation between pseudofunctors [@Leinster1998]. 
\[def:pnt\] Let $\mathcal{C},\mathcal{D}$ be 2-categories, and let $F,G: \mathcal{C} \to \mathcal{D}$ be pseudofunctors (depicted by blue and red boxes respectively). A *pseudonatural transformation* $\alpha: F \to G$ is defined by the following data: - For every object $r$ of $\mathcal{C}$, a 1-morphism $\alpha_r:F(r) \to G(r)$ of $\mathcal{D}$ (drawn as a green wire). - For every 1-morphism $X:r \to s$ of $\mathcal{C}$, a 2-morphism $\alpha_X: F(X) \circ \alpha_s \to \alpha_r \circ G(X)$ (drawn as a white vertex): \[eq:pntdef\] ![image](Figures/svg/biunitarynattransfs/psnattransfdef.png) The 2-morphisms $\alpha_X$ must satisfy the following conditions: - *Naturality*. For every 2-morphism $f: X \to Y$ in $\mathcal{C}$: \[eq:pntnat\] ![image](Figures/svg/biunitarynattransfs/nat1.png)   =   ![image](Figures/svg/biunitarynattransfs/nat2.png) - *Monoidality.* - For every pair of 1-morphisms $X: r \to s, Y: s \to t$ in $\mathcal{C}$: \[eq:pntmon\] ![image](Figures/svg/biunitarynattransfs/monoidal1.png)   =   ![image](Figures/svg/biunitarynattransfs/monoidal2.png) - For every object $r$ of $\mathcal{C}$: \[eq:pntmonunit\] ![image](Figures/svg/biunitarynattransfs/unitality1.png)   =   ![image](Figures/svg/biunitarynattransfs/unitality2.png) (Equation \[eq:pntmon\] already implies the analogous pullthroughs for the comultiplicators $\{m_{X,Y}^{-1}\}$.) If $\alpha_r = \id_{F(r)}$ for every object $r$ of $\mathcal{C}$, we say that $\alpha$ is a *monoidal natural transformation*. (Definition \[def:pivcat\] is now complete.) The diagrammatic calculus shows that pseudonatural transformation is a planar notion. The $\{\alpha_r\}$-labelled wire (the ‘$\alpha$-wire’) forms a boundary between two regions of the $\mathcal{D}$-plane, one in the image of $F$ and the other in the image of $G$. By pulling through the $\alpha$-wire, 2-morphisms from $\mathcal{C}$ can move between the two regions. Pseudonatural transformations $\alpha: F \to G$ and $\beta: G \to H$ can be composed associatively. 
We define $\alpha \circ \beta: F \to H$ as follows. - For every object $r$ of $\mathcal{C}$, $(\alpha \circ \beta)_r := \alpha_r \circ \beta_r$. - For any 1-morphism $X: r \to s$ of $\mathcal{C}$, $(\alpha \circ \beta)_X$ is defined as the following composite (we colour the $\beta$-wire orange, and the $H$-box brown): ![image](Figures/svg/biunitarynattransfs/comptransfs.png) There are also morphisms between pseudonatural transformations, known as *modifications* [@Leinster1998]. Let $\alpha, \beta: F \Rightarrow G$ be pseudonatural transformations between pseudofunctors $F,G: \mathcal{C} \to \mathcal{D}$. (We colour the $\alpha$-wire green and the $\beta$-wire orange.) A *modification* $f: \alpha \to \beta$ is defined by the following data: - For every object $r$ of $\mathcal{C}$, a 2-morphism $f_r: \alpha_r \to \beta_r$ in $\mathcal{D}$, such that the 2-morphisms $\{f_r\}$ satisfy the following equation for all 1-morphisms $X:r \to s$ in $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/modification1.png)   =   ![image](Figures/svg/biunitarynattransfs/modification2.png) Modifications can themselves be composed horizontally and vertically in an obvious way. Altogether, this compositional structure is again a 2-category. Let $\mathcal{C},\mathcal{D}$ be 2-categories. The 2-category $\Fun(\mathcal{C},\mathcal{D})$ is defined as follows: - Objects: pseudofunctors $F,G,\dots: \mathcal{C} \to \mathcal{D}$. - 1-morphisms: pseudonatural transformations $\alpha,\beta,\dots: F \to G$. - 2-morphisms: modifications $f, g,\dots: \alpha \to \beta$. Because we are able to assume that $\mathcal{C}$ and $\mathcal{D}$ are strict, $\Fun(\mathcal{C},\mathcal{D})$ is also strict. Dualisable pseudonatural transformations {#sec:duals} ======================================== Duals ----- Pseudonatural transformations categorify natural transformations. We now consider the categorification of natural isomorphisms. 
As we saw in Definition \[def:duals\], the most general notion of invertibility in a 2-category is dualisability. This unpacks as follows in $\Fun(\mathcal{C},\mathcal{D})$. \[def:dualpnt\] Let $F, G: \mathcal{C} \to \mathcal{D}$ be pseudofunctors and $\alpha: F \to G$ a pseudonatural transformation. A *right dual* for $\alpha$ is a triple $[\alpha^*, \eta, \epsilon]$, where $\alpha^*: G \to F$ is a pseudonatural transformation and $\eta, \epsilon$ are modifications ![image](Figures/svg/biunitarynattransfs/cupmodepsilon.png) & ![image](Figures/svg/biunitarynattransfs/cupmodnu.png)\ $$\epsilon: \alpha \circ \alpha^* \to \id_{F} \qquad\qquad \eta: \id_{G} \to \alpha^* \circ \alpha$$ such that the following equations hold for any 1-morphism $X: r \to s$ in $\mathcal{C}$: \[eq:rightmodsnakes\] ![image](Figures/svg/biunitarynattransfs/rightmodsnake11.png)   =   ![image](Figures/svg/biunitarynattransfs/rightmodsnake22.png) & ![image](Figures/svg/biunitarynattransfs/rightmodsnake21.png)   =   ![image](Figures/svg/biunitarynattransfs/rightmodsnake12.png) In the above equations we have drawn the $\alpha$-wire in green with an upwards-facing arrow and the $\alpha^*$-wire in green with a downwards-facing arrow, as though $\alpha_r$ and $\alpha_r^*$ were dual 1-morphisms. This will be justified by Lemma \[lem:pntduals\]. A *left dual* is defined analogously. \[lem:pntduals\] Let $F,G: \mathcal{C} \to \mathcal{D}$ be pseudofunctors and $\alpha: F \to G$ a pseudonatural transformation with right dual $[\alpha^*,\eta,\epsilon]$. Then for each object $r$ of $\mathcal{C}$, $[\alpha^*_r,\eta_r,\epsilon_r]$ is a right dual for $\alpha_r$ in $\mathcal{D}$. The analogous statement holds for left duals. We prove the right snake equation for right duals; everything else may be proved similarly. 
For any object $r$ of $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/adualproofm2.png)   =   ![image](Figures/svg/biunitarynattransfs/adualproofm1.png)   =   ![image](Figures/svg/biunitarynattransfs/adualproof0.png)   =   ![image](Figures/svg/biunitarynattransfs/adualproof1.png)   =   ![image](Figures/svg/biunitarynattransfs/adualproof2.png)   =   ![image](Figures/svg/biunitarynattransfs/adualproof3.png) Here the first equation is by invertibility of the unitor $u_r$  for $F$; the second by monoidality  of the pseudonatural transformation $\alpha$ on the 1-morphism $\id_r: r \to r$ and invertibility of the unitor for $G$; the third by ; the fourth by monoidality of $\alpha$ and $\alpha^*$ on $\id_r$; and the last by invertibility of the unitor $u_r$. From this point forward, therefore, we will draw $\eta_r$ and $\epsilon_r$ as a cup and cap. From the perspective of the graphical calculus, dualisability of a pseudonatural transformation $\alpha$ corresponds to topological deformability of the $\alpha$-wire boundary between the $F$- and $G-$ regions of the $\mathcal{D}$-plane. If $\mathcal{C}$ has duals, we obtain explicit expressions for the left and right duals in $\Fun(\mathcal{C},\mathcal{D})$ whenever they exist. \[thm:pntduals\] Let $F, G: \mathcal{C} \to \mathcal{D}$ be pseudofunctors, and suppose that $\mathcal{C}$ has left duals. A pseudonatural transformation $\alpha: F \to G$ has a right dual in $\Fun(\mathcal{C},\mathcal{D})$ precisely when $\alpha_r$ has some right dual $[\alpha_r^*,\eta_r, \epsilon_r]$ in $\mathcal{D}$ for each object $r$ of $\mathcal{C}$. The right dual $\alpha^*$ is defined as follows: - For each object $r$ of $\mathcal{C}$, $(\alpha^*)_r = (\alpha_r)^*$ and the components of the modifications $\eta, \epsilon$ are $[\eta_r,\epsilon_r]$. 
- For each 1-morphism $X: r \to s$ of $\mathcal{C}$, $(\alpha^*)_X$ is: \[eq:dualpnt\] ![image](Figures/svg/biunitarynattransfs/rightdualtransfdef.png) := ![image](Figures/svg/biunitarynattransfs/rightdualtransfdef2.png) This statement also holds with ‘left’ and ‘right’ swapped, in which case the left dual ${}^*\alpha$ is defined as follows: - For each object $r$ of $\mathcal{C}$, $({}^*\alpha)_r = {}^*(\alpha_r)$ and the components of the modifications $\eta, \epsilon$ are $[\eta_r,\epsilon_r]$. - For each 1-morphism $X: r \to s$ of $\mathcal{C}$, $({}^*\alpha)_X$ is defined as in , but with the opposite transposition. We consider the case of the right dual $\alpha^*$; the argument for the left dual is similar. If some $\alpha_r$ has no right dual, then nor can $\alpha$ by Lemma \[lem:pntduals\]. If every $\alpha_r$ has some right dual, then we must show firstly that $\alpha^*$ as defined is a pseudonatural transformation, and secondly that $\eta, \epsilon$ as defined are modifications satisfying the snake equations . 1. *Naturality of $\alpha^*$. * For all 2-morphisms $f: X \to Y$ in $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/rightdualtransfnat1.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualtransfnat2.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualtransfnat3.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualtransfnat4.png) Here the first and third equalities use the sliding notation of Proposition \[prop:sliding\] for the left transpose; the second equality is by naturality of $\alpha$ on $f^{T}:{}^*Y \to {}^*X$. 2. 
*Monoidality of $\alpha^*$.* (\[eq:pntmon\]-\[eq:pntmonunit\]) - For every pair of 1-morphisms $X:r \to s, Y: s \to t$ in $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/alphadualmonoidality1.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidality2.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidality3.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidality4.png)\ \[eq:pntdualmonpf\]   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidality5.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidality6.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidality7.png) Here the first equality is by definition; the second by a snake equation for $\alpha_s$; the third by monoidality of $\alpha$ and some manipulation of functorial boxes; the fourth by Propositions \[prop:nestedduals\] and \[prop:relateduals\], where $f$ is the isomorphism between ${}^*Y \circ {}^*X$ and the chosen left dual ${}^*(X \circ Y)$ in $\mathcal{C}$; the fifth by naturality of $\alpha$; and the sixth by definition. - For every object $r$ of $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/alphadualmonoidalityunit1.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidalityunit2.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidalityunit3.png)   =   ![image](Figures/svg/biunitarynattransfs/alphadualmonoidalityunit4.png) Here the first equality is by definition, the second by monoidality of $\alpha$, and the third by a snake equation for $\alpha_r$. We have assumed here that the chosen left dual of $\id_r$ is $[\id_r,\id_{\id_r},\id_{\id_r}]$; in general one can use Proposition \[prop:relateduals\] and naturality of $\alpha$ as in (\[eq:pntdualmonpf\]). 3. Since $\eta_r,\epsilon_r$ already satisfy the snake equations for every $r$ by assumption, we need only show that $\eta, \epsilon$ are modifications. 
For all $X: r \to s$ in $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/rightdualduality1.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualduality2.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualduality3.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualduality4.png) ![image](Figures/svg/biunitarynattransfs/rightdualduality21.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualduality22.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualduality23.png)   =   ![image](Figures/svg/biunitarynattransfs/rightdualduality24.png) Here, the first equalities are by definition, the second are by a snake equation for $\alpha_r^*$ or $\alpha_s^*$, and the third are by naturality and monoidality of $\alpha$. \[cor:dualsexistence\] If $\mathcal{C}$ has left duals, and $\mathcal{D}$ has right duals, then $\Fun(\mathcal{C},\mathcal{D})$ has right duals. This statement also holds with ‘left’ and ‘right’ swapped. It is well-known that a monoidal natural transformation between monoidal functors from a monoidal category with duals is invertible. Theorem \[thm:pntduals\] generalises this result. Indeed, if the objects $\alpha_r$ are all identity morphisms, then the cup and cap are trivial and the dual is simply a strict inverse. Pivotality ---------- We have seen that, for a pseudonatural transformation $\alpha: F \to G$, the $\alpha$-wire forms a boundary between a region in the image of $F$ and a region in the image of $G$, and dualisability corresponds to topological deformation of this boundary. To freely deform the boundary in a coherent way, we would like $\Fun(\mathcal{C},\mathcal{D})$ to be pivotal. We recall that a 2-category with right duals is *pivotal* (Definition \[def:pivcat\]) if there is a monoidal natural isomorphism (Definition \[def:pnt\]) from the double duals pseudofunctor to the identity pseudofunctor. 
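Unpacking Definition \[def:pnt\] in the case of identity components, such a monoidal natural isomorphism amounts to the following data and equations. This is a symbolic sketch, reading $\circ_V$ in diagrammatic order and writing $m^{**}$ for the multiplicator of the double duals pseudofunctor (built from Propositions \[prop:nestedduals\] and \[prop:relateduals\]):

```latex
% Components: for each 1-morphism X, a 2-isomorphism
\iota_X : X^{**} \to X

% Naturality: for each 2-morphism f: X -> Y
f^{**} \circ_V \iota_Y \;=\; \iota_X \circ_V f

% Monoidality: for composable X: r -> s, Y: s -> t
m^{**}_{X,Y} \circ_V \iota_{X \circ Y} \;=\; \iota_X \circ_H \iota_Y
```

This is the precise form of the ‘rough’ description given earlier: 2-isomorphisms $\iota_X: X^{**} \to X$ chosen compatibly with composition.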
We now show that $\Fun(\mathcal{C},\mathcal{D})$ inherits pivotality from $\mathcal{C}$ and $\mathcal{D}$ upon restriction to pivotal pseudofunctors. When $\mathcal{C},\mathcal{D}$ are pivotal we define $\Fun_p(\mathcal{C},\mathcal{D}) \subset \Fun(\mathcal{C},\mathcal{D})$ to be the subcategory whose objects are pivotal pseudofunctors. \[thm:pivinduced\] Let $\mathcal{C}, \mathcal{D}$ be pivotal 2-categories, and let $\iota: {**}_{\mathcal{D}} \to \id_{\mathcal{D}}$ be the pivotal structure on $\mathcal{D}$. Then the 2-category $\Fun_p(\mathcal{C},\mathcal{D})$ is itself a pivotal 2-category. The monoidal natural transformation $\hat{\iota}: {**}_{\Fun(\mathcal{C},\mathcal{D})} \to \id_{\Fun(\mathcal{C},\mathcal{D})}$ assigns to every pseudonatural transformation $\alpha^{**}: F \to G$ the invertible modification $\hat{\iota}_{\alpha}: \alpha^{**} \to \alpha$ whose components are the 2-isomorphisms $\iota_{\alpha_r}: \alpha_r^{**} \to \alpha_r$ from the pivotal structure on $\mathcal{D}$. First we show that the $\hat{\iota}_{\alpha}$ are really modifications. Since $\{\iota_{\alpha_r}\}$ are 2-isomorphisms it is immediate that the *$\hat{\iota}_{\alpha}$-conjugate* $(\alpha^{**})^{\hat{\iota}_{\alpha}}$ of $\alpha^{**}$ is a pseudonatural transformation $F \to G$, where $(\alpha^{**})^{\hat{\iota}_{\alpha}}_r = \alpha_r$ for all objects $r$ of $\mathcal{C}$, and $(\alpha^{**})^{\hat{\iota}_{\alpha}}_X$ is defined as follows for all $X: r \to s$: \[eq:conjpnt\] ![image](Figures/svg/biunitarynattransfs/pivalphaconjdef.png) It is also clear that $\hat{\iota}_{\alpha}$ is a modification $\alpha^{**} \to (\alpha^{**})^{\hat{\iota}_{\alpha}}$. We now show that $\hat{\iota}_{\alpha}$ has the right target, i.e. $(\alpha^{**})^{\hat{\iota}_{\alpha}} = \alpha$. 
We first observe that the left dual of a pseudonatural transformation between pivotal functors is identical to its right dual: \[eq:ldualisrdual\] ![image](Figures/svg/biunitarynattransfs/pivrdualisldual1.png)   =   ![image](Figures/svg/biunitarynattransfs/pivrdualisldual2.png)   =   ![image](Figures/svg/biunitarynattransfs/pivrdualisldual3.png)   =   ![image](Figures/svg/biunitarynattransfs/pivrdualisldual4.png) Here for the first and third equalities we used Proposition \[prop:relateduals\] and the ‘zoom out’ notation  to relate the duals in $\mathcal{C}$ and $\mathcal{D}$, and for the second we used the graphical calculus of the pivotal 2-category $\mathcal{D}$ (Theorem \[thm:graphcalcpiv\]) to deform the diagram around the morphism in the dashed box. For the third equality we require that the pseudofunctors are pivotal. Now for any $\alpha: F \to G$ and $X: r \to s$ in $\mathcal{C}$ we have: ![image](Figures/svg/biunitarynattransfs/pivddual1morph0.png)   =   ![image](Figures/svg/biunitarynattransfs/pivddual1morph1.png)   =   ![image](Figures/svg/biunitarynattransfs/pivddual1morph2.png)\   = ![image](Figures/svg/biunitarynattransfs/pivddual1morph22.png)   =   ![image](Figures/svg/biunitarynattransfs/pivddual1morph23.png)   =   ![image](Figures/svg/biunitarynattransfs/pivddual1morph3.png)\   =   ![image](Figures/svg/biunitarynattransfs/pivddual1morph4.png) Here the first equality is by definition; the second uses ; the third uses the definition  of the left duality in the pivotal 2-category $\mathcal{D}$; the fourth uses naturality of $\alpha$ to insert $\iota \iota^{-1}$, where $\iota: X^{**} \to X$ is the isomorphism from the pivotal structure in $\mathcal{C}$; the fifth uses the definition  of the left duality in $\mathcal{C}$; and the last uses the snake equations in $\mathcal{C}$ and $\mathcal{D}$. Finally, we need to show that $\hat{\iota}$ is a monoidal natural transformation ${**}_{\Fun(\mathcal{C},\mathcal{D})} \to \id_{\Fun(\mathcal{C},\mathcal{D})}$. 
- *Monoidality*: For every pair of pseudonatural transformations $\alpha: F \to G$, $\beta: G \to H$, we need $\hat{\iota}_{\alpha \circ \beta} = \hat{\iota}_{\alpha} \circ_H \hat{\iota}_{\beta}$. For each object $r$ this is implied by monoidality of $\iota: **_{\mathcal{D}} \to \id_{\mathcal{D}}$. - *Naturality*: We need that, for every modification $f: \alpha \to \beta$, $\hat{\iota}_{\beta} \circ_V f^{**} = f \circ_V \hat{\iota}_{\alpha}$. For each object $r$ this is implied by naturality of $\iota: **_{\mathcal{D}} \to \id_{\mathcal{D}}$. Unitary pseudonatural transformations {#sec:unitary} ===================================== We have considered the case where $\mathcal{C},\mathcal{D}$ are pivotal. We now consider the case where $\mathcal{C},\mathcal{D}$ are pivotal dagger and the pseudofunctors are unitary. In this case, we get a new contravariant operation on pseudonatural transformations. Let $F,G: \mathcal{C} \to \mathcal{D}$ be unitary pseudofunctors between pivotal dagger 2-categories. Then for any pseudonatural transformation $\alpha: F \to G$, its *dagger* $\alpha^{\dagger}: G \to F$, defined componentwise for each $X: r \to s$ in $\mathcal{C}$ as \[eq:daggerpnt\] ![image](Figures/svg/biunitarynattransfs/pntdagger.png) is also a pseudonatural transformation. We must show naturality and monoidality. - *Naturality.* For any $f: X\to Y$ in $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/pntdaggernat1.png)   =   ![image](Figures/svg/biunitarynattransfs/pntdaggernat2.png)   =   ![image](Figures/svg/biunitarynattransfs/pntdaggernat3.png)   =   ![image](Figures/svg/biunitarynattransfs/pntdaggernat4.png) Here the first equality is by unitarity of $G$, the second equality is by naturality of $\alpha$, and the third equality is by unitarity of $F$. 
- *Monoidality.* For any $X: r \to s, Y: s \to t$ in $\mathcal{C}$: ![image](Figures/svg/biunitarynattransfs/pntdaggermon1.png)   =   ![image](Figures/svg/biunitarynattransfs/pntdaggermon2.png)   =   ![image](Figures/svg/biunitarynattransfs/pntdaggermon3.png)\   =   ![image](Figures/svg/biunitarynattransfs/pntdaggermon4.png)   =   ![image](Figures/svg/biunitarynattransfs/pntdaggermon5.png) Here the first and second equalities are by dagger pivotality of $\mathcal{D}$, the third equality is by monoidality of $\alpha$, and the fourth equality is by unitarity of $F,G$ and dagger pivotality of $\mathcal{D}$. We leave the other monoidality condition  to the reader. We would like $\Fun(\mathcal{C},\mathcal{D})$ to inherit the structure of a dagger 2-category. In general, however, there is no reason why the componentwise dagger of a modification $f: \alpha \to \beta$ — the only reasonable candidate for a dagger on $\Fun(\mathcal{C},\mathcal{D})$ — should yield a modification $f^{\dagger}: \beta \to \alpha$. This problem is resolved by restriction to ‘unitary’ pseudonatural transformations. There are two obvious ways to define unitarity. First, given that the dual is the ‘inverse’ of a pseudonatural transformation, we could ask that the dagger  of the transformation should be equal to the right dual . Alternatively, by analogy with the definition of unitary monoidal natural transformations, and motivated by physicality in quantum mechanics [@Vicary2012], we might demand that the components of the transformation be individually unitary in $\mathcal{D}$. In fact, these definitions are equivalent. \[prop:unitaritydefsequiv\] Let $\mathcal{C},\mathcal{D}$ be pivotal dagger 2-categories and let $\alpha: F \to G$ be a pseudonatural transformation between functors $F,G: \mathcal{C} \to \mathcal{D}$. The following are equivalent: 1. There is an equality of pseudonatural transformations $\alpha^* = \alpha^{\dagger}$. 2. 
For all 1-morphisms $X: r \to s$ in $\mathcal{C}$, the component $\alpha_X: F(X) \circ \alpha_r \to \alpha_s \circ G(X)$ is unitary. \(i) $\Rightarrow$ (ii): For all $X: r \to s$ in $\mathcal{C}$, unitarity of $\alpha_X$ follows from right duality: ![image](Figures/svg/biunitarynattransfs/unitarytounitarycomps1.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarytounitarycomps2.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarytounitarycomps3.png) & ![image](Figures/svg/biunitarynattransfs/unitarytounitarycomps21.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarytounitarycomps22.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarytounitarycomps23.png) \(ii) $\Rightarrow$ (i): Unitarity of the components implies that $[\alpha^{\dagger},\eta,\epsilon]$ is a right dual, where $\eta,\epsilon$ are the cup and cap of the right dual $[\alpha^*,\eta,\epsilon]$, since for each component: ![image](Figures/svg/biunitarynattransfs/unitarycompstounitarydual11.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarycompstounitarydual12.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarycompstounitarydual13.png) && ![image](Figures/svg/biunitarynattransfs/unitarycompstounitarydual21.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarycompstounitarydual22.png)   =   ![image](Figures/svg/biunitarynattransfs/unitarycompstounitarydual23.png) But this implies the equality $\alpha^{\dagger} = \alpha^{*}$: since the cup and cap modifications are identical, the unique 2-isomorphism of Proposition \[prop:relateduals\] relating the two right duals in $\Fun(\mathcal{C},\mathcal{D})$ must be the identity. We therefore make the following definition. Let $\mathcal{C},\mathcal{D}$ be pivotal dagger 2-categories and let $F,G: \mathcal{C} \to \mathcal{D}$ be unitary pseudofunctors.
Then a *unitary pseudonatural transformation (UPT)* $\alpha: F \to G$ is a pseudonatural transformation such that either of the following equivalent conditions is satisfied: - There is an equality of pseudonatural transformations $\alpha^* = \alpha^{\dagger}$. - For all 1-morphisms $X: r \to s$ in $\mathcal{C}$, the component $\alpha_X: F(X) \circ \alpha_r \to \alpha_s \circ G(X)$ is unitary. When $\mathcal{C},\mathcal{D}$ are pivotal dagger we restrict the 1-morphisms of $\Fun(\mathcal{C},\mathcal{D})$ and $\Fun_p(\mathcal{C},\mathcal{D})$ to UPTs. Following this restriction, $\Fun(\mathcal{C},\mathcal{D})$ indeed becomes a dagger 2-category. \[thm:funcddagger\] Let $\mathcal{C},\mathcal{D}$ be pivotal dagger 2-categories. Then the 2-category $\Fun(\mathcal{C},\mathcal{D})$ is dagger, where the dagger of a modification $f: \alpha \to \beta$ is defined on components as $(f^{\dagger})_r = (f_r)^{\dagger}$. Moreover, $\Fun_p(\mathcal{C},\mathcal{D})$ is pivotal dagger. We first show that $f^{\dagger}$ is a modification $\beta \to \alpha$: ![image](Figures/svg/biunitarynattransfs/daggermod1.png)   =   ![image](Figures/svg/biunitarynattransfs/daggermod2.png)   =   ![image](Figures/svg/biunitarynattransfs/daggermod3.png)   =   ![image](Figures/svg/biunitarynattransfs/daggermod4.png)\   =   ![image](Figures/svg/biunitarynattransfs/daggermod5.png)   =   ![image](Figures/svg/biunitarynattransfs/daggermod6.png) Here the second equality is by unitarity of $\alpha$, and the fourth equality is by transposition in $\Fun(\mathcal{C},\mathcal{D})$. For the last statement we must show that the duals of $\Fun_p(\mathcal{C},\mathcal{D})$ are dagger duals. This follows from the fact that the dagger of a modification is taken componentwise, and the cup and cap for each component come from the pivotal dagger structure in $\mathcal{D}$. [^1]: For 1-morphisms, $X \circ Y$ is ‘$X$ followed by $Y$’ rather than ‘$Y$ followed by $X$’. [^2]: I.e.
invertible in the Hom-categories $\mathcal{C}(r,s)$ and $\mathcal{C}(s,r)$. We sometimes call an invertible 2-morphism a *2-isomorphism*.
--- abstract: 'This sequential technical report extends some of the previous results we posted at arXiv:1306.0225.' author: - | Qing Hui and Haopeng Zhang\ [^1] bibliography: - 'Reference.bib' title: 'Semistability-Based Convergence Analysis for Paracontracting Multiagent Coordination Optimization' --- Introduction ============ Recently we proposed a new class of swarm optimization algorithms, called Multiagent Coordination Optimization (MCO) [@ZH:CEC:2013; @ZH:CASE:2013; @HZ:TR:2013], inspired by swarm intelligence and by consensus protocols for multiagent coordination in [@HHB:TAC:2008; @HHB:TAC:2009; @Hui:IJC:2010; @HH:AUT:2008]. MCO is an optimization technique based not only on swarm intelligence [@BDT:1999], which simulates bio-inspired behavior, but also on cooperative control of autonomous agents. The MCO algorithm starts with a set of random solutions for agents which can communicate with each other. The agents then move through the solution space based on the evaluation of their cost functional and on neighbor-to-neighbor rules like multiagent consensus protocols [@HHB:TAC:2008; @HHB:TAC:2009; @HH:AUT:2008; @Hui:IJC:2010; @Hui:TAC:2011; @Hui:AUT:2011]. Detailed convergence analysis for MCO was conducted in the companion report [@HZ:TR:2013]. In this sequential report, we first propose a *paracontraction*-based [@EKN:LAA:1990] MCO algorithm and then implement it in parallel by introducing the MATLAB built-in function $\mathtt{parfor}$ into the algorithm. We then rigorously analyze the global convergence of the paracontracting MCO algorithm by means of *semistability theory* [@HHB:TAC:2008; @Hui:TAC:2013]. This sequential report can be viewed as an addendum to the companion report [@HZ:TR:2013].
Mathematical Preliminaries {#mp} ========================== Graphs ------ Let $\mathbb{R}$ denote the set of real numbers and $\mathbb{R}^{n\times n}$ denote the set of $n\times n$ real matrices. In this sequential report, we use algebraic graph notation to describe our paracontracting MCO algorithm. More specifically, let $\mathcal{G}(t)= (\mathcal{V}, \mathcal{E}(t), \mathcal{A}(t))$ denote a *node-fixed dynamic directed graph* (or *node-fixed dynamic digraph*) with vertex set $\mathcal{V}= \{v_1,v_2,\ldots,v_{n}\}$ and edge set $\mathcal{E}(t)\subseteq \mathcal{V}\times \mathcal{V}$, where $t\in\overline{\mathbb{Z}}_{+}=\{0,1,2,\ldots\}$. The time-varying matrix $\mathcal{A}(t)\in\mathbb{R}^{n\times n}$ with nonnegative adjacency elements $a_{i,j}(t)$ serves as the weighted adjacency matrix. The node index set of $\mathcal{G}(t)$ is denoted by the finite index set $\mathcal{N}=\{1,2,\ldots,n\}$. An edge of $\mathcal{G}(t)$ is denoted by $e_{i,j}(t)=(v_i,v_j)$, and the adjacency elements associated with the edges are positive. We assume $e_{i,j}(t)\in \mathcal{E}(t)\Leftrightarrow a_{i,j}(t)=1$ and $a_{i,i}(t)=0$ for all $i\in \mathcal{N}$. The set of neighbors of the node $v_i$ is denoted by $\mathcal{N}^{i}(t)=\{v_j \in \mathcal {V}:(v_i,v_j)\in \mathcal {E}(t), j=1,2, \ldots, |\mathcal{N}|, j\not = i\}$, where $|\mathcal{N}|$ denotes the cardinality of $\mathcal{N}$. The degree matrix of a node-fixed dynamic digraph $\mathcal{G}(t)$ is defined as $\Delta(t) =[\delta_{i,j}(t)]_{i,j=1,2,\ldots,|\mathcal{N}|}$, where $$\begin{aligned} \delta_{i,j}(t)=\left\{ \begin{array}{ll} \sum_{k=1}^{|\mathcal{N}|} a_{i,k}(t), & \hbox{if $i=j$,} \\ 0, & \hbox{if $i\neq j$.} \end{array} \right.\end{aligned}$$ The *Laplacian matrix* of the node-fixed dynamic digraph $\mathcal{G}(t)$ is defined by $L(t)=\Delta(t) - \mathcal {A}(t)$.
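For concreteness, the degree and Laplacian matrices above can be computed as follows (a minimal sketch in Python; `numpy` and all names here are our own, not from the report):

```python
import numpy as np

def degree_and_laplacian(A):
    """Return the degree matrix Delta and the Laplacian L = Delta - A
    for a 0/1 adjacency matrix A with zero diagonal, as defined above."""
    A = np.asarray(A, dtype=float)
    Delta = np.diag(A.sum(axis=1))  # delta_ii = sum_k a_ik, zero off-diagonal
    return Delta, Delta - A

# Digraph on 3 nodes with edges (v1, v2), (v2, v3), (v3, v1)
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
Delta, L = degree_and_laplacian(A)
```

Each row of $L$ sums to zero by construction, which is the property exploited by the consensus terms later in the report.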
If $L(t)=L^{\mathrm{T}}(t)$, then $\mathcal{G}(t)$ is called a *node-fixed dynamic undirected graph* (or simply *node-fixed dynamic graph*). If there is a path from any node to any other node in a node-fixed dynamic digraph, then we call the dynamic digraph *strongly connected*. Analogously, if there is a path from any node to any other node in a node-fixed dynamic graph, then we call the dynamic graph *connected*. From now on we use short notations $L_{t},\mathcal{G}_{t},\mathcal{N}^{i}_{t}$ to denote $L(t),\mathcal{G}(t),\mathcal{N}^{i}(t)$, respectively. Paracontraction --------------- Paracontraction is a nonexpansive property for a class of linear operators which can be used to guarantee convergence of linear iterations [@EKN:LAA:1990]. The following definition due to [@EKN:LAA:1990] gives the notion of paracontracting matrices. Let $\mathbb{R}^{n}$ denote the set of $n$-dimensional real column vectors and $W\in\mathbb{R}^{n\times n}$. $W$ is called *paracontracting* if for any $x\in\mathbb{R}^{n}$, $Wx\neq x$ is equivalent to $\|Wx\|<\|x\|$, where $\|\cdot\|$ denotes the 2-norm in $\mathbb{R}^{n}$. Recall from [@Bernstein:2009; @HCH:2009; @Hui:TAC:2013] that a matrix $A\in\mathbb{R}^{n\times n}$ is called *discrete-time semistable* if ${\rm{spec}}(A)\subseteq\{s\in\mathbb{C}:|s|<1\}\cup\{1\}$, and if $1\in{\rm{spec}}(A)$, then $1$ is semisimple, where ${\mathrm{spec}}(A)$ denotes the spectrum of $A$. Hence, $A$ is discrete-time semistable if and only if $\lim_{k\to\infty}A^{k}$ exists. $A\in\mathbb{R}^{n\times n}$ is called *nontrivially discrete-time semistable* [@Hui:TAC:2013] if $A$ is discrete-time semistable and $A\neq I_{n}$, where $I_{n}\in\mathbb{R}^{n\times n}$ denotes the $n\times n$ identity matrix. The following result shows a close relationship between paracontracting matrices and discrete-time semistable matrices under certain circumstances. To state this result, let $\ker(A)$ denote the kernel of $A$. 
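The paracontraction and discrete-time semistability conditions just defined can be tested numerically on small examples; a sketch (our own illustration — the Monte-Carlo check and all names are assumptions, not from the report):

```python
import numpy as np

rng = np.random.default_rng(1)

def is_paracontracting(W, trials=1000):
    """Monte-Carlo check of: Wx != x  <=>  ||Wx|| < ||x|| (2-norm)."""
    for _ in range(trials):
        x = rng.normal(size=W.shape[0])
        moved = not np.allclose(W @ x, x)
        contracted = np.linalg.norm(W @ x) < np.linalg.norm(x)
        if moved != contracted:
            return False
    return True

def is_dt_semistable(A, tol=1e-8):
    """Check spec(A) in {|s| < 1} union {1}, with 1 semisimple if present."""
    eig = np.linalg.eigvals(A)
    on_one = np.abs(eig - 1) < tol
    if not np.all((np.abs(eig) < 1) | on_one):
        return False
    m = int(on_one.sum())  # algebraic multiplicity of the eigenvalue 1
    n = A.shape[0]
    # 1 is semisimple iff dim ker(A - I_n) equals m
    return m == 0 or n - np.linalg.matrix_rank(A - np.eye(n)) == m

W = 0.5 * np.eye(2)        # paracontracting and discrete-time semistable
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # Jordan block: 1 is not a semisimple eigenvalue
```

Here `W` passes both checks, while the Jordan block `J` fails both.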
\[lemma\_para\] Let $W\in\mathbb{R}^{n\times n}$. Then $W$ is nontrivially discrete-time semistable, $\|W\|\leq 1$, and $\ker((W-I_{n})^{\mathrm{T}}(W-I_{n})+W^{\mathrm{T}}-I_{n}+W-I_{n})=\ker((W-I_{n})^{\mathrm{T}}(W-I_{n})+(W-I_{n})^{2})$ if and only if $W$ is paracontracting. Let $\mathbb{R}^{m\times n}$ denote the set of $m\times n$ real matrices. The following definition is due to [@HZ:TR:2013]. Let $A_{k}\in\mathbb{R}^{n\times n}$, $k=0,1,2,\ldots$, and $C\in\mathbb{R}^{m\times n}$. The set of pairs $\{(A_{k},C)\}_{k\in\overline{\mathbb{Z}}_{+}}$ is called *discrete-time approximate semiobservable with respect to some matrix $A\in\mathbb{R}^{n\times n}$* if $$\begin{aligned} \bigcap_{k=0}^{\infty}\ker(C(I_{n}-A_{k}))=\ker(I_{n}-A).\end{aligned}$$ Finally, using the above definition and Theorem 1 of [@EKN:LAA:1990], one can show the following key results which are needed for the main convergence result in this technical report. The detailed proofs can be found in [@HZ:TR:2013]. \[lemma\_DTSS\] Let $J$ be a (possibly infinite) countable index set and $P_{k}\in\mathbb{R}^{n\times n}$, $k\in J$, be discrete-time semistable, $\|P_{k}\|\leq1$, and $\ker(P_{k}^{\mathrm{T}}P_{k}-I_{n})=\ker((P_{k}-I_{n})^{\mathrm{T}}(P_{k}-I_{n})+(P_{k}-I_{n})^{2})$. Consider the sequence $\{x_{i}\}_{i=0}^{\infty}$ defined by the iterative process $x_{i+1}=Q_{i}x_{i}$, $i=0,1,2,\ldots$, where $Q_{i}\in\{P_{k}:\forall k\in J\}$. - If $|J|<\infty$, then $\lim_{i\to\infty}x_{i}$ exists. If in addition, $P_{k}\in\mathbb{R}^{n\times n}$ is nontrivially discrete-time semistable for every $k\in J$, then $\lim_{i\to\infty}x_{i}$ is in $\bigcap_{k\in\mathcal{I}}\ker(I_{n}-P_{k})$, where $\mathcal{I}$ is the set of all indexes $k$ for which $P_{k}$ appears infinitely often in $\{Q_{i}\}_{i=0}^{\infty}$. 
- If there exists $s\in J$ such that $P_{s}$ is nontrivially discrete-time semistable, $\{(Q_{k},I_{n})\}_{k\in\overline{\mathbb{Z}}_{+}}$ is discrete-time approximate semiobservable with respect to some nontrivially discrete-time semistable matrix $Q_{r}$, $r\in\overline{\mathbb{Z}}_{+}$, and for every positive integer $N$, there always exists $j\geq N$ such that $Q_{j}=Q_{r}$, then $\lim_{i\to\infty}x_{i}$ exists and the limit is in $\ker(I_{n}-Q_{r})$. Paracontracting Multiagent Coordination Optimization {#pmco} ==================================================== Paracontracting MCO with Node-Fixed Dynamic Graph Topology ---------------------------------------------------------- The MCO algorithm with static graph topology, proposed in [@ZH:CEC:2013] to solve a given optimization problem $\min_{\textbf{x}\in\mathbb{R}^{n}}f(\textbf{x})$, can be described in a vector form as follows: $$\begin{aligned} \textbf{v}_{k}(t+1)&=&\textbf{v}_{k}(t)+\eta\sum_{j\in\mathcal{N}^{k}}(\textbf{v}_{j}(t)-\textbf{v}_{k}(t))+\mu\sum_{j\in\mathcal{N}^{k}}(\textbf{x}_{j}(t)-\textbf{x}_{k}(t))+\kappa(\textbf{p}(t)-\textbf{x}_{k}(t)),\label{PSO_1}\\ \textbf{x}_{k}(t+1)&=&\textbf{x}_{k}(t)+\textbf{v}_{k}(t+1),\label{PSO_2}\\ \textbf{p}(t+1)&=&\left\{\begin{array}{ll} \textbf{p}(t)+\kappa(\textbf{x}_{\min}(t)-\textbf{p}(t)), & {\mathrm{if}}\,\,\textbf{p}(t)\not\in\mathcal{Z},\\ \textbf{x}_{\min}(t), & {\mathrm{if}}\,\,\textbf{p}(t)\in\mathcal{Z},\\ \end{array}\right.\label{PSO_3}\end{aligned}$$ where $k=1,\ldots,q$, $t\in\overline{\mathbb{Z}}_{+}$, $\textbf{v}_{k}(t)\in\mathbb{R}^{n}$ and $\textbf{x}_{k}(t)\in\mathbb{R}^{n}$ are the velocity and position of particle $k$ at iteration $t$, respectively, $\textbf{p}(t)\in\mathbb{R}^{n}$ is the position of the global best value that the swarm of the particles can achieve so far, $\eta$, $\mu$, and $\kappa$ are three scalar random coefficients which are usually selected from a uniform distribution on $[0,1]$,
$\mathcal{Z}=\{\textbf{y}\in\mathbb{R}^{n}:f(\textbf{x}_{\min})<f(\textbf{y})\}$, and $\textbf{x}_{\min}=\arg\min_{1\leq k\leq q}f(\textbf{x}_{k})$. Later, in [@HZ:TR:2013], we extended (\[PSO\_1\]) to the dynamic graph case where $\mathcal{N}^{k}$ becomes $\mathcal{N}^{k}(t)=\mathcal{N}_{t}^{k}$. In this sequential report, we further extend (\[PSO\_1\]) to the form with dynamic graph topology sequence $\{\mathcal{G}_{t}\}_{t=0}^{\infty}$ given by $$\begin{aligned} \textbf{v}_{k}(t+1)&=&P(t)\textbf{v}_{k}(t)+\eta P(t)\sum_{j\in\mathcal{N}^{k}_{t}}(\textbf{v}_{j}(t)-\textbf{v}_{k}(t))+\mu P(t)\sum_{j\in\mathcal{N}^{k}_{t}}(\textbf{x}_{j}(t)-\textbf{x}_{k}(t))+\kappa P(t)(\textbf{p}(t)-\textbf{x}_{k}(t)),\nonumber\\ \label{PMCO_1}\end{aligned}$$ where $P(t)\in\mathbb{R}^{n\times n}$ is a paracontracting matrix, and $\mathcal{N}^{k}(t)=\mathcal{N}_{t}^{k}$ represents the node-fixed dynamic or time-varying graph topology. Here we use a specific dynamic neighborhood structure called the Grouped Directed Structure (GDS) [@LH:ACC:2013] to generate the neighboring set sequence $\{\mathcal{N}_{t}^{k}\}_{t=0}^{\infty}$. The reason for using GDS is to prevent the particles in paracontracting MCO from being trapped in local optima other than the global optimum. In this structure, we divide all particles into different groups at every time instant. In each group, the particles form a strongly connected graphical structure, while the information exchange between the two groups is directed. For example, in Figure \[structure\], we divide the 6 particles into two groups: one contains particles 1 and 2 and is called the “all-information” group, and the other contains particles 3–6 and is called the “half-information” group. Particles 1 and 2 can access the information of all the other particles, while particles 3–6 cannot access the information of particles 1 and 2.
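For concreteness, one serial iteration of (\[PSO\_2\])–(\[PMCO\_1\]) can be sketched as follows (a simplified illustration with a fixed neighbor map and a diagonal paracontracting $P$; all names here are ours, not from the report):

```python
import numpy as np

rng = np.random.default_rng(0)

def mco_step(x, v, p, P, nbrs, f):
    """One update of positions x, velocities v (q x n arrays) and the
    swarm-best position p, following (PSO_2)-(PMCO_1)."""
    q, n = x.shape
    eta, mu, kappa = rng.uniform(0, 1, size=3)
    v_new = np.empty_like(v)
    for k in range(q):
        cons_v = sum(v[j] - v[k] for j in nbrs[k])  # velocity consensus term
        cons_x = sum(x[j] - x[k] for j in nbrs[k])  # position consensus term
        v_new[k] = P @ (v[k] + eta * cons_v + mu * cons_x + kappa * (p - x[k]))
    x_new = x + v_new
    x_min = x_new[np.argmin([f(xi) for xi in x_new])]
    # (PSO_3): jump to x_min if it beats p, otherwise move p toward x_min
    p_new = x_min if f(x_min) < f(p) else p + kappa * (x_min - p)
    return x_new, v_new, p_new
```

A diagonal matrix such as $P = 0.5 I_n$ is paracontracting, since $\|Px\| = 0.5\|x\| < \|x\|$ whenever $Px \neq x$.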
With this technique, if the information from particle 1 or 2 is not desirable, then that information remains confined within the group of particles 1 and 2. Meanwhile, if the information from a particle in the “all-information” group is desirable, then it is likely to lead the particles in that group to the global optimum. The purpose of introducing $P(t)$ in (\[PMCO\_1\]) is to use a contraction mapping to guarantee the convergence of MCO. A natural question arising from (\[PSO\_2\])–(\[PMCO\_1\]) is the following: Can we always guarantee the convergence of (\[PSO\_2\])–(\[PMCO\_1\]) for a given optimization problem $\min_{\textbf{x}\in\mathbb{R}^{n}}f(\textbf{x})$? Here convergence means that all the limits $\lim_{t\to\infty}\textbf{x}_{k}(t)$, $\lim_{t\to\infty}\textbf{v}_{k}(t)$, and $\lim_{t\to\infty}\textbf{p}(t)$ exist for every $k=1,\ldots,q$. This sequential report tries to answer this question by giving some sufficient conditions that guarantee the convergence of (\[PSO\_2\])–(\[PMCO\_1\]). The basic idea, borrowed from [@SHH:CDC:2011], is to convert the iterative algorithm into a discrete-time switched linear system and then discuss its semistability property. Parallel Implementation ----------------------- Similar to [@HZ:TR:2013], in this section a parallel implementation of the paracontracting MCO algorithm is introduced, described as Algorithm \[MCO\] in the MATLAB language format. The command $\mathtt{matlabpool}$ opens or closes a pool of MATLAB sessions for parallel computation, and enables the parallel language features within the MATLAB language (e.g., $\mathtt{parfor}$) by starting a parallel job which connects this MATLAB client with a number of labs. The command $\mathtt{parfor}$ executes a code loop in parallel. Part of the $\mathtt{parfor}$ body is executed on the MATLAB client (where the $\mathtt{parfor}$ is issued) and part is executed in parallel on MATLAB workers.
The necessary data on which $\mathtt{parfor}$ operates is sent from the client to the workers, where most of the computation happens, and the results are sent back to the client and pieced together. In Algorithm \[MCO\], the command $\mathtt{parfor}$ is used for the loop over the update formulas of all particles. Since the update formula needs the neighbors’ information, two temporary variables $C$ and $D$ are introduced to store the global position and velocity information, respectively; $P_{k}$ is a (time-dependent) paracontracting matrix, and $L_{k}$ is the (time-dependent) Laplacian matrix for the communication topology $\mathcal{G}_{k}$ for MCO. Initialize the agent’s position with a uniformly distributed random vector: $x_{i}\sim U(\underline{x},\overline{x})\in \mathbf{R}^{n\times 1}$, where $\underline{x}$ and $\overline{x}$ are the lower and upper boundaries of the search space; Initialize the agent’s velocity: $v_{i}\sim U(\underline{v},\overline{v})$, where $\underline{v}$ and $\overline{v} \in \mathbf{R}^{n\times 1}$ are the lower and upper boundaries of the search speed; Update the agent’s best known position to its initial position: $p_{i}\leftarrow x_{i}$; If $f(p_{i})<f(p)$ update the multiagent network’s best known position: $p\leftarrow p_i$.
$k \leftarrow k+1$;\ $C=[x_1,x_2,\cdots, x_q]^{\rm{T}}$, $D=[v_1,v_2,\cdots, v_q]^{\rm{T}}$;\ $\mathtt{parfor}$ [each agent $i=1,\ldots,q$]{} Choose random parameters: $\eta\sim U(0,1)$, $\mu\sim U(0,1)$, $\kappa\sim U(0,1)$; Update the agent’s velocity: $v_{i}\leftarrow P_{k}v_{i}+\eta P_{k}(L_{k}(i,:)D)^{\rm{T}}+\mu P_{k}(L_{k}(i,:)C)^{\rm{T}}+\kappa P_{k}(p-x_{i})$; Update the agent’s position: $x_{i}\leftarrow x_{i}+v_{i}$;\ $\mathtt{endparfor}$\ Update the agent’s best known position: $p_{i}\leftarrow x_{i}$; Update the multiagent network’s best known position: $p\leftarrow p+\kappa(p_{i}-p)$; If $f(p_{i})<f(p)$ update the multiagent network’s best known position: $p\leftarrow p_{i}$;\ Convergence analysis {#scr} ==================== In this section, we present some theoretic results on global convergence of the iterative process in Algorithm \[MCO\]. We follow the steps and key ideas in [@HZ:TR:2013]. In particular, we view the randomized paracontracting MCO algorithm as a discrete-time switched linear system and then use semistability theory to rigorously show its global convergence. To proceed with presentation, we need the following definition. \[def\_odot\] Let $x\in\mathbb{R}^{n}$ be a column vector and $S,K\subseteq\mathbb{R}^{m}$ be subspaces. Define $x\otimes S=\{x\otimes y:y\in S\}$, $x\odot S=\{[x_{1}y_{1}^{\mathrm{T}},\ldots,x_{n}y_{n}^{\mathrm{T}}]^{\mathrm{T}}:[x_{1},\ldots,x_{n}]^{\mathrm{T}}=x,x_{i}\in\mathbb{R},y_{i}\in S,i=1,\ldots,n\}$, and $S+K=\{x+y:x\in S,y\in K\}$. The following property about the operation “$\odot$” is immediate. \[lemma\_odot\] Let $x=[x_{1},\ldots,x_{n}]^{\mathrm{T}}\in\mathbb{R}^{n}$ and $S$ be a subspace. Then $x\odot S=\sum_{i=1}^{n}x_{i}\textbf{e}_{i}\otimes S$, where $[\textbf{e}_{1},\ldots,\textbf{e}_{n}]=I_{n}$. By definition, $x\odot S=\{[x_{1}y_{1}^{\mathrm{T}},\ldots,x_{n}y_{n}^{\mathrm{T}}]^{\mathrm{T}}:y_{i}\in S,i=1,\ldots,n\}$. 
On the other hand, $\sum_{i=1}^{n}x_{i}\textbf{e}_{i}\otimes S=\{\sum_{i=1}^{n}x_{i}\textbf{e}_{i}\otimes y_{i}:y_{i}\in S,i=1,\ldots,n\}$. Since $\sum_{i=1}^{n}x_{i}\textbf{e}_{i}\otimes y_{i}=[x_{1}y_{1}^{\mathrm{T}},\ldots,x_{n}y_{n}^{\mathrm{T}}]^{\mathrm{T}}$, it follows that $x\odot S=\sum_{i=1}^{n}x_{i}\textbf{e}_{i}\otimes S$. Next, using the new operations defined in Definition \[def\_odot\], we have the following results. \[lemma\_EW\] Let $n,q$ be positive integers and $q\geq 2$. For every $j=1,\ldots,q$, let $E_{n\times nq}^{[j]}\in\mathbb{R}^{n\times nq}$ denote a block-matrix whose $j$th block-column is $I_{n}$ and the rest block-elements are all zero matrices, i.e., $E_{n\times nq}^{[j]}=[\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n},I_{n},\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n}]$, $j=1,\ldots,q$, where $\textbf{0}_{m\times n}$ denotes the $m\times n$ zero matrix. Define $W^{[j]}=(\textbf{1}_{q\times 1}\otimes P)E_{n\times nq}^{[j]}$ for every $j=1,\ldots,q$, where $\otimes$ denotes the Kronecker product, $P\in\mathbb{R}^{n\times n}$ is a paracontracting matrix, and $\textbf{1}_{m\times n}$ denotes the $m\times n$ matrix whose entries are all ones. Then the following statements hold: - For every $j=1,\ldots,q$, ${\mathrm{rank}}(I_{q}\otimes P-W^{[j]})=(q-1){\mathrm{rank}}(P)$, where ${\mathrm{rank}}(A)$ denotes the rank of $A$. - For any $\textbf{w}=[w_{1},\ldots,w_{q}]^{\mathrm{T}}\in\mathbb{R}^{q}$, $W^{[j]}(\textbf{w}\otimes\textbf{e}_{i})=w_{j}(I_{q}\otimes P)(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$. 
In particular, $W^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=(I_{q}\otimes P)(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})$ and $\ker(W^{[j]}-I_{q}\otimes P)=\textbf{1}_{q\times 1}\otimes{\mathrm{span}}\{\textbf{e}_{1},\ldots,\textbf{e}_{n}\}+(\textbf{1}_{q\times 1}-\textbf{g}_{j})\odot{\mathrm{span}}\{\textbf{j}_{1},\ldots,\textbf{j}_{n-{\mathrm{rank}}(P)}\}$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$, where $[\textbf{g}_{1},\ldots,\textbf{g}_{q}]=I_{q}$, ${\mathrm{span}}\,S$ denotes the span of a subspace $S$, and ${\mathrm{span}}\{\textbf{j}_{1},\ldots,\textbf{j}_{n-{\mathrm{rank}}(P)}\}=\ker(P)$. - For any $\textbf{w}=[w_{1},\ldots,w_{q}]^{\mathrm{T}}\in\mathbb{R}^{q}$, $E_{n\times nq}^{[j]}(\textbf{w}\otimes\textbf{e}_{i})=w_{j}\textbf{e}_{i}$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$. In particular, $E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=\textbf{e}_{i}$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$. Next, for any $A\in\mathbb{R}^{n\times n}$, $E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes A)=A$, $E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=\textbf{j}_{r}$ if $s=j$, and $E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=\textbf{0}_{n\times 1}$ if $s\neq j$ for every $j=1,\ldots,q$, every $s=1,\ldots,q$, and every $r=1,\ldots,n-{\mathrm{rank}}(P)$. Finally, $W^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=\textbf{0}_{nq\times 1}$ for every $j=1,\ldots,q$, every $s=1,\ldots,q$, and every $r=1,\ldots,n-{\mathrm{rank}}(P)$. $i$) First note that by Fact 7.4.3 of [@Bernstein:2009 p. 445], $W^{[j]}=(\textbf{1}_{q\times 1}\otimes P)E_{n\times nq}^{[j]}=\textbf{1}_{q\times 1}\otimes PE_{n\times nq}^{[j]}$ for every $j=1,\ldots,q$. Now it follows from Fact 7.4.20 of [@Bernstein:2009 p. 
446] that $$\begin{aligned} \label{Wj2} &&\hspace{-2.3em}W^{[j]}=\textbf{1}_{q\times 1}\otimes PE_{n\times nq}^{[j]}=(\textbf{1}_{q\times 1}\otimes[\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n},P,\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n}])\nonumber\\ &&\hspace{0.9em}=[\textbf{1}_{q\times 1}\otimes\textbf{0}_{n\times n},\ldots,\textbf{1}_{q\times 1}\otimes\textbf{0}_{n\times n},\textbf{1}_{q\times 1}\otimes P,\textbf{1}_{q\times 1}\otimes\textbf{0}_{n\times n},\ldots,\textbf{1}_{q\times 1}\otimes\textbf{0}_{n\times n}]\nonumber\\ &&\hspace{0.8em}=\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right].\end{aligned}$$ Next, since $P$ is discrete-time semistable, it follows from [@HH:IJC:2009] that $P$ is group invertible [@Bernstein:2009 p. 403], and hence, $P^{\#}$ exists, where $P^{\#}$ denotes the group generalized inverse of $P$ (see [@Bernstein:2009 p. 403]). 
Note that it follows from (\[Wj2\]) that $$\begin{aligned} W^{[j]}(I_{q}\otimes P^{\#})(I_{q}\otimes P)&=&\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]\small\left[\begin{array}{ccc} P^{\#} & \ldots & \textbf{0}_{n\times n} \\ \vdots & \ddots & \vdots \\ \textbf{0}_{n\times n} & \ldots & P^{\#} \\ \end{array}\right]\nonumber\\ &&\times\small\left[\begin{array}{ccc} P & \ldots & \textbf{0}_{n\times n} \\ \vdots & \ddots & \vdots \\ \textbf{0}_{n\times n} & \ldots & P \\ \end{array}\right]\nonumber\\ &=&\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & PP^{\#}P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & PP^{\#}P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]\nonumber\\ &=&\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]=W^{[j]},\label{WjP1}\\ (I_{q}\otimes P)(I_{q}\otimes P^{\#})W^{[j]}&=&\small\left[\begin{array}{ccc} P & \ldots & \textbf{0}_{n\times n} \\ \vdots & \ddots & \vdots \\ \textbf{0}_{n\times n} & \ldots & P \\ \end{array}\right]\small\left[\begin{array}{ccc} P^{\#} & \ldots & \textbf{0}_{n\times n} \\ \vdots & \ddots & \vdots \\ \textbf{0}_{n\times n} & \ldots & P^{\#} \\ \end{array}\right]\nonumber\\ 
&&\times\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]\nonumber\\ &=&\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & PP^{\#}P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & PP^{\#}P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]\nonumber\\ &=&\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]=W^{[j]},\label{WjP2}\end{aligned}$$ where we used the fact that $PP^{\#}P=P$ (see (6.2.11) in [@Bernstein:2009 p. 403]). Let $M=(I_{q}\otimes P^{\#})W^{[j]}(I_{q}\otimes P^{\#})$. Then it follows from (\[WjP1\]) and (\[WjP2\]) that $(I_{q}\otimes P)M(I_{q}\otimes P)=(I_{q}\otimes P)(I_{q}\otimes P^{\#})W^{[j]}(I_{q}\otimes P^{\#})(I_{q}\otimes P)=W^{[j]}$. 
Furthermore, it follows from (\[WjP1\]) or (\[WjP2\]) that $$\begin{aligned} M(I_{q}\otimes P)M&=&(I_{q}\otimes P^{\#})W^{[j]}(I_{q}\otimes P^{\#})(I_{q}\otimes P)(I_{q}\otimes P^{\#})W^{[j]}(I_{q}\otimes P^{\#})\nonumber\\ &=&(I_{q}\otimes P^{\#})W^{[j]}(I_{q}\otimes P^{\#})W^{[j]}(I_{q}\otimes P^{\#})\nonumber\\ &=&(I_{q}\otimes P^{\#})\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]\small\left[\begin{array}{ccc} P^{\#} & \ldots & \textbf{0}_{n\times n} \\ \vdots & \ddots & \vdots \\ \textbf{0}_{n\times n} & \ldots & P^{\#} \\ \end{array}\right]\nonumber\\ &&\times\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right](I_{q}\otimes P^{\#})\nonumber\\ &=&(I_{q}\otimes P^{\#})\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & PP^{\#}P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & PP^{\#}P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right](I_{q}\otimes P^{\#})\nonumber\\ &=&(I_{q}\otimes P^{\#})\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & 
\textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right](I_{q}\otimes P^{\#})\nonumber\\ &=&(I_{q}\otimes P^{\#})W^{[j]}(I_{q}\otimes P^{\#})=M.\end{aligned}$$ Now it follows from Fact 2.10.30 of [@Bernstein:2009 p. 128] that ${\mathrm{rank}}(I_{q}\otimes P-W^{[j]})={\mathrm{rank}}(I_{q}\otimes P)-{\mathrm{rank}}(W^{[j]})$. Clearly it follows from (\[Wj2\]) that ${\mathrm{rank}}(W^{[j]})={\mathrm{rank}}(P)$. Thus, ${\mathrm{rank}}(I_{q}\otimes P-W^{[j]})={\mathrm{rank}}(I_{q}\otimes P)-{\mathrm{rank}}(W^{[j]})=q\times{\mathrm{rank}}(P)-{\mathrm{rank}}(P)=(q-1){\mathrm{rank}}(P)$ for every $j=1,\ldots,q$. $ii$) It follows from (\[Wj2\]) that for every $j=1,\ldots,q$ and every $i=1,\ldots,n$, $$\begin{aligned} W^{[j]}(\textbf{1}_{q\times 1}\otimes \textbf{e}_{i})&=&\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]\small\left[\begin{array}{c} \textbf{e}_{i}\\ \vdots \\ \textbf{e}_{i}\\ \end{array}\right]=\small\left[\begin{array}{c} P\textbf{e}_{i}\\ \vdots \\ P\textbf{e}_{i}\\ \end{array}\right]=\textbf{1}_{q\times 1}\otimes P\textbf{e}_{i}\nonumber\\ &=&(I_{q}\otimes P)(\textbf{1}_{q\times 1}\otimes \textbf{e}_{i}),\end{aligned}$$ namely, $(W^{[j]}-I_{q}\otimes P)(\textbf{1}_{q\times 1}\otimes \textbf{e}_{i})=\textbf{0}_{nq\times 1}$ for every $j=1,\ldots,q$. Since by $i$), ${\mathrm{rank}}(W^{[j]}-I_{q}\otimes P)=(q-1){\mathrm{rank}}(P)$ for every $j=1,\ldots,q$, it follows from Corollary 2.5.5 of [@Bernstein:2009 p. 
105] that ${\mathrm{def}}(W^{[j]}-I_{q}\otimes P)=nq-{\mathrm{rank}}(W^{[j]}-I_{q}\otimes P)=nq-(q-1){\mathrm{rank}}(P)\geq n$ for every $j=1,\ldots,q$, where ${\mathrm{def}}(A)$ denotes the defect of $A$. Since $\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, $i=1,\ldots,n$, are linearly independent, it follows that $\textbf{1}_{q\times 1}\otimes{\mathrm{span}}\{\textbf{e}_{1},\ldots,\textbf{e}_{n}\}={\mathrm{span}}\{\textbf{1}_{q\times 1}\otimes\textbf{e}_{1},\ldots,\textbf{1}_{q\times 1}\otimes\textbf{e}_{n}\}\subseteq\ker(W^{[j]}-I_{q}\otimes P)$ for every $j=1,\ldots,q$. Let $x=[x_{1}^{\mathrm{T}},\ldots,x_{q}^{\mathrm{T}}]^{\mathrm{T}}\in\ker(W^{[j]}-I_{q}\otimes P)$, where $x_{i}\in\mathbb{R}^{n}$, $i=1,\ldots,q$. Then it follows that $Px_{j}-Px_{i}=0$ for every $i=1,\ldots,q$, i.e., $x_{i}-x_{j}\in\ker(P)$, $i\neq j$, $i=1,\ldots,q$, where $x_{j}\in\mathbb{R}^{n}$ is arbitrary. Note that $x_{j}\in{\mathrm{span}}\{\textbf{e}_{1},\ldots,\textbf{e}_{n}\}$. Hence, $\textbf{1}_{q\times 1}\otimes{\mathrm{span}}\{\textbf{e}_{1},\ldots,\textbf{e}_{n}\}+(\textbf{1}_{q\times 1}-\textbf{g}_{j})\odot\ker(P)=\ker(W^{[j]}-I_{q}\otimes P)$.
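The rank identity in $i$) and the kernel inclusion in $ii$) can be checked numerically on a small hypothetical instance (assumed here, not from the paper: $q=3$, $n=3$, and $P$ an orthogonal projection, so that $P^{\#}=P$ and $PP^{\#}P=P$ hold trivially). The numpy sketch below is illustrative only and is not part of the proof.

```python
import numpy as np

# Hypothetical small instance (not from the paper): q = 3, n = 3,
# P an orthogonal projection of rank 2, so P^# = P and P P^# P = P.
q, n = 3, 3
P = np.diag([1.0, 1.0, 0.0])

for j in range(q):
    # E^{[j]} = [0 ... 0, I_n, 0 ... 0] with I_n in the j-th block
    E = np.zeros((n, n * q))
    E[:, j * n:(j + 1) * n] = np.eye(n)
    # W^{[j]} = (1_{q x 1} (x) P) E^{[j]}: P repeated down the j-th block column
    W = np.kron(np.ones((q, 1)), P) @ E

    # i): rank(I_q (x) P - W^{[j]}) = (q - 1) rank(P)
    D = np.kron(np.eye(q), P) - W
    assert np.linalg.matrix_rank(D) == (q - 1) * np.linalg.matrix_rank(P)

    # ii): 1_{q x 1} (x) e_i lies in ker(W^{[j]} - I_q (x) P)
    for i in range(n):
        v = np.kron(np.ones(q), np.eye(n)[i])
        assert np.allclose(D @ v, 0.0)

print("rank and kernel claims verified on this instance")
```

Here $(q-1)\,{\mathrm{rank}}(P)=4$ on the assumed data, and the consensus-type vectors $\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$ are confirmed to lie in the kernel for every block index $j$.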
Finally, for any $\textbf{w}=[w_{1},\ldots,w_{q}]^{\mathrm{T}}\in\mathbb{R}^{q}$, it follows from (\[Wj2\]) that $$\begin{aligned} W^{[j]}(\textbf{w}\otimes\textbf{e}_{i})&=&\small\left[\begin{array}{ccccccc} \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n} & P & \textbf{0}_{n\times n} & \ldots & \textbf{0}_{n\times n}\\ \end{array}\right]\small\left[\begin{array}{c} w_{1}\textbf{e}_{i}\\ \vdots \\ w_{q}\textbf{e}_{i}\\ \end{array}\right]=\small\left[\begin{array}{c} w_{j}P\textbf{e}_{i}\\ \vdots \\ w_{j}P\textbf{e}_{i}\\ \end{array}\right]\nonumber\\ &=&w_{j}\textbf{1}_{q\times 1}\otimes P\textbf{e}_{i}=w_{j}(I_{q}\otimes P)(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})\end{aligned}$$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$. $iii$) For any $\textbf{w}=[w_{1},\ldots,w_{q}]^{\mathrm{T}}\in\mathbb{R}^{q}$, $E_{n\times nq}^{[j]}(\textbf{w}\otimes\textbf{e}_{i})=[\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n},I_{n},\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n}][w_{1}\textbf{e}_{i}^{\mathrm{T}},\ldots,\\w_{q}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}=w_{j}\textbf{e}_{i}$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$. In particular, $E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=\textbf{e}_{i}$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$. Next, for every $j=1,\ldots,q$, $E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes A)=[\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n},I_{n},\textbf{0}_{n\times n},\ldots,\\\textbf{0}_{n\times n}][A^{\mathrm{T}},\ldots,A^{\mathrm{T}}]^{\mathrm{T}}=A$. 
For every $j=1,\ldots,q$, every $s=1,\ldots,q$, and every $r=1,\ldots,n-{\mathrm{rank}}(P)$, $E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=[\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n},I_{n},\textbf{0}_{n\times n},\ldots,\textbf{0}_{n\times n}][\textbf{0}_{1\times n},\ldots,\textbf{j}_{r}^{\mathrm{T}},\ldots,\textbf{0}_{1\times n}]^{\mathrm{T}}=\textbf{j}_{r}$ if $s=j$. Otherwise, $E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=\textbf{0}_{n\times 1}$. Finally, $W^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=(\textbf{1}_{q\times 1}\otimes P)E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=(\textbf{1}_{q\times 1}\otimes P)\textbf{j}_{r}=\textbf{1}_{q\times 1}\otimes P\textbf{j}_{r}=\textbf{0}_{nq\times 1}$ if $s=j$ and $W^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=(\textbf{1}_{q\times 1}\otimes P)E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=(\textbf{1}_{q\times 1}\otimes P)\textbf{0}_{n\times 1}=\textbf{0}_{nq\times 1}$ if $s\neq j$. The following two lemmas are needed for the next result. \[lemma\_ker\] Let $A\in\mathbb{R}^{n\times m}$ and $B\in\mathbb{R}^{l\times k}$. Then $\ker(A\otimes B)=\ker(A\otimes I_{k})+\ker(I_{m}\otimes B)$. It follows from Equality (2.4.13) of [@Bernstein:2009 p. 103] and Equality (7.1.7) of [@Bernstein:2009 p. 440] that $\ker(A\otimes B)={\mathrm{ran}}((A\otimes B)^{\mathrm{T}})^{\bot}={\mathrm{ran}}(A^{\mathrm{T}}\otimes B^{\mathrm{T}})^{\bot}$, where ${\mathrm{ran}}(A)$ denotes the range space of $A$ and $S^{\bot}$ denotes the orthogonal complement of $S$. On the other hand, it follows from Fact 7.4.23 of [@Bernstein:2009 p. 447] that ${\mathrm{ran}}(A^{\mathrm{T}}\otimes B^{\mathrm{T}})={\mathrm{ran}}(A^{\mathrm{T}}\otimes I_{k})\cap{\mathrm{ran}}(I_{m}\otimes B^{\mathrm{T}})$. Now it follows from Fact 2.9.16 of [@Bernstein:2009 p. 121] that ${\mathrm{ran}}(A^{\mathrm{T}}\otimes B^{\mathrm{T}})^{\bot}=({\mathrm{ran}}(A^{\mathrm{T}}\otimes I_{k})\cap{\mathrm{ran}}(I_{m}\otimes B^{\mathrm{T}}))^{\bot}={\mathrm{ran}}(A^{\mathrm{T}}\otimes I_{k})^{\bot}+{\mathrm{ran}}(I_{m}\otimes B^{\mathrm{T}})^{\bot}$. Finally, it follows from Equality (2.4.13) of [@Bernstein:2009 p. 103] that $\ker(A\otimes B)={\mathrm{ran}}(A^{\mathrm{T}}\otimes I_{k})^{\bot}+{\mathrm{ran}}(I_{m}\otimes B^{\mathrm{T}})^{\bot}=\ker(A\otimes I_{k})+\ker(I_{m}\otimes B)$. \[lemma\_S\] Let $S_{i}$, $i=1,2,3$, be subspaces such that $S_{1}\cup S_{2}$ or $S_{2}\cup S_{3}$ or $S_{3}\cup S_{1}$ is a subspace. Then $\dim(S_{1}+S_{2}+S_{3})=\dim S_{1}+\dim S_{2}+\dim S_{3}-\dim(S_{1}\cap S_{2})-\dim(S_{2}\cap S_{3})-\dim(S_{3}\cap S_{1})+\dim(S_{1}\cap S_{2}\cap S_{3})$, where $\dim S$ denotes the dimension of a subspace $S$. Here we only consider the case where $S_{1}\cup S_{2}$ is a subspace. It follows from the subspace dimension theorem (Theorem 2.3.1 of [@Bernstein:2009 p. 98]) that $\dim(S_{1}+S_{2}+S_{3})=\dim(S_{1}+S_{2})+\dim S_{3}-\dim[(S_{1}+S_{2})\cap S_{3}]=\dim S_{1}+\dim S_{2}-\dim(S_{1}\cap S_{2})+\dim S_{3}-\dim[(S_{1}+S_{2})\cap S_{3}]$. Since by assumption $S_{1}\cup S_{2}$ is a subspace, it follows from Fact 2.9.11 of [@Bernstein:2009 p. 121] that $S_{1}+S_{2}=S_{1}\cup S_{2}$. Hence, $(S_{1}+S_{2})\cap S_{3}=(S_{1}\cup S_{2})\cap S_{3}=(S_{1}\cap S_{3})\cup(S_{2}\cap S_{3})$. On the other hand, note that $(S_{1}+S_{2})\cap S_{3}$ is a subspace, and hence, $(S_{1}\cap S_{3})\cup(S_{2}\cap S_{3})$ is a subspace as well. Thus, by Fact 2.9.11 of [@Bernstein:2009 p. 121], $(S_{1}\cap S_{3})\cup(S_{2}\cap S_{3})=S_{1}\cap S_{3}+S_{2}\cap S_{3}$. Then it follows from the subspace dimension theorem that $\dim[(S_{1}+S_{2})\cap S_{3}]=\dim(S_{1}\cap S_{3}+S_{2}\cap S_{3})=\dim(S_{1}\cap S_{3})+\dim(S_{2}\cap S_{3})-\dim(S_{1}\cap S_{2}\cap S_{3})$.
Consequently, $\dim(S_{1}+S_{2}+S_{3})=\dim S_{1}+\dim S_{2}+\dim S_{3}-\dim(S_{1}\cap S_{2})-\dim(S_{2}\cap S_{3})-\dim(S_{3}\cap S_{1})+\dim(S_{1}\cap S_{2}\cap S_{3})$. Next, we use some graph notions to state a result on the rank of certain matrices related to the matrix form of the iterative process in Algorithm \[MCO\]. \[lemma\_Arank\] Define a (possibly infinite) series of matrices $A^{[j]}_{k}$, $j=1,\ldots,q$, $k=0,1,2,\ldots$, as follows: $$\begin{aligned} \label{Amatrix} A_{k}^{[j]}=\small\left[\begin{array}{ccc} \textbf{0}_{nq\times nq} & I_{nq} & \textbf{0}_{nq\times n} \\ -\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} (I_{q}\otimes P_{k}) & -\eta_{k} L_{k}\otimes P_{k} & \kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k} \\ \kappa_{k} E_{n\times nq}^{[j]} & \textbf{0}_{n\times nq} & -\kappa_{k} I_{n} \\ \end{array}\right],\end{aligned}$$ where $\mu_{k},\eta_{k},\kappa_{k}\geq0$, $k\in\overline{\mathbb{Z}}_{+}$, $P_{k}\in\mathbb{R}^{n\times n}$ denotes a paracontracting matrix, $L_{k}\in\mathbb{R}^{q\times q}$ denotes the Laplacian matrix of a node-fixed dynamic digraph $\mathcal{G}_{k}$, and $E_{n\times nq}^{[j]}\in\mathbb{R}^{n\times nq}$ is defined in Lemma \[lemma\_EW\]. - If $\mu_{k}=0$ and $\kappa_{k}=0$, then ${\mathrm{rank}}(A_{k}^{[j]})=nq$ and $\ker(A_{k}^{[j]})=\{[\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{il}(\textbf{e}_{i}\otimes\textbf{g}_{l})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\beta_{i}\\\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{il}\in\mathbb{R},\forall\beta_{i}\in\mathbb{R},i=1,\ldots,n,l=1,\ldots,q\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. 
- If $\mu_{k}=0$ and $\kappa_{k}\neq0$, then ${\mathrm{rank}}(A_{k}^{[j]})=2nq-(q-1)(n-{\mathrm{rank}}(P_{k}))$ and $\ker(A_{k}^{[j]})=\{[\sum_{i=1}^{n}\alpha_{i}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}}-\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}(\textbf{g}_{j}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\alpha_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{i},\beta_{sr}\in\mathbb{R},i=1,\ldots,n,s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. - If $\mu_{k}\neq0$ and $\kappa_{k}\neq0$, then ${\mathrm{rank}}(A_{k}^{[j]})=2nq-(q-1)(n-{\mathrm{rank}}(P_{k}))$ and $\ker(A_{k}^{[j]})=\{[\sum_{i=1}^{n}\alpha_{0i}(\textbf{w}_{0}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\\\textbf{0}_{1\times nq},\sum_{i=1}^{n}\alpha_{0i}\textbf{e}_{i}^{\mathrm{T}}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})w_{lj}\textbf{e}_{i}^{\mathrm{T}}+\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}\textbf{j}_{r}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{0i},\beta_{sr},\gamma_{lm}\in\mathbb{R},i=1,\ldots,n,s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k}),l=1,\ldots,q-1-{\mathrm{rank}}(L_{k}),m=1,\ldots,n\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$, where $A^{+}$ denotes the Moore-Penrose generalized inverse of $A$, ${\mathrm{span}}\{\textbf{w}_{0},\textbf{w}_{1},\ldots,\textbf{w}_{q-1-{\mathrm{rank}}(L_{k})}\}=\ker(L_{k})$, $\textbf{w}_{0}=\textbf{1}_{q\times 
1}$, and $\textbf{w}_{l}=[w_{l1},\ldots,w_{lq}]^{\mathrm{T}}\in\mathbb{R}^{q}$ for every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. - If $\mu_{k}\neq0$ and $\kappa_{k}=0$, then ${\mathrm{rank}}(A_{k}^{[j]})=nq+{\mathrm{rank}}(L_{k}){\mathrm{rank}}(P_{k})$ and $\ker(A_{k}^{[j]})=\{[\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\\\alpha_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{li},\beta_{sr},\gamma_{i}\in\mathbb{R},i=1,\ldots,n,l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k}),s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. First, it follows from (\[Amatrix\]) that $\ker(A_{k}^{[j]})=\{[\textbf{z}_{1}^{\rm{T}},\textbf{z}_{2}^{\rm{T}},\textbf{z}_{3}^{\rm{T}}]^{\rm{T}}\in\mathbb{R}^{2nq+n}:\textbf{z}_{2}=\textbf{0}_{nq\times 1},-\mu_{k} (L_{k}\otimes P_{k})\textbf{z}_{1}-\kappa_{k}(I_{q}\otimes P_{k})\textbf{z}_{1}-\eta_{k}(L_{k}\otimes P_{k})\textbf{z}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{z}_{3}=\textbf{0}_{nq\times 1}, \kappa_{k}E_{n\times nq}^{[j]}\textbf{z}_{1}-\kappa_{k}\textbf{z}_{3}=\textbf{0}_{n\times 1}\}$, $k\in\overline{\mathbb{Z}}_{+}$, where $\textbf{z}_{1},\textbf{z}_{2}\in\mathbb{R}^{nq}$ and $\textbf{z}_{3}\in\mathbb{R}^{n}$. $i$) If $\mu_{k}=0$ and $\kappa_{k}=0$, then it follows from the similar arguments as in the proof of $i$) of Lemma 4.2 of [@HZ:TR:2013] that the assertion holds. 
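Case $i$) can likewise be verified numerically by assembling $A_{k}^{[j]}$ directly from (\[Amatrix\]) with $\mu_{k}=\kappa_{k}=0$. The sketch below uses hypothetical data (a path-graph Laplacian $L$ and a rank-one projection $P$, neither taken from the paper) and confirms ${\mathrm{rank}}(A_{k}^{[j]})=nq$ for every $j$.

```python
import numpy as np

# Hypothetical data (not from the paper): q = 3, n = 2, a path-graph
# Laplacian L, a rank-1 projection P; mu = kappa = 0, eta = 1.
q, n = 3, 2
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])   # Laplacian of the path 1-2-3
P = np.diag([1.0, 0.0])              # projection, rank(P) = 1
mu, eta, kappa = 0.0, 1.0, 0.0

for j in range(q):
    E = np.zeros((n, n * q))
    E[:, j * n:(j + 1) * n] = np.eye(n)
    A = np.block([
        [np.zeros((n*q, n*q)), np.eye(n*q),        np.zeros((n*q, n))],
        [-mu*np.kron(L, P) - kappa*np.kron(np.eye(q), P),
         -eta*np.kron(L, P),                        kappa*np.kron(np.ones((q, 1)), P)],
        [kappa*E,              np.zeros((n, n*q)), -kappa*np.eye(n)],
    ])
    # Case i): mu = kappa = 0 gives rank(A_k^{[j]}) = nq
    assert np.linalg.matrix_rank(A) == n * q

print("case i) rank formula verified on this instance")
```

With $\mu_{k}=\kappa_{k}=0$, only the middle block column of $A_{k}^{[j]}$ is nonzero, and its columns $[I_{nq};\,-\eta_{k}L_{k}\otimes P_{k};\,\textbf{0}]$ already have full column rank $nq$, matching the assertion.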
$ii$) If $\mu_{k}=0$ and $\kappa_{k}\neq0$, then substituting $\textbf{z}_{2}=\textbf{0}_{nq\times 1}$ and $\textbf{z}_{3}=E_{n\times nq}^{[j]}\textbf{z}_{1}$ into $-\kappa_{k}(I_{q}\otimes P_{k})\textbf{z}_{1}-\eta_{k}(L_{k}\otimes P_{k})\textbf{z}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{z}_{3}=\textbf{0}_{nq\times 1}$ yields $$\begin{aligned} \label{z1p} \kappa_{k}(W_{k}^{[j]}-I_{q}\otimes P_{k})\textbf{z}_{1}=\textbf{0}_{nq\times 1},\end{aligned}$$ where $W_{k}^{[j]}=(\textbf{1}_{q\times 1}\otimes P_{k})E_{n\times nq}^{[j]}$. Since, by $ii$) of Lemma \[lemma\_EW\], $\ker(W_{k}^{[j]}-I_{q}\otimes P_{k})=\textbf{1}_{q\times 1}\otimes{\mathrm{span}}\{\textbf{e}_{1},\ldots,\textbf{e}_{n}\}+(\textbf{1}_{q\times 1}-\textbf{g}_{j})\odot\ker(P_{k})$ for every $j=1,\ldots,q$, it follows from (\[z1p\]) and Lemma \[lemma\_odot\] that $\textbf{z}_{1}$ can be represented as $\textbf{z}_{1}=\sum_{i=1}^{n}\alpha_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}-\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}\textbf{g}_{j}\otimes\textbf{j}_{r}$, where $\alpha_{i},\beta_{sr}\in\mathbb{R}$. Furthermore, it follows from $iii$) of Lemma 4.1 of [@HZ:TR:2013] and $iii$) of Lemma \[lemma\_EW\] that $\textbf{z}_{3}=E_{n\times nq}^{[j]}\textbf{z}_{1}=\sum_{i=1}^{n}\alpha_{i}E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})-\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}E_{n\times nq}^{[j]}(\textbf{g}_{j}\otimes\textbf{j}_{r})=\sum_{i=1}^{n}\alpha_{i}\textbf{e}_{i}$ for every $j=1,\ldots,q$. 
Thus, $\ker(A_{k}^{[j]})=\{[\sum_{i=1}^{n}\alpha_{i}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}}-\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}(\textbf{g}_{j}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\alpha_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{i},\beta_{sr}\in\mathbb{R},i=1,\ldots,n,s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. Let $\mathcal{S}_{1}=\{[\sum_{i=1}^{n}\alpha_{i}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\alpha_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{i}\in\mathbb{R},i=1,\ldots,n\}$ and $\mathcal{S}_{2}=\{[\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\\\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}}-\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}(\textbf{g}_{j}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times nq},\textbf{0}_{1\times n}]^{\mathrm{T}}:\forall\beta_{sr}\in\mathbb{R},s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$. Clearly $\ker(A_{k}^{[j]})=\mathcal{S}_{1}+\mathcal{S}_{2}$ and $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ are subspaces. Now it follows from the subspace dimension theorem (Theorem 2.3.1 of [@Bernstein:2009 p. 98]) that $\dim\ker(A_{k}^{[j]})=\dim\mathcal{S}_{1}+\dim\mathcal{S}_{2}-\dim(\mathcal{S}_{1}\cap\mathcal{S}_{2})=n+(q-1)(n-{\mathrm{rank}}(P_{k}))-\dim(\mathcal{S}_{1}\cap\mathcal{S}_{2})$. Since $\mathcal{S}_{1}\cap\mathcal{S}_{2}=\{\textbf{0}_{(2nq+n)\times 1}\}$, it follows that $\dim(\mathcal{S}_{1}\cap\mathcal{S}_{2})=0$, which implies that ${\mathrm{def}}(A_{k}^{[j]})=\dim\ker(A_{k}^{[j]})=nq-(q-1){\mathrm{rank}}(P_{k})$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. 
Therefore, in this case ${\mathrm{rank}}(A_{k}^{[j]})=2nq+n-{\mathrm{def}}(A_{k}^{[j]})=nq+n+(q-1){\mathrm{rank}}(P_{k})=2nq-(q-1)(n-{\mathrm{rank}}(P_{k}))$. $iii$) If $\mu_{k}\neq0$ and $\kappa_{k}\neq0$, then we claim that $\kappa_{k}/\mu_{k}\not\in{\mathrm{spec}}(-L_{k}\otimes I_{n})$. To see this, it follows from Proposition 1 of [@AC:LAA:2005] that for any $\lambda_{k}\in{\mathrm{spec}}(-L_{k})$, ${\mathrm{Re}}\,\lambda_{k}\leq0$, where ${\mathrm{Re}}\,\lambda_{k}$ denotes the real part of $\lambda_{k}$. Furthermore, note that ${\mathrm{spec}}(-L_{k}\otimes I_{n})={\mathrm{spec}}(-L_{k})$. Thus, since $\mu_{k},\kappa_{k}>0$ in this case, $0<\kappa_{k}/\mu_{k}\not\in{\mathrm{spec}}(-L_{k})={\mathrm{spec}}(-L_{k}\otimes I_{n})$. Now, substituting $\textbf{z}_{2}=\textbf{0}_{nq\times 1}$ and $\textbf{z}_{3}=E_{n\times nq}^{[j]}\textbf{z}_{1}$ into $-\mu_{k} (L_{k}\otimes P_{k})\textbf{z}_{1}-\kappa_{k}(I_{q}\otimes P_{k})\textbf{z}_{1}-\eta_{k}(L_{k}\otimes P_{k})\textbf{z}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{z}_{3}=\textbf{0}_{nq\times 1}$ yields $$\begin{aligned} \label{z1} (-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k}+\kappa_{k} W_{k}^{[j]})\textbf{z}_{1}=\textbf{0}_{nq\times 1}, \quad k\in\overline{\mathbb{Z}}_{+}. \end{aligned}$$ Note that $W_{k}^{[j]}=(\textbf{1}_{q\times 1}\otimes P_{k})E_{n\times nq}^{[j]}=\textbf{1}_{q\times 1}\otimes P_{k}E_{n\times nq}^{[j]}$, and hence $(L_{k}\otimes I_{n})W_{k}^{[j]}=(L_{k}\otimes I_{n})(\textbf{1}_{q\times 1}\otimes P_{k}E_{n\times nq}^{[j]})=L_{k}\textbf{1}_{q\times 1}\otimes P_{k}E_{n\times nq}^{[j]}=\textbf{0}_{q\times 1}\otimes P_{k}E_{n\times nq}^{[j]}=\textbf{0}_{nq\times nq}$; moreover, $L_{k}\otimes P_{k}=(L_{k}\otimes I_{n})(I_{q}\otimes P_{k})$, $k\in\overline{\mathbb{Z}}_{+}$. Pre-multiplying both sides of (\[z1\]) by $-L_{k}\otimes I_{n}$ yields $(\mu_{k}(L_{k}\otimes I_{n})^{2}(I_{q}\otimes P_{k})+\kappa_{k}(L_{k}\otimes I_{n})(I_{q}\otimes P_{k}))\textbf{z}_{1}=(\mu_{k} L_{k}\otimes I_{n}+\kappa_{k} I_{nq})(L_{k}\otimes P_{k})\textbf{z}_{1}=\textbf{0}_{nq\times 1}$, $k\in\overline{\mathbb{Z}}_{+}$.
Since $\kappa_{k}/\mu_{k}\not\in{\mathrm{spec}}(-L_{k}\otimes I_{n})$ for every $k\in\overline{\mathbb{Z}}_{+}$, it follows that $\det(\mu_{k} L_{k}\otimes I_{n}+\kappa_{k} I_{nq})\neq0$, $k\in\overline{\mathbb{Z}}_{+}$, where $\det$ denotes the determinant. Hence, $(L_{k}\otimes P_{k})\textbf{z}_{1}=\textbf{0}_{nq\times 1}$, $k\in\overline{\mathbb{Z}}_{+}$. Note that $L_{k}\textbf{w}_{0}=\textbf{0}_{q\times 1}$. Next, it follows from Lemma \[lemma\_ker\] that $\ker(L_{k}\otimes P_{k})=\ker(L_{k}\otimes I_{n})+\ker(I_{q}\otimes P_{k})$. Then it follows that $\sum_{i=0}^{q-1-{\mathrm{rank}}(L_{k})}{\mathrm{span}}\{\textbf{w}_{i}\otimes\textbf{e}_{1},\ldots,\textbf{w}_{i}\otimes\textbf{e}_{n}\}=\ker(L_{k}\otimes I_{n})$, $k\in\overline{\mathbb{Z}}_{+}$. Similarly, $\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}{\mathrm{span}}\{\textbf{g}_{1}\otimes\textbf{j}_{r},\ldots,\textbf{g}_{q}\otimes\textbf{j}_{r}\}=\ker(I_{q}\otimes P_{k})$. Consequently, $\ker(L_{k}\otimes P_{k})=\sum_{i=0}^{q-1-{\mathrm{rank}}(L_{k})}{\mathrm{span}}\{\textbf{w}_{i}\otimes\textbf{e}_{1},\ldots,\textbf{w}_{i}\otimes\textbf{e}_{n}\}+\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}{\mathrm{span}}\{\textbf{g}_{1}\otimes\textbf{j}_{r},\ldots,\textbf{g}_{q}\otimes\textbf{j}_{r}\}$. Hence, $\textbf{z}_{1}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$, where $\alpha_{li},\beta_{sr}\in\mathbb{R}$ and $\alpha_{li}=\beta_{sr}=0$ for every $i=1,\ldots,n$ and every $s=1,\ldots,q$ if $\textbf{w}_{l}=\textbf{0}_{q\times 1}$ and $\textbf{j}_{r}=\textbf{0}_{n\times 1}$ for some $l\in\{1,\ldots,q-1-{\mathrm{rank}}(L_{k})\}$ and some $r\in\{1,\ldots,n-{\mathrm{rank}}(P_{k})\}$.
Substituting this $\textbf{z}_{1}$ into the left-hand side of (\[z1\]) yields $(-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k}+\kappa_{k} W_{k}^{[j]})\textbf{z}_{1}=\kappa_{k}(W_{k}^{[j]}-I_{q}\otimes P_{k})\textbf{z}_{1}=\kappa_{k}(W_{k}^{[j]}-I_{q}\otimes P_{k})(\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r})=\kappa_{k}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}W_{k}^{[j]}(\textbf{w}_{l}\otimes\textbf{e}_{i})-\kappa_{k}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(I_{q}\otimes P_{k})(\textbf{w}_{l}\otimes\textbf{e}_{i})+\kappa_{k}\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\\\beta_{sr}W_{k}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})-\kappa_{k}\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(I_{q}\otimes P_{k})(\textbf{g}_{s}\otimes\textbf{j}_{r})$. Note that it follows from $ii$) of Lemma \[lemma\_EW\] that $W_{k}^{[j]}(\textbf{w}_{0}\otimes\textbf{e}_{i})=(I_{q}\otimes P_{k})(\textbf{w}_{0}\otimes\textbf{e}_{i})$ for every $j=1,\ldots,q$ and every $i=1,\ldots,n$. 
Let $P_{k}(i,j)$ denote the $(i,j)$th entry of $P_{k}$, then it follows from $ii$) of Lemma \[lemma\_EW\] that $$\begin{aligned} &&\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}W_{k}^{[j]}(\textbf{w}_{l}\otimes\textbf{e}_{i})-\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(I_{q}\otimes P_{k})(\textbf{w}_{l}\otimes\textbf{e}_{i})\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(W_{k}^{[j]}(\textbf{w}_{l}\otimes\textbf{e}_{i})-(I_{q}\otimes P_{k})(\textbf{w}_{l}\otimes\textbf{e}_{i}))\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(w_{lj}(I_{q}\otimes P_{k})(\textbf{w}_{0}\otimes\textbf{e}_{i})-(I_{q}\otimes P_{k})(\textbf{w}_{l}\otimes\textbf{e}_{i}))\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(I_{q}\otimes P_{k})((w_{lj}\textbf{w}_{0}-\textbf{w}_{l})\otimes\textbf{e}_{i})\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(w_{lj}\textbf{w}_{0}-\textbf{w}_{l})\otimes P_{k}\textbf{e}_{i}\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}(w_{lj}\textbf{w}_{0}-\textbf{w}_{l})\otimes\small\left[\begin{array}{c} \alpha_{li}P_{k}(1,i)\\ \vdots\\ \alpha_{li}P_{k}(n,i)\\ \end{array}\right]\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}(w_{lj}\textbf{w}_{0}-\textbf{w}_{l})\otimes\small\left[\begin{array}{c} \sum_{i=1}^{n}\alpha_{li}P_{k}(1,i)\\ \vdots\\ \sum_{i=1}^{n}\alpha_{li}P_{k}(n,i)\\ \end{array}\right]\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}(w_{lj}\textbf{w}_{0}-\textbf{w}_{l})\otimes\Big(\sum_{s=1}^{n}\Big(\sum_{i=1}^{n}\alpha_{li}P_{k}(s,i)\Big)\textbf{e}_{s}\Big)\nonumber\\ &&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{s=1}^{n}\Big(\sum_{i=1}^{n}\alpha_{li}P_{k}(s,i)\Big)(w_{lj}\textbf{w}_{0}-\textbf{w}_{l})\otimes\textbf{e}_{s}\nonumber\\ 
&&=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{s=1}^{n}\Big(\sum_{i=1}^{n}\alpha_{li}P_{k}(s,i)\Big)w_{lj}\textbf{w}_{0}\otimes\textbf{e}_{s}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{s=1}^{n}\Big(-\sum_{i=1}^{n}\alpha_{li}P_{k}(s,i)\Big)\textbf{w}_{l}\otimes\textbf{e}_{s}.\end{aligned}$$ Moreover, it follows from $iii$) of Lemma \[lemma\_EW\] that $$\begin{aligned} &&\kappa_{k}\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}W_{k}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})-\kappa_{k}\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(I_{q}\otimes P_{k})(\textbf{g}_{s}\otimes\textbf{j}_{r})\nonumber\\ &&=-\kappa_{k}\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes P_{k}\textbf{j}_{r}=\textbf{0}_{nq\times 1}.\end{aligned}$$ Note that $\textbf{w}_{l}\otimes\textbf{e}_{s}$, $l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k})$, $s=1,\ldots,n$, are linearly independent. Hence, $\textbf{z}_{1}$ satisfies (\[z1\]) if and only if $\sum_{i=1}^{n}\alpha_{li}P_{k}(s,i)=0$ for every $s=1,\ldots,n$ and every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$, which is equivalent to $$\begin{aligned} \label{Pka} P_{k}\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]=\textbf{0}_{n\times 1}\end{aligned}$$ for every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. Thus, $\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]=(I_{n}-P_{k}^{+}P_{k})(\sum_{i=1}^{n}\gamma_{li}\textbf{e}_{i})$, where $\gamma_{li}\in\mathbb{R}$, $i=1,\ldots,n$, $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$, are arbitrary. In other words, $\alpha_{li}=\sum_{m=1}^{n}\gamma_{lm}\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m}$ for every $i=1,\ldots,n$ and every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$.
In this case, we have $\textbf{z}_{1}=\sum_{i=1}^{n}\alpha_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})(\textbf{w}_{l}\otimes\textbf{e}_{i})+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$, where $\alpha_{0i},\beta_{sr},\gamma_{lm}\in\mathbb{R}$ are arbitrary. Note that by $iii$) of Lemma \[lemma\_EW\], $\textbf{z}_{3}=E_{n\times nq}^{[j]}\textbf{z}_{1}=\sum_{i=1}^{n}\alpha_{0i}E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\sum_{m=1}^{n}\\\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})E_{n\times nq}^{[j]}(\textbf{w}_{l}\otimes\textbf{e}_{i})+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}E_{n\times nq}^{[j]}(\textbf{g}_{s}\otimes\textbf{j}_{r})=\sum_{i=1}^{n}\alpha_{0i}\textbf{e}_{i}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\\\sum_{i=1}^{n}\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})w_{lj}\textbf{e}_{i}+\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}\textbf{j}_{r}$ for every $j=1,\ldots,q$. 
Thus, $\ker(A_{k}^{[j]})=\{[\sum_{i=1}^{n}\alpha_{0i}(\textbf{w}_{0}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\\\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\alpha_{0i}\textbf{e}_{i}^{\mathrm{T}}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})w_{lj}\textbf{e}_{i}^{\mathrm{T}}+\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}\\\textbf{j}_{r}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{0i},\beta_{sr},\gamma_{lm}\in\mathbb{R},i=1,\ldots,n,s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k}),l=1,\ldots,q-1-{\mathrm{rank}}(L_{k}),m=1,\ldots,n\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. Let $S_{1}=\{[\sum_{i=1}^{n}\alpha_{0i}(\textbf{w}_{0}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\alpha_{0i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{0i}\in\mathbb{R},i=1,\ldots,n\}$, $S_{2}=\{[\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\\\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\sum_{m=1}^{n}\gamma_{lm}(\textbf{e}_{i}^{\mathrm{T}}(I_{n}-P_{k}^{+}P_{k})\textbf{e}_{m})w_{lj}\textbf{e}_{i}^{\mathrm{T}}:\forall\gamma_{lm}\in\mathbb{R},l=1,\ldots,q-1-{\mathrm{rank}}(L_{k}),m=1,\ldots,n\}$, and $S_{3}=\{[\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times 
nq},\\\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}\textbf{j}_{r}^{\mathrm{T}}]^{\mathrm{T}}:\forall\beta_{sr}\in\mathbb{R},s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$. Clearly $\ker(A_{k}^{[j]})=S_{1}+S_{2}+S_{3}$ and $S_{i}$ is a subspace for every $i=1,2,3$. Furthermore, note that $S_{1}\cup S_{2}=S_{1}+S_{2}$ is a subspace. Hence, it follows from Lemma \[lemma\_S\] that $\dim\ker(A_{k}^{[j]})=\dim S_{1}+\dim S_{2}+\dim S_{3}-\dim(S_{1}\cap S_{2})-\dim(S_{2}\cap S_{3})-\dim(S_{3}\cap S_{1})+\dim(S_{1}\cap S_{2}\cap S_{3})$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. Note that $\dim S_{1}=n$ and $\dim S_{3}=q(n-{\mathrm{rank}}(P_{k}))$. To determine $\dim S_{2}$, it first follows from (\[Pka\]) that $\dim\left\{\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]:P_{k}\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]=\textbf{0}_{n\times 1}\right\}=\dim\ker(P_{k})=n-{\mathrm{rank}}(P_{k})$ for every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. Since $S_{2}=\{[\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}w_{lj}\textbf{e}_{i}^{\mathrm{T}}:\alpha_{li}\in\mathbb{R},l=1,\ldots,q-1-{\mathrm{rank}}(L_{k}),i=1,\ldots,n,\,\,{\mathrm{satisfy}}\,\,(\ref{Pka})\}$, it follows that $\dim S_{2}=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\dim\left\{\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]:P_{k}\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]=\textbf{0}_{n\times 1}\right\}=(q-1-{\mathrm{rank}}(L_{k}))(n-{\mathrm{rank}}(P_{k}))$. 
Let $S_{4}=\{[\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}w_{lj}\textbf{e}_{i}^{\mathrm{T}}:\forall\alpha_{li}\in\mathbb{R},l=1,\ldots,q-1-{\mathrm{rank}}(L_{k}),i=1,\ldots,n\}$. Clearly $S_{2}\subseteq S_{4}$. Next, since $\textbf{w}_{l}\otimes\textbf{e}_{i}$, $l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k})$, $i=1,\ldots,n$, are linearly independent, it follows that $\sum_{i=1}^{n}\alpha_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}-\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}=\textbf{0}_{nq\times 1}$ if and only if $\alpha_{li}=0$ for every $l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k})$ and every $i=1,\ldots,n$. Hence, $S_{1}\cap S_{4}=\{\textbf{0}_{(2nq+n)\times 1}\}$. Consequently, $\{\textbf{0}_{(2nq+n)\times 1}\}\subseteq S_{1}\cap S_{2}\subseteq S_{1}\cap S_{4}=\{\textbf{0}_{(2nq+n)\times 1}\}$ and $\{\textbf{0}_{(2nq+n)\times 1}\}\subseteq S_{1}\cap S_{2}\cap S_{3}\subseteq S_{1}\cap S_{2}\subseteq S_{1}\cap S_{4}=\{\textbf{0}_{(2nq+n)\times 1}\}$, which imply that $S_{1}\cap S_{2}=\{\textbf{0}_{(2nq+n)\times 1}\}$ and $S_{1}\cap S_{2}\cap S_{3}=\{\textbf{0}_{(2nq+n)\times 1}\}$. Hence, $\dim(S_{1}\cap S_{2})=0$ and $\dim(S_{1}\cap S_{2}\cap S_{3})=0$. On the other hand, note that $\sum_{i=1}^{n}\alpha_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}=\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$ for some $\alpha_{0i}$ and $\beta_{sr}$ is equivalent to $\sum_{i=1}^{n}\alpha_{0i}(I_{q}\otimes P_{k})(\textbf{w}_{0}\otimes\textbf{e}_{i})=\textbf{0}_{nq\times 1}$ due to the fact that $\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}{\mathrm{span}}\{\textbf{g}_{1}\otimes\textbf{j}_{r},\ldots,\textbf{g}_{q}\otimes\textbf{j}_{r}\}=\ker(I_{q}\otimes P_{k})$.
Thus, $\textbf{0}_{nq\times 1}=\sum_{i=1}^{n}\alpha_{0i}(I_{q}\otimes P_{k})(\textbf{w}_{0}\otimes\textbf{e}_{i})=\sum_{i=1}^{n}\alpha_{0i}\textbf{w}_{0}\otimes P_{k}\textbf{e}_{i}=\textbf{w}_{0}\otimes P_{k}\Big(\sum_{i=1}^{n}\alpha_{0i}\textbf{e}_{i}\Big)=\textbf{w}_{0}\otimes P_{k}\small\left[\begin{array}{c} \alpha_{01}\\ \vdots\\ \alpha_{0n}\\ \end{array}\right]$, which is equivalent to $P_{k}\small\left[\begin{array}{c} \alpha_{01}\\ \vdots\\ \alpha_{0n}\\ \end{array}\right]=\textbf{0}_{n\times 1}$. Hence, $\dim(S_{1}\cap S_{3})=\dim\left\{\small\left[\begin{array}{c} \alpha_{01}\\ \vdots\\ \alpha_{0n} \end{array}\right]:P_{k}\small\left[\begin{array}{c} \alpha_{01}\\ \vdots\\ \alpha_{0n} \end{array}\right]=\textbf{0}_{n\times 1}\right\}=n-{\mathrm{rank}}(P_{k})$. Likewise, $\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}=\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$ for some $\alpha_{li}$ and $\beta_{sr}$ is equivalent to $\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(I_{q}\otimes P_{k})(\textbf{w}_{l}\otimes\textbf{e}_{i})=\textbf{0}_{nq\times 1}$. Thus, $\textbf{0}_{nq\times 1}=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes P_{k}\textbf{e}_{i}=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\textbf{w}_{l}\otimes P_{k}(\sum_{i=1}^{n}\alpha_{li}\textbf{e}_{i})=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\textbf{w}_{l}\otimes P_{k}\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]=\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{s=1}^{n}(\sum_{i=1}^{n}\alpha_{li}P_{k}(s,i))\textbf{w}_{l}\otimes\textbf{e}_{s}$, which is equivalent to $\sum_{i=1}^{n}\alpha_{li}P_{k}(s,i)=0$ for every $s=1,\ldots,n$ and every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. Hence, (\[Pka\]) holds for every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. 
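These manipulations repeatedly use the mixed-product rule $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$, here in the form $(I_{q}\otimes P_{k})(\textbf{w}\otimes\textbf{e})=\textbf{w}\otimes P_{k}\textbf{e}$. A minimal numerical check with generic matrices (not the paper's $L_{k},P_{k}$):

```python
import numpy as np

# Mixed-product rule: (A (x) B)(C (x) D) = (AC) (x) (BD); used above as
# (I_q (x) P)(w (x) e) = w (x) (P e).
q, n = 3, 2
P = np.array([[2.0, 1.0],
              [1.0, 3.0]])
w = np.array([[1.0], [-1.0], [2.0]])       # a generic q-vector
e = np.array([[1.0], [0.0]])               # basis vector e_1
lhs = np.kron(np.eye(q), P) @ np.kron(w, e)
rhs = np.kron(w, P @ e)
assert np.allclose(lhs, rhs)
print("mixed-product rule verified")
```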
Thus, $\dim(S_{2}\cap S_{3})=\dim S_{2}=(q-1-{\mathrm{rank}}(L_{k}))(n-{\mathrm{rank}}(P_{k}))$. Now, ${\mathrm{def}}(A_{k}^{[j]})=n+(q-1-{\mathrm{rank}}(L_{k}))(n-{\mathrm{rank}}(P_{k}))+q(n-{\mathrm{rank}}(P_{k}))-0-(q-1-{\mathrm{rank}}(L_{k}))(n-{\mathrm{rank}}(P_{k}))-(n-{\mathrm{rank}}(P_{k}))+0=n+(q-1)(n-{\mathrm{rank}}(P_{k}))$. Therefore, it follows from Corollary 2.5.5 of [@Bernstein:2009 p. 105] that ${\mathrm{rank}}(A_{k}^{[j]})=2nq+n-{\mathrm{def}}(A_{k}^{[j]})=2nq-(q-1)(n-{\mathrm{rank}}(P_{k}))$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. $iv)$ If $\mu_{k}\neq0$ and $\kappa_{k}=0$, then $\textbf{z}_{2}=\textbf{0}_{nq\times 1}$, $-\mu_{k}(L_{k}\otimes P_{k})\textbf{z}_{1}=\textbf{0}_{nq\times 1}$, and $\textbf{z}_{3}$ in $\ker(A_{k}^{[j]})$ can be chosen arbitrarily in $\mathbb{R}^{n}$. Thus, $\textbf{z}_{3}$ can be represented as $\textbf{z}_{3}=\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}$, where $\gamma_{i}\in\mathbb{R}$. In this case, since $(L_{k}\otimes P_{k})\textbf{z}_{1}=\textbf{0}_{nq\times 1}$, $k\in\overline{\mathbb{Z}}_{+}$, it follows from the similar arguments as in the proof of $iii$) that $\textbf{z}_{1}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$, where $\alpha_{li},\beta_{sr}\in\mathbb{R}$. Therefore, $\ker(A_{k}^{[j]})=\{[\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{li},\beta_{sr},\gamma_{i}\in\mathbb{R},i=1,\ldots,n,l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k}),s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. 
Again, let $\mathcal{S}_{1}=\{[\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\textbf{0}_{1\times n}]^{\mathrm{T}}:\forall\alpha_{li}\in\mathbb{R},i=1,\ldots,n,l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k})\}$, $\mathcal{S}_{2}=\{[\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\textbf{0}_{1\times nq},\textbf{0}_{1\times n}]^{\mathrm{T}}:\forall\beta_{sr}\in\mathbb{R},s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$, and $\mathcal{S}_{3}=\{[\textbf{0}_{1\times nq},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\gamma_{i}\in\mathbb{R},i=1,\ldots,n\}$. Clearly $\ker(A_{k}^{[j]})=\mathcal{S}_{1}+\mathcal{S}_{2}+\mathcal{S}_{3}$ and $\mathcal{S}_{i}$ is a subspace for every $i=1,2,3$. Next, note that $\mathcal{S}_{1}\cup\mathcal{S}_{3}=\mathcal{S}_{1}+\mathcal{S}_{3}$ is a subspace, $\dim\mathcal{S}_{1}=n(q-{\mathrm{rank}}(L_{k}))$, $\dim\mathcal{S}_{2}=q(n-{\mathrm{rank}}(P_{k}))$, and $\dim\mathcal{S}_{3}=n$. Furthermore, note that $\mathcal{S}_{1}\cap\mathcal{S}_{3}=\{\textbf{0}_{(2nq+n)\times 1}\}$, $\mathcal{S}_{2}\cap\mathcal{S}_{3}=\{\textbf{0}_{(2nq+n)\times 1}\}$, and $\mathcal{S}_{1}\cap\mathcal{S}_{2}\cap\mathcal{S}_{3}=\{\textbf{0}_{(2nq+n)\times 1}\}$. Using the similar arguments as in the proof of $iii$), it follows that $\dim(\mathcal{S}_{1}\cap\mathcal{S}_{2})=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\dim\left\{\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]:P_{k}\small\left[\begin{array}{c} \alpha_{l1}\\ \vdots\\ \alpha_{ln} \end{array}\right]=\textbf{0}_{n\times 1}\right\}=(q-{\mathrm{rank}}(L_{k}))(n-{\mathrm{rank}}(P_{k}))$. 
Now it follows from Lemma \[lemma\_S\] that $\dim\ker(A_{k}^{[j]})=\dim(\mathcal{S}_{1}+\mathcal{S}_{2}+\mathcal{S}_{3})=\dim\mathcal{S}_{1}+\dim\mathcal{S}_{2}+\dim\mathcal{S}_{3}-\dim(\mathcal{S}_{1}\cap\mathcal{S}_{2})-\dim(\mathcal{S}_{2}\cap\mathcal{S}_{3})-\dim(\mathcal{S}_{3}\cap\mathcal{S}_{1})+\dim(\mathcal{S}_{1}\cap\mathcal{S}_{2}\cap\mathcal{S}_{3})=n(q-{\mathrm{rank}}(L_{k}))+q(n-{\mathrm{rank}}(P_{k}))+n-(q-{\mathrm{rank}}(L_{k}))(n-{\mathrm{rank}}(P_{k}))-0-0+0=nq+n-{\mathrm{rank}}(L_{k}){\mathrm{rank}}(P_{k})$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. Therefore, it follows from Corollary 2.5.5 of [@Bernstein:2009 p. 105] that ${\mathrm{rank}}(A_{k}^{[j]})=2nq+n-{\mathrm{def}}(A_{k}^{[j]})=nq+{\mathrm{rank}}(L_{k}){\mathrm{rank}}(P_{k})$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. It follows from Lemma \[lemma\_Arank\] that 0 is an eigenvalue of $A_{k}^{[j]}$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Next, we further investigate some relationships of the null spaces between a row-addition transformed matrix of $A_{k}^{[j]}$ and $A_{k}^{[j]}$ itself in order to unveil an important property of this eigenvalue 0 later. \[lemma\_Ah\] Consider the (possibly infinitely many) matrices $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$, $j=1,\ldots,q$, $k=0,1,2,\ldots$, where $A_{k}^{[j]}$ is defined by (\[Amatrix\]) in Lemma \[lemma\_Arank\], $$\begin{aligned} \label{Ac} A_{{\mathrm{c}}k}=\small\left[\begin{array}{ccc} -\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k} & -\eta_{k} L_{k}\otimes P_{k} & \kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k} \\ \textbf{0}_{nq\times nq} & \textbf{0}_{nq\times nq} & \textbf{0}_{nq\times n} \\ \textbf{0}_{n\times nq} & \textbf{0}_{n\times nq} & \textbf{0}_{n\times n} \\ \end{array}\right],\end{aligned}$$ and $\mu_{k},\kappa_{k},\eta_{k},h_{k}\geq0$, $k\in\overline{\mathbb{Z}}_{+}$. 
Then $\ker(A_{k}^{[j]})=\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$ and $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. To show that $\ker(A_{k}^{[j]})=\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$, note that for every $j=1,\ldots,q$, $\ker(A_{k}^{[j]})=\{[\textbf{z}_{1}^{\rm{T}},\textbf{z}_{2}^{\rm{T}},\textbf{z}_{3}^{\rm{T}}]^{\rm{T}}\in\mathbb{R}^{2nq+n}:\textbf{z}_{2}=\textbf{0}_{nq\times 1},-\mu_{k} (L_{k}\otimes P_{k})\textbf{z}_{1}-\kappa_{k}(I_{q}\otimes P_{k})\textbf{z}_{1}-\eta_{k}(L_{k}\otimes P_{k})\textbf{z}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{z}_{3}=\textbf{0}_{nq\times 1}, \kappa_{k}E_{n\times nq}^{[j]}\textbf{z}_{1}-\kappa_{k}\textbf{z}_{3}=\textbf{0}_{n\times 1}\}$, $k\in\overline{\mathbb{Z}}_{+}$. Alternatively, for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$, let $\textbf{y}=[\textbf{y}_{1}^{\rm{T}},\textbf{y}_{2}^{\rm{T}},\textbf{y}_{3}^{\rm{T}}]^{\rm{T}}\in\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$, where $\textbf{y}_{1},\textbf{y}_{2}\in\mathbb{R}^{nq}$ and $\textbf{y}_{3}\in\mathbb{R}^{n}$, we have $$\begin{aligned} h_{k}(-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{y}_{1}+h_{k}(-\eta_{k} L_{k}\otimes P_{k})\textbf{y}_{2}+\textbf{y}_{2}+h_{k}(\kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k})\textbf{y}_{3}=\textbf{0}_{nq\times 1},\label{y_1p}\\ (-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{y}_{1}+(-\eta_{k} L_{k}\otimes P_{k})\textbf{y}_{2}+(\kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k})\textbf{y}_{3}=\textbf{0}_{nq\times 1},\label{y_2p}\\ \kappa_{k} E_{n\times nq}^{[j]}\textbf{y}_{1}-\kappa_{k}\textbf{y}_{3}=\textbf{0}_{n\times 1}.\label{y_3p}\end{aligned}$$ Substituting (\[y\_2p\]) into (\[y\_1p\]) yields $\textbf{y}_{2}=\textbf{0}_{nq\times 1}$. 
Together with (\[y\_2p\]) and (\[y\_3p\]), we have $\textbf{y}\in\ker(A_{k}^{[j]})$, which implies that $\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})\subseteq\ker(A_{k}^{[j]})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. On the other hand, if $\textbf{y}\in\ker(A_{k}^{[j]})$, then $\textbf{y}_{2}=\textbf{0}_{nq\times 1}$, $-\mu_{k} (L_{k}\otimes P_{k})\textbf{y}_{1}-\kappa_{k}(I_{q}\otimes P_{k})\textbf{y}_{1}-\eta_{k}(L_{k}\otimes P_{k})\textbf{y}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{y}_{3}=\textbf{0}_{nq\times 1}$, and $\kappa_{k}E_{n\times nq}^{[j]}\textbf{y}_{1}-\kappa_{k}\textbf{y}_{3}=\textbf{0}_{n\times 1}$. Clearly in this case, (\[y\_1p\])–(\[y\_3p\]) hold, i.e., $\textbf{y}\in\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$, which implies that $\ker(A_{k}^{[j]})\subseteq\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Thus, $\ker(A_{k}^{[j]})=\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Finally, to show that $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})$, note that $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})=\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Let $\textbf{y}\in\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$; then $(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})\textbf{y}\in\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})=\ker(A_{k}^{[j]})$, and hence, $\textbf{y}\in\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$, which implies that $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})\subseteq\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. 
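The two inclusions rest on a purely set-level observation: whenever $\ker(X)=\ker(Y)$, one has $\ker(XY)=\ker(Y^{2})$. A toy numerical check with hypothetical matrices $X,Y$ sharing a kernel (kernel equality is tested through ranks of stacked matrices):

```python
import numpy as np

# If ker(X) = ker(Y), then ker(X @ Y) = ker(Y @ Y): x is in ker(XY) iff
# Yx is in ker(X) = ker(Y) iff x is in ker(Y^2).
X = np.array([[1.0, 1.0],
              [0.0, 0.0]])
Y = np.array([[2.0, 2.0],
              [0.0, 0.0]])                 # ker(X) = ker(Y) = span{(1, -1)}
r = np.linalg.matrix_rank
assert r(np.vstack([X, Y])) == r(X) == r(Y)                    # ker(X) = ker(Y)
assert r(np.vstack([X @ Y, Y @ Y])) == r(X @ Y) == r(Y @ Y)    # ker(XY) = ker(Y^2)
print("kernel identity verified")
```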
Alternatively, let $\textbf{z}\in\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$, then $(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})\textbf{z}\in\ker(A_{k}^{[j]})=\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$, and hence, $\textbf{z}\in\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})$, which implies that $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))\subseteq\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Thus, $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Next, we investigate when 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. Recall from Definition 5.5.4 of [@Bernstein:2009 p. 322] that $0$ is semisimple if its geometric multiplicity and algebraic multiplicity are equal. \[lemma\_semisimple\] Consider the (possibly infinitely many) matrices $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$, $j=1,\ldots,q$, $k=0,1,2,\ldots$, defined in Lemma \[lemma\_Ah\], where $\mu_{k},\kappa_{k},\eta_{k},h_{k}\geq0$, $k\in\overline{\mathbb{Z}}_{+}$. - If $\kappa_{k}=0$ and $\mu_{k}=0$, then ${\mathrm{rank}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})=nq$ and 0 is not a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. - If $\kappa_{k}=0$ and $\mu_{k}\neq0$, then ${\mathrm{rank}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})=nq+{\mathrm{rank}}(L_{k}){\mathrm{rank}}(P_{k})$ and 0 is not a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. - If $\kappa_{k}\neq0$, then ${\mathrm{rank}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})=2nq-(q-1)(n-{\mathrm{rank}}(P_{k}))$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. 
In this case, for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$, 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ if and only if ${\mathrm{rank}}(P_{k})=n$. First, it follows from Lemma \[lemma\_Ah\] that $\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})=\ker(A_{k}^{[j]})$, and hence ${\mathrm{def}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})={\mathrm{def}}(A_{k}^{[j]})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Thus, ${\mathrm{rank}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})=2nq+n-{\mathrm{def}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})=2nq+n-{\mathrm{def}}(A_{k}^{[j]})={\mathrm{rank}}(A_{k}^{[j]})$ for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Therefore, all the rank conclusions on $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ in $i$)–$iii$) directly follow from Lemma \[lemma\_Arank\]. Next, it follows from these rank conclusions on $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ that $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ has an eigenvalue 0 for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$. Now we want to further investigate whether 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ or not for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. To this end, we need to study the relationship between $\ker(A_{k}^{[j]})$ and $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. 
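To make case $iii$) concrete, the following sketch assembles a toy instance ($q=n=2$) and checks the rank and semisimplicity claims numerically. The block layout of $A_{k}^{[j]}$ and the choice $E_{n\times nq}^{[j]}=\textbf{e}_{j}^{\mathrm{T}}\otimes I_{n}$ are reconstructed here from the kernel equations in the proofs above, since (\[Amatrix\]) and Lemma \[lemma\_EW\] give the authoritative definitions; they should therefore be read as assumptions of this sketch:

```python
import numpy as np

# Toy instance of case iii): kappa != 0 and rank(P) = n, with q = n = 2.
q, n = 2, 2
mu, kappa, eta, h = 1.0, 1.0, 1.0, 0.5
L = np.array([[1.0, -1.0],
              [-1.0, 1.0]])                # Laplacian: L @ 1 = 0, rank(L) = 1
P = np.array([[2.0, 1.0],
              [0.0, 1.0]])                 # rank(P) = n (full rank)
j = 0
E = np.kron(np.eye(q)[[j], :], np.eye(n))  # assumed E^{[j]}: selects the j-th n-block
Z = np.zeros
B1 = -mu * np.kron(L, P) - kappa * np.kron(np.eye(q), P)
B2 = -eta * np.kron(L, P)
B3 = kappa * np.kron(np.ones((q, 1)), P)
# Block form of A_k^{[j]} reconstructed from the kernel equations in the proof.
A = np.block([[Z((n*q, n*q)), np.eye(n*q), Z((n*q, n))],
              [B1,            B2,          B3],
              [kappa * E,     Z((n, n*q)), -kappa * np.eye(n)]])
Ac = np.block([[B1, B2, B3],
               [Z((n*q, 2*n*q + n))],
               [Z((n, 2*n*q + n))]])
M = A + h * Ac
r = np.linalg.matrix_rank
assert r(A) == 2*n*q - (q - 1)*(n - r(P))          # rank formula of case iii)
assert r(M) == r(A) == r(np.vstack([A, M]))        # ker(A) = ker(A + h*Ac)
assert r(M @ M) == r(M)                            # ker(M^2) = ker(M): 0 semisimple
print("rank(A) =", r(A))
```

For these values the asserted rank is $2nq=8$, consistent with $2nq-(q-1)(n-{\mathrm{rank}}(P_{k}))$ when ${\mathrm{rank}}(P_{k})=n$.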
Noting that $(L_{k}\otimes P_{k})(\textbf{1}_{q\times 1}\otimes P_{k})=(L_{k}\textbf{1}_{q\times 1})\otimes P_{k}^{2}=\textbf{0}_{nq\times n}$ and by $iii$) of Lemma \[lemma\_EW\], $E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes P_{k})=P_{k}$, we have $$\begin{aligned} &&\hspace{-2em}(A_{k}^{[j]})^{2}=\small\left[\begin{array}{ccc} -\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k} & -\eta_{k} L_{k}\otimes P_{k} & \kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k} \\ \eta_{k}\mu_{k}(L_{k}\otimes P_{k})^{2}+\eta_{k}\kappa_{k} L_{k}\otimes P_{k}^{2}+\kappa_{k}^{2}W_{k}^{[j]} & \eta_{k}^{2}(L_{k}\otimes P_{k})^{2}-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k} & -\kappa_{k}^{2} \textbf{1}_{q\times 1}\otimes P_{k} \\ -\kappa_{k}^{2} E_{n\times nq}^{[j]} & \kappa_{k} E_{n\times nq}^{[j]} & \kappa_{k}^{2} I_{n} \\ \end{array}\right],\\ &&\hspace{-2em}A_{k}^{[j]}A_{{\mathrm{c}}k}=\small\left[\begin{array}{ccc} \textbf{0}_{nq\times nq} & \textbf{0}_{nq\times nq} & \textbf{0}_{nq\times n} \\ \mu_{k}^{2} (L_{k}\otimes P_{k})^{2}+2\mu_{k}\kappa_{k}(L_{k}\otimes P_{k}^{2})+\kappa_{k}^{2} (I_{q}\otimes P_{k})^{2} & \mu_{k}\eta_{k}(L_{k}\otimes P_{k})^{2}+\kappa_{k}\eta_{k}L_{k}\otimes P_{k}^{2} & -\kappa_{k}^{2} \textbf{1}_{q\times 1}\otimes P_{k}^{2} \\ -\kappa_{k}\mu_{k}E_{n\times nq}^{[j]}(L_{k}\otimes P_{k})-\kappa_{k}^{2}E_{n\times nq} ^{[j]}(I_{q}\otimes P_{k}) & -\kappa_{k}\eta_{k}E_{n\times nq}^{[j]}(L_{k}\otimes P_{k}) & \kappa_{k}^{2}P_{k} \\ \end{array}\right].\end{aligned}$$ Thus, for every $j=1,\ldots,q$ and every $k\in\overline{\mathbb{Z}}_{+}$, let $\textbf{y}=[\textbf{y}_{1}^{\rm{T}},\textbf{y}_{2}^{\rm{T}},\textbf{y}_{3}^{\rm{T}}]^{\rm{T}}\in\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$, where $\textbf{y}_{1},\textbf{y}_{2}\in\mathbb{R}^{nq}$ and $\textbf{y}_{3}\in\mathbb{R}^{n}$, we have $$\begin{aligned} (-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{y}_{1}-(\eta_{k} L_{k}\otimes P_{k})\textbf{y}_{2}+(\kappa_{k} 
\textbf{1}_{q\times 1}\otimes P_{k})\textbf{y}_{3}=\textbf{0}_{nq\times 1},\label{y_1}\\ (\eta_{k}\mu_{k}(L_{k}\otimes P_{k})^{2}+\eta_{k}\kappa_{k} L_{k}\otimes P_{k}^{2}+\kappa_{k}^{2}W_{k}^{[j]})\textbf{y}_{1}+(\eta_{k}^{2}(L_{k}\otimes P_{k})^{2}-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{y}_{2}\nonumber\\ +(-\kappa_{k}^{2} \textbf{1}_{q\times 1}\otimes P_{k})\textbf{y}_{3}\nonumber\\ +h_{k}(\mu_{k}^{2} (L_{k}\otimes P_{k})^{2}+2\mu_{k}\kappa_{k}(L_{k}\otimes P_{k}^{2})+\kappa_{k}^{2} (I_{q}\otimes P_{k})^{2})\textbf{y}_{1}+h_{k}(\mu_{k}\eta_{k}(L_{k}\otimes P_{k})^{2}+\kappa_{k}\eta_{k}L_{k}\otimes P_{k}^{2})\textbf{y}_{2}\nonumber\\ +h_{k}(-\kappa_{k}^{2} \textbf{1}_{q\times 1}\otimes P_{k}^{2})\textbf{y}_{3}=\textbf{0}_{nq\times 1},\label{y_2}\\ -\kappa_{k}^{2} E_{n\times nq}^{[j]}\textbf{y}_{1}+\kappa_{k}E_{n\times nq}^{[j]}\textbf{y}_{2}+\kappa_{k}^{2}\textbf{y}_{3}+h_{k}(-\kappa_{k}\mu_{k}E_{n\times nq}^{[j]}(L_{k}\otimes P_{k})-\kappa_{k}^{2}E_{n\times nq} ^{[j]}(I_{q}\otimes P_{k}))\textbf{y}_{1}\nonumber\\ +h_{k}(-\kappa_{k}\eta_{k}E_{n\times nq}^{[j]}(L_{k}\otimes P_{k}))\textbf{y}_{2}+h_{k}\kappa_{k}^{2}P_{k}\textbf{y}_{3}=\textbf{0}_{n\times 1}.\label{y_3}\end{aligned}$$ Now we consider two cases on $\kappa_{k}$. *Case 1.* $\kappa_{k}=0$. 
In this case, (\[y\_3\]) becomes trivial and (\[y\_1\]) and (\[y\_2\]) become $$\begin{aligned} (-\mu_{k} L_{k}\otimes P_{k})\textbf{y}_{1}-(\eta_{k} L_{k}\otimes P_{k})\textbf{y}_{2}=\textbf{0}_{nq\times 1},\label{y_1k}\\ \eta_{k}\mu_{k}(L_{k}\otimes P_{k})^{2}\textbf{y}_{1}+(\eta_{k}^{2}(L_{k}\otimes P_{k})^{2}-\mu_{k} L_{k}\otimes P_{k})\textbf{y}_{2}\nonumber\\ +h_{k}\mu_{k}^{2} (L_{k}\otimes P_{k})^{2}\textbf{y}_{1}+h_{k}\mu_{k}\eta_{k}(L_{k}\otimes P_{k})^{2}\textbf{y}_{2}=\textbf{0}_{nq\times 1}.\label{y_2k}\end{aligned}$$ $i$) If $\mu_{k}=0$, then it follows from (\[y\_1k\]) and (\[y\_2k\]) that $-(\eta_{k}L_{k}\otimes P_{k})\textbf{y}_{2}=\textbf{0}_{nq\times 1}$ and $\eta_{k}^{2}(L_{k}\otimes P_{k})^{2}\textbf{y}_{2}=\textbf{0}_{nq\times 1}$. Hence, either $\eta_{k}=0$ or $(L_{k}\otimes P_{k})\textbf{y}_{2}=\textbf{0}_{nq\times 1}$. If $\eta_{k}=0$, then $\textbf{y}_{1},\textbf{y}_{2}\in\mathbb{R}^{nq}$ and $\textbf{y}_{3}\in\mathbb{R}^{n}$ can be chosen arbitrarily. Thus, $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\mathbb{R}^{2nq+n}$, and it follows from $i$) of Lemma \[lemma\_Arank\] that $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))\neq\ker(A_{k}^{[j]})$. By Lemma \[lemma\_Ah\], we have $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})\neq\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$. Now, by Proposition 5.5.8 of [@Bernstein:2009 p. 323], 0 is not semisimple. Alternatively, if $\eta_{k}\neq0$, then $(L_{k}\otimes P_{k})\textbf{y}_{2}=\textbf{0}_{nq\times 1}$ and $\textbf{y}_{1}\in\mathbb{R}^{nq}$ and $\textbf{y}_{3}\in\mathbb{R}^{n}$ can be chosen arbitrarily. Using the similar arguments as in the proof of $iii$) of Lemma \[lemma\_Arank\], it follows that $\textbf{y}_{2}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$, where $\alpha_{li},\beta_{sr}\in\mathbb{R}$. 
Hence, $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\{[\sum_{i=1}^{n}\sum_{s=1}^{q}\delta_{is}(\textbf{e}_{i}\otimes\textbf{e}_{s})^{\mathrm{T}},\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{li}\in\mathbb{R},\forall\delta_{is}\in\mathbb{R},\forall\beta_{sr}\in\mathbb{R},\forall\gamma_{i}\in\mathbb{R},i=1,\ldots,n,s=1,\ldots,q,l=0,\ldots,q-1-{\mathrm{rank}}(L_{k}),r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. Clearly it follows from $i$) of Lemma \[lemma\_Arank\] that $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))\neq\ker(A_{k}^{[j]})$. By Lemma \[lemma\_Ah\], we have $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})\neq\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$. Now, by Proposition 5.5.8 of [@Bernstein:2009 p. 323], 0 is not semisimple. $ii$) If $\mu_{k}\neq0$, then substituting (\[y\_1k\]) into (\[y\_2k\]) yields $-\mu_{k}(L_{k}\otimes P_{k})\textbf{y}_{2}=\textbf{0}_{nq\times 1}$. Substituting this equation into (\[y\_1k\]) yields $-\mu_{k}(L_{k}\otimes P_{k})\textbf{y}_{1}=\textbf{0}_{nq\times 1}$. Using the similar arguments as in the proof of $iii$) of Lemma \[lemma\_Arank\], it follows that $\textbf{y}_{1}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$ and $\textbf{y}_{2}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\gamma_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\delta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$, where $\alpha_{li},\beta_{sr},\gamma_{li},\delta_{sr}\in\mathbb{R}$. 
Note that $\textbf{y}_{3}\in\mathbb{R}^{n}$ can be chosen arbitrarily, and hence, $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\{[\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\gamma_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\delta_{sr}(\textbf{g}_{s}\otimes\textbf{j}_{r})^{\mathrm{T}},\sum_{i=1}^{n}\zeta_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\alpha_{li}\in\mathbb{R},\forall\beta_{sr}\in\mathbb{R},\forall\gamma_{li}\in\mathbb{R},\forall\delta_{sr}\in\mathbb{R},\forall\zeta_{i}\in\mathbb{R},i=1,\ldots,n,l=0,\ldots,q-1-{\mathrm{rank}}(L_{k}),s=1,\ldots,q,r=1,\ldots,n-{\mathrm{rank}}(P_{k})\}$ for every $j=1,\ldots,q$, $k\in\overline{\mathbb{Z}}_{+}$. Clearly it follows from $iv$) of Lemma \[lemma\_Arank\] that $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))\neq\ker(A_{k}^{[j]})$. By Lemma \[lemma\_Ah\], we have $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})\neq\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$. Now, by Proposition 5.5.8 of [@Bernstein:2009 p. 323], 0 is not semisimple. *Case 2.* $\kappa_{k}\neq0$. 
In this case, substituting (\[y\_1\]) into (\[y\_2\]) and (\[y\_3\]) yields $$\begin{aligned} (\eta_{k}\mu_{k}(L_{k}\otimes P_{k})^{2}+\eta_{k}\kappa_{k} L_{k}\otimes P_{k}^{2}+\kappa_{k}^{2}W_{k}^{[j]})\textbf{y}_{1}+(\eta_{k}^{2}(L_{k}\otimes P_{k})^{2}-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{y}_{2}\nonumber\\ +(-\kappa_{k}^{2} \textbf{1}_{q\times 1}\otimes P_{k})\textbf{y}_{3}\nonumber\\ +h_{k}(\mu_{k}^{2} (L_{k}\otimes P_{k})^{2}+\mu_{k}\kappa_{k}(L_{k}\otimes P_{k}^{2}))\textbf{y}_{1}+h_{k}\mu_{k}\eta_{k}(L_{k}\otimes P_{k})^{2}\textbf{y}_{2}=\textbf{0}_{nq\times 1},\label{y12p}\\ -\kappa_{k}^{2} E_{n\times nq}^{[j]}\textbf{y}_{1}+\kappa_{k}E_{n\times nq}^{[j]}\textbf{y}_{2}+\kappa_{k}^{2}\textbf{y}_{3}=\textbf{0}_{n\times 1}.\label{y13p}\end{aligned}$$ Note that $(L_{k}\otimes P_{k})W_{k}^{[j]}=(L_{k}\otimes P_{k})(\textbf{1}_{q\times 1}\otimes P_{k}E_{n\times nq}^{[j]})=L_{k}\textbf{1}_{q\times 1}\otimes P_{k}^{2}E_{n\times nq}^{[j]}=\textbf{0}_{q\times 1}\otimes P_{k}^{2}E_{n\times nq}^{[j]}=\textbf{0}_{nq\times nq}$. Pre-multiplying $-L_{k}\otimes P_{k}$ on both sides of (\[y\_1\]) yields $(\mu_{k}(L_{k}\otimes P_{k})^{2}+\kappa_{k} L_{k}\otimes P_{k}^{2})\textbf{y}_{1}+\eta_{k}(L_{k}\otimes P_{k})^{2}\textbf{y}_{2}=\textbf{0}_{nq\times 1}$. 
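The vanishing products used here and in the displays above all reduce to the zero-row-sum property $L_{k}\textbf{1}_{q\times 1}=\textbf{0}_{q\times 1}$ combined with the mixed-product rule. A quick check with toy matrices, assuming (as the identities in the text require) that $L$ has zero row sums:

```python
import numpy as np

# Laplacian annihilation: (L (x) P)(1 (x) P) = (L @ 1) (x) P^2 = 0,
# since a Laplacian L has zero row sums.
L = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  1.0,  0.0],
              [-1.0,  0.0,  1.0]])         # row sums are zero
P = np.array([[1.0, 2.0],
              [0.0, 3.0]])
ones = np.ones((3, 1))
assert np.allclose(L @ ones, 0)
assert np.allclose(np.kron(L, P) @ np.kron(ones, P), 0)
print("Laplacian annihilation verified")
```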
Substituting this equation into (\[y12p\]) yields $$\begin{aligned} \kappa_{k}^{2}W_{k}^{[j]}\textbf{y}_{1}+(-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{y}_{2}+(-\kappa_{k}^{2} \textbf{1}_{q\times 1}\otimes P_{k})\textbf{y}_{3}=\textbf{0}_{nq\times 1}.\label{y12pp}\end{aligned}$$ Finally, substituting (\[y13p\]) into (\[y\_1\]) and (\[y12pp\]) by eliminating $\textbf{y}_{3}$ yields $$\begin{aligned} (-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k}+\kappa_{k} W_{k}^{[j]})\textbf{y}_{1}-(\eta_{k} L_{k}\otimes P_{k}+W_{k}^{[j]})\textbf{y}_{2}=\textbf{0}_{nq\times 1},\label{y12}\\ (-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k}I_{q}\otimes P_{k}+\kappa_{k}W_{k}^{[j]})\textbf{y}_{2}=\textbf{0}_{nq\times 1}.\label{y21}\end{aligned}$$ $iii$) To show that 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ if ${\mathrm{rank}}(P_{k})=n$, we consider two cases on $\mu_{k}$. If $\mu_{k}=0$, then note that (\[y21\]) is identical to (\[z1p\]). Then it follows from the similar arguments as in the proof of $ii$) of Lemma \[lemma\_Arank\] that $\textbf{y}_{2}=\sum_{i=1}^{n}\alpha_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}-\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{jr}\textbf{g}_{j}\otimes\textbf{j}_{r}$ for some $\alpha_{i},\beta_{sr}\in\mathbb{R}$. Clearly $\textbf{y}_{2}\in\ker(L_{k}\otimes I_{n})+\ker(I_{q}\otimes P_{k})=\ker(L_{k}\otimes P_{k})$. 
Next, using this property of $\textbf{y}_{2}$ in $(\mu_{k}(L_{k}\otimes P_{k})^{2}+\kappa_{k} L_{k}\otimes P_{k}^{2})\textbf{y}_{1}+\eta_{k}(L_{k}\otimes P_{k})^{2}\textbf{y}_{2}=\textbf{0}_{nq\times 1}$ yields $(\mu_{k}(L_{k}\otimes P_{k})^{2}+\kappa_{k} L_{k}\otimes P_{k}^{2})\textbf{y}_{1}=(\mu_{k}L_{k}\otimes I_{n}+\kappa_{k}I_{nq})(L_{k}\otimes P_{k}^{2})\textbf{y}_{1}=\textbf{0}_{nq\times 1}$, i.e., $(L_{k}\otimes P_{k}^{2})\textbf{y}_{1}=(I_{q}\otimes P_{k})^{2}(L_{k}\otimes I_{n})\textbf{y}_{1}=\textbf{0}_{nq\times 1}$. Since by assumption $P_{k}$ is a full rank matrix, it follows that $I_{q}\otimes P_{k}$ is nonsingular. Hence, $(L_{k}\otimes I_{n})\textbf{y}_{1}=\textbf{0}_{nq\times 1}$. Substituting this relationship into (\[y12\]) yields $(-\kappa_{k} I_{q}\otimes P_{k}+\kappa_{k} W_{k}^{[j]})\textbf{y}_{1}-W_{k}^{[j]}\textbf{y}_{2}=\textbf{0}_{nq\times 1}$. Clearly it follows from (\[y21\]) that $W_{k}^{[j]}\textbf{y}_{2}=(I_{q}\otimes P_{k})\textbf{y}_{2}$ for every $j=1,\ldots,q$. Then $$\begin{aligned} \label{y1y2} (-\kappa_{k}I_{q}\otimes P_{k}+\kappa_{k}W_{k}^{[j]})\textbf{y}_{1}-(I_{q}\otimes P_{k})\textbf{y}_{2}=\textbf{0}_{nq\times 1}. \end{aligned}$$ Since $(L_{k}\otimes I_{n})\textbf{y}_{1}=\textbf{0}_{nq\times 1}$, it follows that $\textbf{y}_{1}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\gamma_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}$ for some $\gamma_{li}\in\mathbb{R}$. 
Now substituting these explicit expressions $\textbf{y}_{1}$ and $\textbf{y}_{2}$ into (\[y1y2\]) together with $iii$) of Lemma \[lemma\_EW\] yields $$\begin{aligned} \label{ldwe} (I_{q}\otimes P_{k})\Big(-\kappa_{k}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\gamma_{li}\textbf{w}_{l}\otimes \textbf{e}_{i}+\kappa_{k}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\gamma_{li}w_{lj}\textbf{w}_{0}\otimes \textbf{e}_{i}-\sum_{i=1}^{n}\alpha_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\Big)=\textbf{0}_{nq\times 1}.\end{aligned}$$ Note that $\textbf{w}_{l}\otimes \textbf{e}_{i}$, $l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k})$, $i=1,\ldots,n$, are linearly independent and $w_{0j}=1$, it follows from (\[ldwe\]) that $\gamma_{li}=0$ and $\alpha_{i}=0$ for every $l=1,2,\ldots,q-1-{\mathrm{rank}}(L_{k})$ and every $i=1,\ldots,n$. Finally, since $\ker(P_{k})=\{\textbf{0}_{n\times 1}\}$, it follows that $\textbf{j}_{r}=\textbf{0}_{n\times 1}$. Hence, $\textbf{y}_{2}=\textbf{0}_{nq\times 1}$ and $\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}$. Now it follows from (\[y13p\]) and $iii$) of Lemma \[lemma\_EW\] that $\textbf{y}_{3}=E_{n\times nq}^{[j]}\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{0i}E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=\sum_{i=1}^{n}\gamma_{0i}\textbf{e}_{i}$. Clearly such $\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{0i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, $\textbf{y}_{2}=\textbf{0}_{nq\times 1}$, and $\textbf{y}_{3}=\sum_{i=1}^{n}\gamma_{0i}\textbf{e}_{i}$ satisfy (\[y\_1\])–(\[y\_3\]). Thus, $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\{[\sum_{i=1}^{n}\gamma_{0i}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\gamma_{0i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\gamma_{0i}\in\mathbb{R},i=1,\ldots,n\}=\ker(A_{k}^{[j]})$, where the last step follows from $ii$) of Lemma \[lemma\_Arank\] with ${\mathrm{rank}}(P_{k})=n$. 
By Lemma \[lemma\_Ah\], we have $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})=\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$. Now, by Proposition 5.5.8 of [@Bernstein:2009 p. 323], 0 is semisimple. If $\mu_{k}\neq0$, then note that (\[y21\]) is identical to (\[z1\]). Next, it follows from the similar arguments as in the proof of $iii$) of Lemma \[lemma\_Arank\] that $\textbf{y}_{2}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$, where $\alpha_{li},\beta_{sr}\in\mathbb{R}$ and (\[Pka\]) holds. Since $P_{k}$ is a full rank matrix, it follows from (\[Pka\]) that $\alpha_{li}=0$ and $\textbf{j}_{r}=\textbf{0}_{n\times 1}$ for every $l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k})$, every $i=1,\ldots,n$, and every $r=1,\ldots,n-{\mathrm{rank}}(P_{k})$, which implies that $\textbf{y}_{2}=\textbf{0}_{nq\times 1}$. Again, it follows from the similar arguments as above that $(L_{k}\otimes I_{n})\textbf{y}_{1}=\textbf{0}_{nq\times 1}$ and hence, $\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, where $\gamma_{i}\in\mathbb{R}$. Then it follows from (\[y13p\]) and $iii$) of Lemma \[lemma\_EW\] that $\textbf{y}_{3}=E_{n\times nq}^{[j]}\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{i}E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}$. Clearly such $\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, $\textbf{y}_{2}=\textbf{0}_{nq\times 1}$, and $\textbf{y}_{3}=\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}$ satisfy (\[y\_1\])–(\[y\_3\]). 
Thus, $\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\{[\sum_{i=1}^{n}\gamma_{i}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}},\textbf{0}_{1\times nq},\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}^{\mathrm{T}}]^{\mathrm{T}}:\forall\gamma_{i}\in\mathbb{R},i=1,\ldots,n\}=\ker(A_{k}^{[j]})$, where the last step follows from $iii$) of Lemma \[lemma\_Arank\] with ${\mathrm{rank}}(P_{k})=n$. By Lemma \[lemma\_Ah\], we have $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})=\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$. Now, by Proposition 5.5.8 of [@Bernstein:2009 p. 323], 0 is semisimple. To show that 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ only if ${\mathrm{rank}}(P_{k})=n$, conversely we assume that this is not true, that is, ${\mathrm{rank}}(P_{k})<n$. We first claim that a specific solution $\textbf{y}_{2}$ to (\[y21\]) is given by the form $\textbf{y}_{2}=\sum_{i=1}^{n}\alpha_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, where $\alpha_{i}\in\mathbb{R}$. Indeed this is clear from $ii$) of Lemma \[lemma\_EW\]. 
Next, we claim that specific solutions $\textbf{y}_{1}$ and $\textbf{y}_{2}$ to (\[y12\]) and (\[y21\]) are of the form $\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$ and $\textbf{y}_{2}=\sum_{i=1}^{n}\alpha_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, where $\gamma_{i},\alpha_{i}\in\mathbb{R}$ satisfy $$\begin{aligned} \label{Pka1} P_{k}\small\left[\begin{array}{c} \alpha_{1}\\ \vdots\\ \alpha_{n}\\ \end{array}\right]=\textbf{0}_{n\times 1}.\end{aligned}$$ To see this, substituting $\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$ and $\textbf{y}_{2}=\sum_{i=1}^{n}\alpha_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$ into (\[y12\]), together with $(L_{k}\otimes P_{k})(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=\textbf{0}_{nq\times 1}$, yields $\sum_{i=1}^{n}\alpha_{i}P_{k}\textbf{e}_{i}=\textbf{0}_{n\times 1}$, which is equivalent to (\[Pka1\]). In this case, it follows from (\[y13p\]) and $iii$) of Lemma \[lemma\_EW\] that $\textbf{y}_{3}=E_{n\times nq}^{[j]}\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{i}E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}$. Clearly such $\textbf{y}_{1}=\sum_{i=1}^{n}\gamma_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, $\textbf{y}_{2}=\sum_{i=1}^{n}\alpha_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}$, and $\textbf{y}_{3}=\sum_{i=1}^{n}\gamma_{i}\textbf{e}_{i}$ satisfy (\[y\_1\])–(\[y\_3\]). Since by assumption ${\mathrm{rank}}(P_{k})<n$, it follows that (\[Pka1\]) has nontrivial solutions, which implies that $\textbf{y}_{2}\neq\textbf{0}_{nq\times 1}$ for some nontrivial choice of the $\alpha_{i}$. Thus, it follows from $ii$) and $iii$) of Lemma \[lemma\_Arank\] that $\ker(A_{k}^{[j]})\neq\ker(A_{k}^{[j]}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))$, which implies that $\ker((A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})^{2})\neq\ker(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$. Now, by Proposition 5.5.8 of [@Bernstein:2009 p. 
323], 0 is not semisimple, which contradicts the assumption that 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. Hence, in this case, if 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$, then ${\mathrm{rank}}(P_{k})=n$. In summary, it follows from Lemma \[lemma\_semisimple\] that for every $j=1,\ldots,q$, 0 is a semisimple eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ defined in Lemma \[lemma\_Ah\], where $\mu_{k},\kappa_{k},\eta_{k},h_{k}\geq0$, if and only if $\kappa_{k}\neq0$ and ${\mathrm{rank}}(P_{k})=n$, $k\in\overline{\mathbb{Z}}_{+}$. To proceed, let $\mathbb{C}^{n}$ (respectively $\mathbb{C}^{m\times n}$) denote the set of $n$-dimensional complex vectors (respectively, $m\times n$ complex matrices). Using Lemmas \[lemma\_EW\]–\[lemma\_semisimple\], one can establish the following complete characterization of the nonzero eigenvalues and the corresponding eigenspaces of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. \[lemma\_A\] Consider the (possibly infinitely many) matrices $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$, $j=1,\ldots,q$, $k=0,1,2,\ldots$, defined by (\[Amatrix\]) in Lemma \[lemma\_Arank\] and (\[Ac\]) in Lemma \[lemma\_semisimple\], where $\mu_{k},\kappa_{k},\eta_{k}\geq0$ and $h_{k}>0$, $k\in\overline{\mathbb{Z}}_{+}$. Assume that ${\mathrm{rank}}(P_{k})=n$, $k\in\overline{\mathbb{Z}}_{+}$. 
- Then for every $j=1,\ldots,q$, ${\mathrm{spec}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})\subseteq\{0,-\kappa_{k},-\frac{\kappa_{k}(1+h_{k})}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}(1+h_{k})^{2}-4\kappa_{k}},\lambda\in\mathbb{C}:\frac{\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k}}{\eta_{k}\lambda+\mu_{k} h_{k}\lambda+\mu_{k}}\in{\mathrm{spec}}(-L_{k})\}=\{0,-\kappa_{k},-\frac{\kappa_{k}(1+h_{k})}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}(1+h_{k})^{2}-4\kappa_{k}},-\frac{\kappa_{k}h_{k}}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}h_{k}^{2}-4\kappa_{k}},\lambda\in\mathbb{C}:\frac{\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k}}{\eta_{k}\lambda+\mu_{k} h_{k}\lambda+\mu_{k}}\in{\mathrm{spec}}(-L_{k})\backslash\{0\}\}$. - If $1\not\in{\mathrm{spec}}((\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}})L_{k})$, then $\lambda_{1,2}=-\frac{\kappa_{k}(1+h_{k})}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}(1+h_{k})^{2}-4\kappa_{k}}$ are the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by $$\begin{aligned} \label{egns1} &&\hspace{-2em}\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{1,2} I_{2nq+n}\Big)\nonumber\\ &&\hspace{-2em}=\Big\{\Big[\frac{1+h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\omega_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\omega_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\nonumber\\ &&-\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\omega_{li}w_{lj}\textbf{e}_{i}^{\mathrm{T}}\Big]^{*}:\forall\omega_{li}\in\mathbb{C},i=1,\ldots,n,l=0,1,\ldots,q-1-{\mathrm{rank}}(L_{k})\Big\},\end{aligned}$$ where $\textbf{x}^{*}$ denotes the complex conjugate transpose of $\textbf{x}\in\mathbb{C}^{n}$. 
- If $1\in{\mathrm{spec}}((\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}})L_{k})$, and $h_{k}\kappa_{k}\neq 1$, then $\lambda_{1,2}=-\frac{\kappa_{k}(1+h_{k})}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}(1+h_{k})^{2}-4\kappa_{k}}$ are the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by $$\begin{aligned} \label{egns2} &&\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{1,2} I_{2nq+n}\Big)\nonumber\\ &&=\Big\{\Big[\frac{1+h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}((\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l})\otimes \textbf{e}_{i})^{\mathrm{T}}-\frac{1+h_{k}\lambda_{1,2}^{*}}{\kappa_{k}\lambda_{1,2}^{*}}\sum_{i=1}^{n}\omega_{0i}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}},\nonumber\\ &&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}((\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l})\otimes \textbf{e}_{i})^{\mathrm{T}}-\frac{1}{\kappa_{k}}\sum_{i=1}^{n}\omega_{0i}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}},\nonumber\\ &&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}(\lambda_{1,2}^{*}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l}-\textbf{g}_{j}^{\mathrm{T}}G_{k}^{+}G_{k}\textbf{g}_{l})\textbf{e}_{i}^{\mathrm{T}}-\frac{1+h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}(\lambda_{1,2}^{*}+\kappa_{k})}\sum_{i=1}^{n}\omega_{0i}\textbf{e}_{i}^{\mathrm{T}}\Big]^{*}:\nonumber\\ &&\forall\omega_{0i}\in\mathbb{C},\forall\varpi_{li}\in\mathbb{C},i=1,\ldots,n,l=1,\ldots,q\Big\},\end{aligned}$$ where $G_{k}=(\frac{\mu_{k}}{\lambda_{1,2}}+\mu_{k} h_{k}+\eta_{k})L_{k}-\kappa_{k}I_{q}$. 
- If $\frac{\kappa_{k}}{\lambda_{4}}+\lambda_{4}+\kappa_{k} h_{k}\neq0$, $\lambda_{4}\neq-\kappa_{k}$, $\frac{\mu_{k}}{\lambda_{4}}+\mu_{k} h_{k}+\eta_{k}\neq0$, and $\frac{\lambda_{4}^{2}+\kappa_{k} h_{k}\lambda_{4}+\kappa_{k}}{\eta_{k}\lambda_{4}+\mu_{k} h_{k}\lambda_{4}+\mu_{k}}\in{\mathrm{spec}}(-L_{k})$, then $\lambda_{4}$ is an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by $$\begin{aligned} \label{egns3} &&\hspace{-3em}\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{4} I_{2nq+n}\Big)\nonumber\\ &&\hspace{-3em}=\Big\{\Big[\frac{1+h_{k}\lambda_{4}^{*}}{\lambda_{4}^{*}}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})F_{k}^{+}\psi_{k}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}\Big)^{*}\otimes \textbf{e}_{i}^{\mathrm{T}},\nonumber\\ &&\hspace{-3em}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})F_{k}^{+}\psi_{k}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}\Big)^{*}\otimes \textbf{e}_{i}^{\mathrm{T}},\nonumber\\ &&\hspace{-3em}\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{4}^{*}}{\lambda_{4}^{*}(\lambda_{4}^{*}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l}-\textbf{g}_{j}^{\mathrm{T}}F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})\textbf{g}_{j}^{\mathrm{T}}F_{k}^{+}\psi_{k}\nonumber\\ 
&&\hspace{-3em}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j}^{\mathrm{T}}\psi_{k}\Big)^{*}\otimes \textbf{e}_{i}^{\mathrm{T}}\Big]^{*}:\varpi_{li}\in\mathbb{C},i=1,\ldots,n,l=1,\ldots,q\Big\},\end{aligned}$$ where $F_{k}=(\frac{\mu_{k}}{\lambda_{4}}+\mu_{k} h_{k}+\eta_{k})L_{k}+(\frac{\kappa_{k}}{\lambda_{4}}+\lambda_{4}+\kappa_{k} h_{k})I_{q}$ and $$\begin{aligned} \label{psik} \psi_{k}=\small\left\{\begin{array}{ll} (\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}}F_{k}^{+}F_{k})^{+}, & \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4}^{*})}{\lambda_{4}^{*}(\lambda_{4}^{*}+\kappa_{k})}\textbf{g}_{j}\neq \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4}^{*})}{\lambda_{4}^{*}(\lambda_{4}^{*}+\kappa_{k})}F_{k}^{+}F_{k}\textbf{g}_{j},\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(1+|\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}|^{2}\textbf{g}_{j}^{\mathrm{T}}(F_{k}^{\mathrm{T}}F_{k})^{+}\textbf{g}_{j})^{-1}(F_{k}^{\mathrm{T}}F_{k})^{+}\textbf{g}_{j}, & \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4}^{*})}{\lambda_{4}^{*}(\lambda_{4}^{*}+\kappa_{k})}\textbf{g}_{j}= \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4}^{*})}{\lambda_{4}^{*}(\lambda_{4}^{*}+\kappa_{k})}F_{k}^{+}F_{k}\textbf{g}_{j}. \\ \end{array}\right.\end{aligned}$$ - If $\frac{\mu_{k}}{\lambda_{5,6}}+\mu_{k} h_{k}+\eta_{k}\neq0$, $\lambda_{5,6}\neq-\kappa_{k}$, and $\frac{\kappa_{k}}{\lambda_{5,6}}+\lambda_{5,6}+\kappa_{k} h_{k}=0$, then $\lambda_{5,6}=-\frac{\kappa_{k}h_{k}}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}h_{k}^{2}-4\kappa_{k}}$ are the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by the form (\[egns3\]) with $\lambda_{4}$ being replaced by $\lambda_{5,6}$. 
- If $\frac{\mu_{k}}{\lambda_{5,6}}+\mu_{k} h_{k}+\eta_{k}=0$, $\lambda_{5,6}\neq-\kappa_{k}$, $\mu_{k}=0$, and $\frac{\kappa_{k}}{\lambda_{5,6}}+\lambda_{5,6}+\kappa_{k} h_{k}=0$, then $\lambda_{5,6}$ are the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by $$\begin{aligned} \label{egns4} &&\hspace{-1em}\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{5,6} I_{2nq+n}\Big)\nonumber\\ &&\hspace{-1em}=\Big\{\Big[\frac{1+h_{k}\lambda_{5,6}^{*}}{\lambda_{5,6}^{*}}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\textbf{0}_{1\times n}\Big]^{*}:\nonumber\\ &&\hspace{-1em}\varpi_{li}\in\mathbb{C},i=1,\ldots,n,l=1,\ldots,q\Big\}.\end{aligned}$$ - If $1\in{\mathrm{spec}}(\frac{\eta_{k}}{\kappa_{k}}L_{k})$ and $\kappa_{k}h_{k}=1$, then $\lambda_{3}=-\kappa_{k}$ is an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by $$\begin{aligned} \label{egns5} &&\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)\nonumber\\ &&=\Big\{\Big[\textbf{0}_{1\times nq},\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\sum_{i=1}^{n}\sum_{l=1}^{q}\frac{\eta_{k}}{\kappa_{k}}\alpha_{li}(L_{k}\textbf{g}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}-\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}\Big]^{*}:\nonumber\\ &&\forall\alpha_{li}\in\mathbb{C},i=1,\ldots,n,l=1,\ldots,q\Big\}.\end{aligned}$$ - If $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}=0$ and $h_{k}=1+\frac{1}{\kappa_{k}}$, then $\lambda_{3}=-\kappa_{k}$ is an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. 
The corresponding eigenspace is given by $$\begin{aligned} \label{egns6} &&\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)=\Big\{\Big[\textbf{0}_{1\times nq},\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\textbf{0}_{1\times n}\Big]^{*}:\nonumber\\ &&\forall\alpha_{li}\in\mathbb{C},i=1,\ldots,n,l=1,\ldots,q\Big\}.\end{aligned}$$ - If $1\in{\mathrm{spec}}(\frac{\mu_{k}+\eta_{k}}{\kappa_{k}}L_{k})$ and $h_{k}=1+\frac{1}{\kappa_{k}}$, then $\lambda_{3}=-\kappa_{k}$ is an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by $$\begin{aligned} \label{egns7} &&\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)\nonumber\\ &&=\Big\{\Big[\textbf{0}_{1\times nq},\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}(L_{k}^{+}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})^{\mathrm{T}}-\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}(L_{k}^{+}\varphi_{k}\otimes\textbf{e}_{i})^{\mathrm{T}}\nonumber\\ &&+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}(\textbf{g}_{l}-L_{k}^{+}L_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}L_{k}\textbf{g}_{l})L_{k}^{+}\varphi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\varphi_{k})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\sum_{i=1}^{n}\beta_{i}\textbf{e}_{i}^{\mathrm{T}}\Big]^{*}:\nonumber\\ &&\beta_{i}\in\mathbb{C},\gamma_{li}\in\mathbb{C},i=1,\ldots,n,l=1,\ldots,q\Big\},\end{aligned}$$ where $$\begin{aligned} \label{varphik} \varphi_{k}=\small\left\{\begin{array}{ll} (\textbf{g}_{j}^{\mathrm{T}}-\textbf{g}_{j}^{\mathrm{T}}L_{k}^{+}L_{k})^{+}, & \textbf{g}_{j}\neq L_{k}^{+}L_{k}\textbf{g}_{j},\\ (1+\textbf{g}_{j}^{\mathrm{T}}(L_{k}^{\mathrm{T}}L_{k})^{+}\textbf{g}_{j})^{-1}(L_{k}^{\mathrm{T}}L_{k})^{+}\textbf{g}_{j}, & \textbf{g}_{j}= L_{k}^{+}L_{k}\textbf{g}_{j}. 
\\ \end{array}\right.\end{aligned}$$ - If $1\in{\mathrm{spec}}(\frac{\mu_{k}(\kappa_{k}h_{k}-1)+\eta_{k}\kappa_{k}}{\kappa_{k}(-\kappa_{k} h_{k}+1+\kappa_{k})}L_{k})$ and $\kappa_{k}h_{k}\neq 1$, then $\lambda_{3}=-\kappa_{k}$ is an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. The corresponding eigenspace is given by $$\begin{aligned} \label{egns8} &&\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)\nonumber\\ &&=\Big\{\Big[\textbf{0}_{1\times nq},\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-M_{k}^{+}M_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}M_{k}\textbf{g}_{l})M_{k}^{+}\phi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\phi_{k}\Big)^{\mathrm{T}}\otimes\textbf{e}_{i}^{\mathrm{T}},\textbf{0}_{1\times n}\Big]^{*}:\nonumber\\ &&\varpi_{li}\in\mathbb{C},i=1,\ldots,n,l=1,\ldots,q\Big\},\end{aligned}$$ where $M_{k}=(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k})L_{k}+(\kappa_{k} h_{k}-1-\kappa_{k})I_{q}$ and $$\begin{aligned} \label{phik} \phi_{k}=\small\left\{\begin{array}{ll} (\textbf{g}_{j}^{\mathrm{T}}-\textbf{g}_{j}^{\mathrm{T}}M_{k}^{+}M_{k})^{+}, & \textbf{g}_{j}\neq M_{k}^{+}M_{k}\textbf{g}_{j},\\ (1+\textbf{g}_{j}^{\mathrm{T}}(M_{k}^{\mathrm{T}}M_{k})^{+}\textbf{g}_{j})^{-1}(M_{k}^{\mathrm{T}}M_{k})^{+}\textbf{g}_{j}, & \textbf{g}_{j}= M_{k}^{+}M_{k}\textbf{g}_{j}. \\ \end{array}\right.\end{aligned}$$ For a fixed $j\in\{1,\ldots,q\}$ and a fixed $k\in\overline{\mathbb{Z}}_{+}$, let $\textbf{x}\in\mathbb{C}^{2nq+n}$ be an eigenvector of the corresponding eigenvalue $\lambda\in\mathbb{C}$ for $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. We partition $\textbf{x}$ into $\textbf{x}=[\textbf{x}_{1}^{*},\textbf{x}_{2}^{*},\textbf{x}_{3}^{*}]^{*}\neq\textbf{0}_{(2nq+n)\times 1}$, where $\textbf{x}_{1},\textbf{x}_{2}\in\mathbb{C}^{nq}$, and $\textbf{x}_{3}\in\mathbb{C}^{n}$. 
It follows from $(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})\textbf{x}=\lambda\textbf{x}$ that $$\begin{aligned} h_{k}(-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{x}_{1}+h_{k}(-\eta_{k} L_{k}\otimes P_{k})\textbf{x}_{2}+\textbf{x}_{2}+h_{k}(\kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k})\textbf{x}_{3}=\lambda\textbf{x}_{1},\label{Aeig_1}\\ (-\mu_{k} L_{k}\otimes P_{k}-\kappa_{k} I_{q}\otimes P_{k})\textbf{x}_{1}+(-\eta_{k} L_{k}\otimes P_{k})\textbf{x}_{2}+(\kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k})\textbf{x}_{3}=\lambda \textbf{x}_{2},\label{x2}\\ \kappa_{k} E_{n\times nq}^{[j]}\textbf{x}_{1}-\kappa_{k}\textbf{x}_{3}=\lambda \textbf{x}_{3}.\label{Aeig_3}\end{aligned}$$ Note that it follows from Lemma \[lemma\_semisimple\] that $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ has an eigenvalue 0. Now we assume that $\lambda\neq0$. Substituting (\[x2\]) into (\[Aeig\_1\]) yields $\textbf{x}_{1}=\frac{1+h_{k}\lambda}{\lambda}\textbf{x}_{2}$. Replacing $\textbf{x}_{1}$ in (\[x2\]) and (\[Aeig\_3\]) with $\textbf{x}_{1}=\frac{1+h_{k}\lambda}{\lambda}\textbf{x}_{2}$ yields $$\begin{aligned} -\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]\textbf{x}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{x}_{3}=\textbf{0}_{nq\times 1},\label{hx2}\\ \Big(\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}\Big)E_{n\times nq}^{[j]}\textbf{x}_{2}-(\lambda+\kappa_{k})\textbf{x}_{3}=\textbf{0}_{n\times 1}.\label{hx3}\end{aligned}$$ Clearly $[\textbf{x}_{2}^{*},\textbf{x}_{3}^{*}]^{*}\neq\textbf{0}_{2nq\times 1}$. 
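The reduction $\textbf{x}_{1}=\frac{1+h_{k}\lambda}{\lambda}\textbf{x}_{2}$ can be spot-checked numerically by assembling the block matrix that (\[Aeig\_1\])–(\[Aeig\_3\]) describe. The sketch below uses a hypothetical instance that is not from the paper: the $q=3$ ring-graph Laplacian, $n=1$, $P_{k}=I$ (full rank), and sample gains $\mu_{k}=0.4$, $\eta_{k}=0.3$, $\kappa_{k}=2$, $h_{k}=0.5$.

```python
import numpy as np

# Hypothetical instance (NOT from the paper): q = 3 ring graph, n = 1,
# P_k = I, sample gains; j = 1 in the paper's 1-based indexing (0-based: 0).
q, n, j = 3, 1, 0
L = np.array([[2., -1., -1.],
              [-1., 2., -1.],
              [-1., -1., 2.]])        # ring-graph Laplacian
P = np.eye(n)
mu, eta, kap, h = 0.4, 0.3, 2.0, 0.5

# Block rows of A_k^{[j]} + h_k A_ck, read off (Aeig_1)-(Aeig_3).
B = -mu * np.kron(L, P) - kap * np.kron(np.eye(q), P)
C = -eta * np.kron(L, P)
D = kap * np.kron(np.ones((q, 1)), P)
E = np.kron(np.eye(q)[j][None, :], np.eye(n))     # E^{[j]} = g_j^T (x) I_n
A = np.block([[h * B, h * C + np.eye(n * q), h * D],
              [B, C, D],
              [kap * E, np.zeros((n, n * q)), -kap * np.eye(n)]])

# Every well-converged eigenpair with lambda != 0 must satisfy
# x1 = ((1 + h*lambda)/lambda) x2, as derived above.
vals, vecs = np.linalg.eig(A)
for lam, x in zip(vals, vecs.T):
    if abs(lam) < 1e-8 or np.linalg.norm(A @ x - lam * x) > 1e-10:
        continue
    x1, x2 = x[:n * q], x[n * q:2 * n * q]
    assert np.allclose(x1, (1 + h * lam) / lam * x2, atol=1e-8)
```

For this instance the relation holds, to numerical precision, for every returned eigenpair with $\lambda\neq0$; the zero eigenvalue is excluded, as in the derivation.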
Thus, (\[hx2\]) and (\[hx3\]) have nontrivial solutions if and only if $$\begin{aligned} \label{detcon} \det\small\left[\begin{array}{cc} \Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k}) & -\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\\ \Big(\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}\Big)E_{n\times nq}^{[j]} & -(\lambda+\kappa_{k})I_{n} \end{array}\right]=0.\end{aligned}$$ If $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]\neq0$, then pre-multiplying both sides of (\[hx2\]) by $-L_{k}\otimes I_{n}$ yields $\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big](L_{k}\otimes P_{k})\textbf{x}_{2}=\textbf{0}_{nq\times 1}$, which implies that $(L_{k}\otimes P_{k})\textbf{x}_{2}=\textbf{0}_{nq\times 1}$. Now, following arguments similar to those in the proof of $iii$) of Lemma \[lemma\_Arank\], we have $\textbf{x}_{2}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}+\sum_{s=1}^{q}\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\beta_{sr}\textbf{g}_{s}\otimes\textbf{j}_{r}$, where $\varpi_{li},\beta_{sr}\in\mathbb{C}$ and not all $\varpi_{li},\beta_{sr}$ are zero. 
Substituting this expression of $\textbf{x}_{2}$ into (\[hx2\]) and (\[hx3\]) by using $iii$) of Lemma \[lemma\_EW\] yields $$\begin{aligned} \kappa_{k}P_{k}\textbf{x}_{3}=\left(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k}h_{k}\right)\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}P_{k}\textbf{e}_{i},\label{x3e1}\\ (\lambda+\kappa_{k})\textbf{x}_{3}=\left(\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}\right)\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}.\label{x3e2}\end{aligned}$$ Furthermore, substituting (\[x3e1\]) into (\[x3e2\]) pre-multiplied by $P_{k}$ yields $\lambda P_{k}\textbf{x}_{3}=-\lambda\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}P_{k}\textbf{e}_{i}$, which implies that $P_{k}\textbf{x}_{3}=-\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}P_{k}\textbf{e}_{i}$ since $\lambda\neq0$. Hence, $P_{k}(\textbf{x}_{3}+\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i})=\textbf{0}_{n\times 1}$, which further implies that $\textbf{x}_{3}+\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}\in\ker(P_{k})$. Consequently, $\textbf{x}_{3}=-\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}+\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\delta_{r}\textbf{j}_{r}$, where $\delta_{r}\in\mathbb{C}$. Finally, substituting the obtained expression for $\textbf{x}_{3}$ into (\[x3e2\]) yields $$\begin{aligned} \label{eqn_ei} \left(\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}+\lambda+\kappa_{k}\right)\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}-(\lambda+\kappa_{k})\sum_{r=1}^{n-{\mathrm{rank}}(P_{k})}\delta_{r}\textbf{j}_{r}=\textbf{0}_{n\times 1}.\end{aligned}$$ In this case, (\[hx2\]) and (\[hx3\]) have nontrivial solutions if and only if (\[eqn\_ei\]) holds for some $\varpi_{li},\delta_{r}\in\mathbb{C}$, not all zero. 
Since by assumption, $P_{k}$ is a full rank matrix, it follows that $\textbf{j}_{r}=\textbf{0}_{n\times 1}$ and hence, (\[eqn\_ei\]) collapses into $\left(\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}+\lambda+\kappa_{k}\right)\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}=\textbf{0}_{n\times 1}$, which implies that either $\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}+\lambda+\kappa_{k}=0$ or $\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}=\textbf{0}_{n\times 1}$. If $\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}=\textbf{0}_{n\times 1}$, then it follows from the expression of $\textbf{x}_{3}$ that $\textbf{x}_{3}=\textbf{0}_{n\times 1}$ and by (\[hx2\]), $\textbf{x}_{2}=\textbf{0}_{nq\times 1}$, and hence, $\textbf{x}_{1}=\frac{1+h_{k}\lambda}{\lambda}\textbf{x}_{2}=\textbf{0}_{nq\times 1}$. This is a contradiction. Thus, $\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}+\lambda+\kappa_{k}=0$, and hence, $\kappa_{k}\neq0$. Let $\lambda_{1,2}$ denote the two solutions to $\frac{\kappa_{k}}{\lambda}+\kappa_{k} h_{k}+\lambda+\kappa_{k}=0$. 
Then $$\begin{aligned} \label{lambda1} \lambda_{1,2}=-\frac{\kappa_{k}(1+h_{k})}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}(1+h_{k})^{2}-4\kappa_{k}}.\end{aligned}$$ In this case, note that $$\begin{aligned} \label{detlambda} &&\det\Big[\Big(\frac{\mu_{k}}{\lambda_{1,2}}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda_{1,2}}+\lambda_{1,2}+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]\nonumber\\ &&=\det\Big[\Big(\frac{\mu_{k}}{\lambda_{1,2}}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})-\kappa_{k}I_{nq}\Big]\det(I_{q}\otimes P_{k})\nonumber\\ &&=\kappa_{k}^{nq}\det\Big[\Big(\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}}\Big)(L_{k}\otimes I_{n})-I_{nq}\Big](\det(P_{k}))^{q}.\end{aligned}$$ Hence, $\det\Big[\Big(\frac{\mu_{k}}{\lambda_{1,2}}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda_{1,2}}+\lambda_{1,2}+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]\neq0$ if and only if $1\not\in{\mathrm{spec}}((\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}})L_{k})$. Thus, if $1\not\in{\mathrm{spec}}((\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}})L_{k})$, then $\lambda_{1,2}$ given by (\[lambda1\]) are indeed the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ and the corresponding eigenvectors for $\lambda_{1,2}$ are given by $\textbf{x}=\Big[\frac{1+h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},-\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}^{\mathrm{T}}\Big]^{*}$, where $\varpi_{li}\in\mathbb{C}$ and not all of $\varpi_{li}$ are zero. 
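The claim that $\lambda_{1,2}$ in (\[lambda1\]) are eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ whenever $1\not\in{\mathrm{spec}}((\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k}h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}})L_{k})$ can also be spot-checked numerically. The sketch below again uses hypothetical numbers that are not from the paper: the $q=4$ path-graph Laplacian, $n=2$, $P_{k}=I_{2}$, $\mu_{k}=0.2$, $\eta_{k}=0.1$, $\kappa_{k}=3$, $h_{k}=0.5$.

```python
import numpy as np

# Hypothetical instance (NOT from the paper): q = 4 path graph, n = 2, P_k = I_2.
q, n, j = 4, 2, 2
L = np.array([[1., -1., 0., 0.],
              [-1., 2., -1., 0.],
              [0., -1., 2., -1.],
              [0., 0., -1., 1.]])     # path-graph Laplacian
P = np.eye(n)
mu, eta, kap, h = 0.2, 0.1, 3.0, 0.5

# lambda_{1,2} of (lambda1): roots of lambda^2 + kappa(1+h) lambda + kappa = 0.
lam12 = np.roots([1.0, kap * (1 + h), kap])
assert np.allclose(np.polyval([1.0, kap * (1 + h), kap], lam12), 0.0)

# Hypothesis of the lemma: 1 not in spec((mu/(lam*kap) + mu*h/kap + eta/kap) L_k).
for lam in lam12:
    c = mu / (lam * kap) + mu * h / kap + eta / kap
    assert not np.any(np.isclose(np.linalg.eigvals(c * L), 1.0))

# Assemble A_k^{[j]} + h_k A_ck from the block rows of (Aeig_1)-(Aeig_3).
B = -mu * np.kron(L, P) - kap * np.kron(np.eye(q), P)
C = -eta * np.kron(L, P)
D = kap * np.kron(np.ones((q, 1)), P)
E = np.kron(np.eye(q)[j][None, :], np.eye(n))
A = np.block([[h * B, h * C + np.eye(n * q), h * D],
              [B, C, D],
              [kap * E, np.zeros((n, n * q)), -kap * np.eye(n)]])

spec = np.linalg.eigvals(A)
for lam in lam12:                       # lambda_{1,2} are indeed eigenvalues
    assert min(abs(spec - lam)) < 1e-8
assert min(abs(spec)) < 1e-8            # 0 is also an eigenvalue, as shown earlier
```

The check succeeds here because the kernel vector $\textbf{w}_{0}=\textbf{1}_{q\times 1}$ of $L_{k}$ makes the eigenvector of (\[egns1\]) independent of $\mu_{k}$ and $\eta_{k}$.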
Therefore, $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{1,2} I_{2nq+n}\Big)$ is given by (\[egns1\]). Alternatively, if $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]=0$, then we consider two additional cases for (\[detcon\]): *Case 1.* If $\lambda\neq-\kappa_{k}$, then it follows from Proposition 2.8.4 of [@Bernstein:2009 p. 116] that (\[detcon\]) is equivalent to $\det\Big(\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}W_{k}^{[j]}\Big)=0$, which implies that for $\lambda\neq-\kappa_{k}$, the equation $$\begin{aligned} \label{eqn_v} \Big(\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}W_{k}^{[j]}\Big)\textbf{v}=\textbf{0}_{nq\times 1}\end{aligned}$$ has nontrivial solutions for $\textbf{v}\in\mathbb{C}^{nq}$. It follows from (\[hx2\]) and (\[hx3\]) that solving this $\textbf{v}$ is equivalent to solving $\textbf{x}_{2}$. Again, note that for every $j=1,\ldots,q$, $(L_{k}\otimes I_{n})W_{k}^{[j]}=\textbf{0}_{nq\times nq}$. 
Pre-multiplying $L_{k}\otimes I_{n}$ on both sides of (\[eqn\_v\]) yields $\Big(\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}^{2}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(L_{k}\otimes P_{k})\Big)\textbf{v}=(I_{q}\otimes P_{k})(L_{k}\otimes I_{n})\Big(\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big)\textbf{v}=\textbf{0}_{nq\times 1}$, which implies that $\Big(\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big)\textbf{v}\in\ker(L_{k}\otimes I_{n})$ due to the assumption that $P_{k}$ is of full rank. Since $\ker(L_{k}\otimes I_{n})=\bigcup_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}{\mathrm{span}}\{\textbf{w}_{l}\otimes\textbf{e}_{1},\ldots,\textbf{w}_{l}\otimes\textbf{e}_{n}\}$, it follows that $$\begin{aligned} \label{Az=b} \Big(\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big)\textbf{v}=\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i},\end{aligned}$$ where $\omega_{li}\in\mathbb{C}$. Now it follows from (\[eqn\_v\]) and (\[Az=b\]) that $$\begin{aligned} \label{Wv_eqn} \frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}W_{k}^{[j]}\textbf{v}=\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}.\end{aligned}$$ If $\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\neq0$, then (\[Az=b\]) has a particular solution $\textbf{v}=(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k})^{-1}\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}$. Let $\textbf{w}_{l}=[w_{l1}^{*},\ldots,w_{lq}^{*}]^{*}$. 
Substituting this particular solution into (\[Wv\_eqn\]), together with $ii$) of Lemma \[lemma\_EW\], yields $$\begin{aligned} &&(I_{q}\otimes P_{k})\Big(\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}\Big)-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}W_{k}^{[j]}(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k})^{-1}\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}\nonumber\\ &&=(I_{q}\otimes P_{k})\Big(\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{(\lambda+\kappa_{k})(\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}w_{lj}\textbf{w}_{0}\otimes\textbf{e}_{i}\Big)\nonumber\\ &&=(I_{q}\otimes P_{k})\Big(\sum_{i=1}^{n}\Big[\omega_{0i}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{(\lambda+\kappa_{k})(\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k})}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}w_{lj}\Big]\textbf{w}_{0}\otimes\textbf{e}_{i}\nonumber\\ &&\hspace{1em}+\sum_{i=1}^{n}\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}\Big)\nonumber\\ &&=\textbf{0}_{nq\times 1},\end{aligned}$$ which implies that $$\begin{aligned} \label{omega} \omega_{0i}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{(\lambda+\kappa_{k})(\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k})}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}w_{lj}=0\end{aligned}$$ and $\omega_{\ell i}=0$ for every $i=1,\ldots,n$ and every $\ell=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. Note that $w_{0j}=1$ for every $j=1,\ldots,q$. 
Substituting $\omega_{\ell i}=0$ into (\[omega\]) yields $$\begin{aligned} \omega_{0i}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{(\lambda+\kappa_{k})(\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k})}\omega_{0i}=0,\quad i=1,\ldots,n.\end{aligned}$$ Then either $1-\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{(\lambda+\kappa_{k})(\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k})}=0$ or $\omega_{0i}=0$ for every $i=1,\ldots,n$. If $\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{(\lambda+\kappa_{k})(\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k})}=1$, then $\lambda^{2}+\kappa_{k}(1+h_{k})\lambda+\kappa_{k}=0$. Hence, $\lambda=\lambda_{1,2}$, where $\lambda_{1,2}$ are given by (\[lambda1\]). In this case, note that $\frac{\kappa_{k}}{\lambda_{1,2}}+\lambda_{1,2}+\kappa_{k} h_{k}=-\kappa_{k}\neq0$. Then it follows that (\[detlambda\]) holds. Hence, $\det\Big[\Big(\frac{\mu_{k}}{\lambda_{1,2}}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda_{1,2}}+\lambda_{1,2}+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]=0$ if and only if $1\in{\mathrm{spec}}((\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}})L_{k})$. Furthermore, $\lambda_{1,2}\neq-\kappa_{k}$ if and only if $h_{k}\kappa_{k}\neq 1$. Thus, if $1\in{\mathrm{spec}}((\frac{\mu_{k}}{\lambda_{1,2}\kappa_{k}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}})L_{k})$ and $h_{k}\kappa_{k}\neq 1$, then $\lambda_{1,2}$ given by (\[lambda1\]) are indeed the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. In this case, (\[Az=b\]) becomes $$\begin{aligned} \label{Az=bx} \Big(\Big(\frac{\mu_{k}}{\lambda_{1,2}}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})-\kappa_{k}I_{nq}\Big)\textbf{v}=\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}\end{aligned}$$ and a specific solution is given by $\textbf{v}=-\frac{1}{\kappa_{k}}\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}$. 
To find the general solution to (\[Az=bx\]), let $G_{k}=(\frac{\mu_{k}}{\lambda_{1,2}}+\mu_{k} h_{k}+\eta_{k})L_{k}-\kappa_{k}I_{q}$ and consider $$\begin{aligned} \label{Gx0} (G_{k}\otimes I_{n})\hat{\textbf{v}}=\textbf{0}_{nq\times 1}.\end{aligned}$$ It follows from $vi$) of Proposition 6.1.7 of [@Bernstein:2009 p. 400] and $viii$) of Proposition 6.1.6 of [@Bernstein:2009 p. 399] that the general solution $\hat{\textbf{v}}$ to (\[Gx0\]) is of the form $$\begin{aligned} \hat{\textbf{v}}&=&\Big[I_{nq}-(G_{k}\otimes I_{n})^{+}(G_{k}\otimes I_{n})\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{nq}-(G_{k}^{+}\otimes I_{n})(G_{k}\otimes I_{n})\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{q}\otimes I_{n}-((G_{k}^{+}G_{k})\otimes I_{n})\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[(I_{q}-G_{k}^{+}G_{k})\otimes I_{n}\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l})\otimes \textbf{e}_{i},\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$, $i=1,\ldots,n$, $l=1,\ldots,q$, and we used the facts that $(A\otimes B)^{+}=A^{+}\otimes B^{+}$, $A\otimes B-C\otimes B=(A-C)\otimes B$, and $(A\otimes B)(C\otimes D)=AC\otimes BD$ for compatible matrices $A,B,C,D$. Then the general solution to (\[Az=bx\]) is given by $$\begin{aligned} \textbf{v}&=&\hat{\textbf{v}}-\frac{1}{\kappa_{k}}\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}\nonumber\\ &=&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l})\otimes \textbf{e}_{i}-\frac{1}{\kappa_{k}}\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i},\end{aligned}$$ and hence, $\textbf{x}_{2}=\textbf{v}\neq\textbf{0}_{nq\times 1}$ and $\textbf{x}_{1}=\frac{1+h_{k}\lambda_{1,2}}{\lambda_{1,2}}\textbf{v}$. 
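The Moore–Penrose identities invoked above, namely $(A\otimes B)^{+}=A^{+}\otimes B^{+}$ and the projector form of the general solution to (\[Gx0\]), can be spot-checked on random matrices. The sketch below is generic and uses no data from the paper; the particular matrices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# (A (x) B)^+ = A^+ (x) B^+ on random rectangular matrices (a standard
# Moore-Penrose identity, not specific to this paper).
Amat = rng.standard_normal((4, 3))
Bmat = rng.standard_normal((2, 5))
lhs = np.linalg.pinv(np.kron(Amat, Bmat))
rhs = np.kron(np.linalg.pinv(Amat), np.linalg.pinv(Bmat))
assert np.allclose(lhs, rhs)

# General solution of (G (x) I) vhat = 0: vhat = ((I - G^+ G) (x) I) z,
# since I - G^+ G is the orthogonal projector onto ker(G).
G = np.array([[1., -1., 0.],
              [-1., 1., 0.],
              [0., 0., 0.]])          # a rank-deficient 3 x 3 example
proj = np.kron(np.eye(3) - np.linalg.pinv(G) @ G, np.eye(2))
z = rng.standard_normal(6)
vhat = proj @ z
assert np.allclose(np.kron(G, np.eye(2)) @ vhat, 0.0)
```

Here $\ker(G)=\{(t,t,s)\}$, so the projected vector has its first two $2$-blocks equal, mirroring how $\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l}$ parameterizes $\ker(G_{k}\otimes I_{n})$ above.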
Furthermore, note that $\textbf{g}_{j}^{\mathrm{T}}\textbf{w}_{0}=1$ for every $j=1,\ldots,q$, it follows that $$\begin{aligned} \textbf{x}_{3}&=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{1,2}}{\lambda_{1,2}(\lambda_{1,2}+\kappa_{k})}E_{n\times nq}^{[j]}\textbf{v}\nonumber\\ &=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{1,2}}{\lambda_{1,2}(\lambda_{1,2}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})\textbf{v}\nonumber\\ &=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{1,2}}{\lambda_{1,2}(\lambda_{1,2}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})((\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l})\otimes \textbf{e}_{i})\nonumber\\ &&-\frac{1+h_{k}\lambda_{1,2}}{\lambda_{1,2}(\lambda_{1,2}+\kappa_{k})}\sum_{i=1}^{n}\omega_{0i}(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})(\textbf{w}_{0}\otimes\textbf{e}_{i})\nonumber\\ &=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{1,2}}{\lambda_{1,2}(\lambda_{1,2}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l}-\textbf{g}_{j}^{\mathrm{T}}G_{k}^{+}G_{k}\textbf{g}_{l})\textbf{e}_{i}-\frac{1+h_{k}\lambda_{1,2}}{\lambda_{1,2}(\lambda_{1,2}+\kappa_{k})}\sum_{i=1}^{n}\omega_{0i}\textbf{e}_{i}.\end{aligned}$$ Hence, the corresponding eigenvectors for $\lambda_{1,2}$ are given by $$\begin{aligned} \textbf{x}&=&\Big[\frac{1+h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}((\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l})\otimes \textbf{e}_{i})^{\mathrm{T}}-\frac{1+h_{k}\lambda_{1,2}^{*}}{\kappa_{k}\lambda_{1,2}^{*}}\sum_{i=1}^{n}\omega_{0i}(\textbf{w}_{0}\otimes\textbf{e}_{i})^{\mathrm{T}},\nonumber\\ &&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}((\textbf{g}_{l}-G_{k}^{+}G_{k}\textbf{g}_{l})\otimes \textbf{e}_{i})^{\mathrm{T}}-\frac{1}{\kappa_{k}}\sum_{i=1}^{n}\omega_{0i}(\textbf{w}_{0}\otimes\textbf{e}_{i})^{\mathrm{T}},\nonumber\\ 
&&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}(\lambda_{1,2}^{*}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l}-\textbf{g}_{j}^{\mathrm{T}}G_{k}^{+}G_{k}\textbf{g}_{l})\textbf{e}_{i}^{\mathrm{T}}-\frac{1+h_{k}\lambda_{1,2}^{*}}{\lambda_{1,2}^{*}(\lambda_{1,2}^{*}+\kappa_{k})}\sum_{i=1}^{n}\omega_{0i}\textbf{e}_{i}^{\mathrm{T}}\Big]^{*},\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$, $\omega_{0i}\in\mathbb{C}$, and not all of them are zero. Therefore, $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{1,2} I_{2nq+n}\Big)$ is given by (\[egns2\]). If $\omega_{0i}=0$ for every $i=1,\ldots,n$, then it follows from (\[eqn\_v\]) and (\[Az=b\]) that $$\begin{aligned} \frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}W_{k}^{[j]}\textbf{v}=\textbf{0}_{nq\times 1},\label{v-1}\\ \Big(\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big)\textbf{v}=\textbf{0}_{nq\times 1}.\label{v-2}\end{aligned}$$ In this case, since $\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\neq0$ and $\lambda\neq-\kappa_{k}$, $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big]=0$ if and only if $\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\neq0$ and $\frac{\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k}}{\eta_{k}\lambda+\mu_{k} h_{k}\lambda+\mu_{k}}\in{\mathrm{spec}}(-L_{k})$. 
Thus, if $\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\neq0$, $\lambda\neq-\kappa_{k}$, $\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\neq0$, and $\frac{\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k}}{\eta_{k}\lambda+\mu_{k} h_{k}\lambda+\mu_{k}}\in{\mathrm{spec}}(-L_{k})$, then $\lambda=\lambda_{4}$, where $$\begin{aligned} \label{lambda4} \frac{\lambda_{4}^{2}+\kappa_{k} h_{k}\lambda_{4}+\kappa_{k}}{\eta_{k}\lambda_{4}+\mu_{k} h_{k}\lambda_{4}+\mu_{k}}\in{\mathrm{spec}}(-L_{k}),\end{aligned}$$ are the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. To find their corresponding eigenvectors, let $F_{k}=\Big(\frac{\mu_{k}}{\lambda_{4}}+\mu_{k} h_{k}+\eta_{k}\Big)L_{k}+\Big(\frac{\kappa_{k}}{\lambda_{4}}+\lambda_{4}+\kappa_{k} h_{k}\Big)I_{q}$. We first show that (\[v-1\]) is equivalent to $$\begin{aligned} \label{v-3} \frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}E_{n\times nq}^{[j]}\textbf{v}=\textbf{0}_{n\times 1}\end{aligned}$$ for every $j=1,\ldots,q$. To see this, let $\textbf{v}=[\textbf{v}_{1}^{*},\ldots,\textbf{v}_{q}^{*}]^{*}$. Then it follows from (\[Wj2\]) that $W_{k}^{[j]}\textbf{v}=[(P_{k}\textbf{v}_{j})^{*},\ldots,(P_{k}\textbf{v}_{j})^{*}]^{*}$. Hence (\[v-1\]) holds if and only if $\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}P_{k}\textbf{v}_{j}=\textbf{0}_{n\times 1}$, i.e., $\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}\textbf{v}_{j}=\textbf{0}_{n\times 1}$ since $P_{k}$ is of full rank. On the other hand, note that $E_{n\times nq}^{[j]}\textbf{v}=\textbf{v}_{j}$. Hence, (\[v-1\]) is equivalent to (\[v-3\]). 
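The spectral condition (\[lambda4\]) can be checked numerically: for each eigenvalue $\sigma$ of $-L_{k}$ it rearranges into a quadratic in $\lambda$, and each root reproduces $\sigma$ as the value of the ratio. The parameter values and Laplacian below are hypothetical:

```python
import numpy as np

# Hypothetical gains and a small path-graph Laplacian L_k.
kappa, h, mu, eta = 2.0, 1.0, 0.5, 0.3
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])

roots_found = []
for sigma in np.linalg.eigvals(-L):
    # (lambda^2 + kappa*h*lambda + kappa) = sigma*(eta*lambda + mu*h*lambda + mu)
    lams = np.roots([1.0, kappa * h - sigma * (eta + mu * h), kappa - sigma * mu])
    for lam in lams:
        denom = eta * lam + mu * h * lam + mu
        if abs(lam) > 1e-9 and abs(denom) > 1e-9:
            ratio = (lam**2 + kappa * h * lam + kappa) / denom
            # the ratio lands back on an eigenvalue of -L_k
            assert np.min(np.abs(np.linalg.eigvals(-L) - ratio)) < 1e-8
            roots_found.append(lam)
```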
Then by noting that $E_{n\times nq}^{[j]}=\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}$ for every $j=1,\ldots,q$, it follows from (\[v-2\]) and (\[v-3\]) that $$\begin{aligned} \label{Fv} \small\left[\begin{array}{c} F_{k}\otimes I_{n}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}) \end{array}\right]\textbf{v}=\Big(\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)\textbf{v}=\textbf{0}_{(nq+n)\times 1}.\end{aligned}$$ Next, it follows from $vi$) of Proposition 6.1.7 of [@Bernstein:2009 p. 400] and $viii$) of Proposition 6.1.6 of [@Bernstein:2009 p. 399] that the general solution $\textbf{v}$ to (\[Fv\]) is given by the form $$\begin{aligned} \textbf{v}&=&\Big[I_{nq}-\Big(\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)^{+}\Big(\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{nq}-\Big(\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\otimes I_{n}\Big)\Big(\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{q}\otimes I_{n}-\Big(\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} 
\end{array}\right]^{+}\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[\Big(I_{q}-\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\Big)\otimes I_{n}\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}\Big)\otimes \textbf{e}_{i},\label{gsolution1a}\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$ and $j=1,\ldots,q$. Note that by Proposition 6.1.6 of [@Bernstein:2009 p. 399], $F_{k}^{\mathrm{T}}(F_{k}^{\mathrm{T}})^{+}=F_{k}^{\mathrm{T}}(F_{k}^{+})^{\mathrm{T}}=(F_{k}^{+}F_{k})^{\mathrm{T}}=F_{k}^{+}F_{k}$. It follows from Fact 6.5.17 of [@Bernstein:2009 p. 427] that $$\begin{aligned} \small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}=\left[\begin{array}{cc} F_{k}^{+}(I_{q}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\psi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \psi_{k} \end{array}\right],\end{aligned}$$ where $\psi_{k}$ is given by (\[psik\]). 
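Both Moore–Penrose facts used here, the symmetry of $F_{k}^{+}F_{k}$ and the kernel parametrization via the stacked matrix, can be spot-checked numerically for a hypothetical rank-deficient $F_{k}$; the explicit $\psi_{k}$ formula is not needed for this check, since `numpy.linalg.pinv` can be applied to the stacked matrix directly:

```python
import numpy as np

rng = np.random.default_rng(1)
q, n, j = 4, 2, 2
F = rng.standard_normal((q, q))
F[:, 0] = F[:, 1]                     # hypothetical rank-deficient F_k
Fp = np.linalg.pinv(F)

# (F^T)^+ = (F^+)^T, and F^+ F is an orthogonal projector, hence symmetric:
# F^T (F^T)^+ = (F^+ F)^T = F^+ F, as used above.
assert np.allclose(np.linalg.pinv(F.T), Fp.T)
assert np.allclose((Fp @ F).T, Fp @ F)
assert np.allclose(F.T @ np.linalg.pinv(F.T), Fp @ F)

# Kernel parametrization behind (gsolution1a): with M = [F; c g_j^T] stacked,
# v = ((I_q - M^+ M) (x) I_n) y solves (M (x) I_n) v = 0 for every y.
g = np.zeros((1, q)); g[0, j] = 1.0   # g_j^T
c = 0.75                              # stands in for the scalar in front of g_j^T
M = np.vstack([F, c * g])
y = rng.standard_normal(q * n)
v = np.kron(np.eye(q) - np.linalg.pinv(M) @ M, np.eye(n)) @ y
assert np.allclose(np.kron(M, np.eye(n)) @ v, 0)
assert not np.allclose(v, 0)          # the kernel is nontrivial here
```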
Hence, it follows that for every $j,l=1,\ldots,q$, $$\begin{aligned} \textbf{g}_{l}-\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}&=&\textbf{g}_{l}-\left[\begin{array}{cc} F_{k}^{+}(I_{q}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\psi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \psi_{k} \end{array}\right]\small\left[\begin{array}{c} F_{k}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}\nonumber\\ &=&\textbf{g}_{l}-\small\left[\begin{array}{cc} F_{k}^{+}(I_{q}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\psi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \psi_{k} \end{array}\right]\small\left[\begin{array}{c} F_{k}\textbf{g}_{l}\\ \frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l} \end{array}\right]\nonumber\\ &=&\textbf{g}_{l}-F_{k}^{+}\Big(I_{q}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\psi_{k}\textbf{g}_{j}^{\mathrm{T}}\Big)F_{k}\textbf{g}_{l}\nonumber\\ &&-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}\nonumber\\ &=&\textbf{g}_{l}-F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})F_{k}^{+}\psi_{k}\nonumber\\ &&-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}.\end{aligned}$$ Thus, (\[gsolution1a\]) becomes $$\begin{aligned} \label{gsoa} 
\textbf{v}=\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})F_{k}^{+}\psi_{k}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}\Big)\otimes \textbf{e}_{i}.\end{aligned}$$ Hence, $\textbf{x}_{1}=\frac{1+h_{k}\lambda_{4}}{\lambda_{4}}\textbf{v}$, $\textbf{x}_{2}=\textbf{v}\neq\textbf{0}_{nq\times 1}$ given by (\[gsoa\]), and $$\begin{aligned} \textbf{x}_{3}&=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{4}}{\lambda_{4}(\lambda_{4}+\kappa_{k})}E_{n\times nq}^{[j]}\textbf{v}\nonumber\\ &=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{4}}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})\textbf{v}\nonumber\\ &=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{4}}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})\Big(\Big(\textbf{g}_{l}-F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})F_{k}^{+}\psi_{k}\nonumber\\ &&-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}\Big)\otimes \textbf{e}_{i}\Big)\nonumber\\ &=&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{4}}{\lambda_{4}(\lambda_{4}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l}-\textbf{g}_{j}^{\mathrm{T}}F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})\textbf{g}_{j}^{\mathrm{T}}F_{k}^{+}\psi_{k}\nonumber\\ 
&&-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j}^{\mathrm{T}}\psi_{k}\Big)\otimes \textbf{e}_{i},\label{gsox3}\end{aligned}$$ where not all of $\omega_{\ell i}$ and $\varpi_{li}$ are zero. The corresponding eigenvectors for $\lambda_{4}$ are given by $$\begin{aligned} \label{eigv4} &&\textbf{x}=\nonumber\\ &&\Big[\frac{1+h_{k}\lambda_{4}^{*}}{\lambda_{4}^{*}}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})F_{k}^{+}\psi_{k}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}\Big)^{*}\otimes \textbf{e}_{i}^{\mathrm{T}},\nonumber\\ &&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})F_{k}^{+}\psi_{k}-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\psi_{k}\Big)^{*}\otimes \textbf{e}_{i}^{\mathrm{T}},\nonumber\\ &&\frac{\kappa_{k}+\kappa_{k}h_{k}\lambda_{4}^{*}}{\lambda_{4}^{*}(\lambda_{4}^{*}+\kappa_{k})}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l}-\textbf{g}_{j}^{\mathrm{T}}F_{k}^{+}F_{k}\textbf{g}_{l}+\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}F_{k}\textbf{g}_{l})\textbf{g}_{j}^{\mathrm{T}}F_{k}^{+}\psi_{k}\nonumber\\ &&-\frac{\kappa_{k}^{2}(1+h_{k}\lambda_{4})}{\lambda_{4}(\lambda_{4}+\kappa_{k})}(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j}^{\mathrm{T}}\psi_{k}\Big)^{*}\otimes \textbf{e}_{i}^{\mathrm{T}}\Big]^{*},\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$ and not all of them are zero. 
Therefore, $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{4} I_{2nq+n}\Big)$ is given by (\[egns3\]). If $\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}=0$, then $\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}=-\frac{\kappa_{k}\lambda}{\lambda+\kappa_{k}}\neq0$ since $\lambda\neq0$ and $\kappa_{k}\neq0$. In this case, it follows from (\[eqn\_v\]) and (\[Az=b\]) that $$\begin{aligned} &&\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}W_{k}^{[j]}\textbf{v}=(I_{q}\otimes P_{k})\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i},\label{id_1}\\ &&\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})\textbf{v}=\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}.\label{id_2}\end{aligned}$$ Since $I_{q}\otimes P_{k}$ is nonsingular, pre-multiplying $(I_{q}\otimes P_{k})^{-1}$ on both sides of (\[id\_1\]) yields $$\begin{aligned} \label{id_3} \frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}(\textbf{1}_{q\times 1}\otimes I_{n})E_{n\times nq}^{[j]}\textbf{v}=\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}\end{aligned}$$ Since by $i$) of Lemma 4.1 of [@HZ:TR:2013], $(\textbf{1}_{q\times 1}\otimes I_{n})E_{n\times nq}^{[j]}$ is idempotent, it follows from (\[id\_3\]) and $ii$) of Lemma 4.1 of [@HZ:TR:2013] that $$\begin{aligned} \sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}=\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}w_{lj}\textbf{w}_{0}\otimes\textbf{e}_{i}, \end{aligned}$$ and hence, $$\begin{aligned} \sum_{i=1}^{n}\Big(\omega_{0i}-\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}w_{lj}\Big)\textbf{w}_{0}\otimes\textbf{e}_{i}+\sum_{i=1}^{n}\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}=\textbf{0}_{nq\times 1},\end{aligned}$$ 
which implies that $\omega_{0i}-\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}w_{lj}=0$ and $\omega_{\ell i}=0$ for every $i=1,\ldots,n$, $j=1,\ldots,q$, and $\ell=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. Consequently, (\[id\_3\]) and (\[id\_2\]) can be simplified as $$\begin{aligned} &&\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}(\textbf{1}_{q\times 1}\otimes I_{n})E_{n\times nq}^{[j]}\textbf{v}=\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i},\label{new_1}\\ &&\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})\textbf{v}=\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}.\label{new_2}\end{aligned}$$ It follows from $ii$) of Lemma 4.1 of [@HZ:TR:2013] that (\[new\_1\]) has a specific solution $$\begin{aligned} \label{vss} \textbf{v}=\Big(\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}\Big)^{-1}\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}.\end{aligned}$$ Substituting (\[vss\]) into (\[new\_2\]) yields $\sum_{i=1}^{n}\omega_{0i}\textbf{w}_{0}\otimes\textbf{e}_{i}=\textbf{0}_{nq\times 1}$, which implies that $\omega_{0i}=0$ for every $i=1,\ldots,n$. Hence, (\[new\_1\]) and (\[new\_2\]) can be further simplified as $$\begin{aligned} &&\frac{\kappa_{k}^{2}(1+h_{k}\lambda)}{\lambda(\lambda+\kappa_{k})}(\textbf{1}_{q\times 1}\otimes I_{n})E_{n\times nq}^{[j]}\textbf{v}=\textbf{0}_{nq\times 1},\label{gs0_1}\\ &&\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})\textbf{v}=\textbf{0}_{nq\times 1}.\label{gs0_2}\end{aligned}$$ If $\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\neq0$, note that for $\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}=0$, $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big]=\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})\Big]=0$. 
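The two facts imported from Lemma 4.1 of [@HZ:TR:2013], the idempotence of $(\textbf{1}_{q\times1}\otimes I_{n})E_{n\times nq}^{[j]}$ and the fixed-point property behind the specific solution (\[vss\]), admit a quick numerical check; the sizes below are hypothetical and $E_{n\times nq}^{[j]}=\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}$ as identified earlier:

```python
import numpy as np

# Hypothetical sizes; j indexes the pinned block.
q, n, j = 4, 2, 1
In = np.eye(n)
g = np.zeros((1, q)); g[0, j] = 1.0
w0 = np.ones(q)                                   # w_0 = 1_{q x 1}

# (1_{q x 1} (x) I_n) E^{[j]} is idempotent, since
# (1 g_j^T)(1 g_j^T) = 1 (g_j^T 1) g_j^T = 1 g_j^T.
M = np.kron(np.ones((q, 1)), In) @ np.kron(g, In)
assert np.allclose(M @ M, M)

# M fixes every w_0 (x) e_i, which is why (vss) solves the simplified (new_1).
for i in range(n):
    assert np.allclose(M @ np.kron(w0, In[i]), np.kron(w0, In[i]))
```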
Hence, the general solution $\textbf{v}$ to (\[gs0\_1\]) and (\[gs0\_2\]) is given by the form of (\[gsoa\]) in which $\lambda_{4}$ is replaced by $\lambda_{5,6}$ satisfying $\frac{\kappa_{k}}{\lambda_{5,6}}+\lambda_{5,6}+\kappa_{k} h_{k}=0$. Thus, this case is similar to the previous one, and (\[lambda4\]) still holds with $\lambda_{4}$ replaced by $\lambda_{5,6}$, where $$\begin{aligned} \label{lambda5} \lambda_{5,6}=-\frac{\kappa_{k}h_{k}}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}h_{k}^{2}-4\kappa_{k}}.\end{aligned}$$ Thus, $\lambda=\lambda_{5,6}$ are indeed the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ and the corresponding eigenvectors are given by the form (\[eigv4\]) with $\lambda_{4}$ replaced by $\lambda_{5,6}$. Otherwise, if $\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}=0$ and $\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}=0$, then $\mu_{k}(\frac{1}{\lambda}+h_{k})=-\eta_{k}$ and $\kappa_{k}(\frac{1}{\lambda}+h_{k})=-\lambda$. Again, since $\lambda\neq0$, it follows from $\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}=0$ that $\kappa_{k}\neq0$. If $\mu_{k}=0$, then it follows from $\mu_{k}(\frac{1}{\lambda}+h_{k})=-\eta_{k}$ that $\eta_{k}=0$. In this case, $\lambda=\lambda_{5,6}$ are the eigenvalues of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. Furthermore, (\[gs0\_2\]) becomes trivial and (\[gs0\_1\]) is equivalent to $E_{n\times nq}^{[j]}\textbf{v}=\textbf{0}_{n\times 1}$, that is, $(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})\textbf{v}=\textbf{0}_{n\times 1}$. It follows from $vi$) of Proposition 6.1.7 of [@Bernstein:2009 p. 400] and $viii$) of Proposition 6.1.6 of [@Bernstein:2009 p. 
399] that the general solution $\textbf{v}$ to $(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})\textbf{v}=\textbf{0}_{n\times 1}$ is given by the form $$\begin{aligned} \textbf{v}&=&\Big[I_{nq}-(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})^{+}(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{nq}-((\textbf{g}_{j}^{\mathrm{T}})^{+}\otimes I_{n})(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{q}\otimes I_{n}-(((\textbf{g}_{j}^{\mathrm{T}})^{+}\textbf{g}_{j}^{\mathrm{T}})\otimes I_{n})\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[(I_{q}-((\textbf{g}_{j}^{\mathrm{T}})^{+}\textbf{g}_{j}^{\mathrm{T}}))\otimes I_{n}\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-((\textbf{g}_{j}^{\mathrm{T}})^{+}\textbf{g}_{j}^{\mathrm{T}})\textbf{g}_{l})\otimes \textbf{e}_{i},\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$ and $j=1,\ldots,q$. Note that it follows from Fact 6.3.2 of [@Bernstein:2009 p. 404] that $\textbf{g}_{j}^{+}=\textbf{g}_{j}^{\mathrm{T}}$, and hence, $(\textbf{g}_{j}^{\mathrm{T}})^{+}=\textbf{g}_{j}$ for every $j=1,\ldots,q$. Then we have $$\begin{aligned} \label{newv} \textbf{v}&=&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-(\textbf{g}_{j}\textbf{g}_{j}^{\mathrm{T}})\textbf{g}_{l})\otimes \textbf{e}_{i}\nonumber\\ &=&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})\otimes \textbf{e}_{i}.\end{aligned}$$ Hence, $\textbf{x}_{1}=\frac{1+h_{k}\lambda_{5,6}}{\lambda_{5,6}}\textbf{v}$, $\textbf{x}_{2}=\textbf{v}\neq\textbf{0}_{nq\times 1}$ where $\textbf{v}$ is given by (\[newv\]), and $\textbf{x}_{3}=\textbf{0}_{n\times 1}$. 
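The pseudoinverse facts for the coordinate vector $\textbf{g}_{j}$ and the resulting parametrization (\[newv\]) can be verified numerically with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
q, n, j = 4, 3, 2
g = np.zeros((q, 1)); g[j] = 1.0          # g_j: unit coordinate vector of R^q

# For a unit vector, g_j^+ = g_j^T, hence (g_j^T)^+ = g_j (cf. Fact 6.3.2).
assert np.allclose(np.linalg.pinv(g), g.T)
assert np.allclose(np.linalg.pinv(g.T), g)

# v = ((I_q - g_j g_j^T) (x) I_n) y solves (g_j^T (x) I_n) v = 0 for any y,
# matching the parametrization (newv).
y = rng.standard_normal(q * n)
v = np.kron(np.eye(q) - g @ g.T, np.eye(n)) @ y
assert np.allclose(np.kron(g.T, np.eye(n)) @ v, 0)
```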
The corresponding eigenvectors for $\lambda_{5,6}$ in this case are given by $$\begin{aligned} \textbf{x}=\Big[\frac{1+h_{k}\lambda_{5,6}^{*}}{\lambda_{5,6}^{*}}\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\textbf{0}_{1\times n}\Big]^{*},\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$ and not all of them are zero. Consequently, in this case $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{5,6} I_{2nq+n}\Big)$ is given by (\[egns4\]). Finally, if $\mu_{k}\neq0$, then it follows from $\mu_{k}(\frac{1}{\lambda}+h_{k})=-\eta_{k}$ that $\frac{1}{\lambda}+h_{k}=-\frac{\eta_{k}}{\mu_{k}}$. Together with $\kappa_{k}(\frac{1}{\lambda}+h_{k})=-\lambda$, we have $\lambda=\frac{\kappa_{k}\eta_{k}}{\mu_{k}}$. Since $\lambda\neq0$, it follows that $\eta_{k}\neq0$. Substituting this $\lambda$ into $\frac{1}{\lambda}+h_{k}=-\frac{\eta_{k}}{\mu_{k}}$ yields $h_{k}=-\frac{\eta_{k}}{\mu_{k}}-\frac{\mu_{k}}{\kappa_{k}\eta_{k}}<0$, which is a contradiction since $h_{k}\geq0$. Hence, this case is impossible. *Case 2.* If $\lambda=-\kappa_{k}$, then $\kappa_{k}\neq0$ and (\[detcon\]) becomes $$\begin{aligned} \label{detcase2} \det\small\left[\begin{array}{cc} \Big(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\Big)(L_{k}\otimes P_{k})+(\kappa_{k} h_{k}-1-\kappa_{k})(I_{q}\otimes P_{k}) & -\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\\ (\kappa_{k} h_{k}-1)E_{n\times nq}^{[j]} & \textbf{0}_{n\times n} \end{array}\right]=0.\end{aligned}$$ If $\kappa_{k}h_{k}=1$, then clearly (\[detcase2\]) holds. 
In this case, $$\begin{aligned} &&\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]\nonumber\\ &&=\det\Big[\Big(-\frac{\mu_{k}}{\kappa_{k}}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})-\kappa_{k}I_{nq}\Big]\det(I_{q}\otimes P_{k})\nonumber\\ &&=\kappa_{k}^{nq}\det\Big[\Big(-\frac{\mu_{k}}{\kappa_{k}^{2}}+\frac{\mu_{k} h_{k}}{\kappa_{k}}+\frac{\eta_{k}}{\kappa_{k}}\Big)(L_{k}\otimes I_{n})-I_{nq}\Big]\det(I_{q}\otimes P_{k})\nonumber\\ &&=\kappa_{k}^{nq}\det\Big[\frac{\eta_{k}}{\kappa_{k}}(L_{k}\otimes I_{n})-I_{nq}\Big](\det(P_{k}))^{q}.\end{aligned}$$ Hence, $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]=0$ if and only if $1\in{\mathrm{spec}}(\frac{\eta_{k}}{\kappa_{k}}L_{k})$. Thus, if $1\in{\mathrm{spec}}(\frac{\eta_{k}}{\kappa_{k}}L_{k})$ and $\kappa_{k}h_{k}=1$, then $\lambda=-\kappa_{k}$ is indeed an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. Clearly when $\kappa_{k}h_{k}=1$ and $\lambda=-\kappa_{k}$, $\textbf{x}_{1}=\frac{1+h_{k}\lambda}{\lambda}\textbf{x}_{2}=\textbf{0}_{nq\times 1}$, (\[hx3\]) becomes trivial, and (\[hx2\]) becomes $$\begin{aligned} \label{x2x3} (\eta_{k}(L_{k}\otimes P_{k})-\kappa_{k}I_{q}\otimes P_{k})\textbf{x}_{2}-\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{x}_{3}=\textbf{0}_{nq\times1}.\end{aligned}$$ Pre-multiplying $E_{n\times nq}^{[j]}(I_{q}\otimes P_{k}^{-1})$ on both sides of (\[x2x3\]) yields $$\begin{aligned} \label{x3x2} \textbf{x}_{3}=\Big[\frac{\eta_{k}}{\kappa_{k}}(L_{k}\otimes I_{n})-I_{nq}\Big]\textbf{x}_{2}.\end{aligned}$$ Note that $\textbf{x}_{2}$ can be chosen arbitrarily in $\mathbb{C}^{nq}$ other than $\textbf{0}_{nq\times 1}$. 
Then $\textbf{x}_{2}$ can be represented as $\textbf{x}_{2}=\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}\otimes\textbf{e}_{i})$, where $\alpha_{li}\in\mathbb{C}$, not all of $\alpha_{li}$ are zero. Then it follows from (\[x3x2\]) that $\textbf{x}_{3}=\sum_{i=1}^{n}\sum_{l=1}^{q}\frac{\eta_{k}}{\kappa_{k}}\alpha_{li}(L_{k}\otimes I_{n})(\textbf{g}_{l}\otimes\textbf{e}_{i})-\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}\otimes\textbf{e}_{i})=\sum_{i=1}^{n}\sum_{l=1}^{q}\frac{\eta_{k}}{\kappa_{k}}\alpha_{li}(L_{k}\textbf{g}_{l}\otimes\textbf{e}_{i})-\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}\otimes\textbf{e}_{i})$, where $\alpha_{li}\in\mathbb{C}$ and not all of $\alpha_{li}$ are zero. Clearly such $\textbf{x}_{i}$, $i=1,2,3$, satisfy (\[Aeig\_1\])–(\[Aeig\_3\]). Thus, the corresponding eigenvectors for the eigenvalue $\lambda=\lambda_{3}$ are given by $$\begin{aligned} \textbf{x}=\Big[\textbf{0}_{1\times nq},\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}},\sum_{i=1}^{n}\sum_{l=1}^{q}\frac{\eta_{k}}{\kappa_{k}}\alpha_{li}(L_{k}\textbf{g}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}-\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}\otimes\textbf{e}_{i})^{\mathrm{T}}\Big]^{*},\end{aligned}$$ where $\alpha_{li}\in\mathbb{C}$, not all of $\alpha_{li}$ are zero, and $$\begin{aligned} \label{lambda3} \lambda_{3}=-\kappa_{k}.\end{aligned}$$ Therefore, $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)$ is given by (\[egns5\]). Now we consider the case where $\kappa_{k}h_{k}\neq1$. 
Then in this case (\[detcase2\]) holds if and only if the equation $$\begin{aligned} \small\left[\begin{array}{cc} \Big(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\Big)(L_{k}\otimes P_{k})+(\kappa_{k} h_{k}-1-\kappa_{k})(I_{q}\otimes P_{k}) & -\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\\ (\kappa_{k} h_{k}-1)E_{n\times nq}^{[j]} & \textbf{0}_{n\times n} \end{array}\right]\textbf{u}=\textbf{0}_{(nq+n)\times 1}\label{deteqn}\end{aligned}$$ has a nontrivial solution $\textbf{u}\in\mathbb{C}^{nq+n}$. Let $\textbf{u}=[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*},\textbf{u}_{0}^{*}]^{*}$, where $\textbf{u}_{i}\in\mathbb{C}^{n}$, $i=0,1,\ldots,q$. Then it follows from (\[deteqn\]) that $$\begin{aligned} \Big(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\Big)(L_{k}\otimes P_{k})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}+(\kappa_{k} h_{k}-1-\kappa_{k})(I_{q}\otimes P_{k})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}\nonumber\\ -\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{u}_{0}=\textbf{0}_{nq\times 1},\label{Enqu1}\\ (\kappa_{k} h_{k}-1)E_{n\times nq}^{[j]}[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\textbf{0}_{n\times 1}.\label{Enqu}\end{aligned}$$ If $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}=0$, in this case, since $\lambda=-\kappa_{k}$, then it follows that $$\begin{aligned} \det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]&=&\det\Big[(\kappa_{k} h_{k}-1-\kappa_{k})(I_{q}\otimes P_{k})\Big]\nonumber\\ &=&(\kappa_{k} h_{k}-1-\kappa_{k})^{nq}\det(I_{q}\otimes P_{k}).\end{aligned}$$ Hence, $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]=0$ if and only if $\kappa_{k} h_{k}-1-\kappa_{k}=0$. 
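The determinant factorization just used amounts to $\det\big(c(I_{q}\otimes P_{k})\big)=c^{nq}(\det P_{k})^{q}$ with $c=\kappa_{k}h_{k}-1-\kappa_{k}$; a quick numerical check with a hypothetical nonsingular $P_{k}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 2, 3
P = rng.standard_normal((n, n)) + n * np.eye(n)   # hypothetical nonsingular P_k
c = -1.5                                          # stands for kappa*h - 1 - kappa

# det(c (I_q (x) P)) = c^{nq} det(I_q (x) P) = c^{nq} (det P)^q, so the
# determinant vanishes iff c = kappa_k h_k - 1 - kappa_k = 0.
lhs = np.linalg.det(c * np.kron(np.eye(q), P))
rhs = c ** (n * q) * np.linalg.det(P) ** q
assert np.isclose(lhs, rhs)
```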
If $\kappa_{k} h_{k}-1-\kappa_{k}=0$, eliminating $h_{k}$ in $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}=0$ by using $\kappa_{k} h_{k}-1-\kappa_{k}=0$ yields $\mu_{k}+\eta_{k}=0$, and hence, $\mu_{k}=\eta_{k}=0$ since $\mu_{k},\eta_{k}\geq0$. Furthermore, $h_{k}\kappa_{k}=1+\kappa_{k}\neq 1$ due to $\kappa_{k}\neq0$. Next, since $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}=0$ and $\kappa_{k} h_{k}-1-\kappa_{k}=0$, it follows from (\[Enqu1\]) that $P_{k}\textbf{u}_{0}=\textbf{0}_{n\times 1}$, i.e., $\textbf{u}_{0}=\textbf{0}_{n\times 1}$. Thus in this case, (\[Enqu\]) becomes $E_{n\times nq}^{[j]}[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\textbf{0}_{n\times 1}$, that is, $(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\textbf{0}_{n\times 1}$. Now it follows from (\[newv\]) that $[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})\otimes \textbf{e}_{i}$, where $\alpha_{li}\in\mathbb{C}$ and not all of them are zero. Clearly $\textbf{x}_{1}=\textbf{0}_{nq\times 1}$, $\textbf{x}_{2}=\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})\otimes \textbf{e}_{i}$, and $\textbf{x}_{3}=\textbf{0}_{n\times 1}$ satisfy (\[Aeig\_1\])–(\[Aeig\_3\]). 
Thus, if $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}=0$ and $h_{k}=1+\frac{1}{\kappa_{k}}$, then $\lambda=-\kappa_{k}$ is indeed an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ and the corresponding eigenvectors for the eigenvalue $\lambda_{3}$ of the form (\[lambda3\]) are given by $$\begin{aligned} \textbf{x}=\Big[\textbf{0}_{1\times nq},\sum_{i=1}^{n}\sum_{l=1}^{q}\alpha_{li}(\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\textbf{g}_{j})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\textbf{0}_{1\times n}\Big]^{*},\end{aligned}$$ where $\alpha_{li}\in\mathbb{C}$ and not all $\alpha_{li}$ are zero. Therefore, $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)$ is given by (\[egns6\]). If $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\neq 0$ and $\kappa_{k} h_{k}-1-\kappa_{k}=0$, then $h_{k}=1+\frac{1}{\kappa_{k}}$. Clearly $h_{k}\kappa_{k}\neq 1$. In this case, since $\lambda=-\kappa_{k}$, it follows that $$\begin{aligned} &&\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]\nonumber\\ &&=\det\Big[\Big(-\frac{\mu_{k}}{\kappa_{k}}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})-\kappa_{k}I_{nq}\Big]\det(I_{q}\otimes P_{k})\nonumber\\ &&=\kappa_{k}^{nq}\det\Big[\frac{\mu_{k}+\eta_{k}}{\kappa_{k}}(L_{k}\otimes I_{n})-I_{nq}\Big](\det(P_{k}))^{q}.\end{aligned}$$ Hence, $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]=0$ if and only if $1\in{\mathrm{spec}}(\frac{\mu_{k}+\eta_{k}}{\kappa_{k}}L_{k})$. Note that $1\in{\mathrm{spec}}(\frac{\mu_{k}+\eta_{k}}{\kappa_{k}}L_{k})$ implies that $\mu_{k}+\eta_{k}\neq0$ and hence, by using $\kappa_{k} h_{k}-1-\kappa_{k}=0$, $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}=\mu_{k}+\eta_{k}\neq0$. 
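The equivalence between the vanishing determinant and $1\in{\mathrm{spec}}(\frac{\mu_{k}+\eta_{k}}{\kappa_{k}}L_{k})$ can be checked numerically by choosing hypothetical values for which the condition holds; the Laplacian below has eigenvalues $\{0,1,3\}$, so taking $\kappa_{k}=3(\mu_{k}+\eta_{k})$ places $1$ in the required spectrum:

```python
import numpy as np

# Hypothetical values chosen so that 1 is an eigenvalue of ((mu+eta)/kappa) L_k.
mu, eta = 0.4, 0.2
kappa = 3.0 * (mu + eta)
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
n = 2
c = (mu + eta) / kappa

# 1 in spec(c L)  <=>  det(c (L (x) I_n) - I) = 0.
assert np.min(np.abs(np.linalg.eigvals(c * L) - 1.0)) < 1e-9
assert abs(np.linalg.det(c * np.kron(L, np.eye(n)) - np.eye(3 * n))) < 1e-9

# Perturbing kappa breaks the condition and the determinant becomes nonzero.
c2 = (mu + eta) / (kappa + 1.0)
assert abs(np.linalg.det(c2 * np.kron(L, np.eye(n)) - np.eye(3 * n))) > 1e-6
```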
Now we assume that $1\in{\mathrm{spec}}(\frac{\mu_{k}+\eta_{k}}{\kappa_{k}}L_{k})$ and $h_{k}=1+\frac{1}{\kappa_{k}}$. Next, since $\kappa_{k} h_{k}-1-\kappa_{k}=0$ and $\mu_{k}+\eta_{k}\neq0$, it follows from (\[Enqu1\]) that $$\begin{aligned} \label{u1u0uq} (L_{k}\otimes I_{n})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}(\textbf{1}_{q\times 1}\otimes I_{n})\textbf{u}_{0}.\end{aligned}$$ Note that $(L_{k}\otimes I_{n})(\textbf{1}_{q\times 1}\otimes I_{n})=\textbf{0}_{nq\times n}$. Pre-multiplying $L_{k}\otimes I_{n}$ on both sides of (\[u1u0uq\]) yields $(L_{k}\otimes I_{n})(L_{k}\otimes I_{n})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\textbf{0}_{nq\times 1}$, which implies that $(L_{k}\otimes I_{n})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}\in\ker(L_{k}\otimes I_{n})$. Hence, $$\begin{aligned} \label{lku1} (L_{k}\otimes I_{n})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i},\end{aligned}$$ where $\alpha_{li}\in\mathbb{C}$. Let $\textbf{u}_{0}=\sum_{i=1}^{n}\beta_{i}\textbf{e}_{i}$, where $\beta_{i}\in\mathbb{C}$. Then it follows that $(\textbf{1}_{q\times 1}\otimes I_{n})\textbf{u}_{0}=\sum_{i=1}^{n}\beta_{i}(\textbf{1}_{q\times 1}\otimes I_{n})\textbf{e}_{i}=\sum_{i=1}^{n}\beta_{i}(\textbf{w}_{0}\otimes\textbf{e}_{i})$. Now it follows from (\[u1u0uq\]) and (\[lku1\]) that $$\begin{aligned} \sum_{i=1}^{n}\Big(\alpha_{0i}-\beta_{i}\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\Big)\textbf{w}_{0}\otimes\textbf{e}_{i}+\sum_{l=1}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\alpha_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}=\textbf{0}_{nq\times 1},\end{aligned}$$ which implies that $\alpha_{0i}-\beta_{i}\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}=0$ and $\alpha_{li}=0$ for every $i=1,\ldots,n$ and every $l=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. 
Hence, $$\begin{aligned} \label{Axbeqn} (L_{k}\otimes I_{n})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}.\end{aligned}$$ Together with $E_{n\times nq}^{[j]}[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=(\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n})[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\textbf{0}_{n\times 1}$, we have $$\begin{aligned} \label{Axbeqnab} \small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right][\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i} \\ \textbf{0}_{n\times 1}\\ \end{array}\right].\end{aligned}$$ Now it follows from $ii$) of Theorem 2.6.4 of [@Bernstein:2009 p. 108] that (\[Axbeqnab\]) has a solution $[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}$ if and only if $$\begin{aligned} \label{rankconab} {\mathrm{rank}}\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]={\mathrm{rank}}\small\left[\begin{array}{cc} L_{k}\otimes I_{n} & \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} & \textbf{0}_{n\times 1}\\ \end{array}\right].\end{aligned}$$ We claim that (\[rankconab\]) is indeed true. First, if $\beta_{i}=0$ for every $i=1,\ldots,n$, then it is clear that ${\mathrm{rank}}\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]={\mathrm{rank}}\small\left[\begin{array}{cc} L_{k}\otimes I_{n} & \textbf{0}_{nq\times 1}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} & \textbf{0}_{n\times 1}\\ \end{array}\right]$. Alternatively, assume that $\beta_{i}\neq0$ for some $i\in\{1,\ldots,n\}$. Note that it follows from Fact 2.11.8 of [@Bernstein:2009 p. 
132] that ${\mathrm{rank}}\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]\leq{\mathrm{rank}}\small\left[\begin{array}{cc} L_{k}\otimes I_{n} & \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} & \textbf{0}_{n\times 1}\\ \end{array}\right]$. To show (\[rankconab\]), it suffices to show that $$\begin{aligned} {\mathrm{def}}\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]\leq{\mathrm{def}}\small\left[\begin{array}{cc} L_{k}\otimes I_{n} & \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} & \textbf{0}_{n\times 1}\\ \end{array}\right],\end{aligned}$$ or, equivalently, $$\begin{aligned} \dim\ker\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]\leq\dim\ker\small\left[\begin{array}{cc} L_{k}\otimes I_{n} & \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} & \textbf{0}_{n\times 1}\\ \end{array}\right].\end{aligned}$$ Let $s\in\mathbb{C}$ be such that $s\in\ker\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{0}_{n\times 1}\\ \end{array}\right]$. Then $s\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\beta_{i}=0$ for every $i=1,\ldots,n$; since $\beta_{i}\neq0$ for some $i\in\{1,\ldots,n\}$, it follows that $s=0$. Thus, $\dim\ker\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{0}_{n\times 1}\\ \end{array}\right]=0$. Consequently, it follows from Fact 2.11.8 of [@Bernstein:2009 p. 
132] that $$\begin{aligned} \dim\ker\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]&=&\dim\ker\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]+\dim\ker\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{0}_{n\times 1}\\ \end{array}\right]\nonumber\\ &\leq&\dim\ker\small\left[\begin{array}{cc} L_{k}\otimes I_{n} & \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} & \textbf{0}_{n\times 1}\\ \end{array}\right], \end{aligned}$$ which implies that ${\mathrm{rank}}\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]\geq{\mathrm{rank}}\small\left[\begin{array}{cc} L_{k}\otimes I_{n} & \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} & \textbf{0}_{n\times 1}\\ \end{array}\right]$. Hence, (\[rankconab\]) holds. Next, it follows from $vi$) of Proposition 6.1.7 of [@Bernstein:2009 p. 400] and $viii)$ of Proposition 6.1.6 of [@Bernstein:2009 p. 
399] that the general solution to (\[Axbeqnab\]) is given by the form $$\begin{aligned} \label{usolutionab} [\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}&=&\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]^{+}\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \textbf{0}_{n\times 1}\\ \end{array}\right]+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}\Big(I_{nq}-\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]^{+}\small\left[\begin{array}{c} L_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}\\ \end{array}\right]\Big)\nonumber\\ &&(\textbf{g}_{l}\otimes\textbf{e}_{i})\nonumber\\ &=&\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]\otimes I_{n}\Big)^{+}\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{w}_{0}\otimes\textbf{e}_{i}\\ \sum_{i=1}^{n}0\otimes\textbf{e}_{i}\\ \end{array}\right]+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}\Big(I_{nq}-\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]\otimes I_{n}\Big)^{+}\nonumber\\ &&\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]\otimes I_{n}\Big)\Big)(\textbf{g}_{l}\otimes\textbf{e}_{i})\nonumber\\ &=&\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]^{+}\otimes I_{n}\Big)\Big(\sum_{i=1}^{n}\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\beta_{i}\textbf{w}_{0}\\ 0\\ \end{array}\right]\otimes\textbf{e}_{i}\Big)+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}\Big(I_{q}\otimes I_{n}-\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]^{+}\otimes I_{n}\Big)\nonumber\\ &&\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]\otimes 
I_{n}\Big)\Big)(\textbf{g}_{l}\otimes\textbf{e}_{i})\nonumber\\ &=&\sum_{i=1}^{n}\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]^{+}\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\beta_{i}\textbf{w}_{0}\\ 0\\ \end{array}\right]\Big)\otimes\textbf{e}_{i}+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}\Big(I_{q}\otimes I_{n}-\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]^{+}\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]\otimes I_{n}\Big)\Big)\nonumber\\ &&(\textbf{g}_{l}\otimes\textbf{e}_{i})\nonumber\\ &=&\sum_{i=1}^{n}\Big(\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]^{+}\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\beta_{i}\textbf{w}_{0}\\ 0\\ \end{array}\right]\Big)\otimes\textbf{e}_{i}+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}\Big(\textbf{g}_{l}-\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]^{+}\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}}\\ \end{array}\right]\textbf{g}_{l}\Big)\otimes\textbf{e}_{i},\end{aligned}$$ where $\gamma_{li}\in\mathbb{C}$. Note that by Proposition 6.1.6 of [@Bernstein:2009 p. 399], $L_{k}^{\mathrm{T}}(L_{k}^{\mathrm{T}})^{+}=L_{k}^{\mathrm{T}}(L_{k}^{+})^{\mathrm{T}}=(L_{k}^{+}L_{k})^{\mathrm{T}}=L_{k}^{+}L_{k}$. It follows from Fact 6.5.17 of [@Bernstein:2009 p. 427] that $$\begin{aligned} \small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}=\small\left[\begin{array}{cc} L_{k}^{+}(I_{q}-\varphi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \varphi_{k} \end{array}\right],\end{aligned}$$ where $\varphi_{k}$ is given by (\[varphik\]). Note that $\textbf{g}_{j}^{\mathrm{T}}\textbf{w}_{0}=1$ for every $j=1,\ldots,q$. 
Hence, it follows that for every $i=1,\ldots,n$ and every $j,l=1,\ldots,q$, $$\begin{aligned} \small\left[\begin{array}{cc} L_{k}^{+}(I_{q}-\varphi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \varphi_{k} \end{array}\right]\small\left[\begin{array}{c} \frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\beta_{i}\textbf{w}_{0}\\ 0\\ \end{array}\right]&=&\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\beta_{i}L_{k}^{+}\textbf{w}_{0}-\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\beta_{i}L_{k}^{+}\varphi_{k},\\ \textbf{g}_{l}-\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}&=&\textbf{g}_{l}-\small\left[\begin{array}{cc} L_{k}^{+}(I_{q}-\varphi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \varphi_{k} \end{array}\right]\small\left[\begin{array}{c} L_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}\nonumber\\ &=&\textbf{g}_{l}-\small\left[\begin{array}{cc} L_{k}^{+}(I_{q}-\varphi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \varphi_{k} \end{array}\right]\small\left[\begin{array}{c} L_{k}\textbf{g}_{l}\\ \textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l} \end{array}\right]\nonumber\\ &=&\textbf{g}_{l}-L_{k}^{+}(I_{q}-\varphi_{k}\textbf{g}_{j}^{\mathrm{T}})L_{k}\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\varphi_{k}\nonumber\\ &=&\textbf{g}_{l}-L_{k}^{+}L_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}L_{k}\textbf{g}_{l})L_{k}^{+}\varphi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\varphi_{k}.\end{aligned}$$ Then (\[usolutionab\]) becomes $$\begin{aligned} \label{gsolutionab} [\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}&=&\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}L_{k}^{+}\textbf{w}_{0}\otimes\textbf{e}_{i}-\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}L_{k}^{+}\varphi_{k}\otimes\textbf{e}_{i}\nonumber\\ 
&&+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}(\textbf{g}_{l}-L_{k}^{+}L_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}L_{k}\textbf{g}_{l})L_{k}^{+}\varphi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\varphi_{k})\otimes \textbf{e}_{i}.\end{aligned}$$ In summary, if $1\in{\mathrm{spec}}(\frac{\mu_{k}+\eta_{k}}{\kappa_{k}}L_{k})$ and $h_{k}=1+\frac{1}{\kappa_{k}}$, then $\lambda=-\kappa_{k}$ is indeed an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. In this case, $\textbf{x}_{1}=\textbf{0}_{nq\times 1}$, $\textbf{x}_{2}=[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}$ given by (\[gsolutionab\]), and $\textbf{x}_{3}=\sum_{i=1}^{n}\beta_{i}\textbf{e}_{i}$, where not all of $\beta_{i}$ and $\gamma_{li}$ are zero. The corresponding eigenvectors for $\lambda_{3}$ are given by $$\begin{aligned} \label{xw0ab} \textbf{x}&=&\Big[\textbf{0}_{1\times nq},\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}(L_{k}^{+}\textbf{w}_{0}\otimes\textbf{e}_{i})^{\mathrm{T}}-\frac{\kappa_{k}}{\mu_{k}+\eta_{k}}\sum_{i=1}^{n}\beta_{i}(L_{k}^{+}\varphi_{k}\otimes\textbf{e}_{i})^{\mathrm{T}}\nonumber\\ &&+\sum_{l=1}^{q}\sum_{i=1}^{n}\gamma_{li}(\textbf{g}_{l}-L_{k}^{+}L_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}L_{k}\textbf{g}_{l})L_{k}^{+}\varphi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\varphi_{k})^{\mathrm{T}}\otimes \textbf{e}_{i}^{\mathrm{T}},\sum_{i=1}^{n}\beta_{i}\textbf{e}_{i}^{\mathrm{T}}\Big]^{*},\end{aligned}$$ where $\beta_{i}\in\mathbb{C}$ and $\gamma_{li}\in\mathbb{C}$ and not all of them are zero. Therefore, $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)$ is given by (\[egns7\]). 
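The case analysis above leans on two Kronecker-product facts: $L_{k}\textbf{1}_{q\times 1}=\textbf{0}$ gives $(L_{k}\otimes I_{n})(\textbf{1}_{q\times 1}\otimes I_{n})=\textbf{0}_{nq\times n}$, and the Moore–Penrose pseudoinverse distributes over the Kronecker product, $(A\otimes I_{n})^{+}=A^{+}\otimes I_{n}$. These can be checked numerically; the sketch below uses an illustrative 3-node path-graph Laplacian with $n=2$ and $\textbf{g}_{j}=\textbf{e}_{1}$ (these specific matrices are our choice, not from the lemma).

```python
import numpy as np

# Illustrative Laplacian of a 3-node path graph (q = 3), state dimension n = 2.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
q, n = 3, 2
In = np.eye(n)

# L 1_q = 0 implies (L ⊗ I_n)(1_q ⊗ I_n) = 0_{nq×n}.
assert np.allclose(np.kron(L, In) @ np.kron(np.ones((q, 1)), In), 0)

# The pseudoinverse distributes over the Kronecker product: (A ⊗ I_n)^+ = A^+ ⊗ I_n.
A = np.vstack([L, np.array([1., 0., 0.])])   # stacked [L; g_j^T], as in the proof
assert np.allclose(np.linalg.pinv(np.kron(A, In)),
                   np.kron(np.linalg.pinv(A), In))

# dim ker(L ⊗ I_n) = n (q − rank L): the span of the vectors w_l ⊗ e_i.
assert np.linalg.matrix_rank(np.kron(L, In)) == n * np.linalg.matrix_rank(L)
```

Both identities hold for any matrix $A$ and any identity factor, which is what allows $I_{n}$ to be pulled out of the block pseudoinverse in (\[usolutionab\]).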
If $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\neq 0$, $\kappa_{k} h_{k}-1-\kappa_{k}\neq0$, and $\kappa_{k} h_{k}-1\neq0$, then, since $\lambda=-\kappa_{k}$, it follows that $$\begin{aligned} &&\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)(I_{q}\otimes P_{k})\Big]\nonumber\\ &&=\det\Big[\Big(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\Big)(L_{k}\otimes I_{n})+(\kappa_{k} h_{k}-1-\kappa_{k})I_{nq}\Big]\det(I_{q}\otimes P_{k})\nonumber\\ &&=(-\kappa_{k} h_{k}+1+\kappa_{k})^{nq}\det\Big[\frac{\mu_{k}(\kappa_{k}h_{k}-1)+\eta_{k}\kappa_{k}}{\kappa_{k}(-\kappa_{k} h_{k}+1+\kappa_{k})}(L_{k}\otimes I_{n})-I_{nq}\Big](\det(P_{k}))^{q}.\end{aligned}$$ Hence, $\det\Big[\Big(\frac{\mu_{k}}{\lambda}+\mu_{k} h_{k}+\eta_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{\kappa_{k}}{\lambda}+\lambda+\kappa_{k} h_{k}\Big)I_{nq}\Big]=0$ if and only if $1\in{\mathrm{spec}}(\frac{\mu_{k}(\kappa_{k}h_{k}-1)+\eta_{k}\kappa_{k}}{\kappa_{k}(-\kappa_{k} h_{k}+1+\kappa_{k})}L_{k})$. Again, note that $1\in{\mathrm{spec}}(\frac{\mu_{k}(\kappa_{k}h_{k}-1)+\eta_{k}\kappa_{k}}{\kappa_{k}(-\kappa_{k} h_{k}+1+\kappa_{k})}L_{k})$ implies that $\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\neq 0$ and $\kappa_{k} h_{k}-1-\kappa_{k}\neq0$. Now we assume that $1\in{\mathrm{spec}}(\frac{\mu_{k}(\kappa_{k}h_{k}-1)+\eta_{k}\kappa_{k}}{\kappa_{k}(-\kappa_{k} h_{k}+1+\kappa_{k})}L_{k})$ and $\kappa_{k}h_{k}\neq 1$. 
Next, let $\textbf{u}_{0}=\sum_{i=1}^{n}\beta_{i}\textbf{e}_{i}$, where $\beta_{i}\in\mathbb{C}$. Then it follows from (\[Enqu1\]) that $$\begin{aligned} \label{xabs} \Big(\Big(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\Big)(L_{k}\otimes I_{n})+(\kappa_{k} h_{k}-1-\kappa_{k})I_{nq}\Big)[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\kappa_{k}\sum_{i=1}^{n}\beta_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}.\end{aligned}$$ Note that a specific solution $[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}$ to (\[xabs\]) is given by $$\begin{aligned} \label{specific} [\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\frac{\kappa_{k}}{\kappa_{k}h_{k}-1-\kappa_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{1}_{q\times 1}\otimes\textbf{e}_{i}.\end{aligned}$$ Substituting (\[specific\]) into (\[Enqu\]) by using $iii$) of Lemma \[lemma\_EW\] yields $\frac{\kappa_{k}(\kappa_{k}h_{k}-1)}{\kappa_{k}h_{k}-1-\kappa_{k}}\sum_{i=1}^{n}\beta_{i}E_{n\times nq}^{[j]}(\textbf{1}_{q\times 1}\otimes\textbf{e}_{i})=\frac{\kappa_{k}(\kappa_{k}h_{k}-1)}{\kappa_{k}h_{k}-1-\kappa_{k}}\sum_{i=1}^{n}\beta_{i}\textbf{e}_{i}=\textbf{0}_{n\times 1}$, which implies that $\beta_{i}=0$ for every $i=1,\ldots,n$, and hence, $\textbf{u}_{0}=\textbf{0}_{n\times 1}$. Thus, (\[xabs\]) becomes $$\begin{aligned} \label{homo} \Big(\Big(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k}\Big)(L_{k}\otimes I_{n})+(\kappa_{k} h_{k}-1-\kappa_{k})I_{nq}\Big)[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\textbf{0}_{nq\times 1}.\end{aligned}$$ Let $M_{k}=(\frac{\mu_{k}}{\kappa_{k}}(\kappa_{k} h_{k}-1)+\eta_{k})L_{k}+(\kappa_{k} h_{k}-1-\kappa_{k})I_{q}$. Again, note that $E_{n\times nq}^{[j]}=\textbf{g}_{j}^{\mathrm{T}}\otimes I_{n}$ for every $j=1,\ldots,q$. 
Then it follows from (\[homo\]) and (\[Enqu\]) that $$\begin{aligned} \label{uMg} \small\left[\begin{array}{c} M_{k}\otimes I_{n}\\ \textbf{g}_{j}^{\mathrm{T}}\otimes I_{n} \end{array}\right][\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\Big(\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\textbf{0}_{(nq+n)\times 1}.\end{aligned}$$ Next, it follows from $vi$) of Proposition 6.1.7 of [@Bernstein:2009 p. 400] and $viii$) of Proposition 6.1.6 of [@Bernstein:2009 p. 399] that the general solution $[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}$ to (\[uMg\]) is given by the form $$\begin{aligned} [\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}&=&\Big[I_{nq}-\Big(\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)^{+}\Big(\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{nq}-\Big(\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\otimes I_{n}\Big)\Big(\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[I_{q}\otimes I_{n}-\Big(\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\otimes I_{n}\Big)\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\Big[\Big(I_{q}-\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\Big)\otimes 
I_{n}\Big]\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\textbf{g}_{l}\otimes\textbf{e}_{i}\nonumber\\ &=&\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}\Big)\otimes \textbf{e}_{i},\label{gsolution1}\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$ and $j=1,\ldots,q$. Note that by Proposition 6.1.6 of [@Bernstein:2009 p. 399], $M_{k}^{\mathrm{T}}(M_{k}^{\mathrm{T}})^{+}=M_{k}^{\mathrm{T}}(M_{k}^{+})^{\mathrm{T}}=(M_{k}^{+}M_{k})^{\mathrm{T}}=M_{k}^{+}M_{k}$. It follows from Fact 6.5.17 of [@Bernstein:2009 p. 427] that $$\begin{aligned} \small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}=\small\left[\begin{array}{cc} M_{k}^{+}(I_{q}-\phi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \phi_{k} \end{array}\right],\end{aligned}$$ where $\phi_{k}$ is given by (\[phik\]). Hence, it follows that for every $j,l=1,\ldots,q$, $$\begin{aligned} \textbf{g}_{l}-\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]^{+}\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}&=&\textbf{g}_{l}-\small\left[\begin{array}{cc} M_{k}^{+}(I_{q}-\phi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \phi_{k} \end{array}\right]\small\left[\begin{array}{c} M_{k}\\ \textbf{g}_{j}^{\mathrm{T}} \end{array}\right]\textbf{g}_{l}\nonumber\\ &=&\textbf{g}_{l}-\small\left[\begin{array}{cc} M_{k}^{+}(I_{q}-\phi_{k}\textbf{g}_{j}^{\mathrm{T}}) & \phi_{k} \end{array}\right]\small\left[\begin{array}{c} M_{k}\textbf{g}_{l}\\ \textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l} \end{array}\right]\nonumber\\ &=&\textbf{g}_{l}-M_{k}^{+}(I_{q}-\phi_{k}\textbf{g}_{j}^{\mathrm{T}})M_{k}\textbf{g}_{l}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\phi_{k}\nonumber\\ 
&=&\textbf{g}_{l}-M_{k}^{+}M_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}M_{k}\textbf{g}_{l})M_{k}^{+}\phi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\phi_{k}.\end{aligned}$$ Thus, (\[gsolution1\]) becomes $$\begin{aligned} \label{gso} [\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}=\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-M_{k}^{+}M_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}M_{k}\textbf{g}_{l})M_{k}^{+}\phi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\phi_{k}\Big)\otimes \textbf{e}_{i}.\end{aligned}$$ In summary, if $1\in{\mathrm{spec}}(\frac{\mu_{k}(\kappa_{k}h_{k}-1)+\eta_{k}\kappa_{k}}{\kappa_{k}(-\kappa_{k} h_{k}+1+\kappa_{k})}L_{k})$ and $\kappa_{k}h_{k}\neq 1$, then $\lambda=-\kappa_{k}$ is indeed an eigenvalue of $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$. In this case, $\textbf{x}_{1}=\textbf{0}_{nq\times 1}$, $\textbf{x}_{2}=[\textbf{u}_{1}^{*},\ldots,\textbf{u}_{q}^{*}]^{*}$ given by (\[gso\]), and $\textbf{x}_{3}=\textbf{0}_{n\times 1}$, where not all of $\varpi_{li}$ are zero. The corresponding eigenvectors for $\lambda_{3}$ are given by $$\begin{aligned} \textbf{x}&=&\Big[\textbf{0}_{1\times nq},\sum_{i=1}^{n}\sum_{l=1}^{q}\varpi_{li}\Big(\textbf{g}_{l}-M_{k}^{+}M_{k}\textbf{g}_{l}+(\textbf{g}_{j}^{\mathrm{T}}M_{k}\textbf{g}_{l})M_{k}^{+}\phi_{k}-(\textbf{g}_{j}^{\mathrm{T}}\textbf{g}_{l})\phi_{k}\Big)^{\mathrm{T}}\otimes\textbf{e}_{i}^{\mathrm{T}},\textbf{0}_{1\times n}\Big]^{*},\end{aligned}$$ where $\varpi_{li}\in\mathbb{C}$ and not all of them are zero. Therefore, $\ker\Big(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}-\lambda_{3} I_{2nq+n}\Big)$ is given by (\[egns8\]). 
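The solution formulas in the preceding cases follow one pattern: a particular solution built from $L_{k}\textbf{1}_{q\times1}=\textbf{0}$, plus a homogeneous part generated by the kernel projector $I-M^{+}M$. A small numerical sanity check of both ingredients (the matrices below are illustrative stand-ins of our own choosing, with $M_{k}$ made singular and $\textbf{g}_{j}$ a standard basis vector):

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])        # path-graph Laplacian, q = 3
q, n = 3, 2
In, Inq = np.eye(n), np.eye(3 * n)
kappa, b = 2.0, -1.3                   # b stands in for κ_k h_k − 1 − κ_k ≠ 0
beta = np.array([0.5, -0.4])

# Particular solution: since L 1_q = 0, (c L⊗I_n + b I)(1_q ⊗ β) = b (1_q ⊗ β)
# for any c, so u = (κ/b)(1_q ⊗ β) solves the inhomogeneous equation.
A = 0.7 * np.kron(L, In) + b * Inq
u = (kappa / b) * np.kron(np.ones(q), beta)
assert np.allclose(A @ u, kappa * np.kron(np.ones(q), beta))

# Homogeneous part: for the stacked [M_k; g_j^T] ⊗ I_n (M_k singular here),
# every (I − M⁺M) z lies in ker M, and the projector spans the whole kernel.
Mk = np.vstack([L - np.eye(q), [0., 1., 0.]])   # illustrative [M_k; g_j^T]
M = np.kron(Mk, In)
Pker = Inq - np.linalg.pinv(M) @ M
assert np.allclose(M @ (Pker @ rng.standard_normal(n * q)), 0)
assert np.linalg.matrix_rank(Pker) == n * q - np.linalg.matrix_rank(M)
```

The projector check is exactly why (\[gsolution1\]) parameterizes all homogeneous solutions: $M(I-M^{+}M)=M-MM^{+}M=\textbf{0}$, and ${\mathrm{rank}}(I-M^{+}M)=\dim\ker M$.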
\[lemma\_B\] Define a (possibly infinite) sequence of matrices $B^{[j]}_{k}$, $j=1,\ldots,q$, $k=0,1,2,\ldots$, as follows: $$\begin{aligned} \label{Bmatrix} B_{k}^{[j]}=\small\left[\begin{array}{ccc} \textbf{0}_{nq\times nq} & h_{k}I_{nq} & \textbf{0}_{nq\times n} \\ -h_{k}\mu_{k} L_{k}\otimes P_{k}-h_{k}\kappa_{k} I_{q}\otimes P_{k} & -h_{k}\eta_{k} L_{k}\otimes P_{k} & h_{k}\kappa_{k} \textbf{1}_{q\times 1}\otimes P_{k} \\ E_{n\times nq}^{[j]} & \textbf{0}_{n\times nq} & -I_{n} \\ \end{array}\right],\end{aligned}$$ where $\mu_{k},\eta_{k},\kappa_{k}\geq0$ and $h_{k}>0$, $k\in\overline{\mathbb{Z}}_{+}$, $L_{k}\in\mathbb{R}^{q\times q}$ denotes the Laplacian matrix of a node-fixed dynamic digraph $\mathcal{G}_{k}$, $P_{k}\in\mathbb{R}^{n\times n}$ denotes a paracontracting matrix, and $E_{n\times nq}^{[j]}\in\mathbb{R}^{n\times nq}$ is defined in Lemma \[lemma\_EW\]. Assume that ${\mathrm{rank}}(P_{k})=n$ for every $k\in\overline{\mathbb{Z}}_{+}$. Then for every $j=1,\ldots,q$, $\{0\}\subseteq{\mathrm{spec}}(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})\subseteq\{0,-1,-\frac{h_{k}^{2}\kappa_{k}}{2}\pm\frac{1}{2}\sqrt{(h_{k}^{2}\kappa_{k})^{2}-4h_{k}^{2}\kappa_{k}},\lambda_{1},\lambda_{2}\in\mathbb{C}:\forall\frac{\lambda_{1}^{2}+\kappa_{k} h_{k}^{2}\lambda_{1}+\kappa_{k}h_{k}^{2}}{\eta_{k}h_{k}\lambda_{1}+\mu_{k} h_{k}^{2}\lambda_{1}+\mu_{k}h_{k}^{2}}\in{\mathrm{spec}}(-L_{k})\backslash\{0\},\lambda_{2}^{3}+(1+h_{k}^{2}\kappa_{k})\lambda_{2}^{2}+(2h_{k}^{2}\kappa_{k}-h_{k}\kappa_{k})\lambda_{2}+h_{k}^{2}\kappa_{k}=0\}$, where $A_{{\mathrm{c}}k}$ is defined by (\[Ac\]) in Lemma \[lemma\_semisimple\]. Furthermore, if $h_{k}\kappa_{k}\neq0$, then 0 is semisimple. 
For a fixed $j\in\{1,\ldots,q\}$, let $\lambda\in{\mathrm{spec}}(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})$ and $\textbf{x}=[\textbf{x}_{1}^{*},\textbf{x}_{2}^{*},\textbf{x}_{3}^{*}]^{*}\in\mathbb{C}^{2nq+n}$ be the corresponding eigenvector for $\lambda$, where $\textbf{x}_{1},\textbf{x}_{2}\in\mathbb{C}^{nq}$ and $\textbf{x}_{3}\in\mathbb{C}^{n}$. Then it follows from $(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})\textbf{x}=\lambda\textbf{x}$ that $$\begin{aligned} h_{k}\textbf{x}_{2}+h_{k}^{2}[-\mu_{k} (L_{k}\otimes P_{k})\textbf{x}_{1}-\kappa_{k}(I_{q}\otimes P_{k})\textbf{x}_{1}-\eta_{k}(L_{k}\otimes P_{k})\textbf{x}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{x}_{3}]=\lambda\textbf{x}_{1},\label{hx11}\\ h_{k}[-\mu_{k} (L_{k}\otimes P_{k})\textbf{x}_{1}-\kappa_{k}(I_{q}\otimes P_{k})\textbf{x}_{1}-\eta_{k}(L_{k}\otimes P_{k})\textbf{x}_{2}+\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{x}_{3}]=\lambda \textbf{x}_{2},\label{x21}\\ E_{n\times nq}^{[j]}\textbf{x}_{1}-\textbf{x}_{3}=\lambda \textbf{x}_{3}.\label{Aeig_31}\end{aligned}$$ Let $\textbf{x}_{3}\neq\textbf{0}_{n\times1}$ be arbitrary, $\textbf{x}_{1}=(\textbf{1}_{q\times 1}\otimes I_{n})\textbf{x}_{3}$, and $\textbf{x}_{2}=\textbf{0}_{nq\times 1}$. Clearly such $\textbf{x}_{i}$, $i=1,2,3$, satisfy (\[hx11\])–(\[Aeig\_31\]) with $\lambda=0$. Hence, $\lambda=0$ is always an eigenvalue of $B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}$. Next, we assume that $\lambda\neq0$. Substituting (\[x21\]) into (\[hx11\]) yields $\textbf{x}_{1}=\frac{h_{k}(1+\lambda)}{\lambda}\textbf{x}_{2}$. 
Replacing $\textbf{x}_{1}$ in (\[x21\]) and (\[Aeig\_31\]) with $\textbf{x}_{1}=\frac{h_{k}(1+\lambda)}{\lambda}\textbf{x}_{2}$ yields $$\begin{aligned} \Big[\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(I_{q}\otimes P_{k})\Big]\textbf{x}_{2}-h_{k}\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\textbf{x}_{3}=\textbf{0}_{nq\times 1},\label{hx21}\\ E_{n\times nq}^{[j]}\textbf{x}_{2}-(1+\lambda)\textbf{x}_{3}=\textbf{0}_{n\times 1}.\label{hx31}\end{aligned}$$ Thus, (\[hx21\]) and (\[hx31\]) have nontrivial solutions if and only if $$\begin{aligned} \label{detcon1} \det\small\left[\begin{array}{cc} \Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(I_{q}\otimes P_{k}) & -h_{k}\kappa_{k}(\textbf{1}_{q\times 1}\otimes P_{k})\\ E_{n\times nq}^{[j]} & -(1+\lambda)I_{n} \end{array}\right]=0.\end{aligned}$$ If $\det\Big[\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(I_{q}\otimes P_{k})\Big]\neq0$, then pre-multiplying $L_{k}\otimes I_{n}$ on both sides of (\[hx21\]) and following the similar arguments as in the proof of Lemma \[lemma\_A\], we have $\textbf{x}_{2}=\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}(\textbf{w}_{l}\otimes\textbf{e}_{i})$, where $\varpi_{li}\in\mathbb{C}$. 
Substituting this expression of $\textbf{x}_{2}$ into (\[hx21\]) and (\[hx31\]) by using $iii$) of Lemma \[lemma\_EW\] and noting that $P_{k}$ is invertible yields $$\begin{aligned} \Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}-h_{k}\kappa_{k}\textbf{x}_{3}=\textbf{0}_{n\times 1},\label{x3_eqn1}\\ \sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\sum_{i=1}^{n}\varpi_{li}w_{lj}\textbf{e}_{i}-(1+\lambda)\textbf{x}_{3}=\textbf{0}_{n\times 1}.\label{x3_eqn2}\end{aligned}$$ Substituting (\[x3\_eqn2\]) into (\[x3\_eqn1\]) yields $$\begin{aligned} \Big[\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(1+\lambda)-h_{k}\kappa_{k}\Big]\textbf{x}_{3}=\textbf{0}_{n\times 1}.\end{aligned}$$ If $\textbf{x}_{3}=\textbf{0}_{n\times 1}$, then it follows from (\[hx21\]) that $\textbf{x}_{2}=\textbf{0}_{nq\times 1}$, and hence, $\textbf{x}_{1}=\textbf{0}_{nq\times 1}$, which is a contradiction since $\textbf{x}$ is an eigenvector. Thus, $\textbf{x}_{3}\neq\textbf{0}_{n\times 1}$ and consequently, $\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(1+\lambda)-h_{k}\kappa_{k}=0$, i.e., $$\begin{aligned} \label{cubic3} \lambda^{3}+(1+h_{k}^{2}\kappa_{k})\lambda^{2}+(2h_{k}^{2}\kappa_{k}-h_{k}\kappa_{k})\lambda+h_{k}^{2}\kappa_{k}=0.\end{aligned}$$ Solving this cubic equation in terms of $\lambda$ gives the possible eigenvalues of $B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}$. This can be done via Cardano’s formula. If $h_{k}\kappa_{k}=0$, then $\lambda=-1$. Otherwise, if $h_{k}\kappa_{k}\neq0$, then it follows from Routh’s Stability Criterion that ${\mathrm{Re}}\,\lambda<0$ if and only if $2h_{k}^{2}\kappa_{k}-h_{k}\kappa_{k}>0$ and $(1+h_{k}^{2}\kappa_{k})(2h_{k}^{2}\kappa_{k}-h_{k}\kappa_{k})>h_{k}^{2}\kappa_{k}$, that is, $h_{k}>1/2$ and $h_{k}+2h_{k}^{3}\kappa_{k}>1+h_{k}^{2}\kappa_{k}$. 
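The Routh condition for (\[cubic3\]) is easy to verify numerically; the sketch below checks both sides of the criterion at illustrative parameter values (our choices, not prescribed by the lemma).

```python
import numpy as np

def cubic_roots(h, kappa):
    # Roots of λ³ + (1 + h²κ)λ² + (2h²κ − hκ)λ + h²κ = 0.
    return np.roots([1.0, 1.0 + h**2 * kappa,
                     2.0 * h**2 * kappa - h * kappa, h**2 * kappa])

# h = 1, κ = 1: h > 1/2 and h + 2h³κ = 3 > 2 = 1 + h²κ, so all roots satisfy Re λ < 0.
assert max(r.real for r in cubic_roots(1.0, 1.0)) < 0

# h = 0.4 violates h > 1/2 (the λ-coefficient turns negative): not all roots are stable.
assert max(r.real for r in cubic_roots(0.4, 1.0)) >= 0
```

For a cubic $\lambda^{3}+a_{2}\lambda^{2}+a_{1}\lambda+a_{0}$, the Routh test is $a_{2},a_{1},a_{0}>0$ and $a_{2}a_{1}>a_{0}$, which reduces to the two inequalities stated above.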
Alternatively, if $\det\Big[\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(I_{q}\otimes P_{k})\Big]=0$, then in this case, (\[detcon1\]) holds if $\lambda=-1$, or $\lambda\neq-1$ and by Proposition 2.8.4 of [@Bernstein:2009 p. 116], $\det\Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(I_{q}\otimes P_{k})-\frac{\kappa_{k}h_{k}}{1+\lambda}W_{k}^{[j]}\Big)=0$, which implies that for $\lambda\neq-1$, the equation $$\begin{aligned} \label{eqn_v1} \Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes P_{k})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(I_{q}\otimes P_{k})-\frac{\kappa_{k}h_{k}}{1+\lambda}W_{k}^{[j]}\Big)\textbf{v}=\textbf{0}_{nq\times 1}\end{aligned}$$ has nontrivial solutions for $\textbf{v}\in\mathbb{C}^{nq}$. Again, note that for every $j=1,\ldots,q$, $(L_{k}\otimes I_{n})W_{k}^{[j]}=\textbf{0}_{nq\times nq}$. Pre-multiplying $L_{k}\otimes I_{n}$ on both sides of (\[eqn\_v1\]) yields $\Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}^{2}\otimes P_{k})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)(L_{k}\otimes P_{k})\Big)\textbf{v}=(I_{q}\otimes P_{k})(L_{k}\otimes I_{n})\Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)I_{nq}\Big)\textbf{v}=\textbf{0}_{nq\times 1}$, which implies that $\Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)I_{nq}\Big)\textbf{v}\in\ker(L_{k}\otimes I_{n})$. 
Since $\ker(L_{k}\otimes I_{n})=\bigcup_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}{\mathrm{span}}\{\textbf{w}_{l}\otimes\textbf{e}_{1},\ldots,\textbf{w}_{l}\otimes\textbf{e}_{n}\}$, it follows that $$\begin{aligned} \label{Az=b1} \Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)I_{nq}\Big)\textbf{v}=\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i},\end{aligned}$$ where $\omega_{li}\in\mathbb{C}$, which is similar to (\[Az=b\]). Now it follows from (\[eqn\_v1\]) and (\[Az=b1\]) that $$\begin{aligned} \frac{\kappa_{k}h_{k}}{1+\lambda}W_{k}^{[j]}\textbf{v}=\sum_{i=1}^{n}\sum_{l=0}^{q-1-{\mathrm{rank}}(L_{k})}\omega_{li}\textbf{w}_{l}\otimes\textbf{e}_{i}.\end{aligned}$$ If $\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\neq0$, then it follows from the similar arguments after (\[Wv\_eqn\]) that $\omega_{\ell i}=0$ for every $i=1,\ldots,n$ and every $\ell=1,\ldots,q-1-{\mathrm{rank}}(L_{k})$. Furthermore, $$\begin{aligned} \omega_{0i}-\frac{\lambda\kappa_{k}h_{k}}{(1+\lambda)(\lambda^{2}+h_{k}^{2}\kappa_{k}\lambda+h_{k}^{2}\kappa_{k})}\omega_{0i}=0,\quad i=1,\ldots,n.\end{aligned}$$ Then either $1-\frac{\lambda\kappa_{k}h_{k}}{(1+\lambda)(\lambda^{2}+h_{k}^{2}\kappa_{k}\lambda+h_{k}^{2}\kappa_{k})}=0$ or $\omega_{0i}=0$ for every $i=1,\ldots,n$. If $\frac{\lambda\kappa_{k}h_{k}}{(1+\lambda)(\lambda^{2}+h_{k}^{2}\kappa_{k}\lambda+h_{k}^{2}\kappa_{k})}=1$, then $$\begin{aligned} \lambda^{3}+(1+h_{k}^{2}\kappa_{k})\lambda^{2}+(2h_{k}^{2}\kappa_{k}-h_{k}\kappa_{k})\lambda+h_{k}^{2}\kappa_{k}=0,\end{aligned}$$ which is the same as (\[cubic3\]). Since $\lambda\neq -1$, in this case $\kappa_{k}h_{k}\neq0$. Then it follows from Routh’s Stability Criterion that ${\mathrm{Re}}\,\lambda<0$ if and only if $h_{k}>1/2$ and $h_{k}+2h_{k}^{3}\kappa_{k}>1+h_{k}^{2}\kappa_{k}$. 
If $\omega_{0i}=0$ for every $i=1,\ldots,n$, then it follows from (\[eqn\_v1\]) and (\[Az=b1\]) that $\frac{\kappa_{k}h_{k}}{1+\lambda}W_{k}^{[j]}\textbf{v}=\textbf{0}_{nq\times 1}$ and $\Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)I_{nq}\Big)\textbf{v}=\textbf{0}_{nq\times 1}$, which implies that $\textbf{v}\in\ker\Big(\Big(\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\Big)(L_{k}\otimes I_{n})+\Big(\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}\Big)I_{nq}\Big)\cap\ker(\frac{\kappa_{k}h_{k}}{1+\lambda}W_{k}^{[j]})$. Clearly $\frac{h_{k}^{2}\mu_{k}}{\lambda}+\mu_{k} h_{k}^{2}+\eta_{k}h_{k}\neq0$. In this case, $\lambda\in\{\lambda_{1}\in\mathbb{C}:\forall\frac{\lambda_{1}^{2}+\kappa_{k} h_{k}^{2}\lambda_{1}+\kappa_{k}h_{k}^{2}}{\eta_{k}h_{k}\lambda_{1}+\mu_{k} h_{k}^{2}\lambda_{1}+\mu_{k}h_{k}^{2}}\in{\mathrm{spec}}(-L_{k})\backslash\{0\}\}$. Alternatively, if $\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}=0$, then it follows from the similar arguments after (\[eigv4\]) in Lemma \[lemma\_A\] that $$\begin{aligned} \lambda=-\frac{h_{k}^{2}\kappa_{k}}{2}\pm\frac{1}{2}\sqrt{(h_{k}^{2}\kappa_{k})^{2}-4h_{k}^{2}\kappa_{k}}\end{aligned}$$ are the possible eigenvalues of $B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}$. 
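The two displayed values of $\lambda$ are simply the roots of $\lambda^{2}+h_{k}^{2}\kappa_{k}\lambda+h_{k}^{2}\kappa_{k}=0$, i.e., of $\frac{h_{k}^{2}\kappa_{k}}{\lambda}+\lambda+h_{k}^{2}\kappa_{k}=0$. A short numerical check (the sample values of $h_{k}$ and $\kappa_{k}$ are arbitrary):

```python
import cmath

def candidate_eigenvalues(h, kappa):
    """Roots of lambda^2 + h^2*kappa*lambda + h^2*kappa = 0, written as in
    the text: -h^2*kappa/2 +/- (1/2)*sqrt((h^2*kappa)^2 - 4*h^2*kappa)."""
    a = h**2 * kappa
    disc = cmath.sqrt(a * a - 4.0 * a)
    return (-a / 2 + disc / 2, -a / 2 - disc / 2)

# Both roots annihilate h^2*kappa/lambda + lambda + h^2*kappa.
h, kappa = 0.7, 2.0        # arbitrary sample values
a = h**2 * kappa
for lam in candidate_eigenvalues(h, kappa):
    assert abs(a / lam + lam + a) < 1e-9
```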
In summary, $$\begin{aligned} \label{egspace} &&\{0\}\subseteq{\mathrm{spec}}(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})\subseteq\nonumber\\ &&\Big\{0,-1,-\frac{h_{k}^{2}\kappa_{k}}{2}\pm\frac{1}{2}\sqrt{(h_{k}^{2}\kappa_{k})^{2}-4h_{k}^{2}\kappa_{k}},\lambda_{1},\lambda_{2}\in\mathbb{C}:\forall\frac{\lambda_{1}^{2}+\kappa_{k} h_{k}^{2}\lambda_{1}+\kappa_{k}h_{k}^{2}}{\eta_{k}h_{k}\lambda_{1}+\mu_{k} h_{k}^{2}\lambda_{1}+\mu_{k}h_{k}^{2}}\in{\mathrm{spec}}(-L_{k})\backslash\{0\},\nonumber\\ &&\lambda_{2}^{3}+(1+h_{k}^{2}\kappa_{k})\lambda_{2}^{2}+(2h_{k}^{2}\kappa_{k}-h_{k}\kappa_{k})\lambda_{2}+h_{k}^{2}\kappa_{k}=0\Big\}. \end{aligned}$$ Finally, the semisimplicity property of 0 can be proved by using the similar arguments as in the proof of Lemma \[lemma\_semisimple\]. Now we have the main result for the global convergence of the iterative process in Algorithm \[MCO\]. \[thm\_HMCO\] Consider the following discrete-time switched linear model to describe the iterative process for MCO: $$\begin{aligned} x_{i}[k+1]&=&x_{i}[k]+h_{k}v_{i}[k+1],\quad x_{i}[0]=x_{i0},\label{DE_1}\\ v_{i}[k+1]&=&P[k]v_{i}[k]+h_{k}\eta_{k}P[k]\sum_{j\in\mathcal{N}_{k}^{i}}(v_{j}[k]-v_{i}[k])+h_{k}\mu_{k}P[k]\sum_{j\in\mathcal{N}_{k}^{i}}(x_{j}[k]-x_{i}[k])\nonumber\\ &&+h_{k}\kappa_{k}P[k](p[k]-x_{i}[k]),\quad v_{i}[0]=v_{i0},\\ p[k+1]&=&p[k]+h_{k}\kappa_{k}(x_{j}[k]-p[k]),\quad p[k]\not\in\mathcal{Z}_{p},\quad p[0]=p_{0},\\ p[k+1]&=&x_{j}[k],\quad p[k]\in\mathcal{Z}_{p},\quad k=0,1,2,\ldots,\quad i=1,\ldots,q,\label{MSO_4}\end{aligned}$$ where $x_{i}\in\mathbb{R}^{n}$, $v_{i}\in\mathbb{R}^{n}$, $p\in\mathbb{R}^{n}$, $\mu_{k},\eta_{k},\kappa_{k},h_{k}$ are randomly selected in $\Omega\subseteq[0,\infty)$, $\mathcal{Z}_{p}=\{p\in\mathbb{R}^{n}:f(x_{j})<f(p)\}$, and $x_{j}=\{x_{\min}\in\mathbb{R}^{n}:x_{\min}=\arg\min_{1\leq i\leq q}f(x_{i})\}$. 
Assume that for every $k\in\overline{\mathbb{Z}}_{+}$ and every $j=1,\ldots,q$: - $P[k]\in\mathbb{R}^{n\times n}$ is paracontracting and ${\mathrm{rank}}(P[k])=n$. - $0<h_{k}<-\frac{\lambda+\bar{\lambda}}{|\lambda|^{2}}$ for every $\lambda\in\{-\kappa_{k},-\frac{\kappa_{k}(1+h_{k})}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}(1+h_{k})^{2}-4\kappa_{k}},-\frac{\kappa_{k}h_{k}}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}h_{k}^{2}-4\kappa_{k}},\lambda\in\mathbb{C}:\forall \frac{\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k}}{\eta_{k}\lambda+\mu_{k} h_{k}\lambda+\mu_{k}}\in{\mathrm{spec}}(-L_{k})\backslash\{0\}\}$; - $0<h_{k}<-\frac{\lambda+\bar{\lambda}}{|\lambda|^{2}}$ for every $\lambda\in\{-1,-\frac{h_{k}^{2}\kappa_{k}}{2}\pm\frac{1}{2}\sqrt{(h_{k}^{2}\kappa_{k})^{2}-4h_{k}^{2}\kappa_{k}},\lambda_{1},\lambda_{2}\in\mathbb{C}:\forall\frac{\lambda_{1}^{2}+\kappa_{k} h_{k}^{2}\lambda_{1}+\kappa_{k}h_{k}^{2}}{\eta_{k}h_{k}\lambda_{1}+\mu_{k} h_{k}^{2}\lambda_{1}+\mu_{k}h_{k}^{2}}\in{\mathrm{spec}}(-L_{k})\backslash\{0\},\lambda_{2}^{3}+(1+h_{k}^{2}\kappa_{k})\lambda_{2}^{2}+(2h_{k}^{2}\kappa_{k}-h_{k}\kappa_{k})\lambda_{2}+h_{k}^{2}\kappa_{k}=0\}$; - $\|I_{2nq+n}+h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}\|\leq 1$ and $\|I_{2nq+n}+B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}\|\leq1$. 
- $\ker((h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{\mathrm{T}}(h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})+(h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{\mathrm{T}}+h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})=\ker((h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{\mathrm{T}}\\(h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})+(h_{k}A_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{2})$ and $\ker((B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{\mathrm{T}}(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})+(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{\mathrm{T}}+B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})=\ker((B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{\mathrm{T}}(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})+(B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k})^{2})$. Then the following conclusions hold: - If $\Omega$ is a finite discrete set, then $x_{i}[k]\to p^{\dag}$, $v_{i}[k]\to \textbf{0}_{n\times 1}$, and $p[k]\to p^{\dag}$ as $k\to\infty$ for every $x_{i0}\in\mathbb{R}^{n}$, $v_{i0}\in\mathbb{R}^{n}$, $p_{0}\in\mathbb{R}^{n}$, and every $i=1,\ldots,q$, where $p^{\dag}\in\mathbb{R}^{n}$ is some constant vector. - If for every positive integer $N$, there always exists $s\geq N$ such that $h_{s}(A_{s}^{[j_{s}]}+h_{s}A_{{\mathrm{c}}s})=B_{s}^{[j_{s}]}+h_{s}^{2}A_{{\mathrm{c}}s}=h_{T}(A_{T}^{[j_{T}]}+h_{T}A_{{\mathrm{c}}T})=B_{T}^{[j_{T}]}+h_{T}^{2}A_{{\mathrm{c}}T}$ for some fixed $T\in\overline{\mathbb{Z}}_{+}$, where $j_{s},j_{T}\in\{1,\ldots,q\}$, then $x_{i}[k]\to p^{\dag}$, $v_{i}[k]\to \textbf{0}_{n\times 1}$, and $p[k]\to p^{\dag}$ as $k\to\infty$ for every $x_{i0}\in\mathbb{R}^{n}$, $v_{i0}\in\mathbb{R}^{n}$, $p_{0}\in\mathbb{R}^{n}$, and every $i=1,\ldots,q$, where $p^{\dag}\in\mathbb{R}^{n}$ is some constant vector. Let $Z=[x_{1}^{\rm{T}},\ldots,x_{q}^{\rm{T}},v_{1}^{\rm{T}},\ldots,v_{q}^{\rm{T}},p^{\rm{T}}]^{\rm{T}}\in\mathbb{R}^{2nq+n}$. 
Note that (\[DE\_1\])–(\[MSO\_4\]) can be rewritten as the compact form $Z[k+1]=(I_{2nq+n}+h_{k}(A_{k}^{[j_{k}]}+h_{k}A_{{\mathrm{c}}k}))Z[k]$, $Z[k]\not\in\mathcal{S}$, and $Z[k+1]=(I_{2nq+n}+B_{k}^{[j_{k}]}+h_{k}^{2}A_{{\mathrm{c}}k})Z[k]$, $Z[k]\in\mathcal{S}$, where $j_{k}\in\{1,\ldots,q\}$ is selected based on $\mathcal{Z}_{p}$. Let $h_{k}^{\dag}=\min\Big\{-\frac{\lambda+\bar{\lambda}}{|\lambda|^{2}}:\lambda\in\{-\kappa_{k},-\frac{\kappa_{k}(1+h_{k})}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}(1+h_{k})^{2}-4\kappa_{k}},-\frac{\kappa_{k}h_{k}}{2}\pm\frac{1}{2}\sqrt{\kappa_{k}^{2}h_{k}^{2}-4\kappa_{k}},\lambda\in\mathbb{C}:\forall \frac{\lambda^{2}+\kappa_{k} h_{k}\lambda+\kappa_{k}}{\eta_{k}\lambda+\mu_{k} h_{k}\lambda+\mu_{k}}\in{\mathrm{spec}}(-L_{k})\backslash\{0\}\}\Big\}$. First, we show that if $h_{k}<h_{k}^{\dag}$, then $I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$ is discrete-time semistable for every $j=1,\ldots,q$ and every $k=0,1,2,\ldots$. Note that ${\mathrm{spec}}(I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\{1+h_{k}\lambda:\forall\lambda\in{\mathrm{spec}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})\}$. Since by Lemma \[lemma\_A\] and Assumptions H1 and H2, $A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}$ is semistable for every $j=1,\ldots,q$ and every $k=0,1,2,\ldots$, it follows that ${\mathrm{spec}}(I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}))=\{1\}\cup\{1+h_{k}\lambda:\forall\lambda\in{\mathrm{spec}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}),{\mathrm{Re}}\,\lambda<0\}$. Hence, $I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$ is discrete-time semistable for every $j=1,\ldots,q$ and every $k=0,1,2,\ldots$ if $|1+h_{k}\lambda|<1$ for every $\lambda\in{\mathrm{spec}}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$ with ${\mathrm{Re}}\,\lambda<0$. Note that $|1+h_{k}\lambda|<1$ is equivalent to $(1+h_{k}\lambda)(1+h_{k}\bar{\lambda})=|1+h_{k}\lambda|^{2}<1$, i.e., $h_{k}<-(\lambda+\bar{\lambda})/|\lambda|^{2}$. 
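The step-size bound $h_{k}<-(\lambda+\bar{\lambda})/|\lambda|^{2}$ derived here is exactly the condition for $1+h_{k}\lambda$ to lie strictly inside the unit circle when ${\mathrm{Re}}\,\lambda<0$; a small numerical sketch (the sample eigenvalues are arbitrary):

```python
def inside_unit_circle(h, lam):
    """|1 + h*lam| < 1, i.e. 1 + h*lam lies strictly inside the unit circle."""
    return abs(1.0 + h * lam) < 1.0

def step_bound(lam):
    """-(lam + conj(lam)) / |lam|^2 = -2*Re(lam) / |lam|^2."""
    return -2.0 * lam.real / abs(lam) ** 2

# For Re(lam) < 0, the condition holds exactly for 0 < h < step_bound(lam).
for lam in (-1.0 + 0j, -0.5 + 2.0j, -3.0 - 1.0j):
    b = step_bound(lam)
    assert inside_unit_circle(0.5 * b, lam)
    assert not inside_unit_circle(1.5 * b, lam)
```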
By Lemma \[lemma\_A\], for any $h_{k}<h_{k}^{\dag}$, $I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k})$ is discrete-time semistable for every $j=1,\ldots,q$ and every $k=0,1,2,\ldots$. Similarly, it follows from Lemma \[lemma\_B\] and Assumptions H1 and H3 that $I_{2nq+n}+B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}$ is discrete-time semistable for every $j=1,\ldots,q$ and every $k=0,1,2,\ldots$. Moreover, (\[DE\_1\])–(\[MSO\_4\]) can further be rewritten as an iteration $Z[k+1]=P_{k}Z[k]$, $k=0,1,2,\ldots$, where $P_{k}\in\{I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}),I_{2nq+n}+B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}:j=1,\ldots,q,k=0,1,2,\ldots\}=\{I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}),I_{2nq+n}+B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}:j=1,\ldots,q,\mu_{k},\eta_{k},\kappa_{k},h_{k}\in\Omega\}$. C1) By assumption, $\Omega$ is a finite discrete set. Hence, $\{I_{2nq+n}+h_{k}(A_{k}^{[j]}+h_{k}A_{{\mathrm{c}}k}),I_{2nq+n}+B_{k}^{[j]}+h_{k}^{2}A_{{\mathrm{c}}k}:j=1,\ldots,q,\mu_{k},\eta_{k},\kappa_{k},h_{k}\in\Omega\}$ is a finite discrete set. Now it follows from Assumptions H4 and H5 as well as $i$) of Lemma \[lemma\_DTSS\] that $\lim_{k\to\infty}Z[k]$ exists. The rest of the conclusion follows directly from (\[DE\_1\])–(\[MSO\_4\]). C2) By assumption, either $h_{T}(A_{T}^{[j_{T}]}+h_{T}A_{{\mathrm{c}}T})$ or $B_{T}^{[j_{T}]}+h_{T}^{2}A_{{\mathrm{c}}T}$ appears infinitely many times in the sequence $\{P_{k}\}_{k=0}^{\infty}$. Next, it follows from Lemmas \[lemma\_Arank\] and \[lemma\_Ah\] as well as the assumption $h_{k}>0$ that $\ker(h_{k}(A_{k}^{[j_{k}]}+h_{k}A_{{\mathrm{c}}k}))=\ker(A_{k}^{[j_{k}]})=\ker(A_{s}^{[j_{s}]})=\ker(h_{s}(A_{s}^{[j_{s}]}+h_{s}A_{{\mathrm{c}}s}))$ for every $k,s\in\overline{\mathbb{Z}}_{+}$. Using similar arguments, one can prove that $\ker(B_{k}^{[j_{k}]}+h_{k}^{2}A_{{\mathrm{c}}k})=\ker(B_{k}^{[j_{k}]})=\ker(B_{s}^{[j_{s}]})=\ker(B_{s}^{[j_{s}]}+h_{s}^{2}A_{{\mathrm{c}}s})$ for every $k,s\in\overline{\mathbb{Z}}_{+}$. 
Hence, it follows from Assumptions H4 and H5 as well as $ii$) of Lemma \[lemma\_DTSS\] that $\lim_{k\to\infty}Z[k]$ exists. The rest of the conclusion follows directly from (\[DE\_1\])–(\[MSO\_4\]). Note that in this case, $\Omega$ may be an infinite set. [^1]: This work was supported by the Defense Threat Reduction Agency, Basic Research Award \#HDTRA1-10-1-0090 and Fundamental Research Award \#HDTRA1-13-1-0048, to Texas Tech University.
--- abstract: 'The ground set for all matroids in this paper is the set of all edges of a complete graph. The notion of a [*maximum matroid for a graph*]{} $G$ is introduced, and the existence and uniqueness of the maximum matroid for any graph $G$ is proved. The maximum matroid for $K_3$ is shown to be the cycle (or graphic) matroid. This result is pursued in two directions: to determine the maximum matroid for the $m$-cycle $C_m$ and to determine the maximum matroid for the complete graph $K_m$. The maximum matroid for $K_4$ is the matroid whose bases are the Laman graphs, related to structural rigidity of frameworks in the plane. The maximum matroid for $K_5$ is related to a famous 153-year-old open problem of J. C. Maxwell.' author: - | Meera Sitharam and Andrew Vince\ University of Florida,\ Gainesville, FL, USA\ [[email protected]]{} date: title: The Maximum Matroid for a Graph --- [**Keywords**]{}: graph, matroid, framework [**Mathematical subject codes**]{}: 05B35, 05C85, 52C25 Introduction {#sec:intro} ============ Let $K_n$ be the complete graph on $n$ vertices. A [*graph*]{} in this paper is a subset $E$ of the edge set $E(K_n)$ of $K_n$. The number of edges of a graph $E$ is denoted $|E|$ and graph isomorphism by $\approx$. The vertex set, i.e., the set of vertices incident to the edges of $E$, will be denoted $V(E)$. For a graph $E \subseteq E(K_n)$ and edge $e \in E(K_n)$, the notation $E+e$ is used for $E \cup \{e\}$ and $E-e$ for $E\setminus \{e\}$. All matroids in this paper will be on the ground set $E(K_n)$. The best known graph matroid is the [*cycle matroid*]{}, also called the [*graphic matroid*]{}. The independent sets of the cycle matroid are the forests. The bases are the spanning trees. The circuits (minimal dependent sets) are the cycles (no repeated vertices on a cycle). An edge $e = \{x,y\}$ lies in the closure of $E$ if vertices $x$ and $y$ are joined by a path in $E-e$. 
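Both the independence test in the cycle matroid (is a given edge set a forest?) and the closure test just described (are $x$ and $y$ joined by a path?) are easy to carry out with a union-find structure. The following Python sketch is purely illustrative and is not part of the paper's development:

```python
def is_forest(edges):
    """Independence test in the cycle matroid: a set of edges is
    independent iff it is a forest.  Uses union-find with path halving;
    `edges` is an iterable of 2-element vertex pairs."""
    parent = {}

    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for x, y in edges:
        rx, ry = find(x), find(y)
        if rx == ry:    # x and y already joined by a path: {x,y} closes a cycle
            return False
        parent[rx] = ry
    return True

# A triangle is dependent; each of its 2-edge subsets is independent.
assert not is_forest([(1, 2), (2, 3), (1, 3)])
assert is_forest([(1, 2), (2, 3)])
```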
The flats (closed sets) are vertex-disjoint unions of cliques, and therefore the lattice of flats is isomorphic to the lattice of partitions of an $n$-element set. There are other matroids on the edge set of a graph that have been extensively studied, for example the bicircular matroid, the transversal matroid, the bond matroid, and the gammoid. For background on matroids see [@G; @O]. The circuits of a matroid are the minimal dependent sets. If a matroid on $E(K_n)$ is regarded in terms of its independent sets, then the circuits are, in a sense, the “forbidden subgraphs”: a set is independent if and only if it contains no circuit. This paper introduces the notion of a [*maximum matroid*]{}, denoted $\widehat M(G)$, for a fixed “forbidden” graph $G$. Informally, the maximum matroid for $G$ is, among all matroids on $E(K_n)$ for which each graph isomorphic to $G$ is a circuit, the one that has, in the strongest sense, the most independent sets. More precisely: \[def:um\] Let $G$ be a subgraph of $K_n$. The [**maximum matroid**]{} ${\widehat M}(G) := \widehat {M}_n(G)$ for $G$ is the matroid on the ground set $E(K_n)$, with the properties that (1) every graph isomorphic to $G$ is a circuit of $\widehat M(G)$, and (2) if $M$ is any matroid satisfying property (1), then every independent set in $M$ is independent in $\widehat M(G)$. We say “the” maximum matroid because its existence is proved in Section \[sec:alg\]. Uniqueness then follows directly from properties (1) and (2). Note that Definition \[def:um\] does not require that the set of graphs isomorphic to $G$ be itself the set of circuits of $\widehat M(G)$, only that it be a subset. In the terminology of [@GSS], condition (2) would be expressed as: $\widehat M(G)$ [*majorizes*]{} $M$, denoted $\widehat M(G) \succeq M$. Also, in matroid terminology, condition (2) in the definition says that the identity map on $E(K_n)$ is a [*weak*]{} map from matroid $\widehat M(G)$ to matroid $M$. 
Technically, for a given graph $G$, there is a matroid for each $n$, but the subscript is omitted when no confusion arises. The terminology “maximum” matroid is motivated by item (3) of the next proposition, whose proof follows immediately from basic definitions. \[prop\] For matroids $M$ and $M'$ on the same ground set $E$, the following are equivalent. 1\. Every independent set in $M$ is independent in $M'$. 2\. Every dependent set in $M'$ is dependent in $M$. 3\. $\text{rank}_{M'} (A) \geq \text{rank}_{M}(A)$ for all $A\subseteq E$. In 1970, H. H. Crapo [@Cr] introduced the notions of truncation and erection of matroids, in particular the [*free erection*]{} of a matroid. In 1975 D. Knuth [@K] independently came up with the same notions, in particular using the term [*free completion*]{}. The point of this remark is that our concept of maximum matroid for a graph $G$ is not the same as a free erection. In particular, we start with a single graph, not a matroid. The techniques and algorithms for the free erection do not carry over. The motivation is also different. Crapo and Knuth were likely interested in constructing new matroids or, in Knuth’s paper, random matroids, from a given matroid. Our motivation comes from the theory of rigidity of frameworks, as explained below. Although the results and proofs are graph- and matroid-theoretic, a main motivation for exploring the maximum matroid $\widehat M(G)$ of a graph $G$ comes from the theory of rigidity of frameworks, systems of stiff bars joined at movable joints. In particular, the motivation relates to a 153-year-old open problem of J. C. Maxwell [@M]. Define a [*framework*]{} $(\Gamma,\mathbf p)$ as a graph $\Gamma$ together with an embedding $\mathbf p : V \rightarrow {\mathbb R}^d$ of the vertex set $V$ of $\Gamma$ into Euclidean space. 
A framework is [*rigid*]{} in ${\mathbb R}^d$ if the only motions of the framework that preserve the distances between adjacent vertices of $\Gamma$ arise from rigid motions, i.e., orientation-preserving isometries of ${\mathbb R}^d$. We refer to [@CW; @GSS] for background on structural rigidity and, in particular, the definitions of a generic embedding of a graph in ${\mathbb R}^d$. It suffices for the purposes of this paper to state the fact that, if one generic embedding of $\Gamma$ is rigid, then all generic embeddings are rigid. So generic rigidity depends only on the graph $\Gamma$, not on the embedding. A well-known theorem of Laman [@L] provides, in the generic case, a simple combinatorial necessary and sufficient condition for a graph to be rigid (see Theorem \[thm:L\] in Section \[sec:K4\]). In 1864 Maxwell formulated a necessary condition for rigidity in $3$ dimensions, a condition analogous to the Laman condition. Since it is not sufficient for rigidity (see Section \[sec:open\]), Maxwell asked for a characterization in $3$ dimensions akin to Laman’s result in $2$ dimensions (see the introductory paragraph of Section \[sec:open\]). No such characterization has been found. Associated with a framework in ${\mathbb R}^d$ is the [*rigidity matroid*]{}, defined as the matroid whose independent sets are the sets of independent row vectors of the rigidity matrix, a matrix derived from the incidence matrix of the graph. A subset $E \subset E(K_n)$ is rigid in ${\mathbb R}^d$ if and only if the closure of $E$ in the rigidity matroid is the complete graph on $V(E)$. This numerical method for determining rigidity, however, is not a combinatorial solution to Maxwell’s question. The maximum matroid for the complete graph $K_4$, as defined in this paper, is shown to coincide with the $2$-dimensional rigidity matroid (see Section \[sec:K4\]). Whether the maximum matroid for $K_5$ coincides with the $3$-dimensional rigidity matroid is open. 
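The numerical method just mentioned amounts to building the rigidity matrix at a random (hence, with probability $1$, generically behaved) embedding and computing its rank; in the plane, a graph on $n\geq 2$ vertices is generically rigid exactly when this rank is $2n-3$. The Python sketch below is a probabilistic illustration of this test, not a combinatorial one:

```python
import numpy as np

def rigidity_rank(n, edges, d=2, seed=0):
    """Rank of the d-dimensional rigidity matrix at a random embedding.
    The row for edge (i, j) carries p_i - p_j in coordinate block i and
    p_j - p_i in block j; all other entries are zero."""
    rng = np.random.default_rng(seed)
    p = rng.random((n, d))
    R = np.zeros((len(edges), d * n))
    for row, (i, j) in enumerate(edges):
        R[row, d * i:d * (i + 1)] = p[i] - p[j]
        R[row, d * j:d * (j + 1)] = p[j] - p[i]
    return np.linalg.matrix_rank(R)

# K_3 is rigid in the plane: rank 2*3 - 3 = 3.
assert rigidity_rank(3, [(0, 1), (1, 2), (0, 2)]) == 3
# The 4-cycle is flexible: rank 4 < 2*4 - 3 = 5.
assert rigidity_rank(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) == 4
```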
Our proof of the existence of the maximum matroid is constructive. A completely combinatorial algorithm, called Algorithm A in Section \[sec:alg\], gives the maximum matroid in terms of the closure operator. If the maximum matroid for $K_5$ is the $3$-dimensional rigidity matroid, then Algorithm A applied to $K_5$ may be the closest one may hope for in the way of an answer to Maxwell’s question. If the maximum matroid for $K_5$ is not the $3$-dimensional rigidity matroid, then, as discussed in Section \[sec:open\], a well-known conjecture in rigidity theory, called the [*maximal conjecture*]{} in ${\mathbb R}^3$, is false. Organization and Results ======================== Two simple examples are investigated in Section \[sec:easy\]. The maximum matroid $\widehat M(C_3)$ for the $3$-cycle $C_3$ (see Theorem \[thm:k3\]) and the maximum matroid $\widehat M(K_{1,3})$ for the complete bipartite graph $K_{1,3}$ (see Theorem \[thm:k13\]) are determined. There are two obvious directions in which to generalize Theorem \[thm:k3\]: to determine the maximum matroid for the complete graphs $K_m, \, m\geq 4$, and to determine the maximum matroid for the $m$-cycle $C_m, \, m\geq 4$. In Section \[sec:cycles\], the maximum matroid for the $m$-cycle $C_m$, for all $m\geq 4$, is determined (Theorems \[thm:c4\] and \[thm:cycles\]). The maximum matroid for $K_4$ is determined in Section \[sec:K4\] (Theorems \[thm:ab\] and \[thm:k4\]), and its relationship to $2$-dimensional framework rigidity is discussed in that section. Section \[sec:open\] is a discussion of the maximum matroid for $K_5$ and its relation to $3$-dimensional framework rigidity and to Maxwell’s open problem. The existence of a maximum matroid for any graph $G$ is proved in Section \[sec:alg\]. The proof is via an algorithm, called Algorithm A. The input to Algorithm A is a given graph $G$ and the output is a matroid $M_A(G)$, defined in terms of its closure operator. 
Although different in its description, Algorithm A was motivated by [@C Definition 5.2]. Theorem \[thm:2\] states that $M_A(G)$ satisfies the matroid closure axioms. Theorem \[thm:main2\] states that $M_A(G) =\widehat M(G)$, the maximum matroid for the graph $G$. The proofs of the main results of Section \[sec:alg\], not quite as straightforward as the algorithm itself, use a closure operation. First Examples {#sec:easy} ============== As examples of the maximum matroid of a graph, we first consider the case where the graph is the complete graph $K_3$, equivalently the $3$-cycle $C_3$, and the case where the graph is the complete bipartite graph $K_{1,3}$ (star). The proofs of Theorems \[thm:k3\] and \[thm:k13\] use the matroid closure axioms and the matroid circuit axioms. The matroid closure of a set $E \subseteq E(K_n)$ will be denoted $[E]$. The closure axioms for a matroid are as follows, axiom CL4 known as the [*exchange axiom*]{}. For all $E,F \subseteq E(K_n)$ and $e \in E(K_n)$: CL1. $E \subseteq [E]$ CL2. $F \subseteq E \; \Rightarrow \; [F] \subseteq [E]$ CL3. $[[E]] = [E]$ CL4. $f \in [E +e] \setminus [E] \; \Rightarrow \; e \in [E +f]$. The circuit axioms are as follows. A set ${\mathcal C}$ is the set of circuits of a matroid if: C1. The empty set is not in ${\mathcal C}$. C2. No member of ${\mathcal C}$ is a proper subset of another member of ${\mathcal C}$. C3. If $C_1$ and $C_2$ are distinct members of ${\mathcal C}$ and $e \in C_1\cap C_2$, then $(C_1 \cup C_2) - e$ contains a member of ${\mathcal C}$. \[thm:k3\] The maximum matroid $\widehat M(C_3)$ (equivalently $\widehat M(K_3)$) is the cycle matroid. Clearly the cycle matroid satisfies property (1) in Definition \[def:um\]. Let $D$ be any dependent set in the cycle matroid. Via Proposition \[prop\], it is sufficient to prove that $D$ is dependent in any matroid for which each $3$-cycle is a circuit. Let $C$ be a cycle contained in $D$. 
Let $e_1= \{v_0,v_1\}, e_2 = \{v_1,v_2\}, \dots, e_k = \{v_{k-1},v_0\}$ be the successive edges of $C$. Let $M$ be any matroid satisfying property (1), and let $[E]$ denote the closure of $E$ in $M$. It is sufficient to show that $C$ is dependent in $M$. Because every triangle is a circuit, we have successively $\{v_0,v_2\} \in [e_1,e_2]$, $\{v_0,v_3\} \in [\{v_0,v_2\},e_3] \subseteq [e_1,e_2,e_3], \dots$, $\{v_0,v_{k-1}\} \in [\{v_0, v_{k-2}\}, e_{k-1}] \subseteq [e_1,e_2, \dots, e_{k-1}]$, and finally $e_k =\{v_{k-1},v_0\} \in [e_1, \dots, e_{k-1}]$. Therefore cycle $C$ is dependent in $M$. The $k$-[*uniform matroid*]{} is the matroid whose bases are all the subsets of $E(K_n)$ consisting of exactly $k$ edges. If $G$ is a graph with $k$ edges, let $\mathcal B (k,G)$ denote the set of all subgraphs of $E(K_n)$, except those isomorphic to $G$, consisting of exactly $k$ edges. If $\mathcal B (k,G)$ comprises the bases of a matroid (it is not a matroid for some choices of $G$), call it the $(k, G)$-[*uniform matroid*]{}, and denote it by $U(k,G)$. It is routine, for example, to verify that $U(3, K_{1,3})$ is a matroid. \[thm:k13\] $\widehat M(K_{1,3}) = U(3, K_{1,3})$. Clearly $U(3,K_{1,3})$ satisfies property (1) in Definition \[def:um\]. Let $M$ be any matroid that satisfies property (1) in Definition \[def:um\], and let $[E]$ denote the closure of a set $E$ in $M$. If $D \not \approx K_{1,3}$ is any dependent set in $U(3,K_{1,3})$, then, by Proposition \[prop\], it suffices to show that $D$ is dependent in $M$. Hence it suffices to show that if $E$ is any set with exactly four edges, then $E$ is dependent in $M$. Assume that there is a path $p = v_0, v_1, v_2$ in $E$ of length $2$, and let $e = \{u,v\}$ and $e' = \{u',v'\}$ be the other two edges in $E$. Also assume that none of $u,v,u',v'$ coincide with any of $v_0,v_1,v_2$. Let $F = E - e'$. 
By successively using circuits that are isomorphic to $K_{1,3}$, we obtain $\{v_1, u\}, \, \{ v_1,u'\} \in [F]$; then $\{u,u'\} \in [F+\{v_1, u\}]$; then $e' = \{u',v'\} \in [F+ \{ v_1,u'\}+\{u,u'\}]$. Therefore $e' \in [F] = [E-e']$, and $E$ is thus dependent in $M$. If one of $u,v,u',v'$ coincides with any of $v_0,v_1,v_2$, then the proof is similar and even shorter. If there is no path of length $2$ in $E$, then let $E = \{e_1 = \{u_1,v_1\} ,e_2 = \{u_2, v_2\} ,e_3,e_4\}$ be a set of four pairwise vertex-disjoint edges, and let $f = \{u_1,u_2\}$. The sets $C_1 = \{ e_1, e_2, f, e_3\}$ and $C_2 = \{ e_1, e_2, f, e_4\}$ are both dependent in $M$, since each contains the path $v_1, u_1, u_2$ of length $2$, so the case above applies. If some subset of $C_1$ or $C_2$ is dependent, then the proof is even easier, so assume that both $C_1$ and $C_2$ are circuits in $M$. By circuit axiom C3, the set $(C_1 \cup C_2) - f = \{e_1,e_2,e_3,e_4\}$ is dependent in $M$. The Maximum Matroid for $\mathbf{C_m}$ {#sec:cycles} ====================================== Let $C_m$ denote the cycle of length $m$. The easiest case, $m=3$, was considered in the previous section, the maximum matroid for $C_3$ being the cycle matroid. The cases $m \geq 4$ are slightly more complicated. For example, let $M$ be any matroid for which every graph isomorphic to $C_4$ is a circuit. We claim that the graph $E$ on the left in Figure \[fig:1\] is dependent in $M$, in particular $f \in [E-f]$. This is shown on the right in Figure \[fig:1\]. Using axiom CL2, first add edge 1 to the closure of $E-f$, then add edge 2, then add edge $f$. ![A dependent set.[]{data-label="fig:1"}](fig2.png){width="6cm"} In the above argument and in some of the proofs in this section, the following algorithm is implicitly used. Let $G$ be a subgraph of $K_n$ and $M$ any matroid for which every graph isomorphic to $G$ is a circuit. 
Input: a set $E \subseteq E(K_n)$.
Output: a set $\overline E$ such that $\overline E \subseteq [E]$.
Initialize: $\overline E=E$.
While there is a pair $(e,F)$ such that $F\subseteq \overline E, \, e \in E(K_n)\setminus \overline E$ and $F + e \approx G$, do $\overline E\leftarrow \overline E+e$.
It would be convenient if this algorithm were sufficient, without having to invoke axiom CL4. This, unfortunately, is not the case, as evidenced by the complexity of the proof of validity of Algorithm A in Section \[sec:alg\]. Denote by $P_j$ the path of length $j-1$ with successive vertices $ V_j = \{1,2,\dots, j\}$, by $K_j$ the complete graph on $V_j$, and by $B_j$ the complete bipartite graph on $V_j$ with even-numbered vertices in one partite set and odd-numbered vertices in the other. \[lem:1\] Let $M$ be any matroid for which every graph isomorphic to $C_m$ is a circuit of $M$. 1. If $m$ is odd, then $[P_j] = K_j$ for all $j \geq m+1$. 2. If $m$ is even, then $B_j \subseteq [P_j]$ for all $j \geq m+1$. To prove statements (1) and (2) for $j=m+1$, let $1,2,\dots, m+1$ be the successive vertices of $P_{m+1}$. First note that $\{1,m\} \in [P_{m+1}]$ by considering the $m$-cycle $1,2,\dots, m,1$. Similarly, $\{2,m+1\} \in [P_{m+1}]$. We next show, by induction on $b$, that $\{1,b\} \in [P_{m+1}]$ for all even $b \leq m$. This follows from the $m$-cycle $1,b,b+1,b+2,\dots , m+1, 2, 3, \dots, b-2, 1$. If $m$ is odd, then we also have $\{1,m+1\} \in [P_{m+1}]$ by using the $m$-cycle $1,m+1,2,3,4, \dots, m-1,1$. Now, for $m$ odd and for all $1 \leq a < b \leq m+1$, we have the edge $\{a,b\} \in [P_{m+1}]$. This follows, by induction on $b-a$, from the $m$-cycle $a,b, b-1, b-2, \dots, a+2, b+1, b+2, b+3, \dots, a$. This completes the proof of (1). Now assume that $m$ is even. We have $\{2,b\} \in [P_{m+1}]$ for all odd $b$ because of the $m$-cycle $2,b,b-1,b-2,\dots, 5,4,1,b+1,b+2, \dots, m+1,2$. 
By symmetry we also have $\{m+1,b\} \in [P_{m+1}]$ for all even $b$ and $\{m,b\} \in [P_{m+1}]$ for all odd $b$. By induction on $b-a$, we have $\{a,b\} \in [P_{m+1}]$ for all $a < b$ of opposite parity. With $a$ even, this is verified by the $m$-cycle $a,b,b+1,b+2, \dots, m+1, a+2, a+3, \dots, b-1, a+1, 2, 3,4, \dots, a$. By symmetry, the same is true with $a$ odd. We show (1) and (2) in the case $j>m+1$ by induction on $j$. The statements have been shown to be true for $j = m+1$. If the vertices of $P_{j+1}$ are successively $\{1,2,\dots, j, j+1\}$, then we have shown that, in the case $m$ odd, both the complete graph on $\{1,2,\dots,j\}$ and the complete graph on $\{2,3,\dots, j+1\}$ are contained in $[P_{j+1}]$. Therefore the complete graph on $\{1,2,\dots,j+1\}$ is contained in $[P_{j+1}]$. A similar argument suffices in the case $m$ even. Denote by $\mathcal I$ the set of all subgraphs $I$ of $K_n$ for which every component of $I$ has at most one cycle and that cycle is odd. Let $M_O$ denote the matroid with $\mathcal I$ as the set of independent sets. If there were no restriction on the parity of the cycle, this would be the bicircular matroid introduced by Simões-Pereira [@SP]. It is not hard to show that $M_O$ is a matroid. The circuits of $M_O$ are even cycles and pairs of edge-disjoint odd cycles joined by a path $p$ (of possibly $0$ length) such that the only vertices of $p$ in common with the odd cycles are the two ends. Call the latter type of graph an [*odd dumbbell*]{}. \[thm:c4\] $\widehat M(C_4) = M_O$. It is required to prove that if $M$ is any matroid for which each graph isomorphic to $C_4$ is a circuit, then any even cycle or odd dumbbell is dependent in $M$. If $C$ is an even cycle, then the dependence of $C$ follows from Lemma \[lem:1\]. If $D$ is an odd dumbbell, let $D_1$ and $D_2$ be the two odd cycles, an $m_1$-cycle and an $m_2$-cycle, respectively, and $a$ and $b$ the ends of the path joining $D_1$ and $D_2$. 
Let $u_1$ and $v_1$ be the two vertices of $D_1$ adjacent to $a$, and let $u_2$ and $v_2$ be any two adjacent vertices of $D_2$, neither adjacent to $b$. We claim that $e_2 = \{u_2,v_2\} \in [D-e_2]$, which would imply that $D$ is dependent. It follows from Lemma \[lem:1\] that $e_1 = \{u_1, v_1\} \in [D_1] \subset [D-e_2]$. Let $p$ be the path in $D_1$ joining $u_1$ and $v_1$. In $D-p+e_1$ there exists a unique path from $u_1$ to $u_2$ of odd length and a unique path from $v_1$ to $v_2$ of odd length. By Lemma \[lem:1\], $\{u_1, u_2\} \in [D-e_2]$ and $\{v_1, v_2\} \in [D-e_2]$. Since $u_2,u_1,v_1,v_2,u_2$ is a $4$-cycle, $e_2 \in [D-e_2]$. Recall that $U(m,C_m)$ denotes the $(m,C_m)$-uniform matroid as defined in Section \[sec:easy\]. Consequently, $C$ is a circuit in $U(m,C_m)$ if and only if either $C \approx C_m$, or $C$ contains exactly $m+1$ edges and no subgraph isomorphic to $C_m$. It is routine to check that $U(m,C_m)$ is a matroid. Note that the maximum matroids for $C_3$ and $C_4$ do not fit the pattern of the maximum matroids for $C_m, \, m\geq 5$. \[lem:2\] Assume that $m \geq 5,\; n\geq m+2$, and $M$ is a matroid on ground set $E(K_n)$ for which every graph isomorphic to $C_m$ is a circuit. If $H$ is the union of the $m$ edges of a path $p$ of length $m$ and any additional edge, then $H$ is dependent in $M$. Let $p=(0,1,2,\dots, m)$ be a path joining successive vertices $0,1,2,\dots, m$, and let $f$ be an additional edge. There are five cases in the proof. \(1) If $f$ is a chord of $p$ (an edge joining any two non-adjacent vertices of $p$), then $H$ is dependent by Lemma \[lem:1\] if $m$ is odd. If $f$ is a chord of $p$ joining vertices of different parity, then $H$ is dependent by Lemma \[lem:1\] if $m$ is even. See (5) below for the case where $m$ is even and $f$ is a chord of $p$ joining vertices of the same parity. \(2) Let $f = \{m,m+1\}$, where vertex $m+1$ is not a vertex of $p$. Then $H$ is the set of edges of a path of length $m+1$. 
Let $q = (0,1,\dots, m)$ and $q' = (1,2,\dots, m+1)$ be two paths of length $m$, and let $Q$ and $Q'$ denote the set of edges in $q$ and $q'$, respectively. If $e$ is a chord of both, say the chord $\{1,4\}$, then $C = Q + e$ and $C' = Q' + e$ are dependent by (1); as in the cases below, there is no loss of generality in assuming that $C$ and $C'$ are circuits. Therefore $H = C\cup C' - e$ contains a circuit by circuit axiom C3. \(3) Let $f$ be any edge with exactly one vertex, say $j \neq 0,m$, in common with $p$. If $f= \{j,m+1\}$, then $q = (j+1, j+2, \dots, m, 0, 1, \dots, j, m+1)$ and $q' = (m+1, j, j+1, \dots, m, 0, 1, \dots, j-1)$ are paths of length $m+1$, and each is dependent in $M$ by (2). We will assume that the corresponding edge sets $Q$ and $Q'$ are circuits; otherwise each contains a circuit, and the subsequent proof becomes easier. Now $H = Q \cup Q' - \{0,m\}$, which contains a circuit by circuit axiom C3. \(4) Let $f = \{m+1,m+2\}$ be any edge with no vertex in common with $p$. Then $q = (1,\dots, m, m+1,m+2)$ and $q'= (0,1,2,\dots, m,m+1)$, both paths, are dependent in $M$ by (2); as before, we may assume that the edge sets $Q$ and $Q'$ are circuits. Therefore $H = Q \cup Q' - \{m,m+1\}$ contains a circuit by circuit axiom C3. \(5) The only remaining case is when $m$ is even and $f$ is a chord of $p = (0,1,2,\dots, m)$ joining vertices of the same parity. Assume that the chord is $f = \{0,j\}$, where $j \neq 1$ is a vertex of $p$. If $m+1$ is a vertex not on $p$, then $q = (0,1,\dots, m, m+1)$ is a path of length $m+1$, and $q'= (1,2,\dots, m,m+1) \cup \{0,j\}$ is the union of a path of length $m$ and an edge with exactly one vertex in common with the path. Both edge sets $Q$ and $Q'$ are dependent in $M$ by (2) and (3), respectively, and, as before, may be assumed to be circuits. Therefore $H = Q \cup Q' - \{m,m+1\}$ contains a circuit by circuit axiom C3. Assume next that the chord is $f = \{1,j\}$, where $j \neq 0,2$ is a vertex of $p$. If $m+1$ is a vertex not on $p$, then $q = (0,1,\dots, m, m+1)$ is a path of length $m+1$, and $q'= (1,2,\dots, m,m+1) \cup \{1,j\}$ is a graph with $m+1$ edges of the type proved to be dependent in the paragraph above.
As before, we assume without loss of generality that the corresponding dependent edge sets $Q$ and $Q'$ are circuits. Therefore $H = Q \cup Q' - \{m,m+1\}$ contains a circuit by circuit axiom C3. The case where $f = \{i,j\}, \; 1 < i < j,$ is an arbitrary chord is proved, just as in the case $i = 1$, by the obvious induction. An examination of the steps in the above proof reveals that no more than $m+2$ vertices are required. Of course, graph $H$ may itself have more than $m+2$ vertices, in which case $n$ must be at least that number. \[thm:cycles\] If $m \geq 5$ and $n \geq m+2$, then $\widehat M_n(C_m) = U(m,C_m)$. Let $M$ be any matroid for which each graph isomorphic to $C_m$ is a circuit and there are no other dependent sets of size $m$. It suffices to show that every subgraph with $m+1$ edges is dependent in $M$. Let $H$ be the union of the set of edges of a path $p = (0,1,\dots, m+1-d)$ of length $m+1-d$ and an arbitrary set $S$ of an additional $d$ edges. We will prove, by induction on $d$, that $H$ is dependent for all $d = 1,\dots, m+1$. When $d=m+1$, the set $S$ is an arbitrary graph with $m+1$ edges. Therefore, this will complete the proof of Theorem \[thm:cycles\]. The statement is true for $d=1$ by Lemma \[lem:2\]. Assume it is true for $d$, and let $H$ be the union of the set $P$ of edges of the path $p = (0, 1, \dots , m-d)$ of length $m-d$ and the set $S$ of an additional $d+1$ edges. Let $u$ be a vertex not in $p \cup V(S)$. If $e,e' \in S$ are distinct, then $C = H + \{0,u\} - e$ and $C' = H + \{0,u\} - e'$ are dependent by the induction hypothesis. As in the proof of Lemma \[lem:2\], there is no loss of generality in assuming that $C$ and $C'$ are circuits. Therefore $H = C \cup C' - \{0,u\}$ contains a circuit by circuit axiom C3. The definition of the maximum matroid of a graph $G$ requires that, in the ground set $E(K_n)$, the integer $n$ is at least as large as the number of vertices in $G$.
In Theorem \[thm:cycles\] it is assumed, to give an additional two vertices of wiggle room in the proof, that $n\geq m+2$. We conjecture that Theorem \[thm:cycles\] is actually true for $n \geq m$. The Maximum Matroid for $K_4$ {#sec:K4} ============================= Given non-negative integers $a$ and $b$, let $\mathcal I(a,b)$ denote the set of all subsets $E$ of $E(K_n)$ for which $|E'| \leq a|V(E')|-b$ for all non-empty $E' \subseteq E$. In [@LS; @ST] these graphs are called $(a,b)$-[*sparse*]{}. The $(a,b)$-sparse graphs that, in addition, satisfy $|E| = a|V(E)|-b$ are called $(a,b)$-[*tight*]{}. Let ${\mathcal C}(a,b)$ denote the set of all subsets $E$ such that $|E| = a|V(E)|-b+1$ and $|E'| \leq a|V(E')|-b$ for all non-empty $E' \subsetneq E$. \[thm:ab\] Let $a$ and $b$ be integers with $2a > b \geq 0$. The collection $\mathcal I(a,b)$ is the set of independent sets of a matroid $M(a,b)$. The collection ${\mathcal C}(a,b)$ is the set of circuits of $M(a,b)$. It will first be shown that ${\mathcal C}(a,b)$ satisfies the circuit axioms of a matroid. It then follows immediately from the definitions of independent set and circuit that $\mathcal I(a,b)$ is the corresponding family of independent sets. Since axioms C1 and C2 follow immediately from the definition of ${\mathcal C}(a,b)$, it suffices to check axiom C3. Let $E_1$ and $E_2$ be distinct members of ${\mathcal C}(a,b)$ and $e \in E_1\cap E_2$. If $E = E_1\cup E_2 - e$, then, since $E_1\cap E_2$ is a non-empty proper subset of the circuit $E_1$ and hence $|E_1\cap E_2| \leq a|V(E_1\cap E_2)|-b$, $$\begin{aligned} |E| &= |E_1|+|E_2| - |E_1\cap E_2| - 1 \\ & \geq (a|V(E_1)| - b + 1) + (a|V(E_2)| - b + 1) - (a|V(E_1\cap E_2)| - b) - 1 \\ & = a\left( |V(E_1)| +|V(E_2)| - |V(E_1\cap E_2)| \right) - b + 1 \\ & \geq a|V(E)| - b + 1 > a|V(E)| -b , \end{aligned}$$ the last inequality of the chain because $|V(E_1)| +|V(E_2)| \geq |V(E_1\cup E_2)| + |V(E_1\cap E_2)| \geq |V(E)| + |V(E_1\cap E_2)|$. Therefore there is a subset $E_0\subseteq E$ such that $E_0 \in {\mathcal C}(a,b)$. The matroid $M(1,1)$ is the graphic matroid; $M(1,0)$ is the bicircular matroid. The circuits of $M(2,3)$ can be obtained from the tetrahedron by what is called $1$-extension and $2$-sum [@BJ].
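The counting conditions that define $\mathcal I(a,b)$ and ${\mathcal C}(a,b)$ can be checked by brute force on small edge sets. The following sketch (Python; the function names are our own, not from the paper) tests $(a,b)$-sparsity and the circuit condition, and confirms, for instance, that the six edges of $K_4$ form a circuit of $M(2,3)$:

```python
from itertools import combinations

def is_sparse(edges, a, b):
    """(a,b)-sparsity: |E'| <= a*|V(E')| - b for every non-empty E' contained in E.
    Brute force over all subsets, so only suitable for small edge sets."""
    edges = list(edges)
    for r in range(1, len(edges) + 1):
        for sub in combinations(edges, r):
            verts = {v for edge in sub for v in edge}
            if len(sub) > a * len(verts) - b:
                return False
    return True

def is_circuit(edges, a, b):
    """E is a circuit of M(a,b): |E| = a*|V(E)| - b + 1 and every proper subset
    is sparse.  Since sparsity is closed under taking subsets, it suffices to
    check the maximal proper subsets E - e."""
    edges = list(edges)
    verts = {v for edge in edges for v in edge}
    if len(edges) != a * len(verts) - b + 1:
        return False
    return all(is_sparse(set(edges) - {e}, a, b) for e in edges)

k4 = [frozenset(pair) for pair in combinations(range(4), 2)]  # the 6 edges of K_4
print(is_circuit(k4, 2, 3))   # True: K_4 is a circuit of M(2,3)
```

The same check shows, for example, that a $4$-cycle (four edges on four vertices) is independent in $M(2,3)$.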
Note that $\mathcal I(3,6)$ is not the set of independent sets of any matroid, i.e., ${\mathcal C}(3,6)$ is not a set of circuits. In particular, circuit axiom C3 fails when $E_1$ and $E_2$ are copies of $K_5$ that have exactly one edge in common. This fact is relevant to the discussion of the maximum matroid for $K_5$ in Section \[sec:open\]. \[thm:k4\] $\widehat M(K_4) = M(2,3)$. Clearly every graph isomorphic to $K_4$ is a circuit of $M(2,3)$. Let $M$ be any matroid with the property that every graph isomorphic to $K_4$ is a circuit of $M$. We prove by induction on $m:= |E|$ that any set $E$, dependent in $M(2,3)$, is also dependent in $M$. The statement is clearly true for $m \leq 4$; assume that it is true for any dependent set of size less than $m$ and assume that $|E| = m$. Since $E$ is dependent in $M(2,3)$, there is a circuit $E_0$ of $M(2,3)$ contained in $E$. The circuit $E_0$ cannot contain a vertex of degree $2$ (incident, say, to edges $e_1,e_2 \in E_0$); otherwise the graph $E_0-\{e_1,e_2\}$ would be dependent in $M(2,3)$, contradicting the definition of a circuit as a minimal dependent set. If all vertices of $E_0$ have degree at least $4$, then again $E_0$ would not be a circuit because $|E_0| \geq 2|V(E_0)|$. Therefore, consider a vertex $v$ of degree $3$ incident to edges, say, $e_1 = \{v,v_1\}, \, e_2 = \{v,v_2\}, \, e_3 = \{v,v_3\}$ of $E_0$. If $f_1 = \{v_1,v_2\}, f_2 = \{v_2,v_3\}$, and $f_3 = \{v_3 ,v_1\}$ are all contained in $E_0$, then $e_1, e_2, e_3, f_1, f_2, f_3$ forms a $K_4$, rendering $E_0$, and hence $E$, dependent in $M$. Otherwise, assume that $f \in \{ f_1, f_2, f_3 \} \setminus E_0$. If $E_0 - \{e_1, e_2, e_3\}$ is dependent in $M(2,3)$, then it is dependent in $M$ by the induction hypothesis, and the proof is complete.
Otherwise, if $\widehat E = E_0 - \{e_1, e_2, e_3\} +f$, then $$|\widehat E| = |E_0| - 2 = 2|V(E_0)| -2 -2 = 2( |V(E_0)| - 1) -2 = 2 |V(\widehat E )| -2,$$ which, by the induction hypothesis, implies that $\widehat E$ is dependent in $M$, and hence, with respect to the closure operator $[\, \cdot\,]_M$ in $M$, that $f \in [E_0 - \{e_1, e_2, e_3\}]_{M}$. Since $f$ was chosen arbitrarily from $\{f_1, f_2, f_3\}$, we have $\{f_1, f_2, f_3\} \subset [E_0 - \{e_1, e_2, e_3\}]_{M}$. Because the edges $f_1,f_2, f_3, e_1, e_2, e_3$ form a $K_4$, we have $e_1 \in [E_0 - e_1]_M$, and therefore $E_0$, and hence $E$, is dependent in $M$. The matroid $M(2,3)$ is of special interest in the theory of rigidity of frameworks, as discussed in Section \[sec:intro\]. A framework is [*minimally rigid*]{} in ${\mathbb R}^d$ if it is rigid, but with any edge removed, it is not rigid. A well-known theorem of Laman [@L] provides, in the generic case, a combinatorial necessary and sufficient condition for a graph to be rigid. For this reason, a $(2,3)$-tight graph is also referred to as a [*Laman graph*]{}. \[thm:L\] A generic framework in the plane is minimally rigid if and only if its graph is $(2,3)$-tight. Associated with framework rigidity in ${\mathbb R}^d$ is the [*rigidity matroid*]{}, defined as the matroid whose independent sets are the sets of independent row vectors of the rigidity matrix, a matrix derived from the incidence matrix of the graph. A subset $E \subset E(K_n)$, considered as a graph, is rigid in ${\mathbb R}^d$ if and only if the closure of $E$ in the rigidity matroid is the complete graph on $V(E)$. Intuitively, the distances between all pairs of vertices of $V(E)$ are determined by the lengths of the edges in $E$. Often an edge not in $E$ whose length is determined by those in $E$ is referred to as an [*implied non-edge*]{}.
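Independence in the rigidity matroid can be tested numerically: place the vertices at random (hence, with probability one, generically) and compute the rank of the rigidity matrix. A minimal sketch in Python with NumPy (the function name is ours, not from the paper):

```python
import numpy as np

def rigidity_rank(n, edges, d=2, seed=0):
    """Rank of the d-dimensional rigidity matrix at a random placement.
    The row for edge {i,j} carries p_i - p_j in vertex i's d columns and
    p_j - p_i in vertex j's d columns."""
    rng = np.random.default_rng(seed)
    p = rng.random((n, d))                      # random (generic) placement
    R = np.zeros((len(edges), d * n))
    for row, (i, j) in enumerate(edges):
        R[row, d * i:d * i + d] = p[i] - p[j]
        R[row, d * j:d * j + d] = p[j] - p[i]
    return np.linalg.matrix_rank(R)

k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(rigidity_rank(4, k4, d=2))   # 5 = 2*4 - 3: the 6 edges of K_4 are dependent
```

A graph on $n$ vertices is rigid in the plane exactly when this rank equals $2n-3$; for $K_4$ the rank falls one short of the number of edges, consistent with $K_4$ being a circuit of the ${\mathbb R}^2$ rigidity matroid.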
Laman’s theorem implies that the ${\mathbb R}^2$ rigidity matroid is $M(2,3)$, and therefore that the ${\mathbb R}^2$ rigidity matroid is the maximum matroid $\widehat M(K_4)$ (by Theorem \[thm:k4\]). Intuitively, $K_4$ is a circuit in the rigidity matroid and, for any graph in ${\mathbb R}^2$, the set of implied non-edges is, in a sense, determined purely by the circuits isomorphic to $K_4$. Whether similar results carry over to ${\mathbb R}^3$ is discussed in Section \[sec:open\]. Every Graph Has a Unique Maximum Matroid {#sec:alg} ======================================== For each graph $G$ considered in the previous sections, there is a simple description of the maximum matroid for $G$; for example, for $K_3$ it is the cycle matroid and for $K_4$ it is the matroid whose bases are the Laman graphs. This may be misleading. As explained in Section \[sec:open\], a description of the maximum matroid even for $K_5$ is problematic. What is proved in this section is that, for any given graph $G$, there is a maximum matroid for $G$. This matroid is not described explicitly, but is given by an algorithm that computes the matroid in terms of its closure operation. For a fixed graph $G$, Algorithm $A$ below defines a matroid $M_A(G)$, which we call the [*A-matroid for*]{} $G$. That it is indeed a matroid is Theorem \[thm:2\]. The matroid $M_A(G)$ is computed in terms of the closure operator. In the algorithm, for each subset $E \subseteq E(K_n)$, there is a corresponding set $[E]$ of edges of $K_n$ that begins with $[E] = E$ and with edges possibly added to $[E]$ as the algorithm progresses. The set $[E]$ at termination of the algorithm is denoted $[E]^*$ and is called the [*closure*]{} of $E$ in $M_A(G)$. A [*step*]{} in Algorithm A is the addition of a single edge $e$ to a set $[E]$ and to $[E']$ for all $E' \supset E$. This is done on the second-to-last line of Algorithm A. There are two ways that the edge $e$ can be added, by conditions labeled 1 and 2 in the algorithm.
These will be referred to as [*addition rule 1*]{} and [*addition rule 2*]{}. The algorithm proceeds one step at a time, so the steps can be numbered $1,2, \dots$. The set $[E]$ just after step $i$ but before step $i+1$ will be referred to as the [*closure of $E$ at step $i$*]{}. The notation $[E]^k$ denotes the closure of $E$ directly after completing loop $k$ of the FOR statement, just before starting loop $k+1$. The number of vertices in a graph $G$ is denoted $n(G)$. [**Algorithm A**]{} Input: Graph $G$ Output: The closure $[E]^*$ for all $E\subset E(K_n)$ Initialize: $[E] = E$ for all $E\subset E(K_n)$ For $k = n(G)-1$ to $n \choose 2$ do While there exists a triple $(e,E,F)$ that satisfies all of the following - $e \in E(K_n) \setminus [E]$ and $F \subseteq E(K_n)$, - $F \subseteq [E]$, - $|F| \leq k$, - one of the following holds: 1. $F + e \approx G$ 2. $f \in [F- f + e] $ for some $f \in F$ and $f \notin [F-f]^{k-1}$ for all $f \in F$, add $e$ to $[E]$. $[E]^* = [E]$ for all $E \subseteq E(K_n)$ \[ex:A\] In this example, $G = C_5$, the $5$-cycle. Referring to Figure \[fig:3\], we explain how Algorithm A adds edge $e$, in red in panel 2, to the closure of $E$, the graph shown in panel 1. Consider each of the panels 4-7. During the first $k$-loop, when $k=4$, the red edge is added to the closure of each graph in green by addition rule 1. During the $k=5$ loop, the red edges in panels 4-7 are successively added to the closure of the graph in panel 3, again by addition rule 1. Note that the red edge in panel 7 is edge $f$ in panel 2. Since $f\in [F-f+e]$, addition rule 2 yields $e \in [E]$. (It is not hard to show that the algorithm could not have, for any $h\in F$, previously added $h$ to the closure of $F-h$, i.e., $h \notin [F-h]$ for all $h \in F$.) ![See Example \[ex:A\].[]{data-label="fig:3"}](fig3.png "fig:"){width="15cm"} The proofs of the main results of this section proceed as follows.
Lemma \[lem:1A\] states that the closure operator defined by Algorithm A is independent of the order in which the steps in the algorithm are performed. Theorem \[thm:2\] states that the closure operator defined by Algorithm A satisfies closure axioms CL1-CL4 of a matroid as given in Section \[sec:easy\]. Theorem \[thm:main1\] states, in particular, that each subgraph of $K_n$ isomorphic to $G$ is a circuit. Theorem \[thm:main2\] states that $M_A(G) = \widehat M(G)$, i.e., that $M_A(G)$ is the maximum matroid for $G$. The proofs of these results depend on the technical Lemmas \[lem:x0\]-\[lem:x3\]. \[lem:x0\] If $E \subseteq H$, then $[E]^k \subseteq [H]^k$ for all $k$. Assume that $e$ is added to $[E]$ during FOR loop $j\leq k$, and that $e$ is the first such edge added with the property that $e \notin [H]^j$. Then there is an $F \subseteq [E], \, |F| \leq j,$ such that the conditions in edge addition rule 1 or 2 hold. By the minimality of $e$, also $F \subseteq [H]^j$. Therefore $e$ is added to $[H]$ by the end of FOR loop $j$. \[lem:x1\] For all $E \subseteq E(K_n)$ and for all $k$ we have $[[E]^k]^k = [E]^k$. The containment $[E]^k \subseteq [[E]^k]^k$ follows from Lemma \[lem:x0\]. To prove the opposite containment, let $H = [E]^k$ and, by way of contradiction, assume that $e \in [H]^k \setminus H$. Assume further that $e$ is the first such edge added, say at step $i$ during the FOR loop $j\leq k$. Since it is the first such edge, we have $[H] = H$ at step $i-1$ of the algorithm. But this means that there is an $F \subseteq [H] = H = [E]^k, \; |F| \leq j \leq k,$ such that one of the edge addition rules (1) or (2) in the algorithm holds. If this were the case, however, then $e$ would be added to $[E]$ by the end of FOR loop $k$, because the same conditions (1) or (2) would still hold at that stage (note that the requirement $f \notin [F-f]^{k-1}$ for all $f \in F$ in edge addition (2) is determined by the end of FOR loop $k-1$). 
But this contradicts $e \notin H = [E]^k$. \[cor:1\] For all $E \subseteq E(K_n)$ and for all $j\leq k$ we have $[[E]^j]^k = [E]^k$. Using Lemma \[lem:x0\] we have $[E]^k \subseteq [[E]^j]^k \subseteq [[E]^k]^k$, the first inclusion because $E \subseteq [E]^j$ and the second inclusion because $[E]^j \subseteq [E]^k$. The result then follows from Lemma \[lem:x1\]. \[lem:independent\] For any $H \subset E(K_n)$, there is a subset $H' \subseteq H$ such that $[H']^k = [H]^k$ and $h \notin [H' - h]^k$ for all $h \in H'$. Let $h_1 \in H$ be such that $h_1 \in [H-h_1]^k$. If no such $h_1$ exists, then $h \notin [H-h]^k$ for all $h \in H$, and we may take $H' = H$. Continuing in this way, there is a sequence $(h_1, h_2, \dots , h_m)$ of elements of $H$ and a sequence $(H_0, H_1, H_2, \dots , H_m)$ of distinct subsets of $H$ such that $$\begin{aligned} H_0 &:= H, \\ H_{i} &:= H_{i-1} - h_{i}, \qquad \qquad \text{and} \qquad \qquad h_i \in H_{i-1} \cap [H_i]^k \qquad \text{for} \quad i = 1,2, \dots, m, \\ h &\notin [H_m - h]^k \quad \text{for all} \; \; h \in H_m. \end{aligned}$$ Taking $H' = H_m$ in the statement of the lemma, it now suffices to prove that $[H_i]^k = [H_{i-1}]^k$ for $i =1,2,\dots, m$. By Lemma \[lem:x0\], $[H_{i}]^k = [H_{i-1} - h_i]^k \subseteq [H_{i-1}]^k$. For the opposite inclusion, use Lemma \[lem:x1\] to obtain $$[ H_{i-1}]^k = [H_i + h_i]^k \subseteq [[H_i ]^k]^k = [H_i]^k.$$ \[lem:x2\] For all $e,f \in E(K_n), \; H \subseteq E(K_n)$, if $|H| \leq k$ and $f \in [H+e]^{k+1} \setminus [H]^{k}$, then $e \in [H+f]^{k+1}$. Assume that $f \in [H+e]^{k+1} \setminus [H]^{k}$. By Lemma \[lem:independent\] there is an $H'$ such that $[H']^k = [H]^k$ and $h \notin [H'-h]^k$ for all $h\in H'$. With notation as in the proof of Lemma \[lem:independent\], we have $$[H_{i-1} + e]^{k+1} = [H_i+h_i +e]^{k+1} \subseteq [ [H_i]^k+e]^{k+1} \subseteq [[H_i+e]^k]^{k+1} = [H_i +e]^{k+1},$$ for $i = 1,2, \dots, m$, the last equality by Corollary \[cor:1\].
Therefore $f \in [H+e]^{k+1} \subseteq [H'+e]^{k+1}$. Now $f\in [H'+e]^{k+1}\setminus [H']^k$ and $h \notin [H'-h]^k$ for all $h\in H'$. Letting $E = F = H'+f$, this is equivalent to: $|F| \leq k+1, \; f \in [F-f+e]^{k+1}\setminus [F-f]^k$, and $h \notin [F-f-h]^k$ for all $h \in F-f$. That $h \notin [F-f-h]^k \subseteq [F-h]^k$ for all $h \in F-f$ implies that $h' \notin [F-h']^k$ for all $h' \in F$ because $f \notin [F-f]^k$. Trivially $F \subseteq [E]^k$. Therefore, by an edge addition of type (2) in the algorithm, if not already there by then, $e$ would be added to the closure of $E$ by the end of FOR loop $k+1$. Hence $e \in [E]^{k+1} = [H'+f]^{k+1} \subseteq [H+f]^{k+1}$. \[lem:pre\] If 1. $|E| = k$, 2. $|F| = k+1$, and 3. $F\subseteq [E]^{k}$, then $f \in [F-f] ^k $ for some $f \in F$. For any $f_1 \in F \setminus E$, there is a set $E' \subseteq E$ such that $$\begin{aligned} f_1 &\in [E']^k \\ f_1 & \notin [E'-e_1]^k \quad \text{for all} \; \; e_1 \in E'. \end{aligned}$$ Choose such an $e_1 \notin F$. This is possible unless $E' \subset F$, in which case $f_1 \in [E']^k \subseteq [F-f_1]^k$, completing the proof of this lemma. Now use Lemma \[lem:x2\] with $H = E'-e_1, \, e=e_1, \, f=f_1$ to conclude that $e_1\in [E'-e_1+f_1]^k \subseteq [E-e_1+f_1]^k$. Let $E_1 = E - e_1+f_1$, and note that $$\begin{aligned} (0) \quad [E]^k &= [E-e_1+e_1]^k \subseteq [ (E-e_1) \cup [E-e_1+f_1]^k]^k = [[E-e_1+f_1]^k]^k \\ &= [E-e_1+f_1]^k= [E_1]^k, \\ (1') \quad |E_1| &= k, \\ (3') \quad F\subseteq & \; [E_1]^k, \end{aligned}$$the second-to-last equality of (0) by Lemma \[lem:x1\] and statement $(3')$ by (0). Now repeat the above procedure to obtain a sequence $E_1, E_2, \dots, E_{k}$ of sets, a sequence $e_1, e_2, \dots , e_{k}$ of edges, and a sequence $f_1, f_2, \dots , f_{k}$ of distinct edges in $F$ such that, for $i = 1,2, \dots , k$, 1. $e_i \in E_{i-1} \setminus \{f_1, f_2, \dots, f_{i-1} \}$, 2. $E_i = E_{i-1}-e_i+f_i$, 3. $|E_i| = k$, 4.
$F\subseteq [E_i]^k$. Denoting its $k+1$ elements by $F = \{f_0, f_1, f_2, \dots, f_k\}$, at the last stage we have $$f_0 \in F \subseteq [E_k]^k = [E -e_1 - e_2 - \cdots - e_k + f_1 + f_2 + \cdots + f_k]^k = [F-f_0]^k.$$ \[lem:x3\] If $[E]^*$ denotes the closure of a set $E$ at the termination of Algorithm A, then $[E]^* = [E]^{|E|}$ for every $E \subseteq E(K_n)$. Assume, by way of contradiction, that $|E| =k$ and $e \in [E]^*\setminus [E]^k$. Assume further that $e$ is the first edge added to $[E]$ after FOR loop $k$, say during FOR loop $j>k$. Since $j \geq k+1 \geq n(G)$, it could not have been a type (1) edge addition. For an edge addition of type (2), let $F'$ be any subset of $F$ with $|F'| = k+1$. By Lemma \[lem:pre\] there is an $f\in F'$ such that $f\in [F'-f]^k$, and hence an $f\in F$ such that $f\in [F-f]^{j-1}$. But this contradicts the requirement on $F$ in the algorithm for a type (2) edge addition. \[lem:1A\] For any $E\subseteq E(K_n)$, the closure $[E]^*$ for Algorithm A at termination is independent of the order in which edges are added. To show the independence of order, let $A_1$ and $A_2$ be two runs of the algorithm, with different orders. Let $[E]_1$ denote the closure of $E$ at some specific step in $A_1$; similarly for $[E]_2$. Let $[E]_1^k$ and $[E]_2^k$ be the respective closures just before FOR loop $k+1$ begins, and let $[E]_1^*$ and $[E]_2^*$ denote the closures of $E$ at termination, using $A_1$ and $A_2$, respectively. Assume, by way of contradiction, that $[E]_1^k = [E]_2^k$ for some $k$ and for all $E \subseteq E(K_n)$, and that $e$ is the first edge added during the $k+1$ loop of $A_1$, say to $[E]_1$, such that $e \notin [E]_2^*$. Assume that $k$ is minimal for which this is true. If the edge addition is of type (1), then clearly $e$ will, at some point in algorithm $A_2$, be added to the closure of $E$. Therefore it must be a type (2) edge addition.
Thus there is an $F \subseteq [E]_1$ with $|F| \leq k+1$ such that $f \notin [F-f]_1^k$ for all $f\in F$ and $f \in [F- f + e]_1$ for some $f \in F$. Since $|F-f| \leq k$, by Lemma \[lem:x3\] and the minimality of $k$, we have $$[F-f]_1^k = [F-f]_2^k = [F-f]_2^* .$$ Since $f \notin [F-f]_1^k$ for all $f\in F$, we have $$\label{eq:order1} f \notin [F-f]_2^* \quad \text{for all} \;\; f \in F.$$ Moreover, by the minimality condition (that $e$ is the first edge addition with $e \in [E]_1 \setminus [E]_2^*$ during FOR loop $k+1$ of $A_1$), and since $f \in [F- f + e]_1$ prior to the addition of edge $e$ to $[E]_1$, we have $$\label{eq:order2} f \in [F- f + e]_2^*.$$ By \eqref{eq:order1} and \eqref{eq:order2} above, at some step before termination in algorithm $A_2$, edge $e$ will be added to the current closure of $E$. Therefore $e \in [E]_2^*$, a contradiction. \[thm:2\] The closure operator constructed by Algorithm A satisfies the closure axioms of a matroid. Recall, in the notation of Algorithm A, the closure axioms for a matroid. For all $E, F \subseteq E(K_n)$: 1. $E \subseteq [E]^*$, 2. $F \subseteq E$ implies $[F]^* \subseteq [E]^*$, 3. $[[E]^*]^* = [E]^*$, 4. For any $e,f \in E(K_n)$ and $E \subseteq E(K_n)$, if $f \in [E+ e]^* \setminus [E]^*$, then $e \in [E+ f]^*$. Axiom (1) is clear, and axiom (2) follows from Lemma \[lem:x0\]. Axiom (3) follows from Lemma \[lem:x1\] by taking $k = {n\choose 2}$. Concerning axiom (4), assume $f \in [E+ e]^* \setminus [E]^*$. If $|E|=k$, then by Lemma \[lem:x3\] we have $f \in [E+ e]^{k+1} \setminus [E]^k$. Taking $H = E$ in Lemma \[lem:x2\] gives $e \in [E+ f]^{k+1} = [E+f]^*$, again using Lemma \[lem:x3\]. \[thm:main1\] Given a graph $G$, Algorithm A produces a matroid $M_A(G)$ for which every isomorphic copy of $G$ is a circuit of the matroid, and for which there are no circuits with fewer edges than $G$. Denote the number of edges in $G$ by $|G|$. We first show, by way of contradiction, that there are no dependent sets with fewer than $|G|$ edges.
Suppose that there is a circuit $C$ with fewer than $|G|$ edges. An edge $e \in C$ must have been added to $[C-e]$ at some step, say step $i$, in Algorithm A. Assume that the pair $(e,C)$ is minimal in that there is no pair $(e',C')$, with $|C'| \leq |C|$, such that $e'$ is added to $[C'-e']$ in Algorithm A at a step $j<i$. Let $H = C-e$. By the minimality of the step number, $[H] = H$ prior to step $i$ (before $e$ is added to $[H]$). Since $|C| < |G|$, $e$ could not have been added to $[H]$ by rule 1. By addition rule 2, there is a set $F\subseteq [H] = H$ and an edge $f\in F$ such that $f$ is added to $[F-f+e]$ at some step $j < i$. Letting $C'$ denote the set $F+e$, the pair $(f,C')$ contradicts the minimality of $(e,C)$ because $j<i$. Theorem \[thm:2\] immediately implies that the matroid $M_A(G)$ is well defined and produces a closure operator satisfying the closure axioms of a matroid. Clearly, edge addition of type (1) implies that each copy of $G$ is dependent. By the paragraph above, each copy of $G$ is a circuit. \[thm:main2\] The maximum matroid for a given graph $G$ is the A-matroid $M_A(G)$. By Theorem \[thm:main1\], every graph isomorphic to $G$ is a circuit of $M_A(G)$, thus verifying condition 1 in Definition \[def:um\]. To prove condition 2, let $M$ be any matroid for which every graph isomorphic to $G$ is a circuit. Denote the closure of a set $E$ at an arbitrary step in Algorithm A by $[E]$ and the closure at termination by $[E]^*$. Let ${\mathcal C}$ denote the set of all circuits of $M$. We claim that, whenever an edge $e$ is added to an independent set $E$ in the A-matroid, say at step $i$ in Algorithm A, then there is a $C\in {\mathcal C}$ such that $C \subseteq E+e$. The claim would imply the theorem. The claim is proved by induction on the step number $i$ in Algorithm A when $e$ is added to $E$. Assume that edge $e$ is added at step $i$ to a set $E$ that is independent in the A-matroid.
Note that, since $E$ is independent in the A-matroid, the closure $[E]$ is just $E$ prior to step $i$. If $e$ is added by addition rule 1, then there is an $F\subseteq [E]=E$ such that $F+e \approx G$. Since $F+e$ is a circuit in $M$, clearly there is a $C\in {\mathcal C}$ such that $C \subseteq F+e \subseteq E+e$. Note that the first edge addition in Algorithm A must be by addition rule 1; therefore the claim is true for $i=1$. If $e$ is added to $[E]=E$ by addition rule 2, then there is a pair $(f,F)$ such that $f\in F\subseteq [E]=E$ and edge $f$ is added to $[F-f+e]$ at some step $j < i$. Note that $F$ is independent in the A-matroid because, by addition rule 2, we have $f\notin [F-f]^{k-1} = [F-f]^*$, the last equality by Lemma \[lem:x3\]. By the induction hypothesis, since $j<i$, there is a $C \in {\mathcal C}$ such that $C \subseteq (F-f+e)+f = F+e \subseteq E+e$. The Maximum Matroid for $K_5$ {#sec:open} ============================= Although Theorems \[thm:main1\] and \[thm:main2\] guarantee that it exists, we have no explicit description of the maximum matroid for $K_5$. There does exist a matroid for which every copy of $K_5$ is a circuit, namely the $3$-dimensional rigidity matroid discussed in Sections \[sec:intro\] and \[sec:K4\]. This relates to a 153-year-old open problem dating back to J. C. Maxwell [@M]. Laman’s theorem (Theorem \[thm:L\] in Section \[sec:K4\]) provides a simple combinatorial characterization of rigidity in $2$ dimensions. Maxwell asked for such a characterization in $3$ dimensions, but no such characterization has been found. The natural analog of Theorem \[thm:L\] for ${\mathbb R}^3$ is to replace the condition that the graph be $(2,3)$-tight by the condition that it be $(3,6)$-tight. The resulting statement, however, is false, as shown by the graph, called the “double banana," in Figure \[fig:banana\]. This graph is $(3,6)$-tight, but is clearly not rigid.
![Double banana.[]{data-label="fig:banana"}](2K5.png){width="4cm"} In dimension 2, the rigidity matroid is the maximum matroid $\widehat M(K_4)$, as explained in Section \[sec:K4\]. Therefore it is natural to ask: \[ques\] Is the $3$-dimensional rigidity matroid equal to the maximum matroid $\widehat M(K_5)$ for $K_{5}$? If the answer to the above question is “yes", then Algorithm A, which is purely combinatorial, may be the closest one may hope for in the way of an answer to Maxwell’s question. In this case, to determine whether or not a graph $E$ is rigid in ${\mathbb R}^3$, it would be sufficient to compute the closure $[E]^*$ using Algorithm A. Then $E$ would be rigid if and only if $[E]^*$ is the complete graph on the vertex set $V(E)$. (It should be admitted that Algorithm A, in its present form, is not computationally efficient.) Independent of the answer to Question \[ques\], Algorithm A may provide an upper bound on the rank function of the $3$-dimensional rigidity matroid, a topic of importance in combinatorial rigidity theory [@J]. By Proposition \[prop\], the rank of a graph $E$ in the 3D rigidity matroid is bounded above by the rank of $E$ in the A-matroid (the maximum matroid for $K_5$), i.e., $\text {rank}(E) \leq \text{rank}_A(E).$ Question \[ques\] is also related to what is referred to as the [*maximal conjecture*]{}. An [*abstract rigidity matroid*]{} is defined by six axioms, the closure axioms CL1-CL4 and two additional closure axioms. Since it is peripheral to this paper, we omit the definition and a proof that $\widehat M(K_5)$ is an abstract rigidity matroid. Denote by $M \succeq M'$ the statement that $M$ majorizes $M'$, i.e., every independent set in matroid $M'$ is independent in matroid $M$. Denote by $R(d)$ the $d$-dimensional rigidity matroid. The maximal conjecture states: $R(d) \succeq M$ for every abstract rigidity matroid $M$. The maximal conjecture is known to be true for $d=2$ and false for $d\geq 4$.
The conjecture is open for $d=3$. We now know that 1. $\widehat M(K_5) \succeq R(3)$ (because each $K_5$ is a circuit in $R(3)$, and $\widehat M(K_5)$ is the maximum matroid for $K_5$), and 2. $R(3) \succeq \widehat M(K_5)$ if the maximal conjecture is true for $d=3$. Therefore, either $\widehat M(K_5) = R(3)$, providing an affirmative answer to Question \[ques\] (and a partial solution to the question of Maxwell), or else the maximal conjecture is false in ${\mathbb R}^3$. Acknowledgements {#acknowledgements .unnumbered} ================ This work was partially supported by a grant from the Simons Foundation (\#322515 to Andrew Vince). [99]{} A. R. Berg and T. Jordán, A proof of Connelly’s conjecture on 3-connected circuits of the rigidity matroid, [*J. Combin. Theory Ser. B*]{}, [**88**]{} (2003), 77-97. J. Cheng, Towards Combinatorial Characterizations and Algorithms for Bar-and-Joint Independence and Rigidity in 3D and Higher Dimensions, Ph.D. Thesis, University of Florida, 2013. H. H. Crapo, Erecting Geometries, In Proc. Second Chapel Hill Conf. on Combinatorial Mathematics and its Applications (Univ. North Carolina, Chapel Hill, N.C., 1970), pages 74-99. H. Crapo and W. Whiteley, The Geometry of Rigid Structures, Encyclopedia of Math., Cambridge University Press, (1993). G. Gordon and J. McNulty, Matroids, a Geometric Introduction, Cambridge University Press, Cambridge, UK, 2012. J.E. Graver, B. Servatius and H. Servatius, Combinatorial Rigidity, [*Graduate Studies in Mathematics*]{}, Amer. Math. Soc., 1993. B. Jackson and T. Jordán, On the rank function of the 3-dimensional rigidity matroid, [*Internat. J. Comput. Geom. Appl.*]{}, [**16**]{} (2006), 415-429. D. Knuth, Random matroids, [*Discrete Math.*]{} [**12**]{} (1975), 341-358. G. Laman, On graphs and rigidity of plane skeletal structures, [*J. Engrg. Math.*]{} [**4**]{} (1970), 331-340. A. Lee and I. Streinu, Pebble game algorithms and sparse graphs, [*Disc. Math.*]{}, [**308**]{} (2008), 1425-1437.
J. C. Maxwell, On the calculation of the equilibrium and stiffness of frames, [*Philos. Mag.*]{} [**27**]{} (1864), 294-299. J. Oxley, Matroid Theory, Graduate Texts in Mathematics, Oxford University Press, Oxford, 1992. J. M. S. Simões-Pereira, On subgraphs as matroid cells, [*Math. Zeitschrift*]{}, [**127**]{} (1972), 315-322. I. Streinu and L. Theran, Sparse hypergraphs and pebble game algorithms, [*Eur. Jour. of Combinatorics*]{}, [**30**]{} (2009), 1944-1964.
--- abstract: | We attempt to answer whether neutrinos and antineutrinos, such as those in the cosmic neutrino background, would clusterize among themselves or even with other dark-matter particles, within a certain time span, say $1\, Gyr$. With neutrino masses in place, the similarity with the ordinary matter increases and so does our confidence for neutrino clustering if time is long enough. In particular, if the clusterings could happen with some seeds (see the text for the definition), the chance in the dark-matter world to form dark-matter galaxies increases. If the dark-matter galaxies would exist over a time span of $1\, Gyr$, then they might even dictate the formation of the ordinary galaxies (i.e. the dark-matter galaxies get formed first); thus, the implications for the structure of our Universe would be tremendous. [ PACS Indices: 05.65.+b (Self-organized systems); 98.80.Bp (Origin and formation of the Universe); 12.60.-i (Models beyond the standard model); 12.10.-g (Unified field theories and models).]{} --- [**The dark-matter world: Are there dark-matter galaxies?**]{} .5cm W-Y. Pauchy Hwang[^1]\ [*Asia Pacific Organization for Cosmology and Particle Astrophysics,\ Institute of Astrophysics, Center for Theoretical Sciences,\ and Department of Physics, National Taiwan University, Taipei 106, Taiwan*]{} .2cm [(October 10, 2011)]{} Introduction ============ The phenomenon of clustering is a highly nonlinear effect. It works against the tendency toward the uniform distribution, or against the democratization. Like a growth process, clustering seems to acquire a life of a certain form. What comes to our minds is whether the dark-matter world, 25% of the modern Universe (in comparison, only 5% is in ordinary matter), would form the dark-matter galaxies, even before the ordinary-matter galaxies. A dark-matter galaxy is by our definition a galaxy that does not experience any strong or electromagnetic interactions with our visible ordinary-matter world.
This fundamental question, though difficult, or even impossible, to answer, deserves some thought, for the sake of the structural formation of our Universe. We wish to address the likelihood of the clustering - a question for which we could not begin with a theory and prove an existence theorem for the clustering; rather, if there is some clustering in some region, there is also de-clustering in the other region(s). However, large-scale clustering is so essential that all life depends on it. In the ordinary-matter world, strong and electromagnetic forces make the clustering a different story - they manufacture atoms, molecules, complex molecules, and chunks of matter, and then the stars and the galaxies; the so-called “seeded clusterings”. The seeds in the dark-matter world could be extra-heavy dark-matter particles - one such particle would be equivalent to thousands of ordinary-matter molecules. If neutrinos and antineutrinos were alone and did not interact with other dark-matter particles, the clustering would then be no-seed, and it might be too slow compared to a time span of, e.g., $1\,Gyr$. So, we need to keep in mind whether a seeded clustering is possible - and it could be seeded in a different manner. Maybe we could try another avenue - try to show that the dark-matter world follows the rules of the ordinary-matter world, except that the scales might be slightly different. Then we put everything together within the time span of the age of our Universe. This is the first part of the “proof” which we wish to follow in this note.
The next part of the proof comes from the fact that atoms, molecules, complex molecules, and chunks of matter in fact come from strong and electromagnetic forces - these seeds enhance the clustering effect in a time span of $1\, Gyr$, the life of the early Universe (the seeded clustering); for the enhancement of this clustering effect, weak interactions play little role. Of course, the gravitational forces all add to the clustering effects because these effects, though tiny, uniformly add up. In a world that is full of unknowns - about 25% dark matter and 70% dark energy - we are thinking of the world from the 5% visible ordinary-matter world. We know that the dark-energy part is uniformly distributed. Is the dark-matter world clustered? There is some positive indication - such as the rotation curves of the spiral galaxies. We might argue that individual objects that have masses feel the gravitational $1/ r^2$ force, which leads to large-scale clustering provided that the time is long enough. Thus, the evidence for neutrino masses is one positive indication for long-time gravitational large-scale clustering. In fact, numerical simulations do give evidence that gravitational forces alone yield clustering, if the time is sufficiently long. But what we have in mind is that the sequence of atoms-molecules-complex molecules-etc. yields the “seeds” of the clusterings - the seeded clustering that might be relevant for a time span of, e.g., $1\, Gyr$, the age of the early Universe. Such seeds for dark-matter galaxies may also come from heavy dark-matter particles (with masses greater than, e.g., $1\, TeV$). If neutrinos, alone (assuming that there are unknown not-so-weak interactions) or together with other dark-matter species, could be found to aggregate at large distances (somewhat larger than those for ordinary matter), it would add to the evidence for the seeded clustering.
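The rotation-curve indication mentioned above can be made quantitative with a standard back-of-envelope estimate. The sketch below is our own illustration, not from this paper; the flat rotation speed (220 km/s), the radius (50 kpc), and the visible mass ($\sim 10^{11}$ solar masses) are assumed textbook values for a Milky-Way-like galaxy. A flat rotation curve implies an enclosed mass $M \approx v^2 r / G$:

```python
# Back-of-envelope estimate (ours, with assumed textbook numbers): a flat
# rotation curve of speed v out to radius r implies an enclosed mass
# M ~ v^2 r / G that exceeds the visible mass several-fold.
G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
KPC   = 3.086e19         # one kiloparsec in meters
M_SUN = 1.989e30         # solar mass in kg

v = 220e3                # assumed flat rotation speed, m/s
r = 50 * KPC             # assumed radius out to which the curve stays flat

M_enclosed = v**2 * r / G / M_SUN   # enclosed mass, in solar masses
M_visible  = 1e11                   # assumed visible (baryonic) mass, M_sun
print(f"enclosed ~ {M_enclosed:.1e} M_sun, ~ {M_enclosed/M_visible:.0f}x visible")
```

With these inputs the enclosed mass comes out around $5\times10^{11}$ solar masses, several times the assumed visible mass - order-of-magnitude consistent with the dark aggregates invoked later in the text.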
Note that the seeded clustering could occur throughout the early Universe and slightly longer, say, 1 Gyr or more. In the Big-Bang theory, neutrinos and other weakly-interacting particles are manufactured earlier than atoms and molecules. The crucial question is whether the clustering in the dark-matter world is seeded or no-seed. We start with the minimal Standard Model of particles (which defines the ordinary-matter world, as the zeroth-order approximation), which describes all kinds of known interactions - strong, electromagnetic, weak, and other interactions. These particles do aggregate (into atoms, molecules, and then macroscopically gravitational objects) under strong, electromagnetic, and gravitational forces (in a time span of $1\, Gyr$, the life of the early Universe); the weak forces are much weaker, too weak for the clustering problem that we are talking about. Presumably the same things might have happened for the neutrinos and other dark-matter particles, except at a much larger scale. So, the first part of the “proof” - that neutrinos and other dark-matter particles are described by the extended Standard Model - does mean something important. Thus, it is essential to understand the world of the extended Standard Model (for ordinary-matter particles and dark-matter particles) for the large-scale clustering of the dark-matter world. If everything turns out to be similar to the particles in the “extended” Standard Model, we would assert that there is at least long-time large-scale clustering of the dark-matter world - “long-time” in comparison to the age of our Universe. Whether there is “seeded clustering” (relevant in a time span of $1\,Gyr$) is the next question to answer. Note that the world of the minimal Standard Model, the visible ordinary-matter world, seems to be extremely simple.
One starts out with the electron, a point-like spin-1/2 particle, and ends up with other point-like Dirac particles, with interactions through gauge fields modulated by the Higgs fields. This is what we know experimentally, and it is a little strange that it seems to be “complete” and that nothing else seems to exist. Since Dirac’s equation we have searched for point-like particles, now for eighty years. It seems that the Dirac equation explains all the relativistic point-like particles and their interactions. So, why don’t we formulate a working rule to describe this fact? Well, if the dark matter, occupying 25% of the present-day Universe, would cluster or even form dark-matter galaxies (particularly before the ordinary-matter galaxies), the implications would be tremendous. That is why we wish to “analyze” and to answer. Dirac Similarity Principle and Minimum Higgs Hypothesis ======================================================= The minimal Standard Model, which has been experimentally verified and which describes the ordinary-matter world, can be understood as a world consisting of a set of point-like Dirac particles interacting through gauge fields modulated by the Higgs fields. The only unknowns are the neutrinos, which we believe may also be point-like Dirac particles. Thus, the minimal Standard Model is basically a world of Dirac particles interacting among themselves through gauge fields modulated by the Higgs fields. In extending the minimal Standard Model, we try to keep the “principle” of point-like Dirac particles intact - regarding the eighty-year experience as some sort of sacred. On the other hand, the forty-year search for the Higgs (scalar fields), still in vain, amounts to the “minimum Higgs hypothesis”. This certainly offers us a very useful guide - the two working hypotheses simplify a lot of things.
In other words, we follow another paper [@Hwang3] and introduce “the Dirac similarity principle”: every “point-like” particle of spin-1/2 could be observed in our space-time if it is “connected” with the electron, the original spin-1/2 particle. For some reason[@Wu], this clearly has something to do with how relativity and the space-time structure get married to spin-1/2 particles. This is interesting since there are other ways to write down (or, to express) spin-1/2 particles, but so far they are not seen - perhaps because they are not connected with the electron. In other words, the partition between geometry (in numbers such as the $4\times 4$ $\sigma/2$ in the angular momentum) and space-time (such as $\vec r \times \vec p$) is similar to that of the electron. We adopt the “Dirac similarity principle” as the working “principle” when we extend the Standard Model to include the dark-matter particles as well. These are “point-like” Dirac particles whose size we believe is less than $10^{-17}\,cm$, the current resolution of length. Mathematically, the “point-like” Dirac particles are described by “quantized Dirac fields” - maybe via a renormalizable Lagrangian. The “quantized Dirac fields”, whose meaning we can axiomatize, in fact do not contain anything characterizing the size (maybe as it should be). In addition, the word “renormalization” may contain something of the variable size. If the dark-matter particles are species of the extended Standard Model, then the dark-matter world should exhibit the clustering effect, just like the ordinary-matter world. This is why we need to know, to begin with, whether we could use the extended Standard Model to describe the dark-matter world. So, if we could verify that the dark-matter species are in the extension of the Standard Model, then we have a proof that these species form clusters just like the ordinary-matter species. In addition, we should take another look at the Standard Model.
After forty years, the search for Higgs or scalar particle(s) is still in vain, although the scalar fields are in some sense trivial (without spin, etc. - no internal structure of any kind). So, the presence of these particles should follow the “minimum Higgs hypothesis” - it might make a lot of sense even though we don’t know why our world looks like this. Let’s now “apply” the Dirac similarity principle and the minimum Higgs hypothesis to our problem. The neutrinos are now Dirac particles of some kind - so, right-handed neutrinos exist and the masses could be written in terms of them. To make Dirac neutrinos massive, we need a Higgs doublet. Is this Higgs doublet a new Higgs doublet? In principle, we could use the Standard-Model Higgs doublet and take its complex conjugate (as in the case of quark masses) - the problem is the tininess of these masses; even if this would work, it is definitely un-natural. If a new and “remote” Higgs doublet would exist and the tininess of the neutrino masses is explained by the neutrino couplings to the “remote” Higgs, then it comes back to being “natural”. Why should the neutrino couplings to the “remote” Higgs doublet be small? - just as for the CKM matrix elements (that is, the 31 matrix element is much smaller than the 21 matrix element); this is the other “naturalness” reason. We’d better look for the minimum number of Higgs multiplets as our choice, and the couplings to the “remote” Higgs would be much smaller than to the ordinary one. As said earlier, this hypothesis makes the case of the tiny neutrino masses very natural and, vice versa, we rephrase the natural situation to get the hypothesis. Why do we adopt such a hypothesis? For more than forty years, we haven’t found any solid signature of the Higgs; that the neutrinos have tiny masses (in comparison with quarks and charged leptons) is another reason.
[*Therefore, under the “Dirac similarity principle” and the “minimum Higgs hypothesis”, we have a unique Standard Model once the gauge group is determined or given.*]{} These two working hypotheses sort of summarize the characteristics of the (extended) Standard Model. That neutrinos have tiny masses can be taken as a signature that there is a heavy extra $Z^{\prime 0}$, so that a new Higgs doublet should exist. This extra $Z^{\prime 0}$ then requires the new “remote” Higgs doublet[@Hwang]. This Higgs doublet also generates the tiny neutrino masses. This is one possibility, $SU_c(3)\times SU_L(2)\times U(1) \times U(1)$; with the minimum Higgs hypothesis, we are talking about the unique extra-$U(1)$ generation. Alternatively, we could require that the right-handed $SU_R(2)$ group exists to restore the left-right symmetry[@Salam]. In this case, the left-handed sector and the right-handed sector each have a Higgs doublet, each being the left-right image of the other. The original picture[@Salam] contains many options regarding the Higgs sector, but the “minimum Higgs hypothesis” now makes the choice unique. In our terms, the right-handed Higgs would be the “remote” Higgs for the left-handed species. That determines the size of the coupling, including the tiny neutrino masses. Note that the only cut-off is the masses of the right-handed gauge bosons (by keeping the left-right symmetry). We believe that the phenomenology of the said unique left-right symmetric model should be seriously pursued. There is another option - the family option [@Family]. Here we are curious about why there are three generations - is the family symmetry in fact some sort of gauge symmetry whose associated interactions are too weak to have been seen? We try to combine the Standard Model $SU_c(3)\times SU_L(2) \times U(1)$ with $SU_f(3)$, with $(\nu_{\tau R},\,\nu_{\mu R},\, \nu_{eR})$ the basic $SU_f(3)$ triplet.
Here $SU_f(3)$ has an orthogonal neutrino multiplet since the right-handed neutrinos do not enter the minimal Standard Model. In this way, we obtain the $SU_c(3) \times SU_L(2) \times U(1) \times SU_f(3)$ minimal model. Or, the right-handed indices could be removed altogether in the family group, just as the other $SU_c(3)$ combines with $SU_L(2)\times U(1)$ so as to avoid anomalies. Again, the Dirac similarity principle and the minimum Higgs hypothesis save the day - uniqueness in the choice. In this case [@Family], the three families call for $SU_f(3)$, and to make the gauge bosons all massive the minimum choice would be a pair of Higgs triplets - apparently a kind of broken gauge symmetry. Under the “minimum Higgs hypothesis”, the structure of the underlying Higgs mechanism is determined. Then, neutrinos acquire their masses, to the leading order, with the aid of both Higgs triplets. Unless we could live with the unexplained duplication of generations, this story might be the only way to go. In other words, the particles in the ordinary-matter and dark-matter worlds are divided into two categories: those in the dark-matter world, and the ordinary-matter particles described by the minimal Standard Model - with the only exception of the neutrinos in the $SU_f(3)$-extended Standard Model [@Family]. So, the neutrinos serve as the bridge between the two worlds. Otherwise, we could not think of the connections between the dark-matter and ordinary-matter worlds, and those spiral galaxies (such as our Milky Way) would be truly unthinkable. All new species are “classified” as the so-called “dark matter”, 25% of the present-day Universe (compared to 5% of ordinary matter), in view of their feeble (weak) interactions with the ordinary-matter species (in the minimal Standard Model). Let us focus on the extra-$Z^{\prime 0}$ model to illustrate the “minimum Higgs hypothesis” in some detail. The left-handed neutrinos belong to $SU_L(2)$ doublets, while the right-handed neutrinos are singlets.
The term specified by $$\varepsilon\cdot ({\bar\nu}_L,{\bar e}^-_L) \nu_R \varphi$$ with $\varphi=(\varphi^0,\varphi^-)$ the new “remote” Higgs doublet could generate the tiny mass for the neutrino, and it is needed for the extra $Z^{\prime 0}$. We need to introduce one working hypothesis on the couplings to the Higgs: the couplings to the first (standard) Higgs doublet, from the electron to the top quark, we call “normal”, with $G_i$ the coupling to the first Higgs doublet; for the second (extra, “remote”) Higgs doublet, the strength of the couplings for the Dirac particles is down by the factor $(v/v')^2$, with $v$ the VEV of the standard Higgs and $v'$ the VEV of the (remote) Higgs. Presumably, this is contained in the “minimum Higgs hypothesis”. The hypothesis sounds very reasonable, similar to the CKM matrix elements; one may argue about the second power, but for the second Higgs fields some sort of scaling may apply. With this working hypothesis, the coupling of the neutrinos to the standard Higgs would vanish completely (i.e., it is natural) and their coupling to the second (remote) Higgs would be $G_j (v/v')^2$, with $G_j$ of the “normal” size. The “minimum Higgs hypothesis” amounts to the assertion that there should be as few Higgs fields as possible and that the couplings would be ordered as in the above equation, Eq. (1). Indeed, in the real world, neutrino masses are tiny, with the heaviest of order $0.1\, eV$. The electron, the lightest Dirac particle apart from the neutrinos, is $0.511\, MeV$[@PDG], or $5.11 \times 10^5\, eV$. That is why the standard-model Higgs, which “explains” the masses of all other Dirac particles, is likely not responsible for the tiny masses of the neutrinos. The “minimum Higgs hypothesis” makes the hierarchy very natural.
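The size of the hierarchy involved can be spelled out with a line of arithmetic. This is our own illustration using the two masses quoted above; the step converting the ratio into a remote VEV assumes that the whole suppression comes from the $(v/v')^2$ factor, which is only one possible reading of the hypothesis:

```python
# Illustrative arithmetic (ours): the hierarchy the remote Higgs must explain.
import math

m_nu = 0.1         # heaviest neutrino mass scale, eV (quoted in the text)
m_e  = 0.511e6     # electron mass, eV (quoted in the text)
v    = 246.0       # standard Higgs VEV, GeV (standard value, assumed)

ratio = m_e / m_nu                 # ~ 5e6: the mass hierarchy
# If the entire suppression is attributed to the (v/v')^2 factor
# (an assumption; the text does not fix this uniquely):
v_prime = v * math.sqrt(ratio)     # implied remote VEV, GeV
print(f"m_e/m_nu ~ {ratio:.1e}; v' ~ {v_prime/1e3:.0f} TeV")
```

Under this crude reading, $v'$ comes out in the few-hundred-TeV range, comfortably above the electroweak scale.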
In an early paper in 1987[@Hwang], we studied the extra-$Z^{\prime 0}$ extension, paying specific attention to the Higgs sector - since in the minimal Standard Model the standard Higgs doublet $\Phi$ has been used up by $(W^\pm,\,Z^0)$. We worked out the cases of adding one Higgs singlet (the so-called 2+1 Higgs scenario) or adding a Higgs doublet (the 2+2 Higgs scenario). It is in the latter that we could add the neutrino mass term naturally. (See Ref.[@Hwang] for details. Note that the complex conjugate of the second “remote” Higgs doublet there is just the $\varphi$ above.) The new Higgs potential follows the standard Higgs potential, except that the parameters are chosen such that the masses of the new Higgs are much bigger. The coupling between the two Higgs doublets should not be too big, so as not to upset the nice fit[@PDG] of the data to the Standard Model. All this goes with the smallness of the neutrino masses. Note that spontaneous symmetry breaking happens such that the three components of the standard Higgs get absorbed as the longitudinal components of the standard $W^\pm$ and $Z^0$. As a parenthetical note, we could remark on the cancellation of the flavor-changing scalar neutral quark currents. Suppose that we work with two generations of quarks; it is trivial to generalize to the physical case of three. We should write $$\begin{aligned} ({\bar u}_L,\,{\bar d}^\prime_L)d^\prime_R\Phi + c.c.;\nonumber\\ ({\bar c}_L,\,{\bar s}^\prime_L)s^\prime_R\Phi + c.c.;\nonumber\\ ({\bar u}_L,\,{\bar d}^\prime_L)u_R \Phi^*+c.c.;\nonumber\\ ({\bar c}_L,\,{\bar s}^\prime_L)c_R \Phi^*+c.c.,\end{aligned}$$ noting that we use the rotated right-handed down quarks and the complex conjugate of the standard Higgs doublet. This is a way to ensure that the GIM mechanism[@GIM] is complete. With nothing indicating the opposite, it is reasonable to continue to assume the GIM mechanism.
There are additional questions, such as: What about the couplings between quarks (or charged leptons) and the (non-standard) remote Higgs? The “minimum Higgs hypothesis” helps to set these couplings to zero or to be very small. For the new gauge bosons (such as the right-handed $W^\pm,\,Z^0$), their large masses serve as the cut-off. For the family gauge theory, all the couplings between family gauge bosons and Dirac particles, except neutrinos, vanish identically. We have a long way to go in spelling out the details of the mass-generation (Higgs) mechanisms - we should eventually take a decent look at this mechanism rather than simply attributing it to scalar fields. To sum up, in any of the three extended Standard Models - the extra $Z^{\prime 0}$ [@Hwang], the left-right model [@Salam], and the family model [@Family] - there is a clear distinction between the Standard-Model Higgs and the remote Higgs: it is sufficient for quarks and charged leptons to couple to the Standard-Model Higgs (to generate masses), while for neutrinos the tiny masses are generated with the aid of the remote Higgs. Anything that does not belong to the minimal Standard Model could be classified as a “dark-matter particle”. Symmetry missing or new ======================= Because the minimal Standard Model is a gauge theory (and is experimentally proven), we could focus our discussion on the relations of these models with symmetries - with the left-right model [@Salam] and the family model [@Family] as obvious examples. In contrast, the extra-$Z^{\prime 0}$ model is not symmetry-driven and will not be discussed in this section. What we have done so far: We try to rephrase the Standard Model as a world of point-like Dirac particles, or quantized Dirac fields, with interactions. Dirac, in his relativistic construction of the Dirac equation, was enormously successful in describing the electron. (The point-like nature of the electron was realized almost a century later.)
Quarks, carrying other intrinsic degrees of freedom (color), are described by Dirac equations and interact with the electron via gauge fields. We also know muons and tau-ons, the other charged leptons. So, what about neutrinos? Our first guess is that neutrinos are also point-like Dirac particles of some sort (as opposed to Majorana or other Weyl fields). For some reason, point-like Dirac particles come implemented with certain properties - they know the other point-like Dirac particles in our space-time. That is why we call it the “Dirac similarity principle” to begin with. This is a world of point-like Dirac particles interacting among themselves via gauge fields modulated by Higgs fields. For our real minimal-Standard-Model world, we begin with the electron and end up with all three families of quarks and leptons, with gauge fields of strong and electroweak interactions modulated by Higgs fields - a world satisfying the Dirac similarity principle and the minimum Higgs hypothesis. To proceed from there, we treat neutrinos as Dirac particles in accord with the Dirac similarity principle. As a matter of fact, we will work only with renormalizable interactions - counting a spin-1/2 field with power 3/2 and scalar and gauge fields with power 1, the total power counting must be no greater than 4. What is surprising is the role of “renormalizability”. We could construct quite a few such extensions of the minimal Standard Model; they are all renormalizable - the present extra $Z^{\prime 0}$, the left-right model (in the minimum sense), and the recently proposed family gauge theory[@Family]; there are more. Apparently, we should not give up our original thinking (such as the principle of renormalizability, in the words of my mentor, the late Professor Henry Primakoff) even though the road seems to have been blocked. Our space-time is “defined” when the so-called “point-like Dirac particles” are “defined”, and vice versa. On the other hand, “point-like Dirac particles” are expressed in terms of “quantized Dirac fields”.
These concepts are “defined” together, rather consistently. We are amazed that a world of point-like Dirac particles, as described by quantum field theory (the mathematical language), turns out to be the physical world around us - that may define the space-time for us. The interactions are mediated by gauge fields modulated slightly by Higgs fields. There may be some new gauge fields, such as the extra $Z^{\prime 0}$, or the missing right-handed partners[@Salam], or the family gauge symmetry[@Family], or others. There are two related remarks. The first remark relates to the $SU_L(2)\times SU_R(2) \times U(1)$ model[@Salam]. The second relates to the family (gauge) symmetry[@Family]. In the $SU_L(2)\times SU_R(2) \times U(1)$ model[@Salam], suppose that the left and right sectors each have one Higgs doublet (minimal); then we could try to generate the tiny neutrino masses in the right-handed sector - the “remote” Higgs in the entire construction. Here we employ the so-called “minimum” working hypothesis. This seems rather natural. We should take the left-right symmetric model very seriously - except that we should think of the Higgs mechanism in a truly minimal fashion, given that we have been looking for the Higgs for about forty years. Thus, we have one left-handed Higgs doublet and another right-handed (remote) Higgs doublet - due to spontaneous symmetry breaking (SSB), only two neutral Higgs particles are left. That means that we are advocating a particular kind of left-right symmetric model. Regarding the family (gauge) symmetry, it is difficult to think of the underlying reasons why there are three generations (of quarks and leptons). But why? This is Raby’s question; should we stop asking just because the same question has gone without an answer for decades? This is why we promote the family symmetry to the family gauge symmetry[@Family].
In both cases (left-right and family), the proposed Dirac similarity principle and the “minimum Higgs hypothesis” both may hold - an interesting and strange fact. In both of the above cases, this implies that, at temperatures somewhat higher than $1\,TeV$, there would be another phase transition - for the spontaneous symmetry breaking. If most of the products of the phase transition were to remain as dark matter, we would have the most natural candidates for the large amount (25%) of dark matter. In other words, those particles so far unseen, owing to their weak interactions with ordinary matter, can be classified as “dark-matter particles” in the extra-$Z^{\prime 0}$ model, in the left-right model, or in the family gauge symmetry model. The other interesting aspect is that the left-right symmetry is a missing symmetry, while the family gauge symmetry is a symmetry which we have found but suspect is only partially seen so far. It is difficult to speculate whether the missing left-right symmetry or the family gauge symmetry would be found first. But it seems that the extra-$Z^{\prime 0}$ picture has the least chance of winning this game - no symmetry reason whatsoever. In mathematics, it turns out that they are all self-consistent, some even “beautiful”. In physics, 25% dark matter compared to only 5% ordinary matter leaves room for our imagination - even along the line of the extended Standard Model. The time span of $1\, Gyr$ – the age of our early Universe ========================================================== Suppose that the spiral of the Milky Way is caused by a dark-matter aggregate of four or five times the mass of the Milky Way, and similarly for other spiral galaxies. This aspect will serve as a “basic fact” for our analysis in this section. Let’s look at our ordinary-matter world.
Quarks can aggregate in no time into hadrons, including nuclei, and the electrons serve to neutralize the charges, also in no time. Then come atoms, molecules, complex molecules, and so on. These serve as the seeds for the clusters, and then stars, and then galaxies, maybe in a time span of $1\, Gyr$. The aggregation caused by strong and electromagnetic forces is fast enough to give rise to galaxies in a time span of $1\, Gyr$. On the other hand, the weak interactions proceed fairly slowly on this time scale, and they could not be responsible for the galactic formation process as a whole. Meanwhile, the seeded clusterings might proceed with an abundance of extra-heavy dark-matter particles such as familons and family Higgs, all heavier than, e.g., $1\, TeV$. They belong to the dark-matter world, so they don’t interact via strong or electromagnetic interactions (not directly, only indirectly through loops). The first part of the proof states that the ordinary-matter world and the dark-matter world are jointly described by the extended Standard Model - proven by giving three examples: the extra-$Z^{\prime 0}$ model, the left-right symmetric model, and the family $SU_f(3)$ gauge model, all renormalizable and obeying the “Dirac similarity principle” and the “minimum Higgs hypothesis”. In other words, one final “extended Standard Model” would exist, to complete the saga of the “Standard Model” for our space-time. Our Universe is after all consistent. The statement that all ordinary-matter particles and dark-matter particles are described by the extended Standard Model is important, but maybe not sufficient for the clustering, and in particular for the clustering in a time span of, say, $1\, Gyr$. In the minimal left-right symmetric model, the symmetry breaking for the right sector happened much earlier, at temperatures greater than tens of $TeV$ (at least, maybe). Are there any remnants to begin manufacturing the clusters?
The issue is that all these are right-handed weak interactions, manufactured at very high temperature ($\gg 10 \,TeV$). The time span is $10^{-15}\, sec$ or shorter. For the ordinary left-handed weak interactions, the breaking happens at $T\approx 0.3\, TeV$, but there is no trace of clustering effects (in a time span of $1\,Gyr$). We would conclude that the clustering effects relevant in a time span of $1\, Gyr$ (the age of the early Universe) could not come from the manufacture of the right-handed sector. On the other hand, it may be easy to analyze the minimal $SU_f(3)$ model - the strong $SU_c(3)$ does give the aggregation effect that eventually gives rise to galaxies, etc. So, $SU_f(3)$ is just another $SU(3)$, acting exclusively on the dark sector - the related stuff serves as the seeds for the dark-matter clusters and even galaxies - provided that the $SU_f(3)$ coupling is normal. In fact, there are different scenarios; for example, the left-right symmetric model may be recovered at $T\approx 3\, TeV$ (ten times the VEV at the electroweak scale, maybe too low) and the family $SU_f(3)$ assumed to be recovered at $T\approx 100\, TeV$. Of course, the two models could be recovered in a different ordering. Here we assume that the extra-$Z^{\prime 0}$ model, the model that is not symmetry-driven, is less favored. In all cases, the seeded clustering effects, which may be relevant in a time span of $1\,Gyr$, may come from the manufacture of the $SU_f(3)$ dark-matter sector, if the $SU_f(3)$ coupling is normal (and strong); the extra-$Z^{\prime 0}$ model and the left-right symmetric model seem to have nothing to do with the seeded clustering effects. Are there dark-matter galaxies? =============================== In this note, we investigate the clustering of dark-matter particles, including neutrinos, by proposing to use the “extended” Standard Model to describe dark-matter particles.
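The $10^{-15}\, sec$ figure quoted above for the $T \gg 10\,TeV$ epoch can be cross-checked against the standard radiation-dominated time-temperature relation, $t \approx 2.4\, g_*^{-1/2} (T/1\,MeV)^{-2}\,$s. The check below is our own; $g_* \approx 100$ is an assumed number of relativistic degrees of freedom, not a figure from this paper:

```python
# Cross-check (ours) of the ~1e-15 s time span at T ~ 10 TeV, using the
# standard radiation-era relation t ~ 2.4 * g*^(-1/2) * (T/MeV)^(-2) seconds.
import math

g_star = 100.0            # assumed relativistic degrees of freedom
T_MeV  = 10e6             # 10 TeV expressed in MeV
t = 2.4 / math.sqrt(g_star) * T_MeV**-2
print(f"t ~ {t:.1e} s")   # of order 1e-15 s, consistent with the text
```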
We extend the minimal Standard Model using the Dirac similarity principle and the minimum Higgs hypothesis - the experience of half a century. We do have a couple of very nice extended Standard Models, inside which the clusterings would happen at different stages (judging from the scales involved). Using the extended Standard Model, we proceeded to look into the “seeded” clusterings, which may be relevant in a time span of about $1\,Gyr$, the life of the early Universe. The seeds might be heavy dark-matter particles such as familons or family Higgs. In that case, dark-matter galaxies might be formed before ordinary-matter galaxies. If the dark-matter galaxies exist and play host to the ordinary-matter galaxies, the dark-matter hosts get formed first. This picture of our Universe is completely different from the conventional thinking, but it makes more sense in terms of the 25% dark matter versus the 5% visible ordinary matter. Apparently, the dark-matter galaxies, judging from the tiny neutrino masses and the feeble interactions, would be much bigger, maybe by a couple of orders of magnitude (in length), than the visible ordinary-matter galaxies that they host. Acknowledgments {#acknowledgments .unnumbered} =============== This research is supported in part by a National Science Council project (NSC 99-2112-M-002-009-MY3). [99]{} W-Y. P. Hwang, arXiv:1107.0156v1 (hep-ph, 1 Jul 2011), plenary talk given at the 10th International Conference on Low Energy Antiproton Physics (Vancouver, Canada, April 27 - May 1, 2011), to be published. For a detailed description of the Dirac equation (especially the coupling of Dirac spin and angular momentum), see Ta-You Wu and W-Y. Pauchy Hwang, [*Relativistic Quantum Mechanics and Quantum Mechanics*]{} (World Scientific, Singapore, 1991). W-Y. P. Hwang, arXiv:1009.1954v2 (hep-ph, 27 May 2011); W-Y. P. Hwang, Phys. Rev. [**D36**]{}, 261 (1987); see the second paper for earlier references. J.C. Pati and A. Salam, Phys. Rev.
[**D10**]{}, 275 (1974); R.N. Mohapatra and J.C. Pati, Phys. Rev. [**D11**]{}, 566 (1975); [**D11**]{}, 2559 (1975). W-Y. Pauchy Hwang, Nucl. Phys. [**A844**]{}, 40c (2010); W-Y. Pauchy Hwang, International J. Mod. Phys. [**A24**]{}, 3366 (2009); the idea first appeared in hep-ph, arXiv: 0808.2091; talk presented at 2008 CosPA Symposium (Pohang, Korea, October 2008), Intern. J. Mod. Phys. Conf. Series [**1**]{}, 5 (2011); plenary talk at the 3rd International Meeting on Frontiers of Physics, 12-16 January 2009, Kuala Lumpur, Malaysia, published in American Institute of Physics 978-0-7354-0687-2/09, pp. 25-30 (2009). Particle Data Group, “Review of Particle Physics”, J. Phys. G: Nucl. Part. Phys. [**37**]{}, 1 (2010). S.L. Glashow, J. Iliopoulos, and L. Maiani, Phys. Rev. [**D2**]{}, 1285 (1970). [^1]: Corresponding author; Email: [email protected]; arXiv:xxxx (hep-ph, to be submitted)
--- abstract: 'We examine the X-ray spectra and variability of the sample of X-ray sources with $L_{\rm X} \approx 10^{31} - 10^{33}$ [erg s$^{-1}$]{} identified within the inner $9'$ of the Galaxy by @mun03. Very few of the sources exhibit intra-day or inter-month variations. We find that the spectra of the point sources near the Galactic center are very hard between 2–8 keV, even after accounting for absorption. When modeled as power laws the median photon index is $\Gamma = 0.7$, while when modeled as thermal plasma we can only obtain lower limits to the temperature of $kT > 8$ keV. The combined spectrum of the point sources is similarly hard, with a photon index of $\Gamma = 0.8$. Strong line emission is observed from low-ionization, He-like, and H-like Fe, both in the average spectra and in the brightest individual sources. The line ratios of the highly-ionized Fe in the average spectra are consistent with emission from a plasma in thermal equilibrium. This line emission is observed whether average spectra are examined as a function of the count rate from the source, or as a function of the hardness ratios of individual sources. This suggests that the hardness of the spectra may in fact be due to local absorption that partially-covers the X-ray emitting regions in the Galactic center systems. We suggest that most of these sources are intermediate polars, which (1) often exhibit hard spectra with prominent Fe lines, (2) rarely exhibit either flares on short time scales or changes in their mean X-ray flux on long time scales, and (3) are the most numerous hard X-ray sources with comparable luminosities in the Galaxy.' author: - 'M. P. Muno, J. S. Arabadjis, F. K. Baganoff, M. W. Bautz, W. N. Brandt, P. S. Broos, E. D. Feigelson, G. P. Garmire, M. R. Morris, and G. R. 
Ricker' nocite: - '[@mgo85]' - '[@lzm02]' - '[@byk02; @byk03]' title: 'The Spectra and Variability of X-ray Sources in a Deep [[*Chandra*]{}]{} Observation of the Galactic Center' --- Introduction\[sec:intro\] ========================= Recent deep [[*Chandra*]{}]{} observations of the inner $9'$ around the super-massive black hole at the Galactic center have revealed over 2000 individual point-like X-ray sources [@mun03]. The sources have luminosities between $10^{31}$ and $10^{33}$ [erg s$^{-1}$]{} [for a distance of 8 kpc to the Galactic center; see @mcn00], and thus they probably represent some combination of young stellar objects, Wolf-Rayet and early O stars, interacting binaries (RS CVns), cataclysmic variables (CVs), young pulsars, and black holes and neutron stars accreting from binary companions (low- and high-mass X-ray binaries; LMXBs and HMXBs). However, the spectra of the Galactic center sources are very hard in the energy range of 2–8 keV. Spectra that are similarly hard have only been observed previously from magnetically accreting CVs (mCVs) and HMXB pulsars. Moreover, seven of the hard sources exhibit X-ray modulations with periods between 300 s and 4.5 h, which also suggests that they are magnetized white dwarfs or neutron stars [@mun03c]. These basic observations are a good first step toward determining the natures of the point sources. However, if their natures can be determined conclusively, the large number of sources in the field would make it possible to study two important pieces of astrophysics: (1) the history of star formation at the Galactic center, and (2) the physics of X-ray production in accreting stellar remnants. How stars form at the Galactic center is still a mystery, because the strong tidal forces and milliGauss magnetic fields there should prevent all but the most massive molecular clouds from collapsing. 
Nonetheless, it appears that star formation has occurred recently, because three massive stellar clusters younger than $10^{7}$ years old lie within $\approx 30$ pc of the Galactic center: IRS 16, the Arches, and the Quintuplet [@kra95; @pau01; @fig99]. However, it is still a matter of debate as to whether the star formation is continuous or episodic, and whether it occurs only in localized regions or is relatively uniform throughout the Galactic center. @fig03 addressed this question by modeling the evolution of the population of luminous infrared stars, and concluded that the star formation is probably continuous. The X-ray sources at the Galactic center could provide an additional, independent constraint on the star formation history there, because they should be dominated by accreting stellar remnants. The size of the sample of X-ray sources — an order of magnitude larger than the numbers of known LMXBs, HMXBs, and magnetic CVs — also makes it a valuable database for studying the physics of X-ray production. Several outstanding questions could be addressed with the current data. If the sample contains large numbers of magnetic CVs, it could be used to determine the duty cycle of bright accretion states [e.g., @gs88] and the fraction of such systems that exhibit hard spectral components [e.g. @ram03]. If there is a significant number of neutron star HMXBs, it may be possible to determine whether material accreted at rates far below the Eddington limit can penetrate the neutron star’s magnetosphere and reach its surface [@neg00; @cam02; @orl03]. Finally, the large sample of Galactic center X-ray sources would be useful for identifying systems with unusual properties. 
Previous hard X-ray surveys of the Galactic plane have identified several slowly-rotating accreting neutron stars and white dwarfs [@kin98; @tor99; @oos99; @sak00; @sug00], magnetic CVs with extremely strong emission lines from He-like Fe [@mis96; @ish98; @ter99], and accreting stellar remnants with high intrinsic absorption [@pat03; @mg03; @wal03]. These systems could represent resting points for stellar remnants that have not been observed previously, and are therefore important for calculating the formation rate of such remnants in the Galaxy. In this paper, we take a further step toward the above goals by using the properties of the X-ray emission from the point sources near the Galactic center [@mun03] to constrain better their natures. In Sections 2.1–2.3, we examine the spectra of the point sources both individually and averaged together, in order to determine the temperatures of the emitting regions. In Section 2.4, we search for short-term variability, which is often seen from coronal X-ray sources, and long-term variability, which is common in some accreting X-ray sources. In Section 3, we compare the properties of the observed sources with those of known classes of X-ray source. Finally, in Section 4, we briefly explore the future prospects for definitively identifying the natures of these sources. Observations and Data Analysis\[sec:obs\] ========================================= Twelve separate pointings toward the Galactic center have been carried out using the Advanced CCD Imaging Spectrometer imaging array (ACIS-I) aboard the [*Chandra X-ray Observatory*]{} [@wei02] in order to monitor  (Table \[tab:obs\]). The ACIS-I is a set of four 1024-by-1024 pixel CCDs, covering a field of view of $17'$ by $17'$. When placed on-axis at the focal plane of the grazing-incidence X-ray mirrors, the imaging resolution is determined primarily by the pixel size of the CCDs, $0.492''$. 
The CCDs also measure the energies of incident photons within a calibrated energy band of 0.5–8 keV, with a resolution of 50–300 eV (depending on photon energy and distance from the read-out node). The CCD frames are read out every 3.2 s, which provides the nominal time resolution of the data. The methods we used to create a combined image of the field, to identify point sources, and to compute the photometry for each source are described in @mun03 and @tow03. In brief, for each observation we corrected the pulse heights of the events for position-dependent charge-transfer inefficiency [@tow02b], excluded events that did not pass the standard ASCA grade filters and [[*Chandra*]{}]{} X-ray center (CXC) good-time filters, and removed intervals during which the background rate flares to $\ge 3\sigma$ above the mean level. The final total live time was 626 ks. In order to produce a single composite image, we then applied a correction to the absolute astrometry of each pointing using three Tycho sources detected strongly in each [[*Chandra*]{}]{} observation [compare @bag03], and re-projected the sky coordinates of each event to the tangent plane at the radio position of . The image (excluding the first half of ObsID 1561, during which the $10^{-10}$ [erg cm$^{-2}$ s$^{-1}$]{} transient  was observed; see Muno et al. 2003c) was searched for point sources using [@fre02] in three energy bands: 0.5–8 keV, 0.5–1.5 keV, and 4–8 keV. We used a significance threshold of $10^{-7}$, which corresponds to the chance probability of detecting a spurious source within a beam defined by the point spread function (PSF). We detected a total of 2357 X-ray point sources. Of these, 281 were detected in the soft band (124 exclusively in the soft band), and so are located in the foreground of the Galactic center. 
The remaining sources, of which 1792 were detected in the full band, and 1832 in the hard band (441 exclusively in the hard band) are most likely located near or beyond the Galactic center. We computed photometry for each source in the 0.5–8.0 keV band using the routine from the Tools for X-ray Analysis (TARA).[^1] We extracted event lists for each source for each observation, using a polygonal region generally chosen to match the contour of 90% encircled energy from the PSF, although smaller regions were used if two sources were nearby in the field. We used a region defined by the PSF for 1.5 keV photons for foreground sources, and a larger extraction area corresponding to the PSF for 4.5 keV photons for Galactic center sources. A background event list was extracted for each source from a circular region centered on the point source, excluding from the event list ([*i*]{}) counts in circles circumscribing the 95% contour of the PSF around any point sources and ([*ii*]{}) the bright, filamentary structures noted by @par04. The background region was unique for each observation. It was chosen to include a fraction of $\approx 1200$ total counts, where the number of counts from each observation was scaled to the fraction of the total exposure time. The photometry for the complete sample of sources is listed in the electronic version of Table 3 from @mun03. We then extracted spectra and background estimates for each of the sources from the same regions from which we computed the photometry. We summed the source and background spectra from all 12 observations. Then, we grouped the source spectra between 0.5–8.0 keV so that each spectral bin contained at least 20 total counts. Next, we computed the effective area function at the position of each source for each observation. 
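The minimum-counts grouping described above amounts to a greedy merge of adjacent spectral channels. A minimal sketch follows; the function name and channel layout are illustrative, not the actual grouping tool used by the authors:

```python
def group_min_counts(counts, min_counts=20):
    """Greedily merge adjacent spectral channels until each output
    bin holds at least `min_counts` total counts."""
    bins, acc, start = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            bins.append((start, i, acc))  # (first chan, last chan, counts)
            acc, start = 0, i + 1
    if acc > 0 and bins:
        # fold a deficient trailing group into the last full bin
        s, _, prev = bins.pop()
        bins.append((s, len(counts) - 1, prev + acc))
    return bins

# toy channel counts: every output bin ends up with >= 20 counts
groups = group_min_counts([5, 7, 9, 12, 30, 2, 3, 1])
```

Real grouping tools also track channel-to-energy mappings and quality flags; this only captures the binning rule stated in the text.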
This was corrected to account for the fraction of the PSF enclosed by the extraction region and for the time-varying hydrocarbon build-up on the detectors.[^2] We estimated the detector response for each source in each observation using position-dependent response files that accounted for the corrections we made to undo partially the charge-transfer inefficiency [@tow02a]. Finally, to create composite functions for the full data set, we averaged both the response and effective area functions, weighted by the number of counts detected from each source in each observation. Four example spectra are displayed in Figure \[fig:spec\].[^3] We have confirmed that the spectra of the point sources were not contaminated by the diffuse X-ray emission in the field by repeating the analysis above for a subset of sources using an extraction region that enclosed only 50% of the PSF at 4.5 keV. The spectra were indistinguishable for the larger and smaller extraction regions, which confirms that we have successfully removed the background emission from the point-source spectra. Spectra of Individual Sources ----------------------------- We modeled the X-ray spectra of those sources with at least 80 net counts, which provided four or more independent spectral bins. To provide a rough characterization of the spectrum, we used either a power-law or thermal plasma continuum absorbed at low energies by gas and dust. To model the thermal plasma, we used in [@mew86]. We assumed that the elemental abundances were 0.5 solar, which is consistent with the values derived from the average spectra of the point sources (see Section \[sec:av:vapec\]), as well as the Fe abundances often observed from CVs [e.g., @do97; @fi97; @ish97].[^4] We accounted for gas absorption using the model [phabs]{}, and the dust scattering using a modified version of the model [dust]{} in which we removed the assumption that the dust was optically thin. 
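The counts-weighted averaging of the per-observation response and effective-area functions can be sketched channel by channel; a minimal illustration (the curve representation is an assumption, real response files are matrices with calibration metadata):

```python
def counts_weighted_average(curves, counts):
    """Average per-observation response/effective-area curves,
    weighting each observation by the counts detected from the source."""
    total = sum(counts)
    return [sum(w * curve[j] for curve, w in zip(curves, counts)) / total
            for j in range(len(curves[0]))]

# two toy 2-channel curves; the second observation detected 3x the counts
avg = counts_weighted_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```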
The column depth of dust was set to $\tau = 0.485 \cdot N_{\rm H}/(10^{22} {\rm cm}^{-2})$, and the halo size to 100 times the PSF size [@bag03]. In Table \[tab:indiv\] we list the parameters of the best-fit spectral models: the column densities $N_{\rm H}$, either the power-law slope $\Gamma$ or the temperature $kT$, the observed and de-absorbed 2–8 keV fluxes, and the reduced $\chi^2$. The uncertainties are 90% confidence intervals ($\Delta \chi^2 = 2.71$). We also indicate sources whose spectra should be viewed with caution; these fall into three categories: confused sources whose PSF radii overlap those of a nearby source by more than 25%, sources that were near chip edges, and sources with variability (Section \[sec:var\]). About 25% of the sources are flagged in this manner. We consider a spectrum to be adequately reproduced by a model if the chance probability of obtaining the derived value of $\chi^2$ is greater than 5%. Of the 566 sources that we modeled in this manner, the spectra of 470 could be modeled with an absorbed power law, and 469 could be modeled with an absorbed, collisionally-ionized plasma. Both spectral models were consistent with the data for 440 sources, because of limited statistics and the small bandpass over which photons are detected (for most sources, Galactic absorption prevents photons $< 2$ keV from reaching the detector, while the effective area of the ACIS-I is small above 8 keV). Only 30 sources could be modeled with a power law but not a thermal plasma, while 29 sources could be modeled by a thermal plasma but not a power law. Finally, 67 spectra deviated from both continuum models at the 95% level. About 25 of these resemble statistical fluctuations, 30 are bright sources that appear to require multiple continuum components, and 12 exhibited strong iron emission between 6 and 7 keV. For individual sources, the uncertainties on the spectral parameters are rather large. 
However, it is possible to draw some general conclusions about the population of X-ray sources from Table \[tab:indiv\]. In Figure \[fig:dist\], we plot the distributions of the best-fit absorption columns and photon indices or temperatures for all of those sources that were adequately fit with the simple absorbed continuum model. As was mentioned in @mun03, 265 out of 470 of the point sources have power-law spectra with $\Gamma < 1$, even after accounting for the absorption column. However, only 77 sources have 90% confidence limits for which $\Gamma < 1$, and some of the apparent hardness of their spectra is due to line emission from Fe between 6–7 keV (see Section \[sec:fe\]). Not surprisingly, the hard sources also have high best-fit temperatures under the thermal plasma model. However, because of the poor statistics and small bandpass, the temperatures are unconstrained in one-third of the sources, and there are only 21 sources with 90% lower limits on the temperatures that are $>10$ keV. The slopes of the spectra are not correlated with the intensities of the sources (Figure \[fig:gvf\]). The median absorption column toward the sources is $6 \times 10^{22}$ cm$^{-2}$ for a power-law continuum, and $11 \times 10^{22}$ cm$^{-2}$ for a thermal plasma continuum. The median value for the power-law continuum is identical to the column that is inferred from the $K$-band extinction toward  [see @td03 for a summary], while that for a thermal plasma continuum is twice as high. The difference between the median values results from the facts that (1) for a faint, hard source the values of $N_{\rm H}$ and the spectral slope cannot be determined independently, and (2) the thermal plasma continuum can appear no harder than a $\Gamma \approx 1.5$ power law. Therefore, the thermal plasma models produce a higher median value of $N_{\rm H}$. Under either model, 30% of the sources are inferred to have an absorption column of $> 10^{23}$ cm$^{-2}$. 
The one source that appears to be absorbed by more than $10^{24}$ cm$^{-2}$, CXOGC J174539.3–290027, is relatively faint, and the spectrum is poorly constrained. Finally, we note that these spectra have been modeled assuming that all of the X-rays from a given source are absorbed by a single column of material. If we assume instead that a fraction of the X-ray emitting region is absorbed by a higher column (so-called “partial covering” models) an acceptable fit can be obtained for an arbitrary range of continuum shapes, because the bandpass over which we measure the spectrum is limited. Iron Emission\[sec:fe\] ----------------------- Visual inspection of the 67 sources for which the simple continuum failed to reproduce their spectra indicates that $\approx 20$% exhibit residuals between 6–7 keV that may represent line emission from Fe. Therefore, we have performed a uniform search for Fe line emission from the brightest sources with hard spectra. We selected only those sources with more than 160 net counts, because we found that fainter sources were unlikely to provide more than one spectral bin between 6–7 keV. Likewise, we selected only those sources that were best-fit by absorbed power laws with $\Gamma < 5$, because sources with steeper spectra were typically background-dominated above 6 keV. There were 183 sources that met both of these criteria. We modeled each source with an absorbed power law plus a Gaussian line that was fixed at either 6.4 keV to search for low-ionization (“neutral”) Fe emission, or at 6.7 keV to search for He-like Fe. The widths of the lines were fixed at 100 eV, to account for the fact that both the low-ionization and He-like lines are actually a blend of multiple transitions [e.g., @nag94]. 
To evaluate the significance of the added line, we computed a statistic $f$ from the reduction in $\chi^2$ provided by the more complex model: $$f = {{(\chi^2_s - \chi^2_c)} \over {(\nu_s - \nu_c)}} {{\nu_s} \over {\chi^2_s}},$$ where $\chi^2_c$ and $\chi^2_s$ are the values with and without the line, and $\nu_c$ and $\nu_s$ are the numbers of degrees of freedom for the fits ($\nu_s - \nu_c = 1$, here). Unfortunately, because the null result of our more complex model, a line of zero flux, lies on a boundary of the parameter space we are considering (i.e., an emission line with negative flux is not physical, so the hard lower limit to the line flux is 0), $f$ is not distributed according to the $F$-distribution [@pro02]. Therefore, we have simulated the expected distribution of $f$ for each of our sources using the Markov-Chain Monte Carlo technique described in @ara03. This technique allows us to evaluate the chance probability of observing a value of $f$ given (1) the statistical distributions of $N_{\rm H}$, $\Gamma$, and the normalization from the fit, (2) the method used to group the spectral bins, and (3) the spectrum assumed for the background subtraction. Between 100 and 1000 simulations[^5] were computed for each source. We present in Figure \[fig:mcmc\] a comparison of the chance probabilities derived from the theoretical $F$-distribution and from the Monte Carlo simulations. In general, the theoretical $F$-distribution significantly under-estimates the significance of a line (the chance probability is deemed to be too large), although in 10% of the cases the theoretical distribution over-estimates the significance. This demonstrates the necessity of performing simulations in order to estimate the significance of an added line feature. We have listed those sources with significant line emission in Table \[tab:iron\]. 
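The statistic defined above is straightforward to evaluate; a minimal sketch with illustrative fit values:

```python
def f_statistic(chi2_s, nu_s, chi2_c, nu_c):
    """F-like statistic for the improvement in chi^2 when the extra
    parameter (the Gaussian line normalization) is added."""
    return ((chi2_s - chi2_c) / (nu_s - nu_c)) * (nu_s / chi2_s)

# e.g. a fit that drops from chi^2 = 60 (50 dof) to 48 (49 dof)
f = f_statistic(60.0, 50, 48.0, 49)
```

As the text stresses, $f$ is not $F$-distributed here, because the null model (zero line flux) sits on a parameter boundary, so the significance of a given $f$ must come from simulations rather than from the tabulated $F$-distribution.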
We consider a line to be detected if it has less than a 1% chance probability of arising from random variations in the continuum, and if the fit produced $\chi^2_c/\nu_c < 2$. In total, 35% (64 out of 181) of the sources that we examined have significant Fe line emission. We find that 25% of sources exhibit a significant line near 6.7 keV, while 16% of sources exhibit emission at 6.4 keV. There are 15 sources with significant lines at both 6.4 and 6.7 keV, but these tend to be faint sources in which it is not possible to constrain the line energy. We plot histograms of the equivalent widths and the equivalent widths as a function of the flux in Figure \[fig:iron\]. For both species, the equivalent widths range from 200 eV to 5 keV. The median equivalent widths were 370 eV for the 6.4 keV Fe line, and 530 eV for the 6.7 keV Fe line. Six sources have lines with equivalent widths greater than 1 keV that appear upon visual inspection to be real. There are two systems with apparent equivalent widths $> 10$ keV, but these have steep continuum spectra with $\Gamma > 4.5$, and the excess emission between 6–7 keV represents either a hard continuum component or poor background subtraction. When line emission is not detected, the median 1-$\sigma$ upper limits to the line equivalent widths are 220 eV for He-like Fe, and 240 eV for low-ionization Fe. Combined Spectra of Point Sources\[sec:avspec\] ----------------------------------------------- In order to understand the average spectra of the Galactic center point sources, we summed the spectra of sub-groups of the individual point sources. We selected only those sources that were not detected below 1.5 keV with , as these are most likely to lie near the Galactic center. We also excluded sources brighter than 500 net counts, because these sources provided individual spectra of good quality. 
We computed average effective area and response functions by averaging those from the individual sources, weighted by the number of counts from each source. We estimated the average background by extracting a spectrum from the rectangular region that traced the orientation of the ACIS-I detector during the 500 ks series of observations from 2002 May to June.[^6] We excluded from the background spectrum events that fell within circles circumscribing the 95% contour of the PSF around any point sources. In Figure \[fig:psraw\], we display the summed spectra of the Galactic center point sources with fewer than 500 net counts. The spectrum below 2 keV is dominated by the diffuse emission from the Galactic center. In Figure \[fig:psmod\] we display the background-subtracted spectrum. The instrumental Ni line at 7.5 keV is absent from the spectrum, which indicates that the background subtraction was successful. Lines of Si, S, and Ar are also weak or absent in the spectrum of the point sources, in contrast to that of the diffuse emission (dashed line in Figure \[fig:psraw\]). Prominent lines from He-like and H-like Fe are evident at 6.7 and 6.9 keV, while weak, fluorescent K-$\alpha$ emission from low-ionization Fe is evident at 6.4 keV. We modeled this spectrum using two approaches. First, we modeled the emission phenomenologically, using a power-law continuum and Gaussian line emission from Si, S, Ar, Ca, and Fe. The lines we included were chosen by examining whether lines from the strongest transitions from Table 1 in Mewe, Gronenschild, & van den Oord (1985) significantly improved the residuals when comparing the model to the data. For the final model, we placed lines at the energies expected for the He-like $n=2-1$ transitions of Si, S, Ar, Ca, and Fe; the He-like $n=3-1$ transitions of Si and S; the H-like $n=2-1$ transitions of Si, S, Ar, and Fe; and low-ionization Fe K-$\alpha$ at 6.4 keV. 
This model allowed us to measure and compare the equivalent widths of the lines. We also used a model consisting of two thermal plasma components, each of which was absorbed by a separate column of gas. This model is identical to the model used for the diffuse emission by @mun04. We also note that this model is qualitatively similar to the multi-temperature, multi-absorber models typically used to model the accretion shocks in magnetized CVs [e.g. @ram03]. Several assumptions were required for the models to reproduce the data. First, we only applied the model between $1.0-8.0$ keV. Below this energy range the photon counts are dominated by foreground diffuse emission, while above this range the ACIS-I has a small effective area. Second, when modeling individual lines with Gaussians, the widths of the lines from He-like Si, S, and Fe and low-ionization Fe were allowed to be as large as $\approx 70$ eV to account for the fact that the lines are blends of several transitions that cannot be resolved with ACIS. Third, we allowed for a $\lesssim 1$% shift in the energy scale in each spectrum because of uncertainties in gain calibration of our CTI-corrected data. When fitting Gaussians to the He-like transitions and the 6.4 keV line of Fe, the line centroids were varied one-by-one until they achieved best-fit values, and then frozen. When using plasma models, the red-shift parameter was used to change the energy scale in a similar manner. Finally, a 3% systematic uncertainty was added in quadrature to the statistical uncertainty in order to account for uncertainties in the ACIS effective area. To account for absorption in each model we assumed (1) that the entire region was affected by one column of material that represents the average Galactic absorption (modeled with in ) and (2) that a fraction of each region was affected by a second column that represents absorbing material that only partially-covers the X-ray emitting region (modeled with ). 
This partial-covering absorption model produces a low-energy cut-off that is less steep than that which would be produced by a single absorber. The model can roughly account for the fact that both the point sources and absorbing material are distributed along the line of sight. The mathematical form of the model was $$e^{-\sigma(E)N_{\rm H}}([1-f_{\rm pc}] + f_{\rm pc}e^{-\sigma(E)N_{\rm pc,H}}), \label{eq:abs}$$ where $\sigma(E)$ is the energy-dependent absorption cross-section, $N_{\rm H}$ is the absorption column, $N_{\rm pc,H}$ is the partial-covering column, and $f_{\rm pc}$ is the partial-covering fraction. Dust scattering was not included, because when modeling the spectrum its optical depth was degenerate with the partial-covering fraction $f_{\rm pc}$. ### Phenomenological Model Since the natures of these point sources are uncertain, the most straightforward way of modeling their spectrum is with an absorbed power-law continuum and Gaussian line emission. The average spectrum of sources with fewer than 500 net counts is displayed along with the model spectrum in Figure \[fig:psmod\]. The model parameters are listed in the first column of Table \[tab:psint\]. After some initial tests, we found that $f_{\rm pc}$ was poorly constrained, but the best-fit values were near 0.95. We therefore fixed $f_{\rm pc}$ to this value. The remaining parameters were allowed to vary. The total absorption column, $N_{\rm H} + N_{\rm pc,H} \approx 9\times10^{22}$ cm$^{-2}$, is slightly higher than the expected Galactic value. Simulations in indicate that this is probably because we did not include dust scattering, which produces about 30% of the total absorption. The inferred continuum is flat, with photon index $\Gamma = 0.8$, which is similar to the median value from the individual sources, $\Gamma = 0.7$. Finally, the equivalent width of the He-like Fe line is 400 eV, which is similar to that observed from individual bright sources. 
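The transmission factor of Equation \[eq:abs\] is simple to evaluate numerically. The sketch below uses an illustrative power-law toy cross-section rather than the Morrison & McCammon cross-sections built into the real absorption models, and the normalization is an assumption chosen only to give plausible optical depths:

```python
import math

def absorption_factor(E_keV, N_H, N_pc, f_pc, sigma):
    """Partial-covering transmission of Eq. [eq:abs]:
    exp(-sigma*N_H) * ((1 - f_pc) + f_pc * exp(-sigma*N_pc))."""
    s = sigma(E_keV)
    return math.exp(-s * N_H) * ((1.0 - f_pc) + f_pc * math.exp(-s * N_pc))

# toy cross-section ~ E^-2.6 (cm^2 per H atom); normalization is illustrative
sigma_toy = lambda E: 2e-22 * E ** -2.6

# full column 3e22 cm^-2, partial column 6e22 cm^-2, covering fraction 0.95
t_mid = absorption_factor(4.0, 3e22, 6e22, 0.95, sigma_toy)
```

The uncovered fraction $(1-f_{\rm pc})$ leaks through unattenuated by the second column, which is why this model cuts off less steeply at low energies than a single absorber of the same total column.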
The strength of the neutral Fe line is considerably lower than those detected from the individual sources, although this is not surprising since fewer of the individual sources exhibit 6.4 keV lines than do 6.7 keV lines (see Section \[sec:fe\]). We then examined how the average spectra of the point sources varied with flux. We compared the summed spectra of sources with fewer than 80 net counts to those with 80–500 net counts, because these two groups of sources produce nearly the same numbers of net counts ($\approx 7.5 \times 10^{5}$). The best-fit parameters of our phenomenological model of these spectra are listed in the second two columns of Table \[tab:psint\]. Both the absorption column and photon index were slightly larger in the bright sources. However, the continuum shape looks very similar by eye. To highlight this, in Figure \[fig:rat\] we plot the ratio of the averaged spectra of sources with 80–500 net counts to that of sources with $< 80$ net counts. The only differences in the continuum spectra are above 7 keV, which may indicate that the faint sources produce slightly more high-energy flux. Differences in the equivalent widths of the line emission are also evident in Figure \[fig:rat\]. The equivalent width of the Fe XXV He-$\alpha$ emission was 30% higher in the fainter sources (3.7$\sigma$), and the equivalent width of neutral Fe K-$\alpha$ was a factor of 2 lower in the faint sources (4.9$\sigma$). There is also a factor of 3 less S He-$\alpha$ emission in the faint sources (4.3$\sigma$). We also split the data into smaller flux intervals, and found similar results. Thus, the continuum spectra of the point sources change very little with intensity, but the S and Fe lines range over a factor of $\ga 2$ in equivalent width. We also examined the combined spectra as a function of the hard color, which provides a measure of the steepness of the spectrum. 
The hard color is defined as $HR = (h-s)/(h+s)$, where $h$ is the number of counts in a hard band from 4.7–8.0 keV, and $s$ is the number of counts in a softer band from 3.3–4.7 keV. We divided the data using three ranges in hard color, guided by the expected power-law index from simulations with [@mun03]: $HR > 0.2$ for sources with $\Gamma \approx 0$ spectra, $-0.1 < HR < 0.2$ for sources with $\Gamma \approx 1.5$ spectra, and $HR < -0.1$ for sources with $\Gamma > 2$ spectra. The spectra are plotted in Figure \[fig:spechr\], and the best-fit parameters for the phenomenological model are listed in the last three columns of Table \[tab:psint\]. We find that the sources with larger $HR$ have continuum spectra that are intrinsically flatter, at least given our model for the interstellar and intrinsic absorption. We also find significant changes in the line strengths, although there is no monotonic trend with $HR$. ### Two-$kT$ Plasma Model\[sec:av:vapec\] We next modeled the spectra of the point sources as originating from two thermal plasmas, each of which was absorbed by a separate component as parameterized in Equation \[eq:abs\], as well as emission from low-ionization Fe at 6.4 keV. The parameters of the best fit models to the average spectra for the groups of sources used in the previous section are all listed in Table \[tab:twokt\]. The values of $\chi^2_\nu$ are generally larger under the plasma models than under the phenomenological models, because the plasma models predict too little flux above 7 keV. However, the plasma models do provide a good qualitative description of the data. The weak lines from Si, S, and Ar require cool plasmas with $kT_1 \approx 0.5$ keV; this component appears to be visible only in sources with $HR < -0.1$. The abundances of Si and S appear to be significantly below solar values in the soft sources, where the lines are most clearly detected. The He-like and H-like Fe lines require hotter plasmas with $kT_2 \approx 8$ keV. 
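The hard-color binning defined above can be sketched directly from the band counts; function names are illustrative:

```python
def hard_color(h, s):
    """HR = (h - s) / (h + s), with h the 4.7-8.0 keV counts
    and s the 3.3-4.7 keV counts."""
    return (h - s) / (h + s)

def hr_group(hr):
    """Assign a source to the three HR ranges used in the text."""
    if hr > 0.2:
        return "Gamma ~ 0"      # flattest spectra
    if hr < -0.1:
        return "Gamma > 2"      # steepest spectra
    return "Gamma ~ 1.5"

# a source with 90 hard-band and 30 soft-band counts has HR = 0.5
group = hr_group(hard_color(90, 30))
```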
The abundances of Fe are also generally about 50% solar, although they are consistent with the solar ones in the faintest and softest sources. The temperatures of these plasma components are very similar to those inferred from the diffuse emission, because emission lines from the same set of ions are present in both spectra [@mun04]. However, in contrast to the diffuse emission, the cooler plasma in the point sources is heavily absorbed and contributes little to the observed flux. More importantly, the presence of prominent He-like and H-like Fe emission from $kT = 7-9$ keV plasma in all groups of sources suggests that the X-ray emission is produced by similar physical mechanisms. Search for Variability\[sec:var\] --------------------------------- We searched for variability in the 0.5–8.0 keV bandpass from the entire sample of X-ray sources from @mun03, in order to identify flux changes that occurred within single observations, and long-term variations in the mean flux between observations. ### Short-term Variability To search for short-term variability, we applied a Kolmogorov-Smirnov (KS) test to the un-binned arrival times of the events during each observation. Before performing the search, we removed events flagged as potential cosmic ray after-glows. We also excluded data received near the edges of the detector chips, and data from the first part of ObsID 1561 for sources that were within 5.5 of the bright transient  [@mun03b]. If the cumulative distribution of the arrival times differed from a uniform distribution (which would imply a constant flux) with greater than 99.9% confidence in any observation, we considered the source to vary on short time-scales. We find that 18 foreground sources and 21 Galactic center sources are variable on short time scales according to the KS test. We list these sources in Table \[tab:shortvar\]. Examples of short-term variability are shown in the bottom three panels of Figure \[fig:lcurves\]. 
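The KS criterion described above can be sketched in a few lines: rescale the un-binned event arrival times onto the observation interval and test them against a uniform distribution. A minimal illustration (the function and variable names are ours, not from the analysis pipeline):

```python
import numpy as np
from scipy.stats import kstest

def short_term_variable(arrival_times, t_start, t_stop, conf=0.999):
    """Flag a source as variable within one observation if its un-binned
    event arrival times are inconsistent with the uniform distribution
    expected for a constant flux, at > conf confidence."""
    # rescale arrival times onto [0, 1] over the observation
    u = (np.sort(np.asarray(arrival_times, float)) - t_start) / (t_stop - t_start)
    stat, p = kstest(u, 'uniform')   # two-sided KS test against U(0, 1)
    return p < 1.0 - conf, p
```

In the actual analysis the event list would first be filtered for cosmic-ray after-glows and chip-edge events, as described above.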
To characterize the duration and amplitude of the variability, we have applied the “Bayesian Blocks” algorithm of @sca98 [see also @eck04]. The algorithm is based on a parametric maximum-likelihood model of a Poisson process that divides the data into sequential segments, each of which has a constant count rate. The segments were identified by dividing the events into sub-intervals, and computing the odds ratio that the count rate has varied. If variability was found, then each interval was split further into sub-intervals, in order to track the structure of the variability. We found that by using an odds ratio corresponding to a 67% chance that the variation is real, we could identify changes in the flux from all but four of the variable sources. If the Bayesian Blocks code identified only two intervals with differing count rates, then the variability was classified as a “step” function. If it identified more than two intervals with differing count rates, we defined the variability as a flare; a search of the data revealed no instances of dips, which are unlikely to be detected in sources with low count rates. We computed the background-subtracted maximum and minimum count rates for each variable observation, and divided them by the mean value of the effective area function for that source to convert them into photon fluxes. The maximum and minimum fluxes are listed in Table \[tab:shortvar\]. We also list the durations of the flares in kiloseconds, and either the ratio between the maximum and minimum flux, or the lower limit thereto if the baseline flux is an upper limit. The photon fluxes in the table can be converted to energy fluxes by the factors 1 [ph cm$^{-2}$ s$^{-1}$]{}$= 3\times10^{-9}$ [erg cm$^{-2}$ s$^{-1}$]{} (0.5–8.0 keV) for foreground sources, and 1 [ph cm$^{-2}$ s$^{-1}$]{}$= 8\times10^{-9}$ [erg cm$^{-2}$ s$^{-1}$]{} (2–8 keV) for Galactic center sources.
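The recursive segmentation described above can be illustrated with a simplified change-point sketch; this is not Scargle's full Bayesian Blocks algorithm, but a likelihood-ratio recursion in its spirit, with a `min_gain` threshold standing in for the odds-ratio prior (all names and the threshold value are ours):

```python
import numpy as np

def block_loglike(n, t):
    """Poisson log-likelihood of n events in live time t, evaluated at
    the maximum-likelihood rate n / t (zero events -> log-likelihood 0)."""
    return n * np.log(n / t) - n if n > 0 else 0.0

def split_blocks(times, t_start, t_stop, min_gain=3.0, min_events=5):
    """Recursively split [t_start, t_stop] wherever a two-rate model is
    favored over a single constant rate by at least min_gain nats.
    Returns a list of (t_lo, t_hi, count_rate) blocks."""
    times = np.sort(np.asarray(times, float))
    n = len(times)
    base = block_loglike(n, t_stop - t_start)
    best_gain, best_k = 0.0, None
    for k in range(min_events, n - min_events):
        t_cut = times[k]
        gain = (block_loglike(k, t_cut - t_start)
                + block_loglike(n - k, t_stop - t_cut) - base)
        if gain > best_gain:
            best_gain, best_k = gain, k
    if best_k is None or best_gain < min_gain:
        return [(t_start, t_stop, n / (t_stop - t_start))]
    t_cut = times[best_k]
    return (split_blocks(times[:best_k], t_start, t_cut, min_gain, min_events)
            + split_blocks(times[best_k:], t_cut, t_stop, min_gain, min_events))
```

A step function corresponds to two returned blocks; a flare to three or more, with the flare duration read off from the high-rate block.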
The peak luminosities of the variable Galactic center sources typically range from $6\times10^{31}$ to $1.7 \times 10^{33}$ [erg s$^{-1}$]{}. However, one flare from CXOGC J174552.2–290744 consists of 5 photons received in 100 s and has a peak luminosity of $10^{34}$ [erg s$^{-1}$]{}. None of the events in the apparent flare were flagged as cosmic rays by the CXC pipeline, and tests of events from elsewhere on the detector that were flagged as cosmic rays indicate that they deposit their energy on time scales of $\la 20$ s, so we consider the flare real. No source exhibits flares similar to those seen about once a day from , with durations of $\approx 1$ h and $L_{\rm X} \ga 10^{34}$ [erg s$^{-1}$]{}. Figure \[fig:shortterm\] displays histograms of the variability amplitude and the durations of the flares. Nearly half of the variability has a peak flux over 10 times the quiescent level. In the bottom-left panel, we plot the amplitude of the variability as a function of the mean flux from each source. The amplitudes of the variability are not a strong function of the mean flux from the source. However, we are unable to detect the faintest sources when they are in their low-flux states, so for these sources we only can report lower limits to the variability amplitude. The flare durations are spread fairly evenly, with a median duration of about 20 ks. As can be seen from the bottom-right panel of Figure \[fig:shortterm\], the flare durations show no correlation with the mean flux from a source. In order to quantify our sensitivity to short-term variations, we need to examine the probability that a change in count rate could be detected. If we assume a baseline count rate $r_l$ persists for a time $t_l$, and that a flare occurs with count rate $r_h$ lasting $t_h$, then the total number of counts in each interval follows the Poisson distribution. 
Therefore, the joint probability that the measured baseline count rate $N_l/t_l$ is less than the measured flare count rate $N_h/t_h$ is $$\begin{aligned} \nonumber P(N_h > N_l t_h/t_l) = \\ \sum_{N_l = 0}^\infty \left( {{(r_l t_l)^{N_l} e^{- r_l t_l}} \over {N_l!}} \sum_{N_h > N_l t_h/t_l}^\infty {{(r_h t_h)^{N_h} e^{- r_h t_h}} \over {N_h!}} \right). \label{eq:poiss}\end{aligned}$$ This probability represents the chance that a flare of amplitude $r_h/r_l$ would be detected. The median net counts from the sources in the catalog of @mun03 was 49, with a background of 52 counts. These values translate to count rates of $n = 7.8\times10^{-5}$ count s$^{-1}$ net and $b = 8.3 \times 10^{-5}$ count s$^{-1}$ background. If we use $r_l = n + b$ in Equation \[eq:poiss\], a 36 ks flare during the long 150 ks observation could be detected with an amplitude of a factor of $\approx 10$. Such a flare could be detected from half of the sources in our sample. On the other hand, a flare that reaches twice the quiescent flux level for 36 ks could only be detected if the quiescent count rate was $r_l = 2 \times 10^{-3}$ count s$^{-1}$. Only 17 sources are this bright, so such a small-amplitude flare would generally be unobservable. Not surprisingly, all of the short-time-scale variability with amplitudes $< 3$ consists of long-duration, step-like changes in flux. ### Long-term Variability To search for long-term variability, we computed the value of $\chi^2$ for the photon fluxes in each observation under the assumption that the mean rate was constant. We computed the approximate total (source plus background) photon flux by dividing the total number of counts detected by the live time and the mean value of the effective area function. To compute a net flux, we then subtracted a background count rate, which was estimated in the same manner as for the spectrum.
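The short-term sensitivity numbers quoted above follow from Equation \[eq:poiss\], which can be evaluated directly by truncating the outer sum well past the mean $r_l t_l$. A minimal sketch (the function name and truncation rule are ours); note that this gives only the chance that the flare interval shows a higher measured rate, an optimistic proxy for a formal detection:

```python
import math
from scipy.stats import poisson

def p_detect(r_l, r_h, t_l, t_h):
    """Probability (Equation eq:poiss) that the measured flare rate
    N_h / t_h exceeds the measured baseline rate N_l / t_l, for true
    baseline and flare count rates r_l and r_h."""
    mu_l, mu_h = r_l * t_l, r_h * t_h
    n_max = int(mu_l + 10.0 * math.sqrt(mu_l)) + 20   # truncate outer sum
    total = 0.0
    for n_l in range(n_max + 1):
        # for integer N_h, P(N_h > n_l * t_h / t_l) = sf(floor(threshold))
        thresh = math.floor(n_l * t_h / t_l)
        total += poisson.pmf(n_l, mu_l) * poisson.sf(thresh, mu_h)
    return total
```

For the median source ($r_l = n + b$), a factor-of-10 flare lasting 36 ks within the 150 ks observation yields a probability near unity under this proxy.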
We considered a source as variable if the photon fluxes both before and after background subtraction were inconsistent with a constant mean value with more than 99% confidence.[^7] We excluded sources with short-term variability when searching for long-term variability. We also excluded data from the first part of ObsID 1561 for sources that were within 5.5 of the bright transient . Long-term variability is illustrated in the top three panels of Figure \[fig:lcurves\]. We find that 20 foreground sources and 77 Galactic center sources vary on long time scales. We list in Table \[tab:longvar\] the minimum and maximum background-subtracted photon fluxes for the variable sources. We present a histogram of the ratio of the maximum to minimum fluxes in the top panel of Figure \[fig:longterm\]. Most of the ratios are upper limits, because the sources are not detected at their minimum flux levels. The bottom panel illustrates the variability amplitudes as a function of the mean intensity of each source. There is no apparent correlation between the amplitude and intensity of the source. We can quantify our sensitivity to long-term variations in the same manner as for short-term variability, using Equation \[eq:poiss\]. The most extreme form of long-term variability is that of a source that is bright for some portion of the observations lasting a total time $t_h$, and decreases below the background level for the remaining observations lasting a time $t_l$. We therefore assume that the total counts from a source is consistent with the background $b = 8.3 \times 10^{-5}$ count s$^{-1}$ during the time $t_l$ when it is faint. If the source is “off” during one of the 12 ks observations, we could detect this decrease in count rate with 99% confidence if during the remaining 614 ks of observations the source was brighter than $3 \times 10^{-4}$ net count $s^{-1}$. Approximately 9% of the sources are this bright. 
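The $\chi^2$ criterion above reduces to comparing each observation's photon flux with the weighted mean over all observations. A minimal sketch, assuming Gaussian flux uncertainties (the function and variable names are ours):

```python
import numpy as np
from scipy.stats import chi2

def longterm_variable(flux, err, conf=0.99):
    """Test per-observation photon fluxes against a constant mean.
    Returns (is_variable, p_constant), flagging the source if the
    constant-flux hypothesis is rejected at > conf confidence."""
    flux = np.asarray(flux, float)
    err = np.asarray(err, float)
    w = 1.0 / err**2
    mean = np.sum(w * flux) / np.sum(w)          # weighted mean flux
    stat = np.sum(((flux - mean) / err) ** 2)    # chi^2 statistic
    p = chi2.sf(stat, df=len(flux) - 1)          # P(>= stat | constant flux)
    return p < 1.0 - conf, p
```

As described above, this test is applied to the fluxes both before and after background subtraction, and sources already flagged as short-term variables are excluded.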
We could detect variability from a source that is “off” during all but the 500 ks monitoring campaign (2002 May–June) if the count rate at maximum was $8\times 10^{-5}$ count s$^{-1}$, which is valid for half of the sources searched. Discussion\[sec:disc\] ====================== The large number of sources detected toward the Galactic center is most likely a product of the large density of stars there. The 17 by 17 field spans a physical distance of 20 pc in projection from , and therefore probes the inner regions of the Nuclear Bulge that was studied extensively by Launhardt, Zylka, & Mezger (2002). Using their models, we estimate that $1.3\times10^8$ [$M_{\odot}$]{} of stars lie within a cylinder of radius 20 pc and depth 440 pc that is centered on the Galactic center. Thus, our observation encompasses up to 0.1% of the total Galactic stellar mass, which is $\sim 10^{11}$ [$M_{\odot}$]{}. However, we have also found that the surface density of the X-ray sources falls off as $\theta^{-1}$ away from , so it is possible that most of the X-ray sources lie in an isothermal sphere of radius 20 pc [@mun03]. Such a sphere would contain $3\times 10^7$ [$M_{\odot}$]{} of stars, or 0.03% of the Galactic stellar mass. For comparison, the shallower survey carried out by @wgl02 covered the entire Nuclear Bulge (albeit with a factor of 5 less sensitivity), and thus sampled $\sim 1$% of the mass of stars in the Galaxy. The stellar density at the location of the X-ray sources is between 240–900 [$M_{\odot}$]{}pc$^{-3}$ (for a 20 by 440 pc cylinder and a 20 pc sphere, respectively), compared to 0.1 [$M_{\odot}$]{} pc$^{-3}$ in the local stellar neighborhood [@bt94 p. 16]. We will keep these numbers in mind as we consider the likely natures of the Galactic center point sources. 
The sample identified as part of the [[*Chandra*]{}]{} observations of the  field is unique, because the long exposure time allows us to detect faint sources ($F_{\rm X} = (3-100)\times 10^{-15}$ [erg cm$^{-2}$ s$^{-1}$]{}, 2–8 keV), whereas the strong diffuse X-ray emission and the high absorption toward the Galactic center prevent us from observing X-ray sources unless they are prominent in the 4–8 keV band. Prior to [[*Chandra*]{}]{}, the most sensitive hard X-ray survey of the Galactic plane was taken with [[*ASCA*]{}]{}. That survey identified only 163 sources, with a detection limit of $\approx 3\times 10^{-13}$ [erg cm$^{-2}$ s$^{-1}$]{}  [2–10 keV; @sug01]. The Galactic center sources are on average much harder than those detected in the [[*ASCA*]{}]{} survey. The brighter [[*ASCA*]{}]{}  sources had a median photon index of $\Gamma = 2.5$, with only 15% of the sources having $\Gamma < 1$, while the fainter [[*Chandra*]{}]{} sources have a median $\Gamma = 0.7$ (Figure \[fig:dist\]). The difference in hardness of the two samples is probably a selection effect caused by the high absorption and the strong diffuse emission toward the Galactic center. The X-ray sources that we have studied in this paper probably sample only the hardest examples of the population identified with [[*ASCA*]{}]{}. Likewise, [[*Chandra*]{}]{} observations of globular clusters have identified a couple hundred X-ray sources with $L_{\rm X} = 10^{29} - 10^{33}$ [erg s$^{-1}$]{}[@gri01; @pool02; @bec03; @hei03; @hei03b]. These luminosities overlap those inferred for the sources near the Galactic center in our sample. However, only a few of the $\approx 60$ globular cluster sources with spectral information are best modeled by a $\Gamma < 1$ power-law. Most have steeper spectra that are consistent with $\Gamma > 1.5$ power-laws or $kT \approx 1-20$ keV thermal plasmas. 
In addition to the hardness of their spectra, the X-ray sources detected toward the Galactic center share several other interesting properties. Most notable is line emission from low-ionization, He-like, and H-like Fe (Figures \[fig:iron\] and \[fig:psmod\]). On average, the low-ionization Fe lines have equivalent widths of 100–230 eV, while the He-like Fe lines have equivalent widths of 350–450 eV. The strengths of these lines range over a factor of two when considering sources with a range of intensity and spectral hardness (Table \[tab:psint\]). However, in all cases the average ratios of the He-like and H-like lines are consistent with those expected from a thermal plasma of $kT \approx 8$ keV (Table \[tab:twokt\]). The presence of these Fe lines in a large fraction of the sources suggests that they could be dominated by a single population of sources. However, the emission from such a plasma should produce a much steeper continuum spectrum, with $\Gamma \approx 1.5$ instead of $\Gamma \approx 0.7$. Unfortunately, it is not possible to determine unambiguously the physical process producing the X-ray emission from the continuum and iron lines alone. For instance, if the X-ray emitting regions are partially absorbed by material local to the X-ray sources, the observed spectra can be much harder than the intrinsic ones. Alternatively, the line emission could be produced in photo-ionized plasmas, although the large equivalent widths of the lines indicate that the continuum emission exciting them must not be observed directly [e.g. @muk03]. In either of these cases, the intrinsic X-ray luminosity would be significantly higher than is inferred from the flux received with [[*Chandra*]{}]{}. While the average spectral properties provide an overview of the characteristics of the X-ray sources near the Galactic center, it is still important to examine the properties of individual sources to determine how various classes of sources contribute to the population there.
Therefore, in Figure \[fig:indiv\] we display the spectra and intensities of individual sources by plotting the hard color of each source against its photon flux. These quantities are measures of the physical quantities of interest, the intrinsic spectral shape and the luminosity of a system.[^8] We also indicate in Figure \[fig:indiv\] the expected hardness ratios and photon fluxes for sources over a range of luminosities and with either (1) thermal plasmas over a range of temperatures $kT$ ([*solid lines*]{}), or (2) power laws over a range of photon indices $\Gamma$ ([*dotted lines*]{}). In all cases, we have assumed the sources are absorbed by the median column density from the spectral models, $6 \times 10^{22}$ cm$^{-2}$ of gas and dust. Approximately 75% of the 2000 Galactic center X-ray sources are detected with 90% confidence in both the 3.3–4.7 and 4.7–8.0 keV bands, and therefore have hard colors in Figure \[fig:indiv\]. We also indicate in Figure \[fig:indiv\] which sources exhibit line emission from Fe. Nearly all of the sources brighter than $4\times10^{-6}$ [ph cm$^{-2}$ s$^{-1}$]{} and harder than $HR = 0$ exhibit line emission from He-like Fe. This is not surprising, given the prominence of line emission in the average spectra. Fe emission is detected less often in fainter sources, but this is probably due to lower signal-to-noise. Finally, we indicate which sources exhibit variability. The sources that are identified as variable tend to be brighter, because the signal-to-noise is better. Soft sources are most likely to exhibit short-term variability. In the following sections, we use these properties to guide our discussion of the natures of the sources.
The luminosities of the Galactic center sources are consistent with those of young stellar objects (YSOs), interacting binaries (RS CVns), Wolf-Rayet (WR) and early O stars, cataclysmic variables, quiescent black hole and neutron star X-ray binaries, and possibly the ejecta of recent supernovae that are interacting with molecular clouds. We consider each in turn. Sources with Active Stellar Coronae ----------------------------------- Many stars produce X-rays in their magnetic coronae. In particular, K and M dwarfs are so numerous that they contribute significantly to heating the ISM [e.g., @schl02]. However, individually their X-ray emission is faint, with $L_{\rm X} < 10^{29}$ [erg s$^{-1}$]{}, and cool, with $kT < 1$ keV [e.g., @kri01]. Although most of the foreground sources are probably low-mass main sequence stars (P. Zhao, in preparation), few of the Galactic center sources should be. YSOs and RS CVns are significantly brighter, with $L_{\rm X} \approx 10^{29}$ to $10^{32}$ [erg s$^{-1}$]{}[e.g., @fei02; @dem93a], and are therefore more likely to be seen at the Galactic center. ### Young Stellar Objects The number of YSOs at the Galactic center will depend upon whether low-mass stars have formed there recently. For instance, if star formation proceeds at the Galactic center in a similar manner as it has in the Orion nebula, then the two-dozen massive, emission-line stars in the central parsec of the Galaxy could conceivably be accompanied by tens of thousands of low-mass YSOs [e.g., @fei02]. However, the strong tidal forces, milliGauss magnetic fields, and turbulent molecular clouds near the Galactic center may prevent low-mass stars from forming there [@mor93]. YSOs have luminosities between $10^{29} - 10^{31.7}$ [erg s$^{-1}$]{}, and spectra that can be described by thermal plasma emission with $kT = 1-10$ keV [e.g., @pz02; @koh02; @fei02].
Therefore, any YSOs that are located near the Galactic center should be found in the bottom left of Figure \[fig:indiv\], with $HR < 0$ and fluxes $<2 \times 10^{-7}$ [ph cm$^{-2}$ s$^{-1}$]{}. However, the detection threshold for sources with $HR < 0$ is approximately $10^{31.5}$ [erg s$^{-1}$]{}, and only $\sim 0.4$% of YSOs are brighter than this limit [@fei02]. YSOs also commonly exhibit flares lasting several hours, but fewer than 0.1% exhibit flares brighter than $10^{32}$ [erg s$^{-1}$]{} [@gro04; @fei04]. In contrast, the faintest genuine flare from a Galactic center source has a peak luminosity of $5\times10^{32}$ [erg s$^{-1}$]{}, whereas only three sources with short-term variability have peak luminosities below $10^{32}$ [erg s$^{-1}$]{}. Therefore, we believe that even the flaring sources are unlikely to be YSOs, and that any population of YSOs remain largely undetected at the Galactic center. ### Interacting Binaries RS CVn systems are among the most numerous hard X-ray sources with $L_{\rm X} > 10^{29}$ [erg s$^{-1}$]{}, with a local space density of $\approx 5 \times 10^{-5}$ pc$^{-3}$ [@fms95]. Using the models of @lzm02 to scale the local number density to the stellar density at the Galactic center (Section \[sec:disc\]), we estimate that the total number of RS CVns within 20 pc of the Galactic center is $\approx 1.5\times10^{4}$, while the number within a cylinder of 20 pc radius extending the length of the nuclear bulge (440 pc) is $7\times10^{4}$. However, RS CVns would be difficult to detect near the Galactic center. They typically have soft spectra, with $kT \approx 0.1-2$ keV, and luminosities of $L_{\rm X} = 10^{29} - 10^{32}$ [erg s$^{-1}$]{}[e.g., @dem93b; @sdw96]. Therefore, RS CVns would have $HR < -0.3$ and photon flux $< 3\times10^{-7}$ [ph cm$^{-2}$ s$^{-1}$]{}in Figure \[fig:indiv\]. This portion of the figure is sparsely populated. 
Moreover, we are only sensitive to sources with $kT \la 2$ keV if they are more luminous than $5\times10^{32}$ [erg s$^{-1}$]{}, whereas only $\sim 2\%$ of RS CVns are this luminous [@dem93a]. Finally, although RS CVns do exhibit flares lasting several hours with amplitudes of up to a factor of ten, they are seldom more luminous than $\approx 10^{32.5}$ [erg s$^{-1}$]{} [e.g., @tsu89; @fpt01; @fpt03]. Therefore, these flares would not be observable in our Galactic center data. The difficulty of detecting RS CVns and the lack of good candidate objects indicate that we have probably identified only a tiny fraction of the RS CVns at the Galactic center. Winds from Massive Stars ------------------------ There is currently significant debate about the origin of the X-ray emission from WR and early O stars [e.g., @wc01], but it is generally thought that the X-rays are produced through shocks in their winds [see, e.g., @cg91]. They have luminosities of up to $\approx 10^{33.5}$ [erg s$^{-1}$]{} in isolation, and $\approx 10^{35}$ [erg s$^{-1}$]{} when two such stars are in a colliding-wind binary. Their spectra can usually be modeled as thermal plasma with $kT = 0.1 - 6$ keV [e.g., @pol87; @poz02]. These systems would lie in the portion of the color-intensity diagram with $HR < 0$ in Figure \[fig:indiv\]. The number of these systems present near the Galactic center is unknown, because it is determined by the uncertain star formation history [see, e.g., @mor93]. @fig95 and @cot99 have identified several massive, emission-line stars associated with HII regions near the Galactic center, but none of these has counterparts in our X-ray catalog. The wide-area search that @fig95 conducted failed to turn up additional candidates. Still, the unique conditions at the Galactic center make it important to understand the number of massive stars there, so we suggest that the relatively soft X-ray sources would serve as good targets for future searches for massive stars.
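The population estimates in this discussion (the RS CVn numbers above, and the CV numbers later) all follow the same scaling: divide the local space density by the local stellar mass density of 0.1 [$M_{\odot}$]{} pc$^{-3}$ to obtain a number of systems per unit stellar mass, then multiply by the mass enclosed at the Galactic center. A quick arithmetic check, using the densities quoted in the text:

```python
# Scale a locally measured space density of systems to the Galactic
# center, assuming the number per unit stellar mass is universal.
LOCAL_STELLAR_DENSITY = 0.1   # Msun pc^-3, solar neighborhood
M_SPHERE = 3e7                # Msun within a 20 pc isothermal sphere
M_CYLINDER = 1.3e8            # Msun in the 20 pc x 440 pc cylinder

def n_expected(local_space_density, enclosed_mass):
    """Expected number of systems in a region of given stellar mass."""
    return (local_space_density / LOCAL_STELLAR_DENSITY) * enclosed_mass

# RS CVns (local density ~5e-5 pc^-3): ~1.5e4 in the sphere and ~7e4 in
# the cylinder; CVs (~3e-5 pc^-3): ~9e3 and ~4e4, matching the text.
print(n_expected(5e-5, M_SPHERE), n_expected(5e-5, M_CYLINDER))
print(n_expected(3e-5, M_SPHERE), n_expected(3e-5, M_CYLINDER))
```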
Millisecond Pulsars ------------------- Isolated millisecond pulsars typically produce X-ray emission from particles accelerated as they spin down, with $L_{\rm X} = 10^{28} - 10^{31}$ [erg s$^{-1}$]{}[@pos02]. At these luminosities, millisecond pulsars would be undetectable at the Galactic center. However, @cheng04 have predicted that the wind from a millisecond pulsar could produce $L_{\rm X} = 10^{31} - 10^{33}$ [erg s$^{-1}$]{} by interacting with dense regions of the ISM ($n \ga 100$ cm$^{-3}$). They suggest that $\sim 100$ millisecond pulsars could be present in our field, although the number of detectable systems would depend upon the volume of dense gas at the Galactic center. @mun04 have demonstrated that a large fraction of the inner 20 pc of the Galaxy is filled with hot ($T \sim 10^{8}$ K), low density ($n \approx 0.1$ cm$^{-3}$), X-ray emitting plasma, so only a small fraction of isolated millisecond pulsars may be detectable. Moreover, their spectra should be power laws with $\Gamma = 1.5-2.5$, which corresponds to $-0.2 < HR < 0.0$ in Figure \[fig:indiv\]. This places millisecond pulsars on the same portion of the hardness-intensity diagram as CVs and RS CVns. As discussed in @cheng04, identifying candidate systems among the point sources would be difficult, but millisecond pulsars could account for extended features seen in the field [@mor03]. Accreting Sources ----------------- ### Low-Mass X-ray Binaries Neutron stars and black holes accreting from low-mass companions that over-fill their Roche lobes are typically identified in outburst with $L_{\rm X} > 10^{36}$ [erg s$^{-1}$]{}, although the majority of their time is spent in quiescence with $L_{\rm X} < 10^{34}$ [erg s$^{-1}$]{}. LMXBs have been observed extensively in quiescence. 
The spectra of quiescent neutron star systems have been described with a $kT \approx 0.3$ keV black body producing $L_{\rm X} \sim 10^{32}$ [erg s$^{-1}$]{}, plus a $\Gamma \approx 1-2$ power-law tail that contributes $L_{\rm X} \sim 10^{31}$ [erg s$^{-1}$]{}[@asa98; @kon02]. The black hole systems have $L_{\rm X} \lesssim 10^{31}$ [erg s$^{-1}$]{}  and exhibit $\Gamma \approx 1-2$ power-law spectra [@rut01; @wij02; @cam02]. The thermal emission from a neutron star would be unobservable behind $6\times10^{22}$ cm$^{-2}$ of absorption, so is of little relevance to the current observations. The power-law components of both the neutron star and black hole systems would produce $-0.1 < HR < 0.2$. However, LMXBs are rare — theoretical models predict that $\sim 10^4$ should currently be in quiescence in the entire Galaxy, whereas only $\sim 100$ LMXBs, or 1%, have been identified [compare @itf97; @bt04]. Thus, if LMXBs form at the Galactic center in a similar manner as in the disk, our observation should encompass $\sim 20$ of them [@bt04]. Transient outbursts from three LMXBs already have been identified within 10 of the Galactic center [@eyl75; @pgs94; @mae96]. If these truly represent $\sim 20$% of the total number there, then it would appear that LMXBs near the Galactic center are considerably more active than those in the Galactic disk. Alternatively, LMXBs could be concentrated near the Galactic center through dynamical settling [e.g., @mor93; @poz03]. In order to better constrain the numbers of LMXBs within the nuclear bulge, it is important to continue to monitor this region in order to search for transient outbursts from additional systems. Prior to the Roche-lobe overflow phase, accretion onto the compact objects should also proceed at low rates from the winds of the low-mass companions [@ble02; @wk03; @bt04]. These pre-LMXBs should have $L_{\rm X} = 10^{28} - 10^{32}$ [erg s$^{-1}$]{}, and would probably resemble Roche-lobe overflow systems in quiescence.
Up to $10^5$ systems could be present in the Galaxy, and $\sim 20-100$ in our image of [@wk03; @bt04]. ### High-Mass X-ray Binaries Neutron stars and black holes accreting from the winds of massive companions should be about as common as LMXBs, because although the massive companions have much shorter lifetimes, accretion can occur when the separations between the binary components are much larger ($\sim 1$ AU compared to $\sim R_\odot$; see Pfahl, Podsiadlowski, & Rappaport 2002). They could be particularly abundant near the Galactic center, because it appears that 10% of Galactic star formation is currently occurring within the nuclear bulge [@lzm02]. Our observations encompass $\sim 5$% of the nuclear bulge, so it would not be unreasonable to assume that, of the $\sim 10^{4}$ HMXBs in the Galaxy, $0.1\cdot0.05\cdot10^{4} \sim 50$ could be present in the field around  [see @pfa02]. In both outburst and quiescence, black hole HMXBs generally resemble LMXBs, because their X-ray emission is produced entirely in the accretion flow. Neutron star HMXBs, on the other hand, usually look much different from their LMXB counterparts, because the neutron stars in the young, high-mass systems tend to be more highly magnetized ($B \ga 10^{12}$ Gauss). Neutron star HMXBs in outburst produce X-rays from shocks that form in the magnetically-channeled column of accreted material. At the location of the shocks, the accretion flow is optically-thick, so the resulting spectra are flat, and can be described with a $\Gamma < 1$ power law between 2–8 keV [e.g. @cam01]. Therefore, neutron star HMXBs should have $HR > 0.1$ in Figure \[fig:indiv\]. HMXBs also sometimes exhibit line emission at 6.4 keV from fluorescent neutral material in the companion’s wind, as well as weaker emission from He-like Fe at 6.7 keV that is produced by photo-ionized plasma in the wind. 
Although large equivalent widths have been reported from low-resolution measurements with gas proportional counters [e.g., @app94], the few measurements of these lines with CCD resolution spectra indicate that they have equivalent widths $\lesssim 100$ eV [e.g., @nag94; @shr99]. Since the strong magnetic fields around the neutron stars in HMXBs channel the accreted material onto the star’s polar caps, the surest way to identify neutron star HMXBs is through periodic modulations in their X-ray emission. We have found that seven hard Galactic center sources in our field exhibit periodic variability [@mun03c]. However, the periods are all $> 300$ s, which makes it impossible to rule out that they are accreting white dwarfs. On the other hand, modulations with shorter periods would be rendered undetectable by Doppler shifts from orbital motion, so the lengths of the periods observed are not necessarily a strong constraint on the entire population of sources at the Galactic center. Other variability is also seen from HMXBs. Short-term (several ks) flares are seen infrequently and are ascribed to instabilities in the accretion flow [e.g., @aug03; @mew03]. Long-term variations are more common and are often caused either by changes in the density of the wind at the location of compact objects that have eccentric orbits around the donor star, or by instabilities in the excretion disks around the Be stars that are the mass donors in half of the known HMXBs [e.g. @app94]. The above considerations suggest that faint, neutron star HMXBs can account for some fraction of the hard Galactic center point sources. The main problem with this hypothesis is that few HMXBs have been observed at $L_{\rm X} < 10^{34}$ [erg s$^{-1}$]{}, and the ones that have can be described with much softer $\Gamma \sim 2$ spectra [e.g., @cam02]. Nonetheless, the physics of X-ray production at low accretion rates is uncertain, so we cannot be certain of what X-ray properties to expect from faint HMXBs. 
### Cataclysmic Variables Cataclysmic variables (CVs) are the most numerous accretion-powered X-ray sources. Their local space density is $\sim 3 \times 10^{-5}$ pc$^{-3}$ [@sch02], so that if we scale their number to the stellar density at the Galactic center, within 20 pc of  we would expect $\sim 9\times10^{3}$ CVs, and within a cylinder centered on the Galactic center that is 20 pc in radius and 440 pc deep we would expect $\sim 4\times10^{4}$. About 50% of CVs are luminous enough to be observed from the Galactic center [@ver97], so they could account for the majority of the X-ray sources detected there. Systems with non-magnetized white dwarfs, which comprise 80% of CVs, have luminosities between $10^{29.5} - 10^{32}$ [erg s$^{-1}$]{}, and spectra that can be described with $kT = 1-25$ keV plasma from an accretion shock [e.g., @ehp91; @ms93; @ver97]. Thus, these systems should have hard colors $HR < 0$ in Figure \[fig:indiv\], and would be located in a similar portion of the color-intensity diagram as RS CVns and YSOs. CVs containing magnetized white dwarfs, which are referred to as polars and intermediate polars depending on whether or not the rotational period of the white dwarf is synchronized to its orbital period, comprise about 20% of all CVs [e.g., @war95 see also the CVcat database[^9]]. Polars have spectra and luminosities similar to those of un-magnetized CVs, with the addition of a $kT \sim 50$ eV “soft excess” that is attributed to “blobs” of accreted material that penetrate deeply into the photosphere [e.g., @ram94; @ver97; @ei99]. The soft component would be unobservable above 2 keV, so polars should also have $HR < 0.1$ in Figure \[fig:indiv\]. Polars also commonly exhibit variations in their average luminosity on time scales of years: $\approx 50$% of the polars surveyed by @ram04 changed in intensity by factors of $\ga 4$ between observations taken with [[*ROSAT*]{}]{} (1990–1999) and [[*XMM-Newton*]{}]{} (2000–present).
Such variations would be detectable from most of the Galactic center sources (Figure \[fig:shortterm\]). Therefore, in the two years spanned by the [[*Chandra*]{}]{} observations of the Galactic center, we would expect $\sim 15\%$ of the polars to exhibit long-term variations. Since only 2% of the sources located at or beyond the Galactic center are variable, at most 20% could be polars. The intermediate polars are typically more luminous than other CVs, with $L_{\rm X} = 10^{31} - 10^{32.6}$ [erg s$^{-1}$]{}, and represent about 5% of the total population [see CVcat; @kub03]. This is thought to be related to the fact that they tend to have longer orbital periods ($> 2$ h), which could result in a higher mass transfer rate; however, the high $\dot{M}$ could also be a selection effect, because if a CV is bright, it is easier to detect modulations in the X-ray and optical emission at the rotational and orbital periods [@war95]. Intermediate polars also typically have much harder spectra than other CVs: when approximated as a power law, the optically thin thermal plasma usually seen from CVs should have $\Gamma \approx 1.5$, whereas the spectra of intermediate polars usually have $\Gamma \approx 0$. This is probably a result of the geometry of the accretion flow, because, as in other CVs, prominent line emission from He-like and H-like Fe indicates that the X-rays are produced either by plasma with $kT \approx 1-20$ keV or by a plasma photo-ionized by continuum X-rays that are not observed directly [e.g., @ei99; @muk03]. In either case, the X-ray emitting regions would have to be partially absorbed by material in the accretion flows, which removes low-energy photons from the spectra, thus making them flatter. Intermediate polars should have $HR > 0.1$ in Figure \[fig:indiv\], which makes them the best candidates among CVs for the hard Galactic center sources. 
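The expectation quoted above, that roughly 15% of polars should show long-term variations over the two years spanned by the [[*Chandra*]{}]{} observations, can be sketched with a constant-rate (Poisson) assumption. The $\sim 9$ yr mean epoch separation between the [[*ROSAT*]{}]{} and [[*XMM-Newton*]{}]{} surveys used here is our own assumption, not a figure from the text:

```python
import math

# ~50% of polars changed intensity between the ROSAT (1990-1999) and
# XMM-Newton (2000-present) epochs; assume a mean baseline of ~9 yr
# (our assumption) and a constant per-year rate of state changes.
baseline_yr = 9.0
frac_varied = 0.5
rate = -math.log(1.0 - frac_varied) / baseline_yr  # ~0.077 per year
p_2yr = 1.0 - math.exp(-rate * 2.0)                # expected over 2 yr
print(f"{p_2yr:.2f}")  # ~0.14, consistent with the ~15% quoted
```

The result is insensitive to the exact baseline: any mean separation between 7 and 11 yr gives a two-year variability fraction in the 12–18% range.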
The detailed spectral properties of intermediate polars are broadly consistent with the average spectra of the point sources in Figure \[fig:psmod\]. Weak emission at 6.4 keV is observed from these systems, and is attributed to X-rays that reflect off of the white dwarf’s surface [e.g., @ms93; @ei99]. Moreover, when the spectra of intermediate polars are modeled as emission from thermal plasma, the derived Fe abundances are often near or below the solar values [e.g., @do97; @fi97; @ish97]. This is similar to what we infer for the point sources in Table \[tab:twokt\]. Finally, the general lack of variability in the X-ray emission from the Galactic center sources (aside from periodic modulations) is also consistent with the stable emission usually seen from intermediate polars. On long time scales, the optical luminosity of intermediate polars usually remains constant for many decades [e.g., @gs88]; because the optical and X-ray fluxes are correlated in polars, we would expect the X-ray emission from intermediate polars to remain constant as well. Flares lasting several hours, presumably from accretion events, are sometimes observed from magnetic CVs, but appear to be rare and most prominent in the soft X-ray band [$< 2$ keV; e.g., @ps93; @cda99; @sm01]. The predominant short time scale variability in intermediate polars is due to modulations of the emitting regions as the white dwarfs rotate [e.g., @nw89; @sch02; @rc03]. We have detected periodic modulations from seven of the brightest 285 Galactic center sources [@mun03c]. Since we were only sensitive to high-amplitude modulations, it is likely that many sources with low-amplitude modulations went undetected. Therefore, although the faintness of the Galactic center X-ray sources is probably the main cause of the lack of observed short-term variability, it is also plausible that the sources are intrinsically steady X-ray emitters like intermediate polars.
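As a rough consistency check on the CV population estimate used in this section (the cylinder geometry and the quoted totals are from the text; the inferred density-enhancement factor is our own back-of-envelope arithmetic, not a number from the paper):

```python
import math

# Local CV space density (Schwope et al. value quoted in the text),
# scaled through a cylinder 20 pc in radius and 440 pc deep toward
# the Galactic center.
density_local = 3e-5                       # CVs pc^-3
volume = math.pi * 20.0**2 * 440.0         # pc^3, ~5.5e5
n_at_local_density = density_local * volume  # ~17 CVs with no enhancement

# The quoted ~4e4 CVs along this line of sight therefore implies the
# stellar density there exceeds the local value by a factor of order:
enhancement = 4e4 / n_at_local_density     # a few thousand
print(round(volume), round(enhancement))
```

The implied enhancement of a few thousand over the local stellar density is the scaling the text invokes when it normalizes the CV counts to the stellar density at the Galactic center.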
Since the properties of the Galactic center sources change little as a function of their luminosity between $10^{31}$ and $10^{33}$ [erg s$^{-1}$]{} (Figures \[fig:rat\] and \[fig:indiv\]), we believe that the majority of the Galactic center sources are intermediate polars. Intermediate polars comprise 5% of all known CVs [@kub03], so given that there could be $4\times10^4$ CVs within a pencil-beam centered on the Galactic center that is 20 pc in radius and 440 pc deep, they could reasonably account for the 1000 X-ray sources with $HR > 0$.

Supernova Ejecta
----------------

Bykov (2002, 2003) has suggested that the point sources in the Galactic center may not be stellar, but could be iron-rich fragments of supernova explosions that are interacting with molecular clouds. On the order of $10^{3}$ X-ray emitting knots could plausibly be produced by just 3 supernovae occurring within the last 1000 y within 20 pc of the Galactic center; already, Sgr A East [@mae02] and the radio wisp ’E’ [@ho85] are thought to be remnants of recent supernovae. The observational properties of the point sources can be reproduced by choosing several parameters in the ejecta model [@byk03]: the slope of the $\log N - \log S$ distribution of the knots ($\alpha \approx 1.7$) is determined by their sizes and velocities, the slopes of their continuum X-ray emission ($0 < \Gamma < 1.5$) are set by the amplitudes of magneto-hydrodynamic turbulence in the shocks they produce, and the equivalent widths of the Fe emission (up to 1 keV) by their iron abundances. Future observations of known supernova remnants will better constrain the properties of the X-ray emitting knots, which in turn could make it possible to distinguish such knots from the stellar sources in the field.

Unusual Sources
---------------

A handful of the Galactic center sources resemble unusual objects that have been found through shallower [[*ASCA*]{}]{}, [[*BeppoSAX*]{}]{}, [[*XMM-Newton*]{}]{}, and [[*INTEGRAL*]{}]{} surveys of the Galactic plane.
These sources are important, because they could represent stellar remnants that are in short-lived states of accretion. We list the properties of 14 unusual sources from other surveys in Table \[tab:odd\]. The first three are polars that were identified with [[*ASCA*]{}]{} as having unusually strong emission lines from He-like Fe (equivalent widths $> 1$ keV); the fourth [[*XMM-Newton*]{}]{} source has similarly strong Fe emission at 6.7 keV, but its nature is uncertain. We find that 6 out of 183 Galactic center sources searched for Fe emission have 6.7 keV lines with equivalent widths greater than 1 keV, which is similar to the fraction of such sources identified in the [[*ASCA*]{}]{} Galactic plane survey. The next four are highly-absorbed ($N_{\rm H} > 10^{23}$ cm$^{-2}$) sources identified with [[*INTEGRAL*]{}]{} and [[*XMM-Newton*]{}]{}, one of which has strong low-ionization Fe emission with an equivalent width $> 1$ keV. We find that 30% of the Galactic center sources have similarly high absorption, and two systems exhibit 6.4 keV Fe lines with equivalent widths $> 1$ keV (CXOGC J174613.7–290622 and CXOGC J174617.2–285449 in Table \[tab:iron\]). The final five are hard X-ray sources with slow ($> 100$ s), high-amplitude periodic modulations in their X-ray emission. We find seven hard sources near the Galactic center (and one foreground source) with similar periodic X-ray modulations [@mun03c]. These sources would have been difficult to identify with the soft X-ray detectors on [[*ROSAT*]{}]{} (0.1–2.4 keV), which was the last observatory that systematically surveyed the sky for faint X-ray sources. Our study of the Galactic center suggests that they account for a few percent of all faint X-ray sources.
Conclusions
===========

We have established that, on average, the X-ray sources detected in 626 ks of [[*Chandra*]{}]{} ACIS-I observations of the field around Sgr A\* have hard, $\Gamma < 1$ spectra with prominent emission from He-like Fe at 6.7 keV (Figure \[fig:psmod\] and Table \[tab:psint\]). They also generally do not vary by more than factors of a few on time scales of hours or months. The best candidates for these hard X-ray sources are intermediate polars, which represent the most luminous and spectrally hardest 5% of all CVs. Therefore, the Galactic center X-ray sources are likely to be only a sub-sample of a population of $\sim 10^{4}$ CVs located near the Galactic center. Although a single population of sources may dominate the image, there are certainly many classes of objects present in smaller numbers in the field. Determining the numbers of rare objects is particularly important. For instance, the numbers of massive Wolf-Rayet and O stars and faint neutron star high-mass X-ray binaries can constrain the recent rate of massive star formation near the Galactic center, while the numbers of LMXBs provide direct tests of the validity of unusual pathways for binary stellar evolution. For this reason, we are carrying out deep infrared observations of the Galactic center to identify counterparts to the X-ray sources. These observations will be useful for distinguishing CVs from, for example, HMXBs and WR/O stars. At a distance of 8 kpc and with an extinction of $A_K \approx 5$ [@td03], CVs should have $K$ magnitudes of 22–25, and therefore would be among the faintest detectable sources at the Galactic center [@war95; @hoa02]. In contrast, HMXBs and WR/O stars should have $K$ magnitudes brighter than 15 [@zom90; @weg94] and will be very easy to detect. Therefore, the prospects for identifying the natures of the Galactic center X-ray sources are promising.

We thank C. Belczynski, A. Bykov, M. Eracleous, C. Heinke, K. Mukai, F. Paerels, J. Sokoloski, and R.
Taam for helpful discussions about the natures of the Galactic center X-ray sources, and the referee for comments that helped to clarify the text. We are also grateful to M. Nowak for providing us his implementation of the Bayesian Blocks algorithm. MPM was supported by a Hubble Fellowship from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. WNB acknowledges NSF CAREER award AST-9983783. [0]{} Apparao, K. M. V. 1994, [*SSRev*]{}, 69, 255 Arabadjis, J. S., Bautz, M. W., & Arabadjis, G. 2003, submitted to , astro-ph/0305547 Asai, K., Dotani, T., Hoshi, R., Tanaka, Y., Robinson, C. R., & Terada, K. 1998, , 50, 611 Augello, G., Iaria, R., Robba, N. R., Di Salvo, T., Burderi, L., Lavagetto, G., & Stella, L. 2003, , 596, L63 Baganoff, F. K.  2003, , 591, 891 Becker, W., Swartz, D. A., Pavlov, G. G., Elsner, R. F., Grindlay, J., Mignani, R., Tennant, A. F., Backer, D., Pulone, L., Testa, V., & Weisskopf, M. C. 2003, , 594, 798 Belczynski, K. & Taam, R. E. 2004, submitted to , astro-ph/0307492 Binney, J. & Tremaine, S. 1994, [*Galactic Dynamics*]{}, Princeton University Press Bleach, J. N. 2002, , 332, 689 Bykov, A. M. 2002, , 390, 327 Bykov, A. M. 2003, , 410, L5 Campana, S., Gastaldello, F., Stella, L., Israel, G. L., Colpi, M., Pizzolato, F., Orlandini, M., & Dal Fiume, D. 2001, , 561, 924 Campana, S., Stella, L., Gastaldello, F., Mereghetti, S., Colpi, M., Israel, G. L., Burderi, L., Di Salvo, T., & Robba, R. N. 2002, , 575, L15 Campana, S., Stella, L., Israel, G. L., Moretti, A., Parmar, A. N., & Orlandini, M. 2002b, , 580, 389 Cheng, K. S., Taam, R. E., Wang, W., & Belczynski, K. 2004, submitted to . Chlebowski, T. & Garmany, C. D. 1991, , 368, 241 Choi, C.-S., Dotani, T., & Agrawal, P. C. 1999, , 525, 399 Cotera, A. S., Simpson, J. P., Erickson, E. F., Colgan, S. W. J., Burton, M. G., & Allen, D. A. 1999, , 510, 747 Dempsey, R.
C., Linsky, J. L., Fleming, T. A., & Schmitt, J. H. M. M. 1993a, , 86, 599 Dempsey, R. C., Linsky, J. L., Schmitt, J. H. M. M., & Fleming, T. A. 1993b, , 413, 333 Done, C. & Osborne, J. P. 1997, , 288, 649 Eckart, A.  2004, astro-ph/0403577 Eracleous, M., Halpern, J., & Patterson, J. 1991, , 382, 290 Eyles, C. J., Skinner, G. K., Willmore, A. P., & Rosenberg, F. D. 1975, , 257, 291 Ezuka, H. & Ishida, M. 1999, , 120, 277 Favata, F., Micela, G., & Sciortino, S. 1995, , 298, 482 Feigelson, E. D. 2004, in Stars As Suns: Activity, Evolution, and Planets, A. Benz & A. Dupree (eds.) IAU Symposium 219, in press Feigelson, E. D., Broos, P., Gaffney, J. A. III, Garmire, G., Hillenbrand, L. A., Pravdo, S. H., Townsley, L., & Tsuboi, Y. 2002, , 574, 258 Figer, D. F. 1995, Ph.D. thesis, University of California, Los Angeles Figer, D. F., Kim, S. S., Morris, M., Serabyn, E., Rich, R. M., & McLean, I. S. 1999, , 525, 750 Figer, D. F., Rich, R. M., Kim, S. S., Morris, M., & Serabyn, E. 2004, , 601, 319 Franciosini, E., Pallavicini, R., & Tagliaferri, G. 2001, , 375, 196 Franciosini, E., Pallavicini, R., & Tagliaferri, G. 2003, , 399, 279 Freeman, P. E., Kashyap, V., Rosner, R., & Lamb, D. Q. 2002, , 138, 185 Fujimoto, R., & Ishida, M. 1997, , 474, 774 Garnavich, P. & Szkody, P. 1988, , 100, 1522 Grindlay, J. E., Heinke, C., Edmonds, P. D., & Murray, S. S. 2001, [*Science*]{}, 292, 2290 Grosso, N., Montmerle, T., Feigelson, E. D., & Forbes, T. G. 2004, submitted to , astro-ph/0402672 Heinke, C. O., Edmonds, P. D., Grindlay, J. E., Lloyd, D. A., Cohn, H. N., & Lugger, P. M. 2003a, , 590, 809 Heinke, C. O., Edmonds, P. D., Grindlay, J. E., Lloyd, D. A., Murray, S. S., Cohn, H. N., & Lugger, P. M. 2003b, , 598, 516 Ho, P. T. P., Jackson, J. M., Barrett, A. H., & Armstrong, J. T. 1985, , 288, 575 Hoard, D. W., Wachter, S., Clark, L. L., & Bowers, T. P. 2002, , 565, 511 in ’t Zand, J. J. M., Ubertini, P., Capitanio, F., & Del Santo, M.
2003, IAUC 8077 Ishida, M., Matsuzaki, K., Fujimoto, R., Mukai, K., & Osborne, J. P. 1997, , 287, 651 Ishida, M., Greiner, J., Remillard, R. A., & Motch, C. 1998, , 336, 200 Iben, I., Jr., Tutukov, A. V., & Fedorova, A. V. 1997, , 486, 955 , K.  1998, , 495, 435 Kohno, M., Koyama, K., & Hamaguchi, K. 2002, , 567, 423 Kong, A. K. H., McClintock, J. E., Garcia, M. R., Murray, S. S., & Barret, D. 2002a, , 570, 277 Krabbe, A.  1995, , 447, L95 Krishnamurthi, A., Reynolds, C. S., Linsky, J. L., Martín, E., & Gagné, M. 2001, , 121, 337 Kube, J., Gänsicke, B. T., Euchner, F., & Hoffmann, B. 2003, , 404, 1159 Launhardt, R., Zylka, R., & Mezger, P. G. 2002, , 384, 112 Maeda, Y.  2002, , 570, 671 Maeda, Y., Koyama, K., Sakano, M., Takeshima, T., & Yamauchi, S. 1996, , 48, 417 Matt, G. & Guainazzi, M. 2003, , 341, L13 McNamara, D. H., Madsen, J. B., Barnes, J., & Ericksen, B. F. 2000, , 112, 202 Mewe, R., Gronenschild, E. H. B. M., & van den Oord, G. H. J. 1985, , 62, 197 Mewe, R., Lemen, J. R., & van den Oord, G. H. J. 1986, , 65, 511 Misaki, K., Terashima, Y., Kamata, Y., Ishida, M., Kunieda, H., & Tawara, Y. 1996, , 470, L53 Moon, D.-S., Eikenberry, S. S., & Wasserman, I. M. 2003, , 582, L91 Morris, M. 1993, , 408, 496 Morris, M., Baganoff, F., Muno, M., Howard, C., Maeda, Y., Feigelson, E., Bautz, M., Brandt, W. N., Chartas, G., Garmire, G., & Townsley, L. 2003, Astronomische Nachrichten, 324, S1, 167 Mukai, K., Kinkhabwala, A., Peterson, J. R., Kahn, S. M., & Paerels, F. 2003, , 586, L77 Mukai, K. & Shiokawa, K. 1993, , 418, 863 Muno, M. P., Baganoff, F. K., Bautz, M. W., Brandt, W. N., Broos, P. S., Feigelson, E. D., Garmire, G. P., Morris, M., Ricker, G. R., & Townsley, L. K. 2003a, , 589, 225 Muno, M. P., Baganoff, F. K., & Arabadjis, J. A. 2003b, , 598, 474 Muno, M. P., Baganoff, F. K., Bautz, M. W., Brandt, W. N., Garmire, G. P., & Ricker, G. R. 2003c, , 599, 465 Muno, M. P., Baganoff, F. K., Bautz, M. W., Feigelson, E. D., Garmire, G. P., Morris, M. R., Park, S.
, Ricker, G. R., & Townsley, L. K. 2004, submitted to , astro-ph/0402087 Nagase, F., Zylstra, G., Sonobe, T., Kotani, T., Inoue, H., & Woo, J. 1994, , 436, L1 Negueruela, I., Reig, P., Finger, M. H., & Roche, P. 2000, , 356, 1003 Norton, A. J. & Watson, M. G. 1989, , 237, 853 , T., [Orlandini]{}, M., [Parmar]{}, A. N., [Angelini]{}, L., [Israel]{}, G. L., [Dal Fiume]{}, D., [Mereghetti]{}, S., [Santangelo]{}, A., & [Cusumano]{}, G. 1999, , 351, L33 Orlandini, M.  2003, astro-ph/0309819 Park, S., Baganoff, F. K., Morris, M., Maeda, Y., Muno, M. P., Howard, C., Bautz, M. W., & Garmire, G. P. 2003,  in press, astro-ph/0311460 Patel, S. K.  2004, , 602, L45 Patterson, J. & Szkody, P. 1993, PASP, 105, 1116 Paumard, T., Maillard, J. P., Morris, M., & Rigaut, F. 2001, , 366, 466 Pavlinsky, M. N., Grebenev, S. A., & Sunyaev, R. A. 1994, , 425, 110 , E., [Rappaport]{}, S., & [Podsiadlowski]{}, P. 2002, , 571, L37 Pollock, A. M. T. 1987, , 320, 283 Pooley, D. 2002, , 569, 405 Portegies-Zwart, S. F., Pooley, D., & Lewin, W. H. G. 2002, , 574, 762 Portegies-Zwart, S. F. McMillan, S. L. W., & Gerhard, O. 2003, , 593, 352 Possenti, A., Cerutti, R., Colpi, M., & Mereghetti, S. 2002, , 387, 993 Priebisch, T. & Zinnecker, H. 2002, , 123, 1613 Protassov, R., van Dyk, D. A., Connors, A. Kashyap, V. L. & Siemiginowska, A. 2002, , 571, 545 Ramsay, G., Cropper, M., Mason, K. O., Córdova, F. A., & Priedhorsky, W. 2004a, , 347, 95 Ramsay, G. & Cropper, M. 2003, , 338, 219 Ramsay, G., Cropper, M., Wu, K., Mason, K. O., Córdova, F. A., & Priedhorsky, W. 2004, astro-ph/0402526 Ramsay, G., Mason, K. O., Cropper, M., Watson, M. G., & Clayton, K. L. 1994, , 270, 692 Revnivtsev, M., Tuerler, M., Del Santo, M., Westergaard, N. J., Gehrels, N., & Winkler, C. 2003a, IAUC, 8097 Revnivtsev, M., Sazonov, S., Gilfanov, M., & Sunyaev, R. 2003, astro-ph/0303274 Rodriguez, J., Tomsick, J. A., Foschini, L., Walter, R. & Goldwurm, A. 2003 IAUC 8096 Rutledge, R. E., Bildsten, L., Brown, E. F., Pavlov, G. 
G., & Zavlin, V. E. 2001, , 551, 921 , M., [Torii]{}, K., [Koyama]{}, K., [Maeda]{}, Y., & [Yamauchi]{}, S. 2000, , 52, 1141 Sakano, M., Warwick, R. S., Decourchelle, A., & Wang, W. D. 2004, to appear in Young Neutron Stars and Their Environments, eds. Camilo, F. & Gaensler, B. M., IAUS, 218, 183 Sako, M., Liedahl, D. A., Kahn, S. M., & Paerels, F. 1999, , 525, 921 Scargle, J. D. 1998, , 504, 405 Schlickeiser, R. 2002, “Cosmic Ray Astrophysics”, Springer-Verlag, Berlin Schwope, A. D., Brunner, H., Buckley, D., Greiner, J., Heyden, K. V. D., Neizvestny, S., Potter, S., & Schwarz, R. 2002, , 396, 895 Shrader, C. R., Sutaria, F. K., Singh, K. P., & Macomb, D. J. 1999, , 512, 920 Singh, K. P., Drake, S. A., & White, N. E. 1996, , 111, 2415 Still, M. & Mukai, K. 2001, ApJ, 562, L71 , M., [Kinugasa]{}, K., [Matsuzaki]{}, K., [Terada]{}, Y., [Yamauchi]{}, S., & [Yokogawa]{}, J. 2000, , 534, L181 Sugizaki, M., Mitsuda, K., Kaneda, H., Matsuzaki, K., Yamauchi, S., & Koyama, K. 2001, , 134, 77 Swank, J. H. & Markwardt, C. B. 2003, ATEL 128 Tan, J. D. & Draine, B. T. 2003, astro-ph/0310442 Terada, Y., Kaneda, H., Makishima, K., Ishida, M., Matsuzaki, K., Nagase, F., & Kotani, T. 1999, , 51, 39 , K., [Sugizaki]{}, M., [Kohmura]{}, T., [Endo]{}, T., & [Nagase]{}, F. 1999, , 523, L65 Townsley, L. K. , 2002a, NIM-A, 486, 716 Townsley, L. K. , 2002b, NIM-A, 486, 751 Townsley, L. K., Feigelson, E. D., Montmerle, T., Broos, P. S., Chu, Y.-H., & Garmire, G. P. 2003, , 593, 874 Tsuru, T.  1989, , 41, 679 Verbunt, F., Bunk, W. H., Ritter, H., & Pfeffermann, E. 1997, , 327, 602 Waldron, W. L. & Cassinelli, J. P. 2001, , 548, L45 Walter, R., Rodriguez, J., Foschini, L., de Plaa, J., Corbel, S., Courvoisier, T. J.-L., den Hartog, P. R., Lebrun, F., Parmar, A. N., Tomsick, J. A. & Ubertini, P. 2003, , 411, L427 Wang, Q. D., Gotthelf, E. V., & Lang, C. C. 2002, , 415, 148 , B. 1995, [*[Cataclysmic Variable Stars]{}*]{}, Cambridge University Press Wegner, W. 1994, , 270, 229 Weisskopf, M. 
C., Brinkman, B., Canizares, C., Garmire, G., Murray, S., van Speybroeck, L. P. 2002, , 114, 1 Wijnands, R., Guainazzi, M., van der Klis, M., & Méndez, M. 2002, , 573, L45 Willems, B. & Kolb, U. 2003, , 343, 949 Wojdowski, P. S., Liedahl, D. A., Sako, M., Kahn, S. M., & Paerels, F. 2003, , 582, 959 , M. V. 1990, [*[Handbook of Space Astronomy and Astrophysics]{}*]{}, Cambridge University Press [lccccc]{} 1999 Sep 21 02:43:00 & 0242 & 40,872 & 266.41382 & $-$29.0130 & 268\ 2000 Oct 26 18:15:11 & 1561 & 35,705 & 266.41344 & $-$29.0128 & 265\ 2001 Jul 14 01:51:10 & 1561 & 13,504 & 266.41344 & $-$29.0128 & 265\ 2002 Feb 19 14:27:32 & 2951 & 12,370 & 266.41867 & $-$29.0033 & 91\ 2002 Mar 23 12:25:04 & 2952 & 11,859 & 266.41897 & $-$29.0034 & 88\ 2002 Apr 19 10:39:01 & 2953 & 11,632 & 266.41923 & $-$29.0034 & 85\ 2002 May 07 09:25:07 & 2954 & 12,455 & 266.41938 & $-$29.0037 & 82\ 2002 May 22 22:59:15 & 2943 & 34,651 & 266.41991 & $-$29.0041 & 76\ 2002 May 24 11:50:13 & 3663 & 37,959 & 266.41993 & $-$29.0041 & 76\ 2002 May 25 15:16:03 & 3392 & 166,690 & 266.41992 & $-$29.0041 & 76\ 2002 May 28 05:34:44 & 3393 & 158,026 & 266.41992 & $-$29.0041 & 76\ 2002 Jun 03 01:24:37 & 3665 & 89,928 & 266.41992 & $-$29.0041 & 76 [lcccccccccccccc]{} 174521.9–290519 & 499 & $ 8_{- 3}^{+ 5}$ & $ 0.1_{- 0.5}^{+ 0.9}$ & 4.0 & 6.5 & 24/27 & $ 14_{- 2}^{+ 1}$ & $63.2(e)$ & 3.6 & 9.7 & 27/27 & n & n & -\ 174521.9–290616 & 366 & $ 10_{- 6}^{+ 10}$ & $ 0.3_{- 0.9}^{+ 1.6}$ & 3.3 & 5.9 & 14/20 & $ 17_{- 4}^{+ 4}$ & $15.6(e)$ & 3.1 & 10.6 & 15/20 & n & n & l\ 174522.3–290322 & 97 & $< 8$ & $-1.0_{- 1.4}^{+ 2.2}$ & 0.8 & 0.9 & 6/4 & $ 15_{- 7}^{+ 17}$ & $> 1.6$ & 0.6 & 2.0 & 7/4 & & & -\ 174522.9–285718 & 139 & $ 3$ & -0.3(e) & 0.9 & 1.1 & 15/7 & $ 8$ & $79.9(e)$ & 0.8 & 1.5 & 19/7 & & & -\ 174522.9–290706 & 158 & $ 16_{- 13}^{+ 17}$ & $ 3.3_{- 2.7}^{+ 3.6}$ & 0.8 & 5.2 & 14/16 & $ 18_{- 9}^{+ 11}$ & 1.8$_{- 0.8}^{+ 4.4}$ & 0.9 & 6.5 & 10/16 & & & -\ \[5pt\] 174523.1–290205 & 123 & $ 12_{- 8}^{+ 
12}$ & $ 1.4_{- 2.1}^{+ 3.0}$ & 0.8 & 2.0 & 9/5 & $ 16_{- 6}^{+ 10}$ & 3.3$_{- 1.9}^{+16.1}$ & 0.8 & 3.4 & 5/5 & & & l\ 174523.2–290116 & 93 & $ 6$ & 1.3(e) & 0.5 & 0.9 & 7/3 & $ 8_{- 3}^{+ 3}$ & $> 1.9$ & 0.5 & 1.1 & 6/3 & & & -\ 174523.3–290637 & 127 & $< 12$ & $-1.6_{- 1.2}^{+ 1.9}$ & 1.1 & 1.2 & 9/13 & $ 21_{- 11}^{+ 16}$ & $> 1.7$ & 0.9 & 3.9 & 12/13 & & & -\ 174523.4–290248 & 80 & $< 10$ & $-1.9_{- 1.1}^{+ 1.2}$ & 0.7 & 0.7 & 1/3 & $ 22_{- 9}^{+ 23}$ & $> 1.3$ & 0.6 & 2.8 & 3/3 & & & -\ 174523.8–290514 & 92 & $ 8_{- 6}^{+ 46}$ & $>-0.1$ & 0.6 & 1.1 & 9/7 & $ 8_{- 3}^{+ 7}$ & $79.9(e)$ & 0.6 & 1.1 & 9/7 & & & -\ \[5pt\] 174523.8–290652 & 94 & $< 26$ & $-2.1_{- 0.9}^{+ 5.8}$ & 0.9 & 0.9 & 15/13 & $ 25_{- 12}^{+ 19}$ & $> 0.9$ & 0.7 & 3.7 & 14/13 & & & -\ 174524.0–285947 & 82 & $< 4$ & $-0.9_{- 1.0}^{+ 1.8}$ & 0.6 & 0.7 & 2/2 & $ 6$ & $79.7(e)$ & 0.5 & 0.8 & 6/2 & & & -\ 174524.1–285845 & 224 & $ 0.8_{- 0.7}^{+ 1.0}$ & $ 2.9_{- 0.4}^{+ 0.6}$ & 0.2 & 0.2 & 2/9 & $ 0.6_{- 0.5}^{+ 0.7}$ & 2.5$_{- 0.6}^{+ 1.0}$ & 0.2 & 0.2 & 5/9 & n & n & -\ 174524.7–290038 & 121 & $ 3$ & -0.6(e) & 0.9 & 1.0 & 11/4 & $ 10$ & $ 6.0(e)$ & 0.7 & 1.7 & 13/4 & & & -\ 174525.1–285703 & 152 & $< 0.5$ & $ 1.7_{- 0.3}^{+ 0.4}$ & 0.2 & 0.2 & 4/6 & $< 0$ & 5.2$_{- 2.4}^{+ 9.3}$ & 0.2 & 0.2 & 5/6 & & & se [lcccccccccccc]{} 174508.7–290324 & $ 2.1_{- 1.2}^{+ 3.6}$ & $< 0.2$ & $< 830$ & 0.9 & 0.064 & 11.8/ 11 & $ 3.4_{- 1.7}^{+ 4.4}$ & $ 0.3_{- 0.3}^{+ 0.5}$ & 1581 & 3.3 & 0.000 & 9.3/ 11\ 174510.3–285435 & $ 3.0_{- 2.0}^{+ 1.2}$ & $< 0.5$ & $< 1425$ & 1.2 & 0.100 & 21.6/ 17 & $ 3.0_{- 1.8}^{+ 1.0}$ & $ 0.2_{- 0.2}^{+ 0.3}$ & 732 & 2.0 & 0.007 & 20.7/ 17\ 174510.5–290645 & $ 1.5_{- 1.1}^{+ 1.3}$ & $ 0.9_{- 0.6}^{+ 0.3}$ & 411 & 7.7 & 0.000 & 42.4/ 39 & $ 1.6_{- 1.0}^{+ 0.7}$ & $ 1.0_{- 0.5}^{+ 0.4}$ & 474 & 8.6 & 0.000 & 41.2/ 39\ 174512.4–290604 & $ 1.3_{- 1.3}^{+ 3.4}$ & $< 0.5$ & $< 1447$ & 4.2 & 0.024 & 11.0/ 20 & $ 2.2_{- 2.0}^{+ 3.1}$ & $ 0.3_{- 0.3}^{+ 0.3}$ & 1085 & 5.4 & 0.000 & 10.2/ 
20\ 174517.3–290440 & $-0.3_{- 0.4}^{+ 0.6}$ & $< 0.4$ & $< 558$ & 5.3 & 0.011 & 21.6/ 23 & $-0.2_{- 0.4}^{+ 0.6}$ & $ 0.4_{- 0.2}^{+ 0.3}$ & 467 & 6.7 & 0.002 & 20.0/ 23\ \[5pt\] 174519.8–290114 & $ 0.9_{- 0.9}^{+-0.9}$ & & & & & 36.5/ 15 & $ 1.4_{- 1.4}^{+-1.4}$ & $ 0.5_{- 0.5}^{+-0.5}$ & 948 & 4.9 & 0.006 & 28.3/ 15\ 174520.9–285818 & $ 4.2_{- 0.7}^{+ 1.5}$ & $ 0.3_{- 0.1}^{+ 0.1}$ & $>10^4$ & 6.1 & 0.003 & 10.3/ 11 & $ 4.8_{- 1.3}^{+ 1.0}$ & $ 0.3_{- 0.2}^{+ 0.1}$ & $>10^4$ & 5.9 & 0.003 & 10.6/ 11\ 174525.5–290028 & $ 0.3_{- 0.5}^{+ 0.5}$ & $< 0.6$ & $< 449$ & 4.8 & 0.012 & 49.9/ 28 & $ 0.5_{- 0.4}^{+ 0.2}$ & $ 1.0_{- 0.4}^{+ 0.2}$ & 818 & 13.0 & 0.000 & 33.0/ 28\ 174527.6–285258 & $ 0.8_{- 0.6}^{+ 0.5}$ & $ 0.7_{- 0.4}^{+ 0.5}$ & 223 & 6.2 & 0.009 & 48.2/ 40 & $ 1.2_{- 0.8}^{+ 0.6}$ & $ 0.9_{- 0.3}^{+ 0.7}$ & 310 & 10.0 & 0.000 & 42.9/ 40\ 174527.8–290542 & $ 1.0_{- 1.5}^{+ 1.0}$ & $< 0.2$ & $< 394$ & 1.1 & 0.152 & 6.7/ 13 & $ 2.0_{- 2.1}^{+ 0.6}$ & $ 0.4_{- 0.4}^{+ 0.4}$ & 761 & 4.2 & 0.003 & 5.1/ 13\ \[5pt\] 174529.0–290406 & $ 2.5_{- 1.4}^{+ 0.6}$ & $< 0.5$ & $< 699$ & 2.9 & 0.028 & 13.7/ 11 & $ 2.9_{- 1.3}^{+ 2.1}$ & $ 0.4_{- 0.3}^{+ 0.7}$ & 716 & 4.7 & 0.006 & 11.1/ 11\ 174529.6–285432 & $ 1.5_{- 1.1}^{+ 0.5}$ & $ 0.2_{- 0.2}^{+ 0.2}$ & 238 & 3.3 & 0.007 & 11.4/ 18 & $ 1.3_{- 0.8}^{+ 1.2}$ & $< 0.4$ & $< 642$ & 1.8 & 0.050 & 12.5/ 18\ 174531.1–290219 & $-1.5_{--1.5}^{+ 1.5}$ & & & & & 14.0/ 5 & $ 1.6_{- 1.9}^{+ 2.7}$ & $ 0.7_{- 0.3}^{+ 0.7}$ & 2450 & 4.8 & 0.003 & 3.4/ 5\ 174532.3–290251 & $ 1.6_{- 1.5}^{+ 0.7}$ & $< 0.5$ & $< 345$ & 2.1 & 0.060 & 24.9/ 14 & $ 1.7_{- 1.3}^{+ 1.9}$ & $ 0.8_{- 0.3}^{+ 0.4}$ & 589 & 7.5 & 0.000 & 14.5/ 14\ 174532.4–290259 & $-0.4_{- 1.0}^{+ 0.6}$ & $ 0.4_{- 0.3}^{+ 0.2}$ & 434 & 4.9 & 0.001 & 17.5/ 12 & $-0.4_{- 0.4}^{+ 1.2}$ & $ 0.5_{- 0.2}^{+ 0.3}$ & 542 & 6.5 & 0.002 & 14.1/ 12\ \[5pt\] 174534.5–285523 & $-0.0_{- 0.5}^{+ 1.6}$ & $< 0.2$ & $< 280$ & 0.9 & 0.228 & 22.2/ 16 & $ 1.3_{- 1.4}^{+ 0.7}$ & $ 0.7_{- 0.3}^{+ 0.3}$ 
& 943 & 12.3 & 0.000 & 6.5/ 16\ 174534.5–290201 & $ 0.3_{- 0.4}^{+ 0.4}$ & $ 0.6_{- 0.3}^{+ 0.3}$ & 230 & 8.4 & 0.001 & 43.7/ 39 & $ 0.4_{- 0.6}^{+ 0.4}$ & $ 0.9_{- 0.3}^{+ 0.4}$ & 344 & 14.3 & 0.000 & 35.7/ 39\ 174534.9–290118 & $ 0.7_{- 0.9}^{+ 0.8}$ & $< 0.5$ & $< 321$ & 2.6 & 0.053 & 20.0/ 17 & $ 1.3_{- 1.4}^{+ 0.7}$ & $ 0.6_{- 0.3}^{+ 0.3}$ & 439 & 7.5 & 0.001 & 13.7/ 17\ 174535.6–290034 & $ 0.8_{- 0.8}^{+ 0.9}$ & $< 0.2$ & $< 122$ & 1.0 & 0.148 & 26.2/ 24 & $ 1.5_{- 1.1}^{+ 0.9}$ & $ 0.6_{- 0.4}^{+ 0.2}$ & 513 & 7.5 & 0.000 & 19.1/ 24\ 174536.1–285638 & $ 3.4_{- 3.4}^{+-3.4}$ & & & & & 259.9/111 & $ 3.6_{- 0.2}^{+ 0.1}$ & $ 2.7_{- 0.4}^{+ 0.4}$ & 2173 & 48.5 & 0.000 & 156.4/111\ \[5pt\] 174537.6–290144 & $ 0.4_{- 0.5}^{+ 0.6}$ & $< 0.4$ & $< 410$ & 3.5 & 0.018 & 23.5/ 21 & $ 0.4_{- 0.5}^{+ 0.6}$ & $ 0.2_{- 0.2}^{+ 0.2}$ & 249 & 3.7 & 0.007 & 23.3/ 21\ 174537.7–290002 & $-0.0_{- 1.0}^{+ 0.4}$ & $< 0.4$ & $< 815$ & 4.8 & 0.020 & 22.7/ 15 & $ 0.2_{- 0.5}^{+ 0.6}$ & $ 0.5_{- 0.2}^{+ 0.3}$ & 890 & 6.7 & 0.000 & 18.9/ 15\ 174537.9–290134 & $ 4.4_{- 0.6}^{+ 1.0}$ & $< 0.3$ & $< 15803$ & 3.9 & 0.010 & 12.2/ 9 & $ 4.6_{- 0.9}^{+ 0.9}$ & $ 0.2_{- 0.2}^{+ 0.1}$ & 16521 & 3.9 & 0.007 & 12.3/ 9\ 174540.1–290055 & $ 2.2_{- 0.5}^{+ 0.8}$ & $ 0.2_{- 0.2}^{+ 0.3}$ & 130 & 3.4 & 0.000 & 41.1/ 51 & $ 2.1_{- 0.6}^{+ 0.7}$ & $< 0.2$ & $< 129$ & 1.0 & 0.050 & 43.1/ 51\ 174540.5–285550 & $ 0.5_{- 0.3}^{+ 0.9}$ & $< 0.4$ & $< 249$ & 1.7 & 0.092 & 42.2/ 26 & $ 1.0_{- 0.9}^{+ 0.4}$ & $ 0.7_{- 0.3}^{+ 0.3}$ & 448 & 8.0 & 0.002 & 31.7/ 26\ \[5pt\] 174541.2–290210 & $-1.0_{- 0.7}^{+ 1.0}$ & $< 0.7$ & $< 279$ & 2.1 & 0.055 & 26.1/ 17 & $-0.9_{- 0.6}^{+ 0.5}$ & $ 0.8_{- 0.4}^{+ 0.5}$ & 361 & 6.4 & 0.001 & 19.1/ 17\ 174541.5–285814 & $ 1.0_{- 0.2}^{+ 0.1}$ & $ 0.3_{- 0.2}^{+ 0.2}$ & 174 & 6.7 & 0.003 & 78.1/ 78 & $ 0.9_{- 0.1}^{+ 0.3}$ & $< 0.5$ & $< 271$ & 3.9 & 0.020 & 81.1/ 78\ 174541.6–285952 & $-0.1_{- 0.6}^{+ 0.5}$ & $ 0.4_{- 0.3}^{+ 0.4}$ & 365 & 5.1 & 0.003 & 12.2/ 14 & $ 0.3_{- 
0.8}^{+ 0.8}$ & $< 1.1$ & $< 1060$ & 2.3 & 0.021 & 15.7/ 14\ 174541.8–290037 & $ 0.4_{- 1.5}^{+ 1.1}$ & $< 0.7$ & $< 432$ & 4.6 & 0.019 & 41.5/ 27 & $ 0.3_{- 1.1}^{+ 1.6}$ & $ 1.0_{- 0.4}^{+ 0.6}$ & 678 & 11.1 & 0.001 & 30.0/ 27\ 174542.0–285824 & $ 0.3_{- 2.4}^{+ 1.4}$ & $< 0.4$ & $< 451$ & 1.6 & 0.112 & 8.8/ 6 & $ 0.3_{- 1.9}^{+ 1.1}$ & $ 0.4_{- 0.3}^{+ 0.3}$ & 584 & 4.4 & 0.002 & 4.3/ 6\ 174542.2–285732 & $-0.2_{- 0.4}^{+ 0.7}$ & $ 0.2_{- 0.1}^{+ 0.3}$ & 229 & 4.1 & 0.007 & 23.0/ 18 & $ 0.1_{- 0.3}^{+ 0.4}$ & $< 0.5$ & $< 601$ & 4.9 & 0.020 & 21.8/ 18\ \[5pt\] 174543.3–285605 & $-0.2_{- 0.6}^{+ 1.4}$ & $< 0.2$ & $< 342$ & 1.7 & 0.083 & 9.6/ 12 & $ 1.0_{- 1.1}^{+ 1.6}$ & $ 0.5_{- 0.3}^{+ 0.4}$ & 782 & 8.3 & 0.000 & 4.0/ 12\ 174543.4–285841 & $ 1.2_{- 1.1}^{+ 0.4}$ & $< 0.5$ & $< 312$ & 3.0 & 0.029 & 26.8/ 21 & $ 1.4_{- 1.0}^{+ 0.6}$ & $ 0.7_{- 0.4}^{+ 0.3}$ & 409 & 7.7 & 0.000 & 20.2/ 21\ 174543.7–285946 & $ 0.7_{- 1.0}^{+ 0.6}$ & $ 1.3_{- 0.5}^{+ 0.5}$ & 654 & 14.1 & 0.000 & 23.9/ 28 & $ 0.7_{- 0.9}^{+ 0.6}$ & $ 1.0_{- 0.5}^{+ 0.5}$ & 493 & 7.2 & 0.001 & 34.9/ 28\ 174544.3–290156 & $ 0.7_{- 1.7}^{+ 1.5}$ & $ 0.2_{- 0.2}^{+ 0.3}$ & 355 & 6.0 & 0.001 & 1.2/ 7 & $ 0.9_{- 1.9}^{+ 3.9}$ & $< 1.3$ & $< 1890$ & 1.8 & 0.060 & 3.8/ 7\ 174544.8–285953 & $ 0.3_{- 1.5}^{+ 5.0}$ & $< 0.3$ & $< 1151$ & 0.9 & 0.200 & 32.1/ 17 & $ 4.3_{- 2.7}^{+ 3.8}$ & $ 0.9_{- 0.3}^{+ 1.8}$ & 5241 & 8.7 & 0.002 & 17.5/ 17\ \[5pt\] [@@split]{}174544.9–290027 & $ 0.5_{- 0.3}^{+ 0.2}$ & $ 0.4_{- 0.3}^{+ 0.2}$ & 237 & 6.7 & 0.000 & 33.0/ 38 & $ 0.4_{- 0.3}^{+ 0.2}$ & $ 0.4_{- 0.4}^{+ 0.2}$ & 248 & 3.9 & 0.001 & 35.8/ 38\ 174546.1–290057 & $-1.7_{- 0.5}^{+ 0.7}$ & $< 0.3$ & $< 485$ & 2.6 & 0.050 & 16.4/ 11 & $-1.2_{- 0.5}^{+ 0.4}$ & $ 0.4_{- 0.2}^{+ 0.2}$ & 674 & 6.5 & 0.001 & 9.6/ 11\ 174546.2–285906 & $ 0.6_{- 0.4}^{+ 0.3}$ & $< 0.3$ & $< 292$ & 3.1 & 0.034 & 21.6/ 31 & $ 0.7_{- 0.4}^{+ 0.5}$ & $ 0.2_{- 0.2}^{+ 0.2}$ & 217 & 5.8 & 0.000 & 19.6/ 31\ 174546.9–285903 & $ 0.5_{- 1.5}^{+ 1.9}$ & $< 
0.4$ & $< 558$ & 3.2 & 0.014 & 11.0/ 11 & $ 2.8_{- 2.9}^{+ 2.5}$ & $ 0.7_{- 0.5}^{+ 1.1}$ & 823 & 4.4 & 0.004 & 9.6/ 11\ 174547.0–285333 & $ 0.3_{- 0.6}^{+ 0.8}$ & $< 0.4$ & $< 188$ & 1.4 & 0.124 & 60.5/ 43 & $ 0.7_{- 0.4}^{+ 0.4}$ & $ 0.8_{- 0.4}^{+ 0.4}$ & 423 & 10.8 & 0.000 & 47.3/ 43\ \[5pt\] 174547.2–290000 & $ 0.1_{- 0.8}^{+ 2.3}$ & $ 0.5_{- 0.2}^{+ 0.3}$ & 561 & 6.8 & 0.001 & 25.4/ 18 & $ 1.0_{- 1.4}^{+ 1.4}$ & $ 0.4_{- 0.3}^{+ 0.3}$ & 327 & 3.0 & 0.008 & 33.2/ 18\ 174548.9–285751 & $ 0.9_{- 0.4}^{+ 0.4}$ & $ 0.5_{- 0.3}^{+ 0.2}$ & 261 & 7.7 & 0.000 & 66.4/ 49 & $ 1.0_{- 0.3}^{+ 0.3}$ & $ 0.8_{- 0.3}^{+ 0.3}$ & 446 & 13.8 & 0.000 & 56.8/ 49\ 174549.3–285557 & $-0.4_{- 0.3}^{+ 0.4}$ & $ 0.7_{- 0.3}^{+ 0.3}$ & 364 & 10.4 & 0.000 & 44.9/ 37 & $-0.2_{- 0.4}^{+ 0.3}$ & $ 0.8_{- 0.3}^{+ 0.3}$ & 413 & 9.7 & 0.000 & 46.1/ 37\ 174549.6–290457 & $ 0.7_{- 1.4}^{+ 0.7}$ & $ 0.4_{- 0.2}^{+ 0.2}$ & 414 & 6.5 & 0.000 & 12.2/ 14 & $ 0.4_{- 0.5}^{+ 1.6}$ & $< 0.8$ & $< 862$ & 2.8 & 0.013 & 17.5/ 14\ 174550.9–285430 & $ 0.9_{- 0.4}^{+ 0.5}$ & $ 0.5_{- 0.4}^{+ 0.3}$ & 426 & 6.5 & 0.000 & 18.0/ 23 & $ 0.7_{- 0.4}^{+ 1.0}$ & $< 0.6$ & $< 509$ & 1.9 & 0.030 & 22.7/ 23\ \[5pt\] 174552.0–285312 & $ 0.6_{- 0.8}^{+ 0.6}$ & $ 1.1_{- 0.5}^{+ 0.8}$ & 238 & 6.9 & 0.001 & 59.1/ 43 & $ 0.7_{- 0.8}^{+ 0.7}$ & $ 0.8_{- 0.6}^{+ 0.8}$ & 183 & 3.9 & 0.002 & 63.8/ 43\ 174554.4–285816 & $ 0.1_{- 1.0}^{+ 0.3}$ & $ 0.9_{- 0.4}^{+ 0.3}$ & 420 & 9.2 & 0.000 & 39.0/ 26 & $-0.2_{- 0.7}^{+ 0.4}$ & $ 0.9_{- 0.3}^{+ 0.3}$ & 417 & 8.4 & 0.000 & 40.8/ 26\ 174555.6–285600 & $ 2.8_{- 2.5}^{+ 3.6}$ & $< 0.5$ & $< 1595$ & 3.4 & 0.020 & 6.9/ 12 & $ 5.5_{- 3.6}^{+ 0.5}$ & $ 0.5_{- 0.4}^{+ 0.4}$ & 2640 & 5.5 & 0.003 & 5.5/ 12\ 174558.9–290724 & $ 0.3_{- 0.3}^{+ 0.3}$ & $ 1.1_{- 0.4}^{+ 0.4}$ & 338 & 13.5 & 0.000 & 96.4/ 66 & $ 0.4_{- 0.3}^{+ 0.3}$ & $ 1.4_{- 0.5}^{+ 0.4}$ & 416 & 13.7 & 0.000 & 96.0/ 66\ 174559.5–290601 & $ 1.7_{- 1.9}^{+ 0.6}$ & $< 0.5$ & $< 552$ & 3.0 & 0.050 & 29.6/ 20 & $ 1.4_{- 1.0}^{+ 1.0}$ 
& $ 0.4_{- 0.3}^{+ 0.4}$ & 508 & 5.0 & 0.002 & 26.3/ 20\ \[5pt\] 174601.0–285854 & $ 1.5_{- 0.9}^{+ 0.8}$ & $< 1.1$ & $< 284$ & 3.3 & 0.030 & 50.6/ 31 & $ 1.7_{- 1.1}^{+ 0.5}$ & $ 1.8_{- 0.8}^{+ 0.4}$ & 526 & 12.2 & 0.000 & 34.8/ 31\ 174601.1–285953 & $ 1.0_{- 1.3}^{+ 0.6}$ & $ 0.4_{- 0.3}^{+ 0.3}$ & 749 & 4.5 & 0.006 & 4.2/ 7 & $-0.4_{- 0.7}^{+ 2.3}$ & $< 0.4$ & $< 601$ & 0.9 & 0.170 & 8.5/ 7\ 174606.3–285810 & $ 1.1_{- 1.7}^{+ 0.6}$ & $ 1.1_{- 0.4}^{+ 0.4}$ & 841 & 14.6 & 0.000 & 25.9/ 29 & $ 1.0_{- 1.4}^{+ 0.7}$ & $ 0.6_{- 0.3}^{+ 0.3}$ & 458 & 6.1 & 0.004 & 40.2/ 29\ 174608.4–290623 & $ 2.2_{- 1.3}^{+ 0.5}$ & $ 1.3_{- 0.5}^{+ 0.9}$ & 455 & 9.9 & 0.000 & 48.3/ 36 & $ 2.3_{- 1.4}^{+ 0.8}$ & $ 1.4_{- 0.6}^{+ 1.1}$ & 510 & 8.9 & 0.002 & 50.0/ 36\ 174609.8–290321 & $ 1.6_{- 2.6}^{+ 3.5}$ & $< 11.6$ & $< 1560$ & 4.1 & 0.037 & 26.4/ 15 & $ 1.6_{- 2.4}^{+ 4.1}$ & $ 2.9_{- 0.7}^{+ 7.0}$ & 623 & 5.6 & 0.004 & 23.1/ 15\ \[5pt\] 174610.9–285345 & $ 1.3_{- 1.0}^{+ 1.1}$ & $ 0.7_{- 0.6}^{+ 0.6}$ & 336 & 4.8 & 0.001 & 48.4/ 51 & $ 1.2_{- 0.9}^{+ 1.1}$ & $< 0.7$ & $< 346$ & 1.8 & 0.100 & 51.5/ 51\ 174612.3–285706 & $-0.6_{- 1.1}^{+ 1.9}$ & $< 0.2$ & $< 450$ & 1.2 & 0.180 & 27.0/ 23 & $ 0.1_{- 1.4}^{+ 1.4}$ & $ 0.3_{- 0.2}^{+ 0.2}$ & 817 & 5.8 & 0.000 & 21.5/ 23\ 174613.7–290622 & $ 1.3_{- 1.1}^{+ 1.2}$ & $ 0.3_{- 0.2}^{+ 0.2}$ & 1594 & 6.8 & 0.006 & 15.3/ 22 & $ 1.1_{- 1.3}^{+ 1.2}$ & $< 0.5$ & $< 2798$ & 1.7 & 0.060 & 20.1/ 22\ 174614.5–285428 & $ 0.1_{- 2.4}^{+ 7.3}$ & $ 0.5_{- 0.5}^{+ 2.1}$ & 923 & 3.7 & 0.004 & 62.7/ 51 & $ 0.1_{- 2.5}^{+ 5.0}$ & $< 18.6$ & $< 26553$ & 1.0 & 0.360 & 66.2/ 51\ 174616.5–285846 & $-1.3_{- 0.5}^{+ 0.5}$ & $ 0.3_{- 0.3}^{+ 0.3}$ & 344 & 5.5 & 0.006 & 23.2/ 27 & $-1.4_{- 0.5}^{+ 1.9}$ & $< 0.4$ & $< 385$ & 1.3 & 0.160 & 27.6/ 27\ 174617.2–285449 & $ 1.0_{- 0.5}^{+ 1.0}$ & $ 0.2_{- 0.2}^{+ 0.4}$ & 3021 & 1.7 & 0.000 & 35.2/ 26 & $ 0.9_{- 0.5}^{+ 0.9}$ & $< 0.2$ & $< 2647$ & 1.0 & 0.130 & 36.1/ 26\ \[5pt\] 174619.4–290213 & $-0.3_{- 0.6}^{+ 0.5}$ 
& $ 0.7_{- 0.5}^{+ 0.6}$ & 369 & 7.6 & 0.001 & 13.1/ 22 & $-0.4_{- 0.4}^{+ 1.5}$ & $< 0.9$ & $< 432$ & 1.6 & 0.090 & 18.2/ 22\ 174623.6–285629 & $ 1.4_{- 1.0}^{+ 1.9}$ & $< 1.7$ & $< 582$ & 4.4 & 0.020 & 76.8/ 51 & $ 1.6_{- 1.2}^{+ 2.2}$ & $ 1.2_{- 0.7}^{+ 1.0}$ & 440 & 5.8 & 0.002 & 74.6/ 51 [lcccccc]{} $N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & 1.6$_{-0.1}^{+0.1}$ & 1.6$_{-0.2}^{+0.1}$ & 2.2$_{-0.1}^{+0.2}$ & 1.4$_{-0.1}^{+0.2}$ & 2.3$_{-0.1}^{+0.3}$ & 4.1$_{-0.1}^{+0.4}$\ $N_{\rm pc,H}$ ($10^{22}$ cm$^{-2}$) & 7.1$_{- 0.5}^{+ 0.2}$ & 7.1$_{- 0.6}^{+ 0.1}$ & 7.5$_{- 0.2}^{+ 0.5}$ & 8.3$_{- 0.2}^{+ 0.7}$ & 8.2$_{- 0.2}^{+ 0.8}$ & 8.7$_{- 0.4}^{+ 0.9}$\ $f_{\rm pc}$ & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.95\ $\Gamma$ & 0.86$_{-0.08}^{+0.03}$ & 0.85$_{-0.14}^{+0.01}$ & 1.04$_{-0.00}^{+0.10}$ & 0.34$_{-0.01}^{+0.12}$ & 1.51$_{-0.04}^{+0.15}$ & 3.31$_{-0.12}^{+0.25}$\ $N_\Gamma$ ($10^{-6}$ ph cm$^{-2}$ s$^{-1}$ arcmin$^{-2})$ & 5.0$_{- 0.7}^{+ 0.3}$ & 2.3$_{- 0.5}^{+ 0.1}$ & 3.2$_{- 0.1}^{+ 0.8}$ & 1.2$_{- 0.1}^{+ 0.3}$ & 5.7$_{- 0.4}^{+ 1.9}$ & 29.2$_{- 3.2}^{+18.1}$\ Si XIII He-$\alpha$ (eV) & 181 $\pm$ 29& 138 $\pm$ 38& 248 $\pm$ 57& 242 $\pm$ 65& 97 $\pm$ 41& 253 $\pm$ 123\ Si XIV Ly-$\alpha$ (eV) & 53 $\pm$ 14& 32 $\pm$ 20& 81 $\pm$ 26& 88 $\pm$ 34& $< 37$ & 85 $\pm$ 41\ Si XIII He-$\beta$ (eV) & 57 $\pm$ 15& 64 $\pm$ 24& 50 $\pm$ 24& 78 $\pm$ 35& 56 $\pm$ 27& 67 $\pm$ 42\ S XV He-$\alpha$ (eV) & 117 $\pm$ 19& 45 $\pm$ 20& 184 $\pm$ 33& 131 $\pm$ 35& 69 $\pm$ 26& 259 $\pm$ 128\ S XVI Ly-$\alpha$ (eV) & 21 $\pm$ 8& $< 18$ & 33 $\pm$ 13& 67 $\pm$ 20& 10 $\pm$ 13& $< 23$\ Ar XVII He-$\alpha$ (eV) & 17 $\pm$ 5& $< 14$ & 23 $\pm$ 8& 35 $\pm$ 11& $< 4$ & 34 $\pm$ 18\ Ca XIX He-$\alpha$ (eV) & 7 $\pm$ 4& 5 $\pm$ 5& 7 $\pm$ 5& $< 2$ & 17 $\pm$ 7& 24 $\pm$ 14\ Fe K-$\alpha$ (eV) & 137 $\pm$ 21& 96 $\pm$ 21& 180 $\pm$ 32& 134 $\pm$ 27& 128 $\pm$ 37& 226 $\pm$ 120\ Fe XXV He-$\alpha$ (eV) & 404 $\pm$ 59& 465 $\pm$ 90& 350 $\pm$ 61& 396 $\pm$ 76& 388 $\pm$ 108& 411 $\pm$ 213\ Fe 
XXVI Ly-$\alpha$ (eV) & 225 $\pm$ 34& 266 $\pm$ 53& 195 $\pm$ 36& 209 $\pm$ 41& 217 $\pm$ 62& 335 $\pm$ 180\ $F$ ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$) & 0.45 & 0.22 & 0.21 & 0.26 & 0.15 & 0.04\ $uF$ ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$) & 0.45 & 0.22 & 0.21 & 0.26 & 0.15 & 0.04\ $\chi^2/\nu$ & 490/464 & 448/464 & 523/464 & 593/464 & 526/464 & 518/436 [lcccccc]{} $N_{\rm H1}$ ($10^{22}$ cm$^{-2}$) & 4.5$_{-0.2}^{+0.4}$ & 1.4$_{-2.8}^{+3.9}$ & 4.8$_{-0.3}^{+0.5}$ & 1.2$_{-0.4}^{+2.1}$ & 1.1$_{-0.3}^{+0.5}$ & 7.6$_{-1.6}^{+0.9}$\ $N_{\rm pc,H1}$ ($10^{22}$ cm$^{-2}$) & 4.6$_{- 0.4}^{+ 0.8}$ & 3.2$_{- 0.8}^{+ 3.0}$ & 4.3$_{- 0.8}^{+ 1.2}$ & 3.1$_{- 1.0}^{+ 0.8}$ & 4.5$_{- 1.2}^{+ 2.0}$ & 22.8$_{- 3.6}^{+ 0.8}$\ $f_{{\rm pc},1}$ & 0.95 & 0.95 & 0.95 & 0.95 & 0.95 & 0.95\ $kT_1$ (keV) & 0.56$_{-0.08}^{+0.04}$ & 0.67$_{-0.08}^{+0.08}$ & 0.63$_{-0.09}^{+0.10}$ & 0.69$_{-0.11}^{+0.13}$ & 0.69$_{-0.09}^{+0.19}$ & 0.58$_{-0.09}^{+0.23}$\ $EM_1$ ($10^{-4}$ cm$^{-6}$ pc) & 2.0$_{- 0.6}^{+ 2.0}$ & 0.0$_{- 0.8}^{+ 4.6}$ & 0.7$_{- 0.4}^{+ 1.2}$ & 0.0$_{- 0.0}^{+ 0.2}$ & 0.0$_{- 0.0}^{+ 0.0}$ & 6.7$_{- 5.1}^{+ 3.8}$\ $F_1$ ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$) & 0.03 & 0.01 & 0.02 & 0.01 & 0.01 & 0.01\ $uF_1$ ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$) & 0.2 & 0.1 & 0.1 & 0.0 & 0.0 & 1.7\ $N_{\rm H2}$ ($10^{22}$ cm$^{-2}$) & 15.3$_{- 1.2}^{+ 0.6}$ & 7.5$_{- 6.4}^{+ 7.4}$ & 16.2$_{- 0.7}^{+ 1.4}$ & 8.5$_{- 0.4}^{+ 3.2}$ & 4.9$_{- 0.2}^{+ 0.8}$ & 9.7$_{- 8.4}^{+ 7.8}$\ $N_{\rm pc,H2}$ ($10^{22}$ cm$^{-2}$) & 117$_{- 18f}^{+ 13}$ & 24$_{- 84f}^{+ 101}$ & 119$_{- 5f}^{+ 44}$ & 39$_{- 1f}^{+ 4}$ & 10$_{- 1f}^{+ 1}$ & 2$_{- 9f}^{+ 16}$\ $f_{{\rm pc},2}$ & 0.77 & 0.66 & 0.74 & 0.81 & 0.81 & 0.95\ $kT_2$ (keV) & 7.8$_{-0.1}^{+0.4}$ & 9.0$_{-0.3}^{+0.3}$ & 7.8$_{-0.5}^{+0.3}$ & 8.4$_{-0.2}^{+0.2}$ & 8.6$_{-0.3}^{+0.4}$ & 8.7$_{-0.4}^{+1.3}$\ $EM_2$ ($10^{-4}$ cm$^{-6}$ pc) & 1.71$_{- 0.24}^{+ 0.47}$ & 0.24$_{- 0.43}^{+ 0.88}$ & 0.76$_{- 0.22}^{+ 
0.55}$ & 0.44$_{- 0.02}^{+ 0.04}$ & 0.14$_{- 0.01}^{+ 0.01}$ & 0.02$_{- 0.01}^{+ 0.01}$\ $F_2$ ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$) & 0.41 & 0.20 & 0.18 & 0.24 & 0.15 & 0.03\ $uF_2$ ($10^{-13}$ erg cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$) & 3.60 & 1.71 & 2.10 & 1.01 & 0.32 & 0.06\ $Z_{\rm Si}/Z_{{\rm Si},\odot}$ & 0.19$_{-0.05}^{+0.06}$ & 0.82$_{-0.15}^{+0.45}$ & 0.26$_{-0.04}^{+0.09}$ & 1.64$_{-1.21}^{+0.78}$ & 0.57$_{-0.57}^{+0.58}$ & 0.46$_{-0.00}^{+0.27}$\ $Z_{\rm S}/Z_{{\rm S},\odot}$ & 0.30$_{-0.06}^{+0.07}$ & 0.00$_{-0.00}^{+0.38}$ & 0.47$_{-0.09}^{+0.10}$ & 1.51$_{-0.90}^{+0.87}$ & 1.14$_{-1.00}^{+0.79}$ & 0.82$_{-0.27}^{+0.52}$\ $Z_{\rm Fe}/Z_{{\rm Fe},\odot}$ & 0.45$_{-0.05}^{+0.05}$ & 0.97$_{-0.06}^{+0.03}$ & 0.39$_{-0.10}^{+0.05}$ & 0.67$_{-0.05}^{+0.04}$ & 0.81$_{-0.05}^{+0.05}$ & 0.72$_{-0.02}^{+0.29}$\ Fe K-$\alpha$ ($10^{-7}$ ph cm$^{-2}$ s$^{-1}$ arcmin$^{-1}$) & 3.8$_{- 0.8}^{+ 1.1}$ & 0.8$_{- 0.3}^{+ 1.4}$ & 2.4$_{- 0.4}^{+ 1.3}$ & 1.4$_{- 0.1}^{+ 0.1}$ & 0.6$_{- 0.1}^{+ 0.1}$ & 0.1$_{- 0.0}^{+ 0.0}$\ $\chi^2/\nu$ & 598/465 & 564/465 & 495/465 & 622/465 & 546/465 & 520/437\ [lccccccc]{} 174517.4–290650 & gc & 3392 & step & & $ 28^{+ 9}_{- 7}$ & $ 105^{+ 14}_{- 12}$ & $ 3$\ 174520.3–290143 & gc & 2943 & step & & $ 3^{+ 5}_{- 3}$ & $ 85^{+ 53}_{- 39}$ & $ 25$\ 174520.6–290152 & f & 3392 & flare & 91.0 & $ 28^{+ 9}_{- 7}$ & $1160^{+329}_{-256}$ & $ 40$\ & & 3393 & flare & 34.1 & $ 16^{+ 5}_{- 4}$ & $ 819^{+ 84}_{- 78}$ & $ 51$\ 174521.8–285912 & f & 3392 & flare & 24.2 & $ 4^{+ 3}_{- 3}$ & $ 189^{+ 63}_{- 56}$ & $ 44$\ \[5pt\] 174525.1–285703 & f & 3665 & step & & $ 10^{+ 5}_{- 4}$ & $ 288^{+137}_{-110}$ & $ 28$\ 174530.3–290341 & gc & 3392 & flare & 22.0 & $ 6^{+ 9}_{- 5}$ & $ 86^{+ 22}_{- 18}$ & $ 14$\ 174531.0–285605 & gc & 3393 & step & & $ 2^{+ 2}_{- 2}$ & $ 40^{+ 11}_{- 9}$ & $ 18$\ 174533.4–285328 & f & 0242 & flare & 8.2 & $ 26^{+ 11}_{- 9}$ & $ 217^{+ 67}_{- 51}$ & $ 8$\ 174534.5–290236 & gc & 3392 & step & & $< 1$ & $ 10^{+ 4}_{- 4}$ 
& $> 5$\ \[5pt\] 174535.6–290133 & gc & 3665 & flare & 61.2 & $ 14^{+ 8}_{- 6}$ & $ 162^{+ 21}_{- 20}$ & $ 11$\ 174535.8–290159 & gc & 3393 & step & & $< 3$ & $ 12^{+ 4}_{- 3}$ & $> 2$\ 174535.9–290806 & gc & 3393 & step & & $ 4^{+ 5}_{- 4}$ & $ 30^{+ 9}_{- 8}$ & $ 6$\ 174536.3–285545 & f & 3392 & flare & 3.8 & $ 3^{+ 2}_{- 2}$ & $ 182^{+ 77}_{- 64}$ & $ 63$\ 174538.2–285602 & f & 3665 & flare & 28.4 & $ 13^{+ 15}_{- 9}$ & $ 116^{+ 21}_{- 19}$ & $ 9 $\ \[5pt\] 174538.3–290048 & gc & 3392 & step & & $ 26^{+ 6}_{- 5}$ & $ 56^{+ 9}_{- 8}$ & $ 2$\ 174540.1–290804 & f & 3392 & flare & 87.2 & $< 3$ & $ 276^{+ 36}_{- 32}$ & $> 76$\ & & 3393 & step & & $ 24^{+ 10}_{- 8}$ & $ 62^{+ 8}_{- 8}$ & $ 2$\ 174540.4–285831 & gc & 3392 & step & & $ 5^{+ 3}_{- 2}$ & $ 23^{+ 7}_{- 6}$ & $ 4$\ 174541.4–290348 & gc & 3392 & step & & $< 2$ & $ 20^{+ 5}_{- 4}$ & $> 8$\ \[5pt\] 174541.5–290752 & f & 2943 & flare & 7.6 & $< 11$ & $ 124^{+ 45}_{- 40}$ & $> 7$\ 174541.8–290319 & gc & 1561a & & & & &\ 174542.8–285352 & gc & 0242 & step & & $< 3$ & $ 36^{+ 18}_{- 15}$ & $> 6$\ 174542.9–285522 & f & 3665 & flare & 3.3 & $< 4$ & $ 190^{+ 95}_{- 75}$ & $> 30$\ 174543.4–290347 & gc & 1561a & & & & &\ \[5pt\] 174543.9–290456 & f & 3392 & flare & 11.1 & $ 81^{+ 8}_{- 7}$ & $1397^{+466}_{-370}$ & $ 17$\ & & 3663 & flare & 6.1 & $113^{+ 21}_{- 19}$ & $ 433^{+ 87}_{- 77}$ & $ 3$\ 174545.0–290336 & gc & 3663 & step & & $< 8$ & $ 37^{+ 13}_{- 12}$ & $> 3$\ 174546.8–290252 & gc & 1561b & & & & &\ 174547.4–290817 & f & 3393 & step & & $ 7^{+ 4}_{- 4}$ & $ 54^{+ 18}_{- 16}$ & $ 8$\ \[5pt\] 174548.0–290352 & f & 3665 & flare & 11.0 & $ 4^{+ 5}_{- 3}$ & $ 84^{+ 29}_{- 25}$ & $ 22 $\ 174548.4–290234 & gc & 2943 & flare & 9.9 & $< 22$ & $ 84^{+ 30}_{- 26}$ & $> 2$\ 174548.4–290832 & gc & 3393 & step & & $< 3$ & $ 21^{+ 10}_{- 9}$ & $> 4$\ 174548.6–290522 & f & 3392 & flare & 35.3 & $ 12^{+ 5}_{- 4}$ & $ 226^{+ 53}_{- 44}$ & $ 18 $\ 174550.7–290434 & f & 3393 & flare & 15.0 & $< 4$ & $ 46^{+ 21}_{- 17}$ & $> 7$\ 
\[5pt\] 174552.1–290422 & gc & 3393 & step & & $ 5^{+ 3}_{- 3}$ & $ 26^{+ 9}_{- 8}$ & $ 5$\ 174552.2–290744 & gc & 3392 & flare & 0.1 & $ 99^{+ 14}_{- 13}$ & $2123$ & $ 21$\ 174552.9–290358 & f & 0242 & flare & 0.7 & $ 34^{+ 15}_{- 13}$ & $ 652^{+330}_{-264}$ & $ 19$\ & & 3393 & flare & 7.3 & $ 20^{+ 13}_{- 10}$ & $ 404^{+116}_{-105}$ & $ 20$\ 174556.9–285819 & f & 2952 & & & & &\ \[5pt\] 174558.5–290451 & f & 3392 & flare & 12.6 & $ 14^{+ 5}_{- 4}$ & $ 253^{+ 50}_{- 46}$ & $ 18$\ 174559.0–290418 & f & 3392 & step & & $< 3$ & $ 24^{+ 6}_{- 6}$ & $> 5$\ 174605.2–290700 & gc & 3393 & flare & 24.8 & $< 4$ & $ 269^{+ 98}_{- 86}$ & $> 47$\ 174612.4–290234 & f & 3393 & step & & $< 2$ & $ 17^{+ 8}_{- 7}$ & $> 3$\ [lccccccc]{} 174503.9–290051 & gc & 3392 & $ 44^{+ 6}_{- 6}$ & 0242 & $ 88^{+ 24}_{- 20}$ & $ 2$\ 174507.0–290356 & gc & 1561b & $< 9$ & 2952 & $ 41^{+ 25}_{- 19}$ & $> 2$\ 174514.1–285426 & gc & 2953 & $< 13$ & 3392 & $ 30^{+ 6}_{- 5}$ & $> 1$\ 174517.5–285646 & gc & 3663 & $< 3$ & 3665 & $ 14^{+ 5}_{- 4}$ & $> 2$\ 174519.8–290114 & gc & 3663 & $ 12^{+ 7}_{- 6}$ & 0242 & $ 35^{+ 10}_{- 9}$ & $ 3$\ \[5pt\] 174520.5–285927 & gc & 3665 & $< 2$ & 2953 & $ 22^{+ 29}_{- 16}$ & $> 2$\ 174520.6–285712 & gc & 0242 & $< 8$ & 2954 & $ 150^{+ 37}_{- 32}$ & $> 15$\ 174520.8–285304 & f & 3665 & $< 3$ & 3393 & $ 15^{+ 4}_{- 4}$ & $> 3$\ 174521.7–285812 & gc & 3665 & $< 2$ & 2952 & $ 13^{+ 19}_{- 11}$ & $> 1$\ 174521.9–290616 & gc & 1561b & $< 19$ & 2953 & $ 68^{+ 30}_{- 24}$ & $> 2$\ \[5pt\] 174522.4–285707 & gc & 3665 & $< 2$ & 2953 & $ 21^{+ 22}_{- 13}$ & $> 3$\ 174523.1–290205 & gc & 1561a & $< 3$ & 2954 & $ 11^{+ 13}_{- 8}$ & $> 1$\ 174526.4–290148 & gc & 3392 & $< 1$ & 2951 & $ 8^{+ 12}_{- 7}$ & $> 1$\ 174526.7–290220 & gc & 3665 & $< 2$ & 2952 & $ 18^{+ 15}_{- 10}$ & $> 4$\ 174527.4–285938 & gc & 0242 & $< 4$ & 2943 & $ 28^{+ 10}_{- 8}$ & $> 4$\ \[5pt\] 174529.0–290406 & gc & 1561b & $< 8$ & 3663 & $ 25^{+ 9}_{- 7}$ & $> 2$\ 174529.6–285432 & gc & 2952 & $< 9$ & 3393 & $ 
40^{+ 6}_{- 5}$ & $> 3$\ 174530.5–290323 & gc & 3392 & $< 1$ & 3663 & $ 8^{+ 6}_{- 4}$ & $> 5$\ 174531.3–285949 & gc & 3393 & $< 1$ & 2953 & $ 25^{+ 26}_{- 16}$ & $> 6$\ 174531.8–290000 & gc & 3392 & $< 1$ & 2951 & $ 11^{+ 28}_{- 11}$ & $> 0$\ \[5pt\] 174532.7–290552 & f & 0242 & $< 3$ & 2954 & $ 108^{+ 33}_{- 27}$ & $> 28$\ 174532.9–285823 & gc & 3393 & $ 4^{+ 2}_{- 2}$ & 2952 & $ 20^{+ 16}_{- 11}$ & $ 5$\ 174533.0–285355 & gc & 0242 & $< 5$ & 3393 & $ 38^{+ 6}_{- 5}$ & $> 6$\ 174534.2–290119 & gc & 3393 & $< 3$ & 2951 & $ 74^{+ 26}_{- 21}$ & $> 20$\ 174535.5–290124 & gc & 0242 & $< 3$ & 2951 & $ 255^{+ 45}_{- 40}$ & $> 80$\ \[5pt\] 174535.9–285825 & gc & 3392 & $< 1$ & 1561b & $ 9^{+ 16}_{- 7}$ & $> 1$\ 174536.1–285638 & f & 2952 & $ 65^{+ 26}_{- 21}$ & 1561b & $ 199^{+ 39}_{- 35}$ & $ 3$\ 174536.6–290109 & gc & 3393 & $< 2$ & 1561a & $ 13^{+ 13}_{- 8}$ & $> 2$\ 174537.2–285459 & f & 1561a & $< 3$ & 2951 & $ 59^{+ 27}_{- 21}$ & $> 11$\ 174537.5–290125 & gc & 3665 & $< 2$ & 2951 & $ 24^{+ 16}_{- 11}$ & $> 5$\ \[5pt\] 174538.0–290022 & gc & 3665 & $ 39^{+ 7}_{- 6}$ & 0242 & $ 290^{+ 31}_{- 29}$ & $ 7 $\ 174538.4–290044 & gc & 1561a & $< 2$ & 2954 & $ 108^{+ 30}_{- 26}$ & $> 33$\ 174538.7–290134 & gc & 1561a & $< 5$ & 3392 & $ 19^{+ 4}_{- 3}$ & $> 2$\ 174539.1–290112 & gc & 1561a & $< 4$ & 3392 & $ 10^{+ 3}_{- 3}$ & $> 2$\ 174539.5–285454 & f & 3392 & $ 3^{+ 2}_{- 2}$ & 2951 & $ 25^{+ 18}_{- 13}$ & $ 8 $\ \[5pt\] 174540.1–290055 & gc & 2954 & $ 16^{+ 17}_{- 12}$ & 3663 & $ 86^{+ 16}_{- 14}$ & $ 5$\ 174540.6–290001 & gc & 0242 & $< 9$ & 1561b & $ 134^{+ 33}_{- 28}$ & $> 11$\ 174540.8–290040 & gc & 0242 & $< 3$ & 2953 & $ 10^{+ 15}_{- 10}$ & $> 0$\ 174541.0–290014 & gc & 1561a & $< 5$ & 3392 & $ 204^{+ 0}_{- 0}$ & $> 37$\ 174541.5–285148 & f & 3393 & $< 5$ & 2953 & $ 40^{+ 32}_{- 24}$ & $> 3$\ \[5pt\] 174541.5–285814 & f & 2951 & $< 14$ & 3663 & $ 170^{+ 21}_{- 19}$ & $> 11$\ 174541.7–285555 & gc & 3663 & $< 4$ & 3392 & $ 24^{+ 4}_{- 4}$ & $> 4$\ 174542.2–285732 & gc 
& 3665 & $ 2^{+ 2}_{- 2}$ & 3392 & $ 57^{+ 6}_{- 6}$ & $ 25 $\ 174542.2–290132 & gc & 3393 & $< 1$ & 2951 & $ 14^{+ 15}_{- 9}$ & $> 4$\ 174542.5–285722 & gc & 3665 & $< 1$ & 1561b & $ 10^{+ 11}_{- 7}$ & $> 2$\ 174543.4–285742 & gc & 0242 & $< 2$ & 2953 & $ 37^{+ 30}_{- 20}$ & $> 7$\ \[5pt\] 174543.4–285900 & gc & 0242 & $< 3$ & 2951 & $ 80^{+ 43}_{- 32}$ & $> 15$\ 174543.6–285629 & gc & 3665 & $ 2^{+ 3}_{- 2}$ & 2952 & $ 23^{+ 45}_{- 20}$ & $ 9 $\ 174543.9–290245 & gc & 0242 & $< 3$ & 2952 & $ 16^{+ 15}_{- 10}$ & $> 2$\ 174544.2–290644 & gc & 3665 & $< 4$ & 3392 & $ 13^{+ 4}_{- 3}$ & $> 2$\ 174545.2–285828 & f & 1561b & $ 68^{+ 26}_{- 21}$ & 2951 & $ 208^{+ 43}_{- 38}$ & $ 3$\ \[5pt\] 174546.1–285831 & gc & 3665 & $< 5$ & 1561b & $ 39^{+ 21}_{- 16}$ & $> 4$\ 174546.6–290356 & gc & 3393 & $< 1$ & 2953 & $ 11^{+ 13}_{- 8}$ & $> 3$\ 174547.8–290145 & gc & 1561a & $< 5$ & 3663 & $ 21^{+ 8}_{- 7}$ & $> 2$\ 174549.5–285815 & gc & 3392 & $< 2$ & 1561a & $ 11^{+ 7}_{- 5}$ & $> 2$\ 174550.5–285239 & f & 1561a & $ 23^{+ 26}_{- 19}$ & 1561b & $ 112^{+ 40}_{- 34}$ & $ 4 $\ \[5pt\] 174550.9–285430 & gc & 3393 & $ 29^{+ 7}_{- 6}$ & 2951 & $ 88^{+ 33}_{- 27}$ & $ 3$\ 174552.0–290324 & gc & 0242 & $< 6$ & 2953 & $ 22^{+ 17}_{- 12}$ & $> 1$\ 174552.5–285759 & gc & 3393 & $< 1$ & 2943 & $ 10^{+ 7}_{- 5}$ & $> 4$\ 174553.3–290444 & f & 2943 & $ 5^{+ 6}_{- 4}$ & 3393 & $ 46^{+ 6}_{- 5}$ & $ 8 $\ 174553.3–290632 & gc & 3665 & $ 4^{+ 4}_{- 3}$ & 2954 & $ 38^{+ 21}_{- 16}$ & $ 10$\ 174554.2–285729 & gc & 3665 & $ 2^{+ 3}_{- 2}$ & 3392 & $ 10^{+ 3}_{- 3}$ & $ 4$\ \[5pt\] [@@split]{}174558.4–290120 & f & 3665 & $< 3$ & 1561a & $ 36^{+ 18}_{- 14}$ & $> 7$\ 174558.9–290724 & gc & 3665 & $ 65^{+ 10}_{- 9}$ & 0242 & $ 138^{+ 22}_{- 20}$ & $ 2$\ 174601.0–285854 & gc & 2952 & $ 9^{+ 13}_{- 8}$ & 1561b & $ 46^{+ 21}_{- 17}$ & $ 4$\ 174601.4–285416 & f & 3392 & $< 5$ & 2952 & $ 77^{+ 34}_{- 28}$ & $> 9$\ 174603.7–290247 & f & 1561a & $< 7$ & 3393 & $ 30^{+ 5}_{- 5}$ & $> 3$\ \[5pt\] 
174606.2–290941 & gc & 3665 & $< 11$ & 3392 & $ 44^{+ 7}_{- 7}$ & $> 3$\ 174607.5–285951 & f & 2952 & $ 34^{+ 21}_{- 16}$ & 1561a & $ 209^{+ 26}_{- 24}$ & $ 6 $\ 174610.8–290019 & gc & 3665 & $< 4$ & 1561a & $ 60^{+ 15}_{- 13}$ & $> 10$\ 174612.3–285706 & gc & 3665 & $< 5$ & 0242 & $ 78^{+ 17}_{- 15}$ & $> 13$\ 174613.9–285924 & gc & 3393 & $< 4$ & 2951 & $ 47^{+ 25}_{- 20}$ & $> 7$\ \[5pt\] 174614.0–290220 & f & 3665 & $< 5$ & 0242 & $ 74^{+ 16}_{- 14}$ & $> 11$\ 174615.9–290257 & f & 2951 & $< 11$ & 2943 & $ 35^{+ 13}_{- 11}$ & $> 2$\ 174624.4–285712 & f & 3393 & $ 25^{+ 8}_{- 7}$ & 2954 & $ 88^{+ 72}_{- 52}$ & $ 3$\ [lccccccccc]{} AX J2315–0592 & 50 & 0.07 & & 17 & 6.8 & 900 & 5360 & 90 & \[1\]\ RX J1802.1$+$1804 & 0.5 & 13 & & $>7$ & 6.7 & 4000 & 6840 & 100 & \[2\]\ AX J1842.8–0423 & 4 & 5 & 2.9 & 5.1 & 6.7 & 4000 & & & \[3\]\ XMM J174457–2850.3 & 6.5 & 6 & 0.98 & & 6.7 & 180 & & & \[4\]\ \[5pt\] IGR J16358–4726 & 70 & 33 & 0.5 & & 6.4 & 130 & 5580 & 37 & \[5,6\]\ IGR J16318–4848 & 8 & 196 & 1.6 & & 6.4 & 2000 & & & \[7,8\]\ IGR J16320–4751 & 400 & 21 & 2.5 & & & & & & \[9,10\]\ XMM J174544–2913.0 & 6.5 & 12 & & & 6.7 & 2000 & & & \[4\]\ \[5pt\] AX J1820.5–1434 & 23 & 9.8 & 0.9 & & 6.4 & 90 & 152 & 57 & \[11\]\ AX J170006–4157 & 5 & 6 & 0.2 & & & $<$1200 & 715 & 50 & \[12\]\ AX J1740.1–2847 & 4 & 2.5 & 0.7 & & & $< 500$ & 729 & 100 & \[13\]\ 1SAX J1452.8–5949 & 0.6 & 1.9 & 1.4 & & 6.4 & $> 1300$ & 437 & 74 & \[14\]\ AX J183220–0840 & 11 & 1.3 & 0.8 & & 6.7 & 450 & 1549 & 63 & \[15\]\ [^1]: [[http://www.astro.psu.edu/xray/docs/TARA/]{}]{} [^2]: [[http://www.astro.psu.edu/users/chartas/xcontdir/xcont.html]{}]{} [^3]: The spectra, response functions, effective area functions, background estimates, and event lists for each source are available from [[http://www.astro.psu.edu/users/niel/galcen-xray-data/galcen-xray-data.html]{}]{}. [^4]: We note that the abundance parameter merely measures the relative strengths of the lines and the continuum. 
If the continuum is non-thermal or the lines are produced by photo-ionization, the abundance parameter will not measure the physical abundances of metals in the plasma. [^5]: We first computed 100 simulations. For many of the sources, the $f$ values from the simulations exceeded the observed $f$ value more than once, indicating the significance of the line was $< 99$%. For the rest, we ran 1000 simulations, to establish whether the line was at least 99% significant (i.e., $< 10$ $f$ values exceeding the observed one). [^6]: We did not simply sum the background spectra obtained for individual sources, because doing so would have double-counted events from background regions that overlapped. [^7]: The requirement on the total flux was designed to ensure that the variability was not due to systematic changes in the background estimate. Such changes occurred where there were gradients in the diffuse emission, because the regions in which the background was estimated were not identical for each observation (Section \[sec:obs\]). [^8]: As we discuss in Section \[sec:avspec\], if there is local X-ray absorption that affects only a fraction of the emitting region, the inferred spectrum can seem artificially flat, and the inferred luminosity would be too low. Therefore, the hardness ratios and photon fluxes could potentially be misleading. This is nearly impossible to avoid. Indeed, the spectral models we applied to the individual sources suffer from the same shortcoming, because we also assume that the emitting region is absorbed uniformly. Using hardness ratios and photon fluxes is the best option, since unlike the parameters of the spectral models, the former can be derived for almost all of the sources in our sample. [^9]: [[http://minerva.uni-sw.gwdg.de/cvcat/tpp3.pl]{}]{}; Kube et al. (2003)
--- abstract: 'We prove that when $n\ge 5$, the Dehn function of $\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$ is at most quartic. The proof involves decomposing a disc in $\operatorname{SL}(n,{\ensuremath{\mathbb{R}}})/\operatorname{SO}(n)$ into a quadratic number of loops in generalized Siegel sets. By mapping these loops into $\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$ and replacing large elementary matrices by “shortcuts,” we obtain words of a particular form, and we use combinatorial techniques to fill these loops.' address: | Institut des Hautes Études Scientifiques,\ Le Bois Marie, 35 route de Chartres, F-91440 Bures-sur-Yvette, France author: - Robert Young title: 'A polynomial isoperimetric inequality for $\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$' --- Introduction ============ The Dehn function is a geometric invariant of a space (typically, a riemannian manifold or a simplicial complex) which measures the difficulty of filling closed curves with discs. This can be made into a group invariant by defining the Dehn function of a group to be the Dehn function of a space on which the group acts cocompactly. The choice of space affects the Dehn function, but its rate of growth depends solely on the group. The study of Dehn functions of lattices in semisimple Lie groups is a natural direction. For cocompact lattices, this is straightforward; such a lattice acts on a non-positively curved symmetric space $X$, and this non-positive curvature gives rise to a linear or quadratic Dehn function. Non-cocompact lattices have more complicated behavior. The key difference is that if the lattice is not cocompact, it acts cocompactly on a subset of $X$ rather than the whole thing, and the boundary of this subset may contribute to the Dehn function. In the case that $\Gamma$ has ${\ensuremath{\mathbb{Q}}}$-rank 1, the Dehn function is almost completely understood, and depends primarily on the ${\ensuremath{\mathbb{R}}}$-rank of $G$. 
In this case, $\Gamma$ acts cocompactly on a space consisting of $X$ with infinitely many disjoint horoballs removed. When $G$ has ${\ensuremath{\mathbb{R}}}$-rank 1, the boundaries of these horoballs correspond to nilpotent groups, and the lattice is hyperbolic relative to these nilpotent groups. The Dehn function of the lattice is thus equal to that of the nilpotent groups, and Gromov showed that unless $X$ is the complex, quaternionic, or Cayley hyperbolic plane, the Dehn function is at most quadratic [@GroAII]. If $X$ is the complex or quaternionic hyperbolic plane, the Dehn function is cubic [@GroAII; @PittetCLB]; if $X$ is the Cayley hyperbolic plane, the precise growth rate is unknown, but is at most cubic. When $G$ has ${\ensuremath{\mathbb{R}}}$-rank 2 and $\Gamma$ has ${\ensuremath{\mathbb{Q}}}$-rank 1 or 2, Leuzinger and Pittet [@LeuzPitRk2] proved that the Dehn function grows exponentially. As in the ${\ensuremath{\mathbb{R}}}$-rank 1 case, the proof relies on understanding the subgroups corresponding to the removed horoballs, but in this case the subgroups are solvable and have exponential Dehn function. Finally, when $G$ has ${\ensuremath{\mathbb{R}}}$-rank 3 or greater and $\Gamma$ has ${\ensuremath{\mathbb{Q}}}$-rank 1, Drutu [@DrutuFilling] has shown that the boundary of a horoball satisfies a quadratic filling inequality and that $\Gamma$ enjoys an “asymptotically quadratic” Dehn function, i.e., its Dehn function is bounded by $n^{2+\epsilon}$ for any $\epsilon>0$. When $\Gamma$ has ${\ensuremath{\mathbb{Q}}}$-rank larger than $1$, the geometry of the space becomes more complicated. The main difference is that the removed horoballs are no longer disjoint, so many of the previous arguments fail. In many cases, the best known result is due to Gromov, who sketched a proof that the Dehn function of $\Gamma$ is bounded above by an exponential function [@GroAII 5.A$_7$]. A full proof of this fact was given by Leuzinger [@LeuzingerPolyRet]. 
In this paper, we consider $\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$. This is a lattice with ${\ensuremath{\mathbb{Q}}}$-rank $n-1$ in a group with ${\ensuremath{\mathbb{R}}}$-rank $n-1$, so when $n$ is small, the methods above apply. When $n=2$, the group $\operatorname{SL}(2,{\ensuremath{\mathbb{Z}}})$ is virtually free, and thus hyperbolic. As a consequence, its Dehn function is linear. When $n=3$, the result of Leuzinger and Pittet mentioned above implies that the Dehn function of $\operatorname{SL}(3,{\ensuremath{\mathbb{Z}}})$ grows exponentially; this was first proved by Epstein and Thurston [@ECHLPT]. Much less is known about the Dehn functions of lattices in $\operatorname{SL}(n,{\ensuremath{\mathbb{R}}})$ when $n\ge 4$. By the results of Gromov and Leuzinger above, the Dehn function of any such lattice is bounded by an exponential function, but the Dehn function may be polynomial in many cases. Thurston [@ECHLPT] conjectured that when $n\ge 4$, $\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$ satisfies the isoperimetric inequality $$\delta_{\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})}(\ell)\lesssim \ell^2.$$ In this paper, we will prove: \[thm:mainthm\] When $n\ge 5$, $\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$ satisfies the isoperimetric inequality $$\delta_{\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})}(\ell)\lesssim \ell^4.$$ In Section \[sec:prelims\], we present some preliminaries, and in Section \[sec:overview\], we sketch an overview of the proof. In Sections \[sec:fundset\]–\[sec:filling\], we prove Theorem \[thm:mainthm\]. Some of the ideas in this work were inspired by discussions at the American Institute of Mathematics workshop, “The Isoperimetric Inequality for $\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$,” and the author would like to thank the organizers, Nathan Broaddus, Tim Riley, and Kevin Wortman; and participants, especially Mladen Bestvina, Alex Eskin, Martin Kassabov, and Christophe Pittet.
The author would also like to thank Tim Riley and Yves de Cornulier for many helpful conversations while the author was visiting Bristol University and Université de Rennes. Preliminaries {#sec:prelims} ============= In this section, we recall several facts about $\operatorname{SL}(p,{\ensuremath{\mathbb{Z}}})$, $\operatorname{SL}(p,{\ensuremath{\mathbb{R}}})$, and about Dehn functions. We provide only a minimal introduction to Dehn functions here; for a survey with examples, see for instance [@Bridson]. The Dehn function is a group invariant which gives one way to describe the difficulty of determining whether a word in a group represents the identity. It can be described both combinatorially and geometrically, and the interaction between these two viewpoints is often crucial. We first give some terminology. If $X$ is a set, and $x_i\in X$ for $1\le i\le n$, we call the formal product $x_1\dots x_n$ a word in $X$. Let $X^*$ be the set of words in $X\cup X^{-1}$, where $X^{-1}$ is the set of formal inverses of elements of $X$. We denote the empty word by ${\varepsilon}$. If $w\in X^*$, we can write $w=x_1x_2\dots x_n$, and we define the length $\ell(w)$ of $w$ to be $n$. Note especially that these words are not reduced; that is, $x$ may appear next to $x^{-1}$. If $X\subset H$ for some group $H$, there is a natural evaluation map $X^*\to H$, and we say that words represent elements of $H$. Using these concepts, we can describe the combinatorial Dehn function. If $$H=\langle h_1,\dots,h_d \mid r_1,\dots,r_s\rangle$$ is a finitely presented group, we can let $\Sigma=\{h_1,\dots,h_d\}$ and consider words in $\Sigma^*$. If a word $w$ represents the identity, then there is a way to prove this using the relations.
That is, there is a sequence of steps which reduces $w$ to the empty word, where each step is a free expansion (insertion of a subword $x_i^{\pm 1}x_i^{\mp1}$), free reduction (deletion of a subword $x_i^{\pm 1}x_i^{\mp1}$), or the application of a relator (insertion or deletion of one of the $r_i$). We call the number of applications of relators in a sequence its [*cost*]{}, and we call the minimum cost of a sequence which starts at $w$ and ends at $\varepsilon$ the [*filling area*]{} of $w$, denoted by $\delta_H(w)$. We then define the [*Dehn function*]{} of $H$ to be $$\delta_H(n)=\max_{\ell(w)\le n} \delta_H(w),$$ where the maximum is taken over words representing the identity. This depends [*a priori*]{} on the chosen presentation of $H$; we will see that the growth rate of $\delta_H$ is independent of this choice. For convenience, if $v,w$ are two words representing the same element of $H$, we define $\delta_H(v,w)=\delta_H(vw^{-1})$; this denotes the minimum cost to transform $v$ to $w$. This can also be interpreted geometrically. If $K_H$ is the [*presentation complex*]{} of $H$ (a simply-connected 2-complex whose 1-skeleton is the Cayley graph of $H$ and whose $2$-cells correspond to translates of the relators), then $w$ corresponds to a closed curve in the $1$-skeleton of $K_H$. Similarly, the sequence of steps reducing $w$ to the identity corresponds to a homotopy contracting this closed curve to a point. More generally, if $X$ is a riemannian manifold or simplicial complex, we can define the filling area $\delta_X(\gamma)$ of a Lipschitz curve $\gamma:S^1\to X$ to be the infimal area of a Lipschitz map $D^2\to X$ which extends $\gamma$. Then we can define the Dehn function of $X$ to be $$\delta_X(n)=\sup_{\ell(\gamma)\le n} \delta_X(\gamma),$$ where the supremum is taken over null-homotopic closed curves.
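To make the combinatorial definition concrete, here is a minimal brute-force sketch (ours, not from the paper; the encoding and the names `moves` and `filling_area` are hypothetical) that computes $\delta_H(w)$ for the presentation $\mathbb{Z}^2=\langle a,b\mid aba^{-1}b^{-1}\rangle$ by a 0-1 breadth-first search over the three kinds of steps, with a cap on intermediate word length.

```python
from collections import deque

# Encode a word in {a, b}^* as a tuple of nonzero ints: a = 1, b = 2,
# with inverses as negatives.  (Encoding is ours, not the paper's.)
RELATOR = (1, 2, -1, -2)   # the single relator of Z^2 = <a, b | [a, b]>
GENS = (1, -1, 2, -2)

def moves(w, max_len):
    """Yield (neighbor, cost) pairs for the three kinds of steps."""
    # free reduction: delete an adjacent pair x x^-1 (cost 0)
    for i in range(len(w) - 1):
        if w[i] == -w[i + 1]:
            yield w[:i] + w[i + 2:], 0
    # free expansion: insert a pair x x^-1 (cost 0)
    if len(w) + 2 <= max_len:
        for i in range(len(w) + 1):
            for x in GENS:
                yield w[:i] + (x, -x) + w[i:], 0
    # relator application: insert or delete r or r^-1 (cost 1)
    for r in (RELATOR, tuple(-x for x in reversed(RELATOR))):
        for i in range(len(w) - len(r) + 1):
            if w[i:i + len(r)] == r:
                yield w[:i] + w[i + len(r):], 1
        if len(w) + len(r) <= max_len:
            for i in range(len(w) + 1):
                yield w[:i] + r + w[i:], 1

def filling_area(w, max_len=8):
    """Minimum number of relator applications taking w to the empty word,
    found by 0-1 BFS (free moves cost 0, relator moves cost 1)."""
    dist = {w: 0}
    dq = deque([w])
    while dq:
        u = dq.popleft()
        if u == ():
            return dist[u]
        for v, c in moves(u, max_len):
            if v not in dist or dist[v] > dist[u] + c:
                dist[v] = dist[u] + c
                (dq.appendleft if c == 0 else dq.append)(v)
    return None  # not fillable within the length cap

assert filling_area((1, 2, -1, -2)) == 1          # [a, b] is the relator itself
assert filling_area((1, 1, 2, -1, -1, -2)) == 2   # [a^2, b] needs two applications
```

For $[a^2,b]$ the search finds cost $2$, matching the free identity $a^2ba^{-2}b^{-1}=a\,[a,b]\,a^{-1}\cdot[a,b]$ and the linear growth of the filling area of $[a^n,b]$ in $\mathbb{Z}^2$; enlarging `max_len` can only lower the reported cost, never raise it.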
As in the combinatorial case, if $\beta$ and $\gamma$ are two curves connecting the same points which are homotopic with their endpoints fixed, we define $\delta_X(\beta,\gamma)$ to be the infimal area of a homotopy between $\beta$ and $\gamma$ which fixes their endpoints. Gromov stated a theorem connecting these two definitions, proofs of which can be found in [@Bridson] and [@BurTab]: \[thm:GroFill\] If $X$ is a simply connected riemannian manifold or simplicial complex and $H$ is a finitely presented group acting properly discontinuously, cocompactly, and by isometries on $X$, then $\delta_H\sim \delta_X$. Here, $\sim$ is an equivalence relation which requires that $\delta_H$ and $\delta_X$ have the same growth rate according to the following definition: if $f,g:{\ensuremath{\mathbb{N}}}\to {\ensuremath{\mathbb{N}}}$, let $f\lesssim g$ if and only if there is a $c$ such that $$f(n)\le c g(cn+c)+c\text{ for all }n$$ and $f\sim g$ if and only if $f\lesssim g$ and $g\lesssim f$. One consequence of Theorem \[thm:GroFill\] is that the Dehn functions corresponding to different presentations are equivalent under this relation. We state the following lemma, which is used in the proof of Theorem \[thm:GroFill\]. The lemma follows from the Federer-Fleming Deformation Lemma [@FedFlem] or from the Cellulation Lemma [@Bridson 5.2.3]: \[lem:approx\] Let $H$ and $X$ be as in the Filling Theorem, and let $f:K_H\to X$ be an $H$-equivariant map of a presentation complex for $H$ to $X$. There is a $c$ such that: 1. Let $s:[0,1]\to X$ connect $f(e)$ and $f(h)$, where $e$ is the identity in $H$ and $h\in H$. There is a word $w$ which represents $h$ and which has length $\ell(w)\le c\ell(s)+c$. If $X$ is simply connected, then $w$ approximates $s$ in the sense that if $\gamma_w:[0,1]\to K_H$ is the curve corresponding to $w$, then $$\delta_X(s,\gamma_w)\le c \ell(s)+c.$$ 2.
If $w$ is a word representing the identity in $H$ and $\gamma:S^1\to K_H$ is the corresponding closed curve in $K_H$, then $$\delta_H(w)\le c(\ell(w)+\delta_X(f\circ \gamma)).$$ We now set out notation for $\operatorname{SL}(p)$ and several of its subgroups. In the following, ${\ensuremath{\mathbb{K}}}$ represents either ${\ensuremath{\mathbb{Z}}}$ or ${\ensuremath{\mathbb{R}}}$; when it is omitted, we take it to be ${\ensuremath{\mathbb{R}}}$. Let $G=\operatorname{SL}(p,{\ensuremath{\mathbb{R}}})$ and let $\Gamma=\operatorname{SL}(p,{\ensuremath{\mathbb{Z}}})$. Let $z_1,\dots,z_p$ generate ${\ensuremath{\mathbb{Z}}}^p$, and if $S\subset \{1,\dots,p\}$, let ${\ensuremath{\mathbb{K}}}^S=\langle z_s\rangle_{s\in S}$ be a subspace of ${\ensuremath{\mathbb{K}}}^p$. If $q\le p$, there are many ways to include $\operatorname{SL}(q)$ in $\operatorname{SL}(p)$. Let $\operatorname{SL}(S)$ be the copy of $\operatorname{SL}(\#S)$ in $\operatorname{SL}(p)$ which acts on ${\ensuremath{\mathbb{R}}}^S$ and fixes $z_t$ for $t\not \in S$. If $S_1,\dots,S_n$ are disjoint subsets of $\{1,\dots, p\}$ such that $\bigcup S_i=\{1,\dots,p\}$, let $$U(S_1,\dots,S_n;{\ensuremath{\mathbb{K}}})\subset \operatorname{SL}(p,{\ensuremath{\mathbb{K}}})$$ be the subgroup of matrices preserving the flag $${\ensuremath{\mathbb{R}}}^{S_n}\subset {\ensuremath{\mathbb{R}}}^{S_n\cup S_{n-1}} \subset \dots\subset {\ensuremath{\mathbb{R}}}^p$$ when acting on the right. If the $S_i$ are sets of consecutive integers in increasing order, $U(S_1,\dots,S_n;{\ensuremath{\mathbb{K}}})$ is block upper triangular.
For example, $U(\{1\},\{2,3,4\};{\ensuremath{\mathbb{K}}})$ is the subgroup of $\operatorname{SL}(4,{\ensuremath{\mathbb{K}}})$ consisting of matrices of the form: $$\begin{pmatrix} * & * & * & * \\ 0 & * & * & * \\ 0 & * & * & * \\ 0 & * & * & * \end{pmatrix}.$$ If $d_1,\dots,d_n>0$, let $U(d_1,\dots,d_n;{\ensuremath{\mathbb{K}}})$ be the group of upper block triangular matrices with blocks of the given sizes, so that the subgroup illustrated above is $U(1,3;{\ensuremath{\mathbb{K}}})$. Each group $U(d_1,\dots,d_n;{\ensuremath{\mathbb{Z}}})$ is a parabolic subgroup of $\Gamma$, and any parabolic subgroup of $\Gamma$ is conjugate to a unique such group. Let ${\ensuremath{\mathcal{P}}}$ be the set of these groups. We will note some facts about the combinatorial group theory of $\Gamma$ and its subgroups. Let $I$ be the identity matrix. If $1\le i\ne j\le p$, let $e_{ij}(x)\in \operatorname{SL}(p,{\ensuremath{\mathbb{Z}}})$ be the elementary matrix which consists of the identity matrix with the $(i,j)$-entry replaced by $x$. Let $e_{ij}:=e_{ij}(1)$. There is a finite presentation which has the matrices $e_{ij}$ as generators [@Milnor]: $$\begin{aligned} \notag \operatorname{SL}(p,{\ensuremath{\mathbb{Z}}})=\langle e_{ij} \mid \; &[e_{ij},e_{kl}]=I & \text{if $i\ne l$ and $j\ne k$}\\ & [e_{ij},e_{jk}]=e_{ik} & \text{if $i\ne k$}\label{eq:steinberg}\\ \notag & (e_{ij} e_{ji}^{-1} e_{ij})^4=I \rangle,\end{aligned}$$ where we adopt the convention that $[x,y]=xyx^{-1}y^{-1}$. We will use a slightly expanded set of generators. Let $$\Sigma=\Sigma(p)=\{e_{ij}\mid 1\le i\ne j\le p\}\cup D,$$ where $D$ is the set of diagonal matrices in $\operatorname{SL}(p,{\ensuremath{\mathbb{Z}}})$. Then there is a finite presentation of $\operatorname{SL}(p,{\ensuremath{\mathbb{Z}}})$ with generating set $\Sigma$ and relations consisting of those in \[eq:steinberg\] and relations expressing each element of $D$ as a product of elementary matrices.
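The relations in this presentation can be checked directly on integer matrices. The following self-contained sketch (ours, not the paper's; plain Python with $p=4$, using $e_{ij}(x)^{-1}=e_{ij}(-x)$) verifies one instance of each relation:

```python
P = 4  # work in SL(4, Z)

def identity():
    return [[1 if i == j else 0 for j in range(P)] for i in range(P)]

def e(i, j, x=1):
    """Elementary matrix e_ij(x): the identity with x in the (i,j) entry (1-indexed)."""
    m = identity()
    m[i - 1][j - 1] = x
    return m

def mul(*ms):
    out = identity()
    for m in ms:
        out = [[sum(out[i][k] * m[k][j] for k in range(P)) for j in range(P)]
               for i in range(P)]
    return out

def comm(a, b, a_inv, b_inv):
    # [a, b] = a b a^-1 b^-1 (the convention adopted above)
    return mul(a, b, a_inv, b_inv)

# [e_ij, e_kl] = I when i != l and j != k:
assert comm(e(1, 2), e(3, 4), e(1, 2, -1), e(3, 4, -1)) == identity()
# [e_ij, e_jk] = e_ik when i != k:
assert comm(e(1, 2), e(2, 3), e(1, 2, -1), e(2, 3, -1)) == e(1, 3)
# (e_ij e_ji^-1 e_ij)^4 = I:
w = mul(e(1, 2), e(2, 1, -1), e(1, 2))
assert mul(w, w, w, w) == identity()
print("Steinberg relations hold for p = 4")
```

The last relation reflects the fact that $e_{12}e_{21}^{-1}e_{12}$ restricts to a rotation by $90^\circ$ on the $\{z_1,z_2\}$-plane, which has order $4$.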
The advantage of this generating set is that if $H=\operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$ or $H=U(S_1,\dots,S_n;{\ensuremath{\mathbb{Z}}})$, then $H$ is generated by $\Sigma \cap H$. The group $\Gamma$ is a lattice in $G=\operatorname{SL}(p,{\ensuremath{\mathbb{R}}})$, and the geometry of $G$ and of the quotient will be important in our proof. We think of $G$ and $\Gamma$ as acting on the symmetric space on the left. Let ${\ensuremath{\mathcal{E}}}=\operatorname{SL}(p,{\ensuremath{\mathbb{R}}})/\operatorname{SO}(p,{\ensuremath{\mathbb{R}}})$. The tangent space of ${\ensuremath{\mathcal{E}}}$ at the identity, $T_{I}{\ensuremath{\mathcal{E}}}$ is isomorphic to the space of symmetric matrices with trace 0. If $u^{tr}$ represents the transpose of $u$, then we can define an inner product $\langle u,v\rangle=\operatorname{trace}(u^{tr}v)$ on $T_I {\ensuremath{\mathcal{E}}}$. Since this is $\operatorname{SO}(p)$-invariant, it gives rise to a $G$-invariant riemannian metric on ${\ensuremath{\mathcal{E}}}$. Under this metric, ${\ensuremath{\mathcal{E}}}$ is a non-positively curved symmetric space. The lattice $\Gamma$ acts on ${\ensuremath{\mathcal{E}}}$ with finite covolume, but the action is not cocompact. Let ${\ensuremath{\mathcal{M}}}:=\Gamma\backslash {\ensuremath{\mathcal{E}}}$. If $x\in G$, we write the equivalence class of $x$ in ${\ensuremath{\mathcal{E}}}$ as $[x]_{\ensuremath{\mathcal{E}}}$; similarly, if $x\in G$ or $x\in {\ensuremath{\mathcal{E}}}$, we write the equivalence class of $x$ in ${\ensuremath{\mathcal{M}}}$ as $[x]_{\ensuremath{\mathcal{M}}}$. 
If $g\in G$ is a matrix with coefficients $\{g_{ij}\}$, we define $$\|g\|_2=\sqrt{\sum_{i,j}g_{ij}^2},$$ $$\|g\|_\infty=\max_{i,j}|g_{ij}|.$$ Note that for all $g,h\in G$, we have $$\|gh\|_2\le \|g\|_2\|h\|_2$$ $$\|g^{-1}\|_2\ge \|g\|^{1/p}_2$$ and that there is a $c$ such that $$c^{-1} d_G(I,g)-c\le \log \|g\|_2 \le c d_G(I,g)+c.$$ It will be useful to have a geometric picture of elements of ${\ensuremath{\mathcal{E}}}$ and ${\ensuremath{\mathcal{M}}}$. The rows of a matrix in $\operatorname{SL}(p,{\ensuremath{\mathbb{R}}})$ give a unit volume basis of ${\ensuremath{\mathbb{R}}}^p$, and we can think of $G$ as the set of such bases. From this viewpoint, $\operatorname{SO}(p)$ acts on a basis by rotating the basis vectors, so ${\ensuremath{\mathcal{E}}}$ consists of the set of bases up to rotation. An element of $\Gamma$ acts by replacing the basis elements by integer combinations of basis elements. This preserves the lattice that they generate, so we can think of $\Gamma\backslash G$ as the set of unit-covolume lattices in ${\ensuremath{\mathbb{R}}}^p$. The quotient ${\ensuremath{\mathcal{M}}}$ is then the set of unit-covolume lattices up to rotation. Nearby points in ${\ensuremath{\mathcal{M}}}$ or ${\ensuremath{\mathcal{E}}}$ correspond to bases or lattices which can be taken into each other by small linear deformations of ${\ensuremath{\mathbb{R}}}^p$. Finally, we define a subset of ${\ensuremath{\mathcal{E}}}$ on which $\Gamma$ acts cocompactly. Let ${\ensuremath{\mathcal{E}}}(\epsilon)$ be the set of points which correspond to lattices with injectivity radius at least $\epsilon$. When $\epsilon \le 1/2$, this set is contractible and $\Gamma$ acts on it cocompactly [@ECHLPT]; we call it the [*thick part*]{} of ${\ensuremath{\mathcal{E}}}$, and its preimage $G(\epsilon)$ in $G$ the thick part of $G$. 
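The thick-part condition can be made concrete: the injectivity radius of the torus ${\ensuremath{\mathbb{R}}}^p/L$ is half the length of the shortest nonzero vector of $L$, so membership in the thick part is a statement about shortest vectors. The brute-force sketch below (our own illustration; a serious implementation would use a basis-reduction algorithm such as LLL) measures the shortest vector of ${\ensuremath{\mathbb{Z}}}^p g$, with $g$ acting on rows as in the text:

```python
# Brute-force estimate of the shortest nonzero vector of the lattice
# Z^p * g (rows of g, acting on the right).  Only a sketch, valid for
# small well-conditioned examples; real code would use LLL or similar.
from itertools import product
from math import sqrt

def shortest_vector(g, box=3):
    p = len(g)
    best = None
    for v in product(range(-box, box + 1), repeat=p):
        if all(c == 0 for c in v):
            continue
        w = [sum(v[i] * g[i][j] for i in range(p)) for j in range(p)]
        length = sqrt(sum(c * c for c in w))
        best = length if best is None else min(best, length)
    return best

# The standard lattice Z^2: shortest vector has length 1,
# so [I] lies well inside the thick part.
assert abs(shortest_vector([[1, 0], [0, 1]]) - 1.0) < 1e-12

# A unit-covolume but very "thin" lattice, diag(10, 1/10):
# its shortest vector is short, so this point is deep in the cusp.
g = [[10.0, 0.0], [0.0, 0.1]]
assert abs(shortest_vector(g) - 0.1) < 1e-12
```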
Let $\iota:K_\Gamma\to {\ensuremath{\mathcal{E}}}$ be a $\Gamma$-equivariant map; if $\epsilon$ is sufficiently small, then the image of $\iota$ is contained in ${\ensuremath{\mathcal{E}}}(\epsilon)$. Overview of proof {#sec:overview} ================= To understand our methods for proving a polynomial Dehn function for $\Gamma$, it is helpful to consider a related method for proving an exponential Dehn function. Let $w\in \Sigma^*$ be a word which represents the identity in $\Gamma$, so that $w$ corresponds to a closed curve in $K_\Gamma$. By abuse of notation, we also call this curve $w$. We can construct a curve $\alpha:S^1\to {\ensuremath{\mathcal{E}}}$ which corresponds to $w$ by letting $\alpha=[\iota(w)]_{\ensuremath{\mathcal{E}}}$. Let $\ell=\ell(\alpha)$ and assume that $\alpha$ is parameterized by length. Since ${\ensuremath{\mathcal{E}}}$ is non-positively curved, we can use geodesics to fill $\alpha$. If $x,y\in {\ensuremath{\mathcal{E}}}$, let $\lambda_{x,y}:[0,1]\to {\ensuremath{\mathcal{E}}}$ be a geodesic parameterized so that $\lambda_{x,y}(0)=x$, $\lambda_{x,y}(1)=y$, and $\lambda_{x,y}$ has constant speed. We can define a homotopy $h:[0,\ell] \times [0,1]\to {\ensuremath{\mathcal{E}}}$ by $$h(x,t)=\lambda_{\alpha(x),\alpha(0)}(t).$$ Let $D^2\subset {\ensuremath{\mathbb{R}}}^2$ be the disc of radius $\ell$ centered at the origin and let $$f(r,\theta)=h\Bigl(\ell \frac{\theta}{2\pi},1-\frac{r}{\ell}\Bigr)$$ where $r$ and $\theta$ are polar coordinates, so that $f$ sends the origin to $\alpha(0)$ and restricts to $\alpha$ on the boundary circle. Since ${\ensuremath{\mathcal{E}}}$ is non-positively curved, this map is Lipschitz and its Lipschitz constant $\operatorname{Lip}(f)$ is bounded independently of $\alpha$; in particular, it has area $O(\ell^2)$. Furthermore, the image of $f$ is contained in a ball around $[I]_{\ensuremath{\mathcal{E}}}$ of radius $\ell$. Since $\Gamma$ does not act cocompactly on ${\ensuremath{\mathcal{E}}}$, this filling does not directly correspond to an efficient filling of $w$ in $K_\Gamma$.
To construct a filling in $K_\Gamma$, we will need a map $\rho:{\ensuremath{\mathcal{E}}}\to \Gamma$. We can construct one from a fundamental set for the action of $\Gamma$ on ${\ensuremath{\mathcal{E}}}$; we let ${\ensuremath{\mathcal{S}}}$ be a Siegel set (see Sec.\[sec:fundset\]) and define $\rho$ so that for all $x\in {\ensuremath{\mathcal{E}}}$, $x\in \rho(x){\ensuremath{\mathcal{S}}}$. Since ${\ensuremath{\mathcal{M}}}$ is not compact, this map is not a quasi-isometry; if $x\in {\ensuremath{\mathcal{E}}}$ is deep in the cusp of ${\ensuremath{\mathcal{M}}}$, then small changes in $x$ can result in large changes in $\rho(x)$. On the other hand, the injectivity radius in the cusp shrinks at most exponentially with the distance from a basepoint. That is, there is a $c$ such that if $x\in B_r(I)\subset {\ensuremath{\mathcal{E}}}$, and $d_{{\ensuremath{\mathcal{E}}}}(x,y)<\exp(-c r),$ then $d_{\Gamma}(\rho(x),\rho(y))\le c$. Our basic technique is to construct a triangulation $\tau$ of the disc, and use $f$ as a template for a map $\bar{f}:\tau\to K_\Gamma$. We will construct $\bar{f}:\tau\to K_\Gamma$ one dimension at a time. Let $\tau$ be a triangulation of $D^2$ with $O(e^{2c\ell})$ cells such that the image of each cell under $f$ has diameter at most $e^{-c\ell}$. If $x$ and $y$ are vertices of an edge of $\tau$, then $$d_{\Gamma}(\rho(f(x)),\rho(f(y)))\le c,$$ where $d_{\Gamma}$ is the word metric on $\Gamma$ given by the generating set $\Sigma$. Let $\bar{f}_0:\tau^{(0)}\to K_\Gamma$ be given by $\bar{f}_0(x)=\rho(f(x))$ for all $x$ (where we identify elements of $\Gamma$ with the corresponding vertices in $K_\Gamma$). To construct $\bar{f}_1:\tau^{(1)}\to K_\Gamma$, we must find words in $\Sigma^*$ which connect the images of adjacent vertices of $\tau$; that is, for each edge $e=(x,y)$, we must find a word in $\Sigma^*$ representing $\bar{f}_0(x)^{-1}\bar{f}_0(y)$. 
Since $\bar{f}_0(x)^{-1}\bar{f}_0(y)$ is a bounded element of $\Gamma$, we choose $\bar{f}_1(e)$ to be a word of length at most $c$. Finally, we construct $\bar{f}$ on the triangles of $\tau$. If $\Delta$ is a triangle of $\tau$, then $\bar{f}_1(\partial\Delta)$ corresponds to a word of length at most $3c$ which represents the identity. Since $K_\Gamma$ is simply connected, each such word can be filled by a disc of area at most $\delta_{\Gamma}(3c)$. This results in a map $\bar{f}$ of area $O(e^{2c\ell})$. The boundary of $\bar{f}$ is not quite $w$, but it remains a bounded distance from $w$, and there is a homotopy between the two of area $O(\ell)$. Thus $\delta_\Gamma(w)=O(e^{2c\ell})$, and $$\delta_{\Gamma}(\ell)\lesssim e^{\ell},$$ as desired. We will prove a polynomial bound with a similar scheme. The main difference is that we construct $\tau$ by dividing $D^2$ into $O(\ell^2)$ triangles of diameter $\le 1$ instead of exponentially many triangles of exponentially small diameter. We define $\rho$ and $\bar{f}_0$ as described above, but it is no longer the case that if $x$ and $y$ are connected by an edge, then $d_{\Gamma}(\bar{f}_0(x),\bar{f}_0(y))<c$. In Section \[sec:parabounds\], we use the geometry of ${\ensuremath{\mathcal{M}}}$ to show instead that $\bar{f}_0(x)^{-1}\bar{f}_0(y)$ is the product of a block-diagonal element of $\Gamma$ with bounded coefficients and a unipotent element with at most exponentially large coefficients. Because $\bar{f}_0(x)^{-1}\bar{f}_0(y)$ is no longer a bounded element of $\Gamma$, we must change the way we define $\bar{f}_1$ as well. In Section \[sec:normalform\], we will define a normal form for block upper-triangular matrices with bounded block-diagonal part and exponentially large unipotent part. We will replace edges of $\tau$ with words in this normal form which have length $O(\ell)$. Finally, we construct $\bar{f}$ by extending $\bar{f}_1$ to the 2-cells of $\tau$. 
The boundary of each 2-cell is a product of three words in normal form and has length $O(\ell)$; in Section \[sec:filling\], we will show that such words can be filled with discs of area $O(\ell^2)$. Since there are $O(\ell^2)$ such triangles to fill, this method will give an $\ell^4$ upper bound on the Dehn function. Constructing a fundamental set {#sec:fundset} ============================== In this section, we will define ${\ensuremath{\mathcal{S}}}$, a fundamental set for $\Gamma$. Let $\operatorname{diag}(t_1,\dots, t_p)$ be the diagonal matrix with entries $(t_1,\dots, t_p)$. Let $A$ be the set of diagonal matrices in $G$ and let $$A^+_{\epsilon}=\{\operatorname{diag}(t_1,\dots, t_p)\mid \prod t_i=1, t_i > 0, t_i\ge \epsilon t_{i+1}\}.$$ Let $N$ be the set of upper triangular matrices with 1’s on the diagonal and let $N^+$ be the subset of $N$ with off-diagonal entries in the interval $[-1/2,1/2]$. Translates of the set $N^+A^+_\epsilon$ are known as Siegel sets. The following properties of Siegel sets are well known (see for instance [@BorHar-Cha]). \[lem:redThe\] There is an $\epsilon_{{\ensuremath{\mathcal{S}}}}$ with $0<\epsilon_{{\ensuremath{\mathcal{S}}}}<1$ such that if we let $${\ensuremath{\mathcal{S}}}:=[N^+A^+_{\epsilon_{{\ensuremath{\mathcal{S}}}}}]_{\ensuremath{\mathcal{E}}}\subset {\ensuremath{\mathcal{E}}},$$ then - $\Gamma{\ensuremath{\mathcal{S}}}={\ensuremath{\mathcal{E}}}$. \[lem:redThe:cover\] - There are only finitely many elements $\gamma\in \Gamma$ such that $\gamma {\ensuremath{\mathcal{S}}}\cap {\ensuremath{\mathcal{S}}}\ne \emptyset$. \[lem:redThe:fundSet\] We define $A^+:=A^+_{\epsilon_{{\ensuremath{\mathcal{S}}}}}$. Translates of ${\ensuremath{\mathcal{S}}}$ cover all of ${\ensuremath{\mathcal{E}}}$, so we can define a map $\rho:{\ensuremath{\mathcal{E}}}\to \Gamma$ such that $\rho({\ensuremath{\mathcal{S}}})=I$ and $x\in \rho(x){\ensuremath{\mathcal{S}}}$ for all $x$. As in Section \[sec:overview\], we define $\bar{f}_0:\tau^{(0)}\to K_\Gamma$ by $\bar{f}_0(x)=\rho(f(x))$.
The inclusion $A^+ \hookrightarrow {\ensuremath{\mathcal{S}}}$ is a Hausdorff equivalence: \[lem:easyHausdorff\] Give $A$ the riemannian metric inherited from its inclusion in $G$, so that $$d_{A}(\operatorname{diag}(d_1,\dots,d_p),\operatorname{diag}(d'_1,\dots,d'_p))=\sqrt{\sum_{i=1}^p \left|\log \frac{d'_i}{d_i}\right|^2}.$$ - There is a $c$ such that if $x\in {\ensuremath{\mathcal{S}}}$, then $d_{\ensuremath{\mathcal{E}}}(x,[A^+]_{\ensuremath{\mathcal{E}}})\le c$. - If $x,y\in A^+$, then $d_{A}(x,y)=d_{{\ensuremath{\mathcal{S}}}}(x,y)$. For the first claim, note that if $x=[na]_{\ensuremath{\mathcal{E}}}$, then $x=[a(a^{-1} n a)]_{\ensuremath{\mathcal{E}}}$, and $a^{-1}na\in N$. Furthermore, $$\|a^{-1}na\|_\infty\le \epsilon_{\ensuremath{\mathcal{S}}}^{-p},$$ so $$d_{\ensuremath{\mathcal{E}}}(x,[a]_{\ensuremath{\mathcal{E}}})\le d_G(I,a^{-1}na)$$ is bounded independently of $x$. For the second claim, we clearly have $d_{A}(x,y)\ge d_{{\ensuremath{\mathcal{S}}}}(x,y)$. For the reverse inequality, it suffices to note that the map ${\ensuremath{\mathcal{S}}}\to A^+$ given by $na\mapsto a$ for all $n\in N^+$, $a\in A^+$ is distance-decreasing. Siegel conjectured that the quotient map from ${\ensuremath{\mathcal{S}}}$ to ${\ensuremath{\mathcal{M}}}$ is also a Hausdorff equivalence, that is: \[thm:SiegConj\] There is a $c$ such that if $x,y\in {\ensuremath{\mathcal{S}}}$, then $$d_{{\ensuremath{\mathcal{S}}}}(x,y)-c\le d_{{\ensuremath{\mathcal{M}}}}([x]_{\ensuremath{\mathcal{M}}},[y]_{\ensuremath{\mathcal{M}}})\le d_{{\ensuremath{\mathcal{S}}}}(x,y)$$ Proofs of this conjecture can be found in [@Leuzinger; @Ji; @Ding]. One consequence is that $A^+$ is Hausdorff equivalent to ${\ensuremath{\mathcal{M}}}$, and it will be helpful to have a map $\phi_{\ensuremath{\mathcal{M}}}:{\ensuremath{\mathcal{M}}}\to A^+$ which realizes this Hausdorff equivalence.
Ji and MacPherson [@JiMacPherson] used precise reduction theory to define such a map in a more general setting. In the special case that $G=\operatorname{SL}(n)$ and $\Gamma=\operatorname{SL}(n,{\ensuremath{\mathbb{Z}}})$, their map and the map $\phi_{\ensuremath{\mathcal{M}}}$ that we will define differ by a bounded distance. Any point $x\in {\ensuremath{\mathcal{E}}}$ can be written as $x=[\gamma na]_{\ensuremath{\mathcal{E}}}$ for some $\gamma\in \Gamma$, $n\in N^+$ and $a\in A^+$ in at most finitely many ways. These decompositions have the following property: \[cor:Hausdorff\] There is a constant $c_{\phi}$ such that if $x,y\in {\ensuremath{\mathcal{E}}}$, $\gamma,\gamma' \in \Gamma$, $n,n' \in N^+$ and $a,a'\in A^+$ are such that $x=[\gamma na]_{\ensuremath{\mathcal{E}}}$ and $y=[\gamma' n'a']_{\ensuremath{\mathcal{E}}}$, then $$|d_{{\ensuremath{\mathcal{M}}}}([x]_{\ensuremath{\mathcal{M}}},[y]_{\ensuremath{\mathcal{M}}})- d_{A}(a,a')|\le c_\phi.$$ In particular, if $[\gamma na]_{\ensuremath{\mathcal{E}}}=[\gamma' n'a']_{\ensuremath{\mathcal{E}}}$, then $$d_{A}(a,a')\le c_\phi.$$ Without loss of generality, we may assume that $\gamma=\gamma'=I$. Let $c$ be as in Theorem \[thm:SiegConj\] and let $c'$ be as in Lemma \[lem:easyHausdorff\], so that $$d_{{\ensuremath{\mathcal{M}}}}([na]_{\ensuremath{\mathcal{M}}},[a]_{\ensuremath{\mathcal{M}}})\le d_{{\ensuremath{\mathcal{E}}}}([na]_{\ensuremath{\mathcal{E}}},[a]_{\ensuremath{\mathcal{E}}})\le c'.$$ Then $$\begin{aligned} |d_{{\ensuremath{\mathcal{M}}}}([x]_{\ensuremath{\mathcal{M}}},[y]_{\ensuremath{\mathcal{M}}})- d_{A}(a,a')|&=|d_{{\ensuremath{\mathcal{M}}}}([na]_{\ensuremath{\mathcal{M}}},[n'a']_{\ensuremath{\mathcal{M}}})- d_{A}(a,a')| \\ &\le c+ |d_{{\ensuremath{\mathcal{S}}}}([na]_{\ensuremath{\mathcal{E}}},[n'a']_{\ensuremath{\mathcal{E}}})- d_{{\ensuremath{\mathcal{S}}}}([a]_{\ensuremath{\mathcal{E}}},[a']_{\ensuremath{\mathcal{E}}})| \\ &\le c+ 2c'.
\end{aligned}$$ We now define $\phi_{\ensuremath{\mathcal{M}}}$. Any point $x\in {\ensuremath{\mathcal{E}}}$ can be uniquely written as $x=[\rho(x)na]_{\ensuremath{\mathcal{E}}}$ for some $n\in N^+$ and $a\in A^+$. Let $\phi:{\ensuremath{\mathcal{E}}}\to A^+$ be the map $[\rho(x)na]_{\ensuremath{\mathcal{E}}}\mapsto a$. This is not quite $\Gamma$-equivariant, but we can still define a map $\phi_{\ensuremath{\mathcal{M}}}:{\ensuremath{\mathcal{M}}}\to A^+$ by choosing a lift $\tilde{x}\in {\ensuremath{\mathcal{E}}}$ for all $x\in {\ensuremath{\mathcal{M}}}$ and defining $\phi_{{\ensuremath{\mathcal{M}}}}(x)=\phi(\tilde{x})$. By the corollary, $\phi_{{\ensuremath{\mathcal{M}}}}$ is a Hausdorff equivalence with constant $c_\phi$. Bounding group elements corresponding to edges {#sec:parabounds} ============================================== In this section, we will restrict the possible values of $\rho(x)^{-1}\rho(y)$ when $x,y\in {\ensuremath{\mathcal{E}}}$ and $d_{{\ensuremath{\mathcal{E}}}}(x,y)\le 1$. This is a key step in extending $\bar{f}_0$ to the $1$-skeleton of $\tau$. The possible values of $\rho(x)^{-1}\rho(y)$ depend on $\phi(x)$. We will construct a cover of $A^+$ by sets corresponding to parabolic subgroups so that the possible values of $\bar{f}_0(x)^{-1}\bar{f}_0(y)$ depend on which set $\phi(x)$ falls into. If $P=U(d_1,\dots,d_r)$, where $\sum d_i=p$, let $s_i=\sum_{j=1}^i d_j$ for $0\le i\le r$. Let $$X_P(t)=\{\operatorname{diag}(a_1,\dots,a_p)\in A\mid t a_{i+1}<a_i \text{ if and only if } i\in\{s_1,\dots,s_{r-1}\}\}.$$ These sets partition $A^+$ into $2^{p-1}$ disjoint subsets. If $\phi(x)\in X_P(t)$ for some sufficiently large $t$, then the geometry of the lattice corresponding to $x$ is quite distinctive.
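The partition of $A^+$ by the sets $X_P(t)$ is easy to compute in coordinates: the block boundaries of $P$ sit exactly at the indices where consecutive diagonal entries jump by more than a factor of $t$. A small illustrative helper (the function name is ours):

```python
def parabolic_type(a, t):
    """Composition (d_1, ..., d_r) with diag(a) in X_P(t), P = U(d_1, ..., d_r).

    Block boundaries are exactly the (1-indexed) positions i with
    t * a_{i+1} < a_i; here `a` is the list of diagonal entries.
    """
    p = len(a)
    boundaries = [i for i in range(1, p) if t * a[i] < a[i - 1]]
    sizes, prev = [], 0
    for b in boundaries + [p]:
        sizes.append(b - prev)
        prev = b
    return tuple(sizes)

# No large gaps: one block, P = U(3) = SL(3).
assert parabolic_type([1.2, 1.0, 0.9], 10) == (3,)
# A gap after every entry: P is the full upper triangular group U(1, 1, 1).
assert parabolic_type([100.0, 1.0, 0.01], 10) == (1, 1, 1)
# One gap: P = U(2, 1).
assert parabolic_type([100.0, 90.0, 0.01], 10) == (2, 1)
```

Since each of the $p-1$ possible boundaries is either present or absent, the $2^{p-1}$ return values enumerate the sets of the partition.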
Recall that if $\tilde{x}\in G$ is a representative of $x\in {\ensuremath{\mathcal{E}}}$, then we can construct a lattice ${\ensuremath{\mathbb{Z}}}^p \tilde{x}\subset {\ensuremath{\mathbb{R}}}^p$, and different representatives of $x$ correspond to rotations of ${\ensuremath{\mathbb{Z}}}^p \tilde{x}$. Let $$V(x,r)=\langle v\in {\ensuremath{\mathbb{Z}}}^p\mid \|v \tilde{x}\|_2 \le r \rangle;$$ this corresponds to the subgroup of the lattice generated by vectors of length at most $r$, and is independent of the choice of $\tilde{x}$. As such, $V(x,r)$ is $\Gamma$-equivariant: if $\gamma\in \Gamma$, then $V(\gamma x,r)=V(x,r)\gamma^{-1}$. In many cases, $\phi(x)$ and $\rho(x)$ determine $V(x,r)$. Let $z_1,\dots,z_p\in {\ensuremath{\mathbb{Z}}}^p$ be the standard generating set of ${\ensuremath{\mathbb{Z}}}^p$, and let $Z_j=\langle z_{j},\dots, z_{p}\rangle$. \[lem:phiFlag\] There is a $c_V>1$ such that if $x\in {\ensuremath{\mathcal{E}}}$, $\phi(x)=\operatorname{diag}(a_1,\dots,a_p)$, and $$a_{j+1}c_V<r<c_V^{-1}a_{j},$$ then $V(x,r)=Z_j\rho(x)^{-1}$. It suffices to show that if the hypotheses hold and $x\in {\ensuremath{\mathcal{S}}}$, then $V(x,r)=Z_j$. There is an $n=\{n_{ij}\}\in N^+$ such that $x=[n\phi(x)]_{\ensuremath{\mathcal{E}}}$, and if $\tilde{x}=n\phi(x)$, then $$\begin{aligned} z_j \tilde{x} &= z_j n\phi(x) \\ &= a_j z_j+\sum_{i=j+1}^p n_{ji} z_ia_{i}. \end{aligned}$$ Since $|n_{ji}|\le 1/2$ when $i>j$ and $a_{i+1}\le a_i \epsilon_{{\ensuremath{\mathcal{S}}}}^{-1}$, we have $$\|z_j \tilde{x}\|_2 \le a_j\sqrt{p}\epsilon_{{\ensuremath{\mathcal{S}}}}^{-p},$$ so $$V(x,a_j\sqrt{p}\epsilon_{{\ensuremath{\mathcal{S}}}}^{-p})\supset Z_j.$$ On the other hand, if $v\not \in Z_j$, then $v=\sum_i v_i z_i$ for some $v_i\in {\ensuremath{\mathbb{Z}}}$. Let $k$ be the smallest index such that $v_k\ne 0$; by assumption, $k<j$.
The $z_{k}$-coordinate of $v\tilde{x}$ is $v_{k} a_{k}$, so $$\|v\tilde{x}\|_2 \ge |v_{k}a_{k}|> a_{j-1} \epsilon_{{\ensuremath{\mathcal{S}}}}^{p}$$ and thus if $t<a_{j-1} \epsilon_{{\ensuremath{\mathcal{S}}}}^{p}$, then $V(x,t)\subset Z_j$. Therefore, if $$a_j\sqrt{p}\epsilon_{{\ensuremath{\mathcal{S}}}}^{-p}\le t<a_{j-1} \epsilon_{{\ensuremath{\mathcal{S}}}}^{p},$$ then $V(x,t)=Z_j$. In particular, if $\phi(x)\in X_P(2 c_V^2)$, then $a_{s_i+1}c_V<c_V^{-1}a_{s_i}$ and we can find $r_i$ such that $V(x,r_i)=Z_{s_i} \rho(x)^{-1}$. Let $M_P$ be the subgroup of $P$ consisting of block diagonal matrices, so that $M_P$ contains $\operatorname{SL}(d_1)\times \dots \times \operatorname{SL}(d_n)$ as a finite index subgroup. Let $N_P\subset P$ be the subgroup of block upper triangular matrices whose diagonal blocks are the identity matrix. Any element $z\in P$ can be uniquely decomposed as a product $z=nm$, where $n\in N_P$ and $m\in M_P$; we call $m$ the $P$-reductive part of $z$ and $n$ the $P$-unipotent part. We will show that if $d(x,y)\le 1$, then $z=\rho(x)^{-1}\rho(y)\in P$ for some $P$, where the $P$-reductive part of $z$ has coefficients bounded independently of $\ell$ and the $P$-unipotent part has coefficients at most exponential in $\ell$. \[lem:paraBounds\] Let $p\ge 3$. There is a $t_0>0$ and a $c_\rho>0$ such that for all $P\in {\ensuremath{\mathcal{P}}}$ and all $x,y\in {\ensuremath{\mathcal{E}}}$ such that $\phi(x)\in X_P(t_0)$ and $d_{{\ensuremath{\mathcal{E}}}}(x,y)\le 1$, we can decompose $\rho(x)^{-1}\rho(y)$ as a product $\rho(x)^{-1}\rho(y)=nm$, where $n\in N_P({\ensuremath{\mathbb{Z}}})$, $m\in M_P({\ensuremath{\mathbb{Z}}})$, $d_{\Gamma}(I,m)<c_\rho$, and $\|n\|_2 \le c_\rho e^{c_\rho d_{{\ensuremath{\mathcal{E}}}}(x,[I]_{\ensuremath{\mathcal{E}}})}$. Let $$t_0=2\exp(4(c_\phi+1))c_V^2,$$ let $P=U(d_1,\dots,d_n)$, and let $x$ and $y$ be as in the hypothesis of the lemma.
By translating $x$ and $y$ by $\rho(x)^{-1}$, we may assume that $x\in {\ensuremath{\mathcal{S}}}$ and thus $\rho(x)=I$. We first claim that $\rho(y)\in P({\ensuremath{\mathbb{Z}}})$. Let $a_i,a'_i$ be such that $\phi(x)=\operatorname{diag}(a_1,\dots,a_p)$ and $\phi(y)=\operatorname{diag}(a'_1,\dots,a'_p)$. Let $r_i=a_{s_i}/\sqrt{t_0}$. We claim that for all $i$, $$Z_{s_i}=V(x,r_i)=V(y,r_i)=Z_{s_i}\rho(y)^{-1}.$$ The fact that $Z_{s_i}=V(x,r_i)$ follows from Lemma \[lem:phiFlag\]; in fact, $V(x,e^{-1}r_i)=V(x,er_i)=Z_{s_i}$. Since $d_{{\ensuremath{\mathcal{E}}}}(x,y)\le 1$, the lattices corresponding to $x$ and $y$ only differ by a small deformation; this deformation can change the length of a vector in the lattice by at most a factor of $e$. Thus, if $v\in {\ensuremath{\mathbb{Z}}}^p$, then $$e^{-1}\le\frac{\|v\tilde{x}\|_2}{\|v\tilde{y}\|_2}\le e,$$ and in particular, $$V(x,e^{-1} r_i)\subset V(y,r_i)\subset V(x,e r_i).$$ Since the outer two sets are equal, we have $V(x,r_i)=V(y,r_i)$. Finally, we need to show that $V(y,r_i)=Z_{s_i}\rho(y)^{-1}$. By the lemma, it suffices to show that $a'_{s_i+1}c_V<r_i<c_V^{-1}a'_{s_i}.$ By Corollary \[cor:Hausdorff\], we know that $d_{A}(\phi(x),\phi(y))\le c_\phi+1$; in particular, $$\left|\log\frac{a_i}{a'_{i}}\right|\le c_\phi+1,$$ and $a'_{s_i+1}c_V<r_i<c_V^{-1}a'_{s_i}$ as desired. Thus $Z_{s_i}=Z_{s_i}\rho(y)^{-1}$ for all $i$, so $\rho(y)\in P$. We decompose $\rho(y)$ as a product $\rho(y)=n_ym_y$, where $n_y\in N_P({\ensuremath{\mathbb{Z}}})$ and $m_y\in M_P$ consists of the diagonal blocks of $\rho(y)$. We will bound $m_y$ by constructing a map from ${\ensuremath{\mathcal{E}}}$ to a product of symmetric spaces. Let $A_P\subset P$ be the subgroup consisting of diagonal matrices whose diagonal blocks are scalar matrices with positive coefficients; this is isomorphic to $({\ensuremath{\mathbb{R}}}^+)^{n-1}$.
The parabolic subgroup $P$ can be uniquely decomposed according to the Langlands decomposition as $P=N_PM_PA_P$, and we can define a map $\mu:P\to M_P$ so that if $g=n m a$, where $n\in N_P$, $m\in M_P$, and $a\in A_P$, then $\mu(g)=m$. Furthermore, since $M_P$ normalizes $A_P$ and $N_P$, this is a homomorphism. This descends to a map on symmetric spaces; if we let $K_P=\operatorname{SO}(p)\cap P$, we get a map $\mu_{{\ensuremath{\mathcal{E}}}}:{\ensuremath{\mathcal{E}}}\to M_P/K_P$. This map is Lipschitz. Furthermore, if $p\in P$, $x\in {\ensuremath{\mathcal{E}}}$, then $\mu_{{\ensuremath{\mathcal{E}}}}(px)=\mu(p)\mu_{{\ensuremath{\mathcal{E}}}}(x)$. This map can be interpreted geometrically. Note that $M_P/K_P$ is a product of symmetric spaces of lower dimensions, so $\mu_{{\ensuremath{\mathcal{E}}}}$ breaks a lattice in ${\ensuremath{\mathbb{R}}}^p$ into lattices in lower-dimensional subspaces. Let $V_0=\{0\}$ and $V_i=Z_{s_i}$, so that $P$ preserves the flag $V_0\subset \dots\subset V_n={\ensuremath{\mathbb{Z}}}^p$. Then if $g\in G$ (not necessarily in $P$) is a representative of $x$, then $V_ig/V_{i-1}g$ is a $d_i$-dimensional lattice in $({\ensuremath{\mathbb{R}}}\otimes V_i)g/({\ensuremath{\mathbb{R}}}\otimes V_{i-1})g$. This lattice generally does not have unit covolume, but we can rescale and possibly reflect it to a unit-covolume lattice. These lattices correspond to a point in $$M_P/K_P=\operatorname{SL}(d_1,{\ensuremath{\mathbb{R}}})/\operatorname{SO}(d_1)\times\dots\times \operatorname{SL}(d_n,{\ensuremath{\mathbb{R}}})/\operatorname{SO}(d_n),$$ and this point is $\mu_{{\ensuremath{\mathcal{E}}}}(x)$. The group $M_P$ acts on $M_P/K_P$ on the left, but this action is not cocompact. We will show that $\mu_{{\ensuremath{\mathcal{E}}}}(x)$ and $\mu_{{\ensuremath{\mathcal{E}}}}(y)$ lie near an orbit of this action and use this to show that $\rho(y)$ is bounded.
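For the smallest nontrivial case, $P=U(1,2)$ in $\operatorname{SL}(3,{\ensuremath{\mathbb{R}}})$, the decomposition $g=nma$ and the value $\mu(g)=m$ can be computed by hand. The sketch below is our own normalization of this computation (it assumes the upper-left entry of $g$ is positive and takes the blocks of $m$ to have determinant 1); it is an illustration, not a general implementation:

```python
# Langlands-type decomposition g = n * m * a for a block upper
# triangular g in U(1, 2; R) with positive diagonal blocks.
# Illustrative sketch for this specific block structure only.
from math import sqrt

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def langlands_u12(g):
    alpha = g[0][0]                      # 1x1 upper-left block, assumed > 0
    B = [[g[1][1], g[1][2]], [g[2][1], g[2][2]]]
    v = [g[0][1], g[0][2]]
    t, s = alpha, 1.0 / sqrt(alpha)      # a = diag(t, s, s) with t * s^2 = 1
    a = [[t, 0, 0], [0, s, 0], [0, 0, s]]
    m = [[1, 0, 0],
         [0, B[0][0] / s, B[0][1] / s],
         [0, B[1][0] / s, B[1][1] / s]]  # lower block of m has determinant 1
    detB = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    Binv = [[B[1][1] / detB, -B[0][1] / detB],
            [-B[1][0] / detB, B[0][0] / detB]]
    w = [v[0] * Binv[0][0] + v[1] * Binv[1][0],
         v[0] * Binv[0][1] + v[1] * Binv[1][1]]   # row vector v * B^{-1}
    n_mat = [[1, w[0], w[1]], [0, 1, 0], [0, 0, 1]]  # unipotent part
    return n_mat, m, a

g = [[4, 2, 3],
     [0, 1.0, 1.0],
     [0, 0.25, 0.5]]                     # det g = 4 * det B = 1
n_mat, m, a = langlands_u12(g)
prod = mat_mul(n_mat, mat_mul(m, a))
assert all(abs(prod[i][j] - g[i][j]) < 1e-9 for i in range(3) for j in range(3))
```

Because $\mu(g)$ discards both the unipotent part and the scalar block sizes, it records exactly the shapes of the quotient lattices described in the text.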
Let $B:=B_{c_\phi+1}(X_P(t_0),A^+)$ be a neighborhood of $X_P(t_0)$ in $A^+$, so that $\phi(y)\in B$. Let $$\beta_P=[P N^+ B]_{\ensuremath{\mathcal{E}}}.$$ If $z\in {\ensuremath{\mathcal{E}}}$, $\rho(z)\in P$, and $\phi(z)\in B$, then $z\in \beta_P$; in particular, $x,y\in \beta_P$. We claim that the image of $\beta_P\cap {\ensuremath{\mathcal{S}}}$ is a bounded set in $M_P/K_P$. If $b\in \beta_P\cap {\ensuremath{\mathcal{S}}}$, there is a unique decomposition $b= [n_b a_b]_{\ensuremath{\mathcal{E}}}$, where $n_b\in N^+$ and $a_b\in B$, and $\mu_{\ensuremath{\mathcal{E}}}(b)=[\mu(n_b)\mu(a_b)]_{\ensuremath{\mathcal{E}}}$. Since $N^+$ is compact, $\mu(n_b)$ is bounded. Since $a_b\in B$, the ratio of two coefficients in a diagonal block of $a_b$ is bounded, and so $\mu(a_b)$ is bounded as well. Thus $\mu_{\ensuremath{\mathcal{E}}}(\beta_P\cap {\ensuremath{\mathcal{S}}})$ is bounded; call this set $\omega_P$. Since $x\in\beta_P\cap {\ensuremath{\mathcal{S}}}$ and $y\in \beta_P\cap \rho(y){\ensuremath{\mathcal{S}}}=\rho(y)(\beta_P\cap {\ensuremath{\mathcal{S}}})$, we know $\mu_{\ensuremath{\mathcal{E}}}(x)\in \omega_P$ and $\mu_{\ensuremath{\mathcal{E}}}(y)\in m_y \omega_P$. Since $M_P({\ensuremath{\mathbb{Z}}})$ acts properly discontinuously on $M_P/K_P$ and $d_{M_P/K_P}(\mu_{\ensuremath{\mathcal{E}}}(x),\mu_{\ensuremath{\mathcal{E}}}(y))\le \operatorname{Lip}(\mu_{\ensuremath{\mathcal{E}}})$, there are only finitely many possibilities for $m_y$. To bound $n_y$, write $x$ and $y$ as $x=n\phi(x)\operatorname{SO}(p)$ and $y=\rho(y)n'\phi(y)\operatorname{SO}(p)$ for some $n,n'\in N^+$. 
Since $d_{\ensuremath{\mathcal{E}}}(x,y)\le 1$, there is a $c$ such that $$\|(n\phi(x))^{-1}\rho(y)n'\phi(y)\|_2<c,$$ and thus $$\begin{aligned} \|\rho(y)\|_2&< \|n\phi(x)\|_2\|(n\phi(x))^{-1}\rho(y)n'\phi(y)\|_2\|(n'\phi(y))^{-1}\|_2\\ \log \|\rho(y)\|_2&<\log c+\log \|n\phi(x)\|_2+\log \|(n'\phi(y))^{-1}\|_2\\ &=O(d_{{\ensuremath{\mathcal{E}}}}(I,\phi(x)) + d_{{\ensuremath{\mathcal{E}}}}(I,\phi(y))) \end{aligned}$$ By Corollary \[cor:Hausdorff\], we see that $\log \|\rho(y)\|_2=O(d_{{\ensuremath{\mathcal{E}}}}(I,x))$ as desired. The work of Ji and MacPherson [@JiMacPherson] suggests how this construction might be extended to lattices in other symmetric spaces. We can replace $\phi$ with a map from the quotient to the asymptotic cone of the quotient and replace $X_P$ with a generalized Siegel set for $P$ and get similar results. In the next section, we will need the following corollary, which tells us that if $\Delta$ is a 2-cell of $\tau$, then all the edges of $\Delta$ satisfy the conditions of Lemma \[lem:paraBounds\] for a single parabolic subgroup $P$. \[cor:triBounds\] Let $x_1,x_2,x_3\in {\ensuremath{\mathcal{E}}}$ be such that the distance between any pair of points is at most $1$. There is a $c_\rho'$ such that if $\phi(x_1)\in X_P(t_0)$, then for all $i,j$, we can decompose $\rho(x_i)^{-1}\rho(x_j)$ as a product $\rho(x_i)^{-1}\rho(x_j)=nm$, where $n\in N_P({\ensuremath{\mathbb{Z}}})$, $m\in M_P({\ensuremath{\mathbb{Z}}})$, $d_{\Gamma}(I,m)<c_\rho'$, and $\|n\|_2 \le c_\rho' e^{c_\rho' d_{{\ensuremath{\mathcal{E}}}}(x_1,[I]_{\ensuremath{\mathcal{E}}})}$. In particular, if $\Delta$ is a 2-cell in $\tau$, we can choose $x$ to be a vertex of $\Delta$ and let $P_{\Delta}\in {\ensuremath{\mathcal{P}}}$ be such that $\phi(f(x))\in X_{P_\Delta}(t_0)$. Then if $y$ and $z$ are vertices of $\Delta$, then $\bar{f}_0(y)^{-1}\bar{f}_0(z)$ can be decomposed as above.
This follows from the lemma for $\rho(x_1)^{-1}\rho(x_2)$ and $\rho(x_1)^{-1}\rho(x_3)$, and $$\rho(x_3)^{-1}\rho(x_2)=(\rho(x_3)^{-1}\rho(x_1))(\rho(x_1)^{-1}\rho(x_2)).$$ Constructing words representing edges {#sec:normalform} ===================================== We will use Lemma \[lem:paraBounds\] to extend $\bar{f}_0$ to a map $\bar{f}_1:\tau^{(1)}\to K_\Gamma$. This corresponds to choosing, for each edge $e=(x,y)$, a word $w_e$ representing $\bar{f}_0(x)^{-1}\bar{f}_0(y)$. If $\phi(x)\in X_P(t_0)$, we will choose a $w_e$ which is a product of boundedly many generators of $M_P$ and boundedly many words in $\Sigma^*$ which each represent an elementary matrix in $N_P$. One difficulty is doing this consistently, so that the boundary of each triangle satisfies this condition for a single $P$. We will need two main lemmas. The first states that elementary matrices with large coefficients can be represented by “shortcuts”. This is a key ingredient in the proof of the theorem of Lubotzky, Mozes, and Raghunathan [@LMRComptes] which states that when $p\ge 3$, the word metric on $\Gamma$ is equivalent to the metric induced by the Riemannian metric on $G$; see also [@RileyNav] for an explicit combinatorial construction. \[lem:shortcuts\] If $p\ge 3$, then for every $i,j\in \{1,\dots,p\}$, $i\ne j$, and $x\in {\ensuremath{\mathbb{Z}}}$, there is a word ${\widehat}{e}_{ij}(x)$ representing $e_{ij}(x)$ which has length $O(\log |x|)$. To state the second lemma, we will need to define some sets of matrix indices. If $P=U(S_1,\dots,S_n)$, let $$\begin{aligned} & \chi(M_P):=\{(s_1,s_2)\mid s_1,s_2\in S_i\text{ for some $i$}\},\\ & \chi(N_P):=\{(s_1,s_2)\mid s_1\in S_i,s_2\in S_j\text{ for some $i<j$}\},\\ & \chi(P):=\chi(M_P)\cup \chi(N_P)=\{(s_1,s_2)\mid s_1\in S_i,s_2\in S_j\text{ for some $i\le j$}\}.\end{aligned}$$ Let $P_\Delta$ be as in Corollary \[cor:triBounds\]. 
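One identity behind logarithmic-length shortcut words of this kind is the commutator relation $[e_{ik}(a),e_{kj}(b)]=e_{ij}(ab)$: matrix entries multiply while word lengths only add. The following is a quick numerical check of that identity in $\operatorname{SL}(3,{\ensuremath{\mathbb{Z}}})$ (an illustrative sketch, not the construction of [@LMRComptes] itself):

```python
# Check of the commutator identity [e_12(a), e_23(b)] = e_13(a*b)
# in SL(3, Z), using exact integer arithmetic.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def e12(x): return [[1, x, 0], [0, 1, 0], [0, 0, 1]]
def e23(x): return [[1, 0, 0], [0, 1, x], [0, 0, 1]]
def e13(x): return [[1, 0, x], [0, 1, 0], [0, 0, 1]]

def comm(X, Y, Xinv, Yinv):
    return mat_mul(mat_mul(X, Y), mat_mul(Xinv, Yinv))

for a, b in [(2, 3), (-5, 7), (10**6, 10**6)]:
    assert comm(e12(a), e23(b), e12(-a), e23(-b)) == e13(a * b)
```

The last pair shows the leverage: an entry of size $10^{12}$ is expressed through four matrices whose entries have size only $10^6$, and iterating squares the reachable entry size at each step.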
\[lem:edgeChar\] If $p\ge 3$, there is a $c$ depending only on $p$ and a choice of a word $w_e\in \Sigma^*$ for each edge $e$ in $\tau$ such that if $\Delta$ is a 2-cell of $\tau$ and $e=(x,y)$ is an edge of $\Delta$, then: - $w_e$ represents $\bar{f}_0(x)^{-1}\bar{f}_0(y)$, - $\ell(w_e)=O(\ell)$, - $w_e$ can be written as a product $w_e=z_1\dots z_n$ such that $n\le c$ and each $z_i$ is either an element of $\Sigma\cap M_{P_\Delta}$ or a word ${\widehat}{e}_{ij}(x)$ where $(i,j)\in \chi(N_{P_\Delta})$ and $|x|\le c_\rho' e^{c_\rho'\ell}$, where $c_\rho'$ is the constant from Cor. \[cor:triBounds\]. In [@LMRComptes], the ${\widehat}{e}_{ij}(x)$ are constructed by including the solvable group ${\ensuremath{\mathbb{R}}}\ltimes {\ensuremath{\mathbb{R}}}^2$ in the thick part of $G$; since ${\ensuremath{\mathbb{R}}}^2\subset {\ensuremath{\mathbb{R}}}\ltimes {\ensuremath{\mathbb{R}}}^2$ is exponentially distorted, there are curves in ${\ensuremath{\mathbb{R}}}\ltimes {\ensuremath{\mathbb{R}}}^2$ which can be approximated by words in $\Gamma$. For our purposes, we will need a construction which uses more general solvable groups. In particular, when $p\ge 4$, we can construct the ${\widehat}{e}_{ij}(x)$ as approximations of curves in solvable groups with quadratic Dehn function. Let $S,T\subset \{1,\dots,n\}$ be disjoint subsets and let $s=\#S$ and $t=\#T$. Assume that $s\ge 2$. We will define a solvable subgroup $H_{S,T}\subset U(S,T)$. Let $A_1,\dots, A_{s}$ be a set of simultaneously diagonalizable positive-definite matrices in $\operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$. The $A_i$’s have the same eigenvectors; call these shared eigenvectors $v_1,\dots, v_{s}\in {\ensuremath{\mathbb{R}}}^{S}$, and normalize them to have unit length. 
The $A_i$ are entirely determined by their eigenvalues, and we can define vectors $$q_i=(\log \|A_iv_1\|_2,\dots,\log \|A_iv_s\|_2)\in {\ensuremath{\mathbb{R}}}^s$$ Since $A_i\in \operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$, the product of its eigenvalues is $1$, and the sum of the coordinates of $q_i$ is 0. We require that the $A_i$ are independent in the sense that the $q_i$ span an $(s-1)$-dimensional subspace of ${\ensuremath{\mathbb{R}}}^s$; since they are all contained in an $(s-1)$-dimensional subspace, this is the maximum rank possible. If a set of matrices satisfies these conditions, we call them a set of [*independent commuting matrices*]{} for $S$. A construction of such matrices can be found in Section 10.4 of [@ECHLPT]. The $A_i$ generate a subgroup isomorphic to ${\ensuremath{\mathbb{Z}}}^{s-1}$, and by possibly choosing a different generating set for this subgroup, we can assume that $\lambda_i:=\|A_iv_i\|_2>1$ for all $i$. Let $B_1^{tr},\dots, B_{t}^{tr}\in \operatorname{SL}(T,{\ensuremath{\mathbb{Z}}})$ (where $^{tr}$ represents the transpose of a matrix) be a set of independent commuting matrices for $T$ and let $w_1,\dots, w_{t}\in {\ensuremath{\mathbb{R}}}^{T}$ be the basis of unit eigenvectors of the $B_i^{tr}$. Choose the $B_i$ so that $\mu_i:= \|w_i B_i\|_2 >1$. Let $$\begin{aligned} H_{S,T} :=&\left\{\begin{pmatrix}\prod_i A_i^{x_i} & V \\ 0 & \prod_i B_i^{y_i}\end{pmatrix} \middle|\; x_i,y_i\in {\ensuremath{\mathbb{R}}}, V\in {\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T\right\} \\ =&({\ensuremath{\mathbb{R}}}^{s-1}\times{\ensuremath{\mathbb{R}}}^{t-1})\ltimes ({\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T). \end{aligned}$$ Note that $H_{S,T}\cap \Gamma$ is a cocompact lattice in $H_{S,T}$, so $H_{S,T}$ is contained in the thick part of $G$.
That is, if $\epsilon$ is sufficiently small, then $[H_{S,T}]_{\ensuremath{\mathcal{E}}}\subset {\ensuremath{\mathcal{E}}}(\epsilon)$, so Lemma \[lem:approx\] can be used to construct words in $\Sigma^*$ out of paths in $H_{S,T}$. We will use this and the fact that the subgroup ${\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T$ is exponentially distorted in $H_{S,T}$ to get short words in $\Sigma^*$ representing certain unipotent matrices. By abuse of notation, let $A_i$ and $B_i$ refer to the corresponding matrices in $H_{S,T}$. The group $H_{S,T}$ is generated by powers of the $A_i$, powers of the $B_i$, and elementary matrices in the sense that any element of $H_{S,T}$ can be written as $$\prod A_i^{x_i}\prod B_i^{y_i} \begin{pmatrix} I_S & V \\ 0 & I_T \end{pmatrix},$$ for some $x_i,y_i\in {\ensuremath{\mathbb{R}}}$ and $V\in {\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T$, where $I_S$ and $I_T$ represent the identity matrix in $\operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$ and $\operatorname{SL}(T,{\ensuremath{\mathbb{Z}}})$ respectively. As with discrete groups we will associate generators with curves, and words with concatenations of curves. We let $A_i^{x}$ correspond to the curve $$d\mapsto \begin{pmatrix} A_i^{xd} & 0 \\ 0 & I_T \end{pmatrix},$$ $B_i^{x}$ to the curve $$d\mapsto \begin{pmatrix} I_S & 0 \\ 0 & B_i^{xd} \end{pmatrix},$$ and $$u(V)=\begin{pmatrix} I_S & V \\ 0 & I_T \end{pmatrix}$$ to the curve $$d\mapsto \begin{pmatrix} I_S & dV \\ 0 & I_T \end{pmatrix},$$ where in all cases, $d$ ranges from $0$ to $1$. Let $c\ge \max\{\ell(A_i),\ell(B_i)\}$. Then the word $A_i^x u(v_i\otimes w) A_i^{-x}$ represents the matrix $u(\lambda_i^xv_i\otimes w)$ and corresponds to a curve of length at most $2cx+\|v_i\|_2\| w\|_2$ connecting $I$ and $u(\lambda_i^xv_i\otimes w)$. 
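To make the logarithmic savings from this conjugation trick concrete, here is a toy computation. The matrix $[[2,1],[1,1]]$ is purely an illustrative stand-in for one of the $A_i$ restricted to a $2\times 2$ block: it lies in $\operatorname{SL}(2,{\ensuremath{\mathbb{Z}}})$, is positive-definite, its eigenvalues multiply to $1$ (so their logarithms sum to $0$, as required of the $q_i$), and conjugating $u$ by its powers reaches a coefficient of size $X$ with a word of length $O(\log X)$. The unit cost per letter is an assumption made only for this sketch.

```python
import math

# Illustrative A = [[2, 1], [1, 1]]: det = 1, so A is in SL(2, Z).
tr, det = 2 + 1, 2 * 1 - 1 * 1
disc = math.sqrt(tr * tr - 4 * det)
lam = (tr + disc) / 2    # expanding eigenvalue, lam > 1
lam2 = (tr - disc) / 2   # contracting eigenvalue; lam * lam2 = det = 1

def shortcut_length(x):
    """Length of the word A^l u((x / lam^l) v ⊗ w) A^{-l} representing
    u(x v ⊗ w), taking unit cost per letter; l is chosen so the rescaled
    coefficient has absolute value at most 1."""
    l = math.ceil(math.log(abs(x), lam)) if abs(x) > 1 else 0
    return 2 * l + abs(x) / lam**l   # two A^{±l} segments plus a short middle

x = 10**6
l = math.ceil(math.log(x, lam))      # conjugating steps needed: about log_lam(x)
```

So a coefficient of $10^6$ is reached with a word of length roughly $31$ rather than $10^6$.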
Similarly, if $t\ge 2$, then $B_i^{-x} u(v\otimes w_i) B_i^{x}$ has length at most $2cx+\|v\|_2\|w_i\|_2$ and connects $I$ and $u(\mu_i^xv\otimes w_i).$ If $V\in {\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T$, then $$V=\sum_{i,j} x_{ij}v_i\otimes w_j$$ for some $x_{ij}\in {\ensuremath{\mathbb{R}}}$. Let $$l_{i}(x)= \begin{cases} \lceil \log_{\lambda_i} |x|\rceil &\text{ if $|x|>1$,} \\ 0&\text{ if $|x|\le 1$,} \end{cases}$$ and define $$\gamma_{ij}(x)= A_i^{l_{i}(x)} u\biggl(\frac{x}{\lambda_i^{l_{i}(x)}} v_i\otimes w_j\biggr) A_i^{-l_{i}(x)}.$$ Note that $|x/\lambda_i^{l_{i}(x)}|\le 1$. Let $${\widehat}{u}(V):=\prod_{i,j} \gamma_{ij}(x_{ij}).$$ Then ${\widehat}{u}(V)$ represents $u(V)$ and there is a $c'$ such that $$\ell({\widehat}{u}(V))\le c'(1+\log \|V\|_2)$$ for all $V$. If $i\in S$ and $j\in T$, then $e_{ij}(x)=u(x z_i\otimes z_j)\in H_{S,T}$. If $x\in {\ensuremath{\mathbb{Z}}}$, then we can apply Lemma \[lem:approx\] to approximate ${\widehat}{u}(x z_i\otimes z_j)$ by a word ${\widehat}{e}_{ij;S,T}(x)\in \Sigma^*$ which represents $e_{ij}(x)$ and whose length is $O(\log |x|)$. In general, changing $S$ and $T$ will change ${\widehat}{e}_{ij;S,T}(x)$ drastically, but later, we will prove that if $i\in S,S'$ and $j\in T,T'$, and $S$ and $S'$ satisfy some mild conditions, then ${\widehat}{e}_{ij;S,T}(x)$ and ${\widehat}{e}_{ij;S',T'}(x)$ are connected by a homotopy of area $O((\log |x|)^2)$. Because of this, the choice of $S$ and $T$ is largely irrelevant. Thus, for each $(i,j)$, we choose a $d\not\in \{i,j\}$ and let $${\widehat}{e}_{ij}(x)={\widehat}{e}_{ij;\{i,d\},\{j\}}(x).$$ If $e=(x,y)$ is an interior edge of $\tau$, it is in the boundary of two 2-cells; call these $\Delta$ and $\Delta'$.
By Corollary \[cor:triBounds\], there is a $c$ depending only on $p$ such that if $g=\bar{f}_0(x)^{-1}\bar{f}_0(y)\in \Gamma$ and $g_{ij}$ is the $(i,j)$-coefficient of $g$, then $$\begin{aligned} &g\in P_{\Delta}({\ensuremath{\mathbb{Z}}})\cup P_{\Delta'}({\ensuremath{\mathbb{Z}}}) \\ & |g_{ij}|<c \text{\quad if $(i,j)\in \chi(M_{P_{\Delta}})\cup \chi(M_{P_{\Delta'}})$} \\ & \|g\|_{\infty}<c e^{c\ell} \end{aligned}$$ The last inequality follows from the fact that $d_{{\ensuremath{\mathcal{E}}}}([I]_{\ensuremath{\mathcal{E}}},f(x))\le \ell$. Note that $P_{\Delta}\cap P_{\Delta'}$ is parabolic. We express $g$ as a word in $\Sigma^*$ as follows. Let $g=nm$, where $n\in N_{P_{\Delta}\cap P_{\Delta'}}({\ensuremath{\mathbb{Z}}})$ and $m\in M_{P_{\Delta}\cap P_{\Delta'}}({\ensuremath{\mathbb{Z}}})$. Then $\|m\|_{\infty}< c,$ and there is a $c'$ depending on $p$ such that $\|m\|_2 < c'$ and $\|m^{-1}\|_2< c'$. Therefore, $$\|n\|_{\infty}\le \|gm^{-1}\|_{2}\le p^2c' c e^{c\ell}$$ and if $(i,j)\in \chi(M_{P_{\Delta}})\cup \chi(M_{P_{\Delta'}})$, then $|n_{ij}|<pc'$. Since $n$ is a unipotent matrix, we can write $n$ as a product $$n=\prod_{(i,j)\in \chi(N_{P_{\Delta}\cap P_{\Delta'}})} e_{ij}(n_{ij})$$ for an appropriate ordering of $\chi(N_{P_{\Delta}\cap P_{\Delta'}})$. We can replace the terms corresponding to large coefficients with shortcuts. Let $$w_1=\prod_{(i,j)\in \chi(N_{P_{\Delta}\cap P_{\Delta'}})} \begin{cases} e_{ij}^{n_{ij}} & \text{if $(i,j)\in \chi(M_{P_{\Delta}})\cup \chi(M_{P_{\Delta'}})$,}\\ {\widehat}{e}_{ij}(n_{ij}) & \text{otherwise.} \end{cases}$$ This represents $n$ and has length $O(\ell)$. Finally, there is a $c''$ depending only on $p$ such that we can write $m$ as a product $w_2\in (\Sigma\cap M_{P_{\Delta}\cap P_{\Delta'}})^*$ of no more than $c''$ generators of $M_{P_{\Delta}\cap P_{\Delta'}}$. 
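The factorization of the unipotent part $n$ into elementary matrices whose coefficients are exactly the matrix entries can be checked directly in a small case. The $3\times 3$ ordering used below (later block rows first) is a hypothetical illustration of "an appropriate ordering":

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def elem(i, j, x, size=3):
    """Elementary matrix e_ij(x) (0-indexed): identity plus x at (i, j)."""
    m = [[1 if r == col else 0 for col in range(size)] for r in range(size)]
    m[i][j] = x
    return m

# For n = [[1, a, c], [0, 1, b], [0, 0, 1]], the ordering
# e_23(b) e_13(c) e_12(a) reproduces n with coefficients equal to its entries.
a, b, c = 3, 5, 7
n_mat = [[1, a, c], [0, 1, b], [0, 0, 1]]
product = matmul(elem(1, 2, b), matmul(elem(0, 2, c), elem(0, 1, a)))
```

Other orderings also reconstruct $n$, but then the coefficients pick up correction terms (e.g. $e_{12}(a)e_{23}(b)$ already creates an $(1,3)$-entry $ab$).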
Let $$\bar{f}_1(e)=w_1w_2\in \Sigma^*.$$ This satisfies the conditions of the lemma for both $\Delta$ and $\Delta'$. If $e$ is on the boundary of $\tau$ and $e$ is an edge of $\Delta$, then $P_\Delta=G$, and since $N_G=\{I\}$, there is a $c$ such that $d_{\Gamma}(\bar{f}_0(x),\bar{f}_0(y))<c$. We can take $w_e$ to be a geodesic word representing $\bar{f}_0(x)^{-1}\bar{f}_0(y)$. We then construct $\bar{f}_1$ by defining $\bar{f}_1|_e$ to be the curve corresponding to $w_e$. Note that $\bar{f}_1|_{\partial\tau}$ differs from the original $w$ by only a bounded distance. In particular, there is an annulus in $K_\Gamma$ whose boundary curves are $w$ and $\bar{f}_1|_{\partial\tau}$ and which has area $O(\ell)$. Filling the 2-skeleton {#sec:filling} ====================== In the previous section, we reduced the problem of filling $\alpha$ to the problem of filling the curves $\bar{f}_1(\partial\Delta)$, where $\Delta$ ranges over all $2$-cells of $\tau$. Each of these curves is a product of a bounded number of elements of $\Sigma$ and a bounded number of shortcuts ${\widehat}{e}_{ij}(x)$. In this section, we will describe methods for filling such curves. The key to many of these methods is the group $H_{S,T}$ from Section \[sec:normalform\], which we used to construct ${\widehat}{e}_{ij}$. This group has two key properties. First, when either $S$ or $T$ is large enough, then $H_{S,T}$ has quadratic Dehn function; this is a special case of a theorem of de Cornulier and Tessera. Second, when both $S$ and $T$ are sufficiently large, $H_{S,T}$ contains multiple ways to shorten elementary matrices. A good choice of shortening makes it possible to fill many discs, including discs corresponding to the Steinberg relations. We first state a special case of a theorem of de Cornulier and Tessera: \[thm:HDehn\] If $s\ge 3$ or $t\ge 3$, then $H_{S,T}$ has quadratic Dehn function. The quadratic Dehn function will let us switch between different shortenings.
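As a quick sanity check on the Steinberg-type discs mentioned above, the basic commutator relation $[e_{12}(x),e_{23}(y)]=e_{13}(xy)$ can be verified numerically; this sketch is illustrative only and plays no role in the argument:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def elem(i, j, x):
    """3x3 elementary matrix e_ij(x), 0-indexed; its inverse is e_ij(-x)."""
    m = [[1 if r == col else 0 for col in range(3)] for r in range(3)]
    m[i][j] = x
    return m

x, y = 2, 3
# [e_12(x), e_23(y)] = e_12(x) e_23(y) e_12(x)^{-1} e_23(y)^{-1}
comm = matmul(matmul(elem(0, 1, x), elem(1, 2, y)),
              matmul(elem(0, 1, -x), elem(1, 2, -y)))
```

With $x=2$ and $y=3$ the commutator is exactly $e_{13}(6)$, matching the relation with $xy=6$.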
Say $\#S\ge 3$, $\#T\ge 2$, and let $A_i\in \operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$, $B_i\in \operatorname{SL}(T,{\ensuremath{\mathbb{Z}}})$, $v_i\in {\ensuremath{\mathbb{R}}}^S$, and $w_i\in {\ensuremath{\mathbb{R}}}^T$ be as in Section \[sec:normalform\]. Then we can express $u(x v_i\otimes w_j)$ either as $A_i^k u(v_i\otimes w_j) A_i^{-k}$ or as $B_j^{-l} u(v_i\otimes w_j) B_j^{l}$. In the following lemma, we switch between these representations to find fillings for words representing conjugates of ${\widehat}{u}(V)$. Let $\Sigma_S:=\Sigma\cap \operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$ and $\Sigma_T:=\Sigma\cap \operatorname{SL}(T,{\ensuremath{\mathbb{Z}}})$. These are generating sets for $\operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$ and $\operatorname{SL}(T,{\ensuremath{\mathbb{Z}}})$. \[lem:xiConj\] If $\#S\ge 3$ and $\#T\ge 2$ or vice versa, there is an $\epsilon>0$ and a $c>0$ such that if $\gamma$ is a word in $(\Sigma_S\cup \Sigma_T)^*$ representing $(M,N)\in \operatorname{SL}(S,{\ensuremath{\mathbb{Z}}}) \times \operatorname{SL}(T,{\ensuremath{\mathbb{Z}}})$, then $$\delta_{{\ensuremath{\mathcal{E}}}(\epsilon)}([\gamma {\widehat}{u}(V)\gamma^{-1}]_{\ensuremath{\mathcal{E}}},[{\widehat}{u}(MVN^{-1})]_{\ensuremath{\mathcal{E}}})\le c(\ell(\gamma)+\log{(\|V\|_2+2)})^2.$$ Let $\omega:=\gamma {\widehat}{u}(V)\gamma^{-1}{\widehat}{u}(MVN^{-1})^{-1}$; this is a closed curve in $G$. We first consider the case that $V=x v_i\otimes w_j$ and $\gamma\in \Sigma_T^*$.
In this case, $M=I$; and $\gamma {\widehat}{u}(V)\gamma^{-1}$ and ${\widehat}{u}(VN^{-1})$ are both words in the group $$\begin{aligned} F &:=\left\{\begin{pmatrix}\prod_i A_i^{x_i} & V \\ 0 & D \end{pmatrix} \middle|\; x_i\in {\ensuremath{\mathbb{R}}}, D\in \operatorname{SL}(T,{\ensuremath{\mathbb{Z}}}), V\in {\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T\right\} \\ &=({\ensuremath{\mathbb{R}}}^{s-1}\times \operatorname{SL}(T,{\ensuremath{\mathbb{Z}}}) )\ltimes ({\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T). \end{aligned}$$ This group is generated by $$\Sigma_F:=\{A_i^x\mid x\in {\ensuremath{\mathbb{R}}}\}\cup \{u(V)\mid V\in {\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T\}\cup \Sigma_T.$$ Let $\epsilon \le 1/2$ be sufficiently small that $H_{S,T}\subset G(\epsilon)$. Since $G(\epsilon)$ is contractible and $F\subset G(\epsilon)$, words in $\Sigma_F^*$ correspond to curves in $G(\epsilon)$. We will show that $$\delta_{G(\epsilon)}(\gamma {\widehat}{u}(V)\gamma^{-1},{\widehat}{u}(VN^{-1}))\le O(\ell(\omega)^2).$$ Words in $\Sigma_F^*$ satisfy certain relations which correspond to discs in $G(\epsilon)$. In particular, note that if $\sigma\in \Sigma_T$, $|x|\le 1$, and $\|W\|_2\le 1$, then $$\label{eq:commute}[\sigma, A_k^x]$$ and $$\label{eq:conj}\sigma u(W)\sigma^{-1}u(W\sigma^{-1})^{-1}$$ are both closed curves of bounded length. Since $G(\epsilon)$ is contractible, their filling areas are bounded, and we can think of them as “relations” in $F$. Let $C=\log_{\min_k\{\lambda_k\}} (p+1)$, and let $z=C\ell(\gamma)+l_i(x)$. This choice of $z$ ensures that $$\|\lambda_i^{-z} V N\|_2\le 1.$$ Indeed, it ensures that if $d_{\operatorname{SL}(T,{\ensuremath{\mathbb{Z}}})}(I,N')\le \ell(\gamma)$, then $$\|\lambda_i^{-z} V N'\|_2\le 1.$$ Furthermore, $z=O(\ell(\omega))$. 
We will construct a homotopy which lies in $G(\epsilon)$ and goes through the stages $$\begin{aligned} \omega_1&=\gamma {\widehat}{u}(V)\gamma^{-1} \\ \omega_2&=\gamma A_i^{z} u(\lambda_i^{-z} V) A_i^{-z} \gamma^{-1} \\ \omega_3&= A_i^{z} \gamma u(\lambda_i^{-z} V) \gamma^{-1} A_i^{-z} \\ \omega_4&= A_i^{z} u(\lambda_i^{-z} V N^{-1}) A_i^{-z} \\ \omega_5&= {\widehat}{u}(VN^{-1}). \end{aligned}$$ Each stage is a word in $\Sigma_F^*$ and so corresponds to a curve in $G(\epsilon)$. We can construct a homotopy between $\omega_1$ and $\omega_2$ and between $\omega_4$ and $\omega_5$ using Thm. \[thm:HDehn\]. We need to construct homotopies between $\omega_2$ and $\omega_3$ and between $\omega_3$ and $\omega_4$. We can transform $\omega_2$ to $\omega_3$ by applying (\[eq:commute\]) at most $O(\ell(\omega)^2)$ times. This corresponds to a homotopy with area $O(\ell(\omega)^2)$. Similarly, we can transform $\omega_3$ to $\omega_4$ by applying (\[eq:conj\]) at most $O(\ell(\omega))$ times, corresponding to a homotopy of area $O(\ell(\omega))$. Combining all of these homotopies, we find that $$\delta_{G(\epsilon)}(\gamma {\widehat}{u}(V)\gamma^{-1},{\widehat}{u}(VN^{-1}))\le O(\ell(\omega)^2),$$ as desired. We can use this case to generalize to the case $V=\sum_{i,j}x_{ij} v_i\otimes w_j$ and $\gamma\in \Sigma_T^*$. By applying the case to each term of ${\widehat}{u}(V)$, we obtain a homotopy of area $O(\ell(\omega)^2)$ from $\gamma {\widehat}{u}(V)\gamma^{-1}$ to $$\prod_{i,j}{\widehat}{u}(x_{ij} v_i\otimes w_j N^{-1}).$$ This is a curve in $H_{S,T}$ of length $O(\ell(\omega))$ which connects $I$ and $u(VN^{-1})$. By Thm. \[thm:HDehn\], there is a homotopy between this curve and ${\widehat}{u}(VN^{-1})$ of area $O(\ell(\omega)^2)$.
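The last step uses only the abelian-group structure of the block unipotents, $u(V)u(W)=u(V+W)$, which is why the concatenation of the ${\widehat}{u}(x_{ij} v_i\otimes w_j N^{-1})$ ends at $u(VN^{-1})$. A quick numeric check of this identity, in an illustrative $2+2$ block split:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def u(V):
    """4x4 block-unipotent [[I, V], [0, I]] for a 2x2 block V."""
    m = [[1 if r == col else 0 for col in range(4)] for r in range(4)]
    for r in range(2):
        for col in range(2):
            m[r][col + 2] = V[r][col]
    return m

V = [[3, -2], [1, 4]]
W = [[5, 1], [2, -1]]
V_plus_W = [[V[r][c] + W[r][c] for c in range(2)] for r in range(2)]
```

In particular $u(V)$ and $u(W)$ commute, so the order of the factors in the product above is irrelevant to its endpoint.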
When $\gamma\in \Sigma_S^*$, we instead let $F$ be the group $$\begin{aligned} F&:=\left\{\begin{pmatrix}D & V \\ 0 & \prod_i B_i^{x_i} \end{pmatrix} \middle|\; x_i\in {\ensuremath{\mathbb{R}}}, D\in \operatorname{SL}(S,{\ensuremath{\mathbb{Z}}}), V\in {\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T\right\} \\ &=(\operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})\times {\ensuremath{\mathbb{R}}}^{t-1} )\ltimes ({\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T). \end{aligned}$$ Here, ${\widehat}{u}(V)$ is not a word in $F$, but since $\#T\ge 2$, we can replace the $A_i$ with the $B_i$ in the construction of ${\widehat}{u}(V)$. This results in shortcuts ${\widehat}{u}'(V)$ in the alphabet $$\{B_i^x\mid x\in {\ensuremath{\mathbb{R}}}\}\cup\{u(V)\mid V\in {\ensuremath{\mathbb{R}}}^S\otimes {\ensuremath{\mathbb{R}}}^T\}.$$ These are curves in $H_{S,T}$ which represent $u(V)$ and have length $O(\log\|V\|_2)$, so by Thm. \[thm:HDehn\], there is a homotopy of area $O((\log\|V\|_2)^2)$ between ${\widehat}{u}'(V)$ and ${\widehat}{u}(V)$. The same argument as in the case $\gamma\in \Sigma_T^*$ shows that $$\delta_{G(\epsilon)}(\gamma {\widehat}{u}'(V)\gamma^{-1},{\widehat}{u}'(MV))=O(\ell(\omega)^2).$$ Replacing ${\widehat}{u}'(V)$ with ${\widehat}{u}(V)$ and ${\widehat}{u}'(MV)$ with ${\widehat}{u}(MV)$ adds area $O(\ell(\omega)^2)$, so $$\delta_{G(\epsilon)}(\gamma {\widehat}{u}(V)\gamma^{-1},{\widehat}{u}(MV))=O(\ell(\omega)^2).$$ If $\gamma\in (\Sigma_S\cup \Sigma_T)^*$, and $\gamma_S\in \Sigma_S^*$ and $\gamma_T\in \Sigma_T^*$ are the words obtained by deleting all the letters in $\Sigma_T$ and $\Sigma_S$ respectively, then $\delta_G(\gamma,\gamma_S\gamma_T)=O(\ell(\omega)^2)$.
We can construct a homotopy from $\gamma {\widehat}{u}(V)\gamma^{-1}$ to ${\widehat}{u}(MVN^{-1})$ going through the steps $$\begin{aligned} \gamma {\widehat}{u}(V)\gamma^{-1} & \to \gamma_S \gamma_T {\widehat}{u}(V)\gamma_T^{-1} \gamma_S^{-1}\\ &\to \gamma_S {\widehat}{u}(VN^{-1}) \gamma_S^{-1}\\ &\to {\widehat}{u}(MVN^{-1}). \end{aligned}$$ This homotopy has area $O(\ell(\omega)^2)$. Recall that ${\widehat}{e}_{ij;S,T}(x)$ is an approximation of a curve ${\widehat}{u}(x z_i\otimes z_j)$; we write this curve as ${\widehat}{u}_{S,T}(x z_i\otimes z_j)$ to distinguish curves in different solvable subgroups. \[lem:shortEquiv\] If $p\ge 5$, $i\in S,S'$ and $j\in T,T'$, where $2\le \#S,\#S'\le p-2$, then $$\delta_{\Gamma}({\widehat}{e}_{ij;S,T}(x), {\widehat}{e}_{ij;S',T'}(x))=O((\log |x|)^2).$$ Case 1: Let $V=x z_i\otimes z_j$. We first consider the case that $S=S'$. Both ${\widehat}{u}_{S,T}(V)$ and ${\widehat}{u}_{S',T'}(V)$ are curves in $H_{S,S^c}$ for $S^c$ the complement of $S$. Since $p\ge 5$, Thm. \[thm:HDehn\] states that $H_{S,S^c}$ has quadratic Dehn function, so the lemma follows. In particular, $$\delta_{\Gamma}({\widehat}{e}_{ij;S,T}(x), {\widehat}{e}_{ij;S,\{j\}}(x))=O((\log |x|)^2).$$ Case 2: Let $S\subset S'$, $\#S'\ge 3$, $T\subset T'$, and $\#T'\ge 2$. Let $\{A_i\}$ be as in the definition of $H_{S,T}$, with eigenvectors $v_i$ and let $\{A'_i\}\in \operatorname{SL}(S',{\ensuremath{\mathbb{Z}}})$ be the set of independent commuting matrices used in defining $H_{S',T'}$. Recall that ${\widehat}{u}_{S,T}(V)$ is the concatenation of curves $\gamma_i$ of the form $$A_i^{c_i} u(x_i v_i\otimes z_j) A_i^{-c_i}$$ where $c_i\in {\ensuremath{\mathbb{Z}}}$ and $|x_i|\le 1$.
Since $A_i\in \operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})\subset \operatorname{SL}(S',{\ensuremath{\mathbb{Z}}})$, each of these curves satisfies the hypotheses of Lemma \[lem:xiConj\] for $S'$ and $T'$, and so there is a homotopy of area $O((\log|x|)^2)$ between $\gamma_i$ and $${\widehat}{u}_{S',T'}(\lambda_i^{c_i} x_i v_i\otimes z_j).$$ Each of these curves lies in $H_{S',T'}$, and since ${\widehat}{u}_{S',T'}(V)$ also lies in $H_{S',T'}$ and $H_{S',T'}$ has quadratic Dehn function, $$\delta_{\Gamma}({\widehat}{e}_{S,T}(V),{\widehat}{e}_{S',T'}(V))=O((\log |x|)^2).$$ Combining these two cases proves the lemma, as follows. First, we construct a homotopy between ${\widehat}{e}_{S,T}(V)$ and a word of the form ${\widehat}{e}_{\{i,d\},\{j\}}(V)$. If $\#S=2$, we can use case 1. Otherwise, let $d\in S$ be such that $d\ne i$. We can construct a homotopy going through the stages $${\widehat}{e}_{S,T}(V)\to {\widehat}{e}_{S,S^c}(V)\to {\widehat}{e}_{\{i,d\},\{j\}}(V).$$ The second step is an application of case 2, possible because $\{i,d\}\subset S$, $\#S\ge3$, and $\{j\}\subset S^c$. Similarly, we can construct a homotopy between ${\widehat}{e}_{S',T'}(V)$ and a word of the form ${\widehat}{e}_{\{i,d'\},\{j\}}(V)$. If $d=d'$, we’re done. Otherwise, we can use case 2 to construct homotopies between each word and ${\widehat}{e}_{\{i,d,d'\},\{i,d,d'\}^c}(V)$. Using these lemmas, we can give fillings for a wide variety of curves; note that (\[lem:infPres:add\])–(\[lem:infPres:commute\]) are versions of the Steinberg relations. \[lem:infPres\] If $p\ge 5$ and $x,y\in {\ensuremath{\mathbb{Z}}}{{-}}\{0\}$, then 1. \[lem:infPres:add\] If $1\le i,j\le p$ and $i\ne j$, then $$\delta_{\Gamma}({\widehat}{e}_{ij}(x){\widehat}{e}_{ij}(y),{\widehat}{e}_{ij}(x+y))=O((\log |x|+\log |y|)^2).$$ In particular, $$\delta_{\Gamma}({\widehat}{e}_{ij}(x){\widehat}{e}_{ij}(-x))=O((\log |x|)^2).$$ 2.
\[lem:infPres:multiply\] If $1\le i,j,k\le p$ and $i$, $j$, and $k$ are distinct, then $$\delta_{\Gamma}([{\widehat}{e}_{ij}(x),{\widehat}{e}_{jk}(y)],{\widehat}{e}_{ik}(xy))= O((\log |x|+\log |y|)^2).$$ 3. \[lem:infPres:commute\] If $1\le i,j,k,l\le p$, $i\ne l$, and $j\ne k$, then $$\delta_{\Gamma}([{\widehat}{e}_{ij}(x),{\widehat}{e}_{kl}(y)])=O((\log |x|+\log |y|)^2).$$ 4. \[lem:infPres:swap\] Let $1\le i,j,k,l\le p$, $i\ne j$, and $k\ne l$, and $$s_{ij}=e_{ji}^{-1}e_{ij}e_{ji}^{-1},$$ so that $s_{ij}$ represents $$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\in\operatorname{SL}(\{i,j\},{\ensuremath{\mathbb{Z}}}).$$ Then $$\delta_{\Gamma}(s_{ij} {\widehat}{e}_{kl}(x) s^{-1}_{ij},{\widehat}{e}_{\sigma(k)\sigma(l)}(\tau(k,l)x))=O((\log |x|)^2),$$ where $\sigma$ is the permutation switching $i$ and $j$, and $\tau(k,l)=-1$ if $k=i$ or $l=i$ and $1$ otherwise. 5. \[lem:infPres:diag\] If $b=\operatorname{diag}(b_1,\dots,b_p)$, then $$\delta_{\Gamma}(b {\widehat}{e}_{ij}(x) b^{-1},{\widehat}{e}_{ij}(b_i b_j x))=O((\log |x|)^2).$$ For part \[lem:infPres:add\], note that $${\widehat}{e}_{ij}(x){\widehat}{e}_{ij}(y){\widehat}{e}_{ij}(x+y)^{-1}$$ is within bounded distance of a closed curve in $H_{\{j\}^c,\{j\}}$ of length $O(\log|x|+\log|y|)$. Thus part \[lem:infPres:add\] of the lemma follows from Thm. \[thm:HDehn\]. For part \[lem:infPres:multiply\], let $d\not \in \{i,j,k\}$ and let $S=\{i,j,d\}$, so that ${\widehat}{e}_{ij;\{i,d\},\{j\}}(x)$ is a word in $\operatorname{SL}(S,{\ensuremath{\mathbb{Z}}})$.
We construct a homotopy going through the stages $$\begin{aligned} & [{\widehat}{e}_{ij}(x),{\widehat}{e}_{jk}(y)]{\widehat}{e}_{ik}(xy)^{-1} &\\ & [{\widehat}{e}_{ij;\{i,d\},\{j\}}(x),{\widehat}{u}_{S,\{k\}}(y z_{j}\otimes z_{k})]{\widehat}{e}_{ik;S,\{k\}}(xy)^{-1} & \text{by Lem.~\ref{lem:shortEquiv}}\\ & {\widehat}{u}_{S,\{k\}}((xy z_i+y z_{j})\otimes z_{k}){\widehat}{u}_{S,\{k\}}(y z_{j}\otimes z_{k})^{-1}{\widehat}{e}_{ik;S,\{k\}}(xy)^{-1}& \text{by Lem.~\ref{lem:xiConj}}\\ & {\varepsilon}& \text{by Thm.~\ref{thm:HDehn}} \end{aligned}$$ All these homotopies have area $O((\log |x|+\log |y|)^2)$. For part \[lem:infPres:commute\], we let $S=\{i,j,d\}$, $T=\{k,l\}$, and use the same techniques to construct a homotopy going through the stages $$\begin{aligned} & [{\widehat}{e}_{ij}(x),{\widehat}{e}_{kl}(y)]\\ & [{\widehat}{e}_{ij;S,T}(x),{\widehat}{e}_{kl;S,T}(y)] & \text{by Lem.~\ref{lem:shortEquiv}}\\ & {\varepsilon}& \text{by Thm.~\ref{thm:HDehn}} \end{aligned}$$ This homotopy has area $O((\log |x|+\log |y|)^2)$. Part \[lem:infPres:swap\] breaks into several cases depending on $k$ and $l$. When $i,j,k,$ and $l$ are distinct, the result follows from part \[lem:infPres:commute\], since $s_{ij}=e_{ji}^{-1}e_{ij}e_{ji}^{-1}$, and we can use part \[lem:infPres:commute\] to commute each letter past ${\widehat}{e}_{kl}(x)$. If $k=i$ and $l\ne j$, let $d,d'\not\in \{i,j,l\}$, $d\ne d'$, and let $S=\{i,j,d\}$ and $T=\{l,d'\}$. There is a homotopy from $$s_{ij} {\widehat}{e}_{il}(x) s^{-1}_{ij}{\widehat}{e}_{jl}(-x)^{-1}$$ to $$s_{ij} {\widehat}{u}_{S,T}(x z_i\otimes z_l) s^{-1}_{ij}{\widehat}{u}_{S,T}(-x z_j\otimes z_l)^{-1}$$ of area $O( (\log |x|)^2),$ and since $s_{ij}\in \Sigma_S^*$, the proposition follows by an application of Lemma \[lem:xiConj\]. A similar argument applies to the cases $k=j$ and $l\ne i$; $k\ne i$ and $l= j$; and $k\ne j$ and $l= i$. If $(k,l)=(i,j)$, let $d, d'\not \in \{i,j\}$.
There is a homotopy going through the stages $$\begin{aligned} & s_{ij} {\widehat}{e}_{ij}(x) s^{-1}_{ij} & \\ & s_{ij} [e_{id},{\widehat}{e}_{dj}(x)] s^{-1}_{ij}& \text{ by part \ref{lem:infPres:multiply}}\\ & [s_{ij}e_{id}s^{-1}_{ij},s_{ij}{\widehat}{e}_{dj}(x)s^{-1}_{ij}]& \text{ by free insertion}\\ & [e_{jd}^{-1},{\widehat}{e}_{di}(x)] & \text{ by previous cases}\\ & {\widehat}{e}_{ji}(-x) & \text{ by part \ref{lem:infPres:multiply}} \end{aligned}$$ and this homotopy has area $O( (\log |x|)^2)$. One can treat the case $(k,l)=(j,i)$ the same way. Since any diagonal matrix in $\Gamma$ is the product of at most $p$ elements $s_{ij}$, part \[lem:infPres:diag\] follows from part \[lem:infPres:swap\]. This lemma allows us to fill shortenings of curves in nilpotent subgroups of $\Gamma$ efficiently. \[lem:shortNP\] Let $P=U(S_1,\dots, S_s)\in {\ensuremath{\mathcal{P}}}$, let $w_i= {\widehat}{e}_{a_ib_i}(x_i)$ and let $w=w_1\dots w_d$ for some $(a_i,b_i)\in \chi(N_P)$. Let $h=\max_i\{\log |x_i|,1\}$. If $w$ represents the identity, then $\delta_G(w)=O(d^3h^2)$. We first describe a normal form for elements of $N_P$. Let $$\chi_k(N_P)=\{(a,b)\mid a\in S_k, (a,b)\in \chi(N_P)\}.$$ The set $\{e_{ab}\mid (a,b)\in \chi_k(N_P)\}$ generates an abelian subgroup of $\Gamma$. If $n\in N_P$, let $n_{ab}$ be the $(a,b)$-coefficient of $n$ and let $$\kappa_q(n)=\prod_{(a,b)\in \chi_q(N_P)} {\widehat}{e}_{ab}(n_{ab}).$$ Let $$\nu_P(n)=\kappa_s(n)\kappa_{s-1}(n)\dots \kappa_1(n).$$ This is a word representing $n$, and it has length $O(\log \|n\|_2)$. Let $n_i\in \Gamma$ be the element represented by $w_1\dots w_i$. There is a $c$ such that $\log \|n_i\|_2\le chd$. The words $\nu_P(n_i)$ connect the identity to points on $w$, so we can fill $w$ by filling the wedges $\nu_P(n_{i-1})w_{i} \nu_P(n_{i})^{-1}$; we consider this filling as a homotopy between $\nu_P(n_{i-1})w_{i}$ and $\nu_P(n_{i})$.
Note that if $a_i\in S_k$, then $$\kappa_s(n_{i-1})\dots\kappa_{k+1}(n_{i-1})=\kappa_s(n_{i})\dots\kappa_{k+1}(n_{i}),$$ so it suffices to transform $$\kappa_k(n_{i-1})\dots\kappa_{1}(n_{i-1}) w_i\to \kappa_k(n_{i})\dots\kappa_{1}(n_{i}).$$ We can use parts \[lem:infPres:multiply\] and \[lem:infPres:commute\] of Lemma \[lem:infPres\] to move $w_i$ to the left. That is, we repeatedly replace subwords of the form ${\widehat}{e}_{ab}(x)w_i$ with $w_i{\widehat}{e}_{ab}(x)$ if $b\ne a_i$ and with $w_i{\widehat}{e}_{ab_i}(x x_i){\widehat}{e}_{ab}(x)$ if $b= a_i$. We always have $a\in S_j$ for some $j\le k$, so $a<b_i$. Each step has cost $O((\log |x|+\log |x_i|)^2)$. Since $\log|x|\le \log \|n_i\|_2\le chd$, this is $O(h^2d^2)$. We repeat this process until we have moved $w_i$ to the left end of the word, which takes at most $p^2$ steps and has total cost $O(h^2d^2)$. The result is a word of the form $\kappa'_k\dots \kappa'_1$ where $\kappa'_q$ is a product of words of the form ${\widehat}{e}_{ab}(x)$ for $(a,b)\in \chi_q(N_P)$. Furthermore, the $\kappa'_q$ are obtained from the $\kappa_q(n_i)$ by inserting at most $p^2$ additional words in all (at most one word is added in each step, in addition to the original $w_i$). Since the elements represented by the terms of $\kappa'_q$ all commute, we can use parts \[lem:infPres:add\] and \[lem:infPres:commute\] of Lemma \[lem:infPres\] to rearrange the terms in each $\kappa'_q$ and transform $\kappa'_k\dots \kappa'_1$ into $\kappa_k(n_{i})\dots\kappa_{1}(n_{i})$. This takes at most $4p^4$ applications of part \[lem:infPres:commute\] and at most $2p^2$ applications of part \[lem:infPres:add\], each of which has cost $O(h^2d^2)$. 
Thus $$\delta_{\Gamma}(\kappa'_k\dots \kappa'_1, \kappa_k(n_{i})\dots\kappa_{1}(n_{i}))=O(h^2d^2),$$ and so $$\delta_{\Gamma}(\nu_P(n_{i-1})w_{i} \nu_P(n_{i})^{-1})=O(h^2d^2).$$ To fill $w$, we need to fill $d$ such wedges, so $\delta_{\Gamma}(w)=O(h^2d^3).$ In particular, if $d$ is fixed, then $\delta_{\Gamma}(w)=O(h^2)$. Finally, we use these tools to fill the curves that occur as $\bar{f}_1(\partial\Delta)$. If $\Delta$ is a 2-cell in $\tau$, $$\delta_{K_\Gamma}(\bar{f}_1(\partial\Delta))=O(\ell^2).$$ By Lemma \[lem:edgeChar\], there is a $c$ depending only on $p$ such that we can write the word corresponding to $\bar{f}_1(\partial\Delta)$ as $g=g_1\dots g_d$, where $d\le c$ and each $g_i$ is either an element of $\Sigma\cap M_{P_\Delta}$ or a word ${\widehat}{e}_{ab}(x)$ where $(a,b)\in \chi(N_{P_\Delta})$ and $|x|\le c e^{c\ell}$. Write $P=P_\Delta$, and let $x_i\in P$ be the element represented by $g_1\dots g_i$; by the hypotheses, there is a $c'$ independent of $\alpha$ such that $\|x_i\|_2\le c' e^{c'\ell}$. Let $x_i=m_in_i$ for some $m_i\in M_P$ and $n_i\in N_P$. Then $d_{\Gamma}(I,m_i)\le c$, and there is a $c''$ independent of $\alpha$ such that $\|n_i\|_2 \le c'' e^{c'' \ell}$. Let $\gamma_i$ be a geodesic word representing $m_i$, and let $w_i=\gamma_i\nu_P(n_i)$. The $w_i$ are words of length $O(\ell(\alpha))$ connecting points on $g$ to the identity, and we can get a filling of $g$ by filling the wedges $w_ig_{i+1}w_{i+1}^{-1}$. The filling depends on $g_{i+1}$. If $g_{i+1}\in \Sigma_{M_P}$, then $$w_ig_{i+1}w_{i+1}^{-1}=\gamma_i \nu_P(n_i) g_{i+1} \nu_P(n_{i+1})^{-1}\gamma_{i+1}^{-1},$$ and $g_{i+1}^{-1}n_ig_{i+1}=n_{i+1}$. Lemma \[lem:infPres\] allows us to move $g_{i+1}$ past the individual terms of $\nu_P(n_i)$, using $O(\ell(\alpha)^2)$ steps.
After this, we have a word of the form $$\gamma_i g_{i+1} h_1\dots h_k \nu_P(n_{i+1})^{-1}\gamma_{i+1}^{-1},$$ where $h_i={\widehat}{e}_{a_ib_i}(x_i)$ for some $(a_i,b_i)\in \chi(N_P)$, $|x_i| \le c'' e^{c''\ell}$, and $k\le p^2$. By Lemma \[lem:shortNP\], $h_1\dots h_k \nu_P(n_{i+1})^{-1}$ can be reduced to the trivial word at cost $O(\ell^2)$. This leaves us with the word $\gamma_i g_{i+1} \gamma_{i+1}^{-1}$; this has length at most $2c+1$ and can be reduced to the trivial word at bounded cost. If $g_{i+1}={\widehat}{e}_{ab}(x)$ for $(a,b)\in \chi(N_P)$, then $\gamma_i=\gamma_{i+1}$, and $\nu_P(n_i) g_{i+1} \nu_P(n_{i+1})^{-1}$ represents the identity. This satisfies the hypotheses of Lemma \[lem:shortNP\], and can be reduced to the trivial word at cost $O(\ell(\alpha)^2)$. This leaves $\gamma_i\gamma_{i+1}^{-1}$; as before, this has length at most $2c$ and can thus be reduced to the trivial word at bounded cost. Thus the cost of filling each wedge is $O(\ell^2)$. Since there are at most $c$ wedges, the cost of filling $w$ is $O(\ell^2)$. Since there are $O(\ell^2)$ such 2-cells to fill, we can fill $\bar{f}_1(\partial\tau)$ with area $O(\ell^4)$. Furthermore, $\bar{f}_1(\partial\tau)$ is a bounded distance from $w$ in $K_\Gamma$, so $$\delta_\Gamma(w)\le \delta_{K_\Gamma}(w,\bar{f}_1(\partial\tau))+\delta_{K_\Gamma}(\bar{f}_1(\partial\tau))=O(\ell^4).$$ This proves Theorem \[thm:mainthm\]. [10]{} A. Borel and Harish-Chandra, *Arithmetic subgroups of algebraic groups*, Ann. of Math. (2) **75** (1962), 485–535. M. R. Bridson, *The geometry of the word problem*, Invitations to geometry and topology, Oxf. Grad. Texts Math., vol. 7, Oxford Univ. Press, Oxford, 2002, pp. 29–91. J. Burillo and J. Taback, *Equivalence of geometric and combinatorial [D]{}ehn functions*, New York J. Math. **8** (2002), 169–179 (electronic). Y. de Cornulier, personal communication, 2008. J. T.
Ding, *A proof of a conjecture of [C]{}. [L]{}. [S]{}iegel*, J. Number Theory **46** (1994), no. 1, 1–11. C. Dru[ţ]{}u, *Filling in solvable groups and in lattices in semisimple groups*, Topology **43** (2004), no. 5, 983–1033. D. B. A. Epstein, J. W. Cannon, D. F. Holt, S. V. F. Levy, M. S. Paterson, and W. P. Thurston, *Word processing in groups*, Jones and Bartlett Publishers, Boston, MA, 1992. H. Federer and W. H. Fleming, *Normal and integral currents*, Ann. of Math. (2) **72** (1960), 458–520. M. Gromov, *Asymptotic invariants of infinite groups*, Geometric group theory, Vol. 2 (Sussex, 1991), London Math. Soc. Lecture Note Ser., vol. 182, Cambridge Univ. Press, Cambridge, 1993, pp. 1–295. L. Ji, *Metric compactifications of locally symmetric spaces*, Internat. J. Math. **9** (1998), no. 4, 465–491. L. Ji and R. MacPherson, *Geometry of compactifications of locally symmetric spaces*, Ann. Inst. Fourier (Grenoble) **52** (2002), no. 2, 457–559. E. Leuzinger, *On polyhedral retracts and compactifications of locally symmetric spaces*, Differential Geom. Appl. **20** (2004), no. 3, 293–318. , *Tits geometry, arithmetic groups, and the proof of a conjecture of [S]{}iegel*, J. Lie Theory **14** (2004), no. 2, 317–338. E. Leuzinger and Ch. Pittet, *Isoperimetric inequalities for lattices in semisimple [L]{}ie groups of rank [$2$]{}*, Geom. Funct. Anal. **6** (1996), no. 3, 489–511. A. Lubotzky, S. Mozes, and M. S. Raghunathan, *Cyclic subgroups of exponential growth and metrics on discrete groups*, C. R. Acad. Sci. Paris Sér. I Math. **317** (1993), no. 8, 735–740. J. Milnor, *Introduction to algebraic [$K$]{}-theory*, Princeton University Press, Princeton, N.J., 1971, Annals of Mathematics Studies, No. 72. Ch. Pittet, *Isoperimetric inequalities in nilpotent groups*, J. London Math. Soc. (2) **55** (1997), no. 3, 588–600. T. R. Riley, *Navigating in the [C]{}ayley graphs of [${\rm SL}\sb N(\Bbb Z)$]{} and [${\rm SL}\sb N(\Bbb F\sb p)$]{}*, Geom. 
Dedicata **113** (2005), 215–229.
--- abstract: | In this paper we derive ages and masses for 276 clusters in the merger galaxy NGC 3256. This was achieved by taking accurate photometry in four wavebands from archival HST images. Photometric measurements are compared to synthetic stellar population (SSP) models to find the most probable age, mass and extinction. The cluster population of NGC 3256 reveals an increase in the star formation rate over the last 100 million years and the initial cluster mass function (ICMF) is best described by a power law relation with slope $\alpha = 1.85 \pm 0.12$. Using the observed cluster population for NGC 3256 we calculate the implied mass of clusters younger than 10 million years old, and convert this to a cluster formation rate over the last 10 million years. Comparison of this value with the star formation rate (SFR) indicates the fraction of stars found within bound clusters after the embedded phase of cluster formation, $\Gamma$, is $22.9\% \pm^{7.3}_{9.8} $ for NGC 3256. We carried out an in-depth analysis into the errors associated with such calculations showing that errors introduced by the SSP fitting must be taken into account and an unconstrained metallicity adds to these uncertainties. Observational biases should also be considered. Using published cluster population data sets we calculate $\Gamma$ for six other galaxies and examine how $\Gamma$ varies with environment. We show that $\Gamma$ increases with the star formation rate density and can be described as a power law type relation of the form $\Gamma(\%) = (29.0\pm{6.0}) \Sigma_{SFR}^{0.24\pm0.04} ({M\ensuremath{_{\small{\sun}}}}\ yr^{-1}\ kpc^{-2})$. author: - | Q. E. Goddard$^{1}$[^1], N. Bastian$^{1}$ & R. C. Kennicutt$^{1}$\ $^{1}$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge. 
CB3 0HA title: On the Fraction of Star Clusters Surviving the Embedded Phase --- \[firstpage\] galaxies: structure - galaxies: stellar content - stars: formation - stars: clusters Introduction ============ Galaxy mergers produce some of the most extreme star formation events in the universe, inducing starbursts with intensities (i.e. star-formation rates; SFRs) orders of magnitude larger than in quiescent galaxies. In the distant universe these can be seen as Hyper-luminous Infrared galaxies [e.g. @verma02] with SFRs exceeding a few thousand solar masses per year. In the local universe, ongoing mergers like Arp 220 [e.g. @scoville98; @wilson06] and NGC 3256 [e.g. @zepf99; @trancho07b] have SFRs of a few tens to hundreds of solar masses per year, and due to their proximity, can be resolved with Hubble Space Telescope imaging. In these systems large numbers of super-star clusters are often found, with ages of 1-500 Myr and masses up to $\sim 8\times10^7$[M$_{\small{\sun}}$]{} [@maraston04]. We can think of star clusters as simple stellar populations which can be modelled relatively easily. This, combined with their high surface brightness that allows them to be easily detected, and their long lifetimes, makes them ideal tracers of the star formation history of a galaxy. NGC 3256 is a relatively nearby starburst galaxy that is clearly the result of a recent galactic merger. It displays two prominent tidal tails, thought to have been produced during the first encounter between two spiral galaxies approximately 500 million years ago [@zepf99; @english03]. @zepf99 catalogued over 1000 young star clusters in the main body of the merger using HST imaging. @trancho07b discovered three massive ($1-3 \times 10^5$ [M$_{\small{\sun}}$]{}) clusters within one of the tidal tails, whose ages and velocities place their formation within the tidal debris.
Additionally, @trancho07a (hereafter T07a) studied a further sample of 23 clusters in the main body of the remnant and found that the clusters had metallicities of $\sim1.5$ [Z$_{\small{\sun}}$]{}, masses in the range $2-4 \times 10^5 $[M$_{\small{\sun}}$]{} and ages from a few to 150 Myr. The current SFR of NGC 3256 is $\sim50$ [M$_\odot$yr$^{-1}$]{}[@bastian08], however its SFR appears to have been increasing for the past $\sim200$ Myr (see § \[sec:formation-history\]) and @trancho07a argue that it is likely to continue to increase in the future. The age distribution of clusters reveals the underlying star formation history of the host galaxy. The mass distribution of clusters gives information on the cluster initial mass function (CIMF), which is often approximated by a power-law of the form $N(M)\,dM \propto M^{-\alpha}\,dM$, with $\alpha \sim 2$ [@elmegreen97; @zhang99; @degrijs03b; @bik03]. More recently, it has been shown that the CIMF may have a truncation at high masses, being well described by a Schechter function [@schechter76] which is a power-law in the low-mass regime and has an exponential cutoff above a given mass, [M$_{\star}$]{}[@gieles06a; @bastian08; @larsen09; @gieles09]. In order to obtain accurate cluster ages and masses either spectroscopy or photometry may be used. Although spectra yield more accurate results, photometry is more applicable to large sample sets. In order to break the age-dust degeneracy, a cluster must be observed in at least four broad photometric bands and cover the Balmer break [@anders04]. By comparing the observed colours and magnitudes of each cluster to synthetic stellar population (SSP) models, we can estimate the age, mass and extinction of each cluster. This technique has been used on several cluster populations: the Antennae [@fall05; @anders07], NGC 1569 [@anders04] & M51 [@bastian05b].
If all stars are formed in clusters, then only a small percentage of them are still within clusters at the end of the embedded phase [@lada03]. For up to the first three million years of a cluster’s lifetime it remains embedded in the progenitor molecular cloud until stellar winds, ionising flux from massive stars, and stellar feedback expel the remaining gas [see @goodwin08 for a recent review]. In expelling this gas many clusters become unbound and subsequently disperse into the surrounding medium [@lada03], a process commonly described as ’infant mortality’. Even if clusters do survive the embedded phase intact, other disruption mechanisms may come into effect, such as stellar evolutionary mass loss, two-body relaxation and GMC encounters. These mechanisms generally operate over longer time-scales, being a few 10s of Myr for stellar evolution (e.g. @bastian09) and 100s of Myr for GMC encounters [@gieles06c], except when the GMC number density is particularly high (e.g. @lamers05). After a cluster disrupts, the stars disperse and become part of the galactic background. Comparing the fraction of light emitted from clusters to the total (i.e. background plus clusters) galactic emission gives an indication of the fraction of stars within bound clusters. @meurer95 estimated that $20-50\%$ of UV light in starburst galaxies comes from star clusters. @zepf99 calculated the fraction of light from clusters in the B band to be $15-20\%$ and half that in the I band for the galaxy NGC 3256. However the most comprehensive set of results comes from @larsen00, listing the fraction of light from clusters in both the U band (T$_L$(U)) and V band (T$_L$(V)) for 32 galaxies with varying star formation rates.
@larsen00 found that T$_L$(U) increases with the star formation rate of the host galaxy, with a stronger correlation with the star formation rate surface density, indicating that the host environment may influence the mode of star and/or cluster formation and how likely a cluster is to survive. Measuring the fraction of light from clusters is a relatively easy calculation assuming foreground stars can be eliminated from the sample. However, it is presently unknown, but calculable, how these values may be influenced by the presence of bursts of star formation in the past and differential extinction of younger clusters. Hence the fraction of light observed in clusters will be a (possibly complicated) combination of the fraction of stars formed in clusters, the star-formation history of the galaxy, the cluster disruption time-scale/rate, and the difference in the amount of extinction towards clusters and the field. In this paper we attempt to improve the situation by deriving ages and masses for clusters in the galaxy NGC 3256, which in turn is used to calculate a cluster formation rate (CFR) over the last ten million years. Comparing this inferred CFR to the star formation rate (SFR, measured by [H$\alpha$  ]{}fluxes or infrared luminosities) we compute the fraction of stars in clusters younger than ten million years which have survived the embedded phase intact, a value hereafter referred to as $\Gamma$ [@bastian08]. We pay particular attention to the possible sources of uncertainty which may affect these calculations. Using other data sets of cluster ages and masses for different galaxies we perform the same calculation for an additional six galaxies in an attempt to find how $\Gamma$ varies with environment. Throughout this paper we will define a “cluster” as a gravitationally bound, centrally concentrated group of stars that can be identified on high resolution optical imaging.
Hence, clusters refer to objects that have survived the transition from being embedded in their natal GMC to an exposed state. We assume that this process happens approximately 3 Myr after the cluster forms. We adopt a Hubble constant of $H_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$, which places NGC 3256 at a distance of 36.1 Mpc given it has a recession velocity relative to the local group of $+2804\pm6$ km s$^{-1}$. This corresponds to a distance modulus of 32.79. This paper is structured as follows: we begin in § \[sec:obs\] by introducing the dataset for NGC 3256 and the methods we used to determine the ages and masses of clusters in the galaxy. In § \[sec:gamma\_calc\] we detail how we calculated $\Gamma$ for NGC 3256 and go on to carefully examine all the possible sources of error in making such a calculation. In § \[sec:results\] we introduce other datasets taken from the literature and calculate $\Gamma$ for each. In § \[sec:implications\] we search for trends between $\Gamma$ and galactic properties and discuss the implications. Finally, in § \[sec:conclusions\] we present our conclusions and summarise our main results. Our Study of NGC 3256 {#sec:obs} ===================== The Data -------- We used archival Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) images of NGC 3256 across four filters, [*F330W*]{} (PI: Holland Ford; ID: 9300), [*F435W*]{} (PI: Alex Fillippenko; ID: 10272), [*F555W*]{} (ID: 9300) and [*F814W*]{} (ID: 9300). Details of all the images used can be found in Table \[tab:ims\]. The High Resolution Camera (HRC) images all cover the same area apart from the HRC-[*F435W*]{} image, which is offset. To ensure we were examining the same area in all four bands we used the ACS Wide Field Camera (WFC) [*F435W*]{} (PI: Aaron S. Evans; ID: 10592) image over the HRC [*F435W*]{} image. The WFC image has the advantage of a longer exposure time, though the pixel scale is slightly larger, $0.05\arcsec$ compared to $0.027\arcsec$.
Throughout this paper we will refer to the [*F330W*]{}, [*F435W*]{}, [*F555W*]{}, and [*F814W*]{} in the Cousins-Johnson [*U*]{}, [*B*]{}, [*V*]{} and [*I*]{} notation, although we stress that no transformations have been applied.

  -------- -------- ---------- ------------ ----------
  Filter   Camera   Exposure   Aperture     Name$^a$
                    Time (s)   Correction
  -------- -------- ---------- ------------ ----------
  F330W    HRC      2000       0.418        U
  F435W    HRC      840        -            -
  F555W    HRC      760        0.397        V
  F814W    HRC      660        0.405        I
  F435W    WFC      1320       0.328        B
  -------- -------- ---------- ------------ ----------

  : Details of images[]{data-label="tab:ims"}

$^a$ We refer to the filters by their closest Cousins-Johnson filter name, however no transformations have been applied.

Cluster Selection & Photometry ------------------------------ We selected star clusters using the [*SExtractor* ]{}program [@bertin96], run on the B-band WFC image in order to maximise the number of clusters detected. We changed the [*SExtractor* ]{}detection parameters to maximise the number of objects found, attempting to avoid blending of objects in the most crowded regions. We set a minimum detection limit of $4\sigma$ above the local background as determined by [*SExtractor* ]{}, and a minimum object area of 8 pixels. Co-ordinates of the clusters were translated from the B-band image into co-ordinates for the other bands using the *GEOMAP* and *GEOXYTRAN* routines within *IRAF*. Photometry across all images was carried out using the *APER* routine from the *DAOPHOT* package in *IRAF*. We did not use the photometry from [*SExtractor* ]{}to ensure a consistent and unbiased method in all wavebands. We used an aperture of radius $7.0$ pixels for the HRC images and $3.5$ pixels for the WFC image. An annulus with a radius of 8 pixels and a thickness of one pixel was used to measure the local background in the HRC images; the radii and thickness of the annulus were adjusted accordingly for the WFC image.
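As an illustration, the aperture measurement described above (a circular aperture with the local background taken as the median in a thin surrounding annulus) can be sketched in a few lines of NumPy. This is a stand-in for the *DAOPHOT*/*APER* routine, not the IRAF code actually used; the function name and toy image are our own.

```python
import numpy as np

def aperture_flux(image, x0, y0, r_ap=7.0, r_in=8.0, width=1.0):
    """Background-subtracted flux in a circular aperture, APER-style.

    The local background per pixel is the median within an annulus of
    inner radius r_in and thickness `width` (8 px and 1 px for the HRC
    images in this study)."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    in_ap = r <= r_ap
    in_ann = (r >= r_in) & (r < r_in + width)
    sky = np.median(image[in_ann])           # local background per pixel
    return image[in_ap].sum() - sky * in_ap.sum()

# Toy check: a flat background of 1 plus a bright pixel at the centre;
# the background should subtract away exactly.
img = np.ones((41, 41))
img[20, 20] += 100.0
flux = aperture_flux(img, 20, 20)
```

The aperture corrections of the next paragraph would then be applied to the magnitudes derived from such fluxes.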
Aperture corrections were calculated based on several bright, isolated and spatially resolved clusters, by comparing their flux in an aperture of 30 (HRC) pixels relative to our adopted 7 (HRC) pixel aperture. The resulting aperture corrections are shown in Table \[tab:ims\]. We corrected the photometry for foreground galactic extinction, using the extinction law of @savage79, and a dust extinction value of A$_{V} = 0.122$, taken from @schlegel98. In total we measured fluxes for 904 clusters in NGC 3256 across the U, B, V and I bands. Figure \[fig:ubvi\] shows a colour-colour diagram of clusters within a magnitude limit ($m_B < 21.0$). Figure \[fig:galim\] shows a colour-composite image of NGC 3256. To avoid overcrowding we have only marked the positions of the 276 clusters that have *good* fits to SSP models; more details on this selection can be found in § \[sec:good\]. Determination of Cluster Parameters {#sec:good} ----------------------------------- In order to determine the age, mass and local extinction for each cluster the photometric values were compared to those of cluster evolution models, in a similar fashion to @bik03 and @bastian05. We used the most recent GALEV simple stellar population (SSP) models [@anders03; @anders04]. Although SSP models exist based on both the Padova and Geneva isochrones, we have only used the Padova tracks to determine cluster parameters. The Geneva tracks display a feature known as the red loop and tend to produce a poor fit to data compared to the Padova tracks [@whitmore02]. The GALEV models have the advantage of being produced with colours matching the HST filters, which avoids errors associated with converting between HST filters and the standard Cousins-Johnson system. In addition, the GALEV models include gaseous emission lines and continuum emission. The youngest data points in the GALEV models are 4 Myr and 8 Myr, which can lead to poor fitting of the youngest clusters [@bastian05].
To avoid this we linearly interpolate between the youngest model ages for all filters. This produces SSP tracks which are regularly sampled at intervals of 0.02 in log time. The determined age of a young cluster is not bound to 4 or 8 Myr in this case and can take a more precise value with a better fit over the four wavebands. We have used the three-dimensional maximum likelihood fitting method developed by @bik03. The method is described in detail there, and so we only summarise the process in this paper. We also note that this method has been tested against colour-colour methods and has shown itself to be superior [@degrijs03a; @parmentier03]. In simple terms the method works as follows: the GALEV SSP models give a grid of colours as a function of age. We then apply extinction to each model in steps of 0.02 in E(B-V) to extend the grid. Each cluster then has its observed spectral energy distribution compared to the grid using a minimum $\chi^2$ test. The model with the age and extinction which produces the lowest $\chi^2$ is chosen, and a range of acceptable values is calculated. As the GALEV model magnitudes are scaled to a mass of $10^6$[M$_{\small{\sun}}$]{}, the difference between the observed magnitudes and model magnitudes can be converted to a mass, given the distance modulus and extinction. The accuracy of this fitting method is not studied directly here as it has been reported previously [@bastian05]; for the majority of the sample, errors on the age and mass are less than 0.3 dex. After obtaining the best fit for each cluster we sought to determine if the fit was good or not. In fitting cluster parameters a $\chi^2$ value is calculated and this is one way of identifying a *good* fit. However for bright objects with low photometric errors the $\chi^2$ value may be high and thus result in a *bad* fit. To avoid such instances we use the standard deviation, defined in Equation \[eqn:stddev\].
$$\label{eqn:stddev} \sigma^2 = \sum\limits_i \frac{(m_i^{mod} - m_i^{obs})^2}{n_{filters}}$$ The $\sigma^2$ value is summed over all four wavebands. The spectral energy distributions were compared by eye to the values predicted by the best-fitting models. This was done to establish a minimum acceptable value of $\sigma^2$, $\sigma_{min}^2$. We also computed a histogram of $\sigma^2$ for all sources, approximated the distribution with a Gaussian, and took the $\frac{1}{e}$ point. This value was very close to 0.03, the value of $\sigma_{min}^2$ determined by visual inspection. In the resulting analysis of the properties of the cluster system of NGC 3256 we only consider those clusters which pass the following criteria:

1. detected in all four wavebands, at least 5$\sigma$ above the background;

2. uncertainties in the magnitude in all wavebands $\le$ 0.2 mag;

3. well fitted by an SSP model ($\sigma^2 \le 0.03$).

The number of clusters which passed these rigorous selection criteria depends on the metallicity of the SSP model used. With a solar metallicity Padova model we accept 276 objects; with a twice-solar metallicity we accept 285 objects. Details of the 276 clusters accepted using the solar metallicity Padova tracks can be found with the electronic version of this article. It should be noted that the higher number of accepted objects produced by the twice-solar metallicity models is not an indication of the global metallicity of the galaxy. Metallicity is very hard to fit accurately with only photometry, as shown by @bastian05. @trancho07a found the metallicity to range between $1.1 - 1.7\ $[Z$_{\small{\sun}}$]{} for 23 clusters in NGC 3256. T07a presented optical spectroscopy of 23 clusters in NGC 3256. We have attempted to compare our results with those of T07a, however there are several problems that we encountered.
Firstly, the astrometry presented by T07a is not accurate enough to identify any one cluster from our study; we were forced to visually inspect images from the two studies and then attempted to find the most likely matching clusters. Secondly, several of the clusters presented by T07a are in fact complexes, comprised of several clusters. Out of the ten clusters in the T07a study that lie within the same field of view as our study we found that eight had corresponding clusters in our study which matched our criteria for a *good* cluster fit. The remaining two clusters from the T07a study were both complexes located in extremely crowded regions of the galaxy and could correspond to any one of several clusters in our study. Of the eight matched clusters, T07a state that six of these were emission line objects with reported ages of less than $10^{6.8}$ years. We find that the ages of seven of these clusters are also under $10^{6.8}$ years; the remaining object has an age between $10^{6.7}$ and $10^{6.9}$ years, which appears to be consistent with the findings of T07a. We have also been able to match two absorption line clusters from the T07a study; both of these clusters have reported ages that are consistent within their respective errors. Cluster Properties of NGC 3256 {#sec:clprop} ------------------------------ We can determine the properties and history of NGC 3256 by examining the clusters that reside within the galaxy. With the age, mass and extinction of the clusters known we can derive several important properties. Figure \[fig:agemass\] shows the age/mass distribution for NGC 3256 derived from the solar metallicity Padova SSP models. Immediately obvious from Figure \[fig:agemass\] is the lack of low mass, old clusters. This is due to the fact that clusters dim as they age and so eventually become fainter than our detection limit. We can also note the large number of clusters seen with ages below 10 Myr and over a large range of masses.
Although the cluster fitting method can create some observed structure in the age/mass diagram [@gieles05] it is unlikely to do so over all masses at low ages. We can conclude that there is a genuine over-density of clusters with ages below 10 Myr. This can be interpreted as a lack of older clusters, which could be due in part to the disruption of clusters, a point we shall address when we look at the cluster formation history in § \[sec:formation-history\]. Various mass limits are shown on Figure \[fig:agemass\] as horizontal dashed lines. Due to the effects of old clusters becoming fainter, higher mass limits result in a population which is complete up to a larger range of cluster ages. For example a mass limit of $\log(M/{M\ensuremath{_{\small{\sun}}}})>4.5$ would only be complete for clusters younger than 10 Myr. A mass limit of $\log(M/{M\ensuremath{_{\small{\sun}}}})>5.5$ would however be complete for much older clusters, those less than $\sim200$ Myr. We can compute the slope of the cluster mass function for clusters younger than 10 million years, which we have shown in Figure \[fig:massfn\]. This assumes the number of clusters of a given mass follows a power law distribution, like that in Equation \[eqn:mass\], where $N(M)\,dM$ is the number of clusters with masses between $M$ and $M + dM$, $\chi$ is a constant of normalisation and $\alpha$ is the slope of the power law. $$\label{eqn:mass} N(M)\,dM = \chi M^{-\alpha}\,dM$$ When calculating the slope of a mass distribution for any type of object the way in which the data are binned can affect the result [@maiz05]. To avoid such biases we have used bins with an equal number of objects in each. The minimum and maximum limits of each bin then depend on the masses of the objects within. We have chosen to calculate the mass function for young clusters with ages less than 10 Myr, in order to avoid any mass dependent cluster disruption effects, resulting in a complete sample.
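The equal-count binning and power-law fit just described can be sketched as follows. This is a minimal illustration on a stochastically sampled power-law population, not the code used in this work; function names and sample sizes are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

def powerlaw_masses(n, alpha=2.0, m_lo=1e2, m_hi=1e7):
    """Draw n masses from N(M)dM ~ M^-alpha by inverting the CDF."""
    g = 1.0 - alpha
    return (m_lo**g + rng.random(n) * (m_hi**g - m_lo**g)) ** (1.0 / g)

def mass_function_slope(masses, n_per_bin=50):
    """Slope of dN/dM using bins that each hold an equal number of
    clusters, so the bin edges follow the data rather than a fixed grid."""
    m = np.sort(masses)
    logm, logdn = [], []
    for i in range(len(m) // n_per_bin):
        chunk = m[i * n_per_bin:(i + 1) * n_per_bin]
        width = chunk[-1] - chunk[0]
        if width > 0:
            logm.append(np.log10(np.median(chunk)))
            logdn.append(np.log10(n_per_bin / width))  # dN/dM in this bin
    slope, _ = np.polyfit(logm, logdn, 1)
    return -slope   # the estimated alpha

alpha_est = mass_function_slope(powerlaw_masses(20000, alpha=2.0))
```

For an input slope of $\alpha = 2.0$ the fit recovers a value close to 2, with only a small bias from the widest, sparsely populated high-mass bins.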
This calculation produces the mass function shown in Figure \[fig:massfn\], where the data points are shown in black and the line of best fit is in red. The slope is fit to data points with masses above the limit $\log(M/{M\ensuremath{_{\small{\sun}}}}) > 4.3$, below which we see a turnover in the mass function. This common turnover in cluster mass functions is due to incompleteness in the sample for low mass clusters. The slope of the mass function for NGC 3256 was measured to be $\alpha = 1.85\pm0.12$, which is similar to other observed cluster populations that are generally in the range of $\alpha = 1.8 - 2.2$ [@degrijs03c; @mccrady07]. Cluster Formation History of NGC 3256 {#sec:formation-history} ------------------------------------- Ages determined by fitting photometric observations to SSP models can have large uncertainties, especially for faint objects with large photometric errors. In constructing a cluster formation history for NGC 3256 it is important to take the uncertainty in the cluster ages into account. We have constructed a $\frac{dN}{d\tau}$ distribution using the same method described by @gieles07. We do not reiterate the details laid out in @gieles07, only the central ideas. Each cluster has a contribution to the overall age distribution as defined by an asymmetrical Gaussian spread in $\log t$. This is calculated based on the minimum and maximum allowed ages of each cluster, calculated by the SSP fitting routine. In Figure \[fig:hist\] we show the smoothed age distribution results for two different lower mass limits, $10^{4.7}$ [M$_{\small{\sun}}$]{} (left panel) and $10^{5.7}$ [M$_{\small{\sun}}$]{} (right). The red shaded area shows the $1\sigma$ Poisson errors, estimated by counting the number of clusters in bins of width 0.25, corresponding to the mean uncertainty in the log age values of the clusters.
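The @gieles07-style smoothing used above can be sketched as below. Treating each cluster's minimum and maximum allowed SSP-fit ages as $1\sigma$ bounds of the asymmetric Gaussian is our assumption for illustration, as are the invented clusters and grid.

```python
import numpy as np

def smoothed_age_distribution(logt, logt_lo, logt_hi, grid):
    """dN/d(log t): each cluster contributes a unit-area asymmetric
    Gaussian whose left/right widths are set by its minimum/maximum
    allowed ages (treated here as 1-sigma bounds)."""
    dx = grid[1] - grid[0]
    dist = np.zeros_like(grid)
    for t0, tlo, thi in zip(logt, logt_lo, logt_hi):
        sig = np.where(grid < t0, max(t0 - tlo, 1e-3), max(thi - t0, 1e-3))
        g = np.exp(-0.5 * ((grid - t0) / sig) ** 2)
        dist += g / (g.sum() * dx)   # normalise so each cluster counts once
    return dist

# Two invented clusters with asymmetric allowed age ranges, on a
# uniform log-age grid.
grid = np.linspace(6.0, 9.0, 601)
dist = smoothed_age_distribution([6.8, 7.5], [6.6, 7.3], [7.1, 7.8], grid)
total = dist.sum() * (grid[1] - grid[0])   # integrates to the cluster count
```

Because each contribution is normalised to unit area, the distribution integrates to the number of clusters regardless of the individual widths.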
Using the two different lower mass limits in Figure \[fig:hist\], our sample of clusters is complete up to two different ages: the $10^{4.7}$ [M$_{\small{\sun}}$]{} limit shown in the left panel is complete for clusters younger than 20 million years, while the higher mass limit of $10^{5.7}$ [M$_{\small{\sun}}$]{} shown on the right is complete for clusters younger than 200 million years; both of these ages are shown as a vertical line on each plot. In both panels the cluster formation rate declines for clusters older than the completeness limit, due in part to older clusters fading and becoming undetectable. Focussing on the high mass cut sample, we see that the cluster formation rate (CFR) has increased by a factor of $\sim10$ during the past $\sim200$ Myr. This increase is expected due to the ongoing galactic merger, which is presumably the cause of the ongoing starburst [@bastian09]. This interpretation is also supported by the present SFR of NGC 3256, which is $\sim46$ [M$_\odot$yr$^{-1}$]{}, approximately ten times higher than the sum of the two progenitor spiral galaxies. As in the Antennae [@bastian09], we do not see evidence for a high degree of long duration ($>10$ Myr) mass independent cluster disruption, as proposed by @fall05. However, our high mass cut means that we are insensitive to any disruption in the lower mass end of the cluster mass function, which may be strongly affected by cluster disruption during galaxy mergers (Kruijssen et al. 2010 in prep.). Calculating the Ratio of Stars Forming Within Clusters, $\Gamma$ {#sec:gamma_calc} ================================================================ Initial Estimate ---------------- Assuming we were able to detect all the clusters within a galaxy, we would be able to establish the total mass of clusters which had recently formed.
Comparing this mass with a measure of the star formation rate (SFR) over the same time scale gives the ratio of stars forming within clusters, hereafter referred to as $\Gamma$ [@bastian08]. In principle this calculation is simple but is made more complex by the incompleteness of our sample of clusters. We are not able to detect low mass clusters, even those with young ages. Any low mass cluster is also likely to have large photometric errors and thus be rejected as a *poor* fit. We know, however, that our sample is likely to be complete for bright, massive, young clusters. We can then extrapolate the mass found in these clusters to calculate the total mass of all clusters. In order to calculate the ratio of mass found above a certain mass limit we made sample cluster populations based on a power law distribution (stochastically sampled), like that in Equation \[eqn:mass\]. We made model populations with 500,000 clusters with a lower mass limit of 100 [M$_{\small{\sun}}$]{}, and an upper limit of $10^{11}$ [M$_{\small{\sun}}$]{}. This upper mass limit may appear very high but with a power law distribution with index of $-2.0$ (i.e. $\alpha=2.0$) we only expect 50 clusters above $10^6$ [M$_{\small{\sun}}$]{} based on this upper mass limit. To avoid erroneous results we ignore clusters with masses larger than $10^7$ [M$_{\small{\sun}}$]{}. The upper mass limit allows the model populations to be fully populated for high masses. We also calculate an error associated with the fraction of mass found above a certain mass limit; this is the standard deviation of results from 200 model cluster populations. The error does not decrease with an increased number of model populations as it is limited by stochastic variations in the masses of high mass clusters, which are rare in each model population. We have used a power law index of $-2.0$ in our calculations, although it should be noted that an index of $-1.85$ was measured for NGC 3256.
We assume an index of $-2.0$ as this is the value found for large samples of clusters; we do accept that this value may vary between galaxies, and the implications of this are discussed further in § \[sec:uncertainty\]. At this point we note that sampling an ICMF to generate a cluster population simply to calculate the mass above a mass limit may appear to be a long-winded way of making this calculation, as it is straightforward to analytically calculate these numbers for a given ICMF. However analytical calculations do not give an estimate of the error associated with stochastic sampling of the ICMF, an effect that increases as you estimate the fraction of mass above an increasing mass limit. As estimating realistic errors for this and similar studies is one of the goals of this study, we choose to use the Monte Carlo technique in the subsequent analysis. With the ages and masses of clusters for NGC 3256 calculated, we found the total mass of clusters less than 10 million years old, and with a mass greater than $10^{4.7}$ [M$_{\small{\sun}}$]{}. The lower mass limit of $10^{4.7}$ [M$_{\small{\sun}}$]{} was chosen as a conservative estimate to ensure our sample was complete for this range of masses and ages. This gave a mass of $1.66\times 10^7$ [M$_{\small{\sun}}$]{} contained within clusters. The fraction of mass expected above this mass limit was taken from our model cluster populations to be $0.61\pm0.09$. We expect the total mass contained within all clusters in this age range to be $(2.73\pm0.42)\times10^7$ [M$_{\small{\sun}}$]{}. This was divided by the age range of 7 million years to find the stellar mass forming in clusters per year, $3.90\pm0.60$ [M$_{\small{\sun}}$]{} yr$^{-1}$. We used an age of 7 rather than 10 million years as extremely young clusters with ages less than 3 million years will still be embedded, and thus undetectable.
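The Monte Carlo estimate and the extrapolation above can be sketched as follows. This is an illustrative re-implementation, not the code used in the study: it uses fewer realisations than the 200 quoted, keeps all drawn clusters (the exact treatment of the rare $>10^7$ [M$_{\small{\sun}}$]{} draws is not fully specified here), and the worked numbers at the end simply reuse the values quoted in the text (0.61, $1.66\times10^7$ [M$_{\small{\sun}}$]{}, and the SFR comparison described in the next paragraph).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_masses(n=500_000, alpha=2.0, m_lo=1e2, m_hi=1e11):
    """Stochastically sample N(M)dM ~ M^-alpha by inverting the CDF."""
    g = 1.0 - alpha
    return (m_lo**g + rng.random(n) * (m_hi**g - m_lo**g)) ** (1.0 / g)

# Fraction of total cluster mass above the completeness cut; the
# realisation-to-realisation scatter is the stochastic-sampling error.
m_cut = 10**4.7
fracs = []
for _ in range(20):          # the paper uses 200 populations
    m = sample_masses()
    fracs.append(m[m > m_cut].sum() / m.sum())
frac, err = float(np.mean(fracs)), float(np.std(fracs))

# Worked check of the extrapolation with the numbers quoted in the text:
m_obs   = 1.66e7          # Msun observed in clusters (t < 10 Myr, M > m_cut)
m_total = m_obs / 0.61    # quoted fraction above the cut -> ~2.7e7 Msun
cfr     = m_total / 7e6   # Msun/yr over the 3-10 Myr window -> ~3.9
# SFR comparison (described next in the text): Kroupa -> Salpeter IMF
# correction of 1.38 and SFR = 46.17 Msun/yr give Gamma ~ 0.12.
gamma = cfr * 1.38 / 46.17
```

The mean fraction and its scatter depend noticeably on the seed because a handful of massive clusters dominates each realisation's total mass, which is exactly why the error does not shrink with more realisations.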
We compare the mass of stars forming within clusters to the SFR, which was calculated based on the IR luminosity from @sargent89 and converted to an SFR of $46.17$ [M$_{\small{\sun}}$]{} yr$^{-1}$ using the conversion of @kennicutt98. The conversions given by @kennicutt98 are based on a Salpeter IMF. In fitting SSP models to our photometric data we used SSP models which adopt a Kroupa IMF. This has the effect of underestimating the mass of clusters compared to a similar result using SSP models based on a Salpeter IMF. If we are to compare the mass of stars in clusters to the SFR, both results must be made using the same IMF. We correct the masses of clusters for this by multiplying by 1.38. For NGC 3256, we find that the fraction of stars which remain within optically selected clusters after the embedded phase, $\Gamma$, is $0.12\pm0.02$. This is, however, a lower limit as various selection effects and fitting artefacts must be taken into account, which will be explored below. The implications of this result are discussed in § \[sec:implications\]. Sources of Uncertainty {#sec:uncertainty} ---------------------- In calculating $\Gamma$ there are several sources of error we should consider. Here we focus on the errors associated with our study of NGC 3256 and also address issues which may arise with other data sets from different galaxies. ### Error Associated with the slope of the mass function {#sec:sloperr} As we previously mentioned, the slope of the cluster mass function was taken to be $-2.0$. However this may vary between galaxies and has an effect on the fraction of mass we infer to be above the mass limit. With a steeper slope fewer high mass clusters are present, so the fraction of mass above a mass limit is reduced. Conversely a shallower slope creates more high mass clusters, and so the fraction of mass above a mass limit is increased.
We have measured this effect by making sample cluster populations with a mass function slope between $-1.8$ and $-2.2$. The results of these simulations are shown in Figure \[fig:sloperr\], in which we assume a mass cut of $M>10^{4.7}$ [M$_{\small{\sun}}$]{}, the same used in our study of NGC 3256. The top panel in Figure \[fig:sloperr\] shows how the number of objects found above the mass cut varies with differing mass function slopes. The middle panel of Figure \[fig:sloperr\] shows the ratio of mass within clusters with masses greater than the mass limit compared to the total mass of all the clusters, $R_{\alpha}$. Over the range of $1.8 < \alpha < 2.2$ this ratio changes dramatically from 0.93 to 0.25 respectively. Measured values of $\alpha$ vary but the best estimates are close to $2.0$. The bottom panel in Figure \[fig:sloperr\] shows how the value of $R_{\alpha}$ compares to the value for a mass function with an index of $-2.0$. If the cluster IMF index varies between $1.8 < \alpha < 2.2$ then we may be under- or over-estimating $\Gamma$ by roughly $\pm50\%$ by assuming a single value of $\alpha=2.0$. ### Power Law or Schechter Function? The exact form of the initial cluster mass function (ICMF) is still under debate. In the case of NGC 3256 we have assumed a simple power law of the form shown in Equation \[eqn:mass\]. For other galaxies an index of $\alpha \approx 2$ has been found (e.g. @zhang99 [@degrijs03b; @bik03; @mccrady07]) and is expected from theoretical considerations [@elmegreen97]. Recent studies have, however, shown that there may be a truncation in the mass function at the high mass end [@gieles06b; @bastian08; @larsen09]. This can be represented by a Schechter function [@schechter76] which behaves as a pure power law in the low mass regime and has an exponential fall off above a given value, [M$_{\star}$]{}. In Fig. \[fig:schec\] we show how the resulting fraction of mass in clusters varies for different mass cuts and different truncation values, [M$_{\star}$]{}.
Overall, the difference between the results with an underlying power law or a Schechter function is small. In the specific case of NGC 3256, if a truncation exists it is likely to be well above $10^{6}{M\ensuremath{_{\small{\sun}}}}$, hence any effect on the derived $\Gamma$ values will be minimal.

### Error Associated with Fitting Cluster Parameters using SSP models {#sec:errs1}

@bastian05 and @gieles05 compared the results of SSP fitting for clusters in M51 with the results for a model population, noting that the number of clusters at certain ages and masses can be enhanced or reduced by the fitting method. This can affect the calculated total mass of clusters when extrapolating from the mass found in high mass clusters. To investigate this we made a model population of clusters based on the solar metallicity [*GALEV*]{} SSP model tracks (Padova isochrones). Ages of clusters were randomly distributed in the range $0 < t < 10^{10}$ yrs. The masses of clusters were assigned stochastically assuming a power law distribution of slope $-2.0$. Based on the age, mass and distance to NGC 3256 we then assigned magnitudes from the SSP models. Photometric errors for each band were estimated by examining the errors of the clusters measured in our sample from NGC 3256. We were able to approximate the error with the relation $\Delta m_{\nu} = 10^{d_1+d_2\times m_{\nu}}$. The values of $d_1$ and $d_2$ are displayed in Table \[tab:errs\]. A random correction to the magnitude in each waveband was added in the range $-\Delta m_{\nu}$ to $+\Delta m_{\nu}$. We produced a catalogue of 2607 clusters which fulfilled our magnitude limit.

  Filter    $d_1$    $d_2$
  -------- -------- -------
  U         -5.726   0.228
  B         -3.276   0.116
  V         -4.881   0.178
  I         -5.134   0.196

  : Parameters for the uncertainty of the magnitudes: $\Delta m_{\nu} = 10^{d_1+d_2\times m_{\nu}}$.[]{data-label="tab:errs"}

We ran the same fitting procedure on this artificial catalogue as we had done on the data for NGC 3256.
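The error model of Table \[tab:errs\] and the uniform magnitude perturbation described above can be sketched as follows (a paraphrase of the procedure, not the original code):

```python
import random

# d_1, d_2 per filter, from Table [tab:errs]
ERR_COEFFS = {"U": (-5.726, 0.228), "B": (-3.276, 0.116),
              "V": (-4.881, 0.178), "I": (-5.134, 0.196)}

def mag_error(band, m):
    """Approximate photometric error: Delta m = 10^(d1 + d2 * m)."""
    d1, d2 = ERR_COEFFS[band]
    return 10.0 ** (d1 + d2 * m)

def perturb(band, m, rng=random):
    """Apply a random offset drawn uniformly from [-Delta m, +Delta m]."""
    dm = mag_error(band, m)
    return m + rng.uniform(-dm, dm)

print(round(mag_error("V", 22.0), 3))  # fainter sources receive larger errors
```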
We fitted the cluster photometry to the solar metallicity Padova SSP models. When examining the results of this fitting procedure we were as stringent with the artificial clusters as we were with the real data. We only accepted those results which met our criteria to be a *good* fit, as defined in § \[sec:good\]. We then compared the known mass, age and extinction of the model population to the results from the model fitting. We also excluded those model clusters with ages less than 3 million years, assuming that in reality these young clusters would be embedded and unobservable. We have compared the age-mass distribution of the model cluster population to the results of the SSP fitting routine in Figure \[fig:comp\]. The colour scale represents the difference in the number of clusters found with a given age and mass compared to the input model population. The white slope across the Figure shows a large deficit of clusters; this is due to the stringent limits we apply to our clusters, accepting only clusters with low errors ($<0.2$). Any faint clusters will have large errors and so we reject the fit. Figure \[fig:comp\] shows that young clusters are not fitted, as we assume we will not detect young embedded clusters. The inset in Figure \[fig:comp\] shows a comparison of the mass in clusters above a mass limit, comparing the results of cluster fitting to the known values of the input model clusters. For mass cuts below $10^5$[M$_{\small{\sun}}$]{} we tend to underestimate the mass above the mass cut. This implies that the total mass of stars within clusters would be underestimated due to the effects of the fitting procedure. This effect is small, however, and for the mass cut used in our study of NGC 3256 it results in an underestimation of 5%. For the largest mass cut used we in fact overestimate this mass, but only by 10%, though it should be noted that with higher mass cuts we are more likely to encounter errors due to the stochastic sampling of massive young clusters.
The test we have performed here to estimate the error associated with the SSP fitting does not take into account the intrinsic problems with either the Padova or Geneva SSP models. These synthetic stellar evolution tracks do not perfectly represent real star clusters, especially at young ages. Even for models without stellar rotation or binary stars the uncertainties associated with massive stars can be large, and when these effects are included the errors increase. The true effect of fitting to inaccurate SSP models is ultimately hard to quantify, and the analysis we have carried out can only be considered a rough estimate of this effect.

### Error Due to an Unknown Metallicity {#sec:error_metallicity}

In determining the cluster parameters of age, mass and extinction we have fitted the photometric data to a SSP model of a certain metallicity. Using optical spectroscopy, T07a found that the metallicity of clusters and H[ii]{} regions in NGC 3256 was between 1 and 1.7 [Z$_{\small{\sun}}$]{}. The Padova SSP models used have only two metallicities in/near this range, namely solar and twice solar. Here we test the effect of our metallicity assumptions on our derived value of $\Gamma$. Just as in § \[sec:errs1\], we produced model cluster populations based on the solar metallicity Padova tracks; however, we then fitted these data to the twice solar Padova SSP tracks. We also produced a model population based on the twice solar Padova tracks and fitted them to the solar Padova tracks, in an attempt to examine the effects of over- or underestimating the metallicity. These results are shown in Figure \[fig:metcomp\]; the red line shows what happens when we overestimate the metallicity, using a twice solar metallicity track to fit clusters with a solar metallicity. In this case we overestimate the mass in clusters above a particular mass cut.
This overestimation changes a little depending on the particular mass cut, but is approximately a factor of two for the case of NGC 3256.

### Error Due to Selection Effects {#sec:seleffs}

In measuring $\Gamma$ we need to know how efficient we are at detecting massive, bright clusters, which we use to infer the total mass of stars in clusters. Measuring this efficiency is not trivial, and a rigorous, quantitative discussion is not included here. Instead we examined our HST images of NGC 3256, overlaying the objects selected from the B band images. If our detection were perfect, all the brightest objects would have been identified. In reality this is not possible, as dust will obscure some bright clusters and crowding makes identification of individual clusters difficult. We selected the *SExtractor* detection parameters to minimise the chance of two close clusters being identified as a single object. A typical field from a crowded region of NGC 3256 is shown in Figure \[fig:close\], in which we have overlaid the detected clusters with green circles; those clusters which passed our criteria for a *good* fit are identified by yellow boxes. The brightest clusters are all selected and have *good* fits to SSP models. There are several clusters which we detect but which do not have *good* fits; on examining these clusters we find that they have either one or two wavebands with errors greater than 0.2 mag, and are thus rejected and labelled as *poor* fits. We see from Figure \[fig:close\] that the actual detection of clusters is not limiting our cluster sample; the ability to accurately fit SSP models is a larger effect. However, in our synthetic cluster population models we apply the same criterion to separate *good* fits from *poor* fits. We are very strict in the limits we apply, disregarding any cluster which has an error greater than 0.2 mag in any of the 4 wave bands.
It is this criterion which limits our final sample, and it is taken into account by modelling cluster populations. We can test this by looking at the fraction of bright, young, massive clusters we reject as *poor* fits in both the real sample and the synthetic cluster populations. We examined the percentage of clusters with accepted *good* fits, for clusters younger than 10 Myr and with masses greater than $10^{4.7}$ [M$_{\small{\sun}}$]{}. In our sample from NGC 3256 this percentage was 47%, whilst in our synthetic population the result was higher, at 76%. The discrepancy between these results could have several causes. Firstly, our synthetic clusters assume a more conservative level of dust extinction than we might expect in reality. With a greater level of dust extinction clusters become fainter, photometric errors become larger, and the chance that we reject the cluster fit increases. Secondly, the synthetic cluster population is entirely theoretical, and so poor photometry from overcrowding or from misaligned apertures is not taken into account. We can only estimate the percentage of clusters we either miss altogether or discount due to a *poor* cluster fit. An estimate of 50% is the worst case scenario, but some of this is accounted for by our model cluster populations, to which we apply the same stringent criteria for *good* cluster fits. Taking this into account, we estimate that roughly 20% of clusters are either not detected or rejected as *poor* cluster fits, in addition to the fraction we account for based on the synthetic cluster population models.
  Source                 Variation                                                  Effect
  ---------------------- -------------------------------------------------------- -------------
  CMF slope ($\alpha$)   $\alpha = 1.8 - 2.2$                                      $0.66-2.38$
  Schechter Function     M$_{*} = 10^{5-7}$[M$_{\small{\sun}}$]{}                  3.72 - 1.33
  SSP Fitting            -                                                         1.04
  Metallicity            Z = 0.5[Z$_{\small{\sun}}$]{} - 2[Z$_{\small{\sun}}$]{}   0.5 - 2
  Selection              -                                                         1.25

  : A list of the possible sources of uncertainty in measuring $\Gamma$ for NGC 3256. The effect of each source is expressed as the factor applied to correct the measured value of $\Gamma$. \[tab:sumerrs\]

The Value of $\Gamma$ in NGC 3256
---------------------------------

We are in a position to fully calculate the fraction of stars in clusters ($\Gamma$) for NGC 3256, including all the possible sources of error. Initial inspection of our data for NGC 3256 revealed a value of $0.117$. We must account for the over- or underestimation introduced when we fit data to cluster models. In § \[sec:errs1\] we calculated that we would underestimate the mass above our mass limit by 5%; however, to make this calculation we made model cluster photometry based on SSP models, and then compared this to the same models. In reality clusters are not perfectly modelled by any SSP, and this underestimation is likely larger. We have therefore been rather conservative in assuming that we only recover $80\pm10\%$ of the mass when we use this cluster fitting method. This increases the value of $\Gamma$ from $0.12\pm0.02$ to $0.15\pm0.03$. If we assume all our clusters have solar metallicity then we expect to recover 96% of the mass using a mass cut of $10^{4.7}$[M$_{\small{\sun}}$]{}. A more likely scenario is that the cluster population has metallicities in the range $1-2$ Z$_{\small{\sun}}$; this will affect the total mass estimated to be in clusters compared to the actual mass, as shown in Figure \[fig:metcomp\].
As the metallicity of each individual cluster is unknown, we cannot be certain whether the mass in clusters calculated is an under- or overestimate of the true mass. We expect clusters in NGC 3256 to be closer to a solar metallicity than twice solar, and so estimate that our cluster fitting method underestimates the total mass by a factor of $0.8\pm^{0.2}_{0.3}$, taken from Figure \[fig:metcomp\] for the mass cut used in our study. This further increases $\Gamma$ from $0.15\pm0.03$ to $0.18\pm^{0.06}_{0.08}$. We also need to consider the fraction of clusters that were not detected or thrown out as *poor* cluster fits. We estimated this fraction to be 20% in § \[sec:seleffs\]. Combining these effects, we arrive at a more robust value for $\Gamma$ than the value the data initially suggest. The value used throughout the remainder of this study is $0.23\pm^{0.07}_{0.10}$.

Results from Other Data-sets {#sec:results}
============================

Fitting photometric observations to cluster evolutionary tracks is very time consuming. This study does not attempt to replicate the analysis carried out on the clusters of NGC 3256 for numerous other galaxies. Instead we have used previous results from other studies, in an effort to calculate the ratio of stars forming within clusters. The various data sources are discussed below.

NGC 1569
--------

We use the data set of @anders04b, which includes ages and masses for 161 clusters in the galaxy NGC 1569. Just as for NGC 3256, we consider clusters younger than 10 Myrs. With the small distance to NGC 1569, @anders04b were able to observe many low mass clusters, and we can use a low mass cut to estimate the total mass in clusters. In order to avoid measurements which are heavily affected by stochastic sampling of the ICMF we must use the lowest mass limit possible whilst being wary of the observational limits.
We chose three lower mass limits: $10^{3.0}$ [M$_{\small{\sun}}$]{}, $10^{3.2}$ [M$_{\small{\sun}}$]{} and $10^{3.4}$ [M$_{\small{\sun}}$]{}. In the same manner as we did for NGC 3256, we extrapolated the mass found above this limit up to the total mass in clusters; this resulted in consistent answers from all three of these limits. Averaging these three results, we calculated the total mass in clusters to be $3.52 \pm 0.05 \times 10^{5}$ [M$_{\small{\sun}}$]{}. Assuming we are unable to observe any embedded clusters younger than three million years, the resulting CFR is $0.05$ [M$_{\small{\sun}}$]{} yr$^{-1}$. The star formation rate is taken from an [H$\alpha$  ]{}measurement from @moustakas06 and the [H$\alpha$  ]{}SFR calibration in @kennicutt98, using the same adopted distance to NGC 1569 as @anders04b of 2.2 Mpc, giving a SFR of $0.36 \pm 0.02$ [M$_{\small{\sun}}$]{} yr$^{-1}$. $\Gamma$ is simply the ratio of the CFR to the SFR, $13.9\pm0.8\%$.

NGC 6946
--------

We used data from @larsen02, who quoted ages and masses for 90 clusters in NGC 6946. Considering clusters younger than 10 Myrs and a lower mass limit of $10^{3.5}$ [M$_{\small{\sun}}$]{}, we calculated the mass in clusters above this limit, giving $1.284 \times 10^5$ [M$_{\small{\sun}}$]{}. The fraction of mass expected above this mass limit is calculated as in previous sections, and results in a total inferred cluster mass of $1.95 \pm 0.23 \times 10^5$ [M$_{\small{\sun}}$]{}. We continue to assume that clusters younger than 3 Myrs are embedded and unobservable, and so have an age range of 7 Myrs, thus giving a CFR of $0.022 \pm 0.003$ [M$_{\small{\sun}}$]{} yr$^{-1}$. The data of @larsen02 only come from one pointing of the HST WFPC2 camera and do not include the whole galaxy. To calculate $\Gamma$ we need the SFR across this region only. This was found using the area covered by the WFPC2 and the SFR density taken from @larsen02, giving a SFR of 0.1725 [M$_{\small{\sun}}$]{}yr$^{-1}$.
Consequently $\Gamma$ is $12.5\pm^{1.8}_{2.5} \%$, although we note this HST pointing included the centre of NGC 6946 and so the SFR might be higher than the global average we used.

Small Magellanic Cloud
----------------------

We have used the data set generated by @hunter03, a catalogue of ages and masses for 191 clusters in the SMC. This catalogue is thought to be incomplete for clusters younger than 10 Myrs, so we calculated the mass in clusters between 10 and 100 Myrs. With the SMC lying at a distance of $61 \pm 3$ kpc [@hilditch05], the data set is complete to very low cluster masses. Using lower mass limits of $10^{2.6}, 10^{2.8}, 10^{3.0}\ \&\ 10^{3.2}$ [M$_{\small{\sun}}$]{} to calculate the total mass in clusters gave consistent results of $1.59 \pm 0.03 \times 10^5$ [M$_{\small{\sun}}$]{}. We assumed a constant CFR and SFR over the age range 10-100 Myrs, giving a CFR of $1.77 \pm 0.03 \times 10^{-3}$ [M$_{\small{\sun}}$]{} yr$^{-1}$. The SFR, which we assume to be constant over the last 100 million years, is taken from the extinction corrected [H$\alpha$  ]{}luminosity value of @kennicutt86, converted to a SFR of $0.043$ [M$_{\small{\sun}}$]{}yr$^{-1}$ using the equations given by @kennicutt98. The resulting value for $\Gamma$ is $4.2\pm^{0.2}_{0.3}\%$. Our value for $\Gamma$ agrees extremely well with the value calculated by @gieles08 of 3-5%, derived using size-of-sample effects on the same cluster sample. @kruijssen08 looked at the ratio of clustered to field stars in the SMC and quote a minimum $\Gamma$ of $0.5\%$, and suggest a more reasonable value of $10\%$ as their best result.

Large Magellanic Cloud
----------------------

Ages and masses for 748 clusters in the LMC were taken from the data set of @hunter03. Just as in the case of the SMC, this catalogue is believed to be incomplete for clusters younger than 10 million years, and so we studied clusters in the age range $10-100$ million years.
With the LMC being a relatively nearby galaxy at a distance of only 58.5 kpc [@macri06], the sample is complete to low masses, and we calculated the mass of clusters above four mass limits: $10^{2.6}, 10^{2.8}, 10^{3.0}\ \&\ 10^{3.2}$ [M$_{\small{\sun}}$]{}. The fraction of mass expected above these limits was found as described in previous sections, resulting in a total inferred mass in clusters of $6.31 \pm 0.20 \times 10^5$ [M$_{\small{\sun}}$]{}. We assume a constant SFR and CFR over the 90 million year period considered, resulting in a calculated CFR of $7.01 \pm 0.22 \times 10^{-3}$ [M$_{\small{\sun}}$]{} yr$^{-1}$. The SFR ($0.12$ [M$_{\small{\sun}}$]{} yr$^{-1}$) and the galaxy area (79 kpc$^2$) are both taken from @larsen00. $\Gamma$ for the LMC was thus found to be $5.8\pm0.5\%$.

The Milky Way (The Solar Neighbourhood)
---------------------------------------

@lada03 estimate the rate of embedded cluster formation within 2.0 kpc of the sun to be between $2-4$ Myr$^{-1}$ kpc$^{-2}$, and claim the average mass of an embedded cluster to be 500 [M$_{\small{\sun}}$]{}. @lada03 go on to estimate the number of embedded clusters that survive, with approximately 7% surviving to the age of the Pleiades. We convert the rate of embedded cluster formation to a mass formation rate assuming the average cluster mass of 500 [M$_{\small{\sun}}$]{}, giving $0.15$ [M$_{\small{\sun}}$]{} yr$^{-1}$ in the solar neighbourhood. We calculate a value for $\Gamma$ by assuming the SFR is equal to the rate of embedded cluster formation and comparing this to the mass found in older clusters, giving roughly 7%, though estimates of this percentage vary between 4% and 14% depending on the numbers used. @roberts57, @millerscalo78, and @adamsmyers01 also derived a similar percentage of star-formation in clusters for the solar neighbourhood.

M83
---

@harris01 presented a catalogue of 45 massive star clusters in the centre of M83, with reliable ages, masses and extinction values.
Using this catalogue and a lower mass limit of $10^{2.8}$ [M$_{\small{\sun}}$]{}, we calculated the total mass in clusters younger than 10 million years to be $7.24 \pm 0.23 \times 10^5$ [M$_{\small{\sun}}$]{}. To calculate the CFR we assume that clusters younger than 3 million years old are embedded and so are unobservable. The resulting CFR is $0.10 \pm 0.01$ [M$_{\small{\sun}}$]{} yr$^{-1}$. The study of @harris01 only covers the central region of M83, and so we used an [H$\alpha$  ]{}image of M83 to measure the [H$\alpha$  ]{}flux from the same region studied by @harris01. We used archival narrowband and R-band images of M83 taken as part of the SINGS survey [@meurer06]. We measured the background subtracted flux for an identical area to that studied by @harris01 and converted this flux to a SFR using the calibrations of @kennicutt98, resulting in a SFR of $0.23 \pm 0.03$ [M$_{\small{\sun}}$]{} yr$^{-1}$. We correct this value for extinction using the reddening curves of @calzetti01b. The extinction value was taken as the averaged extinction of all clusters in the @harris01 sample, giving a correction of 1.71; the resulting dust-corrected SFR is $0.39 \pm 0.06$ [M$_{\small{\sun}}$]{} yr$^{-1}$. With the CFR and SFR known, we calculate the value of $\Gamma$ to be $26.7\pm^{5.3}_{4.0}\%$.

The Antennae Galaxies
---------------------

Ages and masses have been published for 752 clusters in the Antennae Galaxies by @anders07. We attempted to apply the same calculations as we did for the other cluster populations, as described above. However, using different lower mass limits we obtained a range of values for $\Gamma$, from $60\%$ to $>100\%$, assuming a star formation rate of 20 [M$_{\small{\sun}}$]{}yr$^{-1}$ [@zhang01]. Obviously values in excess of $100\%$ are unphysical, and this problem is exacerbated by the range of quoted SFRs for the Antennae Galaxies, which can range from $5-20$ [M$_{\small{\sun}}$]{}yr$^{-1}$ [@zhang01; @knierman03].
We investigated the slope of the mass function for young clusters and found that this data set presents a very shallow slope of $\alpha\simeq1.6$. Given that mass function slopes are usually in the range $1.8<\alpha<2.2$ [@degrijs03c], a slope this shallow represents a very unusual population; unfortunately @anders07 do not calculate a similar mass function for these clusters. Correcting for this shallow mass function gave results for $\Gamma$ that were still above $50\%$ and that varied significantly with the assumed lower mass limit. In comparison we note that @fall05 conclude that at least $20\%$, but possibly all, stars form in clusters; the value for $\Gamma$ may well be high for the Antennae. Given the range of values we can calculate for $\Gamma$ based on the @anders07 data set, the extremely shallow mass function of young clusters in this sample, and the range in quoted SFRs for the Antennae, we do not publish a value for this system. It serves as a reminder of how difficult these calculations can be and how dependent they are on accurate knowledge of the SFR.

The Variation of $\Gamma$ with SFR {#sec:implications}
==================================

In Table \[tab:res\] we present all the information used hereafter, including the values of $\Gamma$, star formation rate (SFR) and area over which these quantities were measured. The superscript *p* after the galaxy names indicates that only a partial area of the galaxy was used to derive these results, as described in the previous sections. At this point it is worth noting that, because results have been obtained from various data sets, the methodology used differs between results. Overall the results are thought to be robust, as measurements of $\Gamma$ are based on the brightest and most easily detected clusters, which should give consistent results. It is harder to estimate the errors associated with $\Gamma$ based on other data sets.
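For each of the external data sets above, the calculation ultimately reduces to $\Gamma = \mathrm{CFR}/\mathrm{SFR}$. As a sketch, the NGC 1569 numbers quoted earlier give:

```python
# Numbers quoted in the NGC 1569 section above
total_cluster_mass = 3.52e5   # M_sun, inferred total mass in young clusters
age_window = 7e6              # yr: 3-10 Myr (clusters < 3 Myr assumed embedded)
sfr = 0.3626                  # M_sun/yr, from the Halpha measurement

cfr = total_cluster_mass / age_window   # cluster formation rate, M_sun/yr
gamma = cfr / sfr
print(f"CFR = {cfr:.3f} M_sun/yr, Gamma = {100 * gamma:.1f}%")
```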
Results shown in Figure \[fig:final2\] for galaxies other than NGC 3256 and the Milky Way only include errors associated with the uncertainty in the fraction of mass expected above whatever lower mass limit was used. Errors due to uncertainties in the metallicity and SSP fitting have not been estimated, as this is difficult without reprocessing the entire data set.

  ---------------------- ------------------------------------ ------------- -----------------------
  Galaxy                 SFR                                  A             $\Gamma$
                         ([M$_{\small{\sun}}$]{} yr$^{-1}$)  (kpc$^{2}$)   (%)
  NGC 1569               0.3626                               13            $13.9\pm0.8$
  NGC 3256               46.17                                74.85         $22.9\pm^{7.3}_{9.8}$
  NGC 5236 (M83)$^{p}$   0.3867                               0.7077        $26.7\pm^{5.3}_{4.0}$
  NGC 6946$^{p}$         0.1725                               37.49         $12.5\pm^{1.8}_{2.5}$
  LMC                    0.1201                               79            $5.8\pm0.5$
  SMC                    0.0426                               58.55         $4.2\pm^{0.2}_{0.3}$
  Milky Way$^{p}$        0.1508                               12.56         $7.0\pm^{7.0}_{3.0}$
  ---------------------- ------------------------------------ ------------- -----------------------

  : Summary of results included in this paper.[]{data-label="tab:res"}

Figure \[fig:final2\] shows the relation between the SFR density ($\Sigma_{SFR}$) and $\Gamma$ for all galaxies discussed in this paper. In addition we plot results for 3 galaxies from @gieles09b, in which CFRs (and subsequently $\Gamma$) are calculated via comparison with empirical luminosity functions. It is immediately obvious that $\Gamma$ increases with the SFR density. Over three orders of magnitude a power-law relationship holds, and the dot-dashed line shows a least-squares fit to the data of the form $\Gamma \propto \Sigma_{SFR}^{\alpha}$. The numerical version of the derived relationship is displayed in Equation \[eqn:pl\].
$$\label{eqn:pl} \Gamma(\%) = (29.0\pm{6.0}) \Sigma_{SFR}^{0.24\pm0.04} ({M\ensuremath{_{\small{\sun}}}}\ yr^{-1}\ kpc^{-2})$$

From this equation we can predict at what density we might expect to see all stars in clusters, effectively a $\Gamma$ of 100%; this would occur at a SFR density of $7.5\times10^3$ [M$_{\small{\sun}}$]{} yr$^{-1}$ kpc$^{-2}$, a rather high value. Interestingly, integrating this value over the time it takes a cluster to form, roughly three million years, gives $2.25 \times 10^{4}$ [M$_{\small{\sun}}$]{} pc$^{-2}$, which is approximately the density of a typical cluster. Although the results taken from @gieles09b are not calculated in the same manner as for the other data sets, these points fit the trend quite well, particularly at low SFR densities. It should be noted that our calculation of the best fit power law does not include these additional data points, given the different method used to determine these results; they are shown here to demonstrate the consistency of our results. We have investigated whether the observed correlation (Eqn. \[eqn:pl\]) could be due to a systematic effect, namely if the mass function index varied with environment. Assuming the LMC cluster distribution is well approximated with $\alpha=2$, an index of $\alpha \sim 1.2$ would be required in order to bring the results of NGC 3256 into agreement with the LMC. This kind of gross deviation is ruled out by our direct observations of the cluster population (e.g. Fig. \[fig:massfn\]). A similar conclusion (that more stars are formed in bound clusters with increasing SFR density) has been reached based on the fraction of light observed in clusters relative to the host galaxy. Specifically, Meurer et al. (1995) and Zepf et al. (1999) found this for merging/starburst galaxies, and @larsen00 and @larsen04 found this for a larger sample that includes starburst and quiescent galaxies.
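The fit in Equation \[eqn:pl\] can be checked against the values in Table \[tab:res\]. An unweighted least-squares fit in log-log space (a simplification; the published fit may weight the points by their errors, and the @gieles09b points are excluded, as in the text) recovers the quoted coefficients:

```python
import math

# (SFR [M_sun/yr], area [kpc^2], Gamma [%]) from Table [tab:res]
data = {
    "NGC 1569":  (0.3626, 13.0, 13.9),
    "NGC 3256":  (46.17, 74.85, 22.9),
    "M83":       (0.3867, 0.7077, 26.7),
    "NGC 6946":  (0.1725, 37.49, 12.5),
    "LMC":       (0.1201, 79.0, 5.8),
    "SMC":       (0.0426, 58.55, 4.2),
    "Milky Way": (0.1508, 12.56, 7.0),
}

# log-log least squares: log10(Gamma) = log10(norm) + slope * log10(Sigma_SFR)
xs = [math.log10(sfr / area) for sfr, area, _ in data.values()]
ys = [math.log10(g) for _, _, g in data.values()]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
      / sum((x - xbar) ** 2 for x in xs)
norm = 10 ** (ybar - slope * xbar)
print(f"Gamma(%) ~ {norm:.1f} * Sigma_SFR^{slope:.2f}")
```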
While measuring the fraction of light in clusters is significantly easier than deriving $\Gamma$, and hence more suitable for the construction of large surveys, it is affected by the star formation history of the galaxy and possibly by differential extinction effects. The exact relation between $\Gamma$ and the fraction of light observed in clusters will be modelled in a future paper. @kennicutt98b investigated a global Schmidt-Kennicutt law across galaxies, correlating the SFR density to the surface gas density. As $\Gamma$ appears to follow a power-law relationship with the SFR density, using the Schmidt-Kennicutt law defined in @kennicutt98b we can construct a relation between $\Gamma$ and the surface gas density, shown in Equation \[eqn:gas\]. $$\label{eqn:gas} \Gamma(\%) = (4.1\pm1.9) \Sigma_{gas}^{0.34\pm0.07} ({M\ensuremath{_{\small{\sun}}}}\ pc^{-2})$$ We show the derived surface gas density corresponding to a given SFR density on the top axis of Figure \[fig:final2\]. Although our data only chart the value of $\Gamma$ for gas densities in the range $1-300$ [M$_{\small{\sun}}$]{} pc$^{-2}$, it is insightful to extrapolate this relation to more extreme star forming environments. Arp220 represents one of the most active star forming galaxies found [@wilson06], with a global SFR of 240 [M$_{\small{\sun}}$]{} yr$^{-1}$ based on the FIR luminosity of @sanders03 and the calibration of @kennicutt98. Assuming that Arp220 occupies a volume of roughly 1 kpc$^{3}$ [@ananth00], and thus has a surface area of 1 kpc$^{2}$ on the sky, we derive a SFR density of approximately 240 [M$_{\small{\sun}}$]{} yr$^{-1}$ kpc$^{-2}$. Such a high SFR density translates to a value of $\Gamma$ of roughly 85% using Equation \[eqn:pl\], indicating that almost all the star clusters in Arp220 would be expected to survive the embedded phase, and the CMF would be almost identical to the true underlying initial CMF.
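Equation \[eqn:gas\] follows from substituting the global Schmidt law into Equation \[eqn:pl\]. We assume here the @kennicutt98b coefficients, $\Sigma_{SFR} = (2.5\pm0.7)\times10^{-4}\,\Sigma_{gas}^{1.4}$ ($\Sigma_{gas}$ in [M$_{\small{\sun}}$]{} pc$^{-2}$, $\Sigma_{SFR}$ in [M$_{\small{\sun}}$]{} yr$^{-1}$ kpc$^{-2}$):

```python
A_pl, a_pl = 29.0, 0.24    # Equation [eqn:pl]
A_sk, n_sk = 2.5e-4, 1.4   # assumed Schmidt-Kennicutt coefficients (kennicutt98b)

# Gamma = A_pl * (A_sk * Sigma_gas^n_sk)^a_pl = coef * Sigma_gas^expo
coef = A_pl * A_sk ** a_pl
expo = n_sk * a_pl
print(f"Gamma(%) ~ {coef:.1f} * Sigma_gas^{expo:.2f}")
```

The result reproduces the coefficients of Equation \[eqn:gas\] within their quoted uncertainties.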
Hence, there are regions where we would expect most/all young stars to be found in clusters.

Conclusions {#sec:conclusions}
===========

Over the course of this paper we have shown that it is possible to accurately measure the fraction of stars found within young clusters, a parameter we have termed $\Gamma$. This is achieved by obtaining ages and masses for the cluster population of NGC 3256 through multiple waveband photometry and comparison with synthetic stellar population models. We have also been able to calculate $\Gamma$ for several other galaxies using published cluster populations. We examined how $\Gamma$ varies with the star formation rate of the host galaxy, and we summarise our conclusions below.

1.  The cluster formation history of NGC 3256 shows an increase in the star formation rate over the last 100 million years, most likely caused by the ongoing merger of the two progenitor spiral galaxies. The CMF for NGC 3256 is best described by a power law with slope $\alpha = 1.85\pm0.12$.

2.  $\Gamma$ may be calculated directly if an accurate cluster population is known; however, there are several possible sources of error in making these calculations. The effect of SSP fitting must be taken into account, and an unknown metallicity may produce additional uncertainties. Selection effects may also alter the value of $\Gamma$, but this can be estimated. The calculated $\Gamma$ does depend on the form of the cluster mass function used; we assume a simple power law function with slope $\alpha = 2.0$. We have shown that a Schechter function produces similar but higher values; however, this effect is small if [M$_{\star}$]{} is high ([M$_{\star}$]{}$> 10^6$[M$_{\small{\sun}}$]{}).

3.  We found a weak positive correlation of $\Gamma$ with the total star formation rate.
We find a strong relation between $\Gamma$ and the star formation rate density, which we have written as a power law type relation, $\Gamma(\%) = (29.0\pm{6.0}) \Sigma_{SFR}^{0.24\pm0.04}$ ([M$_{\small{\sun}}$]{} yr$^{-1}$ kpc$^{-2}$). This is similar to that found by @larsen00 for the fraction of U-band light in clusters relative to the total galaxy. This result can also be interpreted as a correlation with the surface gas density through the Schmidt-Kennicutt Law [@kennicutt98b]. This implies that either clusters born in high density environments are more resistant to disruption in the embedded phase, or that the environment changes the fraction of stars born in clusters. Measuring $\Gamma$ is not trivial, and it does have several sources of error which must be taken into account. However, it is possible to make these calculations, which may be further refined with consistent measurements using the same instruments, detection methods and cluster fitting models. This study is intended to show that if all sources of uncertainty are taken into account and measured then calculating $\Gamma$ is possible. Future work, including larger homogeneous samples and deeper observations to constrain the form of the cluster mass function to lower masses, will be needed to conclusively investigate the intriguing relation between $\Gamma$ and the SFR density of the host galaxy.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Søren Larsen and Mark Gieles for valuable discussions and suggestions. NB is supported by an STFC Advanced Fellowship.

[72]{}

F. C., [Myers]{} P. C., 2001, [ApJ]{}, 553, 744 K. R., [Viallefond]{} F., [Mohan]{} N. R., [Goss]{} W. M., [Zhao]{} J. H., 2000, [ApJ]{}, 537, 613 P., [Bissantz]{} N., [Boysen]{} L., [de Grijs]{} R., [Fritze-v. Alvensleben]{} U., 2007, [MNRAS]{}, 377, 91 P., [Bissantz]{} N., [Fritze-v. Alvensleben]{} U., [de Grijs]{} R., 2004, [MNRAS]{}, 347, 196 P., [de Grijs]{} R., [Fritze-v.
Alvensleben]{} U., [Bissantz]{} N., 2004, [MNRAS]{}, 347, 17 P., [Fritze-v. Alvensleben]{} U., 2003, [A&A]{}, 401, 1063 N., [Trancho]{} G., [Konstantopoulos]{} I. S., [Miller]{} B. W., 2009, [ApJ]{}, 701, 607 N., 2008, [MNRAS]{}, 390, 759 N., [Gieles]{} M., [Efremov]{} Y. N., [Lamers]{} H. J. G. L. M., 2005, [A&A]{}, 443, 79 N., [Gieles]{} M., [Lamers]{} H. J. G. L. M., [Scheepmaker]{} R. A., [de Grijs]{} R., 2005, [A&A]{}, 431, 905 E., [Arnouts]{} S., 1996, [AAPS]{}, 117, 393 A., [Lamers]{} H. J. G. L. M., [Bastian]{} N., [Panagia]{} N., [Romaniello]{} M., 2003, [A&A]{}, 397, 473 D., 2001, New Astronomy Review, 45, 601 R., [Anders]{} P., [Bastian]{} N., [Lynds]{} R., [Lamers]{} H. J. G. L. M., [O’Neil]{} E. J., 2003, [MNRAS]{}, 343, 1285 R., [Bastian]{} N., [Lamers]{} H. J. G. L. M., 2003, [MNRAS]{}, 340, 197 R., [Fritze-v. Alvensleben]{} U., [Anders]{} P., [Gallagher]{} J. S., [Bastian]{} N., [Taylor]{} V. A., [Windhorst]{} R. A., 2003, [MNRAS]{}, 342, 259 B. G., 2008, [ApJ]{}, 672, 1006 B. G., [Efremov]{} Y. N., 1997, [ApJ]{}, 480, 235 J., [Norris]{} R. P., [Freeman]{} K. C., [Booth]{} R. S., 2003, [AJ]{}, 125, 1134 S. M., [Chandar]{} R., [Whitmore]{} B. C., 2005, [ApJL]{}, 631, L133 M., 2009, ArXiv e-prints, 0908.2974 —, 2009, [MNRAS]{}, 394, 2113 M., [Bastian]{} N., 2008, [A&A]{}, 482, 165 M., [Lamers]{} H. J. G. L. M., [Portegies Zwart]{} S. F., 2007, [ApJ]{}, 668, 268 M., [Larsen]{} S. S., [Bastian]{} N., [Stein]{} I. T., 2006, [A&A]{}, 450, 129 M., [Larsen]{} S. S., [Scheepmaker]{} R. A., [Bastian]{} N., [Haas]{} M. R., [Lamers]{} H. J. G. L. M., 2006, [A&A]{}, 446, L9 M., [Portegies Zwart]{} S. F., [Baumgardt]{} H., [Athanassoula]{} E., [Lamers]{} H. J. G. L. M., [Sipior]{} M., [Leenaarts]{} J., 2006, [MNRAS]{}, 371, 793 M., [Bastian]{} N., [Lamers]{} H. J. G. L. M., [Mout]{} J. N., 2005, [A&A]{}, 441, 949 S. P., 2009, [Ap&SS]{}, 108 J., [Calzetti]{} D., [Gallagher]{} III J. S., [Conselice]{} C. J., [Smith]{} D. A., 2001, [AJ]{}, 122, 3046 R. 
W., [Howarth]{} I. D., [Harries]{} T. J., 2005, [MNRAS]{}, 357, 304 D. A., [Elmegreen]{} B. G., [Dupuy]{} T. J., [Mortonson]{} M., 2003, [AJ]{}, 126, 1836 Jr. R. C., 1998, [ARAA]{}, 36, 189 —, 1998, [ApJ]{}, 498, 541 Jr. R. C., [Hodge]{} P. W., 1986, [ApJ]{}, 306, 130 K. A., [Gallagher]{} S. C., [Charlton]{} J. C., [Hunsberger]{} S. D., [Whitmore]{} B., [Kundu]{} A., [Hibbard]{} J. E., [Zaritsky]{} D., 2003, [AJ]{}, 126, 1227 Kruijssen, J. M. D., & Lamers, H. J. G. L. M. 2008, [ASPC]{}, 396, 149 C. J., [Lada]{} E. A., 2003, [ARAA]{}, 41, 57 H. J. G. L. M., [Gieles]{} M., [Portegies Zwart]{} S. F., 2005, [A&A]{}, 429, 173 S. S., 2002, [AJ]{}, 124, 1393 S. S., 2004, [A&A]{}, 416, 537 —, 2009, [A&A]{}, 494, 539 S. S., [Richtler]{} T., 2000, [A&A]{}, 354, 836 L. M., [Stanek]{} K. Z., [Bersier]{} D., [Greenhill]{} L. J., [Reid]{} M. J., 2006, [ApJ]{}, 652, 1133 J., [[Ú]{}beda]{} L., 2005, [ApJ]{}, 629, 873 C., [Bastian]{} N., [Saglia]{} R. P., [Kissler-Patig]{} M., [Schweizer]{} F., [Goudfrooij]{} P., 2004, [A&A]{}, 416, 467 N., [Graham]{} J. R., 2007, [ApJ]{}, 663, 844 G. R., [Heckman]{} T. M., [Leitherer]{} C., [Kinney]{} A., [Robert]{} C., [Garnett]{} D. R., 1995, [AJ]{}, 110, 2665 Meurer, G. R., et al.  2006, [ApJS]{}, 165, 307 G. E., [Scalo]{} J. M., 1978, [PASP]{}, 90, 506 J., [Kennicutt]{} Jr. R. C., 2006, [ApJ]{}, 651, 155 G., [de Grijs]{} R., [Gilmore]{} G., 2003, [MNRAS]{}, 342, 208 M. S., 1957, [PASP]{}, 69, 59 D. B., [Mazzarella]{} J. M., [Kim]{} D.-C., [Surace]{} J. A., [Soifer]{} B. T., 2003, [AJ]{}, 126, 1607 A. I., [Sanders]{} D. B., [Phillips]{} T. G., 1989, [ApJL]{}, 346, L9 B. D., [Mathis]{} J. S., 1979, [ARAA]{}, 17, 73 P., 1976, [ApJ]{}, 203, 297 D. J., [Finkbeiner]{} D. P., [Davis]{} M., 1998, [ApJ]{}, 500, 525 N. Z., [Evans]{} A. S., [Dinshaw]{} N., [Thompson]{} R., [Rieke]{} M., [Schneider]{} G., [Low]{} F. J., [Hines]{} D., [Stobie]{} B., [Becklin]{} E., [Epps]{} H., 1998, [ApJL]{}, 492, L107+ G., [Bastian]{} N., [Miller]{} B. 
W., [Schweizer]{} F., 2007, [ApJ]{}, 664, 284 (T07a) G., [Bastian]{} N., [Schweizer]{} F., [Miller]{} B. W., 2007, [ApJ]{}, 658, 993 A., [Rowan-Robinson]{} M., [McMahon]{} R., [Efstathiou]{} A., 2002, [MNRAS]{}, 335, 574 B. C., [Chandar]{} R., [Fall]{} S. M., 2007, [AJ]{}, 133, 1067 B. C., [Zhang]{} Q., 2002, [AJ]{}, 124, 1418 C. D., [Harris]{} W. E., [Longden]{} R., [Scoville]{} N. Z., 2006, [ApJ]{}, 641, 763 S. E., [Ashman]{} K. M., [English]{} J., [Freeman]{} K. C., [Sharples]{} R. M., 1999, [AJ]{}, 118, 752 Q., [Fall]{} S. M., [Whitmore]{} B. C., 2001, [ApJ]{}, 561, 727 Q., [Fall]{} S. M., 1999, [ApJL]{}, 527, L81

| ID | $\Delta$ra (s) | $\Delta$dec (arcsec) | U | $\Delta$U | B | $\Delta$B | V | $\Delta$V | I | $\Delta$I | E(B$-$V) min/best/max | log age min/best/max | log mass min/best/max |
|----|-------|-------|-------|------|-------|------|-------|------|-------|------|----------------|----------------|----------------|
| 1 | 51.61 | 28.11 | 22.79 | 0.18 | 23.93 | 0.09 | 23.14 | 0.07 | 22.40 | 0.07 | 0.18/0.42/0.70 | 6.92/6.79/6.92 | 4.12/4.24/4.37 |
| 2 | 50.25 | 24.82 | 20.82 | 0.06 | 21.98 | 0.10 | 21.51 | 0.05 | 21.02 | 0.07 | 0.00/0.04/0.22 | 6.94/6.85/6.94 | 4.47/4.53/4.64 |
| 3 | 52.02 | 27.63 | 20.25 | 0.03 | 21.07 | 0.01 | 20.81 | 0.01 | 20.17 | 0.01 | 0.00/0.02/0.14 | 6.98/6.90/6.98 | 4.87/4.90/5.05 |
| 4 | 51.20 | 26.78 | 22.58 | 0.13 | 23.30 | 0.05 | 22.60 | 0.06 | 21.80 | 0.05 | 0.26/0.50/0.70 | 6.90/6.77/6.90 | 4.43/4.54/4.62 |
| 5 | 51.81 | 27.61 | 22.00 | 0.09 | 22.61 | 0.02 | 22.28 | 0.04 | 21.48 | 0.03 | 0.12/0.22/0.26 | 7.75/7.56/7.75 | 4.44/5.31/5.44 |
| 6 | 50.23 | 25.06 | 21.12 | 0.06 | 22.19 | 0.08 | 21.62 | 0.05 | 21.20 | 0.06 | 0.08/0.24/0.44 | 6.85/6.75/6.85 | 4.49/4.56/4.65 |

: Cluster photometry and parameters for the 276 *good* cluster fits.[]{data-label="tab:fulldat"}

(1) Gives the Object ID number referred to throughout the paper. (2) and (3) give the right ascension and declination offsets from R.A. = $10^{h}20^{m}00^{s}$; decl. = $-43\ 54\ 00$ (J2000.0). Cols. (4)–(11) give the U, B, V & I band photometry with associated errors. Cols. (12)–(14) give the dust extinction of the object in terms of E(B$-$V) after correcting for foreground galactic extinction, as derived from our cluster fitting method; we show the most likely value as well as the maximum and minimum acceptable results. Cols. (15)–(17) show the cluster age in log (yrs); once again we show the minimum, best and maximum values from the cluster fitting. Cols. (18)–(20) give the log cluster mass, with the minimum, best and maximum values from the cluster fitting. A full version of this table is available with the online edition of this article.

[^1]: E-mail: [email protected]; [email protected]
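The fitted power law quoted in the conclusions can be turned into a one-line estimator. The helper below is an illustrative sketch (the function name is ours; the coefficient, exponent and their quoted uncertainties come from the fit above), propagating the two fit errors as if they were independent.

```python
import math

def gamma_percent(sigma_sfr, coeff=25.5, dcoeff=6.0, expo=0.22, dexpo=0.05):
    """Estimate Gamma (per cent) and its uncertainty from the fitted
    power law Gamma = coeff * Sigma_SFR**expo, with Sigma_SFR in
    Msun / yr / kpc^2.  Errors are propagated assuming independence."""
    g = coeff * sigma_sfr ** expo
    dg = math.hypot(sigma_sfr ** expo * dcoeff,
                    g * math.log(sigma_sfr) * dexpo)
    return g, dg
```

At $\Sigma_{SFR} = 1$ [M$_{\small{\sun}}$]{} yr$^{-1}$ kpc$^{-2}$ the relation simply returns the normalisation itself, $25.5\pm6.0$ per cent.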
--- abstract: 'Universal exact conditions guided the construction of most ground-state density functional approximations in use today. We derive the relation between the entropy and Mermin free energy density functionals for thermal density functional theory. Both the entropy and sum of kinetic and electron-electron repulsion functionals are shown to be monotonically increasing with temperature, while the Mermin functional is concave downwards. Analogous relations are found for both exchange and correlation. The importance of these conditions is illustrated in two extremes: the Hubbard dimer and the uniform gas.' bibliography: - 'Master.bib' - 'hubbard.bib' - 'thermal.bib' --- Warm dense matter (WDM) is a rapidly growing multidisciplinary field that spans many branches of physics, including for example astrophysics, geophysics, attosecond physics, and nuclear physics[@KD09; @CECO11; @D13; @MH13; @RR14; @SEJD14; @SGLC15; @KDBL15; @C15]. In the last decade, quantum molecular dynamics, using DFT with electrons at finite temperatures, has been extremely successful at predicting material properties under extreme conditions, and has become a standard simulation tool in this field[@GDRT14]. Almost all such simulations use ground-state exchange-correlation (XC) approximations, even when the electrons are significantly heated. Thermal density functional theory (thDFT) was formalized by Mermin[@M65], when he showed that the reasoning of Hohenberg and Kohn[@HK64] could be extended to the grand canonical potential of electrons coupled to a thermal bath at temperature $\tau$. In recent times, the Mermin-Kohn-Sham (MKS) equations of non-interacting electrons at finite temperature, whose density matches that of the physical system, are being solved to simulate warm dense matter[@KS65; @PPGB14]. 
In most of these calculations, the ground-state approximation (GSA) is made, in which the exchange-correlation (XC) free energy, which typically depends on $\tau$, is approximated by its ground-state value. Accurate results for the uniform gas are still being found[@BCDC13; @KSDT14; @FFBM15; @SGVB15], which provide input to a thermal local density approximation, but LDA is insufficiently accurate for many modern applications, and thermal GGA’s are being explored[@SD14]. Many useful exact conditions in ground-state DFT (relation between coupling constant and scaling, correlation scaling inequalities, exchange and kinetic scaling equalities, signs of energy components) were first derived[@LP85] by studying the variational principle in the form of the Levy constrained search[@L79]. Most of these conditions are satisfied (by construction) by the local density approximation[@KS65] and have been used for decades to constrain and/or improve more advanced approximations[@PBE96]. Their finite temperature analogs were derived in Ref. [@PPFS11] (see also Ref. [@DT11]), and extended in Ref. [@PB15]. Because the kinetic and entropic contributions always appear in the same combination as the so-called kentropic energy (see Eq. (\[Axc\]) and related text), such relations can never be used to extract either component individually. Many basic thermodynamic relations are proven via quantum statistical mechanics[@Schwabl07]. However, converting these to conditions on density functionals is neither obvious nor trivial. In the present work, we extend these methods to the dependence of the Mermin functional (i.e., the universal part of the free-energy functional) on the [*temperature*]{}, rather than on the coupling constant or the scale of the density. We find several new equalities and inequalities which apply to thDFT of all electronic systems. This allows us to separate entropic and kinetic contributions. 
We show that the entropy density functional is monotonically increasing with temperature, as is the sum of the kinetic and electron-electron repulsion density functionals, and that the temperature derivative of the Mermin functional is the negative of the entropy functional. Thus the Mermin functional is concave downwards as a function of temperature. Applying these conditions to the MKS system yields conditions on the exchange-correlation free energy functionals. Lastly, we illustrate all our findings in the two extreme cases of the uniform gas and the Hubbard dimer. We find a recent parametrization of the XC free energy of the uniform gas violates our conditions, although only for densities that are so low as to be unlikely to significantly affect any property calculated within thLDA. For a given average particle number, define the free energy of a statistical density-matrix $\Gamma$ as $$A[\Gamma]= \langle \hat H \rangle_\Gamma - \tau\, S[\Gamma],$$ where $\hat H$ is the Hamiltonian operator, $S$ extracts the entropy, and we use $\tau$ to denote temperature. Define $$\D[\Gamma]=T[\Gamma]+V\ee[\Gamma],$$ where $\hat T$ is the kinetic energy operator and $\hat V\ee$ the electron-electron repulsion operator. Then $$F\t[\Gamma]= \D[\Gamma] - \tau\, S[\Gamma].$$ The Mermin functional, written in terms of a constrained search, is[@PPFS11] $$F\t[\n]= \min_{\Gamma\to\n} F\t[\Gamma],$$ where the argument distinguishes functionals of the density from those of the density-matrix. The free energy of a given system can be found from $$\label{Atdef} A\t= \min_{\n}\left\{ F\t[\n]+\int d^3r\, v(\br)\, \n(\br) \right\}.$$ We denote by $\Gamma\t[\n]$ the statistical density matrix that minimizes $F\t[\Gamma]$ and yields density $\n(\br)$. Then: $$\frac{dF\t[\n]}{d\tau} = \left.\frac{\partial F\t[\Gamma]}{\partial \tau}\right|_{\Gamma\t[\n]} + {\rm Tr}\left\{\frac{\delta F\t[\Gamma]}{\delta \Gamma}\,\frac{d\Gamma\t[\n]}{d\tau}\right\},$$ where all are evaluated at $\Gamma\t[\n]$. Because $\Gamma\t[\n]$ is the minimizer, the term involving its derivative with respect to temperature (or any variable) vanishes. Thus $$\label{SfromF} \frac{dF\t[\n]}{d\tau} = -\, S\t[\n].$$ This is the DFT analog of the standard thermodynamic relation[@Schwabl07], and implies $$F\t[\n]=F^0[\n]-\int_0^\tau d\tau'\, S^{\tau'}[\n],$$ where $F^0[\n]$ is the ground-state functional[@HK64]. We note that Eq. (\[SfromF\]) was derived in [@C15], but only within lattice DFT.
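The relation $dF\t[\n]/d\tau = -S\t[\n]$ and the monotonicity of the entropy can be checked numerically on any finite spectrum. The snippet below is an illustrative sketch only (the three-level spectrum is invented for the check, not taken from the text): it builds the canonical free energy and entropy of the spectrum and compares the finite-difference derivative of $F$ with $-S$.

```python
import numpy as np

# Illustrative three-level spectrum (invented for this check).
E = np.array([0.0, 1.0, 2.5])

def free_energy(tau):
    """Canonical free energy F(tau) = -tau * log Z."""
    return -tau * np.log(np.sum(np.exp(-E / tau)))

def entropy(tau):
    """Gibbs entropy -sum p ln p of the thermal ensemble."""
    p = np.exp(-E / tau)
    p /= p.sum()
    return -np.sum(p * np.log(p))

taus = np.linspace(0.2, 3.0, 400)
F = np.array([free_energy(t) for t in taus])
S = np.array([entropy(t) for t in taus])
dF = np.gradient(F, taus)  # finite-difference dF/dtau
```

On this grid, $dF/d\tau$ matches $-S$ to finite-difference accuracy, $S$ increases monotonically, and consequently $F$ is concave downwards.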
Given a Mermin functional (approximate or exact, interacting or not), Eq. (\[SfromF\]) defines what the corresponding entropy functional must be. Since coordinate scaling[@PPFS11] can separate the kentropic and potential contributions in $F$, Eq. (\[SfromF\]) allows the entropic and kinetic energy functionals to be separated. Alternatively, given an entropy functional, Eq. (\[SfromF\]) defines the temperature-dependence of the corresponding Mermin functional. Since the entropy is always positive, $$\label{dFineq} \frac{dF\t[\n]}{d\tau}\le 0,$$ i.e., the Mermin functional is monotonically decreasing. Now consider what happens when, for a given density and temperature $\tau$, we evaluate the Mermin functional on the density matrix for that density but at a different temperature. By the variational principle, Eq. (\[Atdef\]), $$F\t[\Gamma^{\tau'}[\n]]\ \ge\ F\t[\Gamma\t[\n]],$$ for any value of $\tau'$. Thus $$\D^{\tau'}[\n] - \tau\, S[\Gamma^{\tau'}[\n]]\ \ge\ \D\t[\n] - \tau\, S\t[\n],$$ or $$\label{Dttp} \D^{\tau'}[\n] - \tau\, S^{\tau'}[\n]\ \ge\ \D\t[\n] - \tau\, S\t[\n].$$ Since this result is true for any pair of temperatures, we reverse $\tau$ and $\tau'$ to find: $$\label{Dtpt} \D\t[\n] - \tau'\, S\t[\n]\ \ge\ \D^{\tau'}[\n] - \tau'\, S^{\tau'}[\n].$$ Addition of Eqs. (\[Dttp\]) and (\[Dtpt\]) yields $$(\tau-\tau')\,\left(S\t[\n]-S^{\tau'}[\n]\right)\ \ge\ 0,$$ so that the entropy monotonically increases with $\tau$: ${dS\t[\n]}/{d\tau} \ge 0$. Combining this with Eq. (\[SfromF\]) implies ${d^2F\t[\n]}/{d\tau^2} \le 0$. Thus $F\t[\n]$ is concave downwards. We can also isolate the behavior of $\D\t[\n]$. If we multiply Eq. (\[Dttp\]) by $\tau'$, and Eq. (\[Dtpt\]) by $\tau$, and add them together, all entropic contributions cancel, yielding $$(\tau'-\tau)\,\left(\D^{\tau'}[\n]-\D\t[\n]\right)\ \ge\ 0, \qquad {d\D\t[\n]}/{d\tau}\ \ge\ 0.$$ Both $\D\t[\n]$ and $S\t[\n]$ are monotonically increasing, but the net effect is that the Mermin free energy is decreasing. Applying these conditions to the Mermin-Kohn-Sham electrons[@PPGB14], we find $$\label{SsfromFs} {dF\s\t[\n]}/{d\tau} = -\, S\s\t[\n],$$ and the inequalities $$\label{dFsineq} \frac{dS\s\t[\n]}{d\tau}\ \ge\ 0, \qquad \frac{d^2F\s\t[\n]}{d\tau^2}\ \le\ 0,$$ where subscript s denotes non-interacting, and $F\s\t[\n]=T\s\t[\n]-\tau\, S\s\t[\n]$. Some of these relations have long been invoked for the uniform and slowly-varying gases and for constructing orbital-free density functionals (see Ref.
[@VST12] and references therein), but here they have been proven for every inhomogeneous system.

![Energy components for the Hubbard dimer in units of $2\,t$, where $U=2\,t$ and $\dn=0$: $F\t,\D\t,S\t$, both interacting (solid) and non-interacting (dashed).[]{data-label="Hub"}](HubComp.png){width="\columnwidth"}

To illustrate these results, we calculate all energy components for an asymmetric Hubbard dimer, i.e., a two-site Hubbard model with a potential $v_1 = -v_2$, as described in Ref. [@CFSB15] for the ground state and [@SPB16] for the thermal system. Here $t$ is the hopping parameter, $U$ the on-site repulsion, and $\dn$ the difference in site occupations, where the difference comes from having an inhomogeneous potential $\dv = v_2 - v_1$. This is the simplest possible model in which one can perform an exact thermal calculation, including the exact thermal correlation components. Fig. \[Hub\] shows the energy components, both interacting and non-interacting, as a function of temperature for the homogeneous system with $\dn=0$. All our exact conditions are satisfied for many values of $\dn$ and $U$.

![Temperature dependence of the Mermin functional for the spin-unpolarized uniform gas for several values of the Wigner-Seitz radius $r\s$, using the XC parametrization of Ref. [@KSDT14], where $\epsilon\F$ is the Fermi energy.[]{data-label="unifhi"}](dfOfTz0.png){width="\columnwidth"}

At the other extreme is the uniform electron gas and a modern parametrization of its free energy[@KSDT14]. In the special case of a uniform density and potential, our formulas become the same as the standard thermodynamic formulas. In Fig. \[unifhi\], we plot the derivative of the free energy per particle for fixed density ($r\s$ value, where $r\s = (3/(4\pi\n))^{1/3}$) as a function of temperature, on the scale of the Fermi energy and in atomic units.
As $r\s \to 0$, these curves converge to their well known[@DAC86] non-interacting value, in which the derivative is negative and decreasing everywhere, in accordance with Eq. (\[dFineq\]). Unfortunately, by decreasing the density so that XC effects become relatively more important, we find that the parametrization violates our conditions for $r\s > 10$. Via Eq. (\[SfromF\]), this implies that the entropy is unphysically negative. While such low densities are irrelevant to most practical calculations using thLDA, parametrizations of the uniform gas should build in simple exact conditions such as ours. Note that our restrictions apply only to continuous parametrizations. The QMC data on which Ref. [@KSDT14] is based[@BCDC13] is for the XC energy at discrete values of the density, and so does not directly give the entropy. For extremely high temperatures, sums over KS eigenstates become impractical, and only pure DFT can be applied. Because the uniform gas satisfies our conditions, and because Thomas-Fermi (TF) theory uses local approximations to the kinetic and entropic contributions which satisfy the conditions pointwise, we deduce that TF theory satisfies our conditions. However, recent attempts to go beyond TF theory, such as using generalized gradient approximations for the energy[@KCST13; @SD13; @SD14], should be tested for satisfaction of these constraints. In the final section of this paper, we apply this reasoning to the MKS method. The Mermin functional is written in terms of the MKS quantities and a correction: $$F\t[\n]=F\s\t[\n]+U\H[\n]+A\xc\t[\n],$$ called the exchange-correlation (XC) free energy. (The Hartree energy, $U\H[\n]$, has no explicit temperature dependence). The XC free energy is a sum of three components: $$\label{Axc} A\xc\t[\n]= K\xc\t[\n]+U\xc\t[\n]=T\xc\t[\n]-\tau\, S\xc\t[\n]+U\xc\t[\n],$$ where $U\xc\t$ is the potential contribution and $K\xc\t$ is the kentropic contribution, which in turn consists of $T\xc\t$, the kinetic contribution, and $-\tau S\xc\t$, where $S\xc\t$ is the entropic contribution. Subtract Eq.
(\[SsfromFs\]) from Eq. (\[SfromF\]) to find $$\frac{dA\xc\t[\n]}{d\tau} = -\, S\xc\t[\n],$$ or $$\label{SxcfromAxc} A\xc\t[\n]=E\xc[\n]-\int_0^\tau d\tau'\, S\xc^{\tau'}[\n].$$ All thermal XC effects are contained in the XC contribution to the entropy. This provides an intriguing alternative to the adiabatic connection formula of Ref. [@PPFS11] or the thermal connection formula of Ref. [@PB15]. Our inequalities do not yield definite signs for XC quantities, just weak constraints that would be difficult to impose universally on an XC approximation: $$\frac{dS\xc\t[\n]}{d\tau}\ \ge\ -\,\frac{dS\s\t[\n]}{d\tau}, \qquad \frac{d^2A\xc\t[\n]}{d\tau^2}\ \le\ -\,\frac{d^2F\s\t[\n]}{d\tau^2}.$$ We can also combine these with the coupling-constant derivatives of Ref. [@PB15] to find Maxwell-style relations: $$\left(\frac{\partial S^{\lambda,\tau}[\n]}{\partial\lambda}\right)_{\!\tau}= -\,\left(\frac{\partial}{\partial\tau}\,\frac{\partial F^{\lambda,\tau}[\n]}{\partial\lambda}\right)_{\!\lambda},$$ where $\lambda$ denotes evaluation at coupling-constant $\lambda$, holding the density fixed[@PPFS11]. Exchange can be isolated by considering the limit of either weak interaction or scaling to the high-density limit[@PPFS11]. The exchange free energy is $$\label{Axtdef} A\x\t[\n]=\langle \hat V\ee\rangle_{\Gamma\s\t[\n]}-U\H[\n]$$ in a case of no degeneracies (the only case we consider here). Because $\Gamma\s\t$ minimizes the kentropy alone, to first order in $\lambda$, kentropic corrections must be zero. Thus $$\label{Kxt} K\x\t[\n]=0, \qquad T\x\t[\n]= \tau\, S\x\t[\n] =-\,\tau\,\frac{dA\x\t[\n]}{d\tau}.$$ It may seem odd to consider a kinetic contribution to exchange (impossible in the ground state), but $T\x\t$ vanishes as $\tau\to 0$ in Eq. (\[Kxt\]). For a uniform gas, the thermal exchange energy is well-known[@DAC86]. But for our Hubbard dimer[@SPB16], when $\langle N \rangle =2$, we find $E\x[\n]=-U\H[\n]/2$, so that $T\x\t=S\x\t=0$.

![Correlation entropy in the Hubbard dimer for several values of $\dn$ as a function of temperature, in units of $2\,t$, where $U=2\,t$.[]{data-label="HubC"}](HubSc.png){width="\columnwidth"}

The results of Eq. (\[SxcfromAxc\]) apply to correlation alone and can be used in either direction, just as the relation for the full functional. They are well-known for the uniform gas from statistical mechanics[@I82; @PD84; @PD00]. But for an inhomogeneous system, they are non-trivial, and so we illustrate them on the asymmetric Hubbard dimer. In Fig.
\[HubC\], we plot the entropic correlation as a function of temperature for several values of $\dn$, the occupation difference that arises from the asymmetric potential. Eq. (\[SxcfromAxc\]) is satisfied within numerical precision. The derivative of $S\c\t$ can change sign, even though both $S\t(\dn)$ and $S\s\t(\dn)$ are monotonically increasing (This explains the small dip seen in Fig. 7 of Ref. [@SPB16]). Finally, we explain the apparent success of the ground-state approximation (GSA) for $A\xc\t[\n]$ in MKS equilibrium calculations. Almost all present-day calculations of WDM use this approximation, and a recent calculation on the Hubbard dimer[@SPB16] found that GSA worked well when neither the temperature nor the strength of the correlations were large (the conditions corresponding to most WDM calculations). Now we explain why. Write $$F^{\tau,{\rm GSA}}[\n]=F\s\t[\n]+U\H[\n]+E\xc[\n].$$ Clearly, all temperature dependence is contained only in the KS part (usually a very dominant piece). Since the KS piece satisfies all the different inequalities and equalities, then so does any GSA calculation. But attempt to add corrections to a GSA calculation by writing $$A\xc^{\tau,{\rm GSA}}[\n]=E\xc^{\rm GSA}[\n]+ \Delta A\xc\t[\n].$$ Only the thermal correction appears in the exact conditions we have derived, since they all contain temperature derivatives. But there is no simple way to know if the corrections will satisfy the exact conditions for all possible systems. The only case would be using local approximations for all temperature-dependent quantities, and then using energy densities from the uniform gas. Thus a TF calculation, with thermal LDA corrections, [*would*]{} satisfy these conditions, since they would be satisfied pointwise, as the uniform gas satisfies these conditions for every density. But in any MKS calculation using approximate thermal XC corrections, this is not guaranteed. Unless special care is taken to guarantee satisfaction of our conditions, [*only*]{} GSA automatically does this.
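Written out, the inheritance is immediate; the two lines below simply combine the GSA decomposition with the Kohn-Sham conditions (this is a restatement of the argument, not a new result):

```latex
% Only F_s^tau carries temperature dependence in the GSA, so the
% exact conditions are inherited from the Kohn-Sham piece:
\frac{d F^{\tau,{\rm GSA}}[n]}{d\tau}
   = \frac{d F_{\rm s}^{\tau}[n]}{d\tau} = -\,S_{\rm s}^{\tau}[n] \le 0,
\qquad
\frac{d^{2} F^{\tau,{\rm GSA}}[n]}{d\tau^{2}}
   = -\,\frac{d S_{\rm s}^{\tau}[n]}{d\tau} \le 0 .
```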
This is analogous to the situation in TDDFT (at zero temperature): The adiabatic LDA, which ignores the history dependence that is known to exist in the TDDFT functionals, satisfies most exact conditions, while the time-dependent LDA (the Gross-Kohn approximation[@GK85]) violates several important constraints[@Db94]. All this explains why the GSA has been working well in many situations[@KD09; @KDD08]. The GSA appears to be correct in both the low- and high-temperature limits and, at least for model systems, reproduces the exact KS orbitals accurately[@SPB16]. Of course, this depends on the specific property being calculated and the acceptable level of error, and does not preclude moderate deviations, especially between these extremes, i.e., warm dense matter. But any calculation that includes, e.g., semilocal thermal XC corrections, risks violating the exact conditions listed here that GSA automatically satisfies, and should be checked for such violations. On the other hand, the Hartree-Fock approximation (or rather, the DFT equivalent, called EXX[@KK08]), must satisfy the conditions since any expansion in powers of the coupling constant up to some order must satisfy all our conditions. To conclude, the formulas presented here are exact conditions applying to every thermal electronic system when treated with DFT, and should guide the future construction of approximate functionals. The authors acknowledge support from the National Science Foundation (NSF) under grant CHE - 1464795. J.C.S. acknowledges support through the NSF Graduate Research fellowship program under award \# DGE-1321846. P.E.G. acknowledges support from the Department of Energy (DOE) under grant DE14-017426. A.P.J.’s work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. A.P.J. was supported in part by the University of California President’s Postdoctoral Fellowship.
--- abstract: 'We investigate the two-logarithm matrix model with the potential $X\Lambda+\alpha\log(1+X)+\beta\log(1-X)$ related to an exactly solvable Kazakov–Migdal model. In the proper normalization, using Virasoro constraints, we prove the equivalence of this model and the Kontsevich–Penner matrix model and construct the $1/N$-expansion solution of this model.' ---

SMI-Th-26/98\
hep-th/9811200\

[Two-logarithm matrix model with an external field]{}\
\
\
[*Gubkina 8, 117966, GSP–1, Moscow, Russia*]{}\
and\
\
\
[*Vorobyevy Gory, 119899, Moscow, Russia*]{}\

Introduction
============

Matrix models with coupling to external matrices play an important role in contemporary mathematical and theoretical physics. Historically, the first model of this type was the Brezin–Gross (BG) model [@BG] of a unitary matrix $U$ linearly coupled to an external matrix $\Lambda$. But the real breakthrough in this field was caused by Kontsevich’s papers [@Konts], where the generating functional for 2D topological gravity was proved to be the integral over the Hermitian matrices $X$ with the potential $X^3$, which are linearly coupled to an external matrix $\Lambda$. Simultaneously, the Witten hypothesis [@Wit91] that this generating functional is a $\tau$-function of the KdV hierarchy was proved [@Konts; @IZ92]. The generalized Kontsevich model (GKM)—the model with an arbitrary polynomial potential $V(X)$ and coupling with an external field—turned out to be a $\tau$-function of the Kadomtsev–Petviashvili hierarchy [@KMMM]. Then, interest in matrix models with logarithmic potentials appeared. The first such model with the external field coupling was proposed in [@CM] (the authors named it the Kontsevich–Penner (KP) model) and was pushed forward in [@ACM; @ACKM], where its equivalence to the Hermitian one-matrix model with an arbitrary nonsingular potential was proved.
The underlying geometrical structure is the discretized moduli space (d.m.s.) construction [@Ch1]. Later on, the exact relation was proved that connects this model in the d.m.s. times with two copies of the Kontsevich integral taken at different time sets [@Ch2]. Both the Kontsevich and the KP models, as well as the BG model, admit explicit solutions in the $1/N$-expansion [@BG; @IZ92; @ACKM]. Such solutions arise from the loop equations (or the Virasoro algebra constraints), which are at most quadratic in fields. One can formulate the problem to find [*all*]{} external field matrix models that manifest this property. Another model of this kind, the so-called NBI matrix model of IIB superstrings with the potential $X\Lambda+X^{-1}+(2\eta+1)\log X$, appeared [@F-Z; @ChZ] in the context of the (M)atrix string theory. This model includes the BG model as a particular case ($\eta=0$) [@MMS] and, away from this point, it can be reduced [@AC], after a change of times, to the Kontsevich model. (In particular, this enables one to produce the answer for the NBI model in the moment technique as soon as the answer for the Kontsevich model is known.) Note that the proof of equivalence of these two models relies upon the coincidence of the Virasoro algebras. The last model, which completes the list of matrix models with the loop equations quadratic in fields and which can therefore be solved in the $1/N$-expansion framework, is the two-logarithm (2-log) model with the potential $X\Lambda+\alpha\log(1-X)+\beta\log(1+X)$. This model turns out to be closely related to the exactly solvable Kazakov–Migdal models [@Mak1] and it was thoroughly investigated in the case of the unit matrix $\Lambda$, i.e., where it is reduced to the one-matrix model. Even in this case, this model manifests a rich phase structure [@Mak2].
In the present paper, we do not investigate all possible phases of the 2-log model and rather confine our consideration to the Kontsevich phase only, in which the expansion over traces of negative powers of the matrix $\Lambda$ makes sense. First, we solve this model in the leading order of the $1/N$-expansion; then, we find the constraint equations (the Virasoro algebra) and prove that in the proper normalization, these equations are exactly equivalent to the constraint equations of the KP model [@CM]. Possible applications of the 2-log model are discussed. Matrix model with two logarithms ================================ We start with the following matrix integral, which appear, for instance, in the logarithmic Kazakov–Migdal model [@Mak1; @Mak2]: $$\label{mm} Z=\!\!\int\!\!dX\e^{-N{\:{\rm tr}\,\left}[X\Lambda +\alpha\log (1-X) +\beta\log(1+X)\right]}\,.$$ This integral is of the most general form, since, rescaling and shifting the fields $X$ and $\Lambda$, we may change the logarithmic branch points; however, we cannot change the constants $\alpha$ and $\beta$, which are actual charges in the model(\[mm\]). The matrix integral(\[mm\]) belongs to a class of generalized Kontsevich models (GKM) [@GKM]. Such models with negative powers of the matrix $X$ have been previously discussed in the context of $c=1$ bosonic string theory [@DMP]. For the models of this type, the large $N$ solution is known explicitly only in some special cases. The models with cubic potential for $X$ [@cubic] and the combination of the logarithmic and quadratic potentials [@CM; @ZC] were solved by a method based on the Schwinger–Dyson equations, developed first for the unitary matrix models with external field [@BG; @unit]. The same technique, being applied to the integral(\[mm\]), also allows one to find its large $N$ asymptotic expansions in the closed form for arbitrary $\alpha$ and $\beta$. 
The Schwinger–Dyson equations for(\[mm\]) follow from the identity $$\label{...=0} \left(\frac{1}{N^3}\,\frac{\partial }{\partial \Lambda_{jk} } \,\frac{\partial }{\partial \Lambda_{li} }-\frac{1}{N}\delta_{jk} \delta_{li}\right)\!\!\int\!\!dX \frac{\partial }{\partial X_{ij}}\, \e^{-N{\:{\rm tr}\,\left}[X\Lambda +\alpha\log (1-X) +\beta\log(1+X)\right]}=0.$$ Written in terms of the eigenvalues, these $N$ equations read $$\label{SDeig} \left[-\frac{1}{N^2}\,\lambda_i\,\frac{\partial^2}{\partial \lambda_i^2}-\frac{1}{N^2}\!\sum_{j\neq i}\lambda_j\,\frac{1} {\lambda_j-\lambda_i}\left(\frac{\partial}{\partial\lambda_j} -\frac{\partial}{\partial\lambda_i}\right) +\frac{\alpha+\beta-2}{N}\,\frac{\partial} {\partial\lambda_i}+(\beta-\alpha)+\lambda_i\right]\!Z(\lambda)\!=\!0.$$ It is convenient to set $$\label{defW} W(\lambda _i)=\frac{1}{N}\,\frac{\partial }{\partial \lambda _i}\,\log Z.$$ We also introduce the eigenvalue density of the matrix $\Lambda $: $$\label{dens} \rho (x)=\frac{1}{N}\,\sum_{i}\delta (x-\lambda _i).$$ The density obeys the normalization condition $$\label{norm} \int\!\!dx\,\rho(x)=1$$ and in the large $N$ limit becomes a smooth function. A simple power counting shows that the derivative of $W(\lambda_i )$ in the first term on the left hand side of equation(\[SDeig\]) is suppressed by the factor $1/N$ and can be omitted at $N=\infty $. The remaining terms are rewritten as follows: $$\label{inteqn} -\,xW^2(x)-\!\!\int\!\!dy\,\rho(y)\,y\,\frac{W(y)-W(x)}{y-x} +(\alpha+\beta-2)W(x)+(\beta-\alpha)+x=0,$$ where $\lambda_i$ is replaced by $x$. 
Equation (\[inteqn\]) can be simplified by the substitution $$\label{subst} \tw(x)=xW(x)-\frac{\alpha+\beta-1}{2}\,.$$ After some transformations, using the normalization condition (\[norm\]), we obtain $$\label{maineq} \tw^2(x)+x\!\!\int\!\!dy\,\rho(y)\,\,\frac{\tw(y)-\tw(x)}{y-x}= x^2+(\beta-\alpha)x+\frac{(\alpha+\beta-1)^2}{4}\,.$$ The nonlinear integral equation (\[maineq\]) can be solved with the help of the ansatz $$\label{anz} \tw(x)=f(x)+\frac{x}{2}\!\int\!\!dy\, \frac{\rho(y)}{f(y)}\,\frac{f(y)-f(x)}{y-x}\,,$$ where $f(x)$ is an unknown function to be determined by substituting (\[anz\]) into Eq. (\[maineq\]). The asymptotic behaviors of $\tw(x)$ and $f(x)$ as $x\rightarrow \infty $ follow from Eq. (\[maineq\]): $\tw(x)\sim {x}+(\beta-\alpha-1)/2$, and the analytic solution with a minimal set of singularities is merely $$\label{fx} f(x)=\sqrt{ax^2+bx+c\,}.$$ Let us introduce the moments of the external field $$I_0=\!\!\int\!\!\frac{\rho(x)}{f(x)}\,dx,\qquad J_0=\!\!\int\!\!\frac{\rho(x)}{f(x)}x\,dx.$$ The parameters $a$, $b$, and $c$ are unambiguously determined from Eq. (\[maineq\]). We find that $$c=(\beta+\alpha-1)^2/4,$$ and $a$ and $b$ are implicitly defined by the following two constraints: $$\begin{aligned} \label{defa} & & 1+\frac{1}{2}I_0=\frac{1}{\sqrt{a}}\,, \nonumber \\[-2.5mm] & & \\[-2.5mm] & & \sqrt{a}J_0=\beta-\alpha-\frac{b}{a}\,, \nonumber\end{aligned}$$ or, in terms of the eigenvalues, $$\begin{aligned} \label{a1} & & 1+\frac{1}{2N}\sum_{j}\frac{1}{\sqrt{a\lambda^2_j+ b\lambda_j+c\,}}=\frac{1}{\sqrt{a}}\,, \nonumber \\[-2.5mm] & & \\[-2.5mm] & & \sqrt{a}\frac{1}{N}\sum_{j}\frac{\lambda_j}{\sqrt{a\lambda^2_j+ b\lambda_j+c\,}}=\beta-\alpha-\frac{b}{a}\,. \nonumber\end{aligned}$$ So, we have $$\label{W1} W(x)=\frac{\sqrt{ax^2+bx+c\,}}{x}+\frac{1}{2}\!\!\int\!\!dy\,\rho(y)\, \frac{f(y)-f(x)}{f(y)(y-x)}+\frac{\alpha+\beta-1}{2x}\,.$$ Then, integrating (\[W1\]) w.r.t. $x$ and checking that the stationary conditions w.r.t.
the variables $a$ and $b$ are satisfied, we find the answer for the integral in the large $N$ limit, $$\begin{aligned} \label{res} & & \hspace*{-7.0mm}\log Z=N^2(\beta-\alpha)^2\left[\frac{1}{8} \log(b^2-4ac)-\frac{1}{4}\log a\right]+N^2(\beta-\alpha) \Biggl[\frac{1}{4}\log a-\frac{1}{4}\log(b^2-4ac) \nonumber \\ & & \hspace*{8.0mm}+\,\sqrt{c}\arctanh\frac{2\sqrt{ca}}{b} -\frac{b}{2a}\Biggr]+N^2\left[\frac{b^2}{8a^2}-\frac{c}{2a} +\frac{2c}{\sqrt{a}}+\frac{c}{2}\log(b^2-4ac)\right] \nonumber \\ & & \hspace*{8.0mm}+\,N\sum_{i}\Biggl[\frac{\alpha+\beta-1}{2}\log\lambda_i +\frac{f(\lambda_i)}{\sqrt{a}} +\frac{1}{2}(\beta-\alpha)\log\left(\sqrt{a}\lambda_i +\frac{b}{2\sqrt{a}}+f(\lambda_i)\right) \nonumber \\ & & \hspace*{8.0mm}-\,\sqrt{c}\arctanh\biggl(\frac{\sqrt{c} +\lambda_i\,b/(2\sqrt{c})}{f(\lambda_i)}\biggr)\Biggr] -\frac{1}{4}\sum_{ij}\Biggl[\log(\lambda_i-\lambda_j) \nonumber \\ & & \hspace*{8.0mm}+\,\arctanh\left(\frac{a\lambda_i\lambda_j +(\lambda_i+\lambda_j)\,b/2+c}{f(\lambda_i)f(\lambda_j)}\right)\Biggr].\end{aligned}$$ One can verify directly that $$\label{stat} \frac{\partial}{\partial a}\log Z= \frac{\partial}{\partial b}\log Z=0$$ and $\dis{\frac{1}{N}\,\frac{\partial}{\partial\lambda_i}}\log Z=W(\lambda _i)$, as long as Eqs.(\[a1\]) hold. Let us establish a relation between the constraint equations of the 2-log model and the KP model [@CM].
It is convenient to introduce new charges (parameters) instead of $\alpha$ and $\beta$ $$\gamma\equiv(\beta-\alpha)/2,\qquad \varphi\equiv-\,(\alpha+\beta-1)/2=\sqrt{c},$$ and new variables $$\tilde b\equiv b/a,\qquad \tilde c\equiv c/a.$$ Shifting all eigenvalues $\lambda_i$ by the same constant $\xi$, we can rewrite the 2-log constraint equations as follows: $$\begin{aligned} & & \frac{\varphi}{\sqrt{\tilde c}}\pm\frac{1}{2N}\sum_i \frac{1}{\sqrt{\lambda_i^2+\left(2\xi+\tilde b\right)\lambda_i +\left(\xi^2+\xi\tilde b+\tilde c\right)}}=1, \nonumber \\[-2.5mm] & & \label{2l_c_eq} \\[-2.5mm] & & \pm\,\frac{1}{N}\sum_i\frac{\lambda_i+\xi}{\sqrt{\lambda_i^2 +\left(2\xi+\tilde b\right)\lambda_i+\left(\xi^2+\xi\tilde b +\tilde c\right)}}=2\gamma-\tilde b, \nonumber\end{aligned}$$ where “$\pm$" depends on the branch of the square root. Then, we make the following time change: $${\rm tr}\frac{1}{\lambda^n}={\rm tr}\frac{1}{\eta^n}\pm \biggl(-\,2\varphi\,\frac{N}{\left(-\xi\right)^n}+2N\delta_{n,1} -N\delta_{n,2}\biggr), \label{time1}$$ where the role of “$\pm$" is the same. Performing this time change and relating the variables of the two models $$\begin{aligned} \label{var_2l_KP} & & 2\xi+\tilde b=4b, \nonumber \\[-2.5mm] & & \\[-2.5mm] & & \xi^2+\xi\tilde b+\tilde c=4c, \nonumber\end{aligned}$$ where $b$ and $c$ are already the KP variables, we have $$\begin{aligned} & & 2b\pm\frac{1}{N}\sum_i\frac{1}{\sqrt{\eta_i^2+4b\eta_i+4c\,}}=0, \nonumber \\[-2.5mm] & & \label{KP_c_eq} \\[-2.5mm] & & \pm\,\frac{1}{2N}\sum_i\frac{\eta_i}{\sqrt{\eta_i^2+4b\eta_i+4c\,}} +c-3b^2=\gamma-\varphi, \nonumber\end{aligned}$$ i.e., exactly the constraint equations of the KP model with $\gamma-\varphi\equiv\tilde\alpha$ [@CM]. Here $\xi$ is an arbitrary parameter.
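The constraint equations (\[a1\]) determine $a$ and $b$ only implicitly. As a concrete illustration, they can be solved numerically once a spectrum of $\Lambda$ is specified; the following sketch uses a standard root finder, with all parameter values and the spectrum chosen hypothetically (they are not taken from the paper):

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical inputs: potential parameters and eigenvalues of Lambda.
alpha, beta = 0.1, 0.3
lam = np.linspace(2.0, 4.0, 50)          # eigenvalues lambda_j > 0
c = (alpha + beta - 1.0) ** 2 / 4.0      # c is fixed analytically

def constraints(p):
    """Residuals of the two constraints (a1) defining a and b."""
    a, b = p
    f = np.sqrt(a * lam ** 2 + b * lam + c)   # f(lambda_j), positive branch
    eq1 = 1.0 + 0.5 * np.mean(1.0 / f) - 1.0 / np.sqrt(a)
    eq2 = np.sqrt(a) * np.mean(lam / f) - (beta - alpha - b / a)
    return [eq1, eq2]

a0, b0 = fsolve(constraints, x0=[0.7, -0.5])   # initial guess chosen by hand
```

The $1/N$-weighted sums over eigenvalues are written as means, and the square root is taken on its positive branch; for other spectra the initial guess may need adjusting to keep $f^2>0$ along the iteration.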
Using the original parameters $\alpha$, $\beta$, and $\alpha_{\scr\rm KP}$ ($\alpha_{\scr\rm KP}$ is the parameter $\alpha$ of the KP model, and $\alpha_{\scr\rm KP}+1/2=\tilde\alpha$ in the notation of [@CM]), we see that $\alpha_{\scr\rm KP}=\beta-1$. At first glance, the parameter $\beta$ appears to play a distinguished role compared to $\alpha$. In fact, they enter on an equal footing. The obvious symmetry of the 2-log matrix integral is encoded in the transformations $\lambda_i\rightarrow-\lambda_i$ ($i=\overline{1,N}$) and $\alpha\leftrightarrow\beta$. Under such a symmetry, $\gamma\rightarrow-\gamma$, $\varphi\rightarrow\varphi$, and $\tilde\alpha=\gamma-\varphi\rightarrow -\gamma-\varphi$, so that $\alpha_{\scr\rm KP}\rightarrow\alpha-1$. Let us recall the answer in the large $N$ limit for the KP model [@CM]. Substituting $\tilde\alpha$$=$$\gamma$$-$$\varphi$, we have $$\begin{aligned} & & \hspace*{-8.0mm}\log Z_{\scr\rm KP} =\frac{N^2}{2}\left(\gamma-\varphi -\frac{1}{2}\right)\log \left(b^2-c\right)-\frac{5}{2}\,b^2c -\left(\gamma-\varphi\right) c +\frac{c^2}{4}+3\left(\gamma-\varphi\right)b^2+\frac{9}{4}\,b^4 \nonumber \\ & & \hspace*{7.0mm}+\,N\sum_i\Biggl\{\left(\frac{\eta_i}{2}-b\right) \sqrt{\frac{\eta_i^2}{4}+b\eta_i+c\,} +\left(\gamma-\varphi\right) \log \Biggl(\eta_i+2b+\sqrt{\frac{\eta_i^2}{4}+b\eta_i+c\,}\,\Biggr) +\frac{\eta_i^2}{4}\Biggr\} \nonumber \\ & & \hspace*{7.0mm}-\,\frac{1}{4}\sum_{ij} \log \Biggl(\frac{\eta_i\eta_j}{4}+\frac{b}{2}\left(\eta_i+\eta_j\right) +c+\sqrt{\frac{\eta_i^2}{4}+b\eta_i+c\,}\, \sqrt{\frac{\eta_j^2}{4}+b\eta_j+c\,}\,\Biggr). \label{g0_KP}\end{aligned}$$ We now compare the large $N$ limit answer for the 2-log model with(\[g0\_KP\]). In what follows, all equalities hold up to a purely complex constant and irrelevant factors that depend polynomially (at most quadratically) on the parameters $\alpha$ and $\beta$ of the 2-log model. Obviously such additional terms cannot influence the critical behavior of the model.
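These parameter relations and the stated symmetry are simple algebra and can be checked mechanically; the following small symbolic script is an independent consistency check, not code from the paper:

```python
import sympy as sp

alpha, beta = sp.symbols('alpha beta')
gamma = (beta - alpha) / 2
phi = -(alpha + beta - 1) / 2

# alpha_KP + 1/2 = tilde-alpha = gamma - phi, hence alpha_KP = beta - 1
alpha_kp = sp.simplify(gamma - phi - sp.Rational(1, 2))

# the symmetry alpha <-> beta sends gamma -> -gamma and keeps phi fixed,
# so the transformed alpha_KP is -gamma - phi - 1/2 = alpha - 1
alpha_kp_sym = sp.simplify(-gamma - phi - sp.Rational(1, 2))
```

Both identities reduce to zero after simplification, confirming $\alpha_{\scr\rm KP}=\beta-1$ and its image $\alpha-1$ under the symmetry.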
Making the eigenvalue shift by $\xi$ and denoting $d=b^2-c$, we obtain $$\begin{aligned} & & \hspace*{-7.0mm}\log Z=\frac{N^2\gamma^2}{2} \log d+N^2\gamma\Biggl[-\left(\varphi+\frac{1}{2}\right) \log d+2\varphi\log\left(\tilde b +2\sqrt{\tilde c}\right)-\tilde b\Biggr] +N^2\Biggl[2d+2\varphi\sqrt{\tilde c} \nonumber \\ & & \hspace*{5.0mm}+\,\frac{1}{2}\left(\varphi+\frac{1}{2}\right)^2 \log d-\varphi\left(\varphi+\frac{1}{2}\right)\log\tilde c\Biggr] +N\sum_i\Biggl\{\gamma\log \Biggl(1+\frac{2b}{\lambda_i}+\sqrt{1+\frac{4b}{\lambda_i} +\frac{4c}{\lambda_i^2}\,}\,\Biggr) \nonumber \\ & & \hspace*{5.0mm}-\,\varphi\log \Biggl(\sqrt{1+\frac{4b}{\lambda_i} +\frac{4c}{\lambda_i^2}\,}+\frac{\tilde b}{2\sqrt{\tilde c\,}} +\frac{\sqrt{\tilde c\,} +\xi\,\tilde b/(2\sqrt{\tilde c\,})}{\lambda_i}\Biggr) +\sqrt{\lambda_i^2+4b\lambda_i+4c\,}-\lambda_i \nonumber \\ & & \hspace*{5.0mm}+\,\left(\gamma-\varphi-\frac{1}{2}\right) \log \lambda_i+\lambda_i\Biggr\} -\frac{1}{4}\sum_{ij}f\biggl(\frac{1}{\lambda_i}\,, \frac{1}{\lambda_j}\biggr),\end{aligned}$$ where $$f\left(x,y\right)=\log \left(\frac{1}{2}+b\left(x+y\right) +2cxy+\frac{1}{2}\sqrt{1+4bx+4cx^2\,}\,\sqrt{1+4by+4cy^2\,}\,\right).$$ After some tedious algebra (similar to that in [@AC]), we obtain $$\log Z=\log Z_{\scr\rm KP}+N\sum_i\Biggl\{\Bigl(\gamma -\varphi-\frac{1}{2}\Bigr) \log\,\Bigl(\frac{\lambda_i}{\eta_i}\Bigr) +\lambda_i-\frac{\eta_i^2}{2}\Biggr\} +2\varphi^2\log \varphi. \label{NL_conn}$$ The difference between $\log Z$ and $\log Z_{\scr\rm KP}$ depends only on some normalization factors in the large $N$ limit. It is worth noting that these factors differ from the original normalization factors of the two models, which can be obtained by the previously developed scheme [@AC]. We now show that these normalization factors are indeed natural. Let us investigate the Kontsevich regime of the two models ($\Lambda\rightarrow\infty$ and $\eta\rightarrow\infty$).
Then, for the KP model we have (up to a constant) $$Z_{\scr\rm KP}\!=\!\!\!\int\!\!DX \e ^{N{\rm tr}[\eta X\!-\!\frac{X^2}{2}+\alpha\log X]}=\e^{\frac{N}{2} {\rm tr}\,\eta^2}({\rm det}\,\eta)^{\alpha N}\!\!\int\!\!DX \e^{N{\rm tr}[-\frac{X^2}{2}+\alpha \log(1+\frac{X}{\eta})]}\!\simeq\!\e^{\frac{N}{2}{\rm tr}\,\eta^2} ({\rm det}\,\eta)^{\alpha N}.$$ There are two stationary points, $X_0=\pm 1+Y/\Lambda$, in the Kontsevich regime for the 2-log model ($Y$ is the new variable). Choosing $X_0=-1+Y/\Lambda$ for definiteness (the other stationary point gives the same answer after the symmetry $\Lambda\rightarrow -\Lambda$), we obtain $$Z=({\rm det}\,\Lambda)^{N(\beta-1)}\e^{N{\rm tr}\,\Lambda} \!\!\int\!\! DY \e^{-N{\rm tr}[Y+\alpha\log(2-\frac{Y}{\Lambda})+\beta\log Y]} \simeq({\rm det}\,\Lambda)^{N(\beta-1)}\e^{N{\rm tr}\,\Lambda}.$$ These are nothing but our normalizing factors. Let us make the eigenvalue shift in the master equation of the 2-log model $\Bigl(\partial_i\equiv\dis{\frac{\partial}{\partial\lambda_i}}\Bigr)$ $$\Bigl[-\,\frac{1}{N^2}(\lambda_i+\xi)\partial_i^2-\frac{1}{N^2}\sum_{j\neq i} \frac{\lambda_j+\xi}{\lambda_j-\lambda_i}(\partial_j-\partial_i) +\frac{\alpha+\beta-2}{N}\,\partial_i+\beta-\alpha+\lambda_i+\xi\Bigr]Z=0.$$ Using our normalizing factor $$\prod_i\lambda_i^{N(\beta-1)}e^{N\lambda_i} \label{2l_n_f}$$ and pushing it through derivatives, we replace $$\partial_i\longrightarrow\partial_i+\frac{N(\beta-1)}{\lambda_i}+N.$$ Then, we obtain the master equation for the normalized partition function $$\begin{aligned} & & \Bigl[-\,\frac{1}{N^2}(\lambda_i+\xi)\partial_i^2-\frac{1}{N^2}\sum_{j\neq i} \frac{\lambda_j+\xi}{\lambda_j-\lambda_i}(\partial_j-\partial_i) -\frac{2\lambda_i}{N}\,\partial_i+\frac{\alpha-\beta-2\xi}{N}\,\partial_i \nonumber \\ & & \hspace*{2.0mm}-\,\frac{2\xi(\beta-1)}{N\lambda_i}\,\partial_i -\frac{\xi(\beta-1)^2}{\lambda_i^2}+\frac{\beta-1}{\lambda_i} \biggl(\alpha-2\xi+\frac{\xi}{N}\sum_j\frac{1}{\lambda_j}\biggr) \Bigr]{\cal
Z}=0.\end{aligned}$$ Let us introduce the times of the 2-log model $$t_n=\frac{1}{n}\sum_i\frac{1}{\lambda_i^n}\,.$$ Then, the constraint equations for ${\cal Z}(\{t_n\})$ are obtained after some tedious algebra which we omit here. Collecting all coefficients of the term $1/(\lambda_i^kN^2)$, we obtain $$L_k{\cal Z}(\{t_n\})=0,\qquad k\geqslant -1,$$ where $$\begin{aligned} & & L_k=V_{k+1}+\xi V_k+\xi N(\alpha+\beta-1) \Bigl((1-\delta_{k,0}-\delta_{k,-1})\frac{\partial}{\partial t_k} -N(\beta-1)\delta_{k,0}\Bigr) \nonumber \\ & & \hspace*{10mm}+\,\xi\delta_{k,-1}N(\beta-1)(t_1-2N),\end{aligned}$$ and $$\begin{aligned} & & V_k=-\sum_{m=1}^{\infty}mt_m\frac{\partial}{\partial t_{m+k}} -\sum_{m=1}^{k-1}\frac{\partial}{\partial t_m}\, \frac{\partial}{\partial t_{k-m}} -N(\alpha-\beta+1)(1-\delta_{k,0}-\delta_{k,-1}) \frac{\partial}{\partial t_k} \nonumber \\ & & \hspace*{10.0mm}+\,2N(1-\delta_{k,-1})\frac{\partial}{\partial t_{k+1}} +t_1\delta_{k,-1}\frac{\partial}{\partial t_{k+1}} +N^2\alpha(\beta-1)\delta_{k,0}\,.\end{aligned}$$ Here, the derivatives over $t_0$ and $t_{-1}$ are fictitious and serve only to compactify the presentation. For $k,l\geqslant -1$, $L_k$ satisfy the algebra $$[L_k,L_l]=(l-k)(L_{k+l+1}+\xi L_{k+l}).$$ Zero shift ($\xi=0$) results in the Virasoro algebra where the $L_{-1}$ generator is absent, $$[V_k,V_l]=(l-k)V_{k+l}, \qquad k,l\geqslant 0.$$ We can also obtain the Virasoro algebra from the general algebra with nonzero shift by the replacement $${\cal L}_k=\sum_{s=0}^{\infty}\frac{(-1)^s}{\xi^{s+1}}L_{k+s}, \qquad k\geqslant -1,$$ which is singular at $\xi=0$.
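That this series replacement turns the shifted bracket $[L_k,L_l]=(l-k)(L_{k+l+1}+\xi L_{k+l})$ into the Virasoro bracket $[{\cal L}_k,{\cal L}_l]=(l-k){\cal L}_{k+l}$ can be verified order by order in $1/\xi$ on the structure constants alone. The following stdlib-only script is an independent consistency check (not code from the paper); the truncated series agree exactly up to the truncation order:

```python
from fractions import Fraction

XI = Fraction(3, 2)   # arbitrary nonzero shift; the identity holds for any value
S = 8                 # truncation order of the formal series in 1/XI

def bracket(A, B):
    # A, B are {index: coefficient} combinations of generators L_n, with
    # [L_k, L_l] = (l - k) * (L_{k+l+1} + XI * L_{k+l})
    out = {}
    for k, a in A.items():
        for l, b in B.items():
            c = a * b * (l - k)
            out[k + l + 1] = out.get(k + l + 1, Fraction(0)) + c
            out[k + l] = out.get(k + l, Fraction(0)) + c * XI
    return out

def curly_L(k):
    # truncated series  cal L_k = sum_{s>=0} (-1)^s XI^{-(s+1)} L_{k+s}
    return {k + s: Fraction((-1) ** s) / XI ** (s + 1) for s in range(S + 1)}

def is_virasoro(k, l):
    # check [cal L_k, cal L_l] = (l - k) cal L_{k+l}, reliable up to order S
    lhs = bracket(curly_L(k), curly_L(l))
    rhs = {n: (l - k) * c for n, c in curly_L(k + l).items()}
    return all(lhs.get(n, Fraction(0)) == rhs.get(n, Fraction(0))
               for n in range(k + l, k + l + S + 1))
```

Comparing only generators up to index $k+l+S$ discards the spurious tail produced by truncating the series; within that window the match is exact in rational arithmetic.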
Performing the replacement and using the relations $\alpha_{\scr\rm KP}=\beta-1$ and $\varphi=-(\alpha+\beta-1)/2$, we obtain $$\begin{aligned} & & {\cal L}_k=-\sum_{m=1}^{\infty}mt_m\frac{\partial}{\partial t_{m+k}} -\sum_{m=1}^{k-1}\frac{\partial}{\partial t_m}\, \frac{\partial}{\partial t_{k-m}} +2N\alpha_{\scr\rm KP}\frac{\partial}{\partial t_k} +2N\frac{\partial}{\partial t_{k+1}} \nonumber \\ & & \hspace*{10.0mm}-\,2\varphi N\sum_{s=1}^{\infty}\frac{1}{(-\xi)^s}\, \frac{\partial}{\partial t_{k+s}} -2N\alpha_{\scr\rm KP}(\delta_{k,0}+\delta_{k,-1}) \frac{\partial}{\partial t_k} -N^2\alpha_{\scr\rm KP}^2\delta_{k,0} \nonumber \\ & & \hspace*{10.0mm}+\,\delta_{k,-1} \biggl(t_1-2N-\frac{2\varphi N}{\xi}\biggr)\biggl(N\alpha_{\scr\rm KP}+ \frac{\partial}{\partial t_{k+1}}\biggr).\end{aligned}$$ After the time changing $$t_n=\tilde t_n -2\varphi\frac{N}{(-\xi)^n}+2N\delta_{n,1} -\frac{N}{2}\delta_{n,2}\,, \label{time2}$$ where $$\tilde t_n=\frac{1}{n}\sum_i\frac{1}{\eta_i^n}$$ are the times of the KP model, we obtain $$\begin{aligned} & & \hspace*{-10.0mm}{\cal L}_k=-\sum_{m=1}^{\infty}m\tilde t_m \frac{\partial}{\partial\tilde t_{m+k}} -\sum_{m=1}^{k-1}\frac{\partial}{\partial\tilde t_m}\, \frac{\partial}{\partial\tilde t_{k-m}} +2N\alpha_{\scr\rm KP}\frac{\partial}{\partial\tilde t_k} +N\frac{\partial}{\partial\tilde t_{k+2}} \nonumber \\ & & -\,2N\alpha_{\scr\rm KP}(\delta_{k,0}+\delta_{k,-1}) \frac{\partial}{\partial\tilde t_k} -N^2\alpha_{\scr\rm KP}^2\delta_{k,0} +\tilde t_1\delta_{k,-1}\frac{\partial}{\partial\tilde t_{k+1}} +N\alpha_{\scr\rm KP}\tilde t_1\delta_{k,-1}\,.\end{aligned}$$ This is exactly the Virasoro algebra that appears in the KP model with the normalizing factor $$\prod_i \eta_i^{\alpha_{\scr\rm KP}N} e^{\frac{N}{2}\eta_i^2}\,. \label{KP_n_f}$$ Indeed, we can perform the same operation for the KP model. 
First, we write the master equation for the normalized partition function $\Bigl(\partial_i\equiv\dis{\frac{\partial}{\partial\eta_i}}\Bigr)$ $$\Bigl[-\,\frac{1}{N^2}\partial_i^2-\frac{1}{N^2}\sum_{j\neq i} \frac{\partial_j-\partial_i}{\eta_j-\eta_i}-\frac{\eta_i}{N}\,\partial_i -\frac{2\alpha_{\scr\rm KP}}{N\eta_i}\,\partial_i+ \frac{\alpha_{\scr\rm KP}}{N\eta_i}\sum_j\frac{1}{\eta_j} -\frac{\alpha_{\scr\rm KP}^2}{\eta_i^2}\Bigr]{\cal Z}_{\scr\rm KP}=0.$$ Then, using the KP model times $\tilde t_n$ and collecting all coefficients of the term $1/(\eta_i^k N^2)$, we obtain $${\cal L}_k{\cal Z}_{\scr\rm KP}=0,\qquad k\geqslant -1.$$ Therefore, we have proven the equivalence between the 2-log and KP models. Now, we write the explicit relation between the normalized partition functions of the two models $${\cal Z}_{\scr\rm 2\mbox{-}log}\Bigl[\biggl\{\frac{1}{n}\, {\rm tr}\frac{1}{\lambda^n}\biggr\};\alpha,\beta\Bigr] =C(\alpha,\beta)\xi^{2\varphi(\beta-1)N^2}\e^{N^2(2\beta-1)\xi} {\cal Z}_{\scr\rm KP}\left[\tilde t_n(\xi,\varphi), \alpha_{\scr\rm KP}\right], \label{ex_rel}$$ where $$\begin{aligned} & & {\cal Z}_{\scr\rm 2\mbox{-}log}\Bigl[\biggl\{\frac{1}{n}\, {\rm tr}\frac{1}{\lambda^n}\biggr\};\alpha,\beta\Bigr] =\frac{Z_{\scr\rm 2\mbox{-}log}\left[\lambda;\alpha,\beta\right]} {\prod_i\left\{(\lambda_i-\xi)^{N(\beta-1)}\, \e^{N(\lambda_i-\xi)}\right\}}\,, \nonumber \\[-2.5mm] & & \\[-2.5mm] & & {\cal Z}_{\scr\rm KP}\left[\tilde t_n(\xi,\varphi), \alpha_{\scr\rm KP}\right] =\frac{Z_{\scr\rm KP}\left[\eta(\xi,\varphi), \alpha_{\scr\rm KP}\right]} {\prod_i\left\{(\eta_i)^{N\alpha_{\scr\rm KP}}\, \e^{\frac{N}{2}\eta_i^2}\right\}}\,, \nonumber\end{aligned}$$ $\alpha_{\scr\rm KP}$$=$$\beta-1$ and $C(\alpha,\beta)$ is some constant depending on the parameters $\alpha$ and $\beta$. Note that we use here the unshifted initial field $\lambda$ and explicitly show the dependence on the arbitrary parameter $\xi$ for the following reason.
For the unshifted $\lambda$-field, the Virasoro algebra for the 2-log model does not possess the $L_{-1}$ generator. So the question arises of how the ${\cal L}_{-1}$ generator emerges when passing to the KP model. The reason is that after the time change(\[time2\]), the KP times $\tilde t_n$ become $\xi$-dependent. Differentiating(\[ex\_rel\]) over $\xi$ and using the relation $$\frac{d\tilde t_n}{d\xi}=(n+1)\tilde t_{n+1}-N\delta_{n,1}\,,$$ we obtain one more equation for ${\cal Z}_{\scr\rm KP}$, $${\cal L}_{-1}{\cal Z}_{\scr\rm KP}=0,$$ where ${\cal L}_{-1}$ is just the generator of the KP Virasoro algebra. Let us recall the genus expansion for the KP model [@ACKM], $$\log {\cal Z}_{\scr\rm KP}=\sum_{g=0}^\infty N^{2-2g}F_g\,,$$ where $$F_g=\!\!\!\sum_{\alpha_j>1,\,\beta_j>1}\!\!\! \left\langle\alpha_1\dots\alpha_s; \beta_1\dots\beta_l|\alpha\beta\gamma\right\rangle_g \frac{M_{\alpha_1}\dots M_{\alpha_s}J_{\beta_1}\dots J_{\beta_l}} {M_1^\alpha J_1^\beta d^\gamma}\,,\qquad g>1,$$ and $$F_1=-\,\frac{1}{24}\log(M_1J_1d^4),\qquad g=1.$$ The moments are defined as follows ($k\geqslant 0$) $$\begin{aligned} & & M_k=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{(\eta_i-x_+)^{k+1/2}\, (\eta_i-x_-)^{1/2}}-\delta_{k,1}\,, \nonumber \\[-2.5mm] & & \\[-2.5mm] & & J_k=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{(\eta_i-x_+)^{1/2}\, (\eta_i-x_-)^{k+1/2}}-\delta_{k,1}\,, \nonumber\end{aligned}$$ where $x_{\pm}$ are the endpoints of the cut for the one-cut solution and $d=x_+-x_-$.
In our notation, $$x_{\pm}=-2b\pm\sqrt{4b^2-2c\,}\,.$$ Let us introduce the moments for the 2-log model ($k\geqslant 0$) $$\begin{aligned} & & N_k=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{(\lambda_i-y_+)^{k+1/2}\, (\lambda_i-y_-)^{1/2}}\,, \nonumber \\[-2.5mm] & & \\[-2.5mm] & & K_k=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{(\lambda_i-y_+)^{1/2}\, (\lambda_i-y_-)^{k+1/2}}\,, \nonumber\end{aligned}$$ where $$y_{\pm}=-\,\frac{\tilde b}{2}\pm\sqrt{\frac{\tilde b^2}{4}-\tilde c\,}\,.$$ We are interested in the relation between the moments of the two models for $k$$\geqslant$$0$ (for $k$$=$$0$, the relation is given by constraint equations(\[2l\_c\_eq\]) and(\[KP\_c\_eq\])). After making the eigenvalue shift ($y_{\pm}$$=$$x_{\pm}+\xi$) and performing the time change, we obtain ($k\geqslant 1$) $$\begin{aligned} & & M_k=N_k+2\varphi\frac{(-1)^{k+1}}{y_+^{k+1/2}\,y_-^{1/2}}\,, \nonumber \\[-2.5mm] & & \\[-2.5mm] & & J_k=K_k+2\varphi\frac{(-1)^{k+1}}{y_+^{1/2}\,y_-^{k+1/2}}\,. \nonumber\end{aligned}$$ So, for the 2-log model, we have $$\log {\cal Z}=\sum_{g=0}^\infty N^{2-2g}F_g^{\scr\rm 2\mbox{-}log}\,,$$ where $$\begin{aligned} & & F_g^{\scr\rm 2\mbox{-}log}=\!\!\!\sum_{\alpha_j>1,\,\beta_j>1}\!\!\! \left\langle\alpha_1\dots\alpha_s; \beta_1\dots\beta_l|\alpha\beta\gamma\right\rangle_g \,\prod_{i=1}^{s}\biggl(N_{\alpha_i}+2\varphi \frac{(-1)^{\alpha_i+1}}{y_+^{\alpha_i+1/2}\,y_-^{1/2}}\biggr) \nonumber \\ & & \hspace*{20.0mm}\times\prod_{i=1}^{l}\biggl(K_{\beta_i}+2\varphi \frac{(-1)^{\beta_i+1}}{y_+^{1/2}\,y_-^{\beta_i+1/2}}\biggr) \Bigl\{\biggl(N_1+2\varphi\frac{1}{y_+^{3/2}\,y_-^{1/2}}\biggr)^\alpha \nonumber \\ & & \hspace*{20.0mm}\times\biggl(K_1+2\varphi \frac{1}{y_+^{1/2}\,y_-^{3/2}}\biggr)^\beta (y_+-y_-)^\gamma\Bigr\}^{-1},\qquad g>1, \label{2l_exp_g}\end{aligned}$$ and $$F_1=-\,\frac{1}{24}\log \Bigl\{\biggl(N_1+2\varphi\frac{1}{y_+^{3/2}\,y_-^{1/2}}\biggr) \biggl(K_1+2\varphi\frac{1}{y_+^{1/2}\,y_-^{3/2}}\biggr) (y_+-y_-)^4\Bigr\}.
\label{2l_exp_1}$$ Therefore, expression(\[res\]) for genus zero, taking into account the normalizing factor(\[2l\_n\_f\]), and expressions(\[2l\_exp\_g\]), (\[2l\_exp\_1\]), completely determine the partition function of the model(\[mm\]) at all genera. The exact determinant formulas in our model can be easily found using the Itzykson–Zuber–Mehta technique for the integration over angular variables in multi-matrix models. The partition function can be expressed as follows $$Z=(2\pi)^{\frac{N^2-N}{2}}\!\int\limits_{\theta_1}^{\theta_2}\! \prod_i\left\{dx_i(1-x_i)^{-\alpha N} (1+x_i)^{-\beta N}\e^{-N\lambda_i x_i}\right\} \frac{\triangle(x)}{\triangle(\lambda)}\,,$$ where $\triangle(x)=\prod_{i>j}^{N}(x_i-x_j)$ is the Vandermonde determinant and $\theta_{1,2}$ are some integration limits. We know that in the large $N$ limit, the difference between partition functions calculated in various integration limits is exponentially small and does not affect the $1/N$ perturbative expansion. So, we investigate several cases. (i) For $\theta_1=-1$ and $\theta_2=1$, we use the following integral representation ($a,b>0$) $$\int\limits_{-1}^1\!\!dx\,(1-x)^{a-1}(1+x)^{b-1}\e^{-cx} =2^{a+b-1}\,\e^{-c}\,B(a,b)\,\Phi(a,a+b;2c),$$ where $\Phi(a,c;z)$$\equiv$$_1F_1(a,c;z)$ is the confluent hypergeometric function and $B(a,b)$ is the beta-function. Then, in the domain $\alpha,\beta<1/N$, we obtain $$\begin{aligned} & & Z=(2\pi)^{\frac{N^2-N}{2}}2^{-(\alpha+\beta)N^2+N(N+1)/2} \prod_i \biggl\{B(-\alpha N+1,-\beta N+i)\biggr\} \nonumber \\ & & \hspace*{9.0mm}\times\,\frac{\e^{-N{\rm tr}\,\lambda}}{\triangle(\lambda)} \det_{1\leqslant i,\,j\leqslant N}|| \Phi(-\alpha N+1,-(\alpha+\beta)N+j+1;2N\lambda_i)||\,.
\label{det_1}\end{aligned}$$ (ii) For $\theta_1=1$ and $\theta_2=\infty$, we use the relation ($a,c>0$) $$\int\limits_1^{\infty}\!\!dx\,(1-x)^{a-1}(1+x)^{b-1}\e^{-cx} =(-1)^{a-1}2^{a+b-1}\,\e^{-c}\,\Gamma(a)\,\Psi(a,a+b;2c),$$ where $$\Psi(a,c;z)=\frac{\Gamma(1-c)}{\Gamma(a-c+1)}\,\Phi(a,c;z) +\frac{\Gamma(c-1)}{\Gamma(a)}\,z^{1-c}\,\Phi(a-c+1,2-c;z)$$ is the other confluent hypergeometric function and $\Gamma(a)$ is the gamma-function. Then, in the domain where $\alpha<1/N$, $\beta$ is unrestricted, and $\lambda_i>0$, we have $$\begin{aligned} & & Z=(2\pi)^{\frac{N^2-N}{2}}(-1)^{-\alpha N^2} 2^{-(\alpha+\beta)N^2+N(N+1)/2}\,\Gamma^N(-\alpha N+1) \nonumber \\ & & \hspace*{9.0mm}\times\,\frac{\e^{-N{\rm tr}\,\lambda}}{\triangle(\lambda)} \det_{1\leqslant i,\,j\leqslant N}|| \Psi(-\alpha N+1,-(\alpha+\beta)N+j+1;2N\lambda_i)||\,.\end{aligned}$$ This answer covers a more general domain of the parameters $\alpha$ and $\beta$ than(\[det\_1\]). (iii) If $\alpha=0$, we get the simplest answer by setting $\theta_1=-1$ and $\theta_2=\infty$. In the domain $\beta<1/N$ and $\lambda_i>0$, we obtain $$\Bigl.Z\Bigr|_{\alpha=0}= (2\pi)^{\frac{N^2-N}{2}}\prod_i\left\{ \Gamma(-\beta N+i)\right\} (\det\Lambda)^{(\beta-1)N}\e^{N{\rm tr}\,\Lambda}\,,$$ which is the unshifted normalizing factor up to a constant. Let us calculate the string susceptibility w.r.t. $\gamma$ and $\varphi$ for(\[res\]). By virtue of Eq.(\[stat\]), $\dis{\frac{d}{d\gamma}}\log Z=\dis{\frac{\partial}{\partial\gamma}}\log Z$ and the same holds true for $\varphi$. Remarkably, the expressions obtained are themselves stationary w.r.t. differentiation over $a$ and $b$.
This means that the total second derivatives in $\gamma$ and $\varphi$ coincide with the corresponding partial derivatives, so we have $$\begin{aligned} \label{suscept} & & \chi_1=\frac{1}{N^2}\frac{d^2}{d\gamma^2}\log Z=\log(b^2-4ac)-2\log a\,, \nonumber \\ & & \chi_2=\frac{1}{N^2}\frac{d^2}{d\gamma d\varphi}\log Z= -2\arctanh\frac{2\sqrt{ac}}{b}= -\log\frac{b+2\sqrt{ac}}{b-2\sqrt{ac}}\,, \\ & & \chi_3=\frac{1}{N^2}\frac{d^2}{d\varphi^2}\log Z=\log(b^2-4ac)+6. \nonumber\end{aligned}$$ Recalling the string susceptibility of the KP model in the KP variables $b$ and $c$ [@CM] $$\chi_{\scr\rm KP}=\log(b^2-c)$$ and using the relations(\[var\_2l\_KP\]), we obtain $$\chi_{\scr\rm KP}=\chi_1.$$ Conclusion ========== This paper concludes the series of papers [@CM; @Ch2; @ChZ; @AC] devoted to studying the external field matrix problems with logarithmic potentials. We see that, at least in the $1/N$-expansion in terms of the corresponding moments, all these models can be reduced either to the Kontsevich model or to the Hermitian one-matrix model with an arbitrary potential. Here, the question arises whether this can be derived directly within the $\tau$-function framework [@KMMM]. The related question is which reductions of the Kadomtsev–Petviashvili hierarchy correspond to the NBI and 2-log models. The logarithmic terms can always be attributed to additional degrees of freedom that have been integrated out. Matrix integral (\[mm\]) can be represented as the $O(\alpha,\beta)$-type [@O(n)] matrix integral $$\label{mmOn} Z=\!\!\int\!\!dX\,\prod_{i=1}^{\alpha}d{\ov \Psi}_i d\Psi_i \prod_{j=1}^{\beta}d{\ov \Phi}_j d\Phi_j \e^{-N{\:{\rm tr}\,\left}[{\ov \Psi}_i \Psi_i +{\ov \Phi}_j \Phi_j +X\bigl(\Lambda-\Psi_i {\ov \Psi}_i+\Phi_j {\ov \Phi}_j\bigr)\right]}\,,$$ where the sum over repeated indices is implied and we assume the matrix fields $\Phi$ and $\Psi$ are Grassmann even.
The action in (\[mmOn\]) is of a nonlinear sigma model type with free matrix fields $\Phi$ and $\Psi$ dwelling on the manifold $\Lambda-\Psi_i {\ov \Psi}_i+\Phi_j {\ov \Phi}_j=0$. Acknowledgements ================ The work is supported by the Russian Foundation for Basic Research (Grant No. 96–01–00344). [99]{} E.Brézin and D.Gross, [*Phys. Lett.*]{} [**B97**]{} (1980) 120. M.Kontsevich, [*Funkts. Anal. i Prilozhen.*]{} [**25**]{} (1991), 50 (in Russian); [*Commun. Math. Phys.*]{} [**147**]{} (1992) 1. E.Witten, [*Nucl. Phys.*]{} [**B340**]{} (1990) 281. C.Itzykson and J.-B.Zuber, [*Int. J. Mod. Phys.*]{} [**A7**]{} (1992) 5661. S.Kharchev, A.Marshakov, A.Mironov, and A.Morozov, [*Nucl. Phys.*]{} [**B397**]{} (1993) 339. L.Chekhov and Yu.Makeenko, [*Mod. Phys. Lett.*]{} [**A7**]{} (1992) 1223. J.Ambj[ø]{}rn, L.Chekhov, and Yu.Makeenko, [*Phys. Lett.*]{} [**B282**]{} (1992) 341. J.Ambj[ø]{}rn, L.Chekhov, C.Kristjansen, and Yu.Makeenko, [*Nucl. Phys.*]{} [**B404**]{} (1993) 127. L.Chekhov, [*Geom. Phys.*]{} [**12**]{} (1993) 153. L.Chekhov, [*Acta Appl. Math.*]{} [**48**]{} (1997) 33. A.Fayyazuddin, Y.Makeenko, P.Olesen, D.J.Smith and K.Zarembo, [*Towards a Non-perturbative Formulation of IIB Superstrings by Matrix Models*]{}, hep-th/9703038. L.Chekhov and K.Zarembo, [*Mod. Phys. Lett.*]{}, [**A12**]{} (1997) 2331. A.Mironov, A.Morozov, and G.Semenoff, [*Int. J. Mod. Phys.*]{} [**A10**]{} (1995) 2015. J.Ambj[ø]{}rn and L.Chekhov, [*The NBI matrix model of IIB Superstrings*]{}, hep-th/9805212. Yu.Makeenko, [*Phys. Lett.*]{}, [**B314**]{} (1993) 197. L.Paniak and N.Weiss, [*J. Math. Phys.*]{}, [**36**]{} (1995) 2512;\ Yu.Makeenko, [*Int. J. Mod. Phys.*]{}, [**A10**]{} (1995) 2615. S.Kharchev, A.Marshakov, A.Mironov, A.Morozov, and A.Zabrodin, [*Nucl. Phys.*]{} [**B380**]{} (1992) 181; [*Phys. Lett.*]{} [**B275**]{} (1992) 311. R.Dijkgraaf, G.Moore, and R.Plesser, [*Nucl. Phys.*]{} [**B394**]{} (1993) 356;\ C.Imbimbo and S.Mukhi, [*Nucl. Phys.*]{} [**B449**]{} (1995) 553.
V.A.Kazakov and I.K.Kostov, unpublished, as cited in I.K.Kostov, [*Nucl. Phys. B (Proc. Suppl.)*]{} [**A10**]{} (1989) 295;\ D.J.Gross and M.J.Newman, [*Phys. Lett.*]{} [**B266**]{} (1991) 291;\ Yu.Makeenko and G.W.Semenoff, [*Mod. Phys. Lett.*]{} [**A6**]{} (1991) 3455. K.Zarembo and L.Chekhov, [*Theor. Math. Phys.*]{} [**93**]{} (1992) 1328. R.C.Brower and M.Nauenberg, [*Nucl. Phys.*]{} [**B180\[FS2\]**]{} (1981) 221;\ R.C.Brower, P.Rossi and C.-I.Tan, [*Phys. Rev.*]{} [**D23**]{} (1981) 942. J.Distler and C.Vafa, [*Mod. Phys. Lett.*]{} [**A6**]{} (1991) 259;\ C.-I.Tan, [*Mod. Phys. Lett.*]{} [**A6**]{} (1991) 1373;\ S.Chaudhuri, H.Dykstra and J.Lykken, [*Mod. Phys. Lett.*]{} [**A6**]{} (1991) 1665. Yu.Makeenko and G.Semenoff, [*Mod. Phys. Lett.*]{}, [**A6**]{}, (1991) 3455. I.Kostov, [*Mod. Phys. Lett.*]{} [**A4**]{} (1989) 217;\ I.Kostov and M.Staudacher, [*Nucl. Phys.*]{} [**B384**]{} (1992) 459;\ B.Eynard and J.Zinn-Justin, [*Nucl. Phys.*]{} [**B386**]{} (1992) 55;\ B.Eynard and C.F.Kristjansen, [*Nucl. Phys.*]{} [**B455**]{} (1995) 577.
--- abstract: 'The global COVID-19 pandemic has led to a startling rise in social-media fueled misinformation and conspiracies, leading to dangerous outcomes for our society and health. We quantify the reach and belief in COVID-19 related misinformation, revealing a troubling breadth and depth of misinformation that is highly partisan.' author: - | Sophie Nightingale\ School of Information\ University of California, Berkeley\ `[email protected]` Marc Faddoul\ School of Information\ University of California, Berkeley\ `[email protected]` Hany Farid\ Electrical Engineering and Computer Sciences\ School of Information\ University of California, Berkeley\ `[email protected]` bibliography: - 'main.bib' title: 'Quantifying the Reach and Belief in COVID-19 Misinformation' --- Introduction {#introduction .unnumbered} ============ The COVID-19 global pandemic has been an ideal breeding ground for online misinformation: Social-media traffic has reached an all-time record [@facebook_newsroom] as people are forced to remain at home, often idle, anxious, and hungry for information [@pakpour2020], while at the same time, social-media services are unable to rely on human moderators to enforce their rules [@washington_post]. The resulting spike in COVID-19 related misinformation is of grave concern to health professionals [@avaaz]. The World Health Organization has listed the need for surveys and qualitative research about the infodemic in its top priorities to contain the pandemic [@WHO2020]. A recent survey confirmed that belief in COVID-19 conspiracy theories is associated with smaller compliance with public health directives [@allington2020]. Another recent study found that political affiliation is a strong predictor of knowledge of COVID-19 related information [@nielsen2020]. Building on this earlier work, we launched a large-scale US-based study to examine the belief in $20$ prevalent COVID-19 related false statements, and $20$ corresponding true statements. 
We evaluate the reach and belief in these statements and correlate the results with political leaning and primary source of media consumption. Methods {#methods .unnumbered} ======= A total of $611$ U.S.-based Mechanical Turk workers were recruited.[^1] Participants were instructed that they would participate in a study to evaluate the reach and belief in COVID-19 related misinformation. They were asked to read, one at a time, $40$ statements (Table \[tab:statements\]), half of which are true and half are not, and specify: (1) if they had seen/heard the statement before; (2) if they believed the statement to be true; and (3) if they know someone that believes or is likely to believe the statement. The $40$ statements were sourced from reputable fact-checking websites (e.g., *snopes.com/fact-check* and *reuters.com/fact-check*). To ensure a balanced design, each false statement was matched with a similarly themed true statement. The $40$ statements plus three attention-check questions (Table \[tab:statements\]) were presented in a random order. At the end of the survey, participants were asked how they consume news, their political leaning, and basic demographics: education level, age, gender, and race. All responses were collected between April 11, 2020 and April 21, 2020, amidst the global COVID-19 crisis. The three, obviously false, attention-check questions were used to ensure that participants were paying attention to the survey. A participant’s data was discarded if they failed to correctly answer any of these attention-check questions: $111$ of the $611$ responses were discarded, yielding a total of $500$ usable responses. Participants were paid \$2.00 for their participation in the study. At the end of the study, participants were again informed that half of the statements they read were not true, asked to confirm that they understood this, and were directed to several websites with accurate health information.
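The attention-check filtering step described above is straightforward; a minimal sketch with a hypothetical response table (the column names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical responses: one row per participant; acK is True when the
# participant answered attention-check statement K correctly.
df = pd.DataFrame({
    "ac1": [True, True, False, True],
    "ac2": [True, True, True, True],
    "ac3": [True, False, True, True],
})

# Keep only participants who passed all three attention checks.
usable = df[df[["ac1", "ac2", "ac3"]].all(axis=1)]
```

Rows failing any check are dropped, mirroring the criterion that reduced $611$ responses to $500$ usable ones.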
(Figure \[fig:results\]: The distribution of the (a) number of true (blue) and false (red) statements (Table \[tab:statements\]) that reached participants ($N=500$), (b) are believed by participants, and (c) that others believe. Panels drawn from figures/seen2.pdf, figures/believe2.pdf, and figures/others2.pdf.) Results {#results .unnumbered} ======= On average, $55.7\%$/$29.8\%$ of true/false statements reached participants, of which $57.8\%$/$10.9\%$ are believed (Table \[tab:statements\]). When participants are asked if they know someone that believes or is likely to believe a statement, $71.4\%$/$42.7\%$ of the true/false statements are believed by others known to the participant.
The median number of true/false statements that reached a participant is $11$/$6$ (Figure \[fig:results\](a)); the median number of true/false statements believed by a participant is $12$/$2$ (Figure \[fig:results\](b)); the median number of true/false statements believed by others known to the participant is $15$/$8$ (Figure \[fig:results\](c)); and $31\%$ of participants claimed to believe at least one false conspiracy (cf. [@freeman2020]). It is generally encouraging that true statements have a wider reach and wider belief than false statements. The reach and belief in false statements, however, are still troubling, particularly given the potentially deadly consequences that might arise from misinformation. Even more troubling is the partisan divide that emerges upon closer examination of the data. We fit six negative binomial regression models, one for each of six outcome variables corresponding to the reach, belief, and belief by others of the true/false statements. The predictor variables included participant demographics: gender, age, education, political leaning, and main source of news. We briefly review the largest demographic effects. Political leaning and main news source had an effect on the number of false statements that are believed. The number of false statements believed by those on the right of the political spectrum[^2] is $2.15$ times greater than for those on the left ($95\%$ CI \[$1.84$, $2.53$\]). Although the effect is smaller, those on the right were also less likely to believe true statements than those on the left ($0.89$, $95\%$ CI \[$0.83$, $0.95$\]). The number of false statements believed by those with social media as their main source of news is $1.41$ times greater than for those who cited another main news source ($95\%$ CI \[$1.19$, $1.66$\]). We next performed a binary logistic regression to evaluate how political leaning and main news source influenced belief in each false statement (Table \[tab:statements\]).
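The effect sizes quoted in this section are exponentiated regression coefficients: incidence rate ratios for the negative binomial count models and odds ratios for the logistic models. A minimal sketch of that conversion, with a Wald-style interval (the standard error used is illustrative, not taken from the study):

```python
import math

def effect_ratio(beta):
    """Rate ratio (log-link count model) or odds ratio (logit model)
    implied by a regression coefficient beta for a binary predictor."""
    return math.exp(beta)

def effect_ratio_ci(beta, se, z=1.96):
    """Wald-style 95% confidence interval on the exponentiated scale."""
    return math.exp(beta - z * se), math.exp(beta + z * se)

# A reported ratio of 2.15 corresponds to a log-scale coefficient of
# log(2.15) ~= 0.765; the standard error of 0.08 is a placeholder
# chosen only to illustrate the interval computation.
beta = math.log(2.15)
lo, hi = effect_ratio_ci(beta, se=0.08)
```

Exponentiating the interval endpoints, rather than the standard error itself, is what makes the reported confidence intervals asymmetric around the ratio.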
Political leaning influenced the likelihood of believing $12/20$ false statements, and main news source influenced the likelihood of believing $7/20$ false statements. For $11/12$ false statements where there was an effect of political leaning, those on the right were more likely to believe the false information. For $6/7$ false statements where there was an effect of main news source, those with social media as a main source were more likely to believe the false information. The five largest effects were based on political leaning, where, as compared to those on the left, those on the right are:

- $15.44$ times more likely to believe that “asymptomatic carriers of COVID-19 who die of other medical problems are added to the coronavirus death toll to get the numbers up to justify this pandemic response”, $95\%$ CI $[8.22, 29.0]$.

- $14.70$ times more likely to believe that “House Democrats included \$$25$ million to boost their own salaries in their proposal for the COVID-19 related stimulus package”, $95\%$ CI $[7.69, 28.11]$.

- $9.44$ times more likely to believe that “COVID-19 was man-made in a lab and is not thought to be a natural virus”, $95\%$ CI $[4.99, 17.87]$.

- $7.41$ times more likely to believe that “Silver solution kills COVID-19”, $95\%$ CI $[1.83, 30.02]$.

- $6.97$ times more likely to believe that “COVID stands for *Chinese Originated Viral Infectious Disease*”, $95\%$ CI $[3.05, 15.92]$.

The one false statement that those on the left were more likely to believe than those on the right was that “Sales of Corona beer dropped sharply in early 2020 because consumers mistakenly associated the brand name with the new coronavirus.” The effect of main news source on the likelihood of believing false statements is smaller than that of political leaning.
Those with social media as their primary source of news are $6.45$ times more likely to believe that “Silver solution kills COVID-19” ($95\%$ CI $[1.59, 26.14]$), and $5.71$ times more likely to believe that “Drinking sodium bicarbonate and lemon juice reduces the acidity of the body and the risk of getting infected with COVID-19” ($95\%$ CI $[1.70, 19.15]$).

Discussion {#discussion .unnumbered}
==========

There is a troublingly wide reach and belief in COVID-19 related misinformation that is highly partisan and is more prevalent among those who consume news primarily on social media. As with previous work [@nielsen2020], our study was conducted online, so an average belief in false information of $4.8\%$ may not be representative of the general public [@cooper2016Sun]. To address this limitation, participants were also asked about the beliefs of those familiar to them. This revealed what is likely an upper bound of $40\%$ on the belief of misinformation in the general public. The real-world impact of such beliefs has already been demonstrated with devastating consequences. For example, false claims on social media that drinking high-proof alcohol will kill the virus have been linked to the deaths of over $300$ people in Iran [@Karimi20]. It remains unclear to what extent COVID-19 misinformation is the result of coordinated attacks or has arisen organically through misunderstanding and fear. It also remains unclear whether the spread and belief in this misinformation is on the rise or in decline, and how it has affected other parts of the world. We are actively pursuing answers to each of these questions. Falsehoods spread faster than the truth [@vosoughi2018], and falsehoods are often resistant to correction [@nyhan2010; @lewandowsky2012]. Media, and social media in particular, must do a better job at preventing these falsehoods from reaching their platforms, and at preventing their spread.
Acknowledgement {#acknowledgement .unnumbered} =============== This work was supported by funding from Facebook, the Defense Advanced Research Projects Agency (DARPA FA8750-16-C-0166), and a Seed Fund Award from CITRIS and the Banatao Institute at the University of California. [^1]: This research was approved by UC Berkeley’s Office for Protection of Human Subjects, Protocol ID: 2019-08-12441. Participants gave informed consent prior to taking part. [^2]: From $500$ responses, $287/115$ reported as politically left/right of center, and $98$ as center. For the evaluation of the impact of political leaning, only those reporting left/right of center were considered.
--- abstract: | We performed deep photometry of the central region of Galactic globular cluster M15 from archival Hubble Space Telescope data taken on the High Resolution Channel and Solar Blind Channel of the Advanced Camera for Surveys. Our data set consists of images in far-UV (FUV$_{140}$; F140LP), near-UV (NUV$_{220}$; F220W), and blue (B$_{435}$; F435W) filters. The addition of an optical filter complements previous UV work on M15 by providing an additional constraint on the UV-bright stellar populations. Using color-magnitude diagrams (CMDs) we identified several populations that arise from non-canonical evolution including candidate blue stragglers, extreme horizontal branch stars, blue hook stars (BHks), cataclysmic variables (CVs), and helium-core white dwarfs (He WDs). Due to preliminary identification of several He WD and BHk candidates, we add M15 as a cluster containing a He WD sequence and suggest it be included among clusters with a BHk population. We also investigated a subset of CV candidates that appear in the gap between the main sequence (MS) and WDs in FUV$_{140}-$NUV$_{220}$ but lie securely on the MS in NUV$_{220}-$B$_{435}$. These stars may represent a magnetic CV or detached WD-MS binary population. Additionally, we analyze our candidate He WDs using model cooling sequences to estimate their masses and ages and investigate the plausibility of thin vs. thick hydrogen envelopes. Finally, we identify a class of UV-bright stars that lie between the horizontal branch and WD cooling sequences, a location not usually populated on cluster CMDs. We conclude these stars may be young, low-mass He WDs. author: - | Nathalie C. Haurberg, Gabriel M. G. Lubell, Haldan N. Cohn,\ and Phyllis M. Lugger - Jay Anderson - 'Adrienne M. Cool' - 'Aldo M. 
Serenelli' title: 'UV-Bright Stellar Populations and Their Evolutionary Implications in the Collapsed-Core Cluster M15' --- INTRODUCTION {#sec:intro} ============ The Galactic globular cluster (GC) M15 is the archetypal core-collapsed GC with an extremely small and dense core [as measured most recently by @vdb06] and thus has been an object of interest for many observers. Though convincing models for core collapse have existed for some time [e.g., @cohn80], many questions remain and recent attention has turned to how core collapse might affect the stellar populations in the central regions of GCs. These high density central regions [M15: $\rho_{0} \approx 7 \times 10^{6}$ M$_{\odot}$ pc$^{-3}$; @vdb06] provide an environment that can be ideal for the production of many types of exotic binaries. In such a setting, the encounter rate is sufficiently high that one expects to find an increased number of close binary systems due to dynamical interactions between stars that can harden primordial binaries and form new binary systems [@pool03]. Because of the large amount of binding energy (sometimes comparable to the binding energy of the cluster) contained in close binary systems, interactions of these systems can have a significant effect on the dynamical evolution of the cluster [@hut92] and thus are of particular importance in understanding effects of core collapse. Close binary systems, especially those containing a degenerate remnant, are expected to be overabundant in GC cores due to both the preferential formation of close binaries in high density environments as well as the effects of mass segregation. 
Many such systems are X-ray sources and therefore should be identified in X-ray surveys; M15 contains six X-ray sources with optical counterparts: two low-mass X-ray binaries (LMXBs) - AC211 [@aur84] and M15 X-2 [@whiang01; @die05], a dwarf nova cataclysmic variable (CV) - M15 CV1 [@sha04], two suspected dwarf nova CVs whose optical counterparts are unclear [@han05], and a quiescent LMXB - M15 X-3 [@hei09]. In this work we probe these close binary populations through the use of UV photometry and color-magnitude diagrams (CMDs). Studies of M15’s CMD have previously been used to examine many features of the cluster’s stellar population. The blue straggler (BS) population was examined by @yan94, the horizontal branch (HB) and giant branches were analyzed by @cholee07, and UV studies by @marpar94 [@marpar96] probed the extremely blue objects, including white dwarfs (WDs). Most recently, @die07 performed a UV analysis of M15, focusing on the variable stars in M15’s core as well as performing a basic breakdown of the UV-bright stellar populations. This study revealed a substantial array of “gap” stars that appear in the region of the CMD surrounded by the HB, main sequence (MS), and WD cooling sequence. This gap zone had previously been dubbed the “CV zone” by @kni02 due to the likelihood of CVs appearing in this part of the CMD. Optical CMDs of the central region of M15, though relatively deep, emphasize populations above the MS turn-off (MSTO). With perhaps a few exceptions, it is unlikely that CVs would be visible so far up on an optical CMD [e.g., @yan94; @vdm02]. Observations in the UV, however, result in a distinct CMD morphology that highlights populations often not included on optical CMDs. In such diagrams, the gap zone is emphasized, which can result in a larger list of potential CVs than optical filter combinations would yield. UV CMDs are also useful in identifying the extremely blue features of the HB and WD sequences in clusters.
M15 is a low-metallicity cluster [\[Fe/H\] $= -2.26$; @har96], which results in a correspondingly blue HB. In addition, M15 is known to possess an extended blue HB tail [@moe95] and, as we discuss in this paper, possibly harbors a small population of extreme-HB (EHB) stars, which are hotter than the hottest HB stars predicted by canonical stellar evolution theories. Furthermore, UV CMDs have the unique ability to photometrically distinguish a set of He-rich EHB stars known as “blue hook (BHk) stars”, which are characterized as apparent EHB stars that are unusually faint in the UV [e.g., @whi98; @dcr00; @bro01]. Another intriguing close-binary population expected in the central regions of GCs is that of helium-core white dwarfs (He WDs). He WDs are believed to arise in cases where mass loss on the red giant branch (RGB) is severe enough that the remaining mass in the H shell is too small to ignite core He-burning, resulting in an exposed He core. They are predicted to lie in sequences slightly brighter and redder than the canonical CO WD cooling sequences due to their lower mass and therefore larger radii [e.g., @benalt98]. The only cluster in which there is a confirmed sequence of He WDs is NGC 6397 [@str09 and references therein], though it has also been suggested that a significant fraction of the WDs in $\omega$ Cen are He WDs [@cal08]. A small number of He WDs are also known in each of several other nearby clusters as counterparts to ultracompact X-ray binaries [@and93; @die05] and millisecond pulsars [@edm01; @fer03; @sig03]; in addition, one was found in M4 by @oto06 as a companion to a subdwarf B star and one was identified by @kni08 in 47 Tuc using UV spectroscopy. We performed PSF-fitting photometry on optical, near-, and far-UV images of M15’s central region obtained by the Hubble Space Telescope (HST) using the Advanced Camera for Surveys (ACS).
This data set is a unique one: confined exclusively to neither the UV nor the optical regime, it allows us to analyze M15 using colors that probe deeply into the optically faint yet UV-bright populations initially uncovered in M15 by @die07. By including an optical filter, we further investigate the photometric properties of these populations with an additional color and the more well-studied optical CMD morphology. In §\[sec:obs\] we describe the observations and photometry. In §\[sec:cmd\] we provide a breakdown of the various UV-bright populations. Section \[sec:analysis\] contains our analysis, with §\[sec:wdcool\] describing our WD cooling models and our analysis of the WD population, §\[sec:raddist\] describing our analysis of the radial distribution of populations believed to be of binary origin, and §\[sec:discbhk\]-\[sec:discbrightgap\] focusing on the potential EHB, BHk, He WD, and CV populations, respectively. Finally, we discuss the implications of our results in §\[sec:discussion\] and summarize our results in §\[sec:conclusions\].

OBSERVATIONS & PHOTOMETRY {#sec:obs}
=========================

Our data consist of HST archival images taken with ACS over several epochs in three filters. The filters used were F435W, F220W, and F140LP; the first of these is similar to Johnson B, while the F220W and F140LP bands are in the near- and far-UV, respectively. The F435W data consisted of 13 separate frames, from GO-10401, taken on the High Resolution Channel (HRC), all having exposure lengths of 125 s. The F220W and F140LP data, from GO-9792, were taken with the HRC and Solar Blind Channel (SBC), respectively. The F220W data were taken in eight exposures of 290 s for a total exposure time of 2320 s. The F140LP data were taken in a manner designed to optimize detection of variability and consist of 90 individual exposures that result in a total exposure time of 24,800 s. These two data sets are described in detail by @die07.
Henceforth, the three filters will be referred to as B$_{435}$, NUV$_{220}$, and FUV$_{140}$. Despite the high resolution of the raw data, the individual frames were insufficient for resolving most stars in and around the cluster core. Thus, to increase the resolution, the image data were combined using a computer program developed by one of us (J. A.). The program is similar to drizzle with the pixfrac parameter set to 0, except that the transformations from each exposure to the master frame are based purely on the locations of the bright stars; no image header information contributed. The resolution of the resulting images was approximately doubled, and the images were sampled with a pixel size of 0.0125$\arcsec \times$ 0.0125$\arcsec$ for all three filters. The central 10$\arcsec$ $\times$ 10$\arcsec$ of the stacked images are shown in Figure \[fig:master\_images\]; the complete field of view is 29$\arcsec$ $\times$ 26$\arcsec$ for the HRC and 39$\arcsec \times$ 31$\arcsec$ for the SBC. We performed photometry using the software package DAOPHOT II [@ste90]. The detection threshold was determined by trial and error for each image individually in order to ensure that the majority of the sources were detected while still limiting the number of spurious detections. The IRAF task `xyxymatch` was used to match individual stars across the filters. A tolerance of 2 pixels was used to match between the B$_{435}$ and NUV$_{220}$ frames, while a 3-pixel tolerance was used between NUV$_{220}$ and FUV$_{140}$ frames. The difference in tolerances proved necessary on account of differing geometric distortions in fields from the HRC and SBC. Suspected edge detections, spurious detections, and diffraction artifacts that passed through this routine were inspected by eye and rejected when necessary. Between 130 and 150 individual stars were selected from each image as models for a point-spread function (PSF).
Stars were chosen based on their visual appearance and the relative crowding of their neighborhoods. The PSF was determined in an iterative fashion with a careful effort to distinguish between PSF artifacts and close neighbors. The final quadratically-varying PSF was used with the ALLSTAR PSF-fitting photometry routine in order to determine the instrumental magnitudes. Magnitudes were calibrated and transformed into the STMAG system in a similar manner to that outlined in detail in the HST ACS Data Handbook and by @sir05. Since ALLSTAR performs PSF fitting photometry, we first had to calculate an initial offset between the magnitudes from a small aperture, chosen to be 0.05$\arcsec$, and the magnitudes produced by ALLSTAR. This calibration was performed with the same set of stars used to determine the PSF due to the relative brightness and isolated nature of these stars. The offset was quite uniform across the field, with a standard deviation for the calibration stars not exceeding 0.016 magnitudes. Since the field is extremely crowded (see Fig. \[fig:master\_images\]) it was impossible to obtain a magnitude offset for the suggested 0.5$\arcsec$ aperture. Thus, in order to make the final calibrations for our magnitudes, we used the encircled energy fractions tabulated in @sir05 and applied the STMAG zero point following the process outlined in the HST ACS Data Handbook. The calibrated data set was directly compared to the data set used in @die07 (hereafter D07). Of the 1913 stars with both NUV$_{220}$ and FUV$_{140}$ detections included in the data set used by D07 (A. Dieball, private communication 2007) we recovered a unique match for 1813. In addition, we detected NUV$_{220}$ sources for 37 FUV$_{140}$ detections that had no matched NUV$_{220}$ source in the D07 data set. 
That we did not recover 100 of the D07 stars is not surprising: many of these stars are very dim, and some lie in regions of the image that were not included in our master images (such as the occulting finger in the F140LP frame) because we relied solely on the combined frames for source detection. In addition, some of the D07 FUV$_{140}$ sources appeared as two separate sources in our catalog, indicating that we possibly resolved some stars that appeared blended in their master images. Our final data set contained a total of 10,728 individual stars detected in at least two filters, necessarily including F220W. This consisted of 3052 individual stars detected in both FUV$_{140}$ and NUV$_{220}$, 2943 of which were also detected in B$_{435}$. Of the 1813 stars matched to sources from the D07 data set, 1777 were detected in B$_{435}$ as well. The discrepancy between our total number of FUV$_{140}$ sources, which totaled 3961, and that of D07, which totaled only 2137, seems to be due to our use of a lower detection threshold, which resulted in the inclusion of roughly 2000 faint sources that were not included in the D07 data set. We expect that most of the false detections in this enlarged number of FUV$_{140}$ sources were filtered out by object matching across filters and our visual inspections of individual objects. While most of our FUV$_{140}$ and NUV$_{220}$ photometry is consistent with the corresponding results presented in D07, we note that for many of the variable stars our FUV$_{140}$ magnitudes are systematically dimmer than those of D07. This is likely a consequence of having performed photometry on combined frames that had gone through a rejection routine designed to remove bad pixels, warm pixels, and other artifacts. This routine rejected any pixel inconsistent at the 3 sigma level and therefore would affect any star with variability that exceeded this threshold.
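The calibration chain described in this section (ALLSTAR instrumental magnitude, then the offset to a 0.05$\arcsec$ aperture, then the encircled-energy correction, then the STMAG zero point) can be sketched as below. All numeric values are placeholders chosen for illustration; they are not the actual offsets, encircled-energy fractions, or zero points for these filters:

```python
import math

def to_stmag(m_allstar, ap_offset, ee_fraction, zeropoint):
    """Chain a PSF-fitting instrumental magnitude to STMAG:
    ALLSTAR -> small-aperture magnitude -> total magnitude (via the
    tabulated encircled-energy fraction) -> STMAG (via the zero point)."""
    m_aperture = m_allstar + ap_offset
    # ee_fraction < 1, so this term is negative and brightens the magnitude,
    # accounting for the flux falling outside the small aperture.
    m_total = m_aperture + 2.5 * math.log10(ee_fraction)
    return m_total + zeropoint

# Placeholder numbers: instrumental 10.0 mag, a -0.30 mag ALLSTAR-to-
# aperture offset, 80% encircled energy at the small aperture, and a
# zero point of 25.0.
m_st = to_stmag(10.0, -0.30, 0.80, 25.0)
```

The sign convention is worth noting: because only a fraction of the PSF's flux lands in the 0.05$\arcsec$ aperture, the encircled-energy term always makes the final magnitude brighter than the raw aperture value.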
THE CMDs {#sec:cmd}
========

Photometric results are illustrated in the CMDs presented in Figures 2-4. In each figure we have identified various stellar populations based on their position in a given CMD (i.e. FUV$_{140}$-NUV$_{220}$ vs. FUV$_{140}$) and then plotted those same stars on a different CMD (i.e. NUV$_{220}$-B$_{435}$ vs. B$_{435}$) using a scheme that allows the stars, or groups of stars, to be tracked between the diagrams. This technique allows us to better evaluate the photometric nature of a star or population by examining its position on multiple CMDs and additionally is useful for understanding the unfamiliar morphology of the UV CMDs. We have included multiple versions of the same CMD, with different identification schemes for the stellar populations, in order to clearly illustrate the photometric properties of each population we analyzed. Individual components of the various diagrams are discussed in the following sections. Unless otherwise stated, the differentiation between stellar populations was based on the overall appearance of the CMD and the location of the stars on the CMD. We caution that this non-quantitative method will result in a small amount of confusion between the individual groups, but should not have a significant effect on our results. All magnitudes are given in the STMAG system.

Photometric results for M15 X-2, CV1, AC211 and H05X-B {#sec:known}
------------------------------------------------------

We have obtained photometric measurements for 4 X-ray sources previously identified with close binary systems (labeled and plotted as squares in Fig. 2-4).
These systems include the confirmed dwarf nova (DN) M15 CV1 (hereafter CV1), identified by @sha04, as well as optical counterparts to the two LMXBs: AC211, the optical counterpart to 4U 2127+119 as identified by @aur84, and M15 X-2 (hereafter X-2), identified as the optical counterpart of an ultracompact X-ray binary corresponding to X-ray source CXO J212958.1+121002 by @whiang01 and @die05. We have also provided photometry in NUV$_{220}$ and B$_{435}$ for a faint X-ray source identified in @han05 as a possible DN or a quiescent soft-X-ray transient (qSXT; their “source B”). We refer to this source hereafter as H05X-B and discuss it in detail in §\[sec:H05XB\]. H05X-B was too dim to be detected in FUV$_{140}$ (see Figure \[fig:H05X-B\]); thus it is not included on CMDs that include this filter. Photometric data for these four objects are listed in Table \[tab:known\]. Our NUV$_{220}$ and FUV$_{140}$ magnitudes for X-2, AC211, and H05X-B are consistent with those of D07, within the errors. We have also provided photometry in B, thus adding to the photometric information for these sources. CV1 was not detected in the D07 FUV$_{140}$ master image, as they found this star too faint and too near a bright neighbor to be detected. However, they did detect the star during an outburst phase in one of the observing epochs, thereby confirming it as a DN. In our FUV$_{140}$ master image, the source corresponding to CV1 is difficult to resolve by eye, but the FIND task in DAOPHOT II was able to identify it as a discrete source (see Figure \[fig:CV1\]). However, when run through the PSF-fitting photometry of ALLSTAR, the proximity to the bright neighbor proved an issue and caused the star to be subtracted as part of the PSF of the neighboring star; thus we used a FUV$_{140}$ magnitude derived from aperture photometry.
We report the NUV$_{220}$ and B$_{435}$ photometry of CV1 with confidence equal to that for stars of similar magnitudes, but note that since we were not able to use PSF-fitting for our FUV$_{140}$ photometry of CV1, the detection may be more questionable and our results less reliable. The NUV$_{220}$ magnitude for CV1 reported by D07 is significantly brighter than our reported magnitude. As noted in the erratum to that paper, this is likely related to the fact that we performed PSF-fitting photometry while D07 used aperture photometry [@dieerat10]. With aperture photometry, a significant amount of flux from the bright neighbor star would likely end up in the aperture for the CV1 counterpart, and thus cause the calculated magnitude to be too bright. When we performed aperture photometry with an aperture of comparable size, it resulted in a magnitude consistent with that in D07.

The Main Sequence and Red Giant Branch {#sec:MSandRGB}
--------------------------------------

The MS extends approximately 3.5 magnitudes below the turn-off in B$_{435}$ (see small orange dots in Figure \[fig:b\_cmds\]a). The MS has a width of about 0.5 magnitudes just below the turn-off and widens somewhat at the faint end. There is scatter to either side of the MS, some of which is likely due to photometric error but may also indicate potential binary systems. The detection limit at B$_{435}$ $\simeq$ 22.5 and NUV$_{220}-$B$_{435}$ $\simeq$ 2 is due to the depth of the NUV$_{220}$ data. Since MS stars are relatively red, there are many fewer FUV$_{140}$ detections of MS stars, and the MS in Figure \[fig:f\_cmds\]a extends only about one magnitude below the turn-off, with a significant amount of scatter due to its proximity to the detection limit. The RGB (large red dots) is very well defined in Figure \[fig:b\_cmds\]a, forming a tight, narrow sequence stretching over 4.5 magnitudes in B$_{435}$.
The FUV$_{140}$ filter strongly suppresses the presence of the red giant population, and the fact that a large number of RGB stars are detectable in the FUV$_{140}$ frame at all is a bit of a mystery. It is most likely due to the “red leak” phenomenon associated with the UV filters and discussed by @chi07 and @bof08. Red leak can greatly affect photometric measurements of red stars, but since the focus of this paper is UV-bright stars, it should not affect our results. Because it is very difficult to distinguish MSTO stars from RGB stars in the FUV$_{140} -$ NUV$_{220}$ vs. FUV$_{140}$ CMD, we make the distinction only using the NUV$_{220} -$ B$_{435}$ vs. B$_{435}$ diagram.

The Asymptotic Giant Branch {#sec:AGB}
---------------------------

The asymptotic giant branch (AGB; small teal dots) is most easily identified in Figure \[fig:b\_cmds\]a as the slightly curved sequence turning away from the tip of the RGB and spanning about 2.5 magnitudes in color. The transition between the AGB and the HB is somewhat unclear; for the purposes of this paper we have defined the edge of the AGB just redward of the clump of variable stars belonging to the RR Lyrae instability strip. Due to this loose distinction between the HB and AGB, there may be some confusion between the blue edge of the AGB and the reddest HB stars; however, this is not of particular concern, as these stars are not the focus of this paper. Like the RGB, the AGB is not as well defined in Figure \[fig:f\_cmds\]a because very little flux from these stars is emitted in the far-UV bandpasses. However, the AGB can be identified as the sequence of stars connecting the tip of the RGB to the HB. In Figures \[fig:b\_cmds\]b and \[fig:f\_cmds\]a there is a clear bend toward the blue at the point where the AGB transitions to the HB (FUV$_{140}$ $\approx$ 22.5).
The Horizontal Branch {#sec:HB}
---------------------

### Normal Horizontal Branch Stars {#sec:normHB}

The main portion of the HB (plotted as large light blue dots on Fig. 2-4) consists of a small set of red horizontal branch (RHB) stars and a much larger set of blue horizontal branch (BHB) stars, separated by variable stars in the RR Lyrae instability strip. A significant RHB population is not necessarily expected, since metal-poor stars produce BHB stars; however, the presence of the RHB population in M15 is well documented [e.g., @buo85; @pre06]. The BHB begins at the blue edge of the RR Lyrae strip, extending toward the WD sequence. For comparison we have plotted on Figures \[fig:b\_cmds\] - \[fig:n\_cmds\] a theoretical zero-age horizontal branch (ZAHB; solid dark red and dark blue lines), generously provided by Santi Cassisi (private communication, 2009 & 2010) and described in @pie04. The models were generated assuming a metallicity of Z$=$0.0001 (corresponding to \[Fe/H\] $\approx -2.3$) and were adjusted for extinction following the method outlined in @car89. We assumed a color excess of E(B$-$V)$=$0.1 [@har96], which resulted in corrections A$_{\lambda} =$ 0.81$\pm$0.11, 0.94$\pm$0.12, and 0.42$\pm$0.08 for FUV$_{140}$, NUV$_{220}$, and B$_{435}$, respectively. For the dark blue curve, a distance of 10.3$\pm$0.4 kpc [@vdb06] was assumed, resulting in a distance modulus of 15.06. This is in general the most accepted distance measurement for M15; however, it is clear in the CMDs including the FUV$_{140}-$NUV$_{220}$ color that using this distance modulus does not produce a good fit for the red side of the HB, as the ZAHB appears significantly brighter than the observed HB stars. If the distance modulus is adjusted to 15.25, consistent with the distance modulus determined by @kra03 (solid dark red curve), a much better fit is achieved. However, with the larger distance modulus the ZAHB is significantly dimmer than expected on the blue side of the HB.
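The ZAHB placement just described combines a distance modulus with the per-band extinction correction; the arithmetic can be sketched as below (the quoted distance and $A_{\lambda}$ for B$_{435}$ come from the text, while the sample absolute magnitude is an arbitrary illustration):

```python
import math

def distance_modulus(d_pc):
    """mu = 5 log10(d / 10 pc)."""
    return 5.0 * math.log10(d_pc / 10.0)

def apparent_mag(M_abs, d_pc, A_lambda):
    """Shift an absolute model magnitude to the apparent frame by
    adding the distance modulus and the band's extinction."""
    return M_abs + distance_modulus(d_pc) + A_lambda

mu_blue = distance_modulus(10.3e3)  # 10.3 kpc -> ~15.06 (dark blue curve)

# A ZAHB point with an (illustrative) absolute B435 magnitude of +1.0
# and the quoted A_B435 = 0.42 would then appear at:
m_b435 = apparent_mag(1.0, 10.3e3, 0.42)
```

The 15.06 vs. 15.25 choice discussed above amounts to a uniform ~0.19 mag vertical shift of the model curves on each CMD.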
So, neither distance modulus produces an ideal fit for the entire HB, and we ascribe this to the limitations of these models in fitting our dataset. Yet, since the fit is reasonable in both cases, we believe the models to be sufficient for the purposes of this paper. The dotted lines associated with each curve represent the 1$\sigma$ error bars, including both the error in distance and reddening. For the dark red curve, @kra03 do not cite an error, so we assumed a reasonable error of 0.10 magnitudes in the distance modulus.

### Extreme Horizontal Branch Stars {#sec:EHB}

As previously noted by many authors [e.g., @dor93; @fer98; @dal08], the hottest BHB stars owe their extremely blue colors to thin H envelopes, which result in higher effective temperatures. The thin envelope is believed to be due to significant mass loss on the RGB, likely to a close binary companion or possibly through stellar winds. Because of the thin envelope, these stars do not undergo the usual evolutionary progression to the tip of the AGB. After leaving the HB, they have insufficient mass in their hydrogen shell to enable the onset of the thermal pulsation phase; instead they either move off the AGB early, becoming post-early AGB (P-EAGB) stars, or never ascend the AGB at all, becoming so-called AGB-manqué stars. This group, composed of the hottest BHB stars, is known as the extreme horizontal branch (EHB) and is often defined as the set of BHB stars with effective temperature greater than 20,000 K [e.g., @bro01; @moe04; @die09], but is more difficult to define observationally. Spectroscopically, EHB stars correspond to subdwarf B (sdB) and subdwarf OB (sdOB) field stars [@heb87], but photometric definitions differ between authors. For clarity, we use the previously mentioned ZAHB models and choose stars with T$\rm_{eff}$ $\geq$ 20,000 K as EHB stars; using this method we identify 6 stars as EHB candidates (maroon pinched squares on Fig. 2-4).
One of these candidates was not detected in B$_{435}$ due to its location near the edge of the B$_{435}$ image, so all 6 candidates only appear on Figures \[fig:b\_cmds\]b, \[fig:f\_cmds\]a, and \[fig:n\_cmds\]b, and only Figure \[fig:f\_cmds\]a displays all 6 candidates with the expected shape and color.

### Blue Hook Stars {#sec:bhk}

Five of the 6 potential EHB stars appear subluminous in the UV compared to BHB stars with similar temperatures (plotted as diamonds in Fig. 2-4). This marks them as candidate blue hook (BHk) stars. (The subluminous nature of these 5 stars is perhaps most easily seen in Fig. \[fig:n\_cmds\]b.) Defining EHB and BHk stars photometrically can be a very difficult task, as there is no clear, consistent photometric definition. Here we have chosen to rigorously define EHB stars as only those with T$\rm_{eff}$ $\geq$ 20,000K and have chosen our BHk candidates as those that appear dimmer than the ZAHB models of similar-temperature HB stars in the CMDs containing the FUV$_{140}-$NUV$_{220}$ color. Clearly this definition depends somewhat on which ZAHB curves we use. As can be seen in Figures \[fig:b\_cmds\]b, \[fig:f\_cmds\]a, and \[fig:n\_cmds\]b, four of the BHk candidates lie fairly clearly below the canonical ZAHB, while the last one appears subluminous compared to the ZAHB using the smaller distance modulus (dark blue curve) but lies well within the error bars of the dimmer curve, which uses the larger distance modulus (dark red curve). We still consider it a possible BHk star but recognize that its identification as such is weaker than that of the other four. Data on the stars we have identified as BHk candidates can be found in Table \[tab:uv\], which also contains photometric data for other UV-bright sources whose classification is uncertain. These stars, as well as all the Table 2 stars, are plotted on Figure \[fig:cb\].
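The photometric BHk selection described above amounts to flagging stars that fall fainter than a ZAHB reference at the same color. A minimal sketch, with hypothetical arrays standing in for the Cassisi ZAHB models and our FUV$_{140}-$NUV$_{220}$ photometry:

```python
import numpy as np

# Hypothetical ZAHB reference: (FUV - NUV color, FUV magnitude) pairs,
# ordered blue to red; stand-ins for the model curves used in the paper.
zahb_color = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
zahb_fuv = np.array([15.0, 15.5, 16.0, 16.8, 17.8])

# Hypothetical hot-star photometry for three stars.
star_color = np.array([-0.8, -0.6, 0.2])
star_fuv = np.array([17.2, 15.6, 16.3])

# Interpolate the ZAHB magnitude at each star's color; a star more than
# `margin` mag fainter than the ZAHB at that color is flagged subluminous.
margin = 0.5
zahb_at_star = np.interp(star_color, zahb_color, zahb_fuv)
is_bhk = star_fuv > zahb_at_star + margin
print(is_bhk)  # [ True False False]
```

In practice the margin would be set relative to the 1$\sigma$ error bars of the ZAHB rather than a fixed value.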
BHk stars are similar to EHB stars but have helium-rich envelopes that increase the opacity in the atmosphere and lead to their subluminous nature. They are an intrinsically rare and relatively poorly understood population that has only recently begun to be identified [e.g., @bro01; @bus07; @sanhess08]. At least two scenarios have been presented for the origin of BHk stars. One scenario, originally proposed by @lee05, is that BHk stars and the hottest HB stars may result from the standard evolution of a He-rich subpopulation in the cluster [see also @die09; @dancal08; @moe07]. The other scenario, discussed by many authors [e.g., @bro01; @dcr00; @moe04], suggests that BHk stars lose a significant amount of mass along the RGB, likely to a close binary companion, such that they do not ignite helium at the tip of the RGB, but instead undergo a helium-core flash as they begin to descend the white dwarf cooling sequence – a so-called late He-flash. The “blue hook” terminology is a reference to the appearance of this population on UV CMDs. While M15 does not currently have a recognized population of BHk stars, one helium-rich hot BHB star was spectroscopically identified by @moe97 that would most likely appear as a BHk star on a CMD. This star is well beyond our field of view, so we can provide no further insight on it. However, the 5 candidates we have identified could establish the presence of a BHk population in M15 and may provide an important clue about the origin of these stars. The significance of BHk stars in M15 is further discussed in §\[sec:discbhk\] & §\[sec:m15bhk\].

White Dwarfs {#sec:wd}
------------

A prominent population of WD candidates that is in general agreement with the theoretical cooling models is apparent in the CMDs in Figures 2-4 (magenta pinched triangles). We have detected 64 WD candidates in FUV$_{140}$ and only 25 in B$_{435}$ (20 of which were also detected in FUV$_{140}$).
The majority of the WD candidates detected in FUV$_{140}$ were not detected in B$_{435}$ due to the faintness of these stars in B$_{435}$ and crowding problems that become particularly pronounced in the B$_{435}$ image. As can be seen by examining the cooling curves plotted on Figures 2-4, some WD candidates appear to be more consistent with the cooling sequences for helium-core WDs (He WDs) than for normal carbon-oxygen WDs (CO WDs). This, along with the details of the WD population and models, is discussed in §\[sec:wdcool\].

### Helium-Core White Dwarfs {#sec:hewd}

He WDs are the remnants of stars that lose most of their hydrogen envelope on the RGB and therefore never undergo a He-flash or subsequent HB and AGB evolution, but instead end up cooling as degenerate helium cores with thin hydrogen envelopes. He WDs are usually found as members of a binary system and are believed to arise primarily due to Roche lobe overflow and mass transfer in a close binary system [e.g., @web75]. In dense environments such as GC cores it is possible that He WDs are formed by collisions involving RGB stars that result in a common envelope phase and eventual ejection of the envelope [@dav91]. Although @cascas93 show that in cases of extreme mass loss from winds it is possible for isolated stars to become He WDs, this seems an unlikely scenario for the production of a substantial number of He WDs, as the amount of mass loss required is much larger than can be explained by canonical stellar evolution alone. Based on theoretical studies [e.g., @han03], GCs, especially those that have undergone core collapse, should possess observable He WD sequences. However, only one GC (NGC 6397) currently has an identified extended He WD sequence [e.g., @cool98; @edm99; @tay01; @han03; @str09]. No He WD population has been previously identified in M15, but in §\[sec:wdcool\] we present strong evidence for the presence of a He WD sequence and further discuss this population in §\[sec:dischewd\].
Blue Stragglers {#sec:bs}
---------------

The BS sequence appears as an extension of the MS to luminosities greater than the turn-off point. It is generally believed that these stars are the products of a merger of two or more MS stars that, upon merging, produce a core hydrogen-burning star more massive than the MSTO. There is an ongoing discussion of whether the most important formation mechanism for BSs is the gradual coalescence of close binary systems or physical collisions between two stars [e.g., @fre04; @map06; @lei07; @kni09]. However, it is generally agreed that BSs are found preferentially in the dense central regions of clusters. We have identified 31 BS candidates from their position in Figure \[fig:b\_cmds\]a (small blue inverted triangles). However, it is difficult to judge where the MS ends and the BS sequence begins. It appears that Figure \[fig:f\_cmds\]a may be more insightful for distinguishing BSs from MSTO stars, as the sequence of stars bluer than the MSTO is stretched out, thus making BSs more clearly distinct from the turn-off stars. Using Figure \[fig:f\_cmds\]a we have identified approximately 53 BS candidates.

Gap Objects {#sec:gap}
-----------

We also have identified a significant set of stars that we term gap objects following D07 (plotted as bright green dots and open circles in Figures 2-4). This population is composed of stars that populate the gap zone between the MS and WD regions of the CMD, where there is an increased likelihood of finding CVs. The previously identified cataclysmic variable, CV1, appears in this region as expected. In Figure \[fig:f\_cmds\]a we identified 60 stars clearly populating the gap zone (thus classified as gap objects); however, in Figure \[fig:b\_cmds\]a we identify only 22 gap objects, most of which lie near the MS or the faint-end photometric limits.
Most of the sources identified as gap objects in Figure \[fig:f\_cmds\]a were detected in B$_{435}$, but rather than appearing in the gap region on NUV$_{220} -$ B$_{435}$ CMDs they appear primarily on the MS. This puzzling feature is discussed in detail in §\[sec:discgap\]. We are confident in the significance of the gap population as identified in Figure \[fig:f\_cmds\]a, as these sources were all inspected by eye and D07 found a very similar population in their FUV$_{140}$ and NUV$_{220}$ photometry.

Bright Blue Gap Objects {#sec:brightgap}
-----------------------

The population of stars that we identified as bright blue gap objects (pink asterisks in Fig. 2-4) consists of 8 stars (including X-2) detected in each filter that are bluer than the MS in both NUV$_{220} -$ B$_{435}$ and FUV$_{140} -$ NUV$_{220}$ by at least 1 magnitude but are found between the standard WD and HB sequences in brightness. Canonical stellar evolution does not include stages expected to populate this region of the CMD for any significant time. WDs rapidly transiting from the post-AGB phase toward the WD cooling sequence may pass through this region, but the timescale is such that the likelihood of detecting even one star in such a phase is extremely small. Thus, from a canonical stellar evolution standpoint, the presence of several objects in this region is unexpected. In Figures 2-4 it is however clear that the theoretical He WD cooling sequences run through this region, and we will discuss the plausibility that these stars may be young He WDs in §\[sec:brightgaphewd\]. This population is perhaps most clearly defined in Figure \[fig:f\_cmds\]a as the stars with FUV$_{140} -$ NUV$_{220}<-0.5$ and 16.4 $\leq$ FUV$_{140}$ $\leq$ 18.4. Using these criteria we have identified 10 bright blue gap objects; however, in Figure \[fig:f\_cmds\]b, one appears more convincingly to be a hot WD and another appears on the RGB at NUV$_{220} -$ B$_{435} \approx$ 4.
The nature of the latter star is quite confusing; a plausible explanation is that we have detected a chance superposition of an RGB star and a very blue object, such as an HB star. When examined by eye, the center of light appears to be slightly inconsistent across the three images, supporting this hypothesis. It is unclear what the true nature of these bright blue gap objects is, and we consider several possibilities in §\[sec:discbrightgap\]. One of the 8 objects that appear as bright blue gap objects in both colors is X-2, raising the possibility that some of these stars could be close binaries with accretion disks. These stars may also be related to the BHk or He WD populations, and each of these possibilities is discussed in §\[sec:discbrightgap\]. Stars identified as bright blue gap objects are included in Table \[tab:uv\] and Figure \[fig:cb\] as UV8 and UV10-12.

ANALYSIS {#sec:analysis}
========

White Dwarf Cooling Curves {#sec:wdcool}
--------------------------

A set of cooling sequence models for both CO WDs (solid blue curves) and He WDs (dashed green curves and dotted purple curves) has been calculated and plotted on Figures \[fig:wd\_thin\] & \[fig:wd\_thick\]. Figures 2-4 also include the CO WD cooling sequences (solid lines) and one set of the He WD cooling sequences (dashed lines) for orientation purposes. The code and input physics for these models are described in detail by @ser02 and are calculated in the same manner as the models used in the analysis of the He WD population in NGC 6397 by @str09. The models cover a mass range of M = 0.45 - 1.10 M$_{\odot}$ for the CO WDs and M = 0.175 - 0.45 M$_{\odot}$ for the He WDs (masses indicated in the captions of Fig. \[fig:wd\_thin\] & \[fig:wd\_thick\]). The models in Figures \[fig:wd\_thin\] & \[fig:wd\_thick\] differ from each other in that the He WD models in Figure 8 (purple dotted lines) have thin hydrogen envelopes and the He WD models in Figure 9 (green dashed lines) have thick hydrogen envelopes.
The thickness of the hydrogen envelope affects the color and cooling times for He WDs, as discussed in @ser02, @alt01, and briefly in §\[sec:dischewd\] of this paper. A distance of 10.3 kpc [@vdb06] was assumed and the cooling curves were adjusted for extinction in the same manner as the ZAHB. We have included error bars in the upper portion of Figures \[fig:wd\_thin\] & \[fig:wd\_thick\] that represent the 1$\sigma$ error in both distance and reddening for the cooling curves. The He WD models have a progenitor metallicity of Z$=$0.0002 (corresponding to \[Fe/H\] $\approx$ $-2$), consistent with the derived metallicity for M15. The CO WD models are all generated for progenitor stars with solar metallicity. Since the evolution of CO WDs depends very little on nuclear burning and the cooling timescale is not metallicity dependent, we consider these tracks reasonably suitable for M15 despite the metallicity discrepancy. From their location on the CMDs in Figures 2-4, we have identified a total of 73 stars that appear WD-like in at least one CMD. These stars are plotted as dots and open circles on Figures \[fig:wd\_thin\] & \[fig:wd\_thick\]. It is somewhat difficult to distinguish CO and He WD candidates, as there is large uncertainty in the photometry for the dim stars. Table \[tab:wd\] contains a summary of our assessment of these WD candidates, as discussed in detail in the following paragraphs. Using their location on the CMDs, we determined that 11 of these stars are likely candidates for CO WDs, 45 appear to be likely He WD candidates, 10 appear to be good WD candidates but are ambiguous as to whether they belong to the CO or He WD population, and 2 are variable stars from D07 (plotted as filled triangles in Fig. \[fig:wd\_thin\] & \[fig:wd\_thick\]) that appear on the CO WD curves but are more likely CV candidates. The remaining 5 represent possible WD candidates but are subject to significant photometric scatter that makes their nature unclear.
Of these 73 stars, only 20 were detected in all three frames (see filled black and red dots in Fig. \[fig:wd\_thin\] & \[fig:wd\_thick\]). For 3 of these it is unclear whether they are truly WDs: they appear WD-like in Figures \[fig:wd\_thin\]b & \[fig:wd\_thick\]b but lie more than 0.5 magnitudes to the red side of the tracks in Figures \[fig:wd\_thin\]a & \[fig:wd\_thick\]a, thus they may actually be CVs. This leaves us with 17 stars, which we consider to be our strongest WD candidates because we have two-color data that allows us to better distinguish their true nature. We draw the following conclusions concerning these 17 stars: (1) Five are likely candidates for CO WDs. Two are strong candidates that appear on the CO WD cooling curves in both diagrams, and the other three are consistent with being either low-mass CO WDs or young, massive He WDs. (2) Seven are good candidates for He WDs because they appear reasonably consistent with the He WD cooling sequences in both diagrams and are clearly separated from the CO WD cooling curves. These are the 7 stars plotted as large red filled circles on Figures \[fig:wd\_thin\] & \[fig:wd\_thick\]. (3) Five of the 17 stars are clearly WD-like but are ambiguous as to which population they belong to, since they appear on different cooling curves in each diagram. We caution that, for many of these stars, the distinction between He WDs and CO WDs is very difficult and subject to assumptions such as the cluster reddening and distance (especially in CMDs using FUV$_{140}-$NUV$_{220}$, as these two quantities are the most sensitive to reddening). The effect of the uncertainties in these quantities can be seen by examining the 1$\sigma$ error bars for the cooling curves, which are plotted in the upper portion of Figures \[fig:wd\_thin\] & \[fig:wd\_thick\]. If we assume less reddening, some He WD candidates become better CO WD candidates, and if we assume more reddening, some CO WD candidates become He WD candidates.
Despite these uncertainties, we still find our claim that M15 possesses some population of He WDs to be strong. In addition to the WD candidates discussed above, the bright blue gap objects are plotted on Figures \[fig:wd\_thin\] & \[fig:wd\_thick\] (grey asterisks) because a subset of these objects appears to be consistent with the thick H envelope He WD cooling curves. Of the 7 bright blue gap objects detected in all three frames (not including X-2), all appear to be consistent with being young, low-mass He WDs with thick hydrogen envelopes (see Fig. \[fig:wd\_thick\]). The nature and origin of He WDs, the significance of the thin versus thick H envelope models, and the implications of the existence of He WDs in M15 are discussed in more detail in §\[sec:dischewd\].

Radial Distributions {#sec:raddist}
--------------------

Binary systems with a total mass larger than the average stellar mass in the cluster are expected to segregate towards the cluster core on the time scale of a half-mass relaxation time due to dynamical friction. Since M15 is a core-collapsed cluster with a half-mass relaxation time of $t\rm_{r,hm}\approx1$ Gyr [@har96], mass segregation should have taken place and any “massive” binaries should be centrally concentrated. Many of the populations addressed in this paper (BS, BHk, EHB, He WD, CV) are likely members of such binary systems and thus are expected to be segregated towards the cluster center. In Figure \[fig:raddist\] we have plotted the cumulative radial distributions for the BSs, gap objects (which have been separated into two subsets, see §\[sec:gapdist\]), and bright blue gap objects, along with the radial distribution of HB stars for comparison. We chose the HB population as the reference population because it is bright enough that completeness issues should be minimized, even in the dense central regions.
To determine the statistical center of the cluster we used the MSTO population from Figure \[fig:b\_cmds\]a, using the positions in the NUV$_{220}$ image. We must note that any dim population will be drastically incomplete in the most central part of the cluster in the FUV$_{140}$ image, because the extended PSF haloes and diffraction artifacts from several bright stars concentrated near the cluster center effectively mask many dim stars in this region (see Fig. \[fig:master\_images\]). In order to analyze the significance of any apparent central concentrations, we performed Kolmogorov-Smirnov (KS) tests comparing the radial distribution of each population to our reference HB population. The KS statistic gives the probability that the two samples being compared are consistent with being drawn from the same distribution; therefore, a lower KS probability corresponds to a more statistically significant difference between the distributions of the two samples, and thus a more centrally concentrated sample.

### Distribution of Bright Blue Gap Objects {#sec:brightbluedist}

We find KS probabilities of 0.04% and 0.4% for the bright blue gap objects (as identified in Figures \[fig:b\_cmds\]a & \[fig:f\_cmds\]a, respectively) as being consistent with the HB sample. This is a strong indication that these stars are more massive than the average HB mass in the cluster, likely due to a binary nature.

### Distribution of Blue Stragglers {#sec:bsdist}

We find a KS probability of 1% that the BS population, as identified from Figure \[fig:f\_cmds\]a, is consistent with the HB sample. This indicates a significant segregation towards the cluster center. Yet, for the BSs identified from Figure \[fig:b\_cmds\]a, the KS probability is 30%. This is a surprising result considering that the latter set represents the brightest and presumably most massive BSs, which are expected to display the strongest effects of mass segregation.
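KS probabilities of the kind quoted in this section can be computed with the standard two-sample test, e.g. `scipy.stats.ks_2samp`. A sketch with synthetic projected radii standing in for our actual star lists:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Synthetic projected radii (arcsec): a centrally concentrated test
# population versus a more extended HB-like reference population.
r_test = 30 * np.sqrt(rng.uniform(size=60))   # concentrated sample
r_hb = 60 * np.sqrt(rng.uniform(size=130))    # extended reference

stat, p_value = ks_2samp(r_test, r_hb)
# A small p-value means the two radial distributions differ significantly,
# i.e. the test population is more centrally concentrated than the reference.
print(f"KS statistic = {stat:.2f}, p = {p_value:.2g}")
```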
Also, nearly every BS in Figure \[fig:b\_cmds\]a was also chosen as a BS in Figure \[fig:f\_cmds\]a, implying that it is the dimmer, less massive BSs that show the strongest degree of segregation; upon further inspection we find this to be true. If we consider the stars identified as BSs in Figure \[fig:f\_cmds\]a that were detected in all three frames and divide them into a “bright” group with B$_{435}$ $<$ 18 and a “dim” group with B$_{435}$ $\geq$ 18, we find that the “bright BSs” have a KS probability of 13% while the “dim BSs” have a KS probability of 1%. This would seem to indicate that these dim BSs are significantly more centrally concentrated than the bright BSs. Although a rigorous explanation for this phenomenon is beyond the scope of this paper, we suggest it may be a statistical issue due to the smaller number of BSs in Figure \[fig:b\_cmds\] (only 31 BSs selected from Fig. \[fig:b\_cmds\] vs. 53 selected from Fig. \[fig:f\_cmds\]). Recently, a significant number of globular clusters have been discovered to have a bimodal BS distribution, with a peak inside the core radius followed by a quick drop-off in a region termed the “zone of avoidance” and a second peak at several core radii [e.g., @fer93; @map06; @dal08]. We don’t expect to see such a distribution in our data since the expected radius for the drop-off in M15 is well beyond our field of view. Following the method outlined in @map06 with reasonable parameters for M15, we estimate a zone of avoidance at $r \approx 3.3 - 7.5\arcmin$ (10-20 pc), depending on the exact parameters used in the calculation. Even though this is a relatively large range, it is undoubtedly beyond our field of view and near the half-mass radius of the cluster.

### Distribution of Gap Objects {#sec:gapdist}

For the gap objects we find a KS probability of 89% (from the Figure \[fig:b\_cmds\]a group) and 80% (from the Figure \[fig:f\_cmds\]a group) as being consistent with the HB.
This indicates that the gap objects do not show a central concentration. But since many of the gap objects are faint, this population suffers serious incompleteness issues, especially in the innermost regions of the cluster. In an attempt to lessen this bias we have plotted the distribution of only those gap objects with NUV$_{220}$ $\leq$ 22. The gap objects are plotted as two subsets on Figures 2-4 (NUV$_{220}$ $\leq$ 22: large bright green dots; NUV$_{220}$ $>$ 22: bright green open circles), and thus it can be seen that by using this criterion alone we were able to select a group that should have completeness levels that are more consistent with the other populations we are analyzing. For this brighter subset (NUV$_{220}$ $\leq$ 22) we find KS probabilities of 8% and 64% from Figures \[fig:b\_cmds\] and \[fig:f\_cmds\], respectively. These are still relatively high KS probabilities and therefore do not necessarily indicate the presence of a statistically significant central concentration for these populations, but they do suggest that the brighter set of the population derived from Figure \[fig:b\_cmds\]a may be centrally concentrated to some degree. Despite the moderate KS probabilities we report, we still consider it likely that at least some of these objects are close binary systems based on their photometric properties, and suggest that the KS probabilities given here be regarded as upper limits due to the completeness and crowding issues discussed above.

### Distribution of Other Populations {#sec:distother}

Both the EHB and He WD populations are possibly members of binary systems more massive than the average stellar mass in the cluster and therefore may be segregated to the center as well. We, however, do not report on the radial distribution statistics for these populations due to issues of small number statistics and completeness, respectively.
The EHB is composed of only 6 stars, so the results from any statistical analysis we perform would not be robust. The He WD population has enough candidate members to allow for more robust statistics, but the dim nature of these stars makes them suffer drastically from the completeness and crowding issues in the cluster center that have been discussed throughout this paper. There is consequently an apparent dearth of these stars in the innermost region of the cluster, which is almost certainly due to crowding issues that mask these dim stars. Because of this, our data are not sufficient to provide meaningful statistics on their radial distribution.

### Comparison with D07 {#sec:compdist}

D07 also performed an analysis of the radial distribution of the BSs and gap objects compared with the HB stars. Consistent with our results, they found that the BSs are significantly more centrally concentrated than the HB. However, they find a KS probability of 14% for the two samples being from the same parent distribution, which is much larger than the 1% we report. This is most likely due to differences between the stars classified as BSs; D07 classified 69 stars as BSs while we only classified 53 stars as such. Our finding that the gap objects do not necessarily appear to be centrally concentrated seems at odds with the finding in D07 that the gap population is the most centrally concentrated, with a KS probability of only 8% when compared with the HB stars. However, D07 only considered stars with NUV$_{220}$ $<$ 21, so their result should only be compared with our result for the “bright” gap objects with NUV$_{220}$ $\leq$ 22. While the samples are still not entirely similar, it is striking that we find a much larger KS probability of 64%, even for this brighter sample.
This could be, in part, due to the fact that we extend our cut a magnitude deeper and therefore include more stars, and may have more issues concerning completeness near the cluster center. However, a large contribution to this difference is almost certainly due to the fact that the gap population in D07 included the stars we have termed bright blue gap objects, which are very strongly centrally concentrated.

Blue Hook Candidates {#sec:discbhk}
--------------------

Of the many UV-bright stellar exotica that have been discussed thus far, evidence of a BHk population is of particular interest, as it may provide an important clue about how these stars originate. The presence of BHk stars in M15 is somewhat contrary to the hypothesis that BHk stars are the product of a He-rich subpopulation, because with a mass of only $4.4 \times 10^{5}$ M$_{\odot}$ [@vdb06] it is unclear whether M15 has a potential well deep enough to retain sufficient gas from stellar ejecta to form a second generation of stars. In addition, M15 does not show a split MS as expected for a cluster containing a second generation (e.g., NGC2808: Piotto et al. 2007; $\omega$ Cen: Bedin et al. 2004). @dancal08 argue that the period distribution of RR Lyrae stars in M15 is best explained by the existence of a He-rich subpopulation, but note that their models produce HR diagrams which are not particularly good fits for the morphology of M15’s HB. So, while a subpopulation in M15 should not be ruled out, it seems a somewhat unlikely explanation for BHk stars in this cluster. Furthermore, the high central density of M15, $7 \times 10^{6}$ M$_{\odot}$ pc$^{-3}$ [@vdb06], results in a very high interaction rate, weighing in favor of the late He-flash origin. In this picture the He-flash occurs on the WD cooling sequence, thus it is necessary that there be enough mass loss during the RGB phases to avoid the He-flash at the tip of the RGB.
The high central density and high interaction rate of M15 should lead to an increased number of mass-transfer binaries and collisions that may strip the outer layers of an RGB star, thus increasing the potential for BHk stars to be formed this way.

Helium White Dwarfs {#sec:dischewd}
-------------------

We have identified a substantial number of He WD candidates (see §\[sec:wdcool\]). However, many of our candidates lie very near our detection limit and therefore suffer from significant photometric scatter and are more susceptible to being false detections. For this reason, we will focus our analysis on those 7 He WD candidates that were detected in all three filters and appear as good candidates for He WDs in both Figures \[fig:wd\_thin\] & \[fig:wd\_thick\] (plotted as large red filled circles).

### Thick vs. Thin H Envelopes {#sec:thickthin}

One question of current interest about He WDs is how massive their hydrogen envelopes are. The candidates identified here, in conjunction with the models discussed in §\[sec:wdcool\], may provide some insight on this parameter. The mass, and thus thickness, of the hydrogen envelope greatly affects the cooling timescale and, to a lesser extent, the photometric properties of a He WD of a given mass. Since it is unclear what envelope mass is expected, we have included models for both “thick” and “thin” envelopes. In the thick H envelope case, nuclear burning (pp-chain) at the base of the envelope dominates the radiation and cooling time, while in the thin H envelope case the contribution from nuclear burning is negligible, so the remnant cools via thermal radiation. The models presented here represent the two opposite extremes in the range of hydrogen envelope mass and therefore, as pointed out by @str09, should “bracket reality.” For further details of the models and the implications for “thick” vs. “thin” envelopes the reader is referred to @ser02 and @str09.
### Photometric Masses {#sec:photmass}

It is difficult to derive exact photometric masses from our candidate WD populations; however, in Figure \[fig:wd\_thick\] the 7 strong He WD candidates seem to be well bracketed by the curves spanning the mass range M $\approx 0.200$ M$_{\odot}$– 0.275 M$_{\odot}$. The 7 bright blue gap objects (grey asterisks) that appear to be He WD candidates in Figure \[fig:wd\_thick\] also appear to be most consistent with the models in this mass range. The photometric mass range is much less well constrained in Figure \[fig:wd\_thin\], as the set of 7 strong He WD candidates seems to span the mass range M $\approx$ 0.175 M$_{\odot}$– 0.350 M$_{\odot}$. It is nearly impossible to approximate a photometric mass using the entire set of 45 He WD candidates. The candidates span the entire range of model He WD masses in Figure \[fig:wd\_thick\], and approximately 10 of the candidates appear significantly redder than the reddest model in Figure \[fig:wd\_thin\]. It is possible that the reddest stars are not truly He WDs, but instead are either CVs or detached WD-MS binary systems that just happen to lie near the WD cooling sequence. However, since this region is relatively well populated and reasonably well separated from the gap object population in color, we still present them as more likely WD candidates than gap objects. This still does not eliminate the possibility that some of our candidates may be CVs, detached WD-MS binaries, or even LMXBs, especially as this region contains two variable stars classified as likely CVs based on the variability study by D07.

### Cooling Ages and Implication for Formation {#sec:coolage}

Following the analysis in @str09, we analyze the apparent cooling ages of the He WDs in order to gain insight into the possible formation mechanisms as well as investigate the plausibility of the thin vs. thick H envelope models.
Focusing on the 7 strongest He WD candidates in the thin H envelope case (Figure \[fig:wd\_thin\]), we see that all 7 stars have cooling ages $<$ 100 Myr, with the dimmest having a cooling age approximately equal to 100 Myr. Therefore we can calculate an average formation rate of about 70 Gyr$^{-1}$. Yet 4 of these stars have cooling ages of $<$ 2 Myr, which implies an implausible formation rate in recent epochs of 2000 Gyr$^{-1}$. We can estimate the rate at which stars are turning off the MS in M15 for comparison to the formation rate implied by the cooling ages of our WD candidates. Adding our sample of MSTO stars that are clearly redward of the turn-off point (NUV$_{220} -$ B$_{435} \approx$ 1.5) to our number of RGB stars, we have a total of about 800 stars in these phases. Given that the time that a star spends in such phases is approximately 1.5 Gyr [@pols98], we obtain a rate for stars turning off the MS of 530 Gyr$^{-1}$. This means that the formation rate implied by the 4 stars with cooling ages of $<$ 2 Myr using the thin H envelope models is significantly larger than the rate at which stars are leaving the MS, and for this reason alone we find the formation rate implied by the thin H envelope models to be unreasonable. It is possible that these brighter candidates are not He WDs but are instead WD-MS detached binaries or CVs, so we cannot rule out the thin H envelope model altogether with just this evidence. Additionally, we should note that the rate at which stars are turning off the MS can be estimated in several ways and may be sensitive to issues that are difficult to account for, such as mass segregation. Another way to calculate this rate is to use the HB; using the total number of stars found in the HB ($\approx$ 130) and an estimate for the lifetime of a HB star [$\approx$ 100 Myr; @cas04], we get a turnoff rate of approximately 1300 stars Gyr$^{-1}$.
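The rate estimates above are simple number-over-timescale ratios; a short sketch reproducing the arithmetic, with all inputs taken from the text:

```python
def rate_per_gyr(n_stars, timespan_gyr):
    """Average formation (or turnoff) rate in stars per Gyr."""
    return n_stars / timespan_gyr

# He WD formation rates implied by the thin H envelope cooling ages.
print(round(rate_per_gyr(7, 0.100)))    # 7 stars in 100 Myr -> 70
print(round(rate_per_gyr(4, 0.002)))    # 4 stars in 2 Myr -> 2000

# MS turnoff rates for comparison.
print(round(rate_per_gyr(800, 1.5)))    # MSTO + RGB stars -> 533 (~530 in the text)
print(round(rate_per_gyr(130, 0.100)))  # HB stars -> 1300
```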
This is about 2.5 times larger than the rate obtained using the MSTO stars. The reason that these two rates do not agree more closely is not clear, but may be related to the uncertainties concerning the lifetimes of post-MS phases such as the RGB and HB, as well as dynamical considerations such as mass segregation and stellar interactions. However, even using this larger estimate for the turnoff rate in M15, we still find the formation rates implied by the thin H envelope models to be larger than the rate at which stars are turning off the MS, and therefore unreasonable. Using Figure \[fig:wd\_thin\] to evaluate the *entire* population of He WD candidates, 38 of these appear to have cooling ages $\lesssim$ 100 Myr, with the remaining 7 either having a longer cooling age or being significantly redder than the lowest-mass cooling sequence. This implies a fairly extreme formation rate of 380 Gyr$^{-1}$ and would imply that between 25% and 70% of the stars turning off the MS are being formed into He WDs. But many of our candidates are not detected in all three filters, so some of these candidates are likely to be falsely identified, and we therefore can not rule out the thin H envelope models on this evidence alone either. However, because we find significant doubt as to the feasibility of the formation rates implied by the thin H envelope models, and the photometric mass range for these models seems much less well constrained (see previous section), we will focus on the thick H envelope models for the remainder of this section as they appear to be the more plausible models. For the thick H envelope models (Figure \[fig:wd\_thick\]), the 7 strongest candidates appear to have cooling ages down to $\lesssim$ 1 Gyr. This implies a very reasonable average formation rate of 7 Gyr$^{-1}$, and if we include the 7 bright blue gap objects that also seem consistent with the thick H envelope models, we still find a reasonable average formation rate of 14 Gyr$^{-1}$.
Nine of these appear to have cooling ages $\lesssim$ 250 Myr, implying a larger, yet not unreasonable, formation rate of 36 Gyr$^{-1}$. In fact, this number is in excellent agreement with estimates calculated using our entire sample of He WD candidates in Figure \[fig:wd\_thick\] (see below). In total, 30 of the candidates, plus the 7 bright blue gap objects, appear to have cooling ages of $\lesssim$ 1 Gyr, implying a formation rate of 30–37 Gyr$^{-1}$. The remaining 15 candidates appear to have cooling ages of $\gtrsim$ 1.5 Gyr, which would result in a lower average formation rate in epochs more than 1 Gyr ago. However, since this cooling age is well below the detection limit for many of the models, we have likely detected only a fraction of the objects with cooling ages between 1 Gyr and 1.5 Gyr. In summary, using the thick H envelope models we have obtained a formation rate between 7 Gyr$^{-1}$ and 37 Gyr$^{-1}$, depending on which potential He WDs are included in the sample. We can compare these estimated formation rates with the average collision rate for RGB stars in the central portions of M15. We will use the formulation in @bintre87 that $$\frac{1}{t_{\rm coll}} = 16\sqrt{\pi}\, n\sigma R_{*}^{2}\left(1+\frac{GM_{*}}{2R_{*}\sigma^{2}}\right).$$ For our estimates we will confine the “central portion” to the central 0.1 (0.3 pc). For the average stellar density, $n$, in this region, we will use the central luminosity density from @har96 of $2.4 \times 10^{5}$ L$_{\odot}$ pc$^{-3}$ and an average M/L $\approx$ 2 for the central portion of M15 from @vdb06 to get a density of $4.8 \times 10^{5}$ stars pc$^{-3}$ (assuming an average stellar mass for the central region of $\approx$ 1 M$_{\odot}$). Using this with the velocity dispersion for M15, $\sigma = 11$ km s$^{-1}$ [@geb94; @dull97; @dull03], and the radius and mass of a red giant, R$_{*}$ = 10 R$_{\odot}$ and M$_{*}$ = 0.8 M$_{\odot}$, the collision rate for a given red giant is approximately 1 collision per 2 Gyr.
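As a numerical sketch, the @bintre87 collision timescale can be evaluated directly with the values quoted above (all quantities converted to SI; the result lands near the 2 Gyr per giant stated in the text):

```python
import math

# Physical constants and unit conversions (SI).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
PC = 3.086e16          # parsec, m
GYR = 3.156e16         # gigayear, s

# Values adopted in the text.
n = 4.8e5 / PC**3      # stellar density: 4.8e5 stars pc^-3 -> m^-3
sigma = 11e3           # velocity dispersion: 11 km/s -> m/s
R = 10 * R_SUN         # red giant radius
M = 0.8 * M_SUN        # red giant mass

# Binney & Tremaine collision rate per star, with gravitational focusing term.
rate = 16 * math.sqrt(math.pi) * n * sigma * R**2 * (1 + G * M / (2 * R * sigma**2))
t_coll_gyr = 1.0 / rate / GYR
print(f"collision timescale per red giant: {t_coll_gyr:.1f} Gyr")
```

Note that the gravitational focusing term dominates here (it is of order 60), so neglecting it would lengthen the timescale by that same factor.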
We found about 200 RGB stars in our data that lie within 0.1 of the cluster center, and thus predict 100 collisions involving an RGB star per Gyr in the central regions of M15. This predicted collision rate is high enough to account for the upper limits on the formation rates of He WDs calculated above (for the thick H envelope models) under the assumption that He WDs are in fact formed through collisions involving RGB stars. However, we caution that our calculations here are very approximate and sensitive to our assumptions. Mass loss in a close binary system is another possible mechanism for the formation of He WDs. So, again following @str09, we will assume that He WDs are formed primarily from primordial binaries and then estimate a lower limit on the binary fraction in the core of M15 by considering the formation rate of He WDs. Our estimated range for the He WD formation rate of 7–37 Gyr$^{-1}$ leads to an implied binary fraction of 1% to 7% (using the MSTO to estimate the turnoff rate) or 0.5% to 3% (using the HB to estimate the turnoff rate). These are in reasonable agreement with the results of @geb97, who find a binary fraction of 7% in M15. However, we are considering only the fraction of binaries that produce He WDs, which itself is only a fraction of the binaries in the cluster, thus our estimate should be a lower limit on the binary fraction. But, as shown previously in this section, collisions likely contribute to the formation of He WDs as well, and we therefore conclude that collisions involving RGB stars as well as mass transfer in close binary systems are both possible formation channels for He WDs in M15.

### Bright Blue Gap Objects as He WDs {#sec:brightgaphewd}

Many of the bright blue gap objects are consistent with the thick H envelope He WD cooling models (see Fig. 9).
This, combined with the very reasonable implied He WD formation rates calculated with the thick H envelope models and the inclusion of these stars, suggests that at least some of the bright blue gap objects are correctly identified as He WDs. Many of the bright blue gap objects are very bright and therefore appear to be quite young He WDs, some having cooling ages of less than 100 Myr. This may indicate that there has been an increase in the production rate of He WDs over the last several Myr as a result of the dynamical evolution of the cluster, but it is still consistent with our interpretation of these objects as He WDs.

### Cooling Ages and Implications for CO WDs {#sec:coform}

If we use this same idea to consider the formation rate of CO WDs, we find that 9 of the stars we considered CO WD candidates have cooling ages of less than 30 Myr, implying a formation rate of about 300 Gyr$^{-1}$. This is a very small formation rate, as it is almost a factor of 2 smaller than our lowest estimate for the rate at which stars turn off the MS, and more than a factor of 4 smaller than the turnoff rate calculated using the HB. However, there are also 10 ambiguous WDs, of which 6 appear to fall approximately in the same range of cooling age. If these are added to the sample, we can calculate an implied formation rate of 500 Gyr$^{-1}$. This is a more reasonable formation rate, but compared to the turnoff rate calculated from the HB stars it still seems rather small. It is unclear why this is the case, but as mentioned previously, there is likely some confusion in classifying CO WDs vs. He WDs that may account for the discrepancies in the formation rates.

Nature of the Gap Objects {#sec:discgap}
-------------------------

In Figure \[fig:f\_cmds\]a we have identified 60 gap objects that represent possible CV candidates (bright green dots and open circles).
While this seems to be a promising set of candidates, when the B$_{435}$ photometry is added and those same stars are plotted on Figure \[fig:f\_cmds\]b, most no longer appear to be gap objects at all but instead lie on the MS. Of the 60 gap objects identified in Figure \[fig:f\_cmds\]a, 54 were detected in the B$_{435}$ frame as well and are therefore plotted on Figure \[fig:f\_cmds\]b. Two of these stars (including CV1) appear as gap objects in both Figures \[fig:f\_cmds\]a & b. The photometry for CV1 puts it clearly in the gap region on both CMDs, as expected. The other star mutually identified as a gap object, hereafter UV16 (see Table \[tab:uv\] and Fig. \[fig:cb\]), lies very near the blue edge of the MS in Figure \[fig:f\_cmds\]b and almost as blue as the WD sequence in Figure \[fig:f\_cmds\]a. While we can make no conclusive statement about the nature of UV16 based on photometry alone, we present it as a likely CV candidate. Excluding CV1 and UV16, there remain 52 gap objects that were detected in all 3 filters. Twenty-two of these have NUV$_{220}$ $>$ 22, placing them in the regime where our NUV$_{220}$ photometry is less reliable; the remaining 30 stars with reliable photometry appear as gap objects in Figure \[fig:f\_cmds\]a yet lie securely on the MS in Figure \[fig:f\_cmds\]b. While this result is puzzling, our FUV$_{140}$ and NUV$_{220}$ photometry is consistent with the similar results published by D07. In addition, these sources were inspected by eye to rule out spurious matches and false detections. Some possible explanations are discussed in the following sections.

### Cataclysmic Variable Candidates {#sec:gapcv}

Since it is unclear whether the gap objects from Figure \[fig:f\_cmds\] are plausible CV candidates (see the following sections for further discussion), we will focus our discussion of likely CV candidates on stars that appear in the gap in Figure \[fig:b\_cmds\]a.
This includes 22 objects, only 6 of which were detected in FUV$_{140}$ and therefore appear on Figure \[fig:b\_cmds\]b. These 6 include CV1 and the likely CV candidate UV16, as well as 4 stars that lie either on the WD cooling sequences or on the MS in Figure \[fig:b\_cmds\]b. We do not consider the two that appear on the MS as probable CVs because they show no UV excess in the FUV. The remaining two gap objects that appear on the WD cooling sequences represent much more probable CV candidates and have been included as such in Table \[tab:uv\] (UV17 & UV18), as well as shown on Figure \[fig:cb\]. While both stars are dim (NUV$_{220}$ $>$ 22), they appear to have a UV excess in both NUV$_{220} -$ B$_{435}$ and FUV$_{140} -$ NUV$_{220}$, indicating the possible presence of an accretion disk. There are also 5 somewhat luminous gap objects (21.5 $\gtrsim$ NUV$_{220} \gtrsim$ 20; 21 $\gtrsim$ B$_{435} \gtrsim$ 19.5; see Fig. 2a & Fig. 4a) that are surprisingly not detected in FUV$_{140}$. We individually inspected these sources in the FUV$_{140}$ frame and determined that the stars in question should be bright enough to be detected, but are near bright stars and consequently lost in the extended PSF haloes. Hence these stars also remain plausible CV candidates despite the fact that they were not detected in our FUV$_{140}$ data. D07 discuss several other CV candidates which they identified from light curves; these stars are identified in their paper as V7, V11, V15, V39, V40, and V41. Our photometry produced very similar results to the ones presented in their paper, although as noted previously there is a systematic magnitude difference in FUV$_{140}$ for the variable stars. V15 and V40 were detected in FUV$_{140}$ but not NUV$_{220}$, which is consistent with their results (however, since they were not detected in NUV$_{220}$ they are not included on our CMDs).
We can make no further assertions as to the nature of these variable stars, as we did no investigation of the variability, but we do confirm the photometric nature as identified in D07 as being potential CVs (see Fig. \[fig:cb\] for the location of V7, V11, V39, and V41 on our CMDs).

### WD-MS Detached Binary Systems {#sec:gapwdms}

It is possible that at least some portion of the gap object population in Figure \[fig:f\_cmds\]a could be explained as WD-MS detached binary systems which, based on theoretical grounds alone, should be found in substantial numbers in GCs [see @iva06]. We performed a very basic test to explore the plausibility that the photometric properties of these gap objects could be explained as WD-MS detached binary systems. We fit a polynomial to the MS data in each CMD using a least squares fit. We then added the fluxes of these model MS stars to the fluxes from the model WD stars along the 0.6 M$_{\odot}$ CO WD cooling sequence to construct a set of “observed fluxes” for WD-MS binaries. In Figures \[fig:grid\] & \[fig:gridcc\] we have plotted a grid where each line represents a specific model MS or WD star and each intersection represents the resulting flux from the MS $+$ WD combination. The WD lines are solid lines labeled with the appropriate temperature for the model WD, and the MS lines are dotted lines labeled with a letter so that they can easily be followed between the two diagrams. For these models we only considered WDs between 13,000 and 40,000 K and MS stars with NUV$_{220}$ magnitudes ranging from approximately 20.3 to 24.5 (models A-J). From our rudimentary model in Figures \[fig:grid\] & \[fig:gridcc\], it is plausible that the observed magnitudes of the numerous gap objects from Figure \[fig:f\_cmds\]a could represent a population of WD-MS detached binaries.
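Combining a model MS star with a model WD amounts to summing their fluxes and converting back to a magnitude. A minimal sketch of that step, with hypothetical magnitudes (the function name and sample values are illustrative, not numbers from the grid):

```python
import math

def combined_magnitude(m1, m2):
    """Magnitude of an unresolved pair: add the fluxes, then convert back.

    Flux scales as 10**(-0.4 * m), so the combined magnitude is
    -2.5 * log10(10**(-0.4 * m1) + 10**(-0.4 * m2)).
    """
    flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

# Example: a WD that dominates in the UV paired with an MS star that
# dominates in the optical (hypothetical magnitudes in two bands).
wd_nuv, ms_nuv = 21.0, 24.0   # WD is ~16x brighter in the NUV
wd_b, ms_b = 24.0, 21.0       # MS star is ~16x brighter in B

print(combined_magnitude(wd_nuv, ms_nuv))  # close to the WD's NUV magnitude
print(combined_magnitude(wd_b, ms_b))      # close to the MS star's B magnitude
```

This is why such a pair can sit near the WD sequence in a UV color yet near the MS in an optical color: each band is dominated by a different component.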
By examining the range of MS models and WD temperatures that are populated by gap objects in Figure \[fig:grid\]a and comparing them with Figure \[fig:grid\]b, it can be seen that similar, though not identical, areas of our MS-WD grid are populated in both CMDs. Furthermore, the majority of these objects were examined for variability by D07 and were not found to be variable. We therefore present this detached WD-MS binary scenario as a viable explanation for some of the gap population seen in Figure \[fig:f\_cmds\]a that seems to be absent in Figure \[fig:f\_cmds\]b. We must point out that our grid of WD-MS detached binary models does overlap significantly with the WD region, specifically in the region where many of our He WD candidates appear (see Figure \[fig:grid\]a). However, we see that these same models do *not* significantly overlap the WD region when examining Figure \[fig:grid\]b, so the addition of the optical filter allows us to more clearly distinguish between detached binary systems and He WD candidates. In addition, this overlap region in Figure \[fig:grid\]a contains a relatively large number of stars (20-25), but is only covered by models requiring the combination of a WD with a very low-mass companion. So, if our He WD candidates were in fact interpreted as WD-MS detached binaries, it would indicate a strong preference for WDs to have low-mass companions; we find no obvious reason for such a preference to exist, so we consider the likelihood that a significant number of the He WD candidates are actually detached binaries to be very small.

### Magnetic CVs {#sec:gapamher}

It is also possible that some of the gap objects in Figure \[fig:f\_cmds\]a could be magnetic CVs.
Magnetic CVs come primarily in two types: polars (AM Her stars), which have no accretion disks, and intermediate polars (IPs; DQ Her stars), which have truncated accretion disks because most, or all, of the accretion in these systems is directed along magnetic field lines onto the magnetic poles of the primary. In the field, the CV population comprises about 25% magnetic CVs [@wicfer00], and it has been proposed on an observational basis that GCs contain a higher fraction of magnetic CVs than the field [e.g., @gri95; @edm99]. In addition, @iva06 modeled the dynamical processes that lead to the formation of CVs in clusters and found that cluster CVs tend to have more massive WD primaries. Since high-mass WDs have been shown to be more likely to have strong magnetic fields [e.g., @lie88; @sch03 and references therein], this may explain the magnetic nature of cluster CVs. Since the accretion disk contributes strongly to the flux of non-magnetic CVs, polars and IPs will have different photometric properties than non-magnetic CVs such as dwarf novae (like CV1), and we argue that the missing accretion disk could account for the lack of NUV excess in Figure \[fig:f\_cmds\]b. In FUV$_{140}$ the WD dominates the flux, while the MS companion dominates the flux in B$_{435}$; the lack of an accretion disk leads to a significant decrease in the FUV$_{140}$ and NUV$_{220}$ flux compared to that of an accretion disk system and would cause the system to have colors more similar to those of a WD-MS detached binary. @edm99 find that one of the CVs identified as a magnetic CV in their paper appears on the MS in the V vs. V$-$I CMD. Additionally, in recent results from GALEX observations of the open cluster M67, EU Cnc, an AM Her system [@nair05], has been identified to have very similar photometric properties to the perplexing gap objects identified here (Hamper et al. 2010, in preparation).
The literature contains ideas invoking the magnetic nature of globular cluster CVs to explain observed differences between field CVs and cluster CVs, such as the relatively red optical colors and high optical to X-ray flux ratios [e.g., @edm99; @dob06 and references therein]. @edm03 and @dob06 also propose that a combination of low accretion rates and strong magnetic moments of WDs in magnetic CVs may stabilize the disk such that outbursts would be rare, possibly explaining the low rate of observed outbursts in cluster CVs [e.g., @sha96; @dob06 and references therein]. We therefore suggest that it is quite possible that some of our gap objects do represent a population of magnetic CVs. X-ray luminosity measurements would be particularly advantageous in constraining the nature of these objects. Currently 6 sets of observations of M15 from the Chandra X-ray Observatory are available in the archives. For a complete list of the available observations and the limiting X-ray luminosities associated with each, the reader is referred to @hei09. Despite the generous amount of X-ray data available for M15, the region of most interest for this paper, the central 20$\times$ 20, is dominated by two extremely bright sources (AC211 and X-2) that drown out most of the dimmer sources in this region (see Fig. 1 of @hei09).

### Dim X-ray Source H05X-B {#sec:H05XB}

@han05 used archival HST images (from the Wide Field/Planetary Camera 2) to identify a possible counterpart to the suspected DN H05X-B (see Fig. 6 in that paper). The source that we have identified as H05X-B is consistent with the position of the suggested counterpart for “source B” in their paper. However, the color of this star does not appear to have any blue excess (see Fig. \[fig:b\_cmds\]-\[fig:n\_cmds\] and Fig. \[fig:cb\] of this paper).
@han05 identify H05X-B as a possible DN or qSXT, yet the color that we measure (NUV$_{220} -$ B$_{435}$ = 2.15) is redder than expected for DNe or SXTs, even those observed during quiescence (qSXT: Shahbaz et al. 2003; Edmonds et al. 2002; DNe: Bailey 1980). It is possible that the star that we have chosen is not the true counterpart. Several other sources lie within the error circle shown in @han05, but we examined each of these and none seem to have particularly blue colors either. @han05 report this star as blue in their U-V color, so we do not rule out that the source identified as H05X-B could be the true counterpart. It is possible that the NUV$_{220}$ observations in our data set happened to coincide with a particularly quiescent state. However, since the reported colors in the literature for quiescent DNe are bluer than those for qSXTs, it seems more plausible that H05X-B may be a qSXT, which is contrary to the conclusion made by @han05.

Post-HB Stars {#sec:discpagb}
-------------

We have identified 3 stars, aside from AC211, that lie at least 0.5 mag above the horizontal branch (see light blue X's in Fig. 2-4, or UV1-3 in Table 2 and Fig. 7). We tentatively identify these stars as being in post-HB evolutionary states and as P-EAGB or AGB-manqué candidates. As these stars are only between 0.5 and 1 mag brighter than the HB, it is possible they have not yet exhausted core helium and are just beginning to evolve off the HB. Spectra for these stars, or careful comparison with post-HB models, would be necessary to confirm their nature, but this is beyond the scope of this paper. Judging solely by their relative locations on the CMDs in Figure \[fig:n\_cmds\], we present UV1 as an AGB-manqué candidate and UV2 and UV3 as P-EAGB candidates, as these stars appear roughly consistent with the \[Fe/H\] = $-2.3$ models of @dor93.
Since P-EAGB and AGB-manqué stars are considered the progeny of the EHB, this raises the question of whether we should expect to see any such stars in M15 at all, because we have a very small sample of EHB candidates. For this paper we defined the EHB as stars with T$_{\rm eff}$ $\geq$ 20,000 K, but in the low-metallicity models of @dor93 some stars with T$_{\rm eff}$ as low as 10,000 K do not reach the thermal pulsation phase of the AGB and therefore become P-EAGB stars. @dor93 find the ratio of the post-HB lifetime to the HB lifetime for AGB-manqué stars to be between 1:5 and 1:6. For P-EAGB stars this ratio varies considerably depending on luminosity, with the dimmest having the longest post-HB lifetimes, but the ratio of lifetimes is usually about an order of magnitude less than that of AGB-manqué stars [see also @ber95]. If we treat our entire population of EHB candidates as true EHB stars (instead of BHks), then we would expect to find 1 AGB-manqué star; it would, however, seem highly unlikely to also find 2 P-EAGB stars. If instead we include stars with T$_{\rm eff}$ $\geq$ 15,000 K as possible progenitors of P-EAGB stars, we have 16 candidate progenitors, and following @dor93 and decreasing our temperature threshold to T$_{\rm eff}$ $\geq$ 10,000 K, we end up with 37 candidate P-EAGB progenitors. Therefore, we do not rule out these stars as P-EAGB candidates, but we caution that it is unclear whether HB stars as cool as T$_{\rm eff}$ = 10,000 K or 15,000 K could produce the P-EAGB candidates identified here.

Nature of Bright Blue Gap Objects {#sec:discbrightgap}
---------------------------------

The nature of the objects identified as bright blue gap objects is very hard to determine from photometric properties alone, as they do not obviously belong to any standard population and they populate a region of the CMD in which stars following canonical stellar evolution do not exist for any substantial period of time.
From Figure \[fig:b\_cmds\]b, the 7 stars, excluding X-2, that were detected in all three filters and appear as bright blue gap objects on all CMDs (UV8 & UV10-15 in Table 2 and Fig. 7) appear as though they could feasibly be members of the HB or BS sequence; however, Figure \[fig:b\_cmds\]a illustrates that this is highly unlikely. The color of these objects makes them possible CV candidates, but this also seems an unlikely explanation because 4 of the 7 stars are brighter than the MSTO and several magnitudes brighter than CV1 in all three filters. The remaining 3 have a B$_{435}$ magnitude similar to CV1 and the MSTO, but are significantly brighter in NUV$_{220}$ and FUV$_{140}$. The radial distribution of the bright blue gap objects (Fig. \[fig:raddist\]) indicates that these stars are centrally concentrated and therefore likely massive binary systems. This, combined with the fact that X-2 (an ultra-compact LMXB) lies in a similar region of the CMD, leads us to consider that they may be close binary systems with some current mass transfer. If this is the case, and the population *is* related to accretion disk phenomena, it is expected that at least some of these stars should be variable. Although these stars were all included in D07, none were identified as variables. While this is not favorable for the accretion disk hypothesis, X-2 was not selected as variable in D07 either, as the amplitude of its variability is less than 0.2 mag and therefore too small for it to be selected as a variable in their study. We therefore can not rule out these objects as potential mass-transfer binaries, but find this explanation somewhat weak because none of these stars were found to show strong FUV$_{140}$ variability and none have yet been identified as sources of X-ray emission.
The dimmest three bright blue gap objects (UV13, UV14, & UV15; Table \[tab:uv\]) appear possibly to be associated with the WD sequence, but are more than 1.5 magnitudes brighter than the other white dwarfs in B$_{435}$, making this seem unlikely as well. Based on their position in the CMD alone, one could infer that the entire bright blue gap object population could be related to the BHk stars. But again, we rule this out as a likely explanation because BHk stars are not expected to be found more than approximately 1 magnitude dimmer than the ZAHB in UV filters [@bro01 and references therein]; it can be seen in all the CMDs that the bright blue gap objects are significantly more than 1 mag dimmer than the equivalent-temperature BHB stars in all three filters. UV8 seems to be the most likely BHk candidate of the bright blue gap objects judging by its location in the FUV$_{140} -$ NUV$_{220}$ CMDs; however, its location in the NUV$_{220} -$ B$_{435}$ diagrams makes its nature unclear. Finally, we return to the possibility that some of the bright blue gap objects might be young He WDs. As is most apparent in Figure \[fig:wd\_thick\], these objects appear to be roughly consistent with the early stages of the He WD cooling sequence for 0.200$-$0.275 M$_{\odot}$ He WDs in the thick H envelope models. In our analysis in §\[sec:dischewd\] we included these stars as plausible He WD candidates when analyzing the cooling ages and implied formation rates, and found that while the cooling ages implied for these stars require a somewhat increased production rate of He WDs over the last several hundred Myr, this interpretation seems very reasonable. We find it most likely that the majority of the bright blue gap objects are either young He WDs or somehow related to mass transfer binaries with current accretion disk phenomena.
We find the former to be the more convincing explanation, but do not rule out that the bright gap population may encompass two or more physically distinct populations, especially since it does include X-2.

DISCUSSION {#sec:discussion}
==========

Gap Population
--------------

The filter combination of FUV$_{140}$ and NUV$_{220}$ was very successful in identifying members of the gap zone, which are potential CVs as pointed out by D07. However, the addition of the B$_{435}$ filter has made the nature of these stars less clear. Because the majority of these stars appear on the MS in NUV$_{220} -$ B$_{435}$ CMDs, it seems unlikely that they are standard DNe CVs. We find the most likely explanation to be that this group is composed of a combination of magnetic CVs and WD-MS detached binaries. There is no analog in the literature for such a significant population of FUV-bright objects that appear MS-like in a color such as NUV$_{220} - $B$_{435}$, so we can draw only weak conclusions about the gap population we have identified in Figure \[fig:f\_cmds\]a without the addition of H$\alpha$ imaging to seek out emission line sources or spectroscopic follow-up.

M15 as a Blue Hook Cluster {#sec:m15bhk}
--------------------------

We have identified 5 of the EHB stars as potential BHk stars. If we are correct in our classification of these stars as BHk stars, it may provide an important data point in understanding the properties of BHk clusters. @die09 investigated trends among clusters containing BHk stars, with their strongest result being that BHk stars seem to exist exclusively in the most massive clusters. They note that this might be attributed to a natural bias toward massive clusters, since intrinsically rare stars are necessarily more likely to be found in larger samples.
They also found weaker correlations between the presence of BHk stars and other cluster properties such as concentration parameter, core radius, and relaxation time, but considered the most significant correlation to be that with cluster mass. M15 was included in their sample, but it was considered a non-BHk cluster as it was only known to have one BHk star in the outer region (see §\[sec:bhk\]). Had M15 been included as a BHk cluster in that study, it would have had a strong lever arm on the results because, of the clusters in their study that had 4 or more BHk stars, M15 has the lowest mass, highest central density, shortest core relaxation time, smallest core radius, and highest concentration parameter. However, it is unclear whether M15 would be considered a BHk cluster even with all the candidates we have identified here included, since all the “BHk clusters” in the @die09 sample had 10 or more BHk stars. Yet we have only searched a small region of M15, so there may be more BHk stars that we have not identified. Nevertheless, irrespective of how we define a BHk cluster, it is clear that a BHk population in M15 provides an important constraint in understanding the origin of BHk stars. Since we feel that the existence of a BHk population most likely weighs in favor of the late He-flash scenario, as discussed in §\[sec:discbhk\], it should be noted that within this scenario the expected mass range for BHk stars is quite small. @milber08 calculate that the expected mass of a low-metallicity remnant capable of becoming a BHk star via a late He-flash lies in a very narrow range between 0.48 and 0.50 M$_{\odot}$ (see that paper for more details).
It is expected that there would be a spread in the total mass loss from the progenitors of these BHk stars, thus resulting in a comparable number of stars with post-MS ages similar to the BHk stars ($\lesssim$ 100 Myr) that just “miss” becoming BHk stars and end up as either massive He WDs (which just avoid He-core ignition) or low-mass CO WDs (which arrive as such following a phase as an EHB or BHk star). It is unclear whether M15 contains such a population. There are a significant number of potential WDs that may be either low-mass CO WDs or massive He WDs in the appropriate age range (see Fig. \[fig:wd\_thin\] & \[fig:wd\_thick\]), but many of these candidates have ended up classified as “Ambiguous WDs” (Table \[tab:wd\]) or were only detected in two frames, due to the intrinsic photometric uncertainty associated with such dim stars. So, without a more precise determination of both the masses and ages of our WD candidates, we can make no further claim to the existence of such a population other than to note its expected presence.

SUMMARY & CONCLUSIONS {#sec:conclusions}
=====================

We have presented a photometric identification and analysis of several UV-bright populations in the central region of the post-core-collapse globular cluster M15. We have additionally included photometry for 4 previously identified X-ray sources: M15 CV1, AC211, M15 X-2, and H05X-B. Our work elaborates on that of @die07, who analyzed the FUV$_{140}$ and NUV$_{220}$ images. We reanalyzed these images and added the B$_{435}$ filter and the NUV$_{220} -$ B$_{435}$ color, which has added further insight into the nature of the populations discussed here. The UV-bright populations we have analyzed include many stages of non-canonical stellar evolution, including blue stragglers, extreme horizontal branch stars, blue hook stars, helium-core white dwarfs, and cataclysmic variable candidates.
We have selected 53 blue straggler candidates, which display a clear central concentration as expected for BSs. Since the expected zone of avoidance is beyond the field of view of our images, we are unable to investigate whether the BS population displays the bimodal radial distribution that has been discussed in the literature for several other clusters. We found 60 CV candidates populating the gap between the MS and WD regions, consistent with the findings of D07; however, upon inclusion of the B$_{435}$ filter we found that many of these stars do not display the expected CV colors, but instead appear as MS stars. Thus, we suggest that these gap objects may be magnetic CVs with truncated or absent accretion disks, or possibly detached WD-MS binaries. However, we also find three gap objects that we consider very likely CV candidates (UV16-18), as they display colors similar to what is expected for a CV. We have used a ZAHB model to select 6 extreme horizontal branch candidates, 5 of which appear to be subluminous in the UV and are therefore better candidates for blue hook stars. Given the existence of these candidates, in addition to the one previously identified BHk star, we suggest that M15 be considered a BHk cluster for future studies of clusters containing BHk stars, as it may provide important constraints on how these stars are formed. We also identify three stars that represent plausible post-HB stars, which may be AGB-manqué or post-early AGB stars, the progeny of EHB stars. Our identification of these post-HB stars is, however, very preliminary: their photometric properties seem consistent with AGB-manqué and P-EAGB states, yet statistically it seems improbable to have detected three stars in these relatively short-lived stages of evolution. Additionally, we uncovered a population of “bright blue gap objects” for which there is no obvious analog in the literature.
We consider these stars to most likely be young He WDs, but they could be related to the BHk population or accretion disk binary systems. UV10-UV15 seem to be the most plausible candidates for He WDs. There is no obvious population to which UV8 belongs, but it seems to be related to the HB and is a possible BHk candidate. In addition to the bright blue gap objects, we have identified a significant population of candidate He WDs, which we analyzed using model He WD cooling sequences. We have probed quite deeply into the He WD sequence and find 7 strong He WD candidates detected in all three images; an additional 38 candidates were detected in only two images. This is the first strong evidence of the existence of a He WD population in M15. We analyzed both thin and thick H envelope models and, based on the cooling ages and overall fit to the population, we find the thick H envelope models to be a better fit to our data. The formation rates suggested by the thin H envelope models are unreasonable unless we have misidentified several of our candidate He WDs. The formation rates implied by the thick H envelope models range from 7 to 37 He WDs produced per Gyr. The collision rate of red giants in the core of M15 is high enough that collisions may account for a significant fraction of the He WDs. Furthermore, the implied lower limit on the binary fraction calculated from these formation rates suggests that close binary systems are also likely to contribute to the formation of He WDs in M15. Althaus, L. G., Serenelli, A. M., & Benvenuto, O. G. 2001, , 324, 617 Anderson, S. F., Margon, B., Deutsch, E. W., & Downes, R. A. 1993, , 106, 1049 Aurière, M., Le Fevre, O., & Terzan, A. 1984, , 138, 415 Bailey, J. 1980, , 190, 119 Bedin, L. R., Piotto, G., Anderson, J., Cassisi, S., King, I. R., Momany, Y., & Carraro, G. 2004, , 605, L125 Benvenuto, O. G., & Althaus, L. G. 1998, , 293, 177 Bertola, F., Bressan, A., Burstein, D., Buson, L.
M., Chiosi, C., & di Serego Alighieri, S. 1995, , 438, 680 Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton, NJ: Princeton University Press) Boffi, F. R., Sirianni, M., Lucas, R. A., Walborn, N. R., & Proffitt, C. R. 2008, Technical Instrument Report ACS 2008-002 Brown, T. M., Sweigart, A. V., Lanz, T., Landsman, W. B., & Hubeny, I. 2001, , 562, 368 Buonanno, R., Corsi, C. E., & Fusi Pecci, F. 1985, , 145, 97 Busso, G., *et al.* 2007, , 474, 105 Calamida, A., *et al.* 2008, , 673, L29 Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245 Cassisi, S., Castellani, M., Caputo, F., & Castellani, V. 2004, , 426, 641 Castellani, M., & Castellani, V. 1993, , 407, 649 Chiaberge, M., & Sirianni, M. 2007, Instrument Science Report ACS 07-03 Cho, D.-H., & Lee, S.-G. 2007, , 133, 2163 Cohn, H. 1980, , 242, 765 Cool, A. M., Grindlay, J. E., Cohn, H. N., Lugger, P. M., & Bailyn, C. D. 1998, , 508, L75 D’Antona, F., & Caloi, V. 2008, , 390, 693 D’Cruz, N. L., *et al.* 2000, , 530, 352 Dalessandro, E., Lanzoni, B., Ferraro, F. R., Vespe, F., Bellazzini, M., & Rood, R. T. 2008, , 681, 311 Davies, M. B., Benz, W., & Hills, J. G. 1991, , 381, 449 de Marchi, G., & Paresce, F. 1994, , 422, 597 de Marchi, G., & Paresce, F. 1996, , 467, 658 Dieball, A., Knigge, C., Maccarone, T. J., Long, K. S., Hannikainen, D. C., Zurek, D., & Shara, M. 2009, , 394, L56 Dieball, A., Knigge, C., Zurek, D. R., Shara, M. M., Long, K. S., Charles, P. A., Hannikainen, D. C., & van Zyl, L. 2005, , 634, L105 Dieball, A., Knigge, C., Zurek, D. R., Shara, M. M., Long, K. S., Charles, P. A., & Hannikainen, D. 2007, , 670, 379 (D07) Dieball, A., Knigge, C., Zurek, D. R., Shara, M. M., Long, K. S., Charles, P. A., & Hannikainen, D. 2010, , 708, 1772 Dobrotka, A., Lasota, J.-P., & Menou, K. 2006, , 640, 288 Dorman, B., Rood, R. T., & O’Connell, R. W. 1993, , 419, 596 Dull, J. D., Cohn, H. N., Lugger, P. M., Murphy, B. W., Seitzer, P. O., Callanan, P. J., Rutten, R. G. M., & Charles, P.
A. 1997, , 481, 267 Dull, J. D., Cohn, H. N., Lugger, P. M., Murphy, B. W., Seitzer, P. O., Callanan, P. J., Rutten, R. G. M., & Charles, P. A. 2003, , 585, 598 Edmonds, P. D., Gilliland, R. L., Heinke, C. O., Grindlay, J. E., & Camilo, F. 2001, , 557, L57 Edmonds, P. D., Gilliland, R. L., Heinke, C. O., & Grindlay, J. E. 2003, , 596, 1197 Edmonds, P. D., Grindlay, J. E., Cool, A., Cohn, H., Lugger, P., & Bailyn, C. 1999, , 516, 250 Edmonds, P. D., Heinke, C. O., Grindlay, J. E., & Gilliland, R. L. 2002, , 564, L17 Ferraro, F. R., Paltrinieri, B., Pecci, F. F., Rood, R. T., & Dorman, B. 1998, , 500, 311 Ferraro, F. R., Pecci, F. F., Cacciari, C., Corsi, C., Buonanno, R., Fahlman, G. G., & Richer, H. B. 1993, , 106, 2324 Ferraro, F. R., Possenti, A., Sabbi, E., & D’Amico, N. 2003, , 596, L211 Fregeau, J. M., Cheung, P., Portegies Zwart, S. F., & Rasio, F. A. 2004, , 352, 1 Gebhardt, K., Pryor, C., Williams, T. B., & Hesser, J. E. 1994, , 107, 2067 Gebhardt, K., Pryor, C., Williams, T. B., Hesser, J. E., & Stetson, P. B. 1997, , 113, 1026 Grindlay, J. E., Cool, A. M., Callanan, P. J., Bailyn, C. D., Cohn, H. N., & Lugger, P. M. 1995, , 455, L47 Hannikainen, D. C., Charles, P. A., van Zyl, L., Kong, A. K. H., Homer, L., Hakala, P., Naylor, T., & Davies, M. B. 2005, , 357, 325 Hansen, B. M. S., Kalogera, V., & Rasio, F. A. 2003, , 586, 1364 Harris, W. E. 1996 , 112, 1478 Heber, U. 1987, Mitteilungen der Astronomischen Gesellschaft Hamburg, 70, 79 Heinke, C. O., Cohn, H. N., & Lugger, P. M. 2009, , 692, 584 Hut, P., et al. 1992, , 104, 981 Ivanova, N., Heinke, C. O., Rasio, F. A., Taam, R. E., Belczynski, K., & Fregeau, J. 2006, , 372, 1043 Knigge, C., Dieball, A., Ma[í]{}z Apell[á]{}niz, J., Long, K. S., Zurek, D. R., & Shara, M. M. 2008, , 683, 1006 Knigge, C., Leigh, N., & Sills, A. 2009, , 457, 288 Knigge, C., Zurek, D. R., Shara, M. M., & Long, K. S. 2002, , 579, 752 Kraft, R. P., & Ivans, I. I. 
2003, , 115, 143 Lee, Y.-W., *et al.* 2005, , 621, L57 Leigh, N., Sills, A., & Knigge, C. 2007, , 661, 210 Liebert, J. 1988, , 100, 1302 Mapelli, M., Sigurdsson, S., Ferraro, F. R., Colpi, M., Possenti, A., & Lanzoni, B. 2006, , 373, 361 Miller Bertolami, M. M., Althaus, L. G., Unglaub, K., & Weiss, A. 2008, , 491, 253 Moehler, S., Dreizler, S., Lanz, T., Bono, G., Sweigart, A. V., Calamida, A., Monelli, M., & Nonino, M. 2007, , 475, L5 Moehler, S., Heber, U., & de Boer, K. S. 1995, , 294, 65 Moehler, S., Heber, U., & Durell, P. R. 1997, , 317, L83 Moehler, S., Sweigart, A. V., Landsman, W. B., Hammer, N. J., & Dreizler, S. 2004, , 415, 313 Nair, P. H., Kafka, S., Honeycutt, R. K., & Gilliland, R. L. 2005, Information Bulletin on Variable Stars, 5585, 1 O’Toole, S. J., Napiwotzki, R., Heber, U., Drechsel, H., Frandsen, S., Grundahl, F., & Bruntt, H. 2006, Baltic Astronomy, 15, 61 Pietrinferni, A., Cassisi, S., Salaris, M., & Castelli, F. 2004, , 612, 168 Piotto, G., et al.  2007, , 661, L53 Pols, O. R., Schroder, K.-P., Hurley, J. R., Tout, C. A., & Eggleton, P. P. 1998, , 298, 525 Pooley, D., et al.  2003, , 591, L131 Preston, G. W., Sneden, C., Thompson, I. B., Shectman, S. A., & Burley, G. S. 2006, , 132, 85 Sandquist, E. L., & Hess, J. M. 2008, , 136, 2259 Schmidt, G. D., *et al.*  2003, , 595, 1101 Serenelli, A. M., Althaus, L. G., Rohrmann, R. D., & Benvenuto, O. G. 2002, , 337, 1091 Shahbaz, T., Zurita, C., Casares, J., Dubus, G., Charles, P. A., Wagner, R. M., & Ryan, E. 2003, , 585, 443 Shara, M. M., Bergeron, L. E., Gilliland, R. L., Saha, A., & Petro, L. 1996, , 471, 804 Shara, M. M., Hinkley, S., Zurek, D. R., Knigge, C., & Bond, H. E. 2004, , 128, 2847 Sigurdsson, S., Richer, H. B., Hansen, B. M., Stairs, I. H., & Thorsett, S. E. 2003, Science, 301, 193 Sirianni, M., et al.  2005, , 117, 1049 Stetson, P. B., Davis, L. E., & Crabtree, D. R. 1990, CCDs in astronomy, 8, 289 Strickler, R. R., Cool, A. M., Anderson, J., Cohn, H. N., Lugger, P. 
M., & Serenelli, A. M. 2009, , 699, 40 Taylor, J. M., Grindlay, J. E., Edmonds, P. D., & Cool, A. M. 2001, , 553, L169 van den Bosch, R., de Zeeuw, T., Gebhardt, K., Noyola, E., & van de Ven, G. 2006, , 641, 852 van der Marel, R. P., Gerssen, J., Guhathakurta, P., Peterson, R. C., & Gebhardt, K. 2002, , 124, 3255 Webbink, R. F. 1975, , 171, 555 White, N. E., & Angelini, L. 2001, , 561, L101 Whitney, J. H., et al.  1998, , 495, 284 Wickramasinghe, D. T., & Ferrario, L. 2000, , 112, 873 Yanny, B., Guhathakurta, P., Bahcall, J. N., & Schneider, D. P. 1994, , 107, 1745 [c c c c c c]{} AC211$^{a}$ & 21:29:58.323 & +12:10:01.94 & 15.23 & 14.49 & 13.91\ X-2$^{b}$ & 21:29:58.142 & +12:10:01.52 & 19.51 & 17.98 &16.96\ CV1$^{c}$ & 21:29:58.357 & +12:10:00.33 & 20.42 & 20.28 & 21.06\ HX05-B$^{d}$ & 21:29:58.323 & +12:10:11.69 & 21.19 & 23.33 & - \[tab:known\] [c c c c c c c]{} UV1 & 233.98 & 93.13 & 16.14 & 15.57 & 14.85 & AGB-manqué\ UV2 & 341.12 & 920.76 & 14.61 & 16.22 & 16.82 & P-EAGB\ UV3 & 579.31 & 500.38 & 14.74 & 16.42 & 17.74 & P-EAGB\ UV4 & 773.86 & 722.57 & 17.88 & 16.70 & 15.67 & BHk\ UV5 & 65.31 & 1016.84 & 18.07 & 16.89 & 15.85 &BHk\ UV6 & 672.66 & 595.49 & 18.40 & 17.04 & 15.97 & BHk\ UV7 & 932.36 & 976.88 & 18.40 & 17.20 & 16.16 & BHk\ UV8 & 688.90 & 696.32 & 17.74 & 17.20 & 16.47 & Unknown\ UV9 & 1028.58 & 1072.23 & - & 17.42 & 16.08 & BHk\ UV10 & 636.65 & 635.43 & 18.70 & 18.31 & 17.45 & He WD\ UV11 & 548.94 & 603.81 & 20.10 & 18.43 & 17.16 & He WD\ UV12 & 630.50 & 602.71 & 18.58 & 18.55 & 17.83 & He WD\ UV13 & 629.48 & 698.07 & 18.82 & 18.69 & 17.86 & He WD\ UV14 & 582.81 & 643.68 & 20.47 & 19.01 & 17.85 & He WD\ UV15 & 412.42 & 687.64 & 20.07 & 19.01 & 18.06 & He WD\ UV16 & 209.254 & 608.48 & 20.24 & 20.95 & 20.12 & CV\ UV17 & 626.60\* & 369.06\* & 22.33 & 23.24 & 21.81 & CV\ UV18 & 212.69\* & 980.50\* & 23.42 & 24.25 & 23.68 & CV \[tab:uv\] [c c c c]{} **All WDs**$^{1}$ & **73** & **20**\ CO WDs & 11 & 5\ He WDs & 45 & 7\ Ambiguous WDs$^{2}$ & 10 & 5\ 
Possible WDs$^{3}$ & 5 & 3\ D07 Variables & 2 & -\ Bright He WDs$^{4}$ & 7 & 7 \[tab:wd\] ![The central $10 \times 10$ region of the B$_{435}$, NUV$_{220}$, and FUV$_{140}$ images (from left to right). North is up in all 3 images.[]{data-label="fig:master_images"}](f1.eps){width="100.00000%"} ![Comparison of NUV$_{220} -$ B$_{435}$ and FUV$_{140} -$ NUV$_{220}$ CMDs. Stellar populations are distinguished and color-coded according to their positions in 2a, the NUV$_{220} -$ B$_{435}$ CMD. See key in upper righthand corner of 2b for symbol explanation. Theoretical WD cooling sequences for CO WDs (solid lines) ranging in mass from 0.45 - 1.10 M$_{\odot}$ and thick H envelope He WDs (dashed lines) ranging in mass from 0.175 - 0.45 M$_{\odot}$, as well as two ZAHBs (solid dark red line and dark blue lines) are included; see §\[sec:wdcool\] and \[sec:HB\] for details. All variable stars were identified as such by D07. The “dim gap objects” (NUV $>$ 22) have been plotted as a separate group so they can be distinguished on all CMDs; the reason for this differentiation is described in §\[sec:gapdist\]. Squares mark optical counterparts of four X-ray sources discussed in the text.[]{data-label="fig:b_cmds"}](f2.eps){width="100.00000%"} ![Same as Figure \[fig:b\_cmds\] except populations distinguished and color-coded according to their positions on the FUV$_{140} - $NUV$_{220}$ (3a) CMD. See key in upper righthand corner of 3a for symbol explanation. The RGB has not been distinguished from the MS in these figures as it is very difficult to distinguish the RGB in 3a, as discussed in the text.[]{data-label="fig:f_cmds"}](f3.eps){width="100.00000%"} ![Similar to Figures \[fig:b\_cmds\] & \[fig:f\_cmds\] but with the magnitude axis being NUV$_{220}$ for both CMDs. Populations distinguished and color-coded according to their position in Figure \[fig:b\_cmds\]a.
See key in upper righthand corner of 4a for symbol explanations.[]{data-label="fig:n_cmds"}](f4.eps){width="100.00000%"} ![The location of the suspected counterpart to H05X-B in B$_{435}$, NUV$_{220}$, and FUV$_{140}$, respectively.[]{data-label="fig:H05X-B"}](f5.eps){width="100.00000%"} ![The location of CV1 in B$_{435}$, NUV$_{220}$, and FUV$_{140}$, respectively.[]{data-label="fig:CV1"}](f6.eps){width="100.00000%"} ![CMD showing the stars from Tables \[tab:known\] & \[tab:uv\] (red squares) as well as CV candidate variable stars from D07 (purple triangles).[]{data-label="fig:cb"}](f7.eps){width="100.00000%"} ![WD candidates and model cooling sequences for CO WDs (blue solid lines) and thin H envelope He WDs (purple dot-dash lines). Masses (in M$_{\odot}$) for models, from left to right – CO WDs: 1.10, 1.00, 0.90, 0.80, 0.70, 0.60, 0.50, 0.45; thin H envelope He WDs: 0.45, 0.35, 0.30, 0.25, 0.20, 0.175. Cooling ages are marked along the cooling curves and indicated in the key located in the upper right. The cooling curve for the fiducial 0.6 M$_{\odot}$ CO WD has been plotted as a thicker line for orientation purposes. Filled circles are WD candidates that were detected in all three frames; open circles were only detected in two frames; larger red filled circles were detected in all three frames and represent the 7 *strongest* He WD candidates (see §\[sec:dischewd\]); and filled triangles are stars identified as variables in D07. The grey asterisks are bright blue gap objects that we consider possible He WD candidates based on the curves in Figure \[fig:wd\_thick\] (§\[sec:dischewd\] & \[sec:discbrightgap\]). Error bars shown were calculated by the program ALLSTAR in DAOPHOT II (see Stetson *et al.* 1990).
The error bars in the upper portion of the figure show the 1$\sigma$ error for the cooling curves due to the uncertainty in distance and reddening.[]{data-label="fig:wd_thin"}](f8.eps){width="100.00000%"} ![WD candidates and model cooling sequences for CO WDs (blue solid lines) and thick H envelope He WDs (green dashed lines). Symbols for the WD candidates, bright blue gap objects, and CO WD curves are the same as in Fig. \[fig:wd\_thin\]. Masses (in M$_{\odot}$) for models from left to right – CO WDs: 1.10, 1.00, 0.90, 0.80, 0.70, 0.60, 0.50, 0.45; thick H envelope He WDs: 0.45, 0.40, 0.35, 0.30, 0.275, 0.25, 0.225, 0.20, 0.175. Cooling ages for thick envelope He WD curves are indicated in the key located in the upper right.[]{data-label="fig:wd_thick"}](f9.eps){width="100.00000%"} ![Cumulative radial distributions for selected stellar populations. The populations used for the left panel were selected from Fig. 2a and the populations used for the right panel were selected from Fig. 3a (i.e. “BS Candidates” on the left panel represents the distribution of those plotted as inverted blue triangles on Fig. 2 and “BS candidates” on the right panel represents the distribution of those plotted as inverted blue triangles in Fig. 3). See key in lower righthand corner of right panel. The distribution for “Gap Objects” includes *all* objects identified as gap objects (no magnitude limit).[]{data-label="fig:raddist"}](f10.eps){width="100.00000%"} ![Gap objects from Figure \[fig:f\_cmds\]a (green dots) are plotted with grids from our MS-WD detached binary system models. Gap objects that were not detected in B$_{435}$ are plotted as green inverted triangles and stars classified as WDs are plotted as grey pinched triangles. Intersections are labeled by the effective temperature of the WD and a letter representing the MS model (see §\[sec:gapwdms\]); each intersection represents the resultant flux for the combination of the MS and a 0.6 M$_{\odot}$ CO WD star.
MS models range from NUV$_{220}$ $\approx$ 20.3 - 24.5 (A-J).[]{data-label="fig:grid"}](f11.eps){width="100.00000%"} ![Color-color diagram for stars detected in all three filters; symbols as in Fig. \[fig:grid\]. For reference, MS stars appear as a clump on the right hand side, HB stars appear at the top above the model WD-MS binaries, and the BSs form a sequence near the line corresponding to WD-MS binaries that include the MS model labeled F.[]{data-label="fig:gridcc"}](f12.eps){width="100.00000%"}
--- abstract: 'The particular advantages of using the diatomic molecule radium monofluoride (RaF) as a versatile molecular probe for physics beyond the Standard Model are highlighted. i) RaF was previously suggested as being potentially amenable to direct cooling with lasers. As shown in the present work, RaF’s energetically lowest electronically excited state is of ${}^{2}\Pi$ symmetry (in contrast to BaF), such that no low-lying ${}^{2}\Delta$ state prevents efficient optical cooling cycles. ii) The effective electric field acting on the unpaired electron in the electronic ground state of RaF is estimated to be larger than in YbF, from which the best restrictions on the electron electric dipole moment (eEDM) were obtained experimentally. iii) Favourable crossings of spin-rotational levels of opposite parity in external magnetic fields exist, which are important for the measurement of the nuclear anapole moment of nuclei with a valence neutron. Thus, RaF currently appears to be one of the most attractive candidates for the investigation of parity-odd as well as simultaneously parity- and time-reversal-odd interactions in the realms of molecular physics.' author: - 'T. A. Isaev' - 'R. Berger' title: 'Lasercooled radium monofluoride: A molecular all-in-one probe for new physics' --- Introduction {#introduction .unnumbered} ============ The principal advantage of using heavy-atom, polar, diatomic molecules in searches for space-parity-violating interactions ([$\cal P$-odd ]{}interactions) and for forces violating both space parity and time reversal ([$\cal P, T$-odd ]{}forces) has been known for more than 30 years.
Nevertheless, only the latest generation of molecular experiments has finally surpassed their atomic competitors in sensitivity to one of the most important [$\cal P, T$-odd ]{}properties of elementary particles, namely the permanent electric dipole moment of the electron (eEDM) [@Hudson:11] (see also the recent report on preliminary data for ThO, which claims an improved restriction on the eEDM by almost an order of magnitude [@acme:2013]). Whereas molecules are ideally tailored to create favourable, well-defined fields at the heavy nucleus, the complexity of measurements with molecules is typically connected with various systematic effects that can mimic [$\cal P$-odd ]{}correlations. This calls for an active identification of promising molecular candidates that allow different approaches to suppress systematic effects. In the search for an eEDM, experimentally oriented research groups are currently focussing on the molecules YbF [@Hudson:11], PbO [@Demille:08], ThO [@acme:11; @acme:2013], WC [@Lee:09] and PbF [@Shafer:06], and on the molecular ion HfH$^+$ [@Leanhardt:11]. In all these experiments, high-quality electronic structure calculations are crucial both for the preparation stage of the experiment and for the subsequent interpretation of the experimental data obtained [@Titov:06amin]. Another set of pivotal molecular experiments is connected with attempts to measure the nuclear anapole moment, a [$\cal P$-odd ]{}electromagnetic form-factor appearing in $I>0$ nuclei due to [$\cal P$-odd ]{}nuclear forces. The only nucleus for which the anapole moment has been successfully determined is $^{133}$Cs; in this experiment a vapour of Cs atoms was employed [@Wood:97]. The results are apparently in disagreement with the earlier measurement on the Tl atom [@vetter:1995; @ginges:2004]. Currently, molecular experiments are under development at Yale on BaF [@Demille:08] and in Groningen on SrF [@vandenberg:2012].
Recently, we identified the open-shell diatomic molecule RaF as an exceptionally suitable candidate for nuclear anapole moment measurements, having on the one hand a high enhancement factor for the nuclear spin-dependent weak interaction and on the other hand offering potential for direct cooling with lasers. In the present work we demonstrate that RaF presents unique possibilities for measurements of [$\cal P$-odd ]{}and [$\cal P, T$-odd ]{}effects due to a favourable combination of peculiarities of its molecular electronic structure and the nuclear structure of radium isotopes. Direct cooling of molecules with lasers {#direct-cooling-of-molecules-with-lasers .unnumbered} ======================================= We identified earlier a set of requirements on the molecular electronic structure that make molecules suitable for direct cooling with lasers [@Isaev:10a]. Monofluorides of group II elements (e.g. BaF and RaF) belong to the first class of molecules with a highly diagonal Franck–Condon matrix. One problem that emerges in laser cooling of the BaF molecule (isovalent to RaF), however, is the existence of a metastable $^2\Delta$ level, lying energetically below the $^2\Pi$ level involved in the optical cooling loop. Our previous electron correlation calculations of the spectroscopic parameters in RaF indicated that the energetically lowest electronically excited level is $^2\Pi$. We have now taken larger atomic basis sets and active spaces of virtual molecular orbitals to investigate the stability of the ordering of electronic levels. The results are summarized in [Table \[raf\]]{}. These show that even considerable alterations in the parameters of the Fock-space relativistic coupled cluster (FS-RCC, as implemented in the [dirac]{} program package [@DIRAC:11]) calculations do not change the ordering of levels in RaF.
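The role of the diagonal Franck–Condon matrix mentioned above can be illustrated with a toy model: for two harmonic potentials of equal frequency whose minima are slightly displaced, the Franck–Condon factors out of the vibrational ground state follow a Poisson distribution in the Huang–Rhys factor $S$. The sketch below is purely schematic; the value $S = 0.02$ is a hypothetical illustration of a small displacement, not a computed RaF parameter.

```python
import math

def franck_condon_from_0(S, n_max=5):
    """Franck-Condon factors |<n'|0>|^2 for two harmonic potentials of equal
    frequency displaced by a dimensionless Huang-Rhys factor S: they follow
    a Poisson distribution, exp(-S) * S**n / n!."""
    return [math.exp(-S) * S**n / math.factorial(n) for n in range(n_max + 1)]

# Hypothetical small displacement between ground- and excited-state
# equilibrium bond lengths (S = 0.02, for illustration only):
fc = franck_condon_from_0(0.02)
print(f"0-0 factor: {fc[0]:.4f}")            # close to 1: nearly diagonal
print(f"leakage to v'' >= 1: {1 - fc[0]:.4f}")
```

A 0-0 factor close to unity means that almost every spontaneous decay returns the molecule to the initial vibrational level, so only a few repump lasers are needed to close the optical cooling cycle.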
On the other hand, FS-RCC calculations of BaF with a basis set of similar quality as for RaF and equally large active spaces (see supplementary material) confirm the first excited electronic level in BaF to be $^2\Delta$. Based on a comparison to experimental BaF results, we estimate the accuracy of the $\tilde{T}_\mathrm{e}$ calculation for RaF to be within 1200 [cm$^{-1}$]{} (without changing the ordering of the levels), $R_\mathrm{e}$ within about 0.1 $a_0$ and $\tilde{\omega}_\mathrm{e}$ within about 60 [cm$^{-1}$]{}. [lcccc]{} & $R_\mathrm{e}/a_0$ & $\tilde{\omega}_\mathrm{e}/\mathrm{cm}^{-1}$ & $\tilde{T}_\mathrm{e}/(10^4 \mathrm{cm}^{-1})$ & $\tilde{D}_\mathrm{e}/(10^4 \mathrm{cm}^{-1})$\ \ \ $^2\Sigma_{1/2}$ & 4.24$^a$& 428$^a$& & 3.21$^a$\ $^2\Pi_{1/2}$ & 4.24$^a$& 432$^a$& 1.40$^a$& 3.13$^a$\ $^2\Pi_{3/2}+ ^2\Delta_{3/2}$ & 4.25 & 410 & 1.60 &\ $^2\Delta_{3/2}+ ^2\Pi_{3/2}$ & 4.25 & 432 & 1.64 &\ $^2\Delta_{5/2}$ & 4.27 & 419 & 1.71 &\ $^2\Sigma_{1/2}$ & 4.26 & 416 & 1.81 &\ \ $^2\Sigma_{1/2}$ & 4.29 & 431 & & 4.26$^b$\ $^2\Pi_{1/2}$ & 4.29 & 428 & 1.33 &\ $^2\Pi_{3/2}+ ^2\Delta_{3/2}$ & 4.31 & 415 & 1.50 &\ $^2\Delta_{3/2}+ ^2\Pi_{3/2}$ & 4.28 & 431 & 1.54 &\ $^2\Delta_{5/2}$ & 4.30 & 423 & 1.58 &\ $^2\Sigma_{1/2}$ & 4.32 & 419 & 1.67 &\ \ \ $^2\Sigma_{1/2}$ & 4.15 & 456 & & 4.67$^c$\ $^2\Delta_{3/2}$ & 4.21 & 455 & 1.09 &\ $^2\Delta_{5/2}$ & 4.21 & 455 & 1.13 &\ $^2\Pi_{1/2}$ & 4.19 & 446 & 1.17 &\ $^2\Pi_{3/2}$ & 4.19 & 444 & 1.24 &\ $^2\Sigma_{1/2}$ & 4.23 & 455 & 1.42 &\ \ \ $^2\Sigma_{1/2}$ & 4.09$^d$& 469$^d$& &4.68 $\pm$ 0.07$^d$\ $^2\Delta$ & & 437 & 1.0940 &\ $^2\Pi$ & 4.13$^e$& 437 & 1.1727 &\ $^2\Sigma$ & 4.17$^e$& 424 & 1.3828 &\ \ \ Level crossing in magnetic field and sensitivity to the nuclear anapole moment {#level-crossing-in-magnetic-field-and-sensitivity-to-the-nuclear-anapole-moment .unnumbered} ============================================================================== [$\cal P$-odd ]{}effects in diatomic molecules can be greatly enhanced by shifting
levels of opposite parity to near-crossing with the help of external magnetic fields [@flambaum:1985]. This idea is exploited in [@Demille:08] in attempts to measure the nuclear anapole moment in BaF. One of the main problems in the suggested approach is to create highly homogeneous magnetic fields in large volumes. Favourable values of the magnetic fields required to tune spin-rotational levels to near-crossing would be below 10 kG (1 T), as the creation of larger magnetic flux densities requires special effort, for instance superconducting magnets. To estimate whether fields with $|B| < 1~$T suffice to create near level crossings in RaF, we calculated Zeeman splittings for spin-rotational levels (see Figs. 2 and 3 in the supplementary material). Matrix elements of the spin-rotational Hamiltonian in a magnetic field were implemented as in Ref. [@kozlov:1991]. The following parameters of the spin-rotational Hamiltonian were employed: rotational constant $B_\mathrm{e}=5689~$MHz (as calculated from the equilibrium structure), ratio of the spin-doubling constant $\Delta$ to $2B$, $\Delta/2B=0.97$ (as calculated within a four-component Dirac–Hartree–Fock approach), components of the hyperfine tensor for the $^{225}$Ra nucleus $A_{\parallel}=-15100$ MHz and $A_\perp=-14800$ MHz (calculated with the two-component zeroth order regular approximation (ZORA) approach as implemented in a modified version of the program package [turbomole]{} [@ahlrichs:1989]; values were already used for scaling in Ref. [@isaev:2012], but not explicitly reported therein) and components of the $G$-tensor $G_{\parallel}=1.993$ and $G_\perp=1.961$ (a crude estimate based on results for HgH). According to our calculation, the first crossing of levels of opposite parity takes place at about 3 kG for levels with the projection $F_z$ of the total angular momentum on the direction of the magnetic field being $-3/2$.
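The order of magnitude of such crossing fields can be seen from a minimal two-level sketch: two levels of opposite parity, separated by a zero-field gap and shifted linearly by the Zeeman interaction with different effective magnetic moments, cross where the differential Zeeman shift equals the gap. The numbers below (a 2 GHz gap and effective g-factors of 1.0 and 0.5) are purely hypothetical illustrations and stand in for the full spin-rotational calculation described in the text.

```python
# Minimal two-level sketch of a parity-level crossing in a magnetic field.
# All parameters are hypothetical, not computed RaF values.
MU_B_HZ_PER_G = 1.3996e6  # Bohr magneton over h, in Hz per gauss

def crossing_field(delta_hz, g_lower, g_upper):
    """Field B* (in gauss) where E_upper(B) = delta + g_upper*mu_B*B meets
    E_lower(B) = g_lower*mu_B*B, i.e. B* = delta / ((g_lower - g_upper)*mu_B)."""
    return delta_hz / ((g_lower - g_upper) * MU_B_HZ_PER_G)

b_star = crossing_field(2.0e9, 1.0, 0.5)  # hypothetical 2 GHz zero-field gap
print(f"levels cross near {b_star / 1e3:.1f} kG")
```

With zero-field gaps of the order of the rotational splitting and g-factor differences of order one, the crossing fields naturally fall in the kG range, i.e. below the 10 kG limit quoted above.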
A few more crossings take place in fields up to 10 kG for levels with $F_z=-1/2$, thus providing additional freedom for choosing the optimal experimental parameters. To estimate the lowest possible flux of RaF molecules that would allow a measurement of the anapole moment in RaF, one needs to find the ratio $\frac{\Delta W}{W}$ with $W$ being the experimentally measured signal proportional to the matrix element of the nuclear spin-dependent weak interaction $W_\mathrm{a}$, and $\Delta W$ its experimental uncertainty. The condition for a meaningful measurement is $\frac{\Delta W}{W} < 1$. We assume that the experimental scheme employed for measurement of the anapole moment is analogous to the one suggested in [@Demille:08]. According to [@Demille:08], the shot-noise-limited value of $\frac{\Delta W}{W}$ is $$\frac{\Delta W}{W} \simeq \frac{1}{2 \sqrt{2 N_0} t W},$$ with $N_0$ being the total number of molecules ($N_0=F \tau$, where $F$ is the [*detected*]{} molecular flux and $\tau$ is the total measurement time) and $t$ the interaction time between the molecule and the external fields. Thus one gets $F > 1/(8 W^2 t^2 \tau)$. The time for molecular trapping can reach a few seconds [@Hoekstra:07], so let $t=1$ s, $\tau= 1~\mathrm{h}=3600~\mathrm{s}$. To estimate $W$, we simply scale the $W$ value for Ba given in Ref. [@Demille:08], as $W^\mathrm{Ba}/W^\mathrm{Ra} \simeq W_\mathrm{a}^\mathrm{Ba}/W_\mathrm{a}^\mathrm{Ra}$, and take $W^\mathrm{Ra} \simeq 10W^\mathrm{Ba} = 50~\mathrm{Hz}$. The minimum required flux of RaF is then roughly $F = 1/(8\cdot 2500 \cdot 1 \cdot 3600)~\mathrm{s}^{-1} = 1.4\cdot 10^{-8}~\mathrm{s}^{-1}$. In practice one might expect trapping and detection of at least one molecule during an experiment time of $\tau= 1~\mathrm{h}$ (see below), corresponding to a flux of $2.8\cdot 10^{-4}~\mathrm{s}^{-1}$, which is a few orders of magnitude higher than the minimum required flux of RaF.
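The flux estimate above is a one-line computation; the snippet below simply reproduces the numbers quoted in the text ($W = 50$ Hz, $t = 1$ s, $\tau = 1$ h).

```python
# Reproduce the minimum-flux estimate F > 1/(8 W^2 t^2 tau) from the text.
W = 50.0      # Hz, scaled signal from the nuclear spin-dependent weak interaction
t = 1.0       # s, interaction time of a trapped molecule with the fields
tau = 3600.0  # s, total measurement time (1 h)

F_min = 1.0 / (8.0 * W**2 * t**2 * tau)
F_single = 1.0 / tau  # one detected molecule per hour of measurement

print(f"minimum required flux: {F_min:.1e} s^-1")   # ~1.4e-8
print(f"one molecule per hour: {F_single:.1e} s^-1")  # ~2.8e-4
```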
It should thus be possible to perform successful measurements with signals from [*single*]{} trapped RaF molecules. Effective electric field acting on the unpaired electron in RaF {#effective-electric-field-acting-on-the-unpaired-electron-in-raf .unnumbered} =============================================================== One of the most important parameters in molecular experiments on the eEDM is the effective electric field $E_\mathrm{eff}$ acting on the unpaired electron in the electronic ground state of the molecule of interest. This field, however, cannot be measured directly in experiment, but is obtained from quantum chemical calculations. We estimate here the effective electric field acting on the unpaired electron in RaF by using relations between matrix elements of different [$\cal P$-odd ]{}and [$\cal P, T$-odd ]{}operators, as was done by Kozlov in Ref. [@kozlov:1985] and extended recently in Ref. [@dzuba:2011]. According to the semiempirical model of Kozlov [@kozlov:1985], the relation between the parameter $W_\mathrm{s}$ of the [$\cal P, T$-odd ]{}term and the parameter $W_\mathrm{a}$ of the [$\cal P$-odd ]{}term in the effective spin-rotational Hamiltonian is $$\label{eq:WstoWa} W_\mathrm{s}/W_\mathrm{a} = 3 Z \gamma/(2 \gamma + 1),$$ where $Z$ is the nuclear charge number and $\gamma = \sqrt{1-(\alpha Z)^2}$. As one can see from [Table \[all\]]{}, this relation provides good agreement with the results of explicit calculations of $W_\mathrm{s}$ by a two-component ZORA generalized Hartree–Fock (GHF) method for the BaF, YbF and RaF molecules. For HgH and CnH, on the other hand, there is a larger discrepancy between estimated and calculated $W_\mathrm{s}$. This can be attributed to the influence of core-valence polarisation, which also contributes considerably to the $W_\mathrm{a}$ values, as noted in [@isaev:2012].
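The semiempirical relation above is straightforward to evaluate; the sketch below computes the ratio $W_\mathrm{s}/W_\mathrm{a}$ for the three fluorides discussed here. Only the relation itself is from the text; the numerical printout is our own illustration.

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def ws_over_wa(Z):
    """Kozlov's semiempirical relation from the text:
    W_s / W_a = 3 * Z * gamma / (2*gamma + 1), gamma = sqrt(1 - (alpha*Z)^2)."""
    gamma = math.sqrt(1.0 - (ALPHA * Z)**2)
    return 3.0 * Z * gamma / (2.0 * gamma + 1.0)

# The ratio grows with nuclear charge, reflecting the relativistic
# enhancement in heavier systems:
for name, Z in [("BaF", 56), ("YbF", 70), ("RaF", 88)]:
    print(f"{name}: W_s/W_a = {ws_over_wa(Z):.1f}")
```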
To clarify the situation with the scalar [$\cal P, T$-odd ]{}interaction in HgH and CnH, one needs high-precision correlation calculations, similar to those in [@Isaev:04]. Matrix elements $M_\mathrm{SPT}$ of the scalar [$\cal P, T$-odd ]{}interaction and matrix elements $M_\mathrm{EDM}$ of the coupling between the eEDM and the inner molecular electric field are given in [@dzuba:2011]. Relations between these matrix elements can also be expressed through the relativistic enhancement factor $R(Z)$ [@Moskalev:76], which reflects the impact of relativistic effects on molecular electronic structure (see e.g. [@isaev:2012] on the influence of $R(Z)$ on $W_\mathrm{a}$ for a range of diatomic molecules). The relation reads $$\begin{aligned} \frac{C_\mathrm{SP}}{(d_\mathrm{e}/(e a_0))} \frac{M_\mathrm{EDM}}{M_\mathrm{SPT}}= \frac{(1-0.283\alpha^2Z^2)^2}{(1-0.56\alpha^2Z^2)}\frac{16\sqrt{2}\pi Z\alpha}{3 A ({G_\mathrm{F}}/(E_\mathrm{h}~a_0^3))} \frac{2\gamma+1}{\gamma(1+\gamma)(4\gamma^2-1)} \frac{1}{R(Z)}.\end{aligned}$$ In the equation above $G_\mathrm{F}$ is Fermi’s constant, $A$ the atomic mass number, $E_\mathrm{h}$ the Hartree energy and $a_0$ the Bohr radius. $C_\mathrm{SP}$ and $d_\mathrm{e}$ are the effective constant of the scalar [$\cal P, T$-odd ]{}interaction and the electron electric dipole moment, respectively. Herein the proper coefficient (namely 0.283) in front of $\alpha^2Z^2$ is used in the numerator [@dzuba-priv:13], instead of the misprinted one (0.375) in [@dzuba:2011]. For the relation between $E_\mathrm{eff}$ and $W_\mathrm{s}$ one then obtains $$\label{eq:WstoEeff} E_\mathrm{eff}=-{\Omega}\frac{C_\mathrm{SP}}{d_\mathrm{e}}\frac{M_\mathrm{EDM}}{M_\mathrm{SPT}} \frac{A}{Z}W_\mathrm{s},$$ in which the quantum number $\Omega$ of the projection of the electron total angular momentum on the internuclear axis is in the present work always taken equal to $1/2$.
It is interesting to note that in experiments with one kind of molecule, the scalar [$\cal P, T$-odd ]{}interaction is indistinguishable from the eEDM effect – they both contribute to the [$\cal P, T$-odd ]{}electron paramagnetic resonance signal. One can, however, disentangle the contributions by taking data from experiments with different molecules (or a molecule and an atom), as proposed in [@dzuba:2011] for eEDM measurements in Tl and YbF. Taking into account the above relations, one can easily estimate the effective electric field acting on the electron in the ground $^2\Sigma$ state of RaF. The results are given in [Table \[all\]]{}. The accuracy of such an estimate is not high, but sufficient to identify RaF as a promising candidate for eEDM measurements. [lrrrrr]{} & $Z$ & & & &\ BaF & 56 & & 1.9$\times$10$^{2}$ & $-$8.5 (12) & 1.3 (1.8) \[1.9$^{b)}$\]\ YbF & 70 & & 6.1$\times$10$^{2}$ & $-$41 (38) & 4.4 (4.1) \[6.0$^{c)}$\]\ RaF & 88 & & 2.1$\times$10$^{3}$ & $-$15$\times$10$^{1}$ (13$\times$10$^{1}$) & 11 (9.5)\ HgH & 80 & & 2.0$\times$10$^{3}$ & $-$38$\times$10$^{1}$ (19$\times$10$^{1}$) & 32 ( 16)\ CnH & 112 & & 3.1$\times$10$^{4}$ & $-$87$\times$10$^{2}$ (35$\times$10$^{2}$) & 746 (300)\ \ \[all\] Another attractive feature of measurements with Ra nuclei is that there also exists a nuclear mechanism enhancing [$\cal P$-odd ]{}and [$\cal P, T$-odd ]{}effects in certain Ra isotopes. According to [@Auerbach:96] and [@Spevak:97], the Schiff moment in nuclei possessing octupole deformation is enhanced by about $10$ to $100$ times. The mechanism is similar to the one in diatomic and chiral molecules: enhancement is reached due to the closeness of rotational levels of opposite parity.
As a result the estimated Schiff moments in the $^{225}$Ra and $^{223}$Ra isotopes are equal to 300 and 400 (in units $\eta~10^8~\mathrm{e}~\mathrm{fm}^3$, where $\eta$ is the effective nucleon-nucleon [$\cal P, T$-odd ]{}force constant), respectively, whereas for $^{199}$Hg, for example, it is only $-1.4$ (data taken from Ref. [@Spevak:97]).

Production of RaF {#prod .unnumbered}
=================

Besides the possible routes discussed in Ref. [@Isaev:10a], we propose in Ref. [@Isaev:13] to produce neutral RaF via RaF$^+$, which is subsequently neutralised by charge exchange in collisions with a suitably chosen collision gas or by interaction with surfaces that provide an adequate work function for an iso-enthalpic electron transfer. RaF$^+$ can in turn be formed in reactive collisions of radium ions with a suitable fluorine-containing compound.

Conclusion {#conclusion .unnumbered}
==========

We demonstrated various special properties of RaF that render it a versatile molecular laboratory for studying a wide range of physical phenomena, from laser cooling to physics beyond the Standard Model. A unique combination of rovibronic and nuclear structure features renders RaF particularly attractive for further experimental study. For the first time the parameter of the scalar [$\cal P, T$-odd ]{}interaction $W_\mathrm{s}$ is calculated with account of spin polarisation for the molecules RaF, HgH and CnH. The authors are grateful to D. DeMille, V. Flambaum, M. Kozlov and S. Hoekstra for discussions. 
---
abstract: 'We prove that the $p$th Hecke operator on the Morava $E$-cohomology of a space is congruent to the Frobenius mod $p$. This is a generalization of the fact that the $p$th Adams operation on the complex $K$-theory of a space is congruent to the Frobenius mod $p$. The proof implies that the $p$th Hecke operator may be used to test Rezk’s congruence criterion.'
address: 'Max Planck Institute for Mathematics, Bonn, Germany'
author:
- Nathaniel Stapleton
bibliography:
- 'mybib.bib'
title: 'A canonical lift of Frobenius in Morava $E$-theory'
---

Introduction
============

The $p$th Adams operation on the complex $K$-theory of a space is congruent to the Frobenius mod $p$. This fact plays a role in Adams and Atiyah’s proof [@adamsatiyahhopf] of the Hopf invariant one problem. It also implies the existence of a canonical operation $\theta$ on $K^0(X)$ satisfying $$\psi^p(x) = x^p + p\theta(x),$$ when $K^0(X)$ is torsion-free. This extra structure was used by Bousfield [@bousfieldlambda] to determine the $\lambda$-ring structure of the $K$-theory of an infinite loop space. There are several generalizations of the $p$th Adams operation in complex $K$-theory to Morava $E$-theory: the $p$th additive power operation, the $p$th Adams operation, and the $p$th Hecke operator. In this note, we show that the $p$th Hecke operator is a lift of Frobenius. In [@rezkcongruence], Rezk studies the relationship between two algebraic structures related to power operations in Morava $E$-theory. One structure is a monad $\operatorname{\mathbb{T}}$ on the category of $E_0$-modules that is closely related to the free $E_{\infty}$-algebra functor. The other structure is a form of the Dyer-Lashof algebra for $E$, called $\Gamma$. Given a $\Gamma$-algebra $R$, each element $\sigma \in \Gamma$ gives rise to a linear endomorphism $Q_{\sigma}$ of $R$. 
He proves that a $\Gamma$-algebra $R$ admits the structure of an algebra over the monad $\operatorname{\mathbb{T}}$ if and only if there exists an element $\sigma \in \Gamma$ (over a certain element $\bar{\sigma} \in \Gamma/p$) such that $Q_{\sigma}$ is a lift of Frobenius in the following sense: $$Q_{\sigma}(r) \equiv r^p \mod pR$$ for all $r \in R$. We will show that $Q_{\sigma}$ may be taken to be the $p$th Hecke operator $T_p$ as defined by Ando in [@Isogenies Section 3.6]. We prove this by producing a canonical element $\sigma_{can} \in \Gamma$ lifting the Frobenius class $\bar{\sigma} \in \Gamma/p$ [@rezkcongruence Section 10.3] such that $Q_{\sigma_{can}} = T_p$. This provides us with extra algebraic structure on torsion-free algebras over the monad $\operatorname{\mathbb{T}}$ in the form of a canonical operation $\theta$ satisfying $$T_p(r) = r^p + p\theta(r).$$ Let ${\mathbb{G}}_{E_0}$ be the formal group associated to $E$, a Morava $E$-theory spectrum. The Frobenius $\phi$ on $E_0/p$ induces the relative Frobenius isogeny $${\mathbb{G}}_{E_0/p} {\overset{}{\longrightarrow}} \phi^*{\mathbb{G}}_{E_0/p}$$ over $E_0/p$. The kernel of this isogeny is a subgroup scheme of order $p$. By a theorem of Strickland, this corresponds to an $E_0$-algebra map $$\bar{\sigma} \colon E^0(B\Sigma_p)/I {\overset{}{\longrightarrow}} E_0/p,$$ where $I$ is the image of the transfer from the trivial group to $\Sigma_p$. This map further corresponds to an element in the mod $p$ Dyer-Lashof algebra $\Gamma/p$. Rezk considers the set of $E_0$-module maps $[\bar{\sigma}] \subset \hom(E^0(B\Sigma_p)/I,E_0)$ lifting $\bar{\sigma}$. There is a canonical choice of lift $\sigma_{can} \in [\bar{\sigma}]$. The construction of $\sigma_{can}$ is an application of the formula for the $K(n)$-local transfer (induction) along the surjection from $\Sigma_p$ to the trivial group [@Ganterexponential Section 7.3]. 
Let $X$ be a space and let $$P_p/I \colon E^0(X) {\overset{}{\longrightarrow}} E^0(B\Sigma_p)/I \otimes_{E_0} E^0(X)$$ be the $p$th additive power operation. The endomorphism $Q_{\sigma_{can}}$ of $E^0(X)$ is the composite of $P_p/I$ with $\sigma_{can} \otimes 1$. For any space $X$, the following operations on $E^0(X)$ are equal: $$Q_{\sigma_{can}} = (\sigma_{can} \otimes 1)(P_p/I) = T_p.$$ This has the following immediate consequence: Let $X$ be a space such that $E^0(X)$ is torsion-free. There exists a canonical operation $$\theta \colon E^0(X) {\overset{}{\longrightarrow}} E^0(X)$$ such that, for all $x \in E^0(X)$, $$T_p(x) = x^p + p\theta(x).$$ *Acknowledgements* It is a pleasure to thank Tobias Barthel, Charles Rezk, Tomer Schlank, and Mahmoud Zeinalian for helpful discussions and to thank the Max Planck Institute for Mathematics for its hospitality.

Tools
=====

Let $E$ be a height $n$ Morava $E$-theory spectrum at the prime $p$. We will make use of several tools that let us access $E$-cohomology. We summarize them in this section. For the remainder of this paper, let $E(X) = E^0(X)$ for any space $X$. We will also write $E$ for the coefficients $E^0$ unless we state otherwise. *Character theory* Hopkins, Kuhn, and Ravenel introduce character theory for $E(BG)$ in [@hkr]. They construct the rationalized Drinfeld ring $C_0$ and introduce a ring of generalized class functions taking values in $C_0$: $$Cl_n(G,C_0) = \{\text{$C_0$-valued functions on conjugacy classes of maps from ${\mathbb{Z}}_{p}^n$ to $G$}\}.$$ They construct a map $$E(BG) {\overset{}{\longrightarrow}} Cl_n(G,C_0)$$ and show that it induces an isomorphism after the domain has been base-changed to $C_0$ [@hkr Theorem C]. When $n=1$, this is a $p$-adic version of the classical character map from representation theory. *Good groups* A finite group $G$ is good if the character map $$E(BG) {\overset{}{\longrightarrow}} Cl_n(G,C_0)$$ is injective. 
Hopkins, Kuhn, and Ravenel show that $\Sigma_{p^k}$ is good for all $k$ [@hkr Theorem 7.3]. *Transfer maps* It follows from a result of Greenlees and Sadofsky [@greenlees-sadofsky] that there are transfer maps in $E$-cohomology along all maps of finite groups. In [@Ganterexponential Section 7.3], Ganter studies the case of the transfer from $G$ to the trivial group and shows that there is a simple formula for the transfer on the level of class functions. Let $$\operatorname{Tr}_{C_0} \colon Cl_n(G,C_0) {\overset{}{\longrightarrow}} C_0$$ be given by the formula $f \mapsto \frac{1}{|G|}\sum_{[{\alpha}]}f([{\alpha}])$, where the sum runs over conjugacy classes of maps ${\alpha}\colon {\mathbb{Z}}_{p}^n \rightarrow G$. Ganter shows that there is a commutative diagram $$\xymatrix{E(BG) \ar[r]^-{\operatorname{Tr}_{E}} \ar[d] & E \ar[d] \\ Cl_n(G) \ar[r]^-{\operatorname{Tr}_{C_0}} & C_0,}$$ in which the vertical maps are the character map. *Subgroups of formal groups* Let ${\mathbb{G}}_{E} = \operatorname{Spf}(E(BS^1))$ be the formal group associated to the spectrum $E$. In [@etheorysym], Strickland produces a canonical isomorphism $$\operatorname{Spf}(E(B\Sigma_{p^k})/I) \cong \operatorname{Sub}_{p^k}({\mathbb{G}}_{E}),$$ where $I$ is the image of the transfer along $\Sigma_{p^{k-1}}^{\times p} \subset \Sigma_{p^k}$ and $\operatorname{Sub}_{p^k}({\mathbb{G}}_{E})$ is the scheme that classifies subgroup schemes of order $p^k$ in ${\mathbb{G}}_E$. We will only need the case $k=1$. *The Frobenius class* The relative Frobenius is a degree $p$ isogeny of formal groups $${\mathbb{G}}_{E/p} {\overset{}{\longrightarrow}} \phi^*{\mathbb{G}}_{E/p},$$ where $\phi \colon E/p \rightarrow E/p$ is the Frobenius. The kernel of the map is a subgroup scheme of order $p$. Using Strickland’s result, there is a canonical map of $E$-algebras $$\bar{\sigma} \colon E(B\Sigma_p)/I {\overset{}{\longrightarrow}} E/p$$ picking out the kernel. 
In [@rezkcongruence Section 10.3], Rezk describes this map in terms of a coordinate and considers the set of $E$-module maps $[\bar{\sigma}] \subset \hom(E(B\Sigma_p),E)$ that lift $\bar{\sigma}$. *Power operations* In [@structuredmoravae], Goerss, Hopkins, and Miller prove that the spectrum $E$ admits the structure of an $E_{\infty}$-ring spectrum in an essentially unique way. This implies a theory of power operations. These are natural multiplicative non-additive maps $$P_m \colon E(X) {\overset{}{\longrightarrow}} E(B\Sigma_{m}) \otimes_E E(X)$$ for all $m>0$. For $m=p^k$, they can be simplified to obtain interesting ring maps by further passing to the quotient $$P_{p^k}/I \colon E(X) {\overset{}{\longrightarrow}} E(B\Sigma_{p^k}) \otimes_E E(X) {\overset{}{\longrightarrow}} E(B\Sigma_{p^k})/I \otimes_E E(X),$$ where $I$ is the transfer ideal that appeared above. *Hecke operators* In [@Isogenies Section 3.6], Ando produces operations $$T_{p^k} \colon E(X) {\overset{}{\longrightarrow}} E(X)$$ by combining the structure of power operations, Strickland’s result, and ideas from character theory. Let $\operatorname{\mathbb{T}}= ({\mathbb{Q}}_p/{\mathbb{Z}}_p)^n$, let $H \subset \operatorname{\mathbb{T}}$ be a finite subgroup, and let $D_{\infty}$ be the Drinfeld ring at infinite level so that $\operatorname{Spf}(D_{\infty}) = \operatorname{Level}(\operatorname{\mathbb{T}},{\mathbb{G}}_{E})$ and ${\mathbb{Q}}\otimes D_{\infty} = C_0$. Ando constructs an Adams operation depending on $H$ as the composite $$\psi^H \colon E(X) {\overset{P_p/I}{\longrightarrow}} E(B\Sigma_p)/I \otimes_E E(X) {\overset{H\otimes 1}{\longrightarrow}} D_{\infty} \otimes_E E(X).$$ He then defines the $p^k$th Hecke operator $$T_{p^k} = \sum_{\substack{H \subset \operatorname{\mathbb{T}}\\ |H| = p^k}} \psi^H$$ and shows that this lands in $E(X)$. 
A canonical representative of the Frobenius class
=================================================

We construct a canonical representative of the set $[\bar{\sigma}]$. The construction is an elementary application of several of the tools presented in the previous section. We specialize the transfers of the previous section to $G = \Sigma_p$. Let $$\operatorname{Tr}_E \colon E(B\Sigma_p) {\overset{}{\longrightarrow}} E$$ be the transfer from $\Sigma_p$ to the trivial group and let $$\operatorname{Tr}_{C_0} \colon Cl_n(\Sigma_p, C_0) {\overset{}{\longrightarrow}} C_0$$ be the transfer in class functions from $\Sigma_p$ to the trivial group. This is given by the formula $$\operatorname{Tr}_{C_0}(f) = \frac{1}{p!}\sum_{[{\alpha}]}f([{\alpha}]).$$ Recall that $\operatorname{\mathbb{T}}= ({\mathbb{Q}}_p/{\mathbb{Z}}_p)^n$ and let $\operatorname{Sub}_p(\operatorname{\mathbb{T}})$ be the set of subgroups of order $p$ in $\operatorname{\mathbb{T}}$. \[sigmap\] [@marshthesis Section 4.3.6] The restriction map along ${\mathbb{Z}}/p \subseteq \Sigma_p$ induces an isomorphism $$E(B\Sigma_p) {\overset{\cong}{\longrightarrow}} E(B{\mathbb{Z}}/p)^{\operatorname{Aut}({\mathbb{Z}}/p)}.$$ After a choice of coordinate $x$, $$E(B\Sigma_p) \cong E[y]/(yf(y)),$$ where the degree of $f(y)$ is $$|\operatorname{Sub}_p(\operatorname{\mathbb{T}})| = \frac{p^n-1}{p-1} = \sum_{i=0}^{n-1}p^i,$$ $f(0)=p$, and $y$ maps to $x^{p-1}$ in $E(B{\mathbb{Z}}/p) \cong E{[\![x]\!]}/[p](x)$. [@Quillenelementary Proposition 4.2] After choosing a coordinate, there is an isomorphism $$E(B\Sigma_p)/I \cong E[y]/(f(y)),$$ and the ring is free of rank $|\operatorname{Sub}_p(\operatorname{\mathbb{T}})|$ as an $E$-module. After choosing a coordinate, the restriction map $E(B\Sigma_p) \rightarrow E$ sends $y$ to $0$ and the map $$E(B\Sigma_p) \rightarrow E(B\Sigma_p)/I$$ is the quotient by the ideal generated by $f(y)$. \[index\] The index of the $E$-module $E(B\Sigma_p)$ inside $E \times E(B\Sigma_p)/I$ is $p$. 
This can be seen using the coordinate. There is a basis of $E(B\Sigma_p)$ given by the set $\{1, y, \ldots, y^m\}$, where $m = |\operatorname{Sub}_p(\operatorname{\mathbb{T}})|$, and a basis of $E \times E(B\Sigma_p)/I$ given by $$\{(1,0),(0,1),(0,y), \ldots, (0,y^{m-1})\}.$$ By Lemma \[sigmap\], the image of the elements $\{1, y, \ldots, y^{m-1}, p-f(y)\}$ in $E(B\Sigma_p)$ is the set $$\{(1,1),(0,y), \ldots, (0,y^{m-1}), (0,p)\}$$ in $E \times E(B\Sigma_p)/I$. The image of $y^m$ is in the span of these elements and the submodule generated by these elements has index $p$. [@rezkcongruence Section 10.3] In terms of a coordinate, the Frobenius class $$\bar{\sigma} \colon E(B\Sigma_p)/I {\overset{}{\longrightarrow}} E/p$$ is the quotient by the ideal $(y)$. Now we modify $\operatorname{Tr}_{C_0}$ to construct a map $$\sigma_{can} \colon E(B\Sigma_p)/I {\overset{}{\longrightarrow}} E.$$ By Ganter’s result [@Ganterexponential Section 7.3] and the fact that $\Sigma_p$ is good, the restriction of $\operatorname{Tr}_{C_0}$ to $E(B\Sigma_{p})$ is equal to $\operatorname{Tr}_{E}$. It makes sense to restrict $\operatorname{Tr}_{C_0}$ to $$E \times E(B\Sigma_p)/I \subset Cl_n(\Sigma_p,C_0).$$ Lemma \[index\] implies that this lands in $\frac{1}{p}E$. Thus we see that the target of the map $${{ \left.\kern-\nulldelimiterspace p!\operatorname{Tr}_{C_0} \vphantom{\big|} \right|_{E \times E(B\Sigma_p)/I} }}$$ can be taken to be $E$. 
We may further restrict this map to the subring $E(B\Sigma_p)/I$ to get $${{ \left.\kern-\nulldelimiterspace p!\operatorname{Tr}_{C_0} \vphantom{\big|} \right|_{E(B\Sigma_p)/I} }} \colon E(B\Sigma_p)/I {\overset{}{\longrightarrow}} E.$$ From the formula for $\operatorname{Tr}_{C_0}$, for $e \in E \subset E(B\Sigma_p)/I$, we have $${{ \left.\kern-\nulldelimiterspace p!\operatorname{Tr}_{C_0} \vphantom{\big|} \right|_{E(B\Sigma_p)/I} }}(e) = |\operatorname{Sub}_p(\operatorname{\mathbb{T}})|e.$$ Note that $|\operatorname{Sub}_p(\operatorname{\mathbb{T}})|$ is congruent to $1$ mod $p$ (and therefore a $p$-adic unit). We set $$\sigma_{can} = {{ \left.\kern-\nulldelimiterspace p!\operatorname{Tr}_{C_0} \vphantom{\big|} \right|_{E(B\Sigma_p)/I} }}.$$ One may also normalize $\sigma_{can}$ by dividing by $|\operatorname{Sub}_p(\operatorname{\mathbb{T}})|$ so that $e$ is sent to $e$. We now show that $\sigma_{can}$ fits in the diagram $$\xymatrix{& E \ar[d] \\ E(B\Sigma_p)/I \ar[r]_-{\bar{\sigma}} \ar[ru]^-{\sigma_{can}} & E/p,}$$ where $\bar{\sigma}$ picks out the kernel of the relative Frobenius. The map $$\sigma_{can} \colon E(B\Sigma_p)/I {\overset{}{\longrightarrow}} E$$ is a representative of Rezk’s Frobenius class. We may be explicit. Choose a coordinate so that the quotient map $$q \colon E(B\Sigma_p) {\overset{}{\longrightarrow}} E(B\Sigma_p)/I$$ is given by $$q \colon E[y]/(yf(y)) {\overset{}{\longrightarrow}} E[y]/(f(y)).$$ We must show that $$\xymatrix{E(B\Sigma_p)/I \ar[r]^-{\sigma_{can}} & E \ar[r]^-{\text{mod } p} & E/p}$$ is the quotient by the ideal $(y) \subset E(B\Sigma_p)/I$. There is a basis of $E(B\Sigma_p)$ (as an $E$-module) given by $\{1,y,\ldots,y^{m}\}$, where $m = |\operatorname{Sub}_p(\operatorname{\mathbb{T}})|$. We will be careful to refer to the image of $y^i$ in $E(B\Sigma_p)/I$ as $q(y^i)$. For the basis elements of the form $y^i$, where $i \neq 0$, the restriction map $E(B\Sigma_p) \rightarrow E$ sends $y^i$ to $0$. 
Thus $$\operatorname{Tr}_{E}(y^i) = {{ \left.\kern-\nulldelimiterspace \operatorname{Tr}_{C_0} \vphantom{\big|} \right|_{E(B\Sigma_p)/I} }}(q(y^i)) \in E.$$ Now the definition of $\sigma_{can}$ implies that $\sigma_{can}(q(y^i))$ is divisible by $p$. So $$\sigma_{can}(q(y^i)) \equiv 0 \mod p.$$ It is left to show that, for $e$ in the image of $E \rightarrow E(B\Sigma_p)/I$, $$\sigma_{can}(e) \equiv e \mod p.$$ We have already seen that $${{ \left.\kern-\nulldelimiterspace p!\operatorname{Tr}_{C_0} \vphantom{\big|} \right|_{E(B\Sigma_p)/I} }}(e) = |\operatorname{Sub}_p(\operatorname{\mathbb{T}})|e.$$ The result follows from the fact that $|\operatorname{Sub}_p(\operatorname{\mathbb{T}})| \equiv 1$ mod $p$.

The Hecke operator congruence
=============================

We show that the $p$th additive power operation composed with $\sigma_{can}$ is the $p$th Hecke operator. This implies that the Hecke operator satisfies a certain congruence. The two maps in question are the composite $$\xymatrix{E(X) \ar[r]^-{P_p/I} & E(B\Sigma_p)/I \otimes_E E(X) \ar[r]^-{\sigma_{can} \otimes 1}& E(X)}$$ and the Hecke operator $T_p$ described in Section \[tools\]. The $p$th additive power operation composed with the canonical representative of the Frobenius class is equal to the $p$th Hecke operator: $$(\sigma_{can} \otimes 1)(P_{p}/I) = T_p.$$ This follows in a straightforward way from the definitions. Unwrapping the definition of the character map, the map $\sigma_{can}$ is the sum of a collection of maps $$E(B\Sigma_p)/I {\overset{}{\longrightarrow}} C_0,$$ one for each subgroup of order $p$ in $\operatorname{\mathbb{T}}$. These are the maps induced by the canonical isomorphism $$C_0 \otimes \operatorname{Sub}_p({\mathbb{G}}_{E}) \cong \operatorname{Sub}_p(\operatorname{\mathbb{T}}).$$ In other words, they classify the subgroups of order $p$ in $\operatorname{\mathbb{T}}$. 
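The congruence invoked at the end of the proof above, $|\operatorname{Sub}_p(\operatorname{\mathbb{T}})| = (p^n-1)/(p-1) \equiv 1 \bmod p$, together with the geometric-sum form of the count, is elementary arithmetic; the short check below (illustrative only, not part of the argument) confirms both for small $p$ and $n$.

```python
def sub_count(p, n):
    """Number of subgroups of order p in (Q_p/Z_p)^n: (p^n - 1)/(p - 1)."""
    return (p**n - 1) // (p - 1)

for p in (2, 3, 5, 7):
    for n in (1, 2, 3, 4):
        c = sub_count(p, n)
        assert c == sum(p**i for i in range(n))  # closed form equals 1 + p + ... + p^(n-1)
        assert c % p == 1                        # the congruence used in the proof
print("subgroup counts check out")
```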
Since $\sigma_{can} \in [\bar{\sigma}]$, the following diagram commutes $$\xymatrix{E(X) \ar[r]^-{P_p} & E(B\Sigma_p) \otimes_E E(X) \ar[r] \ar[d]_-{\text{Res}\otimes 1} & E(B\Sigma_p)/I \otimes_E E(X) \ar[d]_-{\bar{\sigma}\otimes 1} \ar[r]^-{\sigma_{can} \otimes 1} & E(X) \ar[dl] \\ & E(X) \ar[r] & E(X)/p &}$$ and this implies that $$(\sigma_{can}\otimes 1)(P_p/I)(x) \equiv x^p \mod p.$$ For $x \in E(X)$, there is a congruence $$T_p(x) \equiv x^p \mod p.$$ Let $X$ be a space with the property that $E(X)$ is torsion-free. The corollary above implies the existence of a canonical function $$\theta \colon E(X) {\overset{}{\longrightarrow}} E(X)$$ such that $$T_p(x) = x^p + p\theta(x).$$ When $n=1$, ${\mathbb{G}}_E$ is a height $1$ formal group, $$E(B\Sigma_p)/I$$ is a rank one $E$-module, and $\sigma_{can}$ is an $E$-algebra isomorphism. The composite $$\xymatrix{E(X) \ar[r]^-{P_p/I} & E(B\Sigma_p)/I \otimes_E E(X) \ar[r]^-{\sigma_{can} \otimes 1} & E(X)}$$ is the $p$th unstable Adams operation. In this situation, the function $\theta$ is understood by work of Bousfield [@bousfieldlambda]. At arbitrary height, we may consider the effect of $T_p$ on $z \in {\mathbb{Z}}_p \subset E$. Since $T_p$ is a sum of ring maps $$T_p(z) = |\operatorname{Sub}_p(\operatorname{\mathbb{T}})|z.$$ This is congruent to $z^p$ mod $p$. At height $2$ and the prime $2$, Rezk constructed an $E$-theory associated to a certain elliptic curve [@rezkpowercalc]. He calculated $P_2/I$, when $X=*$. He found that, after choosing a particular coordinate $x$, $$E(B\Sigma_2)/I \cong {\mathbb{Z}}_2{[\![u_1]\!]}[x]/(x^3-u_1x-2)$$ and $$P_2/I \colon {\mathbb{Z}}_2{[\![u_1]\!]} {\overset{}{\longrightarrow}} {\mathbb{Z}}_2{[\![u_1]\!]}[x]/(x^3-u_1x-2)$$ sends $u_1 \mapsto u_{1}^2+3x-u_1x^2$. In [@Drinfeld Section 4B], Drinfeld explains how to compute the ring that corepresents ${\mathbb{Z}}/2\times {\mathbb{Z}}/2$-level structures. 
Note that in the ring $${\mathbb{Z}}_2{[\![u_1]\!]}[y,z]/(y^3-u_1y-2),$$ $y$ is a root of $z^3-u_1z-2$ and $$\frac{z^3-u_1z-2}{z-y} = z^2+yz+y^2-u_1.$$ Drinfeld’s construction gives $$D_1 = \Gamma \operatorname{Level}({\mathbb{Z}}/2 \times {\mathbb{Z}}/2, {\mathbb{G}}_{E}) \cong {\mathbb{Z}}_2{[\![u_1]\!]}[y,z]/(y^3-u_1y-2,z^2+yz+y^2-u_1).$$ The point of this construction is that $x^3-u_1x-2$ factors into linear terms over this ring. In fact, $$x^3-u_1x-2 = (x-y)(x-z)(x+y+z).$$ The three maps $E(B\Sigma_2)/I \rightarrow D_1 \subset C_0$ that show up in the character map are given by sending $x$ to these roots. A calculation shows that $$\sigma_{can}(x) = 0$$ and that $$T_p(u_1) = (\sigma_{can} \otimes 1)(P_2/I)(u_1) = u_{1}^2.$$
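The factorization $x^3-u_1x-2 = (x-y)(x-z)(x+y+z)$ over $D_1$ can also be checked symbolically: the difference between the expanded product and $x^3-u_1x-2$ reduces to zero modulo the two defining relations of $D_1$. A sketch of such a check with sympy (a verification aid only, not part of the paper's argument):

```python
import sympy as sp

x, y, z, u1 = sp.symbols('x y z u1')

# Defining relations of D_1 = Z_2[[u_1]][y, z]/(y^3 - u_1 y - 2, z^2 + yz + y^2 - u_1)
rels = [z**2 + y*z + y**2 - u1, y**3 - u1*y - 2]

prod = sp.expand((x - y) * (x - z) * (x + y + z))
diff = sp.expand(prod - (x**3 - u1*x - 2))

# Multivariate division: the remainder vanishes, so the two sides agree in D_1.
_, rem = sp.reduced(diff, rels, x, z, y, u1, order='lex')
print(rem)  # -> 0
```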
---
abstract: 'In this Letter, the 2D Dirac oscillator in the quantum deformed framework generated by the $\kappa$-Poincaré-Hopf algebra is considered. The problem is formulated using the $\kappa$-deformed Dirac equation. The resulting theory reveals that the energies and wave functions of the oscillator are modified by the deformation parameter.'
address:
- ' Departamento de Matemática e Estatística, Universidade Estadual de Ponta Grossa, 84030-900 Ponta Grossa-PR, Brazil '
- ' Departamento de Física, Universidade Federal do Maranhão, Campus Universitário do Bacanga, 65085-580 São Luís-MA, Brazil '
author:
- 'Fabiano M. Andrade'
- 'Edilberto O. Silva'
title: 'The 2D $\kappa$-Dirac oscillator'
---

$\kappa$-Poincaré-Hopf algebra, Dirac oscillator

Introduction {#sec:introduction}
============

The Dirac oscillator, established in 1989 by Moshinsky and Szczepaniak [@JPA.1989.22.817; @Book.Moshinsky.1996], is considered a natural framework to access the relativistic quantum properties of quantum harmonic oscillator-like systems. This model has inspired a great deal of investigation in recent years. These studies have allowed the exploration of new models in theoretical and experimental physics. In the context of recent investigations, interest in this issue appears, for example, in quantum optics [@PRA.2007.76.041801; @PRA.2008.77.033832], deformed Kempf algebra [@SR.2013.3.3221], graphene physics [@arXiv:1311.2021], noncommutative space [@JMP.2014.55.032105; @IJMPA.2011.26.4991; @IJTP.2012.51.2143], quantum phase transition [@PRA.2008.77.063815; @arXiv:1312.5251] and topological defects [@AP.2013.336.489; @PRA.2011.84.32109]. Among several recent contributions on the Dirac oscillator, we refer to its first experimental verification [@PRL.2013.111.170405]. For a more detailed account of the Dirac oscillator see Refs. [@JPA.1997.30.2585; @AIPCP.2011.1334.249; @Book.1998.Strange; @PRA.1994.49.586; @EPJB.2001.22.31; @MPLA.2004.19.2147; @arXiv:1403.4113]. 
The dynamics of the Dirac oscillator is governed by the Dirac equation with the nonminimal prescription $$\label{eq:prescription} \mathbf{p}\rightarrow \mathbf{p}-im\omega\tilde{\beta}\mathbf{r},$$ where $\mathbf{p}$ is the momentum operator, $m$ is the mass, $\omega$ the frequency of the oscillator and $\mathbf{r}$ is the position vector. In the same context, another usual framework in which one can study the dynamics of the Dirac oscillator is that connected with the theory of quantum deformations. These quantum deformations are realized based on the $\kappa$-Poincaré-Hopf algebra [@PLB.1991.264.331; @PLB.1992.293.344; @PLB.1994.329.189; @PLB.1994.334.348] and have direct implications for the quantum dynamics of relativistic and nonrelativistic quantum systems. The deformation parameter $\kappa$ appearing in the theory is usually interpreted as being the Planck mass $m_{P}$ [@PLB.2012.711.122]. Some important contributions on $\kappa$-deformation have been reported in Refs. [@AoP.1995.243.90; @CQG.2010.27.025012; @NPB.2001.102-103.161; @EPJC.2003.31.129; @PLB.2002.529.256; @PRD.2011.84.085020; @JHEP.2011.1112.080; @EPJC.2013.73.2472; @PRD.2009.79.045012; @EPJC.2006.47.531; @EPJC.2008.53.295; @PRD.2013.87.125009; @PRD.2012.85.045029; @PRD.2009.80.025014]. The physical properties of $\kappa$-deformed relativistic quantum systems can be accessed by solving the $\kappa$-deformed Dirac equation [@PLB.1993.302.419; @PLB.1993.318.613]. Recently, studies involving $\kappa$-deformation have attracted great interest. Some theoretical contributions in this context can be found, for example, in Refs. [@PLB.2013.719.467; @PLB.1994.339.87; @PRD.2007.76.125005; @PLB.1995.359.339; @MPLA.1995.10.1969]. The 3D Dirac oscillator has been discussed in connection with the theory of quantum deformations in Ref. [@PLB.2014.731.327]. However, it is well known that the 2D Dirac oscillator exhibits a dynamics completely different from that of the 3D one. 
In this context, the main goal of this Letter is to study the dynamics of the 2D Dirac oscillator in the quantum deformed framework generated by the $\kappa$-Poincaré-Hopf algebra and then compare it with the usual (undeformed) 2D Dirac oscillator. This Letter is organized as follows. In Section \[sec:2ddiraco\], we review the 2D Dirac oscillator and determine the energy levels and wave functions. In Section \[sec:kappadirac\], the 2D Dirac oscillator in the framework of the quantum deformation is discussed. A brief conclusion is outlined in Section \[sec:conclusion\].

The 2D Dirac oscillator {#sec:2ddiraco}
=======================

In this section, we briefly discuss the usual 2D Dirac oscillator for later comparison with the deformed one. One begins by writing the Dirac equation for the four-component spinor $\Psi$ $$\label{eq:dirac} \left( \tilde{\beta}\tilde{\boldsymbol{\gamma}} \cdot \mathbf{p}+\tilde{\beta} m \right)\Psi=E\Psi.$$ The 2D Dirac oscillator is obtained through the nonminimal prescription in Eq. , where $\mathbf{r}=(x,y)$ is the position vector. We shall now make use of the underlying symmetry of the system to reduce the four-component Dirac equation to a two-component spinor equation. We use the following representation for the $\tilde{\gamma}$ matrices [@PRD.1978.18.2932; @NPB.1988.307.909; @PRL.1989.62.1071] $$\begin{aligned} \tilde{\beta}=\tilde{\gamma}_{0}= \left( \begin{array}{cc} \sigma_{z} & 0 \\ 0 & -\sigma_{z} \end{array} \right), & \qquad \tilde{\gamma}_{1}= \left( \begin{array}{cc} i\sigma_{y} & 0 \\ 0 & -i\sigma_{y} \end{array} \right),\\ \tilde{\gamma}_{2}= \left( \begin{array}{cc} - i\sigma_{x} & 0 \\ 0 & i\sigma_{x} \end{array} \right), & \qquad \tilde{\gamma}_{3}= \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right),\end{aligned}$$ where $\sigma_{i}$ are the Pauli matrices. 
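As a quick consistency check, the $4\times 4$ representation above can be verified to satisfy the Clifford algebra $\{\tilde{\gamma}^{\mu},\tilde{\gamma}^{\nu}\}=2\eta^{\mu\nu}$ with signature $(+,-,-,-)$. The short numpy script below (illustrative only, not from the Letter) does this directly:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    # Assemble a 4x4 matrix from four 2x2 blocks.
    return np.block([[a, b], [c, d]])

gammas = [
    block(sz, Z2, Z2, -sz),          # gamma^0 = beta
    block(1j * sy, Z2, Z2, -1j * sy),  # gamma^1
    block(-1j * sx, Z2, Z2, 1j * sx),  # gamma^2
    block(Z2, I2, -I2, Z2),          # gamma^3
]
eta = np.diag([1, -1, -1, -1])  # metric, signature (+,-,-,-)

for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified for the 4x4 representation")
```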
Since the 2D Dirac oscillator is independent of the $z$ direction, the usual four-component Dirac equation decouples into two two-component equations $$\label{eq:dirac_oscillator} H_{0}\psi=\left[ \beta \gamma \cdot (\mathbf{p}-i m\omega \beta\mathbf{r})+\beta m \right]\psi=E\psi,$$ where $\psi$ is a two-component spinor and the three-dimensional $\gamma$ matrices are [@PRL.1990.64.2347; @IJMPA.1991.6.3119] $$\beta = \gamma_{0} = \sigma_{z}, \qquad \gamma_{1}=i\sigma_{y}, \qquad \gamma_{2}=-is\sigma_{x},$$ where the parameter $s$, which is twice the spin value, can be introduced to characterize the two spin states [@EPJC.2014.74.2708], with $s=+1$ for spin “up” and $s=-1$ for spin “down”. By squaring Eq. , one obtains $$\label{eq:2ddiracoscillator} \left[ p^2+m^2\omega^2r^2-2m\omega(\sigma^{3}+s L_{3}) \right]\psi= (E^2-m^{2})\psi.$$ If one adopts the following decomposition $$\label{eq:ansatz} \psi= \left( \begin{array}{c} \psi_{1} \\ \psi_{2} \end{array} \right)= \left( \begin{array}{c} f(r)\;e^{i l \phi} \\ i g(r)\;e^{i(l+s)\phi} \end{array} \right),$$ for the spinor, from Eq. , it is possible to obtain the energy spectrum [@arXiv:1403.4113]: $$\label{eq:energy2ddo} E=\pm\sqrt{m^{2}+2m\omega\left(2n+|l|-sl\right)},$$ and unnormalized wave functions $$\label{eq:eigenfunction_2d_dirac} f(\rho) =\rho^{(|l|+1)/2} e^{-\rho/2}\; M\left(-n,1+|l|,\rho\right).$$ Here, $n=0,1,2,\ldots$, $l=0,\pm 1, \pm 2,\ldots$, $\rho=m\omega r^2$ and $M(z)$ is the confluent hypergeometric function of the first kind [@Book.1972.Abramowitz]. It should be noted that the spectrum is spin dependent, and for $sl>0$ the energy eigenvalues are independent of $l$, as depicted in Fig. \[fig:fig1\]. ![ \[fig:fig1\] (Color online) The positive energy spectrum, Eq. , for the 2D Dirac oscillator for different values of $n$ and $l$ with $m=\omega=1$ and $s=1$. 
Notice that levels with quantum numbers $n \pm q$ have the same energy as levels with $l \pm q$, with $q$ an integer.](fig1){width="\columnwidth"}

The 2D $\kappa$-Dirac Oscillator {#sec:kappadirac}
================================

In this section, we address the 2D Dirac oscillator in the framework of the $\kappa$-Poincaré-Hopf algebra. We begin with the $\kappa$-Dirac equation defined in [@PLB.1993.302.419; @PLB.1993.318.613; @PLB.1995.359.339] when the third spatial coordinate is absent. Following the same reasoning as in the previous section, we have $$\left\{ \gamma_{0} P_{0}-\gamma_{i}P_{i}+ \frac{\varepsilon}{2} \left[ \gamma_{0}(P_{0}^{2}-P_{i}P_{i})-m P_{0} \right] \right\}\psi=m\psi,$$ with $i=1,2$. Identifying $P_{0}=H=E$ and $P_{i}=\pi_{i}=p_{i}-im\omega\beta r_{i}$, we have $$\label{eq:def_dirac} H\psi= \left[ H_{0}-\frac{\varepsilon}{2}(H^{2}-\pi_{i}\pi_{i}-m\beta H) \right] \psi=E\psi.$$ By iterating Eq. , we have up to $\mathcal{O}(\varepsilon)$ ($\varepsilon^{2}\approx 0$) $$\label{eq:def_dirac_oe} H\psi= \left[ H_{0}-\frac{\varepsilon}{2}(H_{0}^{2}-\pi_{i}\pi_{i}-m\beta H_{0}) \right] \psi=E\psi,$$ with $H_{0}$ given in Eq. . By using the same representation as in the previous section for the $\gamma$ matrices, Eq. 
assumes the form $$\begin{gathered} \label{eq:ddiracocart} \left(1+\frac{m\varepsilon}{2}\sigma_{z}\right) (\sigma_{x}\pi_{x}+s\sigma_{y}\pi_{y}+m\sigma_{z})\psi=\\ \left\{ E+\varepsilon \left[ m^2\omega^2r^2-m\omega sL_{z}+ im\omega\sigma_{z}(\mathbf{r}\cdot\mathbf{p}) \right] \right\}\psi.\end{gathered}$$ Equation , in polar coordinates $(r,\phi)$, reads $$\begin{gathered} \label{eq:ddiracopolar} e^{is\sigma_{z}\phi} \left(1+\frac{m\varepsilon}{2}\sigma_{z}\right) \left[ \sigma_{x}\partial_{r}+ \sigma_{y}\left(\frac{s}{r}\partial_{\phi}-im\omega r \right) \right]\psi=\\ i\left[ E-m\sigma_{z}+\varepsilon \left( m^{2}\omega^{2}r^{2}+im\omega s\partial_{\phi}+ m\omega\sigma_{z}r\partial_{r} \right) \right]\psi.\end{gathered}$$ Our task now is to solve Eq. . As the deformation does not break the angular symmetry, we can use an ansatz similar to that of the usual (undeformed) case, $$\label{eq:ansatzeps} \psi= \left( \begin{array}{c} f_{\varepsilon}(r)\;e^{i l \phi} \\ i g_{\varepsilon}(r)\;e^{i(l+s)\phi} \end{array} \right),$$ but now with the radial part labeled by the deformation parameter. So, inserting the ansatz into Eq. 
, we find a set of two coupled radial differential equations of first order $$\begin{gathered} \label{eq:ddiracopolarcomp1} \left(1+\frac{m\varepsilon}{2}\right) \left[ \frac{d}{dr}+ \frac{s(l+s)}{r}-m\omega r \right]g_{\varepsilon}(r)= (E-m)f_{\varepsilon}(r)\\ + \varepsilon \left( m^{2}\omega^{2}r^{2}-m\omega s l + m\omega r \frac{d}{dr} \right)f_{\varepsilon}(r),\end{gathered}$$ $$\begin{gathered} \label{eq:ddiracopolarcomp2} \left(1-\frac{m\varepsilon}{2}\right) \left[ \frac{d}{dr}- \frac{sl}{r}+m\omega r \right]f_{\varepsilon}(r)= -(E+m)g_{\varepsilon}(r)\\ -\varepsilon \left( m^{2}\omega^{2}r^{2}-m\omega s(l+s)- m\omega r \frac{d}{dr} \right)g_{\varepsilon}(r).\end{gathered}$$ The above system of equations can be decoupled, yielding a single second-order differential equation for $f_{\varepsilon}(r)$, $$\begin{gathered} \label{eq:edof} -f_{\varepsilon}''(r)- \left( \frac{1}{r}+2m^{2}\varepsilon \omega r \right)f_{\varepsilon}'(r)\\ + \left[ \frac{l^{2}}{r^{2}}+ \left(1-2 E\varepsilon \right) m^{2}\omega^{2}r^{2}- k_{\varepsilon}^{2} \right] f_{\varepsilon}(r) =0,\end{gathered}$$ where $$\label{eq:mue} k_{\varepsilon}^{2}=E^2-m^2+ 2m\omega\left[(sl+1)(1-\varepsilon E)+m\varepsilon\right].$$ A similar equation exists for $g_{\varepsilon}$. The regular solution for Eq. is $$\label{eq:def_eigen} f_{\varepsilon}(\rho) =(\lambda_{\varepsilon}\;\rho)^{(|l|-1)/2} e^{-(m\varepsilon+\lambda_{\varepsilon})\rho/2} M\left( d_{\varepsilon},1+|l|,\lambda_{\varepsilon}\;\rho \right),$$ where $$\lambda_{\varepsilon}=1-E\varepsilon,$$ and $$d_{\varepsilon}=\frac{1+|l|}{2}+ \frac{2m^2\omega \varepsilon-k_{\varepsilon}^2} {4 \gamma \lambda_{\varepsilon}}.$$ The deformed spectrum is obtained by imposing, as a convergence criterion, the condition $d_{\varepsilon}=-n$. In this manner, the deformed energy spectrum is given by $$\label{eq:energy_def} E^{2}-m^{2}=2m\omega(2n+|l|-sl)\lambda_{\varepsilon}.$$ Thus, solving Eq.
for $E$, the deformed energy levels are explicitly given by $$\label{eq:energy_def_ex} E= \pm \sqrt{m^2+2m\omega \left( 2n+|l|-sl \right)}-m\varepsilon\omega(2n+|l|-sl),$$ and the corresponding unnormalized wave functions are of the form $$\label{eq:def_eigen_def} f_{\varepsilon}(\rho) =(\lambda_{\varepsilon}\;\rho)^{(|l|-1)/2} e^{-(m\varepsilon+\lambda_{\varepsilon})\rho/2} M\left( -n,1+|l|,\lambda_{\varepsilon}\;\rho \right).$$ In deriving our results we have neglected terms of $\mathcal{O}(\varepsilon^{2})$. ![ \[fig:fig2\] (Color online) The deformed energy levels, Eq. , for the 2D $\kappa$-Dirac oscillator for $n=10$, $s=1$ and for different values of $l$. We use units such that $m=\omega=1$.](fig2){width="\columnwidth"} We can observe that the particle and antiparticle energies in the 2D $\kappa$-Dirac oscillator are different, as a consequence of the charge-conjugation symmetry breaking caused by the deformation parameter, in the same manner as observed in the three-dimensional case [@PLB.2014.731.327]. Notice that setting $\varepsilon=0$ exactly recovers the energy levels and wave functions of the previous section for the usual (undeformed) 2D Dirac oscillator, confirming the consistency of the description developed here. It is worthwhile to note that the infinite degeneracy present in the usual two-dimensional Dirac oscillator is preserved by the deformation, although the separation of the energy levels is affected. The distance between adjacent energy levels decreases as the deformation parameter increases. Figure \[fig:fig2\] depicts the undeformed and deformed energy levels for some values of the deformation parameter for $n=10$. In Ref. [@PLB.2013.719.467], we have determined an upper bound for the deformation parameter. Taking into account this upper bound, the product $m\varepsilon$ should be smaller than $0.00116$. In this manner, using units such that $m=1$, we must consider values for the deformation smaller than $0.00116$.
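A minimal numerical sketch of the level formula above, assuming units $m=\omega=1$ (the function name and default arguments are illustrative, not part of the letter):

```python
import numpy as np

def kappa_dirac_energy(n, l, s=1, m=1.0, omega=1.0, eps=0.0, sign=+1):
    """Deformed levels E = ±sqrt(m^2 + 2 m w N) - m eps w N,
    with N = 2n + |l| - s*l, valid to first order in eps."""
    N = 2 * n + abs(l) - s * l
    return sign * np.sqrt(m**2 + 2 * m * omega * N) - m * eps * omega * N

# Particle and antiparticle branches for n = 10, l = 0, eps = 1e-3:
E_plus = kappa_dirac_energy(10, 0, eps=1e-3)
E_minus = kappa_dirac_energy(10, 0, eps=1e-3, sign=-1)
```

At $\varepsilon=0$ the two branches are symmetric, $E_{+}=-E_{-}$; for $\varepsilon>0$ their sum equals $-2m\varepsilon\omega(2n+|l|-sl)\neq 0$, the charge-conjugation breaking discussed above, and the spacing between adjacent levels shrinks by $2m\varepsilon\omega$ per step in $n$.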
Conclusions {#sec:conclusion} =========== In this letter, we considered the dynamics of the 2D Dirac oscillator in the context of the $\kappa$-Poincaré-Hopf algebra. Using the fact that the deformation does not break the angular symmetry, we have derived the $\kappa$-deformed radial differential equation, whose solution has led to the deformed energy spectrum and wave functions. We verify that the energy spectrum and wave functions are modified by the presence of the deformation parameter $\varepsilon$. Using values for the deformation parameter lower than the upper bound $0.00116$, we have examined the dependence of the energy of the oscillator on the deformation. The deformation parameter modifies the energy spectrum and wave functions of the Dirac oscillator, preserving the infinite degeneracy while affecting the distance between adjacent energy levels. Finally, the case $\varepsilon=0$ exactly recovers the results for the usual 2D Dirac oscillator. Acknowledgments =============== We would like to thank Rodolfo Casana for discussions on the dimensionality of the deformed Dirac equation. This work was supported by the Fundação Araucária (Grant No. 205/2013 (PPP) and No. 484/2014 (PQ)), and the Conselho Nacional de Desenvolvimento Científico e Tecnológico (Grants No. 482015/2013-6 (Universal), No. 306068/2013-3 (PQ)) and FAPEMA (Grant No. 00845/13).
--- abstract: 'Path-velocity decomposition is an intuitive yet powerful approach to address the complexity of kinodynamic motion planning. The difficult trajectory planning problem is solved in two separate and simpler steps: first, find a path in the configuration space that satisfies the geometric constraints (path planning), and second, find a time-parameterization of that path satisfying the kinodynamic constraints. A fundamental requirement is that the path found in the first step should be time-parameterizable. Most existing works fulfill this requirement by enforcing quasi-static constraints in the path planning step, resulting in an important loss in completeness. We propose a method that enables path-velocity decomposition to discover truly dynamic motions, i.e. motions that are not quasi-statically executable. At the heart of the proposed method is a new algorithm – Admissible Velocity Propagation – which, given a path and an interval of reachable velocities at the beginning of that path, computes exactly and efficiently the interval of all the velocities the system can reach after traversing the path while respecting the system kinodynamic constraints. Combining this algorithm with usual sampling-based planners then gives rise to a family of new trajectory planners that can appropriately handle kinodynamic constraints while retaining the advantages associated with path-velocity decomposition. We demonstrate the efficiency of the proposed method on some difficult kinodynamic planning problems, where, in particular, quasi-static methods are guaranteed to fail[^1].' author: - 'Quang-Cuong Pham' - Stéphane Caron - | \ Puttichai Lertkultanon - Yoshihiko Nakamura bibliography: - 'cri.bib' title: 'Admissible Velocity Propagation: Beyond Quasi-Static Path Planning for High-Dimensional Robots' --- Introduction ============ Planning motions for robots with many degrees of freedom and subject to kinodynamic constraints (i.e. 
constraints that involve higher-order time-derivatives of the robot configuration [@DonX93acm; @LK01ijrr]) is one of the most important and challenging problems in robotics. Path-velocity decomposition is an intuitive yet powerful approach to address the complexity of kinodynamic motion planning: first, find a *path* in the configuration space that satisfies the geometric constraints, such as obstacle avoidance, joint limits, kinematic closure, etc. (path planning), and second, find a *time-parameterization* of that path satisfying the kinodynamic constraints, such as torque limits for manipulators, dynamic balance for legged robots, etc. #### Advantages of path-velocity decomposition This approach was suggested as early as [-@KZ86ijrr] – only a few years after the birth of motion planning itself as a research field – by @KZ86ijrr, in the context of motion planning amongst movable obstacles. Since then, it has become an important tool to address many kinodynamic planning problems, from manipulators subject to torque limits [@BobX85ijrr; @SM86tac; @Bob88jra], to coordination of teams of mobile robots [@SimX02tra; @PA05ijrr], to legged robots subject to balance constraints [@KufX02ar; @SulX10tr; @HauX08ijrr; @EscX13ras; @Hau14ijrr; @PS15tmech], etc. In fact, to our knowledge, path-velocity decomposition \[either explicitly or implicitly, as e.g. when only the geometric motion is planned and the time-parameterization is left to the execution phase\] is the only *motion planning* approach that has been shown to work on *actual high-DOF robots* such as humanoids. Path-velocity decomposition is appealing in that it *exploits the natural decomposition* of the constraints, in most systems, into two categories: those depending uniquely on the robot configuration, and those depending in particular on the velocity, which in turn is related to the energy of the system. Consider for instance a humanoid robot in a multi-contact task. 
Such a robot must (1) avoid collision with the environment, (2) avoid self-collisions, (3) respect kinematic closure for the parts in contact with the environment (e.g. the stance foot must be fixed with respect to the ground), (4) maintain balance. It can be noted that constraints (1 – 3) are exclusively related to the configuration of the robot, while constraint (4), once a path is given, depends mostly on the path velocity. From a practical viewpoint, the two sub-problems – geometric path planning and kinodynamic time-parameterization – have received so much attention from the robotics community in the past three decades that a large body of theory and good practices exist and can be readily combined to yield efficient *trajectory* planners. Briefly, high-dimensional and cluttered geometric path planning problems can now be solved in seconds thanks to sampling-based planning algorithms such as PRM [@KavX96tra] or RRT [@KL00icra] and to the dozens of heuristics that have been developed for these algorithms. Regarding kinodynamic time-parameterization, two important discoveries about the structure of the problem have led to particularly efficient algorithmic solutions. First, the bang-bang nature of the optimal velocity profile was identified by [@BobX85ijrr; @SM86tac], leading to fast *numerical integration* methods [see @Pha14tro for extensive historical references]. Second, this problem was shown to be reducible to a *convex optimization* problem, leading to robust and versatile convex-optimization-based solutions [see e.g. @VerX09tac; @Hau14ijrr]. #### Problems with state-space planning and trajectory optimization approaches Alternative approaches to path-velocity decomposition include planning directly in the state space and trajectory optimization. The first approach deploys traditional path planners such as RRT [@LK01ijrr] or PRM [@HsuX02ijrr] directly into the *state space*, that is, the configuration space augmented with velocity coordinates. 
Three main difficulties are associated with this approach. First, the dimension of the state space is twice that of the configuration space, resulting in higher algorithmic complexity. Second, while connecting two adjacent configurations under geometric constraints is trivial (using e.g. linear segments), connecting two adjacent states under kinodynamic constraints is considerably more challenging and time-consuming, requiring e.g. to solve a two-point boundary value problem [@LK01ijrr] or to sample in the control space and to integrate forward the sampled control [@HsuX02ijrr; @SK12tro; @PapX14arxiv; @LiX15wafr]. Third, especially for state-space RRTs, designing a reasonable *metric* is particularly difficult: @ShkX09icra showed that, even for the 1-DOF pendulum subject to torque constraints, a state-space RRT with a simple Euclidean metric is doomed to failure. The authors then proposed to construct an efficient metric by solving local optimal control problems. In a similar fashion, kinodynamic planners based on locally linearized system dynamics were proposed, such as LQR-Tree [@Ted09rss] or LQR-RRT$^*$ [@PerX12icra]. While such methods can be applied to low-DOF systems, the necessity to solve an optimal control problem of the dimension of the system at each tree extension makes it unlikely to scale to higher dimensions. For these reasons, in spite of appealing completeness guarantees [under some precise conditions, see e.g. @CarX14icra; @PapX14arxiv; @KS15wafr], there exist, to our knowledge, few examples of successful application of state-space planning to high-DOF systems with complex nonlinear dynamics and constraints in challenging environments [see e.g. @SK12tro]. The second approach, trajectory optimization, starts with an initial trajectory, which may not be valid (for example the trajectory may not reach the goal configuration, the robot may collide with the environment or may lose balance at some time instants, etc.) 
One then iteratively modifies the trajectory so as to decrease a cost – which encodes in particular how much the constraints are violated – until it falls below a certain threshold, implying in turn that the trajectory reaches the goal and all constraints are satisfied. Many interesting variations exist: the iterative modification step may be deterministic [@RatX09icra] or stochastic [@KalX11icra], the optimization may be done *through* contact [@MorX12acm; @PT13wafr], etc. However, for long time-horizon and high-DOF systems, this approach requires solving a large nonlinear optimization problem, which is computationally challenging because of the huge problem size and the existence of many local minima [see @Hau14ijrr for an extensive discussion of the advantages and limitations of trajectory optimization and comparison with path-velocity decomposition]. #### The quasi-static condition and its limitations Coming back to path-velocity decomposition, a fundamental requirement here is that the path found in the first step must be time-parameterizable. A commonly-used method to fulfill this requirement is to consider, in that step, the *quasi-static* constraints that are derived from the original kinodynamic constraints by assuming that the motion is executed at zero velocity. Indeed, the so-derived quasi-static constraints can be expressed using only configuration-space variables, in such a way that planning with quasi-static constraints is purely a geometric path planning problem. In the context of legged robots for example, the balance of the robot at zero velocity is guaranteed when the projection of the center of gravity lies in the support area – a purely geometric condition. This quasi-static condition is assumed in most works dedicated to the planning of complex humanoid motions [see e.g. @KufX02ar]. This workaround suffers however from a major limitation: the quasi-static condition may be too restrictive and one thus may overlook many possible solutions, i.e. 
incurring an important *loss in completeness*. For instance, legged robots walking with ZMP-based control [@VukX01humanoids] are dynamically balanced but almost never satisfy the aforementioned quasi-static condition on the center of gravity. Another example is provided by an actuated pendulum subject to severe torque limits, but which can still be put into the upright position by swinging back and forth several times. It is clear that such solutions make an essential use of the system dynamics and can in no way be discovered by quasi-static methods, nor by any method that considers only configuration-space coordinates. #### Planning truly dynamic motions Here we propose a method to overcome this limitation. At the heart of the proposed method is a new algorithm – Admissible Velocity Propagation (AVP) – which is based in turn on the classical Time-Optimal Path Parameterization (TOPP) algorithm first introduced by @BobX85ijrr [@SM86tac] and later perfected by many others [see @Pha14tro and references therein]. In contrast with TOPP, which determines *one* optimal velocity profile along a given path, AVP addresses *all* valid velocity profiles along that path, requiring only slightly more computation time than TOPP itself. Combining AVP with usual sampling-based path planners, such as RRT, gives rise to a family of new trajectory planners that can appropriately handle kinodynamic constraints while retaining the advantages associated with path-velocity decomposition. The remainder of this article is organized as follows. In Section \[sec:avp\], we briefly recall the fundamentals of TOPP before presenting AVP. In Section \[sec:planning\], we show how to combine AVP with usual sampling-based path planners such as RRT. In Section \[sec:applications\], we demonstrate the efficiency of the new AVP-based planners on some challenging kinodynamic planning problems – in particular, those where the quasi-static approach is *guaranteed* to fail. 
In one of the applications, the planned motion is executed on an actual 6-DOF robot. Finally, in Section \[sec:discussion\], we discuss the advantages and limitations of the proposed approach (one particular limitation is that the approach does not *a priori* apply to under-actuated systems) and sketch some future research directions. Propagating admissible velocities along a path {#sec:avp} ============================================== Background: Time-Optimal Path Parameterization (TOPP) {#sec:topp} ----------------------------------------------------- As mentioned in the Introduction, there are two main approaches to TOPP: “numerical integration” and “convex optimization”. We briefly recall the numerical integration approach [@BobX85ijrr; @SM86tac], on which AVP is based. For more details about this approach, the reader is referred to @Pha14tro. Let ${\mathbf{q}}$ be an $n$-dimensional vector representing the configuration of a robot system. Consider second-order inequality constraints of the form [@Pha14tro] $$\label{eq:gen} {\mathbf{A}}({\mathbf{q}})\ddot{\mathbf{q}}+ \dot{\mathbf{q}}^\top {\mathbf{B}}({\mathbf{q}}) \dot{\mathbf{q}}+ {\mathbf{f}}({\mathbf{q}}) \leq 0,$$ where ${\mathbf{A}}({\mathbf{q}})$, ${\mathbf{B}}({\mathbf{q}})$ and ${\mathbf{f}}({\mathbf{q}})$ are respectively an $M\times n$ matrix, an $n\times M \times n$ tensor and an $M$-dimensional vector. Inequality (\[eq:gen\]) is general and may represent a large variety of second-order systems and constraints, such as fully-actuated manipulators[^2] subject to velocity, acceleration or torque limits [see e.g. @BobX85ijrr; @SM86tac], wheeled vehicles subject to sliding and tip-over constraints [@SG91tra], etc. *Redundantly-actuated* systems, such as closed-chain manipulators subject to torque limits or legged robots in multi-contact subject to stability constraints, can also be represented by inequality (\[eq:gen\]) [@PS15tmech]. 
However, *under-actuated* systems cannot be in general taken into account by the framework, see Section \[sec:discussion\] for a more detailed discussion. Note that “direct” velocity bounds of the form $$\label{eq:velo} \dot{\mathbf{q}}^\top {\mathbf{B}}_v({\mathbf{q}}) \dot{\mathbf{q}}+ {\mathbf{f}}_v({\mathbf{q}}) \leq 0,$$ can also be taken into account [@Zla96icra]. For clarity, we shall not include such “direct” velocity bounds in the following development. Rather, we shall discuss separately how to deal with such bounds in Section \[sec:avpremarks\]. Consider now a path ${\mathcal{P}}$ in the configuration space, represented as the underlying path of a trajectory ${\mathbf{q}}(s)_{s\in[0,s_{\mathrm{end}}]}$. Assume that ${\mathbf{q}}(s)_{s\in[0,s_{\mathrm{end}}]}$ is $C^1$- and piecewise $C^2$-continuous. \[def:valid\] A *time-parameterization* of ${\mathcal{P}}$ – or time-*re*parameterization of ${\mathbf{q}}(s)_{s\in[0,s_{\mathrm{end}}]}$ – is an increasing *scalar function* $s : [0,T']\rightarrow [0,s_{\mathrm{end}}]$. A time-parameterization can be seen alternatively as a *velocity profile*, which is the curve $\dot s(s)_{s \in [0,s_{\mathrm{end}}]}$ in the $s$–$\dot s$ plane. We say that a time-parameterization or, equivalently, a velocity profile, is *valid* if $s(t)_{t\in[0,T']}$ is continuous, $\dot s$ is always strictly positive, and the *retimed* trajectory ${\mathbf{q}}(s(t))_{t\in[0,T']}$ satisfies the constraints of the system. To check whether the retimed trajectory satisfies the system constraints, one may differentiate ${\mathbf{q}}(s(t))$ with respect to $t$: $$\label{eq:dotq} \dot{\mathbf{q}}= {\mathbf{q}}_s\dot s, \quad \ddot{\mathbf{q}}= {\mathbf{q}}_s \ddot s + {\mathbf{q}}_{ss} \dot s^2 ,$$ where dots denote differentiations with respect to the time parameter $t$ and ${\mathbf{q}}_s=\frac{{\mathrm{d}}{\mathbf{q}}}{{\mathrm{d}}s}$ and ${\mathbf{q}}_{ss}=\frac{{\mathrm{d}}^2 {\mathbf{q}}}{{\mathrm{d}}s^2}$. 
Substituting (\[eq:dotq\]) into (\[eq:gen\]) then leads to $$\ddot s {\mathbf{A}}({\mathbf{q}}){\mathbf{q}}_s + \dot s^2 {\mathbf{A}}({\mathbf{q}}){\mathbf{q}}_{ss}+\dot s^2 {\mathbf{q}}_s^\top{\mathbf{B}}({\mathbf{q}}){\mathbf{q}}_s + {\mathbf{f}}({\mathbf{q}}) \leq 0,$$ which can be rewritten as $$\label{eq:gen2} \ddot s {\mathbf{a}}(s) + \dot s^2 {\mathbf{b}}(s) + {\mathbf{c}}(s) \leq 0,\quad\textrm{where}$$ $$\begin{aligned} \label{eq:toto} {\mathbf{a}}(s)&{\stackrel{\mathrm{def}}{=}}&{\mathbf{A}}({\mathbf{q}}(s)){\mathbf{q}}_s(s),\nonumber\\ {\mathbf{b}}(s)&{\stackrel{\mathrm{def}}{=}}&{\mathbf{A}}({\mathbf{q}}(s)){\mathbf{q}}_{ss}(s) + {\mathbf{q}}_s(s)^\top{\mathbf{B}}({\mathbf{q}}(s)){\mathbf{q}}_s(s),\\ {\mathbf{c}}(s)&{\stackrel{\mathrm{def}}{=}}&{\mathbf{f}}({\mathbf{q}}(s)).\nonumber \end{aligned}$$ Each row $i$ of equation (\[eq:gen2\]) is of the form $$a_i(s)\ddot s + b_i(s)\dot s^2 + c_i(s) \leq 0.$$ Next, - if $a_i(s)>0$, then one has $\ddot s \leq \frac{-c_i(s)-b_i(s)\dot s^2}{a_i(s)}$. Define the acceleration *upper bound* $\beta_i(s, \dot s) {\stackrel{\mathrm{def}}{=}}\frac{-c_i(s)-b_i(s)\dot s^2}{a_i(s)}$; - if $a_i(s)<0$, then one has $\ddot s \geq \frac{-c_i(s)-b_i(s)\dot s^2}{a_i(s)}$. Define the acceleration *lower bound* $\alpha_i(s, \dot s) {\stackrel{\mathrm{def}}{=}}\frac{-c_i(s)-b_i(s)\dot s^2}{a_i(s)}$. 
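For concreteness, the per-row bounds above can be evaluated numerically once the vectors ${\mathbf{a}}(s)$, ${\mathbf{b}}(s)$, ${\mathbf{c}}(s)$ of (\[eq:toto\]) are known at a given $s$. The following is a minimal sketch (an illustration, not the implementation of @Pha14tro), aggregating the row bounds into a single lower bound $\alpha$ and upper bound $\beta$ on $\ddot s$:

```python
import numpy as np

def accel_bounds(a, b, c, sdot):
    """Path-acceleration bounds at a fixed s: each constraint row
    a_i*sdd + b_i*sdot^2 + c_i <= 0 yields an upper bound on sdd
    when a_i > 0 and a lower bound when a_i < 0.  Rows with a_i = 0
    constrain only the velocity and are skipped in this sketch."""
    alpha, beta = -np.inf, np.inf
    for a_i, b_i, c_i in zip(a, b, c):
        if a_i == 0.0:
            continue
        bound = (-c_i - b_i * sdot**2) / a_i
        if a_i > 0:
            beta = min(beta, bound)    # sdd <= bound
        else:
            alpha = max(alpha, bound)  # sdd >= bound
    return alpha, beta
```

When `alpha > beta`, no feasible path acceleration exists at $(s,\dot s)$, which is precisely the condition defining the maximum velocity curve discussed below.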
One can then define for each $(s,\dot s)$ $$\label{eq:tata} \alpha(s,\dot s){\stackrel{\mathrm{def}}{=}}\max_i \alpha_i(s,\dot s),\nonumber \quad \beta(s,\dot s){\stackrel{\mathrm{def}}{=}}\min_i \beta_i(s,\dot s).\nonumber$$ From the above transformations, one can conclude that ${\mathbf{q}}(s(t))_{t\in[0,T']}$ satisfies the constraints (\[eq:gen\]) if and only if $$\label{eq:bounds} \forall t\in[0,T']\quad \alpha(s(t),\dot s(t)) \leq \ddot s(t) \leq \beta(s(t),\dot s(t)).$$ Note that $(s,\dot s)\mapsto(\dot s,\alpha(s,\dot s))$ and $(s,\dot s)\mapsto(\dot s,\beta(s,\dot s))$ can be viewed as two vector fields in the $s$–$\dot s$ plane. One can integrate velocity profiles following the field $(\dot s,\alpha(s,\dot s))$ (from now on, $\alpha$ in short) to obtain *minimum acceleration* profiles (or $\alpha$-profiles), or following the field $\beta$ to obtain *maximum acceleration* profiles (or $\beta$-profiles). Next, observe that if $\alpha(s,\dot s)>\beta(s,\dot s)$ then, from (\[eq:bounds\]), there is no possible value for $\ddot s$. Thus, to be valid, every velocity profile must stay below the maximum velocity curve (${\mathrm{MVC}}$ in short) defined by[^3] $$\label{eq:mvc} {\mathrm{MVC}}(s){\stackrel{\mathrm{def}}{=}}\left\{ \begin{array}{cl} \min \{\dot s\geq 0: \alpha(s,\dot s)= \beta(s,\dot s)\}&\mathrm{if}\ \alpha(s,0) \leq \beta(s,0),\\ 0&\mathrm{if}\ \alpha(s,0) > \beta(s,0). \end{array} \right.$$ It was shown [see e.g. @SL92jdsmc] that the time-minimal velocity profile is obtained by a *bang-bang*-type control, i.e., whereby the optimal profile follows alternatively the $\beta$ and $\alpha$ fields while always staying below the ${\mathrm{MVC}}$. A method to find the optimal profile then consists in (see Fig. \[fig:bobrow\]A for illustration): - find all the possible $\alpha\rightarrow\beta$ switch points. There are three types of such switch points: “discontinuous”, “singular” or “tangent” and they must all be on the ${\mathrm{MVC}}$. 
The procedure to find these switch points is detailed in @Pha14tro; - from each of these switch points, integrate backward following $\alpha$ and forward following $\beta$ to obtain the Limiting Curves (${\mathrm{LC}}$) [@SY89tra]; - construct the Concatenated Limiting Curve (CLC) by considering, for each $s$, the value of the lowest ${\mathrm{LC}}$ at $s$; - integrate forward from $(0,\dot s_{\mathrm{beg}})$ following $\beta$ and backward from $(s_{\mathrm{end}},\dot s_{\mathrm{end}})$ following $\alpha$, and consider the intersection of these profiles with each other or with the CLC. Note that the path velocities $\dot s_{\mathrm{beg}}$ and $\dot s_{\mathrm{end}}$ are computed from the desired initial and final velocities $v_{\mathrm{beg}}$ and $v_{\mathrm{end}}$ by $$\label{eq:convert} \dot s_{\mathrm{beg}}{\stackrel{\mathrm{def}}{=}}v_{\mathrm{beg}}/\|{\mathbf{q}}_s(0)\|, \quad \dot s_{\mathrm{end}}{\stackrel{\mathrm{def}}{=}}v_{\mathrm{end}}/\|{\mathbf{q}}_s(s_{\mathrm{end}})\|.$$ ![**A**: Illustration for Maximum Velocity Curve (MVC) and Concatenated Limiting Curve (CLC). The optimal velocity profile follows the green $\beta$-profile, then a portion of the CLC, and finally the yellow $\alpha$-profile. **B**: Illustration for the Switch Point Lemma.[]{data-label="fig:bobrow"}](fig/bobrow){width="10cm"} We now prove two lemmata that will be important later on. \[lemma:sp\] Assume that a forward $\beta$-profile hits the ${\mathrm{MVC}}$ at $s=s_1$ and a backward $\alpha$-profile hits the ${\mathrm{MVC}}$ at $s=s_2$, with $s_1<s_2$, then there exists at least one $\alpha\rightarrow\beta$ switch point on the ${\mathrm{MVC}}$ at some position $s_3\in[s_1,s_2]$. **Proof** At $(s_1,{\mathrm{MVC}}(s_1))$, the angle from the vector $\beta$ to the tangent to the ${\mathrm{MVC}}$ is negative (see Fig. \[fig:bobrow\]B). In addition, since we are on the ${\mathrm{MVC}}$, we have $\alpha=\beta$, thus the angle from $\alpha$ to the tangent is negative too. 
Next, at $(s_2,{\mathrm{MVC}}(s_2))$, the angle of $\alpha$ to the tangent to the ${\mathrm{MVC}}$ is positive (see Fig. \[fig:bobrow\]B). Thus, since the vector field $\alpha$ is continuous, there exists, between $s_1$ and $s_2$ (i) either a point where the angle between $\alpha$ and the tangent to the ${\mathrm{MVC}}$ is 0 – in which case we have a *tangent* switch point; (ii) or a point where the ${\mathrm{MVC}}$ is discontinuous – in which case we have a *discontinuous* switch point; (iii) or a point where the ${\mathrm{MVC}}$ is continuous but non differentiable – in which case we have a *singular* switch point. For more details, the reader is referred to @Pha14tro. $\Box$ \[lemma:clc\] Either one of the ${\mathrm{LC}}$’s reaches $\dot s=0$, or the ${\mathrm{CLC}}$ is *continuous*. **Proof** Assume by contradiction that no ${\mathrm{LC}}$ reaches $\dot s=0$ and that there exists a “hole” in the ${\mathrm{CLC}}$. The left border $s_1$ of the hole must then be defined by the intersection of the ${\mathrm{MVC}}$ with a forward $\beta$-${\mathrm{LC}}$ (coming from the previous $\alpha\rightarrow\beta$ switch point), and the right border $s_2$ of the hole must be defined by the intersection of the ${\mathrm{MVC}}$ with a backward $\alpha$-${\mathrm{LC}}$ (coming from the following $\alpha\rightarrow\beta$ switch point). By Lemma \[lemma:sp\] above, there must then exist a switch point between $s_1$ and $s_2$, which contradicts the definition of the hole. $\Box$ Admissible Velocity Propagation (AVP) {#sec:avp2} ------------------------------------- This section presents the Admissible Velocity Propagation algorithm (AVP), which constitutes the heart of our approach. This algorithm takes as inputs: - a path ${\mathcal{P}}$ in the configuration space, and - an interval $[\dot s_{\mathrm{beg}}^{\min},\dot s_{\mathrm{beg}}^{\max}]$ of initial path velocities; and returns the *interval* (cf. 
Theorem \[theo:interval\]) $[\dot s_{\mathrm{end}}^{\min}, \dot s_{\mathrm{end}}^{\max}]$ of *all* path velocities that the system can reach *at the end* of ${\mathcal{P}}$ after traversing ${\mathcal{P}}$ while respecting the system constraints[^4]. The algorithm comprises the following three steps: A : Compute the limiting curves; B : Determine the *maximum* final velocity $\dot s_{\mathrm{end}}^{\max}$ by integrating *forward* from $s=0$; C : Determine the *minimum* final velocity $\dot s_{\mathrm{end}}^{\min}$ by bisection search and by integrating *backward* from $s=s_{\mathrm{end}}$. We now detail each of these steps. #### A   Computing the limiting curves We first compute the Concatenated Limiting Curve (CLC) as shown in Section \[sec:topp\]. From Lemma \[lemma:clc\], either one of the ${\mathrm{LC}}$’s reaches 0 or the ${\mathrm{CLC}}$ is continuous. The former case is covered by A1 below, while the latter is covered by A2–5. A1 : One of the ${\mathrm{LC}}$’s hits the line $\dot s=0$. In this case, the path cannot be traversed by the system without violating the kinodynamic constraints: AVP returns `Failure`. Indeed, assume that a backward ($\alpha$) profile hits $\dot s=0$. Then any profile that goes from $s=0$ to $s=s_{\mathrm{end}}$ must cross that profile somewhere and *from above*, which violates the $\alpha$ bound (see Figure \[fig:A\]A). Similarly, if a forward ($\beta$) profile hits $\dot s=0$, then that profile must be crossed somewhere and *from below*, which violates the $\beta$ bound. Thus, no valid profile can go from $s=0$ to $s=s_{\mathrm{end}}$; **AB**\ ![Illustration for step A (computation of the ${\mathrm{LC}}$’s). **A**: illustration for case A1. A profile that crosses an $\alpha$-${\mathrm{CLC}}$ violates the $\alpha$ bound. **B**: illustration for case A3.[]{data-label="fig:A"}](fig/A "fig:"){width="10cm"} The CLC is now assumed to be continuous and strictly positive. 
Since it is bounded by $s=0$ from the left, $s=s_{\mathrm{end}}$ from the right, $\dot s=0$ from the bottom and the MVC from the top, there are only four exclusive and exhaustive cases, listed below. A2 : The ${\mathrm{CLC}}$ hits the ${\mathrm{MVC}}$ while integrating backward and while integrating forward. In this case, let $\dot s_{\mathrm{beg}}^*{\stackrel{\mathrm{def}}{=}}{\mathrm{MVC}}(0)$ and go to **B**. The situation where there is no switch point is assimilated to this case; A3 : The ${\mathrm{CLC}}$ hits $s=0$ while integrating backward, and the ${\mathrm{MVC}}$ while integrating forward (see Figure \[fig:A\]B). In this case, let $\dot s_{\mathrm{beg}}^*{\stackrel{\mathrm{def}}{=}}{\mathrm{CLC}}(0)$ and go to **B**; A4 : The ${\mathrm{CLC}}$ hits the ${\mathrm{MVC}}$ while integrating backward, and $s=s_{\mathrm{end}}$ while integrating forward. In this case, let $\dot s_{\mathrm{beg}}^*{\stackrel{\mathrm{def}}{=}}{\mathrm{MVC}}(0)$ and go to **B**; A5 : The ${\mathrm{CLC}}$ hits $s=0$ while integrating backward, and $s=s_{\mathrm{end}}$ while integrating forward. In this case, let $\dot s_{\mathrm{beg}}^*{\stackrel{\mathrm{def}}{=}}{\mathrm{CLC}}(0)$ and go to **B**. #### B   Determining the maximum final velocity {#sec:B} Note that, in any of the cases A2–5, $\dot s_{\mathrm{beg}}^*$ was defined so that no valid profile can start above it. Thus, if $\dot s_{\mathrm{beg}}^{\min} > \dot s_{\mathrm{beg}}^*$, the path is not traversable: AVP returns `Failure`. Otherwise, the interval of *valid* initial velocities is $[\dot s_{\mathrm{beg}}^{\min}, \dot s_{\mathrm{beg}}^{{\max}*}]$ where $\dot s_{\mathrm{beg}}^{{\max}*} {\stackrel{\mathrm{def}}{=}}\min(\dot s_{\mathrm{beg}}^{\max},\dot s_{\mathrm{beg}}^*)$. 
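The forward integration producing a $\beta$-profile can be sketched as follows (a simplified illustration with user-supplied callables for $\beta$ and the ${\mathrm{MVC}}$, using $\frac{{\mathrm{d}}(\dot s^2)}{{\mathrm{d}}s}=2\ddot s$ on a uniform grid; the ${\mathrm{CLC}}$ case and switch-point handling of the full algorithm are omitted):

```python
import numpy as np

def integrate_beta_profile(s_grid, beta, mvc, sdot_beg):
    """Integrate a velocity profile forward following the beta field,
    via d(sdot^2)/ds = 2*beta(s, sdot).  Returns the profile and how it
    terminated: 'end' (reached s_end), 'zero' (hit sdot = 0) or
    'mvc' (crossed the maximum velocity curve)."""
    ds = s_grid[1] - s_grid[0]
    prof = np.zeros_like(s_grid)
    prof[0] = sdot_beg
    for k in range(len(s_grid) - 1):
        sq = prof[k]**2 + 2.0 * beta(s_grid[k], prof[k]) * ds
        if sq <= 0.0:
            return prof[:k + 1], 'zero'
        prof[k + 1] = np.sqrt(sq)
        if prof[k + 1] > mvc(s_grid[k + 1]):
            return prof[:k + 2], 'mvc'
    return prof, 'end'
```

The three termination outcomes correspond, in this simplified setting, to cases B1 ('zero'), B2 ('end') and B4 ('mvc') of the text.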
Under the nomenclature introduced in Definition \[def:valid\], we say that a velocity $\dot s_{\mathrm{end}}$ is a *valid* final velocity if there exists a valid profile that starts at $(0,\dot s_0)$ for some $\dot s_0\in[\dot s_{\mathrm{beg}}^{\min},\dot s_{\mathrm{beg}}^{\max}]$ and ends at $(s_{\mathrm{end}},\dot s_{\mathrm{end}})$. We argue that the maximum valid final velocity can be obtained by integrating forward from $\dot s_{\mathrm{beg}}^{{\max}*}$ following $\beta$. Let’s call $\Phi$ the velocity profile obtained by doing so. Since $\Phi$ is continuous and bounded by $s=s_{\mathrm{end}}$ from the right, $\dot s=0$ from the bottom, and either the MVC or the CLC from the top, there are four exclusive and exhaustive cases, listed below (see Figure \[fig:B\] for illustration). ![Illustration for step B: one can determine the maximum final velocity by integrating forward from $(0,\dot s_{\mathrm{beg}}^*)$.[]{data-label="fig:B"}](fig/B){width="7cm"} B1 : $\Phi$ hits $\dot s=0$ (cf. profile B1 in Fig. \[fig:B\]). Here, as in the case A1, the path is not traversable: AVP returns `Failure`. Indeed, any profile that starts below $\dot s_{\mathrm{beg}}^{{\max}*}$ and tries to reach $s=s_{\mathrm{end}}$ must cross $\Phi$ somewhere and *from below*, thus violating the $\beta$ bound; B2 : $\Phi$ hits $s=s_{\mathrm{end}}$ (cf. profile B2 in Fig. \[fig:B\]). Then $\Phi(s_{\mathrm{end}})$ corresponds to the $\dot s_{\mathrm{end}}^{\max}$ we are looking for. Indeed, $\Phi(s_{\mathrm{end}})$ is reachable – precisely by $\Phi$ –, and to reach any value above $\Phi(s_{\mathrm{end}})$, the corresponding profile would have to cross $\Phi$ somewhere and from below; B3 : $\Phi$ hits the ${\mathrm{CLC}}$. There are two sub-cases: 1. If we proceed from cases A4 or A5 (in which the ${\mathrm{CLC}}$ reaches $s=s_{\mathrm{end}}$, cf. profile B3 in Fig. \[fig:B\]), then ${\mathrm{CLC}}(s_{\mathrm{end}})$ corresponds to the $\dot s_{\mathrm{end}}^{\max}$ we are looking for.
Indeed, ${\mathrm{CLC}}(s_{\mathrm{end}})$ is reachable – precisely by the concatenation of $\Phi$ and the ${\mathrm{CLC}}$ –, and no value above ${\mathrm{CLC}}(s_{\mathrm{end}})$ can be valid by the definition of the ${\mathrm{CLC}}$; 2. If we proceed from cases A2 or A3, then the ${\mathrm{CLC}}$ hits the ${\mathrm{MVC}}$ while integrating forward, say at $s=s_1$; we then proceed as in case B4 below; B4 : $\Phi$ hits the ${\mathrm{MVC}}$, say at $s=s_1$. It is clear that ${\mathrm{MVC}}(s_{\mathrm{end}})$ is an upper bound on the valid final velocities, but we have to ascertain whether this value is reachable. For this, we use the predicate IS\_VALID defined in Box \[algo:valid\] of step **C**: - if IS\_VALID$({\mathrm{MVC}}(s_{\mathrm{end}}))$, then ${\mathrm{MVC}}(s_{\mathrm{end}})$ is the $\dot s_{\mathrm{end}}^{\max}$ we are looking for; - else, the path is not traversable: AVP returns `Failure`. Indeed, as we shall see, if for a certain $\dot s_{\mathrm{test}}$ the predicate IS\_VALID($\dot s_{\mathrm{test}}$) is `False`, then no velocity below $\dot s_{\mathrm{test}}$ can be valid either. #### C   Determining the minimum final velocity {#sec:C} Assume that we proceed from one of the cases B2–B4. Consider a final velocity $\dot s_{\mathrm{test}}$ where - $\dot s_{\mathrm{test}}<\Phi(s_{\mathrm{end}})$ if we proceed from B2; - $\dot s_{\mathrm{test}}< {\mathrm{CLC}}(s_{\mathrm{end}})$ if we proceed from B3a; - $\dot s_{\mathrm{test}}< {\mathrm{MVC}}(s_{\mathrm{end}})$ if we proceed from B3b or B4. Let us integrate backward from $(s_{\mathrm{end}},\dot s_\mathrm{test})$ following $\alpha$ and call the resulting profile $\Psi$. We have the following lemma. \[lemma:psi\] $\Psi$ cannot hit the ${\mathrm{MVC}}$ before hitting either $\Phi$ or the ${\mathrm{CLC}}$. **Proof** If we proceed from B2 or B3a, then it is clear that $\Psi$ must first hit $\Phi$ (case B2) or the ${\mathrm{CLC}}$ (case B3a) before hitting the ${\mathrm{MVC}}$.
If we proceed from B3b or B4, assume by contradiction that $\Psi$ hits the ${\mathrm{MVC}}$ first at a position $s=s_2$. Then by Lemma \[lemma:sp\], there must exist a switch point between $s_2$ and the end of the ${\mathrm{CLC}}$ (in case B3b) or the end of $\Phi$ (in case B4). In both cases, there is a contradiction with the fact that the ${\mathrm{CLC}}$ is continuous. $\Box$ We can now detail in Box \[algo:valid\] the predicate IS\_VALID which assesses whether a final velocity $\dot s_{\mathrm{test}}$ is valid. **Input:** candidate final velocity $\dot s_\mathrm{test}$ **Output:** `True` iff there exists a valid velocity profile with final velocity $\dot s_\mathrm{test}$ Consider the profile $\Psi$ constructed above. Since it must hit $\Phi$ or the CLC before hitting the MVC, the following five cases are exclusive and exhaustive (see Fig. \[fig:C\] for illustrations): C1 : $\Psi$ hits $\dot s=0$ (Fig. \[fig:C\], profile C1). Then, as in cases A1 or B1, no velocity profile can reach $\dot s_\mathrm{test}$: return `False`; C2 : $\Psi$ hits $s=0$ for some $\dot s_0<\dot s_{\mathrm{beg}}^{\min}$ (see Figure \[fig:C\], profile C2). Then any profile that ends at $\dot s_\mathrm{test}$ would have to hit $\Psi$ from above, which is impossible: return `False`; C3 : $\Psi$ hits $s=0$ at a point $\dot s_0 \in [\dot s_{\mathrm{beg}}^{\min},\dot s_{\mathrm{beg}}^{{\max}*}]$ (Fig. \[fig:C\], profile C3). Then $\dot s_\mathrm{test}$ can be reached following the valid velocity profile $\Psi$: return `True`. (Note that, if $\dot s_0 > \dot s_{\mathrm{beg}}^{{\max}*}$, then $\Psi$ must have crossed $\Phi$ somewhere before arriving at $s=0$, which is covered by case C4 below); C4 : $\Psi$ hits $\Phi$ (Fig. \[fig:C\], profile C4). Then $\dot s_\mathrm{test}$ can be reached, precisely by the concatenation of a part of $\Phi$ and $\Psi$: return `True`; C5 : $\Psi$ hits the ${\mathrm{CLC}}$ (Fig. \[fig:C\], profile C5).
Then $\dot s_\mathrm{test}$ can be reached, precisely by the concatenation of $\Phi$, a part of the ${\mathrm{CLC}}$ and $\Psi$: return `True`. \[algo:valid\] ![Illustration for the predicate IS\_VALID: one can assess whether a final velocity $\dot s_\mathrm{test}$ is valid by integrating backward from $(s_{\mathrm{end}},\dot s_\mathrm{test})$.[]{data-label="fig:C"}](fig/C){width="7cm"} At this point, either the path is not traversable, or $\dot s_{\mathrm{end}}^{\max}$ has been determined in **B**. Remark from C3–C5 that, if some $\dot s_0$ is a valid final velocity, then any $\dot s \in [\dot s_0, \dot s_{\mathrm{end}}^{\max}]$ is also valid. Similarly, from C1 and C2, if some $\dot s_0$ is *not* a valid final velocity, then *no* $\dot s \leq \dot s_0$ can be valid. We have thus established the following result: \[theo:interval\] The set of valid final velocities is an interval. This interval property enables one to efficiently search for the minimum final velocity as follows. First, test whether $0$ is a valid final velocity: if IS\_VALID$(0)$, then the sought-after $\dot s_{\mathrm{end}}^{\min}$ is 0. Else, run a standard bisection search with initial bounds $(0,\dot s_{\mathrm{end}}^{\max}]$, where 0 is not valid and $\dot s_{\mathrm{end}}^{\max}$ is valid. Thus, after $\log_2(1/\epsilon)$ executions of the routine IS\_VALID, one can determine $\dot s_{\mathrm{end}}^{\min}$ with a precision $\epsilon$. Remarks {#sec:avpremarks} ------- #### Implementation and complexity of AVP As is clear from the previous section, AVP can be readily adapted from the numerical integration approach to TOPP. As a matter of fact, we implemented AVP in about 100 lines of C++ code based on the TOPP library we developed previously (see <https://github.com/quangounet/TOPP>). In terms of complexity, the main difference between AVP and TOPP lies in the bisection search of step C, which requires $\log(1/\epsilon)$ backward integrations.
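The step-C bisection just mentioned can be sketched as follows; `is_valid` stands in for the IS\_VALID predicate of Box \[algo:valid\], and the code is a minimal illustration rather than the actual implementation. Its correctness relies on the interval property established above: validity is monotone in $\dot s$, with upper end $\dot s_{\mathrm{end}}^{\max}$.

```python
def min_valid_velocity(is_valid, sd_end_max, eps=0.01):
    """Bisection search for the minimum valid final velocity.

    is_valid(sd) plays the role of IS_VALID; sd_end_max is the maximum
    valid final velocity determined in step B. By the interval property,
    validity is monotone, so bisection applies."""
    if is_valid(0.0):
        return 0.0
    lo, hi = 0.0, sd_end_max      # invariant: lo is invalid, hi is valid
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if is_valid(mid):
            hi = mid
        else:
            lo = mid
    return hi                     # valid, and within eps of the true minimum
```

With a toy predicate such as `lambda sd: sd >= 1.3` and `sd_end_max = 2.0`, the search converges to a valid velocity within `eps` of 1.3.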
However, in practice, these integrations terminate quickly, either by hitting the MVC or the line $\dot s=0$. Thus, the actual running time of AVP is only slightly larger than that of TOPP. As an illustration, in the bottle experiment of Section \[sec:bottle\], we considered 100 random paths, discretized with grid size $N=1000$. TOPP and AVP (with the bisection precision $\epsilon=0.01$) under velocity, acceleration and balance constraints took the same computation time of 0.033 $\pm$ 0.003s per path. #### “Direct” velocity bounds “Direct” velocity bounds in the form of (\[eq:velo\]) give rise to another maximum velocity curve, say ${\mathrm{MVC}}_D$, in the $(s,\dot s)$ space. When a forward profile intersects ${\mathrm{MVC}}_D$ (before reaching the “Bobrow’s ${\mathrm{MVC}}$”), several cases can arise: 1. If “sliding” along the ${\mathrm{MVC}}_D$ does not violate the actuation bounds (\[eq:gen\]), then slide as far as possible along the ${\mathrm{MVC}}_D$. The “slide” terminates either (a) when the maximum acceleration vector $\beta$ points downward from the ${\mathrm{MVC}}_D$: in this case, follow that vector out of ${\mathrm{MVC}}_D$; or (b) when the minimum acceleration vector $\alpha$ points upward from the ${\mathrm{MVC}}_D$: in this case, proceed as in 2; 2. If not, then search forward on the ${\mathrm{MVC}}_D$ until finding a point from which one can integrate backward. Such a point is guaranteed to exist, and the backward profile will intersect the forward profile. For more details, the reader is referred to [@Zla96icra]. This reasoning can be extended to AVP: when integrating the forward or the backward profiles (in steps A, B, C of the algorithm), each time a profile intersects the ${\mathrm{MVC}}_D$, one simply applies the above steps.
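For illustration only, the decision taken when a forward profile reaches ${\mathrm{MVC}}_D$ can be phrased as a comparison of slopes in the $(s,\dot s)$ plane. The encoding below, which compares the profile slopes $d\dot s/ds$ induced by $\alpha$ and $\beta$ with the slope of ${\mathrm{MVC}}_D$, is our own paraphrase of the two cases above, not code from the TOPP library:

```python
def mvc_d_action(alpha_slope, beta_slope, mvc_d_slope):
    """Decide the next move at a point of the direct-velocity-bound
    curve MVC_D (all arguments are slopes d(sd)/ds at that point)."""
    if beta_slope <= mvc_d_slope:
        # beta points downward from MVC_D: follow it out of the curve
        return "follow_beta"
    if alpha_slope >= mvc_d_slope:
        # alpha points upward from MVC_D: search forward for a point
        # from which one can integrate backward (case 2 above)
        return "search_forward"
    # otherwise sliding along MVC_D respects the actuation bounds
    return "slide"
```

Sliding is feasible exactly when the slope of ${\mathrm{MVC}}_D$ lies between the $\alpha$ and $\beta$ slopes.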
#### AVP-backward Consider the “AVP-backward” problem: given an interval of final velocities $[\dot s_{\mathrm{end}}^{\min}, \dot s_{\mathrm{end}}^{\max}]$, compute the interval $[\dot s_{\mathrm{beg}}^{\min},\dot s_{\mathrm{beg}}^{\max}]$ of all possible initial velocities. As we shall see in Section \[sec:implementation\], AVP-backward is essential for the *bi-directional* version of AVP-RRT [see also @NM91tra]. It turns out that AVP-backward can be easily obtained by modifying AVP as follows [@LP14ijcai]: - step A of AVP-backward is the same as in AVP; - in step B of AVP-backward, one integrates *backward* from $\dot s_{\mathrm{end}}^{{\max}*}$ instead of integrating *forward* from $\dot s_{\mathrm{beg}}^{{\max}*}$; - in the bisection search of step C of AVP-backward, one integrates *forward* from $(0,\dot s_{\mathrm{test}})$ instead of integrating *backward* from $(s_{\mathrm{end}},\dot s_{\mathrm{test}})$. #### Convex optimization approach As mentioned in the Introduction, “convex optimization” is another possible approach to TOPP [@VerX09tac; @Hau14ijrr]. It is however unclear to us whether one can modify that approach to yield a “convex-optimization-based AVP”, other than by sampling a large number of $(\dot s_{\mathrm{start}},\dot s_{\mathrm{end}})$ pairs and running the “convex-optimization-based TOPP” between $(0,\dot s_{\mathrm{start}})$ and $(s_{\mathrm{end}},\dot s_{\mathrm{end}})$, which would arguably be very slow. Kinodynamic trajectory planning using AVP {#sec:planning} ========================================= Combining AVP with sampling-based planners {#sec:avprrt} ------------------------------------------ The AVP algorithm presented in Section \[sec:avp2\] is general and can be combined with various iterative path planners. As an example, we detail in Box \[algo:rrt\] and illustrate in Figure \[fig:avp-rrt\] a planner we call AVP-RRT, which results from the combination of AVP with the standard RRT path planner [@KL00icra].
As in the standard RRT, AVP-RRT iteratively constructs a tree ${\mathcal{T}}$ in the configuration space. However, in contrast with the standard RRT, a vertex $V$ here consists of a triplet ($V$.config, $V$.inpath, $V$.interval) where $V$.config is an element of the configuration space ${\mathscr{C}}$, $V$.inpath is a path ${\mathcal{P}}\subset {\mathscr{C}}$ that connects the configuration of $V$’s parent to $V$.config, and $V$.interval is the interval of reachable velocities at $V$.config, that is, at the end of $V$.inpath. At each iteration, a random configuration ${\mathbf{q}}_{\mathrm{rand}}$ is generated. The EXTEND routine (see Box \[algo:extend\]) then tries to extend the tree ${\mathcal{T}}$ *towards* ${\mathbf{q}}_{\mathrm{rand}}$ from the closest – in a certain metric $d$ – vertex in ${\mathcal{T}}$. The algorithm terminates when either - A newly-found vertex can be connected to the goal configuration (line 10 of Box \[algo:rrt\]). In this case, AVP guarantees by recursion that there exists a path from ${\mathbf{q}}_{\mathrm{start}}$ to ${\mathbf{q}}_{\mathrm{goal}}$ and that this path is time-parameterizable; - After $N_\mathrm{maxrep}$ repetitions, no vertex could be connected to ${\mathbf{q}}_{\mathrm{goal}}$. In this case, the algorithm returns `Failure`. 
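The vertex triplet can be rendered as a small data structure; the following Python sketch (with a `parent` field added for trajectory reconstruction) is illustrative and does not reproduce the authors' implementation:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Vertex:
    """A vertex of the AVP-RRT tree."""
    config: Tuple[float, ...]          # element of the configuration space
    inpath: Optional[object]           # path from the parent's config to config
    interval: Tuple[float, float]      # reachable path velocities [sd_min, sd_max]
    parent: Optional["Vertex"] = None  # None for the root vertex

# The root vertex: no incoming path, zero initial velocity interval.
root = Vertex(config=(0.0, 0.0), inpath=None, interval=(0.0, 0.0))
```

Following the `parent` chain from a goal-connected vertex back to `root` yields the concatenation of `inpath` segments used by COMPUTE\_TRAJECTORY.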
**Input:** ${\mathbf{q}}_{\mathrm{start}}$, ${\mathbf{q}}_{\mathrm{goal}}$ **Output:** A valid trajectory connecting ${\mathbf{q}}_{\mathrm{start}}$ to ${\mathbf{q}}_{\mathrm{goal}}$ or `Failure`

1: ${\mathcal{T}}\leftarrow$ NEW\_TREE()\
2: $V_{\mathrm{start}}\leftarrow$ NEW\_VERTEX()\
3: $V_{\mathrm{start}}.\mathrm{config}\leftarrow{\mathbf{q}}_{\mathrm{start}}$; $V_{\mathrm{start}}.\mathrm{inpath}\leftarrow\texttt{Null}$; $V_{\mathrm{start}}.\mathrm{interval}\leftarrow [0,0]$\
4: INITIALIZE(${\mathcal{T}},V_{\mathrm{start}}$)\
5: **for** $\mathrm{rep}=1$ **to** $N_\mathrm{maxrep}$ **do**\
6: $\quad{\mathbf{q}}_{\mathrm{rand}}\leftarrow$ RANDOM\_CONFIG()\
7: $\quad V_{\mathrm{new}}\leftarrow$ EXTEND(${\mathcal{T}},{\mathbf{q}}_{\mathrm{rand}}$)\
8: $\quad$**if** $V_{\mathrm{new}}\neq$ `Null` **then**\
9: $\quad\quad$ADD\_VERTEX(${\mathcal{T}},V_{\mathrm{new}}$)\
10: $\quad\quad$**if** CONNECT($V_{\mathrm{new}},{\mathbf{q}}_{\mathrm{goal}}$) **then**\
11: $\quad\quad\quad$**return** COMPUTE\_TRAJECTORY(${\mathcal{T}},{\mathbf{q}}_{\mathrm{goal}}$)\
12: **return** `Failure`

![Illustration for AVP-RRT. The horizontal plane represents the configuration space while the vertical axis represents the path velocity space. Black areas represent configuration space obstacles. A vertex in the tree is composed of a configuration (blue disks), the incoming path from the parent (blue curve), and the interval of admissible velocities (bold magenta segments). At each tree extension step, one interpolates a smooth, collision-free path in the configuration space and propagates the interval of admissible velocities along that path using AVP.
The fine magenta line shows one possible valid velocity profile (which is guaranteed to exist by AVP) “above” the path connecting ${\mathbf{q}}_{\mathrm{start}}$ and ${\mathbf{q}}_{\mathrm{new}}$.[]{data-label="fig:avp-rrt"}](fig/avp-rrt){width="14cm"}

**Input:** ${\mathcal{T}}$, ${\mathbf{q}}_{\mathrm{rand}}$ **Output:** A new vertex $V_{\mathrm{new}}$ or `Null`

1: $V_{\mathrm{new}}\leftarrow$ `Null`\
2: $V_{\mathrm{near}}\leftarrow$ NEAREST\_NEIGHBOR(${\mathcal{T}},{\mathbf{q}}_{\mathrm{rand}}$)\
3: $({\mathcal{P}}_{\mathrm{new}},{\mathbf{q}}_{\mathrm{new}}) \leftarrow$ INTERPOLATE($V_{\mathrm{near}},{\mathbf{q}}_{\mathrm{rand}}$)\
4: **if** ${\mathcal{P}}_{\mathrm{new}}$ is collision-free **then**\
5: $\quad[\dot s_{\min},\dot s_{\max}]\leftarrow$ AVP(${\mathcal{P}}_{\mathrm{new}},V_{\mathrm{near}}.\mathrm{interval}$)\
6: $\quad$**if** AVP did not return `Failure` **then**\
7: $\quad\quad V_{\mathrm{new}}\leftarrow$ NEW\_VERTEX()\
8: $\quad\quad V_{\mathrm{new}}.\mathrm{config}\leftarrow{\mathbf{q}}_{\mathrm{new}}$; $V_{\mathrm{new}}.\mathrm{inpath}\leftarrow {\mathcal{P}}_{\mathrm{new}}$; $V_{\mathrm{new}}.\mathrm{interval}\leftarrow [\dot s_{\min},\dot s_{\max}]$\
9: $\quad\quad$**return** $V_{\mathrm{new}}$\
10: **return** `Null`

The other routines are defined as follows: - CONNECT($V, {\mathbf{q}}_{\mathrm{goal}}$) attempts to connect $V$ directly to the goal configuration ${\mathbf{q}}_{\mathrm{goal}}$, using the same algorithm as in lines 2 to 10 of Box \[algo:extend\], but with the further requirement that the goal velocity is included in the final velocity interval; - COMPUTE\_TRAJECTORY(${\mathcal{T}},{\mathbf{q}}_{\mathrm{goal}}$) reconstructs the entire path ${\mathcal{P}}_\textrm{total}$ from ${\mathbf{q}}_{\mathrm{start}}$ to ${\mathbf{q}}_{\mathrm{goal}}$ by recursively concatenating the $V$.inpath fields. Next, ${\mathcal{P}}_\textrm{total}$ is time-parameterized by applying TOPP. The existence of a valid time-parameterization is guaranteed, by recursion, by AVP. - NEAREST\_NEIGHBOR(${\mathcal{T}}, {\mathbf{q}}$) returns the vertex of ${\mathcal{T}}$ whose configuration is closest to configuration ${\mathbf{q}}$ in the metric $d$, see Section \[sec:implementation\] for a more detailed discussion.
- INTERPOLATE($V,{\mathbf{q}}$) returns a pair $({\mathcal{P}}_{\mathrm{new}},{\mathbf{q}}_{\mathrm{new}})$ where ${\mathbf{q}}_{\mathrm{new}}$ is defined as follows: - if $d(V$.config,${\mathbf{q}})\leq R$, where $R$ is a user-defined extension radius as in the standard RRT algorithm [@KL00icra], then ${\mathbf{q}}_{\mathrm{new}}\leftarrow{\mathbf{q}}$; - otherwise, ${\mathbf{q}}_{\mathrm{new}}$ is a configuration “in the direction of” ${\mathbf{q}}$ but situated within a distance $R$ of $V$.config (contrary to the standard RRT, it might not be wise to choose a configuration lying exactly on the segment connecting $V$.config and ${\mathbf{q}}$, since here one also has to take care of $C^1$-continuity, see below). The path ${\mathcal{P}}_{\mathrm{new}}$ is a smooth path connecting $V$.config and ${\mathbf{q}}_{\mathrm{new}}$, and such that the concatenation of $V$.inpath and ${\mathcal{P}}_{\mathrm{new}}$ is $C^1$ at $V$.config, see Section \[sec:implementation\] for a more detailed discussion. Implementation and variations {#sec:implementation} ----------------------------- As in the standard RRT [@KL00icra], some implementation choices substantially influence the performance of the algorithm. Metric : In state-space RRTs, the most critical choice is that of the metric $d$, in particular, the *relative weighting* between configuration-space coordinates and velocity coordinates. In our approach, since the whole interval of valid path velocities is considered, the relative weighting does not come into play. In practice, a simple Euclidean metric on the configuration space is often sufficient. However, in some applications, one may also include the *final orientation* of $V$.inpath in the metric. Interpolation : In geometric path planners, the interpolation between two configurations is usually done using a straight segment. Here, since one needs to propagate velocities, it is necessary to enforce $C^1$-continuity at the junction point.
In the examples of Section \[sec:applications\], we used third-degree polynomials to do so. Other interpolation methods are possible: higher-order polynomials, splines, etc. The choice of the appropriate method depends on the application and plays an important role in the performance of the algorithm. K-nearest-neighbors : Attempting connections from the $K$ nearest neighbors, where $K>1$ is a judiciously chosen parameter, has been found to improve the performance of RRT. To implement this, it suffices to replace line 2 of Box \[algo:extend\] with a FOR loop that enumerates the $K$ nearest neighbors. Note that this procedure is geared towards reducing the search time, not towards improving the trajectory quality as in RRT$^*$ [@KF11ijrr], see also below. Post-processing : After finding a solution trajectory, one can improve its quality (e.g. trajectory duration, trajectory smoothness, etc.) by repeatedly applying the following shortcutting procedure [@GO07ijrr; @Pha12mms]: 1. select two random configurations on the trajectory; 2. interpolate a smooth shortcut path between these two configurations; 3. time-parameterize the shortcut using TOPP; 4. if the time-parameterized shortcut has a shorter duration than the initial segment, then replace the latter by the former. Instead of shortcutting, one may also give the trajectory found by AVP-RRT as an initial guess to a trajectory optimization algorithm, or implement a re-wiring procedure as in RRT$^*$ [@KF11ijrr], which has been shown to be asymptotically optimal in the context of state-based planning (note however that such re-wiring is not straightforward and might require additional algorithmic developments). Another significant benefit of AVP is that one can readily adapt heuristics that have been developed for geometric path planners. We discuss two such heuristics below.
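Before turning to these heuristics, the shortcutting procedure above can be sketched as follows. The snippet is a toy illustration: `interpolate` and `topp_duration` are placeholders for the interpolation and TOPP routines discussed in the text, and a trajectory is represented as a plain list of configurations:

```python
import random

def shortcut(traj, num_rounds, interpolate, topp_duration):
    """Repeatedly try to replace a random portion of the trajectory by a
    shorter (in time-optimal duration) interpolated shortcut."""
    for _ in range(num_rounds):
        if len(traj) < 3:
            break
        i, j = sorted(random.sample(range(len(traj)), 2))
        if j - i < 2:
            continue  # nothing to shortcut between adjacent points
        short = interpolate(traj[i], traj[j])
        if topp_duration(short) < topp_duration(traj[i:j + 1]):
            traj = traj[:i + 1] + short[1:-1] + traj[j:]
    return traj
```

With 1-D configurations, `interpolate = lambda a, b: [a, b]` and the total variation as a stand-in for duration, the loop removes detours while preserving both endpoints.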
Bi-directional RRT : @KL00icra remarked that growing two trees simultaneously, one rooted at the initial configuration and one rooted at the goal configuration, yielded significant improvement over the classical uni-directional RRT. This idea [see also @NM91tra] can be easily implemented in the context of AVP-RRT as follows [@LP14ijcai]: - The start tree is grown normally as in Section \[sec:avprrt\]; - The goal tree is grown similarly, but using AVP-backward (see Section \[sec:avpremarks\]) for the velocity propagation step; - Assume that one finds a configuration where the two trees are *geometrically* connected. If the forward velocity interval of the start tree and the backward velocity interval of the goal tree have a non-empty intersection at this configuration, then the two trees can be connected *dynamically*. Bridge test : If two nearby configurations are in the obstacle space but their midpoint ${\mathbf{q}}$ is in the free space, then ${\mathbf{q}}$ is most probably in a narrow passage. This observation enables one to find a large number of such configurations ${\mathbf{q}}$, which is essential in problems involving narrow passages [@HsuX03icra]. The bridge test can be easily implemented in AVP-RRT by simply modifying RANDOM\_CONFIG in line 6 of Box \[algo:rrt\] to include it. One can observe from the above discussion that powerful heuristics developed for geometric path planning can be readily used in AVP-RRT, precisely because the latter is built on the idea of path-velocity decomposition. It is unclear how such heuristics can be integrated into other approaches to kinodynamic motion planning, such as the trajectory optimization approach discussed in the Introduction.
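As a side note, the *dynamic* connection test used by the bi-directional variant above reduces to an interval intersection; a minimal sketch (illustrative naming):

```python
def can_connect(forward_interval, backward_interval):
    """Return True iff the forward velocity interval of the start tree
    and the backward velocity interval of the goal tree intersect at
    the shared configuration, i.e. the two trees can be connected
    dynamically."""
    lo = max(forward_interval[0], backward_interval[0])
    hi = min(forward_interval[1], backward_interval[1])
    return lo <= hi
```

A single shared admissible path velocity suffices, so touching intervals (e.g. $[0,0.5]$ and $[0.5,2]$) also connect.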
Examples of applications {#sec:applications} ======================== As AVP-RRT is based on the classical Time-Optimal Path Parameterization (TOPP) algorithm, it can be applied to any type of system and constraint that TOPP can handle, from double-integrators subject to velocity and acceleration bounds, to manipulators subject to torque limits [@BobX85ijrr; @SM86tac], to wheeled vehicles subject to balance constraints [@SG91tra], to humanoid robots in multi-contact tasks [@PS15tmech], etc. Furthermore, the overhead for addressing a new problem is minimal: it suffices to reduce the system constraints to the form of inequality (\[eq:gen\]), and the job is done! In this section, we present two examples where AVP-RRT was used to address planning problems in which *no quasi-static solution exists*. In the first example, the task consisted in swinging a double pendulum into the upright configuration under severe torque bounds. While this example does not fully exploit the advantages associated with path-velocity decomposition (no configuration-space obstacle nor kinematic closure constraint was considered), we chose it since it was simple enough to enable a careful comparison with the usual state-space planning approach [@LK01ijrr]. In the second example, the task consisted in transporting a bottle placed on a tray through a small opening using a commercially-available manipulator (6 DOFs). This example demonstrates the full power of path-velocity decomposition: geometric constraints (going through the small opening) and dynamics constraints (the bottle must remain on the tray) could be addressed separately. To the best of our knowledge, this is the first successful demonstration on a non-custom-built robot that kinodynamic planning can succeed where quasi-static planning is guaranteed to fail.
Double pendulum with severe torque bounds {#sec:pendulum} ----------------------------------------- We first consider a fully-actuated double pendulum (see Figure \[fig:pendulum\]B), subject to torque limits $$|\tau_1|\leq \tau_1^{\max}, \qquad |\tau_2|\leq \tau_2^{\max}.$$ Such a pendulum can be seen as a 2-link manipulator, so that the reduction to the form of (\[eq:gen\]) is straightforward, see @Pha14tro. ### Obstruction to quasi-static planning The task consisted in bringing the pendulum from its initial state $(\theta_1, \theta_2, \dot\theta_1, \dot\theta_2) = (0, 0, 0, 0)$ towards the upright state $(\theta_1, \theta_2, \dot\theta_1, \dot\theta_2) = (\pi, 0, 0, 0)$, while respecting the torque bounds. For simplicity, we did not consider self-collision issues. Any trajectory that achieves the task must pass through a configuration where $\theta_1=\pi/2$. Note that the configuration with $\theta_1=\pi/2$ that requires the smallest torque at the first joint to stay still is $(\theta_1, \theta_2) = (\pi/2, \pi)$. Let then $\tau_1^\mathrm{qs}$ be this smallest torque. It is clear that, if $\tau_1^{\max} < \tau_1^\mathrm{qs}$, then *no quasi-static trajectory* can achieve the task. In our simulations, we used the following lengths and masses for the links: $l=0.2$m and $m=8$kg, yielding $\tau_1^\mathrm{qs}=15.68$N$\cdot$m. For information, the smallest torque at the second joint to keep the configuration $(\theta_1,\theta_2)=(0,\pi/2)$ stationary was $7.84$N$\cdot$m. We carried out experiments with the following scenarios: $(\tau_1^{\max}, \tau_2^{\max}) \in \{(11,7),(13,5),(11,5) \}$ (N$\cdot$m). ### Solution using AVP-RRT {#sec:pend-res} For simplicity, we used the uni-directional version of AVP-RRT as described in Section \[sec:planning\], without any heuristics.
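As a quick sanity check of the torque figures quoted in the previous subsection, assuming each link is a uniform rod of mass $m$ and length $l$ (so that its center of mass lies at its midpoint) and $g=9.8\,$m/s$^2$:

```python
# Quasi-static torque thresholds for the double pendulum (illustrative
# re-derivation under the uniform-rod assumption stated above).
g, m, l = 9.8, 8.0, 0.2

# (theta1, theta2) = (pi/2, pi): link 1 horizontal, link 2 folded back,
# so both centers of mass lie at horizontal distance l/2 from joint 1.
tau1_qs = g * (m * l / 2 + m * l / 2)   # gravity torque about joint 1

# (theta1, theta2) = (0, pi/2): link 2 horizontal, its center of mass
# at horizontal distance l/2 from joint 2.
tau2_qs = g * m * l / 2                 # gravity torque about joint 2

print(tau1_qs, tau2_qs)  # approximately 15.68 and 7.84 N.m, matching the text
```

Both values agree with $\tau_1^\mathrm{qs}=15.68\,$N$\cdot$m and $7.84\,$N$\cdot$m above, so the torque caps $\tau_1^{\max}\in\{11,13\}$ indeed rule out any quasi-static solution.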
Furthermore, for fair comparison with state-space RRT in Python (see Section \[sec:comparison\]), we used a Python implementation of AVP rather than the C++ implementation contained in the TOPP library [@Pha14tro]. Regarding the number of nearest neighbors to consider, we chose $K=10$. The maximum number of repetitions was set to $N_\mathrm{maxrep}=2000$. Random configurations were sampled uniformly in $[-\pi, \pi]^2$. A simple Euclidean metric in the configuration space was used. Inverse dynamics computations (required by the TOPP algorithm) were performed using OpenRAVE [@Dia10these]. We ran 40 simulations for each value of $(\tau_1^{\max}, \tau_2^{\max})$ on a 2GHz Intel Core Duo computer with 2GB RAM. The results are given in Table \[tab:pend-results\] and Figure \[fig:pendulum\]. A video of some successful trajectories is available at <http://youtu.be/oFyPhI3JN00>.

\[tab:pend-results\]

**AB**\
![Swinging up a fully-actuated double pendulum. A typical solution for the case $(\tau_1^{\max},\tau_2^{\max})=(11,5)$ N$\cdot$m, with trajectory duration 1.88s (see also the attached video). **A**: The tree in the $(\theta_1,\theta_2)$ space. The final path is highlighted in magenta. **B**: Snapshots of the trajectory, taken every 0.1s. Snapshots taken near the beginning of the trajectory are lighter. A video of the movement is available at <http://youtu.be/oFyPhI3JN00>. **C**: Velocity profiles in the $(s,\dot s)$ space. The MVC is in cyan. The various velocity profiles (CLC, $\Phi$, $\Psi$, cf. Section \[sec:avp2\]) are in black. The final, optimal, velocity profile is in dashed blue. The vertical dashed red lines correspond to vertices where 0 is a valid velocity, which allows a discontinuity of the path tangent at that vertex. **D**: Torque profiles. The torques for joints 1 and 2 are respectively in red and in blue. The torque limits are in dotted lines. Note that, in agreement with time-optimal control theory, at each time instant at least one torque limit was saturated (the small overshoots were caused by discretization errors).[]{data-label="fig:pendulum"}](fig/pendulum_2d "fig:"){height="4.7cm"} ![](fig/pendulum_traj "fig:"){height="4.7cm"}\
**CD**\
![](fig/pendulum_phase "fig:"){height="4.7cm"} ![](fig/pendulum_torque "fig:"){height="4.7cm"}

### Comparison with state-space RRT {#sec:comparison}

We compared our implementation of AVP-RRT with the standard state-space RRT [@LK01ijrr] including the $K$-nearest-neighbors heuristic ($K$NN-RRT). More complex kinodynamic planners have been applied to low-DOF systems like the double pendulum, in particular those based on locally linearized dynamics [such as LQR-RRT$^*$ @PerX12icra]. However, such planners require delicate tuning and have not been shown to scale to systems with DOF $\geq$ 4. The goal of the present section is to compare the behavior of AVP-RRT with its RRT counterpart on a low-DOF system. (In particular, we do not claim that AVP-RRT is the best planner for a double pendulum.) We equipped the state-space RRT with generic heuristics that we tuned to the problem at hand, see Appendix A. In particular, we selected the best number of neighbors $K$ for $K\in\{1,10,40,100\}$. Figure \[fig:comp\] and Table \[tab:comp\] summarize the results.

**AB**\
![Comparison of AVP-RRT and $K$NN-RRT. **A**: Percentage of trials that have reached the goal area at given time instants for ${\tau^{\max}}= (11,7)$. **B**: Individual plots for each trial. Each curve shows the distance to the goal as a function of time for a given instance (red: AVP-RRT, blue: RRT-40). Dots indicate the time instants when a trial successfully terminated. Stars show the mean values of termination times. **C** and **D**: same legends as A and B but for ${\tau^{\max}}= (11,5)$.[]{data-label="fig:comp"}](fig/comp-all-11-07 "fig:"){width="6cm"} ![](fig/comp-vip-vs-rrt40-11-07 "fig:"){width="6cm"}\
**CD**\
![](fig/comp-all-11-05 "fig:"){width="6cm"} ![](fig/comp-vip-vs-rrt40-11-05 "fig:"){width="6cm"}

| Planner | Success rate | Search time (min) | Success rate | Search time (min) |
|---------|--------------|-------------------|--------------|-------------------|
| AVP-RRT | 100% | 3.3$\pm$2.6 | 100% | 9.8$\pm$12.1 |
| RRT-1 | 40% | 70.0$\pm$34.1 | 47.5% | 63.8$\pm$36.6 |
| RRT-10 | 82.5% | 53.1$\pm$59.5 | 85% | 56.3$\pm$60.1 |
| RRT-40 | 92.5% | 44.6$\pm$42.6 | 87.5% | 54.6$\pm$52.2 |
| RRT-100 | 82.5% | 88.4$\pm$54.0 | 92.5% | 81.2$\pm$46.7 |

: Comparison of AVP-RRT and $K$NN-RRT. The left pair of data columns corresponds to ${\tau^{\max}}=(11,7)$ and the right pair to ${\tau^{\max}}=(11,5)$. \[tab:comp\]

In the two problem instances, AVP-RRT was respectively 13.4 and 5.6 times faster than the best $K$NN-RRT in terms of search time. We noted however that the search time of AVP-RRT increased significantly from instance $({\tau^{\max}}_1,{\tau^{\max}}_2) = (11,7)$ to instance $({\tau^{\max}}_1,{\tau^{\max}}_2) = (11,5)$, while that of RRT increased only marginally. This may be caused by the “superposition” phenomenon: as torque constraints become tighter, more “pumping” swings are necessary to reach the upright configuration. However, since our metric was only on the configuration-space variables, configurations with different speeds (corresponding to different pumping cycles) may become indistinguishable. While this problem could be addressed by including a measure of reachable velocity intervals into the metric, we chose not to do so in the present paper in order to avoid over-fitting our implementation of AVP-RRT to the problem at hand. Nevertheless, AVP-RRT still significantly outperformed the best $K$NN-RRT.

Non-prehensile object transportation {#sec:bottle}
------------------------------------

Here we consider the non-prehensile (i.e. without grasping) transportation of a bottle, or “waiter motion”.
Non-prehensile transportation can be faster and more efficient than prehensile transportation, since the time-consuming grasping and un-grasping stages are entirely skipped. Moreover, in many applications, the objects to be carried are too soft, fragile or small to be adequately grasped (e.g. food or electronic components).

### Obstruction to quasi-static planning

A plastic milk bottle partially filled with sand was placed (without any fixation device) on a tray. The mass of the bottle was 2.5kg, its height was 24cm (the sand was filled up to 16cm) and its base was a square of size 8cm $\times$ 8cm. The tray was mounted as the end-effector of a 6-DOF serial manipulator (Denso VS-060). The task consisted of bringing the bottle from an initial configuration to a goal configuration, the two configurations being separated by a small opening (see Fig. \[fig:bottle\]A). For the bottle to remain stationary with respect to the tray, the following three conditions must be satisfied:

- (Unilaterality) The normal component $f_n$ of the reaction force must be non-negative;

- (Non-slippage) The tangential component ${\mathbf{f}}_t$ of the reaction force must satisfy $\|{\mathbf{f}}_t\| \leq \mu f_n$, where $\mu$ is the static friction coefficient between the bottle and the tray. In our experimental set-up, the friction coefficient was set to a high value ($\mu=1.7$), such that the non-slippage condition was never violated before the ZMP condition;

- (ZMP) The ZMP of the bottle must lie inside the bottle base [@VukX01humanoids].

The height of the opening was designed so that, for the bottle to go through the opening, it must be tilted by at least an angle $\theta^\mathrm{qs}$. However, when the bottle is tilted by that angle, the center of mass (COM) of the bottle projects outside of the bottle base.
As the projection of the COM coincides with the ZMP in the quasi-static regime, tilting the bottle by the angle $\theta^\mathrm{qs}$ thus violates the ZMP condition and, as a result, the bottle tips over. One can therefore conclude that *no quasi-static motion* can bring the bottle through the opening without tipping it over.

### Solution using AVP-RRT {#solution-using-avp-rrt}

We first reduced the three aforementioned conditions to the form of (\[eq:gen\]). Details of this reduction can be found in @LP14ijcai. We next used the bi-directional version of AVP-RRT presented in Section \[sec:implementation\]. All vertices in the tree were considered for possible connection from a new random configuration, but they were sorted by increasing distance from the new configuration (a simple Euclidean metric in the configuration space was used for the distance computation). As the opening was very small (a narrow passage), we made use of the bridge test [@HsuX03icra] in order to automatically sample a sizable number of configurations inside or close to the opening. Note that the use of the bridge test was natural thanks to path-velocity decomposition. Because of the discrepancy between the planned motion and the motion actually executed on the robot (in particular, actual acceleration switches cannot be infinitely fast), we set the safety boundaries to be a square of size 5.5cm $\times$ 5.5cm (the actual base size was 8cm $\times$ 8cm), which makes the planning problem even harder. Nevertheless, our algorithm was able to find a feasible movement in about 3 hours on a 3.2GHz Intel Core computer with 3.8GB RAM (see Fig. \[fig:bottle\]B–E), and this movement could be successfully executed on the actual robot; see Fig. \[fig:bottle2\] and the video at <http://youtu.be/LdZSjNwpJs0>. Note that the computation time of 3 hours was for a particularly difficult problem instance: if the opening were only 5cm higher, the computation time would be about 2 minutes, see Fig. \[fig:bottle\]F.
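For concreteness, the three contact conditions can be sketched as a single feasibility check. The Python snippet below is purely illustrative: the function name and interface are ours, not the planner's actual code; only the numbers ($\mu=1.7$, the 8cm base, the 5.5cm safety square) are taken from the text.

```python
import numpy as np

def bottle_stationary(f_n, f_t, zmp_xy, mu=1.7, half_base=0.0275):
    """Illustrative check of the three contact conditions from the text.

    f_n       : normal component of the reaction force (N)
    f_t       : tangential component of the reaction force, 2-vector (N)
    zmp_xy    : ZMP position in the tray frame, 2-vector (m)
    mu        : static friction coefficient (1.7 in the experiment)
    half_base : half-width of the square safety region; 0.0275 m matches
                the 5.5cm x 5.5cm safety square used for planning
    """
    unilateral = f_n >= 0.0                                   # contact pushes, never pulls
    no_slip = np.linalg.norm(f_t) <= mu * f_n                 # Coulomb friction cone
    zmp_ok = np.all(np.abs(np.asarray(zmp_xy)) <= half_base)  # ZMP inside the safety square
    return bool(unilateral and no_slip and zmp_ok)

# A tilt that pushes the ZMP outside the safety square fails the check:
print(bottle_stationary(24.0, np.array([2.0, 0.0]), (0.05, 0.0)))  # False
print(bottle_stationary(24.0, np.array([2.0, 0.0]), (0.02, 0.0)))  # True
```

In the planner, such a check would be evaluated along whole candidate trajectories; here it merely classifies a single contact state.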
**A** **B** **C**\
![Non-prehensile transportation of a bottle. **A**: Simulation environment. The robot must bring the bottle to the other side of the opening while keeping it balanced on the tray. **B**: Bi-RRT tree in the workspace: the start tree had 125 vertices and the goal tree had 116 vertices. Red boxes represent the obstacles. Red stars represent the initial and goal positions of the bottle COM. Green lines represent the paths of the bottle COM in the tree. The successful path is highlighted in blue: it had 6 vertices. **C**: MVC and velocity profiles in the $(s,\dot s)$ space. Same legend as in Fig. \[fig:pendulum\]C. **D**: ZMP of the bottle in the tray reference frame (RF) for the successful trajectory. Note that the ZMP always stayed within the imposed safety borders $\pm 2.75$cm (the actual borders were $\pm 4$cm). **E**: COM of the bottle in the tray RF for the successful trajectory. Note that the X-coordinate of the COM reached the maximum value of 4.03cm, around the moment when the bottle went through the opening, indicating that the successful trajectory would not be quasi-statically feasible. **F**: Here we varied the opening height (X-axis, from left to right: higher to lower opening) and determined the average and standard deviation of the computation time (Y-axis, logarithmic scale) required to find a solution. We carried out 30 runs for opening heights from 0.4m to 0.365m, 10 runs for 0.36m, 3 runs for 0.355m and 0.35m, and 2 runs for 0.345m. The red dashed vertical line indicates the critical height below which no quasi-static trajectory was possible. Here, we used $\pm 4$cm as boundaries for the ZMP, so that the computed motions, while theoretically feasible, might not be actually feasible.[]{data-label="fig:bottle"}](fig/scene-simu "fig:"){height="3.25cm"} ![](fig/vertices "fig:"){height="3.25cm"} ![](fig/MVC "fig:"){height="3.25cm"}\
**D** **E** **F**\
![](fig/ZMP "fig:"){width="4.7cm"} ![](fig/COM "fig:"){width="4.7cm"} ![](fig/log_avgrunningtime "fig:"){width="4.7cm"}

![Snapshots of the motion of Fig. \[fig:bottle\]B–E taken every 0.5s. **Top two rows**: front view of the motion in the simulation environment. **Bottom two rows**: side view of the motion executed on the actual robot (see also the video at <http://youtu.be/LdZSjNwpJs0>). []{data-label="fig:bottle2"}](fig/scene-1 "fig:"){width="3.5cm"} ![](fig/scene-2 "fig:"){width="3.5cm"} ![](fig/scene-3 "fig:"){width="3.5cm"} ![](fig/scene-4 "fig:"){width="3.5cm"}\
![](fig/scene-5 "fig:"){width="3.5cm"} ![](fig/scene-6 "fig:"){width="3.5cm"} ![](fig/scene-7 "fig:"){width="3.5cm"} ![](fig/scene-8 "fig:"){width="3.5cm"}\
![](fig/bottle-2 "fig:"){width="3.5cm"} ![](fig/bottle-3 "fig:"){width="3.5cm"} ![](fig/bottle-4 "fig:"){width="3.5cm"} ![](fig/bottle-5 "fig:"){width="3.5cm"}\
![](fig/bottle-6 "fig:"){width="3.5cm"} ![](fig/bottle-7 "fig:"){width="3.5cm"} ![](fig/bottle-8 "fig:"){width="3.5cm"} ![](fig/bottle-9 "fig:"){width="3.5cm"}

### Comparison with OMPL-KPIECE

We were interested in comparing AVP-RRT with a state-of-the-art planner on this bottle-and-tray problem. We chose KPIECE, since it is one of the most generic and powerful existing kinodynamic planners [@SK12tro]. Moreover, a robust open-source implementation exists as a component of the widely used Open Motion Planning Library (OMPL) [@SMK12ram]. The methods and results of the comparison are reported in detail in Appendix \[sec:kpiece\]. Briefly, we first fine-tuned OMPL-KPIECE on the same 6-DOF manipulator model as above. At this stage, we considered only bounds on velocities and accelerations; the bottle and the tray were ignored for simplicity. Next, we compared AVP-RRT (Python/C++) and OMPL-KPIECE (pure C++, with the best possible tunings obtained previously) in an environment similar to that of Fig. \[fig:bottle\]. Here, we considered bounds on velocities and accelerations, as well as collisions with the environment. We ran each planner 20 times with a time limit of 600 seconds. AVP-RRT had a success rate of $100\%$ and an average running time of $68.67$s, while OMPL-KPIECE failed to find any solution in any run. Based on this decisive result, we decided not to try OMPL-KPIECE on the full bottle-and-tray problem.
These comparison results thus further suggest that planning directly in the state space, while interesting from a theoretical viewpoint and successful in simulations and/or on custom-built systems, is unlikely to scale to practical high-DOF problems.

Discussion {#sec:discussion}
==========

We have presented a new algorithm, Admissible Velocity Propagation (AVP), which, given a path and an interval of reachable velocities at the beginning of that path, computes exactly and efficiently the interval of valid final velocities. We have shown how to combine AVP with well-known sampling-based geometric planners to give rise to a family of new, efficient kinodynamic planners, which we have evaluated on two difficult kinodynamic problems.

#### Comparison to existing approaches to kinodynamic planning

Compared to traditional planners based on path-velocity decomposition, our planners remove the limitation of quasi-static feasibility, precisely by propagating admissible velocity intervals at each step of the tree extension. This enables our planners to find solutions when quasi-static trajectories are guaranteed to fail, as illustrated by the two examples of Section \[sec:applications\]. Compared to other approaches to kinodynamic planning, our approach enjoys the advantages associated with path-velocity decomposition, namely, the separation of the complex planning problem into two simpler sub-problems – geometric and dynamic – for both of which powerful methods and heuristics have been developed. The bottle transportation example in Section \[sec:bottle\] clearly illustrates this advantage. To address the narrow passage constituted by the small opening, we made use of the bridge test heuristic – initially developed for geometric path planners [@HsuX03icra] – which provides a large number of samples inside the narrow passage. It is unclear how such a method could be integrated into the “trajectory optimization” approach, for example.
Next, to steer between two configurations, we simply interpolated a geometric path – and could check for collisions at this stage – and then found possible *trajectories* by running AVP. By contrast, in a “state-space planning” approach, it would be difficult – if not impossible – to steer exactly between two *states* of the system, which requires, for instance, solving a two-point boundary value problem. To avoid solving such difficult problems, @LK01ijrr [@HsuX02ijrr] proposed to sample a large number of time-series of random control inputs and to choose the time-series that steers the system closest to the target state. However, such shooting methods are usually considerably slower than “exact” methods – which is the case of AVP – as also illustrated in our simulation study (see Section \[sec:pendulum\] and Appendices \[sec:knnrrt\], \[sec:kpiece\]).

#### Class of systems where AVP is applicable

Since AVP is adapted from TOPP, AVP can handle all systems and constraints that TOPP can handle, and only those. Essentially, TOPP can be applied to a path ${\mathbf{q}}(s)$ in the configuration space if the system can track that path at any velocities $\dot s$ and accelerations $\ddot s$, subject only to *inequality constraints* on $\dot s$ and $\ddot s$. This excludes – *a priori* – all under-actuated robots since, for these robots, most paths in the configuration space cannot be traversed at all [@Lau98book], or only at *one* specific velocity. @BL01tac identified a subclass of under-actuated robots (including e.g. planar 3-DOF manipulators with one passive joint or 3D underwater vehicles with three off-center thrusters) for which one can compute a large subset of paths that can be TOPP-ed (termed “kinematic motions”). Investigating whether AVP-RRT can be applied to such systems is the subject of ongoing research.
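To make the contrast concrete, here is a minimal sketch of one AVP-RRT extension step, with toy stand-ins for the path interpolation and the AVP routine. All names, the scalar configuration space and the interval-shrinking stand-in are illustrative assumptions, not the actual implementation.

```python
def interpolate_path(q_near, q_rand):
    """Toy stand-in for the geometric steering step: exact by construction
    (the real planner interpolates a smooth, collision-checked path)."""
    return [q_near, q_rand]

def avp(path, sdot_interval):
    """Toy stand-in for AVP: propagate an interval of path velocities.
    Here we simply shrink the interval; the real AVP integrates the
    limiting curves of the TOPP problem.  Returns None if no final
    velocity is admissible."""
    lo, hi = sdot_interval
    hi *= 0.9
    return (lo, hi) if hi > lo else None

def extend(tree, q_rand):
    """One AVP-RRT extension: steer with a *path* (no two-point boundary
    value problem, no random-control shooting), then explore all of its
    time-parameterizations at once via AVP."""
    v_near = min(tree, key=lambda v: abs(v["q"] - q_rand))  # nearest vertex
    path = interpolate_path(v_near["q"], q_rand)
    interval = avp(path, v_near["interval"])
    if interval is None:
        return None                                         # extension failed
    v_new = {"q": q_rand, "inpath": path, "interval": interval}
    tree.append(v_new)
    return v_new

tree = [{"q": 0.0, "inpath": None, "interval": (0.0, 1.0)}]
extend(tree, 0.5)
print(len(tree))  # 2
```

A shooting-based extension would instead sample many random control sequences and keep the one ending closest to the target state; the step above replaces that search with one interpolation and one interval propagation.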
At the other end of the spectrum, redundantly-actuated robots can track most of the paths in their configuration space (again, subject to actuation bounds). The problem here is that, for a given admissible velocity profile along a path, there exist in general infinitely many combinations of torques that can achieve that velocity profile. @PS15tmech showed how to optimally exploit actuation redundancy in TOPP, and this can be adapted straightforwardly to AVP-RRT.

#### Further remarks on completeness and complexity

The AVP-RRT planner as presented in Section \[sec:planning\] is likely *not probabilistically complete*. We address in more detail in Appendix \[sec:completeness\] the completeness properties of AVP-RRT and, more generally, of AVP-based planners. We now discuss another feature of AVP-based planners that makes them interesting from a complexity viewpoint. Consider a trajectory or a trajectory segment that is “explored” by a state-space planning or a trajectory optimization method – either in one extension step for the former, or in an iterative optimization step for the latter. If one considers the *underlying path* of this trajectory, one may argue that these methods explore only *one* time-parameterization of that path, namely, the one corresponding to the trajectory at hand. By contrast, for a given path that is “explored” by AVP, AVP explores *all* time-parameterizations of that path – in other words, the whole *“fiber bundle”* of path velocities above the path at hand – at a computational cost only slightly higher than that of checking a *single* time-parameterization (see Section \[sec:avpremarks\]). Granted that path velocity encodes important information about possible violations of the dynamics constraints, as argued in the Introduction, this full and free (as in free beer) exploration enables significant performance gains.
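As a purely conceptual illustration of the “fiber bundle” argument – not of the actual AVP computation, which integrates limiting acceleration curves – the snippet below contrasts testing one velocity profile against a made-up maximum-velocity curve with reasoning about the whole admissible band at once. All quantities are invented for illustration.

```python
import numpy as np

# Toy model: a fixed path parameterized by s in [0, 1], with a made-up
# maximum-velocity curve sdot_max(s).  The real MVC comes from the
# dynamics constraints; this one is arbitrary.
s = np.linspace(0.0, 1.0, 201)
sdot_max = 2.0 + np.sin(6.0 * s)

def check_one_profile(sdot):
    """'One parameterization at a time' viewpoint: test a single velocity
    profile sdot(s) against the curve."""
    return bool(np.all(sdot <= sdot_max))

# 'Whole band' viewpoint: in this toy model every profile below the curve
# is admissible, so the reachable final velocities form one interval that
# is read off directly, instead of being discovered profile by profile:
final_interval = (0.0, float(sdot_max[-1]))

print(check_one_profile(0.5 * sdot_max))  # True: this single profile fits
```

The point of the toy is only the bookkeeping: one call summarizes the entire family of profiles, which is the complexity advantage claimed above.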
#### Future works

As mentioned above, we have recently extended TOPP to redundantly-actuated systems, including humanoid robots in multi-contact tasks [@PS15tmech]. This enables AVP-based planners to be applied to multi-contact planning for humanoid robots. In this application, the existence of kinematic closure constraints (the parts of the robot in contact with the environment should remain fixed) makes path-velocity decomposition highly appealing, since these constraints can be handled by a kinematic planner independently from the dynamic constraints (torque limits, balance, etc.). In a preliminary experiment, we have planned a non-quasi-statically-feasible but dynamically-feasible motion for a humanoid robot (see <http://youtu.be/PkDSHodmvxY>). Going further, we are currently investigating how AVP-based planners can enable existing quasi-static multi-contact planning methods [@HauX08ijrr; @EscX13ras] to discover truly dynamic motions for humanoid robots with multiple contact changes.

Acknowledgments {#acknowledgments .unnumbered}
---------------

We are grateful to Prof. Zvi Shiller for inspiring discussions about the TOPP algorithm and kinodynamic planning. This work was supported by a JSPS postdoctoral fellowship, by a Start-Up Grant from NTU, Singapore, and by a Tier 1 grant from MOE, Singapore.
Probabilistic completeness of AVP-based planners {#sec:completeness}
================================================

Essentially, the probabilistic completeness of AVP-based planners relies on two properties: the completeness of the path sampling process (Property 1) and the completeness of velocity propagation (Property 2). Property 1 : any smooth path ${\mathcal{P}}$ in the configuration space will be approximated arbitrarily closely by the sampling process for a sufficiently large number of samples; Property 2 : if a smooth path $\widehat{\mathcal{P}}=\hat{\mathbf{q}}(s)_{s\in[0,1]}$ obtained by the sampling process can be time-parameterized into a solution trajectory according to a certain velocity profile $\hat v$, then $\hat v$ is contained within the velocity band propagated by AVP. We first discuss the conditions under which these two Properties are verified, and then establish the completeness of AVP-based planners. Let $d$ designate an ${\mathcal{L}}^\infty$-type distance between two trajectories or between two paths: $$\begin{aligned} \label{eq:Pi} d(\Pi,\widehat\Pi)&{\stackrel{\mathrm{def}}{=}}&\sup_{t\in[0,T]}\|\Pi(t)-\widehat\Pi(t)\|,\\ \label{eq:cP} d({\mathcal{P}},\widehat{\mathcal{P}})&{\stackrel{\mathrm{def}}{=}}&\sup_{s\in[0,1]}\|{\mathbf{q}}(s)-\widehat{\mathbf{q}}(s)\|+ \sup_{s\in[0,1]}\|{\mathbf{u}}(s)-\widehat{\mathbf{u}}(s)\|, \end{aligned}$$ where ${\mathbf{u}}(s)$ is the unit tangent vector to ${\mathcal{P}}$ at $s$. Property 1 holds under the following hypotheses on the sampling process: H1 : each time a random configuration ${\mathbf{q}}_{\mathrm{rand}}$ is sampled, consider the set ${\mathcal{S}}$ of existing vertices within a distance $\delta>0$ of ${\mathbf{q}}_{\mathrm{rand}}$ in the configuration space.
Select $K$ random vertices within ${\mathcal{S}}$, where $K$ is proportional to the number of vertices currently existing in the tree, and attempt to connect these vertices to ${\mathbf{q}}_{\mathrm{rand}}$ through the usual interpolation and AVP procedures. For each successful connection, create a new vertex $V_{\mathrm{new}}$, which has the same configuration ${\mathbf{q}}_{\mathrm{rand}}$ but a different “inpath” and a different “interval”, depending on the parent vertex in ${\mathcal{S}}$[^5]; H2 : consider the path interpolation from $({\mathbf{u}}_1,{\mathbf{q}}_1)$ to ${\mathbf{q}}_2$. The unit vector ${\mathbf{u}}_2$ at the end of the interpolated path ${\mathcal{P}}_\mathrm{int}$ is set to be the unit vector pointing from ${\mathbf{q}}_1$ to ${\mathbf{q}}_2$, denoted ${\mathbf{u}}_{{\mathbf{q}}_1\to{\mathbf{q}}_2}$[^6]; H3 : for every $\Delta>0$, there exists $\eta>0$ such that, if $\|{\mathbf{u}}_1-{\mathbf{u}}_{{\mathbf{q}}_1\to{\mathbf{q}}_2}\|<\eta$, then $d({\mathcal{P}}_\mathrm{int},{\mathcal{P}}_\mathrm{str}({\mathbf{q}}_1,{\mathbf{q}}_2))<\Delta/3$, where ${\mathcal{P}}_\mathrm{str}({\mathbf{q}}_1,{\mathbf{q}}_2)$ is the straight segment joining ${\mathbf{q}}_1$ to ${\mathbf{q}}_2$[^7]. **Proof** Consider a smooth path ${\mathcal{P}}={\mathbf{q}}(s)_{s\in[0,1]}$ in the configuration space such that ${\mathbf{q}}(0)={\mathbf{q}}_{\mathrm{start}}$. Since ${\mathcal{P}}$ is smooth, for $s_1$ and $s_2$ close enough, the path segment between $s_1$ and $s_2$ will look like a straight line; see Fig. \[fig:complete\]B.
This intuition can be more formally stated as follows: consider an arbitrary $\Delta>0$, - there exists a $\delta_1$ such that, if $\|{\mathbf{q}}(s_2)-{\mathbf{q}}(s_1)\|\leq \delta_1$, then $$\label{eq:fact1} d({\mathbf{q}}(s)_{s\in[s_1,s_2]},{\mathcal{P}}_\mathrm{str}({\mathbf{q}}(s_1),{\mathbf{q}}(s_2)))<\Delta/3;$$ - there exists $\delta_2$ such that, if $\|{\mathbf{q}}(s_2)-{\mathbf{q}}(s_1)\|\leq \delta_2$, then $$\label{eq:fact2} \|{\mathbf{u}}(s_1)-{\mathbf{u}}_{{\mathbf{q}}(s_1)\to{\mathbf{q}}(s_2)}\|<\eta/6 \quad\textrm{and}\quad \|{\mathbf{u}}(s_2)-{\mathbf{u}}_{{\mathbf{q}}(s_1)\to{\mathbf{q}}(s_2)}\|<\eta/6,$$ where $\eta$ is defined in (H3). **AB**\ ![Completeness of AVP-RRT. **A**: Existence of an admissible velocity profile above an approximated path. **B**: Approximation of a given smooth path.[]{data-label="fig:complete"}](fig/complete2 "fig:"){height="3.5cm"} ![](fig/complete "fig:"){height="3.5cm"} Now divide the path ${\mathcal{P}}$ into $n$ subpaths ${\mathcal{P}}_1$,…,${\mathcal{P}}_n$ of lengths approximately $\delta{\stackrel{\mathrm{def}}{=}}\min(\delta_1,\delta_2)$. Let ${\mathbf{q}}_i,{\mathbf{u}}_i$ denote the starting configuration and unit tangent vector of subpath ${\mathcal{P}}_i$. Consider the balls $B_i$ centered on the ${\mathbf{q}}_i$ and having radius $\epsilon$, where $\epsilon{\stackrel{\mathrm{def}}{=}}\frac{\eta\delta}{12}$.
With probability 1, there will exist a time in the sampling process when (s1) : $n$ *consecutive* random configurations $\hat{\mathbf{q}}_1,\dots,\hat{\mathbf{q}}_n$ are sampled in $B_1,\dots,B_n$ respectively; (s2) : ${\mathbf{q}}_{\mathrm{start}}$ is selected for connection attempt towards $\hat{\mathbf{q}}_1$, and the random ${\mathbf{u}}_{\mathrm{start}}$ satisfies $\|{\mathbf{u}}_{\mathrm{start}}-{\mathbf{u}}_{{\mathbf{q}}_{\mathrm{start}}\to{\mathbf{q}}_1}\|<2\eta/3$. The interpolation results in a new vertex $V_1$ and a new subpath $\widehat{\mathcal{P}}_1$ connecting ${\mathbf{q}}_{\mathrm{start}}$ to $V_1$; (s3) : for $i\in[1,n-1]$, $V_i$ is selected for connection attempt to $\hat{\mathbf{q}}_{i+1}$, resulting in a new vertex $V_{i+1}$ and a new subpath $\widehat{\mathcal{P}}_{i+1}$ connecting $V_i$ to $V_{i+1}$. Note that (s2) and (s3) are possible since, by (H1), the number of connection attempts $K$ grows linearly with the number of existing vertices in the tree. We first prove that, for all $i\in[0,n]$, we have $\|\hat{\mathbf{u}}_i-{\mathbf{u}}_i\|<2\eta/3$. At rank $0$, the property is true owing to (s2). For $i\geq 1$, we have - $\|\hat{\mathbf{u}}_i-{\mathbf{u}}_{\hat{\mathbf{q}}_{i-1}\to\hat{\mathbf{q}}_i}\|=0$ by (H2); - $\|{\mathbf{u}}_{\hat{\mathbf{q}}_{i-1}\to\hat{\mathbf{q}}_i}-{\mathbf{u}}_{{\mathbf{q}}_{i-1}\to{\mathbf{q}}_i}\|< 2\epsilon/\delta=\eta/6$ by the fact that each $\hat{\mathbf{q}}_i$ is contained in the ball $B_i$; - $\|{\mathbf{u}}_{{\mathbf{q}}_{i-1}\to{\mathbf{q}}_i}-{\mathbf{u}}_i\|<\eta/6$ by (\[eq:fact2\]). Applying the triangle inequality yields $\|\hat{\mathbf{u}}_i-{\mathbf{u}}_i\|<2\eta/3$. Next, we prove for all $i\in[0,n-1]$ that $d(\widehat{{\mathcal{P}}}_i,{\mathcal{P}}_i)<\Delta$.
Note that - $\|\hat{\mathbf{u}}_i-{\mathbf{u}}_i\|<2\eta/3$ by the above reasoning; - $\|{\mathbf{u}}_i-{\mathbf{u}}_{{\mathbf{q}}_{i}\to{\mathbf{q}}_{i+1}}\|<\eta/6$ by (\[eq:fact2\]); - $\|{\mathbf{u}}_{{\mathbf{q}}_{i}\to{\mathbf{q}}_{i+1}}-{\mathbf{u}}_{\hat{\mathbf{q}}_{i}\to\hat{\mathbf{q}}_{i+1}}\|< 2\epsilon/\delta=\eta/6$ by the fact that each $\hat{\mathbf{q}}_i$ is contained in the ball $B_i$. Thus, by the triangle inequality, we have $\|\hat{\mathbf{u}}_i-{\mathbf{u}}_{\hat{\mathbf{q}}_{i}\to\hat{\mathbf{q}}_{i+1}}\|<\eta$. By (H3), we have $d(\widehat{\mathcal{P}}_i,{\mathcal{P}}_\mathrm{str}(\hat{\mathbf{q}}_i,\hat{\mathbf{q}}_{i+1}))<\Delta/3$. Next, $d({\mathcal{P}}_\mathrm{str}(\hat{\mathbf{q}}_i,\hat{\mathbf{q}}_{i+1}),{\mathcal{P}}_\mathrm{str}({\mathbf{q}}_i,{\mathbf{q}}_{i+1}))$ can be made smaller than $\Delta/3$ for judicious choices of $\epsilon$ and $\delta$. Finally, we have $d({\mathcal{P}}_i,{\mathcal{P}}_\mathrm{str}({\mathbf{q}}_i,{\mathbf{q}}_{i+1}))<\Delta/3$ by (\[eq:fact1\]). Applying the triangle inequality again, we obtain $d(\widehat{{\mathcal{P}}}_i,{\mathcal{P}}_i)<\Delta$. $\Box$ Property 2 is true. **Proof** Consider a path $\widehat{\mathcal{P}}$ obtained by the sampling process, i.e. $\widehat{\mathcal{P}}$ is composed of $n$ interpolated path segments $\widehat{\mathcal{P}}_1$,…,$\widehat{\mathcal{P}}_n$. Let $v_1$,…,$v_n$ be the corresponding subdivisions of the associated velocity profile $v$. We prove by induction on $i\in[0,n]$ that the concatenated profile $[v_1,\dots,v_i]$ is contained within the velocity band propagated by AVP. For $i=0$, i.e., at the start vertex, $v(0)=0$ is contained within the initial velocity band, which is $[0,0]$. Assume that the statement holds at $i$. This implies in particular that the final value of $v_i$, which is also the initial value of $v_{i+1}$, belongs to $[v_{\min},v_{\max}]$, where $(v_{\min},v_{\max})$ are the values returned by AVP at step $i$.
Next, consider the velocity band that AVP propagates at step $i+1$ from $[v_{\min},v_{\max}]$. Since $v_{i+1}(0)\in[v_{\min},v_{\max}]$ and $v_{i+1}$ is continuous, the whole profile $v_{i+1}$ will be contained, by construction, in the velocity band propagated by AVP. $\Box$ We can now prove the probabilistic completeness for a class of AVP-based planners. \[theo:complete\] An AVP-based planner that verifies Properties 1 and 2 is probabilistically complete. **Proof** Assume that there exists a smooth state-space trajectory $\Pi$ that solves the query, with $\Delta$-clearance in the state space, i.e., every smooth trajectory $\widehat\Pi$ such that $d(\Pi,\widehat\Pi)\leq \Delta$ also solves the query[^8]. Let ${\mathcal{P}}$ be the underlying path of $\Pi$ in the configuration space. By Property 1, with probability 1, there exists a time when the sampling process will generate a smooth path $\widehat{\mathcal{P}}$ such that $d({\mathcal{P}},\widehat{\mathcal{P}})\leq\Delta/2$. One can then construct, by continuity, a velocity profile $\hat v$ above $\widehat{\mathcal{P}}$, such that the time-parameterization of $\widehat{\mathcal{P}}$ according to $\hat v$ yields a trajectory $\widehat\Pi$ within a radius $\Delta$ of $\Pi$ (see Fig. \[fig:complete\]A). As $\Pi$ has $\Delta$-clearance, $\widehat\Pi$ also solves the query. Thus, by Property 2, the velocity profile (or time-parameterization) $\hat v$ must be contained within the velocity band propagated by AVP, which finally implies that $\widehat{\mathcal{P}}$ can be successfully time-parameterized in the last step of the planner. $\Box$ Comparison of AVP-RRT with $K$NN-RRT on a 2-DOF pendulum {#sec:knnrrt} ======================================================== Here, we detail the implementation of the standard state-space planner $K$NN-RRT and the comparison of this planner with AVP-RRT. The full source code for this comparison is available at <https://github.com/stephane-caron/avp-rrt-rss-2013>.
Note that, for fairness, all the algorithms considered here were implemented in Python (including AVP). Thus, the presented computation times, in particular those of AVP-RRT, should not be considered in absolute terms. $K$NN-RRT --------- ### Overall algorithm {#subsec:gen} Our implementation of RRT in the state-space [@LK01ijrr] is detailed in Boxes \[algo:rrt-annex\] and \[algo:extend-annex\]. ${\mathcal{T}}$.INITIALIZE(${\mathbf{x}}_{\mathrm{init}}$) ${\mathbf{x}}_{\mathrm{rand}}\leftarrow$ RANDOM\_STATE() **if** mod(rep,5) $\neq 0$ **else** ${\mathbf{x}}_{\mathrm{goal}}$ ${\mathbf{x}}_{\mathrm{new}}\leftarrow$ EXTEND(${\mathcal{T}},{\mathbf{x}}_{\mathrm{rand}}$) ${\mathcal{T}}$.ADD\_VERTEX(${\mathbf{x}}_{\mathrm{new}}$) ${\mathbf{x}}_{{\mathrm{new}}2} \leftarrow$ EXTEND(${\mathbf{x}}_{\mathrm{new}},{\mathbf{x}}_{\mathrm{goal}}$) Success Failure ${\mathbf{x}}_{\mathrm{near}}^k\leftarrow$ KTH\_NEAREST\_NEIGHBOR(${\mathcal{T}},{\mathbf{x}}_{\mathrm{rand}},k$) ${\mathbf{x}}_{\mathrm{new}}^k\leftarrow$ STEER(${\mathbf{x}}_{\mathrm{near}}^k,{\mathbf{x}}_{\mathrm{rand}}$) $\arg\min_k d({\mathbf{x}}_{\mathrm{new}}^k,{\mathbf{x}}_{\mathrm{rand}})$ #### Steer-to-goal frequency We assessed the efficiency of the following strategy: every five extension attempts, try to steer directly to ${\mathbf{x}}_{\mathrm{goal}}$ (by setting ${\mathbf{x}}_{\mathrm{rand}}= {\mathbf{x}}_{\mathrm{goal}}$ on line 3 of Box \[algo:rrt-annex\]). See also the discussion in @LK01ijrr, p. 387, about the use of uni-directional and bi-directional RRTs. We observed that the choice of the steer-to-goal frequency (every 5, 10, etc., extension attempts) did not significantly alter the performance of the algorithm, except when it is too high (e.g., once every two extension attempts).
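The overall loop of Box \[algo:rrt-annex\], including the steer-to-goal attempt every fifth iteration, can be sketched in Python as follows. This is a minimal illustration, not the actual implementation: the function names and signatures (`extend`, `reached_goal`, `sample_state`) are placeholders for the procedures described in the text.

```python
import random

def rrt_plan(x_init, x_goal, max_iters, extend, reached_goal, sample_state):
    """Sketch of the state-space RRT loop: sample a random state (or the
    goal, every 5th iteration), extend the tree towards it, and stop as
    soon as a new vertex lands in the goal area."""
    tree = [x_init]
    for rep in range(1, max_iters + 1):
        # Steer-to-goal strategy: every 5 extension attempts, aim at the goal.
        x_rand = sample_state() if rep % 5 != 0 else x_goal
        x_new = extend(tree, x_rand)
        if x_new is not None:
            tree.append(x_new)
            if reached_goal(x_new, x_goal):
                return tree  # success
    return None  # failure
```

Here `extend` stands for the EXTEND procedure of Box \[algo:extend-annex\] and `reached_goal` for the termination test based on the metric $d$.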
#### Metric The metric for the neighbors search in EXTEND (Box \[algo:extend-annex\]) and to assess whether the goal has been reached (line 7 of Box \[algo:rrt-annex\]) was defined as: $$\begin{aligned} \label{eq:d} d({\mathbf{x}}_a,{\mathbf{x}}_b) &=& d\left(({\mathbf{q}}_a,{\mathbf{v}}_a),({\mathbf{q}}_b,{\mathbf{v}}_b)\right)\nonumber\\ &=& \frac{\sum_{j=1,2}{\sqrt{1-\cos({{\mathbf{q}}_a}_j-{{\mathbf{q}}_b}_j)}}}{4} + \frac{\sum_{j=1,2}|{{\mathbf{v}}_a}_j-{{\mathbf{v}}_b}_j|} {4V_{\max}},\end{aligned}$$ where $V_{\max}$ denotes the maximum velocity bound set in the random sampler (function RANDOM\_STATE() in Box \[algo:rrt-annex\]). This simple metric is similar to a Euclidean metric but takes into account the periodicity of the joint values. #### Termination condition We defined the goal area as a ball of radius $\epsilon = 10^{-2}$ for the metric around the goal state ${\mathbf{x}}_{\mathrm{goal}}$. As an example, $d({\mathbf{x}}_a, {\mathbf{x}}_b)=\epsilon$ corresponds to a maximum angular difference of $\Delta q_1 \approx 0.057$ rad $\approx~$3.24 degrees in the first joint. This choice is connected to that of the integration time step (used in Forward Dynamics computations in section \[sec:local\]), which we set to $\delta t = 0.01$ s. Indeed, the average angular velocities we observed in our benchmark were around $\bar{V}~=~5$ rad.s$^{-1}$ for the first joint, which corresponds to an average instantaneous displacement $\bar{V} \cdot \delta t \approx 5 \times 10^{-2}$ rad of the same order as $\Delta q_1$ above. #### Nearest-neighbor heuristic Instead of considering only extensions from the nearest neighbor, as has commonly been done, we considered the “best” extension from the $K$ nearest neighbors (line 5 in Box \[algo:extend-annex\]), i.e. the extension yielding the state closest to ${\mathbf{x}}_{\mathrm{rand}}$ for the metric $d$ (Equation \[eq:d\]).
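The metric of Equation \[eq:d\] translates directly into code; below is a sketch for the 2-DOF case. The state layout `(q, v)` and the function name are illustrative choices, not the actual implementation.

```python
import math

V_MAX = 50.0  # rad/s, velocity bound used by the random sampler

def distance(xa, xb, v_max=V_MAX):
    """Sketch of the metric of Eq. (eq:d) for the 2-DOF pendulum: the
    1 - cos term accounts for joint-angle periodicity, and the velocity
    term is a scaled L1 difference."""
    (qa, va), (qb, vb) = xa, xb
    # max(0.0, ...) guards against tiny negative values from rounding
    angle_term = sum(math.sqrt(max(0.0, 1.0 - math.cos(qa[j] - qb[j])))
                     for j in range(2)) / 4.0
    vel_term = sum(abs(va[j] - vb[j]) for j in range(2)) / (4.0 * v_max)
    return angle_term + vel_term
```

Note that two states whose joint angles differ by $2\pi$ are at distance zero, as intended by the periodic angle term.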
### Local steering {#sec:local} Regarding the local steering scheme (STEER on line 3 of Box \[algo:extend-annex\]), there are two main approaches, corresponding to the two sides of the equation of motion: state-based and control-based steering [@CarX14icra]. #### Control-based steering In this approach, a control input $\tau(t)$ is computed first; the corresponding trajectory is then obtained by forward dynamics. Because $\tau(t)$ is computed beforehand, there is no direct control on the end-state of the trajectory. To mitigate this, the function $\tau(t)$ is then updated, with or without feedback on the end-state, until some satisfactory result is obtained or a computation budget is exhausted. For example, in works such as @LK01ijrr [@HsuX02ijrr], random functions $u$ are sampled from the set of piecewise-constant functions. A number of them are tried and only the one bringing the system closest to the target is retained. Linear-Quadratic Regulation [@PerX12icra; @Ted09rss] is another example of control-based steering where the function $u$ is computed as the optimal policy for a linear approximation of the system dynamics (given a quadratic cost function). In the present work, we followed the control-based approach from @LK01ijrr [@HsuX02ijrr], as described by Box \[algo:steer\]. The random control is a stationary $(\tau_1, \tau_2)$ sampled as: $$(\tau_1, \tau_2)\ \sim\ {\cal U}([-{\tau^{\max}}_1, {\tau^{\max}}_1] \times [-{\tau^{\max}}_2, {\tau^{\max}}_2]),$$ where $\cal U$ denotes uniform sampling from a set. The random time duration $\Delta t$ is sampled uniformly in $[\delta t, \Delta t_{\max}]$ where $\Delta t_{\max}$ is the maximum duration of local trajectories (parameter to be tuned), and $\delta t$ is the time step for the forward dynamics integration (set to $\delta t = 0.01$ s as discussed in Section \[subsec:gen\]). The number of local trajectories to be tested, $N_\mathrm{local\_trajs}$, is also a parameter to be tuned.
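The control sampling just described (a stationary random torque and a random duration, keeping the best of $N_\mathrm{local\_trajs}$ forward simulations) can be sketched as follows. The `forward_dynamics` and `distance` arguments are assumed callbacks standing in for the actual pendulum dynamics and the metric $d$; this is an illustrative sketch, not the authors' code.

```python
import random

def steer(x_near, x_rand, forward_dynamics, distance,
          tau_max=(13.0, 7.0), dt=0.01, dt_max=1.0, n_local_trajs=20):
    """Control-based steering sketch: sample stationary torques and
    durations, integrate forward dynamics, and return the end-state
    closest to x_rand under the supplied metric."""
    best, best_d = None, float("inf")
    for _ in range(n_local_trajs):
        # Stationary torque drawn uniformly from the admissible box.
        u = (random.uniform(-tau_max[0], tau_max[0]),
             random.uniform(-tau_max[1], tau_max[1]))
        # Duration drawn uniformly in [dt, dt_max].
        duration = random.uniform(dt, dt_max)
        x_end = forward_dynamics(x_near, u, duration)
        d = distance(x_end, x_rand)
        if d < best_d:
            best, best_d = x_end, d
    return best
```

The default parameter values mirror those quoted in the text ($\delta t = 0.01$ s, slack torque limits, $N_\mathrm{local\_trajs} = 20$).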
${\mathbf{u}}\leftarrow$ RANDOM\_CONTROL(${\tau^{\max}}_1,{\tau^{\max}}_2$) $\Delta t \leftarrow$ RANDOM\_DURATION($\Delta t_{\max}$) ${\mathbf{x}}^p\leftarrow$ FORWARD\_DYNAMICS(${\mathbf{x}}_{\mathrm{near}},{\mathbf{u}},\Delta t$) $\mathrm{argmin}_p d({\mathbf{x}}^p,{\mathbf{x}}_{\mathrm{rand}})$ #### State-based steering In this approach, a trajectory $\wq(t)$ is computed first. For instance, $\wq$ can be a Bezier curve matching the initial and target configurations and velocities. The next step is then to compute a control that makes the system track it. For fully- or over-actuated systems, this can be done using inverse dynamics. If no suitable controls exist, the trajectory is rejected. Note that both the space $\Im(\wq)$ and timing $t$ impact the dynamics of the system, and therefore the existence of admissible controls. Bezier curves or B-splines will conveniently solve the spatial part of the problem, but their timing is arbitrary, which tends to result in invalid controls and must be handled carefully. To enable meaningful comparisons with AVP-RRT, we considered the simple state-based steering described in Box \[algo:state-based\]. Trying to design the best possible nonlinear controller for the double pendulum would be out of the scope of this work, as it would imply either problem-specific tunings or substantial modifications to the core RRT algorithm [as done e.g. in @PerX12icra]. $\wq \leftarrow \INTERPOLATE(T, {\mathbf{x}}_{\mathrm{near}}, {\mathbf{x}}_{\mathrm{rand}})$ $\wtau := \textrm{INVERSE\_DYNAMICS}(\wq(t), \wqd(t), \wqdd(t))$ $t^\dagger = \sup\{t | |\wtau(t)| \leq {\tau^{\max}}\}$ $\wq(t^\dagger)$ failure Here, $\INTERPOLATE(T, {\mathbf{x}}_a, {\mathbf{x}}_b)$ returns a third-order polynomial $P_i(t)$ such that $P_i(0)={\mathbf{q}}_{ai},\ P'_i(0)={\mathbf{v}}_{ai},\ P_i(T)={\mathbf{q}}_{bi},\ P_i'(T)={\mathbf{v}}_{bi}$, and our local planner tries 10 different values of $T$ between 0.01 s and 2 s.
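The third-order polynomial returned by INTERPOLATE follows in closed form from the four boundary conditions $P_i(0)={\mathbf{q}}_{ai}$, $P'_i(0)={\mathbf{v}}_{ai}$, $P_i(T)={\mathbf{q}}_{bi}$, $P'_i(T)={\mathbf{v}}_{bi}$. A per-joint sketch (helper name illustrative, not the actual implementation):

```python
def interpolate(T, qa, va, qb, vb):
    """Cubic P(t) = qa + va*t + a2*t^2 + a3*t^3 with P(0)=qa, P'(0)=va,
    P(T)=qb, P'(T)=vb (one joint; apply per joint for the pendulum).
    Returns the polynomial and its derivative as callables."""
    d = qb - qa
    # Coefficients obtained by solving the two boundary conditions at t=T.
    a2 = (3.0 * d - (2.0 * va + vb) * T) / T**2
    a3 = (-2.0 * d + (va + vb) * T) / T**3
    def P(t):
        return qa + va * t + a2 * t**2 + a3 * t**3
    def Pdot(t):
        return va + 2.0 * a2 * t + 3.0 * a3 * t**2
    return P, Pdot
```

The local planner would then evaluate such an interpolation for several values of $T$ and keep, via inverse dynamics, only the portion with admissible torques.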
We use inverse dynamics at each time step of the trajectory to check if a control $\wtau(t)$ is within torque limits. The trajectory is cut at the first inadmissible control. #### Comparing the two approaches On the pendulum, state-based steering yielded RRTs with slower exploration speeds compared to control-based steering, as illustrated in Figure \[fig:steer\]. This slowness is likely due to the uniform sampling in a wide velocity range $[-V_{\max}, V_{\max}]$, which resulted in a large fraction of trajectories exceeding torque limits. However, despite a better exploration of the state space, trajectories from control-based steering systematically ended outside of the goal area. To mitigate this, we added a subsequent step: from each state reached by control-based steering, connect to the goal area using state-based steering. Thus, if a state is reached that is not in the goal area but from which steering to goal is easy, this last step will take care of the final connection. However, this patch only marginally improved the success rate of the planner. In practice, trajectories from control-based steering tend to end at energetic states from which steering to goal is difficult. As such, we found that this steering approach was not performing well on the pendulum and turned to state-based steering. ![ Comparison of control-based and state-based steering for $K=1$ (left-top), $K=10$ (right-top), $K=40$ (left-bottom) and $K=100$ (right-bottom). Computation time is fixed, which explains why there are more points for small values of $K$. The X-axis represents the angle of the first joint and the Y-axis its velocity. The trees grown by the state-based and control-based methods are in red and blue, respectively. The goal area is depicted by the red ellipse on the left side. Control-based steering yields better exploration of the state space, but fails to connect to the goal area.
[]{data-label="fig:steer"}](fig/k1 "fig:"){width="7cm" height="4cm"} ![](fig/k10 "fig:"){width="7cm" height="4cm"}\ ![](fig/k40 "fig:"){width="7cm" height="4cm"} ![
[]{data-label="fig:steer"}](fig/k100 "fig:"){width="7cm" height="4cm"} Let us remark here that, although AVP-RRT follows the state-based paradigm (it indeed interpolates paths in configuration space and then computes feasible velocities along the path using a Bobrow-like approach, which includes inverse dynamics computations), it is much more successful. The reason for this lies in AVP: when the interval of feasible velocities is small, a randomized approach will have a high probability of sampling unreachable velocities. Therefore, it will fail most of the time. Using AVP, the set of reachable velocities is exactly computed and this failure factor disappears. With AVP-RRT, failures only occur from “unlucky” sampling in the configuration space. Note, however, that the algorithm only saves and propagates the *norm* of the velocity vectors, not their directions, which may make the algorithm probabilistically incomplete (cf. discussion in Section \[sec:discussion\]). ### Fine-tuning of $K$NN-RRT {#sec:id} Based on the above results, we now focus on $K$NN-RRTs with state-based steering for the remainder of this section. The parameters to be tuned are: - $N_\mathrm{local\_trajs}$: number of local trajectories tested in each call to STEER; - $\Delta t_{\max}$: maximum duration of each local trajectory. The values we tested for these two parameters are summarized in Table \[table:tunings\].

  Number of trials   $N_\mathrm{local\_trajs}$   $\Delta t_{\max}$
  ------------------ --------------------------- -------------------
  10                 1                           0.2
  10                 30                          0.2
  10                 80                          0.2
  20                 20                          0.5
  20                 20                          1.0
  20                 20                          2.0

  : Parameter sets for each test.[]{data-label="table:tunings"}

The parameters we do not tune are: - Maximum velocity $V_{\max}$ for sampling velocities. We set $V_{\max} = 50$ rad.s$^{-1}$, which is about twice the maximum velocity observed in the successful trials of AVP-RRT; - Number of neighbors $K$. In this tuning phase, we set $K = 10$.
Other values of $K$ will be tested in the final comparison with AVP-RRT in Section \[sec:compare\]; - Space-time precision $(\epsilon, \delta t)$: as discussed in Section \[subsec:gen\], we chose $\epsilon = 0.01$ and $\delta t = 0.01$ s. Finally, in this tuning phase, we set the torque limit as $({\tau^{\max}}_1,{\tau^{\max}}_2) = (13,7)$ N.m, which are relatively “slack” values, in order to obtain faster termination times for RRT. Tighter values such as $({\tau^{\max}}_1, {\tau^{\max}}_2) = (11, 5)$ N.m will be tested in our final comparison with AVP-RRT in Section \[sec:compare\]. **AB** ![Minimum distance to the goal as a function of time for different values of $N_\mathrm{local\_trajs}$ and $\Delta t_{\max}$. At each instant, the minimum distance of the tree to the goal is computed. The average of this value across the 10 trials of each set is drawn in bold, while shaded areas indicate standard deviations. **A**: tuning of $N_\mathrm{local\_trajs}$. **B**: tuning of $\Delta t_{\max}$.[]{data-label="fig:id"}](fig/id-nbtraj "fig:"){width=".49\textwidth"} ![](fig/id-trajdur "fig:"){width=".49\textwidth"} Fig. \[fig:id\]A shows the result of simulations for different values of $N_\mathrm{local\_trajs}$. One can note that the performance of RRT is similar for values $1$ and $30$, but gets worse for $80$. Based on this observation, we chose $N_\mathrm{local\_trajs}=20$ for the final comparison in Section \[sec:compare\]. Fig. \[fig:id\]B shows the simulation results for various values of $\Delta t_{\max}$.
Observe that the performance of RRT is similar for the three tested values, with smaller values (0.5 s) performing better earlier in the trial and larger values (2.0 s) performing better later on. We also noted that smaller values of $\Delta t_{\max}$ such as 0.1 s or 0.2 s tended to yield poorer results (not shown here). Our choice for the final comparison was thus $\Delta t_{\max} = 1.0$ s. Comparing $K$NN-RRT and AVP-RRT {#sec:compare} ------------------------------- In this section, we compare the performance of $K$NN-RRT (for $K\in\{1, 10, 40, 100\}$, the other parameters being set to the values discussed in the previous section) against AVP-RRT with 10 neighbors. For practical reasons, we further limited the execution time of every trial to $10^4$ s, which had no impact in most cases or otherwise induced a slight bias in favor of RRT (since we took $10^4$ s as our estimate of the “search time” when RRT does not terminate within this time limit). We ran the simulations for two instances of the problem, namely - $({\tau^{\max}}_1, {\tau^{\max}}_2) = (11, 7)$ N.m; - $({\tau^{\max}}_1, {\tau^{\max}}_2) = (11, 5)$ N.m. For each problem instance, we ran 40 trials for each of the planners AVP-RRT, state-space RRT with 1 nearest neighbor (RRT-1), RRT-10, RRT-40 and RRT-100. Note that for each trial $i$, all the planners received the same sequence of random states $$\mathbf{X}_i = \left\{{\mathbf{x}}_{{\mathrm{rand}}}^{(i)}(t) \in \mathbf{R}^4\ \middle|\ t \in \mathbf{N}\right\} \sim {\cal U}\left((]-\pi, \pi]^2 \times [-V_{\max}, +V_{\max}]^2)^{\mathbf{N}}\right),$$ although AVP-RRT only used the first two coordinates of each sample since it plans in the configuration space. The results of this benchmark were already illustrated in Fig. \[fig:comp\]. Additional details are provided in Tables \[tab:1107\] and \[tab:1105\]. All trials of AVP-RRT successfully terminated within the time limit. For $({\tau^{\max}}_1, {\tau^{\max}}_2) = (11, 7)$, the average search time was $3.3$ min.
Among the $K$NN-RRT variants, RRT-40 performed best with a success rate of 92.5% and an average computation time of ca. 45 min, which is, however, $13.4$ times slower than AVP-RRT. For $({\tau^{\max}}_1, {\tau^{\max}}_2) = (11, 5)$, the average search time was $9.8$ min. Among the $K$NN-RRT variants, again RRT-40 performed best in terms of search time (54.6 min on average, which was $5.6$ times slower than AVP-RRT), but RRT-100 performed best in terms of success rate within the $10^4$ s time limit (92.5%).

  Planner   Success rate   Search time (min)
  --------- -------------- -------------------
  AVP-RRT   100%           3.3$\pm$2.6
  RRT-1     40%            70.0$\pm$34.1
  RRT-10    82.5%          53.1$\pm$59.5
  RRT-40    92.5%          44.6$\pm$42.6
  RRT-100   82.5%          88.4$\pm$54.0

  : Comparison of AVP-RRT and $K$NN-RRT for $({\tau^{\max}}_1,{\tau^{\max}}_2) = (11,7)$. []{data-label="tab:1107"}

  Planner   Success rate   Search time (min)
  --------- -------------- -------------------
  AVP-RRT   100%           9.8$\pm$12.1
  RRT-1     47.5%          63.8$\pm$36.6
  RRT-10    85%            56.3$\pm$60.1
  RRT-40    87.5%          54.6$\pm$52.2
  RRT-100   92.5%          81.2$\pm$46.7

  : Comparison of AVP-RRT and $K$NN-RRT for $({\tau^{\max}}_1,{\tau^{\max}}_2) = (11,5)$.[]{data-label="tab:1105"}

Comparison of AVP-RRT with KPIECE on 6-DOF and 12-DOF manipulators {#sec:kpiece} ====================================================================== Here, we detail the comparison between AVP-RRT and the OMPL implementation of KPIECE [@SK12tro; @SMK12ram] on a kinodynamic problem involving an $n$-DOF manipulator, for $2\leq n\leq 12$. The full source code for this comparison is available at <https://github.com/quangounet/kpiece-comparison>. KPIECE ------ We used the implementation of KPIECE available in the Open Motion Planning Library (OMPL) [@SMK12ram]. The library provides utilities such as function templates, data structures, and generic implementations of various planners, written in C++ with Python interfaces. It does not, however, provide modules such as a collision checker or visualization tools.
Therefore, we used OpenRAVE [@Dia10these] for collision checking and visualization. ### Overall algorithm {#overall-algorithm} KPIECE grows a tree of motions in the state-space. A motion $\nu$ is a tuple $(s, u, t)$, where $s$ is the initial state of the motion, $u$ is the control being applied to $s$, and $t$ is the control duration. Initially the tree contains only one motion $\nu_{{\mathrm{start}}}$. Then, in each iteration, the algorithm proceeds by first selecting a motion of the tree to expand from. A control input is then selected and applied to the state for some time duration. Finally, the algorithm evaluates the progress that has been made so far. To select an existing motion from the tree, KPIECE utilizes information obtained from projecting states in the state-space $\mathcal{Q}$ into some low-dimensional Euclidean space $\mathbf{R}^{k}$. The low dimensionality of the space allows the planner to discretize the space into cells. KPIECE then scores each cell based on several criteria (see [@SK12tro] for more detail). Based on the assumption that the coverage of the low-dimensional Euclidean space can reflect the true coverage of the state-space, KPIECE uses its cell scoring system to help bias the exploration towards unexplored areas. For the following simulations, we used the C++ KPIECE planner implementation provided via the OMPL library. Since the library only provides a generic implementation of the planner, we also needed to implement some problem-specific functions for the planner, such as state projection and state propagation. Those functions were also implemented in C++. We now give details on the state projection and state propagation rules used in our simulations. #### State projection Since the state-space exploration is mainly guided by the projection (as well as the cell scoring), more meaningful projections, which better reflect the progress of the planner, will help improve its performance.
For manipulator planning problems, we used a projection that maps a state to the end-effector position in $3$D space. @SK12tro suggested that when planning for a manipulator motion, the tool-tip position in $3$D space is representative. However, by simply discarding all the velocity components we may lose information which can help solve the problem. Thus, we decided to also include the norm of the velocity in the projection. This inclusion of the norm of velocity was also used in [@SK12tro] when planning for a modular robot. Therefore, the projection maps a state into a space of dimension $4$. #### State propagation KPIECE uses a control-based steering method. It applies a selected control to a state over a number of propagation steps to reach a new state. In our case, since the robot we were using was position-controlled, our control inputs were joint accelerations. Let the state be $(\bf{q}, \dot{\bf{q}})$, where ${\mathbf{q}}$ and $\dot{{\mathbf{q}}}$ are the joint values and velocities, respectively. The new state $({\mathbf{q}}^+, \dot{{\mathbf{q}}}^+)$ resulting from applying a control $\ddot{{\mathbf{q}}}$ to $(\bf{q}, \dot{\bf{q}})$ over a short time interval $\Delta t$ can be computed from $$\begin{aligned} \label{eq:KPIECE_control_update} {\mathbf{q}}^{+} &= {\mathbf{q}}+ \Delta t \dot{{\mathbf{q}}} + 0.5(\Delta t)^{2}\ddot{{\mathbf{q}}}\\ \dot{{\mathbf{q}}}^{+} &= \dot{{\mathbf{q}}} + \Delta t \ddot{{\mathbf{q}}}.\end{aligned}$$ ### Fine-tuning of KPIECE We employed the $L_{2}$ norm as a distance metric in order not to bias the planning towards any heuristics. Next, in order for the planner not to spend too much running time on the fine-tuning simulations, we selected the threshold value to be $0.1$. The threshold is used to decide whether a state has reached the goal or not.
If the distance from a state to the goal, according to the given distance metric, is less than the selected threshold, the problem is considered solved. We then tested the algorithm with a number of parameter sets to find the best one. At this stage, the testing environment consisted only of the models of the Denso VS-$060$ manipulator and its base. There was no other object in the environment. Here, checking the validity of a state only requires checking for robot self-collision. In the following runs, we planned motions for only the first two joints of the robot (the ones closest to the robot base). The robot had to move from $(0, 0, 0, 0)$ to $(1, 1, 0, 0)$, where the first two components of the tuples are joint values and the others are joint velocities. We set the goal bias to $0.2$. With the chosen parameters and projection, we ran simulations with different combinations of cell size, $c$, and propagation step size, $p$. Note that here we assigned the cell size, which defines the discretization resolution of the projection space, to be equal in every dimension. Both cell size and propagation step size were chosen from the set $\{0.01, 0.05, 0.1, 0.5, 1.0\}$. We tested all $25$ combinations of the parameters and recorded the running time of the planner. We ran $50$ simulations for each pair $(c, p)$. For any value of cell size, we noticed that the propagation step size of $0.05$ performed best. For $p = 0.05$, the values $c = 0.05, 0.1, 1.0$ performed better than the rest. The resulting running times using those values of cell size did not significantly differ from each other. The differences were on the order of $1$ ms. Therefore, in the following section, we repeated all the simulations with three different pairs $(c, p) \in \{ (0.05, 0.05), (0.1, 0.05), (1.0, 0.05) \}$.
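The state propagation rule of Equation \[eq:KPIECE\_control\_update\] is a constant-acceleration (double-integrator) update applied per joint; a minimal sketch (helper name illustrative):

```python
def propagate(q, qd, qdd, dt):
    """Constant-acceleration update of Eq. (eq:KPIECE_control_update):
    given joint values q, velocities qd, and the control (accelerations)
    qdd, advance the state by one propagation step dt."""
    q_new = [qi + dt * vi + 0.5 * dt**2 * ai for qi, vi, ai in zip(q, qd, qdd)]
    qd_new = [vi + dt * ai for vi, ai in zip(qd, qdd)]
    return q_new, qd_new
```

KPIECE would call such a routine once per propagation step, for the number of steps selected by the planner.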
KPIECE simulation results and comparison with AVP-RRT ----------------------------------------------------- With the previously selected parameters, we conducted simulations as follows. First of all, to show how the running times of KPIECE and AVP-RRT scale when the dimensionality of the problem increases, we used both planners to plan motions for an $n$-DOF robot, with $2\leq n\leq 12$. For this, we concatenated two Denso VS-$060$ manipulators into a composite 12-DOF manipulator and used the first $n$ joints of that robot. The robot was to move from the all-zeros configuration to the all-ones configuration. Initial and final velocities were set to zero. There was no other obstacle in the scene. Since the implementation of KPIECE is unidirectional, we also used a unidirectional version of AVP-RRT. The AVP-RRT implementation was written in Python. Only the time-parameterization algorithm was implemented in C++. We gave each planner $200$ s per run, and simulated $20$ runs for each condition. For KPIECE, we tested different cell sizes ($0.05$, $0.1$, and $1.0$). Fig. \[fig:kpiece\_sim\]A shows average running times over 20 runs. As the figure shows, the three values of the cell size produced similar results. Although KPIECE performed well when planning for low numbers of DOFs, the running time increased very quickly (exponentially) with increasing numbers of DOFs. Correspondingly, the success rate when planning using KPIECE dropped rapidly when the number of DOFs increased, as can be seen from Fig. \[fig:kpiece\_sim\]A. When the number of DOFs was higher than 8, KPIECE failed to find any solution within $200$ s. The computation time for AVP-RRT also increased exponentially with the number of DOFs, but at a much lower rate than KPIECE. Finally, we considered an environment similar to that of our experiment on non-prehensile object transportation of Section \[sec:bottle\].
The tray and the bottle were, however, removed from the (6-DOF) robot model. The problem was therefore less constrained. We considered here only bounds on joint values, joint velocities, and joint accelerations. We shifted the lower edge of the opening upward by $13$ cm and set the opening height to be lower ($25$ cm in this case) to make the problem more interesting. Then, for each run, both planners had a time limit of $600$ s to find a motion for the robot to move from one side of the wall to the other. We repeated simulations $20$ times for both planners. For KPIECE, since the performance with the different cell sizes from $\{0.05, 0.1, 1.0\}$ did not differ much, we chose to run simulations with cell size $c = 0.05$. The average running time for AVP-RRT in this case was $68.67$ s with a success rate of $100\%$. The average number of nodes in the tree when the planner terminated was $60.15$ and the average number of trajectory segments of the solutions was $8.60$. Fig. \[fig:avp-rrt-example-solution\] shows the scene used in simulations as well as an example of a solution trajectory found by AVP-RRT. On the other hand, KPIECE could not find any solution, in any run, within the given time limit. ![The scene used in the last experiment. Both KPIECE and AVP-RRT were to find a motion for the robot to move from one side of the wall to the other. **A**: the start configuration of the robot. **B**: the goal configuration of the robot. The pink line in the figure indicates an end-effector path from a solution found by AVP-RRT. KPIECE could not find any solution, in any run, within the given time limit of 600s.[]{data-label="fig:avp-rrt-example-solution"}](fig/scene_example_traj2_start "fig:"){width="45.00000%"} ![The scene used in the last experiment. Both KPIECE and AVP-RRT were to find a motion for the robot to move from one side of the wall to the other. **A**: the start configuration of the robot. **B**: the goal configuration of the robot.
The pink line in the figure indicates an end-effector path from a solution found by AVP-RRT. KPIECE could not find any solution, in any run, within the given time limit of 600s.[]{data-label="fig:avp-rrt-example-solution"}](fig/scene_example_traj2_goal "fig:"){width="45.00000%"} [^1]: This paper is a substantially revised and expanded version of @PhaX13rss, which was presented at the conference *Robotics: Science and Systems*, 2013. [^2]: When dry Coulomb friction or viscous damping is not negligible, one may consider adding an extra term ${\mathbf{C}}({\mathbf{q}})\dot{\mathbf{q}}$. Such a term would simply change the computation of the fields $\alpha$ and $\beta$ (see infra), but all the rest of the development would remain the same [@SY89tra]. [^3]: Setting ${\mathrm{MVC}}(s)=0$ whenever $\alpha(s,0) > \beta(s,0)$ as in (\[eq:mvc\]) precludes multiple-valued MVCs [cf. @SD85icra]. We made this choice throughout the paper for clarity of exposition. However, in the implementation, we did consider multiple-valued MVCs. [^4]: @JH12icra also introduced a velocity interval propagation algorithm along a path, but for pure kinematic constraints and moving obstacles. [^5]: Note that enforcing this hypothesis on the AVP-RRT planner presented in Section \[sec:avprrt\] will turn it into an “AVP-PRM”. [^6]: Note that, if ${\mathbf{q}}_1={\mathbf{q}}_{\mathrm{start}}$, then there is no associated unit tangent vector at ${\mathbf{q}}_1$. In such a case, sample a random unit tangent vector ${\mathbf{u}}_{\mathrm{start}}$ for each interpolation call. [^7]: This hypothesis basically says that, if the initial tangent vector (${\mathbf{u}}_1$) is “aligned” with the displacement vector (${\mathbf{u}}_{{\mathbf{q}}_1\to{\mathbf{q}}_2}$), then the interpolation path is close to a straight line, which is verified for any “reasonable” interpolation method.
[^8]: Note that this property presupposes that the robot is fully-actuated, see also the paragraph “Class of systems where AVP is applicable” in Section \[sec:discussion\].
--- abstract: | We formulate a classification conjecture for conformally invariant families of measures on simple loops that builds on a conjecture of Kontsevich and Suhov [@KontSuh]. The main example in this class of objects was constructed by Werner [@Wer_loops]. We present partial results towards the algebraic step of this classification. Solving this conjecture would provide another argument explaining why planar statistical mechanics models with conformally invariant scaling limits naturally occur in a one-parameter family, together with the dynamical characterization of SLE via Schramm’s central limit argument, and with the conformal field theory point of view and its central charge parameter. author: - Stéphane Benoist bibliography: - 'biblio.bib' title: Classifying conformally invariant loop measures --- Motivation ========== We are interested in describing collections of measures on sets of simple loops that are conformally invariant scaling limits of interfaces found in two-dimensional statistical mechanics models. Let us first give an example of such a loop measure, coming from the Ising magnetization model. The Ising loop measure ---------------------- The Ising model on a subgraph $\mathcal{G}$ of the square grid $\Z^2$ at inverse temperature $\beta>0$ is a measure on configurations $\left(\sigma_{x}\right)_{x\in\mathcal{G}}$ of $\pm1$ spins located at the vertices of $\mathcal{G}$. A configuration appears with probability proportional to $\exp\left(-\beta H\left(\sigma\right)\right)$, where the energy $H$ is given by $-H\left(\sigma\right)=\sum_{x\sim y}\sigma_{x}\sigma_{y}$ (the sum is over all pairs of adjacent vertices of $\mathcal{G}$). When the temperature is high (i.e. $\beta$ is small), we tend to see configurations that are very disordered at the microscopic scale (i.e. spins are virtually independent): one can imagine that the heat agitation of each atom is enough to overcome the energy constraints, i.e.
constraints due to the interactions between atoms. On the other hand, at low temperatures (i.e. high values of $\beta$), the bias $e^{-2\beta}$ will exclude configurations with too many disagreeing neighbors. The picture tends to be frozen (all spins have the same sign) at the microscopic level. There is a unique critical parameter $\beta_c=\frac{1}{2}\ln\left(\sqrt{2}+1\right)$, where the Ising model exhibits an intermediate behavior between being disordered and being frozen. Interfaces of the critical Ising model are known to converge to a conformally invariant scaling limit [@SmiChe_Ising], and this allows us to construct a continuous loop measure from discrete Ising interfaces. Given a simply-connected domain $\O$ in the plane, we approximate it for each $\delta>0$ by a discrete domain $\O^\delta$, which is a collection of faces of a square lattice of mesh size $\delta$. We consider the critical Ising model on the graph $\O^\delta$, with $+$ boundary conditions, i.e. we fix the spins on the boundary of $\O^\delta$ to be $+1$. To a spin configuration $\sigma$, we can associate a random collection of curves $c(\sigma)$: the set of all interfaces, i.e. the set of loops on the graph dual to $\O^\delta$ that wind between $+$ and $-$ spins. We call $m_{\O^\delta}$ the measure on collections of loops $c$ that are interfaces of the Ising spin model. In the scaling limit $\delta\to 0$, the measure $m_{\O^\delta}$ converges (loops are compared using the supremum norm up to reparametrization) towards a measure $m_\O$ called CLE$_3$ [@BeHo_CLEIs].
The collection of measures $(m_\O)_\O$ is then conformally invariant: given two simply-connected domains $\O$ and $\O'$ in the plane, and a conformal isomorphism $\phi:\O\rightarrow \O'$ between them, we have $$m_{\O'} = \phi_\ast\left(m_{\O}\right).$$ The convergence of the whole collection $c$ of Ising loops implies the convergence of the measures $\mu_{\O^\delta}$ on single Ising interface loops $\ell$ $$\mu_{\O^\delta}({\rm d} \ell)= \E^{\sigma}\left[\sum_{\ell_0\in c(\sigma)} \delta_{\ell_0}({\rm d} \ell)\right] = \int_{c\in\mathcal{C}} \sum_{\ell_0\in c} \delta_{\ell_0}({\rm d} \ell) m_{\O^\delta}({\rm d} c)$$ to a conformally invariant collection of measures $\mu_\Omega$ (of infinite mass) which describes the loops of a CLE$_3$: $$\mu_{\O}({\rm d} \ell)=\int_{c\in\mathcal{C}} \sum_{\ell_0\in c} \delta_{\ell_0}({\rm d} \ell) m_{\O}({\rm d} c).$$ Loops and interactions ---------------------- One can wonder whether the Ising loop measure $\mu_{\O}({\rm d} \ell)$ is characterized by the macroscopic interactions of the statistical mechanics model. One way to make sense of this question is to keep track of interactions by investigating how the position of the domain boundary influences the shape of the loops. At the discrete level, we can do the computation. Let $\O'$ be a subdomain of $\O$, and let us consider a loop $\ell \subset {\O'}^\delta$. We can compute the respective likelihoods of seeing the loop $\ell$ as an Ising loop in ${\O'}^\delta$ and in $\O^\delta$.
This Radon-Nikodym derivative can indeed be written as a ratio of Ising partition functions: $$\label{eq:Z} \frac{{\rm d}\mu_{{\O'}^\delta}}{{\rm d}\mu_{\O^\delta}}(\ell)=\frac{Z_{{\O'}^\delta\setminus\ell}Z_{\O^\delta}}{Z_{{\O'}^\delta}Z_{\O^\delta\setminus\ell}},$$ where, for a discrete domain $\mathcal{G}$, the partition function is given by $$Z_{\mathcal{G}}=\sum_{\sigma\in\{\pm 1\}^{\mathcal{G}}}e^{-\beta_c H(\sigma)}.$$ The right-hand side of (\[eq:Z\]) may be tractable in the scaling limit and converge to a function that we represent as $\exp(f(\ell,{\O'},\O))$, for a certain function $f$ (see Section \[sec:step1\] for a description of the function $f$). This step is well-understood for the uniform spanning tree model [@BeDu_SLE2]. The continuous Ising loop measure $\mu_\O$ should then react to domain restriction (i.e. boundary deformation) in the following way: $$\begin{aligned} \label{eq:restr} \frac{{\rm d}\mu_{\O'}}{{\rm d}\mu_\O}(\ell)=\exp\left(f(\ell,\O',\O)\right)\ind_{\ell\subset\O'}.\end{aligned}$$ Now, suppose that we are given a family of measures $(\widetilde{\mu}_\O)_\O$ whose behavior under restriction is also given by the Ising restriction formula (\[eq:restr\]). Is $\widetilde{\mu}_\O$ the Ising loop measure, i.e. is it true that for any domain $\Omega$, $\widetilde{\mu}_\O=\mu_\O$? The above discussion could (at least conjecturally) be repeated for any discrete model exhibiting conformal invariance: the scaling limits of single loops in such models should fall in the class of families of measures that satisfy (\[eq:restr\]) for some function $f$. This leads to the following question. \[prob\] Classify all families of measures on single loops that can a priori appear as scaling limits, i.e. classify conformally invariant families of measures on loops, together with their restriction property.
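The partition function $Z_{\mathcal{G}}$ appearing in the ratio above can be evaluated by brute force on very small graphs; since the sum runs over $2^{|\mathcal{G}|}$ spin configurations, the sketch below is purely illustrative:

```python
import itertools
import math

def partition_function(vertices, edges, beta):
    """Brute-force Ising partition function Z = sum over spin configurations
    of exp(-beta * H(sigma)), with -H(sigma) = sum over edges of
    sigma_x * sigma_y."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=len(vertices)):
        sigma = dict(zip(vertices, spins))
        minus_H = sum(sigma[x] * sigma[y] for x, y in edges)
        Z += math.exp(beta * minus_H)
    return Z

# At the critical inverse temperature, for a single edge {a, b},
# Z = 2*exp(beta_c) + 2*exp(-beta_c) = 4*cosh(beta_c).
beta_c = 0.5 * math.log(math.sqrt(2) + 1)
Z = partition_function(["a", "b"], [("a", "b")], beta_c)
```

Ratios of such partition functions, as in (\[eq:Z\]), can then be formed directly from the returned values.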
Some aspects of this classification are closely related to a question of Malliavin [@Mal_diffcircle] on the existence of loop measures, as well as to a conjecture of Kontsevich and Suhov [@KontSuh]. This classification would conjecturally provide another argument explaining why planar statistical mechanics models with conformally invariant scaling limits naturally occur in a one-parameter family. Arguments with similar conclusions include the dynamical characterization of SLE via Schramm’s central limit argument and the related CLE classification [@ShWe_CLEcharac], as well as the conformal field theory classification, which (loosely) extracts a real parameter (the central charge) out of the action of the conformal group on local observables of the model. Classification of loop measures {#sec:steps} ------------------------------- The classification question (Problem \[prob\]) splits into three steps on which we elaborate in this section. The first step is to classify the possible restriction formulas, i.e. to understand what restriction functions $f(\ell,\O',\O)$ can appear in the formula (\[eq:restr\]). Indeed, the function $f$ needs to satisfy some algebraic conditions in order to appear as such a Radon-Nikodym derivative. The second step of the classification would be to prove uniqueness of the loop measures, i.e. to prove that there is at most one collection of loop measures for each type of boundary interaction $f$. Thirdly and finally, one should construct all these measures. ### Restriction functions {#sec:step1} The first step, the question of classifying possible restriction formulas, is in part an algebraic question that can be rephrased as a cohomology computation on the space of loop-decorated Riemann surfaces. We conjecture in this paper (Conjecture \[conj\]) that the restriction functions, up to absolute continuity of the underlying measures, a priori reduce to a one-parameter family for algebraic reasons.
This one-parameter family can be written as $f=c M$, where the quantity $M(\ell,\O',\O)$ is (up to a factor) as in [@BeDu_SLE2 Proposition 2.29] and can be interpreted as the mass of Brownian loops in $\Omega$ that intersect both $\ell$ and $\O\setminus\O'$ (see [@LW Section 4]). Moreover (with the right choice of normalization factor for $M$), the central charge $c$ is related to the SLE parameter $\kappa$ by $$c=\frac{(3\kappa-8)(6-\kappa)}{2\kappa}.$$ Conjecturally, loop measures exist only when $c\leq 1$ for probabilistic reasons [@KontSuh]. This situation is reminiscent of the classification of restriction measures [@LSW3], which are a priori classified by one positive real parameter $\alpha>0$, and later shown to exist only for $\alpha\geq 5/8$ for probabilistic reasons. Moreover, note that quantities similar to the Brownian loop mass $c M(\ell,\O',\O)$ appear when one studies how chordal SLE$_\kappa$ depends on the boundary of the domain [@LSW3 Section 7.2]. ### Characterization {#sec:step2} The second step of the classification program, the uniqueness of the loop measure having a fixed restriction property, is a conjecture of Kontsevich and Suhov [@KontSuh]. It has been proved by Werner for $\kappa = 8/3$ [@Wer_loops], which corresponds to trivial interactions with the boundary, i.e. $f(\ell,\Omega',\Omega)=0$. The same result was achieved in [@ChaPic_Werloop] by considering the structure of infinitesimal deformations of domains. ### Construction {#sec:step3} Loop measures were built by Werner [@Wer_loops] for $\kappa=8/3$ as boundaries of Brownian loops. Loop measures for $\kappa=2$ were constructed as a scaling limit of a discrete loop-erased walk [@KasKen_RandomCurves; @BeDu_SLE2].
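The central-charge formula above is easy to sanity-check numerically against the special values mentioned in this section; a minimal sketch:

```python
def central_charge(kappa):
    """Central charge c as a function of the SLE parameter kappa,
    c = (3*kappa - 8) * (6 - kappa) / (2 * kappa)."""
    return (3 * kappa - 8) * (6 - kappa) / (2 * kappa)

# kappa = 8/3 gives c = 0 (trivial boundary interaction),
# kappa = 2 gives c = -2 (loop-erased walk regime),
# kappa = 4 gives c = 1 (the conjectural upper bound for loop measures).
```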
For general values of the SLE parameter $0<\kappa\leq 4$, or equivalently general values of the central charge $c\leq 1$ (this is the simple curve regime, and conjecturally covers all simple loop measures), the loop measures are constructed in a work in preparation [@BeDu_SLEkloops; @BeDu_SLE4loops] by finding them as flow lines of the Gaussian free field in the imaginary geometry coupling of Miller and Sheffield [@MilSheIG1]. Content of this paper --------------------- We now focus on the first step of the classification (Problem \[prob\]), i.e. we want to understand all possible functions $f$ that can appear in the formula (\[eq:restr\]). In Section \[sec:resfun\], we set up this algebraic question as a cohomology problem. In Section \[sec:cohom\], we discuss a couple of results on the corresponding cohomology group (Propositions \[prop:non-trivial\] and \[prop:obstruction\]). The algebraic classification of restriction functions {#sec:resfun} ===================================================== By Riemann surface, we mean a surface $\Sigma$ equipped with a complex structure, with finitely many handles, finitely many boundary components, and no punctures. For our purposes, there is no loss of generality in thinking of $\Sigma$ as an open subset of the complex plane $\mathbb{C}$ with finitely many holes (see Proposition \[prop:ess\]). An embedding $\Sigma_1 \hookrightarrow \Sigma_2$ is a conformal injective map from the Riemann surface $\Sigma_1$ to the Riemann surface $\Sigma_2$. A simple loop $\ell$ is the image of the unit circle by an injective continuous map: the map is considered up to reparametrization, including rerooting and orientation switching. The topology on loops we will use is the topology of uniform distance up to reparametrization, and we work with the corresponding Borel $\sigma$-algebra.
Setup {#sec:setup} ----- We now consider families of $\sigma$-finite measures $(\mu_\Si)_\Si$ indexed by Riemann surfaces, where $\mu_\Si$ is a measure on the set of simple loops $\l$ on $\Si$. Implicit in this formalism is that such a family of measures is conformally invariant. A family $(\mu_\Si)_\Si$ is Malliavin-Kontsevich-Suhov (MKS) if it satisfies a restriction property as in (\[eq:restr\]): if $\Si_1\subset\Si_2$ are two Riemann surfaces, then $$\begin{aligned} \label{eq:restr2} \frac{{\rm d}\mu_{\Si_1}}{{\rm d}\mu_{\Si_2}}(\ell)= e^{f_\mu(\l,\Si_1,\Si_2)} \mathbbm{1}_{\l\subset\Si_1},\end{aligned}$$ where $f_\mu(\l,\Si_1,\Si_2)$ is a priori an arbitrary function that we call restriction function. Suppose that we have two MKS families of measures $\mu$ and $\nu$ that are in the same absolute continuity class: one can find a (conformally invariant, i.e. coordinate independent) function $g(\l,\Si)$ defined on pairs formed by a Riemann surface $\Sigma$ and a loop $\l\subset\Si$ such that $\nu=e^{-g}\mu$. Then, the restriction function $f_\nu$ associated to the measure $\nu$ can be expressed as: $$\begin{aligned} \label{eq:cb} f_\nu(\l,\Si_1,\Si_2)=f_\mu(\l,\Si_1,\Si_2) + g(\l,\Si_2) - g(\l,\Si_1).\end{aligned}$$ Moreover, the inverse operation also makes sense: given an MKS family of measures $\mu$ and a conformally invariant function $g(\l,\Si)$, we can define another MKS family of measures $\nu$ by $\nu=e^{-g}\mu$. The restriction function $f_\nu$ is then given by (\[eq:cb\]). An example where two families of measures $\mu$ and $\nu$ are related in this way is when these families describe loops arising from the same statistical mechanics model, but with different boundary conditions. Understanding the set of restriction functions $f$ modulo the equivalence relation (\[eq:cb\]) is a first step towards classifying the absolute continuity classes of MKS families of measures (as discussed in Section \[sec:steps\]). 
The cohomology of loops {#sec:defcoho} ----------------------- We are thus led to consider the following problem. We call a configuration either - a pair $(\l,\Si)$ consisting of a Riemann surface $\Sigma$ and a loop $\ell\subset\Si$, or - a triple $(\l,\Si_1,\Si_2)$ consisting of two Riemann surfaces $\Si_1\subset\Si_2$ and a loop $\ell\subset\Si_1$. Which of the two we consider will be clear from context at any given point. We define the set $\mathcal{C}$ of cocycles as the set of real-valued functions $f(\l,\Si_1,\Si_2)$ on configurations such that: - $f$ is an additive cocycle, i.e. given three Riemann surfaces $\Si_1\subset\Si_2\subset\Si_3$ and a loop $\l\subset\Si_1$, we have that $$\begin{aligned} \label{eq:cocycle} f(\l,\Si_1,\Si_3)=f(\l,\Si_1,\Si_2)+f(\l,\Si_2,\Si_3).\end{aligned}$$ - $f$ is conformally invariant (i.e. coordinate-independent). - $f$ is continuous in $\l$ (for the topology of uniform convergence up to reparametrization). Note that the first item is satisfied by restriction functions. The second item is trivially satisfied from the formalism, but we insist on the fact that $f$ should not depend on how coordinates are chosen on $\Sigma_2$, e.g. how $\Sigma_2$ is embedded in a larger Riemann surface (see the non-trivial Lemma \[lem:rep\]). The third item is a convenient way to enforce the measurability and the (local) integrability of $e^f$ in (\[eq:restr2\]). However, this is more than a technical condition (see the comment after Proposition \[prop:obstruction\]). Let us now define the set of coboundaries $\B$ as the set of real-valued functions $f(\l,\Si_1,\Si_2)$ on configurations such that there exists a function $g(\l,\Si)$ on configurations $\l\subset\Si$ that satisfies: - $f(\l,\Si_1,\Si_2)=g(\l,\Si_2)-g(\l,\Si_1).$ - $g$ is conformally invariant (i.e. coordinate-independent). - $g$ is continuous in $\l$ (for the topology of uniform convergence up to reparametrization). 
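The additive cocycle identity (\[eq:cocycle\]) and its stability under the coboundary modification (\[eq:cb\]) can be illustrated on a toy model in which the "surfaces" are nested disks indexed by their radii; this is a numerical sketch only, not a statement about actual Riemann surfaces:

```python
import math

def is_cocycle(f, r1, r2, r3, tol=1e-12):
    """Check the additive cocycle identity f(r1, r3) = f(r1, r2) + f(r2, r3)
    on a toy model of nested disks of radii r1 <= r2 <= r3."""
    return abs(f(r1, r3) - (f(r1, r2) + f(r2, r3))) < tol

f = lambda r1, r2: math.log(r2 / r1)                # a toy cocycle
g = lambda r: r ** 2                                # an arbitrary toy "g(loop, surface)"
f_pert = lambda r1, r2: f(r1, r2) + g(r2) - g(r1)   # coboundary modification of f
```

Any coboundary modification of a cocycle is again a cocycle, since the added $g$-terms telescope; this is the algebraic content behind the equivalence relation studied in this section.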
Note that the function $g$ associated to a coboundary is unique up to a global additive constant, and thus we can always assume that $g(\S^1,\C\P^1)=0$. Moreover, note that every coboundary is a cocycle, i.e. $\B\subseteq\Ca$. The classification of absolute continuity classes of MKS family of measures (as discussed in Section \[sec:setup\]) amounts to understanding the cohomology of restriction functions, i.e. to understand the set of all cocycles modulo coboundaries. \[conj\] The cohomology group $\mathcal{H}=\Ca\slash \B$ is a one-dimensional real vector space. We use here the word cohomology in the sense of understanding the quotient of a space $\Ca$ with additive properties such as (\[eq:cocycle\]) by telescopic sums $\B$. The cohomology space $\mathcal{H}$ carries information on the structure of the space of all loop-decorated Riemann surfaces modulo conformal equivalence. On the cohomology of loops {#sec:cohom} ========================== The cohomology is non-trivial ----------------------------- \[prop:non-trivial\] The cohomology group $\mathcal{H}$ is non-trivial. We give a probabilistic proof that relies on the existence of SLE loop measures. It would be interesting to have a purely algebraic proof. Consider the point in cohomology $f^{\SLE_2}(\ell,\Sigma_1,\Sigma_2)=-2M(\ell,\Sigma_1,\Sigma_2)$ that is associated to the $\SLE_2$ loop measure $\mu^{\SLE_2}$ built in [@BeDu_SLE2] (the precise definition of the mass of Brownian loop $M$ does not matter here). We argue that $f^{\SLE_2}$ cannot be a coboundary. Indeed, if it were, one could write $f^{\SLE_2}(\ell,\Sigma_1,\Sigma_2)=g(\ell,\Sigma_2)-g(\ell,\Sigma_1)$ for some function $g$. The loop measure $\nu=e^g\mu^{\SLE_2}$ would then satisfy the exact restriction property $$\label{eq:truerestr} \nu_{\Si_1}=\nu_{\Si_2} \mathbbm{1}_{\l\subset\Si_1},$$ for all Riemann surfaces $\Si_1\subset\Si_2$. 
The only loop measure (up to global scaling) satisfying (\[eq:truerestr\]) is the $\SLE_{8/3}$ loop measure $\mu^{\SLE_{8/3}}$ [@Wer_loops], and so we would have $\nu=C \mu^{\SLE_{8/3}}$, i.e. $e^g\mu^{\SLE_2}=C \mu^{\SLE_{8/3}}$. However, the measure $\mu^{\SLE_{8/3}}$ does not belong to the absolute continuity class of $\mu^{\SLE_2}$ (e.g. because the Hausdorff dimension of an $\SLE_\kappa$ curve for $\kappa\leq 8$ is given by $1+\frac{\kappa}{8}$ [@Bef_Haus]), a contradiction. The obstruction lies in the regularity of $g$ --------------------------------------------- We now prove that the obstruction to a cocycle being a coboundary lies in the regularity of $g$ (Proposition \[prop:obstruction\]). We call a configuration $(\ell,A)$ essential if $A$ is an annulus, and if the loop $\l$ is homotopically non-trivial in $A$, i.e. if $\ell$ disconnects the two boundary components of $A$. A configuration $(\l,A_1,A_2)$ is called essential if $(\l,A_1)$ and $(\l,A_2)$ are. We say that a loop $\l$ drawn on a surface $\Si$ is analytic, if we can find an annular neighborhood $A$ of $\l$ in $\Si$ and a conformal embedding $\phi:A\hookrightarrow\C\P^1$ such that the configuration $(\l,A)$ is essential and $\phi(\ell)=\S^1$. Note that this is equivalent to asking that there exists an analytic parametrization of the loop $\l$ by the unit circle $\S^1$. We call a configuration $(\l,\Si)$ (resp. $(\l,\Si_1,\Si_2)$) analytic if the loop $\l$ is. Note that a configuration being analytic is not a condition on the roughness of the embedding $\partial\Si_1\hookrightarrow\Si_2$ (which may even be ill-defined). We now prove, in the spirit of [@KontSuh], that all the structure of restriction functions comes from essential configurations, i.e. from annular regions. \[prop:ess\] If $f$ is a coboundary for essential configurations, then $f$ is a coboundary. 
By assumption, we can find a continuous function $g(\l,A)$ defined for configurations $\l \subset A$ where $\l$ is a homotopically non-trivial loop in an annulus $A$ and such that $f(\l,A_1,A_2)=g(\l,A_2)-g(\l,A_1)$ for all essential configurations $\l\subset A_1\subset A_2$. Given a configuration $\l\subset\Si$, let us pick an annulus $A\subset\Sigma$ such that $(\ell,A)$ is an essential configuration, and tentatively define $g(\l,\Si):=f(\l,A,\Si)+g(\l,A)$. - The function $g$ does not depend on the choice of the annulus $A$ : for a configuration $\l\subset A' \subset A\subset \Sigma$, we have that $$\begin{aligned} f(\l,A',\Si)+g(\l,A')-f(\l,A,\Si)-g(\l,A)&=&g(\l,A')-g(\l,A)+f(\l,A',\Si)-f(\l,A,\Si)\nonumber\\ &=&-f(\l,A',A)+f(\l,A',A)=0.\nonumber\end{aligned}$$ - If for any annulus $A$ the function $g(\ell,A)$ is continuous, then the function $g(\ell,\Sigma)$ is continuous in $\l$ for any Riemann surface $\Sigma$. - $f$ and $g$ are related by the coboundary formula: given a configuration $\ell\subset \Si_1\subset\Si_2$, consider an annulus $A\subset\Sigma_1$ such that the configuration $(\ell, A)$ is essential. Then, we have that $$\begin{aligned} g(\ell,\Si_2)-g(\ell,\Si_1)&=&f(\l,A,\Si_2)+g(\l,A)-f(\l,A,\Si_1)-g(\l,A)\nonumber\\ &=&f(\ell,\Si_1,\Si_2).\nonumber\end{aligned}$$ \[lem:rep\] If $f$ is a cocycle, the function $f(\S^1,A,\C\P^1)$ (for configurations where $\S^1$ winds non-trivially around the annulus $A$) only depends on the conformal type of $(\S^1,A)$. In particular, the quantity $f(\S^1,A,\C\P^1)$ does not depend on the embedding $(\S^1,A)\hookrightarrow(\S^1,\C\P^1)$. Before we give the proof of this Lemma, let us define the group $\Diff$ of analytic diffeomorphisms of the circle. An element $\psi\in\Diff$ is a map from the unit circle $\S^1$ to itself such that: - The map $\psi$ is analytic: seeing $S^1$ as the quotient $\R/2\pi$, the map $\psi$ is a $2\pi$-periodic real-analytic map from $\R$ to itself. - The map $\psi$ is a bijection. 
- The derivative of $\psi$ does not vanish (together with the preceding items, this is equivalent to asking that $\psi$ admits an analytic inverse). The group law on $\Diff$ is given by composition. Given a cocycle $f$, we define a morphism $\rho$ from the group $\Diff$ of analytic diffeomorphisms of the circle to $(\R,+)$. Pick a diffeomorphism $\psi\in\Diff$ and consider a small enough annular neighborhood $A$ of $\S^1$ such that $\psi$ extends to $A$ as an injective holomorphic map. We consider the map $\rho:\Diff \to \R$ given by $\rho(\psi) = f(\S^1,\psi(A),\C\P^1)-f(\S^1,A,\C\P^1)$. - The quantity $\rho(\psi)$ does not depend on the choice of $A$ : given an annulus $A'$ such that $\S^1\subset A' \subset A$, we have that $$\begin{aligned} &&f(\S^1,\psi(A'),\C\P^1)-f(\S^1,A',\C\P^1)-f(\S^1,\psi(A),\C\P^1)+f(\S^1,A,\C\P^1)\nonumber\\ &=&f(\S^1,\psi(A'),\C\P^1)-f(\S^1,\psi(A),\C\P^1)+f(\S^1,A,\C\P^1)-f(\S^1,A',\C\P^1)\nonumber\\ &=&f(\psi(\S^1),\psi(A'),\psi(A))-f(\S^1,A',A)=0.\nonumber\end{aligned}$$ - The map $\rho$ is a group morphism. Indeed, let $\psi,\phi \in \Diff$, and let $A$ be an annulus such that $\psi_{|A}$ and $\phi_{|\psi(A)}$ are injective maps. Then, we have $$\begin{aligned} \rho(\phi\circ\psi)&=& f(\S^1,\phi\circ\psi(A),\C\P^1)-f(\S^1,A,\C\P^1)\nonumber\\ &=& f(\S^1,\phi\circ\psi(A),\C\P^1)-f(\S^1,\psi(A),\C\P^1)+f(\S^1,\psi(A),\C\P^1)-f(\S^1,A,\C\P^1)\nonumber\\ &=&\rho(\phi)+\rho(\psi).\nonumber\end{aligned}$$ However, any morphism $\rho:\Diff\rightarrow(\R,+)$ needs to be trivial (Corollary \[cor:morph\], Appendix \[sec:app\]). Hence, given two embeddings $(\S^1,A)$ and $(\S^1,\psi(A))$ of the same configuration in $\C\P^1$, $f(\S^1,\psi(A),\C\P^1)-f(\S^1,A,\C\P^1)=\rho(\psi)=0$. \[prop:obstruction\] If $f$ is a cocycle, there exists a (not necessarily continuous) function $g$ such that $f(\l,A_1,A_2)=g(\l,A_2)-g(\l,A_1)$ on essential analytic configurations $\ell\subset A_1\subset A_2$. 
Since analytic configurations are dense (and in light of Proposition \[prop:ess\]), only the (uniform) continuity of $g$ is missing to imply that any cocycle $f$ is a coboundary. However, this is not the case as the cohomology space $\mathcal{H}$ is non-trivial (Proposition \[prop:non-trivial\]). The obstruction to any cocycle being a coboundary is hence a regularity constraint. Given a cocycle $f$, let us build a function $g$ as claimed. We look for such a function $g$ such that $g(\S^1,\C\P^1)=0$. We then want to define $g(\S^1,A):=-f(\S^1,A,\C\P^1)$ for all configurations $(\S^1,A)$ where $A$ is an annular neighborhood of the unit circle in the Riemann sphere. This is a coordinate-independent definition thanks to Lemma \[lem:rep\]. Given an analytic and essential configuration $\l\subset A$, let us cut a small enough annular neighborhood $A'$ of $\l$ in $A$ such that $(\l,A')$ is conformally equivalent (by a conformal isomorphism $\phi$) to a configuration $(\S^1,\phi(A'))$ where $\phi(A')$ is a subset of the Riemann sphere. We define $g(\l,A):=f(\l,A',A)+g(\S^1,\phi(A'))$. - The function $g$ does not depend on the choice of $A'$: take an annulus $A''$ such that $\l\subset A'' \subset A'$. Then $$\begin{aligned} &&f(\l,A',A)+g(\S^1,\phi(A'))-f(\l,A'',A)-g(\S^1,\phi(A''))\nonumber\\ &=&f(\l,A',A)-f(\l,A'',A)+g(\S^1,\phi(A'))-g(\S^1,\phi(A''))\nonumber\\ &=&-f(\l,A'',A')+f(\phi(\l),\phi(A''),\phi(A'))=0.\nonumber\end{aligned}$$ - The function $g$ does not depend on the choice of $\phi$, by Lemma \[lem:rep\]. - $f$ and $g$ are related by the coboundary formula. Indeed, given an essential analytic configuration $\ell\subset A'\subset A$, let $A''$ be an annulus such that $\ell\subset A''\subset A'$.
Then $$\begin{aligned} g(\ell,A)-g(\ell,A')&=&f(\ell,A'',A)-f(\S^1,\phi(A''),\C\P^1)-f(\ell,A'',A')+f(\S^1,\phi(A''),\C\P^1)\nonumber\\ &=&f(\ell,A',A).\nonumber\end{aligned}$$ The group of analytic diffeomorphisms of the circle does not admit non-trivial morphisms to $(\R,+)$ {#sec:app} ==================================================================================================== \[prop:diff-simple\] The group $\Diff^+$ of orientation-preserving analytic diffeomorphisms of the circle is perfect: any element of $\Diff^+$ can be written as a finite composition of commutators, i.e. elements of the form $f\circ g\circ f^{-1}\circ g^{-1}$. We proceed in several steps. - The subgroup of the conformal transformations of the sphere $\C\P^1$ that fix the unit circle is isomorphic to PSL$_2(\R)$, and naturally embeds in the group $\Diff^+$ of orientation-preserving analytic diffeomorphisms of the circle, as the family of maps $$z\mapsto e^{i\theta}\frac{z+c}{\overline{c}z+1}.$$ This subgroup of $\Diff^+$ contains all rotations $R_\theta$ of angle $\theta$, and is well-known to be perfect. In particular, all rotations in $\Diff^+$ are finite compositions of commutators. - For any element $f\in\Diff^+$ we define its rotation number $r(f)\in\R/\Z$ in the following way. Let us pick a lift $F:\R \rightarrow \R$, i.e. if $\pi:\R\to\R/\Z\simeq \S^1$ is the canonical projection, we pick a function $F$ such that $\pi\circ F=f\circ\pi$. The rotation number is then given by $$r(f)=\lim_{n\to\infty}\frac{F^{(n)}(1)}{n},$$ where $F^{(n)}$ denotes the composition of $F$ with itself $n$ times. The rotation number (as a real number) comes with an ambiguity of $\Z$ resulting from the choice of a lift $F$. 
Analytic diffeomorphisms $f$ whose rotation number belongs to a non-trivial subset $\Theta\subset\R/\Z$ (of full Lebesgue measure) are analytically conjugate to a rotation [@Herm_conjdiffcercle]: if $r(f)\in\Theta$, we can find an analytic diffeomorphism $h\in\Diff^+$ such that $f=h^{-1}\circ R_{r(f)}\circ h$. - For any diffeomorphism $f\in\Diff^+$, the map $\alpha\mapsto r(R_\alpha\circ f)$ is onto, being a periodic, non-decreasing, continuous map. For continuity, see e.g. [@Kuehn_rotation]: it follows from the fact that the rotation number $r(f)=\frac{p}{q}\in\Q$ if and only if the iterated map $f^{(q)}$ has a fixed point. Hence, given an element $f\in\Diff^+$, we can find an angle $\alpha$ such that the rotation number $r(R_\alpha\circ f)=\theta\in \Theta$. This implies that there exists an element $h$ of $\Diff^+$ such that $$R_\alpha\circ f = h^{-1}\circ R_\theta \circ h.$$ We can then express $f$ in the following way: $$f=R_{-\alpha}\circ \left(h^{-1}\circ R_\theta \circ h \circ R_\theta^{-1}\right)\circ R_\theta,$$ which is a composition of two rotations and a commutator, hence a finite composition of commutators. \[cor:morph\] The group $\Diff$ of analytic diffeomorphisms of the circle does not admit a non-trivial morphism to $(\R,+)$. Given a group morphism $\rho:G\rightarrow A$ taking values in an abelian group $A$, the kernel of $\rho$ is a group that contains all commutators of $G$. In particular, by Proposition \[prop:diff-simple\], given a group morphism $\rho:\Diff\rightarrow \R$, the subgroup $\Diff^+\subset\Diff$ of orientation-preserving diffeomorphisms is in the kernel of $\rho$. Hence $\rho$ factors through $\rho:\Diff\to \Diff/\Diff^+\simeq\Z/2\Z \to \R$, where the first map is the canonical quotient map, and where the second map needs to be trivial, as there are no non-trivial morphisms from $\Z/2\Z$ to $(\R,+)$. 
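The rotation-number facts used in the proof above — the value for a rigid rotation, the $\Z$-ambiguity coming from the choice of lift, and the monotonicity of $\alpha\mapsto r(R_\alpha\circ f)$ — can be illustrated with a quick numerical sketch. The helper below and the parameter values are mine, purely for illustration:

```python
import math

def rotation_number(F, n=20000):
    """Approximate r(f) = lim F^(n)(1)/n for a lift F of a circle map."""
    x = 1.0
    for _ in range(n):
        x = F(x)
    return x / n

# A rigid rotation R_alpha has lift F(x) = x + alpha, so r(R_alpha) = alpha.
alpha = 0.375
r = rotation_number(lambda x: x + alpha)
assert abs(r - alpha) < 1e-3

# Choosing the other lift F(x) = x + alpha + 1 shifts the value by 1:
# this is exactly the Z-ambiguity in the choice of lift.
assert abs(rotation_number(lambda x: x + alpha + 1.0) - (alpha + 1.0)) < 1e-3

# For a fixed analytic circle diffeomorphism f (here a small perturbation
# of the identity), alpha -> r(R_alpha o f) is non-decreasing.
def lift(a):
    return lambda x: x + a + 0.05 * math.sin(2 * math.pi * x)

rs = [rotation_number(lift(a)) for a in (0.15, 0.35, 0.55, 0.75)]
assert all(r1 <= r2 + 1e-3 for r1, r2 in zip(rs, rs[1:]))
```

The finite-$n$ truncation of the limit introduces an $O(1/n)$ error, which is why the comparisons above carry a small tolerance.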
Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank Yves Benoist and Julien Dubédat for helpful discussions.
--- abstract: 'The Bass trace conjectures are placed in the setting of homotopy idempotent selfmaps of manifolds. For the strong conjecture, this is achieved via a formulation of Geoghegan. The weaker form of the conjecture is reformulated as a comparison of ordinary and $L^{2}$–Lefschetz numbers.' address: - | Department of Mathematics\ National University of Singapore\ Kent Ridge 117543\ Singapore - | Department of Mathematics\ The Ohio State University\ 231 W 18th Ave\ Columbus OH 43210\ USA - | Department of Mathematics\ ETH Zürich\ 8092 Zürich\ Switzerland author: - AJ Berrick - I Chatterji - G Mislin bibliography: - 'link.bib' title: 'Homotopy idempotents on manifolds and Bass’ conjectures' --- Preface {#preface .unnumbered} ------- This note has its origins in talks discussing Bass’ trace conjecture. After one such lecture (by IC), R Geoghegan kindly mentioned his geometric perspective on the matter. Then, when another of us (AJB) spoke about the conjecture at the Kinosaki conference, he thought that a topological audience might like to hear about that geometric aspect. Thus, it seemed desirable to attempt to put the conjecture (and its weaker version) in a setting that would be as motivating as possible to topologists. The result of that attempt appears below. Acknowledgement {#acknowledgement .unnumbered} --------------- The authors warmly thank both R Geoghegan and BJ Jiang for their interest in this work. Second author partially supported by the Swiss NSF grant PA002-101406 and USA NSF grant DMS 0405032. 
Introduction {#sec:intro} ============ In 1976, H Bass [@B] conjectured that for any discrete group $G$, the Hattori–Stallings trace of a finitely generated projective module over the integral group ring of $G$ should be supported on the identity component only. Despite numerous advances (see, for example, Eckmann [@E], Emmanouil [@Emm], and our earlier paper [@BCM]), this conjecture remains open in general. In [@Geo:LNM], R Geoghegan gave the first topological interpretation, in terms of Nielsen numbers (stated below). In the setting of selfmaps of manifolds, this translates to the following. \[principal\]The following are equivalent. - (a) The Bass conjecture is a theorem. - (b) Every homotopy idempotent selfmap of a closed, smooth and oriented manifold of dimension greater than $2$ is homotopic to one that has precisely one fixed point. Throughout this paper we use the word “closed” to refer to a connected, compact manifold without boundary. Background material on homotopy idempotents and related invariants will be discussed in Sections \[hi\] and \[invar CW\]. A weaker version of Bass’ conjecture amounts to saying that for any group $G$, the coefficients of the non-identity components of the Hattori–Stallings trace of a finitely generated projective module over the integral group ring of $G$ should sum to zero (and not necessarily be individually zero). \[weakelyprincipal\]The following are equivalent. - (a) The weak Bass conjecture is a theorem. - (b) Every pointed homotopy idempotent selfmap of a closed, smooth and oriented manifold, inducing the identity on the fundamental group, has its Lefschetz number equal to the $L^{2}$–Lefschetz number of the induced map on its universal cover. Background material regarding $L^{2}$–Lefschetz numbers is explained in . The implication $\mathrm{(a)} \Rightarrow \mathrm{(b)}$ had already been observed by Eckmann in [@EN] in a slightly different form. The proofs of these two theorems proceed as follows. 
is derived from the analogous statement for finite CW–complexes (involving Nielsen numbers) from Geoghegan’s work, which is explained in . The transition from CW–complexes to manifolds is done in . The proof of as well as some applications is discussed in . For the strategy is somewhat similar: namely, we first prove the statements for finitely presented groups instead of arbitrary groups and for finite complexes instead of manifolds (see ). To deduce Bass’ conjectures (weak and classical) for arbitrary groups we use a remark due to Bass (). Review of Bass’ conjectures {#Rewiew} =========================== We briefly recall Bass’ conjectures. Let $\mathbb{Z}G$ denote the integral group ring of a group $G$. The *augmentation trace* is the $\mathbb{Z}$–linear map $$\epsilon\co\mathbb{Z}G\rightarrow\mathbb{Z},\quad g\mapsto1$$ induced by the trivial group homomorphism on $G$. Writing $[\mathbb{Z} G,\mathbb{Z}G]$ for the additive subgroup of $\mathbb{Z}G$ generated by the elements $gh-hg$ ($g,h\in G$), we identify $\mathbb{Z}G/[\mathbb{Z} G,\mathbb{Z}G]$ (the Hochschild homology group $HH_{0}(\mathbb{Z}G) $) with $\bigoplus_{[s]\in\lbrack G]}\mathbb{Z}\cdot\lbrack s]$, where $[G]$ is the set of conjugacy classes $[s]$ of elements $s$ of $G$. The *Hattori–Stallings trace* of $M=\sum_{g\in G}m_{g}g\in\mathbb{Z}G$ is then defined by $$\begin{aligned} \mathrm{HS}(M) & =M+[\mathbb{Z}G,\mathbb{Z}G]\\ & =\sum_{[s]\in\lbrack G]}\epsilon_{s}(M)[s] \ \in\bigoplus_{\lbrack s]\in\lbrack G]}\mathbb{Z}\cdot\lbrack s],\end{aligned}$$ where for $[s]\in\lbrack G]$, $\epsilon_{s}(M)=\sum_{g\in\lbrack s]} m_{g}$ is a partial augmentation. 
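These trace definitions can be made concrete by brute force for a small group. The following sketch (the names and the setup are mine, purely illustrative) computes Hattori–Stallings traces over $\mathbb{Z}[S_{3}]$, with conjugacy classes found by enumeration:

```python
from itertools import permutations

G = list(permutations(range(3)))          # S_3 as permutation tuples

def mul(g, h):                            # (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    r = [0, 0, 0]
    for i, gi in enumerate(g):
        r[gi] = i
    return tuple(r)

def conj_class(s):                        # conjugacy class of s, as a frozenset
    return frozenset(mul(mul(g, s), inv(g)) for g in G)

def HS(M):
    """Hattori-Stallings trace of M = {g: m_g}: sum coefficients over classes."""
    out = {}
    for g, m in M.items():
        C = conj_class(g)
        out[C] = out.get(C, 0) + m
    return {C: m for C, m in out.items() if m != 0}

e = (0, 1, 2)
t = (1, 0, 2)                             # a transposition

# The trace of the idempotent matrix diag(1, 1, 0) over Z[S_3] is 2e,
# so HS = 2[e]; both partial augmentations epsilon and kappa give 2,
# the rank of the free summand.
assert HS({e: 2}) == {conj_class(e): 2}

# A commutator gh - hg has vanishing HS, since gh and hg are conjugate.
g, h = (1, 2, 0), t
comm = {mul(g, h): 1}
hg = mul(h, g)
comm[hg] = comm.get(hg, 0) - 1
assert HS(comm) == {}
```

The second assertion is the elementary reason why $\mathrm{HS}$ descends to $\mathbb{Z}G/[\mathbb{Z}G,\mathbb{Z}G]$.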
In particular, the component $\epsilon_{e}$ of the identity element $e\in G$ in the Hattori–Stallings trace is known as the *Kaplansky trace* $$\kappa\co\mathbb{Z}G\rightarrow\mathbb{Z}, \quad\sum m_{g}g\mapsto m_{e}.$$ Now, an element of $K_{0}(\mathbb{Z}G)$ is represented by a difference of finitely generated projective $\mathbb{Z}G$–modules, each of which is determined by an idempotent matrix having entries in $\mathbb{Z}G$. Combining the usual trace map to $\mathbb{Z}G$ of such a matrix with any of the above traces on $\mathbb{Z}G$ turns out to induce a well-defined trace map on $K_{0}(\mathbb{Z}G)$ that is given the same name and notation as before. Moreover, $\mathrm{HS}$ and $\epsilon$ are natural with respect to all group homomorphisms (and $\kappa$ with respect to group monomorphisms). In the case of a free module $\mathbb{Z}G^{n}$, $\epsilon$ takes the value $n$ and so is just the rank of the module. In [@B], Bass conjectured the following. \[Bass\] For any group $G$, the induced map $$\mathrm{HS}\co K_{0}(\mathbb{Z}G)\rightarrow\bigoplus_{\lbrack s]\in\lbrack G]}\mathbb{Z}\cdot\lbrack s]$$ has image in $\mathbb{Z}\cdot\lbrack e]$. \[WeakBass\] For any group $G$, the induced maps $$\epsilon,\kappa\co K_{0}(\mathbb{Z}G)\rightarrow\mathbb{Z}$$ coincide. To clarify the discussion below, it is helpful to consider also reduced $K$–groups. The inclusion $\left\langle e\right\rangle \hookrightarrow G$ induces a natural homomorphism $$\mathbb{Z}=K_{0}(\mathbb{Z})=K_{0}(\mathbb{Z}\left\langle e\right\rangle )\longrightarrow K_{0}(\mathbb{Z}G)$$ whose cokernel is the *reduced* $K$*–group* $\wtilde{K}_{0}(\mathbb{Z}G)$, equipped with natural epimorphism $\eta\co K_{0}(\mathbb{Z}G)\twoheadrightarrow\wtilde{K}_{0}(\mathbb{Z}G)$. Homotopy idempotent selfmaps {#hi} ============================ Let $X$ be a connected CW–complex. A selfmap $f\co X\rightarrow X$ is called *homotopy idempotent* if $f^{2}=f\circ f$ is *freely* homotopic to $f$. 
Since $X$ is path-connected we can always assume that $f$ fixes a basepoint $x_{0}\in X$, so that $f$ induces a (not necessarily idempotent) map $f_{\sharp}\co \pi_{1}(X)\rightarrow\pi_{1}(X)$. Given a homotopy idempotent selfmap $f\co X\rightarrow X$ on a *finite dimensional CW–complex $X$*, according to Hastings and Heller [@HH] there is a CW–complex $Y$ and maps $u\co X\rightarrow Y$ and $v\co Y\rightarrow X$ such that the following diagram is (freely) homotopy commutative: $$\xymatrix{ X\ar[rr]^f\ar[dr]_{u}\ar@(ur,ul)[rrrr]^f& &X\ar[rr]^f\ar[dr]_{u}& &X\\ &Y\ar [rr]^{\mathrm{id}}\ar[ur]_{v} & &Y\ar[ur]_{v}.& }\label{Heller}$$ In fact, in this diagram we can arrange that the outside triangles strictly commute. By replacing the maps by homotopic ones, we can (and do) choose the maps to preserve basepoints. We then get the following commutative diagram of groups:$$\xymatrix{ \pi_1(X)\ar[rr]^{f_\sharp}\ar[dr]_{u_{\sharp}} & &\pi_1(X)\ar[rr]^{f_{\sharp}}\ar[dr]_{u_{\sharp}}& &\pi_1(X)\\ &\pi_1(Y)\ar[rr]\ar[ur]_{v_{\sharp}} & &\pi_1(Y)\ar[ur]_{v_{\sharp}}.& }$$ Here the bottom arrow consists of conjugation by the class of a loop at the basepoint of $Y$. Looking at the middle triangle, we see that $v_{\sharp}$ is an injective homomorphism while $u_{\sharp}$ is surjective; hence we can make the identification $$\pi_{1}(Y)\cong v_{\sharp}(\pi_{1}(Y))=\mathrm{Im}(f_{\sharp})\leq\pi_{1}(X).$$ If the homotopy idempotent $f$ is a pointed homotopy idempotent (meaning that $f^{2}$ is pointed homotopic to $f$), then $u\circ v\co Y\rightarrow Y$ will induce the identity on $\pi_{1}(Y)$. If we require that $f_{\sharp }=\mathrm{id}$, we then get that $\pi_{1}(Y)$ is isomorphic to $\pi_{1}(X)$ via $v_{\sharp}=u_{\sharp}^{-1}$. We now explain how, starting from a homotopy idempotent $f\co X\rightarrow X$ of a finite connected complex $X$ with fundamental group $G=\pi_{1}(X)$, we obtain an element $w(f)\in K_{0}(\mathbb{Z}G)$. 
In the situation above, $Y$ is called *finitely dominated*; then the singular chain complex of the universal cover $\tilde{Y}$ of $Y$ is chain homotopy equivalent to a complex of type FP over $\mathbb{Z}\pi_{1}(Y)$ $$0\rightarrow P_{n}\rightarrow\cdots\rightarrow P_{1}\rightarrow P_{0}\rightarrow\mathbb{Z}$$ with each $P_{i}$ a finitely generated projective $\mathbb{Z}\pi_{1}(Y)$–module. We then look at the *Wall element* $$w(Y)=\sum_{i=0}^{n}(-1)^{i}[P_{i}]\in K_{0}(\mathbb{Z}\pi_{1}(Y))$$ (where we follow the notation of Mislin [@Mislin-Handbook]). Its image $$\tilde{w}(Y)=\eta(w(Y))\in\wtilde{K}_{0}(\mathbb{Z}\pi_{1}(Y))$$ is known as Wall’s *finiteness obstruction*, and $\tilde{w}(Y)=0$ exactly when $Y$ is homotopy equivalent to a finite complex. Finally, we define $$w(f)=v_{\sharp}(w(Y))\in K_{0}(\mathbb{Z}G),$$ whose reduction $\tilde{w}(f)\in\tilde{K}_{0}(\mathbb{Z}G)$ was first considered by Geoghegan [@Geo:LNM]. As he notes, the element $\tilde {w}(f)$ “can be interpreted as the obstruction to splitting $f$ through a finite complex”. Before proceeding, we check that $w(f)$ is well-defined. First, we observe a form of naturality of Wall elements. \[WallElement\] Let $X$ be a finite $n$–dimensional complex, and suppose that there are maps (of spaces having the homotopy type of a connected CW–complex)$$X\overset{u}{\longrightarrow}W\overset{a}{\longrightarrow}V\overset {v}{\longrightarrow}X$$ such that $a\co W\rightarrow V$ and $u\circ v\co V\rightarrow W$ are homotopy inverse. Then the Wall elements $w(W)\in K_{0}(\mathbb{Z}\pi_{1}(W))$ and $w(V)\in K_{0}(\mathbb{Z}\pi_{1}(V))$ are related by$$w(V)=a_{{\sharp}}w(W).$$ We use the fact that, because conjugation in $G$ induces the identity map on $K_{0}(\mathbb{Z}G)$, homotopic maps induce the same homomorphism of $K$–groups. 
Recall from Wall [@Wall] that $w(W)$ is defined (uniquely) by means of any $n$–connected map $\psi\co L\rightarrow W$ where $L$ is a finite $n$–dimensional complex:$$w(W)=(-1)^{n}[\pi_{n+1}(M_{\psi},L)]$$ where $M_{\psi}$ denotes the mapping cylinder of $\psi$, and the relative homotopy group is considered as a $\pi_{1}(W)$–module (finitely generated and projective because of the assumption that $W$ is dominated by $X$). Therefore, to define $w(V)$, we may take $$w(V)=(-1)^{n}[\pi_{n+1}(M_{a\psi},L)]\text{.}$$ The result then follows from the natural isomorphism of the exact homotopy sequences (of $\pi_{1}$–modules) of the pairs $(M_{\psi},L)$ and $(M_{a\psi },L)$ induced by $a$. We now can see that the obstruction to splitting a homotopy idempotent through a finite complex is well defined. Let $X$ be a finite complex with fundamental group $G$, and for $i=1,2$ let $X\overset{u_{i}}{\longrightarrow}Y_{i}\overset{v_{i}}{\longrightarrow}X$ be a (homotopy) splitting of a homotopy idempotent map $f\co X\rightarrow X$. Then in $K_{0}(\mathbb{Z}G)$$$v_{1{\sharp}}(w(Y_{1}))=v_{2{\sharp}}(w(Y_{2}))\text{.}$$ From the homotopy commutative diagram $$\xymatrix{ & Y_1\ar[rr]^{\rm id}\ar[dr]_{v_1}& & Y_1\ar[dr]_{v_1}\ar[rr]^{\rm id}& & Y_1\\ X\ar[rr]_f\ar[dr]_{u_2}\ar[ur]_{u_1}& & X\ar[rr]_f\ar[dr]_{u_2}\ar [ur]_{u_1}& & X\ar[dr]_{u_2}\ar[ur]_{u_1} & \\ & Y_2\ar[rr]_{\rm id}\ar[ur]_{v_2}& & Y_2 \ar[rr]_{\rm id}\ar[ur]_{v_2}& &Y_2 }$$ we deduce from a simple diagram chase that $a:=u_{2}\circ v_{1}\co Y_{1}\rightarrow Y_{2}$ and $b:=u_{1}\circ v_{2}\co Y_{2}\rightarrow Y_{1}$ are mutually inverse homotopy equivalences. Therefore $$v_{2}\circ a\circ u_{1}\co X\rightarrow Y_{1}\rightarrow Y_{2}\rightarrow X$$ is such that $a$ is homotopy inverse to $u_{1}\circ v_{2}\co Y_{2}\rightarrow Y_{1}$; and thus, by , $a_{\sharp}(w(Y_{1}))=w(Y_{2})$. 
It then follows that $$v_{1\sharp}(w(Y_{1}))=v_{1\sharp}\circ a_{\sharp}^{-1}(w(Y_{2}))=v_{1\sharp }\circ u_{1\sharp}\circ v_{2\sharp}(w(Y_{2}))=f_{\sharp}\circ v_{2\sharp }(w(Y_{2})).$$ Similarly $$v_{2\sharp}(w(Y_{2}))=f_{\sharp}\circ v_{1\sharp}(w(Y_{1})).$$ Then substituting in the previous formula, and using idempotency of $f_{\sharp}$, gives the result. The key fact for giving a topological meaning to the Bass conjecture is the following, which can be extracted from a result of Wall [@Wall Theorem F] in the light of the above. It is also shown explicitly by Mislin [@OG]. \[all elts obstructions\]Let $G$ be a finitely presented group, let $\tilde{\alpha}\in\wtilde{K}_{0}(\mathbb{Z}G)$, and let $n\geq2$. Then there is a finite $n$–dimensional complex $X^{n}$ with fundamental group $G$ and a pointed homotopy idempotent selfmap $f$ of $X^{n}$ inducing the identity on $\pi_{1}$, such that $\tilde{w}(f)$ is equal to $\tilde{\alpha}$. \[unreduced Wall\]For $n\geq3$, the unreduced version of this result also holds. For, given $\alpha\in K_{0}(\mathbb{Z}G)$, then choose a map $f$ as in the theorem with respect to $\tilde{\alpha}=\eta(\alpha)$. It follows that for some nonnegative $r,s$ we have $w(f)=\alpha+[\mathbb{Z}G]^{r}-[\mathbb{Z}G]^{s}$. Replacing $f$ by $f\vee\mathrm{id}_{W}$ where $W=\bigl( \bigvee _{r}S^{3}\bigr) \vee\bigl( \bigvee_{s}S^{2}\bigr)$ then gives the desired selfmap. When $n=2$, the method fails, as without the possibility of adjoining a simply-connected space of non-positive Euler characteristic we can only increase the rank of the Wall element. (Recall from Mislin [@Mislin-Handbook Lemma 5.1] that for any finitely dominated space $Y$, the rank of $w(Y)$ equals $\chi(Y)$.) 
Invariants for selfmaps of complexes\[invar CW\] ================================================ We recall from, for example, the articles [@Geo:LNM; @Geo:Hbk] by Geoghegan, the definition of the *Nielsen number* $N(f)$ of a selfmap $f\co X\rightarrow X$ of a finite connected complex (assumed, as discussed above, to fix a basepoint of $X$). Let $f_{{\sharp}}$ be the endomorphism of $G=\pi_{1}(X,x)$ induced by $f$. Define elements $\alpha$ and $\beta$ of $G$ to be $f_{{\sharp}}$*–conjugate* if for some $z\in G$$$\alpha=z\cdot\beta\cdot(f_{{\sharp}}z)^{-1}\,\text{,}$$ and let $G_{f_{{\sharp}}}$ denote the set of $f_{{\sharp}}$–conjugacy classes, making $\mathbb{Z}G_{f_{{\sharp}}}$ a quotient of $\mathbb{Z}G$. Now $N(f)$ is defined to be the number of nonzero coefficients in the formula for the *Reidemeister trace* of $f$ at $x\in X$:$$R(f,x)=\sum\nolimits_{C\in G_{f_{{\sharp}}}}n_{C}\cdot C\in\mathbb{Z}G_{f_{{\sharp}}}\text{.}$$ The coefficient $n_{C}$ can be described geometrically as the fixed-point index of a fixed-point class of $f$, and homologically in terms of traces of the homomorphisms induced by $f$ on the chain complex of the universal cover of $X$. In the literature, $R(f,x)$ is also known as the *generalized Lefschetz number*. When, as prompted by above, we take $f$ to induce the identity on $\pi_{1}$, then $R(f,x)\in\bigoplus_{\lbrack s]\in\lbrack G]}\mathbb{Z}\cdot\lbrack s]$ and the following holds (cf Geoghegan [@Geo:Hbk p505]). \[GeogheganLemma\] In the setting of diagram of where $X$ is a finite connected complex, and $f$ is a pointed homotopy idempotent selfmap inducing the identity map on the fundamental group,$${\mathrm{HS}}({w}(f))=R(f,x).$$ For the computation of Nielsen numbers, the following result, also proved in Jiang [@Jiang-Lectures p20] and attributed there to Fadell, is useful. 
\[Nielsens agree\]Suppose that the diagram of finite connected complexes and based maps $$\xymatrix{ \wwbar{T}\ar[rr]^{\bar{g}}\ar[d]^r & &\wwbar{T}\ar[drr]^r & & \\ T \ar[rr]^g & &T\ar[u]^s\ar[rr]^{\rm id} & &T }$$ is commutative up to (free) homotopy. Then $N(g)=N(\bar{g})$. We use the definition and notation for $N(g)$ and $N(\bar{g})$ given above; we also put $\wwbar{G}=\pi_{1}(\wwbar{T},\,s(t_{0}))$ where $t_{0}$ is the basepoint of $T$. Evidently, because $s\circ g\simeq\bar{g}\circ s$, there is a well-defined function$$s_{{\sharp}}\co G_{g_{{\sharp}}}\longrightarrow\wwbar{G}_{\bar{g}_{{\sharp}}}$$ which, because also $r\circ\bar{g}\simeq g\circ r$, while $r\circ s\simeq\mathrm{id}_{T}$, has left inverse $r_{{\sharp}}\co \wwbar{G}_{\bar {g}_{{\sharp}}}\rightarrow G_{g_{{\sharp}}}$. In particular, $s_{{\sharp}}$ is injective. With reference to the sentence before [@Geo:LNM (2.2)], note that this is not true in general without some condition on $s$, such as its having a left inverse. The formula for the Reidemeister trace of $g$ at $t_{0}\in T$ is:$$R(g,t_{0})=\sum\nolimits_{C\in G_{g_{{\sharp}}}}n_{C}\cdot C\in\mathbb{Z}G_{g_{{\sharp}}}\text{.}$$ According to [@Geo:LNM (2.2)], we also have$$R(\bar{g},\,s(t_{0}))=\sum\nolimits_{C\in G_{g_{{\sharp}}}}n_{C}\cdot s_{{\sharp}}(C)\in\mathbb{Z}\wwbar{G}_{\bar{g}_{{\sharp}}}\text{.}$$ Because $s_{{\sharp}}$ is injective, the two sums have the same number of nonzero coefficients; that is, the Nielsen numbers agree. The next lemma permits us in our discussion of Nielsen numbers to restrict to the case of those homotopy idempotents that are pointed homotopy idempotents and induce the identity on the fundamental group. [\[reduction\]]{} Suppose that $f\co X\rightarrow X$ is a homotopy idempotent on a finite connected complex $X$, fixing $x_{0}\in X$, with $G=\pi _{1}(X,x_{0})$ and $H:=f_{\sharp}(G)$. 
Then there is a finite connected complex $K$ with fundamental group isomorphic to $H$ and a pointed homotopy idempotent $g\co K\rightarrow K$, inducing the identity map on $H$, such that $N(f)=N(g)$. Furthermore, if $G$ satisfies the Bass conjecture, then so does $H$. Let $X\overset{u}{\longrightarrow}Y\overset {v}{\longrightarrow}X$ be a splitting for $f$. Then, by [@Mislin-Handbook Corollary 5.5], $Y\times S^{3}$ is homotopy equivalent to a finite connected complex $K$, because $Y$ is finitely dominated and the Euler characteristic of $S^{3}$ is zero. Let $h\co Y\times S^{3}\rightarrow K$ be a pointed homotopy equivalence, with pointed homotopy inverse $k$; define $g\co K\rightarrow K$ to be the map $h\circ(\mathrm{id}_{Y}\times\{\ast\})\circ k$, where $\mathrm{id}_{Y}\times\{\ast\}\co Y\times S^{3}\rightarrow Y\times S^{3}$ denotes the idempotent on $Y\times S^{3}$ given by the projection onto $Y$. Clearly $g$ is a pointed homotopy idempotent, inducing the identity on the fundamental group of $K$, and $\pi_{1}(K)\cong\pi_{1}(Y)\cong H$. Writing $u^{\prime}=u\times\mathrm{id}_{S^{3}}$ and $v^{\prime}=v\times \mathrm{id}_{S^{3}}$, we now apply with $\wwbar{T}=X\times S^{3}$ and $T=K$, and maps $\wbar{g}=f\times \{\ast\}$, $r=h\circ u^{\prime}$, $s=v^{\prime}\circ k$ and $g$ as defined already. This yields the following homotopy commutative diagram $$\xymatrix{ X\times S^{3}\ar[rr]^{f\times\{\ast\}}\ar[d]^{u^{\prime}}\ar@(l,ul)[dd]_r & & X\times S^{3}\ar[dr]^{u'}\ar@(ur,u)[ddrr]^r & & \\ Y\times S^{3}\ar[d]^h & & Y\times S^{3}\ar[u]^{v'} & Y\times S^{3}\ar[dr]^h &\\ K\ar[rr]^g & & K\ar[rr]^{\mathrm{id}}\ar[u]^k\ar@(ul,l)[uu]^s & & K. }$$ We conclude that $N(g)=N(f\times\{\ast\})$. 
On the other hand, $N(f\times\{\ast\})=N(f)$, as can be seen by again applying , with the top part of the diagram as before, but $T=X$, $r=\mathrm{pr}_{X}$ and $s$ the inclusion $x\mapsto(x,\ast)$ $$\xymatrix{ X\times S^{3}\ar[rr]^{f\times\{\ast\}}\ar[d] & & X\times S^{3}\ar[drr] & & \\ X\ar[rr]^f & & X \ar[rr]^{\mathrm{id}}\ar[u]& & X. }$$ Therefore $N(f)=N(g)$. That $H\cong\pi_{1}(Y)$ satisfies the Bass conjecture if $G$ does follows by observing that $v_{\sharp}\co \pi_{1}(Y)\rightarrow\pi_{1}(X)$ is a split injection, and therefore the induced map $HH_{0}(\mathbb{Z}\pi_{1}(Y))\rightarrow HH_{0}(\mathbb{Z}\pi_{1}(X))$ is a split injection too. We are now able to obtain a restatement of the theorem of Geoghegan referred to in the Introduction ([@Geo:LNM Theorem 4.1’] (i)’ $\Leftrightarrow$ (iii)’), in a form suitable to the present treatment. \[Geoghegan Theorem 4.1’\] Let $G$ be a finitely presented group. The following are equivalent. - (a) $G$ satisfies Bass’ . - (b) Every homotopy idempotent selfmap $f$ on a finite connected complex with fundamental group $G$ has Nielsen number either zero or one. We start with the implication $\mathrm{(a)} \Rightarrow \mathrm{(b)}$. Let $f$ be as in (b). Then by we can assume that $f\co X\rightarrow X$ is actually a pointed homotopy idempotent on a finite connected complex $X$ that induces the identity map on $\pi_{1}(X,x_{0})\cong G$. Because $G$ satisfies the Bass conjecture, we have $\mathrm{HS}(w(f))\in\mathbb{Z}\cdot\lbrack e]$. Then, by , $R(f,x_{0})$ has at most one nonzero coefficient, and $N(f)\leq1$. In the other direction, we of course use and . Then, given ${\alpha}\in{K}_{0}(\mathbb{Z}G)$, there is a finite $n$–dimensional complex $X$ ($n\geq3$) with fundamental group $G$ and a pointed homotopy idempotent selfmap $f$ of $X$ inducing the identity on $\pi_{1}(X)=G$, such that ${w}(f)$ is equal to ${\alpha}$. So, from we deduce that ${\mathrm{HS}}({\alpha })=R(f,x)$. 
This last term vanishes when $N(f)=0$; if $N(f)=1$, $R(f,x)$ is a nonzero multiple of some class $[s]$, and we are done in case $[s]=[e]$. So, the remaining case is where $R(f,x)$ is a nonzero multiple (necessarily $\chi(Y)$) of some class $[s]\neq\lbrack e]$. In that event we may turn instead to $f^{\prime}=f\vee\mathrm{id}_{S^{2}}$ with corresponding $Y^{\prime}=Y\vee S^{2}$ having $w(f^{\prime})=w(f)+[\mathbb{Z}G]$. However, this implies the contradiction that $N(f^{\prime})=2$, and can therefore be eliminated. The actual wording of [@Geo:LNM Theorem 4.1’] is in terms of the Bass conjecture for a particular element $\alpha$ of $K_{0}(\mathbb{Z}G)$, rather than for $G$ itself. Selfmaps of manifolds\[CW->M\] ================================= From Wecken’s work [@W], one knows that $N(f)$ serves as a lower bound for the number of fixed points of any map homotopic to $f$. Thus, the implication (ii) $\Rightarrow$ (i) in the next result is immediate. \[Nielsen to unique fp\]Suppose that $f\co M\rightarrow M$ is a selfmap of a closed manifold $M$ of dimension at least $3$. Then the following are equivalent: (i) the Nielsen number of $f$ is $0$ or $1$; (ii) $f$ is homotopic to a map having a unique, arbitrarily chosen fixed point. We need only prove that (i) implies (ii). First, various results in the literature (see in particular Wecken [@W], Brown [@Brown], and Shi [@Shi] for the PL case; Jiang [@Jiang-LNM886] for the smooth case) show that every selfmap of $M$ with Nielsen number $N$ is homotopic to a map with exactly $N$ fixed points. By a result of Schirmer [@Schirmer Lemma 2], these fixed points may be chosen arbitrarily. Second, recall from [@Schirmer] that every fixed-point-free selfmap of a connected compact PL manifold of dimension at least $3$ is homotopic to a selfmap having an arbitrary unique fixed point. The argument there can be adapted as follows. 
Choose $a\in M$, and consider the closure $\bar{B}$ of an open ball $B$ around $a$ lying in a chart domain for $M$. Since $f$ is fixed-point-free, we choose the ball $\bar{B}$ to be small enough so that it is disjoint from its image under $f$. For convenience, we consider points of $\bar{B}$ with coordinates so that $a=\mathbf{0}$ and $\bar{B}$ consists of those points $x$ with $\left\| x\right\| \leq1$. Now let $\gamma$ be any path from $a$ to $f(a)$ that continues a unit-speed ray from $a$ to the boundary of $\bar{B}_{1/2}$ (the closed ball of points of $B$ of norm at most $1/2$) and never re-enters $\bar{B}_{1/2}$. Also, let $\lambda\co M\rightarrow\lbrack0,1]$ be a function having $\lambda^{-1}(1)=M-B$ and $\lambda^{-1}(0)=\bar{B}_{1/2}$. Then the desired map $g\co M\rightarrow M$ homotopic to $f$ is given as follows.$$g(x)=\left\{ \begin{array} [c]{lll}\gamma(t_{x}) & \quad & 0\leq2\left\| x\right\| <1\\ f((1-\lambda(x))a+\lambda(x)x) & & 1\leq2\left\| x\right\| \leq2\\ f(x) & & x\in M-\bar{B}\ \text{,}\end{array} \right.$$ where $t_{x}=1-\exp(-2\left\| x\right\| /(1-2\left\| x\right\| ))$. Here, recall the standard inequality $$\ln(1-u)>-u/(1-u)$$ for $0<u<1$. It implies that, whenever $t_{x}\neq0$ and $\gamma(t_{x})\in \bar{B}_{1/2}\,$, so that $\left\| \gamma(t_{x})\right\| =t_{x}\,$, $$\left\| \gamma(t_{x})\right\| >2\left\| x\right\| \,\text{.}$$ Hence $a$ is the unique fixed point of $g$. Note that it is possible to make $g$ smooth. For, since every map is homotopic to a smooth map and homotopy does not change Nielsen numbers, there is no loss of generality in assuming $f$ to be smooth. Then, by our taking both $\gamma$ and $\lambda$ to be smooth functions in the above argument, a smooth map $g$ results. Note that this result cannot be extended to dimension $2$ in general. 
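The two estimates used in this construction — $t_{x}>2\left\| x\right\|$ away from the centre (equivalently $\ln(1-u)>-u/(1-u)$ with $u=2\left\| x\right\|$), and $t_{x}=0$ exactly at $a$ — are easy to confirm numerically. The following throwaway check is mine, not part of the argument:

```python
import math

def t(u):
    """t_x as a function of u = 2*||x||, for 0 <= u < 1."""
    return 1.0 - math.exp(-u / (1.0 - u))

# On a fine grid in (0, 1): t(u) > u, so whenever gamma(t_x) still lies in
# the half-ball (where its norm is t_x), it has norm > 2||x|| >= ||x||;
# hence g has no fixed point with 0 < 2||x|| < 1.
for k in range(1, 1000):
    u = k / 1000.0
    assert t(u) > u

# At the centre, t(0) = 0, so g(a) = gamma(0) = a: the unique fixed point.
assert t(0.0) == 0.0
```

This is exactly the content of the logarithm inequality quoted above, sampled on a grid rather than proved.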
Indeed, for every connected, closed surface of negative Euler characteristic and every natural number $n$, Jiang [@Jiang:non-Wecken Theorem 2] exhibits a selfmap $f_{n}$ of the surface having $N(f_{n})=1$, but with every map homotopic to $f_{n}$ having more than $n$ fixed points. For some particular results on selfmaps on surfaces, see also Kelly [@K]. We next observe that selfmaps of complexes may be studied by means of selfmaps of manifolds without changing the Nielsen number. \[mani\] \[complex to manifold\]Let $X$ be a finite connected complex. Then the following hold. (a) There is a closed, oriented and smooth manifold $M$ of dimension at least $3$ with maps $r\co M\rightarrow X$ and $s\co X\rightarrow M$ having $r\circ s$ pointed homotopic to $\mathrm{id}_{X}$ and inducing isomorphisms of fundamental groups. (b) For any selfmap $f\co X\rightarrow X$, the selfmap $\bar{f}=s\circ f\circ r\co M\rightarrow M$ has Nielsen number$$N(\bar{f})=N(f)\text{.}$$ (c) If $f$ is either homotopy idempotent or pointed homotopy idempotent, then so is $\bar{f}$. (a) Working up to pointed homotopy type, we may assume that $X$ is a finite simplicial complex of dimension $n\geq2$. By a result of Wall [@Wall:AnnM1966 Theorem 1.4] we can do surgery on the constant map $S^{2n}\rightarrow X$ to obtain a smooth, oriented (indeed, stably parallelizable) closed $2n$–manifold $M$ and an $n$–connected map (called an $n$–equivalence by Spanier [@Spanier]) $r\co M\rightarrow X$. Because $n\geq2$, the map $r$ is a $\pi_{1}$–isomorphism. Moreover, since the obstruction groups $H^{i}(Y;\pi_{i}(r))$ all vanish (or by [@Spanier (7.6.13)]), the identity map $X\rightarrow X$ factors up to pointed homotopy through $M\rightarrow X$, and the result follows. (b) This result is immediate from above, on putting $\wwbar{T}=M,\ T=X,\ g=f$ and $\bar{g}=\bar{f}$. 
(c) Obviously,$$\begin{aligned} \bar{f}\circ\bar{f} & \simeq s\circ f\circ r\circ s\circ f\circ r\\ & \simeq s\circ f\circ f\circ r\simeq s\circ f\circ r\simeq\bar{f}\text{,}\end{aligned}$$ and $\bar{f}$ is a pointed idempotent if $f$ is. For any connected non-contractible space $X$, the monoid of homotopy classes of selfmaps of $X$ always contains at least two idempotents, the class of nullhomotopic maps and the class of maps homotopic to the identity. Each constant map in the former class has exactly one fixed point (which by connectivity is arbitrary), and obviously has Nielsen number $1$. On the other hand, for $X$ a finite complex the identity map has Nielsen number equal to $\min\{1,\left| \chi(X)\right| \}$. When $X$ is also a smooth manifold, it admits a smooth vector field whose only singularity is an arbitrarily chosen point $x_{0}\in X$. Its associated flow provides a homotopy from the identity map to a smooth map with sole fixed point $x_{0}$. Proof of and applications {#prfThm1} ========================= The discussion above now allows a reformulation of statement (b) of . combines with to yield a manifold version of statement (b), as follows. \[PrincipalUtile\]Let $G$ be a finitely presented group. The following are equivalent. - (a) $G$ satisfies Bass’ . - (b) Given any closed, smooth and oriented manifold $M$ of dimension at least $3$ with $G=\pi_{1}(M)$, every homotopy idempotent selfmap $f$ on $M$ is homotopic to one that has a single fixed point. The following facts combine to show that the dimension $3$ in (b) above is best possible. For $F$ a closed surface of negative Euler characteristic and $n \ge 2$, Kelly [@Kelly-new] constructs a homotopy idempotent selfmap $f_n\co F\rightarrow F$ such that every map homotopic to $f_n$ has at least $n$ fixed points. On the other hand, the fundamental groups of surfaces are well-known to satisfy Bass’ (see Eckmann [@EId], for example). 
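For orientation, the formula $N(\mathrm{id})=\min\{1,\left| \chi(X)\right| \}$ quoted above can be evaluated on closed orientable surfaces (an illustrative computation of mine, not from the paper):

```python
def euler_char_surface(genus):
    """Euler characteristic of the closed orientable surface of genus g."""
    return 2 - 2 * genus

def nielsen_of_identity(chi):
    return min(1, abs(chi))

# Sphere (g = 0): chi = 2, N(id) = 1 -- every map homotopic to the
# identity has a fixed point.
assert nielsen_of_identity(euler_char_surface(0)) == 1

# Torus (g = 1): chi = 0, N(id) = 0 -- consistent with the identity
# being homotopic to a fixed-point-free translation.
assert nielsen_of_identity(euler_char_surface(1)) == 0

# Higher genus: chi = 2 - 2g < 0, so N(id) = 1.
assert all(nielsen_of_identity(euler_char_surface(g)) == 1
           for g in range(2, 10))
```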
The following argument of Bass, reported by R Geoghegan, shows that it suffices to consider finitely presented groups in considering Bass’ conjectures. \[GeoBass\] Conjectures \[Bass\] and \[WeakBass\] hold for all groups if they hold for all finitely presented groups. Fix a group $G$. We show that any idempotent $\mathbb{Z}G$–matrix $A$ lifts to an idempotent matrix $A_{1}$ over the group ring of a finitely presented group $G_{1}$. There is a finitely generated subgroup $G_{0}$ of $G$ such that the entries of $A$ lie in $\mathbb{Z}G_{0}$. Write $G_{0}$ as $F/R$ where $F$ is a finitely generated free group; and let $B$ be a lift of $A$ to $\mathbb{Z}F$. Then there is a finite subset $W$ of $R$ such that the matrix $B^{2}-B$ has all its entries in the ideal of $\mathbb{Z}F$ generated by $\{1-r\mid r\in W\}$. Now let $R_{1}\leq R$ be the normal closure of $W$ in $F$. Then we have $G_{1}:=F/R_{1}$ finitely presented, and the image $A_{1}$ of $B$, with entries in $\mathbb{Z}G_{1}$, is an idempotent matrix. The map $G_{1}\twoheadrightarrow G_{0}\hookrightarrow G$ takes $A_{1}$ to $A$. Therefore $[A_{1}]\mapsto\lbrack A]$ under the induced map $K_{0}(\mathbb{Z}G_{1})\rightarrow K_{0}(\mathbb{Z}G)$. Then the result follows from naturality of $\mathrm{HS}$ . After , it is now straightforward to deduce . Our arguments lead to variations on Theorem 1. First, one can sharpen the implication (b) $\Rightarrow$ (a) by referring in (b) to a smaller class of manifolds. 
Because, by above, for a finitely presented group $G$ any $\tilde{\alpha}\in\tilde{K}_{0}(\mathbb{Z}G)$ can be realized by a homotopy idempotent selfmap $f$ of a $2$–dimensional complex with fundamental group $G$ (so $\tilde{w}(f)=\tilde{\alpha}$), the Bass conjecture is equivalent to the following: *Every homotopy idempotent selfmap of a closed, stably parallelizable smooth $4$–manifold is homotopic to one with a single fixed point.* In the other direction, one can strengthen (a) $\Rightarrow$ (b) by enlarging the class of spaces to which (b) applies. There is no need to restrict attention to oriented, smooth manifolds; one can also apply to PL manifolds and other, possibly bounded, Wecken spaces (see Jiang [@Jiang-AmJM1980]). As an application of we obtain the following. Any homotopy idempotent selfmap on a closed, smooth and oriented $3$–dimensional manifold $M$ is homotopic to one with a single fixed point. It is enough to show that the fundamental group $G$ of a closed smooth oriented $3$–dimensional manifold $M$ satisfies Bass’ conjecture; the Corollary then follows from . By Kneser’s result (see Milnor [@Milnor]), $M$ is a connected sum of prime manifolds $M_{i}$, where each $M_{i}$ belongs to one of the following classes: 1. $M_{i}$ with finite fundamental group; 2. $M_{i}$ with fundamental group $\mathbb{Z}$; 3. $M_{i}$ a $K(\pi,1)$ manifold (so $\pi$ is a Poincaré duality group). Note that the fundamental group of $M$ is the free product of the fundamental groups of the various $M_{i}$. By Gersten’s result [@G], given two groups $\Gamma$ and $H$, the reduced projective class group of the free product $\Gamma\ast H$ reads $$\tilde{K}_{0}(\mathbb{Z}(\Gamma\ast H))\cong\tilde{K}_{0}(\mathbb{Z}\Gamma)\oplus\tilde{K}_{0}(\mathbb{Z}H)\;.$$ Thus, every element in $\tilde{K}_{0}(\mathbb{Z}(\Gamma\ast H))$ is an integral linear combination of projectives induced up from $\Gamma$ and $H$ respectively. 
It follows that if Bass’ conjecture holds for both $\Gamma$ and $H$, then it holds for $\Gamma\ast H$ as well. In the list above, clearly finite groups and $\mathbb{Z}$ satisfy Bass’ conjecture. That 3–dimensional Poincaré duality groups satisfy Bass’ conjecture follows from Eckmann’s work (see Eckmann [@EId p247]) on groups of rational cohomological dimension $2$. Since the Bass conjecture is known for instance for the fundamental groups of manifolds in the class below [@EId], we have another consequence. Any homotopy idempotent selfmap of a non-positively curved, oriented closed manifold of dimension at least $3$ is homotopic to a map with a single fixed point. It would be interesting to see geometric proofs of these facts. Lefschetz numbers {#Lef} ================= Let $X$ be a CW–complex and $f\co X\rightarrow X$ a continuous selfmap. Then $f$ induces for each $n\in\mathbb{N}$ a map $$f_{n}\co H_{n}(X;\,\mathbb{Q})\rightarrow H_{n}(X;\,\mathbb{Q})$$ of $\mathbb{Q}$–vector spaces. If the sum of the dimensions of the vector spaces $H_{n}(X;\,\mathbb{Q})$ is finite, the *Lefschetz number* of $f$ is defined as $$L(f)=\sum_{n\geq0}(-1)^{n}{\mathrm{{Tr}}}(f_{n}).$$ In cases where the CW–complex $X$ is finite or finitely dominated, the Lefschetz number of a selfmap is obviously always defined, and for $f=\mathrm{id}_{X}$, $L(f)=\chi(X)$, the Euler characteristic of $X$. One extends this definition to the case of $G$–CW–complexes as follows. Let $\mathcal{N}\!G$ denote the von Neumann algebra of the discrete group $G$ (*i.e.* the double commutant of $\mathbb{C}G$ considered as a subalgebra of the algebra of bounded operators on the Hilbert space $\ell ^{2}G$ – see for example Lück [@Lueck]). 
With $e$ as the neutral element of $G$, write $e\in G\subset\ell^{2}G$ for the delta-function $${e}\co G\rightarrow\mathbb{C},\quad g\mapsto\left\{ \begin{array} [c]{cc}1 & \hbox{ if }g=e\\ 0 & \hbox{ otherwise.}\end{array} \right.$$ The standard trace $$\mathrm{{tr}}_{G}\co \mathcal{N}\!G\rightarrow\mathbb{C},\quad x\mapsto\langle xe,e\rangle_{\ell^{2}G}\in\mathbb{C}$$ extends to a trace $\mathrm{{tr}}_{G}(\phi)\in\mathbb{C}$ for $\phi \co M\rightarrow M$ a map of finitely presented $\mathcal{N}\!G$–modules as follows. Recall that a finitely presented $\mathcal{N}\!G$–module $M$ is of the form $S\oplus T$ with $S$ projective and $T$ of von Neumann dimension $0$; the trace of $\phi$ is then defined as the usual von Neumann trace of the composite $S\rightarrow M\rightarrow M\rightarrow S$ (for the trace of selfmaps of finitely generated projective $\mathcal{N}\!G$–modules see Lück [@Lueck]). It follows that $\mathrm{{tr}}_{G}(\mathrm{id}_{M})=\dim _{G}(M)$, the von Neumann dimension of the finitely presented $\mathcal{N}\!G$–module $M$ (a non-negative real number cf [@Lueck]). The Kaplansky trace (as defined in ) induces a trace on $G$–maps $\psi\co P\rightarrow P$ of finitely generated projective $\mathbb{Z}G$–modules, and $${\kappa}(\psi)=\mathrm{{tr}}_{G}(\mathrm{id}_{\mathcal{N}\!G}\otimes\psi)$$ where $\mathrm{id}_{\mathcal{N}\!G}\otimes\psi\co \mathcal{N}\!G\otimes _{\mathbb{Z}G}P\rightarrow\mathcal{N}\!G\otimes_{\mathbb{Z}G}P$. Now let $Z$ be a free $G$–CW–complex that is dominated by a cocompact $G$–CW–complex (for example, the universal cover of a finitely dominated CW–complex with fundamental group $G$). 
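As a sanity check on the two trace definitions (our illustration, not part of the original text), consider the simplest case: let $\psi$ be right multiplication by an idempotent $p=\sum_{g\in G}a_{g}g\in\mathbb{Z}G$ on the free module $\mathbb{Z}G$, so that the image of $\psi$ is the rank-one projective $P=\mathbb{Z}G\cdot p$. With the usual convention that the Kaplansky trace of an idempotent is the coefficient of the neutral element, the displayed identity ${\kappa}(\psi)=\mathrm{{tr}}_{G}(\mathrm{id}_{\mathcal{N}\!G}\otimes\psi)$ can be checked directly:

```latex
% Rank-one case (illustration): \psi = right multiplication by the idempotent
% p = \sum_g a_g g on \mathbb{Z}G.  Then NG \otimes_{ZG} ZG \cong NG, and the
% von Neumann trace of right multiplication by p is computed on the delta-function e:
\kappa(\psi) \;=\; a_{e} \;=\; \langle p\cdot e,\, e\rangle_{\ell^{2}G}
\;=\; \mathrm{tr}_{G}(p) \;=\; \mathrm{tr}_{G}(\mathrm{id}_{\mathcal{N}\!G}\otimes\psi).
```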
A $G$–map $\tilde{f}\co Z\rightarrow Z$ induces a map of singular chain complexes $C_{\ast}(Z)\rightarrow C_{\ast}(Z)$ and of $L^{2}$–chain complexes $$C_{\ast}^{(2)}(Z):=\mathcal{N}\!G\otimes_{\mathbb{Z}G}C_{\ast}(Z)\rightarrow C_{\ast}^{(2)}(Z)\text{,}$$ and therefore of $L^{2}$–homology groups $$H_{n}^{(2)}(Z):=H_{n}(Z;\,\mathcal{N}\!G)\rightarrow H_{n}^{(2)}(Z).$$ The groups $H_{n}^{(2)}(Z)$ are finitely presented $\mathcal{N}\!G$–modules, because the complex $C_{\ast}^{(2)}\!(Z)$ is chain homotopy equivalent to a complex of type FP over $\mathcal{N}\!G$ and because the category of finitely presented $\mathcal{N}\!G$–modules is known to be abelian [@Lueck]. Thus the induced map $$\wtilde{f}_n\co H_n^{(2)}(Z)\to H_n^{(2)}(Z)$$ is a selfmap of a finitely presented $\mathcal{N}\!G$–module and has, therefore, a well-defined trace $\mathrm{{tr}}_G(\wtilde{f}_n)$ as explained at the beginning of this section; we also write $\beta_n^{(2)}(Z;G)$ for the von Neumann dimension of $H_n^{(2)}(Z)$. Let $Y$ be a finitely dominated CW–complex with fundamental group $G$ and universal cover $Z$. Then the $n$th $L^{2}$–Betti number $\beta _{n}^{(2)}(Y)$ of $Y$ is defined to be $\beta_{n}^{(2)}(Z;G)$. (If $Y$ happens to be a finite complex, this reduces to the usual $L^{2}$–Betti number of $Y$ as defined for instance in Atiyah [@atiyah] and Eckmann [@EM].) By definition, the alternating sum $\sum(-1)^{i}\beta_{i}^{(2)}(Y)=\chi^{(2)}(Y)$ is the $L^{2}$–Euler characteristic of $Y$. Recall that for $Y$ a finite complex, $\chi(Y)=\chi^{(2)}(Y)$ by Atiyah’s formula [@atiyah]; see also Chatterji–Mislin [@us] and below for more general results. We now define $L^{2}$–Lefschetz numbers as follows. Let $Z$ be a free $G$–CW–complex that is dominated by a cocompact $G$–CW–complex and let $\wtilde{f}\co Z\rightarrow Z$ be a $G$–map. Denote by ${\wtilde{f}}_n\co H_{n}^{(2)}(Z)\rightarrow H_{n}^{(2)}(Z)$ the induced map in $L^{2}$–homology. 
Then the $L^{2}$*–Lefschetz number* of $\wtilde{f}$ is given by $$L^{(2)}(\wtilde{f}):=\sum_{n\geq0}(-1)^{n}\mathrm{tr}_{G}(\wtilde{f}_n)\in \mathbb{R}.$$ In case $Z$ is cocompact our $L^{2}$–Lefschetz number agrees with the one defined by Lück and Rosenberg [@LR Remark 1.7]. If $Y$ is a finitely dominated connected CW–complex with fundamental group $G$, and with universal cover the free $G$–space $\tilde{Y}$, then the $L^{2}$–Lefschetz number of the identity map of $\tilde{Y}$ is $\chi^{(2)}(\tilde{Y};\,G)=\chi^{(2)}(Y)$, the $L^{2}$–Euler characteristic of $Y$. Proof of {#Proof 2} ========= Let $Y$ be a finitely dominated connected CW–complex. Thus $\chi(Y)$ and $\chi^{(2)}(Y)$ are defined as above, and are related as follows. \[wBass=Euler\]Let $G$ be a finitely presented group. Then the following holds. - Let $Y$ be a finitely dominated connected CW–complex with fundamental group $G$. If the finiteness obstruction $\tilde{w}(Y)\in\tilde {K}_{0}(\mathbb{Z}G)$ is a torsion element, then $\chi^{(2)}(Y)=\chi(Y)$. - The following are equivalent. 1. The weak Bass conjecture holds for $G$. 2. 
For any finitely dominated connected CW–complex $Y$ with $\pi _{1}(Y)=G$, we have $$\chi^{(2)}(Y)=\chi(Y).$$ As in , for $Y$ finitely dominated, the chain complex $C_{\ast}(\tilde{Y})$ is chain homotopy equivalent to a chain complex $P_{\ast}$ of type FP over $\mathbb{Z}G$, $G=\pi_{1}(Y)$, and we have the Wall element $$w(Y)=\sum_{i=0}^{n}(-1)^{i}[P_{i}]\in K_{0}(\mathbb{Z}G).$$ As $\mathcal{N}\!G\otimes_{\mathbb{Z}G}P_{\ast}\simeq C^{(2)}(\tilde{Y})$ and $\mathrm{{tr}}_{G}(\mathcal{N}\!G\otimes_{\mathbb{Z}G}P_{i})=\kappa(P_{i})$, we see that $$\chi^{(2)}(Y)=\sum(-1)^{i}\mathrm{{tr}}_{G}(\mathcal{N}\!G\otimes _{\mathbb{Z}G}P_{i})=\sum(-1)^{i}\kappa(P_{i})=\kappa(w(Y)).$$ On the other hand, $\mathbb{Z}\otimes_{\mathbb{Z}G}P_{\ast}\simeq C_{\ast}(Y)$ and $\epsilon(P_{i})=\mathrm{dim}_{\mathbb{Q}}(\mathbb{Q}\otimes_{\mathbb{Z}G}P_{i})$, so that $$\chi(Y)=\sum(-1)^{i}\mathrm{dim}_{\mathbb{Q}}(\mathbb{Q}\otimes_{\mathbb{Z}G}P_{i})=\sum(-1)^{i}\epsilon(P_{i})=\epsilon(w(Y)).$$ (a) Again we observe that for $n>1$, $$\chi(Y\vee(\vee^{k}S^{n}))=\chi(Y)+(-1)^{n}k$$ and $$\chi^{(2)}(Y\vee(\vee^{k}S^{n}))=\chi^{(2)}(Y)+(-1)^{n}k.$$ On the other hand, $w(Y\vee(\vee^{k}S^{n}))=w(Y)+(-1)^{n}k$, so that without loss of generality we may assume that actually $w(Y)$ is a torsion element. But then, since the range of the Hattori–Stallings trace is torsion-free, Bass’ conjectures are valid for torsion elements of $K_{0}(\mathbb{Z}G)$ and we have $$\kappa(w(Y))=\epsilon(w(Y))\text{.}$$ Thus, $$\chi^{(2)}(Y)=\chi(Y),$$ proving the claim. (b) (i) $\Rightarrow$ (ii): Assuming the weak Bass conjecture, we have $$\chi^{(2)}(Y)=\kappa(w(Y))=\epsilon(w(Y))=\chi(Y).$$ Assuming the Bass conjecture, this implication has also been proved by Eckmann [@EN]. \(ii) $\Rightarrow$ (i): Recall from that for a finitely generated projective $\mathbb{Z}G$–module $P$ there is always a finitely dominated CW–complex $Y$ whose Wall element $w(Y)$ equals $[P]\in K_{0}(\mathbb{Z}G)$. 
We then have that $$\kappa(P)=\kappa(w(Y))=\chi^{(2)}(Y)=\chi(Y)=\epsilon(w(Y))=\epsilon(P). \proved$$ Next, we need an intermediate result. \[Lef=Euler\]Let $X$ be a finite connected complex, and $f\co X\rightarrow X$ be a homotopy idempotent. Let $Y$ be a finitely dominated CW–complex determined by $f$ as in . 1. Then $L(f)=\chi(Y)$. 2. If, moreover, $f$ is a pointed homotopy idempotent inducing the identity on $G=\pi_1(X)$ and $\wtilde{f}$ denotes the induced $G$–map on the universal cover of $X$, then $L^{(2)}(\wtilde{f})=\chi^{(2)}(Y)$. (a) Applying $H_{i}(\,-\,;\,\mathbb{Q})$ to the diagram of yields the following commutative diagram of groups: $$\xymatrix{ H_i(X)\ar[rr]^{f_i}\ar[dr]_{u_{i}}\ar@(ur,ul)[rrrr]^{f_i}& &H_i(X)\ar[rr]^{f_{i}} \ar[dr]_{u_{i}}& &H_i(X)\\ &H_i(Y)\ar[rr]^{{\mathrm{id}}_i}\ar[ur]_{v_{i}} & &H_i(Y)\ar[ur]_{v_{i}}.& }$$ We can now compute that $$\mathrm{{Tr}}(f_{i})=\mathrm{{Tr}}(v_{i}u_{i})=\mathrm{{Tr}}(u_{i}v_{i})=\mathrm{{Tr}}(\mathrm{id}_{i})=\dim(H_{i}(Y;\,\mathbb{Q}))\text{,}$$ so that (a) follows by taking alternating sums. (b) Here we need to know that $f$ induces the identity map on $\pi_{1}(X)$, in order to obtain equivariance of the induced maps on the universal covers $\tilde{X}$ and $\tilde{Y}$. We apply $H_{i}^{(2)}$ to diagram : $$\xymatrix{ {{H}^{(2)}_i}(\tilde{X})\ar[rr]^{\wtilde{f}_i}\ar[dr]_{\wtilde{u}_{i}}\ar@(ur,ul)[rrrr]^{\wtilde{f}_i}& &{{H}^{(2)}_i}(\tilde{X})\ar[rr]^{\wtilde{f}_{i}}\ar[dr]_{\wtilde{u}_{i}}& &{{H}^{(2)}_i}(\tilde{X})\\ &{{H}^{(2)}_i}(\tilde{Y})\ar[rr]^{{\mathrm{id}}_i}\ar[ur]_{\wtilde{v}_{i}}& &{{H}^{(2)}_i}(\tilde{Y}) \ar[ur]_{\wtilde{v}_{i}},& }$$ and compute $$\mathrm{{tr}}_{G}(\wtilde{f}_{i})=\mathrm{{tr}}_{G}(\wtilde{v}_{i}\wtilde{u}_{i})=\mathrm{{tr}}_{G}(\wtilde{u}_{i}\wtilde {v}_{i})=\mathrm{{tr}}_{G}(\mathrm{id}_{i})=\dim_{G}(H_{i}^{(2)}(\tilde {Y}))\text{,}$$ and take alternating sums. 
The desired equality uses the fact that, given two finitely presented $\mathcal{N}\!G$–modules $A$ and $B$, with two maps $\phi\co A\rightarrow B$ and $\psi\co B\rightarrow A$, then $\mathrm{{tr}}_{G}(\phi\psi)=\mathrm{{tr}}_{G}(\psi\phi)$. Let $G$ be a finitely presented group. The following are equivalent. - $G$ satisfies the weak Bass conjecture. - Every pointed homotopy idempotent selfmap of a closed, smooth and oriented manifold $M$ with $\pi_{1}(M)=G$ and inducing the identity on $G$ has its Lefschetz number equal to the $L^{2}$–Lefschetz number of the induced $G$–map on the universal cover of $M$. That (a) implies (b) follows from the implication (a) $\Rightarrow$ (b) in , combined with . To prove that (b) implies (a), namely that the Lefschetz number information on manifolds is enough to imply the weak Bass conjecture, it suffices to see that for a finite connected complex $X$ of dimension $n\geq2$ there are a closed smooth oriented manifold $M$ and maps $X\rightarrow M$, $M\rightarrow X$ inducing isomorphisms of the fundamental groups, and such that $X\rightarrow M\rightarrow X$ is pointed homotopic to $\mathrm{id}_{X}$. However, this was already discussed in . We then conclude by combining the implication (b) $\Rightarrow$ (a) in with . Finally, we turn to the proof of . That (a) implies (b) follows from the previous proposition, which also shows that (b) implies (a) for all finitely presented groups, and therefore for all groups via . For each group $G$, it is evident that the algebraic statement of for $G$ implies the statement of . For our geometric formulations of the conjectures, the implication is less clear. One can approach this problem via work of Lück and Rosenberg on computing $L^{2}$–Lefschetz numbers and local degrees [@LR].
--- abstract: 'In this paper we describe all indecomposable two-term partial tilting complexes over a Brauer tree algebra with multiplicity 1 using a criterion for a minimal projective presentation of a module to be a partial tilting complex. As an application we describe all two-term tilting complexes over a Brauer star algebra and compute their endomorphism rings.' author: - Mikhail Antipov and Alexandra Zvonareva title: 'TWO-TERM PARTIAL TILTING COMPLEXES OVER BRAUER TREE ALGEBRAS' --- Introduction ============ In [@RZ] Rouquier and Zimmermann defined the derived Picard group $\text{TrPic}(A)$ of an algebra $A$, i.e. the group of autoequivalences of the derived category of $A$ given by multiplication by a two-sided tilting complex, modulo natural isomorphism. The tensor product of two-sided tilting complexes gives the multiplication in this group. Despite the fact that for a Brauer tree algebra with the multiplicity of the exceptional vertex 1 several braid group actions on $\text{TrPic}(A)$ are known ([@RZ], [@IM]), the whole derived Picard group has been computed only for an algebra with two simple modules ([@RZ]). On the other hand, Abe and Hoshino showed that over a selfinjective artin algebra of finite representation type any tilting complex $P$ such that $\text{add}(P) = \text{add}(\nu P)$, where $\nu$ is the Nakayama functor, can be presented as a product of tilting complexes of length $\leq 1$ ([@AH]). Therefore instead of considering the derived Picard group we can consider the derived Picard groupoid corresponding to some class of derived equivalent algebras. The objects of this groupoid are the algebras from this class and the morphisms are the derived equivalences given by multiplication by a two-sided tilting complex modulo natural isomorphism. 
For example, one can consider the derived Picard groupoid corresponding to the class of Brauer tree algebras with a fixed number of simple modules and multiplicity $k$ (the algebras from this class are derived equivalent and this class is closed under derived equivalence). Then the result of Abe and Hoshino means that the derived Picard groupoid corresponding to the class of Brauer tree algebras with a fixed number of simple modules and multiplicity $k$ is generated by one-term and two-term tilting complexes. In this paper we give a criterion for a minimal projective presentation of a module without projective direct summands to be a partial tilting complex, namely we have the following: **Proposition 1** *Let $A$ be a selfinjective $K$-algebra, $M$ be a module without projective direct summands and let $T:= P^0 \overset{f}{\rightarrow} P^1$ be a minimal projective presentation of the module $M.$ The complex $T$ is partial tilting if and only if $\emph{Hom}_{A}(M,\Omega^2M)=0$ and $\emph{Hom}_{K^b(A)}(T,M)=0.$* In Proposition 1 the module $M$ is considered as a stalk complex concentrated in degree $0$, and the complex $T:= P^0 \overset{f}{\rightarrow} P^1$ is concentrated in degrees $0$ and $1$, respectively. Using this proposition we classify all indecomposable two-term partial tilting complexes over a Brauer tree algebra with multiplicity 1. **Theorem 1** *Let $A$ be a Brauer tree algebra with multiplicity 1. A minimal projective presentation of an indecomposable non-projective $A$-module $M$ is a partial tilting complex if and only if $M$ is not isomorphic to $P/ \emph{soc}(P)$ for any indecomposable projective module $P.$* We hope that this will allow us to obtain a full classification of two-term tilting complexes over Brauer tree algebras. As an illustration we describe all two-term tilting complexes over a Brauer star algebra and compute their endomorphism rings (for an arbitrary multiplicity) in Sections 5 and 6. 
Note that the results in Sections 5 and 6 partially intersect with [@SI1], [@SI2]. **Acknowledgement:** We would like to thank Alexander Generalov for his helpful remarks. Preliminaries ============= Let $K$ be an algebraically closed field, $A$ be a finite dimensional algebra over $K$. We will denote by $A\text{-}{\rm mod}$ the category of finitely generated left $A$-modules, by $K^b(A)$ the bounded homotopy category and by $D^b(A)$ the bounded derived category of $A\text{-}{\rm mod}.$ The shift functor on the derived category will be denoted by $[1].$ Let us denote by $A\text{-}{\rm perf}$ the full subcategory of $D^b(A)$ consisting of perfect complexes, i.e. of bounded complexes of finitely generated projective $A$-modules. In the path algebra of a quiver the product of arrows $\overset{a}{\rightarrow} \overset{b}{\rightarrow}$ will be denoted by $ab.$ For convenience all algebras are supposed to be basic. A complex $T \in A\text{-}{\rm perf}$ is called tilting if 1. $\emph{Hom}_{D^b(A)}(T,T[i])=0, \mbox{ for } i \neq 0$; 2. $T$ generates $A\text{-}{\rm perf}$ as a triangulated category. Tilting complexes were defined by Rickard ([@Ri1]) and play an essential role in the study of equivalences of derived categories. A complex $T \in A\text{-}{\rm perf}$ is called partial tilting if the condition $\emph{1}$ from Definition $\emph{1}$ is satisfied. A tilting complex $T \in A\text{-}{\rm perf}$ is called basic if it does not contain isomorphic direct summands or, equivalently, if $\emph{End}_{D^b(A)}(T)$ is a basic algebra. We will call a (partial) tilting complex a two-term (partial) tilting complex if it is concentrated in two neighboring degrees. An algebra $A$ is called special biserial (*SB*-algebra), if $A$ is isomorphic to $KQ/I$ for some quiver $Q$ and an admissible ideal of relations $I,$ and the following is satisfied: 1. any vertex of $Q$ is the starting point of at most two arrows; 2. any vertex of $Q$ is the end point of at most two arrows; 3. 
if $b$ is an arrow in $Q$ then there is at most one arrow $a$ such that $ab \notin I$; 4. if $b$ is an arrow in $Q$ then there is at most one arrow $c$ such that $bc \notin I$. For an SB-algebra the full classification of indecomposable modules up to isomorphism is known ([@GP], [@WW]). Let $B$ be a symmetric *SB*-algebra over a field $K.$ An $A$-cycle is a maximal ordered set of nonrepeating arrows of $Q$ such that the product of any two neighboring arrows is not equal to zero. Note that the fact that the algebra is symmetric means that $A$-cycles are actually cycles. Also sometimes just a maximal ordered set of arrows of $Q$ such that the product of any two neighboring arrows is not equal to zero is called an $A$-cycle (see [@AG]). Note also that in this case $A$-cycles are maximal nonzero paths in $B$. An important example of an SB-algebra of finite representation type is a Brauer tree algebra. These algebras also play an important role in the modular representation theory of finite groups. Let $\Gamma$ be a tree with $n$ edges and an exceptional vertex which has an assigned multiplicity $k \in \mathbb{N}$. Let us fix a cyclic ordering of the edges adjacent to each vertex in $\Gamma$ (if $\Gamma$ is embedded into the plane we will assume that the cyclic ordering is clockwise). In this case $\Gamma$ is called a Brauer tree of type $(n,k)$. To a Brauer tree of type $(n,k)$ one can associate a finite dimensional algebra $A(n,k)$. The algebra $A(n,k)$ has $n$ simple modules $S_i$, which are in one-to-one correspondence with the edges $i \in \Gamma$. The two series of composition factors of an indecomposable projective module $P_i$ (with top $S_i$) are obtained by going anticlockwise around the two vertices adjacent to the edge $i$. We go around a vertex $k$ times if the vertex is exceptional and once if it is not. The full description of the Brauer tree algebras in terms of composition factors is given in [@Al]. 
Furthermore, Rickard showed that two Brauer tree algebras corresponding to the trees $\Gamma$ and $\Gamma'$ are derived equivalent if and only if their types $(n,k)$ and $(n',k')$ coincide ([@Ri2]) and it follows from the results of Gabriel and Riedtmann that this class is closed under derived equivalence ([@GR]). Two-term tilting complexes over selfinjective algebras ====================================================== Let $A$ be an arbitrary finite dimensional selfinjective $K$-algebra. Any two-term complex $T:= P^0 \overset{f}{\rightarrow} P^1 \in A\text{-}{\rm perf}$ is isomorphic to a direct sum of the minimal projective presentation of a module and a stalk complex of a projective module concentrated in degree 0. **Proof** Let us denote by $M$ the cokernel of $f$. The minimal projective presentation of $M$ is a direct summand of $T$. So $T$ is a direct sum of the minimal projective presentation of $M$, a stalk complex concentrated in degree 0 (a direct summand of $P^0$, possibly zero, on which $f$ acts as the zero map), and a complex of the form $P \overset{\text{id}}{\rightarrow} P,$ which is homotopic to 0. $\Box$ We will suppose that the minimal projective presentation of a module is concentrated in degrees 0 and 1 in cohomological notation. For the sake of simplicity we will consider only minimal projective presentations of modules without projective summands. Direct summands corresponding to stalk complexes of projective modules concentrated in degree 1 will be considered separately in Proposition 2. Let $A$ be a selfinjective $K$-algebra, $M$ be a module without projective direct summands and let $T:= P^0 \overset{f}{\rightarrow} P^1$ be a minimal projective presentation of the module $M.$ The complex $T$ is partial tilting if and only if $\emph{Hom}_{A}(M,\Omega^2M)=0$ and $\emph{Hom}_{K^b(A)}(T,M)=0.$ **Proof** Let $h:P^1 \rightarrow P^0$ be such that $hf=0=fh,$ i.e. 
$h$ gives a morphism $T \rightarrow T[-1].$ $$\xymatrix { 0 \ar[r]& \text{Ker}(f) \ar[r]^-{i} & P^0 \ar[r]& P^1 \ar[r]^-{\pi} \ar[ld]& \text{Coker}(f) \ar[r] \ar@{-->}[lld] \ar@{-->}[llld]& 0 \\ 0 \ar[r]& \text{Ker}(f) \ar[r] & P^0 \ar[r]& P^1 \ar[r] & \text{Coker}(f) \ar[r] & 0\\}$$ The condition $hf=0$ means that $\text{Im}(f)\subseteq \text{Ker}(h)$, consequently $h$ goes through $\text{Coker}(f)$, i.e. there exists $h' \in \text{Hom}_{A}(\text{Coker}(f),P^0)$ such that $h=h'\pi,$ but $\pi$ is surjective, hence $\text{Im}(h)=\text{Im}(h').$ The condition $fh=0$ means that $\text{Im}(h')=\text{Im}(h) \subseteq \text{Ker}(f),$ consequently $h'$ goes through $\text{Ker}(f),$ i.e. there exists $h''$ such that $h'=ih'',$ $h=ih''\pi.$ Note that since $\pi$ is surjective and $i$ is injective, $h=0$ if and only if $h''=0.$ Also, if there is a nonzero $h'' \in \text{Hom}_A(\text{Coker}(f), \text{Ker}(f)),$ then the morphism $h=ih''\pi$ gives a nonzero morphism $T \rightarrow T[-1].$ So $$\label{eqn:1} \text{Hom}_{D^b(A)}(T,T[-1])=0 \Leftrightarrow \text{Hom}_{A}(M,\Omega^2M)=0. \tag{$\ast$}$$ Let us now verify that $$\label{eqn:2}\text{Hom}_{D^b(A)}(T,T[1])=0 \Leftrightarrow \text{Hom}_{K^b(A)}(T,M)=0. \tag{$\ast\ast$}$$ We have that $\text{Hom}_{D^b(A)}(T,T[1])=\text{Hom}_{D^b(A)}(T,P_{\bullet})=\text{Hom}_{D^b(A)}(T,M),$ where $P_{\bullet}$ is the projective resolution of $M.$ Since $T$ consists of projective modules, $\text{Hom}_{D^b(A)}(T,M)=\text{Hom}_{K^b(A)}(T,M).$ $\Box$ The projective presentation of a band-module over a symmetric *SB*-algebra cannot be a partial tilting complex. **Proof** In the Auslander-Reiten quiver all band-modules lie on 1-tubes ([@BR]), so $\tau M=M;$ since the algebra is symmetric, $\tau=\Omega^2,$ hence $\Omega^2M=M$ and $\text{Hom}_{A}(M,\Omega^2M)\neq 0.$ $\Box$ The proof of the next statement is analogous to the proof of Proposition 1. Let $A$ be a selfinjective $K$-algebra, $M$ be a module without projective direct summands such that its minimal projective presentation is a partial tilting complex. 
The sum of a stalk complex of a projective module $P$ concentrated in degree 0 and the minimal projective presentation of the module $M$ is a partial tilting complex if and only if $\emph{Hom}_{A}(M,P)=0=\emph{Hom}_{A}(P,M).$ The sum of a stalk complex of a projective module $P$ concentrated in degree 1 and the minimal projective presentation of the module $M$ is a partial tilting complex if and only if $\emph{Hom}_{A}(\Omega^2M,P)=0=\emph{Hom}_{A}(P,\Omega^2M).$ Two-term tilting complexes over Brauer tree algebras with multiplicity 1 ======================================================================== The next remark ([@Ha]) plays an important role. Let $A$ be a finite dimensional algebra over a field $K$, let $\text{\rm proj-}A$ and $\text{\rm inj-}A$ be the categories of finitely generated projective and injective modules respectively, $K^b(\text{\rm proj-}A)$, $K^b(\text{\rm inj-}A)$ the corresponding bounded homotopy categories, and $D$ the duality of the module category with respect to $K.$ Then the Nakayama functor $\nu$ induces an equivalence of triangulated categories $K^b(\text{\rm proj-}A) \rightarrow K^b(\text{\rm inj-}A)$ and there is a natural isomorphism $D\emph{Hom}(P,-)\rightarrow \emph{Hom}(-,\nu P)$ for $P \in K^b(\text{\rm proj-}A)$. In the case of a symmetric algebra this means that for $T \in A\text{-}{\rm perf}$ the condition $\text{Hom}_{D^b(A)}(T,T[1])=0$ is satisfied if and only if $\text{Hom}_{D^b(A)}(T,T[-1])=0.$ From now on in this section we will consider only Brauer tree algebras $A$ corresponding to a Brauer tree $\Gamma$ such that the multiplicity of the exceptional vertex of $\Gamma$ is 1. Let us fix an $A$-module $M$ and let us denote by $T:= P^0 \overset{f}{\rightarrow} P^1$ its minimal projective presentation. Let $M$ be an indecomposable nonprojective $A$-module. 
The condition $\emph{Hom}_{A}(P^0,M)=0$ implies $\emph{Hom}_{A}(M,\Omega^2M)=0$ and $\emph{Hom}_{K^b(A)}(T,M)=0.$ **Proof** The condition $\text{Hom}_{A}(P^0,M)=0$ obviously implies $\text{Hom}_{K^b(A)}(T,M)=0.$ Let us show that $\text{Hom}_{A}(P^0,M)=0$ implies $\text{Hom}_{A}(M,\Omega^2M)=0.$ Since $\text{Hom}_{A}(P^0,M)=0$, there is no composition factor in $M$ isomorphic to a direct summand of $\text{top}(P^0)=\text{soc}(P^0).$ The module $\Omega^2M$ is a submodule of $P^0,$ hence $\text{soc}(\Omega^2M) \subseteq \text{soc}(P^0)$. For any $h \in \text{Hom}_{A}(M,\Omega^2M)$ we have that $\text{Im}(h)\cap\text{soc}(\Omega^2M)=0,$ hence $h=0.$ $\Box$ Let $M$ be a nonprojective $A$-module such that $\emph{dim}(\emph{top}(M))=1.$ The minimal projective presentation of $M$ is a partial tilting complex if and only if $M$ is not isomorphic to $P/\emph{soc}(P)$ for any indecomposable projective module $P.$ **Proof** The condition $\text{dim}(\text{top}(M))=1$ implies that $M \simeq P^1/U,$ where $P^1$ is indecomposable. If $U=\text{soc}(P^1),$ then $P^0 \simeq P^1$ because $A$ is symmetric. Hence $\Omega^2M$ is a submodule of $P^1,$ hence $\text{soc}(\Omega^2M)=\text{soc}(P^1)=\text{top}(P^1)=\text{top}(M),$ which means that $\text{Hom}_{A}(M,\Omega^2M) \neq 0.$ By we get that $\text{Hom}_{D^b(A)}(T,T[-1]) \neq 0.$ Let us assume that $U \neq \text{soc}(P^1).$ We denote by $I$ the set of indices corresponding to composition factors of $\text{top}(U).$ The projective cover of $U$ is isomorphic to $\bigoplus_{i \in I}Ae_i.$ Since $U \neq \text{soc}(P^1)$, the set $I$ does not contain the indices corresponding to $\text{soc}(P^1)$ or to composition factors of $P^1/U$ (over a Brauer tree algebra with multiplicity 1 all composition factors of an indecomposable projective module except for the top and the socle are distinct). Hence $\text{Hom}_{A}(P^0,M)=0.$ By Lemma 2 and Proposition 1 the minimal projective presentation of $P^1/U$ is a partial tilting complex. 
$\Box$ Let us denote by $CF(L)$ the set of the composition factors of the module $L$. For any indecomposable nonprojective $A$-module $M$ such that $\emph{dim}(\emph{top}(M)) \geq 2$ the condition $\emph{Hom}_{K^b(A)}(T,M)=0$ is satisfied. **Proof** Note that $\text{dim}(\text{top}(M)) \geq 2$ implies $CF(\text{top}(P^0))\cap CF(M) \subseteq \text{soc}(M)$: indeed, over a Brauer tree algebra with multiplicity 1 all composition factors of an indecomposable nonprojective module are distinct. Consequently, for any morphism $h:P^0\rightarrow M$ we have $\text{Im}(h)\subseteq \text{soc}(M),$ hence $\text{Ker}(h) \supseteq \text{rad}(P^0) \supseteq \text{Ker}(f),$ hence $h$ goes through $f$ and $h=0$ in $K^b(A).$ $\Box$ Finally, we have: A minimal projective presentation of an indecomposable non-projective $A$-module $M$ is a partial tilting complex if and only if $M$ is not isomorphic to $P/ \emph{soc}(P)$ for any indecomposable projective module $P.$ **Proof** The case $\text{dim}(\text{top}(M)) = 1$ is dealt with in Lemma 3; in the case $\text{dim}(\text{top}(M)) \geq 2$ the required result holds because of Lemma 4, Remark 1 and . $\Box$ Two-term tilting complexes over Brauer star algebra =================================================== Let us consider a quiver $Q:$ $$\xymatrix { & 2 \ar[r]^-{\alpha_2} & 3 \ar[dr] & \\ 1 \ar[ur]^-{\alpha_1} & & & 4 \ar[dl] \\ & n \ar[ul]^-{\alpha_n} & \cdots \ar[l] & \\ }$$ The vertices of the quiver are numbered by elements of $\mathbb{Z}/n\mathbb{Z}.$ Consider the ideal $I$ generated by the relations $$I:=\langle (\alpha_{i} \cdot \alpha_{i+1} \cdot \ldots \cdot \alpha_{i-1})^k \cdot \alpha_{i}, \mbox{ } i=1,\ldots,n \rangle.$$ Set $A=KQ/I.$ We denote by $e_i$ the path of length 0 corresponding to the vertex $i$. 
Any indecomposable module over this algebra is uniserial, in particular any indecomposable module is uniquely determined by the ordered set of its composition factors. We will denote a module by the set of the indices corresponding to its composition factors ordered from the top to the socle. For example, the simple module corresponding to the idempotent $e_i$ will be denoted by $(i)$. In the previous section the description of all two-term partial tilting complexes in the case $k=1$ was given. Now we will describe such complexes over a Brauer star algebra for an arbitrary $k.$ The minimal projective presentation of an indecomposable $A$-module $M$ is a partial tilting complex if and only if $l(M) < n,$ where $l(M)$ is the length of $M.$ **Proof** If $|CF(M)|>n-1$ then both $M$ and $\Omega^2M$ contain all simple modules as composition factors. In particular, $\text{top}(M)$ is a composition factor of $\Omega^2M,$ hence $\text{Hom}_{A}(M,\Omega^2M) \neq 0.$ If $|CF(M)|<n,$ then in $\Omega^2M$ there is no composition factor isomorphic to $\text{top}(M),$ hence $\text{Hom}_{A}(M,\Omega^2M) = 0.$ It is also clear that $\text{Hom}_{K^b(A)}(T,M)=0,$ since there is no composition factor isomorphic to $\text{top}(P^0)$ in $M$. $\Box$ Let us describe all two-term tilting complexes over $A,$ concentrated in degrees 0 and 1. Let there be given two modules $M=(i, i-1,...,j)$ and $N=(m, m-1,...,l)$ such that the number of composition factors of $M$ and of $N$ is less than $n.$ Let $T$ be the minimal projective presentation of $M,$ $T'$ be the minimal projective presentation of $N.$ Note that $\Omega^2M=(i-1,...,j-1),$ $\Omega^2N=(m-1,...,l-1).$ Let us state when the sum of the minimal projective presentations of $M$ and $N$ is a partial tilting complex. 
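To illustrate the length criterion and the formula $\Omega^{2}M=(i-1,...,j-1)$, here is a small example of our own (take $n=3$, $k=1$; indices are read in $\mathbb{Z}/3\mathbb{Z}$, with $0 \equiv 3$):

```latex
% For M = (2,1) we have i = 2, j = 1, so
\Omega^{2}M=(i-1,\ldots,j-1)=(1,3);
% top(M) = S_2 is not a composition factor of \Omega^{2}M, so
% Hom_A(M,\Omega^{2}M) = 0 and the minimal projective presentation of M
% is partial tilting, in accordance with l(M) = 2 < 3 = n.
% In contrast, for M = (2,1,3) of length n = 3 we get
\Omega^{2}M=(1,3,2),
% which contains S_2 = top(M), so Hom_A(M,\Omega^{2}M) \neq 0.
```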
$\text{Hom}_{A}(M,\Omega^2N)=0$ if and only if $i \notin \{m-1, m-2,...,l-1\}$ or $i \in \{m-1, m-2,...,l-1\},$ but $j \in \{i, i-1,...,l\}.$ $\text{Hom}_{A}(N,\Omega^2M)=0$ if and only if $m \notin \{i-1, i-2,...,j-1\}$ or $m \in \{i-1, i-2,...,j-1\},$ but $l \in \{m, m-1,...,j\}.$ Analysing these conditions we conclude that either the sets $\{i, i-1,...,j-1\},$ $\{m, m-1,...,l-1\}$ do not intersect or one lies inside the other. Now let us figure out when a sum of the minimal projective presentation of a module $M=(i, i-1,...,j)$ and a stalk complex of a projective module $P_m=(m, m-1,...,m)$ concentrated in degree 0 is a partial tilting complex. $\text{Hom}_{A}(M,P)=0=\text{Hom}_{A}(P,M)$ if and only if $m\notin \{i, i-1,...,j\}.$ Similarly, a sum of the minimal projective presentation of a module $M=(i, i-1,...,j)$ and a stalk complex of a projective module $P_m=(m, m-1,...,m)$ concentrated in degree 1 is a partial tilting complex if and only if $\text{Hom}_{A}(\Omega^2M,P)=0=\text{Hom}_{A}(P,\Omega^2M),$ i.e. $m\notin \{i-1,i-2,...,j-1\}.$ Note also that all stalk complexes of projective modules are concentrated either in degree 0 or in degree 1, since for any two projective modules $P_m,$ $P_l$ over a Brauer star algebra $\text{Hom}_{A}(P_m,P_l)\neq 0$. It is known that in the case of a symmetric algebra of finite representation type any partial tilting complex with $n$ (where $n$ is the number of isoclasses of simple modules) nonisomorphic direct summands is tilting ([@AH]). Thus to describe all two-term tilting complexes is the same as to describe all configurations of $n$ pairwise orthogonal indecomposable complexes, each of which is either a minimal projective presentation of an indecomposable module $M$ such that the number of composition factors of $M$ is less than $n$ or a stalk complex of a projective module concentrated in degree 0 or degree 1, i.e. of $n$ complexes which pairwise satisfy the conditions stated before. 
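The disjoint-or-nested criterion on the shifted index sets is easy to check mechanically. The following Python sketch is ours, not part of the paper; the encoding of a module $(i, i-1,...,j)$ by its endpoint pair $(i,j)$ is an assumption made for illustration.

```python
def interval(i, j, n):
    """The cyclic interval {i, i-1, ..., j} in Z/nZ, returned as a set."""
    s, k = set(), i % n
    while True:
        s.add(k)
        if k == j % n:
            return s
        k = (k - 1) % n

def compatible(M, N, n):
    """M = (i, j) encodes the module (i, i-1, ..., j); likewise N = (m, l).
    Their minimal projective presentations are mutually orthogonal iff the
    shifted sets {i, ..., j-1} and {m, ..., l-1} are disjoint or nested."""
    A = interval(M[0], M[1] - 1, n)
    B = interval(N[0], N[1] - 1, n)
    return A.isdisjoint(B) or A <= B or B <= A
```

For instance, over $n=4$ the modules encoded by $(3,1)$ and $(2,1)$ yield nested shifted sets and are compatible, whereas $(2,1)$ and $(0,3)$ give crossing sets and are not.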
We will call an interval a set of vertices of an $n$-gon taken in order with a marked starting point and end point. A covering $S$ of an $n$-gon by distinguished intervals is the following structure: an $n$-gon with a partition of its vertices into noncrossing intervals (we call them outer), each of which can contain from $1$ to $n$ vertices; in each outer interval containing $r$ ($r>1$) vertices $r-2$ inner intervals are additionally chosen, each of which contains more than 1 vertex; inner intervals either do not intersect or lie one inside the other. Also in each outer interval $(i, i-1,...,j)$ of length greater than 1 we pick out an interval of length 1, namely either its starting point or its end point, with the same choice made for all outer intervals. Note that the covering contains exactly $n$ intervals. To such a covering $S$ one can assign a two-term tilting complex $T_{S}$ as follows. We will consider two cases: 1\) To all outer intervals $(i, i-1,...,j) \in S$ of length greater than 1 an inner interval $(j)$ of length 1 is assigned. Let us construct a tilting complex as follows: for each interval $(i, i-1,...,j)$ containing more than 1 vertex take the module $M=(i, i-1,...,j+1)$, and as a direct summand of the tilting complex take the minimal projective presentation of $M$. For each interval containing 1 vertex take the stalk complex of the projective module corresponding to this vertex, concentrated in degree 0. In this way we get $n$ summands. 2\) To all outer intervals $(i, i-1,...,j) \in S$ of length greater than 1 an inner interval $(i)$ of length 1 is assigned. As before, for each interval $(i, i-1,...,j)$ containing more than 1 vertex take the module $M=(i, i-1,...,j+1)$, and as a direct summand of the tilting complex take the minimal projective presentation of $M$. For each interval containing 1 vertex take the stalk complex of the projective module corresponding to this vertex, concentrated in degree 1. In this way we get $n$ summands. 
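The two constructions can be summarized in a short routine. This is our own illustrative sketch with an assumed encoding: each interval is recorded by the ordered pair $(a,b)$ of its endpoints (a singleton by $(v,)$), with the convention, matching the worked $n=4$ example later in the text, that the interval $(a,\dots,b)$ contributes the two-term summand $P_b\rightarrow P_a$, while a singleton contributes a stalk complex in degree 0 (case 1) or degree 1 (case 2).

```python
def summands(covering, stalk_degree=0):
    """Map a covering's intervals to readable summand descriptors of T_S.
    covering: list of tuples; (a, b) encodes an interval of length > 1,
    (v,) a singleton.  stalk_degree is 0 in case 1 and 1 in case 2."""
    out = []
    for t in covering:
        if len(t) == 1:                      # singleton: stalk complex P_v
            out.append(f"P{t[0]} in degree {stalk_degree}")
        else:                                # longer interval: P_b -> P_a
            a, b = t
            out.append(f"P{b} -> P{a}")
    return out
```

With this encoding the covering $S=(1,2,3,4), (2,3,4), (2,3), (1)$ of the later example becomes `[(1, 4), (2, 4), (2, 3), (1,)]` and reproduces the four summands listed there.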
To the trivial covering, containing only intervals of length $1$, two tilting complexes, $A$ and $A[-1]$, are assigned. Based on the previous construction we get the following: Over a Brauer star algebra with $n$ vertices and multiplicity $k$ the set of all basic two-term tilting complexes not isomorphic to $A$ or $A[-1]$ is in one-to-one correspondence with the set of all nontrivial coverings of an $n$-gon by distinguished intervals. Endomorphism rings ================== Let us construct the endomorphism ring of a two-term tilting complex over a Brauer star algebra with $n$ vertices and multiplicity $k$, i.e. the endomorphism ring of the tilting complex corresponding to a covering $S$ of an $n$-gon. It is well known that it is isomorphic to a Brauer tree algebra corresponding to some Brauer tree $\Gamma$ with multiplicity $k.$ For this purpose we first compute the Cartan matrix of the algebra $\text{End}_{K^b(A)}(T_S)$. It will tell us which edges of $\Gamma$ are incident to one vertex. After that we will only have to establish the cyclic ordering of the edges incident to each vertex of $\Gamma$. It is easy to compute the Cartan matrix of $\text{End}_{K^b(A)}(T_S)$ using the well-known formula of Happel [@Ha2]: let $Q=(Q^r)_{r \in \mathbb{Z}}, R=(R^s)_{s \in \mathbb{Z}} \in A\text{-}{\rm perf}$, then $$\sum_i (-1)^i {\rm dim}_K {\rm Hom}_{K^b(A)}(Q,R[i])=\sum_{r,s} (-1)^{r-s}{\rm dim}_K {\rm Hom}_{A}(Q^r,R^s).$$ Note that if ${\rm Hom}_{K^b(A)}(Q,R[i])=0, i \neq 0$ (for example, in the case when $Q$ and $R$ are summands of a tilting complex) then the left-hand side of the formula becomes ${\rm dim}_K {\rm Hom}_{K^b(A)}(Q,R).$ As before we will consider two cases: 1\) To all outer intervals $(i, i-1,...,j) \in S$ of length greater than 1 an inner interval $(j)$ of length 1 is assigned, i.e. all stalk complexes of projective modules which are direct summands of $T_S$ are concentrated in degree 0. 
Let $(i, i-1,...,j)$, $(t, t-1,...,l) \in S$ be two arbitrary intervals of the covering $S$ of length greater than 1, and let $(m), (r) \in S$ be intervals of length 1. It is easy to see that $$\text{dim}(\text{Hom}_{K^b(A)}(P_j\rightarrow P_i,P_l\rightarrow P_t))=$$ $$=\left\{% \begin{array}{ll} 0, & \hbox{if } \{i, i-1,...,j\}\cap\{t, t-1,...,l\}=\varnothing;\\ 0, & \hbox{if } \{i, i-1,...,j\} \subset \{t, t-1,...,l\}, i\neq t, j\neq l; \\ 1, & \hbox{if } \{i, i-1,...,j\} \subset \{t, t-1,...,l\}, i = t, j\neq l; \\ 1, & \hbox{if } \{i, i-1,...,j\} \subset \{t, t-1,...,l\}, i\neq t, j= l; \\ 2, & \hbox{if } \{i, i-1,...,j\} = \{t, t-1,...,l\}. \\ \end{array}% \right.$$ $$\text{dim}(\text{Hom}_{K^b(A)}(P_m,P_j\rightarrow P_i))=\left\{% \begin{array}{ll} 1, & \hbox{if } m=j; \\ 0, & \hbox{if } m \neq j. \\ \end{array}% \right.$$ $$\text{dim}(\text{Hom}_{K^b(A)}(P_j\rightarrow P_i,P_m))=\left\{% \begin{array}{ll} 1, & \hbox{if } m=j; \\ 0, & \hbox{if } m \neq j. \\ \end{array}% \right.$$ $$\text{dim}(\text{Hom}_{K^b(A)}(P_m,P_r))=\left\{% \begin{array}{ll} k, \mbox{ for } m\neq r; \\ k+1, \mbox{ otherwise}. \\ \end{array}% \right.$$ These data give us the partition of the vertices of $\text{End}_{K^b(A)}(T_S)$ into $A$-cycles or, equivalently, which edges of the Brauer tree of the algebra $\text{End}_{K^b(A)}(T_S)$ are incident to the same vertex (we will identify the edges of the Brauer tree of $\text{End}_{K^b(A)}(T_S)$ and the indecomposable summands of $T_S$ corresponding to them). Now we have to find out the cyclic ordering of the edges incident to one vertex and which vertex is exceptional. Note that if we arrange the vertices of an $A$-cycle of length $r$ in such a manner that the successive composition of $kr$ morphisms (in the case of the exceptional vertex) or of $r$ morphisms (in the case of a nonexceptional vertex) between them is not homotopic to zero, then this arrangement gives us the desired cyclic order. 
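The case distinction above translates directly into code. The following sketch is ours (the endpoint encoding $(i,j)$ for the complex $P_j\rightarrow P_i$ is an assumption); it returns the Hom-space dimensions for summands of a single tilting complex, whose index sets are by construction disjoint or nested.

```python
def interval(i, j, n):
    """The cyclic interval {i, i-1, ..., j} in Z/nZ, as a set."""
    s, k = set(), i % n
    while True:
        s.add(k)
        if k == j % n:
            return s
        k = (k - 1) % n

def dim_hom(c1, c2, n):
    """dim Hom_{K^b(A)}(P_j -> P_i, P_l -> P_t) per the case table;
    c1 = (i, j), c2 = (t, l)."""
    (i, j), (t, l) = c1, c2
    A, B = interval(i, j, n), interval(t, l, n)
    if A == B:
        return 2
    if A.isdisjoint(B):
        return 0
    if A <= B or B <= A:                     # nested: 1 iff an endpoint agrees
        return 1 if (i % n == t % n or j % n == l % n) else 0
    raise ValueError("crossing intervals cannot both be summands of one T_S")

def dim_hom_stalk(m, c, n):
    """dim Hom(P_m, P_j -> P_i) = dim Hom(P_j -> P_i, P_m): 1 iff m = j,
    in the case where all stalk complexes sit in degree 0."""
    return 1 if m % n == c[1] % n else 0
```

Running it over all pairs of summands of a tilting complex reproduces the Cartan matrix of $\text{End}_{K^b(A)}(T_S)$ entry by entry.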
In the case when all stalk complexes of projective modules are concentrated in degree 0 in the algebra $\text{End}_{K^b(A)}(T_S)$, the following types of $A$-cycles can occur: a) the $A$-cycle of projective modules; b) an $A$-cycle containing an indecomposable stalk complex of a projective module $P$ concentrated in degree 0 and two-term complexes having $P$ as a 0-component; c) an $A$-cycle containing two-term complexes with the same 0-components; d) an $A$-cycle containing two-term complexes with the same components in degree 1. For convenience let us use the following notation: a homomorphism $P_l\rightarrow P_m$ induced by multiplication on the right by $\alpha_{l}\alpha_{l+1}...\alpha_{m-1}$ will be denoted by $\alpha_{l,m-1}.$ a\) Let $(m_1), (m_2),..., (m_r) \in S$ where the set $\{m_1,m_2,...,m_r\}$ is ordered according to the cyclic ordering of the edges in the Brauer star, $r$ is maximal. It is clear that the following diagram of chain maps holds: $$\xymatrix { ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_1} \ar[r] \ar[d]^{\alpha_{m_1,m_2-1}}& 0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_2} \ar[r] \ar[d]& 0 \ar[d] \ar[r]& ...\\ ... \ar[r] &... \ar[r] \ar[d]& ... \ar[d] \ar[r]& ... \ar[r] \ar[d]& ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_r} \ar[r] \ar[d]^{\alpha_{m_r,m_1-1}}& 0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r]&P_{m_1} \ar[r] & 0 \ar[r]& ... \\ }$$ The successive composition of any $kr$ morphisms is not homotopic to 0. So the edges of $\text{End}_{K^b(A)}(T_S)$ corresponding to stalk complexes of projective modules have a common vertex and the cyclic ordering in $\text{End}_{K^b(A)}(T_S)$ is induced by the cyclic ordering in the Brauer star. The vertex of $\text{End}_{K^b(A)}(T_S)$ corresponding to this cycle is exceptional. b\) Let $(m_1, m_1 - 1,...,j ), (m_2, m_2 - 1,...,j),..., (m_r, m_r - 1,...,j), (j) \in S,$ where the set $\{j,m_1,m_2,...,m_r\}$ is ordered according to the cyclic ordering of the edges in the Brauer star, $r$ is maximal. 
Let us consider the following diagram of chain maps: $$\xymatrix { ... \ar[r] & 0 \ar[r] \ar[d]&P_{j} \ar[r] \ar[d]^{(\alpha_{j,j-1})^k}& 0 \ar[d] \ar[r] & 0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{j} \ar[r] \ar[d]^{1} & P_{m_1} \ar[d]^{\alpha_{m_1,m_2-1}} \ar[r] & 0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{j} \ar[r] \ar[d] & P_{m_2} \ar[d] \ar[r] & 0 \ar[d] \ar[r]& ...\\ ... \ar[r] &... \ar[r] \ar[d]& ... \ar[d] \ar[r]& ... \ar[r] \ar[d] & ... \ar[d] \ar[r] & ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{j} \ar[r] \ar[d]^{1}& P_{m_r} \ar[d] \ar[r]&0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r]&P_{j} \ar[r] & 0 \ar[r]& 0 \ar[r]& ... \\ }$$ The successive composition of any $r+1$ morphisms is not homotopic to 0. That means that the edges of $\text{End}_{K^b(A)}(T_S)$ corresponding to this $A$-cycle are ordered in the following way: $\{P_j, P_j\rightarrow P_{m_1},P_j\rightarrow P_{m_2},...,P_j\rightarrow P_{m_r}\}.$ c\) Similarly, if $(m_1, m_1 - 1,...,j ), (m_2, m_2 - 1,...,j),..., (m_r, m_r - 1,...,j) \in S$ is the set of intervals corresponding to some $A$-cycle in $\text{End}_{K^b(A)}(T_S),$ where the set $\{m_1,m_2,...,m_r\}$ is ordered according to the cyclic ordering of the edges in the Brauer star, $r$ is maximal, then the edges of $\text{End}_{K^b(A)}(T_S)$ corresponding to this $A$-cycle are ordered in the following way: $\{ P_j\rightarrow P_{m_1},P_j\rightarrow P_{m_2},...,P_j\rightarrow P_{m_r}\}.$ d\) Let us now consider an $A$-cycle containing summands with the same components in degree 1. Let $(j, j-1,...,m_1), (j, j-1,...,m_2),..., (j,j-1,...,m_r) \in S,$ where the set $\{m_1,m_2,...,m_r\}$ is ordered according to the cyclic ordering of the edges in the Brauer star, $r$ is maximal. Then the following diagram of chain maps holds: $$\xymatrix { ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_1} \ar[r] \ar[d]^{\alpha_{m_1,m_2-1}}& P_j \ar[d]^{1} \ar[r] & 0 \ar[d] \ar[r]& ...\\ ... 
\ar[r] & 0 \ar[r] \ar[d]&P_{m_2} \ar[r] \ar[d] & P_{j} \ar[d]^{1} \ar[r] & 0 \ar[d] \ar[r]& ...\\ ... \ar[r] &... \ar[r] \ar[d]& ... \ar[d] \ar[r]& ... \ar[r] \ar[d] & ... \ar[d] \ar[r] & ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_{r-1}} \ar[r] \ar[d]^{\alpha_{m_{r-1},m_r-1}}& P_{j} \ar[d]^{1} \ar[r]&0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_r} \ar[r] \ar[d]^{0} & P_j \ar[r] \ar[d]^{(\alpha_{j,j-1})^k}& 0 \ar[r] \ar[d]& ... \\ ... \ar[r] & 0 \ar[r] &P_{m_1} \ar[r] & P_j \ar[r] & 0 \ar[r]& ...\\ }$$ The successive composition of any $r$ morphisms is not homotopic to 0. This means that the edges of $\text{End}_{K^b(A)}(T_S)$ corresponding to this $A$-cycle are ordered in the following way: $\{P_{m_1}\rightarrow P_{j},P_{m_2}\rightarrow P_{j},...,P_{m_r}\rightarrow P_{j}\}.$ This completes the first case: for each of the 4 types of $A$-cycles we have described the cyclic ordering of the vertices, which is naturally induced by the cyclic ordering of the vertices in the Brauer star algebra. 2\) Let us consider the second case. To all outer intervals $(i, i-1,...,j) \in S$ of length greater than 1 an inner interval $(i)$ of length 1 is assigned, i.e. all stalk complexes of projective modules which are direct summands of $T_S$ are concentrated in degree 1. Let $(i, i-1,...,j)$, $(t, t-1,...,l) \in S$ be two arbitrary intervals of length greater than 1, and let $(m), (r) \in S$ be intervals of length 1. It is easy to see that $$\text{dim}(\text{Hom}_{K^b(A)}(P_j\rightarrow P_i,P_l\rightarrow P_t))=$$ $$=\left\{% \begin{array}{ll} 0, & \hbox{if } \{i, i-1,...,j\}\cap\{t, t-1,...,l\}=\varnothing;\\ 0, & \hbox{if } \{i, i-1,...,j\} \subset \{t, t-1,...,l\}, i\neq t, j\neq l; \\ 1, & \hbox{if } \{i, i-1,...,j\} \subset \{t, t-1,...,l\}, i = t, j\neq l; \\ 1, & \hbox{if } \{i, i-1,...,j\} \subset \{t, t-1,...,l\}, i\neq t, j= l; \\ 2, & \hbox{if } \{i, i-1,...,j\} = \{t, t-1,...,l\}. 
\\ \end{array}% \right.$$ $$\text{dim}(\text{Hom}_{K^b(A)}(P_m,P_j\rightarrow P_i))=\left\{% \begin{array}{ll} 1, & \hbox{if } m=i; \\ 0, & \hbox{if } m \neq i. \\ \end{array}% \right.$$ $$\text{dim}(\text{Hom}_{K^b(A)}(P_j\rightarrow P_i,P_m))=\left\{% \begin{array}{ll} 1, & \hbox{if } m=i; \\ 0, & \hbox{if } m \neq i. \\ \end{array}% \right.$$ $$\text{dim}(\text{Hom}_{K^b(A)}(P_m,P_r))=\left\{% \begin{array}{ll} k, \mbox{ for } m\neq r; \\ k+1, \mbox{ otherwise}. \\ \end{array}% \right.$$ As in the previous case, the exceptional vertex corresponds to the cycle of stalk complexes of projective modules (this time they are concentrated in degree 1). All $A$-cycles can be divided into 4 types. For 3 of them (namely a, c, d) we already know the cyclic ordering. The remaining case is: e\) Let $(j, j-1,...,m_1), (j, j-1,...,m_2),..., (j,j-1,...,m_r), (j) \in S,$ where the set $\{j,m_1,m_2,...,m_r\}$ is ordered according to the cyclic ordering of the edges in the Brauer star, $r$ is maximal. Let us consider the following diagram of chain maps: $$\xymatrix { ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_1} \ar[r] \ar[d]^{\alpha_{m_1,m_2-1}}& P_j \ar[d]^{1} \ar[r] & 0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_2} \ar[r] \ar[d] & P_{j} \ar[d]^{1} \ar[r] & 0 \ar[d] \ar[r]& ...\\ ... \ar[r] &... \ar[r] \ar[d]& ... \ar[d] \ar[r]& ... \ar[r] \ar[d] & ... \ar[d] \ar[r] & ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_{r-1}} \ar[r] \ar[d]^{\alpha_{m_{r-1},m_r-1}}& P_{j} \ar[d]^{1} \ar[r]&0 \ar[d] \ar[r]& ...\\ ... \ar[r] & 0 \ar[r] \ar[d]&P_{m_r} \ar[r] \ar[d] & P_j \ar[r] \ar[d]^{(\alpha_{j,j-1})^k}& 0 \ar[r] \ar[d]& ... \\ ... \ar[r] & 0 \ar[r] \ar[d]&0 \ar[r] \ar[d] & P_j \ar[r] \ar[d]^{1}& 0 \ar[r] \ar[d]& ... \\ ... \ar[r] & 0 \ar[r] &P_{m_1} \ar[r] & P_j \ar[r] & 0 \ar[r]& ...\\ }$$ The successive composition of any $r+1$ morphisms is not homotopic to 0. 
This means that the edges of $\text{End}_{K^b(A)}(T_S)$ corresponding to this $A$-cycle are ordered in the following way: $\{P_{m_1}\rightarrow P_{j},P_{m_2}\rightarrow P_{j},...,P_{m_r}\rightarrow P_{j}, P_j\}.$ $\Box$ The following is clear from the description of endomorphism rings of two-term tilting complexes. A two-term tilting complex $T_S$ over a Brauer star algebra with $n$ edges and multiplicity 1, which is not isomorphic to $A$ or $A[-1]$, gives a derived autoequivalence if and only if the covering $S$ of the $n$-gon has the following form: $$(j,j-1,...,j+1), (j,j-1,...,j+2),...,(j,j-1),(j), \quad j=1,...,n$$ or $$(j-1,j-2,...,j), (j-2,j-3,...,j),...,(j+1,j),(j), \quad j=1,...,n.$$ The subgroup of the derived Picard group generated by these autoequivalences was studied in *[@IM]*. In the case $k\neq 1$ a two-term tilting complex $T_S$ gives a derived autoequivalence if and only if the covering $S$ is trivial. Let us consider an example of a two-term tilting complex and compute its endomorphism ring. Let $k=1$, $n=4$, and let $S=(1,2,3,4), (2,3,4), (2,3), (1).$ Then $T_S$ consists of the following direct summands: $P_4\rightarrow P_1, P_4\rightarrow P_2, P_3\rightarrow P_2$ and $P_1,$ concentrated in degree 1. 
Let us denote the vertices of $\text{End}_{K^b(A)}(T_S)$ as follows: $a$ is a vertex corresponding to $P_4\rightarrow P_1,$ $b$ to $P_4\rightarrow P_2,$ $c$ to $P_3\rightarrow P_2,$ $d$ to $P_1.$ Then the quiver of $\text{End}_{K^b(A)}(T_S)$ is of the following form: $$d \leftrightarrows a \leftrightarrows b\leftrightarrows c,$$ and the Brauer graph is a string: $\bullet \hrulefill \bullet \hrulefill \bullet \hrulefill \bullet \hrulefill \bullet.$ For any algebra $B$ corresponding to the Brauer tree $\Gamma$ with $n$ edges and multiplicity $k$ there is a two-term tilting complex $T_S$ over $A$ such that $B\simeq \emph{End}_{K^b(A)}(T_S).$ **Proof** Let us assume that the root of $\Gamma$ is chosen in the exceptional vertex, and that $\Gamma$ is embedded in the plane in such a manner that all nonroot vertices are situated on the plane lower than the root according to their level (the further from the root, the lower, all vertices of the same level lie on a horizontal line). The edges around vertices are ordered clockwise. Let us number the edges of the tree $\Gamma$ as follows: put 1 on the right-hand edge incident to the root, on the next edge incident to the root according to the order put $1+ k_1+1,$ where $k_1$ is the number of successors of the nonroot end of the edge with label 1. Let the $(i-1)$-st edge incident to the root be labelled with $m$ and let the nonroot vertex incident to the edge with label $m$ have $k_{m}$ successors, then put on the $i$-th edge incident to the root label $m+ k_{m} + 1.$ Further on let us put the labels as follows: consider a vertex of an odd level (a vertex which can be connected to the root by a path of odd length), let the edge connecting it to the vertex of a higher level be labelled with $j.$ Put $j+1+ k_1$ on the right-hand edge incident to this vertex, where $k_{1}$ is the number of successors of the other end of this edge. 
Put $j+1 + k_{1} + k_2 +1$ on the next edge incident to this vertex, where $k_{2}$ is the number of successors of the other end of this edge. Further on let us put the labels by induction: let the $(i-1)$-st edge incident to the fixed vertex be labelled with $m,$ and let the lower end of the next edge have $k_{m}$ successors, put $m+ k_{m}+ 1$ on the $i$-th edge incident to this vertex. Consider a vertex of an even level, let the edge connecting it to the vertex of a higher level be labelled with $t$ and let the edge connecting the other end of the edge labelled with $t$ and the vertex of a higher level be labelled with $j.$ Put $j+1$ on the right-hand edge incident to this vertex. Put $j+1 + k_{j+1}+1$ on the next edge incident to this vertex, where $k_{j+1}$ is the number of successors of the other end of the edge labelled with $j+1$. Let the $(i-1)$-st edge incident to the fixed vertex be labelled with $m,$ and let the lower end of the $(i-1)$-st edge incident to the fixed vertex have $k_{m}$ successors, put $m+ k_{m}+ 1$ on the $i$-th edge incident to this vertex. Let us construct a tilting complex over algebra $A$ using a labelled tree $\Gamma$. Assume that the root of $\Gamma$ has $l$ children and there are labels $\{n_1, n_2,...,n_l\}$ on the edges incident to the root. Take stalk complexes of projective modules $P_{n_1},P_{n_2}...,P_{n_l}$ concentrated in degree 0 as summands of the tilting complex. Let us consider a vertex of an odd level. Assume that the edge connecting it to a vertex of a higher level is labelled by $j,$ the other edges incident to this vertex have labels $j_1, j_2,...j_h,$ where $h$ is the number of children of this vertex. In the tilting complex the following direct summands will correspond to these edges: $P_j\rightarrow P_{j_1}, P_j\rightarrow P_{j_2},...,P_j\rightarrow P_{j_h}.$ Let us consider a vertex of an even level. 
Assume that the edge connecting it to a vertex of a higher level is labelled by $g,$ the other edges incident to this vertex have labels $g_1, g_2,...,g_d,$ where $d$ is the number of children of this vertex. In the tilting complex the following direct summands will correspond to these edges: $P_{g_1}\rightarrow P_g, P_{g_2}\rightarrow P_g,...,P_{g_d}\rightarrow P_g.$ It is clear that we have the desired number of summands. Because of the construction this complex is tilting and the Brauer tree corresponding to its endomorphism ring is $\Gamma.$ Similarly, we could construct a tilting complex with all the stalk complexes of projective modules concentrated in degree 1. $\Box$ [99]{} R. Rouquier, A. Zimmermann, Picard groups for derived module categories, Proc. London Math. Soc. (3) 87 (2003), no. 1, 197–225. I. Muchtadi-Alamsyah, Braid action on derived category of Nakayama algebras, Comm. Algebra 36 (2008), no. 7, 2544–2569. H. Abe, M. Hoshino, On derived equivalences for selfinjective algebras, Comm. in Algebra 34 (2006), no. 12, 4441–4452. M. Schaps, E. Zakay-Illouz, Combinatorial partial tilting complexes for the Brauer star algebras, Proc. Int. Conference on Representations of Algebra, Sao Paulo, (2001), 187–208. M. Schaps, E. Zakay-Illouz, Pointed Brauer trees, J. Algebra, 246 (2001), no. 2, 647–672. J. Rickard, Morita theory for derived categories, J. London Math. Soc. 39 (1989), no. 2, 436–456. I.M. Gelfand, V.A. Ponomarev, Indecomposable representations of the Lorentz group, Russian Math. Surveys 23 (1968), no. 2(140), 3–59. B. Wald, J. Waschbüsch, Tame biserial algebras, J. Algebra 95 (1985), 480–500. M. A. Antipov, A. I. Generalov, Finite generation of the Yoneda algebra of a symmetric special biserial algebra, Algebra i Analiz, 17:3 (2005), 1–23. J. L. Alperin, Local representation theory, Cambridge studies in advanced mathematics 11, Cambridge University Press (1986). J. Rickard, Derived categories and stable equivalence, J. Pure Appl. 
Algebra, 61 (1989), 303–317. P. Gabriel, C. Riedtmann, Group representations without groups, Comment. Math. Helv. 54 (1979), 240–287. M. C. R. Butler, C. M. Ringel, Auslander-Reiten sequences with few middle terms and applications to string algebras, Comm. Algebra, 15 (1-2) (1987), 145–179. D. Happel, Auslander-Reiten triangles in derived categories of finite-dimensional algebras, Proc. Amer. Math. Soc. 112 (1991), 641–648. D. Happel, Triangulated Categories in the Representation of Finite Dimensional Algebras, Cambridge University Press (1988).
--- author: - Yannic Noller - Rody Kersten - 'Corina S. Păsăreanu' bibliography: - 'refs.bib' - 'bib-symex.bib' - 'refs1.bib' title: 'Badger: Complexity Analysis with Fuzzing and Symbolic Execution' ---
--- abstract: 'We introduce an effective quark model that is in principle dynamically derivable from the QCD action. An important feature is the incorporation of spontaneous chiral symmetry breaking in a renormalizable fashion. The quark propagator in the condensed vacuum exhibits complex conjugate poles, indicative of an unphysical spectral form, i.e. confined quarks. Moreover, the ensuing mass function can be fitted well to existing lattice data. To validate the physical nature of the new model, we identify not only a massless pseudoscalar (i.e. a pion) in the chiral limit, but we also present reasonable estimates for the $\rho$ meson mass and decay constant, employing a contact point interaction and a large $N$ argument to simplify the diagrammatic spectral analysis. We stress that we do not use any experimental input to obtain our numbers, but only rely on our model and lattice quark data.' author: - | D. Dudal$^{a}$[^1], M.S. Guimaraes$^{b}$[^2], L.F. Palhares$^{c}$[^3], S.P. Sorella$^{b}$[^4]\ \ [$^{a}$ Ghent University, Department of Physics and Astronomy, Krijgslaan 281-S9, 9000 Gent, Belgium]{.nodecor}\ [$^{b}$ Departamento de Física Teórica, Instituto de Física, UERJ - Universidade do Estado do Rio de Janeiro]{.nodecor}\ [$^{c}$ Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany]{.nodecor} title: '[**From QCD to a dynamical quark model: construction and some meson spectroscopy**]{}' --- Introduction ============ Next to confinement (the absence of color charged particles in the QCD spectrum), the other crucial nonperturbative ingredient of QCD is the dynamical breaking of the (almost) chiral symmetry (D$\chi$SB). The latter is what ensures most of the hadron mass and what explains the gap between the light pseudoscalar mesons and the rest of the spectrum, due to their Goldstone boson nature. Although confinement and D$\chi$SB are well appreciated, their treatment still inspires active research, see e.g. 
[@Bashir:2013zha] for a recent effort linking both concepts. Here, we present a novel quark model, directly grounded in QCD. A few prerequisites must be met: the model should be (i) able to capture the correct quark dynamics, in particular to describe properly the nonperturbative quark propagator; (ii) renormalizable, to describe quark dynamics over the whole momentum range, including its UV behaviour; (iii) practical to compute with; (iv) displaying the correct chiral behaviour, such as massless pions in the chiral limit; (v) allowing one to construct the rest of the meson spectrum. The quark model: chiral symmetry breaking and massless pion =========================================================== We start from the QCD action in $4d$ Euclidean space[^5], but we add an extra piece: $$\label{e1} S=\int {\ensuremath{\mathrm{d}}}^4x \left(\frac{1}{4}F_{\mu\nu}^2+{\overline{\psi}}{\ensuremath{D\!\!\!\!/}}\psi \underline{-{\overline{\lambda}}{\partial}^2\xi-{\overline{\xi}}{\partial}^2\lambda-{\overline{\eta}}{\partial}^2\theta+{\overline{\theta}}{\partial}^2\eta}\right)$$ The additional (fermion) fields are perturbatively trivial, as the underlined piece constitutes a unity. Moreover, it is BRST exact [@Baulieu:2009xr]. The action is equivalent to the QCD one; in particular, so should be its symmetry content. We treat the new fields as chiral singlets, as their quadratic form is not of the usual kind. We nevertheless recover the chiral symmetry under $$\label{e1bis} \delta_5 \psi= i\,\gamma_5\psi\,,\quad \delta_5{\overline{\psi}}=i\,{\overline{\psi}}\gamma_5\,,\quad \delta_5(\text{rest})=0.$$ We did not explicitly write the gauge fixing part. To also describe nonperturbative gluon dynamics, a nonperturbative gauge fixing, such as the Gribov-Zwanziger scheme in the Landau gauge [@Gribov:1977wm] or other schemes like [@Serreau:2012cg], can be selected. 
It is now useful to record the mass dimensions $\dim[\psi,{\overline{\psi}}]=3/2$ and $\dim [\lambda,\xi,\eta,\theta,{\overline{\lambda}},{\overline{\xi}},{\overline{\eta}},{\overline{\theta}}]=1$. Consider next the local composite operators $\mathcal{O}_1= {\overline{\xi}}\psi+{\overline{\psi}}\xi-{\overline{\lambda}}\psi-{\overline{\psi}}\lambda$ and $\mathcal{O}_2={\overline{\lambda}}\xi+{\overline{\xi}}\lambda+{\overline{\eta}}\theta-{\overline{\theta}}\eta$, with $\dim[\mathcal{O}_1]=5/2$ and $\dim[\mathcal{O}_2]=2$. The mixed fermion condensate $\braket{\mathcal{O}_1}$ serves as an order parameter for chiral symmetry: $\mathcal{O}_1=-\delta_5\pi$ with $\pi=-i~\left({\overline{\xi}}\gamma_5\psi+{\overline{\psi}}\gamma_5\xi-{\overline{\lambda}}\gamma_5\psi-{\overline{\psi}}\gamma_5\lambda\right)$. We shall later on show that $\pi$ corresponds to the pion, viz. a massless pseudoscalar in the chiral limit. Let us construct the underlying effective action, $\Gamma$, for the nonperturbative dynamics related to $\braket{\mathcal{O}_1}$. We introduce two scalar sources, $J$ and $j$, coupled to $\mathcal{O}_1$ and $\mathcal{O}_2$, with $\dim[J]=3/2$, $\dim[j]=2$, $$\label{act10} S\to S+ \int {\ensuremath{\mathrm{d}}}^4x\left(J\mathcal{O}_1+j\mathcal{O}_2-\zeta(g^2)\frac{j^2}{2}\right)$$ as derivation w.r.t. the sources allows one to define the quantum operators. The parameter $\zeta$ is indispensable for a homogeneous linear renormalization group for $\Gamma$, while its value can be consistently determined order by order, making it a function of the coupling $g^2$. It reflects the vacuum energy divergence $\propto j^2$. We refer to [@Verschelde:1995jj] for the seminal paper plus toolbox. As an important asset of $4d$ quantum field theory is its multiplicative renormalizability, this property needs to be established for eq. . 
Using a more general set of sources, this can be proven to all orders of perturbation theory [@Baulieu:2009xr], from where[^6] it becomes evident that there is no pure vacuum term in $J$. Although RG controllable, we need a workable action description, as the nonlinear $\zeta(g^2)\frac{j^2}{2}$ clouds the energy interpretation and falls outside the standard 1PI formalism [@Banks:1975zw]. This can be overcome by a double Hubbard-Stratonovich (HS) unity: $$\label{hs1} 1=\mathcal{N}\int\left[{\ensuremath{\mathrm{d}}}\sigma d\Sigma\right] e^{-\frac{1}{2\zeta}\int {\ensuremath{\mathrm{d}}}^4x\left(\sigma-\zeta j+\mathcal{O}_2\right)^2}e^{-\frac{1}{2\Lambda}\int {\ensuremath{\mathrm{d}}}^4x\left(\Sigma-\Lambda J+\mathcal{O}_1\right)^2}\,,$$ leading to an equivalent action, $$\begin{aligned} \label{hs2} S_f+S_{j,J}&\equiv&\int {\ensuremath{\mathrm{d}}}^4x \left(\frac{1}{4}F_{\mu\nu}^2+{\overline{\psi}}{\ensuremath{D\!\!\!\!/}}\psi -{\overline{\lambda}}{\partial}^2\xi-{\overline{\xi}}{\partial}^2\lambda-{\overline{\eta}}{\partial}^2\theta+{\overline{\theta}}{\partial}^2\eta+ \frac{\sigma^2}{2\zeta}+\frac{\sigma}{\zeta}\mathcal{O}_2+\frac{\mathcal{O}_2^2}{2\zeta}+\frac{\Sigma^2}{2\Lambda}+\frac{\mathcal{O}_1^2}{2\Lambda}+\frac{\Sigma}{\Lambda}\mathcal{O}_1\right)\nonumber\\ &&\hspace{-1cm}+\int {\ensuremath{\mathrm{d}}}^4x\left(-\sigma j-\Sigma J+\Lambda J^2\right)\,.\end{aligned}$$ Amusingly, contrary to its usual purpose, the HS transformation introduces quartic fermion interactions. We may discard the $J^2$ term in , since it is irrelevant for RG purposes: without changing the physics, we could have added a canceling $-\Lambda J^2$ to the action . The sources now appear linearly. Acting with $\left.\frac{{\partial}}{{\partial}\{j,J\}}\right|_{j=J=0}$ on both partition functions, i.e. before and after the HS transformation, provides the correspondences $\braket{\Sigma}=-\braket{\mathcal{O}_1}$ and $\braket{\sigma}=-\braket{\mathcal{O}_2}$. 
$\Lambda$ is a mass dimension 1 parameter, necessary to end up with the appropriate mass dimensions throughout, since $\dim[\sigma]=2$, $\dim[\Sigma]=5/2$. $\Lambda$ will not enter any physical result if we were to compute exactly, as the underlying transformation constitutes just a unity. Let us collect a few observations. (i) In a loop expansion, $\Lambda$ will unavoidably enter any calculated quantity. However, assuming that passing to higher order one gets closer to the exact result, which encompasses $\Lambda$-independence, we can fix $\Lambda$ in a case-by-case scenario by the principle of minimal sensitivity (PMS) [@Stevenson:1981vj]: we look for solutions of $\frac{{\partial}E_{vac}}{{\partial}\Lambda}=0$ or higher derivatives if the latter eq. has no zeros. (ii) $\Gamma$ itself could be examined using the background field method [@Jackiw:1974cv]; then we no longer need the sources and can set them to zero. (iii) If the dynamics decides that $\braket{\Sigma}=\braket{\sigma}=0$, then we are dealing with nothing else than QCD without D$\chi$SB, as the trivial and thus irrelevant unities can be integrated out. If, on the contrary, $\braket{\Sigma}\neq0$ by means of dimensional transmutation, we find ourselves in a vacuum where chiral invariance is dynamically broken. We draw attention to the invariance of the action itself, $\delta_5S_f=0$, since naturally $\delta_5\Sigma=-\delta_5\mathcal{O}_1$. The rôle of $\braket{\sigma}$ is to furnish a dynamical mass for the auxiliary fermion fields. (iv) What about the standard chiral condensate, $\braket{{\overline{\psi}}\psi}$? We find $\braket{{\overline{\psi}}\psi}\propto \braket{\Sigma}$, for example by adding a mass term $\mu$ to ${\overline{\psi}}\psi$ in and deriving the vacuum energy w.r.t. $\mu$ at the end whilst setting $\mu=0$. This illustrates the 1-1 correspondence between $\braket{\Sigma}$, $\braket{{\overline{\psi}}\psi}$ and chiral symmetry breaking. 
(v) With a bare quark mass $\mu$, the tree level quark propagator yields $$\label{qp} \Braket{{\overline{\psi}}\psi}_p=\frac{i{\ensuremath{p\!\!\!\!/}}+\mathcal{M}(p^2)}{p^2+\mathcal{M}^2(p^2)}\,,\quad \mathcal{M}(p^2)=\frac{\braket{\Sigma}/\Lambda}{p^2+\braket{\sigma}}+\mu\,.$$ The momentum dependent mass function $\mathcal{M}(p^2)$ is a result of the chiral symmetry breaking. This tree level functional form has been applied in fits to lattice quark propagators in [@Furui:2006ks; @Parappilly:2005ei; @Burgio:2012ph; @Rojas:2013tza]. This corroborates the relevance of our model, since we end up with a quark propagator in a nontrivial vacuum, whose functional form is consistent with the nonperturbative lattice counterpart. Moreover, we did not make any sacrifices w.r.t. renormalizability, and in the absence of condensation our action is equivalent to QCD. Our $\Gamma$ is thus unlike NJL models, which are neither directly linked to QCD nor describe quark propagation that can confront contemporary lattice data. Before continuing, we wish to point out here that a momentum dependent quark mass also showed up in an instanton based analysis of the QCD vacuum [@Diakonov:1985eg; @Diakonov:1987ty] and in solutions of the Dyson-Schwinger equation for the quark propagator [@Bhagwat:2003vw]. For now, we will take advantage of the foregoing lattice studies to fix our condensates. This will provide us with a tree level quark propagator, with the global form factor $\mathcal{Z}\equiv1$. Including loop corrections on top of the nonperturbative vacuum will lead to $\mathcal{Z}(p^2)$, as also seen in e.g. Figure 5 of [@Parappilly:2005ei], which however only deviates mildly from $1$ over a large range of momenta: the tree approximation $\mathcal{Z}=1$ appears to be valid. (vi) As a final important remark, let us scrutinize whether we can introduce a pion field. We consider $\braket{\Sigma}\neq0$ and the already introduced field $\pi$ \[we assume the chiral limit here, $\mu=0$\]. 
Using $S_f$, the chiral current is easily derived to be $j_\mu^5={\overline{\psi}}\gamma_\mu\gamma^5\psi$. We then adapt the standard derivation, presented in e.g. [@Pokorski:1987ed]: the correlation function $\mathcal{G}_\mu(x-y)=\braket{j_\mu^5(x)\pi(y)}$ is subject to ${\partial}_\mu\mathcal{G}_\mu(x-y)=\delta(x-y)\braket{\Sigma}$, using a path integral or a current algebra argument. Fourier transforming and using Euclidean invariance shows that $\mathcal{G}_\mu=\frac{p_\mu}{p^2}\braket{\Sigma}$. To close the argument, we consider the $\mathcal{S}$-matrix element of the current destroying a pion state, $\braket{j_5^\mu(x)\pi(p)}\propto p_\mu e^{-ipx}$, which is related to the amputated propagator when the pion is put on-shell. Assuming a pion mass $m_\pi^2$, we would get $\braket{j_5^\mu(x)\pi(p)}\propto\lim_{p^2\to -m_\pi^2} (p^2+m_\pi^2) \mathcal{G}_\mu(p) e^{-ipx}$ with pion propagator $\braket{\pi\pi}_{p^2\sim -m_\pi^2}\propto\frac{1}{p^2+m_\pi^2}$. Recombination provides us with $\lim_{p^2\to -m_\pi^2} (p^2+m_\pi^2) \mathcal{G}_\mu(p)\propto p_\mu$, and having shown that $\mathcal{G}_\mu=\frac{p_\mu}{p^2}\braket{\Sigma}$, we must require $m_\pi^2=0$, i.e. $\pi$ does describe a massless particle. It is instructive to notice that, using the tree level action stemming from in the condensed phase, we can rewrite $\pi\propto{\overline{\psi}}\frac{\Sigma/\Lambda}{-{\partial}^2+\braket{\sigma}}\gamma^5\psi$ by means of the equations of motion of the auxiliary fermions. We recognize a nonlocal version of the usual pseudoscalar pion field. This is not a surprise, since the action itself becomes a nonlocal quark action upon integrating out the extra fermion fields in the condensed phase. The vector ($\rho$-) meson in our model ======================================= The charged $\rho$-meson correlator ----------------------------------- As the proposed framework displays the desired chiral properties, we should further address its meson spectrum. 
As a representative example, let us construct and solve a gap equation for the (charged) $\rho^\pm$ meson mass under suitable simplifying approximations. Using the data of [@Parappilly:2005ei], in particular their Figure 5, we consider degenerate up ($u$) and down ($d$) quarks with current mass $\mu=0.014~\text{GeV}$. The dynamical quark mass can be fitted excellently with $$\label{fit} \mathcal{M}(p^2)=\frac{M^3}{p^2+m^2}+\mu~\text{with}~M^3=0.1960(84)~\text{GeV}^3\,, m^2=0.639(46)~\text{GeV}^2 \quad (\chi^2/\text{d.o.f.}~=~1.18)\,,$$ see also Figure 1. ![Lattice quark mass function [@Parappilly:2005ei] with its fit $\mathcal{M}(p^2)$.](massas.png){width="9cm"} We are ultimately interested in obtaining a pole in the charged $\rho$-meson channel, corresponding to a bound state. We may generalize the technology set out in [@Capri:2012hh] to the QCD case. We need the operators $\rho_\mu^-=\overline u \gamma_\mu d$, $\rho_\mu^+=\overline d \gamma_\mu u$ and their correlation function, and we are first concerned with its one-loop contribution, $$\begin{aligned} \label{corr2} \braket{\rho_\mu^-\rho_\nu^+}_k &=&8\int \frac{{\ensuremath{\mathrm{d}}}^4q}{(2\pi)^4}f^{-+}(k,q)/\left[(k-q)^2\left((k-q)^2+m^2\right)^2\right.\nonumber\\&&\hspace{-0.3cm}\left.+\left(M^3+\mu\left((k-q)^2+m^2\right)\right)^2\right]\left[q^2\left(q^2+m^2\right)^2+\left(M^3+ \mu(q^2+m^2)\right)^2\right]\,,\nonumber\\ f^{-+}(k,q)&=&\left(q\cdot(k-q)\right)\left((k-q)^2+m^2\right)^2\left(q^2+m^2\right)^2+2\left(M^3+\mu\left((k-q)^2+m^2\right)\right)\left((k-q)^2+m^2\right) \nonumber\\&&\hspace{-0.3cm}\times\left(M^3+\mu\left(q^2+m^2\right)\right)\left(q^2+m^2\right)\,.\end{aligned}$$ 
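As a brief numerical aside (our own sketch, not part of the paper's analysis; it simply evaluates the fit at its central values), the fitted mass function interpolates between a constituent-like mass of about $0.32~\text{GeV}$ in the deep infrared and the current mass $\mu$ in the ultraviolet:

```python
# Sketch (not from the paper): evaluate the fitted dynamical quark mass at the
# central fit values, to exhibit the constituent-like infrared mass versus the
# current-quark ultraviolet limit.
M3 = 0.1960   # GeV^3, central fit value
m2 = 0.639    # GeV^2, central fit value
mu = 0.014    # GeV, current (bare) quark mass

def mass_function(p2):
    """Dynamical quark mass M(p^2) = M^3/(p^2 + m^2) + mu, in GeV."""
    return M3 / (p2 + m2) + mu

print(round(mass_function(0.0), 3))    # infrared: ~0.321 GeV, constituent-like
print(round(mass_function(100.0), 3))  # deep ultraviolet: approaches mu
```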
We consequently consider $$\begin{aligned} \Braket{\rho_{\mu}^-(x) \rho_{\nu}^+(y)} &=& \int\frac{{\ensuremath{\mathrm{d}}}^4k{\ensuremath{\mathrm{d}}}^4q{\ensuremath{\mathrm{d}}}^4k'{\ensuremath{\mathrm{d}}}^4q'}{(2\pi)^{16}} \, {\rm e}^{-i(k+q)\cdot x-i(k'+q')\cdot y} \Braket{ \overline{u}(k)\gamma_{\mu} d(q)\overline{d}(k')\gamma_{\nu} u(q')} \,,\end{aligned}$$ where up to tree level we find $$\begin{aligned} \Braket{ \overline{u}(k)\gamma_{\mu} d(q)\overline{d}(k')\gamma_{\nu} u(q')} &=& {\rm Tr}\Big[ \delta(k+q') \Braket{u\overline u}_k \gamma_{\mu} \delta(k'+q) \Braket{d\overline d}_q\gamma_{\nu} \Big] \,,\end{aligned}$$ using the quark propagators in momentum space given in eq. . Thence, $$\begin{aligned} \Braket{ \rho_{\mu}^-(x) \rho_{\nu}^+(y)} &=& \int\frac{{\ensuremath{\mathrm{d}}}^4k{\ensuremath{\mathrm{d}}}^4q}{(2\pi)^8} \, {\rm e}^{-i(k+q)\cdot(x-y)} \frac{{\rm Tr}\Big\{ [i\slashed{k}+\mathcal{A}_u(k^2)] \gamma_{\mu} [i\slashed{q}+\mathcal{A}_d(q^2)] \gamma_{\nu} \Big\} }{[k^2+\mathcal{A}^2_u(k^2)][q^2+\mathcal{A}^2_d(q^2)]}~=~ \int\frac{{\ensuremath{\mathrm{d}}}^4k}{(2\pi)^4} \,{\rm e}^{-ik\cdot(x-y)} \Braket{ \rho_{\mu}^- \rho_{\nu}^+}_k \,,\end{aligned}$$ with $$\begin{aligned} \Braket{ \rho_{\mu}^- \rho_{\nu}^+}_k &=& \int\frac{{\ensuremath{\mathrm{d}}}^4q}{(2\pi)^4} \, \frac{{\rm Tr}\Big\{ [i\gamma_{\rho}(k_{\rho}-q_{\rho})+\mathcal{A}_u\big((k-q)^2\big)] \gamma_{\mu} [i\gamma_{\sigma}q_{\sigma}+\mathcal{A}_d(q^2)] \gamma_{\nu} \Big\} }{[(k-q)^2+\mathcal{A}^2_u\big((k-q)^2\big)][q^2+\mathcal{A}^2_d(q^2)]} \,.\end{aligned}$$ Using standard results for the trace over $\gamma$-matrices, $$\begin{aligned} {\rm Tr}\Big[ \gamma_{\mu}\gamma_{\nu} \Big]&=& 4\delta_{\mu\nu}\,, {\rm Tr}\Big[ \gamma_{\rho}\gamma_{\mu}\gamma_{\sigma}\gamma_{\nu} \Big]= 4[\delta_{\rho\mu}\delta_{\sigma\nu} -\delta_{\rho\sigma}\delta_{\mu\nu} +\delta_{\rho\nu}\delta_{\mu\sigma} ]\,, {\rm Tr}\Big[ {\rm odd~number~of~\gamma's} \Big] =0 \,,\end{aligned}$$ we arrive at 
$$\begin{aligned} \Braket{ \rho_{\mu}^- \rho_{\nu}^+}_k &=& 4 \int\frac{{\ensuremath{\mathrm{d}}}^4q}{(2\pi)^4} \, \frac{ -k_{\mu}q_{\nu} -k_{\nu}q_{\mu} +2q_{\mu}q_{\nu} +\delta_{\mu\nu}(k\cdot q-q^2) +\delta_{\mu\nu}\,\mathcal{A}_u\big((k-q)^2\big)\, \mathcal{A}_d(q^2) }{[(k-q)^2+\mathcal{A}^2_u\big((k-q)^2\big)][q^2+\mathcal{A}^2_d(q^2)]} \,.\end{aligned}$$ In the case of degenerate up and down quark mass, the following correlator is transverse thanks to the EOMs and therefore guaranteed to describe a massive spin $1$ particle: $$\label{corr1} \braket{\rho_\mu^-\rho_\nu^+}_k=\frac{1}{3}\left(\delta_{\mu\nu}-\frac{k_\mu k_\nu}{k^2}\right)\braket{\rho_\rho^-\rho_\rho^+}_k\,.$$ Explicitly, we have $$\begin{aligned} \Braket{ \rho_{\rho}^- \rho_{\rho}^+}_k &=& 8 \int\frac{{\ensuremath{\mathrm{d}}}^4q}{(2\pi)^4} \, \frac{ k\cdot q-q^2 +2\,\mathcal{A}\big((k-q)^2\big)\, \mathcal{A}(q^2) }{[(k-q)^2+\mathcal{A}^2\big((k-q)^2\big)][q^2+\mathcal{A}^2(q^2)]} \,,\end{aligned}$$ or, when written out, $$\begin{aligned} \Braket{ \rho_{\rho}^- \rho_{\rho}^+}_k &=& 8 \int\frac{{\ensuremath{\mathrm{d}}}^4q}{(2\pi)^4} \, f_{\rho}^{-+}(k,q)\, \frac{1}{\Big[(k-q)^2\big[(k-q)^2+m^2\big]^2+\big\{M^3+\mu \big[(k-q)^2+m^2\big]\big\}^2\Big]} \nonumber\\ &&\quad \frac{1}{ \Big[q^2\big[q^2+m^2\big]^2+\big\{M^3+\mu \big[q^2+m^2\big]\big\}^2\Big]} \,,\label{corr-beforepartial}\end{aligned}$$ where we have defined $$\begin{aligned} f_{\rho}^{-+}(k,q) &=& [q\cdot(k-q)]\big[(k-q)^2+m^2\big]^2\big[q^2+m^2\big]^2 +\nonumber\\ && +2\, \big\{M^3+\mu \big[(k-q)^2+m^2\big]\big\}\big[(k-q)^2+m^2\big] \big\{M^3+\mu \big[q^2+m^2\big]\big\}\big[q^2+m^2\big]\,.\end{aligned}$$ From the knowledge of the poles of the propagator, i.e. the solutions $y_0=-\omega$ and $y_{\pm}=-\omega_r\pm i\theta$ of the cubic equation $$\begin{aligned} y\big[y+m^2\big]^2+\big\{M^3+\mu \big[y+m^2\big]\big\}^2 &=&0 \,,\label{def-poles}\end{aligned}$$ we may decompose each propagator appearing in as $$\begin{aligned} \frac{ 1 
}{ \Big[q^2\big[q^2+m^2\big]^2+\big\{M^3+\mu \big[q^2+m^2\big]\big\}^2\Big]} &=& \frac{R}{q^2+\omega} + \frac{R_+}{q^2+\omega_r+i\theta} + \frac{R_-}{q^2+\omega_r-i\theta} \,, \label{def-Rs}\end{aligned}$$ where $R$ and $R_{\pm}$ can be obtained from $\omega, \omega_r$ and $\theta$. In appropriate GeV units, we find $$\label{numbers} R\approx2.467\,, \omega\approx 0.849\,, R_\pm\approx -1.234\pm i\,15.121\,, \omega_r\pm i\theta\approx0.214\pm i\,0.052\,.$$ With the previous numbers filled in, we encounter a real pole and a pair of complex-conjugate ($cc$) poles, corresponding to the $i$-particles of the seminal works [@Baulieu:2009ha], where it has been discussed how a pair of such poles can be combined to give a physical, i.e. consistent with the Källén-Lehmann (KL) representation, contribution to the bound state propagator. For the remainder of this work, we shall thence only be concerned with this physical part of the correlator, under the assumption that any unphysical piece, originating from combining poles in pairs that are not $cc$, will eventually cancel out when a full-fledged analysis and methodology to deal with $i$-particles becomes feasible. Inserting this result in eq. , only the terms combining two Yukawa poles ($\omega$) or complex-conjugate poles ($\omega_r\pm i\theta$ with $\omega_r\mp i\theta$) will contribute to the physical part of the spectral function we are interested in. 
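A quick numerical cross-check of these pole data (our own sketch, using the central fit values; the rounded inputs reproduce the quoted numbers only up to rounding) also verifies that the partial-fraction decomposition reassembles the original propagator denominator:

```python
# Cross-check (our own numerics, central fit values): poles of the cubic
# y(y+m^2)^2 + (M^3 + mu(y+m^2))^2 = 0 and the residues of the
# partial-fraction decomposition of the quark propagator denominator.
import cmath

M3, m2, mu = 0.1960, 0.639, 0.014

# Monic cubic p(y) = y^3 + a2 y^2 + a1 y + a0
a2 = 2 * m2 + mu**2
a1 = m2**2 + 2 * mu * (M3 + mu * m2)
a0 = (M3 + mu * m2)**2

def p(y):
    return y**3 + a2 * y**2 + a1 * y + a0

def dp(y):
    return 3 * y**2 + 2 * a2 * y + a1

# Real (Yukawa) root y0 = -omega by bisection; p(-1.5) < 0 < p(-0.5)
lo, hi = -1.5, -0.5
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if p(lo) * p(mid) <= 0:
        hi = mid
    else:
        lo = mid
y0 = 0.5 * (lo + hi)

# Deflate to a monic quadratic y^2 + b y + c for the cc pair:
# cubic = (y - y0)(y^2 + b y + c) with b = a2 + y0, c = -a0/y0
b = a2 + y0
c = -a0 / y0
disc = cmath.sqrt(b * b - 4 * c)
yp, ym = (-b + disc) / 2, (-b - disc) / 2

roots = [y0, yp, ym]
res = [1 / dp(y) for y in roots]     # residues R_i = 1/p'(y_i)

# Self-consistency: sum_i R_i/(q^2 - y_i) must reproduce 1/p(q^2)
q2 = 1.0
recon = sum(r / (q2 - y) for r, y in zip(res, roots))
assert abs(recon - 1 / p(q2)) < 1e-10

print(round(-y0, 3), round(res[0].real, 3))  # omega and R, cf. the values above
```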
Thus we may write: $$\begin{aligned} \Braket{ \rho_{\rho}^- \rho_{\rho}^+}_k &=& 8 \int\frac{{\ensuremath{\mathrm{d}}}^4q}{(2\pi)^4} \, f_{\rho}^{-+}(k,q)\, \bigg\{ \frac{R^2}{\big[(k-q)^2+\omega\big]\big[q^2+\omega \big]} +\frac{R_+R_-}{\big[(k-q)^2+\omega_r+i\theta\big]\big[q^2+\omega_r-i\theta \big]} \nonumber\\&& +\frac{R_-R_+}{\big[(k-q)^2+\omega_r-i\theta\big]\big[q^2+\omega_r+i\theta \big]} \bigg\}+\big\{{\rm unphysical} \big\} \,.\label{corr-afterpartial}\end{aligned}$$ The spectral representation\[SpecRep-rho\] ------------------------------------------ In order to search in the next section for a physical pole associated with the charged $\rho$-meson, we need to obtain the spectral representation of the correlator . Following [@Dudal:2010wn], we can use the Cutkosky cut rules and their generalization for the case of complex poles to derive the spectral function $\rho^{-+}(\tau)$: $$\begin{aligned} \rho^{-+}(\tau=E^2)&=&\frac{1}{\pi}\, {\rm Im}~\Braket{ \rho_{\rho}^- \rho_{\rho}^+}_{k=(E,\vec{0})} \,,\end{aligned}$$ associated with the correlation function $\Braket{ \rho_{\rho}^- \rho_{\rho}^+}_k$ given in eq. . The general result we shall apply to each part of eq.  corresponds to sections 2.1 and 2.2 of [@Dudal:2010wn] and is stated as follows[^7]. Let $$\begin{aligned} \mathcal{F}(k,m_1,m_2)&=& \int \frac{{\ensuremath{\mathrm{d}}}^dq}{(2\pi)^d} f(k,q)\frac{1}{(k-q)^2-m_1^2}\frac{1}{q^2-m_2^2} \,,\end{aligned}$$ where $f(k,q)$ is a Lorentz scalar, being thus a function of (and only of) the available scalar invariants, which on the cut take the values $(k-q)^2=m_1^2,\,q^2=m_2^2,$ and $2q\cdot(k-q)=E^2-m_1^2-m_2^2$. Then $$\begin{aligned} {\rm Im}~\mathcal{F}(k=(E,\vec{0}),m_1,m_2)= \frac{1}{2} \int \frac{{\ensuremath{\mathrm{d}}}^dq}{(2\pi)^{(d-2)}} f((E,\vec{0}),q)~ \theta(E-q^0)\delta\big[ (E-q^0)^2-\omega_{q,1}^2 \big]\theta(q^0)\delta\big[ (q^0)^2-\omega_{q,2}^2 \big] \,,\end{aligned}$$ with $\omega_{q,i}^2\equiv \vec{q}^2+m_i^2$. 
After carrying out the momentum integrations, we arrive at: $$\begin{aligned} {\rm Im}~\mathcal{F}(k=(E,\vec{0}),m_1,m_2)&=& \frac{1}{8}\, \frac{1}{2^{d-3}\,\pi^{(d-3)/2}\Gamma((d-1)/2)} \, \frac{|\vec{q}_0|^{d-3}}{E}\, f\big((E,\vec{0}),(\omega_{0,2},\vec{q}_0)\big) \,, \label{ImF-Cut}\end{aligned}$$ where we have defined: $$\begin{aligned} |\vec{q}_0|^2 &\equiv& \frac{(E^2-m_1^2-m_2^2)^2-4m_1^2m_2^2}{4E^2}\,,\quad \omega_{0,i}^2~\equiv~ |\vec{q}_0|^2+m_i^2 \,,\label{omega0i}\end{aligned}$$ with $\omega_{0,2}=E-\omega_{0,1}$. In eq. , we have three physical contributions (and dimension $d=4$): #### (a) $$\begin{aligned} {\rm Im}~\mathcal{F}_a(E^2)&=& \frac{1}{8\pi}\, \frac{|\vec{q}_a|}{E}\, 8R^2\,f_{\rho}^{-+}\big((E,\vec{0}),(\omega_{a,2},\vec{q}_a)\big) \,,\end{aligned}$$ with $$\begin{aligned} |\vec{q}_a|^2 &\equiv& \frac{E^2}{4}-\omega\,,\quad \omega_{a,i}^2~\equiv~ \frac{E^2}{4} \,,\end{aligned}$$ and $(k-q)^2=m_1^2\mapsto\omega,\,q^2=m_2^2\mapsto\omega,$ and $2q\cdot(k-q)=E^2-m_1^2-m_2^2\mapsto E^2-2\omega$, so that $$\begin{aligned} f_{\rho}^{-+}\big((E,\vec{0}),(\omega_{a,2},\vec{q}_a)\big) &=& [E^2/2-\omega]\big[\omega+m^2\big]^4+2\, \big\{M^3+\mu \big[\omega+m^2\big]\big\}^2\big[\omega+m^2\big]^2\,.\end{aligned}$$ The final form of this contribution is: $$\begin{aligned} {\rm Im}~\mathcal{F}_a(E^2)&=& \frac{1}{\pi}\, \frac{\sqrt{E^2/4-\omega}}{E}\, R^2\, \Big\{ [E^2/2-\omega]\big[\omega+m^2\big]^4 +2\, \big\{M^3+\mu \big[\omega+m^2\big]\big\}^2\big[\omega+m^2\big]^2 \Big\} \,, \label{ImF-res-a}\end{aligned}$$ and the threshold for multiparticle production (related to $\tau_0$, the lower limit of integration of the spectral representation) is $\tau_1=4\omega$. 
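Before moving on, a small consistency sketch (our own check, with arbitrary test masses) of the kinematics just defined: the expressions for $|\vec{q}_0|^2$ and $\omega_{0,i}$ enforce energy conservation $\omega_{0,1}+\omega_{0,2}=E$ on the cut, for any masses above threshold $E>m_1+m_2$:

```python
# Consistency check (ours) of the two-particle cut kinematics:
# |q0|^2 = [(E^2 - m1^2 - m2^2)^2 - 4 m1^2 m2^2]/(4E^2),
# omega_{0,i} = sqrt(|q0|^2 + m_i^2)  ==>  omega_{0,1} + omega_{0,2} = E.
import math

def q0_squared(E, m1, m2):
    return ((E**2 - m1**2 - m2**2)**2 - 4 * m1**2 * m2**2) / (4 * E**2)

def check_energy_conservation(E, m1, m2):
    q2 = q0_squared(E, m1, m2)
    w1 = math.sqrt(q2 + m1**2)
    w2 = math.sqrt(q2 + m2**2)
    return abs(w1 + w2 - E)

# A few points above threshold E > m1 + m2 (arbitrary test masses)
for E, m1, m2 in [(3.0, 0.5, 1.0), (2.0, 0.3, 0.3), (5.0, 1.2, 2.1)]:
    assert check_energy_conservation(E, m1, m2) < 1e-12
```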
#### (b) $$\begin{aligned} {\rm Im}~\mathcal{F}_b(E^2)&=& \frac{1}{8\pi}\, \frac{|\vec{q}_b|}{E}\,8 R_+R_-\,f_{\rho}^{-+}\big((E,\vec{0}),(\omega_{b,2}=\omega_{b,-},\vec{q}_b)\big) \,,\nonumber\\\end{aligned}$$ with $$\begin{aligned} |\vec{q}_b|^2 &\equiv& \frac{(E^2-2\omega_r)^2-4(\omega_r^2+\theta^2)}{4E^2} \,,\quad \omega_{b,\pm}^2~\equiv~ \frac{(E^2-2\omega_r)^2-4(\omega_r^2+\theta^2)}{4E^2}+\omega_r\pm i\theta \,,\end{aligned}$$ and $(k-q)^2=m_1^2\mapsto\omega_r+i\theta,\,q^2=m_2^2\mapsto\omega_r-i\theta,$ and $2q\cdot(k-q)=E^2-m_1^2-m_2^2\mapsto E^2-2\omega_r$, so that $$\begin{aligned} f_{\rho}^{-+}\big((E,\vec{0}),(\omega_{b,-},\vec{q}_b)\big) &=& [E^2/2-\omega_r]\big[(\omega_r+m^2)^2+\theta^2\big]^2 +\nonumber\\ && +2\, \big\{\big[M^3+\mu (\omega_r+m^2)\big]^2+\mu^2\theta^2\big\}\, \big[(\omega_r+m^2)^2+\theta^2\big]\,.\end{aligned}$$ The final form of this contribution is: $$\begin{aligned} {\rm Im}~\mathcal{F}_b(E^2)&=& \frac{1}{\pi}\, \frac{\sqrt{(E^2-2\omega_r)^2-4(\omega_r^2+\theta^2)}}{2E^2} \, R_+R_-\,\times \nonumber\\&& \!\!\!\!\!\!\!\!\times~ \Big\{[E^2/2-\omega_r]\big[(\omega_r+m^2)^2+\theta^2\big]^2 +2\, \big\{\big[M^3+\mu (\omega_r+m^2)\big]^2+\mu^2\theta^2\big\}\, \big[(\omega_r+m^2)^2+\theta^2\big] \Big\}\,,\label{Res-(b)}\end{aligned}$$ and the threshold is in this case $\tau_2=2\omega_r+2\sqrt{\omega_r^2+\theta^2}$, below which the square root above becomes imaginary, signaling the instability related to multiparticle production. #### (c) It is straightforward to see that this contribution will be exactly the same as in $(b)$, eq. . Finally, the full spectral function associated with the physical part of the correlator $\Braket{\rho_{\rho}^-\rho_{\rho}^+}_k$ (given in eq. 
) reads: $$\begin{aligned} \rho^{-+}(\tau)&=& \theta(\tau-\tau_1)~\frac{1}{\pi}\, {\rm Im}~\mathcal{F}_a(\tau) +\theta(\tau-\tau_2)~\frac{2}{\pi}\, {\rm Im}~\mathcal{F}_b(\tau) \,,\end{aligned}$$ with $ {\rm Im}~\mathcal{F}_a(\tau)$ and $ {\rm Im}~\mathcal{F}_b(\tau)$ given in eqs.  and , respectively. Putting everything together, we eventually obtain $$\begin{aligned} \label{corr4} \braket{\rho_\alpha^-\rho_\alpha^+}_k &=& \int_0^{\infty} {\ensuremath{\mathrm{d}}}\tau \frac{\rho^{-+}(\tau)}{\tau+k^2}+\big\{{\rm unphysical} \big\}\,,\quad \rho^{-+}(\tau)\geq0\,,\nonumber\\ \rho^{-+}(\tau)&=&\theta(\tau-\tau_1)\frac{R^2}{\pi^2}\sqrt{1/4-\omega/\tau}\left[\left(\tau/2-\omega\right)\left(\omega+m^2\right)^4+2\left(M^3+\mu(\omega+m^2)\right)^2\left(\omega+m^2\right)^2\right] \nonumber\\&&+\theta(\tau-\tau_2)\frac{R_+R_-}{\pi^2}\frac{\sqrt{(\tau-2\omega_r)^2-4(\omega_r^2+\theta^2)}}{\tau}\left[\left(\tau/2-\omega_r\right)\left((\omega_r+m^2)^2+\theta^2\right)^2 \right.\nonumber\\&&\left.+2\left(\left(M^3+\mu(\omega_r+m^2)\right)^2+\mu^2\theta^2\right)\left((\omega_r+m^2)^2+\theta^2\right)\right]\,,\end{aligned}$$ with thresholds $\tau_1=4\omega$, $\tau_2=2\omega_r+2\sqrt{\omega_r^2+\theta^2}$. Eq.  is the spectral representation of the single bubble approximation to the $\rho$-correlator. Adding (effective) QCD interactions: bubble resummation for $N\to\infty$ with a contact interaction --------------------------------------------------------------------------------------------------- In order to find a bound state, we need to take into account the QCD interaction structure. The quarks do not interact directly, the force being mediated by the gluon. Unfortunately, taking the full gluon interaction into account is a very complicated task. To simplify the analysis in this first attempt, we consider the observation made in e.g. [@Roberts:2011wy] that a gluon contact point interaction can give qualitatively good results. 
Specifically, it is by now well accepted that the Landau gauge gluon propagator $\mathcal{D}(p^2)$ becomes dynamically massive-like in the infrared. We make the assumption that the gluon is massive-like in the region $p<1$-$2~\text{GeV}$, where the relevant QCD physics is supposed to happen, and that it can be approximated by a constant in momentum space, $\mathcal{D}(p^2)=\frac{1}{\Delta^2}$, or by $\mathcal{D}(x-y)\propto \delta(x-y)$ in position space. Integrating out such a gluon at lowest order leads to an NJL-like (contact) interaction between the quarks; more precisely, one finds after some Fierz rearranging [@Buballa:2003qv] the interaction $\frac{1}{2}G (\overline q\vardiamond q)^2$ plus terms subleading in $1/N$, where $\vardiamond\in\{1,i\gamma^5,\frac{i}{\sqrt{2}}\gamma_\mu,\frac{i}{\sqrt{2}}\gamma_\mu\gamma_5\}$ and $G=\frac{2g^2}{\Delta^2}\frac{N^2-1}{N}$. To allow for additional simplification without throwing away too much crucial dynamics, we furthermore consider only the leading order in $1/N$. For the eventual coupling $G$, we estimate an appropriate value. In the Landau gauge, the gluon and ghost propagator form factors, $\mathcal{Z}_{gl}(p^2,{\overline{\mu}}^2)$ and $\mathcal{Z}_{gh}(p^2,{\overline{\mu}}^2)$, can be combined into a renormalization scale ${\overline{\mu}}$ independent strong coupling constant, $g^2(p^2)/(4\pi)\equiv\alpha(p^2)=\alpha({\overline{\mu}}^2) \mathcal{Z}_{gl}(p^2,{\overline{\mu}}^2) \mathcal{Z}_{gh}^2(p^2,{\overline{\mu}}^2)$, see e.g. [@Fischer:2002hna]. Using the most recent lattice data on this matter [@Bornyakov:2013pha], albeit for $N=2$ without fermions[^8], which rely on a MOM scheme ($\mathcal{Z}_{gh}=\mathcal{Z}_{gl}=1$ at $p^2={\overline{\mu}}^2$, with ${\overline{\mu}}=2.2~\text{GeV}$), we can estimate from their Figure 5 that $\alpha({\overline{\mu}})\sim0.5$. 
For the “constant” MOM scheme gluon propagator, we may simply set $\Delta^2={\overline{\mu}}^2$, thereby overestimating the UV and underestimating the IR, to roughly approximate $G\sim 8.5~\text{GeV}^{-2}$. Coincidentally, this value falls nicely in the NJL ballpark [@Klevansky:1992qe]. Note, though, that the NJL parameters are fixed by matching to experiment, whereas we do not use *any* such input: we only rely on quark/gluon lattice data (or analytical descriptions thereof). We will use $G=5,7.5,10~\text{GeV}^{-2}$ as exemplary values. In the large-$N$ approximation, we can then consistently consider the sum of bubble diagrams[^9] in our quark model, see Figure \[bubblechain\]. ![Bubble diagrams for the meson correlator. Full (open) circles represent four-fermion vertices (meson operator insertions); solid lines are nonperturbative quark propagators.[]{data-label="bubblechain"}](bubblechain.pdf){width="8cm"} The four-fermion coupling includes [*a priori*]{} all interaction channels $\vardiamond\in\{1,i\gamma^5,\frac{i}{\sqrt{2}}\gamma_\mu,\frac{i}{\sqrt{2}}\gamma_\mu\gamma_5\}$, while the initial and final insertions carry the vectorial character of the $\rho$ meson. In this case, only the vector four-fermion coupling $\gamma_\mu$ contributes. Let us show that bubbles that connect a vector insertion $\gamma^{\mu}$ with a coupling in a different channel (i.e. $1,\gamma^5,\gamma_\mu\gamma_5$) are absent: they either vanish identically due to Dirac algebra and integration symmetries or produce a term $\propto k^{\mu} f(k^2)$, which does not contribute to the transverse correlator (cf. ) under consideration. 
Indeed, the basic integral $$\begin{aligned} \mathcal{I}_{\mu\vardiamond}(k) =\int\frac{{\ensuremath{\mathrm{d}}}^4p}{(2\pi)^4} \frac{-{\rm Tr}\left[(i\slashed{p}+m)\gamma_{\mu}(i\slashed{p}-i\slashed{k}+m)\vardiamond\right] }{[p^2+m^2][(p-k)^2+m^2]} \label{Idiamond}\end{aligned}$$ furnishes the following trivial results for nonvector $\vardiamond$-insertions, $\vardiamond\in \{ 1,\gamma^5,\frac{1}{\sqrt{2}}\gamma_{\mu}\gamma^5 \} $: - $\vardiamond=1$: in this case, the Dirac trace in the integrand of eq. (\[Idiamond\]) reduces to $$\begin{aligned} \mathcal{I}_{\mu -}(k)&=& \int\frac{{\ensuremath{\mathrm{d}}}^4p}{(2\pi)^4} \frac{-{\rm Tr}\left[(i\slashed{p}+m)\gamma_{\mu}(i\slashed{p}-i\slashed{k}+m)\right] }{[p^2+m^2][(p-k)^2+m^2]}\nonumber \\ &=& -4im \int\frac{{\ensuremath{\mathrm{d}}}^4p}{(2\pi)^4} \frac{2p_{\mu}-k_{\mu} }{[p^2+m^2][(p-k)^2+m^2]}~\propto~k_{\mu} \, f(k^2) \,,\end{aligned}$$ where the last line is obtained after the introduction of a Feynman parameter in the usual way. Recalling that we are computing a two-point correlation function which is transverse, cfr. , it is straightforward to conclude that this term $\propto k_{\mu}$ does not contribute to our observable. - $\vardiamond=\gamma^5$: the Dirac trace in the integrand of eq.  vanishes identically. 
$$\begin{aligned} \mathcal{I}_{\mu 5}(k)&=& \int\frac{{\ensuremath{\mathrm{d}}}^4p}{(2\pi)^4} \frac{-{\rm Tr}\left[(i\slashed{p}+m)\gamma_{\mu}(i\slashed{p}-i\slashed{k}+m)\gamma^5\right] }{[p^2+m^2][(p-k)^2+m^2]}~=~0 \,,\end{aligned}$$ - $\vardiamond=\frac{1}{\sqrt{2}}\gamma_{\rho}\gamma^5$: in this case, only the term with four gamma matrices survives: $$\begin{aligned} \mathcal{I}_{\mu \rho5}(k)&=&\frac{1}{\sqrt{2}} \int\frac{{\ensuremath{\mathrm{d}}}^4p}{(2\pi)^4} \frac{-{\rm Tr}\left[(i\slashed{p}+m)\gamma_{\mu}(i\slashed{p}-i\slashed{k}+m)\gamma_{\rho}\gamma^5\right] }{[p^2+m^2][(p-k)^2+m^2]}\nonumber \\ &=& 4i \frac{1}{\sqrt{2}} \int\frac{{\ensuremath{\mathrm{d}}}^4p}{(2\pi)^4} \frac{p_{\sigma}(p_{\nu}-k_{\nu})\epsilon_{\sigma\mu\nu\rho} }{[p^2+m^2][(p-k)^2+m^2]}~=~ - 4i \frac{1}{\sqrt{2}} \, k_{\nu}\epsilon_{\sigma\mu\nu\rho}\, k_{\sigma} f(k^2)~=~0\,,\end{aligned}$$ where we have used the antisymmetry of the $\epsilon$-tensor, after again introducing a Feynman parameter in the last step. We can conclude, as anticipated above, that the bubble chain resummation that contributes to the rho-meson two-point correlator is a pure vector one. The resummation itself reduces to a geometric series involving the one-loop result, . With the already derived spectral form, we then get $$\mathcal{R}^{-+}(k^2)= \frac{\mathcal{F}^{-+}(k^2)}{1+\frac{G}{2} \mathcal{F}^{-+}(k^2)}\,,\quad \mathcal{F}^{-+}(k^2)=\int_0^\infty \frac{\rho^{-+}(\tau){\ensuremath{\mathrm{d}}}\tau }{\tau+k^2}.$$ An important observation is that the branch cut structure of $\mathcal{R}^{-+}(k^2)$ is determined by that of $\mathcal{F}^{-+}(k^2)$; in particular, $\mathcal{R}^{-+}(k^2)$ has a physical KL form if $\mathcal{F}^{-+}(k^2)$ does. The remaining task is to determine whether $\mathcal{R}^{-+}(k^2)$ allows for poles in the physical region $\max(-\tau_1,-\tau_2)<k^2<0$. Therefore, we need to solve the gap equation $\mathcal{F}^{-+}(k^2)=-2/G$. 
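The algebra behind this closed form is elementary: the bubble chain is a geometric series $\mathcal{F}\sum_{k\geq 0}\big(-\tfrac{G}{2}\mathcal{F}\big)^k$, convergent for $|G\mathcal{F}/2|<1$. A minimal numerical sketch (purely illustrative values for $G$ and $\mathcal{F}$, not taken from the analysis):

```python
# Sketch: the bubble-chain sum is a geometric series in -(G/2)*F, so its
# partial sums converge to the closed form F/(1 + (G/2)F) whenever |G F/2| < 1.
# The values of F and G below are purely illustrative.
G, F = 7.5, 0.05

partial, term = 0.0, F
for _ in range(200):          # F, -(G/2)F^2, +(G/2)^2 F^3, ...
    partial += term
    term *= -(G / 2) * F

closed_form = F / (1 + (G / 2) * F)
assert abs(partial - closed_form) < 1e-12
```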
As common for $4d$ quantum field theories, the quantity $\mathcal{F}^{-+}(k^2)$ is divergent due to violent UV behaviour, explicitly visible from the integral representation . We assume dimensional regularization and Landau gauge renormalization factors to kill off sub-loop divergences, whereas the residual infinities in the composite operator Green function can be taken care of by additive subtractions in the BPHZ approach. Taking $n>0$ subtractions at scale $\mathcal{T}>\max(-\tau_1,-\tau_2)$ corresponds to $$\label{sub1} \mathcal{F}^{-+}_{sub}(k^2,\mathcal{T})=(\mathcal{T}-k^2)^n\int_0^{\infty} {\ensuremath{\mathrm{d}}}\tau \frac{\rho^{-+}(\tau)}{(\tau+\mathcal{T})^n(\tau+k^2)}\,.$$ In the current case $n\geq2$ is required. If no subtractions were to be necessary, then $\mathcal{F}^{-+}(k^2)$ would be a strictly decreasing function thanks to $\rho^{-+}(\tau)\geq 0$, so at most one solution is possible. This property is not necessarily maintained at the subtracted level, except for $n=1$. Consequently, spurious extra solutions can appear, caused by the enforced functional behaviour after subtraction. Therefore, we take $n=3$, in which case only a single solution is found. Besides the pole of the propagator, the associated residue also carries physical information. With the conventions of [@Jansen:2009hr; @Maris:1999nt], we define the decay constant $f_{\rho^\pm}$ via $f_{\rho^\pm} m_{\rho^\pm} \varepsilon_\mu = \left\langle0| \overline u \gamma_\mu d|\rho^+\right\rangle$, with $\varepsilon_\mu$ the polarization vector of the $\rho^+$ meson, normalized as $\varepsilon_\mu\cdot\varepsilon_\mu=3$. From the matrix element representation of the KL spectral density, it becomes clear that $3f_{\rho^\pm}^2m_{\rho^\pm}^2$ corresponds to the residue of $\mathcal{R}^{-+}(k^2)$ at its pole $-m_{\rho^\pm}^2$. In Figure 3, we have displayed both $m_{\rho^\pm}(\mathcal{T})$ and $f_{\rho^\pm}(\mathcal{T})$. 
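Returning to the subtractions: their effect can be mimicked with a toy spectral density (our own illustration, not the paper's $\rho^{-+}$). A linearly growing $\rho(\tau)\sim\tau$ matches the large-$\tau$ behaviour of the one-loop spectral function; the unsubtracted dispersion integral then grows with the UV cutoff, while the $n=3$ subtracted version above is cutoff-stable:

```python
# Toy sketch (not the paper's spectral density): rho(tau) = tau mimics the
# large-tau growth of the one-loop spectral function. The unsubtracted
# dispersion integral then diverges linearly with the UV cutoff, while the
# n = 3 subtracted integral is cutoff-stable.
def trapezoid(f, a, b, h=0.02):
    n = max(1, int(round((b - a) / h)))
    step = (b - a) / n
    return step * (0.5 * f(a) + sum(f(a + i * step) for i in range(1, n)) + 0.5 * f(b))

rho = lambda tau: tau          # toy spectral density
k2, T, nsub = 1.0, 2.0, 3      # illustrative Euclidean k^2, scale T, subtractions

def F_unsub(cut):
    return trapezoid(lambda t: rho(t) / (t + k2), 0.0, cut)

def F_sub(cut):
    return (T - k2)**nsub * trapezoid(
        lambda t: rho(t) / ((t + T)**nsub * (t + k2)), 0.0, cut)

assert F_unsub(2000.0) - F_unsub(1000.0) > 900.0    # grows ~ linearly with cutoff
assert abs(F_sub(2000.0) - F_sub(1000.0)) < 1e-3    # stabilized by subtractions
```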
To get a reasonable value for $\mathcal{T}$, we rely on PMS, as observable quantities should not depend on a chosen subtraction scale. Since we have two (related) physical quantities at hand, we minimized $\delta(\mathcal{T})=|\overline m_{\rho^\pm}'(\mathcal{T})|+|\overline f_{\rho^\pm}'(\mathcal{T})|$ where $\prime={\partial}/{\partial}\mathcal{T}$, and we used the rescaled mass and decay constant, $\overline m_{\rho^\pm}(\mathcal{T})=m_{\rho^\pm}(\mathcal{T})/m_{\rho^\pm}(-\tau_2)$, $\overline f_{\rho^\pm}(\mathcal{T})=f_{\rho^\pm}(\mathcal{T})/f_{\rho^\pm}(-\tau_2)$ to attribute to both quantities an “equal start”. We choose the smallest possible scale, viz. $\mathcal{T}=-\tau_2$, as reference. Figure 3 shows that $\delta(\mathcal{T})$ develops a clear minimum. We find $\mathcal{T}^{G=5,~7.5,~10}_\ast\approx-0.38,-0.43,-0.46~\text{GeV}^2$ leading to $m_{\rho^\pm}^{G=5,~7.5,~10}\approx0.84,0.83,0.83~\text{GeV}$ and $f_{\rho^\pm}^{G=5,~7.5,~10}\approx0.13,0.10,0.09~\text{GeV}$. The mass estimate is pretty stable, while the decay constant seems to be more sensitive to the value of the coupling. We refrained from attempting a detailed error analysis given the various approximations, but it is at least reassuring that we find a result not too far off the experimental $\rho$ meson mass, $775.49\pm0.34~\text{MeV}$ [@Beringer:1900zz], despite the approximations made (contact point interaction, bubble approximation), on top of having used heavier bare quarks. The parameters involved were the two vacuum condensates and a value for the coupling. No empirical input was invoked, only lattice data in the quark/gluon/ghost sector. For the decay constant, we can quote experimental and lattice estimates, $f_{\rho^\pm}^{exp}\approx 0.208~\text{GeV}$ [@Becirevic:2003pn], $f_{\rho^\pm}^{latt}\approx 0.25~\text{GeV}$ [@Jansen:2009hr]. 
Our estimates, crude as they may look due to the intermediate approximations, are somewhat lower, though it must be remarked that the heavier the particle mass, the smaller the decay constant will be for a fixed residue. If we, for example, set $\mathcal{T}$ to the value at $G=5~\text{GeV}^{-2}$ for which $m_{\rho^\pm}$ equals its experimental value, we obtain $f_{\rho^\pm}\approx 0.16~\text{GeV}$, whose deviation from the experimental value is of the same size as that of the lattice result. ![$m_{\rho^\pm}$ (full line) and $f_{\rho^\pm}$ (dashed line) in terms of $\mathcal{T}$ (left) and $\delta(\mathcal{T})$ (right) (see main text for definitions).](rhomassadecaydelta.png){width="15cm"} Summary and outlook =================== We presented a novel calculable quark model that displays quark confinement and the right (chiral) properties to describe the mesonic QCD sector as bound states of unphysical (confined) degrees of freedom. To our knowledge, this is the first QCD model that has the concrete feature of a quark-free spectrum built in, while remaining compatible with lattice quark data and able to describe mesons. It would, for example, be interesting to (i) consider a full-blown study of $\Gamma$, a challenging task [@Verschelde:1995jj] currently being undertaken, which can avoid using lattice input, (ii) consider other meson states, (iii) derive the Gell-Mann-Oakes-Renner relations for multiflavour versions with bare quark masses, etc. As our model incorporates an effective way of confining quarks simultaneously with breaking the chiral symmetry, thereby generating propagators that are consistent with lattice QCD, it might provide a setup to investigate nontrivial finite temperature dynamics [@Fukushima:2012qa], given the intertwining of (de)confinement and chiral symmetry breaking/restoration. Acknowledgments {#acknowledgments .unnumbered} =============== D. D. acknowledges support from the Research-Foundation Flanders, S. P. S., L. F. P. and M. S. G. 
from CNPq-Brazil, Faperj, SR2-UERJ and CAPES. L. F. P. is an Alexander von Humboldt Foundation fellow. We thank O. Oliveira for Figure 1 and the fit, and U. Heller for the data of [@Parappilly:2005ei]. [99]{} A. Bashir, A. Raya, J. Rodriguez-Quintero, arXiv:1302.5829 \[hep-ph\]. L. Baulieu, M. A. L. Capri, A. J. Gomez, V. E. R. Lemes, R. F. Sobreiro, S. P. Sorella, Eur. Phys. J. C [**66**]{} (2010) 451; L. Baulieu, S. P. Sorella, Phys. Lett. B [**671**]{} (2009) 481. V. N. Gribov, Nucl. Phys. B [**139**]{} (1978) 1; N. Vandersickel, D. Zwanziger, Phys. Rept. [**520**]{} (2012) 175; D. Dudal, J. A. Gracey, S. P. Sorella, N. Vandersickel, H. Verschelde, Phys. Rev. D [**78**]{} (2008) 065047; J. A. Gracey, Phys. Rev. D [**82**]{} (2010) 085032; F. Canfora, L. Rosa, Phys. Rev. D [**88**]{} (2013) 045025. J. Serreau, M. Tissier, Phys. Lett. B [**712**]{} (2012) 97. H. Verschelde, Phys. Lett. B [**351**]{} (1995) 242. T. Banks, S. Raby, Phys. Rev. D [**14**]{} (1976) 2182. P. M. Stevenson, Phys. Rev. D [**23**]{} (1981) 2916. R. Jackiw, Phys. Rev. D [**9**]{} (1974) 1686. S. Pokorski, *Gauge Field Theories*, Cambridge, UK: Univ. Pr. (1987). S. Furui, H. Nakajima, Phys. Rev. D [**73**]{} (2006) 074503. M. B. Parappilly, P. O. Bowman, U. M. Heller, D. B. Leinweber, A. G. Williams, J. B. Zhang, Phys. Rev. D [**73**]{} (2006) 054504. G. Burgio, M. Schrock, H. Reinhardt, M. Quandt, Phys. Rev. D [**86**]{} (2012) 014506. E. Rojas, J. P. B. C. de Melo, B. El-Bennich, O. Oliveira, T. Frederico, arXiv:1306.3022 \[hep-ph\]. D. Diakonov, V. Y. Petrov, Nucl. Phys. B [**272**]{} (1986) 457. D. Diakonov, V. Y. Petrov, P. V. Pobylitsa, Nucl. Phys. B [**306**]{} (1988) 809. M. S. Bhagwat, M. A. Pichowsky, C. D. Roberts, P. C. Tandy, Phys. Rev. C [**68**]{} (2003) 015203. R. Alkofer, C. S. Fischer, F. J. Llanes-Estrada, K. Schwenzer, Annals Phys. [**324**]{} (2009) 106. M. A. L. Capri, D. Dudal, M. S. Guimaraes, L. F. Palhares, S. P. Sorella, Int. J. Mod. Phys. 
A [**28**]{} (2013) 1350034. L. Baulieu, D. Dudal, M. S. Guimaraes, M. Q. Huber, S. P. Sorella, N. Vandersickel, D. Zwanziger, Phys. Rev. D [**82**]{} (2010) 025021; D. Dudal, M. S. Guimaraes, S. P. Sorella, Phys. Rev. Lett. [**106**]{} (2011) 062003. D. Dudal, M. S. Guimaraes, Phys. Rev. D [**83**]{} (2011) 045013. H. L. L. Roberts, A. Bashir, L. X. Gutierrez-Guerrero, C. D. Roberts, D. J. Wilson, Phys. Rev. C [**83**]{} (2011) 065206. M. Buballa, Phys. Rept. [**407**]{} (2005) 205. S. P. Klevansky, Rev. Mod. Phys. [**64**]{} (1992) 649. C. S. Fischer, R. Alkofer, Phys. Lett. B [**536**]{} (2002) 177; A. C. Aguilar, D. Binosi, J. Papavassiliou, J. Rodriguez-Quintero, Phys. Rev. D [**80**]{} (2009) 085018. V. G. Bornyakov, E.-M. Ilgenfritz, C. Litwinski, V. K. Mitrjushkin, M. Muller-Preussker, arXiv:1302.5943 \[hep-lat\]. B. Blossier, P. Boucaud, M. Brinet, F. De Soto, V. Morenas, O. Pene, K. Petrov, J. Rodriguez-Quintero, arXiv:1310.3763 \[hep-ph\]; P. Boucaud, M. Brinet, F. De Soto, V. Morenas, O. Pene, K. Petrov, J. Rodriguez-Quintero, arXiv:1310.4087 \[hep-ph\]. J. Beringer [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Rev. D [**86**]{} (2012) 010001. K. Jansen [*et al.*]{} \[ETM Collaboration\], Phys. Rev. D [**80**]{} (2009) 054510. P. Maris, P. C. Tandy, Phys. Rev. C [**60**]{} (1999) 055214. D. Becirevic, V. Lubicz, F. Mescia, C. Tarantino, JHEP [**0305**]{} (2003) 007. K. Fukushima, K. Kashiwa, Phys. Lett. B [**723**]{} (2013) 360; S. Benic, D. Blaschke, M. Buballa, Phys. Rev. D [**86**]{} (2012) 074002. [^1]: [email protected] [^2]: [email protected] [^3]: [email protected] [^4]: [email protected] [^5]: We did not write bare quark masses. For simplicity of presentation, we work with one flavour. [^6]: The operator $\mathcal{O}_1$ is split into 4 pieces which are separately studied using 4 sources, see eqns. (25)-(27) in [@Baulieu:2009xr]. From eq.
(72), it can be inferred that these 4 operators share their renormalization factor, thus the 4 sources can be taken equal to our $J_1$. [^7]: We use for the moment Minkowski notation. The results however can be continued to Euclidean space and complex-conjugate poles, as discussed in [@Dudal:2010wn]. [^8]: See the recent works [@Blossier:2013ioa] for a thorough study including multiple dynamical flavours. [^9]: This is also known as the Random Phase Approximation (see e.g. [@Buballa:2003qv]).
--- abstract: 'We consider the possibility of observing deviations from the Standard Model gauge-boson self-couplings at a future $500$ GeV $e^+ e^-$ linear collider. We concentrate on the case in which the electroweak symmetry breaking sector is strongly interacting and there are no new resonances within reach of the collider. We find a sensitivity to the anomalous couplings that is two orders of magnitude higher than that achievable at LEP II. We also show how a polarized electron beam extends the reach of the collider, allowing experiments to probe different directions in parameter space.' --- BINP-95-02\ UCD-95-33\ ISU-HET-95-5\ October 1995 [**Study of Anomalous Couplings at a $500$ GeV $e^+e^-$ Linear Collider with Polarized Beams**]{}\ [*$^{(a)}$ On leave of absence from the Branch of The Institute for Nuclear Physics,\ Protvino, 142284 Russia*]{}. E-mail: [email protected]\ [*$^{(b)}$ Department of Physics, University of California at Davis, Davis CA 95616*]{}\ E-mail: [email protected]\ [*$^{(c)}$ Department of Physics, Iowa State University, Ames IA 50011*]{}\ E-mail: [email protected]\ Introduction ============ The Standard Model of electroweak interactions is in remarkable agreement with all precision measurements performed thus far [@reviewsm]. These measurements, however, have not directly probed energy scales higher than a few hundred GeV, and precise measurements have been limited to scales up to the $Z$-mass. This has been used as a motivation to propose tests of the Standard Model by studying the self-couplings of the electroweak gauge bosons in future colliders.
Deviations from the self-couplings predicted by the minimal Standard Model are called “anomalous” gauge boson couplings and have been studied extensively in recent years. In particular, they have been discussed in the context of future $e^+e^-$ colliders by many authors [@boud; @group]. There are two main differences between our present study and those that can be found in the literature. We interpret the success of the Standard Model as an indication that the $SU(2)_L\times U(1)_Y$ gauge theory of electroweak interactions is essentially correct, and that the only sector of the theory that has not been probed experimentally is the electroweak symmetry breaking sector. This point of view has many practical consequences in limiting the number of anomalous couplings that need to be studied, and in estimating their possible magnitude [@bdv]. A second difference from other studies is that we consider the effect of having polarized beams. This paper is organized as follows. In Section 2 we summarize the effective Lagrangian formalism that we use to describe the anomalous couplings. In Section 3 we apply these results to a $500$ GeV linear collider with polarized beams and discuss the relevant phenomenology. Finally, we present our conclusions. Anomalous Couplings for a Strongly-Interacting Electroweak Symmetry Breaking Sector =================================================================================== We wish to describe the electroweak symmetry breaking sector in the case in which there is no light Higgs boson or any other new particle. To do this in a model-independent manner we use an effective Lagrangian for the interactions of gauge bosons of an $SU(2)_L \times U(1)_Y$ gauge symmetry spontaneously broken to $U(1)_Q$.
The lowest order effective Lagrangian contains a gauge invariant mass term as well as the kinetic terms for the $SU(2)_L$ and $U(1)_Y$ gauge bosons [@longo]: $${\cal L} ^{(2)}=\frac{v^2}{4}\mbox{Tr}\biggl(D^\mu \Sigma^{\dagger} D_\mu \Sigma \biggr) -\frac{1}{2}\mbox{Tr}\biggl(W^{\mu\nu}W_{\mu\nu}\biggr) -\frac{1}{2}\mbox{Tr}\biggl(B^{\mu\nu}B_{\mu\nu}\biggr)\; . \label{lagt}$$ $W_{\mu\nu}$ and $B_{\mu\nu}$ are the $SU(2)_L$ and $U(1)_Y$ field strength tensors $$\begin{aligned} W_{\mu\nu}&=&{1 \over 2}\biggl(\partial_\mu W_\nu - \partial_\nu W_\mu + {i \over 2}g[W_\mu, W_\nu]\biggr)\:, \nonumber \\ B_{\mu\nu}&=&{1\over 2}\biggl(\partial_\mu B_\nu-\partial_\nu B_\mu\biggr) \tau_3\:,\end{aligned}$$ and $W_\mu \equiv W^i_\mu \tau_i$. The Pauli matrices $\tau_i$ are normalized so that $Tr(\tau_i\tau_j)=2\delta_{ij}$. The matrix $\Sigma \equiv \exp(i\vec{\omega}\cdot \vec{\tau} /v)$ contains the would-be Goldstone bosons $\omega_i$ that give the $W$ and $Z$ their mass via the Higgs mechanism, and the $SU(2)_L \times U(1)_Y$ covariant derivative is given by: $$D_\mu \Sigma = \partial_\mu \Sigma +{i \over 2}g W_\mu^i \tau^i\Sigma -{i \over 2}g^\prime B_\mu \Sigma \tau_3\:. \label{covd}$$ The physical masses are obtained with $v \approx 246$ GeV. This non-linear realization of the symmetry breaking sector is a non-renormalizable theory that is interpreted as an effective field theory, valid below some scale $\Lambda \leq 3$  TeV. The lowest order interactions between the gauge bosons and fermions are the same as those in the minimal Standard Model. Deviations from these minimal couplings (referred to as anomalous gauge boson couplings), correspond to higher dimension ($SU(2)_L\times U(1)_Y$ gauge invariant) operators. For energies below the scale of symmetry breaking $\Lambda$, it is possible to organize the effective Lagrangian in a way that corresponds to an expansion of scattering amplitudes in powers of $E^2/\Lambda^2$. 
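The statement that the physical masses follow from Eq. (\[lagt\]) with $v \approx 246$ GeV can be made explicit in unitary gauge ($\Sigma = 1$); this is the standard textbook computation, sketched here only as a consistency check with the normalization $\mbox{Tr}(\tau_i\tau_j)=2\delta_{ij}$ used above. From Eq. (\[covd\]), $$D_\mu \Sigma \big|_{\Sigma=1} = \frac{i}{2}\bigl(g W_\mu - g^\prime B_\mu \tau_3\bigr)\:,$$ so that $$\frac{v^2}{4}\mbox{Tr}\biggl(D^\mu \Sigma^{\dagger} D_\mu \Sigma \biggr)\bigg|_{\Sigma=1} = \frac{v^2}{8}\Bigl[g^2\bigl(W^1_\mu W^{1\,\mu}+W^2_\mu W^{2\,\mu}\bigr) + \bigl(g W^3_\mu - g^\prime B_\mu\bigr)\bigl(g W^{3\,\mu} - g^\prime B^\mu\bigr)\Bigr]\:,$$ which yields $M_W = gv/2$ and $M_Z = \sqrt{g^2+g^{\prime\,2}}\,v/2$, while the combination orthogonal to $g W^3_\mu - g^\prime B_\mu$ (the photon) remains massless.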
The next to leading order effective Lagrangian that arises in this context has been discussed at length in the literature [@longo; @holdom; @fls; @bdv; @appel]. The contributions of this Lagrangian to the anomalous couplings have also been written down before [@appel]. In this paper we consider the process $e^+e^- \to W^+ W^-$ at tree level and work in unitary gauge; therefore, the anomalous couplings enter the calculation only through the three gauge boson vertex $VW^+W^-$ (where $V=Z,\gamma$).[^1] It is conventional to write the most general $CP$ conserving $VW^+W^-$ vertex in the form [@hagi]: $$\begin{aligned} {\cal L}_{WWV}&= & -ie {c_\theta \over s_\theta } g_1^Z \biggl( W_{\mu\nu}^{\dagger} W^{\mu}-W_{\mu\nu} W^{\mu~\dagger}\biggr) Z^\nu -ie g_1^\gamma\biggl( W_{\mu\nu}^{\dagger} W^{\mu}-W_{\mu\nu} W^{\mu~\dagger}\biggr) A^\nu \nonumber \\ && -ie {c_\theta \over s_\theta } \kappa_Z W_{\mu}^{\dagger} W_{\nu}Z^{\mu\nu} -ie \kappa_\gamma W_{\mu}^{\dagger} W_{\nu}A^{\mu\nu} \nonumber \\ & & -e {c_\theta \over s_\theta } g_5^Z \epsilon^{\alpha\beta\mu\nu}\biggl( W_\nu^-\partial_\alpha W_\beta^+-W_\beta^+\partial_\alpha W_\nu^-\biggr)Z_\mu \;, \label{vertx}\end{aligned}$$ where $s_\theta=\sin \theta^{}_W, c_\theta =\cos \theta^{}_W$. The effective Lagrangian framework for the case of a strongly interacting symmetry breaking sector predicts the five constants in Eq.
(\[vertx\]), they are [@appel; @valen2]: $$\begin{aligned} g_1^Z&=&1+{e^2\over c_\theta^2} \biggl({1\over 2 s_\theta^2 } L_{9L} +{1\over (c_\theta^2-s_\theta^2)}L_{10}\biggr){v^2\over \Lambda^2}+\cdots \;,\nonumber \\ g_1^\gamma&=& 1+\cdots \;, \nonumber \\ \kappa_Z&=&1+ e^2\biggl({1\over 2 s_\theta^2 c_\theta^2} \biggl(L_{9L}c_\theta^2 -L_{9R}s_\theta^2\biggr) +{2 \over (c_\theta^2-s_\theta^2)}L_{10} \biggr){v^2\over \Lambda^2}+\cdots \;, \label{unot} \\ \kappa_\gamma&=&1+{e^2 \over s_\theta^2} \biggl({L_{9L}+L_{9R}\over 2} -L_{10}\biggr) {v^2\over \Lambda^2}+ \cdots\;, \nonumber \\ g_5^Z &=& {e^2 \over s_\theta^2 c_\theta^2}\hat{\alpha}{v^2\over \Lambda^2}+\cdots\;. \nonumber\end{aligned}$$ In Eq. (\[unot\]) we have written down the leading contribution to each anomalous coupling,[^2] and denoted by $\cdots$ other contributions that arise at higher order (${\cal O}(1/\Lambda^4)$), or at order ${\cal O}(1/\Lambda^2)$ with custodial $SU(2)$ breaking. We are thus assuming that whatever breaks electroweak symmetry has at least an approximate custodial symmetry. 
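The deviations in Eq. (\[unot\]) are numerically small. The following sketch evaluates the leading terms with illustrative inputs ($\sin^2\theta_W \approx 0.23$, $v = 246$ GeV, $\Lambda = 2$ TeV, and couplings of ${\cal O}(1)$ — assumptions for orientation, not fitted values):

```python
import math

# Illustrative inputs (assumptions): alpha = 1/128.8, sin^2(theta_W) ~ 0.23
e2 = 4 * math.pi / 128.8            # e^2 = 4*pi*alpha
s2, c2 = 0.23, 0.77                 # sin^2 and cos^2 of the weak mixing angle
scale = (0.246 / 2.0) ** 2          # v^2 / Lambda^2 for v = 246 GeV, Lambda = 2 TeV

L9L, L9R, L10, alpha_hat = 1.0, 1.0, 0.0, 1.0   # couplings of O(1)

# Leading deviations from the tree-level SM values, dropping the "..." terms
dg1Z    = (e2 / c2) * (L9L / (2 * s2) + L10 / (c2 - s2)) * scale
dkappaZ = e2 * ((L9L * c2 - L9R * s2) / (2 * s2 * c2) + 2 * L10 / (c2 - s2)) * scale
dkappaG = (e2 / s2) * ((L9L + L9R) / 2 - L10) * scale
g5Z     = (e2 / (s2 * c2)) * alpha_hat * scale

for name, val in [("g1Z - 1", dg1Z), ("kappa_Z - 1", dkappaZ),
                  ("kappa_gamma - 1", dkappaG), ("g5Z", g5Z)]:
    print(f"{name:>15s} = {val:+.4f}")
```

All deviations come out below the percent level, which is why high statistics and/or polarization are needed to resolve them.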
Under these assumptions there are only four operators in the next to leading order effective Lagrangian that are relevant: $$\begin{aligned} {\cal L} ^{(4)}\ &=&\ \frac{v^2}{\Lambda^2} \biggl\{ - i g L_{9L} \,\mbox{Tr}\biggl( W^{\mu \nu} D_\mu \Sigma D_\nu \Sigma^{\dagger}\biggr) \ -\ i g^{\prime} L_{9R} \,\mbox{Tr} \biggl(B^{\mu \nu} D_\mu \Sigma^{\dagger} D_\nu\Sigma\biggr) \nonumber \\ & +& g g^{\prime} L_{10}\, \mbox{Tr}\biggl( \Sigma B^{\mu \nu} \Sigma^{\dagger} W_{\mu \nu}\biggr)\ +\ g {\hat \alpha} \epsilon^{\alpha \beta \mu \nu}\mbox{Tr}\biggl(\tau_3 \Sigma^{\dagger} D_\mu \Sigma\biggr) \mbox{Tr}\biggl( W_{\alpha \beta} D_\nu \Sigma \Sigma^{\dagger}\biggr) \biggr\} \label{lfour}\end{aligned}$$ The first three terms conserve the custodial $SU(2)_C$ symmetry, and we have explicitly introduced the factor $v^2/\Lambda^2$ in our definition of ${\cal L}^{(4)}$ so that the $L_i$ are naturally of ${\cal O}(1)$. The term with $\hat{\alpha}$ breaks the custodial symmetry but we include it because it provides the leading contribution to $g_5^Z$. In theories with a custodial symmetry, this term is, therefore, expected to be smaller than the other ones in Eq. (\[lfour\]). This term is also special in that it is the only one at ${\cal O}(1/\Lambda^2)$ that violates parity while conserving $CP$. With our normalization, we expect $\hat{\alpha}$ to be of ${\cal O}(1)$ in theories without a custodial symmetry and much smaller in theories that have a custodial symmetry [@dv]. For our discussion we will assume that the new physics is such that the tree-level coefficients of ${\cal L}^{(4)}$ are larger than the (formally of the same order) effects induced by ${\cal L}^{(2)}$ at one-loop. More precisely, that after using dimensional regularization and a renormalization scheme similar to the one used in Ref. 
[@bdv], the $L_i(\mu)$ evaluated at a typical scale (around 500 GeV for this process) are equal to the tree-level coefficients, and that their scale dependence is unimportant for the energies of interest. The physical motivation for this assumption is that, even if we do not see any new resonances directly, the effects of the new physics from high mass scales must clearly stand out if there is to be any hope of observing them. When the indirect effects of the new physics enter at the level of SM radiative corrections, very precise experiments (as the ones being performed at LEP I) are needed to unravel them. We are assuming that there will not be any such precision measurements in the next generation of high energy colliders. All the necessary Feynman rules in unitary gauge have been written down in Ref. [@valen2]. For our numerical study we will use the input parameters: $$M_Z = 91.187 \mbox{ GeV,\ \ } \alpha = 1/128.8\:,\;\; G_F=1.166\cdot 10^{-5}\mbox{~GeV}^{-2}\:. \label{param}$$ We will also use $\Lambda=2$ TeV as the scale normalizing our next to leading order effective Lagrangian, Eq. (\[lfour\]). The parameter $L_{10}$ can be very tightly constrained by precision measurements at LEP I [@valen1]: $$-1.1 \leq L_{10}(M_Z) \leq 1.5\:. \label{altb}$$ We find that this bound cannot be significantly improved with a 500 GeV linear collider so we will not study $L_{10}$ further in this paper. To summarize, we consider the next to leading order effective Lagrangian for a $CP$ conserving, strongly interacting, electroweak symmetry breaking sector with an (at least) approximate custodial symmetry. We then find that the leading contribution to the anomalous couplings relevant for $e^+e^- \ra W^+W^-$ at $\sqrt{s}=500$ GeV can be written down in terms of four coupling constants. Finally we note that one of those coupling constants has already been tightly constrained at LEP I. 
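For later numerical estimates it is useful to note the weak mixing angle implied by the inputs in Eq. (\[param\]). The paper does not spell out its scheme, so the tree-level relation $s_\theta^2 c_\theta^2 = \pi\alpha/(\sqrt{2}\,G_F M_Z^2)$ used below is only an assumed, standard choice:

```python
import math

# Input parameters from Eq. (param)
MZ, alpha, GF = 91.187, 1 / 128.8, 1.166e-5   # GeV, dimensionless, GeV^-2

# Assumed tree-level scheme: s^2 c^2 = pi*alpha / (sqrt(2) * GF * MZ^2)
A = math.pi * alpha / (math.sqrt(2) * GF * MZ**2)
s2 = (1 - math.sqrt(1 - 4 * A)) / 2           # take the root with s^2 < 1/2

print(f"sin^2(theta_W) = {s2:.4f}")
```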
We are thus left with a model that contains only three parameters $L_{9L}$, $L_{9R}$, and $\hat{\alpha}$. In the following sections we discuss the phenomenology of these three constants at a future linear collider with polarized beams. Bounds from the process $e^+e^-\to W^+W^-$ ========================================== The process of $W$-boson pair production in $e^+ e^-$ collisions in the Born approximation is determined by the diagrams shown in Fig. \[ffr\]. The full circles represent vertices that include both the standard model and the anomalous couplings. The anomalous couplings enter these vertices directly or through renormalization of standard model parameters [@valen2]. We will denote the degree of longitudinal polarization of the electron and positron by $z_1$ and $z_2$, respectively. Our notation is such that $z_1=1$ corresponds to a [*right-handed*]{} electron, whereas $z_2=1$ corresponds to a [*left-handed*]{} positron. The cross section for $e^+e^-\to W^+W^-$ with polarized beams can be written in terms of the usual Mandelstam variables $s$ and $t$ as: $$\begin{aligned} \int_{t_{min}}^{t_{max}}\frac{d\sigma}{dt}dt &=& \frac{\pi\alpha^2}{4 s^2 M_W^4} \nonumber \\ &\cdot& \sum^{3}_{i,j=1} C_{ij}\left(T_{ij}(t_{max})- T_{ij}(t_{min})\right)\:. \label{sigmat}\end{aligned}$$ The terms $C_{ij}T_{ij}$ give the contributions to the cross-section of the products of pairs of amplitudes from the corresponding diagrams (see Fig. \[ffr\]). The coefficients $C_{ij}$ depend on the electroweak parameters and on the polarization of the initial particles.
They are: $$\begin{aligned} C_{11} & = & \frac{S_1}{s^2}, \nonumber \\ C_{12} & = & -2\frac{(s-M_Z^2)c_\theta }{s_\theta s ((s-M_Z^2)^2 + M_Z^2 \Gamma_Z^2)}, \nonumber \\ C_{22} & = & \frac{c^2_\theta}{s^2_\theta ((s-M_Z^2)^2 + M_Z^2 \Gamma_Z^2)}, \\ C_{13} & = & \frac{S_2 - S_1}{2s s^2_\theta}, \nonumber \\ C_{23} & = & \frac{(s-M_Z^2)c_\theta }{2 s^3_\theta ((s-M_Z^2)^2 + M_Z^2 \Gamma_Z^2)}, \nonumber \\ C_{33} & = & \frac{S_1 - S_2}{8 s^4_\theta}, \nonumber\end{aligned}$$ where $S_1$ and $S_2$ carry the dependence on the beam polarization: $$S_1 = 1+z_1 z_2,\;\;\; S_2 = z_1 + z_2\:. \label{polar}$$ Analytic expressions for $T_{ij}=T_{ij}(M_W,\kappa_{\gamma,Z}, g_{1\gamma,1Z},g_5, s,t)$ are given in the Appendix. With $\theta$ the angle between the incoming electron and the outgoing $W^-$ in the $e^+e^-$ center of mass frame, we can use Eq. (\[sigmat\]) to construct the differential cross-section and the $\cos\theta$ distribution for any angular binning. Assumed experimental parameters ------------------------------- In order to study the physics of anomalous couplings at a $500$ GeV linear collider, we first need to know some machine and detector parameters. For the collider we will use an integrated luminosity of $\int {\cal L}dt=50 \; fb^{-1}$ per year and a center of mass energy of $\sqrt{s} = 500$ GeV, the numbers commonly used for NLC, CLIC, VLEPP and JLC projects. For the maximal degree of beam polarization we use the values determined by the VLEPP study group [@vlepppol]: $z_1,\: z_2=(-0.8,\: 0.8)$. Depending on the mechanism used to polarize the beams it should at least be possible to achieve this high a polarization for the electrons [@nopos]. This is very encouraging because we will find that to place bounds on the anomalous gauge boson couplings of our model there is no need for positron polarization. We will use the conservative estimates of Ref. 
[@djoud; @miller] for the expected systematic errors in the measurements of the muonic and hadronic cross-sections and asymmetries, and in the luminosity in the experiments at the 500-GeV collider:

                     $\Delta\epsilon_{\mu}/\epsilon_{\mu}$   $\Delta\epsilon_{h}/\epsilon_{h}$   $\Delta A^l_{FB}$   $\Delta A_{LR}$   $\Delta L/L$
  ----------------- --------------------------------------- ----------------------------------- ------------------- ----------------- --------------
  $\Delta_{syst}$   0.5%                                    1.%                                 $\ll$1.%            0.003             1.%

A detailed investigation of the process $e^+e^-\to W^+W^-$ has shown that the systematic error in the cross-section measurement can be $\sim$2% [@frank; @gouna; @choi]. This error is due to the uncertainty in the luminosity measurement ($\delta {\cal L}\simeq$1%), the error in the acceptance ($\delta_{accep.}\simeq$1%), the error for background subtraction ($\delta_{backgr.}\simeq$0.5%) and a systematic error for the knowledge of the branching ratio ($\delta_{Br}\simeq$0.5%). In order to fully reconstruct the $WW$-pair events and to identify the $W$ charges, we consider only the “semileptonic” channel, namely, $WW \rightarrow l^\pm \nu + 2$-jets. According to the preliminary estimates of Ref. [@frank; @gouna], the efficiency for $WW$-pair reconstruction (using the “semileptonic” channel) is $\epsilon_{WW}=0.15$. It is easy to estimate that for the anticipated luminosity of $\sim 50\; fb^{-1}$ the expected number of $WW$ events before reconstruction is $\sim 3.7\times 10^5$, which corresponds to a relative statistical error in the cross-section value of $\sim 0.17$%. After reconstruction, the number of $WW$-pairs is about $\sim 5.5\times 10^4$, which corresponds to a relative statistical error of $\sim 0.4$%. This means that for this process the systematic error may be the dominant one. However, this situation could change when there are kinematical cuts, or when the beams are polarized.
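These statistics follow from $N=\epsilon_{WW}\,{\cal L}\,\sigma$ and $\delta_{stat}=1/\sqrt{N}$; a quick sketch (taking the quoted $\sim 3.7\times 10^5$ events before reconstruction at face value) checks the arithmetic:

```python
import math

N_produced = 3.7e5            # quoted number of WW events before reconstruction
eps_WW = 0.15                 # reconstruction efficiency, semileptonic channel

N_rec = eps_WW * N_produced                # ~5.5e4 reconstructed pairs
d_stat_raw = 1 / math.sqrt(N_produced)     # ~0.16%, quoted as ~0.17%
d_stat_rec = 1 / math.sqrt(N_rec)          # ~0.4%

# Systematic error from the quoted components: dL=1%, acceptance=1%, bkg=0.5%, Br=0.5%
d_syst = math.sqrt(0.01**2 + 0.01**2 + 0.005**2 + 0.005**2)   # ~1.6%, i.e. "~2%"

print(N_rec, d_stat_raw, d_stat_rec, d_syst)
```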
To be conservative, we thus include both the statistical error and an estimate of a possible systematic error in our analysis. Observables used to bound new physics ------------------------------------- The choice of experimental observables and data processing procedure is crucial in analyzing the capability of the future $e^+e^-$ collider to place bounds on new physics. The total and differential cross-sections, as well as the asymmetries of the process under study, are commonly used. To discuss the sensitivity of the $e^+e^-\to W^+W^-$ process to $L_{9L}$, $L_{9R}$, and $\hat{\alpha}$, we will use the total cross-section $\sigma_{total}$ and the asymmetry $A_{FB}$. For this process these quantities are defined analogously to the case of $e^+e^-\to f\bar f$.[^3] Typically one uses the SM predictions as the “experimental” data,[^4] and considers possible effects due to new physics as small deviations. One then requires agreement between the predictions including new physics and the “experimental” values within expected experimental errors. The parameters representing new physics are, thus, bound by requiring that their effect on the selected observables be smaller than the expected experimental errors. It is common to consider differential distributions such as $d\sigma/d\cos\theta$ as observables (where $\theta$ is the angle between the $e^-$-beam direction and the direction of the $W^-$). However, as it has been emphasized in Ref. [@my], it is difficult to perform a meaningful analysis of these distributions in the absence of real experimental data and detailed knowledge of the detector. We start our analysis using the total cross-section and forward-backward asymmetry as observables. These two observables are constructed from the independent measurements of the forward and backward cross-sections $\sigma_F$ and $\sigma_B$. 
The two observables: $\sigma = \sigma_F + \sigma_B$ and $\sigma\cdot A_{FB} = \sigma_F - \sigma_B$ are thus independent and we can analyze them simultaneously by requiring that: $$\sqrt{\left(\frac{\sigma -\tilde\sigma}{\Delta\sigma} \right)^2+ \left(\frac{A_{FB} -\tilde A_{FB}}{\Delta A_{FB}} \right)^2}\leq \;\mbox{number of standard deviations}\: . \label{total2}$$ In this way we use all the information in the total cross-section, as well as partial information from angular dependence. In Eq. (\[total2\]) $\sigma\equiv \sigma^{SM}$ and $A_{FB}\equiv A_{FB}^{SM}$ represent anticipated experimental data, $\tilde\sigma$ and $\tilde A_{FB}$ are the predictions including new physics. $\Delta\sigma$ and $\Delta A_{FB}$ are the corresponding absolute uncertainties including systematic and statistical errors.[^5] We have:[^6] $$\begin{aligned} \Delta\sigma &=& \sigma^{SM} \cdot \sqrt{\delta_{stat}^2+\delta_{syst}^2}\:, \label{uncer1} \\ \delta_{stat} &=& \frac{1}{\sqrt{N_{events}}}= \frac{1}{\sqrt{\epsilon_{WW}{\cal L}\sigma^{SM}}}\:, \nonumber\\ \delta_{syst} &=& \sqrt{\delta {\cal L}^2+\delta_{accep}^2+\delta_{backgr}^2+ \delta_{Br}^2}\:, \nonumber\end{aligned}$$ and $$\begin{aligned} \Delta A_{FB} &=& A_{FB}^{SM} \cdot \sqrt{\delta_{1\: stat}^2+\delta_{1\: syst}^2}\:, \label{uncer2} \\ \delta_{1\: stat} &=& \frac{1}{\sqrt{N_{events}}} \sqrt{\frac{1-A_{FB}^2}{A_{FB}^2}}\:, \nonumber\\ \delta_{1\: syst} &=& \sqrt{\delta_{accep}^2+\delta_{backgr}^2+ \delta_{Br}^2}\:, \nonumber\end{aligned}$$ A typical choice for the number of standard deviations in Eq. (\[total2\]) is two. Assuming a Gaussian distribution for the systematic errors, this $2\sigma$ level corresponds to 95% C.L. for the resulting bounds on the parameters under study. It is possible to use more information from the angular distribution than that present in the forward-backward asymmetry. 
To do so, one can use a simple $\chi^2$-criterion defined as $$\chi^2 = \sum_i \left(\frac{X_i - Y_i}{\Delta^i_{exp}} \right)^2 , \label{chi2}$$ where $$X_i = \int_{\cos\theta_i}^{\cos\theta_{i+1}} \frac{d\sigma^{SM}}{d\cos\theta}d\cos\theta,\;\;\; Y_i = \int_{\cos\theta_i}^{\cos\theta_{i+1}} \frac{d\sigma^{NEW}}{d\cos\theta}d\cos\theta \:,$$ and $\Delta^i_{exp}$ are the corresponding (expected) experimental errors in each bin defined as in Eq. (\[uncer1\]). For the binning we subdivide the chosen range of $\cos\theta$ into equal bins. This procedure gives us a rough idea of the additional information present in the angular distribution. However, a significant analysis of the angular distribution cannot really be done at this stage as discussed in Ref. [@my]. Bounding $L_{9L}$, $L_{9R}$ and $\hat{\alpha}$ ---------------------------------------------- In a scenario for electroweak symmetry breaking like the one discussed in Section 2, we have only three parameters determining the anomalous couplings: $L_{9L}$, $L_{9R}$, and $\hat{\alpha}$. This scenario is analyzed in terms of an effective Lagrangian with operators of higher dimension being suppressed by additional powers of the scale of new physics $\Lambda$. Our amplitudes involving the couplings $L_{9L}$, $L_{9R}$ and $\hat\alpha$ are, thus, the lowest order terms in a perturbative expansion in powers of $(E^2,v^2)/\Lambda^2$. For the whole formalism to make sense, the corrections to the standard model amplitudes (linear in the anomalous couplings) must be small. For a numerical analysis one can take two different points of view: - Formally, we have truncated the amplitudes at order $1/\Lambda^2$. Therefore, when calculating the cross-section we must drop the terms quadratic in the anomalous couplings since our calculation is only complete to order $1/\Lambda^2$. We will call this approach the “linear” approximation. 
- We may invoke a naturalness assumption, under which we do not expect contributions to an observable that come from different anomalous couplings to cancel each other out. Under this assumption we truncate the amplitudes at order $1/\Lambda^2$, but after this we treat them as exact. We will refer to this approach as the “quadratic” approximation from now on. Clearly, if the perturbative expansion is adequate, both approaches will lead to the same conclusions; the difference between them being higher order in the $1/\Lambda^2$ expansion. We will mostly use the “linear” approximation, but we will occasionally use the “quadratic” approximation for comparison as well. Any difference between them may be considered a rough estimate of the theoretical uncertainty. We will consider three cases: one in which the beams are unpolarized; one in which both electron and positron beams have their maximum degree of polarization, $|z_{e^+,e^-}|=0.8$; and one in which only the electron beam is polarized, $|z_{e^-}|=0.8$, $z_{e^+}=0.$ ### Dependence on angular cut The process $e^+e^-\to W^+W^-$ proceeds via the three diagrams in Figure \[ffr\]. Of these, the $t$-channel neutrino exchange diagram dominates the cross-section. This dominant contribution to the cross-section, however, does not depend on the new physics parameters $L_{9L}$, $L_{9R}$, or $\hat{\alpha}$. Since this dominant contribution is peaked at small values of the angle $\theta$, we expect to improve the sensitivity to new physics by excluding this kinematic region. To implement this idea we impose the cut $|\cos\theta| \leq c < 1$ and study the resulting interplay between a better sensitivity to the anomalous couplings and a loss in the number of events (with the corresponding increase in statistical error). 
We have studied the dependence of the bounds on the kinematical cut $|\cos\theta| \leq c$ for the range $0.1\leq c \leq 0.989$ (the upper limit corresponding to the minimal characteristic scattering angle defined by the geometry of the experimental setup [@frank; @gouna]). We find that this symmetric kinematical cut does not affect the bounds significantly. Nevertheless, it is possible to improve the sensitivity of this process to the anomalous couplings by using an [*asymmetric*]{} kinematical cut of the form $-1\leq c_1\leq \cos\theta \leq c_2\leq 1$. With a strong cut in the forward direction and a weak cut in the backward hemisphere one can reduce the $t$-channel background with a tolerable loss of statistics. We have explored the sensitivity of the resulting bounds to the value of the cuts for a wide range of parameters $c_1$ and $c_2$, and for different combinations of initial particle polarizations. As a typical example we present in Fig. \[fac\] the allowed $L_{9L}-L_{9R}$ parameter region for unpolarized (dashed line) and maximally polarized (solid line) beams. We set $\hat \alpha =0$, and show three sets of angular cuts for the forward hemisphere: $c_2= 0.1,\: 0.4,\: 0.989$, while keeping $c_1=-0.989$. We find an optimal set of cuts that we will use for the remainder of our analysis given by: $$c_1= -0.989,\: c_2 \simeq 0.4. \label{angcuts}$$ ### Polarization dependence An interesting question is whether the use of polarized beams significantly improves the bounds that can be placed on the anomalous couplings. A preliminary study in Ref. [@dv] indicated that the sensitivity to $\hat{\alpha}$ is greatly increased with polarized beams, but only if the degree of polarization is very close to one. Here we study the effect of having a degree of polarization that can be achieved in practice, $z \leq 0.8$. In Fig.
\[fac\]b we present the allowed $L_{9L}-L_{9R}$ parameter region (with $\hat \alpha =0$) for maximally ($z_1=z_2=0.8$) polarized and unpolarized beams. We see that the bounds that can be obtained with polarized beams (solid lines) are slightly better than the bounds that can be obtained with unpolarized beams (dashed lines). This effect is due to the reduction of the relative contribution of the “background” $t$-channel diagram which results in a better sensitivity of the process to the anomalous couplings. With the maximum degree of polarization that can be achieved in practice, one does not find the spectacular effects that could be found with completely polarized beams [@dv]. Nevertheless, polarized beams are very useful to constrain new physics that is described by several unknown parameters. The unpolarized case can only constrain a particular linear combination of parameters (in this case $L_{9L}$ and $L_{9R}$) thus giving the dashed band shown in Fig. \[fac\]b. The polarized result depends on a [*different*]{} linear combination of parameters. The simultaneous study of polarized and unpolarized collisions can, therefore, give much better bounds on the anomalous couplings than either one of them separately. An intermediate degree of polarization, such as $z_1=z_2=0.4$, also leads to an improvement of the bounds (see Fig. \[fpol\]a), although it is not as effective as the case with maximum practical degree of polarization in reducing the allowed region of parameter space when combined with the unpolarized measurement. If polarization is available only for the electron beam it is still possible to reduce the region of parameter space that is allowed by the unpolarized measurement. We illustrate this in Fig. \[fpol\]b where we show the case $z_1=0.8$, $z_2=0$. Using the “quadratic” approximation, one finds that each allowed region of parameter space in Fig. \[fpol\] is replaced by several possible regions.
This is because the terms that are quadratic in the anomalous couplings in the cross-section give rise to allowed regions shaped like ellipsoids. The case with polarized beams gives rise to a rotated ellipsoid, and the two intersect in more than one region. It is obvious, however, that only the region that contains the standard model point is physical, and this region is very much like that shown in Fig. \[fpol\] for the “linear” approximation. It is interesting to notice that one could decide which is the true allowed region experimentally. By changing the degree of polarization one obtains a different rotated ellipsoid that intersects the unpolarized one in several regions. Only the region containing the standard model point is common to the different degrees of beam polarization. This further illustrates the complementarity of polarized and unpolarized measurements. Results ======= We first present the bounds on the anomalous couplings that follow from Eq. (\[total2\]). In the case of the “quadratic” approximation, the cross-section contains terms that are quadratic in the anomalous couplings, as well as interference terms between the different anomalous couplings. The allowed parameter region is a volume element in the $L_{9L}-L_{9R}-\hat \alpha$ space enclosed by a nontrivial surface. Due to the interplay between couplings, the allowed volume may have holes, and therefore, it is in general not adequate to study two dimensional projections. In keeping with our previous discussion we select the allowed region that contains the standard model point, and that is very similar in shape to the results of the “linear” approximation. Doing this we have a simple region for which two-dimensional projections are adequate. We present in Fig. \[fpairs\] the two-dimensional projections obtained in the directions in which one of the three anomalous couplings vanishes. We present the case corresponding to two standard deviation ( 95% C.L.) bounds from Eq. (\[total2\]). 
These results correspond to the “linear” approximation, but are practically identical to those obtained in the “quadratic” approximation. Thus, the bounds correspond to anomalous couplings that are small enough for the perturbative expansion to be meaningful. This, in itself, indicates that a 500 GeV linear collider with polarized beams will be able to place significant bounds on a strongly interacting symmetry breaking sector. Allowing two of the couplings to vary and setting the third one to its standard model value we find (“linear” case): $$\begin{aligned} -1.4\: \leq &L_{9L}& \leq \: 1.4\;, \nonumber\\ -0.7\: \leq &L_{9R}& \leq \: 0.7 \;, \label{bounds1} \\ -3.3 \: \leq &\hat \alpha & \leq \: 3.3\:. \nonumber\end{aligned}$$ or (“quadratic” case): $$\begin{aligned} -1.3\: \leq &L_{9L}& \leq \: 1.3\;, \nonumber\\ -0.6\: \leq &L_{9R}& \leq \: 0.7 \;, \label{bounds2} \\ -3.4 \: \leq &\hat \alpha & \leq \: 3.2\:. \nonumber\end{aligned}$$ It is worth mentioning that the allowed regions are sometimes bounded by curved lines, even in the “linear” approximation. This is due to the intrinsically non-linear combination of observables that we used, Eq. (\[total2\]). In this respect, one interesting feature can be seen in Fig. \[fpairs\]. While the allowed regions in Fig. \[fpairs\]a and Fig. \[fpairs\]b are bounded by curves, the domain in Fig. \[fac\]b is bounded by almost straight lines. This means that the deviations of the $L_{9L},\: L_{9R}$ parameters affect mainly the cross-section, but practically do not modify the forward-backward asymmetry. In terms of the angular distribution this can be rephrased by saying that variations of the couplings $L_{9L},\: L_{9R}$ lead to a change of the overall normalization of the differential cross-section, while changes in $\hat\alpha$ lead to changes in the shape of the distribution. This effect will be demonstrated further when we discuss the angular distributions. 
$\chi^2$ Analysis of the Angular Distribution --------------------------------------------- In this section we discuss the bounds on the anomalous couplings that can be obtained from the analysis of the differential cross-section $d\sigma / d\cos\theta$. We will use the $\chi^2$ criterion in the form of Eq. (\[chi2\]) with experimental uncertainties defined in Eq. (\[uncer1\]). We will allow two parameters to vary at a time while fixing the third one at its standard model value (0 at tree-level). Therefore, in order to use a $\chi^2$ approach we need a minimum of 4 bins to have $N_{DOF} = N_{measurements}-N_{parameters}-1=1$. We will consider the cases with the angular region ($-0.989< \cos\theta < 0.4$) divided into 4, 5, and 10 bins. To compare these $\chi^2$ results with those obtained in the previous section using the criterion Eq. (\[total2\]), we adopt the same C.L. of 95%. For the $\chi^2$ approach it is important to understand which number of bins gives the strongest bounds on the parameters for a given event sample. As we mentioned before, the total expected number of reconstructed $WW$-events for the chosen luminosity is $\sim 5.5\times 10^4$. However, with the kinematical cut on the scattering angle that we use, $-0.989 < \cos\theta < 0.4$, this number is reduced to 4384 events. With unpolarized beams and choosing 4 angular bins, the number of events in each bin varies from 327 to 2175 (with the smaller number in the backward-most bin). These numbers correspond to relative statistical errors varying from 3.8% to 2.1%. For the case of 5(10) bins the number of events varies from 229(81) to 1854(1068), and the statistical error varies from 6.6%(11.1%) to 2.3%(3.1%). If the beams are polarized, there is an even larger loss of statistics due to the partial cancellation of the dominant $t$-channel diagram. One can see that for these binnings of the events the corresponding statistical errors are larger than the systematic error. 
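The per-bin percentages quoted above are the Poisson counting-statistics estimate $1/\sqrt{N}$. As a quick check, here is a minimal sketch (the function name is ours; the bin occupancies are those quoted in the text):

```python
import math

def rel_stat_error(n_events):
    """Poisson estimate of the relative statistical error for a bin with n_events."""
    return 1.0 / math.sqrt(n_events)

# selected bin occupancies from the 5- and 10-bin cases quoted in the text
for n in (81, 229, 1068, 1854, 2175):
    print(f"N = {n:5d}:  {100 * rel_stat_error(n):.1f}%")
```

This reproduces the percentages quoted for the 5- and 10-bin cases (11.1% for 81 events down to 2.3% for 1854 events).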
This means that we have a statistically unsaturated event sample, and the strongest bounds are obtained with the minimum number of bins. Before using the angular distribution to place bounds on the parameters, it is useful to see the behaviour of this distribution for small deviations from the standard model. For illustration purposes we choose the values $L_{9L}=5$, $L_{9R}=5$, and $\hat \alpha =5$. Notice that these numbers are small enough to neglect the difference between the “quadratic” and “linear” approximations. In Fig. \[fdisnor\] we show the behaviour of the angular distribution for the unpolarized case in the range $-0.989 < \cos\theta < 0.4$, normalized to the angular distribution predicted by the standard model. The solid line corresponds to $L_{9L}=5$, the short dashed line corresponds to $L_{9R}=5$, and the long dashed line corresponds to $\hat\alpha =5$. In Fig. \[fdisnor\]a (\[fdisnor\]b) we present the normalized angular distributions for unpolarized (polarized) beams. One can see in Fig. \[fdisnor\]a that variations of $L_{9L}$ and $L_{9R}$ lead to a change in the overall normalization of the distribution, whereas variations in $\hat\alpha$ result in a change in the shape of the distribution. However, this difference is not evident in the case of polarized beams (see Fig. \[fdisnor\]b). In Fig. \[fcomp\] we show the projection of the allowed parameter region in the $L_{9L}-L_{9R}$ plane for unpolarized beams, which corresponds to 95% C.L. in the $\chi^2$-analysis for the cases of 4 (solid line), 5 (short-dashed line), and 10 (long-dashed line) bins. One can see that the best bounds are, indeed, obtained with the smallest number of bins, four. The same result holds true for polarized beams. We find that the angular distribution gives slightly better bounds than the combined criterion of Eq. (\[total2\]), as shown in Fig. \[fchi\]. 
Thus, choosing the case of 4 bins we can present the resulting bounds on $L_{9L}$, $L_{9R}$, and $\hat \alpha$ following from the $\chi^2$-analysis of the angular distribution, which are shown in Fig. \[fchi\]. The two-parameter fit bounds (setting one of the three couplings at a time to its standard model value) are: $$\begin{aligned} -1.2\: \leq &L_{9L}& \leq \: 1.0\;, \nonumber\\ -0.6\: \leq &L_{9R}& \leq \: 0.7 \;, \label{bounds3} \\ -3.5 \: \leq &\hat \alpha & \leq \: 3.5\:. \nonumber\end{aligned}$$ Summary and Conclusions ======================= If the electroweak symmetry breaking sector is strongly interacting and there are no new resonances below a TeV, one expects deviations of the gauge boson self-interactions from their standard model values. In theories that conserve $CP$ and have an approximate custodial symmetry, we can parameterize these deviations in terms of three constants, $L_{9L}$, $L_{9R}$ and $\hat\alpha$. An $e^+e^-$ collider operating at $\sqrt{s}= 0.5$ TeV with polarized beams and an integrated luminosity of 50 fb$^{-1}$ can provide important input into our understanding of the nature of electroweak symmetry breaking. We find that such a collider can place the following bounds: $$\begin{aligned} (-1.4 \to -1.2)\: \leq &L_{9L}& \leq \: (1.0\to 1.4)\;, \nonumber\\ (-0.7\to -0.6)\: \leq &L_{9R}& \leq \: 0.7 \;, \label{bounds5} \\ (-3.5\to -3.3)\: \leq &\hat \alpha & \leq \: (3.2\to 3.5)\:. \nonumber\end{aligned}$$ The ranges correspond to the difference between the “linear” and “quadratic” approximations, and to the difference between using the simple criterion of Eq. (\[total2\]) and a more sophisticated $\chi^2$ analysis of the angular distribution. These differences can be taken as a rough guide of the theoretical uncertainties under our stated assumptions. 
The authors of Ref. [@gouna] have also studied the process $e^+e^- \to W^+ W^-$ in terms of anomalous couplings at a future $e^+e^-$ collider like the one we discuss here. Because they do not have in mind a strongly interacting electroweak symmetry breaking sector, as we do, they look for deviations from the standard model in terms of a larger number of parameters than we do. They do not, however, study the parity violating coupling $\hat{\alpha}$. A meaningful comparison of their results with ours involves their two-parameter fit to their quantities $\delta_Z$ and $X_\gamma$, which we translate into[^7] $$\begin{aligned} -2.0 \: \leq &L_{9L}& \leq \: 1.8\;, \nonumber\\ -3.4 \: \leq &L_{9R}& \leq \: 4.7 \;. \label{boundgou}\end{aligned}$$ We can see that the bounds we obtained by combining unpolarized and polarized collisions are significantly better. This is especially true for the case of $L_{9R}$. This emphasizes the additional sensitivity to new physics provided by polarized beams. We have shown that polarized beams with adjustable degrees of polarization would constitute a very significant tool in the search for new physics. In terms of new physics parameterized by a set of anomalous couplings, beam polarization makes it possible to explore directions of parameter space that cannot be reached in unpolarized collisions. To place our bounds in perspective, we now compare them to those obtained from LEP I and those that can be obtained at LEP II. Precision measurements of $Z$ partial widths imply [@valen1]: $$\begin{aligned} -28\: \leq &L_{9L}& \leq \: 27\;, \nonumber\\ -9\: \leq &\hat{\alpha}& \leq \: 5\;, \nonumber\\ -100\: \leq &L_{9R}& \leq \: 190 \;. \label{lep1}\end{aligned}$$ Expected bounds from LEP II with $\sqrt{s}=190$ GeV and $\int {\cal L} dt = 500$ pb$^{-1}$ are [@boud] $$\begin{aligned} -41\: \leq &L_{9L}& \leq \: 26\;, \nonumber\\ -100\: \leq &L_{9R}& \leq \: 330 \;. 
\label{lep2}\end{aligned}$$ Similar bounds have been obtained for different future colliders. For example, with an $e\gamma$-collider with $\sqrt{s_{ee}}= 500$ GeV and $\int {\cal L} dt = 50$ fb$^{-1}$ they are [@valen2]: $$\begin{aligned} (-7 \to -5)\: \leq &L_{9L}& \leq \: (4 \to 6)\;, \nonumber\\ (-17\to -5)\: \leq &L_{9R}& \leq \: (4 \to 16) \;, \label{eg} \\ -15 \: \leq &\hat \alpha & \leq \: 7\:. \nonumber\end{aligned}$$ Studies for the LHC (with $\sqrt s = 14$ TeV and integrated luminosity 100 fb$^{-1}$) have found [@group] a sensitivity to $L_{9L}$ of order $10$. After completion of this paper, a similar analysis by M. Ginter [*et al.*]{} has appeared [@newgodf]. These authors consider polarized electron beams as we do, and they reach similar conclusions to ours for the parameters that are common to our study[^8] in the case of one-parameter fits. Acknowledgements {#acknowledgements .unnumbered} ================ The work of A. A. L. has been made possible by a fellowship of the Royal Swedish Academy of Sciences and is carried out under the research program of the International Center for Fundamental Physics in Moscow. A. A. L. is also supported in part by the International Science Foundation under grants NJQ000 and NJQ300. The work of T.H. is supported in part by the DOE grant DE-FG03-91ER40674 and in part by a UC-Davis Faculty Research Grant. The work of G.V. was supported in part by the DOE OJI program under contract number DE-FG02-92ER40730. We thank S. Dawson for useful discussions. Analytic expressions for the cross-section {#analytic-expressions-for-the-cross-section .unnumbered} ========================================== We present below the explicit expressions for the dimensional functions\ $T_{ij}=T_{ij}(M_W,\: \kappa_{\gamma,Z}, g_{1\gamma,1Z},\: g_5,\: s,\: t)$ used in expressions (\[sigmat\]) for the cross-section of the $e^+e^-\to W^+W^-$ process. In this appendix we use $M\equiv M_W$, and $t$ is the absolute value of the usual Mandelstam variable. 
Because we do not need to consider the renormalization due to $L_{10}$ as explained in the text, the parameters $a_f= T_{3f}/2c_\theta s_\theta$ and $v_f = (T_{3f}-2Q_fs^2_\theta)/ 2s_\theta c_\theta$ are the usual tree-level standard-model axial and vector couplings of the $Z$ to fermions. $$\begin{aligned} T_{11} = \frac{t^3}{3} &\cdot & (4 s M^2 g^2_{1\gamma}+4 s M^2 \kappa^2_{\gamma} -24 M^4 g^2_{1\gamma}-2 s^2 \kappa^2_{\gamma})\\ -\frac{t^2}{2}&\cdot &(4 s^2 M^2 g^2_{1\gamma}+8 s^2 M^2 \kappa^2_{\gamma} -32 s M^4 g^2_{1\gamma}-8 s M^4 \kappa^2_{\gamma} +48 M^6 g^2_{1\gamma}-2 s^3 \kappa^2_{\gamma})\\ +t &\cdot & (4 s^3 M^2 g_{1\gamma} \kappa_{\gamma}+2 s^3 M^2 g^2_{1\gamma} +2 s^3 M^2 \kappa^2_{\gamma}-16 s^2 M^4 g^2_{1\gamma} \kappa_{\gamma} -8 s^2 M^4 g^2_{1\gamma}\\ && -10 s^2 M^4 \kappa^2_{\gamma} +4 s M^6 g^2_{1\gamma}+4 s M^6 \kappa^2_{\gamma}-24 M^8 g^2_{1\gamma})\end{aligned}$$ $$\begin{aligned} T_{12}= \frac{t^3(v_e S_1-a_e S_2) }{3} &\cdot & (4 s M^2 g_{1Z} g_{1\gamma}+4 s M^2 \kappa_Z \kappa_{\gamma} -24 M^4 g_{1Z} g_{1\gamma}-2 s^2 \kappa_Z \kappa_{\gamma})\\ -\frac{t^2(v_e S_1-a_e S_2) }{2} &\cdot &(4 s^2 M^2 g_{1Z} g_{1\gamma}+ 8s^2M^2 \kappa_Z\kappa_{\gamma} -32 s M^4 g_{1Z}g_{1\gamma}-8 s M^4 \kappa_Z\kappa_{\gamma} \\ && +48 M^6 g_{1Z} g_{1\gamma}-2s^3 \kappa_Z \kappa_{\gamma})\\ +t(v_e S_1-a_e S_2) &\cdot & (2 s^3M^2 g_{1Z} g_{1\gamma}+2 s^3M^2 g_{1Z}\kappa_{\gamma} +2 s^3 M^2 \kappa_Zg_{1\gamma}+2s^3 M^2 \kappa_Z \kappa_{\gamma}\\ && -8 s^2 M^4 g_{1Z} g_{1\gamma}-8s^2 M^4 g_{1Z}\kappa_{\gamma} -8 s^2 M^4 \kappa_Z g_{1\gamma}-10 s^2 M^4 \kappa_Z \kappa_{\gamma}\\ && +4 s M^6 g_{1Z} g_{1\gamma}+4 s M^6 \kappa_Z \kappa_{\gamma} -24 M^8 g_{1Z}g_{1\gamma})\\ -\frac{t^2(a_e S_1-v_e S_2)g_5}{2} &\cdot & (4 s^2 M^2 g_{1\gamma}+4 s^2 M^2 \kappa_{\gamma} -16 s M^4 g_{1\gamma}-16 s M^4 \kappa_{\gamma})\\ +t(a_e S_1-v_e S_2)g_5 &\cdot & (2s^3 M^2 g_{1\gamma}+2s^3 M^2 \kappa_{\gamma} -12s^2 M^4 g_{1\gamma}-12s^2 M^4 \kappa_{\gamma} +16 s M^6 
g_{1\gamma}\\ &&+16 s M^6 \kappa_{\gamma})\end{aligned}$$ $$\begin{aligned} T_{13} = \frac{t^3}{3}&\cdot & (4 M^2 g_{1\gamma}-2 s \kappa_{\gamma}) \\ - \frac{t^2}{2} &\cdot &(4 s M^2 g_{1\gamma}+4 s M^2 \kappa_{\gamma} -2 s^2 \kappa_{\gamma})\\ + t &\cdot & (4 s^2 M^2 g_{1\gamma}+4 s^2 M^2 \kappa_{\gamma} -10 s M^4 \kappa_{\gamma}-12 M^6 g_{1\gamma})\\ - \ln \biggl({t \over 1{\rm ~GeV}^2}\biggr) &\cdot &(8 s M^6 g_{1\gamma} + 8 s M^6 \kappa_{\gamma} +8 M^8 g_{1\gamma})\end{aligned}$$ $$\begin{aligned} T_{22} = \frac{t^3( (v^2_e+a_e^2)S_1-2v_e a_e S_2)}{3} &\cdot & (4 s M^2 g^2_{1Z} +4 s M^2 \kappa^2_Z -24 M^4 g^2_{1Z}-2 s^2 \kappa^2_Z)\\ -\frac{t^2( (v^2_e+a_e^2)S_1-2v_e a_e S_2)}{2} &\cdot & (4 s^2 M^2 g^2_{1Z} +8 s^2 M^2 \kappa^2_Z -32 s M^4 g^2_{1Z}-8 s M^4 \kappa^2_Z \\ && +48 M^6 g^2_{1Z}-2 s^3 \kappa^2_Z)\\ +t( (v^2_e+a_e^2)S_1-2v_e a_e S_2) &\cdot & (4 s^3 M^2 g_{1Z} \kappa_Z+2 s^3 M^2 g^2_{1Z} +2 s^3 M^2 \kappa^2_Z \\ && -16 s^2 M^4 g_{1Z} \kappa_Z -8 s^2 M^4 g^2_{1Z}-10 s^2 M^4 \kappa^2_Z +4 s M^6 g^2_{1Z}\\ && +4 s M^6 \kappa^2_Z-24 M^8 g^2_{1Z}) \\ -\frac{t^2g_5(2v_e a_e S_1-(v^2_e+a_e^2)S_2)}{2} &\cdot & (8 s^2 M^2 g_{1Z} +8 s^2 M^2 \kappa_Z -32 s M^4 g_{1Z}-32 s M^4 \kappa_Z) \\ +tg_5 (2v_e a_e S_1-(v^2_e+a_e^2)S_2)&\cdot & (4 s^3 M^2 g_{1Z} +4 s^3 M^2 \kappa_Z-24 s^2 M^4 g_{1Z} -24 s^2 M^4 \kappa_Z\\ &&+32 s M^6 g_{1Z}+32 s M^6 \kappa_Z)\\ + \frac{t^3g^2_5( (v^2_e+a_e^2)S_1-2v_e a_e S_2)}{3} &\cdot & (4s M^2 -16 M^4)\\ - \frac{t^2g^2_5( (v^2_e+a_e^2)S_1-2v_e a_e S_2)}{2} &\cdot &(4s^2M^2 -24s M^4 +32 M^6)\\ +tg^2_5( (v^2_e+a_e^2)S_1-2v_e a_e S_2) &\cdot & (2 s^3 M^2 -16 s^2 M^4+36s M^6-16M^8)\end{aligned}$$ $$\begin{aligned} T_{23} = \frac{t^3}{3} &\cdot & (4 M^2 g_{1Z}-2 s \kappa_Z) \\ - \frac{t^2}{2} &\cdot & (4 s M^2 \kappa_Z +4 s M^2 g_{1Z}-2 s^2 \kappa_Z) \\ + t &\cdot &(4 s^2 M^2 \kappa_Z + 4 s^2 M^2 g_{1Z}-10 s M^4 \kappa_Z -12 M^6 g_{1Z})\\ - \ln \biggl({t \over 1{\rm ~GeV}^2}\biggr) &\cdot & (8 s M^6 \kappa_Z+8 s M^6 g_{1Z}+8 M^8 g_{1Z})\\ - 
\frac{t^2 g_5}{2} &\cdot & (8 s M^2-8 M^4)\\ + t g_5 &\cdot &(4 s^2 M^2-8 s M^4+16 M^6) \\ - \ln \biggl({t \over 1{\rm ~GeV}^2}\biggr) g_5 &\cdot &(8 s M^6-8 M^8)\end{aligned}$$ $$T_{33} =-\frac{2}{3} t^3-\frac{1}{2} t^2 (4 M^2-2s) +t(8 s M^2-10 M^4)-\ln \biggl({t \over 1{\rm ~GeV}^2}\biggr) (-8 s M^4+16 M^6) +\frac{8}{t} M^8$$ [999]{} [^1]: The anomalous couplings also affect the $e\nu W$ and $e^+e^-Z$ vertices through renormalization. However, they do so only through the parameter $L_{10}$, and we will argue later that it is not necessary to consider this coupling in detail because it has already been severely constrained at LEP. [^2]: This is why we do not have terms corresponding to the usual $\lambda_Z$ and $\lambda_\gamma$: they only occur at higher order in $1/\Lambda^2$. [^3]: Recall that we only use the channel that allows a complete reconstruction of the $WW$ pair. [^4]: There are several ways for such data modelling: a) application of the analytical SM expressions to represent “experimental” distributions, see, for example, [@choi]; b) Monte-Carlo simulation of the experimental distributions according to the SM predictions taking into account a probabilistic spread, see, for example [@miy; @bark]. [^5]: It should be noted that for the case of $A_{FB}$ the bulk of the systematics (for example the uncertainty due to luminosity measurements), cancels out. [^6]: We neglect any correlation between statistical and systematic errors. [^7]: Our $\chi^2$ analysis is different from that of Ref. [@gouna], page 747. Nevertheless, we take their results at face value to compare with our results since their bounds would be weaker using our $\chi^2$ criterion and our conclusion remains the same. [^8]: These are $L_{9L}$ and $L_{9R}$ albeit with a different normalization.
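As a numerical cross-check, the closed form for $T_{33}$ given in the appendix above transcribes directly into code. A minimal sketch (the function name and the test point are ours; $s$ and $t$ are in GeV$^2$ and $M=M_W$ in GeV, with $t>0$ the absolute value of the Mandelstam variable so that $\ln(t/1\,\mathrm{GeV}^2)=\ln t$):

```python
import math

def T33(s, t, M):
    """T33(s, t) as given in the appendix; s, t in GeV^2, M in GeV, t > 0."""
    return (-2.0 / 3.0 * t**3
            - 0.5 * t**2 * (4 * M**2 - 2 * s)
            + t * (8 * s * M**2 - 10 * M**4)
            - math.log(t) * (-8 * s * M**4 + 16 * M**6)
            + 8.0 * M**8 / t)
```

At $t=1$ the logarithm drops out, which makes the transcription easy to verify term by term against the displayed formula.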
--- abstract: 'We report exact results for the Fermi Edge Singularity in the absorption spectrum of an out-of-equilibrium tunnel junction. We consider two metals with chemical potential difference $V$ separated by a tunneling barrier containing a defect, which exists in one of two states. When it is in its excited state, tunneling through the otherwise impermeable barrier is possible. We find that the lineshape not only depends on the total scattering phase shift as in the equilibrium case but also on the difference in the phase of the reflection amplitudes on the two sides of the barrier. The out-of-equilibrium spectrum extends below the original threshold as energy can be provided by the power source driving current across the barrier. Our results have a surprisingly simple interpretation in terms of known results for the equilibrium case but with (in general complex-valued) combinations of elements of the scattering matrix replacing the equilibrium phase shifts.' author: - 'B. Muzykantskii$^1$, N. d’Ambrumenil$^{1,2}$ and B. Braunecker$^{3,1}$' bibliography: - 'out\_of\_eqm.bib' title: 'Fermi edge singularity in a non-equilibrium system' --- Developments in the fabrication and manipulation of mesoscopic systems have allowed detailed and well-characterized transport measurements for a large range of devices including quantum pumps, tunnel junctions and carbon nanotubes. It is often the case that such measurements explore non-equilibrium effects particularly when the potential difference is dropped across a narrow potential barrier or over a short distance inside the metallic region [@RB94; @KGG01; @NCL00]. While there is often a very good theoretical description of much that has been observed for systems close to equilibrium, the theoretical picture for systems out of equilibrium is less clear with fewer established theoretical results. 
A natural point to start, when looking for a description of non-equilibrium effects in many-electron systems, is the Fermi Edge Singularity (FES), which is one of the simplest non-trivial many-body effects. The FES is characteristic of the response of a Fermi gas to a rapid switching process. Initially it was associated with the shape of the absorption edge and spectral line found when a core hole is created [@Mahan67]. However, it turns out to be a generic feature of a Fermi system’s response to any fast switching process and reflects the large number of low-energy (particle-hole) excitations which exist in Fermi liquids. It has also been shown to be related to Anderson’s orthogonality catastrophe [@ND69; @CN71] and can be used to reformulate the Kondo problem in terms of a succession of spin flips which are treated as the switching of a one-body potential between two different values [@YA70]. We consider a system at zero temperature with two Fermi surfaces separated by a barrier with a potential difference (bias) $V$ applied across the barrier (see Fig. \[fig:fig1\]). The barrier contains a defect, which exists in one of two states with energy separation $E_0$. Tunneling through the barrier is assumed to be possible only when the defect is in its excited state. We compute the absorption spectrum close to the threshold at $\omega_0=E_0-\mbox{Re}(\Delta(V))$, for frequencies $(\omega-\omega_0) \ll \xi_0$, where $\xi_0$ is of order the bandwidth and Re$(\Delta(V))$ is the real part of the combined energy shift of the two Fermi seas when the defect is in its excited state. ($\Delta(V)$ is complex for non-zero $V$ on account of the dissipation in the system.) Using an approach based on that of Nozières and de Dominicis (ND) [@ND69], we solve exactly for the asymptotic behavior of the absorption spectrum in two limiting cases: $(\omega-\omega_0)\gg V$ and $(\omega-\omega_0) \ll V$. 
Our results have a simple interpretation in terms of generalized (complex) phase shifts at the Fermi energy. Typical lineshapes for the case $(\omega-\omega_0)\gg V$ illustrating the dependence on the reflection amplitudes and phases are shown in Figure \[fig:fig2\]. Our treatment of the problem is based on that of Muzykantskii and Adamov (MA) for the statistics of charge transfer in quantum pumps, which uses the relation between the many-particle response to the changing one-body potential and the solution of an associated matrix Riemann-Hilbert (RH) problem [@MA03]. This problem was also addressed perturbatively and using the ND approach in [@CR00; @Ng96], although the results in [@CR00] led the authors to question the validity of the ND approach of [@Ng96] (see also [@BB02]). Our solution shows clearly that the ND approach is valid, with the earlier difficulties probably associated with an incomplete analysis of the matrix RH problem associated with their singular integral equation. We characterize the scattering at the interface between the two subsystems via the unitary $2\times2$ matrix, $S(\epsilon,t)$, connecting scattering states in the two wires for particles with energy $\epsilon$. This takes one of two values, $S^g$ and $S^e$, depending on whether the defect is in its ground ($g$) or excited state ($e$). In the following, we will take a row/column index equal to one (two) for the left (right) electrode so that the diagonal (off-diagonal) elements correspond to reflection (transmission) amplitudes (see Figure \[fig:fig1\]). We choose the scattering states to be the eigen states of the system when the defect is in its ground state and the barrier is totally reflecting; hence $S^g_{ij}=\delta_{ij}$. $S^e$ is an arbitrary unitary matrix with reflection probability $R=|S^e_{11}|^2 < 1$. We will assume that a negative potential $-V$ ($V>0$) has been applied to the left electrode with respect to the right electrode. 
The spectral function, $\rho(\omega,V)$, for absorption by the local level is given by [@ND69]: $$\begin{aligned} \rho(\omega,V) & \sim & \text{Re} \int_{-\infty}^\infty \chi (t_f,V) e^{i\omega t_f} dt_f \label{eq:rho_definition} \\ \chi (t_f,V) & = & \langle0|U(t_f,0)|0\rangle. \label{eq:chi_definition}\end{aligned}$$ Here $|0\rangle$ is the ground state wavefunction of the complete system (the filled Fermi seas in the two electrodes and the defect in its ground state), while $U(t_f,0)$ is the time-evolution operator for the system between $t=0$ and $t=t_f$ with the defect in its excited state. $\chi(t_f,0)$ is the same as the core hole Green’s function computed in [@Mahan67; @ND69; @CN71]. Before discussing the full non-equilibrium case, we briefly review the known equilibrium results. When $V = 0$ the response of the system is that of the core hole problem in a non-separable potential considered in [@YY82; @Matveev-Larkin] $$\log{\chi(t_f,0)} = -i(E_0-\Delta(0))t_f - \beta \log{it_f\xi_0} \label{eq:V=0}$$ where $\beta= \sum_{j=1,2} \left(\frac{\delta_j}{\pi}\right)^2$. Here $e^{-i2\delta_j}$ are the eigen values of $S^e$. The threshold is shifted from $E_0$, the energy separation in the two-level system, by $\Delta(0)$, which is the shift of the ground state energy of the two Fermi seas when the scattering defect is in its excited state. This standard equilibrium result (\[eq:V=0\]) is well understood in terms of the low-lying particle-hole excitations created by the rapid switching of the potential, with the principal contributions to the logarithm in (\[eq:V=0\]) from excitations with frequencies between $t_f^{-1}$ and $\xi_0$. When a voltage is applied across the barrier with the defect in its excited state and $R\neq1$, a current will flow and the system will become dissipative. For $t_f \ll V^{-1}$, the spectral response is dominated by excitations with frequencies $\omega \gg V$, involving states which do not sense the potential drop across the barrier. 
As a result $\chi(t_f,V)$ is unchanged from its value in equilibrium. When $t_f \gg V^{-1}$, the response is controlled by electrons within the band of width $V$ about the mean Fermi energy. We find that $$\log{\chi(t_f,V)} = -i(E_0-\Delta(V)) t_f - \beta' \log{(Vt_f)} + D \label{eq:Vneq0}$$ Here the function $\Delta(V)$ is given by: $$\Delta(V) = \int_{-\infty}^0 \frac{\mbox{tr}\log{(S(E))}}{2\pi i} dE + \int_0^V \frac{\log{(S_{11}(E))}}{2\pi i} dE \label{eq:Delta(V)}$$ This expression (\[eq:Delta(V)\]) for the (in general complex) energy shift of the two Fermi seas, when the defect is in its excited state, can be thought of as the generalization of Fumi’s theorem [@Friedel52; @Fumi55] to the out-of-equilibrium case. The exponent $\beta'$ in (\[eq:Vneq0\]) is given by $$\beta' = \sum_{j=1,2}\left(-\log{(S^e_{jj})}/2\pi i\right)^2 . \label{eq:beta'}$$ The constant term $D$ gives the contribution from excitations with frequencies between $V$ and $\xi_0$, which do not sense the potential drop across the barrier. To logarithmic accuracy [@Note\_on\_cutoff]: $$D = \beta \log{\xi_0/V}. \label{eq:D}$$ Writing $S^e_{jj}=\sqrt{R}e^{i\alpha_j}$ and comparing the forms for $\beta$ and $\beta'$ in (\[eq:V=0\]) and (\[eq:beta'\]), we see that the quantity $-\log{(S^e_{jj})}/2i=-\alpha_j/2 + i(\log{R})/4$ is acting as a complex phase shift. Its real part, $-\alpha_j/2$, characterizes the scattering in the $j$’th electrode and in (\[eq:Vneq0\]) describes the effect of particle-hole excitations in the band of width $V$ from the Fermi energy. Its imaginary part $(\log{R})/4$ relates to the lifetime of the excitation. The absorption spectrum is found from the Fourier transform of $\chi(t_f,V)$ in (\[eq:rho\_definition\]). 
Measuring $\omega$ from $\omega_0=E_0-\mbox{Re}(\Delta(V))$, it is given by [@note_on_Fourier_transform]: $$\rho(\omega) \sim \frac{1}{\Omega^{1-\beta'_1}} e^{-\beta'_2\phi_\Omega} \sin{\left(\beta'_1\pi- (\beta'_1-1) \phi_\Omega -\beta'_2 \log\Omega \right)}. \label{eq:rho(omega)}$$ Here we have defined $\Omega\exp{i\phi_\Omega} \equiv \omega/V - i(\log{R})/4\pi$ and written $\beta'=\beta'_1 + i \beta'_2$. While the dependence on $\beta'_1$ reflects the total overall scattering on the two sides of the barrier as in equilibrium, $\beta'_2$ is proportional to the difference in the phases of the two reflection amplitudes $S^e_{11}$ and $S^e_{22}$ and its appearance in (\[eq:rho(omega)\]) is entirely an out-of-equilibrium effect. When $R=1$, the term multiplying $\Omega^{1-\beta'_1}$ in (\[eq:rho(omega)\]) is proportional to the theta function $\theta(\omega)$ and describes the usual sharp threshold in $\rho(\omega)$. With $R<1$ it leads to a smearing of the threshold (see Figure \[fig:fig2\]). As pointed out in [@CR00], this broadening of the threshold reflects the existence of ‘negative energy excitations’ in the system involving a hole in the left electrode and a particle in the right electrode. From an experimental point of view, the below threshold broadening with its functional dependence on the phases of the reflection amplitudes and its overall energy scale fixed by the bias are probably the key signatures of the non-equilibrium effects we are describing. The sensitivity to the difference in scattering phase shifts (this difference is proportional to $\beta'_2$) would show up in changes in the line shape on reversing the bias and should also be observable. The derivation of the overlap $\chi(t_f)$ follows quite closely that of MA [@MA03]. We introduce the operators $a_i(\epsilon)$ which annihilate particles on the $i$’th side of the barrier with energy $\epsilon$ in eigen states of the system with the defect in its ground state ($S=1$). 
The effect of the time-evolution operator $U$ acting between $t=0$ and $t_f$ on states $a^\dagger_i|\rangle$, where $|\rangle$ is the true vacuum with no particles, is given by $$Ua^\dagger_i(\epsilon) |\rangle = \sum_{j} \int d\epsilon' \sigma_{ij}(\epsilon,\epsilon') a^\dagger_j(\epsilon') |\rangle. \label{eq:sigma}$$ One can show that for states near the Fermi energy (see [@AM01] for example) $\sigma$ is given by: $$\sigma_{ij}(\epsilon,\epsilon')= e^{-iE_0t_f} \frac{1}{2\pi} \int_{-\infty}^\infty S_{ij}(t) e^{i(\epsilon-\epsilon')t} dt \label{eq:sigma=S}$$ provided that the adiabaticity condition $$\hbar \frac{\partial S}{\partial t} \frac{\partial S}{\partial E} \ll 1 \label{eq:adiabaticity}$$ is satisfied. In (\[eq:sigma=S\]), $S(t)=S^g$ for $t<0$ and $t>t_f$, and $S(t)=S^e$ for $0<t<t_f$; we have suppressed the explicit dependence of $S$ on energy. When computing the low frequency asymptotics, this becomes a slow dependence on $(\epsilon + \epsilon')/2$, and can be neglected. The overlap $\chi(t_f)$ can be written $$\chi(t_f) = \langle0|U|0\rangle = \mbox{det}' \sigma \label{eq:determinant}$$ where the prime indicates that the operator determinant is to be taken only over the occupied states in the two filled Fermi seas. This reduces in the equilibrium case to the determinant in [@CN71]. Taking the chemical potential in the right electrode to be zero and treating the (non-equilibrium) Fermi distribution as the diagonal operator $f_{ij}(\epsilon,\epsilon')= \delta_{ij} \delta(\epsilon-\epsilon') \theta(-\epsilon+V(2-i))$ allows us to write $$\begin{aligned} \chi(t_f) & = & \mbox{det}(1-f+f\sigma) \label{eq:full_determinant} \\ \log{\chi(t_f)} & = & \mbox{Tr} \left( \log{(1-f+f\sigma)} - f\log{\sigma} \right) + \mbox{Tr}f\log{\sigma} \nonumber \\ & \equiv & C(V,t_f) + \mbox{Tr}f\log{\sigma} \label{eq:log_chi}\end{aligned}$$ where the operator determinant is now the full determinant taken over all states and the trace, Tr, is the trace over energy and channels. 
The last term in the expression (\[eq:log\_chi\]) can be found by explicitly carrying out the integral in (\[eq:sigma=S\]). This gives that $\sigma_{ij}(\epsilon,\epsilon') = \delta_{ij}\delta{(\epsilon-\epsilon')} - X_{ij}(\epsilon-\epsilon')$. The logarithm can then be expanded as a power series in the matrix $X$ [@Note\_on\_expanding\_log\_sigma]. After evaluating $X^n$ term by term and then resumming we obtain: $\mbox{Tr}f\log{\sigma}=-i(E_0 - \Delta(0))t_f + (V t_f/2\pi i) (\log{S})_{11}$. The difference between this and $-i(E_0 - \Delta(V))t_f$ in (\[eq:Delta(V)\]) is contained in the function $C(t_f,V)$. To evaluate $C(V,t_f)$ we introduce $\widetilde{S}(t,\lambda)$ where $$\widetilde{S}(t,\lambda) = \exp{(\lambda \log{S(t)})}, \label{eq:S_matrix}$$ so that $\widetilde{S}(t,1)=S(t)$. We now apply the following gauge transformation: $$\begin{aligned} \mathbf{a}(\epsilon) & \rightarrow & \mathbf{a}(\epsilon,t) = e^{iLVt} \mathbf{a}(\epsilon) \label{eq:gauge_transform} \\ \widetilde{S}(t,\lambda) & \rightarrow & e^{iLVt} \widetilde{S}(t,\lambda) e^{-iLVt} \label{eq:S(t,lambda)}\end{aligned}$$ Here $L$ is the diagonal matrix with $L_{11}=1$ and $L_{22}=0$. This has the advantage of eliminating the chemical potential difference between the two electrodes at the expense of an added time-dependence for $\widetilde{S}$ when $t\in [0,t_f]$. After switching to the time-representation (in which the trace, Tr, becomes a trace over channels and an integral over time) and substituting for $\sigma$ from (\[eq:sigma=S\]), $C(t_f,V)$ can be written $$C(t_f,V) = \mbox{Tr} \int_0^1 d\lambda \left[ \left((1-f+f\widetilde{S})^{-1}f -f\widetilde{S}^{-1}\right) \frac{d\widetilde{S}}{d\lambda} \right]. \label{eq:integral_over_lambda}$$ Using a parallel argument to that of [@MA03], we find that $$(1-f+f\widetilde{S})^{-1} = Y_+\left((1-f)Y_+^{-1} + fY_-^{-1}\right). \label{eq:RH_inverse_matrix}$$ where $Y_{\pm}=Y(t\pm i0,\lambda)$. 
Here $Y(z,\lambda)$ is an analytic (matrix) function of complex $z$ in the complement of the cut along the real axis between $z=0$ and $z=t_f$, and satisfies: $$Y_-Y_+^{-1} = \widetilde{S}(t,\lambda) \,\,\, \mbox{and} \,\,\, Y(z,\lambda) \rightarrow \mbox{const} \,\,\, \mbox{for} \,\,\, |z| \rightarrow \infty. \label{eq:RH}$$ If there is no tunneling between electrodes ($S^e$ diagonal), this matrix RH problem can be shown to be the same as the homogeneous part of that solved in [@ND69]. After substituting (\[eq:RH\_inverse\_matrix\]) into (\[eq:integral\_over\_lambda\]), using the fact that in the time-representation (after the gauge transformation \[eq:gauge\_transform\]) $f(t,t')=i(2\pi(t-t'+i0))^{-1}$ and letting $t'\rightarrow t$ to compute the trace, Tr, we finally obtain $$C(t_f,V) = \frac{i}{2\pi} \int_0^1 d\lambda \int_0^{t_f} \mbox{tr}\left\{\frac{dY_+}{dt} Y_+^{-1} S^{-1}\frac{dS}{d\lambda}\right\} dt. \label{eq:logY_logS}$$ Here tr denotes a trace over channel indices. Solving for $\chi(t_f,V)$ is equivalent to solving for the quantity $Y(z,\lambda)$. For small $V$, we can expand the exponential factors in $\widetilde{S}(z,\lambda)$ (see \[eq:S(t,lambda)\]) as $e^{\pm iVz} = 1 \pm iVz$. In this case $$Y(z,\lambda) = \exp{\left[ \frac{1}{2\pi i}\log{\left(\frac{z}{z-t_f}\right) \log{\widetilde{S}(z,\lambda)}} \right]} \label{eq:Y_small_t}$$ solves the RH problem. For $|z| \rightarrow \infty$, the exponent (and hence $Y$) tends to a constant as required. If $Vt_f \ll 1$ we can insert this result into (\[eq:logY\_logS\]) and compute the integrals over $t$ and $\lambda$. This yields the equilibrium result (\[eq:V=0\]). Although there are corrections to the equilibrium ($V=0$) solution for $Y_+$ which are linear in $Vt$, these cancel out after taking the trace in (\[eq:logY\_logS\]). Corrections to $C(t_f,V)$ can therefore only be of order $(Vt_f)^2$ or higher. For times $t_f>V^{-1}$, a general solution to this type of matrix RH problem is not known. 
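The jump condition (\[eq:RH\]) satisfied by the small-time solution (\[eq:Y\_small\_t\]) can be checked numerically in the scalar (single-channel, commuting) case: across the cut $(0,t_f)$ the factor $\log(z/(z-t_f))$ jumps by $2\pi i$, so $Y_-Y_+^{-1}=\widetilde{S}$, and $Y\rightarrow 1$ at infinity. A sketch with a hypothetical constant phase $\widetilde{S}=e^{2i\delta}$:

```python
import numpy as np

# Scalar check of the Riemann-Hilbert jump condition Y_- Y_+^{-1} = S for
# Y(z) = exp[(1 / 2 pi i) log(z/(z - t_f)) log S], with a hypothetical
# constant scattering phase S = exp(2 i delta).
t_f, delta = 1.0, 0.3
S = np.exp(2j * delta)

def Y(z):
    return np.exp(np.log(z / (z - t_f)) * np.log(S) / (2j * np.pi))

t, eps = 0.4, 1e-9                 # a point on the cut, approached from above/below
Yp, Ym = Y(t + 1j * eps), Y(t - 1j * eps)

assert abs(Ym / Yp - S) < 1e-6     # jump across the cut reproduces S
assert abs(Y(1e9) - 1.0) < 1e-6    # Y -> const (= 1) as |z| -> infinity
```

In the genuinely non-commuting matrix case no such closed scalar check applies, which is why the large-$t_f$ regime requires the asymptotic construction below.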
The form (\[eq:Y\_small\_t\]) for $Y_+$ is still valid for $0<t<V^{-1}$ and $t_f>t>t_f-V^{-1}$. The integral over times close to the branch points of $Y$ then gives the contribution varying as $D=\log{(\xi_0/V)}$ in (\[eq:D\]). However, although the form for $Y$ in (\[eq:Y\_small\_t\]) still satisfies the discontinuity condition along the cut, the exponent is unbounded for large $|z|$ and hence (\[eq:Y\_small\_t\]) is useless as a starting point for solving for $Y_+$ for $t\gg V^{-1}$. Following the derivation of [@MA03], we find that: $$Y_+(t,\lambda) = \begin{cases} \psi_+(t,\lambda) & t< 0 \\ \begin{pmatrix} 1 & -\gamma(t,\lambda) \\ 0 & 1 \end{pmatrix} \psi_+(t,\lambda) & 0 < t < t_f \\ \psi_+(t,\lambda) & t_f < t \end{cases} \label{eq:Y_large_t}$$ is asymptotically correct for $t\gg V^{-1}$. Here $\gamma(t,\lambda)=\widetilde{S}_{12}(t,\lambda)/ \widetilde{S}_{11}(t,\lambda)$ and $\psi_+(t,\lambda)=\psi(t+i0,\lambda)$ where $$\psi(z,\lambda) = \exp \left( \log \frac{z}{z-t_f} \left[ \frac{ \log{ \widetilde{S}_{11}/\widetilde{S}^*_{22} } }{4\pi i}\tau_0 + \frac{\log{(\widetilde{S}_{11}\widetilde{S}^*_{22})}}{4\pi i} \tau_3 \right]\right). \label{eq:psi}$$ The corresponding function $Y(z,\lambda)$ is not analytic across vertical cuts in the complex $z$-plane through the points $z=0$ and $z=t_f$, with discontinuities which decay as $e^{-V|z|}$ or $e^{-V|z-t_f|}$. (These factors show that we cannot describe the reverse bias case by taking $V<0$ in (\[eq:Y\_large\_t\]). Instead $Y_+$ takes a different form for negative $V$.) After inserting the solution (\[eq:Y\_large\_t\]) in (\[eq:logY\_logS\]) and computing the integrals over $\lambda$ and $t$, we obtain the first two terms in (\[eq:Vneq0\]). The term obtained after differentiating $\gamma$ in (\[eq:Y\_large\_t\]) and adding to the term from $\mbox{Tr}f\log \sigma$ in (\[eq:log\_chi\]), leads after some algebra to the term $-i(E_0-\Delta(V))t_f$. 
Differentiating $\psi_+(t_f,\lambda)$ in (\[eq:Y\_large\_t\]) leads to the term proportional to $\log{Vt_f}$. The constant term is derived using the form (\[eq:Y\_small\_t\]) for $Y_+$ valid for small $t$ and $t-t_f$ as discussed above.
--- abstract: 'Quantum corrections of the biquadratic interaction in the 1D spin-1/2 frustrated ferromagnetic Heisenberg model are studied. The biquadratic interaction for spin-1/2 chains is eliminated and transformed into a quadratic interaction. By performing a numerical experiment, new insight is provided into how the classical phases are modified by the inclusion of quantum fluctuations. The observed results suggest the existence of an intermediate region in the ground state phase diagram of the frustrated ferromagnetic spin-1/2 chains with a combination of dimer and chiral orders. In addition, from the quantum entanglement point of view, differences between the quantum phases are also obtained. The nearest-neighbor spins are never entangled in the frustrated ferromagnetic chains, but are entangled up to the Majumdar-Ghosh point in the frustrated antiferromagnetic chains. On the other hand, the next-nearest-neighbor spins in the mentioned intermediate region are entangled.' author: - 'Javad Vahedi$^1$, Saeed Mahdavifar$^2$' title: 'Quantum corrections of the biquadratic interaction in the 1D spin-1/2 frustrated ferromagnetic systems' --- Introduction {#sec1} ============ The exploration of novel orders in frustrated low-dimensional quantum systems has been pursued extensively from both theoretical and experimental points of view. An example which shows a variety of intriguing phenomena is the frustrated ferromagnetic spin-$\frac{1}{2}$ chain with an added nearest-neighbor biquadratic interaction[@kaplan]: $$\emph{H}=\sum_{n=1}^{N}\big[J_{1}\vec{S}_{n}.\vec{S}_{n+1}+J_{2}\vec{S}_{n}.\vec{S}_{n+2}-A(\vec{S}_{n}.\vec{S}_{n+1})^{2}\big], \label{e1}$$ where $J_{1}<0$, $J_{2}>0$ are the nearest-neighbor (NN) and next-nearest-neighbor (NNN) exchange couplings. $\vec{S}_{n}$ represents the spin-$\frac{1}{2}$ operator at the $n$th site, and $A$ denotes the biquadratic exchange. We introduce the parameters $\alpha=\frac{J_{2}}{|J_{1}|}$ and $a=\frac{A}{|J_{1}|}$ for convenience. 
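For spin-1/2 the biquadratic term in (\[e1\]) is not an independent coupling: $(\vec{S}_{n}.\vec{S}_{n+1})^{2}=\frac{3}{16}-\frac{1}{2}\vec{S}_{n}.\vec{S}_{n+1}$, and $2(\vec{S}_{n}.\vec{S}_{n+1})+1/2$ is the two-site permutation operator used in Sec. II. Both operator identities can be verified directly in a minimal numpy sketch:

```python
import numpy as np

# Spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

# S_1 . S_2 on the two-spin Hilbert space
SS = sum(np.kron(s, s) for s in (sx, sy, sz))

# (S.S)^2 = 3/16 - (1/2) S.S for spin 1/2: the biquadratic term is a
# quadratic term plus a constant
lhs = SS @ SS
rhs = (3 / 16) * np.eye(4) - SS / 2
assert np.allclose(lhs, rhs)

# 2(S.S) + 1/2 is the permutation (swap) operator
P = 2 * SS + 0.5 * np.eye(4)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
assert np.allclose(P, SWAP)
```

Both checks follow from the eigenvalues of $\vec{S}_1.\vec{S}_2$, which are $1/4$ on the triplet and $-3/4$ on the singlet.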
The pure frustrated ferromagnetic model ($a=0$) is well studied[@Aligia; @Dmitri; @Hikihara; @Mahdavifar]. Besides a general interest in understanding [*frustrations*]{} and phase transitions, it helps in understanding the intriguing magnetic properties of a novel class of edge-sharing copper oxides described by the frustrated ferromagnetic model[@Mizuno; @Hase; @Solodovnikov]. Several compounds with edge-sharing chains are known, such as $Li_{2}CuO_{2}$, $La_{6}Ca_{8}Cu_{21}O_{41}$, and $Ca_{2}Y_{2}Cu_{5}O_{10}$[@Mizuno]. Though the pure frustrated ferromagnetic model has been the subject of many studies [@Tonegawa; @Chubukov; @Cabra; @Krivnov], the complete picture of the quantum phases of this model has remained unclear up to now. It is known that the ground state is ferromagnetic for $\alpha=\frac{J_{2}}{|J_{1}|}<\frac{1}{4}$. At $\alpha_{c}=1/4$ the ferromagnetic state is degenerate with a singlet state. The wave function of this singlet state is exactly known [@Hamada; @Dmitriev]. For $\alpha>\frac{1}{4}$, however, the ground state is an incommensurate singlet. It has long been believed that at $\alpha>\frac{1}{4}$ the model is gapless[@White; @Allen], but the one-loop renormalization group analysis indicates[@Cabra; @Nersesyan] that a gap opens due to a Lorentz symmetry breaking perturbation. However, the existence of the energy gap has not yet been verified numerically[@Cabra]. Using field theory considerations it has been proposed[@Dmitriev] that a very tiny but finite gap exists which can hardly be observed by numerical techniques. In a very recent work [@kaplan], T. Kaplan presents the classical ground state phase diagram of the frustrated model with added biquadratic exchange interaction ($a\neq 0$). By considering spins as vectors and using a cluster method based on a block of three spins, he found the classical ground state phase diagram shown in Fig. [\[schematic1\]]{}. 
The classical phase diagram exhibits the ferromagnetic, the spiral, the canted-ferro, and up-up-down-down spin structures. In the non-frustrated Heisenberg case ($\alpha=0$), the spiral phase is caused by the competition between the Heisenberg and the biquadratic interactions[@kaplan]. There are two known sources of these terms: first, purely electronic ones, i.e. higher order terms in the hopping amplitudes or orbital overlap (the leading order yields the Heisenberg interactions)[@bb1; @bb2], and second, lattice-induced ones, i.e. the spin-lattice interaction [@bb3]. The presence of a chiral phase in quasi-one dimensional frustrated magnets has been intensively studied during the last decade[@Nersesyan; @bb11; @bb12; @bb13; @bb14; @bb15]. This interest was triggered by the prediction of a ground state with non-zero vector spin chirality, $\left< \vec{S}_{l}\times\vec{S}_{m}\right>\neq0$. As pointed out in Ref. \[30\], classical states with spontaneously broken chirality only exist together with helical long range order. The helical order breaks the continuous symmetry of global spin rotations along the $z$-axis. Consequently, the existence of long range helical order is in most cases precluded by the zero point fluctuations of 1D quantum systems [@bb16] (Mermin-Wagner theorem [@bb17]). On the other hand, chiral orderings are allowed because they only break discrete symmetries. For this reason, chiral orders in quantum spin systems can be thought of as remnants of the helical order in classical systems. This is one of the main motivations for seeking chiral orders in quantum spin Hamiltonians whose ground states exhibit helical order in the $S\rightarrow\infty$ limit[@bb16]. The structure of the paper is as follows: In Sec. II we check the validity of the classical phase diagram exhaustively, from a quantum point of view, with the accurate Lanczos scheme. In Sec. 
III we use the entanglement of formation (EoF) to check for the presence of quantum phase transitions and of the critical lines predicted by T. Kaplan's approach. Finally, we present our results. ![(Color online) Classical phase diagram: $\textbf{a}\equiv A/ |J1|$ vs $\alpha\equiv J2 / |J1|$. Disorder occurs on the emphasized vertical line segments.[]{data-label="schematic1"}](schematicphase.eps){width="0.95\columnwidth"} Quantum phase diagram {#sec2} ===================== Noting that the operator $2(S_n.S_{n+1})+1/2$ is the permutation operator, the 1D frustrated ferromagnetic Hamiltonian is transformed to the following model $$\emph{H}^{T}=\sum_{n=1}^{N}\big[-(1+\frac{a}{2})\vec{S}_{n}.\vec{S}_{n+1}+\alpha \vec{S}_{n}.\vec{S}_{n+2}\big]+constant. \label{et}$$ This is nothing but the isotropic spin-1/2 Heisenberg model with NN exchange $(1+\frac{a}{2})$ and NNN exchange $\alpha$. From the quantum point of view, one encounters four different cases upon changing the strength of the biquadratic and frustration exchanges $$\begin{aligned} (I)&\alpha&<0,~~a<-2,~~~ nonfrustrated~AF-F~ model \nonumber\\ (II)&\alpha&<0,~~a>-2,~~~nonfrustrated~F-F~model \nonumber \\ (III)&\alpha&>0,~~a<-2,~~~frustrated~AF-AF~model \nonumber \\ (IV)&\alpha&>0,~~a>-2,~~~frustrated~F-AF~model.\nonumber\end{aligned}$$ It is known that the ground state of the 1D spin-1/2 non-frustrated F-F model has ferromagnetic long-range order. On the other hand, the spectrum of the non-frustrated AF-F model is gapless. The 1D frustrated AF-AF model is well understood. In the classical limit the system develops spiral order for $\frac{\alpha}{\mid1+a/2\mid}>\frac{1}{4}$, whereas a quantum phase transition into a dimerized phase occurs at $\alpha_{c}\simeq 0.2411~\mid1+a/2\mid$. This dimerized phase is characterized by a singlet ground state with twofold degeneracy and an excitation gap to the first excited state. At the Majumdar-Ghosh point[@saeed0], i.e. 
$\alpha=0.5~\mid1+a/2\mid$, the ground state is exactly solvable. In addition, the ground state of the frustrated F-F model is ferromagnetic for $\frac{\alpha}{1+a/2}<\frac{1}{4}$. At $\alpha_{c}=\frac{1}{4}(1+a/2)$ the ferromagnetic state is degenerate with a singlet state. For $\alpha>\alpha_{c}$, the existence of a tiny gapped region has been suggested. Recently, the possible relevance of this model to several quasi-1D edge-sharing cuprates[@saeed1; @saeed2; @saeed3; @saeed4; @saeed5] has been raised very seriously[@saeed6; @saeed7; @saeed8; @saeed9]. These compounds can exhibit multiferroic behavior in low-temperature chiral spin ordered phases. Theoretically, the study of the anisotropy effect has clearly shown that the chiral phase appears and extends up to the vicinity of the SU(2) point for moderate values of frustration[@saeed7; @saeed9], in good agreement with the experimental results. In the following, to find the ground state quantum phase diagram and to provide proper insight into how the classical phases are modified by the inclusion of quantum fluctuations, we performed a numerical experiment using the Lanczos method. To explore the nature of the spectrum and the quantum phase transitions, we numerically diagonalized chains of length up to $N=24$ for different values of the biquadratic exchange. The energies of the few lowest eigenstates were obtained for chains with periodic boundary conditions. We start our study with the magnetization, defined as $$M^{\gamma}=\frac{1}{N} \sum_{j=1}^{N}\left<GS\mid S^{\gamma}_{j}\mid GS\right> \label{e3}$$ where $\gamma=x, y, z$ and the notation $\left<GS\mid ... \mid GS\right>$ represents the ground state expectation value. One of the most intriguing properties of quasi-one dimensional frustrated systems is the dependence of the magnetization on the applied magnetic field at $T=0$. The magnetization is characterized by a swift increase (or even a discontinuity) when the external field exceeds a critical value. 
It is expected that the magnetization exhibits a true jump (the metamagnetic transition) when the frustration $\alpha$ is a little larger than $\alpha_{c}=0.25$[@Aligia; @Dmitriev]. In Fig. \[magnetization\](a) and Fig. \[magnetization\](b), for chain size $N=24$, we have plotted $M^{x}$ as a function of the frustration and biquadratic parameters, respectively, in order to sweep all parts of the ground state phase diagram. As can be seen from Fig. \[magnetization\](a), the magnetization is saturated, $M^{x}=0.5$, in the ground state of the nonfrustrated F-F model and for some values of the frustration, $\alpha<\alpha_{c}=\frac{1}{4}(1+a/2)$, in the frustrated F-AF model. At the critical point $\alpha_c=\frac{1}{4}(1+a/2)$, a sudden jump occurs, which is known as the metamagnetic phase transition[@Mahdavifar]. Numerical results presented in Fig. \[magnetization\](b) show that quantum fluctuations destroy the suggested classical long range canted ferromagnetic order in the nonfrustrated AF-F model. By changing the biquadratic exchange, a metamagnetic phase transition between the nonfrustrated AF-F and F-F models happens at the exact critical biquadratic exchange $a=-2.0$. In the insets of Fig. \[magnetization\] we have plotted the magnetization for a fixed value of the biquadratic interaction, Fig. \[magnetization\](a), and of the frustration parameter, Fig. \[magnetization\](b), for different chain sizes $N=12, 16, 20, 24$. It is completely clear that there is no appreciable size effect in the numerical results for the magnetization, which confirms the presence of the critical lines in the thermodynamic limit. 
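The numerical experiment above can be reproduced qualitatively with a toy dense diagonalization of the transformed chain (\[et\]) (a hypothetical small size $N=8$ with $a=0$, standing in for the $N\leq24$ Lanczos chains): below $\alpha_{c}=\frac{1}{4}(1+a/2)$ the ground-state energy coincides with that of the fully polarized state, confirming the ferromagnetic ground state.

```python
import numpy as np

# Toy dense diagonalization of H = sum_n [ j1 S_n.S_{n+1} + j2 S_n.S_{n+2} ],
# periodic boundary conditions. N = 8 is a stand-in for the Lanczos chains.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(s, site, N):
    """Embed the single-site operator s at `site` in an N-spin chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, s if k == site else np.eye(2))
    return out

def chain(N, j1, j2):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for n in range(N):
        for dist, j in ((1, j1), (2, j2)):
            H += j * sum(site_op(s, n, N) @ site_op(s, (n + dist) % N, N)
                         for s in (sx, sy, sz))
    return H

N, alpha = 8, 0.1                      # a = 0, so j1 = -1 and alpha < 1/4
E0 = np.linalg.eigvalsh(chain(N, -1.0, alpha)).min()
E_FM = N * (-0.25 + 0.25 * alpha)      # energy of the fully polarized state
assert abs(E0 - E_FM) < 1e-8           # ferromagnetic ground state below alpha_c
```

The fully polarized state is an exact eigenstate with $\langle\vec{S}_n.\vec{S}_m\rangle=1/4$ on every bond, so its energy is known in closed form and the comparison is sharp.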
In conclusion, the quantum critical line which separates the ferromagnetic phase from the spiral phase is consistent with the classical line, but our calculations show that the vertical critical line which separates the ferromagnetic phase from the up-up-down-down phase no longer exists at the quantum level, and quantum correlations extend the ferromagnetic phase even into the region $\alpha \geq 0.5$ and $a \geq 2.0$. ![(Color online.) Magnetization ($M^{x}$) curve versus (a) frustration parameter $\alpha$ with different fixed biquadratic parameters **a** = 1.0, 1.2, ..., 3.0 for chain with length N =24. (b) biquadratic parameter **a** with different fixed frustration $\alpha=-0.1, -0.2, -0.3$ for chain with length N =24. In both plots the inset shows scaling behavior for chain with lengths N= 12, 16, 20, 24.[]{data-label="magnetization"}](mxa.eps "fig:"){width="1.0\columnwidth"} ![(Color online.) Magnetization ($M^{x}$) curve versus (a) frustration parameter $\alpha$ with different fixed biquadratic parameters **a** = 1.0, 1.2, ..., 3.0 for chain with length N =24. (b) biquadratic parameter **a** with different fixed frustration $\alpha=-0.1, -0.2, -0.3$ for chain with length N =24. In both plots the inset shows scaling behavior for chain with lengths N= 12, 16, 20, 24.[]{data-label="magnetization"}](mxb.eps "fig:"){width="1.0\columnwidth"} ![(Color online.) The dimer order parameter $d$ as function of (a) frustration parameter $\alpha$ with different fixed biquadratic parameters **a** = 1.0, 1.2, ..., 3.0 for chain with length N =24. (b) biquadratic parameter **a** with different fixed frustration $\alpha=-0.1, -0.2, -0.3$ for chain with length N =24. In both plots the inset shows scaling behavior for chain with lengths N= 12, 16, 20, 24.[]{data-label="dimer"}](dimeralpha.eps "fig:"){width="1.0\columnwidth"} ![(Color online.) 
The dimer order parameter $d$ as function of (a) frustration parameter $\alpha$ with different fixed biquadratic parameters **a** = 1.0, 1.2, ..., 3.0 for chain with length N =24. (b) biquadratic parameter **a** with different fixed frustration $\alpha=-0.1, -0.2, -0.3$ for chain with length N =24. In both plots the inset shows scaling behavior for chain with lengths N= 12, 16, 20, 24.[]{data-label="dimer"}](dimera.eps "fig:"){width="1.0\columnwidth"} To display the quantum ground state magnetic phase diagram of the model and check the nature of the classical suggested up-up-down-down phase we have calculated the quantum dimer order parameter which is defined as $$\begin{aligned} d=\frac{1}{N} \sum_{j} \langle GS| \vec{S}_{j}\cdot\vec{S}_{j+1}-\vec{S}_{j}\cdot\vec{S}_{j+2} |GS\rangle. \label{e4}\end{aligned}$$ In Fig. \[dimer\](a), we have plotted the dimer order parameter $d$ as a function of the frustration parameter $\alpha$ with different fixed values of the biquadratic parameter $a=1.0, 1.2, ..., 3.0$ for chain size $N=24$. It is clear from Fig. \[dimer\](a) that in the frustrated F-F model, for values of the frustration $\alpha < \alpha_{c_{_{1}}}=\frac{1}{4}(1+a/2)$ the dimer order parameter is equal to zero in well agreement with fully polarized ferromagnetic phase. By further increasing the frustration and for $\alpha > \alpha_{c_{_{1}}}$, the dimer order parameter starts to increase and reaches its saturation value ($\simeq0.5$) at $\alpha=\alpha_{c_{2}}(a)$. At the first critical point, $\alpha=\alpha_{c_{1}}$, quantum fluctuations suppress the ferromagnetic ordering and the system undergoes a quantum phase transition from the ferromagnetic phase into a phase with dimer ordering. The positive value of the dimer order parameter in the region $\alpha>\alpha_{c_{1}}$, shows the dimerization between next nearest neighbors which is named “Dimer-II”. 
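The dimer order parameter (\[e4\]) can be evaluated in a small toy chain (hypothetical $N=8$, $a=0$): $d$ vanishes identically in the ferromagnetic region and is large and positive deep in the Dimer-II region, as in Fig. \[dimer\](a). A self-contained sketch:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(s, site, N):
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, s if k == site else np.eye(2))
    return out

def dimer_order(N, j1, j2):
    """d = (1/N) sum_n <GS| S_n.S_{n+1} - S_n.S_{n+2} |GS>, periodic chain."""
    bond = {dist: [sum(site_op(s, n, N) @ site_op(s, (n + dist) % N, N)
                       for s in (sx, sy, sz)) for n in range(N)]
            for dist in (1, 2)}
    H = j1 * sum(bond[1]) + j2 * sum(bond[2])
    gs = np.linalg.eigh(H)[1][:, 0]          # ground-state vector
    ev = lambda op: (gs.conj() @ op @ gs).real
    return sum(ev(b1) - ev(b2) for b1, b2 in zip(bond[1], bond[2])) / N

N = 8
assert abs(dimer_order(N, -1.0, 0.1)) < 1e-8   # ferromagnetic phase: d = 0
assert dimer_order(N, -1.0, 2.0) > 0.25        # deep Dimer-II region: d > 0
```

The second check uses a hypothetical deep-frustration value $\alpha=2.0$, where the NNN bonds form two nearly decoupled antiferromagnetic rings and $\langle\vec{S}_n.\vec{S}_{n+2}\rangle$ is strongly negative.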
The oscillations (quasi-plateaus) at finite $N$ in the region $\alpha_{c_{1}}<\alpha< \alpha_{c_{2}}$ are the result of level crossings between the ground state and excited states of the model[@Mahdavifar08]. At the second quantum critical point, $\alpha=\alpha_{c_{2}}$, the ground state of the system goes into a phase with an almost fully polarized dimer state between next nearest neighbors. We have also checked the size effects on the dimerization, and numerical results are shown in the inset of Fig. \[dimer\](a) with fixed biquadratic exchange $a=2.0$ for different chain lengths $N=12, 16, 20, 24$. In Fig. \[dimer\](b), the dimer order parameter is plotted versus the frustration parameter for a chain size $N=24$ and different values of the biquadratic parameter $a<-2.0$. Indeed, in order to check the nature of the classically suggested canted ferromagnetic phase, we have plotted the dimer order parameter as a function of the frustration parameter for fixed values of the biquadratic exchange in this region. As can be seen from Fig. \[dimer\](b), in the region $\alpha<0$, namely the nonfrustrated AF-F model, the ground state of the system has long-range dimerization between nearest neighbors, the so-called Dimer-I phase. In the case of the frustrated AF-AF model, as soon as the frustration increases from $\alpha_{c}$, the dimerization order parameter starts to increase and becomes zero almost at the Majumdar-Ghosh point $\alpha=0.5~\mid1+a/2\mid$. The value of the critical frustration, $\alpha_{c}$, depends on the biquadratic exchange. Upon further increasing the frustration beyond the MG point, the dimerization increases very rapidly and reaches the saturation value ($d \simeq 0.4$). 
Thus, in the region of biquadratic exchange $a<-2$, for negative values of the frustration the ground state is in the Dimer-I phase, and by increasing the frustration a quantum phase transition happens at a critical positive frustration $\alpha_{c}$, from the Dimer-I phase into a phase with dimer ordering between NNN sites, which is named the Dimer-II phase. In the inset of Fig. \[dimer\](b) the dimerization order parameter is plotted as a function of the frustration with fixed biquadratic exchange $a=-3.0$ for different chain lengths $N=12, 16, 20, 24$. By comparing the results for different sizes it can be concluded that there are two different dimer phases with true long-range ordering. In the presence of the biquadratic parameter $a$, at the classical level the spins order in a spiral structure in part of the phase diagram. One might expect that part of the broken symmetries of the classical spiral spin configuration remain spontaneously broken even in the quantum regime. The spirality, or chirality in the quantum literature, can be measured with the vector chiral order parameter, $$\begin{aligned} \chi^{\gamma}&=&\frac{1}{N} \sum_{j} \left<GS\mid ({\bf S}_{j}\times {\bf S}_{j+1})^{\gamma}\mid GS\right>.\end{aligned}$$ The vector chiral order corresponds to the spontaneous breaking of the discrete $Z_{2}$ symmetry about the center. One should note that there are two different quantum types of chiral ordered phases, gapped and gapless[@bb18; @bb19]. The vector chiral phase is characterized by long-range order of the vector chiral correlation defined as $$\begin{aligned} C^{\gamma}=\sum_{l=1}^{N}\left<GS\mid \chi^{\gamma}_{j}~\chi^{\gamma}_{j+l}\mid GS\right>. \label{e6}\end{aligned}$$ ![(Color online.) (a)The vector chiral correlation as function of frustration parameter $\alpha$ with fixed biquadratic parameter $a = 2.0$, (b) the spin structure factor as wave vector for chains with different lengths N =12, 16, 20, 24.[]{data-label="chiral"}](chiral2.eps "fig:"){width="1.0\columnwidth"} ![(Color online.) 
(a)The vector chiral correlation as function of frustration parameter $\alpha$ with fixed biquadratic parameter $a = 2.0$, (b) the spin structure factor as wave vector for chains with different lengths N =12, 16, 20, 24.[]{data-label="chiral"}](structurexx.eps "fig:"){width="1.0\columnwidth"} To gain deeper insight into the nature of the quantum phases, we have numerically calculated the vector chiral correlation for chains with periodic boundary conditions and lengths $N=12, 16, 20, 24$. In Fig. \[chiral\], we present Lanczos results for the vector chiral correlation, $C^{x}$, as a function of the frustration parameter $\alpha$ for a fixed value of the biquadratic exchange $a=2.0$, corresponding to the frustrated F-AF model, for different chain lengths $N=12, 16, 20, 24$. As is clearly seen, in the region $\alpha<\alpha_{c_1}=\frac{1}{4}(1+a/2)$ there is no long-range chiral order along the $x$ axis, in good agreement with the ferromagnetic phase. By increasing the frustration, in an intermediate region $\alpha_{c_1}<\alpha<\alpha_{c_2}$, the ground state shows a profound chiral order. It is important to note that the growth of the results in the intermediate region with increasing system size indicates divergence in the thermodynamic limit $N\longrightarrow\infty$, the characteristic of true long-range chiral order. As soon as the frustration increases from $\alpha_{c_{2}}$, the chirality drops rapidly. The constant value of the vector chiral correlation in the region $\alpha>\alpha_{c_{2}}$ shows that $C^{x}/N$ takes the value zero in the thermodynamic limit $N\longrightarrow\infty$. We also performed our numerical experiment for other values of the biquadratic exchange in the region $a>-2.0$ and found the same qualitative picture. 
Therefore, in the intermediate region $\alpha_{c_1}<\alpha<\alpha_{c_2}$ and for values of the biquadratic exchange $a>-2$, corresponding to the frustrated F-AF model, the dimer ordering between next nearest spins coexists with the chirality. Another quantum mechanical counterpart of the classical pitch angle is the wave vector $q$ at which the static spin structure factor $$\begin{aligned} S^{\alpha}(q)=\sum_{j}^{N/2}e^{iqj}\left<GS\mid S_{0}^{\alpha}S_{j}^{\alpha}\mid GS\right>. \label{e7}\end{aligned}$$ is peaked. In Fig. (\[chiral\]-b), we have plotted the structure factor versus $0\leq q\leq2\pi$ for the fixed parameters $a = 2.0$ and $\alpha=0.68$. As can be seen, the structure factor shows two peaks, around $q\sim1.0$ and $q\sim5.0$, in the predicted chiral phase. Ground state entanglement {#sec3} ========================= In recent years, the interest of the quantum information community in condensed matter has stimulated an exciting cross-fertilization between the two areas [@bb20]. It has been found that entanglement plays a crucial role in the low-temperature physics of many of these systems, particularly in their ground state[@bb21; @bb22; @bb23; @bb24]. The pioneering study of quantum information in the condensed matter area was the observation that two-body entanglement in the ground state of a cooperative system exhibits peculiar scaling features approaching a quantum critical point [@bb22]. These seminal studies showed that at quantum phase transitions the dramatic change in the ground state of a many-body system is associated with a change in the way entanglement is distributed among the elementary constituents. We here focus on one of the most frequently used entanglement measures: *concurrence*. Knowledge of the two-site reduced density matrix enables one to calculate the concurrence, a measure of entanglement between two spins at sites $i$ and $j$ [@bb20; @bb21]. 
The reduced density matrix is defined as $$\begin{aligned} \rho_{ij}&=&\frac{1}{4}\Big( 1+\langle\sigma_{i}^{z}\rangle\sigma_{i}^{z}+\langle\sigma_{j}^{z}\rangle\sigma_{j}^{z}+\langle\sigma_{i}^{x}\sigma_{j}^{x}\rangle\sigma_{i}^{x}\sigma_{j}^{x}\nonumber\\ &+&\langle\sigma_{i}^{y}\sigma_{j}^{y}\rangle\sigma_{i}^{y}\sigma_{j}^{y}+\langle\sigma_{i}^{z}\sigma_{j}^{z}\rangle\sigma_{i}^{z}\sigma_{j}^{z}\Big) \label{e8}\end{aligned}$$ where the $\sigma_{i}$’s are the Pauli matrices, and the concurrence $C$ is given by $ C = \max\{\varepsilon_{1}-\varepsilon_{2}-\varepsilon_{3}-\varepsilon_{4},0\}$, where the $\varepsilon_{i}$’s are the square roots of the eigenvalues of the operator $\varrho_{ij}=\rho_{ij}(\sigma_{i}^{y}\otimes\sigma_{j}^{y})\rho_{ij}^{\ast} (\sigma_{i}^{y}\otimes\sigma_{j}^{y})$ in descending order. $C=0$ implies an unentangled state whereas $C=1$ corresponds to maximal entanglement. ![(Color online). (a) The concurrence between next nearest neighbors $C_{j,j+2}$ as a function of the frustration parameter $\alpha$ with a fixed biquadratic values (a) $a=2.0$ and (b) $a=-3.0$ for different chain lengths N =12, 16, 20, 24. In the inset of both plot, we plot entanglement between nearest neighbors $C_{j,j+1}$ as a function of the frustration parameter.[]{data-label="Concurrence1"}](concurrenca.eps "fig:"){width="1.0\columnwidth"} ![(Color online). (a) The concurrence between next nearest neighbors $C_{j,j+2}$ as a function of the frustration parameter $\alpha$ with a fixed biquadratic values (a) $a=2.0$ and (b) $a=-3.0$ for different chain lengths N =12, 16, 20, 24. In the inset of both plot, we plot entanglement between nearest neighbors $C_{j,j+1}$ as a function of the frustration parameter.[]{data-label="Concurrence1"}](concurrencb.eps "fig:"){width="1.0\columnwidth"} ![(Color online) Modified quantum phase diagram.[]{data-label="schematic"}](quantumschematicphase.eps){width="1.0\columnwidth"} The numerical Lanczos results describing the concurrence are shown in Fig. 
\[Concurrence1\]. In this figure the concurrence between two NN and NNN spins is plotted as a function of the frustration $\alpha$ for chain lengths $N=12, 16, 20, 24$ with fixed values of the biquadratic exchange. For $a=2.0$ (Fig. \[Concurrence1\](a)), corresponding to the frustrated F-AF model, it can be seen that in the absence of frustration the NNN spins are not entangled, in good agreement with the ferromagnetic phase. By applying the frustration and up to the first quantum critical point $\alpha_{c_{1}}=\frac{1}{4}(1+a/2)$, the concurrence between NNN spins remains zero. As soon as the frustration increases from $\alpha_{c_{1}}$, a jump occurs, which is characteristic of the metamagnetic phase transition. In the intermediate region, $\alpha_{c_1}<\alpha<\alpha_{c_2}$, the concurrence between NNN spins increases with increasing frustration and reaches its nearly saturated value at $\alpha=\alpha_{c_2}$. In the region $\alpha>\alpha_{c_2}$, the concurrence between NNN spins remains almost constant. Indeed, the quantum correlations between two NNN spins in the intermediate region increase with increasing frustration and take an almost maximal value at $\alpha_{c_2}$. In the inset of Fig. \[Concurrence1\](a), we have plotted the concurrence between NN spins as a function of the frustration for the biquadratic exchange $a=2.0$. It can be seen that the NN spins do not show any entanglement in the frustrated F-AF model. To complete our study of the entanglement phenomena, we have calculated the concurrence between NN and NNN spins in different sectors of the ground state phase diagram. For example, we present our numerical results for the biquadratic exchange $a=-3$ in Fig. \[Concurrence1\](b). As can be seen, in the region of frustration $\alpha<0$, corresponding to the nonfrustrated AF-F model, the NNN spins are not entangled but the NN spins are entangled (inset of Fig. \[Concurrence1\](b)). 
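The concurrence curves discussed here rest on Wootters' construction quoted in Sec. III; a minimal sketch of that formula, checked on a singlet pair (maximally entangled, $C=1$) and on a product state ($C=0$):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (computational basis)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY       # rho (sy x sy) rho* (sy x sy)
    # square roots of the eigenvalues of R, in ascending order
    eps = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))
    return max(eps[-1] - eps[-2] - eps[-3] - eps[-4], 0.0)

# singlet (|01> - |10>)/sqrt(2): maximally entangled, C = 1
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
assert abs(concurrence(np.outer(psi, psi.conj())) - 1.0) < 1e-10

# product state |00>: unentangled, C = 0
e00 = np.zeros(4); e00[0] = 1.0
assert concurrence(np.outer(e00, e00)) < 1e-12
```

For the chain, the input $\rho_{ij}$ is built from the Lanczos ground-state correlators as in (\[e8\]).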
On the other hand, in the frustrated AF-AF model, the NN spins remain entangled up to the Majumdar-Ghosh point, and beyond the Majumdar-Ghosh point, upon increasing the frustration parameter, only the NNN spins are entangled. Summary and discussion {#sec4} ====================== We have considered frustrated ferromagnetic spin-$\frac{1}{2}$ chains with an added nearest-neighbor biquadratic interaction. In a very recent work [@kaplan], the classical ground state phase diagram of the model was studied. The existence of ferromagnetic, spiral, canted-ferro and up-up-down-down spin structures was shown. To find the quantum corrections, we first eliminated the biquadratic interaction using a permutation operator and transformed it into a quadratic interaction. By changing the biquadratic parameter, it was shown that the transformed Hamiltonian covers all types of NN and NNN interaction models. Then, we performed a numerical experiment to observe the quantum corrections. Our numerical experiment showed that the quantum fluctuations are strong enough to change the classical ground state phase diagram. As can be seen from Fig. \[schematic\], depending on the values of the frustration and the biquadratic exchange parameters, the ground state of the system can be found in the ferromagnetic, the Dimer-I, the Dimer-II and the chiral magnetic orders. In very recent works, it was shown that the chiral phase appears in anisotropic frustrated ferromagnetic chains and extends up to the vicinity of the SU(2) point for moderate values of frustration[@saeed7; @saeed9], in good agreement with the experimental results. The complete picture of the quantum phases of this model has remained unclear up to now. Also, several authors have discussed this area in depth[@Cabra; @Dmitrie; @White; @Allen; @Nersesyan]. The existence of a tiny but finite gap in the region of frustration $\alpha>0.24$ is one of the interesting and still puzzling effects in frustrated ferromagnetic chains. 
It is also worth mentioning that, using the coupled cluster method for the infinite chain and exact diagonalization for finite chains, the authors of Ref. \[52\] have studied the effect of a third-neighbor exchange $J_{3}$ on the ground state of the spin-half Heisenberg chain with ferromagnetic nearest-neighbor interaction $J_{1}$ and frustrating antiferromagnetic next-nearest-neighbor interaction $J_{2}$. By setting $J_{1}=-1$, they proposed a quantum phase diagram consisting of spiral and ferromagnetic phases in the $J_{2}-J_{3}$ plane. Across the $J_{3}=0$ line of the proposed diagram, a second-order transition takes place from the FM to the spiral phase. Our study shows that the mentioned region should exist, and it is surprising that this region contains two ordered phases: Dimer-II and chiral. However, more research is needed in this respect. The difference between the quantum phases has also been studied from the quantum entanglement point of view: we have calculated the concurrence between two NN and NNN spins in different sectors of the ground state phase diagram. We showed that the concurrence function is a very useful tool to recognize the different quantum phases, especially in this model. Acknowledgement =============== It is our pleasure to thank T. Vekua and T. Nishino for very useful comments. [99]{} T. A. Kaplan, Phys. Rev. B [**80**]{} 012407 (2009). A. A. Aligia, Phys. Rev. B [**63**]{} 014402 (2001). D. V. Dmitriev and V. Ya. Krivnov, Phys. Rev. B [**73**]{} 024402 (2006). T. Hikihara, L. Kecke, T. Momoi and A. Furusaki, Phys. Rev. B [**78**]{} 144404 (2008). S. Mahdavifar, J. Phys.: Condens. Matter [**20**]{} 335230 (2008). Y. Mizuno, T. Tohyama, S. Maekawa, T. Osafune, N. Motoyama, H. Eisaki, and S. Uchida, Phys. Rev. B [**57**]{} 5326 (1998). M. Hase, H. Kuroe, K. Ozawa, O. Suzuki, H. Kitazawa, G. Kido and T. Sekin, Phys. Rev. B [**70**]{} 104426 (2004). S. F. Solodovnikov and Z. A. Solodovnikova, J. Struct. Chem. [**38**]{} 765 (1997). T. Tonegawa and I. Harada, J.
Phys. Soc. Jpn. [**58**]{} 2902 (1989). A. V. Chubukov, Phys. Rev. B [**44**]{} 5362 (1991). D. C. Cabra, A. Honecker and P. Pujol, Eur. Phys. J. B [**13**]{} 55 (2000). V. Ya. Krivnov and A. A. Ovchinnikov, Phys. Rev. B [**53**]{} 6435 (1996). T. Hamada, J. Kane, S. Nakagawa and Y. Natsume, J. Phys. Soc. Jpn. [**57**]{} 1891 (1988). D. V. Dmitriev, V. Ya. Krivnov and A. A. Ovchinnikov, Phys. Rev. B [**56**]{} 5985 (1997). S. R. White, Ian Affleck, Phys. Rev. B [**54**]{} 9863 (1996). D. Allen and D. Senechal, Phys. Rev. B [**55**]{} 299 (1997). A. A. Nersesyan, A. Q. Gogolin and F. H. L. Essler, Phys. Rev. Lett. [**81**]{} 910 (1998). P. W. Anderson, in Magnetism, edited by G. Rado and H. Suhl, Academic, New York [**I**]{} 41 (1963). N. L. Huang and R. Orbach, Phys. Rev. Lett. [**12**]{} 275 (1964). C. Kittel, Phys. Rev. [**120**]{} 335 (1960); M. Barma, Phys. Rev. B [**16**]{} 593 (1977). E. A. Harris and J. Owen, Phys. Rev. Lett. [**11**]{} 9 (1963). D. S. Rodbell, I. S. Jacobs, and J. Owen, Phys. Rev. Lett. [**11**]{} 10 (1963). U. Schollwöck, J. Richter, D. Farnell, and R. B. (Eds.) Quantum Magnetism, Lecture Notes in Physics, Springer-Verlag, Berlin/Heidelberg (2004). F. Mila and F. C. Zhang, Eur. Phys. J. B [**16**]{} 7 (2000). J. J. García-Ripoll, M. A. Martin-Delgado, and J. I. Cirac, Phys. Rev. Lett. [**93**]{} 250405 (2004). S. Trotzky, P. Cheinet, S. Folling, M. Feld, U. Schnorrberger, A. M. Rey, A. Polkovnikov, E. A. Demler, M. D. Lukin, and I. Bloch, Science [**319**]{} 295 (2008). A. V. Gorshkov, M. Hermele, V. Gurarie, C. Xu, P. S. Julienne, J. Ye, P. Zoller, E. Demler, M. D. Lukin, and A. M. Rey, Nature Physics [**6**]{} 289 (2010). M. Kaburagi, H. Kawamura, and T. Hikihara, J. Phys. Soc. Jpn. [**68**]{} 3185 (1999). T. Hikihara, M. Kaburagi, and H. Kawamura, Phys. Rev. B [**63**]{} 174430 (2001). A. Kolezhuk, Phys. Rev. B [**62**]{} R6057 (2000); A. Kolezhuk and T. Vekua, Phys. Rev. B [**72**]{} 094424 (2005). I. P. McCulloch, R. Kube, M. Kurz, A.
Kleine, U. Schollwöck, and A. K. Kolezhuk, Phys. Rev. B [**77**]{} 094404 (2008). K. Okunishi, J. Phys. Soc. Jpn. [**77**]{} 114004 (2008). C. D. Batista, arXiv: 0908.3639v1. N. D. Mermin and H. Wagner, Phys. Rev. Lett. [**17**]{} 1133 (1966). C. K. Majumdar and D. K. Ghosh, J. Math. Phys. [**10**]{} 1399 (1969). G. Kamieniarz, et al., Comp. Phys. Comm. [**147**]{} 716 (2002). M. Hase, H. Kuroe, K. Ozawa, O. Suzuki, H. Kitazawa, G. Kido, and T. Sekine, Phys. Rev. B [**70**]{} 104426 (2004). T. Masuda, A. Zheludev, A. Bush, M. Markina, and V. Vasiliev, Phys. Rev. B [**72**]{} 014405 (2005). M. Enderle, et al., Europhys. Lett. [**70**]{} 237 (2005). M. Baran, et al., Phys. Stat. Sol. (c) [**3**]{} 220 (2006). S. Furukawa, M. Sato, Y. Saiga, and S. Onoda, J. Phys. Soc. Jpn. [**77**]{} 123712 (2008). S. Furukawa, M. Sato, and S. Onoda, Phys. Rev. Lett. [**105**]{} 257205 (2010). S. Furukawa, M. Sato, and A. Furusaki, Phys. Rev. B [**81**]{} 094430 (2010). M. Sato, S. Furukawa, S. Onoda, and A. Furusaki, Modern Physics Letters B [**25**]{} 901 (2011). M. Kaburagi, H. Kawamura, T. Hikihara, J. Phys. Soc. Jpn. [**68**]{} 3185 (1999). T. Hikihara et al., J. Phys. Soc. Jpn. [**69**]{} 259 (2000). W. K. Wootters, Phys. Rev. Lett. [**80**]{} 2245 (1998); K. M. O’Connor and W. K. Wootters, Phys. Rev. A [**63**]{} 052302 (2001). M. C. Arnesen, S. Bose and V. Vedral, Phys. Rev. Lett. [**87**]{} 017901 (2001); D. Gunlycke, V. M. Kendon, V. Vedral and S. Bose, Phys. Rev. A [**64**]{} 042302 (2001). T. J. Osborne and M. A. Nielsen, Phys. Rev. A [**66**]{} 032110 (2002). S. Sachdev, *Quantum Phase Transitions*, Cambridge University Press, Cambridge, UK (2000). X. G. Wen, *Quantum Field Theory of Many-Body Systems*, Oxford University, New York (2004). R. Zinke, J. Richter and S.-L. Drechsler, J. Phys.: Condens. Matter [**22**]{} 446002 (2010).
--- abstract: 'A phenomenological model for the calculation of reduction probabilities of a superposition of several states is presented. The approach is based solely on the idea that quantum state reduction has its origin in a mutual physical interaction between the states. The model is explicitly worked out for the gravitational reduction hypothesis of Diósi and Penrose. It agrees with the projection postulate for typical quantum mechanical experiments and predicts regimes in which other behavior could be observed. An outlook is given on how the new effects could possibly become of interest for biology. For verification a feasible quantum optical experiment is proposed. The approach is analyzed from the viewpoint of quantum non-locality, in particular its consequences for signaling.' author: - 'Garrelt Quandt-Wiese' title: 'Can the study of reduction probabilities reveal news about the nature of quantum state reduction?' --- Introduction {#intro} ============ A fundamental issue of quantum theory is the question of the reduction of the state-vector, also known as the collapse of the wave-function. A key problem in establishing a theory of quantum state reduction is the difficulty of obtaining experimental facts for developing and verifying a theoretical approach. Our experimental knowledge about quantum state reduction can be characterized by the following issues: 1. The measurement process forces a reduction of the state-vector towards an eigenvector of the operator describing the measurement process. 2. The reduction probabilities towards these eigenvectors are given by the projection postulate [@Neumann]. 3. The nonlocal nature of quantum state reduction is demonstrated by Bell-type experiments [@Gisin2008b]. The question of how much mass can be involved in a superposition before it decays by state reduction is so far open. A main problem is to distinguish the real reduction phenomenon from decoherence.
A lower limit on how much mass can be involved in a superposition can be estimated from recent experiments demonstrating e.g. quantum superpositions of fullerenes [@Zeilinger] or superconducting currents [@Lukens]. Due to the sparse experimental facts about quantum state reduction, a variety of different theoretical approaches has been developed in the last decades. Several of them try to explain state reduction by physical mechanisms such as uncertainties of space-time [@Milburn; @Diosi1989; @Penrose1996]. Others, such as the GRW approach, introduce a spontaneous localization process without having a concrete physical mechanism in mind [@Ghirardi1986; @Ghirardi1990]. Parallel to the theoretical activities, many experimental proposals have been developed, benefiting from recent technological progress [@Bouwmeester; @Christian; @Penrose1998; @vanWezel; @Karolyhazi; @Pearl; @Lamine; @Power; @Bose; @Henkel; @Simon; @Amelino]. But with none of them has it so far been possible to bring more light into the subject. The concern of this work is to stimulate the research on quantum state reduction by proposing new experiments. This is done by investigating the idea that state reduction has its origin in some form of mutual physical interaction between the states of the superposition. The investigation is carried out with the help of a phenomenological model, which aims at the calculation of reduction probabilities. The model’s approach is to calibrate it to the projection postulate for superpositions of two states and then to study its behavior for superpositions of more states. Although the model is explicitly worked out for the Diósi-Penrose approach of gravity-induced quantum state reduction, other approaches for the physical interaction between the states can be plugged into the model as well. In Chapter \[Model\] the phenomenological model is introduced and the Diósi-Penrose approach of gravity-induced quantum state reduction is recapitulated.
The application of the phenomenological model to thought experiments predicts regimes in which behavior deviating from the projection postulate could be observed. In Chapter \[DisQM\] the question is discussed whether the model can explain the reduction behavior of typical quantum mechanical experiments, in which deviations from the projection postulate have not been observed so far. This discussion is based on an analysis of the decay behavior of solid states in a quantum superposition. In Chapter \[Experiment\] the question is followed up whether it is possible to verify the predictions of the model with current state of the art technology. This is done by proposing and analyzing a concrete quantum optical experiment. Chapter \[Biology\] gives an outlook on the possible role of quantum state reduction in biology. Finally the approach is analyzed from the viewpoint of quantum non-locality, in particular its consequences for signaling. Model {#Model} ===== In the derivation of the following phenomenological model it is assumed that, according to Penrose [@Penrose1996; @Penrose1994], state evolution can be described by a formal sequence of U- and R-processes, where the U-process is the unitary evolution of the state vector, described e.g. by the Schrödinger equation, and the R-process causes the reduction of the state vector. Furthermore, it is assumed that there exists a statistical process which triggers the R-processes on quantum superpositions. This process is denoted here as the “reduction triggering process”. The lifetimes of quantum superpositions, predicted e.g. by the Diósi-Penrose approach, are correlated to this process in the following way. The lifetime $\tau$ corresponds to a decay rate ${\dot p}_{decay}$ via ${\dot p}_{decay}=1/\tau$, where the decay rate ${\dot p}_{decay}$ describes the probability $\Delta p$ that the reduction triggering process triggers an R-process within a given time interval $\Delta t$ (${\dot p}_{decay}=\Delta p/\Delta t$).
In the Diósi-Penrose approach the derived decay rate does not distinguish whether the superposition decays towards state 1 or state 2. The basic idea of the following phenomenological model is to introduce a “direction” for the reduction triggering process, i.e. to write down separate trigger rates for stimulating a decay of state 1 towards state 2 (${\dot p}_{decay}^{\, 1 \to 2}$) and vice versa (${\dot p}_{decay}^{\, 2 \to 1}$). But before developing the model in detail, the basics of the Diósi-Penrose approach shall be recapitulated. ![Thought experiment for generating a quantum superposition of a macroscopic rigid mass at two different locations. If the diffracted photon is measured by the detector, the position of the mass is shifted by a small distance.[]{data-label="fig:PenroseExp"}](PenroseExp.eps) In Penrose’s derivation the hypothesis of gravity-induced quantum state reduction is a manifestation of the incompatibility between general relativity and the unitary time evolution of quantum physics [@Penrose1996; @Penrose1994]. Penrose’s basic idea can be explained with the thought experiment shown in Figure \[fig:PenroseExp\]. Depending on whether the diffracted photon is detected by the detector at the right, a rigid mass inside the detector is shifted by a small distance or not. Following the unitary evolution of the Schrödinger equation, the system evolves in this experiment into a superposition of two macroscopic states corresponding to the shifted and unshifted mass. According to the theory of general relativity, the superposed macroscopic states have slightly different space-time geometries, which means that a clock at the same position runs with slightly different speeds. Penrose argues that due to this the time-translation operator for the superposed space-times involves an inherent ill-definedness, leading to an essential uncertainty in the energy of the superposed states, which finally results in a decay of the superposition towards one of the states.
The expression $$\label{form:EG12} E_{G \, 1,2}=\xi \, G \int d^3 \vec{x} \; d^3 \vec{y} \frac{(\rho_1(\vec{x})-\rho_2(\vec{x}))(\rho_1(\vec{y})-\rho_2(\vec{y}))}{|\vec{x}-\vec{y}|}$$ ($G =$ gravitational constant, $\rho_i(\vec{x}) =$ mass-density distribution of state $i$, $\xi =$ dimensionless parameter, which is expected to be of order 1[^1] [@Christian]) defines a measure of how much the space-time geometries of the states 1 and 2 differ from each other [@Diosi1989; @Penrose1996; @Diosi2005]. If the superposition consists of a rigid mass at different locations, equation (\[form:EG12\]) expresses the mechanical work which is needed to separate the masses from each other under the assumption that gravitation acts between them. The stability of the superposition is expressed by the decay rate $$\label{form:decrate-EG12} {\dot p}_{decay}=\frac{E_{G \, 1,2}}{\hbar}$$ or its corresponding lifetime $\tau=1/{\dot p}_{decay}$. A derivation of equations (\[form:EG12\]) and (\[form:decrate-EG12\]) based on the argumentation above is given in Appendix \[AppendixA\]. For superpositions of atoms and molecules the Diósi-Penrose hypothesis predicts, consistently with quantum theory, extremely long lifetimes. Lifetimes of the order of seconds are predicted for masses and lengths of the order of the bacterial scale (microns). With current state of the art technology it has so far not been possible to study the decay of quantum superpositions involving masses of this order. Concrete proposals for checking the Diósi-Penrose hypothesis were developed by several authors [@Bouwmeester; @Christian; @Penrose1998; @vanWezel]. ![Distribution of the wave-functions ($| \Psi_{System} |^2$) in configuration space for the experiment of Figure 1 shortly after the photon was detected.
The wave-function is localized at two points corresponding to the states “photon detected” and “photon not detected”.[]{data-label="fig:ConfigSpace"}](ConfigSpace.eps) In the following discussion of the phenomenological reduction model a special representation of the wave-function is used, which is shown in Figure \[fig:ConfigSpace\]: This figure shows schematically the distribution of the wave-function in configuration space for the experiment of Figure \[fig:PenroseExp\] shortly after the photon has reached the detector. The wave-function $\Psi_{System}$ always describes the whole system, which in the experiment of Figure \[fig:PenroseExp\] consists of photon, aperture and detector. The configuration space on which the wave-function $\Psi_{System}$ is defined is given for a many-particle system mainly by the positions of all participating particles $(\vec{x_1},\ldots,\vec{x_n})$. The configuration space shall always be chosen in such a way that a classical state corresponds to a wave-function $\Psi_{System}$ which is localized around one point in the space. The quantum superposition evolving in the experiment of Figure \[fig:PenroseExp\] is then represented by a wave-function localized at two well-distinguished points corresponding to the states “photon detected” and “photon not detected” respectively, as indicated at the right of Figure \[fig:ConfigSpace\]. Both axes of Figure \[fig:ConfigSpace\] (and of the following Figures \[fig:ThoughtExp\], \[fig:Transition\] and \[fig:StatesCell\]) represent the almost infinite dimensions of the configuration space. Since each point of the configuration space represents a classical state with a well-defined mass-density distribution $\rho(\vec{x})$, the expression $E_{G \, 1,2}$ (\[form:EG12\]) can be calculated for each pair of points in the configuration space, as indicated in Figure \[fig:ConfigSpace\] for states 1 and 2.
By integrating $|\Psi_{System}|^2$ in the surroundings of states 1 and 2, the amounts of their amplitudes $|c_i|^2$ can be defined as shown in Figure \[fig:ConfigSpace\]. For the state amplitudes $|c_1|^2$ and $|c_2|^2$ the normalization $|c_1|^2+|c_2|^2=1$ holds. From the projection postulate the reduction probabilities towards states 1 and 2 are expected to be $|c_1|^2$ and $|c_2|^2$ respectively. Let’s now return to the derivation of the reduction model. The basic idea of the model is to split the decay rate (\[form:decrate-EG12\]), describing the probability for triggering an R-process, into two rates specifying the direction of the decay like $$\label{form:splitDecayRate} {\dot p}_{decay} \Rightarrow {\dot p}_{trigger}^{\, 1 \to 2} + {\dot p}_{trigger}^{\, 2 \to 1} \; .$$ Under the assumption that the direction of the reduction triggering process determines the final outcome of the experiment, i.e. that a triggering of a decay of state 1 towards state 2 (${\dot p}_{trigger}^{\, 1 \to 2}$) leads finally to a complete vanishing of state 1 and a reduction towards state 2, ${\dot p}_{trigger}^{\, 1 \to 2}$ and ${\dot p}_{trigger}^{\, 2 \to 1}$ have to be chosen as $$\label{form:approachPtrigger12} {\dot p}_{trigger}^{\, 1 \to 2}=\frac{E_{G \, 1,2}}{\hbar}|c_2|^2, \quad {\dot p}_{trigger}^{\, 2 \to 1}=\frac{E_{G \, 1,2}}{\hbar}|c_1|^2$$ to calibrate the model to the predictions of the projection postulate. For a quantum superposition of more than two states the trigger rates (\[form:approachPtrigger12\]) can be generalized as $$\label{form:approachPtriggerIJ} \hbar{\dot p}_{trigger}^{\, i \to j}=E_{G \, i,j}|c_j|^2 \; ,$$ where $E_{G \, i,j}$ is the generalization of expression (\[form:EG12\]) for a pair of states $i$ and $j$. Expression (\[form:approachPtriggerIJ\]) is interpreted in the following as the probability rate for triggering a decay of state $i$ towards state $j$.
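The two-state calibration can be checked in a few lines. In the sketch below (illustrative numbers, units with $\hbar = 1$; the variable names are ours), the probability of ending in state 2, given that some trigger occurs, is the $1 \to 2$ share of the total trigger rate, which reproduces $|c_2|^2$ independently of the coupling $E_{G \, 1,2}$:

```python
# Sketch of the two-state calibration (hbar = 1; E_G and c2_sq are
# arbitrary illustrative values, not numbers from the text).
E_G = 3.0                        # coupling E_{G 1,2}
c2_sq = 0.3                      # |c_2|^2, so |c_1|^2 = 0.7
rate_12 = E_G * c2_sq            # trigger rate 1 -> 2 (collapse onto state 2)
rate_21 = E_G * (1.0 - c2_sq)    # trigger rate 2 -> 1 (collapse onto state 1)
total_rate = rate_12 + rate_21   # equals E_G, the Diosi-Penrose decay rate
p_state2 = rate_12 / total_rate  # equals |c_2|^2: the projection postulate
```

The split thus preserves the total Diósi-Penrose decay rate while attaching a Born-rule weight to each direction.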
Before calculating the final outcome of the experiment with expression (\[form:approachPtriggerIJ\]), a physical interpretation of the chosen approach shall be presented: Summing all trigger rates (\[form:approachPtriggerIJ\]) which trigger the decay of a certain state $i$, one can define a decay rate for this state as[^2] $$\label{form:PdecayI} \hbar{\dot p}_{decay}^{\, i}=\sum_{j\neq i}\hbar{\dot p}_{trigger}^{\, i \to j}=\sum_{j}E_{G \, i,j}|c_j|^2 \; .$$ With the definition of the gravitational potential of a state $i$ $$\label{form:PhiI} \phi_{i}(\vec{x})=-G \int d^{3}\vec{y} \frac{\rho_{i}(\vec{y})}{|\vec{x}-\vec{y}|} \; ,$$ and the normalization $\sum_i |c_{i}|^{2}=1$, equation (\[form:PdecayI\]) can be transformed to $$\label{form:PdecayIInterpretation} \hbar{\dot p}_{decay}^{\, i}=\left( \int d^{3} \vec{x} \phi_{mean} (\vec{x}) \rho_{i} (\vec{x}) - \int d^{3} \vec{x} \phi_{i} (\vec{x}) \rho_{i} (\vec{x}) \right) + \left( \int d^{3} \vec{x} \phi_{i} (\vec{x}) \rho_{mean} (\vec{x}) - \sum_{j} |c_{j}|^{2} \int d^{3} \vec{x} \phi_{j} (\vec{x}) \rho_{j} (\vec{x}) \right) \; ,$$ where the mean mass distribution and potential are defined by $$\label{form:PhiMeanRhoMean} \rho_{mean}(\vec{x})=\sum_{i}|c_{i}|^{2}\rho_{i}(\vec{x}),~~~ \phi_{mean}(\vec{x})=-G \sum_{i}|c_{i}|^{2} \int d^{3}\vec{y} \frac{\rho_{i}(\vec{y})}{|\vec{x}-\vec{y}|} \; .$$ With the approximation that the gravitational self-energies of the states are nearly identical ($\int d^{3}\vec{x}\phi_{i}\rho_{i}\approx\int d^{3}\vec{x}\phi_{j}\rho_{j}$), expression (\[form:PdecayIInterpretation\]) can be simplified to $$\label{form:PdecayIApproximation} \hbar{\dot p}_{decay}^{\, i} \approx 2\left( \int d^{3} \vec{x} \phi_{mean} (\vec{x}) \rho_{i} (\vec{x}) - \int d^{3} \vec{x} \phi_{i} (\vec{x}) \rho_{i} (\vec{x}) \right) \; .$$ This yields the physical interpretation that the decay rate of a state $i$ is given by the energy difference of this state in the mean gravitational potential of the superposition
and its own gravitational potential. For a superposition of two states where $|c_{1}|^{2}$ is much bigger than $|c_{2}|^{2}$, the mean potential is close to the potential of state 1 and its decay probability is small. For state 2 one gets a big energy difference and a high decay rate. Consequently the superposition mostly decays towards state 1. Let’s now continue with the calculation of the final outcome of the experiment. From the assumption that the reduction triggering process ($\hbar{\dot p}_{trigger}^{\, i \to j}=E_{G \, i,j}|c_j|^2$) induces a reduction stream from state $i$ towards state $j$, which might be defined as $J_{i \to j} = \frac{d}{dt} |c_{j}|^{2} - \frac{d}{dt} |c_{i}|^{2}$, it follows that the reduction streams of all other states $k$ towards state $j$ also become bigger than 0 ($J_{k \to j} > 0$), since $\frac{d}{dt} |c_{j}|^{2}$ is bigger than 0 and $\frac{d}{dt} |c_{k}|^{2}$ equals 0. Since there is a physical interaction between these states, described by the matrix element $E_{G \, k,j}$, it is assumed that this net stream from state $k$ towards state $j$ triggers, like the reduction triggering process $\hbar{\dot p}_{trigger}^{\, k \to j}$, a reduction stream from state $k$ towards state $j$. This means that the initial trigger event determines the final outcome of the experiment[^3].
By summing up all trigger rates towards a state $j$ one can define a reduction rate for this state, describing all trigger events leading to a reduction towards this state, like $$\label{form:PreduceJ} \hbar{\dot p}_{reduce}^{\;j}=\sum_{i\neq j}\hbar{\dot p}_{trigger}^{i \to j}=\left (\sum_{i}E_{G \, i,j} \right) |c_j|^2 \; .$$ By summing up all scenarios which lead finally to a reduction towards state $j$, the reduction probability $p_j$ can be calculated as $$\label{form:PJexact} p_j= \int^{\infty}_{t_0} dt \; p_{sup \; stable} (t) \, {\dot p}_{reduce}^{\;j}(t) \; ; ~~~ p_{sup \; stable}(t) = e^{-\int^{t}_{t_0} dt' \sum_i \dot p_{decay}^{\;i}(t')} \; ,$$ where $p_{sup \; stable} (t)$ is the probability that the quantum superposition stays stable until the time $t$ and $t_0$ is the time when the states of the superposition start to separate from each other. If the time dependencies of all couplings $E_{G \, i,j}(t)$ are equal, the calculation of equation (\[form:PJexact\]) simplifies to $$\label{form:PJapproximation} p_j \propto \left (\sum_{i}E_{G \, i,j} \right) |c_j|^2 \; ,$$ where the proportionality constant of equation (\[form:PJapproximation\]) can be determined from the normalization $\sum_{j}p_j = 1$. An important property of the derived model is that the predicted reduction probabilities do not depend on the coherence of the superposed states, since $p_j$ in equation (\[form:PJapproximation\]) depends only on the amounts of the state amplitudes $|c_j|^2$. A more general argument for the independence of the reduction behavior from decoherence is simply the fact that the Diósi-Penrose approach predicts state reduction in regimes where coherence between the states is normally lost. Since reduction behavior according to the projection postulate takes place in a regime where coherence is lost, it is obvious that reduction behavior different from the projection postulate (predicted by the model) is also not impacted by decoherence.
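Equation (\[form:PJapproximation\]) can be evaluated directly. The sketch below (illustrative numpy code with made-up amplitudes, not from the paper) confirms a statement used repeatedly in what follows: if all couplings $E_{G \, i,j}$ are equal, the projection postulate is recovered for an arbitrary amplitude distribution.

```python
import numpy as np

def reduction_probs(E, c_sq):
    # p_j proportional to (sum_i E_{G i,j}) |c_j|^2, normalized to 1
    w = E.sum(axis=0) * np.asarray(c_sq)
    return w / w.sum()

# Equal couplings between all four states: every column sum of E is the
# same, so the weights are proportional to |c_j|^2 alone.
E_equal = 2.0 * (np.ones((4, 4)) - np.eye(4))
p = reduction_probs(E_equal, [0.4, 0.3, 0.2, 0.1])   # -> [0.4, 0.3, 0.2, 0.1]
```

Deviations from the projection postulate can therefore appear only when the couplings between the states differ, as in the thought experiments discussed next.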
![Comparison of two thought experiments generating superpositions of four macroscopic states. At experiment b) the three lower detectors of experiment a) are replaced by MDD conserving detectors, which leads to a changed concurrency situation between the states, as indicated in the lower part of the figure.[]{data-label="fig:ThoughtExp"}](ThoughtExp.eps) Let’s now turn to the discussion of thought experiments. In the upper diagram of Figure \[fig:ThoughtExp\]a an experiment is shown in which a single photon is split into four beams, where the photon is detected at each beam by the same kind of detector as in Figure \[fig:PenroseExp\]. The matrix $E_{G \, i,j}$ for this experiment is given by $$\label{form:MatrixA} E_{G \, i,j}(t)= \left[ \begin{array}{cccc} 0 & 2E_G (t)\; & 2E_G (t)\; & 2E_G (t)\; \\ 2E_G (t)\; & 0 & 2E_G (t)\; & 2E_G (t)\; \\ 2E_G (t)\; & 2E_G (t)\; & 0 & 2E_G (t)\; \\ 2E_G (t)\; & 2E_G (t)\; & 2E_G (t)\; & 0 \end{array} \right] \; ,$$ where $E_G (t)$ is the evaluation of $E_{G \, 1,2}$ (equation (\[form:EG12\])) for a single detector between the states “photon detected” and “no photon detected”. $E_G (t)$ is zero when the photon enters the mirrors and reaches a constant value after the detector has shifted the position of its mass. By inserting matrix (\[form:MatrixA\]) into equation (\[form:PJapproximation\]) one gets the reduction probabilities predicted by the projection postulate, $p_i = |c_i|^2$. From equation (\[form:PJapproximation\]) one can easily see that the projection postulate is always reproduced if the couplings $E_{G \, i,j}$ between the superposed states are all equal. In Figure \[fig:ThoughtExp\]b the experiment of Figure \[fig:ThoughtExp\]a is modified. The three lower detectors are replaced by so-called “mass-density distribution conserving detectors” (MDD conserving detectors), which means that these detectors do not change their mass-density distributions during the detection process.
But nevertheless these detectors shall store the information of whether a photon was detected or not persistently, so that this information can be read after the decay of the superposition has taken place. This persistent storage of information inside the detector is indicated in Figure \[fig:ThoughtExp\]b by the switches, which can have the positions “detected” and “not detected”. In Chapter \[Experiment\] the question will be pursued whether it is possible to construct such an MDD conserving detector. The matrix $E_{G \, i,j}$ of the experiment of Figure \[fig:ThoughtExp\]b is given by $$\label{form:MatrixB} E_{G \, i,j}(t)= \left[ \begin{array}{cccc} 0 & E_G (t)\; & E_G (t)\; & E_G (t)\; \\ E_G (t)\; & 0 & 0 & 0 \\ E_G (t)\; & 0 & 0 & 0 \\ E_G (t)\; & 0 & 0 & 0 \end{array} \right] \; .$$ The difference between the coupling matrices of experiments \[fig:ThoughtExp\]a and \[fig:ThoughtExp\]b is visualized in the lower part of Figure \[fig:ThoughtExp\], which shows that in experiment \[fig:ThoughtExp\]b there are no couplings between the states 2, 3 and 4 anymore ($E_{G \, i,j}=0$). With result (\[form:PJapproximation\]) the probability for detecting the photon at detector 1 is found to be $p_1 = 0.5$ if the photon intensity is split symmetrically at the mirrors ($|c_1|^2 = |c_2|^2 = ...~= 0.25$). For the other detectors one gets $p_2 = p_3 = p_4 = \frac{1}{3} \cdot 0.5$. But since the couplings between the states 2, 3 and 4 are zero ($E_{G \, i,j}=0$), it is expected that the triggering of a reduction event from state 1 towards e.g. state 2 causes no reduction stream from the other states 3 and 4 towards state 2. Furthermore, the initial reduction stream $J_{1 \to 2}$ has the effect that $J_{1 \to 3}$ and $J_{1 \to 4}$ are also bigger than zero[^4], which induces a decay of state 1 towards the states 3 and 4 as well. The final result is therefore expected to be a superposition of the states 2, 3 and 4.
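The probabilities just quoted follow from equation (\[form:PJapproximation\]) with the coupling matrices (\[form:MatrixA\]) and (\[form:MatrixB\]); a short numerical sketch (units with $\hbar = E_G = 1$, symmetric splitting):

```python
import numpy as np

# Four-detector thought experiments, symmetric splitting |c_i|^2 = 0.25.
c_sq = np.full(4, 0.25)

# Experiment a): four ordinary detectors, all couplings equal (2 E_G).
E_a = 2.0 * (np.ones((4, 4)) - np.eye(4))
w_a = E_a.sum(axis=0) * c_sq
p_a = w_a / w_a.sum()            # -> [0.25, 0.25, 0.25, 0.25]

# Experiment b): detectors 2-4 are MDD conserving, so only state 1
# couples to the states 2, 3 and 4 (coupling E_G each).
E_b = np.zeros((4, 4))
E_b[0, 1:] = E_b[1:, 0] = 1.0
w_b = E_b.sum(axis=0) * c_sq
p_b = w_b / w_b.sum()            # -> [0.5, 1/6, 1/6, 1/6]
```

Experiment a) reproduces the projection postulate, while in experiment b) the formula assigns weight 0.5 to state 1 and $\frac{1}{3}\cdot 0.5$ to each of the states 2, 3 and 4, matching the numbers in the text.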
The probability for this superposition is also 0.5 ($p_{sup \, 2,3,4} = 0.5$). The deviation from the projection postulate for the reduction probability of state 1 (0.5 instead of 0.25) becomes more dramatic if one evaluates an analogous experiment with 8 detectors, seven of which are MDD conserving detectors. The reduction probability for state 1 is again 0.5. The same result is also obtained for experiments with 16, 32, 64 etc. detectors. This 50%-rule can also be seen directly from the structure of the couplings (Figure \[fig:ThoughtExp\]b). Here one has six non-vanishing trigger rates (${\dot p}_{trigger}^{2 \to 1}$, ${\dot p}_{trigger}^{3 \to 1}$, ${\dot p}_{trigger}^{4 \to 1}$, ${\dot p}_{trigger}^{1 \to 2}$, ${\dot p}_{trigger}^{1 \to 3}$, ${\dot p}_{trigger}^{1 \to 4}$), which all have the same amount, and half of them reduce the superposition towards state 1 while the other half reduce it towards the superposition of the other states. This dramatic change in the reduction behavior provokes questions which define the guideline of the following chapters. 1. Why have such significant effects not been observed so far? Or: how far is the proposed model consistent with known experimental results? This is the subject of the following chapter. 2. How can the predicted behavior of the reduction model be verified by experiments? This question is the subject of Chapter \[Experiment\], in which a concrete experiment is proposed. 3. Does the dramatic increase of the reduction probability towards a distinguished state play a role in nature? This question is picked up in Chapter \[Biology\].
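The 50%-rule can be checked for any number of detectors; a sketch with one ordinary detector (state 1) and $N-1$ MDD conserving detectors under symmetric splitting, $|c_i|^2 = 1/N$:

```python
import numpy as np

# 50%-rule sketch: only state 1 couples to the N-1 other states
# (couplings in units of E_G), so by eq. (form:PJapproximation) the
# weight of state 1 is (N-1)/N and the combined weight of the others
# is also (N-1)/N -- hence p_1 = 0.5 for every N.
p1_values = []
for N in (4, 8, 16, 32, 64):
    E = np.zeros((N, N))
    E[0, 1:] = E[1:, 0] = 1.0
    w = E.sum(axis=0) / N          # (sum_i E_{G i,j}) |c_j|^2 with |c_j|^2 = 1/N
    p1_values.append(w[0] / w.sum())   # -> 0.5, independently of N
```

The deviation from the projection postulate ($p_1 = 1/N$) thus grows without bound as detectors are added.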
![Dependency of $E_{G \, 1,2}$ for a superposed amorphous solid state on the displacement $\Delta x$ between the two superposed states.[]{data-label="fig:SolidState"}](SolidState.eps) Discussion of standard quantum mechanical experiments {#DisQM} ===================================================== To get an understanding of how the proposed reduction model behaves in typical quantum mechanical experiments, it is necessary to understand the decay behavior of a solid state in a quantum superposition. Such a quantum superposition occurs e.g. in the experiment of Figure \[fig:PenroseExp\], where the mass inside the detector is shifted by a small distance $\Delta x$, depending on whether the photon is detected or not. The vibration of the nuclei around their fixed positions in a solid state leads to a broadening of the extension of the wave-function $\Psi_{System}$ in configuration space. In the application of the Diósi-Penrose approach, the mass-density distribution $\rho (\vec{x})$ in expression (\[form:EG12\]) has to be determined for the broadened wave-function $\Psi_{System}$. This means that the mass-density distribution $\rho (\vec{x})$ is approximately given by Gaussian distributions around the fixed positions of the nuclei, where the mean diameter of the extension of the nuclei $d_{nucl}$ is mainly determined by acoustical phonons and can be estimated as $d_{nucl} \approx \sqrt{2kT / m_{nucl}}\,(l_{lattice} / v_{phonon})$ ($k$ = Boltzmann constant, $T$ = temperature, $m_{nucl}$ = mass of nucleus, $l_{lattice}$ = lattice constant, $v_{phonon}$ = velocity of the acoustical phonons). For iron at room temperature $d_{nucl}$ is roughly $d_{nucl} \approx 0.2 \cdot 10^{-10}$ m. For an amorphous solid state it is expected that the quantity $E_{G \, 1,2}$ reaches a constant value after all its nuclei are separated far enough from each other, i.e. for $\Delta x \gtrsim d_{nucl}$, as schematically shown in Figure \[fig:SolidState\].
This value, which shall be denoted here as the microscopic contribution to $E_{G \, 1,2}$, is given by the gravitational energy that is needed to separate all the nuclei from each other[^5]. The microscopic contribution is then just given by the number of nuclei $N_{nucl}$ multiplied by the energy $E_{G \, nucl}$ needed to bring a single nucleus into a quantum superposition of spatially far separated states $$\label{form:SolidMicrContr} E_{G \, i,j} = N_{nucl} E_{G \, nucl} \; .$$ With the approximation that the density distribution of one nucleus inside the sphere with diameter $d_{nucl}$ is constant, the energy $E_{G \, nucl}$ is given by $$\label{form:SolidEGnucl} E_{G \, nucl} = \frac{12}{5} \frac{G m^{2}_{nucl}}{d_{nucl}} \; .$$ For 100g iron at room temperature equation (\[form:SolidMicrContr\]) predicts a decay rate of roughly $E_{G \, 1,2} / \hbar \approx 10^9$s$^{-1}$. For small displacements ($\Delta x \ll d_{nucl}$) $E_{G \, 1,2}$ scales with $\Delta x$ like (see Figure \[fig:SolidState\]) $$\label{form:SolidSmallDisplace} E_{G \, 1,2} \approx \alpha N_{nucl} E_{G \, nucl} (\Delta x / d_{nucl} )^{2} \; ,$$ where $\alpha$ is roughly 5. For bigger displacements ($\Delta x \gg d_{nucl}$) a macroscopic contribution has to be added on top of the microscopic one, whose physical origin is that the centers of mass of the two states are separated from each other. This contribution depends on the shape of the solid and the direction of the displacement. We give two examples: a) For a long rod with length $l$ and diameter $d$ ($l \gg d$), which is displaced along its axis, $E_{G \, 1,2}$ scales for $\Delta x \ll l$ like $$\label{form:SolidBigDisplace} E_{G \, 1,2} \approx \beta G d^{3} \rho^{2}_{macr} \Delta x^{2}$$ ($\rho_{macr}$ = macroscopic averaged density of the solid), where $\beta$ is approximately 5.  b) For a disc ($l \ll d$) $\beta$ becomes significantly lower and the result depends also on the ratio of $l$ and $d$. 
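The order-of-magnitude claims for 100 g of iron can be reproduced in a short sketch; the density of iron, the rod diameter and the shape factor $\beta$ are assumed illustrative values:

```python
# Back-of-envelope check of the decay-rate estimates for 100 g of iron,
# using eqs. (SolidMicrContr), (SolidEGnucl) and (SolidBigDisplace).
G = 6.674e-11           # gravitational constant [m^3 kg^-1 s^-2]
hbar = 1.055e-34        # reduced Planck constant [J s]
m_nucl = 56 * 1.66e-27  # mass of an iron nucleus [kg]
d_nucl = 0.2e-10        # extension of the nuclei [m] (value from the text)

# Microscopic contribution: E_G,nucl per nucleus times the number of nuclei.
E_G_nucl = (12 / 5) * G * m_nucl**2 / d_nucl
N_nucl = 0.1 / m_nucl   # number of nuclei in 100 g of iron
rate_micro = N_nucl * E_G_nucl / hbar
print(rate_micro)       # ~1e9 1/s, as quoted in the text

# Macroscopic contribution for a long rod displaced along its axis
# (assumed values: shape factor beta ~ 5, diameter 1 cm, density of iron).
beta, d_rod, rho = 5.0, 0.01, 7870.0
# Displacement at which the macroscopic part equals the microscopic plateau:
dx_crossover = (N_nucl * E_G_nucl / (beta * G * d_rod**3 * rho**2))**0.5
print(dx_crossover)     # ~2e-9 m, i.e. roughly 20e-10 m
```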
For a rod made of 100g iron ($d = 1$cm, $l \approx 16$cm) the macroscopic contribution (\[form:SolidBigDisplace\]) reaches the same decay rate as the microscopic one ($E_{G \, 1,2} / \hbar \approx 10^9 s^{-1}$) for a displacement of roughly $\Delta x \approx 20 \cdot 10^{-10}$m. ![Three types of typical quantum mechanical experiments: a) Diffracted particle is detected by one detector. b) Particle is detected by a continuous medium. c) Particle is detected by two detectors simultaneously.[]{data-label="fig:StandardQM"}](StandardQM.eps) Let’s now turn to the discussion of typical quantum mechanical experiments. In Figure \[fig:StandardQM\] three different types of experiments are shown. In experiment a) the position of a diffracted particle is measured by a detector. According to the discussion of the experiment of Figure \[fig:PenroseExp\] the system turns here into a superposition of two states corresponding to the cases “particle detected” and “particle not detected”, for which the reduction model reproduces by definition (its calibration) the projection postulate. In experiment b) the particle is detected by a continuous medium, a film. This case can be treated as follows. The continuous detection medium can be modeled by an infinite number of small detectors. In this case one gets an infinite number of superposed states, and all off-diagonal elements of the matrix $E_{G \, i,j}$ have the same magnitude, which leads, according to equation (\[form:PJapproximation\]), to the reduction probabilities predicted by the projection postulate. In experiment c) the particle is measured by two detectors simultaneously. Here one has a superposition of three states (state 1 = “particle detected at upper detector”, state 2 = “particle detected at lower detector”, state 3 = “particle not detected”). 
Since the matrix element $E_{G \, 1,2}$ is in this case twice as big as the other two couplings $E_{G \, 1,3}$ and $E_{G \, 2,3}$, the observation of deviations from the projection postulate should be possible. But these deviations are difficult to observe: One reason is that the two detectors would need perfect synchronization in time. The discussion of the decay rates of superposed solids has shown that already very small displacements $\Delta x$ in the order of $d_{nucl} \approx 0.2 \cdot 10^{-10}$m lead to high decay rates in the order of $10^9$s$^{-1}$. The investigation of concrete detectors in the next chapter will show that physical processes accompanying the detection process, such as changing electric fields, can easily cause displacements of this order and even much bigger ones. Therefore, if there is no perfect synchronization in time, the quantum superposition decays stepwise, so that at each step only two states compete, for which the projection postulate is exactly reproduced. Another point is that the ratio between the detection probabilities of the two detectors does not show any deviations from the projection postulate; only the absolute values of the reduction probabilities are different. But this difference is small if the detection zones of the detectors are small compared to the extension of the particle. In the discussion of further types of experiments one should keep in mind that not only the detection processes inside the detector, but also its interaction with the environment causes small displacements of solids. To be specific, this can be the interaction with the detector’s power supply or with the device that finally stores the measurement result, as indicated for the upper detector of experiment c) (Figure \[fig:StandardQM\]). Therefore the discussion whether the result of an experiment is consistent with the proposed model requires a detailed analysis of all processes accompanying the detection process. 
Normally, experimentalists do not care about these processes, which makes a subsequent discussion of experiments difficult. More clarity about whether the proposed reduction model is correct can only be achieved by experiments in which all processes accompanying the detection process are carefully controlled. Such an experiment is proposed in the following chapter. ![Proposed circuit for the detectors of the thought experiments of Figures \[fig:ThoughtExp\]a and \[fig:ThoughtExp\]b. If the piezo control at the bottom is connected by the two switches, the detector is operating in the MDD changing mode, otherwise in the MDD conserving mode.[]{data-label="fig:Detector"}](Detector.eps) Proposed experiment {#Experiment} =================== In this chapter the question is discussed to what extent it is possible to verify the predictions of the model with current state-of-the-art technology. This is done by proposing and investigating a concrete realization of the thought experiments of Figures \[fig:ThoughtExp\]a and \[fig:ThoughtExp\]b. Figure \[fig:Detector\] shows the proposed circuit for the photon detector, which can be operated in both the MDD conserving and the MDD changing mode: If the piezo at the bottom is connected by the two switches to the circuit, the detector is operating in the MDD changing mode, otherwise in the MDD conserving mode. An important strategy of the proposed reduction experiment is that the reduction process itself is part of the experiment and is not induced by an external observation process. This is realized as follows. The information about the photon measurement is stored during the detection process inside the detector and is not forwarded to an external observer. Sufficiently long after the reduction event has taken place, the observer can read out the measurement result from the detector. 
In the circuit of Figure \[fig:Detector\] the photon detection leads to a voltage drop in the lower capacitor, which can be measured afterwards by connecting the voltage meter to the capacitor with the help of the two switches. The photon detection is realized by an avalanche photodiode (APD), which is biased above its breakdown voltage. In this so-called Geiger mode a single photon can generate an avalanche by exciting an electron-hole pair, which leads to a macroscopic current pulse. In the experiment two characteristics of the APD are important: 1. The quantum efficiency, which is the probability to detect the photon. 2. The dark count probability, which is the probability to register a photon without a stimulating photon. The dark count probability can be reduced significantly by operating the APD in the so-called gated mode [@Gisin1989]. Here the bias voltage is kept slightly below the breakdown voltage and raised above the breakdown level only inside a window at which the photon is expected. Quantum efficiency and dark count probability both increase with the voltage at the APD. They can therefore not be adjusted independently of each other. For an InGaAs/InP APD cooled to 77K combinations of 60% and $10^{-4}$ (quantum efficiency, dark count probability) or 10% and $10^{-6}$ are possible [@Gisin1989]. The circuit of Figure \[fig:Detector\] works as follows. At the beginning of the measurement the capacitors are charged to 36V and 29V respectively, where 29V is below and 36V above the breakdown voltage of InGaAs/InP at 77K [@Gisin1989]. The switch connects the APD with the upper capacitor. At the time at which the photon is expected, the switch changes to the lower capacitor of 36V within a time window of 2.6ns (see Figure \[fig:Detector\]). If the photon triggers an avalanche, the voltage of the lower capacitor will decrease due to the avalanche current, which will stop after some time. 
In the MDD changing mode the voltage drop in the lower capacitor also affects the connected piezo, which in turn shifts the position of the connected rigid mass. An MDD changing detector using an APD and a piezo for shifting a mass was already realized and used in another context [@Gisin2008]. ![Complete experimental setup of the proposed experiment. The setup allows toggling between the thought experiments of Figures \[fig:ThoughtExp\]a and \[fig:ThoughtExp\]b by connecting/disconnecting the piezos of the three lower detectors. []{data-label="fig:PropExperiment"}](PropExperiment.eps) Figure \[fig:PropExperiment\] shows the complete experimental setup, which allows toggling between the experiments of Figures \[fig:ThoughtExp\]a and \[fig:ThoughtExp\]b by connecting/disconnecting the piezos of the three lower detectors. The single photons are generated by short laser pulses of 150ps, which are attenuated by 100dB [@Gisin1989]. For the InGaAs/InP APDs a photon wavelength in the infrared of $\lambda = 1.3\mu$m is needed [@Gisin1989]. The probability that the laser emits exactly one photon (single photon efficiency) depends on the attenuation. Cases in which no photon or more than one photon is emitted can be eliminated from the measurement series by evaluating the detection results of all four detectors. The switch for the 2.6ns gate is synchronized with the laser pulse (see Figure \[fig:PropExperiment\]). To eliminate long-term drifts in the comparison of the experiments of Figures \[fig:ThoughtExp\]a and \[fig:ThoughtExp\]b, such as changes of the quantum efficiencies of the APDs or drifting mirror alignments, one can toggle between both configurations by connecting/disconnecting the piezo controls of the three lower detectors at every second measurement. 
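The photon number of an attenuated laser pulse follows Poisson statistics; a minimal sketch with an assumed mean photon number per pulse illustrates why empty and multi-photon pulses dominate the error budget:

```python
import math

# Poisson photon statistics of the attenuated laser pulse. The mean photon
# number mu per pulse is an assumed illustrative value set by the attenuation.
mu = 0.1
p0 = math.exp(-mu)        # probability that no photon is emitted
p1 = mu * math.exp(-mu)   # probability of exactly one photon
p_multi = 1.0 - p0 - p1   # probability of more than one photon

print(p0, p1, p_multi)    # most pulses are empty; multi-photon pulses are rare
```

Pulses with zero or several photons are then sorted out of the measurement series by evaluating the detection results of all four detectors, as described above.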
The accuracy with which the reduction probability $p_1$ for Detector 1 can be measured depends on the number of measurements like $\Delta p_{1} \approx 1.3 \cdot N^{-1/2}_{succ}$, where $N_{succ}$ denotes the number of successful measurements, at which exactly one photon was detected in one of the four detectors. To achieve with a quantum efficiency of the APD of 50% an accuracy of $\Delta p_1 = 10^{-2}$, one needs roughly $3\cdot 10^4$ measurements. The proposed experiment is facilitated by the fact that one does not have to care about the coherence of the superposed quantum states, since the predicted deviations from the projection postulate are not expected to be sensitive to decoherence (see Chapter \[Model\]). In the proposed experiment the trigger rates (\[form:approachPtriggerIJ\]) become significant when the piezo starts to move the mass. At this point the coherence between the four superposed states is already lost, mainly due to the avalanche currents in the detectors. We turn now to the decisive question whether the proposed circuit of Figure \[fig:Detector\] can be designed in such a way that it satisfies approximately the requirements of a MDD conserving detector. The decay rate of the MDD conserving detector $E_{G \, con} / \hbar$, which is given by $E_{G \, i,j}$ between the states “photon detected” and “no photon detected”, has to be significantly lower than the decay rate $E_{G \, cha} / \hbar$ of the MDD changing detector ($E_{G \, con} / E_{G \, cha} \ll 1$). From the discussion in Chapter \[DisQM\] it follows that a decay rate of the MDD changing detector in the order of $E_{G \, cha} / \hbar \approx 10^9 s^{-1}$ can easily be achieved. Several physical effects contribute to the decay rate of the MDD conserving detector. The following effects are only a selection: 1. One effect occurring in the capacitor $C$ of the circuit of Figure \[fig:Detector\] is the compression of its dielectric, which changes slightly due to the voltage drop caused by the avalanche current. 
Assuming that the avalanche current causes a voltage drop from 36 to 29V, one gets for a circular plate capacitor with radius $r$=5cm, a plate distance of $d$=0.1mm and SiO$_{2}$ as dielectric ($\epsilon \approx$ 3.7, compression modulus $E \approx 7.6 \cdot 10^{10}$Nm$^{-2}$), which corresponds to a capacitance of $C \approx 2.6$ nF, a change of the plate distance of roughly $\Delta d \approx 2 \cdot 10^{-15}$m, which is 10,000 times smaller than the extension of the nuclei $d_{nucl} \approx 0.2 \cdot 10^{-10}$m. With equation (\[form:SolidSmallDisplace\]) the contribution of this effect to $E_{G \, con}$ can be estimated as $E_{G \, con} / \hbar \approx 10^{-1}$s$^{-1}$. 2. A further effect occurring in the capacitor is due to the changing number of electrons on its plates caused by the avalanche current, which leads to a small change of the mass density of the plates due to the electrons' masses. The contribution of this effect to $E_{G \, con}$ can be estimated with equation (\[form:EG12\]) as $E_{G \, con} / \hbar \approx 10^{-15}$s$^{-1}$. 3. An effect occurring in the resistor $R$ of the circuit of Figure \[fig:Detector\] is its heating by the avalanche current, which leads via thermal expansion to small displacements of the resistor. Assuming that the resistor is just a wire of Cu (expansion coefficient $\alpha \approx 1.7 \cdot 10^{-5}$K$^{-1}$) with length $l$=10cm and diameter $d$=3mm, which corresponds to a resistance of $R \approx 2.4 \cdot 10^{-4} \Omega$, one gets for a discharge of the capacitor ($C \approx 2.6$nF) from 36 to 29V a change in the resistor’s length of $\Delta l \approx 4 \cdot 10^{-13}$m, which is 50 times smaller than $d_{nucl} \approx 0.2 \cdot 10^{-10}$m. With equation (\[form:SolidSmallDisplace\]) the contribution of this effect to $E_{G \, con}$ leads to a decay rate of $E_{G \, con} / \hbar \approx 10^{5}$s$^{-1}$. 4. 
A further effect is caused by the momentum of the photon, which is transmitted to the detector, leading to a small movement of the detector and a displacement $\Delta x$ increasing linearly with time. The contribution of this effect to $E_{G \, con}$ is, even after one second, still smaller than $E_{G \, con} / \hbar \approx 10^{-20}$s$^{-1}$. This first analysis is encouraging, since the strongest effect, the heating of the resistor, is with a decay rate of $E_{G \, con} / \hbar \approx 10^{5}$s$^{-1}$ still 10,000 times smaller than the assumed decay rate of the MDD changing detector $E_{G \, cha} / \hbar \approx 10^{9}$s$^{-1}$. Nevertheless, a lot of further challenges remain to keep the detector’s decay rate sufficiently small, such as the engineering of the five electronic switches in the circuit of Figure \[fig:Detector\] or keeping the environment's contribution to the decay rate of the detector as small as possible. Another aspect for the dimensioning of the experiment is that the lifetime of the MDD conserving detector (here $\tau_{con} \approx 10\mu$s) has to be significantly bigger than fluctuations in the reaction times of the APDs, which can be defined by the time span between the entering of the photon into the APD and the point in time at which the avalanche current has reached a certain strength. Measurements of such fluctuations are not known to the author and should be checked for the chosen APDs. ![Modification of the experiment of Figure \[fig:ThoughtExp\]b, in which the states 2, 3 and 4 lie so close together in configuration space that they can be treated as one state, as shown at the left part of the figure.[]{data-label="fig:Transition"}](Transition.eps) An issue which will remain open in this work is the question of how far the macroscopic quantum states have to be separated from each other in configuration space so that they can be regarded as separated states in terms of the proposed model, and how their separation can be defined. 
This issue can be explained with the help of the experiment shown in Figure \[fig:Transition\], in which the three MDD conserving detectors of the experiment of Figure \[fig:ThoughtExp\]b are removed. This experiment resembles the one of Figure \[fig:ConfigSpace\]. The only difference is that the photon is distributed over three concrete locations instead of being blurred around the detector. The states 2, 3 and 4 are now so close in configuration space that they can be treated as one state (see Figure \[fig:Transition\]). For the matrix $E_{G \, i,j}$ one therefore has to consider only the competition of two states, which leads to the reduction probabilities of the projection postulate. Hence there has to be a smooth transition of the reduction probabilities of the experiment of Figure \[fig:Transition\] towards the ones of the experiment of Figure \[fig:ThoughtExp\]b, depending on the separation of the states. A possible criterion for this separation could be the number of particles participating in the photon detection process, combined with the distance the particles move during the detection process. For the proposed experiment it is assumed that the electrons moving with the avalanche current from one side of the capacitor to the other cause a sufficient separation of the states. To sum up, the analysis of the experimental proposal has shown that a verification of the predicted effects should be possible with current state-of-the-art technology. Outlook to biology {#Biology} ================== Encouraged by the fact that the reduction behavior of the model is not expected to be sensitive to decoherence, the question shall be investigated whether the predictions of the model can play a role in biology. This investigation is additionally motivated by the fact that the Diósi-Penrose hypothesis predicts longer lifetimes for quantum superpositions in a fluid environment (as e.g. in a cell) than for quantum superpositions of solids, which can be explained as follows. 
Since the particles in a fluid environment move according to Brownian motion, their wave-function will disperse and be blurred after some time. Consequently the microscopic mass-density distribution will become almost flat and will not have the sharp peaks it has in solids. Therefore the microscopic contribution to the decay rates of solids $E_{G \, nucl} \cdot N_{nucl}\;$(see Figure \[fig:SolidState\]), which has its origin in the fact that the nuclei of the superposed states have to be separated from each other, plays no role, which should result in longer lifetimes than in solids. According to the discussion in Chapter \[Model\] the most significant deviation from the projection postulate is expected for a superposition of many states, of which only one state is distinguished by a different mass-density distribution. In the following it will be shown that a cell, e.g. a bacterium, can evolve into such a quantum superposition. According to the Löwdin two-step model for mutations, the modification of a DNA molecule is initiated by a quantum tunneling process of an H-bonded proton between two adjacent sites within a base pair [@Loewdin], leading to the generation of a tautomeric form of a DNA base (e.g. keto guanine $\rightarrow$ enol guanine). In the second step the tautomeric DNA base can lead, at the replication of the DNA strand, to the incorporation of an incorrect base (e.g. enol guanine with keto thymine instead of cytosine). After the tunneling of the proton the wave-function of the whole cell consists of two superposed states, which differ from each other by only one base pair in one DNA molecule. But the superposed states will differ much more from each other when the cell starts to replicate the DNA molecule and to produce proteins from it in its ribosomes. According to the Diósi-Penrose hypothesis the superposition of the cell is at this stage far from the point where a reduction is expected. 
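To get a feeling for the time scales involved, a rough numerical sketch of a Diósi-Penrose estimate for a micrometre-sized water sphere follows; the sphere size and the reuse of the homogeneous-sphere scaling $(12/5)\,G m^2/d$ of equation (\[form:SolidEGnucl\]) are illustrative assumptions:

```python
import math

# Rough Diosi-Penrose estimate for a superposed water sphere of 1 micrometre
# diameter whose two states are separated far from each other, using the
# homogeneous-sphere scaling E_G ~ (12/5) G m^2 / d (assumed applicable here).
G = 6.674e-11      # gravitational constant
hbar = 1.055e-34   # reduced Planck constant

d = 1e-6           # sphere diameter [m]
rho_water = 1000.0 # density of water [kg/m^3]
m = rho_water * (4 / 3) * math.pi * (d / 2)**3

E_G = (12 / 5) * G * m**2 / d
lifetime = hbar / E_G
print(lifetime)    # on the order of seconds
```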
Note that the evaluation of $E_{G \, 1,2}$ for a superposed sphere of water with a diameter of 1$\mu$m, where the two superposed states are separated far from each other, leads to a lifetime on the order of seconds. Since bacteria have extensions of this order, the stability of the superposition is expected to be orders of magnitude above this time scale. The stability of the superposition can be reduced significantly if the protein produced in one of the states is an enzyme that enables the cell to live from a chemical substance (e.g. lactose) available in its environment. This will lead to metabolism between the cell and its environment, which in turn causes movement of masses and therefore leads to a change of the mass-density distribution. If one assumes that the proton tunneling occurs with a small probability and with a constant rate for all sections of the DNA molecule, the amplitude corresponding to the original DNA molecule $|c_1|^2$ will decrease slightly at each proton tunneling. Assuming that the amplitudes of the separated states with the modified DNA molecules are all of the same magnitude, their amplitudes are given by $|c_i|^2 = (1 - |c_1|^2) / n$, where $n$ is the number of proton tunnelings that have occurred. All the states of the superposition will correspond to different types of produced enzymes, where normally only the state with the original DNA molecule will be able to catalyze the chemical substance available in the environment. The matrix $E_{G \, i,j}$ for the superposed states in the cell is visualized in Figure \[fig:StatesCell\] by connecting lines between the states. Since only state 1 with the original DNA molecule leads to metabolism and therefore to a change of the mass-density distribution, all matrix elements vanish except the ones between this state and all other states. These matrix elements are all of the same magnitude. 
From equation (\[form:PJapproximation\]) it follows that the reduction probability of the superposition towards the original state 1 is given by ![Visualization of the matrix elements $E_{G \, i,j}$ for a set of superposed states in a cell by connecting lines between the states. The state in the centre is distinguished from all others by its mass-density distribution due to the catalysis of a chemical reaction. []{data-label="fig:StatesCell"}](StatesCell.eps) $$\label{form:BiologyNormal} p_1 \approx 1 - \frac{1}{n} \cdot \frac{1 - |c_1|^2}{|c_1|^2} \; ,$$ which means that the probability for the superposition to reduce back to the original “good state” (i.e. the state which enables the cell to live from the food of its environment) is almost 100% for big $n$, even if the amplitude $|c_1|^2$ has reduced significantly. The derived model could be suitable to explain the effect of adaptive mutation, first observed on Escherichia coli bacteria [@Cairns]. A non-fermenting strain of Escherichia coli bacteria, i.e. a mutant of Escherichia coli that is not able to live from lactose, shows a significantly increased mutation rate of its DNA molecules towards a lactose-fermenting mutant if one changes the food on its agar plate to lactose only. This mutation effect occurs on non-growing strains, i.e. the bacteria do not replicate themselves during the mutation [@Cairns]. Some attempts were already made to explain this phenomenon by quantum mechanical effects [@McFadden; @Goswani; @Ogryzko]. With the derived reduction model the explanation of adaptive mutation could be as follows. If the food on the agar plate of the bacteria strain is changed to lactose, the state with the unmodified DNA molecule will no longer lead to metabolism and consequently to a change of the mass-density distribution. Instead the state with the lactose-fermenting mutant will be distinguished from all other states. 
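For illustration, equation (\[form:BiologyNormal\]) can be evaluated numerically; the values of $|c_1|^2$ and $n$ below are assumed, not taken from the text:

```python
# Illustration of eq. (BiologyNormal): probability to reduce back to the
# original "good state" 1. The values of |c_1|^2 and n are assumed.
c1_sq = 0.9  # remaining weight of the original DNA state
n = 100      # number of proton tunnelings that have occurred

p_1 = 1 - (1 / n) * (1 - c1_sq) / c1_sq
p_projection = c1_sq  # value expected from the projection postulate

print(p_1)            # ~0.999: almost certain return to the good state
print(p_projection)   # 0.9
```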
From equation (\[form:PJapproximation\]) it follows that the reduction probability towards this state is given for big $n$ by $$\label{form:BiologyMutate} p_i \approx \frac{1 - |c_1|^2}{2 - |c_1|^2} \; ,$$ which is a significantly non-vanishing probability that is for big $n$ orders of magnitude bigger than the probability expected from the projection postulate, $p_i = |c_i|^2$. This behaviour might be termed “selective reduction”, since the state that is best for the survival of the cell gets a strongly increased reduction probability. Further ideas on how selective reduction can play a role in biology were published by the author in reference [@QuandtWiese]. Discussion {#Discussion} ========== The following discussion deals with two issues. First, it is attempted to define a generalized experimental agenda which is independent of the specific approach of the phenomenological model and of the assumption that the physical couplings are governed by gravity. It shall be based only on the thesis that quantum state reduction has its origin in a mutual physical interaction between the states. Second, the approach is analyzed from the viewpoint of quantum non-locality, specifically its consequences for signaling. A conclusion that can be extracted from the analysis of the phenomenological model is that changes of the strengths of the physical couplings between the states should lead to corresponding changes of the reduction probabilities. An experimental agenda verifying this thesis would be characterized at least by the following points: 1. Superpositions of more than two states have to be investigated (otherwise condition 2 can’t be fulfilled). 2. The coupling strengths between the states have to be chosen significantly different. It is recommended to compare reduction probabilities of experiments with equal and with significantly different coupling strengths, as proposed in Figure \[fig:ThoughtExp\]. 
Besides gravity-induced quantum state reduction, other theses for the physical origin of the couplings $E_{G \, i,j}$ or modifications of the Diósi-Penrose approach should be tested[^6]. 3. The different coupling strengths become relevant in a regime where the states have already lost their coherence to each other. Experiments fulfilling these conditions have not been performed so far. It is worth mentioning that the proposed program should also check for smaller changes of the reduction probabilities than predicted by the model. This requires long measurement times, since the accuracy of probability measurements $\Delta p$ scales with the number of runs $N$ like $\Delta p \approx N^{-1/2}$. Point 3 of the agenda might look unusual to many physicists. Normally it is not expected that quantum effects are still relevant when the states have lost their coherence to each other. Point 3 can simply be justified by the experimental fact that it has so far not been possible to observe state reduction in a quantum interference experiment. Therefore state reduction has to take place in a regime where quantum coherence between the states is already lost. As already stated, the phenomenological model predicts no dependence of the reduction probabilities on the coherence between the states; they depend only on the amplitudes of the states (see equation \[form:PJapproximation\]). In the proposed experiment of Chapter \[Experiment\] the four superposed states have also lost their coherence when the rigid mass inside the detectors is moved, mainly due to the avalanche currents in the APDs. A question of general concern is whether quantum state reduction by a mutual physical interaction between the states enables signalling via quantum non-locality. From the conclusion that this thesis leads to behavior deviating from the projection postulate, one can easily construct experiments which enable signaling via quantum non-locality. 
This shall be shown exemplarily by a modification of the thought experiment of Figure \[fig:ThoughtExp\]. By moving the upper detector in Figure \[fig:ThoughtExp\] spatially far away from the three lower ones, information can be exchanged between the two locations by changing the modes of the three lower detectors between MDD changing and MDD conserving. By evaluating several measurements in the upper detector, the mode of the three lower detectors can be determined via the changed reduction probabilities for the upper detector. From the fact that Bell-type experiments demonstrate faster-than-light correlations of separated measurements via the quantum non-locality of state reduction [@Gisin2008b], it follows that the proposed reduction model comes into conflict with faster-than-light signalling. This follows also if one interprets the state amplitudes of the phenomenological model $|c_{i}|$ as global properties of the system and assumes that these amplitudes change abruptly when the decay of the superposition is triggered. This also does not change if one takes into account that the state amplitudes $|c_{i}|$ might not change abruptly but with some time dependency. Also the experiment's restriction that one cannot forward the detector's results instantaneously to an observer, but has to wait until the superposition has decayed completely before reading out the detector's result, does not change the conclusion. In this context the question is of interest whether the proposed reduction model is in conflict with existing proofs of the impossibility of signalling via quantum non-locality. These proofs investigate the question whether a measurement on a quantum state can influence the result of another spatially far separated measurement [@Ghirardi1980]. The impossibility of an influence on the second measurement is based on the fact that the measurement operators of the spatially separated measurements commute with each other [@Ghirardi1980]. 
These proofs cannot be applied to the proposed experiments of this work. Here one does not have a sequence of two measurements corresponding to two separated state reductions, but a single spatially distributed measurement (state reduction). The question whether state reduction by a physical interaction between the states is generally in conflict with superluminal signalling requires the development of a Lorentz invariant reduction model based on this idea. The decisive question is whether this model allows, as the presented model does, behaviour deviating from the projection postulate. If this is the case, the model has to predict for the experiment of Figure \[fig:ThoughtExp\]b reduction probabilities which converge to the projection postulate if the detectors are far separated from each other. The development of such a model goes beyond the scope of this paper and should be the subject of further investigations. To sum up, a careful execution of the proposed experimental program, characterized by the above three points, which checks reduction probabilities also for small deviations from the projection postulate and tests, besides the gravitational hypothesis, further hypotheses for the physical couplings, will give us a deeper insight into the nature of quantum state reduction. A negative result would establish the projection postulate as a fundamental principle of nature. It would be a hint against the thesis that quantum state reduction has its origin in a physical interaction between the superposed states and show that state reduction is governed by other principles. A positive result would reveal a new aspect of quantum state reduction and would strongly stimulate further experimental and theoretical research. 
Appendix: Derivation of equations \[form:EG12\] and \[form:decrate-EG12\] for the decay rate of a superposition {#AppendixA} =============================================================================================================== Equations \[form:EG12\] and \[form:decrate-EG12\] for the decay rate of a superposition can be derived with Penrose’s argument that the time-translation operator of the superposition is ill-defined, as follows. The component $g_{00}$ of the metric tensor is given in the Newtonian limit by $$\label{G00} g_{00} = 1 + \frac{2\phi(\vec{x})}{c^{2}} \; ,$$ where $\phi(\vec{x})$ is the gravitational potential. The derivative of the physical time $\tau$ with respect to the time coordinate $t$ ($c\cdot t = x_0$) is given by $$\label{timeDilitation} \frac{d\tau}{dt}= \frac{ds}{dx_0} = \sqrt{g_{00}}\approx 1 + \frac{\phi(\vec{x})}{c^{2}} \; ,$$ where $s$ is the space-time invariant ($ds= c \cdot d\tau$). The fuzziness of the energy of state 1 due to the difference of the time-translation operators of states 1 and 2 can be estimated by $$\label{energyFuzzinesState1} \Delta E_1 = \int d^{3}\vec{x}\rho_{1}(\vec{x})c^{2}(\frac{d\tau_{2}}{dt} - \frac{d\tau_{1}}{dt} ) = \int d^{3}\vec{x}\rho_{1}(\vec{x})(\phi_{2}(\vec{x}) - \phi_{1}(\vec{x}))$$ and analogously for state 2 $$\label{energyFuzzinesState2} \Delta E_2 = \int d^{3}\vec{x}\rho_{2}(\vec{x})c^{2}(\frac{d\tau_{1}}{dt} - \frac{d\tau_{2}}{dt} ) = \int d^{3}\vec{x}\rho_{2}(\vec{x})(\phi_{1}(\vec{x}) - \phi_{2}(\vec{x})) \; .$$ The addition of $\Delta E_1$ and $\Delta E_2$ leads directly to expression \[form:EG12\] for $E_{G \, 1,2}$: $$\label{derivationOfEG12} \Delta E_1 + \Delta E_2 = \int d^{3}\vec{x} (\rho_{1}(\vec{x})- \rho_{2}(\vec{x})) (\phi_{2}(\vec{x})- \phi_{1}(\vec{x})) = G \int d^3 \vec{x} \; d^3 \vec{y} \frac{(\rho_1(\vec{x})-\rho_2(\vec{x}))(\rho_1(\vec{y})-\rho_2(\vec{y}))}{|\vec{x}-\vec{y}|} = E_{G \, 1,2} \; .$$ The corresponding frequency of the energy fuzziness $\nu = (\Delta E_1 + \Delta 
E_2)/\hbar$ yields the decay rate ${\dot p}_{decay}$ of equation \[form:decrate-EG12\]. I thank Prof. Wolfgang Elsä[ß]{}er for valuable support in the design of the proposed experiment. I thank Prof. Werner Martienssen, Prof. Gernot Alber, Dr. Eric Hildebrandt, Dr. Helmar Becker, Prof. Wolfgang Dultz, Prof. Achim Richter and Prof. Thomas Görnitz for interesting and helpful discussions. I thank Dr. Christoph Lamm for proofreading the manuscript. This work is dedicated to my parents. J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University Press, Princeton (1955) D. Salart, A. Baas, J. C. Branciard, N. Gisin, H. Zbinden, Testing spooky action at a distance, Nature, 454, 861 (2008) M. Arndt, O. Nairz, J. Voss-Andreae, C. Keller, G. van der Zouw, A. Zeilinger, Wave-particle duality of C60 molecules, Nature, 401, 680-682 (1999) J. Friedman, V. Patel, W. Chen, S. Tolpygo, J. Lukens, Quantum superposition of distinct macroscopic states, Nature, 406, 43-46 (2000) G.J. Milburn, Intrinsic decoherence in quantum mechanics, Phys. Rev. A, 44, 5401-5406 (1991) L. Diósi, Models for universal reduction of macroscopic quantum fluctuations, Phys. Rev. A, 40, 1165-1174 (1989) R. Penrose, On gravity’s role in quantum state reduction, Gen. Rel. Grav., 28, 581-600 (1996) G. C. Ghirardi, A. Rimini, T. Weber, Unified dynamics for microscopic and macroscopic systems, Phys. Rev. D, 34, 470 (1986) G. C. Ghirardi, P. Pearle, A. Rimini, Markov processes in Hilbert space and continuous spontaneous localization of systems of identical particles, Phys. Rev. A, 42, 78-89 (1990) W. Marshall, C. Simon, R. Penrose, D. Bouwmeester, Towards quantum superpositions of a mirror, Phys. Rev. Lett., 91, 130401 (2003) J. Christian, Testing gravity-driven collapse of the wavefunction via cosmogenic neutrinos, Phys. Rev. Lett., 95, 160403 (2005) R. Penrose, Quantum computation, entanglement and state reduction, Phil. Trans. R. Soc. Lond. A, 356, 1927-1939 (1998) J. van Wezel, T. 
Oosterkamp, J. Zaanen, Towards an experimental test of gravity-induced quantum state reduction, arxiv:0706.3976v1, 5 Feb. (2008) F. Károlyházi, A. Frenkel , B. Lukács, Physics as natural philosophy, MIT, Cambridge MA (1982) P. Pearle, E. Squires, Bound state excitation, nucleon decay experiments, and models of wave function collapse, Phys. Rev. Lett., 73, 1-5 (1994) B. Lamine, M. T. Jaekel, S. Reynaud, Gravitational decoherence of atomic interferometers, Eur. Phys. J. D, 20, 165-176 (2002) W. L. Power, I. Percival, Decoherence of quantum wave packets due to interaction with conformal space-time fluctuations, Proc. Roy. Soc. Lond. A, 456, 955-968 (2000) S. Bose, K. Jacobs, P. L. Knight, Scheme to probe the decoherence of a macroscopic object, Phys. Rev. A, 59, 3204-3210 (1999) C. Henkel, M. Nest, P. Domokos, R. Folman, Optical discrimination between spatial decoherence and thermalization of a massive object, Phys. Rev. A, 70, 023810 (2004) C. Simon, D. Jaksch, Possibility of observing energy decoherence due to quantum gravity, Phys. Rev. A, 70, 052104 (2004) G. Amelino-Camelia, Gravity-wave interferometers as probes of a low-energy effective quantum gravity, Phys. Rev. D, 62, 024015 (2000) R. Penrose, Shadows of the mind: an approach to the missing science of consciousness, Oxford University Press, Oxford (1994) L. Diósi, Intrinsic time-uncertainties and decoherence: comparison of 4 models, Brazilian Journal of Physics, 35, 260-265 (2005) G. Ribordy, J.D. Gautier, H. Zbinden, N. Gisin, Performance of InGaAs/InP avalanche photodiodes as gated-mode photon counters, Applied Optics, 37, 2272-2277 (1998) D. Salart, A. Baas, J. A.W. van Houwelingen, N. Gisin, H. Zbinden, Spacelike Separation in a Bell Test Assuming Gravitationally Induced Collapses, Phys. Rev. Lett., 100, 220404 (2008) P.O. Löwdin, Advances in Quantum Chemistry, 213-360. Academic Press, New York (1965) J. Cairns, J. Overbaugh, S. Millar, The origin of mutants, Nature, 335, 142-145 (1988) J. McFadden, J. 
Al-Khalili, A quantum mechanical model of adaptive mutation, Biosystems, 50, 203-211 (1999) A. Goswami, D. Todd, Is there conscious choice in directed mutation, phenocopies, and related phenomena?, Integr. Physiol. Behav. Sci., 32, 132-142 (1997) V.V. Ogryzko, A quantum-theoretical approach to the phenomena of directed mutations in bacteria (hypothesis), Biosystems, 43, 83-95 (1997) G. Quandt-Wiese, Evolutionary Quantum Theory and the Physical Representation of Awareness, Mensch & Buch Verlag, Berlin (2002) G. C. Ghirardi, A General Argument against Superluminal Transmission through the Quantum Mechanical Measurement Process, Lettere al Nuovo Cimento, 27, 293 (1980) [^1]: In all following calculations $\xi$ is assumed to be 1. [^2]: Note that the matrix $E_{G \, i,j}$ is symmetric ($E_{G \, i,j}=E_{G \, j,i}$) and that its diagonal elements vanish ($E_{G \, i,i}=0$). [^3]: This assumption loses its validity if one considers, for one amplitude of a superposition, the limiting case $|c_{i}|^{2} \rightarrow 0$. Here the reduction stream $J_{i \to j}$ becomes so small that later occurring trigger events can overrule the final outcome of the experiment. This problem of the rough estimates (\[form:PJexact\]) and (\[form:PJapproximation\]) can be seen if one studies the transition of a superposition of three states towards a superposition of two states by choosing $|c_{i}|^{2} \rightarrow 0$. Here one will find discontinuities of the reduction probabilities as a function of $|c_{i}|^{2}$ at $|c_{i}|^{2}=0$. But for a first understanding of the model this effect can be neglected. [^4]: From $\frac{d}{dt} |c_1|^2 < 0$ and $\frac{d}{dt} |c_3|^2 = \frac{d}{dt} |c_4|^2 = 0$ it follows that $J_{1 \to 3} >0$ and $J_{1 \to 4} >0$. [^5]: Remember that equation (\[form:EG12\]) expresses, for rigid masses, the energy needed to separate the masses of the states against their mutual gravitational attraction. 
[^6]: In this context it is interesting to mention that, with the assumption that state reduction by a physical interaction between the states leads to changed reduction probabilities, it is possible to determine the coupling strengths $E_{G \, i,j}$ by a modification of the proposed experiments. This requires changing the MDD conserving detectors of experiment \[fig:ThoughtExp\]b in such a way that they move a rigid mass like the MDD changing detectors, but with a time delay $\Delta t$ after the photon has entered the detector. This delayed change of the mass-density distribution can impact the measurement’s result only if the four states are still in a superposition. By measuring changes of the reduction probabilities of experiment \[fig:ThoughtExp\]b and of the modified experiment as a function of the time delay $\Delta t$, it is possible to determine the superposition’s lifetime and accordingly the coupling strengths.
--- author: - 'N. G. Parker' - 'B. Jackson' - 'A. M. Martin' - 'C. S. Adams' title: 'Vortices in Bose-Einstein Condensates: Theory' --- Quantized vortices ================== Vortices are pervasive in nature, representing the breakdown of laminar fluid flow and hence playing a key role in turbulence. The fluid rotation associated with a vortex can be parameterized by the circulation $\Gamma=\oint {\rm d}{\bf r}\cdot{\bf v}({\bf r})$ about the vortex, where ${\bf v}({\bf r})$ is the fluid velocity field. While classical vortices can take any value of circulation, superfluids are irrotational, and any rotation or angular momentum is constrained to occur through vortices with quantized circulation. Quantized vortices also play a key role in the dissipation of transport in superfluids. In BECs quantized vortices have been observed in several forms, including single vortices [@matthews:prl1999; @anderson:prl2000], vortex lattices [@madison:prl2000; @aboshaeer:science2001; @hodby:prl2002; @raman:prl2001] (see also Chap. VII), and vortex pairs and rings [@anderson:prl2001; @dutton:science2001; @inouye:prl2001]. The recent observation of quantized vortices in a fermionic gas was taken as a clear signature of the underlying condensation and superfluidity of fermion pairs [@zwierlein]. In addition to BECs, quantized vortices also occur in superfluid Helium [@donnelly; @barenghi], nonlinear optics, and type-II superconductors [@tilley]. Theoretical Framework --------------------- ### Quantization of circulation Quantized vortices represent phase defects in the superfluid topology of the system. Under the Madelung transformation, the macroscopic condensate ‘wavefunction’ $\psi({\bf r},t)$ can be expressed in terms of a fluid density $n({\bf r},t)$ and a macroscopic phase $S({\bf r},t)$ via $\psi({\bf r},t)=\sqrt{n({\bf r},t)} \exp[iS({\bf r},t)]$. 
In order that the wavefunction remains single-valued, the change in phase around any closed contour $C$ must be an integer multiple of $2\pi$, $$\oint_{\rm C} \nabla S\cdot d{\bf l}=2\pi q,$$ where $q$ is an integer. The gradient of the phase $S$ defines the superfluid velocity via ${\bf v}({\bf r},t)=(\hbar/m){\bf \nabla} S({\bf r},t)$. This implies that the circulation about the contour $C$ is given by, $$\Gamma=\oint_{\rm C} {\bf v}\cdot d{\bf l}=q \left(\frac{h}{m}\right).$$ In other words, the circulation of fluid is quantized in units of $(h/m)$. The circulating fluid velocity about a vortex is given by ${\bf v}(r,\theta)=q\hbar/(mr) \hat{\bm{\theta}}$, where $r$ is the radius from the core and $\hat{\bm{\theta}}$ is the azimuthal unit vector. ### Theoretical model The Gross-Pitaevskii equation (GPE) provides an excellent description of BECs at the mean-field level in the limit of ultra-cold temperature [@dalfovo:rmp1999]. It supports quantized vortices, and has been shown to give a good description of the static properties and dynamics of vortices [@dalfovo:rmp1999; @fetter:jp2001]. Dilute BECs require a confining potential, formed by magnetic or optical fields, which typically varies quadratically with position. We will assume an axially-symmetric harmonic trap of the form $V=\frac{1}{2}m(\omega_r^2 r^2+\omega_z^2 z^2)$, where $\omega_r$ and $\omega_z$ are the radial and axial trap frequencies respectively. Excitation spectra of BEC states can be obtained using the Bogoliubov equations, and specify the stability of stationary solutions of the GPE. For example, the presence of the so-called anomalous modes of a vortex in a trapped BEC are indicative of their thermodynamic instability. The GPE can also give a qualitative, and sometimes quantitative, understanding of vortices in superfluid Helium [@donnelly; @barenghi]. 
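Returning to the quantization of circulation discussed above: the contour independence of $\Gamma$ is easy to verify numerically by integrating the vortex velocity field around an arbitrary closed loop enclosing the core. The following is a minimal sketch in dimensionless units ($\hbar=m=1$); the elliptical contour and the charge $q=2$ are arbitrary illustrative choices:

```python
import numpy as np

# Circulation of a charge-q vortex velocity field v = (q*hbar/m)*(-y, x)/r^2
# around a closed contour enclosing the core (units hbar = m = 1). The
# elliptical contour is an arbitrary choice; the result q*h/m = 2*pi*q is
# independent of the contour shape.
hbar = m = 1.0
q = 2

n = 4000
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dt = t[1] - t[0]

# elliptical contour (semi-axes 2 and 1) around the vortex at the origin
x, y = 2.0 * np.cos(t), np.sin(t)
dxdt, dydt = -2.0 * np.sin(t), np.cos(t)

r2 = x**2 + y**2
vx = -q * hbar / m * y / r2
vy = q * hbar / m * x / r2

# periodic rectangle rule for the line integral (spectrally accurate here)
Gamma = np.sum(vx * dxdt + vy * dydt) * dt

print(Gamma / (2.0 * np.pi))  # -> q, i.e. the circulation in units of h/m
```

Deforming the contour (so long as it still winds once around the core) leaves $\Gamma$ unchanged, since the integrand reduces to $q\,d\theta$.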
Although this Chapter deals primarily with vortices in repulsively-interacting BECs, vortices in attractively-interacting BECs have also received theoretical interest. The presence of a vortex in a trapped BEC with attractive interactions is less energetically favorable than for repulsive interactions [@dalfovo:pra96]. Indeed, a harmonically-confined attractive BEC with angular momentum is expected to exhibit a center-of-mass motion rather than a vortex [@wilkin]. The use of anharmonic confinement can however support metastable vortices, as well as regimes of center-of-mass motion and instability [@saito; @lundh:prl2004; @kavoulakis]. Various approximations have been made to incorporate thermal effects into the GPE to describe vortices at finite temperature (see also Chap. XI). The Popov approximation self-consistently couples the condensate to a normal gas component using the Bogoliubov-de-Gennes formalism [@virtanen:prl2001] (cf. Chap. I Sec. 5.2). Other approaches involve the addition of thermal/quantum noise to the system, such as the stochastic GPE method [@gardiner; @penckwitt; @duine:pra2004] and the classical field/truncated Wigner methods [@steele; @davis:pra2002; @lobo; @simula]. Thermal effects can also be simulated by adding a phenomenological dissipation term to the GPE [@tsubota]. ### Basic properties of vortices In a homogeneous system, a quantized vortex has the 2D form, $$\begin{aligned} \psi(r,\theta)=\sqrt{n_{\rm v}(r)}\exp(iq\theta). \label{vortex-wave}\end{aligned}$$ The vortex density profile $n_{\rm v}(r)$ has no analytic solution, although approximate solutions exist [@pethick]. Vortex solutions can be obtained numerically by propagating the GPE in imaginary time ($t \rightarrow -it$) [@minguzzi04], whereby the GPE converges to the lowest energy state of the system (providing it is stable). By enforcing the phase distribution of Eq. (\[vortex-wave\]), a vortex solution is generated. 
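Such an imaginary-time calculation can be sketched as follows. This is a minimal split-step (Fourier) scheme in dimensionless units ($\hbar=m=\omega_r=1$); the grid size, interaction strength $g$ and time step are illustrative assumptions rather than values from the text, and the $q=1$ winding is imprinted through the factor $(x+iy)=r e^{i\theta}$ in the initial guess:

```python
import numpy as np

# Imaginary-time split-step propagation of the 2D GPE to obtain a q = 1
# vortex state in a harmonic trap (dimensionless units hbar = m = omega_r = 1).
N, L = 64, 16.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

V = 0.5 * (X**2 + Y**2)   # harmonic trap
g = 100.0                 # effective 2D interaction strength (assumed)
dt = 0.005

# initial guess: Gaussian times (x + i*y) = r*exp(i*theta), a q = 1 winding
psi = (X + 1j * Y) * np.exp(-(X**2 + Y**2) / 4.0)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)

for _ in range(1500):
    # kinetic half-step in Fourier space, then potential + interaction step
    psi = np.fft.ifft2(np.exp(-0.25 * dt * K2) * np.fft.fft2(psi))
    psi *= np.exp(-dt * (V + g * np.abs(psi)**2))
    psi = np.fft.ifft2(np.exp(-0.25 * dt * K2) * np.fft.fft2(psi))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)  # restore normalization

density = np.abs(psi)**2
# the density vanishes at the trap center (the vortex core) and recovers
# over a distance of order the healing length
```

Because imaginary-time evolution damps everything except the lowest-energy state in the imprinted winding sector, the converged density shows the characteristic empty core at the trap center.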
Figure 1 shows the solution for a $q=1$ vortex at the center of a harmonically-confined BEC. The vortex consists of a node of zero density with a width characterized by the condensate healing length $\xi=\hbar/\sqrt{m n_0 g}$, where $g=4\pi \hbar^2 a /m$ (with $a$ the s-wave scattering length) and $n_0$ is the peak density in the absence of the vortex. For typical BEC parameters [@madison:prl2000], $\xi\sim 0.2~\mu m$. For a $q=1$ vortex at the center of an axially-symmetric potential, each particle carries $\hbar$ of angular momentum. However, if the vortex is off-center, the angular momentum per particle becomes a function of position [@fetter:jp2001]. Vortex structures ----------------- Increasing the vortex charge widens the core due to centrifugal effects. In harmonically-confined condensates a multiply-quantized vortex with $q>1$ is energetically unfavorable compared to a configuration of singly-charged vortices [@butts:nature99; @lundh:pra2002]. Hence, a rotating BEC generally contains an array of singly-charged vortices in the form of a triangular Abrikosov lattice [@madison:prl2000; @aboshaeer:science2001; @hodby:prl2002; @raman:prl2001; @haljan:prl2001] (see also Chap. VII), similar to those found in rotating superfluid helium [@donnelly]. A $q>1$ vortex can decay by splitting into singly-quantized vortices via a dynamical instability [@mottonen:pra2003; @shin:prl2004], but is stable for some interaction strengths [@pu:pra1999]. Multiply-charged vortices are also predicted to be stabilized by a suitable localized pinning potential [@simula:pra2002] or the addition of quartic confinement [@lundh:pra2002]. Two-dimensional vortex-antivortex pairs (i.e. two vortices with equal but opposite circulation) and 3D vortex rings arise in the dissipation of superflow, and represent solutions to the homogeneous GPE in the moving frame [@jones:jpa1982; @jones:jpa1986], with their motion being self-induced by the velocity field of the vortex lines. 
When the vortex lines are so close that they begin to overlap, these states are no longer stable and evolve into a rarefaction pulse [@jones:jpa1982]. Having more than one spin component in the BEC (cf. Chap. IX) provides additional topological possibilities for vortex structures. Coreless vortices and vortex ‘molecules’ in coupled two-component BECs have been probed experimentally [@leanhardt:prl2003] and theoretically [@kasamatsu:prl2004]. More exotic vortex structures such as skyrmion excitations [@ruostekoski] and half-quantum vortex rings [@ruostekoski:prl2003] have also been proposed. Nucleation of vortices ====================== Vortices can be generated by rotation, a moving obstacle, or phase imprinting methods. Below we discuss each method in turn. Rotation -------- As discussed in the previous section, a BEC can only rotate through the existence of quantized vortex lines. Vortex nucleation occurs only when the rotation frequency $\Omega$ of the container exceeds a critical value $\Omega_c$ [@fetter:jp2001; @butts:nature99; @nozieres]. Consider a condensate in an axially-symmetric trap which is rotating about the [*z*]{}-axis at frequency $\Omega$. In the Thomas-Fermi limit, the presence of a vortex becomes energetically favorable when $\Omega$ exceeds a critical value given by [@Lundh97], $$\Omega_c=\frac{5}{2}\frac{\hbar}{mR^2} \ln \frac{0.67 R}{\xi}. \label{Omega_c}$$ This is derived by integrating the kinetic energy density $m n(r) v(r)^2/2$ of the vortex velocity field in the radial plane. The lower and upper limits of the integration are set by the healing length $\xi$ and the BEC Thomas-Fermi radius $R$, respectively. Note that $\Omega_c<\omega_r$ for repulsive interactions, while $\Omega_c>\omega_r$ for attractive interactions [@dalfovo:pra96]. In a non-rotating BEC the presence of a vortex raises the energy of the system, indicating thermodynamic instability [@rokhsar:prl1997]. 
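For orientation, Eq. (\[Omega\_c\]) can be evaluated with representative numbers. In the sketch below the Thomas-Fermi radius $R$ is an assumed illustrative value, while $\xi \approx 0.2~\mu$m is the healing length scale quoted earlier:

```python
import math

# Illustrative evaluation of the thermodynamic critical rotation frequency
# Omega_c = (5/2)*(hbar/(m R^2))*ln(0.67 R / xi) for a 87Rb condensate.
# R = 5 micron is an assumed Thomas-Fermi radius, not a value from the text.
hbar = 1.055e-34   # J s
m_rb = 1.443e-25   # kg, mass of 87Rb
R = 5.0e-6         # m, assumed Thomas-Fermi radius
xi = 0.2e-6        # m, healing length scale quoted in the text

Omega_c = 2.5 * hbar / (m_rb * R**2) * math.log(0.67 * R / xi)
print(Omega_c / (2.0 * math.pi))   # -> roughly 30 Hz
```

With a typical radial trap frequency $\omega_r \approx 2\pi\times 100$ Hz this gives $\Omega_c \approx 0.3\,\omega_r$, consistent with the statement that $\Omega_c<\omega_r$ for repulsive interactions.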
In experiments, vortices are formed only when the trap is rotated at a much higher frequency than $\Omega_c$ [@madison:prl2000; @aboshaeer:science2001; @hodby:prl2002], demonstrating that the energetic criterion is a necessary, but not sufficient, condition for vortex nucleation. There must also be a dynamic route for vorticity to be introduced into the condensate, and hence Eq. (\[Omega\_c\]) provides only a lower bound for the critical frequency. The nucleation of vortices in rotating trapped BECs appears to be linked to instabilities of collective excitations. Numerical simulations based on the GPE have shown that once the amplitude of these excitations becomes sufficiently large, vortices are nucleated that subsequently penetrate the high-density bulk of the condensate [@penckwitt; @lobo; @tsubota; @lundh:pra2003; @parker:lattice]. One way to induce instability is to resonantly excite a surface mode by adding a rotating deformation to the trap potential. In the limit of small perturbations, this resonance occurs close to a rotation frequency $\Omega_r = \omega_\ell /\ell$, where $\omega_\ell$ is the frequency of a surface mode with multipolarity $\ell$. In the Thomas-Fermi limit, the surface modes satisfy $\omega_\ell =\sqrt{\ell}\omega_r$ [@stringari96], so $\Omega_r = \omega_r/\sqrt{\ell}$. For example, an elliptically-deformed trap, which excites the $\ell=2$ quadrupole mode, would nucleate vortices when rotated at $\Omega_r \approx \omega_r/\sqrt{2}$. This value has been confirmed in both experiments [@madison:prl2000; @aboshaeer:science2001; @hodby:prl2002] and numerical simulations [@penckwitt; @lobo; @tsubota; @lundh:pra2003; @parker:lattice]. Higher multipolarities were resonantly excited in the experiment of Ref. [@raman:prl2001], finding vortex formation at frequencies close to the expected values, $\Omega = \omega_r/\sqrt{\ell}$, and lending further support to this picture. 
A similar route to vortex nucleation is revealed by considering stationary states of the BEC in a rotating elliptical trap, which can be obtained in the Thomas-Fermi limit by solving hydrodynamic equations [@recati01]. At low rotation rates only one solution is found; however at higher rotations ($\Omega > \omega_r/\sqrt{2}$) a bifurcation occurs and up to three solutions are present. Above the bifurcation point one or more of the solutions become dynamically unstable [@Sinha01], leading to vortex formation [@Parker06]. Madison [*et al. *]{}[@madison01] followed these stationary states experimentally by adiabatically introducing trap ellipticity and rotation, and observed vortex nucleation in the expected region. Surface mode instabilities can also be induced at finite temperature by the presence of a rotating noncondensed “thermal” cloud. Such instabilities occur when the thermal cloud rotation rate satisfies $\Omega > \omega_{\ell} /\ell$ [@williams02]. Since all modes can potentially be excited in this way, the criterion for instability and hence vortex nucleation becomes $\Omega > {\rm min}_\ell \, (\omega_{\ell}/\ell)$, analogous to the Landau criterion, so that $\Omega_c = {\rm min}_\ell \, (\omega_{\ell}/\ell)$. Note that this minimum is nonzero, since the Thomas-Fermi result $\omega_\ell =\sqrt{\ell}\omega_r$ becomes less accurate for high $\ell$ [@dalfovo01]. This mechanism may have been important in the experiment of Haljan [*et al. *]{}[@haljan:prl2001], where a vortex lattice was formed by cooling a rotating thermal cloud to below $T_c$. Nucleation by a moving object ----------------------------- Vortices can also be nucleated in BECs by a moving localized potential. This problem was originally studied using the GPE for 2D uniform condensate flow around a circular hard-walled potential [@frisch92; @winiecki99], with vortex-antivortex pairs being nucleated when the flow velocity exceeded a critical value. 
In trapped BECs a similar situation can be realized using the optical dipole force from a laser, giving rise to a localized repulsive Gaussian potential. Under linear motion of such a potential, numerical simulations revealed vortex pair formation when the potential is moved at a velocity above a critical value [@jackson98]. The experiments of [@raman99; @onofrio00] oscillated a repulsive laser beam in an elongated condensate. Although vortices were not observed directly, the measurement of condensate heating and drag above a critical velocity was consistent with the nucleation of vortices [@jackson:pra2000a]. An alternative approach is to move the laser beam potential in a circular path around the trap center [@caradoc99]. By “stirring” the condensate in this way one or more vortices can be created. This technique was used in the experiment of Ref. [@raman:prl2001], where vortices were generated even at low stirring frequencies. Other mechanisms and structures ------------------------------- A variety of other schemes for vortex creation have been suggested. One of the most important is that by Williams and Holland [@williams99], who proposed a combination of rotation and coupling between two hyperfine levels to create a two-component condensate, one of which is in a vortex state. The non-vortex component can then either be retained or removed with a resonant laser pulse. This scheme was used by the first experiment to obtain vortices in BEC [@matthews:prl1999]. A related method, using topological phase imprinting, has been used to experimentally generate multiply-quantized vortices [@leanhardt:prl2002]. Apart from the vortex lines considered so far, vortex rings have also been the subject of interest. Rings are the decay product of dynamically unstable dark solitary waves in 3D geometries [@anderson:prl2001; @dutton:science2001; @ginsberg:prl2005; @komineas:pra2003]. 
Vortex rings also form in the quantum reflection of BECs from surface potentials [@scott:prl2005], the unstable motion of BECs through an optical lattice [@scott:pra2004], the dragging of a 3D object through a BEC [@jackson:pra1999], and the collapse of ultrasound bubbles in BECs [@berloff:prl2004]. The controlled generation of vortex rings [@ruostekoski:pra2005] and multiple/bound vortex ring structures [@crasovan] have been analyzed theoretically. A finite temperature state of a quasi-2D BEC, characterized by the thermal activation of vortex-antivortex pairs, has been simulated using classical field simulations [@simula:prl2006]. This effect is thought to be linked to the Berezinskii-Kosterlitz-Thouless phase transition of 2D superfluids, recently observed experimentally in ultracold gases [@hadzibabic06]. Similar simulations in a 3D system have also demonstrated the thermal creation of vortices [@davis02; @berloff02]. Dynamics of vortices ==================== The study of vortex dynamics has long been an important topic in both classical [@lamb] and quantum [@barenghi] hydrodynamics. Helmholtz’s theorem for uniform, inviscid fluids, which is also applicable to quantized vortices in superfluids near zero temperature, states that the vortex will follow the motion of the background fluid. So, for example, in a superfluid with uniform flow velocity ${\bf v}_s$, a single straight vortex line will move with velocity ${\bf v}_L$, such that it is stationary in the frame of the superfluid. Vortices similarly follow the “background flow” originating from circulating fluid around a vortex core. Hence vortex motion can be induced by the presence of other vortices, or by other parts of the same vortex line when it is curved. 
Most generally, the superfluid velocity ${\bf v}_i$ due to vortices at a particular point ${\bf r}$ is given by the [*Biot-Savart*]{} law [@barenghi], in analogy with the similar equation in electromagnetism, $${\bf v}_i = \frac{\Gamma}{4 \pi} \int \frac{({\bf s}-{\bf r}) \times d{\bf s}} {|{\bf s}-{\bf r}|^3}; \label{biot-savart}$$ where ${\bf s} (\zeta,t)$ is a curve representing the vortex line with $\zeta$ the arc length. Equation (\[biot-savart\]) suffers from a divergence at ${\bf r}={\bf s}$, so in calculations of vortex dynamics this must be treated carefully [@tsubota00]. Equation (\[biot-savart\]) also assumes that the vortex core size is small compared to the distance between vortices. In particular, it breaks down when vortices cross during collisions, where reconnection events can occur. These reconnections can either be included manually [@schwarz85], or by solving the full GPE [@koplik93]. The latter method also has the advantage of including sound emission due to vortex motion or reconnections [@leadbeater:prl2001; @leadbeater:pra2003]. In a system with multiple vortices, motion of one vortex is induced by the circulating fluid flow around other vortices, and vice-versa [@donnelly]. This means that, for example, a pair of vortices of equal but opposite charge will move linearly and parallel to each other with a velocity inversely proportional to the distance between them. Two or more vortices of equal charge, meanwhile, will rotate around each other, giving rise to a rotating vortex lattice as will be discussed in Chap. VII. When a vortex line is curved, circulating fluid from one part of the line can induce motion in another. This effect can give rise to helical waves on the vortex, known as Kelvin modes [@kelvin]. It also has interesting consequences for a vortex ring, which will travel in a direction perpendicular to the plane of the ring, with a self-induced velocity that decreases with increasing radius. 
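The pair translation and co-rotation just described follow from a two-dimensional point-vortex reduction of Eq. (\[biot-savart\]), in which each vortex is simply advected by the flow of the others. A minimal sketch (with $\Gamma=1$ in arbitrary units):

```python
import numpy as np

# 2D point-vortex model: vortex i is advected by the flow of the others,
# v_i = sum_j Gamma_j/(2*pi) * zhat x (r_i - r_j) / |r_i - r_j|^2.
def induced_velocities(pos, circ):
    v = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            r = pos[i] - pos[j]
            d2 = r @ r
            # zhat x r = (-r_y, r_x)
            v[i] += circ[j] / (2.0 * np.pi * d2) * np.array([-r[1], r[0]])
    return v

pos = np.array([[0.0, 0.5], [0.0, -0.5]])   # two vortices separated by d = 1

# vortex-antivortex pair: both translate together, perpendicular to their
# separation, at speed Gamma/(2*pi*d)
v_pair = induced_velocities(pos, np.array([1.0, -1.0]))

# same-sign pair: equal and opposite velocities, i.e. co-rotation about
# the midpoint
v_same = induced_velocities(pos, np.array([1.0, 1.0]))
```

Note the model assumes well-separated cores; as stated above, it breaks down when vortices approach within a core size of each other.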
Classically, such self-propelled ring motion is most familiar from smoke rings, though similar behavior has also been observed in superfluid helium [@rayfield64]. This simple picture is complicated in the presence of density inhomogeneities or confining walls. In a harmonically-trapped BEC the density is a function of position, and therefore the energy, $E$, of a vortex will also depend on its position within the condensate. To simplify matters, let us consider a quasi-2D situation, where the condensate is pancake-shaped and the vortex line is straight. In this case, the energy of the vortex depends on its displacement ${\bf r}$ from the condensate center [@svidzinsky:prl2000], and a displaced vortex feels a force proportional to $\nabla E$. This is equivalent to a Magnus force on the vortex [@jackson:pra2000b; @lundh00; @mcgee01] and to compensate the vortex moves in a direction perpendicular to the force, leading it to precess around the center of the condensate along a line of constant energy. This precession of a single vortex has been observed experimentally [@anderson:prl2000], with a frequency in agreement with theoretical predictions. In more 3D situations, such as spherical or cigar-shaped condensates, the vortex can bend [@garcia:pra2001a; @garcia:pra2001b; @aftalion:pra2001; @rosenbusch:prl2002], leading to more complicated motion [@fetter:jp2001]. Kelvin modes [@bretin:prl2003; @fetter:pra2004] and vortex ring dynamics [@jackson:pra2000b] are also modified by the density inhomogeneity in the trap. In the presence of a hard-wall potential, a new constraint is imposed such that the fluid velocity normal to the wall must be zero, ${\bf v}_s \cdot \hat{\bf n}=0$. The resulting problem of vortex motion is usually solved mathematically [@lamb] by invoking an “image vortex” on the other side of the wall (i.e. in the region where there is no fluid present), at a position such that its normal flow cancels that of the real vortex at the barrier. 
The motion of the real vortex is then simply equal to the induced velocity from the image vortex circulation. Stability of vortices ===================== Thermal instabilities --------------------- At finite temperatures the above discussion is modified by the thermal occupation of excited modes of the system, which gives rise to a noncondensed normal fluid in addition to the superfluid. A vortex core moving relative to the normal fluid scatters thermal excitations, and will therefore feel a frictional force leading to dissipation. This mutual friction force can be written as [@donnelly], $${\bf f}_D = - n_s \Gamma \{ \alpha {\bf s}' \times [\, {\bf s}' \times ({\bf v}_n - {\bf v}_L)] + \alpha' {\bf s}' \times ({\bf v}_n - {\bf v}_L) \}, \label{eq:mutual}$$ where $n_s$ is the background superfluid density, ${\bf s}'$ is the derivative of ${\bf s}$ with respect to arc length $\zeta$, $\alpha$ and $\alpha'$ are temperature dependent parameters, while ${\bf v}_L$ and ${\bf v}_n$ are the velocities of the vortex line and normal fluid respectively. The mutual friction therefore has two components perpendicular to the relative velocity ${\bf v}_n-{\bf v}_L$. To consider an example discussed in the last section, an off-center vortex in a trapped BEC at zero temperature will precess such that its energy remains constant. In the presence of a non-condensed component, however, dissipation will lead to a loss of energy. Since the vortex is topological it cannot simply vanish, so this lost energy is manifested as a radial drift of the vortex towards lower densities. In Eq. (\[eq:mutual\]) the $\alpha$ term is responsible for this radial motion, while $\alpha'$ changes the precession frequency. The vortex disappears at the edge of the condensate, where it is thought to decay into elementary excitations [@fedichev:pra1999vor]. 
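The radial drift described above can be illustrated with a deliberately crude toy model in which an $\alpha$-type friction adds a small outward component to the otherwise circular precession of an off-center vortex. Here $\omega$ and $\alpha$ are arbitrary illustrative parameters, not quantities derived from Eq. (\[eq:mutual\]):

```python
import numpy as np

# Toy model of a precessing vortex with dissipation: the vortex velocity is
# the precession omega * (zhat x r) plus a small frictional radial drift
# alpha * omega * r toward the lower-density edge (outward). The parameters
# are arbitrary illustrative values.
omega, alpha, dt = 1.0, 0.05, 0.001
r = np.array([0.2, 0.0])   # initial vortex displacement from the trap center

radii = [np.linalg.norm(r)]
for _ in range(20000):
    v = omega * np.array([-r[1], r[0]]) + alpha * omega * r
    r = r + dt * v
    radii.append(np.linalg.norm(r))

# the radius grows monotonically: with dissipation the vortex spirals
# outward instead of precessing on a circle of constant energy
```

Setting $\alpha=0$ recovers the dissipationless case, where the vortex circulates at a fixed radius along a line of constant energy.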
Calculations based upon the stochastic GPE have shown that thermal fluctuations lead to an uncertainty in the position of the vortex, such that even a central vortex will experience thermal dissipation and have a finite lifetime [@duine:pra2004]. This thermodynamic lifetime is predicted to be of the order of seconds [@fedichev:pra1999vor], which is consistent with experiments [@matthews:prl1999; @madison:prl2000; @rosenbusch:prl2002]. Hydrodynamic instabilities -------------------------- Experiments indicate that the crystallization of vortex lattices is temperature-independent [@hodby:prl2002; @aboshaeer:prl2002]. Similarly, vortex tangles in turbulent states of superfluid Helium have been observed to decay at ultracold temperature, where thermal dissipation is virtually nonexistent [@davis:pb2000]. These results highlight the occurrence of zero temperature dissipation mechanisms, as listed below. ### Instability to acceleration The topology of a 2D homogeneous superfluid can be mapped on to a (2+1)D electrodynamic system, with vortices and phonons playing the role of charges and photons respectively [@arovas]. Just as an accelerating electron radiates according to the Larmor acceleration squared law, a superfluid vortex is inherently unstable to acceleration and radiates sound waves. ![Profile of a singly-quantized ($q=1$) vortex at the center of a harmonically-confined BEC: (a) condensate density along the $y=0$ axis (solid line) and the corresponding density profile in the absence of the vortex (dashed line). (b) 2D density and (c) phase profile of the vortex state. These profiles are calculated numerically by propagating the 2D GPE in imaginary time subject to an azimuthal $2\pi$ phase variation around the trap center.[]{data-label="vortex_soliton_profile"}](2D_profile_newB2.eps){width="11cm"} ![Vortex path in the dimple trap geometry of Eq. (\[eqn:dimple\]) with $\omega_{\rm d}=0.28 (c/\xi)$. 
Deep $V_0 =10\mu$ dimple (dotted line): mean radius is constant, but modulated by the sound field. Shallow $V_0 =0.6\mu$ dimple and homogeneous outer region $\omega_r=0$ (solid line): vortex spirals outwards. Outer plots: Sound excitations (with amplitude $\sim 0.01n_0$) radiated in the $V_0=0.6\mu$ system at times indicated. Top: Far-field distribution $[-90,90]\xi \times[-90,90]\xi$. Bottom: Near-field distribution $[-25,25]\xi \times[-25,25]\xi$, with an illustration of the dipolar radiation pattern. Copyright (2004) by the American Physical Society [@parker:prl2004].[]{data-label="vortex_spiral"}](vortex_soundN.eps){width="10cm"} Vortex acceleration can be induced by the presence of an inhomogeneous background density, such as in a trapped BEC. Sound emission from a vortex in a BEC can be probed by considering a trap of the form [@parker:prl2004], $$V_{\rm ext}=V_0\left[1-\exp\left(- \frac{m\omega_{\rm d}^2 r^2}{2V_0} \right)\right]+ \frac{1}{2}m\omega_r^2 r^2. \label{eqn:dimple}$$ This consists of a gaussian dimple trap with depth $V_0$ and harmonic frequency component $\omega_{\rm d}$, embedded in an ambient harmonic trap of frequency $\omega_r$. A 2D description is sufficient to describe this effect. This set-up can be realized with a quasi-2D BEC by focusing a far-off-resonant red-detuned laser beam in the center of a magnetic trap. The vortex is initially confined in the inner region, where it precesses due to the inhomogeneous density. Since sound excitations have an energy of the order of the chemical potential $\mu$, the depth of the dimple relative to $\mu$ leads to two distinct regimes of vortex-sound interactions. $V_0\gg \mu$: The vortex effectively sees an infinite harmonic trap - it precesses and radiates sound but there is no net decay due to complete sound reabsorption. However, a collective mode of the background fluid is excited, inducing slight modulations in the vortex path (dotted line in Fig. \[vortex\_spiral\]).
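The trap of Eq. (\[eqn:dimple\]) is easy to encode; the sketch below (our own, in dimensionless units with illustrative parameter values) checks its two limits, a harmonic well of frequency $\omega_{\rm d}$ near the center and saturation at the depth $V_0$ far outside:

```python
import numpy as np

def V_ext(r, V0, omega_d, omega_r, m=1.0):
    """Gaussian dimple of depth V0 embedded in an ambient harmonic
    trap of frequency omega_r, as in Eq. (dimple)."""
    dimple = V0 * (1.0 - np.exp(-m * omega_d**2 * r**2 / (2.0 * V0)))
    return dimple + 0.5 * m * omega_r**2 * r**2

# Near r = 0 the dimple is harmonic with frequency omega_d;
# for omega_r = 0 it saturates at V0 far from the center.
```

Tuning $V_0$ relative to $\mu$ in this potential switches between the sound-reabsorbing and freely-radiating regimes discussed in the text.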
$V_0<\mu$: Sound waves are radiated by the precessing vortex. Assuming $\omega_r=0$, the sound waves propagate to infinity without reinteracting with the vortex. The ensuing decay causes the vortex to drift to lower densities, resulting in a spiral motion (solid line in Fig. \[vortex\_spiral\]), similar to the effect of thermal dissipation. The sound waves are emitted in a dipolar radiation pattern, perpendicular to the instantaneous direction of motion (subplots in Fig. \[vortex\_spiral\]), with a typical amplitude of order $0.01 n_0$ and wavelength $\lambda \sim 2\pi c/\omega_{\rm V}$ [@fetter:jp2001], where $c$ is the speed of sound and $\omega_{\rm V}$ is the vortex precession frequency. The power radiated from a vortex can be expressed in the form [@parker:prl2004; @vinen:prb2000; @lundh:pra2000], $$\begin{aligned} P=\beta m N \left(\frac{a^2}{\omega_{\rm V}}\right), \label{eqn:vortex_power}\end{aligned}$$ where $a$ is the vortex acceleration, $N$ is the total number of atoms, and $\beta$ is a dimensionless coefficient. Using classical hydrodynamics [@vinen:prb2000] and by mapping the superfluid hydrodynamic equations onto Maxwell’s electrodynamic equations [@lundh:pra2000], it has been predicted that $\beta=\pi^2/2$ under the assumptions of a homogeneous 2D fluid, a point vortex, and perfect circular motion. Full numerical simulations of the GPE based on a realistic experimental scenario give a coefficient of $\beta \sim 6.3 \pm 0.9$ (one standard deviation), with the variation due to a weak dependence on the geometry of the system [@parker:prl2004]. When $\omega_r \neq 0$, the sound eventually reinteracts with the vortex, slowing but not preventing the vortex decay. By varying $V_0$ it is possible to control the vortex decay, and in suitably engineered traps this decay mechanism is expected to dominate over thermal dissipation [@parker:prl2004]. Vortex acceleration (and sound emission) can also be induced by the presence of other vortices.
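Eq. (\[eqn:vortex\_power\]) translates to one line of code; the sketch below (our own) uses the analytic value $\beta=\pi^2/2$ as the default, with the numerically derived $\beta \approx 6.3$ available as an argument:

```python
import math

def vortex_power(m, N, a, omega_V, beta=math.pi**2 / 2):
    """Acoustic power radiated by an accelerating vortex,
    P = beta * m * N * a**2 / omega_V.
    Default beta = pi^2/2 (homogeneous 2D fluid, point vortex,
    circular motion); GPE simulations give beta ~ 6.3 +/- 0.9."""
    return beta * m * N * a**2 / omega_V

# Larmor-like scaling: doubling the acceleration quadruples the power.
ratio = vortex_power(1, 1, 2, 1) / vortex_power(1, 1, 1, 1)  # -> 4.0
```

The $a^2$ scaling is the direct analog of the Larmor law mentioned earlier for accelerating charges.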
A co-rotating pair of vortices of equal charge has been shown to decay continuously via quadrupolar sound emission, both analytically [@pismen] and numerically [@barenghi:jltp2004]. Three-body vortex interactions in the form of a vortex-antivortex pair incident on a single vortex have also been simulated numerically, with the interaction inducing acceleration in the vortices with an associated emission of sound waves [@barenghi:jltp2004]. Simulations of vortex lattice formation in a rotating elliptical trap show that vortices are initially nucleated in a turbulent disordered state, before relaxing into an ordered lattice [@parker:lattice]. This relaxation process is associated with an exchange of energy from the sound field to the vortices due to these vortex-sound interactions. This agrees with the experimental observation that vortex lattice formation is insensitive to temperature [@hodby:prl2002; @aboshaeer:prl2002]. ### Kelvin wave radiation and vortex reconnections In 3D a Kelvin wave excitation will induce acceleration in the elements of the vortex line, and therefore local sound emission. Indeed, simulations of the GPE in 3D have shown that Kelvin wave excitations on a vortex ring lead to a decrease in the ring size, indicating the underlying radiation process [@leadbeater:pra2003]. Kelvin wave excitations can be generated from a vortex line reconnection [@leadbeater:prl2001; @leadbeater:pra2003] and the interaction of a vortex with a rarefaction pulse [@berloff:pra2004]. Vortex lines which cross each other can undergo dislocations and reconnections [@caradoc], which induce a considerable burst of sound emission [@leadbeater:prl2001]. Although they have yet to be probed experimentally in BECs, vortex reconnections are hence thought to play a key role in the dissipation of vortex tangles in Helium II at ultra-low temperatures [@donnelly].
Dipolar BECs ============ A BEC has recently been formed of chromium atoms [@Griesmaier05], which feature a large dipole moment. This opens the door to studying the effect of long-range dipolar interactions in BECs. The Modified Gross-Pitaevskii Equation \[MGPE\] ----------------------------------------------- The interaction potential $U_{dd}({\bf r})$ between two dipoles separated by $\bf{r}$, and aligned by an external field along the unit vector $\hat{\bf{e}}$, is given by, $$\begin{aligned} U_{dd}({\bf r})=\frac{C_{dd}}{4 \pi} \hat{e}_i\hat{e}_j \frac{\left(\delta_{ij}-3\hat{r}_i \hat{r}_j\right)}{r^3}. \label{eqn:U_dd_Dipolar}\end{aligned}$$ For low energy scattering of two atoms with dipoles induced by a static electric field ${\bf E}=E \hat{\bf{e}}$, the coupling constant $C_{dd}=E^2 \alpha^2/\epsilon_0$ [@Marinescu98; @Yi00], where $\alpha$ is the static dipole polarizability of the atoms and $\epsilon_0$ is the permittivity of free space. Alternatively, if the atoms have permanent magnetic dipoles, $d_m$, aligned in an external magnetic field ${\bf B}=B \hat{\bf{e}}$, one has $C_{dd}= \mu_0 d_m^2$ [@Goral00], where $\mu_0$ is the permeability of free space. Such dipolar interactions give rise to a mean-field potential $$\Phi_{dd} ({\bf r}) = \int d^3 r^{\prime} \, U_{dd} \left( {\bf r}-{\bf r}^{\prime} \right) |\psi\left({\bf r}^{\prime} \right)|^2, \label{eqn:Phi_dd_Dipolar}$$ which can be incorporated into the GPE to give, $$\label{GPE_Dipolar} i \hbar \psi_t = \left[ -\frac{\hbar^2}{2m} \nabla^2 + g|\psi|^2 + \Phi_{dd} + V\right]\psi.$$ For an axially-symmetric quasi-2D geometry ($\omega_z\gg\omega_r$) rotating about the ${\it z}$-axis, the ground state wavefunction of a single vortex has been solved numerically [@Yi06]. Considering $10^5$ chromium atoms and $\omega_r=2\pi \times 100$ Hz, several solutions were obtained depending on the strength of the $s$-wave interactions and the alignment of the dipoles relative to the trap.
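For two aligned dipoles, the tensor contraction in Eq. (\[eqn:U\_dd\_Dipolar\]) reduces to $(1-3\cos^2\theta)/r^3$, with $\theta$ the angle between $\hat{\bf e}$ and ${\bf r}$. A short sketch (ours, with an arbitrary $C_{dd}$) makes the anisotropy explicit:

```python
import numpy as np

def U_dd(r_vec, e_hat, C_dd):
    """Dipole-dipole interaction of Eq. (U_dd) for dipoles aligned
    along e_hat and separated by r_vec:
    U = C_dd/(4 pi) * (1 - 3 cos^2 theta) / r^3."""
    r_vec = np.asarray(r_vec, dtype=float)
    r = np.linalg.norm(r_vec)
    cos_theta = np.dot(e_hat, r_vec) / r
    return C_dd / (4.0 * np.pi) * (1.0 - 3.0 * cos_theta**2) / r**3

# Head-to-tail dipoles attract; side-by-side dipoles repel:
head_to_tail = U_dd([0, 0, 1], [0, 0, 1], C_dd=4 * np.pi)  # -> -2.0
side_by_side = U_dd([1, 0, 0], [0, 0, 1], C_dd=4 * np.pi)  # -> +1.0
```

This sign structure is what drives the elongation along the polarization axis and the anisotropic vortex cores described below.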
For the case of axially-polarized dipoles the most striking results arise for attractive $s$-wave interactions $g<0$. Here the BEC density is axially symmetric and oscillates in the vicinity of the vortex core. Similar density oscillations have been observed in numerical studies of other non-local interaction potentials, employed to investigate the interparticle interactions in $^4$He [@Oritz95; @Sadd97; @Berloff99; @Dalfovo92], with an interpretation that relates to the roton structure in a superfluid [@Dalfovo92]. For the case of transversely-polarized dipoles, where the polarizing field is co-rotating with the BEC, and repulsive $s$-wave interactions ($g>0$), the BEC becomes elongated along the axis of polarization [@Stuhler05] and as a consequence the vortex core is anisotropic. Vortex Energy \[Vortex\_Energy\] -------------------------------- Assuming a dipolar BEC in the TF limit (cf. Sec. 5.1 in Chap. I), the energetic cost of a vortex, aligned along the axis of polarization ($z$-axis), has been derived using a variational ansatz for the vortex core [@ODell06], and thereby the critical rotation frequency $\Omega_c$ at which the presence of a vortex becomes energetically favorable has been calculated. For an oblate trap ($\omega_{r}< \omega_z$), dipolar interactions decrease $\Omega_c$, while for prolate traps ($\omega_{r} > \omega_z$) the presence of dipolar interactions increases $\Omega_c$. A formula resembling Eq. (\[Omega\_c\]) for the critical frequency of a conventional BEC can be used to explain these results, with $R$ being the modified TF radius of the dipolar BEC. Indeed, using the TF radius of a vortex-free dipolar BEC [@ODell04; @Eberlein05] and the conventional $\it s$-wave healing length $\xi$, it was found that Eq. (\[Omega\_c\]) closely matches the results from the energy cost calculation. Deviations become significant when the dipolar interactions dominate over [*s*]{}-wave interactions. 
In this regime the $\it s$-wave healing length $\xi$ is no longer the relevant length scale of the system, and the equivalent dipolar length scale $\xi_d=C_{dd}m/(12 \pi \hbar^2)$ will characterize the vortex core size. For $g>0$ and in the absence of dipolar interactions, the rotation frequency at which the vortex-free BEC becomes dynamically unstable, $\Omega_{dyn}$, is always greater than the critical frequency for vortex stabilization $\Omega_c$. However, in the presence of dipolar interactions, $\Omega_{dyn}$ can become less than $\Omega_c$, leading to an intriguing regime in which the dipolar BEC is dynamically unstable but vortices will not enter [@ODell06; @Bijnen06]. As with attractive condensates [@wilkin], the angular momentum may then be manifested as center of mass oscillations. Analogs of Gravitational Physics in BECs ======================================== There is growing interest in pursuing analogs of gravitational physics in condensed matter systems [@Barcelo05], such as BECs. The rationale behind such models can be traced back to the work of Unruh [@Unruh81; @Unruh95], who noted the analogy between sound propagation in an inhomogeneous background flow and field propagation in curved space-time. This link applies in the TF limit of BECs, where the speed of sound is directly analogous to the speed of light in the corresponding gravitational system [@Barcelo01]. This has led to proposals for experiments to probe effects such as Hawking radiation [@Hawking74; @Hawking75] and superradiance [@Bekenstein98]. For Hawking radiation it is preferable to avoid the generation of vortices [@Barcelo05; @Barcelo03], so it will not be discussed further here. However, the phenomenon of superradiance in BECs, which can be considered as stimulated Hawking radiation, relies on the presence of a vortex [@Slatyer05; @Basak03a; @Basak03b; @Federici06], which is analogous to a rotating black hole.
Below we outline the derivation of how the propagation of sound in a BEC can be considered to be analogous to field propagation [@Barcelo05]. From the GPE it is possible to derive the continuity equation for an irrotational fluid flow with phase $S({\bf r},t)$ and density $n({\bf r},t)$, and a Hamilton-Jacobi equation whose gradient leads to the Euler equation. Linearizing these equations with respect to the background it is found that $$\label{Linearization1} \partial_t S'=-\frac{1}{m}\nabla S \cdot \nabla S'-gn'+\frac{\hbar^2}{4m\sqrt{n}} \left(\nabla^2\frac{n'}{\sqrt{n}}-\frac{n'}{n}\nabla^2\sqrt{n}\right),$$ $$\label{Linearization2} \partial_t n'=-\frac{1}{m}\nabla \cdot \left(n \nabla S'\right)-\frac{1}{m}\nabla \cdot \left(n' \nabla S\right),$$ where $n'$ and $S'$ are the perturbed values of the density $n$ and phase $S$ respectively. Neglecting the quantum pressure $\nabla^2$-terms, the above equations can be rewritten as a covariant differential equation describing the propagation of phase oscillations in a BEC. This is directly analogous to the propagation of a minimally coupled massless scalar field in an effective Lorentzian geometry which is determined by the background velocity, density and speed of sound in the BEC. Hence, the propagation of sound in a BEC can be used as an analogy for the propagation of electromagnetic fields in the corresponding space-time. Of course one has to be aware that this direct analogy is only valid in the TF regime, which breaks down on scales of the order of a healing length, i.e. the theory is only valid on large length scales, as is general relativity. Superradiance ------------- Superradiance in BECs relies on sound waves incident on a vortex structure and is characterized by the reflected sound energy exceeding the incident energy. This has been studied using Eqs. 
(\[Linearization1\]) and (\[Linearization2\]) for monochromatic sound waves of frequency $\omega_s$ and angular wave number $q_s$ incident upon a vortex [@Slatyer05] and a ‘draining vortex’ (a vortex with outcoupling at its center) [@Basak03a; @Basak03b; @Federici06]. For the vortex case, a vortex velocity field ${\bf v}(r,\theta)=(\beta/r)\hat{\bm{\theta}}$ and a density profile ansatz were assumed. Superradiance then occurs when $\beta q_s>A c_\infty$, where $A$ is related to the vortex density ansatz and $c_\infty$ is the speed of sound at infinity [@Slatyer05]. Interestingly, this condition is frequency independent. For the case of a draining vortex, an event horizon occurs at a distance $a$ from the vortex core, where the fluid circulates at frequency $\Omega$. Assuming a homogeneous density $n$ and a velocity profile ${\bf v}(r,\theta)=\left(-ca \hat{\bf r}+\Omega a^2 \hat{ {\bm \theta}}\right)/r$ where $c$ is the homogeneous speed of sound, superradiance occurs when $0 < \omega_s < q_s \Omega$ [@Basak03a; @Basak03b; @Federici06]. The increase in energy of the outgoing sound is due to an extraction of energy from the vortex and as such it is expected to lead to slowing of the vortex rotation. However, such models do not include quantized vortex angular momentum, and as such it is expected that superradiance will be suppressed [@Federici06]. This raises tantalizing questions, such as whether superradiance can occur if vorticity is quantized, if such effects can be modeled with the GPE, and whether the study of quantum effects in condensate superradiance will shed light on quantum effects in general relativity. [99.]{} M. R. Matthews, B. P. Anderson, P. C. Haljan, D. S. Hall, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. [**83**]{}, 2498 (1999). B. P. Anderson, P. C. Haljan, C. E. Wieman, and E. A. Cornell, Phys. Rev. Lett. [**85**]{}, 2857 (2000). K. W. Madison, F. Chevy, W. Wohlleben, and J. Dalibard, Phys. Rev. Lett. [**84**]{}, 806 (2000). J. R.
Abo-Shaeer, C. Raman, J. M. Vogels, and W. Ketterle, Science [**292**]{}, 476 (2001). E. Hodby, C. Hechenblaikner, S. A. Hopkins, O. M. Maragò, and C. J. Foot, Phys. Rev. Lett. [**88**]{}, 010405 (2002). C. Raman, J. R. Abo-Shaeer, J. M. Vogels, K. Xu, and W. Ketterle, Phys. Rev. Lett. [**87**]{}, 210402 (2001). B. P. Anderson, P. C. Haljan, C. A. Regal, D. L. Feder, L. A. Collins, C. W. Clark, and E. A. Cornell, Phys. Rev. Lett. [**86**]{}, 2926 (2001). Z. Dutton, M. Budde, C. Slowe, and L. V. Hau, Science [**293**]{}, 663 (2001). S. Inouye, S. Gupta, T. Rosenband, A. P. Chikkatur, A. Görlitz, T. L. Gustavson, A. E. Leanhardt, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. [**87**]{}, 080402 (2001). M. W. Zwierlein, J. R. Abo-Shaeer, A. Schirotzek, C. H. Schunck, and W. Ketterle, Nature [**435**]{}, 1047 (2005). R. J. Donnelly: (Cambridge University Press, Cambridge, 1991). C. F. Barenghi, R. J. Donnelly, and W. F. Vinen[ (Eds.)]{}: (Springer Verlag, Berlin, 2001). D. R. Tilley and J. Tilley: (IOP, Bristol, 1990). F. Dalfovo, S. Giorgini, L. P. Pitaevskii, and S. Stringari, Rev. Mod. Phys. [**71**]{}, 463 (1999). A. L. Fetter and A. A. Svidzinsky, J. Phys.: Condens. Matter [**13**]{}, R135 (2001). F. Dalfovo and S. Stringari, Phys. Rev. A [**53**]{}, 2477 (1996). N. K. Wilkin, J. M. F. Gunn, and R. A. Smith, Phys. Rev. Lett. [**80**]{}, 2265 (1998). H. Saito and M. Ueda, Phys. Rev. A [**69**]{}, 013604 (2004). E. Lundh, A. Collin, and K-A. Suominen, Phys. Rev. Lett. [**92**]{}, 070401 (2004). G. M. Kavoulakis, A. D. Jackson, and G. Baym, Phys. Rev. A [**70**]{}, 043603 (2004). S. M. M. Virtanen, T. P. Simula, M. M. Salomaa, Phys. Rev. Lett. [**86**]{}, 2704 (2001). C. W. Gardiner, J. R. Anglin, and T. I. A. Fudge, J. Phys. B [**35**]{}, 1555 (2002). A. A. Penckwitt, R. J. Ballagh, and C. W. Gardiner, Phys. Rev. Lett. [**89**]{}, 260402 (2002). R. A. Duine, B. W. A. Leurs, and H. T. C. Stoof, Phys. Rev. A [**69**]{}, 053623 (2004). M. J. Steel, M. K. Olsen, L. 
I. Plimak, P. D. Drummond, S. M. Tan, M. J. Collett, D. F. Walls, and R. Graham, Phys. Rev. A [**58**]{}, 4824 (1998). M. J. Davis, S. A. Morgan, and K. Burnett, Phys. Rev. A [**66**]{}, 053618 (2002). C. Lobo, A. Sinatra, and Y. Castin, Phys. Rev. Lett. [**92**]{}, 020403 (2004). T. P. Simula and P. B. Blakie, Phys. Rev. Lett. [**96**]{}, 020404 (2006). M. Tsubota, K. Kasamatsu, and M. Ueda, Phys. Rev. A [**65**]{}, 023603 (2002). C. J. Pethick and H. Smith: [*Bose-Einstein Condensation in Dilute Gases*]{} (Cambridge, 2002). A. Minguzzi, S. Succi, F. Toschi, M. P. Tosi, and P. Vignolo, Phys. Rep. [**395**]{}, 223 (2004). D. A. Butts and D. S. Rokhsar, Nature [**397**]{}, 327 (1999). E. Lundh, Phys. Rev. A [**65**]{}, 043604 (2002). P. C. Haljan, I. Coddington, P. Engels, and E. A. Cornell, Phys. Rev. Lett. [**87**]{}, 210403 (2001). M. Möttönen, T. Mizushima, T. Isoshima, M. M. Salomaa, and K. Machida, Phys. Rev. A [**68**]{}, 023611 (2003). Y. Shin, M. Saba, A. Schirotzek, T. A. Pasquini, A. E. Leanhardt, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. [**93**]{}, 160406 (2004). H. Pu, C. K. Law, J. H. Eberly, and N. P. Bigelow, Phys. Rev. A [**59**]{}, 1533 (1999). T. P. Simula, S. M. M. Virtanen, and M. M. Salomaa, Phys. Rev. A [**65**]{}, 033614 (2002). C. A. Jones and P. H. Roberts, J. Phys. A [**15**]{}, 2599 (1982). C. A. Jones, S. J. Putterman, and P. H. Roberts, J. Phys. A [**19**]{}, 2991 (1986). A. E. Leanhardt, Y. Shin, D. Kielpinski, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. [**90**]{}, 140403 (2003). K. Kasamatsu, M. Tsubota, and M. Ueda, Phys. Rev. Lett. [**93**]{}, 250406 (2004). J. Ruostekoski and J. R. Anglin, Phys. Rev. Lett. [**86**]{}, 3934 (2001). J. Ruostekoski and J. R. Anglin, Phys. Rev. Lett. [**91**]{}, 190402 (2003). N. G. Parker, N. P. Proukakis, C. F. Barenghi, and C. S. Adams, Phys. Rev. Lett. [**92**]{}, 160403 (2004). P. Nozieres and D. Pines: (Perseus Publishing, New York, 1999). E. Lundh, C.J. Pethick and H.
Smith, Phys. Rev. A [**55**]{}, 2126 (1997). D. S. Rokhsar, Phys. Rev. Lett. [**79**]{}, 2164 (1997). E. Lundh, J. P. Martikainen, and K. A. Suominen, Phys. Rev. A [**67**]{}, 063604 (2003). N. G. Parker and C. S. Adams, Phys. Rev. Lett. [**95**]{}, 145301 (2005); J. Phys. B [**39**]{}, 43 (2006). S. Stringari, Phys. Rev. Lett. [**77**]{}, 2360 (1996). A. Recati, F. Zambelli, and S. Stringari, Phys. Rev. Lett. [**86**]{}, 377 (2001). S. Sinha and Y. Castin, Phys. Rev. Lett. [**87**]{}, 190402 (2001). N.G. Parker, R.M.W. van Bijnen and A.M. Martin, Phys. Rev. A [**73**]{}, 061603(R) (2006). K. W. Madison, F. Chevy, V. Bretin, and J. Dalibard, Phys. Rev. Lett. [**86**]{}, 4443 (2001). J. E. Williams, E. Zaremba, B. Jackson, T. Nikuni, and A. Griffin, Phys. Rev. Lett. [**88**]{}, 070401 (2002). F. Dalfovo and S. Stringari, Phys. Rev. A [**63**]{}, 011601(R) (2001). T. Frisch, Y. Pomeau, and S. Rica, Phys. Rev. Lett. [**69**]{}, 1644 (1992). T. Winiecki, J. F. McCann, and C. S. Adams, Phys. Rev. Lett. [**82**]{}, 5186 (1999). B. Jackson, J. F. McCann, and C. S. Adams, Phys. Rev. Lett. [**80**]{}, 3903 (1998). C. Raman, M. Köhl, R. Onofrio, D. S. Durfee, C. E. Kuklewicz, Z. Hadzibabic, and W. Ketterle, Phys. Rev. Lett. [**83**]{}, 2502 (1999). R. Onofrio, C. Raman, J. M. Vogels, J. R. Abo-Shaeer, A. P. Chikkatur, and W. Ketterle, Phys. Rev. Lett. [**85**]{}, 2228 (2000). B. Jackson, J. F. McCann, and C. S. Adams, Phys. Rev. A [**61**]{}, 051603(R) (2000). B. M. Caradoc-Davies, R. J. Ballagh, and K. Burnett, Phys. Rev. Lett. [**83**]{}, 895 (1999). J. E. Williams and M. J. Holland, Nature [**401**]{}, 568 (1999). A. E. Leanhardt, A. Görlitz, A. Chikkatur, D. Kielpinski, Y. Shin, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. [**89**]{}, 190403 (2002). N. S. Ginsberg, J. Brand, and L. V. Hau, Phys. Rev. Lett. [**94**]{}, 040403 (2005). S. Komineas and N. Papanicolaou, Phys. Rev. A [**68**]{}, 043617 (2003). R. G. Scott, A. M. Martin, T. M. Fromhold, and F. W.
Sheard, Phys. Rev. Lett. [**95**]{}, 073201 (2005). R. G. Scott, A. M. Martin, S. Bujkiewicz, T. M. Fromhold, N. Malossi, O. Morsch, M. Cristiani, and E. Arimondo, Phys. Rev. A [**69**]{}, 033605 (2004). B. Jackson, J. F. McCann, and C. S. Adams, Phys. Rev. A [**60**]{}, 4882 (1999). N. G. Berloff and C. F. Barenghi, Phys. Rev. Lett. [**93**]{}, 090401 (2004). J. Ruostekoski and Z. Dutton, Phys. Rev. A [**70**]{}, 063626 (2005). L. C. Crasovan, V. M. Pérez-García, I. Danaila, D. Mihalache, and L. Torner, Phys. Rev. A [**70**]{}, 033605 (2004). T. P. Simula and P. B. Blakie, Phys. Rev. Lett. [**96**]{}, 020404 (2006). Z. Hadzibabic, P. Krüger, M. Cheneau, B. Battelier, and J. Dalibard, Nature [**441**]{}, 1118 (2006). M. J. Davis, S. A. Morgan, and K. Burnett, Phys. Rev. Lett. [**66**]{}, 053618 (2002). N. G. Berloff and B. V. Svistunov, Phys. Rev. A [**66**]{}, 013603 (2002). H. Lamb: [*Hydrodynamics*]{} (Cambridge University Press, 1932). M. Tsubota, T. Araki, and S. K. Nemirovskii, Phys. Rev. B [**62**]{}, 11751 (2000). K. W. Schwarz, Phys. Rev. B [**31**]{}, 5782 (1985). J. Koplik and H. Levine, Phys. Rev. Lett. [**71**]{}, 1375 (1993). M. Leadbeater, T. Winiecki, D. C. Samuels, C. F. Barenghi, and C. S. Adams, Phys. Rev. Lett. [**86**]{}, 1410 (2001). M. Leadbeater, D. C. Samuels, C. F. Barenghi, and C. S. Adams, Phys. Rev. A [**67**]{}, 015601 (2003). W. Thomson[ (Lord Kelvin)]{}, Philos. Mag. [**10**]{}, 155 (1880). G. W. Rayfield and F. Reif, Phys. Rev. [**136**]{}, A1194 (1964). A. A. Svidzinsky and A. L. Fetter, Phys. Rev. Lett. [**84**]{}, 5919 (2000). B. Jackson, J. F. McCann, and C. S. Adams, Phys. Rev. A [**61**]{}, 013604 (2000). E. Lundh and P. Ao, Phys. Rev. A [**61**]{}, 063612 (2000). S. A. McGee and M.J. Holland, Phys. Rev. A [**63**]{}, 043608 (2001). J. J. García-Ripoll and V. M. Pérez-García, Phys. Rev. A [**63**]{}, 041603 (2001). J. J. García-Ripoll and V. M. Pérez-García, Phys. Rev. A [**64**]{}, 053611 (2001). A. Aftalion and T. 
Riviere, Phys. Rev. A [**64**]{}, 043611 (2001). P. Rosenbusch, V. Bretin, and J. Dalibard, Phys. Rev. Lett. [**89**]{}, 200403 (2002). V. Bretin, P. Rosenbusch, F. Chevy, G. V. Shlyapnikov, and J. Dalibard, Phys. Rev. Lett. [**90**]{}, 100403 (2003). A. L. Fetter, Phys. Rev. A [**69**]{}, 043617 (2004). P. O. Fedichev and G. V. Shlyapnikov, Phys. Rev. A [**60**]{}, R1779 (1999). J. R. Abo-Shaeer, C. Raman, and W. Ketterle, Phys. Rev. Lett. [**88**]{}, 070409 (2002). S. I. Davis, P. C. Hendry, and P. V. E. McClintock, Physica B [**280**]{}, 43 (2000). D. P. Arovas and J. A. Freire, Phys. Rev. B [**55**]{}, 3104 (1997). W. F. Vinen, Phys. Rev. B [**61**]{}, 1410 (2000). E. Lundh and P. Ao, Phys. Rev. A [**61**]{}, 063612 (2000). L. M. Pismen: (Clarendon Press, Oxford, 1999). C. F. Barenghi, N. G. Parker, N. P. Proukakis, and C. S. Adams, J. Low. Temp. Phys. [**138**]{}, 629 (2005). N. G. Berloff, Phys. Rev. A [**69**]{}, 053601 (2004). B. M. Caradoc-Davies, R. J. Ballagh, and P. B. Blakie, Phys. Rev. A [**62**]{}, 011602 (2000). A. Griesmaier, J. Werner, S. Hensler, J. Stuhler, and T. Pfau, Phys. Rev. Lett. [**94**]{}, 160401 (2005). M. Marinescu and L. You, Phys. Rev. Lett. [**81**]{}, 4596 (1998). S. Yi and L. You, Phys. Rev. A [**61**]{}, 041604 (2000). K. Góral, K. Rzążewski, and T. Pfau, Phys. Rev. A [**61**]{}, 051601 (2000). S. Yi and H. Pu, Phys. Rev. A [**73**]{}, 061602(R) (2006). G. Ortiz and D. M. Ceperley, Phys. Rev. Lett. [**75**]{}, 4642 (1995). M. Sadd, G.V. Chester, and L. Reatto, Phys. Rev. Lett. [**79**]{}, 2490 (1997). N. G. Berloff and P. H. Roberts, J. Phys. A [**32**]{}, 5611 (1999). F. Dalfovo, Phys. Rev. B [**46**]{}, 5482 (1992). J. Stuhler, A. Griesmaier, T. Koch, M. Fattori, T. Pfau, S. Giovanazzi, P. Pedri, and L. Santos, Phys. Rev. Lett. [**95**]{}, 150406 (2005). D.H.J. O’Dell and C. Eberlein, Phys. Rev. A [**75**]{}, 013604 (2007). D.H.J. O’Dell, S. Giovanazzi, and C. Eberlein, Phys. Rev. Lett. [**92**]{}, 250401 (2004).
C. Eberlein, S. Giovanazzi, and D.H.J. O’Dell, Phys. Rev. A [**71**]{}, 033618 (2005). R.M.W. van Bijnen, D. H. J. O’Dell, N.G. Parker, and A.M. Martin, Phys. Rev. Lett. accepted, cond-mat/0602572 (2006). C. Barceló, S. Liberati and M. Visser, Living Rev. Rel. [**8**]{}, 12 (2005). W.G. Unruh, Phys. Rev. Lett. [**46**]{}, 1351 (1981). W.G. Unruh, Phys. Rev. D [**51**]{}, 2827 (1995). C. Barceló, S. Liberati and M. Visser, Class. Quant. Grav. [**18**]{}, 1137 (2001). S.W. Hawking, Nature [**248**]{}, 30 (1974). S.W. Hawking, Commun. Math. Phys. [**43**]{}, 199 (1975). J.D. Bekenstein and M. Schiffer, Phys. Rev. D [**58**]{}, 064014 (1998). C. Barceló, S. Liberati, and M. Visser, Phys. Rev. A [**68**]{}, 053613 (2003). T.R. Slatyer and C.M. Savage, Class. Quant. Grav. [**22**]{}, 3833 (2005). S. Basak and P. Majumdar, Class. Quant. Grav. [**20**]{}, 2929 (2003). S. Basak and P. Majumdar, Class. Quant. Grav. [**20**]{}, 3907 (2003). F. Federici, C. Cherubini, S. Succi, and M.P. Tosi, Phys. Rev. A [**73**]{}, 033604 (2006).
--- abstract: 'We present a preliminary analysis of HST-WFPC2 observations of globular cluster systems in the two brightest galaxies, UGC 9799 (cD) and NGC 1129 (non-cD), located in the center of rich clusters.' author: - 'Myung Gyoon Lee, Eunhyeuk Kim' - Doug Geisler - Terry Bridges - Keith Ashman title: A Comparative Study of Globular Cluster Systems in UGC 9799 and NGC 1129 --- UGC 9799 is a cD galaxy located in the center of the massive Abell 2052 cluster at z=0.035, and is known from ground-based observations to have the largest number of globular clusters (N(total) $\approx 46,000$) and the highest specific frequency of globular clusters ($S_N = 20\pm6$; Harris, Pritchet, & McClure 1995). On the other hand, NGC 1129 is a giant, but non-cD, galaxy located in the center of the rich cluster AWM7 at z=0.018. Its globular cluster system has not yet been studied. The foreground reddenings are known to be $E(V-I)=0.051$ for UGC 9799 and $E(V-I)=0.159$ for NGC 1129. We adopt the redshift distance modulus $(m-M)_0=36.0$ for UGC 9799 and $(m-M)_0=34.5$ for NGC 1129, based on a Hubble constant of $H_0 = 65$ km/s/Mpc. Deep images of these galaxies were obtained using the HST-WFPC2 with $F555W$ ($V$) and $F814W$ ($I$) filters. We obtained photometry of the point sources in images from which the bright galaxies were subtracted, using the HSTphot package (Dolphin 2000) and its image classification parameters. Our photometry reaches $V \approx 27.2$ mag and $I \approx 26.0$ mag at 50% completeness. Figure 1 displays the color-magnitude diagrams of the point sources in UGC 9799 and NGC 1129. A vertical structure at $0.8<(V-I)<1.5$, extending up to $I \approx 23$ mag, represents the globular clusters in these galaxies. Faint blue objects are mostly background compact galaxies.
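The adopted distance moduli can be checked against the quoted redshifts and Hubble constant (a rough sketch of ours; it ignores K-corrections and peculiar velocities):

```python
import math

def distance_modulus(z, H0=65.0, c=2.9979e5):
    """(m - M)_0 = 5 log10(d / 10 pc), with the Hubble-law
    distance d = c*z/H0 in Mpc (c in km/s, H0 in km/s/Mpc)."""
    d_mpc = c * z / H0
    return 5.0 * math.log10(d_mpc * 1e6 / 10.0)

print(distance_modulus(0.035))  # ~36.0 for UGC 9799
print(distance_modulus(0.018))  # ~34.6, close to the adopted 34.5 for NGC 1129
```

Both values agree with the adopted moduli to within about a tenth of a magnitude.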
Figure 2 shows the $(V-I)_0$ color distributions of the bright point sources with $V<26.5$ mag. The dominant peaks in Figure 2 are due to the globular clusters, and the color distributions of the globular clusters in both galaxies are similarly bimodal: a blue peak at $(V-I)_0=1.07$ (\[Fe/H\] = –0.8) and a red peak at $(V-I)_0=1.17$ (\[Fe/H\] = –0.4) for UGC 9799, and a blue peak at $(V-I)_0=1.02$ (\[Fe/H\] = –1.1) and a red peak at $(V-I)_0=1.17$ (\[Fe/H\] = –0.4) for NGC 1129. We find 860 bright globular clusters with $V<26.5$ mag in UGC 9799 and 1,060 in NGC 1129. The ratio of the number of blue globular clusters (BGCs) to red globular clusters (RGCs) is N(BGC)/N(RGC) = 1.8 for UGC 9799, higher than the value of 1.3 for NGC 1129. The surface number density profiles of the globular clusters show that the globular clusters in both galaxies are spatially more extended than the stellar halo, and the mean colors of the globular clusters are bluer than those of the stellar halo. The RGCs are found to be more centrally concentrated than the BGCs in both galaxies. Luminosity functions of the globular clusters (GCLFs) are derived after background subtraction and incompleteness correction, but they do not reach the turnovers, which are expected to be at $V\approx 28.7$ mag for UGC 9799 and $V\approx 27.5$ mag for NGC 1129. We estimate the total number of globular clusters from the GCLFs, obtaining N(total)=$10,000\pm 700$ for UGC 9799 and N(total)=$7,000\pm 700$ for NGC 1129. The total number of globular clusters in UGC 9799 derived in this study is much smaller than that derived from the ground-based observations by Harris et al. (1995). From the integrated photometry of the galaxies the total magnitudes are estimated to be $V=12.10$ mag and $I=10.77$ mag for UGC 9799 ($r<63''$), and $V=10.80$ mag and $I=9.41$ mag for NGC 1129 ($r<80''$).
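As a consistency check (our sketch, not part of the paper; the extinction coefficient $A_V \approx 2.4\,E(V-I)$ is an assumed standard value, not stated in the text), the integrated magnitudes, distance moduli and foreground reddenings quoted above recover the absolute magnitudes derived below:

```python
def abs_mag(V, mu0, E_VI, k=2.4):
    """M_V = V - (m - M)_0 - A_V, with A_V ~ k * E(V-I);
    k = 2.4 is an assumed A_V/E(V-I) ratio."""
    return V - mu0 - k * E_VI

M_ugc = abs_mag(12.10, 36.0, 0.051)  # ~ -24.02 (UGC 9799)
M_ngc = abs_mag(10.80, 34.5, 0.159)  # ~ -24.08 (NGC 1129)
```

Both values match the derived absolute magnitudes to within a few thousandths of a magnitude.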
Absolute total magnitudes of the galaxies are derived to be $M_V=-24.02$ mag and $M_I=-25.30$ mag for UGC 9799, and $M_V=-24.08$ mag and $M_I=-25.32$ mag for NGC 1129, showing that both are among the brightest known galaxies. Finally we estimate the specific frequency of the globular clusters, $S_N=N_t \times 10^{0.4(M_V +15)} = 2.5\pm 0.2$ for UGC 9799 and $S_N=1.7\pm 0.2$ for NGC 1129. These values are significantly lower than those for normal elliptical galaxies, and the value for UGC 9799 is much lower than that based on the ground-based observations (Harris et al. 1995). If we use the total magnitudes of the galaxies given in the literature ($M_V=-23.4$ mag for UGC 9799, $M_V=-22.88$ mag for NGC 1129), we get $S_N=4.4\pm 0.3$ for UGC 9799 and $S_N=5.0\pm 0.5$ for NGC 1129. This result is not consistent with the intracluster globular cluster model, which suggests that the globular clusters are bound not to individual galaxies but to the gravitational potential of the clusters (West et al. 1995). This research is supported in part by the MOST/KISTEP International Collaboration Research Program (1-99-009). Dolphin, A. E. 2000, , 112, 1383 Harris, W. E., Pritchet, C. J., & McClure, R. D. 1995, , 441, 120 West, M. J., Côté, P., Jones, C., Forman, W., & Marzke, R. O. 1995, , 453, L77
--- abstract: | We report extensive multi-station photometry of TT Boo during its June 2004 superoutburst. The amplitude of the superoutburst was about 5.5 mag and its length was over 22 days. The star showed a small re-brightening starting around the 9th day of the superoutburst. During the entire bright state we observed clear superhumps with amplitudes from 0.07 to 0.26 mag and a mean period of $P_{sh} = 0.0779589(47)$ days ($112.261\pm0.007$ min). The period was not constant but decreased at the beginning and end of the superoutburst and increased in the middle phase. We argue that the complicated shape of the $O-C$ diagram is caused by real period changes rather than by phase shifts. Combining the data from the 1989 and 2004 superoutbursts allowed us to trace the birth of the late superhumps, and we conclude that it is a rather quick process, lasting about one day. [**Key words:**]{} Stars: individual: TT Boo – binaries: close – novae, cataclysmic variables author: - | A.  O l e c h$^1$,  L.M.  C o o k$^2$,  K.  Z [ł]{} o c z e w s k i$^3$,  K.  M u l a r c z y k$^3$,\  P.  K ȩ d z i e r s k i$^3$,  A.  U d a l s k i$^3$  and   M.  W i [ś]{} n i e w s k i$^1$ date: | $^1$ Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, ul. Bartycka 18, 00-716 Warszawa, Poland\ [e-mail: (olech,mwisniew)@camk.edu.pl]{}\  \ $^2$ Center for Backyard Astrophysics (Concord),\ 1730 Helix Court, Concord, CA 94518, USA\ [e-mail: [email protected]]{}\  \ $^3$ Warsaw University Observatory, Al. Ujazdowskie 4, 00-476 Warszawa, Poland\ [e-mail: (kzlocz,kmularcz,pkedzier,udalski)@astrouw.edu.pl]{} title: | **Curious Variables Experiment (CURVE).\ TT Bootis - superhump period change pattern confirmed.** --- Introduction ============ $UBV$ photometry of TT Boo in quiescence was obtained by Szkody (1987).
She found values of $B-V=0.38$ and $U-B=-1.09$ mag, which are rather typical for dwarf novae of comparatively long periods but not for SU UMa stars, which usually have a $B-V$ color around zero. Howell & Szkody (1988) reported quiescent photometry of TT Boo, and their observations revealed light variations with a period near $111\pm5$ min and an amplitude of 0.2 mag. They also mention the observations of Thorstensen and Brownsberger, who observed the star in the bright state and found features that looked like superhumps with a tentative period of 97 min. This conflicts with the observations of Howell & Szkody (1988), because in all confirmed typical SU UMa stars the superhump period is slightly longer than the orbital period. Spectroscopy of TT Boo in outburst was obtained by Bruch (1989). More detailed observations of TT Boo in its bright state were performed during two nights of April 1993 by Kato (1995). He found clear superhumps with a period of $0.07811(5)$ days, confirming that TT Boo belongs to the SU UMa-type dwarf nova class. Despite its quite frequent outbursts and brightness at maximum of $m_V = 12.7$ mag, TT Boo is a poorly studied object. The best determination of the superhump period is based on only two nights of observations, and the orbital period of the system is not known. We were alerted to the ongoing outburst of TT Boo by Carlo Gualdoni’s VSNET outburst alert number 6316. He reported that on June 3.9792 UT the star was at magnitude 12.8.

Observations and Data Reduction
===============================

Observations of TT Boo reported in the present paper were obtained during two superoutbursts. Observations from August 1989 were collected at the Dominion Astrophysical Observatory (DAO), Victoria, B.C., Canada, with the 1.22-m telescope equipped with an RCA-2 CCD camera. A Johnson $V$ filter was used. The exposure times were 60 and 120 seconds on the first and second nights, respectively.
The data from 2004 were collected at two locations: the Ostrowik station of the Warsaw University Observatory and CBA Concord in Concord, a San Francisco suburb approximately 50 km east of the city. The Ostrowik data were collected using the 60-cm Cassegrain telescope equipped with a Tektronics TK512CB back-illuminated CCD camera. The scale of the camera was 0.76"/pixel, providing a 6.5’ x 6.5’ field of view. A full description of the telescope and camera was given by Udalski and Pych (1992). The CBA data were collected using an f/4.5 73-cm reflector operated at prime focus on an English cradle mount. Images were collected with a Genesis G16 camera using a KAF1602e chip giving a field of view of 14.3’ x 9.5’. Images were reduced using AIP4WIN software (Berry and Burnell 2000). In Ostrowik and CBA Concord the star was monitored in “white light” in order to be able to observe it also at minimum light of around 19 mag. We used two comparison stars: GSC 3047:313 ($RA = 14^h57^m55.03^s$, Decl.$ = +40^\circ45'17"$) and GSC 3047:41 ($RA = 14^h57^m43.0^s$, Decl.$ = +40^\circ45'06"$). CBA Concord exposure times were 15, 20 and 30 seconds depending upon the brightness of the star. The Ostrowik exposure times were from 90 to 150 seconds during the bright state and from 150 to 240 seconds at minimum light. A full journal of our CCD observations of TT Boo is given in Table 1. In 2004, we monitored the star for 69 hours during 25 nights and obtained 3924 exposures. In 1989, during two nights, we collected 366 exposures and followed the star for a total time of 8.47 hours.

  ----------------- -------- ------------- ------------- --------- -------------
  Date              No. of   Start         End           Length    Location
                    frames   2447000\. +   2447000\. +   \[hr\]
  1989 Aug 04/05    252      742.70840     742.89724     4.532     DAO
  1989 Aug 05/06    114      743.70928     743.87345     3.940     DAO
  Total             366      –             –             8.472
                             2453000\. +   2453000\. +
  2004 Jun 04/05    564      161.69325     161.94452     6.030     CBA Concord
  2004 Jun 05/06    564      162.70358     162.95793     6.104     CBA Concord
  2004 Jun 06/07    55       163.44986     163.54116     2.191     Ostrowik
  2004 Jun 07/08    48       164.46096     164.53105     1.682     Ostrowik
  2004 Jun 08/09    73       165.37321     165.52394     3.618     Ostrowik
  2004 Jun 09/10    128      166.35531     166.53572     4.330     Ostrowik
  2004 Jun 10/11    61       167.43234     167.51018     1.868     Ostrowik
  2004 Jun 12/13    78       169.35002     169.50455     3.709     Ostrowik
  2004 Jun 13/14    110      170.34641     170.53687     4.571     Ostrowik
  2004 Jun 14/15    286      171.69416     171.91756     5.362     CBA Concord
  2004 Jun 15/16    616      172.70006     172.94261     5.821     CBA Concord
  2004 Jun 16/17    412      173.69051     173.89688     4.953     CBA Concord
  2004 Jun 17/18    171      174.73610     174.82214     2.065     CBA Concord
  2004 Jun 18/19    15       175.70148     175.81575     0.301     CBA Concord
  2004 Jun 19/20    104      176.69015     176.77809     2.111     CBA Concord
  2004 Jun 20/21    375      177.69177     177.92182     5.521     CBA Concord
  2004 Jun 21/22    74       178.35158     178.53101     4.306     Ostrowik
  2004 Jun 21/22    160      178.69378     178.79589     2.451     CBA Concord
  2004 Jun 24/25    4        181.36495     181.38887     0.574     Ostrowik
  2004 Jun 29/30    1        186.36816     186.37024     0.002     Ostrowik
  2004 Jun 30/01    5        187.39702     187.40611     0.218     Ostrowik
  2004 Jul 02/03    6        189.45532     189.46587     0.253     Ostrowik
  2004 Jul 04/05    5        191.42160     191.44154     0.479     Ostrowik
  2004 Jul 06/07    5        193.39290     193.40401     0.267     Ostrowik
  2004 Jul 09/10    4        196.36391     196.37317     0.222     Ostrowik
  Total             3924     –             –             69.01
  ----------------- -------- ------------- ------------- --------- -------------

  : Journal of the CCD observations of TT Boo.

All the Ostrowik and DAO data reductions were performed using a standard procedure based on the IRAF[^1] package, and profile photometry was derived using the DAOphotII package (Stetson 1987). The typical accuracy of our measurements varied between 0.004 and 0.11 mag depending on the brightness of the object. The median value of the photometric errors was 0.015 mag.

General light curve
===================

Figure 1 shows the general light curve of TT Boo during our 2004 campaign.
The rough transformation to $V$ magnitude was made using the comparison star GSC 3047:313 ($RA = 14^h57^m55.3^s$, Decl.$ = +40^\circ45'17"$, $V = 12.862$, $B-V = 0.889$) and assuming that the $B-V$ color of TT Boo in superoutburst is around zero (Bruch and Engel 1994). We additionally assumed that the sensitivity of our detector in “white light” roughly corresponds to the Cousins R band (Udalski and Pych 1992) and used the Caldwell et al. (1993) transformation between $B-V$ and $V-R$ colors. CCD observations are marked with dots, while the open square corresponds to the observation of Carlo Gualdoni from June 3/4 reported to VSNET. He caught the star at the very beginning of the superoutburst, because AAVSO observations from June 1/2 found TT Boo below 17.5 mag. Thus we conclude that the superoutburst started on June 2 or 3. Our first observations, taken on June 4 between 4:38 and 10:40 UT, show a slight declining trend with a slope of 0.04 mag/day. During the next seven nights the decline was much steeper, with a slope of 0.096 mag/day. Around June 12/13 we noted a clear change of the slope to 0.073 mag/day. A similar phenomenon is often observed in other SU UMa stars, as noticed and summarized by Kato et al. (2003). Around 12 UT on June 21, TT Boo entered the final decline phase with a slope of 1.015 mag/day, reaching magnitude 18 around June 25. The entire superoutburst thus lasted 22-23 days. From June 22 to July 7 the star stayed at a brightness of 18 mag, and on July 9 it finally faded to its quiescent magnitude of around 19 mag.

Superhumps
==========

As shown in Fig. 2, superhumps were present in the light curve of the dwarf nova on all nights from June 4 till June 21 (HJD from 161 to 178). It is difficult to recognize them on June 22, when the star entered the final decline phase. On June 4 the superhumps had an amplitude of only 0.10 mag and a sinusoidal shape. This means that we caught TT Boo very close to the moment of birth of the superhumps.
On June 5 the star displayed fully developed, characteristic tooth-shaped modulations with an amplitude of 0.26 mag. The evolution of the amplitude of the superhumps and the shape of the light curve is shown in Fig. 3. This plot shows nightly light curves of TT Boo phased with the corresponding period (see next sections) and averaged in 0.02-0.05 phase bins. One can clearly see that large-amplitude, tooth-shaped variations were observed from June 5 to 10. Around June 11-12 (HJD 169-170) the amplitude significantly decreased and the secondary humps at phase around 0.3-0.5 became visible.

Power spectrum of 2004 data
---------------------------

From each light curve of TT Boo in superoutburst we removed a first- or second-order polynomial trend and analyzed them using [anova]{} statistics with a two-harmonic Fourier series (Schwarzenberg-Czerny 1996). The resulting periodogram is shown in the upper panel of Fig. 4. The most prominent peak is found at a frequency of $f_1=12.818\pm0.005$ c/d, which corresponds to a period of $P_{sh}=0.078015(30)$ days ($112.34\pm0.04$ min). The first harmonic of this frequency at $f_2=25.74\pm0.03$ c/d is also clearly visible. It was shown by Olech et al. (2004) that some SU UMa stars (e.g., IX Dra and ER UMa) also show modulations with their orbital periods during the entire superoutburst phase. To check if any other periodicity is present in the superoutburst light curve of TT Boo, we first removed the decreasing trend from our nightly observations and then grouped them into blocks containing 2-4 nights. Within the nights in one block the shape of the superhumps was similar. Then the data from each segment were fitted with the following sum: $$rel.~mag = A_0 + \sum^4_{j=1} A^1_j\sin(2j\pi t/P^*_{sh}+\phi^1_j)$$ where $P^*_{sh}$ is the superhump period determined for each block independently. In the next step, this analytic relation was removed from the light curve of each block.
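For a fixed trial period, the multi-harmonic model of Eq. (1) is linear in its coefficients once each term $A_j\sin(2j\pi t/P + \phi_j)$ is expanded into sine and cosine parts, so it can be fitted by ordinary linear least squares. The sketch below illustrates this on a synthetic superhump-like signal; it is not the [anova]{} statistic of Schwarzenberg-Czerny (1996) used for the periodograms, and all numbers are illustrative:

```python
import numpy as np

# For a fixed trial period P, a truncated Fourier series like Eq. (1) is
# linear in its coefficients if written with sine AND cosine terms
# (A_j sin(x + phi_j) = a_j sin(x) + b_j cos(x)), so it can be fitted by
# ordinary linear least squares.

def fit_harmonics(t, mag, period, n_harm=4):
    cols = [np.ones_like(t)]
    for j in range(1, n_harm + 1):
        x = 2 * np.pi * j * t / period
        cols.append(np.sin(x))
        cols.append(np.cos(x))
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, mag, rcond=None)
    return coef, design @ coef        # coefficients and the fitted model

# Synthetic superhump-like signal sampled with the mean TT Boo period.
rng = np.random.default_rng(0)
p_sh = 0.0779589                      # days
t = np.sort(rng.uniform(0.0, 0.3, 400))
mag = 0.1 * np.sin(2 * np.pi * t / p_sh) + 0.03 * np.cos(4 * np.pi * t / p_sh)
coef, model = fit_harmonics(t, mag, p_sh)
print(np.round(coef[:5], 3))          # recovers 0.1 (sin, j=1) and 0.03 (cos, j=2)
```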
The whole resulting light curve was again analyzed using [anova]{} statistics with a two-harmonic Fourier series. The result is shown in the lower panel of Fig. 4. This power spectrum is noisy, with the highest peak (not exceeding the $3\sigma$ level) at a frequency of $12.818\pm0.007$ c/d, which looks like a residual of the main superhump frequency rather than a real peak. Finally, we conclude that in the light curve of TT Boo in superoutburst we did not find any frequency other than that connected with the superhumps.

The $O-C$ analysis of 2004 data
-------------------------------

To check the stability of the superhump period and to determine its value we constructed an $O-C$ diagram. We decided to use the timings of the primary maxima, because they were almost always better defined than the minima. In total, we were able to determine 35 times of maxima; they are listed in Table 3 together with their errors, cycle numbers $E$ and $O-C$ values.

  ------------ ------------------------- -------- ------------
  Cycle        $HJD_{\rm max}-2453000$   Error    $O-C$
  number $E$                                      \[cycles\]
  0            161.7550                  0.0020   $-0.1466$
  1            161.8365                  0.0020   $-0.1012$
  2            161.9144                  0.0020   $-0.1019$
  13           162.7800                  0.0015   $+0.0015$
  14           162.8568                  0.0015   $-0.0133$
  15           162.9362                  0.0015   $+0.0052$
  22           163.4810                  0.0015   $-0.0064$
  35           164.4942                  0.0025   $-0.0095$
  47           165.4290                  0.0020   $-0.0184$
  48           165.5080                  0.0080   $-0.0050$
  59           166.3645                  0.0015   $-0.0183$
  60           166.4420                  0.0030   $-0.0241$
  61           166.5212                  0.0017   $-0.0082$
  73           167.4542                  0.0020   $-0.0401$
  98           169.4125                  0.0025   $+0.0799$
  99           169.4923                  0.0020   $+0.1036$
  111          170.4308                  0.0015   $+0.1422$
  112          170.5100                  0.0025   $+0.1581$
  129          171.8350                  0.0030   $+0.1546$
  141          172.7675                  0.0020   $+0.1162$
  142          172.8440                  0.0015   $+0.0975$
  143          172.9218                  0.0025   $+0.0955$
  153          173.7015                  0.0018   $+0.0971$
  154          173.7800                  0.0030   $+0.1041$
  155          173.8550                  0.0040   $+0.0661$
  167          174.7896                  0.0020   $+0.0547$
  192          176.7297                  0.0022   $-0.0587$
  205          177.7396                  0.0020   $-0.1042$
  206          177.8166                  0.0018   $-0.1164$
  207          177.8930                  0.0040   $-0.1364$
  213          178.3592                  0.0040   $-0.1562$
  214          178.4385                  0.0030   $-0.1390$
  215          178.5142                  0.0030   $-0.1680$
  218          178.7500                  0.0025   $-0.1432$
  232          179.8290                  0.0040   $-0.3024$
  ------------ ------------------------- -------- ------------

  : Times of maxima in the light curve of TT Boo during its 2004 superoutburst.

The least-squares linear fit to the data from Table 3 gives the following ephemeris for the maxima: $${\rm HJD}_{max} = 2453161.76643(57) + 0.0779575(48) \cdot E$$ indicating that the mean value of the superhump period is equal to 0.0779575(48) days ($112.259\pm0.007$ min). This is in good agreement with the value obtained from the power spectrum analysis. Combining both of our period determinations gives a mean superhump period of $P_{sh} = 0.0779589(47)$ days ($112.261\pm0.007$ min). The $O-C$ values computed according to the ephemeris (2) are listed in Table 3 and also shown in the upper panel of Fig. 5.

Period change pattern
---------------------

Until the mid-1990s all members of the SU UMa group seemed to show only negative superhump period derivatives (Warner 1995, Patterson et al. 1993). This was interpreted as a result of disk shrinkage during the superoutburst, thus lengthening its precession rate (Lubow 1992). This picture became more complicated when the first stars with $\dot P>0$ were discovered. Positive period derivatives were observed only in stars with short superhump periods close to the minimum orbital period for hydrogen-rich secondaries (e.g. SW UMa - Semeniuk et al. 1997, WX Cet - Kato et al. 2001a, HV Vir - Kato et al. 2001b) or for stars below this boundary (e.g. V485 Cen - Olech 1997, 1RXS J232953.9+062814 - Uemura et al. 2002). The diversity of $\dot P$ behavior is well represented in the $\dot P/P$ versus $P_{sh}$ diagram shown for example in Kato et al. (2003b) or Olech et al. (2003). This graph seems to suggest that short-period systems are characterized by positive period derivatives, while those with longer periods show negative period derivatives.
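The linear ephemeris and the $O-C$ values above can be reproduced from Table 3 with a few lines of Python. Note that this is a plain unweighted least-squares fit, whereas the quoted ephemeris may weight the timings by their errors, so the numbers agree only approximately:

```python
import numpy as np

# Linear ephemeris fit to the superhump maxima of Table 3:
# HJD_max = T0 + P_sh * E, followed by O-C residuals in cycles.
# Pairs (cycle number E, HJD_max - 2453000) transcribed from Table 3.
maxima = [
    (0, 161.7550), (1, 161.8365), (2, 161.9144), (13, 162.7800),
    (14, 162.8568), (15, 162.9362), (22, 163.4810), (35, 164.4942),
    (47, 165.4290), (48, 165.5080), (59, 166.3645), (60, 166.4420),
    (61, 166.5212), (73, 167.4542), (98, 169.4125), (99, 169.4923),
    (111, 170.4308), (112, 170.5100), (129, 171.8350), (141, 172.7675),
    (142, 172.8440), (143, 172.9218), (153, 173.7015), (154, 173.7800),
    (155, 173.8550), (167, 174.7896), (192, 176.7297), (205, 177.7396),
    (206, 177.8166), (207, 177.8930), (213, 178.3592), (214, 178.4385),
    (215, 178.5142), (218, 178.7500), (232, 179.8290),
]
E = np.array([m[0] for m in maxima], dtype=float)
T = np.array([m[1] for m in maxima])

P_sh, T0 = np.polyfit(E, T, 1)        # unweighted least squares
oc = (T - (T0 + P_sh * E)) / P_sh     # O-C expressed in cycles

print(f"P_sh = {P_sh:.7f} d = {P_sh * 1440:.3f} min")   # close to 0.0779575 d
```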
Recently, Olech et al. (2003) investigated the $O-C$ diagrams for stars such as KS UMa, ER UMa, V1159 Ori, CY UMa, V1028 Cyg, RZ Sge and SX LMi and claimed that most (probably almost all) SU UMa stars show a decreasing superhump period at the beginning and the end of the superoutburst but an increasing period in the middle phase. The $O-C$ diagram obtained for TT Boo seems to confirm this hypothesis. The superhump period change is quite complex, and rough fits of parabolas to the cycle intervals 0–35, 47–112 and 98–232 give period derivatives of $(-52.3\pm1.3) \times 10^{-5}$, $(12.3\pm4.8) \times 10^{-5}$ and $(-6.2\pm0.9) \times 10^{-5}$, respectively. It is interesting that the period changes seem to be correlated with changes in the amplitude of the superhumps and variations of the brightness of the star. This is clearly visible in Fig. 5, which shows the $O-C$ values, the amplitude variations in time and the superoutburst light curve after removing the mean long-term decline.

Period change, phase shift or both?
-----------------------------------

The problem with tracing period changes using $O-C$ diagrams is that slight and continuous phase shifts at constant period can mimic true period changes. In fact, exactly the same $O-C$ plots could be obtained for synthetic light curves in one case with a constant period and a time-dependent phase shift and in another case with constant phases and a period varying in time. Most recently, Pretorius et al. (2004) described the results of an extensive campaign on the new SU UMa-type variable SDSS J013701.06-091234.9. In their $O-C$ diagram of superhump maxima they describe the behavior as consistent with a constant period during the first week of the superoutburst and a continuous phase shift later on. However, detailed inspection of the $O-C$ values from the first part of the superoutburst seems to agree with a scenario of a decreasing period during the first three days and an increasing one during the next four.
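A parabola fit is the standard way to turn times of maxima into a period derivative; under the common convention $T(E) = T_0 + P_0 E + \frac{1}{2} P_0 \dot P E^2$ (with $\dot P$ dimensionless), the quadratic coefficient $c_2$ gives $\dot P = 2c_2/P_0$. The sketch below demonstrates this on synthetic timings under that assumed convention; it does not re-derive the values quoted above:

```python
import numpy as np

# Extracting a superhump period derivative by fitting a parabola to times of
# maxima, under the convention T(E) = T0 + P0*E + 0.5*P0*Pdot*E**2
# (Pdot dimensionless). Synthetic timings only; all numbers illustrative.

def period_derivative(E, T):
    c2, c1, c0 = np.polyfit(E, T, 2)
    P0 = c1                       # period at cycle E = 0
    return P0, 2.0 * c2 / P0      # Pdot = 2*c2 / P0

# Synthetic maxima with P0 = 0.078 d and Pdot = +1.2e-4.
P0_true, Pdot_true = 0.078, 1.2e-4
E = np.arange(0, 120, 3, dtype=float)
T = 161.75 + P0_true * E + 0.5 * P0_true * Pdot_true * E**2

P0, Pdot = period_derivative(E, T)
print(f"P0 = {P0:.6f} d, Pdot = {Pdot:.2e}")
```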
It is interesting that about a dozen days after maximum brightness the $O-C$ reaches a value of 0.5, indicating that at this moment the light modulations could be classified as late superhumps. These late superhumps have a significantly shorter period than the normal superhumps, indicating that, apart from possible phase shifts, a clear change of the period occurred. Do we observe late superhumps in the 2004 superoutburst of TT Boo? The answer seems to be ’no’. The $O-C$ values at the termination of the superoutburst are around $-0.3$. Even changing the ephemeris (2) to a longer period, better describing the large-amplitude superhumps observed between cycles 10 and 100, we obtain a phase shift between maxima only at the level of 0.35 cycle. As a trace of young late superhumps we may regard the modulations observed on June 22/23, when the star entered the final decline phase (see the next section for details). Do we observe a period change in the late stages of the TT Boo superoutburst? In this case, the answer seems to be ’yes’. First, the superhump period for nights with cycle numbers larger than 180 is shorter than the period of the large-amplitude superhumps observed at the beginning and in the middle of the superoutburst (just as in the case of SDSS J013701.06-091234.9). Second, if we group our observations into two-night segments and calculate the superhump period for each of these segments, we obtain a slightly decreasing pattern. A simple linear fit to the period values obtained for cycle numbers 100 and larger gives a slope of $(-6.6\pm2.5) \times 10^{-5}$, clearly consistent with the parabola fitted to the times of maxima in the same cycle interval, which gives a value of $(-6.2\pm0.9) \times 10^{-5}$. It is now clear that the simple model with a shrinking disk as the cause of negative superhump period derivatives is no longer valid.
A new model must contend with the following observational facts:

- the complex superhump period change patterns seen in TT Boo and other well observed SU UMa stars,

- the extreme values of superhump period derivatives observed, for example, in KK Tel (Kato et al., 2003b) and MN Dra (Nogami et al., 2003),

- the absence of superhump period changes in some SU UMa stars (for example IX Dra - Olech et al., 2004).

Late superhumps
===============

In August 1989 TT Boo was observed during two consecutive nights near the end of the superoutburst. Combining these data with the observations from June 2004 allowed us to trace the birth of the late superhumps. The upper panel of Fig. 6 shows the observations from June 21/22 and June 22/23 of 2004. When the star was at a $V$ magnitude between 14.3 and 14.7, it showed clear modulations with an amplitude of 0.085 mag and only a weak trace of secondary humps. The first four arrows point to the moments of maxima listed in Table 3. The last two arrows mark the expected times of maxima computed from the maximum at $E=218$ and a period of $P=0.0775$ days. One can clearly see that on June 22/23 the amplitude of the modulations did not change significantly in comparison with the previous night, but the secondary humps became strong enough to show an amplitude similar to the main maxima. What happens when the star fades to $V\approx16$ mag can be seen thanks to the 1989 data shown in the lower panel of Fig. 6. On August 4/5, when the star was at $V=14.4$ mag, we see the same behavior as in the corresponding stage of the 2004 superoutburst. The arrows mark the positions of the maxima of ordinary superhumps. The secondary humps are marginally visible. The amplitude of the modulations is about 0.1 mag. On the next night, when the star was at $V\approx16$ mag, the amplitude increased to over 0.3 mag. The last three arrows on the lower panel of Fig. 6 show the expected positions of ordinary hump maxima, and in this case they coincide with the secondary maxima.
The main maxima are thus shifted by 0.5 in phase in comparison with the previous night. Summing up, the behavior of TT Boo in the superoutbursts observed in 1989 and 2004 suggests that during the last stage of the plateau phase the period of the superhumps decreases continuously, producing a phase shift of about 0.3 in comparison with the large-amplitude superhumps observed at the beginning of the superoutburst. In this phase, we can still call the observed modulations ordinary superhumps, even in the case when the phase shift caused by the decrease of the period reaches a value of 0.5, as was observed in SDSS J013701.06-091234.9 (Pretorius et al. 2004). During the final decline stage, the amplitude of the secondary humps becomes comparable with the amplitude of the main modulations, and within one day these secondary humps grow into large-amplitude late superhumps. Thus the late superhumps are in fact shifted in phase by 0.5, but relative to the ordinary superhumps observed at the end of the superoutburst, not to those seen at the beginning. A similar situation was observed in the well studied dwarf nova VW Hyi (Schoembs & Vogt 1980, Vogt 1983), where late superhumps appeared after the rapid decline phase and caused a beat phenomenon due to the combination with the orbital hump.

Summary
=======

We described the results of the observations of TT Boo in two superoutbursts from 1989 and 2004. The main conclusions of our work are summarized below:

1. The amplitude of the 2004 June superoutburst was about 5.5 mag and it lasted just over 22 days.

2. The star showed a clear re-brightening around the 9th day of the superoutburst (see Kato et al. 2003a for a more detailed discussion of such a phenomenon).

3. During the two observed superoutbursts, we detected clear superhumps with a period of $P_{sh} = 0.0779589(47)$ days ($112.261\pm0.007$ min). No other periodicity was detected.

4.
The superhump period change is quite complex: the period decreases during the initial and final phases of the superoutburst and increases in the middle stage.

5. Combining the data from the two superoutbursts of 1989 and 2004 allowed us to trace the birth of the late superhumps, and we conclude that it is a rather quick process lasting about one day.

[**Acknowledgments.**]{}  We acknowledge the generous allocation of the Warsaw Observatory 0.6-m telescope time. Data from AAVSO and VSNET observers are also appreciated. This work was supported by KBN grant number 1 P03D 006 27 to AO and a BST grant to the Warsaw University Observatory. Observations of TT Boo in 1989 were supported by an NSERC (Canada) grant to Dr. S.M. Rucinski.

Berry, R., Burnell, J., 2000, The Handbook of Astronomical Image Processing, Willmann-Bell, Inc., Richmond, VA, USA.

Bruch A., 1989, A&A Suppl. Ser., 78, 145

Bruch A., Engel A., 1994, A&AS, 104, 79

Caldwell J.A.R., Cousins A.W.J., Ahlers C.C., van Wamelen P., Maritz E.J., 1993, SAAO Circ., 15, 1

Howell S.B., Szkody P., 1988, PASP, 100, 224

Kato T., 1995, IBVS no. 4243

Kato T., Masumoto K., Nogami D., Morikawa K., Kiyota S., 2001a, PASJ, 53, 893

Kato T., Sekine T., Hirata R., 2001b, PASJ, 53, 1191

Kato T., Nogami D., Moilanen M., Yamaoka H., 2003a, PASJ, 55, 989

Kato T., Santallo R., Bolt G. et al., 2003b, MNRAS, 339, 861

Kholopov P.N., Samus N.N., Frolov M. et al., 1998, Combined General Catalogue of Variable Stars, 4.1 Ed (II/214A)

Lubow S.H., 1992, ApJ, 401, 317

Meinunger L., 1966, Mitt. Veränderl. Sterne, 3, 113

Nogami D., Uemura M., Ishioka R. et al., 2003, A&A, 404, 1067

Olech A., 1997, Acta Astron., 47, 281

Olech A., Schwarzenberg-Czerny A., P. Kȩdzierski, K. Z[ł]{}oczewski, K. Mularczyk, M. Wiśniewski, 2003, Acta Astron., 53, 175

Olech A., K. Z[ł]{}oczewski, K. Mularczyk, P. Kȩdzierski, M. Wiśniewski, G.
Stachowski, 2004, Acta Astron., 54, 57 Patterson J., Bond H.E., Grauer A.D., Shafter A.W., Mattei J.A., 1993, PASP, 105, 69 Pretorius M.L., Woudt P.A., Warner B., Bolt G., Patterson J., Armstrong E., 2004, MNRAS, in print, astro-ph/0405202 Schoembs R., Vogt N., 1980, A&A, 91, 25 Schwarzenberg-Czerny A., 1996, ApJ Letters, 460, L107 Semeniuk I., Olech A., Kwast T., Należyty M., 1997, Acta Astron., 47, 201 Shapley H., 1923, Harvard Coll. Obs. Bull. no. 791 Stetson P.B., 1987, PASP, 99, 191 Szkody P., 1987, ApJ Suppl. Ser., 63, 685 Udalski A., Pych W., 1992, Acta Astron., 42, 28 Uemura M., Kato T., Ishioka I. et al., 2002, PASJ, 54, 599 Vogt N., 1983, A&A, 118, 95 Warner B., 1995, [*Cataclysmic Variable Stars*]{}, Cambridge University Press [^1]: IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation.
--- author: - | Saskia Becker\ Weierstraß Institute for Applied Analysis and Stochastics\ Mohrenstraße 39, 10117 Berlin, Germany\ [email protected] title: | **Regularization of statistical inverse problems\ and the Bakushinski[ĭ]{} veto** --- > **Abstract.** In the deterministic context, Bakushinski[ĭ]{}’s theorem excludes the existence of purely data-driven convergent regularization for ill-posed problems. We will prove in the present work that in the statistical setting we can either construct a counterexample or develop an equivalent formulation, depending on the considered class of probability distributions. Hence, Bakushinski[ĭ]{}’s theorem does not generalize to the statistical context, although this has often been assumed in the past. To arrive at this conclusion, we will deduce from the classic theory new concepts for a general study of statistical inverse problems and perform a systematic clarification of the key ideas of statistical regularization.

Introduction
============

We consider statistical inverse problems, where an unknown signal $x$ is to be reconstructed from indirect noisy measurements $ y_{\mathrm{noise}} = Tx + \mathrm{noise}$. The problem is assumed to be ill-posed, i.e. the operator $T$ is not continuously invertible, so that we can only approximate the signal. In classic inverse problems the noise is supposed to be deterministic and bounded. Nevertheless, it is well known that various applications cannot be modeled appropriately in this way. Therefore, stochastic models have been introduced, where the noise is taken as a random variable or a stochastic process [@MR2505875; @MR2361904; @hofinger; @MR2503292]. In some studies, e.g. [@MR2158113; @hofinger; @MR859375], not only the noise but also the operator or the signal is stochastic. In both the deterministic and the stochastic setting, one crucial point is the knowledge of the noise level, which is often not available in applications.
However, the Bakushinski[ĭ]{} veto [@MR760252] states for classic inverse problems the equivalence of the ill-posedness of the problem and the nonexistence of purely data-driven reconstruction methods whose approximate solutions tend to the exact signal $x$ as the noise vanishes. This theorem is of particular importance since it establishes the need for supplemental information, as for instance the noise level. For statistical inverse problems the situation is ambiguous, as we will discuss in the paper at hand. To study the existence of such reconstruction methods we need explicit definitions of the involved objects. While an extensive theory for classic inverse problems has been developed [@MR1408680; @MR742928; @MR859375], only selected aspects of statistical inverse problems have been analyzed so far. An additional difficulty, arising from the possible unboundedness of stochastic noise, is the need for new error and convergence criteria [@MR2361904; @MR1929274; @MR2503292; @MR874480]. Cavalier explained in [@MR2421941] how concepts of nonparametric statistics, e.g. the white noise model, risk estimation and model selection, can be applied to inverse problems. We will proceed in reverse by studying how the key ideas of the classic inversion theory have to be modified to make them suitable for a statistical setting. First of all we give a brief recapitulation of the classic regularization theory, in which we suggest in particular a reduction of the usually required convergence properties. Our statistical setting is introduced in section \[sec:setting\], followed by the presentation of the main concepts and the central definition in section \[sec:kindsConv\].
There we propose to link the noise to the asymptotic of the noise level, which will turn out to be the deciding idea for definition \[def:stKonvRV\] of convergent statistical regularization methods and our main result stated in section \[sec:stBak\]: We prove an equivalent formulation and give a counterexample to Bakushinski[ĭ]{}’s theorem depending on the considered class of probability distributions. Classic inverse problems {#sec:clIP} ======================== We consider the usual setting of classic inverse problems. Let $\mathbb{H}_{1}$ and $\mathbb{H}_{2}$ denote separable Hilbert spaces with scalar products $\langle .,. \rangle_{\mathbb{H}_{i}}$ and the induced norms $\Vert . \Vert_{\mathbb{H}_{i}}$, $i=1,2$. Further let $T: \mathbb{H}_{1} \rightarrow \mathbb{H}_{2}$ be a linear, compact and bounded operator with a nonclosed range $\mathcal{R}(T)$. We are interested in the problem $$\label{eq:detyd} y_{\delta} = Tx + \delta \xi,$$ where $x \in \mathbb{H}_{1}$ denotes the unknown signal, $\delta > 0$ is the noise level and the normalized noise $\xi \in \mathbb{H}_{2}$ satisfies $\Vert \xi \Vert_{\mathbb{H}_{2}} \leq 1$. With $\ker(T)^{\perp}$ as orthogonal complement of the kernel of $T$ we can define the generalized inverse $T^{+}$ as the linear extension of the inverse of $ T\vert_{\ker(T)^{\perp}} $. A motivation and some properties of the generalized inverse can be found e.g. in [@MR1408680]. Since the range of $T$ is assumed to be nonclosed, $T^{+}$ is discontinuous and $x^{+} := T^{+} y \in \mathbb{H}_{1}$ has to be regularized. In the following subsection we will not present the common definition of (convergent) regularization methods given in [@MR1408680], but the definitions introduced by Hofmann and Mathé in [@MR2318806]. Research has shown that purely data driven regularization methods can yield remarkably good results, see for instance [@MR2175028; @MR1408680], although these methods are not convergent as the Bakushinski[ĭ]{} veto proves. 
This teaches us to distinguish convergent and arbitrary regularization schemes, as is done in the following approach.

Linear and convergent regularization schemes {#sec:LinStReg}
--------------------------------------------

Let $ \left\{ \left( s_{j}; v_{j}, u_{j} \right) \right\}_{j \in \mathbb{N}}$ denote the singular system of the operator $T$, where $\left\{ s_j \right\}_{j \in \mathbb{N}}$ is arranged in decreasing order with $ \underset{j \rightarrow \infty}{\lim} s_{j} = 0$. The following series expansion holds: $$Tx = \sum_{j \geq 1} s_{j} \left\langle x, v_{j} \right\rangle_{\mathbb{H}_{1}} u_{j}$$

\[def:klRV\] A family $ F := \left\{ F_{\alpha} \right\}_{\alpha>0} $ of bounded functions $F_{\alpha}: \left[ 0, \left\| T \right\|^{2} \right] \rightarrow \mathbb{R}$ is called a regularization (filter) if the following properties hold:

(1) The associated bias family $\left\{ b_{\alpha} \right\}_{\alpha>0}$, where $b_{\alpha}(\vartheta) := 1 - \vartheta F_{\alpha}(\vartheta)$, converges pointwise to zero: $\underset{\alpha \rightarrow 0}{\lim} \, b_{\alpha}(s_{j}^2 ) = 0$ for all $s_j>0$.

(2) The bias family is uniformly bounded by some $\gamma_{0} > 0$, i.e. $\underset{\alpha \leq \alpha_{0}}{\sup} \;\underset{s_{j}>0}{\sup} \left| b_{\alpha}(s_j^2) \right| \leq \gamma_{0}$.

(3) There is a constant $\gamma_{*} > 0$ such that the filter family can be normalized for all $\alpha \in (0,\infty)$ and $s_{j}>0$ by $ s_{j} \left| F_{\alpha}(s_{j}^2) \right| < \gamma_{*} / \sqrt{\alpha}$.
In this case, the family $ R := \left\{ R_{\alpha} \right\}_{\alpha > 0} $ of linear and bounded operators $ R_{\alpha} : \mathbb{H}_{2} \rightarrow \mathbb{H}_{1}$ with $$\label{eq:klROperator} R_{\alpha} y := x_{\alpha} := F_{\alpha}(T^{*}T)T^{*}y = \sum_{s_{j} > 0} F_{\alpha}(s_{j}^{2}) s_j \left\langle y, u_{j} \right\rangle_{\mathbb{H}_{2}} v_{j}, \, y \in \mathbb{H}_{2},$$ is called a linear regularization scheme (in short: regularization), where the last equation follows from the functional calculus described in [@MR1408680]. Below, we will use without further comment the notations $$F := \left\{ F_{\alpha} \right\}_{\alpha>0} \qquad \text{ and } \qquad R := \left\{ R_{\alpha} \right\}_{\alpha > 0}.$$ \[ex:RVen\] The given definition is satisfied by many of the known linear regularizations in the sense of [@MR1408680], such as spectral cut-off, which is defined by $$F_{\alpha}(\vartheta) := \vartheta^{-1} \, \chi_{\left( \alpha, \left\| T \right\|^{2} \right) } (\vartheta) \quad \text{ such that } x_{\alpha} = R_{\alpha}y = \sum_{s_{j}^{2} > \alpha} s_{j}^{-1} \left\langle y, u_{j} \right\rangle v_{j},$$ where $\chi$ denotes the indicator function, $\alpha, \vartheta \in \left( 0, \left\| T \right\|^{2} \right] $, and Tikhonov regularization with $$F_{\alpha}(\vartheta) := 1/(\alpha + \vartheta), \text{ such that } x_{\alpha} = R_{\alpha}y = (\alpha I + T^{*}T)^{-1} T^{*} y.$$ \[rem:linRVen\] Later on we will require a stricter bound instead of property (3) of definition \[def:klRV\]: $$\label{eq:Vainikko3} \sup_{0 < \vartheta \leq \left\| T \right\|^{2}} \vert F_{\alpha} (\vartheta) \vert \leq \tfrac{\gamma}{\alpha}, \, \gamma > 0.$$ It is easy to show that the given examples satisfy this property, too. In [@MR859375] it is shown that (3) follows if (2) and (\[eq:Vainikko3\]) hold. As a generalization, we could also require that the index family of $F$ is an arbitrary subset of the real numbers with at least one accumulation point, say $h \in \mathbb{R}$.
Then property (1) has to be reformulated in the following way: $\underset{\alpha \rightarrow h }{\lim} \, b_{\alpha}(s_j^2) = 0$ for all $ s_j>0 $. We cannot skip it completely because it yields the following important proposition.

\[thm:R-pktw-T+\] Let $ R $ denote a linear regularization and $\mathcal{D}(T^{+})$ the domain of the generalized inverse $T^{+}$ of $T$. If $y \in \mathcal{D}(T^{+})$, then $ \underset{\alpha \leq \alpha_{0}}{\sup} \left\| x_{\alpha} \right\|_{\mathbb{H}_{1}} < \infty \text{ and } x_{\alpha} = R_{\alpha}y \rightarrow T^{+}y \text{ when } \alpha \rightarrow 0. $ If $y \notin \mathcal{D}(T^{+})$, then $\underset{\alpha \rightarrow 0}{\lim} \left\| x_{\alpha} \right\|_{\mathbb{H}_{1}} = \infty $. In particular we get for all $y \in \mathbb{H}_{2}$ that $ \underset{\alpha \rightarrow 0}{\lim}\, T R_{\alpha}y = TT^{+}y = Qy$, where $Q: \mathbb{H}_{2} \rightarrow \overline{\mathcal{R}(T)}$ denotes the orthogonal projection onto $\overline{\mathcal{R}(T)}$. A similar result can be found in [@MR1408680 proposition 3.6].

Convergence in general and especially convergence rates are established quality criteria for the comparison of regularization schemes. Normally, one requires that the regularized solution $x_{\alpha}$ converge uniformly to the exact one if the error tends to zero:

\[def:klPW\]\[def:klKonvRV\] Let $ R $ denote a linear regularization scheme and $\alpha: (0, \infty) \times \mathbb{H}_{2}\rightarrow (0, \infty)$ a function. If for all $y \in \mathcal{D}(T^{+})$ it holds that $$\lim_{\delta \rightarrow 0} \left( \sup \left\{ \alpha(\delta, y_{\delta}) : y_{\delta} \in \mathbb{H}_{2}, \left\| y - y_{\delta} \right\|_{\mathbb{H}_{2}} \leq \delta \right\} \right) = 0,$$ then $\alpha$ is called a (classic) parameter choice. In particular we will say: $\alpha$ is purely data driven or heuristic if it depends only on the data, i.e. $\alpha = \alpha(y_{\delta})$; $\alpha$ is (classic) convergent w.r.t.
$R$ if for all $y \in \mathcal{D}(T^{+})$ it holds that $$\lim_{\delta \rightarrow 0} \left( \sup \left\{ \left\| T^{+}y - R_{\alpha (\delta, y_{\delta}) } y_{\delta} \right\|_{\mathbb{H}_{1}} : y_{\delta} \in \mathbb{H}_{2}, \left\| y - y_{\delta} \right\|_{\mathbb{H}_{2}} \leq \delta \right\} \right) = 0.$$ The pair $(R, \alpha )$ of a linear regularization $R $ and a parameter choice $\alpha$ is called a (classic) convergent regularization method of $T^{+}$ if $\alpha$ is convergent w.r.t. $R$. Here, we applied the usual error criterion for classic inverse problems: $$e(R, \alpha, x, \delta) := \sup \left\{ \left\| T^{+}y - R_{\alpha (\delta, y_{\delta}) } y_{\delta} \right\|_{\mathbb{H}_{1}} : y_{\delta} \in \mathbb{H}_{2}, \left\| y - y_{\delta} \right\|_{\mathbb{H}_{2}} \leq \delta \right\} \text{, where } y = Tx.$$ Many parameter choice strategies depend on the applied regularization scheme $R$, which is why we should write $\alpha(R, \delta, y_{\delta})$. However, for simplicity we will use $\alpha(\delta, y_{\delta})$ instead.

\[ex:detPW\] The discrepancy principle [@MR1408680; @MR0208819] is a good example of a parameter choice which is very common for classic inverse problems but cannot be applied in the statistical setting, as we will explain in remark \[rem:DPinStIP\]. It chooses the regularization parameter for a given regularization scheme $R$ and a fixed constant $\tau>1$ by setting $$\alpha_{*} := \sup \left\{ \alpha \leq \left\| T^{*} T \right\| : \left\| T R_{\alpha} y_{\delta} - y_{\delta} \right\| \leq \tau \delta \right\}.$$ There, as in most of the established convergent methods, knowledge of the noise level $\delta$ is needed. In contrast, the quasi-solution of Ivanov [@MR2010817] yields a convergent regularization assuming instead an upper bound for the norm of the exact solution.
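The discrepancy principle can be sketched in a few lines for a diagonal (sequence space) model with Tikhonov regularization. This is an illustrative toy example, not the general procedure: the singular values, the deterministic noise vector, $\tau$ and the geometric search grid are all assumptions made for the sketch.

```python
import numpy as np

# Diagonal SVD model: (T x)_j = s_j x_j, exact data y = T x.
s = 1.0 / np.arange(1, 101)
x_true = 1.0 / np.arange(1, 101)
y = s * x_true

delta, tau = 1e-3, 1.1
xi = np.ones_like(y) / np.sqrt(len(y))     # deterministic unit "noise"
y_delta = y + delta * xi

def residual_norm(alpha):
    # For Tikhonov, T R_a y_d - y_d = -alpha/(alpha + s^2) * y_d coordinatewise.
    return np.linalg.norm(alpha / (alpha + s**2) * y_delta)

# Discrepancy principle: largest alpha on a geometric grid with
# ||T R_alpha y_delta - y_delta|| <= tau * delta.
grid = np.max(s)**2 * 0.5**np.arange(60)   # from ||T||^2 downwards
alpha_star = next(a for a in grid if residual_norm(a) <= tau * delta)

x_alpha = s / (alpha_star + s**2) * y_delta  # regularized solution
assert residual_norm(alpha_star) <= tau * delta
```

Since the residual norm is monotonically increasing in $\alpha$ for Tikhonov, the first grid point that satisfies the discrepancy inequality approximates the supremum in the definition of $\alpha_{*}$.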
Well-known purely data driven parameter choices are the L-curve criterion of Hansen [@MR1193012], the generalized cross-validation of Wahba [@MR1045442] and quasi-optimality [@MR0455365].

\[thm:detBak\] A purely data driven (classic) convergent regularization method exists if and only if the generalized inverse $T^{+}$ is continuous.

With a purely data driven (classic) convergent regularization method $(R, \alpha)$ we necessarily get for exact data that $T^{+} y = R_{\alpha(y)} y$ for all $y \in \mathcal{D}(T^{+})$, so that for arbitrary sequences $\left\{y_{n}\right\}_{n\in\mathbb{N}} \subseteq \mathcal{D}(T^{+})$ with $ \underset{n\rightarrow \infty}{\lim} y_{n} = y$ it holds that $ \underset{n\rightarrow \infty}{\lim} T^{+} y_{n} = \underset{n\rightarrow \infty}{\lim} R_{\alpha(y_{n})}y_{n} = T^{+} y$, which yields the well-posedness of the problem.

Reduction of the requirements
-----------------------------

In the statistical setting we cannot require uniform convergence as we do in the deterministic context, since the noise may be unbounded. The resulting question is whether the convergence criterion could also be weakened for classic inverse problems. We want to ensure that the approximate solution of the problem converges to the exact one if the noise tends to zero. But for that purpose we do not need to include the supremum, as is done in definition \[def:klPW\]; it is only a technical simplification. Additionally, we want to drop the requirement that the function $\alpha$ has to converge to zero if the noise vanishes. In fact, it is unimportant how $\alpha$ behaves as long as (\[eq:gKonRven\]) is satisfied.
\[def:konvRV\] The pair $(R, \alpha )$ of a linear regularization $R$ and a function $\alpha: (0, \infty) \times \mathbb{H}_{2}\rightarrow (0, \infty)$ is called a (generally) convergent regularization of $T^{+}$ if the regularized solution converges to the exact one in the following sense: For all $ \left\{ y^{(k)} \right\}_{k \geq 1} $ with $y^{(k)} := y + \delta^{(k)} \xi^{(k)} $, $\delta^{(k)}> 0$, $ \left\| \xi^{(k)} \right\|_{\mathbb{H}_{2}} \leq 1 $ and $\underset{k \rightarrow \infty}{\lim} \delta^{(k)} = 0$ we have $$\label{eq:gKonRven} \lim_{k \rightarrow \infty} \left\| T^{+}y - R_{\alpha (\delta^{(k)}, y^{(k)}) } y^{(k)} \right\|_{\mathbb{H}_{1}} = 0.$$

\[rem:pointw\] In order to achieve an easier notation, one could be tempted to require only pointwise convergence. But this would mean fixing the noise and varying only the noise level, which forms a considerable and unrealistic restriction.

\[rem:detBak\] As the supremum is not necessary for the proof of theorem \[thm:detBak\], an equivalent statement can be verified analogously for generally convergent regularizations.

Statistical inverse problems {#sec:stIP}
============================

In this section we provide new concepts for a general study of statistical inverse problems. The main idea is to link the noise to the asymptotics of the noise level by varying its probability distribution.

Statistical setting {#sec:setting}
-------------------

In recent publications about statistical inverse problems one can find two models of stochastic noise: random variables [@hofinger; @MR2503292] and Hilbert-space processes [@MR2505875; @MR2361904]. As every Hilbert-space valued random variable with finite second moment can be identified with a Hilbert-space process, we will mostly concentrate on the latter.
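Condition (\[eq:gKonRven\]) can be illustrated numerically. The following sketch, under assumed toy data (diagonal model, Tikhonov regularization, the simple a-priori choice $\alpha(\delta, y_\delta) := \delta$, a fixed bounded noise direction), shows the reconstruction error shrinking along a sequence of vanishing noise levels.

```python
import numpy as np

# Diagonal model: singular values s_j, exact solution x^+, data y = T x^+.
s = 1.0 / np.arange(1, 201)
x_plus = 1.0 / np.arange(1, 201) ** 2      # smooth enough exact solution
y = s * x_plus

def tikhonov_solution(y_obs, alpha):
    """x_alpha = (alpha I + T*T)^{-1} T* y_obs, coordinatewise s/(alpha+s^2)."""
    return s / (alpha + s**2) * y_obs

def error(delta):
    xi = np.ones_like(y) / np.sqrt(len(y))  # bounded deterministic noise, ||xi|| = 1
    x_alpha = tikhonov_solution(y + delta * xi, alpha=delta)  # a-priori alpha = delta
    return np.linalg.norm(x_plus - x_alpha)

errs = [error(d) for d in (1e-1, 1e-2, 1e-3, 1e-4)]
# The reconstruction error decreases monotonically along this sequence.
assert all(e1 > e2 for e1, e2 in zip(errs, errs[1:]))
```

The choice $\alpha(\delta) = \delta$ is only one admissible rule; any $\alpha$ with $\alpha \to 0$ and $\delta/\alpha$ bounded behaves similarly in this toy model.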
A Hilbert-space process is a linear and continuous operator $$\Xi: \mathbb{H}_{2} \rightarrow L^{2}(\Omega, \mathcal{F}, \mathbb{P}), \; v \mapsto \Xi v =: \left\langle \Xi, v \right\rangle_{\mathbb{H}_{2}},$$ where $(\Omega, \mathcal{F}, \mathbb{P})$ denotes a probability space, $\mathcal{B}_{\mathcal{T}}$ the Borel-$\sigma$-algebra generated by the topological space $\mathcal{T}$, and $$L^{2}(\Omega, \mathcal{F}, \mathbb{P}) := \left\{ Z: (\Omega, \mathcal{F}, \mathbb{P} ) \rightarrow (\mathbb{R}, \mathcal{B}_{\mathbb{R}}) \text{ square-integrable random variable} \right\}.$$ The covariance $ \mathrm{Cov}_{\Xi}: \mathbb{H}_{2} \rightarrow \mathbb{H}_{2} $ of a Hilbert-space process $\Xi$ is implicitly defined by $$\left\langle \mathrm{Cov}_{\Xi} y_{1}, y_{2} \right\rangle_{\mathbb{H}_{2}} = \mathrm{Cov} \left( \left\langle \Xi, y_{1} \right\rangle_{\mathbb{H}_{2}} , \left\langle \Xi, y_{2} \right\rangle_{\mathbb{H}_{2}} \right), \, y_{1}, y_{2} \in \mathbb{H}_{2}.$$ Hence it is a bounded and linear operator.

\[ex:GWR\] A centered Hilbert-space process $\Xi$ with the identity operator as covariance is called a white noise process. In this case $\Xi$ is Gaussian if the associated random variables are Gaussian, i.e. if $ \left\langle \Xi, v \right\rangle_{\mathbb{H}_{2}} \sim \mathcal{N} \left( 0, \left\| v \right\|^{2}_{\mathbb{H}_{2}} \right)$. Inverse problems with Gaussian white noise have been studied e.g. in [@MR2503292; @MR2438944; @MR2240642].

\[ass:Xi\] We assume $\Xi: \mathbb{H}_{2} \rightarrow L^{2}(\Omega, \mathcal{F}, \mathbb{P})$ to be a centered Hilbert-space process with $ \mathbb{E} \left[ \left\langle \Xi, v \right\rangle_{\mathbb{H}_{2}} \right] = 0 $ for all $ v \in \mathbb{H}_{2} $ and $ \left\| \mathrm{Cov}_{\Xi} \right\| < \infty $.

\[not:ObsModel\] Let $\Xi$ be as in assumption \[ass:Xi\].
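The defining property $\left\langle \Xi, v \right\rangle \sim \mathcal{N}(0, \|v\|^2)$ of Gaussian white noise can be checked by Monte Carlo in a finite-dimensional discretization. This is a numerical illustration only; the dimension, sample size and seed are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 20, 50_000

# Discretized Gaussian white noise: coordinates <Xi, e_j> are iid N(0,1),
# so Cov_Xi is the identity and <Xi, v> ~ N(0, ||v||^2).
Xi = rng.standard_normal((n_samples, dim))
v = np.arange(1.0, dim + 1)
v /= np.linalg.norm(v)                      # a unit test vector

samples = Xi @ v                            # realizations of <Xi, v>
assert abs(samples.mean()) < 0.05           # centered
assert abs(samples.var() - 1.0) < 0.05      # variance ||v||^2 = 1

# The empirical covariance operator is close to the identity.
emp_cov = Xi.T @ Xi / n_samples
assert np.max(np.abs(emp_cov - np.eye(dim))) < 0.1
```

With 50,000 samples the sampling fluctuations of mean and variance are of order $10^{-2}$, so the tolerances above are comfortably met.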
We consider the following abstract observation model: $$\label{eq:ObsModel} Y_{\delta} = y + \delta \Xi, \, \text{ where } y \in \mathcal{D}(T^{+}) \text{ and } \delta > 0.$$ The realizations of $\Xi$, and thus of $Y_{\delta}$, do not have to be in $\mathbb{H}_{2}$ because $\Xi$ is only a weak random element of $\mathbb{H}_{2}$. As a consequence, several basic concepts have to be revised:

\[not:InterprPXi\] We want to generalize the notation $\mathbb{P}^{\Xi}$ of image measures from random variables to Hilbert-space processes. Let $ \Xi $ be a Hilbert-space process. Then we interpret $\mathbb{P}^{\Xi}$ as the probability measure which is well-defined by its finite-dimensional marginal distributions on the space $(\mathbb{R}^{ \mathbb{H}_{2} }, (\mathcal{B}_{\mathbb{R}})^{ \otimes \mathbb{H}_{2} })$, where $ \mathbb{R}^{ \mathbb{H}_{2} } $ denotes the space of all functions $f: \mathbb{H}_{2} \rightarrow \mathbb{R}$ and $ (\mathcal{B}_{\mathbb{R}})^{ \otimes \mathbb{H}_{2} } $ the associated product $\sigma$-algebra. The existence and uniqueness of $\mathbb{P}^{\Xi}$ are ensured by the Kolmogorov extension theorem [@MR1083357].

\[def:Rauschpegel\] The definitions of the noise level for classic and statistical inverse problems differ significantly. The noise level $\delta$ of an inverse problem is defined as the scale factor of the noise $\xi$ or $\Xi$, respectively, such that

-   $ \left\| \xi \right\|_{\mathbb{H}_{2}} \leq 1$ and therefore $\left\| y - y_{\delta} \right\|_{\mathbb{H}_{2}} \leq \delta$ for all $\delta > 0$ if $y_{\delta}$ is as in (\[eq:detyd\]),

-   $ \mathbb{E} \left[ \left\| \Xi \right\|_{\mathbb{H}_{2}}^{2} \right] \leq 1 $ and therefore $\mathbb{E} \left[ \left\| y - Y_{\delta} \right\|_{\mathbb{H}_{2}}^{2} \right] \leq \delta^{2} $ for all $\delta > 0$ if $Y_{\delta} \in L^2(\Omega, \mathbb{H}_2)$,

-   $\left\| \mathrm{Cov}_{\Xi} \right\|^{1/2} \leq 1$ if $Y_{\delta}$ is as in notation \[not:ObsModel\].
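The second and third normalizations of the noise level are genuinely different, which a tiny finite-dimensional example makes explicit: since $\mathbb{E}[\|\Xi\|^2] = \mathrm{trace}(\mathrm{Cov}_{\Xi})$, the trace can exceed $1$ while the operator norm stays below $1$. The diagonal covariance below is an illustrative assumption.

```python
import numpy as np

# Finite-dimensional noise with diagonal covariance Cov_Xi = diag(c).
c = np.array([1.0, 0.5, 0.25, 0.125])      # eigenvalues of Cov_Xi

operator_norm = np.max(c)                   # ||Cov_Xi|| (largest eigenvalue)
second_moment = np.sum(c)                   # E||Xi||^2 = trace(Cov_Xi)

# The process normalization ||Cov_Xi|| <= 1 holds here ...
assert operator_norm <= 1.0
# ... although E||Xi||^2 = 1.875 > 1: the random-variable normalization
# E||Xi||^2 <= 1 is strictly stronger than the process one.
assert second_moment > 1.0
# Conversely, E||Xi||^2 <= 1 always implies ||Cov_Xi|| <= 1.
assert operator_norm <= second_moment
```

This gap is exactly why white noise, whose covariance is the identity but whose trace is infinite in infinite dimensions, fits the third normalization but not the second.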
\[rem:DPinStIP\] From a statistical point of view, only the third case is of interest. For instance, the discrepancy principle described in example \[ex:detPW\] cannot be applied to observations with white noise since the term $\left\| T R_{\alpha} Y_{\delta} (\omega) - Y_{\delta} (\omega) \right\|_{\mathbb{H}_{2}}$ could be infinite. For observations with noise modelled as random variables, by contrast, it yields convergent methods. Thus the second case is very close to the deterministic setting, as we will support by proposition \[prop:TransfRVrv\].

In the deterministic context we defined the regularization operators between the underlying Hilbert spaces. The following notation allows us to apply them also to Hilbert-space processes:

\[not:RtXi-XiHP\] Consider a Hilbert-space process $\Xi: \mathbb{H}_{2} \rightarrow L^{2}(\Omega, \mathcal{F}, \mathbb{P})$ and a linear and bounded operator $R : \mathbb{H}_{2} \rightarrow \mathbb{H}_{1}$. Then we will interpret the composition $R \: \Xi$ as a Hilbert-space process on $\mathbb{H}_{1}$, i.e. as $ R \: \Xi : \mathbb{H}_{1} \rightarrow L^{2}(\Omega, \mathcal{F}, \mathbb{P}) $ with $ v \mapsto R \: \Xi \: v =: \left\langle R \: \Xi, v \right\rangle_{\mathbb{H}_{1}} = \left\langle \Xi, R^{*} v \right\rangle_{\mathbb{H}_{2}}$.

\[rem:BemZuInterprRXi\] $R \: \Xi$ is well-defined, since $\left\langle \Xi, R^{*} v \right\rangle_{\mathbb{H}_{2}} = \Xi (R^{*} v) \in L^{2}(\Omega, \mathcal{F}, \mathbb{P})$. The linearity of $R$ further yields that $R \: Y_{\delta} = R \: y + \delta R \: \Xi$. As parameter choices do not have to be linear, we cannot interpret the term $\alpha \left( \delta, Y_{\delta} \right)$ in a similar way. That is why we will use, where necessary, the sequence space model, which was discussed for instance in [@MR2438944; @MR2421941; @MR2013911]:

\[not:SSModel\] Let $ \left\{ \left( s_{j}; v_{j}, u_{j} \right) \right\}$ denote the singular system of the operator $T$.
The sequence space model is defined by $$\label{eq:Yjdelta} Y_{\delta} (\omega) := \left\{ Y_{\delta, j} (\omega) \right\}_{s_{j} > 0} \text{ with } Y_{\delta, j} (\omega) = \left\langle y, u_{j} \right\rangle_{\mathbb{H}_{2}} + \delta \left\langle \Xi, u_{j} \right\rangle_{\mathbb{H}_{2}} (\omega), \, \omega \in \Omega.$$ In applications only finite data are available, which is why we additionally introduce the following observation model; it is more realistic and has been studied for example in [@MR1425958; @MR2240642]:

\[not:discreteModel\] Let us consider the one-sided discretization of $Y_{\delta}$: $$\label{eq:diskrRM} Q Y_{\delta} = Q Tx + \delta Q \Xi = \sum_{j=1}^n Y_{\delta, j} w_{j} \text{ with } Y_{\delta, j} (\omega) = \left\langle y, w_{j} \right\rangle_{\mathbb{H}_{2}} + \delta \left\langle \Xi, w_{j} \right\rangle_{\mathbb{H}_{2}} (\omega), \, \omega \in \Omega,$$ where $Q$ denotes the projection onto the linear span of an orthonormal system $ \left\{ w_{1},..., w_n \right\}$. We assume observations without repetitions.

Model (\[eq:diskrRM\]) conforms to the well-known regression model with orthonormal design. It is evident that this model leads to an additional error term, the discretization error, which changes the convergence rates but not the underlying convergence behaviour if we require that $n=n(\delta)$ with $\underset{\delta \rightarrow 0}{\lim} \, n(\delta) = \infty $.

To compare and assess different methods we need an error criterion. Most authors use the mean squared error (MSE), and so will we. It is defined as follows: Let $\Xi$ satisfy assumption \[ass:Xi\]. We set $$MSE(R, \alpha, x, \delta) := \left( \mathbb{E} \left[ \left\| T^{+}y - R_{\alpha}Y_{\delta} \right\|^{2}_{\mathbb{H}_{1}} \right] \right)^{1/2} \text{, where } y = Tx.$$

\[prop:endlFehler\] Let $R_{\alpha}$, $\alpha>0$, denote a regularization operator with associated regularization filter $F_{\alpha}$ satisfying (\[eq:Vainikko3\]).
If the operator $T$ is Hilbert-Schmidt, the MSE of $R_{\alpha}$ is finite for all $x \in \mathbb{H}_1$ and $\delta > 0$.

By Parseval’s identity and Fubini’s theorem we get for all $y \in \mathcal{D}(T^{+})$ the so-called bias-variance decomposition of the mean squared error: $$\label{eq:BiasVar} \mathbb{E} \left[ \left\| T^{+}y - R_{\alpha}Y_{\delta} \right\|^{2}_{\mathbb{H}_{1}} \right] = \left\| T^{+}y - R_{\alpha}y \right\|^{2}_{\mathbb{H}_{1}} + \delta^{2} \mathbb{E} \left[ \left\| R_{\alpha} \Xi \right\|^{2}_{\mathbb{H}_{1}} \right].$$ The first term is the squared bias, which is related to the approximation error and specifies the difference between the exact solution and the expectation of its estimate. It is finite for all $y \in \mathcal{D}(T^{+})$ and vanishes as $\alpha \rightarrow 0$, as we have shown in proposition \[thm:R-pktw-T+\]. The variance measures the variability of the estimate caused by the noise. Applying the singular system $\left\{ \left( s_{j}; v_{j}, u_{j} \right) \right\}_{j \in \mathbb{N}} $ of $T$ with $s_j \leq \left\| T \right\|$ we get $$\label{eq:endlFehler} \mathbb{E} \left[ \left\| R_{\alpha} \Xi \right\|^{2}_{\mathbb{H}_{1}} \right] = \sum_{s_{j}>0} \left| F_{\alpha}(s_{j}^{2})s_{j} \right|^{2} \mathbb{E} \left[ \Xi_{j}^{2} \right] \leq \sum_{s_{j} > 0} \left| F_{\alpha}(s_{j}^{2})s_{j} \right|^{2} \leq \tfrac{\gamma^2}{\alpha^2} \, \left\| T \right\|_{\text{HS}}^{2},$$ since $ \left\| \mathrm{Cov}_{\Xi} \right\| \leq 1 $ implies $\mathbb{E} \left[ \Xi_{j}^{2} \right] \leq 1$ for all coordinates $\Xi_{j} := \langle \Xi, u_j \rangle_{\mathbb{H}_2}$, $j \geq 1$.

\[ass:THS\] In the following we assume the operator $T$ to be Hilbert-Schmidt and every considered regularization filter to satisfy (\[eq:Vainikko3\]).

We stress that the bound in (\[eq:endlFehler\]) does not yield the optimal order.
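The crude variance bound in (\[eq:endlFehler\]) can be checked numerically for the Tikhonov filter on a diagonal Hilbert-Schmidt toy operator with white-noise coordinates ($\mathbb{E}[\Xi_j^2] = 1$); the singular values $s_j = 1/j$ and the truncation at 1000 terms are assumptions of the sketch.

```python
import numpy as np

s = 1.0 / np.arange(1, 1001)        # s_j = 1/j: Hilbert-Schmidt, sum s_j^2 < inf
hs_norm_sq = np.sum(s**2)           # ||T||_HS^2 = sum_j s_j^2

def tikhonov(theta, alpha):
    # Tikhonov filter satisfies (eq:Vainikko3) with gamma = 1
    return 1.0 / (alpha + theta)

for alpha in (1.0, 0.1, 0.01):
    # variance factor E||R_alpha Xi||^2 for white noise, E[Xi_j^2] = 1
    variance = np.sum((tikhonov(s**2, alpha) * s) ** 2)
    # the crude bound: variance <= gamma^2 / alpha^2 * ||T||_HS^2
    assert variance <= hs_norm_sq / alpha**2
```

The bound holds with a large margin here, consistent with the remark that (\[eq:endlFehler\]) is not of optimal order: for Tikhonov the variance actually grows only like $\alpha^{-1/2}$ in this example, far slower than $\alpha^{-2}$.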
Regularization of statistical inverse problems {#sec:RegStIP}
----------------------------------------------

\[sec:kindsConv\] To define convergent statistical regularization methods we need a reasonable handling of the stochastic noise when studying the asymptotics of a regularization method for $\delta \rightarrow 0$. As a crucial point we recognize that for changing noise levels not only the realization of the observations may vary, but even the underlying probability distribution may change.

\[rem:handling\] For a chosen class of probability distributions $\mathcal{W}$ we consider the asymptotic behaviour of a regularization method $(R, \alpha)$ when the index $k \geq 1$ tends to infinity, i.e. we study $$\lim_{k \rightarrow \infty} \left\| T^{+}y - R_{\alpha(\delta^{(k)}, Y^{(k)})} Y^{(k)} \right\|_{\mathbb{H}_{1}} \quad \text{ for } y \in \mathcal{D}(T^{+}),$$ where $Y^{(k)} := Y^{(k)}(y) := y + \delta^{(k)} \Xi^{(k)} $ with $ \Xi^{(k)} \sim \mathbb{P}^{\Xi^{(k)}} \in \mathcal{W} $, $ \delta^{(k)} > 0 $ and $\underset{k \rightarrow \infty}{\lim} \delta^{(k)} = 0 $.

Let $\mathbb{P}^{\Xi}$ be any probability distribution and $\mathcal{W} := \lbrace \mathbb{P}^{\Xi} \rbrace$, i.e. we set $\Xi^{(k)} := \Xi$ for all $k \geq 1$. The assumed distribution can be interpreted as a priori knowledge of the noise behaviour. The most popular example of this approach is observations with Gaussian white noise.

By setting $\mathcal{W} := \mathcal{W}_{0} := \lbrace \mathbb{P}^{\Xi}: \Xi \sim \mathbb{P}^{\Xi} \text{ centered Hilbert-space process with } \Vert \mathrm{Cov}_{\Xi} \Vert < \infty \rbrace$ we allow arbitrary observations $Y^{(k)} := y + \delta^{(k)} \Xi^{(k)} $ where $\Xi^{(k)} $ can be any Hilbert-space process satisfying assumption \[ass:Xi\]. Here the change to the stochastic context causes a loss of information.

As a compromise we could consider any subclass of $\mathcal{W}_{0}$, such as the Dirac measures or the centered normal distributions with bounded covariance.
In order to formulate the aspired definitions we still lack a convenient notion of convergence. In view of definition \[def:Rauschpegel\] there are basically three possibilities available: convergence in mean square, convergence in probability and convergence in distribution. The latter is too weak to yield useful results, but convergence in probability should suffice in many cases. Nevertheless, convergence in mean square is often preferred because of its technical advantages. One should decide case by case.

\[def:stKonvRV\] Let $ R $ be a linear regularization scheme, $\alpha: (0,\infty) \times \mathbb{R}^{\mathbb{N}} \rightarrow (0,\infty)$ a measurable function and $\mathcal{W}$ a class of probability distributions. We set $$\label{eq:Mw(y)} \mathbb{M}_{\mathcal{W}} (y) := \left\{ Y_{\delta} = y + \delta \Xi : \delta > 0, \, \mathbb{P}^{\Xi} \in \mathcal{W} \text{ and } \left\| \mathrm{Cov}_{\Xi} \right\| \leq 1 \right\} \text{ for any } y \in \mathcal{D}(T^{+}).$$ The pair $\left( R, \alpha \right) $ is called a convergent statistical regularization w.r.t. $\mathcal{W}$ if for all $y \in \mathcal{D}(T^{+})$ and arbitrary observations $\left\{ Y_{\delta} \right\}_{\delta > 0} \subseteq \mathbb{M}_{\mathcal{W}} (y) $ the regularized solution converges $\mathbb{P}$-stochastically to the exact one as $\delta \rightarrow 0$: $$\begin{split} \text{For all } \left\{ Y^{(k)} \right\}_{k \geq 1} \subseteq \mathbb{M}_{\mathcal{W}} (y) \text{ with } Y^{(k)} := y + \delta^{(k)} \Xi^{(k)} \text{ and } \lim_{k \rightarrow \infty} \delta^{(k)} = 0 \text{ we have} \\ \lim_{k \rightarrow \infty} \mathbb{P} \left( \left\{ \omega \in \Omega: \left\| T^{+}y - R_{\alpha(\delta^{(k)}, Y^{(k)}(\omega))} Y^{(k)}(\omega) \right\|_{\mathbb{H}_{1}} > \epsilon \right\} \right) = 0 \text{ for all } \epsilon > 0. \end{split}$$ The convergence in probability can be replaced by convergence in mean square.
We then call such schemes convergent statistical regularizations in mean square w.r.t. $\mathcal{W}$.

\[rem:randVar\]\[ex:konvRVenfWR\]

-   Random variables: Hofinger and Pikkarainen study in [@hofinger; @MR2503292] convergence rates of the Tikhonov regularization using the Ky Fan metric as error criterion, allowing only observations whose noise can be modeled as a random variable.

-   Statistical parameter choices: In addition to modifications of classic parameter choices, several strategies have been developed especially for the stochastic context. One of them was introduced by Lepski[ĭ]{} in [@MR1091202] and has since been adapted to various models, for example statistical inverse problems with Gaussian white noise [@MR2175028; @MR2240642]. Another common parameter choice is cross-validation; in Tsybakov [@MR2013911] it is presented in a regression model, and in [@MR1045442] one can find a $\delta$-free version.

-   Gaussian white noise in the abstract model (\[eq:ObsModel\]): In [@MR2175028] the convergence in mean square of a Lepski[ĭ]{}-type parameter choice applied to spectral cut-off is proven for observations with white noise.

-   Gaussian white noise in the regression model (\[eq:diskrRM\]): Math[é]{} and Pereverzev have shown in [@MR2240642] that Lepski[ĭ]{}’s procedure also converges with Tikhonov regularization.

Our analysis in section \[sec:stBak\] will be based on this study, which is why we want to briefly outline the crucial results. In [@MR2240642] the authors focused on discretized data with random noise as described in notation \[not:discreteModel\]. They assumed that:

-   $\left\langle \Xi, w_{j} \right\rangle_{\mathbb{H}_{2}} \overset{\mathrm{iid}}{\sim} \mathcal{N}(0,1)$, $j = 1,...,n$,

-   $x^+ \in T_{\varphi} (R) := \lbrace x \in \mathbb{H}_{1}: x = \varphi (T^* T) v, \Vert v \Vert \leq R \rbrace$, where $\varphi: \left( 0, \Vert T \Vert^2 \right] \rightarrow \mathbb{R}_+$ is an increasing and operator monotone function with $ \varphi (0) = 0 $,
-   the singular values of $T$ satisfy $s_j \asymp j^{-r}$ for all $ j \geq 1$ and some $ r > 0 $,

-   there is a constant $C > 1$ such that $ \Vert (I - Q) T: \mathbb{H}_1 \rightarrow \mathbb{H}_2 \Vert \leq C \, \mathrm{rank}(Q)^{-r} $.

Further, they set

-   $R_{\alpha} := ( \alpha I + B^* B )^{-1} B^*$ with $B := QT$,

-   $\alpha_0 := \delta^2$ and $ \alpha_j := \alpha_0 \, q^j, $ where $ q > 1 $ and $ j = 1,...,m := \lceil 2 \log_q (\Vert T \Vert^2 / \delta) \rceil $,

-   $x_{j, \delta} := R_{\alpha_j} Q \, Y_{\delta} (\omega) = ( \alpha_j I + B^* B )^{-1} B^* Q Y_{\delta} (\omega)$,

-   $n = n(\alpha) \asymp \lceil \alpha^{-1/2r} \rceil $ and $ Q = Q_{n}, $ where $\alpha > 0$ and $Q$ is the described orthogonal projection onto $\mathrm{span} ( \left\lbrace w_1,...,w_n \right\rbrace )$.

Let $C_{\Psi}, \, C_1$ and $ C_2 > 0 $ be such that $$\begin{aligned} \Psi (j) &:= C_{\Psi} \sqrt{\tfrac{1}{4 \alpha_j} \mathrm{rank} (Q)} \geq \left( \mathbb{E} \left[ \Vert R_{\alpha_j} Q \Xi \Vert^2 \right] \right)^{1/2} \text{ (decreasing) and }\\ \Phi(j) &:= C_1 \varphi (C_2 \alpha_j) \geq \Vert T^+ y - R_{\alpha_j} Q Tx \Vert \text{ (increasing) with } j=0,...,m \end{aligned}$$ satisfy $\delta \Psi (0) \geq \Phi (0)$. Now, the regularization parameter is chosen according to $$\label{eq:LepskiPW} \alpha_* := \alpha_{j_*} \text{ with } j_* := \max \left\lbrace j = 1,...,m: \Vert x_{k, \delta} - x_{j, \delta} \Vert \leq 4 \, \kappa \, \delta \; \Psi (k) \text{ for all } k \leq j \right\rbrace,$$ where $\kappa := \sqrt{m}$. The idea of this choice is to approximate the parameter $\alpha_{\mathrm{opt}}$ which satisfies $ \delta \Psi (\alpha_{\mathrm{opt}}) = \Phi (\alpha_{\mathrm{opt}}) $.
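The mechanics of the balancing choice (\[eq:LepskiPW\]) can be sketched on an undiscretized diagonal toy problem. This is a schematic illustration, not the method of [@MR2240642]: $\kappa$ is set to $1$, the noise is a fixed deterministic direction, and $\Psi$ is replaced by the exact noise amplification of this toy model, so all constants differ from the cited setting.

```python
import numpy as np

# Toy diagonal problem with a deterministic unit "noise" direction.
s = 1.0 / np.arange(1, 201)                  # singular values s_j = 1/j
x_plus = 1.0 / np.arange(1, 201) ** 2        # exact solution
delta = 1e-3
xi = np.ones_like(s) / np.sqrt(len(s))       # fixed noise realization
y_delta = s * x_plus + delta * xi

q = 2.0
alphas = delta**2 * q ** np.arange(25)       # alpha_j = delta^2 * q^j
estimates = [s / (a + s**2) * y_delta for a in alphas]

def psi(j):
    # exact noise amplification ||R_{alpha_j} xi|| of this toy model,
    # playing the role of the bound Psi in the balancing principle
    return np.linalg.norm(s / (alphas[j] + s**2) * xi)

# Balancing (Lepskii) principle with kappa = 1 (simplified constant):
# take the largest j consistent with all coarser estimates.
j_star = 0
for j in range(len(alphas)):
    if all(np.linalg.norm(estimates[k] - estimates[j]) <= 4 * delta * psi(k)
           for k in range(j + 1)):
        j_star = j
    else:
        break

errors = [np.linalg.norm(x_plus - e) for e in estimates]
# The adaptive choice is competitive with the best parameter on the grid.
assert errors[j_star] <= 10 * min(errors)
```

The loop stops as soon as a new estimate leaves the $4\,\delta\,\Psi(k)$ tube around some coarser one, which is exactly where bias starts to dominate noise; the selected $\alpha_{j_*}$ then tracks the unknown balance point $\alpha_{\mathrm{opt}}$ up to a constant factor.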
Finally, with $ \Theta (t) := t^{(2r+1)/(4r)} \varphi(t)$, $ 0 < t \leq \Vert T \Vert^2, $ and $\delta_0 > 0$ sufficiently small, one gets that $$\sup_{x \in T_{\varphi}(R)} \left( \mathbb{E} \left[ \Vert x - x_{j_*, \delta} \Vert^2 \right] \right) ^{1/2} \leq C_{\mathrm{all}} \sqrt{\lceil 2 \log_q \left( \tfrac{\Vert T \Vert^2}{\delta} \right) \rceil } \varphi \left( \Theta^{-1} \left( \tfrac{\delta}{R}\right) \right) , \quad\delta \leq \delta_0,$$ which converges to zero as $\delta \rightarrow 0$. For more details about the concepts of *general source conditions* and *operator monotone functions* we refer to [@MR1477662; @MR2384768; @MR2240642] and the references therein.

Relation between classic and statistical regularization methods {#sec:transf}
---------------------------------------------------------------

As justification for section \[sec:RegStIP\] and as preparation for section \[sec:stBak\], we are interested in the connection between the regularization methods of the two settings. In general, we have to modify at least the parameter choice $\alpha$ because of the changed domain of definition. In order to formulate sufficient criteria for the stochastic convergence of $(R, \tilde{\alpha})$ we need to control the decay of $\tilde{\alpha}( \delta, Y_{\delta})$. The following notation will help us to describe it conveniently.

\[not:Landau\] Let $(Z_{n})_{n \in \mathbb{N}}$ be a sequence of random variables on a probability space $(\Omega, \mathcal{F},\mathbb{P})$ and $(c_{n})_{n \in \mathbb{N}}$ a sequence of real-valued constants.
We denote $$Z_{n} = o_{\mathbb{P}}(c_{n}) :\Longleftrightarrow \lim_{n \rightarrow \infty} \mathbb{P} \left( \vert \tfrac{Z_{n}}{c_{n}} \vert > \epsilon \right) = 0 \text{ for all } \epsilon > 0.$$

\[thm:det-stRV\] Let $(R, \alpha)$ be any generally convergent regularization, $$\label{eq:W0} \mathcal{W} \subseteq \lbrace \mathbb{P}^{\Xi}: \Xi \sim \mathbb{P}^{\Xi} \text{ centered Hilbert-space process with } \Vert \mathrm{Cov}_{\Xi} \Vert < \infty \rbrace =: \mathcal{W}_0$$ and $\mathbb{M}_{\mathcal{W}} (y) $, $ y \in \mathcal{D}(T^{+}) $, as in (\[eq:Mw(y)\]). The modified method $(R, \tilde{\alpha})$ constitutes a convergent statistical regularization w.r.t. $\mathcal{W}$ for any measurable function $\tilde{\alpha}: (0,\infty) \times \mathbb{R}^{\mathbb{N}} \rightarrow (0, \infty)$ if for arbitrary observations $\left\{ Y^{(k)} \right\}_{k \geq 1} $ with $Y^{(k)} := y + \delta^{(k)} \Xi^{(k)} \in \mathbb{M}_{\mathcal{W}} (y)$ it holds that $$\label{eq:a=oPd} \lim_{k \rightarrow \infty} \mathbb{P} \left( \tilde{\alpha} ( \delta^{(k)}, Y^{(k)}) > \epsilon \right) = 0 \text{ for all } \epsilon > 0 \quad \text{ and } \quad (\tilde{\alpha} ( \delta^{(k)}, Y^{(k)}))^{-1} = o_{\mathbb{P}} ((\delta^{(k)})^{-1}).$$

Let $ y \in \mathcal{D}(T^{+}) $, $\left\{ Y^{(k)} \right\}_{k \geq 1} \subseteq \mathbb{M}_{\mathcal{W}} (y)$ with $Y^{(k)} := y + \delta^{(k)} \Xi^{(k)} $ and $\underset{k \rightarrow \infty}{\lim} \delta^{(k)} = 0$.
Proposition \[prop:endlFehler\] together with assumption \[ass:THS\] yields for any number $\alpha>0$ the finiteness of the mean squared error: $$\mathbb{E} \left[ \left\| T^{+}y - R_{\alpha} Y^{(k)} \right\|^{2}_{\mathbb{H}_{1}} \right] \leq \left\| T^{+}y - R_{\alpha} y \right\|^{2}_{\mathbb{H}_{1}} + (\delta^{(k)})^{2} \tfrac{\gamma^2}{\alpha^2} \, \left\| T \right\|_{\text{HS}}^{2} < \infty\, , \quad k \geq 1.$$ Now, we consider a measurable function $\tilde{\alpha}: (0,\infty) \times \mathbb{R}^{\mathbb{N}} \rightarrow (0,\infty)$ satisfying (\[eq:a=oPd\]) and insert the function value $ \tilde{\alpha}(\delta^{(k)}, Y^{(k)}(\omega)) $ in place of the number $\alpha$, where $Y^{(k)} (\omega) = \lbrace Y^{(k)}_{\delta^{(k)}, j} (\omega) \rbrace_{j \geq 1}$ for $\omega \in \Omega$ and $k \geq 1$. In doing so, we allow for a moment that the parameter choice and the regularization operator are applied to different realizations of $Y^{(k)}$, $k \geq 1$. We get from proposition \[thm:R-pktw-T+\] that $$\label{eq:stKonvMSE} \lim_{k \rightarrow \infty} \mathbb{P} \left( \left\{ \omega \in \Omega: \mathbb{E} \left[ \left\| T^{+}y - R_{\tilde{\alpha}(\delta^{(k)}, Y^{(k)}(\omega))} Y^{(k)} \right\|^{2}_{\mathbb{H}_{1}} \right] > \epsilon \right\} \right) = 0$$ for any $\epsilon > 0$, since the sum of two stochastically convergent sequences converges stochastically.
So we can say: For all $\epsilon > 0$ there exists a subset $\Omega_{\epsilon} \subseteq \Omega$ with $\mathbb{P}(\Omega_{\epsilon}) \geq 1 - \epsilon$, such that $$\lim_{k \rightarrow \infty} \mathbb{E} \left[ \left\| T^{+}y - R_{\tilde{\alpha}(\delta^{(k)}, Y^{(k)}(\tilde{\omega}))} Y^{(k)} \right\|^{2}_{\mathbb{H}_{1}} \right] = 0 \quad \text{for all} \, \tilde{\omega} \in \Omega_{\epsilon} \text{ with } \mathbb{P}({\tilde{\omega}}) > 0.$$ Further, we can deduce that for all $\epsilon,\eta > 0$ and $ \tilde{\omega} \in \Omega_{\epsilon} $ with $\mathbb{P}({\tilde{\omega}}) > 0$ $$\lim_{k \rightarrow \infty} \mathbb{P} \left( \left\{ \omega \in \Omega: \left\| T^{+}y - R_{\tilde{\alpha}(\delta^{(k)}, Y^{(k)}(\tilde{\omega}))} Y^{(k)}(\omega) \right\|^{2}_{\mathbb{H}_{1}} > \eta \right\} \right) = 0.$$ Finally, we obtain $$\lim_{k \rightarrow \infty} \mathbb{P} \left( \left\{ \omega \in \Omega: \left\| T^{+}y - R_{\tilde{\alpha}(\delta^{(k)}, Y^{(k)}(\omega))} Y^{(k)}(\omega) \right\|^{2}_{\mathbb{H}_{1}}> \eta \right\} \right) \leq \epsilon$$ for all $\epsilon, \eta > 0$. Since $\epsilon$ is independent of $\eta$, we can conclude stochastic convergence.

\[prop:TransfRVrv\] Any generally convergent regularization $(R, \alpha)$ with measurable $\alpha$ satisfies definition \[def:stKonvRV\] of a convergent statistical regularization w.r.t. $$\label{eq:W2} \mathcal{W}_{1} \subseteq \mathcal{W}_{2} := \lbrace \mathbb{P}^{\Xi}: \Xi \in L^{2}(\Omega, \mathbb{H}_{2}) \text{ with } \mathbb{E} \left[ \left\| \Xi \right\|_{\mathbb{H}_{2}}^{2} \right] \leq 1 \rbrace.$$ The converse holds if $\mathcal{W}_{1}$ contains the Dirac measures.
Let $(R, \alpha)$ be a generally convergent regularization method with measurable $\alpha$, $ y \in \mathcal{D}(T^{+}) $ and $\left\{ Y^{(k)} \right\}_{k \geq 1} $ with $Y^{(k)} := y + \delta^{(k)} \Xi^{(k)} $, $\mathbb{P}^{\Xi^{(k)}} \in \mathcal{W}_{1}$, $\delta^{(k)} > 0$ for all $k \geq 1$ and $\underset{k \rightarrow \infty}{\lim} \delta^{(k)} = 0$. We fix $\epsilon > 0$, set $C := \epsilon^{-1/2}$ and define for any $\omega \in \Omega$ the set $$\begin{split} \mathcal{K}(\omega) &:= \left\{ k \geq 1: \left\| \Xi^{(k)}(\omega) \right\|_{\mathbb{H}_{2}} \leq C \right\} \subseteq \mathbb{N} \text{ and the number }\\ k(\omega) &:= \text{argmin} \left\{ k \geq 1: \left\| T^{+}y - R_{\alpha(\delta^{(l)}, Y^{(l)}(\omega))} Y^{(l)}(\omega) \right\|_{\mathbb{H}_{1}} \leq \epsilon \; \forall \, l \in \mathcal{K}(\omega) \text{ with } l \geq k \right\}. \end{split}$$ Then it follows from Chebyshev’s inequality and the convergence of $(R, \alpha)$ that $$\begin{split} \mathbb{P} &\left( \left\{ \omega \in \Omega: \left\| T^{+}y - R_{\alpha(\delta^{(k)}, Y^{(k)}(\omega))} Y^{(k)}(\omega) \right\|_{\mathbb{H}_{1}} > \epsilon \right\} \right)\\ &\quad \leq \mathbb{P} \left( \left\{ \omega \in \Omega: k \notin \mathcal{K}(\omega) \right\} \right) + \mathbb{P} \left( \left\{ \omega \in \Omega: k < k(\omega) \right\} \right) < 2 \epsilon \end{split}$$ for $k \geq 1$ sufficiently large and finally $$\lim_{k \rightarrow \infty} \mathbb{P} \left( \left\{ \omega \in \Omega: \left\| T^{+}y - R_{\alpha(\delta^{(k)}, Y^{(k)}(\omega))} Y^{(k)}(\omega) \right\|_{\mathbb{H}_{1}} > \epsilon \right\} \right) = 0 \text{ for all } \epsilon > 0.$$

\[prop:st-detRVnf\] Any purely data driven convergent statistical regularization $(R, \alpha)$ w.r.t. $\mathcal{W}_{0}$ induces a purely data driven generally convergent regularization $(R, \tilde{\alpha})$.
Let us consider deterministic observations of the form $y^{(k)} := y + \delta^{(k)} \xi^{(k)} \in \mathbb{H}_{2}$ with $y \in \mathcal{D}(T^{+})$, $\left\| \xi^{(k)} \right\|_{\mathbb{H}_{2}} \leq 1$, $\delta^{(k)} > 0$ for all $k \geq 1$ and $\underset{k \rightarrow \infty}{\lim} \delta^{(k)} = 0$. We define for any $k \geq 1$ the following Hilbert-space valued random variable $$Y^{(k)}(\omega) := \begin{cases} y^{(k)}, &\text{ if } \omega \in \Omega_{1} \\ -y^{(k)}, &\text{ if } \omega \in \Omega_{2}, \end{cases}$$ where $\mathbb{P}(\Omega_{1}) = \mathbb{P}(\Omega_{2}) = 0.5$. Every random variable $ Y^{(k)}$, $k \geq 1 $, can be identified with a centered Hilbert-space process, so that the function $$\tilde{\alpha}: \mathbb{H}_{2} \rightarrow (0, \infty), \, y^{(k)} \mapsto \alpha \left( \left\{ y_{\delta^{(k)}, j}^{(k)} \right\}_{j \geq 1} \right),$$ where $y_{\delta^{(k)}, j}^{(k)} := Y_{\delta^{(k)}, j}^{(k)} (\omega)$ for any $\omega \in \Omega_{1}$ and $ j \geq 1$, constitutes, together with the regularization $R$, a purely data driven generally convergent regularization. \[rem:GenerSt-detRVnf\] The proposition also holds for methods w.r.t. a subclass $ \mathcal{W} \subseteq \mathcal{W}_{0}$ if, for arbitrary deterministic observations $\left\lbrace y^{(k)} \right\rbrace_{k \geq 1} $ of the above form, $\mathcal{W}$ allows the definition of a sequence $\left\lbrace Y^{(k)} \right\rbrace_{k \geq 1} \subseteq \mathbb{M}_{\mathcal{W}} (y)$ with $ \mathbb{P} \left( \left\lbrace \omega \in \Omega: Y_k (\omega) = y_k \right\rbrace \right) > \eta$ for some $ \eta>0$. 
The Bakushinski[ĭ]{} veto for statistical inverse problems {#sec:stBak} ========================================================== \[sec:RegUnabhRP\] The following study was motivated by the paper "Regularization independent of the noise level: an analysis of quasi-optimality" by Bauer and Rei[ß]{} [@MR2438944], which raised the question of the transferability of the Bakushinski[ĭ]{} veto to statistical inverse problems. \[thm:stBak\]\[thm:stBakfPXiFix\] A purely data driven convergent statistical regularization method w.r.t. $\mathcal{W}_{0} $, see (\[eq:W0\]), exists if and only if the range $\mathcal{R}(T)$ of the operator $T$ is closed. For certain probability distributions $\mathbb{P}^{\Xi}$ there exist purely data driven convergent statistical regularizations w.r.t. $\mathcal{W} := \lbrace \mathbb{P}^{\Xi} \rbrace$ even if the problem is ill-posed. The first statement remains valid for sufficiently large subclasses of $\mathcal{W}_{0}$ such as $\mathcal{W}_{2}$ of (\[eq:W2\]) or the class of all Dirac measures. We refer to remark \[rem:GenerSt-detRVnf\]. For the proof of the second statement we need some preparation: \[not:setting\] In order to construct an example supporting theorem \[thm:stBakfPXiFix\] (2), let us focus on an operator $T: L^2( \left[ 0, 1 \right] ) \rightarrow L^2( \left[ 0, 1 \right] )$ and data with Gaussian white noise modeled by $Y_{\delta} (t) = Tx(t) + \delta \, \Xi_{t}, \, t \in \left[ 0,1 \right] ,$ which is consistent with (\[eq:ObsModel\]). We consider the equidistant decomposition $\mathcal{Z}_n := (0=t_0 < t_1 < ... < t_n = 1) \text{ with } t_j := j / n \text{ for } j= 0,...,n$ and the orthonormal system $ \lbrace \varphi_j \rbrace_{j=1,...,n}$, where $ \varphi_j := \left( \sqrt{t_j - t_{j-1}} \right)^{-1} \chi_{\left[ t_{j-1}, t_j \right)}. 
$ By projecting $ Y_{\delta} $ onto the linear span of $ \lbrace \varphi_j \rbrace_{j=1,...,n}$ we get a finite set of coefficients $$Y_{\delta,j} := \langle Y_{\delta}, \varphi_j \rangle_{L^2( \left[ 0, 1 \right] )} = \left( \sqrt{t_j - t_{j-1}} \right)^{-1} \, \int_{t_{j-1}}^{t_j} Tx(s)ds + \delta \epsilon_j , \quad \epsilon_j \overset{iid}{\sim} \mathcal{N}(0,1)$$ with $j= 1,...,n$, such that $$QY_{\delta} (t) = \sum_{j = 1}^{n} Y_{\delta,j} \varphi_j (t) = (t_j - t_{j-1})^{-1} \, \int_{t_{j-1}}^{t_j} Tx(s)ds + \sqrt{n} \, \delta \, \epsilon_j \text{ for } t \in \left[ t_{j-1}, t_j \right).$$ This setting conforms to the regression model with orthonormal design and without repetitions as described in notation \[not:discreteModel\]. In example \[ex:konvRVenfWR\] we mentioned that Tikhonov regularization, combined with a Lepski[ĭ]{}-type parameter choice, forms a convergent statistical regularization method $(R, \alpha)$ w.r.t. $\mathcal{N}(0,I)$ [@MR2240642]. Plugging an estimate of the noise level into this method, we can deduce a purely data driven one, as we will verify now. For that purpose we want to use the following estimator: \[def:estimator\] $$\label{eq:SchaetzeRPimRegrM} \tilde{\delta}_n^{2} := \frac{1}{2n^2} \sum_{j = 1}^{n} ( Q Y_{\delta}(t_{j}) - Q Y_{\delta}(t_{j-1}) )^{2}$$ Before studying its asymptotic behaviour we recall the following notion: ([@MR1902050]) Let $I$ denote an interval. A function $f:I \rightarrow \mathbb{R}$ is called Hölder continuous with exponent $s \in \left( 0,1 \right] $ if for all $t_{0} \in I$ a neighborhood $U \subseteq \mathbb{R}$ exists, such that $$\sup_{t, t' \in U \cap I, t \neq t'} \dfrac{\vert f(t) - f(t') \vert }{ \vert t - t' \vert^{s} } < \infty.$$ \[ass:TxHcont\] Let $y = Tx \in L^2( \left[ 0, 1 \right] )$ be Hölder-continuous of order $s \in \left( \tfrac{1}{2}, 1 \right]$. 
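As an illustration, the first-difference estimator $\tilde{\delta}_n^2$ of definition \[def:estimator\] is easy to sketch numerically. The following snippet is a minimal sketch, not part of the text: it assumes a smooth signal $\sin(2\pi t)$ (hence Hölder continuous with $s=1$) and the scaled noise $\sqrt{n}\,\delta\,\epsilon_j$ from the projected model, and checks that the estimator recovers $\delta^2$ up to a small relative error.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_level_estimator(qy):
    """First-difference (Rice-type) estimator tilde_delta_n^2 of (SchaetzeRPimRegrM).

    qy: values Q Y_delta(t_0), ..., Q Y_delta(t_n) on the grid t_j = j/n.
    """
    n = len(qy) - 1
    return np.sum(np.diff(qy) ** 2) / (2.0 * n ** 2)

# Simulated check: on the grid the projected data equal the (local mean of the)
# smooth signal plus sqrt(n) * delta * eps_j with iid standard normal eps_j.
n, delta = 2000, 0.05
t = np.arange(n + 1) / n
tx = np.sin(2 * np.pi * t)  # smooth signal; its differences are O(1/n), negligible here
qy = tx + np.sqrt(n) * delta * rng.standard_normal(n + 1)
est = noise_level_estimator(qy)
print(est, delta ** 2)  # estimate should be close to delta^2 = 0.0025
```

The squared noise differences have expectation $2n\delta^2$ each, so their sum divided by $2n^2$ has expectation $\approx \delta^2$, matching the asymptotic unbiasedness claimed below.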
Assumption \[ass:TxHcont\] is satisfied for any integral operator $T: L^2( \left[ 0, 1 \right] ) \rightarrow L^2( \left[ 0, 1 \right] )$, $$(Tx)(t) = \int_0^t k(t,u) x(u) du,$$ with kernel $k: \left[ 0, 1 \right]^2 \rightarrow \mathbb{R}$ satisfying for some constant $C > 0$ $$\sup_{u \in \left[ 0, 1 \right]} \vert k(t,u) - k(t',u) \vert \leq C \vert t - t' \vert^s, \, t,t' \in \left[ 0, 1 \right] .$$ Assumption \[ass:TxHcont\] implies, in view of [@MR606200 pages 212-213], that $$( Q y (t_{j-1}) - Q y (t_{j}) )^{2} = O (n^{-2s}), \quad j=1,...,n,$$ from which we can deduce the asymptotic unbiasedness of $\tilde{\delta}_n^{2}$ as $n \rightarrow \infty$: $$\mathbb{E} (\tilde{\delta}^{2}_{n}) = \delta^2 + O(n^{-(1+2s)}) \text{ and } \mathbb{E} (\tilde{\delta}^{2}_{n}) \geq \delta^2$$ In proposition \[prop:MPmitNneu\] we need $s > \tfrac{1}{2}$. \[prop:Omega-\] Let $n = n(\delta) \asymp \lceil \delta^{-\eta} \rceil$ with $2 > \eta \geq \tfrac{2}{1+2s}$, $ \hat{\delta} := \tau \, \tilde{\delta}_n $ and $$\label{eq:Omega+} \Omega_+ := \Omega_+ (\delta, \tau, K) := \left\lbrace \omega \in \Omega: \hat{\delta} (\omega) \in \left[ \delta, K \tau \delta \right] \right\rbrace, \, \tau, K >1 \text{ appropriate.}$$ The following assertions hold for all $\delta \leq \delta_0$ with $\delta_0>0$ sufficiently small: There are constants $C_1, C_2 > 0$ such that $ \mathbb{P} ( \Omega \setminus \Omega_+ ) \leq C_1 \exp \left( - \, C_2 \, n^2(\alpha, \delta) \,\delta^2 \right). 
$ It holds for $\alpha > 0$ and some $C_3>0$ that $$\underset{T^+ y \in T_{\varphi}(R)}{\sup} \int_{\Omega} \Vert T^+ y - R_{\alpha} Q Y_{\delta} \Vert^4 d \mathbb{P} \leq C_3 \, \delta^{4} \alpha^{-4} n^2(\alpha, \delta).$$ We want to use the following Lemma for the proof of proposition \[prop:Omega-\]: \[lem:equivMoments\] Let $X$ be a Gaussian random vector in a Banach space $\mathcal{B}$ and $\Vert X \Vert_p := \mathbb{E} \left[ \Vert X \Vert_{\mathcal{B}}^p \right]^{1/p} $, $0<p<\infty$, the $L^p$-norm of $X$. For all $0 < p,q < \infty$ there is a constant $K_{p,q} > 0$ such that $ \Vert X \Vert_p \leq K_{p,q} \Vert X \Vert_q $. Let $X \in L^4(\Omega, \mathcal{F}, \mathbb{P})$ be nonnegative. It holds $$\int_{\Omega} X^4(\omega) d \mathbb{P} (\omega) = 4 \int_0^{\infty} t^3 \, \mathbb{P} (X > t) dt.$$ $ \Vert (\alpha I + B^* B )^{-1} B^* B \Vert \leq 1$ The first statement can be found in [@MR1102015 page 60] and the second one follows by a generalized version of partial integration, which is given in [@MR0270403 chapter 5, $\S$ 6]. By spectral calculus we get the inequality in (3). 
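Statement (3) of the lemma, $\Vert (\alpha I + B^* B )^{-1} B^* B \Vert \leq 1$, follows from spectral calculus since each eigenvalue has the form $s_j^2/(\alpha + s_j^2) < 1$. A minimal numerical sketch (with an arbitrary random matrix standing in for a discretized operator $B$; the matrix and its size are illustrative assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((8, 5))  # stand-in for a discretized forward operator
alpha = 0.1

# M = (alpha I + B^T B)^{-1} B^T B: both factors are functions of B^T B, so M is
# symmetric with eigenvalues s_j^2 / (alpha + s_j^2) < 1.
M = np.linalg.solve(alpha * np.eye(5) + B.T @ B, B.T @ B)
op_norm = np.linalg.norm(M, 2)  # spectral norm
print(op_norm)  # strictly below 1 for alpha > 0
```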
Using Lemma \[lem:equivMoments\] (1) we get with $\tilde{\delta}_n = \Vert X_n \Vert $, where $$X_n := \tfrac{1}{\sqrt{2} \, n} \left( Q Y_{\delta} (t_1) - Q Y_{\delta} (t_0), ..., Q Y_{\delta} (t_{n}) - Q Y_{\delta} (t_{n-1}) \right) \sim \mathcal{N} (\mathbb{E} X_n, I_n) ,$$ that $$\begin{split} \mathbb{P} &( \Omega \setminus \Omega_+ ) = \mathbb{P} \left( \left\lbrace \omega \in \Omega: \tau \, \tilde{\delta}_n (\omega) < \delta \right\rbrace \right) + \mathbb{P} \left( \left\lbrace \omega \in \Omega: \tau \, \tilde{\delta}_n (\omega) > \tau \, K \delta \right\rbrace \right) \\ &= \, \mathbb{P} \left( K_{2,1}^{-1} \Vert \tilde{\delta}_n \Vert_{2} - \tilde{\delta}_n > K_{2,1}^{-1} \Vert \tilde{\delta}_n \Vert_{2} - \tfrac{\delta}{\tau} \right) + \,\mathbb{P} \left( \tilde{\delta}_n - K_{1,2} \Vert \tilde{\delta}_n \Vert_{2} > K \delta - K_{1,2} \Vert \tilde{\delta}_n \Vert_{2} \right) \\ &\leq \, \mathbb{P} \left( \vert \mathbb{E} \left[ \tilde{\delta}_n \right] - \tilde{\delta}_n \vert > K_{2,1}^{-1} \Vert \tilde{\delta}_n \Vert_{2} - \tfrac{\delta}{\tau} \right) + \, \mathbb{P} \left( \vert \tilde{\delta}_n - \mathbb{E} \left[ \tilde{\delta}_n \right] \vert > K \delta - \Vert \tilde{\delta}_n \Vert_{2} \right) \end{split}$$ since the Cauchy-Schwarz inequality yields $K_{1,2} =1$. At this point, we would like to apply the concentration inequality (3.2) in [@MR1102015 page 57], for which we have to ensure that $\tau \Vert \tilde{\delta}_n \Vert_{2} > K_{2,1}\delta$ and $K \delta > \Vert \tilde{\delta}_n \Vert_{2}$. The first requirement is satisfied for all $\tau > K_{2,1}$ as $\Vert \tilde{\delta}_n \Vert_{2} \geq \delta$. 
For the second we need that $K \gg 1$ since we have for a constant $c > 0$ $$\Vert \tilde{\delta}_n \Vert_{2} \leq \delta + c n^{-(1+2s)/2} \asymp \delta + c \, \delta^{\eta (1+2s)/2} \leq (c+1) \, \delta \text{ for } \delta \in (0,1).$$ Supposing that $\tau$ and $K$ are appropriate it follows that for some constants $C_1, C_2 > 0 $ $$\begin{split} \mathbb{P} &( \Omega \setminus \Omega_+ ) \\ &\leq \, 2 \exp \left( - \tfrac{2}{\pi^2} n^2(\alpha, \delta) \left[ K_{2,1}^{-1} \Vert \tilde{\delta}_n \Vert_{2} - \tfrac{\delta}{\tau} \right]^2 \right) + \, 2 \exp \left( - \tfrac{2}{\pi^2} n^2(\alpha, \delta) \left[ K \delta - \Vert \tilde{\delta}_n \Vert_{2} \right]^2 \right)\\ &\leq C_1 \exp \left( - \, C_2 \, n^2(\alpha, \delta) \,\delta^2 \right). \end{split}$$ After lemma \[lem:equivMoments\] (2) and (3) and the concentration inequality (3.5.) in [@MR1102015 page 59] it holds for some constant $ C_3 > 0 $ that $$\begin{split} \int_{\Omega} &\Vert T^+ y - R_{\alpha} Q Y_{\delta} \Vert^4 d \mathbb{P} = 4 \int_0^{\infty} t^3 \mathbb{P} \left( \Vert T^+ y - R_{\alpha} Q Y_{\delta} \Vert > t \right) dt \\ &\leq 4 \int_0^{2 \Vert T^+ y \Vert_{\mathbb{H}_1}} t^3 dt + 16 \int_{2 \Vert T^+ y \Vert_{\mathbb{H}_1}}^{\infty} t^3 \, \mathbb{P} \left( 2 \Vert T^+ y \Vert + \delta \Vert R_{\alpha} Q \Xi \Vert > t \right) dt \\ &\leq 16 \Vert T^+ y \Vert_{\mathbb{H}_1}^4 + 16 \int_{2 \Vert T^+ y \Vert_{\mathbb{H}_1}}^{\infty} t^3 \exp \left( - \tfrac{(t - 2 \Vert T^+ y \Vert_{\mathbb{H}_1})^2}{E} \right) dt \\ &= 16 \Vert T^+ y \Vert_{\mathbb{H}_1}^4 + 16 \int_{0}^{\infty} \left( t^3 + 6 t^2 \Vert T^+ y \Vert_{\mathbb{H}_1} + 12 t \Vert T^+ y \Vert_{\mathbb{H}_1}^2 + 8 \Vert T^+ y \Vert_{\mathbb{H}_1}^3 \right) e^{- t^2 / E} dt \\ &= 16 \Vert T^+ y \Vert_{\mathbb{H}_1}^4 + \tfrac{1}{2} E^2 + \tfrac{ 3 \sqrt{\pi} }{2} \Vert T^+ y \Vert_{\mathbb{H}_1} E^{3/2} + 6 \Vert T^+ y \Vert_{\mathbb{H}_1}^2 E + 4 \sqrt{\pi} \Vert T^+ y \Vert_{\mathbb{H}_1}^3 \sqrt{E} \\ &\leq 16 \Vert T^+ y 
\Vert_{\mathbb{H}_1}^4 + C_3 \, \delta^{4} \alpha^{-4} n^2(\alpha, \delta), \end{split}$$ where $0 < s_j \leq \Vert T \Vert $ yields $$E := 8 \mathbb{E} \left[ \Vert \delta R_{\alpha} Q \Xi \Vert^2 \right] = 8 \delta^2 \sum_{s_j>0} \vert \tfrac{s_j}{\alpha + s_j^2} \vert^2 \mathbb{E} \left[ \vert \langle Q \Xi, u_j \rangle_{\mathbb{H}_2} \vert^2 \right] \leq 8 \delta^{2} \Vert T \Vert^2 \alpha^{-2} n(\alpha, \delta).$$ The assertion follows for all $\delta \leq \delta_0$ with $\delta_0 > 0$ sufficiently small. In example \[ex:konvRVenfWR\] we set $n \asymp \lceil \alpha^{-1/2r} \rceil$, $r>0$, but in proposition \[prop:Omega-\] it was more advantageous to link $n$ to $\delta$. Combining the two approaches we get $$\label{eq:n(a,d)} n := n(\alpha, \delta) = \max \lbrace n_1(\alpha), n_2(\delta) \rbrace \text{, where } n_1(\alpha) \asymp \lceil \alpha^{-1/2r} \rceil \text{ and } n_2(\delta) \asymp \lceil \delta^{-\eta} \rceil.$$ Since our analysis is based on a different asymptotic behaviour of $n$ than the one stated in example \[ex:konvRVenfWR\], we have to revise the convergence result. \[prop:MPmitNneu\] Let $\alpha_* := \alpha_{j_*} (\delta, Y_{\delta})$ denote the regularization parameter according to Lepski[ĭ]{}’s principle as described in example \[ex:konvRVenfWR\]. 
If we assume that $n = n(\alpha, \delta)$ as in (\[eq:n(a,d)\]) with $\eta < 2$ (instead of $n = n(\alpha) \asymp \lceil \alpha^{-1/2r} \rceil$ as before) then $$\lim_{\delta \rightarrow 0} \left( \sup_{T^+ y \in T_{\varphi}(R)} \mathbb{E} \left[ \Vert T^+ y - R_{\alpha_*} Q Y_{\delta} \Vert^2 \right] \right) = 0.$$ Mathé and Pereverzev have shown in [@MR2240642 Theorem 5] that under the assumptions and notations of example \[ex:konvRVenfWR\] it holds for some $C_0 > 0$ that $$\sup_{T^+ y \in T_{\varphi}(R)} \mathbb{E} \left[ \Vert T^+ y - R_{\alpha_*} Q Y_{\delta} \Vert^2 \right] \leq C_{0} \sqrt{ \lceil 2 \log_q (\Vert T \Vert^2/ \delta) \rceil } \varphi (\check{\alpha}),$$ where $\check{\alpha} := \alpha_{\check{j}}$ with $\check{j} := \max \left\lbrace j \leq m: \Phi(j) \leq \delta \Psi(j) \right\rbrace $. The proof of this bound does not depend on the asymptotic behaviour of $n$ aside from the requirement of the existence of a constant $D>0$ satisfying $\Psi(j) \leq D \, \Psi (j+1)$ for all $j=0,...,m-1$. Since this is fulfilled even for our new choice of $n$, we cite the given inequality without further proof. The only modification we made is a slight change of the definition of $\check{j}$, which simplifies the notation. Now, we want to prove that the right hand side converges to zero. We follow the ideas in [@MR2240642] and set $$\Theta_{\delta} (t) := \max \lbrace \lceil t^{1/4s} \rceil, \lceil \delta^{\eta/2} \rceil \rbrace \sqrt{t} \varphi(t), \, t>0, \text{ and } \alpha^*:= \inf \left\lbrace \alpha > 0: \Theta_{\delta} (\alpha) \geq \delta \right\rbrace.$$ $\Theta_{\delta}$ is increasing in $t$, so that for every $\delta > 0$ there is a unique choice for $\alpha^*$. 
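Since $\Theta_{\delta}$ is increasing, $\alpha^*$ can be located numerically by bisection. The following sketch uses assumed ingredients that are not prescribed by the text (index function $\varphi(t)=\sqrt{t}$, Hölder exponent $s=0.75$, $\eta=2/(1+2s)=0.8$); with $\delta, t < 1$ both ceiling terms equal $1$, so here $\Theta_{\delta}(t)=t$ and $\alpha^*(\delta)\approx\delta \rightarrow 0$.

```python
import numpy as np

def theta_delta(t, delta, s=0.75, eta=0.8, phi=np.sqrt):
    # Theta_delta(t) = max{ceil(t^{1/(4s)}), ceil(delta^{eta/2})} * sqrt(t) * phi(t)
    return max(np.ceil(t ** (1.0 / (4 * s))), np.ceil(delta ** (eta / 2))) * np.sqrt(t) * phi(t)

def alpha_star(delta, lo=1e-16, hi=1.0, iters=100):
    # alpha* = inf{alpha > 0: Theta_delta(alpha) >= delta}; bisection works
    # because Theta_delta is increasing in t.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if theta_delta(mid, delta) >= delta:
            hi = mid
        else:
            lo = mid
    return hi

print(alpha_star(1e-2), alpha_star(1e-4))  # alpha*(delta) decreases with delta
```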
We notice that $$\delta \, \Psi(\alpha^*) = \delta \, C_{\Psi} \sqrt{\tfrac{n(\alpha^*, \delta)}{4 \alpha^*} } = \tfrac{C_{\Psi}}{2} \, \delta \, \left( \max \lbrace \lceil (\alpha^*)^{1/4s} \rceil, \lceil \delta^{\eta/2} \rceil \rbrace \sqrt{\alpha^*} \right)^{-1} \leq \tfrac{C_{\Psi}}{2} \, \varphi(\alpha^*).$$ This leads to $\check{\alpha} \leq \alpha^{*}$ because of the definition of $\Phi$ and the monotonicity of $\Phi$ and $\Psi$. Finally, we can deduce $$\label{eq:MPnewRate} \lim_{\delta \rightarrow 0} \left( \sup_{T^+ y \in T_{\varphi}(R)} \mathbb{E} \left[ \Vert T^+ y - R_{\alpha_*} Q Y_{\delta} \Vert^2 \right] \right) \leq \lim_{\delta \rightarrow 0} C_{\mathrm{all}} \sqrt{ \lceil 2 \log_q (\Vert T \Vert^2/ \delta) \rceil } \varphi (\alpha^*(\delta)) = 0$$ since $\underset{\delta \rightarrow 0}{\lim} \alpha^*(\delta) = 0$ if $\eta < 2$. The convergence rate given in (\[eq:MPnewRate\]) need not be optimal. Finally, we can give the proof of the theorem: Any purely data driven convergent statistical regularization method $(R, \alpha )$ w.r.t. $\mathcal{W}_{0}$ induces the existence of a purely data driven convergent regularization $(R, \tilde{\alpha})$ in terms of definition \[def:konvRV\], as shown in proposition \[prop:st-detRVnf\]. Consequently, the range $\mathcal{R}(T)$ of $T$ is closed, see lemma \[rem:detBak\]. Now we turn to the second statement:\ We consider the setting described in notation \[not:setting\] with $Tx$ satisfying assumption \[ass:TxHcont\], the estimator $\tilde{\delta}_n^2$ given in definition \[def:estimator\] and the set $\Omega_+$ introduced in (\[eq:Omega+\]). Let $R_{\alpha}$, $m$, $\lbrace \alpha_j \rbrace_{j=0,...,m}$, $\lbrace x_{j, \delta} \rbrace_{j=0,...,m}$, $T_{\varphi} (R)$, $\Psi$ and $\kappa$ be as in example \[ex:konvRVenfWR\] and $n := n(\alpha, \delta) $ as in (\[eq:n(a,d)\]). First of all, we verify that the assumptions of example \[ex:konvRVenfWR\] are satisfied. 
The first one follows by definition and the second if $x^+ \in T_{\varphi} (R)$. The definition of the projection $Q$ and the Hölder continuity of $Tx$ yield, by [@MR606200 pages 212-213], assumption (d) since $$\Vert (I - Q) T: \mathbb{H}_1 \rightarrow \mathbb{H}_2 \Vert \leq C \, \mathrm{rank}(Q)^{-s},$$ where $s \in (1/2,1]$ denotes the Hölder exponent of $Tx$. Assumption (c) was used in [@MR2240642] as a basis for assumption (d) and in order to prove the order optimality of the convergence result, which is why we can ignore it here. As a consequence we set $r:=s$ in $n_1(\alpha) \asymp \lceil \alpha^{-1/2r} \rceil$ such that $$n_1 (\alpha) \leq n_1 (\alpha_0) \asymp \delta^{-1/s} \leq \delta^{-\eta} \asymp n_2(\delta) \text{ if } \eta \geq 1/s.$$ Now, we want to examine $$\mathbb{E} \left[ \Vert T^+ y - R_{\hat{\alpha}_*} Q Y_{\delta} \Vert^2 \right] = \int_{\Omega_{+}} \Vert T^+ y - R_{\hat{\alpha}_*} Q Y_{\delta} \Vert^2 d \mathbb{P} + \int_{\Omega \setminus \Omega_+} \Vert T^+ y - R_{\hat{\alpha}_*} Q Y_{\delta} \Vert^2 d \mathbb{P},$$ where $ \hat{\alpha}_* := \alpha_{j_*} (\hat{\delta}, Y_{\delta}) \text{ with } \hat{\delta} := \tau \tilde{\delta}_n $ denotes the regularization parameter resulting from Lepski[ĭ]{}’s principle (\[eq:LepskiPW\]) when using the estimated noise level. It is quite evident that $$\Omega_+ \subseteq \Omega_{\kappa} := \Omega_{\kappa} (\delta) := \left\lbrace \omega \in \Omega: \max_{j=1,...,m(\delta)} \tfrac{\delta \, \Vert R_{\alpha_j} Q_n \, \Xi (\omega) \Vert_{\mathbb{H}_2}}{\Psi_{\delta}(j)} \leq \kappa \right\rbrace$$ if the constant $C_{\Psi} >0$ in $\Psi$ is sufficiently large. Since $\alpha_{j_*} (\delta, Y_{\delta})$ and $\hat{\alpha}_*$ lead, on $\Omega_+$, to the same asymptotic behaviour of $R_{\alpha} Q Y_{\delta}$, we can deduce from proposition \[prop:MPmitNneu\] that the first term on the right vanishes as $\delta \rightarrow 0$ if $\eta <2$. 
Furthermore, Hölder's inequality yields that $$\int_{\Omega \setminus \Omega_+ }\Vert T^+ y - R_{\hat{\alpha}_*} Q Y_{\delta} \Vert^2 d \mathbb{P} \leq \left( \int_{\Omega } \Vert T^+ y - R_{\hat{\alpha}_*} Q Y_{\delta} \Vert^4 d \mathbb{P} \right)^{1/2} \left( \mathbb{P}(\Omega \setminus \Omega_+) \right) ^{1/2}.$$ Hence, it follows from proposition \[prop:Omega-\] that, with $\eta := 1/s \geq \tfrac{2}{1+2s}$, where $s \in \left(\tfrac{1}{2}, 1\right]$, it holds for all $\delta < \delta_0$ with $\delta_0 > 0$ sufficiently small that $$\sup_{T^+ y \in T_{\varphi}(R)} \int_{\Omega \setminus \Omega_+ }\Vert T^+ y - R_{\hat{\alpha}_*} Q Y_{\delta} \Vert^2 d \mathbb{P} \leq C_{\text{all}} \lceil \delta^{-(2+\eta)} \rceil \exp \left( - \, \tfrac{1}{2} \, C_2 \, \delta^{2-2\eta} \right),$$ and finally $$\lim_{\delta \rightarrow 0} \left( \sup_{T^+ y \in T_{\varphi}(R)} \mathbb{E} \left[ \Vert T^+ y - R_{\hat{\alpha}_*} Q Y_{\delta} \Vert^2 \right] \right) = 0,$$ which completes the proof. \[rem:Procedure\] The numerical procedure including the estimation of the noise level can be described with the notations of example \[ex:konvRVenfWR\] as follows: --------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------- Choose: $\tau > K_{2,1}; \quad p > 1; \quad q > 1; \quad n \in \mathbb{N}; \quad m \in \mathbb{N}; \quad \epsilon > 0; \quad \hat{\delta}_0 := 0; \quad k:=0;$ Do: $k := k+1; $ $ \hat{\delta}_k := \tfrac{1}{2} \, \tau \, n^{-2} \sum_{j=1}^{n} \left( y_{\delta}(j/n) - y_{\delta}((j-1)/n) \right)^2;$ $\alpha := \hat{\delta}_k^2; $ $ n := \max \lbrace n(\alpha, \hat{\delta}_k), p \ast n \rbrace ;$ While: $\left( (k<m) \epsilon + (k>m) \max \lbrace \vert \hat{\delta}_k - \hat{\delta}_j \vert, j = k-m,...,k \rbrace > \epsilon \hat{\delta}_k \right) ; \qquad \qquad$ Adapt: $\kappa := \sqrt{m}; \quad n = n(\alpha, \hat{\delta}_k); \quad B := Q_{n} T; \quad x_1 
:= (\alpha I + B^* B )^{-1} B^* y_{\delta}; \quad k := 0;$ Do: $k := k+1; $ $ \alpha := q \ast \alpha; $ $ n := n(\alpha, \hat{\delta}_k); $ $ B := Q_{n} T;$ $x_k := (\alpha I + B^* B )^{-1} B^* y_{\delta};$ While: $\left( \Vert x_j - x_k \Vert \leq 4 \kappa \delta \sqrt{\psi(\alpha q^{j-k})}, j \leq k \text{ and } \alpha \leq \Vert T \Vert^2 \right) ;$ Return: $x_{k-1};$ --------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------- The second part is a modified version of the strategy presented in [@MR2240642]. Conclusion ========== In this paper we have developed new concepts for the study of statistical inverse problems. The central idea was to link the noise to the asymptotics of the noise level $\delta \rightarrow 0$ while varying its probability distribution, which is assumed to be an element of a fixed class $\mathcal{W}$ w.r.t. which the convergence of the considered regularization is required. By means of this approach we were able to disprove the often supposed general transferability of the Bakushinski[ĭ]{} veto to the stochastic context. A number of follow-up issues arise from this result: The estimation of the noise level has gained importance. In particular, estimation methods that utilize just one data set are of special interest, as the estimate can be incorporated into a regularization method. How do the various parameter choices react to the use of an estimated noise level, and how can we compensate for unwanted behavior? For which other classes of probability distributions does an analogous statement to the Bakushinski[ĭ]{} veto hold, and for which ones can we derive counterexamples? Acknowledgment {#acknowledgment .unnumbered} ============== The author would like to thank Peter Mathé, WIAS Berlin, and Markus Reiß, Humboldt-Universität zu Berlin, for helpful discussions. [10]{} A. B. Bakushinski[ĭ]{}. 
Remarks on the choice of regularization parameter from quasioptimality and relation tests. , 24(8):1258–1259, 1984. F. Bauer, T. Hohage, and A. Munk. Iteratively regularized [G]{}auss-[N]{}ewton method for nonlinear inverse problems with random noise. , 47(3):1827–1846, 2009. F. Bauer and S. Pereverzev. Regularization without preliminary knowledge of smoothness and error behaviour. , 16(3):303–317, 2005. F. Bauer and M. Rei[ß]{}. Regularization independent of the noise level: an analysis of quasi-optimality. , 24(5):055009, 16, 2008. H. Bauer. . de Gruyter Lehrbuch. \[de Gruyter Textbook\]. Walter de Gruyter & Co., Berlin, fifth edition, 2002. R. Bhatia. , volume 169 of [*Graduate Texts in Mathematics*]{}. Springer-Verlag, New York, 1997. N. Bissantz, T. Hohage, A. Munk, and F. Ruymgaart. Convergence rates of general regularization methods for statistical inverse problems and applications. , 45(6):2610–2636 (electronic), 2007. L. D. Brown and M. G. Low. Asymptotic equivalence of nonparametric regression and white noise. , 24(6):2384–2398, 1996. L. Cavalier. Nonparametric statistical inverse problems. , 24(3), 2008. L. Cavalier and N. W. Hengartner. Adaptive estimation for inverse problems with noisy operators. , 21(4):1345–1361, 2005. H. W. Engl, M. Hanke, and A. Neubauer. , volume 375 of [ *Mathematics and its Applications*]{}. Kluwer Academic Publishers Group, Dordrecht, 1996. S. N. Evans and P. B. Stark. Inverse problems as statistics. . W. Feller. Second edition. John Wiley & Sons Inc., New York, 1971. T. Gasser, L. Sroka, and C. Jennen-Steinmetz. Residual variance and residual pattern in nonlinear regression. , 73(3):625–633, 1986. C. W. Groetsch. , volume 105 of [*Research Notes in Mathematics*]{}. Pitman (Advanced Publishing Program), Boston, MA, 1984. P. C. Hansen. Analysis of discrete ill-posed problems by means of the l-curve. , 34(4):561–580, 1992. A. Hofinger. . PhD thesis, Johannes-Kepler-Universität Linz, Trauner Verlag, 2006. A. Hofinger and H. K. 
Pikkarainen. Convergence rates for linear inverse problems in the presence of an additive normal noise. , 27(2):240–257, 2009. B. Hofmann and P. Math[é]{}. Analysis of profile functions for general linear regularization methods. , 45(3):1122–1141 (electronic), 2007. V. K. Ivanov, V. V. Vasin, and V. P. Tanana. . Inverse and Ill-posed Problems Series. Second edition. M. Ledoux and M. Talagrand. , volume 23 of [*Ergebnisse der Mathematik und ihrer Grenzgebiete (3) \[Results in Mathematics and Related Areas (3)\]*]{}. Springer-Verlag, Berlin, 1991. Isoperimetry and processes. O. V. Lepski[ĭ]{}. A problem of adaptive estimation in [G]{}aussian white noise. , 35(3):459–470, 1990. P. Math[é]{}. Principles of regularization in [H]{}ilbert spaces. Lecture Notes, 2010. P. Math[é]{} and B. Hofmann. How general are general source conditions? , 24(1):015009, 5, 2008. P. Math[é]{} and S. V. Pereverzev. Regularization of some linear ill-posed problems with discretized random noisy data. , 75(256):1913–1929 (electronic), 2006. V. A. Morozov. On the solution of functional equations by the method of regularization. , 7:414–417, 1966. F. O’Sullivan. A statistical perspective on ill-posed inverse problems. , 1(4):502–527, 1986. With comments and a rejoinder by the author. D. Revuz and M. Yor. , volume 293 of [*Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\]*]{}. Springer-Verlag, Berlin, 1991. L. L. Schumaker. . John Wiley & Sons Inc., New York, 1981. Pure and Applied Mathematics, A Wiley-Interscience Publication. A. N. Tikhonov and V. Y. Arsenin. . V. H. Winston & Sons, Washington, D.C.: John Wiley & Sons, New York, 1977. Translated from the Russian, Preface by translation editor Fritz John, Scripta Series in Mathematics. A. B. Tsybakov. , volume 41 of [*Mathématiques & Applications (Berlin) \[Mathematics & Applications\]*]{}. Springer-Verlag, Berlin, 2004. G. M. Va[ĭ]{}nikko and A. Y. Veretennikov. . “Nauka”, Moscow, 1986. G. Wahba. 
, volume 59 of [ *CBMS-NSF Regional Conference Series in Applied Mathematics*]{}. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1990.
--- abstract: | Both the mean square polynomial stability and the exponential stability of $\theta$ Euler-Maruyama approximation solutions of stochastic differential equations will be investigated for each $0\le\theta\le 1$ by using an auxiliary function $F$ (see the following definition (\[dingyi\])). Sufficient conditions are obtained to ensure the polynomial and exponential stability of the numerical approximations. The results in Liu et al [@LFM] will be improved and extended to more general cases. Several examples and non-stability results are presented to support our conclusions. author: - | Yunjiao Hu$^a$, Guangqiang Lan$^a$[^1],  Chong Zhang$^a$\ $^{a}$School of Science, Beijing University of Chemical Technology, Beijing 100029, China title: '**Polynomial and exponential stability of $\theta$-EM approximations to a class of stochastic differential equations**' --- **MSC 2010:** 60H10, 65C30. **Key words:** stochastic differential equation, $\theta$ Euler-Maruyama approximation, polynomial stability, exponential stability. Introduction ============ Let $(\Omega,\mathscr{F},P)$ be a probability space endowed with a complete filtration $(\mathscr{F}_t)_{t\geq 0}$. Let $d,m\in\mathbb{N}$ be arbitrarily fixed. We consider the following stochastic differential equation (SDE) $$\label{sde}dX_t=f(X_t,t)dt+g(X_t,t)dB_t,\ X_0=x_0\in \mathbb{R}^d,$$ where the initial value $x_0\in \mathbb{R}^d$, $(B_t)_{t\geq0}$ is an $m$-dimensional standard $\mathscr{F}_t$-Brownian motion, and $f:(t,x)\in[0,\infty)\times\mathbb{R}^d\mapsto f(t,x)\in\mathbb{R}^d$ and $g:(t,x)\in[0,\infty)\times\mathbb{R}^d\mapsto g(t,x)\in \mathbb{R}^d\otimes\mathbb{R}^m$ are both Borel measurable functions. 
The corresponding $\theta$ Euler-Maruyama ($\theta$-EM) approximation (or so-called stochastic theta method) of the above SDE is $$\label{SEM} X_{k+1}=X_k+[(1-\theta)f(X_k,k\Delta t)+ \theta f(X_{k+1},(k+1)\Delta t)]\Delta t+g(X_k,k\Delta t)\Delta B_k,$$ where $X_0:=x_0$, $\Delta t$ is a constant step size, $\theta\in [0,1]$ is a fixed parameter, and $\Delta B_k:=B((k+1)\Delta t)-B(k\Delta t)$ is the increment of the Brownian motion. Note that the $\theta$-EM method includes the classical EM method ($\theta=0$), the backward EM method ($\theta=1$) and the so-called trapezoidal method ($\theta=\frac{1}{2}$). Throughout this paper, we assume that the coefficients $f$ and $g$ satisfy the following local Lipschitz condition: For every integer $r\ge1$ and any $t\ge0$, there exists a positive constant $\bar{K}_{r,t}$ such that for any $x,y\in\mathbb{R}^d$ with $\max\{|x|,|y|\}\le r,$ $$\label{local} \max\{|f(x,t)-f(y,t)|,|g(x,t)-g(y,t)|\}\le \bar{K}_{r,t}|x-y|.$$ Condition (\[local\]) ensures that equation (\[sde\]) has a unique solution, which is denoted by $X_t(x_0)\in\mathbb{R}^d$ (this condition can be weakened to a more general one, see e.g. [@Lan; @Lan1]). Stability theory is one of the central problems in numerical analysis. The stability concepts of numerical approximations for SDEs mainly include moment stability (M-stability) and almost sure stability (trajectory stability). Results concerning different kinds of stability analysis for numerical methods can be found in the literature. For example, Baker and Buckwar [@BB] dealt with the $p$-th moment exponential stability of stochastic delay differential equations when the coefficients are both globally Lipschitz continuous, Higham [@Higham1; @Higham2] considered the scalar linear case, and Higham et al. [@HMS] the case of a one-sided Lipschitz drift under the linear growth condition. 
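A minimal sketch of one path of the $\theta$-EM scheme (\[SEM\]) in the scalar case. The implicit stage for $\theta>0$ is resolved here by fixed-point iteration (a Newton solver would be more robust in practice), and the test equation $dX_t = -2X_t\,dt + 0.5X_t\,dB_t$ is a hypothetical example, chosen because $2\langle x,f\rangle+|g|^2 = -3.75|x|^2$, i.e. condition (\[c2\]) below holds with $C=3.75$.

```python
import numpy as np

def theta_em_path(f, g, x0, dt, n_steps, theta=0.5, rng=None, fp_iters=50):
    """One path of the theta-EM scheme (SEM), scalar case.

    The implicit equation X_{k+1} = c + theta * f(X_{k+1}, t_{k+1}) * dt is
    solved by fixed-point iteration, a contraction for small enough dt.
    """
    rng = rng or np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        dB = np.sqrt(dt) * rng.standard_normal()   # Brownian increment
        c = x[k] + (1 - theta) * f(x[k], t) * dt + g(x[k], t) * dB
        y = x[k]
        for _ in range(fp_iters):
            y = c + theta * f(y, t + dt) * dt
        x[k + 1] = y
    return x

# dX = -2 X dt + 0.5 X dB: mean square exponentially stable test equation
path = theta_em_path(lambda x, t: -2 * x, lambda x, t: 0.5 * x,
                     x0=1.0, dt=0.01, n_steps=2000, theta=0.5)
print(abs(path[-1]))  # decays towards zero along the path
```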
Other results concerned with moment stability can be found in Mao’s monograph [@Mao], Higham et al [@HMY], Zong et al [@ZW], Pang et al [@PDM], Szpruch [@Szpruch] (for the so-called $V$-stability) and references therein. For the almost sure stability of numerical approximations of SDEs, via the Borel-Cantelli lemma and the Chebyshev inequality, Wu et al [@WMS] recently investigated the almost sure exponential stability of the stochastic theta method by the continuous and discrete semimartingale convergence theorems (see Rodkina and Schurz [@RS] for details); Chen and Wu [@CW] and Mao and Szpruch [@MS] also used the same method to prove the almost sure stability of the numerical approximations. However, [@CW; @HMY; @WMS] only dealt with the case that the coefficient of the diffusion part is at most linearly growing, that is, there exists $K>0$ such that $$\label{linear} |g(x)|\le K|x|, \forall x\in \mathbb{R}^d.$$ This condition excludes the case when the coefficient $g$ is super-linearly growing (that is, $g(x)=C|x|^\gamma,\ \gamma>1$). In Mao and Szpruch [@MS], the authors examined the global almost sure asymptotic stability of the $\theta$-EM scheme (\[SEM\]); they presented a rather weak sufficient condition to ensure that the $\theta$-EM solution is almost surely stable when $\frac{1}{2}<\theta\le 1$, but they did not give the convergence rate of the solution to zero explicitly. In [@ZW], the authors studied the mean square exponential stability of the $\theta$-EM scheme systematically. They proved that if $0\le\theta<\frac{1}{2},$ the $\theta$-EM scheme preserves mean square exponential stability under the linear growth condition for both the drift term and the diffusion term; if $\frac{1}{2}<\theta\le 1,$ the $\theta$-EM preserves mean square exponential stability without the linear growth condition for the drift term (the linear growth condition for the diffusion term is still necessary); exponential stability for the case $\theta=\frac{1}{2}$ is not studied there. 
However, to the best of our knowledge, there are few results devoted to the exponential stability of the numerical solutions when the coefficient of the diffusion term does not satisfy the linear growth condition, and this is one of the main motivations of this work. Recently, in [@LFM], Liu et al. examined the polynomial stability of numerical solutions of SDEs (\[sde\]). They considered the polynomial stability of both the classical and the backward Euler-Maruyama approximations under the condition that the diffusion coefficient $g$ is bounded with respect to the variable $x$, which excludes the case of unbounded $g$. This immediately raises the question of whether we can relax this condition, which is the other main motivation of this work. To study the polynomial stability of equation (\[SEM\]), we consider the following condition: $$\label{c1} 2\langle x,f(x,t)\rangle+|g(x,t)|^2\le C(1+t)^{-K_1}- K_1(1+t)^{-1}|x|^2, \forall t\ge0, x\in\mathbb{R}^d,$$ where $K_1$ and $C$ are positive constants with $K_1>1,$ $\langle \cdot,\cdot\rangle$ stands for the inner product in $\mathbb{R}^d$, and $|\cdot|$ denotes both the Euclidean vector norm and the Hilbert-Schmidt matrix norm. To study the exponential stability of equation (\[SEM\]), we need a stronger condition on the coefficients: $$\label{c2} 2\langle x,f(x,t)\rangle+|g(x,t)|^2\le -C|x|^2, \forall x\in\mathbb{R}^d,$$ where $C>0$ is a constant.
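For concreteness, a dissipativity condition of the form (\[c1\]) can be checked numerically on a grid; the sketch below does this for the coefficients used in Example 2 later in the paper, with the illustrative parameter choices $\gamma=2$, $K_1=2$, $C=1$ (our own values).

```python
# Grid check (illustration only) that the coefficients
#   f(x,t) = (-(1+t)^{1/2} |x|^{2g-2} x - 2 K1 x) / (2(1+t)),
#   g(x,t)^2 = |x|^{2g} / (1+t)^{1/2} + C / (1+t)^{K1}
# satisfy  2 x f(x,t) + g(x,t)^2 <= C (1+t)^{-K1} - K1 (1+t)^{-1} x^2.
K1, C, gam = 2.0, 1.0, 2.0

def lhs(x, t):
    f = (-(1 + t) ** 0.5 * abs(x) ** (2 * gam - 2) * x - 2 * K1 * x) / (2 * (1 + t))
    g2 = abs(x) ** (2 * gam) / (1 + t) ** 0.5 + C / (1 + t) ** K1
    return 2 * x * f + g2

def rhs(x, t):
    return C / (1 + t) ** K1 - K1 * x * x / (1 + t)

# largest violation of the condition over x in [-5,5], t in [0,10]
margin = max(lhs(x / 10, t / 10) - rhs(x / 10, t / 10)
             for x in range(-50, 51) for t in range(0, 101))
```

For these coefficients the $|x|^{2\gamma}$ terms cancel exactly, so the margin is $-K_1(1+t)^{-1}x^2\le 0$, up to rounding.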
Define an operator $L$ by $$\aligned LV(x,t):&=\frac{\partial}{\partial t}V(x,t)+\sum_{i=1}^df^i(x,t) \frac{\partial}{\partial x_i}V(x,t)\\& \quad+\frac{1}{2}\sum_{i,j=1}^d\sum_{k=1}^m g^{ik}(x,t)g^{jk}(x,t)\frac{\partial^2} {\partial x_i\partial x_j}V(x,t),\endaligned$$ where $V(x,t): \mathbb{R}^d\times\mathbb{R}^+\rightarrow\mathbb{R}^+$ has continuous second-order partial derivatives in $x$ and first-order partial derivatives in $t.$ It is clear that under conditions (\[local\]) and (\[c1\]) (or (\[c2\])), there exists a unique global solution of equation (\[sde\]). By taking $V(x,t)=(1+t)^m|x|^2$ or $V(x,t)=|x|^2,$ respectively, it is easy to see that the true solution $X_t(x_0)$ of equation (\[sde\]) is mean square polynomially stable under condition (\[c1\]) (see Liu and Chen [@LC], Theorem 1.1) and mean square exponentially stable under condition (\[c2\]) (the proof is the same as in Higham et al., see [@HMY], Appendix A). So a natural question arises: can the $\theta$-EM method reproduce the polynomial and exponential stability of the solution of (\[sde\])? If $\frac{1}{2} <\theta\le 1$, we will study the polynomial stability and exponential stability of the $\theta$-EM scheme (\[SEM\]) under conditions (\[c1\]) and (\[c2\]) respectively. For the exponential stability, we first investigate the mean square exponential stability, and then derive the almost sure exponential stability by the Borel-Cantelli lemma. If $0\le\theta\le\frac{1}{2},$ besides condition (\[c1\]) (respectively, (\[c2\])), a linear growth condition for the drift term is also needed to ensure the corresponding stability, that is, there exists $K>0$ such that $$\label{growth} |f(x,t)|\le K(1+t)^{-\frac{1}{2}}|x|$$ for the polynomial stability case and $$\label{growth1} |f(x,t)|\le K|x|, \forall x\in \mathbb{R}^d$$ for the exponential stability case. Notice that condition (\[growth\]) is strictly weaker than condition (2.4) in [@LFM].
The main feature of this paper is that we consider conditions in which both the diffusion and the drift coefficients are involved, which gives weaker sufficient conditions than the known ones, while in most of the preceding studies such conditions have been provided separately for the diffusion and drift coefficients. The rest of the paper is organized as follows. In Section 2, we give some lemmas which will be used in the following sections to prove the stability results. In Section 3 we study the polynomial stability of the $\theta$-EM scheme. Our method hinges on various properties of the gamma function and the ratios of gamma functions. We show that when $\frac{1}{2}<\theta\le 1$, the polynomial stability of the $\theta$-EM scheme holds under condition (\[c1\]) plus a one-sided Lipschitz condition on $f$; when $0\le\theta\le\frac{1}{2},$ the linear growth condition for the drift term $f$ is also needed. In Section 4, we investigate the exponential stability of the $\theta$-EM scheme for all $0\le\theta\le 1$. Finally, in Section 5 we give some non-stability results and counterexamples to support our conclusions. Preliminary =========== To ensure that the semi-implicit $\theta$-EM scheme is well defined, we need the first two lemmas. The first lemma gives the existence and uniqueness of the solution of the equation $F(x)=b.$ We can prove the existence and uniqueness of the solution of the $\theta$-EM scheme based on this lemma. \[l1\] Let $F$ be a vector field on $\mathbb{R}^d$ and consider the equation $$\label{e1} F(x)=b$$ for a given $b\in\mathbb{R}^d$. If $F$ is monotone, that is, $$\langle x-y, F(x)-F(y)\rangle>0$$ for all $x,y\in\mathbb{R}^d,x\neq y$, and $F$ is continuous and coercive, that is, $$\lim_{|x|\rightarrow\infty}\frac{\langle x, F(x)\rangle}{|x|}=\infty,$$ then for every $b\in\mathbb{R}^d,$ equation (\[e1\]) has a unique solution $x\in\mathbb{R}^d$. This lemma follows directly from Theorem 26.A in [@Zeidler].
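In the scalar case, the unique solvability asserted by this lemma can be exploited directly: a continuous monotone coercive $F$ can be inverted by bisection. The sketch below (with a hypothetical drift $f(x)=-x-x^3$, so that $F(x)=x-\theta\Delta t\, f(x)$ is strictly increasing and coercive) illustrates why the implicit $\theta$-EM equation is solvable.

```python
# Bisection for F(x) = b with a scalar monotone coercive F (sketch under
# stated assumptions; f below is a hypothetical one-sided Lipschitz drift).
theta, dt = 1.0, 0.1

def f(x):
    return -x - x ** 3

def F(x):
    # F(x) = x - theta*dt*f(x) = 1.1 x + 0.1 x^3, strictly increasing
    return x - theta * dt * f(x)

def solve_F_eq(b, lo=-1e6, hi=1e6, tol=1e-12):
    # by coercivity, F(lo) < b < F(hi) for |lo|, hi large enough
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

root = solve_F_eq(2.0)
```

Monotonicity guarantees the bisection bracket always contains exactly one root, mirroring the uniqueness statement of the lemma.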
Consider the following one-sided Lipschitz condition on $f$: There exists $L>0$ such that $$\label{c3} \langle x-y, f(x,t)-f(y,t)\rangle\le L|x-y|^2.$$ \[l2\] Define $$\label{dingyi}F(x,t):=x-\theta\Delta t f(x,t), \forall t>0, x\in\mathbb{R}^d.$$ Assume that conditions (\[c1\]) and (\[c3\]) hold and that $\Delta t$ is small enough such that $\Delta t<\frac{1}{\theta L}$. Then for any $t>0$ and $b\in\mathbb{R}^d,$ there is a unique solution of the equation $F(x,t)=b.$ By this lemma, we know that the $\theta$-EM scheme is well defined under conditions (\[c1\]) and (\[c3\]) for $\Delta t$ small enough. The proof of Lemma \[l2\] is the same as that of Lemma 3.4 in [@LFM] and Lemma 3.3 in [@MS2]; just notice that condition (\[c3\]) together with $\Delta t<\frac{1}{\theta L}$ implies $\langle x-y,F(x,t)-F(y,t)\rangle>0$, and (\[c1\]) (or (\[c2\])) implies $\langle x,F(x,t)\rangle/|x|\rightarrow\infty$ as $|x|\rightarrow\infty$. Notice also that our condition (\[c3\]) is weaker than (2.3) in [@LFM]. We also need the following two lemmas to study the polynomial stability of the $\theta$-EM scheme. \[l3\] Given $\alpha>0$ and $\beta\ge 0,$ if $0<\delta<\alpha^{-1}$, then $$\prod_{i=a}^b\Big(1-\frac{\alpha\delta}{1+(i+\beta)\delta}\Big)= \frac{\Gamma(b+1+\delta^{-1}+\beta-\alpha)}{\Gamma(b+1+\delta^{-1}+\beta)} \times\frac{\Gamma(a+\delta^{-1}+\beta)}{\Gamma(a+\delta^{-1}+\beta-\alpha)},$$ where $0\le a\le b,$ $\Gamma(x):=\int_0^\infty y^{x-1}e^{-y}dy.$ \[l4\] For any $x>0,$ if $0<\eta<1,$ then $$\frac{\Gamma(x+\eta)}{\Gamma(x)}<x^\eta,$$ and if $\eta>1,$ then $$\frac{\Gamma(x+\eta)}{\Gamma(x)}>x^\eta.$$ The proof of Lemmas \[l3\] and \[l4\] can be found in [@LFM]. Polynomial stability of $\theta$-EM solution (\[SEM\]) ====================================================== We are now in a position to give the polynomial stability of the $\theta$-EM solution (\[SEM\]). First, we consider the case $\frac{1}{2}<\theta\le 1.$ We have the following \[polynomial\] Assume that conditions (\[c1\]) and (\[c3\]) hold.
If $\frac{1}{2}<\theta\le 1,$ then for any $0<\varepsilon<K_1-1$, we can choose $\Delta t$ small enough such that the $\theta$-EM solution satisfies $$\label{bu} \limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{\log k\Delta t}\le -(K_1-1-\varepsilon)$$ for any initial value $X_0=x_0\in \mathbb{R}^d.$ **Proof** We first prove that condition (\[c1\]) implies that for $\Delta t$ small enough, $$\label{ineq} 2\langle x,f(x,t)\rangle+|g(x,t)|^2+(1-2\theta)\Delta t|f(x,t)|^2\le C(1+t)^{-K_1}-(K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2$$ holds for all $t\ge0, x\in\mathbb{R}^d.$ Here and in the following, $F$ is defined by (\[dingyi\]). In fact, we only need to show that $$(2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2\ge - K_1(1+t)^{-1}|x|^2.$$ By the definition of $F(x,t),$ we have $$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2\\& =(2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}[|x|^2-2\theta\Delta t\langle x, f(x,t)\rangle+\theta^2\Delta t^2|f(x,t)|^2]\\& =[(2\theta-1)\Delta t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta t^2]|f(x,t)|^2\\&\quad+2(K_1-\varepsilon)(1+t)^{-1}\theta\Delta t\langle x,f(x,t)\rangle- (K_1-\varepsilon)(1+t)^{-1}|x|^2\\& =a|f(x,t)+bx|^2-(ab^2+(K_1-\varepsilon)(1+t)^{-1})|x|^2,\endaligned$$ where $$a:=(2\theta-1)\Delta t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta t^2,\quad b:=\frac{(K_1-\varepsilon)(1+t)^{-1}\theta\Delta t}{a}.$$ Since $$a\ge (2\theta-1)\Delta t-(K_1-\varepsilon)\theta^2\Delta t^2 =\Delta t(2\theta-1-(K_1-\varepsilon)\theta^2\Delta t),$$ we can choose $\Delta t$ small enough (for example $\Delta t\le \min\{\frac{1}{\theta L},\frac{(2\theta-1)(\varepsilon\wedge 1)}{K_1(K_1-\varepsilon)\theta^2}\}$) such that $a\ge 0$ and $ab^2\le \frac{\varepsilon}{1+t}.$ Then we have $$\aligned \ (2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2& \ge -(ab^2+(K_1-\varepsilon)(1+t)^{-1})|x|^2\\& \ge
-K_1(1+t)^{-1}|x|^2.\endaligned$$ This completes the proof of inequality (\[ineq\]). Now by the definition of $F(x,t)$, it follows that $$F(X_{k+1},(k+1)\Delta t)=F(X_{k},k\Delta t)+f(X_k,k\Delta t)\Delta t +g(X_k,k\Delta t)\Delta B_k.$$ So we have $$\label{F}\aligned |F(X_{k+1},(k+1)\Delta t)|^2&= [2\langle X_k,f(X_k,k\Delta t)\rangle+|g(X_{k},k\Delta t)|^2+(1-2\theta) |f(X_{k},k\Delta t)|^2\Delta t]\Delta t\\&\quad+|F(X_{k},k\Delta t)|^2+M_k,\endaligned$$ where $$\label{M}\aligned M_k&:=|g(X_{k},k\Delta t)\Delta B_k|^2 -|g(X_{k},k\Delta t)|^2\Delta t+2\langle F(X_{k},k\Delta t),g(X_{k},k\Delta t)\Delta B_k\rangle\\& \quad+2\langle f(X_{k},k\Delta t)\Delta t,g(X_{k},k\Delta t)\Delta B_k\rangle.\endaligned$$ Notice that $$\mathbb{E}(M_k|\mathscr{F}_{k\Delta t})=0.$$ Then by condition (\[c1\]) and inequality (\[ineq\]), we have $$\mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2|\mathscr{F}_{k\Delta t})\le (1-\frac{(K_1-\varepsilon)\Delta t}{1+k\Delta t})|F(X_{k},k\Delta t)|^2 +\frac{C\Delta t}{(1+k\Delta t)^{K_1-\varepsilon}}.$$ Iterating, we get $$\aligned\mathbb{E}(|F(X_{k},k\Delta t)|^2)&\le \Big(\prod_{i=0}^{k-1} (1-\frac{(K_1-\varepsilon)\Delta t}{1+i\Delta t})\Big)|F(x_0,0)|^2\\&\quad +\sum_{r=0}^{k-1}\Big(\prod_{i=r+1}^{k-1}(1-\frac{(K_1-\varepsilon)\Delta t}{1+i\Delta t})\Big) \frac{C\Delta t}{(1+r\Delta t)^{K_1-\varepsilon}}.\endaligned$$ Then by Lemma \[l3\], $$\label{budeng}\aligned\mathbb{E}(|F(X_{k},k\Delta t)|^2)&\le \frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon))\Gamma(\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t}) \Gamma(\frac{1}{\Delta t}-(K_1-\varepsilon))}|F(x_0,0)|^2\\&\quad +C\Delta t\sum_{r=0}^{k-1}\frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon)) \Gamma(r+1+\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})\Gamma(r+1+\frac{1}{\Delta t} -(K_1-\varepsilon))}(1+r\Delta t)^{-(K_1-\varepsilon)}.\endaligned$$ On the other hand, since $K_1-\varepsilon>1,$ by Lemma \[l4\] or [@LFM] one can see that
$$\label{kongzhi1}\frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon)) \Gamma(\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})\Gamma(\frac{1}{\Delta t}- (K_1-\varepsilon))}\le ((k-(K_1-\varepsilon))\Delta t+1)^{-(K_1-\varepsilon)}$$ and that $$\label{kongzhi2}\frac{\Gamma(k+\frac{1}{\Delta t}-(K_1-\varepsilon)) \Gamma(r+1+\frac{1}{\Delta t})}{\Gamma(k+\frac{1}{\Delta t})\Gamma(r+1+\frac{1}{\Delta t}- (K_1-\varepsilon))}\le ((k-(K_1-\varepsilon))\Delta t+1)^{-(K_1-\varepsilon)}((r+1)\Delta t+1)^{K_1-\varepsilon}.$$ Substituting (\[kongzhi1\]) and (\[kongzhi2\]) into inequality (\[budeng\]) yields $$\aligned\mathbb{E}(|F(X_{k},k\Delta t)|^2)&\le((k-(K_1-\varepsilon))\Delta t+1)^{-(K_1-\varepsilon)} |F(x_0,0)|^2\\&\quad+C\Delta t\sum_{r=0}^{k-1}((k-(K_1-\varepsilon))\Delta t+1)^{-(K_1-\varepsilon)} \frac{((r+1)\Delta t+1)^{K_1-\varepsilon}}{(1+r\Delta t)^{K_1-\varepsilon}}\\& \le2^{K_1-\varepsilon}(k\Delta t+1)^{-(K_1-\varepsilon)}\Big[|F(x_0,0)|^2+C\Delta t\sum_{r=0}^{k-1} \Big(\frac{(r+1)\Delta t+1}{1+r\Delta t}\Big)^{K_1-\varepsilon}\Big]\\&\le2^{K_1-\varepsilon}(k\Delta t+1)^{-(K_1-\varepsilon)} [|F(x_0,0)|^2+C\cdot 2^{K_1-\varepsilon}k\Delta t]\\&\le 2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot 2^{K_1-\varepsilon}) (k\Delta t+1)^{-(K_1-\varepsilon)+1}.\endaligned$$ We have used the fact that $(k-(K_1-\varepsilon))\Delta t+1\ge \frac{1}{2}(k\Delta t+1) $ for small $\Delta t$ in the second inequality and that $((r+1)\Delta t+1)/(1+r\Delta t)\le 2$ in the third inequality.
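Both Lemma \[l3\]'s product identity and the polynomial decay rate extracted above can be checked numerically. The sketch below is our own illustration: the parameter values $\alpha=2.5$, $\beta=1$, $\delta=0.1$ and $c=K_1-\varepsilon=2$, $C=1$ are arbitrary choices satisfying the hypotheses. It evaluates the Gamma ratios via `lgamma` and iterates the worst-case scalar recursion $u_{k+1}=(1-\frac{c\Delta t}{1+k\Delta t})u_k+\frac{C\Delta t}{(1+k\Delta t)^{c}}$, whose decay like $t^{-(c-1)}$ mirrors the estimate just obtained.

```python
# Part 1: numerical check of Lemma l3's telescoping product identity,
# evaluating the Gamma ratios via lgamma to avoid overflow.
import math

alpha, beta, delta, a, b = 2.5, 1.0, 0.1, 3, 40
prod = math.prod(1 - alpha * delta / (1 + (i + beta) * delta)
                 for i in range(a, b + 1))
gamma_side = math.exp(
    math.lgamma(b + 1 + 1 / delta + beta - alpha)
    - math.lgamma(b + 1 + 1 / delta + beta)
    + math.lgamma(a + 1 / delta + beta)
    - math.lgamma(a + 1 / delta + beta - alpha))

# Part 2: iterate the model recursion and estimate the decay exponent
# of u_k between t = 10 and t = 100; theory predicts about -(c-1) = -1.
c, C, dt = 2.0, 1.0, 0.01
u, samples = 1.0, {}
for k in range(int(100 / dt) + 1):
    t = k * dt
    if abs(t - 10.0) < 1e-9 or abs(t - 100.0) < 1e-9:
        samples[round(t)] = u
    u = (1 - c * dt / (1 + t)) * u + C * dt / (1 + t) ** c
slope = math.log(samples[100] / samples[10]) / math.log(10.0)
```

The observed slope is slightly above $-1$ because of the lower-order forcing term, consistent with the rate $-(K_1-1-\varepsilon)$ in the theorem.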
Now by condition (\[c1\]), $$\aligned |F(x,t)|^2&=|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle+\theta^2\Delta t^2|f(x,t)|^2\\& \ge |x|^2- C(1+t)^{-K_1}\theta\Delta t+ K_1(1+t)^{-1}|x|^2\theta\Delta t+\theta^2\Delta t^2|f(x,t)|^2\\& \ge |x|^2- C(1+t)^{-K_1}\theta\Delta t\ge |x|^2- C(1+t)^{-(K_1-\varepsilon)}\theta\Delta t.\endaligned$$ Therefore, for small enough $\Delta t,$ $$\aligned\mathbb{E}(|X_{k}|^2)&\le\mathbb{E}(|F(X_{k},k\Delta t)|^2)+C(1+k\Delta t)^{-(K_1-\varepsilon)}\theta\Delta t\\& \le 2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot 2^{K_1-\varepsilon})(k\Delta t+1)^{-(K_1-\varepsilon)+1}+C(1+k\Delta t)^{-(K_1-\varepsilon)}\theta\Delta t\\& \le 2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot2^{K_1-\varepsilon}+C\theta)(k\Delta t+1)^{-(K_1-\varepsilon)+1}.\endaligned$$ Namely, the $\theta$-EM solution of (\[sde\]) is mean square polynomially stable with rate no greater than $-(K_1-1-\varepsilon)$ when $\frac{1}{2}<\theta\le 1$ and $\Delta t$ is small enough. We complete the proof. $\square$ Notice that we cannot let $\varepsilon\rightarrow0$ in (\[bu\]) since $\Delta t$ depends on $\varepsilon.$ Moreover, our condition (\[c1\]) covers conditions (2.5) and (2.6) in [@LFM] for the polynomial stability of the backward EM approximation of SDE (\[sde\]) (though not entirely: there $K_1>0.5$ suffices, while here $K_1>1$ is required). Now let us consider the case $0\le\theta\le\frac{1}{2}.$ We have \[polynomial1\] Assume that conditions (\[c1\]), (\[growth\]) and (\[c3\]) hold.
If $0\le\theta\le\frac{1}{2},$ then for any $0<\varepsilon<K_1-1$, we can choose $\Delta t$ small enough such that the $\theta$-EM solution satisfies $$\limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{\log k\Delta t}\le -(K_1-1-\varepsilon)$$ for any initial value $X_0=x_0\in \mathbb{R}^d.$ **Proof** Notice that in this case, by condition (\[growth\]), $$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}|F(x,t)|^2\\& =(2\theta-1)\Delta t|f(x,t)|^2- (K_1-\varepsilon)(1+t)^{-1}[|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle +\theta^2\Delta t^2|f(x,t)|^2]\\& =[(2\theta-1)\Delta t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta t^2]|f(x,t)|^2\\&\quad+2 (K_1-\varepsilon) (1+t)^{-1}\theta\Delta t\langle x,f(x,t)\rangle- (K_1-\varepsilon)(1+t)^{-1}|x|^2\\& \ge aK^2(1+t)^{-1}|x|^2-2 K(K_1-\varepsilon)(1+t)^{-\frac{3}{2}}\theta\Delta t|x|^2-(K_1-\varepsilon)(1+t)^{-1}|x|^2,\endaligned$$ where $$a:=(2\theta-1)\Delta t-(K_1-\varepsilon)(1+t)^{-1}\theta^2\Delta t^2\le 0.$$ Thus, we can choose $\Delta t$ small enough such that $$aK^2(1+t)^{-1}-2 KK_1(1+t)^{-\frac{3}{2}}\theta\Delta t-(K_1-\varepsilon)(1+t)^{-1}\ge -K_1(1+t)^{-1}.$$ Therefore, by condition (\[c1\]), we have $$\mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2|\mathscr{F}_{k\Delta t})\le (1-\frac{(K_1-\varepsilon) \Delta t}{1+k\Delta t})|F(X_{k},k\Delta t)|^2+\frac{C\Delta t}{(1+k\Delta t)^{K_1-\varepsilon}}.$$ Then by the same argument as in the proof of Theorem \[polynomial\], we have $$\aligned |F(x,t)|^2&=|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle+\theta^2\Delta t^2|f(x,t)|^2\\& \ge |x|^2- C(1+t)^{-K_1}\theta\Delta t+ K_1(1+t)^{-1}|x|^2\theta\Delta t\\& \ge |x|^2- C(1+t)^{-K_1}\theta\Delta t\ge |x|^2- C(1+t)^{-(K_1-\varepsilon)}\theta\Delta t.\endaligned$$ Therefore, for small enough $\Delta t,$ we can derive in the same way as in the proof of Theorem \[polynomial\] that $$\aligned\mathbb{E}(|X_{k}|^2)&\le\mathbb{E}(|F(X_{k},k\Delta t)|^2)+C(1+k\Delta t)^{-(K_1-\varepsilon)}\theta\Delta t\\& \le
2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot 2^{K_1-\varepsilon})(k\Delta t+1)^{-(K_1-\varepsilon)+1}+C(1+k\Delta t)^{-(K_1-\varepsilon)}\theta\Delta t\\& \le 2^{K_1-\varepsilon}(|F(x_0,0)|^2+C\cdot2^{K_1-\varepsilon}+C\theta)(k\Delta t+1)^{-(K_1-\varepsilon)+1}.\endaligned$$ Namely, the $\theta$-EM solution of (\[sde\]) is mean square polynomially stable with rate no greater than $-(K_1-1-\varepsilon)$ when $0\le\theta\le\frac{1}{2}$ and $\Delta t$ is small enough. We complete the proof. $\square$ In Condition 2.3 of [@LFM], the authors gave sufficient conditions on the coefficients $f$ and $g$ separately for the polynomial stability of the classical EM scheme. If their conditions (2.5) and (2.6) hold for some $K_1>1$ and $C>0,$ then it is easy to see that our condition (\[c1\]) holds automatically for the same $K_1$ and $C,$ and our condition (\[growth\]) is strictly weaker than their (2.4). Therefore, we have improved the results of Liu et al. [@LFM] and generalized them to $0\le\theta\le\frac{1}{2}.$ Exponential stability of $\theta$-EM solution (\[SEM\]) ======================================================= Now let us consider the exponential stability of the $\theta$-EM solution of (\[sde\]). When SDE (\[sde\]) reduces to the time-homogeneous case, that is, $$\label{sde1}dX_t=f(X_t)dt+g(X_t)dB_t,\ X_0=x_0\in \mathbb{R}^d,a.s.$$ the corresponding $\theta$-EM approximation becomes $$\label{SEM1} X_{k+1}=X_k+[(1-\theta)f(X_k)+\theta f(X_{k+1})]\Delta t +g(X_k)\Delta B_k.$$ In [@MS], Mao and Szpruch gave a sufficient condition ensuring the almost sure stability of the $\theta$-EM solution of (\[sde1\]) in the case $\frac{1}{2}<\theta\le 1$. However, they did not reveal the rate of convergence. Their method of proof is mainly based on the discrete semimartingale convergence theorem. We will study the exponential stability systematically for $0\le\theta\le 1$ in the time-inhomogeneous case.
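The kind of mean square decay studied in this section can be observed numerically. The following sketch is entirely our own illustration: the linear test equation $dX_t=-X_t\,dt+0.5X_t\,dB_t$ (which satisfies (\[c2\]) with $C=1.75$), the backward EM scheme $\theta=1$, and a fixed seed are all arbitrary choices. For $\theta=1$ and this drift the implicit step is linear and solves in closed form, $X_{k+1}=X_k(1+0.5\Delta B_k)/(1+\Delta t)$.

```python
# Monte Carlo estimate (fixed seed) of E|X_K|^2 for backward EM applied
# to dX = -X dt + 0.5 X dB; the exact solution is mean square
# exponentially stable, and the sample mean at t = 5 should be tiny.
import math
import random

random.seed(0)
dt, steps, paths, x0 = 0.05, 100, 2000, 1.0
total = 0.0
for _ in range(paths):
    x = x0
    for _ in range(steps):
        dB = random.gauss(0.0, math.sqrt(dt))
        x = x * (1.0 + 0.5 * dB) / (1.0 + dt)  # closed-form implicit step
    total += x * x
mean_sq = total / paths
```

No inner iteration is needed here precisely because the drift is linear; for the super-linear drifts treated below the implicit equation must be solved as in Lemma \[l2\].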
We first prove the mean square exponential stability, and then obtain the almost sure stability by the Borel-Cantelli lemma. \[exponential\] Assume that conditions (\[c2\]) and (\[c3\]) hold. Then for any $\frac{1}{2}<\theta\le 1$ and $0<\varepsilon<1$, we can choose $\Delta t$ small enough such that the $\theta$-EM solution satisfies $$\limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{k\Delta t}\le -C(1-\varepsilon)$$ for any initial value $X_0=x_0\in \mathbb{R}^d$ and $$\label{ex} \limsup_{k\rightarrow \infty}\frac{\log |X_k|}{k\Delta t}\le -\frac{C(1-\varepsilon)}{2}\quad a.s.$$ **Proof** Define $F(x,t)$ as in Lemma \[l2\]. We have $$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\\& =(2\theta-1)\Delta t|f(x,t)|^2- C(1-\varepsilon)[|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle +\theta^2\Delta t^2|f(x,t)|^2]\\& =[(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon)]|f(x,t)|^2+2C\theta\Delta t (1-\varepsilon)\langle x,f(x,t)\rangle-C(1-\varepsilon)|x|^2\\& =a|f(x,t)+bx|^2-(ab^2+C(1-\varepsilon))|x|^2,\endaligned$$ where $$a:=(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon), \quad b:=\frac{C\theta\Delta t(1-\varepsilon)}{a}.$$ We can choose $\Delta t$ small enough (for example $\Delta t\le \min \{\frac{1}{\theta L},\frac{\varepsilon(2\theta-1)}{C(1-\varepsilon)\theta^2}\}$) such that $a\ge 0$ and $ab^2\le C\varepsilon$, and therefore $$(2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\ge -C|x|^2.$$ Then by condition (\[c2\]), we can prove that $$\label{ineq1} 2\langle x,f(x,t)\rangle+|g(x,t)|^2+(1-2\theta)\Delta t|f(x,t)|^2\le -C(1-\varepsilon)|F(x,t)|^2$$ holds for all $x\in\mathbb{R}^d.$ Therefore, by (\[F\]), for small enough $\Delta t$ ($\Delta t\le \frac{1}{\theta L}\wedge\frac{\varepsilon(2\theta-1)}{C\theta^2(1-\varepsilon)} \wedge\frac{1}{C(1-\varepsilon)}$), $$\aligned \mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2) \le \mathbb{E}(|F(X_{k},k\Delta t)|^2)(1-C(1-\varepsilon)\Delta t).\endaligned$$ So we have
$$\label{mean1}\aligned \mathbb{E}(|X_{k}|^2)\le \mathbb{E}(|F(X_{k},k\Delta t)|^2) \le |F(x_0,0)|^2(1-C(1-\varepsilon)\Delta t)^k,\endaligned$$ or $$\label{mean}\aligned \mathbb{E}(|X_{k}|^2)\le |F(x_0,0)|^2e^{-C(1-\varepsilon)k\Delta t}, \forall k\ge 1.\endaligned$$ The first inequality of (\[mean1\]) holds because of condition (\[c2\]). Thus, the $\theta$-EM solution of (\[sde\]) is mean square exponentially stable when $\frac{1}{2}<\theta\le 1$ and $\Delta t$ is small enough. On the other hand, by the Chebyshev inequality, inequality (\[mean\]) implies that $$P(|X_k|^2>k^2e^{-kC(1-\varepsilon)\Delta t})\le\frac{|F(x_0,0)|^2}{k^2}, \forall k\ge 1.$$ Then by the Borel-Cantelli lemma, we see that for almost all $\omega\in\Omega$ $$\label{bound}|X_k|^2\le k^2e^{-kC(1-\varepsilon)\Delta t}$$ holds for all but finitely many $k$. Thus, for all $\omega\in\Omega$ excluding a $P$-null set, there exists $k_0(\omega)$ such that (\[bound\]) holds whenever $k\ge k_0$. Therefore, for almost all $\omega\in\Omega$, $$\label{bound1}\frac{1}{k\Delta t}\log|X_k|\le-\frac{C(1-\varepsilon)}{2} +\frac{\log k}{k\Delta t}$$ whenever $k\ge k_0$. Letting $k\rightarrow\infty$ we obtain (\[ex\]). The proof is then complete. $\square$ If $0\le\theta\le\frac{1}{2},$ then we have the following \[exponential1\] Assume that conditions (\[c2\]), (\[growth1\]) and (\[c3\]) hold.
Then for any $0<\varepsilon<1$, we can choose $\Delta t$ small enough such that the $\theta$-EM solution satisfies $$\limsup_{k\rightarrow \infty}\frac{\log \mathbb{E}|X_k|^2}{k\Delta t}\le -C(1-\varepsilon)$$ for any initial value $X_0=x_0\in \mathbb{R}^d$ and $$\label{ex1} \limsup_{k\rightarrow \infty}\frac{\log |X_k|}{k\Delta t}\le -\frac{C(1-\varepsilon)}{2}\quad a.s.$$ **Proof** By the same argument as in the proof of Theorem \[polynomial1\], we have $$\aligned &\quad\ (2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\\& =(2\theta-1)\Delta t|f(x,t)|^2- C(1-\varepsilon)[|x|^2-2\theta\Delta t\langle x,f(x,t)\rangle +\theta^2\Delta t^2|f(x,t)|^2]\\& =[(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon)]|f(x,t)|^2+2C\theta\Delta t (1-\varepsilon)\langle x,f(x,t)\rangle-C(1-\varepsilon)|x|^2\\& \ge aK^2|x|^2-2KC\theta\Delta t(1-\varepsilon)|x|^2-C(1-\varepsilon)|x|^2\endaligned$$ since $$a:=(2\theta-1)\Delta t-C\theta^2\Delta t^2(1-\varepsilon)\le 0.$$ We have used condition (\[growth1\]) in the last inequality.
We can choose $\Delta t$ small enough such that $$\Delta t\le \frac{1}{\theta L}\wedge\frac{K(1-2\theta)+2C\theta(1-\varepsilon)}{KC(1-\varepsilon)\theta^2} \wedge\frac{C\varepsilon}{2(K^2(1-2\theta)+2KC\theta(1-\varepsilon))},$$ and thus $$aK^2-2KC\theta\Delta t(1-\varepsilon)\ge -C\varepsilon.$$ Then we have $$(2\theta-1)\Delta t|f(x,t)|^2-C(1-\varepsilon)|F(x,t)|^2\ge -C|x|^2.$$ Then by condition (\[c2\]), we can prove that $$2\langle x,f(x,t)\rangle+|g(x,t)|^2+(1-2\theta)\Delta t|f(x,t)|^2\le -C(1-\varepsilon)|F(x,t)|^2$$ holds for all $x\in\mathbb{R}^d.$ Therefore, for small enough $\Delta t$, $$\aligned \mathbb{E}(|F(X_{k+1},(k+1)\Delta t)|^2) \le \mathbb{E}(|F(X_{k},k\Delta t)|^2)(1-C(1-\varepsilon)\Delta t).\endaligned$$ So we have $$\label{mean2}\aligned \mathbb{E}(|X_{k}|^2)\le \mathbb{E}(|F(X_{k},k\Delta t)|^2) \le |F(x_0,0)|^2(1-C(1-\varepsilon)\Delta t)^k,\endaligned$$ or $$\label{mean3}\aligned \mathbb{E}(|X_{k}|^2)\le |F(x_0,0)|^2e^{-C(1-\varepsilon)k\Delta t}, \forall k\ge 1.\endaligned$$ The first inequality of (\[mean2\]) holds because of condition (\[c2\]). Thus, the $\theta$-EM solution of (\[sde\]) is mean square exponentially stable when $0\le\theta\le\frac{1}{2}$ and $\Delta t$ is small enough. From (\[mean3\]) we can show the almost sure stability assertion (\[ex1\]) in the same way as in the proof of Theorem \[exponential\]. The proof is complete. $\square$ Non-stability results and counterexamples ========================================== In this section we will give some non-stability results for the classical EM scheme and counterexamples to support our conclusions. We show that there are cases where our assertions apply while those in the literature do not.
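The mechanism behind the divergence results below can already be seen on the noise-free EM iteration for a cubic drift (our own illustrative constants $a=b=-1$, $\Delta t=0.01$, corresponding to $q=3$ and $c=0$ in (\[SDE\]) below): once $|b|X_k^2\Delta t$ is of order one, each explicit step amplifies $|X_k|$, while small initial values decay.

```python
# Noise-free EM iteration X_{k+1} = X_k + (a X_k + b X_k^3) dt.
# The underlying ODE x' = -x - x^3 is stable, yet for a large initial
# value the explicit scheme blows up super-exponentially.
a, b, dt = -1.0, -1.0, 0.01

x = 30.0                    # large initial value: divergence
for _ in range(5):
    x = x + (a * x + b * x * x * x) * dt
blow_up = abs(x)

y = 0.5                     # small initial value: decay
for _ in range(5):
    y = y + (a * y + b * y * y * y) * dt
```

After only five steps the large trajectory exceeds $10^{100}$; the lemmas below show that, with noise, such divergence still occurs with positive probability.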
Let us consider the following one-dimensional stochastic differential equation: $$\label{SDE} dX_t=(aX_t+b|X_t|^{q-1}X_t)dt+c|X_t|^\gamma dB_t,\ X_0=x_0(\neq0)\in \mathbb{R}.$$ When $b\le0, q> 0$ and $\gamma\ge\frac{1}{2},$ by Corollary 2.7 of Gyöngy and Krylov [@GK] (see also [@IW; @Lan; @Lan1; @RY]), there is a unique global solution of equation (\[SDE\]). Here $|x|^{q-1}x:=0$ if $x=0$. For this equation, if $q=2\gamma-1, a<0$ and $2b+c^2\le 0,$ then condition (\[c2\]) is automatically satisfied. Therefore, the true solution of SDE (\[SDE\]) is mean square exponentially stable. Now let us consider the corresponding Euler-Maruyama approximation: $$\label{EM} X_{k+1}=X_k+(aX_k+b|X_k|^{q-1}X_k)\Delta t+c|X_k|^\gamma \Delta B_k.$$ For the classical EM approximation $X_k$, we have the following \[divergence1\] Suppose $q>1, q>\gamma.$ If $\Delta t>0$ is small enough, and $$|X_1|\ge\frac{2^\frac{q+2}{q-1}}{(|b|\Delta t)^\frac{1}{q-1}},$$ then for any $K\ge 1,$ $$P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}},\forall 1\le k\le K)\ge\exp(-4e^{-\alpha/\sqrt{\Delta t}})>0,$$ where $\alpha:=\frac{2^{\frac{(q-\gamma)(q+2)}{q-1}}}{2|c|}(1\wedge((q-\gamma)\log 2))>0.$ That is, no matter what values $a,b,c$ take, by taking the initial value and the step size suitably, the numerical approximation of SDE (\[SDE\]) is divergent with positive probability when $q>1, q>\gamma.$ **Proof of Lemma \[divergence1\]**: According to (\[EM\]), $$\aligned|X_{k+1}|&=|X_k|\Big| b|X_k|^{q-1}\Delta t+c\cdot \textrm{sgn} (X_k)|X_k|^{\gamma-1}\Delta B_k+1+a\Delta t\Big|\\& \ge |X_k|\Big||b||X_k|^{q-1}\Delta t-|c| |X_k|^{\gamma-1}|\Delta B_k|-1-|a|\Delta t\Big|.\endaligned$$ Take $\Delta t$ small enough such that $|a|\Delta t\le 1.$ If $|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}$ and $|\Delta B_k|\le\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)}$, then $$\aligned
|X_{k+1}|&\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}} (2^{k(q-1)+3}(1-\frac{1}{2})-2)\\& \ge\frac{2^{k+1+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}.\endaligned$$ Thus, given that $|X_1|\ge\frac{2^{\frac{q+2}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}$, the event $\{|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}, \forall 1\le k\le K\}$ contains the event $\{|\Delta B_k|\le \frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)},\forall 1\le k\le K\}$. So $$P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}},\forall 1\le k\le K) \ge\prod_{k=1}^KP(|\Delta B_k|\le\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)}).$$ We have used the fact that the $\{\Delta B_k\}$ are independent in the above inequality. But $$\aligned P(|\Delta B_k|\ge\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)})&= P(\frac{|\Delta B_k|}{\sqrt{\Delta t}}\ge\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}})\\& =\frac{2}{\sqrt{2\pi}}\int_{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c| \sqrt{\Delta t}}}^\infty e^{-\frac{x^2}{2}}dx.\endaligned$$ We can take $\Delta t$ small enough such that ${\frac{ 2^{(k+\frac{3}{q-1}) (q-\gamma)}}{2|c|\sqrt{\Delta t}}}\ge 2,$ so $x\le\frac{x^2}{2}$ for $x\ge {\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}$ and therefore, $$\aligned P(|\Delta B_k|\ge\frac{1}{2|c|}2^{(k+\frac{3}{q-1})(q-\gamma)})& \le\frac{2}{\sqrt{2\pi}}\int_{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c| \sqrt{\Delta t}}}^\infty e^{-x}dx\\&=\frac{2}{\sqrt{2\pi}}\exp\{ -{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}\}.\endaligned$$ Since $$\log (1-u)\ge -2u,\quad 0<u<\frac{1}{2},$$ we have $$\aligned\log P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}, \forall 1\le k\le K)&\ge\sum_{k=1}^K\log (1-\exp(-{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}))\\& \ge-2\sum_{k=1}^K\exp(-{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c|\sqrt{\Delta t}}}).\endaligned$$ Next, by using the fact that
$r^x\ge r(1\wedge\log r)x$ for any $ x\ge 1, r>1,$ we have $$\aligned\sum_{k=1}^K\exp(-{\frac{ 2^{(k+\frac{3}{q-1})(q-\gamma)}}{2|c| \sqrt{\Delta t}}})&=\sum_{k=1}^K\exp(-{\frac{ 2^{\frac{3(q-\gamma)}{q-1}}}{2|c|\sqrt{\Delta t}}}(2^{q-\gamma})^k)\\& \le\sum_{k=1}^K\exp(-{\frac{ 2^{\frac{3(q-\gamma)}{q-1}}}{2|c|\sqrt{\Delta t}}}2^{q-\gamma}(1\wedge\log 2^{q-\gamma})k)\\& \le\frac{e^{-\frac{\alpha}{\sqrt{\Delta t}}}}{1-e^{-\frac{\alpha}{\sqrt{\Delta t}}}} \le 2e^{-\frac{\alpha}{\sqrt{\Delta t}}}\endaligned$$ for $\Delta t$ small enough, where $\alpha:=\frac{1}{2|c|}2^{\frac{(q+2)(q-\gamma)}{q-1}}(1\wedge\log 2^{q-\gamma}).$ Hence $$\aligned\log P(|X_k|\ge\frac{2^{k+\frac{3}{q-1}}}{(|b|\Delta t)^\frac{1}{q-1}}, \forall 1\le k\le K)\ge -4e^{-\frac{\alpha}{\sqrt{\Delta t}}}.\endaligned$$ We complete the proof. $\square$ When $0<q<1$, $\frac{1}{2}\le\gamma<1$ and $|b|<a$, we also have a divergence result for the EM approximation: \[divergence2\] For any $\Delta t>0$ small enough, if $|X_1|\ge r:=1+\frac{a-|b|}{2}\Delta t,$ then there exist $k_0\ge 1$ (depending on $\Delta t$), a finite constant $A$ and $\alpha>0$ such that $$\log P(|X_k|\ge r^k,\forall k\ge 1)\ge A-\frac{2e^{-k_0\alpha}}{1-e^{-\alpha}}>-\infty,$$ where $\alpha:=\frac{(a-|b|)\sqrt{\Delta t}}{2|c|}r^{1-\gamma}(1\wedge\log r^{1-\gamma}).$ **Proof**: First, we show that $$|X_k|\ge r^k\ \textrm{and}\ |\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|}\Rightarrow |X_{k+1}|\ge r^{k+1}.$$ Now $$\aligned|X_{k+1}|&=|X_k|\Big| b|X_k|^{q-1}\Delta t+c\cdot\textrm{sgn}(X_k) |X_k|^{\gamma-1}\Delta B_k+1+a\Delta t\Big|\\& \ge |X_k|\Big|1+a\Delta t-|b||X_k|^{q-1}\Delta t-|c| |X_k|^{\gamma-1}|\Delta B_k|\Big|\\& \ge r^k(1+a\Delta t-|b|\Delta t-|c| r^{k(\gamma-1)}\frac{(r-1)r^{k(1-\gamma)}}{|c|})\\& =r^k(1+2(r-1)-(r-1))=r^{k+1}.\endaligned$$ Thus, given that $|X_1|\ge r$, the event $\{|X_k|\ge r^k,\forall k\ge 1\}$ contains the event $\{|\Delta B_k|\le \frac{(r-1)r^{k(1-\gamma)}}{|c|},\forall k\ge 1\}$.
If $\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}\ge 2$, then $$\aligned P(|\Delta B_k|\ge\frac{(r-1)r^{k(1-\gamma)}}{|c|})& =\frac{2}{\sqrt{2\pi}}\int_{\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}}^\infty e^{-\frac{x^2}{2}}dx\\&\le\frac{2}{\sqrt{2\pi}}\int_{\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}}^\infty e^{-x}dx\\& =\frac{2}{\sqrt{2\pi}}\exp(-\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}).\endaligned$$ We can choose $k_0$ to be the smallest $k$ such that $\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}\ge 2$ (note that since $r>1$, such $k_0$ always exists). On the other hand, $$\aligned &\sum_{k=k_0}^\infty\log(1-\frac{2}{\sqrt{2\pi}}\exp(-\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}}))\\& \ge -2\sum_{k=k_0}^\infty\exp(-\frac{(r-1)r^{k(1-\gamma)}}{|c|\sqrt{\Delta t}})\\& \ge -2\sum_{k=k_0}^\infty\exp(-k\times\frac{(r-1)r^{1-\gamma}(1\wedge\log r^{1-\gamma})}{|c|\sqrt{\Delta t}})\\& =-\frac{2e^{-k_0\alpha}}{1-e^{-\alpha}}>-\infty.\endaligned$$ So $\prod_{k=1}^\infty P(|\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|})$ is well defined and therefore $$P(|X_k|\ge r^k, \forall\ k\ge 1)\ge\prod_{k=1}^\infty P(|\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|}).$$ Then as in the proof of Lemma \[divergence1\], we have $$\aligned \log P(|X_k|\ge r^k, \forall\ k\ge 1)&\ge A-\frac{2e^{-k_0\alpha}}{1-e^{-\alpha}}>-\infty,\endaligned$$ where $$A=\sum_{k=1}^{k_0-1} \log P(|\Delta B_k|\le\frac{(r-1)r^{k(1-\gamma)}}{|c|}),$$ $$\alpha=\frac{(r-1)r^{1-\gamma}(1\wedge\log r^{1-\gamma})}{|c|\sqrt{\Delta t}}.$$ We complete the proof. $\square$ Let us give an example to show that the $\theta$-EM scheme ($\frac{1}{2}<\theta\le 1$) is exponentially stable while the EM scheme is not. **Example 1:** Consider the following one-dimensional stochastic differential equation: $$\label{sde2} dX_t=(aX_t+b|X_t|^{2\gamma-2} X_t)dt+c|X_t|^\gamma dB_t,$$ where $\gamma>1$, $a<0$ and $2b+c^2\le 0$. It is clear that both coefficients are locally Lipschitz continuous.
Thus SDE (\[sde2\]) has a unique global solution. By Lemma \[divergence1\], since $2\gamma-1>\gamma>1$, we know that when we choose the step size $\Delta t$ small enough and the initial value $X_1$ suitably, the classical EM scheme diverges with positive probability. Now let us consider the exponential stability of the $\theta$-EM scheme. The corresponding $\theta$-EM scheme of (\[sde2\]) is $$\label{STM2} X_{k+1}=X_k+[(1-\theta)X_k(a+b|X_k|^{2\gamma-2})+\theta X_{k+1} (a+b|X_{k+1}|^{2\gamma-2})]\Delta t+c|X_{k}|^\gamma \Delta B_k.$$ Notice that in our case $g(x)=c|x|^\gamma$ does not satisfy the linear growth condition. Therefore, the stability results in [@HMS; @HMY; @PDM; @WMS] as well as [@CW] for the moment as well as almost sure exponential stability of the backward EM scheme case ($\theta=1$) cannot be used here. On the other hand, since in this case $f(x)=ax+b|x|^{2\gamma-2} x, g(x)=c|x|^\gamma,$ it is obvious that $$2\langle x,f(x)\rangle+|g(x)|^2=2ax^2+(2b+c^2)|x|^{2\gamma}\le 2ax^2.$$ Since $a<0$, condition (\[c2\]) holds for $C=-2a$. Moreover, $$\langle x-y,f(x)-f(y)\rangle=a(x-y)^2+b(x-y)(|x|^{2\gamma-2}x-|y|^{2\gamma-2}y).$$ Since $$(x-y)(|x|^{2\gamma-2}x-|y|^{2\gamma-2}y)\ge0$$ holds for all $x,y\in\mathbb{R},$ it follows that $$\langle x-y,f(x)-f(y)\rangle\le a(x-y)^2.$$ We have used the fact that $b<0$ here. Thus conditions (\[c2\]) and (\[c3\]) hold. By Theorem \[exponential\], we know that, for any $0<\varepsilon<1$, the $\theta$-EM ($\frac{1}{2}<\theta\le 1$) scheme (\[STM2\]) of the corresponding SDE (\[sde2\]) is mean square exponentially stable with Lyapunov exponent no greater than $2a(1-\varepsilon)$ and almost surely exponentially stable with Lyapunov exponent no greater than $a(1-\varepsilon)$ if $\Delta t$ is small enough. For the polynomial stability, we consider the following example.
**Example 2:** Now let us consider the following scalar stochastic differential equation, $$\label{sde3} dX_t=\frac{-(1+t)^{\frac{1}{2}}|X_t|^{2\gamma-2}X_t-2K_1X_t}{2(1+t)}dt+ \sqrt{\frac{|X_t|^{2\gamma}}{(1+t)^\frac{1}{2}}+\frac{C}{(1+t)^{K_1}}} dB_t,$$ where $C>0$, $K_1>1$, $\gamma\ge1$ are constants. Since in this case $$f(x,t)=\frac{-(1+t)^\frac{1}{2}|x|^{2\gamma-2}x-2K_1x}{2(1+t)},\quad g(x,t)= \sqrt{\frac{|x|^{2\gamma}}{(1+t)^\frac{1}{2}}+\frac{C}{(1+t)^{K_1}}},$$ it is clear that both of the coefficients are locally Lipschitz continuous. Moreover, it is easy to verify that $$2\langle x,f(x,t)\rangle+|g(x,t)|^2\le C(1+t)^{-K_1}- K_1(1+t)^{-1}|x|^2,$$ and $$\langle x-y,f(x,t)-f(y,t)\rangle\le 0\le L|x-y|^2.$$ Thus conditions (\[c1\]) and (\[c3\]) hold. Therefore, SDE (\[sde3\]) has a unique global solution. If $\gamma>1,$ then by Theorem \[polynomial\], for any $0<\varepsilon<K_1-1,$ the $\theta$-EM ($\frac{1}{2}<\theta\le 1$) solution of (\[sde3\]) satisfies the polynomial stability (with rate no greater than $-(K_1-1-\varepsilon)$) for $\Delta t$ small enough. If $\gamma=1,$ it is obvious that $f$ also satisfies the linear growth condition (\[growth\]) (condition (2.4) in [@LFM] fails in this case), then by Theorem \[polynomial1\], the $\theta$-EM ($0\le\theta\le\frac{1}{2}$) solution of (\[sde3\]) satisfies the polynomial stability for $\Delta t$ small enough. However, since the coefficient $g(x,t)$ is not bounded with respect to $x$, we cannot apply Theorem 3.1 and Theorem 3.5 in [@LFM] to get the polynomial stability of the classical EM scheme and backward EM scheme respectively. **Acknowledgement:** The second named author would like to thank Professor Chenggui Yuan for useful discussions and suggestions during his visit to Swansea University. [99]{} Baker, C.T.H. and Buckwar, E., Exponential stability in p-th mean of solutions, and of convergent Euler-type solutions, of stochastic delay differential equations. J. Comput. Appl. Math., 2005, 184(2):404-427.
Chen, L. and Wu, F., Almost sure exponential stability of the $\theta$-method for stochastic differential equations, Statistics and Probability Letters, 2012, 82:1669-1676. Gyöngy, I. and Krylov, N., Existence of strong solutions for Itô’s stochastic equations via approximations. Probab. Theory Relat. Fields, 1996, 105:143-158. Higham, D.J., A-stability and stochastic mean-square stability. BIT Numerical Mathematics, 2000, 40(2):404-409. Higham, D.J., Mean-square and asymptotic stability of the stochastic theta method. SIAM Journal on Numerical Analysis, 2001, 38(3):753-769. Higham, D.J., Mao, X. and Stuart, A.M., Exponential mean-square stability of numerical solutions to stochastic differential equations. LMS J. Comput. Math., 2003, 6:297-313. Higham, D.J., Mao, X. and Yuan, C., Almost sure and moment exponential stability in the numerical simulation of stochastic differential equations. SIAM J. Numer. Anal., 2007, 45(2):592-609. Ikeda, N. and Watanabe, S., Stochastic differential equations and diffusion processes, North-Holland, Amsterdam, 1981. Lan, G., Pathwise uniqueness and non-explosion of stochastic differential equations with non-Lipschitzian coefficients, Acta Math. Sinica, Chinese series, 2009, 52(4):109-114. Lan, G. and Wu, J.-L., New sufficient conditions of existence, moment estimations and non confluence for SDEs with non-Lipschitzian coefficients, Stoch. Proc. Applic., 2014, 124:4030-4049. Liu, K. and Chen, A., Moment decay rates of solutions of stochastic differential equations, Tohoku Math. J., 2001, 53:81-93. Liu, W., Foondun, M. and Mao, X., Mean Square Polynomial Stability of Numerical Solutions to a Class of Stochastic Differential Equations, arXiv:1404.6073v1. Mao, X., Stochastic differential equations and applications, 2nd edition, Horwood, Chichester, 2007. Mao, X. and Szpruch, L., Strong convergence and stability of implicit numerical methods for stochastic differential equations with non-globally Lipschitz continuous coefficients, J. Comput.
Appl. Math., 2013, 238:14-28. Mao, X. and Szpruch, L., Strong convergence rates for backward Euler-Maruyama method for nonlinear dissipative-type stochastic differential equations with super-linear diffusion coefficients. Stochastics, 2013, 85(1):144-171. Pang, S., Deng, F. and Mao, X., Almost sure and moment exponential stability of Euler-Maruyama discretizations for hybrid stochastic differential equations, J. Comput. Appl. Math., 2008, 213:127-141. Rodkina, A. and Schurz, H., Almost sure asymptotic stability of drift-implicit $\theta$-methods for bilinear ordinary stochastic differential equations in $\mathbb{R}^1$, J. Comput. Appl. Math., 2005, 180:13-31. Revuz, D. and Yor, M., Continuous martingales and Brownian motion, Grund. der Math. Wissenschaften, 293, Springer-Verlag, 1991. Szpruch, L., V-stable tamed Euler schemes, arXiv:1310.0785. Wu, F., Mao, X. and Szpruch, L., Almost sure exponential stability of numerical solutions for stochastic delay differential equations, Numer. Math., 2010, 115:681-697. Zeidler, E., Nonlinear Functional Analysis and its Applications, Springer Verlag, 1985. Zong, X. and Wu, F., Choice of $\theta$ and mean-square exponential stability in the stochastic theta method of stochastic differential equations, J. Comput. Appl. Math., 2014, 255:837-847. [^1]: Corresponding author. Email: [email protected]. Supported by China Scholarship Council, National Natural Science Foundation of China (NSFC11026142) and Beijing Higher Education Young Elite Teacher Project (YETP0516).
--- abstract: 'In e-commerce scenarios where user profiles are invisible, session-based recommendation is proposed to generate recommendation results from short sessions. Previous work only considers the user’s sequential behavior in the current session, whereas the user’s main purpose in the current session is not emphasized. In this paper, we propose a novel neural networks framework, i.e., Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user’s sequential behavior and capture the user’s main purpose in the current session, which are combined as a unified session representation later. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets. Furthermore, we also find that NARM achieves a significant improvement on long sessions, which demonstrates its advantages in modeling the user’s sequential behavior and main purpose simultaneously.' author: - Jing Li - Pengjie Ren - Zhumin Chen - Zhaochun Ren - Tao Lian - Jun Ma bibliography: - 'sigproc.bib' title: 'Neural Attentive Session-based Recommendation' --- INTRODUCTION ============ A user session starts when a user clicks a certain item; within the session, the user clicks on an item of interest and spends time viewing it. After that, the user clicks another interesting item and views it in turn. This iterative process continues until the user’s requirements are satisfied.
Current recommendation research faces challenges when recommendations must be generated merely from such user sessions, a setting in which existing recommendation methods [@koren2009matrix; @adomavicius2005toward; @weimer2007maximum; @su2009survey] cannot perform well. To tackle this problem, session-based recommendation [@schafer1999recommender] is proposed to predict the next item that the user is probably interested in based merely on implicit feedbacks, i.e., user clicks, in the current session. @hidasi2015session apply recurrent neural networks (RNN) with Gated Recurrent Units (GRU) for session-based recommendation. The model considers the first item clicked by a user as the initial input of RNN, and generates recommendations based on it. Then the user might click one of the recommendations, which is fed into RNN next, and the successive recommendations are produced based on all the previous clicks. @tan2016improved further improve this RNN-based model by utilizing two crucial techniques, i.e., data augmentation and a method to account for shifts in the input data distribution. Though all the above RNN-based methods show promising improvements over traditional recommendation approaches, they only take into account the user’s sequential behavior in the current session, whereas the user’s main purpose in the current session is not emphasized. Relying only on the user’s sequential behavior is dangerous when a user accidentally clicks on wrong items or s/he is attracted by some unrelated items due to curiosity. Therefore, we argue that both the user’s sequential behavior and main purpose in the current session should be considered in session-based recommendation. Suppose that a user wants to buy a shirt on the Internet. As shown in Figure 1, during browsing, s/he tends to click on some shirts with similar styles to make a comparison; meanwhile, s/he might click a pair of suit pants by accident or due to curiosity. After that, s/he keeps looking for suitable shirts.
In this case, if we only consider his/her sequential behavior, another shirt, suit pants or even a pair of shoes might be recommended because many users click them after clicking some shirts and suit pants, as shown in Figure 1(a). If the recommender were an experienced human purchasing guide, the guide could conjecture that this user is very likely to buy a short sleeve shirt at this time because most of his/her clicked items are related to it. Therefore, more attention would be paid to the short sleeve shirts that the user has clicked and another similar shirt would be recommended, as shown in Figure 1(b). Ideally, in addition to considering the user’s entire sequential behavior, a better recommender should also take into account the user’s main purpose, which is reflected by some relatively important items in the current session. Note that the sequential behavior and the main purpose in one session are complementary to each other because we cannot always conjecture a user’s main purpose from a session, e.g., when the session is too short or the user just clicks something aimlessly. To tackle the above problem, we propose a novel neural networks framework, namely Neural Attentive Recommendation Machine (NARM). Specifically, we explore a hybrid encoder with an attention mechanism to model the user’s sequential behavior and capture the user’s main purpose in the current session, which are combined as a unified session representation later. With this item-level attention mechanism, NARM learns to attend differentially to more and less important items. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on the unified session representation. NARM is trained by jointly learning the item and session representations as well as their matchings.
The main contributions of this work are summarized as follows: - We propose a novel NARM model to take into account both the user’s sequential behavior and main purpose in the current session, and compute recommendation scores by using a bi-linear matching scheme. - We apply an attention mechanism to extract the user’s main purpose in the current session. - We carried out extensive experiments on two benchmark datasets. The results show that NARM outperforms state-of-the-art baselines in terms of recall and MRR on both datasets. Moreover, we find that NARM achieves better performance on long sessions, which demonstrates its advantages in modeling the user’s sequential behavior and main purpose simultaneously. RELATED WORK ============ Session-based recommendation is a typical application of recommender systems based on implicit feedbacks, where no explicit preferences (e.g., ratings) but only positive observations (e.g., clicks) are available [@mild2003improved; @he2016fast; @ren2017social]. These positive observations are usually in the form of sequential data obtained by passively tracking users’ behavior over time. In this section, we briefly review the related work on session-based recommendation from the following two aspects, i.e., traditional methods and deep learning based methods. Traditional Methods ------------------- Typically, there are two traditional modeling paradigms, i.e., general recommender and sequential recommender. **General recommender** is mainly based on item-to-item recommendation approaches. In this setting, an item-to-item similarity matrix is pre-computed from the available session data. Items that are often clicked together (i.e., co-occurrence) in sessions are considered to be similar. @linden2003amazon propose an item-to-item collaborative filtering method to personalize the online store for each customer.
@sarwar2001item analyze different item-based recommendation generation algorithms and compare their results with basic k-nearest neighbor approaches. Though these methods have proven to be effective and are widely employed, they only take into account the last click of the session, ignoring the information of the whole click sequence. **Sequential recommender** is based on Markov chains, which utilize sequential data by predicting users’ next action given the last action [@zimdars2001using; @shani2005mdp]. @zimdars2001using propose a sequential recommender based on Markov chains and investigate how to extract sequential patterns to learn the next state using probabilistic decision-tree models. @shani2005mdp present Markov Decision Processes (MDP) aiming to provide recommendations in a session-based manner, and the simplest MDPs boil down to first-order Markov chains where the next recommendation can be simply computed through the transition probabilities between items. @mobasher2002using study different sequential patterns for recommendation and find that contiguous sequential patterns are more suitable for the sequential prediction task than general sequential patterns. @yap2012effective introduce a new Competence Score measure in personalized sequential pattern mining for next-item recommendations. @chen2012playlist model playlists as Markov chains, and propose logistic Markov Embeddings to learn the representations of songs for playlist prediction. A major issue with applying Markov chains in the session-based recommendation task is that the state space quickly becomes unmanageable when trying to include all possible sequences of potential user selections over all items.
Deep Learning based Methods --------------------------- Deep learning has recently been applied very successfully in areas such as image recognition [@krizhevsky2012imagenet; @he2016deep], speech recognition [@graves2013speech; @amodei2016deep; @hinton2012deep] and natural language processing [@socher2011parsing; @de2014Medical; @rsoy2014deep; @song2017summarizing; @li2017salience]. Deep models can be trained to learn discriminative representations from unstructured data [@he2017neural; @he2017neuralfact; @li2017neural]. Here, we focus on the related work that uses deep learning models to solve recommendation tasks. **Neural network recommender** mostly focuses on the classical collaborative filtering user-item setting. @salakhutdinov2007restricted first propose to use Restricted Boltzmann Machines (RBM) for Collaborative Filtering (CF). In their work, RBM is used to model user-item interactions and to perform recommendations. Recently, denoising auto-encoders have been used to perform CF in a similar manner [@wu2016collaborative; @sedhain2015autorec]. @wang2015learning introduce a hierarchical representation model for next-basket recommendation based on an encoder-decoder mechanism. Deep neural networks have also been used in cross-domain recommendations whereby items are mapped to a joint latent space [@elkahky2015a]. Recurrent Neural Networks (RNN) have been devised to model variable-length sequence data. Recently, @hidasi2015session apply RNN to session-based recommendation and achieve significant improvements over traditional methods. The proposed model utilizes session-parallel mini-batch training and employs ranking-based loss functions for learning the model. @tan2016improved further study the application of RNN in session-based recommendation. They propose two techniques to improve the performance of their model, namely data augmentation and a method to account for shifts in the input data distribution.
@zhang2014sequential also use RNN for the click sequence prediction; they consider historical user behaviors as well as hand-crafted features for each user and item. Though a growing number of publications on session-based recommendation focus on RNN-based methods, unlike existing studies, we propose a novel neural attentive recommendation model that combines both the user’s sequential behavior and main purpose in the current session, which, to the best of our knowledge, has not been considered in existing research. Moreover, we apply the attention mechanism to session-based recommendation for the first time. METHOD ====== In this section, we first introduce the session-based recommendation task. Then we describe the proposed NARM in detail. Session-based Recommendation ---------------------------- Session-based recommendation is the task of predicting what a user would like to click next when his/her current sequential transaction data is given. Here we give a formulation of the session-based recommendation problem. Let $[x_{1},x_{2},...,x_{n-1},x_{n}]$ be a click session, where $x_{i}\in\mathcal{I} \,(1 \leq i \leq n)$ is the index of one clicked item out of a total number of $m$ items. We build a model $\mathbf{M}$ so that for any given prefix of the click sequence in the session, $\textbf{x} = [x_{1},x_{2},...,x_{t-1},x_{t}], 1 \leq t \leq n$, we get the output $\textbf{y} = \mathbf{M}(\textbf{x})$, where $\textbf{y} = [y_{1},y_{2},...,y_{m-1},y_{m}]$. We view $\textbf{y}$ as a ranking list over all the next items that can occur in that session, where $y_{j} \,(1 \leq j \leq m)$ corresponds to the recommendation score of item $j$. Since a recommender typically needs to make more than one recommendation for the user, the top-$k$ $(1 \leq k \leq m)$ items in $\textbf{y}$ are recommended.
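Operationally, once the model $\mathbf{M}$ has produced the score vector $\textbf{y}$, recommending reduces to a top-$k$ selection over the $m$ scores. A minimal sketch (the toy model below is a hypothetical stand-in for a trained $\mathbf{M}$):

```python
import numpy as np

def recommend_top_k(model, session_prefix, k=20):
    """Score all m items for the given click prefix and return the top-k item indices."""
    y = model(session_prefix)   # y[j] = recommendation score of item j, shape (m,)
    return np.argsort(-y)[:k]

# hypothetical stand-in for a trained model M: random scores over m = 10 items
rng = np.random.default_rng(0)
toy_model = lambda prefix: rng.random(10)
top3 = recommend_top_k(toy_model, [3, 1, 4], k=3)
```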
![The general framework and dataflow of the encoder-decoder-based NARM.](fig/fig2){height="2.2in" width="2.2in"} Overview -------- In this paper, we propose an improved neural encoder-decoder architecture [@shang2015neural; @ren2017leveraging] to address the session-based recommendation problem, named Neural Attentive Recommendation Machine (NARM). The basic idea of NARM is to build a hidden representation of the current session, and then generate predictions based on it. As shown in Figure 2, the encoder converts the input click sequence $\textbf{x} = [x_{1},x_{2},...,x_{t-1},{x_t}]$ into a set of high-dimensional hidden representations $\textbf{h} = [\textbf{\textsl{h}}_{1},\textbf{\textsl{h}}_{2},...,\textbf{\textsl{h}}_{t-1},\textbf{\textsl{h}}_{t}]$, which, along with the attention signal at time $t$ (denoted as $\alpha_{t}$), are fed to the session feature generator to build the representation of the current session to decode at time $t$ (denoted as $\textbf{\textsl{c}}_{t}$). Finally, $\textbf{\textsl{c}}_{t}$ is transformed by a matrix $\textbf{\textsl{U}}$ (as part of the decoder) and fed into an activation function to produce a ranking list over all items, $\textbf{y} = [y_{1},y_{2},...,y_{m-1},y_{m}]$, that can occur in the current session. The role of $\alpha_{t}$ is to determine which part of the hidden representations should be emphasized or ignored at time $t$. It should be noted that $\alpha_{t}$ can be fixed over time or change dynamically during the prediction process. In the dynamic setting, $\alpha_{t}$ can be a function of the representations of hidden states or the input item embeddings. We adopt the dynamic setting in our model; more details will be described in §3.4. The basic idea of our work is to learn a recommendation model that takes into consideration both the user’s sequential behavior and main purpose in the current session.
In the following part of this section, we first describe the global encoder in NARM, which is used to model the user’s sequential behavior (§3.3). Then we introduce the local encoder, which is used to capture the user’s main purpose in the current session (§3.4). Finally we show our NARM, which combines both of them and computes the recommendation scores for each candidate item by using a bi-linear matching scheme (§3.5). Global Encoder in NARM ---------------------- In the global encoder, the inputs are the entire previous clicks while the output is the feature of the user’s sequential behavior in the current session. Both the inputs and output are uniformly represented by high-dimensional vectors. Figure 3(a) shows the graphical model of the global encoder in NARM. We use an RNN with Gated Recurrent Units (GRU) rather than a standard RNN because @hidasi2015session demonstrate that GRU can outperform the Long Short-Term Memory (LSTM) [@hochreiter2012long] units for the session-based recommendation task. GRU is a more elaborate RNN unit that aims at dealing with the vanishing gradient problem.
The activation of GRU is a linear interpolation between the previous activation $\textbf{\textsl{h}}_{t-1}$ and the candidate activation $\widehat{\textbf{\textsl{h}}}_{t}$, $$\textbf{\textsl{h}}_{t} = (1-\textbf{\textsl{z}}_{t})\textbf{\textsl{h}}_{t-1} + \textbf{\textsl{z}}_{t}\widehat{\textbf{\textsl{h}}}_{t} \;,$$ where the update gate $\bm{z_{t}}$ is given by $$\textbf{\textsl{z}}_{t} = \sigma(\textbf{\textsl{W}}_{z}\textbf{\textsl{x}}_{t} + \textbf{\textsl{U}}_{z}\textbf{\textsl{h}}_{t-1})\;.$$ The candidate activation $\widehat{\textbf{\textsl{h}}}_{t}$ is computed as $$\widehat{\textbf{\textsl{h}}}_{t} = \tanh[\textbf{\textsl{W}}\textbf{\textsl{x}}_{t} + \textbf{\textsl{U}}(\textbf{\textsl{r}}_{t} \odot \textbf{\textsl{h}}_{t-1})] \;,$$ where the reset gate $\bm{r_{t}}$ is given by $$\textbf{\textsl{r}}_{t} = \sigma(\textbf{\textsl{W}}_{r}\textbf{\textsl{x}}_{t} + \textbf{\textsl{U}}_{r}\textbf{\textsl{h}}_{t-1}) \;.$$ With a trivial session feature generator, we essentially use the final hidden state $\textbf{\textsl{h}}_{t}$ as the representation of the user’s sequential behavior, $$\textbf{\textsl{c}}_{t}^{\mathrm{g}} = \textbf{\textsl{h}}_{t} \;.$$ However, this global encoder has its drawbacks: a vectorial summarization of the whole sequential behavior often fails to capture the precise intention of the current user. Local Encoder in NARM --------------------- ![image](fig/fig5){height="2.3in" width="5.8in"} The architecture of the local encoder is similar to that of the global encoder, as shown in Figure 3(b). In this encoding scheme we also use an RNN with GRU as the basic component.
To capture the user’s main purpose in the current session, we involve an item-level attention mechanism which allows the decoder to dynamically select and linearly combine different parts of the input sequence, $$\bm{c}_{t}^{\mathrm{l}} = \sum_{j=1}^{t} \alpha_{tj}\textbf{\textsl{h}}_{j} \;,$$ where the weighted factors $\alpha$ determine which part of the input sequence should be emphasized or ignored when making predictions, and are in turn a function of the hidden states, $$\alpha_{tj} = q(\textbf{\textsl{h}}_{t},\textbf{\textsl{h}}_{j}) \;.$$ Basically, the weighted factor $\alpha_{tj}$ models the alignment between the inputs around position $j$ and the output at position $t$, so it can be viewed as a specific matching model. In the local encoder, the function $q$ specifically computes the similarity between the final hidden state $\textbf{\textsl{h}}_{t}$ and the representation of the previous clicked item $\textbf{\textsl{h}}_{j}$, $$q(\textbf{\textsl{h}}_{t},\textbf{\textsl{h}}_{j}) = \textbf{\textsl{v}}^{\mathrm{T}}\sigma(\textbf{\textsl{A}}_{1}\textbf{\textsl{h}}_{t} + \textbf{\textsl{A}}_{2}\textbf{\textsl{h}}_{j}) \;,$$ where $\sigma$ is an activation function such as the sigmoid function, matrix $\textbf{\textsl{A}}_{1}$ is used to transform $\textbf{\textsl{h}}_{t}$ into a latent space, and $\textbf{\textsl{A}}_{2}$ plays the same role for $\textbf{\textsl{h}}_{j}$. This local encoder enjoys the advantage of adaptively focusing on more important items to capture the user’s main purpose in the current session. NARM Model ---------- For the task of session-based recommendation, the global encoder has the summarization of the whole sequential behavior, while the local encoder can adaptively select the important items in the current session to capture the user’s main purpose. We conjecture that the representation of the sequential behavior may provide useful information for capturing the user’s main purpose in the current session.
Therefore, we use the representations of the sequential behavior and the previous hidden states to compute the attention weight for each clicked item. Then a natural extension combines the sequential behavior feature and the user purpose feature by concatenating them to form an extended representation for each time stamp. As shown in Figure 4, we can see the summarization $\bm{h}_{t}^{\mathrm{g}}$ is incorporated into $\bm{c}_{t}$ to provide a sequential behavior representation for NARM. It should be noted that the session feature generator in NARM will evoke different encoding mechanisms in the global encoder and the local encoder, although they will be combined later to form a unified representation. More specifically, the last hidden state of the global encoder $\bm{h}_{t}^{\mathrm{g}}$ plays a role different from that of the local encoder $\bm{h}_{t}^{\mathrm{l}}$. The former has the responsibility to encode the entire sequential behavior. The latter is used to compute the attention weights with the previous hidden states. By this hybrid encoding scheme, both the user’s sequential behavior and main purpose in the current session can be modeled into a unified representation $\bm{c}_{t}$, which is the concatenation of vectors $\bm{c}_{t}^{\mathrm{g}}$ and $\bm{c}_{t}^{\mathrm{l}}$, $$\bm{c}_{t} = [\bm{c}_{t}^{\mathrm{g}};\bm{c}_{t}^{\mathrm{l}}] = [\bm{h}_{t}^{\mathrm{g}};\sum_{j=1}^{t} \alpha_{tj} \bm{h}_{j}^{\mathrm{l}}] \;.$$

Table 1: Statistics of the datasets.

| Datasets | all the clicks | train sessions | test sessions | all the items | avg. length |
|---|---|---|---|---|---|
| YOOCHOOSE $1/64$ | 557248 | 369859 | 55898 | 16766 | 6.16 |
| YOOCHOOSE $1/4$ | 8326407 | 5917746 | 55898 | 29618 | 5.71 |
| DIGINETICA | 982961 | 719470 | 60858 | 43097 | 5.12 |

Figure 4 also gives a graphical illustration of the adopted decoding mechanism in NARM. Generally, a standard RNN utilizes a fully-connected layer to decode.
But using a fully-connected layer means that the number of parameters to be learned in this layer is $|H| \ast |N|$, where $|H|$ is the dimension of the session representation and $|N|$ is the number of candidate items for prediction. Thus we have to reserve a large space to store these parameters. Though there are some approaches to reduce the parameters, such as using a hierarchical softmax layer [@mnih2009scalable] and negative sampling at random [@mikolov2013distributed], they are not the best choices for our model. We propose an alternative bi-linear decoding scheme which not only reduces the number of the parameters, but also improves the performance of NARM. Specifically, a bi-linear similarity function between the representations of the current session and each candidate item is used to compute a similarity score $S_{i}$, $$S_{i} = {emb}_{i}^{\text{T}}\bm{B}\,\bm{c}_{t} \;,$$ where $\textbf{\textsl{B}}$ is a $|D| \ast |H|$ matrix and $|D|$ is the dimension of each item embedding. Then the similarity score of each item is fed into a softmax layer to obtain the probability that the item will occur next. By using this bi-linear decoder, we reduce the number of parameters from $|N| \ast |H|$ to $|D| \ast |H|$, where $|D|$ is usually smaller than $|N|$. Moreover, the experiment results demonstrate that using this bi-linear decoder can improve the performance of NARM (as demonstrated in §4.4). To learn the parameters of the model, we do not utilize the proposed training procedure in [@hidasi2015session], where the model is trained in a session-parallel, sequence-to-sequence manner. Instead, in order to fit the attention mechanism in the local encoder, NARM processes each sequence $[x_{1},x_{2},...,x_{t-1},x_{t}]$ separately. Our model can be trained by using a standard mini-batch gradient descent on the cross-entropy loss: $$L(p,q)=-\sum_{i=1}^{m}p_{i}\log(q_{i})$$ where $q$ is the prediction probability distribution and $p$ is the true distribution.
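The encoders and the bi-linear decoder described in §3.3-§3.5 can be assembled into a compact NumPy-only sketch of the forward pass. For brevity, a single GRU plays the role of both encoders here (NARM proper uses two separate ones), parameters are randomly initialized rather than learned, and the sizes for $m$, $|D|$ and $|H|$ are assumed toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
m, D, H = 50, 16, 8            # assumed sizes: items, embedding dim, hidden dim

# Randomly initialized parameters; in NARM these are learned jointly.
emb = rng.normal(scale=0.1, size=(m, D))           # item embeddings
Wz, Uz = rng.normal(size=(H, D)), rng.normal(size=(H, H))
Wr, Ur = rng.normal(size=(H, D)), rng.normal(size=(H, H))
W,  U  = rng.normal(size=(H, D)), rng.normal(size=(H, H))
A1, A2 = rng.normal(size=(H, H)), rng.normal(size=(H, H))
v      = rng.normal(size=H)
B      = rng.normal(size=(D, 2 * H))               # bi-linear decoder; c_t has dim 2H

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_states(item_ids):
    """Run the GRU recurrence over a click sequence; return all hidden states."""
    h, states = np.zeros(H), []
    for i in item_ids:
        x = emb[i]
        z = sigmoid(Wz @ x + Uz @ h)              # update gate
        r = sigmoid(Wr @ x + Ur @ h)              # reset gate
        h_hat = np.tanh(W @ x + U @ (r * h))      # candidate activation
        h = (1 - z) * h + z * h_hat               # interpolated new state
        states.append(h)
    return np.stack(states)

def narm_scores(item_ids):
    hs = gru_states(item_ids)     # one shared GRU here; NARM uses two encoders
    h_t = hs[-1]
    c_global = h_t                                        # sequential-behavior feature
    alpha = np.array([v @ sigmoid(A1 @ h_t + A2 @ hj) for hj in hs])  # q(h_t, h_j)
    c_local = alpha @ hs                                  # attention-weighted sum
    c_t = np.concatenate([c_global, c_local])             # unified representation
    s = emb @ (B @ c_t)                                   # S_i = emb_i^T B c_t
    e = np.exp(s - s.max())
    return e / e.sum()            # softmax: probability of each item being next

p = narm_scores([3, 17, 4, 9])    # a toy click prefix of item indices
top5 = np.argsort(-p)[:5]
```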
At last, a Back-Propagation Through Time (BPTT) method for a fixed number of time steps is adopted to train NARM. EXPERIMENTAL SETUP ================== In this section, we first describe the datasets, the state-of-the-art methods and the evaluation metrics employed in our experiments. Then we compare NARM under different decoding schemes. Finally, we compare NARM with state-of-the-art methods. Dataset ------- We evaluate different recommenders on two standard transaction datasets, i.e., the YOOCHOOSE dataset and the DIGINETICA dataset. - YOOCHOOSE[^1] is a public dataset released by RecSys Challenge 2015. This dataset contains click-streams on an e-commerce site. After filtering out sessions of length 1 and items that appear fewer than 5 times, there remain 7981580 sessions and 37483 items. - DIGINETICA[^2] comes from CIKM Cup 2016. We only used the released transaction data and also filtered out sessions of length 1 and items that appear fewer than 5 times. Finally the dataset contains 204771 sessions and 43097 items. We first performed some preprocessing on the two datasets. For YOOCHOOSE, we used the sessions of the subsequent day for testing and filtered out clicks from the test set where the clicked items did not appear in the training set. For DIGINETICA, the only difference is that we use the sessions of the subsequent week for testing. Because we did not train NARM in a session-parallel manner [@hidasi2015session], a sequence splitting preprocessing step is necessary. For the input session $[x_{1},x_{2},...,x_{n-1},x_{n}]$, we generated the sequences and corresponding labels $([x_{1}],V(x_{2}))$, $([x_{1},x_{2}],V(x_{3}))$, ..., $([x_{1},x_{2},...,x_{n-1}],V(x_{n}))$ for training on both YOOCHOOSE and DIGINETICA. The corresponding label $V(x_{i})$ is the last click in the current sequence.
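This splitting step can be sketched as follows (toy item indices):

```python
def split_session(session):
    """Split one click session [x1, ..., xn] into training pairs
    ([x1], x2), ([x1, x2], x3), ..., ([x1, ..., x_{n-1}], x_n)."""
    return [(session[:t], session[t]) for t in range(1, len(session))]

pairs = split_session([5, 2, 9, 7])
# -> [([5], 2), ([5, 2], 9), ([5, 2, 9], 7)]
```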
We sorted the training sequences of YOOCHOOSE by time and report our results on models trained on the more recent fractions $1/64$ and $1/4$ of the training sequences as well, for the following reasons: (1) YOOCHOOSE is quite large; (2) @tan2016improved verified that recommendation models do need to account for changing user behavior over time; and (3) their experimental results showed that training on the entire dataset yields slightly poorer results than training on more recent fractions of the dataset. Note that some items in the test set may not appear in the training set, since we trained the model only on the more recent fractions. The statistics of the three datasets (i.e., YOOCHOOSE $1/64$, YOOCHOOSE $1/4$ and DIGINETICA) are shown in Table 1. 

Table 2. Performance comparison of NARM with different decoders:

                                   YOOCHOOSE 1/64             YOOCHOOSE 1/4              DIGINETICA
                                   Recall@20(%)   MRR@20(%)   Recall@20(%)   MRR@20(%)   Recall@20(%)   MRR@20(%)
  ------------------------------   -------------  ---------   -------------  ---------   -------------  ---------
  Fully-connected decoder          67.67          29.17       69.49          29.54       57.84          24.77
  Bi-linear similarity decoder     **68.32**      28.76       **69.73**      29.23       **62.58**      **27.35**

Table 3. Performance comparison of NARM with baseline methods:

                                   YOOCHOOSE 1/64             YOOCHOOSE 1/4              DIGINETICA
                                   Recall@20(%)   MRR@20(%)   Recall@20(%)   MRR@20(%)   Recall@20(%)   MRR@20(%)
  ------------------------------   -------------  ---------   -------------  ---------   -------------  ---------
  POP                              6.71           1.65        1.33           0.30        0.91           0.23
  S-POP                            30.44          18.35       27.08          17.75       21.07          14.69
  Item-KNN                         51.60          21.81       52.31          21.70       28.35          9.45
  BPR-MF                           31.31          12.08       3.40           1.57        15.19          8.63
  FPMC                             45.62          15.01       -              -           31.55          8.92
  GRU-Rec                          60.64          22.89       59.53          22.60       43.82          15.46
  Improved GRU-Rec                 67.84          **29.00**   69.11          29.22       57.95          24.93
  NARM                             **68.32**      28.76       **69.73**      **29.23**   **62.58**      **27.35**

On YOOCHOOSE $1/4$, we do not have enough memory to 
initialize FPMC. Our available memory is 120G. Baseline Methods ---------------- We compare the proposed NARM with five traditional methods (i.e., POP, S-POP, Item-KNN, BPR-MF and FPMC) and two RNN-based models (i.e., GRU-Rec and Improved GRU-Rec). - **POP**: The popular predictor always recommends the most popular items in the training set. Despite its simplicity, it is often a strong baseline in certain domains. - **S-POP**: This baseline recommends the most popular items for the current session. The recommendation list changes as the session gains more items. Ties are broken using global popularity values. - **Item-KNN**: In this baseline, similarity is defined as the co-occurrence number of two items in sessions divided by the square root of the product of the numbers of sessions in which either item occurs. Regularization is also included to avoid coincidental high similarities between rarely visited items [@linden2003amazon; @davidson2010youtube]. - **BPR-MF**: BPR-MF [@rendle2009bpr] optimizes a pairwise ranking objective function via stochastic gradient descent. Matrix factorization cannot be directly applied to session-based recommendation because new sessions do not have precomputed latent representations. However, we can make it work by representing a new session with the average latent factors of the items that occurred in the session so far. In other words, the recommendation score can be computed as the average of the similarities between the latent factors of a candidate item and the items in the session so far. - **FPMC**: FPMC [@rendle2010factorizing] is a state-of-the-art hybrid model for next-basket recommendation. In order to make it work on session-based recommendation, we do not consider the user latent representations when computing recommendation scores. 
- **GRU-Rec**: We denote the model proposed in [@hidasi2015session] as GRU-Rec, which utilizes a session-parallel mini-batch training process and also employs ranking-based loss functions for learning the model. - **Improved GRU-Rec**: We denote the model proposed in [@tan2016improved] as Improved GRU-Rec. Improved GRU-Rec adopts two techniques, data augmentation and a method to account for shifts in the input data distribution, to improve the performance of GRU-Rec. Evaluation Metrics and Experimental Setup ----------------------------------------- ### Evaluation Metrics As recommender systems can only recommend a few items at a time, the actual item a user might pick should be amongst the first few items of the list. Therefore, we use the following metrics to evaluate the quality of the recommendation lists. - Recall@20: The primary evaluation metric is Recall@20, i.e., the proportion of test cases in which the desired item is amongst the top-20 items. Recall@N does not consider the actual rank of the item as long as it is amongst the top-N, and it usually correlates well with other metrics such as click-through rate (CTR) [@liu2012enlister]. - MRR@20: Another metric is MRR@20 (Mean Reciprocal Rank), the average of the reciprocal ranks of the desired items. The reciprocal rank is set to zero if the rank is larger than 20. MRR takes the rank of the item into account, which is important in settings where the order of recommendations matters. ### Experimental Setup The proposed NARM model uses 50-dimensional embeddings for the items. Optimization is done using Adam [@kingma2014adam] with the initial learning rate set to 0.001, and the mini-batch size is fixed at 512. There are two dropout layers used in NARM: the first dropout layer is between the item embedding layer and the GRU layer with 25% dropout, and the second one is between the GRU layer and the bi-linear similarity layer with 50% dropout. 
We also truncate BPTT at 19 time steps, following the setting in the state-of-the-art method [@tan2016improved], and the number of epochs is set to 30, with 10% of the training data used as the validation set. We use one GRU layer in our model, and the GRU has 100 hidden units. The model is defined and trained in Theano on a GeForce GTX TitanX GPU. The source code of our model is available online[^3]. Comparison among Different Decoders ----------------------------------- We first empirically compare NARMs with different decoders, i.e., the fully-connected decoder and the bi-linear similarity decoder. The results over the three datasets are shown in Table 2. Here we only illustrate the results on 100-dimensional hidden states because we obtain the same conclusions on other dimension settings. We make the following observations from Table 2: (1) With regard to Recall@20, the performance improves when using the bi-linear similarity decoder, and the improvements are around 0.65%, 0.24% and 4.74% respectively over the three datasets. (2) With regard to MRR@20, the performance of the model using the bi-linear decoder becomes a little worse on YOOCHOOSE $1/64$ and $1/4$. But on DIGINETICA, the model with the bi-linear decoder still clearly outperforms the model with the fully-connected decoder. For the session-based recommendation task, as the recommender system recommends the top-20 items at once in our settings, the actual item a user might pick should be among the list of 20 items. Thus we consider the recall metric to be more important than the MRR metric in this task, and NARM adopts the bi-linear decoder in the following experiments. Comparison against Baselines ---------------------------- Next we compare our NARM model with state-of-the-art methods. The results of all methods over the three datasets are shown in Table 3. A more specific comparison between NARM and the best baseline (i.e., Improved GRU-Rec) over the three datasets is illustrated in Figure 5. 
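For reference, the Recall@20 and MRR@20 metrics used throughout these comparisons can be computed from ranked recommendation lists as follows. This is a minimal sketch; the list-based inputs and function names are illustrative choices.

```python
def recall_at_k(ranked_lists, targets, k=20):
    """Fraction of test cases whose target item appears in the top-k list."""
    hits = sum(t in r[:k] for r, t in zip(ranked_lists, targets))
    return hits / len(targets)

def mrr_at_k(ranked_lists, targets, k=20):
    """Mean reciprocal rank; the reciprocal rank is 0 when rank > k."""
    total = 0.0
    for r, t in zip(ranked_lists, targets):
        if t in r[:k]:
            total += 1.0 / (r.index(t) + 1)
    return total / len(targets)

# Toy example: target 1 sits at rank 2 in the first list; target 9 is absent.
ranked = [[3, 1, 2], [5, 4, 6]]
targets = [1, 9]
r20 = recall_at_k(ranked, targets)   # 0.5
m20 = mrr_at_k(ranked, targets)      # (1/2 + 0) / 2 = 0.25
```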
We have the following observations from the results: (1) For the YOOCHOOSE $1/4$ dataset, BPR-MF does not work when we use the average of the factors of items that occurred in the session to replace the user factor. Besides, since we regard each session as one user in FPMC, we do not have enough memory to initialize it. These problems indicate that traditional user-based methods are no longer suitable for session-based recommendation. (2) Overall, the three RNN-based methods consistently outperform the traditional baselines, which demonstrates that RNN-based models are good at dealing with sequential information in sessions. (3) By taking both the user’s sequential behavior and main purpose into consideration, the proposed NARM outperforms all the baselines in terms of Recall@20 over the three datasets and outperforms most of the baselines in terms of MRR@20. Taking the DIGINETICA dataset as an example, when compared with the best baseline (i.e., Improved GRU-Rec), the relative performance improvements by NARM are around 7.98% and 9.70% in terms of Recall@20 and MRR@20, respectively. (4) As we can see, the recall improvements on the two YOOCHOOSE datasets are not as significant as those on DIGINETICA, and the obtained MRR values are very close to each other. We consider that one of the important reasons is that when we split the YOOCHOOSE dataset into the $1/64$ and $1/4$ fractions, we did not filter out clicks from the test set where the clicked items are not in the training set, in order to be consistent with the setting of Improved GRU-Rec [@tan2016improved]. On DIGINETICA, we did filter out these clicks from the test set, and hence NARM outperforms the baselines significantly in terms of both Recall@20 and MRR@20. ANALYSIS ======== In this section, we further explore the influence of using different session features in NARM and analyze the effectiveness of the adopted attention mechanism. 
Influence of Using Different Features ------------------------------------- In this part, we refer to the NARM that uses the sequential behavior feature only, the NARM that uses the user purpose feature only, and the NARM that uses both features as $NARM_{global}$, $NARM_{local}$ and $NARM_{hybrid}$, respectively. As shown in Table 4, (1) $NARM_{global}$ and $NARM_{local}$, which only use a single feature, do not perform well on the three datasets. Besides, their performances are very close to each other in terms of the two metrics. This indicates that merely considering the sequential behavior or the user purpose in the current session may not be enough to learn a good recommendation model. (2) When we take into account both the user’s sequential behavior and main purpose, $NARM_{hybrid}$ performs better than $NARM_{global}$ and $NARM_{local}$ in terms of Recall@20 and MRR@20 for different hidden state dimensions over the three datasets. Taking the DIGINETICA dataset as an example, when compared with $NARM_{global}$ and $NARM_{local}$ with the dimensionality of the hidden state set to 50, the relative performance improvements by $NARM_{hybrid}$ are around 3.52% and 5.09% in terms of Recall@20, respectively. These results demonstrate the advantages of considering both the sequential behavior and the main purpose of the current user in session-based recommendation. 
![image](fig/att){height="1.7in" width="6.8in"} 

Table 5.

  Length   Baseline correct   NARM correct   Performance
  -------  -----------------  -------------  -------------
  1        8747               9358           +6.98%
  2        6601               7084           +7.31%
  3        4923               5299           +7.63%
  4        3625               3958           +9.18%
  5        2789               3019           +8.24%
  6        2029               2202           +8.52%
  7        1520               1656           +8.94%
  8        1198               1295           +8.09%
  9        915                996            +8.85%
  10       690                753            +9.13%
  11       509                587            **+15.32%**
  12       411                459            **+11.67%**
  13       304                323            +6.25%
  14       243                260            +6.99%
  15       199                219            **+10.05%**
  16       149                165            **+10.73%**
  17       98                 112            **+14.28%**
  18       88                 93             +5.68%
  19       70                 75             +7.14%

Influence of Different Session Lengths -------------------------------------- Our NARM model is based on the assumption that when a user is browsing online, his/her click behavior frequently revolves around his/her main purpose in the current session. However, we can hardly capture the user’s main purpose when s/he has clicked only a few items. Therefore, our NARM model should be good at modeling long sessions. To verify this, we make comparisons among sessions with different lengths on DIGINETICA. As shown in Table 5, (1) NARM performs better when the session lengths are between 4 and 17 in general. This indicates that NARM does capture the user’s main purpose more accurately on long sessions. In other words, NARM can make better predictions when it captures more user purpose features on the basis of the existing sequential behavior features. (2) When sessions are too long, the performance improvements of NARM decline. We believe the reason is that when a session is too long, the user is very likely to click some items aimlessly, so that the local encoder in NARM cannot capture the user’s main purpose in the current session. Visualizing the Attention Weights ------------------------------- To illustrate the role of the attention mechanism intuitively, we present an example in Figure 6. The session instances are chosen randomly from DIGINETICA. 
The depth of the color corresponds to the importance of items given by equation (7). We have the following observations from the example: (1) Overall, it is obvious that not all items are related to the next click, and almost all the important items in the current session are contiguous. This implies that users’ intentions in sessions are indeed localized, which is one of the reasons why NARM can outperform general RNN-based models. (2) The most important items are often near the end of the session. This is in line with people’s browsing behavior: a user is very likely to click other items that are related to what s/he has clicked just now. Recall that general RNN-based models are able to model this fact, thus they can achieve fairly good performance in session-based recommendation. (3) In some cases, the most important items appear at the beginning or in the middle of the session (e.g., in sessions 7974 and 4260). In this situation, we believe that our NARM can perform better than general RNN-based models because the attention mechanism can learn to pay more attention to the more important items regardless of their positions in a session. CONCLUSION & FUTURE WORK ======================== We have proposed the neural attentive recommendation machine (NARM) with an encoder-decoder architecture to address the session-based recommendation problem. By incorporating an attention mechanism into the RNN, our proposed approach can capture both the user’s sequential behavior and main purpose in the current session. With this attention mechanism, NARM can attend differentially to more and less important items. Based on the sequential behavior feature and the user purpose feature, we have applied NARM to predict a user’s next click in the current session. We have conducted extensive experiments on two benchmark datasets and demonstrated that our approach can outperform state-of-the-art methods in terms of different evaluation metrics. 
Moreover, we have performed an analysis of user click behaviors and found that users’ intentions are localized in most sessions, which supports the rationale of our model. As for future work, incorporating more item attributes, such as prices and categories, may enhance the performance of our method in session-based recommendation. Meanwhile, exploring both the nearest neighbor sessions and the importance of different neighbors may give new insights. Finally, the attention mechanism can be used to explore the importance of attributes in the current session. Acknowledgments {#acknowledgments .unnumbered} =============== The authors wish to thank the anonymous reviewers for their helpful comments. This work is supported by the Natural Science Foundation of China (61672322, 61672324), the Natural Science Foundation of Shandong province (2016ZRE27468) and the Fundamental Research Funds of Shandong University. [^1]: http://2015.recsyschallenge.com/challenge.html [^2]: http://cikm2016.cs.iupui.edu/cikm-cup [^3]: https://github.com/lijingsdu/sessionRec_NARM
--- abstract: 'Mechanical active galactic nucleus (AGN) feedback plays a key role in massive galaxies, galaxy groups and clusters. However, the energy content of AGN jets that mediate this feedback process is still far from clear. Here we present a preliminary study of radial elongations $\tau$ of a large sample of X-ray cavities, which are apparently produced by mechanical AGN feedback. All the cavities in our sample are elongated along the angular (type-I) or jet directions (type-II), or nearly circular (type-III). The observed value of $\tau$ roughly decreases as the cavities rise buoyantly, confirming the same trend found in hydrodynamic simulations. For young cavities, both type-I and II cavities exist, and the latter dominates. Assuming a spheroidal cavity shape, we derive an analytical relation between the intrinsic radial elongation $\bar{\tau}$ and the inclination-angle-dependent value of $\tau$, showing that projection effect makes cavities appear more circular, but does not change type-I cavities into type-II ones, or vice versa. We summarize radial elongations of young cavities in hydrodynamic simulations, and find that $\bar{\tau}$ increases with the kinetic fraction of AGN jets. While mild jets always produce type-II cavities, thermal-energy-dominated strong jets produce type-I cavities, and kinetic-energy-dominated strong jets produce type-II cavities. The existence of type-I young cavities indicates that some AGN jets are strong and dominated by thermal energy (or cosmic rays). If most jets are dominated by non-kinetic energies, our results suggest that they must be long-duration mild jets. However, if most jets are strong, they must be dominated by the kinetic energy.' author: - Fulai Guo bibliography: - 'ms.bib' title: 'Probing the Physics of Mechanical AGN Feedback with Radial Elongations of X-ray Cavities' --- Introduction {#section:intro} ============ ![Sketch of type I and II X-ray cavities in a galaxy cluster. 
$R_{\rm l}$ and $R_{\rm w}$ are the semi axes along the jet direction and the angular direction perpendicular to the jet direction, respectively. The jet direction here is defined as the radial direction from the cluster center to the cavity center. Type I cavities are oblate, elongated along the angular direction, while type II cavities are prolate, elongated along the jet direction. Nearly circular type-III cavities with $R_{\rm l}\approx R_{\rm w}$ have also been found in galaxy clusters. The radial elongation of a cavity is defined as $\tau=R_{\rm l}/R_{\rm w}$.[]{data-label="plot1"}](f1.eps){width="35.00000%"} Mechanical feedback from active galactic nuclei (AGNs) plays a key role in the evolution of massive elliptical galaxies, galaxy groups, and clusters, suppressing cooling flows and the associated star formation activities in central galaxies (@mcnamara07; @mcnamara12; @li15; @soker16; @werner19). Direct evidence for the operation of AGN feedback comes from mounting detections of “X-ray cavities" in deep X-ray images of galaxy groups and clusters, apparently evolved from the interaction of AGN jets with the hot intracluster medium (ICM; @boehringer93; @fabian02; @birzan04; @croston11, @vagshette19). The properties of X-ray cavities may thus contain important information of mechanical AGN feedback. The enthalpy of X-ray cavities has been widely used to estimate the energetics of mechanical AGN feedback [@birzan04; @rafferty06; @hlavacek12; @hlavacek15]. For a cavity with volume $V$, its enthalpy can be written as $$\begin{aligned} E_{\rm jet}=H\equiv \frac{\gamma}{\gamma -1}pV {\rm ,} \end{aligned}$$ where $E_{\rm jet}$ refers to the energy of the jet creating the cavity, $p$ is the pressure of the local ICM surrounding the cavity, and $\gamma$ is the adiabatic index of the plasma inside the cavity. Assuming that the cavity is mainly filled with relativistic cosmic rays, one has $\gamma=4/3$ and $E_{\rm jet}=4pV$. 
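The enthalpy estimate above amounts to a one-line calculation; the helper below is an illustrative sketch. Here $\gamma=4/3$ corresponds to a relativistic (e.g., cosmic-ray) filling, while $\gamma=5/3$ would describe a non-relativistic thermal gas.

```python
def cavity_enthalpy(p, V, gamma=4.0 / 3.0):
    """Cavity enthalpy H = gamma / (gamma - 1) * p * V, used as the
    jet-energy estimate E_jet. For gamma = 4/3 this reduces to 4 p V."""
    return gamma / (gamma - 1.0) * p * V
```

For example, with $p$ and $V$ in cgs units the returned enthalpy is in erg; the prefactor is 4 for $\gamma=4/3$ and 2.5 for $\gamma=5/3$.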
This energy estimate is based on the “slow-piston" approximation for quasi-static point outbursts in a uniform medium, and the energy coupling efficiency between the outburst and the ambient medium is very low $\eta_{\rm cp} = pV/E_{\rm jet}=(\gamma-1)/\gamma$ [@duan20]. For realistic jet outbursts in galaxy clusters, recent hydrodynamic simulations by @duan20 show that the energy coupling efficiency is much higher with $\eta_{\rm cp} \sim 0.7$-$0.9$, leading to significantly higher estimates for the jet energy $E_{\rm jet}=10$-$30pV$ for $\gamma=4/3$. In addition to the cavity volume, its shape may also contain important information about mechanical AGN feedback. Hydrodynamic jet simulations by @guo15 and @guo16 suggest that the shape of young X-ray cavities recently created by the jet-ICM interaction can be used to probe jet properties, while the shape of old X-ray cavities is affected by the level of viscosity in the ICM. Kinetic-energy-dominated jets on kpc scales typically produce young X-ray cavities more elongated along the jet direction than non-kinetic-energy-dominated jets, which may be energetically dominated by thermal energy, cosmic rays, or magnetic fields. Hydrodynamic simulations of mechanical AGN feedback often adopt kinetic-energy-dominated jets (e.g., @gaspari11; @yang16; @guo18; @martizzi19; @bambic19), while recent AGN feedback simulations also start to investigate cosmic-ray-dominated jets (@guo11; @ruszkowski17; @yang19; @wang20). @duan20 recently found that strong non-kinetic-energy-dominated jets are much more effective in delaying the onset of cooling catastrophe than kinetic-energy-dominated jets with the same power. The particle content in AGN jets and X-ray cavities has also been investigated observationally (e.g., @croston08; @birzan08; @croston14). 
In this paper, we extend our previous theoretical studies in @guo15 and @guo16 on the cavity shape, and present a preliminary study on the shape of observed X-ray cavities, focusing on their radial elongations. As a “zeroth-order" approximation, observed X-ray cavities are often approximated as ellipses, and as seen in Section 2, observed X-ray cavities are usually elongated along either the jet direction or the angular direction, which is perpendicular to the jet direction (see also @birzan04 [@rafferty06; @hlavacek12]). Here the jet direction is defined as the radial direction from the cluster center to the cavity center. In Figure 1, we show a sketch of these two types of X-ray cavities. The radial elongation of a given X-ray cavity may be defined as $\tau=R_{\rm l}/R_{\rm w}$, where $R_{\rm l}$ is the semi axis along the jet direction and $R_{\rm w}$ is the semi axis along the angular direction. In this paper, we refer to cavities with $\tau < 1$ as type I cavities and those with $\tau > 1$ as type II cavities. Nearly circular cavities with $R_{\rm l}\approx R_{\rm w}$ have also been found in galaxy clusters, and may be referred to as type III cavities, which may be nearly spherical cavities in reality, or type I or II cavities viewed nearly along the jet axis. The remainder of the paper is organized as follows. Following a preliminary study of radial elongations of a sample of observed X-ray cavities in Sec. 2, we investigate the impact of line-of-sight projection on radial elongations in Sec. 3. By comparing with the results from a suite of hydrodynamic jet simulations, we then discuss what the observations of $\tau$ may reveal about the physics of mechanical AGN feedback in Sec. 4. We summarize our main results in Section \[section:discussion\]. 
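The three cavity types defined above can be expressed as a small classifier. The tolerance band used to call a cavity "nearly circular" (type III) is an illustrative choice, not a threshold taken from the paper.

```python
def cavity_type(R_l, R_w, tol=0.05):
    """Classify a cavity by its radial elongation tau = R_l / R_w:
    type I  (tau < 1): elongated along the angular direction,
    type II (tau > 1): elongated along the jet direction,
    type III: nearly circular, |tau - 1| <= tol (tol is illustrative)."""
    tau = R_l / R_w
    if abs(tau - 1.0) <= tol:
        return "III"
    return "II" if tau > 1.0 else "I"
```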
Table 1. The cavity sample, listing the semi axes $R_{\rm l}$ and $R_{\rm w}$ and the distance $d$ of the cavity center to the host system's center. Rows without a system name are additional cavities in the system named above them.

  System              $R_{\rm l}$ (kpc)   $R_{\rm w}$ (kpc)   $d$ (kpc)   References
  ------------------- ------------------- ------------------- ----------- -----------
  A85                 6.3                 8.9                 21          1, 2
  A262                5.4                 3.4                 8.7         1, 3, 4
                      5.7                 3.4                 8.1
  Perseus             7.3                 9.1                 9.4         1, 5, 6
                      4.7                 8.2                 6.5
                      7.3                 17                  28
                      13                  17                  39
  2A 0335+096         6.5                 9.3                 23          1, 7
  A478                5.5                 3.4                 9           1, 8
                      5.6                 3.4                 9
  MS 0735.6+7421      110                 87                  160         1, 9
                      130                 89                  180
  4C+55.16            10                  7.5                 16          1, 10
                      13                  9.4                 22
  RBS 797             13                  8.5                 24          1, 11
                      9.7                 9.7                 20
  M84                 1.6                 1.6                 2.3         1, 12
                      2.1                 1.2                 2.5
  M87                 2.3                 1.4                 2.8         1, 13
                      1.6                 0.8                 2.2
  Centaurus           2.4                 3.3                 6           1, 14
                      1.6                 3.3                 3.5
  HCG 62              4.3                 5.0                 8.4         1, 15
                      4.0                 4.0                 8.6
  Zw 2701             8.75                12.25               18.9        16
                      10.5                14.0                19.25
  A3581               3.5                 2.6                 4.6         1, 17
                      3.2                 2.7                 3.8
                      3.8                 8.4                 24          17
  MACS J1423.8+2404   9.4                 9.4                 16          1
                      9.4                 9.4                 17
  A2052               7.9                 11                  11          1, 18
                      6.2                 6.5                 6.7
  A2199               12.1                8.5                 23          19
                      14.7                9.9                 23
  3C 388              15                  15                  27          1, 20
                      10                  24                  21
  3C 401              12                  12                  15          1
                      12                  12                  15
  Cygnus A            29                  17                  43          1, 21
                      34                  23                  45
  A2597               7.1                 7.1                 23          1, 22
                      10                  7.1                 23
  A4059               20                  10                  23          1, 23
                      9.2                 9.2                 19
  Hydra A             20.5                12.4                24.9        24
                      21                  12.3                25.6
                      31.5                47.2                100.8
                      20.9                29                  59.3
                      99.7                105                 225.6
                      50.1                67.7                104.3
  RX J1532.9+3021     14.4                17.3                28          25
                      12.1                14.9                39
  NGC 5813            0.95                0.95                1.3         26
                      1.03                0.93                1.4
                      3.9                 3.9                 7.7
                      2.2                 2.9                 4.9
                      2.4                 2.8                 9.3
                      3.0                 5.2                 22.2
                      4.4                 8.0                 18

References — (1) @rafferty06; (2) @durret05; (3) @clarke09; (4) @blanton04; (5) @fabian02; (6) @fabian00; (7) @mazzotta03; (8) @sun03; (9) @mcnamara09; (10) @hl11; (11) @doria12; (12) @fj01; (13) @forman07; (14) @fabian05; (15) @gitti10; (16) @vagshette16; (17) @canning13; (18) @blanton11; (19) @nulsen13; (20) @kraft06; (21) @wilson06; (22) @clarke05; (23) @heinz02; (24) @wise07; (25) @hl13; (26) @randall11 Radial Elongations of Observed Cavities {#section3} ======================================= ![[*Top panel*]{}: The distance of the cavity center to the host system’s center $d$ vs. the radial elongation $\tau=R_{\rm l}/R_{\rm w}$. No strong correlation exists between $d$ and $\tau$. [*Bottom panel*]{}: $d/R_{\rm l}$ vs. 
$\tau$. Old cavities have gone through buoyant evolution in the ICM, and are expected to have higher values of $d/R_{\rm l}$. There exists a general trend that the value of $\tau$ decreases as $d/R_{\rm l}$ increases. The vertical dotted line denotes $\tau=1$, which separates type-I cavities with $\tau<1$ and type-II cavities with $\tau>1$.[]{data-label="plot2"}](f2.eps){width="45.00000%"} ![Histograms of the radial elongation $\tau=R_{\rm l}/R_{\rm w}$ in the relatively young X-ray cavities with $d/R_{\rm l}<1.5$ (top) and $d/R_{\rm l}<2$ (bottom). Bin widths are 0.1 in $\tau$. For young X-ray cavities, both type I and II cavities exist, and the latter dominates. The ratio between type I and II cavities increases from around $1:5$ for young cavities with $d/R_{\rm l}<1.5$ to around $1:3$ for those with $d/R_{\rm l}<2$.[]{data-label="plot3"}](f3.eps){width="45.00000%"} The Cavity Sample ----------------- In this section, we present a preliminary study of radial elongations in a sample of observed X-ray cavities drawn from the literature. Our cavity sample was mainly drawn from two large cavity samples in @rafferty06 and @hlavacek12, respectively. The @rafferty06 sample is a large sample of local and low-redshift X-ray cavities, most of which have also been used in other studies (e.g., @birzan04; @diehl08; @birzan08; @birzan20). @rafferty06 provide the relevant parameters of each cavity in their sample, in particular, the distance of the cavity center to the host system’s center (hereafter denoted as $d$), the semi-major and semi-minor axes. Some cavities have relatively low contrast with respect to their surroundings, and are assigned a value of “3" for the figure of merit in @rafferty06. For accuracy, in our sample we exclude these poorly defined cavities without bright rims. We additionally determine the elongation of each cavity by eye in X-ray images published in the literature (as specifically listed in the rightmost column in Table 1). 
We find that all the cavities in our sample are elongated along either the jet direction or the angular direction perpendicular to the jet direction, and consequently we determine the values of $R_{\rm l}$ and $R_{\rm w}$ for each cavity according to the values of the semi-major and semi-minor axes given in @rafferty06. We updated the parameters ($R_{\rm l}$, $R_{\rm w}$, and $d$) of the cavities in Hydra A according to @wise07, those in Zw 2701 according to @vagshette16, and those in A2199 according to @nulsen13. We added one more cavity in A3581 observed by @canning13. We also supplemented our sample with two cavities in RX J1532.9+3021 [@hl13] and seven cavities in NGC 5813 [@randall11]. The parameters and references of these $60$ cavities are listed in Table 1. The @hlavacek12 sample includes 31 X-ray cavities in 20 galaxy clusters located at the redshift range of $0.3\leq z \leq 0.7$. The values of $R_{\rm l}$, $R_{\rm w}$, and $d$ of these cavities are explicitly listed in Table 3 of @hlavacek12. Combined with the cavities listed in Table 1, our cavity sample comprises a set of 91 cavities in 45 host systems, including 42 galaxy clusters, 2 galaxy groups (HCG 62 and NGC 5813) and one galaxy (M84). Since they are located at relatively high redshifts, the cavities in the @hlavacek12 sample usually do not have very high contrast with respect to their surroundings. However, we stress that our main results in the paper do not change qualitatively if we exclude the @hlavacek12 sample from our analysis. Radial Elongations and Implications ----------------------------------- The first immediate result is that all the cavities in our sample are either type-I cavities elongated along the angular direction, type-II cavities elongated along the jet direction, or nearly circular type-III cavities. This indicates that X-ray cavities are not subject to significant rotation during their evolution in the ICM. 
Otherwise, a large fraction of the cavities would be elongated along random directions with respect to the jet direction. It also implies that the observed difference between type I and II cavities in the cavity elongation with respect to the jet direction is not due to rotation in the ICM. Gas motions in the ICM may shift or bend X-ray cavities (e.g., @fabian00; @venturi13; @pm13), but our result suggests that the kpc-scale rotation in the ICM velocity field is not significant. In other words, the level of turbulence in the inner regions of galaxy clusters may be relatively low, consistent with recent HITOMI observations of the Perseus cluster [@hitomi16]. The distribution of the cavities in our sample is illustrated in Figure \[plot2\], which shows the diagrams of $d$ vs $\tau$ (top) and $d/R_{\rm l}$ vs $\tau$ (bottom). The value of radial elongation $\tau$ ranges between $0.4$ and $2$, and the centers of most cavities are located within $1\lesssim d \lesssim 100$ kpc from the host system’s center. Figure \[plot2\] indicates that, while no clear correlation exists between $d$ and $\tau$, the value of $\tau$ roughly decreases as $d/R_{\rm l}$ increases. Cavities with high values of $d$ do not directly correspond to old cavities, as a big young cavity may be created directly with a large value of $d\sim R_{\rm l}$. Old cavities have gone through buoyant evolution in the ICM, and are instead expected to have high values of $d/R_{\rm l}$. Thus, the bottom panel of Figure 2 suggests that the value of $\tau$ decreases as a cavity rises buoyantly in the ICM. In other words, a cavity tends to become more elongated along the angular direction as it rises buoyantly in the ICM. An extreme case of this evolution is the pancake-shaped northwestern ghost cavity in Perseus [@fabian00; @churazov01], and this trend is also consistent with the predictions in hydrodynamic simulations (e.g., Figure 5 of @guo16). 
Furthermore, the dearth of type-II cavities with $d/R_{\rm l}>2$ shown in the bottom panel of Figure 2 suggests that type-II cavities may evolve into type-I cavities as they rise buoyantly in the ICM. While the shape of old cavities may change substantially during the buoyant evolution, the intrinsic shape of young cavities may be used to probe jet properties, as indicated by hydrodynamic simulations of @guo15 and @guo16. Figure 3 shows the histograms of the radial elongation $\tau$ in the relatively young X-ray cavities with $d/R_{\rm l}<1.5$ (top) and $d/R_{\rm l}<2$ (bottom). It is clear that both type I and II young cavities exist, and the latter dominates. The ratio between type I and II cavities is around $1:5$ for young cavities with $d/R_{\rm l}<1.5$, and increases to around $1:3$ for cavities with $d/R_{\rm l}<2$, possibly due to buoyant evolution. As shown in Sec. 3, the inclination angle between the jet direction and the line of sight affects the observed value of $\tau$, but projection effect does not change type I cavities into type II cavities, or vice versa. Remarkably, Figures 2 and 3 also indicate that there exist a large number of nearly circular type-III cavities with $\tau \approx 1$. While some of them may be intrinsically spherical cavities, many of them may be type I or II cavities viewed along lines of sight close to the jet direction, as further discussed in Section 3. ![Sketch of parallel projection of an X-ray cavity onto the sky. The cavity is approximated as a spheroid with an intrinsic semi axis $\bar{R}_{l}$ along the jet direction, and two equal semi axes $\bar{R}_{\rm w}$ along two directions perpendicular to the jet direction. When projected onto the sky, ${R}_{l}$ is the apparent semi axis along the projected jet direction, while ${R}_{\rm w}=\bar{R}_{\rm w}$ is the semi axis along the angular direction perpendicular to the jet direction. 
$\theta$ is the inclination angle between the line of sight and the radial direction from the cluster center to the cavity center.[]{data-label="plot4"}](sketch.eps){width="45.00000%"} ![Impact of parallel projection along lines of sight on two idealized spheroidal cavities with intrinsic radial elongations $\bar{\tau}=0.7$ (top) and $1.5$ (bottom). $\theta$ is the inclination angle between the line of sight and the jet direction (see Fig. 4). It is clear that, depending on the inclination angle, parallel projection leads to an apparent radial elongation ranging from its intrinsic value $\tau=\bar{\tau}$ if $\theta=90^{\circ}$ to $\tau=1$ if $\theta=0^{\circ}$.[]{data-label="plot5"}](proj.eps){width="45.00000%"} Projection Effect on Radial Elongations ======================================= Observed X-ray cavities are parallel projections of three-dimensional (3D) low-density ICM cavities along lines of sight onto the sky. Considering the evolution of an axisymmetric jet in a spherically-symmetric ICM, the created low-density cavity is expected to be axisymmetric around the jet axis. As a “zeroth-order” approximation, the cavity may thus be approximated as a spheroid with a semi axis $\bar{R}_{l}$ along the jet direction, and two equal semi axes $\bar{R}_{\rm w}$ along two directions perpendicular to the jet direction. The cavity volume can be written as $V=4\pi \bar{R}_{l} \bar{R}_{\rm w}^{2}/3$, and the intrinsic radial elongation of this 3D cavity may be defined as $\bar{\tau}\equiv \bar{R}_{\rm l}/\bar{R}_{\rm w}$. In this section, we investigate how line-of-sight projections affect the observed value of radial elongation $\tau$. Figure 4 shows a sketch of parallel projection of a 3D X-ray cavity onto the sky. 
For a 3D cavity with the intrinsic radial elongation $\bar{\tau}\equiv \bar{R}_{\rm l}/\bar{R}_{\rm w}$, the observed value of radial elongation $\tau=R_{\rm l}/R_{\rm w}$ depends on the inclination angle $\theta$ between the line of sight and the jet direction. When projected onto the sky, ${R}_{l}$ is the apparent semi axis along the projected jet direction on the sky, while ${R}_{\rm w}=\bar{R}_{\rm w}$ is the semi axis along the angular direction perpendicular to the projected jet direction. Here we use Figure 4 to facilitate the derivation of ${R}_{l}$, which may depend on $\bar{R}_{\rm l}$, $\bar{R}_{\rm w}$, and $\theta$. It is obvious that ${R}_{l}=\bar{R}_{\rm w}$ and $\tau=1$ if $\theta=0^{\circ}$, and ${R}_{l}=\bar{R}_{\rm l}$ and $\tau=\bar{\tau}$ if $\theta=90^{\circ}$. Considering a Cartesian coordinate system $(x, z)$ with the origin at the cavity center, the line of sight passing through the origin can be written as $x=-z \text{~tan} \theta$, and the coordinates of an arbitrary point located at the cavity surface (ellipse) may be written as $(\bar{R}_{\rm w} \text{~cos} \phi, \bar{R}_{\rm l}\text{~sin} \phi)$. For the values of $\theta$ and $\phi$, we consider $0^{\circ} \leq \theta \leq 90^{\circ}$ and $0^{\circ} \leq \phi \leq 90^{\circ}$. The distance $l$ between a point $(\bar{R}_{\rm w} \text{~cos~} \phi, \bar{R}_{\rm l}\text{~sin~} \phi)$ at the cavity surface and a point $(-z \text{~tan~} \theta, z)$ on the line of sight passing through the cavity center can be written as $$\begin{aligned} l^{2}= (\bar{R}_{\rm w}\text{~cos} \phi+z \text{~tan} \theta)^{2}+(\bar{R}_{\rm l}\text{~sin} \phi-z)^{2}{\rm .} \label{cdistance} \end{aligned}$$ The distance of the point $(\bar{R}_{\rm w} \text{~cos} \phi, \bar{R}_{\rm l}\text{~sin} \phi)$ to the line of sight $x=-z \text{~tan} \theta$ can then be evaluated according to Eq. 
(\[cdistance\]) with the value of $z$ derived from the condition $\partial l^{2}/\partial z=0$, which gives $z=\bar{R}_{\rm l}\text{~sin} \phi \text{~cos}^{2}\theta-\bar{R}_{\rm w}\text{~cos} \phi \text{~sin}\theta \text{~cos}\theta$. Therefore one has $$\begin{aligned} l= \bar{R}_{\rm w}\text{~cos} \phi \text{~cos}\theta+\bar{R}_{\rm l}\text{~sin} \phi \text{~sin} \theta {\rm .} \label{cdistance2} \end{aligned}$$ As illustrated in Figure 4, the apparent semi axis along the projected jet direction on the sky ($R_{\rm l}$) corresponds to the maximum value of $l$ in Eq. (\[cdistance2\]), which occurs at the value of $\phi$ determined by the condition $\partial l/\partial \phi=0$, i.e., $$\begin{aligned} \text{tan} \phi=\bar{\tau}\text{tan} \theta {\rm ~,} \label{cdistance3} \end{aligned}$$ where $\bar{\tau}= \bar{R}_{\rm l}/\bar{R}_{\rm w}$. As ${R}_{\rm w}=\bar{R}_{\rm w}$, Eq. (\[cdistance2\]) can be rewritten as $$\begin{aligned} \tau= &\text{~cos} \phi \text{~cos}\theta+\bar{\tau}\text{~sin} \phi \text{~sin} \theta \nonumber \\ =&\sqrt{\text{~cos}^{2}\theta +\bar{\tau}^{2}\text{~sin}^{2} \theta} {\rm ,} \label{cdistance4} \end{aligned}$$ where we have used the value of $\phi$ in Eq. (\[cdistance3\]). Due to the axisymmetry around the jet axis, Eq. (\[cdistance4\]) also holds for $90^{\circ}\leq \theta \leq 180^{\circ}$. For the special case of spherical cavities with $\bar{\tau}=1$, Eqs. (\[cdistance3\]) and (\[cdistance4\]) reduce to $\phi=\theta$ and $\tau=1$, respectively. Eq. (\[cdistance4\]) indicates that, depending on the inclination angle $\theta$, parallel projection leads to an apparent radial elongation $\tau$ ranging from its intrinsic value $\tau=\bar{\tau}$ if $\theta=90^{\circ}$ to $\tau=1$ if $\theta=0^{\circ}$, as clearly illustrated in Figure 5 for a type-I cavity with $\bar{\tau}=0.7$ (top) and a type-II cavity with $\bar{\tau}=1.5$ (bottom). 
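As a quick consistency check (not part of the original derivation), the maximization over $\phi$ in Eq. (\[cdistance2\]) can also be done numerically; the minimal Python sketch below, with $\bar{R}_{\rm w}=1$ and a few illustrative values of $\bar{\tau}$ and $\theta$, recovers the analytic result of Eq. (\[cdistance4\]):

```python
import math

def tau_projected(tau_bar, theta):
    """Analytic apparent elongation, Eq. (cdistance4): sqrt(cos^2(theta) + tau_bar^2 sin^2(theta))."""
    return math.sqrt(math.cos(theta)**2 + tau_bar**2 * math.sin(theta)**2)

def tau_numerical(tau_bar, theta, n=20001):
    """Brute-force maximum of l(phi) = cos(phi) cos(theta) + tau_bar sin(phi) sin(theta)
    (Eq. (cdistance2) in units of the intrinsic semi axis R_w)."""
    return max(math.cos(phi) * math.cos(theta) + tau_bar * math.sin(phi) * math.sin(theta)
               for phi in (math.pi / 2 * i / (n - 1) for i in range(n)))

# compare the two for type-I, spherical, and type-II cavities at several inclinations
for tau_bar in (0.7, 1.0, 1.5):
    for theta_deg in (0.0, 30.0, 60.0, 90.0):
        th = math.radians(theta_deg)
        assert abs(tau_projected(tau_bar, th) - tau_numerical(tau_bar, th)) < 1e-6
```

The limiting cases behave as stated in the text: $\tau=1$ at $\theta=0^{\circ}$ and $\tau=\bar{\tau}$ at $\theta=90^{\circ}$.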
While projection effect affects the observed value of $\tau$ by making cavities appear more circular, it does not change type I cavities into type II cavities, or vice versa. Thus the number ratio between type I and II young cavities investigated in Sec. 2 is not affected by projection effect. We also note that due to the measurement uncertainties in $R_{\rm l}$ and $R_{\rm w}$, a small but non-negligible fraction of type-I and -II cavities with small values of $|\theta|$ (i.e., viewed along lines of sight close to the jet axis) may be classified as type-III cavities with $\tau \approx 1$. Eq. (\[cdistance4\]) can also be used to improve the measurement of the cavity volume $V=4\pi R_{l} R_{\rm w}^{2}/3$, which is often adopted to estimate the energetics and power of mechanical AGN feedback (e.g., @hlavacek12). For type-I cavities, Eq. (\[cdistance4\]) indicates that $V=4\pi R_{l} R_{\rm w}^{2}/3$ is actually an upper limit of the real cavity volume. In contrast, for type-II cavities, $V=4\pi R_{l} R_{\rm w}^{2}/3$ is a lower limit of the real cavity volume. If there is a very large sample of young X-ray cavities detected in the future, the distribution of $\theta$ may be considered to be random, and Eq. (\[cdistance4\]) may then be used to derive the intrinsic probability distribution function (PDF) of $\bar{\tau}$ from the observed PDF of $\tau$. For type-I cavities, one has $$\begin{aligned} f(\tau)d\tau & \propto (\int^{\tau}_{0} g(\bar{\tau})\left|\frac{d\theta(\tau,\bar{\tau})}{d\tau}\right|d\bar{\tau})d\tau \nonumber \\ &=\frac{\tau}{\sqrt{1-\tau^{2}}}(\int^{\tau}_{0} \frac{g(\bar{\tau})}{\sqrt{\tau^{2}-\bar{\tau}^{2}}}d\bar{\tau})d\tau {\rm ~,} \end{aligned}$$ where $|d\theta(\tau,\bar{\tau})/d\tau|=\tau/\sqrt{(1-\tau^{2})(\tau^{2}-\bar{\tau}^{2})}$ is derived from Eq. (\[cdistance4\]), and $f(\tau)d\tau$ and $g(\bar{\tau})d\bar{\tau}$ are the PDFs of $\tau$ and $\bar{\tau}$ of type-I cavities, respectively. 
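The type-I weight $|d\theta/d\tau|$ can be checked with a minimal Monte Carlo sketch (illustrative only; it assumes $\theta$ uniform in $[0^{\circ},90^{\circ}]$ and a single intrinsic elongation $\bar{\tau}=0.7$): the empirical distribution of $\tau=\sqrt{\cos^{2}\theta+\bar{\tau}^{2}\sin^{2}\theta}$ should match the cumulative distribution obtained by inverting Eq. (\[cdistance4\]) for $\theta(\tau)$:

```python
import math, random

def analytic_cdf(t, tau_bar):
    """P(tau <= t) for theta uniform on [0, pi/2] and a single type-I intrinsic elongation:
    sin^2(theta_t) = (1 - t^2)/(1 - tau_bar^2), and tau decreases monotonically with theta."""
    s2 = (1.0 - t * t) / (1.0 - tau_bar**2)
    s = math.sqrt(min(1.0, max(0.0, s2)))   # clamp against floating-point rounding
    return 1.0 - math.asin(s) / (math.pi / 2.0)

random.seed(1)
tau_bar = 0.7
samples = sorted(math.sqrt(math.cos(th)**2 + tau_bar**2 * math.sin(th)**2)
                 for th in (random.uniform(0.0, math.pi / 2) for _ in range(100000)))

# Kolmogorov-type distance between the empirical and analytic CDFs
ks = max(abs((i + 1) / len(samples) - analytic_cdf(t, tau_bar)) for i, t in enumerate(samples))
assert ks < 0.02   # consistent with ~1/sqrt(N) statistical fluctuations
```

Differentiating this CDF reproduces the density $\propto \tau/\sqrt{(1-\tau^{2})(\tau^{2}-\bar{\tau}^{2})}$ used above.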
Similarly, for type-II cavities, one has $$\begin{aligned} f(\tau)d\tau \propto \frac{\tau}{\sqrt{\tau^{2}-1}}(\int^{\infty}_{\tau} \frac{g(\bar{\tau})}{\sqrt{\bar{\tau}^{2}-\tau^{2}}}d\bar{\tau})d\tau {\rm .} \end{aligned}$$ Intrinsic Radial Elongations in Simulations {#section2} =========================================== ![Intrinsic radial elongations $\bar{\tau}$ of young X-ray cavities in a series of hydrodynamic jet simulations of @duan20 as a function of the jet kinetic fraction $f_{\rm kin}$. These simulated X-ray cavities are produced by AGN jets in a realistic galaxy cluster, and the jets in all the simulations presented here have the same jet energy $E_{\rm inj}=2.3\times 10^{60}$ erg. The top panel shows the results of three simulations of strong jets with a short jet duration $t_{\rm inj}=5$ Myr at two times $t=5$ Myr and $30$ Myr, while the bottom panel shows those of three simulations of mild jets with a long jet duration $t_{\rm inj}=50$ Myr at two times $t=50$ Myr and $100$ Myr. The cavities presented here are relatively young, still attached to or just slightly detached from the cluster center. The simulations presented in the top and bottom panels are illustrated in Figures 4 and 5 of @duan20, respectively. The horizontal dotted line in each panel refers to type-III cavities with $\bar{\tau}=1$.[]{data-label="plot6"}](f6.eps){width="45.00000%"} In this section, we investigate what the observations of radial elongations of young X-ray cavities may tell us about the physics of mechanical AGN feedback. In our previous studies [@guo15; @guo16], we found that in a typical smooth spherically-symmetric ICM, the shape of young cavities is mainly affected by the properties of AGN jets. Here we adopt our recent hydrodynamic jet simulations in @duan20 to summarize the dependence of the intrinsic radial elongation $\bar{\tau}$ of young X-ray cavities on jet properties, particularly its kinetic fraction. 
In these simulations, we use thermal jets carrying both thermal and kinetic energies, which are initialized at the jet base on kpc scales (more specifically at $z_{\rm inj}=1$ kpc from the cluster center). The kinetic fraction $f_{\rm kin}=1-f_{\rm th}$ is defined as the ratio of the kinetic energy density to the total energy density within the jet at its base. Here $f_{\rm th}$ is the jet’s thermal fraction at the jet base. For more details of the setup and results of these simulations, we refer the reader to @duan20 [see also @guo18 and @duan18]. Figure \[plot6\] shows $\bar{\tau}$ of young cavities as a function of $f_{\rm kin}$ in a series of strong (top) and mild (bottom) jet simulations in @duan20. These simulated X-ray cavities are produced by AGN jets in a realistic galaxy cluster (Abell 1795), and the jets all have the same total energy $E_{\rm inj}=2.3\times 10^{60}$ erg. For a given total energy $E_{\rm inj}$, @duan20 demonstrate that there exists a characteristic radius $R_{\rm fb}$ within which the total ICM energy equals $E_{\rm inj}$, which defines a characteristic jet power $P_{\rm fb}=E_{\rm inj}/t_{\rm s}$, where $t_{\rm s}$ is the sound crossing time across $R_{\rm fb}$. For a given $E_{\rm inj}$, a jet with power much higher (lower) than $P_{\rm fb}$ may be considered to be a strong (mild) jet, and shock dissipation in the ICM is much more significant in strong AGN outbursts than in mild ones [@duan20]. While the jet radius and velocity are fixed in these simulations, the value of $f_{\rm kin}$ varies as we adopt different values of the internal jet density and thermal energy density at the jet base. The strong and mild jet simulations used to make Figure \[plot6\] are illustrated in Figures 4 and 5 of @duan20, respectively. 
For strong jet outbursts, the value of $\bar{\tau}$ of young cavities increases with $f_{\rm kin}$, and as clearly shown in the top panel of Figure \[plot6\], the transition from type I to type II cavities occurs roughly at $f_{\rm kin}\sim 0.5$. In other words, thermal-energy-dominated jets tend to produce type-I cavities, while kinetic-energy-dominated jets tend to produce type-II cavities. Compared to strong jets, mild jets with the same total energy have a longer jet duration, producing young cavities that are more elongated along the jet direction and have higher values of $\bar{\tau}$, as clearly seen in Figure \[plot6\] (see also relevant discussions in Sec. 3.1 of @guo15). While $\bar{\tau}$ also roughly increases with $f_{\rm kin}$ for mild jet outbursts, both thermal-energy-dominated and kinetic-energy-dominated mild jets produce type II cavities with $\bar{\tau}>1$. Now we can use our simulation results to interpret the observations of $\tau$ of young X-ray cavities presented in Sec. 2. In our cavity sample, both type-I and -II young cavities exist, and type II young cavities dominate. As shown in Sec. 3, while projection effect affects the observed value of $\tau$, it does not change type I cavities into type II cavities, or vice versa. Therefore, the existence of type-I young cavities indicates that some AGN jets are strong jets energetically dominated by thermal energy on kpc scales. These jets are not dominated by the kinetic energy on kpc scales as often assumed in the literature (e.g., @gaspari11; @yang16; @guo18; @martizzi19; @bambic19), and may alternatively be energetically dominated by cosmic rays as recently suggested (e.g., @guo08a; @guo11; @ruszkowski17; @yang19; @wang20). It has also been demonstrated by @duan20 that strong non-kinetic-energy-dominated jets are much more effective in delaying the onset of the cooling catastrophe than kinetic-energy-dominated jets with the same power. 
If most AGN jets in mechanical AGN feedback are strong jets with relatively short durations, the dominance of type-II over type-I cavities suggests that most AGN jets are energetically dominated by the kinetic energy. However, if a large or dominant fraction of AGN jets are energetically dominated by non-kinetic energies (e.g. thermal energy or cosmic rays), our results suggest that they must be mild jets with relatively long durations. Summary and Discussion {#section:discussion} ====================== Mechanical AGN feedback is usually thought to play a key role in the evolution of massive galaxies, galaxy groups and clusters. However, the particle and energy content of AGN jets that mediate this feedback process is still far from clear. [*Chandra*]{} and [*XMM-Newton*]{} observations have detected a large number of X-ray cavities, apparently produced by mechanical AGN feedback in the hot gaseous halo of these systems and potentially containing important information on the physics of mechanical AGN feedback. The enthalpy of X-ray cavities has already been extensively used to estimate the energetics and power of mechanical AGN feedback. Here we present a preliminary study of radial elongations of a large sample of X-ray cavities drawn from the literature, and investigate the implications for the physics of mechanical AGN feedback. All the 91 X-ray cavities in our sample are type-I cavities elongated along the angular direction (perpendicular to the jet direction), type-II cavities elongated along the jet direction, or nearly circular type-III cavities. This suggests that X-ray cavities are not subject to significant rotation during their evolution in the ICM, and implies that the observed difference in radial elongation between type I and II cavities is not due to the cavity rotation in the ICM. 
Our result also suggests that the kpc-scale rotation in the ICM velocity field is not significant and the level of turbulence in the inner 100-kpc regions of galaxy clusters may be relatively low, consistent with recent HITOMI observations of the Perseus cluster [@hitomi16]. The value of radial elongation $\tau$ may vary as a cavity rises buoyantly in the ICM. We explored this issue through the potential correlations between $\tau$ and $d$, and between $\tau$ and $d/R_{\rm l}$. Here $d$ is the distance of the cavity center to the host system’s center. All three types of cavities are seen in our sample, with values of $d$ varying between 1 and 100 kpc, and there is no clear trend between $\tau$ and $d$. In contrast, a trend between $\tau$ and $d/R_{\rm l}$ exists in our sample, and $\tau$ roughly decreases with increasing $d/R_{\rm l}$. This suggests that $d/R_{\rm l}$ may be a better indicator of whether a cavity has gone through significant buoyant evolution in the ICM, and X-ray cavities tend to become more elongated along the angular direction (with lower values of $\tau$) as they rise buoyantly in the ICM. This is consistent with the predictions in hydrodynamic simulations. The dearth of type-II cavities with $d/R_{\rm l}>2$ suggests that type-II cavities may evolve into type-I cavities as they rise buoyantly in the ICM. Young X-ray cavities have not gone through significant buoyant evolution, and their radial elongations may tell us about the properties of AGN jets that created them. In our cavity sample, both type I and II young cavities exist, and the latter dominates. The observed value of $\tau$ is expected to be affected by projection effect. 
Assuming that X-ray cavities are spheroidal and axisymmetric around the jet axis, we derive an analytical relation between the intrinsic radial elongation $\bar{\tau}$ and the observed value of $\tau$: $\tau= \sqrt{\text{~cos}^{2}\theta +\bar{\tau}^{2}\text{~sin}^{2} \theta}$, which depends on the inclination angle $\theta$. The value of $\tau$ generally lies between 1 and $\bar{\tau}$, indicating that projection effect makes cavities appear more circular, but does not change type-I cavities into type-II ones, or vice versa. The relation can be used to improve the measurement of the cavity volume and confirms that some type-III cavities may be intrinsically type-I or -II cavities viewed along lines of sight close to the jet axis. We investigate the intrinsic radial elongations of young cavities in a suite of hydrodynamic jet simulations, and find that $\bar{\tau}$ typically increases with the kinetic fraction of AGN jets. As demonstrated in @duan20, for a given jet energy, there exists a characteristic jet power that separates short-duration strong jets and long-duration mild jets. Irrespective of the kinetic fraction, mild jets always produce type-II cavities. However, for strong jets, thermal-energy-dominated jets tend to produce type-I cavities, while kinetic-energy-dominated jets produce type-II cavities. The existence of type-I young cavities indicates that some AGN jets are strong and dominated by non-kinetic energies on kpc scales, such as thermal energy or cosmic rays. If most jets are dominated by non-kinetic energies, the dominance of type-II cavities in our young cavity sample suggests that most jets are long-duration mild jets. However, if most jets are strong, they must be dominated by the kinetic energy. Additional observations are required to further infer if most jets are strong jets dominated by the kinetic energy or mild jets dominated by non-kinetic energies. 
Radio observations of mechanical AGN feedback indicate that there is a dichotomy between Fanaroff-Riley (FR) type I and II radio sources [@fr74]. While both FR I and II radio sources exist in galaxy clusters, the former dominates. According to the values of radial elongations derived from X-ray observations, our study suggests that both type I and II young X-ray cavities exist, and the latter dominates. While the archetypical FR II radio jets in Cygnus A produce type-II young cavities, FR II radio sources are not very common in galaxy clusters, suggesting that many FR I radio sources also produce type-II young X-ray cavities. It is thus possible that some FR I radio sources produce type-I cavities, while others produce type-II cavities. The difference may be caused by the duration of AGN jets, as short-duration (long-duration) jets tend to produce type I (II) cavities. As radial elongation is also affected by the jet’s kinetic fraction, the difference may also be caused by the properties and the dissipation/entrainment history of AGN jets on sub-kpc scales. Acknowledgments {#acknowledgments .unnumbered} =============== The author is grateful to Zhen-Ya Zheng and Minfeng Gu for helpful discussions. This work was supported partially by National Natural Science Foundation of China (Grant No. 11873072 and 11633006), Natural Science Foundation of Shanghai (No. 18ZR1447100), and Chinese Academy of Sciences through the Key Research Program of Frontier Sciences (No. QYZDB-SSW-SYS033 and QYZDJ-SSW-SYS008). \[lastpage\]
--- abstract: 'A model description of the patterns of atomic displacements in twisted bilayer systems is proposed. The model is based on the consideration of several dislocation ensembles, employing a language that is widely used for grain boundaries and film/substrate systems. We show that three ensembles of parallel screw dislocations are sufficient both to describe the rotation of the layers as a whole and to produce the vortex-like displacements resulting from elastic relaxation. The results give a clear explanation of observed features of the structural state, such as vortices accompanied by alternating stacking.' author: - 'Yu. N. Gornostyrev' - 'M. I. Katsnelson' title: Origin of the vortex displacement field in twisted bilayer graphene --- Introduction ============ Bilayer systems consisting of two layers of identical or different two-dimensional materials such as bilayer graphene (G/G), bilayer hexagonal boron nitride (BN/BN), and bilayer graphene/hexagonal boron nitride (G/BN) are the subject of great interest now as the simplest examples of “Van der Waals heterostructures” (for review, see Refs.[@GG2013; @Katsnelsonbook]). Building bilayer devices involves mechanical processes such as rotation and translation of one layer with respect to the other. This has a substantial influence on the performance and quality of such devices [@GLi2009; @JMBL2007]. The rotation leads to structural moiré patterns which directly affect the electronic properties of bilayers [@WTP2005; @Xue2011; @Tang2013; @Yang2013; @Woods2014; @Slotman2015; @Shi2020]. A further growth of interest in the field was triggered by a recent discovery of superconductivity and metal-insulator transition in “magic angle” twisted bilayer graphene [@Cao1; @Cao2]. In this paper, we focus on the structural aspects of moiré patterns and consider bilayer G/G. 
We suggest a description of the moiré patterns in terms of vortices and in terms of dislocations and establish a connection between these two languages. In two-dimensional materials, such as monolayer graphene, the term “dislocation” is typically used to describe pointlike (0D) defects lying within the sheet, e.g., pentagon-heptagon or square-octagon pairs [@Nelsonbook]; they are also used to describe grain boundaries as dislocation walls [@Yazyev; @Akhukov; @Srolovits]. Such defects are edge dislocations with line directions oriented normal to the sheet. Unlike the case of monolayers, in bilayers it is also possible to have one-dimensional (line) dislocations that lie between the two layers of a bilayer material; these dislocations do not require the generation of any topological defects within each of the two sheets. The geometry of displacement fields in bilayer Van der Waals systems has been discussed repeatedly starting from the discovery of the commensurate-incommensurate transition in the G/BN system [@Woods2014]. The results of atomistic simulations [@Fasolino2014; @Fasolino2015; @Fasolino2016] show that the formation of a vortex lattice is rather typical of the displacement patterns in relaxed twisted bilayer systems (both G/BN and G/G). On the other hand, the electron microscopy study [@JSA2013] reveals multiple stacking domains with soliton-like boundaries between them in slightly twisted bilayer graphene; the domain boundaries can also be described as one-dimensional Frenkel-Kontorova dislocations. A topological defect where six domains meet can be considered as a vortex in the displacement field. A similar multiple stacking domain structure was recently discussed in Ref. [@Bagchi2020] in the framework of a model employing a network of partial dislocations. However, the relation between these two descriptions, in terms of vortices and in terms of dislocations, is currently unclear. 
Here, based on the dislocation model, we propose a general description of the moiré patterns in twisted bilayer graphene in terms of twist grain boundaries in layered material. We show that both pictures (vortex network and dislocation arrays) are consistent and represent two possible ways for a qualitative description of such systems and physical interpretation of the computer simulation results. Dislocation model of a twisted bilayer system ============================================= Moiré patterns are formed initially by a rigid twist of the upper layer with respect to the bottom layer; their geometry is determined by lattice type and the rotation angle [@Hermann2012]. If one allows atomic relaxation, that is, shifts of the atoms from these ideal geometric positions within the lattice of coincidence sites to minimize the total energy, the picture becomes more complicated and, according to simulations [@Fasolino2015], a vortex displacement field arises. Notably, the vortices form a regular lattice and are separated by broad regions of almost zero displacements. There are two canonical ways to describe conjugation in bilayer systems. The first one is the simplest picture of the coincidence site lattice (CSL) [@Marc74] where one lattice is simply rotated and superposed onto the other without atomic relaxation; it corresponds to the moiré description. In the second approach, a general description of twist boundaries in bilayer systems can be derived on the basis of dislocation models proposed earlier for three-dimensional materials and thin films [@RS1962]. A consideration of grain boundaries based on the concept of surface dislocations was given in the book [@HL1968] where general relations between grain boundaries and geometry of dislocation arrays were discussed. In the context of graphene, this language was used in Refs. [@Yazyev; @Akhukov; @Srolovits]. 
General geometric relationships ------------------------------- At least two arrays of parallel equidistant screw dislocations are necessary to ensure a given relative twist of two crystallites in their conjugation plane [@HL1968]. In this case, certain geometric relations must be fulfilled so that the total shear deformation in the plane of the boundary is zero. In particular, in the case of two arrays, it is necessary to require that the dislocation axes in these arrays be perpendicular to each other. To correctly represent the geometry of conjugation of two graphene layers (and the corresponding moiré pattern), two dislocation arrays are not enough. We consider a more general case and represent the plastic distortion tensor $\beta^p_{ij}$ produced by one array of dislocations in the form $\beta^p_{ij}=n_ib_j/d$, where ${\bf n}$ is the normal to the dislocation line lying in the plane of the boundary, ${\bf b}$ is the Burgers vector of the dislocation, $d$ is the distance between neighboring dislocations in the array. The plastic deformation $\varepsilon^p_{ij}$ and rotation $\omega^p_{ij}$ are determined by the symmetric and antisymmetric parts of the tensor $\beta^p_{ij}$ [@HL1968] $$\varepsilon^p_{ij} =\frac{n_ib_j+n_jb_i}{2d} \ \ \ \ \ \ \omega^p_{ij} = \frac{n_ib_j-n_jb_i}{2d} \label{eq:tensor}$$ ![The schematic representation of the dislocation network used to describe the twist boundary. (a) - network of screw dislocations, (b) - reconstructed network of dislocations. Vectors 1,2,3 indicate the directions of dislocation lines. []{data-label="network"}](fig1a.pdf "fig:"){width="3.50cm"}     ![The schematic representation of the dislocation network used to describe the twist boundary. (a) - network of screw dislocations, (b) - reconstructed network of dislocations. Vectors 1,2,3 indicate the directions of dislocation lines. 
[]{data-label="network"}](fig1b.pdf "fig:"){width="3.90cm"} To describe the conjugation of two twisted graphene layers and taking into account the lattice trigonal symmetry, we will use three dislocation arrays rotated with respect to each other by $\pi$/3. Assuming that the normal to the graphene layers is $\left[ 001 \right]$ and considering screw dislocations with Burgers vector parallel to dislocation line, we write (see Fig. \[network\]): $$\begin{aligned} {\bf b}_1 =b\left[ \frac{\sqrt{3}}{2}, -\frac{1}{2}, 0\right], \ \ \ {\bf n}_1 = \left[ \frac{1}{2}, \frac{\sqrt{3}}{2}, 0\right], \nonumber \\ {\bf b}_2 =b\left[ \frac{\sqrt{3}}{2}, \frac{1}{2}, 0\right], \ \ \ {\bf n}_2 = \left[ -\frac{1}{2}, \frac{\sqrt{3}}{2}, 0\right] \label{eq:vectors12}\end{aligned}$$ where $b$ is the module of the Burgers vector which should be equal to the elementary translation in graphene layer. To ensure the total deformation $\varepsilon^p_{ij}$ being equal to zero, it is necessary to choose the third dislocation array with the vector ${\bf b}_3$ orthogonal to ${\bf b}_1+{\bf b}_2$: $${\bf b}_3 =b\left[0, 1, 0\right], \ \ \ {\bf n}_3 = \left[-1, 0, 0\right] \label{eq:vector3}$$ Indeed, substituting expressions (\[eq:vectors12\]) and (\[eq:vector3\]) into equation (\[eq:tensor\]), we find $$\varepsilon^p_{ij} =0 \ \ \ \ \ \ \omega^p_{ij} = \frac{3b}{2d} \begin{pmatrix} % or pmatrix or bmatrix or Bmatrix or ... 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix} \label{eq:tensor2}$$ and the rotation vector $\omega_k = \frac12 \epsilon_{ijk}\omega_{ij}$ will be $\omega = \dfrac{3b}{2d}\left[0,0,1\right]$. Note that for small rotation angle $\psi \approx \omega_3=3b/(2d)$; this expression is similar to that determining the geometry of moiré in the model of CSL $\psi \sim a/l$, $l$ being the distance between coincidence points, $a$ being the elementary lattice translation. 
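The algebra behind Eq. (\[eq:tensor2\]) is easily verified numerically. The following sketch (with $b=d=1$ for illustration) accumulates $\beta^p_{ij}=n_ib_j/d$ over the three arrays of Eqs. (\[eq:vectors12\]) and (\[eq:vector3\]) and checks that the symmetric part vanishes while the rotation magnitude equals $3b/(2d)$; the overall sign of $\omega_3$ depends on the orientation convention chosen for ${\bf n}$:

```python
import math

b, d = 1.0, 1.0
s3 = math.sqrt(3.0) / 2.0

# Burgers vectors and in-plane normals of the three screw-dislocation arrays
arrays = [
    ((s3, -0.5, 0.0), (0.5,  s3, 0.0)),   # b_1, n_1
    ((s3,  0.5, 0.0), (-0.5, s3, 0.0)),   # b_2, n_2
    ((0.0, 1.0, 0.0), (-1.0, 0.0, 0.0)),  # b_3, n_3
]

# total plastic distortion beta_ij = sum_k n_i b_j / d  (Eq. (eq:tensor))
beta = [[sum(n[i] * bv[j] * b / d for bv, n in arrays) for j in range(3)] for i in range(3)]

eps   = [[0.5 * (beta[i][j] + beta[j][i]) for j in range(3)] for i in range(3)]  # plastic strain
omega = [[0.5 * (beta[i][j] - beta[j][i]) for j in range(3)] for i in range(3)]  # plastic rotation

assert all(abs(eps[i][j]) < 1e-12 for i in range(3) for j in range(3))  # pure rotation, no strain
assert abs(abs(omega[0][1]) - 1.5 * b / d) < 1e-12                      # |omega_3| = 3b/(2d)
```

For small angles this reproduces the twist $\psi \approx 3b/(2d)$ quoted above.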
Relaxed displacement field within the dislocation model ------------------------------------------------------- The equations (\[eq:tensor2\]) are valid on the average for the whole sheet. In fact, the displacements are non-uniformly distributed and concentrated near the dislocation lines. As discussed above, to describe correctly the displacement field in the case of twisted bilayer graphene [*three*]{} arrays of dislocations are necessary. We believe that the Frenkel-Kontorova model [@Braun] gives a qualitatively correct description of the displacement field created by surface dislocations. Assuming that the energy relief of the substrate may have an additional minimum [@Srolovits2; @Savini] and dislocations can split into the partial ones [@HL1968; @Braun], the screw component of the displacement field for one family can be represented as $$\begin{aligned} u_s(x) =\frac{b}{\pi} \sum_i \left[\arctan\left(\exp\left(\frac{x-x^0_i-\delta/2}{\xi}\right)\right) \right. \nonumber \\ + \left. \arctan\left(\exp\left(\frac{x-x^0_i+\delta/2}{\xi}\right)\right)\right] \label{eq:displ1}\end{aligned}$$ where $x^0_i$ corresponds to the position of the center of the $i$-th dislocation line, $\xi$ is the core width and $\delta \sim \mu b/\gamma$ is the splitting distance between the partial dislocations, $\mu$ is the shear modulus and $\gamma$ is the stacking fault energy. The splitting of the dislocation on the hexagonal lattice results in the formation of a stacking fault, which is also accompanied by the appearance of edge components of the partial dislocations [@Bagchi2020], ${\bf b}=({\bf b}/2+{\bf b}_e)+({\bf b}/2-{\bf b}_e)$. In this case, the edge component of the displacement field can be written as $$\begin{aligned} u_e(x) =\frac{b_e}{\pi} \sum_i \left[\arctan\left(\exp\left(\frac{x-x^0_i-\delta/2}{\xi}\right)\right) \right. \nonumber \\ - \left. \arctan\left(\exp\left(\frac{x-x^0_i+\delta/2}{\xi}\right)\right)\right] \label{eq:displ2}\end{aligned}$$ Fig. 
\[u(x)\]a displays the dependence $u_s(x)$ for the cases of narrow and wide (split) dislocations. In the case of narrow dislocations the displacements are concentrated in the dislocation core and include both plastic and elastic parts. When the width of the dislocation core $\xi$ increases, the dependence $u_s(x)$ becomes close to linear $u \approx u^p =bx/d$ and represents a pure plastic shear. Fig. \[u(x)\]b shows the edge component of displacements in the case of a split dislocation. ![Screw component (a) of displacements produced by one dislocation array in the case of non-split ($\xi=0.2$, $\delta=0.2$, curve 1) and split ($\xi=0.1$, $\delta=0.8$, curve 2) dislocation cores. Edge component (b) of the displacements is produced by one array of partial dislocation ($\xi=0.1$, $\delta=0.8$). Distances and parameters $\xi$, $\delta$ are given in units of $d$.[]{data-label="u(x)"}](fig2a.pdf "fig:"){width="6.80cm"}    ![Screw component (a) of displacements produced by one dislocation array in the case of non-split ($\xi=0.2$, $\delta=0.2$, curve 1) and split ($\xi=0.1$, $\delta=0.8$, curve 2) dislocation cores. Edge component (b) of the displacements is produced by one array of partial dislocation ($\xi=0.1$, $\delta=0.8$). Distances and parameters $\xi$, $\delta$ are given in units of $d$.[]{data-label="u(x)"}](fig2b.pdf "fig:"){width="6.80cm"} Following the discussion in the previous section, we represent the total displacement field in twisted graphene layers as a superposition of three dislocation arrays. 
Subtracting the plastic part, we write the elastic displacements produced by screw dislocations in the form $$\begin{aligned} {\bf u}^{el}({\bf r}) & = \sum_{k=1}^3 \frac {{\bf b}_k}{\pi}\sum_{i=-m}^m \arctan\left(\exp\left(\frac{{\bf r n}_k-x^{k}_i-\delta/2}{\xi}\right)\right) \\ & +\arctan\left(\exp\left(\frac{{\bf r n}_k-x^{k}_i+\delta/2}{\xi}\right)\right) \\ & - \left(mb+\frac {b}{L}{\bf r n}_k \right) %\end{aligned} \label{eq:displt} \end{aligned}$$ where ${\bf n}_k =[001] \times {\bf b}_k $ is the normal to the dislocation line of the $k$-th array. In addition, in the case of split dislocations, there is a displacement field created by arrays of edge partial dislocations $$\begin{aligned} {\bf u}_{e}({\bf r}) & = \sum_{k=1}^3 \frac{{\bf b}^p_k}{\pi} \sum_{i=-m}^m \left[ \arctan\left(\exp\left(\frac{{\bf r n}_k-x^{k}_i+\delta/2}{\xi}\right)\right)\right. \\ & -\left. \arctan\left(\exp\left(\frac{{\bf r n}_k-x^{k}_i-\delta/2}{\xi}\right)\right)\right] %\end{aligned} \label{eq:displt2} \end{aligned}$$ The vector fields described by equation (\[eq:displt\]) are shown in Fig. \[vecfield\] for the cases of narrow (non-split) and split dislocation cores. As one can see from Fig. \[vecfield\]a,b, the screw component of the displacement field produced by elastic relaxation (\[eq:displt\]) forms a vortex lattice. However, the geometry of displacements is quite different for the cases under consideration. In particular, dislocation splitting reduces the period of the vortex lattice by a factor of two; the magnitude of displacement becomes essentially smaller (Fig. \[vecfield\]b). The distribution of the corresponding edge components of the displacement field ${\bf u}_{edge}({\bf r})$ is shown in Fig. \[vecfield\]c. Alternating domains of almost constant displacements correspond to different types of stacking faults.
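The superposition in Eq. (\[eq:displt\]) can be sketched numerically as follows. This is our own minimal implementation, not the authors' code: the $120^{\circ}$ arrangement of the three Burgers vectors, the line spacing $L$, the core width $\xi$ and the cutoff $m$ are illustrative assumptions, and the subtracted plastic term is taken along each ${\bf b}_k$.

```python
import math

def u_el(r, b=1.0, L=1.0, xi=0.1, delta=0.0, m=5):
    """Elastic screw displacement field of Eq. (displt): three dislocation
    arrays minus the plastic (mean shear) part.  r is an (x, y) tuple."""
    # Burgers vectors of the three arrays, 120 degrees apart (assumed).
    bvecs = [(b * math.cos(2 * math.pi * k / 3),
              b * math.sin(2 * math.pi * k / 3)) for k in range(3)]
    ux = uy = 0.0
    for (bx, by) in bvecs:
        nx, ny = -by / b, bx / b          # n_k = [001] x b_k, in-plane normal
        s = r[0] * nx + r[1] * ny         # coordinate along the normal
        acc = 0.0
        for i in range(-m, m + 1):
            x0 = i * L                    # dislocation line positions
            acc += math.atan(math.exp((s - x0 - delta / 2) / xi))
            acc += math.atan(math.exp((s - x0 + delta / 2) / xi))
        acc = acc / math.pi - (m + s / L)  # subtract the plastic part
        ux += bx * acc
        uy += by * acc
    return ux, uy
```

Since the three Burgers vectors sum to zero, the elastic displacement vanishes at the triple crossing point of the dislocation lines, which is the center of a vortex in Fig. \[vecfield\]a.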
![image](fig3a.pdf){width="7.0cm"}      ![image](fig3b.pdf){width="7.0cm"}\ ![image](fig3c.pdf){width="7.0cm"}      ![image](fig3d.pdf){width="7.0cm"} ![image](fig4a.pdf){width="6.90cm"}    ![image](fig4b.pdf){width="6.90cm"} The picture of elastic displacement is rather similar to that obtained in atomistic simulations (see Refs. [@Fasolino2015; @Savini]), which indicates a semiquantitative correctness of the description of atomic relaxation effects in twisted graphene bilayers within our simple dislocation model. It is worthwhile to note that at the center of the vortex situated at the crossing of dislocation lines in Fig. \[network\] the relative displacement of the layers is equal to half of an elementary translation, which results in the formation of a stacking fault. To visualize this stacking fault, one needs to pass from the elastic deformations ${\bf u}_{el}({\bf r})$ to the relative displacements between the layers described by equation (\[eq:displ1\]). In this case the more energetically favorable configuration is the one where one of the dislocation families is shifted from the symmetry position (Fig. \[network\]b) by the value $\delta l$ in the direction normal to the dislocation lines. The distribution of the elastic displacements after such a reconstruction of the dislocation network is shown in Fig. \[vecfield\]d. One can see that the reconstruction of the dislocation network results in a drastic change of the displacement field ${\bf u}_{el}({\bf r})$ (cf. Fig. \[vecfield\] a and d). The regions with almost zero displacements are surrounded by six triangular regions with the largest displacements at their borders. According to Ref. [@Bagchi2020] the structure of conjugation of bilayer graphene can be described in terms of a network of partial dislocations $a/3\langle 1100\rangle$ separating the regions of AB and BA stacking. The picture presented in Fig. \[vecfield\]d agrees qualitatively with that discussed in Ref. [@Bagchi2020].
Another quantity characterizing the peculiarities of the displacement field is the distribution of the rotation vector ${\boldsymbol \omega}(\bf r)= {\nabla \times \bf u}_{el}(\bf r)$ presented in Fig. \[rot\]. As one can see from this figure, in the case of a narrow core (Fig. \[rot\]a), the dislocation lines are clearly visible in the distribution of the rotation vector and their intersections correspond to the centers of vortices. At the same time, only the lattice of vortices remains visible in the case of split dislocations. Thus, depending on the representation used, the conjugation of layers can be described either in terms of vortices or dislocations. Here, the vortex displacement field originates naturally from the elastic relaxation of atomic positions. Note that the magnitude of the rotation vector $\omega$ is close to zero for the edge component of displacements. Another quantity which is actively discussed at present [@pseudomag] is the distribution of the pseudomagnetic field (PMF) [@Katsnelsonbook; @physrep] given by the equations $$H_{PMF}= \frac{dA_y}{dx}-\frac{dA_x}{dy} \label{pmf}$$ where the vector potential is expressed (up to a constant multiplier) via the deformations as $$A_{x}= u_{xx}-u_{yy}, \qquad A_{y}= -2u_{xy}, \nonumber \label{vpot}$$ $$\begin{aligned} u_{xx}=\frac{du_x}{dx}+\frac{1}{2} \left(\frac{du_x}{dx} \right)^2 +\frac{1}{2} \left( \frac{du_y}{dx}\right)^2, \nonumber \\ u_{yy}=\frac{du_y}{dy}+\frac{1}{2} \left( \frac{du_y}{dy}\right)^2 +\frac{1}{2} \left( \frac{du_x}{dy}\right)^2, \nonumber \\ u_{xy}=\frac{1}{2} \left( \frac{du_x}{dy}+ \frac{du_y}{dx} + \frac{du_x}{dy}\frac{du_x}{dx} + \frac{du_y}{dy}\frac{du_y}{dx} \right) \label{def}\end{aligned}$$ The distribution of the PMF calculated from equations (\[pmf\])-(\[def\]) for the reconstructed dislocation network with the displacement field from Fig. \[vecfield\]d is shown in Fig. \[dist\]. Pink regions (with negative PMF) correspond to the domains of small displacements in Fig.
\[vecfield\]d; quasicircular yellow regions with positive PMF are situated in between. The presented picture is characteristic of the reconstructed dislocation network and will be much less regular for the other distributions of displacement fields considered here. ![Distribution of the PMF calculated for the reconstructed dislocation network (Fig. \[vecfield\]d) by using Eqs. (\[pmf\])-(\[def\]) []{data-label="dist"}](fig5.pdf){height="7.0cm"} Discussion and conclusions ========================== The model developed here allows us to explain naturally some qualitative peculiarities of the moiré patterns in bilayer systems. In particular, the rotation angle and the location of the coincidence site of the moiré patterns are related to the Burgers vector and the distance between dislocation lines (see Section IIa). We show that the plastic part of the displacement field provides rotation of one layer with respect to the other one as a whole. At the same time, the distribution of elastic displacements is quite complicated and vortices are the most typical element; the observed picture is in qualitative agreement with both experiment [@JSA2013] and the results of atomistic simulations [@Fasolino2014; @Fasolino2015; @Fasolino2016; @Bagchi2020]. Although, by construction, the displacements $\bf u^{el}$ are equal to zero at a point located in the middle between two vortices, the derivative $d u^{el}_i/d x_j \neq 0$. Note that the values $d u^{el}_i/d x_j $ depend on the distance between the dislocations in a given array and the width of their core. For narrow dislocations located far enough from each other, $d u^{el}_i/d x_j \approx 0$ in the middle between two vortices. However, a more realistic consideration of the G/G case [@Srolovits2; @Savini; @Bagchi2020] assumes the dislocation core splitting into partial dislocation cores. This means that the total dislocation core width is rather large, and the cores are situated rather close to each other.
As a consequence, one should expect a remarkable deviation of the values $d u^{el}_i/d x_j $ from zero between the vortices. By using equation (\[eq:displt\]) and assuming $L \gg z$ we have $d u^{el}_x/d x_y \approx 2b/\pi z \exp(-L/2z)\cosh(d/z)$. As a result, the region between the vortices is characterized by an excess of elastic energy density $\Delta E_{el} \sim \mu (d u^{el}_x/d x_y)^2$. This additional energy can be reduced by self-reorientation of the graphene layers [@Fasolino2016] resulting in the increase of the distance between dislocations (cf. Eq. (\[eq:tensor2\])). Note that although the model does not take into account details related to the characteristics of chemical bonding and the formation of various types of stacking, it provides a clear vision of the qualitative structural features of twisted bilayer systems. Our results unify the language used in the physics of moiré patterns in twisted bilayer graphene and other Van der Waals heterostructures with that traditionally used in the description of nano- and mesostructures in solids. We suggest explicit analytical expressions for the distribution of atomic displacements in twisted bilayer graphene which can be used for both model theoretical studies and the interpretation of experimental and computational results. Acknowledgements ================ The work of MIK was supported by the JTC-FLAGERA Project GRANSPORT and the work of YNG was financed by the Russian Ministry of education and science (topic “Structure” A18-118020190116-6). [00]{} A. K. Geim and I. V. Grigorieva, Nature [**499**]{}, 419 (2013). M. I. Katsnelson, [*The Physics of Graphene*]{} (2nd edition) (Cambridge Univ. Press, Cambridge, 2020). G. Li, A. Luican, J. L. Dos Santos, A. Castro Neto, A. Reina, J. Kong, and E. Andrei, Nature Phys. [**6**]{}, 109 (2009). J. M. B. Lopes dos Santos, N. M. R. Peres, and A. H. Castro Neto, Phys. Rev. Lett. [**99**]{}, 256802 (2007). W.-T. Pong and C. Durkan, J. Phys. D [**38**]{}, R329 (2005). J.
Xue, J. Sanchez-Yamagishi, D. Bulmash, P. Jacquod, A. Deshpande, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero, and B. J. LeRoy, Nature Mater. [**10**]{}, 282 (2011). S. Tang, H. Wang, Y. Zhang, A. Li, H. Xie, X. Liu, L. Liu, T. Li, F. Huang, X. Xie, and M. Jiang, Sci. Rep. [**3**]{}, 2666 (2013). W. Yang, G. Chen, Z. Shi, C.-C. Liu, L. Zhang, G. Xie, M. Cheng, D. Wang, R. Yang, D. Shi, K. Watanabe, T. Taniguchi, Y. Yao, Y. Zhang, and G. Zhang, Nature Mater. [**12**]{}, 792 (2013). C. R. Woods, L. Britnell, A. Eckmann, R. S. Ma, J. C. Lu, H. M. Guo, X. Lin, G. L. Yu, Y. Cao, R. V. Gorbachev, A. V. Kretinin, J. Park, L. A. Ponomarenko, M. I. Katsnelson, Yu. N. Gornostyrev, K. Watanabe, T. Taniguchi, C. Casiraghi, H-J. Gao, A. K. Geim and K. S. Novoselov, Nature Phys. [**10**]{}, 451 (2014). G. J. Slotman, M. M. van Wijk, P.-L. Zhao, A. Fasolino, M. I. Katsnelson, and S. Yuan, Phys. Rev. Lett. [**115**]{}, 186801 (2015). H. Shi, Z. Zhan, Z. Qi, K. Huang, E. van Veen, J. A. Silva-Guillen, R. Zhang, P. Li, K. Xie, H. Ji, M. I. Katsnelson, S. Yuan, S. Qin, and Z. Zhang, Nature Commun. [**11**]{}, 371 (2020). Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Nature [**556**]{}, 43 (2018). Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, R. C. Ashoori, and P. Jarillo-Herrero, Nature [**556**]{}, 80 (2018). D. R. Nelson, [*Defects and Geometry in Condensed Matter Physics*]{} (Cambridge Univ. Press, Cambridge, 2002). O. V. Yazyev and S. G. Louie, Nature Mater. [**9**]{}, 806 (2010). M. A. Akhukov, A. Fasolino, Y. N. Gornostyrev, and M. I. Katsnelson, Phys. Rev. B [**85**]{}, 115407 (2012). S. Dai, Y. Xiang, and D. J. Srolovitz, Phys. Rev. B [**93**]{}, 085410 (2016). M. M. van Wijk, A. Schuring, M. I. Katsnelson, and A. Fasolino, Phys. Rev. Lett. [**113**]{}, 135504 (2014). M. M. van Wijk, A. Schuring, M. I. Katsnelson, and A. Fasolino, 2D Mater.
[**2**]{}, 034010 (2015). C. R. Woods, F. Withers, M. J. Zhu, Y. Cao, G. Yu, A. Kozikov, M. Ben Shalom, S. V. Morozov, M. M. van Wijk, A. Fasolino, M. I. Katsnelson, K. Watanabe, T. Taniguchi, A. K. Geim, A. Mishchenko, and K. S. Novoselov, Nature Commun. [**7**]{}, 10800 (2016). J. S. Alden, A. W. Tsen, P. Y. Huang, R. Hovden, L. Brown, J. Park, D. A. Muller, and P. L. McEuen, Proc. Natl. Acad. Sci. USA [**110**]{}, 11256 (2013). S. Bagchi, H. T. Johnson, and H. B. Chew, Phys. Rev. B [**101**]{}, 054109 (2020). K. Hermann, J. Phys.: Cond. Mat. [**24**]{}, 314210 (2012). K. Sadananda and M. J. Marcinkowski, J. Appl. Phys. [**45**]{}, 1521 (1974). R. Siems, P. Delavignette, and S. Amelinckx, Phys. Status Solidi (b) [**2**]{}, 421 (1962). J. P. Hirth and J. Lothe, [*Theory of Dislocations*]{} (McGraw-Hill, New York, 1968). S. Zhou, J. Han, S. Dai, J. Sun, and D. J. Srolovitz, Phys. Rev. B [**92**]{}, 155438 (2015). O. M. Braun and Y. S. Kivshar, [*The Frenkel-Kontorova Model: Concepts, Methods and Applications*]{} (Springer, Berlin, 2004). G. Savini, Y. J. Dappe, S. Öberg, J.-C. Charlier, M. I. Katsnelson, and A. Fasolino, Carbon [**49**]{}, 62 (2011). H. Shi, Z. Zhan, Z. Qi, K. Huang, E. van Veen, J. A. Silva-Guillen, R. Zhang, P. Li, K. Xie, H. Ji, M. I. Katsnelson, S. Yuan, S. Qin, and Z. Zhang, Nature Commun. [**11**]{}, 371 (2020). M. A. H. Vozmediano, M. I. Katsnelson, and F. Guinea, Phys. Rep. [**496**]{}, 109 (2010).
--- abstract: | Research in life sciences is increasingly being conducted in a digital and online environment. In particular, life scientists have been pioneers in embracing new computational tools to conduct their investigations. To support the sharing of digital objects produced during such research investigations, we have witnessed in the last few years the emergence of specialized repositories, e.g., DataVerse and FigShare. Such repositories provide users with the means to share and publish datasets that were used or generated in research investigations. While these repositories have proven their usefulness, interpreting and reusing evidence for most research results is a challenging task. Additional contextual descriptions are needed to understand how those results were generated and/or the circumstances under which they were concluded. Because of this, scientists are calling for models that go beyond the publication of datasets to systematically capture the life cycle of scientific investigations and provide a single entry point to access the information about the hypothesis investigated, the datasets used, the experiments carried out, the results of the experiments, the conclusions that were derived, the people involved in the research, etc. In this paper we present the Research Object (RO) suite of ontologies, which provide a structured container to encapsulate research data and methods along with essential metadata descriptions. Research Objects are portable units that enable the sharing, preservation, interpretation and reuse of research investigation results. The ontologies we present have been designed in the light of requirements that we gathered from life scientists. They have been built upon existing popular vocabularies to facilitate interoperability. Furthermore, we have developed tools to support the creation and sharing of Research Objects, thereby promoting and facilitating their adoption. 
address: - | School of Computer Science, University of Manchester, UK.\ first$\_$name.last$\[email protected] - | Department of Zoology, University of Oxford, UK.\ first$\_$name.last$\[email protected] - | Ontology Engineering Group, Universidad Politécnica de Madrid, Spain.\ {dgarijo, ocorcho}@fi.upm.es - | Leiden University Medical Center, Leiden, The Netherlands.\ [email protected] - | Poznan Supercomputing and Networking Center, Poznan, Poland.\ [email protected] - | iSOCO, Madrid, Spain.\ [email protected] author: - Khalid Belhajjame - Jun Zhao - Daniel Garijo - Kristina Hettne - Raul Palma - Oscar Corcho - 'José Manuel Gómez-Pérez' - Sean Bechhofer - Graham Klyne - Carole Goble bibliography: - 'bib.bib' title: 'The Research Object Suite of Ontologies: Sharing and Exchanging Research Data and Methods on the Open Web' --- Scholarly communication, Semantic Web, Ontologies, Provenance, Scientific Workflow, Scientific Methods.
--- abstract: 'Near-surface nitrogen-vacancy ([NV]{}) centers in diamond have been successfully employed as atomic-sized magnetic field sensors for external spins over the last years. A key challenge is still to develop a method to bring NV centers to nanometer proximity to the diamond surface while preserving their optical and spin properties. To that aim we present a method of controlled diamond etching with nanometric precision using an oxygen inductively coupled plasma (ICP) process. Importantly, no traces of plasma-induced damage to the etched surface could be detected by X-ray photoelectron spectroscopy (XPS) and confocal photoluminescence microscopy techniques. In addition, by profiling the depth of NV centers created by $5.0$ keV of nitrogen implantation energy, no plasma-induced quenching in their fluorescence could be observed. Moreover, the developed etching process allowed even the channeling tail in their depth distribution to be resolved. Furthermore, treating a ${}^{12}$C isotopically purified diamond revealed a threefold increase in T${}_2$ times for NV centers with $<4$ nm of depth (measured by NMR signal from protons at the diamond surface) in comparison to the initial oxygen-terminated surface.' author: - Felipe Fávaro de Oliveira - 'S. Ali Momenzadeh' - Ya Wang - Mitsuharu Konuma - Matthew Markham - 'Andrew M. Edmonds' - Andrej Denisenko - Jörg Wrachtrup bibliography: - 'FFOliveira.bib' title: 'Effect of Low-Damage Inductively Coupled Plasma on Shallow NV Centers in Diamond' --- The negatively-charged nitrogen-vacancy ([NV]{}) center in diamond has attracted increasing attention due to its outstanding properties. It is an atomic-sized, bright and stable single photon source[@reviewJWrachtrup] with relatively long coherence times, reaching milliseconds in isotopically purified single crystal diamond layers[@JahnkeC12; @GopanTHz]. Additionally, its electron spin can be coherently manipulated by microwaves and read out optically at room temperature.
In recent years the use of near-surface (shallow) NV centers as sensors to detect external nuclear[@staudacher; @MaminNMR2013; @NMRMueller] and electronic[@Bernhard; @singleProteinFazhan] spins has been successfully demonstrated. However, since the signal detection relies on the relatively weak dipolar coupling to the targeted external spins, decaying proportionally to $r^{-3}$ (with $r$ being the distance between the targeted and sensor spins), NV centers must be located close to the diamond surface ($< 5$ nm)[@staudacher; @MaminNMR2013]. Up to now, the engineering of near-surface NV centers has relied mostly on low-energy nitrogen implantation[@pezzagnaYield] or epitaxial growth of high-quality nitrogen-doped CVD diamond followed by electron[@dGrownT2_KOhno] or ion irradiation[@C12irr]. Furthermore, NV centers can be brought closer to the diamond surface by post-treatments such as thermal oxidation[@LorentzAB; @OxPlasmaKim] and etching in plasma[@OxPlasmaKim; @VertDist; @NLplasma]. ![\[ICPfigure\] (a): schematic representation of the performed etching experiment. (b): [XY]{} confocal photoluminescence scan from the diamond surface treated with Ar/O${}_2$ RIE plasma; the fluorescence contrast between the masked and oxygen soft plasma-treated regions can be clearly seen. (c): etching depth measurements by AFM from oxygen soft plasma-treated polished and previously Ar/O${}_2$ RIE-etched surfaces; a difference of $1-2$ nm can be seen (represented by $\Delta$h). The etching rate for 180 W of ICP power was $1.5 \pm 1.0$ nm/min.](figure1){width="\columnwidth"} A major drawback for thermal oxidation is the uncertainty in the resulting etching rate and the infeasibility of selective etching. Overcoming these issues, plasma processes are widely employed, providing a smooth and uniform method to selectively etch diamond.
In particular for reactive ion etching (RIE) processes, the presence of a bias between the plasma source and the sample leads to ion bombardment on the diamond surface. This results in an enhanced etching rate and directionality of the process, but also produces a highly damaged layer, extending a few nanometers below the diamond surface, containing vacancies and implanted ions[@PlasmaDamXPS]. If located in the vicinity of NV centers, these defects can lead to suppression of the photoluminescence emission and degraded spin properties[@OxPlasmaKim; @NLplasma]. Thus a precise, uniform and highly selective etching process that avoids damage to the etched diamond surface is highly desired. To that aim we present a low-pressure oxygen inductively coupled plasma (ICP) process with nanometric etching precision and high reproducibility. All reported plasma-related processes were performed in an Oxford PlasmaPro NGP80 machine equipped with an additional ICP source (Oxford Plasma Technologies). The effect of the developed etching process on the optical and spin properties of near-surface NV centers in diamond CVD layers will be explored in detail. The optimized plasma recipe consists of two steps: first the RIE source with $10$ mTorr constant chamber pressure and $30$ W of power is used to ignite the plasma for a few seconds. Afterward the RIE source is switched off and the plasma is sustained only by the ICP source, which is located at a remote position from the substrate holder in the chamber. Under such conditions the sample surface is exposed mainly to neutral chemical radicals. This is supported by the fact that using both sources simultaneously showed no significant change in the DC bias, while varying the ICP power up to $300$ W and keeping the RIE power constant at $30$ W. This plasma process will be referred to further as “oxygen soft plasma” in this paper.
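For bookkeeping, the two-step recipe can be summarized as structured data. The sketch below is purely illustrative (the field names are our own convention, not the control language of the Oxford instrument); the numbers are those quoted in the text and in Fig. \[ICPfigure\]:

```python
# Illustrative record of the "oxygen soft plasma" recipe; field names
# are ours, not the plasma tool's API.
OXYGEN_SOFT_PLASMA = (
    {"step": "ignition", "source": "RIE", "power_W": 30,
     "pressure_mTorr": 10, "duration": "a few seconds"},
    {"step": "etching", "source": "ICP (remote)", "rie_power_W": 0,
     "icp_power_W": 180, "etch_rate_nm_per_min": (1.5, 1.0)},  # rate +/- error
)

def etch_depth_nm(minutes, rate=1.5, error=1.0):
    """Expected etch depth and its uncertainty for the 180 W ICP step."""
    return minutes * rate, minutes * error
```

With the quoted rate of $1.5 \pm 1.0$ nm/min, a 2-minute exposure corresponds to roughly $2-3$ nm of removed material, which matches the depths reported later in the text.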
A high-purity single-crystal $[100]$-oriented electronic grade CVD diamond with an as-polished surface was taken as a substrate. The initial root mean squared (RMS) surface roughness was measured to be in the range of 1 nm by atomic force microscopy (AFM), allowing a detailed analysis of the quality of the post-treated surface. A schematic representation of the experiment is presented in figure \[ICPfigure\](a). The sample was masked by lithography-patterned AZ $5214$ E (MicroChemicals) photoresist to protect part of the polished surface. Subsequently, the sample was exposed to Ar/O${}_2$ RIE plasma for 16 minutes ($100$ sccm and $11$ sccm respectively, $37.5$ mTorr chamber pressure and $70$ W RIE power). Next, the polished and the Ar/O${}_2$ RIE etched regions were again patterned by optical lithography, leaving stripes exposed to the subsequent treatment with the oxygen soft plasma. This procedure yields four different areas on the diamond surface, namely (I.) as-polished, (II.) polished combined with the oxygen soft plasma, (III.) Ar/O${}_2$ RIE as-etched and (IV.) Ar/O${}_2$ RIE etched combined with the oxygen soft plasma. The sample was cleaned by boiling in a triacid mixture of H${}_2$SO${}_4$:HNO${}_3$:HClO${}_4$, $1$:$1$:$1$ volumetric ratio for three hours, referred to as wet chemical oxidation (WCO). To analyze the effect of the oxygen soft plasma on the properties of the described surface areas, confocal photoluminescence (PL) microscopy (home-built setup with a $532$ nm wavelength excitation laser), AFM and X-ray photoelectron spectroscopy (XPS) measurements were performed. The XPS photoemission spectra were acquired using an AXIS ULTRA (Kratos Analytical Ltd.) spectrometer equipped with a monochromatized Al K$\alpha$ (1486.6 eV) radiation source. The binding energy scale was calibrated by means of Ag and Au reference samples. High resolution spectra of the C$1$s core levels detected normal to the sample surface are shown in figure \[XPS\].
There, the spectrum after the oxygen soft plasma (curve 1) is compared to those from a reference diamond sample. The spectral lines related to WCO and the previously described Ar/O${}_2$ RIE plasma process are shown in curves 2 and 3, respectively. The Ar/O${}_2$ RIE treatment has been reported to induce detectable damage to the diamond sub-surface layers[@ArO2Denisenko]. In the presented confocal PL microscopy measurement (figure \[ICPfigure\](b)), this damage is revealed as a slightly increased luminescence background. The lower part of this PL scan had an additional oxygen soft plasma treatment for 1 minute with $180$ W of ICP power. As it can be seen, this post-treatment was enough to eliminate the background related to the Ar/O${}_2$ RIE process, showing a clear fluorescence contrast. Furthermore, related AFM measurements are presented in figure \[ICPfigure\] (c). When the as-polished and the previously Ar/O${}_2$ RIE etched regions were treated simultaneously with the oxygen soft plasma, approximately $1-2$ nm more etching depth (represented by $\Delta$h) was observed in the Ar/O${}_2$ RIE etched region. Longer exposure times and different ICP powers did not show a significant increase in this difference. This is attributed to the faster initial removal of the RIE-damaged diamond layers. In addition, the corresponding XPS spectrum of the Ar/O${}_2$ RIE plasma process is presented in figure \[XPS\], curve 3. It shows a broad peak shifted by approximately $+1.2$ eV with respect to the sp${}^3$ bulk component at $284.3$ eV. This is associated with an amorphous phase of carbon ($\alpha$-sp${}^3$) of a few nanometers in thickness[@PlasmaDamXPS]. Importantly, such a signature of RIE-induced damage is absent in the C1s core level spectrum from the surface treated with the oxygen soft plasma process, as presented in figure \[XPS\], curve 1.
Altogether, the results of the confocal PL microscopy, AFM and XPS techniques indicate the complete removal of the amorphous RIE-damaged diamond sub-surface layers by the oxygen soft plasma. Moreover, the extracted thickness of $1 - 2$ nm is in good agreement with the literature[@PlasmaDamXPS]. Besides, the spectrum shown in figure \[XPS\], curve 1 is very similar to the one related to WCO (curve 2). The dominance of the sp${}^3$ bulk component is observed together with small peaks shifted to higher binding energies, known to be related to carbon-oxygen functional groups on the diamond surface[@XPSYagi]. Both oxygen soft plasma- and WCO-treatments show similar intensities of the O$1$s core level spectra (see the inset in figure \[XPS\]), indicating a full coverage of the surface with oxygen species. ![\[XPS\] X-ray photoelectron spectroscopy measurements; the different curves represent the oxygen soft plasma-treated sample (curve 1) and the reference sample containing a WCO (curve 2) and Ar/O${}_2$ RIE (curve 3) treated regions. The intensities of the peaks were normalized by the maximum value of the signal.](figure2){width="\columnwidth"} In addition, it is known that new NV centers are formed by post-plasma high temperature annealing[@OxPlasmaKim]. In contrast to the surface treated with the Ar/O${}_2$ RIE plasma process, this effect was not observed in the present experiment with an additional oxygen soft plasma treatment before the annealing procedure. This further supports the previously mentioned results, confirming the removal of the defective sub-surface layers that would contain plasma-induced vacancies. ![\[figure3\] (a): experimental values for the areal density of NV centers by $5.0$ keV of implantation energy vs. the etching depth.
The fit corresponds to a Gaussian complementary error function. (b): the experimentally fitted depth profile of NV centers is plotted together with the results of corresponding SRIM and CTRIM simulations.](figure3){width="\columnwidth"} Near-surface NV centers can be exquisite tools to detect plasma-induced defects since they are extremely sensitive to surface modifications[@OxPlasmaKim; @NLplasma; @VertDist]. Therefore, to validate the oxygen soft plasma process, the depth profile of NV centers created close to the diamond surface by $5.0$ keV of nitrogen implantation energy with a dose of $4 \times 10^{10}$ cm${}^{-2}$ was analyzed. After the implantation the sample was subsequently subjected to high temperature annealing at approximately $950\degree$C in high vacuum ($<10^{-6}$ mbar) for two hours and the mentioned WCO treatment. The chosen annealing temperature is known to minimize the density of paramagnetic defects such as vacancy complexes[@Annealing1000]. The evaluation consisted of sequential etching steps using the oxygen soft plasma process followed by confocal PL microscopy measurements to extract the areal density of NV centers. Each etching step comprised 1 minute of RIE plasma - aiming at a low etching rate and homogeneity - followed by 1 minute of the ICP with $180$ W power - aiming at the removal of the RIE-induced surface damage (the steps of the oxygen soft plasma recipe). The experimental areal density of NV centers versus the etching depth is plotted in figure \[figure3\](a). For simplicity, the experimental data was fitted to a Gaussian complementary error function. In figure \[figure3\](b) the experimentally fitted profile is compared to SRIM[@SRIM] and CTRIM[@CTRIM] simulations, which consider an amorphous material and ion channeling in a crystalline lattice, respectively.
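The Gaussian complementary error function fit mentioned above can be reproduced with a minimal brute-force least-squares search. This is a stand-in for whatever fitting routine was actually used by the authors; depths are in nm and the density is normalized to the pre-etch value:

```python
import math

def erfc_profile(h, n0, h0, sigma):
    """Areal density of NV centers remaining after etching to depth h,
    modeled as a Gaussian complementary error function."""
    return 0.5 * n0 * math.erfc((h - h0) / (math.sqrt(2) * sigma))

def fit_profile(depths, densities, n0):
    """Crude grid search for the mean depth h0 and straggle sigma (nm)."""
    best = (float("inf"), None, None)
    for h0 in [i * 0.1 for i in range(1, 200)]:
        for sigma in [j * 0.1 for j in range(1, 100)]:
            err = sum((erfc_profile(h, n0, h0, sigma) - n) ** 2
                      for h, n in zip(depths, densities))
            best = min(best, (err, h0, sigma))
    return best[1], best[2]
```

The fitted $h_0$ and $\sigma$ give the mean depth and straggle of the implanted profile; resolving the channeling tail, however, requires comparison with the CTRIM profile as in Fig. \[figure3\](b).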
Good agreement was found with the CTRIM simulated profile for a $[100]$-oriented diamond surface with a $3\degree$-off implantation angle, which is within the accuracy specified for surface polishing and the accuracy of the implantation process. Thus, the presence of *ion channeling* could be resolved even for a low energy of implantation, as also predicted by theoretical investigations using molecular dynamics simulations[@DenisMD]. Besides, the yield (conversion from implanted nitrogen atoms to NV centers) was found to be $1.7 \pm 0.3 \%$, a typical value for the used energy of implantation[@pezzagnaYield]. It should be highlighted that the point related to the first etching step at $5$ nm (vertical dotted line in figure \[figure3\](a)) does not show rapid quenching in the density of NV centers in comparison to the initial value (WCO). This contrasts with other plasma etching processes reported in the literature[@OxPlasmaKim; @NLplasma]. Thereby one could ascertain that the developed process preserves the optical properties of shallow NV centers. The influence of the oxygen soft plasma on the spin coherence characteristics (T${}_2$) of shallow NV centers is of great importance and will be discussed below. To avoid any undesired effects due to polishing-induced defects[@PolishingVolpe] or ${}^{13}$C spin bath noise[@C13bath], this study was conducted on an as-grown surface of a ${}^{12}$C isotopically purified ($99.999 \%$) diamond CVD layer with more than $50$ $\mu$m in thickness, overgrown on a natural abundance, single-crystal $[100]$-oriented electronic grade CVD diamond substrate (Element Six). The sample was first implanted with nitrogen ions at $2.5$ keV of energy with a dose of $7 \times 10^{9}$ cm${}^{-2}$ followed by high temperature annealing and WCO as described before. Single NV centers were addressed by the confocal microscopy technique.
Coherence times were measured by means of the optically detected magnetic resonance (ODMR) method using a Hahn echo sequence scheme, which allows us to probe the coherence between the ground state $\left| m_s=0 \right\rangle$ (bright state) and $\left| m_s=\pm 1 \right\rangle$ (dark states) of the NV center electronic spin. A magnetic field of approximately $420$ Gauss was aligned parallel to the NV axis. Further on, the nuclear magnetic resonance (NMR) signal originating from protons present in the immersion oil on the diamond surface was used to calibrate the depth of individual NV centers by means of an XY$8-16$ sequence scheme, as described in reference [@staudacher]. The related results are summarized in figure \[T2asgrown\]. ![\[T2asgrown\] (a): Spin coherence times T${}_2$ (Hahn echo) vs. depth of single NV centers evaluated by the NMR signal generated by protons on the diamond surface. The oxygen soft plasma process is performed and the post-treatment values are compared to the initial ones. (b): an example of measured T${}_2$; no signal originating from ${}^{13}$C spins could be observed. (c): an example of the contrast seen in the fluorescence of the NV center where the frequency of oscillation from protons on the diamond surface is expected.](figure4){width="\columnwidth"} After treating the initial surface with the oxygen soft plasma process for $2$ minutes with $180$ W of ICP power (corresponding to $2 - 3$ nm of etching depth), NV centers located a few nanometers ($<4$ nm) from the diamond surface could still be observed (figure \[T2asgrown\](a)). This further supports the preservation of the optical properties of shallow NV centers. Likewise, the values of coherence times showed an improvement of up to $\sim3$ times, yielding an NMR signal up to $B_{RMS} = 3.1$ $\mu$[T]{}, especially for NV centers located $< 2.5$ nm below the surface.
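The depth calibration from the proton NMR signal can be sketched as follows. For a semi-infinite proton layer on top of the diamond, the mean-square dipolar field at an NV center of depth $d$ is commonly written as $B_{RMS}^2 = \rho\,(5\pi/96)\,(\mu_0 \hbar \gamma_p/4\pi)^2/d^3$. Both the numerical prefactor and the proton density of the immersion oil (taken here as roughly that of water) are assumptions of this sketch, not values given in the text:

```python
import math

MU0_4PI = 1e-7            # mu_0/(4*pi), T*m/A
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
GAMMA_P = 2.6752e8        # proton gyromagnetic ratio, rad/(s*T)

def nv_depth_from_brms(b_rms, rho=6.6e28):
    """Estimate NV depth (m) from the proton NMR signal B_rms (T),
    assuming a semi-infinite proton layer of density rho (protons/m^3)
    and the commonly quoted prefactor 5*pi/96 (an assumption here)."""
    c = (5 * math.pi / 96) * (MU0_4PI * HBAR * GAMMA_P) ** 2
    return (rho * c / b_rms ** 2) ** (1.0 / 3.0)
```

With these assumptions, $B_{RMS} = 3.1$ $\mu$T maps to a depth of about $2$ nm, consistent with the $< 2.5$ nm quoted above for the centers showing the strongest signal.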
The reason for such improvement is not clear yet, but it can be associated with modifications of the electronic configuration of the diamond surface, which is now affected by the oxygen soft plasma. Indeed, fluctuations of surface charges are believed to be a significant source of decoherence due to electric field noise[@ElecNoise], meaning that the developed oxygen soft plasma process may increase the surface charge stability. Further studies must be performed to clarify this effect. To summarize, an alternative method for etching diamond using an oxygen ICP process was presented. Possessing qualities such as a nanometric-precision etching rate, high reproducibility and selective etching, this plasma process was demonstrated to be a promising technique to bring NV centers closer to the diamond surface while preserving their optical and spin characteristics. The presented technique could be used in the future to precisely control the depth of NV centers created by a variety of techniques such as ion and electron irradiation. In addition, being able to profile the depth distribution of NV centers with such precision, one would gain important information about the vacancy diffusion process regarding the creation of color centers in diamond. F. F. de Oliveira acknowledges the financial support by CNPq project number $204246/2013-0$. J. W. acknowledges the support by the EU via ERC grant SQUTEC and IP DIADEMS as well as the DFG.
Conclusion {#sec:conclusions} ========== The semi-automatic segmentation of vertebral bodies in a volumetric scenario is a challenging task, due to the large number of slices in the exams. To obtain a proper 3D reconstruction of the vertebrae, one has to pay attention to allowing a fast and accurate segmentation of slices. We have investigated this challenge and used the slope coefficient of the annotation time, so that the specialists' annotations were extrapolated from a slice to its neighbours up to a given limit without losing accuracy and, at the same time, reduced the total time spent on manual annotation. On the dataset used, on average, only 37% of the slices with vertebral body content had to be annotated, consequently making the process faster (on average, 36 seconds for each vertebral body). We have proposed the [3DBGrowth]{} method, which significantly outperforms GrowCut with comparable running time. Moreover, [3DBGrowth]{} presented the best results even with simple/sloppy seed points, which demands less effort in the annotation process. Acknowledgment {#acknowledgment .unnumbered} ============== This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001 and grant No.: 0487/17083480, by the São Paulo Research Foundation (FAPESP, grants No. 2016/17078-0, 2017/23780-2, 2018/06228-7, 2018/24414-2), and the National Council for Scientific and Technological Development (CNPq).
--- author: - Ben Smyth date: 'January 22, 2019' title: 'TLS 1.3 for engineers: An exploration of the TLS 1.3 specification and Oracle’s Java implementation' --- Further features ----------------
--- abstract: | We study a family of combinatorial optimization problems defined by a parameter $p\in[0,1]$, which involves spectral functions applied to positive semidefinite matrices, and has some application in the theory of optimal experimental design. This family of problems tends to a generalization of the classical maximum coverage problem as $p$ goes to $0$, and to a trivial instance of the knapsack problem as $p$ goes to $1$. In this article, we establish a matrix inequality which shows that the objective function is submodular for all $p\in[0,1]$, from which it follows that the greedy approach, which has often been used for this problem, always gives a design within $1-1/e$ of the optimum. We next study the design found by rounding the solution of the continuous relaxed problem, an approach which has been applied by several authors. We prove an inequality which generalizes a classical result from the theory of optimal designs, and allows us to give a rounding procedure with an approximation factor which tends to $1$ as $p$ goes to $1$. author: - | Guillaume Sagnol[^1]\ [Zuse Institut Berlin (ZIB), Takustr. 7, 14195 Berlin, Germany]{}\ [[[email protected]]([email protected])]{} title: 'Approximation of a Maximum-Submodular-Coverage problem involving spectral functions, with application to Experimental Design' --- #### Keyword Maximum Coverage, Optimal Design of Experiments, Kiefer’s $p-$criterion, Polynomial-time approximability, Rounding algorithms, Submodularity, Matrix inequalities. Introduction ============ This work is motivated by a generalization of the classical maximum coverage problem which arises in the study of optimal experimental designs. 
This problem may be formally defined as follows: given $s$ positive semidefinite matrices $M_1, \ldots, M_s$ of the same size and an integer $N<s$, solve: $$\begin{aligned} \max_{I \subset [s]} &\quad {\operatorname{rank}}\Big( \sum_{i \in I } M_i \Big) \label{maxrank_intro} \tag{$P_0$}\\ \operatorname{s.t.} &\quad {\operatorname{card}}(I) \leq N, \nonumber\end{aligned}$$ where we use the standard notation $[s]:=\{1,\ldots,s\}$ and ${\operatorname{card}}(S)$ denotes the cardinality of $S$. When each $M_i$ is diagonal, it is easy to see that Problem  is equivalent to a max-coverage instance, by defining the sets $S_i=\{k: (M_i)_{k,k}>0\}$, so that the rank in the objective of Problem  is equal to ${\operatorname{card}}\big(\cup_{i \in I} S_i \big)$. A more general class of problems arising in the study of optimal experimental designs is obtained by considering a *deformation* of the rank which is defined through a spectral function. Given $p\in[0,1]$, solve: $$\begin{aligned} \max_{{\boldsymbol{n}}\in{\mathbb{N}}^s} &\quad \varphi_p \left( {\boldsymbol{n}} \right) \label{Pp_intro} \tag{$P_p$}\\ \operatorname{s.t.} &\quad \sum_{i\in[s]} n_i \leq N, \nonumber\end{aligned}$$ where $\varphi_p({\boldsymbol{n}})$ is the sum of the eigenvalues of $\sum_{i\in[s]} n_i M_i$ raised to the exponent $p$: if the eigenvalues of the positive semidefinite matrix $\sum_{i\in[s]} n_i M_i$ are $\lambda_1,\ldots,\lambda_m$ (counted with multiplicities), $\varphi_p({\boldsymbol{n}})$ is defined by $$\varphi_p({\boldsymbol{n}}) = {\operatorname{trace}}\Big(\sum_{i\in[s]} n_i M_i \Big)^p = \sum_{k=1}^m \lambda_k^p.$$ We shall see that Problem  is the limit of Problem  as $p \to 0^+$ indeed. On the other hand, the limit of Problem  as $p \to 1$ is a knapsack problem (in fact, it is the trivial instance in which the $i{{}^{\mathrm{th}}}$ item has weight $1$ and utility $u_i={\operatorname{trace}}M_i$). 
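To make the two limits concrete, the following small numerical sketch (a toy instance of our own choosing, not taken from the paper) shows that $\varphi_p$ approaches the rank as $p\to 0^+$ and equals the trace at $p=1$:

```python
import numpy as np

# Hypothetical instance: s = 3 rank-one information matrices in R^3.
M = [np.outer(v, v) for v in (np.array([1., 0., 0.]),
                              np.array([1., 1., 0.]),
                              np.array([0., 0., 1.]))]

def phi_p(n, p):
    """phi_p(n) = trace((sum_i n_i M_i)^p) = sum of positive eigenvalues^p,
    with the convention 0^p = 0."""
    A = sum(ni * Mi for ni, Mi in zip(n, M))
    lam = np.linalg.eigvalsh(A)
    return float(sum(l**p for l in lam if l > 1e-12))

n = [1, 0, 1]                     # select M_1 and M_3
A = sum(ni * Mi for ni, Mi in zip(n, M))
print(phi_p(n, 1e-6))             # close to rank(A) = 2 as p -> 0+
print(phi_p(n, 1.0), np.trace(A)) # p = 1 recovers the trace
```

With diagonal $M_i$ this reduces to counting covered coordinates, which is the max-coverage connection mentioned above.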
Note that a matrix $M_i$ may be chosen $n_i$ times in Problem , while choosing a matrix more than once in Problem  cannot increase the rank. Therefore we also define the binary variant of Problem : $$\max_{{\boldsymbol{n}}} \left\{\varphi_p \left( {\boldsymbol{n}} \right):\ {\boldsymbol{n}}\in\{0,1\}^s,\ \sum_{i\in[s]} n_i \leq N\right\} \tag{$P_p^\textrm{bin}$} \label{Ppbin}$$ We shall also consider the case in which the selection of the $i{{}^{\mathrm{th}}}$ matrix costs $c_i$, and a total budget $B$ is allowed. This is the budgeted version of the problem: $$\max_{{\boldsymbol{n}}} \left\{\varphi_p \left( {\boldsymbol{n}} \right):\ {\boldsymbol{n}}\in\mathbb{N}^s,\ \sum_{i\in[s]} c_i n_i \leq B\right\} \tag{$P_p^\textrm{bdg}$} \label{Ppbdg}$$ Throughout this article, we use the term *design* for the variable ${\boldsymbol{n}}=(n_1,\ldots,n_s)\in\mathbb{N}^s$. We say that ${\boldsymbol{n}}$ is a $N-$*replicated design* if it is feasible for Problem , a $N-$*binary design* if ${\boldsymbol{n}}$ is feasible for Problem , and a $B-$*budgeted design* when it satisfies the constraints of . Motivation: optimal experimental design --------------------------------------- The theory of *optimal design of experiments* plays a central role in statistics. It studies how to best select experiments in order to estimate a set of parameters. Under classical assumptions, the best linear unbiased estimator is given by least square theory, and lies within confidence ellipsoids which are described by a positive semidefinite matrix depending only on the selected experiments. The *optimal design of experiments* aims at selecting the experiments in order to make these confidence ellipsoids as small as possible, which leads to more accurate estimators. A common approach consists in minimizing a scalar function measuring these ellipsoids, where the function is taken from the class of $\Phi_p$-information functions proposed by Kiefer [@Kief75]. 
This leads to a combinatorial optimization problem (decide how many times each experiment should be performed) involving a spectral function which is applied to the information matrix of the experiments. For $p\in ]0,1]$, the Kiefer’s $\Phi_p$-optimal design problem is equivalent to Problem  (up to the exponent $1/p$ in the objective function). In fact, little attention has been given to the combinatorial aspects of Problem  in the optimal experimental design literature. The reason is that there is a natural relaxation of the problem which is much more tractable and usually yields very good results: instead of determining the exact number of times $n_i$ that each experiment will be selected, the optimization is done over the fractions $w_i=n_i/N\in[0,1]$, which reduces the problem to the maximization of a concave function over a convex set (this is the theory of *approximate optimal designs*). For the common case, in which the number $N$ of experiments to perform is large and $N>s$ (where $s$ is the number of available experiments), this approach is justified by a result of Pukelsheim and Rieder [@PR92], who give a rounding procedure to transform an optimal approximate design ${\boldsymbol{w^*}}$ into an $N-$replicated design ${\boldsymbol{n}}=(n_1,\ldots,n_s)$ which approximates the optimum of the Kiefer’s $\Phi_p-$optimal design problem within a factor $1-\frac{s}{N}$. This problem describes an *underinstrumented situation*, in which a small number $N<s$ of experiments should be selected. In this case, the combinatorial aspects of Problem  become crucial. A similar problem was studied by Song, Qiu and Zhang [@SQZ06], who proposed to use a greedy algorithm to approximate the solution of Problem . In this paper, we give an approximation bound which justifies this approach. 
Another question addressed in this manuscript is whether it is appropriate to take roundings of (continuous) approximate designs in the underinstrumented situation (recall that this is the common approach when dealing with experimental design problems in the *overinstrumented* case, where the number $N$ of experiments is large when compared to $s$). Appendix \[sec:problemStatement\] is devoted to the application to the theory of optimal experimental designs; we explain how a statistical problem (choose which experiments to conduct in order to estimate a set of parameters) leads to the study of Problem , with a particular focus on the *underinstrumented* situation described above. For more details on the subject, the reader is referred to the monographs of Fedorov [@Fed72] and Pukelsheim [@Puk93]. Organisation and contribution of this article ---------------------------------------------   [lll]{} Algorithm & Approximation factor for Problem  & Reference\
Greedy & $1-e^{-1}$ (or $1-(1-\frac{1}{N})^N$) & \[coro:1m1se\] ([@NWF78])\
Any $N-$replicated design ${\boldsymbol{n}}$ (posterior bound) & &\
Rounding \[algo:greedyrounding\] (prior bound) & $\left\{\begin{array}{cl} \left(\frac{N}{s} \right)^{1-p} & \quad \textrm{if } \left(\frac{N}{s} \right)^{1-p} \leq \frac{1}{2-p} ; \\ 1-\frac{s}{N}(1-p)\left(\frac{1}{2-p}\right)^{\frac{2-p}{1-p}} & \quad \textrm{otherwise} \end{array} \right.$ & \[theo:factorF\]\
Apportionment rounding & $(1-\frac{s}{N})^p \qquad$ if $N\geq s$ & [@PR92]\
Algorithm & Approximation factor for Problem  & Reference\
Greedy & $1-e^{-1}$ (or $1-(1-\frac{1}{N})^N$) & \[coro:1m1se\] ([@NWF78])\
Any $N-$binary design ${\boldsymbol{n}}$ (posterior bound) & &\
Keep the $N$ largest coord. of ${\boldsymbol{w^*}}$ (prior bound) & $\left( \frac{N}{s} \right)^{1-p}$ if $p\leq 1-\frac{\ln N}{\ln s}$ & \[theo:factorFbin\]\
Algorithm & Approximation factor for Problem  & Reference\
Adapted Greedy & $1-e^{-\beta}\simeq 0.35$ (where $e^\beta=2-\beta$) & \[rem:budg\] ([@Wol82])\
Greedy+triples enumeration & $1-e^{-1}$ & \[rem:budg\] ([@Svi04])\
Any $B-$budgeted design ${\boldsymbol{n}}$ (posterior bound) & &\

: Summary of the approximation bounds obtained in this paper, as well as the bound of Pukelsheim and Rieder [@PR92]. The column “Reference” indicates the number of the theorem, proposition or remark where the bound is proved (a citation in parenthesis means a direct application of a result of the cited paper, which is possible thanks to the submodularity of $\varphi_p$ proved in Corollary \[coro:SubmodExample\]). In the table, *posterior* denotes a bound which depends on the continuous solution ${\boldsymbol{w^*}}$ of the relaxed problem, while a *prior bound* depends only on the parameters of the problem. \[tab:factors\]

The objective of this article is to study some approximation algorithms for the class of problems ${}_{p \in [0,1]}$.
Several results presented in this article were already announced in the companion papers [@BGSagnol08Rio; @BGSagnol10ENDM], without the proofs. This paper provides all the proofs of the results of [@BGSagnol10ENDM] and gives new results for the rounding algorithms. We shall now present the contribution and the organisation of this article. In Section \[sec:submod\], we establish a matrix inequality (Proposition \[prop:ineqPropos\]) which shows that a class of spectral functions is submodular (Corollary \[coro:Submod\]). As a particular case of the latter result, the objective function of Problem  is submodular for all $p\in[0,1]$. The submodularity of this class of spectral functions is an original contribution of this article for $0<p<1$, however a particular case of this result was announced –without a proof– in the companion paper on the telecom application [@BGSagnol08Rio]. In the limit case $p=0$, we obtain two functions which were already known to be submodular (the rank and the log of determinant of a sum of matrices). Due to a celebrated result of Nemhauser, Wolsey and Fisher [@NWF78], the submodularity of the criterion implies that the greedy approach, which has often been used for this problem, always gives a design within $1-e^{-1}$ of the optimum (Theorem \[coro:1m1se\]). We point out that the submodularity of the determinant criterion was noticed earlier in the optimal experimental design literature, but under an alternative form [@RS89]: Robertazzi and Schwartz showed that the determinant of the inverse of a sum of matrices is supermodular, and they used it to write an algorithm for the construction of approximate designs (i.e. without integer variables) which is based on the accelerated greedy algorithm of Minoux [@Min78]. In contrast, the originality of the present paper is to show that a whole class of criteria satisfies the submodularity property, and to study the consequences in terms of approximability of a combinatorial optimization problem. 
In Section \[sec:rounding\], we investigate the legitimacy of using rounding algorithms to construct a $N-$replicated design ${\boldsymbol{n}}=(n_1,\ldots,n_s)\in\mathbb{N}^s$ or a $N$-binary design ${\boldsymbol{n}}\in\{0,1\}^s$ from an optimal approximate design ${\boldsymbol{w^*}}$, i.e. a solution of a continuous relaxation of Problem . We establish an inequality (Propositions \[prop:boundW\] and \[prop:boundW\_n\]) which bounds from below the approximation ratio of any integer design, by a function which depends on the continuous solution ${\boldsymbol{w^*}}$. Interestingly, this inequality generalizes a classical result from the theory of optimal designs (the upper bound on the weights of a D-optimal design [@Puk80; @HT09] is a particular case ($p=0$) of Proposition \[prop:boundW\]). The proof of this result is presented in Appendix \[sec:proofIneq\] ; it relies on matrix inequalities and several properties of the differentiation of a scalar function applied to symmetric matrices. Then we point out that the latter lower bound can be maximized by an incremental algorithm which is well known in the resource allocation community (Algorithm \[algo:greedyrounding\]), and we derive approximation bounds for Problems  and  which do not depend on ${\boldsymbol{w^*}}$ (Theorems \[theo:factorFbin\] and \[theo:factorF\]). For the problem with replicated designs , the approximation factor is an increasing function of $p$ which tends to $1$ as $p\to1$. In many cases, the approximation guarantee for designs obtained by rounding is better than the greedy approximation factor $1-e^{-1}$. We have summarized in Table \[tab:factors\] the approximation results proved in this paper (this table also includes another known approximability result for Problem , the *efficient apportionment rounding* of Pukelsheim and Rieder [@PR92]). 
Submodularity and Greedy approach {#sec:submod} ================================= In this section, we study the greedy algorithm for solving Problems  and  through the submodularity of $\varphi_p$. We first recall a result presented in [@BGSagnol08Rio], which states that the *rank optimization* problem is NP-hard, by a reduction from the *Maximum Coverage* problem. It follows that for all positive $\varepsilon$, there is no polynomial-time algorithm which approximates  by a factor of $1-\frac{1}{e}+\varepsilon$ unless $P=NP$ (this has been proved by Feige for the Maximum Coverage problem [@Fei98]). Nevertheless, we show that this bound is tight, and that the greedy algorithm always attains it. To this end, we show that a class of spectral functions (which includes the objective function of Problem ) is *nondecreasing submodular*. The maximization of submodular functions over a matroid has been extensively studied [@NWF78; @CC84; @CCPV07; @Von08; @KST09], and we shall use known approximability results. To study its approximability, we can think of Problem  as the maximization of a set function $\varphi_p':2^E \mapsto \mathbb{R}^+$. To this end, note that each design ${\boldsymbol{n}}$ can be seen as a subset of $E$, where $E$ is a pool which contains $N$ copies of each experiment (this allows us to deal with replicated designs, i.e. with experiments that are conducted several times; if replication is not allowed (Problem ), we simply set $E:=[s]$). Now, if $S$ is a subset of $E$ corresponding to the design ${\boldsymbol{n}}$, we define $\varphi_p'(S):=\varphi_p({\boldsymbol{n}})$. In the sequel, we identify the set function $\varphi_p'$ with $\varphi_p$ (i.e., we omit the *prime*). We also point out that multiplicative approximation factors for the $\Phi_p-$optimal problem cannot be considered when $p\leq0$, since the criterion is identically $0$ as long as the information matrix is singular.
Indeed, for $p\leq0$, the instances of the $\Phi_p$-optimal problem in which no feasible design lets $M_F({\boldsymbol{n}})$ be of full rank have an optimal value of $0$. For all the other instances, any polynomial-time algorithm with a positive approximation factor would necessarily return a design of full rank. Provided that $P\neq NP$, this would contradict the NP-hardness of *Set-Cover* (it is easy to see that *Set Cover* reduces to the problem of deciding whether there exists a set $S$ of cardinality $N$ such that $\sum_{i\in S} M_i$ has full rank for some diagonal matrices $M_i$, by a similar argument to the one given in the first paragraph of this article). Hence, we investigate approximation algorithms only in the case $p \in [0,1]$. A class of submodular spectral functions ---------------------------------------- In this section, we are going to show that a class of spectral functions is submodular. We recall that a real valued function $F:2^E \rightarrow \mathbb{R}$, defined on every subset of $E$, is called nondecreasing if for all subsets $I$ and $J$ of $E$, $I \subseteq J$ implies $F(I)\leq F(J)$. We also give the definition of a *submodular* function: A real valued set function $F$ : $2^E \longrightarrow \mathbb{R}$ is *submodular* if it satisfies the following condition: $$F(I)+F(J)\geq F(I\cup J)+F(I\cap J) \quad \textrm{for all}\quad I,J\subseteq E.$$ We next recall the definition of operator monotone functions. The latter are real valued functions applied to hermitian matrices: if $A=U\operatorname{Diag}(\lambda_1,\ldots,\lambda_m)U^*$ is an $m\times m$ hermitian matrix (where $U$ is unitary and $U^*$ is the conjugate transpose of $U$), the matrix $f(A)$ is defined as $U\operatorname{Diag}(f(\lambda_1),\ldots,f(\lambda_m))U^*$. A real valued function $f$ is *operator monotone* on ${\mathbb{R}}_+$ (resp. ${\mathbb{R}}_+^*$) if for every pair of positive semidefinite (resp.
positive definite) matrices $A$ and $B$, $$A\preceq B \Longrightarrow f(A)\preceq f(B).$$ We say that $f$ is *operator antitone* if $-f$ is operator monotone. The next proposition is a matrix inequality of independent interest; it will be useful to show that $\varphi_p$ is submodular. Interestingly, it can be seen as an extension of the Ando-Zhan Theorem [@AZ99], which reads as follows: *Let $A$, $B$ be positive semidefinite matrices. For any unitarily invariant norm ${|\!|\!|}\cdot {|\!|\!|}$, and for every non-negative operator monotone function $f$ on $[0,\infty)$, $${|\!|\!|}f(A+B) {|\!|\!|}\leq {|\!|\!|}f(A)+f(B) {|\!|\!|}.$$* Kosem [@Kos06] asked whether it is possible to extend this inequality as follows: $${|\!|\!|}f(A+B+C) {|\!|\!|}\leq {|\!|\!|}f(A+B)+f(B+C)-f(C) {|\!|\!|},$$ and gave a counterexample involving the trace norm and the function $f(x)=\frac{x}{x+1}$. However, we show in the next proposition that the previous inequality holds for the trace norm and every primitive $f$ of an operator antitone function (in particular, for $f(x)=x^p,\ p\in]0,1]$). Note that the previous inequality does not hold for every unitarily invariant norm, even for $f(x)=x^p$: it is easy to find counterexamples with the spectral radius norm. \[prop:ineqPropos\] Let $f$ be a real function defined on ${\mathbb{R}}_+$ and differentiable on ${\mathbb{R}}_+^*$. If $f'$ is operator antitone on ${\mathbb{R}}_+^*$, then for all triples $(X,Y,Z)$ of $m\times m$ positive semidefinite matrices, $$\begin{aligned} \mathrm{trace}\ f(X+Y+Z)+ \mathrm{trace}\ f(Z) \leq \mathrm{trace}\ f(X+Z) + \mathrm{trace}\ f(Y+Z). \label{ineqxyz}\end{aligned}$$ Since the eigenvalues of a matrix are continuous functions of its entries, and since $\mathbb{S}_m^{++}$ is dense in $\mathbb{S}_m^{+}$, it suffices to establish the inequality when $X$, $Y$, and $Z$ are positive definite. Let $X$ be an arbitrary positive definite matrix.
We consider the map: $$\begin{aligned} \psi : \mathbb{S}_m^{+} & \longrightarrow \mathbb{R} \nonumber \\ T & \longmapsto \mathrm{trace}\ f(X+T) - \mathrm{trace}\ f(T). \nonumber\end{aligned}$$ The inequality to be proved can be rewritten as $$\psi(Y+Z)\leq \psi(Z).$$ We will prove this by showing that $\psi$ is nonincreasing with respect to the Löwner ordering in the direction generated by any positive semidefinite matrix. To this end, we compute the Frechet derivative of $\psi$ at $T \in \mathbb{S}_m^{++}$ in the direction of an arbitrary matrix $H \in \mathbb{S}_m^{+}$. By definition, $$D\psi(T)(H)=\lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \big( \psi(T+\epsilon H)-\psi(T) \big). \label{Dfrechet}$$ When $f$ is an analytic function, $X \longmapsto \mathrm{trace}\ f(X)$ is Frechet-differentiable, and an explicit form of the derivative is known (see [@HP95; @JB06]): $D\big(\mathrm{trace}\ f(A)\big)(B)=\mathrm{trace} \big(f'(A)B\big)$. Since $f'$ is operator antitone on ${\mathbb{R}}_+^*$, a famous result of Löwner [@Low34] tells us (in particular) that $f'$ is analytic at all points of the positive real axis, and the same holds for $f$. Provided that the matrix $T$ is positive definite (and hence $X+T$), we have $$D\psi(T)(H)=\mathrm{trace} \Big(\ \big(f'(X+T)-f'(T)\big) H \Big).$$ By antitonicity of $f'$ we know that the matrix $W=f'(X+T)-f'(T)$ is negative semidefinite. For a matrix $H\succeq0$, we have therefore: $$\begin{aligned} D\psi(T)(H) & ={\operatorname{trace}}\ (WH) \leq 0.\end{aligned}$$ Consider now $h(s):=\psi(sY+Z)$. For all $s \in [0,1]$, we have $$h'(s)=D\psi(sY+Z)(Y)\leq 0,$$ and so, $h(1)=\psi(Y+Z)\leq h(0)=\psi(Z)$, from which the desired inequality follows. \[coro:Submod\] Let $M_1,\ldots,M_s$ be $m\times m$ positive semidefinite matrices. 
If $f$ satisfies the assumptions of Proposition \[prop:ineqPropos\], then the set function $F: 2^{[s]} \rightarrow \mathbb{R}$ defined by $$\forall I \subset [s],\ F(I)={\operatorname{trace}}\ f(\sum_{i\in I} M_i),$$ is submodular. Let $I,J \subseteq 2^{[s]}$. We define $$\begin{aligned} X=\sum_{i \in I\setminus J} M_{i},\ Y=\sum_{i\in J\setminus I} M_{i},\ Z=\sum_{i\in I\cap J} M_{i}. \nonumber\end{aligned}$$ It is easy to check that $$\begin{aligned} F(I)&=\textrm{trace}\ f(X+Z), \nonumber\\ F(J)&=\textrm{trace}\ f(Y+Z), \nonumber\\ F(I\cap J)&=\textrm{trace}\ f(Z), \nonumber\\ F(I\cup J)&=\textrm{trace}\ f(X+Y+Z). \nonumber\end{aligned}$$ Hence, Proposition \[prop:ineqPropos\] proves the submodularity of $F$. A consequence of the previous result is that the objective function of Problem  is submodular. In the limit case $p\to 0^+$, we find two well-known submodular functions: \[coro:SubmodExample\] Let $M_1,...,M_s$ be $m\times m$ positive semidefinite matrices. - $\forall p\in]0,1], I \mapsto {\operatorname{trace}}(\sum_{i\in I}M_i)^p$ is submodular. - $I \mapsto {\operatorname{rank}}(\sum_{i\in I}M_i)$ is submodular. If moreover every $M_i$ is positive definite, then: - $I \mapsto \log\det (\sum_{i\in I}M_i)$ is submodular. It is known that $x\mapsto x^q$ is operator antitone on $\mathbb{R}_+^*$ for all $q\in[-1,0[$. Therefore, the derivative of the function $x\mapsto x^p$ (which is $px^{p-1}$), is operator antitone on $\mathbb{R}_+^*$ for all $p\in]0,1[$. This proves the point $(i)$ for $p\neq 1$. The case $p=1$ is trivial, by linearity of the trace. The submodularity of the rank $(ii)$ and of $\log\det$ $(iii)$ are classic. Interestingly, they are obtained as the limit case of $(i)$ as $p\rightarrow0^+$. (For $\log\det$, we must consider the second term in the asymptotic development of $X \mapsto {\operatorname{trace}}\ X^p$ as $p$ tends to $0^+$, cf. Equation ). 
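The trace inequality of Proposition \[prop:ineqPropos\] for $f(x)=x^p$ is easy to probe numerically; the following randomized check (our own sketch, with random positive semidefinite matrices) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(0)

def tr_pow(A, p):
    """trace(A^p) for a symmetric PSD matrix A, with the convention 0^p = 0."""
    lam = np.linalg.eigvalsh(A)
    return float(sum(l**p for l in lam if l > 1e-12))

# Randomized check of: trace f(X+Y+Z) + trace f(Z) <= trace f(X+Z) + trace f(Y+Z)
for _ in range(1000):
    p = rng.uniform(0.01, 1.0)
    X, Y, Z = (G @ G.T for G in (rng.standard_normal((4, 4)) for _ in range(3)))
    lhs = tr_pow(X + Y + Z, p) + tr_pow(Z, p)
    rhs = tr_pow(X + Z, p) + tr_pow(Y + Z, p)
    assert lhs <= rhs + 1e-9
print("inequality held on all random trials")
```

As in the proof of Corollary \[coro:Submod\], taking $X$, $Y$, $Z$ to be the sums of matrices over $I\setminus J$, $J\setminus I$ and $I\cap J$ turns this inequality into the submodularity of $\varphi_p$.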
Greedy approximation -------------------- We next present some consequences of the submodularity of $\varphi_p$ for the approximability of Problem . Note that the results of this section hold in particular for $p=0$, and hence for the *rank maximization* problem . They also hold for $E=[s]$, i.e. for Problem . We recall that the principle of the greedy algorithm is to start from $\mathcal{G}_0=\emptyset$ and to construct sequentially the sets $$\mathcal{G}_{k+1}:=\mathcal{G}_{k} \cup \displaystyle{\operatorname{argmax}_{i \in E\setminus\mathcal{G}_k}}\ \varphi_p(\mathcal{G}_k \cup \{i\}),$$ until $k=N$. \[coro:1m1se\] Let $p\in[0,1]$. The greedy algorithm always yields a solution within a factor $1-\frac{1}{e}$ of the optimum of Problem . We know from Corollary \[coro:SubmodExample\] that for all $p\in[0,1]$, $\varphi_p$ is submodular ($p=0$ corresponding to the rank maximization problem). In addition, the function $\varphi_p$ is nondecreasing, because $X \longrightarrow X^p$ is a matrix monotone function for $p \in [0,1]$ (see e.g. [@Zhan02]) and $\varphi_p(\emptyset)=0$. Nemhauser, Wolsey and Fisher [@NWF78] proved the result of this theorem for any nondecreasing submodular function $f$ satisfying $f(\emptyset)=0$ which is maximized over a uniform matroid. Moreover, when the maximal number of matrices which can be selected is $N$, this approximability ratio can be improved to $1-\big(1-1/N\big)^N.$ One can obtain a better bound by considering the *total curvature* of a given instance, which is defined by: $$c=\max_{i \in [s]}\quad 1-\frac{\varphi_p\big(E\big)-\varphi_p\big(E\setminus\{i\}\big)}{\varphi_p\big(\{i\}\big)} \in [0,1].$$ Conforti and Cornuejols [@CC84] proved that the greedy algorithm always achieves a factor $\frac{1}{c}\big(1-(1-\frac{c}{N})^N \big)$ for the maximization of an arbitrary nondecreasing submodular function with total curvature $c$.
In particular, since $\varphi_1$ is additive, it follows that the total curvature for $p=1$ is $c=0$, yielding an approximation factor of $1$: $$\lim_{c \rightarrow 0^+} \frac{1}{c}\big(1-(1-\frac{c}{N})^N \big)=1.$$ As a consequence, the greedy algorithm always gives the optimal solution of the problem. Note that Problem $(P_1)$ is nothing but a *knapsack* problem, for which it is well known that the greedy algorithm is optimal if each available item has the same weight. However, it is not possible to give an upper bound on the total curvature $c$ for other values of $p \in [0,1[$, and $c$ has to be computed for each instance. \[rem:budg\] The problem of maximizing a nondecreasing submodular function subject to a budget constraint of the form $\sum_i c_i n_i \leq B$, where $c_i\geq 0$ is the cost for selecting the element $i$ and $B$ is the total allowed budget, has been studied by several authors. Wolsey presented an adapted greedy algorithm [@Wol82] with a proven approximation guarantee of $1-e^{-\beta}\simeq0.35$, where $\beta$ is the unique root of the equation $e^x=2-x$. More recently, Sviridenko [@Svi04] showed that the budgeted submodular maximization problem can still be approximated within a factor $1-e^{-1}$ in polynomial time, using an algorithm which combines the greedy approach with a partial enumeration of every solution of cardinality $3$. We have attained so far an approximation factor of $1-e^{-1}$ for all $p\in [0,1[$, while we have a guarantee of optimality of the greedy algorithm for $p=1$. This leaves a feeling of mathematical dissatisfaction, since intuitively the problem should be easy when $p$ is very close to $1$. In the next section we remedy this problem, by giving a rounding algorithm with an approximation factor $F(p)$ which depends on $p$, and such that $p\mapsto F(p)$ is continuous, nondecreasing and $\lim_{p\to 1} F(p)=1$.
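For completeness, the greedy scheme analysed above can be sketched in a few lines (a minimal illustration with diagonal matrices of our own choosing, not the authors' implementation); with diagonal $M_i$ and small $p$ it reproduces greedy max-coverage:

```python
import numpy as np

def greedy_design(M_list, N, p):
    """Greedy for the binary problem: start from the empty set and repeatedly
    add the index that maximizes phi_p (a sketch, not the authors' code)."""
    def phi(S):
        if not S:
            return 0.0
        lam = np.linalg.eigvalsh(sum(M_list[i] for i in S))
        return float(sum(l**p for l in lam if l > 1e-12))
    S = []
    for _ in range(N):
        rest = [i for i in range(len(M_list)) if i not in S]
        S.append(max(rest, key=lambda i: phi(S + [i])))
    return S, phi(S)

# Diagonal matrices encode coverage sets; greedy picks a covering pair.
M = [np.diag([1., 1., 0., 0.]), np.diag([1., 0., 0., 0.]),
     np.diag([0., 0., 1., 1.])]
S, val = greedy_design(M, N=2, p=0.01)
print(S, val)  # the selected pair covers all four coordinates
```

Each greedy step recomputes $\varphi_p$ from scratch; the accelerated (lazy) greedy of Minoux mentioned above would avoid most of these evaluations.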
Approximation by rounding algorithms {#sec:rounding} ==================================== The optimal design problem has a natural continuous relaxation which is simply obtained by removing the integer constraint on the design variable ${\boldsymbol{n}}$, and has been extensively studied [@Atw73; @DPZ08; @Yu10a; @Sagnol09SOCP]. As mentioned in the introduction, several authors proposed to solve this continuous relaxation and to round the solution to obtain a near-optimal discrete design. While this process is well understood when $N\geq s$, we are not aware of any bound justifying this technique in the underinstrumented situation $N<s$. A continuous relaxation ----------------------- The continuous relaxation of Problem  which we consider is obtained by replacing the integer variable ${\boldsymbol{n}}\in \mathbb{N}^s$ by a continuous variable ${\boldsymbol{w}}$ in Problem : $$\max_{\substack{{\boldsymbol{w}}\ \in ({\mathbb{R}}_+)^s\\ \sum_k w_k \leq N}}\ \Phi_p(M_F({\boldsymbol{w}})) \label{Pcont}$$ Note that the criterion $\varphi_p({\boldsymbol{w}})$ is raised to the power $1/p$ in Problem  (we have $\Phi_p(M_F({\boldsymbol{w}}))= m^{-1/p}\varphi_p({\boldsymbol{w}})^{1/p}$ for $p>0$). The limit of Problem  as $p\to 0^+$ is hence the maximization of the determinant of $M_F({\boldsymbol{w}})$ (cf. Equation ). We assume without loss of generality that the matrix $M_F({\mathbf{1}})=\sum_{k=1}^s M_k$ is of full rank (where ${\mathbf{1}}$ denotes the vector of all ones). This ensures the existence of a vector ${\boldsymbol{w}}$ which is feasible for Problem , and such that $M_F({\boldsymbol{w}})$ has full rank. If this is not the case ($r^*:={\operatorname{rank}}(M_F({\mathbf{1}}))<m)$, we define instead a projected version of the continuous relaxation: Let $U \Sigma U^T$ be a singular value decomposition of $M_F({\mathbf{1}})$. We denote by $U_{r^*}$ the matrix formed with the $r^*$ leading singular vectors of $M_F({\mathbf{1}})$, i.e. the $r^*$ first columns of $U$. 
It can be seen that Problem  is equivalent to the problem with projected information matrices $\bar{M_k}:=U_{r^*}^T M_k U_{r^*}$ (see Paragraph $7.3$ in [@Puk93]). The functions $X\mapsto \log(\det(X))$ and $X\mapsto X^p$ ($p\in]0,1]$) are strictly concave on the interior of $\mathbb{S}_m^+$, so that the continuous relaxation  can be solved by interior-point techniques or multiplicative algorithms [@Atw73; @DPZ08; @Yu10a; @Sagnol09SOCP]. The strict concavity of the objective function indicates in addition that Problem  admits a unique solution if and only if $$w_1 M_1+w_2 M_2 + \ldots + w_s M_s=y_1 M_1 + y_2 M_2 + \ldots + y_s M_s \Rightarrow (w_1,\ldots,w_s)=(y_1,\ldots,y_s),$$ that is to say whenever the matrices $M_i$ are linearly independent. In this paper, we focus on the rounding techniques only, and we assume that an optimal solution ${\boldsymbol{w^*}}$ of the relaxation  is already known. In the sequel, we also denote a discrete solution of Problem  by ${\boldsymbol{n}}^*$ and a binary solution of Problem  by $S^*$. Note that we always have $\varphi_p({\boldsymbol{w^*}}) \geq \varphi_p({\boldsymbol{n}}^*)\geq \varphi_p(S^*)$. Posterior bounds ---------------- In this section, we bound from below the approximation ratio $\varphi_p({\boldsymbol{n}})/\varphi_p({\boldsymbol{w^*}})$ for an arbitrary discrete design ${\boldsymbol{n}}$, and we propose a rounding algorithm which maximizes this approximation factor. The lower bound depends on the continuous optimal variable ${\boldsymbol{w^*}}$, and hence we refer to it as a *posterior* bound. We start with a result for binary designs ($\forall i\in [s], n_i\leq 1$), which we associate with a subset $S$ of $[s]$ as in Section \[sec:submod\]. The proof relies on several matrix inequalities and technical lemmas on the directional derivative of a scalar function applied to a symmetric matrix, and is therefore presented in Appendix \[sec:proofIneq\].
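To make the multiplicative algorithms mentioned above concrete, the sketch below gives our own illustrative implementation of the classical multiplicative update for the D-optimal case ($p\to 0^+$); it is not the exact procedure of any of the cited references, the function name is ours, and the matrices in the usage example are invented toy data.

```python
import numpy as np

def d_optimal_weights(Ms, N, iters=500):
    """Multiplicative update w_i <- N * w_i * trace(M(w)^{-1} M_i) / m for the
    D-optimal continuous relaxation: maximize log det(sum_i w_i M_i)
    subject to w >= 0 and sum_i w_i = N."""
    s, m = len(Ms), Ms[0].shape[0]
    w = np.full(s, N / s)                      # feasible, full-rank starting point
    for _ in range(iters):
        M = sum(wi * Mi for wi, Mi in zip(w, Ms))
        g = np.array([np.trace(np.linalg.inv(M) @ Mi) for Mi in Ms])
        w = N * w * g / m                      # sum_i w_i g_i = m, so the sum stays N
    return w

# Toy instance (invented): theta in R^2, three diagonal information matrices.
Ms = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0]), np.diag([4.0, 0.0])]
w_star = d_optimal_weights(Ms, N=1.0)
```

On this toy instance the optimum puts no weight on the first matrix, since the third one carries four times as much information in the same direction; the update drives $w_1$ to zero geometrically while the budget constraint $\sum_i w_i = N$ is preserved at every iteration.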
\[prop:boundW\] Let $p\in[0,1]$ and ${\boldsymbol{w^*}}$ be optimal for the continuous relaxation  of Problem . Then, for any subset $S$ of $[s]$, the following inequality holds: $$\frac{1}{N} \sum_{i\in S} (w_i^*)^{1-p} \leq \frac{\varphi_p(S)}{\varphi_p({\boldsymbol{w^*}})}.$$ In this proposition and in the remainder of this article, we adopt the convention $0^0=0$. We point out that this proposition includes as a special case a result of Pukelsheim [@Puk80], already generalized by Harman and Trnovská [@HT09], who obtained: $$\frac{w_i^*}{N} \leq \frac{{\operatorname{rank}}M_i}{m},$$ i.e. the inequality of Proposition \[prop:boundW\] for $p=0$ and a singleton $S=\{i\}$. However, the proof is completely different in our case. Note that there is no constraint of the form $w_i\leq 1$ in the continuous relaxation , although the previous proposition relates to binary designs $S\subseteq[s]$. Proposition \[prop:boundW\] suggests selecting the $N$ matrices with the largest coordinates $w_i^*$ to obtain a candidate $S$ for optimality of the binary problem . We will give in the next section a *prior bound* (i.e., one which does not depend on ${\boldsymbol{w^*}}$) for the efficiency of this rounded design. We can also extend the previous proposition to the case of replicated designs ${\boldsymbol{n}} \in \mathbb{N}^s$ (note that the following proposition does not require the design ${\boldsymbol{n}}$ to satisfy $\sum_i n_i=N$): \[prop:boundW\_n\] Let $p\in[0,1]$ and ${\boldsymbol{w^*}}$ be optimal for the continuous relaxation  of Problem .
Then, for any design ${\boldsymbol{n}}\in\mathbb{N}^s$, the following inequality holds: $$\frac{1}{N} \sum_{i\in[s]} n_i^p (w_i^*)^{1-p} \leq \frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{w^*}})}.$$ We consider the problem in which the matrix $M_i$ is replicated $n_i$ times: $$\forall i\in[s],\ \forall k\in[n_i], M_{i,k}=M_i.$$ Since ${\boldsymbol{w^*}}$ is optimal for Problem , it is clear that $(w_{i,k})_{(i,k) \in \cup_{j\in[s]} \{j\} \times [n_j]}$ is optimal for the problem with replicated matrices if $$\forall i \in [s], \sum_{k \in [n_i]} w_{i,k} = w_i^*, \label{constraintalloc}$$ i.e. $w_{i,k}$ is the part of $w_i^*$ allocated to the $k{{}^{\mathrm{th}}}$ copy of the matrix $M_i$. For such a vector, Proposition \[prop:boundW\] shows that $$\frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{w^*}})} \geq \frac{1}{N} \sum_{i=1}^s \sum_{k=1}^{n_i} w_{i,k}^{1-p}.$$ Finally, it is easy to see (by concavity) that the latter lower bound is maximized with respect to the constraints of Equation  if $\forall i\in[s], \forall k\in [n_i],\ w_{i,k} = \frac{w_i^*}{n_i}$: $$\frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{w^*}})} \geq \frac{1}{N} \sum_{i=1}^s \sum_{k=1}^{n_i} \left(\frac{w_i^*}{n_i}\right)^{1-p} =\frac{1}{N} \sum_{i=1}^s n_i^p (w_i^*)^{1-p}.$$ We next give a simple rounding algorithm which finds the feasible design ${\boldsymbol{n}}$ maximizing the lower bound of Proposition \[prop:boundW\_n\]: $$\max_{\substack{{\boldsymbol{n}} \in \mathbb{N}^s \\ \sum n_i = N }}\quad \sum_{j \in [s]} n_j^p\ w_j^{1-p}. \label{probaposteriori}$$ The latter maximization problem is in fact a *resource allocation problem with a convex separable objective*, and the incremental algorithm which we give below is well known in the resource allocation community (see e.g. [@IK88]). **Input:** A nonnegative vector ${\boldsymbol{w}} \in \mathbb{R}^s$ such that $\sum_{i=1}^s w_i =N\in \mathbb{N}\setminus\{0\}$.
Sort the coordinates of ${\boldsymbol{w}}$; we assume wlog that $w_1\geq w_2\geq\ldots\geq w_s$;

${\boldsymbol{n}} \leftarrow [1,0,\ldots,0]\in\mathbb{N}^s$;

**repeat** $N-1$ times: select an index $i_{max} \in \operatorname{argmax}_{i\in[s]} \big((n_i+1)^p- n_i^p\big)\ w_i^{1-p}$ and set $n_{i_{max}} \leftarrow n_{i_{max}}+1$;

**return:** an $N$-replicated design ${\boldsymbol{n}}$ which maximizes $\sum_{i=1}^s n_i^p w_i^{1-p}$.

If ${\boldsymbol{w}}$ is sorted ($w_1\geq w_2\geq\ldots\geq w_s$), then the solution of Problem  clearly satisfies $n_1 \geq n_2 \geq \ldots \geq n_s$. Consequently, it is not necessary to test every index $i \in [s]$ to compute the $\operatorname{argmax}$ in Algorithm \[algo:greedyrounding\]. Instead, one only needs to compute the increments $\big((n_i+1)^p- n_i^p\big)\ w_i^{1-p}$ for the $i\in [s]$ such that $i=1$ or $n_i+1 \leq n_{i-1}$. We shall now give a posterior bound for the budgeted problem . We only provide a sketch of the proof, since the reasoning is the same as for Propositions \[prop:boundW\] and \[prop:boundW\_n\]. We also point out that the approximation bound provided in the next proposition can be maximized over the set of $B-$budgeted designs, thanks to a dynamic programming algorithm which we do not detail here (see [@MM76]). \[prop:boundW\_bdg\] Let $p\in[0,1]$ and ${\boldsymbol{w^*}}$ be optimal for the continuous relaxation $$\max_{{\boldsymbol{w}}\in\mathbb{R}^s} \left\{\Phi_p\big(M_F({\boldsymbol{w}})\big):\ {\boldsymbol{w}}\geq{\boldsymbol{0}},\ \sum_{i\in[s]} c_i w_i \leq B\right\} \label{Ppbdgcont}$$ of Problem . Then, for any design ${\boldsymbol{n}}\in\mathbb{N}^s$, the following inequality holds: $$\frac{1}{B} \sum_{i\in[s]} c_i n_i^p (w_i^*)^{1-p} \leq \frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{w^*}})}.$$ First note that after the change of variable $z_i:= N B^{-1} c_i w_i$, the continuous relaxation  can be rewritten under the standard form , where the matrix $M_i$ is replaced by .
Hence, we know from Proposition \[prop:KKT\_cond\] that the optimality conditions of Problem  are: $$\forall i \in [s],\quad Bc_i^{-1} {\operatorname{trace}}(M_F({\boldsymbol{w^*}})^{p-1} M_i) \leq \varphi_p\big({\boldsymbol{w^*}}\big),$$ with equality if $w_i^*>0$. Then, we can apply exactly the same reasoning as in the proof of Proposition \[prop:boundW\], to show that $$\forall S\subset[s],\quad \frac{1}{B} \sum_{i\in S} c_i (w_i^*)^{1-p} \leq \frac{\varphi_p(S)}{\varphi_p({\boldsymbol{w^*}})}.$$ The only change is that the optimality conditions must be multiplied by a factor proportional to $c_i (w_i^*)^{1-p}$ (instead of $(w_i^*)^{1-p}$ as in Equation ). Finally, we can apply the same arguments as in the proof of Proposition \[prop:boundW\_n\] to obtain the inequality of this proposition. Prior bounds ------------ In this section, we derive *prior bounds* for the solution obtained by rounding the continuous solution of Problem , i.e. approximation bounds which depend only on the parameters $p$, $N$ and $s$ of Problems  and . We first need to state one technical lemma. \[lemma:minsummax\] Let ${\boldsymbol{w}} \in \mathbb{R}^s$ be a nonnegative vector summing to $r\leq s$, $r\in \mathbb{N}$, and $p$ be an arbitrary real in the interval $[0,1]$. Assume without loss of generality that the coordinates of ${\boldsymbol{w}}$ are sorted, i.e. $w_1 \geq \ldots \geq w_s \geq 0$. If one of the following two conditions holds: $$\begin{aligned} (i)&\quad \forall i\in[s],\ w_i\leq 1 ; \\ (ii)&\quad p \leq 1 - \frac{\ln r}{\ln s},\end{aligned}$$ then the following inequality holds: $$\frac{1}{r} \sum_{i=1}^r w_i^{1-p} \geq \left(\frac{r}{s}\right)^{1-p}.$$ We start by showing the lemma under the condition $(i)$. To this end, we consider the minimization problem $$\min_{{\boldsymbol{w}}} \{\sum_{i=1}^r w_i^{1-p}:\ \sum_{i=1}^s w_i = r ;\ 1\geq w_1 \geq \ldots \geq w_s\geq 0\}.
\label{minimaxcoord3}$$ Our first claim is that the optimum is necessarily attained by a vector of the form ${\boldsymbol{w}} = [ u + \alpha_1,\ldots,u+\alpha_r,u,\ldots,u]^T,$ where $\alpha_1,\ldots,\alpha_r\geq 0$, i.e. the $s-r$ coordinates of ${\boldsymbol{w}}$ which are not involved in the objective function are equal. To see this, assume *ad absurdum* that ${\boldsymbol{w}}$ is optimal for Problem , with $w_i>w_{i+1}$ for an index $i>r$. Define $k$ as the smallest integer such that $w_1=w_2=\ldots=w_k>w_{k+1}$. Then, ${\boldsymbol{e_i}} - 1/k \sum_{j\in[k]} {\boldsymbol{e_j}}$ is a feasible direction along which the objective criterion $\sum_{i=1}^r w_i^{1-p}$ is decreasing, a contradiction. Problem  is hence equivalent to: $$\min_{u,{\boldsymbol{\alpha}}} \{\sum_{i=1}^r (u+\alpha_i)^{1-p}:\ \sum_{i=1}^r \alpha_i = r-su ;\ 0\leq u;\ 0 \leq \alpha_i \leq 1-u\ (\forall i\in[r]) \}. \label{minimaxcoord4}$$ It is known that the objective criterion of Problem  is Schur-concave, as a symmetric separable sum of concave functions (we refer the reader to the book of Marshall and Olkin [@MO79] for details about the theory of majorization and Schur-concavity). This tells us that for all $u\in[0,\frac{r}{s}]$, the minimum with respect to ${\boldsymbol{\alpha}}$ is attained by $${\boldsymbol{\alpha}}=[\underbrace{1-u,\ldots,1-u}_{k\ \textrm{times}},r-su-k(1-u),0,\ldots,0]^T,$$ where $k=\lfloor \frac{r-su}{1-u} \rfloor$ (for a given $u$, this vector majorizes all the vectors of the feasible set).
Problem  can thus be reduced to the scalar minimization problem $$\min_{u\in[0,\frac{r}{s}]}\ \left\lfloor \frac{r-su}{1-u} \right\rfloor + \Big( u + r -su - \left\lfloor \frac{r-su}{1-u} \right\rfloor (1-u) \Big)^{1-p} +\big(r-\left\lfloor \frac{r-su}{1-u} \right\rfloor-1\big) u^{1-p}.$$ It is not difficult to see that this function is piecewise concave, on the $r$ intervals of the form $u\in\left[\frac{r-(k+1)}{s-(k+1)}, \frac{r-k}{s-k} \right],\ k\in\{0,\ldots,r-1\}$, corresponding to the domains where $k=\lfloor \frac{r-su}{1-u} \rfloor$ is constant. It follows that the minimum is attained for a $u$ of the form $\frac{r-k}{s-k}$, where $k \in \{0,1,\ldots,r\}$, and the problem reduces to $$\min_{k\in \{0,1,\ldots,r\}}\ k + (r-k) \left(\frac{r-k}{s-k}\right)^{1-p}.$$ Finally, one can check that the objective function of the latter problem is nondecreasing with respect to $k$, so that the minimum is attained for $k=0$ (which corresponds to the uniform weight vector ${\boldsymbol{w}}=[r/s,\ldots,r/s]^T$). This completes the first part of the proof. The proof of the lemma for the condition $(ii)$ is similar. This time, we consider the minimization problem $$\min_{{\boldsymbol{w}}} \{\sum_{i=1}^r w_i^{1-p}:\ \sum_{i=1}^s w_i = r ;\ w_1 \geq \ldots \geq w_s\geq 0\}. \label{minimaxcoord}$$ Again, the optimum is attained by a vector of the form ${\boldsymbol{w}} = [ u + \alpha_1,\ldots,u+\alpha_r,u,\ldots,u]^T,$ which reduces the problem to: $$\min_{u,{\boldsymbol{\alpha}}} \{\sum_{i=1}^r (u+\alpha_i)^{1-p}:\ \sum_{i=1}^r \alpha_i = r-su ;\ u,\alpha_1,\ldots,\alpha_r\geq0 \}. \label{minimaxcoord2}$$ For a fixed $u$, the Schur-concavity of the objective function indicates that the minimum is attained for ${\boldsymbol{\alpha}}=[r-su,0,\ldots,0]^T$. Finally, Problem  reduces to the scalar minimization problem $$\min_{u\in[0,\frac{r}{s}]}\ \big( u+(r-su) \big)^{1-p} + (r-1) u^{1-p},$$ where the optimum is always attained for $u=0$ or $u=r/s$ by concavity.
It is now easy to see that the inequality of the lemma is satisfied when the latter minimum is attained for $u=r/s$, i.e. if $r(\frac{r}{s})^{1-p} \leq r^{1-p}$, which is equivalent to the condition $(ii)$ of the lemma. As a direct consequence of this lemma, we obtain a *prior* approximation bound for Problem  when $p$ is in a neighborhood of $0$. \[theo:factorFbin\] Let $p\in[0,1]$, $N\leq s$ and ${\boldsymbol{w^*}}$ be a solution of the continuous optimal design problem . Let $S$ be the $N$-binary design obtained by selecting the $N$ largest coordinates of ${\boldsymbol{w^*}}$. If $p\leq 1-\frac{\ln N}{\ln s}$, then we have $$\frac{\varphi_p(S)}{\varphi_p(S^*)} \geq \frac{\varphi_p(S)}{\varphi_p({\boldsymbol{w^*}})} \geq \Big(\frac{N}{s}\Big)^{1-p}.$$ This is straightforward if we combine the result of Proposition \[prop:boundW\] with that of Lemma \[lemma:minsummax\] for $r=N$ and condition $(ii)$. In the next theorem, we give an approximation factor for the design provided by Algorithm \[algo:greedyrounding\]. This factor $F$ is plotted as a function of $p$ and the ratio $\frac{N}{s}$ in Figure \[fig:F3d\]. For every value of $\frac{N}{s}$, this theorem shows that there is a continuously increasing difficulty from the easy case ($p=1$, where $F=1$) to the most degenerate problem ($p=0$, where $F=\min(\frac{N}{s},1-\frac{s}{4N})$). \[theo:factorF\] Let $p\in[0,1]$, ${\boldsymbol{w^*}}$ be a solution of the continuous optimal design problem  and ${\boldsymbol{n}}$ be the vector returned by Algorithm \[algo:greedyrounding\] for the input ${\boldsymbol{w}}={\boldsymbol{w^*}}$.
Then, we have $$\frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{n^*}})} \geq \frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{w^*}})} \geq F,$$ where $F$ is defined by: $$F= \left\{\begin{array}{cll} \left(\frac{N}{s} \right)^{1-p} & \qquad \textrm{if } \left(\frac{N}{s} \right)^{1-p} \leq \frac{1}{2-p} &\quad (\textrm{in particular, if } \frac{N}{s}\leq e^{-1}); \\ 1-\frac{s}{N}(1-p)\left(\frac{1}{2-p}\right)^{\frac{2-p}{1-p}} & \qquad \textrm{otherwise} & \quad (\textrm{in particular, if } \frac{N}{s} \geq \frac{1}{2}). \end{array} \right.$$

![Approximation factor $F$ of Theorem \[theo:factorF\]: (a) as a function of $p$ and the ratio $\frac{N}{s}$ (log scale); (b) as a function of $p$ for selected values of $\frac{N}{s}$. \[fig:F3d\]](FF_semilog_grayscale-crop.pdf "fig:"){width="50.00000%"} ![Approximation factor $F$ of Theorem \[theo:factorF\]: (a) as a function of $p$ and the ratio $\frac{N}{s}$ (log scale); (b) as a function of $p$ for selected values of $\frac{N}{s}$. \[fig:F3d\]](FF_SEVERALNs_grayscale2-crop.pdf "fig:"){width="45.00000%"}

For all $i\in [s]$ we denote by $f_i:=w_i^*-\lfloor w_i^* \rfloor$ the fractional part of $w_i^*$, and we assume without loss of generality that these numbers are sorted, i.e. $f_1\geq f_2 \geq \ldots \geq f_s$. We will prove the theorem through a simple (suboptimal) rounding ${\boldsymbol{\bar{n}}}$, which we define as follows: $$\bar{n}_i = \left\{ \begin{array}{cl} \lfloor w_i^* \rfloor +1 & \quad \textrm{if } i \leq N - \sum_{j \in [s]} \lfloor w_j^* \rfloor; \\ \lfloor w_i^* \rfloor & \quad \textrm{otherwise.} \end{array} \right.$$ We know from Proposition \[prop:boundW\_n\], and from the fact that Algorithm \[algo:greedyrounding\] solves Problem , that the integer vector ${\boldsymbol{n}}$ satisfies $$N \frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{w^*}})} \geq \sum_{i=1}^s n_i^p (w_i^*)^{1-p} \geq \sum_{i=1}^s \bar{n}_i^p (w_i^*)^{1-p}. \label{ineqnbar}$$ We shall now bound from below the latter expression: if $\bar{n}_i =\lfloor w_i^* \rfloor$ and $\lfloor w_i^* \rfloor \neq 0$, then $$\bar{n}_i^p (w_i^*)^{1-p} = \lfloor w_i^* \rfloor \left( \frac{w_i^*}{\lfloor w_i^* \rfloor} \right)^{1-p} \geq \lfloor w_i^* \rfloor. \label{ineqicase1}$$ Note that Inequality  also holds if $\bar{n}_i =\lfloor w_i^* \rfloor=0$.
If $\bar{n}_i =\lfloor w_i^* \rfloor + 1$, we write $$\bar{n}_i^p (w_i^*)^{1-p} = \underbrace{\left(\frac{w_i^*}{\bar{n}_i}\right)^{1-p} + \ldots + \left(\frac{w_i^*}{\bar{n}_i}\right)^{1-p}}_{\bar{n}_i\textrm{ terms}} \geq \underbrace{1^{1-p} + \ldots +1^{1-p}}_{ \lfloor w_i^* \rfloor\textrm{ terms}} + f_i^{1-p} =\lfloor w_i^* \rfloor + f_i^{1-p}, \label{ineqicase2}$$ where the inequality is a consequence of the concavity of ${\boldsymbol{w}} \mapsto \sum_j w_j^{1-p}$. Combining Inequalities  and  yields $$\sum_{i=1}^s \bar{n}_i^p (w_i^*)^{1-p} \geq \sum_{i=1}^s \lfloor w_i^* \rfloor + \sum_{j=1}^{N-\bar{N}} f_j^{1-p}= \bar{N} + \sum_{j=1}^{N-\bar{N}} f_j^{1-p},$$ where we have set $\bar{N}:=\sum_{i=1}^s \lfloor w_i^* \rfloor \in \{\max(N-s+1,0),\ldots,N\}$. Since the vector ${\boldsymbol{f}}=[f_1,\ldots,f_s]$ sums to $N-\bar{N}$, we can apply the result of Lemma \[lemma:minsummax\] with condition $(i)$, with $r=N-\bar{N}$, and we obtain $$\sum_{i=1}^s \bar{n}_i^p (w_i^*)^{1-p} \geq \bar{N} + (N-\bar{N}) \left( \frac{N-\bar{N}}{s} \right)^{1-p} \geq \min_{u\in [0,N]} u + (N-u) \left( \frac{N-u}{s} \right)^{1-p}.$$ We will compute this lower bound in closed form, which will provide the approximation bound of the theorem. To do this, we define the function $g: u \mapsto u + (N-u) \left( \frac{N-u}{s} \right)^{1-p}$ on $]-\infty,N]$, and we observe (by differentiating) that $g$ is decreasing on $]-\infty,u^*]$ and increasing on $[u^*,N[$, where $$u^* = N - s \left(\frac{1}{2-p}\right)^{\frac{1}{1-p}}.$$ Hence, only two cases can appear: either $u^*\leq 0$, and the minimum of $g$ over $[0,N]$ is attained for $u=0$; or $u^*\geq 0$, and $g_{|[0,N]}$ attains its minimum at $u=u^*$. Finally, the bound given in this theorem is either $N^{-1} g(0)$ or $N^{-1} g(u^*)$, depending on the sign of $u^*$.
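The incremental rounding of Algorithm \[algo:greedyrounding\] and the factor $F$ of Theorem \[theo:factorF\] are easy to check numerically. The sketch below is our own illustrative implementation (the function names are ours, and the input vector is invented toy data summing to $N$), using the convention $0^0=0$ adopted earlier:

```python
def zpow(x, q):
    """x**q with the paper's convention 0**0 = 0."""
    return 0.0 if x == 0 and q == 0 else x ** q

def incremental_rounding(w, N, p):
    """Greedily allocate N units so as to maximize sum_i n_i^p * w_i^(1-p)."""
    s = len(w)
    n = [0] * s
    n[max(range(s), key=lambda i: w[i])] = 1     # one unit on the largest w_i
    for _ in range(N - 1):                       # N - 1 remaining increments
        gain = [(zpow(n[i] + 1, p) - zpow(n[i], p)) * zpow(w[i], 1 - p)
                for i in range(s)]
        imax = max(range(s), key=lambda i: gain[i])
        n[imax] += 1
    return n

def prior_factor(p, N, s):
    """Closed-form factor F of Theorem [theo:factorF] (p < 1; F = 1 for p = 1)."""
    if p == 1:
        return 1.0
    ustar = N - s * (1.0 / (2 - p)) ** (1.0 / (1 - p))
    g = lambda u: u + (N - u) * ((N - u) / s) ** (1 - p)
    return g(max(ustar, 0.0)) / N

# Toy continuous optimum (invented), s = 4 and N = 5:
w, N, p = [2.5, 1.5, 1.0, 0.0], 5, 0.5
n = incremental_rounding(w, N, p)                # a design summing to N
ratio = sum(zpow(n[i], p) * zpow(w[i], 1 - p) for i in range(len(w))) / N
```

The quantity `ratio` is the certified approximation ratio of Proposition \[prop:boundW\_n\]; on any instance it must dominate the prior factor $F$, which depends only on $p$, $N$ and $s$.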
In particular, since the function $$h: p \mapsto \left(\frac{1}{2-p}\right)^{\frac{1}{1-p}}$$ is nonincreasing on the interval $[0,1]$, with $h(0)=\frac{1}{2}$ and $h(1)=e^{-1}$, we have: $$\forall p\in[0,1], \quad \frac{N}{s}\leq e^{-1} \Longrightarrow u^* \leq 0 \quad\textrm{ and }\quad \frac{N}{s}\geq \frac{1}{2} \Longrightarrow u^* \geq 0.$$ The alternative rounding ${\boldsymbol{\bar{n}}}$ is very useful to obtain the formula of Theorem \[theo:factorF\]. However, since ${\boldsymbol{\bar{n}}}$ differs in general from the design ${\boldsymbol{n}}$ returned by Algorithm \[algo:greedyrounding\], the inequality $\frac{\varphi_p({\boldsymbol{n}})}{\varphi_p({\boldsymbol{w^*}})} \geq F$ is not tight. Consider for example the situation where $p=0$ and $N=s$, which is a trivial case for the rank optimization problem : the incremental rounding algorithm always returns a design ${\boldsymbol{n}}$ such that $(w_i^*>0 \Rightarrow n_i>0)$, and hence the problem is solved to optimality (the design is of full rank). In contrast, Theorem \[theo:factorF\] only guarantees a factor $F=\frac{3}{4}$ for this class of instances. We point out that Theorem \[theo:factorF\] improves on the greedy approximation factor $1-e^{-1}$ in many situations. The gray area of Figure \[fig:bettergreedy\] shows the values of $(\frac{N}{s},p) \in {\mathbb{R}}^*_+ \times [0,1]$ for which the approximation guarantee is better with Algorithm \[algo:greedyrounding\] than with the greedy algorithm of Section \[sec:submod\]. ![In gray, values of $(\frac{N}{s},p) \in {\mathbb{R}}^*_+ \times [0,1]$ such that the factor $F$ of Theorem \[theo:factorF\] is larger than $1-e^{-1}$. \[fig:bettergreedy\]](area_rounding_better_than_greedy2.pdf){width="60.00000%"} Recall that the relevant criterion for the theory of optimal design is the *positively homogeneous* function ${\boldsymbol{w}} \mapsto \Phi_p\big(M_F({\boldsymbol{w}})\big) = m^{-1/p} \varphi_p({\boldsymbol{w}})^{1/p}$ (cf. Equation ).
Hence, if a design is within a factor $F$ of the optimum with respect to $\varphi_p$, its $\Phi_p-$efficiency is $F^{1/p}$. In the *overinstrumented* case $N>s$, Pukelsheim gives a rounding procedure with a $\Phi_p-$efficiency of $1-\frac{s}{N}$ (Chapter 12 in [@Puk93]). We have plotted in Figure \[fig:betterpuk\] the area of the domain $(\frac{s}{N},p)\in[0,1]^2$ where the approximation guarantee of Theorem \[theo:factorF\] is better. ![In gray, values of $(\frac{s}{N},p)\in[0,1]^2$ such that the factor $F$ of Theorem \[theo:factorF\] is larger than $(1-s/N)^p$. \[fig:betterpuk\]](area_rounding_better_than_pukelsheim.pdf){width="60.00000%"} Conclusion ========== This paper gives bounds on the behavior of some classical heuristics used for combinatorial problems arising in optimal experimental design. Our results can either justify or discard the use of such heuristics, depending on the settings of the instances considered. Moreover, our results confirm some facts that had been observed in the literature, namely that rounding algorithms perform better if the density of measurements is high, and that the greedy algorithm always gives a quite good solution. We illustrate these observations with two examples: In a sensor location problem, Uciński and Patan [@UP07] noticed that the pruning of a Branch and Bound algorithm was better if they activated more sensors, although this led to a much larger search space. The authors claim that this surprising result can be explained by the fact that a higher density of sensors leads to a better continuous relaxation. This is confirmed by our approximability result, which shows that the larger the number of selected experiments, the better the quality of the rounding. It is also known that the greedy algorithm generally gives very good results for the optimal design of experiments (see e.g.
[@SQZ06], where the authors explicitly chose not to implement a local search from the greedily chosen design, since the greedy algorithm already performs very well). Our $(1-1/e)-$approximability result guarantees that this algorithm indeed always behaves well. Acknowledgment ============== The author wishes to thank Stéphane Gaubert for his useful comments and advice, for suggesting to work within the framework of matrix inequalities and submodularity, and of course for his warm support. He also wants to thank Mustapha Bouhtou for the stimulating discussions which are at the origin of this work. The author expresses his gratitude to an anonymous referee of the ISCO conference – where an announcement of some of the present results was made [@BGSagnol10ENDM] – and to three referees of DAM for valuable comments and suggestions. [CCPV07]{} C.L. Atwood. Sequences converging to [D]{}-optimal designs of experiments. , 1(2):342–352, 1973. T. Ando and X. Zhan. Norm inequalities related to operator monotone functions. , 315:771–780, 1999. M. Bouhtou, S. Gaubert, and G. Sagnol. Optimization of network traffic measurement: a semidefinite programming approach. In [*Proceedings of the International Conference on Engineering Optimization (ENGOPT)*]{}, Rio De Janeiro, Brazil, 2008. 978-85-7650-152-7. M. Bouhtou, S. Gaubert, and G. Sagnol. Submodularity and randomized rounding techniques for optimal experimental design. , 36:679 – 686, March 2010. ISCO 2010 - International Symposium on Combinatorial Optimization. Hammamet, Tunisia. R. Bhatia. . Springer Verlag, 1997. M. Conforti and G. Cornuéjols. Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the [R]{}ado-[E]{}dmonds theorem. , 7(3):251–274, 1984. G. Calinescu, C. Chekuri, M. [Pál]{}, and J. [Vondrák]{}. Maximizing a submodular set function subject to a matroid constraint.
In [*Proceedings of the 12th international conference on Integer Programming and Combinatorial Optimization, IPCO*]{}, volume 4513, pages 182–196, 2007. H. Dette, A. Pepelyshev, and A. Zhigljavsky. Improving updating rules in multiplicative algorithms for computing [D]{}-optimal designs. , 53(2):312 – 320, 2008. V.V. Fedorov. . New York : Academic Press, 1972. Translated and edited by W. J. Studden and E. M. Klimko. U. Feige. A threshold of $\operatorname{ln} n$ for approximating set cover. , 45(4):634–652, July 1998. F. Hansen and G.K. Pedersen. Perturbation formulas for traces on [C\*]{}-algebras. , 31:169–178, 1995. R. Harman and M. Trnovská. Approximate [D]{}-optimal designs of experiments on the convex hull of a finite set of information matrices. , 59(5):693–704, December 2009. T. Ibaraki and N. Katoh. . MIT Press, 1988. E. Jorswieck and H. Boche. . Now Publishers Inc., 2006. J. Kiefer. General equivalence theory for optimum designs (approximate theory). , 2(5):849–879, 1974. J. Kiefer. Optimal design: Variation in structure and performance under change of criterion. , 62(2):277–288, 1975. T. Kosem. Inequalities between $\vert f(a+b) \vert$ and $\vert f(a)+f(b) \vert$. , 418:153–160, 2006. A. Kulik, H. Shachnai, and T. Tamir. Maximizing submodular set functions subject to multiple linear constraints. In [*SODA ’09: Proceedings of the Nineteenth Annual ACM -SIAM Symposium on Discrete Algorithms*]{}, pages 545–554, Philadelphia, PA, USA, 2009. K. L[ö]{}wner. Über monotone [M]{}atrixfunktionen. , 38(1):177–216, 1934. M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In J. Stoer, editor, [*Optimization Techniques*]{}, volume 7 of [ *Lecture Notes in Control and Information Sciences*]{}, pages 234–243. Springer Berlin / Heidelberg, 1978. 10.1007/BFb0006528. T.L. Morin and R.E. Marsten. An algorithm for nonlinear knapsack problems. , pages 1147–1158, 1976. A.W. Marshall and I. Olkin. . Academic Press, 1979. G.L. Nemhauser, L.A.
Wolsey, and M.L. Fisher. An analysis of approximations for maximizing submodular set functions. , 14:265–294, 1978. F. Pukelsheim and S. Rieder. Efficient rounding of approximate designs. , pages 763–770, 1992. F. Pukelsheim and G.P.H. Styan. Convexity and monotonicity properties of dispersion matrices of estimators in linear models. , 10(2):145–149, 1983. F. Pukelsheim. On linear regression designs which maximize information. , 4:339–364, 1980. F. Pukelsheim. . Wiley, 1993. T.G. Robertazzi and S.C. Schwartz. . , 10:341, 1989. G. Sagnol. Computing optimal designs of multiresponse experiments reduces to second-order cone programming. , 141(5):1684 – 1708, 2011. G. Sagnol, S. Gaubert, and M. Bouhtou. Optimal monitoring on large networks by successive c-optimal designs. In [*22nd International Teletraffic Congress (ITC22), Amsterdam, The Netherlands*]{}, September 2010. H.H. Song, L. Qiu, and Y. Zhang. Netquest: A flexible framework for large-scale network measurement. In [*ACM SIGMETRICS’06*]{}, St Malo, France, 2006. M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. , 32(1):41–43, 2004. D. Uciński and M. Patan. D-optimal design of a monitoring network for parameter estimation of distributed systems. , 39(2):291–322, 2007. J. Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In [*ACM Symposium on Theory of Computing, STOC’08*]{}, pages 67–74, 2008. L.A. Wolsey. Maximising real-valued submodular functions: Primal and dual heuristics for location problems. , pages 410–425, 1982. Y. Yu. Monotonic convergence of a general algorithm for computing optimal designs. , 38(3):1593–1606, 2010. X. Zhan. . Springer, 2002.
[**Appendix**]{} From optimal design of statistical experiments to Problem (Pp) {#sec:problemStatement} ============================================================== The classical linear model -------------------------- We denote vectors by bold-face lowercase letters and we make use of the classical notation $[s]:=\{1,\ldots,s\}$ (and we define $[0]:=\emptyset$). The set of nonnegative (resp. positive) real numbers is denoted by ${\mathbb{R}}_+$ (resp. ${\mathbb{R}}_+^*$), and the set of $m\times m$ symmetric (resp. symmetric positive semidefinite, symmetric positive definite) matrices is denoted by $\mathbb{S}_m$ (resp. $\mathbb{S}_m^+$, $\mathbb{S}_m^{++}$). The expected value of a random variable $X$ is denoted by ${\mathbb{E}}[X]$. We denote by ${\boldsymbol{\theta}} \in {\mathbb{R}}^m$ the vector of the parameters that we want to estimate. In accordance with the classical linear model, we assume that the experimenter has a collection of $s$ experiments at his disposal, each one providing a (multidimensional) observation which is a linear combination of the parameters, up to a noise on the measurement whose covariance matrix is known and positive definite. In other words, for each experiment $i \in [s]$, we have $${\boldsymbol{y_i}}=A_{i} {\boldsymbol{\theta}} + {\boldsymbol{\epsilon_i}},\qquad {\mathbb{E}}[{\boldsymbol{\epsilon_i}}]={\boldsymbol{0}}, \qquad {\mathbb{E}}[{\boldsymbol{\epsilon_i}}{\boldsymbol{\epsilon_i}}^T]=\Sigma_i, \label{mesEq}$$ where ${\boldsymbol{y_i}}$ is the vector of measurements of size $l_i$, $A_{i}$ is a $(l_i \times m)-$matrix, and $\Sigma_i\in \mathbb{S}_{l_i}^{++}$ is a known covariance matrix. We will assume that the noises have unit variance for the sake of simplicity: $\Sigma_i=I$. We may always reduce to this case by a left multiplication of the observation equation  by $\Sigma_i^{-1/2}$. The errors on the measurements are assumed to be mutually independent, i.e.
$$i \neq j \Longrightarrow {\mathbb{E}}[{\boldsymbol{\epsilon_i}} {\boldsymbol{\epsilon_j}}^T]=0.$$ As explained in the introduction, the aim of experimental design theory is to choose how many times each experiment will be performed so as to maximize the accuracy of the estimation of ${\boldsymbol{\theta}}$, with the constraint that $N$ experiments may be conducted. We therefore define the integer-valued *design* variable ${\boldsymbol{n}}\in\mathbb{N}^s$, where $n_k$ indicates how many times the experiment $k$ is performed. We denote by $i_k \in [s]$ the index of the $k{{}^{\mathrm{th}}}$ conducted experiment (the order in which we consider the measurements has no importance), so that the aggregated vector of observation reads: $${\boldsymbol{y}}=\mathcal{A}\ {\boldsymbol{\theta}} + {\boldsymbol{\epsilon}},$$ $$\textrm{ where } {\boldsymbol{y}}=\left[ \begin{array}{c} {\boldsymbol{y_{i_1}}} \\ \vdots \\ {\boldsymbol{y_{i_N}}} \end{array} \right],\qquad \mathcal{A}= \left[ \begin{array}{c} A_{i_1} \\ \vdots \\ A_{i_N} \end{array} \right],\qquad {\mathbb{E}}[{\boldsymbol{\epsilon}}]={\boldsymbol{0}},\quad \mathrm{and}\quad {\mathbb{E}}[{\boldsymbol{\epsilon}} {\boldsymbol{\epsilon}}^T]=I.$$ Now, assume that we have enough measurements, so that $\mathcal{A}$ is of full rank. A common result in the field of statistics, known as the *Gauss-Markov* theorem, states that the best linear unbiased estimator of ${\boldsymbol{\theta}}$ is given by a pseudo inverse formula. Its variance is given below: $$\begin{aligned} \hat{{\boldsymbol{\theta}}} =\Big(\mathcal{A}^T \mathcal{A} \Big)^{-1} \mathcal{A}^T {\boldsymbol{y}}. \label{bestEstimator} \\ \textrm{Var}(\hat{{\boldsymbol{\theta}}}) =(\mathcal{A}^T \mathcal{A} )^{-1}. \label{bestVar}\end{aligned}$$ We denote the inverse of the covariance matrix  by $M_F({\boldsymbol{n}})$, because in the Gaussian case it coincides with the Fisher information matrix of the measurements. 
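As a toy numerical check (all matrices and numbers below are invented for illustration), the Gauss-Markov estimator and the identity $M_F({\boldsymbol{n}})=\mathcal{A}^T\mathcal{A}=\sum_i n_i A_i^T A_i$ can be reproduced in a few lines:

```python
import numpy as np

# Toy instance (invented): theta in R^2, three available experiments.
A = [np.array([[1.0, 0.0]]),        # experiment 1 observes theta_1
     np.array([[0.0, 1.0]]),        # experiment 2 observes theta_2
     np.array([[1.0, 1.0]])]        # experiment 3 observes theta_1 + theta_2
n = [2, 1, 1]                       # design: experiment i is run n_i times
theta = np.array([1.0, -2.0])

# Stack the n_i copies of each A_i; noiseless observations for the check.
A_stack = np.vstack([Ai for Ai, ni in zip(A, n) for _ in range(ni)])
y = A_stack @ theta

# Fisher information matrix: A_stack^T A_stack = sum_i n_i A_i^T A_i.
M_F = sum(ni * Ai.T @ Ai for Ai, ni in zip(A, n))

# Gauss-Markov estimator; exact here since the noise is zero.
theta_hat = np.linalg.solve(M_F, A_stack.T @ y)
```

With zero noise the estimator recovers ${\boldsymbol{\theta}}$ exactly; with noisy observations its covariance is $M_F({\boldsymbol{n}})^{-1}$, which is what the design criteria aim to shrink.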
Note that it can be decomposed as the sum of the information matrices of the selected experiments: $$\begin{aligned} M_F({\boldsymbol{n}}) &=& \mathcal{A}^T \mathcal{A} \nonumber \\ &=&\sum_{k=1}^N A_{i_k}^T A_{i_k} \nonumber \\ &=&\sum_{i=1}^s n_i A_{i}^T A_{i}. \label{fisher} \end{aligned}$$ The classical experimental design approach consists in choosing the design ${\boldsymbol{n}}$ in order to make the variance of the estimator (\[bestEstimator\]) *as small as possible*. The interpretation is straightforward: with the assumption that the noise ${\boldsymbol{\epsilon}}$ is normally distributed, for every probability level $\alpha$, the estimator $\hat{{\boldsymbol{\theta}}}$ lies in the confidence ellipsoid centered at ${\boldsymbol{\theta}}$ and defined by the following inequality: $$\begin{aligned} \label{e-confidence} ({\boldsymbol{\theta}}-\hat{{\boldsymbol{\theta}}})^T Q({\boldsymbol{\theta}}-\hat{{\boldsymbol{\theta}}}) \leq \kappa_\alpha,\end{aligned}$$ where $\kappa_\alpha$ depends on the specified probability level, and $Q=M_F({\boldsymbol{n}})$ is the inverse of the covariance matrix $\textrm{Var}(\hat{{\boldsymbol{\theta}}})$. We would like to make these confidence ellipsoids *as small as possible*, in order to reduce the uncertainty on the estimation of ${\boldsymbol{\theta}}$. To this end, we can express the inclusion of ellipsoids in terms of matrix inequalities. The space of symmetric matrices is equipped with the [*Löwner ordering*]{}, which is defined by $$\forall B,C \in \mathbb{S}_m, \qquad B \succeq C \Longleftrightarrow B-C \in \mathbb{S}_m^+.$$ Let ${\boldsymbol{n}}$ and ${\boldsymbol{n'}}$ denote two designs such that the matrices $M_F({\boldsymbol{n}})$ and $M_F({\boldsymbol{n'}})$ are invertible. 
One can readily check that for any value of the probability level $\alpha$, the confidence ellipsoid  corresponding to $Q=M_F({\boldsymbol{n}})$ is included in the confidence ellipsoid corresponding to $Q'=M_F({\boldsymbol{n'}})$ if and only if $M_F({\boldsymbol{n}})\succeq M_F({\boldsymbol{n'}})$. Hence, we will prefer the design ${\boldsymbol{n}}$ to the design ${\boldsymbol{n'}}$ if the latter inequality is satisfied. Statement of the optimization problem ------------------------------------- Since the Löwner ordering on symmetric matrices is only a partial ordering, the problem consisting in maximizing $M_F({\boldsymbol{n}})$ is ill-posed. So we will rather maximize a scalar *information function* of the Fisher matrix, i.e. a function mapping $\mathbb{S}_m^{+}$ onto the real line, and which satisfies natural properties, such as positive homogeneity, monotonicity with respect to Löwner ordering, and concavity. For a more detailed description of the information functions, the reader is referred to the book of Pukelsheim [@Puk93], who makes use of the class of matrix means $\Phi_p$, as first proposed by Kiefer [@Kief75]. These functions are defined like the $L_p$-norm of the vector of eigenvalues of the Fisher information matrix, but for $p \in [-\infty,1]$: for a symmetric positive definite matrix $M\in\mathbb{S}_m^{++}$, $\Phi_p$ is defined by $$\Phi_p(M)=\left\{ \begin{array}{ll} \lambda_{\mathrm{min}}(M) & \textrm{for $p=-\infty$ ;} \\ (\frac{1}{m}\ \mathrm{trace}\ M^p)^{\frac{1}{p}} & \textrm{for $p \in\ ]-\infty,1],\ p \neq 0$;} \\ (\operatorname{det}(M))^{\frac{1}{m}} & \textrm{for $p=0$,} \end{array} \right. \label{phip}$$ where we have used the extended definition of powers of matrices $M^p$ for arbitrary real parameters $p$: if $\lambda_1,\ldots,\lambda_m$ are the eigenvalues of $M$ counted with multiplicities, $\mathrm{trace}\ M^p=\sum_{j=1}^m \lambda_j^p$. 
For singular positive semi-definite matrices $M \in \mathbb{S}_m^{+}$, $\Phi_p$ is defined by continuity: $$\Phi_p(M)=\left\{ \begin{array}{ll} 0 & \textrm{for $p \in [-\infty,0]$ ;} \\ (\frac{1}{m}\ \mathrm{trace}\ M^p)^{\frac{1}{p}} & \textrm{for $p \in\ ]0,1]$. } \end{array} \right. \label{phising}$$ The class of functions $\Phi_p$ includes as special cases the classical optimality criteria used in the experimental design literature, namely $E-$optimality for $p=-\infty$ (smallest eigenvalue of $M_F({\boldsymbol{n}})$), $D-$optimality for $p = 0$ (determinant of the information matrix), $A-$optimality for $p=-1$ (harmonic average of the eigenvalues), and $T-$optimality for $p=1$ (trace). The case $p=0$ (D-optimal design) admits a simple geometric interpretation: the volume of the confidence ellipsoid  is given by $C_m\kappa_\alpha^{m/2}\det(Q)^{-1/2}$ where $C_m>0$ is a constant depending only on the dimension. Hence, maximizing $\Phi_0(M_F({\boldsymbol{n}}))$ is the same as minimizing the volume of every confidence ellipsoid. We can finally give a mathematical formulation to the problem of selecting $N$ experiments to conduct among the set $[s]$: $$\begin{aligned} \label{optdesProblem} \max_{n_i \in \mathbb{N}\ (i=1,\ldots,s)} &\quad \Phi_p \Big( \sum_{i=1}^s n_i A_i^T A_i \Big) \\ \operatorname{s.t.} &\quad \sum_i n_i \leq N, \nonumber\end{aligned}$$ The underinstrumented situation ------------------------------- We note that the problem of maximizing the information matrix $M_F({\boldsymbol{n}})$ with respect to the Löwner ordering remains meaningful even when $M_F({\boldsymbol{n}})$ is not of full rank (the interpretation of $M_F({\boldsymbol{n}})$ as *the inverse of the covariance matrix of the best linear unbiased estimator* vanishes, but $M_F({\boldsymbol{n}})$ is still the Fisher information matrix of the experiments if the measurement errors are Gaussian). 
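The information matrix of Eq. (\[fisher\]) and the matrix means $\Phi_p$ are straightforward to evaluate numerically through the eigenvalues. The following is a minimal sketch in Python with NumPy; the function names and the tiny matrices in the test are ours, introduced only for illustration, and are not part of the original formulation.

```python
import numpy as np

def fisher_matrix(n, A_list):
    """Information matrix M_F(n) = sum_i n_i A_i^T A_i of a design n."""
    m = A_list[0].shape[1]
    M = np.zeros((m, m))
    for n_i, A_i in zip(n, A_list):
        M += n_i * A_i.T @ A_i
    return M

def phi_p(M, p, tol=1e-12):
    """Kiefer's matrix mean Phi_p of a symmetric PSD matrix M, p in [-inf, 1]."""
    lam = np.linalg.eigvalsh(M)          # eigenvalues in ascending order
    lam = np.clip(lam, 0.0, None)        # guard against round-off negatives
    m = len(lam)
    if p == -np.inf:
        return lam[0]                    # E-criterion: smallest eigenvalue
    if lam[0] <= tol:                    # singular case, defined by continuity
        return (np.sum(lam ** p) / m) ** (1 / p) if p > 0 else 0.0
    if p == 0:
        return np.prod(lam) ** (1 / m)   # D-criterion: det(M)^(1/m)
    return (np.mean(lam ** p)) ** (1 / p)
```

For instance, with $M=\operatorname{diag}(1,4)$ one recovers the four classical criteria: $\Phi_1(M)=2.5$ (T), $\Phi_0(M)=2$ (D), $\Phi_{-1}(M)=1.6$ (A) and $\Phi_{-\infty}(M)=1$ (E).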
This case does arise in *underinstrumented situations*, in which some constraints may not allow one to conduct a number of experiments which is sufficient to infer all the parameters. An interesting and natural idea to find an optimal underinstrumented design is to choose the design which maximizes the rank of the observation matrix $\mathcal{A}$, or equivalently of $M_F({\boldsymbol{n}})=\mathcal{A}^T \mathcal{A}$. The *rank maximization* is a nice combinatorial problem, where we are looking for a subset of matrices whose sum is of maximal rank: $$\begin{aligned} \max_{{\boldsymbol{n}} \in \mathbb{N}^s} &\quad {\operatorname{rank}}\Big( \sum_i n_i A_{i}^T A_{i} \Big) \\ \operatorname{s.t.}\ & \qquad \sum_i n_i \leq N. \nonumber\end{aligned}$$ When every feasible information matrix is singular, Equation  indicates that the maximization of $\Phi_p(M_F({\boldsymbol{n}}))$ can be considered only for nonnegative values of $p$. Then, the next proposition shows that $\Phi_p$ can be seen as a deformation of the rank criterion for $p \in ]0,1]$. First notice that when $p>0$, the maximization of $\Phi_p(M_F({\boldsymbol{n}}))$ is equivalent to: $$\begin{aligned} \displaystyle{\max_{{\boldsymbol{n}} \in \mathbb{N}^s}} & \quad \varphi_p\big({\boldsymbol{n}}\big):=\ \textrm{trace}\: \Big(\displaystyle{\sum_i} n_i A_i^T A_i \Big)^p \\ {\operatorname{s.t.}}& \quad \quad\displaystyle{\sum_i}\ n_i \leq N. \nonumber\end{aligned}$$ If we set $M_i=A_i^T A_i$, we obtain the problems  and  which were presented in the first lines of this article. \[limp0\] For every positive semidefinite matrix $M \in \mathbb{S}_m^+,$ $$\lim_{p\rightarrow0^+} \mathrm{trace}\ M^p = \mathrm{rank}\ M.$$ Let $\lambda_1,\ldots, \lambda_r$ denote the positive eigenvalues of $M$, counted with multiplicities, so that $r$ is the rank of $M$. 
We have the first order expansion as $p \to 0^+$: $$\begin{aligned} \mathrm{trace}\ M^p = \sum_{k=1}^r \lambda_k^p = r + p\ \log (\prod_{k=1}^r \lambda_k) + \mathcal{O}(p^2) \label{expansion1}\end{aligned}$$ Consequently, $\mathrm{trace}\ M^0$ will stand for $\mathrm{rank}(M)$ in the sequel and the rank maximization problem  is the limit of problem  as $p \to 0^+$. \[corop0\] If $p>0$ is small enough, then every design ${\boldsymbol{{\boldsymbol{n^*}}}}$ which is a solution of Problem  maximizes the rank of $M_F({\boldsymbol{n}})$. Moreover, among the designs which maximize this rank, ${\boldsymbol{{\boldsymbol{n^*}}}}$ maximizes the product of nonzero eigenvalues of $M_F({\boldsymbol{n}})$. Since there is only a finite number of designs, it follows from  that for $p>0$ small enough, every design which maximizes $\varphi_p$ must maximize in the lexicographical order first the rank of $M_F({\boldsymbol{n}})$, and then the pseudo-determinant $\prod_{\{k:\lambda_k>0\}} \lambda_k$. Proof of Proposition \[prop:boundW\] {#sec:proofIneq} ==================================== The proof of Proposition \[prop:boundW\] relies on several lemmas on the directional derivative of a scalar function applied to a symmetric matrix, which we state next. First recall that if $f$ is differentiable on ${\mathbb{R}}_+^*$, then $f$ is Fréchet differentiable over $\mathbb{S}_m^{++}$, and for $M\in\mathbb{S}_m^{++}$, $H\in\mathbb{S}_m$, we denote by $D\!f(M)(H)$ its directional derivative at $M$ in the direction of $H$ (see Equation ). \[lemma:permDeriv\] If $f$ is continuously differentiable on ${\mathbb{R}_{+}^{*}}$, i.e. $f\in\mathcal{C}^1({\mathbb{R}_{+}^{*}})$, $M\in{\mathbb{S}_m^{++}}, A,B \in \mathbb{S}_m$, then $${\operatorname{trace}}(A\ D\!f(M)(B))={\operatorname{trace}}(B\ D\!f(M)(A)).$$ Let $M=Q D Q^T$ be an eigenvalue decomposition of $M$. It is known (see e.g. 
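The limit of Proposition \[limp0\] and the first order expansion  are easy to observe numerically. A small sketch (names and the example matrix are ours):

```python
import numpy as np

def trace_power(M, p, tol=1e-12):
    """trace M^p summed over the positive eigenvalues; trace M^0 := rank M."""
    lam = np.linalg.eigvalsh(M)
    lam = lam[lam > tol]          # keep only the positive eigenvalues
    return float(np.sum(lam ** p))

# rank-2 example: positive eigenvalues 2 and 5, so trace M^p -> 2 as p -> 0+
M = np.diag([0.0, 2.0, 5.0])
```

For small $p$, `trace_power(M, p)` is close to $r + p\log(\prod_k \lambda_k) = 2 + p\log 10$, in agreement with the expansion.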
[@Bha97]) that $D\!f(M)(H)$ can be expressed as $Q (f^{[1]}(D) \odot Q^T H Q) Q^T$, where $f^{[1]}(D)$ is a symmetric matrix called the *first divided difference* of $f$ at $D$ and $\odot$ denotes the Hadamard (elementwise) product of matrices. With little work, the latter derivative may be rewritten as: $$D\!f(M)(H)=\sum_{i,j} f^{[1]}_{ij} {\boldsymbol{q_i}}{\boldsymbol{q_i}}^T H {\boldsymbol{q_j}}{\boldsymbol{q_j}}^T,$$ where ${\boldsymbol{q_k}}$ is the $k{{}^{\mathrm{th}}}$ eigenvector of $M$ (i.e., the $k{{}^{\mathrm{th}}}$ column of $Q$) and $f^{[1]}_{ij}$ denotes the $(i,j)-$element of $f^{[1]}(D)$. We can now conclude: $$\begin{aligned} {\operatorname{trace}}(A\ D\!f(M)(B)) & = \sum_{i,j} f^{[1]}_{ij} {\operatorname{trace}}( A{\boldsymbol{q_i}}{\boldsymbol{q_i}}^T B {\boldsymbol{q_j}}{\boldsymbol{q_j}}^T) \\ & = \sum_{i,j} f^{[1]}_{ji} {\operatorname{trace}}( B{\boldsymbol{q_j}}{\boldsymbol{q_j}}^T A {\boldsymbol{q_i}}{\boldsymbol{q_i}}^T) \\ & = {\operatorname{trace}}(B\ D\!f(M)(A)).\end{aligned}$$ We next show that when $f$ is antitone, the mapping $X\mapsto D\!f(M)(X)$ is nonincreasing with respect to the Löwner ordering. \[lemma:nonincDeriv\] If $f$ is differentiable and antitone on ${\mathbb{R}_{+}^{*}}$, then for all $A,B$ in $\mathbb{S}_m$, $$A\preceq B \Longrightarrow D\!f(M)(A)\succeq D\!f(M)(B).$$ The lemma trivially follows from the definition of the directional derivative: $$D\!f(M)(A) = \lim_{\epsilon \to 0^+} \frac{1}{\epsilon} \big(f(M+\epsilon A) - f(M) \big)$$ and the fact that $A\preceq B$ implies $M+\epsilon A\preceq M+\epsilon B$ for all $\epsilon>0$. \[lemma:commDeriv\] Let $f$ be differentiable on ${\mathbb{R}_{+}^{*}}$, $M\in {\mathbb{S}_m^{++}}$, $A \in \mathbb{S}_m$. If $A$ and $M$ commute, then $$D\!f(M)(A)=f'(M)A \in \mathbb{S}_m,$$ where $f'$ denotes the (scalar) derivative of $f$. 
Since $A$ and $M$ commute, we can diagonalize them simultaneously: $$M=Q {\operatorname{Diag}}({\boldsymbol{\lambda}}) Q^T,\quad A=Q {\operatorname{Diag}}({\boldsymbol{\mu}}) Q^T.$$ Thus, it is clear from the definition of the directional derivative that $$D\!f(M)(A)=Q\ D\!f\big({\operatorname{Diag}}({\boldsymbol{\lambda}})\big)\big({\operatorname{Diag}}({\boldsymbol{\mu}})\big)\ Q^T.$$ By reasoning entry-wise on the diagonal matrices, we find: $$D\!f\big({\operatorname{Diag}}({\boldsymbol{\lambda}})\big)\big({\operatorname{Diag}}({\boldsymbol{\mu}})\big)= {\operatorname{Diag}}\big(f'(\lambda_1) \mu_1,\ldots,f'(\lambda_m) \mu_m\big) ={\operatorname{Diag}}\big(f'({\boldsymbol{\lambda}})\big){\operatorname{Diag}}({\boldsymbol{\mu}})$$ The equality of the lemma is finally obtained by writing: $$D\!f(M)(A)=Q {\operatorname{Diag}}\big(f'({\boldsymbol{\lambda}})\big){\operatorname{Diag}}({\boldsymbol{\mu}}) Q^T= Q {\operatorname{Diag}}\big(f'({\boldsymbol{\lambda}})\big)Q^T Q{\operatorname{Diag}}({\boldsymbol{\mu}}) Q^T=f'(M) A.$$ Note that the matrix $f'(M) A$ is indeed symmetric, because $f'(M)$ and $A$ commute. Before we give the proof of the main result, we recall an important result from the theory of optimal experimental designs, which characterizes the optimum of Problem . \[prop:KKT\_cond\] Let $p \in [0,1]$. A design ${\boldsymbol{w^*}}$ is optimal for Problem  if and only if: $$\forall i \in [s],\quad N {\operatorname{trace}}(M_F({\boldsymbol{w^*}})^{p-1} M_i) \leq \varphi_p\big({\boldsymbol{w^*}}\big).$$ Moreover, the latter inequalities become equalities for all $i$ such that $w_i^*>0$. For a proof of this result, see [@Kief74] or Paragraph 7.19 in [@Puk93], where the problem is studied with the normalized constraint $\sum_i w_i\leq 1$. In fact, the *general equivalence theorem* details the Karush-Kuhn-Tucker conditions of optimality of Problem . 
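The divided-difference formula quoted in the proof of Lemma \[lemma:permDeriv\] gives a direct way to check Lemmas \[lemma:permDeriv\] and \[lemma:commDeriv\] numerically. The sketch below is ours (NumPy assumed); we take $f(x)=x^{p-1}$ with $p=1/2$, the case used later in the main proof.

```python
import numpy as np

def dfrechet(f, fprime, M, H):
    """Df(M)(H) = Q (f^[1](D) o Q^T H Q) Q^T via first divided differences."""
    lam, Q = np.linalg.eigh(M)
    m = len(lam)
    F = np.empty((m, m))                  # first divided difference f^[1](D)
    for i in range(m):
        for j in range(m):
            if abs(lam[i] - lam[j]) > 1e-10:
                F[i, j] = (f(lam[i]) - f(lam[j])) / (lam[i] - lam[j])
            else:
                F[i, j] = fprime(lam[i])
    return Q @ (F * (Q.T @ H @ Q)) @ Q.T  # Hadamard product then rotate back
```

With random symmetric $A,B$ and a positive definite $M$, one can verify that $\operatorname{trace}(A\,D\!f(M)(B))=\operatorname{trace}(B\,D\!f(M)(A))$, and that $D\!f(M)(M)=f'(M)M$ as in Lemma \[lemma:commDeriv\].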
To derive them, one can use the fact that when $M_F({\boldsymbol{w}})$ is invertible, $$\frac{\partial \varphi_p({\boldsymbol{w}})}{\partial w_i}={\operatorname{trace}}(M_F({\boldsymbol{w}})^{p-1} M_i)\quad \textrm{for all}\quad p\in]0,1],$$ and $$\frac{\partial \log\det(M_F({\boldsymbol{w}}))}{\partial w_i}={\operatorname{trace}}(M_F({\boldsymbol{w}})^{-1} M_i).$$ Note that for $p\neq 1$, the proposition implicitly implies that $M_F({\boldsymbol{w^*}})$ is invertible. A proof of this fact can be found in Paragraph 7.13 of [@Puk93]. We can finally prove the main result: Let ${\boldsymbol{w^*}}$ be an optimal solution to Problem  and $S$ be a subset of $[s]$ such that $w_i^*>0$ for all $i \in S$ (the case in which $w_i^*=0$ for some index $i\in S$ will trivially follow if we adopt the convention $0^0=0$). We know from Proposition \[prop:KKT\_cond\] that $N^{-1} \varphi_p\big({\boldsymbol{w^*}}\big) = {\operatorname{trace}}(M_F({\boldsymbol{w^*}})^{p-1} M_i)$ for all $i$ in $S$. If we combine these equalities by multiplying each expression by a factor proportional to $(w_i^*)^{1-p}$, we obtain: $$\begin{aligned} \frac{1}{N} \varphi_p\big({\boldsymbol{w^*}}\big) = \sum_{i \in S} \frac{(w_i^*)^{1-p}}{\sum_{k \in S} (w_k^*)^{1-p}} {\operatorname{trace}}(M_F({\boldsymbol{w^*}})^{p-1} M_i) \label{proporKKT}\\ \Longleftrightarrow \frac{1}{N} \sum_{k \in S} (w_k^*)^{1-p} = \frac{\sum_{i\in S} (w_i^*)^{1-p} {\operatorname{trace}}(M_F({\boldsymbol{w^*}})^{p-1} M_i)}{ \varphi_p({\boldsymbol{w^*}})}. \nonumber\end{aligned}$$ We are going to show that for all ${\boldsymbol{w}}\geq{\boldsymbol{0}}$ such that $M_F({\boldsymbol{w}})$ is invertible, $\sum_{i\in S} w_i^{1-p} {\operatorname{trace}}(M_F({\boldsymbol{w}})^{p-1} M_i) \leq {\operatorname{trace}}(M_S)^p$, where $M_S:=\sum_{i\in S} M_i$, which will complete the proof. 
To do this, we introduce the function $f$ defined on the open subset of $({\mathbb{R}}_+)^s$ such that $M_F({\boldsymbol{w}})$ is invertible by: $$f({\boldsymbol{w}})=\sum_{i\in S} w_i^{1-p} {\operatorname{trace}}(M_F({\boldsymbol{w}})^{p-1} M_i)={\operatorname{trace}}\left ( \Big(\sum_{i\in S} w_i^{1-p} M_i\Big) M_F({\boldsymbol{w}})^{p-1} \right).$$ Note that $f$ satisfies the property $f(t{\boldsymbol{w}})=f({\boldsymbol{w}})$ for all positive scalar $t$; this explains why we do not have to work with normalized designs such that $\sum_i w_i = N$. Now, let ${\boldsymbol{w}}\geq{\boldsymbol{0}}$ be such that $M_F({\boldsymbol{w}})\succ 0$ and let $k$ be an index of $S$ such that $w_k=\min_{i\in S} w_i$. We are first going to show that $\frac{\partial f({\boldsymbol{w}})}{\partial w_k}\geq0$. By the rule of differentiation of a product, $$\begin{aligned} \frac{\partial f({\boldsymbol{w}})}{\partial w_k} &= {\operatorname{trace}}\left( (1-p) w_k^{-p} M_k M_F({\boldsymbol{w}})^{p-1} + \Big(\sum_{i\in S} w_i^{1-p} M_i\Big) \frac{\partial (M_F({\boldsymbol{w}})^{p-1})}{\partial w_k} \right) \nonumber\\ &={\operatorname{trace}}\left( (1-p) w_k^{-p} M_k M_F({\boldsymbol{w}})^{p-1} + \Big(\sum_{i\in S} w_i^{1-p} M_i\Big) D[x\mapsto x^{p-1}](M_F({\boldsymbol{w}}))(M_k) \right) \nonumber\\ &={\operatorname{trace}}M_k \left( (1-p) w_k^{-p} M_F({\boldsymbol{w}})^{p-1} + D[x\mapsto x^{p-1}]\big(M_F({\boldsymbol{w}})\big)\big(\sum_{i\in S} w_i^{1-p} M_i)\big) \right), \label{Mkparent}\end{aligned}$$ where the first equality is simply a rewriting of $\frac{\partial (M_F({\boldsymbol{w}})^{p-1})}{\partial w_k}$ by using a directional derivative, and the second equality follows from Lemma \[lemma:permDeriv\] applied to the function $x\mapsto x^{p-1}$. 
By linearity of the Fréchet derivative, we have: $$w_k^p\ D[x\mapsto x^{p-1}]\big(M_F({\boldsymbol{w}})\big)\big(\sum_{i\in S} w_i^{1-p} M_i\big)= D[x\mapsto x^{p-1}]\big(M_F({\boldsymbol{w}})\big)\big(\sum_{i\in S} w_i \left(\frac{w_k}{w_i}\right)^{\!\!p} M_i\big) .$$ Since $w_k\leq w_i$ for all $i\in S$, the following matrix inequality holds: $$\sum_{i\in S} w_i \left(\frac{w_k}{w_i}\right)^{\!\!p} M_i\preceq \sum_{i\in S} w_i M_i \preceq M_F({\boldsymbol{w}}).$$ By applying successively Lemma \[lemma:nonincDeriv\] ($x\mapsto x^{p-1}$ is antitone on ${\mathbb{R}_{+}^{*}}$) and Lemma \[lemma:commDeriv\] (the matrix $M_F({\boldsymbol{w}})$ commutes with itself), we obtain: $$\begin{aligned} w_k^p\ D[x\mapsto x^{p-1}]\big(M_F({\boldsymbol{w}})\big)\big(\sum_{i\in S} w_i^{1-p} M_i\big) &\succeq D[x\mapsto x^{p-1}]\big(M_F({\boldsymbol{w}})\big)\big(M_F({\boldsymbol{w}})\big) \\ & = (p-1) M_F({\boldsymbol{w}})^{p-2} M_F({\boldsymbol{w}})\\ & = (p-1) M_F({\boldsymbol{w}})^{p-1}.\end{aligned}$$ Dividing the previous matrix inequality by $w_k^p$, we find that the matrix that is inside the largest parenthesis of Equation  is positive semidefinite, from which we can conclude: $\frac{\partial f({\boldsymbol{w}})}{\partial w_k}\geq0$. Thanks to this property, we next show that $f({\boldsymbol{w}})\leq f({\boldsymbol{v}})$, where ${\boldsymbol{v}}\in \mathbb{R}^s$ is defined by $v_i=\max_{k\in S} (w_k)$ if $i \in S$ and $v_i=w_i$ otherwise. Assume without loss of generality (after a reordering of the coordinates) that $S=[s_0]$, $w_1\leq w_2 \leq \ldots \leq w_{s_0}$, and denote the vector of the remaining components of ${\boldsymbol{w}}$ by ${\boldsymbol{\bar{w}}}$ (i.e., we have ${\boldsymbol{w}}^T=[w_1,\ldots,w_{s_0},{\boldsymbol{\bar{w}}}]$ and ${\boldsymbol{v}}^T=[w_{s_0},\ldots,w_{s_0},{\boldsymbol{\bar{w}}}]$). 
The following inequalities hold: $$f({\boldsymbol{w}}) = f\left(\left[ \begin{array}{c} w_1 \\w_2\\w_3\\ \vdots\\ w_{s_0} \\ {\boldsymbol{\bar{w}}} \end{array} \right]\right) \leq f\left(\left[ \begin{array}{c} w_2 \\w_2\\w_3\\ \vdots\\ w_{s_0} \\ {\boldsymbol{\bar{w}}} \end{array} \right]\right) \leq f\left(\left[ \begin{array}{c} w_3 \\w_3\\w_3\\ \vdots\\ w_{s_0} \\ {\boldsymbol{\bar{w}}} \end{array} \right]\right) \leq \ldots \leq f\left(\left[ \begin{array}{c} w_{s_0} \\w_{s_0}\\w_{s_0}\\ \vdots\\ w_{s_0} \\ {\boldsymbol{\bar{w}}} \end{array} \right]\right) =f({\boldsymbol{v}}).$$ The first inequality holds because $\frac{\partial f({\boldsymbol{w}})}{\partial w_1}\geq0$ as long as $w_1\leq w_2$. To see that the second inequality holds, we apply the same reasoning on the function $\tilde{f}: [w_2,w_3,\ldots] \mapsto f([w_2,w_2,w_3,\ldots])$, i.e., we consider a variant of the problem where the matrices $M_1$ and $M_2$ have been replaced by a single matrix $M_1+M_2$. The following inequalities are obtained in a similar manner. Recall that we have set $M_S=\sum_{i \in S} M_i$. We have: $$M_F({\boldsymbol{v}}) = w_{s_0} M_S + \sum_{i \notin S} w_i M_i \succeq w_{s_0} M_S$$ and by isotonicity of the mapping $x \mapsto x^{1-p}$, $M_F({\boldsymbol{v}})^{1-p} \succeq (w_{s_0}\ M_S)^{1-p}$. We denote by $X^\dagger$ the Moore-Penrose inverse of $X$. It is known [@PS83] that if $M_i\in\mathbb{S}_m^+$, the function $X\mapsto {\operatorname{trace}}(X^\dagger M_i)$ is nondecreasing with respect to the Löwner ordering over the set of matrices $X$ whose range contains $M_i$. 
Hence, since $M_F({\boldsymbol{v}})\succeq M_F({\boldsymbol{w}})$ is invertible, $$\forall i\in S, \quad {\operatorname{trace}}(M_F({\boldsymbol{v}})^{p-1} M_i)={\operatorname{trace}}\left( \big( M_F({\boldsymbol{v}})^{1-p} \big)^\dagger M_i \right) \leq {\operatorname{trace}}\left( \big((w_{s_0}\ M_S)^{1-p}\big)^\dagger M_i\right)$$ and $$\begin{aligned} f({\boldsymbol{v}}) & = w_{s_0}^{1-p} \sum_{i\in S} {\operatorname{trace}}(M_F({\boldsymbol{v}})^{p-1} M_i) \\ &\leq w_{s_0}^{1-p} \sum_{i\in S} {\operatorname{trace}}\left( \big((w_{s_0}\ M_S)^{1-p}\big)^\dagger M_i\right) \\ & ={\operatorname{trace}}\big(M_S^{1-p}\big)^\dagger M_S\\ & = {\operatorname{trace}}M_S^p\end{aligned}$$ Finally, we have $f({\boldsymbol{w}})\leq f({\boldsymbol{v}}) \leq {\operatorname{trace}}M_S^p = \varphi_p(S),$ and the proof is complete. [^1]: Parts of this work were done when the author was with INRIA Saclay Île de France & CMAP, École Polytechnique, being supported by Orange Labs through the research contract CRE EB 257676 with INRIA.
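The key inequality established in the proof above, $\sum_{i\in S} w_i^{1-p}\,\operatorname{trace}(M_F({\boldsymbol{w}})^{p-1}M_i)\leq \operatorname{trace} M_S^p$, can be sanity-checked numerically. The sketch below is ours, with hypothetical random elementary information matrices $M_i$ (one of them set to the identity so that $M_F({\boldsymbol{w}})$ is guaranteed to be invertible):

```python
import numpy as np

def mat_pow(M, q):
    """M^q for a symmetric positive (semi)definite matrix via eigendecomposition."""
    lam, Q = np.linalg.eigh(M)
    lam = np.clip(lam, 0.0, None)      # guard against round-off negatives
    return Q @ np.diag(lam ** q) @ Q.T

rng = np.random.default_rng(1)
p, m = 0.6, 3

# hypothetical elementary information matrices M_i = A_i^T A_i (rank 2 each)
Ms = [X.T @ X for X in (rng.standard_normal((2, m)) for _ in range(3))]
Ms.append(np.eye(m))                   # extra "experiment" keeping M_F invertible
S = [0, 1, 2]
M_S = sum(Ms[i] for i in S)

w = rng.uniform(0.1, 2.0, size=4)      # arbitrary positive design
MF = sum(w_i * M_i for w_i, M_i in zip(w, Ms))
lhs = sum(w[i] ** (1 - p) * np.trace(mat_pow(MF, p - 1) @ Ms[i]) for i in S)
rhs = np.trace(mat_pow(M_S, p))
```

For any such ${\boldsymbol{w}}\geq{\boldsymbol{0}}$ the value `lhs` should not exceed `rhs`, in agreement with the proposition.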
--- abstract: 'We discuss the matching of the quark model to the effective mass operator of the $1/N_c$ expansion using the permutation group $S_N$. As an illustration of the general procedure we perform the matching of the Isgur-Karl model for the spectrum of the negative parity $L=1$ excited baryons. Assuming the most general two-body quark Hamiltonian, we derive two correlations among the masses and mixing angles of these states which should hold in any quark model. These correlations constrain the mixing angles and can be used to test for the presence of three-body quark forces.' author: - Dan Pirjol - 'Carlos Schat[^1]' title: 'Testing for three-body quark forces in $L=1$ excited baryons ' --- [ address=[ National Institute for Physics and Nuclear Engineering, Department of Particle Physics,\ 077125 Bucharest, Romania]{} ]{} [ address=[Department of Physics and Astronomy, Ohio University, Athens, Ohio 45701, USA\ Departamento de Física, FCEyN, Universidad de Buenos Aires, Ciudad Universitaria, Pab.1, (1428) Buenos Aires, Argentina]{} ]{} Introduction ============ Quark models provide a simple and intuitive picture of the physics of ground state baryons and their excitations [@DeRujula:1975ge; @Isgur:1977ef]. An alternative description is provided by the $1/N_c$ expansion, which is a systematic approach to the study of baryon properties [@Dashen:1993jt]. This program can be realized in terms of a quark operator expansion, which gives rise to a physical picture similar to the one of the phenomenological quark models, but is more closely connected to QCD. In this context quark models gain additional significance. The $1/N_c$ expansion has been applied both to the ground state and excited nucleons [@Goity:1996hk; @Pirjol:1997bp; @Carlson:1998vx; @Schat:2001xr; @Goity:2003ab; @Pirjol:2003ye; @Matagne:2004pm]. 
In the system of negative parity $L=1$ excited baryons this approach has yielded a number of interesting insights for the spin-flavor structure of the quark interaction. In a recent paper [@Pirjol:2007ed] we showed how to match an arbitrary quark model Hamiltonian onto the operators of the $1/N_c$ expansion, thus making the connection between these two physical pictures. This method makes use of the transformation of the states and operators under $S_N^{\rm sp-fl}$, the permutation group of $N$ objects acting on the spin-flavor degrees of freedom of the quarks. This is similar to the method discussed in Ref. [@Collins:1998ny] for $N_c=3$ in terms of $S_3^{\rm orb}$, the permutation group of three objects acting on the orbital degrees of freedom. The main result of [@Pirjol:2007ed] can be summarized as follows: consider a two-body quark Hamiltonian $V_{qq} = \sum_{i<j} O_{ij} R_{ij}$, where $O_{ij}$ acts on the spin-flavor quark degrees of freedom and $R_{ij}$ acts on the orbital degrees of freedom. Then the hadronic matrix elements of the quark Hamiltonian on a baryon state $|B\rangle$ contain only the projections $O_\alpha$ of $O_{ij}$ onto a few irreducible representations of $S_N^{\rm sp-fl}$ and can be factorized as $\langle B |V_{qq}|B\rangle = \sum_\alpha C_\alpha \langle O_\alpha\rangle$. The coefficients $C_\alpha$ are related to reduced matrix elements of the orbital operators $R_{ij}$, and are given by overlap integrals of the quark model wave functions. The matching procedure has been discussed in detail for the Isgur-Karl (IK) model in Ref. [@Galeta:2009pn], providing a simple example of this general formalism. In Refs. [@Pirjol:2008gd; @PirSch:2010a] we used the general $S_N$ approach to study the predictions of the quark model with the most general two-body quark interactions, and to obtain information about the spin-flavor structure of the quark interactions from the observed spectrum of the $L=1$ negative parity baryons. 
This talk summarizes the main ideas and emphasizes their relevance as a possible test for three-body forces in excited baryons. The mass operator of the Isgur-Karl model {#IKV} ========================================= The Isgur-Karl model is defined by the quark Hamiltonian $$\begin{aligned} {\cal H}_{IK} = {\cal H}_0 + {\cal H}_{\rm hyp} \,, \end{aligned}$$ where ${\cal H}_0$ contains the confining potential and kinetic terms of the quark fields, and is symmetric under spin and isospin. The hyperfine interaction ${\cal H}_{\rm hyp}$ is given by $$\begin{aligned} \label{HIK} {\cal H}_{\rm hyp} = A \sum_{i<j}\Big[ \frac{8\pi}{3} \vec s_i \cdot \vec s_j \delta^{(3)}(\vec r_{ij}) + \frac{1}{r_{ij}^3} (3\vec s_i \cdot \hat r_{ij} \ \vec s_j \cdot \hat r_{ij} - \vec s_i\cdot \vec s_j) \Big] \,, \end{aligned}$$ where $A$ determines the strength of the interaction, and $\vec r_{ij} = \vec r_i - \vec r_j$ is the distance between quarks $i,j$. The first term is a local spin-spin interaction, and the second describes a tensor interaction between two dipoles. This interaction Hamiltonian is an approximation to the gluon-exchange interaction, neglecting the spin-orbit terms[^2]. In the original formulation of the IK model [@Isgur:1977ef] the confining forces are harmonic. We will derive in the following the form of the mass operator without making any assumption on the shape of the confining quark forces. We refer to this more general version of the model as IK-V(r). The $L=1$ quark model states include the following SU(3) multiplets: two spin-1/2 octets $8_\frac12, 8'_\frac12$, two spin-3/2 octets $8_\frac32, 8'_\frac32$, one spin-5/2 octet $8'_\frac52$, two decuplets $10_\frac12, 10_\frac32$ and two singlets $1_\frac12, 1_\frac32$. States with the same quantum numbers mix. 
For the $J=1/2$ states we define the relevant mixing angle $\theta_{N1}$ in the nonstrange sector as $$\begin{aligned} N(1535) & = & \cos\theta_{N1} N_{1/2} + \sin\theta_{N1} N'_{1/2} \,, \\ N(1650) & = & -\sin\theta_{N1} N_{1/2} + \cos\theta_{N1} N'_{1/2} \end{aligned}$$ and similar equations for the $J=3/2$ states, which define a second mixing angle $\theta_{N3}$. We find [@Galeta:2009pn] that the most general mass operator in the IK-V(r) model depends only on three unknown orbital overlap integrals, plus an additive constant $c_0$ related to the matrix element of ${\cal H}_0$, and can be written as $$\begin{aligned} \label{IKMass} \hat M = c_0 + a S_c^2 + b L_2^{ab} \{ S_c^a\,, S_c^b\} + c L_2^{ab} \{ s_1^a\,, S_c^b\} \,,\end{aligned}$$ where the spin-flavor operators are understood to act on the state $|\Phi(SI)\rangle$ constructed as a tensor product of the core of quarks 2,3 and the ‘excited’ quark 1, as given in [@Carlson:1998vx; @Pirjol:2007ed]. The coefficients are given by $$\begin{aligned} \label{coefa} && a = \frac12 \langle R_S\rangle \,,\, \ b = \frac{1}{12} \langle Q_S\rangle - \frac16 \langle Q_{MS}\rangle \,,\, \ c = \frac16 \langle Q_S\rangle + \frac16 \langle Q_{MS}\rangle \,\,. \label{coefc}\end{aligned}$$ The reduced matrix elements $R_S,Q_S,Q_{MS}$ for the orbital part of the interaction contain the unknown spatial dependence and are defined in Refs. [@Carlson:1998vx; @Galeta:2009pn]. Evaluating the matrix elements using Tables II, III in Ref. 
[@Carlson:1998vx] we find the following explicit result for the mass matrix $$\begin{aligned} M_{1/2} &=& \left( \begin{array}{cc} c_0 + a & -\frac53 b + \frac{5}{6}c \\ -\frac53 b + \frac{5}{6}c & c_0 + 2a + \frac53(b+c)\\ \end{array} \right) \,, \\ M_{3/2} &=& \left( \begin{array} {cc} c_0 + a & \frac{\sqrt{10}}{6} b -\frac{\sqrt{10}}{12}c \\ \frac{\sqrt{10}}{6} b - \frac{\sqrt{10}}{12}c & c_0 + 2a - \frac43(b+c)\\ \end{array} \right) \,, \\ M_{5/2} &=& c_0 + 2a +\frac13 (b+c) \,, \\ \Delta_{1/2} &=& \Delta_{3/2} = c_0 + 2a \,.\end{aligned}$$ ![Masses predicted by the IK model (black bars), by the IK-V(r) model (hatched bars) and the experimental masses (green boxes) from Ref. [@Amsler:2008zzb]. []{data-label="fig:masses"}](specikv.eps){width="8.0cm"} Computing the reduced matrix elements with the interaction given by Eq. (\[HIK\]), one finds that the reduced matrix elements in the IK model with harmonic oscillator wave functions are all related and can be expressed in terms of the single parameter $\delta$ as $$\begin{aligned} \langle Q_{MS}\rangle = \langle Q_S\rangle = - \frac35 \delta \qquad ; \qquad \langle R_S\rangle = \delta \,.\end{aligned}$$ This gives a relation among the coefficients $a,b,c$ of the mass matrix Eq. (\[IKMass\]) $$\begin{aligned} \label{coefIK} a = \frac12 \delta \,, \qquad b = \frac{1}{20} \delta \,, \qquad c = - \frac15 \delta \,.\end{aligned}$$ We recover the well known result that in the harmonic oscillator model, the entire spectroscopy of the $L=1$ baryons is fixed by one single constant $\delta=M_\Delta-M_N \sim 300 \ {\rm MeV}$, along with an overall additive constant $c_0$, and the model becomes very predictive. In Fig. \[fig:masses\] we show the result of a best fit of $a,b,c$ in the IK-V(r) model together with the predictions of the IK model. The IK-V(r) spectrum is the best fit possible for a potential model with the spin-flavor interaction given in Eq. (\[HIK\]). 
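The spectroscopy above can be reproduced with a few lines of linear algebra. The sketch below (ours, in Python with NumPy) builds the mass matrices from the coefficients of Eq. (\[coefIK\]) with $\delta=300$ MeV; the overall constant $c_0$ is an arbitrary illustrative choice, since the model fixes only mass splittings.

```python
import numpy as np

# IK model with harmonic forces: a, b, c fixed by delta = M_Delta - M_N,
# Eq. (coefIK); c0 = 1400 MeV is a hypothetical additive constant.
delta, c0 = 300.0, 1400.0
a, b, c = delta / 2, delta / 20, -delta / 5

M12 = np.array([[c0 + a,         -5*b/3 + 5*c/6],
                [-5*b/3 + 5*c/6, c0 + 2*a + 5*(b + c)/3]])
s10 = np.sqrt(10.0)
M32 = np.array([[c0 + a,             s10*b/6 - s10*c/12],
                [s10*b/6 - s10*c/12, c0 + 2*a - 4*(b + c)/3]])

m12, v12 = np.linalg.eigh(M12)        # mixed J=1/2 nucleon masses
m32, _   = np.linalg.eigh(M32)        # mixed J=3/2 nucleon masses
N52   = c0 + 2*a + (b + c)/3          # 8'_{5/2}
Delta = c0 + 2*a                      # 10_{1/2} = 10_{3/2}
# mixing angle of the lighter J=1/2 state (up to sign/ordering conventions)
theta_N1 = np.arctan2(v12[1, 0], v12[0, 0])
```

With this choice of $c_0$ the two $J=1/2$ eigenvalues land near 1500 and 1670 MeV, illustrating how the single splitting parameter $\delta$ generates the $N(1535)$–$N(1650)$ pattern.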
The most general two-body quark Hamiltonian {#Sec:Hamiltonian} =========================================== The most general two-body quark interaction Hamiltonian in the constituent quark model can be written in generic form as $V_{qq} = \sum_{i<j} V_{qq}(ij)$ with $$\begin{aligned} \label{2} V_{qq}(ij) &=& \sum_k f_{0,k}(\vec r_{ij}) O_{S,k}(ij) + f_{1,k}^a(\vec r_{ij}) O_{V,k}^a(ij) + f_{2,k}^{ab}(\vec r_{ij}) O_{T,k}^{ab}(ij)\,, \end{aligned}$$ where $a,b=1,2,3$ denote spatial indices, $O_{S}, O_V^a, O_T^{ab}$ act on spin-flavor, and $f_k(\vec r_{ij})$ are orbital functions. Their detailed form is unimportant for our considerations. The scalar, spin-orbit and tensor parts of the interaction yield factors of $\mathbf{1}, L^i, L_2^{ij} = \frac12 \{L^i, L^j\} - \frac13\delta^{ij} L(L+1)$, which are coupled to the spin-flavor part of the interaction as shown in Table I of Ref. [@Pirjol:2008gd]. Following Refs. [@Pirjol:2007ed; @Pirjol:2008gd], one finds that the most general form of the effective mass operator in the presence of these two-body quark interactions is a linear combination of 10 nontrivial spin-flavor operators $$\begin{aligned} \label{10Ops} & & O_1 = T^2\,,\,\, O_2 = \vec S_c^2\,,\,\, O_3 = \vec s_1\cdot \vec S_c\,, \,\, O_4 = \vec L\cdot \vec S_c\,,\,\, O_5 = \vec L\cdot \vec s_1\,,\,\, O_6 = L^i t_1^a G_c^{ia}\,,\nonumber \\ & & O_7 = L^i g_1^{ia} T_c^a\,,\,\, O_8 = L_2^{ij} \{ S_c^i, S_c^j\} \,,\,\, O_9 = L_2^{ij} s_1^i S_c^j \,,\,\, O_{10} = L_2^{ij} g_1^{ia} G_c^{ja} \,\end{aligned}$$ and the unit operator. It turns out that the 11 coefficients $C_{0-10}$ contribute to the mass operator of the negative parity $N^*$ states only in 9 independent combinations: $C_0,C_1 - C_3/2, C_2+C_3, C_4, C_5, C_6, C_7, C_8+C_{10}/4, C_9 - 2C_{10}/3$. This implies the existence of two universal relations among the masses of the 9 multiplets plus the two mixing angles, which must hold in any quark model containing only two-body quark interactions. 
The first universal relation involves only the nonstrange hadrons, and requires only isospin symmetry. It can be expressed as a correlation among the two mixing angles $\theta_{N1}$ and $\theta_{N3}$ (see Fig. \[fig:corr\]) $$\begin{aligned} \label{correl} && \frac{1}{2} (N(1535) + N(1650)) + \frac{1}{2}(N(1535)-N(1650)) (3 \cos 2\theta_{N1} + \sin 2\theta_{N1}) \\ && - \frac{7}{5} (N(1520) + N(1700)) + (N(1520) - N(1700)) \Big[ - \frac{3}{5} \cos 2\theta_{N3} + \sqrt{\frac52} \sin 2\theta_{N3}\Big] \nonumber \\ && = -2 \Delta_{1/2} + 2 \Delta_{3/2} - \frac{9}{5} N_{5/2} \nonumber\,. \end{aligned}$$ This correlation also holds model-independently in the $1/N_c$ expansion, up to corrections of order $1/N_c^2$, since for non-strange states the mass operator to order $O(1/N_c)$ [@Carlson:1998vx; @Schat:2001xr] is generated by the operators in Eq. (\[10Ops\]). An example of an operator which violates this correlation is $L^i g^{ja} \{ S_c^j\,, G_c^{ia}\}$, which can be introduced by three-body quark forces. ![Correlation in the $(\theta_{N1}, \theta_{N3})$ plane from the quark model with the most general two-body quark interactions. []{data-label="fig:corr"}](figproc.eps){width="8.0cm"} In Fig. \[fig:corr\] we show the two solutions correlating the two mixing angles, where the solid line and the dashed line are obtained by inserting the central values of the baryon masses into Eq. (\[correl\]). These lines are expanded into bands given by the scatter plot when the experimental errors in the masses are taken into account. The second universal relation expresses the spin-weighted SU(3) singlet mass $\bar \Lambda = \frac16 (2\Lambda_{1/2} + 4 \Lambda_{3/2})$ in terms of the nonstrange hadronic parameters and can be found in Ref. [@Pirjol:2008gd]. The green area in Fig. \[fig:corr\] shows the allowed region for $(\theta_{N1}, \theta_{N3})$ compatible with both relations and singles out the solid line as the preferred solution.
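The two branches of the correlation can be traced numerically: at fixed $\theta_{N1}$, Eq. (\[correl\]) is of the form $A\cos 2\theta_{N3} + B\sin 2\theta_{N3} = C$ and has two solutions whenever $|C|\leq\sqrt{A^2+B^2}$. The sketch below uses approximate PDG central masses (in MeV) purely as illustrative inputs, with the usual identifications $\Delta_{1/2}=\Delta(1620)$, $\Delta_{3/2}=\Delta(1700)$, $N_{5/2}=N(1675)$; these are assumptions for illustration, not the values used to produce the figure.

```python
import numpy as np

# Approximate PDG central values in MeV -- illustrative inputs only:
N1535, N1650 = 1535.0, 1655.0           # J=1/2 nucleons
N1520, N1700 = 1520.0, 1700.0           # J=3/2 nucleons
D12, D32, N52 = 1620.0, 1700.0, 1675.0  # Delta(1620), Delta(1700), N(1675)

def theta_N3(t1):
    """Both branches of theta_N3 solving Eq. (correl) at fixed theta_N1."""
    rhs = -2 * D12 + 2 * D32 - 9/5 * N52
    C = (rhs - 0.5 * (N1535 + N1650)
             - 0.5 * (N1535 - N1650) * (3 * np.cos(2*t1) + np.sin(2*t1))
             + 7/5 * (N1520 + N1700))
    A = -3/5 * (N1520 - N1700)
    B = np.sqrt(5/2) * (N1520 - N1700)
    R = np.hypot(A, B)
    if abs(C) > R:
        return ()                        # no real solution at this theta_N1
    phi, d = np.arctan2(B, A), np.arccos(C / R)
    return ((phi + d) / 2, (phi - d) / 2)

def residual(t1, t3):
    """Left-hand side minus right-hand side of Eq. (correl)."""
    return (0.5 * (N1535 + N1650)
            + 0.5 * (N1535 - N1650) * (3 * np.cos(2*t1) + np.sin(2*t1))
            - 7/5 * (N1520 + N1700)
            + (N1520 - N1700) * (-3/5 * np.cos(2*t3)
                                 + np.sqrt(5/2) * np.sin(2*t3))
            - (-2 * D12 + 2 * D32 - 9/5 * N52))
```

Scanning `theta_N3` over a grid of $\theta_{N1}$ values reproduces the two-branch structure of the solid and dashed curves.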
On the same plot we also show, as a black dot with error bars, the values of the mixing angles obtained in Ref. [@Goity:2004ss] from an analysis of the $N^*\to N\pi$ strong decays and in Ref. [@Scoccola:2007sn] from photoproduction amplitudes. These angles are in good agreement with the correlation Eq. (\[correl\]) and, within errors, provide no evidence for the presence of spin-flavor-dependent three-body quark interactions. It would be interesting to narrow down the errors on masses and mixing angles, and also to compare with the upcoming results of lattice calculations for these excited states, to see if violations of the correlation given by Eq. (\[correl\]) become apparent. Conclusions =========== We presented the Isgur-Karl model mass operator in a form that makes the connection with the $1/N_c$ operator expansion clear. This simple and explicit calculation (for details see Ref. [@Galeta:2009pn]) should serve as an illustration of the general matching procedure discussed in Ref. [@Pirjol:2007ed] using the permutation group. We used the more general matching procedure [@Pirjol:2008gd] to saturate the contribution of all possible two-body forces to the masses of the negative parity $L=1$ excited baryons, without making any assumptions about the orbital hadronic wave functions. We derived two universal correlations among masses and mixing angles, which will be broken by the presence of three-body forces, and could be used to set bounds on their strength given a more precise determination of all the masses and mixing angles for the negative parity $L=1$ baryons. The work of C.S. was supported by CONICET and partially supported by the U. S. Department of Energy, Office of Nuclear Physics under contract No. DE-FG02-93ER40756 with Ohio University. [99]{} A. De Rujula, H. Georgi and S. L. Glashow, Phys. Rev.  D [**12**]{}, 147 (1975). N. Isgur and G. Karl, Phys. Lett.  B [**72**]{}, 109 (1977). R. F. Dashen, E. Jenkins and A. V. Manohar, Phys. Rev.  
D [**49**]{}, 4713 (1994) \[Erratum-ibid.  D [**51**]{}, 2489 (1995)\]; R. F. Dashen, E. Jenkins and A. V. Manohar, Phys. Rev.  D [**51**]{}, 3697 (1995). J. L. Goity, Phys. Lett.  B [**414**]{}, 140 (1997). D. Pirjol and T. M. Yan, Phys. Rev. D [**57**]{}, 1449 (1998); Phys. Rev. D [**57**]{}, 5434 (1998). C. E. Carlson, C. D. Carone, J. L. Goity and R. F. Lebed, Phys. Lett.  B [**438**]{}, 327 (1998); Phys. Rev.  D [**59**]{}, 114008 (1999). C. L. Schat, J. L. Goity and N. N. Scoccola, Phys. Rev. Lett.  [**88**]{}, 102002 (2002); J. L. Goity, C. L. Schat and N. N. Scoccola, Phys. Rev.  D [**66**]{}, 114014 (2002). J. L. Goity, C. Schat and N. N. Scoccola, Phys. Lett.  B [**564**]{}, 83 (2003). D. Pirjol and C. Schat, Phys. Rev. D [**67**]{}, 096009 (2003); AIP Conf. Proc.  [**698**]{}, 548 (2004). N. Matagne and F. Stancu, Phys. Rev.  D [**71**]{}, 014010 (2005); Phys. Lett.  B [**631**]{}, 7 (2005); Phys. Rev.  D [**74**]{}, 034014 (2006). D. Pirjol and C. Schat, Phys. Rev.  D [**78**]{}, 034026 (2008). H. Collins and H. Georgi, Phys. Rev.  D [**59**]{}, 094010 (1999). L. Galeta, D. Pirjol and C. Schat, Phys. Rev.  D [**80**]{}, 116004 (2009). D. Pirjol and C. Schat, Phys. Rev. Lett. [**102**]{}, 152002 (2009). D. Pirjol and C. Schat, \[arXiv:1007.0964 \[hep-ph\]\]. C. Amsler [*et al.*]{} \[Particle Data Group\], Phys. Lett.  B [**667**]{}, 1 (2008). J. L. Goity, C. Schat and N. Scoccola, Phys. Rev.  D [**71**]{}, 034016 (2005). N. N. Scoccola, J. L. Goity and N. Matagne, Phys. Lett.  B [**663**]{}, 222 (2008). [^1]: Speaker, [*XI Hadron Physics*]{}, March 21-26, 2010, São Paulo, Brazil [^2]: In Ref.[@Isgur:1977ef] A is taken as $A=\frac{2 \alpha_S}{3 m^2}$.
--- abstract: 'In this paper, we define the singular Hochschild cohomology groups ${\mathop{\mathrm{HH}}\nolimits}^i_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ of an associative $k$-algebra $A$ as morphisms from $A$ to $A[i]$ in the singular category ${\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})$ for $i\in {\mathbb{Z}}$. We prove that ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ has a Gerstenhaber algebra structure and in the case of a symmetric algebra $A$, ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ is a Batalin-Vilkovisky (BV) algebra.' author: - 'Zhengfang WANG [^1]' title: Singular Hochschild Cohomology and Gerstenhaber Algebra Structure --- Introduction ============ Let $A$ be an associative algebra over a commutative ring $k$ such that $A$ is projective as a $k$-module. Then the Hochschild cohomology groups ${\mathop{\mathrm{HH}}\nolimits}^i(A, A)$ can be defined as morphisms from $A$ to $A[i]$ in the bounded derived category ${\EuScript D}^b(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})$ of the enveloping algebra $A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}}$ for $i\in{\mathbb{Z}}_{\geq 0}$. Namely, we have $${\mathop{\mathrm{HH}}\nolimits}^i(A, A):={\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}^b(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})}(A, A[i]).$$ M. Gerstenhaber showed in [@Ger1] that there is a very rich structure on ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$. More precisely, he proved that ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$ is a so-called Gerstenhaber algebra. Namely, there is a Gerstenhaber bracket $[\cdot,\cdot]$ such that $[\cdot,\cdot]$ is a Lie bracket of degree $-1$, and a graded commutative associative cup product $\cup$, such that $[\cdot,\cdot]$ is a graded derivation of the cup product $\cup$ in each variable. 
In this paper, we will generalize the Hochschild cohomology groups to define the singular Hochschild cohomology groups ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^i(A, A)$ for $i\in {\mathbb{Z}}$. Namely, we define the singular Hochschild cohomology groups as $${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^i(A, A):={\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})}(A, A[i]),$$ where ${\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})$ is the singular category of $A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}}$. Recall that ${\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})$ is the Verdier quotient of ${\EuScript D}^b(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})$ by the full subcategory consisting of perfect complexes, that is, bounded complexes of projective $A$-$A$-bimodules (cf. [@Bu; @Orl]). We observe that ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)=0$ for an algebra $A$ of finite global dimension since the singular category ${\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})$ is zero in this case. So, from this point of view, the algebras we are interested in are those of infinite global dimension. Note that in general, ${\mathop{\mathrm{HH}}\nolimits}^{i}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ does not vanish even for $i\in {\mathbb{Z}}_{<0}.$ The main result of this paper is as follows. Let $A$ be an associative algebra over a commutative ring $k$ such that $A$ is projective as a $k$-module. 
Then the singular Hochschild cohomology $${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^*(A, A):=\bigoplus_{i\in{\mathbb{Z}}} {\mathop{\mathrm{HH}}\nolimits}^i_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$$ is a Gerstenhaber algebra, equipped with a Gerstenhaber bracket $[\cdot,\cdot]$ and the Yoneda product $\cup$ in the singular category ${\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A\otimes A^{{\mathop{\mathrm{op}}\nolimits}})$. From Buchweitz’s work in his manuscript [@Bu], we have a nice description of the singular Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ for a self-injective algebra $A$. Namely, suppose that $A$ is a self-injective algebra over a field $k$ (e.g. a group algebra of a finite group). Then we have $${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^i(A, A)\cong \begin{cases} {\mathop{\mathrm{HH}}\nolimits}^i(A, A) & \mbox{if}\ \ i>0,\\ {\mathop{\mathrm{HH}}\nolimits}_{-i-1}(A, {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A^e)) & \mbox{if} \ i<-1. \end{cases}$$ Here we remark that, from [@CiSo], for the case of a group algebra $k[G]$, where $G$ is a finite abelian group and the characteristic of $k$ divides the order of $G$, ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^*(A, A)$ is closely related to the Tate cohomology $\widehat{{\mathop{\mathrm{HH}}\nolimits}}^*(G, k)$ of $G$ with coefficients in $k$, the trivial $kG$-module. We also remark that in the case of a self-injective algebra $A$, the singular Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^*(A, A)$ agrees with the Tate-Hochschild cohomology defined in [@BeJo] and [@BeJoOp] and the stable Hochschild cohomology defined in [@EuSc]. Recall that L. Menichi in [@Men] and T. 
Tradler in [@Tra] independently showed that the Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$ of a finite dimensional symmetric algebra $A$ has a new structure, the so-called Batalin-Vilkovisky (BV) structure (cf. Definition \[defn-BV\]), which has been studied in topology and mathematical physics for several decades. Roughly speaking, a BV structure is a differential operator on Hochschild cohomology and it is a “generator” of the Gerstenhaber bracket $[\cdot,\cdot]$, which means that $[\cdot,\cdot]$ is the obstruction to the differential operator being a graded derivation with respect to the cup product. In this paper, we will generalize this result and prove that the singular Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ of a finite dimensional symmetric algebra $A$ has a BV algebra structure. Namely, we have the following result. Let $A$ be a symmetric algebra over a field $k$. Then the singular Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ is a BV algebra with the BV operator $\Delta_{{\mathop{\mathrm{sg}}\nolimits}}$, which is the Connes B-operator for the negative part ${\mathop{\mathrm{HH}}\nolimits}^{< 0}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$, the $\Delta$-operator for the positive part ${\mathop{\mathrm{HH}}\nolimits}^{> 0}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ and $$\Delta_{{\mathop{\mathrm{sg}}\nolimits}}|_{{\mathop{\mathrm{HH}}\nolimits}^0_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)}=0: {\mathop{\mathrm{HH}}\nolimits}^0_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{-1}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A).$$ In particular, we have two BV subalgebras ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{\leq 0}(A, A)$ and ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{\geq 0}(A, A)$ with induced BV algebra structures. 
As a corollary, we obtain that the cyclic homology ${\mathop{\mathrm{HC}}\nolimits}_*(A, A)[-1]$ of a symmetric algebra $A$ has a graded Lie algebra structure (cf. Corollary \[cor-cy\]). We remark that L. Menichi showed that the negative cyclic cohomology ${\mathop{\mathrm{HC}}\nolimits}^*_{-}(A,A)[-1]$ (cf. Proposition 25, [@Men1]) and the cyclic cohomology ${\mathop{\mathrm{HC}}\nolimits}_{\lambda}^*(A)[-1]$ (cf. Corollary 43, [@Men]) are both graded Lie algebras. So in some sense our result is a dual version of his results. Throughout this paper, we fix a commutative ring $k$ with unit. We assume that all rings and the modules are simultaneously $k$-modules and that all operations on rings and modules are naturally $k$-module homomorphisms. For simplicity, we often use the symbol $\otimes$ to represent $\otimes_k,$ the tensor product over the commutative base ring $k$. For a $k$-algebra $A$, we denote $(a_i\otimes a_{i+1}\otimes\cdots\otimes a_{j})\in A^{\otimes j-i+1}(i\leq j)$ sometimes by $a_{i, j}$ for short, and denote the enveloping algebra $A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}}$ by $A^e$. Acknowledgement {#acknowledgement .unnumbered} =============== This work is part of the author's PhD thesis. I would like to thank my PhD supervisor Alexander Zimmermann for introducing this interesting topic and for his many valuable suggestions for improvement. I am grateful to Claude Cibils and Selene Sanchez for some interesting discussions, to Murray Gerstenhaber for some remarks on his paper [@Ger1] and to Reiner Hermann for some discussions on his PhD thesis when I just started this project. I also would like to thank Ragnar-Olaf Buchweitz for useful suggestions during this project. Special thanks to my PhD co-supervisor, Marc Rosso, for his constant support and encouragement during my career in mathematics. Preliminaries ============= In this section we recall some notions on Hochschild cohomology and Gerstenhaber algebras. 
For more details, we refer the reader to [@Lod; @Ger1]. Let $k$ be a commutative ring with unit. A differential graded Lie algebra (DGLA) is a differential ${\mathbb{Z}}$-graded $k$-module $(L, d)$ with a bracket $[\cdot,\cdot]: L^i\times L^j\rightarrow L^{i+j}$ which satisfies the following properties: 1. it is skew-symmetric: $$[\alpha, \beta]=-(-1)^{|\alpha||\beta|} [\beta, \alpha];$$ 2. satisfies the graded Leibniz rule: $$d([\alpha, \beta])=(-1)^{|\beta|}[d\alpha, \beta]+[\alpha, d\beta];$$ 3. and the graded Jacobi identity: $$(-1)^{(|\alpha|-1)(|\gamma|-1)}[[\alpha, \beta], \gamma]+(-1)^{(|\beta|-1)(|\alpha|-1)}[[\beta,\gamma],\alpha]+(-1)^{(|\gamma|-1)(|\beta|-1)}[[\gamma, \alpha], \beta]=0,$$ where $\alpha, \beta, \gamma$ are arbitrary homogeneous elements in $(L, d)$ and $|\alpha|$ is the degree of the homogeneous element $\alpha$. Let $(L, d, [\cdot,\cdot])$ be a DGLA. Then the homology $H^*(L, d)$ of the differential graded module $(L, d)$ is a ${\mathbb{Z}}$-graded Lie algebra with the induced bracket $[\cdot,\cdot]$. A Gerstenhaber algebra is a ${\mathbb{Z}}$-graded $k$-module ${\mathcal{H}}^*:=\oplus_{n\in{\mathbb{Z}}}{\mathcal{H}}^n$ equipped with: 1. a graded commutative associative product $\cup$ of degree zero, with unit $1\in{\mathcal{H}}^0$, $$\begin{aligned} \begin{tabular}{cccc} $\cup:$ & ${\mathcal{H}}^m \times {\mathcal{H}}^n$ & $\rightarrow$ & ${\mathcal{H}}^{m+n}$\\ & $(\alpha, \beta)$ & $\mapsto$ & $\alpha\cup\beta.$ \end{tabular} \end{aligned}$$ In particular, $\alpha\cup \beta=(-1)^{|\alpha||\beta|}\beta\cup\alpha$; 2. a graded Lie algebra structure $[\cdot, \cdot]$ on ${\mathcal{H}}^*[-1]$, that is, $$[\alpha, \beta]=-(-1)^{(|\alpha|-1)(|\beta|-1)}[\beta, \alpha]$$ and $$(-1)^{(|\alpha|-1)(|\gamma|-1)}[[\alpha, \beta], \gamma]+(-1)^{(|\beta|-1)(|\alpha|-1)}[[\beta,\gamma],\alpha]+(-1)^{(|\gamma|-1)(|\beta|-1)}[[\gamma, \alpha], \beta]=0;$$ 3. 
compatibility between $\cup$ and $[\cdot, \cdot]$: $$[\alpha, \beta\cup \gamma]=\beta\cup[\alpha, \gamma]+(-1)^{|\gamma|(|\alpha|-1)}[\alpha, \beta]\cup \gamma,$$ (or equivalently, $$[\alpha\cup \beta, \gamma]=[\alpha, \gamma]\cup \beta+(-1)^{|\alpha|(|\gamma|-1)}\alpha\cup[\beta, \gamma])$$ where $\alpha, \beta, \gamma$ are arbitrary homogeneous elements in ${\mathcal{H}}^*$ and $|\alpha|$ is the degree of the homogeneous element $\alpha$. We follow [@BaGi] to define a Gerstenhaber module of a Gerstenhaber algebra ${\mathcal{H}}^*$. \[defn-module\] A Gerstenhaber module of ${\mathcal{H}}^*$ is a ${\mathbb{Z}}$-graded vector space ${\mathcal{F}}^*$ equipped with: 1. a module structure $\cup'$ for the graded algebra $({\mathcal{H}}^*, \cup)$; 2. a module structure $[\cdot, \cdot]'$ for the graded Lie algebra $({\mathcal{H}}^*[-1], [\cdot, \cdot])$. That is, $$[[\alpha, \beta], x]'=[\alpha, [\beta, x]']'-(-1)^{(|\alpha|-1)(|\beta|-1)}[\beta,[\alpha, x]']';$$ 3. compatibility: $$[\alpha\cup\beta, x]'=(-1)^{|\alpha|(|x|-1)}\alpha\cup'[\beta, x]'+(-1)^{(|\alpha|+|x|-1)|\beta|} \beta\cup'[\alpha, x]',$$ $$[\alpha, \beta\cup'x]'=\beta\cup'[\alpha, x]'+(-1)^{|x|(|\alpha|-1)}[\alpha, \beta]\cup'x$$ where $\alpha, \beta$ are arbitrary homogeneous elements in ${\mathcal{H}}^*$ and $x$ is an arbitrary homogeneous element in ${\mathcal{F}}^*$. For any Gerstenhaber algebra ${\mathcal{H}}^*$, ${\mathcal{H}}^*$ is a Gerstenhaber module over itself. Naturally, we can define morphisms between Gerstenhaber modules. Let $({\mathcal{H}}^*, \cup, [\cdot, \cdot])$ be a Gerstenhaber algebra over a commutative ring $k$. Let $({\mathcal{F}}_1^*, \cup_1,[\cdot, \cdot]_1)$ and $({\mathcal{F}}^*_2, \cup_2, [\cdot,\cdot]_2)$ be two Gerstenhaber modules of ${\mathcal{H}}^*$. We say that a $k$-module morphism $\varphi: {\mathcal{F}}^*_1\rightarrow {\mathcal{F}}^*_2$ is a Gerstenhaber morphism of degree $r$ $(r\in{\mathbb{Z}})$ if the following two conditions are satisfied: 1. 
$\varphi$ is a module morphism of degree $r$ for the graded commutative algebra $({\mathcal{H}}^*, \cup)$. That is, for any $f\in {\mathcal{H}}^m$ and $g\in {\mathcal{F}}_1^*$, $$\varphi(f\cup_1 g)=(-1)^{mr}f\cup_2\varphi(g)$$ 2. $\varphi$ is a module morphism of degree $r$ for the graded Lie algebra $({\mathcal{H}}^*[-1], [\cdot, \cdot]).$ That is, for any $f\in {\mathcal{H}}^m$ and $g\in {\mathcal{F}}_1^*$, $$\varphi([f, g]_1)=(-1)^{(m-1)r}[f, \varphi(g)]_2.$$ The classical example of Gerstenhaber algebras is the Hochschild cohomology $${\mathop{\mathrm{HH}}\nolimits}^*(A, A):=\bigoplus_{n\in{\mathbb{Z}}_{\geq 0}}{\mathop{\mathrm{HH}}\nolimits}^n(A, A)$$ of an associative algebra $A$. Let us recall some notions on Hochschild cohomology. Let $A$ be an associative algebra over $k$ such that $A$ is projective as a $k$-module and $M$ be an $A$-$A$-bimodule. Recall that the Hochschild cohomology of $A$ with coefficients in $M$ is defined as $${\mathop{\mathrm{HH}}\nolimits}^*(A, M):={\mathop{\mathrm{Ext}}\nolimits}_{A^e}^*(A, M),$$ and Hochschild homology is defined as $${\mathop{\mathrm{HH}}\nolimits}_*(A, M):={\mathop{\mathrm{Tor}}\nolimits}_*^{A^e}(A, M)$$ where $A^e:=A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}}$ is the enveloping algebra of $A$. Recall that we have the following (un-normalized) bar resolution of $A$, $$\label{bar} \xymatrix{ {\mathop{\mathrm{Bar}}\nolimits}_*(A): \cdots\ar[r] & A^{\otimes(r+2)}\ar[r]^{d_r} & A^{\otimes(r+1)} \ar[r] & \cdots \ar[r] & A^{\otimes 3} \ar[r]^{d_1} & A^{\otimes 2} \ar[r]^{d_0:=\mu} & A },$$ where $\mu$ is the multiplication of $A$ and $d_r$ is defined as follows, $$d_r(a_0\otimes a_1\otimes \cdots \otimes a_{r+1})=\sum_{i=0}^r(-1)^ia_{0, i-1}\otimes a_ia_{i+1}\otimes a_{i+2, r+1}.$$ Denote $$\begin{split} C^r(A, M):&={\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A^{\otimes r+2}, M), \\ C_r(A, M):&= M\otimes_{A^e}A^{\otimes r+2}, \end{split}$$ for any $r\in{\mathbb{Z}}_{\geq 0}$. 
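To make the signs in $d_r$ concrete, the following small Python check verifies $d_{r-1}\circ d_r = 0$ for the bar differential over the illustrative test algebra $A = k[\varepsilon]/(\varepsilon^2)$ (a hypothetical choice made only for this sketch), with tensors stored as dictionaries from basis words to coefficients.

```python
from itertools import product

# A = k[eps]/(eps^2) with basis {'1', 'e'}; the product of two basis
# elements is a basis element up to a scalar (e * e = 0).
def mul(x, y):
    if x == '1':
        return (y, 1)
    if y == '1':
        return (x, 1)
    return ('1', 0)                      # e * e = 0

def d(tensor):
    """Bar differential A^{r+2} -> A^{r+1}; tensors are {basis word: coeff}."""
    out = {}
    for word, coeff in tensor.items():
        for i in range(len(word) - 1):   # multiply factors i and i+1
            elt, c = mul(word[i], word[i + 1])
            new = word[:i] + (elt,) + word[i + 2:]
            out[new] = out.get(new, 0) + (-1) ** i * coeff * c
    return {w: c for w, c in out.items() if c != 0}

# d o d = 0 on every basis tensor of A^{\otimes 4}
for word in product('1e', repeat=4):
    assert d(d({word: 1})) == {}
```

The vanishing of $d\circ d$ here is exactly the simplicial identity guaranteed by the associativity of $A$.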
Note that $$\begin{split} {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A^{\otimes r+2}, M)&\cong {\mathop{\mathrm{Hom}}\nolimits}_{k}(A^{\otimes r}, M),\\ M\otimes_{A^e}A^{\otimes r+2}&\cong M\otimes_k A^{\otimes r}. \end{split}$$ We also consider the normalized bar resolution $\overline{{\mathop{\mathrm{Bar}}\nolimits}}_*(A)$, which is defined as $$\overline{{\mathop{\mathrm{Bar}}\nolimits}}_r(A):=A\otimes \overline{A}^{\otimes r}\otimes A,$$ where $\overline{A}:=A/(k \cdot 1_A)$, with induced differential in ${\mathop{\mathrm{Bar}}\nolimits}_*(A)$. Thus the Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}^*(A, M)$ can be computed by the following complex, $$\xymatrix{ C^*(A, M): M\ar[r]^-{\delta^0} & {\mathop{\mathrm{Hom}}\nolimits}_k(A, M)\ar[r] & \cdots \ar[r] & {\mathop{\mathrm{Hom}}\nolimits}_k(A^{\otimes r}, M) \ar[r]^{\delta^r} & {\mathop{\mathrm{Hom}}\nolimits}_k(A^{\otimes r+1}, M) \ar[r] &\cdots},$$ where $\delta^r$ is defined as follows, for any $f\in {\mathop{\mathrm{Hom}}\nolimits}_k(A^{\otimes r}, M)$ $$\begin{aligned} \delta^r(f)(a_1\otimes\cdots \otimes a_{r+1}):&=&a_1f(a_{2,r+1})+\sum_{i=1}^r(-1)^if(a_{1,i-1}\otimes a_ia_{i+1}\otimes a_{i+2, r+1})+\\ &&(-1)^{r+1}f(a_{1,r})a_{r+1}.\end{aligned}$$ We denote $$Z^r(A, M):={\mathop{\mathrm{Ker}}\nolimits}(\delta^r)$$ and $$B^r(A, M):={\mathop{\mathrm{Im}}\nolimits}(\delta^{r-1})$$ for any $r\in {\mathbb{Z}}_{\geq 0}$. Then we have $${\mathop{\mathrm{HH}}\nolimits}^r(A, M)=Z^r(A, M)/B^r(A, M).$$ We can also compute the Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}^*(A,M)$ via the normalized bar resolution $\overline{{\mathop{\mathrm{Bar}}\nolimits}}_*(A)$, namely, we have (cf. e.g. [@Lod]) $${\mathop{\mathrm{HH}}\nolimits}^r(A, M)=\overline{Z}^r(A, M)/\overline{B}^r(A, M),$$ where $\overline{Z}^r(A, M)$ and $\overline{B}^r(A, M)$ are respectively the $r$-th cocycle and $r$-th coboundary in the normalized cochain complex $\overline{C}^*(A, M)$. 
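The identity $\delta^{r+1}\circ\delta^r = 0$ can likewise be verified mechanically. The sketch below implements the coboundary formula above for the illustrative choice $A = M = k[\varepsilon]/(\varepsilon^2)$ (an assumption made only for this example), with a cochain given by its values on basis words.

```python
from itertools import product

BASIS = {'1': (1, 0), 'e': (0, 1)}   # A = k[eps]/(eps^2); element = (c_1, c_eps)

def amul(x, y):                      # product in A, using eps^2 = 0
    return (x[0] * y[0], x[0] * y[1] + x[1] * y[0])

def bmul(x, y):                      # product of two basis labels, up to a scalar
    if x == '1':
        return (y, 1)
    if y == '1':
        return (x, 1)
    return ('1', 0)                  # e * e = 0

def delta(f, r):
    """Coboundary delta^r of f in Hom_k(A^{tensor r}, A), f given on basis words."""
    def df(w):                       # w = (a_1, ..., a_{r+1}) as basis labels
        val = amul(BASIS[w[0]], f(w[1:]))                 # a_1 f(a_2, ..)
        for i in range(1, r + 1):                         # contract a_i a_{i+1}
            elt, c = bmul(w[i - 1], w[i])
            t = f(w[:i - 1] + (elt,) + w[i + 1:])
            s = (-1) ** i * c
            val = (val[0] + s * t[0], val[1] + s * t[1])
        t = amul(f(w[:r]), BASIS[w[r]])                   # f(.., a_r) a_{r+1}
        s = (-1) ** (r + 1)
        return (val[0] + s * t[0], val[1] + s * t[1])
    return df

# delta^2 = 0 on an arbitrary 1-cochain
f = lambda w: {('1',): (2, 3), ('e',): (5, 7)}[w]
h = delta(delta(f, 1), 2)
assert all(h(w) == (0, 0) for w in product('1e', repeat=3))
```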
The Hochschild homology ${\mathop{\mathrm{HH}}\nolimits}_*(A, M)$ is the homology of the following complex, $$\xymatrix{ C_*(A, M): \cdots\ar[r] & C_r(A, M) \ar[r]^{\partial_r} & C_{r-1}(A, M)\ar[r] & \cdots \ar[r] & C_0(A, M) }$$ where $$\begin{split} \partial_r(m\otimes a_1\otimes\cdots\otimes a_r):= ma_1\otimes a_{2,r}+\sum_{i=1}^{r-1}(-1)^{i}m\otimes a_{1, i-1}\otimes a_ia_{i+1}\otimes a_{i+2, r}+(-1)^ra_rm\otimes a_{1, r-1}. \end{split}$$ Denote $$\begin{split} B_r(A, M):&={\mathop{\mathrm{Im}}\nolimits}(\partial_{r+1})\\ Z_r(A, M):&={\mathop{\mathrm{Ker}}\nolimits}(\partial_r). \end{split}$$ Then we have $${\mathop{\mathrm{HH}}\nolimits}_r(A, M):=Z_r(A, M)/B_r(A, M).$$ Similarly, it can also be computed via the normalized bar resolution $\overline{{\mathop{\mathrm{Bar}}\nolimits}}_*(A)$, namely, we have (cf. e.g. [@Lod]) $${\mathop{\mathrm{HH}}\nolimits}_r(A, M)=\overline{Z}_r(A, M)/\overline{B}_r(A, M).$$ Let us recall the cup product, $$\cup: C^m(A, A)\times C^n(A, M)\rightarrow C^{m+n}(A, M),$$ which is defined in the following way. 
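The same kind of mechanical check works for the boundary $\partial_r$. The sketch below implements the formula for $\partial_r$ above for the illustrative choice $M = A = k[\varepsilon]/(\varepsilon^2)$ (an assumption made only for this example) and verifies $\partial_r\circ\partial_{r+1} = 0$ on basis chains.

```python
from itertools import product

def bmul(x, y):                    # product of basis labels in A = k[eps]/(eps^2)
    if x == '1':
        return (y, 1)
    if y == '1':
        return (x, 1)
    return ('1', 0)                # eps * eps = 0

def boundary(chain):
    """Hochschild boundary on A tensor A^{tensor r}; chains as {basis tuple: coeff}."""
    out = {}
    def acc(word, c):
        if c:
            out[word] = out.get(word, 0) + c
    for w, coeff in chain.items():
        r = len(w) - 1                               # w = (m, a_1, ..., a_r)
        elt, c = bmul(w[0], w[1])
        acc((elt,) + w[2:], coeff * c)               # m a_1 term
        for i in range(1, r):                        # contract a_i a_{i+1}
            elt, c = bmul(w[i], w[i + 1])
            acc(w[:i] + (elt,) + w[i + 2:], (-1) ** i * coeff * c)
        elt, c = bmul(w[r], w[0])
        acc((elt,) + w[1:r], (-1) ** r * coeff * c)  # a_r m term
    return {w: c for w, c in out.items() if c}

# boundary o boundary = 0 on every basis chain of A tensor A^{tensor 3}
for w in product('1e', repeat=4):
    assert boundary(boundary({w: 1})) == {}
```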
Given $f\in C^m(A, A)$ and $g\in C^n(A, M)$, $$(f\cup g)(a_1\otimes \cdots \otimes a_{m+n}):=f(a_{1, m})g(a_{m+1,m+n}).$$ One can check that this cup product $\cup$ induces a well-defined operation (still denoted by $\cup$) on cohomology groups, that is, $$\cup: {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\times {\mathop{\mathrm{HH}}\nolimits}^n(A, M)\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{m+n}(A, M).$$ Recall that there is also a circ product, $$\circ: C^n(A, M)\times C^m(A, A)\rightarrow C^{m+n-1}(A, M)$$ which is defined as follows, given $f\in C^m(A, A)$ and $g\in C^n(A, M)$, for $1\leq i\leq n$, set $$g\circ_if(a_1\otimes \cdots\otimes a_{m+n-1}):=g(a_{1, i-1} \otimes f(a_{i, i+m-1})\otimes a_{i+m,m+n-1}),$$ $$\label{equ-circ-product} g\circ f:=\sum_{i=1}^n(-1)^{(m-1)(i-1)}g\circ_if,$$ for $n=0$, we set $$g\circ f:=0.$$ Using this circ product, one can define a Lie bracket $[\cdot, \cdot]$ on ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$ in the following way. Let $f\in C^m(A, A)$ and $g\in C^n(A, A)$, define $$[f, g]:=f\circ g-(-1)^{(m-1)(n-1)}g\circ f.$$ One can check that this Lie bracket induces a well-defined Lie bracket (still denoted by $[\cdot, \cdot]$) on ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$. With these two operators $\cup$ and $[\cdot, \cdot]$ on ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$, Gerstenhaber proves the following result. Let $A$ be an associative algebra over a commutative ring $k$. Then the Hochschild cohomology ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$, equipped with the cup product $\cup$ and bracket $[\cdot,\cdot]$ is a Gerstenhaber algebra. Let $A$ be an associative $k$-algebra such that $A$ is projective as a $k$-module. 
Then for any $m\in {\mathbb{Z}}_{\geq 0}$, $${\mathop{\mathrm{HH}}\nolimits}^m(A, A)\cong{\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}^b(A^e)}(A, A[m]),$$ and the cup product $\cup$ can be interpreted as compositions of morphisms (namely, the Yoneda product) in ${\EuScript D}^b(A\otimes_k A^{{\mathop{\mathrm{op}}\nolimits}})$. At the end of this section, let us recall the cap product $\cap$, which is an action of Hochschild cohomology on Hochschild homology. For any $r, p\in{\mathbb{Z}}_{\geq 0}$ such that $r\geq p$, there is a bilinear map $$\cap: C_r(A, M)\otimes C^p(A, A)\rightarrow C_{r-p}(A, M)$$ sending $(m\otimes a_1\otimes \cdots \otimes a_r)\otimes \alpha$ to $$(m\otimes a_1\otimes \cdots \otimes a_r)\cap \alpha:= (-1)^{rp}(m\otimes_A\alpha(a_1\otimes \cdots \otimes a_p)\otimes a_{p+1}\otimes \cdots \otimes a_{r})$$ It is straightforward to verify that $\cap$ induces a well-defined map, which we still denote by $\cap$, on the level of homology, $$\label{equ-cap} \cap: {\mathop{\mathrm{HH}}\nolimits}_r(A, M)\otimes {\mathop{\mathrm{HH}}\nolimits}^p(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}_{r-p}(A, M).$$ Singular Hochschild cohomology ============================== Let $A$ be an associative algebra over a commutative ring $k$ such that $A$ is projective as a $k$-module. Recall the un-normalized bar resolution ${\mathop{\mathrm{Bar}}\nolimits}_*(A)$ (cf. (\[bar\])) of $A$, $$\begin{aligned} \xymatrix{ \cdots\ar[r]^{d_2}& A^{\otimes 3} \ar[r]^-{d_1} & A^{\otimes 2} \ar[r]^{d_0} & A\ar[r] &0. }\end{aligned}$$ Let us denote the $p$-th kernel ${\mathop{\mathrm{Ker}}\nolimits}(d_{p-1})$ in the un-normalized bar resolution ${\mathop{\mathrm{Bar}}\nolimits}_*(A)$ by $\Omega^p(A)$. 
Then we have the following short exact sequence for $p\in{\mathbb{Z}}_{>0}$, $$\begin{aligned} \label{short-exact} 0\rightarrow\Omega^p(A)\rightarrow A^{\otimes (p+1)} \rightarrow \Omega^{p-1}(A)\rightarrow 0\end{aligned}$$ which induces a long exact sequence $$\begin{aligned} \label{longexact} \cdots\rightarrow {\mathop{\mathrm{HH}}\nolimits}^m(A, A^{\otimes (p+1)}) \rightarrow {\mathop{\mathrm{HH}}\nolimits}^m(A, \Omega^{p-1}(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, \Omega^p(A))\rightarrow\cdots\end{aligned}$$ We denote the connecting morphism in (\[longexact\]) by, $$\theta_{m,p-1}: {\mathop{\mathrm{HH}}\nolimits}^m(A, \Omega^{p-1}(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, \Omega^p(A))$$ for $m\in{\mathbb{Z}}_{\geq 0}$. Hence we obtain an inductive system for any fixed $m\in{\mathbb{Z}}_{\geq 0}$, $$\begin{aligned} \label{system} \xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\ar[r]^-{\theta_{m, 0}} & {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, \Omega^1(A)) \ar[r]^{\theta_{m,1}} & {\mathop{\mathrm{HH}}\nolimits}^{m+2}(A, \Omega^2(A))\ar[r] &\cdots }\end{aligned}$$ Let $$\lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))$$ be the colimit of the inductive system (\[system\]) above. Since $A$ is a $k$-algebra such that $A$ is projective as a $k$-module, we have a canonical isomorphism $${\mathop{\mathrm{HH}}\nolimits}^{m}(A, M)\cong {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}^b(A^e)}(A, M[m])$$ for any $A$-$A$-bimodule $M$ and $m\in {\mathbb{Z}}_{\geq 0}$. 
Hence we have the following morphisms for any $m\in {\mathbb{Z}}, p\in {\mathbb{Z}}_{\geq 0}$ such that $m+p>0$, $$\label{equ-natural} \Phi_{m, p}:{\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))\rightarrow {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, \Omega^p(A)[m+p])\rightarrow {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m]),$$ which are compatible with the inductive system (\[system\]) above. So the collection of maps $\Phi_{m, p}$ induces a morphism for any fixed $m\in {\mathbb{Z}}$, $$\Phi_m: \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\ m+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))\rightarrow {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m]).$$ Next we will prove that $\Phi_m$ is an isomorphism for any $m\in{\mathbb{Z}}$. \[prop\] For any $m\in{\mathbb{Z}}$, the morphism $$\Phi_m: \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\ m+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))\rightarrow {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m])$$ defined above is an isomorphism. [[*Proof.*]{}]{} First, let us recall the following fact (cf. e.g. Proposition 6.7.17 [@Zim]): The following canonical homomorphism is an isomorphism for any $m\in{\mathbb{Z}}$ $${\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m])\cong \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\m+p>0}} \underline{{\mathop{\mathrm{Hom}}\nolimits}}_{A^e}(\Omega^{m+p}(A), \Omega^p(A)).$$ Now using the fact above, we obtain that $\Phi_m$ is surjective. 
Indeed, assume $$f\in {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m]),$$ then from the fact above, there exists $p\in{\mathbb{Z}}_{\geq 0}$ such that $f$ can be represented by some element $$f'\in \underline{{\mathop{\mathrm{Hom}}\nolimits}}_{A^e}(\Omega^{m+p}(A), \Omega^p(A)),$$ hence $f$ is also represented by some element $f''\in {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(\Omega^{m+p}(A), \Omega^p(A)).$ It follows that $f''$ induces a cocycle (See Diagram (\[diagram2\])) $$\alpha:=f''\circ d_{m+p}\in {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A^{\otimes m+p+2}, \Omega^p(A)),$$ hence $\alpha\in {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))$. $$\begin{aligned} \label{diagram2} \xymatrix{ A^{\otimes m+p+3} \ar[d]_{d_{m+p+1}}\\ A^{\otimes m+p+2} \ar[r]^{\alpha}\ar@{->>}[d]_{d_{m+p}} & \Omega^p(A)\\ \Omega^{m+p}(A)\ar[ru]_{f''}\ar@{_(->}[d]\\ A^{\otimes m+p+1} }\end{aligned}$$ Moreover, we have that $\Phi_m(\alpha)=f$, so $\Phi_m$ is surjective. Here we remark that the following morphism between two inductive systems, $$\begin{aligned} \xymatrix{ \cdots\ar[r]&{\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))\ar[r]\ar[d] & {\mathop{\mathrm{HH}}\nolimits}^{m+p+1}(A,\Omega^{p+1}(A))\ar[r]\ar[d]& \cdots\\ \cdots\ar[r] & \underline{{\mathop{\mathrm{Hom}}\nolimits}}_{A^e}(\Omega^{m+p}(A), \Omega^p(A))\ar[r] & \underline{{\mathop{\mathrm{Hom}}\nolimits}}_{A^e}(\Omega^{m+p+1}(A), \Omega^{p+1}(A))\ar[r]&\cdots }\end{aligned}$$ induces a commutative diagram, $$\xymatrix{ \lim\limits_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\ m+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))\ar[r]^-{\Phi_m}\ar[d] & {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m])\\ \lim\limits_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\m+p>0}} \underline{{\mathop{\mathrm{Hom}}\nolimits}}_{A^e}(\Omega^{m+p}(A), \Omega^p(A))\ar[ru]_{\cong}. 
}$$ It remains to prove that $\Phi_m$ is injective. Assume that there exists $$\beta\in \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\ m+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))$$ such that $\Phi_m(\beta)=0$. Then there exists $p\in{\mathbb{Z}}_{\geq 0}$ such that $\beta$ is represented by some element $\beta'\in {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))$ and $\beta'$ is mapped to zero under the morphism $${\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))\rightarrow \underline{{\mathop{\mathrm{Hom}}\nolimits}}_{A^e}(\Omega^{m+p}(A), \Omega^p(A)).$$ It follows that the cocycle $\beta'$ induces a morphism $g':\Omega^{m+p}(A)\rightarrow \Omega^p(A)$ such that $g'$ factors through a projective $A$-$A$-bimodule $P$. The maps are illustrated by the following diagram. $$\label{diagram-tau} \xymatrix{ A^{\otimes m+p+3} \ar[d]_{d_{m+p+1}}\\ A^{\otimes m+p+2} \ar[r]^{\beta'}\ar@{->>}[d]_{d_{m+p}} & \Omega^p(A)\\ \Omega^{m+p}(A)\ar[ru]_-{g'}\ar@{_(->}[d] \ar[r]_-{\sigma} & P\ar[u]_-{\tau}\\ A^{\otimes m+p+1} }$$ By functoriality, the $A$-$A$-bimodule morphism $\tau:P\rightarrow \Omega^p(A)$ in Diagram (\[diagram-tau\]) induces the following map. $$\begin{aligned} \label{tau} \begin{tabular}{ccccc} $\tau^*$ : & ${\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, P)$&$\rightarrow$ & ${\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A))$\\ &$\sigma$&$\mapsto$&$ \beta'$\\ \end{tabular}\end{aligned}$$ Since $P$ is a projective $A$-$A$-bimodule, we have the following commutative diagram between two short exact sequences. $$\begin{aligned} \xymatrix{ 0\ar[r] & \Omega^{p+1}(A)\ar[r] & A^{\otimes p+2} \ar[r] & \Omega^p(A)\ar[r] & 0\\ 0\ar[r] & 0\ar[r] \ar[u]& P\ar[r]^{\cong}\ar[u]^{\tau'} & P \ar[r]\ar[u]^-{\tau} & 0 }\end{aligned}$$ Hence, by the functoriality of the long exact sequence associated with a short exact sequence, we have the following commutative diagram.
$$\begin{aligned} \label{diagram1} \xymatrix@C=1.8em{ \cdots\ar[r] & {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, A^{\otimes p+2}) \ar[r] & {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^{p}(A))\ar[r]^-{\theta_{m+p,p}} & {\mathop{\mathrm{HH}}\nolimits}^{m+p+1}(A, \Omega^{p+1}(A))\\ \cdots\ar[r] & {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, P) \ar[u]^{\tau'^*} \ar[r]^{\cong} & {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, P)\ar[u]^{\tau^*}\ar[r] & 0\ar[u] }\end{aligned}$$ So from Diagram (\[diagram1\]) above, we obtain that $$\theta_{m+p,p}(\beta')=\theta_{m+p, p}\tau^*(\sigma)=0,$$ thus it follows that $\beta$ is represented by zero element in ${\mathop{\mathrm{HH}}\nolimits}^{m+p+1}(A, \Omega^{p+1}(A))$, hence $$\beta=0\in \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\m+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \Omega^p(A)).$$ So $\Phi_m$ is injective. Therefore, $\Phi_m$ is an isomorphism. By the same argument, we also have the following isomorphism for any $m\in {\mathbb{Z}}$, $$\overline{\Phi}_m: \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\ m+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \overline{\Omega}^p(A))\rightarrow {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m])$$ where $\overline{\Omega}^p(A)$ is the $p$-th kernel in the normalized bar resolution of $A$. Gerstenhaber algebra structure on singular Hochschild cohomology ================================================================ In this section, we will prove the following main theorem. \[thm-gerst\] Let $A$ be an associative algebra over a commutative ring $k$ and suppose that $A$ is projective as a $k$-module. Then the singular Hochschild cohomology (shifted by $[1]$) $${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)[1]:=\bigoplus_{n\in{\mathbb{Z}}}{\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[n])[1]$$ is a Gerstenhaber algebra. 
For $p\in {\mathbb{Z}}_{\geq 0}$, we denote the $p$-th kernel ${\mathop{\mathrm{Ker}}\nolimits}(\overline{d}_{p-1})$ in the normalized bar resolution $\overline{{\mathop{\mathrm{Bar}}\nolimits}}_*(A)$ by $\overline{\Omega}^p(A).$ Note that $\overline{\Omega}^0(A)=A$. Let $m, n\in {\mathbb{Z}}_{>0}$ and $p, q\in {\mathbb{Z}}_{\geq0}$. We shall define a Gerstenhaber bracket as follows: $$\label{equ-bracket} [\cdot,\cdot]: C^m(A, \overline{\Omega}^{p}(A))\otimes C^n(A, \overline{\Omega}^q(A))\rightarrow C^{m+n-1}(A, \overline{\Omega}^{p+q}(A)).$$ For $f\in C^m(A, \overline{\Omega}^p(A))$ and $g\in C^n(A, \overline{\Omega}^q(A))$, define $$f\bullet_i g:= \begin{cases} d((f\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes q})({\mathop{\mathrm{id}}\nolimits}^{\otimes i-1}\otimes g\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes m-i})\otimes 1)&\mbox{if} \ 1\leq i\leq m, \\ d(({\mathop{\mathrm{id}}\nolimits}^{\otimes -i}\otimes f\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes q+i})(g\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes m-1})\otimes 1) & \mbox{if} \ -q\leq i \leq -1, \end{cases}$$ and $$f\bullet g:=\sum_{i=1}^m(-1)^{r(m,p;n,q;i)} f\bullet_i g+\sum_{i=1}^q (-1)^{s(m,p;n,q;i)}f\bullet_{-i} g$$ where $r(m,p;n,q;i)$ and $s(m,p;n,q;i)$ are defined in the following way, $$\label{equ-coe} \begin{split} r(m, p;n, q;i)&=p+q+(i-1)(q-n-1), \ \ 1\leq i\leq m,\\ s(m, p;n, q;i)&=p+q+i(p-m-1), \ \ 1\leq i\leq q. \end{split}$$ Then we define $$\label{equ-defn-bracket} [f, g]:=f\bullet g-(-1)^{(m-p-1)(n-q-1)} g\bullet f.$$ We have a double complex $C^{*, *}(A, A)$, which is defined by $$C^{m,p}(A, A):= \begin{cases} C^{m}(A, \overline{\Omega}^p(A)) & \mbox{if}\ \ m\in{\mathbb{Z}}_{>0}, p\in{\mathbb{Z}}_{\geq 0},\\ 0 & \mbox{otherwise} \end{cases}$$ with the horizontal differential $\delta: C^{m}(A, \overline{\Omega}^p(A))\rightarrow C^{m+1}(A, \overline{\Omega}^p(A))$ induced from the bar resolution, and with the vertical differential equal to zero.
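As a quick sanity check on these sign conventions (an illustrative script, not part of the argument): when $p=q=0$ the second sum in $f\bullet g$ is empty, and the exponent $r(m,0;n,0;i)=-(i-1)(n+1)$ agrees modulo $2$ with the classical Gerstenhaber exponent $(i-1)(n-1)$ for $f\circ_i g$, so on $C^*(A, A)$ the bullet product carries the usual circle-product signs.

```python
# Sanity check: for p = q = 0 the sign exponents r(m, p; n, q; i) of
# (equ-coe) reduce mod 2 to the classical Gerstenhaber exponents
# (i - 1)(n - 1) of the circle product f o_i g on C^*(A, A).

def r(m, p, n, q, i):
    # exponent of the sign in front of f bullet_i g, for 1 <= i <= m
    return p + q + (i - 1) * (q - n - 1)

def s(m, p, n, q, i):
    # exponent of the sign in front of f bullet_{-i} g, for 1 <= i <= q
    return p + q + i * (p - m - 1)

for m in range(1, 9):
    for n in range(1, 9):
        for i in range(1, m + 1):
            assert r(m, 0, n, 0, i) % 2 == ((i - 1) * (n - 1)) % 2
print("p = q = 0 recovers the classical Gerstenhaber signs")
```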
Recall that the total complex of $C^{*, *}(A, A)$, denoted by ${\mathop{\mathrm{Tot}}\nolimits}(C^{*, *}(A, A))^*$, is defined as follows, $${\mathop{\mathrm{Tot}}\nolimits}(C^{*, *}(A, A))^n:=\bigoplus_{n=m-p} C^{m,p}(A,A)$$ with the differential induced from $C^{*, *}(A, A)$. Note that the bullet product $\bullet$ is not associative; however, it satisfies the following “weak associativity”. \[lemma-bullet1\] Let $f_i\in C^{m_i}(A, \overline{\Omega}^{p_i}(A))$ for $i=1, 2, 3.$ 1. For $1\leq j\leq m_1$ and $1\leq i\leq m_1+m_2-1$, we have $$(f_1\bullet_j f_2)\bullet_i f_3= \begin{cases} (-1)^{p_1+p_3}f_1\bullet_j(f_2\bullet_{i-j+1}f_3) &\mbox{if} \ \ 0\leq i-j< m_2, \\ (-1)^{p_3+p_2}(f_1\bullet_{i+p_2-m_2+1} f_3)\bullet_j f_2 & \mbox{if} \ \ m_2\leq i-j, i<m_1+m_2-p_2\\ f_3\bullet_{-(p_1+p_2+1+i-m_1-m_2)}(f_1\bullet_j f_2) &\mbox{if} \ \ m_2\leq i-j, m_1+m_2-p_2\leq i\\ (-1)^{p_1+p_3}f_1\bullet_i (f_2\bullet_{-(j-i)} f_3) & \mbox{if} \ \ 1\leq j-i\leq p_3 \\ (-1)^{p_2+p_3}(f_1\bullet_if_3)\bullet_{m_3+j-p_3-1} f_2 & \mbox{if} \ \ p_3<j-i\\ \end{cases}$$ 2. For $1\leq j\leq p_2$ and $1\leq i\leq m_1+m_2-1$, $$(f_1\bullet_{-j} f_2)\bullet_if_3= \begin{cases} (-1)^{p_1+p_3} f_1\bullet_{-j}(f_2\bullet_if_3) & \mbox{if} \ \ 1\leq i\leq m_2\\ (-1)^{p_2+p_3}(f_1\bullet_{p_2-j+i-m_2+1} f_3)\bullet_{-j} f_2 & \mbox{if} \ \ m_2<i<m_1+m_2-p_2+j\\ f_3\bullet_{-(p_1+p_2+1+i-m_1-m_2)}(f_1\bullet_{-j} f_2) & \mbox{if} \ \ m_2<m_1+m_2-p_2+j\leq i\\ f_3\bullet_{-(p_1+p_2+1+i-m_1-m_2)}(f_1\bullet_{-j} f_2) & \mbox{if} \ \ m_2<i, m_1+j\leq p_2\\ \end{cases}$$ 3. For $1\leq j\leq m_1$ and $1\leq i\leq p_3$, $$(f_1\bullet_jf_2)\bullet_{-i} f_3= \begin{cases} (-1)^{p_1+p_3} f_1\bullet_{-i}(f_2\bullet_{-(i+j-1)} f_3) & \mbox{if}\ \ i+j<p_3-2\\ (-1)^{p_2+p_3}(f_1\bullet_{-i}f_3)\bullet_{m_3-p_3+j+i-1} f_2 & \mbox{if} \ \ p_3-2\leq i+j \end{cases}$$ 4.
For $1\leq j\leq p_2$ and $1\leq i\leq p_3$, $$(f_1\bullet_{-j} f_2)\bullet_{-i} f_3=(-1)^{p_1+p_3}f_1\bullet_{-(i+j)}(f_2\bullet_{-i} f_3).$$ Similarly, we have the following lemma. \[lemma-bullet2\] 1. For $1\leq j\leq m_1$ and $1\leq i\leq m_2$, $$f_1\bullet_j(f_2\bullet_i f_3)=(f_1\bullet_jf_2)\bullet_{j+i-1} f_3$$ 2. For $1\leq j\leq m_1$ and $1\leq i\leq p_3$, $$f_1\bullet_j(f_2\bullet_{-i} f_3)= \begin{cases} (-1)^{p_1+p_3}(f_1\bullet_{i+j} f_2)\bullet_{j} f_3 & \mbox{if} \ \ i+j\leq m_1\\ (-1)^{p_1+p_2}f_2\bullet_{-(i+j-m_1+p_1)}(f_1\bullet_j f_3) & \mbox{if}\ \ m_1<i+j \end{cases}$$ 3. For $1\leq j\leq p_2+p_3$ and $1\leq i\leq m_2$, $$f_1\bullet_{-j}(f_2\bullet_if_3)= \begin{cases} (-1)^{p_1+p_3}(f_1\bullet_{-j} f_2)\bullet_i f_3 & \mbox{if} \ \ 0\leq j\leq p_2\\ (-1)^{p_1+p_2}f_2\bullet_i(f_1\bullet_{-(j+m_2-p_2-i)}f_3) & \mbox{if}\ \ 0<j-p_2\leq p_3+i-m_2\\ (f_2\bullet_if_3)\bullet_{m_2+m_3-p_2-p_3-1+j} f_1 & \mbox{if}\ \ 1\leq p_3+i-m_2+1\leq j-p_2\\ (f_2\bullet_if_3)\bullet_{m_2+m_3-p_2-p_3-1+j} f_1 & \mbox{if}\ \ 1\leq j-p_2, p_3+i-m_2<0 \end{cases}$$ 4. For $1\leq j\leq p_2+p_3$ and $1\leq i\leq p_3$, $$f_1\bullet_{-j}(f_2\bullet_{-i} f_3)= \begin{cases} (-1)^{p_1+p_3} (f_1\bullet_{i-j+1} f_2)\bullet_{-j} f_3 & \mbox{if} \ \ 0\leq i-j\\ (-1)^{p_1+p_3}(f_1\bullet_{-(j-i)} f_2)\bullet_{-i} f_3 & \mbox{if} \ \ 0<j-i\leq p_2\\ (f_2\bullet_{-i} f_3)\bullet_{m_2+m_3-p_2-p_3-1+j}f_1 &\mbox{if} \ \ p_2<j-i \end{cases}$$ \[rem-delta\] Similar to [@Ger1], the cup product $\cup$ for $C^{*,*}(A, A)$ can be expressed by the multiplication $\mu$ and the bullet product $\bullet$. 
Namely, for $f\in C^{m,p}(A,A)$ and $g\in C^{n,q}(A, A)$, $$f\cup g=(-1)^q(\mu\bullet_{-p} f)\bullet_{m+1} g.$$ Indeed, we have $$\begin{split} (\mu\bullet_{-p} f)\bullet_{m+1} g(a_{1, m+n})=& d(\mu\bullet_{-p} f\otimes {\mathop{\mathrm{id}}\nolimits})(a_{1,m}\otimes g(a_{m+1, m+n}))\otimes 1\\ =&(-1)^pd({\mathop{\mathrm{id}}\nolimits}_p\otimes \mu)(f(a_{1,m})\otimes g(a_{m+1,m+n})\otimes 1)\\ =& (-1)^q f(a_{1,m})g(a_{m+1,m+n}). \end{split}$$ The differential $\delta$ in $C^{*,*}(A, A)$ can also be expressed using the multiplication and the bullet product $\bullet$. Namely, for $f\in C^{m,p}(A,A)$, $$\delta(f)=[f, -\mu]=(-1)^{m-p-1}[\mu, f].$$ Indeed, by definition, we have $$\begin{split} [f,-\mu](a_{1,m+1})=&\sum_{i=1}^{m}(-1)^{i+p}f\bullet_i \mu (a_{1,m+1})+ \sum_{i=1}^2(-1)^{i(m-p-1)+p}\mu\bullet_if(a_{1,m+1}) +\\ &\sum_{i=1}^{p}(-1)^{i+m-1}\mu\bullet_{-i}f(a_{1,m+1})\\ =&\sum_{i=1}^m(-1)^{i}f(a_{1,i-1}\otimes a_ia_{i+1}\otimes a_{i+2,m+1})+ (-1)^{m-1}d(\mu\otimes {\mathop{\mathrm{id}}\nolimits})(f(a_{1,m})\otimes a_{m+1})\otimes 1+\\ &a_1f(a_{2,m+1})+ \sum_{i=1}^p(-1)^{i+m-1} d({\mathop{\mathrm{id}}\nolimits}_i\otimes \mu)(f(a_{1,m})\otimes a_{m+1})\otimes 1 \\ =& \sum_{i=1}^m(-1)^{i}f(a_{1,i-1}\otimes a_ia_{i+1}\otimes a_{i+2,m+1})+ a_1f(a_{2,m+1})+ (-1)^mf(a_{1,m})a_{m+1}\\ =& \delta(f)(a_{1,m+1}) \end{split}$$ where we used the fact that $d(\overline{\Omega}^p(A))=0$ and $d\circ d=0$ in the third identity. Note that the bullet product $\bullet$ does not define a pre-Lie algebra structure (defined in [@Ger1]) on ${\mathop{\mathrm{Tot}}\nolimits}(C^*(A, A))$, in general. However, in the following proposition we will show that the bullet product $\bullet$ defines a (graded) Lie-admissible algebra structure (defined in Section 2.2 of [@MeVa]) on ${\mathop{\mathrm{Tot}}\nolimits}(C^*(A, A)).$ That is, the associated Lie bracket $[\cdot,\cdot]$ defines a differential graded Lie algebra structure on ${\mathop{\mathrm{Tot}}\nolimits}(C^*(A, A))$.
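The Jacobi identity established below is verified by a case analysis whose content is purely parity arithmetic in the exponents $r$ and $s$ of (\[equ-coe\]). As an independent mechanical check (illustrative only, covering cases (1) and (2) of the case analysis), the following script verifies over a range of small degrees that each pair of matching terms carries coefficients differing exactly by the factor $-(-1)^{p_1+p_3}$, respectively $-(-1)^{p_2+p_3}$, so that they cancel.

```python
# Mechanical parity check of the cancellations in cases (1) and (2) of
# the case analysis for the Jacobi identity; r is the exponent of (equ-coe).
from itertools import product

def r(m, p, n, q, i):
    return p + q + (i - 1) * (q - n - 1)

for m1, m2, m3 in product(range(1, 4), repeat=3):
    for p1, p2, p3 in product(range(0, 3), repeat=3):
        n1, n2, n3 = m1 - p1 - 1, m2 - p2 - 1, m3 - p3 - 1
        for j in range(1, m1 + 1):
            for i in range(1, m1 + m2):
                # coefficient of (f1 ._j f2) ._i f3 in the Jacobi identity
                lhs = n1 * n3 + r(m1, p1, m2, p2, j) + r(m1 + m2 - 1, p1 + p2, m3, p3, i)
                assert lhs % 2 == (n1 * n3 + (j - 1) * n2 + (i - 1) * n3 + p3) % 2
                if 0 <= i - j < m2:
                    # case (1): partner term f1 ._j (f2 ._{i-j+1} f3);
                    # the coefficients differ by -(-1)^{p1+p3}
                    rhs = (n1 * n3 + 1 + r(m2, p2, m3, p3, i - j + 1)
                           + r(m1, p1, m2 + m3 - 1, p2 + p3, j))
                    assert (lhs + 1 + p1 + p3) % 2 == rhs % 2
                if m2 <= i - j and i < m1 + m2 - p2:
                    # case (2): partner term (f1 ._{i+p2-m2+1} f3) ._j f2;
                    # the coefficients differ by -(-1)^{p2+p3}
                    rhs = (n2 * n3 + 1 + n1 * n3 + r(m1, p1, m3, p3, i - n2)
                           + r(m1 + m3 - 1, p1 + p3, m2, p2, j))
                    assert (lhs + 1 + p2 + p3) % 2 == rhs % 2
print("cases (1) and (2) cancel as claimed")
```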
\[prop-lie\] The bracket $[\cdot,\cdot]$ (defined in (\[equ-defn-bracket\])) gives a differential graded Lie algebra (DGLA) structure on the total complex (shifted by $[1]$) $${\mathop{\mathrm{Tot}}\nolimits}(C^{*,*}(A,A))^*[1].$$ As a consequence, $${\mathop{\mathrm{HH}}\nolimits}^{>0}(A, \overline{\Omega}^*(A))[1]:=\bigoplus_{\substack{m\in{\mathbb{Z}}_{>0},\\ p\in {\mathbb{Z}}_{\geq 0}}} {\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))[1]$$ is a ${\mathbb{Z}}$-graded Lie algebra, with the grading $${\mathop{\mathrm{HH}}\nolimits}^{>0}(A, \overline{\Omega}^*(A))_n:=\bigoplus_{\substack{m\in{\mathbb{Z}}_{>0}, p\in {\mathbb{Z}}_{\geq 0}\\ m-p=n}} {\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A)).$$ [[*Proof.*]{}]{}From the definition of the bracket in (\[equ-defn-bracket\]), we observe that $[\cdot,\cdot]$ is skew-symmetric, $$[f_1, f_2]=-(-1)^{(m_1-p_1-1)(m_2-p_2-1)}[f_2, f_1].$$ Now let us check the Jacobi identity. That is, for $f_i\in C^{m_i}(A, \overline{\Omega}^{p_i}(A))$, $i=1, 2, 3,$ we need to check that $$\label{equ-Jacobi} \begin{split} (-1)^{n_1n_3}[[f_1, f_2], f_3]+(-1)^{n_2n_1}[[f_2, f_3], f_1]+ (-1)^{n_3n_2}[[f_3, f_1], f_2]=0 \end{split}$$ where $n_i:=m_i-p_i-1,$ for $i=1, 2, 3.$ By Lemmas \[lemma-bullet1\] and \[lemma-bullet2\], every term on the left-hand side of Identity (\[equ-Jacobi\]) appears exactly twice, so it suffices to compare the coefficients of each pair of matching terms; this can be done case by case. Let us first consider the coefficient of the term $(f_1\bullet_j f_2)\bullet_if_3$ in (\[equ-Jacobi\]), which is $$(-1)^{n_1n_3+r(m_1,p_1; m_2, p_2;j)+r(m_1+m_2-1, p_1+p_2; m_3,p_3; i)}=(-1)^{n_1n_3+(j-1)n_2+(i-1)n_3+p_3}.$$ By Lemma \[lemma-bullet1\], we have the following cases: 1.
If $0\leq i-j<m_2$, we have $$\label{equ-b1} (f_1\bullet_jf_2)\bullet_i f_3=(-1)^{p_1+p_3}f_1\bullet_j(f_2\bullet_{i-j+1}f_3).$$ The coefficient of the term $f_1\bullet_j(f_2\bullet_{i-j+1}f_3)$ is $$(-1)^{n_1n_3+1+r(m_2,p_2;m_3,p_3;i-j+1)+r(m_1,p_1;m_2+m_3-1,p_2+p_3; j)}=(-1)^{n_1n_3+1+(i-j)n_3+(j-1)(n_2+n_3)+p_1}.$$ Hence the coefficients of these two terms in Identity (\[equ-Jacobi\]) differ by the factor $-(-1)^{p_1+p_3}$, so from (\[equ-b1\]) it follows that these two terms cancel in (\[equ-Jacobi\]). 2. If $m_2\leq i-j, i<m_1+m_2-p_2,$ then $$(f_1\bullet_jf_2)\bullet_if_3=(-1)^{p_3+p_2}(f_1\bullet_{i+p_2-m_2+1} f_3)\bullet_j f_2.$$ The coefficient of the term $(f_1\bullet_{i+p_2-m_2+1} f_3)\bullet_j f_2$ is $$(-1)^{n_2n_3+1+n_1n_3+r(m_1,p_1;m_3,p_3; i-n_2)+r(m_1+m_3-1,p_1+p_3; m_2,p_2; j)}=(-1)^{n_1n_3+1+ (j-1)n_2+(i-1)n_3+p_2}.$$ Hence the coefficients of these two terms differ by the factor $-(-1)^{p_2+p_3},$ so they cancel in Identity (\[equ-Jacobi\]). 3. If $m_2\leq i-j, m_1+m_2-p_2\leq i$, then $$(f_1\bullet_jf_2)\bullet_i f_3=f_3\bullet_{-(p_1+p_2+1+i-m_1-m_2)}(f_1\bullet_j f_2).$$ The coefficient of the term $f_3\bullet_{-(p_1+p_2+1+i-m_1-m_2)}(f_1\bullet_j f_2)$ in (\[equ-Jacobi\]) is $$(-1)^{n_2n_3+1+r(m_1,p_1;m_2,p_2;j)+s(m_3,p_3; m_1+m_2-1,p_1+p_2; n_1+n_2-1+i)}= (-1)^{n_1n_3+1+(j-1)n_2+(i-1)n_3+p_3}.$$ Hence the coefficients differ by the factor $-1$, so for the same reason they cancel. 4. If $1\leq j-i\leq p_3$, then $$(f_1\bullet_jf_2)\bullet_i f_3=(-1)^{p_1+p_3}f_1\bullet_i (f_2\bullet_{-(j-i)} f_3).$$ The coefficient of the term $f_1\bullet_i (f_2\bullet_{-(j-i)} f_3)$ in (\[equ-Jacobi\]) is $$(-1)^{n_1n_2+1+n_1(n_2+n_3)+r(m_1,p_1;m_2+m_3-1,p_2+p_3; i)+s(m_2,p_2;m_3,p_3; j-i)}=(-1)^{n_1n_3+1+n_2 (i-1)+n_3(j-1)+p_1}.$$ Hence the coefficients differ by the factor $-(-1)^{p_1+p_3}$, so they cancel. 5.
If $p_3<j-i$, then $$(f_1\bullet_jf_2)\bullet_if_3= (-1)^{p_2+p_3}(f_1\bullet_if_3)\bullet_{m_3+j-p_3-1} f_2.$$ The coefficient of the term $(f_1\bullet_if_3)\bullet_{m_3+j-p_3-1} f_2$ is $$(-1)^{n_2n_3+1+n_1n_3+r(m_1,p_1;m_3,p_3; i)+r(m_1+m_3-1,p_1+p_3; m_2,p_2; n_3+j)}=(-1)^{1+n_1n_3 +n_3(i-1)+(j-1)n_2+p_2}.$$ Hence the coefficients differ by the factor $-(-1)^{p_2+p_3}$, so they cancel. In a similar way, the other cases can be checked. So Identity (\[equ-Jacobi\]) holds. It remains to verify the following identity: for $f\in C^{m,p}(A,A)$ and $g\in C^{n,q}(A, A)$, $$\label{equ-diff} \delta([f, g])=(-1)^{n-q-1}[\delta(f), g]+[f, \delta(g)].$$ Now by Remark \[rem-delta\], it is equivalent to verify that $$[[f,g],\mu]=(-1)^{n-q-1}[[f,\mu], g]+[f, [g,\mu]],$$ which follows directly from the Jacobi identity. Hence Identity (\[equ-diff\]) holds. Therefore, ${\mathop{\mathrm{Tot}}\nolimits}(C^{*,*}(A, A))^*[1]$ is a differential graded Lie algebra. Observe that for any $n\in{\mathbb{Z}}$, we have $$H^n({\mathop{\mathrm{Tot}}\nolimits}(C^{*,*}(A, A))^*)\cong \bigoplus_{\substack{m\in{\mathbb{Z}}_{>0}, p\in {\mathbb{Z}}_{\geq 0}\\ m-p=n}} {\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))$$ hence $${\mathop{\mathrm{HH}}\nolimits}^{>0}(A, \overline{\Omega}^*(A))[1]:=\bigoplus_{\substack{m\in{\mathbb{Z}}_{>0},\\ p\in {\mathbb{Z}}_{\geq 0}}} {\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))[1]$$ is a ${\mathbb{Z}}$-graded Lie algebra since the homology of a differential graded Lie algebra is a graded Lie algebra with the induced Lie bracket. Let $m\in{\mathbb{Z}}_{>0}$ and $p\in{\mathbb{Z}}_{\geq 0}$.
Then we have a short exact sequence (from the normalized bar resolution of $A$), $$\xymatrix{ 0 \ar[r]& \overline{\Omega}^{p+1}(A)\ar[r]^{\iota_{p+1}}& A^{\otimes (p+2)} \ar[r]^{d_p} & \overline{\Omega}^p(A)\ar[r]& 0, }$$ which induces a long exact sequence, $$\label{long-new-1} \xymatrix{ \cdots\ar[r] & {\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))\ar[r]^-{\theta_{m,p}} & {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, \overline{\Omega}^{p+1}(A))\ar[r] & {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, A^{\otimes (p+2)}) \ar[r] & \cdots }$$ where $$\theta_{m,p}:{\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, \overline{\Omega}^{p+1}(A))$$ is the connecting homomorphism. In fact, we can write $\theta_{m,p}$ explicitly. For any $f\in C^{m}(A, \overline{\Omega}^p(A))$, $$\label{equ-theta-formular} \theta_{m,p}(f)(a_{1,m+1})=(-1)^pd(f(a_{1,m})\otimes a_{m+1}\otimes 1).$$ Indeed, $\theta_{m, p}$ is induced from the following lifting, $$\xymatrix{ A\otimes \overline{A}^{\otimes m}\otimes A \ar[d]^{\overline{f}} \ar[dr]^-{f_0}& A\otimes \overline{A}^{\otimes m+1}\otimes A\ar[l]_-{d}\ar[dr]^-{f_1}\\ \overline{\Omega}^p(A) & A\otimes \overline{A}^{\otimes p}\otimes A\ar@{->>}[l]^-{d} & A\otimes \overline{A}^{\otimes p+1}\otimes A\ar[l]^-{d} }$$ where $$\begin{split} \overline{f}(a_{1, m+2})&=a_1f(a_{2, m+1})a_{m+2},\\ f_0(a_{1, m+2})&=(-1)^p a_1f(a_{2, m+1}) \otimes a_{m+2},\\ f_1(a_{1, m+3})&=(-1)^ma_1f(a_{2, m+1})\otimes a_{m+2}\otimes a_{m+3}.
\end{split}$$ Hence $\theta_{m, p}(f)(a_{1, m+1})=(-1)^p d(f(a_{1, m})\otimes a_{m+1} \otimes 1).$ As a result, these connecting homomorphisms $\theta_{m,p}$ induce a homomorphism of degree zero between ${\mathbb{Z}}$-graded $k$-modules, $$\label{equ-connecting} \theta: {\mathop{\mathrm{HH}}\nolimits}^{*}(A, \overline{\Omega}^*(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{*}(A, \overline{\Omega}^*(A))$$ where $$\theta|_{{\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))}=\theta_{m, p}:{\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, \overline{\Omega}^{p+1}(A)).$$ The following proposition shows that $\theta$ is a module homomorphism of the ${\mathbb{Z}}$-graded Lie algebra ${\mathop{\mathrm{HH}}\nolimits}^*(A,\overline{\Omega}^*(A))$. \[prop-hom-ger\] Let $A$ be an associative algebra over a commutative ring $k$. Then the homomorphism of ${\mathbb{Z}}$-graded $k$-modules $$\theta: {\mathop{\mathrm{HH}}\nolimits}^{*}(A, \overline{\Omega}^*(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{*}(A, \overline{\Omega}^*(A))$$ (defined in (\[equ-connecting\]) above) is a module homomorphism of degree zero over the ${\mathbb{Z}}$-graded Lie algebra ${\mathop{\mathrm{HH}}\nolimits}^{*}(A, \overline{\Omega}^*(A))$. [[*Proof.*]{}]{}Let $f\in{\mathop{\mathrm{HH}}\nolimits}^m(A, \overline{\Omega}^p(A))$ and $g\in{\mathop{\mathrm{HH}}\nolimits}^n(A,\overline{\Omega}^q(A))$; it suffices to verify that $$\label{equ-77} \theta_{m+n-1,p+q}([f, g])=[\theta_{m,p}(f),g].$$ First we claim that the following two identities hold, $$\label{equ-78} \begin{split} \theta_{m,p}(f)\bullet g-\theta_{m+n-1,p+q}(f\bullet g)&=(-1)^{m(q-n-1)+p+q} d(f\otimes g\otimes 1)\\ g\bullet \theta_{m,p}(f)-\theta_{m+n-1, p+q}(g\bullet f)&=(-1)^{(p+1)(q-n-1)+p+q} d(f\otimes g\otimes 1). \end{split}$$ It is easy to check that (\[equ-78\]) implies Identity (\[equ-77\]). Now let us verify the two identities in (\[equ-78\]).
Indeed, we have $$\begin{split} \theta_{m,p}(f)\bullet g=&\sum_{i=1}^{m+1}(-1)^{r(m+1, p+1;n, q;i)} \theta_{m,p}(f)\bullet_i g+\sum_{i=1}^q(-1)^{s(m+1,p+1;n, q;i)} \theta_{m,p}(f)\bullet_{-i} g\\ =&\sum_{i=1}^{m}(-1)^{r(m, p;n, q;i)} \theta_{m+n-1,p+q}(f\bullet_i g)+\sum_{i=1}^q(-1)^{s(m,p;n, q;i)} \theta_{m,p}(f)\bullet_{-i} g+\\ &(-1)^{r(m+1,p+1;n,q;m+1)}\theta_{m,p}(f)\bullet_{m+1} g\\ =& \theta_{m+n-1,p+q}(f\bullet g)+(-1)^{m(q-n-1)+p+q}d(f\otimes g\otimes 1). \end{split}$$ Similarly, we have $$\begin{split} g\bullet \theta_{m,p}(f)=& \sum_{i=1}^n (-1)^{r(n,q;m+1,p+1;i)}g\bullet_i\theta_{m,p}(f)+\sum_{i=1} ^{p+1}(-1)^{s(n,q;m+1,p+1;i)}g\bullet_{-i}\theta_{m,p}(f)\\ =&\sum_{i=1}^n(-1)^{r(n,q;m,p;i)}\theta_{m+n-1,p+q}(g\bullet_i f)+\sum_{i=1}^p (-1)^{s(n,q;m,p;i)}\theta_{m+n-1,p+q}(g\bullet_{-i} f)+\\ &(-1)^{s(n,q;m+1,p+1;p+1)}g\bullet_{-p-1}\theta_{m,p}(f)\\ =&\theta_{m+n-1,p+q}(g\bullet f)+(-1)^{(p+1)(q-n-1)+p+q}d(f\otimes g\otimes 1). \end{split}$$ Hence we have proved that the two identities in (\[equ-78\]) hold, which completes the proof. \[rem-H0\] Note that we did not consider ${\mathop{\mathrm{HH}}\nolimits}^0(A, \overline{\Omega}^p(A))$ for $p\in{\mathbb{Z}}_{>0}$ when we defined the Lie bracket $[\cdot,\cdot]$ on ${\mathop{\mathrm{HH}}\nolimits}^{>0}(A, \overline{\Omega}^*(A))$.
In fact, for $p\in{\mathbb{Z}}_{\geq 0}$, let us consider the homomorphism $$\theta_{0,p}: {\mathop{\mathrm{HH}}\nolimits}^0(A, \overline{\Omega}^p(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^1(A, \overline{\Omega}^{p+1}(A)).$$ Via this homomorphism, we can define a Lie bracket action of ${\mathop{\mathrm{HH}}\nolimits}^0(A, \overline{\Omega}^p(A))$ on ${\mathop{\mathrm{HH}}\nolimits}^n(A, \overline{\Omega}^q(A))$ for $n\in{\mathbb{Z}}_{>0}$ and $q\in{\mathbb{Z}}_{\geq 0}$ as follows: for $\alpha\in {\mathop{\mathrm{HH}}\nolimits}^0(A, \overline{\Omega}^p(A))$ and $g\in {\mathop{\mathrm{HH}}\nolimits}^n(A, \overline{\Omega}^q(A)),$ define $$[\alpha, g]:=[\theta_{0,p}(\alpha), g].$$ Denote $${\mathop{\mathrm{HH}}\nolimits}^{\geq 0}(A, \overline{\Omega}^*(A)):=\bigoplus_{\substack{m\in{\mathbb{Z}}_{\geq 0}, p\in {\mathbb{Z}}_{\geq 0}}} {\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A)).$$ In the following proposition, we will prove that there is a Gerstenhaber algebra structure on ${\mathop{\mathrm{HH}}\nolimits}^{\geq 0}(A, \overline{\Omega}^*(A))$. \[prop-ger3\] Let $A$ be an associative algebra over a commutative ring $k$. Suppose that $A$ is projective as a $k$-module. Then $${\mathop{\mathrm{HH}}\nolimits}^{\geq 0}(A, \overline{\Omega}^*(A)):=\bigoplus_{\substack{m\in{\mathbb{Z}}_{\geq 0}, p\in {\mathbb{Z}}_{\geq 0}}} {\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A)),$$ with the Lie bracket $[\cdot,\cdot]$ and the cup product $\cup$, is a Gerstenhaber algebra. [[*Proof.*]{}]{}From Proposition \[prop-lie\], Proposition \[prop-hom-ger\] and Remark \[rem-H0\], it follows that ${\mathop{\mathrm{HH}}\nolimits}^{\geq 0}(A, \overline{\Omega}^*(A))$ is a ${\mathbb{Z}}$-graded Lie algebra. Next let us prove that $\cup$ defines a graded commutative algebra structure. Let $f_i\in{\mathop{\mathrm{HH}}\nolimits}^{m_i}(A, \overline{\Omega}^{p_i}(A))$ for $i=1, 2$.
Then we claim that $$\begin{split} f_1\cup f_2-(-1)^{(m_1-p_1)(m_2-p_2)} f_2 \cup f_1 =\delta(f_2\bullet f_1). \end{split}$$ Indeed, by a direct calculation, we have $$\begin{split} \delta(f_2\bullet f_1)(a_{1, m_1+m_2})=&\sum_{i=1}^{m_2}(-1)^{p_1+p_2+(i-1)(p_1-m_1-1)}\delta(f_2\bullet_i f_1)(a_{1, m_1+m_2}) +\\ &\sum_{i=1}^{p_1} (-1)^{p_1+p_2+i(p_2-m_2-1)} \delta(f_2\bullet_{-i} f_1)(a_{1, m_1+m_2})\\ =& f_1(a_{1, m_1})f_2(a_{m_1+1, m_1+m_2})-(-1)^{(m_1-p_1)(m_2-p_2)} f_2(a_{1, m_2})f_1(a_{m_2+1, m_1+m_2}). \end{split}$$ Hence we obtain, in ${\mathop{\mathrm{HH}}\nolimits}^{m_1+m_2}(A, \overline{\Omega}^{p_1+p_2}(A))$, $$f_1\cup f_2=(-1)^{(m_1-p_1)(m_2-p_2)} f_2\cup f_1.$$ So the graded commutativity of the cup product holds. It remains to verify the compatibility between $\cup$ and $[\cdot,\cdot]$, namely, for $f_i\in{\mathop{\mathrm{HH}}\nolimits}^{m_i}(A,\overline{\Omega}^{p_i}(A))$, $$\label{equ-compa} [f_1\cup f_2, f_3]=[f_1, f_3]\cup f_2+(-1)^{(m_3-p_3-1)(m_1-p_1)}f_1\cup [f_2, f_3].$$ \[claim-equ2\] In the cohomology group ${\mathop{\mathrm{HH}}\nolimits}^{m_1+m_2+m_3-1}(A,\overline{\Omega}^{p_1+p_2+p_3}(A))$, we have the following identity, $$\label{equ-claim2} f_3\bullet (f_1\cup f_2)-(-1)^{(m_2-p_2)(m_3-p_3-1)}(f_3\bullet f_1)\cup f_2-f_1\cup(f_3\bullet f_2)=0.$$ \[claim-equ1\] In the cohomology group ${\mathop{\mathrm{HH}}\nolimits}^{m_1+m_2+m_3-1}(A,\overline{\Omega}^{p_1+p_2+p_3}(A))$, we also have the following identity, $$\label{equ-claim1} (f_1\cup f_2)\bullet f_3-(f_1\bullet f_3)\cup f_2-(-1)^{(m_1-p_1)(m_3-p_3-1)}f_1\cup (f_2\bullet f_3)=0.$$ It is easy to check that these two claims imply Identity (\[equ-compa\]). Now let us prove these two claims; the proofs are very similar to the proof of Theorem 5 in [@Ger1].
[*Proof of Claim \[claim-equ2\].* ]{} First, it is easy to check that the left-hand side of (\[equ-claim2\]) can be written as $$\begin{split} \mbox{LHS}=&\sum_{i=1}^{m_3} (-1)^{r(m_3,p_3;m_1+m_2,p_1+p_2;i)}f_3\bullet_i(f_1\cup f_2)+ \sum_{i=1}^{p_1} (-1)^{s(m_3,p_3;m_1+m_2,p_1+p_2;i)}f_3\bullet_{-i}(f_1\cup f_2)-\\ & \sum_{i=1}^{m_3}(-1)^{(m_2-p_2)(m_3-p_3-1)+r(m_3,p_3;m_1,p_1; i)}(f_3\bullet_if_1)\cup f_2- \\ &\sum_{i=1}^{p_1}(-1)^{(m_2-p_2)(m_3-p_3-1)+s(m_3,p_3;m_1,p_1; i)}(f_3\bullet_{-i} f_1)\cup f_2- \sum_{i=1}^{m_3} (-1)^{r(m_3,p_3;m_2,p_2;i)}f_1\cup (f_3\bullet_i f_2). \end{split}$$ Set $$\begin{split} H_1:=& \sum_{i=1}^{p_1}\sum_{j=1}^{m_3-1}(-1)^{\epsilon_{i,j}} d({\mathop{\mathrm{id}}\nolimits}^{\otimes i}\otimes f_3\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes p_1+p_2-i})(f_1\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes j-1} \otimes f_2\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes m_3-j-1})\otimes 1+\\ & \sum_{i=1}^{m_3}\sum_{j=1}^{m_2-1} (-1)^{\epsilon'_{i,j}}d((f_3\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes p_1+p_2})({\mathop{\mathrm{id}}\nolimits}^{\otimes i-1}\otimes f_1\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes j-1} \otimes f_2\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes m_3-i-j})\otimes 1). \end{split}$$ where, to simplify the formula, we leave the signs $\epsilon_{i,j}$ and $\epsilon'_{i,j}$ of the terms in $H_1$ unspecified. Then we will show that $$\mbox{LHS}=\delta(H_1).$$ Namely, for any element $a_{1, m_1+m_2+m_3-1}\in A^{\otimes m_1+m_2+m_3-1}$, we need to prove that $$\label{equ-com} \mbox{LHS}(a_{1, m_1+m_2+m_3-1})=\delta(H_1)(a_{1, m_1+m_2+m_3-1}).$$ Indeed, this identity can be proved by a recursive procedure. That is, as a first step, we will verify that those terms containing $f_1(a_{1, m_1})$ in (\[equ-com\]) can be cancelled by repeatedly using the fact that $d^2=0$ and $d(\overline{\Omega}^p(A))=0$ for $p\in {\mathbb{Z}}_{>0}$. This can be done by comparing the terms containing $f_1(a_{1, m_1})$ on both sides of (\[equ-com\]).
After cancelling those terms containing $f_1(a_{1, m_1})$, we obtain a new identity $$\mbox{LHS}'(a_{1, m_1+m_2+m_3-1})=\delta(H_1)'(a_{1, m_1+m_2+m_3-1}),$$ which contains no terms involving $f_1(a_{1, m_1})$. Proceeding similarly, we cancel the terms containing $f_1(a_{2, m_1+1})$. After finitely many such steps, all the terms are cancelled, so Identity (\[equ-com\]) holds. Therefore $\mbox{LHS}=0$ in the cohomology group ${\mathop{\mathrm{HH}}\nolimits}^{m_1+m_2+m_3-1}(A, \overline{\Omega}^{p_1+p_2+p_3}(A))$. [*Proof of Claim \[claim-equ1\].* ]{} Similarly, the left-hand side of (\[equ-claim1\]) can be written as $$\begin{split} \mbox{LHS}=&\sum_{i=1}^{m_1}(-1)^{r(m_1+m_2,p_1+p_2;m_3,p_3;i)} (f_1\cup f_2)\bullet_i f_3+\sum_{i=1}^{p_3}(-1)^{s(m_1+m_2,p_1+p_2;m_3,p_3;i)} (f_1\cup f_2)\bullet_{-i} f_3-\\ &\sum_{i=1}^{m_1} (-1)^{r(m_1,p_1;m_3,p_3;i)}(f_1\bullet_if_3)\cup f_2-\sum_{i=1}^{p_3} (-1)^{s(m_1,p_1;m_3,p_3;i)}(f_1\bullet_{-i}f_3)\cup f_2-\\ &\sum_{i=1}^{p_3}(-1)^{(m_1-p_1)(m_3-p_3-1)+s(m_2,p_2;m_3,p_3; i)}f_1\cup(f_2\bullet_{-i} f_3). \end{split}$$ Set $$\begin{split} H_2:=&\sum_{i=1}^{m_1}\sum_{j=1}^{p_3} (-1)^{\epsilon_{i, j}} d((f_1\otimes{\mathop{\mathrm{id}}\nolimits}^{\otimes j-1}\otimes f_2\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes p_3-j})({\mathop{\mathrm{id}}\nolimits}^{\otimes i-1}\otimes f_3\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes m_1+m_2-i-1})\otimes 1)+\\ &\sum_{i=1}^{p_3}(-1)^{\epsilon_i}d(({\mathop{\mathrm{id}}\nolimits}\otimes f_1\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes i-1} \otimes f_2\otimes {\mathop{\mathrm{id}}\nolimits}^{\otimes p_3-i-1})(f_3(a_{1, m_3})\otimes a_{m_3, m_1+m_2+m_3-2})\otimes 1). \end{split}$$ Arguing as above, a direct calculation gives $$\mbox{LHS}=\delta(H_2),$$ hence $\mbox{LHS}=0$ in the cohomology group ${\mathop{\mathrm{HH}}\nolimits}^{m_1+m_2+m_3-1}(A, \overline{\Omega}^{p_1+p_2+p_3}(A))$.\ Therefore, we have completed the proof.
The proof of Proposition \[prop-ger3\] above relies on combinatorial calculations. To understand the Gerstenhaber algebra structure better, it is interesting to investigate whether there is a $B_{\infty}$-algebra structure on the total complex of $C^{*, *}(A, A)$, since from Section 5.2 in [@GeJo] it follows that a $B_{\infty}$-algebra structure on a chain complex $C$ induces a canonical Gerstenhaber algebra structure on its homology $H_*(C)$. Now let us prove the main theorem of this section (Theorem \[thm-gerst\]). [*Proof of Theorem \[thm-gerst\].* ]{} From Proposition \[prop\] and its analogue for the normalized bar resolution, we have the following isomorphism for $m\in {\mathbb{Z}}$, $$\overline{\Phi}_m: \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\ m+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{m+p}(A, \overline{\Omega}^p(A))\rightarrow {\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m]).$$ From Proposition \[prop-hom-ger\] and Proposition \[prop-ger3\], it follows that the structure morphism in the inductive system ${\mathop{\mathrm{HH}}\nolimits}^{m+*}(A, \overline{\Omega}^{*}(A))$, $$\theta_{m, p}:{\mathop{\mathrm{HH}}\nolimits}^{m}(A, \overline{\Omega}^p(A))\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{m+1}(A, \overline{\Omega}^{p+1}(A)),$$ preserves the Gerstenhaber algebra structure. Therefore, there is an induced Gerstenhaber algebra structure on its direct limit $${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\cong \lim_{\substack{\longrightarrow\\p\in{\mathbb{Z}}_{\geq 0}\\ *+p>0}} {\mathop{\mathrm{HH}}\nolimits}^{*+p}(A, \overline{\Omega}^p(A)).$$ So ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^*(A, A)$ is a Gerstenhaber algebra, equipped with the cup product $\cup$ and the induced Lie bracket $[\cdot, \cdot]$.
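On the level of degrees this is consistent: $\cup$ sends bidegrees $(m_1,p_1)\times(m_2,p_2)$ to $(m_1+m_2, p_1+p_2)$ and $[\cdot,\cdot]$ sends them to $(m_1+m_2-1, p_1+p_2)$, so on the total degree $n=m-p$ the cup product has degree $0$ and the bracket degree $-1$, which is the Gerstenhaber convention after the shift by $[1]$. A trivial bookkeeping check (illustrative only):

```python
# Degree bookkeeping: on the total degree n = m - p, the cup product
# has degree 0 and the bracket degree -1, as required for a Gerstenhaber
# structure on the shifted singular Hochschild cohomology.

def total(m, p):
    return m - p

for m1 in range(1, 6):
    for p1 in range(0, 5):
        for m2 in range(1, 6):
            for p2 in range(0, 5):
                # f1 cup f2 lives in bidegree (m1 + m2, p1 + p2)
                assert total(m1 + m2, p1 + p2) == total(m1, p1) + total(m2, p2)
                # [f1, f2] lives in bidegree (m1 + m2 - 1, p1 + p2)
                assert total(m1 + m2 - 1, p1 + p2) == total(m1, p1) + total(m2, p2) - 1
print("cup has total degree 0, bracket has total degree -1")
```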
Let us denote, for $m\in {\mathbb{Z}}_{\geq 0},$ $${\mathop{\mathrm{Ker}}\nolimits}^{m, \infty}(A, A):=\ker({\mathop{\mathrm{HH}}\nolimits}^m(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}^m_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)),$$ and $${\mathop{\mathrm{HH}}\nolimits}^{m, \infty}(A, A):={\mathop{\mathrm{Im}}\nolimits}({\mathop{\mathrm{HH}}\nolimits}^m(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}^m_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)).$$ Then we have $${\mathop{\mathrm{HH}}\nolimits}^{m, \infty}(A, A)={\mathop{\mathrm{HH}}\nolimits}^m(A, A)/{\mathop{\mathrm{Ker}}\nolimits}^{m, \infty}(A, A).$$ ${\mathop{\mathrm{Ker}}\nolimits}^{*, \infty}(A, A)$ is a Gerstenhaber ideal of ${\mathop{\mathrm{HH}}\nolimits}^*(A, A)$. In particular, ${\mathop{\mathrm{HH}}\nolimits}^{*, \infty}(A, A)$ is a Gerstenhaber subalgebra of ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$. [[*Proof.*]{}]{}First let us claim that the natural morphism $$\Phi: {\mathop{\mathrm{HH}}\nolimits}^*(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^*(A, A)$$ is a Gerstenhaber algebra homomorphism. In order to prove this claim, from Proposition \[prop-hom-ger\] and Remark \[rem-H0\], it is sufficient to verify that $$[\theta_{0, 0}({\mathop{\mathrm{HH}}\nolimits}^0(A, A)), \theta_{0, 0}({\mathop{\mathrm{HH}}\nolimits}^0(A, A))]=0,$$ where we recall that $$\theta_{0, 0}: {\mathop{\mathrm{HH}}\nolimits}^0(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}^1(A, \overline{\Omega}^1(A))$$ is the connecting homomorphism in (\[long-new-1\]). 
Let $\lambda,\mu\in {\mathop{\mathrm{HH}}\nolimits}^0(A, A)$. Then from (\[equ-theta-formular\]) it follows that for any $a\in A$, $$\begin{split} \theta_{0, 0}(\lambda)(a)=\lambda\otimes a -\lambda a\otimes 1;\\ \theta_{0, 0}(\mu)(a)=\mu\otimes a -\mu a\otimes 1.\\ \end{split}$$ Hence, by a direct calculation, we obtain that for any $a\in A$, $$\begin{split} [\theta_{0, 0}(\lambda),\theta_{0, 0}(\mu)](a)=0. \end{split}$$ So we have shown that $$\Phi: {\mathop{\mathrm{HH}}\nolimits}^*(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^*(A, A)$$ is a Gerstenhaber algebra homomorphism. Thus its kernel ${\mathop{\mathrm{Ker}}\nolimits}^{*, \infty}(A, A)$ is a Gerstenhaber ideal and its image ${\mathop{\mathrm{HH}}\nolimits}^{*, \infty}(A, A)$ is a Gerstenhaber subalgebra of ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$. Gerstenhaber algebra structure on ${\mathop{\mathrm{HH}}\nolimits}^*(A, A^{\otimes >0})$ ======================================================================================== Let $m,n\in{\mathbb{Z}}_{>0}$ and $p, q\in {\mathbb{Z}}_{>1}$. We define a star product $$\star:C^m(A, A^{\otimes p})\times C^n(A, A^{\otimes q})\rightarrow C^{m +n-1}(A, A^{\otimes p+q-1})$$ as follows. For $f\in C^m(A, A^{\otimes p})$ and $g\in C^n(A, A^{\otimes q})$, denote $$\begin{split} f\star_0 g:&=(f\otimes {\mathop{\mathrm{id}}\nolimits})({\mathop{\mathrm{id}}\nolimits}_{m-1}\otimes g),\\ f\star_1 g:&=({\mathop{\mathrm{id}}\nolimits}_{p-1}\otimes f)(g\otimes {\mathop{\mathrm{id}}\nolimits}). \end{split}$$ Then we define $$\begin{split} f\star g:=(-1)^{(m-1)(n-1)}f\star_0 g+f\star_1 g \end{split}$$ and denote $$\label{equ-bra-a} \{f, g\}:=f\star g-(-1)^{(m-1)(n-1)}g\star f.$$ For the case $p=1$, we can also define a Lie bracket $$\label{equ-bra-b} \{\cdot,\cdot\}: C^m(A, A)\times C^n(A, A^{\otimes q})\rightarrow C^{m+n-1}(A, A^{\otimes q}),$$ by setting, for $f\in C^m(A, A)$ and $g\in C^n(A, A^{\otimes q})$, $$\{f, g\}=f\star g-(-1)^{(m-1)(n-1)}g\circ f,$$ where $g\circ f$ is the circ product (cf. 
(\[equ-circ-product\])) defined in [@Ger1]. We remark that, in general, the star product $\star$ is not associative. However, some associativity properties hold; we list some of them in the following lemma. \[lemma-star\] Let $f_i\in C^{m_i}(A, A^{\otimes p_i}), i=1, 2, 3.$ Then we have the following, 1. for $p_1\in{\mathbb{Z}}_{>0}, p_2,p_3\in{\mathbb{Z}}_{>1}$, $$\begin{split} (f_1\star_0 f_2)\star_0 f_3=f_1\star_0(f_2\star_0 f_3),\\ (f_1\star_0 f_2)\star_1 f_3=(f_1\star_1 f_3)\star_0 f_2,\\ (f_1\star_1 f_2)\star_1 f_3=f_1\star_1(f_2\star_1 f_3), \end{split}$$ 2. for $p_1,p_2\in{\mathbb{Z}}_{>0}, p_3\in{\mathbb{Z}}_{>1}$, $$f_1\star_1(f_2\star_0 f_3)=f_2\star_0(f_1\star_1 f_3).$$ Note that $C^{*>0}(A, A^{\otimes *> 0})$ is a double complex with horizontal and vertical differentials induced from the bar resolution. We consider the total complex $${\mathop{\mathrm{Tot}}\nolimits}(C^{*>0}(A, A^{\otimes *> 0}))^*,$$ which is a differential ${\mathbb{Z}}$-graded $k$-module, with the grading $${\mathop{\mathrm{Tot}}\nolimits}(C^{*>0}(A, A^{\otimes *>0}))^m=\bigoplus_{\substack{p\in{\mathbb{Z}}_{>0}}}C^m(A, A^{\otimes p})$$ and the differential induced by the horizontal differential. \[rem-prod-delta\] We remark that the horizontal differential $\delta$ in $C^{*>0}(A, A^{\otimes *>0})$ can be expressed in terms of the star product, the circ product and the multiplication $\mu$ of $A$. That is, for $f\in C^m(A, A^{\otimes p})$, $$\delta(f)=(-1)^{m-1}\{\mu, f\}.$$ Indeed, we have, $$\begin{split} \{\mu, f\}(a_{1, m+1})=&\mu\star f(a_{1, m+1})-(-1)^{m-1}f\circ \mu(a_{1, m+1})\\ =&(-1)^{m-1}(\mu\otimes {\mathop{\mathrm{id}}\nolimits})(a_1\otimes f(a_{2, m+1}))+({\mathop{\mathrm{id}}\nolimits}_{p-1}\otimes \mu)(f(a_{1,m})\otimes a_{m+1})-\\ &(-1)^{m-1}\sum_{i=1}^m(-1)^{i-1}f(a_{1,i-1}\otimes a_ia_{i+1}\otimes a_{i+2, m+1})\\ =&(-1)^{m-1}\delta(f)(a_{1,m+1}). 
\end{split}$$ ${\mathop{\mathrm{Tot}}\nolimits}(C^{*>0}(A, A^{\otimes *>0}))^*$ is a differential graded Lie algebra (DGLA) with the bracket $\{\cdot, \cdot\}$ defined in (\[equ-bra-a\]). As a consequence, $${\mathop{\mathrm{HH}}\nolimits}^{*>0}(A, A^{\otimes *>0})=\bigoplus_{m,p\in{\mathbb{Z}}_{>0}}{\mathop{\mathrm{HH}}\nolimits}^m(A, A^{\otimes p})$$ is a ${\mathbb{Z}}$-graded Lie algebra with the grading $${\mathop{\mathrm{HH}}\nolimits}^{*>0}(A, A^{\otimes *>0})^m:=\bigoplus_{p\in{\mathbb{Z}}_{>0}} {\mathop{\mathrm{HH}}\nolimits}^m(A, A^{\otimes p}).$$ [[*Proof.*]{}]{}Let us first prove that it is a ${\mathbb{Z}}$-graded Lie algebra. Note that the bracket is graded skew-symmetric. We need to verify the Jacobi identity, namely, for $f_i\in C^{m_i}(A, A^{\otimes p_i}), i=1, 2, 3,$ $$\label{equ-Jacobi2} \begin{split} (-1)^{n_1n_3}\{\{f_1, f_2\}, f_3\}+(-1)^{n_2n_1}\{\{f_2, f_3\}, f_1\}+ (-1)^{n_3n_2}\{\{f_3, f_1\}, f_2\}=0 \end{split}$$ where $n_i:=m_i-1.$ Let us discuss the following three cases. [**Case 1:**]{} $p_i\in{\mathbb{Z}}_{>1}$. From Lemma \[lemma-star\], it follows that each term on the left-hand side of (\[equ-Jacobi2\]) appears exactly twice, hence it is sufficient to compare the coefficients of the two occurrences. For example, the coefficient of $(f_1\star_0 f_2)\star_0 f_3$ in (\[equ-Jacobi2\]) is $(-1)^{n_2(n_1+n_3)}.$ From Lemma \[lemma-star\], we obtain that $$(f_1\star_0 f_2)\star_0 f_3=f_1\star_0 (f_2\star_0 f_3).$$ The coefficient of $f_1\star_0 (f_2\star_0 f_3)$ in (\[equ-Jacobi2\]) is $ -(-1)^{n_2(n_1+n_3)},$ hence these two occurrences cancel in (\[equ-Jacobi2\]). Similarly, we can verify the other terms. 
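To make the sign bookkeeping in Case 1 explicit, let us trace the two occurrences of this term. Inside $(-1)^{n_1n_3}\{\{f_1, f_2\}, f_3\}$, the term $(f_1\star_0 f_2)\star_0 f_3$ arises from the $\star_0$-part of $\{f_1, f_2\}\star f_3$ (note that $\{f_1, f_2\}$ has degree $n_1+n_2$), with total coefficient $$(-1)^{n_1n_3}\cdot(-1)^{(n_1+n_2)n_3}\cdot(-1)^{n_1n_2}=(-1)^{n_2(n_1+n_3)}.$$ Inside $(-1)^{n_2n_1}\{\{f_2, f_3\}, f_1\}$, the term $f_1\star_0(f_2\star_0 f_3)$ arises from $-(-1)^{(n_2+n_3)n_1}f_1\star\{f_2, f_3\}$, with total coefficient $$(-1)^{n_1n_2}\cdot\big(-(-1)^{(n_2+n_3)n_1}\big)\cdot(-1)^{n_1(n_2+n_3)}\cdot(-1)^{n_2n_3}=-(-1)^{n_2(n_1+n_3)},$$ so by the first identity of Lemma \[lemma-star\] the two occurrences indeed cancel. 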
[**Case 2:**]{} $p_1,p_2=1$ and $p_3\in{\mathbb{Z}}_{>1}.$ From Theorem 2 in [@Ger1], we have the following identity, $$f_3\circ\{f_1, f_2\}=(f_3\circ f_1)\circ f_2-(-1)^{(m_1-1)(m_2-1)}(f_3\circ f_2)\circ f_1.$$ Thus, verifying the Jacobi identity (\[equ-Jacobi2\]) is equivalent to verifying the following identity, $$\label{99} \begin{split} \{f_1, f_2\}\star f_3 =&f_1\star (f_2\star f_3 )-(-1)^{(m_2-1)m_3}f_1\star (f_3 \circ f_2)-(-1)^{(m_1-1)(m_2+m_3-1)}(f_2\star f_3 )\circ f_1-\\ &(-1)^{(m_1-1)(m_2-1)}f_2\star (f_1\star f_3 )+(-1)^{(m_1-1)m_2}f_2\star (f_3 \circ f_1)+\\ &(-1)^{(m_3-1)(m_2-1)}(f_1\star f_3 )\circ f_2. \end{split}$$ It is easy, by definition, to check that the following identity holds, $$\label{100} \begin{split} & (f_1\circ f_2)\star f_3+(-1)^{(m_2-1)m_3}f_1\star (f_3 \circ f_2)-(-1)^{(m_3-1)(m_2-1)}(f_1\star f_3 )\circ f_2\\ =&(-1)^{(m_1+m_2)(m_3-1)+(m_1-1)(m_2-1)} f_1\star_0(f_2\star_0f_3)+ f_1\star_1(f_2\star_1 f_3). \end{split}$$ Similarly, we also have $$\label{101} \begin{split} & (f_2\circ f_1)\star f_3+(-1)^{(m_1-1)m_3}f_2\star (f_3 \circ f_1)-(-1)^{(m_3-1)(m_1-1)}(f_2\star f_3 )\circ f_1\\ =&(-1)^{(m_1+m_2)(m_3-1)+(m_1-1)(m_2-1)} f_2\star_0(f_1\star_0f_3)+ f_2\star_1(f_1\star_1 f_3). \end{split}$$ We also note that $$\label{102} \begin{split} &(f_1\star (f_2\star f_3)-(-1)^{(m_1-1)(m_2-1)}f_2\star (f_1\star f_3))\\ =&(-1)^{(m_1-1)(m_2+m_3)}f_1\star_0(f_2\star_0 f_3)+ f_1\star_1(f_2\star_1 f_3)-\\ &(-1)^{(m_1-1)(m_2+m_3)+(m_2-1)(m_1+m_3 )}f_2\star_0(f_1\star_0 f_3)-(-1)^{(m_1-1)(m_2-1)} f_2\star_1(f_1\star_1 f_3). \end{split}$$ So combining (\[100\]), (\[101\]) and (\[102\]), we get Identity (\[99\]). Hence the Jacobi identity holds. [**Case 3:**]{} $p_1=1$, $p_2, p_3\in{\mathbb{Z}}_{>1}$. First, similar to Case 2, we have the following identities for $\{i, j\}=\{ 2, 3\}$. 
$$\begin{split} &(f_i\circ f_1)\star f_j+(-1)^{(m_1-1)m_j}f_i\star(f_j\circ f_1)-(-1)^{(m_1-1)(m_j-1)} (f_i\star f_j)\circ f_1\\ =&(-1)^{(m_1+m_i)(m_j-1)+(m_1-1)(m_i-1)}f_i\star_0(f_1\star_0 f_j)-f_i\star_1(f_1\star_1 f_j). \end{split}$$ Hence each term of the Jacobi identity (\[equ-Jacobi2\]) can be expressed in terms of the star product alone (without the circ product). So, similar to Case 1, by using Lemma \[lemma-star\], we can compare the two occurrences of each term. Thus, we have verified that ${\mathop{\mathrm{Tot}}\nolimits}(C^{*>0}(A, A^{\otimes *> 0}))^*$ is a ${\mathbb{Z}}$-graded Lie algebra. Next we will prove that $\{\cdot,\cdot\}$ is compatible with the differential; that is, for $f_i\in C^{m_i}(A, A^{\otimes p_i})$, $i=1, 2,$ we need to verify the following identity, $$\delta(\{f_1, f_2\})=(-1)^{m_1-1}\{f_1, \delta(f_2 )\}+\{\delta(f_1), f_2\}.$$ By Remark \[rem-prod-delta\], this follows from the Jacobi identity (\[equ-Jacobi2\]). Since $$H^m({\mathop{\mathrm{Tot}}\nolimits}(C^{*>0}(A, A^{\otimes *> 0})))\cong {\mathop{\mathrm{HH}}\nolimits}^{*>0}(A, A^{\otimes *>0})^m,$$ it follows that ${\mathop{\mathrm{HH}}\nolimits}^{*>0}(A, A^{\otimes *>0})$ is a ${\mathbb{Z}}$-graded Lie algebra. Hence we have completed the proof. \[lemma-zero-product\] Let $f_i\in{\mathop{\mathrm{HH}}\nolimits}^{m_i}(A, A^{\otimes p_i})$ for $i=1, 2$ and suppose that $p_2>1$. Then we have $f_1\cup f_2=f_2\cup f_1=0$. [[*Proof.*]{}]{}In fact, we have the following identity for $f_i\in C^{m_i}(A, A^{\otimes p_i})$ and $p_2>1$, $$\begin{split} \delta(f_1\star_0 f_2)&=\delta(f_1)\star_0 f_2+(-1)^{m_1-1} f_1\star_0 \delta(f_2)+(-1)^{m_1} f_1\cup f_2,\\ \delta(f_1\star_1 f_2)&=f_1\star_1 \delta(f_2)+(-1)^{m_2-1} \delta(f_1)\star_1 f_2+(-1)^{m_2} f_2\cup f_1. \end{split}$$ Hence it follows that $f_1\cup f_2=f_2\cup f_1=0.$ Next we will prove that ${\mathop{\mathrm{HH}}\nolimits}^{*>0}(A, A^{\otimes *>0})$ is a Gerstenhaber algebra. 
\[prop-zero-product\] ${\mathop{\mathrm{HH}}\nolimits}^{*>0}(A, A^{\otimes *>0})$ is a Gerstenhaber algebra (without unit) with the cup product and the Lie bracket. [[*Proof.*]{}]{}It remains to verify the compatibility between the cup product and the Lie bracket. By Lemma \[lemma-zero-product\] above, it is sufficient to verify that for $f_i\in {\mathop{\mathrm{HH}}\nolimits}^{m_i}(A, A)$, $i=1, 2$ and $f_3\in {\mathop{\mathrm{HH}}\nolimits}^{m_3}(A, A^{\otimes p_3})$, where $p_3>1$, $$\label{equ-com-bar} \begin{split} \{f_1\cup f_2, f_3\}&=(-1)^{m_1(m_3-1)}f_1\cup\{f_2, f_3\}+(-1)^{(m_1+m_3-1)m_2} f_2\cup\{f_1, f_3\}.\\ \end{split}$$ Recall that we have the following identity in ${\mathop{\mathrm{HH}}\nolimits}^{m_1+m_2+m_3-1}(A, A^{\otimes p_3})$ (cf. Theorem 5 in [@Ger1]), $$\label{id2} \begin{split} f_3\circ (f_1\cup f_2)-f_1\cup (f_3\circ f_2)-(-1)^{m_2(m_3-1)}(f_3\circ f_1)\cup f_2=0.\\ \end{split}$$ Hence it follows that Identity (\[equ-com-bar\]) is equivalent to the following identity, $$\label{identity14} \begin{split} (f_1\cup f_2)\star f_3-(f_1\star f_3)\cup f_2-(-1)^{m_1(m_3-1)}f_1\cup (f_2\star f_3)=0.\\ \end{split}$$ Let us compute the left hand side in (\[identity14\]), $$\begin{split} &((f_1\cup f_2)\star f_3-(f_1\star f_3)\cup f_2-(-1)^{m_1(m_3-1)}f_1\cup (f_2\star f_3))\\ =&(-1)^{(m_1+m_2-1)(m_3-1)}f_1\cup (f_2\star_0 f_3)+(f_1\star_1 f_3)\cup f_2-(-1)^{(m_1-1)(m_3-1)}(f_1\star_0 f_3)\cup f_2-\\ & (f_1\star_1 f_3)\cup f_2-(-1)^{m_1(m_3-1)+(m_2-1)(m_3-1)} f_1\cup (f_2\star_0 f_3)-(-1)^{m_1(m_3-1)}f_1\cup(f_2\star_1 f_3)\\ =&-(-1)^{(m_1-1)(m_3-1)}(f_1\star_0 f_3)\cup f_2 -(-1)^{m_1(m_3-1)}f_1\cup(f_2\star_1 f_3). \end{split}$$ Set $$G:=(-1)^{m_1m_3}(f_1\star_0 f_3)\star_1 f_2.$$ By calculation, we obtain that $$\delta(G)=(-1)^{(m_1-1)(m_3-1)}(f_1\star_0 f_3)\cup f_2 +(-1)^{m_1(m_3-1)}f_1\cup(f_2\star_1 f_3),$$ hence it follows that in ${\mathop{\mathrm{HH}}\nolimits}^{m_1+m_2+m_3-1}(A, A^{\otimes p_3})$, $$(f_1\cup f_2)\star f_3-(f_1\star f_3)\cup f_2-(-1)^{m_1(m_3-1)}f_1\cup (f_2\star f_3)=0.$$ Therefore, we have verified Identity (\[equ-com-bar\]). Let $A$ be an associative algebra over a commutative ring $k$. Let $f_i\in{\mathop{\mathrm{HH}}\nolimits}^{m_i}(A, A)$, $m_i\in{\mathbb{Z}}_{>0}$ for $i=1, 2$. Then for any $g\in {\mathop{\mathrm{HH}}\nolimits}^m(A, A^{\otimes p}), p\in {\mathbb{Z}}_{>1}, m>0,$ we have $$\{f_1\cup f_2, g\}=0.$$ [[*Proof.*]{}]{}This result follows from Lemma \[lemma-zero-product\] and Proposition \[prop-zero-product\]. Special case: self-injective algebra ==================================== In this section, let $A$ be a self-injective algebra over a field $k$. Then $A^e:=A\otimes_kA^{{\mathop{\mathrm{op}}\nolimits}}$ is also a self-injective algebra. Recall that the singular category ${\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A)$ of a self-injective algebra $A$ is equivalent, as a triangulated category, to the stable module category $A$-$\underline{{\mathop{\mathrm{mod}}\nolimits}}$; namely, we have the following proposition. Let $A$ be a self-injective algebra. Then the following natural functor is an equivalence of triangulated categories, $$\mbox{$A$-$\underline{{\mathop{\mathrm{mod}}\nolimits}}$}\rightarrow {\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A).$$ Recall that for any $m\in{\mathbb{Z}}$, we define $${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^m(A, A):={\mathop{\mathrm{Hom}}\nolimits}_{{\EuScript D}_{{\mathop{\mathrm{sg}}\nolimits}}(A^e)}(A, A[m]).$$ Thanks to Corollary 6.4.1 in [@Bu], we have the following descriptions of ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^*(A, A)$ in the case of a self-injective algebra $A$. \[prop-hom\] Let $A$ be a self-injective algebra over a field $k$, denote $A^\vee:={\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A^e)$. Then 1. ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^i(A, A)\cong {\mathop{\mathrm{HH}}\nolimits}^i(A,A)$ for all $i>0$, 2. 
${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-i}(A, A)\cong {\mathop{\mathrm{Tor}}\nolimits}_{i-1}^{A^e}(A, A^{\vee})$ for all $i\geq 2$, 3. there is an exact sequence $$\xymatrix{ 0\ar[r] & {\mathop{\mathrm{HH}}\nolimits}^{-1}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\ar[r] & A^{\vee}\otimes_{A^e}A\ar[r] & {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A) \ar[r] & {\mathop{\mathrm{HH}}\nolimits}^0_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\rightarrow 0. }$$ Since $A$ is self-injective, so is $A^e$. Hence we have for $i\geq 2$ and $n\geq 1$, $${\mathop{\mathrm{Tor}}\nolimits}_{i-1}^{A^e}(A, A^{\vee})\cong {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-i}(A, A)\cong\underline{{\mathop{\mathrm{Hom}}\nolimits}}_{A^e}(\Omega^{i}(A), \Omega^{n+i}(A))\cong {\mathop{\mathrm{Ext}}\nolimits}^{n}_{A^e}(A, \Omega^{n+i}(A)).$$ To simplify the notation, we denote the composite of these isomorphisms by $\lambda_{i, n}$. In fact, we can write the isomorphism $\lambda_{i, n}$ explicitly, for $$\alpha\otimes a_1\otimes \cdots \otimes a_{i-1}\in{\mathop{\mathrm{Tor}}\nolimits}_{i-1}^{A^e}(A, A^{\vee}),$$ we have $$\lambda_{i, n}(\alpha\otimes a_1\otimes \cdots \otimes a_{i-1})\in{\mathop{\mathrm{Ext}}\nolimits}^{n}_{A^e}(A, \Omega^{n+i}(A))$$ which is defined as follows, $$\label{equ-formular} \begin{split} \lambda_{i, n}(\alpha\otimes a_1\otimes \cdots \otimes a_{i-1})(b_1\otimes \cdots \otimes b_n)=\sum_j d(x_j\otimes a_{1, i-1}\otimes y_j\otimes b_{1, n}\otimes 1), \end{split}$$ where we write $\alpha(1):=\sum_jx_j\otimes y_j$. Under the isomorphisms $\lambda_{i, n}$ above, we have the following relations between the cap product and cup product. \[lemma-cap-cup1\] Let $A$ be a self-injective algebra. Then we have the following commutative diagram for $m\geq 1, n\geq 2$ and $n-m\geq 2$. 
$$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}^{-n}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A) \ar[r]^-{\cup}\ar[d]^{{\mathop{\mathrm{id}}\nolimits}\otimes \lambda_n} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{m-n}(A, A)\ar[d]^{\lambda_{n-m}}\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-1}(A, A^{\vee}) \ar[r]^-{\cap} & {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-m-1}(A, A^{\vee}). }$$ [[*Proof.*]{}]{}Take $f\otimes (\alpha\otimes a_1\otimes \cdots \otimes a_{n-1})\in {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-1}(A, A^{\vee})$. Then $$f\cap (\alpha\otimes a_1\otimes \cdots \otimes a_{n-1})=\sum_i x_i f(a_{1, m})\otimes y_i\otimes a_{m+1, n-1}\in{\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-m-1}(A, A^{\vee}),$$ where $\alpha(1):=\sum_i x_i\otimes y_i$. Hence we have $$\lambda_{n-m, m+1}(f\cap (\alpha\otimes a_1\otimes \cdots\otimes a_{n-1})) \in {\mathop{\mathrm{Ext}}\nolimits}^{m+1}(A, \Omega^{n+1}(A)).$$ From the formula in (\[equ-formular\]), it follows that for any $(b_1\otimes \cdots\otimes b_{m+1})\in A^{\otimes (m+1)}$, $$\lambda_{n-m, m+1}(f\cap (\alpha\otimes a_1\otimes \cdots \otimes a_{n-1}))(b_1\otimes \cdots\otimes b_{m+1})= \sum_id(x_if(a_{1, m})\otimes a_{m+1, n-1}\otimes y_i\otimes b_{1, m+1}\otimes 1).$$ We consider the following diagram, $$\label{equ-diagram1} \xymatrix{ {\mathop{\mathrm{Ext}}\nolimits}_{A^e}^m(A, A)\otimes {\mathop{\mathrm{Ext}}\nolimits}_{A^e}^1(A, \Omega^{n+1}(A)) \ar[r]^-{\cup} &{\mathop{\mathrm{Ext}}\nolimits}^{m+1}_{A^e}(A, \Omega^{n+1}(A))\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}^{-n}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\ar[u]_-{\lambda_{-m}\otimes \lambda_{n, 1}} \ar[r]^-{\cup}\ar[d]^{{\mathop{\mathrm{id}}\nolimits}\otimes \lambda_n} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{m-n}(A, 
A)\ar[d]^{\lambda_{n-m}}\ar[u]_-{\lambda_{n-m, m+1}}\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-1}(A, A^{\vee}) \ar@/^8pc/[uu]^{{\mathop{\mathrm{id}}\nolimits}\otimes \lambda_{n, 1}} \ar[r]^-{\cap} & {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-m-1}(A, A^{\vee})\ar@/_8pc/[uu]_{\lambda_{n-m, m+1}} }$$ where $\cup$ in the first row represents the Yoneda product in the bounded derived category ${\EuScript D}^b(A\otimes A^{{\mathop{\mathrm{op}}\nolimits}})$. It is clear that the top square in (\[equ-diagram1\]) is commutative. Let us prove the commutativity of the outer square. For $$f\otimes (\alpha\otimes a_1\otimes \cdots a_{n-1})\in {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-1}(A, A^{\vee}),$$ we have (via the up-right direction in Diagram (\[equ-diagram1\])) $$(\cup \circ ({\mathop{\mathrm{id}}\nolimits}\otimes \lambda_{n, 1}))(f\otimes \alpha\otimes a_{1, n-1})\in {\mathop{\mathrm{Ext}}\nolimits}_{A^e}^{m+1}(A, \Omega^{n+1}(A))$$ which sends $(b_{1}\otimes \cdots\otimes b_{m+1})\in A^{\otimes (m+1)}$ to $$(\cup \circ ({\mathop{\mathrm{id}}\nolimits}\otimes \lambda_{n, 1}))(f\otimes \alpha\otimes a_{1, n-1})(b_{1, m+1})=\sum_i f(b_{1,m})d(x_i\otimes a_{1, n-1}\otimes y_i\otimes b_{m+1}\otimes 1).$$ On the other hand, we have (via the right-up direction in Diagram (\[equ-diagram1\])) $$(\lambda_{n-m, m+1}\circ \cap)(f\otimes \alpha\otimes a_{1, n-1})\in {\mathop{\mathrm{Ext}}\nolimits}_{A^e}^{m+1}(A, \Omega^{n+1}(A))$$ which sends $(b_{1}\otimes \cdots\otimes b_{m+1})\in A^{\otimes (m+1)}$ to $$(\lambda_{n-m, m+1}\circ \cap)(f\otimes \alpha\otimes a_{1, n-1})(b_{1, m+1})=\sum_i d(x_if(a_{1, m})\otimes a_{m+1, n-1}\otimes y_i\otimes b_{1, m+1}\otimes 1).$$ For $j=1, 2, \cdots, n,$ define $$H_j\in {\mathop{\mathrm{Hom}}\nolimits}_k(A^{\otimes m}, \Omega^{n+1}(A))$$ as follows, $$H_j(b_{1,m}):=\sum_i d ({\mathop{\mathrm{id}}\nolimits}_{j}\otimes f)(x_i\otimes a_{1, n-1}\otimes 
y_i\otimes b_{1, m}\otimes 1).$$ By calculation, we have $$(\cup \circ ({\mathop{\mathrm{id}}\nolimits}\otimes \lambda_{n, 1})-\lambda_{n-m, m+1}\circ \cap)(f\otimes \alpha\otimes a_{1, n-1})= \sum_{j=1}^n (-1)^{\epsilon_j}\delta(H_j),$$ where $\epsilon_j\in {\mathbb{Z}}$ depends on $j$. Hence in ${\mathop{\mathrm{Ext}}\nolimits}_{A^e}^{m+1}(A, \Omega^{n+1}(A))$, we have $$(\cup \circ ({\mathop{\mathrm{id}}\nolimits}\otimes \lambda_{n, 1}))(f\otimes \alpha\otimes a_{1, n-1})=(\lambda_{n-m, m+1}\circ \cap)(f\otimes \alpha\otimes a_{1, n-1}).$$ So we have verified that the outer square in Diagram (\[equ-diagram1\]) commutes, hence the lower square also commutes. For the case $n-m=1$, we have the following commutative diagram $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}^{-n}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A) \ar[r]^-{\cup}\ar[d]^{{\mathop{\mathrm{id}}\nolimits}\otimes \lambda_n} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-1}(A, A)\ar@{^{(}->}[d]^{\lambda_{1}}\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-1}(A, A^{\vee}) \ar[r]^-{\cap} & {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{0}(A, A^{\vee}), }$$ where the injection $\lambda_1:{\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-1}(A, A)\rightarrow {\mathop{\mathrm{Tor}}\nolimits}_0^{A^e}(A, A^{\vee})$ is defined in Proposition \[prop-hom\]. 
Indeed, it is sufficient to prove that $$\label{*} f\cap (\alpha\otimes a_{1, n-1})=\sum_ix_if(a_{1, m})\otimes y_i\in {\mathop{\mathrm{Im}}\nolimits}(\lambda_1)$$ for $f\in {\mathop{\mathrm{HH}}\nolimits}^m(A, A)$ and $\alpha\otimes a_{1, n-1}\in {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A^{\vee}).$ From the exact sequence in Proposition \[prop-hom\], $$\xymatrix{ 0\ar[r]& {\mathop{\mathrm{HH}}\nolimits}^{-1}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\ar[r]^-{\lambda_1} & A^{\vee}\otimes_{A^e}A\ar[r]^-{\mu_1} & {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A) \ar[r] & {\mathop{\mathrm{HH}}\nolimits}^0_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\rightarrow 0, }$$ it follows that (\[\*\]) is equivalent to $$\mu_1(f\cap (\alpha\otimes a_{1, n-1}))=\sum_ix_if(a_{1,m})y_i=0.$$ Indeed, we have $$\begin{split} \sum_ix_if(a_{1, m})y_i=&\sum_i f(x_ia_1\otimes a_{2, m})y_i+\sum_i \sum_{j=1}^{m-1}(-1)^{j}f(x_i\otimes a_{1, j-1} \otimes a_ja_{j+1}\otimes a_{j+2, m})y_i+\\ &(-1)^m\sum_if(x_i\otimes a_{1, m-1})a_my_i\\ =&0. \end{split}$$ Here we used $\delta(f)=0$ in the first identity and $d(\alpha\otimes a_{1, n-1})=0$ in the second identity. We are also interested in the case $n-m<0$. Before discussing this case, let us define some operators on Hochschild cohomology and homology. Let $A$ be an associative algebra (not necessarily self-injective) over a field $k$. We will define a (generalized) cap product as follows. For $0\leq n<m$, define $$\cap: C^m(A, A)\otimes C_{n}(A, A^{\vee})\rightarrow C^{m-n-1}(A, A)$$ $(f\cap (\alpha \otimes a_1\otimes \cdots\otimes a_n))(b_1\otimes \cdots \otimes b_{m-n-1}):=\sum_ix_if(a_1\otimes \cdots \otimes a_n\otimes y_i\otimes b_1\otimes \cdots \otimes b_{m-n-1}),$ where $\alpha(1):=\sum_ix_i\otimes y_i$. 
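To illustrate the conventions, consider the extreme case $n=0$: then $\alpha\in C_0(A, A^{\vee})=A^{\vee}$ with $\alpha(1)=\sum_ix_i\otimes y_i$, and for $f\in C^m(A, A)$ the definition reads $$(f\cap \alpha)(b_1\otimes\cdots\otimes b_{m-1})=\sum_ix_if(y_i\otimes b_1\otimes\cdots\otimes b_{m-1})\in C^{m-1}(A, A).$$ In particular, for $m=1$ we obtain the element $f\cap\alpha=\sum_ix_if(y_i)\in C^0(A, A)=A$. 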
\[lemma-cap\] The cap product $\cap$ above induces a well-defined operation on the level of (co)homology, $$\cap: {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_n(A, A^{\vee})\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{m-n-1}(A, A).$$ [[*Proof.*]{}]{}It is sufficient to check that $$\begin{split} Z^m(A, A)\cap Z_n(A, A^{\vee})\subset Z^{m-n-1}(A, A),\\ Z^m(A, A)\cap B_n(A, A^{\vee})\subset B^{m-n-1}(A, A),\\ B^m(A, A)\cap Z_n(A, A^{\vee})\subset B^{m-n-1}(A, A).\\ \end{split}$$ Let $f\in Z^m(A, A)$ and $z=\sum \alpha\otimes a_{ 1}\otimes \cdots \otimes a_{n}\in Z_n(A, A^{\vee}).$ Then we have $$\begin{split} &\delta(f\cap z)(b_1\otimes \cdots \otimes b_{m-n})\\ =&b_1(f\cap z)(b_{2, m-n})+\sum_{j=1}^{m-n-1}(-1)^j(f\cap z)(b_{1, j-1}\otimes b_jb_{j+1}\otimes b_{j+2, m-n})+\\ &(-1)^{m-n}(f\cap z)(b_{1, m-n-1})b_{m-n}\\ =&b_1\sum_k x_{ k}f(a_{1, n}\otimes y_k\otimes b_{2, m-n})+\sum_{j=1}^{m-n-1}(-1)^j\sum_k x_{ k}f(a_{1, n}\otimes y_k\otimes b_{1, j-1}\otimes b_jb_{j+1}\otimes b_{j+2, m-n})+\\ &(-1)^{m-n}\sum_k x_{ k}f(a_{1, n}\otimes y_k\otimes b_{1, m-n-1})b_{m-n}\\ =&0, \end{split}$$ where the last identity follows from the fact that $\delta(f)=0$ and $dz=0$. Similarly, for $f\in{\mathop{\mathrm{Hom}}\nolimits}(A^{\otimes m-1}, A)$ and $z\in Z_n(A, A^{\vee})$ we have $$\delta(f)\cap z=\delta(f\cap z),$$ and for $f\in Z^m(A, A)$ and $z\in C_{n+1}(A, A^{\vee})$ $$f\cap \partial_{n+1}(z)=\delta(f\cap z).$$ Let $A$ be an associative algebra. We will also define a (generalized) cup product on ${\mathop{\mathrm{Tor}}\nolimits}_*^{A^e}(A, A^{\vee})$. 
Let $m, n\in {\mathbb{Z}}_{\geq 0}$. The map $$\label{gene-cup} \cup:C_{m}(A, A^{\vee})\times C_{n}(A, A^{\vee})\rightarrow C_{m+n+1}(A, A^{\vee})$$ is defined as follows: for $$\sum_i (x_i\otimes y_i)\otimes a_{1, m}\in C_{m}(A, A^{\vee})$$ and $$\sum _j (x_j'\otimes y_j')\otimes b_{1,n}\in C_{n}(A, A^{\vee}),$$ we define $$(\sum_i x_i\otimes y_i\otimes a_{1, m})\cup(\sum_j x_j'\otimes y_j'\otimes b_{1,n}):=\sum_{i, j} (x_i\otimes y_j'y_i)\otimes a_{1,m}\otimes x_j'\otimes b_{1,n}.$$ The generalized cup product defined above is well-defined on ${\mathop{\mathrm{Tor}}\nolimits}_{> 0}^{A^e}(A, A^{\vee})$. Moreover, it is graded commutative. [[*Proof.*]{}]{}Let $$\alpha=\sum (x_i\otimes y_i)\otimes a_{1, m}\in Z_{m}(A, A^{\vee})$$ and $$\beta=\sum (x_j'\otimes y_j')\otimes b_{1,n}\in Z_{n}(A, A^{\vee}),$$ then we have the following, $$\begin{split} d(\alpha\cup\beta)=&d(\sum (x_i\otimes y_j'y_i)\otimes a_{1,m}\otimes x_j'\otimes b_{1,n})\\ =&\sum (x_ia_1\otimes y_j'y_i)\otimes a_{2,m}\otimes x_j'\otimes b_{1,n}+\\ &\sum^{m-1}_{k=1}(-1)^{k} \sum (x_i\otimes y_j'y_i)\otimes a_{1,k-1}\otimes a_ka_{k+1}\otimes a_{k+2, m}\otimes x_j'\otimes b_{1,n}+\\ &\sum (-1)^{m} \sum (x_i\otimes y_j'y_i)\otimes a_{1, m-1}\otimes a_mx_j'\otimes b_{1,n}+\\ &\sum(-1)^{m+1}\sum (x_i\otimes y_j'y_i)\otimes a_{1, m}\otimes x_j'b_1\otimes b_{2,n}+\\ &\sum\sum^{n-1}_{k=1}(-1)^{m+1+k}(x_i\otimes y_j'y_i)\otimes a_{1, m}\otimes x_j'\otimes b_{1,k-1}\otimes b_kb_{k+1}\otimes b_{k+2,n}+\\ &\sum (-1)^{m+n+1}(x_i\otimes b_ny_j'y_i)\otimes a_{1, m}\otimes x_j'\otimes b_{1,n-1}\\ =&0, \end{split}$$ where the last identity follows from the fact that $d\alpha=d\beta=0$. Hence we have $$Z_m(A, A^{\vee})\cup Z_{n}(A, A^{\vee})\subset Z_{m+n+1}(A, A^{\vee}).$$ Similarly, the following inclusions can be verified, $$\begin{split} Z_m(A, A^{\vee})\cup B_{n}(A, A^{\vee})\subset B_{m+n+1}(A, A^{\vee}),\\ B_m(A, A^{\vee})\cup Z_{n}(A, A^{\vee})\subset B_{m+n+1}(A, A^{\vee}). 
\end{split}$$ It remains to verify the graded commutativity. By calculation, we have $$\begin{split} \alpha\cup \beta-(-1)^{mn}\beta\cup \alpha=\sum \sum_{k=0}^{m}(-1)^{m+n(k-1)}d((x_i\otimes y_i) \otimes a_{1, m-k}\otimes x_j'\otimes b_{1,n}\otimes y_j'\otimes a_{m-k+1,m}). \end{split}$$ Hence, we have $$\alpha\cup \beta-(-1)^{mn}\beta\cup \alpha\in B_{m+n+1}(A, A^{\vee}).$$ Thus, $\alpha\cup\beta=(-1)^{mn}\beta\cup \alpha$. Therefore, we have finished the proof. Let us go back to our special case; then we have the following. \[lemma-cap-cup2\] Let $A$ be a self-injective algebra over a field $k$. Then the following diagram commutes for $m\geq 1, n\geq 2$ and $m-n\geq 1,$ $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-n}(A, A)\ar[r]^-{\cup} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{m-n}(A, A)\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-1}(A, A^{\vee})\ar[r]^-{\cap} \ar[u]^{{\mathop{\mathrm{id}}\nolimits}\otimes \lambda_{n}}&{\mathop{\mathrm{HH}}\nolimits}^{m-n}(A, A)\ar[u]^{\cong}. }$$ [[*Proof.*]{}]{}The proof is similar to the proof of Lemma \[lemma-cap-cup1\]. For $m-n=0,$ we have the following commutative diagram, $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-m}(A, A)\ar[r]^-{\cup} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{0}(A, A)\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{m-1}(A, A^{\vee})\ar[r]^-{\cap} \ar[u]^{{\mathop{\mathrm{id}}\nolimits}\otimes \lambda_{m}}&{\mathop{\mathrm{HH}}\nolimits}^{0}(A, A)\ar@{->>}[u]^{\pi_0}, }$$ where $\pi_0: {\mathop{\mathrm{HH}}\nolimits}^0(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^0(A, A)$ is the surjection defined in Proposition \[prop-hom\]. 
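In the lowest degree, the generalized cup product (\[gene-cup\]) takes a particularly simple form: for two elements $\sum_ix_i\otimes y_i$ and $\sum_jx_j'\otimes y_j'$ of $C_0(A, A^{\vee})=A^{\vee}$, we have $$\Big(\sum_ix_i\otimes y_i\Big)\cup\Big(\sum_jx_j'\otimes y_j'\Big)=\sum_{i, j}(x_i\otimes y_j'y_i)\otimes x_j'\in C_1(A, A^{\vee}),$$ which is the product computing ${\mathop{\mathrm{Tor}}\nolimits}_0^{A^e}(A, A^{\vee})\cup{\mathop{\mathrm{Tor}}\nolimits}_0^{A^e}(A, A^{\vee})\rightarrow{\mathop{\mathrm{Tor}}\nolimits}_1^{A^e}(A, A^{\vee})$ in Remark \[rem-negative-cup\] below. 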
Similarly, we also have the following lemma. \[lemma-negative-cup\] Let $A$ be a self-injective algebra over a field $k$. Then we have the following commutative diagram for $m\geq 2, n\geq 2$, $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-m}(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-n}(A, A)\ar[r]^-{\cup} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-m-n}(A, A)\\ {\mathop{\mathrm{Tor}}\nolimits}_{m-1}^{A^e}(A, A^{\vee})\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{n-1}(A, A^{\vee})\ar[r]^-{\cup} \ar[u]^{\lambda_m\otimes \lambda_{n}}&{\mathop{\mathrm{Tor}}\nolimits}_{m+n-1}^{A^e}(A, A^{\vee})\ar[u]_{\lambda_{m+n}}. }$$ \[rem-negative-cup\] For $m=n=1$, we have the following commutative diagram, $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-1}(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-1}(A, A)\ar[r]^-{\cup} \ar[d]_-{(\lambda_1\otimes \lambda_1)^{-1}}& {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-2}(A, A)\\ {\mathop{\mathrm{Tor}}\nolimits}_{0}^{A^e}(A, A^{\vee})\otimes {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{0}(A, A^{\vee})\ar[r]^-{\cup} &{\mathop{\mathrm{Tor}}\nolimits}_{1}^{A^e}(A, A^{\vee})\ar[u]_{\lambda_{2}}. }$$ In conclusion, the graded commutative associative algebra structure on $({\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A), \cup)$ is now well understood: it can be described in terms of the (generalized) cap product and the (generalized) cup product. Next we will investigate the graded Lie algebra structure on $({\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A), [\cdot,\cdot])$ in the case of a symmetric algebra $A$. Before turning to the case of symmetric algebras, let us recall the Connes B-operator on Hochschild homology and the structure of a Batalin-Vilkovisky (BV) algebra. 
For more details, we refer to [@Con; @Lod; @Xu]. Let $A$ be an associative algebra over a commutative ring $k$. We define an operator on the Hochschild chain complex, $$B: C_r(A, A)\rightarrow C_{r+1}(A, A),$$ which sends $a_0\otimes \cdots \otimes a_r\in C_r(A, A)$ to $$\begin{split} B(a_0\otimes \cdots \otimes a_r):&=\sum_{i=0}^r(-1)^{ir}1\otimes a_i\otimes \cdots \otimes a_r\otimes a_0\otimes \cdots \otimes a_{i-1}+\\ &(-1)^{ir}a_i\otimes 1\otimes a_{i+1}\otimes \cdots \otimes a_r\otimes a_0\otimes \cdots \otimes a_{i-1}. \end{split}$$ It is easy to check that $B$ is a chain map satisfying $$B\circ B=0,$$ which induces an operator (still denoted by $B$), $$B : {\mathop{\mathrm{HH}}\nolimits}_r(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}_{r+1}(A, A).$$ We call this operator the Connes $B$-operator. We also consider the Connes $B$-operator on the normalized Hochschild complex, $$\overline{B}: \overline{C}_r(A, A)\rightarrow \overline{C}_{r+1}(A, A),$$ which sends $a_{0, r}\in\overline{C}_r(A, A)$ to $$\overline{B}(a_{0, r}):=\sum_{i=0}^r(-1)^{ir}1\otimes a_i\otimes \cdots \otimes a_r\otimes a_0\otimes \cdots \otimes a_{i-1}.$$ \[defn-BV\] A Batalin-Vilkovisky algebra (BV algebra for short) is a Gerstenhaber algebra $({\mathcal{H}}^*, \cup, [\cdot, \cdot])$ together with an operator $\Delta: {\mathcal{H}}^*\rightarrow {\mathcal{H}}^{*-1}$ of degree $-1$ such that $\Delta\circ \Delta =0, \Delta(1)=0$ and satisfying the following BV identity, $$[\alpha, \beta]=(-1)^{|\alpha|+1}\Delta(\alpha\cup \beta)+(-1)^{|\alpha|}\Delta(\alpha)\cup \beta+\alpha\cup \Delta(\beta)$$ for homogeneous elements $\alpha, \beta \in{\mathcal{H}}^*$. From here onwards, assume that $k$ is a field. Let $A$ be a symmetric $k$-algebra (i.e. there is a symmetric, associative and non-degenerate inner product $\langle\cdot, \cdot \rangle: A\otimes A\rightarrow k$). 
Then the inner product $\langle\cdot, \cdot\rangle$ induces an $A$-$A$-bimodule isomorphism $$\label{equ-t} \begin{tabular}{rccc} $t: $ & $A$ & $\rightarrow$ &$ D(A):={\mathop{\mathrm{Hom}}\nolimits}_k(A, k)$\\ & $a $ & $\mapsto$ & $\langle a, -\rangle,$ \end{tabular}$$ where the $A$-$A$-bimodule structure on $D(A)$ is given as follows, for $f\in D(A)$ and $a\otimes b\in A\otimes A^{{\mathop{\mathrm{op}}\nolimits}}$, $$((a\otimes b)f)(c)=f(cba).$$ This isomorphism $t$ induces the following isomorphism $$\begin{tabular}{rccc} $t\otimes {\mathop{\mathrm{id}}\nolimits}: $ & $A\otimes A$ & $\rightarrow$ &$ D(A)\otimes A \cong{\mathop{\mathrm{End}}\nolimits}(A)$\\ & $a\otimes b $ & $\mapsto$ & $t(a)\otimes b\mapsto ( x\mapsto t(a)(x) b).$ \end{tabular}$$ We define the element $$(t\otimes {\mathop{\mathrm{id}}\nolimits})^{-1}({\mathop{\mathrm{id}}\nolimits}):=\sum_i e_i\otimes f_i\in A\otimes A$$ as the Casimir element of $A$ (with respect to the inner product $\langle \cdot, \cdot \rangle$) (cf. [@Brou]). The following proposition states some properties of the Casimir element. \[prop-broue\] 1. For all $a, a'\in A$, we have $$\sum_i ae_ia'\otimes f_i=\sum_i e_i\otimes a'f_ia.$$ 2. The map $$\begin{tabular}{rccc} $A$ & $\rightarrow$ &$ {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A^e)$\\ $a $ & $\mapsto$ & $\sum_i e_ia\otimes f_i.$ \end{tabular}$$ is a right $A$-$A$-bimodule isomorphism. 
Here $A$ is a right $A$-$A$-bimodule defined as follows, for $a\in A$ and $b\otimes c\in A^e$, $$a\cdot(b\otimes c):=cab,$$ the right $A$-$A$-bimodule structure on ${\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A^e)$ is given by, for $f\in {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A^e)$ and $b\otimes c\in A^e$, $$f\cdot(b\otimes c)(a):=f(cba),$$ and we identify ${\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A^e)$ as $$(A\otimes A)^A:=\{\sum a_i\otimes b_i\in A\otimes A \ | \ \sum aa_i\otimes b_i=\sum a_i\otimes b_ia, \ \mbox{for any $a\in A$} \}.$$ Since we have the following isomorphisms via the isomorphism $t$ defined in (\[equ-t\]) above, $$\begin{split} D(C_n(A, A))&\cong D(A\otimes_{A^e}D^2(A^{\otimes n+2}))\cong {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, D(A^{\otimes n+2}))\\ &\cong {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(D^2(A^{\otimes n+2}), D(A))\cong C^n(A, A) \end{split}$$ where the third isomorphism follows from the fact that $D$ induces an equivalence between $A^e$-${\mathop{\mathrm{mod}}\nolimits}$ and $(A^e$-${\mathop{\mathrm{mod}}\nolimits})^{{\mathop{\mathrm{op}}\nolimits}}$ and the fourth isomorphism is induced from the isomorphism $t$, we have a duality between Hochschild homology and cohomology, for $n\in {\mathbb{Z}}_{\geq 0}$, $$\label{equ-dua} {\mathop{\mathrm{HH}}\nolimits}_n(A, A)^*\cong {\mathop{\mathrm{HH}}\nolimits}^n(A, A).$$ Hence from Propositions \[prop-hom\] and \[prop-broue\], we have for $n\geq 1$, $$\kappa_{n}:{\mathop{\mathrm{Ext}}\nolimits}^{1}(A, \Omega^{n+2}(A))\cong {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-n-1}(A, A)\cong{\mathop{\mathrm{Tor}}\nolimits}_n^{A^e}(A, A^{\vee})\cong {\mathop{\mathrm{Tor}}\nolimits}_n^{A^e}(A, A)\cong {\mathop{\mathrm{HH}}\nolimits}^n(A, A)^*.$$ In the rest of the paper, for simplicity, we often write $\kappa_{n}$ for any of the natural isomorphisms above. 
For example, we have the following isomorphism, $$\begin{tabular}{ccccc} $ \kappa_{n}:$& ${\mathop{\mathrm{Tor}}\nolimits}_{n}^{A^e}(A, A)$ & $\rightarrow$ & ${\mathop{\mathrm{Ext}}\nolimits}^{1}(A, \Omega^{n+2}(A))$\\ & $a_0\otimes a_1\otimes \cdots \otimes a_n$ & $\mapsto$ & $(b\mapsto \sum_i d (e_ia_0\otimes a_{1, n}\otimes f_i\otimes b\otimes 1)).$ \end{tabular}$$ Moreover we have the following result on symmetric algebras. \[thm-tra\] Let $A$ be a symmetric algebra over a field $k$. Then $$({\mathop{\mathrm{HH}}\nolimits}^*(A, A), \cup, [\cdot, \cdot], \Delta)$$ is a BV algebra, where the BV-operator $\Delta$ is the dual of the Connes $ B $-operator via the duality (\[equ-dua\]) above. Now let us state our propositions. \[prop-tor1\] Let $A$ be a symmetric algebra over a field $k$. Then we have the following commutative diagram for $m\geq 1, n\geq 2$ and $n-m\geq 1,$ $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Ext}}\nolimits}^1_{A^e}(A, \Omega^{n+1}(A)) \ar[r]^-{[\cdot, \cdot]} & {\mathop{\mathrm{Ext}}\nolimits}^m_{A^e}(A, \Omega^{n+1}(A))\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-n}(A, A)\ar[r]^-{[\cdot, \cdot]} \ar[u]^{\cong} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{m-n-1}(A, A)\ar[u]_{\cong}\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)\ar@/^8pc/[uu]^{{\mathop{\mathrm{id}}\nolimits}\otimes \kappa_n}\ar[r]^-{\{\cdot, \cdot\}} \ar[u]^{{\mathop{\mathrm{id}}\nolimits}\otimes \kappa_n} & {\mathop{\mathrm{Tor}}\nolimits}_{n-m}^{A^e}(A, A)\ar[u]_{\kappa_{n-m+1}}\ar@/_8pc/[uu]_-{\kappa_{n-m+1}} }$$ where $\{\cdot, \cdot\}$ is defined as follows, for $f\in{\mathop{\mathrm{HH}}\nolimits}^m(A, A), \alpha\in{\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)$, $$\begin{split} \{f, \alpha\}:=(-1)^{m}\Delta(f)\cap \alpha+f\cap B (\alpha)+(-1)^{m+1} B (f\cap\alpha). 
\end{split}$$ [[*Proof.*]{}]{} From the definition of the Gerstenhaber bracket $[\cdot,\cdot]$, it follows that the top square is commutative. Hence it remains to check the commutativity of the outer square. Let $f\in {\mathop{\mathrm{HH}}\nolimits}^m(A, A)$ and $$z:=\sum a_0\otimes a_1\otimes \cdots \otimes a_{n-1}\in {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A).$$ Then $$\begin{split} & \kappa_{n-m+1}(\{f, z\})(b_1\otimes \cdots \otimes b_m)\\ =&\sum (-1)^{m+(m-1)(n-1)} d(e_ja_0\Delta(f)(a_{1, m-1})\otimes a_{m, n-1}\otimes f_j\otimes b_{1, m}\otimes 1)+\\ &\sum \sum_{i=0}^{n-1}(-1)^{i(n-1)+mn}d(e_j(f\otimes {\mathop{\mathrm{id}}\nolimits})(a_{i, n-1}\otimes a_{0, i-1})\otimes f_j\otimes b_{1, m}\otimes 1)+\\ &\sum (-1)^{m+1+m(n-1)}d(e_j\otimes a_0f(a_{1, m})\otimes a_{m+1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1)+\\ &\sum \sum_{i=1}^{n-m-1}(-1)^{m+1+m(n-1)+i(n-m-1)}d(e_j\otimes a_{i+m, n-1}\otimes a_0f(a_{1, m})\otimes a_{m+1, i+m-1}\otimes f_j\otimes b_{1, m}\otimes 1)\\ \end{split}$$ $$\begin{split} & [f, \kappa_n(z)](b_1\otimes \cdots \otimes b_m)=f\bullet \kappa_n(z)(b_{1, m})-\kappa_n(z)\circ f(b_{1, m})\\ =&\sum \sum_{i=1}^m(-1)^{(m+1-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes d(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_i\otimes 1)\otimes b_{i+1, m})\otimes 1+\\ &\sum_{i=1}^{n+1}(-1)^{n+1+(n+1-i)m}d({\mathop{\mathrm{id}}\nolimits}_i\otimes f)(d(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_1\otimes 1)\otimes b_{2, m}\otimes 1)-\\ & \sum d(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes f(b_{1, m})\otimes 1)\\ =&\sum \sum_{i=1}^m(-1)^{(m-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m})\otimes 1+\\ &\sum \sum_{i=1}^{n}(-1)^{(n+1-i)m}d({\mathop{\mathrm{id}}\nolimits}_i\otimes f)(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1) \end{split}$$ We claim that $$\begin{split} 0=&\sum \sum_{i=1}^{n-m+1}(-1)^{i(n-1)+mn}d(e_j(f\otimes {\mathop{\mathrm{id}}\nolimits})(a_{i, n-1}\otimes a_{0, i-1})\otimes 
f_j\otimes b_{1, m}\otimes 1)+\\ &\sum (-1)^{m+1+m(n-1)}d(e_j\otimes a_0f(a_{1, m})\otimes a_{m+1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1)+\\ &\sum \sum_{i=1}^{n-m-1}(-1)^{m+1+m(n-1)+i(n-m-1)}d(e_j\otimes a_{i+m, n-1}\otimes a_0f(a_{1, m})\otimes a_{m+1, i+m-1}\otimes f_j\otimes b_{1, m}\otimes 1)-\\ &\sum \sum_{i=1}^{n-m}(-1)^{(n+1-i)m}d(id_i\otimes f)(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1) \end{split}$$ Hence we have $$\label{equ-tor1} \begin{split} &( \kappa_{n-m+1}(\{f, z\})-[f, \kappa_n(z)])(b_1\otimes \cdots \otimes b_m)\\ =&\sum (-1)^{m+(m-1)(n-1)} d(e_ja_0\Delta(f)(a_{1, m-1})\otimes a_{m, n-1}\otimes f_j\otimes b_{1, m}\otimes 1)+\\ &\sum(-1)^{mn}d(e_jf(a_{0, m-1})\otimes a_{m, n-1}\otimes f_j\otimes b_{1, m}\otimes 1+\\ &\sum \sum_{i=n-m+2}^{n-1}(-1)^{i(n-1)+mn}d(e_j(f\otimes {\mathop{\mathrm{id}}\nolimits})(a_{i, n-1}\otimes a_{0, i-1})\otimes f_j\otimes b_{1, m}\otimes 1)-\\ &\sum \sum_{i=1}^m(-1)^{(m-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m})\otimes 1 -\\ & \sum \sum_{i=n-m+1}^{n}(-1)^{(n+1-i)m}d(id_i\otimes f)(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1) \end{split}$$ We have the following $$\label{equ-tor2} \begin{split} &\sum_j d({\mathop{\mathrm{id}}\nolimits}_n\otimes f)(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1)\\ =& \sum_{j, k}d(e_ja_0\otimes a_{1, n-1} \otimes \langle e'_kf(f_j\otimes b_{1, m-1}), 1\rangle f'_k\otimes b_m\otimes 1)\\ = &\sum_{j, k}\sum_{l=1}^{m-1}\sum_{i=l}^{m-1}(-1)^{(m+l)(m-i)+1}\delta(d(e_ja_0\otimes a_{1,n-l}\otimes \langle f({\mathop{\mathrm{id}}\nolimits}_{m-i-1}\otimes e_k'\otimes a_{n-l+1, n-1}\otimes f_j\otimes {\mathop{\mathrm{id}}\nolimits}_{i-l}), 1\rangle f_k'\\ &\otimes {\mathop{\mathrm{id}}\nolimits}_{l}\otimes 1)(b_{1, m})+\sum_{j}\sum_{i=1}^{m}(-1)^{m(m-1)}d(e_ja_0\otimes a_{1, n-1}\otimes \Delta(f)(b_{1, m-1})f_j\otimes b_m\otimes 1)+\\ 
&\sum_{k}\sum_{l=1}^m(-1)^{(m-1)(m+l)}d(f(b_{1, m-l}\otimes e_k'\otimes a_{n-l+1, n-1})a_0\otimes a_{1, n-l}\otimes f_k'\otimes b_{m-l+1, m}\otimes 1)+\\ & \sum_{j}\sum_{l=1}^{m-1}(-1)^{ml+1}d(e_ja_0\otimes a_{1, n-l-1}\otimes f(a_{n-l, n-1}\otimes f_j\otimes b_{1, m-l-1})\otimes b_{m-l, m}\otimes 1\\ \end{split}$$ Combining (\[equ-tor1\]) with (\[equ-tor2\]), we obtain that $$\label{equ-tor3} \begin{split} &( \kappa_{n-m+1}(\{f, z\})-[f, \kappa_n(z)])(b_1\otimes \cdots \otimes b_m)\\ =&\sum(-1)^{mn}d(e_jf(a_{0, m-1})\otimes a_{m, n-1}\otimes f_j\otimes b_{1, m}\otimes 1+\\ &\sum \sum_{i=n-m+1}^{n-1}(-1)^{i(n-1)+mn}d(e_j(f\otimes {\mathop{\mathrm{id}}\nolimits})(a_{i, n-1}\otimes a_{0, i-1})\otimes f_j\otimes b_{1, m}\otimes 1)-\\ &\sum \sum_{i=1}^m(-1)^{(m-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m})\otimes 1 -\\ & \sum_{j}\sum_{l=1}^m(-1)^{(m-1)(m+l)+m}df(b_{1, m-l}\otimes e_j\otimes a_{n-l+1, n-1})a_0\otimes a_{1, n-l}\otimes f_j\otimes b_{m-l+1, m}\otimes 1 \end{split}$$ Let us compute the following term in (\[equ-tor3\]). 
$$\label{equ-tor4} \begin{split} &\sum d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, m-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{m})\otimes 1\\ =&\sum df(b_{1,m-1}\otimes e_ja_0)\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1\\ =&(-1)^{m-1}\sum db_1f(b_{2, m-1}\otimes e_j\otimes a_0)\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1+\\ &\sum\sum_{i=1}^{m-2}(-1)^{m-1+i}f(b_{1, i-1}\otimes b_ib_{i+1}\otimes b_{i+2, m-1} \otimes e_j\otimes a_0)\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1+\\ &f(b_{1, m-2}\otimes b_{m-1}e_j\otimes a_0)\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1+df(b_{1, m-1}\otimes e_j)a_0\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1\\ =&\sum_{i=1}^{m-1}(-1)^{m-1+(i-1)n}\sum\delta(df({\mathop{\mathrm{id}}\nolimits}_{m-i-1}\otimes e_j\otimes a_{0, i-1})\otimes a_{i, n-1}\otimes {\mathop{\mathrm{id}}\nolimits}_i\otimes 1)(b_{1, m})\\ &\sum_{i=1}^{m-1}(-1)^{(n-1)i+1}df(b_{1, m-1-i}\otimes e_ja_0\otimes a_{1, i})\otimes a_{i+1, n-1}\otimes f_j\otimes b_{m-i, m}\otimes 1+\\ &\sum_{i=1}^{m}(-1)^{(m-1)i+m+1}df(b_{1, m-i}\otimes e_j\otimes a_{n-i+1, n-1})a_0\otimes a_{1, n-i}\otimes f_j\otimes b_{m-i+1, m}\otimes 1+\\ &\sum (-1)^{mn}de_jf(a_{0, m-1})\otimes a_{m, n-1}\otimes f_j\otimes b_{1, m}\otimes 1+\\ &\sum_{i=n-m+1}^{n-1}(-1)^{i(n-1)+mn}de_j(f\otimes {\mathop{\mathrm{id}}\nolimits})(a_{i, n-1}\otimes a_{0, i-1})\otimes f_j\otimes b_{1, m}\otimes 1 \end{split}$$ From (\[equ-tor4\]), it follows that $$\kappa_{n-m+1}(\{f, z\})-[f, \kappa_n(z)]\in Z^m(A, \Omega^{n+1}(A)),$$ hence we have the following identity in ${\mathop{\mathrm{Ext}}\nolimits}^m_{A^e}(A, \Omega^{n+1}(A))$: $$\kappa_{n-m+1}(\{f, z\})=[f, \kappa_n(z)].$$ This completes the proof. 
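Before moving on, it may help to record a standard concrete instance of the Casimir element that drives these computations; the example below is a routine sanity check and is not taken from the original text.

```latex
% Example (assumed, standard): the group algebra A = kG of a finite group G,
% with symmetrizing form <a,b> := coefficient of 1_G in ab, so that
% <g,h> = \delta_{g,h^{-1}} on group elements.  Then
\[
  (t\otimes \mathrm{id})\Bigl(\sum_{g\in G} g\otimes g^{-1}\Bigr)(x)
  \;=\; \sum_{g\in G} \langle g, x\rangle\, g^{-1} \;=\; x,
\]
% so the Casimir element is
\[
  \sum_i e_i\otimes f_i \;=\; \sum_{g\in G} g\otimes g^{-1}.
\]
% Property (1) of the Casimir element then reads, for group elements a, a',
\[
  \sum_{g\in G} a g a' \otimes g^{-1}
  \;=\; \sum_{u\in G} u \otimes a' u^{-1} a
  \qquad (u := a g a'),
\]
% and extends to all a, a' in A by bilinearity.
```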
\[rem-bracket\] We also have the following commutative diagram for $m\geq 1, n\geq 2$ and $n-m\geq 1$, $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}^{n-1}(A, A)^*\ar[r]^-{[\cdot, \cdot]^*} \ar[d]^-{{\mathop{\mathrm{id}}\nolimits}\otimes \kappa_{n}} & {\mathop{\mathrm{HH}}\nolimits}^{n-m}(A, A)^*\ar[d]^{\kappa_{m-n+1}}\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)\ar[r]^-{\{\cdot, \cdot\}} & {\mathop{\mathrm{Tor}}\nolimits}_{n-m}^{A^e}(A, A) }$$ where $[\cdot,\cdot]^*$ is defined as follows, for any $f\in {\mathop{\mathrm{HH}}\nolimits}^m(A, A)$ and $\alpha\in {\mathop{\mathrm{HH}}\nolimits}^{n-1}(A, A)^*$, $$[f, \alpha]^*(-):=\langle\alpha, [f, -]\rangle.$$ In fact, for $f\in{\mathop{\mathrm{HH}}\nolimits}^m(A, A), \alpha\in{\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)$ and $g\in {\mathop{\mathrm{HH}}\nolimits}^{n-m}(A, A)$, $$\begin{split} [f, \kappa_{n}^{-1}(\alpha)]^*(g)=&\langle \kappa_{n}^{-1}(\alpha), [f, g]\rangle\\ =&\langle\kappa_{n}^{-1}(\alpha), (-1)^{m}\Delta(f)\cup g+f\cup\Delta(g)+(-1)^{m+1}\Delta(f\cup g)\rangle\\ =&(-1)^m\langle\kappa_{m-n+1}^{-1}(\Delta(f)\cap\alpha), g \rangle+\langle \kappa_{m-n+1}^{-1}(B (f\cap\alpha)), g \rangle+\\ &(-1)^{m+1}\langle \kappa_{m-n+1}^{-1}(f\cap B (\alpha)), g \rangle\\ =&\langle \kappa_{m-n+1}^{-1}(\{f, \alpha\} ), g \rangle \end{split}$$ where the second identity is the BV identity (cf. Definition \[defn-BV\]), hence it follows that the diagram above is commutative. Similarly, we obtain the following proposition. \[prop-ext1\] Let $A$ be a symmetric algebra over a field $k$. 
Then we have the following commutative diagram for $m\geq 1, n\geq 2$ and $n-m\leq -2,$ $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Ext}}\nolimits}^{1}_{A^e}(A, \Omega^{n+1}(A))\ar[r] &{\mathop{\mathrm{Ext}}\nolimits}^m_{A^e}(A, \Omega^{n+1}(A))\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-n}(A, A)\ar[u]\ar[r]^-{[\cdot, \cdot]} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{m-n-1}(A, A)\ar[u]\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)\ar[u]^{{\mathop{\mathrm{id}}\nolimits}\otimes \kappa_n}\ar[r]^-{\{\cdot, \cdot\}} & {\mathop{\mathrm{HH}}\nolimits}^{m-n-1}(A, A) \ar[u]_{\kappa_{n-m+1}} }$$ where $\{\cdot, \cdot\}$ is defined as follows: for any $f\in{\mathop{\mathrm{HH}}\nolimits}^m(A, A)$ and $\alpha\in {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)$, $$\{f, \alpha\}:=(-1)^{m}\Delta(f)\cap \alpha+f\cap B (\alpha)+(-1)^{m+1} \Delta (f\cap\alpha),$$ where $\cap$ denotes the generalized cap product defined in Lemma \[lemma-cap\]. [[*Proof.*]{}]{} The proof is similar to that of Proposition \[prop-tor1\]. 
Take $f\in{\mathop{\mathrm{HH}}\nolimits}^m(A, A)$ and $$z:=\sum a_0\otimes a_{1, n-1}\in{\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A).$$ Then we have $$\label{ext-0} \begin{split} &\kappa_{n-m+1}(\{f, z\})(b_{1, m})\\ =&(-1)^m \sum de_ja_0\Delta(f)(a_{1, n-1}\otimes f_j\otimes b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1+\\ &\sum \sum_{i=0}^{n-1}(-1)^{i(n-1)}de_jf(a_{i, n-1}\otimes a_{0, i-1}\otimes f_j\otimes b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1+\\ &(-1)^{m+1}d\Delta(f\cap z)(b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1\\ &\\ & [f, \kappa_n(z)](b_1\otimes \cdots \otimes b_m)\\ =&\sum \sum_{i=1}^m(-1)^{(m-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m})\otimes 1+\\ &\sum \sum_{i=1}^{n}(-1)^{(n+1-i)m}d({\mathop{\mathrm{id}}\nolimits}_i\otimes f)(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1) \end{split}$$ Let us compute the following term, $$\label{ext-1} \begin{split} &df(b_{1, m-1}\otimes e_ja_0)\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1\\ =&\sum (-1)^{m-1}\delta(df({\mathop{\mathrm{id}}\nolimits}_{m-2}\otimes e_j\otimes a_0)\otimes a_{1, n-1}\otimes f_j\otimes {\mathop{\mathrm{id}}\nolimits}\otimes 1)(b_{1, m})+\\ &(-1)^{}\sum df(b_{1, m-2}\otimes e_j\otimes a_0)\otimes a_{1, n-1}\otimes f_j\otimes b_{m-1}b_m\otimes 1+\\ &\sum df(b_{1, m-2}\otimes e_j\otimes a_0)\otimes a_{1, n-1}\otimes f_j\otimes b_{m-1}\otimes b_m +\\ &\sum df(b_{1, m-2}\otimes b_{m-1}e_j\otimes a_0)\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1+\\ &\sum df(b_{1, m-1}\otimes e_j) a_0\otimes a_{1, n-1}\otimes f_j\otimes b_m\otimes 1\\ =&\sum\sum^{m-1}_{i=1} (-1)^{m-1+(i-1)n} \delta(df({\mathop{\mathrm{id}}\nolimits}_{m-i-1}\otimes e_j\otimes a_{0, i-1})\otimes a_{m-i-1, n-1}\otimes f_j\otimes {\mathop{\mathrm{id}}\nolimits}_{m-i} \otimes 1)(b_{1, m})\\ &\sum \sum_{i=1}^{n} (-1)^{(n-1)i+1}df(b_{1, m-i-1}\otimes e_ja_0\otimes a_{1, i})\otimes a_{i+1 , n-1}\otimes f_j\otimes b_{m-i, m}\otimes 1+\\ 
&\sum \sum_{i=1}^n(-1)^{i+1}df(b_{1, m-i}\otimes e_j\otimes a_{n-i+1, n-1})a_0\otimes a_{1, n-i} \otimes f_j\otimes b_{m-i+1, m}\otimes 1+\\ &\sum \sum_{i=1}^{n-1}(-1)^{i(n-1)}e_jf(a_{i, n-1}\otimes a_{0, i-1}\otimes f_j\otimes b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1\\ \end{split}$$ Hence the following identity holds by combining (\[ext-0\]) and (\[ext-1\]). $$\label{ext3} \begin{split} &(\kappa_{n-m+1}(\{f, z\})-[f, \kappa_n(z)])(b_{1, m})\\ = &(-1)^m \sum de_ja_0\Delta(f)(a_{1, n-1}\otimes f_j\otimes b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1+\\ &(-1)^{m+1}d\Delta(f\cap z)(b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1-\\ &\sum \sum_{i=1}^{m-n}(-1)^{(m-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m})\otimes 1-\\ &\sum \sum_{i=1}^{n}(-1)^{(n+1-i)m}d({\mathop{\mathrm{id}}\nolimits}_i\otimes f)(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m}\otimes 1)-\\ & \sum \sum_{i=1}^n(-1)^{i+1}df(b_{1, m-i}\otimes e_j\otimes a_{n-i+1, n-1})a_0\otimes a_{1, n-i} \otimes f_j\otimes b_{m-i+1, m}\otimes 1\\ \end{split}$$ By calculation, we have the following identity $$\label{ext4} \begin{split} &\sum de_ja_0\otimes a_{1, n-1}\otimes f(f_j\otimes b_{1, m-1})\otimes b_m 1\\ =&\sum de_ja_0\otimes a_{1, n-1}\otimes \langle e_k'f(f_j\otimes b_{1, m-1}), 1\rangle f_k'\otimes b_m \otimes 1\\ =&\sum\sum_{l=1}^{n}\sum_{i=l}^{m-1} (-1)^{(m+l)(m-i)+1}\delta (de_ja_0\otimes a_{1, n-l}\otimes \langle f({\mathop{\mathrm{id}}\nolimits}_{m-i-1}\otimes e_k'\otimes a_{n-l+1, n-1}\otimes f_j\otimes {\mathop{\mathrm{id}}\nolimits}_{i-l}), 1\rangle f_k'\\ &\otimes {\mathop{\mathrm{id}}\nolimits}_l\otimes 1)(b_{1, m})+\sum\sum_{i=1}^m(-1)^{m(m-1)} de_ja_0\otimes a_{1, n-1}\otimes \Delta(f)(b_{1, m-1})f_j\otimes b_m\otimes 1+\\ &\sum\sum_{l=1}^{n-1}(-1)^{ml+1}de_ja_0\otimes a_{1,n-l-1}\otimes f(a_{n-l, n-1} \otimes f_j\otimes b_{1, m-l-1})\otimes b_{m-l, m}\otimes 1+\\ &\sum\sum_{i=0}^{n-1}(-1)^{(m+1)i+1}de_ja_0\otimes a_{1, 
n-i-1}\otimes \langle f(b_{1, m-i-1}\otimes e'_j\otimes a_{n-i, n-1})f_j, 1\rangle f_j'\otimes b_{m-i, m}\otimes 1+\\ & \sum \sum_{i=1}^{m-n}de_ja_0\langle f(b_{i, m-n-1}\otimes e_j'\otimes a_{1, n-1}\otimes f_j\otimes b_{1, i-1}), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1 \end{split}$$ Combining (\[ext3\]) and (\[ext4\]), we obtain that $$\label{ext5} \begin{split} &(\kappa_{n-m+1}(\{f, z\})-[f, \kappa_n(z)])(b_{1, m})\\ = &(-1)^{m+1}\Delta(f\cap z)(b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1-\\ &\sum \sum_{i=1}^{m-n}(-1)^{(m-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m})\otimes 1-\\ & \sum \sum_{i=1}^{m-n}de_ja_0\langle f(b_{i, m-n-1}\otimes e_j'\otimes a_{1, n-1}\otimes f_j\otimes b_{1, i-1}), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1 \end{split}$$ Hence from (\[ext5\]), it remains to verify the following identity, $$\label{ext6} \begin{split} 0=& (-1)^{m+1}\Delta(f\cap z)(b_{1, m-n-1})\otimes b_{m-n, m}\otimes 1-\\ &\sum \sum_{i=1}^{m-n}(-1)^{(m-i)(n+1)}d(f\otimes {\mathop{\mathrm{id}}\nolimits})(b_{1, i-1}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m})\otimes 1-\\ & \sum \sum_{i=1}^{m-n}de_ja_0\langle f(b_{i, m-n-1}\otimes e_j'\otimes a_{1, n-1}\otimes f_j\otimes b_{1, i-1}), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1 \end{split}$$ Let us prove Identity (\[ext6\]) above. 
$$\begin{split} & \sum d\langle e_ja_0f(a_{1, n-1}\otimes f_j\otimes b_{1, m-n-1}\otimes e_j'), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1\\ =&\sum(-1)^{n-1} d\langle f(e_ja_0\otimes a_{1, n-1}\otimes f_jb_1\otimes b_{2, m-n-1}\otimes e_j'), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1+\\ &\sum_{i=1}^{m-n-2} (-1)^{n-1+i}\sum d\langle f(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1,i-1} \otimes b_ib_{i+1}\otimes b_{i+2, m-n-1}\otimes e_j'), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1+\\ &\sum (-1)^{m} d\langle f(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m-n-2}\otimes b_{m-n-1}e_j'), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1+\\ &\sum (-1)^{m+1} d\langle f(e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{1, m-n-1})e_j', 1\rangle f_j'\otimes b_{m-n, m}\otimes 1\\ =&\sum\sum_{i=1}^{m-n}(-1)^{m+i-1}df(b_{1, i}\otimes e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m-n-1})\otimes b_{m-n, m}\otimes 1+\\ &\sum \sum^{m-n-1}_{i=1}(-1)^{i(m-n)+1}\langle e_ja_0f(a_{1, n-1}\otimes f_j\otimes b_{i, m-n-1}\otimes e'_j\otimes b_{1, i-1}), 1\rangle f_j'\otimes b_{m-n, m}\otimes 1+\\ &\sum \sum^{m-n}_{i=1} d\langle f(b_{1, i-1}\otimes e_k'e_ja_0\otimes a_{1, n-1}\otimes f_j\otimes b_{i, m-n-1}), 1\rangle f_k'\otimes b_{m-n,m}\otimes 1.\\ \end{split}$$ Hence the right-hand side of (\[ext6\]) is zero. \[prop-ext-tor\] Let $A$ be a symmetric algebra over a field $k$. 
Then we have the following commutative diagram for $m\geq 2, n\geq 2$, $$\xymatrix{ {\mathop{\mathrm{Ext}}\nolimits}^1(A, \Omega^{m+1}(A))\otimes {\mathop{\mathrm{Ext}}\nolimits}^{1}_{A^e}(A, \Omega^{n+1}(A))\ar[r]^-{[\cdot,\cdot]} &{\mathop{\mathrm{Ext}}\nolimits}^1_{A^e}(A, \Omega^{m+n+2}(A))\\ {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-m}(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-n}(A, A)\ar[u]\ar[r]^-{[\cdot, \cdot]} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-m-n-1}(A, A)\ar[u]\\ {\mathop{\mathrm{Tor}}\nolimits}_{m-1}^{A^e}(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)\ar[u]^{\kappa_m\otimes \kappa_n}\ar[r]^-{\{\cdot, \cdot\}} & {\mathop{\mathrm{Tor}}\nolimits}^{A^e}_{m+n}(A, A) \ar[u]_{\kappa_{m+n+1}} }$$ where $\{\cdot,\cdot\}$ is defined as follows, for any $\alpha \in{\mathop{\mathrm{Tor}}\nolimits}_{m-1}^{A^e}(A, A)$ and $\beta\in {\mathop{\mathrm{Tor}}\nolimits}_{n-1}^{A^e}(A, A)$, $$\{\alpha,\beta\}:=(-1)^mB(\alpha)\cup \beta+\alpha\cup B(\beta)+(-1)^{m+1}B(\alpha\cup \beta),$$ where $\cup$ represents the generalized cup product defined in (\[gene-cup\]). [[*Proof.*]{}]{} The proof is similar to that of Proposition \[prop-ext1\]. Therefore, combining Theorem \[thm-tra\] and Propositions \[prop-tor1\], \[prop-ext1\] and \[prop-ext-tor\], we obtain the following corollary. \[cor-bv\] Let $A$ be a symmetric algebra over a field $k$. 
Then ${\mathop{\mathrm{HH}}\nolimits}^*_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ is a BV algebra with BV operator $\Delta_{{\mathop{\mathrm{sg}}\nolimits}}$, which is the Connes $B$-operator for the negative part ${\mathop{\mathrm{HH}}\nolimits}^{< 0}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$, the $\Delta$-operator for the positive part ${\mathop{\mathrm{HH}}\nolimits}^{> 0}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)$ and $$\Delta_{{\mathop{\mathrm{sg}}\nolimits}}|_{{\mathop{\mathrm{HH}}\nolimits}^0_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)}=0: {\mathop{\mathrm{HH}}\nolimits}^0_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\rightarrow {\mathop{\mathrm{HH}}\nolimits}^{-1}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A).$$ In particular, we have two BV subalgebras ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{\leq 0}(A, A)$ and ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{\geq 0}(A, A)$ with induced BV algebra structures. [[*Proof.*]{}]{} It remains to prove that we have the following commutative diagram for $m\in {\mathbb{Z}}_{>0}$, that is, the image of the bracket $\{\cdot,\cdot\}$ is contained in ${\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-1}(A, A)$. $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}_{m-1}^{A^e}(A, A)\ar[rd] \ar[r]^-{\{\cdot, \cdot\}} & {\mathop{\mathrm{Tor}}\nolimits}_0^{A^e}(A, A)\\ & {\mathop{\mathrm{HH}}\nolimits}^{-1}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\ar@{_(->}[u] }$$ where we recall that the injection $${\mathop{\mathrm{HH}}\nolimits}^{-1}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\rightarrow {\mathop{\mathrm{Tor}}\nolimits}_0^{A^e}(A, A)$$ is defined in Proposition \[prop-hom\] and $$\{f, \alpha\}:=(-1)^m \Delta(f)\cap \alpha+f\cap B(\alpha)$$ for any $f\in {\mathop{\mathrm{HH}}\nolimits}^{m}(A, A)$ and $\alpha\in {\mathop{\mathrm{Tor}}\nolimits}_{m-1}^{A^e}(A, A)$. 
From the short exact sequence in Proposition \[prop-hom\], $$\xymatrix{ 0\ar[r] & {\mathop{\mathrm{HH}}\nolimits}^{-1}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\ar[r] & A^{\vee}\otimes_{A^e}A\ar[r]^-{\mu^*}& {\mathop{\mathrm{Hom}}\nolimits}_{A^e}(A, A) \ar[r] & {\mathop{\mathrm{HH}}\nolimits}^0_{{\mathop{\mathrm{sg}}\nolimits}}(A, A)\rightarrow 0 }$$ it is sufficient to show that for any $f\in {\mathop{\mathrm{HH}}\nolimits}^{m}(A, A)$ and $$\alpha:=\sum a_0\otimes a_{1, m-1}\in {\mathop{\mathrm{Tor}}\nolimits}_{m-1}^{A^e}(A, A),$$ we have $$\mu^*(\{f, \alpha\})=0.$$ Indeed, we have $$\begin{split} \mu^*(\{f, \alpha\})=&\sum_j (-1)^me_ja_0\Delta(f)(a_{1, m-1}) f_j+\sum_j\sum_{i=0}^{m-1} (-1)^{i(m-1)}e_jf(a_{i, m-1}\otimes a_{0, i-1})f_j\\ =&\sum_j (-1)^me_ja_0\langle \Delta(f)(a_{1, m-1})f_j e_k', 1\rangle f_k'+ \sum_j\sum_{i=0}^{m-1} (-1)^{i(m-1)}e_jf(a_{i, m-1}\otimes a_{0, i-1})f_j\\ =&\sum_j\sum_{i=1}^m (-1)^{m+i(m-1)}e_ja_0\langle f(a_{i, m-1}\otimes f_je_k'\otimes a_{1, i-1}), 1\rangle f_k'+ \\ &\sum_j\sum_{i=0}^{m-1} (-1)^{i(m-1)}e_jf(a_{i, m-1}\otimes a_{0, i-1})f_j\\ =&0 \end{split}$$ since by direct calculation, we obtain that $$\begin{split} \sum_j\sum_{i=1}^m (-1)^{m+i(m-1)}e_ja_0\langle f(a_{i, m-1}\otimes f_je_k'\otimes a_{1, i-1}), 1\rangle f_k'&=0,\\ \sum_j\sum_{i=0}^{m-1} (-1)^{i(m-1)}e_jf(a_{i, m-1}\otimes a_{0, i-1})f_j&=0. \end{split}$$ Moreover, we have the following commutative diagram, $$\xymatrix{ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{Tor}}\nolimits}_{m-1}^{A^e}(A, A) \ar[d]^-{{\mathop{\mathrm{id}}\nolimits}\otimes \kappa_m} \ar[r]^-{\{\cdot,\cdot\}} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-1}(A, A)\ar@{=}[d]\\ {\mathop{\mathrm{HH}}\nolimits}^m(A, A)\otimes {\mathop{\mathrm{HH}}\nolimits}^{-m}_{{\mathop{\mathrm{sg}}\nolimits}}(A, A) \ar[r]^-{[\cdot, \cdot]} & {\mathop{\mathrm{HH}}\nolimits}_{{\mathop{\mathrm{sg}}\nolimits}}^{-1}(A, A). }$$ This completes the proof. 
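As a quick sanity check on the BV identity of Definition \[defn-BV\] that underlies all of the brackets above (a routine verification, not part of the original argument), note that the unit is automatically central for the induced bracket:

```latex
% Take \beta = 1 (so |\beta| = 0) in the BV identity and use \Delta(1) = 0:
\[
  [\alpha, 1]
  = (-1)^{|\alpha|+1}\Delta(\alpha\cup 1)
    + (-1)^{|\alpha|}\Delta(\alpha)\cup 1
    + \alpha\cup \Delta(1)
  = (-1)^{|\alpha|+1}\Delta(\alpha) + (-1)^{|\alpha|}\Delta(\alpha)
  = 0 .
\]
```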
\[cor-cy\] Let $A$ be a symmetric algebra over a field $k$. Then the cyclic homology ${\mathop{\mathrm{HC}}\nolimits}_*(A, A)$ is a graded Lie algebra of lower degree 2, that is, ${\mathop{\mathrm{HC}}\nolimits}_*(A, A)[-1]$ is a graded Lie algebra. [[*Proof.*]{}]{} This is an immediate corollary of Proposition 26 in [@Men1], since from Corollary \[cor-bv\] above it follows that ${\mathop{\mathrm{HH}}\nolimits}_*(A, A)$, equipped with the Connes $B$-operator, is a BV algebra. [99]{} Vladimir Baranovsky and Victor Ginzburg, [*Gerstenhaber-Batalin-Vilkovisky structures on coisotropic intersections,*]{} arXiv:0907.0037. Petter Andreas Bergh and David A. Jorgensen, [*Tate-Hochschild homology and cohomology of Frobenius algebras,*]{} Journal of Noncommutative Geometry, [**7**]{}, (2013), 907-937. Petter Andreas Bergh, David A. Jorgensen and Steffen Oppermann, [*The negative side of cohomology for Calabi-Yau categories,*]{} Bulletin of the London Mathematical Society, [**46**]{}, (2014), 291-304. Michel Broué, [*Higman criterion revisited,*]{} Michigan Mathematical Journal, [**58**]{}, (2009), 125-179. Ragnar-Olaf Buchweitz, [*Maximal Cohen-Macaulay modules and Tate-cohomology over Gorenstein rings,*]{} manuscript, Universität Hannover, 1986. Claude Cibils and Andrea Solotar, [*Hochschild cohomology algebra of abelian groups,*]{} Archiv der Mathematik [**68**]{} (1997), 17-21. Alain Connes, [*Non-commutative differential geometry,*]{} Publications Mathématiques de l’IHÉS, [**62**]{}, 1985, 257-360. Ezra Getzler and J.D.S. Jones, [*Operads, homotopy algebra and iterated integrals for double loop spaces,*]{} hep-th/9403055. Ching-Hwa Eu and Travis Schedler, [*Calabi-Yau Frobenius algebras,*]{} Journal of Algebra [**321**]{} (2009), no. 3, 774-815. Murray Gerstenhaber, [*The cohomology structure of an associative ring,*]{} Annals of Mathematics, Vol. [**78**]{}, No. 2, September, 1963. 
Bernhard Keller, [*Hochschild cohomology and derived Picard groups,*]{} Journal of Pure and Applied Algebra [**136**]{} (1999), 1-56. Bernhard Keller and Dieter Vossieck, [*Sous les catégories dérivées,*]{} Comptes Rendus de l’Académie des Sciences Paris, Série I Mathématique [**305**]{} (6) (1987) 225-228. Jean-Louis Loday, [*Cyclic homology,*]{} Grundlehren der Mathematischen Wissenschaften, 301, Springer, 1992. Luc Menichi, [*Batalin-Vilkovisky algebra structures on Hochschild cohomology,*]{} Bulletin de la Société Mathématique de France [**137**]{} (2009), no. 2, 277-295. Luc Menichi, [*Connes-Moscovici characteristic map is a Lie algebra morphism,*]{} Journal of Algebra [**331**]{} (2011), 311-337. Sergei Merkulov and Bruno Vallette, [*Deformation theory of representations of prop(erad)s I,*]{} Journal für die reine und angewandte Mathematik (Crelle) [**634**]{} (2009), 51-106. Dmitri Orlov, [*Derived categories of coherent sheaves and triangulated categories of singularities,*]{} Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Volume II, 503-531, Progress in Mathematics, 270, Birkhäuser Boston, Inc., Boston, MA, 2009. Jeremy Rickard, [*Derived categories and stable equivalence,*]{} Journal of Pure and Applied Algebra [**61**]{} (1989), 303-317. Thomas Tradler, [*The BV algebra on Hochschild cohomology induced by infinity inner products,*]{} Annales de l’institut Fourier (Grenoble) [**58**]{} (2008), no. 7, 2351-2379. Charles Weibel, [*An Introduction to Homological Algebra,*]{} Cambridge University Press, Cambridge, 1995. Ping Xu, [*Gerstenhaber algebras and BV-algebras in Poisson geometry,*]{} Communications in Mathematical Physics, [**200**]{}, 1999, 545-560. Alexander Zimmermann, [*Representation Theory: A Homological Algebra Point of View,*]{} Springer-Verlag London, 2014. 
[^1]: [email protected], Université Paris Diderot-Paris 7, Institut de Mathématiques de Jussieu-Paris Rive Gauche CNRS UMR 7586, Bâtiment Sophie Germain, Case 7012, 75205 Paris Cedex 13, France
--- abstract: 'sPHENIX is a new collaboration and future detector project at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC). It seeks to answer fundamental questions on the nature of the quark gluon plasma (QGP), including its temperature dependence and coupling strength, by using a suite of precision jet and upsilon measurements that probe different length scales of the QGP. This will be achieved with large acceptance, $|\eta| < 1$ and $0$-$2\pi$ in $\phi$, electromagnetic and hadronic calorimeters and precision tracking enabled by a $1.5$ T superconducting magnet. With the increased luminosity afforded by accelerator upgrades, sPHENIX will perform high statistics measurements extending the kinematic reach at RHIC to overlap the LHC’s. This overlap with the LHC will facilitate better understanding of the role of temperature, density and parton virtuality in QGP dynamics and for jet quenching in particular. This talk will focus on key future measurements and the current state of the sPHENIX project.' address: 'Columbia University, New York, NY USA' author: - Sarah Campbell for the sPHENIX Collaboration bibliography: - 'sPHENIX\_HotQuarks2016.bib' title: 'sPHENIX: The next generation heavy ion detector at RHIC' --- Introduction ============ The goal of the sPHENIX program [@proposal] is to probe the Quark-Gluon Plasma (QGP) created in heavy ion collisions at multiple length scales. It does this through three avenues: studying the structure of jets as they evolve in the QGP, varying the jet parton (from the smallest gluon-jets, to light quark-, and then the larger, heavier charm and bottom quark-jets) and studying the sequential melting of the three upsilon states ($\Upsilon(1S)$, $\Upsilon(2S)$, $\Upsilon(3S)$). Jets created in heavy ion collisions are initially highly virtual; as they fragment while traversing the QGP, their virtuality approaches the scale of the medium and interactions with the QGP become more probable. 
As a result, the sensitivity of jet measurements to QGP effects is higher in lower energy jets and jets created at RHIC energies. Similarly, the partonic composition of jets differs at RHIC and LHC energies, with a higher fraction of quark-jets available at lower jet energies at RHIC compared to the LHC. Additional jet-parton studies require identifying heavy flavor jets. This will allow further study of collisional versus radiative energy loss and the dead cone effect. Finally, measurements of the upsilon states at RHIC will provide insight into bottom quarkonia behavior at lower QGP temperatures ($T_{RHIC}$ is $77$% lower than $T_{LHC}$) and with reduced $\Upsilon$ production from coalescence. For each of these signals, it is necessary to obtain complementary measurements at both RHIC and the LHC to disentangle their respective contributions and sensitivities. sPHENIX is the RHIC detector capable of complementing the LHC heavy ion program. The sPHENIX detector ==================== The sPHENIX detector design, Figure \[Fig:Detectors\], is driven by the goal of measuring these rare signals. To make the most of RHIC luminosities, the sPHENIX detector covers $2\pi$ in azimuth ($\phi$) and $\pm1$ units in rapidity ($\eta$) and reads out data at a rate of $15$ kHz. To efficiently resolve jets and their energies, sPHENIX has both hadronic and electromagnetic calorimeters. Efficient tracking of particles from $0.2$ to $40$ GeV/c in transverse momentum ($p_{T}$) is needed for jet fragmentation measurements. Heavy flavor jet identification requires precision vertexing with a distance of closest approach in the $x$-$y$ plane, $DCA_{xy}$, resolution of better than $70$ $\mu$m. To measure the three $\Upsilon$ states in the dielectron decay channel, hadron rejection of better than $99\%$ and a mass resolution of $1\%$ at the $\Upsilon$ mass are required. 
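To put the $1\%$ mass-resolution requirement in perspective, the sketch below compares it with the spacing of the three $\Upsilon$ states. The masses used are assumed PDG-style values and are not taken from this text.

```python
# Peak separation of the Upsilon states in units of the detector mass
# resolution, assuming a fractional resolution of 1% at the lower peak.
M_1S, M_2S, M_3S = 9.460, 10.023, 10.355  # assumed masses in GeV

def separation_in_sigma(m_low, m_high, frac_resolution=0.01):
    """Peak separation divided by the mass resolution at the lower peak."""
    sigma = frac_resolution * m_low  # 1% of ~9.5 GeV is ~95 MeV
    return (m_high - m_low) / sigma

print(round(separation_in_sigma(M_1S, M_2S), 2))  # 1S-2S: 5.95 sigma
print(round(separation_in_sigma(M_2S, M_3S), 2))  # 2S-3S: 3.31 sigma
```

Even the closer-spaced $\Upsilon(2S)$-$\Upsilon(3S)$ pair is separated by several standard deviations under this assumption, which is why a $1\%$ resolution suffices to resolve the three states.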
These precise and efficient tracking requirements are met using a $1.5$ T magnet and three tracking subsystems: a time projection chamber (TPC), a Si strip inner tracker (INTT) and a precision MAPS detector based on ALICE’s ITS upgrade. The remainder of this section discusses the calorimeter and tracking detectors, Figures \[Fig:Calorimeters\] and \[Fig:Trackers\] respectively, in more detail. Calorimeters ------------ The hadronic calorimeters (HCal) consist of tilted steel plates interleaved with polystyrene panels embedded with $1$ mm wavelength shifting fiber. They have a segmentation of $0.1 \times 0.1$ in $\phi$ and $\eta$. Inner and Outer HCal subsystems measure the hadronic energy deposited before and after the magnetic coil. The HCal design has a single particle energy resolution requirement of better than $100\%/\sqrt{E}$. The electromagnetic calorimeter (EMCal) is made up of W modules embedded with scintillating fibers evenly spaced in $\phi$ and $\eta$. The EMCal $\phi$ and $\eta$ segmentation is $0.025 \times 0.025$ and it has an energy resolution requirement of better than $15\%/\sqrt{E}$. Research and development work on the EMCal is ongoing to determine whether modules should be projective in both $\phi$ and $\eta$ or just in $\phi$. Silicon photomultipliers measure the light produced in both the HCal and EMCal systems and a common digitizer readout is planned. In the winter of 2016, a prototype of the calorimeter systems was studied at the Fermilab Test Beam Facility. The measured energy distributions in the HCal are well reproduced by full GEANT simulations. This provides added confidence in our simulated detector response.
Furthermore, preliminary analyses of the combined HCal and EMCal single particle energy resolution and electron energy resolution in the EMCal meet the design goals, with a projected single particle energy resolution in the range $(70.6\%-95.7\%)/\sqrt{E}$ and an electron energy resolution of roughly $14.2\%/\sqrt{E}$ or $12.7\%/\sqrt{E}$ depending on the shower calibration method. Tracking detectors ------------------ The outer-most tracking detector, the TPC, is located between $20$ and $78$ cm in radius and has an approximate $250$ $\mu$m effective hit resolution. It provides the bulk of the pattern recognition and momentum resolution for the tracking of particles between $0.2$ and $40$ GeV/c in $p_{T}$. A continuous, non-gated, TPC readout is planned to be compatible with sPHENIX’s high data acquisition rate. The inner-most tracking detector, the MAPS detector, consists of three layers of Si sensors following the ALICE ITS upgrade design [@ALICE]. It contributes both precision event vertex determination, $|z_{vtx}| < 10$ cm, and identification of off-vertex decays, $DCA_{xy} < 70$ $\mu$m. Located between the TPC and the MAPS layers, the INTT detector provides needed continuity in the tracking, redundancy in pattern recognition and DCA determination, and pile-up rejection. It consists of $4$ layers of Si strips and will be read out by reusing PHENIX FVTX electronics. Simulated detector performance ============================== The performance of the combined tracking systems is simulated by embedding pions in central HIJING events. Current results in this ongoing effort show efficient tracking out to $40$ GeV/c in $p_{T}$ and a distance of closest approach resolution of $40$ $\mu$m at the lowest $p_{T}$ values. The calorimetric jet reconstruction performance is characterized by the jet energy resolution, efficiency and purity.
These quantities in sPHENIX are simulated using central HIJING events with an ATLAS-influenced jet reconstruction algorithm for jet radii of $0.2$, $0.3$, and $0.4$ [@jets]. The resulting jet efficiency is better than $90\%$ for jets with $p_{T}$ greater than $20$ GeV/c and the jet purity is better than $80\%$ for jets with a $p_{T}$ of greater than $25$ GeV/c. To estimate the available jet yields in sPHENIX, projected RHIC luminosities and perturbative QCD (pQCD) rates of hard processes are needed. Thanks to RHIC luminosity upgrades, over one hundred billion minimum bias events are expected in $22$ weeks of $\sqrt{s_{NN}}=200$ GeV $Au$+$Au$ collisions, of which approximately twenty billion are from $0$-$20\%$ central events. These values combined with pQCD rate calculations provide estimates of the expected jet yields available to sPHENIX as presented in Table \[Tab:pQCDyields\], confirming that rare jet probes can be measured with high statistics at sPHENIX. Figure \[Fig:RAA\] shows the projected statistical errors and kinematic reach of the nuclear modification factor, $R_{AA}$, for jets, b-jets, and direct photons, assuming $22$ weeks of $Au$+$Au$ collisions and $10$ weeks of $p$+$p$ collisions at RHIC. These projections are shown as an extension of the current RHIC capabilities with the PHENIX experiment. Specifically, inclusive jet measurements will extend out to $80$ GeV/c in $p_{T}$, b-jet measurements will extend out to over $40$ GeV/c and direct photon measurements will extend out to over $50$ GeV/c. With this increased kinematic range there will be significant overlap in the accessible $p_{T}$ range for jet and heavy flavor measurements at sPHENIX, the future RHIC experiment, and at future upgraded LHC experiments.
[llll]{} Signal & $p_{T}$ range & pQCD & Yields\ Light q + g jets & $p_{T} > 20$ GeV/c & NLO & $10^{7}$\ Light q + g jets & $p_{T} > 30$ GeV/c & NLO & $10^{6}$\ Direct photons & $p_{T} > 20$ GeV/c & NLO & $10^{4}$\ c-jets & $p_{T} > 20$ GeV/c & FONLL & $10^{4}$\ b-jets & $p_{T} > 20$ GeV/c & FONLL & $10^{4}$\ To complete the b-jet program, sPHENIX needs to be able to identify or tag b-jet events. Three methods of b-jet tagging are being explored: a) identifying multiple tracks with a large DCA, b) finding a secondary vertex, and c) tagging B-mesons by semi-leptonic decay or the baryon mass. While method c) is still under development, Pythia8 simulations have shown that methods a) and b) can identify b-jets with an estimated $30\%$ purity and $70\%$ efficiency. With this level of b-jet tagging, sPHENIX will be able to measure the $R_{AA}$ of b-jets out to $40$ GeV/$c$ in $p_{T}$ and potentially constrain b-jet transport coefficients in models. For the $\Upsilon$ program, the projected upsilon yields in $10$ weeks of $p$+$p$ collisions measured in sPHENIX are shown in Figure \[Fig:Upsilon\]. Over $8800$ $\Upsilon(1S)$, $2200$ $\Upsilon(2S)$, and $1160$ $\Upsilon(3S)$ are expected. Furthermore, sPHENIX can clearly separate the three upsilon states with a $\Upsilon(1S)$ width of $80\pm1.4$ MeV/$c^2$. sPHENIX will provide, for the first time, the ability to separate all three upsilon states at RHIC. This mass resolution is maintained in $Au$+$Au$ collisions, allowing for the centrality dependent measurement of the $R_{AA}$ in each of the upsilon states. Conclusions =========== The sPHENIX project will extend RHIC results beyond PHENIX and STAR’s current capabilities, and provide necessary complementary measurements to the LHC experiments. This complementarity is needed to form a complete picture of the properties and behavior of the QGP created in heavy ion collisions. sPHENIX will achieve this by focusing on jet, upsilon and b-jet observables.
With CD-0 designation recently obtained, installation expected in 2021, and first beam available in 2022 [@DoE], the future for heavy ion physics with sPHENIX is bright. References {#references .unnumbered} ==========
--- abstract: 'In this paper, a sharp interface immersed boundary method is developed for efficiently and robustly solving flow with arbitrarily irregular and changing geometries. The proposed method employs a three-step prediction-correction flow reconstruction scheme for boundary treatment and enforces Dirichlet, Neumann, Robin, and Cauchy boundary conditions in a straightforward and consistent manner. Numerical experiments concerning two- and three-dimensional flow, stationary and moving objects, convex and concave geometries, and no-slip and slip wall boundary conditions are conducted to demonstrate the proposed method.' author: - 'Huangrui Mo[^1]' - 'Fue-Sang Lien' - Fan Zhang - 'Duane S. Cronin' bibliography: - 'ref.bib' title: A sharp interface immersed boundary method for solving flow with arbitrarily irregular and changing geometry --- Introduction ============ Particle jets, a rapid and nonuniform dispersal of granular media under impulsive energy release, are observed in many physical processes such as explosions with explosive charges surrounded by or mixed with solid particles [@frost2012particle; @zhang2014large], explosive volcanic eruptions [@kedrinskiy2009hydrodynamic], and impact of a solid projectile on granular media [@lohse2004impact; @pacheco2011impact]. In order to understand this jet phenomenon, many studies have been conducted over the past decades [@zhang2009shock]. However, the formation mechanism of particle jets remains unidentified [@rodriguez2013solid], primarily due to the complexity of shock-particle interactions [@rodriguez2013solid] and the random nature of the force-chain networks in granular materials [@liu1995force; @jaeger1996granular].
To develop a numerical solver for fully investigating the particle jet phenomenon, a computational fluid dynamics solver for simulating shock-particle interactions under strong shocks and complex particle configurations is a prerequisite, for which the capability of dealing efficiently and robustly with arbitrarily irregular and changing geometries is crucial. In the immersed boundary method introduced by @peskin1972flow, numerically solving flow with irregular geometries is conducted on generic Cartesian grids, which can greatly simplify the grid generation process and take advantage of some important features of modern high-performance computing architecture [@jung2001two; @peskin2002immersed; @mittal2005immersed]. For instance, reduced memory requirements and the capability to use linear arrays with linear indexing techniques as main data structures can benefit from modern processor prefetchers and hierarchical cache architectures to achieve a highly efficient numerical solver. Extensions of the immersed boundary method have been continuously developed to increase interface resolution and relax stability constraints [@fadlun2000combined; @tseng2003ghost; @uhlmann2005immersed; @mori2008implicit; @kapahi2013three; @yang2015non; @schwarz2016immersed]. @yusof1997combined and @fadlun2000combined developed the direct forcing immersed boundary method. In the direct forcing approach, boundary forces are implicitly imposed via flow reconstruction, which simplifies the numerical discretization procedure considerably and is well-suited for problems with rigid boundaries. @balaras2004modeling later improved the reconstruction procedure of direct forcing and applied it to large-eddy simulations.
Integrating ideas from the ghost fluid method [@fedkiw1999non; @fedkiw2002coupling] and the direct forcing immersed boundary treatment [@fadlun2000combined; @iaccarino2003immersed], @tseng2003ghost systematically developed a polynomial reconstruction based ghost-cell immersed boundary method to further increase implementation flexibility while maintaining sharp interfaces [@mittal2005immersed; @tseng2003ghost]. @kapahi2013three proposed a least squares interpolation approach and applied it to solving high velocity impact problems. The development of immersed boundary methods was comprehensively reviewed by @peskin2002immersed, @mittal2005immersed, and @sotiropoulos2014immersed. The robustness of a direct forcing immersed boundary method highly depends on the numerical stability and stencil adaptation capability of the employed interpolation method [@tseng2003ghost; @gao2007improved; @kapahi2013three]. Polynomial reconstruction based methods frequently involve constructing linear systems on the neighbouring stencils of the interpolated node, including a nearby boundary point. When one of the stencils is very close to the boundary point, the resulting linear systems may suffer from numerical singularities [@tseng2003ghost; @gao2007improved]. Additionally, a fixed minimum number of stencils is always required to avoid under-determined linear systems. Therefore, special treatments are required when strongly concave or convex geometries exist [@gao2007improved; @kapahi2013three]. To enhance numerical stability and stencil adaptation capability, the idea of using inverse distance weighting interpolation was first introduced by @tseng2003ghost, and a hybrid Taylor series expansion / inverse distance weighting approach was later developed by @gao2007improved.
In addition to numerical stability and stencil adaptation capability, correctly enforcing different types of boundary conditions in a straightforward and consistent manner is another vital factor in obtaining an efficient, accurate, and robust immersed boundary method, since a variety of boundary conditions must be repeatedly enforced on numerical boundaries with a large number of computational nodes [@gibou2013high]. In solving Navier-Stokes equations, constant temperature at a wall and velocity at a no-slip wall have Dirichlet boundary conditions; pressure at a wall and temperature at an adiabatic wall have Neumann boundary conditions; and velocity at a slip wall has a type of Cauchy boundary condition. Excluding the Dirichlet boundary conditions, in which boundary values are determined and known, the enforcement of other types of boundary conditions, particularly Cauchy boundary conditions, for immersed boundaries demands considerable effort [@crockett2011cartesian; @kempe2015imposing; @schwarz2016immersed]. Recently, @kempe2015imposing first devised a numerical implementation of slip wall boundary conditions in the context of immersed boundary methods. However, the realization is not straightforward due to its complexity [@kempe2015imposing]. Therefore, enforcing a variety of boundary conditions in a straightforward and consistent manner is a considerably challenging task. To achieve an efficient and robust boundary treatment method for solving flow with arbitrarily irregular and changing geometries on Cartesian grids, this work develops a sharp interface immersed boundary method. By the development of an inverse distance weighting based three-step prediction-correction flow reconstruction scheme for boundary treatment, the proposed method enforces Dirichlet, Neumann, Robin, and Cauchy boundary conditions in a straightforward and consistent manner.
The developed method serves the objective of solving flow interacting with multiple objects involving collision, agglomeration, penetration, and fragmentation processes. This paper is structured as below. Section \[sec:framework\] presents the sharp interface immersed boundary method under a generalized framework of ghost-cell immersed boundary treatment. Section \[sec:numeric\] describes the employed three-dimensional Navier-Stokes solver of this paper. Section \[sec:validity\] validates the proposed immersed boundary method. Section \[sec:conclusion\] draws conclusions. A sharp interface immersed boundary method {#sec:framework} ========================================== Generalized framework --------------------- Fig. \[fig:gcibm\_demo\] shows 2D and 3D schematic diagrams of a computational domain with an immersed boundary. $G$ denotes a ghost node, a computational node that is located on the numerical boundary but outside the physical domain. $O$ denotes a boundary point with $\mathbf{GO}$ as the outward normal vector. $I$ is the image point of ghost node $G$ reflected by the physical boundary. [0.48]{} ![Schematic diagrams of a computational domain with an immersed boundary. $G$, ghost node; $O$, boundary point; $I$, image point. (a) 2D space. (b) 3D space.[]{data-label="fig:gcibm_demo"}](gcibm_demo_2D "fig:"){width="\textwidth"}   [0.48]{} ![Schematic diagrams of a computational domain with an immersed boundary. $G$, ghost node; $O$, boundary point; $I$, image point. (a) 2D space. (b) 3D space.[]{data-label="fig:gcibm_demo"}](gcibm_demo_3D "fig:"){width="\textwidth"} In the ghost-cell immersed boundary method [@tseng2003ghost], the numerical boundary treatment is a reconstruction process of variable values at numerical boundaries via physical boundary conditions and variable values in the interior physical domain.
To construct the flow at numerical boundaries while admitting the existence of physical boundaries, the method of images [@chu1974boundary] is an effective way [@colella1990multidimensional; @tseng2003ghost]. Therefore, the reconstruction of a generic flow variable $\psi$ at a ghost node $G$ is a two-step approach: \[eq:gcibm\] $$\begin{aligned} \psi_G &= 2\psi_O - \psi_I \label{eq:gcibma} \\ \psi_I &= f(x_I, y_I, z_I) \label{eq:gcibmb} \end{aligned}$$ where $f(x, y, z)$ is a local reconstruction function of $\psi$ at spatial point $I$. Generally, the reconstruction function needs to be determined by physical boundary conditions and known values of $\psi$ at nearby fluid nodes. That is, $$\label{eq:gcibmb_another} \psi_I = f(\{\psi_N\}, \psi_O)$$ where, $\psi_O$ is the value of $\psi$ at the boundary point $O$ where physical boundary conditions are enforced; and $\{\psi_N\}$ represent values of $\psi$ at fluid nodes $\{N\}$ that satisfy: $$\label{eq:domain} d_N = ||\mathbf{r}_I - \mathbf{r}_N|| \le R_I$$ where, $\mathbf{r}_I$ and $\mathbf{r}_N$ are the position vectors of points $I$ and $N$, respectively; $R_I$, referred to as the domain of dependence of point $I$ and illustrated in Fig. \[fig:gcibm\_demo\], is the maximum distance from the point $I$ to nearby fluid nodes that are employed for flow reconstruction. The benefit of the incorporation of physical boundary conditions in the reconstruction function of $\psi_I$ can be demonstrated by the following equations: \[eq:limit\] $$\begin{aligned} ||\mathbf{r}_G - \mathbf{r}_O|| &= ||\mathbf{r}_I - \mathbf{r}_O|| \\ \lim_{||\mathbf{r}_I - \mathbf{r}_O|| \to 0}\psi_I &= \lim_{||\mathbf{r}_I - \mathbf{r}_O|| \to 0}f(\{\psi_N\}, \psi_O) = \psi_O \\ \lim_{||\mathbf{r}_G - \mathbf{r}_O|| \to 0}\psi_G &= 2\psi_O - \lim_{||\mathbf{r}_I - \mathbf{r}_O|| \to 0}\psi_I = \psi_O \end{aligned}$$ Hence, the constructed $\psi_G$ converges to the exact physical boundary conditions when $G$ converges to $O$.
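The two-step reconstruction of Eq. \[eq:gcibm\] can be sketched as follows. This is a minimal illustration only, assuming the boundary is locally planar at $O$ with a known unit outward normal; the function names are ours, not those of the paper's appendix code.

```python
import numpy as np

def image_point(r_G, r_O, n):
    """Image point I: reflect ghost node G across the tangent plane
    of the boundary at point O, where n is the unit normal at O."""
    r_G, r_O, n = (np.asarray(v, dtype=float) for v in (r_G, r_O, n))
    n = n / np.linalg.norm(n)
    # signed distance from G to the boundary plane along n
    d = np.dot(r_O - r_G, n)
    return r_G + 2.0 * d * n

def ghost_value(psi_O, psi_I):
    """Method-of-images extrapolation: psi_G = 2*psi_O - psi_I."""
    return 2.0 * psi_O - psi_I
```

For a ghost node at $(0,-1)$ below a flat boundary through the origin with normal $(0,1)$, the image point is $(0,1)$; as $\psi_I \to \psi_O$, `ghost_value` reduces to $\psi_O$, consistent with Eq. \[eq:limit\].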
This convergence property helps to alleviate unphysical flux over the immersed boundary, an issue resulting from using non-body conformal Cartesian grids [@mittal2005immersed; @mark2008derivation; @seo2011sharp] and examined in the numerical results of this paper. Several approaches are available to construct flow at image points [@tseng2003ghost; @gilmanov2003general; @gilmanov2005hybrid; @gao2007improved; @kapahi2013three]. In this study, we develop an inverse distance weighting based flow reconstruction function to achieve efficient and robust boundary treatment for flow with arbitrarily irregular and changing geometries and to enforce Dirichlet, Neumann, Robin, and Cauchy boundary conditions in a straightforward and consistent manner. Inverse distance weighting interpolation ---------------------------------------- As a convex combination of candidate stencils, inverse distance weighting is a popular interpolation method for the approximation of scattered data sets [@shepard1968two; @junkins1973weighting]. The inverse distance weighting [@shepard1968two] for interpolating the value of a variable $\psi$ at a spatial point $c$ is expressed as the following: $$\label{eq:weighting} \psi_c = \frac{\sum w(d_n) \psi_n }{\sum w(d_n)}, \text{~~~~} d_n \ne 0 \ \text{and} \ \ d_n \le R_c$$ where $\psi_c$ is the interpolated value; $\{\psi_n\}$ are the known values of $\psi$ at stencil points $\{n\}$; $\{d_n\}$ are the distances from $\{n\}$ to the interpolated point $c$; $\{w(d_n)\}$ are weighting functions; $R_c$ is the size of the domain of dependence for interpolating $\psi$ at point $c$. As discussed in [@shepard1968two], the desired property $\lim_{d_n \to 0}\psi_c=\psi_n$ is mathematically satisfied. However, an overflow problem may arise from calculating an inverse distance.
@shepard1968two suggested using a conditional statement to avoid this issue: $$d = \max(d, d_{tiny})$$ where $d_{tiny}$ is a predefined positive constant to avoid floating-point arithmetic overflow when inverting a distance. In this paper, this value is set as a function of mesh sizes: $$d_{tiny} = \epsilon_0 \min(\Delta_x, \Delta_y, \Delta_z)$$ where, $\Delta_x, \Delta_y, \Delta_z$ are mesh sizes in the $x$, $y$, $z$ directions respectively; $\epsilon_0$ is a constant representing the proportion of $d_{tiny}$ to the smallest mesh size, for instance, $\epsilon_0=1.0\times10^{-6}$. Generally, the weighting function $w(d)$ employs an inverse-power law $1/d^q$, and the typical value of $q$ is $2$ [@shepard1968two; @franke1982scattered]. Our numerical experiments on $q=1,\ 2$ with $R_c / \Delta_{max}=2,\ 4,\ 6$ ($\Delta_{max} = \max(\Delta_x, \Delta_y, \Delta_z)$) indicate that computational results are not sensitive to the choice of $q$ and $R_c$. Hence, $q=2$ with $R_c / \Delta_{max} = 2$ are used for the numerical results of this paper. A three-step prediction-correction flow reconstruction scheme ------------------------------------------------------------- The proposed three-step prediction-correction scheme of this paper for constructing $\psi_I=f(\{\psi_N\}, \psi_O)$ with $\psi$ representing a generic field variable is presented as below: 1. Prediction step: pre-estimate the value of $\psi_I$ by applying inverse distance weighting on the fluid nodes that lie in the domain of dependence of the image point $I$. Denote the predicted value as $\psi_I^*$. $$\label{eq:prediciton} \psi_I^* = \frac{\sum w(d_N) \psi_N}{\sum w(d_N)}$$ 2. Physical boundary condition enforcement step: determine the value of $\psi_O$ via the physical boundary conditions that $\psi$ needs to satisfy at the boundary point $O$ and the values of $\psi$ in the interior physical domain. 3.
Correction step: solve the value of $\psi_I$ by adding the boundary point $O$ as a stencil node for the inverse distance weighting of $\psi_I$. $$\label{eq:correction} \psi_I = \frac{\sum w(d_N) \psi_N + w(d_O) \psi_O}{\sum w(d_N) + w(d_O)} = \frac{\psi_I^* + \frac{w(d_O)}{\sum w(d_N)}\psi_O}{1+\frac{w(d_O)}{\sum w(d_N)}}$$ It is beneficial to note that there is no need to redo calculations on fluid nodes when the sum of weights and the sum of weighted values in Eq.  are preserved. The physical boundary condition enforcement step is described below through the implementation of practical boundary conditions. #### Dirichlet boundary condition If $\psi$ satisfies a Dirichlet boundary condition, the value of $\psi_O$ is purely determined by the specified boundary condition: $$\label{eq:dirichlet} \psi_O = g$$ where $g$ is a given value or function. #### Neumann boundary condition $\psi$ satisfies the following equation: $$\label{eq:neumann} \left. \frac{\partial \psi}{\partial n} \right|_O = \frac{\partial \psi_O}{\partial n}$$ where ${\partial \psi_O}/{\partial n}$ is a given value or function. Rewriting Eq.  as the following: $$\label{eq:reneumann} \lim_{l \to 0} \frac{\psi(\mathbf{r}_O + l \mathbf{n}) - \psi(\mathbf{r}_O)}{l} = \frac{\partial \psi_O}{\partial n}$$ where $\mathbf{r}_O$ is the position vector and $\mathbf{n}$ is the unit normal vector at boundary point $O$. Since point $I$ is on the normal direction of point $O$, we have: $$\label{eq:direction} \mathbf{n} = \frac{\mathbf{r}_I - \mathbf{r}_O}{||\mathbf{r}_I - \mathbf{r}_O||}$$ Therefore, $$\label{eq:approxneumann} \frac{\psi_I - \psi_O}{||\mathbf{r}_I - \mathbf{r}_O||} - \left. \frac{\partial^2 \psi}{\partial n^2} \right|_O ||\mathbf{r}_I - \mathbf{r}_O|| + \mathrm{O}(||\mathbf{r}_I - \mathbf{r}_O||^2)= \frac{\partial \psi_O}{\partial n}$$ Due to Eq. , the second order derivative term is negligible: $$\label{eq:secondderivative} \left.
\frac{\partial^2 \psi}{\partial n^2} \right|_O = \frac{\psi_I + \psi_G - 2\psi_O}{2||\mathbf{r}_I - \mathbf{r}_O||^2} + \mathrm{O}(||\mathbf{r}_I - \mathbf{r}_O||^2)$$ Hence, $\psi_O$ is determined as: $$\label{eq:resultneumann} \psi_O = \psi_I - ||\mathbf{r}_I - \mathbf{r}_O||\frac{\partial \psi_O}{\partial n}$$ #### Robin boundary condition A linear combination of the values of $\psi$ and its normal derivative at the boundary point $O$ is specified: $$\label{eq:robin} \alpha \psi_O + \beta \left. \frac{\partial \psi}{\partial n} \right|_O = g$$ where $\alpha$ and $\beta$ are the linear combination coefficients, $g$ is a given value or function. After approximating the normal derivative, we have: $$\label{eq:approxrobin} \alpha \psi_O + \beta \frac{\psi_I - \psi_O}{||\mathbf{r}_I - \mathbf{r}_O||} = g$$ Then, $$\label{eq:resultrobin} \psi_O = \frac{\beta \psi_I - ||\mathbf{r}_I - \mathbf{r}_O|| g}{\beta - ||\mathbf{r}_I - \mathbf{r}_O|| \alpha}$$ #### Cauchy boundary condition For illustration purposes, $\psi$ is replaced by the velocity $\mathbf{V}=(u,v,w)$ that satisfies the slip wall boundary condition: \[eq:slipwall\] $$\begin{aligned} \left. (\mathbf{V} \cdot \mathbf{n}) \right|_{\mathbf{r}=\mathbf{r}_O} &= \mathbf{V}_{S} \cdot \mathbf{n} \\ \left. \frac{\partial (\mathbf{V} \cdot \hat{\mathbf{t}})}{\partial n} \right|_{\mathbf{r}=\mathbf{r}_O} &= 0 \\ \left. \frac{\partial (\mathbf{V} \cdot \tilde{\mathbf{t}})}{\partial n} \right|_{\mathbf{r}=\mathbf{r}_O} &= 0 \end{aligned}$$ where $\mathbf{n}$, $\hat{\mathbf{t}}$, and $\tilde{\mathbf{t}}$ are the unit normal vector, unit tangent vector, and unit bitangent vector at boundary point $O$, respectively. $\mathbf{V}_{S}$ is the velocity of the boundary surface.
After approximating normal derivatives, we have: \[eq:velocity\] $$\begin{aligned} u_O n_x + v_O n_y + w_O n_z &= u_{S} \hat{n}_x + v_{S} \hat{n}_y + w_{S} \hat{n}_z \\ u_O \hat{t}_x + v_O \hat{t}_y + w_O \hat{t}_z &= u_I \hat{t}_x + v_I \hat{t}_y + w_I \hat{t}_z \\ u_O \tilde{t}_x + v_O \tilde{t}_y + w_O \tilde{t}_z &= u_I \tilde{t}_x + v_I \tilde{t}_y + w_I \tilde{t}_z \end{aligned}$$ Since the coefficient matrix is orthogonal, $\mathbf{V}_O$ is determined as below: $$\label{eq:determined} \begin{pmatrix} u_O \\ v_O \\ w_O \end{pmatrix} = \begin{bmatrix} n_x & n_y & n_z \\ \hat{t}_x & \hat{t}_y & \hat{t}_z \\ \tilde{t}_x & \tilde{t}_y & \tilde{t}_z \end{bmatrix}^{T} \begin{pmatrix} u_{S} \hat{n}_x + v_{S} \hat{n}_y + w_{S} \hat{n}_z \\ u_I \hat{t}_x + v_I \hat{t}_y + w_I \hat{t}_z \\ u_I \tilde{t}_x + v_I \tilde{t}_y + w_I \tilde{t}_z \end{pmatrix}$$ All the solution equations of $\psi_O$ now can be written in a unified form: $$\label{eq:unified} \psi_O = C \psi_I + R.R.H.S.$$ where, the value of the coefficient $C$ and the rest right hand side $R.R.H.S.$ are in Table \[tab:bcmap\]. 
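As an illustration of Eq. \[eq:determined\], the boundary-point velocity for a slip wall follows from an orthogonal change of basis: keep the wall-normal component of the surface velocity and the tangential components of the image-point velocity. The sketch below assumes a precomputed orthonormal frame $(\mathbf{n}, \hat{\mathbf{t}}, \tilde{\mathbf{t}})$; the function name is ours, not from the paper's appendix code.

```python
import numpy as np

def slip_wall_velocity(n, t, b, V_I, V_S):
    """Boundary-point velocity V_O for a slip wall:
    normal component from the surface velocity V_S,
    tangential components from the image-point velocity V_I.
    n, t, b: orthonormal unit normal, tangent, and bitangent at O."""
    M = np.array([n, t, b], dtype=float)  # rows are n, t, b
    rhs = np.array([np.dot(V_S, n), np.dot(V_I, t), np.dot(V_I, b)])
    return M.T @ rhs                      # M is orthogonal, so M^T = M^{-1}
```

For a stationary wall with $\mathbf{n}=(1,0,0)$ and $\mathbf{V}_I=(2,3,4)$, the normal component is removed while the tangential components pass through, giving $\mathbf{V}_O=(0,3,4)$.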
  Type        Example Form                                                                                $C$                                                              $R.R.H.S.$
  ----------- ------------------------------------------------------------------------------------------- ---------------------------------------------------------------- ----------------------------------------------------------------------------------------------
  Dirichlet   $\psi_O = g$                                                                                $0$                                                              $g$
  Neumann     $\left. \frac{\partial \psi}{\partial n} \right|_O = \frac{\partial \psi_O}{\partial n}$    $1$                                                              $- ||\mathbf{r}_I - \mathbf{r}_O||\frac{\partial \psi_O}{\partial n}$
  Robin       $\alpha \psi_O + \beta \left. \frac{\partial \psi}{\partial n} \right|_O = g$               $\frac{\beta}{\beta - ||\mathbf{r}_I - \mathbf{r}_O|| \alpha}$   $\frac{- ||\mathbf{r}_I - \mathbf{r}_O|| g}{\beta - ||\mathbf{r}_I - \mathbf{r}_O|| \alpha}$
  Cauchy      Eq. \[eq:slipwall\]                                                                         $\begin{bmatrix} n_x & n_y & n_z \\ \hat{t}_x & \hat{t}_y & \hat{t}_z \\ \tilde{t}_x & \tilde{t}_y & \tilde{t}_z \end{bmatrix}^{T} \begin{bmatrix} 0 & 0 & 0 \\ \hat{t}_x & \hat{t}_y & \hat{t}_z \\ \tilde{t}_x & \tilde{t}_y & \tilde{t}_z \end{bmatrix}$   $\begin{bmatrix} n_x & n_y & n_z \\ \hat{t}_x & \hat{t}_y & \hat{t}_z \\ \tilde{t}_x & \tilde{t}_y & \tilde{t}_z \end{bmatrix}^{T} \begin{bmatrix} n_x & n_y & n_z \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \cdot \mathbf{V}_S$

  : Value map of $C$ and $R.R.H.S.$ for different boundary conditions.[]{data-label="tab:bcmap"}

Due to the unknown $\psi_I$ in Eq. , the solution equation of $\psi_O$ is coupled with the solution equation of $\psi_I$ in the correction step. To solve this problem, one method is a synchronous solving approach that solves $\psi_O$ and $\psi_I$ simultaneously: $$\label{eq:synchronous} \begin{cases} \psi_O = C \psi_I + R.R.H.S. \\ \psi_I = \frac{\psi_I^* + \frac{w(d_O)}{\sum w(d_N)}\psi_O}{1+\frac{w(d_O)}{\sum w(d_N)}} \end{cases}$$ The other is an asynchronous solving approach: first, solve $\psi_O$ by approximating the unknown $\psi_I$ with the pre-estimated $\psi_I^*$; then, solve $\psi_I$ in the correction step. $$\label{eq:asynchronous} \begin{cases} \psi_O = C \psi_I^* + R.R.H.S. \\ \psi_I = \frac{\psi_I^* + \frac{w(d_O)}{\sum w(d_N)}\psi_O}{1+\frac{w(d_O)}{\sum w(d_N)}} \end{cases}$$ The enforcement of Dirichlet and trivial Neumann boundary conditions is equivalent in these two approaches. When the asynchronous solving approach is adopted, the physical boundary condition enforcement step and the correction step can be iteratively implemented to improve the accuracy of enforcing other boundary conditions.
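Putting the pieces together, the asynchronous variant of the three-step scheme (Eq. \[eq:asynchronous\]) can be sketched for a scalar variable as follows; for Cauchy boundary conditions, $C$ becomes the matrix form of Table \[tab:bcmap\]. This is a minimal illustration under the unified form $\psi_O = C\,\psi_I^* + R.R.H.S.$, not the solver code of the appendix.

```python
import numpy as np

def idw_weight(d, d_tiny, q=2):
    """Inverse-power weighting w(d) = 1/d^q with Shepard's overflow guard."""
    return 1.0 / max(d, d_tiny) ** q

def reconstruct_image_value(r_I, fluid_pts, fluid_vals, r_O, C, rrhs,
                            R_I, d_tiny, q=2):
    """Asynchronous three-step scheme for a scalar psi:
    1) predict psi_I* by IDW over fluid nodes in the domain of dependence,
    2) enforce the boundary condition: psi_O = C * psi_I* + R.R.H.S.,
    3) correct psi_I by adding O as an extra IDW stencil node."""
    wsum, wvsum = 0.0, 0.0
    for r_N, psi_N in zip(fluid_pts, fluid_vals):
        d_N = np.linalg.norm(r_I - r_N)
        if d_N <= R_I:
            w = idw_weight(d_N, d_tiny, q)
            wsum += w
            wvsum += w * psi_N
    psi_I_star = wvsum / wsum                     # prediction step
    psi_O = C * psi_I_star + rrhs                 # BC enforcement step
    w_O = idw_weight(np.linalg.norm(r_I - r_O), d_tiny, q)
    psi_I = (wvsum + w_O * psi_O) / (wsum + w_O)  # correction step
    return psi_I, psi_O
```

Note that the running sums over the fluid nodes are reused in the correction step, matching the remark that no recomputation on fluid nodes is needed; a Dirichlet condition corresponds to `C=0.0, rrhs=g` and a trivial Neumann condition to `C=1.0, rrhs=0.0`.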
Method discussion ----------------- The proposed three-step prediction-correction flow reconstruction scheme enables the developed immersed boundary method to enforce a wide class of boundary conditions in a straightforward and consistent manner. In polynomial reconstruction based methods, different linear systems must be constructed and solved for flow variables satisfying different boundary conditions. In the current method, the enforcement of boundary conditions at boundary points is a separate step of flow reconstruction. This feature provides efficient and uniform boundary treatment for flow with an arbitrary number of field variables that satisfy different types of boundary conditions. Moreover, the proposed immersed boundary method is scalable in the number of stencils used in the flow reconstruction. In contrast, for polynomial reconstruction based methods, a fixed minimum number of stencils is always required to avoid under-determined linear systems, and special treatments are required for strongly concave or convex geometries [@gao2007improved; @kapahi2013three]. Therefore, the scalable property is where the robustness of the method herein lies: it leads to automatic adaptation to a varying number of stencil nodes; and it guarantees uniform validity when at least one fluid node exists in the domain of dependence of the image point, a condition that is ensured by the definition of a ghost node. In addition, the proposed immersed boundary method can be applied to multiple layers of ghost nodes without extra constraints. The asynchronous solving approach without iterative implementation is currently adopted and examined in this paper, since the validity of the synchronous solving approach is established when the validity of the asynchronous solving approach is proved.
A code that features the implementation of the three-step prediction-correction flow reconstruction scheme is provided in \[app:source\_code\] to illustrate the simplicity and efficiency of the proposed immersed boundary method.

Numerical implementation {#sec:numeric}
========================

The governing equations employed in the numerical solver of this paper are the nondimensionalized conservative form of the three-dimensional Navier-Stokes equations in Cartesian coordinates [@anderson1995computational]. The temporal derivatives in the governing equations are discretized using the third-order TVD Runge-Kutta method [@shu1988efficient; @gottlieb2001strong]. The second order upwind TVD scheme [@harten1984class] and the fifth order WENO scheme [@jiang1996efficient] are both implemented for the discretization of convective fluxes, while a central differencing scheme is used for the discretization of diffusive fluxes [@ferziger2002computational]. Strang splitting [@strang1964accurate; @strang1968construction] is employed for dimensional splitting to relax stability constraints and reduce numerical complexity [@woodward1984numerical]. The fluid-solid coupling pattern used in this paper is described in \[app:fluid\_solid\_coupling\].

Numerical experiments {#sec:validity}
=====================

Because the Navier-Stokes equations are used, the no-slip wall condition with an adiabatic assumption is enforced in the test cases unless otherwise stated. Numerical results are computed with the fifth order WENO scheme unless otherwise stated.

Shock diffraction over a cylinder {#case:1_cyn}
---------------------------------

A Mach $2.81$ planar incident shock interacting with a stationary circular cylinder is considered. Comprehensive descriptions of this problem are available in [@bryson1961diffraction; @bazhenova1984unsteady; @kaca1988interferometric; @ripley2006numerical; @sambasivan2009ghostb].
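As a concrete illustration of the time integration named in Section \[sec:numeric\], the third-order TVD (strong-stability-preserving) Runge-Kutta stages can be sketched for a single scalar degree of freedom; the solver applies the same stages to every conserved variable on the grid, and the decay right-hand side below is only a test problem:

```c
#include <assert.h>
#include <math.h>

typedef double (*Rhs)(double u, double t);

/* One third-order TVD (SSP) Runge-Kutta step for du/dt = L(u, t). */
double ssp_rk3_step(double u, double t, double dt, Rhs L)
{
    double u1 = u + dt * L(u, t);                                   /* stage 1 */
    double u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1, t + dt));        /* stage 2 */
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2, t + 0.5 * dt)); /* stage 3 */
}

/* Example right-hand side: linear decay du/dt = -u. */
double decay(double u, double t) { (void)t; return -u; }
```

For the decay problem the scheme reproduces $e^{-t}$ with a global error of order $\Delta t^3$.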
In the current study, a circular cylinder with diameter $D = 1$ is positioned at the center of a $6D \times 6D$ square domain while an initial shock is positioned $0.5D$ upstream of the cylinder. This configuration is similar to [@ripley2006numerical] except that the full domain size without a symmetrical boundary assumption is used in this paper.

### Grid sensitivity test

A consistent numerical method should provide numerical solutions that become less sensitive to the grid size as the mesh is refined. However, the level of numerical error depends on the features of the flow resolved by the grid [@oberkampf2002verification]. The presence of discontinuities, such as shocks, slip surfaces, and interfaces, introduces numerical errors on a grid in a complex way [@oberkampf2002verification] and complicates the error estimation process, especially the evaluation of local grid convergence behavior [@roache1998verification]. Acknowledging these difficulties, this part examines the grid convergence behavior of the presented immersed boundary method for flow with strong discontinuities.

[0.48]{} ![Grid sensitivity test. (a) Global grid convergence at a $33^\circ$ line. (b) Global grid convergence on shock position. (c) Local grid convergence at a point with $Arc\ Length = 2.96$. (d) Local relative error at the sample point.[]{data-label="fig:grid_sensitivity"}](grid_global_distribution "fig:"){width="\textwidth"}   [0.48]{} ![Grid sensitivity test. (a) Global grid convergence at a $33^\circ$ line. (b) Global grid convergence on shock position. (c) Local grid convergence at a point with $Arc\ Length = 2.96$. (d) Local relative error at the sample point.[]{data-label="fig:grid_sensitivity"}](grid_global_position "fig:"){width="\textwidth"} \ [0.48]{} ![Grid sensitivity test. (a) Global grid convergence at a $33^\circ$ line. (b) Global grid convergence on shock position. (c) Local grid convergence at a point with $Arc\ Length = 2.96$.
(d) Local relative error at the sample point.[]{data-label="fig:grid_sensitivity"}](grid_local_value "fig:"){width="\textwidth"}   [0.48]{} ![Grid sensitivity test. (a) Global grid convergence at a $33^\circ$ line. (b) Global grid convergence on shock position. (c) Local grid convergence at a point with $Arc\ Length = 2.96$. (d) Local relative error at the sample point.[]{data-label="fig:grid_sensitivity"}](grid_local_error "fig:"){width="\textwidth"}

A series of successively refined grids is employed to study the grid sensitivity of the developed immersed boundary method, and the numerical results are obtained at $t=1.0$. Due to the presence of complex discontinuity patterns, the global grid convergence behavior is examined on a line segment from point $(-0.27232, 0.41934)$ to point $(2.5, 2.21970)$, which lies on a $33^\circ$ tangent line of the cylinder and is plotted in Fig. \[fig:triplepoint\]. As shown in Fig. \[fig:grid\_global\_distribution\], when the grid resolution changes significantly from the coarsest $251\times251$ grid to the finest $2001\times2001$ grid, excellent overall agreement is achieved, with the main discrepancies occurring near boundary interfaces and flow discontinuities. These discrepancies are effectively reduced when the grid resolution is sufficiently increased. Observing that the discrepancy of the predicted discontinuity mainly concerns its position rather than its magnitude, the global grid convergence behavior of the discontinuity position is examined in Fig. \[fig:grid\_global\_position\], which shows well-behaved grid convergence. Since local errors are transported throughout the computational region and are strongly affected by flow discontinuities, an examination of the local grid convergence behavior has limited implications [@roache1998verification].
For instance, the predicted position of discontinuities, which highly depends on grid resolution, has a major influence on the values of flow quantities near discontinuities. Nonetheless, the local grid convergence at a sample point, which corresponds to the peak value of the $2001\times2001$ grid solution, is examined in this paper. Fig. \[fig:grid\_local\_value\] shows the predicted density at this sample point, and Fig. \[fig:grid\_local\_error\] shows the numerical error relative to the predicted value of the finest grid. As shown in Fig. \[fig:grid\_local\_error\], the local relative error of the predicted density at this sample point is effectively reduced as the grid is refined. According to the discussed results, the developed immersed boundary method of this paper has well-behaved global and local grid convergence properties over a wide range of grid resolution.

### Numerical results

Numerical results of three grids with $501 \times 501$, $1001 \times 1001$, and $2001 \times 2001$ nodes, denoted as grid $C$, grid $M$, and grid $F$ respectively, are discussed below.

[0.32]{}   [0.32]{}   [0.32]{} \ [0.32]{}   [0.32]{}   [0.32]{}

Numerical results of grid $M$ are illustrated in Fig. \[fig:1\_cyn\_nomv\] in the form of the time evolution of density contour lines. These snapshots clearly show the diffraction of the curved Mach stems over the cylinder and the formation of a wake by the collision of the two opposite diffracting shocks, physical processes that are reported in experimental studies [@bryson1961diffraction; @bazhenova1984unsteady]. Compared with the interferometric measurements of [@kaca1988interferometric] and the numerical results in [@sambasivan2009ghostb; @ripley2006numerical; @ji2010numerical], the slip line and the reflected and diffracted shocks over the immersed boundary are all resolved remarkably well in the results of this paper, which demonstrates the validity of the developed method.
To investigate the capability of the developed immersed boundary method of enforcing different physical boundary conditions, two types of wall boundary conditions are studied on grid $M$, and results at $t=1.0$ are shown in Fig. \[fig:bc\_compare\]. After the formation of collision wakes as in Fig. \[fig:1\_cyn\_nomv\_60\_noslip\] and Fig. \[fig:1\_cyn\_nomv\_60\_slip\], a more wall-adhesive low-speed wake is formed in the no-slip wall case. Fig. \[fig:1\_cyn\_nomv\_60\_vel\_gradient\] and Fig. \[fig:1\_cyn\_nomv\_60\_vel\_gradient\_slip\] show the velocity gradient distribution in the wall region. A thin but gradually growing boundary layer with large velocity gradients is produced along the no-slip wall, while no boundary layer is present in the slip wall case. [0.48]{} ![Numerical results with two types of wall boundary conditions. (a) Density contour colored by velocity, no-slip wall. (b) Velocity gradient color map, no-slip wall. (c) Density contour colored by velocity, slip wall. (d) Velocity gradient color map, slip wall.[]{data-label="fig:bc_compare"}](1_cyn_nomv_60 "fig:"){width="\textwidth"}   [0.48]{} ![Numerical results with two types of wall boundary conditions. (a) Density contour colored by velocity, no-slip wall. (b) Velocity gradient color map, no-slip wall. (c) Density contour colored by velocity, slip wall. (d) Velocity gradient color map, slip wall.[]{data-label="fig:bc_compare"}](1_cyn_nomv_60_vel_gradient "fig:"){width="\textwidth"} \ [0.48]{} ![Numerical results with two types of wall boundary conditions. (a) Density contour colored by velocity, no-slip wall. (b) Velocity gradient color map, no-slip wall. (c) Density contour colored by velocity, slip wall. (d) Velocity gradient color map, slip wall.[]{data-label="fig:bc_compare"}](1_cyn_nomv_60_slip "fig:"){width="\textwidth"}   [0.48]{} ![Numerical results with two types of wall boundary conditions. (a) Density contour colored by velocity, no-slip wall.
(b) Velocity gradient color map, no-slip wall. (c) Density contour colored by velocity, slip wall. (d) Velocity gradient color map, slip wall.[]{data-label="fig:bc_compare"}](1_cyn_nomv_60_vel_gradient_slip "fig:"){width="\textwidth"}

![X-velocity profiles at a line segment from point $(0,\ 0.50)$ to point $(0,\ 0.55)$.[]{data-label="fig:1_cyn_nomv_vel_profile"}](1_cyn_nomv_vel_profile){width="48.00000%"}

Fig. \[fig:1\_cyn\_nomv\_vel\_profile\] further shows the x-velocity profiles at a vertical line segment from point $(0,\ 0.50)$ to point $(0,\ 0.55)$. In the no-slip wall case, the velocity profile shows zero velocity at the wall, increasing to the local free-stream value away from the wall. In the slip wall case, the velocity profile shows a maximum velocity at the wall, decreasing to the local free-stream velocity. These observations agree with the flow physics at this vertical line of flow over a cylinder. According to the successful solutions of shock diffraction over a cylinder with no-slip and slip wall boundary conditions, the developed immersed boundary method is able to correctly enforce different types of physical boundary conditions.

[0.32]{}   [0.32]{}   [0.32]{} \ [0.32]{}   [0.32]{}   [0.32]{} \ [0.32]{}   [0.32]{}

### Shock detachment distance

As shown in Fig. \[fig:triplepoint\], a bow-shaped shock is reflected from the cylinder when the incident shock hits the cylinder. In [@kaca1988interferometric], the concepts of shock detachment distance and nondimensional time after collision were introduced to describe the reflected incident shock. Fig. \[fig:1\_cyn\_detach\] compares the predicted detachment distance of the present numerical results with the experimental results of [@kaca1988interferometric]. [0.48]{} ![Comparison of detachment distance. (a) TVD.
(b) WENO.[]{data-label="fig:1_cyn_detach"}](1_cyn_detach_weno "fig:"){width="\textwidth"} [0.48]{} ![Comparison of triple-point path. (a) TVD. (b) WENO.[]{data-label="fig:1_cyn_triple_path"}](1_cyn_triple_path_tvd "fig:"){width="\textwidth"}   [0.48]{} ![Comparison of triple-point path. (a) TVD. (b) WENO.[]{data-label="fig:1_cyn_triple_path"}](1_cyn_triple_path_weno "fig:"){width="\textwidth"}

As pointed out in [@ripley2006numerical], numerical studies generally predict a greater detachment distance, and the detachment distance follows a parabolic distribution with respect to the nondimensional time after collision rather than the linear behavior reported in [@kaca1988interferometric]. This parabolic behavior of the detachment distance, which is clearly observed in the results of this paper, is also present in the polynomial reconstruction based results of [@sambasivan2009ghostb], the cut-cell based results of [@ji2010numerical], and the unstructured mesh based results of [@ripley2006numerical].

### Triple-point path

The incident shock, reflected shock, and diffracted shock intersect and form an upper triple point on each side of the plane of symmetry. As shown in Fig. \[fig:triplepoint\], this triple point travels in space and produces an upper triple-point path. Interferometric measurements of [@kaca1988interferometric] indicate that this upper triple-point path is tangent to the cylinder at an angle of $33^{\circ}$ for Mach numbers in the range of $1.42 - 5.96$. The predicted triple-point paths of this paper and the experimental correlation of [@kaca1988interferometric] are compared in Fig. \[fig:1\_cyn\_triple\_path\]. The least-squares linear regressions of the predicted triple-point paths of TVD with grids $C$, $M$, and $F$ are about $29.0^{\circ}$, $29.5^{\circ}$, and $29.6^{\circ}$ respectively; those of WENO with grids $C$, $M$, and $F$ are about $30.2^{\circ}$, $30.3^{\circ}$, and $30.3^{\circ}$ respectively.
For grid $M$ with the slip wall boundary condition, TVD and WENO predict $30.1^{\circ}$ and $30.3^{\circ}$ respectively. These results, which agree well with the experimental correlation of [@kaca1988interferometric] and very well with the polynomial reconstruction based results of [@sambasivan2009ghostb], the cut-cell based results of [@ji2010numerical], and the unstructured mesh based results of [@ripley2006numerical], demonstrate the accuracy of the developed method.

Explosive dispersal of zero-gap particles
-----------------------------------------

To test the robustness of the proposed method, strongly irregular, concave, and changing geometries formed by initially zero-gap configured particles are used. Zero-gap particles may represent one of the most challenging geometries in fluid-solid interactions. These are preliminary cases for studying particle jetting instabilities from multiphase explosive detonation [@ripley2014jetting; @zhang2014large]; to date, no study concerning zero-gap particles, which is the common configuration when particles are packed in practice, has been found. As shown in Fig. \[fig:18\_cyn\_domain\], eighteen identical particles are zero-gap configured in a $1 \times 1$ computational domain. The centers of the particles are evenly distributed on a circle whose radius is equal to $0.2$. A flow state $(\rho, u, v, p)^T=(3.67372, 0, 0, 9.04545)^T$ is initially positioned at a circular region centered in the domain, and the radius of the circular region is $0.1$. The flow state in the rest of the region is set to $(\rho, u, v, p)^T=(1, 0, 0, 1)^T$. A grid with $2001 \times 2001$ nodes is used for the numerical solution. Newton’s second law of motion is employed to evolve the spatial distribution of the particles; collision and gravity effects are not currently included.
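The particle update just described can be sketched as a Newton's-second-law sub-step embedded in the Strang splitting of \[app:fluid\_solid\_coupling\]; the semi-implicit Euler integrator and the constant surface force are assumptions of this sketch:

```c
#include <assert.h>
#include <math.h>

typedef struct { double x[2], v[2], m; } Particle;

/* Newton's second law over one sub-step (semi-implicit Euler):
 * velocity is updated from the integrated surface force, then position. */
void advance_particle(Particle *p, const double F[2], double dt)
{
    for (int d = 0; d < 2; ++d) {
        p->v[d] += F[d] / p->m * dt;
        p->x[d] += p->v[d] * dt;
    }
}

/* One explicitly coupled step: particle half step, fluid step, particle
 * half step (the fluid solve is omitted in this sketch). */
void strang_coupled_step(Particle *p, const double F[2], double dt)
{
    advance_particle(p, F, 0.5 * dt);
    /* evolve_fluid(dt); */
    advance_particle(p, F, 0.5 * dt);
}
```

For a constant force the two half steps reproduce the exact momentum change $\Delta v = (F/m)\,\Delta t$ over the full step.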
![Illustration of computational configuration of zero-gap particles.[]{data-label="fig:18_cyn_domain"}](18_cyn_domain){width="60.00000%"}

Numerical solutions of the explosive dispersal process are illustrated in Fig. \[fig:18\_cyn\_dispersal\]. Complex shock-shock interaction, shock reflection, and shock diffraction behaviors are clearly resolved in the numerical results. In addition, compression waves are formed in front of the moving particles, and high velocity fluid jets are generated in the regions between moving particles. These physically consistent and successful solutions of explosive dispersal of zero-gap particles demonstrate the high robustness of the proposed immersed boundary method for strongly irregular, concave, and changing geometries.

[0.30]{}   [0.30]{}   [0.30]{} \ [0.30]{}   [0.30]{}   [0.30]{}

Shock diffraction over two partially overlapped spheres
-------------------------------------------------------

A Mach $2.81$ planar shock interacting with two stationary and partially overlapped spheres (Fig. \[fig:2\_sphere\_domain\]) is solved to test the proposed immersed boundary method for three-dimensional irregular and concave geometries. The diameter of these two identical spheres is $D = 1$, and their centers are located at $(0, 0, 0)$ and $(0.5D, 0, 0)$ respectively. The size of the computational domain is $6D \times 6D \times 6D$, and the computational grid has $251 \times 251 \times 251$ nodes. Fig. \[fig:2\_sphere\_rho\] shows the density contour on two perpendicular semi-planar slices. As shown in the two identical density contour slices, the flow discontinuities resulting from the concave region are adequately resolved on this relatively coarse grid. Moreover, the axisymmetry of this flow problem is preserved. [0.48]{} ![Shock interacting with two partially overlapped spheres. (a) Computational domain. (b) Density contour on two perpendicular semi-planar slices.
Colored by velocity.[]{data-label="fig:2_sphere"}](2_sphere_domain "fig:"){width="\textwidth"}   [0.48]{} ![Shock interacting with two partially overlapped spheres. (a) Computational domain. (b) Density contour on two perpendicular semi-planar slices. Colored by velocity.[]{data-label="fig:2_sphere"}](2_sphere_slices "fig:"){width="\textwidth"}

Because non-body-conformal Cartesian grids are used, unphysical flux over the immersed boundary is a fundamental issue in immersed boundary methods [@mittal2005immersed], and considerable effort has been devoted to overcoming this issue [@mark2008derivation; @seo2011sharp]. Since Eq.  establishes the convergence property of the current method, namely that the constructed ghost flow converges to the exact physical boundary conditions, we now numerically examine the unphysical flux at practical grid sizes using the solved problems. Fig. \[fig:stream\_trace\] presents the stream traces of the shock diffraction problems. The stream traces computed by the developed immersed boundary method are closely aligned with the geometry surfaces, even in the three-dimensional problem, where a coarse grid is employed. [0.48]{} ![Stream traces colored by velocity with corresponding analytical geometry boundaries. (a) Shock diffraction over a cylinder, no-slip wall. (b) Shock diffraction over a cylinder, slip wall. (c) Shock diffraction over two partially overlapped spheres, no-slip wall.[]{data-label="fig:stream_trace"}](1_cyn_nomv_streamtrace "fig:"){width="\textwidth"}   [0.48]{} ![Stream traces colored by velocity with corresponding analytical geometry boundaries. (a) Shock diffraction over a cylinder, no-slip wall. (b) Shock diffraction over a cylinder, slip wall. (c) Shock diffraction over two partially overlapped spheres, no-slip wall.[]{data-label="fig:stream_trace"}](1_cyn_nomv_slip_streamtrace "fig:"){width="\textwidth"} \ [0.48]{} ![Stream traces colored by velocity with corresponding analytical geometry boundaries.
(a) Shock diffraction over a cylinder, no-slip wall. (b) Shock diffraction over a cylinder, slip wall. (c) Shock diffraction over two partially overlapped spheres, no-slip wall.[]{data-label="fig:stream_trace"}](2_sphere_streamtrace "fig:"){width="\textwidth"}

To quantify the unphysical flux over the immersed boundary, the absolute flux of the shock diffraction over a cylinder problems at time $t=1.0$ is examined; the flux is calculated on the first layer of ghost nodes. $$\label{eq:absolute_flux} \text{absolute flux} = \frac{1}{S}\iint_{S} \, |(\mathbf{V} - \mathbf{V}_{S})\ \cdot \ \mathbf{n}| \, \mathrm{d}s$$ The absolute flux distribution $|(\mathbf{V} - \mathbf{V}_{S})\ \cdot \ \mathbf{n}|$ over the cylinder with a no-slip wall on grid $M$ is plotted in Fig. \[fig:1\_cyn\_nomv\_flux\_distribution\], which exhibits a symmetrical distribution. The main unphysical flux is observed in the angular range of $\pm[60^{\circ},\, 100^{\circ}]$, where the flow has a large velocity gradient in the near-wall region. [0.48]{} ![Absolute flux over analytical geometry boundaries of the shock diffraction over a cylinder problems. (a) Absolute flux distribution over the cylinder with no-slip wall on grid $M$. (b) Absolute flux over cylinder with no-slip or slip wall.[]{data-label="fig:absolute_flux"}](1_cyn_nomv_flux_distribution "fig:"){width="\textwidth"}   [0.48]{} ![Absolute flux over analytical geometry boundaries of the shock diffraction over a cylinder problems. (a) Absolute flux distribution over the cylinder with no-slip wall on grid $M$. (b) Absolute flux over cylinder with no-slip or slip wall.[]{data-label="fig:absolute_flux"}](1_cyn_nomv_flux_overall "fig:"){width="\textwidth"}

Fig. \[fig:1\_cyn\_nomv\_flux\_overall\] plots the absolute flux of the different cases, whose average and maximum flow-field velocities are given in Table \[tab:velocity\]. The slip wall and no-slip wall boundary conditions show similar flux values.
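A discrete evaluation of Eq. \[eq:absolute\_flux\] over a list of boundary faces might look like the following; the face data structure and area weights are assumptions of this sketch:

```c
#include <assert.h>
#include <math.h>

/* One boundary face: fluid velocity v, boundary velocity vs, unit normal n,
 * and face area ds. */
typedef struct { double v[3], vs[3], n[3], ds; } Face;

/* Area-weighted mean of |(V - V_S) . n| over the boundary faces, i.e. the
 * discrete form of the absolute-flux diagnostic. */
double absolute_flux(const Face *f, int nfaces)
{
    double flux = 0.0, area = 0.0;
    for (int i = 0; i < nfaces; ++i) {
        double dot = 0.0;
        for (int d = 0; d < 3; ++d)
            dot += (f[i].v[d] - f[i].vs[d]) * f[i].n[d];
        flux += fabs(dot) * f[i].ds;
        area += f[i].ds;
    }
    return flux / area;
}
```

A purely tangential velocity mismatch contributes nothing, so the diagnostic isolates the normal (unphysical) component of the flux.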
When the grid is refined from $251\times251$ to $2001\times2001$, the absolute flux of the no-slip wall case is effectively reduced from about $1.44\times10^{-2}$ to about $4.19\times10^{-3}$; the latter is about $29.09\%$ of the former and about $0.21\%$ of the average velocity of the flow field.

                                  No-slip wall   Slip wall
-------------------------------- -------------- -----------
Average velocity of flow field       $2.00$       $2.01$
Maximum velocity in flow field       $3.83$       $4.41$

: Average velocity and maximum velocity of shock diffraction over a cylinder.[]{data-label="tab:velocity"}

According to these qualitative and quantitative results, the developed immersed boundary method of this paper retains a sharp interface and effectively alleviates unphysical flux over physical boundaries as the grid resolution is improved.

Conclusions {#sec:conclusion}
===========

A sharp interface immersed boundary method is developed. This method enforces Dirichlet, Neumann, Robin, and Cauchy boundary conditions in a straightforward and consistent manner and provides efficient and robust boundary treatment for numerically solving flow with arbitrarily irregular and changing geometries while maintaining accuracy. The effectiveness of the proposed method is confirmed by numerical experiments concerning flows in two and three space dimensions, stationary and moving objects, convex and concave geometries, and no-slip and slip wall boundary conditions.

A sample code for boundary treatment implementation {#app:source_code}
===================================================

Ghost flow reconstruction function {#app:reconstruction}
----------------------------------

    /*
     * Flow reconstruction of Field_G[N] for N field variables at ghost node G.
     */
    /* pre-estimate Field_I[N] in domain of dependence of image point I.
     */
    compute weightedSum[N] and sumOfWeights by Appendix A.2.;
    for (int n = 0; n < N; ++n) {
        Field_Istar[n] = weightedSum[n] / sumOfWeights;
    }
    /* enforce physical boundary conditions to determine Field_O[N] */
    Field_O[0] = C[0] * Field_Istar[0] + R.R.H.S.[0];
    . . .
    Field_O[N-1] = C[N-1] * Field_Istar[N-1] + R.R.H.S.[N-1];
    /* correction step to solve Field_I[N] */
    for (int n = 0; n < N; ++n) {
        Field_I[n] = (weightedSum[n] + Field_O[n] * weight_O) / (sumOfWeights + weight_O);
    }
    /* apply the method of images to construct Field_G[N] at ghost node G */
    for (int n = 0; n < N; ++n) {
        Field_G[n] = 2 * Field_O[n] - Field_I[n];
    }

A search function for inverse distance weighting {#app:inverse_weighting}
------------------------------------------------

    /*
     * Search fluid nodes around node(kI, jI, iI) in the domain of dependence R
     * to apply inverse distance weighting. node(kI, jI, iI) is a computational
     * node whose node coordinates are derived from the corresponding spatial
     * coordinates of an image point I(xI, yI, zI).
     */
    for (int kh = -R; kh <= R; ++kh) {
        for (int jh = -R; jh <= R; ++jh) {
            for (int ih = -R; ih <= R; ++ih) {
                if (Flag[kI+kh][jI+jh][iI+ih] != FLUID) { /* not a fluid node */
                    continue;
                }
                /* a valid stencil node, apply inverse distance weighting */
                compute weight for node(kI+kh, jI+jh, iI+ih) to I(xI, yI, zI);
                sumOfWeights = sumOfWeights + weight;
                for (int n = 0; n < N; ++n) {
                    weightedSum[n] = weightedSum[n] + Field[kI+kh][jI+jh][iI+ih][n] * weight;
                }
            }
        }
    }

Fluid-solid coupling {#app:fluid_solid_coupling}
====================

Currently, a simple fluid-solid coupling pattern is used. The interactions between fluid and solid are explicitly coupled by applying Strang splitting [@strang1964accurate; @strang1968construction] for physical process splitting as follows:

1. Evolve particle dynamics for $\frac{1}{2}\Delta t$
2. Evolve fluid dynamics for $\Delta t$
3.
Evolve particle dynamics for $\frac{1}{2}\Delta t$

The evolution of particle dynamics can be expanded into the following procedures:

1. Integrate surface forces.
2. Update the spatial distribution of particles by particle dynamics models.
3. Detect ghost nodes that fall outside the regions of their corresponding particles. These ghost nodes then become fluid nodes, whose flow-variable values can be reconstructed by inverse distance weighting.
4. Re-mesh the computational domain, and apply the boundary treatment.

Acknowledgements {#acknowledgements .unnumbered}
================

Financial support of this work was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC) and Defence Research and Development Canada (DRDC). This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca) and Compute/Calcul Canada. The first author of this paper is grateful to Prof. Deliang Zhang for introducing Computational Fluid Dynamics and is thankful to Dr. Deyong Wen for discussions of flow visualization.

[^1]: Email: `[email protected]`; Corresponding author
---
abstract: 'Systematic $1/N_c$ counting of correlators is performed to directly relate quark-gluon dynamics to qualitatively different hadronic states order by order. Both 2q and 4q correlators of $\sigma$ quanta are analyzed with $1/N_c$ separation of the instanton, glueball, and, in particular, two meson scattering states. The [*bare*]{} resonance pole with no mixing effects is studied with the QCD sum rules (QSR). The bare mass relation for large $N_c$ mesons, $m_{\rho}<m_{4q}^{I=J=0}<m_{2q}^{I=J=0}$, is derived. The firm theoretical ground of the QSR in the leading $1/N_{c}$ analyses is also emphasized.'
author:
- 'Toru Kojo$^{1}$ and Daisuke Jido$^2$'
title: ' Dynamical study of bare $\sigma$ pole with $1/N_c$ classifications'
---

Quantum chromodynamical (QCD) descriptions of hadron properties have direct relevance to the understanding of the nonperturbative aspects of the strong interaction. For instance, the success of the constituent quark model for global hadron spectra has illuminated some properties of chiral symmetry breaking and confinement, and provided the concept of constituent quarks as quasiparticles inside hadrons. Yet, not all hadrons are alike. In addition to the well-established Nambu-Goldstone bosons, there exist some exceptional states, for example, the light scalar mesons ($\sigma$, $\kappa$, $a_0$, $f_0$) [@Close], the newly observed charmonia ($X$, $Y$, $Z$) [@Belle], the baryon resonance $\Lambda(1405)$, and the flavor exotic $\Theta^{+}(1540)$ [@LEPS]. The studies of these exotic hadrons could provide new viewpoints beyond the simple constituent quark picture, such as multi-quark components and/or inter-hadron dynamics. The lightest scalar meson, $\sigma$ with $I=J=0$, is a typical example which has almost all the important ingredients of the exotic hadrons.
The $\sigma$ meson includes not only the usual $q \bar q$ (2q) but also hadronic states beyond the constituent quark picture: the glueball, the $\pi \pi$ molecule, and the $qq\bar q \bar q$ (4q) state with diquark correlation [@Jaffe]. Thus it is a good laboratory to explore the properties and interplay of these states. Extraction of these properties has direct phenomenological importance not only to hadron spectroscopy but also to nuclear/quark-hadron matter, through the properties of the nuclear force [@Johnston], the chiral order parameter of QCD [@Hatsuda], and the diquark correlation [@Jaffe]. The studies of the $\sigma$ meson, however, are not straightforward. Since the $\sigma$ can be described as an admixture of several hadronic states, it is difficult to identify which hadronic states are responsible for which part of the $\sigma$ properties. Experimental information is still not sufficient to derive a definite conclusion. Therefore, as a first step, it is important to theoretically clarify the properties of each hadronic state in the absence of mixing with other states [@Lattice] and, at the same time, to illuminate the role of the interplay between these hadronic states, by examining the discrepancies between states with no mixing effects and the experimental $\sigma$ data with all mixing effects. For this purpose, we introduce a classification based on the inverse expansion in the number of colors, $1/N_{c}$ [@tHooft; @Witten], for the hadronic states in the correlators made of quark-gluon fields: $\Pi(q^2) = i \int d^{4}x \, e^{iq \cdot x}\langle T J(x) J^{\dagger}(0) \rangle$, including contributions from all possible hadronic intermediate states with the same quantum number as $J(x)$.
One of the largest virtues of the $1/N_{c}$ expansion is that it directly relates the $1/N_c$ classifications of the quark-gluon graphs to the qualitative classifications of the hadronic states with the same quantum number, in a way that the mixing of these hadronic states is suppressed by higher order quark-gluon graphs in $1/N_{c}$. Then we can concentrate on the graphs for the hadronic states of our interest, separating mixing effects into higher orders of $1/N_c$. In this letter, we demonstrate this idea in the case of the correlators of 2q and 4q operators with the $\sigma$ quantum number. On the basis of the $1/N_{c}$ distinction, we can give an inductive definition of the bare “2q” and “4q” states, free from the contributions of the glueball, the instanton, and, in particular, the $\pi\pi$ scattering states, which are the origin of the large width $\sim 500$ MeV [@Caprini] and the large background in the $\sigma$ meson spectrum. A novel consequence of our approach is that the bare “4q” state can be investigated independently of 2-meson states, and this explicitly demonstrates the efficiency of the $1/N_{c}$ distinction of states with the same quantum number but with qualitative differences. The existence and properties of the “4q” state are dynamically studied by comparing them to the “2q” state through the QCD sum rules (QSR) [@Shifman], whose theoretical ground is firm in the leading $1/N_{c}$. We will show the importance of the “4q” component, whose mass is smaller than that of the “2q” case by $150 \sim 200$ MeV despite the larger number of quarks participating in the dynamics. This indicates the existence of a nontrivial correlation responsible for the mass reduction of the “4q” system.
The interpolating fields used in this work are summarized as follows: The 2q interpolating fields are described as $J_{M}^{F}=\bar{q} \tau_F \Gamma_M q$, where the Dirac matrices $\Gamma_M$ are labeled by $M=(S,P,V,A,T)$ for ($1, i\gamma_5, \gamma_{\mu}, \gamma_{\mu}\gamma_5, \sigma_{\mu\nu}$), respectively, and $\tau_F\ (F=1,2,3)$ are the Pauli matrices acting on $q=(u,d)^T$. The 4q operators with the $\sigma$ quantum number are given (assuming ideal mixing for the $\sigma$ meson) by $J_{MM}(x) = \sum_{F=1}^3 J_{M}^{F}(x) J_{M}^{F}(x)$ as products of meson operators (hereafter we take the SU(2) chiral limit for simplicity). Here we first see the $1/N_{c}$ linking between quark-gluon dynamics and hadronic states in the case of the 2q correlators. The known facts are as follows [@tHooft; @Witten]: 1) For quark-gluon diagrams, $n$ internal quark loops are suppressed by $1/N_{c}^n$. In terms of hadrons, $n$-meson or multiquark production with $(q\bar{q})^n$ from $J_M$ is suppressed by $1/N_{c}^n$. 2) The disconnected diagrams with two gluon emission are suppressed by $1/N_{c}$. In terms of hadrons, $q\bar{q}$-glueball mixing is suppressed by $1/N_{c}$. 3) Instanton effects are suppressed by $\sim e^{-N_{c}}$. 4) For the meson properties, the $n$-meson couplings are given as $g_{nM}=O(N_{c}^{(2-n)/2})$. Here 1)$\sim$3) imply that the leading diagrams of $O(N_{c})$ can be naturally interpreted as the bare “2q” state, since the 4q propagation diagrams/$\pi\pi$ scattering, glueball, and instanton contributions do not appear at this order. Similar identification and subsequent separation are also possible in the case of the 4q correlator, which incorporates the 4q participating diagrams from the beginning. Here we give an inductive definition of the “4q” state following the $1/N_c$-based orthogonality conditions: a) “4q” can [*not*]{} appear in the leading $N_{c}$ 2q correlator; b) “4q” can appear in the 4q correlators even after the separation of 2-meson scattering states.
These conditions ensure that its dynamical origin is different from the “2q” and 2-meson molecule states (the glueball and instanton are easily verified to be of higher order in $1/N_{c}$ than those considered below, so we will not discuss them in the following). Although this definition gives a convenient starting point to discuss the qualitative difference between hadronic states in the $\sigma$ meson, the study of the “4q” component requires systematic $1/N_{c}$ arguments for the 4q correlators, beyond the leading $O(N_{c}^2)$ quark-gluon diagrams including only 2 planar loops (Fig.\[fig:2pointgraph\],a), which are naturally interpreted as free 2-meson scattering and are irrelevant for the study of the “4q” properties. Thus we must proceed to the next-to-leading order in $1/N_{c}$: the $O(N_c)$ diagrams, which can include 2-meson scattering, “2q”, and “4q” states. ![(Color online) Examples of the $O(N_c^2)$ and $O(N_c)$ quark-gluon diagrams for 2 and 3 point correlators.](ncgraph.eps){width="6.0cm" height="3.2cm"} \[fig:2pointgraph\] The $O(N_{c})$ quark-gluon diagrams in the 2-point function can include various hadronic contributions. The easiest way to classify them is to consider the overlap strength of the operator $J_{MM}$ with hadronic states, involving all the elements of hadronic diagrams. We first classify the overlap strength of the 4q field with the 2-meson states based on $1/N_{c}$, employing the 3-point correlator among the 4q current $J_{MM}$ and two separated meson operators $J_{M'}$ (Fig.\[fig:2pointgraph\], d-f). An explicit examination of quark-gluon graphs shows that the leading order diagrams are $O(N_c^2)$ for the $M=M'$ case, and $O(N_c)$ for $M \neq M'$. Combining these facts with the fact that the overlap strength of $J_{M'}$ with the 2q meson state $|M'\rangle$ is $O(N_{c}^{1/2})$, the remaining part should be $$\begin{aligned} \langle 0|J_{MM}|M'M'\rangle = O(N_c)\delta_{MM'} + O(1)+.... 
\label{eq:overlap}\end{aligned}$$ The first term represents the direct coupling to $|MM \rangle$, while the second term reflects the fact that the transition into the $|M'M'\rangle$ final state needs additional interactions. This higher order counting is crucial for the separation of the $\pi\pi$ scattering states from the 2-point correlators. On the other hand, the overlap strengths with the “2q” and “4q” states cannot be deduced from $1/N_c$ arguments alone, and are assumed to be $$\begin{aligned} \langle 0|J_{MM}|R\rangle = O(N_c^{1/2}), \ \ (R="2q"\ {\rm or}\ "4q") \label{eq:assumption}\end{aligned}$$ which will be justified later through the dynamical calculations. Similarly, the “4q” state will be identified by examining the quantitative difference of poles in the 2q and 4q correlators, which is found to be large enough to distinguish the “4q” and “2q” states. The coupling of $R$ to two mesons is estimated from the 3-point function in the same way as the meson couplings obtained in Ref. [@Witten], and is found to be $O(N_{c}^{-1/2})$. Now we can classify the hadronic states in the 2-point correlators $\langle J_{MM} J_{M'M'} \rangle$ based on $1/N_{c}$ (see Fig.\[fig:2pointgraph\], a-c): (i) If $M=M'$, the $O(N_c^2)$ quark-gluon graphs include only the free 2M scattering states in the region $E \ge 2m_M$. Otherwise, the contributions from these quark-gluon diagrams vanish, indicating the absence of 2-meson scattering states. (ii) The $O(N_c)$ graphs include the 2M or 2M’ scattering states and possible resonances, “2q” and/or “4q”. Note that the relations (\[eq:overlap\]) in the case $M,M'\neq P\ {\rm nor}\ A$ indicate that the 2$\pi$ scattering states are not included up to $O(N_c)$ diagrams, and then the resonance peaks (if they exist) [*below*]{} $2m_{M}$ are isolated and have zero width since the decay channel is absent. Therefore, we can now reduce the $\sigma$ spectrum in the 4q correlator into peak(s) plus continuum [*if we retain only diagrams up to $O(N_c)$*]{}. 
This separate investigation of the $O(N_c^2)$ and $O(N_c)$ parts of QCD dynamics enables step-by-step analyses of the 2-meson scattering, “2q”, and “4q” spectra. In the application of the QSR, we perform the operator product expansion (OPE) for the correlators in the deep Euclidean region ($q^2=-Q^2$), then translate them, [*term by term in $1/N_c$*]{}, into the [*integral*]{} of the hadronic spectral function through the dispersion relation: $$\begin{aligned} {\rm \Pi}^{ope}_{N_c^n}(-Q^2) = \int_0^{\infty}\hspace{-0.2cm} ds\ \frac{1}{\pi}\frac{ {\rm Im \Pi}^h_{N_c^n}(s) }{s+Q^2} \ \ (n=2,1). \label{disp}\end{aligned}$$ Now we emphasize the practical merits of the $1/N_c$ expansion in the application of the QSR. First, the higher dimension condensates in the OPE, whose values are not well known despite their importance, can be factorized into products of the known condensates, $\langle \bar q q \rangle$, $\langle G^2 \rangle$, and $\langle \bar q g_s \sigma G q \rangle$. For example, $\langle (\bar q Q) (\bar Q q) \rangle \nonumber = \langle \bar q Q \rangle \langle \bar Q q \rangle + \sum \langle 0| \bar q Q |M \rangle \langle M| \bar Q q |0 \rangle \rightarrow \langle \bar q Q \rangle \langle \bar Q q \rangle$ holds in the leading $1/N_c$ estimation, since $\langle \bar q q \rangle$ is $O(N_c)$ while $\langle 0|\bar q \Gamma q |M \rangle$ is $O(N_c^{1/2})$ [@Witten]. Keeping this merit, we will deduce the final $O(N_c)$ results from the off-diagonal correlator $\langle J_{VV}J^{\dag}_{SS} \rangle$, whose leading order is $O(N_c)$ and which is thus free from factorization violations in the $O(N_c)$ OPE. Secondly, the lowest resonance in the reduced $O(N_c)$ spectra ${\rm Im \Pi}_{N_c}^{h}(s)$ for the “2q” and “4q” states can be described as a sharp peak because of the absence of the decay channel. 
Applying the usual quark-hadron duality approximation to the higher excited states, $\pi{\rm Im \Pi}^{h}_{N_c} (s) = \lambda^2 \delta(s-m_h^2) + \theta(s-s_{th}) \pi {\rm Im \Pi}_{N_c}^{ope} (s)$, and after the Borel transformation of Eq.(\[disp\]), we can express the effective mass as $$\begin{aligned} m_h^2(M^2;s_{th}) \equiv \frac{ \int_0^{s_{th}} \!ds \ e^{-s/M^2 }s\ {\rm Im} \Pi^{ope} (s) } { \int_0^{s_{th}} \!ds\ e^{-s/M^2 }{\rm Im} \Pi^{ope} (s) }. \label{eq:peakmass}\end{aligned}$$ $s_{th}$ can be uniquely fixed to satisfy the least sensitivity [@LScriteria] of the expression (\[eq:peakmass\]) against the variation of $M$, since the physical peak should not depend on the artificial expansion parameter $M$. This criterion is justified only when the peak is very narrow, and our $1/N_c$ reduction of the spectra is essential for its application, allowing the QSR framework to determine all physical parameters ($m_h, \lambda, s_{th}$) in a self-contained way. In practical applications of the QSR, it is essential to reduce the errors from the finite order truncation of the OPE and from the quark-hadron duality approximation. Thus Eq.(\[eq:peakmass\]) must be evaluated in an appropriate $M^2$ window ($M^2_{min},\ M^2_{max}(s_{th})$) to achieve the conditions: good OPE convergence for $M_{min}$ (highest dimension terms $\le$ 10% of the whole OPE) and sufficient ground state saturation for $M_{max}$ (pole contribution $\ge$ 50% of the total) [@Reinders; @KHJ; @KJ]. Without the $M^2$ constraint, we are often stuck with the [*pseudo-peak*]{} artifacts [@KJ] seen in multiquark SRs. Thus we carry out the OPE up to dimension 12 [@drop] to include the sufficient low energy contributions which are essential to find a reasonable $M^2$ window [@KJ; @KHJ]. We summarize the numerical values used in the analyses. 
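As a concrete illustration of the moment ratio in Eq.(\[eq:peakmass\]) and of the scan over the Borel window, the extraction can be sketched numerically. The spectral function below is a toy $s^2$ form chosen only for illustration, not the actual OPE used in this work, and all variable names are ours:

```python
import math

def effective_mass_sq(M2, s_th, im_pi, n=4000):
    """Ratio of the first to zeroth Borel-weighted moment of Im Pi
    over (0, s_th), as in the effective-mass formula of the text."""
    ds = s_th / n
    num = den = 0.0
    for i in range(n):
        s = (i + 0.5) * ds                     # midpoint rule
        w = math.exp(-s / M2) * im_pi(s)
        num += s * w * ds
        den += w * ds
    return num / den

# Toy spectral function Im Pi(s) ~ s^2 (illustrative only).
im_pi = lambda s: s ** 2

# Scan M^2: flatness of m_h(M^2) within the window signals a good s_th.
for M2 in (0.8, 1.0, 1.2):                      # GeV^2
    m_h = math.sqrt(effective_mass_sq(M2, s_th=1.44, im_pi=im_pi))
    print(f"M^2 = {M2:.1f} GeV^2 -> m_h = {m_h:.3f} GeV")
```

In the same spirit as the least-sensitivity criterion, one would repeat this scan for several $s_{th}$ and keep the value that minimizes the variance of $m_h$ across the $M^2$ window.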
The gauge coupling constant behaves like $O(N_c^{-1/2})$, and the condensates $\langle O \rangle=$($\langle \bar{q}q\rangle$, $\langle \alpha_s G^2 \rangle$, $\langle \bar q g_s \sigma G q \rangle$) are $O(N_c)$. Here we additionally adopt simple $N_c$ scaling assumptions: $\alpha_s |_{N_c} = 3\alpha_s/N_c$, $\langle O \rangle |_{N_c} = \langle O \rangle N_c/3$. We use the following values with errors for the $N_c=3$ case: $\alpha_s({\rm 1GeV}) =0.4$, $\langle \alpha_s G^2/\pi \rangle = (0.33\ {\rm GeV})^4$, $\langle \bar{q}q\rangle=-(0.25 \pm 0.03\ {\rm GeV})^3$, and $\langle \bar q g_s \sigma G q \rangle/ \langle \bar{q}q\rangle = (0.8 \pm 0.1)\ {\rm GeV^2}$. The results shown below are obtained with the central values. We finally show the $\langle \bar q q \rangle$ and $m_0^2$ dependence of the masses in a wide range, since most of the errors come from these values. We start the Borel analyses with the large $N_c$ 2q correlators, for the vector meson as a reference and the scalar meson as the “2q” state in the $\sigma$ meson. The OPE results (up to dimension 6) are nothing but the results in the literature with the factorization employed. Shown in Fig.\[fig:2qmeson\] are the effective masses for the large $N_c$ vector and scalar mesons as functions of $M^2$ for various $E_{th}$. The downward (upward) arrows indicate the values of $M^2_{min}$ ($M^2_{max}(s_{th})$). Following the $E_{th}$ ($\equiv \sqrt{ s_{th} }$) fixing criterion, we fix $E_{th}$ to 1.0 (1.4) GeV for the vector (scalar) meson, and determine the mass as 0.65 (1.10) GeV. The mass splitting $\sim$0.45 GeV between the vector and scalar mesons roughly coincides with the angular excitation energy expected from the naive constituent quark picture. The reason for the slightly small value of the large $N_c$ $\rho$ meson mass could be the absence of the factorization violations [@violation], if the large $N_c$ scaling of the condensates holds. 
Since our original interest is in the qualitative differences among large $N_c$ mesons rather than their absolute values, we will not discuss the details of the absolute values further. Now we turn to the 4q correlator results of our main interest. We have investigated the $O(N_c^2)$ and $O(N_c)$ parts of $\langle J_{SS}J^{\dag}_{SS} \rangle$, $\langle J_{VV}J^{\dag}_{VV} \rangle$, and the $O(N_c)$ part of $\langle J_{VV}J^{\dag}_{SS} \rangle$. We have checked that the typical effective masses for the $O(N_c^2)$ part of $\langle J_{SS} J^{\dag}_{SS} \rangle$ ($\langle J_{VV} J^{\dag}_{VV} \rangle$) are well above twice the large $N_c$ meson mass, 2.2 (1.3) GeV, indicating that there is no prominent structure below the free 2-meson threshold, as expected from the $1/N_c$ arguments. ![(Color online) The $O(N_{c})$ effective mass plots for vector and scalar mesons for various $E_{th}$ values. The downward (upward) arrows represent $M_{min}^2$ ($M_{max}^2(s_{th})$).[]{data-label="fig:2qmeson"}](2q250ncmass.eps){width="8.0cm"} ![(Color online) The effective mass plots for $\langle J_{SS} J^{\dag}_{VV} \rangle$, including only $O(N_c)$ OPE diagrams. The large $N_c$ “2q” scalar meson mass is also indicated as a reference.](250ncoffmass.eps) \[fig:ncspectra\] The “4q” state can appear from the $O(N_c)$ part. Shown in Fig.\[fig:ncspectra\] are the effective masses deduced from $\langle J_{VV} J^{\dag}_{SS} \rangle$ for $E_{th}$=1.0, 1.2, and 1.4 GeV. We take the $E_{th}$=1.2 GeV case and evaluate its mass as $\sim$0.90 GeV, which is obviously lower than that of the “2q” scalar meson, $\sim$1.10 GeV in the large $N_c$ limit, and is thus regarded as the mass of the “4q” state. The threshold value 1.2 GeV for $E_{th}$, much below the 2 scalar (vector) meson threshold, 2.2 (1.3) GeV, may be due to the “2q” scalar meson contribution, since our $O(N_c)$ 4q correlators can also include the “2q” contribution. 
We have also investigated the $O(N_c)$ parts of $\langle J_{SS}J^{\dag}_{SS} \rangle$ ($\langle J_{VV}J^{\dag}_{VV} \rangle$), and obtained almost the same mass, 0.80 (0.90) GeV, although they could suffer from the factorization violation coming from the $O(N_c^2)$ OPE. These three independent $O(N_c)$ correlator analyses consistently suggest the existence of a “4q” state lighter than the “2q” state. ![(Color online) The condensate value dependence of the masses of the large $N_c$ mesons: “4q” ($m_0^2$=0.7, 0.8, 0.9 GeV$^2$), “2q” scalar and vector states. ](qqdep.eps) \[fig:mesonspectra\] Finally, we derive a conservative conclusion which does not depend on the details of our numerical parameters, especially $\langle \bar{q}q \rangle$ and $m_0^2$ (the dependence on the other parameters is relatively small). Shown in Fig.\[fig:mesonspectra\] are the “2q” vector and scalar meson masses, and the “4q” mass (deduced from $\langle J_{VV}J^{\dag}_{SS} \rangle$) as functions of $\langle \bar{q}q \rangle$ and $m_0^2$. We find that the inequality $m_{\rho}<m_{4q}^{I=J=0}<m_{2q}^{I=J=0}$ holds irrespective of the details of the condensate values. The results obtained here suggest that the $\sigma$ meson has the “4q” component, which is generated not from $\pi\pi$ interactions but from those at the quark-gluon level. Here we comment on Pelaez’s elaborate work on the $\sigma$ meson using unitarized chiral perturbation theory with the $1/N_c$ expansion [@Pelaez]. He showed that the $\sigma$ state, as the pole in the $\pi\pi$ scattering T-matrix, disappears in the large $N_c$ limit, in contrast to the case of ordinary mesons such as the $\rho$ meson. This does not contradict our results, since the $\pi\pi$-“4q” mixing is suppressed by $1/N_c$ and the “4q” state is not accessible from the $\pi\pi$ initial states. This is in sharp contrast to the 4q correlator approach, which includes the “4q” state directly generated from the 4q current. 
We conjecture that the “4q” component may play an important role as a building block of the $\sigma$ meson. To develop this possibility, we plan to study the 3-point correlator for the “4q”-$\pi\pi$ coupling strength. The coupling strength should be large, since the $\sigma$ in nature is a broad resonance. If this is indeed the case, the $\sigma$ meson could be described as a 4q core clothed by $\pi\pi$ clouds. The relative importance of the “4q” state in the $\sigma$ meson can be investigated through coupled channel analyses using an effective Lagrangian including not only the $\pi$ field but also an elementary “4q” field, whose effects are considered to be hidden in the parameters or regularization constants in the usual chiral perturbation approaches. This is somewhat related to the recent arguments for the $N^*$(1535) resonance [@Hyodo2]. The $1/N_c$ arguments developed in this work are expected to have wide applications. First, the spectrum reduction is applicable to the analyses of other tetraquark candidates. We have already obtained results showing that the effective mass in the $I=2,J=0$ channel does not show any stability against $M^2$ variation, which indicates the absence of “4q” states, consistent with experiments. Second, the QCD sum rules, which are firm in the large $N_c$ limit, could provide useful information to models based on the gauge/gravity duality (large $N_c$ QCD $\leftrightarrow$ SUGRA), through the properties of the large $N_c$ mesons. All these issues will be reported in the future [@future]. We thank Profs. T. Kunihiro and H. Suganuma for useful discussions and encouragement. T.K is indebted to Profs. M. Harada, W. Weise, B. Mueller for useful discussions during [*New Frontiers in QCD*]{} held at YITP, and also to Profs. T. Hatsuda and S. Sasaki for several important comments. We appreciate Prof. D. Kharzeev for carefully reading the manuscript. This work is supported by RIKEN, Brookhaven National Laboratory and the U. S. 
Department of Energy \[Contract No. DE-AC02-98CH10886\], and by the Grant for Scientific Research (No. 20028004) in Japan. [12]{} F.E. Close and N.A. Tornqvist, J. Phys. G:Nucl. Part. Phys. [**28**]{} (2002) R249. Belle Collaboration, S.K. Choi [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 262001(2003); BABAR Collaboration, B. Aubert [*et al.*]{}, Phys. Rev. Lett. [**95**]{},142001 (2005); Belle Collaboration, K. Abe [*et al.*]{}, Phys. Rev. Lett. [**100**]{}, 142001 (2008). LEPS Collaboration, T. Nakano [*et al.*]{}, Phys. Rev. Lett. [**91**]{}, 012002 (2003). R.L. Jaffe, Phys. Rev. D [**15**]{}, 267 (1977). M.H. Johnston and E. Teller, Phys. Rev. [**98**]{}, 783 (1955). T. Hatsuda and T. Kunihiro, Phys. Rept. [**247**]{}, 221 (1994). Lattice calculations have been developed for this direction. For examples, M.G. Alford and R.L. Jaffe, Nucl. Phys. [**B578**]{}, 367 (2000); N. Mathur [*et al*]{}., Phys. Rev. D [**76**]{}, 114505 (2007). For full QCD calculation including the glueball components, T. Kunihiro [*et al.*]{}, Phys. Rev. D [**70**]{}, 034504 (2004). G. ’t Hooft, Nucl. Phys. [**B724**]{}, 61 (1974). E. Witten, Nucl. Phys. [**B160**]{}, 57 (1979). I. Caprini, G. Colangelo and H. Leutwyler, Phys. Rev. Lett. [**96**]{}, 132001 (2006). M.A. Shifman, A.I. Vainshtein, and V.I. Zakharov, Nucl. Phys. [**B147**]{}, 385 (1979). R.D. Matheus and S. Narison, Nucl. Phys. Proc. Suppl. [**152**]{}, 236 (2006); T. Kojo, A. Hayashigaki, and D. Jido, Phys. Rev. C [**74**]{}, 045206 (2006). L.J. Reinders, H. Rubinstein, and S. Yazaki, Phys. Rep. [**127**]{}, 1 (1985). T. Kojo and D. Jido, arXiv:0802.2372 \[hep-ph\]. The graphs with multi-gluon condensates are usually neglected due to the strong suppression by the extra loop factor compared to the graphs with quark condensates. More quantitatively, within the region $(M_{min}^2, M^2_{max})$, we calculate the variance of mass as function of $s_{th}$. Then we select $s_{th}$ which minimizes the variance. 
To adjust the $\rho$ meson mass to the experimental value, one must include a factor $\sim 2$ factorization violation or adopt the value $\langle \bar{q} q \rangle \sim -(0.280\ {\rm GeV})^3$. J.R. Pelaez, Phys. Rev. Lett. [**92**]{}, 102001(2004); Phys. Rev. Lett. [**97**]{}, 242002 (2006). T. Hyodo, D. Jido, and A. Hosaka, arXiv:0803.2550. T. Kojo and D. Jido, in preparation.
--- abstract: 'The growing penetration of distributed energy resources (DERs), such as photovoltaic (PV) systems and wind turbines, has increased the complexity of the power system due to their intermittent characteristics and lower inertial response. This restructuring of the power system has a considerable effect on the transient response of the system, resulting in inter-area oscillations, weaker synchronized coupling and power swings. Furthermore, the distributed nature of these resources, generating electricity at multiple locations in the power system, makes the transient impact of DERs even worse by raising issues such as reverse power flows. This paper studies some impacts of the changing nature of the power system which are limiting the large scale integration of DERs. In addition, a solution to increase the inertial response of the system is addressed by adding virtual inertia to the inverter-based DERs in the power system. The proposed control increases the stability margin and tracks the rated frequency of the system. The injected synchronized active power prevents the protection relays from tripping by improving the rate of change of frequency. The proposed system operation is implemented on a sample power grid comprising generation, transmission and distribution, and the results are verified experimentally through the Opal-RT real-time simulation system.' author: - | Mohammad Khatibi and Sara Ahmed\ Department of Electrical and Computer Engineering\ University of Texas at San Antonio\ San Antonio, TX\ [email protected] bibliography: - 'bibl.bib' title: ' **Impact of Distributed Energy Resources on Frequency Regulation of the Bulk Power System**' --- power system stability, distributed energy resources, photovoltaic system, virtual inertia, rate of change of frequency. 
Introduction ============ In recent years, significant inverter-based, inertia-less renewable generation has been integrated in both bulk transmission and distribution (T&D) power systems to improve the sustainability of electric power systems [@fang2017small]. The increasing penetration of distributed energy resources (DERs) displacing conventional synchronous generators (SGs) is rapidly changing the dynamics of large-scale power systems. The electric grids lose inertia, voltage support and oscillation damping. When the majority of generated electricity comes from synchronous generators running on fossil fuels, using DERs reduces the system fuel costs significantly but can have a considerable impact on system reliability. This less reliable grid has pushed power system planners to develop methods that help decide on operating policies, generation mixes and sizes in capacity expansion, and installation sites when utilizing wind and photovoltaic (PV) systems [@karki2001reliability]. Network expansion planning, voltage stability studies and coordination of voltage controls at the T&D interface are traditionally investigated assuming power flow from transmission to distribution. Reverse power flow from DGs to the transmission system, and the impact of DERs on voltage stability in restructured power grids with high DER penetration, call for new modeling and representation of DGs. For example, in [@nikkhajoei2006steady; @chen2006power] only a positive sequence representation has been considered for power flow analysis in the presence of DERs, which is not enough for an unbalanced distribution grid with unbalanced laterals. In [@nikkhajoei2006steady] a three-phase power-flow algorithm is proposed which includes unbalanced lines and loads, single phase laterals and three/four wire distribution lines. 
A detailed analysis of the impact of large scale wind power generation on the dynamic voltage stability and the transient stability of electric power systems is presented in [@slootweg2003impact]. Using a multidimensional parameter variation, [@dierkes2014impact] shows that different control strategies of renewable energy sources have a significant influence on the voltage stability of the power system. To verify the relay protection settings and operation, as well as circuit breaker and fuse ratings, short circuit analysis should be taken into account. The dynamics of DERs should be included in the short circuit analysis during faults in the distribution system, since each DER will contribute to the short circuit current. Inverter-based DGs have no inertia. To solve this problem, the idea of the virtual synchronous machine/generator (VSM/VSG) has been presented in [@tamrakar2017virtual; @bevrani2014virtual], in which the power electronics interface (PEI) mimics the behavior of SGs. Although the implementation of virtual inertia in the literature varies based on the desired level of model complexity and the application, the underlying concept is similar among the various topologies. Using a detailed mathematical model which represents the dynamics of an SG, or simplifying the model by using only the swing equation, are the two main solutions for implementing virtual inertia [@tamrakar2017virtual; @vassilakis2013battery]. For example, to represent the same dynamics as SGs, synchronverters are introduced in [@zhong2010synchronverters] for inverter-based DGs. Hence, as an advantage, the traditional operation of the power system can be continued without significant changes in the operation structure [@zhong2016virtual]. Similar to the synchronverter approach, the Ise lab topology is another way in the literature to implement virtual inertia. In this method, the control loop solves the swing equation in each cycle to emulate inertia, instead of using a fully detailed model of the SG [@sakimoto2011stabilization]. 
Beyond the different topologies, VSG applications have been illustrated extensively in the literature. In [@liu2016enhanced] an enhanced VSG control is proposed in which, by adjusting the virtual stator reactance, active power oscillations during transient states are improved. Furthermore, using inverse voltage droop control and ac bus voltage estimation, accurate reactive power sharing is achieved. In [@liu2015comparison], the dynamic characteristics of simple droop control and VSG are studied by deriving small signal equations in both islanded and grid connected modes. The inertial droop control proposed by the authors inherits the advantages of droop control and provides inertial support for the system. Voltage angle deviations (VADs) of generators with respect to the angle of the center of inertia are defined in [@alipoor2016stability] as a tool for transient stability assessment of the multi-VSG micro-grid. To have a smooth transition during disturbances and keep the VADs within a specific range, the VSG parameters are tuned using particle swarm optimization. In [@wu2016small] the detailed parameter design of the VSG is proposed and the conditions to decouple the active and reactive power loops are given. To avoid VSG output voltage distortions, the author indicates that the bandwidth of the power loop should be much smaller than twice the line frequency. In [@wu2016virtual], to enhance the inertial response of DC micro-grids and stabilize the DC bus voltage fluctuations, the author proposed a virtual inertia strategy for DC micro-grids through bidirectional grid-connected converters. A fuzzy-secondary-controller-based VSG control scheme is proposed in [@andalib2018fuzzy] for voltage and frequency regulation in micro-grids. [@shi2017low] proposed a low voltage ride through control strategy for the VSG control scheme, providing reactive power support under grid faults. 
The solution strategy for a VSG working under unbalanced voltage conditions is discussed in [@zheng2017comprehensive]. A new VSG control is presented in [@zhao2017multi] with the capability to avoid harmonic interference and an accurate control vector orientation process. This paper is organized as follows. Section introduces the proposed power grid with DERs. Section discusses the impacts of DERs on frequency, while the solution for this impact is discussed in Section . Finally, the simulation and experimental results and the conclusion are shown in Section and Section respectively. transmission and distribution system {#sec2} ==================================== Fig. \[fig11\] shows the sample power grid with generation, transmission and distribution systems. The Kundur two-area system, with parameters taken from [@kundur1994power], comprises four synchronous generators (two in each area) that are stepped up with transformers and connected through transmission lines, and forms the generation and transmission parts in this example. The areas are connected to each other through a tie line. The IEEE 13 node test feeder [@kersting1991radial] is used as the distribution system. The DGs, such as the PV and battery energy storage unit (BES), are connected to the distribution system using a voltage source inverter (VSI). The PV system includes PV arrays and a unidirectional boost DC-DC converter working under perturb and observe (P&O) maximum power point tracking (MPPT) control. The BES includes batteries and a bidirectional boost DC-DC converter controlled by multi-loop voltage and current control. The outer loop controls the voltage and the inner loop controls the current through proportional integral (PI) controllers. The BES and PV unit are connected in parallel and form the DC link. 
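The P&O MPPT logic used for the PV converter can be sketched in a few lines. This is a minimal illustration of the general P&O rule (perturb the operating voltage and keep moving in the direction that increased power), not the authors' implementation; the step size and the toy PV curve below are our own assumptions:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: return the next PV voltage reference."""
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v                       # at (or oscillating around) the MPP
    # If power rose, keep perturbing the same way; otherwise reverse.
    return v + step if (dp > 0) == (dv > 0) else v - step

# Toy PV power curve with its maximum power point at 30 V (illustrative).
pv_power = lambda v: max(0.0, 900.0 - (v - 30.0) ** 2)

v_prev, v = 20.0, 20.5
for _ in range(100):
    v_next = perturb_and_observe(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
print(f"operating point settled near {v:.1f} V")
```

In steady state the algorithm oscillates around the MPP with an amplitude set by `step`, which is the usual trade-off between tracking speed and ripple in P&O.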
impact of der on frequency ========================== Since the power electronic interfaces used in DGs have no rotating mass and damping, the inertia constant of the micro-grid is reduced, which results in an increase in the rate of change of frequency (ROCOF) and may lead to load shedding even under small disturbances in the system. Fig. \[fig12\] shows the frequency curve of the system with different amounts of inertia in the presence of a contingency. For the control and stability of these small scale power grids, a hierarchical control including primary, secondary and tertiary control is introduced, similar to conventional grids. Droop control for voltage source inverters, discussed in [@rocabert2012control] as an example of primary frequency control, provides barely any inertia/damping support for the grid. VSG control {#sec3} =========== Without any mechanical rotational part, inverters have a high response speed compared to conventional rotating machines [@alipoor2013distributed]. The virtual inertia concept is introduced as a solution to overcome this limitation [@shintai2014oscillation]. By emulating the mechanical equation of a real synchronous generator in the inverter, similar behavior can be assumed during normal operation of the system and during frequency disturbances, for example when there is a sudden change (increase or decrease) in active power. Utilizing the VSG algorithm, synchronized active power can be injected from the PV to the grid to stabilize the frequency [@fathi2018robust; @sakimoto2012stabilization]. In this paper, during normal operation of the system (rated frequency and voltage), the perturb and observe (P&O) method sets the active power reference $P_{ref}$ by measuring the voltage and current of the PV [@zhang2012review]. This active power is controlled in two stages as shown in Fig. \[fig2\]. In the first stage, primary frequency control is implemented in the same way as in an SG. 
In the second stage, virtual inertia and damping are added to complete the loop. The result is a reference angle that is fed into the Park transform [@du2013modeling].\ VSG control can be divided into two sections. First, the mechanical swing equation needs to be emulated and solved numerically. Then the results are used as a reference to control the voltage and current of the inverter. P-F control ----------- The mechanical equation of an SG, assuming that the rotor is a rigid body, can be described as $$\begin{aligned} \label{eq3} \begin{cases} \frac{d\theta}{dt}=\omega \\ 2H\frac{d\omega}{dt}=T_{m}-T_{e}-D\Delta\omega \end{cases} \end{aligned}$$ where $H$ is the inertia constant in p.u. derived from (\[swing\]), $S_{base}$ is the base power of the machine, $\omega$ is the angular frequency of the SG and $\omega _{0}$ is the rated angular frequency [@zeng2015mathematical]. $$\label{swing} J = 2H\frac{{S_{base}}}{\omega _{0}^{2}}$$ $T_{m}$ and $T_{e}$ are the mechanical torque of the prime mover and the electromagnetic torque of the SG, respectively, and can be calculated using (\[eq4\]): $$\label{eq4} \begin{cases} T_{m}=k_{f}(f_{0}-f)+\frac{P_{ref}}{\omega} \\ T_{e}=\frac{P_{e}}{\omega} \end{cases}$$ in which $P_{ref}$ is the rated active power and $P_{e}$ is the output power of the DG. The primary frequency control and damping work here similarly to a real SG and are achieved through proportional loops, in which $k_{f}$ is the droop coefficient and $D$ is the damping coefficient. For typical synchronous machines, $H$ varies between 2 and 10 s [@hirase2016analysis]. 
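The numerical emulation of the swing equation inside the control loop can be sketched as a simple forward-Euler iteration of Eqs. (\[eq3\]) and (\[eq4\]). This is a minimal per-unit sketch under our own simplifying assumptions ($\omega_0 = 1$ p.u., so the droop term is written on the per-unit speed, and $P_e$ is supplied as a callback), not the exact discretization used in the paper:

```python
def vsg_swing(P_ref, electrical_power, H=5.0, D=20.0, k_f=1.0,
              dt=1e-3, steps=5000):
    """Forward-Euler emulation of the per-unit swing equation:
    2H dw/dt = T_m - T_e - D*(w - 1), T_m = droop term + P_ref/w.
    Returns the final virtual rotor angle and speed."""
    theta, omega = 0.0, 1.0
    for _ in range(steps):
        T_m = k_f * (1.0 - omega) + P_ref / omega     # prime mover + droop
        T_e = electrical_power(theta, omega) / omega  # measured P_e -> torque
        omega += dt * (T_m - T_e - D * (omega - 1.0)) / (2.0 * H)
        theta += dt * omega
    return theta, omega

# Sanity check: with P_e matching P_ref the VSG holds rated speed.
theta, omega = vsg_swing(0.5, lambda th, w: 0.5)
```

The angle `theta` produced by this loop is what would feed the Park transform in the inner voltage/current control, and `H` and `D` are the knobs that shape the emulated inertial response.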
The VSG response at a specific output power and voltage is determined by the parameters of its second order differential equation, namely the real part of its eigenvalues $\sigma_{i}$ and the damping ratio $\xi_{i}$. These parameters are related directly to $J$ and $D$ through the following equation set $$\begin{split} \sigma _{i}=&- \frac {D_{i}}{{2{J_{i}}{\omega _{s}}}} \\ {\omega _{ni}}=&\sqrt {\frac {{{P_{\max i}}\cos \left ({ {{\theta _{ig}}} }\right )}}{{J_{i}{\omega _{s}}}}} \\ {\xi _{i}}=&\frac {{ - {\sigma _{i}}}}{{{\omega _{ni}}}} \end{split}$$ where $P_{maxi}$ is the maximum transferable power from the VSG bus to the grid, $\theta_{ig}$ is the voltage angle of the VSG with respect to the grid and $\omega_{ni}$ is the undamped natural frequency of the VSG. At any working condition, the parameters corresponding to the desired system response can be achieved by tuning $J$ and $D$ [@li2015coherency]. Q-E regulation -------------- Controlling the voltage is achieved by regulating the reactive power as in (\[eq5\]) $$\label{eq5} E_{r}=E_{0}+k_{q}(Q_{ref}-Q)$$ where $E_{r}$ is the reference voltage, $k_{q}$ is the reactive power droop coefficient and $E_{0}$ is the nominal voltage amplitude. V/I control ----------- This control loop features a conventional outer voltage and inner current loop. Its primary function is to regulate the output voltage with no steady state error while quickening the dynamic response of the current loop to strengthen the inverter control. This is achieved through the outer voltage and inner current control loops. The reference voltage $E_{r}$ calculated using (\[eq5\]) is set as the reference for the outer voltage loop. Since the control is carried out in the rotating reference frame, $E_{r}$ is transformed to the dq reference frame using the Park transformation given in (\[eq7\]), with the angle calculated in (\[eq3\]) as the input angle for the transformation. 
Then a PI controller is tuned to track the reference voltage and current.
$$\label{eq6} E_r=\begin{bmatrix} E_{ar}\\ E_{br}\\ E_{cr} \end{bmatrix}= \begin{bmatrix} E\sin\omega t\\ E\sin\omega(t-\frac{2\pi}{3}) \\ E\sin\omega(t+\frac{2\pi}{3}) \end{bmatrix}$$
$$\label{eq7} \begin{bmatrix} V_{d}^\ast\\ V_{q}^\ast\\ V_{0}^\ast \end{bmatrix}=\sqrt{\frac{2}{3}} \begin{bmatrix} \cos\gamma & \cos(\gamma-\frac{2\pi}{3}) & \cos(\gamma+\frac{2\pi}{3})\\ \sin\gamma & \sin(\gamma-\frac{2\pi}{3}) & \sin(\gamma+\frac{2\pi}{3})\\ \frac{1}{\sqrt2} & \frac{1}{\sqrt2} & \frac{1}{\sqrt2} \end{bmatrix} \begin{bmatrix} E_{ar}\\ E_{br}\\ E_{cr} \end{bmatrix}$$
The current of the converter is controlled similarly using an inner PI controller.

Simulations and results
=======================

 \[simulations\] As illustrated in Fig. \[fig1\], the proposed VSG is tested for different case studies such as normal conditions and sudden load changes. The converter and its control are simulated using the Matlab/Simulink SimPowerSystems toolbox. Real-time results are obtained using the Opal-RT OP5600 and OP5607 Virtex-7 FPGA simulator shown in Fig. \[fig16\]. The converter switches are simulated on the FPGA owing to its capability of working with smaller sampling times, here $T_{s}=1 \mu s$. The rest of the simulation is implemented in the simulator target with $T_{s}=10 \mu s$. The sampling time for both the simulator and the FPGA is chosen based on their desired performance and on the complexity of the control scheme.

\[pic9\]

Frequency Regulation
--------------------

First, the system works under normal operating conditions. The power generated by the PV under MPPT feeds the load at the distribution system, and the rest is sent to the grid to feed the grid loads. In this scenario, the frequency behavior of the system is tested under different inertia constants ($T_{j}=2H$) with a 0.2 p.u. increase in active load at $t=1 s$, for which the total load is still less than the total generated MPPT power.
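As a side illustration, the power-invariant transformation of (\[eq7\]) can be implemented directly; this sketch is ours, not code from the paper:

```python
import math

def park(e_abc, gamma):
    """Power-invariant abc -> dq0 transformation, matching (eq7)."""
    k = math.sqrt(2.0 / 3.0)
    s = 2.0 * math.pi / 3.0
    a, b, c = e_abc
    v_d = k * (math.cos(gamma) * a + math.cos(gamma - s) * b + math.cos(gamma + s) * c)
    v_q = k * (math.sin(gamma) * a + math.sin(gamma - s) * b + math.sin(gamma + s) * c)
    v_0 = k * (a + b + c) / math.sqrt(2.0)
    return v_d, v_q, v_0
```

For the balanced set of (\[eq6\]) with $\gamma=\omega t$, this matrix yields $v_d = 0$, $v_q = \sqrt{3/2}\,E$ and $v_0 = 0$, i.e., constant dq quantities that a PI controller can track without steady-state error.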
The frequency nadir is shown and compared in Fig. \[fig5\]. The smaller the inertia constant $T_{j}$, the larger the frequency nadir, and the deepest nadir is expected when there is no inertia in the system. A larger frequency nadir puts the system at higher risk of disconnection from the main grid due to the triggering of ROCOF relays. In the next scenario, the increase in load is equal to the total generation of the PV.\
In Fig. \[fig7\] the system frequency performance with the addition of $200MW$ of DERs using state-of-the-art power electronic inverters that do not emulate inertia is compared with that of DERs using the proposed VSG control. The results show that the frequency nadir decreases with the addition of the virtual inertia. Fig. \[fig8\] depicts the system frequency response for different DER penetration levels. It can be seen that the VSG emulates the generator inertia, and therefore even with a large increase in the DER penetration level, the frequency nadir is almost the same as with generators only.

\[pic5\] \[pic7\] \[pic8\]

Conclusion
==========

DERs have multiple negative impacts on bulk power systems, which have been addressed in this paper. Among them, low or zero inertia is a major issue that affects the stability of the whole power grid and may lead to unwanted load shedding. A solution for adding inertia to DERs by mimicking synchronous generator behavior is introduced. Using the proposed seamless control framework, a simple control strategy is implemented to operate under normal and faulty grid conditions. Several experiments have been conducted to verify the performance of the proposed system.

Acknowledgments {#acknowledgments .unnumbered}
===============

This project was funded in part by the University of Texas at San Antonio, Office of the Vice President for Research.
--- abstract: 'In this paper, the main goal is to detect a movie reviewer’s opinion using hidden conditional random fields. This model allows us to capture the dynamics of the reviewer’s opinion in the transcripts of long unsegmented audio reviews that are analyzed by our system. High-level linguistic features are computed at the level of inter-pausal segments. The features include syntactic features, a statistical word embedding model and subjectivity lexicons. The proposed system is evaluated on the ICT-MMMO corpus. We obtain an F1-score of 82%, which is better than logistic regression and recurrent neural network approaches. We also offer a discussion that sheds some light on the capacity of our system to adapt the word embedding model learned from general written text data to spoken movie reviews and thus model the dynamics of the opinion.' address: ' $^1$LTCI, Télécom ParisTech, Université Paris Saclay, F-75013, Paris, France' bibliography: - 'main.bib' title: Opinion Dynamics Modeling for Movie Review Transcripts Classification with Hidden Conditional Random Fields ---

**Index Terms**: Hidden Conditional Random Field, Opinion Mining, Linguistic Patterns, Word Embedding

Introduction
============

With the growing importance of social networks, the amount of Internet user data has increased dramatically in the last few years. It is now important for companies to exploit this new source of information about their customers in order to be more competitive. The concept of some websites is even to be simply a huge database of recommendations, such as *rottentomatoes.com*, where users rate and review movies, thus delivering their opinion about those movies. The domain of opinion mining in textual documents has developed considerably in the last several years. The trend is to use deep learning approaches that achieve high performance by relying on a large amount of labeled training data [@Socher2013].
On the other hand, hybrid approaches [@yang2013joint] combine the robustness and high accuracy of Machine Learning (ML) algorithms with the fine-grained modeling of linguistic rules. They do not require a huge amount of labeled data and thus are an interesting alternative to deep learning methods. As far as the representation of the data is concerned, various alternatives have been considered in previous works in this area. Using the negations and the intensifiers present in the context of a word as input features for a machine learning algorithm was initially studied by [@Kennedy2006] on the textual IMDb movie review database. The Bag-of-Words (BoW) is a classical domain-agnostic paragraph representation. [@Perez-Rosas2013] used BoW and SVM for a sentiment analysis task over the MOUD dataset (Vlogs), obtaining a score of 64.94%. The Bag-of-N-grams (BoNG), which is an extension of this model, was used by [@Schuller2009] for a sentiment analysis task over the Metacritic database (textual movie reviews). [@Wagner2014] merge the results of subjectivity lexicons, valence shifters and BoNG to train a classifier for sentiment analysis in tweets. Another trendy option to represent the data nowadays is to create a distributed vector for every word in an unsupervised way, training the model on a large dataset of text. In [@Poria2015a], the authors use word2vec with a CNN-SVM for a binary valence classification task on short speech utterances, while in [@irsoy2014opinion] the authors use the same representation for a task of subjective expression extraction from sentences. In this paper we are in line with all these studies, since we combine distributional word embeddings with lexicons, linguistic patterns, syntactic features and paralinguistic cues to train a learning model. Another issue that is tackled in this paper is how to deal with opinion dynamics in long reviews where the speaker develops his/her opinion across the review.
For example, a negative review can include some expressions of positive opinion and then end with a negative opinion. As the size of the documents increases, it becomes crucial to account for the dynamics of the document by using a relevant ML method. Opinion dynamics modeling has rarely been addressed in the opinion mining literature. While some studies are restricted to real-time prediction by segmenting the document into several parts of fixed duration [@Poria2016], others insist on the complementarity of the modalities in order to detect multimodal patterns [@morency2011towards; @Bousmalis2011]. We distinguish ourselves from these studies by focusing the analysis on the textual modality, while using the audio modality only to segment the text based on the pauses of the reviewer. This is motivated by the idea of modeling the opinion dynamics in a more natural way. Thus, we investigate a latent state model in order to model the opinion of a speaker along a globally annotated audio movie review. The absence of written punctuation prevents us from segmenting based on syntax, so we choose to use oral pauses because of the relevant role of these self-interruptions in the segmentation of discourse [@Campione2002]. Here, we consider the task of labeling an audio transcript with respect to opinions using a variant of Conditional Random Fields (CRF), a discriminative classifier that has proven its utility in several NLP and Computer Vision tasks. This variant, called Hidden Conditional Random Fields (HCRF), has been successfully used to analyze sequences of textual, audio or visual data to be labeled globally with only one output [@Quattoni2007]. Latent state models have already proven their efficiency for multimodal sentiment analysis or agreement classification [@morency2011towards; @Bousmalis2011]. The objective here is to investigate the potential of HCRF for classification using transcripts of oral speech.
The discriminative nature of CRF will enable some strong linguistic rules combined with other features to emerge directly from the learning phase. In the second section of this paper, we will present the features we chose for our task and our learning model. In the third section, we will present the dataset, talk about our experiments and results and finish in the fourth section with a discussion of the results and then we will conclude our paper. Feature and classification model description ============================================ Overview of the system {#subsec:overview} ---------------------- Because of the structure of spontaneous speech, a lot of sentences are unfinished, making it difficult to segment a spoken review into relevant units. We choose to use the pauses to segment the review into Inter Pausal Units (IPUs). Then, we produce the features for each IPU and use them to feed the HCRF, which predicts the most probable label for the current review (see figure \[fig:overview\]). ![Overview of the system](schema_overview_latex.png "fig:") \[fig:overview\] Features {#subsec:Textual-Features} -------- We can sort the textual features we use into 4 groups :\ - *The N-grams features* : The BoNG presented in [@Schuller2009] is an extension of the classical Bag of Words representation to N-grams. In this work we use words, bi-grams and tri-grams.\ - *The distributed representations* : word2vec is a distributed learning model to represent words [@Mikolov2013]. The principle is to use the surrounding words to find the general context in which a word appears and learn its weights statistically. During the learning phase, the vectors of the words appearing in the same context are expected to get closer. This representation can be used to learn more specific semantic information about the discourse of the speaker in the textual features. 
We chose to use word2vec since it has been found to give better results on a sentiment analysis task in [@Poria2015a] compared to other statistical word embeddings. The 300-dimensional vectors we used were pre-trained over a corpus of 100 billion words from Google Press[^1]. Generally, it has been empirically found that a more general and bigger training dataset makes it possible to obtain vectors that perform better on several tasks [@Mikolov2013a].\
- *The linguistic and lexicon-based features* : The affective valence of a document can be directly retrieved with a rule-based heuristic using specific values attributed to each word with lexicons. We use the negative, positive and neutral SentiWordNet (SWN) scores [@Baccianella2010] and the dominance, arousal and valence scores of the enriched Affective Norms for English Words (ANEW) lexicon [@Warriner2013]. We use linguistic patterns such as adjectives followed by a noun, negations, and intensifiers (amplifiers and downtoners). We decided to combine linguistic patterns with sentiment lexicons using the Semantic Orientation CALculator (SO-CAL) [@Taboada2011], which is composed of a grouping of lexicons containing subjective words, intensifiers and valence shifters with associated values. Those values are used in arithmetic operations following simple patterns to give a semantic orientation score to a sentence (see details in [@Taboada2011]). We separate each value into 3 scores reflecting a positive, a negative and a neutral score, so that they can be independently indicative of different emotional states of the speaker. We finally take the disfluencies, the presence of a capital letter and the 6 parts-of-speech from [@Poria2015a], plus interjections and pronouns, which are indicative of emotional bursts or belonging.\
- *The paralinguistic features* : The paralinguistic information provided in the transcript can indicate an emotional state which the reviewer does not necessarily evoke through words.
The 8 main paralinguistic annotations were grouped into different categories: the intonation, the pronunciation, the laughter and the volume.

Classification Model
--------------------

The HCRF model is used in order to learn a mapping from a sequence of observations $\mathbf{x}_i=\left\{ x_{1},...,x_{L_{i}}\right\}$ of length $L_{i}$ to a label $y_i\in\mathcal{Y}$. Each observation $x_{k}$ is represented by a feature vector $\phi(x_{k})$. For every $\mathbf{x}_{i}$, a sequence of unobserved latent variables $\mathbf{h}_{i}=\left\{ h_{1},...,h_{L_{i}}\right\}$ is defined, where $h_{k}\in\mathcal{H}$, with $\mathcal{H}$ a finite set of states [@Quattoni2007].\
The label decision is made using the posterior probability $P(y|\mathbf{x},\theta)$ given by Eq , where $\theta$ refers to the parameters of the HCRF.
$$P(y|\mathbf{x},\theta)=\sum_{\mathbf{h}}P(y,\mathbf{h}|\mathbf{x},\theta)=\frac{\sum_{\mathbf{h}}e^{\Psi(y,\mathbf{h},\mathbf{x};\theta)}}{\sum_{y',\mathbf{h}}e^{\Psi(y',\mathbf{h},\mathbf{x};\theta)}}, \label{py|x}$$
where $\Psi(y,\mathbf{h},\mathbf{x};\theta)\in\mathbb{R}$ is a potential function (defined in Eq ) that measures the compatibility between a label, a sequence of hidden states and the observations. Its definition depends on different types of feature functions described below:
$$\begin{gathered} \Psi(y,\mathbf{h},\mathbf{x};\theta) = \sum_{j}\langle\phi(x_{j})\,|\,\theta_o(h_{j})\rangle \\ +\sum_{j}\theta_s(y,h_{j})+\sum_{j}\theta_t(y,h_{j},h_{j+1}) \label{Phi}\end{gathered}$$
- *The hidden state feature functions* depend only on the current observation vector and the current hidden state. A weight $\theta_o(h_{j})$ is created for each hidden state $h_{j}$. The inner product represents the compatibility between an observation and the hidden state.\
- *The label feature functions* depend on the label and the current state.
The weight $\theta_s(y,h_{j})$ represents the compatibility between a label $y$ and a hidden state $h_{j}$.\
- *The hidden state transition feature functions* depend on the position in the sequence and the label. The weight $\theta_t(y,h_{j},h_{j+1})$ represents the compatibility between a label $y$ and the transition from a hidden state $h_{j}$ to another hidden state $h_{j+1}$.\
The model is classically trained by minimizing an $\ell_2$-norm regularized negative log-likelihood cost [@Quattoni2007]. The decision is taken by choosing the label $y$ that maximizes $P(y|\mathbf{x},\theta)$.

Experiments and results
=======================

We tested three models with different feature sets and segmentations in order to validate our approach. First, we created a baseline for our task using a logistic regression model with Bag-of-N-Gram features at the document level. Since logistic regression does not take into account the dynamics of the observations, we tried a more powerful alternative baseline model that can handle sequential data: a recurrent neural network (RNN-LSTM) [@Hochreiter1997]. Compared to these models, HCRF offers the benefit of interpretability in the way it handles sequential data, while having the potential to model the dynamics of opinion-related phenomena (emotional states, stances, etc.) through latent states. Moreover, we compared BoNG to our feature set and tested different pause-based segmentations. We used a 10-fold Cross-Validation (CV) where train and test sets are disjoint to validate our models, with each test part containing the same proportion of both classes as in the total dataset.

Dataset
-------

In this study, we used the ICT-MMMO corpus**[^2]** consisting of 365 movie review videos obtained from Youtube.com and ExpoTV.com [@wollmer2013youtube]. Those reviews are performed by non-professional users and the audio quality of the recordings varies significantly.
All the videos of the corpus have been annotated in valence by one or two independent annotators. The valence score goes from 1, which means that the speaker has a very negative opinion about the movie, to 5, which denotes a strongly positive opinion, with 3 meaning neutral. The reference is obtained by taking the mean of the scores given by the two annotators on a video. The dataset contains more positive videos than negative videos (opinion annotations of the videos: 120 negative, 38 neutral, 207 positive). All the video clips were manually transcribed to extract the spoken words. Using the Transcriber software [@Barras2001], each spoken utterance is segmented according to the pause duration. All the annotations, the transcriptions of the text and the paralinguistic events were made without using the visual information. We decided to discard the neutral files because they include files annotated with a different polarity by the two annotators. We obtained a total of 321 videos (116 negative and 205 positive), amounting to 13 h 12 min of audio, composed of **12625** segmented IPUs and **143181** words.

Baselines using LogReg and LSTM {#subsec:baseline}
-------------------------------

***Methodology :*** We considered a baseline model with a simple textual feature set and with our feature set, which we tested at different textual representation levels (at the document level or using the pauses) in order to measure the improvement brought by the HCRF. We used logistic regression with a BoNG model like [@wollmer2013youtube] with the same parametrization: applying trigram features, Porter stemming, TF-IDF transformations, and document-length normalization. We kept a larger vocabulary. We then switched to a more sophisticated feature set (our set in Table \[tab:F1-results\]), that is, a representation using the statistical word embedding model from [@Mikolov2013a] described in \[subsec:Textual-Features\].
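The BoNG representation used in the baseline reduces each document to n-gram counts (here up to trigrams) before TF-IDF weighting and length normalization. A minimal counting sketch (the function and example sentence are ours, not the paper's implementation):

```python
from collections import Counter

def bong_features(tokens, max_n=3):
    """Bag-of-N-Grams: raw counts of 1- to max_n-grams for one token list.
    TF-IDF weighting and document-length normalization are applied afterwards."""
    feats = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            feats[" ".join(tokens[i:i + n])] += 1
    return feats
```

For instance, `bong_features("the movie was great".split())` yields 4 unigrams, 3 bigrams and 2 trigrams, each with count 1.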
After a tokenization[^3] we used a spell checker[^4] to eliminate the numerous typos from the transcription and to clean the text before taking the word-vectors (stop-words excluded). We followed the protocol of [@Mikolov2013a] addressing a sentiment analysis task on short texts: we aggregated by averaging the representations of every word contained inside the IPU to obtain one vector of the same size, and standardized them. In order to help the determination of the opinion, we added the linguistic rule set and the values of the subjectivity lexicons (as described in \[subsec:Textual-Features\]). We used the number of linguistic patterns we detected as well as the scores from the subjectivity lexicons for every word on each IPU to obtain one score per feature on each IPU. We standardized each linguistic feature. We used pauses longer than 150, 300 and 500 ms (3 experiments) to segment the documents into IPUs. Regarding the tuning of the logistic regression hyperparameters, we trained with values of the inverse of the regularization strength $C$ in {0.1, 0.5, 1, 10, 100}. We used the scikit-learn [@Pedregosa2012] implementation of logistic regression. For the RNN-LSTM, we used the keras implementation [@Chollet2015] with a number of hidden states in {64, 128, 256}, a dropout regularization of $U$ and $W$ (see [@Hochreiter1997]) in {0.1, 0.2, 0.3} (higher dropout decreased performance) and a number of epochs in {4...10}. We used the cross entropy as cost function and Adam as learning algorithm [@Kingma2014].\
***Results :*** The results of the baselines are listed in the first part of Table \[tab:F1-results\] using F1-scores and accuracy. In this table, the global $F1$ (the harmonic mean of recall and precision) is the average $F1$ of both classes ($F1+$ and $F1-$) weighted by their priors, and $Accuracy$ is the percentage of true predictions. We notice that the best results are obtained with our feature set.
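The aggregation protocol above — averaging the word-vectors of an IPU, skipping stop-words and out-of-vocabulary tokens — can be sketched as follows (the tiny two-dimensional embedding table in the check is purely illustrative; the paper uses 300-dimensional word2vec vectors):

```python
def ipu_vector(tokens, embeddings, stop_words=frozenset(), dim=300):
    """Average the embedding vectors of the in-vocabulary, non-stop-word
    tokens of one IPU; returns a zero vector if nothing remains.
    Standardization across IPUs would be applied afterwards."""
    vecs = [embeddings[t] for t in tokens if t in embeddings and t not in stop_words]
    if not vecs:
        return [0.0] * dim
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
```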
This result is actually unexpected given that we are averaging all the word-vectors of the document into a single one, but the effectiveness comes from the other sentiment-related and linguistic features. The results of the RNN-LSTM are not better for the negative class. Though it has the potential to capture some dynamics, the neural network requires more data than available in the considered corpus to be fully effective. Using the BoNG baseline, [@Schuller2009] obtained a F1-score of 78.74% for a sentiment analysis task over the Metacritic database (textual movie reviews). HCRF models ----------- ***Methodology :*** The existence of latent states in HCRF makes them useful to model a dynamic system like, for example, the emotional state of the speaker. Using our feature set, including sentiment-related features and a distributed representation, the model is expected to more effectively exploit the concepts employed by the speakers. We also investigated the granularity of the segmentation, using different thresholds to use the pauses to segment. We trained the HCRF model with the Matlab wrapper of the HCRF Library [@Morency2007] and used a L-BFGS solver for the training. Regarding the exploration of the model hyperparameters, we trained with different values for : the $\ell_2$ regularization parameter in {0.01, 0.05, 0.075, 0.1, 0.25, 0.5, 1}, the context window in {0, 1, 2} and the number of hidden states in {2...5}. The context window is the number of IPU neighbors we concatenated with the centered IPU. We also tested more hidden states, without better results but a longer training time.\ ***Results :*** The results with the HCRF models are summarized in Table \[tab:F1-results\]. The best configuration was obtained with 5 hidden states, no context and a value of the regularization parameter equal to 10. As expected, the HCRF improves the results compared to the logistic regression (F1 score improves from 79 to 80) with the BoNG features. 
The best results were obtained using our set, with an improvement of the negative class F1 score (8 points). The 300-ms threshold also brings a slight improvement to the negative class, while using a higher threshold (500 ms) decreases the performance. However, a 10-fold CV does not bring enough information to conclude about the statistical significance of this difference in performance ($p=0.15$ for the negative class).

  **Features**      **Model**   **F1+**   **F1-**   **F1**   **Acc**
  ----------------- ----------- --------- --------- -------- ---------
  Majority label                          0         50       63
  BoNG                                    69        78       79
  Our set                                 72        79       79
  Our set (150ms)                         68        78       78
  BoNG (150ms)                            67        78       79
  Our set (150ms)                         72        80       80
  Our set (300ms)                         **75**    **82**   **82**
  Our set (500ms)                         67        77       77

  : Classification results (F1-scores and accuracy)[]{data-label="tab:F1-results"}

Discussion
==========

***False predictions :*** We provide here an in-depth analysis of false predictions. We found that many examples of wrong classification were due to an opinion expressed too briefly in a review that was globally neutral about the movie. The corresponding videos contain too few linguistic cues of the global opinion of the review. The system also seems to be influenced by the portions of the review where the speaker relates other people’s opinions or expresses a strong opinion about something or somebody. There are also cases where the speaker briefly gives his/her opinion at the beginning or at the end of the video, while the main part of the review consists of the reviewer’s opinions about general things that concern neither the movie nor its features. Thus, the prediction is complex. The algorithm did not have enough examples to learn that the most important points are the opinions of the speaker related to the movie and its features. Finally, when examining the pause segmentation ground-truth, we see some segmentation errors: there are 108 IPUs containing more than 50 words (using the 150-ms threshold).
Besides, these errors are spread over more than 18% of the files of the corpus. A clean pause detection method based on a text aligner could be an effective solution to this problem.\
***Hidden states, transitions and activation words :*** After each training of a HCRF model, there is, for each label $y$, at least one state $h_{y}$ with compatibility weight $\theta_s(y,h_{y})$ that is highly positive for one label and highly negative for the other label. The transition between those states is highly improbable. We will call those states ’*negative state*’ (*Neg*) and ’*positive state*’ (*Pos*), even if it is an abuse of language. The three other states are considered ’neutral’ (*Neu1*, *Neu2*, *Neu3*), with low-amplitude transition and compatibility weights. Those states can be used as a bridge between positive and negative states to model the development of the opinion dynamics. In Table \[tab:features\_states\_pros\], we present the most relevant examples of the most compatible features with each hidden state (features that correspond to the 30 highest positive weights). In the first column, we can see that the linguistic and paralinguistic features have a less important weight in the neutral states: the only feature having a positive weight for all the neutral states is ’*\*chuckling\**’, while *Pos* and *Neg* have numerous and various linguistic and paralinguistic features with high positive weights.

  **States**   **Linguistic and paralinguistic features**                                **Words corresponding to compatible vectors**
  ------------ ------------------------------------------------------------------------- ------------------------------------------------------------
  *Pos*        adj, disfluency, conjunction, intensifier, \*lip smacking\*, ...           *honors*, *fearless*, *awesome*, *fantastic*
  *Neu1*       \*chuckling\*                                                              *um*, *Uh*, *ah*, *dunno*, *nada*
  *Neu2*       \*chuckling\*                                                              *um*, *Uh*, *ah*, *dunno*, *nada*
  *Neu3*       $\varnothing$                                                              *Thanks*, *justin*, *sean*, *michael*, *Sorry*
  *Neg*        negation, \*falling intonation\*, interjection, \*word elongation\*, ...   *miserably*, *disappointing*, *yelling*, *failure*, *lack*

  : Most compatible features for each hidden state[]{data-label="tab:features_states_pros"}

Regarding word embedding features, our system is no longer learning words but concepts in the 300-dimensional word2vec space, by using the information contained inside the word-vectors. In order to analyze the features of the word2vec space, we look for the vectors of the words contained in our corpus that activate each state the most. In the second column of Table \[tab:features\_states\_pros\], we can see activation words with high valences, e.g. ’*disappointing*’, ’*miserably*’ and ’*awesome*’.

  **Words**   **Positive State**   **Neutral States**   **Negative State**
  ----------- -------------------- -------------------- --------------------
  *Uh*        -2.57                2.8                  -3.86
  *Yeah*      0.98                 2.21                 -5.49
  *Yes*       -0.06                1.21                 -3.14
  *Thanks*    2.25                 3.48                 -7.41

  : Examples of differences in feature function values[]{data-label="tab:diff_word_vectors"}

\
***Role of neutral states :*** Learning word embeddings requires a significant amount of text data to be available, which is why we chose to use pre-trained word embeddings. It is interesting to notice that, even though the word-vectors used were learned from general text data, they include spontaneous speech words, such as ’*uhm*’ or ’*dunno*’. However, they do not correspond to the ones that would have been learned on audio monologues such as the reviews analyzed in this work. For example, while a written ’*uhm*’ in a post may be a stylistic effect aiming at sounding negative, the oral counterpart is a common hesitation and is possibly neutral. Another example is the difference between *yes* and *yeah*: the latter is not common in written text, where it reflects a more positive thought (see Table \[tab:diff\_word\_vectors\]). Further, some other words are merely corpus-specific, e.g. *Hi* and *Thanks* (“*Thanks for watching me guys*”), but are associated with positive valence by the word2vec model trained on text data.
Consequently, the information inside the word-vectors may sometimes not be adapted to the discourse of the speaker. The hidden neutral states of the HCRF seem to handle this issue, so that the problematic word-vectors do not affect the states linked with the global labels of the review.

Conclusion and future work
==========================

In this paper, we have presented a HCRF model that uses a pause-based segmentation of movie review transcripts in order to model the dynamics of the opinion of the speaker through latent states. Our textual feature set includes word embeddings, linguistic rules and cues from subjectivity lexicons. The use of HCRF classifiers allows us to implicitly learn local linguistic representations of each inter-pausal segment of the reviews, making the integration of word embeddings in the classification system more meaningful. We also investigated a pause-based segmentation of long unannotated discourse, finding that too long segments lead to a loss of performance.\
In our future work we would like to improve the way we use the word embedding in our model in order to handle more precise concepts with more hidden states. Further, we would like to test on a bigger corpus in order to obtain significant results.

[^1]: Details at https://code.google.com/archive/p/word2vec/

[^2]: data available by sending an email to [email protected]

[^3]: We used the CoreNLP from Stanford [@Schuster2016]

[^4]: https://github.com/phatpiglet/autocorrect
--- abstract: 'In this paper, we study the problem of optimal scheduling of content placement along time in a base station with limited cache capacity, taking into account jointly the offloading effect and freshness of information. We model offloading based on popularity in terms of the number of requests and information freshness based on the notion of age of information (AoI). The objective is to reduce the load of backhaul links as well as the AoI of contents in the cache via a joint cost function. For the resulting optimization problem, we prove its hardness via a reduction from the Partition problem. Next, via a mathematical reformulation, we derive a solution approach based on column generation and a tailored rounding mechanism. Finally, we provide performance evaluation results showing that our algorithm provides near-optimal solutions.' author: - Ghafour Ahani and  - Di Yuan bibliography: - 'IEEEabrv.bib' - 'ForIEEEBib.bib' title: Accounting for Information Freshness in Scheduling of Content Caching --- Age of information, base station, caching, time-varying popularity. Introduction ============ Content caching at the network edge is considered to be an enabler for future wireless networks. This technique strives to mitigate the heavy burden on backhaul links via providing the users with their contents of interest from the network edge without the need of going to the core networks. In designing effective caching strategies, previous works have focused on content popularity, whereas another important aspect is information freshness. Popularity of a content is defined as the number of users requesting the content. Popularity may vary over time[@8327582]. Thus, some contents may be added to or removed from the cache as they become popular or unpopular. Freshness of contents in the cache refers to how recent the content has been obtained from the core network. 
The longer a content is stored in the cache without an update, the higher the risk that the cached content becomes obsolete. Hence, we would like to refresh the cached contents often, which however leads to a higher load on the backhaul. Freshness of contents naturally arises in applications such as news, traffic information, etc., and it may have a great impact on user satisfaction. We model the freshness of contents using the notion of age of information (AoI). For content caching, AoI is defined as the amount of time elapsed since the content was last refreshed. In this paper, we use a joint cost function to address the trade-off between the benefit of offloading via caching and AoI. Works such as [@7562037; @Cost2018Deng; @6883600; @7414014] took into account only the popularities of contents in designing cache placement strategies. The works in [@7562037; @Cost2018Deng] considered content caching with known popularities of contents. The studies in [@6883600; @7414014] showed that the popularities of contents can be estimated via learning-based algorithms. However, in the aforementioned works, the popularity of a content is assumed to be time-invariant. In [@8357917; @Zhang2018Using], caching with time-varying popularity profiles is investigated. In [@Zhang2018Using], an algorithm is proposed to estimate the time-varying popularities of contents. The studies in [@8000687; @Tang2019] considered information freshness but not the popularity of contents in their caching problems. Recently, a few works [@8006505; @8006506; @8795490] have considered both popularity and freshness of contents. However, these works have the following limitations. In [@8006505], the downloading cost of contents from the server is neglected. In [@8006506], only one content of the cache can be updated in each time slot. In [@8795490], it is assumed that the cache capacity is unlimited.
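To make the AoI dynamics concrete, the following minimal sketch (illustrative code, not from the paper) tracks the AoI of a single content across time slots: the AoI resets to zero whenever the content is downloaded, and grows by one for each slot the content stays cached without an update.

```python
# Minimal AoI tracker (toy sketch). `cached[t]` is 1 if the content is in
# the cache in slot t; `refreshes` is the set of slots in which it is
# (re-)downloaded from the core network. In the paper, entering the cache
# always counts as a download; here the refresh slots are given explicitly.
def aoi_trace(cached, refreshes):
    ages, age = [], None
    for t, in_cache in enumerate(cached):
        if t in refreshes:
            age = 0                  # freshly obtained: AoI 0
        elif in_cache and age is not None:
            age += 1                 # one more slot without an update
        else:
            age = None               # not cached: AoI undefined
        ages.append(age if in_cache else None)
    return ages

# Cached in slots 0-3 and refreshed in slots 0 and 2:
assert aoi_trace([1, 1, 1, 1], {0, 2}) == [0, 1, 0, 1]
```

The trace makes the trade-off visible: each extra refresh slot resets the AoI to zero at the price of one more download from the core network.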
In this paper, we study optimal scheduling of content caching over time in a base station (BS) with limited storage capacity, jointly taking into account offloading via caching and the freshness of contents. The objective is to mitigate the load of backhaul links by minimizing a penalty cost function related to content downloading, content updating, and AoI costs, subject to the cache capacity. The main contributions of this work are summarized as follows: - The caching scheduling problem is formulated as an optimization problem. Specifically, it is formulated as an integer linear program (ILP) and the hardness of the problem is proved based on a reduction from the Partition problem. - Via a problem reformulation, a column generation algorithm (CGA) is developed. We prove that the subproblem of CGA can be converted to a shortest path problem that can be solved in polynomial time. In addition, the CGA provides an effective lower bound (LB) on the global optimum. - The solution obtained from CGA may be fractional; thus, an advanced, problem-tailored rounding algorithm (RA) is derived to construct integer solutions. - Simulations show the effectiveness of our solution approach by comparing the obtained solutions to the LB as well as to conventional algorithms. Our algorithm provides solutions within $1\%$ of the global optimum. System Scenario and Problem Formulation ======================================= System Scenario {#System_Scenario} --------------- The system scenario consists of a content server, a BS, and a set of users $\mathcal{U}=\{1,2,\dots,U\}$ within the coverage of the BS. The server has all the contents, and the BS is equipped with a cache device of capacity $S$. The contents are dynamic, i.e., the information they contain may change over time. Denote by $\mathcal{F}=\{1,2,\dots,F\}$ the set of the contents. We assume that the server always has the up-to-date version of the contents. Denote by $l_f$ the size of content $f$.
Each content is either fully stored or not stored at all at the BS. The system scenario is shown in Figure \[SystemScenario\]. ![System scenario.[]{data-label="SystemScenario"}](system_scenario.eps) We consider a slotted time system of $\mathcal{T}=\{1,2,\dots,T\}$ time slots. At the beginning of each time slot, the contents to be stored in the cache need to be determined by an updating/placement action. Namely, some stored contents may be removed from the cache, some contents may be added to the cache, and some contents may be re-downloaded from the server. The freshness of a content may decrease over time. We use AoI to model the freshness of contents. A content that is newly downloaded from the server has AoI $0$, and for each time slot it remains in the cache without re-downloading, its AoI increases by one time slot. Denote by $p_f(i)$ the cost associated with an AoI of $i$ time slots for content $f$. A content has AoI $i$ time slots when the content has been stored in the cache for $i$ consecutive time slots without any update. In our model, user $u\in \mathcal{U}$ requests at most $R_u$ contents within the $T$ time slots based on its interest. The set of requests for user $u$ is denoted by $\mathcal{R}_u=\{1,\dots,R_u\}$. The downloading process of a content starts as soon as the request is made. The content can be downloaded either from the cache if the content is in the cache, or otherwise from the server. We assume the time of each request is known or can be predicted using a prediction model, e.g., the one in [@Zhang2018Using]. For user $u$ and its $r$-th request, the requested content and the time slot of request are denoted by $h(u,r)$ and $o(u,r)$, respectively. Cost Model ---------- Denote by $x_{tf}$ a binary optimization variable which equals one if and only if the $f$-th content is stored in time slot $t$. Denote by $c_s$ and $c_b$ the costs for downloading one unit of data from the server and the cache to a user, respectively.
We have $c_s>c_b$ to encourage downloading from the cache. The downloading cost for user $u$ to obtain its $r$-th request, denoted by $C_{ur}$, is expressed as: $$\begin{aligned} C_{ur}=l_{h(u,r)}[c_bx_{o(u,r)h(u,r)}+c_s(1-x_{o(u,r)h(u,r)})]. \end{aligned}$$ The downloading cost for completing all requests of all users, denoted by $C_{download}$, is $C_{download}=\sum_{u=1}^{U}\sum_{r=1}^{R_u}C_{ur}$. Denote by $a_{tfi}$, $i \in \{0,1,\dots,t-1\}$, a binary variable indicating whether or not content $f$ is in the cache in time slot $t$ with AoI $i$ time slots. The overall AoI cost is expressed as: $$\begin{aligned} C_{AoI}=\sum_{f=1}^{F}\sum_{t=1}^{T}\sum_{i=1}^{t-1}p_f(i)m_{tf}a_{tfi}, \label{agecostf} \end{aligned}$$ where $m_{tf}$ is the number of users requesting content $f$ in time slot $t$. Updating contents in the cache incurs an updating cost. The updating cost, denoted by $C_{update}$, is expressed as: $$\begin{aligned} C_{update}=\sum_{t=1}^{T}\sum_{f=1}^{F}l_f(c_s-c_b)a_{tf0}, \label{c_update} \end{aligned}$$ where $a_{tf0}=1$ means that the content has just been downloaded from the server, incurring cost $l_f(c_s-c_b)$. Here, $(c_s-c_b)$ is the per-unit downloading cost from the server to the cache. Finally, the total cost is denoted by $C_{total}$ and expressed as: $$\begin{aligned} C_{total}=C_{download}+C_{update}+\lambda C_{AoI}. \label{totalcost} \end{aligned}$$ Here, $\lambda$ is a weighting factor between $C_{AoI}$ and $C_{update}$. A larger $\lambda$ means more frequent updating of the cached contents and consequently a smaller AoI for them. Problem Formulation ------------------- The update-enabled caching problem (UECP) is formulated as an ILP, and shown in (\[UECP\]). $$\begin{aligned} {2} \text{(UECP)}~~ &\min\limits_{\bm{x},\bm{a}}\quad C_{download}+C_{update}+\lambda C_{AoI}\\ \text{s.t}.
\quad & \sum_{f=1}^{F}x_{tf}l_f \leq S,t\in \mathcal{T}, \label{const:cacheSize}\\ & \sum_{i=0}^{t-1}a_{tfi}=x_{tf} ,t\in \mathcal{T}, f \in\mathcal{F}, \label{const:a1}\\ & a_{tfi}\ge x_{tf}+a_{(t-1)f(i-1)}-a_{tf0}-1 , \nonumber \\&t\in \mathcal{T}\setminus\{1\},f \in \mathcal{F},i\in\{1,\dots,t-1\}, \label{const:a2}\\ & x_{tf},a_{tfi}\in \{0,1\},t\in \mathcal{T},f\in \mathcal{F}, i \in \{0,\dots,t-1\}.\end{aligned}$$ \[UECP\] Constraints (\[const:cacheSize\]) indicate that the used storage space is less than or equal to the cache capacity in each time slot. Constraints  state that if the content is in the cache, it has to have one of the AoIs $0,\dots,t-1$. Constraints  indicate that content $f$ in time slot $t$ has AoI $i$ if and only if the content is in the cache in time slot $t$, does not have AoI $0$ in time slot $t$, and has AoI $i-1$ in time slot $t-1$. Even though this ILP can be solved by a standard solver, it needs significant computational time. Exploiting the structure of the problem, we develop a solution method based on column generation. Complexity Analysis ------------------- **UECP** is $\mathcal{NP}$-hard. The proof is established by a polynomial reduction from the Partition problem that is $\mathcal{NP}$-complete [@garey1979computers]. Consider a Partition problem with a set of $N$ integers, i.e., $\mathcal{N}=\{n_1,\dots,n_N\}$. The task is to decide whether it is possible to partition $\mathcal{N}$ into two subsets $\mathcal{N}_1$ and $\mathcal{N}_2$ with equal sum. The reduction is constructed as follows. We set the cache capacity as $S=\frac{1}{2}\sum_{i=1}^{N}n_i$, the set of contents to $\mathcal{F}=\{1,\dots,N\}$, the size of content $f \in \mathcal{F}$ to $l_f=n_f$, and the number of time slots to one, i.e., $T=1$. As $T=1$, there are no AoI costs, while storing a content still incurs the initial updating cost. The time slots of all requests are set to $1$, i.e., $o(u,r)=1, u \in \mathcal{U}, r \in \mathcal{R}_u$. We set $m_{1f}=2$ for $f \in \mathcal{F}$, $c_s=2$, and $c_b=1$.
With this setting, if the cache stores content $f$, a gain of $4l_f-2l_f-l_f=l_f$ is achieved: the server downloading cost $4l_f$ is replaced by the cache downloading cost $2l_f$ plus the updating cost $l_f$. As the cache capacity is $S=\frac{1}{2}\sum_{i=1}^{N}n_i$, a maximum possible gain of $\frac{1}{2}\sum_{i=1}^{N}n_i$ can be achieved. Now, the question is whether this maximum gain can be achieved. This question can be answered by solving UECP, which in turn answers the Partition problem. Hence the conclusion. Reformulation of UECP ===================== We provide a reformulation of the problem that enables a CGA. We define the caching and updating decisions for content $f$ across the $T$ time slots as tuple $(\bm{x}_f,\bm{a}_f)$ in which $\bm{x}_f=[x_{1f}, x_{2f},\dots,x_{Tf}]^ \mathrm{ T }$ and $\bm{a}_f=[a_{1f0}, a_{2f0},\dots,a_{Tf(T-1)}]^ \mathrm{ T }$. In total, $3^T$ such tuples exist, and one of them is used in a solution. Denote by $\mathcal{K}=\{1,2,\dots,3^T\}$ the index set for all possible solutions. We refer to a possible solution as a column. The cost of column $k\in\mathcal{K}$ for content $f\in \mathcal{F}$ is denoted by $C_{fk}$ and can be calculated by the formula in (\[Cfk\]). $$\begin{aligned} C_{fk}&=\sum_{t=1}^{T} l_{f}m_{tf}[c_bx_{tf}^{(k)}+c_s(1-x_{tf}^{(k)})]\\ &+\sum_{t=1}^{T}l_f(c_s-c_b)a^{(k)}_{tf0}+\lambda\sum_{t=1}^{T}\sum_{i=1}^{t-1}p_f(i)m_{tf}a_{tfi}^{(k)}.\label{Cfk} \end{aligned}$$ In (\[Cfk\]), $x_{tf}^{(k)}$ and $a^{(k)}_{tfi}$ are constants and represent the values of $x_{tf}$ and $a_{tfi}$ with respect to the $k$-th column, respectively. Now, ILP  can be reformulated as . -5pt $$\begin{aligned} {2} ~~~~~~~~~~&\min\limits_{\bm{w}}\quad \sum_{f\in \mathcal{F}}\sum_{k\in \mathcal{K}}C_{fk}w_{fk} \label{MPC1}\\ \text{s.t}. \quad & \sum_{f\in \mathcal{F}}\sum_{k\in \mathcal{K}}l_fx^{(k)}_{tf}w_{fk} \leq S,t\in \mathcal{T} \label{MP_C1}\\ &\sum_{k\in \mathcal{K}}w_{fk}=1,f\in \mathcal{F}\\ & w_{fk}\in \{0,1 \},f\in \mathcal{F},k\in \mathcal{K}.
\label{MP_C2}\end{aligned}$$ \[PR\] -20pt Here, $w_{fk}$ is a binary variable where $w_{fk}=1$ if and only if the $k$-th column of content $f$ is selected, and zero otherwise. Constraints are the cache capacity constraints, and constraints indicate that only one of the columns is used. Algorithm Design {#alg_design} ================ In this section, we present our solution method, which consists of two algorithms. Algorithm $1$ is a column generation algorithm (CGA) applied to the continuous version of . Algorithm $2$ is a rounding algorithm (RA) applied to the solution obtained from CGA if the solution is fractional. These algorithms are applied alternately until an integer solution is constructed. The solution method is shown in Algorithm $\ref{alg_CGAandERA}$. The term RMP in the algorithm will be discussed later. \[alg\_CGAandERA\] STOP $\leftarrow 0$ Apply CGA to RMP and obtain $\bm{w}^*$ STOP $\leftarrow 1$ Apply RA to $\bm{w}^*$ Column Generation Algorithm --------------------------- In column generation, the problem is decomposed into a so-called master problem (MP) and a subproblem (SP). The algorithm starts with a subset of columns and alternately solves MP and SP. Each time SP is solved, a new column that may improve the objective function is generated. The benefit of CGA is to exploit the fact that, at optimum, only a few columns are used. ### MP and RMP MP is the continuous version of formulation . Restricted MP (RMP) is the MP but with a small subset $\mathcal{K}^\prime_f\subset\mathcal{K}$ for any content $f\in \mathcal{F}$. RMP is expressed in . Denote by $K^\prime_f$ the cardinality of $\mathcal{K}^\prime_f$. -5pt $$\begin{aligned} {2} \text{(RMP)}~~~~~~~~~~& \min\limits_{\bm{w}}\quad \sum_{f\in \mathcal{F}}\sum_{k\in \mathcal{K}^\prime_f}C_{fk}w_{fk} \label{obj:RMP} \\ \text{s.t}.
\quad & \sum_{f\in \mathcal{F}}\sum_{k\in \mathcal{K}^\prime_f}l_fx^{(k)}_{tf}w_{fk} \leq S,t\in \mathcal{T}, \label{RMP_cachecapa}\\ &\sum_{k\in \mathcal{K}^\prime_f}w_{fk} = 1,f\in \mathcal{F},\label{RMP_1col}\\ & 0\le w_{fk} \le 1,f\in \mathcal{F},k\in \mathcal{K}^\prime_f.\end{aligned}$$ \[RMP\] -20pt ### Subproblem The SP uses the dual information to generate new columns. Denote by $\mathbf{w}^*=\{w^*_{fk}, f\in \mathcal{F} ~\text{and}~ k\in \mathcal{K}^\prime_f\}$ the optimal solution of RMP. Denote by $\bm{\pi}^*$ and $\bm{\beta}^*$ the corresponding optimal dual variables of and , respectively, i.e., $\bm{\pi}^*=[\pi^*_1,\pi^*_2,\dots,\pi^*_{T}]^\mathrm{ T }$ and $\bm{\beta}^*=[\beta^*_1,\beta^*_2,\dots,\beta^*_F]^ \mathrm{ T }$. After obtaining $\mathbf{w}^*$, we need to check whether $\mathbf{w}^*$ is also an optimal solution of MP. This can be determined by finding a column with the minimum reduced cost for each content $f\in \mathcal{F}$. If all these values are nonnegative, the current solution is optimal. Otherwise, we add the columns with negative reduced cost to the corresponding sets. Given $\bm{\pi}^*$ and $\bm{\beta}^*$ for content $f\in \mathcal{F}$, the reduced cost of column $(\bm{x}_f,\bm{a}_f)$ is $C_{f}-\sum_{t=1}^{T}l_f\pi^*_t x_{tf}-\beta^*_f$, where $C_{f}$ can be computed using expression (\[Cfk\]) in which constants $x_{tf}^{(k)}$ and $a_{tfi}^{(k)}$ are replaced with optimization variables $x_{tf}$ and $a_{tfi}$, respectively. To find the column with the minimum reduced cost for content $f\in \mathcal{F}$, we need to solve subproblem SP$_f$, shown in . Denote by $(\bm{x}^*_f,\bm{a}^*_f)$ the optimal solution of SP$_f$. If the reduced cost of $(\bm{x}^*_f,\bm{a}^*_f)$ is negative, we add it to $\mathcal{K}^\prime_f$. -5pt $$\begin{aligned} {2} (\text{SP}_f)~~~~~~~& \min\limits_{(\bm{x_f},\bm{a_f})}\quad C_{f}-\sum_{t=1}^{T}l_f\pi^*_t x_{tf}-\beta^*_f \label{SP_objective}\\ \text{s.t}.
\quad & \sum_{i=0}^{t-1}a_{tfi}=x_{tf} ,t\in \mathcal{T}, f \in\mathcal{F}, \label{constSP:a1}\\ & a_{tfi}\ge x_{tf}+a_{(t-1)f(i-1)}-a_{tf0}-1 ,\nonumber \\ & t\in \mathcal{T}\setminus\{1\},f \in \mathcal{F},i \in\{1,\dots,t-1\}, \label{constSP:a2}\\ &a_{tfi}\le a_{(t-1)f(i-1)},t\in \mathcal{T}\setminus\{1\},f \in \mathcal{F},\nonumber \\ &i \in\{1,\dots,t-1\}, \label{constSP:a3}\\ & x_{tf} \in \{0,1\},t\in \mathcal{T},f\in \mathcal{F},\\ &a_{tfi}\in \{0,1\},t\in \mathcal{T},f\in \mathcal{F},i \in \{0,\dots,t-1\}.\end{aligned}$$ \[SP\] -20pt $$\label{ObjectiveFunction} \begin{aligned} &C_{fk^\prime}-\sum_{t \in \mathcal{T}}l_f\pi^*_{tf} x_{tf}\\ &=\sum_{u \in \mathcal{U}}\sum_{r \in \mathcal{R}_u: h(u,r)=f}l_f[ \sum_{t=o(u,r)}^{d(u,r)}y_{urt}c_b +(1- \sum_{t=o(u,r)}^{d(u,r)}y_{urt})c_s]+\sum_{t=1}^{T}l(c_s-c_b)a_t-\sum_{t=1}^{T}l\pi^*_t x_{t}\\ &=\underbrace{\sum_{u\in \mathcal{U}}\sum_{r\in \mathcal{R}_u}lc_s}_{C}+ \sum_{t=1}^{T}\left[a_t\underbrace{l(c_s-c_b)}_{c_1}-\sum_{i=1}^{t+1} \underbrace{\left[\sum_{u \in \mathcal{U}}\sum_{r \in \mathcal{R}_u:o(h,r)=i...t+1~ and~ d(u,r)>=t}lc_b +l\pi^*_t \right]}_{g_t}x_{ti} \right]. \end{aligned}$$ Even though is an ILP, in the following, we show that it can be solved as a shortest path problem using, for example, Dijkstra’s algorithm[@Cormen2009introduction] in polynomial time. For content $f \in \mathcal{F}$, SP$_f$ can be solved in polynomial time as a shortest path problem. \[shortest\_path\] Consider content $f\in\mathcal{F}$. We construct an acyclic directed graph where finding the shortest path from the source to the destination is equivalent to solving SP$_f$. The objective function  can be rewritten as . Denote by $C$ the total cost for downloading content $f$ from the server for all users requesting the content over all time slots, i.e., $C=\sum_{t=1}^{T}l_fm_{tf}c_s$. Denote by $v_{it}=p_f(i)m_{tf}$ the AoI cost incurred when $m_{tf}$ users request content $f$ in time slot $t$ and the content has AoI $i$.
Denote by $c_1=l_f(c_s-c_b)$ the downloading cost from the server to the cache. Denote by $g_t=l_fm_{tf}(c_s-c_b)-l_f\pi^*_t$ the reduction in $C$ due to storing content $f$ in time slot $t$. The graph is constructed as follows. Nodes $S$ and $D$ are used to represent the source and destination. Node $V_{00}$ is used to represent $x_0=0$. For time slot $t$, there are $t+1$ vertically aligned nodes. Using node $V_{t0}$ means that the content is not in the cache, and using node $V_{t1}^i$, $i \in \{0,\dots,t-1\}$, means that the content is in the cache and has AoI $i$. From node $S$ to $V_{00}$ there is an arc with weight $C$. For each node $V_{t0}$ there are two outgoing arcs: one to $V_{(t+1)0}$, which means that the content is not stored in the next time slot and has weight $0$, and the other to $V_{(t+1)1}^{0}$, which has weight $c_1-g_t$ and means that the content is downloaded to the cache in the next time slot and has AoI $0$. For each node $V_{t1}^i$ there are three outgoing arcs to $V_{(t+1)0}$, $V_{(t+1)1}^0$, and $V_{(t+1)1}^{(i+1)}$, respectively. Using the first arc means that the content is deleted for the next time slot and has weight $0$. Using the second arc means the content is re-downloaded from the server and has AoI $0$, with weight $c_1-g_t$. Using the third arc means that the content is kept and its AoI increases by one unit, with weight $v_{(i+1)(t+1)}-g_{(t+1)}$. Finally, there are $T+1$ arcs from $V_{T0}$ and $V_{T1}^{i}$ for $i\in\{0,\dots,T-1\}$ to $D$, each with weight $-\beta_f$. Given any solution of , by construction of the graph, the solution directly maps to a path from the source to the destination with the same objective function. Conversely, given a path we construct an ILP solution. For time slot $t$, if the flow is in node $V_{t0}$ then we set $x_{tf}=0$. If the flow is in $V_{t1}^i$, we set $x_{tf}=1$ and $a_{tfi}=1$. The resulting ILP solution has the same objective function value as the length of the given path in terms of the arcs’ weights.
Hence the conclusion. $$\label{re_obj} \begin{aligned} &C_{f}-\sum_{t=1}^{T}l_f\pi^*_t x_{tf}=\sum_{t=1}^{T} l_fm_{tf}[c_bx_{tf}+c_s(1-x_{tf})] +\sum_{t=1}^{T}l_f(c_s-c_b)a_{tf0}+\sum_{t=1}^{T}\sum_{i=1}^{t-1}p_f(i)m_{tf}a_{tfi}-\sum_{t=1}^{T}l_f\pi^*_{t} x_{tf}\\ &~~~~~~~~~~~~~~~~~~~~=\underbrace{\sum_{t=1}^{T}l_fm_{tf}c_s}_{C}+ \sum_{t=1}^{T}\left[\underbrace{l_f(c_s-c_b)}_{c_1}a_{tf0}+\sum_{i=1}^{t-1}\underbrace{p_f(i)m_{tf}}_{v_{it}}a_{tfi}- \underbrace{\left[l_fm_{tf}(c_s-c_b)-l_f\pi^*_{t} \right]}_{g_t}x_{tf}\right]. \end{aligned}$$ ![image](shortest_path.eps) \[alg\_CGA\] $S$, $c_b$, $c_s$, $\lambda$, $l_f$, $f \in \mathcal{F}$, $o(u,r)$ and $h(u,r)$, u $\in \mathcal{U},$\ $r\in \mathcal{R}_u$ $\mathbf{w}^*$ $\mathcal{K}^\prime_f \leftarrow \{(\bf{0}^\mathrm{T},\bf{0}^\mathrm{T})\}$ for $f \in \mathcal{F}$ STOP $\leftarrow 0$ Solve RMP and obtain $\mathbf{w}^*$, $\bm{\pi}^*$, and $\bm{\beta}^*$ STOP $\leftarrow 1$ Solve SP$_f$ and obtain $(\bm{x}^*_f,\bm{a}^*_f)$ $\mathcal{K}^\prime_{f} \leftarrow \mathcal{K}^\prime_{f}\cup \{(\bm{x}^*_f,\bm{a}^*_f)\}$ STOP $\leftarrow 0$ Rounding Algorithm ------------------ The solution of CGA could be fractional. Thus, we need a mechanism to construct integer solutions. We design a rounding algorithm (RA) to achieve this. RA repeatedly fixes the caching decisions of contents over time slots until an integer solution is constructed. The caching decision for content $f$ and time slot $t$ is determined based on the value $z_{tf}$, defined as $z_{tf}=\sum_{k\in \mathcal{K}^\prime_f}x^{(k)}_{tf}w^*_{fk}$. This value indicates how likely it is optimal to store content $f$ in time slot $t$. In the following, we prove a relationship between $\bm{z}$ and $\bm{w}$ and then give the RA. For any content $f\in \mathcal{F}$, $w^*_{fk}$ is binary for any $k$ if and only if every element of $\bm{z}_{f}=[z_{1f},z_{2f},\dots,z_{Tf}]$ is binary.
\[IntegerTheory\] For necessity: for any content $f\in \mathcal{F}$, if $w^*_{fk}$ is binary for any $k$, $k\in \mathcal{K}^\prime_f$, it is obvious from the definition that all elements of $\bm{z}_{f}$ are binary. Now, we prove the sufficiency. For any content $f\in \mathcal{F}$, assume that every element in $\bm{z}_{f}$ is binary. Assume that $w_{fk}^*>0$ for $k \in \mathcal{K}_f^{\prime\prime}\subseteq \mathcal{K}_{f}^\prime$; then $z_{tf}=\sum_{k\in \mathcal{K}_f^{\prime\prime}}x^{(k)}_{tf}w^*_{fk}$. For element $z_{tf}$, $t \in \mathcal{T}$, to be binary, the elements $x_{tf}^{(k)}$ for $k \in \mathcal{K}_f^{\prime\prime}$ must be either all zero or all one. Otherwise, as $\sum_{k\in\mathcal{K}_f^{\prime\prime}}w^*_{fk}=1$, one of the $z_{tf}$ becomes fractional. This means that all columns corresponding to $w^*_{fk}$ for $k \in \mathcal{K}_f^{\prime\prime}$ must be the same. Having two identical columns violates the condition that any two columns in $\mathcal{K}^\prime_f$ are different; hence only one column is selected. Therefore, for any content $f\in \mathcal{F}$, if $z_{tf}$ is binary for any $t$, $t\in \mathcal{T}$, then $w^*_{fk}$ is binary for any $k$, $k\in \mathcal{K}^\prime_f$. Hence the proof. RA consists of three main steps, which are shown in Algorithm \[alg\_ERA\]. First, for content $f \in \mathcal{F}$ in time slot $t\in \mathcal{T}$, the decision is to store the content if $z_{tf}=1$. All columns that do not comply with this caching decision are discarded. These steps are done by Lines $2$-$3$. Second, the element of $\bm{z}$ closest to zero or one is found and rounded. Based on the rounding outcome, the caching decision is determined and non-complying columns are discarded. These steps are done via Lines $4$-$16$. Finally, the algorithm fixes the decisions of the contents across the time slots to zero if there is no remaining spare space in the cache to store them in these time slots. This is done by Lines $20$-$23$.
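As a toy illustration of the quantity $z_{tf}$ and the rounding choice just described (hypothetical data; this sketch omits the column-discarding and re-optimization steps of the full RA):

```python
def z_values(x_cols, w):
    """x_cols[f][k][t] = x_{tf}^{(k)}; w[f][k] = w*_{fk}.
    Returns z[f][t] = sum_k x_{tf}^{(k)} * w*_{fk}."""
    return [[sum(col[t] * wk for col, wk in zip(x_cols[f], w[f]))
             for t in range(len(x_cols[f][0]))]
            for f in range(len(w))]

def pick_rounding(z):
    """Return (distance, t, f, rounded_value) for the fractional entry
    of z closest to zero or one, i.e., the entry the RA rounds first."""
    best = None
    for f, zf in enumerate(z):
        for t, v in enumerate(zf):
            if 0 < v < 1:
                d = min(v, 1 - v)      # distance to the nearest integer
                if best is None or d < best[0]:
                    best = (d, t, f, round(v))
    return best

# One content, two columns with weights 0.8 / 0.2:
z = z_values([[[1, 0], [1, 1]]], [[0.8, 0.2]])
assert z == [[1.0, 0.2]]                   # slot 0 is already integral
assert pick_rounding(z)[1:] == (1, 0, 0)   # round z_{10} = 0.2 down to 0
```

The first assertion also mirrors the proposition above: an integral $z_{tf}$ arises exactly when the selected columns agree in slot $t$.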
The caching decisions made until now will remain fixed in all subsequent iterations. Note that with these fixings, SP can still be solved as a shortest path problem. If $x_{tf}$ is set to $0$, nodes $V_{t1}^i$ for $i\in\{0,\dots,t-1\}$ and their connected arcs are deleted from the graph. If $x_{tf}$ is set to $1$, node $V_{t0}$ and its connected arcs are deleted. \[alg\_ERA\] $\bm{w}^*$ and $(\bm{x},\bm{a})$ Compute $\bm{z}=\{z_{tf}, t \in \mathcal{T}, f \in \mathcal{F}\}$ where $z_{tf}=\sum_{k\in \mathcal{K}^\prime_f}x^{(k)}_{tf}w^*_{fk}$ Fix $x_{tf}=1$ in SP$_f$ if $z_{tf}=1$, $t \in \mathcal{T}, f \in \mathcal{F}$ \[fixxto1\] Fix $w_{fk}=0$ in RMP if $x^{(k)}_{tf}=0$, $k \in \mathcal{K}^\prime_f, t \in \mathcal{T}, f \in \mathcal{F}$\[fixyto0\] ${ \underaccent{\bar}{z}}\leftarrow\underset{t\in\mathcal{T}, f\in \mathcal{F}~~~~~~~~~~~~~~~~~~~~~~~~~~~} {\min\{z_{tf}| z_{tf}>0~\text{and}~z_{tf}<1\}}$\[minz\] $({ \underaccent{\bar}{t}},{ \underaccent{\bar}{f}})\leftarrow\underset{t\in\mathcal{T}, f\in \mathcal{F}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} {\arg\min\{z_{tf}| z_{tf}>0~\text{and}~z_{tf}<1\}}$\[minzloc\] $\bar{z}\leftarrow\underset{t\in\mathcal{T}, f\in \mathcal{F}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} {\min\{1-z_{tf}| z_{tf}>0~\text{and}~z_{tf}<1\}}$\[maxz\] $(\bar{t},\bar{f})\leftarrow\underset{t\in\mathcal{T}, f\in \mathcal{F}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~} {\arg\min\{1-z_{tf}| z_{tf}>0~\text{and}~z_{tf}<1\}}$\[maxzloc\] \[nearestcheck\] Fix $x_{{ \underaccent{\bar}{t}}{ \underaccent{\bar}{f}}}=0$ in SP$_{{ \underaccent{\bar}{f}}}$\[fixxunderbarto0\] Fix $w_{{ \underaccent{\bar}{f}}k}=0$ if $x^{(k)}_{{ \underaccent{\bar}{t}}{ \underaccent{\bar}{f}}}=1$, $k \in \mathcal{K}^\prime_{{ \underaccent{\bar}{f}}}$\[fixyunderbarto0\] Fix $x_{\bar{t}\bar{f}}=1$ in SP$_{\bar{f}}$\[fixxbarto1\] Fix $w_{\bar{f}k}=0$ if $x^{(k)}_{\bar{t}\bar{f}}=0$, $k \in \mathcal{K}^\prime_{\bar{f}}$\[fixybarto0\] Fix $x_{\bar{t}\bar{f}}=0$ in SP$_{\bar{f}}$\[fixxbarto0\] Fix
$w_{\bar{f}k}=0$ if $x^{(k)}_{\bar{t}\bar{f}}=1$, $k \in \mathcal{K}^\prime_{\bar{f}}$\[fixybarto01\] $\mathcal{F}^\prime \leftarrow \{f \in \mathcal{F}| x_{tf} \text{ is set to one}\}$ $S^\prime\leftarrow S-\sum_{f \in \mathcal{F}^\prime}l_f$ \[fixto0bysize1\] Fix $x_{tf}=0$ in SP$_{f}$ Fix $w_{fk}=0$ in RMP if $x^{(k)}_{tf}=1$, $k \in \mathcal{K}^\prime_{f}$ \[fixto0bysize2\] Performance Evaluation {#sec:performance} ====================== We compare CGA to the LB and two conventional caching algorithms: random-based algorithm (RBA) [@7959865] and popularity-based algorithm (PBA) [@Ahlehagh2014Video]. Both algorithms treat contents one by one. In RBA, the contents are considered randomly, but with respect to their total numbers of requests; a content with a higher number of requests is more likely to be selected for caching. In PBA, popular contents, i.e., contents with a higher number of requests, are considered first. For the content under consideration, if the content was not in the cache in the previous time slot, it is downloaded with AoI zero. Otherwise, if the AoI cost has reached fifty percent of the downloading cost, the content is re-downloaded; otherwise, the content is kept and the AoI increases by one. The content popularity distribution is modeled by a Zipf distribution[@KarthikeyanShanmugam2013], i.e., the probability that a user requests the $f$-th content is $\frac{f^{-\gamma}}{\sum_{i \in \mathcal{F}}i^{-\gamma}}$. The popularities of contents change randomly across the time slots. We set $U=600$, $F=200$, and $T=24$ with a length of one hour for each time slot [@8691020]. The sizes of files are uniformly generated within the interval $[1,10]$. The cache capacity is set as $S=\rho \sum_{f \in \mathcal{F}}l_f$. Here, $\rho \in [0,1]$ is the ratio of the cache size to the total size of all contents. The number of requests for each user is randomly generated in $[1,15]$.
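The Zipf request model above can be sketched as follows (a minimal illustration; the parameter values $F=200$ and $\gamma=0.54$ mirror the simulation setup, everything else is assumed):

```python
import random

def zipf_pmf(F, gamma):
    """P(request content f) = f^{-gamma} / sum_i i^{-gamma}, f = 1..F."""
    w = [f ** (-gamma) for f in range(1, F + 1)]
    z = sum(w)
    return [wi / z for wi in w]

def sample_requests(F, gamma, n, rng=None):
    """Draw n content requests from the Zipf popularity distribution."""
    rng = rng or random.Random(0)
    return rng.choices(range(1, F + 1), weights=zipf_pmf(F, gamma), k=n)

pmf = zipf_pmf(200, 0.54)          # F = 200, gamma = 0.54 as in the setup
assert abs(sum(pmf) - 1.0) < 1e-9  # a valid probability distribution
assert pmf[0] > pmf[-1]            # lower-ranked contents are more popular
```

Re-drawing the probabilities (or permuting the content ranks) per time slot gives the randomly changing popularities used in the experiments.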
The performance results are reported in Figures $\ref{impact_U}\text{-}\ref{impact_lambda}$. The deviation from the global optimum is bounded by the deviation from the LB, as the LB is always less than or equal to the global optimum. We refer to the deviation from the LB as the optimality gap. The CGA provides solutions within a $1\%$ gap from the LB and outperforms the conventional algorithms. Figure \[impact\_U\] shows the impact of $U$. When $U$ increases from $400$ to $800$, the cost increases nearly linearly; however, the optimality gap of the algorithms decreases. The reason is that, with a larger number of users, more contents from the content set are requested. As the cache capacity is limited, many of the requested contents can only be obtained from the server by all algorithms, which leads to a lower optimality gap. Figure \[impact\_F\] shows the impact of $F$. Recall that the cache capacity is set to $50\%$ of the total size of the files. For CGA, when $F=50$, the cache capacity is extremely limited, and as $F$ is small, almost all contents are requested by users. Together, these imply that many requests need to be satisfied from the server, which leads to a high cost. When $F$ increases to $150$, the cost decreases, because as $F$ increases the cache capacity increases and CGA is able to utilize it efficiently. However, when $F$ further increases to $300$, the cost increases. The reason is that, even though the capacity increases with $F$, the diversity of requested contents becomes too large, and consequently some of them need to be satisfied from the server, which leads to a higher cost. Figure \[impact\_lambda\] shows the impact of $\lambda$. Recall that a larger $\lambda$ means a higher backhaul load but a smaller AoI. From the figure, it can be seen that when $\lambda$ grows, PBA and RBA push down the average AoI of contents to almost zero but incur a substantial load on the backhaul.
In contrast, the solutions of CGA achieve a much better balance between the backhaul load and the AoI of contents with respect to $\lambda$. Note that the backhaul load and average AoI are normalized to the interval $[0,100]$. ![Impact of $U$ on cost when $T=24$, $F=200$, $\lambda=0.5$, $\rho=0.5$, $\gamma=0.54$, $c_s=10$, $c_b=1$.[]{data-label="impact_U"}](impact_U.eps) ![Impact of $F$ on cost when $T=24, U=600$, $\lambda=0.5$, $\rho=0.5$, $\gamma=0.54$, $c_s=10$, $c_b=1$.[]{data-label="impact_F"}](impact_F.eps) ![Impact of $\lambda$ on backhaul load and average AoI when $T=24, U=600$, $F=200$, $\rho=0.5$, $c_s=10$, $c_b=1$.[]{data-label="impact_lambda"}](impact_lambda.eps) Conclusions {#sec:conclo} =========== This paper has investigated scheduling of content caching over time, where the offloading effect and the freshness of contents are jointly accounted for. The problem is formulated as an ILP, and its $\mathcal{NP}$-hardness is proved. Next, via a mathematical reformulation, a solution approach based on column generation and a rounding mechanism is developed. Via the joint cost function, it is possible to address the trade-off between the updating and AoI costs. The numerical results show that our algorithm is able to balance the two costs. Simulation results demonstrate that our solution approach provides near-optimal solutions.
--- abstract: 'Cross-domain collaborative filtering (CF) aims to alleviate data sparsity in single-domain CF by leveraging knowledge transferred from related domains. Many traditional methods focus on enriching compared neighborhood relations in CF directly to address the sparsity problem. In this paper, we propose *superhighway* construction, an alternative explicit relation-enrichment procedure to improve recommendations by enhancing cross-domain connectivity. Specifically, assuming partially overlapped items (users), superhighway bypasses multi-hop inter-domain paths between cross-domain users (items, respectively) with direct paths to enrich the cross-domain connectivity. The experiments conducted on a real-world cross-region music dataset and a cross-platform movie dataset show that the proposed superhighway construction significantly improves recommendation performance in both target and source domains.' author: - 'Kwei-Herng Lai' - 'Ting-Hsiang Wang' - 'Heng-Yu Chi' - Yian Chen - 'Ming-Feng Tsai' - 'Chuan-Ju Wang' bibliography: - 'paper.bib' title: 'Superhighway: Bypass Data Sparsity in Cross-Domain CF' --- =1 Introduction ============ Collaborative filtering (CF) in recommender systems is highly susceptible to data sparsity as the method analyzes observed user-item interactions solely. In modern e-commerce, as the number of items and users skyrockets and dwarfs the growth of user-item ratings in comparison, data sparsity takes an increasing toll on the performance of CF-based recommender systems. In response to such a vital issue, cross-domain CF is proposed to enhance recommendation quality in a given target domain by leveraging knowledge transferred from related source domains. As data sparsity in single-domain CF reflects the lack of observed rating data, intuition suggests alleviating the sparsity issue via explicitly populating relations in a cross-domain system.
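As a concrete illustration of the sparsity being discussed (toy numbers, not from the paper's datasets), the density of an observed user-item interaction matrix is simply the fraction of observed pairs:

```python
# Density = |observed ratings| / (|users| * |items|); sparsity = 1 - density.
def density(ratings, n_users, n_items):
    """`ratings` is a set of observed (user, item) pairs."""
    return len(ratings) / (n_users * n_items)

# 4 observed interactions among 100 users x 50 items:
ratings = {(0, 3), (2, 7), (51, 3), (99, 49)}
d = density(ratings, 100, 50)
assert d == 4 / 5000
assert 1 - d > 0.999   # the matrix is more than 99.9% sparse
```

Because the denominator grows with both catalog and user base while the numerator grows only with observed ratings, density typically shrinks as a platform scales, which is exactly the toll on CF described above.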
In the literature, many traditional methods have been proposed to directly enrich the compared neighborhood in CF, which, for example, attach additional intra-domain edges in target domains [@li2009can] or inter-domain edges in overlapped regions [@cremonesi2011cross]. However, such methods typically require additional assumptions; for example, the source domain has to be denser than the target domain [@li2009can]. In this paper, our superhighway construction establishes a new type of relation by means of inference based on existing relations, which allows the source and the target domains to mutually improve through the enhanced cross-domain connectivity. The construction of superhighways consists of two steps: 1) the identification of cross-domain user candidates suitable for superhighway construction, and 2) weight scaling for superhighways to optimize cross-domain space alignment. Figure \[fig:fig1\] illustrates the connectivity enhancement brought forth by superhighways (red bold lines), which provide additional leverage for combining the source and the target domains. E-commerce players often face the recommender system localization problem when they expand business to a foreign domain and face users who are very different from their original customer base – a classic scenario in cross-domain collaborative filtering (CF) where items in both domains overlap while data sparsity in the target domain hinders recommendation performance. In this paper, we consider the cross-domain data as a whole and remark on the *connectivity problem*, which can be decomposed into the *sparsity problem* in the target domain and the *filtering problem* in the source domain.
A typical recommender system localization problem arises when a company expands its business to a foreign domain and faces users who are very different from its original customer base. In this scenario, the company is assumed to have a comprehensive customer profile for the source domain and only a sparse customer profile for the target domain, while the inventory is largely shared; e.g., Spotify’s launch in Japan and Walmart’s acquisition of Jet.com in 2016 mark a cross-market and a cross-platform localization problem, respectively. The cross-domain RS problem is thus twofold: 1) sparsity problem: data sparsity in the target domain prevents effective collaborative filtering (CF); and 2) filtering problem: an abundance of data in the source domain makes it unclear which information to transfer to the target domain. ![Illustrative example for superhighways[]{data-label="fig:fig1"}](./Figure1.pdf){height="4.5cm"}

Methodology
===========

In collaborative filtering (CF), user-item interactions are commonly captured using a bi-adjacency matrix $M=(m_{ij})\in \mathbb{R}^{|U| \times |I|}$, where $U$ and $I$ denote the sets of users and items, respectively; $m_{ij}=1$ if there exists an observed association between user $i$ and item $j$, and otherwise, $m_{ij}=0$. The matrix $M$ can also be represented as a bipartite graph $G=(U,I,R)$, where $R=\{(i,j)\,|\,m_{ij}=1\}$. Given a cross-domain system with source domain $G_{\text{S}}=(U_{\text{S}},I_{\text{S}},R_{\text{S}})$ and target domain $G_{\text{T}}=(U_{\text{T}},I_{\text{T}},R_{\text{T}})$ such that the set of shared items $\tilde{I} = I_{\text{S}} \cap I_{\text{T}} \neq \varnothing$, a *highway* is defined as a path between user $u_i\in U_{\text{S}}$ and $u_j\in U_{\text{T}}$ through shared items in $\tilde{I}$.
To enrich the cross-domain connectivity, the *superhighway construction*, denoted as an operation $\mathcal{F}$, establishes direct relations between candidate users $u_i\in \hat{U}_{\text{S}}$ and $u_j\in \hat{U}_{\text{T}}$, where $\hat{U}_{\text{S}}\subseteq {U}_{\text{S}}$ and $\hat{U}_{\text{T}}\subseteq {U}_{\text{T}}$ are the sets of candidate users from the source and target domains, respectively. Consequently, the new graph $\mathcal{F}(G_{\text{S}},G_{\text{T}})$, which is more connected than the naively joined graph $G_{\text{S}}\cup G_{\text{T}}$, can then be used for CF. The candidate user sets $\hat{U}_{\text{S}}$ and $\hat{U}_{\text{T}}$ mentioned above are defined as $$\hat{U}_{d} = \left\{u\,\left|\, u \in U_{d}, \frac{|\mathcal{N}(u) \cap \tilde{I}|}{|\mathcal{N}(u)|}\right.\geq\alpha\right\}, \label{eq:candidate}$$ where $d \in \{\text{S}, \text{T}\}$, $\mathcal{N}(u)$ is the set of neighbors of $u$, and $\alpha$ is the predefined smoothness threshold. While many proposed cross-domain CF methods approach the data sparsity problem by directly enriching the compared neighborhood (e.g., the user-item relations in each domain), we instead enhance cross-domain connectivity. Specifically, we establish superhighways between users from each of the candidate sets defined in Eq. (\[eq:candidate\]), resulting in an additional $|\hat{U}_{\text{S}}|\times|\hat{U}_{\text{T}}|$ superhighways. The weight between each pair of users is defined as $$w = \beta \times |\mathcal{N}(u_{i}) \cap \mathcal{N}(u_{j})|,$$ where $u_{i} \in \hat{U}_{\text{S}}, u_{j} \in \hat{U}_{\text{T}}$, and $\beta$ is the scaling factor for the strength of domain alignment. Notice that superhighways are kept weighted to provide fine-grained alignment between the source domain and the target domain.
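As a concrete illustration, the candidate-selection and weighting rules above can be sketched in a few lines of Python. The data layout (neighbor sets as Python dictionaries) and function names are our own assumptions for demonstration, not the authors' implementation.

```python
# Sketch of superhighway construction: candidate identification via the
# smoothness threshold alpha, then weighted user-user edges scaled by beta.

def candidate_users(neighbors, shared_items, alpha):
    """Return users whose fraction of interactions on shared items >= alpha.

    neighbors: dict mapping user -> set of items the user interacted with
    shared_items: set of items present in both domains
    alpha: smoothness threshold in (0, 1]
    """
    return {
        u for u, items in neighbors.items()
        if items and len(items & shared_items) / len(items) >= alpha
    }

def build_superhighways(nbr_src, nbr_tgt, shared_items, alpha, beta):
    """Create weighted user-user edges between all cross-domain candidate pairs."""
    cand_src = candidate_users(nbr_src, shared_items, alpha)
    cand_tgt = candidate_users(nbr_tgt, shared_items, alpha)
    edges = {}
    for u in cand_src:
        for v in cand_tgt:
            w = beta * len(nbr_src[u] & nbr_tgt[v])  # scaled co-interaction count
            if w > 0:
                edges[(u, v)] = w
    return edges
```

Only pairs with at least one co-interacted item receive an edge, so in practice the number of materialized superhighways is usually well below the $|\hat{U}_{\text{S}}|\times|\hat{U}_{\text{T}}|$ upper bound.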
           KKBOX\_R1 (S)   KKBOX\_R2 (T)   Netflix (S)   Movielens (T)
  -------- --------------- --------------- ------------- ---------------
  User     184,607         72,042          480,189       69,878
  Item     529,457         87,889          17,779        10,677
  Rating   21,961,070      4,473,052       100,480,507   10,000,054

  : Data statistics[]{data-label="tb:data"}

[$^*$S and T denote the source and target domains, respectively.]{}

Experiments
===========

In order to validate the effectiveness of the proposed superhighway construction on cross-domain collaborative filtering (CF), we conducted query-based recommendation [@HPE] using items as queries. Our experiments employ two sets of real-world cross-domain datasets: 1) KKBOX\_R1–KKBOX\_R2, a cross-region music dataset (R denotes region); and 2) Movielens–Netflix, a cross-platform movie dataset. The statistics of the datasets are listed in Table \[tb:data\]. The cross-domain datasets are organized into three structures for training: 1) single: the original target domain, $G_{\text{T}}$; 2) highway: the naive concatenation of the source and target domains, $G_{\text{S}}\cup G_{\text{T}}$; and 3) superhighway: the naive concatenation augmented with superhighways to enhance cross-domain connectivity, $\mathcal{F}(G_{\text{S}}\cup G_{\text{T}})$. With these three structures, models (i.e., user and item embeddings) are trained using traditional matrix factorization and two network embedding algorithms: DeepWalk [@deepwalk] and HPE [@HPE]. In addition, transfer learning is conducted for the single structure via pre-training on the source domain and then fine-tuning on the target domain [@tang2015pte]. For each algorithm, we also find the best combination of $\alpha$ and $\beta$ in the intervals $(0.0,1.0]$ and $[0.5,1.5]$, respectively, in $0.1$ increments.
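The hyperparameter sweep described above is a plain grid search over the $\alpha$ and $\beta$ ranges. A minimal sketch, assuming a caller-supplied `evaluate(alpha, beta)` validation routine (hypothetical), could look like this:

```python
# Grid search over alpha in (0.0, 1.0] and beta in [0.5, 1.5], both on a
# 0.1 grid, as described in the text.  evaluate() is assumed to return a
# validation score such as MAP@10.
import itertools

def grid_search(evaluate):
    """Return the (alpha, beta) pair maximizing evaluate(alpha, beta)."""
    alphas = [round(0.1 * i, 1) for i in range(1, 11)]   # 0.1 .. 1.0
    betas = [round(0.1 * i, 1) for i in range(5, 16)]    # 0.5 .. 1.5
    best = max(itertools.product(alphas, betas),
               key=lambda ab: evaluate(*ab))
    return best
```

With 10 × 11 = 110 settings per algorithm, an exhaustive sweep remains cheap relative to model training.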
\[tb:het\_exp\]

                                  MF            DeepWalk      HPE
  ------- --------------------- ------------- ------------- -------------
  Music   Single (Pretrained)   30.4 (30.3)   19.6 (22.2)   14.2 (27.8)
          Highway               30.5          0.193         0.2
          Superhighway          32.4          22.6          31.1
  Movie   Single (Pretrained)   5.5 (5.3)     2.8 (1.7)     4.2 (6.3)
          Highway               1.4           2.0           0.014
          Superhighway          6.8           4.0           7.4

  : Recommendation performance (MAP@10)

Table \[tb:het\_exp\] compares the performance on the target domain for the above three structures. Note that most models perform worse when trained on the highway structure than on the single structure; this is likely because the naive combination of the two domains actually aggravates data sparsity in the system, demonstrating that the mere introduction of transferable knowledge is insufficient. In contrast, the superhighway structure reduces data sparsity and facilitates structural alignment between the source and the target domains by enhancing cross-domain connectivity, thereby creating a mutually enriching relationship. Hence, superhighway construction improves CF-based recommendation across all algorithms, making it widely applicable. In addition, it is worth noting that a superhighway, as a user-user relation, does not enrich item neighborhoods. Therefore, the improvement in matrix factorization suggests superhighways “bypass” the data sparsity problem in cross-domain CF, addressing the problem indirectly by enhancing the connectivity of the cross-domain system. Moreover, whereas traditional cross-domain improvements are often directional, i.e., source domains facilitate target domains, superhighway construction also improves recommendation performance on source domains; e.g., HPE improves from $2.1$ to $4.4$ for the music dataset and from $4.4$ to $4.6$ for the movie dataset.

Conclusions
===========

This paper proposes an explicit relation-enrichment procedure, *superhighway construction*, to bypass data sparsity in single-domain collaborative filtering by enhancing cross-domain connectivity using self-contained inference.
In our approach, superhighways are generated based on highways deemed suitable by interaction smoothness and then scaled for domain space alignment. According to the results from the cross-region music dataset and the cross-platform movie dataset, the constructed superhighways not only facilitate improvements across all tested models but also lead to improvements in the source domains, making the approach widely applicable.
--- abstract: 'We propose a formulation of the long-distance dynamics of gauge theories at finite temperature on a lattice in Minkowski space, including the effects of hard thermal loops on the dynamics of the long wavelength modes. Our approach is based on the dual classical limits of quantum fields as waves and particles in the infrared and ultraviolet limits, respectively. It exhibits manifest invariance under space-dependent lattice gauge transformations and conserves Gauss’ law.' address: | Department of Physics, Duke University,\ Durham, North Carolina 27708–0305 author: - 'C. R. Hu[^1] and B. Müller[^2]' title: Classical Lattice Gauge Fields with Hard Thermal Loops --- In the past few years great efforts have been made in perturbation theory to calculate transport properties of thermal gauge fields from the low-energy effective action including the contribution of hard thermal loops [@BP90; @TW90; @Nair; @BI; @Thoma]. Unfortunately, perturbative calculations fail in some important cases such as the damping of a traveling mode in the QCD plasma [@TG91], color conductivity [@SG93], and winding number diffusion [@OP93] due to the presence of singularities associated with the static magnetic gauge sector. This difficulty has motivated numerical simulations of the dynamics of classical gauge fields in Minkowski space [@AAPS91; @MT92; @BGMT94; @AK95], which were based on the Hamiltonian formulation of gauge theory on a spatial lattice [@KS75]. These studies have been criticized [@BMS95; @ASY96], because they do not properly account for the influence of hard thermal loops which modifies the dynamics of the long-distance modes of the gauge field. For scalar field theories there exists a straightforward remedy [@BMS95; @CM96]. One introduces a momentum cut-off $k_c$ and describes the influence of the high-momentum modes, which are essentially quantum mechanical, in perturbation theory. 
The dynamics of the long-distance modes then becomes dissipative and noisy [@CM96; @BLL96]. In the case of gauge theories, gauge invariance dictates the use of a lattice discretization eliminating modes with eigenvalues of the kinetic momentum operator $(-i\mbox{\boldmath $\nabla$}-g\mbox{\boldmath $A$})$ larger than $\pi/a$, where $a$ is the lattice spacing. However, the construction of the appropriate lattice action for the soft fields involving hard thermal loops encounters technical difficulties [@BMS95]. Recently, Huet and Son [@HS96] have argued that it is possible to construct an effective classical dynamics for quasi-static, long wavelength modes of the gauge field containing dissipative and stochastic terms. The resulting equations are valid in the limit $\omega \ll k\le g^2T$ where $\omega$ is the frequency and $k$ the wave vector. Even in this limit the noise term is strongly nonlocal and no efficient numerical treatment was proposed by the authors. In the general case, for $\omega,k\ll T$, the hard thermal loop action contains an infinite number of spatially and temporally nonlocal, dissipative and stochastic vertices, which are difficult to treat numerically. We propose to circumvent these difficulties by representing the hard thermal modes as classical colored particles propagating in the background of the soft gauge fields. Explicitly treating these modes in terms of classical particles, rather than integrating them out, leads to a set of [*local*]{} dynamical equations which can be efficiently solved by numerical integration after lattice discretization. Below we will construct a lattice version of these equations that is gauge invariant and conserves Gauss’ law. Because the dynamics of the hard modes is treated explicitly in this formulation, it is not necessary to assume that they are thermally populated.
One can, as well, consider dynamical situations far off equilibrium, where the density matrix of hard modes of the gauge field is characterized by some large scale. Such conditions are, for example, of interest in the context of equilibration processes occurring in high energy nuclear collisions. The representation of the high momentum components of the gauge field in terms of classical particles propagating in classical gauge fields can be justified by the eikonal limit of the Yang-Mills equations. Heinz [@Heinz85] and Kelly et al. [@MIT94] have shown that a thermal ensemble of particles obeying Wong’s equations [@Wong70] generates the correct hard thermal loop action [@BP90; @TW90] for soft gauge fields. The proof is based on linear response theory, combined with the explicit gauge covariance of the classical equations. In the following we briefly review the continuum formulation of the classical transport theory [@Heinz90] and then show how the equations can be implemented on a spatial lattice. The classical transport theory for nonabelian gauge fields starts from the Boltzmann equation $$p^{\mu} \left[ {\partial\over\partial x^{\mu}} - gQ^a F_{\mu\nu}^a {\partial\over\partial p_{\nu}} - gf_{abc} A_{\mu}^bQ^c {\partial\over\partial Q^a}\right] f(x,p,Q) = C[f] \label{e1}$$ together with the Yang-Mills equations $$D_{\mu}F^{\mu\nu} = g \int [dpdQ] p^{\nu}Q f(x,p,Q) \equiv j^{\nu}(x)~, \label{fields}$$ where $f(x,p,Q)$ denotes the one-particle phase space distribution of classical particles, $Q$ is the classical nonabelian charge carried by the hard gluons, and $D_{\mu}$ is the gauge covariant derivative. Note that $p$ represents the kinetic (not the canonical) momentum of the particles and therefore is gauge invariant. For our purposes, it is sufficient to consider the Vlasov limit, neglecting the collision term $C[f]$. 
The transport equation (\[e1\]) is then solved by an ensemble of test particles $$f(x,p,Q) = \frac{1}{N_0} \sum_i \delta \left(x-\xi_i\right) \delta \left( p-p_i\right) \delta \left( Q-Q_i\right)~, \label{e3}$$ where $N_0$ is the total number of particles. The space, momentum, and color coordinates of the particles obey Wong’s equations [@Wong70]: $$\begin{aligned} \dot{\mbox{\boldmath $\xi$}}_i &=& \mbox{\boldmath $v$}_i~, \label{position}\\ \dot{\mbox{\boldmath $p$}}_i &=& g Q^a_i \left[\mbox{\boldmath $E$}^a(\xi_i)+ \mbox{\boldmath $v$}_i\times\mbox{\boldmath $B$}^a(\xi_i)\right]~, \label{momentum}\\ {\dot{Q}}_i &=& -ig v^{\mu}_i \left[A_{\mu}(\xi_i),~Q_i\right]~. \label{charge}\end{aligned}$$ The index $i$ enumerates the particles and $\xi^{\mu}_i=(t, \mbox{\boldmath $\xi$}_i)$, $v^{\mu}=(1,\mbox{\boldmath $v$})$ with $\mbox{\boldmath $v$}_i=\mbox{\boldmath $p$}_i/|\mbox{\boldmath $p$}_i|$ being the velocity of the $i$th particle. Note that $v^{\mu}$, as defined here, is not a Lorentz four-vector. A dot denotes a time derivative. $A_{\mu}$ denotes the vector potential, and $\mbox{\boldmath $E$}$ and $\mbox{\boldmath $B$}$ are the color electric and magnetic fields, respectively. The right-hand side of (\[momentum\]) is a generalization of the electrodynamical Lorentz force. Wong’s equations can be written in manifestly covariant form, but here we have chosen a representation that is convenient for the numerical implementation in the context of a Hamiltonian lattice gauge theory. The Hamiltonian equations of motion for the gauge field and Gauss’ law are obtained from (\[fields\]) as the components $\nu=1,2,3$ and $\nu=0$, respectively. The charge current $j^{\mu}=(\rho,\mbox{\boldmath $j$})$ is defined as $$\begin{aligned} j^{\mu}(x) &=& g\sum_i Q_i v^{\mu}_i \delta^3(\mbox{\boldmath $x$}-\mbox{\boldmath $\xi$}_i)~.
\label{current}\end{aligned}$$ Since [$E$]{}, [$B$]{}, and $Q$ transform covariantly under gauge transformations, it is easy to see that (\[position\]) and (\[momentum\]) are gauge invariant. The gauge covariance property of (\[charge\]) is best demonstrated by first recognizing that (\[charge\]) has the following formal solution (omitting the index $i$): $$Q(t)={\cal U}(t,0)Q(0){\cal U}^{\dagger}(t,0)~, \label{rotate}$$ where ${\cal U}(t,0)$ is the parallel transport operator along the particle’s world line: $${\cal U}(t,0) = {\rm T} \exp\left[-ig\int_0^tdt^{\prime} \frac{d\xi^{\mu}}{dt^{\prime}}A_{\mu}(\xi)\right]~. \label{transportor}$$ ${\cal U}$ obeys the equation $$\frac{d{\xi}^{\mu}}{dt}D_{\mu}(\xi){\cal U}(t,0)=0~,\quad {\cal U}(0,0)=1~. \label{transport}$$ Under a gauge transformation $G$, ${\cal U}(t)$ transforms as ${\cal U}(t,0) \rightarrow G(x(t)){\cal U}(t,0)G^{\dagger} (x(0))$. Hence (\[rotate\]) is gauge covariant, and so is (\[charge\]). It is worth noting that both (\[charge\]) and (\[rotate\]) conserve the magnitude of the charges: $$\frac{d}{dt}\sum_a Q^a Q^a=0~.$$ As the charged particle travels in the background field, its charge rotates in color space with an angular velocity $\omega^a = g v^{\mu}A^a_{\mu}(\xi)$, consistent with the notion of the gauge field $A^{\mu}$ as the connection on a curved manifold on which the charge of a moving particle undergoes parallel transport. We now describe how the equations (\[fields\],\[position\],\[momentum\],\[charge\]) can be solved numerically. We discretize (\[fields\]) on a lattice with lattice spacing $a$. Then the soft gauge fields are restricted to modes with $k<k_{\rm c}=\pi/a$. To avoid double-counting of modes the hard particles will be restricted to kinetic momenta $p>k_c$. The precise connection between $k_c$ and $a$ can be established by the requirement that the plasmon mass takes on its correct value found in the continuum theory: $\omega_p = {\sqrt{N_c}gT}/3$. 
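To make the particle dynamics concrete, the following is a minimal continuum (non-lattice) sketch of one explicit time step of Wong's equations (\[position\])–(\[charge\]) for a single SU(2) particle. The background-field functions, coupling, and step size are illustrative assumptions; the color equation is written in the equivalent adjoint form $\dot{Q}^a = g\,\epsilon^{abc}(v^{\mu}A^b_{\mu})Q^c$, i.e., a rotation of the color vector about $\omega^a = g v^{\mu}A^a_{\mu}$.

```python
# Illustrative (non-lattice) sketch of Wong's equations for a massless SU(2)
# test particle in prescribed background fields.
import numpy as np

def step_wong(xi, p, Q, E, B, A, g, dt):
    """Advance position xi, kinetic momentum p, and color charge Q^a (a=1..3)
    by one explicit Euler step.

    E, B: functions xi -> (3,3) array indexed [color a, spatial i]
    A:    function xi -> (4,3) array indexed [Lorentz mu, color a]
    """
    v = p / np.linalg.norm(p)                       # massless particle, |v| = 1
    Ea, Ba = E(xi), B(xi)
    # Lorentz-like force: dp_i/dt = g Q^a (E^a_i + (v x B^a)_i)
    force = g * np.einsum('a,ai->i', Q, Ea + np.cross(v, Ba))
    # Color precession: dQ/dt = omega x Q with omega^a = g v^mu A^a_mu
    vmu = np.concatenate(([1.0], v))                # v^mu = (1, v)
    omega = g * np.einsum('m,ma->a', vmu, A(xi))
    dQ = np.cross(omega, Q)
    return xi + v * dt, p + force * dt, Q + dQ * dt
```

A simple explicit Euler step like this conserves $\sum_a Q^aQ^a$ only to first order in $\Delta t$; the modified leapfrog scheme described below is designed to respect the conservation laws exactly.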
In the proposed simulation, we are mainly interested in modeling physics at the energy scale of $g^2 T$ and the dominant scattering process between hard particles happens at the scale of $g T$. In order to get these right on the lattice, we must set up the simulation in such a way that the following relation between the length scales is satisfied: $$a < (g T)^{-1} < (g^2 T)^{-1} < N a ~,$$ where $N a$ is the lattice size. For discretization of the fields, we use the Kogut-Susskind (KS) model, which is summarized below. Improved lattice Hamiltonians are also possible [@GM96]. In the KS scheme, one chooses the temporal gauge $A_0=0$ and expresses the gauge field in terms of variables $U_{x,i}$ associated with the link $(x,i)$ directed from a site $x$ to its nearest neighbor $x+i$: $$U_{x,i} = \exp \left[ -iga A_i^a (x) \,\tau^a/2 \right] = U^{\dagger}_{x+i,-i}~. \label{link}$$ Under a gauge transformation $G$, the link variables transform as $$U_{x,i}\rightarrow G(x)U_{x,i}G^{\dagger}(x+i)~. \label{link_trans}$$ A plaquette variable is defined as the product of four link variables associated with the sides of an elementary plaquette $(x,ij)$: $$U_{x,ij} = U_{x,i} \, U_{x+i,j} \, U_{x+i+j,-i} \, U_{x+j,-j}~. \label{plaquette}$$ The links are directed and hence the plaquettes are oriented. The electric and the magnetic fields are defined as $$\begin{aligned} E_{x,i} &=& \frac{1}{iga} \dot{U}_{x,i}U^{\dagger}_{x,i}~, \label{E_field} \\ B_{x,k} &=& \frac{1}{4iga^2}\epsilon_{kij}(U^{\dagger}_{x,ij}-U_{x,ij})~, \label{B_field}\end{aligned}$$ where $E_{x,i}$ is associated with the link $(x,i)$ and $B_{x,k}$ with the plaquette $(x,ij)$. Both $E_{x,i}$ and $B_{x,k}$ transform covariantly under a gauge transformation $G(x)$. In the spirit of the Hamiltonian formalism, we choose $U_{x,i}$ and $E_{x,i}$ as the basic dynamic variables. 
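As an illustration of the link and plaquette variables of Eqs. (\[link\]) and (\[plaquette\]), the following Python sketch (our own, for SU(2) with Pauli matrices $\tau^a$) builds a link matrix from a color vector $A^a_i(x)$ and forms the oriented plaquette product, using $U_{x+i+j,-i}=U^{\dagger}_{x+j,i}$ and $U_{x+j,-j}=U^{\dagger}_{x,j}$:

```python
# Sketch of SU(2) link and plaquette variables in the Kogut-Susskind scheme.
import numpy as np

PAULI = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])   # tau^1, tau^2, tau^3

def su2_link(A, g=1.0, a=1.0):
    """U = exp(-i g a A^b tau^b / 2) for the color 3-vector A on one link.

    Uses exp(-i theta n.tau) = cos(theta) I - i sin(theta) n.tau
    with theta = g a |A| / 2 and n = A / |A|.
    """
    norm = np.linalg.norm(A)
    if norm == 0:
        return np.eye(2, dtype=complex)
    theta = 0.5 * g * a * norm
    ntau = np.einsum('b,bij->ij', A / norm, PAULI)
    return np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * ntau

def plaquette(U1, U2, U3, U4):
    """Ordered product around an elementary plaquette: U1 U2 U3^dagger U4^dagger,
    where U3 = U_{x+j,i} and U4 = U_{x,j} are traversed against their orientation."""
    return U1 @ U2 @ U3.conj().T @ U4.conj().T
```

In the Hamiltonian scheme, the link variables $U_{x,i}$ and the electric fields $E_{x,i}$ are the quantities advanced in time.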
They obey the following equations of motion: $$\begin{aligned} \dot{U}_{x,i} &=& igaE_{x,i}U_{x,i}~, \label{U-dot} \\ \dot{E}_{x,i} &=& \frac{1}{2iga^3}\sum_j(U^{\dagger}_{x,ij}-U_{x,ij}) - j_{x,i}~, \label{E-dot}\end{aligned}$$ and are subject to the constraint of Gauss’ law: $$\frac{1}{a} \sum_i \left[E_{x,i}-U^{\dagger}_{x-i,i}E_{x-i,i}U_{x-i,i}\right]-\rho_x=0~. \label{gauss}$$ In order to define the charge current four-vector $j_x^{\mu}$ on the lattice, we take each site $x$ as the center of a cubic cell $C_x$ of size $a^3$. The color charge of every particle in $C_x$ will be counted as contribution to the charge density $\rho_x$. Any particles entering or leaving the box during a given time step $\Delta t$ will contribute to the component of the color current normal to the face of the cube that is penetrated [@thanks]: $$\rho_x(t) = {g\over a^3} \sum_{k\in C_x} Q_k(t) \label{e21}~,$$ $$j_{x,i}(t+{\Delta t}/2) = {g\over a^2\Delta t} \sum_{k\in C_x}^{(i)} \left[Q_k(t)- Q_k(t+\Delta t)\right]~, \label{e22}$$ where the notation in (\[e22\]) indicates that only particles entering or leaving along the link connecting $x$ and $x+i$ are counted. The variables to be advanced in time according to their equations of motion are $U_{x,i}$, $E_{x,i}$, $\mbox{\boldmath $\xi$}_k$, $\mbox{\boldmath $p$}_k$, and $Q_k$. This can be achieved with a modified leapfrog algorithm that conserves energy and satisfies Gauss’ law. It is most convenient to choose an update scheme in which the link variables $U_{x,i}$ are defined at half integer time steps while $E_{x,i}$, $\mbox{\boldmath $\xi$}_k$, $\mbox{\boldmath $p$}_k$, and $Q_k$ are defined at integer time steps [@22]. We only briefly discuss the momentum update here. The momentum gets contributions from both the electric and the magnetic fields. 
In the same spirit as our definition of the current $j_{x,i}$ in (\[e22\]), only those particles which move from one cell to another during one time step obtain a momentum kick by the electric field: $$p_{k,i}(t+\Delta t)= p_{k,i}(t) + g Q^a_k(t)E^a_{x,i}(t) \left[\frac{a}{v_{k,i}} \left( 1-\frac{1}{2a^2} \frac{{\rm tr}\left[Q_k(t)Q_k(t)\right]} {{\rm tr}\left[Q_k(t)E_{x,i}(t)\right]} \right)\right]~, \label{e25}$$ if $k\in C_x$ at time $t$ and $k\in C_{x+i}$ at time $t+\Delta t$. The expression in the brackets can be regarded as the effective time needed for a particle to transit from one cell to its nearest neighbor. This choice balances the change in the energy of the particle and the change in the energy of the lattice fields incurred by the transition of that particle from one cell to another. The influence of the magnetic field on the particle momenta is updated continuously during each time step. Such a modified leapfrog algorithm is stable for ${\Delta t}/a \stackrel{<}{\sim} 0.1$. Finally, we point out that the definition (\[e22\]) ensures the conservation of the color current during each time step: $${1\over\Delta t} \left( \rho_x (t+\Delta t) - \rho_x(t)\right) + {1\over a} \sum_i \left( j_{x,i} - U_{x-i,i}^{\dagger} j_{x-i,i} U_{x-i,i} \right) = 0~, \label{e23}$$ where the quantities in the sum are defined at $t+{\Delta t}/2$. This, together with the update scheme described above, automatically ensures the exact conservation of the left-hand side of Gauss’ law (\[gauss\]) from one time step to the next. An important feature of our implementation is that all equations transform correctly under spatial gauge transformations. The dynamical evolution of (soft) gauge field and (hard) particles described by the equations of motion is essentially classical. However, we need to incorporate certain quantum features into the simulation, which are embodied in the initial conditions. There are two places where quantum physics enters. 
First, while the energy and the momentum of a charged particle are continuous classical variables, the nonabelian charge $Q^a$ contains a factor of $\hbar$ and has a fixed magnitude. In the classical limit the color charge of a gauge boson rotates in color space like a three-vector with fixed length just as the spin of an electron rotates in a magnetic field. In the quantum case, the charges $Q^a$ are $q$-numbers and obey the SU(2) Lie algebra: $$[Q^a,~Q^b]=i\hbar \epsilon^{abc}Q^c~.$$ In the semiclassical limit, we treat them as $c$-numbers but retain their magnitude as proportional to the Casimir operator in the adjoint representation [@Heinz85]: $$\sum_a Q^aQ^a = 2\hbar^2~,$$ which is conserved by (\[charge\]). For particles in the fundamental representation of SU(2), such as fermions, the right-hand side would be replaced by $3\hbar^2/4$. Second, while the dynamics is classical, we require that both the particles representing hard thermal gluons and the fields obey Bose statistics. To illustrate this point, we consider the initialization of the particle ensemble at a certain temperature $T$. According to Bose statistics, the particles should be initialized with the distribution $$f(\mbox{\boldmath $x$, $p$}, Q)= \frac{\theta(p-\hbar k_{\rm c})} {\displaystyle{e^{\beta{\epsilon(p)}}-1}} \delta (Q^2-2\hbar^2)~,$$ where $\epsilon(p)$ is the energy of a particle with momentum [$p$]{}. The initial particle number density is then given by $$n(\mbox{\boldmath $x$}) ={2N_c\over {h^3}}\int\frac{d^3\mbox{\boldmath $p$}} {\displaystyle{e^{\beta{\epsilon(p)}}-1}} \theta(p-\hbar k_{\rm c})~,$$ where the factor $2N_c$ counts the spin and color degeneracies. The linear response of the distribution $f(\mbox{\boldmath $x$, $p$}, Q)$ to the classical field then reproduces the HTL polarization function [@Heinz85; @MIT94]. [*Acknowledgements*]{}: We thank U. Heinz for useful comments on the manuscript and G. 
Moore for valuable discussions and for advice on formulating the lattice algorithm. BM thanks C. Greiner, S. Leupold, and M. Thoma for discussions. This work was supported in part by the U.S. Department of Energy (Grant No. DE-FG02-96ER40945). E. Braaten and R.D. Pisarski, [*Nucl. Phys. B*]{}[**337**]{}, 569 (1990). J.C. Taylor and S.M.H. Wong, [*Nucl. Phys. B*]{}[**346**]{}, 115 (1990). R. Efraty and V.P. Nair, [*Phys. Rev. Lett.*]{} [**68**]{}, 2891 (1992). J.P. Blaizot and E. Iancu, [*Phys. Rev. Lett.*]{} [**70**]{}, 3376 (1993). M.H. Thoma, in [*Quark-Gluon Plasma 2*]{}, edited by R.C. Hwa, (World Scientific, Singapore, 1995), p. 51. M.H. Thoma and M. Gyulassy, [*Nucl. Phys. B*]{}[**351**]{}, 491 (1991). A.V. Selikhov and M. Gyulassy, [*Phys. Lett. B*]{}[**316**]{}, 373 (1993). O. Philipsen, [*Phys. Lett. B*]{}[**304**]{}, 134 (1993). J. Ambjørn, T. Aksgaard, H. Porter, and M.E. Shaposhnikov, [*Nucl. Phys. B*]{}[**353**]{}, 346 (1991). B. Müller and A. Trayanov, [*Phys. Rev. Lett.*]{} [**68**]{}, 3387 (1992). T.S. Biró, C. Gong, B. Müller, and A. Trayanov, [*Int. J. Mod. Phys. C*]{}[**5**]{}, 113 (1994). J. Ambjørn and A. Krasnitz, [*Phys. Lett. B*]{}[**362**]{}, 97 (1995). J. Kogut and L. Susskind, [*Phys. Rev. D*]{}[**11**]{}, 395 (1975). D. Bödeker, L. McLerran, and A. Smilga, [*Phys. Rev. D*]{}[**52**]{}, 4675 (1995). P. Arnold, D.T. Son, and L. Yaffe, preprint UW/PT-96-19 (hep-ph/9609481). C. Greiner and B. Müller, [*Phys. Rev. D*]{}[**55**]{}, 1026 (1997). D. Boyanovsky, I.D. Lawrie, and D.S. Lee, [*Phys. Rev. D*]{}[**54**]{}, 4013 (1996). P. Huet and D.T. Son, preprint UW/PT-96-20 (hep-ph/9610259). U. Heinz, [*Ann. Phys. (N.Y.) [**161**]{}*]{}, 48 (1985); [**168**]{}, 148 (1986). P.F. Kelly, Q. Liu, C. Lucchesi, and C. Manuel, [*Phys. Rev. Lett.*]{} [**72**]{}, 3461 (1994); [*Phys. Rev. D*]{}[**50**]{}, 4209 (1994). S.K. Wong, [*Nuovo Cimento A*]{}[**65**]{}, 689 (1970). 
For a complete discussion of the classical transport theory and its relation to the quantum transport theory we refer the reader to the review by Elze and Heinz, [*Phys. Rep.*]{} [**183**]{}, 81 (1989). G.D. Moore, [*Nucl. Phys. [**B480**]{}*]{}, 689 (1996). We thank G. Moore for suggesting this representation of the color charge current on the lattice. Details of this algorithm will be discussed in a forthcoming manuscript where we will present first numerical results. [^1]: [email protected] [^2]: [email protected]
--- abstract: 'Pulsar timing arrays (PTAs) are presently the only means to search for the gravitational wave stochastic background from supermassive black hole binary populations, considered to be within the grasp of current or near future observations. However, the stringent upperlimit set by the Parkes PTA [@ShannonEtAl_PPTAgwbg:2013; @2015Sci...349.1522S] has been interpreted as excluding at $> 90\%$ confidence the current paradigm of binary assembly through galaxy mergers and hardening via stellar interactions, suggesting evolution is accelerated (by stars and/or gas) or stalled. Using Bayesian hierarchical modelling, we consider implications of this upperlimit for a comprehensive range of astrophysical scenarios, without invoking stalling or more exotic physical processes. We find they are fully consistent with the upperlimit, but (weak) bounds on population parameters can be inferred. Bayes factors between models vary between $\approx 1.03$ – $5.81$ and Kullback-Leibler divergences between characteristic amplitude prior and posterior lie between $0.37$ – $0.85$. Considering prior astrophysical information on galaxy merger rates, recent upwards revisions of the black hole-galaxy bulge mass relation [@2013ARAA..51..511K] are disfavoured at $1.6\sigma$ against lighter models (e.g. [@2016MNRAS.460.3119S]). We also show that, if no detection is achieved once sensitivity improves by an order of magnitude, the most optimistic scenario is disfavoured at $3.9\sigma$.' author: - 'Hannah Middleton\*' - Siyuan Chen - Walter Del Pozzo - Alberto Sesana - Alberto Vecchio bibliography: - 'ulPPTA.bib' title: 'No tension between assembly models of supermassive black hole binaries and pulsar observations.
' --- Implications of upper limits {#sec:implications} ============================ Dedicated timing campaigns of ultra-stable radio pulsars lasting over a decade and carried out with the best radio telescopes around the globe have targeted the isotropic gravitational-wave (GW) background in the frequency region $\sim 10^{-9} - 10^{-7}$ Hz. No detection has been reported so far. The most stringent constraint on an isotropic background radiation has been obtained through an 11 year-long timing of 4 radio-pulsars by the Parkes Pulsar Timing Array (PPTA). It yields an upper-limit on the GW characteristic amplitude of $h_\mathrm{1 yr} = 1.0\times 10^{-15}$ (at 95% confidence) at a frequency of 1 yr$^{-1}$ [@2015Sci...349.1522S]. Consistent results, although a factor $\approx 2$ less stringent, have been reported by the European PTA (EPTA; [@2015MNRAS.453.2576L]) and the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; [@2016ApJ...821...13A]). The three PTA collaborations join together to form the International PTA (IPTA; [@2016IPTA]). We use the PPTA limit to place bounds on the properties of the sub-parsec population of super-massive black hole binary (SMBHB) systems (in the mass range $\sim 10^7 - 10^{10}\, M_\odot$) in the universe and explore what constraints, if any, can be put on the salient physical processes that lead to the formation and evolution of these objects. We consider a comprehensive suite of astrophysical models that combine observational constraints on the SMBHB population with state of the art dynamical modelling of binary evolution. The SMBHB merger rate is anchored to observational estimates of the host galaxy merger rate by a set of SMBH-host relations [@Sesana:2013; @2016MNRAS.463L...6S and Section \[sec:models\]]. Rates obtained in this way are well captured by a five-parameter analytical function of mass and redshift, once model parameters are restricted to the appropriate prior range (see Section \[sec:models\]).
Individual binaries are assumed to hold a constant eccentricity so long as they evolve via three-body scattering and gradually circularize once GW emission takes over. Their dynamical evolution and emission properties are regulated by the density of the stellar environment (assumed to be a Hernquist profile [@1990ApJ...356..359H] with total mass determined by the SMBH mass – galaxy bulge mass relation) and by the eccentricity during the three-body scattering phase, which we take as a free parameter. For each set of model parameters, the characteristic GW strain $h_c(f)$ at the observed frequency $f$ is computed as described in [@2016arXiv161200455C], and summarised in Section \[sec:models\]. Our model encapsulates the significant uncertainties in the GW background due to the poorly constrained SMBHB merger rate and has the flexibility to produce a low frequency turnover due to either three-body scattering or high eccentricities. SMBHBs are assumed to merge with no significant delay after galaxies merge. As such, the models do not include the effect of stalling or delayed mergers [@2016ApJ...826...11S]. For definiteness, we focus on the impact of the SMBH-galaxy relation by considering: an optimistic model, which we label KH13, based on [@2013ARAA..51..511K], which provides a prediction of the GW background with median amplitude at $f = 1$ yr$^{-1}$ of $h_\mathrm{1yr} = 1.5\times 10^{-15}$; a conservative model (labelled G09, based on [@2009ApJ...698..198G]), with $h_\mathrm{1yr} = 7\times 10^{-16}$; an ultra-conservative model (labelled S16, based on [@2016MNRAS.460.3119S]), with $h_\mathrm{1yr} = 4\times 10^{-16}$; and finally a model that spans the whole range of predictions within our assumptions, which we label “All”. Note that this model contains as subsets KH13, G09 and S16, but it is not limited to them. Details on the models are provided in Section \[sec:models\]. 
For each model, we use a Bayesian hierarchical analysis to compute the model evidence (which indicates the preference given to a model by the data and allows for the direct comparison of models) and posterior density functions on the model parameters given the data, *i.e.* the posterior distribution of the GW background characteristic amplitude reported by [@2015Sci...349.1522S]. We find that the upper limit is now beginning to probe the most optimistic predictions, but all models are so far consistent with the data. Figure \[fig:SpectrumPosterior\] shows the GW characteristic strain, $h_c(f)$, of the aforementioned models. The dotted area shows the prior range of the GW amplitude under the model assumptions, and the orange solid line the 95% confidence PPTA upper limit on $h_c$. The (central) 68% and 90% posterior probability intervals on $h_c$ are shown by the shaded blue bands. The posterior density functions (PDFs) on the right-hand side of each plot give the prior (black-dashed line) and posterior (blue line) for $h_c$ at a reference frequency of $f\sim 1/5\mathrm{yr}^{-1}$. ![Comparison between prior and posterior density functions on the GW stochastic background characteristic amplitude in light of the PPTA upper-limit for each of the astrophysical models considered here. The central 90% region of the prior is indicated by the dotted band, and the posterior is shown by the progressively lighter blue shading indicating the central 68% and 90% regions, along with the median (solid blue line). Also shown are the PPTA bin-by-bin limit (orange solid line) and the corresponding integrated limit assuming $h_c(f)\propto f^{-2/3}$ (red star). The difference in the prior and posterior indicates how much has been learnt from the PPTA data.
The right-hand side one-dimensional posterior distribution shows the prior (black-dashed) and posterior (blue-solid) at a reference frequency of $f\sim 1/5\mathrm{yr}^{-1}$, with the central 90% regions marked (black and blue-dashed lines respectively).[]{data-label="fig:SpectrumPosterior"}](images/conf_h_SHANK.pdf "fig:"){width="49.00000%"} ![Same caption as the first panel; this panel shows model KH13.[]{data-label="fig:SpectrumPosterior"}](images/conf_h_KHO.pdf "fig:"){width="49.00000%"} ![Same caption as the first panel; this panel shows model G09.[]{data-label="fig:SpectrumPosterior"}](images/conf_h_G09.pdf "fig:"){width="49.00000%"} ![Same caption as the first panel; this panel shows model All.[]{data-label="fig:SpectrumPosterior"}](images/conf_h_ALL.pdf "fig:"){width="49.00000%"} Figure \[fig:BayesFactors\] shows the natural logarithm of the ratio of the model evidences, *i.e.* the Bayes factors, between all possible combinations of models and the Kullback-Leibler divergence between prior and posterior on the characteristic amplitude within a given model (with which we measure the degree of disagreement between the prior and posterior).
![Comparing the Bayes factors between model pairs (left hand, blue bars) and the Kullback-Leibler (K-L) divergences between the prior and posterior of the characteristic amplitude (right hand, orange bars). The small range of Bayes factors indicates that there is little to choose between these models, although KH13 is weakly disfavoured against the others. The K-L divergences also support this conclusion. Although all values are small, KH13 has the largest K-L divergence (greatest difference between prior and posterior) of the four models. []{data-label="fig:BayesFactors"}](images/logBF_KL_bar.pdf){width="\textwidth"} Qualitatively, the difference between the dotted region and the shaded bands in the main panels in Figure  \[fig:SpectrumPosterior\] indicates the constraining power of the Parkes PTA limit on astrophysical models – the greater the difference between the two regions, the more the data disfavour a particular model. We see that although some upper portion of the allowable prior region is removed from the 95% posterior probability interval (less so for S16), none of the models can be ruled out at any significant level. We also see that the regions covered by the confidence bands are curved (as opposed to a $h_c(f)\propto f^{-2/3}$ power-law), which might be taken to indicate the influence of the environment and eccentricity. It is important, however, to note that these are confidence bands and that, although eccentricity is allowed by the data, the power-law spectrum of circular binaries driven by radiation reaction alone can clearly be consistently placed within these bands (see also Figure \[fig:TrianglePlots\] for further details on the individual parameter posteriors including eccentricity). This can be quantified in terms of model evidences ${\cal Z}$, shown in Table \[tab:KLandEvidence\].
The normalization is chosen so that a putative model unaffected by the limit yields ${\cal Z} = 1$, and therefore the values can be interpreted as Bayes factors against such a model. None of the posterior probabilities of the models with respect to this putative one show any tension (see Table \[tab:KLandEvidence\]). For example, for models All and S16 we find $e^{\textrm{{$-1.23$}}} = 0.3$ and $e^{\textrm{{$-0.6$}}} = 0.55$, respectively. Similar conclusions can be drawn from the K-L divergences, which yield ${0.62}\,$ and ${0.37}$. As a comparison, these values correspond to the K-L divergence between two Gaussian distributions with the same variance and means approximately 1.1 (for All) and 0.8 (for S16) standard deviations apart[^1]. The least favoured model among those considered here is KH13, with Bayes factors in favour of the others ranging from $\approx \textrm{{$1.13$}}$ to $\approx \textrm{{$1.76$}}$. These are, however, values of order unity, and no decisive inference can be made from the data [@kassr95]. Comparisons between each parameter’s posterior and prior distribution functions are described in the supplementary material, and further support our conclusions. For KH13 – the model that produces the strongest GW background – we find that it has a probability of $e^{\textrm{{$-2.36$}}}=0.094$ with respect to a putative model that is unaffected by the limit. KH13 is therefore disfavoured at $\sim 1.6\sigma$. This conclusion is reflected in the value of the K-L divergence of [0.85]{}[^2]. We note that [@2015Sci...349.1522S] chose in their analysis only a sub-sample of the [@Sesana:2013] models, with properties similar to KH13. Our results for KH13 are therefore consistent with the 91%-to-97% ‘exclusion’ claimed by [@2015Sci...349.1522S].
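The Gaussian-separation equivalents quoted above follow from the closed form for the K-L divergence between two equal-variance normals, $D_{\rm KL} = (\mu_1-\mu_2)^2/(2\sigma^2)$, so the separation in units of $\sigma$ is $\sqrt{2 D_{\rm KL}}$. A minimal sketch (the function name is ours):

```python
import math

def gaussian_separation(kl):
    """Mean separation, in units of the common standard deviation, of two
    equal-variance Gaussians with K-L divergence `kl`:
    KL(N(m1,s) || N(m2,s)) = (m1 - m2)^2 / (2 s^2)  =>  |m1 - m2|/s = sqrt(2 KL)."""
    return math.sqrt(2.0 * kl)

print(round(gaussian_separation(0.62), 2))  # model All: ~1.11 sigma
print(round(gaussian_separation(0.37), 2))  # model S16: ~0.86 sigma
```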
Model   K-L divergence   ${\rm log}{\cal Z}$   K-L divergence   ${\rm log}{\cal Z}$   K-L divergence   ${\rm log}{\cal Z}$
------- ---------------- --------------------- ---------------- --------------------- ---------------- ---------------------
KH13    0.85             $-2.36$               2.25             $-5.68$               5.18             $-13.17$
G09     0.39             $-1.2$                1.11             $-3.35$               2.86             $-8.26$
S16     0.37             $-0.6$                0.69             $-1.62$               1.42             $-3.82$
ALL     0.62             $-1.23$               1.33             $-2.68$               2.50             $-5.74$

: K-L divergence and natural logarithm of the evidence ${\rm log}{\cal Z}$ for each of the four astrophysical models. Besides the PPTA upper limit at $h_\mathrm{1yr} = 10^{-15}$ (first pair of columns), we also show results for more stringent putative limits at the level of $3\times 10^{-16}$ and $1\times 10^{-16}$ (second and third pairs).[]{data-label="tab:KLandEvidence"}

Discussion
==========

[@2015Sci...349.1522S] argue that the Parkes PTA upper-limit excludes at high confidence standard models of SMBH assembly – *i.e.* those considered in this work – and that these models therefore need to be substantially revised to accommodate either accelerated mergers via strong interaction with the environment or inefficient SMBHB formation following galaxy mergers. The work presented here supports neither claim. In particular, the posterior parameter distributions (see Section \[sec:results\]) favour neither high eccentricities nor particularly high stellar densities, indicating that a low-frequency spectral turnover induced by SMBHB dynamics is not required to reconcile the PTA upper limit with existing models. This finding does not support a revision of the observing strategy in favour of higher-cadence observations aimed at improving the high-frequency sensitivity, as proposed by [@2015Sci...349.1522S].
Likewise, neither stalling nor delays between galaxy and SMBHB mergers, which, by construction, are not included in the models considered here, are needed to explain the lack of a detection of GWs at the present sensitivity level. On the other hand, PTA upper limits are already providing interesting information about the population of merging SMBHs. The fact that KH13 is disfavoured at $1.4\,\sigma$ with respect to S16 indicates that the population may have fewer high-mass binaries, mildly favouring SMBH-host galaxy relations with lower normalizations. Although not yet decisive, our findings highlight the potential of PTAs in informing the current debate on the SMBH-host galaxy relation. Recent discoveries of over-massive black holes in brightest cluster ellipticals [@2011Natur.480..215M; @2012MNRAS.424..224H] led to an upward revision of those relations [@2013ApJ...764..184M; @2013ARAA..51..511K]. However, several authors attribute the high normalization of the recent SMBH-host galaxy relations to selection biases [@2016MNRAS.460.3119S] or to the intrinsic difficulty of resolving the SMBH fingerprint in measurements based on stellar dynamics [see discussion in @2016arXiv160607484R].

Future prospects
================

An important question is what sensitivity level would be required, should a null result persist in PTA experiments, to genuinely put our current understanding of SMBHB assembly under stress, and in turn to justify a re-thinking of the PTA observing strategy to target possibly more promising regions of the very-low-frequency GW spectrum. To address this question, we simulate future sensitivity improvements by shifting the Parkes PTA sensitivity curve down to provide 95% upper limits on $h_\mathrm{1yr}$ of $3\times 10^{-16}$ and $1\times10^{-16}$. The results are summarised in Table \[tab:KLandEvidence\] (more details are provided in Section \[sec:results\]).
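The approximate $\sigma$-equivalents quoted for these model comparisons can be reproduced by mapping a probability $p = e^{\log{\cal Z}}$ (or a Bayes factor) onto a two-sided Gaussian tail. Below is a sketch of one such convention, inverting `math.erfc` by bisection so that no external library is needed; the exact convention used in the analysis may differ slightly:

```python
import math

def sigma_equivalent(log_z):
    """Approximate Gaussian 'sigma' tension for a probability p = exp(log_z)
    against a reference model, via the two-sided tail p = erfc(x / sqrt(2))."""
    p = math.exp(log_z)
    lo, hi = 0.0, 40.0
    for _ in range(200):  # bisection on erfc(y) = p (erfc is decreasing)
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * math.sqrt(2.0)

# e.g. KH13 vs S16 at the putative 1e-16 limit: log BF = -13.17 + 3.82
print(round(sigma_equivalent(-13.17 + 3.82), 1))  # ~3.9
```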
At $3\times 10^{-16}$, possibly within the sensitivity reach of PTAs in the next $\approx 5$ years, S16 will be significantly favoured against KH13, with a Bayes factor of $e^{4.06}$, and only marginally favoured over G09, with a Bayes factor of $e^{1.76}$. It will still be impossible to reject this model at any reasonable significance level with respect to, say, a model which predicts negligible GW background radiation at $\sim 10^{-9} - 10^{-8}$ Hz. However, SMBH-host galaxy relations with high normalizations will show a $\approx 2\,\sigma$ tension with more conservative models. At $1\times 10^{-16}$, within reach in the next decade with the advent of MeerKAT [@2009arXiv0910.2935B], FAST [@2011IJMPD..20..989N] and SKA [@2009IEEEP..97.1482D], KH13, G09 and All are disfavoured at $3.9\,\sigma$, $2.5\,\sigma$ and $1.2\,\sigma$, respectively, with respect to S16. K-L divergences in the range ${1.42}-{5.18}$ show that the data are truly informative. S16 is also disfavoured at $2.3\sigma$ with respect to a model unaffected by the data, possibly indicating the need for additional physical processes to be included in the models.

GW background models and hierarchical analysis {#sec:models}
==============================================

Here we expand the description of the relevant features of our models and analysis approach. Further details about the astrophysical models can be found in [@2016arXiv161200455C]; for the method see [@2017MNRAS.468..404C]. In Section \[sec:pop\] we present the parametric model describing the GW background generated by a population of eccentric binaries evolving via three-body scattering. In Section \[sec:prior\] we define the prior range of the model parameters, anchoring them to an empirical estimate of the SMBHB merger rate based on observations of close galaxy pairs. In Section \[sec:like\] we describe the details of the implementation of Bayesian hierarchical modelling in the context of this work.
Analytical description of the GW background {#sec:pop}
-------------------------------------------

The GW background from a cosmic population of SMBHBs is determined by the binary merger rate and by the dynamical properties of the systems during their inspiral. The comoving number density of SMBHBs per unit log chirp mass (${\mathcal{M}}= (M_1 M_2)^{3/5} / (M_1 + M_2)^{1/5}$) and unit redshift, $d^2 n/(d {\log_{10}}{\mathcal{M}}d z)$, defines the normalization of the GW spectrum. If all binaries evolved on circular orbits under the influence of GW backreaction alone, the spectral shape would also be fixed, $h_c(f)\propto f^{-2/3}$, and the GW background would be fully determined [@2001astro.ph..8028P]. To get to the point at which GW emission is efficient, however, SMBHBs need to exchange energy and angular momentum with their stellar and/or gaseous environment [@2013CQGra..30v4014S], a process that can lead to an increase in the binary eccentricity [e.g. @1996NewA....1...35Q; @2009MNRAS.393.1423C]. We assume SMBHBs evolve via three-body scattering against the dense stellar background up to a transition frequency $f_t$ at which GW emission takes over. According to recent studies [@2015MNRAS.454L..66S; @2015ApJ...810...49V], the hardening is dictated by the density of background stars $\rho_i$ at the influence radius $r_i$ of the binary. The bulge stellar density is assumed to follow a Hernquist density profile [@1990ApJ...356..359H] with total mass $M_*$ and scale radius $a$ determined by the SMBHB total mass $M=M_1+M_2$ via empirical relations from the literature [see full details in @2016arXiv161200455C]. Therefore, for each individual system, $\rho_i$ is determined solely by $M$. In the stellar hardening phase, the binary is assumed to maintain a constant eccentricity $e_t$ up to $f_t$, beyond which it circularizes under the effect of the now dominant GW backreaction.
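For reference, the Hernquist profile has the closed forms $\rho(r)=M_* a/[2\pi r(r+a)^3]$ and $M(<r)=M_* r^2/(r+a)^2$. A short numerical sanity check (the mass and scale radius below are illustrative values, not taken from the models):

```python
import math

def hernquist_density(r, m_star, a):
    """Hernquist (1990) stellar density: rho(r) = M a / (2 pi r (r + a)^3)."""
    return m_star * a / (2.0 * math.pi * r * (r + a) ** 3)

def hernquist_enclosed_mass(r, m_star, a):
    """Analytic enclosed mass M(<r) = M r^2 / (r + a)^2."""
    return m_star * r * r / (r + a) ** 2

# Sanity check: integrating 4 pi r^2 rho(r) over a wide radial range
# recovers the total stellar mass.
m_star, a = 1e11, 5.0  # illustrative values (Msun, kpc)
r_grid = [1e-4 * a * 1.05 ** i for i in range(400)]
integral = 0.0
for r1, r2 in zip(r_grid[:-1], r_grid[1:]):
    rm = 0.5 * (r1 + r2)
    integral += 4.0 * math.pi * rm ** 2 * hernquist_density(rm, m_star, a) * (r2 - r1)
```

At the scale radius the enclosed mass is exactly $M_*/4$, a convenient spot check.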
The GW spectrum emitted by an individual binary adiabatically inspiralling under these assumptions behaves as $h_c(f)\propto f$ for $f\ll f_t$ and settles to the standard $h_c(f)\propto f^{-2/3}$ for $f\gg f_t$. The spectrum has a turnover around $f_t$ and its exact location depends on the binary eccentricity $e_t$. The observed GW spectrum is therefore uniquely determined by the binary chirp mass ${\mathcal{M}}$, redshift $z$, transition frequency $f_t$ and eccentricity at transition $e_t$. The GW spectrum from the overall population can then be computed by integrating the spectrum of each individual system over the co-moving number density of merging SMBHBs: $$h_c^2(f) = \int d z \int d {\log_{10}}{\mathcal{M}}\frac{d^2 n}{d {\log_{10}}{\mathcal{M}}d z} h^2_{c,\mathrm{fit}}\left( f \frac{f_{p,0}}{f_{p,t}} \right) \left( \frac{f_{p,t}}{f_{p,0}} \right)^{-4/3} \left( \frac{{\mathcal{M}}}{{\mathcal{M}}_0}\right)^{5/3} \left( \frac{1+z}{1+z_0}\right)^{-1/3}. \label{eqn:hoff}$$ $h_{c,\mathrm{fit}}$ is an analytic fit to the GW spectrum of a reference binary with chirp mass ${\mathcal{M}}_0$ at redshift $z_0$ (i.e. assuming $d^2 n/(d {\log_{10}}{\mathcal{M}}d z)=\delta({\mathcal{M}}-{\mathcal{M}}_0)\delta(z-z_0)$), characterized by an eccentricity $e_0$ at a reference frequency $f_0$. For these reference values, the peak frequency of the spectrum $f_{p,0}$ is computed. The contribution of a SMBHB with generic chirp mass, emission redshift, transition frequency $f_t$ and initial eccentricity $e_t$ is then computed by evaluating the spectrum at the rescaled frequency $f(f_{p,0}/f_{p,t})$ and by shifting it in frequency, chirp mass and redshift as indicated in equation (\[eqn:hoff\]). [@2016arXiv161200455C] demonstrated that this simple self-similar computation of the GW spectrum is sufficient to describe the expected GW signal from a population of eccentric SMBHBs driven by three-body scattering at $f>1$ nHz, relevant to PTA measurements.
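The self-similar rescaling of equation (\[eqn:hoff\]) can be illustrated with a toy reference spectrum that rises as $f$ below the turnover and falls as $f^{-2/3}$ above it; the fit function and population below are hypothetical stand-ins for $h_{c,\mathrm{fit}}$ and the merger rate, not the ones used in the paper:

```python
import math

def h_fit(f, f_peak, amp=1.0):
    # Toy reference spectrum: h ~ f below the turnover, ~ f^(-2/3) above it.
    x = f / f_peak
    return amp * x / (1.0 + x ** (5.0 / 3.0))

def h_c(f, population, f_p0=1.0, m0=1e8, z0=0.1):
    """Discretised version of the superposition: each (weight, chirp mass,
    redshift, peak frequency) entry rescales the reference spectrum in
    frequency, chirp mass and redshift as in the equation above."""
    total = 0.0
    for w, mchirp, z, f_pt in population:
        shifted = h_fit(f * f_p0 / f_pt, f_p0)
        total += (w * shifted ** 2 * (f_pt / f_p0) ** (-4.0 / 3.0)
                  * (mchirp / m0) ** (5.0 / 3.0)
                  * ((1.0 + z) / (1.0 + z0)) ** (-1.0 / 3.0))
    return math.sqrt(total)

pop = [(1.0, 1e8, 0.1, 0.5), (0.5, 3e8, 0.5, 2.0)]  # hypothetical population
```

Well above every turnover the summed spectrum recovers the $f^{-2/3}$ slope; well below, it rises as $f$.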
As stated above, the shape of the spectrum depends on $\rho_i$ and $e_t$. $\rho_i$ regulates the location of $f_t$: the denser the environment, the higher the transition frequency. SMBHBs evolving in extremely dense environments will therefore show a turnover in the GW spectrum at higher frequency. $e_t$ has a twofold effect. On the one hand, eccentric binaries emit GWs more efficiently at a given orbital frequency, thus decoupling at lower $f_t$ with respect to circular ones. On the other hand, eccentricity redistributes the emitted GW power towards higher frequencies, thus pushing the spectral turnover to higher frequencies. In our default model, $\rho_i$ is fixed by the SMBHB total mass $M$ and we make the simplifying assumption that all systems have the same $e_t$. We also considered an extended model where $\rho_i$ is multiplied by a free parameter $\eta$. This corresponds to a simple rescaling of the central stellar density, relaxing the strict $M-\rho_i$ relation imposed by our default model. We stress that including this parameter in our main analysis yielded quantitatively identical results. We use a generic simple model for the cosmic merger rate density of SMBHBs based on an overall amplitude and two power-law distributions with exponential cut-offs, $$\frac{d^2 n}{d {\log_{10}}{\mathcal{M}}d z} = {{\dot n}_0}\left(\frac{{\mathcal{M}}}{10^7{\mathrm{M}_{\odot}}}\right)^{-\alpha}e^{-{\mathcal{M}}/{\mathcal{M}_*}} (1+z)^{\beta}e^{-z/z_*} \frac{d t_r}{d z} \label{eqn:model}$$ where $dt_r / dz$ is the standard relationship between time and redshift for a flat $\Lambda$CDM Universe with Hubble constant $H_0 = 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$.
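Equation (\[eqn:model\]) is straightforward to evaluate numerically; the sketch below assumes $\Omega_m = 0.3$ for $dt_r/dz$ (a value not specified above) and uses illustrative parameter values:

```python
import math

def dtr_dz(z, h0=70.0, omega_m=0.3):
    """dt_r/dz in Gyr for a flat LCDM cosmology (H0 in km/s/Mpc).
    Omega_m = 0.3 is an assumed value, not quoted in the text."""
    h0_per_gyr = h0 * 1.0227e-3  # km/s/Mpc -> 1/Gyr
    e_z = math.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))
    return 1.0 / (h0_per_gyr * (1.0 + z) * e_z)

def d2n_dlogm_dz(mchirp, z, n0, alpha, m_star, beta, z_star):
    """Parametric comoving merger rate density per unit log10 chirp mass and
    unit redshift (mchirp, m_star in Msun; n0 per Mpc^3 per Gyr)."""
    return (n0 * (mchirp / 1e7) ** (-alpha) * math.exp(-mchirp / m_star)
            * (1.0 + z) ** beta * math.exp(-z / z_star) * dtr_dz(z))
```

At $z=0$, `dtr_dz` returns the Hubble time ($\approx 14$ Gyr for $H_0 = 70$ km/s/Mpc), and the exponential cut-off strongly suppresses the rate above ${\mathcal{M}_*}$.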
The five free parameters are: ${{\dot n}_0}$, representing the co-moving number of mergers per Mpc$^3$ per Gyr; $\alpha$ and ${\mathcal{M}_*}$, controlling the slope and cut-off of the chirp mass distribution respectively; and $\beta$ and $z_*$, regulating the equivalent properties of the redshift distribution. Equation (\[eqn:model\]) is also used to compute the number of emitting systems per frequency resolution bin at $f>10$ nHz. The small-number statistics of the most massive binaries determine a steepening of the GW spectrum at high frequencies; full details of the computation are found in [@SesanaVecchioColacino:2008] and [@2016arXiv161200455C]. The GW spectrum is therefore uniquely computed by a set of six (seven) parameters $\theta = \{{{\dot n}_0}, \beta, z_*, \alpha, {\mathcal{M}_*}, e_t (,\eta)\}$.

Anchoring the model prior to astrophysical models {#sec:prior}
-------------------------------------------------

Although no sub-parsec SMBHBs emitting in the PTA frequency range have been unambiguously identified to date, their cosmic merger rate can be connected to the merger rate of their host galaxies. The procedure has been extensively described in [@Sesana:2013], to which we refer the reader for full details. The galaxy merger rate can be estimated directly from observations via: $$\frac{d^3n_G}{dzdM_Gdq}=\frac{\phi(M_G,z)}{M_G\ln{10}}\frac{{F}(z,M_G,q)}{\tau(z,M_G,q)}\frac{dt_r}{dz}. \label{galmrate}$$ Here, $\phi(M_G,z)=(dn/d{\rm log}M_G)_z$ is the galaxy mass function measured at redshift $z$; ${F}(M_G,q,z)=(df_p/dq)_{M_G,z}$ denotes, for every $M_G$ and $z$, the fraction of galaxies paired with a companion galaxy with mass ratio between $q$ and $q+\delta{q}$; and $\tau(z,M_G,q)$ is the merger timescale of the pair as a function of the relevant parameters.
We construct a library of galaxy merger rates by combining four measurements of the galaxy mass function $\phi(M_G,z)$, four estimates of the close pair fraction ${F}(M_G,q,z)$ [@bundy09; @deravel09; @lopez12; @xu12] and two estimates of the merger timescale $\tau(z,M_G,q)$ [@kit08; @lotz10]. Each merging galaxy pair is assigned SMBHs with masses drawn from 14 different SMBH-galaxy relations found in the literature (see table \[tabrel\]). We write them in the form $${\rm log}_{10} M=a+b{\rm log}_{10}X, \label{scalingrel}$$ where $X=\{\sigma/200$km s$^{-1}$, $L_i/10^{11}L_{\sun}$ or $M_*/10^{11}{M_\odot}\}$, with $\sigma$ the stellar velocity dispersion of the galaxy bulge, $L_i$ its mid-infrared luminosity, and $M_*$ its bulge stellar mass. Each relation is characterized by an intrinsic scatter $\epsilon$. $a, b, \epsilon$ are listed in table \[tabrel\]. SMBHBs are then assumed to merge in coincidence with their host galaxy (i.e. no stalling or extra delays).

Paper                    $X$        $a$      $b$      $\epsilon$
------------------------ ---------- -------- -------- ------------
[@haring04]              $M_*$      8.2      1.12     0.30
[@sani11]                $M_*$      8.2      0.79     0.37
[@beifiori12]            $M_*$      7.84     0.91     0.46
[@2013ApJ...764..184M]   $M_*$      8.46     1.05     0.34
[@graham12]              $M_*$      8.56     1.01     0.44
                                    (8.69)   (1.98)   (0.57)
[@2013ARAA..51..511K]    $M_*$      8.69     1.17     0.29
[@sani11]                $L_i$      8.19     0.93     0.38
[@2009ApJ...698..198G]   $\sigma$   8.23     3.96     0.31
[@graham11]              $\sigma$   8.13     5.13     0.32
[@beifiori12]            $\sigma$   7.99     4.42     0.33
[@2013ApJ...764..184M]   $\sigma$   8.33     5.57     0.40
[@grahamscott12]         $\sigma$   8.28     6.01     0.41
[@2013ARAA..51..511K]    $\sigma$   8.5      4.42     0.28
[@2016MNRAS.460.3119S]   $\sigma$   7.8      4.3      0.3

: List of parameters $a$, $b$ and $\epsilon$. See text for details.
[@graham12] proposes a double power law with a break at $\bar{M}_*=7\times10^{10}{M_\odot}$; values in parentheses refer to $M_*<\bar{M}_*$.[]{data-label="tabrel"} All possible combinations of galaxy merger rates as per equation (\[galmrate\]) and SMBH masses assigned via equation (\[scalingrel\]) result in an allowed SMBHB merger rate density as a function of chirp mass and redshift. We then marginalize over mass and redshift separately to obtain the functions $dn/dz$ and $dn/d{\mathcal{M}}$. We are particularly interested here in testing different SMBH-host galaxy relations; we therefore construct the functions $dn/dz$ and $dn/d{\mathcal{M}}$ under four different assumptions:

1. Model KH13 is constructed by considering both the M$-\sigma$ and M$-M_*$ relations from [@2013ARAA..51..511K];
2. Model G09 is based on the M$-\sigma$ relation of [@2009ApJ...698..198G];
3. Model S16 employs the M$-\sigma$ relation from [@2016MNRAS.460.3119S];
4. Model All is the combination of all 14 SMBH mass-host galaxy relations listed in table \[tabrel\].

For each of these four models, the allowed regions of $dn/dz$ and $dn/d{\mathcal{M}}$ are shown in figure \[fig:astroPriors\]. The figure highlights the large uncertainty in the determination of the SMBHB merger rate and reveals the trend of the chosen models; S16 and KH13 represent the lower and upper bound to the rate, whereas G09 sits in the middle and is representative of the median value of model ‘All’. ![Left panel: mass density distribution $dn/d{\cal M}$ of the four astrophysical priors selected in this study (see text for full description). Right panel: redshift evolution of the SMBHB mass density for the same four models.
Note that the coloured regions represent the 99% interval allowed by each model; this is why individual models can extend beyond the region associated with model All (which includes KH13, G09 and S16 as subsets).[]{data-label="fig:astroPriors"}](images/supplementaryImages/model_prior_M.pdf "fig:"){width="49.00000%"} ![Right panel; same caption as the left panel.[]{data-label="fig:astroPriors"}](images/supplementaryImages/model_prior_Z.pdf "fig:"){width="49.00000%"} ![Prior distributions of the astrophysical model parameters. Panels show: top row from left to right, ${{\dot n}_0}$, $\beta$, $z_*$; bottom row from left to right $\alpha$, ${\mathcal{M}_*}$, $e_t$. The lines represent the priors of the four astrophysical models: KH13 (orange, solid), S16 (blue, dashed), G09 (green, dotted) and ALL (black, dash-dot).[]{data-label="fig:comparingPriors"}](images/supplementaryImages/comparing_model_priors){width="\textwidth"} The numerical SMBHB mass functions obtained in this way then have to be described analytically by expression (\[eqn:model\]). Our strategy is therefore to make a large series of random draws of the five parameters defining equation (\[eqn:model\]), and to retain only those sets that produce $dn/dz$ and $dn/d{\mathcal{M}}$ within the boundaries set by the empirical models shown in figure \[fig:astroPriors\]. The prior distributions obtained in this way are shown in figure \[fig:comparingPriors\] for the four models. The redshift parameters ($\beta$ and $z_*$) have very similar priors for each of the models.
The main differences are in the merger rate density ${{\dot n}_0}$ and in the mass distribution parameters ($\alpha$ and ${\mathcal{M}_*}$). KH13 and All prefer higher values of ${{\dot n}_0}$. On the other hand, S16 allows for slightly higher values of $\alpha$ (in comparison to KH13 and G09), corresponding to a more negative slope of the mass distribution, with a preference for a larger number of low-mass binaries. We then have to make sure that the distribution of characteristic amplitudes $h_c$ obtained by using the cosmic SMBHB merger rate density of equation (\[eqn:model\]) with the prior parameters chosen as above is consistent with the $h_c$ distributions of the original models. To check this, we computed in both cases the GW background under the assumption of circular GW-driven systems (i.e. $h_c \propto f^{-2/3}$) and compared the distributions of $h_\mathrm{1yr}$, i.e. the strain amplitudes at $f=1$yr$^{-1}$. The $h_\mathrm{1yr}$ distributions obtained with the two techniques were found to follow each other quite closely, with a difference in median values and 90% confidence regions smaller than 0.1 dex. We conclude that our analytical models provide an adequate description of the observationally inferred SMBHB merger rate, and can therefore be used to constrain the properties of the cosmic SMBHB population. In particular, model KH13 provides an optimistic prediction of the GW background with median amplitude at $f = 1$ yr$^{-1}$ of $h_\mathrm{1yr} \approx 1.5\times 10^{-15}$; model G09 results in a more conservative prediction, $h_\mathrm{1yr} \approx 7\times 10^{-16}$; model S16 results in an ultra-conservative estimate with median $h_\mathrm{1yr} \approx 4\times 10^{-16}$; and finally the characteristic amplitude predicted by the compilation of all models (All) encompasses almost two orders of magnitude, with median value $h_\mathrm{1yr} \approx 8\times 10^{-16}$.
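Assigning SMBH masses via equation (\[scalingrel\]) with its intrinsic scatter $\epsilon$ amounts to a simple Gaussian draw in $\log_{10} M$; a minimal sketch, using for illustration the KH13 M$-\sigma$ coefficients from table \[tabrel\]:

```python
import math
import random

def draw_log10_mbh(x, a, b, eps, rng=random.Random(42)):
    """One draw of log10(M_BH) from log10 M = a + b log10 X with Gaussian
    intrinsic scatter eps (the scaling relation above)."""
    return a + b * math.log10(x) + rng.gauss(0.0, eps)

# KH13 M-sigma coefficients (a=8.5, b=4.42, eps=0.28) for a galaxy with
# sigma = 200 km/s, i.e. X = 1:
samples = [draw_log10_mbh(1.0, 8.5, 4.42, 0.28) for _ in range(20000)]
mean_logm = sum(samples) / len(samples)
```

The sample mean of $\log_{10} M$ recovers the relation's central value $a + b\log_{10}X$, here $8.5$ (*i.e.* $M \approx 3\times10^8\,M_\odot$).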
As for the parameters defining the binary dynamics, we assume that all binaries have the same eccentricity, for which we pick a flat prior in the range $10^{-6}<e_t<0.999$. In the extended model, featuring a rescaling of the density $\rho_i$ regulating the binary hardening in the stellar phase, we assume a log-flat prior for the multiplicative factor $\eta$ in the range $0.01<\eta<100$.

Likelihood function and hierarchical modelling {#sec:like}
----------------------------------------------

By making use of Bayes’ theorem, the posterior probability distribution $p(\theta|d,M)$ of the model parameters $\theta$ inferred from the data $d$ given a model $M$ is $$p(\theta|d,M) = \frac{p(d|\theta,M)p(\theta|M)}{{\cal Z}_M}, \label{eqn:BayesTheorem}$$ where $p(\theta|M)$ is the prior knowledge of the model parameters, $p(d|\theta,M)$ is the likelihood of the data $d$ given the parameters $\theta$ and ${\cal Z}_M$ is the evidence of model $M$, computed as $${\cal Z}_M= \int p(d|\theta,M)p(\theta|M)d\theta. \label{eqn:evidence}$$ The evidence is the integral of the likelihood function over the multi-dimensional space defined by the model parameters $\theta$, weighted by the multivariate prior probability distribution of the parameters. When comparing two competing models A and B, the odds ratio is computed as $${\cal O}_{A,B}=\frac{{\cal Z}_A}{{\cal Z}_B}\frac{P_A}{P_B}={\cal B}_{A,B}\frac{P_A}{P_B},$$ where ${\cal B}_{A,B}={\cal Z}_A/{\cal Z}_B$ is the Bayes factor and $P_M$ is the prior probability assigned to model $M$. When comparing the four models KH13, G09, S16 and All, we assign equal prior probability to each model; in each model pair comparison, the odds ratio therefore reduces to the Bayes factor. In Section \[sec:prior\] we defined the prior distribution $p(\theta|M)$; to proceed with model comparison and parameter estimation we need to define the likelihood function $p(d|\theta,M)$.
The likelihood function $p(d|\theta, M)$ is defined following [@2017MNRAS.468..404C]. We take the posterior samples from the Parkes PTA analysis (courtesy of Shannon and collaborators) used to place the 95% upper limit at $h_\mathrm{1yr} = 1 \times 10^{-15}$ when a single power-law background $h_c\propto f^{-2/3}$ is assumed. For our analysis, however, we need to convert this upper limit at $f=1\,\mathrm{yr}^{-1}$ into a frequency-dependent upper limit on the spectrum, as shown by the orange curve in figure \[fig:SpectrumPosterior\]. The likelihood is constructed by multiplying all bins together; the resulting overall limit from these bin-by-bin upper limits must therefore be consistent with $h_\mathrm{1yr} = 1 \times 10^{-15}$. The $h_{\mathrm{1yr}}$ posterior distribution is well fitted by a Fermi function. To estimate a frequency-dependent upper limit, we use Fermi-function likelihoods at each frequency bin, which are then shifted and re-normalised in order to reproduce the correct overall upper limit. In our analysis we consider the contributions of only the first 4 frequency bins of size $1/11\,\mathrm{yr}^{-1}$, as the higher-frequency portion of the spectrum provides no additional constraint. We have verified that including additional bins leaves the results of the analysis unchanged. Ideally, we would take the bin-by-bin upper limits directly from the pulsar timing analysis to account for the true shape of the posterior; however, the method we use here provides a consistent estimate for our analysis. Having defined the population of merging binaries, the astrophysical prior and the likelihood based on the PPTA upper limit, we use a nested sampling algorithm [@Skilling2004a; @cpnest] to construct posterior distributions for each of the 6 model parameters. For the results shown here, we use 2000 live points and run each analysis 5 times, giving an average of around 18000 posterior samples.
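The bin-by-bin upper-limit likelihood described above can be sketched as follows. The Fermi-function form is stated in the text, but the parametrisation of each bin by a threshold `h95` and a width `width` is an assumption for illustration, not the fitted PPTA values.

```python
import math

# Sketch of a per-bin Fermi-function upper-limit likelihood. The joint
# likelihood over the first few frequency bins is the product of the
# per-bin factors (a sum in log space). h95 and width per bin are
# illustrative placeholders, not the actual PPTA posterior fit.

def fermi_loglike(h_model, h95, width):
    """Log Fermi-function likelihood: roughly flat for h << h95,
    falling off smoothly for h >> h95."""
    return -math.log1p(math.exp((h_model - h95) / width))

def total_loglike(h_model_bins, h95_bins, width_bins):
    # Bins are treated as independent, so logs add.
    return sum(fermi_loglike(h, h95, w)
               for h, h95, w in zip(h_model_bins, h95_bins, width_bins))
```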
Detailed results {#sec:results}
================

S16 KH13\
![Triangle plots for each astrophysical model showing the prior and posterior distribution for each parameter: top left S16, top right KH13, bottom left G09, bottom right All. The diagonal plots show the one-dimensional marginalised distributions for each of the 6 parameters, with the thin black line indicating the posterior and the thick green line the prior. The central plots show the two-dimensional posterior distributions for each of the parameter combinations, along with the green contour showing the extent of the prior.[]{data-label="fig:TrianglePlots"}](images/supplementaryImages/triangleSHANK "fig:"){width="49.00000%"} ![image](images/supplementaryImages/triangleKHO){width="49.00000%"}

G09 ALL\
![image](images/supplementaryImages/triangleG09){width="49.00000%"} ![image](images/supplementaryImages/triangleALL){width="49.00000%"}

The nested sampling algorithm returns the full posterior of the N-dimensional parameter space and the value of the model evidence. The posteriors are shown in the triangle plots of figure \[fig:TrianglePlots\] for our main analysis of the PPTA upper limit using the default six-parameter model ($\theta = \{{{\dot n}_0}, \beta, z_*, \alpha, {\mathcal{M}_*}, e_t\}$). The plots on the diagonal of the triangle show the one-dimensional marginalised distributions for each parameter, whilst the two-dimensional histograms show the posterior distributions for each parameter pair. It is immediately clear that current PTA observations impose little constraint on the shape of the SMBHB mass function. For the most conservative model (S16), the prior (thick green lines) and posterior (black) are virtually identical (top left panel). Even for the KH13 model the two distributions match closely, with appreciable differences only for $\beta$ and $\alpha$.
This is because the PPTA limit excludes the highest values of $h_c$ predicted by the model (cf. Figure \[fig:SpectrumPosterior\]), which results in a preference for large $\alpha$ and negative $\beta$. In fact, for the mass function adopted in equation (\[eqn:model\]), a large $\alpha$ results in a SMBHB population dominated by low-mass systems, which tends to suppress the signal. Likewise, a small (or negative) $\beta$ implies a sparser population of SMBHBs at higher redshift, again reducing the GW background level. In any case, little new information on the cosmic SMBHB population is acquired with current PTA measurements, as demonstrated by the small K-L divergences between prior and posterior of the individual model parameters shown in table \[tab:KL\].

------ ------------------ --------- --------- ---------- -------------------- ---------
       ${{\dot n}_0}$     $\beta$   $z_*$     $\alpha$   ${\mathcal{M}_*}$    $e_t$
KH13   $0.06$             $0.05$    $<0.01$   $0.24$     $0.03$               $<0.01$
G09    $<0.01$            $0.01$    $<0.01$   $0.04$     $0.01$               $<0.01$
S16    $<0.01$            $<0.01$   $<0.01$   $0.01$     $<0.01$              $<0.01$
All    $0.02$             $0.02$    $<0.01$   $0.08$     $0.02$               $<0.01$
------ ------------------ --------- --------- ---------- -------------------- ---------

: K-L divergences of the marginalized distributions of the individual parameters (${{\dot n}_0}$, $\beta$, $z_*$, $\alpha$, ${\mathcal{M}_*}$, $e_t$) for the default models considered in this study, as constrained by the PPTA upper limit.[]{data-label="tab:KL"}

We also extended our analysis in two directions: (i) we explore a model that includes a seventh parameter, $\eta$, as described in Section \[sec:pop\]; this parameter allows us to vary the efficiency of three-body hardening by adjusting the stellar density at the SMBHB influence radius; and (ii) we consider putative, more stringent upper limits at $h_{\mathrm{1yr},95\%}=3\times10^{-16}$ and $1\times 10^{-16}$; in this case we represent the sensitivity improvement by simply lowering the PPTA upper limit curve by factors of 3 and 10 respectively.
------ ----------------------- ----------------------- ----------------------- ----------------------- ------------------------ -----------------------
       $h_{\mathrm{1yr},95\%}=10^{-15}$                 $h_{\mathrm{1yr},95\%}=3\times10^{-16}$         $h_{\mathrm{1yr},95\%}=10^{-16}$
       $e_t$                   $e_t+\eta$              $e_t$                   $e_t+\eta$              $e_t$                    $e_t+\eta$
KH13   [$-2.36$]{}([0.85]{})   [$-2.23$]{}([0.84]{})   [$-5.68$]{}([2.25]{})   [$-5.47$]{}([2.25]{})   [$-13.17$]{}([5.18]{})   [$-9.03$]{}([7.11]{})
G09    [$-1.2$]{}([0.39]{})    [$-1.1$]{}([0.39]{})    [$-3.35$]{}([1.11]{})   [$-3.17$]{}([1.09]{})   [$-8.26$]{}([2.86]{})    [$-6.38$]{}([4.02]{})
S16    [$-0.6$]{}([0.37]{})    [$-0.57$]{}([0.38]{})   [$-1.62$]{}([0.69]{})   [$-1.6$]{}([0.71]{})    [$-3.82$]{}([1.42]{})    [$-3.56$]{}([1.48]{})
All    [$-1.23$]{}([0.62]{})   [$-1.14$]{}([0.62]{})   [$-2.68$]{}([1.33]{})   [$-2.63$]{}([1.31]{})   [$-5.74$]{}([2.50]{})    [$-5.09$]{}([2.53]{})
------ ----------------------- ----------------------- ----------------------- ----------------------- ------------------------ -----------------------

: Natural logarithm of the model evidences and associated K-L divergences (in parentheses) for each of the four astrophysical SMBHB coalescence rate models: KH13, G09, S16 and All. For each population we consider two different parametrisations of the SMBHB dynamics: one which has only $e_t$ as a free parameter (column ‘$e_t$’, 6-parameter model), and one where we add the normalization factor $\eta$ of the density at the influence radius $\rho_i$ as a free parameter (column ‘$e_t+\eta$’, 7-parameter model). Numbers are reported for three values of the 95% PTA upper limit $h_{\mathrm{1yr},95\%}$, namely $10^{-15}$, $3\times10^{-16}$ and $10^{-16}$.[]{data-label="tab:all"}

The results are summarised in table \[tab:all\], where we list $\ln{\cal Z}$ and the K-L divergence (in parentheses) of each individual model for all the performed analyses. Let us start by considering the implications of the current PPTA upper limit at $h_{\mathrm{1yr},95\%}= 1\times 10^{-15}$ for the extended 7-parameter models.
First of all, there are no significant differences between the six- and the seven-parameter models. Both evidence and K-L divergence are virtually identical. Together with the flat $e_t$ posteriors shown in figure (\[fig:TrianglePlots\]), this leads us to an important conclusion: current PTA non-detections do not favour (nor require) a strong coupling with the environment. Neither high stellar densities (i.e. efficient 3-body scattering) nor high eccentricities are preferred by the data. As expected, the conservative S16 model is always favoured. However, even when compared to KH13, one obtains $\ln{\cal B} = 1.76$, which only mildly favours S16 [@kassr95]. In addition, all K-L divergences are smaller than unity, indicating only minor updates with respect to the $h_c$ prior distributions. This is another measure of the fact that the data are not very informative.

---------------------------------------------------------------------------------- ![image](images/supplementaryImages/conf_h_KH13_A16_ecc.pdf){width="\textwidth"} ![image](images/supplementaryImages/conf_m_KH13_A16_ecc.pdf){width="\textwidth"} ![image](images/supplementaryImages/conf_z_KH13_A16_ecc.pdf){width="\textwidth"} ---------------------------------------------------------------------------------- ![image](images/supplementaryImages/cornerplotFullPrior_KH13_A16_ecc.pdf){width="\textwidth"}\ ---------------------------------------------------------------------------------- ![image](images/supplementaryImages/conf_h_KH13_A16_rho.pdf){width="\textwidth"} ![image](images/supplementaryImages/conf_m_KH13_A16_rho.pdf){width="\textwidth"} ![image](images/supplementaryImages/conf_z_KH13_A16_rho.pdf){width="\textwidth"} ---------------------------------------------------------------------------------- ![image](images/supplementaryImages/cornerplotFullPrior_KH13_A16_rho.pdf){width="\textwidth"}

A putative limit at $h_{\mathrm{1yr},95\%}=3\times10^{-16}$ would obviously be more constraining, as also shown by the
numbers in the table. The K-L divergences of all models, with the exception of S16, are now larger than one, indicating that the upper limit is becoming more informative. In terms of model comparison, S16 is now mildly favoured with respect to G09 ($\ln{\cal B} = 1.73$) and strongly favoured compared to KH13 ($\ln{\cal B} = 4.06$). We notice that, again, adding $\eta$ does not make a significant difference to the model evidence. Even with such a low upper limit, neither high eccentricity nor strong coupling with the environment improve the agreement between model expectations and data. Although this seems counter-intuitive, we should keep in mind that the upper limit is set around $f\approx 5\times 10^{-9}$ Hz (cf. figure \[fig:SpectrumPosterior\]). Any dynamical effect would therefore have to cause a turnover of the spectrum around $10^{-8}$ Hz to have an impact on model selection, which occurs only in a small corner of parameter space where both $e_t$ and $\eta$ are high. However, for all models $h_{\mathrm{1yr},95\%}=3\times10^{-16}$ is still consistent with the tail of the $h_c$ distribution when an $f^{-2/3}$ spectrum is assumed, and invoking high $e_t$ and $\eta$ is not necessary. The limit becomes far more interesting if it reaches $h_{\mathrm{1yr},95\%}=1\times10^{-16}$. Now all K-L divergences are substantial, indicating that the measurement is indeed informative. Model selection now strongly favours model S16 compared to any other model, whether $\eta$ is included or not. Even including all environmental effects, when comparing S16 to KH13 we find $\ln{\cal B} = 5.47$, providing decisive preference for model S16. Note, however, that S16 has a log evidence of $-3.56$ of its own. This is considerably lower than zero (the evidence of a model that is unaffected by the measurement).
Since delays and stalling can potentially decrease the GW background by preventing many SMBHBs from merging, it is likely that a non-detection at $h_{\mathrm{1yr},95\%}=1\times10^{-16}$ would provide strong support for those dynamical effects. Those are not yet included in our modelling, and we plan to explore them in future work. We have found that, contrary to the previous cases, a $1\times 10^{-16}$ limit would in some cases provide significant evidence in favour of a strong coupling with the environment. To illustrate this we consider the KH13 model, where the effect is most pronounced. In this case we get $\ln{\cal B} = 4.14$ in favour of the $e_t+\eta$ model over the $e_t$-only model. Both high eccentricities and high densities would be required to explain the non-detection in the context of the KH13 model. The triangle plot in figure \[fig:triangleKHO6\] shows the posterior distribution of the model parameters for the $e_t$ case. We see that now all the posteriors differ significantly from the respective priors. Low $\beta$ and $z_*$ are preferred, because this suppresses the total number of SMBHBs at high redshift. Note that higher values of ${{\dot n}_0}$ are preferred. Although this might be surprising, it is dictated by the shape of the prior of $dn/dz$ (shown in the lower left panel of figure \[fig:triangleKHO6\]); in order to minimize the signal, it is more convenient to allow a negative $\beta$ at the expense of a higher local normalization ${{\dot n}_0}$ of the merger rate. High $\alpha$ values are obviously preferred, since they imply a population dominated by low-mass SMBHBs (this is evident in the middle left panel of figure \[fig:triangleKHO6\], showing $dn/d{\cal M}$). The $e_t$ posterior now shows a prominent peak close to the maximum $e_t=0.999$, with a long tail extending to zero. Very high eccentricities are preferred, although low values are still possible.
This is because $10^{-16}$ is only a $95\%$ upper limit; there is therefore a small chance that a low-eccentricity model producing a signal surpassing the $10^{-16}$ value is nonetheless accepted in the posterior. The triangle plot in figure \[fig:triangleKHO7\] shows how the situation changes when the $\eta$ parameter is added in the $e_t+\eta$ model. Most notably, extremely high eccentricities and high densities are now strongly favoured. This is primarily because the addition of $\eta$ extends the prior in $h_c$ (shown in the upper left panel) downwards, well below the level imposed by the upper limit. It is therefore easier to find points in the parameter space consistent with the measurement when $e_t$ and $\eta$ are large. Should other SMBH-host galaxy relations be ruled out by independent constraints, a PTA $10^{-16}$ upper limit would provide strong evidence of surprisingly extreme dynamical conditions of SMBHBs.

- **Acknowledgements:** HM and AV acknowledge the support by the Science and Technology Facilities Council (STFC), AS is supported by a URF of the Royal Society.

- **Author contributions:** All the authors have contributed to this work.

- **Competing interests:** The authors declare that they have no competing financial interests.

- **Correspondence:** Correspondence and requests for materials should be addressed to H. Middleton (email: [email protected]).

- **Data availability:** The results of our analysis for this study are available from the corresponding author on request.

[^1]: The Kullback-Leibler divergence between two normal distributions $p\sim N(\mu_p, \sigma_p^2)$ and $q\sim N(\mu_q, \sigma_q^2)$ is $\mathrm{D}_\mathrm{KL}(p||q) = \ln(\sigma_q/\sigma_p) - 1/2 + 1/2 \left[(\sigma_p/\sigma_q)^2 + (\mu_p - \mu_q)^2/\sigma_q^2\right]$. For $\sigma_p = \sigma_q$ and $\mu_p = \mu_q + \sigma_q$ the K-L divergence is 0.5.
[^2]: This is the same K-L between two Gaussian distributions with the same variance and means approximately 1.3 standard deviation apart
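The Gaussian K-L formula quoted in the footnotes can be checked directly; the short sketch below reproduces the 0.5 benchmark and the $\approx 0.85$ value (means $\approx 1.3$ standard deviations apart) quoted for KH13.

```python
import math

# Sketch of the Gaussian Kullback-Leibler divergence from the footnote:
# D_KL(p||q) = ln(sig_q/sig_p) - 1/2
#              + [ (sig_p/sig_q)^2 + (mu_p - mu_q)^2 / sig_q^2 ] / 2

def kl_gauss(mu_p, sig_p, mu_q, sig_q):
    return (math.log(sig_q / sig_p) - 0.5
            + 0.5 * ((sig_p / sig_q) ** 2 + (mu_p - mu_q) ** 2 / sig_q ** 2))

# Equal variances, means one standard deviation apart -> 0.5
print(kl_gauss(1.0, 1.0, 0.0, 1.0))
# Means ~1.3 sigma apart -> ~0.85, the KH13 K-L value in the table
print(kl_gauss(1.3, 1.0, 0.0, 1.0))
```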
--- abstract: 'Local Hebbian learning is believed to be inferior in performance to end-to-end training using a backpropagation algorithm. We question this popular belief by designing a local algorithm that can learn convolutional filters at scale on large image datasets. These filters combined with patch normalization and very steep non-linearities result in a good classification accuracy for shallow networks trained locally, as opposed to end-to-end. The filters learned by our algorithm contain both orientation selective units and unoriented color units, resembling the responses of pyramidal neurons located in the cytochrome oxidase “interblob” and “blob” regions in the primary visual cortex of primates. It is shown that convolutional networks with patch normalization significantly outperform standard convolutional networks on the task of recovering the original classes when shadows are superimposed on top of standard CIFAR-10 images. Patch normalization approximates the retinal adaptation to the mean light intensity, important for human vision. We also demonstrate a successful transfer of learned representations between CIFAR-10 and ImageNet $32\times 32$ datasets. All these results taken together hint at the possibility that local unsupervised training might be a powerful tool for learning general representations (without specifying the task) directly from unlabeled data.' author: - | Leopold Grinberg\ IBM Research\ `[email protected]`\ John Hopfield\ Princeton Neuroscience Institute\ Princeton University\ `[email protected]`\ Dmitry Krotov\ MIT-IBM Watson AI Lab\ IBM Research\ `[email protected]`\ title: Local Unsupervised Learning for Image Analysis --- Introduction ============ Local learning, motivated by Hebbian plasticity, is usually believed to be inferior in performance to gradient-based optimization, for example a backpropagation algorithm. Common arguments include the following ideas.
Feature detectors in the early layers of neural networks should be specifically crafted to solve the task that the neural network is designed for, thus some information about the errors of the network or the loss functions must be available during learning of the early layer weights. Making random changes in weights and accepting those changes that improve accuracy, as evolutionary algorithms do, is very inefficient in large networks. Gradient-based optimization seems to converge to the desired solution faster than alternative methods, which do not rely on the gradient. At the same time, modern neural networks are heavily overparametrized, which means that there are many combinations of weights that lead to a good generalization performance. Thus, there is an appealing idea that this manifold of “good” weights might be found by a purely local learning algorithm that operates directly on the input data without the information about the output of the network or the task to be performed. A variety of local learning algorithms relying only on bottom-up propagation of the information in neural networks have been recently discussed in the literature [@Chklovskii; @Pehlevan; @Hawkins; @Seung; @Bahroun; @Krotov_Hopfield_2019]. A recent paper [@Krotov_Hopfield_2019], for example, proposed a learning algorithm that is local and unsupervised in the first layer. It manages to learn useful early features necessary to achieve a good generalization performance, in line with networks trained end-to-end on simple machine learning benchmarks. The limitation of this study is that the proposed algorithms were tested only in fully connected networks, only on pixel permutation invariant tasks, and only on very simple datasets: pixel permutation invariant MNIST and CIFAR-10. An additional limitation of study [@Krotov_Hopfield_2019], which is not addressed in this work, is that they studied neural networks with only one hidden layer. Our main contributions are the following. 
Based on the open source implementation [@bio_learning_github] of the algorithm proposed in [@Krotov_Hopfield_2019], we designed an unsupervised learning algorithm for networks with [*local*]{} connectivity. We wrote a fast CUDA library that allows us to quickly learn weights of the convolutional filters at scale. We propose a modification to the standard convolutional layers, which includes patch normalization and very steep non-linearities in the activation functions. These unusual architectural features together with the proposed learning algorithm allow us to match the performance of networks of similar size and architecture trained using the backpropagation algorithm end-to-end on CIFAR-10. On ImageNet $32\times 32$ the accuracy of our algorithm is slightly worse than the accuracy of the network trained end-to-end, but it’s in the same ballpark. The usefulness of patch normalization is illustrated by designing an artificial test set from CIFAR-10 images that are dimmed by shadows. The network with patch normalization outperforms the standard convolutional network by a large margin on this task. At the end, transfer learning between CIFAR-10 and ImageNet $32 \times 32$ is discussed. The main goal of our work is to investigate the concept of local bottom-up learning and its usefulness for machine learning and generalization, and not to design a biologically plausible framework for deep learning. We acknowledge that the proposed algorithm uses shared weights in convolutional filters, which is not biological. Despite this lack of the overall biological motivation in this work, we note that the filters learned by our algorithm show a well pronounced separation between color-sensitive cells and orientation-selective cells. This computational aspect of the algorithm matches nicely with a similar separation between the stimulus specificity of the responses of neurons in blob and interblob pathways, known to exist in the V1 area of the visual cortex. 
Learning Algorithm and Network Architecture =========================================== During training, each input image is cut into small patches of size $W\times W\times 3$. A stride of $1$ pixel and no padding are used at this stage. The resulting set of patches $v_i^A$ (index $i$ enumerates pixels, index $A$ enumerates different patches) is shuffled at each epoch and is organized into minibatches that are presented to the learning algorithm. The learning algorithm uses weights $M_{\mu i}$, which is a matrix of $K$ channels by $N=W\cdot W\cdot 3$ visible units, that are initialized from a standard normal distribution and iteratively updated according to the following learning rule [@Krotov_Hopfield_2019] $$\Delta M_{\mu i} = \varepsilon \sum\limits_{A\in \text{minibatch} }g\Big[ \text{Rank}\Big(\sum\limits_j M_{\mu j} v_j^A\Big) \Big] \Big[ v_i^A- \Big(\sum\limits_k M_{\mu k} v_k^A\Big) M_{\mu i} \Big]\label{learning rule}$$ where $\varepsilon$ is the learning rate. The activation function $g(\cdot)$ is equal to one for the strongest driven channel and is equal to a small negative constant for the channel that has rank $m$ in the activations $$g(i) = \left\{ \begin{array}{cl}1, & \text{if}\ i=1\\ -\Delta, & \text{if }\ i=m\\ 0, & \text{otherwise} \end{array}\right.\label{discrete learning activation function}$$ Ranking is done for each element of the minibatch separately. The weights are updated after each minibatch for a certain number of epochs, until each row of the weight matrix converges to a unit vector. This is a literal implementation of the algorithm of [@Krotov_Hopfield_2019] adapted to small patches of images. The resulting matrix $M_{\mu i}$ is used as the weights of the convolutional filters, with two important modifications. First, each patch $v_i$ of the image is normalized to be a unit vector before taking the dot product with the weight matrix.
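A single minibatch update of learning rule (\[learning rule\]) can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the authors' CUDA library, and the values of $\varepsilon$, $m$ and $\Delta$ below are placeholders.

```python
import numpy as np

# Illustrative single-minibatch update of the local learning rule.
# M: (K, N) weight matrix, K hidden channels, N = W*W*3 patch pixels.
# V: (B, N) minibatch of flattened patches.
def local_update(M, V, eps=0.02, m=2, delta=0.4):
    currents = V @ M.T                     # input currents, shape (B, K)
    order = np.argsort(-currents, axis=1)  # per-patch ranking of channels
    g = np.zeros_like(currents)
    rows = np.arange(V.shape[0])
    g[rows, order[:, 0]] = 1.0             # strongest driven channel: g = 1
    g[rows, order[:, m - 1]] = -delta      # rank-m channel: g = -Delta
    # Delta M_{ui} = eps * sum_A g_{Au} * (v_i^A - current_{Au} * M_{ui})
    dM = eps * (g.T @ V - (g * currents).sum(axis=0)[:, None] * M)
    return M + dM
```

Iterating this over shuffled patch minibatches drives each row of $M$ towards a unit vector, after which the rows serve as the convolutional filter weights.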
Given that the rows of the weight matrix themselves are unit vectors, the dot product between the weight and the patch is a cosine of the similarity between the two. Thus, $\sum_iM_{\mu i } v_i \in [-1, 1 ]$. Second, the result of the dot product is passed through a very steep non-linearity - rectified power function [@DAM2016; @DAM2018] $$f(x) = \Big[ \text{ReLU}(x) \Big]^n$$ where the power $n$ is a hyperparameter of that layer. We call these slightly unusual convolutional layers NNL-CONV layers (normalized non-linear convolutional layers), in order to distinguish them from the standard ones, denoted CONV in this paper. The standard CONV layers do not use per-patch normalization and use ReLU as an activation function. In the following sections we also use standard max-pooling layers and standard fully connected layers with softmax activation function for the classifier. Evaluation of the model on CIFAR-10 dataset {#section CIFAR} =========================================== The algorithm (\[learning rule\]) was applied to images from CIFAR-10 dataset to learn the weights of the NNL-CONV filters. They are shown in Fig.\[bio receptive fields\]. Each small square corresponds to a different channel (hidden unit) and is shown by projecting its corresponding weights into the image plane $W\times W\times 3$. The connection strength to each pixel has $3$ components corresponding to the RGB color image. These weights are linearly stretched so that they change on a segment $[0,1]$ for each channel. Thus, black color corresponds to synaptic weights that are either equal to zero, or negative; white color corresponds to weights that are large and have approximately equal values in all three RGB channels; blue color corresponds to weights that have large weights connected to blue neurons, and small or zero weights connected to red and green neurons, etc. 
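Putting the two modifications together, the forward response of an NNL-CONV unit on a batch of flattened patches can be sketched as follows (the power $n=4$ is an illustrative value of the hyperparameter):

```python
import numpy as np

# Sketch of the NNL-CONV response. M: (K, N) converged filters with
# unit-norm rows; V: (B, N) flattened image patches; n: power of the
# rectified power non-linearity f(x) = ReLU(x)^n.
def nnl_conv(M, V, n=4):
    V_unit = V / np.linalg.norm(V, axis=1, keepdims=True)  # per-patch norm
    cos = V_unit @ M.T              # cosine similarity, lies in [-1, 1]
    return np.maximum(cos, 0.0) ** n
```

A patch that exactly matches a filter gives a cosine of 1 and hence a response of 1; the steep power sharply suppresses partial matches.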
As is clear from this figure, the resulting weights show a diversity of features of the images, including line detectors, color detectors, and detectors of more complicated shapes. These feature detectors appear to be much smoother than the feature detectors resulting from standard end-to-end training of CNNs with the backpropagation algorithm. Guided by these examples, one can make the following qualitative observations: 1. The majority of the hidden units are black and white, having no preference for R, G, or B color. These units respond strongly to oriented edges, corners, bar ends, spots of light, more complicated shapes, etc. 2. There is a significant presence of hidden units detecting color. Those neurons tend to have a smaller preference for orientation selectivity, compared to black and white units. In order to test the quality of these learned filters with respect to the generalization performance, they were used as the weights of a simple network with one NNL-CONV layer. Consider for example the architecture shown in Fig. \[networks errors\] (left). In this architecture an NNL-CONV layer is followed by a max-pooling layer and then a fully connected softmax classifier. The weights of the NNL-CONV layer were fixed, and given by the output of the algorithm (\[learning rule\]). The max-pooling layer does not have any trainable weights. The weights of the top layer were trained using gradient descent-based optimization (Adam optimizer with the cross-entropy loss function). The accuracy of this network was compared with the accuracy of a network of the same capacity, with NNL-CONV layers replaced by standard CONV layers, trained end-to-end using the Adam optimizer. The results are shown in Fig. \[networks errors\] (right). Here one can see how the errors on the held-out test set decrease as training progresses.
Training time here refers to the training of the top-layer classifier only in the case of the NNL-CONV network, and to the training of all the layers (convolutional layer and classifier) in the case of the standard CONV network trained end-to-end. For the simple network shown on the left, the error of the locally trained network is $27.80\%$, while the error of the network trained end-to-end is $27.11\%$. In order to achieve a better test accuracy it also helps to organize the NNL-CONV layer as a sequence of blocks with various sizes of the window $W$, as in Fig. \[networks errors\] (middle). Having this diversity of receptive windows allows the neural network to detect features of different scales in the input images. As above, the performance of this network trained using algorithm (\[learning rule\]) is compared with the performance of a similar-size network trained end-to-end. The results are shown by the red and blue curves in Fig. \[networks errors\] (right). The error of the locally trained network is $23.40\%$, while the error of the network trained end-to-end is $22.57\%$. The main conclusion here is that the networks with filters obtained using the local unsupervised algorithm (\[learning rule\]) achieve almost the same accuracy as the networks trained end-to-end. This result is at odds with the common belief that the first-layer feature detectors should be crafted specifically to solve the narrow task specified by the top-layer classifier. Instead, it suggests that a general set of first-layer weights can be learned using local bottom-up unsupervised training (\[learning rule\]), and that it is sufficient to communicate the task only to the top-layer classifier, without adjusting the first-layer weights.
We have done an extensive set of experiments with networks of different size of the hidden layer as well as with different sizes of convolutional windows $W$, pooling windows $W_p$, different strides in NNL-CONV and pooling layers, and different powers $n$ of the activation function. The conclusions are the following: 1. The classification accuracy increases as the hidden layer gets wider. This is a known phenomenon for networks trained end-to-end with the backpropagation algorithm. The same phenomenon is valid for the networks trained using the local learning (\[learning rule\]). 2. For a given choice of windows $W$ of the blocks of the NNL-CONV layer the remaining hyperparameters (powers $n$, pooling windows $W_p$, strides, size of the minibatch, etc) can be optimized on the validation set (see Appendix for details) so that the accuracy of the locally trained network is almost the same as the accuracy of the network with the same set of windows $W$ trained end-to-end. In all the experiments in this section a standard validation procedure was used. Standard CIFAR-10 training set of 50000 images was randomly split into 45000 training images and 5000 validation images. The hyperparameters were adjusted to optimize the accuracy on the validation set. Once this is done, the validation set was combined together with the training set to retrain the models for the optimal values of the hyperparameters. These models were tested on the standard held-out test set of 10000 images. We also acknowledge that better accuracies on CIFAR-10 dataset are achievable [@CIFAR; @accuracies], but they require the use of one (or several) architectural/algorithmic methods that go beyond the limits of this work, such as: deeper architectures, dropout, data augmentation, injection of noise during training, etc. 
Evaluation of the model on ImageNet $32 \times 32$ dataset
==========================================================

A similar set of experiments was conducted on the ImageNet $32 \times 32$ dataset [@ImageNet32]. A large family of filters with windows $W$ changing in the range $2\leq W \leq 16$ was trained using the algorithm (\[learning rule\]). Examples of those learned filters are shown in Fig. \[ImageNet\_rec\_fields\_fig\]. Visually, they look similar to the filters resulting from training on CIFAR-10, see Fig. \[bio receptive fields\]. The separation between color sensitive and orientation selective cells is present for all sizes of windows $W$, as is the case for CIFAR-10. In order to benchmark the accuracy of the models, $10\%$ of the standard training set was used as a validation set to tune the hyperparameters. After the hyperparameters were chosen, the validation set was combined with the training set to retrain the models for the optimal values of the hyperparameters. The accuracy was measured on the standard held-out test set. A family of models with one layer of NNL-CONV units organized in blocks was considered. The “optimal” model is shown in Fig. \[network\_ImageNet\] and consists of four blocks with windows of sizes $W=3,4,5,8$ pixels. The error rate of the model is $84.13\%$ in top-1 classification and $70.00\%$ in top-5 classification. The errors on the training and test sets are shown in Fig. \[network\_ImageNet\]. Details of these experiments are in the Appendix. For comparison, the network of the same size trained end-to-end achieves a top-1 error of $79.72\%$ and a top-5 error of $62.58\%$. Thus, on this task the locally trained network performs slightly worse than the network trained end-to-end, but the difference is not that big, especially considering that no information about the class labels was used in training the NNL-CONV filters.
We also acknowledge that significantly better accuracies can be achieved on this dataset by training CONV networks end-to-end, see table 1 in [@ImageNet32], but this requires deep architectures.

Color Sensitivity, Orientation Selectivity and Cytochrome Oxidase Stain
=======================================================================

The filters shown in Fig. \[bio receptive fields\], \[ImageNet\_rec\_fields\_fig\] are unit vectors, as a result of convergence of learning rule (\[learning rule\]), and are multiplied by the pixel intensities of a patch, which is also normalized to be a unit vector. Thus, the maximal value of this dot product is equal to one, attained when the patch matches the filter exactly (assuming that the filter does not have negative elements). The learned filters can therefore be interpreted as the preferred stimuli (input images which maximize firing) of the corresponding hidden units. Below we describe a metaphorical comparison of these learned preferred stimuli with the preferred stimuli of cells in the V1 area of the visual cortex in primates. An interesting anatomical feature of the primary visual cortex is revealed when cells are stained with a cytochrome oxidase enzyme. This staining shows a pattern of blobs and interblob regions. In the famous set of experiments [@Livingstone; @Hubel] it was discovered that the cells in the interblob regions are highly orientation selective and respond to luminance rather than color, while the neurons inside the blobs respond to colors and are insensitive to orientation. There are many important details and subtleties of these experiments that are not discussed in this paper, e.g. the neocortical visual areas of the primate brain have six anatomical layers, many of which are clearly divided into sublayers; neurons in different layers have different response properties; the response properties also vary smoothly within a layer.
These facts limit the usefulness of describing how a “typical” pyramidal cell in V1 responds, especially to strong natural stimuli. However, the literature supports the qualitative assertion that there exists a segregation of orientation selective and color processing cells [@Livingstone; @Hubel; @Johnson]. A segregation resembling the one discussed above can be seen in the preferred stimuli of the hidden neurons learned by our algorithm, see Fig. \[bio receptive fields\], \[ImageNet\_rec\_fields\_fig\]. For completeness, in Fig. \[Standard\_Nets\_fig\] we show filters learned by two standard networks used in computer vision: AlexNet [@AlexNet] and ResNet-18 [@ResNet]. These networks are much deeper than ours and were trained on the full resolution ImageNet dataset. These two aspects make it difficult to compare their filters with the ones learned by our algorithm. However, they appear to look very different from the filters shown in Fig. \[bio receptive fields\], \[ImageNet\_rec\_fields\_fig\].

Patch Normalization, Retinal Adaptation, and Shadows
====================================================

Natural scenes can have a several-thousand-fold range of light intensity variation [@Dunn]. At the same time, an 8-bit digital camera has only 256 possible intensity values. In order to cope with this huge variation of light intensities two separate systems exist in biological vision: a global control based on changing the size of the pupil, and a local adaptation on the retina. The latter, being the dominant one, enables the retina to have a good signal-to-noise ratio in both sun and shadow regions of a visual scene [@Dunn; @Heeger]. Patch normalization, which is essential for a good performance of our algorithm, can be thought of as a mathematical formalization of the local circuitry on the retina that is responsible for this adaptation.
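As described above, a hidden unit's response is a dot product between a unit-norm filter and a unit-normalized patch, followed by a steep power nonlinearity. A minimal sketch of this computation follows; the rectification applied before raising to the power $n$ is an illustrative assumption, and the function and variable names are ours.

```python
import numpy as np

def nnl_response(filters, patch, n=40, eps=1e-9):
    """Responses of NNL-CONV units to one image patch.
    Both the filters (rows of `filters`) and the patch are normalized
    to unit length, so each dot product lies in [-1, 1] and equals 1
    exactly when the patch matches the filter; a steep power
    nonlinearity x -> max(x, 0)**n then sharpens the selectivity."""
    p = patch.ravel().astype(float)
    p = p / (np.linalg.norm(p) + eps)   # patch normalization
    h = filters @ p                     # cosine similarities
    return np.maximum(h, 0.0) ** n      # steep activation

# a patch identical to a filter gives (up to eps) the maximal response 1
f = np.random.default_rng(1).normal(size=(4, 27))
f /= np.linalg.norm(f, axis=1, keepdims=True)
r = nnl_response(f, f[0])
```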
Although we do not have a dataset with images of real scenes in various lighting conditions, it can be reasonably emulated by multiplying images from the CIFAR-10 dataset pixelwise by a function $I(x,y)$, which changes between zero and one. Patch normalization discards the overall normalization constant in every patch, which is strongly dependent on the light conditions, and focuses chiefly on the shape of the object that a given unit sees. Examples of images constructed this way are shown in Fig. \[shadows\_fig\], where $\approx 80\%$ of each image was covered by a shadow having $I(x,y)=0.3$. Humans can see and correctly classify these “shadowed” images. Two networks, a standard CONV net trained end-to-end and an NNL-CONV net trained as described in section \[section CIFAR\], were trained on raw CIFAR-10 images, but tested on the shadowed images. Both networks had exactly the same architecture, shown in Fig. \[networks errors\] (middle). The results are shown in Fig. \[shadows\_fig\] (right). While the errors on the raw CIFAR-10 images are approximately the same for the two networks ($\approx 23\%$), the error of the NNL-CONV net on the shadowed images is much lower ($\approx 28\%$) than the error of the standard CONV net (more than $50\%$). This illustrates that patch normalization can be a useful tool for dealing with images having large differences in light intensity (for example coming from shadows), without having images with these kinds of shadows in the training set. A variety of intensity normalization schemes were used in the era of feature-engineered systems, see for example [@Love; @Collins]. Some of them, for example the normalized cross correlation of [@Collins] or the SIFT normalization of [@Love], have some similarities with the normalization of NNL-CONV neurons proposed here. The biggest difference between our approach and those feature-engineered systems is that the filters used in our networks are learned and not hand-crafted.
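The shadow construction described above can be sketched as follows. The values $I=0.3$ and the $\approx 80\%$ coverage are from the text, while the particular shape of $I(x,y)$ (a vertical band of columns) is an illustrative choice. The second function shows why patch normalization makes a unit's response insensitive to a shadow that is uniform within its patch.

```python
import numpy as np

def add_shadow(img, frac=0.8, intensity=0.3):
    """Emulate a shadow on an image of shape (H, W, C) with values in
    [0, 1]: the leftmost ~frac of the columns is multiplied by
    `intensity`, the rest is left unchanged.  (The exact shape of
    I(x, y) used in the paper may differ; this is one simple choice.)"""
    H, W = img.shape[:2]
    I = np.ones((H, W, 1))
    I[:, : int(frac * W)] = intensity
    return img * I

def normalize_patch(patch, eps=1e-9):
    """Patch normalization: the overall scale of the patch is
    discarded, so a uniformly shadowed patch maps to (essentially)
    the same unit vector as the raw patch."""
    v = patch.ravel().astype(float)
    return v / (np.linalg.norm(v) + eps)
```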
Transfer Learning {#transfer learning section}
=================

The filters learned by our algorithm are independent of the task specified by the top layer classifier. This makes them natural candidates for transfer learning. In order to test this idea we used the weights trained on ImageNet $32 \times 32$ as NNL-CONV filters in the architecture shown in Fig. \[networks errors\] (middle) and retrained the top layer on the CIFAR-10 images. This gave an error of $22.19\pm0.07\%$ (mean $\pm$ std over 5 training runs). This is more than $1\%$ lower than the error of the same network trained on CIFAR-10 images, which makes sense since ImageNet $32 \times 32$ has many more images than the CIFAR-10 dataset. Following the same procedure with filters obtained through end-to-end backpropagation training we obtained $22.32\pm0.09\%$. Thus, the locally trained filters perform as well as the standard ones, despite being learned by the local and unsupervised algorithm. Transfer in the opposite direction, with weights trained on CIFAR-10 used in the architecture shown in Fig. \[network\_ImageNet\] and the top layer retrained on ImageNet $32 \times 32$, resulted in a top-1 error of $85.38\%$ and a top-5 error of $71.75\%$. This should be compared with the top-1 error of $84.13\%$ and top-5 error of $70.00\%$ obtained when training using local learning directly on ImageNet $32 \times 32$. Again, having more images helps in reducing the error, but the difference is not very big.

Computational Aspects
=====================

The concept of local learning seems to be a powerful idea from the algorithmic perspective. Unlike end-to-end training, which requires keeping in memory the weights of the entire neural network together with the activations of all neurons in all layers, in local training it is sufficient to keep in the memory of the compute device (for example a GPU) only one layer of weights, a minibatch of data, and the activations of neurons in that one layer.
Thus, local learning is appealing for use on accelerators with a low memory capacity ($<16$GB). Additionally, since the weights are general, i.e. derived directly from the data without information about the task, it is possible to save and reuse them in different neural network architectures, without the need to retrain. This makes it possible to easily experiment with modular architectures (composed of blocks of different kinds of neurons), like the ones shown in the middle panel of Fig. \[networks errors\] and in Fig. \[network\_ImageNet\], without the need to recompute those weights multiple times. The main limitation of the open source NumPy implementation [@bio_learning_github] is that it is sequential and not GPU accelerated. Thus it is extremely slow, which makes it impractical for working with image datasets. At the same time, existing frameworks for deep learning are designed for end-to-end training. While it is certainly possible to implement the algorithm (\[learning rule\]) using PyTorch or TensorFlow, such implementations would not be optimal in performance. Thus, we invested time in building a fast C++ library for CUDA that takes full advantage of the concept of local learning and optimizes the performance. Our fully parallel algorithm requires efficient (fast and programmable) hardware optimized mostly for i) matrix-matrix multiplications; ii) vector operations; iii) high-bandwidth connectivity between the processor and different types of memory; iv) sparse algebra to apply the effect of the activation function $g(\cdot)$. A parallel implementation of the activation function also requires the use of atomic operations. While the memory footprint of the input data we work with is in the range of 10GB to 400GB, the working data set (minibatch of data + model’s weights) requires between 30MB and 3GB of storage, which fits well into the V-100 GPU memory.
We also use the Unified Virtual Address Space and the Address Translation Service [@P9_NPU] to access input data that can be spread over the GPU’s High Bandwidth Memory, the system (CPU) memory, the SSD device, and even the file system. The 50-75 GB/s \[unidirectional\] NVLink between each GPU and CPU makes it possible to do a fast data transfer from the CPU memory to the GPU when the input data does not fit into the GPU memory. This results in a typical training time of the filters on the ImageNet $32\times 32$ dataset (2,562,334 images) of about 20-25 minutes, which is significantly faster than what would be possible with end-to-end training on a single GPU (typical training time for the architectures considered in this paper is of the order of 3-10 hours).

Discussion and Conclusions
==========================

This work is a proof of concept that local bottom-up unsupervised training is capable of learning useful and task-independent representations of images in networks with local connectivity. Similarly to [@Krotov_Hopfield_2019], we focus on networks with one trainable hidden layer and show that their accuracies on standard image recognition tasks are close to the accuracies of networks of similar capacity trained end-to-end. The appealing aspects of the proposed algorithm are that it is very fast, conceptually simple (the weight update depends only on the activities of the neurons connected by that weight, and not on the activities of all the neurons in the network), and leads to smooth weights with a well pronounced segregation of color and geometry features. We believe that this work is a first step toward algorithms that can train multiple layers of representations. Even if it turns out that this algorithm is only useful in the first layer, it can still be combined with backpropagation training in higher layers and take advantage of the smoothness of the first layer’s weights.
This might enhance the interpretability of the networks or their robustness against adversarial attacks, questions that require a comprehensive investigation. Convolutional neurons with patch normalization and steep activation functions, proposed in this paper, are essential for the good performance of our networks. It is worth emphasizing that if the filters learned by our algorithm were simply substituted as weights in a conventional convolutional network, instead of the NNL-CONV network, the accuracy of the classification would be very poor. There exists a large family of unsupervised feature learning methods, such as learning with surrogate classes [@surrogate; @classes], adversarial feature learning [@GAN; @GAN_K_means], etc. Local Hebbian learning and unsupervised learning with GANs are not mutually exclusive, but rather complementary. We hope that this work is a step toward merging these ideas and designing new powerful unsupervised learning algorithms.

Appendix. Technical Details of Experiments discussed in this paper. {#appendix.-technical-details-of-experiments-discussed-in-this-paper. .unnumbered}
===================================================================

We have done an extensive set of experiments varying various parameters of the local training algorithm. We have experimented with the following parameters of the convolutional blocks: size of the hidden layer $100 \leq K \leq 2000$, convolutional window $2 \leq W \leq 18$, strides $1 \leq ST \leq 4$, strength of the anti-Hebbian learning $0 \leq \Delta \leq 0.3$, etc. Additionally, we have experimented with the hyperparameters of the full architecture: pooling size $ 1 \leq W_p \leq 18$, power $1 \leq n \leq 100$ (this parameter was varied with increment $10$), pooling strides $1 \leq ST_p \leq 4$, size of the minibatch, and the learning rate annealing schedule.
All the parameters were determined on the validation set, as discussed in the main text for both the networks trained locally and the networks trained end-to-end. For the experiments reported in Fig. \[networks errors\] (left network), the optimal hyperparameters were: $m=2$, $\Delta = 0.2$, $K=400$, $W=4$, $n=40$, $W_p=11$, convolutional stride $ST=1$, pooling stride $ST_p=2$, minibatch size for local training was $1000$ patches, minibatch size for the top layer backpropagation training was $300$ images. The convolutional filters were trained for $500$ epochs with learning rate linearly decreasing from $\varepsilon_0=1 \cdot 10^{-4}$ to zero. The top layer was trained for $70$ epochs with the following schedule of the learning rate decrease: $$\varepsilon= \left\{ \begin{array}{cl}1\cdot 10^{-4}, & \text{epoch}\leq 15\\ 8 \cdot 10^{-5}, & 15<\text{epoch}\leq 30 \\ 5 \cdot 10^{-5}, & 30<\text{epoch}\leq 45 \\ 2 \cdot 10^{-5}, & 45<\text{epoch}\leq 60 \\ 1 \cdot 10^{-5}, & 60<\text{epoch}\leq 70 \end{array}\right.\label{learning rate annealing CIFAR}$$ The backpropagation counterpart of this network was chosen to have (almost) the same capacity as the NNL-CONV network. Standard CONV filters have biases as parameters of the convolutional filters, while in NNL-CONV nets these biases are set to zero. Thus, strictly speaking the CONV net always has a little bit more parameters than the corresponding (with the same number of channels $K$ and the same set of windows $W$) NNL-CONV net, but this difference is equal to the number of convolutional filters, and thus is much smaller than the total number of parameters of the network. We ignore this small difference and assume that the two networks have the same capacity. The end-to-end counterpart of the NNL-CONV network shown in Fig. \[networks errors\] (left) had the following hyperparameters: $K=400$, $W=4$, $ST=1$, $ST_p=2$, minibatch size was $300$ images, and the activation function was a ReLU. 
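The annealing schedule (\[learning rate annealing CIFAR\]) above, together with the linear decay used for the local training of the filters, can be written directly as:

```python
def lr_cifar(epoch):
    """Piecewise-constant learning-rate annealing used for the top
    layer on CIFAR-10 (schedule from the appendix, 70 epochs total)."""
    for bound, lr in [(15, 1e-4), (30, 8e-5), (45, 5e-5),
                      (60, 2e-5), (70, 1e-5)]:
        if epoch <= bound:
            return lr
    raise ValueError("training runs for 70 epochs")

def lr_linear(epoch, total=500, eps0=1e-4):
    """Linear decrease from eps0 to zero over `total` epochs, used for
    the local training of the convolutional filters."""
    return eps0 * (1.0 - epoch / total)
```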
The network was trained for $70$ epochs using the learning rate decrease (\[learning rate annealing CIFAR\]). In order to make the comparison of the two networks simpler we used the same value of $W_p=11$ for this network, instead of optimizing it on the validation set. We have also checked that additionally optimizing over the parameter $W_p$ on the validation set does not significantly change the accuracy on the test set. The network trained end-to-end finds solutions of approximately the same accuracy for a broad range of $W_p$ around the optimum. For the NNL-CONV net shown in Fig. \[networks errors\] (middle) the remaining hyperparameters (not shown in the figure) were the following. The sequence of $\Delta$ for the five blocks was: $\Delta=[0.1,\ 0.1,\ 0.2,\ 0.15,\ 0.2]$. For all blocks $m=2$, $ST=1$, $ST_p=2$, and the minibatch size for the local training was 1000 patches. The learning rate linearly decreased from $\varepsilon_0=1 \cdot 10^{-4}$ to zero during $500$ epochs. The minibatch size for the top layer training was $300$ and the learning rate annealing followed the schedule (\[learning rate annealing CIFAR\]). For the network trained end-to-end, the minibatch size was 300 images and the set of pooling windows was $W_p=[14,11,11,7,11]$ (not optimized on the validation set). We checked that this additional optimization would not change the results of Fig. \[networks errors\]. Activation functions were ReLU. The learning rate annealing followed (\[learning rate annealing CIFAR\]). For the experiments reported in Fig. \[network\_ImageNet\] the remaining hyperparameters (not shown in the figure) were the following. The sequence of $\Delta$ for the four blocks was: $\Delta=[0.1,\ 0.2,\ 0.2,\ 0.2]$. For all blocks $m=2$, $ST=1$, $ST_p=2$, and the minibatch size for the local training was 10000 patches. The learning rate linearly decreased from $\varepsilon_0=1 \cdot 10^{-4}$ to zero during $50$ epochs.
The minibatch size for the top layer training was $200$ and the learning rate annealing followed the schedule $$\varepsilon= \left\{ \begin{array}{cl}1\cdot 10^{-4}, & \text{epoch}\leq 15\\ 8 \cdot 10^{-5}, & 15<\text{epoch}\leq 25 \\ 5 \cdot 10^{-5}, & 25<\text{epoch}\leq 35 \\ 2 \cdot 10^{-5}, & 35<\text{epoch}\leq 45 \\ 1 \cdot 10^{-5}, & 45<\text{epoch}\leq 48 \end{array}\right.\label{learning rate annealing ImageNet}$$ for $48$ epochs. For the network trained end-to-end the training was done for $5$ epochs with the learning rate $1\cdot 10^{-4}$ and a minibatch of size 200 images. The same set of pooling windows as specified in Fig. \[network\_ImageNet\] was used. Additional optimization over these pooling windows does not result in improved accuracy. For the experiments reported in Fig. \[shadows\_fig\] the same settings were used as discussed above for the network shown in Fig. \[networks errors\] (middle) and its end-to-end trained counterpart. The only difference is that in addition to testing the accuracy on raw images, the accuracy was also tested on shadowed images. For the experiments reported in section \[transfer learning section\] the settings corresponding to the target network (to which the transfer is made) were used. The set of $\Delta$ for the weights trained on ImageNet $32\times 32$ (and transferred to CIFAR-10) was $\Delta=[0,\ 0.1,\ 0.2,\ 0.2,\ 0.2]$ for the five blocks. Parameter $m$ was set to $m=2$ for all five blocks, and the minibatch size for local training was $10000$ patches. The convolutional filters were trained for $50$ epochs with the learning rate linearly decreasing from $\varepsilon_0=1 \cdot 10^{-4}$ to zero. The set of $\Delta$ for the weights trained on CIFAR-10 (and transferred to ImageNet $32\times 32$) was $\Delta=[0.1, \ 0.2,\ 0.15,\ 0.2]$ for the four blocks. Parameter $m$ was set to $m=2$ for all four blocks, and the minibatch size for local training was $1000$ patches.
The convolutional filters were trained for $500$ epochs with the learning rate linearly decreasing from $\varepsilon_0=1 \cdot 10^{-4}$ to zero. No dropout, noise injection, data augmentation or data preprocessing of any kind was used in this paper. We leave the investigation of the influence of these methods on the accuracy of our algorithm for a separate study.

Acknowledgements {#acknowledgements .unnumbered}
================

We thank Quanfu Fan and Hilde Kuehne for useful discussions.

[99]{}

Pehlevan, C., Hu, T. and Chklovskii, D.B., 2015. A Hebbian/anti-Hebbian neural network for linear subspace learning: a derivation from multidimensional scaling of streaming data. Neural Computation, 27(7), pp.1461-1495.

Pehlevan, C., Sengupta, A.M. and Chklovskii, D.B., 2018. Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks? Neural Computation, 30(1), pp.84-124.

Cui, Y., Ahmad, S. and Hawkins, J., 2016. Continuous online sequence learning with an unsupervised neural network model. Neural Computation, 28(11), pp.2474-2504.

Seung, H.S. and Zung, J., 2017. A correlation game for unsupervised learning yields computational interpretations of Hebbian excitation, anti-Hebbian inhibition, and synapse elimination. arXiv preprint arXiv:1704.00646.

Bahroun, Y. and Soltoggio, A., 2017. Online representation learning with single and multi-layer Hebbian networks for image classification. In International Conference on Artificial Neural Networks (pp. 354-363), Springer.

Krotov, D. and Hopfield, J., 2019. Unsupervised learning by competing hidden units. Proceedings of the National Academy of Sciences, 116(16), pp.7723-7731. DOI: 10.1073/pnas.1820458116

GitHub repository “Biological Learning”, <https://github.com/DimaKrotov/Biological_Learning>

Krotov, D. and Hopfield, J.J., 2016. Dense associative memory for pattern recognition. In Advances in Neural Information Processing Systems (pp. 1172-1180).

Krotov, D. and Hopfield, J., 2018. Dense associative memory is robust to adversarial inputs. Neural Computation, 30(12), pp.3151-3167.

See for example a current leader board: <https://benchmarks.ai/cifar-10>

Livingstone, M.S. and Hubel, D.H., 1984. Anatomy and physiology of a color system in the primate visual cortex. Journal of Neuroscience, 4(1), pp.309-356.

Johnson, E.N., Hawken, M.J. and Shapley, R., 2008. The orientation selectivity of color-responsive neurons in macaque V1. Journal of Neuroscience, 28(32), pp.8096-8106.

Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).

He, K., Zhang, X., Ren, S. and Sun, J., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).

Appelhans, D., Auerbach, G., Averill, D., Black, R., Brown, A., Buono, D., Cash, R., Chen, D., Deindl, M., Duffy, D. and Eastman, G., 2018. Functionality and performance of NVLink with IBM POWER9 processors. IBM Journal of Research and Development, 62(4-5).

Chrabaszcz, P., Loshchilov, I. and Hutter, F., 2017. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. CoRR abs/1707.0 (2017).

Dunn, F.A., Lankheet, M.J. and Rieke, F., 2007. Light adaptation in cone vision involves switching between receptor and post-receptor sites. Nature, 449(7162), p.603.

Heeger, D. Perception Lecture Notes: Light/Dark Adaptation. Available: http://www.cns.nyu.edu/~david/courses/perception/lecturenotes/light-adapt/light-adapt.html

Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), pp.91-110.

Collins, R. Lecture 7: Correspondence Matching. Available: http://www.cse.psu.edu/~rtc12/CSE486/lecture07.pdf

Dosovitskiy, A., Springenberg, J.T., Riedmiller, M. and Brox, T., 2014. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 766-774).

Donahue, J., Krähenbühl, P. and Darrell, T., 2016. Adversarial feature learning. arXiv preprint arXiv:1605.09782.

Premachandran, V. and Yuille, A.L., 2016. Unsupervised learning using generative adversarial training and clustering.
---
abstract: 'We consider $ C^{2} $ Hénon-like families of diffeomorphisms of $ \mathbb R^{2} $ and study the boundary of the region of parameter values for which the nonwandering set is uniformly hyperbolic. Assuming sufficient dissipativity, we show that the loss of hyperbolicity is caused by a first homoclinic or heteroclinic tangency and that uniform hyperbolicity estimates hold *uniformly in the parameter* up to this bifurcation parameter and even, to some extent, at the bifurcation parameter.'
address:
- 'Department of Mathematics, Suzhou University, Suzhou 215006, Jiangsu, P.R. China'
- 'Dept. of Mathematics, Imperial College, 180 Queen’s Gate, London SW7 2AZ, UK'
- 'Universidade Federal Fluminense, Niteroi, RJ, Brazil.'
author:
- Yongluo Cao
- Stefano Luzzatto
- Isabel Rios
date: 'February 9, 2005, revised May 10, 2007'
title: |
    The boundary of hyperbolicity\
    for Hénon-like families
---

[^1]

Introduction and statement of results
=====================================

Our aim in this paper is to study the *boundary of hyperbolicity* of certain families of two dimensional maps.

Hénon-like families
-------------------

A family of $ C^{2} $ plane diffeomorphisms is called a *Hénon-like family* if it can be written in the form $$f_{a, b , \eta}(x,y) = (1-ax^{2}+y, bx) + \varphi(x,y,a)$$ where $ a\in\mathbb R $, $ b \neq 0 $, and $ \varphi (x,y,a) $ is a $ C^{2} $ “perturbation” of the standard *Hénon family* $ h_{a,b}(x,y) = (1-ax^{2}+y, bx) $ [@Hen76] satisfying $$\|\varphi \|_{C^{2}(x,y,a)}\leq \eta.$$ In this paper we consider $ |b| \neq 0, \eta > 0 $ fixed sufficiently small and investigate the dynamics as the parameter $ a $ is varied. For simplicity we shall therefore omit $ b $ and $ \eta $ from the notation and denote a Hénon-like family by $ \{f_{a} \} $.
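For the unperturbed Hénon map the rate of area contraction is explicit, which makes the role of the smallness of $ |b| $ (dissipativity) transparent; this is a standard computation, included here for the reader's convenience:

```latex
Dh_{a,b}(x,y) \;=\;
\begin{pmatrix} -2ax & 1 \\ b & 0 \end{pmatrix},
\qquad
\det Dh_{a,b}(x,y) \;=\; -b,
```

so $ |\det Dh_{a,b}| = |b| $ at every point, and for a Hénon-like family as above the Jacobian determinant of $ f_{a} $ differs from $ -b $ by a quantity of order $ \eta $.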
For future reference we remark that the inverse of $ f_{a} $ is given by an equation of a similar form: $$f^{-1}_{a} (x,y) = (y/b, x-1 + ay^2/b^2) + \tilde\varphi (x,y,a)$$ where $ \|\tilde \varphi \|_{C^{2}(x,y,a)} \to 0$ as $ \|\varphi \|_{C^{2}(x,y,a)} \to 0 $. We shall suppose without loss of generality that $$\|\tilde \varphi \|_{C^{2}(x,y,a)} \leq \eta.$$

The boundary of hyperbolicity
-----------------------------

### Basic background

Hénon and Hénon-like families have been extensively studied over almost 30 years. One of the earliest rigorous results on the subject is [@DevNit79], in which it was shown that the non-wandering set $ \Omega_{a,b} $ is uniformly hyperbolic for all $ b\geq 0 $ and all sufficiently large $ a $ (depending on $ b $). On the other hand, for small $ b\neq 0 $ and $ a\lesssim 2 $ there is a positive probability of “strange attractors” which contain tangencies between stable and unstable leaves. This was first proved in [@BenCar91] for the Hénon family and later generalized in [@MorVia93] to Hénon-like families, see also [@WanYou01; @LuzVia03]. These attractors cannot be uniformly hyperbolic due to the presence of tangencies but turn out to satisfy weaker *nonuniform* hyperbolicity conditions [@BenYou93; @BenYou00; @BenVia01].

### Complex methods

More recently Bedford and Smillie have described the transition between these two regimes for Hénon families by identifying and describing some of the properties of the *boundary of uniform hyperbolicity* [@BedSmi06]. In particular they show that for small $ |b| $, the nonwandering set is uniformly hyperbolic up until the first parameter $ a $ at which a tangency occurs between certain stable and unstable manifolds. Combining this with the statements contained in [@BedSmi02], their results also imply uniform bounds on the Lyapunov exponents of all invariant probability measures at the bifurcation parameter [@Bed05].
Their methods rely crucially on previous work [@BedSmi04], which in turn is based on the polynomial nature of the Hénon family, a feature which allows them to consider its complexification and to apply original and highly sophisticated arguments of holomorphic dynamics.

### Real methods

In this paper we develop a new and completely different strategy for the problem, based purely on geometric “real” arguments, which have the advantage of applying to general $ C^{2} $ *Hénon-like* families. We also obtain the analogous *uniformity* results by showing that the hyperbolicity expansion and contraction rates are uniform right up to the point of tangency and that even *at* the point of tangency a strong version of nonuniform hyperbolicity continues to hold: all Lyapunov exponents of all invariant measures are uniformly bounded away from 0.

For all $ |b| > 0 $ and $ \eta >0 $ sufficiently small we have the following property. For every Hénon-like family $ \{f_{a}\}_{\ a\in\mathbb R} $ of plane diffeomorphisms there exists a unique $ a^{*} $ such that

1.  For all $ a> a^{*} $ the nonwandering set $ \Omega_{a} $ is uniformly hyperbolic;

2.  For $ a=a^{*} $ the nonwandering set $ \Omega_{a^{*}} $ contains an orbit of tangency but is “almost uniformly hyperbolic” in the sense that all Lyapunov exponents of all invariant probability measures supported on $ \Omega $ are uniformly bounded away from 0.

Moreover, the bounds on the expansion and contraction rates for all $ a\geq a^{*} $ are independent of $ a $ and of the family.

### Singular perturbations

We remark that this is not the only definition of Hénon-like existing in the literature. One standard approach is to consider “singular” perturbations of the limiting one-dimensional map corresponding to the case $ b=0 $: $$f_{a}(x,y) = (1-ax^{2}, 0) + \varphi_{a}(x,y).$$ This formulation however has some slight technical issues.
For example, one cannot assume that $ \|\varphi_{a}\|_{C^{2}} $ is small on all of $ \mathbb R^{2} $, since that would violate the requirement that $ f_{a} $ be a global diffeomorphism of $ \mathbb R^{2} $. This can be dealt with by restricting our attention to some compact region, say $ [-2, 2] \times [-2,2] $, and supposing only that $ \|\varphi_{a}\|_{C^{2}}\leq \eta $ in this region. Our arguments apply in this case also and yield a more local result on the hyperbolicity of the nonwandering set restricted to $ [-2, 2] \times [-2,2] $.

Basic definitions {#basicdefs}
-----------------

### Nonwandering set

We recall that a point $ z $ belongs to the *nonwandering* set $ \Omega $ of $ f $ if for every neighbourhood $ \mathcal U $ of $ z $ there exists some $ n\geq 1 $ such that $ f^{n}(\mathcal U)\cap \mathcal U \neq \emptyset $. The nonwandering set is always invariant and closed (and thus, if bounded, also compact).

### Uniform hyperbolicity

We say that a compact invariant set $ \Omega $ is *uniformly hyperbolic* (with respect to $ f $) if there exist constants $ C^{u}, C^{s}>0, \lambda^{u}>0>\lambda^{s} $ and a *continuous* decomposition $ T\Omega=E^{s}\oplus E^{u} $ of the tangent bundle such that for every $ z\in \Omega $, every non-zero vector $ v^{s}\in E^{s}_{z} $ and $ v^{u}\in E^{u}_{z} $, and every $ n\geq 1 $ we have $$\label{UH} \|Df^{n}_{z}(v^{s})\|\leq C^{s}e^{\lambda^{s} n} \quad\text{and} \quad \|Df^{n}_{z}(v^{u})\|\geq C^{u}e^{\lambda^{u} n}.$$ By standard hyperbolic theory, the stable and unstable subspaces $ E^{s}_{z}, E^{u}_{z} $ are tangent everywhere to the stable and unstable manifolds. In particular uniform hyperbolicity is incompatible with the presence of any tangencies in $ \Omega $ between any stable and any unstable invariant manifolds associated to points of $ \Omega $.

### Nonuniform hyperbolicity

A weaker notion of hyperbolicity can be formulated in terms of invariant measures.
For simplicity we restrict our discussion to the two-dimensional setting, as relevant to the situation we consider in this paper. Let $ \mu $ be an $ f $-invariant ergodic probability measure with support in some compact invariant set $ \Omega $. By Oseledec’s Ergodic Theorem [@Ose68] there exist constants $ \lambda^{u} \geq \lambda^{s} $ and a measurable decomposition $ T\Omega=E^{s}\oplus E^{u} $ such that for $ \mu $-almost every $ z $ and every non-zero vector $ v^{s}\in E^{s}_{z} $ and $ v^{u}\in E^{u}_{z} $ we have $$\label{NUH} \lim_{n\to\infty} \frac{1}{n}\log\|Df^{n}_{z}(v^{s})\| = \lambda^{s} \quad\text{and}\quad \lim_{n\to\infty} \frac{1}{n}\log\|Df^{n}_{z}(v^{u})\| = \lambda^{u}.$$ The constants $ \lambda^{s} $ and $ \lambda^{u} $ are called the *Lyapunov exponents* associated to the measure $ \mu $. We say that $ \mu $ is *hyperbolic* [@Pes76; @Pes77] if $$\lambda^{u}>0>\lambda^{s}.$$ Clearly \eqref{UH} implies \eqref{NUH} for any $ \mu $. The converse however is false in general: the measurable decomposition may not extend to a continuous one on all of $ \Omega $ and the exponential expansion and contraction in \eqref{NUH} implies only a limited version of \eqref{UH} in which the constants $ C^{s}, C^{u} $ are measurable functions of $ z $ and not bounded away from 0. This definition of hyperbolicity in terms of Lyapunov exponents is sometimes called *nonuniform hyperbolicity* and is consistent in principle with the existence of tangencies between stable and unstable manifolds. ### The boundary between uniform and nonuniform hyperbolicity In general there may be many ergodic invariant probability measures supported in $ \Omega $ of which some may be hyperbolic and some not. Even if they are all hyperbolic, the corresponding Lyapunov exponents may not be uniformly bounded away from $ 0 $.
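As a concrete complement to these definitions (a purely illustrative numerical sketch, not part of the argument), the limits in \eqref{NUH} can be estimated by pushing a tangent vector along an orbit with the derivative and renormalizing at every step. The classical parameter values $a=1.4$, $b=0.3$ and the initial point below are illustrative assumptions, not taken from the text.

```python
import math

def henon(x, y, a=1.4, b=0.3):
    # Hénon map in the parametrization used in the text: (x, y) -> (1 - a x^2 + y, b x).
    return 1.0 - a * x * x + y, b * x

def top_lyapunov(n=100_000, a=1.4, b=0.3):
    """Estimate lambda^u = lim (1/n) log ||Df^n_z(v)|| along a typical orbit."""
    x, y = 0.1, 0.1
    v1, v2 = 1.0, 0.0              # tangent vector, renormalized at every step
    acc = 0.0
    for _ in range(n):
        # Df at (x, y) is [[-2 a x, 1], [b, 0]]
        w1 = -2.0 * a * x * v1 + v2
        w2 = b * v1
        norm = math.hypot(w1, w2)
        acc += math.log(norm)
        v1, v2 = w1 / norm, w2 / norm
        x, y = henon(x, y, a, b)
    return acc / n
```

For these illustrative parameters the estimate settles near $0.42$; the second exponent can then be recovered from $\lambda^{u}+\lambda^{s}=\log|\!\det Df|=\log|b|$, the determinant being constant.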
The situation in which all Lyapunov exponents of all ergodic invariant measures are uniformly bounded away from zero is, in some sense, as “uniformly hyperbolic” as one can get while admitting the existence of tangencies. This situation can indeed occur, for example in the present context of Hénon-like maps. A first example of a set satisfying this property was given in [@CaoLuzRioTan]. A one-dimensional version ------------------------- After completing the proof of Theorem 1 we realized that *much simpler* versions of our arguments yield an analogous, new and non-trivial, result in the context of one-dimensional maps. We explain and give a precise formulation of this result. We consider first the quadratic family $$h_{a}(x) = 1-ax^{2}.$$ We choose this particular parametrization for convenience and consistency with our two-dimensional results, but any choice of smooth family of unimodal or even multimodal maps with negative Schwarzian derivative would work in exactly the same way. It is well known that for $ a>2 $ the nonwandering set $ \Omega_{a} $ is uniformly expanding, although we emphasize here that this depends crucially on the negative Schwarzian derivative property. The negative Schwarzian property is not robust with respect to $ C^{2} $ perturbations and standard methods do not therefore yield this statement for such perturbations. There exists a constant $ \eta>0 $ such that if a family $ \{g_{a} \}$ of $ C^{2} $ one-dimensional maps satisfies $$\|g_{a}-h_{a}\|_{C^{2}}\leq \eta$$ then there exists a unique parameter value $ a^{*} $ such that 1. For all $ a>a^{*} $ the non-wandering set $ \Omega $ is uniformly hyperbolic; 2. For $ a=a^{*} $ the Lyapunov exponents of all ergodic invariant probability measures are all positive and uniformly bounded away from 0. Moreover, the rates of expansion and the bound on the Lyapunov exponents are uniform, independent of the family and of the parameter.
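To illustrate the regime $a>2$ numerically (an aside, not part of the argument): away from the nonwandering set almost every orbit of $h_a$ escapes to $-\infty$, which is immediate to observe for, say, the illustrative value $a=2.2$, while the fixed points of course stay put.

```python
def h(x, a):
    # The quadratic family h_a(x) = 1 - a x^2.
    return 1.0 - a * x * x

def escape_iterate(x, a, bound=1e6, max_iter=100):
    """Return the first n with |h_a^n(x)| > bound, or None if no escape is seen."""
    for n in range(1, max_iter + 1):
        x = h(x, a)
        if abs(x) > bound:
            return n
    return None
```

At $a=2.2$ the critical orbit $0\mapsto 1\mapsto -1.2\mapsto\dots$ leaves $[-1,1]$ and diverges within a handful of steps, whereas the fixed point $p_{2.2}=(-1+\sqrt{1+8.8})/4.4$ satisfies $h(p)=p$ exactly.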
The proof of this result is exactly the same as that of Theorem 1, but very much simpler, as the more geometrical arguments concerning curvature etc. become essentially trivial. We emphasize that the uniform expansivity of $ \Omega_{a} $ for a particular parameter value $ a>2 $ is of course robust under sufficiently small perturbations of $ f_{a} $, by standard hyperbolic theory. However, this approach *requires the size of the perturbation to depend on the parameter* $ a $ and in particular to shrink to zero as $ a $ tends to $ 2 $. The crucial point of our approach, in this one-dimensional setting as in the two-dimensional one, is that the size of the perturbation *does not* depend on the parameter. Overview of the paper --------------------- We have divided our argument into three main sections. In Section \[proofnonwan\] we analyze the geometric structure of stable and unstable manifolds of the two fixed points and define the parameter $ a^{*} $ as the first parameter for which a tangency occurs between some compact parts of these manifolds. We also identify a region $ \mathcal D $ which we show contains the non-wandering set. In Section \[sectionhyp\] we define a “critical neighbourhood” $ \Delta_{\varepsilon} $ outside of which our maps are uniformly hyperbolic by simple perturbation arguments. However, $\Delta_{\varepsilon} $ does contain points of $ \Omega $ and thus we cannot ignore this region. To control the hyperbolicity in $ \Delta_{\varepsilon} $ we introduce the notions of Hyperbolic Coordinates and Critical Points which form the key technical core of our approach. Finally, in Section \[sectionhypest\] we apply these techniques to prove the required hyperbolicity properties.
The non-wandering set {#proofnonwan} ===================== In this section we define the parameter $a^{*}$ as in the statement of our main Theorem, and show that for $ a\geq a^{*} $ the nonwandering set is contained in the closure of the unstable manifold of a hyperbolic fixed point restricted to a certain compact region of $ \mathbb R^{2} $. The parameter $ a^{*} \protect$ ------------------------------- We define the bifurcation parameter $ a^{*} $ below as the first parameter for which there is a tangency between certain compact parts of the stable and unstable manifolds of the fixed points. This does not immediately imply that it is a first parameter of tangency, though this will follow from our proof of the fact that the nonwandering set is uniformly hyperbolic for all $ a> a^{*} $. ### Fixed points and invariant manifolds for the one-dimensional limit For the endomorphisms $h_{a}=h_{a,0}$ with $a\geq 2$, there are two fixed points, $$p_a = \frac{-1+\sqrt{1+4a}}{2a}> q_a = \frac{-1-\sqrt{1+4a}}{2a}$$ both hyperbolic.
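These formulas, and the derivative values used repeatedly below, are easy to check numerically (an illustrative sketch only): since $h_{a}'(x)=-2ax$, at $a=2$ one gets $h'(p_{*})=-2$ at $p_{*}=1/2$ and $h'(q_{*})=4$ at $q_{*}=-1$.

```python
import math

def quadratic_fixed_points(a):
    """Fixed points of h_a(x) = 1 - a x^2: the roots of a x^2 + x - 1 = 0."""
    s = math.sqrt(1 + 4 * a)
    return (-1 + s) / (2 * a), (-1 - s) / (2 * a)   # (p_a, q_a), with p_a > q_a

def dh(x, a):
    # h_a'(x) = -2 a x
    return -2.0 * a * x
```

Both derivatives have absolute value larger than $1$ at $a=2$, confirming that the two fixed points are hyperbolic (repelling).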
For the special parameter value $ a=2 $, to simplify the notation below, we write $$f_{*}=h_{2,0}, \text{ and denote the two fixed points by } p_{*}=(1/2, 0) \text{ and } q_{*}=(-1,0).$$ Since $ q_{*} $ and $ p_{*} $ are repelling in the horizontal direction, their stable sets are simply their preimages: $$W^{s}(q_{*}) = \bigcup_{n\geq 0}f_{*}^{-n}(q_{*}) \quad\text{and} \quad W^{s}(p_{*})=\bigcup_{n\geq 0}f_{*}^{-n}(p_{*}).$$ In particular these sets contain the following curves $$f_{*}^{-1}(q_{*}) = \{(x, y): f_{*}( x, y) = (1-2 x^{2}+ y, 0) = (-1, 0)\} = \{ y = 2 x^{2} - 2\}$$ and $$f_{*}^{-2}(q_{*}) \supset \{(x, y): f_{*}( x, y) = (1-2 x^{2}+ y, 0) = (1, 0)\} = \{ y = 2 x^{2} \}$$ ![First two “generations” of $ W^{s}(q_{*}) $ and $ W^{s}(p_{*}) $.[]{data-label="Wsb=0"}](wsbNew){width="30.00000%"} The first preimage of $ q_{*} $ is a parabola with a minimum at $ (0, -2) $, intersecting the $ x $-axis at $ x=\pm 1 $ and having slope equal to $ -4 $ at the point $ q_{*}=(-1,0) $, and the second is a parabola with a minimum at $ (0,0) $. Similarly we can compute $$f_{*}^{-1}(p_{*}) = \{ z=( x, y): f_{*}( z) = (1-2 x^{2}+ y, 0) = (1/2, 0)\} = \{ y = 2 x^{2} - 1/2\}$$ which is a parabola with a minimum at $ (0, -1/2) $, intersecting the $ x $-axis at $ x= \pm 1/2 $ and having slope equal to $ 2 $ at the point $ p_{*}= (1/2, 0) $, and $$f_{*}^{-2}(p_{*}) \supset \{ z=( x, y): f_{*}( z) = (1-2 x^{2}+ y, 0) = (-1/2, 0)\} = \{ y = 2 x^{2} - 3/2\}$$ which is a parabola with a minimum at $ (0, -3/2) $. The unstable manifolds $ W^{u}(q_{*}) $ and $ W^{u}(p_{*}) $ can be defined and computed in a similar way and are easily seen to be horizontal. ### Fixed points for Hénon-like families Consider first the *Hénon family* $h_{a,b}(x,y)=(1-ax^{2}+y,bx).$ For $b\neq 0$, $h_{a,b}$ is a diffeomorphism.
The hyperbolicity of the fixed points implies that there exists a neighbourhood of the set $ \{ (a,0):a\geq 2 \}$ corresponding to pairs of parameters for which there is an *analytic continuation* $ q_{a,b}, p_{a,b} $ as hyperbolic fixed points of $h_{a,b}$. For $\eta$ sufficiently small, the analytic continuations $q_{f_{a}}$ and $p_{f_{a}}$ are also well defined and hyperbolic. For simplicity we shall often just refer to these two points as $ q, p $ leaving implicit their dependence on $ f $. Explicit formulas for $ q_{a,b}, p_{a,b} $ can be easily derived from the equation $ ( 1-ax^{2}+y, bx )= (x, y ) $ but these would not be particularly useful. Instead we just observe that the fixed points must lie on the line $ \{y=bx\} $ and so in particular this means that for $ a\approx 2 $ and $ b \gtrapprox 0 $ the vertical coordinates of $ q_{a,b} $ and $ p_{a,b} $ are negative and positive respectively, and the converse for $ b \lessapprox 0 $. Clearly the same holds for $q=q_{f_{a}}$ and $p=p_{f_{a}}$ if $ \eta $ is sufficiently small. Moreover, the determinant of $ Dh_{a,b} $ is given by $$\det Dh_{a,b}=\det \begin{pmatrix} -2ax & 1 \\ b & 0 \end{pmatrix} = -b.$$ In particular the determinant is constant and negative if $ b $ is positive and positive if $ b $ is negative. We thus refer to the case $ b>0 $ as the *orientation-reversing* case, and the case $ b<0 $ as the *orientation-preserving* case.
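These observations can be verified directly (an illustrative numerical sketch; the values $a=2$, $b=0.1$ are arbitrary): the fixed points lie on $\{y=bx\}$, the Jacobian determinant is identically $-b$, and for $b>0$ the vertical coordinate is positive at $p$ and negative at $q$.

```python
import math

def henon(x, y, a, b):
    # h_{a,b}(x, y) = (1 - a x^2 + y, b x)
    return 1.0 - a * x * x + y, b * x

def henon_fixed_points(a, b):
    """Fixed points lie on {y = b x}; the abscissa solves a x^2 + (1 - b) x - 1 = 0."""
    s = math.sqrt((1 - b) ** 2 + 4 * a)
    xp = (-(1 - b) + s) / (2 * a)   # continuation of p (x > 0)
    xq = (-(1 - b) - s) / (2 * a)   # continuation of q (x < 0)
    return (xp, b * xp), (xq, b * xq)

def eigenvalues(x, a, b):
    """Eigenvalues of Dh_{a,b} = [[-2ax, 1], [b, 0]]: roots of t^2 + 2 a x t - b = 0."""
    d = math.sqrt((a * x) ** 2 + b)
    return -a * x + d, -a * x - d
```

For these values the expanding eigenvalue comes out $\approx -2$ at $p$ and $\approx 4$ at $q$, matching the one-dimensional derivatives $h'(1/2)=-2$ and $h'(-1)=4$, and the two eigenvalues at each fixed point multiply to the determinant $-b$.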
![Fixed points and their local stable and unstable manifolds for the orientation-reversing ($ b>0 $) and the orientation-preserving ($ b<0 $) case (dotted curves indicate negative eigenvalues)[]{data-label="fixedpoints"}](FixedPointsbpos "fig:"){width="45.00000%"} ![Fixed points and their local stable and unstable manifolds for the orientation-reversing ($ b>0 $) and the orientation-preserving ($ b<0 $) case (dotted curves indicate negative eigenvalues)[]{data-label="fixedpoints"}](FixedPointsbneg "fig:"){width="45.00000%"} Recall that the determinant of a matrix is the product of the eigenvalues, and thus in particular, the sign of the determinant has implications for the sign of the eigenvalues which, as we shall see, in turn has implications for the geometry of the stable and unstable manifolds of the fixed points. For $ b=0 $ the fixed points $ p_{*} $ and $ q_{*} $ have derivatives $ -2 $ and $ 4 $ respectively, and thus, for $ b\neq 0 $ and $ \eta $ small, the expanding eigenvalues of $ p $ and $ q $ are $ \approx -2 $ and $ \approx 4 $ respectively. This implies that for $ b \gtrapprox 0 $, the orientation-reversing case, the contracting eigenvalues of $ q $ and $ p $ must be $ <0 $ and $ >0 $ respectively, while for $ b<0 $, the orientation preserving case, they must be $ >0 $ and $ <0 $ respectively. The two situations are illustrated in Figure \[fixedpoints\] with dotted lines showing the invariant manifolds corresponding to negative eigenvalues. ### Analytic continuation of stable and unstable manifolds By classical hyperbolic theory, compact parts of the stable manifolds depend continuously on the map (see e.g. [@PalMel82]). Therefore, for small $ b $ and small $ \eta $ the analytic continuations $ q, p $ of the fixed points $ q_{*} $ and $ p_{*} $ have stable and unstable manifolds which are close to those computed above for the limiting case.
Elementary calculations show that the actual geometrical relations between these continuations depend on whether we consider the orientation reversing ($ b>0 $) or the orientation preserving ($ b<0 $) case, and are as illustrated in Figure \[intersection\]. ![Invariant manifolds for $a>a^{*}$[]{data-label="intersection"}](compman){width="9cm"} We let $$\Gamma^{u}_{a}(p)\subset W^{u}_{a}(p), \quad \Gamma^{s}_{a}(p)\subset W^{s}_{a}(p), \quad \Gamma^{s}_{a}(q)\subset W^{s}_{a}(q), \quad \Gamma^{u}_{a}(q)\subset W^{u}_{a}(q),$$ denote the compact parts of the stable and unstable manifolds as shown in Figure \[intersection\] and notice that, in particular, since for $ b=0 $ and $a>2$ the unstable manifolds of $p_{a}$ and $ q_{a} $ extend to the whole of the line, for each $ a>2 $ and $ b>0 $ sufficiently small we have that $W^{u}_{loc}(p)$ crosses $W^{s}_{loc}(p)$ four times, and for each $ a>2 $ and $ b<0 $ sufficiently small we have that $ W^{u}_{loc}(q) $ crosses $ W^{s}_{loc}(q) $ four times, and also we can ensure that the compact parts defined above and in the Figure intersect transversally. Again this continues to hold also for a Hénon-like family for sufficiently small $ \eta $. ### Definition of $ a^{*} \protect$ {#astar} We are now ready to define the parameter $ a^{*} $. We fix $ b\neq 0 $.\ For an *orientation-reversing* ($ b>0 $) Hénon-like family $ f_{a} $, we let $$a^{*}=\inf\{a: \Gamma^{s}_{a}(p) \text{ and } \Gamma^{u}_{a}(q) \text{ intersect transversally }\}.$$ For an *orientation-preserving* ($ b<0 $) Hénon-like family $ f_{a} $, we let $$a^{*}=\inf\{a: \Gamma^{s}_{a}(q) \text{ and } \Gamma^{u}_{a}(q) \text{ intersect transversally }\}.$$ We also define a parameter $$\hat a < a^{*}$$ as the $ \inf $ of parameters $ a $ for which $W^{u}_{loc}(p)$ crosses $W^{s}_{loc}(p)$ four times ($ b>0 $) or $W^{u}_{loc}(q)$ crosses $W^{s}_{loc}(q)$ four times ($ b<0 $). Clearly this is a weaker condition and thus $ a^{*}\geq \hat a$.
Notice that $ a^{*} $ and $ \hat a $ converge to $ a=2 $ as $ b $ and $ \eta $ tend to 0. Localization of the nonwandering set ------------------------------------ In this section we carry out a detailed geometrical study aimed at showing that the nonwandering set is contained in a relatively restricted region. To prove hyperbolicity we will then just have to focus our efforts on this region. For the moment we restrict ourselves to the orientation-reversing case. At the end we shall remark how the orientation-preserving case follows by identical arguments with a few minor changes of notation. First of all we let $ \mathcal D $ denote the closed topological disc bounded by compact pieces of $ W^{u}(p) $ and $ W^{s}(q) $ as shown in Figure \[RegionD\]. ![The region $ \mathcal D $[]{data-label="RegionD"}](regionDnew){width="4cm"} The main result of this section is the following \[nonwan\] For all $ a> \hat a $ we have $$\Omega \subset \overline{W^{u}(p)} \cap \mathcal D \cap \{[-2,2] \times (-4b, 4b)\}.$$ In this paper we are interested in parameters $ a \geq a^{*} (\geq \hat a) $, but it is worth observing that it follows from Proposition \[nonwan\] that for all $ a\in (\hat a, a^{*}) $, and so in particular for a certain range of parameter values which may contain multiple tangencies, the recurrent dynamics is captured to some extent by the dynamics on $ W^{u}(p) $. This includes in particular all complex dynamical phenomena associated to the unfolding of the tangency at the parameter $ a^{*} $ (indeed, this includes the range of parameters considered by Benedicks-Carleson in [@BenCar91]). We split the proof of Proposition \[nonwan\] into several Lemmas. Once again we deal first with the case $ b>0 $. At the end of the proof we indicate the necessary modifications in order to deal with the case $ b< 0 $.
We first define a relatively “large” region $ R $ and show that $ \Omega\subset R $ and then show in separate arguments that $ \Omega \subset \mathcal D$ and $ \Omega\subset \overline{W^{u}(p)} $, and finally refine our estimate to obtain the statement in the Proposition. Let $$R= (-2, 2)\times (-4, 4b) \subset \hat R = (-2, 2)\times (-4, 2)$$ We also define the following 6 (overlapping) regions (see Figure \[RegionV\]): $$\begin{aligned} V_{1}&=\{(x,y): x\leq -2, y\leq |x|\}, \\ V_{2}&=\{(x,y): x\leq 2, y\leq -4\},\\ V_{3}&=\{(x,y): x\geq 2, y\leq 2\},\\ V_{4}&=\{(x,y): x\geq -2, y\geq 2\},\\ V_{5}&=\{(x,y): x\leq -2, y\geq |x|\},\\ V_{6}&= \{(x,y): |x|\leq 2, y\geq 4b\}\end{aligned}$$ ![Regions $ V_{1} $ to $ V_{6} $[]{data-label="RegionV"}](RegionsV){width="9cm"} Then $$\hat R= \mathbb R^{2}\setminus (V_{1}\cup\dots\cup V_{5}) \text{ and } R = \mathbb R^{2}\setminus (V_{1}\cup\dots\cup V_{6})$$ We prove the following two statements. \[1.1\] $ \Omega \subset R$. We show that the orbit of every point $ (x,y)\in V_{i} $, $ i=1,\ldots,6 $ is unbounded in either backward or forward time. This implies in particular that no such point is nonwandering. For $ n \in \mathbb Z $, let $ (x_{n}, y_{n})= f_{a}^n(x,y) $. We shall use repeatedly the fact that $ a\approx 2 $ and $ b \approx 0 $. For $ (x,y) \in V_{1} $ we have $ x\leq -2 $ and therefore $ x_{1}=1-ax^{2}+y +\varphi_{1}(x,y,a)\leq 1-ax^{2}+|x| +\eta \leq -2 $ and $ y_{1}=bx +\varphi_{2}(x,y,a) \leq -2b+\eta < |x_{1}| $, as long as $\eta $ is sufficiently small. Thus $ (x_1,y_1) \in V_{1} $, and $ |x_{1}| \geq ax^{2}-|x|-1 -\eta \geq 2|x| $. Repeating the calculation we have $ |x_{n}|\geq 2^{n}|x| $ and so $ |x_{n}|\to\infty $. For $ (x,y)\in V_{2} $ we have $ x_{1}=1-ax^{2}+y +\varphi_{1}(x,y,a)\leq -2$ and $ y_{1}=bx +\varphi_{2}(x,y,a)\leq 2b +\eta < |x_{1}| $. Thus $ (x_{1},y_{1})\in V_{1} $ and so $ |x_{n}|\to\infty $. 
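The escape estimates for $ V_{1} $ and $ V_{2} $ are easy to reproduce numerically for the unperturbed map (an illustrative sketch with the arbitrary values $a=2$, $b=0.05$; the sample points are arbitrary elements of the two regions).

```python
def henon_like(x, y, a=2.0, b=0.05):
    # Unperturbed model map h_{a,b}; a C^2-small perturbation does not affect the estimates.
    return 1.0 - a * x * x + y, b * x

def forward_escape_time(x, y, bound=1e6, max_iter=50):
    """First n with |x_n| > bound along the forward orbit, or None."""
    for n in range(1, max_iter + 1):
        x, y = henon_like(x, y)
        if abs(x) > bound:
            return n
    return None
```

For instance, $(-2.5,-3)\in V_{1}$ blows up within a few iterates, and $(1,-5)\in V_{2}$ first lands in $ V_{1} $ and then does the same.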
Similarly, for $ (x,y)\in V_{3} $ we have $ x_{1}=1-ax^{2}+y+\varphi_{1}(x,y,a) \leq 1- 2\cdot 2^{2} + 2+\eta \leq -2$ and $ y_{1}=bx +\varphi_{2}(x,y,a)\leq 2b +\eta < |x_{1}| $. Thus $ (x_{1},y_{1})\in V_{1} $ and we argue as above. For $ (x,y) \in V_{4} $ we consider backward iterations of $ f_{a} $. Note that $ (x_{-1}, y_{-1})= (y/b, x-1+ay^{2}/b^{2}) +\tilde \varphi (x,y,a)$. Then $ x_{-1}\geq 2/b -\eta \geq -2 $ and $ y_{-1}\geq -2+4a/b^{2}-\eta \geq 2 $. Thus $ f^{-1}(x,y)\in V_{4} $ and $ y_{-1}\geq y/b $. Therefore $ y_{-n}\geq y/b^{n} $ and so $ |y_{-n}|\to\infty $. For $ (x,y) \in V_{5} $ we have $ y\geq |x| \geq 2 $. Thus $ x_{-1}\geq y/b-\eta \geq 2 $ and $ y_{-1}\geq y^{2}/b^{2}\geq 2 $. So we have that $ f^{-1}(x,y)\in V_{4} $, and we argue as above. For $ (x,y) \in V_{6} $, we have $ x_{-1}=y/b+\tilde \varphi _{1}(x,y,a) \geq 2 $ and $ y_{-1}\geq 2 $. Therefore, $ f_{a}^{-1}(x,y) \in V_{4} $ and again, we argue as above. $ \mathcal D \subset \hat R$. The arguments used above have implications for the locations of the stable and unstable manifolds of the fixed points. Indeed the stable manifolds of the fixed points cannot intersect $ V_1\cup V_2 \cup V_3 $ since all points in this region tend to infinity in forward time, whereas, by definition, points in the stable manifolds tend to the fixed points under forward iteration. Similarly the unstable manifolds of the fixed points cannot intersect $ V_{4}\cup V_{5} $ since all points in this region tend to infinity in backward time. By definition $ \mathcal D $ is bounded by arcs of stable and unstable manifolds of the fixed points as in the Figure and therefore $ \mathcal D \subset \mathbb R^{2}\setminus (V_{1}\cup \ldots \cup V_{5}) = (-2, 2)\times (-4, 2) $. ![Regions $R_{0}^{i}$[]{data-label="regionsR"}](regri){width="9cm"} $ \Omega\subset \mathcal D.
$ To show that $ \Omega\subset \mathcal D $ we refine the strategy used in the proof of the previous lemma, and show that the orbits of all points outside $ \mathcal D $ are unbounded in either backward or forward time. Since we have already shown that $ \Omega\subset R$, we need to consider only points in the region $ R \setminus \mathcal D $. ### Subdividing {#subdividing .unnumbered} We write $$R \setminus \mathcal D = R_{0}\cup R_{1}\cup R_{2}\cup R_{3}$$ where the regions $ R_{0}, R_{1}, R_{2}, R_{3}$ are defined as follows. Consider the points $A$ and $B$ of intersection of $W^{u}(p)$ and $W^{s}(q)$ and $C$, $D$ of intersection of $W^{s}(q)$ and $y=4b$ as in Figure \[regionsR\]. We let $R_{0}$ denote the closed region bounded by the arcs of manifold $AC$, $AB$ and $BD$, and the segment $CD$. We let $ R_{1} $ denote the region bounded by the arcs of manifold $ HF$, $FA$ and $AC$, and the segment $HC$. We let $ R_{2} $ denote the region bounded by the arcs of $W^{u}(p)$ and $W^{s}(q)$ between the points $E$ and $F$, as in Figure \[regionsR\]. We let $ R_{3} = R \setminus (\mathcal D \cup R_{0}\cup R_{1}\cup R_{2} )$. We also define $$\tilde R_{3}\subset R_{3}$$ as the region satisfying $-2b-\eta <y<2b+\eta$ on the left side of the arc of $W^{s}(q)$ between the points $I$ and $J$, of intersection of that manifold with the lines $y=-2b-\eta$ and $y=2b+\eta$, as in Figure \[RegionsRNew\](b). ### Points of $ R_{0} \protect$ escape in backward time {#points-of-r_0-protect-escape-in-backward-time .unnumbered} Since $b$ is small, we have that all the points $(x,y) \in R_{0}$ satisfy $x>0.2$. Notice that, for the unperturbed Hénon map $h_{a,b}(x,y)=(1-ax^{2}+y, bx)$, we have that any piece of curve $ \gamma $ with slope less than $ 1/10 $ contained in the region where $|x|>0.2$ is mapped to another curve with slope less than $ 1/10 $.
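Before the derivative computation below, note that this cone invariance is simple to test numerically (an illustrative sketch; the values $a=2$, $b=0.05$ are assumptions chosen for the test): slopes stay below $1/10$ wherever $|x|>0.2$, and the vector is expanded wherever $|x|>0.5$.

```python
def push_vector(x, v1, v2, a=2.0, b=0.05):
    """Image of a tangent vector (v1, v2) under Dh_{a,b} = [[-2ax, 1], [b, 0]]."""
    return -2.0 * a * x * v1 + v2, b * v1
```

Sampling abscissas with $|x|\geq 0.2$ and slopes in $[-1/10, 1/10]$ confirms both properties.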
Indeed, letting $ (v_{1}, v_{2}) $ denote a tangent vector to $ \gamma $ with $ |v_{2}|/|v_{1}| < 1/10 $, we have $ (v_{1}', v_{2}') = Dh_{a,b}(v_{1}, v_{2}) = (-2axv_{1}+v_{2}, bv_{1}) $ whose slope is $ |v_{2}'|/|v_{1}'| = |b/(-2ax+(v_{2}/v_{1}))|<1/10 $, provided $b$ is small and $a$ is close to 2. For future reference, notice that, if $|x|>0.5$, we also have that the norm of $(v_{1},v_{2})$ is uniformly expanded. So, since $f_{a}$ is close to $h_{a,b}$ in the $C^{2}$ topology, we can assume that $f_{a}$ also has this property in $R_{0}$. Now call $\alpha_{n}$ the successive images of the segment $CD$ intersected with $R_{0}$. Since they cannot intersect each other, and $CD$ contains a point of the stable manifold of $p$, the curves $\alpha_{n}$ determine a system of “fundamental domains” in $R_{0}$: they cross $R_{0}$ from one stable boundary to the other, and they converge to the arc of unstable manifold $AB$. Call $R_{0}^{i}$ the region of $R_{0}$ between $\alpha_{i-1}$ and $\alpha _{i}$, $\alpha _{0}=CD$, and notice that $f^{-1}(R_{0}^{i})\subset R_{0}^{i-1}$. We also have that $f^{-1}(R_{0}^{1})$ falls outside $R$. That implies that $ R_{0} \setminus AB$ does not intersect $\Omega $, and any point which has an iterate in $R_{0}\setminus AB$ is not in $\Omega$. ### Points of $ R_{1} \protect$ map to $ R_{3} \protect$ {#points-of-r_1-protect-map-to-r_3-protect .unnumbered} We show that $ f(R_{1}) \cap R \subset R_{3} $. Indeed, the unstable eigenvalue of $ q $ is positive and therefore $ f(R_{1}) $ must remain on the same side of $ W^{s}(q) $ as $ R_{1} $. Also, since $ f(R) \subset \mathbb R \times [-2b-\eta, 2b + \eta] $ we have that $ f(R_{1}) $ does not intersect any of $ \mathcal D, R_{0}, R_{1}, R_{2} $. ![Regions $ R_{1} $ to $ R_{4} $[]{data-label="RegionsRNew"}](RegionsRNew){width="9cm"} ### Points of $ R_{3} \protect$ map to $ \tilde R_{3} \protect$ {#points-of-r_3-protect-map-to-tilde-r_3-protect .unnumbered} We now show that $ f(R_{3}) \subset \tilde R_{3}$.
Again, we use the fact that $ f(R) \subset \mathbb R \times [-2b-\eta, 2b + \eta] $. Then, since one of the components of the boundary of $R_{3}$ is an arc of stable manifold of $q$ containing the fixed point $q$, and the unstable eigenvalue of $q$ is positive, we conclude that the image of $R_{3} $ is contained in $ \tilde R_{3} $. ### Points of $ \tilde R_{3} \protect$ escape in forward time {#points-of-tilde-r_3-protect-escape-in-forward-time .unnumbered} We can assume, if $b$ is small, that all the points $(x,y)$ in $\tilde R_{3}$ satisfy $x<-0.5$ (notice that, for $b=0$, we have $q=(-1,0)$). Take a point $t$ in $\tilde R_{3}\setminus W^{s}(q)$, and connect $t$ to the boundary of $\tilde R_{3}$ through a horizontal line inside $\tilde R_{3}$, determining a point $t' \in W^{s}(q)$. Again, by the proximity of $f$ and $h_{a,b}$, and the fact that vectors with slope smaller than $1/10$ in $\tilde R_{3} \cap R$ are sent by $Dh_{a,b}$ to vectors with slope smaller than $1/10$, and uniformly expanded, we have that the horizontal distance between $f(t)$ and $f(t')$ is uniformly expanded. Applying $f$ repeatedly, as long as the image is inside $\tilde R_{3}\cap R$, we have that the horizontal distance between the successive images of $t$ and $W^{s}(q)$ increases exponentially. Then the forward images of $t$ leave $R$ in finite time. ### Points of $ R_{2} \protect$ map to $ R_{0} \protect$ in backward time {#points-of-r_2-protect-map-to-r_0-protect-in-backward-time .unnumbered} Notice that $ f^{-1}(R_{2})\cap R \subset R_{0} $ since all the other regions in $R$ outside $\mathcal D$ are mapped forward to the region $\tilde R_{3}$, and so do not contain points of the backward image of $R_{2}$. Moreover, the unstable boundary of $R_{2}$ belongs to $W^{u}(p)$ and approaches $p$ as we apply $f^{-1}$, and the stable boundary cannot cross $W^{s}(q)$, so $f^{-1}(R_{2})$ does not intersect $\mathcal D$.
Since $f^{-1}(R_{2})\cap R\subset R_{0}$, the points in there that are not in $W^{u}(p)$ leave $R$ under backward iteration. ![Invariant manifolds and the region $ \mathcal D $ for $ b< 0 $[]{data-label="negativeb"}](negativeb){width="9cm"} $ \Omega \subset \overline{W^{u}(p)} $. Notice first of all that by the $ \lambda $-lemma we have $ q\in\overline{W^{u}(p)} $. Now suppose by contradiction that there exists $ z=(x,y) \in \Omega $ with $ z\notin \overline{W^{u}(p)} $. Then there exist $ \varepsilon>0 $ and an $ \varepsilon $-neighbourhood $ B_{\varepsilon}(z) $ of $ z $ with $ B_{\varepsilon}(z) \cap \overline{W^{u}(p)} = \emptyset $. Since $ \Omega $ is $ f $-invariant we have $ f^{-n}(z)\in\Omega(f)\subset\mathcal D $ for all $ n\in\mathbb N $ and therefore $ z\in f^{n}(\mathcal D) $ for all $ n\in\mathbb N $. Notice that the boundary $ \partial f^{n}(\mathcal D) \subset W^{u}(p) \cup f^{n}(EB^{s}) $ (where $ EB^{s} $ denotes the piece of $ W^{s}(q) $ between $ E $ and $ B $, as in Figure \[regionsR\]). It is enough therefore to show that, for large $n$, $ \partial f^{n}(\mathcal D) $ is $ \varepsilon $-dense in $ f^{n}(\mathcal D) $ as this will imply that $ B_{\varepsilon}(z) \cap \overline{W^{u}(p)} \neq \emptyset $, contradicting the assumptions. This follows easily from the fact that $ f $ is (strongly) area contracting, and thus the area of $ f^{n}(\mathcal D) $ tends to zero as $ n\to\infty $. In particular we must have that $ B_{\varepsilon}(z)\cap \partial f^{n}(\mathcal D) \neq \emptyset$ for all sufficiently large $ n $. Moreover, the length of the part of the boundary which belongs to $ W^{s}(q) $ also tends to zero. Thus most of the boundary belongs to $ W^{u}(p) $ and thus we must have $ B_{\varepsilon}(z)\cap W^{u}(p) \neq \emptyset$ for all $ n $ sufficiently large. ### Completion of the proof Combining the results of the Lemmas stated above we have that $ \Omega\subset \overline{W^{u}(p)} \cap \mathcal D$.
The statement in the Proposition now follows immediately by observing that $ \Omega \subset \mathcal D $ implies $ \Omega \subset f(\mathcal D) $ and that $ f(\mathcal D) \subset [-2,2]\times [-4b, 4b] $ directly from the definition of $ f $ if $ \eta $ is sufficiently small. Finally, in the case $b<0$, we consider the stable and unstable manifolds of $q$ crossing as in Figure \[negativeb\] (the rectangle $R$ is exactly the same), determining the region $ \mathcal D$ in this case. The proof is entirely analogous considering the points $ A', B' $, etc., corresponding to the points $ A, B, $ etc., above. Hyperbolic coordinates and critical points {#sectionhyp} ========================================== The key idea of our whole strategy is the notion of *dynamically defined critical point* which relies on the fundamental notion of *hyperbolic coordinates*. In this section we introduce these notions and develop the main technical ideas which we will use. In Section \[prelim\] we clarify the relations between various constants used in the argument and introduce some preliminary geometric constructions. In Section \[hypcoord\] we discuss the definition and basic theory of hyperbolic coordinates. In Section \[curvature\] we introduce the idea of *admissible curves* and prove certain estimates concerning the images of admissible curves. Finally, in Section \[hypcrit\] we introduce the notion of dynamically defined critical point and prove that such critical points always exist in images of certain admissible curves. Preliminary geometric definitions and fixing the constants {#prelim} ---------------------------------------------------------- ### Fixing the constants {#constants} We now explain the required relations between the different constants used in the proof, and the order in which these constants are chosen. All constants are positive. 
First of all we fix two constants $$\delta = 1/10 \quad\text{ and } \alpha = 1/2.$$ These will be introduced in Sections \[fixptnhbd\] and \[curvature\] below. Even though we specify the actual numerical value of these constants we shall continue to use the constants in the argument because they have some specific geometric meaning and it is useful to keep track of their occurrence throughout the paper. We then fix a constant $ k_{0} $ large enough so that $$\label{k0} \frac{\sqrt{\delta}}{2\sqrt{3}} \left(\sqrt{3/\sqrt 5}\right)^{k_{0}-1}>1$$ In Section \[critnhbd\] we fix a constant $ \varepsilon $ which will then remain unchanged. Finally, at some finite number of places in the argument, we will require $ a $ to be sufficiently close to 2 and $ |b| $ and $ \eta $ to be sufficiently small. We remark that we can suppose that $ a $ is close to 2 without compromising the fact that hyperbolicity holds for all larger values of $ a $. Indeed, once we fix a neighbourhood of $ 2 $ in the $ a $ parameter space, we can always guarantee uniform hyperbolicity for values of $ a> 2 $ outside this neighbourhood by taking $ |b| $ and $ \eta $ sufficiently small, (depending on this neighbourhood). ### The fixed point neighbourhoods {#fixptnhbd} Recall first of all that the map $ f_{*}=h_{2,0} $ has two fixed points $ p_{*} $ and $ q_{*}=(-1,0) $ with $ f_{*}(1,0) = q_{*} $. For $ \delta=1/10 $ we let $ \mathcal Q = \mathcal Q_{0}:= B_{\delta}(q_{*}) $ be the open ball of radius $ \delta $ centred at $ q_{*} $ and $ \mathcal V=\mathcal V_{0} $ be the component of $ f^{-1}_{*}(\mathcal Q) $ not intersecting $ \mathcal Q $, see Figure \[Q\]. The expanding eigenvalue at the point $ q_{*} $ is equal to 4 and so we can suppose that $|a-2|, |b|, \eta, $ are all small enough so that $ \|Df_{z}\| > 3$ for all $ z\in \mathcal Q $. 
Then, for $ n\geq 0 $, let $$\mathcal Q_{n}(f)=\bigcap_{i=0}^{n}f^{-i}(\mathcal Q_{0}) \quad \text{ and } \quad \mathcal V_{n}(f)=f^{-1} (\mathcal Q_{n}(f)) \cap \mathcal V.$$ Notice that $ \mathcal V_{n} $ is just the component of $ f^{-1} (\mathcal Q_{n}(f)) $ containing $ (1,0) $. ![The neighbourhoods $\mathcal Q$ and $\mathcal V$[]{data-label="Q"}](neighbqv){width="9cm"} Since $ \mathcal Q_{n} $ is a neighbourhood of $ q $ for every $ n $, $ \mathcal V_{n}\setminus f^{-1}(W^{s}_{\delta}(q))$, where $ W^{s}_{\delta}(q) $ denotes the connected component containing $ q $ of $ W^{s}(q) \cap \mathcal Q_{0} $, has two components: we let $$\mathcal V_{n}^{-}=\mathcal V_{n}\cap \mathcal D \quad \text{ and }\quad \mathcal V_{n}^{+} = \mathcal V_{n} \setminus \mathcal V_{n}^{-}.$$ Notice that a piece of $ W^{s}(q) $ forms the boundary between $ \mathcal V_{n}^{+} $ and $ \mathcal V^{-}_{n} $. We mention for future reference a simple estimate which we shall use below. \[distance1\] $ d(z, f^{-1}(W^{s}_{\delta}(q)) ) \geq \delta/5^{k} $ for all $ z\in \mathcal V_{k}\setminus \mathcal V_{k+1}.$ $ z \in \mathcal V_{k}\setminus\mathcal V_{k+1} $ implies, by definition, $ d(z_{k+1}, q ) \geq \delta $. For points $ z $ close to $ f^{-1}(W^{s}_{\delta}(q) ) $ this also means $ d(z_{k+1}, W^{s}_{\delta}(q)) \geq \delta$ since such points come very close to the fixed point $ q $ and escape the $ \delta $-neighbourhood of $ q $ along the direction of $ W^{u}(q) $. Thus, using the fact that the norm of the derivative $ Df $ in $ \mathcal D $ is uniformly bounded above by 5 we obtain the result. ### The critical neighbourhood {#critnhbd} For $ \varepsilon > 0 $ we define a *critical neighbourhood* $$\Delta_{\varepsilon} = (-\varepsilon, \varepsilon) \times (-4b, 4b).$$ Notice that we can take $ \varepsilon $ sufficiently small so that $ q_{f}\in f(\mathcal V) $ and $$f(\Delta_{\varepsilon}) \subset \mathcal V_{k_{0}}.$$ From now on we consider $ \varepsilon $ fixed.
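As an aside on the constants fixed above: with $\delta = 1/10$ the smallest integer satisfying \eqref{k0} can be found by direct search, and turns out to be $k_{0}=18$ (a sketch, for concreteness only).

```python
import math

def smallest_k0(delta=0.1):
    """Smallest k_0 with sqrt(delta)/(2 sqrt(3)) * (sqrt(3 / sqrt(5)))**(k_0 - 1) > 1."""
    c = math.sqrt(delta) / (2 * math.sqrt(3))
    r = math.sqrt(3 / math.sqrt(5))   # per-step factor, approximately 1.158
    k0 = 1
    while c * r ** (k0 - 1) <= 1:
        k0 += 1
    return k0
```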
We also let $$\label{delta} \Delta=\Delta_{a}=\{x\in\Delta_{\varepsilon}: f (x) \notin\mathcal D\}.$$ For $ a $ sufficiently close to $ 2 $ and $ |b| $ and $ \eta $ sufficiently small we have uniform hyperbolicity outside $ \Delta_{\varepsilon} $. We state this fact more formally in the following \[LemmaUnifHyp\] For every $ \hat\lambda \in (0, \log 2) $ and $ |a-2|, |b|, \eta >0 $ sufficiently small, there exists a constant $ C_{\varepsilon}>0 $ such that for all $ k\geq 1 $ and points $ z $ with $ z, f(z), \ldots, f^{k-1}(z) \notin \Delta_{\varepsilon} $, and vector $ v $ with slope $ < \alpha $ we have $$\label{slope} \text{slope } Df^{k}_{z}(v) < \alpha,$$ $$\label{UE1} \|Df^{k}_{z}(v)\|\geq C_{\varepsilon} e^{\hat\lambda k}\|v\|.$$ If, *moreover*, $ f^{k}(z)\in\Delta $ then we have $$\label{UE2} \|Df^{k}_{z}(v)\|\geq e^{\hat\lambda k}\|v\|.$$ This is a standard result (see for example [@BenCar91] or [@MorVia93]) and so we omit the details. We just mention that it follows from the fact that the limiting one-dimensional map $ h_{2,0} $ satisfies uniform expansivity estimates outside an arbitrary critical neighbourhood $ \Delta_{\varepsilon} $ (with constant $ \hat\lambda $ arbitrarily close to $ \log 2 $ but constant $ C_{\varepsilon} $ depending on $ \varepsilon $ and arbitrarily small for $ \varepsilon $ small), see e.g. [@MelStr93]. Considering this one-dimensional map as embedded in the space of two-dimensional maps and using the fact that uniform hyperbolicity is an open condition we obtain the statement in the Lemma for $ |b|, \eta \neq 0 $ sufficiently small. Hyperbolic coordinates {#hypcoord} ---------------------- The notion of Hyperbolic Coordinates is inspired by some constructions in [@BenCar91; @MorVia93], developed in [@LuzVia03] and formalized in [@HolLuz06] as an alternative framework with which to approach the classical theory of invariant manifolds. 
Here we review the basic definitions and the theory to the extent required for our purposes. ### Hyperbolicity of compositions of linear maps We recall the notion of hyperbolic coordinates and give the basic definitions and properties in the general context of $ C^{2} $ diffeomorphisms of a Riemannian surface $ M $. For $ z\in M, k\geq 1 $ we let $$F_{k}(z) = \|Df^{k}_{z}\|\quad\text{ and } \quad E_{k}(z) = \|(Df^{k}_{z})^{-1}\|^{-1}$$ denote the maximum expansion and the maximum contraction respectively of $ Df^{k}_{z}$. Then we think of the quantity $$H_{k}(z) = E_{k}(z) / F_{k}(z)$$ as the *hyperbolicity* of $ Df^{k}_{z}$. Notice that $ H_{k}\leq 1 $ always. The condition $ H_{k} = E_{k}/F_{k} < 1 $ implies that the linear map $ Df^{k} $ maps the unit circle $\mathcal S \subset T_{z}M $ to an ellipse $ \mathcal S_{k} = Df_{z}^{k}(\mathcal S) \subset T_{f^{k}(z)}M $ with well defined major and minor axes. The unit vectors $ e^{(k)}, f^{(k)} $ which are mapped to the minor and major axes respectively of the ellipse, and which are thus the *most contracted* and *most expanded* vectors respectively, are given analytically as solutions of the equation $ d\|Df_{z}^{k}(\cos\theta, \sin\theta)\|/d\theta = 0 $, which can be solved to give the explicit formula $$\label{contdir} \tan 2\theta = \frac{2 [({\ensuremath{\partial_{x}f_{1}}}^{k})({\ensuremath{\partial_{y}f_{1}}}^{k}) +({\ensuremath{\partial_{x}f_{2}}}^{k})({\ensuremath{\partial_{y}f_{2}}}^{k})]} {({\ensuremath{\partial_{x}f_{1}}}^{k})^2+({\ensuremath{\partial_{x}f_{2}}}^{k})^2 - ({\ensuremath{\partial_{y}f_{1}}}^{k})^2 -({\ensuremath{\partial_{y}f_{2}}}^{k})^2}.$$ Here $ f=(f_{1}, f_{2}) $ are the two coordinate functions of $ f $. Notice that $ e^{(k)} $ and $ f^{(k)} $ are always *orthogonal* and *do not* in general correspond to the stable and unstable eigenspaces of $ Df^{k} $.
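In practice the quantities $ F_{k}, E_{k}, H_{k} $ and the most contracted and most expanded directions are exactly the data of a singular value decomposition. The sketch below is an illustrative numerical check (the Jacobian matrix is hypothetical, not taken from the paper): it computes the singular directions of a $ 2\times 2 $ matrix and verifies the explicit formula for $ \tan 2\theta $.

```python
import math
import numpy as np

# A hypothetical Jacobian Df (illustrative values only).
Df = np.array([[2.0, 1.0],
               [0.3, 0.1]])

# The right singular vectors of Df are the most expanded and most
# contracted unit directions; the singular values are F_k and E_k.
U, S, Vt = np.linalg.svd(Df)
f_dir, e_dir = Vt[0], Vt[1]    # most expanded, most contracted
F_k, E_k = S[0], S[1]
H_k = E_k / F_k                # "hyperbolicity", always <= 1

# The angle theta of either singular direction satisfies
# tan(2*theta) = 2(AB + CD) / (A^2 + C^2 - B^2 - D^2),
# where A = dx f1, B = dy f1, C = dx f2, D = dy f2.
(A, B), (C, D) = Df
theta = math.atan2(e_dir[1], e_dir[0])
lhs = math.tan(2 * theta)
rhs = 2 * (A * B + C * D) / (A**2 + C**2 - B**2 - D**2)
```

Since the two singular directions are orthogonal and $ \tan 2\theta $ has period $ \pi/2 $ in $ \theta $, the formula is satisfied by both $ e^{(k)} $ and $ f^{(k)} $.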
### Hyperbolic coordinates and stable and unstable foliations We define the *hyperbolic coordinates of order* $ k $ at the point $ z $ as the orthogonal coordinates $ \mathcal H_{k}(z) $ given at $ z $ by the most contracted and most expanded directions for $ Df^{k}_{z} $. If $ f $ is $ C^{2} $ and $ H_{k}(z) < 1 $ then hyperbolic coordinates are defined in some neighbourhood of $ z $ and define two orthogonal $ C^{1} $ vector fields. In particular they are locally integrable and thus give rise to two orthogonal foliations. We let $ \mathcal E^{(k)} $ denote the *stable foliation of order* $ k $ formed by the integral curves of the vector field $ \{e^{(k)}\} $ and $ \mathcal F^{(k)} $ denote the *unstable foliation of order* $ k $ formed by the integral curves of the vector field $ \{f^{(k)}\}$. ### Hyperbolic coordinates for Hénon-like maps {#hyphen} A crucial property of hyperbolic coordinates and finite order stable and unstable foliations is that, under very mild assumptions, they *converge* in quite a strong sense as $ k\to \infty $. We formulate a version of this property here in our specific context. \[c2close\] For every $ k\geq 1 $, hyperbolic coordinates $ \mathcal H_{k} $ and stable and unstable foliations $ \mathcal E^{(k)} $ and $ \mathcal F^{(k)} $ are defined in $ \mathcal V^{+}\cup \mathcal V^{-}_{k} $. Moreover 1. the angle between each stable direction $ e^{(k)} $ and the direction of $ f^{-1}(W^{s}_{\delta}(q)) $ (whose slope is $ \approx 2 $), and 2. the curvature of each stable leaf, are both $ \lesssim b $. Also, the $ C^{2} $ distance between leaves of $ \mathcal E^{(k)} $ and leaves of $ \mathcal E^{(k+1)} $ is $ \lesssim b^{k} $. Analogous convergence results are formulated and proved in great generality in [@HolLuz06] under weak (subexponential) growth of the derivative. Here we shall need only some very particular cases of these estimates and therefore we first describe the specific setting in which they will be applied here.
The main ingredient for the proof is the fact that by our choice of $ \delta $ and assuming that $ |a-2| $, $ |b| $ and $ \eta $ are small enough we have that $ \|Df (z) - Df_{*}(q_{*})\| $ is small for all $ \ z\in \mathcal Q $ and thus in particular $$\label{contest} E_{k}(z_{0}) \leq b^{k} \quad\text{and} \quad F_{k}(z_{0}) \geq 3^{k} \quad \forall \ z_{0}\in \mathcal V_{k}.$$ It follows immediately that hyperbolic coordinates, and their associated foliations, of order $ k $ are well defined in $ \mathcal V_{k} $. Points in $ \mathcal V_{k}^{-} $ are then re-injected into $ \mathcal D \setminus \mathcal Q $ and these hyperbolicity estimates can no longer be guaranteed, a priori, for all time. Points in $ \mathcal V_{k}^{+} $ however are outside $ \mathcal D $ and therefore, by the arguments of Section \[proofnonwan\], eventually escape towards infinity. In particular the required hyperbolicity conditions can be guaranteed to hold for all positive iterates. This implies that hyperbolic coordinates of order $ k $ are well defined in $ \mathcal V^{+}\cup\mathcal V^-_{k} $ as in the statement of the proposition. The statements about the direction of the stable directions, the curvature of the leaves and the $ C^{2} $ distance between stable leaves of different orders all follow directly from [@HolLuz06]\*[Main Theorem]{}. These calculations are purely technical and do not add to our geometrical understanding of the situation; we therefore omit the details and refer the reader to that paper. Admissible curves {#curvature} ----------------- Recall that the curvature $ \kappa(s) $ of a parametrized curve $ \gamma(s) = (x(s), y(s)) $ is given by $$\kappa(s) = \frac{|\dot x \ddot y-\dot y \ddot x|}{\|(\dot x, \dot y)\|^{3}} = \frac{|\dot\gamma \times \ddot\gamma |}{|\dot\gamma|^{3}}.$$ The equivalence between the two formulas follows from the identity $(v_{1},v_{2})\times (w_{1},w_{2})=v_{1}w_{2}-v_{2}w_{1}$.
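The curvature formula can be checked directly on a simple example: for a circle of radius $ r $, parametrized as $ (r\cos s, r\sin s) $, the formula must return $ 1/r $ at every parameter value. The following minimal sketch (an illustrative check, not part of the argument) does exactly this.

```python
import math

def curvature(dx, dy, ddx, ddy):
    # kappa = |x' y'' - y' x''| / ((x'^2 + y'^2)^(3/2))
    return abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

# Circle of radius r = 2: gamma(s) = (r cos s, r sin s).
r, s = 2.0, 0.7
dx, dy = -r * math.sin(s), r * math.cos(s)
ddx, ddy = -r * math.cos(s), -r * math.sin(s)
kappa = curvature(dx, dy, ddx, ddy)   # should equal 1/r = 0.5
```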
For $ \alpha > 0 $, we say that a $ C^{2} $ curve $ \gamma = \gamma(s) = (x(s), y(s)) $ is *admissible* if $ |\dot y(s)|/|\dot x(s)| < \alpha $ and $ |\kappa(s)| < \alpha $ for all $ s $. We remark that both the curvature and the slope of tangent vectors of a curve are independent of the parametrization, and thus so is the definition of admissibility. We shall want to compare the curvature at a point of a curve and at the corresponding point of its image. So, we suppose $ \gamma_{i-1}(s) $ is a parametrized $ C^{2} $ curve and $ \gamma_{i}(s) =f(\gamma_{i-1}(s)) $. For simplicity we shall often omit the parameter $ s $ and simply write $ Df $ to denote the derivative at the point $ \gamma_{i-1}(s) $. \[smallcurv\] Let $ \{\gamma_{i}\}_{i=0}^{n}$ be a sequence of $ C^{2} $ curves with $ \gamma_{i}=f^{i}(\gamma_{0}) $. Suppose that for some $ s $, $ n $ is a “hyperbolic time” in the sense that $$\|\dot\gamma_{n}(s)\|\geq C e^{\lambda j}\|\dot\gamma_{n-j}(s)\|$$ for all $ j=0,\ldots, n-1 $. Then for $ |b|, \eta $ sufficiently small, $ \kappa_{0}(s) < \alpha $ implies $\kappa_{n}(s) \leq \kappa_{0}(s) < \alpha. $ \[admisstoadmiss\] If $ \gamma\subset \mathcal D\setminus\Delta_{\varepsilon} $ is admissible, then $ f(\gamma) $ is also admissible. This follows from Proposition \[smallcurv\] and the hyperbolicity outside $ \Delta_{\varepsilon} $. Condition \[slope\] implies that the slope of each tangent vector to $ f(\gamma) $ is $ <\alpha $ and condition \[UE1\] together with Proposition \[smallcurv\] gives the curvature $ <\alpha $. To prove Proposition \[smallcurv\] we first prove a general curvature estimate. We fix some bounded neighbourhood $ \hat R $ of $ R $ and, as above, we suppose $ \{\gamma_{i}\}_{i=0}^{n}$ is a sequence of $ C^{2} $ (not necessarily admissible) curves with $ \gamma_{i}=f^{i}(\gamma_{0}) $, all contained in $ \hat R$.
\[curv\] There exists $K > 0 $ independent of $ a, b, \eta $ such that for all $ i=1,\ldots, n $ we have $$\kappa_i(s) \le K(b+\eta) \frac{|\dot{\gamma}_{i-1}(s)|^3}{|\dot{\gamma}_{i} (s)|^3} \kappa_{i-1}(s)+ K(b+\eta) \frac{|\dot{\gamma}_{i-1} (s)|^3}{|\dot{\gamma}_{i} (s)|^3}.$$ We use the formula $ \kappa = |\dot\gamma \times \ddot\gamma |/|\dot\gamma|^{3} $ for the curvature. We have $$\dot\gamma_{i}= (Df) \dot \gamma_{i-1} = \begin{pmatrix} f_{1,x} & f_{1,y} \\ f_{2,x} & f_{2,y} \end{pmatrix}\dot \gamma_{i-1} = \begin{pmatrix} -2ax_{i-1} + \varphi_{1,x} &1+\varphi_{1,y}\\ b+ \varphi_{2,x} & \varphi_{2,y} \end{pmatrix} \dot \gamma_{i-1}$$ and $$\ddot\gamma_{i} = \begin{pmatrix} \nabla f_{1,x} \cdot \dot\gamma_{i-1} & \nabla f_{1,y} \cdot \dot\gamma_{i-1} \\ \nabla f_{2,x} \cdot \dot\gamma_{i-1} & \nabla f_{2,y}\cdot \dot\gamma_{i-1} \end{pmatrix} \dot\gamma_{i-1} + (Df)\ddot\gamma_{i-1}.$$ Therefore $ \dot\gamma_{i}\times\ddot\gamma_{i} $ is given by $$\label{curv1} (Df)\dot\gamma_{i-1} \times \begin{pmatrix} \nabla f_{1,x} \cdot \dot\gamma_{i-1} & \nabla f_{1,y} \cdot \dot\gamma_{i-1} \\ \nabla f_{2,x} \cdot \dot\gamma_{i-1} & \nabla f_{2,y}\cdot \dot\gamma_{i-1} \end{pmatrix} \dot\gamma_{i-1} \\ + (Df) \dot\gamma_{i-1} \times (Df)\ddot\gamma_{i-1}$$ where $$\label{curv1a} \nabla f_{1,x} = \begin{pmatrix} -2a + \varphi_{1, xx} \\ \varphi_{1, xy} \end{pmatrix},$$ and $$\label{curv1b} \nabla f_{1,y} = \begin{pmatrix} \varphi_{1, xy} \\ \varphi_{1, yy} \end{pmatrix}; \nabla f_{2,x} = \begin{pmatrix} \varphi_{2, xx} \\ \varphi_{2, xy} \end{pmatrix}; \nabla f_{2,y} = \begin{pmatrix} \varphi_{2, xy} \\ \varphi_{2, yy} \end{pmatrix}.$$ We shall estimate the two terms of \[curv1\] separately. These will yield the two terms in the statement of the Lemma.
For the second term we have $$|(Df) \dot\gamma_{i-1} \times (Df)\ddot\gamma_{i-1}| = |Det (Df)| |\dot\gamma_{i-1} \times \ddot\gamma_{i-1}|= |Det (Df)| \kappa_{i-1}|\dot\gamma_{i-1}|^{3}.$$ Indeed, for the first equality, $ |\dot\gamma_{i-1} \times \ddot\gamma_{i-1}| $ is the area of the parallelogram defined by the two vectors $ \dot\gamma_{i-1} $ and $ \ddot\gamma_{i-1} $, and $ |(Df) \dot\gamma_{i-1} \times (Df)\ddot\gamma_{i-1}| $ is the area of the parallelogram defined by the two vectors $ (Df)\dot\gamma_{i-1} $ and $ (Df)\ddot\gamma_{i-1} $, which is of course just the image of the first parallelogram under $ Df $. The second equality follows immediately from the definition of $ \kappa_{i-1} $. So it just remains to show that the value of $ |Det (Df)| $ is bounded above by some multiple of $ b $ and $ \eta $. Indeed, writing $ f=h+\varphi $ we have, by the “row-linearity” of the determinant, $$\begin{aligned} Det(Df) &= Det \begin{pmatrix} h_{1x}+\varphi_{1x} & h_{1y}+\varphi_{1y} \\ h_{2x}+\varphi_{2x} & h_{2y}+\varphi_{2y} \end{pmatrix} \\ &= Det \begin{pmatrix} h_{1x}& h_{1y} \\ h_{2x}+\varphi_{2x} & h_{2y}+\varphi_{2y} \end{pmatrix} + Det \begin{pmatrix} \varphi_{1x} & \varphi_{1y} \\ h_{2x}+\varphi_{2x} & h_{2y}+\varphi_{2y} \end{pmatrix} \\ & = Det \begin{pmatrix} h_{1x}& h_{1y} \\ h_{2x}& h_{2y} \end{pmatrix} + Det \begin{pmatrix} h_{1x}& h_{1y} \\ \varphi_{2x} & \varphi_{2y} \end{pmatrix} + Det \begin{pmatrix} \varphi_{1x} & \varphi_{1y} \\ h_{2x}& h_{2y} \end{pmatrix} + Det \begin{pmatrix} \varphi_{1x} & \varphi_{1y} \\ \varphi_{2x} & \varphi_{2y} \end{pmatrix} \end{aligned}$$ Using $ |h_{1x}|\leq 2a $, $ h_{1y}= 1 $, $ h_{2x}=b $, $ h_{2y}=0 $ and $ \|\varphi\|_{C^{2}}\leq \eta $ this gives $$|Det (Df)| \leq b+ (2a\eta + \eta) + (2a\eta + \eta) + \eta = b+ 4a \eta + 3 \eta \leq b + 12\eta$$ where in the last step we have used the fact that $ a $ is close to $ 2 $. Substituting this above gives the required bound for the second term of \[curv1\].
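The bound $ |Det(Df)| \leq b + 12\eta $ can be spot-checked numerically by sampling perturbation partials bounded by $ \eta $; in the sketch below the values of $ a $, $ b $, $ \eta $ are illustrative choices (a close to 2, the others small), not values taken from the paper.

```python
import random

random.seed(0)
a, b, eta = 2.05, 0.01, 0.01   # illustrative: a close to 2, |b|, eta small
worst = 0.0
for _ in range(10_000):
    x = random.uniform(-1, 1)                              # base point coordinate
    p1x, p1y, p2x, p2y = (random.uniform(-eta, eta) for _ in range(4))
    # Df = [[-2ax + phi_1x, 1 + phi_1y], [b + phi_2x, phi_2y]]
    det = (-2 * a * x + p1x) * p2y - (1 + p1y) * (b + p2x)
    worst = max(worst, abs(det))
# Every sampled determinant should respect the claimed bound b + 12*eta.
```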
To bound the first term we write $$(Df)\dot\gamma_{i-1} \times \begin{pmatrix} \nabla f_{1,x} \cdot \dot\gamma_{i-1} & \nabla f_{1,y} \cdot \dot\gamma_{i-1} \\ \nabla f_{2,x} \cdot \dot\gamma_{i-1} & \nabla f_{2,y}\cdot \dot\gamma_{i-1} \end{pmatrix} \dot\gamma_{i-1} = \begin{pmatrix} a_{1} & b_{1} \\ c_{1} & d_{1} \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \end{pmatrix} \times \begin{pmatrix} a_{2} & b_{2} \\ c_{2} & d_{2} \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \end{pmatrix}.$$ Then the norm of this expression is bounded above by $$\begin{aligned} & |a_{1}c_{2}v_{1}^{2}+a_{1}d_{2}v_{1}v_{2} + b_{1}c_{2}v_{1}v_{2} + b_{1}d_{2}v_{2}^{2} - a_{2}c_{1}v_{1}^{2} - a_{2}d_{1}v_{1}v_{2} - b_{2}c_{1}v_{1}v_{2} - d_{1}b_{2}v_{2}^{2}| \\ \leq & 2 \max\{|a_{1}c_{2}-a_{2}c_{1}|, |b_{1}d_{2}-d_{1}b_{2}|, |a_{1}d_{2}-c_{1}b_{2}|+|b_{1}c_{2}-a_{2}d_{1}|\} (v_{1}^{2}+v_{2}^{2}) \\ \leq & 4 \max\{|a_{1}c_{2}-a_{2}c_{1}|, |b_{1}d_{2}-d_{1}b_{2}|, |a_{1}d_{2}-c_{1}b_{2}|, |b_{1}c_{2}-a_{2}d_{1}|\} (v_{1}^{2}+v_{2}^{2}) \\ \leq & 8 \max\{|a_{1}c_{2}|, |a_{2}c_{1}|, |b_{1}d_{2}|, |d_{1}b_{2}|, |a_{1}d_{2}|, |c_{1}b_{2}|, |b_{1}c_{2}|, |a_{2}d_{1}|\} |\dot\gamma_{i-1}|^{2}. \end{aligned}$$ All the entries of the second matrix contain a factor $ \dot\gamma_{i-1} $; each of the terms $ b_{2}, c_{2}, d_{2}$, see \[curv1b\], contains a bounded constant multiplied by the factor $ \eta $; the term $ a_{2} $, see \[curv1a\], is of the order of $ 2a $ but here it is multiplied by either $ c_{1} $ or $ d_{1} $, each one of which is bounded by $ b+\eta $.
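The expansion of the cross product $ (Av)\times(Bv) $ above amounts to collecting the coefficients of $ v_{1}^{2} $, $ v_{1}v_{2} $ and $ v_{2}^{2} $; the following sketch verifies the expanded form against a direct computation on random inputs (an illustrative check, not part of the argument).

```python
import random

random.seed(1)
for _ in range(1000):
    a1, b1, c1, d1, a2, b2, c2, d2, v1, v2 = (random.uniform(-2, 2) for _ in range(10))
    # (A v) x (B v) computed directly...
    Av = (a1 * v1 + b1 * v2, c1 * v1 + d1 * v2)
    Bv = (a2 * v1 + b2 * v2, c2 * v1 + d2 * v2)
    cross = Av[0] * Bv[1] - Av[1] * Bv[0]
    # ...and via the grouped-coefficient form used in the text.
    expanded = ((a1 * c2 - a2 * c1) * v1 ** 2
                + (a1 * d2 + b1 * c2 - c1 * b2 - a2 * d1) * v1 * v2
                + (b1 * d2 - d1 * b2) * v2 ** 2)
    assert abs(cross - expanded) < 1e-9
```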
Therefore, there exists a constant $ K>0 $ such that $$\left| (Df)\dot\gamma_{i-1} \times \begin{pmatrix} \nabla f_{1,x} \cdot \dot\gamma_{i-1} & \nabla f_{1,y} \cdot \dot\gamma_{i-1} \\ \nabla f_{2,x} \cdot \dot\gamma_{i-1} & \nabla f_{2,y}\cdot \dot\gamma_{i-1} \end{pmatrix} \dot\gamma_{i-1} \right| \leq K(b+\eta) |\dot\gamma_{i-1}|^{3}.$$ Applying Lemma \[curv\] recursively we get $$\begin{aligned} \kappa_n(s) &\le K(b+\eta) \kappa_{n-1}(s) \frac{|\dot\gamma_{n-1}|^3 }{|\dot\gamma_n|^{3}} + K(b+\eta) \frac{|\dot\gamma_{n-1}|^3 }{|\dot\gamma_n|^{3}} \\ & \le (K(b+\eta))^2 \kappa_{n-2}\frac{|\dot\gamma_{n-2}|^3 }{|\dot\gamma_n|^{3}} + (K(b+\eta))^2\frac{|\dot\gamma_{n-2}|^3 }{|\dot\gamma_n|^{3}} +K(b+\eta) \frac{|\dot\gamma_{n-1}|^3 }{|\dot\gamma_n|^{3}} \\ & \le \ldots \end{aligned}$$ Using the expansivity assumption and $ b, \eta $ small, this gives $$\kappa_n(s) \le \frac{1}{C^3} (K(b+\eta) e^{-\lambda})^n \kappa_0(s) + \frac{1}{C^3} \frac{K(b+\eta) e^{-\lambda}}{1-K(b+\eta)e^{-\lambda}}\leq \kappa_{0}(s) \leq \alpha.$$ Critical points {#hypcrit} --------------- The next Proposition makes precise the notion of a *critical point of order* $ k $. We recall that $ \gamma $ is a $ C^{2} $ *admissible curve* if all its tangent vectors have slope $ <\alpha $ and it has curvature $ <\alpha $. We say that $ \gamma $ is a *long admissible curve* if it is an admissible curve which crosses the entire length of $ \Delta_{\varepsilon} $. \[tang\] Let $ \gamma\subset\Delta_{\varepsilon}\cap\mathcal D $ be a long admissible curve. Then there exists a unique point $ c^{(k)}\in \gamma $ such that $ \gamma_{0}= f(\gamma) $ has a (quadratic) tangency at $ c^{(k)}_{0}=f(c^{(k)}) \in \mathcal V_{k}^{-}\cup\mathcal V^{+} $ with the stable foliation $ \mathcal E^{(k)}$, for any $ k\geq k_{0} $. Moreover there exists a constant $ K $, independent of $ b, \eta $, such that $ d(c^{(k)}_{0}, c_{0}^{(k+1)}) \leq Kb^{k} $. In particular, the sequence $ \{c^{(k)}_{0}\} $ is Cauchy.
We call $ c^{(k)} $ and $ c_{0}^{(k)} $ respectively a *critical point* and *critical value* of order $ k $, associated to the long admissible curve $ \gamma $. We remark that critical values $ c_{0}^{(k)} $ of finite order are not guaranteed to be outside $ \mathcal D $; however, we shall show below that their limit points as $ k\to\infty $, i.e. the “real” critical points, always fall strictly outside $ \mathcal D $ for $ a> a^{*} $. Given a parametrized curve $ \gamma_{0}=\gamma_{0}(t) $ and its image $ \gamma_{1}=\gamma_{1}(t) = f(\gamma_{0}(t)) $ we denote by $ \kappa_{0}(t) $ the curvature of $ \gamma_{0} $ at the point $ \gamma_{0}(t) $ and by $ \kappa_{1}(t)$ the curvature of $ \gamma_{1} $ at the point $ \gamma_{1}(t) $. \[poscurv\] Let $ \gamma_{0}(t) $ be an admissible curve and let $ \gamma_{1}(t)=f(\gamma_{0}(t)) = (\xi_{1} (t), \eta_{1}(t)) $. Suppose that for some $ t $ we have $ \dot \eta_{1}(t) \neq 0 $ and $ |\dot \xi _{1}(t)/ \dot \eta_{1}(t)| < 1 $. Then $ |\kappa_{1}(t)| > a/b \gg 1 $. Lemma \[poscurv\] essentially says that if the tangent direction of the image of an admissible curve at a certain point is roughly vertical (or at least contained in the “vertical” cone between the positive and the negative diagonals) then the curvature at this point is strictly bounded away from 0. This does not apply to admissible curves outside $ \Delta_{\varepsilon} $ since we have shown above (Corollary \[admisstoadmiss\]) that images of such curves are still admissible and therefore their tangent directions are roughly horizontal. We will instead apply it below to the images of admissible curves inside $ \Delta_{\varepsilon} $ as a way of pinpointing the location of *folds*. First recall that the curvature $ \kappa_{1}(t) $ is independent of the choice of parametrization and also the condition $ |\dot \xi _{1}(t)/ \dot \eta_{1}(t)| < 1 $ is independent of the parametrization since $ |\dot \xi _{1}(t)/ \dot \eta_{1}(t)| $ is just the reciprocal of the slope of the tangent vector.
Therefore we choose the parametrization $$\gamma_{0}(t) = (t, y(t)).$$ For simplicity we also omit the subscript $ 1 $ from the coordinate functions of $ \gamma_{1} $ and just write $ \gamma_{1}(t) = (\xi(t), \eta(t)) $. From the definition of $ f $ we have $$\begin{aligned} (\xi(t), \eta(t)) &= (1-at^{2}+y(t) + \varphi_{1}(\gamma_{0}(t)), bt+\varphi_{2}(\gamma_{0}(t))) \\ (\dot\xi(t), \dot\eta(t)) &= (-2at + \dot y(t) + \nabla\varphi_{1}(\gamma_{0}(t)) \cdot \dot\gamma_{0}(t), b+\nabla\varphi_{2}(\gamma_{0}(t)) \cdot \dot\gamma_{0}(t)) \\ (\ddot \xi(t), \ddot\eta(t)) &= (-2a + \ddot y(t) + D^{2}\varphi_{1}(\gamma_{0}(t)) [\dot\gamma_{0}(t)]^{2}, D^{2}\varphi_{2}(\gamma_{0}(t))[\dot\gamma_{0}(t)]^{2}) \end{aligned}$$ Choosing $ \eta $ sufficiently small, for example so that $ 4\|\nabla\varphi_{2}(\gamma_{0}(t))\|(1+\alpha) < b $, this implies $$\label{etadot} 3b/4 \leq |\dot\eta(t)| \leq 5b/4.$$ We can now compute the curvature $ \kappa_{1}(t) $. First notice that the condition $ |\dot \xi _{1}(t)/ \dot \eta_{1}(t)| < 1 $ implies in particular $ \|(\dot\xi(t), \dot\eta(t))\|\leq \sqrt 2 |\dot\eta(t)| $.
Then we have $$\kappa_{1}(t) = \frac{|\ddot \xi (t) \dot\eta (t) - \dot\xi(t) \ddot\eta (t)|}{\|(\dot\xi(t), \dot\eta(t))\|^{3}}\geq \frac{|\ddot \xi (t) \dot\eta (t) - \dot\xi(t) \ddot\eta (t)|}{4 |\dot\eta(t)|^{3}}.$$ Dividing numerator and denominator by $ |\dot\eta(t)| $, using the condition $ |\dot \xi _{1}(t)/ \dot \eta_{1}(t)| < 1 $ and \[etadot\], we get $$\kappa_{1}(t) \geq \frac{|\ddot \xi (t) - \frac{\dot\xi(t)}{\dot\eta(t)} \ddot\eta (t)|}{4 |\dot\eta(t)|^{2}} \geq \frac{|\ddot \xi (t)| - \left|\frac{\dot\xi(t)}{\dot\eta(t)}\right| \ |\ddot\eta (t)|}{4 |\dot\eta(t)|^{2}} \geq \frac{|\ddot \xi (t)| - |\ddot\eta (t)|}{4 |\dot\eta(t)|^{2}} \geq \frac{|\ddot \xi (t)| - |\ddot\eta (t)|}{7b^{2}}.$$ Finally, from the formulas for $ \ddot\xi(t) $ and $ \ddot\eta(t) $ and the fact that $ |\ddot y(t)| \leq \alpha(1+\alpha^{2})^{3/2} \leq 2\alpha $ by the admissibility (curvature $ <\alpha $) of $ \gamma_{0} $, we get $$|\ddot \xi (t)| - |\ddot\eta (t)| \geq 2a - 2\alpha - 3 \|\varphi\|_{C^{2}} \geq a$$ as long as $ \eta $ is sufficiently small. The existence of a tangency between $ f(\gamma) $ and the stable foliation $ \mathcal E^{(k)} $ follows from the simple geometric observation that the image of a long admissible curve necessarily “changes direction” between one endpoint and the other. Thus, by a simple Intermediate Value argument it follows that there is some point of tangency. Now, Proposition \[c2close\] says that the leaves of the stable foliations $ \mathcal E^{(k)} $ are close to the piece of stable manifold $ f^{-1}(W^{s}_{\delta}(q)) $ and thus have slope close to $ 2 $, and that their curvature is small. In particular the point of tangency must occur at some point at which the tangent direction to $ f(\gamma) $ has slope close to 2, and therefore Lemma \[poscurv\] shows that at this point of tangency $ f(\gamma) $ has strictly positive curvature. This implies that this tangency is quadratic as well as unique.
![Hyperbolic coordinates](zoom){width="8cm"} Hyperbolicity estimates {#sectionhypest} ======================= This is the final and main section of the paper. We apply the notion of hyperbolic coordinates and dynamically defined critical points to prove Theorem 1. In Section \[shadowing\] we combine the hyperbolic coordinates and the curvature estimates to show that all components of the unstable manifold $ W^{u}(p) $ in $ \Delta_{\varepsilon} $ are almost horizontal curves with small curvature. In particular they all have well-defined critical points. In Section \[HypRet\] we take advantage of the structure of critical points on such components to show that points in the critical region $ \Delta_{\varepsilon}\setminus\Delta $ recover hyperbolicity after some bounded number of iterations depending only on the parameter $ a $. In Section \[UnifHyp\] we then extend these estimates to uniform expansion estimates on all of $ W^{u}(p) $ with a hyperbolicity constant $ C_{a} $ depending only on the parameter. In Section \[UnifHypOm\] we then show how to extend this hyperbolicity to the closure of $ W^{u}(p) $ and thus to the whole nonwandering set $ \Omega $. Finally, in Section \[Lyap\], we consider the bifurcation parameter value $ a=a^{*} $ and show that all Lyapunov exponents are uniformly bounded away from 0. Shadowing --------- Let $$\label{lambda} \lambda = \min\left\{\frac{1}{2}\ln \frac{3}{\sqrt 5}, \hat\lambda\right\}.$$ \[recovering\] For all $ a\geq a^{*} $ all components of $ W^{u}(p) \cap \Delta_{\varepsilon} $ are long admissible curves. Moreover, for all $ z\in W^{u}(p)\cap(\Delta_{\varepsilon}\setminus\Delta) $ and any vector $ v $ tangent to $ W^{u}(p) $ at $ z $ and $ k\geq 1 $ such that $ f(z)\in \mathcal V_{k}\setminus \mathcal V_{k+1} $ we have $$\|Df^{k}_{z}(v)\|\geq e^{\lambda k}\|v\|.$$ We emphasize that Proposition \[recovering\] holds also for parameter values for which the first tangency occurs.
We first prove the expansivity statement and then the admissibility of leaves of $ W^{u}(p) $ in $ \Delta_{\varepsilon} $. ### Expansion If $ \gamma(s) = (x(s), y(s)) \subset\Delta_{\varepsilon}\cap \mathcal D $ is a long admissible curve we consider the tangent vectors $ \dot\gamma(s) $ and their images $ \dot\gamma_{0}(s) = Df(\dot\gamma(s)) $. By Proposition \[tang\], $ \dot\gamma_{0} $ is tangent to the stable direction $ e^{(k)} $ at the point $ c_{0}^{(k)} $. For this and other nearby points on $ \gamma $ we can write the tangent vector $$\dot\gamma_{0} = h^{(k)}_{0}f^{(k)} + v^{(k)}_{0} e^{(k)}$$ where $ (f^{(k)}, e^{(k)}) $ is the orthogonal basis given by the most expanded and most contracted direction for $ Df^{k} $ and $ h_{0}^{(k)} $ and $ v_{0}^{(k)} $ are the components of $ \dot\gamma_{0} $ in this basis. Notice that the basis itself depends on the point. Proposition \[c2close\] implies that the basis varies *very slowly* with the base point, and Proposition \[tang\] implies that the tangent vector $ \dot\gamma_{0} $ is varying at *positive speed* with respect to this basis. We omit the calculations which are relatively standard, see for example [@LuzVia03]. Specifically this implies that the component $ h_{0}^{(k)} $ of the tangent vector $ \dot\gamma_{0} $ at some point $ z_{0}=f(z) \in \gamma_{0} $ is proportional to the distance between $ z $ and the critical point of order $ k $, $ c^{(k)} $. In our setting, the constants actually give $$\label{horizontal} |h_{0}^{(k)}(z_{0})| \geq d(z, c^{(k)}).$$ We can now prove the following \[binding\] Suppose $ \gamma \subset \Delta_{\varepsilon} $ is an admissible curve, $ z\in \gamma $, $ z_{0}=f(z)\in\mathcal V_{k}\setminus\mathcal V_{k+1} $ and $ c^{(k)} $ is the critical point of order $ k $ in $ \gamma $. 
Then for a vector $ w $ tangent to $ \gamma $ at $ z $ and all $ j=0,\ldots, k $ we have $$\|Df^{j+1}_{z}(w)\| \geq 3^{j} d(z, c^{(k)})\|w\|.$$ In particular $$\|Df^{k+1}_{z}(w)\|\geq e^{\lambda (k+1)}\|w\|.$$ The first inequality follows immediately from \[contest\] and \[horizontal\]. To prove the second we need to find a bound for $ d(z, c^{(k)}) $ in terms of $ k $. Using the quadratic nature of $ \gamma_{0} $ and the proximity to the one-dimensional map $ 1-ax^{2} $ with $ a\approx 2 $, we obtain $$\label{critdist} d(z, c^{(k)}) \geq \frac{1}{3} \sqrt{d(z_{0}, c_{0}^{(k)})}.$$ To estimate $ d(z_{0}, c_{0}^{(k)}) $ we use the observation that the “real” critical value $ c_{0} $ on $ \gamma_{0} $, i.e. the point of tangency between $ \gamma_{0} $ and the limiting stable foliation $ \mathcal E^{(\infty)} $, lies necessarily either on $ W^{s}(q) $ (this is only a possibility if $ a=a^{*} $) or to the right of $ W^{s}(q) $ in $ \mathcal Q $. We write this as $ \delta_{0}= d(c_{0}, W^{s}(q)) \geq 0 $. Combining this with Lemma \[distance1\] and the rate of convergence of critical points of finite order, $ d(c_{0}^{(k)}, c_{0}) \leq K b^{k} $, as mentioned in Proposition \[tang\], and taking $ b $ sufficiently small, we get $$\begin{aligned} d(z_{0}, c_{0}^{(k)}) &\geq d(z_{0}, W^{s}(q)) + d(W^{s}(q), c_{0}) - d(c_{0}^{(k)}, c_{0}) \\ &\geq \frac{\delta}{2}5^{-k}+\delta_{0}- K b^{k} \geq \frac{\delta}{3}5^{-k}.\end{aligned}$$ Substituting this into \[critdist\] and using the fact that we can assume $ k\geq k_{0} $ as well as the definition of $ k_{0} $ in \[k0\] and of $ \lambda $ in \[lambda\], we have $$3^{k}d(z,c^{(k)}) \geq \frac{\sqrt \delta}{2\sqrt 3}\left(\frac{3}{\sqrt{5}}\right)^{k}\geq e^{\lambda (k+1)}.$$ ### Admissibility Returning to the proof of the Proposition, to obtain the statement about admissibility, notice first of all that combining Lemma \[binding\] with Proposition \[smallcurv\] we immediately obtain the statement that if $ \gamma\subset W^{u}(p)\cap\Delta_{\varepsilon} $ is admissible and $ k $ is the first time
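The final chain of inequalities involves only the explicit constants $ \delta $, $ \lambda $ and $ k_{0} $, so it can be checked numerically; the sketch below (illustrative, not part of the proof) does this for a range of $ k $, taking $ \delta = 1/10 $, for which the smallest $ k_{0} $ allowed by the condition fixed at the start of the section is $ k_{0}=18 $.

```python
import math

delta = 1 / 10
# Worst case for the inequality: lambda = (1/2) ln(3/sqrt(5)),
# the larger of the two candidates in the definition of lambda.
lam = 0.5 * math.log(3 / math.sqrt(5))
pref = math.sqrt(delta) / (2 * math.sqrt(3))

# Check (sqrt(delta)/(2 sqrt 3)) (3/sqrt 5)^k >= e^{lambda (k+1)} for k >= 18.
ok = all(pref * (3 / math.sqrt(5)) ** k >= math.exp(lam * (k + 1))
         for k in range(18, 200))
```

The check fails at $ k = 17 $, consistent with $ k_{0}=18 $ being the smallest admissible choice.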
that $ f^{k}(\gamma) \subset \Delta_{\varepsilon} $, then $ f^{k}(\gamma) $ is admissible. Now, by choosing $ |b| $ and $ \eta $ small we can guarantee that $ W^{u}_{loc}(p)\cap\Delta_{\varepsilon} $ is a long admissible curve. Moreover, every piece of $ W^{u}(p) \cap\Delta_{\varepsilon} $ is the image of some curve in $ W^{u}_{loc}(p)\cap\Delta_{\varepsilon} $ and is therefore admissible. Hyperbolicity after returns to $ \Delta_{\varepsilon} \protect $ {#HypRet} -------------------------------- Proposition \[recovering\] gives a pointwise recovery time for the hyperbolicity of points in the critical region, based on their position. The following Proposition gives a key uniformity estimate in the phase space for each parameter $ a> a^{*} $. \[N\] For all $ a> a^{*} $ there exists a constant $ N_{a} $ such that for $ z\in W^{u}(p)\cap\Delta_{\varepsilon}\cap \Omega(f)$, and $ v $ a tangent vector to $ W^{u}(p) $ at $ z $, there exists $ n(z) \leq N_{a} $ such that $ Df^{n(z)}_{z}(v) $ is almost horizontal and $$\|Df^{n(z)}_{z}(v)\|\geq e^{\lambda n(z)}\|v\|.$$ We remark that the constant $ N_{a} $ is *not* uniformly bounded in $ a$ and in particular the Proposition *does not* apply to $ a=a^{*} $. However it gives us a *uniformity statement* in $ z $ which implies, as we shall see below, uniform hyperbolicity for each given parameter value $ a> a^{*} $. For the proof we need to extend the definition of admissibility naturally to curves which are only differentiable of class $ C^{1+1} $ (Lipschitz continuous derivative). We say that $ \gamma(s)\subset \Delta_{\varepsilon} $ is a $ C^{1+1} $ admissible curve if $ |\dot y| /|\dot x| < \alpha $, and $ \dot \gamma (s) $ is Lipschitz with Lipschitz constant $ \leq \alpha $. We also give the formal definition of a “real” critical point, which applies both to $ C^{2} $ and to $ C^{1+1} $ admissible curves.
We say that $ c\in\gamma $ is a critical point if $ e^{(\infty)} $ is defined at $ f(c)\in f(\gamma) $ and coincides with $ Df_{c}(\dot\gamma(c)) $. \[unique critical\] For every $ a>a^{*} $, every $ z\in \overline{W^{u}(p)}\cap\Delta_{\varepsilon}\cap\Omega$ lies on a $ C^{1+1} $ admissible curve $ \gamma $ which is the limit of $ C^{2} $ admissible curves in $ W^{u}(p) $ and $ \gamma $ contains a unique critical point $ c(\gamma) $ with $ d(z, c)>0 $. We split the proof into two parts. ### Every point lies on an admissible curve {#every-point-lies-on-an-admissible-curve .unnumbered} We show first of all that every point $ z\in \overline{W^{u}(p)}\cap\Delta_{\varepsilon}\cap\Omega $ lies on a $ C^{1+1} $ admissible curve which is the limit of $ C^{2} $ admissible curves in $ W^{u}(p) $. Let $ z\in \overline{W^{u}(p)}\cap\Delta_{\varepsilon}\cap\Omega $ and let $ z_{n}\to z $ be a sequence with $ z_{n}\in W^{u}(p)\cap\Delta_{\varepsilon}\cap\Omega $. By Proposition \[recovering\] each $ z_{n} $ belongs to a long admissible curve $ \gamma_{n}\subset W^{u}(p) $. We can write these as functions $ \gamma_{n}:I\to \mathbb R $ with $ I=[-\varepsilon, \varepsilon] $ and suppose that they converge pointwise to $ \gamma: I \to \mathbb R$. Since $ I $ is compact and $ \gamma_{n}, \dot\gamma_{n} $ are bounded and equicontinuous sequences, we have that $ \gamma $ is $ C^{1} $ and $ \gamma_{n}\to \gamma $ in the $ C^{1} $ topology. To see that $ \dot\gamma $ is Lipschitz, let $ x,y \in I $ and observe that each $ \dot\gamma_{n} $ is a Lipschitz function with uniformly bounded Lipschitz constant $ \alpha $. Then we have $ |\dot\gamma_{n}(x)-\dot\gamma_{n}(y)|\leq \alpha |x-y| $ and hence, passing to the limit, $ |\dot\gamma(x)-\dot\gamma(y)|\leq \alpha |x-y| $. ### Every admissible curve contains a critical point {#every-admissible-curve-contains-a-critical-point .unnumbered} We now show that any such curve $ \gamma $ contains a unique critical point.
We show first that it must contain at most one, and then argue that it must contain at least one. Let $ \theta (\gamma_{n}(t)) $ be the angle between the vectors $ Df_{(t, \gamma_{n}(t))}(1, \gamma'_{n}(t)) $ and $ e^{(\infty)}(f(t, \gamma_{n}(t))) $. Since the image of each admissible curve is quadratic with respect to $ \mathcal E^{(\infty)} $ we have that $ \theta (\gamma_{n}(t)) $ has a strictly non-zero derivative, and therefore at most one root, corresponding to a point of tangency between $ f(\gamma_{n}) $ and $ \mathcal E^{(\infty)} $. Since $ \gamma_{n}\to\gamma $ in the $ C^{1} $ topology, we have that $ \theta(\gamma(t)) $ also has strictly non-zero derivative, and thus at most one root, also corresponding to a point of tangency between $ f(\gamma) $ and $ \mathcal E^{(\infty)} $. To see that such a point exists, observe that if $ a> a^{*} $ then the graph of $ \gamma $ crosses the boundary of $ \Delta $ twice and $ f(\gamma\cap\Delta) $ is outside $ \mathcal D $, where the foliation $ \mathcal E^{(\infty)} $ is well defined, and the extreme points of $ f(\gamma\cap\Delta) $ both lie on a piece of $ W^{s}(q) $ which is a leaf of the foliation $ \mathcal E^{(\infty)} $. This implies that there exists a point outside the interior of $ \mathcal D $ where $ f(\gamma) $ is tangent to $ \mathcal E^{(\infty)} $. Lemma \[unique critical\] allows us to define a canonical set $ \mathcal C_{a} $ of *critical points* as the union of all critical points $ c(\gamma) $ over all $ C^{1+1} $ admissible curves $ \gamma $ which are $ C^{1} $ limits of long admissible curves of $ W^{u}\cap \Delta_{\varepsilon} $. In the next Lemma we show that this set is bounded away from the set of nonwandering points. \[crit\] For all $ a> a^{*} $ we have $ d(\mathcal C_{a}, \Omega) > 0 $. We emphasize that $ d(\mathcal C_{a}, \Omega) $ is not uniformly bounded in the parameter. The constant $ N_{a} $ in the Proposition will be defined below in terms of $ d(\mathcal C_{a}, \Omega) $.
Notice first of all that $ \mathcal C_{a}\subset\Delta_{\varepsilon} $ and thus in particular is bounded. Let $ c_{k}=c(\gamma_{k}) $ be a sequence converging to some point $ c $. We need to show that $ c\in\mathcal C_{a} $. Since each $ \gamma_{k} $ is the limit of long admissible curves, we can consider sequences $ \gamma_{k}^{(n)}\to \gamma_{k} $ for each $ k $. Using Lemma \[unique critical\] and the fact that $ \{\gamma_{k}^{(k)}\} $ converges pointwise to a curve $ \gamma $, we conclude that this convergence is in fact $ C^{1} $. Since $ \theta(\gamma_{k}^{(k)}(c_{k}))\to 0 $ we have that $ \theta (\gamma(c))=0 $ and this implies that $ c $ is a critical point as required. We have therefore shown that the critical set $ \mathcal C_{a} $ is compact. Since $ \Omega $ is also compact, it is sufficient to show that $ \mathcal C_{a}\cap\Omega = \emptyset $ to imply that they are at some positive distance apart. Disjointness follows from the observation that the image of a critical point is always outside $ \mathcal D $, while $ \Omega $ is an invariant set contained in $ \mathcal D $. By Lemma \[crit\] and the uniform approximation of the critical set $ \mathcal C $ by the finite order critical sets $ \mathcal C^{(n)} $, there exists $ N_{a} $ sufficiently large so that the following two conditions hold (using also $ \lambda < \log 3 $): $$\label{Ndef} d(\mathcal C^{(N_{a})}_{a}, \mathcal C_{a}) < d (\mathcal C_{a}, \Omega) /2 \ \ \text{ and } \ \ 3^{N_{a}} d(\mathcal C^{(N_{a})}_{a}, \Omega) \geq e^{\lambda N_{a}}.$$ Now consider $ z\in \Delta_{\varepsilon}\cap W^{u}(p)\cap \Omega $ and let $ n\geq 1 $ be such that $ f(z)\in \mathcal V_{n}\setminus \mathcal V_{n+1} $. Recall $ f(\Delta_{\varepsilon}) \subset \mathcal V_{k_{0}}$ and therefore such an $ n $ is always well defined except for those points which map exactly to the curve $ f^{-1}(W^{s}_{\delta}(q)) $ which forms the boundary between $ \mathcal V^{+} $ and $ \mathcal V^{-} $.
For these points we let $ n=+\infty $. Then we let $$n(z) = \min\{n, N_{a}\}.$$ If $ n \leq N_{a} $ the statement follows from Proposition \[recovering\]. Otherwise our choice of $ N_{a} $ in \eqref{Ndef} gives $$\|Df^{N_{a}}(v)\| \geq 3^{N_{a}}d (z, \mathcal C_{a}^{(N_{a})})\|v\| \geq 3^{N_{a}}d (\Omega, \mathcal C_{a}^{(N_{a})})\|v\| \geq 3^{N_{a}}d (\Omega, \mathcal C_{a})\|v\| /2 \geq e^{\lambda N_{a}}\|v\|.$$ The first inequality follows from Lemma \[binding\], the second one follows from $ z\in \Omega $, the third one follows from the first part of \eqref{Ndef}, and the last one follows from the second part of \eqref{Ndef}. Finally, considering the components of $ v $ in hyperbolic coordinates we have $ \|v_{N_{a}}^{(N_{a})}\|\leq (b/3)^{N_{a}} $ and $ \|h_{N_{a}}^{(N_{a})}\|\geq e^{\lambda N_{a}} $ and therefore $ Df^{N_{a}}(v) $ is almost horizontal. Uniform hyperbolicity on $ W^{u}(p) \protect$ {#UnifHyp} --------------------------------------------- The following Proposition is essentially a Corollary of Proposition \[N\]. However we state it separately as it gives an explicit construction of the constant $ C_{a} $ of hyperbolicity for each $ a> a^{*} $. Before stating the result we define this constant. Let $ C^{-}_{N_{a}}=\min\{\|(Df^{j}_{x})^{-1}\|^{-1}: x\in\mathcal D, 1\leq j \leq N_{a}\} $ and $ C^{+}_{N_{a}}=\max\{\|Df^{j}_{x} \|: x\in\mathcal D, 1\leq j \leq N_{a}\} $ denote the maximum possible contraction and the maximum possible expansion exhibited by any vector $ v\in T_{x}\mathbb R^{2} $ for any point $ x\in\mathcal D $ in at most $ N_{a} $ iterations. Letting $ C_{\varepsilon} $ denote the constant of hyperbolicity introduced earlier, we then let $$C_{a}=\min\left\{\frac{C_{\varepsilon}}{C^{+}_{N}}, \frac{C_{N}^{-}e^{-\lambda N}}{C^{+}_{N}} \right\}$$ \[HypProp1\] For all $ a> a^{*} $, all $ z\in W^{u}(p)\cap\Omega(f) $ and all vectors $ w $ tangent to $ W^{u}(p) $ at $ z $ we have $$\|Df^{n}_{z}(w) \| \geq C_{a} e^{\lambda n}\|w\|$$ for all $ n\geq 1 $.
Let $ z\in W^{u}(p)\cap\Omega(f) $ and let $ w $ be tangent to $ W^{u}(p) $ at $ z $. Since we do not assume anything about the location of $ z $, the vector $ w $ may or may not be almost horizontal. We distinguish these two possibilities. ### Case 1: $ w \protect$ almost horizontal {#case-1-w-protect-almost-horizontal .unnumbered} Let $ 0 \leq k_{1}< \ldots < k_{s} < n $ be the sequence of returns of the iterates of $ z $ to $ \Delta_{\varepsilon} $ (with $ k_{1}=0 $ if $ z\in\Delta_{\varepsilon } $ and $ k_{1}>0 $ otherwise). Then for each $ k_{i} $ we have an integer $ n_{i}=n(z_{k_{i}}) \leq N_{a}$ given by Proposition \[recovering\]. Then we can write $$k_{i+1}=k_{i}+n_{i}+q_{i}$$ where $ q_{i} $ is the number of iterates during which the point remains outside $ \Delta_{\varepsilon} $. From Proposition \[recovering\] properties and , the images of the vector at these iterates remain almost horizontal and we have $$\|Df^{k_{i}}_{z}(w)\|\geq e^{\lambda k_{i}}\|w\|$$ for all $ i=1,\ldots, s $, in particular for $ i=s $. If $ k_{s}+n_{s}\leq n $, applying the same estimate to the remaining iterates gives $ \|Df_{z}^{n}(w)\|\geq C_{\varepsilon}e^{\lambda n}\|w\| \geq C_{a}e^{\lambda n}\|w\| $ as required. If $ k_{s}+n_{s} > n $ we have expansion for the first $ k_{s} $ iterates which gives $ \|Df^{k_{s}}(w)\| \geq e^{\lambda k_{s}}\|w\| $. There follow $ n-k_{s} \leq n_{s}\leq N_{a} $ iterates (since $ n_{s}\leq N_{a} $) during which we can bound the contraction coarsely by the $ N_{a} $’th power of the maximum contraction in the region $ \mathcal D $, which, using $ k_{s}\geq n-N $, gives $$\|Df^{n}(w)\|\geq C_{N}^{-} e^{\lambda k_{s}}\|w\| \geq C_{N}^{-} e^{-\lambda N} e^{\lambda n}\|w\|.$$ ### Case 2: $ w \protect$ is not almost horizontal {#case-2-w-protect-is-not-almost-horizontal .unnumbered} We now suppose that $ w $ is not almost horizontal. There exists $$N_{a}\geq m > 0$$ such that $ f^{-m}(z) \in \Delta_{\varepsilon} $ and $ w_{-m}= Df^{-m}(w) $ is almost horizontal.
We show first of all that some preimage of $ z $ lies in $ \Delta_{\varepsilon} $. Indeed, $ z\in W^{u}(p) $ implies that $ z_{-n}\to p $ as $ n\to \infty $ and therefore that $ w_{-n} $ is almost horizontal for sufficiently large $ n $, since the local unstable manifold of $ p $ is admissible. By the invariance of the unstable conefield outside $ \Delta_{\varepsilon} $, images of $ w_{-n} $ remain almost horizontal unless some return to $ \Delta_{\varepsilon} $ takes place. Now let $ m>0 $ be the smallest integer such that $ f^{-m}(z)\in \Delta_{\varepsilon} $. Then $ w_{-m} $ is almost horizontal since every component of $ W^{u} $ in $ \Delta_{\varepsilon} $ is almost horizontal. From Proposition \[N\] it follows that $ Df_{z_{-m}}^{n(z_{-m})}(w_{-m}) $ is almost horizontal and therefore it follows necessarily that $ m\leq n(z_{-m}) \leq N_{a} $; otherwise $ w$ would be almost horizontal. Returning to the proof of the Proposition, we can now argue as in the previous case to obtain exponential growth starting from time $ -m $: $$\label{C} \|Df^{n}(w)\|=\|Df^{n+m}(w_{-m})\| \geq C' e^{\lambda (n+m)} \|w_{-m}\|$$ where $ C'= \min\{C_{\varepsilon}, C_{N}^{-} e^{-\lambda N}\}. $ Moreover $$\|w\|=\|Df^{m}(w_{-m})\|\leq \|Df^{m}\| \ \|w_{-m}\| \leq C_{N}^{+} \|w_{-m}\|.$$ Substituting this back into \eqref{C} completes the proof. Uniform hyperbolicity on $ \Omega \protect $ {#UnifHypOm} -------------------------------------------- We have obtained uniform expansion estimates for vectors tangent to $ W^{u}(p) $. In this section we show that these estimates can be extended to $ \Omega $. This part of the argument uses very little of the specific Hénon-like form of the map and therefore we state it in a more abstract and general context. \[HypProp2\] Let $ f: \mathbb R^{2}\to \mathbb R^{2} $ be a $ C^{1} $ diffeomorphism and $ \Omega $ a compact invariant set with $ |\det Df| < 1 $ on $ \Omega $.
Suppose that there exists some invariant submanifold $ W $ dense in $ \Omega $ and constants $ C, \lambda > 0 $ such that $ \|Df^{n}_{z}(v)\|\geq Ce^{\lambda n}\|v\| $ for all $ n\geq 1 $, all $ z\in W\cap \Omega $ and all $ v $ tangent to $ W $. Then $ \Omega $ is uniformly hyperbolic with hyperbolic constants $ C $ and $ \lambda $. Proposition \[HypProp2\] completes the proof of part $ (a) $ of the Theorem and shows that the rates of expansion and contraction admit uniform bounds independent of the parameter. We shall show that $ \Omega $ is uniformly hyperbolic by constructing an invariant hyperbolic splitting $ E^{s}_{z}\oplus E^{u}_{z} $ at every point of $ \Omega $ and then showing that this splitting is continuous. We carry out this construction in several steps. The starting point is the observation that $ E^{u}_{z} $ is already given by the tangent direction to $ W $ for all points $ z\in \Omega\cap W $. \[limit\] For any $ z\in\Omega $ and any sequence $ z_{j}\in W $ with $ z_{j}\to z $, the sequence $ E^{u}(z_{j}) $ converges to a (unique) limit direction $ E^{u}(z) $. Each vector $ v\in E^{u}(z) $ satisfies $$\|Df_{z}^{n}(v) \| \geq C e^{\lambda n}\|v\| \quad \text{ and }\quad \|Df_{z}^{-n}(v) \|\leq C^{-1}e^{-\lambda n}\|v\|$$ for all $ n\geq 1 $. Suppose $ z\in\Omega $ and let $ z_{j}\in W $ be a sequence with $ z_{j} \to z $. Consider the sequence of corresponding directions $ E^{u}(z_{j}) $. By compactness (of the space $ \mathbb S^{1} $ of possible directions) there must exist some subsequence $ z_{j_{i}} $ such that the corresponding directions $ E^{u}_{j_{i}} $ converge to some direction which we call $ E^{u}(z) $. Notice that a priori this direction is not unique since it depends on the choice of subsequence. We shall show first that the forward expansion and backward contraction estimates hold and then show that this actually implies uniqueness. Let $ v\in E^{u}_{z} $ and $ v_{j_{i}}\in E^{u}_{z_{j_{i}}} $ be a sequence with $ v_{j_{i}} \to v $.
Then, for each $ n\in\mathbb N $ we have, by the continuity of $ Df^{n} $, $$\|Df^{n}_{z_{j_{i}}}(v_{j_{i}})\|\to \|Df^{n}_{z}(v)\|.$$ By assumption we know that $ \|Df^{n}_{z_{j_{i}}}(v_{j_{i}})\|\geq Ce^{\lambda n}\|v_{j_{i}}\| $ and therefore $$\|Df^{n}_{z}(v)\| \geq Ce^{\lambda n}\|v\|-\varepsilon$$ for any $ \varepsilon > 0 $. Therefore $ \|Df^{n}_{z}(v)\| \geq Ce^{\lambda n}\|v\| $ and, since this holds for every $ n $, we have the required statement as far as the expansion in forward time is concerned. To prove contraction in backward time it is sufficient to prove it for points on $ W $ and then apply exactly the same approximation argument. For $ z\in W$ this follows immediately from the uniform expansivity assumption in forward time. Indeed, letting $ v_{-n} = Df_{z}^{-n}(v) $, the expansivity assumption gives $$\|v\|\geq \|Df^{n}_{z_{-n}}(v_{-n})\|\geq Ce^{\lambda n}\|v_{-n}\|$$ which immediately implies $ \|v_{-n}\| \leq C^{-1}e^{-\lambda n}\|v\| $. It remains to show uniqueness of $ E^{u}(z) $ for each $ z\in \Omega$. Suppose by contradiction that we could find two sequences $ z_{j}\to z $ and $ \tilde z_{j}\to z $ with corresponding directions $ E^{u}_{z_{j}} $ and $ E^{u}_{\tilde z_{j}} $ converging to two different directions $ E^{u}_{z} $ and $ \tilde E^{u}_{z} $. Let $ v \in E^{u}_{z} $ and $ \tilde v \in \tilde E^{u}_{z} $. Then $ v, \tilde v $ must be linearly independent and thus every vector $ w \in T_{z}\mathbb R^{2} $ can be written as a linear combination $ w = a_{1}v + a_{2}\tilde v$ for some $ a_{1}, a_{2} \in \mathbb R $. By linearity and the backward contraction estimates obtained above this implies that $$\|w_{-n}\| = \|Df^{-n}_{z}(w)\| \to 0$$ as $ n\to \infty $. Since $ w $ was arbitrary this implies that all vectors are shrinking to zero in backward time. But this is impossible since we have assumed that $ |\det Df| < 1 $ and thus $ |\det Df^{-1}| > 1 $ on $ \Omega $.
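The final contradiction can be spelled out with a short computation (a sketch, with the constants as above). From the backward contraction estimates, $$\|w_{-n}\| \leq |a_{1}|\,\|Df^{-n}_{z}(v)\| + |a_{2}|\,\|Df^{-n}_{z}(\tilde v)\| \leq \left(|a_{1}|\,\|v\| + |a_{2}|\,\|\tilde v\|\right) C^{-1}e^{-\lambda n} \to 0 ,$$ while, writing the determinant as a ratio of areas, $$|\det Df^{-n}_{z}| = \frac{\|Df^{-n}_{z}(v) \wedge Df^{-n}_{z}(\tilde v)\|}{\|v \wedge \tilde v\|} \leq \frac{\|Df^{-n}_{z}(v)\|\,\|Df^{-n}_{z}(\tilde v)\|}{\|v \wedge \tilde v\|} \to 0 ,$$ which is incompatible with $ |\det Df^{-1}| > 1 $ on $ \Omega $, since the latter gives $ |\det Df^{-n}_{z}| > 1 $ for all $ n\geq 1 $.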
\[unique\] At every point $ z \in \Omega $ there exists a unique tangent space splitting $ E^{u}_{z}\oplus E^{s}_{z} $ which is invariant by the derivative $ Df $ and which satisfies the standard uniform hyperbolicity expansion and contraction estimates. Lemma \[limit\] gives the expanding direction $ E^{u}_{z} $ of the splitting with the required hyperbolic expansion estimates in forward time. The invariance for points in $ W $ is automatic (since tangent directions to $ W $ are mapped to tangent directions to $ W $), and the invariance for general points follows immediately from the definition of $ E^{u}_{z}= \lim E^{u}_{z_{j}} $, the invariance of $ E^{u}_{z_{j}} $ for $ z_{j}\in W $, and the continuity of $ Df $. The stable direction $ E^{s}_{z} $ is given as the limit of the sequence $ e^{(n)} $ of vectors most contracted by $ Df_{z}^{n} $, as discussed in section \[shadowing\]. This also automatically gives the uniqueness and invariance. To complete the proof of the Proposition, we just need to show that the given tangent space splitting is continuous. This follows by standard arguments from the uniqueness proved in Corollary \[unique\]. Indeed, for any $ z\in\Omega $ and any sequence $ z_{j}\in\Omega $ with $ z_{j}\to z $, every limit point of the corresponding sequence of splittings $ E^{u}_{z_{j}}\oplus E^{s}_{z_{j}} $ must also be a splitting $ \tilde E^{u}_{z}\oplus \tilde E^{s}_{z} $. By approximation arguments identical to those used above it follows that this splitting must also satisfy the uniform hyperbolic contraction and expansion estimates. Therefore, by uniqueness, it must coincide with the existing splitting $ E^{u}_{z}\oplus E^{s}_{z} $. This completes the proof that $ \Omega $ is uniformly hyperbolic. Lyapunov exponents for $ f_{a^{*}}\protect$ {#Lyap} -------------------------------------------- Finally it remains to consider the dynamics of $ f_{a^{*}} $.
Recall that $ a^{*} $ is defined on page as the first parameter for which a tangency occurs between the compact parts of $ W^{s}(q) $ and $ W^{u}(p) $, see Figure \[tangency\] for the pictures in the two cases $ b>0 $ and $ b< 0 $. ![Invariant manifolds for $a=a^{*}$[]{data-label="tangency"}](tangency){width="10cm"} We need to show that, for $ a=a^{*} $, all Lyapunov exponents are uniformly bounded away from 0. We show that for each point $ z \in \Omega_{a^{*}}$ *not in the orbit of tangency* $ \mathcal T $ (it is not necessary to consider the orbit of tangency since this is a countable set without recurrence and can therefore not support any invariant probability measure) there exists a constant $ C_{z} $, a vector $ v_{z} $, and a sequence $ \{n_{i}\} $ with $ n_{i}\to\infty $ such that, for all $ i\geq 0 $, $$\|Df^{n_{i}}_{z}(v_{z})\|\geq C_{z} e^{\lambda n_{i}} \|v_{z}\|.$$ This is obviously true if the orbit of $ z $ never enters $ \Delta_{\varepsilon} $ in forward time or it enters $ \Delta_{\varepsilon} $ only a finite number of times. Indeed suppose that there exists some $ k $ such that $ f^{i}(z) \notin\Delta_{\varepsilon} $ for all $ i\geq k $. Then let $ w $ be a vector which is mapped to the horizontal vector $ w_{k}= Df_{z}^{k}(w) $ after $ k $ iterations. Then we have $ \| Df_{z_{k}}^{n}(w_{k}) \| \geq C_{\varepsilon} e^{\lambda n} \|w_{k}\|$ for all $ n\geq 1 $. This implies that there exists a constant $ C_{z} $ such that $ \| Df_{z}^{k+n}(w) \| \geq C_{z} e^{\lambda (k+n)} \|w\|$ for all $ n\geq 1 $. Otherwise there exists an infinite sequence $ 0< m_{1}< \cdots < m_{k}< \cdots$ such that $ m_{k}\to\infty $ and $ f^{m_{k}}(z)\in\Delta_{\varepsilon} $. By Lemma \[unique critical\], $ z_{m_{i}}=f^{m_{i}}(z) $ lies on either a $ C^{2} $ long admissible curve or a $ C^{1+1} $ long admissible curve which is the $ C^{1} $ limit of $ C^{2} $ long admissible curves in $ W^{u}(p) $.
Since $ z $ has an infinite number of returns to $ \Delta_{\varepsilon} $, this implies in particular that $ z\notin W^{s}(q) $ and so $ z_{m_{i}}\notin W^{s}(q) $; hence there exists $ n_{i}=n(z_{m_{i}}) $ such that $ f(z_{m_{i}}) \in \mathcal V_{n_{i}}\setminus\mathcal V_{n_{i}+1} $. Therefore *exactly* the same arguments as in Lemmas \[binding\] and \[limit\] show that for a vector $ v_{i} $ tangent to such an admissible curve $ \gamma $ at $ z_{m_{i}} $ we have $$\label{lasteq} \|Df^{n_{i}+1}_{z_{m_{i}}}(v_{i})\| \geq e^{\lambda (n_{i}+1)}\|v_{i}\|.$$ Notice that since the $ C^{1} $ limits of $ C^{2} $ admissible curves are unique, as proved above, we have $ v_{i+1}=Df^{m_{i+1}-m_{i}}(v_{i}) $. Then, by \eqref{lasteq} and this relation, we have $$\|Df^{m_{i}+n_{i}+1-m_{1}}(v_{1})\|\geq e^{\lambda(m_{i}+n_{i}+1-m_{1})}\|v_{1}\|.$$ Then we can define $ v_{z}= Df^{-m_{1}}(v_{1}) $ and we have $ \|Df^{m_{i}+n_{i}+1}(v_{z})\|\geq C_{z} e^{\lambda(m_{i}+n_{i}+1)}\|v_{z}\|$, where the constant $ C_{z} $ is required simply to compensate for the possible lack of expansion for the first $ n_{1} $ iterates. In particular it can be chosen by considering the maximum possible contraction along the orbit of $ z $ for the first $ n_{1} $ iterations $$C_{z} = \min_{\|v\|=1} \|Df^{n_{1}}_{z}(v)\|.$$ We have shown therefore that for each $ z\in\Omega $, $ \limsup_{n\to\infty} \frac{1}{n}\ln \|Df^{n}_{z}\|\geq \lambda. $ This clearly implies the same bound for the limit wherever it exists. In particular any point which is typical for some ergodic invariant probability measure, and for which therefore such a limit does exist, will have a positive Lyapunov exponent $ \geq \lambda $. By dissipativity this immediately implies also that the other Lyapunov exponent is negative and uniformly bounded away from 0, both in the dynamical and in the parameter space. [^1]: IR was partially supported by CAPES, FAPERJ (Brazil) and EPSRC UK.
YC was partially supported by NSF(10571130) and NCET of China, the Royal Society and EPSRC UK. SL was partially supported by NSF(10571130) and NCET of China. This work was carried out at Imperial College London and Suzhou University and we acknowledge the hospitality and support of these institutions. We would also like to thank the referee for a careful reading of the paper and several very useful suggestions which have improved the accuracy and presentation of the arguments.
hep-th/0409226\ September 2004 [**Moduli Entrapment with Primordial Black Holes**]{}\ ABSTRACT We argue that primordial black holes in the early universe can provide an efficient resolution of the Brustein-Steinhardt moduli overshoot problem in string cosmology. When the universe is created near the Planck scale, all the available states in the theory are excited by strong interactions and cosmological particle production. The heavy states are described in the low energy theory as a gas of electrically and magnetically charged black holes. This gas of black holes quickly captures the moduli which appear in the relation between black hole masses and charges, and slows them down with their [*vevs*]{} typically close to the Planck scale. From there, the modulus may slowly roll into a valley with a positive vacuum energy, where inflation may begin. The black hole gas will redshift away in the course of cosmic expansion, as inflation evicts the black holes out of the horizon. Inflation [@inflation] is at present the best framework for explaining the origin of our universe. However, there still are technical difficulties with implementing inflation in fundamental theory. Inflation needs light scalar fields with very flat potentials and positive vacuum energy, which are hard to obtain from first principles. Even if such degrees of freedom are present, there is still the problem of arranging favorable initial conditions [@initial], which ensure that the vacuum energy controlled by the light scalars dominates over other energy sources in the inflationary patch. One approach towards resolving this problem is the idea of eternal inflation [@steinh; @vilenkin; @chaotic] (for a recent review see [@alan]), which posits that after inflation starts in some regions of a huge “metauniverse" where the environmental conditions favor it, it will produce many universes as big as ours.
Recently, there has been progress in the direction of implementing eternal inflation in string theory. Backgrounds where eternal inflation can occur have been constructed in flux compactifications [@kklt; @kklmmt; @race]. These solutions naturally belong to the landscape of string vacua [@bopo; @Susskind; @douglas; @savas]. In fact, many of the older cosmological solutions in supergravity limits of string theory without future singularities [@oldsols] could be fitted at the foothills of the landscape, close to the supersymmetric limits. The landscape picture provides for many valleys where inflation can happen, with some of the moduli fields as inflatons. This would realize the old hope that string moduli may be inflatons [@bingai]. However, this approach may also resurrect the moduli overshooting problem, discussed by Brustein and Steinhardt [@bruste]. Depending on how they start, the moduli fields may acquire a lot of kinetic energy on the approach towards the inflationary minima and overshoot them, running off towards the supersymmetric regions of weak coupling. There the spacetime decompactifies and new dimensions of space open up. Thus in order to find a phenomenologically viable cosmology, it is essential to find a way to gently lower the moduli to where inflation can occur and the world remains four-dimensional for a long time. In this framework, the universe may be described as a compact space, with flat or negatively curved spatial slices [@andreicomp] (see also [@coulemartin]). It is created close to the Planck scale. Three of the spatial dimensions grew very big during inflation. The modulus which plays the role of the inflaton could start in the inflationary valley of the potential like in topological inflation [@race; @andreicomp].
However, in some regions the modulus may have started high up the slope, and normally it would have overshot the minimum as it rolled down; if instead there is enough radiation, the modulus could still get captured in the minimum thanks to the extra cosmological friction [@rasha]. The main ingredient of this mechanism has been observed in string cosmology earlier in [@dilatons; @tsey]. In the backgrounds dominated by conformal matter where $T^\mu{}_\mu = 0$ the source terms arising from the gravitational couplings of the moduli to matter vanish and moduli are quickly kinetically damped [@wetterich; @oldconf]. Their energy density dilutes as $\rho_h \sim 1/a^6$ for the homogeneous mode or as $\rho_i \sim 1/a^4$ for inhomogeneities [@banks], where $a$ is the cosmological scale factor. Some aspects of more phenomenologically motivated, but similar, mechanisms have also been discussed in [@tida; @bcc; @hsow]. We will present here a different mechanism for moduli entrapment. The idea is to use the source term coming from the interactions of the moduli with very heavy, nonrelativistic, charged states in the early universe to enhance the attraction of the inflationary basin. To illustrate the mechanism consider the following analogy with a marble and a bathtub. If one drops a marble alongside the steep wall of the bathtub, the marble will have acquired so much kinetic energy that it will fly over any of the small dimples in the tub’s floor and continue rolling around for a long time. However, if the tub is first filled with water, which slowly drains away, then the marble will lose its kinetic energy very rapidly thanks to the interactions with the water molecules. It will be captured quickly in a dimple on the bottom near the place of original impact. Then water will drain away, analogously to the redshifting of the cosmological matter contents, leaving the marble, or the modulus potential, in control of the evolution of the universe.
We will show that a gas of heavy states in string theory, described in the low energy supergravity limit as extremal black holes, can enforce the same effect on the runaway moduli of string compactifications. In this way, the gas of black holes, with high initial energy density, can provide a new mechanism for overcoming the Brustein-Steinhardt overshoot problem. In what follows we will work with a single dilaton-like scalar modulus, assuming that the theory has been compactified to four dimensions already. We believe that the same mechanism can be realized in more complicated scenarios. Then in the approach of [@andreicomp], the universe emerges somewhere on the landscape with an energy density close to the string scale. We will consider what happens with the regions where the modulus is not immediately placed in the inflationary plateau. Instead it starts far from the minimum along a steep slope and acquires a large kinetic energy. Because the spatial curvature of the universe is either zero or negative, it does not immediately collapse, but expands under the influence of the dominant sources of stress-energy. This separates the flatness problem from the Planck scale physics, and allows for the possibility that inflation may begin well below the Planck scale. At very high energies where the universe is born, strong interactions, and specifically gravitational particle production [@gpp; @zelsta] will excite all the modes in the spectrum of the theory. This includes the heavy states in the theory, with masses above the string scale. When their masses exceed the Planck scale, such states, with some given quantum numbers, are described as BPS black holes in the supergravity limit. Such states are known to locally fix the scalar field [*vevs*]{} on the horizons to values completely independent of the asymptotic geometry [@fixed]. The masses of these states depend explicitly on the expectation value of the scalar at infinity. 
To see this, recall the example of the BPS black hole solutions in heterotic string theory, described by the bosonic sector of the effective supergravity action [@gibbons; @klopp; @dura] $$S = \int d^4 x \sqrt{-g}\, e^{-\varphi} \left( R + (\partial \varphi)^2 - \frac{1}{2} F^2_{\mu\nu} \right) . \label{sugra}$$ A generic BPS soliton of (\[sugra\]) is described by an extremal black hole configuration with mass ${\cal M}$, electric charge ${\cal Q}$ and magnetic charge ${\cal P}$, which are related by [@gibbons; @klopp; @dura] $${\cal M} = \frac{1}{2} \left( |{\cal Q}|\, e^{\varphi/2} + |{\cal P}|\, e^{-\varphi/2} \right) , \label{mass}$$ where $\varphi$ is the dilaton [*vev*]{} infinitely far away from the hole, determining the asymptotic value of the string coupling $g_S = \exp(\varphi/2)$. This parameter is left completely undetermined by the local black hole dynamics. In the supersymmetric limit it is a modulus which can take any value because it is a flat direction of the theory. The charges ${\cal Q}$ and ${\cal P}$ are quantized in the units of the string scale $\sim 1/\ell_S$. A similar situation persists for the case of other moduli fields. If, for the sake of simplicity, we assume that all but one of the moduli are stabilized in the compactification to four dimensions (such as in [@kklt; @gkp]), the effective 4D action which describes the light modes of the theory is, in the Einstein frame, [@gibbons] $$S = \int d^4 x \sqrt{-g} \left( \frac{R}{2\kappa^2} - \frac{1}{2} (\partial \phi)^2 - e^{-{\tt g}_2 \kappa \phi} F^2_{\mu\nu} \right) , \label{dila}$$ where ${\tt g}_2$ is the modulus coupling, and $\kappa =1/M_{Pl}$ is the inverse Planck mass in 4D. A generic BPS soliton of the theory (\[dila\]) is described by an extremal black hole configuration with mass ${\cal M}$, electric charge ${\cal Q}$ and magnetic charge ${\cal P}$. In general, the solutions for the arbitrary charge assignment for ${\cal Q}, {\cal P}$ are not known exactly for an arbitrary value of ${\tt g}_2$ [@gibbons]. However, it is straightforward to solve (\[dila\]) for the black hole solutions with only electric, or only magnetic charge, in the extremal limit.
They are given by zero entropy, zero temperature configurations[^1] of the same global structure as those with ${\tt g}_2 = \sqrt{2}$ [@gibbons]. Their masses and charges are related according to[^2] $${\cal M}_q = |{\cal Q}|\, M_{Pl}\, e^{{\tt g}_2 \kappa \phi/2}\, , \qquad\qquad {\cal M}_p = |{\cal P}|\, M_{Pl}\, e^{-{\tt g}_2 \kappa \phi/2}\, , \label{masa}$$ where again $\phi$ is the scalar [*vev*]{} infinitely far away from the hole, determining the asymptotic value of the coupling constant $g = \exp({{\tt g}_2 \kappa \phi/2})$, and can take any value in the supersymmetric limit. In what follows we will work with the models where the Planck scale and the string scale are close together, so that asymptotically $g \sim {\cal O}(1)$, because typically the minima of the nonperturbative scalar modulus potential appear in this regime. Appropriately, as we will see in what follows, the black hole toy model works as a moduli stopper most efficiently precisely in this regime. The scalar minima in the mass function (\[mass\]) have prompted a suggestion [@renata] that moduli could acquire effective potentials generated by the virtual black hole loops, which can stabilize them. This idea is very interesting because it may incorporate some nonperturbative quantum gravity effects, even though it is difficult to reliably compute them. We still don’t know quantum gravity Feynman diagrams for black hole states. Instead of following this route, we will focus on a different application of the minima of (\[mass\]). We will consider an early universe environment where there is a large number of black holes with scalar-dependent masses as in (\[masa\]). Once there is a net energy density of them, with both electric and magnetic charges, their interactions with the scalar will produce a damping effect, trapping the scalar at [*vevs*]{} of order of the Planck scale (after we normalize the scalar field canonically).
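The trapping point can be read off directly from the mass formula (\[mass\]); a quick minimization sketch (schematic, for a single species with fixed charges ${\cal Q}, {\cal P}$): $$\frac{\partial {\cal M}}{\partial \varphi} = \frac{1}{4} \left( |{\cal Q}|\, e^{\varphi/2} - |{\cal P}|\, e^{-\varphi/2} \right) = 0 \qquad \Longrightarrow \qquad e^{\varphi_{*}} = \frac{|{\cal P}|}{|{\cal Q}|}\, , \qquad {\cal M}_{min} = \sqrt{|{\cal Q}|\,|{\cal P}|}\, .$$ Since the charges are quantized in string units, for comparable electric and magnetic populations the preferred coupling $g_S = e^{\varphi_{*}/2} = (|{\cal P}|/|{\cal Q}|)^{1/2}$ is of order unity, i.e. the modulus is held near Planckian [*vevs*]{}, as stated above.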
The electric charges block the modulus from running off to the strong coupling, where they are heavy, and the magnetic charges block it from going to the weak coupling, where they become heavy. Thus as long as the charges hang around, the modulus cannot easily escape to either side. This is akin to the environmentally induced stabilization effects of [@tida]. In what follows we will only consider the effects of the black hole gas populated with purely electric and purely magnetic charges. The dyonic solutions should also exist, and contribute as well. However in the regime where we will work, dyons will be generally heavier than monopoles. Because we assume a nearly thermal initial population of heavy states, we expect that the dyon contributions and effects will be exponentially suppressed relative to the monopole states, and we can safely ignore them. How do we describe this ensemble of black holes? It is well known that the extremal black hole states with like charges can be arranged in static arrays because the electromagnetic repulsions can neutralize the gravitational attraction between them. In the early universe these charges would neither remain static, nor would they all be of the same sign. Gravity respects gauge symmetries and so charge conservation would require the net charge to be zero. In general, the black hole states will be initially produced by strong (gravitational) interactions near the string scale [@gpp; @zelsta] both as “particles" and “antiparticles", with opposite charges, carrying both electric and magnetic charges. Hence we will approximate their initial abundances by the nearly thermal distribution law, which should be a sensible order of magnitude estimate. Systems of such black holes would be dynamical, since the forces between black holes would not cancel exactly, due to the presence of charges with both signs. Furthermore, outside of the black holes the space would not be in the vacuum, but in a roughly thermally excited state.
It is clear by CPT invariance, (\[masa\]), and the equivalence principle that both “particles" and “antiparticles" contribute equally to the stress-energy. States from all mass scales will contribute to the total stress-energy. If they meet and annihilate, their energy would be released as relativistic particles, which we will ignore later on. The effect of radiation on cosmological moduli dynamics has already been considered in [@dilatons; @tsey; @rasha]. While overall unstable, such arrays of black holes may become sufficiently separated so that the cosmic expansion prevents their complete annihilation. In this case, away from the static limit the leading order electromagnetic interactions can be neglected, and the leading order gravitational effects from the charges can be approximated by the stress-energy tensor of the fluid of massive particles [@modulispace]. Thus since they are heavy states, almost immediately after they are produced, surviving black holes will fall out of thermal equilibrium. Therefore their leading contribution to the energy density will come from their rest masses. With this in mind we can use the dilute gas approximation for the description of the evolution of these particles. This should be reasonable below (but close to) the string scale, and will rapidly become better as the universe expands. The initial energy density of black holes should be high, but still below the Planck scale, to guarantee the validity of the (super)gravity description where we can ignore higher derivative corrections in the effective action. We expect that while our approach is an approximation, it should be a reasonable one in most of the space when the initial black hole density is not too large. Namely, it is clear that close to any individual black hole the averaging procedure which we will embrace below must break down, because of the strong fields close to a hole [@fixed]-[@renata].
In that region, the solution is completely controlled by the local dynamics which fixes the modulus to a value specified by the charges and renders it insensitive to its value at infinity [@fixed; @renata]. However we are interested in showing how an initial population of black holes can help resolve the overshoot problem [@bruste] of string cosmology, which is the statement that the asymptotic value of the scalar is really a zero mode that may run off to the weak coupling regime. If the black hole density is not too large initially, so that their individual separation is e.g. $ \ga {\cal O}(10)$ Planck lengths (and initially $1/H_0 \ga \ell_{Pl}$), then the value of the modulus in the space surrounding them will rapidly approach its asymptotic value. This should then be a reasonable approximation, controlling the dynamics of the modulus in the interstitial space between the holes. Once inflation starts, this approximation will rapidly get better and better as time goes on. In a more realistic situation, however, the modulus will have a distribution of values determined by the individual black holes. There will be pockets with all kinds of values of couplings locally fixed by hole charges, and those which can be later stabilized by different minima of the landscape potential where inflation can occur will give rise to sibling universes with different low energy couplings[^3]. We can estimate the initial energy density of black holes as follows. The number density of the black hole states which are produced is Boltzmann-suppressed, $n_{\cal M} \propto \exp(-{\cal M}/T_0)$, where $T_0 \sim \sqrt{H_0 M_{Pl}}$ is the temperature of the universe when it is formed. We will take it to be $T_0 \la 1/\ell_S$, so that we can reliably use the supergravity limit for the description of cosmology. 
The total energy density which these massive states contribute will be [@wetterich; @dilatons] $$\rho_{BH} = \sum_{{\cal M}} n_{{\cal M}}\, {\cal M}\,,$$ \[density\] where we estimate ${\cal M}$ by (\[masa\]) and $n_{\cal M}$, as indicated above, roughly by the Boltzmann distribution when ${\cal M} \ge 1/\ell_S$. Thus the main contribution will come from the lightest black holes. From (\[masa\]) and $T_0 \simeq 1/\ell_S$ we see that the Boltzmann suppression factor of the initial number density scales as $n_{\cal M} \propto \exp\Bigl(- |{\cal Q}| M_{Pl} \ell_S g_{0} \Bigr)$ and $n_{\cal M} \propto \exp\Bigl(- |{\cal P}| M_{Pl} \ell_S/g_{0} \Bigr)$. For a fixed initial value of the coupling constant $g_{0}$, the masses obey ${\cal M} \sim g_{0} {\cal Q} M_{Pl} \sim {\cal P} M_{Pl} /g_{0} \gg 1/\ell_S$, and since $M_{Pl} \sim 1/(g_{0} \ell_S)$ they indeed are black holes, ${\cal M} \ga M_{Pl}$. Thus when $g_{0}$ is of the order of unity, the black holes can be described as a nonrelativistic fluid, whose pressure can be neglected, and whose energy density in an approximately FRW background can be estimated from (\[density\]) with the help of a saddle point approximation. In fact, because the mass in (\[masa\]) is a linear combination of the terms proportional to $g$ and $1/g$, assuming a roughly thermal initial distribution $n_0 \sim e^{-{{\cal M} (\phi_0) \ell_S}} / \ell_S^3$ we can sum up the magnetic and electric contributions to $\rho_{BH}$ separately. This yields the initial black hole energy density $$\rho_{BH\,0} = \bar n_0\, M_{Pl} \left( \langle q \rangle\, g_0 + \langle p \rangle\, g_0^{-1} \right),$$ \[energdense\] where $\bar n_0$ is the number density of the lightest black holes in the ensemble, and $\langle q \rangle = \sum_{\cal Q} |{\cal Q}| \exp(-|{\cal Q}| {M_{Pl}}\ell_S g_0)/\bar n_0$ and $\langle p \rangle$ are ensemble-averaged values of ${|{\cal Q}|}, {|{\cal P}|}$ respectively, obeying roughly $\langle q \rangle \sim \langle p \rangle \sim {\cal O}(1)$ when the initial value of $g$ is not too far from unity. 
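Since the sum in (\[density\]) is dominated by the lightest states, the ensemble averages $\langle q \rangle$, $\langle p \rangle$ behave like means of a geometric distribution. The following small numerical sketch (with a hypothetical charge cutoff, and with the combination $M_{Pl}\,\ell_S\, g_0$ lumped into a single parameter $c$) illustrates that they are ${\cal O}(1)$ for $c \sim 1$:

```python
import math

def ensemble_average(c, qmax=200):
    """Boltzmann-weighted average |Q| over charges Q = 1..qmax with weight
    exp(-|Q| c), where c stands for the combination M_Pl * l_S * g_0.
    (Hypothetical cutoff qmax; the sum converges long before it.)"""
    weights = [math.exp(-q * c) for q in range(1, qmax + 1)]
    return sum(q * w for q, w in zip(range(1, qmax + 1), weights)) / sum(weights)

# For c ~ 1 the average is ~ 1/(1 - e^{-1}) ~ 1.58, an O(1) number dominated
# by the lightest (|Q| = 1) black holes; larger c pushes it toward 1.
q_avg = ensemble_average(1.0)
```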
As the universe evolves undergoing cosmic expansion, the energy density of the black hole gas will change in two ways: [*i)*]{} it will redshift away according to the usual $\sim 1/a^3$ law for massive particles, since we are ignoring the black hole interactions (we expect that this is justified for heavy black holes when they are sufficiently separated [@modulispace]) and [*ii)*]{} the evolution of the modulus will change the coupling and so the mass of the black holes, as is clear from (\[masa\]). When the evolution is smooth, a good approximation accounting for these two effects is to represent the energy density as a function of the scale factor and the modulus [*vev*]{} as [@wetterich; @oldconf] $$\rho_{BH} = \bar n_0\, M_{Pl} \left(\frac{a_0}{a}\right)^3 \left( \langle q \rangle\, e^{{\tt g}_2 \kappa \phi/2} + \langle p \rangle\, e^{-{\tt g}_2 \kappa \phi/2} \right).$$ \[energdens\] It is convenient to define $\phi_*$ by $\exp({{\tt g}_2 \kappa \phi_*}) = {\langle p \rangle}/{\langle q \rangle}$. Then the formula for the energy density of black holes becomes $$\rho_{BH} = \frac{2\rho_0}{a^3}\, \cosh\left(\frac{{\tt g}_2 \kappa (\phi-\phi_*)}{2}\right),$$ \[energydens\] where $\rho_0 = \sqrt{\langle p\rangle\,\langle q \rangle}\, \bar n_0 a^3_0 M_{Pl}$. Here we allow for the change in $\rho_{BH}$ via a subsequent adiabatic evolution of $\phi$ away from its initial value. In the limits $g_{0} \gg 1$ and $g_{0} \ll 1$ the number densities of black holes with large electric charge and with large magnetic charge, respectively, are exponentially suppressed. We will comment on these limits later on. The energy density of black holes (\[energydens\]) will appear as a source in the Friedmann equation governing the evolution of the scale factor of the universe $a$. In the Einstein frame, this equation is $$3 M^2_P H^2 + 3 M^2_P\, \frac{k}{a^2} = \frac{{\dot \phi}^2}{2} + V(\phi) + \rho_{BH} + \rho_{rad}\,,$$ \[friedman\] where we have included for completeness the contribution from all relativistic particles in the universe $\rho_{rad}$. 
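The step from (\[energdens\]) to (\[energydens\]) is just the hyperbolic identity $\langle q \rangle e^{x/2} + \langle p \rangle e^{-x/2} = 2\sqrt{\langle p \rangle \langle q \rangle}\,\cosh[(x-x_*)/2]$ with $e^{x_*} = \langle p \rangle / \langle q \rangle$. A quick numerical check, with hypothetical ${\cal O}(1)$ values for the ensemble averages:

```python
import math

# Hypothetical O(1) ensemble averages and an arbitrary field value (illustration only).
q_avg, p_avg = 1.3, 0.7
g2, kappa, phi = -math.sqrt(2.0 / 3.0), 1.0, 0.4

# phi_* is defined through exp(g2 * kappa * phi_*) = <p>/<q>.
phi_star = math.log(p_avg / q_avg) / (g2 * kappa)

# Two equivalent forms of the phi-dependent factor in rho_BH:
lhs = q_avg * math.exp(g2 * kappa * phi / 2) + p_avg * math.exp(-g2 * kappa * phi / 2)
rhs = 2.0 * math.sqrt(p_avg * q_avg) * math.cosh(g2 * kappa * (phi - phi_star) / 2)
```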
We stress that we restrict the spatial curvature to be $k = 0,-1$ only thanks to the arguments of [@andreicomp], which strongly favor these values at high densities. Thus the collapse is averted. The kinetic and potential contributions from the scalar are included as ${\dot \phi^2}/{2}$ and $V(\phi)$ respectively. Now, to find how the black hole gas affects the modulus, we need the equation of motion for its zero mode. A simple way to obtain it is to use the second order Einstein equation for the scale factor, and resort to the Bianchi identity to find $\ddot \phi$ [@wetterich]. The equation for $\ddot a$ is, at the level of our approximations where $p_{BH} \simeq 0$ and $p_{rad} = \rho_{rad}/3$, $$\frac{\ddot a}{a} = -\,\frac{1}{6 M^2_P} \left(2 {\dot \phi}^2 - 2V(\phi) + \rho_{BH} + \frac43\, \rho_{rad} \right),$$ \[ddota\] and hence taking the first derivative of (\[friedman\]), using (\[energydens\]) and eliminating $\ddot a$ from (\[ddota\]) yields $$\ddot \phi + 3H \dot \phi + \frac{\partial V}{\partial \phi} + \frac{\partial \rho_{BH}(a,\phi)}{\partial \phi} = 0\,.$$ \[ddotdil\] This is the master equation which controls the cosmological dynamics of the scalar modulus in this problem. Let us now turn to the dynamical evolution of the universe with a modulus $\phi$ as governed by equations (\[friedman\]), (\[ddota\]) and (\[ddotdil\]). First ignore the matter contributions. A typical potential for the moduli may have some local minima, but is very steep on one side, and asymptotes as an exponential function to zero on the other side of the region where the minima lie. An example is provided by a potential given in [@kklt] $$V(\phi) = \frac{a A e^{-a\sigma}}{2\sigma^2} \left( \frac{\sigma a A}{3}\, e^{-a\sigma} + W_0 + A e^{-a\sigma} \right) + \frac{D}{\sigma^3}\,,$$ \[potential\] where $\sigma=\exp\left(\sqrt{2/3} \, \kappa \, \phi\right)$. When we consider detailed examples of the scalar evolution below, we will take $W_0=-10^{-4}$, $A=1$, $a=0.1$, $D=3\times 10^{-9}$ in units of ${M_{Pl}}$ following [@kklt], which gives a local minimum for the potential at $\sim \left(10^{-4}{M_{Pl}}\right)^4$. We plot this potential in Fig. (\[fig:pot\]). 
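As a rough cross-check of the quoted numbers, one can scan the potential directly. The sketch below assumes the standard KKLT form of the potential from [@kklt] (with a $D/\sigma^3$ uplift term) at the quoted parameter values, and only verifies that a shallow local minimum exists at $\sigma \sim {\cal O}(100)$ in Planck units:

```python
import math

# Assumed standard KKLT form of the potential (D/sigma^3 uplift), Planck units,
# with the parameter values quoted in the text.
W0, A, a, D = -1.0e-4, 1.0, 0.1, 3.0e-9

def V(sigma):
    e = math.exp(-a * sigma)
    return (a * A * e / (2.0 * sigma**2)) * (sigma * a * A * e / 3.0 + W0 + A * e) \
        + D / sigma**3

# Coarse scan for an interior local minimum.
sigmas = [float(s) for s in range(60, 251)]
vals = [V(s) for s in sigmas]
i_min = min(range(1, len(vals) - 1), key=lambda i: vals[i])
is_local_min = vals[i_min] < vals[i_min - 1] and vals[i_min] < vals[i_min + 1]
```

The minimum found this way sits at $\sigma \sim {\cal O}(100)$, at a vacuum energy many orders of magnitude below the Planck density, consistent with the smallness quoted above.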
The direction of $\phi \rightarrow \infty$ corresponds to the weak coupling limit, where the extra dimensions open up and the moduli are the zero modes from the higher-dimensional graviton multiplet. If the modulus is trapped in a minimum, the solution will approximate a 4D world for a long time, controlled by the tunnelling rate through the barrier, and allow an inflationary regime [@kklt]. However, for a generic initial value of the modulus in the strong coupling, the modulus will fall down the potential and the evolution will be dominated by its very large initial kinetic energy. The potential will remain subleading because it is too steep: its initial role is to just start the phase of kinetic domination. By the time the modulus reaches a minimum, it will have acquired too much kinetic energy from rolling down such a steep potential. As a result the modulus will skip over the barrier and continue rolling towards the weak coupling. This is the origin of the overshoot problem [@bruste]. If such compactifications are to avoid special initial conditions, they need a dynamical mechanism which will arrest the modulus before the flyover. Otherwise, it will be extremely hard to see how to ever keep the modulus around the minimum and start early inflation. The radiation terms $\propto \rho_{rad}$ in (\[friedman\]), (\[ddota\]) can help to slow down the modulus. Their damping effects have been studied in [@rasha] and have been observed earlier in [@dilatons; @tsey]. Because of the conformal symmetry, the trace of the stress-energy tensor of radiation vanishes, and so the radiation does not source the modulus. This is manifest in (\[ddotdil\]) where the radiation terms are absent. Then because radiation scales only as $\rho \sim 1/a^4$ it will overtake the scalar kinetic energy, which will damp out as $1/a^6$, and the modulus will stop. We note that the curvature term $k/a^2$ will work in a similar vein, but even more efficiently than radiation, if $k=-1$. 
In that case, the curvature term will begin to dominate the evolution of the universe soon after it comes into existence, and lead to the linear expansion $a \sim t$. This will dilute the kinetic term of the scalar even faster than radiation. Thus if either of these terms dominates early on, and the scalar does not overshoot the minimum before it stops to control $H$, it can get caught and inflation could begin. However, in the case of radiation, since the black hole density redshifts only as $1/a^3$, it will overtake radiation within a few Hubble times after the beginning, and if initial $k$ is small or zero, the black hole gas will play the key role in the evolution of the modulus. In what follows we will therefore ignore the $\rho_{rad}$ and $k/a^2$ terms, and instead focus on the black hole contributions $\propto \rho_{BH}$. To leading order, the effects of the interactions of the modulus with the black hole gas $\rho_{BH}$ can be described as an environment-induced mass term, similar to the interactions discussed in [@tida], and more recently to the chameleon fields [@justin] (see also [@gases]). The presence of this new mass term $\propto \rho_{BH}$ suggests a new method for capturing the scalar in a potential minimum. The interactions with the black hole gas pull it towards $\phi_*$, where it waits for the contributions from the potential $V$ to overtake the black hole terms. When $\phi_*$ is separated from the supersymmetric region by a barrier in $V$, the scalar will eventually settle into a minimum of $V$ where inflation can take place. It will not overshoot the barrier since its kinetic energy will be spent against the black hole mass pumping. However, this effect in general is not always enough to stabilize the scalar. Indeed, if the gas gets diluted too fast, the interactions will become too weak to dissipate the scalar kinetic energy. This can be seen immediately from the comparison with simple particle mechanics with a time-dependent mass. 
For example, if one considers a pendulum with a mass term $m^2 = \alpha/t^2$, one finds that the motion is very unstable, and there are modes which move the pendulum away from the minimum at zero. However, in our case the black holes do not dilute dangerously fast! To see this, consider a borough of the landscape where the modulus $\phi$ measures the size of the 6D space of a warped compactification of the type IIB theory according to the prescription of [@kklt; @gkp]. Then the gauge field in (\[dila\]) can arise from reducing a 10D 3-form such that one index is in the internal space, and so the corresponding choice for ${\tt g}_2$ is ${\tt g}_2 = -\sqrt{2/3}$. Recall that the weak coupling is recovered in the limit $\phi \rightarrow \infty$. Consider (\[ddotdil\]) with these parameters when the scalar is close to the effective minimum at $\phi_*$. The small perturbations $\vartheta = \phi - \phi_*$ obey the linearized version of (\[ddotdil\]). After black holes start to dominate in (\[friedman\]), the solution behaves as $a \propto t^{2/3}$ and $\rho_{BH} = 3M^2_P H^2 = 4 M^2_P/(3t^2)$. In this limit the linearized equation becomes $$t^2\, \ddot \vartheta + 2 t\, \dot \vartheta + \frac29\, \vartheta = 0\,.$$ \[lineq\] The solutions for $\phi$ then are $$\phi = \phi_* + \frac{c_1}{t^{1/3}} + \frac{c_2}{t^{2/3}}\,,$$ \[linsols\] with constant $c_{1,2}$, and so thanks to Hubble damping the evolution is stable under small perturbations. This is the key ingredient of our mechanism, which guarantees that the scalar entrapment to $\phi_* = M_{Pl} \ln( g^2_{0} {\langle p \rangle}/{\langle q \rangle}) $ is gentle enough to ensure the loss of the scalar kinetic energy. We have studied the capture of the modulus by the black hole gas numerically in order to verify the stability of the entrapment in the nonlinear regime. In a typical case, the evolution is depicted in Fig. (\[fig:one\]). 
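The stability claimed for (\[lineq\]) follows from its indicial equation: substituting $\vartheta \propto t^{\alpha}$ into the Euler equation gives $\alpha(\alpha-1) + 2\alpha + 2/9 = \alpha^2 + \alpha + 2/9 = 0$, whose roots are both negative. A trivial numerical check:

```python
import math

# Indicial equation for t^2 theta'' + 2 t theta' + (2/9) theta = 0,
# obtained by substituting theta = t**alpha: alpha^2 + alpha + 2/9 = 0.
disc = 1.0 - 4.0 * (2.0 / 9.0)                        # discriminant = 1/9
roots = [(-1.0 + s * math.sqrt(disc)) / 2.0 for s in (1.0, -1.0)]
# Both roots are negative (-1/3 and -2/3): small perturbations decay in time.
```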
We have taken the initial density of black holes to be $\sim 10^{-3} {M_{Pl}}^4$, a number close to, but safely below the Planck scale, in accordance with our assumptions that $g_0 \sim {\cal O}(1)$, the validity of the (super)gravity approximation and that the universe emerged near the Planck scale. Since the initial effects of the potential are merely to induce a large kinetic energy of the modulus, early on we can replace it by picking a large initial value of $\dot \phi \sim M^2_{Pl}$ and setting $V=0$. Further, because of the Boltzmann suppression in (\[density\]) for massive states, it is natural to take the initial value of the black hole energy density to be somewhat lower than $M_{Pl}^4$. In this case the modulus kinetic energy will start dominating the universe. However it dilutes fast, and the black hole density starts to contribute quickly. We have assumed here that the initial value of $g$ is of the order of unity, so that the location of the attractor for $\phi$ is near $M_{Pl}$. Clearly, the black holes trap the modulus at a value $\phi_*$ and the modulus kinetic energy quickly becomes subdominant. It is possible to verify this behavior analytically. An approximate solution of eqs. (\[friedman\]), (\[ddota\]), (\[ddotdil\]) initially is given by the FRW cosmology dominated by a massless scalar, where $\phi(t) \sim \sqrt{\frac{2}{3}}\,{M_{Pl}}\, \ln t$, and $a(t) \sim t^{1/3}$. This ends when the kinetic energy of the scalar redshifts down to the energy density of the black hole gas, roughly at a time $t_1\simeq ({M_{Pl}}/\dot\phi_0) ({\dot\phi_0^2}/{\rho_{BH~0}})^{3/4}$. Since $\rho_{BH~0}$ is below the Planckian density, $\phi$ will have rolled over several units of Planck mass by the time the black holes begin to trap it, but not a huge amount. The general dynamics with the black hole terms is not tractable analytically, however we can follow the approach towards the attractor at $\phi_*$ as long as $\phi-\phi_*\gg {M_{Pl}}$. 
In this case we can approximate the $\cosh$ and $\sinh$ functions by exponentials, and find a solution $$\begin{aligned} \phi\left(t\right)&=&\bar\phi-\frac{2}{5}\,{M_{Pl}}\,\log\left(1+\mu\,t\right)\,,\nonumber\\ a\left(t\right)&=&a_0\,\left(1+\mu\,t\right)^{8/15}\,,\nonumber\\ {\mathrm e}^{\bar\phi/\sqrt{6}\,{M_{Pl}}}&=&\frac{72}{25}\frac{a_0^3\,\mu^2\,{M_{Pl}}^2}{\rho_0}\,.\end{aligned}$$ The dimensionful parameter $\mu$ is determined by the scalar speed $\dot \phi$ at the instant when the black holes begin to take over from the scalar kinetic energy. This limiting behavior is consistent with the numerical integration displayed in Fig. (\[fig:one\]), describing a slow approach of $\phi$ towards the attractor $\phi_*$. After some time in this regime, the black hole density will have redshifted to near the scale of the potential $V\left(\phi\right)$. If the initial value of $g$ was close to unity, so that ${\langle p \rangle}/{\langle q \rangle} \simeq {\cal O}(1)$, the modulus will remain in the region of $\phi \sim \phi_* \sim {\cal O}({M_{Pl}})$ where the minima of $V(\phi)$ are located. The modulus will slide down the slope of $V\left(\phi\right)$ gently, eventually settling down in the minimum and starting an inflationary era (if the value of $V$ at the minimum is nonzero, or if the approach to the minimum is along a very flat slope, where $m_\phi < H$). We plot a typical evolution of the modulus towards the minimum in Fig. (\[fig:two\]). This kind of behavior is generic whenever the initial black hole density is not too small, and when $\phi_* \sim {\cal O}({M_{Pl}})$, so that the black hole $\phi_*$ point lies in the basin of attraction of a minimum of $V$. In the case when the initial value of $\rho_{BH}$ is too small, or if the attractor $\phi_*$ is far too far in the weak coupling regime, the modulus will not be trapped in the minimum of the potential $V\left(\phi\right)$. 
Either the potential will become important too early, before the black hole gas manages to slow down the modulus, or $\phi_*$ will lie outside of the basin of attraction of the minima of $V$. In each of these cases the modulus will overshoot the minimum at $\phi_{\mathrm {min}}$ and will forever continue rolling down the slope $V\left(\phi\right)$ towards the weak coupling. In such situations the Brustein-Steinhardt problem cannot be avoided and inflation will not start. The system will eventually converge to a tracking regime (analogous to the one observed for exponential potentials in quintessence models). A typical representative of this behavior is displayed in Fig. (\[fig:three\]). However, in those cases when the universe is created very close to the Planck scale, as would typically occur in, for example, the approach advocated in [@andreicomp], the initial black hole density will be significant. This is simply because of equipartition and the equivalence principle. Hence the moduli would be captured quickly in the black hole attractor $\phi_*$. It still remains that in the regions of the landscape where the initial value of the coupling was too small, $\phi_*$ would end up outside of the basin of attraction and the modulus would still run off to the weak coupling eventually. However, this behavior is endemic for the small initial coupling regardless of the subsequent dynamics. If the coupling had started off too small, the modulus would never have had a chance of ending in the minimum of $V(\phi)$ anyway. It would always fall out of the basin of attraction of the minima of $V$. Thus this initial regime is of very little interest to start with. One has to make some choices of the initial conditions, and the whole purpose of the resolutions of the overshoot problem here and in [@rasha; @dilatons; @tsey; @bcc; @hsow] is to demonstrate that the favorable initial region may be [*likely*]{}: in this case, it is a [*finite*]{} interval of the [*vevs*]{} of $\phi$. 
In the landscape, at very high scales where the influence of the potential is negligible, the initial value of $\phi$ may be anything, and so there will be many regions where $\phi \sim {\cal O}({M_{Pl}})$. In these regions our mechanism will work in a natural way, and they will inflate. The [*a posteriori*]{} probability to be in such regions will then be large because they will become exponentially big after inflation. Once inflation starts, the gas of black holes will dilute exponentially quickly. By the time inflation is over, assuming it produces 65 e-folds or so, the number density of the black holes will drop by a factor of about $10^{90}$. If inflation happens near the GUT scale, say $V^{1/4} \sim 10^{15} {\rm GeV}$, the final number density of black holes at the end of inflation will be $$n_{BH~final} \simeq 10^{-90}\, \frac{V}{M_{Pl}} \sim 10^{-22}\ {\rm eV}^3\,.$$ \[findens\] After inflation, the universe will expand roughly by another factor of $10^{90}$ in volume until it reaches the present epoch. The number density of these primordial black holes today will therefore be no more than about $n_{BH~final}({\rm today}) \la 10^{-112} {\rm eV}^3$, leaving us with at most one such black hole per more than $10^{13}$ present horizon volumes. In other words, these primordial black holes will become just as efficiently diluted as the primordial monopoles in the original formulation of inflation [@inflation]. They will quietly go away after completing their task. To summarize, in this paper we have proposed a new method for resolving the Brustein-Steinhardt moduli overshoot problem [@bruste] in string cosmology. We find that a gas of primordial black holes in the early universe, where the mass of black holes depends on the modulus, can provide a transient attractor for the modulus. When such black holes are produced they will trap the modulus temporarily, and keep it within the basin of attraction of the minima of the nonperturbative modulus potential $V$. 
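The e-fold arithmetic here is elementary: the number density dilutes as $a^{-3} = e^{-3N}$ after $N$ e-folds, i.e. by $3N \log_{10} e$ decades (so 65 e-folds give $\sim 10^{85}$, and a round $10^{90}$ corresponds to just under 70):

```python
import math

def dilution_decades(n_efolds):
    """Decades of number-density dilution after n_efolds: n ~ a^{-3} = e^{-3N}."""
    return 3.0 * n_efolds * math.log10(math.e)

d65 = dilution_decades(65.0)   # ~ 84.7 decades, i.e. a factor ~ 10^{85}
d70 = dilution_decades(70.0)   # ~ 91.2: a round 10^{90} needs just under 70 e-folds
```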
As time goes on, the black hole density will redshift away, placing the modulus gently on the potential slope. Following this, the modulus will slowly settle into the minimum, where it can drive inflation. A key ingredient of our mechanism is that the universe should be created near the Planck scale, so that the heavy states are produced initially with non-negligible number density. Such an approach has been proposed recently in [@andreicomp], and our mechanism may be a natural ingredient for helping the modulus stop in the inflationary valleys in the landscape. After inflation has begun, the density of black holes redshifts away to exponentially small numbers, just like the density of heavy monopoles in the early models of inflation. These black holes are pushed outside of the current horizon. We stress that our mechanism relies mainly on the presence of heavy states in the theory whose mass depends on the modulus. It may be possible to realize a similar mechanism in other regimes, using different massive states. It would be interesting to consider such mechanisms in other cosmological models, where for example the initial density of massive states could be small, but the minima of the potential lie at very large [*vevs*]{}, such as in the theories with low scale unification. [**Acknowledgements**]{} We thank R. Brustein, S. Dimopoulos, R. Kallosh, L. Kofman, A. Linde and S. Thomas for useful discussions. NK and LS thank the Aspen Center for Physics for kind hospitality during the course of this work. The work of NK and LS was supported in part by the DOE Grant DE-FG03-91ER40674, in part by the NSF Grant PHY-0332258 and in part by a Research Innovation Award from the Research Corporation. [99]{} A. Guth, Phys. Rev. D [**23**]{} (1981) 347; A. Linde, Phys. Lett. B [**108**]{} (1982) 389; A. Albrecht and P. Steinhardt, Phys. Rev. Lett.  [**48**]{} (1982) 1220. A. D. Linde, Phys. Lett. B [**162**]{} (1985) 281; A. Albrecht, R. H. Brandenberger and R. 
Matzner, Phys. Rev. D [**35**]{} (1987) 429; S. W. Hawking and D. N. Page, Nucl. Phys. B [**298**]{} (1988) 789; A. Vilenkin, Phys. Rev. D [**37**]{} (1988) 888; D. S. Goldwirth and T. Piran, Phys. Rept. [**214**]{} (1992) 223; L. Dyson, M. Kleban and L. Susskind, JHEP [**0210**]{} (2002) 011; T. Banks, W. Fischler and S. Paban, JHEP [**0212**]{} (2002) 062. P. J. Steinhardt, in [**The Very Early Universe**]{}, eds: G. W. Gibbons, S. W. Hawking, and S. T. C. Siklos (Cambridge University Press, 1983), pp. 251–266. A. Vilenkin, Phys. Rev. D [**27**]{} (1983) 2848. A. Linde, Phys. Lett. B [**129**]{} (1983) 177; Mod. Phys. Lett. A [**1**]{} (1986) 81. A. H. Guth, Phys. Rept.  [**333**]{} (2000) 555. S. Kachru, R. Kallosh, A. Linde and S. P. Trivedi, Phys. Rev. D [**68**]{} (2003) 046005. S. Kachru, R. Kallosh, A. Linde, J. Maldacena, L. McAllister and S. P. Trivedi, JCAP [**0310**]{} (2003) 013. J. J. Blanco-Pillado, C. P. Burgess, J.M. Cline, C. Escoda, M.  Gomez-Reino, R. Kallosh, A. Linde and F. Quevedo, hep-th/0406230. R. Bousso and J. Polchinski, JHEP [**0006**]{} (2000) 006. L. Susskind, hep-th/0302219; hep-ph/0406197; B. Freivogel and L. Susskind, hep-th/0408133. M. R. Douglas, JHEP [**0305**]{} (2003) 046. S. Ashok and M. R. Douglas, JHEP [**0401**]{} (2004) 060; A. Giryavets, S. Kachru and P. K. Tripathy, JHEP [**0408**]{}, 002 (2004). N. Arkani-Hamed and S. Dimopoulos, hep-th/0405159; M. R. Douglas, hep-th/0405279; G. F. Giudice and A. Romanino, hep-ph/0406088; M. Dine, E. Gorbatov and S. Thomas, hep-th/0407043. K. Behrndt and S. Forste, Nucl. Phys. B [**430**]{} (1994) 441; A. Lukas, B. A. Ovrut and D. Waldram, Phys. Lett. B [**393**]{} (1997) 65; Nucl. Phys. B [**495**]{} (1997) 365; N. Kaloper, Phys. Rev. D [**55**]{} (1997) 3394; H. Lu, S. Mukherji, C. N. Pope and K. W. Xu, Phys. Rev. D [**55**]{} (1997) 7926; J. E. Lidsey, D. Wands and E. J. Copeland, Phys. Rept.  [**337**]{} (2000) 343. P. Binetruy and M. K. Gaillard, Phys. Rev. 
D [**34**]{} (1986) 3069. R. Brustein and P. J. Steinhardt, Phys. Lett. B [**302**]{} (1993) 196. A. Linde, hep-th/0408164. D. H. Coule and J. Martin, Phys. Rev. D [**61**]{} (2000) 063501. R. Brustein, S. P. de Alwis and P. Martens, hep-th/0408160. N. Kaloper and K. A. Olive, Astropart. Phys.  [**1**]{} (1993) 185. A. A. Tseytlin and C. Vafa, Nucl. Phys. B [**372**]{} (1992) 443; A. A. Tseytlin, hep-th/9206067. C. Wetterich, Nucl. Phys. B [**302**]{} (1988) 645; Nucl. Phys. B [**324**]{} (1989) 141. S. Kalara, N. Kaloper and K. A. Olive, Nucl. Phys. B [**341**]{} (1990) 252; J. A. Casas, J. Garcia-Bellido and M. Quiros, Nucl. Phys. B [**361**]{} (1991) 713. T. Banks, M. Berkooz, S. H. Shenker, G. W. Moore and P. J. Steinhardt, Phys. Rev. D [**52**]{} (1995) 3548. T. Damour and K. Nordtvedt, Phys. Rev. Lett.  [**70**]{} (1993) 2217; Phys. Rev. D [**48**]{} (1993) 3436; T. Damour and A. M. Polyakov, Nucl. Phys. B [**423**]{} (1994) 532; Gen. Rel. Grav.  [**26**]{} (1994) 1171. T. Barreiro, B. de Carlos and E. J. Copeland, Phys. Rev. D [**58**]{} (1998) 083513. G. Huey, P. J. Steinhardt, B. A. Ovrut and D. Waldram, Phys. Lett. B [**476**]{} (2000) 379. L. Parker, Phys. Rev. Lett.  [**21**]{} (1968) 562; Phys. Rev.  [**183**]{} (1969) 1057; Phys. Rev. D [**3**]{} (1971) 346. Y. B. Zeldovich and A. A. Starobinsky, Sov. Phys. JETP [**34**]{} (1972) 1159. S. Ferrara, R. Kallosh and A. Strominger, Phys. Rev. D [**52**]{} (1995) 5412; S. Ferrara and R. Kallosh, Phys. Rev. D [**54**]{} (1996) 1514; Phys. Rev. D [**54**]{} (1996) 1525. G. W. Gibbons, Nucl. Phys. B [**207**]{} (1982) 337; G. W. Gibbons and K. i. Maeda, Nucl. Phys. B [**298**]{} (1988) 741; D. Garfinkle, G. T. Horowitz and A. Strominger, Phys. Rev. D [**43**]{} (1991) 3140 \[Erratum-ibid. D [**45**]{} (1992) 3888\]. R. Kallosh, A. D. Linde, T. Ortin, A. W. Peet and A. Van Proeyen, Phys. Rev. D [**46**]{} (1992) 5278. M. J. Duff and J. Rahmfeld, Phys. Lett. B [**345**]{} (1995) 441; J. Rahmfeld, Phys. Lett. 
B [**372**]{} (1996) 198. A. V. Frolov and L. Kofman, JCAP [**0305**]{} (2003) 009. S. B. Giddings, S. Kachru and J. Polchinski, Phys. Rev. D [**66**]{} (2002) 106006. R. Kallosh and A. D. Linde, Phys. Rev. D [**56**]{} (1997) 3509. R. C. Ferrell and D. M. Eardley, Phys. Rev. Lett.  [**59**]{} (1987) 1617; K. Shiraishi, Nucl. Phys. B [**402**]{} (1993) 399. J. Khoury and A. Weltman, astro-ph/0309300; Phys. Rev. D [**69**]{} (2004) 044026; P. Brax, C. van de Bruck, A. C. Davis, J. Khoury and A. Weltman, astro-ph/0408415. S. Alexander, R. H. Brandenberger and D. Easson, Phys. Rev. D [**62**]{} (2000) 103509; R. Easther, B. R. Greene, M. G. Jackson and D. Kabat, JCAP [**0401**]{} (2004) 006; S. Watson and R. Brandenberger, JCAP [**0311**]{} (2003) 008; J. Y. Kim, hep-th/0403096; D. Stojkovic and K. Freese, hep-ph/0403248; S. Watson, hep-th/0404177. [^1]: They are null naked singularities, unlike the Schwarzschild solutions whose interplay with scalars has been studied in [@lyova]. Hence they will not Hawking-evaporate. This is why they are identified with heavy states of the theory. [^2]: We have absorbed any irrelevant numerical factors into the definition of couplings. [^3]: We thank Andrei Linde for interesting discussions of this issue.
--- abstract: 'The transmission spectrum of a high-finesse optical cavity containing an arbitrary number of trapped atoms is presented. We take spatial and motional effects into account and show that in the limit of strong coupling, the important spectral features can be determined for an arbitrary number of atoms, $\numatom$. We also show that these results have important ramifications in limiting our ability to determine the number of atoms in the cavity.' author: - 'Sabrina Leslie$^{1,3}$' - 'Neil Shenvi$^{2,3}$' - 'Kenneth R. Brown$^{2,3}$' - 'Dan M. Stamper-Kurn$^{1}$' - 'K. Birgitta Whaley$^{2,3}$' title: 'Transmission Spectrum of an Optical Cavity Containing $\numatom$ Atoms' --- Introduction {#Section::Introduction} ============ Cavity quantum electrodynamics (CQED) in the strong coupling regime holds great interest for experimentalists and theorists for many reasons [@berm94book; @kimb98; @raim01]. From an applied perspective, CQED provides precise tools for the fabrication of devices which generate useful output states of light, as exemplified by the single-photon source [@law97single; @kuhn97single; @kuhn02single], the $N$-photon source [@brown03fock], and the optical phase gate [@turc95phase]. Conversely, CQED effects transform the high-finesse cavity into a sensitive optical detector of objects which are in the cavity field. Viewed simply, standard optical microscopy is made more sensitive by having a probe beam pass through the sample multiple times, and by efficiently collecting scattered light. In the weak-coupling regime, this has allowed for nanometer-resolution measurements of the positions of a trapped ion [@guth02ion; @mundt02]. In the strong-coupling regime, the presence and position of single atoms can be detected with high sensitivity by monitoring the transmission [@hood00micro; @munst99dyn], phase shift [@mabuchi99single], or spatial mode [@horak02kaleid] of probe light sent through the cavity. 
In this paper, we consider using strong-coupling CQED effects to precisely count the number of atoms trapped inside a high-finesse optical microcavity. The principle for such detection is straightforward: the presence of atoms in the cavity field splits and shifts the cavity transmission resonance. A precise $N$-atom counter could be used to prepare the atoms-cavity system for generation of optical Fock states of large photon number [@brown03fock], or to study ultra-cold gaseous atomic systems in which atom number fluctuations are important, such as number-squeezed [@orze01squeeze] and spin-squeezed [@wine92squeeze; @hald99; @kuzm00qnd] systems. A crucial issue to address in considering such a CQED device is the role of the spatial distribution of atoms and their motion in the cavity field. An N-atom counter (or any CQED device) would be understood trivially if the N atoms to be counted were held at known, fixed positions in the cavity field. This is a central motivation for the integration of CQED with extremely strong traps for neutral atoms [@ye99trap; @mckeever03] or ions [@guth02ion; @mundt02]. The Tavis-Cummings model [@tavis68], which applies to this case, predicts that the transmission spectrum of a cavity containing $N$ identically-coupled (with strength $g$), resonant atoms will be shifted from the empty cavity resonance by a frequency $g \sqrt{N}$ at low light levels. Atoms in a cavity can then be counted by measuring the frequency shift of the maximum cavity transmission, and distinguishing the transmission spectrum of $N$ atoms from that of $N + 1$ atoms in the cavity. However, to assess the potential for precise CQED-aided probing of a many-body atomic system, we consider here the possibility that atoms are confined at length scales comparable to or indeed larger than the optical wavelength. 
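The $g\sqrt{N}$ collective shift quoted above can be verified directly on the single-excitation block of the resonant Tavis-Cummings Hamiltonian; a minimal pure-Python check (identical couplings, on resonance, in the rotating frame):

```python
import math

def bright_state_residual(n_atoms, g):
    """Single-excitation block of the resonant Tavis-Cummings Hamiltonian in the
    basis {|1; g...g>, |0; e_1>, ..., |0; e_N>}: the photon state couples to each
    atomic excitation with the same strength g. The symmetric 'bright' combination
    is an eigenvector with eigenvalue +g*sqrt(N); return max |(H v - lambda v)_r|."""
    dim = n_atoms + 1
    H = [[0.0] * dim for _ in range(dim)]
    for i in range(1, dim):
        H[0][i] = H[i][0] = g
    v = [1.0 / math.sqrt(2.0)] + [1.0 / math.sqrt(2.0 * n_atoms)] * n_atoms
    lam = g * math.sqrt(n_atoms)
    Hv = [sum(H[r][c] * v[c] for c in range(dim)) for r in range(dim)]
    return max(abs(Hv[r] - lam * v[r]) for r in range(dim))
```

The antisymmetric partner gives the $-g\sqrt{N}$ resonance, and the remaining $N-1$ dark states stay at the bare frequency.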
In this paper, we characterize the influence of cavity mode spatial dependence and atomic motion on the transmission spectrum for an arbitrary number of atoms. The impact of atomic motion on CQED has been addressed theoretically in previous work [@ren95; @vern97; @dohe97motion], although attention has focused primarily on the simpler problem of a single atom in the cavity field. We show that when spatial dependence is included, the intrinsic limits on atom counting change significantly. The organization of this paper is as follows. In Sec. \[Section::Transmission\] we introduce the system Hamiltonian, define our notation, and derive an explicit expression for the intrinsic transmission function. In Sec. \[Section::Moments\], we introduce the method of moments, and use this method to calculate the shape of the intrinsic transmission function. Conclusions and implications for atom counting are presented in Sec. \[Section::Conclusions\]. Transmission {#Section::Transmission} ============ Let us consider the Hamiltonian for $\numatom$ identical two-level atoms in a harmonic potential inside an optical cavity which admits a single standing wave mode of light. We consider atomic motion and the spatial variation of the cavity mode only along the cavity axis, assuming that the atoms are confined tightly with respect to the cavity mode waist in the other two dimensions. The Hamiltonian for this system is $$\label{Equation::HFull} \Hfull =\hbar \freqlight \ladderup\ladderdown + \sum_{i=1}^\numatom{ \hbar\freqatom \ket{e_i}\bra{e_i}} + \Htrap + V$$ where $\freqlight$ is the frequency of the cavity mode and $\ladderdown(\ladderup)$ is the annihilation (creation) operator for the cavity field. The motional Hamiltonian $\Htrap = \sum_i \Hfull_{0,i}$ is a sum over single-atom Hamiltonians $\Hfull_{0,i} = p_i^2/ 2\massatom + \massatom\freqtrap^2 x_i^2/2$ where $\massatom$ is the atomic mass and $\freqtrap$ the harmonic trap frequency. The atomic ground and excited internal states, $\ket{g}$ and $\ket{e}$, respectively, are separated by energy $\hbar \freqatom$.
The dipole interaction with the light field $V = \sum_i V_i$ is a sum over interactions with the dipole moment of each atom $V_i = \hbar\gpot\cos(\wvecpot x_i)\left(\ket{e_i}\bra{g_i}\ladderdown + \ket{g_i}\bra{e_i}\ladderup\right)$ where $\gpot$ is the vacuum-Rabi splitting, which depends on the atomic dipole moment and the volume of the cavity mode. In this paper we assume the cavity mode frequency to be in exact resonance with the atomic transition frequency, $\freqlight = \freqatom$. Since the Hamiltonian (Eq. \[Equation::HFull\]) commutes with the total excitation operator, $n_T = \ladderup\ladderdown + \sum_i {\ket{e_i}\bra{e_i}}$, the eigenspectrum of $\Hfull$ breaks up into manifolds labelled by their total excitation number. In this work, we are concerned with excitation spectra of the atoms-cavity system in the limit of low light intensity, and we therefore restrict our treatment to the lowest two manifolds, with $n_T = \{0,1\}$. In particular, we consider the excitation spectra from the ground state (motional and internal) of the atoms-cavity system. This state $\ket{\Psi_0}$ is given simply as a product of motional and internal states, $\ket{\Psi_0} = \ket{\Phi_I} \tensorm |0_c;g_1, g_2, \ldots g_N\rangle$. In the uncoupled internal state notation, the $0_c$ symbol indicates there are zero photons in the cavity, and the $g_i$ symbol indicates that atom $i$ is in the ground state. The motional state $\ket{\Phi_I} = \prod_{i=1}^\numatom{\ket{\phi_0(x_i)}}$ is a product of single-atom ground states of the harmonic trap. Let us calculate the low-light intensity transmission spectrum of the cavity. We assume that the system is pumped by a near-resonant linearly coupled driving field such that the cavity excitation Hamiltonian is $ \Hint = E\left(\ladderup e^{-i\freq t} + \ladderdown e^{i\freq t}\right)$ where $E$ is the product of the external driving electric field strength and the transmissivity of the input cavity mirror, and $\freq$ is the driving frequency.
To determine the cavity transmission spectrum, we determine the excitation rate to atoms-cavity states in the $n_T = 1$ manifold from the initial ground state. The atoms-cavity eigenstates decay either by cavity emission, with the transmitted optical power proportional to $\kappa \ensavg{N_c}$ where $\kappa$ is the cavity decay rate and $N_c = \ladderup\ladderdown$ is the intracavity photon number operator, or by other processes (spontaneous emission, losses at the mirrors, etc.) at the phenomenological rate constant $\gamma$. Neglecting the width of the transmission spectrum caused by cavity and atomic decay ($\kappa, \gamma \to 0$), we use Fermi’s Golden Rule to obtain the transmission spectrum $I(\freq)$: $$\label{Equation::IOmega} I(\freq) \propto \sum_{j,n_{T}=1}{ \abs{\bra{\Psi_j}\ladderup\ket{\Psi_0}}^2\delta(\freq_j-\freq_0 - \freq)} = \sum_{j,n_{T}=1}{ \abs{\bkm{\Psi_{j}}{\Psi_I}}^2\delta(\freq_{j}-\freq_0 - \freq)},$$ where $\ket{\Psi_I} = \ladderup\ket{\Psi_0}$. In the summation over all atoms-cavity eigenstates, we make the simplification that only states with $n_T=1$ need be included since only these states are coupled to the ground state by a single excitation. To simplify notation, we make this implicit assumption throughout the remainder of this paper. We denote $I(\freq)$ as the “intrinsic transmission spectrum”. In the limit of $\kappa, \gamma \to 0$ this is composed of delta functions in frequency, while an experimentally observed transmission spectrum would be convolved with non-zero linewidths. To proceed further, we introduce the basis states $\{\ket{0}; \ket{i}\}$ which span the space of *internal states* in the $n_T = 1$ manifold. The state $\ket{0} = \ket{1_c; g_1, g_2, \ldots g_N}$ has one cavity photon and all atoms in their ground state. The state $\ket{i} = \ket{0_c; g_1, g_2, \ldots e_i \ldots g_N}$ is the state in which the cavity field is empty, while a single atom (atom $i$) is in the excited state.
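For a single atom, the intrinsic spectrum defined above can be computed directly by exact diagonalization in the $n_T = 1$ manifold. The following sketch is our own construction with illustrative parameters (units $\hbar = m = 1$; trap frequency `w_t`, coupling `g`, wavevector `k`, and basis size `M` are assumptions, not values from the text):

```python
import numpy as np

# Minimal sketch: golden-rule weights of Eq. (IOmega) for ONE atom, by exact
# diagonalization in the n_T = 1 manifold. Units hbar = m = 1.
w_t, g, k, M = 1.0, 20.0, 1.0, 60           # M harmonic-oscillator basis states

a = np.diag(np.sqrt(np.arange(1.0, M)), 1)  # oscillator lowering operator
x = (a + a.T) / np.sqrt(2.0 * w_t)          # position operator
ex, ux = np.linalg.eigh(x)
cos_kx = ux @ np.diag(np.cos(k * ex)) @ ux.T

# internal basis {|0> = |1_c, g>, |1> = |0_c, e>}; the dipole term swaps them
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
h_trap = np.diag(w_t * (np.arange(M) + 0.5))
H = np.kron(h_trap, np.eye(2)) + g * np.kron(cos_kx, sx)

E, U = np.linalg.eigh(H)
psi_i = np.kron(np.eye(M)[0], [1.0, 0.0])   # |Psi_I> = |phi_0> (x) |0>
weights = (U.T @ psi_i) ** 2                # |<Psi_j|Psi_I>|^2

# completeness, and first moment of the full two-sideband spectrum
print(weights.sum(), weights @ E)
```

The weights sum to one, and the first moment of the full (two-sideband) spectrum equals $\langle\Psi_I|H|\Psi_I\rangle$, here just the trap ground-state energy, since the coupling has zero expectation value in the state $\ket{0}$.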
Restricted to the $n_T = 1$ manifold, the Hamiltonian (Eq.(\[Equation::HFull\])) is written as $\Hfull = \Htrap + V_{n_T = 1}$, where $$\label{Equation::HNoTrap} \matpotop_{n_T = 1} = \sum_i \hbar\gpot\cos{\left(\wvecpot x_i\right)}\tensorm\left(\ket{i}\bra{0}+\ket{0}\bra{i}\right).$$ To gain intuition regarding the behavior of the system, let us define the operator $\matpot(\v{x})$ as the optical potential operator $V_{n_T = 1}$ for which the position operators are replaced by definite positions $\v{x}$. In the $N+1$ dimensional space of internal states for the $n_T = 1$ manifold, the operator $\matpot(\v{x})$ has two non-zero eigenvalues, $\pm\hbar\gpot\matpoteigval(\v{x}) = \pm \hbar\gpot\sqrt{\sum_i{\cos^2{\wvecpot x_i}}}$ with corresponding eigenstates $$\label{Equation::MatPotEigVec} \ket{\matpoteigvec_{\pm}(\v{x})} = \frac{1}{\sqrt{2}} \left( \ket{0} \pm \frac{1}{\matpoteigval(\v{x}) } \sum_i \cos{\wvecpot x_i} \ket{i}\right).$$ We will refer to the $\ket{\matpoteigvec_-(\v{x})}$ and $\ket{\matpoteigvec_+(\v{x})}$ eigenstates of the potential matrix as the red and blue internal states, respectively, in reference to their energies being red- or blue-detuned from the empty cavity resonance. The remaining $N-1$ eigenvalues of the optical potential matrix are null-valued. These correspond to dark states having no overlap with the excited cavity internal state, $\ket{0}$, and which, therefore, cannot be excited by the cavity excitation interaction $\Hint$. Note that $\ensavg{N_c}=1/2 (0)$ for all bright (dark) states, hence the cavity transmission spectrum is equivalent to the excitation spectrum in this treatment. 
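The stated eigenstructure, two bright eigenvalues at $\pm\hbar\gpot\matpoteigval(\v{x})$ plus $N-1$ dark states, is easy to verify numerically for arbitrary fixed positions. The sketch below (our construction, in units of $\hbar\gpot$) builds the coupling matrix in the internal basis $\{\ket{0}, \ket{1}, \ldots, \ket{N}\}$:

```python
import numpy as np

# Coupling matrix V(x)/(hbar g) in the n_T = 1 internal basis for fixed,
# arbitrary atomic positions (illustrative sketch).
rng = np.random.default_rng(0)
n_atoms = 7
k = 2 * np.pi                              # positions measured in wavelengths
c = np.cos(k * rng.uniform(0.0, 1.0, n_atoms))

V = np.zeros((n_atoms + 1, n_atoms + 1))
V[0, 1:] = c                               # <0|V|i> = cos(k x_i)
V[1:, 0] = c
ev = np.sort(np.linalg.eigvalsh(V))

chi = np.sqrt(np.sum(c ** 2))              # collective coupling chi(x)
# two bright eigenvalues at +/- chi(x); the remaining N - 1 are dark (zero)
print(ev[0], ev[-1], ev[1:-1])
```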
We can now write the optical potential operator $\matpotop_{n_T = 1}$ as $$\matpotop_{n_T = 1}= \hbar\gpot\int{d\v{x} \, \chi(\v{x}) \, \ket{\v{x}}\bra{\v{x}} \tensorm \left( \ket{\matpoteigvec_+(\v{x})} \bra{\matpoteigvec_+(\v{x})} - \ket{\matpoteigvec_-(\v{x})} \bra{\matpoteigvec_-(\v{x})} \right)}.$$ We also note that the initial state $\ket{\totinitvec}$ can be written as a superposition of bright states, $$\label{TotInitVecDpm} \ket{\totinitvec} = \frac{1}{\sqrt{2}}\left( \ket{\spaceinitvec(\v{x})\tensorm\matpoteigvec_-(\v{x})} + \ket{\spaceinitvec(\v{x})\tensorm\matpoteigvec_+(\v{x})}\right).$$ Our treatment allows us to easily recover the results of the Tavis-Cummings model [@tavis68] in which a collection of fixed two-level atoms is coupled to a single-mode cavity with fixed, identical dipole coupling. Considering $\matpotop(\v{x_0})$ with all atoms at the origin ($\v{x_0} = \left(0, 0, \ldots 0\right)$), we find a spectrum composed of delta-functions at $\pm \gpot\sqrt{\numatom}$ (see Figure 1a) corresponding to the two bright states $\ket{\matpoteigvec_\pm(\v{x_0})}$. The clear dependence of the frequency of peak transmission on the integer number of atoms in the cavity provides the background for a basic, transmission-based atom-counting scheme. “Extrinsic” line-broadening, due to cavity decay and other losses, will smear out these sharp transmission peaks (see Figure 1b), and will determine the maximum number of atoms that can be counted at the single-atom level by discriminating between the transmission spectra for $N$ and $N+1$ atoms. For the remainder of the paper, we focus on intrinsic limitations to atom counting, i.e. those due to atomic localization and motion.
Method of Moments {#Section::Moments} ================= To analyze the transmission characteristics of the atoms-cavity system in the presence of spatial dependence and atomic motion, we shall assume that the key features of the spatially-independent limit discussed above are maintained (Figure 2). Specifically, the transmission spectrum will still be described by two sidebands, one red-shifted and one blue-shifted from the empty cavity resonance by some frequency on the order of $\gpot$. In determining the cavity transmission $I(\freq)$, we may thus divide the bright excited states $\{ \ket{\Psi_{j}}\}$ of the $n_T = 1$ manifold into “red” $\{ \ket{\Psi_{j,-}}\}$ and “blue” $\{ \ket{\Psi_{j,+}}\}$ states. From these “red” and “blue” states, we determine the transmission lineshapes $I_-(\freq)$ and $I_+(\freq)$ of the red and blue sidebands, respectively. The validity of this approach is made more exact by the following considerations. We have already obtained the locally-defined internal-state eigenbasis for the $n_T = 1$ manifold as eigenstates of the operator $\matpot(\v{x})$, namely the states $\ket{\matpoteigvec_\pm(\v{x})}$ and the remaining $N-1$ dark states. Let $\rotationop(\v{x})$ be the rotation operator which connects the uncoupled internal states $\{\ket{0}, \ket{1}, \dots \ket{N}\}$ to the eigenstates of $\matpot(\v{x})$ at a particular set of coordinates $\v{x}$ (the “coupled internal state basis”). Now, consider applying this local choice of “gauge” everywhere in the system. Since the dipole interaction operator $\matpotop$ is diagonalized in the coupled internal state basis, it is convenient to examine the full Hamiltonian $\Hfull$ in this basis. Defining the spatially-dependent rotation operator $\rotationop = \int{d\v{x} \, \ket{\v{x}}\bra{\v{x}} \, \rotationop(\v{x})}$, we therefore consider the transformed Hamiltonian $\Hfull^\prime = \rotationop \Hfull \rotationop^\dagger$. 
Returning to Eq. (\[Equation::HFull\]), the only portion of the Hamiltonian $\Hfull$ which does not commute with the operator $\rotationop$ is the kinetic energy. Considering the transformation of the momentum operator for atom $i$, $$\rotationop p_i \rotationop^\dagger = p_i + \frac{\hbar}{i} \rotationop \frac{d}{d x_i} \rotationop^\dagger = p_i + A_i$$ the transformed Hamiltonian $\Hfull'$ can be expressed as $\Hfull' = \Hfull_{ad} + \Delta \Hfull$, where $$\begin{aligned} H_{ad} &=& \sum_i{\left(\frac{p_i^2}{2\massatom}\tensorm\id + \frac{1}{2}\massatom\freqtrap^2x_i^2\tensorm\id\right)} +\hbar\gpot\chi(\v{x})\left(\ket{D_+}\bra{D_+}-\ket{D_-}\bra{D_-}\right), \\ \Delta \Hfull &=& \frac{1}{2\massatom}\sum_i{\left( p_i A_i + A_i p_i + A_i A_i\right)}. \label{Equation::DeltaH}\end{aligned}$$ The operator $\Hfull_{ad}$ describes the behavior of atoms which adiabatically follow the coupled internal state basis while moving through the spatially-varying cavity field, and $\Delta \Hfull$ represents the kinetic energy associated with this local gauge definition. Let us treat $\Delta H$ as a perturbation and expand the eigenvalues and eigenstates of $\Hfull'$ as $$\begin{aligned} E_{j,\pm} &=& E_{j,\pm}^{(0)} + E_{j,\pm}^{(1)} + \ldots,\\ \ket{\toteigvec_{j,\pm}} &=& \ket{\toteigvec_{j,\pm}^{(0)}} + \ket{\toteigvec_{j,\pm}^{(1)}} + \ldots\end{aligned}$$ We define projection operators onto the red, blue, and dark internal states, $\proj_-, \proj_+, \proj_d$, respectively, with the explicit forms $$\proj_\pm = \int{d\v{x} \ket{\v{x}}\bra{\v{x}} \tensorm\ket{\matpoteigvec_\pm(\v{x})}\bra{\matpoteigvec_\pm(\v{x})}}.$$ These projection operators commute with $\Hfull_{ad}$.
Hence the bright eigenstates of $\Hfull_{ad}$, which are simultaneous eigenstates of $\proj_\pm$ and $\proj_d$, can be written as $$\ket{\toteigvec_{j,\pm}^{(0)}} = \ket{\spaceeigvec_{j,\pm}^{(0)} \tensorm \matpoteigvec_\pm} \equiv \int{ d\v{x} \, \spaceeigvec_{j,\pm}^{(0)}(\v{x}) \, \ket{\v{x}} \tensorm \ket{\matpoteigvec_\pm(\v{x})}}.$$ We now assign an eigenstate, $\ket{\toteigvec_j}$, of $H'$ to the red or blue sideband if its zeroth order component $\ket{\toteigvec_j^{(0)}}$ belongs respectively to the $\ket{\matpoteigvec_-}$ or $\ket{\matpoteigvec_+}$ manifold. We can therefore define the sideband transmission spectra $I_\pm(\freq)$ as the separate contributions of red/blue sideband states to the total transmission spectra (see Eq. (\[Equation::IOmega\])): $$\label{Equation::IOmega_pm} I_\pm(\freq) \propto \sum_j{ \abs{\bkm{\toteigvec_{j,\pm}}{\totinitvec}}^2 \delta(\freq_{j,\pm} -\freq_0 - \freq)}.$$ Determining the exact form of $I_\pm(\freq)$ is equivalent to solving for all the eigenvalues $\hbar \freq_{j,\pm}$ of the full Hamiltonian. This is a difficult problem, particularly as the number of atoms in the cavity increases. In practice, given the potential extrinsic line-broadening effects which may preclude the resolution of individual spectral lines, it may suffice to simply characterize the main features of the transmission spectra. As we show below, general expressions for the various moments of the spectral line can be obtained readily as a perturbation expansion in $\Delta H$. These moments allow one to assess the feasibility of precisely counting the number of atoms contained in the high-finesse cavity based on the transmission spectrum. In general, we evaluate averages $\ensavg{\freq_\pm^n}$ weighted by the transmission spectral distributions $I_\pm(\freq)$.
We make use of the straightforward identification (for notational clarity, shown here explicitly for the case of the blue sideband) $$\begin{aligned} \label{Equation::EnsAvgE} \hbar\ensavg{\freq_+} &=& \frac{\hbar\int{ d\freq \, I_+(\freq) \, \freq}}{\int{d\freq \, I_+(\freq)}} \\ &=& \frac{\sum_{j}{E_{j,+}\bkm{\totinitvec}{\toteigvec_{j,+}}\bkm{\toteigvec_{j,+}}{\totinitvec}}} {\sum_{j}{\bkm{\totinitvec}{\toteigvec_{j,+}}\bkm{\toteigvec_{j,+}}{\totinitvec}}}\\ &=& \frac{\sum_{j}{E_{j,+}\bra{\totinitvec} \left(\proj_+ + \proj_-\right) \ket{\toteigvec_{j,+}}\bra{\toteigvec_{j,+}} \left(\proj_+ + \proj_-\right) \ket{\totinitvec}}} {\sum_{j}{\bra{\totinitvec} \left(\proj_+ + \proj_-\right) \ket{\toteigvec_{j,+}}\bra{\toteigvec_{j,+}} \left(\proj_+ + \proj_-\right) \ket{\totinitvec}}},\end{aligned}$$ where we have made use of the facts that $\proj_+ + \proj_- + \proj_d = I$ and $\proj_d \ket{\totinitvec} = 0$. To zeroth order, Eq. (\[Equation::EnsAvgE\]) becomes, $$\begin{aligned} \hbar\ensavg{\omega_+}^{(0)} &=& \frac{\sum_{j}{E_{j,+}^{(0)}\bkm{\totinitvec}{\toteigvec_{j,+}^{(0)}}\bkm{\toteigvec_{j,+}^{(0)}}{\totinitvec}}} {\sum_{j}{\bkm{\totinitvec}{\toteigvec_{j,+}^{(0)}}\bkm{\toteigvec_{j,+}^{(0)}}{\totinitvec}}} = 2\bra{\totinitvec}{\proj_+}H_{ad}{\proj_+}\ket{\totinitvec}.
\end{aligned}$$ The first-order correction to this result is given by, $$\begin{array}{rcl} \hbar\ensavg{\omega_+}^{(1)} &=& 2\left(\bra{\totinitvec}\proj_+\Delta H\proj_+\ket{\totinitvec} + \bra{\totinitvec}\proj_-\sum_j{E_{j,+}^{(0)}\ket{\toteigvec_{j,+}^{(1)}} \bra{\toteigvec_{j,+}^{(0)}}}\proj_+\ket{\totinitvec} + \bra{\totinitvec}\proj_+\sum_j{E_{j,+}^{(0)}\ket{\toteigvec_{j,+}^{(0)}} \bra{\toteigvec_{j,+}^{(1)}}}\proj_-\ket{\totinitvec} \right)\\ && - 4 \bra{\totinitvec}\proj_+H_{ad}\proj_+\ket{\totinitvec} \left(\bra{\totinitvec}\proj_-\sum_j{\ket{\toteigvec_{j,+}^{(1)}} \bra{\toteigvec_{j,+}^{(0)}}}\proj_+\ket{\totinitvec} + \bra{\totinitvec}\proj_+\sum_j{\ket{\toteigvec_{j,+}^{(0)}} \bra{\toteigvec_{j,+}^{(1)}}}\proj_-\ket{\totinitvec} \right). \end{array}$$ To evaluate the sums over the first-order corrections to the eigenstates, $\ket{\toteigvec_{j,\pm}^{(1)}}$, we approximate the energy denominator in the first-order perturbation correction as the difference between the average energies of the red and blue sidebands, $$\begin{aligned} \bra{\totinitvec}\proj_-\sum_j{\ket{\toteigvec_{j,+}^{(1)}} \bra{\toteigvec_{j,+}^{(0)}}}\proj_+\ket{\totinitvec} &=& \bra{\totinitvec}\sum_j{\sum_k{ \frac{\ket{\toteigvec_{k,-}^{(0)}}\bra{\toteigvec_{k,-}^{(0)}}\Delta H\ket{\toteigvec_{j,+}^{(0)}}}{\hbar\omega_{j,+}^{(0)} - \hbar\omega_{k,-}^{(0)}} \bra{\toteigvec_{j,+}^{(0)}}}}\proj_+\ket{\totinitvec},\\ &\approx& \frac{1}{\ensavg{\hbar\omega_+^{(0)}} - \ensavg{\hbar\omega_-^{(0)}}}\bra{\totinitvec}\proj_-\Delta H\proj_+\ket{\totinitvec}.\end{aligned}$$ Using this approximation, we can evaluate Eq. (\[Equation::EnsAvgE\]) to first order in the perturbation, yielding $$\label{Equation::EnsAvgOmega} \begin{array}{ll} \hbar\ensavg{\omega_+} = &2\ensavg{\proj_+ H \proj_+} + \frac{1}{\ensavg{\hbar\omega_+^{(0)}}-\ensavg{\hbar\omega_-^{(0)}}}\bigl( 4\ensavg{\proj_+H_{ad}\Delta H \proj_-} - 8\ensavg{\proj_+ H_{ad}\proj_+}\ensavg{\proj_- \Delta H \proj_+} \bigr), \end{array}$$ where all expectation values
are calculated over the initial state $\ket{\totinitvec}$. We can also calculate the second moment of the distribution using the same technique. To first-order, we obtain, $$\label{Equation::EnsAvgOmega2} \begin{array}{ll} \hbar^2\ensavg{\omega_+^2} = &2\ensavg{\proj_+ \left(H_{ad}^2 + \Delta H H_{ad} + H_{ad} \Delta H\right) \proj_+} + \frac{1}{\ensavg{\hbar\omega_+^{(0)}}-\ensavg{\hbar\omega_-^{(0)}}}\bigl( 4\ensavg{\proj_+H_{ad}^2\Delta H \proj_-} \\ &- 8\ensavg{\proj_+ H_{ad}^2\proj_+}\ensavg{\proj_- \Delta H \proj_+} \bigr). \end{array}$$ In order to evaluate these expressions, we must calculate expectation values of the form $\proj_\pm H_{ad}^j \Delta H^k \proj_\pm$ over the initial state $\ket{\totinitvec}$. To simplify matters, we note that we can act with the projection operators on the initial state $\ket{\totinitvec}$, which is equivalent to operating in the $\ket{D_\pm}$ internal state basis. Since $H_{ad}$ is diagonal in the $\ket{D_\pm}$ basis, and $\spaceinitvec(\v{x})$ is the $\numatom$-dimensional harmonic oscillator ground state, it is straightforward to obtain $$\label{Equation::HadTotInitVec} H_{ad} \ket{\spaceinitvec(\v{x})D_\pm} = \left(E_0 \pm \hbar \gpot \chi(\v{x})\right) \ket{\spaceinitvec(\v{x})D_\pm}.$$ Using the definition in Eq. (\[Equation::DeltaH\]), we find that the $\ket{D_\pm}$ matrix elements of $\Delta H$ are given by the matrix $$\label{Equation::DeltaHDpm} \Delta H = \frac{\hbar^2\wvecpot^2\zeta(\v{x})}{4\massatom}\tensorm\left( \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right),$$ where we have defined, $$\zeta(\v{x}) = -\frac{N-1}{\chi^2} + 1 - \sum_{i=1}^N{\frac{\cos^4{(\wvecpot x_i)}}{\chi^4}}.$$ Combining Eq. (\[Equation::HadTotInitVec\]) with Eq. (\[Equation::DeltaHDpm\]), we obtain, to first order in $\Delta H$, $$\begin{aligned} \label{Equation::EnsAvgOmegaLambda} \hbar\ensavg{\omega_+} - E_0&=& \hbar\gpot\ensavg{\chi} + \frac{1}{2}\frac{\hbar^2\wvecpot^2}{2\massatom}\ensavg{\zeta} + \frac{1}{2}\frac{\hbar^2\wvecpot^2}{2\massatom}\frac{1}{\ensavg{\chi}}\left(-\ensavg{\zeta \chi } +
\ensavg{\zeta}\ensavg{\chi}\right), \\ \label{Equation::EnsAvgOmega2Lambda} \hbar^2\left(\ensavg{\omega_+^2}-\ensavg{\omega_+}^2\right) &=& \hbar^2\gpot^2\left(\ensavg{\chi^2}-\ensavg{\chi}^2\right) + \hbar\gpot\frac{\hbar^2\wvecpot^2}{2\massatom} \left(\frac{1}{\ensavg{\chi}}\left(\ensavg{\zeta \chi^2} + \ensavg{\zeta}\ensavg{\chi^2}\right)-2 \ensavg{\zeta}\ensavg{\chi}\right).\end{aligned}$$ Here all expectation values are taken over the spatial state $\spaceinitvec(\v{x})$. Although the function $\spaceinitvec(\v{x})$ is simply the product of $\numatom$ harmonic oscillator ground states, the presence of various powers of $\matpoteigval(\v{x})$ and $\zeta(\v{x})$ in the above expectation values makes their analytic evaluation very difficult for arbitrary $\numatom$. To determine the dependence of these integrals on atom number $\numatom$, one may expand the integrand as a Taylor series in $\matpoteigval^2$, leading to approximate analytic solutions for the integral as a series in $1/\numatom$. After some tedious algebra, we find the average positions of the red- and blue-transmission sidebands to be $$\label{Equation::ERedBlue} \begin{array}{rcl} \hbar\ensavg{\omega_\pm}- E_0 &=& \pm \hbar\gpot\sqrt{\numatom}\sqrt{\frac{1+\eps}{2}} \left(1-\frac{1}{\numatom}\frac{(1-\eps)^2}{16}\right) -\frac{\hbar^2\wvecpot^2}{2\massatom}\left(\frac{1-\eps}{2(1+\eps)}\right) +\bigo\left(\frac{1}{N}\right). \end{array}$$ Here we quantify the relative length scales of the initial harmonic trap as compared to the optical interaction potential through the parameter $\epsilon = \exp{\left(-\wvecpot^2 \sigma^2 \right)}$, which is related to the Lamb-Dicke parameter $\eta$ by $\sqrt{2}\eta = \wvecpot\sigma$ and $\sigma = \sqrt{\hbar / m \freqtrap}$. Next, we obtain an expression for the width of the red and blue sidebands by evaluating the second moment of the sidebands.
Expanding Eq. (\[Equation::EnsAvgOmega2Lambda\]) as a series in $1/\numatom$, we obtain $$\label{Equation::WidthRedBlue} \begin{array}{rcl} \hbar^2\left(\ensavg{\omega^2_\pm} - \ensavg{\omega_\pm}^2\right) &=& \frac{1}{16}\hbar^2\gpot^2\left(1-\eps\right)^2\left(1+\eps\right) \pm\hbar\gpot\frac{\hbar^2\wvecpot^2}{2\massatom}\frac{1}{4\sqrt{\numatom}\sqrt{2\left(1+\eps\right)}}\left(1-\eps\right)^2\left(3+\eps\right) +\bigo\left(\frac{1}{\numatom}\right). \end{array}$$ To gain some physical insight into these results, we consider two important regimes: the tight and loose trap regimes. These different regimes are reflected in the corresponding values of the parameter $\epsilon$, which tends towards $1$ in the extreme tight-trap limit and to $0$ in the extreme loose-trap limit. In the tight regime, the length scale of the trapping potential is much smaller than the wavelength of the light, i.e. $\wvecpot \sigma \ll 1$. This is equivalent to the Lamb-Dicke regime and is applicable to current experiments for trapped ions in cavities [@guth02ion; @mundt02], or for neutral atoms held in deep optical potentials [@ye99trap]. In the loose-trap regime, $\wvecpot \sigma \geq 1$ and atoms in the ground state of the harmonic oscillator potential are spread out over a distance comparable to the optical wavelength. As atoms in this regime sample the cavity field broadly, one expects, and indeed finds, a significant inhomogeneous broadening of the atoms-cavity resonance.
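To get a rough feel for where the tight/loose boundary falls in practice, one can evaluate $\eps = \exp(-\wvecpot^2\sigma^2)$ for representative numbers; the atom and wavelength below (Rb-87 probed near 780 nm) are our illustrative assumptions, not parameters taken from the text:

```python
import numpy as np

# Illustrative numbers (our assumptions): trap tightness parameter
# eps = exp(-k^2 sigma^2) with sigma = sqrt(hbar / m omega), for Rb-87 at 780 nm.
hbar = 1.0546e-34                  # J s
mass = 1.443e-25                   # kg (Rb-87)
k = 2 * np.pi / 780e-9             # m^-1

def eps(f_trap_hz):
    sigma = np.sqrt(hbar / (mass * 2 * np.pi * f_trap_hz))
    return np.exp(-(k * sigma) ** 2)

for f in (1e3, 1e5, 1e7):          # trap frequencies in Hz
    print(f, eps(f))               # from loose (eps << 1) to tight (eps -> 1)
```

With these numbers a kilohertz-scale trap sits deep in the loose regime, while a very stiff (tens of MHz) trap approaches the Lamb-Dicke limit.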
In the extreme loose-trap limit ($\epsilon \to 0$), we find $$\begin{aligned} \ensavg{\freq_\pm} - \Etrap/\hbar &=& \pm \gpot \sqrt{\frac{\numatom}{2}}\left(1-\frac{1}{16\numatom}\right)+ \frac{1}{2}\frac{\hbar \wvecpot^2}{2\massatom} + \bigo\left(\frac{1}{\numatom}\right), \\ \ensavg{(\Delta \omega_\pm)^2} &=& \frac{1}{8}\gpot^2 \pm \gpot \frac{\hbar\wvecpot^2}{2\massatom}\frac{3}{4\sqrt{2\numatom}} + \bigo\left(\frac{1}{\numatom}\right).\end{aligned}$$ In the loose-trap limit, the center of the red sideband is now located at $\gpot\sqrt{\numatom/2}$ instead of at $\gpot\sqrt{\numatom}$ as we obtained for the spatially independent case. This difference is due to the spatial dependence of the standing mode; the atoms no longer always feel the full strength of the potential, but are sometimes located at nodes of the potential. We also see that the sidebands have an intrinsic width of $\approx \gpot/\sqrt{8}$. This width will play an important part in limiting our ability to count the number of atoms in the cavity in the limit of a loose trap. Considering the tight-trap limit, we expand in the small parameter $\wvecpot \sigma$ and obtain $$\begin{aligned} \ensavg{\freq_\pm} - \Etrap/\hbar &=& \pm \gpot \sqrt{\numatom}\left(1-\frac{1}{4}\wvecpot^2\sigma^2\right) -\frac{1}{4}\frac{\hbar \wvecpot^2}{2\massatom}\wvecpot^2\sigma^2+\bigo\left(\wvecpot^4\sigma^4\right), \\ \ensavg{(\Delta \omega_\pm)^2} &=& \frac{1}{8}\gpot^2\wvecpot^4\sigma^4\pm\gpot\frac{\hbar\wvecpot^2}{2\massatom}\frac{1}{2\sqrt{\numatom}}\wvecpot^4\sigma^4 + \bigo\left(\wvecpot^6\sigma^6\right).\end{aligned}$$ In the limit $\wvecpot \sigma \to 0$, the atoms are confined to the origin and we recover the Tavis-Cummings result discussed earlier, wherein the transmission sidebands are delta functions at $\pm g \sqrt{\numatom}$ away from the empty cavity resonance. 
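The loose-trap sideband center quoted above can be checked with a small Monte-Carlo sketch (our construction): sample atomic positions uniformly over the standing wave, neglect recoil (the infinite-mass limit), and average the collective coupling $\chi(\v{x})$:

```python
import numpy as np

# Monte-Carlo check of the loose-trap sideband center, in units of g: atoms
# uniformly distributed over the standing wave, recoil neglected.
rng = np.random.default_rng(1)
n_atoms, n_samples = 50, 200_000
phases = rng.uniform(0.0, 2 * np.pi, size=(n_samples, n_atoms))   # k x_i mod 2 pi
chi = np.sqrt(np.sum(np.cos(phases) ** 2, axis=1))                # chi(x) per sample

mean_mc = chi.mean()
mean_series = np.sqrt(n_atoms / 2) * (1 - 1 / (16 * n_atoms))     # 1/N expansion
print(mean_mc, mean_series)
```

For $N = 50$ the sample mean agrees with $\sqrt{N/2}\,(1 - 1/16N)$ to well within the Monte-Carlo statistical error.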
As the tightness of the trap decreases, the atoms begin to experience the weaker regions of the optical potential and the centers of the sidebands move towards the origin. In addition, the sidebands develop an intrinsic variance which scales as $\wvecpot^4\sigma^4$. An important feature of both regimes is the intrinsic linewidth of both the red and blue sidebands (see Figure 2a). This linewidth has a magnitude of approximately $\gpot\sqrt{(1-\epsilon)/8}$ when the vacuum Rabi splitting is much larger than the atomic recoil energy, i.e., $\gpot \gg \hbar \wvecpot^2 / 2 m$. It is unrelated to the linewidth due to cavity decay or spontaneous emission, which we have not addressed here, and results purely from the spatial dependence of the atom-cavity coupling. Thus, it will provide an intrinsic limit to our ability to count $\numatom$ atoms, regardless of the quality of the cavity that is used. Our expression for the intrinsic linewidth also highlights an asymmetry between the red and blue sidebands. To first-order, increasing the atomic recoil energy [*reduces*]{} the linewidth of the red sideband but increases the linewidth of the blue sideband. Consequently, probing the red sideband of the atoms-cavity system rather than the blue sideband would facilitate counting atoms. In addition, these results suggest that the ability to tune both the atomic recoil energy $ \hbar \wvecpot^2 / 2 m$ and the coupling strength $g$ (this can be done, for instance, using CQED on Raman transitions) would be beneficial. We attribute the asymmetry between the sidebands to the different effective potentials seen by states within the red and blue sidebands. A detailed analysis of this aspect will be provided in a future publication.
Conclusions {#Section::Conclusions} =========== We have found that the transmission spectrum of the cavity containing $\numatom$ atoms trapped initially in the ground state of a harmonic potential will consist of distinct transmission sidebands which are red- and blue-detuned from the bare-cavity resonance, when the vacuum Rabi splitting dominates the atomic recoil energy. Analytic expressions for the first and second moments of the transmission sidebands were derived, and evaluated in the limits of tight and loose initial confinement. These expressions include terms containing the vacuum Rabi splitting $\hbar\gpot$ and the recoil energy $\hbar^2 \wvecpot^2 / 2 m$. The former can be regarded as line shifts and broadenings obtained by quantifying inhomogeneous broadening under a local-density approximation, i.e. treating the initial atomic state as a statistical distribution of infinitely massive atoms. The latter quantifies residual effects of atomic motion, in essence quantifying effects of Doppler shifts and line broadenings. These results can be applied to assess the potential for precisely counting the number of atoms trapped in a high-finesse optical cavity through measuring the transmission of probe light, analogous to the work of Hood *et al.* [@hood00micro] and Münstermann *et al.* [@munst99dyn] for single atom detection. To set the limits of our counting capability, we assume that atoms are detected through measuring the position of the mean of the red sideband. In order to reliably distinguish between $\numatom$ and $\numatom+1$ atoms in the cavity, the difference between the means for $\numatom$ and $\numatom + 1$ atoms must be greater than the width of our peaks, i.e., $\abs{\ensavg{\freq_\pm(\numatom)} - \ensavg{\freq_\pm(\numatom+1)}} > \Delta \omega_\pm $ (see Figure 3).
Let us consider that, in addition to the intrinsic broadening derived in this paper, there exists an extrinsic width $\kappa^\prime$ due to the finite cavity finesse and other broadening mechanisms. Evaluated in the limit $\gpot \gg \hbar \wvecpot^2 / 2 m$ and assuming large $N$, $$\ensavg{\freq_-(\numatom)} - \ensavg{\freq_-(\numatom+1)} \simeq \gpot \sqrt{\frac{1+\epsilon}{8\numatom}}.$$ We thus obtain an atom counting limit of $$\numatom_{max} \simeq \frac{1+\epsilon}{8\frac{\kappa^{\prime \, 2}}{\gpot^2}+\frac{1}{2}(1-\epsilon)^2(1+\eps)},$$ where we have assumed that the intrinsic and extrinsic widths add in quadrature. This atom counting limit ranges from $\numatom_{max} = g^2 / 4 \kappa^{\prime \, 2}$ in the tight-trap limit, to $\numatom_{max} = 1/(1/2 + 8 \kappa^{\prime \, 2} / g^2)$ in the loose trap limit. Figure 4 shows $N_{max}$ as a function of $\eps$ for various values of $\kappa^\prime$. In general, atom counting will be limited by extrinsic linewidth when $16\kappa'^2 > \gpot^2(1-\eps)^2$ and by intrinsic linewidth when $16\kappa'^2 < \gpot^2(1-\eps)^2$. These results demonstrate that atom counting using the transmission spectrum is best accomplished within the tight-trap limit. Certainly, in the loose-trap limit, atom counting will be rendered difficult as the intrinsic linewidth of the sidebands is increased. However, several questions regarding the feasibility of atom counting experiments remain. First, although atom counting by a straightforward measurement of the intensity of the transmitted light may be difficult, it is possible that the phase of the transmitted light may be less affected by motional effects [@mabuchi99single]. Dynamical measurements (possibly using quantum feedback techniques) might also yield higher counting limits. Second, atomic cooling techniques could be used in the loose-trap limit to cool the atoms into the wells of the optical potential, thereby decreasing the observed linewidth [@vul; @hech; @van].
Finally, the state-dependence of spontaneous emission has not yet been taken into account. Although the loose-trap regime leads to an intrinsic linewidth which limits atom counting, it may also suppress the extrinsic linewidth as a result of contributions from superluminescence. On the other hand, in the Lamb-Dicke limit, the atoms are all highly localized, which could lead to enhanced spontaneous emission due to cooperative effects. Future work will investigate alternative methods of atom counting and will explore complementary techniques of reducing the intrinsic linewidth in atom-cavity transmission spectra. We thank Po-Chung Chen for a critical reading of the manuscript. S.L. thanks NSERC for a Postgraduate Scholarship and The Department of Physics of University of California, Berkeley, for a Departmental Fellowship. N.S. thanks the University of California, Berkeley, for a Berkeleyan Fellowship. The work of K.R.B was supported by the Fannie and John Hertz Foundation. The work of D.M.S.K. was supported by the National Science Foundation under Grant No. 0130414, the Sloan Foundation, the David and Lucile Packard Foundation, and the University of California. KBW thanks the Miller Foundation for Basic Research for a Miller Research Professorship 2002-2003. The authors’ effort was sponsored by the Defense Advanced Research Projects Agency (DARPA) and Air Force Laboratory, Air Force Materiel Command, USAF, under Contract No. F30602-01-2-0524. ![a. Intrinsic transmission spectrum of atoms-cavity system neglecting spatial dependence of potential and atomic motion. b. Transmission spectrum of spatially independent case including cavity decay.](fig1) ![a. Intrinsic transmission spectrum of atoms-cavity system including spatial dependence of potential and atomic motion. b. Corresponding transmission spectrum including cavity decay. 
](fig2) ![ Plot of $\ensavg{\omega_-}$ as a function of the trap tightness $\eps = \exp{(-\wvecpot^2\sigma^2)}$ for $N=8$ and $N=9$ and small ratio of atomic recoil energy to vacuum Rabi splitting, $\hbar \wvecpot^2 / 2 m g = 0.01$. The shaded regions indicate the intrinsic width of the red sideband, $\pm \sqrt{\ensavg{(\Delta\omega_-)^2}} / 2$. In the tight-trap limit, $N=8$ and $N=9$ can be distinguished. In the loose-trap limit, the intrinsic width of the spectra renders determination of atom number difficult. ](fig3) ![ Maximum limit $N_{max}$ on atom counting as a function of trap tightness $\eps = \exp{(-\wvecpot^2\sigma^2)}$ for several values of the decay parameter $\kappa$. $\eps \to 0$ corresponds to the loose-trap limit while $\eps \to 1$ corresponds to the tight-trap limit. Notice that for the infinitely tight trap, atom counting is limited only by $\kappa$. ](fig4) [10]{} , edited by P. Berman (Academic Press, Boston, 1994). H. Kimble, Physica Scripta [**T76**]{}, 127 (1998). J. Raimond, M. Brune, and S. Haroche, Reviews of Modern Physics [**73**]{}, 565 (2001). C. Law and H. Kimble, Journal of Modern Optics [**44**]{}, 2067 (1997). A. Kuhn [*et al.*]{}, App. Phys. B [**B69**]{}, 373 (1997). A. Kuhn, M. Hennrich, and G. Rempe, Phys. Rev. Lett. [**89**]{}, 067901 (2002). K.R. Brown [*et al.*]{}, Phys. Rev. A [**67**]{}, (2003). Q. Turchette [*et al.*]{}, Phys. Rev. Lett. [**75**]{}, 4710 (1995). G. Guthohrlein [*et al.*]{}, Nature [**414**]{}, 49 (2002). A. Mundt [*et al.*]{}, Phys. Rev. Lett. [**89**]{}, 103001 (2002). C. Hood [*et al.*]{}, Science [**287**]{}, 1447 (2000). P. Münstermann [*et al.*]{}, Phys. Rev. Lett. [**82**]{}, 3791 (1999). H. Mabuchi, J. Ye, and H. Kimble, App. Phys. B [**68**]{}, 1095 (1999). P. Horak [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 043601 (2002). C. Orzel [*et al.*]{}, Science [**291**]{}, 2386 (2001). D. Wineland [*et al.*]{}, Phys. Rev. A [**46**]{}, R6797 (1992). J. Hald [*et al.*]{}, Phys. Rev. Lett. 
[**83**]{}, 1319 (2000). A. Kuzmich, L. Mandel, and N.P. Bigelow, Phys. Rev. Lett. [**85**]{}, 1594 (2000). J. Ye, D.W. Vernooy, and H.J. Kimble, Phys. Rev. Lett. [**83**]{}, 4987 (1999). J. McKeever [*et al.*]{}, Phys. Rev. Lett. [**90**]{}, 133602 (2003). M. Tavis and F. Cummings, Phys. Rev. [**170**]{}, 379 (1968). W. Ren and H.J. Carmichael, Phys. Rev. A [**51**]{}, 752 (1995). D.W. Vernooy and H.J. Kimble, Phys. Rev. A [**56**]{}, 4287 (1997). A. Doherty [*et al.*]{}, Phys. Rev. A [**56**]{}, 833 (1997). V. Vuletic and S.Chu, Phys. Rev. Lett. [**84**]{}, 3787 (2000). G. Hechenblaikner, M. Gangl, P. Horak, H. Ritsch, Phys. Rev. A [**58**]{}, 3030 (1998). S.J. van Enk, J. McKeever, H.J. Kimble, J. Ye, Phys. Rev. A [**64**]{}, 013407 (2001).
--- abstract: | We study a variational model for a diblock copolymer-homopolymer blend. The energy functional is a sharp-interface limit of a generalisation of the Ohta-Kawasaki energy. In one dimension, on the real line and on the torus, we prove existence of minimisers of this functional and we describe in complete detail the structure and energy of stationary points. Furthermore we characterise the conditions under which the minimisers may be non-unique. In higher dimensions we construct lower and upper bounds on the energy of minimisers, and explicitly compute the energy of spherically symmetric configurations.\ **Keywords:** block copolymers, copolymer-homopolymer blends, pattern formation, variational model, partial localisation, lipid bilayers\ *Mathematics Subject Classification (2000):* 49N99, 82D60 author: - Yves van Gennip - 'Mark A. Peletier' bibliography: - 'bib\_revision.bib' title: 'Copolymer-homopolymer blends: global energy minimisation and global energy bounds' --- Introduction ============ Micro-phase separation ---------------------- In this paper we study the functional $$\label{eq:functional} F_1(u, v) = \left\{ \begin{array}{ll} \displaystyle c_0 \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla (u + v)| + c_u \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u| + c_v \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla v|\hspace{0.25cm} + \|u - v\|_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^N)}^2 &\hspace{-.35cm} \mbox{ if $(u, v) \in K_1$,}\vspace{0.25cm}\\ \infty &\hspace{-.35cm} \mbox{ otherwise,} \end{array} \right.$$ where the coefficients $c_i$ are nonnegative (not all equal to zero) and [^1] $$K_1 := \left\{ (u, v) \in \left(\text{BV}({{{\ensuremath{\mathbb{R}}}}}^N)\right)^2 : u(x), v(x) \in \{0, 1\} \text{ a.e., and } uv = 0 \text{ a.e., and } \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u = \int_{{{{\ensuremath{\mathbb{R}}}}}^N} v \; \right\}.$$ Under the additional constraint $u+v\equiv 1$, this functional is the sharp-interface limit of a 
well-studied variational model for melts of diblock copolymers [@Choksi01; @ChoksiRen03; @ChoksiSternberg06; @FifeHilhorst01; @Muratov02; @RenWei00; @RenWei02; @RenWei03a; @RenWei03b; @RenWei05; @RenWei06a; @RenWei06b]. This underlying diffuse interface model is also closely related to the functional studied in [@Mueller93]. Such polymers consist of two parts, labelled the U and V parts, whose volume fractions are represented by the variables $u$ and $v$. The U and V parts of the polymers repel each other, and this repulsion leads to *micro-phase separation*: phase separation at a length scale comparable to the length of a single molecule. The case studied here is known as the *strong segregation limit*, [@BatesFredrickson99], in which strong repulsion causes strong demixing of the constituents—hence the restriction of $K_1$ to characteristic functions. The modeling assumption here is that stationary points of $F_1$ under constrained (i.e. fixed) mass $\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u$, in particular minimisers, represent the structures formed by the polymers. Although the various simplifications leading to $F_1$ have obscured the connection between this functional and single molecules, the character of the various terms is still recognisable. The interfacial penalisation terms, i.e. the first three terms, are what remains of the repulsion in the strong segregation limit, and these terms favour large-scale demixing. The last term $\|u-v\|_{H^{-1}}$, on the other hand, penalises such large-scale separation and arises from the chemical bond between the U and V parts of the polymer molecules. These competing tendencies cause the functional $F_1$ to prefer structures with a specific length scale, as we now illustrate with a simple example in one space dimension. For simplicity we take as spatial domain the unit torus ${\ensuremath{\mathbb{T}}}_1$, i.e. 
the set $[0,1]$ with periodic boundary conditions; all global minimisers under the condition $u+v\equiv 1$ then are of the form shown in Figure \[fig:nblock\]. For such structures the value of the functional is $$F_1 = 2n(c_u+c_v) + \frac1{96n^2},$$ as can be seen from the results in Section \[sec:globmin1d\]. If we consider $c_u$ and $c_v$ to be fixed, the energy $F_1$ is clearly minimised at a finite value of $n$. When we study the one-dimensional case on ${{{\ensuremath{\mathbb{R}}}}}$ without the restriction $u+v\equiv 1$ in more detail, in Section \[sec:globmin1d\], we shall see that the energy actually favours a specific *block width* rather than a specific number of blocks. Blends of co- and homopolymers ------------------------------ For $u+v\not\equiv1$, $F_1$ is a model for *blends*, mixtures of diblock copolymers and homopolymers; the homopolymer is considered to fill the space not occupied by the diblock copolymer and has local volume fraction $1-u-v$. The inclusion of homopolymers into a block copolymer melt opens the possibility of structures with two distinct length scales. The repulsion between the two blocks creates micro-phase separation at the length scale of the polymer, as described above. At a larger length scale structures are observed in which regions of pure homopolymer and pure copolymer alternate. Blend systems show a tremendous wealth of behaviour. For instance, many different types of macrodomain geometry have been observed: spheres [@KoizumiHasegawaHashimoto94; @OhtaNonomura97; @UneyamaDoi04; @ZhangJinMa05], cylinders [@KinningWineyThomas88], dumbbells [@OhtaIto95], helices [@HashimotoMitsumuraYamaguchiTakenakaMoritaKawakatsuDoi01], labyrinths and sponges [@LoewenhauptSteurerHellmannGallot94; @Ito98; @OhtaIto95], ball-of-thread [@LoewenhauptSteurerHellmannGallot94], and many more. In addition, the microdomains have varying orientation with respect to this macrodomain geometry. 
In many cases the micro- and macrodomain geometry appear to be coupled in ways that are not yet understood. There is extensive literature on such blend systems, which is mostly experimental or numerical. For the numerical experiments it is *de rigueur* to apply a self-consistent mean field theory and obtain a generalisation of the Ohta-Kawasaki [@OhtaKawasaki86] model (see e.g. [@NoolandiHong83; @OhtaNonomura97; @ChoksiRen05]). The energy $F_1$ is a sharp-interface limit of the resulting model [@Baldo90; @ChoksiRen05]. At the level of mathematical analysis, however, little is known. What form do global and local minimisers of $F_1$ take? (Do they even exist? The issue of existence of global minimisers of $F_1$ on ${{{\ensuremath{\mathbb{R}}}}}$ is first addressed in this paper.) Does the functional indeed have a preference for layered structures, as the numerical experiments suggest? What structure and form can macrodomains have? Can we observe in this simplified functional $F_1$ the breadth of behaviour that is observed in experiments? All these questions are open, and in this paper we provide some first answers. Results: global minimisers in one dimension under constrained mass ------------------------------------------------------------------ The first part of the paper focuses on the one-dimensional situation. ### Existence The existence of global minimisers under the constraint of fixed mass follows mostly from classical arguments (proof of Theorem \[th:exist\_real\_line\]). The non-compactness of the set ${{{\ensuremath{\mathbb{R}}}}}$ can be remedied with the cut-and-paste techniques that we introduce to study non-uniqueness (see below). One non-trivial issue arises when e.g. $c_0=c_u=0$, in which case the functional $F_1$ provides no control on the regularity of $u$. 
We obtain weak convergence in $L^2$ for a minimising sequence, and therefore a priori we can only conclude that the value set of the limit functions is $[0,1]$, the convex hull of $\{0,1\}$; as a result the limit $(u,v)$ need not be an element of $K_1$. With a detailed study of the stationarity conditions on $u$ we show that stationary points of $F_1$ only assume the extremal values $0$ and $1$. The existence of a minimiser then follows from standard lower semi-continuity arguments. ### Characterisation of macrodomains In the one-dimensional situation a macrodomain is a finite sequence of alternating U- and V-‘blocks’ or ‘layers’ as in Figure \[fig:nblock\_blend\]. Choksi and Ren [@ChoksiRen05] studied such macrodomains defined on the torus ${\ensuremath{\mathbb{T}}}_L$ of length $L$, but their techniques apply unchanged to the real line also. They showed that if such a macrodomain is stationary, then all *interior* blocks have equal width, while the end blocks are thinner. The exact dimensions of the blocks are fully determined by the number of blocks, the total mass, and, in the case of the torus, the size of the domain (see Theorems \[th:CR\] and \[th:CR2\]). It is instructive to minimise $F_1$ within classes defined by a specific choice of the sequence of U- and V-blocks; Figure \[fig:blocks\_2x3cases-intro\] shows this minimal energy for different classes and different values of the mass. ![Energy per unit mass for the one-dimensional case ${{{\ensuremath{\mathbb{R}}}}}$, according to the calculations in Section \[sec:lower\_bnd\_1d\]. $M$ is the total U-mass; for the surface tension parameters (see Lemma \[lemma:d\_ij\]) the values $d_{u0} = 1, d_{uv} = 0.7$ and $d_{v0} = 0.3$ are chosen. The graphs belong to the following structures, as indicated in the figure as well (the lighter coloured blocks are V-blocks, the darker ones U-blocks): (a) UV and VU, (b) UVU, (c) VUV, (d) UVUV and VUVU, (e) UVUVU, (f) VUVUV. 
The circle indicates where the optimal structure changes.[]{data-label="fig:blocks_2x3cases-intro"}](optimal "fig:"){width="100mm"}\ ### Characterisation of constrained minimisers We extend the results of Choksi & Ren into a full characterisation of global minimisers, by showing that there exists a global minimiser with only one macrodomain, and by fully characterising all *other* global minimisers in terms of the parameters and the morphology (Theorem \[th:exist\_real\_line\]). This characterisation shows that [non-uniqueness]{} of minimisers can take two different forms. The first is the possibility that two different UV-sequences with the same mass have the same energy, as is illustrated by the encircled intersection in Figure \[fig:blocks\_2x3cases-intro\]. This is a common occurrence in variational problems, where a parameter change causes the global minimum to switch from one local minimiser to another. The second form of non-uniqueness is related to the fact, which we prove in Section \[subsec:connected\_support\], that two separate macrodomains can be translated towards each other and joined together without increasing the energy. In fact, in many cases the energy strictly decreases, and it is this possibility of strict decrease that allows us to rule out many cases. This leaves us with a set of conditions for the case of unchanged energy that must be fulfilled when a non-unique global minimiser contains more than one macrodomain (see Theorem \[th:exist\_real\_line\]). This type of non-uniqueness is specific for the problem at hand, and produces not a discrete set of minimisers but a continuum, parametrised by the spacing between the macrodomains. 
Although the focus of this paper lies on the unbounded domains ${{{\ensuremath{\mathbb{R}}}}}$ and ${{{\ensuremath{\mathbb{R}}}}}^N$, we make a brief excursion to extend the characterisation of global minimisers to the case of the torus ${\ensuremath{\mathbb{T}}}_L$ with length $L$ (Theorem \[th:exist\_periodic\_1d\]). ### A lower bound Figure \[fig:blocks\_2x3cases-intro\] and more clearly Figure \[fig:blocks\_asymp\] illustrate that as the imposed mass increases the number of blocks of the global minimiser(s) also increases. In Section \[sec:lower\_bnd\_1d\] we calculate values of the energy for various global minimisers, and show that the thickness of the internal layers approaches the optimal spacing of $$2m_0 := 6^{1/3}(c_u+c_v)^{1/3},$$ for $M \to \infty$ while the width of the end layers converges to half this value (Remark \[rem:convergenceofwidth\]). As a corollary we obtain an [explicit]{} and [sharp]{} lower bound for the energy on ${{{\ensuremath{\mathbb{R}}}}}$ (Theorem \[th:lower\_bound\_1d\]): $$\label{est:lower_bound_1d_intro_0} F_1 (u,v) \geq 2(c_0+\min(c_u,c_v)) + \left(\frac92\right)^{1/3}(c_u+c_v)^{2/3}\int_{{{\ensuremath{\mathbb{R}}}}}u.$$ The fact that the lower bound is sharp is significant. For instance, the affine dependence of the right-hand side on the mass $\int u$ implies that the minimal energy per unit of mass, $F_1(u,v) \left(\int u\right)^{-1}$, is generically not attained at any finite mass, but only in the limit $\int u \to \infty$. The word ‘generic’ refers here to the assumption that $c_0+\min(c_u,c_v)>0$, and the alternative case $c_0=c_u=0$ (or $c_0=c_v=0$) is fundamentally different. In this latter case macrodomains can be split and joined without changing the energy. 
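The constants in these bounds can be recovered from a simple bulk computation. Assume (this is our sketch, not an argument from the text) that in an infinite lamellar profile with blocks of width $w$, each period of length $2w$ carries two U-V interfaces and an $H^{-1}$ contribution of $w^3/6$, a standard square-wave calculation. The energy per unit U-mass is then $e(w) = 2(c_u+c_v)/w + w^2/6$, and minimising it reproduces both the optimal width $2m_0$ and the constant $(9/2)^{1/3}$:

```python
def energy_per_mass(w, d):
    """Bulk lamellar energy per unit U-mass for block width w, d = c_u + c_v,
    under our assumptions: interface cost 2d and H^{-1} cost w^3/6 per period."""
    return 2 * d / w + w**2 / 6

d = 0.7  # illustrative value of c_u + c_v
w_opt = (6 * d) ** (1 / 3)  # the optimal spacing 2 m_0 = (6(c_u+c_v))^(1/3)

# w_opt is a minimum: the energy is larger slightly to either side
assert energy_per_mass(w_opt, d) < energy_per_mass(0.99 * w_opt, d)
assert energy_per_mass(w_opt, d) < energy_per_mass(1.01 * w_opt, d)
# ...and its value is the constant appearing in the lower and upper bounds
assert abs(energy_per_mass(w_opt, d) - 4.5 ** (1 / 3) * d ** (2 / 3)) < 1e-12
```

The stationarity condition $-2d/w^2 + w/3 = 0$ gives $w^3 = 6d$ directly, and substituting back yields $e(2m_0) = 3 \cdot 6^{-1/3} d^{2/3} = (9/2)^{1/3}(c_u+c_v)^{2/3}$.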
The characterisation of global minimisers also allows us to establish an asymptotically sharp upper bound (Theorem \[th:lower\_bound\_1d\]): $$\label{est:upper_bound_1d} \lim_{M\to\infty} \inf\left\{M^{-1} F_1(u,v): (u,v)\in K_1, \ \int u = M\right\} = \left(\frac92\right)^{1/3}(c_u+c_v)^{2/3}.$$ In the limit $M\to\infty$, the bound  coincides with . Results: higher dimensions -------------------------- ### Energy bounds A common strategy in the study of pattern-forming systems is not to make any *Ansätze* about the morphology but to search for *weaker characterisations of behaviour*. As an example of this in the field of block copolymers, Choksi proves that for pure diblock melts the energy is bounded from below by a lower bound with a certain scaling in terms of the physical parameters—without making any *a priori* assumptions on the morphology [@Choksi01]. This scaling is shared by periodic lamellar structures with a specific lamellar separation. For the case at hand, the one-dimensional analysis provides both a lower and an upper bound on the energy in one dimension. Weakening the lower bound  to $$\label{eq:lower_bound_intro} F_1(u,v) \geq \left(\frac92\right)^{1/3}(c_u+c_v)^{2/3} \int u,$$ one might conjecture that the lower bound  holds in $ {{{\ensuremath{\mathbb{R}}}}}^N$, again without making any *a priori* assumption on the morphology. However, we have no proof of this conjecture, and in fact, results on mono- and bilayer stability (see Section \[subsubsec:companion\] below) suggest that such a conjecture may only hold for certain choices of the parameters. In Section \[sec:scaling\] we instead prove a lower bound which is also linear in mass, but has a smaller constant (Theorem \[lem:umassineqs\]). The explicit construction used to prove the upper bound  suggests a natural strategy for proving a similar upper bound in ${{{\ensuremath{\mathbb{R}}}}}^N$. 
In Sections \[sec:upper\_bound\] and \[subsec:examples\] we extend one-dimensional minimisers as lamellar structures in ${{{\ensuremath{\mathbb{R}}}}}^N$, and prove the same upper bound  in ${{{\ensuremath{\mathbb{R}}}}}^N$. Here the main step in the proof is the estimation that ‘boundary effects’ as a result of the cutoff to finite mass are of lower order. In Section \[subsec:examples\] we use the same idea to calculate the energy values of some structures with spherical geometry: either solid spheres of one phase (U or V) surrounded by a spherical layer of the other phase *(micelles)*, or ring-shaped layered structures. In both cases the asymptotic energy exceeds that given by the upper bound , indicating that they can not be global minimisers in ${{{\ensuremath{\mathbb{R}}}}}^N$. ### Monolayer and bilayer stability in periodic strips {#subsubsec:companion} In a companion paper [@vanGennipPeletier07b] we study the stability with respect to a certain class of perturbations of monolayers and bilayers, i.e. straight layered structures with one respectively two lines of U-V interface, in a periodic strip ${\ensuremath{\mathbb{T}}}_L\times{{{\ensuremath{\mathbb{R}}}}}$. There we show that for sufficiently large $L$ a monolayer (the simplest lamellar structure, of the form UV) is always unstable, while the stability of a bilayer (UVU or VUV) depends on the parameters. For the case of a UVU bilayer with optimal thickness, for instance, we prove a stability criterion of the form $$\text{stability}\qquad \Longleftrightarrow \qquad \frac{c_u+c_v}{c_0+2c_u+c_v} \geq g\left(\frac{L}{(c_0+2c_u+c_v)^{1/3}}\right),$$ where $g$ is a continuous function with values in $(0,1)$. Therefore, the bilayer can be stable or unstable, depending on the relative values in the interface penalisation parameters. Note that the relative value of $c_u+c_v$ should not be too small in order to have stability. More about the special role of $c_u+c_v$ follows in Section \[subsec:d\_12\]. 
Related work: partial localisation {#subsec:partloc} ---------------------------------- In previous work, one of the authors (Peletier) and R[ö]{}ger studied a related functional whose derivation was inspired by lipid bilayers [@PeletierRoeger06]. Lipid bilayers might be considered block copolymers, and therefore it is not surprising that the functional considered in [@PeletierRoeger06] is similar to $F_1 $: $$\label{eq:lipidbil} \mathcal{F}_{\epsilon}(u, v) := \left\{ \begin{array}{ll} \displaystyle \epsilon \int_{{{{\ensuremath{\mathbb{R}}}}}^2} |\nabla u| + \frac{1}{\epsilon} d_1(u, v) & \mbox { if $(u, v) \in \mathcal{K}_{\epsilon}$,}\vspace{0.25cm}\\ \infty &\mbox{ otherwise.} \end {array} \right.$$ Here $u$ is the volume fraction or density of lipid heads, $v$ is the volume fraction of lipid tails, $d_1(\cdot,\cdot)$ is the Monge-Kantorovich distance and $$\mathcal{K}_{\epsilon} := \left\{ (u, v) \in \text{BV}({\ensuremath{\mathbb{R}}}^2; \{0, 1/\epsilon\})^2 : uv = 0 \text{ a.e., and }\int_{{{{\ensuremath{\mathbb{R}}}}}^2} u = \int_{{{{\ensuremath{\mathbb{R}}}}}^2} v = M \right\}.$$ Apart from the choices $c_0 = c_v = 0$ and $c_u = 1$, the main difference between (\[eq:functional\]) and (\[eq:lipidbil\]) is the different non-local term. Note that the scaling (constant mass but increasing amplitude $1/\e$) implies that the supports of $u$ and $v$ shrink to zero measure. The main goal in [@PeletierRoeger06] was to investigate the limit $\e\to0$, and connect the limit behaviour to macroscopic mechanical properties of the lipid bilayers such as stretching, bending, and fracture. The authors studied sequences $(u_\e,v_\e)$ for which the rescaled energy $ \mathcal{G}_{\epsilon} := {\epsilon^{-2}} (\mathcal{F}_{\epsilon} - 2 M)$ remains finite. 
They revealed a remarkable property of the functional $\mathcal{G}_{\epsilon}$ (or $\mathcal{F}_{\epsilon}$): boundedness of $\mathcal{G}_\e(u_\epsilon,v_\epsilon)$ implies that the support of $u_\e$ and $v_\e$ becomes close, in the sense of Hausdorff distance between sets, to a collection of closed curves of total length $M$. The curve-like behaviour indicates *partial localisation:* localisation in one direction (normal to the limit curve) and non-localisation in the direction of the tangent. In addition one can recognise resistance to stretching (because of the fixed length) and resistance to fracture (because the curves are closed). Moreover, the curves’ support is approximately of ‘thickness’ $2\epsilon$, indicating an underlying bilayer structure. The authors also showed that $\mathcal{G}_{\epsilon}$ Gamma-converges to the [*Elastica functional*]{}, which penalises the curvature of curves, showing a tendency of the limit curves to resist bending. These results suggest considering similar limits for the functional $F_1$. 
In fact the subscript $1$ in $F_1$ and $K_1$ already refers to the appropriate rescaling: $$\label{def:Fe} F_\epsilon(u, v) = \left\{ \begin{array}{ll} \displaystyle \epsilon\left(c_0 \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla (u + v)| + c_u \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u| + c_v \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla v|\right) + \frac1\epsilon\|u - v\|_{H^{-1}}^2 & \mbox{ if $(u, v) \in K_\epsilon$,}\vspace{0.25cm}\\ \infty &\mbox{ otherwise,} \end{array} \right.$$ where $$K_\epsilon := \left\{ (u, v) \in \left(\text{BV}({{{\ensuremath{\mathbb{R}}}}}^N)\right)^2 : u(x), v(x) \in \{0, 1/\epsilon\} \text{ a.e., and } uv = 0 \text{ a.e., and } \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u = \int_{{{{\ensuremath{\mathbb{R}}}}}^N} v \; \right\}.$$ As mentioned above, in the companion paper [@vanGennipPeletier07b] we investigate the stability of bilayers, and show that parameter choices exist for which they are stable: this provides another suggestion that the functional $F_\epsilon$ may display similar behaviour in the limit $\epsilon\downarrow0$. This is work for future research. Preliminary definitions ======================= Problem setting --------------- In this paper we mostly consider as domain the whole space ${{{\ensuremath{\mathbb{R}}}}}^N$; however, sometimes we will make an excursion to the torus ${\ensuremath{\mathbb{T}}}_L^N$, i.e. a periodic cell $\prod_{i=1}^N [0,L_i]$ with the endpoints of each interval identified. For $f\in L^1({{{\ensuremath{\mathbb{R}}}}}^N)$ (or $L^1({\ensuremath{\mathbb{T}}}_L^N)$) with $\int f =0$ and compact support, $$\label{def:HMO} \|f\|_{H^{-1}}^2 := \int fG*f,$$ where $G$ is a Green’s function of the operator $-\Delta$ on ${{{\ensuremath{\mathbb{R}}}}}^N$ (or on ${\ensuremath{\mathbb{T}}}_L^N$). 
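In one dimension this norm is straightforward to evaluate: for compactly supported $f$ with $\int f = 0$ on the real line, the potential $\phi = G*f$ satisfies $\phi'(x) = -\int_{-\infty}^x f$, and integration by parts gives $\|f\|_{H^{-1}}^2 = \int |\phi'|^2$. The following sketch (Python with NumPy; the grid and test profile are ours) recovers the value $2m^3/3$ for a single ‘monolayer’ $f = \chi_{[0,m]} - \chi_{[m,2m]}$, a value used later in the one-dimensional analysis:

```python
import numpy as np

def h_minus_one_sq(f_vals, dx):
    """||f||_{H^-1}^2 = int |phi'|^2, with phi' = -cumulative integral of f
    (valid for compactly supported f of zero mean on the real line)."""
    grad_phi = -np.cumsum(f_vals) * dx
    return float(np.sum(grad_phi**2) * dx)

m = 0.5
x = np.linspace(-1.0, 3.0, 400001)
dx = x[1] - x[0]
f = np.where((x >= 0) & (x < m), 1.0, 0.0) - np.where((x >= m) & (x < 2 * m), 1.0, 0.0)

# A single monolayer of width 2m and height 1 has squared norm 2 m^3 / 3
assert abs(h_minus_one_sq(f, dx) - 2 * m**3 / 3) < 1e-3
```

Because $f$ has zero mean, $\phi'$ vanishes outside the support of $f$, so the quadrature can be restricted to any interval containing it.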
We define the space $H^{-1}({{{\ensuremath{\mathbb{R}}}}}^N)$ as the completion of $\left\{f\in L^1({{{\ensuremath{\mathbb{R}}}}}^N): \operatorname{supp}f \text{ compact}, \int_{{{{\ensuremath{\mathbb{R}}}}}^N} f=0\right\}$ with respect to the norm in (\[def:HMO\]). Similarly $H^{-1}({\ensuremath{\mathbb{T}}}_L^N)$ is defined as the completion of $\left\{f\in L^1({\ensuremath{\mathbb{T}}}_L^N): \int_{{\ensuremath{\mathbb{T}}}_L^N} f=0\right\}$ with respect to this norm. On ${\ensuremath{\mathbb{T}}}_L^N$ the zero average condition of $f$ is necessary in order for $G*f$ to respect the topology of the torus: $$\int_{{\ensuremath{\mathbb{T}}}_L^N} f = -\int_{{\ensuremath{\mathbb{T}}}_L^N} \Delta(G*f) = 0.$$ This condition also allows for a convenient reformulation of the norm  in terms of the *Poisson potential* $\phi_f$ of $f$, given by $$\phi_f = G*f,$$ such that $$\label{eq:calcHMO} \|f\|_{H^{-1}}^2 = \int f\phi_f = \int |\nabla\phi_f|^2.$$ In some cases it will be useful to add a constant to $\phi_f$; note that this can be done without changing the value in . If the set $H^1_0$ is defined as the completion of $C_c^1({{{\ensuremath{\mathbb{R}}}}}^N)$ (or $C^1({\ensuremath{\mathbb{T}}}_L^N)$ with zero mean) with respect to the norm $\|g\|_{H^1_0}^2=\int|\nabla g|^2$, then  is the dual norm of $H^1_0$ with respect to the $L^2$-inner product and satisfies $$\int fg \leq \|f\|_{H^{-1}} \|g\|_{H^1_0},$$ for all $f\in H^{-1}$ and $g\in H_0^1$. We repeat the definition of $F_1$ and $K_1$ for convenience. \[def:functional\] Let $c_0$, $c_u$, and $c_v$ be real numbers. 
Define $$\label{eq:functional2} F_1(u, v) = \left\{ \begin{array}{ll} \displaystyle c_0 \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla (u + v)| + c_u \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u| + c_v \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla v|\hspace{0.25cm} + \|u - v\|_{H^{-1}}^2 & \mbox{ if $(u, v) \in K_1$,}\vspace{0.25cm}\\ \infty &\mbox{ otherwise,} \end{array} \right.$$ where the admissible set is given by $$K_1 := \left\{ (u, v) \in \left(\text{BV}({{{\ensuremath{\mathbb{R}}}}}^N)\right)^2 : u(x), v(x) \in \{0, 1\} \text{ a.e., and}\ uv = 0 \text{ a.e., and } \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u = \int_{{{{\ensuremath{\mathbb{R}}}}}^N} v\right\}.$$ We will require $c_0, c_u$ and $c_v$ to be non-negative and assume that at least one of these coefficients is positive. See also Remark \[rem:coefficients\]. Sometimes we consider the case of the torus instead of ${{{\ensuremath{\mathbb{R}}}}}^N$. It is understood that in the above definition the instances of ${{{\ensuremath{\mathbb{R}}}}}^N$ are then replaced by ${\ensuremath{\mathbb{T}}}_L^N$. 
Another, equivalent, form of the functional will be useful, in which the penalisation of the three types of interface U-0, V-0, and U-V is given explicitly by surface tension coefficients $d_{kl}$: \[lemma:d\_ij\] Let the *surface tension coefficients* be given by $$\begin{aligned} d_{u0} &:= c_u+c_0,\\ d_{v0} &:= c_v+c_0,\\ d_{uv} &:= c_u+c_v.\end{aligned}$$ Non-negativity of the $c_i$ is equivalent to the conditions [^2] $$\label{eq:ddemands} 0 \leq d_{kl} \leq d_{kj} + d_{jl} \qquad\text{for each } k\not=l.$$ Then $$F_1(u, v) = \left\{ \begin{array}{ll} d_{u0}\HNo(S_{u0}) + d_{v0}\HNo(S_{v0}) + d_{uv}\HNo(S_{uv}) + \|u - v\|_{H^{-1}}^2 & \mbox{ if $(u, v) \in K_1$,}\\ \infty &\mbox{ otherwise.} \end{array} \right.$$ where $S_{kl}$ is the interface between the phases $k$ and $l$: $$\begin{aligned} &S_{u0} = \partial^* \operatorname{supp}u \setminus \partial^* \operatorname{supp}v,\\ &S_{v0} = \partial^* \operatorname{supp}v \setminus \partial^*\operatorname{supp}u,\\ &S_{uv} = \partial^* \operatorname{supp}u \cap \partial^* \operatorname{supp}v,\end{aligned}$$ and $\partial^*$ is the essential boundary of a set. The essential boundary of a set consists of all points at which the set has density other than $0$ or $1$. Details can be found in [@AmbrosioFuscoPallara00 Chapter 3.5]. The main step in recognising the equivalence of both forms of $F_1$ is noticing that, if $u$ is a characteristic function, then $$\int |\nabla u| = \HNo(\partial^* \operatorname{supp}u \cap \Omega).$$ Note the different interpretations of the coefficients $c_i$ and the surface tension coefficients $d_{kl}$. The latter have a direct physical interpretation: they determine the mutual repulsion between the different constituents of the diblock copolymer-homopolymer blend. 
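The linear relations in Lemma \[lemma:d\_ij\] are easily inverted: $c_0 = \tfrac12(d_{u0}+d_{v0}-d_{uv})$, $c_u = \tfrac12(d_{u0}+d_{uv}-d_{v0})$, $c_v = \tfrac12(d_{v0}+d_{uv}-d_{u0})$, so non-negativity of the $c_i$ is precisely the triangle inequality condition on the $d_{kl}$. A small sketch (Python), using the surface tension values of Figure \[fig:blocks\_2x3cases-intro\]:

```python
def c_from_d(du0, dv0, duv):
    """Invert d_u0 = c_u + c_0, d_v0 = c_v + c_0, d_uv = c_u + c_v."""
    c0 = (du0 + dv0 - duv) / 2
    cu = (du0 + duv - dv0) / 2
    cv = (dv0 + duv - du0) / 2
    return c0, cu, cv

# Surface tensions used in Figure blocks_2x3cases-intro:
du0, dv0, duv = 1.0, 0.3, 0.7
c0, cu, cv = c_from_d(du0, dv0, duv)
# Round trip recovers the d's, and the triangle inequalities give c_i >= 0
assert abs(cu + c0 - du0) + abs(cv + c0 - dv0) + abs(cu + cv - duv) < 1e-12
assert min(c0, cu, cv) >= -1e-12   # here c_v = 0

# A triple violating d_uv <= d_u0 + d_v0 produces a negative coefficient
assert c_from_d(0.1, 0.1, 0.5)[0] < 0
```

For these figure values $c_v = 0$, so the U-V interface penalty $d_{uv}$ is carried entirely by $c_u$.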
For example, the value of $d_{uv}$ (as compared to the values of $d_{u0}, d_{v0}$ and $1$, the coefficient in front of the $H^{-1}$-norm) determines the energy penalty associated with close proximity of U- and V-polymers. In particular, if one of these surface tension coefficients is zero, the corresponding polymers do not repel each other and many interfaces between their respective phases in the model can be expected. On the other hand the coefficients $c_i$, when taken separately, do not convey complete information about the penalisation of the boundary of a phase. If for instance $c_u=0$, but $c_v\neq0$, the part of the U-phase interface that borders on the V-phase still receives a penalty, because $d_{uv}=c_v$. For this reason the use of surface tension coefficients makes more sense from a physical point of view. For the mathematics it is often easier to use the formulation in terms of $c_i$. \[rem:coefficients\] The condition  can be understood in several ways. If, for instance, $d_{uv}>d_{u0}+d_{v0}$, then the U-V type interface, which is penalised with a weight of $d_{uv}$, is unstable, for the energy can be reduced by slightly separating the U and V regions and creating a thin zone of 0 in between. A different way of seeing the necessity of  is by remarking that the equivalent requirement of non-negativity of the $c_i$ is necessary for $F_1$ to be lower semicontinuous in e.g. the $L^1$ topology. Our assumption that at least one $c_i$ is positive is equivalent to assuming that at least two $d_{kl}$ are positive. The role of $d_{uv}$ {#subsec:d_12} -------------------- The behaviour of the model described by $F_1$ is crucially different in the two cases $d_{uv}>0$ ($c_u+c_v>0$) and $d_{uv}=0$ ($c_u=c_v=0$). The statements made in the introduction such as ‘the functional $F_1$ prefers structures with a definite length scale’ actually only hold in the case $d_{uv}>0$. 
For most results in this work we will assume this condition to hold, and to justify this we now show with an example how the case $d_{uv}=0$ is different. Consider the one-dimensional case, take $\Omega$ to be the torus ${\ensuremath{\mathbb{T}}}_1$, and fix $c_0=1$ and $c_u=c_v=0$, or equivalently $d_{uv}=0$ and $d_{u0}=d_{v0}=1$. Restricting ourselves to functions $(u,v)\in K_1$ with $\int_0^1 u = \int_0^1 v = M$, for some fixed mass $0<M<1/2$, we find that for any $(u,v)$ there are at least two U-0 or V-0 type transitions, and therefore $$F_1(u,v) = \int_0^1 |(u+v)'| \hspace{0.2cm}+ \|u-v\|_{H^{-1}}^2 \geq 2.$$ On the other hand, equality is only reached if $u-v=0$, which is not possible for positive mass $M$. But the value $2$ can be reached by a sequence of approximating pairs $(u_n,v_n)$, $$\begin{aligned} &u_n(x) = \begin{cases} 1 & |x|\leq n \text{ and } \frac {2k}{n} < x < \frac{2k+1}n, \text{ for some } k\in \mathbb Z \\ 0 &\text{otherwise}\end{cases}\\ &v_n(x) = \begin{cases} 1 & |x|\leq n \text{ and } \frac {2k-1}{n} < x < \frac{2k}n, \text{ for some } k\in \mathbb Z \\ 0 &\text{otherwise}\end{cases}\end{aligned}$$ Then $(u_n,v_n)\in K_1$ and - $\int_0^1 |(u_n+v_n)'| = \int_0^1 |\chi_{[-n,n]}'| = 2$; - In Section \[sec:globmin1d\] it is calculated that a single one-dimensional monolayer of width $2m$ and height $1$ satisfies $\|u - v\|_{H^{-1}} ^2 = 2m^3/3$; extending this result to the functions $(u_n,v_n)$, which are concatenations of $n^2$ such monolayers, each of width $2/n$, we find $\|u - v\|_ {H^{-1}}^2 = n^2 \cdot 2n^{-3}/3 =2n^{-1}/3$. Consequently, $F_1(u_n,v_n)$ converges to $2$ for $n\to \infty$. This sequence illustrates the preferred behaviour when $d_{uv}=0$: since the interfaces between the U- and V-phases are not penalised, rapid alternation of U- and V-phase effectively eliminates the $H^{-1}$-norm, reducing the energy to the interfacial energy associated with a single field $u+v$. 
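The convergence $F_1(u_n,v_n) \to 2$ can be checked numerically. The sketch below (Python with NumPy; the discretisation is ours) evaluates $\|u_n - v_n\|_{H^{-1}}^2$ directly from the definition, treating $u_n, v_n$ as functions on the real line as their definition suggests, and recovers the value $2n^{-1}/3$ computed above:

```python
import numpy as np

def h_minus_one_sq(f_vals, dx):
    """||f||_{H^-1}^2 via phi' = -cumulative integral of f (zero-mean,
    compactly supported f on the real line)."""
    grad_phi = -np.cumsum(f_vals) * dx
    return float(np.sum(grad_phi**2) * dx)

def u_minus_v(x, n):
    """u_n - v_n from the example: +1 on (2k/n, (2k+1)/n), -1 on
    ((2k-1)/n, 2k/n), both restricted to |x| <= n."""
    sign = np.where(np.floor(x * n).astype(int) % 2 == 0, 1.0, -1.0)
    return np.where(np.abs(x) <= n, sign, 0.0)

for n in (2, 3, 4):
    x = np.linspace(-n - 0.5, n + 0.5, 800001)
    dx = x[1] - x[0]
    val = h_minus_one_sq(u_minus_v(x, n), dx)
    # n^2 charge-neutral monolayers, each contributing 2(1/n)^3/3
    assert abs(val - 2 / (3 * n)) < 1e-3
```

Each monolayer is charge-neutral, so the potential slope returns to zero at every monolayer boundary and the contributions add exactly, giving $n^2 \cdot \tfrac23 n^{-3} = \tfrac23 n^{-1}$.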
Global minimisers in one dimension {#sec:globmin1d} ================================== In this section we fully characterise the set of global minimisers of $F_1$ in one space dimension, i.e. $N=1$. Our main discussion concerns the case of ${{{\ensuremath{\mathbb{R}}}}}$, but in Section \[subsec:ex\_on\_TL\] we will briefly mention results on the torus ${\ensuremath{\mathbb{T}}}_L$. In one space dimension it is useful to regard admissible functions $(u,v)$ as a sequence of *blocks*. A *U-block*, a *V-block*, and a *0-block* are connected components of $\operatorname{supp}u$, $\operatorname{supp}v$, and ${{{\ensuremath{\mathbb{R}}}}}\setminus \operatorname{supp}(u+v)$, respectively. Adjacent blocks are separated by *transitions* or *interfaces*. We will see below (Corollary \[corol:finite-interfaces\]) that, provided $d_{uv}>0$, any stationary point has a finite number of interfaces, even if $d_{u0}$ or $d_{v0}$ vanishes. If $(u, v)$ is an admissible pair, each of the connected components of its support $\operatorname{supp}(u+v)$ is in fact a macrodomain in the sense of the introduction. If there is only one such macrodomain, we call the configuration *connected*. Thinking about the structures in terms of sequences of blocks, we can specify connected configurations up to block width and translation by a sequence of U’s and V’s, e.g. UVUVU. Characterising the set of global minimisers splits into two steps: - (A) For a given macrodomain we describe the optimal spacing between the transitions; - (B) We derive necessary conditions for the occurrence of a disconnected global minimiser, i.e. a global minimiser with more than one macrodomain. In addition we use the techniques of part B above to prove the existence of a global minimiser. In Section \[subsec:macrodomains\] we first describe the characterisation given by Choksi and Ren [@ChoksiRen05] of the internal structure of macrodomains, which essentially coincides with part A above.
We then continue in Section \[subsec:connected\_support\] by showing that the support can be reduced to a single connected component; this also provides necessary and sufficient conditions for non-uniqueness (Theorem \[th:exist\_real\_line\]). The reduction to a single macrodomain also allows us to prove an existence result (Theorem \[th:exist\_real\_line\]). Finally, in Section \[sec:lower\_bnd\_1d\], we calculate the values of these minimisers and derive a lower bound for the energy per unit of mass. Stationarity {#subsec:Stationarity} ------------ Because the set of admissible functions $K_1$ is not locally convex, we need to formulate the notion of a stationary point carefully. We call $(u, v) \in K_1$ a stationary point of $F_1$ if for any sequence $(u_n, v_n)\subset K_1$ such that $u_n\to u$ in $L^1$ and $v_n\to v$ in $L^1$, $$|F_1(u, v) - F_1(u_n, v_n)| = o\left(\int_{\Omega} |u-u_n|\,dx + \int_{\Omega} |v-v_n|\,dx\right).$$ As a consequence of this definition, if $t\mapsto (u(t), v(t))$ is a curve in $K_1$, with $(u(0), v(0))$ a stationary point of $F_1$, then $$\left.\frac{d}{dt} F_1(u(t), v(t))\right|_{t=0} = 0.$$ In the proofs of the results in Section \[subsec:macrodomains\] a special case of this is used: for a connected configuration in one dimension that is stationary under constrained mass, the derivative of $F_1$ with respect to mass-preserving changes in the position of the interfaces is zero. Characterisation of macrodomains {#subsec:macrodomains} -------------------------------- For periodic domains, Choksi and Ren [@ChoksiRen05] have given a characterisation of the structure of macrodomains. For its formulation it is useful to define three *types* of interface: 0-U and U-0 interfaces are considered to be of the same type, as are 0-V and V-0 interfaces, and U-V and V-U interfaces.
Choksi and Ren’s conclusions are \[th:CR\] Let $(u,v)$ be a stationary point of $F_1$ on the torus ${\ensuremath{\mathbb{T}}}_L$ under constrained mass, with $\operatorname{supp}(u+v)$ connected and with a finite number of interfaces. Then 1. \[thCR:1\] Each pair of adjacent U-V type transitions is separated by the same amount; i.e. each U- or V-block is of the same width, with the exception of the two end blocks. 2. \[thCR:2\] In the cases UVUV…U and VUVU…V the end blocks are half as wide as the internal blocks. 3. \[thCR:3\] In the case UVUV…V (or the mirrored configuration VUVU…U) there is an additional relation that determines the width of the end blocks. The case of ${{{\ensuremath{\mathbb{R}}}}}$ was not explicitly discussed by Choksi and Ren, but both the result and the proof for this case are simpler than for the periodic cell: \[th:CR2\] Let $(u,v)$ be a stationary point of $F_1$ on ${{{\ensuremath{\mathbb{R}}}}}$ under constrained mass, with $\operatorname{supp}(u+v)$ connected and with a finite number of interfaces. Then 1. \[thCR2:1\] Each pair of adjacent U-V type transitions is separated by the same amount; i.e. each U- or V-block is of the same width, with the exception of the two end blocks. 2. \[thCR2:2\] The end blocks are half as wide as the internal blocks. The main tool in the proof of these theorems is the following lemma. \[lemma:equal\_phi\] For any stationary point under constrained mass, the Poisson potential $\phi$ has equal value at any two interfaces of the same [type]{}. The statements about the block sizes are deduced from this lemma, and from the fact that the potential $\phi$ has prescribed second derivative on each block. Reduction to connected support {#subsec:connected_support} ------------------------------ We first need a technical result to rule out the possibility of an infinity of transitions. 
Let $(u,v)$ be a stationary point under constrained mass, let $\Omega$ be either ${{{\ensuremath{\mathbb{R}}}}}$ or ${\ensuremath{\mathbb{T}}}_L$ and let $\omega\subset \Omega$ be an open set such that $v (\omega) = \{0\}$. Then $\omega$ contains at most two U-0 type transitions. A similar statement holds with $u$ and $v$ exchanged. On $\omega$, $\phi''\leq0$; each U-0 or 0-U transition occurs at the same value of $\phi$ (Lemma \[lemma:equal\_phi\]), say at $\phi=c\in{{{\ensuremath{\mathbb{R}}}}}$. If the set $\{x \in\omega:\phi(x) = c\}$ has more than two elements, then, by concavity of $\phi$ on $\omega$, $$\begin{aligned} &\phi(x)= c \qquad\text{for }x\in [x_1,x_2],\\ &\phi(x) < c \qquad\text{for }x\in \omega\setminus[x_1,x_2],\end{aligned}$$ for some $x_1 < x_2\in\omega$. On $(x_1,x_2)$, therefore, $\phi''=0$ and thus $u=0$. Therefore there are at most two transitions connecting U and 0, at $x=x_1$ and at $x=x_2$. \[corol:finite-interfaces\] If $d_{uv}>0$, then a stationary point under constrained mass has a finite number of transitions. By , at least two out of the three $d_{ij}$ are strictly positive. If all three are positive, then the finiteness of $F_1$ implies a bound on the number of interfaces. If one is zero, say $d_{u0}=0$, then the lemma above shows that each component of the set where $v$ vanishes contains at most two U-0 type transitions, so the number of U-0 transitions is controlled by the number of V-interfaces. Since the latter is bounded, the former is also. \[th:exist\_real\_line\] Let $N=1$. Let $d_{uv} > 0$, and fix a mass $M>0$. 1. There exists a global minimiser under constrained mass $M$ for which $\operatorname{supp}(u+v)$ is connected.\[item:reallineconnmin\] 2. This global minimiser is *non*-unique (apart from translation and mirroring) if and only if 1. the energy of this configuration is equal to the energy of another configuration $(\bar u, \bar v)$ for which $\operatorname{supp}(\bar u+ \bar v)$ is also connected, or \[item:sameenergy2\] 2. one of the following two conditions is satisfied: \[item:sameenergy3\] 1.
$d_{u0}=0$ and there exists a global minimiser with an internal U-block, or\[item:RL:condunique1\] 2. $d_{v0}=0$ and there exists a global minimiser with an internal V-block.\[item:RL:condunique2\] The non-uniqueness mentioned in condition \[item:sameenergy2\] can manifest itself in multiple ways. Figure \[fig:blocks\_2x3cases-intro\] shows how the optimal structure varies with mass: as the mass increases, the global minimiser progresses through structures with more and more layers. At the intersection points of the curves in the figure, indicated by a circle, structures belonging to different curves have the same value of the energy. Another possibility occurs when $d_{u0}=d_{v0}$, since then $u$ and $v$ can be interchanged without changing the energy. The situation where two minimisers are both connected, have the same sequence of blocks (up to mirroring), but differ in the block widths, however, is ruled out by Theorem \[th:CR2\]. The fact that the global minimiser can be non-unique when, for example, $d_{u0} =0$ is easily seen from an example. Suppose that there exists a global minimiser of the form UVUVU. Since the outer blocks of this structure are both U-blocks, Lemma \[lemma:equal\_phi\] states that the value of $\phi$ is the same at the two interfaces of U-0 type, and $\phi$ is therefore symmetric around the middle of the structure. We now split the structure at the middle into two parts, and move the two parts apart. In doing so we create two new U-0 type transitions, which carry no energy penalty since we assumed $d_{u0}=0$. Since we split at the middle, where $\phi'=0$, the new potential $\phi$ can be constructed from the old one by translation of the parts, and the value of $\|u-v\|_{H^{-1}}$ is also unchanged. We defer the proof of existence of a global constrained minimiser to the end, and start by showing that existence of a global minimiser implies existence of a global *connected* minimiser.
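As an aside, the splitting argument in the example above can be checked numerically. The sketch below (an illustration only; the widths, grid spacing, and gap size are arbitrary choices) builds a symmetric UVUVU profile with half-width end blocks, splits it at the centre — where $\phi'=0$ by symmetry — and verifies that the $H^{-1}$ term is unchanged:

```python
def hm1_sq(w, dx):
    """int (phi')^2 with -phi'' = w and phi'(-inf) = 0, for sampled w = u - v."""
    phi_p, total = 0.0, 0.0
    for wi in w:
        phi_p -= wi * dx
        total += phi_p * phi_p * dx
    return total

dx = 1e-4
m = 0.2                       # width of the internal blocks
k = round(m / dx)

# Symmetric UVUVU profile: end U-blocks of width m/2, internal blocks of width m.
u_half = [1.0] * (k // 2)
profile = u_half + [-1.0] * k + [1.0] * k + [-1.0] * k + u_half

# Split at the centre (middle of the internal U-block, where phi' = 0)
# and move the two halves apart by inserting a 0-block.
half = len(profile) // 2
gap = [0.0] * (3 * k)
split = profile[:half] + gap + profile[half:]

print(hm1_sq(profile, dx), hm1_sq(split, dx))   # equal up to rounding
```

Only the $H^{-1}$ term is checked here; when $d_{u0}=0$ the two newly created U-0 interfaces carry no penalty, so the total energy is then unchanged as well.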
Suppose $(u, v) \in K_1$ is a global minimiser such that ${{{\ensuremath{\mathbb{R}}}}}\setminus \operatorname{supp}(u+v) $ has at least three connected components. By Corollary \[corol:finite-interfaces\] the support of $u+v$ is bounded, and therefore we can take those three components to be $(-\infty,0)$, $(x_1,x_2)$, and $(x_3,\infty)$. The points $0$, $x_1$, $x_2$, and $x_3$ are therefore all interfaces. Let $\phi$ be the associated potential; since $u$ and $v$ vanish on $(x_1,x_2)$ and $(x_3,\infty)$, $\phi$ is linear on $(x_1,x_2)$ and constant on $(x_3, \infty)$. Denote by $\phi'_{12}$ the value of $\phi'(x)$ for $x \in [x_1, x_2] $. For any $0<a\leq {x_2-x_1}$, which we fix for the moment, we construct a new pair of functions $\bar u$ and $\bar v$ with associated potential $\widetilde \phi$ as follows. Set $$\begin{aligned} \label{def:overline_u} \bar u(x) &:= \begin{cases} u(x) & x\leq x_1\\ u(x+a) & x_1<x<x_3-a\\ 0 & x\geq x_3-a \end{cases}\\ \bar v(x) &:= \begin{cases} v(x) & x\leq x_1\\ v(x+a) & x_1<x<x_3-a\\ 0 & x\geq x_3-a \end{cases}\\ \widetilde\phi(x) &:= \begin{cases} \phi(x) & x\leq x_1\\ \phi(x+a) -\phi(x_1+a) + \phi(x_1) & x_1<x<x_3-a\\ \phi(x_3)-\phi(x_1+a)+\phi(x_1) & x\geq x_3-a \end{cases} \label{def:widetilde_phi}\end{aligned}$$ Because $\phi'(x_1) = \phi'(x_1+a) = \phi'_{12}$, the function $\widetilde \phi$ is continuously differentiable on ${{{\ensuremath{\mathbb{R}}}}}$; and since $\widetilde\phi$ satisfies $-{\widetilde \phi}'' = \bar u - \bar v$ on ${{{\ensuremath{\mathbb{R}}}}}$, it is the Poisson potential associated with $\bar u$ and $\bar v$. We now show that $F_1(\bar u,\bar v)\leq F_1(u,v)$. As for the interfacial term in $F_1$, if $0<a<x_2-x_1$, then the various transitions remain the same, only translated to different positions; therefore the interfacial term is unchanged.
In the case $a=x_2-x_1$, in comparison with $(u,v)$ the two interfaces at $x=x_1$ and $x=x_2$ have been joined to one interface, or have even annihilated each other; by the assumption  this does not increase the interfacial term. For the second term of $F_1$ we calculate $$\begin{aligned} \notag \int_{{{\ensuremath{\mathbb{R}}}}}\bigl({\widetilde\phi}'\bigr)^2 &= \int_{-\infty}^{x_1}{\phi'}^2 + \int_{x_1+a}^{x_3} {\phi'}^2 \\ \label{ineq:phi1phi3} &\leq \int_{-\infty}^{x_1}{\phi'}^2 + \int_{x_1+a}^{x_3} {\phi'}^2 + a{\phi'}_{12}^2 \\ &= \int_{{{{\ensuremath{\mathbb{R}}}}}} {\phi'}^2. \notag\end{aligned}$$ We conclude that $F_1(\bar u,\bar v)\leq F_1(u,v)$. Since $(u, v)$ is a global minimiser, we conclude that $F_1(\bar u,\bar v)=F_1(u,v)$ and thus that $(\bar u, \bar v)$ is another global minimiser. Furthermore by Corollary \[corol:finite-interfaces\], $\operatorname{supp}(u+v)$ has a finite number of connected components and thus we can repeat this procedure until only one component remains. Therefore we have proved that if a global minimiser exists, then there (also) exists a global minimiser with connected support. Assume now that two global minimisers exist, one of which has connected support. The other global minimiser, let us call it $(u, v)$, either has connected $\operatorname{supp}(u+v)$ or disconnected $\operatorname{supp}(u+v)$. In the former case we have proved part \[item:sameenergy2\] of the theorem; therefore we now assume the latter case, and show that this implies part \[item:sameenergy3\]. Since $(u, v)$ has disconnected $\operatorname{supp}(u+v)$, we can apply the construction above. For a given choice of $a$, we find another configuration $(\bar u, \bar v)$ with energy equal to or less than that of $(u, v)$. Since $(u, v)$ is a global minimiser, the energy of $(\bar u, \bar v)$ is equal to that of $(u, v)$ and thus the two inequalities encountered above are saturated.
In particular, - The joining of the two interfaces surrounding a 0-block does not reduce the energy; - The inequality  is saturated. The saturation of  implies that $\phi'_{12}=0$, and therefore that $\phi(x_1)=\phi(x_2)$. We now prove that these interfaces are of the same type, *i.e.* either both U-0 type or both V-0 type transitions. Suppose not, and to be concrete, suppose that the interface at $x=x_1$ is a V-0 transition, and at $x=x_2$ a 0-U transition. In this paragraph we will explicitly distinguish between mirrored interfaces of the same type, e.g. U-0 and 0-U. Since $-\phi''=u-v$ and $\phi'(x_1) = \phi'(x_2) = \phi'_{12} = 0$, there exists a $y_2 > 0$ such that the next transition is at $x_2 + y_2$ and $ \phi$ decreases for $x \in (x_2, x_2 + y_2)$, implying that the next transition can not be a U-0 transition (which would require the same value for $\phi$ as at $x=x_2$) but is a U-V transition, with a value of $\phi$ less than $\phi(x_2)$. The same argument holds for the interface at $x_1$: the previous transition is at $x_1 - y_1$ for a $y_1 > 0$ and is again a U-V transition, this time with a value of $\phi$ larger than $\phi(x_1) = \phi(x_2)$. Since two U-V transitions have a different value of $\phi$, the structure is not stationary, a contradiction. Since the interfaces at $x_1$ and $x_2$ are of the same type, a non-changing interface energy implies that either $d_{u0}=0$ or $d_{v0}=0$, which is the first part of conditions \[item:RL:condunique1\] and \[item:RL:condunique2\]. Since the construction provides a global minimiser with an internal U-block (if $d_{u0}=0$) or an internal V-block (if $d_{v0}=0$), the second part of these conditions is also satisfied. We have now proved that existence of a disconnected global minimiser implies condition \[item:sameenergy3\]. 
The opposite statement, that condition \[item:sameenergy3\] suffices for the existence of a disconnected global minimiser, follows from splitting any minimiser at a point $x$ inside a U-block (supposing $d_{u0}=0$) such that $\phi'(x)=0$. It remains to prove the existence of a global minimiser, and we now turn to this issue. Let $(u_n,v_n)$ be a minimising sequence. We first note that the translation arguments that we used above allow us to reduce an arbitrary minimising sequence to a minimising sequence whose elements each are connected. Therefore we may assume that the support of the sequence remains inside some large bounded set $\Omega\subset {{{\ensuremath{\mathbb{R}}}}}$, and does not approach the boundary of this set. Since both $u_n$ and $v_n$ are bounded in $L^\infty(\Omega)$, there exist subsequences (that we again denote by $u_n$ and $v_n$) such that $$u_n \stackrel{*}{\rightharpoonup} u_\infty \qquad\text{and}\qquad v_n\stackrel{*}{\rightharpoonup} v_\infty \qquad\text{in }L^\infty(\Omega).$$ Note that this convergence implies that $\int u_\infty = \int v_\infty = M$, since the constant $1$ is an element of $L^1(\Omega)$. Since $L^2(\Omega)\subset L^1(\Omega)$ we also have $$u_n \rightharpoonup u_\infty \qquad\text{and}\qquad v_n \rightharpoonup v_\infty \qquad\text{in }L^2(\Omega).$$ The functions $u_\infty, v_\infty$, as the weak-\* limits of $u_n, v_n$, take values in the interval $[0,1]$. Thus if we replace $K_1$ in (\[eq:functional2\]), the definition of $F_1$, by (note the change in value set) $$\tilde K_1 := \left\{ (u, v) \in \left(\text{BV}(\Omega)\right)^2 : u(x), v(x) \in [0, 1] \text{ a.e., and } uv = 0 \text{ a.e., and } \int_{\Omega} u = \int_{\Omega} v\right\},$$ then $(u_\infty, v_\infty)\in \tilde K_1$ and $F_1$ is convex on $L^2(\Omega)$. This implies that the subdifferential of $F_1$ at $(u_\infty, v_\infty)$ is non-empty, i.e.
there exist $p_1, p_2\in L^2(\Omega)$ such that $$F_1(u_n, v_n) \geq F_1(u_\infty, v_\infty) + \int_\Omega p_1 (u_n-u_\infty) + \int_\Omega p_2 (v_n-v_\infty).$$ Weak convergence in $L^2(\Omega)$ now gives us lower semicontinuity with respect to this convergence: $$F_1(u_\infty,v_\infty) \leq \liminf_{n\to\infty} F_1(u_n,v_n).$$ It remains to prove that $u_\infty$ and $v_\infty$ are admissible, i.e. that they take values $0$ and $1$ and that $u_\infty v_\infty = 0$ almost everywhere. In other words, we want to show that not only $(u_\infty, v_\infty)\in\tilde K_1$, but even $(u_\infty, v_\infty)\in K_1$. By the assumption $d_{uv}>0$ at least one of the coefficients $c_u$ and $c_v$ is strictly positive. Suppose that $c_u>0$; then the boundedness of $\int | u_n'|$ implies that the convergence of $u_n$ is strong in $L^1$ and pointwise almost everywhere [@EvansGariepy92 Theorem 5.2.4]. Therefore, for any $\psi\in L^\infty(\Omega)$, $$\int_\Omega \psi u_\infty v_\infty = \lim_{n\to\infty} \int_\Omega \psi u_n v_n = 0,$$ implying that $u_\infty v_\infty = 0$. The pointwise convergence also gives $$u_\infty\in \{0,1\} \quad\text{a.e.}$$ If also $c_v>0$, then the same convergence holds for $v_\infty$, and the proof is done. If instead $c_0>0$, then the same holds for $u_\infty+v_\infty$, and again the proof is done. We continue under the assumption that $c_0=c_v=0$. For the pair $(u_\infty,v_\infty)$ to be admissible, it is necessary that $v_\infty$ takes values in the boundary set $\{0,1\}$ only. This is a consequence of the lemma that we state below. Let $c_0=c_v=0$. If $(u,v)$ minimises $F_1$ among all pairs $(\bar u,\bar v)$ such that - $\bar u\in BV({{{\ensuremath{\mathbb{R}}}}};\{0,1\})$ and $\bar v\in BV({{{\ensuremath{\mathbb{R}}}}};[0,1])$; - $\bar u\bar v = 0$ a.e.
in ${{{\ensuremath{\mathbb{R}}}}}$; - $\int_{{{\ensuremath{\mathbb{R}}}}}\bar u = \int_{{{\ensuremath{\mathbb{R}}}}}\bar v = \int_{{{\ensuremath{\mathbb{R}}}}}u$, then $v(x)\in\{0,1\}$ for almost every $x\in{{{\ensuremath{\mathbb{R}}}}}$. Choose $0<\eta<1/2$ and let $\omega\subset {{{\ensuremath{\mathbb{R}}}}}$ be the set of intermediate values $$\omega = \{ x\in {{{\ensuremath{\mathbb{R}}}}}: v(x) \in (\eta,1-\eta)\}.$$ We need to prove that $|\omega|=0$. Assume that $|\omega|>0$ and define a perturbation $$\zeta(x) = (\phi(x)-c)\chi_\omega(x),$$ where $\phi = \phi_{u-v}$ is the Poisson potential associated with $u-v$, $\chi_\omega$ is the characteristic function of the set $\omega$, and $c$ is a constant chosen to ensure that $\int\zeta = 0$. Note that almost everywhere on $\omega$ the function $\phi$ is twice differentiable with $\phi''\geq \eta>0$. Since the pair $(u, v+\epsilon \zeta)$ is admissible for $\epsilon$ in a neighbourhood of zero, $$0 = \left.\frac\partial{\partial\epsilon} F_1(u, v+\epsilon \zeta)\right|_{\epsilon=0} = -2\int_{{{\ensuremath{\mathbb{R}}}}}\zeta \phi = -2\int_\omega (\phi-c)^2,$$ so that $\phi$ is constant a.e. on $\omega$. As $\phi$ is defined up to addition of constants we may choose $\phi=0$ on $\omega$. Since $|\omega|>0$, we can choose $x_0\in\omega$ such that $\omega$ has density $1$ at $x_0$ and that $\phi$ is twice differentiable at $x_0$, with $\phi''(x_0) \in(\eta,1-\eta)$. Because of the density condition it is possible to find sequences $a_n \in {{{\ensuremath{\mathbb{R}}}}}$, $n\in{{{\ensuremath{\mathbb{N}}}}}$, with the properties - $a_n \to 0$ as $n\to\infty$; - For each $n\in {{{\ensuremath{\mathbb{N}}}}}$, $x_0\pm a_n \in\omega$. Then $$\phi''(x_0) = \lim_{n\to\infty} |a_n|^{-2}\bigl[\phi(x_0-a_n) - 2\phi(x_0) + \phi(x_0+a_n)\bigr] = 0,$$ a contradiction with $\phi''(x_0)\geq \eta$, and therefore with the assumption that $\omega$ has positive measure.
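As a sanity check on the variational step in this proof (an illustration only, not part of the argument): for zero-mean $w$ and $\zeta$ the derivative of the $H^{-1}$ term is governed by the identity $\langle w,\zeta\rangle_{H^{-1}} = \int \phi_w \zeta$, where $-\phi_w'' = w$. The script below compares a central finite difference of $\epsilon \mapsto \|w+\epsilon\zeta\|_{H^{-1}}^2$ against $2\int \phi_w \zeta$ for arbitrary zero-mean choices of $w$ and $\zeta$:

```python
import math

dx = 1e-3
x = [i * dx for i in range(1000)]          # grid on [0, 1)

def potential_prime(w):
    # phi' with -phi'' = w and phi'(-inf) = 0
    out, p = [], 0.0
    for wi in w:
        p -= wi * dx
        out.append(p)
    return out

def hm1_sq(w):
    # ||w||_{H^-1}^2 = int (phi')^2
    return sum(p * p for p in potential_prime(w)) * dx

def potential(w):
    # phi with phi(0) = 0; the additive constant is irrelevant here
    out, phi = [], 0.0   # because zeta below has zero mean
    for p in potential_prime(w):
        phi += p * dx
        out.append(phi)
    return out

w = [1.0 if 0.2 < xi < 0.4 else -1.0 if 0.5 < xi < 0.7 else 0.0 for xi in x]
zeta = [math.sin(2 * math.pi * xi) for xi in x]      # zero mean

eps = 1e-5
fd = (hm1_sq([wi + eps * zi for wi, zi in zip(w, zeta)])
      - hm1_sq([wi - eps * zi for wi, zi in zip(w, zeta)])) / (2 * eps)
exact = 2 * sum(p * z for p, z in zip(potential(w), zeta)) * dx
print(fd, exact)                                      # agree to O(dx)
```

In the lemma above the perturbation acts on $v$, so $w = u-v$ changes by $-\epsilon\zeta$ and the derivative picks up the opposite sign; the bilinear identity itself is the same.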
Excursion: global minimisers on ${\ensuremath{\mathbb{T}}}_L$ {#subsec:ex_on_TL} ------------------------------------------------------------- By very similar arguments one may prove the corresponding statement for functions on the torus ${\ensuremath{\mathbb{T}}}_L$, thus extending the characterisation of [@ChoksiRen05] to all global minimisers. \[th:exist\_periodic\_1d\] Let $L>0$, $d_{uv} > 0$, and fix a mass $M>0$, with $M<L/2$. 1. There exists a global minimiser $(u, v)$ of $F_1$ on the torus ${\ensuremath{\mathbb{T}}}_L$ under constrained mass $M$ for which $\operatorname{supp}(u+v)$ is connected.\[item:torusconnmin\] 2. This global minimiser is *non*-unique (apart from translation and mirroring) if and only if\[item:torusnonunique\] 1. the energy of this configuration is equal to the energy of another configuration $(\bar u, \bar v)$ for which $\operatorname{supp}(\bar u+\bar v)$ is connected, or \[item:sameenergy\] 2. one of the following two conditions is satisfied: \[th:periodic:cond\] 1. $d_{u0}=0$ and there exists a global minimiser with an internal U-block or;\[item:PER:condunique1\] 2. $d_{v0}=0$ and there exists a global minimiser with an internal V-block. \[item:PER:condunique2\] The proof follows the same lines as in the case of ${{{\ensuremath{\mathbb{R}}}}}$, Theorem \[th:exist\_real\_line\]. We will point out the differences between the two cases. Suppose $(u, v) \in K_1$ is a global minimiser such that ${\ensuremath{\mathbb{T}}}_L\setminus \operatorname{supp}(u+v)$ has at least two connected components, which, by translating $u$ and $v$, we can assume to be $(x_1,x_2)\subset [0,L)$ and $(x_3,L)$ with $x_3>x_2$. Let $\phi$ be the associated potential; since $u$ and $v$ vanish on $(x_1,x_2)$ and $(x_3,L)$, $\phi$ is linear on these two intervals. Let $\phi'(x) = \phi'_ {12}$ for $x \in [x_1, x_2]$ and $\phi'(x) = \phi'_{3L}$ for $x \in [x_3, L]$. By possibly exchanging roles we can assume that $|\phi'_{12}| \geq |\phi'_{3L}| $. 
Constructing for some $0<a<x_2-x_1$ the same translated functions $\bar u$, $\bar v$, and $\widetilde \phi$ as given in (\[def:overline\_u\]–\[def:widetilde\_phi\]), we have the analogous inequality $$\begin{aligned} \notag \int_0^L {\widetilde\phi'^2} &= \int_0^{x_1}{\phi'}^2 + \int_{x_1+a}^{x_3} {\phi'}^2 + {\phi'}_{3L}^2 (L-x_3+a) \\ \label{ineq:PER:phi1phi3} &\leq \int_0^{x_1}{\phi'}^2 + \int_{x_1+a}^{x_3} {\phi'}^2 + a{\phi'}_{12}^2 + {\phi'}_{3L}^2(L-x_3)\\ &= \int_0^L {\phi'}^2. \notag\end{aligned}$$ Although $\widetilde\phi$ satisfies $-{\widetilde \phi}'' = \bar u - \bar v$ on $(0,L)$, the function $\widetilde \phi$ can in general not be extended periodically, i.e. $\widetilde\phi(0)\not=\widetilde\phi(L)$. To correct this we define $$\bar \phi(x) := \widetilde\phi(x) - \frac xL (\widetilde\phi(L)-\widetilde\phi(0)),$$ so that the function $\bar \phi$ solves $-{\bar \phi}'' = \bar u - \bar v$ on $(0,L)$, is continuously differentiable on $(0, L)$, and satisfies $\bar \phi (0) = \bar \phi(L)$. From $$\bar\phi'(L)-\bar\phi'(0) = \int_0^L \bar \phi'' = \int_0^L (\bar v-\bar u) = 0,$$ we conclude $\bar\phi'(0) = \bar\phi'(L)$, so that $\bar \phi$ is the Poisson potential on ${\ensuremath{\mathbb{T}}}_L$ associated with $\bar u$ and $\bar v$. In addition, $$\begin{aligned} \notag \int_0^L \bar \phi'^2 &= \int_0^L {\widetilde\phi}'^2 - \frac2L(\widetilde\phi(L)-\widetilde\phi(0)) \int_0^L \widetilde\phi' + \frac1L(\widetilde\phi(L)-\widetilde\phi(0))^2 \\ &= \int_0^L {\widetilde\phi}'^2 - \frac1L(\widetilde\phi(L)-\widetilde\phi(0))^2\notag\\ &\leq \int_0^L {\widetilde\phi}'^2. \label{ineq:averaging}\end{aligned}$$ From these two inequalities it follows as in the proof of Theorem \[th:exist\_real\_line\] that $F_1(\bar u,\bar v)\leq F_1(u,v)$, so that existence of any global minimiser again implies the existence of a connected global minimiser. We now turn to the discussion of the necessary and sufficient conditions for non-uniqueness.
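As an aside, the subtraction of the linear function $\frac xL(\widetilde\phi(L)-\widetilde\phi(0))$ above is an orthogonal projection onto periodic potentials, which is why the Dirichlet energy drops by exactly $\frac1L(\widetilde\phi(L)-\widetilde\phi(0))^2$. A quick numerical check of this identity (the sampled $\psi'$ is an arbitrary non-periodic choice, not from the paper):

```python
import math

L = 2.0
n = 20000
dx = L / n

# samples of an arbitrary test derivative psi' on [0, L]
psi_p = [math.exp(-i * dx) + 0.3 * math.cos(3 * i * dx) for i in range(n)]
delta = sum(psi_p) * dx                       # delta = psi(L) - psi(0)

# Dirichlet energy after subtracting the linear interpolant x*delta/L
lhs = sum((p - delta / L) ** 2 for p in psi_p) * dx
rhs = sum(p * p for p in psi_p) * dx - delta ** 2 / L
print(lhs, rhs)                               # identical up to rounding
```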
Again we use the fact that inequalities are saturated to deduce necessary conditions; in this case, however, there is an additional inequality in . The reasoning proceeds in two steps. **Step 1: Take $a<x_2-x_1$.** When $a<x_2-x_1$ no interfaces are created, annihilated or changed, and we only need to consider the inequalities in  and . Since these are saturated, the following conditions hold: 1. $|\phi'_{12}| = |\phi'_{3L}|$, and \[item:saturation1\] 2. $\widetilde\phi(L)=\widetilde\phi(0)$.\[item:saturation2\] We first calculate $$\begin{aligned} \widetilde\phi(L)-\widetilde\phi(0) &= \phi(x_3)-\phi(x_1+a)+\phi(x_1) + \phi'_{3L}(L-x_3+a) - \phi(0) \\ &= -\phi(x_1+a)+\phi(x_1) + a \phi'_{3L} \\ &= a (\phi'_{3L} - \phi'_{12}).\end{aligned}$$ By condition \[item:saturation2\] above we have $\phi'_{12}=\phi'_{3L}$, which is also consistent with condition \[item:saturation1\]. We now claim that $\phi'_{12} = \phi'_{3L} = 0$. Suppose not, say (for concreteness) $\phi'_{12}=\phi'_{3L}>0$; then $\phi(x_1) < \phi(x_2)$ and $\phi (x_3) < \phi(L)$. Since for a stationary point the potential $\phi$ has the same value at all U-0 type transitions and the same value at all V-0 type transitions, the two transitions at $x_1$ and at $x_2$ are of different type, thus one is a U-0 type transition and the other a V-0 type transition. The same is true for $x_3$ and $L$ (or $0$). Moreover, the transitions at $x_1$ and $x_3$ are of the same type: each is the transition of its pair at which $\phi$ takes the smaller of the two values. Therefore $\phi (x_1) =\phi(x_3)$.
For any fixed $a$ in the interval $(0,x_2-x_1)$, however, we have now constructed a second global minimiser $(\bar u,\bar v)$—and therefore a second stationary point—for which $\phi(x_1) \neq \phi(x_3-a)$, since $\bar \phi(x_1) = \widetilde\phi(x_1) = \phi (x_1)$ and $$\begin{aligned} \bar\phi(x_3-a) =\widetilde\phi(x_3-a) &= \phi(x_3) - \phi(x_1+a)+\phi(x_1) \\ &= \phi(x_1) - a\phi'_{12}\\ &< \phi(x_1).\end{aligned}$$ Since the interfaces of $(\bar u, \bar v)$ at $x_1$ and $x_3-a$ are of the same type, this contradicts the stationarity of this second minimiser, and we conclude that $\phi'_{12}= \phi'_{3L} = 0$. Note that since the intervals $ (x_1,x_2)$ and $(x_3,L)$ were chosen as arbitrary connected components of ${\ensuremath{\mathbb{T}}}_L\setminus\operatorname{supp}(u+v)$, this implies that $\phi'$ vanishes on the whole of $ {\ensuremath{\mathbb{T}}}_L\setminus\operatorname{supp}(u+v)$. **Step 2: Take $a=x_2-x_1$.** Non-uniqueness in this case implies that also the interfacial energy remains the same in the construction of $(\bar u, \bar v)$. As in the case of ${{{\ensuremath{\mathbb{R}}}}}$, the interfaces at $x_1$ and $x_2$ that are joined together in the construction of $ (\bar u, \bar v)$ are of the same type, *i.e.* either both U-0 type or both V-0 type transitions. The fact that $\phi$ is constant on 0-blocks is used in this argument. We conclude that either $d_{u0}=0$ or $d_{v0}=0$, and that a connected global minimiser exists with at least one internal U-block (if $d_{u0}=0$) or at least one internal V-block (if $d_{v0}=0$). This proves the necessity of condition \[th:periodic:cond\]. The sufficiency of condition \[th:periodic:cond\] follows by splitting one of the internal blocks, as in the case of ${{{\ensuremath{\mathbb{R}}}}}$. Apart from the simplifying fact that the torus is bounded, the proof of existence of a global minimiser is identical to the case of ${{{\ensuremath{\mathbb{R}}}}}$. 
Note that the proof of existence of a global minimiser generalises straightforwardly to the higher dimensional case of the torus ${\ensuremath{\mathbb{T}}}_L^N$, because the torus is bounded. On the unbounded domain ${{{\ensuremath{\mathbb{R}}}}}^N$, $N\geq 2$, the above proof does not suffice. Explicit values and a lower bound {#sec:lower_bnd_1d} --------------------------------- We now focus again on functions on ${\ensuremath{\mathbb{R}}}$. The results of the previous sections allow us to calculate global minima of the energy $F_1(u,v)$ as a function of the mass $M=\int u$. Two important special cases are the monolayer and the bilayer. A *monolayer* consists of a single U- and a single V-block, of equal width $m$, where $m$ is the mass of $u$ or $v$, i.e. positioning the block around the origin for convenience, $$u(x) = \chi^{}_{(-m, 0)} \quad\text{and}\quad v(x) = \chi^{}_{(0, m)},$$ where $\chi^{}_A$ is the characteristic function of the set $A$. We then find for the derivative of the Poisson potential $$\phi'(x) = \begin{cases} 0 & \text{for $x< -m$}\\ |x| -m & \text{for $-m<x<m$}\\ 0 & \text{for $x>m$}, \end{cases}$$ The total energy then becomes $$\text{monolayer of mass $M=m$:} \qquad F_1 = 2(c_0+c_u+c_v) + \frac23 m^3.$$ Note the definition of mass: a monolayer of mass $M$ means that $\int u = \int v = M$, and therefore that the ‘total’ mass of the monolayer $\int (u+v)$ equals $2M$. In this case the mass $M$ of the monolayer equals the width $m$ of each of the blocks. A *bilayer* consists of two monolayers joined back-to-back. It comes in two varieties, as UVU and as VUV. For a UVU bilayer of mass $M=2m$, given by $$u(x) = \chi^{}_{(-2m, -m) \cup (m, 2m)} \quad\text{and}\quad v(x) = \chi^{}_{(-m, m)},$$ the derivative of the Poisson potential is $$\phi'(x) = \begin{cases} 0 & \text{for $x< -2m$}\\ -2m - x & \text{for $-2m<x<-m$}\\ x & \text{for $-m<x<m$}\\ 2m - x & \text{for $m<x<2m$}\\ 0& \text{for $x>2m$}. 
\end{cases}$$ The energy has the value $$\text{UVU bilayer of mass $M=2m$}: \qquad F_1 = 2c_0+4c_u + 2c_v + \frac43 m^3,$$ For a VUV bilayer the situation is of course analogous: $$\text{VUV bilayer of mass $M=2m$}: \qquad F_1 = 2c_0+2c_u + 4c_v + \frac43 m^3.$$ Similarly, *$n$-layered structures* consisting of $n$ monolayers back-to-back, have energy $$\begin{aligned} \label{explicit_energy_multilayer_1} \text{VUVU\ldots V $n$-monolayer with mass $M=n m$}: \qquad F_1 &= 2c_0+nc_u + (n+2)c_v + \frac{2n}3 m^3 \\ &= 2d_{v0} + n d_{uv} + \frac{2n}3 m^3, \label{explicit_energy_multilayer_1a}\\ \label{explicit_energy_multilayer_2} \text{VUVU\ldots U $n$-monolayer with mass $M=n m$}: \qquad F_1 &= d_{u0} + d_{v0} + n d_{uv} + \frac{2n}3 m^3,\\ \label{explicit_energy_multilayer_3} \text{UVUV\ldots U $n$-monolayer with mass $M=n m$}: \qquad F_1 &= 2d_{u0} + n d_{uv} + \frac{2n}3 m^3.\end{aligned}$$ Note that for a VUVU…V $n$-monolayer or UVUV…U $n$-monolayer the value of $n$ is even, while for a VUVU…U $n$-monolayer it is odd. Furthermore $m$ is the U-mass in one monolayer, thus $m$ is the width of the outer blocks, from which we see that the width of the inner blocks is $2m$. By collecting these results we find: \[th:lower\_bound\_1d\] Let $N=1$. For any structure of mass $M$, $$\label{bound:lower_bound_1d} F_1 \geq 2(c_0+\min(c_u,c_v)) + \left(\frac92\right)^{1/3}d_{uv}^{2/3} M.$$ In the limit of large mass, $$\label{limit:Mtoinfty} \lim_{M\to\infty} \inf\left\{ \frac{F_1(u,v)}M: (u,v)\in K_1,\ \int_{{{\ensuremath{\mathbb{R}}}}}u = M\right\} = \left(\frac92\right)^{1/3}d_{uv}^{2/3}.$$ If $d_{uv}=0$, then the first statement is easily checked and the second follows from the example of Section \[subsec:d\_12\]. We continue under the assumption that $d_{uv}>0$. Let $(u, v)$ be a global minimiser with connected $\operatorname{supp}(u+v)$, which exists according to Theorem \[th:exist\_real\_line\]. 
Note that for all three cases of structures (VUVU…V, VUVU…U, and UVUV…U) the interfacial terms are bounded from below by $2(c_0 + \min(c_u,c_v))$, so that $$F_1 \geq 2(c_0 + \min(c_u,c_v)) + nd_{uv}+ \frac2{3n^2} M^3.$$ Minimising this with respect to $n$ gives the desired lower bound. The particular value of $n$ for which the lower bound is achieved, $$n_0(M)^3:= \frac43 \,\frac{M^3}{d_{uv}},$$ will be useful below. To prove the second part of the theorem, we note that (\[explicit\_energy\_multilayer\_1a\]-\[explicit\_energy\_multilayer\_3\]) imply the upper bound $$\label{ineq:upperboud_F1_M} \inf \left\{\frac{F_1(u,v)}M: (u,v)\in K_1, \int u = M\right \} \leq \frac2M\max\{d_{u0},d_{v0}\} + \inf_{n\in{{{\ensuremath{\mathbb{N}}}}}} \left\{ \frac nM d_{uv} + \frac{2}{3} \left(\frac Mn\right)^2 \right\}.$$ Choosing the largest integer smaller than or equal to $n_0(M)$ as the particular value of $n$, $$n(M) := \left\lfloor \sqrt[3]{\frac43\, \frac{M^3}{d_{uv}}}\right\rfloor = \lfloor n_0(M) \rfloor,$$ we have $n_0(M)-1 < n(M)\leq n_0(M)$. In the limit $M\to\infty$ the quotient $n(M)/M$ therefore converges to $\bigl(4/(3d_{uv})\bigr)^{1/3}$; with this convergence the inequality  implies . In Figure \[fig:blocks\_asymp\] the graphs depicting the energy per unit mass for VUVU…V configurations consisting of different numbers of monolayers are shown, for some specific parameter values. The lower bound from Theorem \[th:lower\_bound\_1d\] is indicated as well. ![Energy per unit mass for the one-dimensional case, according to the calculations in Section \[sec:lower\_bnd\_1d\]. $M$ is the total U-mass; for the parameters the values $d_{u0} = 1, d_{uv} = 0.6$ and $d_{v0} = 0.4$ are chosen. All the graphs belong to a VUVU…V $n$-monolayer structure, where $n/2$ increases from $1$ (left) to $20$ (right) with step size $1$.
Also drawn are the (dashed) lower bound LB, and the asymptote $2^{-\frac13} 3^{\frac23} d_{uv}^{\frac23} \approx 1.17446$.[]{data-label="fig:blocks_asymp"}](asymp "fig:"){width="110mm"}\ \[rem:convergenceofwidth\] Minimising $F_1/M$ from (\[explicit\_energy\_multilayer\_1\]–\[explicit\_energy\_multilayer\_3\]) with respect to $m$, we find the minimising value of $m$, $$m_0^3(n) := \frac{3(k_1 d_{u0}+k_2 d_{v0}+n d_{uv})}{4n},$$ where, depending on the configuration, $k_1=0, k_2=2$ (\[explicit\_energy\_multilayer\_1a\]), $k_1=k_2=1$ (\[explicit\_energy\_multilayer\_2\]) or $k_1=2, k_2=0$ (\[explicit\_energy\_multilayer\_3\]). In all three cases we find that in the limit $n\to\infty$, or equivalently (for $m$ fixed) $M\to\infty$, the width of the inner blocks converges to $$2 \lim_{n\to\infty} m_0(n) = 6^{1/3} d_{uv}^{1/3}.$$ Note that in [@Mueller93] and [@RenWei03a] it is found that one-dimensional minimisers for the functionals under consideration in those papers are periodic with period $\sim (\text{surface tension})^{1/3}$. (In these diffuse interface functionals the surface tension coefficients are given by integrating the square root of the potential.)

Higher dimensions {#sec:scaling}
=================

In this section we derive bounds on the energy of minimisers in terms of the mass $M$. The first result, Theorem \[lem:umassineqs\], shows that the minimal energy has a lower bound that scales linearly in mass in the limit $M\to\infty$. This is an extension of the one-dimensional lower bound (\[bound:lower\_bound\_1d\]), but with a smaller constant. A simple argument immediately gives an upper bound on the minimal energy at given mass: fixing any structure of unit mass, a candidate structure at mass $M\in {{{\ensuremath{\mathbb{N}}}}}$ can be obtained by distributing $M$ copies of the unit-mass structure over ${{{\ensuremath{\mathbb{R}}}}}^N$. The energy of the resulting structure equals $M$ times the energy of the unit-mass structure.
This construction can be extended to non-integer mass $M$ by spatially stretching a structure of integer mass close to $M$. In the limit $M\to\infty$ the resulting perturbation of the energy is small. In Sections \[sec:upper\_bound\] and \[subsec:examples\] we therefore provide tighter upper bounds, by constructing $N$-dimensional structures out of near-optimal $k$-dimensional ones, with $k<N$.

Lower bound {#subsec:lower_bound}
-----------

For this section we pick a function $\kappa\in C_c^\infty({{{\ensuremath{\mathbb{R}}}}}^N)$, non-negative and radially symmetric, such that $$\int_{{\ensuremath{\mathbb{R}}}^N} \kappa = 1.$$ For $\epsilon > 0$ we now define $$\kappa_{\epsilon}(x) := \frac{1}{\epsilon^N} \kappa(x/\epsilon).$$ Note that $\int_{{\ensuremath{\mathbb{R}}}^N}\kappa_{\epsilon} = 1$ for all $\epsilon$. In the following we will use the constant $A_N$, defined as $$A_N := \dashint_{S^{N-1}} |e \cdot w| \,d\mathcal{H}^{N-1}(w),$$ where $S^{N-1}$ is the $(N-1)$-dimensional unit sphere and $e$ is any element of $S^{N-1}$. This definition is independent of the choice of $e$, because the integration is over all of $S^{N-1}$. The central result is an interpolation inequality between the $BV$-seminorm and $H^{-1}$. In spirit, and in its application, it is similar to Lemma 2.1 of [@Choksi01]. The proof is different, however, and uses an argument of [@KohnOtto02], in combination with the characterisation of $BV$ by [@Davila02]. \[lem:umassineqs\] Let $d_{uv}\neq 0$.
For all $(u,v)\in K_1$, $$\label{eq:inequmass1} \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u \leq C_1(\kappa, N) \|u - v \|_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^N)}^{\frac{2}{3}} \left( \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u| \right)^{\frac{2}{3}},$$ where $C_1(\kappa, N) > 0$ is given by $$C_1(\kappa, N) := 2^{\frac43} A_N^{\frac23} \left(\int_{{\ensuremath{\mathbb{R}}}^N} |\nabla \kappa|\right)^{\frac23} \left(\int_{{\ensuremath{\mathbb{R}}}^N} |y| \kappa(y)\,dy\right)^{\frac23}.$$ The inequality (\[eq:inequmass1\]) also holds with $u$ and $v$ interchanged. Furthermore, $$\label{est:lower_bound_nD} F_1(u,v) \geq C_2(\kappa,N) \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u,$$ where $$C_2(\kappa,N) := \frac32 C_1(\kappa,N)^{-1} \left( c_u^{2/3}+c_v^{2/3}\right).$$ If $\int_{{{{\ensuremath{\mathbb{R}}}}}^N}u=0$ the statements are trivially true. In what follows we assume $\int_{{{{\ensuremath{\mathbb{R}}}}}^N}u>0$. First note that $\kappa_\epsilon\ast u \in H_0^1\left({{{\ensuremath{\mathbb{R}}}}}^N\right)$.
From $uv=0$ it follows that $v\leq 1 -u$, so that $$\int_{{{{\ensuremath{\mathbb{R}}}}}^N} (u - v) \kappa_{\epsilon} \ast u \geq \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \left(2 u - 1\right) \kappa_{\epsilon} \ast u = 2 \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u \kappa_{\epsilon} \ast u - \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u.$$ Writing $$\begin{aligned} 2 \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u \kappa_{\epsilon} \ast u &= 2\int_{{{{\ensuremath{\mathbb{R}}}}}^N}\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u(x)u(y)\kappa_\e(x-y)\, dxdy \\ &= -\int_{{{{\ensuremath{\mathbb{R}}}}}^N}\int_{{{{\ensuremath{\mathbb{R}}}}}^N} (u(x)-u(y))^2 \kappa_\e(x-y) \, dxdy + 2\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u^2\\ &= - \int_{{{{\ensuremath{\mathbb{R}}}}}^N}\int_{{{{\ensuremath{\mathbb{R}}}}}^N} |u(x)-u(y)| \kappa_\e(x-y) \, dxdy + 2 \int_{{{{\ensuremath{\mathbb{R}}}}}^N} u,\end{aligned}$$ where the last step uses that $u$ takes only the values $0$ and $1$, we have $$\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u \leq \int_{{{{\ensuremath{\mathbb{R}}}}}^N} (u - v) \kappa_{\epsilon} \ast u + \int_{{{{\ensuremath{\mathbb{R}}}}}^N}\int_{{{{\ensuremath{\mathbb{R}}}}}^N} |u(x)-u(y)| \kappa_\e(x-y) \, dxdy.$$ The first term on the right-hand side is estimated by combining the definition of the $H^{-1}$-norm, $$\int_{{{{\ensuremath{\mathbb{R}}}}}^N} (u - v) \kappa_{\epsilon} \ast u \leq \|u -v \|_{H^{-1}} \|\nabla \kappa_{\epsilon} \ast u \|_{L^2},$$ with the estimate (Young’s inequality [@Adams75 Theorem 4.30]) $$\| \nabla \kappa_{\epsilon} \ast u \|_{L^2} \leq \| u \|_{L^2} \int_{{\ensuremath{\mathbb{R}}}^N} |\nabla \kappa_{\epsilon}| = \|u\|_{L^1}^{\frac12} \int_{{\ensuremath{\mathbb{R}}}^N} |\nabla \kappa_{\epsilon}| = \epsilon^{-1} \|u\|_{L^1}^{\frac12} \int_{{\ensuremath{\mathbb{R}}}^N} |\nabla \kappa|.$$ For the second term we use a density argument as in [@Davila02 proof of Lemma 3] to find $$\begin{aligned} \lefteqn{ \int_{{{{\ensuremath{\mathbb{R}}}}}^N}\int_{{{{\ensuremath{\mathbb{R}}}}}^N} |u(x)-u(y)| \kappa_\e(x-y) \, dxdy \leq {}} \qquad & \\ &\leq \e
\int_{{{{\ensuremath{\mathbb{R}}}}}^N} \int_{{\ensuremath{\mathbb{R}}}^N} \int_0^1 \left|\nabla u(t y + (1 - t) x) \frac{(y - x)}{|y - x|} \right| \frac{|y - x|}{\epsilon} \kappa_{\epsilon}(x - y) \, dt \, dy \, dx\\ &= \e\int_{{\ensuremath{\mathbb{R}}}^N} \int_0^1\int_{{{{\ensuremath{\mathbb{R}}}}}^N}\left|\nabla u(x + t h) \cdot \frac{h}{|h|} \right| \frac{|h|}{\epsilon} \kappa_{\epsilon}(h) \, dx\, dt \, dh\\ &= \e \int_{{\ensuremath{\mathbb{R}}}^N} \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \left|\nabla u(z) \cdot \frac{h}{|h|} \right| \frac{|h|}{\epsilon} \kappa_{\epsilon}(h) \, dz \, dh\\ &= \e \int_0^{\infty} \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \int_{S^{N-1}} |\nabla u(z) \cdot w|\, d\mathcal{H}^{N-1}(w)\, r^{N-1} \frac r{\epsilon} \kappa_{\epsilon}(r) \, dz \, dr\\ &= \e A_N \, \mathcal{H}^{N-1}\left(S^{N-1}\right)\int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u(z)| \, dz\, \int_0^{\infty} \frac{r^N}{\epsilon} \kappa_{\epsilon}(r) \, dr\\ &= \e A_N \int_{{\ensuremath{\mathbb{R}}}^N} |y| \kappa(y) \, dy \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u(z)| \, dz.\end{aligned}$$ The first equality follows after substituting $y = x+h$, while the substitution $x = z - th$ leads to the second equality. Collecting the parts we find the estimate $$\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u \leq \epsilon^{-1} \|u - v\|_{H^{-1}} \left(\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u\right)^{\frac12} \int_{{\ensuremath{\mathbb{R}}}^N}|\nabla \kappa| + \epsilon C_0(\kappa, N) \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u|,$$ where $$C_0(\kappa, N) := A_N \int_{{\ensuremath{\mathbb{R}}}^N} |y| \kappa(y) \, dy.$$ Minimising the right-hand side with respect to $\epsilon$ we find $$\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u \leq 2 \left[C_0(\kappa, N) \int_{{\ensuremath{\mathbb{R}}}^N} |\nabla \kappa| \left(\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u\right)^{\frac12} \|u-v\|_{H^{-1}} \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u|\right]^{\frac12}.$$ Dividing both sides by $\left(\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u\right)^{\frac14}$ and then raising both sides to the power $4/3$ gives the first statement of the theorem. Since we have used no property that distinguishes $u$ from $v$, we can apply the same argument with $u$ and $v$ interchanged. To prove the inequality (\[est:lower\_bound\_nD\]), we remark that from (\[eq:inequmass1\]) and Young’s inequality we obtain, for any $\alpha, \beta>0$, $$\begin{aligned} C_1^{-1} \int u &\leq \frac {2\alpha}3 \int |\nabla u| + \frac1{3\alpha^2} \|u-v\|_{H^{-1}}^2,\\ C_1^{-1} \int u &\leq \frac {2\beta}3 \int |\nabla v| + \frac1{3\beta^2} \|u-v\|_{H^{-1}}^2.\end{aligned}$$ By choosing $$\alpha := c_u^{1/3} \qquad\text{and}\qquad \beta := c_v^{1/3},$$ and then adding the two inequalities with weights $\alpha^2$ and $\beta^2$ respectively, estimate (\[est:lower\_bound\_nD\]) follows. Note from the proof above that estimate (\[est:lower\_bound\_nD\]) is not sharp if $\int_{{{{\ensuremath{\mathbb{R}}}}}^N}u>0$. Inequality (\[est:lower\_bound\_nD\]) does not hold in the case where $d_{uv} = 0$. The same sequence $(u_n,v_n)$ that was introduced in Section \[subsec:d\_12\] demonstrates this fact, since $F_1(u_n,v_n)\to 2$ while $\int u_n \to\infty$.

Upper bound {#sec:upper_bound}
-----------

We next show that the one-dimensional upper bound (\[ineq:upperboud\_F1\_M\]) (or (\[limit:Mtoinfty\])) also holds in higher dimensions, as a consequence of the more general statement below. Theorem \[th:cutoff\] formalises the intuitive idea that extending a one-dimensional minimiser in the other directions, and then cutting off the resulting planar structure at some large distance, should result in an $N$-dimensional structure whose energy-to-mass ratio is close to that of the original one-dimensional structure. We formulate the result for $k$-dimensional structures that are embedded in $N$ dimensions. Let $1\leq k\leq N-1$, and let us write $K_{1,k}$ for the admissible set $K_1$ on ${{{\ensuremath{\mathbb{R}}}}}^k$.
Let $(\overline u , \overline v)$ be

- any element of $K_{1,k}$, when $k\geq 3$; or

- any element of $K_{1,k}$ with $\int_{{{{\ensuremath{\mathbb{R}}}}}^k} x(\overline u(x)-\overline v(x))\, dx = 0$, when $k\in\{1,2\}$.

(We explain this restriction in Remark \[rem:firstmoment\].) Split vectors $x \in {{{\ensuremath{\mathbb{R}}}}}^N$ into two parts, $x=(\xi, \eta)\in{{{\ensuremath{\mathbb{R}}}}}^k\times {{{\ensuremath{\mathbb{R}}}}}^{N-k}$, and define a cutoff function $\chi_a:{{{\ensuremath{\mathbb{R}}}}}^{N-k}\to[0,1]$ by $$\chi_a(\eta) := \chi(|\eta|-a),$$ where $\chi:{{{\ensuremath{\mathbb{R}}}}}\to[0,1]$ is fixed, smooth, and satisfies $\chi(x)=1$ for $x\leq0$, $\chi(x) = 0$ for $x\geq1$. We will compare the energy values of the $k$-dimensional structure $(\overline u,\overline v)$ with those of the $N$-dimensional structure $$\label{eq:extension} (u,v)(x) := (\overline u,\overline v)(\xi)\chi_a(\eta).$$ Note that this $(u,v)$ is an element of $K_{1,N}$, the admissible set $K_1$ on ${{{\ensuremath{\mathbb{R}}}}}^N$. A note on notation: $\omega_d$ will denote the $d$-dimensional Lebesgue measure of the $d$-dimensional unit ball. \[th:cutoff\] Fix $(\overline u,\overline v)$ as given above.
Then, for $(u,v)$ as defined in (\[eq:extension\]), $$\frac{F_1(u,v)}{\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u} = \frac{F_1(\overline u,\overline v)}{\int_{{{{\ensuremath{\mathbb{R}}}}}^k} \overline u} + O(1/a) \qquad\text{as }a \to \infty.$$ We first estimate the interfacial terms as follows: $$\begin{aligned} \notag \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u| &= \int_{{{{\ensuremath{\mathbb{R}}}}}^k} \int_{{{{\ensuremath{\mathbb{R}}}}}^{N-k}}|\nabla\overline u(\xi)|\, \chi^{}_a(\eta) \, d\eta d\xi + \int_{{{{\ensuremath{\mathbb{R}}}}}^k} \int_{{{{\ensuremath{\mathbb{R}}}}}^{N-k}} \overline u(\xi)|\nabla \chi^{}_a(\eta)| \, d\eta d\xi \\ &\begin{cases} \;\leq\; \ds\omega_{N-k} (a+1)^{N-k} \int_{{{{\ensuremath{\mathbb{R}}}}}^k} |\nabla \overline u| + (N-k)\omega_{N-k} (a+1)^{N-k-1} \|\chi'\|_\infty \int_{{{{\ensuremath{\mathbb{R}}}}}^k} \overline u,\\ \;\geq\; \ds\omega_{N-k} a^{N-k} \int_{{{{\ensuremath{\mathbb{R}}}}}^k} |\nabla \overline u|, \end{cases} \label{ineq:0u}\end{aligned}$$ and therefore $$\int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla u| = (1+O(1/a))\omega_{N-k} a^{N-k} \int_{{{{\ensuremath{\mathbb{R}}}}}^k} |\nabla \overline u|\qquad\text{as }a \to \infty.$$ Similarly, $$\begin{aligned} \label{ineq:0v} \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla v| &= (1+O(1/a))\omega_{N-k} a^{N-k}\int_{{{{\ensuremath{\mathbb{R}}}}}^k} |\nabla \overline v| \qquad \text{and}\\ \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla (u+v)| &= (1+O(1/a))\omega_{N-k} a^{N-k}\int_{{{{\ensuremath{\mathbb{R}}}}}^k} |\nabla (\overline u+\overline v)| \label{ineq:0uv}\end{aligned}$$ The estimate of the $H^{-1}$-norm is formulated in Theorem \[thm:HmO-estimate\]. 
The result now follows by combining the estimates (\[ineq:0u\]–\[ineq:periodic\_and\_not\]) and remarking that the mass of $(u,v)$ is given by $$\int_{{{{\ensuremath{\mathbb{R}}}}}^N} u = \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \overline u(\xi)\chi_a(\eta) = (1+O(1/a))\omega_{N-k}a^{N-k}\int_{{{{\ensuremath{\mathbb{R}}}}}^k} \overline u(\xi).$$ \[thm:HmO-estimate\] Under the conditions above there exists a constant $C=C(k,N)$ such that for all $a>0$, $$\label{ineq:periodic_and_not} \Bigl|\|u-v\|^2_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^N)} - \omega_{N-k} a^{N-k}\|\overline u-\overline v\|^2_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^k)}\Bigr| \leq Ca^{N-k-1}\int_{{{{\ensuremath{\mathbb{R}}}}}^k} \bigl[|\nabla \overline\phi|^2+\overline\phi^2\bigr].$$ Here $\overline\phi$ is a $k$-dimensional Poisson potential associated with $(\overline u, \overline v)$. \[rem:firstmoment\] The restriction of vanishing first moments for $k=1,2$ follows directly from the requirement that $\int_{{{{\ensuremath{\mathbb{R}}}}}^k}\overline\phi^2$ can be chosen finite in (\[ineq:periodic\_and\_not\]). Since the integral of $\overline u-\overline v$ vanishes, the potential $\overline \phi := G*(\overline u - \overline v)$ decays to zero at least as fast as $|\xi|^{1-k}$, as can be seen from the multipole expansion of $\overline \phi$ (see [@HohlfeldKingDruedingSandri93]). For dimensions $k\geq 3$ it follows that $\int \overline \phi^2$ is finite; but for $k=1,2$ a higher decay rate is necessary, which we provide by requiring an additional vanishing moment. The case $k=1$ is special: the vanishing of the zeroth and first moments implies that $\overline\phi := G*(\overline u - \overline v)$ is zero in a neighbourhood of infinity.
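The $k=1$ case of the remark above can be illustrated numerically. The sketch below uses an illustrative density of our own choosing (not taken from the text): $u-v$ has vanishing zeroth and first moments, and its potential $G*(u-v)$, with the one-dimensional Newtonian kernel $G(s)=-|s|/2$, indeed vanishes away from the support.

```python
import numpy as np

# Illustrative 1D density f = u - v with u = 1 on [-2,-1] and [1,2],
# v = 1 on (-1,1); both the mass and the first moment of f vanish.
y = np.linspace(-3.0, 3.0, 60001)
dy = y[1] - y[0]
u = ((np.abs(y) >= 1.0) & (np.abs(y) <= 2.0)).astype(float)
v = (np.abs(y) < 1.0).astype(float)
f = u - v

def potential(x):
    # phi(x) = -(1/2) * integral of |x - y| f(y) dy, so that -phi'' = f
    return np.sum(-0.5 * np.abs(x - y) * f) * dy

print(potential(5.0), potential(-7.0))  # ~ 0: the potential is compactly supported
print(potential(0.0))                   # ~ -1: nonzero inside the structure
```

For $x$ beyond the support, $|x-y|$ is affine in $y$, so the integral reduces to a combination of the two vanishing moments; this is precisely why the potential is compactly supported for $k=1$.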
The Poisson potential $\phi$ associated with $(u,v)$ satisfies $$-\Delta \phi(x) = u(x)-v(x) = (\overline u(\xi)-\overline v(\xi))\chi_a(\eta) \qquad\text{for }x=(\xi,\eta)\in{{{\ensuremath{\mathbb{R}}}}}^N.$$ Similarly, the $k$-dimensional potential $\overline \phi$ associated with $ (\overline u, \overline v)$ satisfies $$-\Delta_\xi\overline\phi(\xi) = \overline u(\xi)-\overline v(\xi) \qquad\text{for } \xi\in{{{\ensuremath{\mathbb{R}}}}}^k.$$ We write $\nabla_\xi$ for the part of the gradient that operates on $\xi$, that is $(\partial_{x_1},\partial_{x_2},\dots,\partial_{x_k},0,\dots,0)$, and we use a similar notation for the other part of the gradient $\nabla_\eta$ and the partial Laplacians $\Delta_\xi$ and $\Delta_\eta$. Remarking that $$\begin{aligned} \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a(\eta) \nabla \phi(x)\cdot \nabla \overline \phi(\xi)\, dx &= \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a(\eta) \nabla_{\xi} \phi (x) \cdot \nabla_\xi \overline \phi(\xi)\, dx = -\int_{{{{\ensuremath{\mathbb{R}}}}}^N}\chi_a(\eta) \phi(x) \Delta_\xi \overline\phi(\xi) \, dx\\ &= \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a(\eta) \phi(x) (\overline u(\xi)-\overline v(\xi))\, dx = -\int_{{{{\ensuremath{\mathbb{R}}}}}^N}\phi(x)\Delta \phi(x) \, dx\\ &= \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla \phi(x)|^2 \, dx,\end{aligned}$$ we calculate $$\begin{aligned} \notag \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla\phi - \chi_a \nabla \overline \phi|^2 &= \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla \phi|^2 - 2\int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a\nabla\phi\nabla\overline \phi + \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a^2 |\nabla\overline\phi|^2\\ &= - \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla \phi|^2 + \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a^2 |\nabla\overline\phi| ^2. 
\label{eq:discrepancy}\end{aligned}$$ One inequality relating the two norms can be deduced directly: $$\|u-v\|_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^N)}^2 = \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla \phi|^2 \leq \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a^2 |\nabla\overline\phi|^2 \leq \omega_{N-k} (a+1)^{N-k} \|\overline u - \overline v\|^2_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^k)}.$$ For the opposite inequality we set $$\psi(x) := \phi(x) - \overline\phi(\xi)\chi_a(\eta),$$ and rewrite $$\begin{aligned} \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla\phi - \chi_a \nabla \overline \phi|^2 &= \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla\psi|^2 + 2\int_{{{{\ensuremath{\mathbb{R}}}}}^N} \overline \phi \nabla_\eta\psi\nabla_\eta\chi_a + \int_{{{{\ensuremath{\mathbb{R}}}}}^N}\overline \phi^2 |\nabla\chi_a|^2\\ &= \int_{{{{\ensuremath{\mathbb{R}}}}}^N} |\nabla\psi|^2 - 2\int_{{{{\ensuremath{\mathbb{R}}}}}^N} \psi\overline \phi\Delta_\eta\chi_a + \int_{{{{\ensuremath{\mathbb{R}}}}}^N}\overline \phi^2 |\nabla\chi_a|^2 \\ &=: I(\psi).\end{aligned}$$ Since $$-\Delta\psi = -\Delta \phi + \chi_a\Delta_\xi\overline\phi + \overline \phi \Delta_\eta\chi_a = \chi_a (\overline u-\overline v) - \chi_a(\overline u - \overline v) + \overline\phi\Delta_\eta\chi_a = \overline\phi\Delta_\eta\chi_a,$$ the function $\psi$ is the global minimiser of $I$, which is a convex functional of $\psi$. Therefore, setting $\psi_0(x) := \overline\phi(\xi) |\nabla\chi_a(\eta)|^2$, $$\begin{aligned} I(\psi) \leq I(\psi_0) &= \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \left[ |\nabla\overline\phi|^2 |\nabla\chi_a|^4 + 4\overline\phi^2 |D^2\chi_a\cdot\nabla\chi_a|^2 - 2\overline\phi^2 |\nabla\chi_a|^2 \Delta\chi_a + \overline \phi^2 |\nabla\chi_a|^2\right]\\ &\leq C(\chi)(a+1)^{N-k-1}\int_{{{{\ensuremath{\mathbb{R}}}}}^k} \left[ |\nabla\overline\phi|^2 + \overline \phi^2\right],\end{aligned}$$ where the constant in the last line depends on $\chi$ but can be chosen independent of $a$.
Combining this estimate with (\[eq:discrepancy\]) provides us with the opposite inequality, $$\begin{aligned} \omega_{N-k} a^{N-k}\|\overline u - \overline v\|^2_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^k)} &\leq \int_{{{{\ensuremath{\mathbb{R}}}}}^N} \chi_a^2 |\nabla\overline\phi|^2\notag\\ &\leq \|u-v\|_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^N)}^2 + \notag\\ & \qquad {}+ C(\chi)(a+1)^{N-k-1}\int_{{{{\ensuremath{\mathbb{R}}}}}^k} \left[ |\nabla\overline\phi|^2 + \overline \phi^2\right]. \label{ineq:chi}\end{aligned}$$ Summarising these two inequalities as $$\|u-v\|^2_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^N)} - \omega_{N-k} a^{N-k}\|\overline u-\overline v\|^2_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^k)} \begin{cases} \leq \omega_{N-k} \bigl((a+1)^{N-k}-a^{N-k}\bigr) \|\overline u-\overline v\|^2_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^k)} \\ \geq -C(\chi)(a+1)^{N-k-1}\int_{{{{\ensuremath{\mathbb{R}}}}}^k} \left[ |\nabla\overline\phi|^2 + \overline \phi^2\right], \end{cases}$$ we find the statement of the lemma.

Examples with prescribed morphology {#subsec:examples}
-----------------------------------

Theorem \[th:cutoff\] has the following consequence: when comparing energy-per-unit-mass of structures in dimension $N$, we can include the energy-per-unit-mass of structures in dimension $k<N$, up to a correction term that decays to zero in the limit of large mass. We now use this tool to investigate the energy values of various fixed-geometry structures.

- A one-dimensional, lamellar structure. The optimal energy-per-unit-mass is $\left(\frac92\right)^{1/3}d_{uv}^{2/3}$ (Theorem \[th:lower\_bound\_1d\], achieved in the limit of large mass).

- A *micelle* in $N$ dimensions, i.e.
a spherical particle described by $$\begin{aligned} u_m(x) &:= \left\{ \begin{array}{ll} 1 & \text{if } 0<|x|< R_1,\\ 0 & \text{otherwise},\end{array}\right.\\ v_m(x) &:= \left\{ \begin{array}{ll} 1 & \text{if } R_1< |x|< R_2,\\ 0 & \text{otherwise}.\end{array}\right.\end{aligned}$$ The equal-mass criterion implies that $R_2=2^{1/N}R_1$, and by optimising with respect to the remaining parameter $R_1$ we find that the optimal energy-per-unit-mass is (Theorem \[thm:energysphersymmono\]) $$3 \left(d_{uv} + d_{v0}\sqrt2\right)^{\frac23} \left(\log\,2 - \frac12\right)^{\frac13} \qquad\text{for $N=2$},$$ and $$\label{num:3d-micelle} 2^{-1/3}\,3N\bigl(d_{uv}+d_{v0}2^{1-1/N}\bigr)^{2/3} \left(\frac{N+2-N2^{2/N}}{N(N^2-4)}\right)^{1/3} \qquad\text{for $N\geq 3$}.$$ These optimal values are attained at *finite* mass. In both cases the micelle energies are larger than $\left(\frac92\right)^{1/3}d_{uv}^{2/3}$, even when $d_{v0}=0$, implying that for large $M$ lamellar structures have lower energy per unit mass than micelles.

- A $k$-dimensional micelle embedded in $N$-dimensional space, similarly to the case of lamellar structures in $N$ dimensions. A two-dimensional micelle thus becomes a cylindrical structure in three dimensions. The energy per unit mass of such a structure will be lower than that of an $N$-dimensional micelle, since (\[num:3d-micelle\]) is a strictly increasing function of $N$, but larger than that of a lamellar structure for large mass, by the conclusion of the previous point.

- A monolayer in the shape of a spherical shell as in Theorem \[thm:energysphersymmono\].
Here the optimal energy per unit mass can be found (in the limit of large radius $R$) by minimising (\[eq:genrad\]) with respect to $M$: $$\begin{aligned} &\left(\frac92\right)^{1/3}\left( d_{u0} + d_{uv} + d_{v0} \right)^{2/3} + (N - 1) \left( d_{v0} - d_{u0} \right) R^{-1} + \\ & \qquad\qquad {}+\left(\frac34\right)^{1/3}(N-1) \left(-\frac{3N-12}{20}(d_{u0}+d_{v0}) +\frac{3N-2}{20}d_{uv}\right) R^{-2} + \mathcal{O}(R^{-3}).\end{aligned}$$ Note that for $R\to\infty$ this value approaches the optimal one-dimensional value when $d_{u0} =d_{v0}=0$. (Although such a choice is ruled out by our assumptions on the coefficients, one may calculate the value of the energy in this case nonetheless.) In this case the limit value is approached from above. Alternatively, if either $d_{u0}$ or $d_{v0}$ is non-zero, then the limit value is larger than that of the optimal lamellar structures.

Among this list, therefore, the structures with lowest energy per unit mass are the lamellar structures. It seems natural to conjecture that global minimisers also resemble cut-off lamellar structures, and have comparable energy per unit mass. On the other hand, the results of the companion paper [@vanGennipPeletier07b] show that bilayer structures are unstable in a part of parameter space, and similarly Ren and Wei showed that for the pure diblock copolymer model ($u+v\equiv 1$) ‘wriggled lamellar’ solutions may have lower energy than straight ones [@RenWei05]. Determining the morphology of large-mass global minimisers is therefore very much an open question.

Discussion and conclusions
==========================

The results discussed in this paper provide an initial view on the properties of the energy $F_1$, and consequently on mixtures of block copolymers with homopolymers. The sharp-limit version of the more classical smooth-interface energy provides a useful simplification and provides us with tools that would otherwise be unavailable.
In one dimension we continued the work of Choksi and Ren and gave a complete characterisation of the structure of one-dimensional minimisers, both on ${{{\ensuremath{\mathbb{R}}}}}$ and on a periodic cell. In the multi-dimensional case we have proved upper and lower bounds for the energy of minimisers. These bounds both scale linearly with mass, but have different constants. The upper bound is derived from the one-dimensional minimisers, thanks to the cut-off estimate of Theorem \[th:cutoff\]; the results of the companion paper on the stability of mono- and bilayers [@vanGennipPeletier07b] suggest that for some parameter values this upper bound can be exact, while for others it is not. Similarly, the lower bound proved in Section \[subsec:lower\_bound\] has the right scaling in terms of mass, but the constant is not sharp. The sharpness of the estimates is especially relevant in relation to the issue of optimal morphology. A precise estimate of the energy level of energy minimisers may exclude large classes of morphologies and thus limit the possible morphology of energy minimisers. Since we lack such a sharp estimate, the question of the preferred morphology in multiple dimensions is still completely open. Part of this question is the behaviour of the morphology near the copolymer-homopolymer interface. For instance, if the preferred morphology is lamellar, does the lamellar orientation show a preference to be orthogonal, parallel, or otherwise aligned with respect to the interface? The experimental observations of, for instance, [@KoizumiHasegawaHashimoto94; @OhtaNonomura97; @ZhangJinMa05] show both orthogonal and parallel alignments. Other issues are those of the penalty incurred by certain macrodomain morphologies and defects (such as chevron or loop morphologies [@AdhikaryMichler04 Figures 11 and 14]). The large-mass limit for the functional $F_1$ is equivalent to a singular-limit process at fixed mass for the functional $F_\e$.
As discussed in Section \[subsec:partloc\] the results of [@PeletierRoeger06] suggest that for certain values of the $c_i$—to be precise, for those values for which bilayer structures are stable—the functional $F_\e$ may display similar, *partially localised* behaviour. The results from this paper have some interesting physical implications. Theorem \[th:cutoff\] tells us that extended one-dimensional minimisers, i.e. layered structures, will have a relatively low energy (although the question whether or not these are minimisers is still open). Theorem \[th:CR2\] and Remark \[rem:convergenceofwidth\] show that these layers all have the same width. One can think of lamellar configurations like this as having all polymer molecules aligned in straight rows next to each other. Structures like 0U0, which are not to be expected on physical grounds, are a priori not forbidden in our model, but such configurations are not stationary points, as is shown in Section \[subsec:connected\_support\]. Depending on the surface tension coefficients very different structures can appear. As remarked in Section \[subsec:d\_12\] the role of $d_{uv}$ is a special one. If $d_{uv}=0$, there is no repulsion between the U- and V-phases, but there is attraction, due to the $H^{-1}$-norm. Complete mixing of both phases will occur. If one of the other surface tension coefficients is zero instead, say $d_{u0}=0$, then UVU bilayers can be joined together without extra cost, and vice versa UVUVU structures can be split through the middle without increasing the energy. Physically this happens if the U- and 0-phases do not repel each other. The simplest case one can think of is if both phases consist of the same material. If both $d_{u0}>0$ and $d_{v0}>0$, it will always be energetically favourable to join different layers together, because doing so decreases the length of the energetically costly interfaces. 
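As a numerical footnote to the comparison in Section \[subsec:examples\], the optimal energy-per-unit-mass formulas quoted there can be evaluated directly. The sketch below uses the illustrative values $d_{uv}=1$, $d_{v0}=0$ (our own choice, not from the text):

```python
import math

d_uv, d_v0 = 1.0, 0.0  # illustrative surface tensions

# Optimal lamellar energy per unit mass, Theorem [th:lower_bound_1d]
lamellar = (9.0 / 2.0) ** (1.0 / 3.0) * d_uv ** (2.0 / 3.0)

def micelle(N):
    # Optimal energy per unit mass of an N-dimensional micelle, as quoted
    # in Section [subsec:examples]
    if N == 2:
        return 3.0 * (d_uv + d_v0 * math.sqrt(2.0)) ** (2.0 / 3.0) \
                   * (math.log(2.0) - 0.5) ** (1.0 / 3.0)
    return (2.0 ** (-1.0 / 3.0) * 3.0 * N
            * (d_uv + d_v0 * 2.0 ** (1.0 - 1.0 / N)) ** (2.0 / 3.0)
            * ((N + 2.0 - N * 2.0 ** (2.0 / N)) / (N * (N * N - 4.0))) ** (1.0 / 3.0))

# Every micelle value lies strictly above the lamellar value, and the micelle
# energy increases with N, so embedded k-dimensional micelles sit in between.
print(round(lamellar, 4), [round(micelle(N), 4) for N in (2, 3, 4, 5)])
```

Even with $d_{v0}=0$ the micelle values stay above the lamellar constant $(9/2)^{1/3}d_{uv}^{2/3}\approx 1.651$, consistent with lamellae being the lowest-energy structures in the list for large mass.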
Spherically symmetric configurations {#sec:sphersym} ==================================== In this appendix we will compute the energy $F_1$ of spherically symmetric monolayers and bilayers. In [@OhtaNonomura98] the energy for a spherically symmetric bilayer in two and three dimensions is computed. We will give the energy in any dimension $N$. A *spherically symmetric monolayer with inner U-band* in $N$ dimensions consists of a spherical layer of U between radial distances $R_0$ and $R_1$ and a spherical layer of V between radial distances $R_1$ and $R_2$. An example for $N=2$ is drawn in Figure \[fig:sphericalmonolayer\]. Similarly, a *spherically symmetric UVU bilayer* is a spherical layer of V between radial distances $R_1$ and $R_2$, flanked by two spherical layers of U, between radial distances $R_0$ and $R_1$, and $R_2$ and $R_3$ respectively. A two-dimensional example is shown in Figure \[fig:sphericalbilayer\]. Monolayers with inner V-band or VUV bilayers are constructed by interchanging U and V. We will first compute the energy for monolayers. In Theorem \[thm:energysphersymmono\] we give the expansion in terms of the *curvature* $\kappa$ of the energy per mass, for small $\kappa$. The exact expressions for $F_1$ can be found in the proof of the theorem, in (\[eq:sphericalmono2d\]) for $N=2$ and in (\[eq:sphericalmonond\]) for $N\geq3$. The expansion in terms of small curvature is obtained by linearising these exact energy expressions around “$R= \infty$”. 
To this end we introduce for the monolayer the curvature $\kappa$, total U-mass $M$, and mass per (hyper-)surface area $m$: $$\begin{aligned} \kappa &:= R_1^{-1},\qquad M := \omega_N (R_1^N-R_0^N),\\ m &:= \frac{M}{N \omega_N R_1^{N-1}} = \frac{M \kappa^{N-1}}{N \omega_N}.\end{aligned}$$ We then get $$\begin{aligned} R_0 &= \kappa^{-1} \sqrt[N]{1 - N m \kappa },\\ R_2 &= \kappa^{-1} \sqrt[N]{1 + N m \kappa }.\end{aligned}$$ \[thm:energysphersymmono\] Let $(u, v) \in K_1$ be a spherically symmetric monolayer with inner U-band. Fix the mass per surface area $m > 0$, then for all $N \geq 2$: $$\begin{aligned} \frac{F_{1}}{M}(u, v) =& \, \, m^{-1} \left( d_{u0} + d_{uv} + d_{v0} \right) + \frac{2}{3} m^2 + (N - 1) \left( d_{v0} - d_{u0} \right) \kappa\nonumber\\ &+ (N - 1) m \left( -\frac{1}{2} (d_{u0} + d_{v0}) + \frac{1}{15} (3 N - 2) m^3 \right) \kappa^2+ \mathcal{O}(\kappa^3),\label{eq:genrad}\end{aligned}$$ if $\kappa \downarrow 0$. Note that there are two configurations for a monolayer, depending on whether the U-phase is on the inside or the outside. The theorem above states the case where the U-phase is on the inside. The other case is found by interchanging $d_{u0}$ and $d_{v0}$. The proof consists of three steps. First we compute $F_1$ in terms of the radii $R_i$, then we rewrite it as a function of $\kappa, M$ and $m$. Finally the expansion is found by computing the first terms of the Taylor series of these expressions for $\kappa\ll 1$. The interfacial terms are computed in a straightforward manner. For the $H^{-1}$-norm we need to compute the Poisson potential, which depends only on the radius $r$ because of the spherical symmetry and which we will denote by $\phi(r)$. The Poisson equation in spherical coordinates is $$\label{eq:poissonradial} \left\{ \begin{array}{l} -r^{-N+1} \left(r^{N-1} \phi'(r)\right)' = u - v \quad \text{for } r>0,\\ \phi(0) = \phi'(0) = 0. 
\end{array} \right.$$ The solutions to this equation for $N = 2$ and $N \geq 3$ are different, and we treat these two cases separately. First we solve for $N = 2$ on the four different regions and match the solutions under the condition that $\phi \in C^1({{{\ensuremath{\mathbb{R}}}}}^N)$. This gives $$\phi(r) = \left\{ \begin{array}{ll} 0 &\mbox{$r \in (0, R_0)$,}\\ -\frac{1}{4}r^2 + \frac{1}{2} R_0^2 \log r + \frac{1}{4} R_0^2 - \frac{1}{2} R_0^2 \log R_0 &\mbox{$r \in (R_0, R_1)$,}\\ \frac{1}{4}r^2 + \frac{1}{2} \left(R_0^2 - 2 R_1^2\right) \log r + R_1^2 \log R_1 - \frac{1}{2} R_0^2 \log R_0 - \frac{1}{2} R_1^2 + \frac{1}{4} R_0^2 &\mbox{$r \in (R_1, R_2)$,}\\ \frac{1}{4} \left( 2 R_1^2 - R_0^2 \right) \left( 1 - \log (2 R_1^2 - R_0^2) \right) + R_1^2 \log R_1 - \frac{1}{2} R_0^2 \log R_0 - \frac{1}{2}R_1^2 + \frac{1}{4}R_0^2 &\mbox{$r>R_2$.} \end{array} \right.$$ Note that $\phi$ is constant on $[R_2, \infty)$: the solution cannot have a term proportional to $\log r$ on this interval, since $\phi \in W^{1,2}({{{\ensuremath{\mathbb{R}}}}}^N)$ and $$\int_{R_2}^{\infty} |\partial_r \log r|^2 r \, dr = \int_{R_2}^{\infty} \left( \frac{1}{r} \right)^2 r \, dr = \infty.$$ This means that $\phi'(R_0) = \phi'(R_2) = 0$. We now compute the norm via $$\|u - v\|_{H^{-1}({{{\ensuremath{\mathbb{R}}}}}^2)}^2 = 2 \pi \left(\int_{R_0}^{R_1} \phi(r) r \, dr - \int_{R_1}^{R_2} \phi(r) r \, dr\right).$$ For $N = 2$, we then find $$\begin{aligned} \frac{1}{2 \pi} F_1(u, v) &= R_0 d_{u0} + R_1 d_{uv} + R_2 d_{v0}\nonumber\\ &\hspace{0.4cm} - \frac{1}{4} R_1^4 + \frac{1}{4} R_0^2 R_1^2 - \frac{1}{4} R_0^4 \log R_0 - R_1^2 \left( R_1^2 - R_0^2 \right) \log R_1 \nonumber\\ &\hspace{1.5cm} + \frac{1}{8} \left( 2 R_1^2 - R_0^2 \right)^2 \log\left( 2 R_1^2 - R_0^2 \right),\label{eq:sphericalmono2d}\end{aligned}$$ where the radii are related by $R_2^2 - R_1^2 = R_1^2 - R_0^2$.
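The piecewise potential above lends itself to a symbolic sanity check. The sketch below (our own verification with `sympy`, not part of the original computation) confirms that the two non-trivial branches satisfy the radial Poisson equation for $N=2$ and match to $C^1$ order at $r=R_1$:

```python
import sympy as sp

r, R0, R1 = sp.symbols('r R_0 R_1', positive=True)

# The two non-trivial branches of phi for N = 2, copied from the display above
phi_u = -r**2/4 + R0**2/2*sp.log(r) + R0**2/4 - R0**2/2*sp.log(R0)   # R_0 < r < R_1
phi_v = (r**2/4 + (R0**2 - 2*R1**2)/2*sp.log(r) + R1**2*sp.log(R1)
         - R0**2/2*sp.log(R0) - R1**2/2 + R0**2/4)                   # R_1 < r < R_2

def lhs(phi):
    # radial Laplacian in two dimensions: -(1/r) d/dr ( r dphi/dr )
    return sp.simplify(-sp.diff(r*sp.diff(phi, r), r)/r)

print(lhs(phi_u), lhs(phi_v))  # 1 and -1: the values of u - v on the two annuli
# C^0 and C^1 matching at r = R_1:
print(sp.simplify((phi_u - phi_v).subs(r, R1)),
      sp.simplify(sp.diff(phi_u - phi_v, r).subs(r, R1)))  # 0 0
```

The same pattern (solve branch by branch, then match values and derivatives) verifies the $N\geq 3$ solution as well.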
Analogously solving for $N \geq 3$ we find $$\phi(r) = \left\{ \begin{array}{ll} 0 &\mbox{ if $r \in (0, R_0)$,}\\ \frac{-1}{N (N - 2)} R_0^N r^{-N+2} - \frac{1}{2 N} r^2 + \frac{1}{2 (N - 2)} R_0^2 &\mbox{ if $r \in (R_0, R_1)$,}\\ \frac{-1}{N (N - 2)} \left( R_0^N - 2 R_1^N \right) r^{-N + 2} + \frac{1}{2 N} r^2 + \frac{1}{2 (N - 2)} \left( R_0^2 - 2 R_1^2 \right) &\mbox{ if $r \in (R_1, R_2)$,}\\ \frac{1}{2 (N - 2)} \left( \left( 2 R_1^N - R_0^N \right)^{\frac{2}{N}} - 2 R_1^2 + R_0^2 \right) &\mbox{ if $r >R_2$,}\end{array} \right.$$ and compute the norm. This leads to $$\begin{aligned} \frac{F_1}{N \omega_N}(u,v) &= R_0^{N-1} d_{u0} + R_1^{N-1} d_{uv} + R_2^{N-1} d_{v0}\nonumber\\ &\hspace{0.4cm}+ \frac{1}{N^2 - 4} \left( R_0^{N+2} - R_2^{N+2} \right) + \frac{2}{N (N-2)} R_1^2 \left( R_1^N - R_0^N \right),\label{eq:sphericalmonond}\end{aligned}$$ where the radii are related by $R_2^N - R_1^N = R_1^N - R_0^N$. Rewriting our results in terms of $\kappa, M$ and $m$ gives, for $N=2$, $$\begin{aligned} \frac{F_1}{M}(u, v) =& \, m^{-1} \left(\sqrt{1 - 2 m \kappa } d_{u0} + d_{uv} + \sqrt{1 + 2 m \kappa } d_{v0} \right) - \frac{1}{2} \kappa^{-2} \nonumber\\ &+ \frac12 \left( \frac{1}{4} m^{-1} \kappa^{-3}+ \kappa^{-2} + m \kappa^{-1} \right) \log(1 + 2 m \kappa )\nonumber\\ &- \frac12 \left( \frac{1}{4} m^{-1} \kappa^{-3} - \kappa^{-2} + m \kappa^{-1} \right) \log(1 - 2 m \kappa ).\nonumber\end{aligned}$$ For the monolayer with $N\geq3$ we get $$\begin{aligned} \frac{F_1}{M}(u, v) =& \, m^{-1} \left( (1 - N m \kappa)^{\frac{N-1}{N}} d_{u0} + d_{uv} + (1 + N m \kappa)^{\frac{N-1}{N}} d_{v0} \right)\\ &+ \frac{1}{N^2 - 4} m^{-1} \left( (1 - N m \kappa)^{\frac{N+2}{N}} - (1 + N m \kappa)^{\frac{N+2}{N}} \right) \kappa^{-3} + \frac{2}{N - 2} \kappa^{-2}.\end{aligned}$$ We now expand in terms of $\kappa \ll 1$ to get the result. Next we turn to the bilayer. Here we follow the same route as before.
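Before doing so, the monolayer expansion of Theorem \[thm:energysphersymmono\] can be checked against the exact $N \geq 3$ expression with a computer algebra system; the snippet below (our check, for the representative case $N = 3$) verifies agreement through order $\kappa^2$:

```python
import sympy as sp

m, k, du0, duv, dv0 = sp.symbols('m kappa d_u0 d_uv d_v0', positive=True)
N = 3  # representative dimension

# exact F_1/M for the N >= 3 monolayer, written in kappa and m
exact = (((1 - N*m*k)**sp.Rational(N - 1, N)*du0 + duv
          + (1 + N*m*k)**sp.Rational(N - 1, N)*dv0)/m
         + ((1 - N*m*k)**sp.Rational(N + 2, N)
            - (1 + N*m*k)**sp.Rational(N + 2, N))/(m*(N**2 - 4)*k**3)
         + sp.Rational(2, N - 2)/k**2)

# expansion claimed by the theorem, up to O(kappa^3)
claimed = ((du0 + duv + dv0)/m + sp.Rational(2, 3)*m**2
           + (N - 1)*(dv0 - du0)*k
           + (N - 1)*m*(-(du0 + dv0)/2
                        + sp.Rational(1, 15)*(3*N - 2)*m**3)*k**2)

diff = sp.simplify(sp.series(exact - claimed, k, 0, 3).removeO())
assert diff == 0
```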
Theorem \[thm:energysphersymbilay\] states the expansion in small curvature; the exact expressions for $F_1$ can be found in the proof in (\[eq:sphericalbi2d\]) for $N=2$ and in (\[eq:sphericalbind\]) for $N\geq 3$. Define $R > 0$ via $R^N = \frac12 \left( R_1^N + R_2^N \right)$, then for the bilayer we introduce curvature $\kappa$, total U-mass $M$ and mass per (hyper-)surface area $m$ for $N\geq 1$ as follows: $$\begin{aligned} \kappa &:= R^{-1},\qquad M:=\omega_N(R_2^N-R_1^N),\\ m &:= \frac{M \kappa^{N-1}}{N \omega_N}.\end{aligned}$$ Then $$\begin{aligned} R_0 &= \kappa^{-1} \sqrt[N]{\left(1 - N m \kappa\right)}, \quad R_1 = \kappa^{-1} \sqrt[N]{\left(1 - \frac12 N m \kappa\right)},\\ R_2 &= \kappa^{-1} \sqrt[N]{\left(1 + \frac12 N m \kappa\right)}, \quad R_3 = \kappa^{-1} \sqrt[N]{\left(1 + N m \kappa\right)}.\end{aligned}$$ Note that here we have chosen the radii such that the inner and outer U-band have equal mass: $R_3^N - R_2^N = R_1^N - R_0^N$. This is not the optimal choice, in the sense that a small change in the relative thicknesses of the inner and outer monolayers might improve the energy slightly. We expect this to be a small effect, however. \[thm:energysphersymbilay\] Let $(u, v) \in K_1$ be a spherically symmetric UVU bilayer. Fix the mass per surface area $m > 0$, then for all $N \geq 2$: $$\begin{aligned} \frac{F_{1}(u, v)}{M} &= 2 m^{-1} (d_{u0} + d_{uv}) + \frac16 m^2\nonumber\\ &\hspace{0.4cm}+ (N - 1) m \left( -\left( d_{u0} + \frac14 d_{uv} \right) + \frac{11}{240} (3 N - 2) m^3 \right) \kappa^2 + \mathcal{O}(\kappa^4),\label{eq:bilayerexpansion}\end{aligned}$$ if $\kappa \downarrow 0$. An analogous result and proof corresponding to the VUV bilayer is constructed by replacing $d_{u0}$ by $d_{v0}$. As in the proof of theorem \[thm:energysphersymmono\] we follow three steps. First we compute $F_1$ in terms of the radii $R_i$.
We rewrite the resulting expression in terms of $\kappa, M$ and $m$, and finally we find the expansion for $\kappa \ll 1$ by calculating the first terms of a Taylor series. The main problem in the first step consists of deriving the Poisson potential that solves (\[eq:poissonradial\]). For $N = 2$ we find $$\phi(r) = \left\{ \begin{array}{ll} 0 &\mbox{ if $r \in (0, R_0)$,}\vspace{0.1cm}\\ -\frac14 r^2 + \frac12 R_0^2 \log r + \frac14 R_0^2 - \frac12 R_0^2 \log R_0 & \mbox{ if $r \in (R_0, R_1)$,}\vspace{0.1cm}\\ \frac14 r^2 + \frac12 (R_0^2 - 2 R_1^2) \log r + R_1^2 \log R_1&\\ \hspace{0.4cm} - \frac12 R_0^2 \log R_0 - \frac12 R_1^2 + \frac14 R_0^2 &\mbox{ if $r \in (R_1, R_2)$,}\vspace{0.1cm}\\ -\frac14 r^2 + \left(\frac12 R_0^2 - R_1^2 + R_2^2\right) \log r - \frac12 R_0^2 \log R_0&\\ \hspace{0.4cm} + R_1^2 \log R_1 - R_2^2 \log R_2 + \frac14 R_0^2 - \frac12 R_1^2 + \frac12 R_2^2 &\mbox{ if $r \in (R_2, R_3)$,}\vspace{0.1cm}\\ -\frac14 R_3^2 + \left(\frac12 R_0^2 - R_1^2 + R_2^2\right) \log R_3 - \frac12 R_0^2 \log R_0&\\ \hspace{0.4cm} + R_1^2 \log R_1 - R_2^2 \log R_2 + \frac14 R_0^2 - \frac12 R_1^2 + \frac12 R_2^2 &\mbox{ if $r >R_3$.} \end{array} \right.$$ For $N \geq 3$ we have $$\phi(r) = \left\{ \begin{array}{ll} 0 &\mbox{ if $r \in (0, R_0)$,}\\ -\frac1{N(N-2)} R_0^N r^{-N+2} - \frac1{2N} r^2 + \frac1{2(N-2)} R_0^2 &\mbox{ if $r \in (R_0, R_1)$,}\\ -\frac1{N(N-2)} \left(R_0^N - 2 R_1^N\right) r^{-N+2} + \frac1{2N} r^2 + \frac1{2(N-2)} \left(R_0^2 - 2 R_1^2\right) &\mbox{ if $r \in (R_1, R_2)$,}\\ -\frac1{N(N-2)} \left(R_0^N - 2 R_1^N + 2 R_2^N\right) r^{-N+2} - \frac1{2N} r^2 + \frac1{2(N-2)} \left(R_0^2 - 2 R_1^2 + 2 R_2^2\right) &\mbox{ if $r \in (R_2, R_3)$,}\\ \frac1{2(N-2)} \left(R_0^2 - 2 R_1^2 + 2 R_2^2 - R_3^2\right) &\mbox{ if $r>R_3$.} \end{array} \right.$$ We then proceed in the same way as for Theorem \[thm:energysphersymmono\] to find, for $N = 2$, $$\begin{gathered} \label{eq:sphericalbi2d} \frac{1}{2 \pi} F_1(u, v) = (R_0 + R_3)
d_{u0} + (R_1 + R_2) d_{uv}+ \frac1{16} \left(R_0^4 - R_3^4\right) +\\ \begin{aligned} &\hspace{1.5cm} + \left( \frac12 R_0^2 R_1^2 + \frac12 R_0^2 R_2^2 - \frac14 R_0^2 R_3^2 \right) \log R_0 + \left(\frac12 R_0^2 R_1^2 - R_1^2 R_2^2 + \frac12 R_1^2 R_3^2\right) \log R_1 +\\ &\hspace{1.5cm} + \left(-\frac12 R_0^2 R_2^2 + R_1^2 R_2^2 - \frac12 R_2^2 R_3^2 \right) \log R_2 + \left(\frac14 R_0^2 R_3^2 - \frac12 R_1^2 R_3^2 + \frac12 R_2^2 R_3^2\right) \log R_3, \end{aligned}\end{gathered}$$ where $R_3^2 - R_2^2 = \frac12 (R_2^2 - R_1^2) = R_1^2 - R_0^2$. For a bilayer with $N \geq 3$ we have $$\begin{gathered} \label{eq:sphericalbind} \frac{1}{N \omega_N} F_1(u, v) = (R_0^{N-1} + R_3^{N-1}) d_{u0} + (R_1^{N-1} + R_2^{N-1}) d_{uv}+ \frac1{2 N (N+2)} \left(R_0^{N+2} - R_3^{N+2} \right)\\ {} + \frac1{2 N (N-2)} \left( -4 R_0^N R_1^2 + 4 R_0^N R_2^2 - 8 R_1^N R_2^2 + R_0^{N+2} + 4 R_1^{N+2} + 4 R_2^{N+2} - R_3^{N+2} \right),\end{gathered}$$ where $R_3^N - R_2^N = \frac12 (R_2^N - R_1^N) = R_1^N - R_0^N$. Rewriting these results in terms of $\kappa, M$ and $m$ gives, for $N = 2$ $$\begin{aligned} \frac{F_{1}(u, v)}{M} &= m^{-1} \left[ d_{u0} \left( (1 - 2 m \kappa)^{1/2} + (1 + 2 m \kappa)^{1/2} \right) + d_{uv} \left( (1 - m \kappa)^{1/2} + (1 + m \kappa)^{1/2} \right) \right]\\ &\hspace{1.4cm}- \frac12 \kappa^{-2} - \frac12 \left( \frac14 m^{-1} \kappa^ {-3} - \kappa^{-2} + m \kappa^{-1} \right) \log(1 - 2 m \kappa)\\ &\hspace{1.4cm} + \frac12 \left( \frac14 m^{-1} \kappa^{-3} + \kappa^{-2} + m \kappa^{-1} \right) \log(1 + 2 m \kappa)\\ &\hspace{1.4cm} - \frac12 \left( \kappa^{-2} + m \kappa^{-1} \right) \log(1 + m \kappa) + \frac12 \left( -\kappa^{-2} + m \kappa^{-1} \right) \log(1 - m \kappa).\end{aligned}$$ For $N \geq 3$ we have $$\begin{aligned} \frac{F_{1}(u, v)}{M} &= m^{-1} \left[\left( (1 - N m \kappa)^{\frac{N-1}{N}} + (1 + N m \kappa)^{\frac{N-1}{N}} \right) d_{u0}\right.\\ &\hspace{1.4cm} \left. 
+ \left( \left(1 - \frac12 N m \kappa\right)^{\frac{N-1} {N}} + \left(1 + \frac12 N m \kappa\right)^{\frac{N-1}{N}} \right) d_{uv}\right] \\ &\hspace{0.4cm}+ \frac1{N^2 - 4} m^{-1} \left( (1 - N m \kappa)^{\frac{N+2}{N}} - (1 + N m \kappa)^{\frac{N+2}{N}} \right) \kappa^{-3}\\ &\hspace{0.4cm}+ \frac{1}{N-2} \left( \left(1 - \frac12 N m \kappa\right)^ {\frac2N} + \left(1 + \frac12 N m \kappa\right)^{\frac2N} \right) \kappa^{-2}.\end{aligned}$$ The result (\[eq:bilayerexpansion\]) now follows from expanding in $\kappa \ll 1$. [^1]: Where we do not explicitly specify the integration measure, we use the Lebesgue measure. [^2]: The indices $j, k, l$ take values in $\{u, v, 0\}$ and the $d_{kl}$ are taken symmetric in their indices, i.e. $d_{vu} := d_{uv}$ etc.
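As with the monolayer, the bilayer expansion (\[eq:bilayerexpansion\]) can be checked symbolically against the exact $N \geq 3$ expression above; the snippet below (our check, for the representative case $N = 3$) verifies agreement through order $\kappa^3$:

```python
import sympy as sp

m, k, du0, duv = sp.symbols('m kappa d_u0 d_uv', positive=True)
N = 3  # representative dimension

# exact F_1/M for the N >= 3 UVU bilayer, written in kappa and m
exact = ((((1 - N*m*k)**sp.Rational(N - 1, N)
           + (1 + N*m*k)**sp.Rational(N - 1, N))*du0
          + ((1 - N*m*k/2)**sp.Rational(N - 1, N)
             + (1 + N*m*k/2)**sp.Rational(N - 1, N))*duv)/m
         + ((1 - N*m*k)**sp.Rational(N + 2, N)
            - (1 + N*m*k)**sp.Rational(N + 2, N))/(m*(N**2 - 4)*k**3)
         + ((1 - N*m*k/2)**sp.Rational(2, N)
            + (1 + N*m*k/2)**sp.Rational(2, N))/((N - 2)*k**2))

# expansion claimed in the theorem; odd powers cancel by symmetry
claimed = (2*(du0 + duv)/m + sp.Rational(1, 6)*m**2
           + (N - 1)*m*(-(du0 + duv/4)
                        + sp.Rational(11, 240)*(3*N - 2)*m**3)*k**2)

diff = sp.simplify(sp.series(exact - claimed, k, 0, 4).removeO())
assert diff == 0
```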
--- abstract: 'Due to recent successes of a statistical-based nonlocal continuum crystal plasticity theory for single-glide in explaining various aspects such as dislocation patterning and size-dependent plasticity, several attempts have been made to extend the theory to describe crystals with multiple slip systems using ad-hoc assumptions. We present here a mesoscale continuum theory of plasticity for multiple slip systems of parallel edge dislocations. We begin by constructing the Bogolyubov–Born–Green–Yvon–Kirkwood (BBGYK) integral equations relating different orders of dislocation correlation functions in a grand canonical ensemble. Approximate pair correlation functions are obtained for single-slip systems with two types of dislocations and, subsequently, for general multiple-slip systems of both charges. The effect of the correlations manifests itself in the form of an entropic force in addition to the external stress and the self-consistent internal stress. Comparisons with a previous multiple-slip theory based on phenomenological considerations shall be discussed.' author: - Surachate Limkumnerd - Erik Van der Giessen bibliography: - 'references.bib' title: 'Statistical approach to dislocation dynamics: From dislocation correlations to a multiple-slip continuum plasticity theory' --- Introduction ============ Statistical mechanics provides an optimal framework and various tools for studying emergent phenomena from a complex conglomerate of bodies—may they be molecules of gases, polymer chains of rubber, or crystalline defects. The use of correlation functions in analysing two-dimensional solids and their defects has been proven very successful in the past. For example, Mermin showed that two-dimensional crystals do not have conventional long-range order, but can have “directional long-range order.”[@Merm68] Nelson et al. 
applied the technique to explain dislocation-assisted melting in two dimensions.[@Nels78; @NelsHalp78] Over a decade ago, Groma proposed a theory to describe dislocations and their motions using distribution functions and probability arguments.[@Grom97] Unlike the existing continuum theories at the time,[^1] the new formalism was physically motivated and incorporated correctly the long-range nature of dislocation interactions. Several variations of this work—all of which reduce to the same two-dimensional theory—also exist for three dimensional dislocation systems.[@ElAz00; @LimkSeth06; @LimkSeth07b; @RoyAcha05] Although having laid out the foundation for possible interactions of many-dislocation configurations, Groma’s pioneering work did not investigate these correlated effects in details. Zaiser et al. considered explicitly the evolution of dislocation correlations by extending Groma’s theory for systems of single-slip, parallel edge dislocations.[@ZaisMiguGrom01] They were able to qualitatively obtain the correct scaling behavior of the evolution equations for both single and pair correlation densities, and explained some general properties of these functions. Their formulation, however, was limited to only one active slip system and the analytical forms of pair correlation functions were not derived. In a later work, Groma et al. included the influence of dislocation correlations in the form of a local back stress.[@GromCsikZais03] Yefimov et al. 
connected this statistical description to a continuum crystal plasticity theory and applied this to various boundary value problems.[@YefiGromGies04; @YefiGromGies04b] While the theory successfully captured most features observed in discrete dislocation simulations, its ad-hoc extension to multiple slip systems failed to explain size effects in single crystal thin films.[@YefiGies05b] The main goals of this paper are: (1) to correctly describe and obtain analytical expressions for dislocation pair correlations, and (2) to systematically generalize the approach of Groma et al. to multiple slip systems. We begin, in Sec. \[S:Definitions\], by introducing ensembles of dislocations and deriving the partition function for multiple slip systems. The $n^\text{th}$-order dislocation densities and dislocation correlation functions are subsequently defined. We construct the Bogolyubov–Born–Green–Yvon–Kirkwood (BBGYK) integral equations in Sec. \[S:BBGYK\]. These equations link correlation functions of order $n$ to those of order $n\!+\!1$ (a technique generally used in the study of dense gases and fluids). The integral equations are expanded in powers of interaction strength (the ratio between the interaction energy and ‘thermal’ energy). We then obtain a set of approximate integral equations for pair ($n=2$) correlation functions after applying a closure approximation to truncate the series. These equations are valid regardless of the form of the interaction potential, and thus are applicable to other systems, provided that this pair interaction vanishes at a large distance. By appealing to Peach–Koehler interaction, analytical expressions for pair dislocation densities for single and multiple slip systems are derived in Sec. \[S:SingleSlip\] and Sec. \[S:MultiSlip\] respectively. 
Our single-slip solution agrees with the result from the study of induced geometrically necessary dislocations (GND) in terms of a single pinned dislocation by means of a variational approach.[@GromGyorKocs06] The dislocation spacing $1/\sqrt{\rho}$ emerges as a natural lengthscale in this formulation in accordance with the scaling study by Zaiser et al.[@ZaisMiguGrom01] Our analysis further shows long-range attractive correlations when more than one slip system is present, confirming the absence of dislocation patterning in single glide systems as observed in many discrete dislocation simulations[@BenzBrecNeed04; @BenzBrecNeed05; @FourSala96; @GomeDeviKubi06; @GromBako00; @GromPawl93PMA; @GromPawl93MSEA; @GullHart93] and explained in a recent three-dimensional continuum plasticity theory.[@LimkSeth06; @LimkSeth07b] In Sec. \[S:EvolutionLaw\], we write down the transport equations for both total dislocation densities and GND densities on each slip system under the influence of Peach–Koehler forces from both single and pair dislocation correlations. While the former gives a self-consistent, long-range internal stress contribution, the latter exerts an additional short-range, entropic force due to a deviation away from a preferred dislocation arrangement in the form of a back stress. The formulation is a direct extension of the work by Groma and Zaiser[@Grom97; @ZaisMiguGrom01; @GromCsikZais03] for crystals with one active slip system. Using knowledge of the pair correlation functions, we obtain a complete description of the back stress as a function of slip orientations—which previously had been incorporated using ad-hoc phenomenological considerations in the multiple-slip theory.[@YefiGies05b; @YefiGies05] Finally in Sec.
\[S:Comparison\], we contrast our theory with the multiple-slip theory of Yefimov et al.[@YefiGies05; @YefiGies05b] While both theories propose that interactions among slip systems depend solely on relative angles of slip orientations, the functional forms are different. We attribute the failure of the earlier theory in explaining size effects in single crystal thin films partly to this difference and partly to the treatment of dislocation nucleation in the theory. Definitions of the basic quantities {#S:Definitions} =================================== Consider a system containing $r$ species of dislocations and denote the coordinate of the $i^\text{th}$ dislocation of species $s$ by $\vec{i}_s$. The dislocation configuration ${\{{\mathbf{N}}\}}$ is the set of the coordinates of all dislocations, where ${\mathbf{N}} \equiv (N_1,N_2; N_3,N_4; \ldots; N_{r-1},N_{r})$ denotes the “collection” of the numbers $N_s$ of dislocations of each species. In this convention, odd and even slots respectively contain plus and minus dislocations on distinct slip systems.[^2] We introduce the notation ${\left\{{\mathbf{N}}+{\mathbf{1}}_{s}\right\}}$ to denote the addition of an extra dislocation of species $s$ to ${\{{\mathbf{N}}\}}$, while similarly a configuration ${\{{\mathbf{N}}\}}$ with the coordinates of ${\mathbf{n}}$ removed is indicated by ${\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}$.
The interacting Hamiltonian $U_{\mathbf{N}}$ of the system can be written as the sum of potentials $u(\vec{i}_{s_1}-\vec{j}_{s_2})$ of all pairs of dislocations $$U_{\mathbf{N}}({\{{\mathbf{N}}\}}) = \sum_{s_1 \le s_2}\sum_{i\le j} u(\vec{i}_{s_1}-\vec{j}_{s_2})\,.$$ We can define a canonical partition function of configuration ${\mathbf{N}}$ by $$\label{E:Zdef} {\mathcal{Z}}_{\mathbf{N}} \equiv \int {{\rm e}^{-U_{\mathbf{N}}/{k_\text{B}}T}} d{\{{\mathbf{N}}\}}\,,$$ where the integrations are taken over the “volume” measure $d{\{{\mathbf{N}}\}} \equiv \prod_{s=1}^r d^2\vec{1}_s d^2\vec{2}_s \cdots d^2\vec{N}_s$ of the dislocation configuration at ${\{{\mathbf{N}}\}}$. Consider the coordinates of a particular set ${\{{\mathbf{n}}\}}$; the probability of observing the configuration ${\mathbf{n}}$ in $d{\{{\mathbf{n}}\}}$ about the points in ${\{{\mathbf{n}}\}}$, irrespective of the remaining collection ${\mathbf{N}}-{\mathbf{n}}$, is $$P_{\mathbf{N}}^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})\,d{\{{\mathbf{n}}\}} = \frac{d{\{{\mathbf{n}}\}}}{{\mathcal{Z}}_{\mathbf{N}}} \!\int\! {{\rm e}^{-U_{\mathbf{N}}({\{{\mathbf{N}}\}})/{k_\text{B}}T}} d\!{\left\{{\mathbf{N}}-{\mathbf{n}}\right\}},$$ where $\int P_{\mathbf{N}}^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})\,d{\{{\mathbf{n}}\}} = 1$. The probability density of observing *any* statistically equivalent possible collection ${\mathbf{n}}$ within the volumes $d{\{{\mathbf{n}}\}}$ about the points ${\{{\mathbf{n}}\}}$ is therefore $$\rho_{\mathbf{N}}^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) = \prod_{s=1}^r \frac{N_s!}{(N_s-n_s)!}\, P_{\mathbf{N}}^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})\,.$$ By using the Boltzmann distribution, we assume that our system is ergodic and that thermal equilibrium exists and can be reached. A system of dislocations that drift along the local force (thus implying glide accompanied by some amount of climb), subject to thermal noise, would certainly fit this criterion.
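This assumption can be made concrete: configurations distributed according to the Boltzmann weight ${\rm e}^{-U_{\mathbf{N}}/k_\text{B}T}$ are exactly what a Metropolis random walk samples. The sketch below is our illustration only — the pair potential and all parameters are toy placeholders, not the dislocation interaction introduced later:

```python
import math, random

def metropolis(n_disl=10, n_steps=2000, beta=1.0, box=10.0, seed=0):
    """Sample Boltzmann-weighted configurations of point 'dislocations'
    interacting through a toy, regularized pair potential."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n_disl)]

    def pair_u(p, q):
        d2 = (p[0] - q[0])**2 + (p[1] - q[1])**2 + 1e-6  # regularized core
        return -0.5*math.log(d2)*math.exp(-d2/4)         # toy potential

    def energy(cfg):
        return sum(pair_u(cfg[i], cfg[j])
                   for i in range(len(cfg)) for j in range(i + 1, len(cfg)))

    E, accepted = energy(pts), 0
    for _ in range(n_steps):
        i = rng.randrange(n_disl)
        old = pts[i]
        pts[i] = (old[0] + rng.gauss(0, 0.3), old[1] + rng.gauss(0, 0.3))
        dE = energy(pts) - E
        if dE < 0 or rng.random() < math.exp(-beta*dE):  # Metropolis rule
            E, accepted = E + dE, accepted + 1
        else:
            pts[i] = old                                 # reject the move
    return pts, accepted / n_steps

pts, acc_rate = metropolis()
```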
Consider now an *open* system (which could be realized, say, by allowing for nucleation and annihilation of dislocations as the system relaxes); a grand canonical partition function is given by $$\Xi = \sum_{{\mathbf{N}}\ge {\mathbf{0}}} \prod_{s=1}^r \frac{z_s^{N_s}}{N_s!}\,{\mathcal{Z}}_{\mathbf{N}}\,,$$ where $z_s$ is the activity of species $s$. The prefactor arises from integrating away the momentum degrees of freedom in the Hamiltonian which are irrelevant to this problem. The probability $\mathcal{P}$ of the occurrence of configuration ${\mathbf{N}}$ in the open system is therefore $$\label{E:prob} \mathcal{P}_{\mathbf{N}} = \prod_{s=1}^r \frac{z_s^{N_s}}{N_s!}\,\frac{{\mathcal{Z}}_{\mathbf{N}}}{\Xi}\,.$$ Finally, the probability density of observing *any* $n_1$ dislocations of species 1, $n_2$ dislocations of species 2, etc., (*any* collection ${\mathbf{n}}$) in $d{\{{\mathbf{n}}\}}$ at ${\{{\mathbf{n}}\}}$ is $$\label{E:rhon} \rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) = \sum_{{\mathbf{N}}\ge {\mathbf{n}}} \mathcal{P}_{\mathbf{N}}\, \rho^{{({\mathbf{n}})}}_{\mathbf{N}}({\{{\mathbf{n}}\}})\,.$$ The summation is taken over all collections ${\mathbf{N}}$ greater than or equal to ${\mathbf{n}}$, i.e., for all $N_1\ge n_1$, $N_2\ge n_2$, etc. We take Eq. (\[E:rhon\]) as the *definition* of dislocation density of order $({\mathbf{n}})$.
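A limiting case makes Eq. (\[E:prob\]) concrete: for a single non-interacting species ($U_{\mathbf{N}} \equiv 0$ on a region of area $V$, so ${\mathcal{Z}}_N = V^N$), the grand partition function sums to $\Xi = {\rm e}^{zV}$ and $\mathcal{P}_N$ is a Poisson distribution with mean $zV$. A quick numerical check (our illustration, with arbitrary values of $z$ and $V$):

```python
import math

z, V = 0.7, 3.0          # arbitrary activity and "volume"
zV = z*V

# Z_N = V**N for an ideal system, so Xi = sum_N z^N V^N / N! = exp(zV)
Xi = sum(zV**N/math.factorial(N) for N in range(60))
assert abs(Xi - math.exp(zV)) < 1e-12

# P_N = z^N Z_N / (N! Xi): a Poisson distribution with mean zV
P = [zV**N/math.factorial(N)/Xi for N in range(60)]
assert abs(sum(P) - 1.0) < 1e-12
mean_N = sum(N*p for N, p in enumerate(P))
assert abs(mean_N - zV) < 1e-9
```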
Explicitly we have $$\begin{gathered} \label{E:rho} \rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) = \frac{1}{\Xi} \sum_{{\mathbf{N}}\ge {\mathbf{n}}} \Big[\prod_{s=1}^r \frac{z_s^{N_s}}{(N_s-n_s)!} \Big] \\ \int {{\rm e}^{-U_{\mathbf{N}}({\{{\mathbf{N}}\}})/{k_\text{B}}T}} d\!{\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}\end{gathered}$$ This definition of an $({\mathbf{n}})^\text{th}$-order dislocation density is equivalent to the ones used by Groma [@Grom97] and Zaiser [@ZaisMiguGrom01] in the realization of an open system.[^3] Finally, we define the ${{({\mathbf{n}})}}^\text{th}$-order correlation function $g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})$ through $$\label{E:g} \rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) = \left[\prod_{s=1}^r \rho^{(1)}(\vec{1}_s)\rho^{(1)}(\vec{2}_s)\cdots \rho^{(1)}(\vec{n}_s)\right] g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})\,.$$ Derivation of the BBGYK integral equations {#S:BBGYK} ========================================== The Bogolyubov–Born–Green–Yvon–Kirkwood integral equations first appeared in the study of classical fluids with a total potential energy given by the sum of pair interactions.[@Kirk35; @Yvon35; @BornGree49; @Gree52] They provide a set of relations between distribution functions of fluid density at different orders. Here we extend the BBGYK formalism to include the non-central interactions of dislocations in a multicomponent system.[@Fish64; @Hill56] We proceed in three steps: (1) take a derivative of the ${{({\mathbf{n}})}}^\text{th}$-order dislocation density with respect to the coordinate of one particle of the interested species; (2) express the result in terms of the next higher order densities; and (3) convert the integral equations of densities into those of correlation functions. Differentiating $\rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})$ as expressed in Eq. 
(\[E:rho\]) with respect to dislocation 1 of species 1 located at $\vec{1}_1$ we find $$\begin{gathered} \label{E:step1} {\vec{\nabla}_{\!\vec{1}_1}}\rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) = -\frac{1}{\Xi} \sum_{{\mathbf{N}}\ge {\mathbf{n}}} \Big[\prod_{s=1}^r \frac{z_s^{N_s}}{(N_s-n_s)!} \Big] \\ \int {{\rm e}^{-\bar{U}_{\mathbf{N}}({\{{\mathbf{N}}\}})}} {\vec{\nabla}_{\!\vec{1}_1}}\bar{U}_{\mathbf{N}}({\{{\mathbf{N}}\}})\, d\!{\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}\,,\end{gathered}$$ where we absorb $1/{k_\text{B}}T$ into the definition $\bar{U}_{\mathbf{N}}:=U_{\mathbf{N}}/{k_\text{B}}T$. The derivative of the potential can be separated into two parts: $$\label{E:DofPotential} {\vec{\nabla}_{\!\vec{1}_1}}\bar{U}_{\mathbf{N}} = \underset{(i,s)\ne (1,1)}{\sum_{s=1}^r\sum_{i=1}^{n_s}} {\vec{\nabla}_{\!\vec{1}_1}}\bar{u}(\vec{1}_1-\vec{i}_s) + \sum_{s=1}^r\sum_{i=n_s+1}^{N_s} {\vec{\nabla}_{\!\vec{1}_1}}\bar{u}(\vec{1}_1-\vec{i}_s)$$ Direct substitution of Eq. (\[E:DofPotential\]) into the integrand of Eq. (\[E:step1\]) splits the expression into two integrals $I_1$ and $I_2$. Notice in the first integral that the derivative of the potential does not depend on coordinates ${\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}$, and thus can be taken out of the integral, yielding $$\label{E:I1} I_1 = - \rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) \underset{(i,s)\ne (1,1)}{\sum_{s=1}^r\sum_{i=1}^{n_s}} {\vec{\nabla}_{\!\vec{1}_1}}\bar{u}(\vec{1}_1-\vec{i}_s)$$ with the aid of Eq. (\[E:rho\]). The second integral $I_2$ requires a little more work: $$\begin{gathered} \label{E:step2} I_2 = -\frac{1}{\Xi} \sum_{s=1}^r \sum_{{\mathbf{N}}\ge {\mathbf{n}}} \Big[\prod_{s'=1}^r \frac{z_{s'}^{N_{s'}}}{(N_{s'}-n_{s'})!} \Big] \\ \int \!
\sum_{i=n_s+1}^{N_s} {\vec{\nabla}_{\!\vec{1}_1}}\bar{u}(\vec{1}_1-\vec{i}_s)\, {{\rm e}^{-\bar{U}_{\mathbf{N}}({\{{\mathbf{N}}\}})}} \, d\!{\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}\end{gathered}$$ The expression involves integrating $\vec{i}_s$ over the sample size. Since each integral over $\vec{i}_s$ between $n_s+1 \le i \le N_s$ is equivalent in infinite space, the summation therefore gives a factor of $(N_s-n_s)$. The remaining integrals over all other dislocation coordinates are unaffected. Eq. (\[E:step2\]) thus becomes $$\label{E:I2} \begin{split} I_2 &= -\sum_{s=1}^r \frac{1}{\Xi} \sum_{{\mathbf{N}}\ge {\mathbf{n}}} \Big[\prod_{s'=1}^r \frac{z_{s'}^{N_{s'}}}{(N_{s'}-n_{s'})!} \Big] (N_s-n_s) \\ &\qquad \int {\vec{\nabla}_{\!\vec{1}_1}}\bar u(\vec{1}_1-\overrightarrow{(n_s+1)}_s) \Big\{ \int {{\rm e}^{-\bar{U}_{\mathbf{N}}({\{{\mathbf{N}}\}})}} \, d\!\left[{\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}\!\setminus\!\{\overrightarrow{(n_s \!+\!1)}_s\}\right] \Big\} \,d^2\overrightarrow{(n_s\!+\!1)}_s \\ &= -\sum_{s=1}^r \int {\vec{\nabla}_{\!\vec{1}_1}}\bar u(\vec{1}_1-\overrightarrow{(n_s+1)}_s) \bigg\{ \frac{1}{\Xi} \sum_{{\mathbf{N}}\ge {\mathbf{n}}} \Big[\prod_{s'=1}^r \frac{z_{s'}^{N_{s'}}(N_s-n_s)}{(N_{s'}-n_{s'})!} \Big] \\ &\quad\qquad\qquad\qquad\qquad\qquad\qquad \int {{\rm e}^{-\bar{U}_{\mathbf{N}}({\{{\mathbf{N}}\}})}} \, d\!\left[{\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}\!\setminus\!\{\overrightarrow{(n_s \!+\!1)}_s\}\right] \bigg\} \,d^2\overrightarrow{(n_s\!+\!1)}_s \\ &= -\sum_{s=1}^r \int {\vec{\nabla}_{\!\vec{1}_1}}\bar u(\vec{1}_1-\overrightarrow{(n_s+1)}_s)\, \rho^{{({\mathbf{n}}+{\mathbf{1}}_{s})}}({\left\{{\mathbf{n}}+{\mathbf{1}}_{s}\right\}})\,d^2\overrightarrow{(n_s\!+\!1)}_s \end{split}$$ The symbol $d\!\left[{\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}\!\setminus\!\{\overrightarrow{(n_s \!+\!1)}_s\}\right]$ represents the volume measure of ${\left\{{\mathbf{N}}-{\mathbf{n}}\right\}}$ *without* $d^2\overrightarrow{(n_s\!+\!1)}_s$. 
Collecting both $I_1$ and $I_2$ from Eq. (\[E:I1\]) and Eq. (\[E:I2\]), we arrive at the BBGYK equations for the ${{({\mathbf{n}})}}^\text{th}$-order dislocation density: $$\label{E:BBGYKrho} {\vec{\nabla}_{\!\vec{1}_1}}\rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) = - \rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) \!\!\!\sum_{(s,i)\ne (1,1)}^{(r,n_s)}\!\!\!\!\! {\vec{\nabla}_{\!\vec{1}_1}}\bar{u}(\vec{1}_1-\vec{i}_s) -\sum_{s=1}^r \int {\vec{\nabla}_{\!\vec{1}_1}}\bar u(\vec{1}_1-\overrightarrow{(n_s\!+\!1)}_s)\, \rho^{{({\mathbf{n}}+{\mathbf{1}}_{s})}}({\left\{{\mathbf{n}}+{\mathbf{1}}_{s}\right\}})\,d^2\overrightarrow{(n_s\!+\!1)}_s$$ One can obtain a series of integro-differential equations for the correlation functions $g^{{({\mathbf{n}})}}$ from Eq. (\[E:BBGYKrho\]) by expanding out $\rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})$ using Eq. (\[E:g\]). All but two of the single dislocation densities on the left and right-hand sides of the equality cancel which results in $$\begin{gathered} \label{E:BBGYKgfirst} {\vec{\nabla}_{\!\vec{1}_1}}\! \left[\rho(\vec{1}_1) g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})\right] = - \rho(\vec{1}_1) g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) \!\!\!\sum_{(s,i)\ne (1,1)}^{(r,n_s)}\!\!\!\!\! {\vec{\nabla}_{\!\vec{1}_1}}\bar{u}(\vec{1}_1-\vec{i}_s) \\ -\rho(\vec{1}_1) \sum_{s=1}^r \int {\vec{\nabla}_{\!\vec{1}_1}}\bar u(\vec{1}_1-\overrightarrow{(n_s\!+\!1)}_s)\, \rho(\overrightarrow{(n_s\!+\!1)}_s)g^{{({\mathbf{n}}+{\mathbf{1}}_{s})}}({\left\{{\mathbf{n}}+{\mathbf{1}}_{s}\right\}})\,d^2\overrightarrow{(n_s\!+\!1)}_s\,.\end{gathered}$$ The first order densities $\rho(\vec{1})$ that plague the expression can be removed by first using the product rule to the left-hand side (LHS), then dividing both sides by $\rho(\vec{1})$. 
The LHS becomes $$\text{LHS} = {\vec{\nabla}_{\!\vec{1}_1}}g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) + g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})\frac{{\vec{\nabla}_{\!\vec{1}_1}}\rho(\vec{1}_1)}{\rho(\vec{1}_1)}.$$ The ratio of the derivative of the first-order density with itself can be rewritten using Eq. (\[E:BBGYKrho\]) specialized to first order, giving $$\frac{{\vec{\nabla}_{\!\vec{1}_1}}\rho(\vec{1}_1)}{\rho(\vec{1}_1)} = -\sum_{s=1}^r \int {\vec{\nabla}_{\!\vec{1}_1}}\bar u(\vec{1}_1-\vec{\xi}_s) \rho(\vec{\xi}_s) g^{(2)}(\vec{1}_1,\vec{\xi}_s)\,d^2\vec{\xi}_s\,,$$ where $\vec{\xi}_s \equiv \overrightarrow{(n_s\!+\!1)}_s$ is the position of the $(n_s\!+\!1)^\text{th}$ dislocation of species $s$, and $g^{(2)}(\vec{1}_1,\vec{\xi}_s)$ represents the pair correlation function between the first dislocation of species 1 at $\vec{1}_1$ and the $(n_s+1)^\text{th}$ dislocation of species $s$ at $\vec{\xi}_s$. This expression could be incorporated seamlessly into the right-hand side of Eq. (\[E:BBGYKgfirst\]). The final result is[^4] $$\begin{gathered} \label{E:BBGYKg} {\vec{\nabla}_{\!\vec{1}_1}}g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) = - g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}}) \!\!\!\sum_{(s,i)\ne (1,1)}^{(r,n_s)}\!\!\!\!\! {\vec{\nabla}_{\!\vec{1}_1}}\bar{u}(\vec{1}_1-\vec{i}_s) \\ - \sum_{s=1}^r \int {\vec{\nabla}_{\!\vec{1}_1}}\bar u(\vec{1}_1-\vec{\xi}_s)\, \rho(\vec{\xi}_s) \\ \times\left[g^{{({\mathbf{n}}+{\mathbf{1}}_{s})}}({\left\{{\mathbf{n}}+{\mathbf{1}}_{s}\right\}})- g^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})g^{(2)}(\vec{1}_1,\vec{\xi}_s) \right] d^2\vec{\xi}_s\,.\end{gathered}$$ For the remainder of this paper, we shall restrict our attention to the Peach–Koehler interaction. 
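In simulations, correlation functions of the kind defined in Eq. (\[E:g\]) are typically estimated by binning pair separations and normalizing by the uncorrelated expectation. A minimal estimator (our illustration, with synthetic data; for a Poisson point set the estimate should hover around $g^{(2)} \approx 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 1200, 1.0                        # points in an L x L periodic box
pts = rng.random((n, 2))*L
rho = n/L**2

# minimum-image pair separations
d = np.abs(pts[:, None, :] - pts[None, :, :])
d = np.minimum(d, L - d)
r = np.sqrt((d**2).sum(-1))[np.triu_indices(n, k=1)]

# histogram distances and normalize by the ideal-gas expectation
edges = np.linspace(0.05, 0.25, 9)
counts, _ = np.histogram(r, edges)
shell = np.pi*(edges[1:]**2 - edges[:-1]**2)   # annulus areas
g2 = counts/(0.5*n*rho*shell)                  # each pair counted once

assert np.allclose(g2, 1.0, atol=0.08)         # uncorrelated: g2 ~ 1
```

For a correlated configuration (e.g. output of a dynamics or Monte Carlo run), the same estimator exposes the short-range structure discussed below.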
Recall that the interaction energy between two parallel edge dislocations of length $L$ (over thermal energy ${k_\text{B}}T$) in an infinite medium is [@HirtLoth82] $$\label{E:energy} \bar u(\vec{i}_s-\vec{j}_{s'}) = -\Gamma\, \psi(\vec{i}_s-\vec{j}_{s'})$$ where $\Gamma \equiv \dfrac{\mu b^2 L}{2\pi (1-\nu) {k_\text{B}}T}$, and $$\begin{gathered} \label{E:IntEnergy} \psi(\vec{i}_s,\vec{j}_{s'}) \equiv \bigg[ (\hat{m}_{\vec{i}_s}\cdot \hat{m}_{\vec{j}_{s'}}) \ln\!\left(| \vec{i}_s - \vec{j}_{s'} |\right) \\ + \frac{\left(\hat{m}_{\vec{i}_s}\cdot (\vec{i}_s - \vec{j}_{s'})\right) \left(\hat{m}_{\vec{j}_{s'}}\cdot (\vec{i}_s - \vec{j}_{s'})\right)}{|\vec{i}_s - \vec{j}_{s'}|^2} \bigg].\end{gathered}$$ Here $\hat{m}_{\vec{i}_s}$ denotes the slip-plane normal of species $s$. The relative strength $\Gamma$ represents the ratio of the dislocation interaction energy to the system’s thermal energy. Note that the latter originates from the use of the Boltzmann distribution in Eq. (\[E:Zdef\]) to describe the equilibrium configuration of systems with thermal noise. It was pointed out by Groma et al.[@GromGyorKocs06] that, in systems where dislocations are confined to their slip planes, the glide constraint acts as an effective temperature preventing the system from relaxing by means of dislocation annihilation. Seen in this light, the temperature $T$ in this theory is not the physical temperature but a fictive temperature associated with the disorder in dislocation distributions.[^5] As the dislocation configuration becomes more and more correlated, $\Gamma$ becomes smaller. For an explicit dependence on $\Gamma$ to use as an expansion coefficient, we rescale the distance by the square-root of the relative strength, $\sqrt{\Gamma}\,\vec{r} \mapsto \vec{r}$. Eq.
(\[E:BBGYKg\]) specialized to second order gives $$\begin{gathered} \label{E:gsecondorder} {\vec{\nabla}_{\!\vec{1}}\,}g^{(2)}(\vec{1},\vec{2}) = \Gamma\, g^{(2)}(\vec{1},\vec{2}) {\vec{\nabla}_{\!\vec{1}}\,}\psi(\vec{1},\vec{2}) \\ + \sum_{s=1}^r \int {\vec{\nabla}_{\!\vec{1}}\,}\psi(\vec{1},\vec{3}_s)\, \rho(\vec{3}_s) \\ \times\left[g^{(3)}(\vec{1},\vec{2},\vec{3}_s)- g^{(2)}(\vec{1},\vec{2})g^{(2)}(\vec{1},\vec{3}_s) \right] d^2\vec{3}_s\,.\end{gathered}$$ Here we have simplified the notation even further by suppressing all irrelevant subscripts: vectors $\vec{1}$ and $\vec{2}$ simply denote the positions of dislocations 1 and 2 with their corresponding species. The summation is taken over all $r$ species present in the system. We proceed by assuming that the correlation functions have the following forms: \[E:g23\] $$\begin{aligned} \begin{split}\label{E:g2} g^{(2)}(\vec{1},\vec{2}) &= 1 + \Gamma\,f^{(2)}(\vec{1},\vec{2})\,, \end{split} \\ \begin{split}\label{E:g3} g^{(3)}(\vec{1},\vec{2},\vec{3}) &= 1 + \Gamma\left[f^{(2)}(\vec{1},\vec{2}) + f^{(2)}(\vec{1},\vec{3}) + f^{(2)}(\vec{2},\vec{3}) \right] \\ &\qquad + \Gamma^2\, f^{(3)}(\vec{1},\vec{2},\vec{3})\,, \end{split}\end{aligned}$$ for any vectors $\vec{1}$, $\vec{2}$, and $\vec{3}$. The functions $f^{(2)}(\vec{1},\vec{2})$ and $f^{(3)}(\vec{1},\vec{2},\vec{3})$ should asymptotically vanish along the boundaries of the sample, or as $|\vec{1}-\vec{2}|, |\vec{1}-\vec{3}|, |\vec{2}-\vec{3}| \rightarrow \infty$ for an infinite system. Note in particular that $$\begin{gathered} \label{E:gsubtract} g^{(3)}(\vec{1},\vec{2},\vec{3})- g^{(2)}(\vec{1},\vec{2})g^{(2)}(\vec{1},\vec{3}) = \Gamma\,f^{(2)}(\vec{2},\vec{3}) \\ + \Gamma^2\!\left[f^{(3)}(\vec{1},\vec{2},\vec{3}) - f^{(2)}(\vec{1},\vec{2})f^{(2)}(\vec{1},\vec{3})\right]\!.\end{gathered}$$ So far no approximation has been made. Equation (\[E:gsecondorder\]), governing the second-order correlations, naturally involves the third-order correlations.
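The bookkeeping behind Eq. (\[E:gsubtract\]) is plain algebra in $\Gamma$ and can be verified mechanically:

```python
import sympy as sp

G, f12, f13, f23, f3 = sp.symbols('Gamma f_12 f_13 f_23 f_3')

# the ansatz for the pair and triplet correlation functions
g2_12 = 1 + G*f12
g2_13 = 1 + G*f13
g3 = 1 + G*(f12 + f13 + f23) + G**2*f3

# g3 - g2*g2 = Gamma*f23 + Gamma^2*(f3 - f12*f13)
lhs = g3 - g2_12*g2_13
rhs = G*f23 + G**2*(f3 - f12*f13)
assert sp.expand(lhs - rhs) == 0
```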
To systematically close the chain at second order, we substitute Eqs. (\[E:g23\]) and (\[E:gsubtract\]) into Eq. (\[E:gsecondorder\]) to produce a set of integro-differential equations for $f^{(2)}$ and $f^{(3)}$ at each power of $\Gamma$. This technique was introduced by Bogolyubov in the study of correlations in Coulomb interactions [@Bogo46] and has since been widely used in both the high-energy and condensed-matter communities, notably in renormalization group theory. The equation at order $\Gamma^0$ is an identity. After integrating out ${\vec{\nabla}_{\!\vec{1}}\,}$, which is possible because $f^{(2)}$ *and* $\psi$ vanish along a boundary, the equation at order $\Gamma$ becomes $$\label{E:BBGYKf} f^{(2)}(\vec{1},\vec{2}) = \psi(\vec{1},\vec{2}) + \sum_{s=1}^r \int \psi(\vec{1},\vec{3}_s) \rho(\vec{3}_s) f^{(2)}(\vec{2},\vec{3}_s)\, d^2\vec{3}_s\,.$$ This equation is the key result of the analysis. In the following sections, we shall use it to obtain dislocation pair correlation functions for systems with one (Sec. \[S:SingleSlip\]) and many (Sec. \[S:MultiSlip\]) active slip systems. Pair correlation functions for single slip {#S:SingleSlip} ========================================== To illustrate the use of Eq. (\[E:BBGYKf\]), we first apply it to the case of one slip system containing two species of dislocations (denoted $+$ and $-$). According to Eq. (\[E:IntEnergy\]), valid for an infinite sample, $\psi(\vec{1},\vec{2}) = \psi(\vec{1}-\vec{2}) = \psi(\vec{2}-\vec{1})$, which implies that $f^{ab}(\vec{1},\vec{2}) = f^{ab}(\vec{1}-\vec{2}) = f^{ab}(\vec{2}-\vec{1})$, where the superscripts label the charges of the two dislocations.
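Eq. (\[E:BBGYKf\]) has the structure of a Fredholm equation of the second kind, $f = \psi + K[f]$, which admits a Neumann-series (fixed-point) solution when the kernel is sufficiently small. A toy one-dimensional discretization illustrates this structure; the kernel here is an arbitrary short-ranged stand-in chosen to make the map contractive, not the dislocation kernel:

```python
import math

# Toy discretization of f = psi + K[f] on a 1-D grid; K is a made-up
# short-ranged kernel with row sums ~0.2 < 1, so iteration converges.
n, L = 64, 8.0
h = L / n
xs = [-L / 2 + (i + 0.5) * h for i in range(n)]
psi = [math.exp(-abs(x)) for x in xs]          # stand-in for psi(1,2)
K = [[0.1 * math.exp(-abs(xs[i] - xs[j])) * h for j in range(n)]
     for i in range(n)]

f = psi[:]                                     # zeroth iterate: f = psi
for _ in range(200):
    f = [psi[i] + sum(K[i][j] * f[j] for j in range(n)) for i in range(n)]

# The converged f satisfies the discrete integral equation.
resid = max(abs(f[i] - psi[i] - sum(K[i][j] * f[j] for j in range(n)))
            for i in range(n))
assert resid < 1e-10
```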
Without loss of generality, we can take the origin to be at $\vec{2}$ and thus, from (\[E:BBGYKf\]), we obtain the following set of integral equations: \[E:BBGYKoneslip\] $$\begin{aligned} \begin{split} {f^{\texttt{++}}}({\vec{r}}) &= \phantom{-}\psi_1({\vec{r}}) + \int d^2{\vec{r}\,'}\, \psi_1({\vec{r}}-{\vec{r}\,'}) \\ &\qquad\left[{\rho^{\texttt{+}}}({\vec{r}\,'}){f^{\texttt{++}}}({\vec{r}\,'}) - {\rho^{\texttt{-}}}({\vec{r}\,'}){f^{\texttt{+-}}}({\vec{r}\,'})\right] \label{E:fpp} \end{split}\\ \begin{split} {f^{\texttt{+-}}}({\vec{r}}) &= -\psi_1({\vec{r}}) - \int d^2{\vec{r}\,'}\, \psi_1({\vec{r}}-{\vec{r}\,'}) \\ &\qquad\left[{\rho^{\texttt{-}}}({\vec{r}\,'}){f^{\texttt{--}}}({\vec{r}\,'}) - {\rho^{\texttt{+}}}({\vec{r}\,'}){f^{\texttt{-+}}}({\vec{r}\,'})\right] \label{E:fpm} \end{split}\\ \begin{split} {f^{\texttt{--}}}({\vec{r}}) &= \phantom{-}\psi_1({\vec{r}}) + \int d^2{\vec{r}\,'}\, \psi_1({\vec{r}}-{\vec{r}\,'}) \\ &\qquad\left[{\rho^{\texttt{-}}}({\vec{r}\,'}){f^{\texttt{--}}}({\vec{r}\,'}) - {\rho^{\texttt{+}}}({\vec{r}\,'}){f^{\texttt{-+}}}({\vec{r}\,'})\right] \label{E:fmm} \end{split}\\ \begin{split} {f^{\texttt{-+}}}({\vec{r}}) &= -\psi_1({\vec{r}}) - \int d^2{\vec{r}\,'}\, \psi_1({\vec{r}}-{\vec{r}\,'}) \\ &\qquad\left[{\rho^{\texttt{+}}}({\vec{r}\,'}){f^{\texttt{++}}}({\vec{r}\,'}) - {\rho^{\texttt{-}}}({\vec{r}\,'}){f^{\texttt{+-}}}({\vec{r}\,'})\right] \label{E:fmp} \end{split}\end{aligned}$$ In the current context, Eq. (\[E:IntEnergy\]) reduces to $$\label{E:InteractionPotential} \psi_1(\vec{r}) = \psi^{\texttt{++}}(\vec{r}) = -\psi^{\texttt{+-}}(\vec{r}) = \ln(|\vec{r}|) + \frac{y^2}{|\vec{r}|^2}\,,$$ where we orient our $(x,y)$ coordinate system in such a way that the slip direction points along the ${x}$ direction. The minus signs in Eq. (\[E:BBGYKoneslip\]) arise from a sign difference in the interactions between plus–plus dislocations versus plus–minus dislocations as shown in Eq. (\[E:InteractionPotential\]). By comparing Eq. 
(\[E:fpp\]) against (\[E:fmp\]), and Eq. (\[E:fpm\]) against (\[E:fmm\]), we find that ${f^{\texttt{++}}}({\vec{r}}) = -{f^{\texttt{-+}}}({\vec{r}})$ and ${f^{\texttt{+-}}}({\vec{r}}) = -{f^{\texttt{--}}}({\vec{r}})$. These symmetries further imply that ${f^{\texttt{++}}}({\vec{r}}) = {f^{\texttt{--}}}({\vec{r}})$. Finally we obtain $$\begin{gathered} \label{E:intEqn} {f^{\texttt{++}}}({\vec{r}}) = \psi_1({\vec{r}}) \\ + \int \psi_1({\vec{r}}-{\vec{r}\,'}) {f^{\texttt{++}}}({\vec{r}\,'}) \left[{\rho^{\texttt{+}}}({\vec{r}\,'}) + {\rho^{\texttt{-}}}({\vec{r}\,'})\right] d^2{\vec{r}\,'}\,.\end{gathered}$$ Our general formulation in the previous section allows for spatial variation of an uncorrelated density $\rho({\vec{r}}_s)$. Without externally applied force, $\rho({\vec{r}}_s) = {\left\langle N_s \right\rangle}/A$ is constant in space. An analytical solution to Eq. \[E:intEqn\] can be obtained for constant ${\rho^{\texttt{+}}}$ and ${\rho^{\texttt{-}}}$. The dimensionless nature of the interaction potential $\psi_1$ suggests a change of variable $\sqrt{{\rho^{\texttt{+}}}+{\rho^{\texttt{-}}}}\,{\vec{r}}\mapsto {\vec{r}}$ (note that ${\rho^{\texttt{+}}}$ and ${\rho^{\texttt{-}}}$ are always positive). The resulting dimensionless integral equation $$\label{E:fppdimless} {f^{\texttt{++}}}({\vec{r}}) = \psi_1({\vec{r}}) + \int \psi_1({\vec{r}}-{\vec{r}\,'}) {f^{\texttt{++}}}({\vec{r}\,'}) d^2{\vec{r}\,'}$$ can be solved directly by applying $\Delta^2 \equiv (\partial_x^2 +\partial_y^2)^2$ on both sides of the equation and using the identity $$\Delta^2 \psi_1({\vec{r}}) = 2\pi\Delta\delta({\vec{r}}) +2\pi (\partial^2_y - \partial^2_x)\delta({\vec{r}}) = 4\pi\partial^2_y\delta({\vec{r}}).$$ Eq. 
(\[E:fppdimless\]) then becomes $$\Delta^2 {f^{\texttt{++}}} = 4\pi \partial^2_y\!\left[{f^{\texttt{++}}} + \delta({\vec{r}}) \right]\,,$$ whose explicit solution is $$\label{E:fsingleslip} {f^{\texttt{++}}} = \frac{y}{r}\sinh(\sqrt{\pi}y)K_1(\sqrt{\pi}r) - \cosh(\sqrt{\pi}y)K_0(\sqrt{\pi}r)\,,$$ with $K_0(\cdot)$ and $K_1(\cdot)$ the zeroth and first order modified Bessel functions of the second kind. With the aid of Eq. (\[E:g2\]), the correlation functions ${g^{(\texttt{++})}} = {g^{(\texttt{--})}}$ and ${g^{(\texttt{+-})}} = {g^{(\texttt{-+})}}$, correct to $\mathcal{O}(\Gamma^2)$, can be expressed in the original coordinates, \[E:goneanswer\] $$\begin{aligned} \begin{split} {g^{(\texttt{++})}}({\vec{r}}) = 1 + \Gamma\bigg[\frac{y}{r}\sinh(k_0 &y) K_1(k_0 r) \\ &- \cosh(k_0 y)K_0(k_0 r)\bigg], \label{E:goneanswer1} \end{split}\\ \begin{split} {g^{(\texttt{+-})}}({\vec{r}}) = 1 - \Gamma\bigg[\frac{y}{r}\sinh(k_0 &y) K_1(k_0 r) \\ &- \cosh(k_0 y)K_0(k_0 r)\bigg], \label{E:goneanswer2} \end{split}\end{aligned}$$ where $k_0 \equiv \sqrt{\pi \Gamma ({\rho^{\texttt{+}}}+{\rho^{\texttt{-}}})}$ gives an inverse “Debye radius” of the dislocation cloud. The third order correlation functions correct up to $\mathcal{O}(\Gamma^2)$ follow straightforwardly from Eq. (\[E:g3\]). The validity of Eq. (\[E:goneanswer\]) can be verified by comparing ${g^{(\texttt{++})}}({\vec{r}})-{g^{(\texttt{+-})}}({\vec{r}})$ with the dislocation difference, or GND, field $\kappa({\vec{r}})$ in Eq. (15) of Ref. . In this latter work the *same* expression is obtained for the induced GND due to a single pinned dislocation, which was interpreted by the authors as the pair correlation of dislocations in a relaxed system. 
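The solution (\[E:fsingleslip\]) can be evaluated with a pure-Python implementation of the modified Bessel functions via their integral representation $K_\nu(x) = \int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,dt$. The signs confirm the expected physics in the scaled coordinates: same-sign anti-correlation along the glide ($x$) direction and positive correlation along the wall ($y$) direction:

```python
import math

def bessel_k(nu, x, tmax=20.0, n=4000):
    """K_nu(x) by Simpson's rule on its integral representation."""
    h = tmax / n
    def integrand(t):
        return math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    s = integrand(0.0) + integrand(tmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return s * h / 3.0

def f_pp(x, y):
    """Single-slip pair correlation of Eq. (E:fsingleslip)."""
    r = math.hypot(x, y)
    sp = math.sqrt(math.pi)
    return (y / r) * math.sinh(sp * y) * bessel_k(1, sp * r) \
        - math.cosh(sp * y) * bessel_k(0, sp * r)

# Along the glide (x) direction same-sign dislocations anti-correlate...
assert f_pp(1.0, 0.0) < 0
# ...while along the wall (y) direction they correlate (dipole walls).
assert f_pp(0.0, 1.0) > 0
```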
It is interesting to note that the pair correlation functions depend only on the scaled space coordinate $\sqrt{\rho}\,{\vec{r}}$ ($\rho \equiv {\rho^{\texttt{+}}}+{\rho^{\texttt{-}}}$ being the total dislocation density) in agreement with the scaling argument given by Zaiser et al.[@ZaisMiguGrom01] This dependence also holds in the multiple-slip case to be discussed in the next section. Pair correlation functions for multiple slip {#S:MultiSlip} ============================================ The procedure to obtain the correlation functions for a system with multiple slips follows the same types of arguments and expansions as those for single slip. We shall further develop the integral equation (\[E:BBGYKf\]) for a system of $N$ slip systems, each with two charges, and subsequently give an explicit analytical solution for the pair correlation function in the case where the difference in slip orientation angle between adjacent slip planes is constant. For an $N$-slip system with both types of charges, we have $4N^2$ coupled integral equations for different pairs of $\vec{1}$ and $\vec{2}$ in Eq. (\[E:BBGYKf\]). To reduce the number of equations, and essentially decouple them, some symmetry arguments can be employed. For an infinite system, $${\psi^{\texttt{++}}}_{ij} = {\psi^{\texttt{--}}}_{ij} = -{\psi^{\texttt{+-}}}_{ij} = -{\psi^{\texttt{-+}}}_{ij}\,, \quad\text{and}\quad \psi^{ab}_{\,ij} = \psi^{ab}_{\,ji}\,,$$ where the superscripts denote the charges of the first and second dislocations, while the subscripts show the slip systems in which they live. Eq. 
(\[E:BBGYKf\]) can be re-cast using the convolution operator $\ast$ and the symmetry of $\psi^{ab}_{\,ij}$ as $$\label{E:ftemp} f^{ab}_{\,ij} = \psi^{ab}_{\,ij} + \sum_{k=1}^N \psi^{a\texttt{+}}_{ik}\ast\left[{\rho^{\texttt{+}}}_k f^{b\texttt{+}}_{jk} - {\rho^{\texttt{-}}}_k f^{b\texttt{-}}_{jk} \right].$$ By direct substitution of $+$ and $-$ into $a$ and $b$, it is immediate that ${f^{\texttt{++}}_{\,ij}}({\vec{r}}) = -{f^{\texttt{-+}}_{\,ij}}({\vec{r}})$ and ${f^{\texttt{--}}_{\,ij}}({\vec{r}}) = -{f^{\texttt{+-}}_{\,ij}}({\vec{r}})$, which further implies that $$\label{E:fsimplified} {f^{\texttt{++}}_{\,ij}} = {f^{\texttt{--}}_{\,ij}} = {\psi^{\texttt{++}}}_{ij} + \sum_{k=1}^N {\psi^{\texttt{++}}}_{ik}\ast\left[{\rho^{\texttt{+}}}_k {f^{\texttt{++}}_{\,\!jk}} + {\rho^{\texttt{-}}}_k {f^{\texttt{--}}_{\,\!jk}} \right].$$ With this, Eq. (\[E:ftemp\]) reduces to $$\label{E:fn} {f}_{ij} = \psi_{ij} + \sum_{k=1}^N \psi_{ik}\ast \big[\rho_k {f}_{jk}\big],$$ where the superscripts have been omitted and $\rho_k \equiv {\rho^{\texttt{+}}}_k+{\rho^{\texttt{-}}}_k$ is the total dislocation density of both types on slip $k$. We thus effectively reduce the number of coupled equations to $N^2$. Note also that because of $\psi_{ij} = \psi_{ji}$, there are only $N(N+1)/2$ independent $\psi_{ij}$’s. As seen from the single-slip case, Eq. (\[E:fn\]) subject to an arbitrary distribution of the local density $\rho_k({\vec{r}})$ cannot be solved analytically. For spatially independent $\rho_k$, however, these equations can be decoupled. Let $\lambda_k$ be the fraction of the total density $\rho$ carried by slip system $k$, i.e., $\rho_k = \lambda_k \rho$ where $\sum_{k=1}^N \lambda_k = 1$. We can then perform a change of variable $\sqrt{\rho}\,{\vec{r}}\mapsto {\vec{r}}$ to absorb the $\rho$-dependence. In addition, in Fourier space (indicated by a superposed $\sim$), a convolution becomes a product.
We can solve the Fourier-transform of (\[E:fn\]) for ${\widetilde{f}}_{ij}$ by essentially performing a matrix inversion on $$\label{E:fF} {\widetilde{\psi}}_{ij} = \sum_{m,n}(\delta_{im}\delta_{jn} - \lambda_n{\widetilde{\psi}}_{in}\delta_{jm}){\widetilde{f}}_{mn}\,.$$ The Fourier representation of $\psi_{ij}$ in Eq. (\[E:IntEnergy\]) can be expressed very simply in polar coordinates $(k,\phi_k)$, $$\label{E:psi} {\widetilde{\psi}}_{ij} = -\frac{4\pi}{k^2}\sin(\phi_k-\theta_i)\sin(\phi_k-\theta_j) = -\frac{4\pi}{k^4}\,({\hat{m}}_i\cdot{\vec{k}})({\hat{m}}_j\cdot{\vec{k}})$$ where $\theta_i$ is the angle that slip plane $i$ makes with the ${x}$ axis (which can be chosen arbitrarily, so that $\theta_i=i\pi/N$). Owing to the simple form of (\[E:psi\]), the solution to (\[E:fF\]) is[^6] $$\label{E:fsoln} {\widetilde{f}}_{ij} = \frac{{\widetilde{\psi}}_{ij}/\lambda_j}{1-\sum_n{\widetilde{\psi}}_{nn}}$$ where we have used $\sum_n{\widetilde{\psi}}_{in}{\widetilde{\psi}}_{nj} = {\widetilde{\psi}}_{ij}\sum_n{\widetilde{\psi}}_{nn}$. Eq. (\[E:fsoln\]) shall be used in the derivation of the evolution law for parallel edge dislocations in a multislip system in the next section. (a) ![(a) Discrete dislocation result and (b) theoretical prediction of the correlation function ${f^{\texttt{++}}_{\,12}}$ between plus dislocations on $60^\circ$ and $120^\circ$ slip systems. Values increase towards brighter regions. Coordinates are measured in units of $1/\sqrt{\rho}$. Dashed lines indicate the two slip directions where the plus-plus anti-correlation is underpredicted due to the glide constraint of the discrete dislocation simulations. The fitting parameter due to rescalings of length was found to be $k_0 \simeq 22\sqrt{\rho}$. 
[]{data-label="Fig:DensityPlot"}](DensityDDwithLines.pdf "fig:"){width="40.00000%"}\ (b) ![(a) Discrete dislocation result and (b) theoretical prediction of the correlation function ${f^{\texttt{++}}_{\,12}}$ between plus dislocations on $60^\circ$ and $120^\circ$ slip systems. Values increase towards brighter regions. Coordinates are measured in units of $1/\sqrt{\rho}$. Dashed lines indicate the two slip directions where the plus-plus anti-correlation is underpredicted due to the glide constraint of the discrete dislocation simulations. The fitting parameter due to rescalings of length was found to be $k_0 \simeq 22\sqrt{\rho}$. []{data-label="Fig:DensityPlot"}](DensityTheory.pdf "fig:"){width="40.00000%"}\ To verify that Eq. (\[E:fsoln\]) is applicable in glide-controlled systems, we consider an ensemble of 1500 relaxed configurations of 64 plus and 64 minus dislocations randomly placed on a 1 $\mu$m$^2$ square and restricted to move along their glide directions. The simulations were performed with periodic boundary conditions in the absence of thermal noise. The glide constraint helps prevent dislocation annihilation, and thus fixes the total number of dislocations and maintains the finite effective temperature. As an example, Fig. \[Fig:DensityPlot\] shows (b) the density plot of the theoretical correlation function ${f^{\texttt{++}}_{\,12}}$ between plus dislocations on $60^\circ$ and $120^\circ$ slip systems against (a) the simulation result. The erroneous oscillations in Fig. \[Fig:DensityPlot\](b) along $0^\circ$ and $90^\circ$ lines are caused by the numerical inverse Fourier transform operation of Eq. (\[E:fsoln\]). (The general closed-form solution of a double-slip pair correlation function does not exist for an arbitrary pair of slip orientation angles.) Overall, the theory gives accurate angular predictions except along the two slip directions where it underpredicts the same-sign anti-correlation due to the suppression of climb.
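The algebra that collapses the matrix inversion behind Eq. (\[E:fsoln\]) rests on the separable form of ${\widetilde{\psi}}_{ij}$ in Eq. (\[E:psi\]); both of its consequences can be spot-checked numerically:

```python
import math

# psi~_ij = -(4 pi/k^2) sin(phi - th_i) sin(phi - th_j), Eq. (E:psi),
# for N equally spaced slip orientations th_n = n*pi/N.
def psit_matrix(N, k, phi):
    th = [n * math.pi / N for n in range(1, N + 1)]
    A = 4 * math.pi / k ** 2
    return [[-A * math.sin(phi - th[i]) * math.sin(phi - th[j])
             for j in range(N)] for i in range(N)]

N, k = 3, 1.3
p1 = psit_matrix(N, k, 0.77)
p2 = psit_matrix(N, k, 2.31)

# (i) The trace (hence the denominator of f~_ij) is independent of the
# direction of k: sum_n sin^2(phi - n*pi/N) = N/2 for N > 1.
tr1 = sum(p1[n][n] for n in range(N))
tr2 = sum(p2[n][n] for n in range(N))
assert abs(tr1 - tr2) < 1e-9
assert abs(tr1 + (4 * math.pi / k ** 2) * N / 2) < 1e-9

# (ii) The separable form makes matrix products collapse:
# sum_n psi~_in psi~_nj = psi~_ij * trace, which is what reduces the
# inversion of Eq. (E:fF) to a scalar denominator.
for i in range(N):
    for j in range(N):
        prod = sum(p1[i][n] * p1[n][j] for n in range(N))
        assert abs(prod - p1[i][j] * tr1) < 1e-9
```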
The plot of the correlation function along the ${\hat{x}}$ axis is shown in Fig. \[Fig:XSection\]. Very close to the origin, the function diverges logarithmically as does the unscreened potential. About one dislocation spacing from the core, the correlation function decays as $1/x^2$. ![Cross-sectional plot of the data points versus theoretical curves of the pair correlation function (Fig. \[Fig:DensityPlot\]) along ${\hat{x}}$ axis. After a short distance away from the core, the function has a power law decay of $1/x^2$ as shown with the dashed line in the log-log plot in the inset.[]{data-label="Fig:XSection"}](XSection.pdf){width="47.00000%"} The real-space solution to Eq. (\[E:fsoln\]) is possible if we assume that *the angle between each adjacent pair of slip planes is constant*. For any $N \in \mathbf{Z}^+$ and $N>1$, $$\sum_{n=1}^N \sin^2\!\left(\phi_k-\frac{n\pi}{N}\right) = \frac{N}{2}\,,$$ regardless of $\phi_k$. With the above identity, the denominator of ${\widetilde{f}}_{ij}$ becomes angular independent and can be integrated directly. The final result, with $$k_0 \equiv \sqrt{2\pi N\Gamma\rho}\,,$$ reads $$\begin{gathered} \label{E:ffinal} {f}_{ij}(r,\phi)\! = \frac{-2}{\lambda_j}\bigg\{\frac{\cos(2\phi-\theta_i-\theta_j)}{(k_0 r)^2} - \frac{\cos(\theta_i-\theta_j)}{k_0 r} K_1(k_0 r) \\ -\sin(\phi-\theta_i)\sin(\phi-\theta_j)\,K_2(k_0 r)\bigg\}.\end{gathered}$$ At large distances, the first term dominates and the pair correlation decays like $1/r^2$ (except along the directions where the argument of the cosine is $\pi/2$, $3\pi/2$, etc.). Compared to the single slip case (Eq. \[E:fsingleslip\]) where the pair correlation diminishes exponentially (except along the dislocation wall direction), the presence of extra slip(s) suppresses the Debye screening. It should be noted that $-{f}_{ij}({\vec{r}})$ can be thought of as the effective interaction potential due to screening. 
More precisely, ${\vec{F}}^{\rm PK} \sim {\vec{\nabla}} {f}_{ij}$ is the Peach–Koehler force felt by a positive dislocation on slip system $i$ due to the induced screening of dislocations on slip system $j$. It has been shown[@GromGyorKocs06] that, for a single-slip system, the attractive parabolic potential in the glide direction (taken to be along ${\hat{x}}$) falls off with a prefactor of $1/|y|^{5/2}$ along the wall direction. Series expansion of Eq. (\[E:ffinal\]) in $\phi$ about $\theta_i$ and $\theta_j$ reveals that, for a multiple-slip system, the prefactor of the parabolic potential about the glide directions decays as $1/r^2$, slightly more slowly than in the single-slip case. This may explain why more than one slip system must be included to observe the formation of cell walls and grain boundaries in two-dimensional discrete dislocation simulations prohibiting climb motion.[@BenzBrecNeed04; @BenzBrecNeed05; @FourSala96; @GomeDeviKubi06; @GromBako00; @GromPawl93PMA; @GromPawl93MSEA; @GullHart93] The analysis also confirms the “directional long-range order” of two-dimensional crystals as rigorously proven by Mermin.[@Merm68] Derivation of a multiple-slip evolution law {#S:EvolutionLaw} =========================================== To arrive at a set of transport equations for an ensemble of multiple-slip dislocation systems, we extend the treatments of Groma et al. in Ref. , , and . The evolution equations for the uncorrelated single-dislocation densities on slip system $i$ read: \[E:drhodt\] $$\begin{aligned} \begin{split} \label{E:drhodt1} \partial_t\rho^\texttt{+}_i&({{\vec{r}}_i},t) = -( {\vec{b}}_i\cdot{\vec{\nabla}})\!
\Bigg[ \!\!+\rho^\texttt{+}_i({{\vec{r}}_i},t) \tau_i^{\text{ext}} \\ &+ \sum_j \int d^2{{\vec{r}}_j}\, \left( {\rho_{ij}^{\texttt{++}}}({{\vec{r}}_i},{{\vec{r}}_j},t) - {\rho_{ij}^{\texttt{+-}}}({{\vec{r}}_i},{{\vec{r}}_j},t) \right) \tau_{ij}^{\text{ind}} \Bigg], \end{split}\\ \begin{split} \label{E:drhodt2} \partial_t\rho^\texttt{-}_i&({{\vec{r}}_i},t) = - ( {\vec{b}}_i\cdot{\vec{\nabla}})\! \Bigg[ \!\!-\rho^\texttt{-}_i({{\vec{r}}_i},t) \tau_i^{\text{ext}} \\ &+ \sum_j \int d^2{{\vec{r}}_j}\, \left( {\rho_{ij}^{\texttt{--}}}({{\vec{r}}_i},{{\vec{r}}_j},t) - {\rho_{ij}^{\texttt{-+}}}({{\vec{r}}_i},{{\vec{r}}_j},t) \right) \tau_{ij}^{\text{ind}} \Bigg], \end{split}\end{aligned}$$ where the dislocation mobility has been absorbed into the rescaling of time $t$. With the assumption that all dislocations have the same magnitude $b$, the Burgers vector can be written as ${\vec{b}}_i=b {\hat{s}}_i$ (${\hat{s}}_i$ and ${\hat{m}}_i$ respectively are the slip direction and slip plane normal direction of slip system $i$). $\tau_{ij}^{\text{ind}}({{\vec{r}}_i}-{{\vec{r}}_j})$ is the resolved shear stress exerted on a dislocation at ${{\vec{r}}_i}$ on slip $i$ by a dislocation at ${{\vec{r}}_j}$ on slip $j$, and can be written as $$\label{E:tau} \tau_{ij}^{\text{ind}}({\vec{r}}) = {\hat{s}}_i\cdot{\boldsymbol{\sigma}}_j\cdot{\hat{m}}_i = G\,b\,({\hat{s}}_i\cdot{\vec{\nabla}})({\hat{m}}_i\cdot{\vec{\nabla}})({\hat{m}}_j\cdot{\vec{\nabla}}) \! \left[ r^2\ln r \right]\!.$$ Here, $G \equiv \mu /(2\pi(1-\nu)) = E/(4\pi(1-\nu^2))$, where $E$, $\mu$, $\nu$ are the Young’s modulus, shear modulus, and Poisson ratio respectively. Addition and subtraction of Eqs. 
(\[E:drhodt1\]) and (\[E:drhodt2\]) give the evolution equations for the total dislocation density $\rho_i \equiv \rho_i^\texttt{+} + \rho_i^\texttt{-}$ and the GND density $\kappa_i \equiv \rho_i^\texttt{+} - \rho_i^\texttt{-}$: \[E:rho\_kappa\_evol\] $$\begin{aligned} \begin{split} \partial_t\rho_i &= - ({\vec{b}}_i\cdot{\vec{\nabla}})\! \Bigg[ \kappa_i\tau_i^{\text{ext}} \\ &+ \sum_j \int d^2{{\vec{r}}_j}\underbrace{ \left({\rho_{ij}^{\texttt{++}}} + {\rho_{ij}^{\texttt{--}}} - {\rho_{ij}^{\texttt{+-}}} - {\rho_{ij}^{\texttt{-+}}} \right)}_{\equiv \kappa_{ij}^{(2)}({{\vec{r}}_i},{{\vec{r}}_j},t)} \tau_{ij}^{\text{ind}} \Bigg] \label{E:rho_evol} \end{split}\\ \begin{split} \partial_t\kappa_i &= - ({\vec{b}}_i\cdot{\vec{\nabla}})\! \Bigg[ \rho_i\tau_i^{\text{ext}} \\ &+ \sum_j \int d^2{{\vec{r}}_j}\underbrace{ \left({\rho_{ij}^{\texttt{++}}} - {\rho_{ij}^{\texttt{--}}} - {\rho_{ij}^{\texttt{+-}}} + {\rho_{ij}^{\texttt{-+}}} \right)}_{\equiv \rho_{ij}^{(2)}({{\vec{r}}_i},{{\vec{r}}_j},t)} \tau_{ij}^{\text{ind}} \Bigg] \label{E:kappa_evol} \end{split}\end{aligned}$$ In accordance with (\[E:g\]), the dislocation–dislocation density can be written as $$\label{E:rho_rho} \begin{split} \rho^{ss'}_{ij} &= \rho_i^s({{\vec{r}}_i})\rho_j^{s'}({{\vec{r}}_j})g_{ij}^{ss'}({{\vec{r}}_i}-{{\vec{r}}_j}) \\ &= \rho_i^s({{\vec{r}}_i})\rho_j^{s'}({{\vec{r}}_j})(1 + d_{ij}^{ss'}({{\vec{r}}_i}-{{\vec{r}}_j})) \,, \end{split}$$ where $s,s'\in \{+,-\}$ and, according to (\[E:g2\]), $d^{ss'}_{ij} = \Gamma f^{ss'}_{\,ij}$. 
In terms of the single-dislocation densities and the pair correlation functions, the pair densities $\rho_{ij}^{(2)}$ and $\kappa_{ij}^{(2)}$ become \[E:rho2\_kappa2\] $$\begin{aligned} \begin{split} \rho_{ij}^{(2)} =\, &\rho_i({{\vec{r}}_i})\kappa_j({{\vec{r}}_j}) + \frac{1}{2} \Big\{-\rho_i({{\vec{r}}_i})\rho_j({{\vec{r}}_j}) d^a_{ij} \\ &+ \rho_i({{\vec{r}}_i})\kappa_j({{\vec{r}}_j})[d^p_{ij} + d^s_{ij}] \\ &+\kappa_i({{\vec{r}}_i})\rho_j({{\vec{r}}_j})[d^p_{ij}-d^s_{ij}] + \kappa_i({{\vec{r}}_i})\kappa_j({{\vec{r}}_j})d^a_{ij}\Big\}, \end{split}\\ \begin{split} \kappa_{ij}^{(2)} =\, &\kappa_i({{\vec{r}}_i})\kappa_j({{\vec{r}}_j}) + \frac{1}{2} \Big\{ \rho_i({{\vec{r}}_i})\rho_j({{\vec{r}}_j}) [ d^p_{ij} - d^s_{ij}] \\ &+ \rho_i({{\vec{r}}_i})\kappa_j({{\vec{r}}_j})d^a_{ij} -\kappa_i({{\vec{r}}_i})\rho_j({{\vec{r}}_j})d^a_{ij} \\ &\qquad+ \kappa_i({{\vec{r}}_i})\kappa_j({{\vec{r}}_j})[d^p_{ij}+d^s_{ij}]\Big\}, \end{split}\end{aligned}$$ where $d^p_{ij} = {d_{ij}^{\texttt{++}}}$, $d^s_{ij} = (1/2)({d_{ij}^{\texttt{+-}}} + {d_{ij}^{\texttt{-+}}})$, and $d^a_{ij} = (1/2)({d_{ij}^{\texttt{+-}}} - {d_{ij}^{\texttt{-+}}})$. After substitution of Eqs. (\[E:rho\_rho\])–(\[E:rho2\_kappa2\]), Eq. (\[E:rho\_kappa\_evol\]) becomes \[E:rho\_kappa\_evol2\] $$\begin{aligned} \partial_t\rho_i &= - ({\vec{b}}_i\cdot{\vec{\nabla}})\! \left[ \kappa_i(\tau_i^{\text{ext}} + \tau_i^{\text{sc}} - \tau_i^\text{f} - \tau_i^\text{b}) + \rho_i\tau_i^\text{a} \right], \label{E:rho_evol2} \\ \partial_t\kappa_i &= - ({\vec{b}}_i\cdot{\vec{\nabla}})\!
\left[ \rho_i(\tau_i^{\text{ext}} + \tau_i^{\text{sc}} - \tau_i^\text{f} - \tau_i^\text{b}) + \kappa_i\tau_i^\text{a} \right], \label{E:kappa_evol2}\end{aligned}$$ in which $$\begin{aligned} \tau_i^\text{sc} &= \sum_j \int \kappa_j({{\vec{r}}_j})\tau_{ij}^{\text{ind}}({{\vec{r}}_i}-{{\vec{r}}_j})\, d^2{{\vec{r}}_j}, \\ \tau_i^\text{b} &= -\frac{1}{2} \sum_j \int \kappa_j({{\vec{r}}_j})d^t_{ij} \tau_{ij}^{\text{ind}}({{\vec{r}}_i}-{{\vec{r}}_j})\, d^2{{\vec{r}}_j}, \label{E:tau_b1}\\ \tau_i^\text{f} &= \frac{1}{2} \sum_j \int \rho_j({{\vec{r}}_j})d^a_{ij} \tau_{ij}^{\text{ind}}({{\vec{r}}_i}-{{\vec{r}}_j})\, d^2{{\vec{r}}_j}, \label{E:tau_f} \\ \begin{split} \tau_i^\text{a} &= \frac{1}{2}\sum_j \int \rho_j({{\vec{r}}_j})[d^p_{ij} - d^s_{ij}]\tau_{ij}^{\text{ind}}({{\vec{r}}_i}-{{\vec{r}}_j}) \\ &\qquad\qquad\qquad+ \kappa_j({{\vec{r}}_j})d^a_{ij} \tau_{ij}^{\text{ind}}({{\vec{r}}_i}-{{\vec{r}}_j}) \, d^2{{\vec{r}}_j}. \label{E:tau_a} \end{split}\end{aligned}$$ The term $d^t_{ij} \equiv (1/4)({d_{ij}^{\texttt{++}}} + {d_{ij}^{\texttt{--}}} + {d_{ij}^{\texttt{+-}}} + {d_{ij}^{\texttt{-+}}})$ in (\[E:tau\_b1\]) involves averaging over pairs of correlation functions. Terms involving $\tau^\text{a}_{i}$ in Eq. (\[E:rho\_kappa\_evol2\]) can be cast away by going into a “co-moving” frame of $\rho_i$ and $\kappa_i$ respectively. Although ${d_{ij}^{\texttt{++}}} = {d_{ij}^{\texttt{--}}} = -{d_{ij}^{\texttt{+-}}} = -{d_{ij}^{\texttt{-+}}}$ and hence $d^t_{ij}$ should vanish by definition, this is hardly the case when, e.g., the system is strained through external loading. Only one of these correlation functions dominates locally, resulting in a nonzero $d^t_{ij}$. Similarly the contribution from flow stress, $\tau^\text{f}_{i}$, is greatest in regions with equal population of plus and minus dislocations; in most regions, its effect is negligible. We shall therefore focus only on the contribution from back stress $\tau_i^\text{b}$. 
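The decomposition (\[E:rho2\_kappa2\]) and the stress groupings above can be verified directly against the defining combinations of $\rho^{ss'}_{ij}$ with random stand-in values; only the symmetry $d^{\texttt{++}}_{ij}=d^{\texttt{--}}_{ij}$ is imposed. Note that the leading term of $\rho_{ij}^{(2)}$ works out to $\rho_i\kappa_j$, consistent with the $\rho_i\tau_i^{\text{sc}}$ term in the $\kappa_i$ evolution equation:

```python
import random

random.seed(2)

# Random single-dislocation densities and correlation corrections.
rp_i, rm_i, rp_j, rm_j = (random.uniform(0.1, 1.0) for _ in range(4))
dpp = random.uniform(-0.3, 0.3)
dmm = dpp                                  # d++ = d--
dpm, dmp = (random.uniform(-0.3, 0.3) for _ in range(2))

rho_i, kap_i = rp_i + rm_i, rp_i - rm_i
rho_j, kap_j = rp_j + rm_j, rp_j - rm_j
dp = dpp
ds = 0.5 * (dpm + dmp)
da = 0.5 * (dpm - dmp)

def rr(si, sj, d):                         # rho_i^s rho_j^s' (1 + d^{ss'})
    return si * sj * (1 + d)

kap2 = rr(rp_i, rp_j, dpp) + rr(rm_i, rm_j, dmm) \
     - rr(rp_i, rm_j, dpm) - rr(rm_i, rp_j, dmp)
rho2 = rr(rp_i, rp_j, dpp) - rr(rm_i, rm_j, dmm) \
     - rr(rp_i, rm_j, dpm) + rr(rm_i, rp_j, dmp)

kap2_formula = kap_i * kap_j + 0.5 * (rho_i * rho_j * (dp - ds)
    + rho_i * kap_j * da - kap_i * rho_j * da
    + kap_i * kap_j * (dp + ds))
rho2_formula = rho_i * kap_j + 0.5 * (-rho_i * rho_j * da
    + rho_i * kap_j * (dp + ds) + kap_i * rho_j * (dp - ds)
    + kap_i * kap_j * da)

assert abs(kap2 - kap2_formula) < 1e-12
assert abs(rho2 - rho2_formula) < 1e-12
```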
The validity of this assumption is supported by the success of the recent single-slip theory.[@YefiGromGies04; @YefiGromGies04b] Although $d^t_{ij}({\vec{r}})$ is long-range, the magnitude of the back stress $\tau_i^\text{b}$ is still considerably smaller than that of the self-consistent internal stress $\tau_i^\text{sc}$ when $r$ is large compared with the mean dislocation spacing. We are therefore interested in the contribution of $d^t_{ij}({\vec{r}})$ to the stress only at short distances where its effect is much more pronounced. Considering a dislocation at ${{\vec{r}}_i}$, we can Taylor expand $\kappa_j({{\vec{r}}_j})$ about this point, $\kappa_j({{\vec{r}}_j}) \simeq \kappa_j({{\vec{r}}_i}) + ({{\vec{r}}_j}-{{\vec{r}}_i})\cdot{\vec{\nabla}} \kappa_j\Big|_{{{\vec{r}}_i}} +$ higher-order terms. Because $d^t_{ij}({\vec{r}})$ is symmetric while $\tau_{ij}^{\text{ind}}({\vec{r}})$ is anti-symmetric under ${\vec{r}}\mapsto -{\vec{r}}$, the first term in the expansion vanishes. We then make a change of variable to the scaled coordinate $\sqrt{\rho}\,{\vec{r}}\mapsto \vec{x}$, where $\rho$ represents the mean total dislocation density of the system. To second order this yields $$\label{E:tau_b} \tau_i^\text{b}({{\vec{r}}_i}) = \sum_{j=1}^N \frac{{\vec{\nabla}}\kappa_j}{\rho} \cdot \int {\vec{x}}\, d^t_{ij}({\vec{x}})\tau_{ij}^{\text{ind}}({\vec{x}}) d^2{\vec{x}}\,.$$ Using the Fourier transform expression of $d^t_{ij}$, the integral in Eq.
(\[E:tau\_b\]) can be evaluated directly using Parseval’s theorem: $$\label{E:Iint} {\vec{I}}_{ij} \equiv \int {\vec{x}}\, d^t_{ij}({\vec{x}},{\theta})\tau_{ij}^{\text{ind}}({\vec{x}}) d^2{\vec{x}} = \int {\widetilde{d}^t_{ij}}({\vec{k}})\,\mathcal{F}\!\left[{\vec{x}}\,\tau_{ij}^\text{ind}\right]\![{\vec{k}}]\,d^2{\vec{k}}$$ The Fourier transform of ${\vec{x}}\,\tau_{ij}^\text{ind}$ can be computed directly from (\[E:tau\]): $$\label{E:tauF} \begin{split} \mathcal{F}\!\left[{\vec{x}}\,\tau_{ij}^\text{ind}\right]\![{\vec{k}}] &= -4\pi G\,b\, {\vec{\nabla}_{\!\vec{k}}}\!\left[\frac{ ({\hat{s}}_i\cdot{\vec{k}})({\hat{m}}_i\cdot{\vec{k}})({\hat{m}}_j\cdot{\vec{k}})}{k^4}\right] \\ &= -G\,b{\vec{\nabla}_{\!\vec{k}}}\!\left[({\hat{s}}_i\cdot{\vec{k}}){\widetilde{\psi}}_{ij} \right] \end{split}$$ Owing to the connection $d^t_{ij}({\vec{x}}) = \Gamma\,{f}_{ij}({\vec{x}})$, Eq. (\[E:Iint\]) becomes, from (\[E:fsoln\]) and (\[E:tauF\]), $$\label{E:Imidstep} {\vec{I}}_{ij} = \frac{\Gamma^2 G\,b}{\lambda_j} \int \frac{{\widetilde{\psi}}_{ij} {\vec{\nabla}_{\!\vec{k}}}\!\left[({\hat{s}}_i\cdot{\vec{k}}){\widetilde{\psi}}_{ij} \right]}{1-\sum_n {\widetilde{\psi}}_{nn}}\, d^2{\vec{k}}.$$ The vector ${\vec{I}}_{ij}$ is most conveniently expressed in the coordinate system of slip $j$. Substitution of Eq. (\[E:psi\]) into Eq. (\[E:Imidstep\]), while projecting ${\hat{s}}_i$ and ${\hat{m}}_i$ onto $({\hat{s}}_j,{\hat{m}}_j)$, gives $$\begin{gathered} {\vec{I}}_{ij} = (4\pi)^2\frac{\Gamma^2 G\,b}{\lambda_j} \Bigg\{{\hat{s}}_j \int_0^{2\pi} \int_\epsilon^\infty \frac{-1}{k} \frac{\sin^2(\phi_k)\sin(\phi_k+\theta_{ij})\sin(3\phi_k+2\theta_{ij})}{k^2+4\pi\sum_n \sin^2(\phi_k-\theta_n)}\, dk\,d\phi_k\\ +{\hat{m}}_j \int_0^{2\pi}\int_\epsilon^\infty \frac{1}{2k}\frac{\sin(\phi_k)\sin(\phi_k+\theta_{ij})\sin(4\phi_k+2\theta_{ij})}{k^2+4\pi\sum_n \sin^2(\phi_k-\theta_n)}\, dk\,d\phi_k \Bigg\},\end{gathered}$$ where $\theta_{ij} = (j-i)\pi/N$ is the angle between slip planes $i$ and $j$. 
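When the angular sum in the denominator is isotropic, as it is for equally spaced slip planes, the $\phi_k$ integrals above reduce to their trigonometric numerators. A quadrature check confirms that the ${\hat{m}}_j$ component vanishes identically and the ${\hat{s}}_j$ component is proportional to $\cos\theta_{ij}$:

```python
import math

def simpson(g, a, b, n=4000):
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0

for theta in (0.0, math.pi / 6, math.pi / 3, 2.5):
    # m-hat_j component: angular factor of the second integral in I_ij.
    im = simpson(lambda p: math.sin(p) * math.sin(p + theta)
                 * math.sin(4 * p + 2 * theta), 0.0, 2 * math.pi)
    # s-hat_j component: angular factor of the first integral.
    is_ = simpson(lambda p: math.sin(p) ** 2 * math.sin(p + theta)
                  * math.sin(3 * p + 2 * theta), 0.0, 2 * math.pi)
    assert abs(im) < 1e-6                          # no m-hat_j component
    assert abs(is_ + (math.pi / 4) * math.cos(theta)) < 1e-6
```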
We impose a cut-off $\epsilon$ at small $k$ to prevent the logarithmic divergence due to the long-range nature of the pair correlation functions. Under the assumption of equally spaced slip orientations, as in the previous section, the above integrals can be carried out straightforwardly, giving $$\vec{I}_{ij} = \frac{GD\,b}{\lambda_j}\cos(\theta_{ij}){\hat{s}}_j$$ where $D = 2\pi^2\Gamma^2|\ln\epsilon|/N$ serves as a fitting parameter. The factor $\lambda_j$ combines with the $\rho$ in the denominator of Eq. (\[E:tau\_b\]) to form $\rho_j = \lambda_j \rho$. On physical grounds, we replace $\rho_j$ with the local density $\rho_j({\vec{r}})$. In the previous sections, we calculated the pair correlation functions of an ensemble of *spatially constant* single-dislocation densities in thermal equilibrium. When the distributions of single-dislocation densities are non-uniform in space, as is the case for systems out of equilibrium, the back stress response should depend on how much the densities vary locally. The final result is remarkably simple: $$\label{E:taub} \tau_i^\text{b}({\vec{r}}) = GD \sum_{j=1}^N \cos(\theta_{ij}) \frac{({\vec{b}}_j\cdot{\vec{\nabla}})\kappa_j({\vec{r}})}{\rho_j({\vec{r}})}$$ The above form for the back stress reduces to the single-slip theory of Groma et al.[@GromCsikZais03; @YefiGromGies04; @YefiGromGies04b; @YefiGies05; @YefiGies05b] The $\cos(\theta_{ij})$ coupling between slip systems should come as no surprise. The angular dependence of the back stress must emerge from the symmetry of the potential. The angular average of $\psi_{ij}$ in Eq. (\[E:IntEnergy\]) selects out $\cos(\theta_{ij})$ as the only possibility.
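As a sketch of how Eq. (\[E:taub\]) would be evaluated numerically, consider three equally spaced slip systems with smooth made-up density fields; $G$, $D$, $b$ and the fields below are illustrative stand-ins, not fitted values:

```python
import math

# Toy evaluation of the multislip back stress for N = 3 equally
# spaced slip systems; all fields and constants are made up.
G, D, b, N = 1.0, 0.1, 1.0, 3

def kappa(j, x, y):                 # made-up GND field on slip j
    return 0.2 * math.sin(x + 0.3 * j) * math.cos(y - 0.1 * j)

def rho(j, x, y):                   # made-up (positive) total density
    return 1.0 + 0.1 * j

def tau_b(i, x, y, h=1e-5):
    tot = 0.0
    for j in range(N):
        th_i, th_j = (i + 1) * math.pi / N, (j + 1) * math.pi / N
        bx, by = b * math.cos(th_j), b * math.sin(th_j)   # b_j = b s_j
        # directional derivative (b_j . grad) kappa_j, central differences
        ddir = (bx * (kappa(j, x + h, y) - kappa(j, x - h, y)) / (2 * h)
                + by * (kappa(j, x, y + h) - kappa(j, x, y - h)) / (2 * h))
        tot += G * D * math.cos(th_i - th_j) * ddir / rho(j, x, y)
    return tot

val = tau_b(0, 0.4, -0.2)
```

For $N=1$ the $\cos(\theta_{ij})$ factor is unity and the sum collapses to the single-slip back stress of Groma et al.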
It is interesting to note that the same coupling also appears in the strain gradient theory for continuum crystal plasticity by Gurtin.[@Gurt00; @Gurt02; @Gurt03] Comparison with the earlier multislip plasticity theory {#S:Comparison} ======================================================= Recently, Yefimov et al.[@YefiGies05; @YefiGies05b] have proposed an extension of their single-slip continuum plasticity theory[@YefiGromGies04; @YefiGromGies04b] to incorporate systems with more than one slip. In their theory, each slip system $j$ contributes some amount of back stress, given in our notation by $$\tau_j^\text{b}({\vec{r}}) = GD \frac{({\vec{b}}_j\cdot{\vec{\nabla}})\kappa_j({\vec{r}})}{\rho_j({\vec{r}})}$$ to the total back stress of slip system $i$ according to $$\tau_i^\text{tot} = \sum_{j=1}^N S_{ij}\tau_j^\text{b}$$ with slip-orientation dependent weight factor $S_{ij}$ acting as a projection matrix. For symmetry reasons, three variations were postulated:[@YefiGies05; @YefiGies05b] $$\begin{aligned} S^1_{ij} &= ({\hat{m}}_i\cdot{\hat{m}}_j)({\hat{s}}_i\cdot{\hat{s}}_j) = \cos^2(\theta_{ij}) \\ S^2_{ij} &= {\hat{m}}_i\cdot({\hat{s}}_j\otimes{\hat{m}}_j + {\hat{m}}_j\otimes{\hat{s}}_j)\cdot{\hat{s}}_i = \cos(2\theta_{ij}) \label{E:whattheychose} \\ S^3_{ij} &= {\hat{s}}_i\cdot{\hat{s}}_j = \cos(\theta_{ij}) \label{E:sameasours}\end{aligned}$$ Note that the third possibility (\[E:sameasours\]) is consistent with the expression for the back stress we have derived in (\[E:taub\]). To select among these choices, Yefimov et al. successively used all three laws to numerically analyze the problem of simple shearing of a crystalline strip containing two slip systems with impenetrable walls.[@YefiGies05] The results of each case were compared against those from the discrete dislocation simulations of Shu et al.[@ShuFlecGiesNeed01] The best match was achieved with Eq. (\[E:whattheychose\]). Other choices underpredicted the amount of plastic strain.
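The three postulated weight factors follow from elementary two-dimensional vector algebra; a quick check, building $\hat{s}$ and $\hat{m}$ from the orientation angles:

```python
import math

def hats(theta):
    """Slip direction and slip-plane normal for orientation theta."""
    return (math.cos(theta), math.sin(theta)), (-math.sin(theta), math.cos(theta))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for ti, tj in [(0.2, 1.4), (0.0, math.pi / 3), (0.9, 2.8)]:
    si, mi = hats(ti)
    sj, mj = hats(tj)
    t = tj - ti
    S1 = dot(mi, mj) * dot(si, sj)
    # m_i . (s_j (x) m_j + m_j (x) s_j) . s_i
    S2 = dot(mi, sj) * dot(mj, si) + dot(mi, mj) * dot(sj, si)
    S3 = dot(si, sj)
    assert abs(S1 - math.cos(t) ** 2) < 1e-12
    assert abs(S2 - math.cos(2 * t)) < 1e-12
    assert abs(S3 - math.cos(t)) < 1e-12
```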
The chosen interaction law was then tested against the problem of bending of a single crystal strip with satisfactory agreement with discrete dislocation results of Cleveringa et al.[@ClevGiesNeed99] We believe that the success of their continuum theory in the shearing problem despite the incorrect choice of interaction law is due to a different reason. The amount of plastic strain is controlled by (i) the fitting parameter $D$ and (ii) the number density of nucleation sites in the film. By adjusting these values, different interaction laws could be altered to obtain the desired fit. In their analysis, Yefimov et al. used the value of $D$ from their previous single-slip theory[@YefiGromGies04] without any readjustment. There is no a priori reason why this value should stay unaltered. The density of nucleation sources in their continuum theory were chosen to match that in the discrete dislocation simulations. The discrepancy could also arise from different ways in which the discrete dislocation theory and the continuum theory handle dislocation nucleation. In a later publication, Yefimov et al. applied their formalism to the problem of stress relaxation in single-crystal thin films on substrates subjected to thermal loading.[@YefiGies05b] Due to the difference in thermal expansion coefficients between film and substrate, high tensile stresses can develop in the films as the temperature decreases. Contrary to the discrete dislocation simulations by Nicola et al.[@NicoGiesNeed03; @NicoGiesNeed05] which show increasing stress built up inside a film with decreasing film thickness, the results from the continuum theory show a size-dependent hardening only during the early stage of cooling. Moreover, the theory gives identical results between some pair of slip orientations (e.g. when the angle between the two slip planes $\theta_{12}$ is either $60^\circ$ or $120^\circ$), whereas the discrete dislocation simulations and our new theory predict otherwise. 
Finally, in the previous continuum theory,[@YefiGies05] dislocations nucleate when the sum of the external stress $\tau^\text{ext}$, the self-consistent long-range stress $\tau^\text{sc}$, and the back stress $\tau^\text{b}$ exceeds a certain value. From our analysis, we believe that, in a more correct treatment of dislocation nucleation, this back stress should be supplemented by the flow stress $\tau^\text{f}$ (Eq. (\[E:tau\_f\])), which is dominant in a nucleation region where plus and minus dislocations are equally populated. Applications of the current theory to the shearing problem and to the thin-film problem, which exhibits size-dependent hardening, will appear shortly after this publication. Discussion and conclusions ========================== We have described $n^\text{th}$-order dislocation densities and dislocation pair correlation functions in a grand canonical ensemble and obtained the relationships between different orders of the correlation functions in the form of a hierarchy of integral equations. Using the Bogolyubov ansatz instead of the more customary Kirkwood approximation, we have closed the chain of equations at second order and solved for approximate expressions of the pair correlation functions, valid at all distances, for systems with one slip and with multiple active slip systems. These solutions are invariant under the simultaneous transformations ${\vec{r}}\mapsto {\vec{r}}/\sqrt{\rho}$ and $\rho \mapsto \rho^2$. The transformations suggest that any emergent dislocation pattern should exhibit a length scale given by $1/\sqrt{\rho}$, as pointed out by Holt,[@Holt70] and in agreement with the “law of similitude.”[@RajPhar86] For a complete analysis of scaling relations the reader is referred to Ref. . Recently Groma et al.
have developed a mean-field variational approach to study the screening of dislocations,[@GromGyorKocs06] similar in spirit to the Debye–Hückel theory in the study of classical plasmas.[@DebyHuck23; @LandLifs60; @LandLifs69] This method is based on approximating the system’s total density matrix as a product of single-particle density matrices $\rho_i$ with the free energy given by $F = {\left\langle \mathcal{H} \right\rangle} + T \sum_i \text{Tr} \rho_i \ln\rho_i$. Although this technique provides a complementary approach and results in the same pair correlation expressions for a single-slip system (after some interpretation), its generalization to multiple-slip systems is not obvious. In particular, one would have to supply additional cross couplings between different slips by hand. These couplings should automatically emerge from a complete theory. In Sec. \[S:EvolutionLaw\], we have formulated transport equations for the total dislocation and GND densities for general multiple slip. Interactions among dislocation pairs produce an additional (relatively) short-ranged “back stress” contribution to the long-range internal stress of individual dislocations. Most of the complexities of the correlation functions were integrated away, leaving only the $\cos(\theta_{ij})$ coupling between slip systems $i$ and $j$, see Eq. (\[E:taub\]). This dependence was also proposed by Gurtin in his strain gradient plasticity theory,[@Gurt02; @Gurt03] but was abandoned by Yefimov et al.[@YefiGies05; @YefiGies05b] We have argued in Sec. \[S:Comparison\] that this refusal was based on an unfair comparison with discrete dislocation simulations, owing to the way in which dislocation nucleation was treated. There is an important issue regarding the use of dislocation correlations ${f}_{ij}$ for $d^t_{ij}$ in Sec. \[S:MultiSlip\]. The formalism developed in Sec. \[S:BBGYK\] assumes that dislocations relax along the directions dictated by Peach–Koehler forces.
This implies dislocation glide, as included in the transport equations developed in Sec. \[S:EvolutionLaw\], but also climb, which is not considered a mechanism of plastic flow here. Mathematically speaking, Eq. (\[E:BBGYKrho\]) is *not* the stationary state of Eq. (\[E:drhodt\]). Early attempts at numerically describing dislocation correlations in glide-only, multiple-slip systems failed to produce noticeable patterns due to the need for a large number of dislocations; the role of climb (or cross slip) was suggested to help overcome this difficulty.[@BakoGromGyorZima06; @BakoGromGyorZima07] The original motivation for our approach was to find the orientation dependence of the back stress in the most straightforward way. Extracting the angular dependence from a climb-assisted relaxed state gave us a quick input to use in the glide-only multiple-slip theory. The validity of the continuum theory must ultimately be validated by comparisons against discrete dislocation results. Finally, we believe that our multiple-slip formulation provides a framework to address a long-standing challenge in explaining dislocation patterning. For single-slip systems, short-range correlations occur between two dislocations except along directions normal to their glide plane (taken to be along ${\hat{y}}$). It has been shown that for a small deviation away from this “dislocation wall” direction, an attractive parabolic potential produced by the correlated dislocations decays as $|y|^{-5/2}$, compared with $|y|^{-2}$ in the unscreened case.[@GromGyorKocs06] We have found in Sec. \[S:MultiSlip\], however, that when one or more extra slips are introduced, the effect of Debye-like screening diminishes. In this case, the attractive potential in fact decays like $r^{-2}$ as if it were unscreened.
This could explain the necessity to introduce extra slips to see the formation of walls in discrete dislocation simulations,[@BenzBrecNeed04; @BenzBrecNeed05; @FourSala96; @GomeDeviKubi06; @GromBako00; @GromPawl93PMA; @GromPawl93MSEA; @GullHart93] *unless* further aided by climb motions.[@BartCarl97; @BakoGromGyorZima06] The latter suggests the existence of a critical exponent of the attractive potential below which structure formation cannot occur, as is the case in single-slip systems restricted to glide. A more detailed investigation of this is left for future work. The authors are grateful to Professor István Groma for his insightful input and valuable suggestions. We would also like to thank Péter Dusán Ispánovity for providing us with the discrete dislocation dynamics data used in Sec. \[S:MultiSlip\]. Funding from the European Commission's Human Potential Programme <span style="font-variant:small-caps;">SizeDepEn</span> under contract number MRTN-CT-2003-504634 is acknowledged. [^1]: For a summary of various continuum theories, see, e.g., Ref.  and references therein. [^2]: Note that this already implies that the analysis applies only to two-dimensional systems of dislocations. [^3]: One distinction due to the choice of an ensemble type can be seen from the normalization condition. In the grand canonical ensemble, according to Eqs. (\[E:prob\]) and (\[E:rhon\]), $\int \rho^{{({\mathbf{n}})}}({\{{\mathbf{n}}\}})\,d{\{{\mathbf{n}}\}} = \sum_{{\mathbf{N}}\ge {\mathbf{n}}} \mathcal{P}_{\mathbf{N}} \Big[\prod_{s=1}^r \frac{N_s!}{(N_s-n_s)!} \Big] = {\left\langle \prod_{s=1}^r \frac{N_s!}{(N_s-n_s)!} \right\rangle}$, where ${\left\langle \cdot \right\rangle}$ denotes an average over all statistically equivalent ensembles.
In particular, the density of a single dislocation of species $s$ in a system with no external shear, $\rho^{(1)}(\vec{1}_s)$, is independent of $\vec{1}_s$; thus, $\int \rho^{(1)}(\vec{1}_s) \, d^2\vec{1}_s = \rho^{(1)}_s \,A = {\left\langle N_s \right\rangle}$. In other words, $\rho^{(1)}_s = {\left\langle N_s \right\rangle}/A$ depends on the *average* number of dislocations of species $s$. If one were to carry out a similar analysis in a canonical ensemble where the number of dislocations of each species is fixed, $\rho^{(1)}_s$ would have to be replaced by $N_s/A$, where $N_s$ is fixed. For higher-order densities, the expressions become quite cumbersome. For example, $\rho^{(2)}_{ss'} = N_s N_{s'}/A^2$ for $s\ne s'$, while for $s = s'$, it is $N_s(N_s-1)/A^2$. In the thermodynamic limit ($N_s\rightarrow \infty$ and $A\rightarrow \infty$ while keeping the ratio fixed), these two expressions coincide. In this sense, it is cleaner to work in the grand canonical ensemble. [^4]: In the presence of an external *conservative* force, it can be shown that both (\[E:BBGYKrho\]) and (\[E:BBGYKg\]) remain valid provided that an additional term representing the applied external force, $\vec{F}(\vec{1}_1) \equiv -(1/{k_\text{B}}T){\vec{\nabla}_{\!\vec{1}_1}}\Phi(\vec{1}_1)$, generated by the external potential $\Phi(\vec{1}_1)$ acting on $\vec{1}_1$, is added to their RHS. Qualitatively speaking, the original expression is nothing but the sum of all the Peach–Koehler interactions on the dislocation at $\vec{1}_1$ due to all other dislocations in the collection ${\mathbf{n}}$. [^5]: There are some systems where climb is typical and $\Gamma$ is naturally small, such as dislocations in vortex lattices of type-II superconductors, where the values of the elastic moduli can be small at suitably chosen applied magnetic fields.[@BlatFeigGeshLark94] By modifying the form of the interaction potential, the present analysis can be carried over straightforwardly.
[^6]: The form of the solution is not surprising; it suggests that the solution can be written as a sum of diagrams due to the expansion $1/(1-x) = 1+x+x^2+\ldots$, often encountered in many-body theory.
--- abstract: 'We present a class of simple algorithms that allows one to find the reaction path in systems with a complex potential energy landscape. The approach does not need any knowledge of the product state and does not require the calculation of any second derivatives. The underlying idea is to use two nearby points in configuration space to locate the path of slowest ascent. By introducing a weak noise term, the algorithm is able to find even low-lying saddle points that are not reachable by means of a slowest ascent path. Since the algorithm makes use only of the value of the potential and its gradient, the computational effort to find saddles is linear in the number of degrees of freedom, if the potential is short-ranged. We test the performance of the algorithm for two potential energy landscapes. For the Müller-Brown surface we find that the algorithm always finds the correct saddle point. For the modified Müller-Brown surface, which has a saddle point that is not reachable by means of a slowest ascent path, the algorithm is still able to find this saddle point with high probability.' author: - | Silvia Bonfanti$^{a,b,c}$ and Walter Kob$^{c}$\ $^a$ Dipartimento di Scienza ed Alta Tecnologia, Università dell’Insubria, Via Valleggio 11, 22100 Como, Italy\ $^b$ Department of Physics, University of Milano, via Celoria 16, 20133 Milano, Italy\ $^c$ Laboratoire Charles Coulomb, Université de Montpellier and CNRS, UMR 5221, 34095 Montpellier, France bibliography: - 'q.bib' title: Methods to locate Saddle Points in Complex Landscapes --- Introduction {#sec1} ============ Many static and dynamic properties of complex many-body systems can be understood using the concept of the potential energy landscape (PEL), i.e. the hypersurface defined by the interaction potential $V(\{\mathbf{r}_i\})$ between the particles as a function of their coordinates $\mathbf{r}_i$, $i=1,2,\ldots,N$, with $N$ the total number of particles in the system.
Examples for which such an approach has been found to be useful include chemical reactions (reaction path), atomic diffusion (overcoming the local barriers), but also systems that involve many particles such as proteins (folding pathway) and glasses (nature of the relaxation dynamics) [@wales_book]. To understand the static and dynamic properties of such systems one usually relies on the fact that at low temperatures one has a separation of time scales: On short time scales the system is vibrating around a local minimum of the PEL while on longer time scales it hops over a local barrier. Thus the knowledge of the distribution of the location and height of the local minima allows one to understand many of the static properties of the system: The shape of the local minima gives information about the vibrational properties, and the height of the barrier that connects neighboring minima allows one to make a coarse-grained description of the dynamics of the system [@goldstein_1969]. Finally we mention that these details of the PEL are also needed to determine some of the properties of glasses at low temperatures since, e.g., a realistic description of the tunneling processes depends in a crucial manner on the geometry of the PEL [@jug2015realistic]. It is often found that the number of such local minima increases exponentially with the number of degrees of freedom of the system, in particular if the system of interest is complex, as is the case with proteins or glasses [@stillinger1999exponential]. Thus the PEL is very rugged and it is therefore a formidable task to find the location of [*all*]{} these minima. However, using specialized algorithms it is indeed possible to obtain this information for relatively simple systems that have, typically, fewer than a hundred particles [@wales1994rearrangements; @wales_1998; @wales2003stationary; @doye1999evolution; @wales_2015].
Despite these approaches it is at present impossible to determine numerically the complete landscape of a complex bulk system that has, say, $\mathcal{O}(10^3)$ particles. Notwithstanding this impossibility, it is not very difficult to find at least a large number of local minima, since algorithms like the steepest descent procedure allow one to determine efficiently, for a given starting point in configuration space, the nearest local minimum [@press1992numerical], a configuration that in the following we will refer to as “inherent structure” (IS) [@weber1984inherent]. Such an approach has made it possible, e.g., to obtain interesting properties of the PEL in glass-forming systems [@weber1984inherent; @Web1985; @heuer1997properties; @Sas1998; @angelani2000saddles; @broderix2000energy; @Ang2003; @doliwa2003does; @Sci2005; @wales2003stationary; @Heu2008]. Much more difficult is the location of the saddle points (SP) that connect neighboring minima, information that is needed to determine the reaction path and the corresponding energy barrier. Roughly speaking one can identify three approaches to find such saddle points: 1. In the case that one knows two minima that are neighbors one can use simple and efficient algorithms that are able to find the corresponding saddle point with a modest numerical effort. A typical example of such a method is the so-called “nudged elastic band”, which is basically a minimization of the forces acting on a one-dimensional elastic band that connects the two minima [@henkelman2000climbing; @henkelman2000improved]. Although quite powerful if the landscape is not too rough, the method has the drawback that one needs to know that the two minima considered are really neighbors, i.e. that the two basins of attraction touch each other. This problem is also present for more involved algorithms, such as the transition path sampling method [@bolhuis_2002]. 2.
The second class of methods needs instead only one starting minimum and uses the information on the local geometry of the PEL to climb up the landscape until a saddle point is found. Popular realizations of this approach are the dimer method [@henkelman1999dimer; @henkelmann_2000], the eigenvector-following method [@munro1999defect; @wales2003stationary] and the Lanczos algorithm of the “ART nouveau” method [@malek2000dynamics], all of which are based on the idea of determining and then following the direction of the smallest curvature of the PEL, i.e. the softest mode of the Hessian matrix. With such a “slowest ascent” protocol the search is guaranteed to converge to a transition state of the PEL. Although these methods are suited for, e.g., the analysis of small clusters of Lennard-Jones particles [@wales1994rearrangements; @doye1999evolution], not all of them are applicable to large systems since most of them require at each iteration step the evaluation and inversion of the Hessian matrix, a numerical effort that scales like $\mathcal{O}(N^3)$. A notable exception is the so-called “dimer method” which does not require information on the second derivatives [@henkelman1999dimer]. Another drawback of these approaches is that they do not guarantee to find the lowest saddle point but instead one that depends on how the algorithm is started [@doliwa2003energy]. In practice it thus can happen that the saddle points that are found are very high up in the PEL and therefore physically irrelevant [@doye2002saddle]. Furthermore, it is sometimes also problematic to escape the local well of the PEL near the IS in a non-trivial direction, since the softest eigenmode actually corresponds to the translational and rotational zero-frequency modes [@pedersen2014bowl]. Other methods have been proposed to find a reaction path that gives the escape route from a local minimum [@laio2002escaping].
Although these methods are very efficient if the system is not too complex, they are not adapted to the case where one has many degrees of freedom. 3. Finally we mention an approach to locate saddle points that does not make use at all of the minima of the PEL and that has been employed with some success in the field of supercooled liquids and glasses, see, e.g., [@Web1985; @angelani2000saddles; @broderix2000energy]. For this one considers the squared gradient of the potential energy $W=\vert \vec{\nabla}V \vert^2$. The idea is that since at a saddle point one has $\vec{\nabla}V=0$, a minimization of $W$ will lead to a saddle point or a local minimum. However, in practice one finds that this approach has the drawback that i) there are many stationary points in the PEL that are neither saddle points nor minima and ii) there are also many “quasi-saddles”, i.e. local shoulders in the PEL at which the derivative is not zero but only has a local minimum (an inflection point), and which thus show up in $W$ as local minima [@doye2002saddle; @doye2003comment; @angelani2002quasisaddles]. Since at low temperatures, i.e. when one is deep down in the PEL, the number of these quasi-saddles starts to become much larger than the number of true minima or SPs, this approach becomes very inefficient. In this paper we propose a new method that allows one to locate low-lying saddle points associated with a given local minimum. The algorithm makes use only of the value of the potential energy as well as its gradient, i.e. there is no need to calculate the numerically expensive Hessian matrix used by some other algorithms. The rest of the paper is structured as follows: In the following section we introduce the new class of algorithms. In Sec. \[sec3\] we give the details on the two systems that we will use to test the efficiency of the algorithms and in Sec. \[sec4\] we give the results of these tests. Finally we summarize and conclude in Sec. \[sec5\].
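The quasi-saddle pitfall of minimizing $W=\vert \vec{\nabla}V \vert^2$, discussed under point 3 above, can be illustrated with a one-dimensional toy potential of our own choosing (not from the paper): $V(x)=x^3+x$ has no stationary point at all, yet gradient descent on $W$ converges to the inflection point $x=0$, where $V'(0)=1\neq 0$.

```python
# Toy 1D illustration of a "quasi-saddle": V(x) = x^3 + x has no
# stationary point, but W = (V')^2 has a minimum at the inflection
# point x = 0, so minimizing W does not yield a saddle point.

def V(x):  return x**3 + x
def dV(x): return 3*x**2 + 1
def W(x):  return dV(x)**2
def dW(x): return 2*dV(x)*6*x   # dW/dx = 2 V'(x) V''(x), with V'' = 6x

def descend_W(x, step=1e-3, n_steps=20000):
    """Plain gradient descent on W."""
    for _ in range(n_steps):
        x -= step * dW(x)
    return x

x_star = descend_W(1.0)   # converges to the inflection point x = 0
```

At the resulting point the gradient of the original potential is still of order one, so a naive minimization of $W$ would misidentify it as a stationary point.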
Algorithm to find the saddle point {#sec2} ================================== The idea of the algorithm, which we name “discrete difference slowest ascent” (DDSA), is to locate the saddle points of $V(\{\mathbf{r}_i\})$ with the help of a new cost function $H_{\rm DDSA}(\mathbf{X},\beta)$ which can be minimized without using the computationally expensive Hessian matrix. Here $\mathbf{X}$ represents the coordinates of all the particles and $\beta$ is a parameter the meaning of which will be discussed below. Since our algorithm has a certain similarity to the one proposed by Duncan [*et al.*]{}, Ref. \[\], we briefly discuss the latter and point out the differences. In the “Biased Gradient Square Descent” (BGSD) algorithm of Ref. \[\] for finding transition states one starts at a local minimum of $V(\mathbf{X})$ that in the following we will refer to as $\mathbf{X}_{\rm IS}$, where “IS” stands for “inherent structure”. The BGSD algorithm is based on the idea of introducing an auxiliary cost function the minimization of which allows one to climb up the PEL in the direction of the SP of $V(\mathbf{X})$ that is close to $\mathbf{X}_{\rm IS}$. The proposed cost function is given by $$H_{\rm BGSD}(\mathbf{X};\alpha,\beta)=\frac{1}{2} \vert \nabla V(\mathbf{X}) \vert ^2 + \frac{1}{2}\alpha(V(\mathbf{X})-\beta)^2 \quad . \label{eq_1}$$ So the first term is identical to the potential $W$ discussed in the introduction. The second term ensures that the minimization algorithm will seek to minimize this squared gradient with the constraint that the potential energy has a value $\beta$. Thus if one sets the energy $\beta$ to a value that is slightly higher than the local minimum, the algorithm will make a compromise between the smallest absolute value of the gradient and an energy that is as close as possible to $\beta$. The balance between these two terms is given by the prefactor $\alpha$.
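For concreteness, Eq. (\[eq\_1\]) can be written down in a few lines. The sketch below is our own; the quadratic toy landscape only illustrates how $\alpha$ and $\beta$ enter, and the names are hypothetical.

```python
import numpy as np

def H_bgsd(X, V, gradV, alpha, beta):
    """Cost function of the biased gradient square descent, Eq. (1):
    0.5 |grad V|^2 + 0.5 alpha (V - beta)^2.
    V and gradV are callables; alpha balances the two terms and
    beta is the target energy level."""
    g = gradV(X)
    return 0.5*np.dot(g, g) + 0.5*alpha*(V(X) - beta)**2

# toy quadratic landscape V(x, y) = x^2 + 4 y^2 (our own example)
V  = lambda X: X[0]**2 + 4*X[1]**2
gV = lambda X: np.array([2*X[0], 8*X[1]])
```

At the minimum of the toy potential with $\beta$ equal to the minimum energy the cost function vanishes, as it should.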
Once the local minimum has been found, the value of $\beta$ is increased a bit, thus allowing one to iteratively climb up the PEL until a saddle point is found. The drawback of this approach is that usually the algorithm for the minimization of $H_{\rm BGSD}(\mathbf{X};\alpha,\beta)$ will need the first derivative of the cost function, i.e. in the case of Eq. (\[eq\_1\]) the second derivative of $V(\mathbf{X})$, a calculation that becomes very expensive if the number of particles is large. Therefore Duncan [*et al.*]{} have proposed to make use of the relation $$\nabla^2V(\mathbf{X})\nabla V(\mathbf{X})=\lim_{\delta \rightarrow 0} \frac{\nabla V[\mathbf{X}+\delta \nabla V(\mathbf{X})]-\nabla V(\mathbf{X})}{\delta} \label{eq_2}$$ and to approximate the right hand side by a finite difference quotient using a small value of $\delta$. Although this approximation is reasonable if the number of degrees of freedom is not too large, it usually becomes inaccurate for large $N$ (if $\delta$ is kept fixed). The algorithm that we present in the following avoids this problem since it does not need the second derivative of $V(\mathbf{X})$ and hence no approximation of the type given by Eq. (\[eq\_2\]) is necessary. The idea of our DDSA algorithm is to introduce a new cost function $H_{\rm DDSA}(\mathbf{X},\beta)$ that has the same local extrema as $V(\mathbf{X})$ but which does not involve the gradient of $V(\mathbf{X})$, and hence $H_{\rm DDSA}$ can be optimized without the need of calculating the Hessian matrix. Furthermore this function should allow one to identify the direction of the PEL that has the smallest slope and hence make it possible to ascend the PEL in the softest direction. The cost function we propose is given by $$H_{\rm DDSA}(\mathbf{X},\beta)= [V(\mathbf{X})-\beta]^2+[V(\mathbf{X}+\Delta \mathbf{X})-\beta]^2 \label{eq_3}$$ where $\Delta \mathbf{X}$ is a small displacement in phase space (details are given below) and $\beta$ is a target energy value.
![Left: Schematic plot showing possible paths to climb up the PEL that has a local minimum (star). The points $A$ and $B$ lie on the same iso-potential line of height $\beta$, but the slope at $A$ is less than that at $B$. As a consequence the DDSA algorithm will choose the point $A$. Right: Representation of the one-dimensional profile of the potential surface: The solid line indicates the profile in the direction of point A while the dashed line in the direction B. The horizontal line indicates the energy level $\beta$ used in the DDSA algorithm.[]{data-label="fig1_2d_cartoon"}](Figure1_pel.pdf) To understand the idea of this algorithm it is useful to start with a simple two-dimensional example, a cartoon of which is shown in Fig. \[fig1\_2d\_cartoon\]. In panel a) we show the iso-potential lines of the PEL around a local minimum, represented by a star. Consider two lines that start at this minimum. Line “A” is in the direction of the softest mode, i.e. slowest ascent, while direction “B” has a steeper slope. In panel b) we show a cut of the PEL in the direction of A and B. Let us consider these one-dimensional cuts of the potential in the neighborhood of $x=x_0$, where $x_0$ is defined via $V(x_0)=\beta$ and $\beta$ is a given value of the potential energy. Making a Taylor expansion of $V(x)$ around $x_0$ gives for $H_{\rm DDSA}(x,\beta)$ $$H_{\rm DDSA}(x_0+\epsilon,\beta) \approx [2\epsilon^2 +2 \epsilon \Delta x + (\Delta x)^2] [V'(x_0)]^2 \quad . \label{eq_4}$$ One easily sees that the minimum of this function is attained for $\epsilon=-\Delta x/2$. From Eq. (\[eq\_4\]) one finds that the value of $H_{\rm DDSA}$ at this minimum is given by $[\Delta x \, V'(x_0)]^2/2$, i.e. it is proportional to $[V'(x_0)]^2$. Thus we can conclude that the minimum of the function $H_{\rm DDSA}$ is attained at a point at which the gradient is as small as possible since this is the best compromise between the first and second term on the right hand side of Eq.
(\[eq\_3\]). The influence of the various terms and steps of this procedure is shown in Fig. \[fig2\_ddsa\_algo\]. [ ![Contributions of the various terms in the cost-function $H_{\rm DDSA}$ for a one-dimensional case. a) Original potential $V(x)$ (full line) and $V(x+\Delta x)$ for $\Delta x=0.25$ (dotted line). The horizontal lines correspond to different energy values $\beta$. The vertical lines show the location of the minimum of $H_{\rm DDSA}$ for the different values of $\beta$. b) The first term on the right hand side of Eq. (\[eq\_3\]) for the three values of $\beta$ of panel a). c) The second term on the right hand side of Eq. (\[eq\_3\]) for the three values of $\beta$ of panel a). d) The final cost function $H_{\rm DDSA}$ for the three values of $\beta$ of panel a).[]{data-label="fig2_ddsa_algo"}](Figure2_ddsa.pdf "fig:")]{} It is easy to see that this argument can be generalized to the case with many degrees of freedom if one replaces the quantity $\epsilon$ in Eq. (\[eq\_4\]) by $\epsilon \nabla V(\mathbf{X}_0)$, where $\mathbf{X}_0$ is a point with $V(\mathbf{X}_0)=\beta$. This implies that the minimization of the function $H_{\rm DDSA}$ from Eq. (\[eq\_3\]) will give a point that is close to the energy level $\beta$ and that has the smallest gradient. We now return to the displacement $\Delta \mathbf{X}$ given in Eq. (\[eq\_3\]). This displacement has to fulfill two requirements: i) it should be small so that the Taylor expansion used above is valid and ii) the point $\mathbf{X}+\Delta \mathbf{X}$ should [*not*]{} be on the energy surface of value $\beta$ since in that case both terms in Eq. (\[eq\_3\]) can be made to vanish. It is of course easy to fulfill the first condition.
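Before turning to the second requirement, the one-dimensional result of Eq. (\[eq\_4\]) can be checked numerically. The sketch below is our own; we use an exactly linear potential, for which the Taylor expansion is exact, and verify that the minimum of $H_{\rm DDSA}$ sits at $x_0-\Delta x/2$ with value $[\Delta x\, V'(x_0)]^2/2$.

```python
# Numerical check of Eq. (4) for a locally linear potential (our own
# toy example): the minimum of H_DDSA lies at eps = -dx/2 and has the
# value (dx * V'(x0))^2 / 2.
c, beta, dx = 3.0, 1.0, 0.25       # slope, target energy, displacement
V  = lambda x: c*x                 # exactly linear, so Eq. (4) is exact
x0 = beta/c                        # point with V(x0) = beta

def H_ddsa(x):
    # Eq. (3) in one dimension
    return (V(x) - beta)**2 + (V(x + dx) - beta)**2

import numpy as np
xs = x0 + np.linspace(-1.0, 1.0, 200001)   # brute-force scan
x_min = xs[np.argmin(H_ddsa(xs))]
```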
The second one can be taken care of by choosing the direction of $\Delta \mathbf{X}$ as $$\widehat{\Delta \mathbf{X}} = \frac{\mathbf{X}-\mathbf{R}}{|\mathbf{X}-\mathbf{R}|} \quad , \label{eq_5}$$ where the position $\mathbf{R}$, called in the following “reference point”, will be discussed in Sec. \[sec4\]. But already here we can state that $\mathbf{R}$ will be chosen such that $V(\mathbf{R}) <\beta$, i.e. the vector $\widehat{\Delta \mathbf{X}}$ from Eq. (\[eq\_5\]) is not parallel to an iso-line and points upward in the PEL (see Fig. \[fig3\_define\_vectors\] for an illustration). ![Illustration of the points used in the DDSA algorithm for the case of a two-dimensional energy landscape. $\mathbf{X}_{\rm IS}$ is the local minimum of the landscape, $\mathbf{R}$ is the reference point used in the search, and $\mathbf{X}$ and $\mathbf{X}+\Delta \mathbf{X}$ are the points used to find the path of the slowest ascent.[]{data-label="fig3_define_vectors"}](Figure3_points.pdf) The cost function $H_{\rm DDSA}$ defined by Eq. (\[eq\_3\]) and the displacement vector $\Delta \mathbf{X}$ from Eq. (\[eq\_5\]) allow one to find the path of slowest ascent. (In Sec. \[sec4\] we will discuss how the magnitude of $\Delta \mathbf{X}$ has to be chosen.) We have found that in practice the efficiency of the algorithm depends also on how the starting point for the iteration is chosen [@bonfanti_phd_16]. In the following we will denote this starting point by $\mathbf{G}$ and explain in Sec. \[sec4\] how we have chosen it. Systems {#sec3} ======= In this section we describe the two systems which we have used to test the performance of the DDSA algorithm. Although both of them have only two degrees of freedom, they have already many of the complexities encountered in higher-dimensional PELs and therefore they can be considered as instructive test cases for the algorithm.
The first system is the well known Müller-Brown (MB) potential, a model which was introduced to describe a simple PEL and whose properties have been studied extensively, notably to test the performance of various algorithms aimed at finding a reaction path [@muller1979location; @wales1994rearrangements; @ruedenberg1994gradient; @passerone2001action; @doye2002saddle]. The MB potential is the sum of four Gaussians and is given by $$\begin{aligned} V_{\rm MB}(x,y) = \sum_{i=1}^4 &A_i&\exp[a_i(x-\bar{x}_i)^2+\\& b_i&(x-\bar{x}_i)(y-\bar{y}_i)+c_i(y-\bar{y}_i)^2] \label{eq_6}\end{aligned}$$ where $$\begin{split} &A=(-200,-100,-170,15);~~a=(-1,-1,-6.5, 0.7) \\ &b=(0, 0, 11, 0.6);~~c= (-10,-10,-6.5, 0.7) \\ &\bar{x}=(1, 0,-0.5,-1);~~\bar{y}=(0, 0.5, 1.5, 1) . \end{split} \label{eq_7}$$ A contour plot for this potential is shown in Fig. \[fig4\_mb\_mmb\_pel\]a and we recognize the presence of two minima, marked by “IS”, separated by a saddle point (SP). The graph shows that a slowest ascent path is not very curved, so it should not be too difficult for an algorithm to find it. Since, however, in practice one must expect that the PEL has a slowest ascent path that is more winding, we have also considered a PEL that is from this point of view a bit more challenging. This modified Müller-Brown (MMB) surface is given by the MB potential to which we have added a further term: $$V_{\rm MMB}(x,y) = V_{\rm MB}(x,y) + V_{\rm add}(x,y) \quad . \label{eq_8}$$ ![Contour plots of the Müller-Brown potential, panel a), and modified Müller-Brown potential, panel b).
The local minima are shown as red stars (IS) and the saddle points as blue circles (SP).[]{data-label="fig4_mb_mmb_pel"}](Figure4_MB_MMB.pdf) This additional term is given by $$V_{\rm add}(x,y) = A_5\sin(xy)\exp[ a_5(x-\bar{x}_5)^2+c_5(y-\bar{y}_5)^2] \label{eq_9}$$ with $$\begin{split} &\bar{x}_5=-0.5582;~~\bar{y}_5=1.4417\\ &A_5= 500;~~a_5=-0.1;~~c_5=-0.1 \end{split} \label{eq_10}$$ This additional term makes the valley emanating from the main minimum bend away from the original saddle point (now at the lower right corner of Fig. \[fig4\_mb\_mmb\_pel\]b), thus making it more difficult for an algorithm to find this point. In addition, the added term creates a second saddle point in the PEL (upper left corner in Fig. \[fig4\_mb\_mmb\_pel\]b) that has a higher barrier than the original saddle point of the MB surface. Thus we are seeking an algorithm that is able to find the lower saddle point and not the higher one. Test of the algorithm {#sec4} ===================== In this section we will introduce four versions of the DDSA algorithm and discuss how they fare in finding the saddle points in the PELs defined by the MB and MMB potentials. All algorithms have the same basic structure: i) Given a starting point $\mathbf{Y}$, we choose a new target energy $\beta=V(\mathbf{Y})+\delta$ (with $\delta>0$), a reference point $\mathbf{R}$, as well as a starting point for the search, $\mathbf{G}$; ii) We minimize the cost function $H_{\rm DDSA}(\mathbf{X},\beta)$ and find a new point on the slowest ascent path that has an energy close to $\beta$. Then we restart the iteration. In the following we will denote by “level $n$” the $n$’th iteration of this procedure.
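Both test surfaces can be coded directly from Eqs. (\[eq\_6\])–(\[eq\_10\]). The sketch below is our own implementation and is enough to reproduce the contour plots and evaluate the potentials numerically.

```python
import numpy as np

# Müller-Brown parameters, Eq. (7)
A  = np.array([-200., -100., -170., 15.])
a  = np.array([-1., -1., -6.5, 0.7])
b  = np.array([0., 0., 11., 0.6])
c  = np.array([-10., -10., -6.5, 0.7])
xb = np.array([1., 0., -0.5, -1.])
yb = np.array([0., 0.5, 1.5, 1.])

def V_mb(x, y):
    """Müller-Brown potential, Eq. (6): sum of four Gaussians."""
    dx, dy = x - xb, y - yb
    return np.sum(A*np.exp(a*dx**2 + b*dx*dy + c*dy**2))

def V_add(x, y, A5=500., a5=-0.1, c5=-0.1, x5=-0.5582, y5=1.4417):
    """Additional term of Eqs. (9)-(10)."""
    return A5*np.sin(x*y)*np.exp(a5*(x - x5)**2 + c5*(y - y5)**2)

def V_mmb(x, y):
    """Modified Müller-Brown surface, Eq. (8)."""
    return V_mb(x, y) + V_add(x, y)
```

As a quick check, evaluating `V_mb` near the known global minimum of the MB surface at about $(-0.558, 1.442)$ gives a value close to $-146.7$.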
The main difference between the versions of the algorithm is the choice of the reference point $\mathbf{R}$ and of the point $\mathbf{G}$.\ [**Algorithm 1:**]{} The first form of the DDSA algorithm uses the following expressions for $\mathbf{R}$, $\Delta \mathbf{X}$, and $\mathbf{G}$: $$\mathbf{R} = \mathbf{X}_{\rm IS} \label{eq_11}$$ $$\Delta \mathbf{X} = \frac{\epsilon (\mathbf{X}-\mathbf{R})}{|\mathbf{X}-\mathbf{R}|} \label{eq_12}$$ $$\mathbf{G}= \mathbf{X}_{{\rm min}(n-1)}- \frac{\delta}{|\nabla V(\mathbf{X}_{{\rm min}(n-1)})|} \cdot \frac{\nabla V(\mathbf{X}_{{\rm min}(n-1)})}{|\nabla V(\mathbf{X}_{{\rm min}(n-1)})|} \label{eq_13}$$ Here $\mathbf{X}_{{\rm min}(n)}$ is the minimum obtained at iteration number $n$. With this choice of $\mathbf{R}$ the vector $\Delta \mathbf{X}$ thus points along the line connecting $\mathbf{X}$ to the local minimum at which we start the slowest ascent. The quantity $\epsilon$ is the magnitude of this displacement and we choose $\epsilon=0.001$ and 0.01 for the MB and MMB potential, respectively. For $\delta$ we have chosen 0.5 (MB) and 4.1 (MMB). These values are appropriate for the length scales occurring in the MB or MMB PEL (see Fig. \[fig4\_mb\_mmb\_pel\]), but they do not need to be fine-tuned. From Eq. (\[eq\_13\]) we see that the position $\mathbf{G}$ at which we start the iteration is given by the position of the previous minimum plus a vector that points in the opposite direction of the gradient of the PEL and that has a length which is just a linear extrapolation of this gradient to the energy level $\beta$. ![ Algorithm 1: Trajectories that start from the pink points around the local minimum of the PEL (star). a) Müller-Brown surface, b) modified Müller-Brown surface.
Note that for the MMB surface some of the trajectories lead up to SP2 which is higher than SP1.[]{data-label="fig5_mb_mmb_alg"}](Figure5_alg1.pdf) To test the efficiency of this algorithm we have used 16 starting points arranged on a circle of radius $r_0$ around the minimum $\mathbf{X}_{\rm IS}$, using a radius $r_0$ of 0.1 and 0.2 for the MB and MMB potential, respectively. This setup thus allows one to estimate the probability that the algorithm finds the lowest saddle point. Defining one of these starting points as $\mathbf{X}_{{\rm min}(1)}$, we choose as target energy $V(\mathbf{X}_{\rm IS})+ \delta$ and use Eq. (\[eq\_13\]) to obtain the starting point $\mathbf{G}$ for the optimization of the cost function $H_{\rm DDSA}$ of Eq. (\[eq\_3\]). This optimization was done by means of the Polak-Ribiere variant of the conjugate gradient algorithm [@press1992numerical]. Note that the quantities $\mathbf{R}$ and $\mathbf{G}$ are fixed during the search of the minimum of $H_{\rm DDSA}(\mathbf{X},\beta)$, i.e. the calculation of the gradient of this cost function for the optimization does not involve the calculation of a second derivative of $V(\mathbf{X})$. In Fig. \[fig5\_mb\_mmb\_alg\]a we show the trajectories obtained from the 16 starting points in the MB potential. We see that all trajectories that start toward the lower left direction converge rapidly onto a master curve that does indeed correspond to the path of slowest ascent. For the starting points that lie on the upper right half of the circle the resulting trajectories first follow the slowest ascent path in that direction, i.e. a direction that does not really lead to the correct saddle point. However, at a certain point in the ascent the gradient becomes so large that the algorithm finds a direction in which the gradient is smaller than along the simple upward direction, and thus the trajectory starts to turn.
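For reference, the Algorithm 1 update quantities of Eqs. (\[eq\_12\]) and (\[eq\_13\]) can be written out in a few lines of Python. This is a sketch only: vectors are plain lists, and the gradient of the PEL is assumed to be supplied by the caller.

```python
import math

def delta_x(x, r, eps):
    """Displacement Delta X of Eq. (12): length eps, directed from R toward X."""
    d = [xi - ri for xi, ri in zip(x, r)]
    norm = math.sqrt(sum(di * di for di in d))
    return [eps * di / norm for di in d]

def start_point(x_min_prev, grad_v, delta):
    """Starting point G of Eq. (13): the previous minimum displaced against the
    gradient, with the length set by a linear extrapolation to the level beta."""
    g2 = sum(gi * gi for gi in grad_v)
    return [xi - (delta / g2) * gi for xi, gi in zip(x_min_prev, grad_v)]
```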
Although in this case the algorithm does not pass through the saddle point (since it has climbed up too far), it is able to come quite close to the sought saddle point. In that case a steepest descent procedure using the cost function $W=|\nabla V |^2$ and the approximation of Eq. (\[eq\_2\]) would allow one to locate the saddle point with good precision. Thus by monitoring the value of $W$ it would be easy to realize that the algorithm has entered a sector of configuration space in which one of the eigenvalues of the Hessian matrix has become negative, i.e. that one has entered a new basin of attraction for the potential $W$, and the minimum of this basin most likely corresponds to the saddle point. For the case of the MMB potential the algorithm does not perform as well, Fig. \[fig5\_mb\_mmb\_alg\]b. We see that the trajectories that start on the lower left part of the circle all end up at the saddle point SP2, i.e. the algorithm manages to find a saddle point, but it is not the lowest one. The reason for this failure in the case of the MMB surface is related to the fact that the reference point $\mathbf{R}$ is fixed at $\mathbf{X}_{\rm IS}$ and thus the vector $\Delta \mathbf{X}$, used to define the point at which the second term of $H_{\rm DDSA}(\mathbf{X},\beta)$ is evaluated, does not adapt to the shape of the local PEL close to $\mathbf{X}$, since $\Delta \mathbf{X}$ always points to the local minimum $\mathbf{X}_{\rm IS}$. This is no problem as long as the ascending valley is not curved and emanates in a more or less straight manner from $\mathbf{X}_{\rm IS}$. However, if there is a noticeable curvature, as is the case for the MMB PEL, the iso-potential lines are no longer (almost) orthogonal to the vector $\Delta \mathbf{X}$, with the result that the minimum of the cost function $H_{\rm DDSA}(\mathbf{X},\beta)$ no longer lies on the slowest ascent path.
It can thus be expected that a reference point $\mathbf{R}$ that adapts to the local shape of the PEL will help to alleviate this problem. This is the idea of the next version of the algorithm.\ [**Algorithm 2:**]{} This version of the DDSA algorithm uses a reference point that moves along with the slowest ascent trajectory. The simplest way to do this is to choose as $\mathbf{R}$, on level $n$ of the path, the location of the minimum found on the previous level, i.e. $\mathbf{X}_{{\rm min}(n-1)}$. However, we have found that this choice leads to numerical instabilities and thus the ascent trajectory becomes very erratic [@bonfanti_phd_16]. In algorithm 2 we try to avoid this problem by choosing as reference point the minimum that has been found $k$ levels earlier, where $k$ is an integer. In addition we have adapted the magnitude of the displacement $\Delta \mathbf{X}$ to take into account the steepness of the PEL in the vicinity of $\mathbf{X}$. Thus the algorithm is given by $$\mathbf{R}= \begin{cases} \mathbf{X}_{\rm IS} & \text{if}~n\leq k \\ \mathbf{X}_{{\rm min}(n-k)} & \text{if}~n>k \end{cases} \label{eq_14}$$ $$\Delta\mathbf{X}=\frac{\delta}{|\nabla V(\mathbf{X}_{{\rm min}(n-k)}) \vert} \cdot \frac{\mathbf{X}-\mathbf{R}}{\vert \mathbf{X}-\mathbf{R} \vert} \label{eq_15}$$ $$\mathbf{G}= \mathbf{X}_{{\rm min}(n-1)} \quad . \label{eq_16}$$ Thus for the first $k$ steps of climbing up the PEL we keep the IS as the reference point, i.e. we assume that the PEL has a simple geometry without winding valleys. After having reached level $k+1$ one uses for $\mathbf{R}$ the minimum $\mathbf{X}_{{\rm min}(1)}$, subsequently $\mathbf{X}_{{\rm min}(2)}$ and so on. In this algorithm we choose the magnitude of the displacement vector $\Delta \mathbf{X}$ such that it adapts to the local slope (see the first factor on the RHS of Eq. (\[eq\_15\])). The values for $\delta$ were 0.50 and 0.25 for the MB and MMB potential, respectively.
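The level-dependent reference point of Eq. (\[eq\_14\]) and the slope-adapted displacement of Eq. (\[eq\_15\]) can be sketched as follows; levels are counted from 1, and `minima[i]` is assumed to store $\mathbf{X}_{{\rm min}(i+1)}$:

```python
import math

def reference_point(minima, x_is, n, k):
    """Reference point R of Eq. (14): the IS during the first k levels,
    afterwards the minimum found k levels earlier."""
    if n <= k:
        return x_is
    return minima[n - k - 1]          # X_min(n-k)

def delta_x_adaptive(x, r, grad_norm, delta):
    """Displacement of Eq. (15): length delta / |grad V(X_min(n-k))|,
    directed from R toward X."""
    d = [xi - ri for xi, ri in zip(x, r)]
    nrm = math.sqrt(sum(di * di for di in d))
    return [(delta / grad_norm) * di / nrm for di in d]
```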
Note that for this version of the algorithm we have also modified the starting point $\mathbf{G}$ for the minimization of $H_{\rm DDSA}(\mathbf{X},\beta)$ since we have found that for the performance of the algorithm it doesn’t really matter whether we choose the point given by the RHS of Eq. (\[eq\_13\]) or the simpler expression given by Eq. (\[eq\_16\]) [@bonfanti_phd_16]. The values of $k$ were chosen to be 25 and 100 for the MB and MMB potential, respectively. These numbers and the values of $\delta$ imply that the reference point $\mathbf{R}$ is about 12.5 (MB) and 25 (MMB) energy units below the energy at which one seeks the local minimum of the slope. This energy value thus corresponds roughly to the scale on which the shape of the PEL is significantly deformed. In Fig. \[fig6\_algtog\] we show the trajectories obtained from this algorithm. For the case of the MB surface we find that this algorithm has a much better performance than algorithm 1 in that even the points that start on the upper right half of the circle around $\mathbf{X}_{\rm IS}$ converge to the SP. For intermediate times we find that these latter trajectories show a bit of jittering when they jump to the lower left valley, but this motion is quickly damped out. However, for the case of the MMB surface, this algorithm too is not able to find the saddle point, see Fig. \[fig6\_algtog\]b. The reason for this is that at a certain energy level the trajectory becomes very erratic, which in turn has the effect that the reference point $\mathbf{R}$ also moves around in an uncontrolled manner. As a consequence the algorithm fails to climb up further. Thus this behavior is qualitatively the same as the one we described at the beginning of the section on algorithm 2, i.e. the case that corresponds to $k=1$.
This undesirable behavior is related to a nonlinear feedback mechanism between the choice of the reference point for the optimization on level $n$ and the minimization procedure: On one level $\mathbf{R}$ is slightly on one side of the slowest ascent valley, and on the next level $\mathbf{R}$ jumps to the other side of the valley, at a somewhat increased distance from it, leading to the observed zig-zag motion [@bonfanti_phd_16]. ![ Algorithm 2: Trajectories that start from the pink points around the local minimum of the PEL (star). a) Müller-Brown surface, b) modified Müller-Brown surface. Note that for both PELs there are certain trajectories that are somewhat erratic.[]{data-label="fig6_algtog"}](Figure6_alg2.pdf) To cope with this problem we have introduced a further version of the DDSA algorithm:\ ![ Algorithm 3: Trajectories that start from the pink points around the local minimum of the PEL (star). a) Müller-Brown surface, b) modified Müller-Brown surface. Note that for the MMB PEL some of the trajectories arrive at the lowest lying saddle point SP1.[]{data-label="fig7_modalgtog"}](Figure7_alg3.pdf) [**Algorithm 3:**]{} One possibility to avoid the instability that we have encountered with algorithm 2 is to use the information on the ascent trajectory to define the reference point $\mathbf{R}$ and to damp out the small fluctuations in its location that lead to the numerical instabilities discussed above. In practice we do this by defining $\mathbf{R}$ as the average over a certain number $k$ of previous positions $\mathbf{X}_{\rm min}$. Thus at level $n$ the algorithm is given by $$\mathbf{R}= \begin{cases} \mathbf{X}_{\rm IS} & \text{if}~n\leq k \\ \frac{1}{k} \sum_{i=n-k}^{n-1} \mathbf{X}_{\rm min(i)} & \text{if}~n>k \end{cases} \label{eq_17}$$ $$\Delta\mathbf{X}=\frac{\delta}{|\nabla V(\mathbf{X}_{\rm min(n-k)}) \vert} \cdot \frac{\mathbf{X}-\mathbf{R}}{\vert \mathbf{X}-\mathbf{R} \vert} \label{eq_18}$$ $$\mathbf{G}=\mathbf{X}_{\rm min(n-1)} \quad.
\label{eq_19}$$ For $k$ we have chosen 50 (MB) and 165 (MMB) and $\delta=0.5$ (MB and MMB). That this version is indeed able to find the SPs for both the MB and MMB potentials is shown in Fig. \[fig7\_modalgtog\]. For the case of the MB PEL all 16 trajectories lead up to the lowest lying SP. In contrast to the results for algorithm 2, all the trajectories are now very smooth, thus indicating that the damping mechanism is indeed able to suppress the numerical instability of the previous version. For the MMB surface we find that 10 out of 16 trajectories reach the [*correct*]{} saddle point, Fig. \[fig7\_modalgtog\]b, thus showing that this algorithm indeed performs much better than the two previous ones. Also for this PEL the ascending trajectories have become much smoother, indicating that the numerical instability is no longer present. The way the DDSA algorithm is set up, it will attempt to follow the path of the slowest ascent, an approach that it shares with other algorithms, such as, e.g., the dimer method [@henkelman1999dimer]. However, as discussed above, this path does not necessarily lead to the lowest saddle point since the latter might (locally) involve a steeper path. It is therefore useful to probe not only the slowest ascent path, but also trajectories that are from time to time a bit steeper. This is the underlying idea of the next algorithm.\ [**Algorithm 4:**]{} This algorithm introduces noise in the generation of the ascending trajectory and it is given by the following choice of the parameters: $$\mathbf{R}= \begin{cases} \mathbf{X}_{\rm IS} & \text{if}~n\leq k \\ \frac{1}{k} \sum_{i=n-k}^{n-1} \mathbf{X}_{\rm min(i)} & \text{if}~n>k \end{cases} \label{eq_20}$$ $$\Delta\mathbf{X}=\frac{\epsilon (\mathbf{X}-\mathbf{R})}{\vert \mathbf{X}-\mathbf{R} \vert} \label{eq_21}$$ $$\mathbf{G}=\mathbf{X}_{{\rm min}(n-1)}+\mathbf{Z} \quad \text{with}~\mathbf{Z}\cdot \nabla V(\mathbf{X}_{{\rm min}(n-1)})=0\quad.
\label{eq_22}$$ Thus the algorithm includes a reference position $\mathbf{R}$ whose motion is damped by averaging over several local minima. (We have used $k=30$ and 250 for the MB and MMB PELs, respectively.) The displacement vector $\Delta \mathbf{X}$ is the simple expression already used in algorithm 1, with $\epsilon=10^{-2}$ and $10^{-4}$ for the MB and MMB potentials, respectively. The main novelty of this algorithm with respect to the previous ones is the presence of a random vector $\mathbf{Z}$ in the definition of the point $\mathbf{G}$ that is used to start the iteration. This random vector $\mathbf{Z}$ is orthogonal to $\nabla V(\mathbf{X}_{{\rm min}(n-1)})$ and has magnitude $\gamma$, where $\gamma$ is a uniformly distributed random number in the interval $[0,\gamma_0]$. The maximum value we chose for the magnitude of $\mathbf{Z}$ was $\gamma_0=10^{-3}$ (MB) and $0.0052$ (MMB). The presence of this random vector in the definition of the initial position $\mathbf{G}$ gives the algorithm a chance to depart to some extent from the slowest ascent trajectory. That this flexibility can indeed be needed for finding the lowest lying SP can be recognized from the MMB PEL: In Fig. \[fig5\_mb\_mmb\_alg\]b the slowest ascent path leads up to the higher SP and thus misses the path that goes to the lower SP, because the latter path is locally, i.e. where the two paths meet, a bit steeper than the former one. Therefore an algorithm that follows only the slowest ascent will not be able to find SP1. The trajectories obtained from this version of the DDSA algorithm are shown in Fig. \[fig8\_modalgtog\]. We see that for the case of the MB potential all 16 trajectories lead up to the SP, Fig. \[fig8\_modalgtog\]a. For the case of the MMB potential 13 out of 16 trajectories reach the lowest lying SP, Fig. \[fig8\_modalgtog\]b, thus showing that algorithm 4 has a better performance than the ones we have presented previously.
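A two-dimensional sketch of the two ingredients specific to this version, the averaged reference point of Eq. (\[eq\_20\]) and the random orthogonal vector $\mathbf{Z}$ of Eq. (\[eq\_22\]); as before, `minima[i]` is assumed to store $\mathbf{X}_{{\rm min}(i+1)}$:

```python
import math
import random

def averaged_reference(minima, x_is, n, k):
    """Reference point R of Eq. (20): the IS during the first k levels,
    afterwards the average of the k most recent minima X_min(n-k)..X_min(n-1)."""
    if n <= k:
        return x_is
    window = minima[n - k - 1:n - 1]
    return [sum(p[i] for p in window) / k for i in range(len(x_is))]

def random_orthogonal(grad, gamma0, rng=random):
    """Random vector Z of Eq. (22): orthogonal to grad V, with magnitude
    drawn uniformly from [0, gamma0]."""
    gx, gy = grad
    nrm = math.hypot(gx, gy)
    gamma = rng.uniform(0.0, gamma0)
    return (-gy / nrm * gamma, gx / nrm * gamma)
```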
Hence we can conclude that the presence of weak noise in the search improves the efficiency of the algorithm. ![ Algorithm 4: Trajectories that start from the pink points around the local minimum of the PEL (star). a) Müller-Brown surface, b) modified Müller-Brown surface. Note that for the MMB PEL most of the trajectories arrive at the lowest lying saddle point SP1.[]{data-label="fig8_modalgtog"}](Figure8_alg4.pdf) Conclusion {#sec5} ========== We have introduced a new class of algorithm that allows one to find low lying saddle points in complex potential energy landscapes. The algorithm makes use only of the potential and its first derivative, i.e. quantities that are usually readily available and hence require no extra coding or calculations. In particular the algorithm does not need any information about the second derivatives of the potential energy and hence scales very favorably with the number of degrees of freedom, in contrast to other algorithms that need information about the Hessian matrix. The basic idea of the algorithm is to evaluate the potential at two different points and to use this information to locate the direction that has the slowest ascent. This class of algorithm, which we denote as “Discrete Difference Slowest Ascent”, has a few parameters whose choice influences the performance of the algorithm. Using the Müller-Brown potential as well as a modification of this potential as test cases we have looked into four possible choices of these parameters and identified two as the relevant ones: 1) the reference point that is used to determine the relative position of the two points mentioned above and 2) the starting point for the local optimization. A summary of the result of our tests is presented in Table \[tab\_imp\_extr\_alba\_tot\]. We recognize that the MB PEL is a relatively easy case for the algorithm in that it finds the correct SP as soon as the reference point $\mathbf{R}$ is allowed to move.
More difficult is the case of the MMB PEL in which the lowest SP is not directly connected to the slowest ascent path. Algorithm 2 fails to find this lowest SP, but is able to find the SP that is directly connected to the slowest ascent path. This case is thus an example that illustrates that algorithms which follow just the eigenvector with the smallest eigenvalue do not necessarily lead to the [*lowest*]{} SP. This problem is partially overcome by algorithm 3 since the reference point $\mathbf{R}$ can (sometimes) help to change the trajectory in the direction of the lowest SP. To overcome this problem in a more systematic manner it is, however, necessary to allow the algorithm to follow at least locally a “non-optimal” path, i.e. to deviate from the slowest ascent valley, since this will allow it to discover additional valleys that (potentially) lead to low lying saddle points. Our algorithm 4 does permit such locally non-optimal trajectories and indeed fares significantly better at finding the correct SP.

  --------- ----------------------- -----------------------
   Version            MB                      MMB
             Success in finding SP   Success in finding SP
      1               8/16                    0/16
      2              16/16                    0/16
      3              16/16                   10/16
      4              16/16                   13/16
  --------- ----------------------- -----------------------

  : Comparing the success rate of the different versions of the DDSA algorithm applied to the MB and MMB surface.[]{data-label="tab_imp_extr_alba_tot"}

Although we have considered here only PELs that depend on two degrees of freedom, there is no reason to expect that the DDSA algorithm will not also do well in cases in which the cost function depends on many degrees of freedom. In such complex systems it can still be expected that the total number of valleys that emanate from a local minimum is a linear function of the number of particles. Hence this will not really increase the numerical complexity.
Since the DDSA algorithm allows one to follow each of these valleys with a numerical effort that is linear in the number of degrees of freedom, and the introduced randomness will not change this, it should be possible to locate the saddle points in an efficient manner. Hence we conclude that the DDSA algorithm presented here is a promising approach to probe the properties of complex PELs. The presence of a weakly random component allows it to locate low-lying saddle points even in cases in which certain completely deterministic algorithms will fail. Hence the algorithm should be able to find solutions to optimization problems, such as reaction paths, that so far have been out of reach for a reasonable numerical effort. Acknowledgements: We thank Giancarlo Jug and Daniele Coslovich for useful discussions and a careful reading of the manuscript. This work was supported by the Italian Ministry of Education, University and Research (MIUR) through a Ph.D. Grant of the Progetto Giovani (ambito indagine n.7: materiali avanzati (in particolare ceramici) per applicazioni strutturali), by the Bando VINCI-2014 of the Università Italo-Francese, and by the ANR-COMET.
--- abstract: 'We consider a square-integrable semimartingale and investigate the convex order relations between its discrete, continuous and predictable quadratic variation. As the main results, we show that if the semimartingale has conditionally independent increments and symmetric jump measure, then its discrete realized variance dominates its quadratic variation in increasing convex order. The results have immediate applications to the pricing of options on realized variance. For a class of models including time-changed Lévy models and Sato processes with symmetric jumps our results show that options on variance are typically underpriced, if quadratic variation is substituted for the discretely sampled realized variance.' author: - | Martin Keller-Ressel\ TU Berlin, Institut für Mathematik\ [email protected] - | Claus Griessler\ Universität Wien, Fakultät für Mathematik\ [email protected] bibliography: - 'convex\_ref.bib' date: 'September 25, 2012' title: 'Convex order of discrete, continuous and predictable quadratic variation & applications to options on variance[^1]' --- Introduction ============ Variance Options ---------------- Let $X$ be a stochastic process, and let ${\mathcal{P}}$ be a partition of $[0,T]$ with $n + 1$ division points $$0 = t^n_0 \le t^n_1 \le \dotsm \le t^n_n = T.$$ Then the *realized variance* of $X$ over ${\mathcal{P}}$ is given by $$\label{Eq:RV} RV(X,{\mathcal{P}}) = \sum_{k=1}^n \left(X_{t_{k}} - X_{t_{k-1}}\right)^2.$$ In a financial market, let $S$ be the price of a stock, a currency future, or another security, and let $X_t = \log(S_t/S_0)$ be its (normalized) logarithm. Then the realized variance of $X$, with the increments $t_k - t_{k-1}$ typically being business days, is a way to measure the volatility of the security price over the time interval $[0,T]$. Often realized variance is scaled by $\frac{1}{T}$ such that it measures ‘annualized variance’ over a time interval of unit length. 
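In code, the realized variance of Eq. (\[Eq:RV\]) is simply a sum of squared increments of the sampled path. A minimal Python sketch, including the optional scaling by $1/T$ mentioned above:

```python
def realized_variance(x, horizon=None):
    """Realized variance of the samples x = (X_{t_0}, ..., X_{t_n}),
    i.e. the sum of squared increments over the partition; if `horizon`
    is the maturity T, the result is the annualized variance RV / T."""
    rv = sum((b - a) ** 2 for a, b in zip(x, x[1:]))
    return rv if horizon is None else rv / horizon
```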
Since the uncertainty of future volatility is a risk factor to which all market participants are exposed, many wish to hedge against it, while others are willing to take on volatility risk against a premium. To this end *volatility derivatives* have emerged that allow market participants to take up positions in realized variance, see e.g. @Carr2009 for an overview. Many of these derivatives pay, at the terminal time $T$, an amount $f(\tfrac{1}{T}RV(X,{\mathcal{P}}))$ to the holder, where $f$ is the payoff function[^2]. We call such derivatives *options on variance*. Typical choices for the payoff function are

> ---------------------------------------------- ---------------------------------------------
>   **variance swap**: $f(x) = x - K$              **volatility swap**: $f(x) = \sqrt{x} - K$
>   **call option**: $f(x) = (x - K)^+$            **put option**: $f(x) = (K - x)^+$
> ---------------------------------------------- ---------------------------------------------
>
> where $K \in {\mathbb{R}_{\geqslant 0}}$.

For the variance swap, $K$ may be chosen in such a way that today’s fair value of the swap is zero; this choice of $K$ is called the swap rate, and we denote it by $s$. The strike $K$ for e.g. call options can then be chosen relative to the swap rate $s$ by setting $K = ks$ for some $k \in {\mathbb{R}_{\geqslant 0}}$. We refer to this choice as *relative strike* options. Note that the payoff functions listed above, with the exception of the volatility swap, are convex. In this article we will be concerned with variance options defined through a generic convex payoff function $f: {\mathbb{R}_{\geqslant 0}}\to {\mathbb{R}}$. Quadratic Variation and the Convex Order Conjecture(s) ------------------------------------------------------ One of the cornerstones of the valuation theory of variance options is the replication argument of @Neuberger1992 (see also e.g.
@Lee2010a), which states that a variance swap can be replicated by holding a static portfolio of co-maturing European options on $S$, while dynamically trading in the underlying $S$. The strength of Neuberger’s replication argument lies in the fact that it is essentially model-free, up to the assumptions that (a) $S$ follows a continuous martingale, (b) European options of all strikes and co-maturing with the variance swap are traded, and (c) that the realized variance can be substituted by the quadratic variation $[X,X]_T$ of $X$. Here, we are mainly interested in the last assumption, which is based on the fact that for any semimartingale $X$, and sequence of partitions $({\mathcal{P}}^n)_{n \in {\mathbb{N}}}$ of $[0,T]$, the realized variance $RV(X,{\mathcal{P}}^n)$ converges in probability to the quadratic variation $[X,X]_T$ as the mesh of the partition tends to zero; i.e. $$\label{Eq:RV_convergence} RV(X,{\mathcal{P}}^n) \to [X,X]_T, \qquad \text{in probability}$$ as $\mathrm{mesh}({\mathcal{P}}^n) \to 0$. For options on variance with non-linear payoff, the static replication argument of Neuberger breaks down, but in specific cases like the Heston model, dynamic replication strategies involving European options or variance swaps can be derived (see @Broadie2008a). In general, even if perfect replication is not possible, the arbitrage-free price at time zero of an option on variance is given by $e^{-rT}{\mathbb{E}\left[f(\tfrac{1}{T}RV(X,{\mathcal{P}}))\right]}$ where ${\mathbb{E}\left[.\right]}$ denotes an expectation under the risk-neutral pricing measure, and $r$ the risk-free interest rate. Also for risk-neutral pricing and imperfect hedging, realized variance is frequently substituted by quadratic variation, since the latter is both conceptually and computationally easier to use, and eliminates the dependency on the nature of the partition ${\mathcal{P}}$. See [@Buhler2006; @Carr2003; @Kallsen2009] for examples of this approach. 
All this raises questions about the quality of the approximation $${\mathbb{E}\left[f\left(\frac{1}{T}RV(X,{\mathcal{P}})\right)\right]} \approx {\mathbb{E}\left[f\left(\frac{1}{T}[X,X]_T\right)\right]},$$ namely: (a) how precise is it, and (b) is there a systematic bias? While the precision of the approximation has been studied in the asymptotic limit $n \to \infty$ and $T \to 0$ (see @Broadie2008 [@Sepp2010; @Farkas2010] resp. @KM2010), we are here interested in the existence of a systematic bias *without asymptotics*, i.e. for fixed and finite $n$ and $T$. Numerical evidence given in @Gatheral2008 [@Buhler2006; @KM2010] strongly supports such a bias and suggests that the price of a variance option with convex payoff, evaluated on discretely sampled variance, is higher than the price of an option with the same payoff, evaluated on quadratic variation. From this evidence we are led to the conjecture that $$\label{Eq:convex_order_conj} {\mathbb{E}\left[f\left(RV(X,{\mathcal{P}})\right)\right]} \ge {\mathbb{E}\left[f\left([X,X]_T\right)\right]},$$ for all convex functions $f$, or in other words that realized variance dominates quadratic variation in convex order.[^3] We call this statement the ‘convex order conjecture’ between discrete and continuous variance and write it in more concise form as $RV(X,{\mathcal{P}}) \ge_\text{cx} [X,X]_T$.[^4] A slightly weaker version is obtained if the above inequality is required to hold only for all convex, *increasing* functions $f$. We call this the ‘increasing convex order conjecture’; it is equivalent to the realized variance dominating quadratic variation in increasing convex order, or more concisely $RV(X,{\mathcal{P}}) \ge_\text{icx} [X,X]_T$. If ${\mathbb{E}\left[RV(X,{\mathcal{P}})\right]} = {\mathbb{E}\left[[X,X]_T\right]}$ then the convex order conjecture and the increasing convex order conjecture are equivalent, see @Shaked2007 [Thm. 4.A.35].
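The conjectured direction of the bias can be illustrated with a toy Monte Carlo experiment (this is an illustration only, not part of the results below): for a standard Brownian motion $[X,X]_T = T$ almost surely, so an at-the-money call on quadratic variation pays $([X,X]_T - T)^+ = 0$, while the corresponding call on discretely sampled realized variance has strictly positive expected payoff.

```python
import math
import random

def atm_call_on_rv(n_steps=10, T=1.0, n_paths=2000, seed=7):
    """Monte Carlo estimate of E[(RV - T)^+] for a standard Brownian motion,
    where RV is the realized variance over a uniform partition with n_steps
    steps; the analogous quantity for quadratic variation is exactly zero."""
    rng = random.Random(seed)
    sd = math.sqrt(T / n_steps)
    total = 0.0
    for _ in range(n_paths):
        rv = sum(rng.gauss(0.0, sd) ** 2 for _ in range(n_steps))
        total += max(rv - T, 0.0)
    return total / n_paths
```

Here the positivity of the estimate is just Jensen's inequality, since $\mathbb{E}[RV] = T = [X,X]_T$; the results below establish the ordering for far more general processes.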
Similar questions can be asked about the relationship between the quadratic variation $[X,X]$ and the *predictable* quadratic variation ${\left\langle{X},{X}\right\rangle}$. For continuous semimartingales, of course, ${\left\langle{X},{X}\right\rangle}$ coincides with $[X,X]$. For discontinuous processes ${\left\langle{X},{X}\right\rangle}$ is different from $[X,X]$, but sometimes more analytically tractable, and has been used as a substitute for realized variance, e.g. in @Kallsen2009 for exactly this reason. In the same article, systematic underpricing of variance puts evaluated on predictable quadratic variation, in comparison to quadratic variation, has been observed, such that one may conjecture the relation $$\label{Eq:convex_order_conj_pred} {\mathbb{E}\left[f\left([X,X]_T\right)\right]} \ge {\mathbb{E}\left[f\left({\left\langle{X},{X}\right\rangle}_T\right)\right]}$$ for all convex functions $f$. Note that ${\left\langle{X},{X}\right\rangle}$ is the predictable compensator of $[X,X]$, such that under suitable integrability assumptions ${\mathbb{E}\left[[X,X]_T\right]} = {\mathbb{E}\left[{\left\langle{X},{X}\right\rangle}_T\right]}$ and increasing convex order is equivalent to convex order. The main goal of this article is to prove the presented conjectures under certain assumptions on $X$ and to outline the consequences for the pricing of options on variance. Strategy of the proofs and related work --------------------------------------- The convex (or increasing convex) order relation $Y \le_\text{cx} Z$ ($Y \le_\text{icx} Z$) between two random variables is a statement about the probability laws of $Y$ and $Z$, and thus not sensitive to the nature of the dependency between $Y$ and $Z$. Nevertheless it is often a useful strategy to couple $Y$ and $Z$, i.e.
to define them on a common probability space, which allows one to use stronger and more effective tools, typically martingale arguments.[^5] For the proof of the convex and increasing convex order conjecture between discrete and continuous quadratic variation, we will couple $RV(X,{\mathcal{P}})$ and $[X,X]_T$ by embedding them into a reverse martingale, as the initial and the limit law respectively. The idea for the construction of this reverse martingale comes from the literature on quadratic variation of Lévy processes, or more generally processes with independent increments. @Cogburn1961 show that the realized variance of a process with independent increments over a sequence of nested partitions ${\mathcal{P}}^n$ converges almost surely (and not just in probability) to the quadratic variation. The crucial step of their proof is to show that the sequence of realized variances over nested partitions forms a *reverse martingale* when the underlying process has symmetric distribution. The almost sure convergence then follows from Doob’s martingale convergence theorem and can be extended to arbitrary processes with independent increments by a symmetrization argument. This technique can in fact be traced back to @Levy1940, where a corresponding result for Brownian motion is shown. Finally, let us remark that convex order relations in the context of variance options have also been explored by @Carr2010, but with a very different objective. The authors observe that in an exponential Lévy model $S_t = S_0e^{X_t}$, where $X$ is a Lévy process without Gaussian component, the annualized quadratic variation $\tfrac{1}{t}[X,X]_t$ is decreasing with $t$ in convex order. Consequently, for each convex payoff $f$ the term structure $t \mapsto {\mathbb{E}\left[f(\frac{1}{t}[X,X]_t)\right]}$ of variance options is decreasing in $t$. Also in the proof of @Carr2010 a reverse martingale is used: the process $t \mapsto \frac{1}{t}[X,X]_t$.
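The almost sure convergence along nested partitions can be observed numerically; the following sketch (purely illustrative) samples a Brownian path on a fine dyadic grid and evaluates the realized variance on nested coarsenings, which for $T=1$ should approach $[X,X]_1 = 1$:

```python
import math
import random

def bm_path(n_steps, T=1.0, seed=11):
    """Sample a standard Brownian path on a uniform grid with n_steps steps."""
    rng = random.Random(seed)
    sd = math.sqrt(T / n_steps)
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, sd))
    return path

def rv_on_subgrid(path, stride):
    """Realized variance over the nested coarser partition of the given stride."""
    sub = path[::stride]
    return sum((b - a) ** 2 for a, b in zip(sub, sub[1:]))
```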
Definitions and Preliminaries ============================= We briefly introduce some definitions and properties that will be used in the results and proofs of Section \[Sec:main\].\ For a stochastic process $X$ and a partition ${\mathcal{P}}$ of $[0,T]$, the realized variance $RV(X,{\mathcal{P}})$ of $X$ over ${\mathcal{P}}$ has been defined in . In the case that $RV(X,{\mathcal{P}})$ has finite expectation, we also define the *centered realized variance* by $$\overline{RV}(X,{\mathcal{P}}) = RV(X,{\mathcal{P}}) - {\mathbb{E}\left[RV(X,{\mathcal{P}})\right]}.$$ Finally, to allow for certain generalizations of our results, the *$h$-centered realized variance* is defined, for any function $h: {\mathbb{R}_{\geqslant 0}}\to {\mathbb{R}}$, by $$\label{Eq:centering} \overline{RV}^h(X,{\mathcal{P}}^n) = RV(X,{\mathcal{P}}^n) - h\Big({\mathbb{E}\left[RV(X,{\mathcal{P}}^n)\right]}\Big).$$ Realized variance and centered realized variance can be regarded as the special cases $h(x) = 0$ and $h(x) = x$. We use the same notation to define an $h$-centered version of the quadratic variation $[X,X]_T$, i.e. we define $$\overline{[X,X]}^h_T = [X,X]_T - h\Big({\mathbb{E}\left[[X,X]_T\right]}\Big),$$ provided the expectation is finite. We will only be interested in $h$-centerings where $h: {\mathbb{R}_{\geqslant 0}}\to {\mathbb{R}}$ is Lipschitz continuous with Lipschitz constant at most $1$; we denote the set of these functions by $\mathrm{Lip}_1({\mathbb{R}_{\geqslant 0}})$.\ A sequence $({\mathcal{P}}^n)_{n \in {\mathbb{N}}}$ of partitions is called *nested*, if all division points of ${\mathcal{P}}^{n-1}$ are also division points of ${\mathcal{P}}^n$. Note that by inserting intermediate partitions we can always consider $({\mathcal{P}}^n)$ as a subsequence of a sequence $({{\widetilde{{\mathcal{P}}}}}^n)$ of nested partitions, where each ${{\widetilde{{\mathcal{P}}}}}^n$ has exactly $n+1$ partition points. 
For the results in this article it will be sufficient to consider partition sequences of this type. Finally, the mesh of a partition ${\mathcal{P}}$ is defined as usual by $\mathrm{mesh}({\mathcal{P}}) = \sup_{k \in {\left\{1, \dots, n\right\}}} (t_{k} - t_{k-1})$. As pointed out above $RV(X,{\mathcal{P}}^n) \to [X,X]_T$ in probability, if $\mathrm{mesh}({\mathcal{P}}^n) \to 0$ for any semimartingale $X$. This result holds also for non-nested partitions, see e.g. @Jacod1987 [Thm. I.4.47].\ Finally, let $({\mathcal{G}}_n)$ be a decreasing sequence of $\sigma$-algebras, and let $X_n$ be a sequence of integrable random variables such that $X_n \in {\mathcal{G}}_n$. Then $(X_n)$ is called a *reverse martingale* with respect to ${\mathcal{G}}_n$, if $$\label{Eq:reverse_mg_prop} {\mathbb{E}\left[\left.X_{n-1}\right|{\mathcal{G}}_n\right]} = X_n$$ for all $n \in {\mathbb{N}}$. Note that $({\mathcal{G}}_n)$ is *decreasing*, and thus not a filtration. Similarly, $(X_n)$ is called a reverse submartingale if the above relation holds with ‘$\ge$’ instead of ‘$=$’. An important result is the following (cf. @Loeve1963 [29.3.IV]): A reverse submartingale converges almost surely and also in $L^1$ to a limit $X_\infty$. Note that in contrast to the (forward) submartingale convergence theorem, no additional conditions on $(X_n)$ are needed. If we define the tail $\sigma$-algebra ${\mathcal{G}}_\infty = \bigcap_{n=1}^{\infty} {\mathcal{G}}_n$, then for any $n \in {\mathbb{N}}$ the limit $X_\infty$ can be represented as $${\mathbb{E}\left[\left.X_n\right|{\mathcal{G}}_\infty\right]} = X_\infty.$$ Results on discrete and continuous quadratic variation {#Sec:main} ====================================================== Reverse martingales from realized variance ------------------------------------------ Let $(\Omega, {\mathcal{F}}, {\mathbb{F}}, {\mathbb{P}})$ be a filtered probability space, where ${\mathbb{F}}$ satisfies the usual conditions of right-continuity and ${\mathbb{P}}$-completeness.
Let $Y$ be an ${\mathbb{F}}$-adapted càdlàg process, and let ${\mathcal{H}}$ be a ${\mathbb{P}}$-complete $\sigma$-algebra such that ${\mathcal{H}}\subset {\mathcal{F}}_0$. We say that $Y$ is a process with ${\mathcal{H}}$-conditionally independent increments if for all $0 \le s \le t$ the increment $Y_t - Y_s$ is independent of ${\mathcal{F}}_s$, conditionally on ${\mathcal{H}}$. This definition includes time-changed Lévy processes and additive processes in the sense of @Sato1999. The ${\mathcal{H}}$-conditional independence is equivalent to the assertion that $$\label{Eq:cond_independence} {\mathbb{E}\left[\left.f(Y_t - Y_s)Z\right|{\mathcal{H}}\right]} = {\mathbb{E}\left[\left.f(Y_t - Y_s)\right|{\mathcal{H}}\right]} \cdot {\mathbb{E}\left[\left.Z\right|{\mathcal{H}}\right]}$$ for all bounded measurable $f$ and bounded ${\mathcal{F}}_s$-measurable random variables $Z$. See also @Kallenberg1997 [Chapter 6] for results on conditional independence and @Jacod1987 for processes with conditionally independent increments. Moreover, we say that a process $Y$ has ${\mathcal{H}}$-conditionally symmetric increments, if $$\label{Eq:cond_symmetry} {\mathbb{E}\left[\left.f(Y_t - Y_s)\right|{\mathcal{H}}\right]} = {\mathbb{E}\left[\left.f(Y_s - Y_t)\right|{\mathcal{H}}\right]}$$ for all $t,s \ge 0$ and bounded measurable $f$. Alternatively, we can use conditional characteristic functions to characterize conditional symmetry, i.e. a random variable $X$ is ${\mathcal{H}}$-conditionally symmetric if and only if $$\label{eq:charf_equal} {\mathbb{E}\left[\left.\exp\left(iu X\right)\right|{\mathcal{H}}\right]} = {\mathbb{E}\left[\left.\exp\left(-iuX\right)\right|{\mathcal{H}}\right]}, \qquad \forall\, u \in {\mathbb{R}}.$$ It will be helpful to know that a process with conditionally symmetric and independent increments[^6] is a martingale up to an integrability assumption.
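For an (unconditionally) symmetric random variable the characterization via characteristic functions can be checked numerically: the two expectations coincide and, for a standard normal, the common value $e^{-u^2/2}$ is real. A Monte Carlo sketch (the sample size and the choice of $u$ are arbitrary, illustrative values):

```python
import cmath
import math
import random

random.seed(4)
n, u = 100_000, 1.3
xs = [random.gauss(0.0, 1.0) for _ in range(n)]      # draws from a symmetric law

# Monte Carlo estimates of E[exp(iuX)] and E[exp(-iuX)]
phi_plus = sum(cmath.exp(1j * u * x) for x in xs) / n
phi_minus = sum(cmath.exp(-1j * u * x) for x in xs) / n
```

By symmetry the imaginary part of the estimate is only Monte Carlo noise, and the real part is close to $e^{-u^2/2}$.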
\[Lem:martingale\] Let $X$ be an ${\mathbb{F}}$-adapted process satisfying ${\mathbb{E}\left[|X_t|\right]} < \infty$ for all $t \ge 0$. If $X$ has ${\mathcal{H}}$-conditionally symmetric and independent increments, then it is a martingale. Let $t \ge s \ge 0$ and let $Z$ be a bounded ${\mathcal{F}}_s$-measurable random variable. Using first conditional independence and then conditional symmetry of increments we obtain $$\begin{aligned} {\mathbb{E}\left[(X_t - X_s)Z\right]} &= {\mathbb{E}\left[{\mathbb{E}\left[\left.(X_t - X_s)Z\right|{\mathcal{H}}\right]}\right]} = {\mathbb{E}\left[{\mathbb{E}\left[\left.(X_t - X_s)\right|{\mathcal{H}}\right]}\cdot {\mathbb{E}\left[\left.Z\right|{\mathcal{H}}\right]}\right]} = \\ &= {\mathbb{E}\left[{\mathbb{E}\left[\left.(X_s - X_t)\right|{\mathcal{H}}\right]}\cdot {\mathbb{E}\left[\left.Z\right|{\mathcal{H}}\right]}\right]} = {\mathbb{E}\left[(X_s - X_t)Z\right]}\end{aligned}$$ and conclude that ${\mathbb{E}\left[(X_t - X_s)Z\right]} = 0$. Since $Z$ was an arbitrary bounded ${\mathcal{F}}_s$-measurable random variable it follows that ${\mathbb{E}\left[\left.(X_t - X_s)\right|{\mathcal{F}}_s\right]} = 0$ and hence that $X$ is a martingale. The following lemma establishes a reflection principle for processes whose increments are conditionally symmetric and independent. \[Lem:symmetrization\] Let $Y$ be a process with ${\mathcal{H}}$-conditionally symmetric and independent increments and let $t_* \ge 0$. Then the process ${{\widehat{Y}}}$ defined by $$\label{Eq:reflection} {{\widehat{Y}}}_s = Y_{s \wedge t_*} - \left(Y_s - Y_{s \wedge t_*}\right)$$ is equal in law to $Y$, conditionally on ${\mathcal{H}}$.\ Moreover, $RV({{\widehat{Y}}},{\mathcal{P}}) = RV(Y,{\mathcal{P}})$ for all partitions ${\mathcal{P}}$ for which $t_*$ is a partition point.
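Before turning to the proof, the reflection can be illustrated on a discretely sampled random-walk path: reflecting at a partition point $t_*$ flips the sign of every later increment, so each squared increment, and hence the realized variance, is unchanged on any partition containing $t_*$. A small sketch (helper names and the dict representation of a path are illustrative):

```python
import random

def reflect(path, t_star):
    """Discrete analogue of the reflection: Y_hat_s = Y_{s ^ t*} - (Y_s - Y_{s ^ t*})."""
    y_star = path[t_star]
    return {t: (v if t <= t_star else 2.0 * y_star - v) for t, v in path.items()}

def rv(path, partition):
    """Realized variance of the path over the given partition."""
    pts = sorted(partition)
    return sum((path[pts[k]] - path[pts[k - 1]]) ** 2 for k in range(1, len(pts)))

random.seed(0)
times = [k / 8 for k in range(9)]
path = {0.0: 0.0}
for a, b in zip(times, times[1:]):
    path[b] = path[a] + random.gauss(0.0, 1.0)   # symmetric, independent increments

hat = reflect(path, t_star=0.5)
grid = [0.0, 0.25, 0.5, 0.75, 1.0]               # a partition containing t* = 0.5
```

On a partition that does not contain $t_*$, the increment straddling $t_*$ changes and the two realized variances generally differ.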
Showing that ${{\widehat{Y}}}$ is equal in law to $Y$, conditionally on ${\mathcal{H}}$, is equivalent to showing that for any sequence $t_0 \le t_1 \le \dots \le t_n$ and bounded measurable $f: {\mathbb{R}}^n \to {\mathbb{R}}$ it holds that $$\label{eq:equality} {\mathbb{E}\left[\left.f(Y_{t_1} - Y_{t_0}, \dotsc, Y_{t_n} - Y_{t_{n-1}})\right|{\mathcal{H}}\right]} = {\mathbb{E}\left[\left.f({{\widehat{Y}}}_{t_1} - {{\widehat{Y}}}_{t_0}, \dotsc, {{\widehat{Y}}}_{t_n} - {{\widehat{Y}}}_{t_{n-1}})\right|{\mathcal{H}}\right]}.$$ To show this equality it is sufficient to prove the equality of the ${\mathcal{H}}$-conditional characteristic functions of each side. By inserting intermediate points we may assume without loss of generality that $t_* = t_m$ for some $m \in {\left\{0,\dotsc, n\right\}}$. Using the ${\mathcal{H}}$-conditional independence and the ${\mathcal{H}}$-conditional symmetry of increments we obtain $$\begin{aligned} &{\mathbb{E}\left[\left.\exp\left(i \sum_{k=1}^n u_k \left({{\widehat{Y}}}_{t_k} - {{\widehat{Y}}}_{t_{k-1}} \right) \right)\right|{\mathcal{H}}\right]} = \\ &= {\mathbb{E}\left[\left.\exp\left(i \sum_{k=1}^m u_k \left(Y_{t_k} - Y_{t_{k-1}}\right) \right)\right|{\mathcal{H}}\right]} \cdot {\mathbb{E}\left[\left.\exp\left(-i \sum_{k=m+1}^n u_k \left(Y_{t_k} - Y_{t_{k-1}}\right) \right)\right|{\mathcal{H}}\right]} = \\ &= {\mathbb{E}\left[\left.\exp\left(i \sum_{k=1}^m u_k \left(Y_{t_k} - Y_{t_{k-1}}\right) \right)\right|{\mathcal{H}}\right]} \cdot {\mathbb{E}\left[\left.\exp\left(i\sum_{k=m+1}^n u_k \left(Y_{t_k} - Y_{t_{k-1}}\right) \right)\right|{\mathcal{H}}\right]} = \\ &= {\mathbb{E}\left[\left.\exp\left(i \sum_{k=1}^n u_k \left(Y_{t_k} - Y_{t_{k-1}}\right) \right)\right|{\mathcal{H}}\right]},\end{aligned}$$ which proves . It remains to show that $RV({{\widehat{Y}}},{\mathcal{P}}) = RV(Y,{\mathcal{P}})$ for partitions ${\mathcal{P}}$ for which $t_*$ is a partition point.
Denote the partition by $t_0 \le \dotsm \le t_n$ as before and assume that $t_*$ is the $m$-th partition point, i.e. $t_* = t_m$. Then $$\begin{aligned} RV({{\widehat{Y}}},{\mathcal{P}}) &= \sum_{k=1}^m \left(Y_{t_k} - Y_{t_{k-1}}\right)^2 + \sum_{k=m+1}^n (-1)^2 \left(Y_{t_k} - Y_{t_{k-1}}\right)^2 = \\ &=\sum_{k=1}^n \left(Y_{t_k} - Y_{t_{k-1}}\right)^2 = RV(Y,{\mathcal{P}}).\qedhere\end{aligned}$$ Using this Lemma we show that we can construct a reverse martingale from the realized variances of a process with conditionally symmetric and independent increments, when the realized variances are taken over nested partitions. For a process with (unconditionally) symmetric and independent increments this result has been shown by @Cogburn1961 [Thm. 1]. Our result is a minor variation of the theorem of @Cogburn1961, but will be generalized in the next section. \[Thm:reverse\_martingale\] Let $X$ be a process with ${\mathcal{H}}$-conditionally symmetric and independent increments and let $({\mathcal{P}}^n)_{n \in {\mathbb{N}}}$ be a sequence of nested partitions of $[0,T]$ such that ${\mathbb{E}\left[RV(X,{\mathcal{P}}^{n})\right]} < \infty$ for all $n \in {\mathbb{N}}$. Then the realized variance of $X$ over the partitions $({\mathcal{P}}^n)$ is a reverse martingale, i.e. it satisfies $$\label{Eq:bw_martingale} {\mathbb{E}\left[\left.RV(X,{\mathcal{P}}^{n-1})\right|{\mathcal{G}}_n\right]} = RV(X,{\mathcal{P}}^{n}), \qquad (n \in {\mathbb{N}})$$ where $$\label{Eq:define_Gn} {\mathcal{G}}_n = {\mathcal{H}}\vee \sigma\Big(RV(X,{\mathcal{P}}^{n}), RV(X,{\mathcal{P}}^{n+1}), \dotsc\Big).$$ Since $({\mathcal{P}}^n)$ is a nested sequence of partitions, we may assume without loss of generality that ${\mathcal{P}}^{n-1}$ and ${\mathcal{P}}^n$ differ by a single division point, which we denote by $t^*$. Denote by $a, b$ the two closest division points of ${\mathcal{P}}^{n-1}$, i.e. $t^*$ is inserted into the interval $[a,b]$. 
Let ${{\widehat{X}}}$ be the process $X$ reflected to the right of $t^*$ as in . By Lemma \[Lem:symmetrization\] ${{\widehat{X}}}$ is equal in law to $X$, conditionally on ${\mathcal{H}}$. Moreover, also by Lemma \[Lem:symmetrization\], $RV({{\widehat{X}}},{\mathcal{P}}^k) = RV(X,{\mathcal{P}}^k)$ for all $k \ge n$. This implies that $$\begin{aligned} \label{Eq:equal_filtration} {\mathcal{G}}_n &= {\mathcal{H}}\vee \sigma\Big(RV(X,{\mathcal{P}}^{n}), RV(X,{\mathcal{P}}^{n+1}), \dotsc\Big) = \\ &= {\mathcal{H}}\vee \sigma\Big(RV({{\widehat{X}}},{\mathcal{P}}^{n}), RV({{\widehat{X}}},{\mathcal{P}}^{n+1}),\dotsc\Big). \notag\end{aligned}$$ To ease notation we abbreviate ${\mathcal{R}}_n := \sigma\Big(RV(X,{\mathcal{P}}^{n}), RV(X,{\mathcal{P}}^{n+1}), \dotsc\Big)$. The next step is to calculate $U := {\mathbb{E}\left[\left.RV(X,{\mathcal{P}}^{n-1}) - RV(X,{\mathcal{P}}^{n})\right|{\mathcal{G}}_n\right]}$. By direct calculation we obtain that $$\label{Eq:RV_decomp} RV(X,{\mathcal{P}}^{n-1}) - RV(X,{\mathcal{P}}^{n}) = 2\left(X_b - X_{t^*}\right) \left(X_{t^*} - X_a\right)$$ and conclude from the properties of conditional expectations that $${\mathbb{E}\left[U H Q\right]} = 2\,{\mathbb{E}\left[(X_b - X_{t^*}) (X_{t^*} - X_a)HQ\right]}, \qquad \forall\,H\in {\mathcal{H}}, \,Q \in {\mathcal{R}}_n.$$ Using Lemma \[Lem:symmetrization\] and we obtain $$\begin{gathered} {\mathbb{E}\left[(X_b - X_{t^*}) (X_{t^*} - X_a) HQ\right]} = {\mathbb{E}\left[{\mathbb{E}\left[\left.(X_b - X_{t^*}) (X_{t^*} - X_a) Q\right|{\mathcal{H}}\right]}H\right]} = \\ = {\mathbb{E}\left[{\mathbb{E}\left[\left.({{\widehat{X}}}_b - {{\widehat{X}}}_{t^*}) ({{\widehat{X}}}_{t^*} - {{\widehat{X}}}_a) Q\right|{\mathcal{H}}\right]}H\right]} = -{\mathbb{E}\left[(X_b - X_{t^*}) (X_{t^*} - X_a) HQ\right]},\end{gathered}$$ and hence that ${\mathbb{E}\left[UHQ\right]} = 0$ for all $H \in {\mathcal{H}}, Q \in {\mathcal{R}}_n$.
From ${\mathcal{G}}_n = {\mathcal{H}}\vee {\mathcal{R}}_n$ we conclude that indeed $${\mathbb{E}\left[\left.RV(X,{\mathcal{P}}^{n-1}) - RV(X,{\mathcal{P}}^{n})\right|{\mathcal{G}}_n\right]} = U = 0$$ showing the reverse martingale property . Coupling Realized Variance and Quadratic Variation -------------------------------------------------- To introduce the quadratic variation into the setting we now add the assumption that $X$ is a semimartingale, such that $RV(X,{\mathcal{P}}^n) \to [X,X]_T$ in probability whenever $\mathrm{mesh}({\mathcal{P}}^n) \to 0$. Furthermore we require that $X$ is a square-integrable semimartingale, i.e. that it is a special semimartingale with canonical decomposition $X = X_0 + N + A$ where $N$ is a square-integrable martingale and $A$ has square-integrable total variation. This assumption implies in particular that $\sup_{t \in [0,T]} (X_t - X_0)^2$ is integrable, see @Protter2004 [Ch. V.2].[^7] Combining these assumptions with Theorem \[Thm:reverse\_martingale\] immediately gives the following corollary. Suppose that $X$ is a square-integrable martingale with ${\mathcal{H}}$-conditionally symmetric and independent increments and that $({\mathcal{P}}^n)_{n \in {\mathbb{N}}}$ is a nested sequence of partitions such that $\mathrm{mesh}({\mathcal{P}}^n) \to 0$. Then $RV(X,{\mathcal{P}}^{n}) \to [X,X]_T$ a.s., ${\mathbb{E}\left[[X,X]_T\right]} < \infty$ and $$\label{Eq:conditional_qv} {\mathbb{E}\left[\left.RV(X,{\mathcal{P}}^n)\right|{\mathcal{G}}_\infty\right]} = [X,X]_T$$ holds for any ${\mathcal{P}}^n$ and with ${\mathcal{G}}_\infty = \bigcap_{n \in {\mathbb{N}}} {\mathcal{G}}_n$, where ${\mathcal{G}}_n = {\mathcal{H}}\vee \sigma\Big(RV(X,{\mathcal{P}}^{n}), RV(X,{\mathcal{P}}^{n+1}), \dotsc\Big)$ as in Theorem \[Thm:reverse\_martingale\]. Since $RV(X,{\mathcal{P}}^n) \le 4n \sup_{t \in [0,T]} (X_t - X_0)^2$ and $X$ is square-integrable, we have that ${\mathbb{E}\left[RV(X,{\mathcal{P}}^n)\right]} < \infty$ for all $n \in {\mathbb{N}}$.
It follows from Theorem \[Thm:reverse\_martingale\] that the sequence $(RV(X,{\mathcal{P}}^n))_{n \in {\mathbb{N}}}$ is a reverse ${\mathcal{G}}_n$-martingale. By the convergence theorem for reverse martingales this sequence converges almost surely and also in $L^1$, and it remains to identify the limit. Since $\mathrm{mesh}({\mathcal{P}}^n) \to 0$ we have by [@Jacod1987 Theorem I.4.47] that $RV(X,{\mathcal{P}}^{n})$ converges in probability to the quadratic variation $[X,X]_T$. We conclude that $[X,X]_T$ is in fact the almost sure limit of $RV(X,{\mathcal{P}}^{n})$ as $n \to \infty$, that ${\mathbb{E}\left[[X,X]_T\right]} < \infty$ and that ${\mathbb{E}\left[\left.RV(X,{\mathcal{P}}^n)\right|{\mathcal{G}}_\infty\right]} = [X,X]_T$. In the case that we are only interested in linear expectations of realized variance and quadratic variation, the symmetry assumption on the increments of $X$ can be dropped and we obtain the following. \[cor:expectations\] Let $X$ be a square-integrable semimartingale with ${\mathcal{H}}$-conditionally independent increments and let ${\mathcal{P}}: 0 = t_0 \le t_1 \le \dotsc \le t_N = T$ be a partition of $[0,T]$. Then $$\label{Eq:RV_expectation} {\mathbb{E}\left[\left.RV(X,{\mathcal{P}})\right|{\mathcal{H}}\right]} = {\mathbb{E}\left[\left.[X,X]_T\right|{\mathcal{H}}\right]} + \sum_{k=1}^{N}{{\mathbb{E}\left[\left.X_{t_{k}} - X_{t_{k-1}}\right|{\mathcal{H}}\right]}^2}.$$ In particular, ${\mathbb{E}\left[\left.RV(X,{\mathcal{P}})\right|{\mathcal{H}}\right]} = {\mathbb{E}\left[\left.[X,X]_T\right|{\mathcal{H}}\right]}$ for all partitions ${\mathcal{P}}$ of $[0,T]$, if and only if the process $X$ is a martingale on $[0,T]$. Let $Y$ be an ${\mathcal{H}}$-conditionally independent copy[^8] of $X$. Then $Z = X - Y$ is a process with ${\mathcal{H}}$-conditionally symmetric and independent increments.
Decomposing $RV(X-Y,{\mathcal{P}})$ we get $$\begin{aligned} \label{Eq:RV_decomp3} RV(X-Y,{\mathcal{P}}) &= RV(X,{\mathcal{P}}) + RV(Y,{\mathcal{P}}) - \\ &- 2\sum_{k=1}^{N}\left(X_{t_{k}} - X_{t_{k-1}}\right)\left(Y_{t_{k}} - Y_{t_{k-1}}\right), \notag\end{aligned}$$ which can be bounded from above by $2 (RV(X,{\mathcal{P}}) + RV(Y,{\mathcal{P}}))$. Since $X$ is square-integrable, we have that ${\mathbb{E}\left[RV(X,{\mathcal{P}})\right]} < \infty$ and it follows that also ${\mathbb{E}\left[RV(X-Y,{\mathcal{P}})\right]} < \infty$. Thus Theorem \[Thm:reverse\_martingale\] applies and we obtain that $$\label{Eq:QV_symmetrized} {\mathbb{E}\left[\left.RV(X-Y,{\mathcal{P}})\right|{\mathcal{G}}_\infty\right]} = [X-Y,X-Y]_T = [X,X]_T + [Y,Y]_T.$$ Taking ${\mathcal{H}}$-conditional expectations and combining with yields $$\begin{aligned} {\mathbb{E}\left[\left.[X,X]_T\right|{\mathcal{H}}\right]} + {\mathbb{E}\left[\left.[Y,Y]_T\right|{\mathcal{H}}\right]} &= {\mathbb{E}\left[\left.RV(X,{\mathcal{P}})\right|{\mathcal{H}}\right]} + {\mathbb{E}\left[\left.RV(Y,{\mathcal{P}})\right|{\mathcal{H}}\right]} - \\ &- 2 \sum_{k=1}^{N}{\mathbb{E}\left[\left.X_{t_{k}} - X_{t_{k-1}}\right|{\mathcal{H}}\right]} \cdot {\mathbb{E}\left[\left.Y_{t_{k}} - Y_{t_{k-1}}\right|{\mathcal{H}}\right]}.\end{aligned}$$ Since $X$ and $Y$ have equal ${\mathcal{H}}$-conditional distributions, follows. It is obvious that the sum in vanishes if $X$ is a martingale. Conversely, if the sum vanishes for any partition ${\mathcal{P}}$, then ${\mathbb{E}\left[\left.X_t - X_s\right|{\mathcal{H}}\right]} = 0$ for any $0 \le s \le t$.
By the ${\mathcal{H}}$-conditional independence of increments, we conclude that for any bounded ${\mathcal{F}}_s$-measurable random variable $Z$ $${\mathbb{E}\left[(X_t - X_s)Z\right]} = {\mathbb{E}\left[{\mathbb{E}\left[\left.(X_t - X_s)Z\right|{\mathcal{H}}\right]}\right]} = {\mathbb{E}\left[{\mathbb{E}\left[\left.X_t - X_s\right|{\mathcal{H}}\right]} \cdot {\mathbb{E}\left[\left.Z\right|{\mathcal{H}}\right]}\right]} = 0.$$ Hence ${\mathbb{E}\left[\left.X_t - X_s\right|{\mathcal{F}}_s\right]} = 0$ and $X$ is a martingale. Results for semimartingales with symmetric jump measure ------------------------------------------------------- Theorem \[Thm:reverse\_martingale\] on processes with symmetric and conditionally independent increments is typically not suitable for the applications to variance options that we have in mind. The reason is that the risk-neutral log-price $X$ of some asset can be a martingale only in degenerate cases, as already the asset price $S_t = S_0 e^{X_t}$ itself must be a martingale (at least locally). Assume for illustration that the risk-neutral security price $S_t = S_0 e^{X_t}$ is a true martingale and non-deterministic; then it follows immediately by Jensen’s inequality that ${\mathbb{E}\left[X_t\right]} < \log {\mathbb{E}\left[S_t/S_0\right]} = X_0$ and hence that $X$ is not a martingale. Even if $S$ is a (non-deterministic) strictly local martingale that is bounded away from zero, it follows by Fatou’s lemma and Jensen’s inequality that $${\mathbb{E}\left[X_t\right]} \le \liminf_{n \to \infty} {\mathbb{E}\left[X_{t \wedge \tau_n}\right]} < \log {\mathbb{E}\left[S_{t \wedge \tau_n}/S_0\right]} = X_0,$$ for some localizing sequence $(\tau_n)_{n \in {\mathbb{N}}}$, and hence that $X$ is not a martingale. For this reason we want to weaken the symmetry assumptions on $X$.
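To see this argument in numbers, take the Black–Scholes log-price $X_t = \sigma W_t - \sigma^2 t/2$ with $X_0 = 0$: $e^{X}$ is a martingale with ${\mathbb{E}[e^{X_t}]} = 1$, while ${\mathbb{E}[X_t]} = -\sigma^2 t/2 < 0 = X_0$, so $X$ itself is not a martingale. A Monte Carlo sketch confirming both facts (parameter values are illustrative):

```python
import math
import random

random.seed(1)
sigma, t, n = 0.5, 1.0, 200_000
# samples of X_t = sigma * W_t - sigma^2 * t / 2
xs = [sigma * random.gauss(0.0, math.sqrt(t)) - 0.5 * sigma ** 2 * t for _ in range(n)]

mean_S = sum(math.exp(x) for x in xs) / n   # estimates E[S_t / S_0] = 1
mean_X = sum(xs) / n                        # estimates E[X_t] = -sigma^2 t / 2 = -0.125
```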
We introduce the following assumptions: \[myass\] The process $X$ is a square-integrable semimartingale starting at $X_0 = 0$ with the following properties: (a) $X$ has ${\mathcal{H}}$-conditionally independent increments\[Ass:PII\]; (b) $X$ has no predictable times of discontinuity; (c) $\nu$, the predictable compensator of the jump measure of $X$, is symmetric, i.e. it satisfies $\nu(\omega,dt,dx) = \nu(\omega,dt,-dx)$ a.s. \[Ass:symmetry\]. ‘No predictable times of discontinuity’ means that $\Delta X_\tau = 0$ a.s. for each predictable time $\tau$. This condition is also called *quasi-left-continuity* of $X$. It does not rule out discontinuities at inaccessible stopping times, like the jumps of Lévy processes. For processes with unconditionally independent increments, ‘no predictable times of discontinuity’ can be replaced by ‘no fixed times of discontinuity’ (see [@Jacod1987 Cor.II.5.12]). Let us briefly summarize some of the consequences of these assumptions. First, since $X$ is a square-integrable semimartingale, it is also a special semimartingale (cf. [@Jacod1987 II.2.27, II.2.28]), and we can choose $h(x) = x$ as a truncation function, relative to which the semimartingale characteristics $(B,C,\nu)_h$ are defined. For this choice of truncation functions we have that $B = A$, i.e. the drift $B$ is exactly the predictable finite variation part of the canonical semimartingale decomposition. Second, a semimartingale has ${\mathcal{H}}$-conditionally independent increments, if and only if there exists a version of its characteristics $(B,C,\nu)$ that is ${\mathcal{H}}$-measurable. The increments are unconditionally independent, if and only if ${\mathcal{H}}$ is trivial or equivalently if there exists a version of the characteristics $(B,C,\nu)$ that is deterministic (cf. [@Jacod1987 Thm. II.4.15]). 
Third, the assumption that $X$ has no predictable times of discontinuity implies that the version of the characteristics may be chosen such that they are continuous (cf. [@Jacod1987 Prop. II.2.9]). \[Thm:QV\_order\] Let $X$ be a process satisfying Assumption \[myass\], and let ${\mathcal{P}}$ be a partition of $[0,T]$. Then there exists a $\sigma$-algebra ${\mathcal{G}}_\infty \subset {\mathcal{F}}$ such that $$\label{Eq:expectation_tail} {\mathbb{E}\left[\left.RV(X,{\mathcal{P}})\right|{\mathcal{G}}_\infty\right]} = [X,X]_T + RV(B,{\mathcal{P}}).$$ If the increments of $X$ are unconditionally independent, it also holds that $$\label{Eq:f_centering_tail} {\mathbb{E}\left[\left.\overline{RV}^h(X,{\mathcal{P}})\right|{\mathcal{G}}_\infty\right]} \ge \overline{[X,X]}^h_T,$$ for each $h \in \mathrm{Lip}_1({\mathbb{R}_{\geqslant 0}})$, and with equality for $h(x) = x$. In the proof below ${\mathcal{G}}_\infty$ will be constructed explicitly as the tail $\sigma$-algebra of a sequence similar to . Let $({\mathcal{P}}^n)$ be a nested sequence of partitions with $\mathrm{mesh}({\mathcal{P}}^n) \to 0$, such that ${\mathcal{P}}^N = {\mathcal{P}}$ for some $N$. Let $X = Y + B$ be the canonical decomposition of $X$ into local martingale part and predictable finite variation part. Since $X$ is square-integrable, $Y$ is a square-integrable martingale and $B$ has square-integrable total variation. In addition, define $$\label{Eq:Gn_repeat} {\mathcal{G}}_n = {\mathcal{H}}\vee \sigma\Big(RV(Y,{\mathcal{P}}^{n}), RV(Y,{\mathcal{P}}^{n+1}), \dotsc\Big)$$ and let ${\mathcal{G}}_\infty$ be the tail $\sigma$-algebra $\bigcap_{n \in {\mathbb{N}}} {\mathcal{G}}_n$ of $({\mathcal{G}}_n)_{n \in {\mathbb{N}}}$. Following [@Jacod1987 Thm.
II.6.6], the conditional characteristic function of an increment of $Y = X - B$ is given by $$\label{Eq:cond_char} \begin{split} \phi_{t,s}(u) &:= {\mathbb{E}\left[\left.e^{iu(Y_t - Y_s)}\right|{\mathcal{H}}\right]} = \\ &= \exp \left(-\frac{u^2}{2}(C_t - C_s) + \int_s^t \int_{{\mathbb{R}}}{\left(e^{iux} - 1 - iux\right)}\nu(\omega,dr,dx) \right). \end{split}$$ The symmetry of $\nu$ implies that $\phi_{t,s}(u) = \phi_{t,s}(-u)$, which by equation  implies the ${\mathcal{H}}$-conditional symmetry of $Y_t - Y_s$. Hence $Y$ is a process with ${\mathcal{H}}$-conditionally symmetric and independent increments. We also have that ${\mathbb{E}\left[RV(Y,{\mathcal{P}}^n)\right]} < \infty$ since $Y$ is square-integrable. Thus Theorem \[Thm:reverse\_martingale\] can be applied to $Y$ and we obtain that $$\label{Eq:tail_sigma1} {\mathbb{E}\left[\left.RV(Y,{\mathcal{P}}^n)\right|{\mathcal{G}}_\infty\right]} = [Y,Y]_T,$$ for all $n \in {\mathbb{N}}$ and in particular for $n = N$ and ${\mathcal{P}}^n = {\mathcal{P}}$. Since $X$ is quasi-left-continuous, its predictable finite variation component $B$ is even continuous, and it follows by [@Jacod1987 Prop. I.4.49] that $$\label{Eq:QV_decomp} [X,X]_T = [Y,Y]_T + 2[Y,B]_T + [B,B]_T = [Y,Y]_T.$$ Furthermore we can decompose the realized variance of $X$ as $$\label{Eq:RV_decomp_next} RV(X,{\mathcal{P}}) = RV(Y,{\mathcal{P}}) + RV(B,{\mathcal{P}}) + 2\sum_{k=1}^n \left(Y_{t_k} - Y_{t_{k-1}}\right) \cdot \left(B_{t_k} - B_{t_{k-1}}\right).$$ We set $U := {\mathbb{E}\left[\left.\left(Y_{t_k} - Y_{t_{k-1}}\right) \cdot \left(B_{t_k} - B_{t_{k-1}}\right)\right|{\mathcal{G}}_n\right]}$ and show that $U = 0$, similar to the proof of Theorem \[Thm:reverse\_martingale\].
We set ${\mathcal{R}}_n := \sigma\Big(RV(Y,{\mathcal{P}}^{n}), RV(Y,{\mathcal{P}}^{n+1}), \dotsc \Big)$ and using the reflection principle from Lemma \[Lem:symmetrization\] we obtain for all $H \in {\mathcal{H}}$ and $Q \in {\mathcal{R}}_n$ that $$\begin{aligned} {\mathbb{E}\left[UHQ\right]} &= {\mathbb{E}\left[{\mathbb{E}\left[\left.\left(Y_{t_k} - Y_{t_{k-1}}\right) Q\right|{\mathcal{H}}\right]} \left(B_{t_k} - B_{t_{k-1}}\right) H\right]} = \\ &= {\mathbb{E}\left[{\mathbb{E}\left[\left.\left(Y_{t_{k-1}} - Y_{t_k}\right) Q\right|{\mathcal{H}}\right]} \left(B_{t_k} - B_{t_{k-1}}\right) H\right]} = -{\mathbb{E}\left[UHQ\right]}\end{aligned}$$ and hence that indeed, $U=0$. Taking ${\mathcal{G}}_\infty$-conditional expectations, we have $${\mathbb{E}\left[\left.\left(Y_{t_k} - Y_{t_{k-1}}\right) \cdot \left(B_{t_k} - B_{t_{k-1}}\right)\right|{\mathcal{G}}_\infty\right]} = 0,$$ and thus $$\label{Eq:RV_decomp_next2} {\mathbb{E}\left[\left.RV(X,{\mathcal{P}})\right|{\mathcal{G}}_\infty\right]} = {\mathbb{E}\left[\left.RV(Y,{\mathcal{P}})\right|{\mathcal{G}}_\infty\right]} + RV(B,{\mathcal{P}}).$$ Combining , and yields . Suppose now that the increments of $X$ are unconditionally independent. Then $B$ is even deterministic (see the discussion after Assumption \[myass\]), and we derive from that $${\mathbb{E}\left[\left.\overline{RV}^h(X,{\mathcal{P}})\right|{\mathcal{G}}_\infty\right]} = [X,X]_T + RV(B,{\mathcal{P}}) - h\Big({\mathbb{E}\left[[X,X]_T\right]} + RV(B,{\mathcal{P}})\Big).$$ Applying the Lipschitz property of $h$, follows. Convex Order Relations ---------------------- By a simple application of Jensen’s inequality, Theorem \[Thm:QV\_order\] can be translated into a convex order relation between the realized variance and the quadratic variation of $X$. \[Thm:order1\] Let $X$ be a process satisfying Assumption \[myass\], and let ${\mathcal{P}}$ be a partition of $[0,T]$.
Then the following holds: (a) For any increasing convex function $f$ $$\label{Eq:convex_order4} {\mathbb{E}\left[f\Big(RV(X,{\mathcal{P}})\Big)\right]} \ge {\mathbb{E}\left[f\Big([X,X]_T\Big)\right]}.$$ If $X$ is a martingale, then holds for any convex function $f$. (b) If the increments of $X$ are unconditionally independent, then the $h$-centered realized variance satisfies $$\label{Eq:convex_order5} {\mathbb{E}\left[f\Big(\overline{RV}^h(X,{\mathcal{P}})\Big)\right]} \ge {\mathbb{E}\left[f\Big(\overline{[X,X]}_T^h\Big)\right]}$$ for any increasing convex function $f$ and $h \in \mathrm{Lip}_1({\mathbb{R}_{\geqslant 0}})$. If $X$ is a martingale or $h(x) = x$, then holds for any convex function $f$. Summing up, we can say that the ‘increasing convex order conjecture’ set forth in the introduction holds true at least for square-integrable semimartingales $X$ with conditionally independent increments, symmetric jumps and without predictable times of discontinuity. If $X$ is also a martingale, then even the ‘convex order conjecture’ holds true. However, as discussed at the beginning of the section, $X$ cannot be a martingale if $e^X$ is assumed to be a non-deterministic martingale, so that only the increasing convex order conjecture holds in the relevant applications in finance. Applying an increasing convex function $f$ to both sides of and using Jensen’s inequality, we have that $${\mathbb{E}\left[\left.f\Big(RV(X,{\mathcal{P}})\Big)\right|{\mathcal{G}}_\infty\right]} \ge f\Big([X,X]_T\Big).$$ Taking (unconditional) expectations, the convex order relation follows for all increasing convex $f$. If $X$ is a martingale, then by Corollary \[cor:expectations\] ${\mathbb{E}\left[RV(X,{\mathcal{P}})\right]} = {\mathbb{E}\left[[X,X]_T\right]}$ and it follows from [@Shaked2007 Thm.4.A.35] that holds even for any convex $f$. Equation is derived analogously from .
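As a sanity check of the increasing convex order relation in the simplest case, let $X$ be a Brownian motion with volatility $\sigma$, so that $[X,X]_T = \sigma^2 T$ is deterministic, and take the increasing convex payoff $f(x) = (x - K)^+$. Realized variance over an equidistant $n$-step partition is a sum of squared Gaussian increments, and its expected payoff strictly exceeds the (degenerate) continuous one. A Monte Carlo sketch (all parameter choices are illustrative):

```python
import math
import random

random.seed(2)
sigma, T, n_steps, n_paths, K = 1.0, 1.0, 16, 50_000, 1.0
qv = sigma ** 2 * T                      # [X,X]_T, deterministic for Brownian motion
dt = T / n_steps

payoff_sum = 0.0
for _ in range(n_paths):
    rv = sum(random.gauss(0.0, sigma * math.sqrt(dt)) ** 2 for _ in range(n_steps))
    payoff_sum += max(rv - K, 0.0)       # f(RV) with f(x) = (x - K)^+

discrete_call = payoff_sum / n_paths
continuous_call = max(qv - K, 0.0)       # f([X,X]_T), here exactly 0
```

Since $RV$ fluctuates around its mean $\sigma^2 T = K$, the discrete call value is strictly positive while the continuous one vanishes, in line with the theorem.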
In the case that $X$ is a martingale or $h(x) = x$, ‘increasing convex’ can again be replaced by ‘convex’ since ${\mathbb{E}\left[\overline{RV}^h(X,{\mathcal{P}})\right]} = {\mathbb{E}\left[\overline{[X,X]}^h_T\right]}$. Results on predictable quadratic variation ========================================== At this point we want to bring the *predictable quadratic variation* ${\left\langle{X},{X}\right\rangle}$ of $X$ into the picture. As discussed in the introduction we conjecture the convex order relation $[X,X]_T \ge_\text{cx} {\left\langle{X},{X}\right\rangle}_T$ for many processes of interest. Indeed the following holds. \[Thm:predictable\] Let $X$ be a square-integrable semimartingale with ${\mathcal{H}}$-conditionally independent increments. Then the quadratic variation $[X,X]$ dominates the predictable quadratic variation ${\left\langle{X},{X}\right\rangle}$ in convex order, that is $$\label{Eq:convex_order6} {\mathbb{E}\left[f\Big([X,X]_T\Big)\right]} \ge {\mathbb{E}\left[f\Big({\left\langle{X},{X}\right\rangle}_T\Big)\right]}$$ for any convex function $f$. Let $(B,C,\nu)$ be the characteristics of the semimartingale $X$ relative to the truncation function $h(x) = x$. Following @Jacod1987 [Sec. II.2.a] the quadratic variation of $X$ is given by $$[X,X]_t = C_t + \int_0^t \int_{\mathbb{R}}x^2 \,\mu(\omega,ds,dx),$$ where $\mu(\omega,ds,dx)$ is the random measure associated to the jumps of $X$. The predictable quadratic variation in turn is given (cf. @Jacod1987 [Prop. II.2.29]) by $${\left\langle{X},{X}\right\rangle}_t = C_t + \int_0^t \int_{\mathbb{R}}x^2 \,\nu(\omega,ds,dx).$$ Furthermore, as $X$ has ${\mathcal{H}}$-conditionally independent increments, the characteristics $(B,C,\nu)$ of $X$ are ${\mathcal{H}}$-measurable. The process $[X,X]_t - {\left\langle{X},{X}\right\rangle}_t$ is a local ${\mathbb{F}}$-martingale, and due to the square-integrability assumption on $X$ a true ${\mathbb{F}}$-martingale.
Together with the fact that ${\mathcal{H}}\subset {\mathcal{F}}_0$ we conclude that $${\mathbb{E}\left[\left.[X,X]_T - {\left\langle{X},{X}\right\rangle}_T\right|{\mathcal{H}}\right]} = {\mathbb{E}\left[\left.{\mathbb{E}\left[\left.[X,X]_T - {\left\langle{X},{X}\right\rangle}_T\right|{\mathcal{F}}_0\right]}\right|{\mathcal{H}}\right]} = 0.$$ Since ${\left\langle{X},{X}\right\rangle}_T$ is ${\mathcal{H}}$-measurable, this yields ${\mathbb{E}\left[\left.[X,X]_T\right|{\mathcal{H}}\right]} = {\left\langle{X},{X}\right\rangle}_T$. Applying a convex function $f$ to both sides and using Jensen’s inequality we obtain $${\mathbb{E}\left[\left.f\Big([X,X]_T\Big)\right|{\mathcal{H}}\right]} \ge f\Big({\left\langle{X},{X}\right\rangle}_T\Big).$$ The desired result follows by taking expectations of this inequality. Applications to variance options ================================ We now apply the results of Section \[Sec:main\] to options on variance. We distinguish between models with unconditionally independent increments, such as exponential Lévy models and Sato processes, and models with conditionally independent increments, such as stochastic volatility models with jumps and time-changed Lévy models. In the first class of models we obtain stronger results than in the latter. Models with independent increments ---------------------------------- ### Exponential Lévy models with symmetric jumps {#Sub:levy} In an exponential Lévy model the log-price $X$ is given by a Lévy process with Lévy triplet $(b,\sigma^2 ,\nu)$. Clearly $X$ is a semimartingale with independent increments, and without fixed times of discontinuity. If the Lévy measure $\nu$ is symmetric and satisfies $\int x^2 \nu(dx) < \infty$ then Assumption \[myass\] holds with trivial ${\mathcal{H}}$ and the results of Section \[Sec:main\] apply. We list some commonly used Lévy models with symmetric jump measure; for definitions and notation we refer to @Cont2004 [Ch.
4], or in case of the CGMY model to @Carr2003a: - the Black-Scholes model, - the Merton model with $0$ mean jump size, - the Kou model with symmetric tail decay and equal tail weights ($\lambda_+ = \lambda_-$ and $p = 1/2$), - the Variance Gamma process with $\theta = 0$, - the Normal Inverse Gaussian process with $\theta = 0$, - the CGMY model with symmetric tail decay ($G = M$), - the generalized hyperbolic model with symmetric tail decay ($\beta = 0$). ### Symmetric Sato processes {#Sub:sato} A Sato process is a stochastically continuous process that has independent increments and is self-similar in the sense that $X_{\lambda t} \stackrel{d}{=} \lambda^\gamma X_t$ for all $\lambda, t \ge 0$ and for some fixed exponent $\gamma > 0$. Such processes have been studied systematically by @Sato1991, and used for financial modeling for example in @Carr2007. In @Carr2010 the authors consider options on quadratic variation, when the log-price process $X$ is a Sato process. Following [@Sato1991], a Sato process is a semimartingale with the deterministic characteristics $(t^\gamma b, t^{2\gamma} \sigma^2, \nu_t(dx))$ where $\nu_t(dx)$ is of the form $\nu_t(B) = \int{{\mathbf{1}_{\left\{B\right\}}}(t^\gamma x)}\nu(dx)$ for some fixed Lévy measure $\nu$. If $\nu$ is symmetric with $\int x^2 \nu(dx) < \infty$ then $X$ satisfies Assumption \[myass\] with trivial ${\mathcal{H}}$. Referring to the terminology and notation of [@Carr2007], this symmetry condition is satisfied for the VGSSD model with $\theta = 0$, the NIGSSD model with $\theta = 0$ and the MXNRSSD model with $b = 0$. ### Order relations between option prices For the models described in \[Sub:levy\] and \[Sub:sato\], Theorem \[Thm:order1\] implies the following order relations between the prices of options on variance: Fixed Strike Calls : Consider a call on variance with a fixed strike, i.e. with a payoff function $x \mapsto (x - K)^+$ where $K \ge 0$. 
In this case Theorem \[Thm:order1\] applies with $h = 0$ and $f(x) = (x - K)^+$. Hence the price of a fixed-strike call on discretely observed realized variance is larger than the price of its continuous counterpart. Relative Strike Calls : Let $s$ be the swap rate, and consider a call with payoff function $x \mapsto (x - ks)^+$. The difference from the fixed-strike case is that now the swap rate $s$ also depends on whether discrete or continuous observations are used for pricing. If $k \le 1$, that is if the call is in-the-money or at-the-money, then Theorem \[Thm:order1\] applies with $h(x) = kx$ and $f(x) = (x)^+$. Hence, the price of a relative-strike in-the-money call on discrete realized variance is larger than the price of its continuous counterpart. ATM Puts and Straddles : Consider an at-the-money put with payoff $x \mapsto (s - x)^+$ or an at-the-money straddle with payoff $x \mapsto |s - x|$, where $s$ is the swap rate. In these cases Theorem \[Thm:order1\] applies with $h(x) = x$ and $f(x) = (-x)^+$ or $f(x) = |x|$. Hence, the price of an at-the-money put or an at-the-money straddle on discrete realized variance is larger than the price of its continuous counterpart. Models with conditionally independent increments ------------------------------------------------ ### Time-changed Lévy processes with symmetric jumps {#Sub:tc_levy} Consider a Lévy process $L$ with Lévy triplet $(b,\sigma^2,\nu)$ where the jump measure $\nu$ is symmetric with $\int x^2 \nu(dx) < \infty$, and a continuous, integrable and increasing process $\tau_t$, independent of $L$. Letting $\tau_t$ act as a stochastic clock, we may define the time-changed process $X_t = L_{\tau_t}$. The process $X$ is a semimartingale with characteristics $(b\tau_t,\sigma^2 \tau_t, \nu \tau_t)$.
Defining ${\mathcal{H}}= \sigma(\tau_t, t \ge 0)$ and ${\mathcal{F}}_t = {\mathcal{H}}\vee \sigma(X_s, 0 \le s \le t)$ we observe that $X$ is a ${\mathbb{F}}$-adapted square-integrable semimartingale with ${\mathcal{H}}$-conditionally independent increments and symmetric jumps, and hence satisfies Assumption \[myass\]. ### Stochastic volatility models with jumps {#Sub:sv_jump} Suppose that the risk-neutral log-price $X$ has the semimartingale representation $$\label{Eq:generic_SV} dX_t = b_t \,dt + \sigma_t\,dW_t + x \star \left(\mu(dt,dx) - \nu_t(dx) dt\right), \qquad X_0 = 0$$ where $W$ is a ${\mathbb{F}}$-Brownian motion and $\mu(dt,dx)$ a ${\mathbb{F}}$-Poisson random measure with compensator $\nu_t(dx)\,dt$, and $b_t, \sigma_t, \nu_t$ are ${\mathcal{H}}$-measurable with ${\mathcal{H}}\subset {\mathcal{F}}_0$. Moreover, assume that $\nu_t(dx)$ is symmetric and that $$\int_0^T{\left({\mathbb{E}\left[b_t^2\right]} + {\mathbb{E}\left[\sigma_t^2\right]} + {\mathbb{E}\left[\int x^2 \nu_t(dx)\right]}\right)dt} < \infty,$$ such that $X$ is square-integrable (cf. @KM2010 [p. 8f]). Then $X$ satisfies Assumption \[myass\] and the results of Section \[Sec:main\] apply. This setting includes the uncorrelated Heston model and extensions with symmetric jumps. ### Order relations between option prices For the models described in \[Sub:tc\_levy\] and \[Sub:sv\_jump\], Theorem \[Thm:order1\] implies the following order relation between the prices of options on variance: Fixed Strike Calls : Consider a call on variance with a fixed strike, i.e. with a payoff function $x \mapsto (x - K)^+$ where $K \ge 0$. In this case Theorem \[Thm:order1\] applies with $h = 0$ and $f(x) = (x - K)^+$. Hence the price of a fixed-strike call on discretely observed realized variance is larger than the price of its continuous counterpart.
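As a numerical sanity check, the fixed-strike ordering can be illustrated with the simplest instance of the time-changed models above, a Brownian motion run on an independent stochastic clock. All concrete numbers below (the clock distribution, the strike, the observation and path counts) are illustrative choices, not taken from the text; conditionally on the clock, Jensen's inequality already guarantees the inequality, so this is only a sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_obs = 200_000, 12      # hypothetical path and observation counts
K = 0.04                          # hypothetical fixed variance strike

# H-measurable stochastic clock: total business time tau_T over [0, T]
tau_T = rng.gamma(shape=4.0, scale=0.01, size=n_paths)   # E[tau_T] = 0.04

# Conditionally on tau_T, the n_obs log-returns are iid N(0, tau_T / n_obs)
dx = rng.standard_normal((n_paths, n_obs)) * np.sqrt(tau_T[:, None] / n_obs)

rv_discrete = (dx ** 2).sum(axis=1)   # discretely observed realized variance
qv_continuous = tau_T                 # quadratic variation [X, X]_T

price_discrete = np.mean(np.maximum(rv_discrete - K, 0.0))
price_continuous = np.mean(np.maximum(qv_continuous - K, 0.0))
```

For these draws the discrete call price should exceed the continuous one, while both quantities share the same expected realized variance $\mathbb{E}[\tau_T]$.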
Conclusions and counterexamples =============================== In this article we have shown that the increasing convex order conjecture for realized variance holds true in a large class of asset pricing models, namely under the assumption that the log-price $X$ is a square-integrable semimartingale with conditionally independent increments, symmetric jumps and no predictable times of discontinuity. However, the numerical evidence given e.g. in @KM2010 suggests that the conjecture may hold true under even more general conditions. In particular, for Lévy models with asymmetric jumps the conjecture seems to be valid, although this case is not covered by the assumptions of this article. It is not obvious – at least not to the authors – how the strategy of the proof can be extended to these cases, in particular how the symmetry condition on the jumps can be relaxed. On the other hand, removing the assumption of conditionally independent increments easily leads to a violation of the convex order conjecture. A numerical counterexample for a stochastic volatility model with nonzero leverage parameter has recently appeared in @Bernard2012. Another counterexample can be given using a time-changed Brownian motion; it is based on suggestions by Walter Schachermayer and David Hobson. Let $B$ be an ${\mathbb{F}}$-Brownian motion and define the stopping time $\tau = \inf {\left\{t: B_t \not \in [-1,1]\right\}}$. The stopped process $B^\tau$ is a bounded martingale and can be closed at infinity by adding the terminal value $B^\tau_\infty = B_\tau$, which only takes the values $\pm 1$. Let $f$ be an increasing, continuous and bijective function from $[0,1]$ to $[0,\infty]$ (such as $t \mapsto \tan \left(t\tfrac{\pi}{2}\right)$). Define on the interval $[0,1]$ the process $X = \left((B^\tau)_{f(t)}\right)_{t \in [0,1]}$ which is a continuous martingale w.r.t. ${\mathbb{F}}' = \left({\mathcal{F}}_{f(t)}\right)_{t \in [0,1]}$.
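This construction is easy to probe by simulation. The sketch below discretizes $B$ as a symmetric random walk with steps $\pm 1/m$, which exits $[-1,1]$ exactly at $\pm 1$ and whose exit time has mean $m^2$ steps, so that $\mathbb{E}[\tau] = 1$ by the optional stopping theorem; the resolution $m$ and the path count are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch of the stopped-Brownian-motion construction:
# simulate symmetric random walks with steps +-1/m until they exit [-1, 1].
rng = np.random.default_rng(1)
m, n_paths = 20, 1000

taus, endpoints = [], []
for _ in range(n_paths):
    pos, steps = 0, 0
    while abs(pos) < m:
        pos += 1 if rng.random() < 0.5 else -1
        steps += 1
    taus.append(steps / m**2)    # tau in units where Var(B_t) = t
    endpoints.append(pos / m)    # B_tau is exactly +-1 by construction

taus = np.array(taus)
# With the partition P = {0, 1}: RV(X, P) = (X_1 - X_0)^2 = B_tau^2 = 1 on
# every path, while [X, X]_1 = tau is genuinely random with mean one.
```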
Using the continuity of $f$ we obtain $[X,X]_1 = \lim_{t \to 1} [B^\tau,B^\tau]_{f(t)} = \tau$. Choosing the partition ${\mathcal{P}}= {\left\{0,1\right\}}$ we get $RV(X,{\mathcal{P}}) = (\pm 1)^2 = 1$ a.s. Using Jensen’s inequality and the fact that ${\mathbb{E}\left[\tau\right]} = 1$ we conclude that $[X,X]_1 \ge_\text{cx} RV(X,{\mathcal{P}})$. Since $[X,X]_1$ is truly random the opposite relation $RV(X,{\mathcal{P}}) \ge_\text{cx} [X,X]_1$ cannot hold true. This counterexample can easily be extended to other partitions of $[0,1]$ by introducing additional stopping times. The next counterexample shows that also the convex order relation $[X,X]_{T} \geq_{\text{cx}} \langle X,X \rangle _{T}$ between predictable and ordinary quadratic variation shown in Theorem \[Thm:predictable\] collapses if the assumption of conditionally independent increments is violated. Let $\theta$ be a random variable taking the values $\pm 1$ with probability $\frac{1}{2}$ each. In addition, let $\tau$ be a uniformly distributed random time on $[0,1]$. Define the càdlàg process $Z_t := \theta {\mathbf{1}_{\left\{\tau \le t\right\}}}$. This process is zero up to time $\tau$ and then jumps to one of the values $\pm 1$. Denote by ${\mathbb{H}}$ the natural filtration of $Z$; it is obvious that $Z$ is a ${\mathbb{H}}$-martingale. By direct calculation $$[Z,Z]_t = \theta^2 {\mathbf{1}_{\left\{\tau \le t\right\}}} = {\mathbf{1}_{\left\{\tau \le t\right\}}}.$$ In particular $[Z,Z]_1 = 1$ almost surely. By @Bielecki2002 the ‘hazard function’ $H$ corresponding to the random time $\tau$ is given by $H(t) = -\log(1-t)$ and ${\mathbf{1}_{\left\{\tau \le t\right\}}} - H(t \wedge \tau)$ is a martingale. We conclude that $t \mapsto H(t \wedge \tau)$ is the predictable compensator of $t \mapsto {\mathbf{1}_{\left\{\tau \le t\right\}}}$ and hence that $${\left\langle{Z},{Z}\right\rangle}_t = -\log(1 - (t \wedge \tau)).$$ In particular ${\left\langle{Z},{Z}\right\rangle}_1 = -\log(1 - \tau)$, i.e.
it is exponentially distributed with rate $1$, and hence ${\left\langle{Z},{Z}\right\rangle}_1 \ge_\text{cx} [Z,Z]_1$, while the reverse relation does not hold true. [^1]: The authors would like to thank Johannes Muhle-Karbe, Mark Podolskij, David Hobson and Walter Schachermayer for discussions and comments. [^2]: Other examples are so-called weighted variance swaps, which are volatility derivatives, but whose payoff cannot be written as $f(\tfrac{1}{T}RV(X,{\mathcal{P}}))$; see @Lee2010a. [^3]: Note that the factor $\frac{1}{T}$ will be absorbed into the function $f$ from now on. [^4]: This conjecture has been initially conceived in discussions with Johannes Muhle-Karbe at the Bachelier World Congress 2010 in Toronto. [^5]: See @Lindvall1992 for the many uses of couplings in probability theory. [^6]: Here and in the following ‘conditionally symmetric and independent’ is a shorthand for ‘conditionally symmetric and conditionally independent’ [^7]: These assumptions are very natural in the context of variance options, where $X$ is the log-price of a security $S = S_0 e^X$. Under the pricing measure $e^X$ is a local martingale and the semimartingale property of $X$ follows automatically by [@Jacod1987 I.4.57]. The square-integrability of $X$ then merely ensures that the prices of variance options (in particular variance swaps) are finite. [^8]: Such a copy can be constructed using disintegration with respect to ${\mathcal{H}}$.
--- author: - 'K. Kornet' - 'S. Wolf' bibliography: - 'metallicity.bib' subtitle: ' Predictions based on the core-accretion gas-capture planet-formation model' title: Radial distribution of planets --- Introduction ============ Current radial velocity surveys have led to the discovery of over 150 extrasolar planets around main-sequence stars [see @marcy05; @mayor04]. The large majority of these planets are probably gas giant planets, as their masses are above $100 M_\oplus$. Such a collection provides a good data set for comparing predictions of theories of planet formation [e.g. @ida04a; @ida04b; @alibert05; @kac05]. The standard model for the formation of giant planets is the core-accretion, gas-capture model. The numerical calculations of @pollack96 show that the formation of a giant planet in this model can be divided into three phases. During the first one, the solid core is formed by the collisional accumulation of planetesimals. The second phase begins when the core reaches a mass of a few Earth masses and starts to accrete a significant amount of gas. During this phase the envelope stays in quasi-static and thermal equilibrium, as the energy radiated by the envelope is compensated for by the energy released by accreted planetesimals. As during this phase the protoplanet accretes gas at a higher rate than solids, the mass of the envelope finally becomes equal to the mass of the core. At this moment phase 3 begins, during which the planet rapidly grows in mass by runaway accretion of gas. The final mass and location of a giant planet are determined by its gravitational interaction with its environment. As it grows in mass it induces spiral waves in the gaseous disk. This leads to a transfer of angular momentum, resulting in the inward migration of the planet and possibly in gap opening [@lin86; @lin96; @ward97]. This last phenomenon strongly reduces the further growth of the planet.
The main problem with this scenario is the timescale required to form a giant planet. In general it is of the same order of magnitude as the lifetime of the protoplanetary disk, and it is not certain whether the giant planet is able to form before the dispersion of the disk. Close to the star, the formation time of a giant planet in the gas-capture model is determined by phase 2, while at larger distances ($\gtrsim 10 {\mathrm{AU}}$) the lengths of phases 1 and 2 become comparable. The lengths of these two phases depend on the initial surface density of the planetesimal swarm at the given location. The larger the density, the faster the core grows and the higher the mass it reaches at the end of phase 1. With a higher core mass, the length of phase 2 also diminishes. In general, at every distance from the star there is a critical value of the surface density of planetesimals that enables the formation of a giant planet before the dispersion of the protoplanetary disk; for a more detailed discussion see @kac_mass. However, the density of the planetesimal swarm is not simply related to the density of gas in the disk from which it emerges. While the gaseous component evolves in a viscous way, the evolution of the solid component is governed by the processes of gas drag, coagulation, sedimentation, and evaporation [@weiden93]. As a result, a significant redistribution of solid material takes place, and in the inner parts of the disk its surface density can be significantly enhanced compared to the initial value [@sv97; @weiden03]. Consequently, an analysis of the possible masses and orbits of giant planets resulting from the core-accretion scenario should also include the global evolution of solids in protoplanetary disks. A simple model of this evolution was proposed by @kac1 and further extended by @kac2 and @kac05 to include the subsequent formation of giant planets.
In this paper we extend our models to also include the effects of migration and gap opening by the planet. This enables us, for the first time, to characterize the distribution of planetary masses and final locations resulting from the core-accretion model, including not only the growth of the planet but also the preceding evolution of solids, which determines the surface density of the planetesimal swarm. In Sect. \[s:methods\] we explain our approach to the evolution of the protoplanetary disk and planet formation. The results are presented in Sect. \[s:results\] and discussed in Sect. \[s:concl\]. Methods {#s:methods} ======= In our approach we divide the formation of giant planets into three phases in a natural way. In the first one the planetesimal swarm is formed in the protoplanetary disk by the collisional accumulation of solids. This phase lasts until the gravitational interactions between solid bodies become the dominant factor governing their evolution. In the second epoch the giant planet is built by the accretion of planetesimals and, at later stages, of gas onto the protoplanetary core. Finally, when the mass of the planet reaches a critical value, the planet migrates within the disk and reaches its final position. We now discuss how we model each of these phases. Formation of a planetesimal swarm --------------------------------- We describe the protoplanetary disk as a two-component fluid consisting of gas and solids. We model the gaseous component as a geometrically thin turbulent $\alpha$ disk [@73aa24_337]. Its surface density $\Sigma$ is given as a function of distance $a$ from the star and time $t$ in terms of a self-similar solution of @s98. All other quantities characterizing the gas are obtained by solving the standard set of equations for a thin-disk approximation [e.g. @accr_power]. The initial conditions are parameterized by two quantities: the initial mass of the disk $M_0$ and its initial outer radius $R_0$.
The crucial approximation underlying our approach to the evolution of solids is that the size distribution of particles at any given radial location is narrowly peaked around a mean value particular to this location and time. In practical terms this means that the particle size $s$ is expressed as a single-valued function of time and position, $s=s(t,r)$. Collectively, all solid particles are treated as a turbulent fluid characterized by its mean surface density $\Sigma_{\rm s}(t,r)$, which is governed by the continuity equation: $$\frac{\partial \Sigma_{\rm s}}{\partial t} + \frac{1}{r}\frac{\partial}{\partial r}(r V_{{\rm s}} \Sigma_{\rm s})=0 {\mathrm{.}} \label{dustEvolution}$$ The radial drift velocity of solids $V_{\rm s}$ is the result of the frictional coupling between solids and the gas and depends on the local size of the particles and the properties of the gas. In the regions where the temperature of the disk is higher than the evaporation temperature $T_\mathrm{evap}$, the solids are treated as vapour with a velocity equal to the velocity of the gas. The size of the particles is governed by a second equation: $$\frac{\partial \Sigma_{\rm size}}{\partial t} + \frac{1}{r}\frac{\partial}{\partial r}(r V_{{\rm s}} \Sigma_{\rm size}) = f \Sigma_{\rm s} \label{sizeDustEvolution}$$ where $\Sigma_\mathrm{size}=s \Sigma_{\rm s}$. The source function $f$ describes the growth of particles due to mutual collisions and subsequent coagulation. The main assumptions used in its derivation are that all collisions between particles lead to coagulation and that the relative velocities of colliding particles are given by the turbulent model described by @sv97. In the calculation of the density of solids from their surface density, the effect of their sedimentation toward the midplane of the disk is taken into account by evolving the scale height of the solid disk.
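Equation (\[dustEvolution\]) is an advection equation in conservative form, and a first-order upwind scheme is enough to sketch its behaviour. In the fragment below the drift velocity is a prescribed constant inward value rather than the drag-determined $V_{\rm s}$ of the model, and the grid, speed and initial ring profile are all made-up numbers:

```python
import numpy as np

def advect(sigma, r, v_face, dt):
    """One conservative upwind step for d(Sigma)/dt + (1/r) d(r V Sigma)/dr = 0.

    r is a uniform cell-centered grid; v_face holds len(r)+1 face velocities.
    """
    dr = r[1] - r[0]
    r_face = np.concatenate(([r[0] - dr / 2], r + dr / 2))
    # upwind states at the faces (zero-gradient ghost cells at both ends)
    sig_ext = np.concatenate(([sigma[0]], sigma, [sigma[-1]]))
    upwind = np.where(v_face > 0, sig_ext[:-1], sig_ext[1:])
    flux = r_face * v_face * upwind            # F = r V Sigma at each face
    return sigma - dt / (r * dr) * (flux[1:] - flux[:-1])

r = np.linspace(1.0, 30.0, 300)                # radius grid (illustrative)
sigma = np.exp(-(r - 15.0) ** 2 / 8.0)         # Gaussian ring of solids
v_face = -0.05 * np.ones(r.size + 1)           # uniform inward drift

mass0 = float(np.sum(2 * np.pi * r * sigma) * (r[1] - r[0]))
for _ in range(500):                           # CFL number ~ 0.05, stable
    sigma = advect(sigma, r, v_face, dt=0.1)
mass1 = float(np.sum(2 * np.pi * r * sigma) * (r[1] - r[0]))
```

With essentially zero density at both boundaries the conservative fluxes telescope, so the total solid mass $2\pi\int r\,\Sigma_{\rm s}\,dr$ is preserved while the ring drifts inward.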
Initially the surface density of solids is everywhere a constant fraction of the surface density of gas, and their sizes amount to $10^{-3}\ \mathrm{cm}$. The results discussed in the subsequent sections do not depend on the choice of that particular value, as long as the solids are initially small enough to couple well to the gas. Growth of planets {#s:growth} ----------------- Our procedure for the formation of the giant planet starts when the planetesimals at a given point in the disk reach radii of $2\ {\mathrm{km}}$. That moment determines the initial surface density of the planetesimal swarm. The growth of the protoplanetary core is described by the following formula given by @papaloizou99: $$\dot{M_\mathrm{c}}=C_1 C_\mathrm{cap} R_\mathrm{p} R_\mathrm{H} \Omega_\mathrm{K} \Sigma_\mathrm{s} \label{eq:mc}$$ where $$R_\mathrm{H}=a\left(\frac{M_\mathrm{p}}{3 M_\star}\right)^{1/3}$$ is the radius of the Hill sphere of the planet and $\Omega_{\mathrm{K}}$ the Keplerian angular velocity. The value of $C_1$ given by @papaloizou99 is $81\pi/32$; we use a factor of 5 (the difference comes from the different definition of $R_\mathrm{H}$). The quantity $C_\mathrm{cap}$ describes the increase in the effective capture radius of the planet with respect to its real radius $R_\mathrm{p}$ due to the interaction of planetesimals with the envelope of the planet [@podolak88]. We use for it an approximate fit to the results of @boden00 provided by @hubickyj01. For core masses less than $5 M_{\oplus}$, no increase in the effective capture radius is assumed, i.e. $C_\mathrm{cap}(M_\mathrm{c}<5M_\oplus)=1$. For larger core masses, it increases linearly with the mass of the core, reaching its maximum value of $C_\mathrm{cap}=5$ for $M_\mathrm{c}=15 M_\oplus$. We assume that the surface density of planetesimals $\Sigma_{\mathrm{d}}$ is constant throughout the feeding zone, which extends to 4 Hill radii on both sides of the planetary orbit.
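In concrete numbers, the accretion law (\[eq:mc\]) with $C_1=5$ and the linear capture-radius ramp can be evaluated as follows; the swarm density, core bulk density and orbit chosen here are round illustrative values, not outputs of the models in this paper:

```python
import numpy as np

# cgs constants
G = 6.674e-8
M_sun, M_earth, AU = 1.989e33, 5.972e27, 1.496e13

def c_cap(m_core_earth):
    """Capture-radius factor: 1 below 5 M_E, rising linearly to 5 at 15 M_E."""
    return float(np.clip(1.0 + 0.4 * (m_core_earth - 5.0), 1.0, 5.0))

def core_accretion_rate(m_core_earth, a_au, sigma_s, m_star=M_sun, rho_c=3.3):
    """Mdot_c = C1 * C_cap * R_p * R_H * Omega_K * Sigma_s in g/s (C1 = 5)."""
    m_c = m_core_earth * M_earth
    a = a_au * AU
    r_p = (3.0 * m_c / (4.0 * np.pi * rho_c)) ** (1.0 / 3.0)  # solid-core radius
    r_h = a * (m_c / (3.0 * m_star)) ** (1.0 / 3.0)           # Hill radius
    omega_k = np.sqrt(G * m_star / a**3)                      # Keplerian frequency
    return 5.0 * c_cap(m_core_earth) * r_p * r_h * omega_k * sigma_s

# growth rate of a 10 M_E core at 5 AU in a Sigma_s = 10 g/cm^2 swarm, in M_E/yr
rate = core_accretion_rate(10.0, 5.0, 10.0) * 3.156e7 / M_earth
```

For these inputs the rate comes out at a few times $10^{-5}\ M_\oplus\,\mathrm{yr}^{-1}$, i.e. a core-growth timescale well within a typical disk lifetime.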
Furthermore, $\Sigma_{\mathrm{d}}$ changes only due to accretion of the planetesimals onto the core and the growth of the feeding zone. Significant accretion of gas onto the core begins when it reaches a critical mass of $$M_\mathrm{c,crit}= 10 \left(\frac{\dot{M}_\mathrm{c}}{10^{-6} \ M_\oplus\ yr^{-1}}\right)^{0.25} \left(\frac{\kappa_\mathrm{env}}{1\ \mathrm{cm}^2\ \mathrm{g}^{-1}}\right)^{0.25} \label{e:mcrit}$$ [@ikoma00], where $\kappa_\mathrm{env}$ is the opacity in the envelope of the planet. Its actual magnitude is currently poorly constrained. We assume that $\kappa_\mathrm{env}=1\ \mathrm{cm}^2\ \mathrm{g}^{-1}$. When the mass of the protoplanet exceeds $M_\mathrm{c,crit}$, it contracts on the Kelvin-Helmholtz time scale $\tau_\mathrm{KH}$. By fitting the results of @pollack96, @bryden00 show that $$\tau_\mathrm{KH}=10^{b} \left(\frac{M_\mathrm{p}}{M_\oplus}\right)^{-c} \left(\frac{\kappa}{1 \mathrm{cm^2\ g^{-1}}}\right)\ \mathrm{yr}$$ where the exponents $b=10$ and $c=3$. However, the gas accretion rate connected with this contraction cannot be higher than the rate at which viscous transport of gas replenishes the feeding zone of the planet. Consequently, we adopt the following equation for the gas accretion rate onto the planet $$\frac{\mathrm{d} M_\mathrm{env}}{\mathrm{d}t} = \min \left[\frac{M_\mathrm{p}}{\tau_\mathrm{KH}}, \dot{M}_{\mathrm{disk}} \right] \label{eq:menv}$$ where $\dot{M}_{\mathrm{disk}}$ is the accretion rate in the disk without the planet. We assume that it is equal to $\pi \alpha C_s H \Sigma$, where $C_s$ is the sound speed in the gaseous disk, $H$ its scale height, and $\Sigma$ its gas surface density at the given location. When the Hill radius of the planet becomes larger than 1.5 times the scale height of the disk, the planet induces a strong tidal torque on the disk and opens a gap in it. As a result the accretion of gas is strongly diminished.
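The interplay between the critical-mass condition (\[e:mcrit\]) and the supply-limited accretion of Eq. (\[eq:menv\]) can be sketched in a few lines; the disk supply rate used below is an illustrative number:

```python
def critical_core_mass(mdot_c, kappa_env=1.0):
    """M_c,crit in Earth masses; mdot_c in M_E/yr, kappa_env in cm^2/g."""
    return 10.0 * (mdot_c / 1e-6) ** 0.25 * kappa_env ** 0.25

def envelope_accretion_rate(m_p, mdot_disk, kappa=1.0, b=10.0, c=3.0):
    """Gas accretion rate in M_E/yr for a planet of m_p Earth masses."""
    tau_kh = 10.0 ** b * m_p ** (-c) * kappa     # Kelvin-Helmholtz time in yr
    return min(m_p / tau_kh, mdot_disk)          # capped by the disk supply

m_crit = critical_core_mass(mdot_c=1e-6)         # 10 M_E for these inputs

# Shortly after crossing m_crit the KH rate is tiny; for a massive planet
# the viscous disk supply (an illustrative value here) becomes the limit.
rate_small = envelope_accretion_rate(m_p=12.0, mdot_disk=3.0e-3)
rate_large = envelope_accretion_rate(m_p=100.0, mdot_disk=3.0e-3)
```

Because $\tau_\mathrm{KH}$ falls off as $M_\mathrm{p}^{-3}$, the rate $M_\mathrm{p}/\tau_\mathrm{KH}$ grows as $M_\mathrm{p}^{4}$, which is the runaway character described above; the $\min$ then hands control over to the disk.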
Its maximum rate is then equal to: $$\dot{M}_{\mathrm{env}}= \dot{M}_{\mathrm{disk}} \left\{1.668 \left(\frac{M_{\mathrm{p}}}{M_{\mathrm{J}}}\right)^{1/3} \exp \left[-\frac{M_{\mathrm{p}}}{1.5 M_{\mathrm{J}}}\right]+0.04\right\}$$ where $M_{\mathrm{J}}$ is the Jupiter mass [@veras04]. Migration {#s:migr} --------- The gravitational interaction of a planet with the disk leads to its migration and to the formation of a gap in the disk [@lin86; @ward97]. For a low-mass planet the interaction is linear and results in so-called type I migration. In contrast to this scenario, high-mass planets open a gap in the disk, which slows down the migration (referred to as type II migration). Analytical calculations show that the timescale of type I migration in a laminar disk is much shorter than both the disk lifetime and the timescale of planet formation. Consequently, if type I migration took place at this rate, nearly all planets would be accreted onto the star and the probability of finding any planetary system would be very low. Moreover, recent simulations by @nelson04 suggest that the torques exerted onto a low-mass planet in a turbulent MHD disk fluctuate strongly. As a result the planet undergoes a random walk and the mean migration velocity is strongly reduced. Due to these uncertainties we neglect type I migration in our models. Type II migration begins when the planet is massive enough to open a gap in the gas. This happens when the Hill radius of the planet is larger than 1.5 times the disk scale height. If the mass of the planet is negligible in comparison to the mass of the disk, the inward velocity of the planet is regulated by the viscosity of the disk [@ward97]: $$\frac{\mathrm{d} a}{\mathrm{d} t} = - \alpha \frac{C_\mathrm{s}^2}{a \Omega_\mathrm{k}} \label{e:tIIs}$$ where $C_{\mathrm{s}}$ is the sound velocity in the gas. In the case of a planet with a larger mass than the mass of the disk, the migration is slowed down.
The variation in the planetary angular momentum is then equal to the angular momentum transport rate in the disk [@lin96]: $$\frac{\mathrm{d}}{\mathrm{d} t} [M_\mathrm{p} a^2 \Omega_\mathrm{k}] = - \frac{3}{2} \nu \Sigma \Omega_{\rm k} a^2 \ . \label{e:tIIl}$$ In our calculations we used the smaller of the values of $\mathrm{d}a/\mathrm{d}t$ given by Eqs. (\[e:tIIs\]) and (\[e:tIIl\]). The migration of the planet was calculated using the fourth-order Runge–Kutta method. It stops when either the time from the beginning of the calculation (including the time needed for the formation of the planetesimal swarm) is longer than the lifetime of the protoplanetary disk $\tau_\mathrm{f}$, or the orbit of the planet becomes smaller than $0.01~{\mathrm{AU}}$. In the latter case we assumed that the planet is accreted onto the star. In our calculations $\tau_\mathrm{f}=3\times 10^6\ {\mathrm{yrs}}$, which agrees with observations [@haisch]. Results {#s:results} ======= Using the methods described in the previous section we were able to investigate the range of masses and orbits of giant planets that can form in protoplanetary disks characterized by different initial conditions. First, we calculated a grid of models of protoplanetary disks. Every model is characterized by two parameters: the initial mass ($M_0$) and outer radius ($R_0$) of the gaseous disk. Their initial masses are uniformly distributed between 0.02 and 0.2 $\mathrm{M_\odot}$. The range of outer radii was chosen with the goal of including all models in which the formation of giant planets is possible. The number of radii is constant per interval of $\lg R_0$. The viscosity parameter $\alpha$ is equal to 0.001 for all models. The solids consist of water ice with a bulk density of $1\, \mathrm{g\,cm^{-3}}$ and have an evaporation temperature of $150\,{\mathrm{K}}$. For every value of $[M_0, R_0]$ we also checked the gravitational stability of the corresponding gaseous disk.
In some cases the value of the Toomre parameter $$Q=\frac{C_\mathrm{S} \Omega_\mathrm{K}}{\pi G \Sigma}$$ drops below 1 in the outer regions of the disk. This means that these regions are gravitationally unstable with respect to axisymmetric modes. We assumed that these parts of the disk fragment and some giant planets are formed there on a very short time scale [see e.g. @boss02]. Consequently we used modified initial values of the mass and outer radius of the disk, which correspond to the mass and size of the stable part of the original disk. Our distribution of the initial parameters of protoplanetary disks is an *artificial* one. In general one should use the distribution that occurs in nature. Unfortunately current observations do not provide such information. However, our results remain valid, as we are interested only in characterizing the general set of possible orbital sizes and masses of planets. Using that grid of evolved models we performed Monte-Carlo calculations to produce the $M_p - a$ distribution of planets resulting from our models. With uniform probability we randomly chose one of the models from the calculated grid. Next, we chose the initial distance of the planet from the star $a$ with constant probability per interval of $\lg a$. We evolved the mass and semimajor axis of the planet according to the equations given in Sects. \[s:growth\] and \[s:migr\], starting from the time at which the planetesimals at radial distance $a$ reach a radius of $2 \mathrm{km}$ in the given model of the protoplanetary disk. To better illustrate how type II migration and the accretion of gas after gap opening change the parameters of the giant planets, we also performed calculations in which those two phenomena were not included. The results of calculations in which the planets were permitted neither to increase their masses after gap opening in the disk nor to change their semimajor axes are presented in the upper panel of Fig. \[f:ice\].
It shows that the planets can be divided into two main groups (see also Fig. \[f:schem\]). In the first one there are bodies with masses smaller than $\sim20 M_\oplus$. These bodies were not able to accrete a significant amount of gas, so they consist almost entirely of their solid cores. In the second group, there are planets with masses larger than $\sim 100 M_\oplus$. In these cases the runaway gas accretion has already taken place and the majority of their mass is in the form of gaseous envelopes. They are massive enough to open a gap in the disk. The small number of planets with intermediate masses results from the fact that, as the mass of the planet grows above $\sim 20 M_\oplus$, the timescale of the accretion of gas quickly becomes much shorter than the lifetime of the protoplanetary disk. Consequently, the probability that the disk disperses before the planet grows sufficiently to open a gap is very low [see @ida04a]. The giant planets in this set of models form in a region between 1 and 20 AU from the central star. They can be divided into two subgroups, each of them forming one of the wings of the V-like shape of the distribution of giant planets on the $a - M_p$ diagram. As the final mass of the giant planets in this set of models is determined by the equation $$M_\mathrm{p}=3 \left(\frac{1.5 H}{a}\right)^{3} M_\star \ ,$$ those two subgroups of planets reflect different ways in which the $H/a$ ratio depends on distance from the star $a$ at the moment of gap opening. Giant planets at distances larger than $\sim 10 {\mathrm{AU}}$ are at that time in the optically thin parts of the protoplanetary disk in which $(H/a) \sim a^{7/20}$ [see @s98]. Consequently, the masses of these planets are correlated with their distances from the star. The correlation is extremely narrow because, in this regime of the adopted model of the gaseous disk, the scale height does not depend on the initial mass and outer radius of the disk.
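The mass at which growth stops follows from the gap-opening condition quoted in Sect. \[s:growth\], $R_\mathrm{H} = 1.5\,H$: solving $a\,(M_\mathrm{p}/3M_\star)^{1/3} = 1.5\,H$ for $M_\mathrm{p}$ gives $M_\mathrm{p} = 3\,(1.5\,H/a)^3\,M_\star$. A small sketch (the aspect ratio is an illustrative value, not taken from the disk models used here):

```python
M_SUN_IN_EARTH = 332_946.0   # solar mass expressed in Earth masses

def gap_opening_mass(h_over_a, m_star_earth=M_SUN_IN_EARTH):
    """Planet mass (in Earth masses) at which R_H = 1.5 H."""
    return 3.0 * (1.5 * h_over_a) ** 3 * m_star_earth

# An aspect ratio h/a of about 0.05 gives masses of the order of the
# ~450 M_earth planets quoted in the text.
m_gap = gap_opening_mass(0.05)
```

The cubic dependence on $H/a$ is what translates the two scaling regimes of the aspect ratio directly into the two wings of the $a - M_p$ distribution.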
Giant planets at orbits smaller than $\sim 10$ AU at the moment of gap opening are in the optically thick part of the disk in which $(H/a) \sim a^{-1/4}$, so their masses are anticorrelated with the distance from the star. The spread of this correlation is larger, because in that case the exact form of the $H(a)$ relation depends on the initial mass and radius of the underlying disk. This subgroup contains the most massive, closest planets formed in this set of simulations. They have masses around $450 \mathrm{M_\oplus}$ and orbits around 4 AU. In the next set of models we include the effects of type II migration, while planets are still not permitted to accrete after gap opening. The dependence of the masses of the planets resulting from these simulations on the sizes of their orbits is presented in the lower panel of Fig. \[f:ice\]. As expected, the only differences in comparison with the previous case are in the region of the giant planets. Moreover, the range of the masses of these planets in both figures is the same. Giant planets at larger orbits open the gap later because their formation takes longer due to the lower densities of the planetesimal swarm and smaller Keplerian frequencies (see Eq. \[eq:mc\]). Consequently, the changes in their orbits are smaller, as they have less time to migrate. Also, the accretion rate in the disk, which determines the speed of migration, is lower at later times. In our calculations, only planets at orbits smaller than $\sim 9$ AU are able to shrink their orbits by more than 10%. As a result, the distribution of planets on the $a - M_p$ diagram for $a> 9$ AU is nearly the same as in the case without migration. In the case of planets formed at smaller distances from the star, some of them are able to migrate to orbits as small as $\sim 2$ AU. Additionally, more massive planets tend to shrink their orbits by a larger factor. This can be explained by the higher rate of migration in hotter disks.
At the same time such disks have larger scale heights, so giant planets can reach larger masses before gap opening stops their growth. All these factors lead to a small spread of giant planets around a curve on the $a-M_p$ diagram. In the third set of calculations, the planets both migrate and grow in mass after gap opening. The distribution of the resulting planets is shown in Fig. \[f:ice2\]. The range of their final semimajor axes is similar to the one in the previous set of models. The maximum mass of planets at a given distance is $\sim 2000 M_\oplus$ at 2 AU and decreases to $\sim 400 M_\oplus$ at 17 AU. The increase in the masses of the planets after gap opening tends to be anticorrelated with the semimajor axes of their orbits. We explain this fact as follows. The rate at which a giant planet accretes gas at this stage of its evolution is determined by the accretion rate in the disk, and as such does not strongly depend on the distance from the star. At the same time the protoplanetary cores at smaller distances grow on shorter timescales. There are two reasons for this. First, the accretion rate of planetesimals onto the core is proportional to the Keplerian frequency; second, the surface density of the nascent planetesimal swarm tends to be larger closer to the star. Consequently, protoplanets at smaller orbits are able to start accreting significant amounts of gas earlier, and can thus do so for a longer time before the dispersion of the disk. Additionally, the accretion rate in the disk, which determines the accretion rate of gas onto the planet, is larger in this earlier epoch. As a result, the maximum mass of the planet is everywhere a decreasing function of the distance from the star, even in those regions where the relation was the opposite at the moment of gap opening. Next, we investigate how the relation between the mass of the planet and the semimajor axis of its orbit depends on the material the solids consist of.
For this purpose we performed simulations similar to those described above, but with solids made of high-temperature silicates instead of water ice. For the evaporation temperature of silicates we adopted a value of 1350 K, and for their bulk density $3.3\,\mathrm{g\,cm^{-3}}$. The upper panel of Fig. \[f:ht\] presents the distribution of planets resulting from these calculations. Because the solids can survive at higher temperatures than ice, the giant planets in that case can form closer to the star. Because the rate of migration does not depend on the composition of solids, their final orbits can also be smaller. At the same time, the distribution of the final surface densities of the planetesimal swarms, as a function of the distance from the star, seems to be a natural extension of this distribution in the case of water ice solids. As a result, the $a - M_p$ relation for planets with silicate cores at orbits smaller than 6 AU is just an extension to smaller values of $a$ of the same relation for planets with water ice cores. Consequently, in our models the most massive planets (with $M_p\sim 5000 M_\oplus$) tend to be located closest to the star, at orbits with semimajor axes $\sim 0.5\,{\mathrm{AU}}$. On the other hand, planets with silicate cores at orbits larger than $6\,{\mathrm{AU}}$ tend to have smaller masses than their ice-core counterparts. The reason is that at these distances the time during which the core of the protoplanet only accretes solid material becomes a significant fraction of the whole time of its formation. The accretion rate of planetesimals onto a core of the same mass but composed of denser material is lower because of its smaller physical dimensions. As a result, planets with silicate cores start their gas accretion later and grow to smaller masses before the dispersion of the disk. At distances larger than $\sim 8$ AU, we do not obtain any giant planets with silicate cores.
Finally, we investigated the influence of the mass of the central star on the relation between the masses and orbit sizes of giant planets. In Fig. \[f:0.5\] we present the results for two values of the stellar mass: 0.5 and 1 $M_\odot$. The planets around less massive stars also tend to be less massive and to have slightly smaller orbits. The first fact is explained by the lower accretion rates in these disks, as these rates also determine the maximum rate at which the planet can accumulate gas. The tendency of giant planets to form at smaller radii around less massive stars is the result of (a) a more effective radial redistribution of solids in the disk prior to the formation of planetesimals and, as a result, a higher surface density of the planetesimal swarm, and (b) a lower value of the minimal surface density required for the formation of a giant planet around less massive stars on orbits smaller than $\sim 10$ AU [see @kac_mass].

Conclusions {#s:concl}
===========

We have presented the results of simulations describing the formation of giant planets. Our models include all important phases of the core-accretion scenario, from the formation of the planetesimal swarm out of smaller solids to the opening of a gap in the disk and the subsequent migration of the planet. We presented the $a-M_{\mathrm{p}}$ distribution of planets resulting from our models for a set of different initial masses and sizes of the protoplanetary disk and different places of planet formation. However, our results cannot be compared in a simple way to the data for extrasolar planets, because the distribution of the initial parameters of protoplanetary disks that occur in nature is unknown. 
Instead, our aim was to characterize the relationship between the final orbital radii and masses of planets that it is in general possible to obtain, and to check how the resulting $a-M_{\mathrm{p}}$ distribution changes when different physical processes, such as migration and gas accretion by the planet after gap opening, are included. We also investigated whether the distribution of giant planets is mainly determined by the parameters of the gaseous disk or by the distribution of the planetesimal swarm. We have shown that, if we include neither the effects of migration nor the accretion of gas by the planet after gap opening, the $a-M_{\mathrm{p}}$ distribution of giant planets is mainly determined by the parameters of the gaseous disk. In that case one is able to read off the dependence of the scale height of the gaseous disk on the distance from the star from the distribution of giant planets. The minimal and maximal distances from the star give information about the surface density of the planetesimal swarm at those places. When migration is included in our models, reconstructing the properties of the gaseous disk from the distribution of giant planets becomes more difficult. The distance that the planet can move inward depends not only on the gaseous environment but also on the time between gap opening and the dispersal of the disk. The first of these two moments is determined by the surface density of the planetesimal swarm, which, in turn, is not related in a simple way to the local parameters of the gaseous disk. Our simulations show that a simple reconstruction of the scale height of the gaseous disk is possible only from the distribution of giant planets at distances greater than $\sim 8\,{\mathrm{AU}}$, as planets on such large orbits have not moved significantly away from their original places. 
If we allow in our models for the planet to keep growing in mass after gap opening, it is even more difficult, if at all possible, to derive conclusions about the gaseous disk (its temperature, scale height, etc.) from the distribution of giant planets. This is because the rate of this additional accretion onto the planet depends mainly on the accretion rate in the disk, while the time frame in which this accretion can occur is determined by the surface density of the planetesimal swarm. The additional growth of the planet can even reverse the correlation between the masses of the planets and the semimajor axes of their orbits. We also checked how the mass of the central star influences the distribution of planets. The main result is that in disks around less massive stars, giant planets at a given location tend to be less massive. At the same time, giant planets of a given mass tend to form closer to less massive stars. Our models can be seen as an extension of previous calculations performed by @ida04a and @alibert05. In comparison, the surface densities of the planetesimal swarms in our models are computed self-consistently. This additional factor makes it much more difficult to predict the structure of the gaseous disk from the distribution of giant planets. Nevertheless, it does seem necessary: when we performed calculations similar to those described above, but with the assumption that the surface density of the planetesimal swarm is always a constant fraction of the surface density of the gas, we did not obtain any giant planets. This seems to contradict the results of those two papers. A possible reason for this different result may be the difference in the underlying models of the gaseous disk. However, we think that the main factors are the different times for the dispersal of the gaseous disk (3 Myr vs. 10 Myr) and the different sizes of solids at time 0 (small grains vs. 
planetesimal sizes). Our models show that the time needed to reach planetesimal sizes can be as long as 1 Myr and is not negligible. Our results also seem to contradict observations showing that more massive planets tend to be located farther away from the host star. This correlation is reproduced better by the calculations of @ida04a, who conclude that the maximal mass of the planets after including type II migration does not strongly depend on the size of the orbit. On the other hand, in the results of @alibert05 the correlation is identical to ours, namely that the more massive planets tend to be closer to the star. This difference can have two sources. First, @ida04a neglect the accretion of gas onto the planet after it opens a gap in the disk. The second reason, which we think is more important, can again be the difference in the structure of the gaseous disk. In our calculations the ratio $H/r$ is mainly a decreasing function of the radius in the inner parts of the disk, while it is an increasing function in the disk model of @ida04a. Because this ratio determines the mass of the planet that is able to open a gap in the disk, it can play a big role in determining the properties of the set of giant planets.

This project was supported by the German Research Foundation (DFG) through the Emmy Noether grant WO 857/2-1 and the European Community's Human Potential Programme through the contract HPRN-CT-2002-00308, PLANETS. KK acknowledges the support from grant No. 1 P03D 026 26 from the Polish Ministry of Science.
Introduction and motivation
===========================

The three-body problem has been studied extensively by numerical simulations and by analytical methods. However, in full generality this problem is of too high dimensionality and too complicated for a systematic analysis. This has so far led to the study of various simplified and restricted versions of the problem. The free-fall problem ([@Agekyan], [@Tanikawa1995]; referred to as the FFP, with the definition given later) is one of these. The isosceles problem ([@V.M.Alekseev], [@Brou], [@ZaCh]) and the rectilinear problem ([@HM1993], [@TM2000a; @TM2000b]) are other examples. A study that covers the whole phase space of the planar or three-dimensional three-body problem is desirable. In the present work, we propose a new setting of the problem which is suitable for large-scale numerical studies, and which is hopefully also suitable for theoretical consideration. Our setting is summarized in the term 'shape space'. The shape space is a direct product of the shape sphere in the configuration space, the shape sphere in the momentum space, and their relative size and orientation. This is not the phase space, but it can be expanded to recover the phase space. In constructing the shape space, we follow two guiding principles: equivalence relations and boundedness. We do not want to integrate orbits that can be transformed into one another by a suitable change of variables. Such orbits are considered equivalent, and we consider only orbits from different equivalence classes. Orbits belonging to different equivalence classes will be called independent. Boundedness is related to the size of the initial-condition space. In order for a numerical study of the totality of the solutions of the three-body problem to be feasible, the initial-condition space must be either bounded or finite. Otherwise, it is impossible to exhaust the initial conditions even with the fastest computer. 
So we would like to impose boundedness on the ranges of the variables. We consider the planar three-body problem, since the extension to three dimensions is not so difficult. The phase space is $R^{12}$. As is well known ([@Whittaker1952], p. 351), the minimum possible order of the differential equations is four. To attain this order, several reductions are performed consecutively. The equations of motion are reduced from 12th to 8th order by using the four integrals of motion of the center of gravity. The use of the angular momentum integral reduces the order to 7, and the elimination of the nodes reduces the order to 6. Lastly, it is possible to reduce the order of the equations to four by using the integral of energy and eliminating the time. This conventional reduction, however, does not lead to the best choice of variables for the initial value problem. The space of the states of motion described by these 4th-order differential equations may have an extremely complicated structure. Even if we stop the reduction at 6th order, the variables have infinite ranges. The concept of the shape sphere in the configuration space seems to be due to McGehee, who used it in the rectilinear three-body problem when he devised the variables now named after him [@McGehee74]. Later, Moeckel [@Moeckel88] explicitly introduced the shape sphere in the planar three-body problem. This notion was then used in an essential way in [@eight2000] to obtain the figure-eight periodic solution with variational techniques. Our shape space extends the notion to the whole phase space. We start in §2 by introducing the FFP and the studies related to it. In §3 we extend the FFP to an arbitrary mass ratio and to the whole phase space in the planar case. In §4 we suggest the existence of a 'semi'-global surface of section and generalize these results to three dimensions. In the final section, we summarize and discuss our results. 
The Free-Fall Problem
=====================

Definition
----------

The free-fall problem (FFP) is characterized by zero initial velocities, and has been extensively studied by the Russian and Japanese schools ([@Agekyan], [@Anosova1986], [@Tanikawa1995], [@Umehara00]). In the FFP, the total energy of the three bodies $m_i, i=1,2,3$ is negative and their angular momentum is zero. We here consider the equal-mass case: $m_1=m_2=m_3$. In this problem, motions starting from similar triangles transform into one another under appropriate changes of coordinates and time, so we identify these motions. Dissimilar triangles correspond to independent motions. Let mass points $m_2$ and $m_3$ stand still at $A(-0.5,0.0)$ and $B(+0.5,0.0)$, respectively, in the $(x,y)$ plane, and let $m_1$ stand still at a point $P(x,y)$ where $$(x,y) \in {\cal D} = \{(x,y): x \ge 0,\ y \ge 0,\ (x+0.5)^2+y^2 \le 1 \}.$$ As $m_1$ ranges over ${\cal D}$, the triangles satisfying the condition $\overline{m_2m_3} \ge \overline{m_1m_2} \ge \overline{m_1m_3}$ are exhausted. Conversely, any triangle is similar to one of the triangles formed by the three mass points $m_1$, $m_2$, and $m_3$ as above. Thus the positions $P \in {\cal D}$ specify all possible initial conditions.

An attempt to include velocities
--------------------------------

Anosova et al. ([@Anosova1981]) considered the region ${\cal D}$ of the free-fall problem. They then supposed that the system rotates counterclockwise in the plane of the initial triangle (2D problem); that the velocity vectors of components A (the distant component) and C (the component inside ${\cal D}$) are orthogonal to their radius vectors in the center-of-gravity coordinate system; that the angular momenta of these two bodies are the same; that the velocity of component B is chosen so that the center of gravity of the triple system is motionless; and that the speed of rotation is parameterized by the initial virial ratio $k$. 
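As a quick numerical sanity check (a sketch with helper names of our own, not part of the original formulation), one can verify that every point of the region ${\cal D}$ defined above indeed yields the edge ordering $\overline{m_2m_3} \ge \overline{m_1m_2} \ge \overline{m_1m_3}$:

```python
import math

def in_domain(x, y):
    """Membership test for the FFP initial-condition region D."""
    return x >= 0 and y >= 0 and (x + 0.5) ** 2 + y ** 2 <= 1

def edge_lengths(x, y):
    """Edge lengths of the triangle with m2 at (-0.5, 0), m3 at (0.5, 0), m1 at (x, y)."""
    m2m3 = 1.0
    m1m2 = math.hypot(x + 0.5, y)  # distance m1-m2 (<= 1 inside D)
    m1m3 = math.hypot(x - 0.5, y)  # distance m1-m3 (<= m1m2 since x >= 0)
    return m2m3, m1m2, m1m3

# Scan a grid over D and check the ordering m2m3 >= m1m2 >= m1m3.
ok = True
for i in range(101):
    for j in range(101):
        x, y = i / 100, j / 100
        if in_domain(x, y):
            a, b, c = edge_lengths(x, y)
            ok = ok and (a >= b >= c)
print(ok)  # True
```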
Thus the initial conditions are defined by three parameters: the coordinates $(x,y)$ of component C in the region ${\cal D}$ and the virial ratio $k$. However, their formulation lost the boundedness of the initial configuration space. This boundedness is one of the most important properties of the FFP, and what we should do is to recover it.

The definition of our variables for the planar case
===================================================

Equations of motion for the planar three-body problem
-----------------------------------------------------

Let $m_k >0$ be the masses of point particles with positions ${\bf q}_k \in {\bf R}^2$ and momenta ${\bf p}_k \in {\bf R}^2$, $k=1,2,3$. Let ${\bf q,p} \in {\bf R}^6$ denote the vectors $({\bf q}_1, {\bf q}_2, {\bf q}_3)$, $({\bf p}_1, {\bf p}_2, {\bf p}_3)$. The three-body problem is governed by the Hamiltonian function $$\begin{aligned} H({\bf p},{\bf q}) =& \frac{1}{2}{\bf p} \cdot A^{-1} {\bf p} - U({\bf q}) = T ({\bf p}) - U({\bf q}) \label{eq:energy} \\ \nonumber \end{aligned}$$ where $A$ is the $6 \times 6$ mass matrix ${\rm diag}(m_1, m_1, m_2, m_2, m_3, m_3)$, a dot denotes the scalar product in ${\bf R}^6$, and $$\begin{aligned} U({\bf q})=\frac{m_1m_2}{|{\bf q}_1-{\bf q}_2|}+ \frac{m_2m_3}{|{\bf q}_2-{\bf q}_3|}+ \frac{m_3m_1}{|{\bf q}_3-{\bf q}_1|}. \label{eq:potential}\end{aligned}$$ Hamilton's equations are $$\begin{aligned} \dot{ {\bf q} }= A^{-1} {\bf p}, \hspace{0.4cm} \dot{ {\bf p} }= \nabla U( {\bf q} ), \label{eq:hamilton}\end{aligned}$$ where a dot above a letter denotes the time derivative. Without loss of generality, we can assume that the center of gravity remains at the origin: $$\begin{aligned} \sum_{i=1}^3 m_i {\bf q}_i ={\bf 0}, \hspace{0.4cm} \sum_{i=1}^3 {\bf p}_i = {\bf 0}. \label{eq:CG}\end{aligned}$$ These equations together characterize the planar three-body problem.

Reduction to our variables
--------------------------

The dimension of the original phase space is twelve. 
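As an aside, the Hamiltonian system (\[eq:energy\])–(\[eq:hamilton\]) is straightforward to evaluate numerically. The following sketch (helper names are ours; units with $G=1$) computes $T$, $U$, and the right-hand side $\nabla U({\bf q})$ of Hamilton's equations, checked here on an equilateral configuration at rest:

```python
import math

def potential(m, q):
    """U(q) = sum over pairs of m_i m_j / |q_i - q_j|  (G = 1)."""
    U = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            dx = q[i][0] - q[j][0]
            dy = q[i][1] - q[j][1]
            U += m[i] * m[j] / math.hypot(dx, dy)
    return U

def kinetic(m, p):
    """T(p) = (1/2) p . A^{-1} p with A = diag(m1, m1, m2, m2, m3, m3)."""
    return sum((px * px + py * py) / (2 * mi) for (px, py), mi in zip(p, m))

def grad_U(m, q):
    """Right-hand side of dp_i/dt = (grad U)_i: attraction toward the other bodies."""
    F = [[0.0, 0.0] for _ in range(3)]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = q[j][0] - q[i][0]
            dy = q[j][1] - q[i][1]
            r3 = math.hypot(dx, dy) ** 3
            F[i][0] += m[i] * m[j] * dx / r3
            F[i][1] += m[i] * m[j] * dy / r3
    return F

# Equal masses at rest in an equilateral triangle of unit side length:
m = [1.0, 1.0, 1.0]
q = [[0.0, math.sqrt(3) / 3], [-0.5, -math.sqrt(3) / 6], [0.5, -math.sqrt(3) / 6]]
p = [[0.0, 0.0]] * 3
H = kinetic(m, p) - potential(m, q)
print(round(H, 6))  # -3.0  (T = 0 and U = 3 for unit side length)
```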
The restriction (\[eq:CG\]) reduces it to eight. Four of these eight variables are for the configuration space, and the remaining four are for the momentum space. The restriction (\[eq:CG\]) is equivalent to the fact that the two sets of three vectors form two triangles. In the configuration space, beyond the shape, there remain two variables: obviously, the size and orientation of the triangle. Here, the orientation means the direction angle of a selected edge with respect to the coordinate axis. In the momentum space, there likewise remain two variables: the size and orientation of a triangle. Let us now look for the dimension of the space in which all the independent orbits are contained. Here, we say that two orbits are dependent (resp. independent) if they can (resp. cannot) be transformed into each other under coordinate and time transformations. Let us express the four variables in the configuration space by $({\bf f}, r, \omega)$, where ${\bf f}$ is a two-dimensional vector representing the form of the triangle, $r$ is the size, and $\omega$ is the orientation. In a similar manner, we express the four variables in the momentum space by $({\bf F}, R, \Omega)$. Thus we have eight variables $({\bf f}, {\bf F}, r, R, \omega, \Omega)$. Now, consider two sets of variables $({\bf f}, {\bf F}, r, R, \omega, \Omega)$ and $({\bf f}, {\bf F}, r', R', \omega, \Omega)$. If $r' = \alpha r$ and $R' = \beta R$ for appropriate constants $\alpha$ and $\beta$ (this represents a scale transformation), motions starting at the two initial conditions are not independent. If we choose a particular transformation $r, R \rightarrow 1, R^*$, then the variables reduce to $({\bf f}, {\bf F}, 1, R^*, \omega, \Omega)$, that is, $({\bf f}, {\bf F}, R^*, \omega, \Omega)$. The two orientations $\omega$ and $\Omega$ are not independent. 
In fact, $\omega$ is measured with respect to a fixed direction in the configuration space, and $\Omega$ is measured with respect to a fixed direction in the momentum space. However, the axes in the momentum space can be adjusted to those in the configuration space. Let us consider two sets of variables $({\bf f}, {\bf F}, R^*, \omega, \Omega)$ and $({\bf f}, {\bf F}, R^*, \omega', \Omega')$. Then, as is easily understood, if $\Omega - \omega = \Omega' - \omega'$ (a rotation), motions starting at the two initial conditions are not independent. If we choose a particular rotation $\omega, \Omega \rightarrow 0, \Omega^*$, the variables reduce to $({\bf f}, {\bf F}, R^*, 0, \Omega^*)$, that is, $({\bf f}, {\bf F}, R^*, \Omega^*)$. These are our variables. In the following two subsections, we will carry out the above program to arrive at the final set of variables.

From Shape Plane to Shape Sphere
--------------------------------

The variables in the position space in this subsection are equivalent to those for the shape sphere [@Moeckel88] [@eight2000] and for the FFP [@Agekyan]. In [@Moeckel88], the shape sphere was defined as the sphere of constant moment of inertia originally introduced by McGehee [@McGehee74]. Our definition is slightly different: our construction of the shape sphere is naive and intuitive. We here connect the representations of triangles on the plane and on the sphere. To this end, let us introduce the [*shape plane*]{} as follows. Let mass points $m_2$ and $m_3$ be at $B(-0.5, 0.0)$ and $C(0.5, 0.0)$, respectively, in the $(x,y)$ plane, and let the position of $m_1$ be $P(x,y)$. As $m_1$ ranges over the plane, the resulting triangles exhaust all possible shapes. We call this plane the shape plane. Obviously the points $A(0,\frac{\sqrt{3}}{2})$ and $A'(0,-\frac{\sqrt{3}}{2})$ correspond to the equilateral configurations. 
The $x$-axis corresponds to collinear configurations, whereas the $y$-axis corresponds to the isosceles triangles in which the lengths of the two edges $\overline{m_1m_2}$ and $\overline{m_1m_3}$ are the same. If $m_1$ is at B or C, or if its distance from the remaining two tends to infinity, then the shape corresponds to the binary collision $E_3$, $E_2$, or $E_1$, where $E_i$ means that particle $i$ goes to infinity or particles $(j,k)$ collide, with $i \neq j$ and $i \ne k$. If the three masses are equal and $m_1$ is at $D(-3/2, 0)$, $O(0, 0)$ or $E(3/2, 0)$, then the corresponding configuration is $C_2$, $C_1$ or $C_3$, where $C_i$ represents the isosceles or collinear configuration. ![The relation between shape plane and shape sphere.[]{data-label="shape"}](shape1.ps){width="\linewidth"} Now, let us obtain the transformation between $(x,y)$ on the shape plane and $(\lambda, \theta)$, the longitude and latitude, on the shape sphere. We put the sphere of radius $\frac{\sqrt{3}}{4}$ in the three-dimensional space $(x,y,z)$ with its center at $N(0, 0, \frac{\sqrt{3}}{4})$, as in Fig. \[shape\]. The equation of the shape sphere is $$\begin{aligned} \frac{x^2}{(\sqrt{3}/4)^2} + \frac{y^2}{(\sqrt{3}/4)^2} +\frac{(z-\sqrt{3}/4)^2}{(\sqrt{3}/4)^2}=1. \label{eq:sphere}\end{aligned}$$ Every straight line connecting $E_1( 0, 0, \frac{\sqrt{3}}{2})$ and $( x, y, 0)$ meets the sphere at one further point. Hence every point on the shape plane is mapped to a point on the shape sphere, and vice versa. (As usual, infinity in the $(x,y)$ plane is treated as a point.) We denote several particular points on the shape sphere as follows. $L_+$ is the intersection of the line $\overline{E_1 A}$ with the sphere. $L_-$, $E_3$, $E_2$, $C_2$, $C_3$ and $C_1$, respectively, are the intersections of $\overline{E_1 A'}$, $\overline{E_1 B}$, $\overline{E_1 C}$, $\overline{E_1 D}$, $\overline{E_1 E}$ and $\overline{E_1 O}$ with the sphere. We denote the center of the sphere by $N$. 
We can easily calculate that the angle $\angle E_1 N E_3 $ is $2\pi/3$; the other angles can be calculated similarly. Let us give this sphere the coordinates $(\lambda, \theta)$, the longitude and latitude. Their domains of definition are $[0, 2\pi]$ and $[-\pi/2, \pi/2]$. The origin of $\theta$, the equator, is the great circle in the $xz$-plane. The north pole is at $L_+$. The origin of $\lambda$ is the great circle passing through $E_1$ and $L_+$, and $\lambda=0$ at $E_1$. Let us take any point $P(x,y,0)$ in the shape plane and connect $P$ and $E_1$ with a straight line. Let $P_1(x_1, y_1, z_1)$ be the unique intersection of this line with the sphere (\[eq:sphere\]) other than $E_1$. Introducing $r = \sqrt{x^2 + y^2}$, we have $$\begin{aligned} x_1 = \frac{3r}{4r^2+3} \cos \theta,\ y_1 = \frac{3r}{4r^2+3} \sin \theta,\nonumber\\ \ z_1= \frac{\sqrt{3}}{2} \biggl( 1 - \frac{3}{4r^2+3} \biggr), \label{X-x}\end{aligned}$$ and $$\begin{aligned} \tan \theta =\frac{y}{x}. \label{theta-x}\end{aligned}$$ If we represent $(x_1,y_1,z_1)$ in terms of $(\lambda, \theta)$, we have $$\begin{aligned} x_1 = \frac{ \sqrt{3} }{4} \sin \lambda \cos \theta, \ y_1 = \frac{ \sqrt{3} }{4} \sin \lambda \sin \theta, \nonumber\\ z_1 = \frac{ \sqrt{3} }{4} ( 1 + \cos \lambda ). \label{X-lambda}\end{aligned}$$ From Eqs. (\[X-x\]) and (\[X-lambda\]), we get $$\begin{aligned} \sin \lambda = \frac{ 4\sqrt{3} \sqrt{x^2+y^2} }{ 4 (x^2+y^2) + 3 }. \label{lambda-x}\end{aligned}$$ Thus, Eqs. (\[theta-x\]) and (\[lambda-x\]) represent the transformation between $(x,y)$ and $(\lambda, \theta)$. In the above, we have moved from the shape plane to the shape sphere in the configuration space. Let us now do the same in the momentum space. We need some preparation. By Eq. (\[eq:CG\]), the three momentum vectors form a triangle. We call this triangle the [**momentum triangle**]{}. 
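The transformation (\[theta-x\])–(\[lambda-x\]) is easy to check numerically. The following sketch (function names are ours) projects a point of the shape plane from $E_1$ onto the shape sphere of radius $\sqrt{3}/4$ centered at $N(0,0,\sqrt{3}/4)$, and verifies that $\sin\lambda$ from Eq. (\[lambda-x\]) agrees with the projected point:

```python
import math

R_S = math.sqrt(3) / 4  # radius of the shape sphere, centre N = (0, 0, sqrt(3)/4)

def plane_to_sphere(x, y):
    """Project (x, y, 0) from E1 = (0, 0, sqrt(3)/2) onto the shape sphere."""
    r2 = x * x + y * y
    t = 3.0 / (4.0 * r2 + 3.0)          # line parameter of the second intersection
    x1, y1 = t * x, t * y
    z1 = (math.sqrt(3) / 2.0) * (1.0 - t)
    return x1, y1, z1

def sin_lambda(x, y):
    """sin(lambda) in terms of the plane coordinates, as in the text."""
    r = math.hypot(x, y)
    return 4.0 * math.sqrt(3) * r / (4.0 * r * r + 3.0)

# The equilateral point A = (0, sqrt(3)/2) must land on the sphere:
x1, y1, z1 = plane_to_sphere(0.0, math.sqrt(3) / 2)
on_sphere = abs(x1**2 + y1**2 + (z1 - R_S)**2 - R_S**2) < 1e-12
print(on_sphere)  # True
# Consistency: sin(lambda) equals |(x1, y1)| / R_S for the projected point.
print(abs(math.hypot(x1, y1) / R_S - sin_lambda(0.0, math.sqrt(3) / 2)) < 1e-12)  # True
```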
We express this triangle in the $(\xi,\eta)$ plane, where the $\xi$- and $\eta$-axes correspond, respectively, to the $x$- and $y$-axes in Fig. 1, and we borrow the notation from Fig. 1. We normalize the length of ${\bf p}_2$ so that $|{\bf p}_2|=1$. Further, we rotate ${\bf p}_2$ so that it aligns with the $\xi$-axis. Finally, we put the starting point of the vector ${\bf p}_2$ at $B(-0.5,0)$ and its end-point at $C(0.5,0)$ (see Fig. 1). We denote this vector by $\widetilde{\bf p}_2$. Correspondingly, ${\bf p}_3$ and ${\bf p}_1$ are transformed to $\widetilde{\bf p}_3$ and $\widetilde{\bf p}_1$. Then $\widetilde{\bf p}_3$ starts at the end-point of $\widetilde{\bf p}_2$, $\widetilde{\bf p}_1$ starts at the end-point of $\widetilde{\bf p}_3$, and the end-point of $\widetilde{\bf p}_1$ returns to $B$: $$\begin{aligned} \widetilde{\bf p}_1 = \overrightarrow{PB},\ \widetilde{\bf p}_2 = \overrightarrow{BC}, \ \widetilde{\bf p}_3 = \overrightarrow{CP}.\end{aligned}$$ The $(\xi,\eta)$ plane can be called the shape plane of the momentum triangle. On this plane, we put the sphere of radius $\sqrt{3}/4$ as in Fig. 1 and introduce the longitude $\Lambda$ and latitude $\Theta$ on it. The transition from the shape plane $(\xi, \eta)$ to the shape sphere $(\Lambda, \Theta)$ in the momentum space can then be carried out in perfect analogy with the configuration space: we obtain the transformation equations (\[theta-x\]) and (\[lambda-x\]) with $(\Lambda, \Theta)$ in place of $(\lambda, \theta)$ and $(\xi, \eta)$ in place of $(x,y)$. In the next subsection we give this triangle its size and orientation.

The remaining variables
-----------------------

In subsection 3.2, we obtained the set of variables $({\bf f}, {\bf F}, R^*, \Omega^*)$. In subsection 3.3, we expressed ${\bf f} = (\lambda, \theta)$ and ${\bf F} = (\Lambda, \Theta)$. In this subsection, we replace $R^*$ with a more convenient variable; we briefly discuss $\Omega^*$ in the last part of this section. 
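The normalization of the momentum triangle described in §3.3 — scale so that $|{\bf p}_2|=1$ and rotate ${\bf p}_2$ onto the $\xi$-axis — can be sketched as follows (a minimal illustration with names of our own). Note that this linear map preserves the closure ${\bf p}_1+{\bf p}_2+{\bf p}_3={\bf 0}$ of the momentum triangle:

```python
import math

def normalized_momentum_triangle(p1, p2, p3):
    """Scale so |p2| = 1 and rotate so p2 lies along the +xi axis.
    Being linear, the map preserves the closure p1 + p2 + p3 = 0."""
    s = math.hypot(*p2)
    c, sn = p2[0] / s, p2[1] / s          # cos/sin of the angle of p2
    def T(v):                              # scale by 1/s, rotate by -angle(p2)
        x, y = v[0] / s, v[1] / s
        return (c * x + sn * y, -sn * x + c * y)
    return T(p1), T(p2), T(p3)

p1, p2, p3 = (1.0, 2.0), (-3.0, 0.5), (2.0, -2.5)   # chosen so they sum to zero
t1, t2, t3 = normalized_momentum_triangle(p1, p2, p3)
print(abs(t2[0] - 1.0) < 1e-12 and abs(t2[1]) < 1e-12)             # True
print(all(abs(a + b + c) < 1e-12 for a, b, c in zip(t1, t2, t3)))  # True
```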
$R^*$ represents the relative size of the configuration and momentum triangles. This is obviously related to the relative magnitude of the (absolute value of the) potential energy $U$ and the kinetic energy $T$ (see Eqs. (\[eq:energy\]) and (\[eq:potential\])). For a given energy and a given configuration of the three bodies, the configuration triangle is smaller if $U$ is larger, whereas the momentum triangle is smaller if $T$ is smaller. This makes it natural to use the virial ratio $k$ to parametrize the relative size of the configuration and momentum triangles. Here $k$ is defined by $$\begin{aligned} k \equiv \frac{T}{U}.\end{aligned}$$ There are two advantages in using the virial ratio as one of the variables. One is that the global property of the system can be easily grasped: in fact, the total energy $h$ of the system is positive if $k>1$, whereas $h$ is negative if $k <1$. The other advantage is that any triple system with negative (resp. positive) total energy can be brought to a system with a fixed negative (resp. positive) energy by a similarity transformation. Indeed, in §3.2 we used a scale transformation to normalize the size of the triangles; in that transformation it turns out that $\beta = \alpha^{-1/2}$, which is equivalent to a rescaling of the total energy. Let us use this fact to connect $k$ and $R^*$. Following the notation of §3.2 and §3.3, we have $r = |{\bf q}_2-{\bf q}_3|$ and $R = |{\bf p}_2|$. When we change the scale $r,R \rightarrow 1, R^*$, the relation between these variables is $R^*=\sqrt{r}\, R$. The relation between $k$ and $r,R,R^*$ is $$\begin{aligned} k=\frac{ T(R,\Lambda,\Theta) }{ U(r,\lambda,\theta) } =\frac{ T(R^*,\Lambda,\Theta) }{ U(1,\lambda,\theta) },\end{aligned}$$ where $T$ and $U$ are represented as functions of $(R,\Lambda,\Theta)$ and $(r,\lambda,\theta)$. We have not yet determined the relative angle between the $x$-axis and the $\xi$-axis. 
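As a small illustration of the virial ratio (a sketch with helper names of our own, $G=1$), one can compute $k=T/U$ for a given state and check the sign rule $h>0 \Leftrightarrow k>1$:

```python
import math

def kinetic(m, p):
    """T = sum_i |p_i|^2 / (2 m_i)."""
    return sum((px**2 + py**2) / (2 * mi) for (px, py), mi in zip(p, m))

def potential(m, q):
    """U = sum over pairs of m_i m_j / |q_i - q_j|  (G = 1)."""
    U = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            U += m[i] * m[j] / math.dist(q[i], q[j])
    return U

def virial_ratio(m, q, p):
    """k = T / U; the total energy h = T - U = U (k - 1) has the sign of k - 1."""
    return kinetic(m, p) / potential(m, q)

m = [1.0, 1.0, 1.0]
q = [(-0.5, 0.0), (0.5, 0.0), (0.0, math.sqrt(3) / 2)]  # equilateral, unit side
p = [(0.3, 0.0), (-0.3, 0.2), (0.0, -0.2)]              # net momentum zero
k = virial_ratio(m, q, p)
h = kinetic(m, p) - potential(m, q)
print((k > 1) == (h > 0))  # True
```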
This is related to the reference direction from which the angle $\Omega^*$ between the configuration and momentum triangles is measured. We take the $\xi$-axis (i.e., $\widetilde{\bf p}_2$) along ${\bf q}_2$. $\Omega^*$ is then the angle between $\widetilde{\bf p}_2$ and ${\bf p}_2$, i.e., the angle between ${\bf q}_2$ and ${\bf p}_2$, so $$\begin{aligned} \frac{ {\bf p}_i }{|{\bf p}_i|} = {\cal R}(\Omega^*) \frac{ \widetilde{\bf p}_i }{|\widetilde{\bf p}_i|}, \qquad \frac{ \widetilde{\bf p}_2 }{|\widetilde{\bf p}_2|} = \frac{ {\bf q}_2 }{|{\bf q}_2|},\end{aligned}$$ where ${\cal R} (\Omega)$ is the rotation matrix around the $z$-axis by $\Omega$. Finally, we have six variables for the planar three-body problem, $(\lambda, \theta, \Lambda, \Theta, k, \omega)$, where we write $\omega$ instead of $\Omega^*$. This space is $S^2 \times S^2 \times (I \times S^1)$, where $I = [0, 1)$ or $I=[0, \infty]$. If we consider systems with negative energy, then $I = [0, 1)$ and all variables are bounded.

Example orbits in the shape space
---------------------------------

Let us represent some periodic orbits in our shape space. Collinear motions lie on the equators of the configuration and momentum shape spheres, and $\omega$ is equal to $0$ or $\pi$, whereas $k$ changes with time. We know that the isosceles motions move on the meridians of the configuration shape sphere; these motions are represented also on the meridians of the momentum shape sphere. The position $(\lambda, \theta)$ of the Euler collinear motion is fixed on the equator of the configuration shape sphere and depends on the mass ratio of the bodies. Its position $(\Lambda,\Theta)$ is also fixed, depending on the mass ratio and on $(\lambda, \theta)$, and generally $\Theta \neq 0$. The Lagrange motion is fixed at the north or south pole of the configuration shape sphere (see the cross in Fig. 2). $(\Lambda,\Theta)$ is fixed and depends only on the mass ratio; generally $\Theta \neq 0$. See Fig. 3 for the motion of $(k,\omega)$. 
The loci are concentric closed curves with center at $k=0.5$, $\omega=\pi/2$, which is the position of the so-called Lagrange solution. In the case of the famous figure-eight solution, the motion is represented by identical curves on the $\lambda$–$\theta$ sphere and the $\Lambda$–$\Theta$ sphere because of the similarity of the configuration and momentum triangles [@Fuji2004]. We show this orbit on the $xy$-plane in Fig. \[eight\]. Its $k$-$\omega$ motion is interesting; we show it in Fig. \[manykw\]. Blue and red triangles correspond to collinear configurations, whereas a green triangle corresponds to an isosceles configuration. In the above three examples, it is interesting to note that these orbits have the same trajectories on the configuration and momentum spheres; the difference manifests itself on the $(k,\omega)$-surface. In a sense, these orbits are degenerate. We expect that general periodic orbits have different trajectories on the three surfaces of the shape space.

The final step to our variables
===============================

The space represented by the variables $(\lambda, \theta, \Lambda, \Theta, k, \omega)$ can be interpreted in two different ways. The first interpretation is that this is a [*shape space*]{}, an extension of the shape sphere ([@Moeckel88], [@eight2000]). The other interpretation is that this is an initial-condition space: any possible initial condition of the planar three-body problem can be expressed in this space.

Global surface of section
-------------------------

In this section, we try to decrease the number of variables further by one. This can be achieved if we find a global surface of section; the search for one is motivated by the rectilinear problem ([@MH1989]). We suggest the surface $\dot{I} = 0$, where $I$ is the moment of inertia of the triple system and $\dot{I}$ is its time derivative. 
We may change variables from $(\lambda, \theta, \Lambda, \Theta, k, \omega)$ to $(\lambda, \theta, L, \dot{I}, k, \omega)$, where $L$ is the total angular momentum of the system. With $I = \sum_i m_i |{\bf q}_i|^2$, the equations for the transformation are $$\begin{aligned} L = \sum_i {\bf q}_i \wedge {\bf p}_i = R^* \sum_i {\bf q}_i \wedge ( {\cal R}(\omega) \widetilde{\bf p}_i ), \\ \dot{I} = 2 \sum_i {\bf q}_i \cdot {\bf p}_i = 2 R^* \sum_i {\bf q}_i \cdot ( {\cal R}(\omega) \widetilde{\bf p}_i ), \end{aligned}$$ where ${\cal R}(\omega)$ is the rotation matrix used in §3.4. We can then move to the hyper-surface $(\lambda, \theta, L, \dot{I}=0, k, \omega)$. This hyper-surface plays the role of a global surface of section in the sense that almost all (in the sense of measure) orbits pass through it. Now we state the following result.

[**Proposition**]{}. All orbits except a set of measure zero experience $\dot{I}=0$.

[*Proof.*]{} Let us first recall the classification of [@Chazy1922]:

H: hyperbolic motions of the three bodies;
HE$_i$: hyperbolic-elliptic motions in which particle $i$ escapes;
P: parabolic motions of the three bodies;
PE$_j$: parabolic-elliptic motions in which particle $j$ escapes;
B: bounded motions;
OS: oscillatory motions.

These motions can occur as initial motions and as final motions, so there are 36 combinations of initial and final motions, such as H$^-$ – B$^+$, where '$-$' indicates the initial motion and '$+$' indicates the final motion. Among these, combinations of escape initial motions and escape final motions necessarily give $\dot{I}=0$, because $I \rightarrow \infty$ as $t \rightarrow \pm \infty$ and $I$ must attain a minimum at some finite $t$. In the case of OS, there is a minimum of $I$ in each oscillation, so there is a time at which $\dot{I}=0$. The remaining combinations to be examined are those between $B$ and escape motions, and between $B$ and itself. 
However, the combinations of $B$ and escape motions occupy a set of measure zero in the phase space, as has been shown ([@V.M.Alekseev]). Therefore we only need to check the last combination, B$^-$ – B$^{+}$. If the orbit does not experience $\dot{I}=0$ in the range $t_0 < t < \infty$, then $\dot{I} >0$ or $\dot{I}<0$ in this range of $t$. We put $\dot{I}_{\sup}=\sup_{t_0 < t < \infty}{\dot{I}}$ and $\dot{I}_{\inf}=\inf_{t_0 < t < \infty}{\dot{I}}$. Then for the case $\dot{I}>0$, $$\begin{aligned} I(t) &=& I(t_0) + \int^t_{t_0} \dot{I}(s)\, ds \geq I(t_0) + \dot{I}_{\inf} (t - t_0), \label{key:eq22}\end{aligned}$$ and for the case $\dot{I}<0$, $$\begin{aligned} I(t) &=& I(t_0) + \int^t_{t_0} \dot{I}(s)\, ds \leq I(t_0) - |\dot{I}_{\sup}| (t-t_0). \label{key:eq23}\end{aligned}$$ Let us first consider the case (\[key:eq22\]). In order that $I(t)$ remains finite as $t \rightarrow \infty$, it is necessary that $\dot{I}_{\inf}=0$. This means that the triple system asymptotically tends to a configuration with constant $I$ as $t \rightarrow \infty$. In order that $I(t)$ remains greater than zero in the case (\[key:eq23\]) as $t \rightarrow \infty$, it is necessary that $\dot{I}_{\sup}=0$. In this case, too, the triple system asymptotically tends to a configuration with constant $I$ as $t \rightarrow \infty$. If $I(t) \rightarrow 0$ as $t$ increases, this corresponds to triple collision at a finite collision time $t^*$. In this case, the system does not experience $\dot{I} = 0$ for $t_0 < t < t^*$. As is well known, orbits which experience triple collision occupy a set of measure zero in the phase space. The remaining problem is to estimate the measure of the orbits which asymptotically tend to $\dot{I} =0$ as $t \to \infty$. These orbits constitute the stable set of the set $\dot{I} =0$. In any case, this stable set cannot fill the phase space, since otherwise a contradiction with volume preservation would be derived.
Therefore, the set of orbits which do not experience $\dot{I} =0$ is of measure zero. Q.E.D.

An extension to the three-dimensional case
------------------------------------------

Before extending our results to three dimensions, we discuss the dimension of the initial value space of the three-dimensional three-body problem. There are ten equivalence relations. Six of these are for the translations, three are for the rotations, and one is for the scale. Each of them has its equivalence class. We denote these sets by ${\bf E}_i (i=1,2,...,10)$, ${\rm dim}~{\bf E}_i=1$ and ${\bf E}_i \cap {\bf E}_j = \emptyset$. We define the set ${\bf X}$ as ${\bf R}^{18}/(\Pi_{i=1}^{10} {\bf E}_i)$; then the dimension of ${\bf X}$ is 8. We will find eight bounded variables in ${\bf X}$ in what follows. Even in the three-dimensional case, we can define the plane in which the three bodies live. We take this plane as the initial configuration plane. However, in contrast to the planar case, the plane in which the configuration triangle exists and the plane in which the momentum triangle exists move separately. Thus we need to specify the relative position of the two planes. Now, we look for the remaining variables. Let us consider an arbitrary state of the triple system, and suppose that the shapes and the sizes of the configuration and momentum triangles are determined. As before, we put the center of gravity of the configuration triangle at the origin of the $(x,y)$-plane, and put ${\bf q}_3 - {\bf q}_2$ along the $x$-axis. We here define the nominal position of the momentum triangle. Let $\widetilde{\bf p}_i$ be the momentum of particle $i$ in the nominal position. We put the center of mass of the triangle made with $\widetilde{\bf p}_i$ at the origin of the $(x,y)$-plane, and put $\widetilde{\bf p}_2$ parallel to ${\bf q}_2$. We take the $\xi$-axis in this direction, and take the $\eta$-axis perpendicular to the $\xi$-axis.
The $\zeta$-axis is defined as the third coordinate axis in the momentum space. We denote the $(\xi,\eta)$-plane by $\pi_0$. So, initially, the plane of the nominal momentum triangle coincides with the plane of the configuration triangle. In order to bring the nominal momentum triangle to the actual momentum triangle, we need three steps.

1. Rotate $\pi_0$ around the $\zeta$-axis by angle $\omega$. We call the new plane $\pi'$, and denote the $\xi$-axis of $\pi'$ by $\xi_{\pi'}$.

2. Rotate $\pi'$ around $\xi_{\pi'}$ by angle $\psi$. We call the new plane $\pi''$. Denote the $\zeta$-axis of $\pi''$ by $\zeta_{\pi''}$.

3. Rotate $\pi''$ around $\zeta_{\pi''}$ by angle $\phi$. Then the triangle coincides with the actual momentum triangle.

Here, the angle $\omega$ coincides with that in the two-dimensional case. Now it is easy to make the initial conditions for the three-dimensional case. At first, we make the two-dimensional initial condition for the form of the [*momentum triangle*]{} at the nominal position. Second, we rotate the plane to the position where the actual momentum triangle exists. Then the eight variables for the three-dimensional case are $(\lambda ,\theta, \Lambda, \Theta, k, \omega, \psi, \phi)$. As in the two-dimensional case, we can transform variables from $( \lambda, \theta, \Lambda, \Theta, k, \omega, \psi, \phi)$ to $( \lambda, \theta, L, \dot{I}, k, \omega,\psi,\phi)$, where $L$ and $\dot{I}$ are the total angular momentum and the time derivative of the total moment of inertia. Finally, the surface defined by $\dot{I}=0$ will be the global surface of section as in the two-dimensional case.

Summary and Discussion
======================

In this report, for a given mass, we have extended the free-fall problem (FFP) to the full three-body problem in three dimensions.
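The three rotations above form an intrinsic $z$–$x$–$z$ (Euler-angle) sequence, so the matrix taking the nominal momentum triangle to the actual one can be composed in the fixed frame as $R_z(\omega)R_x(\psi)R_z(\phi)$. A sketch (our illustration, assuming the $\xi,\eta,\zeta$ axes initially coincide with $x,y,z$):

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def momentum_rotation(omega, psi, phi):
    """Steps 1-3 composed: intrinsic rotations about z, then the carried
    x-axis, then the carried z-axis equal Rz(omega) @ Rx(psi) @ Rz(phi)
    in the original frame."""
    return Rz(omega) @ Rx(psi) @ Rz(phi)
```

For $\psi=0$ the two planes coincide and the composition degenerates to the planar rotation by $\omega+\phi$, consistent with the remark that $\omega$ coincides with the two-dimensional angle.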
We find new variables $(\lambda, \theta, L, \dot{I}, k, \omega,\phi,\psi)$ for the three-dimensional case and $(\lambda, \theta, L, \dot{I}, k, \omega)$ for the two-dimensional case which are convenient for computer simulations. The reasons are the following. If the virial ratio $k$ is positive and large, then the total energy of the triple system is positive and its final motion is simple. So we can omit this case from our consideration, and then we can bound the value of $k$. If we set the virial ratio $k \le 1$, then the domain of definition for $L$ and $\dot{I}$ is bounded. The surface defined by $\dot{I}=0$ in the space $(\lambda,\theta,L,\dot{I},k,\omega,\phi,\psi)$ becomes the global surface of section. We can map the structure of the whole phase space on this surface of section. Let us briefly discuss the future role of the present result. The progress of computers enables us to calculate an immense number of orbits. We can see the projections of the phase space through integrations of orbits by fixing some of the variables such as $k$, $L$, and $\dot{I}$ in the initial value space. As seen in §3.5, known periodic orbits occupy special positions in the shape space. We hope that new kinds of periodic orbits may be found with the aid of this shape-space representation.

[**Acknowledgments.**]{} One of the authors (K.K.) expresses his thanks to Prof. Keiichi Maeda for his encouragement.

[99]{} Agekyan, T.A. and Anosova, J.P.: 1968, [*Soviet Physics-Astronomy*]{} [**11**]{}, 1006 - 1014. Alekseev, V.M.: 1981, [*Amer. Math. Soc. Transl.*]{} (2), Vol. 116. Anosova, J.P., Bertov, D.I. and Orlov, V.V.: 1981, [*Astrophysics*]{} [**20**]{}, 177. Anosova, J.P.: 1986, [*Astrophys. Sp. Sci.*]{} [**124**]{}, 217 - 241. Broucke, R.: 1979, [*Astron. Astrophys.*]{} [**73**]{}, 303 - 313. Chazy, J.: 1922, [*Ann. Sci. École Norm. Sup.*]{} (3) [**39**]{}, 29 - 130. Chenciner, A.
and Montgomery, R.: 2000, A remarkable periodic solution of the three-body problem in the case of equal masses, [*Ann. Math.*]{} [**152**]{}, 881 - 901. Hietarinta, J. and Mikkola, S.: 1993, [*Chaos*]{} [**3**]{} (2). McGehee, R.: 1974, Triple collision in the collinear three-body problem, [*Invent. Math.*]{} [**27**]{}, 191 - 227. Mikkola, S. and Hietarinta, J.: 1989, [*Celestial Mechanics and Dynamical Astronomy*]{} [**46**]{}, 1 - 18. Mikkola, S. and Aarseth, S.V.: 1990, [*Celestial Mechanics and Dynamical Astronomy*]{} [**47**]{}, 375 - 390. Mikkola, S. and Aarseth, S.V.: 1993, [*Celestial Mechanics and Dynamical Astronomy*]{} [**57**]{}, 439 - 459. Moeckel, R.: 1988, Some qualitative features of the three-body problem, [*Contemporary Mathematics*]{} [**81**]{}, 1 - 17. Tanikawa, K., Umehara, H., and Abe, H.: 1995, [*Cel. Mech. Dynam. Astron.*]{} [**62**]{}, 335 - 362. Tanikawa, K. and Mikkola, S.: 2000, [*Cel. Mech. Dynam. Astron.*]{} [**76**]{}, 23 - 34. Tanikawa, K. and Mikkola, S.: 2000, [*Chaos*]{} [**10**]{}, 649 - 657. Umehara, H. and Tanikawa, K.: 2000, [*Cel. Mech. Dynam. Astron.*]{} [**76**]{}, 187 - 214. Whittaker, E.T.: 1952, [*A Treatise on the Analytical Dynamics of Particles and Rigid Bodies*]{}, Cambridge University Press, Fourth Edition. Zare, K. and Chesley, S.: 1998, [*Chaos*]{} [**8**]{}, 475 - 494. Fujiwara, T., Fukuda, H., Kameyama, A., Ozaki, H. and Yamada, M.: 2004, [*Journal of Physics A*]{} [**37**]{}, 10571.
--- abstract: 'Over the past two decades several different approaches to defining a geometry over ${{{\mathbb F}_1}}$ have been proposed. In this paper, relying on Toën and Vaquié’s formalism [@TV], we investigate a new category ${{\mathsf{Sch}}}_{{\widetilde{{{\mathsf B}}}}}$ of schemes admitting a Zariski cover by affine schemes relative to the category of blueprints introduced by Lorscheid [@Lor12]. A blueprint, which may be thought of as a pair consisting of a monoid $M$ and a relation on the semiring $M\otimes_{{{{\mathbb F}_1}}} {{\mathbb N}}$, is a monoid object in a certain symmetric monoidal category ${{\mathsf B}}$, which is shown to be complete, cocomplete, and closed. We prove that every ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ can be associated, through adjunctions, with both a classical scheme $\Sigma_{{\mathbb Z}}$ and a scheme $\underline{\Sigma}$ over ${{{\mathbb F}_1}}$ in the sense of Deitmar [@Dei], together with a natural transformation $\Lambda\colon \Sigma_{{\mathbb Z}}\to \underline{\Sigma}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$. Furthermore, as an application, we show that the category of “${{{\mathbb F}_1}}$-schemes" defined by A. Connes and C. Consani in [@CC] can be naturally merged with that of ${{\widetilde{{{\mathsf B}}}}}$-schemes to obtain a larger category, whose objects we call “${{{\mathbb F}_1}}$-schemes with relations”.' 
date: last revised
title: '**Some remarks on blueprints and ${{{\mathbb F}_1}}$-schemes**'
---

[^1] [Claudio Bartocci,$^\P, ^\S$ Andrea Gentili,$^\P$ and Jean-Jacques Szczeciniarz$^\S$]{}\ $^\P$Dipartimento di Matematica, Università di Genova, Genova, Italy\ $^{\S}$[Laboratoire SPHERE, CNRS, Université Paris Diderot (Paris 7), 75013 Paris, France]{} [ ]{}

Introduction
============

A quick overview of ${{{\mathbb F}_1}}$-geometry
------------------------------------------------

The nonexistent field ${{{\mathbb F}_1}}$ made its first appearance in Jacques Tits’s 1956 paper “Sur les analogues algébriques des groupes semi-simples complexes” [@Tits56].[^2] According to Tits, it was natural to call “$n$-dimensional projective space over ${{{\mathbb F}_1}}$” a set of $n+1$ points, on which the symmetric group $\Sigma_{n+1}$ acts as the group of projective transformations. So, $\Sigma_{n+1}$ was thought of as the group of ${{{\mathbb F}_1}}$-points of $SL_{n+1}$, and more generally it was conjectured that, for each algebraic group $G$, one ought to have $W(G)= G({{{\mathbb F}_1}})$, where $W(G)$ is the Weyl group of $G$. A further strong motivation to seek a geometry over ${{{\mathbb F}_1}}$ was the hope, based on the multifarious analogies between number fields and function fields, to find some pathway to attack Riemann’s hypothesis by mimicking André Weil’s celebrated proof. The idea behind that, as explicitly stated in Yuri Manin’s influential 1991–92 lectures [@ManL] and in Kapranov and Smirnov’s unpublished paper [@KS], was to regard ${\operatorname{Spec}}{{\mathbb Z}}$, the final object of the category of schemes, as an arithmetic curve over the “absolute point” ${\operatorname{Spec}}{{{\mathbb F}_1}}$.
Manin’s work drew inspiration from Kurokawa’s paper [@Kur] together with Deninger’s results about “representations of zeta functions as regularized infinite determinants [@Den1; @Den2; @Den3] of certain ‘absolute Frobenius operators’ acting upon a new cohomology theory”. Developing these insights, Manin suggested a conjectural decomposition of the classical complete Riemann zeta function of the form [@ManL eq. (1.5)] $$\begin{aligned} \label{Maninequation} Z\bigl(\overline{{\operatorname{Spec}}{{\mathbb Z}}}, s\bigr) &:= 2^{-1/2} \pi^{-s/2} \Gamma(\frac{s}{2}) \zeta(s) = \frac{ \prod^{\tiny\hbox{reg}}_\rho \frac{s-\rho}{2\pi}}{ {\frac{s}{2\pi}}\frac{s-1}{2\pi}}\nonumber\\ &\stackrel{?}{=} \frac {\hbox{det}^{\tiny\hbox{reg}} \bigl( \frac{1}{2\pi} (s\cdot\operatorname{Id} - \Phi ) \big\vert H^1_{?} (\overline{{\operatorname{Spec}}{{\mathbb Z}}})\bigr)} {\hbox{det}^{\tiny\hbox{reg}} \bigl( \frac{1}{2\pi} (s\cdot\operatorname{Id} - \Phi ) \big\vert H^0_{?} (\overline{{\operatorname{Spec}}{{\mathbb Z}}})\bigr) \hbox{det}^{\tiny\hbox{reg}} \bigl( \frac{1}{2\pi} (s\cdot\operatorname{Id} - \Phi ) \big\vert H^2_{?} (\overline{{\operatorname{Spec}}{{\mathbb Z}}})\bigr) }\,,\end{aligned}$$ where the notations $\prod^{\tiny\hbox{reg}}_\rho$ and $\hbox{det}^{\tiny\hbox{reg}}$ refer to “zeta regularization” of infinite products and the last hypothetical equality “postulates the existence of a new cohomology theory $ H^{\bullet}_{?}$, endowed with a canonical ‘absolute Frobenius’ endomorphism $\Phi$”. He conjectured, moreover, that the functions of the form $\frac{s-n}{2\pi}$ in eq. \[Maninequation\] could be interpreted as zeta functions according to the definition $$Z({{\mathbb T}}^n, s) = \frac{s-n}{2\pi}\,, \quad n\geq 0\,,$$ where “Tate’s absolute motive” ${{\mathbb T}}$ was to be “imagined as a motive of a one-dimensional affine line over the absolute point, ${{\mathbb T}}^0 = \bullet = {\operatorname{Spec}}{{{\mathbb F}_1}}$”.
The first full-fledged definition of a variety over “the field with one element” was proposed by Christophe Soulé in the 1999 preprint [@Soule99]; five years later this definition was slightly modified by the same author in the paper [@Soule04]. Taking as a starting point Kapranov and Smirnov’s suggestion that ${{{\mathbb F}_1}}$ should have an extension ${{{\mathbb F}_{1^n}}}$ of degree $n$, Soulé insightfully posited that $${{{\mathbb F}_{1^n}}}\otimes_{{{{\mathbb F}_1}}} {{\mathbb Z}}= {{\mathbb Z}}[T] / (T^n -1) =: R_n\,.$$ Let $\mathsf R$ be the full subcategory of the category ${{\mathsf{Ring}}}$ of commutative rings generated by the rings $R_n$, $n\geq 1$, and their finite tensor products. An affine variety $X$ over ${{{\mathbb F}_1}}$ is then defined as a covariant functor $\mathsf R \to {{\mathsf{Set}}}$ plus some extra data such that there exists a unique (up to isomorphism) affine variety $X_{{\mathbb Z}}= X\otimes_{{{\mathbb F}_1}}{{\mathbb Z}}$ over ${{\mathbb Z}}$ along with an immersion $X \hookrightarrow X_{{\mathbb Z}}$ satisfying a suitable universal property [@Soule04 Définition 3]. In particular, one has a natural inclusion $X({{{\mathbb F}_{1^n}}}) \subset (X\otimes_{{{\mathbb F}_1}}{{\mathbb Z}})(R_n)$ for each $n\geq 1$. A notable result proven by Soulé was that smooth toric varieties can always be defined over ${{{\mathbb F}_1}}$. To formalize ${{{\mathbb F}_1}}$-geometry Anton Deitmar adopted, in 2005, a different approach, which can be dubbed “minimalistic” (using the evocative terminology introduced by Manin in [@Man10]). In his terse paper [@Dei], Deitmar associates to each commutative monoid $M$ its “spectrum over ${{{\mathbb F}_1}}$” ${\operatorname{Spec}}M$, consisting of all prime ideals of $M$, i.e. of all subsets $P\subset M$ with $MP \subseteq P$ such that $xy \in P$ implies $x\in P$ or $y\in P$.
The set ${\operatorname{Spec}}M$ can be endowed with a topology and with a structure (pre)sheaf $\mathcal O_M$ via localization, just as in the usual case of commutative rings. A topological space $X$ with a sheaf $\mathcal O_X$ of monoids is then called a “scheme over ${{{\mathbb F}_1}}$” if for every point $x\in X$ there is an open neighborhood $U\subset X$ such that $(U, {\mathcal O}_X\vert_U)$ is isomorphic to $({\operatorname{Spec}}M, \mathcal O_M)$ for some monoid $M$. The forgetful functor ${{\mathsf{Ring}}}\to {{\mathsf{Mon}}}$ has a left adjoint given by $M\mapsto M\otimes_{{{\mathbb F}_1}}{{\mathbb Z}}$ (in Deitmar’s notation), and this functor extends to a functor ${\,\text{-}\,}\otimes_{{{\mathbb F}_1}}{{\mathbb Z}}$ from the category of schemes over ${{{\mathbb F}_1}}$ to the category of classical schemes over ${{\mathbb Z}}$. Tits’s 1957 conjecture stating that $GL_n({{{\mathbb F}_1}})=\Sigma_n$ can be easily proven in Deitmar’s theory. Indeed, since ${{{\mathbb F}_1}}$-modules are just sets and ${{{\mathbb F}_{1^n}}}\otimes_{{{\mathbb F}_1}}{{\mathbb Z}}$ has to be isomorphic to ${{\mathbb Z}}^n$, it turns out that ${{{\mathbb F}_{1^n}}}$ can be identified with the set $\{1,\dots, n\}$ of $n$ elements. Hence $$GL_n({{{\mathbb F}_1}}) = \operatorname{Aut}_{{{\mathbb F}_1}}({{{\mathbb F}_{1^n}}}) = \operatorname{Aut}(1,\dots,n) = \Sigma_n\,.$$ It is not hard to show, moreover, that the functor $GL_n$ on rings over ${{{\mathbb F}_1}}$ is represented by a scheme over ${{{\mathbb F}_1}}$ [@Dei Prop. 5.2].
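Deitmar’s spectrum is directly computable for small examples. The brute-force sketch below (our illustration, not taken from [@Dei]) enumerates the prime ideals of a finite commutative monoid given by its multiplication map; note that the empty set is always prime and serves as the generic point of ${\operatorname{Spec}}M$.

```python
from itertools import chain, combinations

def spec(elements, mul, one):
    """All prime ideals of a finite commutative monoid.

    P is a prime ideal iff M*P is contained in P, 1 is not in P, and
    x*y in P implies x in P or y in P (hence the empty set qualifies).
    """
    subsets = chain.from_iterable(
        combinations(elements, r) for r in range(len(elements) + 1))
    primes = []
    for cand in subsets:
        P = set(cand)
        if one in P:
            continue
        if any(mul(m, p) not in P for m in elements for p in P):
            continue  # not an ideal
        if any(mul(x, y) in P and x not in P and y not in P
               for x in elements for y in elements):
            continue  # not prime
        primes.append(frozenset(P))
    return primes

# the monoid {1, e} with e*e = e has exactly two primes: {} and {e}
two_point = spec(['1', 'e'], lambda a, b: 'e' if 'e' in (a, b) else '1', '1')
```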
As for zeta functions, Deitmar defines, for a scheme $X$ over ${{{\mathbb F}_1}}$ and for a prime $p$, the formal power series $$Z_X(p, T) = \exp \bigl( \sum_{n=1}^\infty \frac{T^n}{n}\# X({\mathbb F}_{p^n}) \bigr)\,,$$ where ${\mathbb F}_{p^n}$ stands for the field of $p^n$ elements with only its monoidal multiplicative structure and $ X({\mathbb F}_{p^n})$ denotes the set of ${\mathbb F}_{p^n}$-valued points of $X$, and proves that $Z_X(p, T)$ coincides with the Hasse–Weil zeta function of $X\otimes_{{{\mathbb F}_1}}{\mathbb F}_{p}$ [@Dei Prop. 6.3]. Albeit elegant, this result is a bit of a letdown, for — as the author himself is ready to admit — it is clear that “this type of zeta function \[...\] does not give new insights”. A natural and extremely general formalism for ${{{\mathbb F}_1}}$-geometry was elaborated by Bertrand Toën and Michel Vaquié in their 2009 paper [@TV], tellingly entitled [*Au dessous de ${\operatorname{Spec}}{{\mathbb Z}}$*]{}, whose approach appears to be largely inspired by Monique Hakim’s work [@Hak]. The authors there showed how to construct an “algebraic geometry” relative to any symmetric monoidal category ${{\mathsf C}}=({{\mathsf C}},\otimes, \mathbf{1})$, which is supposed to be complete, cocomplete and to admit internal homs. The basic idea is that the category ${{\mathsf{CMon}}}_{{\mathsf C}}$ of commutative (associative and unitary) monoid objects in ${{\mathsf C}}$ can be taken as a substitute for the category of commutative rings (the monoid objects in the category ${{\mathsf{Ab}}}= {{\mathbb Z}}{\,\text{-}\,}{{\mathsf{Mod}}}$ of Abelian groups) to the end of defining a suitable notion of “scheme over ${{\mathsf C}}$”.
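This zeta function is easy to check symbolically in the simplest case. The sketch below (our illustration) expands $\exp\bigl(\sum_{n\geq 1}\#X({\mathbb F}_{p^n})T^n/n\bigr)$ as a truncated power series in exact rational arithmetic. For the affine line over ${{{\mathbb F}_1}}$ one has $\#X({\mathbb F}_{p^n})=p^n$, since a monoid morphism out of the free monoid on one generator is just a choice of the image of the generator, and the series reproduces $1/(1-pT)$, the Hasse–Weil zeta function of the affine line over ${\mathbb F}_p$.

```python
from fractions import Fraction

def zeta_coeffs(counts, order):
    """Coefficients of exp(sum_{n>=1} counts(n) T^n / n) up to T^(order-1),
    via the recurrence b' = a' b for b = exp(a)."""
    a = [Fraction(0)] + [Fraction(counts(n), n) for n in range(1, order)]
    b = [Fraction(1)] + [Fraction(0)] * (order - 1)
    for n in range(1, order):
        b[n] = sum(k * a[k] * b[n - k] for k in range(1, n + 1)) / n
    return b

p = 3
coeffs = zeta_coeffs(lambda n: p**n, 8)   # point counts p^n for the affine line
# coeffs are 1, p, p^2, ..., i.e. the expansion of 1/(1 - p T)
```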
Each object $V$ of ${{\mathsf{CMon}}}_{{\mathsf C}}$ gives rise to the category $V{\,\text{-}\,}{{\mathsf{Mod}}}$ of $V$-modules and each morphism $V \to W$ in ${{\mathsf{CMon}}}_{{\mathsf C}}$ determines a change of basis functor ${\,\text{-}\,}\otimes_V W \colon V{\,\text{-}\,}{{\mathsf{Mod}}}\to W{\,\text{-}\,}{{\mathsf{Mod}}}$; the category of commutative $V$-algebras can be realized as the category of commutative monoids in $V{\,\text{-}\,}{{\mathsf{Mod}}}$ and is naturally equivalent to the category $V/{{{\mathsf{CMon}}}_{{\mathsf C}}}$. An affine scheme over ${{\mathsf C}}$ is, by definition, an object of the opposite category ${{\mathsf{Aff}}}_{{\mathsf C}}= {{\mathsf{CMon}}}^{\text{op}}_{{\mathsf C}}$ and the tautological contravariant functor ${{\mathsf{CMon}}}_{{\mathsf C}}\to {{\mathsf{Aff}}}_{{\mathsf C}}$ is called ${\operatorname{Spec}}({\,\text{-}\,})$. By means of the pseudo-functor $M$ that maps an object $V$ in ${{\mathsf{CMon}}}_{{\mathsf C}}$ to the category of $V$-modules and a morphism ${\operatorname{Spec}}W \to {\operatorname{Spec}}V$ to the functor ${\,\text{-}\,}\otimes_V W\colon V{\,\text{-}\,}{{\mathsf{Mod}}}\to W{\,\text{-}\,}{{\mathsf{Mod}}}$, one may introduce the notions of “Zariski cover” and “flat cover” (“$M$-faithfully flat” in Toën and Vaquié’s terminology; see Definition \[definitionsofcovers\] and Remark \[mainremarkonTV\] below) and use such notions to equip ${{\mathsf{Aff}}}_{{\mathsf C}}$ with two distinct Grothendieck topologies, called, respectively, the flat and the Zariski topology. These topologies determine two categories of sheaves on ${{\mathsf{Aff}}}_{{\mathsf C}}$, namely ${{\mathsf{Sh}}}^{\text{flat}}({{\mathsf{Aff}}}_{{\mathsf C}})\subset {{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}}) \subset {{\mathsf{Presh}}}({{\mathsf{Aff}}}_{{\mathsf C}})$.
At this point, mimicking what is done in classical algebraic geometry, a “scheme over ${{\mathsf C}}$” is defined as a sheaf in ${{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}})$ that admits an affine Zariski cover (see Definition  \[definitionsofcoversforsheaves\] and Definition \[defschemeoverC\] below). If we take as ${{\mathsf C}}$ the category ${{\mathsf{Set}}}$ of sets endowed with the monoidal structure induced by the Cartesian product, then the category ${{\mathsf{Aff}}}_{{\mathsf{Set}}}$ is nothing but the category ${{\mathsf{Mon}}}^\text{op}$ and the objects of the category ${{\mathsf{Sch}}}_{{\mathsf{Set}}}$ can be thought of — as remarked by Toën and Vaquié — as “schemes over ${{{\mathbb F}_1}}$”. Actually, as proven by Alberto Vezzani in [@Vezz], such schemes, which we shall call [*monoidal schemes*]{}, turn out to be equivalent to Deitmar’s schemes. Deitmar’s schemes appear therefore to constitute the very core of ${{{\mathbb F}_1}}$-geometry, not just because their definition is rooted in the basic notion of the prime spectrum of a monoid, but especially because they naturally fit into the categorical framework established by Toën and Vaquié in [@TV], which admits of generalizations in many directions (e.g. towards a derived algebraic geometry over ${{{\mathbb F}_1}}$). Nonetheless, they are affected by some intrinsic limitations, which are clearly revealed by a result proven by Deitmar himself in 2008 [@Dei08 Thm. 4.1]: [*Let $X$ be a connected integral ${{{\mathbb F}_1}}$-scheme of finite type.[^3] Then every irreducible component of $X_{{{\mathbb C}}}= X_{{\mathbb Z}}\otimes_{{\mathbb Z}}{{\mathbb C}}$ is a toric variety. The components of $X_{{{\mathbb C}}}$ are mutually isomorphic as toric varieties.*]{} Since every toric variety is the lift $X_{{{\mathbb C}}}$ of an ${{{\mathbb F}_1}}$-scheme $X$, the previous theorem entails that integral ${{{\mathbb F}_1}}$-schemes of finite type are essentially the same as toric varieties.
Now, semisimple algebraic groups are not toric varieties, so it is apparent that Deitmar’s ${{{\mathbb F}_1}}$-schemes are not flexible enough to implement Tits’s conjectural program. A possible generalization of Deitmar’s geometry over ${{{\mathbb F}_1}}$ was proposed by Olivier Lorscheid, who introduced the notions of “blueprint” and “blue scheme” [@Lor12]. The basic idea can be illustrated through the following example. The affine group scheme $(SL_{2})_ {{\mathbb Z}}$ over the integers is defined as $$(SL_{2})_ {{\mathbb Z}}= {\operatorname{Spec}}\bigl( {{\mathbb Z}}[T_1, T_2, T_3, T_4] / (T_1 T_4 - T_2 T_3 -1) \bigr)\,.$$ As the relation $T_1 T_4 - T_2 T_3 =1$ does not make sense in the monoid ${{{\mathbb F}_1}}[T_1, T_2, T_3, T_4]$, any naive attempt to adapt the previous definition to get a scheme over ${{{\mathbb F}_1}}$ will necessarily be unsuccessful. The notion of “blueprint” serves precisely the purpose of getting rid of this difficulty: [*A [*blueprint*]{} is a pair $B=(R, A)$, where $R$ is a semiring and $A$ is a multiplicative subset of $R$ containing $0$ and $1$ and generating $R$ as a semiring. A blueprint morphism $f \colon B_1=(R_1, A_1) \to B_2=(R_2, A_2)$ is a semiring morphism $f\colon R_1 \to R_2$ such that $f (A_1) \subset A_2$.*]{} The rationale behind this definition can be explained by considering the following situation: if one is given a monoid $A$ and some relation which does not make sense in $A$ but becomes meaningful in the semiring $A\otimes_{{{\mathbb F}_1}}{{\mathbb N}}$, then one can look at the blueprint $(A\otimes_{{{\mathbb F}_1}}{{\mathbb N}}, A)$. In the same vein as Deitmar’s approach, Lorscheid [@Lor12] associates to each blueprint $B$ its spectrum ${\operatorname{Spec}}B$, which turns out to be a locally blueprinted space (i.e. a topological space endowed with a sheaf of blueprints such that all stalks have a unique maximal ideal).
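To make the $SL_2$ example concrete: in the ${{\mathbb N}}$-semiring on the monomials in $T_1,\dots,T_4$ the determinant relation can be stated without subtraction as $T_1T_4 = T_2T_3 + 1$. A small sketch (our own illustration, not Lorscheid’s formalism) representing semiring elements as ${{\mathbb N}}$-linear combinations of commutative monomials:

```python
from collections import Counter

def monomial(*gens):
    """A commutative monomial: a sorted tuple of generator names."""
    return tuple(sorted(gens))

def add(f, g):
    """Addition in N[T1,...,T4]: merge coefficient multisets."""
    return Counter(f) + Counter(g)

def mul(f, g):
    """Multiplication: convolve, concatenating monomials."""
    h = Counter()
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            h[monomial(*m1, *m2)] += c1 * c2
    return h

# the SL_2 relation, meaningful over N though not in the bare monoid:
lhs = Counter({monomial('T1', 'T4'): 1})            # T1*T4
rhs = add(Counter({monomial('T2', 'T3'): 1}),       # T2*T3 + 1
          Counter({monomial(): 1}))
```

The blueprint is then the pair consisting of this semiring modulo the congruence generated by `lhs = rhs` together with its multiplicative subset of monomials, which is exactly the data that the monoid alone cannot express.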
An affine blue scheme is then defined as a locally blueprinted space that is isomorphic to the spectrum of a blueprint, and a blue scheme as a locally blueprinted space that has a covering by affine blue schemes. Deitmar’s schemes over ${{{\mathbb F}_1}}$ and classical schemes over ${{\mathbb Z}}$ are recovered as special cases of this definition.

About the present paper
-----------------------

A natural question arises: do blue schemes fit into Toën and Vaquié’s framework? This problem was addressed by Lorscheid himself in his 2017 paper [@Lor16] and answered in the negative. Nonetheless, it is possible — as already pointed out in [@Lor16] — to define a category of schemes (here called [*${{\mathsf B}}$-schemes*]{}) relative (in Toën and Vaquié’s sense) to the category of blueprints. Our first aim is to study these schemes by introducing the category of blueprints [*in a purely functorial way*]{}, as the category of monoid objects in a closed, complete and cocomplete symmetric monoidal category ${{\mathsf B}}$. There is a natural adjunction $\rho \dashv \sigma \colon {{\mathsf{Aff}}}_{{{\mathsf B}}} \to {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}$ between the category of affine ${{\mathsf B}}$-schemes and that of affine monoidal schemes. However, since the functor $\rho$ is not continuous w.r.t. the Zariski topology, this adjunction does not give rise to a geometric morphism between the corresponding categories of schemes. This hurdle may be sidestepped by introducing a larger category ${{\widetilde{{{\mathsf B}}}}}$ containing ${{\mathsf B}}$ and by considering the category of those schemes in ${{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}$ that admit a Zariski cover by affine ${{\mathsf B}}$-schemes. Such schemes, by a slight abuse of language, will be called [*${{\widetilde{{{\mathsf B}}}}}$-schemes*]{}.
It will be proved that the adjunction $\rho \dashv \sigma$ above induces an adjunction $\widehat\rho \dashv \widehat\sigma$ between the category of ${{\widetilde{{{\mathsf B}}}}}$-schemes and that of monoidal schemes. Moreover, it will be shown that every ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ generates a pair $(\underline\Sigma, \Sigma_{{\mathbb Z}})$, where $\underline\Sigma$ is a monoidal scheme and $\Sigma_{{\mathbb Z}}$ a classical scheme, together with a natural transformation $\Lambda\colon \Sigma_{{\mathbb Z}}\to \underline{\Sigma}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$. In more detail, the present paper is organized as follows. After briefly recalling in §  \[SectionTV\] the fundamental notions of “relative algebraic geometry” and fixing our notation,[^4] in § \[SectionBschemes\] we define the full subcategory ${{\mathsf B}}$ of the category ${{\mathbb N}}[{\,\text{-}\,}]/{{\mathsf{Mon}}}_0$ (where the functor ${{\mathbb N}}[{\,\text{-}\,}]\colon {{\mathsf{Set}}}_\ast \to {{\mathsf{Mon}}}_0$ is left adjoint to the forgetful functor $\vert{\,\text{-}\,}\vert$ from the category ${{\mathsf{Mon}}}_0$ of monoids with “absorbent object” to the category ${{\mathsf{Set}}}_\ast$ of pointed sets; see § \[Sectionnotation\]), whose objects $(X, {{\mathbb N}}[X] \to M)$ satisfy the conditions: $$\begin{array}{l} \text{a) the morphism ${{\mathbb N}}[X] \to M$ is an epimorphism;}\\ \text{b) the composition\ } X\to \vert {{\mathbb N}}[X] \vert \to \vert M\vert\ \text{is a monomorphism.} \end{array}$$ As proven in Theorem \[B-category\], the category ${{\mathsf B}}$ — which corresponds to the category of pointed sets endowed with a pre-addition structure introduced in [@Lor16 §4] — carries a natural structure of symmetric monoidal category. Moreover, this structure is closed, complete, and cocomplete. So, the category ${{\mathsf B}}$ possesses all the properties necessary to carry out Toën and Vaquié’s program.
It is quite straightforward to show (Proposition \[Bluep-category\]) that the category ${{\mathsf {Blp}}}$ of monoid objects in ${{\mathsf B}}$ coincides with the category of blueprints (this result was already stated, in equivalent terms, in [@Lor16 Lemma 4.1], but we provide a detailed and completely functorial proof). Thus, by applying Toën and Vaquié’s formalism to the category ${{\mathsf B}}$, we define [*the category ${{\mathsf{Aff}}}_{{\mathsf B}}= {{\mathsf {Blp}}}^{\text{op}}$ of affine ${{\mathsf B}}$-schemes*]{} and then [*the category ${{\mathsf{Sch}}}_{{\mathsf B}}$ of ${{\mathsf B}}$-schemes*]{}. The core of our paper is § \[sectionadjunctions\]. The natural adjunction between the category ${{\mathsf{Mon}}}_0$ and the category ${{\mathsf{Set}}}_\ast$ gives rise to an adjunction $\xymatrix{ {{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0} \ar[r]_{\vert{\,\text{-}\,}\vert} & {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \ar@/_1.1pc/[l]_{{\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}} } $ that factorizes as shown in the following diagram $$\label{adjunctiondiagram0} \xymatrix{ {{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0} \ar[r]^{\vert{\,\text{-}\,}\vert} \ar[d]^{G} & {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \ar@/_1.5pc/[l]_{{\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}} \ar[dl]^{\sigma} \\ {{\mathsf{Aff}}}_{{{\mathsf B}}} \ar@<1ex>[ur]^{\rho} \ar@/^0.8pc/[u]^{F} } $$ In Proposition \[Bschemesadjunctions\] it is proven that the functor $F$ in the diagram \[adjunctiondiagram0\] is continuous w.r.t. the Zariski topology and that the induced functor $\widehat{F}\colon {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf B}}}) \to {{\mathsf{Sh}}}({{{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0}})$ determines a functor $\widehat{F}\colon {{\mathsf{Sch}}}_{{{\mathsf B}}} \to {{\mathsf{Sch}}}_{{{\mathsf{Mon}}}_0}$ between the category of ${{\mathsf B}}$-schemes and that of semiring schemes. 
Similarly, in Proposition \[Bschemesadjunctions2\] it is shown that the functor $\sigma \colon {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Aff}}}_{{{\mathsf B}}}$ in the diagram \[adjunctiondiagram0\] is continuous w.r.t. the Zariski topology and that the induced functor $\widehat{\sigma} \colon {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}) \to {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf B}}})$ determines a functor $\widehat{\sigma}\colon {{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Sch}}}_{{{\mathsf B}}}$ between the category of monoidal schemes and that of ${{\mathsf B}}$-schemes. One would like the functor $\widehat\sigma$ to have a left adjoint determined by the functor $\rho\colon {{\mathsf{Aff}}}_{{{\mathsf B}}} \to {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}$ (see diagram \[adjunctiondiagram0\]). However, the functor $\rho$, although it preserves Zariski covers, does not commute with finite limits. This difficulty may be overcome by introducing the categories ${{\widetilde{{{\mathsf B}}}}}$ and ${{\widetilde{{{\mathsf {Blp}}}}}}$ containing, respectively, ${{\mathsf B}}$ and ${{\mathsf {Blp}}}$ (Definition \[def-nbluep\]), and by defining the category $\widetilde{{{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}}$ of [*${{\widetilde{{{\mathsf B}}}}}$-schemes*]{} as the subcategory of ${{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}$ whose objects admit a Zariski cover by affine schemes in ${{\mathsf{Aff}}}_{{\mathsf B}}$ (Definition \[def-nbschemes\]). So, a ${{\widetilde{{{\mathsf B}}}}}$-scheme is locally described by blueprints.
In this way, one shows (Theorem \[rhosigmaadjunction\]) that there is a geometric morphism $$\widehat\rho \dashv \widehat\sigma\colon \widetilde{{{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}} \to {{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast}\,.$$ It follows (see Definition \[defBscheme\] and the ensuing remarks) that each ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ determines the following geometric data:

- a monoidal scheme $\underline{\Sigma}= \widehat{\rho}(\Sigma)$;

- a scheme $\Sigma_{{\mathbb Z}}= \widehat{F}_{{\mathbb Z}}(\Sigma)$ over ${{\mathbb Z}}$;

- a natural transformation $\Lambda\colon \Sigma_{{\mathbb Z}}\to\underline{\Sigma}\circ |{\,\text{-}\,}|\cong\underline{\Sigma}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$.

In § \[SectionFinal2\], as an application of our approach, we investigate the relationship between ${{\widetilde{{{\mathsf B}}}}}$-schemes and ${{{\mathbb F}_1}}$-schemes in the sense of Alain Connes and Caterina Consani [@CC]. According to their definition [@CC Def. 4.7], an ${{{\mathbb F}_1}}$-scheme is a triple $(\underline{\Xi}, \Xi_{{\mathbb Z}}, \Phi)$, where $\underline{\Xi}$ is a monoidal scheme, $\Xi_{{\mathbb Z}}$ is a scheme over ${{\mathbb Z}}$, and $\Phi$ is a natural transformation $\underline{\Xi}\to \Xi_{{\mathbb Z}}\circ ({\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})$, such that the induced natural transformation $\underline{\Xi}\circ \vert{\,\text{-}\,}\vert \to \Xi_{{\mathbb Z}}$, when evaluated on fields, gives isomorphisms (of sets). Thus, the category of ${{\widetilde{{{\mathsf B}}}}}$-schemes and that of ${{{\mathbb F}_1}}$-schemes can be combined into a larger category, namely their fibered product over the category of monoidal schemes, whose objects will be called [*${{{\mathbb F}_1}}$-schemes with relations*]{} (Definition \[definitionF1schemewr\]).
In more explicit terms, a ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ determining the pair $(\underline\Sigma, \Sigma_{{\mathbb Z}})$ and an ${{{\mathbb F}_1}}$-scheme $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}', \Phi)$ will give rise to an ${{{{\mathbb F}_1}}}$-scheme with relations denoted by the quadruple $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$. The main motivation behind this notion is to combine in a single geometric object both the advantages of the blueprint approach and the benefits of Connes and Consani’s definition (cf. Remark \[finalremark\] for a fuller explanation). Each ${{{{\mathbb F}_1}}}$-scheme with relations $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$ (with a slight modification of our terminology, see Convention \[convention\]) determines a natural transformation $$\Psi_1 \colon \Sigma_{{\mathbb Z}}\to \Sigma_{{\mathbb Z}}'$$ and a natural transformation $$\Psi_2\colon \Sigma'_{{{\mathsf B}}}\to \Sigma'_{{\mathbb Z}}\,,$$ where $\Sigma'_{{{\mathsf B}}}$ is a certain pullback sheaf on the category ${{\mathsf{Ring}}}$ (defined by the diagram \[diagramdefiningSigma’Bcat\]). This implies that, given a ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ underlying an ${{{{\mathbb F}_1}}}$-scheme with relations, we can think of its “${\mathbb F}_{1^{q-1}}$-points” in two different senses, and therefore count them in two different ways, as stated in Proposition \[propfirsttransferringmap\] and in Theorem \[thmsecondftransferringmap\]. An interesting case is when the ${{{\mathbb F}_{1^n}}}$-points of the underlying monoidal scheme $\underline\Sigma$ are counted by a polynomial in $n$.
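Before stating the precise result, it may help to see the simplest instance of polynomial counting worked out. The following computation is our own illustration (it is not taken from [@CC]): the monoidal affine line.

```latex
% F_{1^n}-points of the monoidal affine line (our toy computation).
% In multiplicative notation, take
\underline\Sigma = \operatorname{Spec} {\mathbb F}_1[T],
\qquad {\mathbb F}_1[T] = \{0\}\cup\{T^i\}_{i\in{\mathbb N}}.
% Two points: the generic point \eta = \{0\} and the closed point (T), with
\mathcal O_{\eta}^\times = \{T^i\}_{i\in{\mathbb Z}}\cong {\mathbb Z},
\qquad \mathcal O_{(T)}^\times = \{1\}.
% Since {\mathbb F}_{1^n}^\times = \mu_n has order n, the count is
P(\underline\Sigma, n)
 = \#\operatorname{Hom}({\mathbb Z},\mu_n) + \#\operatorname{Hom}(\{1\},\mu_n)
 = n + 1 = \#{\mathbb A}^1({\mathbb F}_{1^n}),
% a polynomial in n, in accordance with the statement above.
```

The answer $n+1$ is just the cardinality of ${{{\mathbb F}_{1^n}}}$ itself, as one would expect for the affine line.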
Theorem 4.10 of [@CC] shows that, if $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}', \Phi)$ is an ${{{\mathbb F}_1}}$-scheme such that the monoidal scheme $\underline{\Sigma}$ is noetherian and torsion-free, then $\#\underline\Sigma({{{\mathbb F}_{1^n}}}) = P(\underline\Sigma, n)$, where $$P(\underline\Sigma, n) = \sum_{x\in {\underline\Sigma}} \#{\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times, {{{\mathbb F}_{1^n}}})\,.$$ For an ${{{\mathbb F}_1}}$-scheme with relations $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$ such that the underlying ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ is noetherian and torsion-free (Definition \[definitionBschemenoethtorsionfree\]), we introduce the polynomial $$Q(\underline\Sigma, n) = \sum_{x\in \underline\Sigma}\# {\operatorname{Hom}}_{{{\mathsf B}}} ({{\mathcal O}}_{\Sigma, x}^\times, {{{\mathbb F}_{1^n}}})\,,$$ and prove (Proposition \[finalproposition\]) that $Q(\underline\Sigma, n) \leq P(\underline\Sigma, n)$. Finally, we would like to emphasize that our approach to blueprints, being entirely functorial, seems well suited to carrying out a “derived version” of the category of ${{\mathsf B}}$-schemes. In fact, in quite general terms, a definition of “derived ${{\mathsf B}}$-scheme” could be obtained by replacing, in our definition of ${{\mathsf B}}$-scheme, the category ${{\mathsf{Set}}}$ (resp. ${{\mathsf{Set}}}_\ast$) by the category $\mathsf S$ of spaces (resp. $\mathsf S_\ast$ of pointed spaces) and the notion of monoid object by that of $\mathbb E_\infty$-algebra. This issue will be the object of future work. [**Acknowledgments.**]{} We would like to thank an anonymous referee for pointing out a couple of mistakes in a previous version of this paper and for making helpful remarks.
The general setting {#SectionTV} =================== Schemes over a monoidal category {#relativeschemes} -------------------------------- For the reader’s convenience, we start by giving a quick résumé of some of the basic constructions of the “relative algebraic geometry” developed in [@TV §2]. Let ${{\mathsf C}}=({{\mathsf C}},\otimes, \mathbf{1})$ be a symmetric monoidal category ($\mathbf{1}$ is the unit object), and denote by ${{\mathsf{CMon}}}_{{\mathsf C}}$ the category of commutative (associative and unitary) monoid objects in ${{\mathsf C}}$. We assume that ${{\mathsf C}}$ is [*complete, cocomplete, and closed*]{} (i.e., for every pair of objects $X$, $Y$, the contravariant functor ${\operatorname{Hom}}_{{{\mathsf C}}}({\,\text{-}\,}\otimes X, Y)$ is represented by an “internal hom” object ${\underline{{\operatorname{Hom}}}}(X,Y)$). The assumptions on ${{\mathsf C}}$ imply, in particular, that the forgetful functor $$\vert {\,\text{-}\,}\vert\colon {{\mathsf{CMon}}}_{{\mathsf C}}\to {{\mathsf C}}$$ admits a left adjoint $$\label{generalleftadjoint} L\colon {{\mathsf C}}\to {{\mathsf{CMon}}}_{{\mathsf C}}\,,$$ which maps an object $X$ to the free commutative monoid object $L(X)$ generated by $X$. For each commutative monoid $V$ in ${{\mathsf{CMon}}}_{{\mathsf C}}$ one may introduce the notion of $V$-module (cf. [@Jo p. 478]). The category $V{\,\text{-}\,}{{\mathsf{Mod}}}$ of such objects has a natural symmetric monoidal structure given by the “tensor product” $\otimes_V$; this structure turns out to be closed. Given a morphism $V \to W$ in ${{\mathsf{CMon}}}_{{\mathsf C}}$, there is a change of basis functor $${\,\text{-}\,}\otimes_V W \colon V{\,\text{-}\,}{{\mathsf{Mod}}}\to W{\,\text{-}\,}{{\mathsf{Mod}}}\,,$$ whose right adjoint is the forgetful functor $W{\,\text{-}\,}{{\mathsf{Mod}}}\to V{\,\text{-}\,}{{\mathsf{Mod}}}$. Note that the category of commutative monoids in $V{\,\text{-}\,}{{\mathsf{Mod}}}$ — i.e.
the category of [*commutative $V$-algebras*]{} — is naturally equivalent to the category $V/{{{\mathsf{CMon}}}_{{\mathsf C}}}$. The category ${{\mathsf{Aff}}}_{{\mathsf C}}$ of [*affine schemes over ${{\mathsf C}}$*]{} is, by definition, the category ${{\mathsf{CMon}}}^{\text{op}}_{{\mathsf C}}$. Given an object $V$ in ${{\mathsf{CMon}}}_{{\mathsf C}}$ the corresponding object in ${{\mathsf{Aff}}}_{{\mathsf C}}$ will be denoted by ${\operatorname{Spec}}V$. To define, in full generality, the category of schemes over ${{\mathsf C}}$ one follows the standard procedure of glueing together affine schemes. To this end, one first endows ${{\mathsf{Aff}}}_{{\mathsf C}}$ with a suitable Grothendieck topology. Let us recall the general definition. \[topology\] Let ${{\mathsf G}}$ be any category. A [*Grothendieck topology*]{} on ${{\mathsf G}}$ is the assignment to each object $U$ of ${{\mathsf G}}$ of a collection of sets of arrows $\{U_{i} \rightarrow U \}$, called [*coverings of*]{} $U$, so that the following conditions are satisfied:

1. if $V \rightarrow U$ is an isomorphism, then the set $\{ V\rightarrow U\}$ is a covering;

2. if $\{U_{i} \rightarrow U\}$ is a covering and $V\rightarrow U$ is any arrow, then the fibered products $\{U_{i}\times_{U}V\}$ exist and the collection of projections $\{U_{i}\times_{U}V\rightarrow V \}$ is a covering;

3. if $\{ U_{i} \rightarrow U\}$ is a covering and for each index $i$ there is a covering $\{ V_{ij} \rightarrow U_{i}\}$ (where $j$ varies in a set depending on $i$), then the collection of composites $\{ V_{ij} \rightarrow U_{i}\rightarrow U\}_{i,j}$ is a covering of $U$.

A category with a Grothendieck topology is called a [*site*]{}. As it is clear from the definition above, a Grothendieck topology on a category ${{\mathsf G}}$ is introduced with the aim of glueing locally defined objects, and what really matters is therefore the notion of covering.
So, in spite of its name, a Grothendieck topology could better be thought of as a generalization of the notion of covering rather than of the notion of topology (notice, for example, that, though the maps $U_i\to U$ in a covering can be seen as a generalization of open inclusions $U_i\subset U$, no condition generalizing the topological requirement about unions of open subsets is prescribed). Given a site ${{\mathsf G}}$ and a covering $\mathcal{U}=\{ U_i\to U\}_{i\in I}$, we denote by $h_U$ the presheaf represented by $U$ and by $h_\mathcal{U}\subset h_U$ the subpresheaf of those maps that factorise through some element of $\mathcal{U}$. \[sheaf\] Let ${{\mathsf G}}$ be a site. A presheaf $F\colon{{\mathsf G}}^{\text{\rm op}} \to{{\mathsf{Set}}}$ is said to be a sheaf if, for every covering $\mathcal{U}=\{ U_i\to U\}_{i\in I}$, the restriction map ${\operatorname{Hom}}(h_U,F)\to{\operatorname{Hom}}(h_\mathcal{U},F)$ is an isomorphism. Coming back to our symmetric monoidal category ${{\mathsf C}}$, the associated category of affine schemes ${{\mathsf{Aff}}}_{{\mathsf C}}$ can be equipped with two different Grothendieck topologies by means of the following ingenious definitions (which, of course, generalize the corresponding usual definitions in “classical” algebraic geometry). One says [@TV Def.
2.9, 1), 2), 3)] that a morphism $f\colon {\operatorname{Spec}}W \to {\operatorname{Spec}}V$ in ${{\mathsf{Aff}}}_{{\mathsf C}}$ is

- [*flat*]{} if the functor ${\,\text{-}\,}\otimes_V W \colon V{\,\text{-}\,}{{\mathsf{Mod}}}\to W{\,\text{-}\,}{{\mathsf{Mod}}}$ is exact;

- [*an epimorphism*]{} if, for any $Z$ in ${{\mathsf{CMon}}}_{{\mathsf C}}$, the map $$f^\ast\colon {\operatorname{Hom}}_ {{{\mathsf{CMon}}}_{{\mathsf C}}}(W, Z) \to {\operatorname{Hom}}_ {{{\mathsf{CMon}}}_{{\mathsf C}}}(V, Z)$$ is injective;

- [*of finite presentation*]{} if, for any filtrant diagram $\{Z_i\}_{i\in I}$ in $V/{{{\mathsf{CMon}}}_{{\mathsf C}}}$, the natural morphism $${\underrightarrow{\mathrm{lim}}}{\operatorname{Hom}}_{V/{{{\mathsf{CMon}}}_{{\mathsf C}}}}(W, Z_i) \to {\operatorname{Hom}}_{V/{{{\mathsf{CMon}}}_{{\mathsf C}}}}(W, {\underrightarrow{\mathrm{lim}}}Z_i)$$ is an isomorphism.

\[definitionsofcovers\] [@TV Def. 2.9, 4); Def. 2.10] a) A collection of morphisms $$\{ f_j\colon {\operatorname{Spec}}W_j \to {\operatorname{Spec}}V\}_{j\in J}$$ in ${{\mathsf{Aff}}}_{{\mathsf C}}$ is a flat cover if

1. each morphism $f_j\colon {\operatorname{Spec}}W_j \to {\operatorname{Spec}}V$ is flat and

2. there exists a finite subset of indices $J'\subset J$ such that the functor $$\prod_{j\in J'} {\,\text{-}\,}\otimes_V W_j \colon V{\,\text{-}\,}{{\mathsf{Mod}}}\to \prod_{j\in J'}W_j {\,\text{-}\,}{{\mathsf{Mod}}}$$ is conservative.

\(b) A morphism $f\colon {\operatorname{Spec}}W \to {\operatorname{Spec}}V$ in ${{\mathsf{Aff}}}_{{\mathsf C}}$ is an open Zariski immersion if it is a flat epimorphism of finite presentation. \(c) A collection of morphisms $\{ f_j\colon {\operatorname{Spec}}W_j \to {\operatorname{Spec}}V\}_{j\in J}$ in ${{\mathsf{Aff}}}_{{\mathsf C}}$ is a Zariski cover if it is a flat cover and each $f_j\colon {\operatorname{Spec}}W_j \to {\operatorname{Spec}}V$ is an open Zariski immersion.
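To see what these notions amount to in a familiar setting, take ${{\mathsf C}}={{\mathsf{Ab}}}$, so that ${{\mathsf{CMon}}}_{{\mathsf C}}={{\mathsf{Ring}}}$. The following is a standard fact from classical algebraic geometry, recalled here as a sanity check rather than proved in this paper:

```latex
% Classical case C = Ab. For a ring A and f \in A, the localization
% A \to A_f is flat, a ring epimorphism, and of finite presentation
% (A_f \cong A[x]/(xf-1)), so
\operatorname{Spec} A_f \to \operatorname{Spec} A
% is an open Zariski immersion. If f_1,\dots,f_k generate the unit ideal,
% then the functor
\prod_{j=1}^{k} {-}\otimes_A A_{f_j} \colon
  A\text{-}\mathsf{Mod} \to \prod_{j=1}^{k} A_{f_j}\text{-}\mathsf{Mod}
% is conservative (a module M vanishes iff every localization M_{f_j} does),
% so the family \{\operatorname{Spec} A_{f_j} \to \operatorname{Spec} A\}
% is a Zariski cover in the sense of the definition above.
```

This recovers the usual standard open covers of an affine scheme.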
\[mainremarkonTV\] The previous definition is actually a particular case of a more general construction. Indeed, as shown in [@TV], to define a topology on a complete and cocomplete category ${{\mathsf D}}$ it is enough to assign a pseudo-functor $M\colon {{\mathsf D}}^{\text{op}} \to \mathsf{Cat}$ satisfying the following conditions:

1. for each morphism $q\colon X \to Y$ in ${{\mathsf D}}$, the functor $M(q) = q^\ast\colon M(Y) \to M(X)$ has a right adjoint $q_\ast \colon M(X) \to M(Y)$ which is conservative;

2. for each Cartesian diagram $$\xymatrix{ X' \ar[d]_r \ar[r]^{q'} & Y'\ar[d]^{r'} \\ X \ar[r]_q & Y}$$ in ${{\mathsf D}}$, the natural transformation $ q^\ast r'_\ast \Longrightarrow r_\ast q'^\ast$ is an isomorphism.

In terms of such a functor one can define the notion of $M$-faithfully flat cover [@TV Def. 2.3] and the associated pretopology [@TV Prop. 2.4], which induces a topology on ${{\mathsf D}}$. In the classical theory of schemes, ${{\mathsf D}}$ is the category ${{\mathsf{Ring}}}^{\text{op}}$ of affine schemes and, for each $X={\operatorname{Spec}}A$, $M(A)$ is the category of quasi-coherent sheaves on $X$. When starting with a monoidal category ${{\mathsf C}}$ satisfying our assumptions, ${{\mathsf D}}$ is the category ${{\mathsf{Aff}}}_{{\mathsf C}}$ and the pseudo-functor $M$ maps an object $V$ in ${{\mathsf{CMon}}}_{{\mathsf C}}$ to the category of $V$-modules and a morphism ${\operatorname{Spec}}W \to {\operatorname{Spec}}V$ to the functor ${\,\text{-}\,}\otimes_V W\colon V{\,\text{-}\,}{{\mathsf{Mod}}}\to W{\,\text{-}\,}{{\mathsf{Mod}}}$. What we have called “flat covers” correspond to Toën-Vaquié’s “$M$-faithfully flat covers” (cf. [@TV Def. 2.8, Def. 2.10]).\ When ${{\mathsf D}}$ is endowed with a topology, a natural question that arises is how the pseudo-functor $M$ behaves with respect to it. It can be proven ([@TV Th. 2.5]) that $M$ is a stack with respect to that topology.
For the reader’s convenience, we review the notion of a stack in the Appendix. By making use of flat covers and Zariski covers introduced in Definition \[definitionsofcovers\] we may equip the category ${{\mathsf{Aff}}}_{{\mathsf C}}$ with two distinct Grothendieck topologies, called, respectively, the [*flat*]{} and the [*Zariski*]{} topology. Correspondingly, there are two categories of sheaves on ${{\mathsf{Aff}}}_{{\mathsf C}}$, namely $${{\mathsf{Sh}}}^{\text{flat}}({{\mathsf{Aff}}}_{{\mathsf C}}) \subset {{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}})\subset {{\mathsf{Presh}}}({{\mathsf{Aff}}}_{{\mathsf C}})\,.$$ Notice that, for each affine scheme $\Xi$, the presheaf $Y(\Xi)$ given by the Yoneda embedding $Y({\,\text{-}\,})\colon {{\mathsf{Aff}}}_{{\mathsf C}}\to {{\mathsf{Presh}}}({{\mathsf{Aff}}}_{{\mathsf C}})$ is actually a sheaf in ${{\mathsf{Sh}}}^{\text{flat}}({{\mathsf{Aff}}}_{{\mathsf C}}) \subset {{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}})$ [@TV Cor. 2.11, 1)]; this sheaf will be denoted again by $\Xi$. The next and final step is to define the category of schemes over the category ${{\mathsf C}}$. We first have to introduce the notion of affine Zariski cover in the category ${{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}})$. \[definitionsofcoversforsheaves\] [@TV Def. 2.12] a) Let $\Xi$ be an affine scheme in ${{\mathsf{Aff}}}_{{\mathsf C}}$. A subsheaf ${{\mathcal F}}\subset \Xi$ is said to be a Zariski open of $\Xi$ if there exists a collection of open Zariski immersions $\{\Xi_i \to \Xi\}_{i\in I}$ such that ${{\mathcal F}}$ is the image of the sheaf morphism $\coprod_{i\in I} \Xi_i \to \Xi$. 
\(b) A morphism ${{\mathcal F}}\to {{\mathcal G}}$ in ${{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}})$ is said to be an open Zariski immersion if, for any affine scheme $\Xi$ and any sheaf morphism $\Xi \to {{\mathcal G}}$, the induced morphism ${{\mathcal F}}\times_{{{\mathcal G}}} \Xi \to \Xi$ is a monomorphism whose image is a Zariski open of $\Xi$. \(c) Let ${{\mathcal F}}$ be a sheaf in ${{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}})$. A collection of open Zariski immersions $\{\Xi_i \to {{\mathcal F}}\}_{i \in I}$, where each $\Xi_i$ is an affine scheme in ${{\mathsf{Aff}}}_{{\mathsf C}}$, is said to be an affine Zariski cover of ${{\mathcal F}}$ if the resulting morphism $$\coprod_{i\in I} \Xi_i \to {{\mathcal F}}$$ is a sheaf epimorphism. It should be noted that, in the case of affine schemes over ${{\mathsf C}}$, the definition of open Zariski immersion in Definition  \[definitionsofcoversforsheaves\], (b) does coincide with that previously introduced in Definition  \[definitionsofcovers\], (b) [@TV Lemma 2.14]. \[defschemeoverC\] A scheme over the category ${{\mathsf C}}$ is a sheaf ${{\mathcal F}}$ in ${{\mathsf{Sh}}}^{\text{Zar}}({{\mathsf{Aff}}}_{{\mathsf C}})$ that admits an affine Zariski cover. The category of schemes over ${{\mathsf C}}$ will be denoted by ${{\mathsf{Sch}}}_{{\mathsf C}}$. Notation and examples {#Sectionnotation} --------------------- Primarily for the purpose of fixing our notational conventions, we now briefly describe the basic examples of symmetric monoidal categories we shall work with in the sequel of the present paper.  The category ${{\mathsf{Set}}}$ of sets can be endowed with a monoidal product given by the Cartesian product. Then $(\mathsf{Set}, \times, \ast)$ is a symmetric monoidal category and ${{\mathsf{CMon}}}_{{\mathsf{Set}}}= {{\mathsf{Mon}}}$ is the usual category of commutative, associative and unitary monoids.
The category ${{\mathsf{Set}}}_\ast$ of pointed sets can be endowed with a monoidal product given by the smash product $\wedge$; in this case, the unit object is the pointed set $\mathbb{S}^0$ consisting of two elements. Then $({{\mathsf{Set}}}_\ast, \wedge, \mathbb{S}^0)$ is a symmetric monoidal category and ${{\mathsf{CMon}}}_{{{\mathsf{Set}}}_\ast} = {{\mathsf{Mon}}}_0$ is the category of commutative, associative and unitary monoids with “absorbent object” (such an object will be denoted by $0$ in multiplicative notation and by $-\infty$ in additive notation).  The category ${{\mathsf{Mon}}}$ can be endowed with a monoidal product $\otimes$ defined in the following way: $R\otimes R'$ is the quotient of the product $R\times R'$ by the relation $\mathcal{\sim}$ such that $(nr,r')\sim (r,nr')$ for each $(n,r,r')\in\mathbb{N}\times R\times R'$. Clearly, the unit object is the additive monoid $({{\mathbb N}}, +)$. Then $({{\mathsf{Mon}}}, \otimes, {{\mathbb N}})$ is a symmetric monoidal category and ${{\mathsf{CMon}}}_{{\mathsf{Mon}}}= {{\mathsf{SRing}}}$ is the category of commutative, associative and unitary semirings.  The category ${{\mathsf{Ab}}}= {{\mathbb Z}}{\,\text{-}\,}{{\mathsf{Mod}}}$ of Abelian groups can be endowed with a monoidal product $\otimes_{{\mathbb Z}}$ given by the usual tensor product of ${{\mathbb Z}}$-modules. Then $({{\mathsf{Ab}}}, \otimes_{{\mathbb Z}}, {{\mathbb Z}})$ is a symmetric monoidal category and ${{\mathsf{CMon}}}_{{\mathsf{Ab}}}= {{\mathsf{Ring}}}$ is the category of commutative, associative and unitary rings. For the functor $L\colon {{\mathsf C}}\to{{\mathsf{CMon}}}_{{\mathsf C}}$ defined in eq. 
\[generalleftadjoint\] as left adjoint to the forgetful functor $\vert {\,\text{-}\,}\vert\colon {{\mathsf{CMon}}}_{{\mathsf C}}\to {{\mathsf C}}$ we shall adopt the following special conventions:   if ${{\mathsf C}}= {{\mathsf{Set}}}$, $L$ will be denoted by $${{\mathbb N}}[{\,\text{-}\,}] \colon {{\mathsf{Set}}}\to {{\mathsf{Mon}}}\,;$$   if ${{\mathsf C}}= {{\mathsf{Mon}}}$, $L$ will be denoted by $${\,\text{-}\,}\otimes_{{\mathbb U}} {{\mathbb N}}\colon {{\mathsf{Mon}}}\to {{\mathsf{SRing}}}\,,$$ where ${\mathbb U}$ is the monoid consisting of just one element (the notation being motivated by the identity ${\mathbb U} \otimes_{{\mathbb U}} {{\mathbb N}}= {{\mathbb N}}$);   if ${{\mathsf C}}= {{\mathsf{Mon}}}_0$, $L$ will be denoted by $$\label{adjointMon_0}{\,\text{-}\,}\otimes_{{{{\mathbb F}_1}}} {{\mathbb N}}\colon {{\mathsf{Mon}}}_0 \to {{\mathsf{SRing}}}\,,$$ where ${{{\mathbb F}_1}}$ is the object of ${{\mathsf{Mon}}}_0$ consisting of two elements, namely ${{{\mathbb F}_1}}=\{0,1\}$ in multiplicative notation (also in this case, the notation is motivated by the identity ${{{\mathbb F}_1}}\otimes_{{{{\mathbb F}_1}}} {{\mathbb N}}= {{\mathbb N}}$);   if ${{\mathsf C}}= {{\mathsf{Ab}}}$, $L$ will be denoted by $${{\mathbb Z}}[{\,\text{-}\,}] \colon {{\mathsf{Ab}}}\to {{\mathsf{Ring}}}\,.$$ All symmetric monoidal categories ${{\mathsf{Set}}}$, ${{\mathsf{Set}}}_\ast$, ${{\mathsf{Mon}}}$, ${{\mathsf{Mon}}}_0$, ${{\mathsf{Ab}}}$ described above are complete, cocomplete, and closed, so we can apply the machinery of Toën-Vaquié’s theory illustrated in Subsection \[relativeschemes\] and define, for each of these categories, the corresponding category of schemes over it. In this way, when ${{\mathsf C}}= {{\mathsf{Ab}}}$, one unsurprisingly recovers the usual notion of [*classical scheme*]{}. A more intriguing example is provided by the case of ${{\mathsf C}}= {{\mathsf{Set}}}$.
[**Monoidal schemes**]{} An object of the category ${{\mathsf{Sch}}}_{{\mathsf{Set}}}$ is a “scheme over ${{{\mathbb F}_1}}$” in the sense of [@Dei]. The equivalence between the two definitions was proved in [@Vezz]. We recall that, if $M$ is a commutative monoid, its “spectrum over ${{{\mathbb F}_1}}$” ${\operatorname{Spec}}M$ can be realized as the set of prime ideals of $M$ and given a topological space structure. In the present paper we shall call an object in ${{\mathsf{Sch}}}_{{\mathsf{Set}}}$ a [*monoidal scheme*]{} and use the name of “${{{\mathbb F}_1}}$-scheme” for a different kind of algebro-geometric structure (see Definition  \[CCschemes\]). The category of blueprints {#SectionBschemes} ========================== The notion of [*blueprint*]{} was introduced by Olivier Lorscheid in his 2012 paper [@Lor12]. \[def-blueprints\] A [*blueprint*]{} is a pair $B=(R, A)$, where $R$ is a semiring and $A$ is a multiplicative subset of $R$ containing $0$ and $1$ and generating $R$ as a semiring. A blueprint morphism $f \colon B_1=(R_1, A_1) \to B_2=(R_2, A_2)$ is a semiring morphism $f\colon R_1 \to R_2$ such that $f (A_1) \subset A_2$. Notice that, given a blueprint morphism $f \colon B_1=(R_1, A_1) \to B_2=(R_2, A_2)$, its restriction $f\vert_{A_1}\colon A_1 \to A_2$ is a monoid morphism that uniquely determines $f$ on the whole of $R_1$. The idea underlying the notion of blueprint can be illustrated as follows. Some equivalence relations that do not make sense in a monoid $A$ may be expressed in the semiring $A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$. Now, any equivalence relation $\mathcal{R}$ on a semiring $S$ induces a projection $S\to S/\mathcal{R}$ and can indeed be recovered from such a map.
So, the assignment of a pair $(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ is to be interpreted as the datum of a monoid $A$ plus the relation on $A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ given by the epimorphism $A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R$. \[example1\] Consider the monoid $A_T =\mathbb{N}\cup\{ -\infty\}$ (in additive notation, corresponding to $\{ T^i\}_{i\in{{\mathbb N}}\cup\{ -\infty\}}$ in multiplicative notation) and the corresponding free semiring $A_T{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ of polynomials in $T$ with coefficients in ${{\mathbb N}}$ (the functor ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ has been introduced in eq. \[adjointMon\_0\]). Notice that ${\operatorname{Spec}}A_T$ has two points, namely the prime ideals $\{ -\infty\}$ and $({{\mathbb N}}\setminus \{0\})\cup \{ -\infty\}$, which embed in ${\operatorname{Spec}}A_T{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ (we are loosely thinking of ${\operatorname{Spec}}A_T{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ as the underlying topological space).\ Now, if one takes a closed subset of ${\operatorname{Spec}}A_T{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ and intersects it with ${\operatorname{Spec}}A_T$, one could naively think that the intersection is nonempty only when the chosen closed subset is defined by some relation in $A_T$. However, this is not the case: for instance, the relation $2T=1$, which makes the ideal $(T)$ trivial, cannot be expressed in the monoid $A_T$. According to Lorscheid’s idea, one can represent this affine “monoidal scheme” by considering the pair $(A_T, A_T{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to A_T{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/(2T=1))$. The category of blueprints can be given a handier description, which makes it easier to characterise it as the category of commutative monoids in a suitable symmetric monoidal category.
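The two-point spectrum appearing in Example \[example1\] can also be checked mechanically on a finite quotient. The following sketch is our own illustration (the finite monoid $\{0,1,t,t^2\}$ with $t^3=t$ and all function names are ours, not from the paper); it brute-forces the prime ideals, i.e. the proper ideals $\mathfrak p$ such that $xy\in\mathfrak p$ implies $x\in\mathfrak p$ or $y\in\mathfrak p$:

```python
from itertools import chain, combinations

# A finite stand-in for the pointed monoid A_T of Example [example1]:
# M = {0, 1, t, t^2} with the relation t^3 = t (so M has no nilpotents).
elements = ["0", "1", "t", "t2"]

def mul(x, y):
    """Multiplication in M; exponents add and reduce via t^3 = t."""
    if x == "0" or y == "0":
        return "0"
    if x == "1":
        return y
    if y == "1":
        return x
    exp = {"t": 1, "t2": 2}[x] + {"t": 1, "t2": 2}[y]
    while exp > 2:          # t^3 = t means exponents above 2 drop by 2
        exp -= 2
    return "t" if exp == 1 else "t2"

def is_ideal(p):
    # p is an ideal when M * p is contained in p
    return all(mul(m, x) in p for m in elements for x in p)

def is_prime(p):
    # proper (1 not in p) ideal such that xy in p forces x in p or y in p
    if "1" in p or not is_ideal(p):
        return False
    return all(mul(x, y) not in p or x in p or y in p
               for x in elements for y in elements)

def primes():
    subsets = chain.from_iterable(
        combinations(elements, r) for r in range(1, len(elements) + 1))
    return [set(s) for s in subsets if is_prime(set(s))]

print(primes())
```

The enumeration finds exactly two primes, $\{0\}$ and $\{0,t,t^2\}$, mirroring the minimal prime and the maximal ideal of $A_T$ described in Example \[example1\].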
Let us consider the functor ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\colon {{\mathsf{Mon}}}_0 \to {{\mathsf{SRing}}}$ (introduced in eq. \[adjointMon\_0\]). \[def-bluep\] The category ${{\mathsf {Blp}}}$ is the full subcategory of ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$ whose objects $(A, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ satisfy the conditions: $$\label{Blueprintsconditions} \begin{array}{l} \text{a) the morphism $A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R$ is an epimorphism;}\\ \text{b) the composition\ } A\to \vert A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\vert \to \vert R\vert \text{ is a monomorphism} \\ \text{\phantom{b)}(the first map being the unit of the adjunction).} \end{array}$$ It is immediate that the category ${{\mathsf {Blp}}}$ is equivalent to the category of blueprints introduced in Definition \[def-blueprints\]. Consider now the forgetful functor $\vert {\,\text{-}\,}\vert \colon {{\mathsf{Mon}}}_0 \to {{\mathsf{Set}}}_\ast$; for each monoid $M$ with absorbent object $0$ (in multiplicative notation), the base point of the associated set $\vert M\vert$ is clearly the element corresponding to $0$. Its adjoint functor is the functor $${{\mathbb N}}[{\,\text{-}\,}] \colon {{\mathsf{Set}}}_\ast \to{{\mathsf{Mon}}}_0\,.$$ We can now form the full subcategory ${{\mathsf B}}$ of ${{\mathbb N}}[{\,\text{-}\,}] /{{\mathsf{Mon}}}_0$ whose objects $(X, {{\mathbb N}}[X] \to M)$ are described by conditions formally identical to those in eq. \[Blueprintsconditions\]: $$\label{Bconditions} \begin{array}{l} \text{a) the morphism ${{\mathbb N}}[X] \to M$ is an epimorphism;}\\ \text{b) the composition\ } X\to \vert {{\mathbb N}}[X] \vert \to \vert M\vert\ \text{is a monomorphism.} \end{array}$$ The category ${{\mathsf B}}$ above corresponds to the category of pointed sets endowed with a pre-addition structure, as described in [@Lor16 §4].
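A minimal concrete instance of Definition \[def-bluep\] may be useful here. The Boolean semifield is a standard example in the blueprint literature; the unpacking below is our own:

```latex
% The Boolean semifield B = N/(1+1=1) = {0,1} seen as an object of Blp.
% Take A = F_1 = {0,1}; then A \otimes_{F_1} N = N, and the structure map is
{{\mathbb N}} \twoheadrightarrow {\mathbb B}, \qquad n \mapsto \min(n,1).
% Condition a): this map is surjective, hence an epimorphism of semirings.
% Condition b): the composition
\{0,1\} \to \vert {{\mathbb N}} \vert \to \vert {\mathbb B} \vert
% restricts to the identity on {0,1}, hence is a monomorphism.
% As a blueprint in the sense of Definition [def-blueprints], this is the
% pair ({\mathbb B}, \{0,1\}).
```

Here the relation $1+1=1$, invisible in the monoid $\{0,1\}$, is exactly the kind of datum the semiring component of a blueprint is meant to record.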
\[B-category\] The category ${{\mathsf B}}$ carries a natural structure of symmetric monoidal category. Moreover, this structure is closed, complete, and cocomplete. In the category ${{\mathsf B}}$ there is a natural symmetric monoidal product given by $$\label{monoidalproductB} (X, {{\mathbb N}}[X] \to M)\otimes (X', {{\mathbb N}}[X'] \to M')=(X\wedge X', {{\mathbb N}}[X\wedge X'] \to M\otimes M')\,,$$ where the map $ {{\mathbb N}}[X\wedge X'] \to M\otimes M'$ is the composition $${{\mathbb N}}[X\wedge X'] \to {{\mathbb N}}[X] \otimes {{\mathbb N}}[X'] \to M\otimes M'\,;$$ the first morphism maps $n(x,x')$ to $n x\otimes x'$ and is an isomorphism (in other words, the functor ${{\mathbb N}}[{\,\text{-}\,}]$ is monoidal). Since $M\otimes M'$ is generated as a monoid by elements of the form $x\otimes x'$, and since the two maps ${{\mathbb N}}[X] \to M$ and ${{\mathbb N}}[X'] \to M'$ are surjective, the map ${{\mathbb N}}[X\wedge X'] \to M\otimes M'$ is also surjective. Moreover, by the definition of tensor product in the category ${{\mathsf{Mon}}}$, for any $x,y\in X\setminus\{\ast\}$ and $x',y'\in X'\setminus\{\ast\}$ one has $x\otimes x'=y\otimes y'$ if and only if $(x,x')=(y,y')$, so that the map $$X\wedge X'\to |M\otimes M'|$$ is a monomorphism. Conditions \[Bconditions\] are therefore satisfied. We now show that the monoidal category ${{\mathsf B}}$ is closed. Let us define the internal hom functor by setting $$\label{internalhomB} {\underline{{\operatorname{Hom}}}}( (X, {{\mathbb N}}[X] \to M), (Y, {{\mathbb N}}[Y] \to N)) = (Y^X \times_{|N|^X} | N^{M}|, {{\mathbb N}}[Y^X \times_{|N|^X} | N^{M}|] \to\widetilde{N^M})\,,$$ where $\widetilde{N^M}$ is the image of the map $${{\mathbb N}}[Y^X \times_{|N|^X} | N^{M}|]\to {{\mathbb N}}[|N^{M}|] \to N^M$$ (the second map above is the counit of the adjunction). Let us check the adjunction property. 
For each map $$\label{Bmap} (X,{{\mathbb N}}[X]\to M)\otimes (Y,{{\mathbb N}}[Y]\to N)=(X\wedge Y, {{\mathbb N}}[X\wedge Y]\to M\otimes N)\to (Z,{{\mathbb N}}[Z]\to L)\,,$$ the first component corresponds, by the exponential law in ${{\mathsf{Set}}}_\ast$, to a map $X\to Z^Y$, while the second component is given by a commutative square $$\label{diagraminternalHom} \xymatrix{ {{\mathbb N}}[X\wedge Y] \ar[r] \ar[d] & {{\mathbb N}}[Z] \ar[d] \\ M\otimes N \ar[r] & L }$$ where the arrow on the left is the product map ${{\mathbb N}}[X]\otimes {{\mathbb N}}[Y]\to M\otimes N$ and the top arrow is the image of the map in the first component through the functor ${{\mathbb N}}[{\,\text{-}\,}]$. By using the property that ${{\mathbb N}}[{\,\text{-}\,}]$ is the left adjoint to the forgetful functor and by noticing that the bottom arrow in \[diagraminternalHom\] corresponds to a map $M\to L^N$, it is immediate that assigning the commutative diagram \[diagraminternalHom\] is equivalent to assigning the two commutative diagrams $$\xymatrix{ X \ar[r] \ar[dr] & Z^Y \ar[d] \\ & |L|^Y } \qquad \quad \xymatrix{ X \ar[d] \ar[dr] & \\ |M| \ar[r] & |L^N| }$$ together with the condition that the diagonal morphism of the first coincides with the composition of the diagonal morphism of the second and the morphisms $|L^N|\hookrightarrow |L|^{|N|}\to |L|^Y$ (the second map being induced by the map $Y\to |N|$). Summing up, a map as in eq. \[Bmap\] is equivalent to a map from $X$ to the pullback defined by the diagram $$\xymatrix{ & Z^Y \ar[d] \\ |L^N| \ar[r] & |L|^Y }$$ along with a compatible map $M\to L^N$ in such a way that the following diagram commutes: $$\xymatrix{ X \ar[r] \ar[d] \ar@/^1.5pc/[rr] & \vert L^N\vert \times_{\vert L\vert ^Y} Z^Y\ar[r] \ar[d] & Z^Y \ar[d]\\ |M| \ar[r] & \vert L^N\vert \ar[r] & \vert L\vert ^Y }$$ This shows that the internal hom functor in eq. \[internalhomB\] is indeed a right adjoint to the monoidal product functor in eq. \[monoidalproductB\].
We wish now to show that the category ${{\mathsf B}}$ is complete and cocomplete. First we prove that it admits colimits. Given a diagram whose objects are $(X_i,{{\mathbb N}}[X_i]\to M_i)$, we claim that its colimit is the object $$B=(\widetilde{{\underrightarrow{\mathrm{lim}}}X_i},{{\mathbb N}}[\widetilde{{\underrightarrow{\mathrm{lim}}}X_i}]\to{\underrightarrow{\mathrm{lim}}}M_i)\,,$$ where $\widetilde{{\underrightarrow{\mathrm{lim}}}X_i}$ denotes the image of the natural map ${\underrightarrow{\mathrm{lim}}}X_i\to |{\underrightarrow{\mathrm{lim}}}M_i|$; the maps from the diagram to $B$ are the obvious ones. It is immediate that $B$ is an object of ${{\mathsf B}}$. The injectivity condition is satisfied by definition. As for the surjectivity condition, one has that, since the functor ${{\mathbb N}}[{\,\text{-}\,}]$ preserves colimits (being a left adjoint), the map ${{\mathbb N}}[{\underrightarrow{\mathrm{lim}}}X_i]\to{\underrightarrow{\mathrm{lim}}}M_i$ is surjective (it is enough to show this for coproducts and coequalizers, where it is a consequence of the surjectivity of the maps ${{\mathbb N}}[X_i]\to M_i$), so that the image of ${\underrightarrow{\mathrm{lim}}}X_i$ generates ${\underrightarrow{\mathrm{lim}}}M_i$; hence, the map ${{\mathbb N}}[\widetilde{{\underrightarrow{\mathrm{lim}}}X_i}]\to{\underrightarrow{\mathrm{lim}}}M_i$ is surjective. Consider a map from the given diagram to an object $C$ of ${{\mathsf B}}$. In the category ${{\mathbb N}}[{\,\text{-}\,}]/{{\mathsf{Mon}}}_0$ such a map factorises in a unique way through the object $({\underrightarrow{\mathrm{lim}}}X_i,{{\mathbb N}}[{\underrightarrow{\mathrm{lim}}}X_i]\to{\underrightarrow{\mathrm{lim}}}M_i)$ because of the colimit properties in the categories ${{\mathsf{Set}}}_\ast$ and ${{\mathsf{Mon}}}_0$ and because the functor ${{\mathbb N}}[{\,\text{-}\,}]$ preserves colimits.
If two elements $x,y\in{\underrightarrow{\mathrm{lim}}}X_i$ have the same image $m\in{\underrightarrow{\mathrm{lim}}}M_i$, then their images in the first component of $C$ are mapped by the morphism in the second component to the same element. So, the images of $x$ and $y$ coincide, precisely because $C$ is an object of ${{\mathsf B}}$. It follows that the map from the diagram to $C$ factorises uniquely through $B$, so that our claim is proved. Second we prove that ${{\mathsf B}}$ admits limits. Given a diagram as above, we claim that its limit is the object $$B'=({\underleftarrow{\mathrm{lim}}}X_i,{{\mathbb N}}[{\underleftarrow{\mathrm{lim}}}X_i]\to\widetilde{{\underleftarrow{\mathrm{lim}}}M_i})\,,$$ where $\widetilde{{\underleftarrow{\mathrm{lim}}}M_i}$ is the image of the natural map ${{\mathbb N}}[{\underleftarrow{\mathrm{lim}}}X_i]\to{\underleftarrow{\mathrm{lim}}}M_i$, which is adjoint to the map ${\underleftarrow{\mathrm{lim}}}X_i\to{\underleftarrow{\mathrm{lim}}}|M_i|\cong |{\underleftarrow{\mathrm{lim}}}M_i|$ (the last isomorphism holds since $|{\,\text{-}\,}|$ preserves limits, being a right adjoint) induced by the maps $X_i\to |M_i|$; the maps from $B'$ to the diagram are the obvious ones. It is clear that $B'$ is an object of ${{\mathsf B}}$: the surjectivity condition holds by definition, while for the injectivity condition it is enough to note that it holds when the limit is either a product or an equalizer. Consider now a map from an object $C$ to the given diagram. In the category ${{\mathbb N}}[{\,\text{-}\,}]/{{\mathsf{Mon}}}_0$ such a map uniquely factorises through the object $({\underleftarrow{\mathrm{lim}}}X_i,{{\mathbb N}}[{\underleftarrow{\mathrm{lim}}}X_i ]\to{\underleftarrow{\mathrm{lim}}}M_i)$, because of the limit properties in the categories ${{\mathsf{Set}}}_\ast$ and ${{\mathsf{Mon}}}_0$. Since the second component of $C$ is a surjective morphism, this map uniquely factorises through $B'$.
Thus, $B'$ satisfies the limit condition, as claimed. \[Bluep-category\] The category ${{\mathsf {Blp}}}$ of blueprints is equivalent to the category ${{\mathsf{CMon}}}_{{\mathsf B}}$ of monoids in the symmetric monoidal category ${{\mathsf B}}$. To begin with, notice that, for each monoid object $(X,{{\mathbb N}}[X] \to M)$ in ${{\mathsf B}}$, the product defined in eq. \[monoidalproductB\], namely $$(X,{{\mathbb N}}[X] \to M)\otimes (X,\mathbb{N}[X]\to M)=(X \wedge X, {{\mathbb N}}[X\wedge X]\to M\otimes M)$$ induces a map $\mu\colon (X \wedge X, {{\mathbb N}}[X\wedge X]\to M\otimes M)\to (X,{{\mathbb N}}[X]\to M)$. So the first component of $\mu$ is a map $$m:X\wedge X\to X$$ which defines a (multiplicative) monoid structure on the set $X$, while the second component of $\mu$ yields a commutative diagram $$\xymatrix{ {{\mathbb N}}[X\wedge X] \ar[d] \ar[r]^{{{\mathbb N}}[m]} & {{\mathbb N}}[X] \ar[d] \\ M\otimes M \ar[r] & M }$$ whose bottom arrow induces an associative and commutative multiplication on the monoid $M$ compatible with its monoidal sum; in other words, it induces a semiring structure on $M$. Similarly, the top arrow induces a semiring structure on the monoid ${{\mathbb N}}[X]$. In this case, since the multiplication is given by applying the free monoid functor $\mathbb{N}[{\,\text{-}\,}]$ to the multiplication $m$ of $X$, the resulting semiring is nothing but the free semiring $X {\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ generated by the monoid $(X, m)$. The commutativity of the diagram ensures that the multiplication on $X$ is consistent with that on $M$, so that $X$ can still be seen as a subobject of $|M|$. In conclusion, a monoid object in the category ${{\mathsf B}}$ is a blueprint, and it is also obvious that any blueprint can be obtained this way. Theorem \[B-category\] and Proposition \[Bluep-category\] should hopefully provide a full elucidation of [@Lor16 Lemma 4.1].
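To make the equivalence concrete on a small example: the additive part of a blueprint is a quotient of a free commutative monoid ${{\mathbb N}}[X]$ in which the generators stay distinct. The following Python sketch models the toy blueprint generated by $\{T, T_1, T_2\}$ with the single relation $T = T_1 + T_2$ (an illustrative choice echoing an example used later in the text, not notation fixed here) and checks the conditions defining ${{\mathsf B}}$ on it:

```python
from collections import Counter

# Elements of the free commutative monoid N[X] on X = {T, T1, T2} are
# multisets of generators.  The congruence is generated by T = T1 + T2.

def normal_form(elem):
    """Rewrite every occurrence of T into T1 + T2.  Since T never
    reappears, the rewriting is confluent and the result is a canonical
    representative of the congruence class in the quotient monoid M."""
    nf = Counter(elem)
    t = nf.pop("T", 0)
    nf["T1"] += t
    nf["T2"] += t
    return tuple(sorted((g, k) for g, k in nf.items() if k > 0))

# Injectivity of X -> |M|: the three generators have distinct classes.
classes = {x: normal_form(Counter([x])) for x in ("T", "T1", "T2")}
assert len(set(classes.values())) == 3

# The congruence is compatible with the sum: T + T1 ~ (T1 + T2) + T1.
assert normal_form(Counter({"T": 1, "T1": 1})) \
    == normal_form(Counter({"T1": 2, "T2": 1}))
```

Surjectivity of ${{\mathbb N}}[X]\to M$ needs no separate check here, since $M$ is presented as a quotient of ${{\mathbb N}}[X]$.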
We have shown that the category of blueprints fits in with the general framework proposed by Toën and Vaquié, so we can apply the formalism of Subsection \[relativeschemes\] to define the category of schemes over ${{\mathsf B}}$. An affine ${{\mathsf B}}$-scheme is an object of the category ${{\mathsf{Aff}}}_{{\mathsf B}}= {{\mathsf {Blp}}}^{\text{op}}$, and a ${{\mathsf B}}$-scheme is an object of the category ${{\mathsf{Sch}}}_{{\mathsf B}}$ [*(see Definition \[defschemeoverC\])*]{}. A “${{\mathsf B}}$-scheme” corresponds to what is called a “subcanonical blue scheme” in [@Lor16]. Adjunctions {#sectionadjunctions} =========== ${{\mathsf B}}$-schemes {#sectionBadjunctions} ----------------------- This section aims to show that the natural adjunction between the categories ${{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0}$ and ${{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}$ factorizes through an adjunction between the categories ${{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0}$ and ${{\mathsf{Aff}}}_{{{\mathsf B}}}$ and an adjunction between the categories ${{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}$ and ${{\mathsf{Aff}}}_{{{\mathsf B}}}$, whose right adjoints induce functors between the corresponding categories of relative schemes. \[blueforgetfulfunctorlemma\] The functor $\tilde{F}\colon {{\mathbb N}}[{\,\text{-}\,}] / {{\mathsf{Mon}}}_0 \to {{\mathsf{Mon}}}_0$ mapping an object $(X, {{\mathbb N}}[X] \to M)$ to the monoid $M$ admits a right adjoint $$\label{preblueforgetfulfunctor} \tilde{G} \colon {{\mathsf{Mon}}}_0 \to {{\mathbb N}}[{\,\text{-}\,}] / {{\mathsf{Mon}}}_0\,,$$ mapping a monoid $M$ to the object $(|M|, {{\mathbb N}}[|M|] \to M)$, where the second component is the counit of the adjunction ${{\mathbb N}}[{\,\text{-}\,}] \dashv \vert{\,\text{-}\,}\vert$.
The adjunction $\tilde{F} \dashv \tilde{G}$ induces an adjunction between the associated categories of monoids $$\label{preblueadjunction} \xymatrix{{{\mathsf{SRing}}}\ar@/_1.1pc/[r]^{G} & {{\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/ {{\mathsf{SRing}}}} \ar@/_1.1pc/[l]_{F}}\,,$$ where $F$ maps an object $(A, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ to the semiring $R$ and its right adjoint $G$ maps a semiring $R$ to the object $(|R|, |R|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$, where the second component is the counit of the adjunction ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\dashv \vert{\,\text{-}\,}\vert$. Let $(X, {{\mathbb N}}[X] \to M)$ be an object of ${{\mathbb N}}[{\,\text{-}\,}] / {{\mathsf{Mon}}}_0$ and $N$ a monoid. Let us consider a morphism $$(X,\mathbb{N}[X]\to M)\to (|N|,\mathbb{N}[|N|]\to N)$$ in the category ${{\mathbb N}}[{\,\text{-}\,}] / {{\mathsf{Mon}}}_0$ and denote by $f\colon X\to |N|$ the induced set morphism. In the commutative square $$\label{squareofpreblueadjunction} \xymatrix{ \mathbb{N}[X] \ar[r]^{{{\mathbb N}}[f]} \ar[d] & \mathbb{N}[|N|] \ar[d] \\ M \ar[r] & N }$$ the map ${{\mathbb N}}[f]$, because of the property of the vertical arrow on the right (which is the counit of the adjunction), amounts to the same thing as a map $\mathbb{N}[X]\to N$. Such a map, by adjunction, must be induced by the map $f\colon X\to |N|$. Thus, assigning the map $f$ and the commutative square \[squareofpreblueadjunction\] is equivalent to assigning the commutative triangle $$\xymatrix{ \mathbb{N}[X] \ar[dr] \ar[d] & \\ M \ar[r] & N }$$ But this triangle amounts to the assignment of a map $M\to N$, since the vertical map is given. We have therefore the adjunction $\tilde{F} \dashv \tilde{G}$, as claimed. The last statement is now straightforward.
Since the image of the functor $\tilde{G}\colon {{\mathsf{Mon}}}_0 \to {{\mathbb N}}[{\,\text{-}\,}] / {{\mathsf{Mon}}}_0$ is contained in the subcategory ${{\mathsf B}}$, the adjunction \[preblueadjunction\] restricts to the adjunction $$\label{blueadjuction} \xymatrix{{{\mathsf{SRing}}}\ar@/_1.1pc/[r]^{G} & {{{\mathsf {Blp}}}} \ar@/_1.1pc/[l]_{F}}\,.$$ It is immediate that the adjunction $\xymatrix{{{\mathsf{SRing}}}\ar@/_1.1pc/[r]^{\vert{\,\text{-}\,}\vert} &{{\mathsf{Mon}}}_0 \ar@/_1.1pc/[l]_{{\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}}}$ factorises through the adjunction \[blueadjuction\] and the adjunction $$\label{MonBluepadjuction} \xymatrix{{{\mathsf{Mon}}}_0 \ar@/_1.1pc/[r]^{\sigma} & {{{\mathsf {Blp}}}} \ar@/_1.1pc/[l]_{\rho}}\,,$$ where $\rho(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R) = A$ and $\sigma(A) = (A,\xymatrix{A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\ar[r]^= &A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}})$. The adjunctions above induce opposite adjunctions between the corresponding categories of affine schemes. We have therefore the following diagram $$\label{adjunctiondiagram} \xymatrix{ {{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0} \ar[r]^{\vert{\,\text{-}\,}\vert} \ar[d]^{G} & {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \ar@/_1.5pc/[l]_{{\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}} \ar[dl]^{\sigma} \\ {{\mathsf{Aff}}}_{{{\mathsf B}}} \ar@<1ex>[ur]^{\rho} \ar@/^0.8pc/[u]^{F} } $$ associated to the diagram $$\label{adjunctiondiagram2} \xymatrix{ {{\mathsf{Mon}}}_0 \ar[r]^{\vert{\,\text{-}\,}\vert} \ar[d]^{\tilde{G}} & {{\mathsf{Set}}}_\ast \ar@/_1.5pc/[l]_{{{\mathbb N}}[{\,\text{-}\,}]} \ar[dl]^{\tilde{\sigma}} \\ {{\mathsf B}}\ar@<1ex>[ur]^{\tilde{\rho}} \ar@/^0.8pc/[u]^{\tilde{F}} }$$ We now wish to show that the functors in diagram \[adjunctiondiagram2\] satisfy the conditions that are required to apply [@TV Cor. 2.1, Cor. 2.2].
Of course, it will be enough to check this for the adjunctions $\tilde{F} \dashv \tilde{G}$ and $\tilde{\rho} \dashv \tilde{\sigma}$. \[lemmaadjunction1\] In the adjunction $\xymatrix{{{\mathsf{Mon}}}_0 \ar@/_1.1pc/[r]^{\tilde{G}} & {{{\mathsf B}}} \ar@/_1.1pc/[l]_{\tilde{F}}}$ 1. the left adjoint $\tilde{F}$ is monoidal; 2. the right adjoint $\tilde{G}$ is conservative; 3. the functor $\tilde{G}$ preserves filtered colimits. \(1) and (2) are straightforward. As for (3), the statement is also quite straightforward. The colimit of a filtered diagram $(X_i, {{\mathbb N}}[X_i] \to M_i)$ is indeed given by $$(\underrightarrow{\mathrm{lim}}X_i, {{\mathbb N}}[\underrightarrow{\mathrm{lim}}X_i] \to\underrightarrow{\mathrm{lim}}M_i)$$ provided that it belongs to our category (notice that ${{\mathbb N}}[\underrightarrow{\mathrm{lim}}X_i] \cong \underrightarrow{\mathrm{lim}}{{\mathbb N}}[X_i]$ since ${{\mathbb N}}[{\,\text{-}\,}]$ is a left adjoint). But it does, because the map ${{\mathbb N}}[\underrightarrow{\mathrm{lim}}X_i] \to\underrightarrow{\mathrm{lim}}M_i$ is surjective due to the fact that so are the maps ${{\mathbb N}}[X_i]\to M_i$, and the injectivity condition is satisfied since the diagram is filtered. \[lemmaadjunction2\] In the adjunction $\xymatrix{{{\mathsf B}}\ar@/_1.1pc/[r]^{\tilde{\rho}} & {{{\mathsf{Set}}}_\ast} \ar@/_1.1pc/[l]_{\tilde{\sigma}}}$ 1. the left adjoint $\tilde{\sigma}$ is monoidal; 2. the right adjoint $\tilde{\rho}$ is conservative; 3. the functor $\tilde{\rho}$ preserves filtered colimits. The functors $\tilde{\sigma}$, $\tilde{\rho}$ are defined as follows: $\tilde{\sigma}(X) = (X, \xymatrix{{{\mathbb N}}[X] \ar[r]^{=} &{{\mathbb N}}[X]})$ and $\tilde{\rho}(X, {{\mathbb N}}[X] \to M) = X$. (1) is then straightforward. As for (2), we know that a map $(X,{{\mathbb N}}[X]\to M)\to (Y,{{\mathbb N}}[Y]\to N)$ is determined by its first component, so that $\tilde{\rho}$ is conservative.
Finally, (3) is proved by proceeding as in the proof of Lemma \[lemmaadjunction1\]. \[Bschemesadjunctions\]The functor $F\colon {{\mathsf{Aff}}}_{{{\mathsf B}}} \to {{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0}$ is continuous w.r.t. the Zariski and the flat topology; moreover, the functor $$\widehat{F} \colon {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf B}}}) \to {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf{Mon}}}_0})$$ preserves the subcategories of schemes and so induces a functor $$\label{Bschemesadjunctionseq} \begin{aligned} \widehat{F}\colon {{\mathsf{Sch}}}_{{{\mathsf B}}} &\to {{\mathsf{Sch}}}_{{{\mathsf{Mon}}}_0}\\ \Sigma &\mapsto \widehat{F}(\Sigma) \end{aligned}$$ a\) We first note that, given objects $X_M= (X, {{\mathbb N}}[X]\to M)$, $X_{M'}= (X, {{\mathbb N}}[X]\to M')$ in ${{\mathsf B}}$, if $X_M\to X_{M'}$ is a flat morphism in ${{\mathsf B}}$, then in the associated diagram $$\xymatrix{ M{\,\text{-}\,}{{\mathsf{Mod}}}\ar[d] \ar[r] & X_M{\,\text{-}\,}{{\mathsf{Mod}}}\ar[d] \\ M' {\,\text{-}\,}{{\mathsf{Mod}}}\ar[r] & X_{M'}{\,\text{-}\,}{{\mathsf{Mod}}}}$$ the natural transformation between the two compositions is an isomorphism. We wish to prove that an analogous property holds when one considers a flat morphism in the category ${{\mathsf {Blp}}}$. As usual, it will be enough to work in the category ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$. Let $A_R = (A, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ and $A_S =(A, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to S)$ be objects in this category, and consider a flat morphism $A_R\to A_S$. An $A_R$-module is given by a pair $$(N,M)\in{{\mathsf{Set}}}_\ast\times{{\mathsf{Mon}}}_0$$ such that $N$ is a subset of $|M|$ and generates it as a module, together with an action of $A$ on $N$ and an action of $R$ on $M$, such that the former is the restriction of the latter.
If $M$ is an $R$-module, its associated $A_R$-module is the $(R, R {\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$-module $(\vert M\vert,M)$, whose $A_R$-module structure is induced by the map $$A_R\to (R, R{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$$ given by the pair of immersions $\iota\colon A\hookrightarrow R$ and $\iota\otimes_{{{{\mathbb F}_1}}}\text{id} \colon A {\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$, where the latter fits in the commutative square $$\xymatrix{ A {\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\ar[d] \ar[r]^{\iota\otimes_{{{{\mathbb F}_1}}}\text{id}} & R{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\ar[d] \\ R \ar[r]_{\text{id}_R} & R }$$ The category $R{\,\text{-}\,}{{\mathsf{Mod}}}$ can therefore be identified with the full subcategory of the category of $$(A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R){\,\text{-}\,}{{\mathsf{Mod}}}$$ whose underlying objects in ${{\mathsf{Mon}}}_0 /{{\mathsf{Mon}}}_0$ are of the kind $(M,M=M)$. We now have to show that, for any flat morphism $A_R\to A_S$ in ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$, in the associated diagram $$\xymatrix{ R{\,\text{-}\,}{{\mathsf{Mod}}}\ar[d] \ar[r] & A_R{\,\text{-}\,}{{\mathsf{Mod}}}\ar[d] \\ S{\,\text{-}\,}{{\mathsf{Mod}}}\ar[r] & A_S{\,\text{-}\,}{{\mathsf{Mod}}}}$$ the natural transformation between the two compositions is an isomorphism. As for the first component, the commutativity up to isomorphism of the above diagram is straightforward. As for the second component, it can easily be shown by adapting the argument in the proof of Prop. 3.6 of [@TV]. The statement then follows from [@TV Cor. 2.22]. \[Bschemesadjunctions2\] The functor $\sigma \colon {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Aff}}}_{{{\mathsf B}}}$ is continuous w.r.t.
the Zariski and the flat topology; moreover, the functor $$\widehat{\sigma} \colon {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}) \to {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf B}}})$$ preserves the subcategories of schemes and so induces a functor $$\begin{aligned} \widehat{\sigma}\colon {{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast} &\to {{\mathsf{Sch}}}_{{{\mathsf B}}}\\ \Xi &\mapsto \widehat{\sigma}(\Xi) \end{aligned}$$ Consider a flat morphism $A\to B$ in the category ${{\mathsf{Mon}}}_0$, and denote by $A_{A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}}$ the object $(A, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}= A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}})$ in ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$. Each $A_{A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}}$-module is given by a pair $(N,M)\in{{\mathsf{Set}}}_\ast \times{{\mathsf{Mon}}}_0$ together with an action of $A$ on $N$ and an action of $A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ on $M$, the two actions being compatible in the obvious sense. In the diagram $$\xymatrix{ A_{A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}}{\,\text{-}\,}{{\mathsf{Mod}}}\ar[d] \ar[r] & A{\,\text{-}\,}{{\mathsf{Mod}}}\ar[d] \\ B_{B{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}}{\,\text{-}\,}{{\mathsf{Mod}}}\ar[r] & B{\,\text{-}\,}{{\mathsf{Mod}}}}$$ the horizontal map sends an object $(N,M)$ to the set $N$ endowed with an action of the monoid $A$. Since tensor products are defined “componentwise”, the diagram commutes.\ ${{\widetilde{{{\mathsf B}}}}}$-schemes {#SectionFinal1} --------------------------------------- By Proposition \[Bschemesadjunctions2\] there is an induced functor $\widehat{\sigma}\colon {{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Sch}}}_{{{\mathsf B}}}$. One would like this functor to have a left adjoint determined by the functor $\rho\colon {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Aff}}}_{{{\mathsf B}}}$.
The functor $\rho$ may easily be shown to preserve Zariski covers, but it does not commute with finite limits (in other words, it is not continuous w.r.t. the Zariski topology, according to the usual terminology). Let us consider the free monoid $M=\langle X,Y\rangle$ and the blueprint $B$ defined by the free monoid $\langle T, T_1,T_2,S, S_1, S_2\rangle$ with the relations $T= T_1+T_2$ and $S= S_1 +S_2$. Let $f, g\colon M \to B$ be the morphisms mapping $(X, Y)$, respectively, into $(T_1, T_2)$ and $(S_1, S_2)$. The coequalizer of $f$ and $g$ is the blueprint $B'$ defined by the free monoid $\langle X,Y, Z\rangle$ with the relation $Z= X+Y$, while the coequalizer of $\rho f$ and $\rho g$ is the free monoid $\langle T,S, Z_1, Z_2\rangle$. The latter is obviously different from $\rho B'$. This drawback may be sidestepped by proceeding as follows: 1) omit the requirement that the map $A\to \vert A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\vert \to \vert R\vert$ is a monomorphism in Definition \[def-bluep\] and define a category ${{\widetilde{{{\mathsf {Blp}}}}}}$ that contains the category ${{\mathsf {Blp}}}$ of blueprints; analogously, by omitting the second condition in eq. \[Bconditions\], define a category ${{\widetilde{{{\mathsf B}}}}}$ containing ${{\mathsf B}}$; 2) prove that there is a functor $\rho\colon {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Aff}}}_{{{\widetilde{{{\mathsf B}}}}}}$ that is continuous w.r.t. the Zariski topology; 3) define the category of schemes ${{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}$ associated to this new category; 4) restrict our attention to the subcategory of ${{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}$ consisting of schemes that admit a cover by affine schemes in the category ${{\mathsf{Aff}}}_{{{\mathsf B}}}$. More precisely, the categories ${{\widetilde{{{\mathsf B}}}}}$ and ${{\widetilde{{{\mathsf {Blp}}}}}}$ are defined in the following way.
\[def-nbluep\] The category ${{\widetilde{{{\mathsf B}}}}}$ is the full subcategory of ${{\mathbb N}}[{\,\text{-}\,}] /{{\mathsf{Mon}}}_0$ whose objects $$(X, {{\mathbb N}}[X] \to M)$$ satisfy the condition that the morphism ${{\mathbb N}}[X] \to M$ is an epimorphism.\ The category ${{\widetilde{{{\mathsf {Blp}}}}}}$ is the category ${{\mathsf{CMon}}}_{{\widetilde{{{\mathsf B}}}}}$ of monoids in the symmetric monoidal category ${{\widetilde{{{\mathsf B}}}}}$. We denote again by $\rho\colon {{\widetilde{{{\mathsf {Blp}}}}}}\to {{\mathsf{Mon}}}_0$ the forgetful functor, $\rho(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R) = A$; analogously to adjunction \[MonBluepadjuction\], there is an adjunction $$\label{MonBluepadjuctionnew} \xymatrix{{{\mathsf{Mon}}}_0 \ar@/_1.1pc/[r]^{\sigma} & {{{\widetilde{{{\mathsf {Blp}}}}}}} \ar@/_1.1pc/[l]_{\rho}}\,,$$ where $\sigma(A) = (A,\xymatrix{A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\ar[r]^= &A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}})$. \[extm\] [(a)]{} Given an object $(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ of ${{\widetilde{{{\mathsf {Blp}}}}}}$, any diagram $X\colon I\to A-{{\mathsf{Mod}}}$ can be lifted to a diagram $I\to (A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)-{{\mathsf{Mod}}}$.\ [(b)]{} Given a diagram $X\colon I\to {{\mathsf{Mon}}}_0$ and a sieve $I_0$ of $I$, any lift of $X_{\vert I_0}$ to a diagram $I_0\to {{\widetilde{{{\mathsf {Blp}}}}}}$ can be extended to a diagram $I\to {{\widetilde{{{\mathsf {Blp}}}}}}$. [(a)]{} Let $X\colon I\to A-{{\mathsf{Mod}}}$ be a diagram.
For each object $i$ of $I$, consider the $(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$-module $(X_i,{{\mathbb N}}[X_i]\to M_i^0)$, where $M_i^0$ is the quotient of ${{\mathbb N}}[X_i]$ by the equivalence relation generated by $am=bm$, for each $m\in{{\mathbb N}}[X_i]$ and for each pair $(a,b)$ in the relation defining the quotient $R$.\ By induction, the $(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$-module $(X_i,{{\mathbb N}}[X_i]\to M_i^{\alpha+1})$ is defined by setting $M_i^{\alpha+1}$ to be the quotient of ${{\mathbb N}}[X_i]$ by the equivalence relation generated by the equations defining $M_i^\alpha$ and by the equations ${{\mathbb N}}[f]m={{\mathbb N}}[f]n$, where $f\colon X_j\to X_i$ is any map in the diagram and where $m=n$ w.r.t. the relation defining $M_j^\alpha$. When $\alpha$ is a limit ordinal, $M_i^\alpha$ is defined as the obvious colimit ${\underrightarrow{\mathrm{lim}}}_{\beta <\alpha} M_i^\beta$. Finally, let $M_i = {\underrightarrow{\mathrm{lim}}}_{\alpha} M_i^\alpha$. It is clear that the diagram $X$ can be lifted in a unique way to a diagram $(X_i,{{\mathbb N}}[X_i]\to M_i)$.\ [(b)]{} The proof is analogous to that of point (a). \[extr\] A particular case of Lemma \[extm\](b) is the following. Given an object $(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ of ${{\widetilde{{{\mathsf {Blp}}}}}}$, any diagram $\xymatrix{A \ar@<.5ex>[r]^f \ar@<-.5ex>[r]_g &B}$ can be lifted (w.r.t. $\rho$) to a diagram $\xymatrix{(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R) \ar@<.5ex>[r] \ar@<-.5ex>[r]& (B,B{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to S)}$. Should one admit the existence of the zero monoid and of the zero ring (i.e. the possibility that $0=1$), in the proof of Lemma \[extm\] it would be enough to set $M_i = 0$ and $S=0$, respectively. \[rhopreservesZariski\] The functor $\rho\colon {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Aff}}}_{{{\widetilde{{{\mathsf B}}}}}}$ preserves Zariski covers.
Let $$\left\{ {\operatorname{Spec}}(A_i, A_i{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R_i) \to {\operatorname{Spec}}(A, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)\right\}_{i\in I}$$ be any Zariski cover in the category ${{\mathsf{Aff}}}_{{\widetilde{{{\mathsf B}}}}}$. We have to prove that $\{ {\operatorname{Spec}}A_i\to{\operatorname{Spec}}A\}_{i\in I}$ is a Zariski cover in ${{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}$. To do that, by taking into account [@TV Déf. 2.10], we have to check the following four points: 1. To show that, for each $i$, ${\operatorname{Spec}}A_i\to{\operatorname{Spec}}A$ is flat, that is, that $${\,\text{-}\,}\otimes_AA_i\colon A-{{\mathsf{Mod}}}\to A_i-{{\mathsf{Mod}}}$$ is exact. By applying Lemma \[extm\](a) to any finite diagram, this follows from the flatness of the morphism ${\operatorname{Spec}}(A_i, A_i{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R_i) \to {\operatorname{Spec}}(A, A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ and from the fact that $\rho$ preserves limits, being a right adjoint. 2. To show that there is a finite subset $J\subset I$ such that $$\prod_{j\in J}{\,\text{-}\,}\otimes_AA_j\colon A-{{\mathsf{Mod}}}\to\prod_{j\in J}A_j-{{\mathsf{Mod}}}$$ is conservative. This follows from Lemma \[extm\](a) in the case where $I$ is the category $\bullet \to \bullet$. 3. To show that $\rho$ preserves epimorphisms. This is a consequence of Lemma \[extm\](b) (see Remark \[extr\]). 4. To show that $\rho$ preserves the finite presentation property. This fact follows from Lemma \[extm\](b). \[rhopreservesfinitelimits\] The functor $\rho\colon {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Aff}}}_{{{\widetilde{{{\mathsf B}}}}}}$ preserves finite limits. We will show the equivalent statement that the functor $\rho\colon {{{\widetilde{{{\mathsf B}}}}}} \to {{{\mathsf{Set}}}_\ast}$ preserves finite colimits.
Notice that it is enough to show that it preserves coproducts and coequalizers.\ Let $(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R)$ and $(B,B{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to S)$ be objects in ${{{\widetilde{{{\mathsf B}}}}}}$ and take the coproduct $(A\coprod B,(A\coprod B){\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R\oplus S)$ in the category ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$: we have to show that the second component is surjective. This follows from the fact that, since ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}$ is a left adjoint, one has $(A\coprod B){\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\cong (A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}})\oplus (B{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}})$.\ Let $f,g\colon (A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to R) \to (B,B{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to S)$. As above, the domain of the second component of the coequalizer $C$ of $f, g$ in ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$ is the coequalizer of $$f{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}},g{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\colon A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\to B{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\,.$$ By the universal property of colimits, there is a commutative diagram $$\xymatrix{ A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\ar@{>>}[d] \ar@<.5ex>[r] \ar@<-.5ex>[r] & B{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}\ar@{>>}[d] \ar@{>>}[r] & C \ar[d] \\ R \ar@<.5ex>[r] \ar@<-.5ex>[r] & S \ar@{>>}[r] & T}$$ in ${{\mathsf{SRing}}}$, whose rows are coequalizers and where the map $C\to T$ is the second component of the coequalizer of $f,g$ in the category ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$. As the middle vertical map and the bottom right one are surjective, so is the map $C\to T$.
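The failure of $\rho$ on ${{\mathsf {Blp}}}$ to commute with coequalizers, exhibited by the example with $f, g\colon \langle X,Y\rangle \to B$ earlier in this section, can also be sanity-checked at the level of generators: in ${{\mathsf{Mon}}}_0$, the coequalizer of $\rho f$ and $\rho g$ only identifies images of generators. The sketch below (a deliberately minimal union-find model, not the categorical construction itself) counts the resulting generator classes:

```python
# f, g: <X, Y> -> <T, T1, T2, S, S1, S2> with f: (X, Y) -> (T1, T2)
# and g: (X, Y) -> (S1, S2), as in the counterexample in the text.

def coequalizer_classes(generators, f, g):
    """Union-find on generators: identify f(x) with g(x) for each x,
    and return the set of resulting equivalence classes."""
    parent = {a: a for a in generators}
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for x in f:
        parent[find(f[x])] = find(g[x])
    return {frozenset(a for a in generators if find(a) == r)
            for r in {find(a) for a in generators}}

gens = ["T", "T1", "T2", "S", "S1", "S2"]
f = {"X": "T1", "Y": "T2"}
g = {"X": "S1", "Y": "S2"}
classes = coequalizer_classes(gens, f, g)

# Four classes {T}, {S}, {T1,S1}, {T2,S2}: the free monoid <T,S,Z1,Z2>.
assert len(classes) == 4
# The blueprint coequalizer B' has the three generators <X, Y, Z>,
# so rho(B') cannot agree with this coequalizer.
assert len(classes) != 3
```

In ${{\widetilde{{{\mathsf {Blp}}}}}}$ this obstruction disappears, precisely because the surjectivity argument above shows that coequalizers are computed compatibly in both components.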
Proposition \[rhopreservesZariski\] and Proposition \[rhopreservesfinitelimits\] entail the following result. The functor $\rho\colon {{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast} \to {{\mathsf{Aff}}}_{{{\widetilde{{{\mathsf B}}}}}}$ is continuous w.r.t. the Zariski topology, and the adjunction \[MonBluepadjuctionnew\] gives rise to a geometric morphism $$\label{MonBluepadjuctionnew2} \xymatrix{ {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\widetilde{{{\mathsf B}}}}}}) \ar@/_1.1pc/[r]^{\widehat\rho} & {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast}) \ar@/_1.1pc/[l]_{\widehat\sigma}}$$ \[rhosigmaadjunction\] The functor $\widehat\rho\colon {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\widetilde{{{\mathsf B}}}}}}) \to {{\mathsf{Sh}}}({{\mathsf{Aff}}}_{{{\mathsf{Set}}}_\ast})$ preserves the subcategories of schemes and so induces a functor $$\widehat\rho \colon {{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}} \to {{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast}\,.$$ Hence, the adjunction \[MonBluepadjuctionnew2\] induces an adjunction $\widehat\rho \dashv \widehat\sigma\colon {{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}} \to {{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast}$. We already proved that $\widehat\sigma$ preserves the relevant subcategory of schemes in Proposition \[Bschemesadjunctions2\]. So all we have to prove is that $\widehat\rho$ preserves the relevant subcategory of schemes. In view of [@TV Proposition 2.18], it suffices to observe that the following properties of $\widehat\rho$ are satisfied: - it preserves coproducts (for it is a left adjoint), and affine schemes; - it preserves finite limits (by Proposition \[rhopreservesfinitelimits\]) and Zariski opens of affine schemes (by Lemma \[extm\](b) and by the fact that $\widehat\rho$ preserves finite limits); - it preserves images (since it preserves finite limits and colimits) and diagonal morphisms; - it preserves quotients, since it preserves colimits. 
\[def-nbschemes\] A scheme $\Sigma$ in ${{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}$ that admits a Zariski cover by affine schemes in ${{\mathsf{Aff}}}_{{{\mathsf B}}}$ will be called (by a slight abuse of language) a ${{\widetilde{{{\mathsf B}}}}}$-scheme. The category of such schemes will be denoted by $\widetilde{{{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}}$. The rationale behind this definition is that, while ${{\widetilde{{{\mathsf B}}}}}$-schemes retain all good local properties of ${{\mathsf B}}$-schemes (namely, the properties of blueprints), one gains the advantages of working in the wider and more comfortable environment of the category ${{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}$. Notice that the adjunction in Theorem \[rhosigmaadjunction\] obviously restricts to an adjunction $$\label{adjunctionfornbschemes} \widehat\rho \dashv \widehat\sigma\colon \widetilde{{{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}} \to {{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast}\,.$$ Moreover, one can define a functor $$\label{Zbluefunctor0} \xymatrix{{{\mathsf{Sch}}}_{{{\mathsf B}}} \ar[r]^{\widehat{F}_{{\mathbb Z}}} &{{\mathsf{Sch}}}_{{{\mathsf{Ab}}}}}\,,$$ obtained by composing the functor $\widehat{F}\colon {{\mathsf{Sch}}}_{{{\mathsf B}}} \to {{\mathsf{Sch}}}_{{{\mathsf{Mon}}}_0}$ in eq. \[Bschemesadjunctionseq\] with the functor $${\,\text{-}\,}\otimes_{{{\mathbb N}}} {{\mathbb Z}}\colon {{\mathsf{Sch}}}_{{{\mathsf{Mon}}}_0} \to {{\mathsf{Sch}}}_{{{\mathsf{Ab}}}}$$ defined in [@TV Prop. 3.4]. Of course, this functor restricts to a functor $$\label{Zbluefunctor} \xymatrix{\widetilde{{{\mathsf{Sch}}}_{{{\mathsf B}}}} \ar[r]^{\widehat{F}_{{\mathbb Z}}} &{{\mathsf{Sch}}}_{{{\mathsf{Ab}}}}}\,.$$ A ${{\widetilde{{{\mathsf B}}}}}$-scheme gives rise, through the functors $\widehat\rho$ and $\widehat{F}_{{\mathbb Z}}$, to a pair consisting of a monoidal scheme and a classical scheme.
\[defBscheme\] Given a ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$, we set - $\Sigma_{{\mathbb Z}}: = \widehat{F}_{{\mathbb Z}}(\Sigma)$, which is an object of ${{\mathsf{Sch}}}_{{\mathsf{Ab}}}$ (i.e. a classical scheme); - $\underline{\Sigma}: =\widehat{\rho}(\Sigma)$, which is an object of ${{\mathsf{Sch}}}_{{{\mathsf{Set}}}_\ast}$ (i.e. a monoidal scheme). There is a natural transformation $\Sigma_{{\mathbb Z}}\to \underline{\Sigma}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$, which is obtained via the unit of the adjunction $\widehat{\rho}\dashv\widehat{\sigma}$ and by applying the functor $\widehat{F}_{{\mathbb Z}}$. By definition, there is indeed a map $$\widehat{F}_{{\mathbb Z}}\Sigma\to \widehat{F}_{{\mathbb Z}}\widehat{\sigma}\,\widehat{\rho}\, \Sigma\cong\underline{\Sigma}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\,,$$ where the isomorphism is given by the natural isomorphism $\widehat{F}_{{\mathbb Z}}\circ\widehat{\sigma}={\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$. In the affine case, such a map is simply realized as the bottom arrow of the map between arrows $$\xymatrix{ A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\ar[d] \ar[r] & A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\ar[d] \\ A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\ar[r] & R\otimes_{{{\mathbb N}}} {{\mathbb Z}}}$$ where the top and the left map are identities. 
Summing up, a ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ therefore induces the following objects: $$\begin{aligned} \bullet\ &\text{a monoidal scheme $\underline{\Sigma}$;}\label{Bschemesdata1} \\ \bullet\ &\text{a (classical) scheme $\Sigma_{{\mathbb Z}}$ over ${{\mathbb Z}}$;}\label{Bschemesdata2}\\ \bullet\ &\text{a natural transformation} \ \Lambda\colon \Sigma_{{\mathbb Z}}\to\underline{\Sigma}\circ |{\,\text{-}\,}|\cong\underline{\Sigma}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\,.\label{structuralmapBscheme}\end{aligned}$$ We shall say that the ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ [*generates*]{} the pair $(\underline\Sigma, \Sigma_{{\mathbb Z}})$, the natural transformation \[structuralmapBscheme\] being understood. An application: ${{\widetilde{{{\mathsf B}}}}}$-schemes and ${{{\mathbb F}_1}}$-schemes {#SectionFinal2} ======================================================================================= The geometric data \[Bschemesdata1\], \[Bschemesdata2\], \[structuralmapBscheme\] appear to be similar to (but different from) those used by A. Connes and C. Consani [@CC] in their definition of ${{{\mathbb F}_1}}$-scheme, which is as follows. \[CCschemes\][@CC Def. 4.7] An ${{{\mathbb F}_1}}$-scheme is a triple $(\underline{\Xi}, \Xi_{{\mathbb Z}}, \Phi)$, where 1. $\underline{\Xi}$ is a monoidal scheme; 2. $\Xi_{{\mathbb Z}}$ is a (classical) scheme; 3. $\Phi$ is a natural transformation $\underline{\Xi}\to \Xi_{{\mathbb Z}}\circ ({\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})$, such that the induced natural transformation $\underline{\Xi}\circ \vert{\,\text{-}\,}\vert \to \Xi_{{\mathbb Z}}$, when evaluated on fields, gives isomorphisms (of sets).[^5] A manifest difference between ${{\widetilde{{{\mathsf B}}}}}$-schemes and ${{{\mathbb F}_1}}$-schemes is, of course, the direction of the natural transformation linking the monoidal scheme and the classical scheme.
Moreover, the condition on $\Phi$ in Definition \[CCschemes\](3) may fail to be fulfilled in the case of ${{\widetilde{{{\mathsf B}}}}}$-schemes, as shown by the following example. Consider a pair $(A,R\to A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})$ defining an affine ${{{\mathbb F}_1}}$-scheme in the sense of Definition \[CCschemes\]. Notice that, in this case, the natural transformation $\Phi$ calculated on a field $k$ corresponds to mapping a prime ideal $P$ of $A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$ plus an immersion $A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}/P\hookrightarrow k$ to their restrictions to $R$; the requirement is that this is a bijection. On the other hand, according to the general idea underlying the notion of blueprint, if the pair $(A,R)$ is associated with an affine ${{\mathsf B}}$-scheme (which is, of course, the same thing as an affine ${{\widetilde{{{\mathsf B}}}}}$-scheme), then the ring $R$ encodes the information of a relation $\mathcal{R}$ intended to *reduce* the number of ideals of $A$. Take for instance the case $(A,A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\to R)$, with $A={{\mathbb N}}\cup\{-\infty\}$ (additive notation) and $R=A{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}/(2T-1)\cong{{\mathbb Z}}[T]/(2T-1)$, where $T$ corresponds to the generator $1\in A$. Then, ${{\mathbb N}}$ is an ideal not coming from any ideal of $R$, since $T$ is invertible (in more algebraic terms, we are saying that the map to any field $k$ sending $T$ to 0 cannot be lifted to a map from $R$ to $k$). The category $\widetilde{{{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}}$ and that of ${{{\mathbb F}_1}}$-schemes may be combined into a larger category. \[definitionF1schemewr\] The category of ${{{{\mathbb F}_1}}}$-schemes with relations is the fibered product of the category $\widetilde{{{\mathsf{Sch}}}_{{{\widetilde{{{\mathsf B}}}}}}}$ of ${{\widetilde{{{\mathsf B}}}}}$-schemes and that of ${{{\mathbb F}_1}}$-schemes over the category of monoidal schemes.
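In the example above, $R$ is the quotient ring ${{\mathbb Z}}[T]/(2T-1)$ (writing $T$ for the generator), in which the class of $T$ is invertible, being an inverse of $2$. The non-liftability of $T\mapsto 0$ can be checked by brute force over small prime fields; the encoding below is ours:

```python
# Brute-force illustration: in R = Z[T]/(2T - 1) the class of T is invertible,
# so no ring map R -> k can send T to 0.  We check this over k = F_p, p odd
# (for p = 2 there are no points at all, since 2T - 1 maps to -1).

def points_of_R_mod(p):
    """Ring homs Z[T]/(2T-1) -> F_p, identified with t in F_p with 2t = 1."""
    return [t for t in range(p) if (2 * t - 1) % p == 0]

for p in (3, 5, 7, 11):
    pts = points_of_R_mod(p)
    assert len(pts) == 1          # 2 is invertible mod an odd prime
    assert 0 not in pts           # T -> 0 does not lift to R
```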
Thus, a ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ generating the pair $(\underline\Sigma, \Sigma_{{\mathbb Z}})$ and an ${{{\mathbb F}_1}}$-scheme $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}', \Phi)$ will determine an ${{{{\mathbb F}_1}}}$-scheme with relations denoted by the quadruple $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$. Notice that the classical scheme $\Sigma_{{\mathbb Z}}$ is derived from the ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ via the functor $\widehat{F}_{{\mathbb Z}}\colon {{\mathsf {Blp}}}\to {{\mathsf{Ring}}}$ (Definition \[defBscheme\]). This means, in particular, that the affine ${{\mathsf B}}$-scheme $\Sigma= (M, M\otimes_{{{{\mathbb F}_1}}} {{\mathbb N}}\to R)$ generates the affine classical scheme $\Sigma_{{\mathbb Z}}= R\otimes_{{\mathbb N}}{{\mathbb Z}}$. So Definition \[definitionF1schemewr\] indicates that, as long as we wish to investigate a relationship between this affine ${{\mathsf B}}$-scheme and an ${{{\mathbb F}_1}}$-scheme with its associated affine classical scheme $\Sigma'_{{\mathbb Z}}$, we are no longer concerned with the “monoid relations” given by the map $M\otimes_{{{{\mathbb F}_1}}} {{\mathbb N}}\to R$, but only with the “ring relations” given by the map $M\otimes_{{{{\mathbb F}_1}}} {{\mathbb Z}}\to R\otimes_{{\mathbb N}}{{\mathbb Z}}$ (cf. eq. \[structuralmapBscheme\]). From this viewpoint it appears more natural to work with blueprints with “ring relations”. More precisely, consider the functor $${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}: {{\mathsf{Mon}}}_0\to {{\mathsf{Ring}}}\,$$ which is the left adjoint to the forgetful functor, and define the category $({\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})/{{\mathsf{Ring}}}$.
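On elements, the left adjoint ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$ sends a monoid-with-zero $M$ to the free ${{\mathbb Z}}$-module on $M\smallsetminus\{0\}$, with multiplication induced by that of $M$. A minimal sketch of this multiplication, under our own dictionary encoding (not any standard library):

```python
# Sketch of multiplication in M ⊗_{F1} Z: free Z-module on M \ {0},
# product induced by the monoid law, with the absorbing element sent to 0.
# A monoid is given here as a (mul, zero) pair; the encoding is ours.

from collections import defaultdict

def monoid_algebra_mul(x, y, mul, zero):
    """Multiply two elements of M ⊗_{F1} Z, encoded as dicts {monoid elt: coeff}."""
    out = defaultdict(int)
    for a, ca in x.items():
        for b, cb in y.items():
            p = mul(a, b)
            if p != zero:              # the absorbing element collapses to the ring's 0
                out[p] += ca * cb
    return {k: v for k, v in out.items() if v != 0}

# Example: M = F_{1^2} = mu_2 u {0}, encoded as {0, 1, -1} with ordinary product,
# so that M ⊗_{F1} Z is the group ring Z[mu_2].
mul = lambda a, b: a * b
x = {1: 2, -1: 3}                      # 2·[1] + 3·[-1]
y = {-1: 1}
assert monoid_algebra_mul(x, y, mul, 0) == {-1: 2, 1: 3}
```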
We shall denote by ${{\mathbb Z}}{\,\text{-}\,}{{\mathsf {Blp}}}$ the full subcategory of $({\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})/{{\mathsf{Ring}}}$ formally defined in the same way as the subcategory of blueprints ${{\mathsf {Blp}}}$ of ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb N}}}/{{\mathsf{SRing}}}$. Analogously, one defines the category ${{\mathbb Z}}{\,\text{-}\,}{{\widetilde{{{\mathsf {Blp}}}}}}$. A ${{\mathbb Z}}{\,\text{-}\,}{{\widetilde{{{\mathsf B}}}}}$-scheme is then a [*scheme in ${{\mathsf{Sch}}}_{{{\mathbb Z}}{\,\text{-}\,}{{\widetilde{{{\mathsf {Blp}}}}}}}$ that admits a Zariski cover by affine schemes in $({{\mathbb Z}}{\,\text{-}\,}{{\mathsf {Blp}}})^{\text{\rm op}}$.*]{} We shall adopt hereafter the following terminological convention. \[convention\] [*In what follows, by ${{\widetilde{{{\mathsf B}}}}}$-scheme we mean a ${{\mathbb Z}}{\,\text{-}\,}{{\widetilde{{{\mathsf B}}}}}$-scheme, and by ${{{{\mathbb F}_1}}}$-scheme with relations we mean the combination of a ${{\mathbb Z}}{\,\text{-}\,}{{\widetilde{{{\mathsf B}}}}}$-scheme and an ${{{\mathbb F}_1}}$-scheme in the sense of Definition \[definitionF1schemewr\]*]{}. Now, Definition \[defBscheme\] and Definition \[CCschemes\] imply that, for every ${{{{\mathbb F}_1}}}$-scheme with relations $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$, there is a natural transformation $\Psi_1 \colon \Sigma_{{\mathbb Z}}\to \Sigma_{{\mathbb Z}}'$ given by the composition $$\label{firsttransferringmap} \xymatrix{&\Sigma_{{\mathbb Z}}\ar[r]^{\Lambda\ \ } &\underline{\Sigma}\circ |{\,\text{-}\,}| \ar[r]^{\ \ \Phi} &\Sigma_{{\mathbb Z}}'}\,,$$ which will be called the [*first transferring map*]{} determined by the given ${{{\mathbb F}_1}}$-scheme with relations.
As its name would suggest, the natural transformation $\Psi_1$, loosely speaking, conveys information about how many “points" of $\Sigma_{{\mathbb Z}}'$ are compatible with the ${{\widetilde{{{\mathsf B}}}}}$-scheme that generates the pair $(\underline\Sigma,\Sigma_{{\mathbb Z}})$. Actually, there is a different way to “transfer” this information from the ${{\widetilde{{{\mathsf B}}}}}$-scheme to the ${{{\mathbb F}_1}}$-scheme associated with the fibered object $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$. The counit of the adjunction ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\dashv |{\,\text{-}\,}|$ induces a map $$\label{Bmap1} \underline{\Sigma}\circ |{\,\text{-}\,}|\to\underline{\Sigma}\circ ||{\,\text{-}\,}|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}|\,.$$ Moreover, the natural transformation \[structuralmapBscheme\] induces a map $$\label{Bmap2} \Sigma_{{\mathbb Z}}\circ (|{\,\text{-}\,}|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})\to\underline{\Sigma}\circ ||{\,\text{-}\,}|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}| \,.$$ Let $\Sigma'_{{{\mathsf B}}}$ be the sheaf on the category ${{\mathsf{Ring}}}$ obtained as the pullback of the maps \[Bmap1\] and \[Bmap2\], i.e. $$\label{diagramdefiningSigma'Bcat} \xymatrix{\Sigma'_{{{\mathsf B}}} \ar[r] \ar[d] & \Sigma_{{\mathbb Z}}\circ (|{\,\text{-}\,}|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})\ar[d]\\ \underline{\Sigma}\circ |{\,\text{-}\,}| \ar[r] & \underline{\Sigma}\circ ||{\,\text{-}\,}|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}| }$$ By composing the vertical arrow on the left with $\Phi$, we get a natural transformation $$\Psi_2\colon \Sigma'_{{{\mathsf B}}}\to \Sigma'_{{\mathbb Z}}\,,$$ which will be called the [*second transferring map*]{} determined by the ${{{\mathbb F}_1}}$-scheme with relations $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$.
In the case of an ${{{\mathbb F}_1}}$-scheme $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}', \Phi)$, the natural transformation $\Phi$ induces an isomorphism $\underline\Sigma(\vert{\mathbb K}\vert) \simeq \Sigma_{{\mathbb Z}}'(\mathbb K)$ for every field $\mathbb K$. Since for the finite field ${{\mathbb F}_q}$, one has $\vert{{\mathbb F}_q}\vert = {\mathbb F}_{1^{q-1}}$, it immediately follows, as observed in [@CC], that there is a bijective correspondence between the set of ${{\mathbb F}_q}$-points of $\Sigma_{{\mathbb Z}}'$ and the set of ${\mathbb F}_{1^{q-1}}$-points of $\underline\Sigma$; in other words, one has $$\label{countingpointsofF1schemes} \#\Sigma_{{\mathbb Z}}'({{\mathbb F}_q})= \# \underline\Sigma({\mathbb F}_{1^{q-1}})\,.$$ This result can be extended to our setting in two different ways, because, for a ${{\widetilde{{{\mathsf B}}}}}$-scheme underlying an ${{{\mathbb F}_1}}$-scheme with relations, we can think of its “${\mathbb F}_{1^{q-1}}$-points” in two different senses. On the one hand, the forgetful functor $\vert{\,\text{-}\,}\vert : {{\mathsf{Ring}}}\to {{\mathsf{Mon}}}_0$ admits the obvious factorization $$\xymatrix{{{\mathsf{Ring}}}\ar[r]^{G_{{\mathbb Z}}} & {{\mathbb Z}}{\,\text{-}\,}{{\mathsf {Blp}}}\ar[r]^\rho &{{\mathsf{Mon}}}_0}\,,$$ (cf. eq. \[MonBluepadjuction\]). Clearly, one has $$G_{{\mathbb Z}}({{\mathbb F}_q})= (\mathbb{F}_{1^{q-1}}, \mathbb{F}_{1^{q-1}}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\to \mathbb{F}_{1^{q-1}})$$ and $\rho(G_{{\mathbb Z}}({{\mathbb F}_q})) = \vert {{\mathbb F}_q}\vert = \mathbb{F}_{1^{q-1}}$. Now, by definition, the first transferring map $\Psi_1$ factorises as $\Psi_1 = \Phi\circ \Lambda$. Since $\Phi$ gives isomorphisms (of sets) when evaluated on fields and $\Lambda$ is always locally injective, the following result is immediate. \[propfirsttransferringmap\] Let $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$ be an ${{{\mathbb F}_1}}$-scheme with relations.
The first transferring map $\Psi_1\colon \Sigma_{{\mathbb Z}}\to \Sigma_{{\mathbb Z}}'$, when evaluated on a field, gives an injective map (of sets). In particular, the set of $G_{{\mathbb Z}}({{\mathbb F}_q})$-points of the underlying ${{\widetilde{{{\mathsf B}}}}}$-scheme naturally injects into the set of $\mathbb{F}_q$-points of the scheme $\Sigma'_{{\mathbb Z}}$ (which is in bijection with the set of ${\mathbb F}_{1^{q-1}}$-points of the monoidal scheme $\underline\Sigma$). On the other hand, one has the immersion $\sigma\colon {{\mathsf{Mon}}}_0 \hookrightarrow {{\mathbb Z}}{\,\text{-}\,}{{\mathsf {Blp}}}$, with $$\sigma(\mathbb{F}_{1^{q-1}}) = (\mathbb{F}_{1^{q-1}}, \mathbb{F}_{1^{q-1}}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}{\buildrel \text{id} \over \longrightarrow} \mathbb{F}_{1^{q-1}}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}) \,.$$ Notice that $G_{{\mathbb Z}}({{\mathbb F}_q})\neq \sigma(\mathbb{F}_{1^{q-1}})$, while $\vert G_{{\mathbb Z}}({{\mathbb F}_q})\vert = \vert \sigma(\mathbb{F}_{1^{q-1}})\vert = \mathbb{F}_{1^{q-1}}$. \[thmsecondftransferringmap\] Let $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$ be an ${{{\mathbb F}_1}}$-scheme with relations. The set of $\sigma(\mathbb{F}_{1^{q-1}})$-points of the underlying ${{\widetilde{{{\mathsf B}}}}}$-scheme is in natural bijection with the set of $\mathbb{F}_q$-points of the subpresheaf of $\Sigma'_{{\mathbb Z}}$ given by the image of $\Psi_2\colon \Sigma'_{{{\mathsf B}}}\to\Sigma'_{{\mathbb Z}}$. Since we can work locally, we assume that the underlying scheme is given by a monoid $M$, a ring $R$, and a map $M{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\to R$ satisfying the usual conditions.
An $\mathbb{F}_{1^{q-1}}$-point is given by a commutative square $$\xymatrix{ M{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\ar[d] \ar[r] & \mathbb{F}_{1^{q-1}}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\cong |\mathbb{F}_q|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\ar[d]^{\mathrm{id}} \\ R \ar[r] & \mathbb{F}_{1^{q-1}}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\cong |\mathbb{F}_q|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}}$$ such that the arrow on the top is induced by a map $M\to\mathbb{F}_{1^{q-1}}$.\ The datum of a generic commutative square as above is equivalent to the datum of an $\mathbb{F}_q$-point in ${\operatorname{Spec}}R\circ (|{\,\text{-}\,}|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})$.\ The fact that the map on the top has the required property is equivalent to the fact that the image of the point above through the restriction map $${\operatorname{Spec}}R(|\mathbb{F}_q|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})\to {\operatorname{Spec}}(M{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})(|\mathbb{F}_q|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})$$ is in the image of the map $${\operatorname{Spec}}M (|\mathbb{F}_q|)\to {\operatorname{Spec}}(M{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})(|\mathbb{F}_q|{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}})$$ induced by the functor ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$. We are now interested in the case where the ${{{\mathbb F}_{1^n}}}$-points of the underlying monoidal scheme $\underline\Sigma$ are counted by a polynomial in $n$. Some preliminary definitions and results are in order. A monoidal scheme $\Sigma$ is said to be [*noetherian*]{} if it admits a finite open cover by representable subfunctors $\{{\operatorname{Spec}}(A_i)\}$, with each $A_i$ a noetherian monoid. Recall that, as proved in [@Gil, Theorems 5.10 and 7.8], a monoid is noetherian if and only if it is finitely generated.
This immediately implies that, for any prime ideal $\frak p \subset M$ of a noetherian monoid $M$, the localized monoid $M_{\frak p}$ is noetherian and the abelian group $M_{\frak p}^\times$ of invertible elements in $M_{\frak p}$ is finitely generated. Notice that, given an ${{{\mathbb F}_1}}$-scheme $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}', \Phi)$, the fact that the monoidal scheme $\underline{\Sigma}$ is noetherian does not entail that the scheme $\Sigma_{{\mathbb Z}}'$ is noetherian as well. Let us consider, for instance, the affine ${{{\mathbb F}_1}}$-scheme given by ${{\mathbb Z}}[X,\varepsilon_i]/(\varepsilon_i^2)\to{{\mathbb Z}}[X]$, with $i\in{{\mathbb N}}$. The monoidal scheme is noetherian, while the ascending chain of ideals $\ldots\subset (\varepsilon_0,\ldots ,\varepsilon_i)\subset (\varepsilon_0,\ldots ,\varepsilon_{i+1})\subset\ldots$ does not have a maximal element. Observe that, as far as the points of the classical scheme are concerned, the presence of the $\varepsilon_i$’s is immaterial; hence, one has the required isomorphism ${{\mathbb Z}}[X](\vert\mathbb K\vert) \simeq {{\mathbb Z}}[X,\varepsilon_i]/(\varepsilon_i^2)(\mathbb K)$ for any field $\mathbb K$. Let $\widetilde{\underline\Sigma}$ be the geometric realization of the monoidal scheme $\underline\Sigma$. Following Connes-Consani’s definition [@CC p. 25], we shall say that $\underline\Sigma$ is [*torsion-free*]{} if, for any $x\in \widetilde{\underline\Sigma}$, the abelian group ${{\mathcal O}}_{\underline\Sigma, x}^\times$ is torsion-free. \[torsion-freemonoidalschemes\] A noetherian monoidal scheme $\underline\Sigma$ is torsion-free if and only if, for any $x\in \widetilde{\underline\Sigma}$ and any finite group $G$ with $\#G=n$, the number $\#{\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times, G)$ is polynomial in $n$. Since $\underline\Sigma$ is noetherian, the abelian group ${{\mathcal O}}_{\underline\Sigma, x}^\times$ is finitely generated by the remark above.
So, if $\underline\Sigma$ is also torsion-free, then ${{\mathcal O}}_{\underline\Sigma, x}^\times$ is free of rank ${N(x)}$, and, for any finite group $G$ with $\#G=n$, we have $\#{\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times, G)=n^{N(x)}$. For the converse, suppose there is a point $x$ such that ${{\mathcal O}}_{\underline\Sigma, x}^\times$ is not torsion-free. Being noetherian, ${{\mathcal O}}_{\underline\Sigma, x}^\times$ decomposes as a product ${{\mathbb Z}}^n\times\prod_{i\in\{ 1,\ldots m\}}{{\mathbb Z}}_{n_i}$. For each prime number $p_0$ not dividing any of the $n_1,\ldots, n_m$, say $p_0>\hbox{LCM}\,(n_1,\ldots, n_m)$, the number of elements of ${\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times,{{\mathbb Z}}_{p_0})$ is then $p_0^n$. Since there are infinitely many such prime numbers, were $\#{\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times,{{\mathbb Z}}_p)$ polynomial in $p$, it would be the polynomial $p^n$. Take now a prime number $p_1$ dividing $n_1$; in that case, the number of elements of ${\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times,{{\mathbb Z}}_{p_1})$ is greater than $p_1^n$. In conclusion, $\#{\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times,{{\mathbb Z}}_p)$ cannot be polynomial in $p$. By Lemma \[torsion-freemonoidalschemes\], for each noetherian and torsion-free monoidal scheme $\underline\Sigma$, one can define the polynomial $$\label{monoidalpolynomial} P(\underline\Sigma, n) = \sum_{x\in \widetilde{\underline\Sigma}} \#{\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times, {{{\mathbb F}_{1^n}}})\,.$$ The following result is proved in [@CC] (Theorem 4.10, (1) and (2)). Let $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}', \Phi)$ be an ${{{\mathbb F}_1}}$-scheme such that the monoidal scheme $\underline{\Sigma}$ is noetherian and torsion-free. Then 1. $\#\underline\Sigma({{{\mathbb F}_{1^n}}}) = P(\underline\Sigma, n)$; 2. 
for each finite field ${\mathbb F}_q$ the cardinality of the set of points of the scheme $\Sigma_{{\mathbb Z}}'$ that are rational over ${\mathbb F}_q$ is equal to $P(\underline\Sigma, q-1)$. Note that the last statement immediately follows from eq. \[countingpointsofF1schemes\], which holds true without any additional assumption on the monoidal scheme. For each ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ and each abelian group $G$ (written multiplicatively, with an absorbing element $0$ adjoined), we denote by $${\operatorname{Hom}}_{{{\mathsf B}}} ({{\mathcal O}}_{\Sigma, x}^\times, G)$$ the subset of ${\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times, G)$ given by the morphisms satisfying the relations encoded in the blueprint structure of $\Sigma$. Lemma \[torsion-freemonoidalschemes\] prompts us to introduce the following definition. \[definitionBschemenoethtorsionfree\] A ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ is said to be noetherian if the monoidal scheme $\underline\Sigma$ is noetherian. A noetherian ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ is said to be torsion-free if, for any point $x$ and any finite group $G$, the number $\#{\operatorname{Hom}}_{{{\mathsf B}}} ({{\mathcal O}}_{\Sigma, x}^\times, G)$ is polynomial in $\# G$. While in the case of a noetherian torsion-free monoidal scheme $\underline{\Sigma}$ the polynomial $\#{\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times, G)$ is always a monic monomial, this is not always the case for a noetherian torsion-free ${{\widetilde{{{\mathsf B}}}}}$-scheme. The next example illustrates this point.
Consider the affine ${{\mathsf B}}$-scheme $\Sigma$ given by the free monoid $M=\langle T_1,T_2,T_3,T_4\rangle$ generated by four elements with relations given by the natural projection $${{\mathbb Z}}[T_1,T_2,T_3,T_4]\to{{\mathbb Z}}[T_1,T_2,T_3,T_4]/(T_1-T_3+T_2-T_4)\,.$$ Let $G$ be a finite group (in multiplicative notation, with absorbing element $0$); we look for maps $f\colon M\to G$ together with compatible maps $$\xymatrix{ {{\mathbb Z}}[T_1,T_2,T_3,T_4] \ar[d] \ar[r] & G{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}\ar[d]^{\mathrm{id}} \\ {{\mathbb Z}}[T_1,T_2,T_3,T_4]/(T_1-T_3+T_2-T_4) \ar[r] & G{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}}$$ Since $G{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$ is free, to ensure the compatibility of $f$ with the relation $T_1+T_2=T_3+T_4$ one must have that either $f(T_1)=f(T_3)$ and $f(T_2)=f(T_4)$ or $f(T_1)=f(T_4)$ and $f(T_2)=f(T_3)$. There are therefore only three possible cases for the polynomial expressing the cardinality of ${\operatorname{Hom}}_{{{\mathsf B}}} ({{\mathcal O}}_{\Sigma, x}^\times, G)$: - $f(T_1)=f(T_2)= f(T_3)= f(T_4)=0$; in this case the polynomial is the constant polynomial $1$; - either $f(T_1)=0$ and $f(T_2)\neq 0$, or $f(T_1)\neq 0$ and $f(T_2)= 0$; each of these two cases splits into two further subcases according to the position of the nonzero value among $f(T_3), f(T_4)$, and each of the four resulting subcases contributes the polynomial $n$; - $f(T_1)\neq 0$ and $f(T_2) \neq 0$; in this case the polynomial is $2n^2-n$ (the term $2n^2$ accounts for the $n^2$ free nonzero choices of $f(T_1)$ and $f(T_2)$, counted twice since either $f(T_1)=f(T_3)$ or $f(T_1)=f(T_4)$; the term $-n$ corrects the double count occurring when $f(T_1)=f(T_2)$, where the two options coincide). Let $(\underline{\Sigma}, \Sigma_{{\mathbb Z}}, \Sigma_{{\mathbb Z}}', \Phi)$ be an ${{{\mathbb F}_1}}$-scheme with relations such that the underlying ${{\widetilde{{{\mathsf B}}}}}$-scheme $\Sigma$ is noetherian and torsion-free.
We define the polynomial $$Q(\underline\Sigma, n) = \sum_{x\in \widetilde{\underline\Sigma}}\# {\operatorname{Hom}}_{{{\mathsf B}}} ({{\mathcal O}}_{\Sigma, x}^\times, {{{\mathbb F}_{1^n}}})\,.$$ \[finalproposition\] Under the above hypotheses, one has the inequality $Q(\underline\Sigma, n) \leq P(\underline\Sigma, n)$. It is clear that ${\operatorname{Hom}}_{{{\mathsf B}}} ({{\mathcal O}}_{\Sigma, x}^\times, {{{\mathbb F}_{1^n}}}) \subset {\operatorname{Hom}}({{\mathcal O}}_{\underline\Sigma, x}^\times, {{{\mathbb F}_{1^n}}})$, since the first set contains only the monoid morphisms that are compatible with the blueprint structure locally defined around $x$. \[finalremark\] Recall that the aim of Lorscheid’s definition of blueprint is to increase the number of closed subschemes of a monoidal scheme. If we loosely refer to the features of the underlying topological space as the “shape” of the scheme, we could say that the category of ${{\mathsf B}}$-schemes (or that of ${{\widetilde{{{\mathsf B}}}}}$-schemes) adds “extra shapes” to Deitmar’s category of monoidal schemes. Consider now ${{{\mathbb F}_1}}$-schemes, and let us restrict our attention to the affine case. So, we just have a ring $R$, a monoid $M$, and a map $R\to M{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$. Since it is required, by definition, that points remain the same, the monoid is not enriched with “extra shapes”. However, if we think of the given map as a restriction map between the spaces of functions of the affine schemes $ M{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$ and $R$, we can interpret the datum of the ${{{\mathbb F}_1}}$-scheme as an enlargement of the space of functions of the affine monoidal scheme $M$. In conclusion, an ${{{\mathbb F}_1}}$-scheme with relations, according to Definition \[definitionF1schemewr\], allows us both to add “extra shapes" to the underlying monoidal scheme and to enlarge its space of functions.
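The case analysis in the four-generator example above can be cross-checked by brute force. The sketch below is ours: it models ${{{\mathbb F}_{1^n}}}$ as $\{0,1,\dots,n\}$ with $0$ absorbing (only cardinalities matter for the count) and compares, for small $n$, the number of compatible maps in each of the three cases with the polynomials $1$, $4n$ and $2n^2-n$:

```python
# Brute-force check of the three case polynomials for the relation T1+T2 = T3+T4.
# A map f on the generators is compatible iff the formal sums of the nonzero
# images agree in F_{1^n} ⊗ Z, i.e. the multisets of nonzero entries coincide.

from collections import Counter
from itertools import product

def count_compatible(n):
    G = range(n + 1)                       # 0 plus the n nonzero elements of F_{1^n}
    case_counts = Counter()
    for a, b, c, d in product(G, repeat=4):
        # formal sums in F_{1^n} ⊗ Z: multisets of the nonzero entries
        if Counter(v for v in (a, b) if v) != Counter(v for v in (c, d) if v):
            continue
        nonzero = (a != 0) + (b != 0)      # which of the three cases we are in
        case_counts[nonzero] += 1
    return case_counts

for n in (1, 2, 3, 4):
    cc = count_compatible(n)
    assert cc[0] == 1               # f(T1) = f(T2) = 0: constant polynomial 1
    assert cc[1] == 4 * n           # exactly one of f(T1), f(T2) zero: four cases, each n
    assert cc[2] == 2 * n**2 - n    # both nonzero: 2n^2 - n
```

Summing the three cases gives $1+4n+2n^2-n=(n+1)(2n+1)$ compatible maps in total, visibly not a monic monomial, in agreement with the point of the example.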
Fibered categories and stacks ============================= To give the reader a better understanding of Toën and Vaquié’s general construction presented in Section \[relativeschemes\], we briefly review some basic facts about fibered categories, pseudo-functors, and stacks, closely following the exposition in [@V] (to which the reader is referred for further details). Let ${{\mathsf C}}$ be any category. Roughly speaking, a stack is a sheaf of categories on ${{\mathsf C}}$ with respect to some Grothendieck topology (recall Definition \[topology\]). \[deffcartasianarrow\] Let $p_{{\mathsf F}}\colon{{\mathsf F}}\to{{\mathsf C}}$ be a functor. An arrow $\phi: \xi \rightarrow \eta$ of ${{\mathsf F}}$ is [*Cartesian*]{} with respect to $p_{{\mathsf F}}$ if, for any arrow $\psi\colon \zeta \rightarrow \eta$ in $\mathcal{F}$ and any arrow $h\colon p_{{{\mathsf F}}} \zeta \rightarrow p_{{{\mathsf F}}}\xi$ in ${{\mathsf C}}$ with $p_{{{\mathsf F}}}\phi \circ h = p_{{{\mathsf F}}}\psi$, there exists a unique arrow $\theta \colon\zeta\rightarrow \xi$ with $p_{{{\mathsf F}}}\theta = h$ and $\phi \circ \theta = \psi$, as in the following diagram: $$\xymatrix{\zeta \ar[rrr]^{\psi} \ar[rrd]_{\theta} \ar[dd] &&& \eta \ar[dd]\\ && \xi\ar[ru]_{\phi} \ar[dd]\\ p_{\mathcal{F}} \zeta\ar'[rr] [rrr] \ar[rrd]_{h} &&& p_{\mathcal{F}}\eta\\ && p_{\mathcal{F}}\xi \ar[ru] }$$ Whenever $\xi \rightarrow \eta $ is a Cartesian arrow of ${{\mathsf F}}$ mapping to an arrow $U\rightarrow V$ of ${{\mathsf C}}$, we shall also say that $\xi$ is a [*pullback of $\eta$ to $U$*]{}. 
\[deffiberedcategory\] A category ${{\mathsf F}}$ endowed with a functor $p_{{\mathsf F}}:{{\mathsf F}}\to{{\mathsf C}}$ is said to be fibered over ${{\mathsf C}}$ (with respect to $p_{{\mathsf F}}$) if, for any map $f\colon U\to V$ in ${{\mathsf C}}$ and any object $\eta$ in ${{\mathsf F}}$ such that $p_{{\mathsf F}}\eta=V$, there exists a Cartesian map $\phi\colon\xi\to\eta$ in ${{\mathsf F}}$ such that $p_{{\mathsf F}}\phi=f$. [@V Def. 3.9]\[cleavage\] Given a fibered category $p_{{\mathsf F}}\colon {{\mathsf F}}\to{{\mathsf C}}$ over ${{\mathsf C}}$, a cleavage is a class $K$ of Cartesian maps in ${{\mathsf F}}$ such that, for each map $f:X\to Y$ in ${{\mathsf C}}$ and each object $\xi$ in ${{\mathsf F}}$ over $Y$, there is exactly one map in $K$ over $f$ with codomain $\xi$; when a cleavage is fixed, this unique map will be denoted by $f^\ast_\xi$, or, by a slight abuse of notation, simply by $f^\ast$, if $\xi$ is clear from the context. Let $S$ be a set and $\mathrm{SET}$ the collection of small sets. The assignment of a map $f\colon S\to \mathrm{SET}$ is obviously equivalent to the assignment of the map $p_F\colon F\to S$, with $F=\coprod_{s\in S}f(s)$ and $p_F$ the natural projection. Notice that, for every $s\in S$, one can recover the set $f(s)$ as the fiber $p_F^{-1}\{ s\}$.\ If we regard a set as a discrete category, then the notion of fibered category introduced in Definition \[deffiberedcategory\] can be interpreted as a generalization of the construction above to the categorical framework.\ When dealing with a functor ${{\mathsf C}}^{\mathrm{op}}\to{\mathsf {CAT}}$, however, we have not only objects (namely, the categories which are images of objects of ${{\mathsf C}}$), but also maps between them (namely, the functors which are images of maps of ${{\mathsf C}}$). So, the fibers on objects of ${{\mathsf C}}$ with respect to the fibration $p_{{\mathsf F}}\colon {{\mathsf F}}\to{{\mathsf C}}$ have to be connected by maps.
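The set-theoretic construction above can be sketched directly; the encoding of the disjoint union by tagged pairs is our own choice:

```python
# From f: S -> SET build the total space F = disjoint union of the f(s),
# with the natural projection p_F; each f(s) is recovered as a fiber of p_F.

def total_space(f):
    """(F, p_F) for f: S -> SET, given as a dict {s: iterable of elements}."""
    F = {(s, x) for s, xs in f.items() for x in xs}   # tagged disjoint union
    p_F = lambda point: point[0]                      # natural projection F -> S
    return F, p_F

def fiber(F, p_F, s):
    """The fiber p_F^{-1}{s}, with the tags stripped off again."""
    return {x for (t, x) in F if p_F((t, x)) == s}

f = {'a': {1, 2}, 'b': {2}, 'c': set()}
F, p_F = total_space(f)
assert all(fiber(F, p_F, s) == set(xs) for s, xs in f.items())
```

Note that the tagging is what makes the union disjoint: the element `2` appears in both `f['a']` and `f['b']`, yet the two copies stay distinct in `F`.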
This idea is made precise by the notion of a cartesian arrow introduced in Definition \[deffcartasianarrow\]: the existence of a Cartesian arrow $\phi \colon \xi\to\eta$ amounts to saying that $\xi$ is the image of $\eta$ by the functor ${{\mathsf F}}_{p_{{\mathsf F}}\eta}\to{{\mathsf F}}_{p_{{\mathsf F}}\xi}$ which is the image of the map $p_{{\mathsf F}}\phi\colon p_{{\mathsf F}}\xi\to p_{{\mathsf F}}\eta$. Images of maps in ${{\mathsf C}}$ are defined likewise by imposing the Cartesian condition and using composition rules in ${{\mathsf F}}$. But there is one more issue to be considered: when we encode functor data in properties of the category ${{\mathsf F}}$ (with respect to $p_{{\mathsf F}}$), we have to bear in mind that categorical properties make sense only up to isomorphisms, and this is the reason why, in general, we may expect to recover the original functor only up to equivalences. This fact leads to the following definition. A [*pseudo-functor*]{} $\Pi\colon {{\mathsf C}}^\mathrm{op}\to\mathsf{Cat}$ consists of the following data: 1. for each object $U$ of ${{\mathsf C}}$, a category $\Pi U$; 2. for each arrow $f\colon U \rightarrow V$, a functor $f^{\ast}\colon \Pi V \rightarrow \Pi U$; 3. for each object $U$ of ${{\mathsf C}}$, an isomorphism $\epsilon _{U}\colon Id ^{\ast}_{U} \simeq id_{\Pi U}$ of functors $\Pi U \rightarrow \Pi U$; 4. for each pair of arrows $U \overset {f} \rightarrow V \overset{g} \rightarrow W$, an isomorphism $$\alpha_{f, g} : f^{\ast} g^{\ast} \simeq (gf)^{\ast}$$ of functors $\Pi W \rightarrow \Pi U$. These data are required to satisfy some natural compatibility conditions which we do not explicitly describe here[(see [@V p. 47])]{}. It can be proven that the assignment of a fibered category over a category ${{\mathsf C}}$ is equivalent, up to isomorphism, to the assignment of a pseudo-functor ${{\mathsf C}}^\mathrm{op}\to\mathsf{Cat}$.
For this reason, in what follows we will tend not to distinguish between a pseudo-functor and the associated fibered category and, given a fibered category $p_{{\mathsf F}}:{{\mathsf F}}\to{{\mathsf C}}$ and an object $X$ of ${{\mathsf C}}$, we will denote by ${{\mathsf F}}(X)$ the fiber over $X$. It can also be shown that any pseudo-functor can be strictified, that is, it admits an equivalent functor ([@V Th. 3.45]). Nonetheless, it can be convenient to work with pseudo-functors because many constructions naturally arising in algebraic geometry produce non-strict pseudo-functors. The point of view of fibered categories allows one to deal with pseudo-functors by remaining in the usual context of strict functors. Let us consider Toën and Vaquié’s construction, as summarised in Section \[relativeschemes\], in the particular case of classical schemes. In this case, the category of interest is ${{\mathsf{Ring}}}$, regarded as ${{\mathsf{Aff}}}_{{{\mathsf{Ring}}}}^{\mathrm{op}}$, and there is an assignment mapping each ring $A$ to the category $A{\,\text{-}\,}{{\mathsf{Mod}}}$ and each ring morphism $A\to B$ to the functor ${\,\text{-}\,}\otimes_AB\colon A{\,\text{-}\,}{{\mathsf{Mod}}}\to B{\,\text{-}\,}{{\mathsf{Mod}}}$. Given two consecutive morphisms $A\to B\to C$ and an object $M$ of $A{\,\text{-}\,}{{\mathsf{Mod}}}$, the objects $C\otimes_B(B\otimes_AM)$ and $C\otimes_AM$ are not equal, but only isomorphic. The previous construction provides therefore a naturally defined pseudo-functor $\Pi\colon {{\mathsf{Aff}}}_{{{\mathsf{Ring}}}}^{\mathrm{op}} \to \mathsf{Cat}$.\ The pseudo-functor $\Pi$ can be associated to the fibered category over ${{\mathsf{Ring}}}^\mathrm{op}$ defined (up to equivalence) in the following way.
Let ${{\mathsf{Mod}}}$ be the category whose objects are pairs $(A,M)$ with $A$ a ring and $M$ an $A$-module and whose morphisms are pairs of the form $(f, \lambda)\colon (A, M) \to ( B,N)$, where $f\colon B\to A$ is a ring morphism and $\lambda\colon A\otimes_BN\to M$ is a morphism of $A$-modules. Then the natural projection ${{\mathsf{Mod}}}\to{{\mathsf{Ring}}}^\mathrm{op}$ is a fibration corresponding to $\Pi$. For each map $f\colon {\operatorname{Spec}}A\to{\operatorname{Spec}}B$ and for each $B$-module $M$, a natural choice of cartesian lifting is given by $(f\colon B\to A,\ \mathrm{id}\colon A\otimes_B M\to A\otimes_B M)$. \[exampleYoneda\]Let ${{\mathsf C}}$ be a category closed under fibered products and denote by $\operatorname{Arr}{{\mathsf C}}$ the category of arrows in ${{\mathsf C}}$. Let $p_{\operatorname{Arr}{{\mathsf C}}}\colon \operatorname{Arr}{{\mathsf C}}\rightarrow {{\mathsf C}}$ be the functor mapping each arrow $X\rightarrow Y$ to its codomain $Y$ and acting in the obvious way on morphisms in $\operatorname{Arr}{{\mathsf C}}$. Then $\operatorname{Arr}{{\mathsf C}}$ is a fibered category, and its associated pseudo-functor maps an object $X$ to the category ${{\mathsf C}}_{/X}$ and a morphism $X\to Y$ to the pullback functor ${\,\text{-}\,}\times_YX\colon {{\mathsf C}}_{/Y}\to{{\mathsf C}}_{/X}$. Similarly, the functor $(p_{\mathrm{Arr}{{\mathsf C}}})^\mathrm{op}\colon (\operatorname{Arr}{{\mathsf C}})^\mathrm{op}\to{{\mathsf C}}^\mathrm{op}$ is a fibration (independently of the existence of pullbacks), corresponding to the covariant functor ${{\mathsf C}}\to\mathrm{Cat}$ acting on objects as above and sending a map $f$ to the composition functor $f\circ{\,\text{-}\,}$.\ For each object $X$ of ${{\mathsf C}}$, the category ${{\mathsf C}}_{/X}$ is naturally fibered over ${{\mathsf C}}$ through the projection ${{\mathsf C}}_{/X}\to{{\mathsf C}}$ given by the domain functor.
This fibration is associated, by identifying a set with the corresponding discrete category, to the functor ${{\mathsf C}}({\,\text{-}\,},X)$, that is, the image of $X$ by the Yoneda embedding. It is thus not surprising that the pseudo-functor $(p_{\mathrm{Arr}{{\mathsf C}}})^\mathrm{op}$ can be proven to induce an embedding of ${{\mathsf C}}$ in the 2-category of categories fibered over ${{\mathsf C}}$.\ Using this embedding, the Yoneda lemma can be generalized to categories in the following way. Let us recall that the classical Yoneda lemma states that, for every presheaf $F\colon {{\mathsf C}}^\mathrm{op}\to{{\mathsf{Set}}}$ and every object $X$ of ${{\mathsf C}}$, there is a natural isomorphism ${\operatorname{Hom}}({{\mathsf C}}({\,\text{-}\,},X),F)\cong F(X)$, which is obtained by sending a map of presheaves to the image of $1_X$. It can be shown that, for every fibered category $p_{{\mathsf F}}\colon {{\mathsf F}}\to{{\mathsf C}}$ and every object $X$ of ${{\mathsf C}}$, there is an equivalence of categories ${\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/X},{{\mathsf F}})\simeq {{\mathsf F}}(X)$. For a proof, see [@V 3.6.2]: we just point out that, analogously to the classical case, the equivalence ${\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/X},{{\mathsf F}})\to {{\mathsf F}}(X)$ is defined on objects by mapping a functor to the image of $1_X$, regarded now as an object of ${{\mathsf C}}_{/X}$ (hence, in the discrete case, this equivalence essentially gives back the Yoneda isomorphism). Example \[exampleYoneda\] should make clear that there is a rather strict analogy between the theory of presheaves in sets and the theory of presheaves in categories. We now see how the theory of sheaves extends to the case of categories.\ Let ${{\mathsf C}}$ be a category endowed with a Grothendieck topology.
Recall from Definition \[sheaf\] that a covering $\mathcal{U}=\{ U_i\to U\}_{i\in I}$ is associated to the subpresheaf $h_\mathcal{U}$ of $h_U={{\mathsf C}}({\,\text{-}\,},U)$ given by the maps that factorise through some element of $\mathcal{U}$. The inclusion map induces a restriction map ${\operatorname{Hom}}(h_U,F)\to{\operatorname{Hom}}(h_\mathcal{U},F)$ for each presheaf $F$, and $F$ is said to be i) separated if this map is injective for each covering $\mathcal{U}$; ii) a sheaf if it is bijective for each covering $\mathcal{U}$. In passing from sets to categories, it is natural to replace $h_U$ with ${{\mathsf C}}_{/U}$ (see Example \[exampleYoneda\]) and, accordingly, $h_\mathcal{U}$ with the full subcategory ${{\mathsf C}}_{/\mathcal{U}}$ of ${{\mathsf C}}_{/U}$ whose objects are the maps that factorise through some element of $\mathcal{U}$, “monomorphism” with “embedding” (that is, “fully faithful functor”) and “bijection” with “equivalence”. So, by regarding ${{\mathsf C}}_\mathcal{/U}$ as fibered over ${{\mathsf C}}$ by the composite map ${{\mathsf C}}_{/\mathcal{U}}\hookrightarrow{{\mathsf C}}_{/U}\to{{\mathsf C}}$, we have the following definition. Given a site ${{\mathsf C}}$, a fibered category $p_{{\mathsf F}}:{{\mathsf F}}\to{{\mathsf C}}$ is said to be 1. a prestack if, for any object $U$ of ${{\mathsf C}}$ and covering $\mathcal{U}$ of $U$, the restriction functor ${\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/U},{{\mathsf F}})\to{\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/\mathcal{U}},{{\mathsf F}})$ is an embedding; 2. a stack if, for any object $U$ of ${{\mathsf C}}$ and covering $\mathcal{U}$ of $U$, the restriction functor ${\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/U},{{\mathsf F}})\to{\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/\mathcal{U}},{{\mathsf F}})$ is an equivalence. 
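A classical instance of this definition, not spelled out in the text but underlying the first example above, is Grothendieck's faithfully flat descent for modules; in the notation of that example (and up to the usual variance conventions), it can be summarized as follows.

```latex
% Covering {Spec A -> Spec B}, with f : B -> A a faithfully flat ring map.
% A descent datum is an A-module M together with an isomorphism
\[
  \phi\colon M\otimes_B A \xrightarrow{\ \sim\ } A\otimes_B M
  \qquad\text{of } (A\otimes_B A)\text{-modules},
\]
% subject to the cocycle condition over the triple product,
\[
  \phi_{13}=\phi_{12}\circ\phi_{23}
  \qquad\text{over } A\otimes_B A\otimes_B A .
\]
% Grothendieck's theorem asserts that the comparison functor
\[
  B\text{-}{{\mathsf{Mod}}}\longrightarrow \{\text{descent data }(M,\phi)\},
  \qquad N\longmapsto \bigl(A\otimes_B N,\ \phi_{\mathrm{can}}\bigr),
\]
% is an equivalence; i.e. the fibration Mod -> Ring^op of the first
% example is a stack for the fpqc topology.
```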
As already observed, the category ${\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/U},{{\mathsf F}})$ is equivalent to ${{\mathsf F}}(U)$. The category ${\operatorname{Hom}}_{{\mathsf C}}({{\mathsf C}}_{/\mathcal{U}},{{\mathsf F}})$ also admits an explicit description, in terms of descent data, that can be thought of as glueing data up to isomorphism, and that we now describe. Given a site ${{\mathsf C}}$ and a covering $\mathcal{U}=\{ U_i\to U\}_{i\in I}$, we shall write $U_{i_1,\ldots i_n}$ as a shorthand for $U_{i_1}\times_U\ldots\times_UU_{i_n}$. Notice that, whenever $\{ i_{j_1},\ldots i_{j_k}\}\subset\{ i_1,\ldots i_n\}$, there is a natural projection map $p_{i_{j_1},\ldots i_{j_k}}:U_{i_1,\ldots i_n}\to U_{i_{j_1},\ldots i_{j_k}}$. [@V Def. 4.2] Let ${{\mathsf C}}$ be a site, ${{\mathsf F}}$ a category fibered over ${{\mathsf C}}$, and $\mathcal{U}= \{ U_{i} \rightarrow U\}$ a covering in ${{\mathsf C}}$. Suppose given a cleavage $K$ (see Definition \[cleavage\]). An [*object with descent data*]{} ($\{ \xi _{i} \}, \{\phi_{ij}\}$) on $\mathcal{U}$ is a collection of objects $\xi_{i} \in \mathcal{F}(U_{i})$, together with isomorphisms $\phi_{ij}: pr^{\ast}_{1}\xi_{i} \simeq pr^{\ast}_{2}\xi_{j} $ in $\mathcal{F}(U_{i}\times_{U} U_{j})$ such that the following cocycle condition is satisfied: $$pr^{\ast}_{13}\phi_{ik} = pr^{\ast}_{12}\phi_{ij}\circ pr^{\ast}_{23}\phi_{jk} : pr^{\ast}_{3}\xi_{k}\rightarrow pr^{\ast}_{1}\xi _{i}\quad \text{\rm for any triple of indices $i, j, k$}\,.$$ The isomorphisms $\phi_{ij}$ are called [*transition isomorphisms*]{} of the object with descent data.\ An arrow between objects with descent data $$\{\alpha_{i}\}: (\{\xi_{i}\}, \{\phi_{ij}\})\rightarrow (\{ \eta_{i}\}, \{\psi_{ij}\})$$ is a collection of arrows $\alpha_{i} : \xi_{i} \rightarrow \eta_{i} $ in $\mathcal{F}(U_{i})$ with the property that for each pair of indices $i, j$ the diagram $$\label{diagraminternalHom} \xymatrix{ pr^{\ast}_{2} \xi_{j} \ar[r]^{pr^{\ast}_{2}\alpha_{j}}\ar[d]^
{\phi_{ij}} & pr ^{\ast} _{2}\eta_{j} \ar[d]^{\psi_{ij}} \\ pr^{\ast}_{1} \xi_{i}\ar[r] _{pr^{\ast}_{1}\alpha_{i}}& pr^{\ast}_{1} \eta_{i} }$$ commutes. [@V Prop. 4.5] Given a site ${{\mathsf C}}$, a category ${{\mathsf F}}$ fibered over ${{\mathsf C}}$, and a cover $\mathcal{U}$ in ${{\mathsf C}}$, objects with descent data on $\mathcal{U}$ with arrows between them form a category, which is equivalent to ${\operatorname{Hom}}({{\mathsf C}}_{/\mathcal{U}},F)$. [10]{} , [*Schemes over ${{{\mathbb F}_1}}$ and zeta functions*]{}, Compositio Mathematica, [**146**]{} (2010), 1383-1415. , [*Schemes over ${{{\mathbb F}_1}}$*]{}, in [*Number Fields and Function Fields —Two Parallel Worlds*]{}, G. van der Geer, B. Moonen, R. Schoof eds., “Progress in Mathematics” [**239**]{}, Birkhäuser, Boston 2005, pp. 87–100. ——, [*Remarks on zeta functions and K-theory over ${{{\mathbb F}_1}}$*]{}, Proceedings of the Japan Academy, Ser. A, Mathematical Sciences, [**82**]{} (2006), 141–146. ——, [*${{{\mathbb F}_1}}$-schemes and toric varieties*]{}, Beiträge zur Algebra und Geometrie, [**49**]{} (2008), 517–525. , [*On the $\Gamma$-factors attached to motives*]{}, Inventiones Mathematicae, [**104**]{} (1991), 245–261. ——, [*Local $L$-factors of motives and regularized determinants*]{}, Inventiones Mathematicae, [**107**]{} (1992), 135–150. ——, [*Motivic $L$-functions and regularized determinants*]{}, in [*Motives (AMS-IMS-SIAM Joint Summer Research Conference on Motives, Seattle 1991)*]{}, “Proceedings of Symposia in Pure Mathematics” [**55**]{}, American Mathematical Society, Providence (RI) 1994, pp. 707–743. , [*Commutative Semigroup Rings*]{}, The University of Chicago Press, Chicago 1980. , [*Topos annelés et schémas relatifs*]{}, “Ergebnisse der Mathematik und ihrer Grenzgebiete” [**64**]{}, Springer-Verlag, Berlin–New York 1972. , [*Sketches of an Elephant. A Topos Theory Compendium*]{}, vol. $1$, Clarendon Press, Oxford 2002. 
, [*Cohomology determinants and reciprocity laws: number field case*]{}, unpublished typescript. , [*Multiple zeta functions: an example*]{}, in [*Zeta Functions in Geometry*]{}, N. Kurokawa & T. Sunada eds., “Advanced Studies in Pure Mathematics” [**21**]{}, Mathematical Society of Japan, Tokyo 1992, pp. 219–226. , [*Absolute Geometry*]{}, neverendingbooks.org, Universiteit Antwerpen 2011 (`http://macos.ua.ac.be/ lebruyn/LeBruyn2011c.pdf`). , [*The geometry of blueprints. Part I: Algebraic background and scheme theory*]{}, Advances in Mathematics, [**229**]{} (2012), 1804–1846. ——, [*A blueprinted view of ${{{\mathbb F}_1}}$-geometry*]{}, in [*Absolute Arithmetic and ${{{\mathbb F}_1}}$-Geometry*]{}, K. Thas ed., European Mathematical Society, Zürich 2016, pp. 161–219. ——, [*Blue schemes, semiring schemes, and relative schemes after Toën and Vaquié*]{}, Journal of Algebra, [**482**]{} (2017), 264–302. ——, [*${\mathbb F}_1$ for everyone*]{}, Jahresbericht der Deutschen Mathematiker-Vereinigung, [**120**]{} (2018), 83–116. , [*Lectures on zeta functions and motives (according to Deninger and Kurokawa)*]{}, Astérisque, [**228**]{} (1995), 121–163. ——, [*Cyclotomy and analytic geometry*]{}, in [*Quanta of Maths*]{} (Conference in honor of Alain Connes, Paris, March 29–April 6, 2007), É. Blanchard [*et al.*]{} eds., “Clay Mathematics Proceedings” [**11**]{}, American Mathematical Society, Providence (RI) 2010, pp. 385–408. , [*On the field with one element*]{} (exposé à l’Arbeitstagung, Bonn, June 1999), preprint IHES/M/99/55. ——, [*Les variétés sur le corps à un élément*]{}, Moscow Mathematical Journal, [**4**]{} (2004), 217–244. , [*Sur les analogues algébriques des groupes semi-simples complexes*]{} \[1956\], in [*[Œ]{}uvres/Collected Works*]{}, vol. I, F. Buekenhout [*et al.*]{} eds., European Mathematical Society, Zürich 2013, pp. 615–643. , [*Au-dessous de $\operatorname{Spec} {{\mathbb Z}}$*]{}, Journal of $K$-Theory, [**3**]{} (2009), 437–500. 
, [*Deitmar’s versus Toën-Vaquié’s schemes over ${{{\mathbb F}_1}}$*]{}, Mathematische Zeitschrift, [**271**]{} (2012), 911–926. , [*Grothendieck Topologies, Fibered Categories and Descent Theory*]{}, in B. Fantechi, L. Göttsche, L. Illusie, S. L. Kleiman, N. Nitsure, A. Vistoli, [*Fundamental Algebraic Geometry — Grothendieck’s [*FGA*]{} Explained*]{}, “Mathematical Surveys and Monographs” [**123**]{}, AMS, 2005, pp. 1–137. [^1]: \ C. Bartocci was partially supported by [prin]{} “Geometria delle varietà algebriche”, by [gnsaga-in]{}d[am]{} and by the University of Genova through the research grant “Aspetti matematici nello studio delle interazioni fondamentali”. [^2]: For a more detailed and exhaustive account of the development of ${{{\mathbb F}_1}}$-geometry we refer to [@LeBruyn11] and [@Lor15]. [^3]: A Deitmar ${{{\mathbb F}_1}}$-scheme $X$ is said to be of finite type if it has a finite covering by affine schemes $U_i = {\operatorname{Spec}}M_i$ such that each $M_i$ is a finitely generated monoid. Deitmar proved in [@Dei06] that an ${{{\mathbb F}_1}}$-scheme $X$ is of finite type if and only if $X_{{\mathbb Z}}$ is a ${{\mathbb Z}}$-scheme of finite type. [^4]: This overview is complemented by Appendix A, where we review some basic facts about fibered categories, pseudo-functors, and stacks. [^5]: In [@CC] the functor ${\,\text{-}\,}{\otimes_{{{{\mathbb F}_1}}}{{\mathbb Z}}}$ is denoted by $\beta$ and its right adjoint $|{\,\text{-}\,}|$ by $\beta^\ast$.
--- abstract: 'Starting from the $\pi$-electron Pariser-Parr-Pople (PPP) Hamiltonian which includes both strong electron-phonon and electron-electron interactions, we propose some strongly correlated wave functions of increasing quality for the ground state of conjugated polymers. These wave functions are built by combining different finite sets of local configurations extended at most over two nearest-neighbour monomers. Within this picture, the doped case with one additional particle is expressed in terms of quasi-particles. The polaron formation problem thus reduces to the study of a Holstein-like model.' address: | $^{\dag}$Max-Planck-Institut für Physik Komplexer Systeme, Nöthnitzer Straße 38, D-01187 Dresden\ $^{*}$Groupe de Physique des Solides, 2 place Jussieu, 75251 Paris cedex 05 author: - 'Stéphane Pleutin$^{\dag *}$ and Jean-Louis Fave$^{*}$' title: 'Molecular crystal approach for $\pi$-conjugated polymers: from PPP Hamiltonian to Holstein model for polaron states' --- Introduction ============ The nature of the first excited states of conjugated polymers is an important and still unsolved question in condensed matter science [@excitations]. Whether they are band-to-band excitations or exciton states, and whether polarons, bipolarons or solitons are stable quasiparticles in the doped case, are fundamental issues for the understanding of the electronic properties of these compounds. The low-lying excited states are supposed to be suitably described by the well-known $\pi$-electron Pariser-Parr-Pople (PPP) Hamiltonian. This model Hamiltonian takes into account both strong electron-phonon and electron-electron interaction terms, and admits exact numerical solutions only for the smallest oligomers[@chandross]. For the thermodynamic limit, the situation remains unclear, since calculations for the ground state and the excited states, including electron correlations, are difficult to carry out and some drastic approximations are needed [@revue].
However, a first qualitative understanding of this complicated physics can be gained by using simplified Hamiltonians. For instance the Rice-Gartstein molecular exciton model[@rice], similar to previous works [@excitonic], is useful for a qualitative description of the linear absorption of conjugated polymers. On the other hand, the molecular Holstein model gives a simplified picture of the polaron states[@holstein; @bussac]. Recently, an approximate scheme to build the ground and the first excited states has been proposed[@pleutin]. With this method, starting from the PPP Hamiltonian, one reaches a Rice-Gartstein-like model; the excitations relevant for linear absorption are then easy to obtain and the results are comparable with those from more tedious methods [@yu]. In this paper, we will show that the same procedure permits one to derive formally, from the very same PPP Hamiltonian, the simple molecular Holstein model for the polaron state in conjugated polymers. Polarons are thought to be important for the understanding of charge transport in these compounds, and the possibility of studying these non-linear states at a correlated level within a simple formalism is both needed and valuable. We choose a simple dimerized linear chain as an effective model for conjugated polymers; this chain is characterized by $r_{d}$ and $r_{s}$, the double and single bond lengths, respectively. Extending our method to a realistic geometry is straightforward, but the essential physics is already captured by this simplified picture.
Let us now briefly introduce the Pariser-Parr-Pople Hamiltonian which is our starting point $$\label{PPP} H_{PPP}=-\sum_{n,\sigma}t_{n,n+1}(c^{\dag}_{n,\sigma}c_{n+1,\sigma}+ c^{\dag}_{n+1,\sigma}c_{n,\sigma})+ \frac{1}{2}\sum_{n,m,\sigma,\sigma'}V_{n,m} (c^{\dag}_{n,\sigma}c_{n,\sigma}-\frac{1}{2}) (c^{\dag}_{m,\sigma'}c_{m,\sigma'}-\frac{1}{2})$$ where $c^{\dag}_{n,\sigma}$, ($c_{n,\sigma}$) is the creation (destruction) operator of an electron on site [*n*]{} with spin $\sigma$; $t_{n,n+1}$ is the hopping term which includes, via the electron-phonon interaction, a linear dependence upon the length of the bond ([*n*]{},[*n+1*]{})[@revue; @tavan]. In the case of a dimerized linear chain, this dependence gives two distinct hopping integrals $t_{d}$ and $t_{s}$ for the double and the single bonds respectively ($\mid t_{d} \mid > \mid t_{s} \mid$); they can be written as $t_{d/s}=t_{0}(1\pm \frac{\alpha}{2t_{0}}\delta)$ where $t_{0}$ is the hopping integral without dimerization, $\alpha$ is the electron-phonon interaction and $\delta$ is a measure of the dimerization giving the difference of the lengths of single and double bonds[@tavan]. The Coulomb term is parametrized following Ohno, where the effect of the $\sigma$ electrons is taken into account via a phenomenological screening, $V_{n,m}=\frac{U}{\sqrt{1+0.6117r^{2}_{n,m}}}$ where $r_{n,m}$ is the distance (in $\AA$) between two electrons localized on sites [*n*]{} and [*m*]{} [@ohno]. We also write this term as $V(r_{n,m})\equiv V_{n,m}$ and $V=V(r_{d})$ for convenience. In order to connect the PPP Hamiltonian with the molecular crystal models, the Rice-Gartstein and Holstein models, we choose the monomer self-consistent orbitals as basis functions - this is the so-called exciton-basis [@chandross]. This choice is of course guided by the dimerization.
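The parametrization just described is easy to tabulate. The sketch below evaluates the dimerized hoppings and the Ohno potential; the numerical values of $t_0$, $x$, $U$ and the bond lengths are illustrative assumptions, not quantities quoted in the text.

```python
import numpy as np

# Illustrative parameters (assumed; energies in eV, lengths in Angstrom).
t0 = 2.4        # hopping integral without dimerization
x = 0.15        # dimerization parameter x = (alpha/2t0)*delta
U = 11.13       # on-site Ohno parameter

def ohno(r, U=U):
    """Ohno potential V(r) = U / sqrt(1 + 0.6117 r^2), r in Angstrom."""
    return U / np.sqrt(1.0 + 0.6117 * r**2)

# Dimerized hopping integrals t_{d/s} = t0 (1 +/- x).
t_d, t_s = t0 * (1 + x), t0 * (1 - x)

r_d, r_s = 1.35, 1.46   # assumed double/single bond lengths
V = ohno(r_d)           # nearest-neighbour term V = V(r_d)

print(f"t_d = {t_d:.3f} eV, t_s = {t_s:.3f} eV")
print(f"V(r_d) = {V:.3f} eV, V(r_s) = {ohno(r_s):.3f} eV")
```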
In our case, the monomers are the double-bonds and their self-consistent orbitals are associated with the following creation (destruction) operators for the bonding and anti-bonding orbitals: $B^{(\dag)}_{n,\sigma}=\frac{1}{\sqrt{2}}(c^{(\dag)}_{2n,\sigma}+ c^{(\dag)}_{2n+1,\sigma})$ and $A^{(\dag)}_{n,\sigma}=\frac{1}{\sqrt{2}} (c^{(\dag)}_{2n,\sigma}-c^{(\dag)}_{2n+1,\sigma})$; here $n$ indexes the double bonds. With this specific choice of local basis operators, the electronic configurations are built by combining different kinds of local configurations (LC)[@chandross; @these]. In order to get a tractable model, we truncate the Hilbert space by choosing a small set of different LC which will be the elementary building blocks for the electronic configurations [@these]. These LC are the so-called generative local configurations (GLC) of [@pleutin]. We may notice that this method shows some similarities with the Valence Bond method used efficiently for the study of oligomers [@soos], but with the important difference that atomic sites are replaced by monomer units with internal electronic structure (double bonds here). The configurations built from GLC are diagonal with respect to the hopping term $t_{d}$, in contrast to the Valence Bond configurations which are diagonal with respect to the Coulomb term. Each GLC is a set of several Valence Bond diagrams, chosen to be the adequate ones for a reasonable description of polymer states. In this work, we first improve the proposed ground state of ref. [@pleutin] by enlarging the set of electronic configurations used to describe it (section II). Second, we consider the case with an extra electron on the chain and show that, if one allows small lattice distortions around the extra particle, our treatment leads quite naturally to a Holstein-like model, expressed now in terms of many-body particle states (section III).
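As a quick check of this change of basis, the single-particle hopping block of one double bond is diagonalized by the $B/A$ combinations, the bonding level lying at $-t_d$ and the antibonding one at $+t_d$ (so that promoting both electrons costs $4t_d$, the energy $\epsilon_d$ used in the next section). A minimal sketch with an illustrative value of $t_d$:

```python
import numpy as np

t_d = 2.76  # illustrative double-bond hopping (eV)

# Single-particle hopping matrix of one double bond in the site basis (2n, 2n+1).
h = np.array([[0.0, -t_d],
              [-t_d, 0.0]])

# Rows of W are the bonding (B) and antibonding (A) combinations.
W = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

h_BA = W @ h @ W.T
print(np.round(h_BA, 12))               # diag(-t_d, +t_d)

# Cost of promoting both electrons from B to A.
eps_d = 2 * (h_BA[1, 1] - h_BA[0, 0])
print(eps_d, 4 * t_d)                   # both equal 4 t_d
```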
The ground state ================ We keep as GLC for the ground state the LC which appear most relevant in calculations performed on small oligomers[@chandross]. In ref [@pleutin], only three LC were considered; they are named F-LC, D-LC and Ct$^{-}_{1}$-LC and are schematically represented in figure (\[LCGS\].a). This approximation may appear rather crude, but it is sufficient to get a correct qualitative picture of the linear absorption spectra, as was shown in [@pleutin]; moreover, even at this level of approximation, the results are quantitatively comparable with the results of more tedious calculations [@yu]. In this work, we propose some natural improvements to this first approximation by extending the set of GLC. In a first improvement, we add to the previous set of GLC the so-called Triplet-Triplet LC, TT-LC, shown in figure (\[LCGS\].b), where two nearest-neighbour (n.n.) localized triplets are combined into a singlet. Together with the three first LC, they are the major constituents of the ground state wave function in small cluster calculations[@chandross]. In a second improvement, we enlarge again the set of GLC by including in it the LC which interact directly with the four previously selected ones (figure\[LCGS\].c). In the following, only the first case is treated explicitly. We develop in full detail our proposed way to get the ground state wave function with the four selected GLC. The case with the complete set of LC represented in figure (\[LCGS\]) can be treated following the same scheme; only the final results are then given. First, we introduce the four GLC, their associated creation operators and their energies.
- The F-LC is associated with the creation operator $$F^{\dag}_{n}=B^{\dag}_{n,\uparrow}B^{\dag}_{n,\downarrow}$$This defines the lowest LC in the parameter range of interest; therefore we choose as reference state $$\mid 0>= \prod_{n}F^{\dag}_{n}\mid Vacuum>$$ where $\mid Vacuum>$ denotes the state without any $\pi$ electron. The state $\mid 0>$ is the ground state considered in the molecular crystal approaches [@rice; @excitonic]; there, the linear dimerized chain is simply identified with a one-dimensional crystal of ethylene without any electronic correlations. With respect to $\mid 0>$, $F^{\dag}_{n}=I^{\dag}_{n}$, which is simply the identity operator. In the following, all the creation operators and the energies are defined with respect to $\mid 0>$. - The D-LC is associated with the creation operator $$D^{\dag}_{n}=A^{\dag}_{n,\uparrow}A^{\dag}_{n,\downarrow} B_{n,\uparrow}B_{n,\downarrow}$$and with energy given by $\epsilon_{d}=4t_{d}$. The F and D-LC describe the dynamics of the $\pi$-electrons coupled by pairs within each monomer: the two electrons are independent in F-LC, whereas D-LC introduces intramonomer electronic correlation. In the strong dimerization limit, these two LC are sufficient to give a good approximation of the ground state; the system is then very close to a true molecular crystal. For small or intermediate dimerization, it is however necessary to consider more extended LC or, in other words, some fluctuations around the molecular crystal limit. This is done by introducing two more LC extended over two n.n. monomers.
- The Ct$^{-}_{1}$-LC is associated with the creation operator $$\label{ctLC} Ct^{\dag}_{n}=\frac{1}{2}(A^{\dag}_{n+1,\uparrow} B_{n,\uparrow}+A^{\dag}_{n+1,\downarrow}B_{n,\downarrow}- A^{\dag}_{n,\uparrow}B_{n+1,\uparrow}-A^{\dag}_{n,\downarrow}B_{n+1,\downarrow})$$and with energy given, in the case of a linear dimerized chain, by $\epsilon_{ct}=2t_{d}+V-\frac{1}{4}(V(r_{s})+2V(r_{s}+r_{d})+V(2r_{d}+r_{s}))$. The last term, in brackets, is the attractive interaction between the electron and the hole due to the long-range part of the Ohno potential. The Ct$^{-}_{1}$-LC introduces n.n. intermonomer charge fluctuations, reproducing the conjugation phenomenon in a minimal way. - Last, the TT-LC is associated with the creation operator $$\begin{array}{c} TT^{\dag}_{n}=\frac{1}{\sqrt{3}}(A^{\dag}_{n,\uparrow}B_{n,\downarrow}A^{\dag}_{n+1,\downarrow}B_{n+1,\uparrow}+A^{\dag}_{n,\downarrow}B_{n,\uparrow}A^{\dag}_{n+1,\uparrow}B_{n+1,\downarrow}+ \frac{1}{2}(A^{\dag}_{n,\uparrow}B_{n,\uparrow}A^{\dag}_{n+1,\uparrow}B_{n+1,\uparrow}+\\A^{\dag}_{n,\uparrow}B_{n,\uparrow}A^{\dag}_{n+1,\downarrow}B_{n+1,\downarrow}+A^{\dag}_{n,\downarrow}B_{n,\downarrow}A^{\dag}_{n+1,\uparrow}B_{n+1,\uparrow}+A^{\dag}_{n,\downarrow}B_{n,\downarrow}A^{\dag}_{n+1,\downarrow}B_{n+1,\downarrow})) \end{array}$$and with energy given by $\epsilon_{tt}=4t_{d}-(U-V)$. In this LC, two triplets appearing on n.n. monomers are combined into a singlet (figure(\[LCGS\].b)). Its importance was first shown in the work of Schulten and Karplus [@schulten], where it was recognized as a major constituent of one of the low-lying excitations, the famous optically forbidden $2A_{g}^{-}$ state. In the ground state, which is our interest here, the importance of this LC can be comparable to that of the D-LC [@chandross]. We may notice that a similar treatment of the PPP Hamiltonian was proposed a few years ago to study the spin-charge separation mechanism in the limit of strong dimerization [@mukho].
With our choice of four GLC, all possible electronic configurations are then built up. They are characterized by the number of D, Ct$^{-}_{1}$ and TT-LC, $n_{d}$, $n_{ct}$ and $n_{tt}$ respectively, and by the positions of these different GLC. The positions of the D, Ct$^{-}_{1}$ and TT-LC are labelled by the coordinates $z(k)$ ($k=1,..,n_{d}$), $y(j)$ ($j=1,..,n_{ct}$) and $x(i)$ ($i=1,..,n_{tt}$) respectively. The necessary non-overlapping condition between LC is supposed to be fulfilled throughout the paper - the LC behave as hard core bosons. The electronic configurations are then expressed as $$\label{espacemodel} \mid x(1),...,x(n_{tt}),y(1),...,y(n_{ct}),z(1),...,z(n_{d})>=\prod_{i=1}^{n_{tt}}\prod_{j=1}^{n_{ct}}\prod_{k=1}^{n_{d}}TT^{\dag}_{x(i)}Ct^{\dag}_{y(j)}D^{\dag}_{z(k)}\mid 0>$$ The GLC are all neutral local configurations, therefore the energy of (\[espacemodel\]) is independent of the relative positions of the LC and entirely determined by the number of each GLC. $$\label{energiemodel} E(n_{tt},n_{ct},n_{d})=n_{tt}\epsilon_{tt}+n_{ct}\epsilon_{ct}+n_{d}\epsilon_{d}$$ At this point, we have to mention an incorrect statement in [@pleutin], where it is claimed that the energy of the configurations made of F, D and Ct$^{-}_{1}$-LC depends on the relative positions of the Ct$^{-}_{1}$-LC. This statement is actually wrong; however, the simplification goes in favor of our treatment (indeed, calculations were not feasible under that assumption, and the energy (\[energiemodel\]) was finally adopted in [@pleutin] as well). The way we choose to diagonalize the PPP Hamiltonian in the reduced Hilbert space spanned by the electronic configurations (\[espacemodel\]) follows [@pleutin]. First, we reorganize the configurations (\[espacemodel\]). We make linear combinations of the states with $n_{d}$ D-LC, $n_{tt}$ TT-LC localized at sites $x(1),...,x(n_{tt})$ and $n_{ct}$ Ct$^{-}_{1}$-LC localized at sites $y(1),...,y(n_{ct})$.
Since we are ultimately interested only in the lowest-energy state (the ground state), we can consider only the linear combinations of highest symmetry $$\label{Excocorr} \mid x(1),...,x(n_{tt}),y(1),...,y(n_{ct}),n_{d}>=\frac{1}{\sqrt{C_{n_{d}}^{N-2(n_{tt}+n_{ct})}}}\sum_{\{z(k)\}}\prod_{k=1}^{n_{d}}D^{\dag}_{z(k)}\prod_{i=1}^{n_{tt}}\prod_{j=1}^{n_{ct}}TT^{\dag}_{x(i)}Ct^{\dag}_{y(j)}\mid 0>$$where the summation is carried over the $C_{n_{d}}^{N-2(n_{tt}+n_{ct})}$ possible configurations. The energy of these combinations is still given by (\[energiemodel\]). The states (\[Excocorr\]) interact with each other through the following matrix element $$\label{interacD} \begin{array}{c} <x(1),...,x(n_{tt}),y(1),...,y(n_{ct}),n_{d}\mid H_{PPP}\mid x(1),...,x(n_{tt}),y(1),...,y(n_{ct}),n_{d}+1>=\\ \sqrt{(n_{d}+1)(N-2(n_{tt}+n_{ct})-n_{d})}\frac{U-V}{2} \end{array}$$ The tri-diagonal matrix, where the diagonal part is given by (\[energiemodel\]) and the off-diagonal part by (\[interacD\]), can be divided into sub-matrices characterized by $n_{ct}$ localized Ct$_{1}^{-}$-LC and $n_{tt}$ localized TT-LC but with a variable number of D-LC, $n_{d}$ ($n_{d}=0,..., N-2(n_{ct}+n_{tt})$); these sub-matrices can be separately diagonalized and it is easy to show that the resulting lowest states are given by the following expression $$\label{diago1} \begin{array}{c} \mid x(1),...,x(n_{tt}),y(1),...,y(n_{ct})>^{c}=\sum_{n_{d}=0}^{N-2(n_{tt}+n_{ct})}a^{N-2(n_{tt}+n_{ct})-n_{d}}b^{n_{d}}\sqrt{C_{n_{d}}^{N-2(n_{tt}+n_{ct})}}\\ \mid x(1),...,x(n_{tt}),y(1),...,y(n_{ct}),n_{d}> \end{array}$$ with energy expressed as $$\label{energiecor} E^{c}(n_{tt},n_{ct})=n_{tt}\epsilon_{tt}+n_{ct}\epsilon_{ct}+(N-2(n_{tt}+n_{ct}))\epsilon_{c}$$ where $$\label{Ec} \epsilon_{c}=2t_{d}-\frac{1}{2}\sqrt{16t_{d}^{2}+(U-V)^{2}}$$The coefficients $a$ and $b$ of (\[diago1\]) are given by $a=\frac{U-V}{\sqrt{4\epsilon_{c}^{2}+(U-V)^{2}}}$ and $a^{2}+b^{2}=1$.
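The intramonomer correlation energy $\epsilon_{c}$ of (\[Ec\]) can be checked directly: for a single monomer, the F and D configurations form a two-level system with diagonal energies $0$ and $\epsilon_{d}=4t_{d}$ coupled by $(U-V)/2$, cf. (\[interacD\]). A numerical sketch, with illustrative parameter values:

```python
import numpy as np

# Illustrative values (eV); any t_d > 0 and U > V would do.
t_d, U, V = 2.76, 11.13, 7.65

# F/D two-level problem of a single double bond.
H2 = np.array([[0.0,          (U - V) / 2],
               [(U - V) / 2,  4.0 * t_d ]])
w, v = np.linalg.eigh(H2)
eps_c_num = w[0]

# Closed form of eq. (Ec) and the coefficient a of the correlated state.
eps_c = 2 * t_d - 0.5 * np.sqrt(16 * t_d**2 + (U - V)**2)
a = (U - V) / np.sqrt(4 * eps_c**2 + (U - V)**2)

print(eps_c_num, eps_c)    # identical to machine precision
print(abs(v[0, 0]), a)     # F-amplitude of the ground eigenvector equals a
```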
With these expressions, the double bonds free of Ct$_{1}^{-}$- and TT-LC are correlated independently. The superscript $c$ in (\[diago1\]) stands for correlated. $\epsilon_{c}$ is called the intramonomer correlation energy. The next step toward the evaluation of the ground state is to retain, among all the states resulting from the previous sub-diagonalizations, only the lowest ones given by (\[diago1\]). This approximation is well justified since the energy difference between these states and the corresponding lowest excited ones is given by the quantity $\sqrt{16t_{d}^{2}+(U-V)^{2}}$, which is rather large for usual parameters, with a value around $10\,eV$. We then reorganize the states (\[diago1\]) into collective excitations of highest symmetry $$\mid n_{tt},n_{ct}>^{c}=[C_{n_{tt}+n_{ct}}^{N-n_{tt}-n_{ct}}C_{n_{tt}}^{n_{tt}+n_{ct}}]^{-\frac{1}{2}}\sum_{\{x(i),y(j)\}}\mid x(1),...,x(n_{tt}),y(1),...,y(n_{ct})>^{c}$$still associated with the energy (\[energiecor\]) and where the summation runs over the $C_{n_{tt}+n_{ct}}^{N-n_{tt}-n_{ct}}C_{n_{tt}}^{n_{tt}+n_{ct}}$ possible configurations. The ground state is then expressed as a linear combination $$\label{wfgs} \mid GS>=\sum_{n_{tt},n_{ct}}X_{n_{tt},n_{ct}}\mid n_{tt},n_{ct}>^{c}$$ where the coefficients $X_{n_{tt},n_{ct}}$ are determined by solving the following secular equation $$\label{equationgs} \begin{array}{c} I(n_{tt},n_{ct}-1)X_{n_{tt},n_{ct}-1}+ (n_{tt}\epsilon_{tt}+n_{ct}\epsilon_{ct}-2(n_{tt}+n_{ct})\epsilon_{c}-E)X_{n_{tt},n_{ct}}+ I(n_{tt},n_{ct}+1)X_{n_{tt},n_{ct}+1}+\\\ [n_{tt}(n_{ct}+1)]^{\frac{1}{2}}n_{tt}\frac{\sqrt{3}}{2}t_{s}X_{n_{tt}-1,n_{ct}+1}+ [n_{ct}(n_{tt}+1)]^{-\frac{1}{2}}n_{ct}\frac{\sqrt{3}}{2}t_{s}X_{n_{tt}+1,n_{ct}-1}=0 \end{array}$$ where $$\label{I} I(n_{tt},n_{ct})=\sqrt{(n_{ct}+1)\frac{(N-2(n_{tt}+n_{ct})-1)(N-2(n_{tt}+n_{ct}))}{N-n_{tt}-n_{ct}}}a^{2}t_{s}$$ Equation (\[equationgs\]) cannot be solved exactly with the interaction term (\[I\]).
As a final step, we approximate the term $I(n_{tt},n_{ct})$ by assuming $$\label{approx} I(n_{tt},n_{ct}) \simeq \sqrt{(n_{ct}+1)(\frac{N-1}{3}-n_{tt}-n_{ct})}\sqrt{3}a^{2}t_{s}$$ This is a very good approximation of (\[I\]) if the number of GLC extended over two monomers, $n_{2}=n_{tt}+n_{ct}$, is small[@pleutin]. Consequently, this treatment will be justified if, in the final wave function, the most important configurations are the ones with a small value of $n_{2}$; this is indeed the case, as can be seen from the work of ref. [@pleutin] and as confirmed in the present study. With this last simplification, the problem is mapped onto $(N-1)/3$ independent three-level systems. One writes $$\begin{array}{c} X_{n_{tt},n_{ct}}=\sqrt{C_{n_{tt}+n_{ct}}^{\bf{E} ((N-1)/3)}C_{n_{tt}}^{n_{tt}+n_{ct}}}y_{n_{tt},n_{ct}}\\ \mbox{with} \left \{ \begin{array}{c} y_{n_{tt},n_{ct}}/y_{n_{tt}+1,n_{ct}}=\gamma \\ y_{n_{tt},n_{ct}}/y_{n_{tt},n_{ct}+1}=\zeta \end{array}\right. \end{array}$$ where $\bf{E}$ takes the integer part, and $\gamma$ and $\zeta$ are real constants to be determined. Inserting this definition into (\[equationgs\]), one finds after some algebraic manipulations that the problem reduces to calculating the lowest eigenvalue, $\epsilon$, of the following 3 by 3 matrix $$\label{3levels} \left ( \begin{array}{ccc} 0 & \sqrt{3}a^{2}t_{s} & 0 \\ \sqrt{3}a^{2}t_{s} & \epsilon_{ct}-2\epsilon_{c} & \frac{\sqrt{3}}{2}t_{s} \\ 0 & \frac{\sqrt{3}}{2}t_{s} & \epsilon_{tt}-2\epsilon_{c} \end{array} \right )$$ The ground state energy is then simply divided into two different components $$\label{EGS} E_{GS}=N \epsilon_{c}+\frac{N-1}{3}\epsilon$$ The first part is the intramonomer correlation energy defined by the first subdiagonalization; it is obtained by correlating the $N$ double bonds independently.
The second part is the intermonomer fluctuation energy defined by the second subdiagonalization; it is obtained by considering $(N-1)/3$ identical and independent effective three-level systems defined by the matrix (\[3levels\]). Finally, the ground state wave function is specified by the following two equations $$\gamma=\frac{a^{2}t_{s}}{\epsilon} \quad , \quad \zeta=\frac{2a^{2}}{\sqrt{3}}\frac{\epsilon-\epsilon_{tt}}{\epsilon}$$ The resulting wave function contains, like the energy, two different kinds of components: the first ones localize electrons by pairs in the double bonds; the second ones introduce n.n. intermonomer fluctuations, charge fluctuations by means of Ct$_{1}^{-}$-LC and spin fluctuations by means of TT-LC. The ground state proposed above may be easily improved by adding new local configurations extended over two n.n. double bonds. For example, one can include all the LC represented in figure (\[LCGS\]); the LC of (\[LCGS\].c) are the ones directly coupled to the others. The strategy is then the same. First, one takes care of the intramonomer correlation; second, one builds the collective excitations of highest symmetry; third, one approximates, in the manner of (\[approx\]), the part of the resulting interaction connecting configurations which differ by only one LC extended over two monomers. The problem is then equivalent to considering $(N-1)/3$ independent seven-level systems; $\epsilon$ is then the lowest eigenvalue of the associated 7 by 7 matrix. In order to test the assumptions from which we propose several ground state wave functions of the form (\[wfgs\]), we first make comparisons for the Su-Schrieffer-Heeger (SSH) model. For this model, similar to (\[PPP\]) but without the complicated Coulomb term[@ssh], the exact result is well known [@ssh; @salem].
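The whole construction of this section can be assembled numerically. The sketch below builds the effective $3\times 3$ matrix (\[3levels\]) from the energies $\epsilon_{c}$, $\epsilon_{ct}$, $\epsilon_{tt}$ and the coefficient $a$, and evaluates the ground state energy (\[EGS\]); all numerical parameter values are illustrative assumptions, not those of the tables.

```python
import numpy as np

# Illustrative parameters (eV, Angstrom); assumed, not taken from Tables I-II.
t0, x, U = 2.4, 0.15, 11.13
r_d, r_s = 1.35, 1.46
t_d, t_s = t0 * (1 + x), t0 * (1 - x)

ohno = lambda r: U / np.sqrt(1.0 + 0.6117 * r**2)  # Ohno potential V(r)
V = ohno(r_d)

# Energies of the GLC and the intramonomer correlation energy.
eps_ct = 2*t_d + V - 0.25*(ohno(r_s) + 2*ohno(r_s + r_d) + ohno(2*r_d + r_s))
eps_tt = 4*t_d - (U - V)
eps_c  = 2*t_d - 0.5*np.sqrt(16*t_d**2 + (U - V)**2)
a2     = (U - V)**2 / (4*eps_c**2 + (U - V)**2)    # a^2

# Effective three-level matrix (3levels).
M3 = np.array([[0.0,                 np.sqrt(3)*a2*t_s,  0.0               ],
               [np.sqrt(3)*a2*t_s,   eps_ct - 2*eps_c,   np.sqrt(3)/2*t_s  ],
               [0.0,                 np.sqrt(3)/2*t_s,   eps_tt - 2*eps_c  ]])
eps = np.linalg.eigvalsh(M3)[0]

N = 100  # number of double bonds
E_GS = N * eps_c + (N - 1) / 3 * eps
print(f"eps_c = {eps_c:.4f} eV, eps = {eps:.4f} eV, E_GS = {E_GS:.2f} eV")
```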
We compare this result with, successively, the results given by the model ground state of ref [@pleutin], hereafter called model I, the one with the TT-LC added, model II, and, last, the model with all the GLC represented in figure (\[LCGS\]), model III. We make the comparison as a function of the dimerization parameter $x=\frac{\alpha}{2t_{0}}\delta$. The results are shown in Table I, where the percentages of the exact energy recovered by our successive approximations are given. For $x=1$, the case of complete dimerization, the three models obviously give the exact result. For $x=0$, the case without dimerization, one gets around 92$\%$ of the total energy. A priori, in this limit one would expect less accurate results, since charge fluctuations of range longer than one play a role; in fact, they contribute only to the missing 6$\%$. For $x=0.15$, a value often attributed to polyacetylene, one gets around 97$\%$ of the total energy. In conclusion, our approximation seems rather good for realistic cases, within this independent electron model. Next, we also make comparisons for the Hubbard model, which is well known to be exactly solvable in one dimension [@lieb]; this is the model (\[PPP\]) with $\alpha=0$ and where only the on-site electron-electron interaction, $U$, is retained. For $U=0$, one gets the SSH model without dimerization, for which we obtained around 92$\%$ of the total energy (see Table I). Starting from these values, the agreement decreases monotonically as $U$ increases, to finally reach, for infinite $U$, between 77$\%$ and 79$\%$ of the total energy, depending on the model (I, II or III) under consideration.
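The exact benchmark invoked here, the Lieb-Wu Bethe-ansatz ground state energy per site of the half-filled one-dimensional Hubbard chain, is itself easy to evaluate numerically from its standard integral representation; a sketch (the quadrature settings are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def lieb_wu(U, t=1.0):
    """Lieb-Wu ground state energy per site of the half-filled Hubbard chain,
    E/N = -4t * integral_0^inf dw J0(w) J1(w) / (w (1 + exp(w U / 2t)))."""
    f = lambda w: j0(w) * j1(w) / (w * (1.0 + np.exp(0.5 * w * U / t)))
    val, _ = quad(f, 0.0, np.inf, limit=400)
    return -4.0 * t * val

for U in (2.0, 4.0, 8.0):
    print(f"U = {U}: E/N = {lieb_wu(U):.5f} t")
```

At $U=0$ the integral reduces to the free tight-binding value $-4t/\pi$ per site, and the energy rises monotonically towards zero as $U$ grows.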
This discrepancy shows that important LC are missing, especially in the strong-$U$ limit; for instance, it is easy to see, just by energetic considerations, that for large enough $U$ the TTT-LC, a singlet made of three localized triplets, the TTTT-LC, a singlet made of four localized triplets, and so on, may become important for the ground state wave function. With our specific choice of basis set, completely localized on the double bonds, the dimerization parameter $x$ is crucial; the larger it is, the more relevant our treatment becomes, and it becomes exact at complete dimerization. In the Hubbard model, the dimerization is simply missing. If $\alpha \ne 0$, the energy of the LC made from localized triplets increases, making our approximations more and more reasonable. Last, we make comparisons for the so-called extended Peierls-Hubbard model; this is the model (\[PPP\]) with only the Hubbard term, $U$, and the n.n. interaction $V$, with the assumption that $V=V(r_{d})=V(r_{s})$[@eric1]. In contrast to the two previous models, this model is not integrable, so we compare with calculations performed with the Density-Matrix Renormalization Group (DMRG) technique [@white]; a very recent review of the advances related to this method may be found in [@dmrg]. The DMRG calculations were done by E. Jeckelmann [@eric2] following the method developed in ref. [@eric1]. We compare our approximate results with an extrapolation of the energy per unit cell made from calculations for different lattice lengths, up to two hundred double bonds. The calculations are performed for a reasonable choice of parameters, $U=4t_{0}$ and $V=t_{0}$. The results for several values of the dimerization parameter are collected in Table II. We see that the errors are always less than 20$\%$, and are around 10 to 13$\%$ for realistic parameters.
In our opinion, the agreement obtained here is satisfactory considering the relative simplicity of the wave functions proposed in this work. Moreover, with these approximate wave functions, some analytical insight is now possible, which is quite new in this range of parameters, appropriate for conjugated polymers. We do not compare our results for the moment with calculations made for the complete PPP Hamiltonian. Nevertheless, since the remaining long-range terms of the Coulomb potential are of smaller importance than the other terms of the Hamiltonian, one can reasonably expect only small quantitative changes in the results obtained with the extended Peierls-Hubbard model when using the full PPP Hamiltonian. Before closing this section, we note that our wave functions are not variational since, in the way we choose to diagonalize the model, we perform two successive sub-diagonalizations with some approximations. However, it is possible to build variational wave functions very similar to (\[wfgs\]). Indeed, work has already been done to propose a variational version of model II [@rva1], and more is in progress for model III [@rva2]. Alternatively, a very efficient Matrix-Product Ansatz is also proposed in [@mp]. Compared to the work developed in [@rva1], one can say that our proposed way of diagonalizing the PPP Hamiltonian in the selected sub-Hilbert space is a very good approximation for appropriate parameters. polaronic states ================ In this part, we consider the situation with one additional charge. We treat explicitly the case of an additional electron, but the case of the removal of one electron can be treated in exactly the same way. We show that this problem can be described, with some approximations, in terms of quasi-particles which obey a simple effective Hamiltonian. For a rigid lattice, we get a one-dimensional tight-binding Hamiltonian.
If one allows some distortion of the lattice around the extra particle, we get, at second order in the distortion coordinates, a Holstein-like model[@holstein]. In both cases, the parameters of these one-electron models are related to the PPP ones. In this work, we do not attempt to derive quantitative results. Our goal, based on semi-quantitative results, is to open up a path between a true many-body model given by the PPP Hamiltonian and simpler one-electron models, such as Holstein's model, for polaronic states. Because it is not possible to solve the PPP model, and since the important physical ingredients for an understanding of conjugated polymers are still not fully recognized [@excitations], the derivation of more effective models is needed in order to get some physical insight. This work, and the closely related one of ref. [@pleutin], goes in this direction. For convenience, we choose in this part the simplest description of the ground state, given by model I, using the F, D and Ct$^{-}_{1}$-LC. Since model I already contains the most important local constituents of the ground state wave function, namely the F and Ct$^{-}_{1}$-LC, we believe the results would not change dramatically with a better description, i.e. by using model II or III.
Then, if we define $$\mid n_{d},n_{ct}>=[C^{N-n_{ct}}_{n_{ct}}C^{N-2n_{ct}}_{n_{d}}]^{-1/2}\sum_{\{ y(i),z(j)\}} Ct^{\dag}_{y(1)}\cdots Ct^{\dag}_{y(n_{ct})}D^{\dag}_{z(1)}\cdots D^{\dag}_{z(n_{d})}\mid 0>$$ where the summation is over the $C^{N-n_{ct}}_{n_{ct}}C^{N-2n_{ct}}_{n_{d}}$ possible configurations, the ground state wave function is simply written as $$\label{PF} \mid GS>=\sum^{N_{ct}}_{n_{ct}=0}a^{N_{ct}-n_{ct}}_{ct}{b^{n_{ct}}_{ct}} \sqrt{C^{N_{ct}}_{n_{ct}}}\sum^{N-2n_{ct}}_{n_{d}=0}a^{N-{2n_{ct}}-n_{d}}_{c} {b^{n_{d}}_{c}}\sqrt{C^{N-2n_{ct}}_{n_{d}}}\mid n_{d},n_{ct}>$$ where $N_{ct}={\bf E}(\frac{N-1}{3})$, $a_{c}=\frac{(U-V)}{\sqrt{4\epsilon^{2}_{c}+(U-V)^{2}}}$, $a^{2}_{c}+b^{2}_{c}=1$, $a_{ct}=\frac{\sqrt{3}a^{2}_{d}t_{s}}{\sqrt{\epsilon^{2}_{t}+12a^{4}_{d}t^{2} _{s}}}$ and $a^{2}_{ct}+b^{2}_{ct}=1$. $\epsilon$ is then the lowest eigenvalue of the 2 by 2 matrix obtained from (\[3levels\]) by suppressing the effective level corresponding to the TT-LC[@pleutin]. For a typical choice of parameters relevant for conjugated polymers[@tavan], the most probable LC is the F-LC ($a_{c}^{2}\simeq 0.98$ and $a_{ct}^{2}\simeq 0.25$); typical values for the energies are given by $\epsilon_{c}\simeq -0.26eV$ and $\epsilon \simeq -1.26eV$. An additional charge disturbs the electronic cloud more or less strongly, depending on the system under consideration. There could be a local distortion, where the extra particle rearranges the system over short distances to create around it what is called a polarization cloud; this is the case for usual semiconductors. Alternatively, there could be a complete rearrangement of the system, as for strongly correlated systems[@fulde]. In our case, the first behaviour applies and a quasi-particle picture is reached.
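As a quick sanity check on the expansion (\[PF\]), the squared coefficients should sum to one for any $N$, provided the $\mid n_{d},n_{ct}>$ states are orthonormal (which the combinatorial prefactor in their definition is designed to ensure). A minimal Python sketch of ours verifying this:

```python
from math import comb

def gs_norm2(N, a_c2, a_ct2):
    """Squared norm of the model-I ground state (PF).

    a_c2 = a_c^2 and a_ct2 = a_ct^2 are the squared amplitudes; the
    |n_d, n_ct> basis states are assumed orthonormal.
    """
    b_c2, b_ct2 = 1.0 - a_c2, 1.0 - a_ct2
    N_ct = (N - 1) // 3                      # integer part E((N-1)/3)
    total = 0.0
    for n_ct in range(N_ct + 1):
        w_ct = comb(N_ct, n_ct) * a_ct2 ** (N_ct - n_ct) * b_ct2 ** n_ct
        for n_d in range(N - 2 * n_ct + 1):
            w_d = comb(N - 2 * n_ct, n_d) * a_c2 ** (N - 2 * n_ct - n_d) * b_c2 ** n_d
            total += w_ct * w_d
    return total
```

The inner sum telescopes to $(a_{c}^{2}+b_{c}^{2})^{N-2n_{ct}}=1$ and the outer one to $(a_{ct}^{2}+b_{ct}^{2})^{N_{ct}}=1$, so the state is normalized for any chain length.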
We describe the perturbations caused by the extra particle - the polarization cloud - by introducing a new set of LC, more or less extended, which we call Charged Local Configurations (C-LC); the term “charged” means that they contain the extra particle explicitly. Some examples of C-LC, extended over one, two and three double bonds, are shown in figure (\[LCP\]), where the extra electron is represented by the thick arrow. In the case of a “macroscopic” rearrangement of the electronic structure - as could happen for strongly correlated systems - the maximum extension of the relevant C-LC would be of the order of the system size. In our case, this critical size is of the order of a few monomer units only. Outside these C-LC, we assume the electronic structure unchanged with respect to the ground state; we therefore consider the following charged configurations - strictly speaking, these are linear combinations of electronic configurations, but we adopt the proposed terminology for convenience - $$\label{excitations} \begin{array}{c} \mid \alpha_{n}>=\mid N_{L}>\otimes\mid C_{n}^{\alpha}>\otimes\mid N_{R}>\\ \mid \beta_{n,n+1}>=\mid N_{L}>\otimes\mid C_{n,n+1}^{\beta}>\otimes\mid N_{R}-1>\\ \mid \gamma_{n,n+1,n+2}>=\mid N_{L}>\otimes\mid C_{n,n+1,n+2}^{\gamma}>\otimes\mid N_{R}-2> \end{array}$$ where $\mid C_{n}^{\alpha}>$, $\mid C_{n,n+1}^{\beta}>$ and $\mid C_{n,n+1,n+2}^{\gamma}>$ are C-LC extended over one, two and three nearest-neighbour double bonds respectively, and $\mid N_{L}>$ ($\mid N_{R}>$, $\mid N_{R}-1>$, $\mid N_{R}-2>$) is the part to the left (right) of the C-LC, described in the same way as $\mid GS>$. With this crude description, a C-LC acts as a hard boundary which simply interrupts the chain: the system is separated into two chains, both described exactly as the ground state; the boundary contains the extra particle explicitly, within a given C-LC. The more extended C-LC are inserted in the ground state in the same way as in (\[excitations\]).
With our approximation, the energy of each charged configuration (such as (\[excitations\])) is given by the sum of two different terms: the energy of the isolated C-LC, and the energy of the external parts to the left and to the right of the C-LC. Since $\mid N_{L}>$ and $\mid N_{R}>$ are neutral, the external parts do not interact via the Coulomb potential with the extra particle. However, the configurations (\[excitations\]) must be improved for more quantitative results. Indeed, in a better description, because of the presence of the P-LC, the relative weights of the F, D and Ct$^{-}_{1}$-LC, controlled by the coefficients $a_{c}$, $b_{c}$, $a_{ct}$ and $b_{ct}$, should depend on their positions along the chain. Moreover, with an additional particle, the electron-hole symmetry is broken. All the LC used in $\mid N_{L}>$ and $\mid N_{R}>$ are in the same symmetry sector - the proper one for building the ground state. This is the case, for instance, of the Ct$^{-}_{1}$-LC, where the charge transfers to the right and to the left enter with equal weight. In the presence of the P-LC these two charge transfers are no longer equivalent; the symmetry is broken, and this implies a Coulomb interaction between the P-LC and the external parts. These effects, not considered in this work, would certainly modify the polarization cloud in a sensitive way. In other words, the “embedding” of the C-LC, due to the parts $\mid N_{L}>$ and $\mid N_{R}>$ of (\[excitations\]), is not treated efficiently in this work. We believe this is the main point to be improved in the future for more quantitative results. For usual values of the PPP model parameters, one kind of charged configuration is lower in energy than the others, in such a way that a perturbative treatment is possible.
These configurations are due to the C-LC referred to as the P-LC hereafter (P stands for Particle), associated with the following creation operator $$P^{\dag}_{n,\sigma}=A^{\dag}_{n,\sigma}F^{\dag}_{n}$$ and represented in figure (\[LCP\].a). The extra particle is immersed in the reference vacuum and gives the following charged configurations $$\label{particle} \mid n>=\mid N_{L}>\otimes\mid P_{n}>\otimes\mid N_{R}>$$ where $n$ refers to the position of the P-LC, with an energy given by ${\cal E}_{n} = \epsilon_{n}+(N^{r}+N^{l})\epsilon_{c} + (N^{r}+N^{l} -3)\frac{\epsilon}{3}$, with $N^{r}+N^{l}=N-1$ and $\epsilon_{n}=t_{d}+\frac{U}{2}+\frac{3V}{2}$, the energy of the isolated P-LC. By comparing with (\[EGS\]), we see that there is a loss of intramonomer correlation energy and a loss of intermonomer fluctuation energy with respect to the ground state; indeed, the additional electron occupies a site on which one cannot place D and Ct$^{-}_{1}$-LC. This loss of energy is more important for the more extended C-LC. In the following, we consider explicitly only the charged configurations (\[particle\]), since the effects of the other charged configurations can be taken into account by perturbation theory. Within our approximation, because of the n.n. hopping integral, the P-LC can hop on the lattice with the help of the F-LC or the Ct$^{-}_{1}$-LC. With the former, the P-LC can hop from site to site on the monomer lattice (see figure \[nnhopping\]). $$\label{hopping} <n \mid H_{ppp} \mid n \pm 1>=J=a_{c}^{2}b_{ct}^{2}\frac{t_{s}}{2}$$ In this expression, the product $a_{c}^{2}b_{ct}^{2}$ gives the probability of finding an F-LC in the wave function (\[PF\]); the factor $1/2$ in (\[hopping\]) comes from our choice to work with the monomer orbitals. Moreover, within our approximation, there exists also a n.n.n. hopping process, with the help of the more extended GLC, the Ct$^{-}_{1}$-LC (see figure \[nnnhopping\]).
$$\label{nexthopping} <n \mid H_{ppp} \mid n \pm 2>=a_{ct}^{2}\frac{t_{s}}{4}$$ The additional factor of 2 in the denominator comes from the fact that only one term of the Ct$^{-}_{1}$-LC (see equation (\[ctLC\])) is involved in the transfer; the coefficient $a_{ct}^{2}$ gives the probability of finding a Ct$^{-}_{1}$-LC in the ground state wave function (\[PF\]). The n.n.n. transfer is of course less important than the n.n. one. With the values of the parameters used here, these two hopping processes differ by roughly an order of magnitude. We therefore neglect the n.n.n. effective hopping term in this work. The extra particle (P-LC) can be dressed by perturbation theory. Some effects of the other C-LC then appear as a renormalized energy and a renormalized n.n. hopping term for the extra particle. This dressing of the P-LC can simply be done by a second-order perturbative treatment, giving, on the one hand, the so-called polarization energy $$\label{epolarisation} \epsilon_{p}=\sum_{\delta}\frac{t_{\delta}^{2}}{{\cal E}_{n}-{\cal E}_{\delta}}$$ and, on the other hand, some corrections to the n.n. hopping integral $J$ $$\label{effhopping} J_{eff}=\sum_{\delta, \delta^{'}}t_{\delta}t_{\delta^{'}}(\frac{1}{{\cal E}_{n}-{\cal E}_{\delta}}+\frac{1}{{\cal E}_{n}-{\cal E}_{\delta^{'}}})$$ In these expressions, $t_{\delta}$ and $t_{\delta^{'}}$ are interaction terms between the P-LC and the other C-LC. The inequalities $\mid \frac{t_{\delta/\delta^{'}}}{{\cal E}_{n}-{\cal E}_{\delta/\delta^{'}}} \mid <<1$ are satisfied for the values of the parameters we use, which guarantees the relevance of a perturbative treatment. We have thus reached a quasi-particle picture, the quasi-particle being represented by the P-LC. In principle, many C-LC contribute to the perturbative series (\[epolarisation\]) and (\[effhopping\]).
However, because the states (\[excitations\]) ignore many effects due to an inappropriate embedding, as already mentioned, we believe it is not useful to carry out the full calculation. Consequently, we perform here a simplified treatment of the dressing of the extra particle, which we believe nevertheless contains the most important contributions to (\[epolarisation\]) and (\[effhopping\]). This simplified treatment consists of considering the C-LC embedded not in the ground state defined by (\[PF\]) but in a simplified vacuum made of F-LC only. Since the F-LC is by far the most important LC in the ground state (\[PF\]), we believe this simplified treatment is sufficient to capture the most important parts of the polarization energy and the effective hopping term. Moreover, among the remaining charged configurations, only a few are incorporated in the perturbative treatment; they are shown in figure (\[LCP\].b). By this last simplification we neglect all the C-LC shown in figure (\[LClongrangepolar\]), which take into account some long-range polarization effects; these C-LC are numerous, but their total effect on (\[epolarisation\]) and (\[effhopping\]) is small, and they do not contribute significantly to the binding energy of the polaron state, which is the main quantity we are looking for here. With this treatment, the corrections to the hopping term remain negligible in the range of parameters of interest; we therefore neglect these last corrections, $J_{eff}$. After the dressing operation, we formally obtain a one-particle problem with two characteristic energy terms, which are functions of the PPP parameters: $E_{n}$, the site energy of the additional 'electron' with respect to the ground state, and $J$, the hopping term, with $E_{n}=\epsilon_{n}-\epsilon_{c}-\frac{2}{3}\epsilon+\epsilon_{p}$ and $J=\frac{t_{s}}{2}$. If we assume a rigid lattice, the problem can obviously be diagonalized, giving a band centred at $E_{n}$ with a bandwidth of $4\mid J \mid$.
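To make the effective one-particle picture concrete, the following Python sketch compares the n.n. amplitude (\[hopping\]) with the n.n.n. amplitude (\[nexthopping\]) using the weights quoted earlier ($a_{c}^{2}\simeq0.98$, $a_{ct}^{2}\simeq0.25$, with $b_{ct}^{2}=1-a_{ct}^{2}$), and checks the rigid-lattice band numerically; the value of $t_{s}$ is an illustrative choice of ours, not taken from the text:

```python
import numpy as np

t_s = 2.2                        # n.n. hopping (eV); illustrative value, not from the text
a_c2, a_ct2 = 0.98, 0.25         # squared weights quoted for typical PPP parameters
b_ct2 = 1.0 - a_ct2

J_nn = a_c2 * b_ct2 * t_s / 2.0  # eq. (hopping)
J_nnn = a_ct2 * t_s / 4.0        # eq. (nexthopping)
ratio = J_nn / J_nnn             # ~6 with these weights

# Rigid lattice: E(q) = E_n - 2 J cos(q a); the bandwidth is 4|J|
J, a = t_s / 2.0, 1.0            # a = unit cell length (arbitrary units)
q = np.linspace(-np.pi / a, np.pi / a, 2001)
E = -2.0 * J * np.cos(q * a)
bandwidth = E.max() - E.min()

# Band-bottom curvature d2E/dq2 = 2 J a^2, i.e. m* = hbar^2/(2 J a^2) = hbar^2/(a^2 t_s)
i0 = len(q) // 2                 # index of q = 0
curv = (E[i0 + 1] - 2.0 * E[i0] + E[i0 - 1]) / (q[1] - q[0]) ** 2
```

With these weights, the two hopping processes differ by a factor of about six, and the curvature at the band bottom reproduces $m^{*}=\hbar^{2}/(a^{2}t_{s})$ for $J=t_{s}/2$.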
In the case of the SSH Hamiltonian[@ssh], and neglecting $\epsilon$ and $\epsilon_{p}$, the bottom of the band is given by $E_{n}=\mid t_{d}-t_{s}\mid$, the exact result; with the inclusion of these corrective terms, this energy becomes slightly overestimated. The effective mass associated with the P-LC is given by $m^{*}\simeq\frac{\hbar^{2}}{a^{2}}\frac{1}{t_{s}}$ ($a$ is the unit cell length), which is of course higher than the effective mass of a free particle at the bottom of the conduction band. With the Coulomb term, $E_{n}$ increases and $m^{*}$ stays unchanged. In conclusion, for a rigid lattice, we have arrived at a simple tight-binding Hamiltonian - the so-called Hückel model. Last, one may note that such an approach is quite close in spirit to a recent work of J. Grafenstein et al., where an effective tight-binding model is derived at the ab-initio level by means of an incremental method [@grafenstein]. Now we allow a relaxation of the lattice. For simplicity, we choose a model displacement where the two 'atoms' of the same double bond move with the same amplitude $\frac{\mid x_{n} \mid}{2}$ but in opposite directions (cf. figure (\[deformation\])). The two parameters $E_{n}$ and $J$ now depend on the lattice coordinates, mainly through the linear dependence of the two hopping terms, $t_{d}(x_{n})=t_{d}-\alpha x_{n}$ and $t_{s}(x_{n},x_{n+1})=t_{s}+\alpha (\frac{x_{n}}{2}+\frac{x_{n+1}}{2})$. In contrast, the Coulomb terms remain almost unchanged under a small displacement. The contributions of these displacements to $E_n$ and $J$ are small, so we make a linear expansion of these two quantities with respect to $\{x_{n}\}$ $$E({x_{n}})=E_{n}-\alpha (a_{0}x_{n} + a_{1}(x_{n+1}+x_{n-1}))$$ $$J({x_{n}})=J-\alpha b_{0}(x_{n}+x_{n+1})$$ where $a_{0}$, $a_{1}$ and $b_{0}$ are functions of the PPP parameters and $\alpha$ is the electron-phonon coupling[@ssh].
The extra elastic energy of the dimerized chain due to the lattice relaxation in the presence of an additional charge is expressed as $$E_{el}=\frac{1}{2}\sum_{n}K_{eq}[x^{2}_{n}+(\frac{x_{n}}{2} +\frac{x_{n-1}}{2})^2]$$ where $K_{eq}$, the spring constant, is defined relative to the dimerized equilibrium structure. The coefficients $C_{n}$ of the Holstein polaron wave function[@holstein], $\mid \Psi_{p}>=\sum_{n}C_{n}({x_{n}})\mid n> $, are determined by minimizing the corresponding total energy, $E_{T}(\{x_{n}\})$, with respect to the lattice coordinates $x_{n}$. At second order in $x_{n}$, and taking into account that $\frac{\alpha}{K_{eq}}\sim0.1\AA$ in conjugated polymers [@revue], we obtain the characteristic equations of the molecular Holstein model $$\label{holstein} [Fx_{n}-2J-\epsilon]C_{n}+JC_{n+1}+JC_{n-1}=0$$ $$\label{relaxation} x_{n}k=F\mid C_{n}\mid ^{2}$$ where the coefficients are expressed in terms of the PPP parameters: $F=(a_{0}+2a_{1}+4b_{0})\alpha$, $k=2K_{eq}$ and $J=\frac{t_{s}}{2}$. Injecting (\[relaxation\]) into (\[holstein\]), we obtain the non-linear Schrödinger equation which gives the coefficients of the wave function; the second equation connects these coefficients and the lattice deformation in a simple manner. The analytical solution of these two equations in the continuum limit[@holstein], valid for the “large” polaron case, gives the well known polaronic wave function $C_{n}=\frac{\gamma}{\eta}\mbox{sech}(\gamma(n-n_{0}))$ with $E_{b}=\frac{F^{2}}{2k}$, $\eta^{2}=\frac{E_{b}}{J}$ and $\gamma=\frac{\eta^{2}}{2}$; the polaron state is localized around $n_{0}$, which remains undetermined because of the translational invariance of the system. The associated binding energy of the polaron state is given by $E_{p}=\frac{E^{2}_{b}}{12J}$. We evaluate these quantities for several choices of parameters by the following sequence of calculations.
First, we optimize the dimerized geometry using a spring constant, $K$, defined relative to a hypothetical undimerized geometry[@ssh]; then we evaluate $K_{eq}$ by calculating the second derivative of $E_{T}$ with respect to the dimerization coordinate at the equilibrium geometry. Second, we solve equations (\[holstein\]) and (\[relaxation\]). In the continuum limit of the SSH Hamiltonian[@campbell], analytical expressions have been given. Our results always overestimate the reported values. For example, with $t_{0}=2.5 eV$, $\alpha=4.1 eV\AA^{-1}$ and $K=21 eV\AA^{-2}$, we get $E_{p}=0.11eV$ in place of $0.064eV$. In the same manner, our method also overestimates the value of the dimerization. These overestimations arise naturally from our starting point, which relies on a molecular description. Besides, it has been shown that the SSH Hamiltonian is never equivalent to the Holstein model for the dimerized linear chain[@campbell], so the approximations of our model cannot be expected to lead to good agreement in this case. However, our approximations should perform better when the Coulomb interaction is taken into account; then the energies of the charge-fluctuation components decrease with their extension, owing to the long-range part of the potential. This fact is in favour of our approximation. Furthermore, the value of $K$ used in this example is the appropriate one for the SSH Hamiltonian[@ssh], but seems not to be in agreement with the experimental results obtained for small oligomers[@revue]. A higher value must be taken, again favouring our description. If one adds the Ohno potential, the binding energy decreases: for example, for the same choice of parameters and $U=11.16eV$, we get $E_{p}=0.091eV$. Finally, taking the same parameters but with a more appropriate value of $K$, $K=41 eV\AA^{-2}$, we get a reasonable equilibrium geometry characterized by $r_{d}=1.33\AA$ and $r_{s}=1.47\AA$.
Moreover, we get the following values: $F\simeq9.5eV\AA^{-1}$, $J\simeq1.1eV$, $k\simeq78eV\AA^{-2}$, and the binding energy of the polaron decreases to $E_{p}\simeq0.025eV$. In any case, we stay around traditionally adopted values. Before closing this section, note that with such a low binding energy, as expected for conjugated polymers, the quantum fluctuations of the lattice should be considered explicitly. However, it is for the moment quite hopeless to introduce additional bosonic variables into the full PPP Hamiltonian. conclusion ========== In conclusion, we have proposed a simplified treatment of the PPP Hamiltonian, essentially a diagonalization of this Hamiltonian in a restricted Hilbert space. The method adopted, using monomer orbitals, is a natural way to bridge the gap between small-cluster and polymer calculations[@chandross; @pleutin]. The ground state is composed of intermonomer nearest-neighbour fluctuation components in a background of electrons coupled in pairs localized on the monomers. Comparisons with DMRG results for the extended Peierls-Hubbard model show satisfactory agreement considering the simplicity of our proposed wave functions. The electronic excitations are then described as local perturbations moving in this “vacuum”. For an appropriate set of parameters, this description gives rather good values for the dimerization and for the energies of the excited states active in one-photon spectroscopy[@pleutin]. In the doped case (2N+1 particles) studied here, following the adiabatic scheme proposed by Holstein[@holstein], we show that our model leads naturally to a Holstein-polaron-like problem. However, our description differs drastically from the Holstein polaron picture in the sense that it is able to describe the behaviour of a strongly correlated (2N+1)-particle state, whereas Holstein's model considers only the additional particle in interaction with a deformable medium.
The binding energy obtained for the polaron is of the correct order of magnitude. Some improvements are desirable concerning, first, the ground state, where more extended GLC must be considered in order to reproduce more accurately the delocalization proper to $\pi$-systems. On the other hand, variational calculations based on the very same ideas are possible [@rva1; @rva2; @mp]. For the doped case, we believe the first step would be to improve the description of the vacuum in the presence of the extra particle. Even if it is difficult to test our derivation, in part owing to a lack of accurate calculations including correlation effects, we think that our formulation keeps the essential behaviour of the physical phenomenon considered, and believe that it could be useful for future, more advanced studies, in part because of its relative conceptual simplicity and its ability to give analytical expressions. For example, the behaviour of polaron states in the presence of a strong electric field [@bussac], which corresponds to a common situation in electroluminescence studies, could be considered, taking into account the effects of the strongly correlated N-particle system. We wish to thank E. Jeckelmann for giving us DMRG results prior to publication. S.P. acknowledges support from the European Commission through the TMR network contract ERBFNRX-CT96-0079 (QUCEX). , edited by N.S. Sariciftci (World Scientific Publishing, Singapore, 1997). M. Chandross, Y. Shimoi and S. Mazumdar, Phys. Rev. [**B 59**]{}, 4822 (1999). D. Baeriswyl, D.K. Campbell and S. Mazumdar, in Conjugated Conducting Polymers, edited by H. Kiess (Springer-Verlag, Heidelberg, 1992), pp 7-133. M.J. Rice and Y.N. Gartstein, Phys. Rev. Lett. [**73**]{}, 2504 (1994); Y.N. Gartstein, M.J. Rice and E. Conwell, Phys. Rev. [**B52**]{}, 1683 (1995). W.T. Simpson, J. Am. Chem. Soc. [**77**]{}, 6164 (1955); H.C. Longuet-Higgins and J.N. Murrel, Proc. Roy. Phys. Soc. [**A68**]{}, 602 (1955); J.A. Pople and S.H. Walmsley, Trans.
Faraday Soc. [**58**]{}, 441 (1962). T. Holstein, Ann. of Phys. [**8**]{}, 325 (1959). M.N. Bussac, J. Dorignac and L. Zuppiroli, Phys. Rev. [**B55**]{}, 8207 (1997). S. Pleutin and J.L. Fave, J. Phys. Cond. Matt. [**10**]{}, 3941 (1998). Z.G. Yu, R.T. Fu, C.Q. Wu, X. Sun and K. Nasu, Phys. Rev. [**B52**]{}, 4849 (1995). P. Tavan and K. Schulten, Phys. Rev. [**B36**]{}, 4337 (1987). K. Ohno, Theor. Chim. Acta [**2**]{}, 219 (1964). S. Pleutin, Thesis, Paris 7 University, 1997. Z.G. Soos and S. Ramasesha, in Valence Bond Theory and Chemical Structure, edited by D.J. Klein and N. Trinajstic, Elsevier, Amsterdam, 1990. K. Schulten and M. Karplus, Chem. Phys. Lett. [**14**]{}, 305 (1972). D. Mukhopadhyay, G.W. Hayden and Z.G. Soos, Phys. Rev. [**B51**]{}, 9476 (1995). W.P. Su, J.R. Schrieffer and A.J. Heeger, Phys. Rev. [**B22**]{}, 2099 (1980). L. Salem, [*Molecular Orbital Theory of Conjugated Systems*]{} (Benjamin, London, 1966). E.H. Lieb and F.Y. Wu, Phys. Rev. Lett. [**20**]{}, 1445 (1968). P. Fulde, [*Electron Correlations in Molecules and Solids*]{}, 3rd edn., Springer Series in Solid-State Sciences (Springer, Berlin, 1995). E. Jeckelmann, Phys. Rev. [**B57**]{}, 11838 (1998). E. Jeckelmann, private communication. S.R. White, Phys. Rev. [**B48**]{}, 10345 (1993). , Lecture Notes in Physics, eds I. Peschel, X. Wang and K. Hallberg, Springer-Verlag, 1999. S. Pleutin, E. Jeckelmann, M.A. Martin-Delgado and G. Sierra, preprint cond-mat/9908062, submitted to Prog. Theo. Chem. Phys. S. Pleutin, in preparation. M.A. Martin-Delgado, G. Sierra, S. Pleutin and E. Jeckelmann, preprint cond-mat/9908066, submitted to Phys. Rev. [**B**]{}. J. Grafenstein, H. Stoll and P. Fulde, Phys. Rev. [**B55**]{}, 13 588 (1997). D.K. Campbell, A.R. Bishop and K. Fesser, Phys. Rev. [**B26**]{}, 6862 (1982).
\[ssh\] ---------- ---------- ---------- ----------- model I model II model III $x=0.$ 91.6$\%$ 92.1$\%$ 92.7$\%$ $x=0.15$ 96.7$\%$ 96.9$\%$ 97.2$\%$ ---------- ---------- ---------- ----------- : Percentage of the exact energy obtained with the different models studied here for the SSH Hamiltonian. Model I contains the F, D and Ct$_{1}^{-}$-LC; the TT-LC are added for model II, and the whole set of LC shown in figure (\[LCGS\]) is considered for model III. \[uv\] ---------- ----------- ----------- ----------- ----------- model I model II model III DMRG $x=0.05$ 0.373311 0.369217 0.366820 0.313599 $x=0.15$ 0.306566 0.304317 0.303046 0.270381 $x=0.25$ 0.236707 0.235678 0.235022 0.213969 $x=0.75$ -0.160370 -0.159835 -0.159840 -0.164925 ---------- ----------- ----------- ----------- ----------- : Energy per unit cell for an infinite lattice obtained with the three successive approximations (models I, II, III) and DMRG calculations for the extended Peierls-Hubbard model with $U=4t_{0}$ and $V=t_{0}$; in the case of DMRG, the energies per unit cell are obtained from extrapolation of large-cluster calculations up to 400 sites.
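Two quick numerical cross-checks of figures quoted in the text (our own Python sketch, not part of the original calculations). The first recomputes the relative deviation of the model-I energies of Table II from the DMRG values; the second evaluates the continuum polaron formulas with the quoted $F\simeq9.5\,eV\AA^{-1}$, $J\simeq1.1\,eV$ and $k\simeq78\,eV\AA^{-2}$, fixing the width parameter by normalization of the sech ansatz ($\gamma=\eta^{2}/2$, our reading), and verifies that the ansatz approximately solves the discrete equations (\[holstein\]) and (\[relaxation\]):

```python
import math

# Relative deviation of model I from DMRG (energies per unit cell, Table II)
model_I = {0.05: 0.373311, 0.15: 0.306566, 0.25: 0.236707, 0.75: -0.160370}
dmrg = {0.05: 0.313599, 0.15: 0.270381, 0.25: 0.213969, 0.75: -0.164925}
errors = {x: abs(model_I[x] - dmrg[x]) / abs(dmrg[x]) for x in dmrg}
# -> ~19% (x=0.05), ~13% (x=0.15), ~11% (x=0.25), ~3% (x=0.75)

# Continuum polaron formulas with the quoted parameters
F, J, k = 9.5, 1.1, 78.0         # eV/Angstrom, eV, eV/Angstrom^2
E_b = F ** 2 / (2.0 * k)         # ~0.58 eV
eta2 = E_b / J                   # eta^2 = E_b / J
gamma = eta2 / 2.0               # width fixed by sum_n C_n^2 = 1 (our assumption)
E_p = E_b ** 2 / (12.0 * J)      # ~0.025 eV, the binding energy quoted in the text

# Residual of [F x_n - 2J - eps] C_n + J C_{n+1} + J C_{n-1} = 0 with x_n = F C_n^2 / k
A = gamma / math.sqrt(eta2)      # amplitude gamma/eta of the sech profile
C = {n: A / math.cosh(gamma * n) for n in range(-200, 201)}
x = {n: F * C[n] ** 2 / k for n in C}
eps = J * gamma ** 2             # continuum eigenvalue, measured from the band edge
res = max(abs((F * x[n] - 2.0 * J - eps) * C[n] + J * (C[n + 1] + C[n - 1]))
          for n in range(-199, 200))
norm = sum(c * c for c in C.values())
```

The deviations reproduce the "always below 20$\%$, around 10 to 13$\%$ for realistic $x$" statement, and the sech profile satisfies the discrete equations to better than one percent of $J$, with $E_{p}\simeq0.025\,eV$ as quoted.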
--- author: - | [^1]\ SKA Organisation\ E-mail: title: Why the SKA will be the most radical telescope ever --- ... === [99]{} Carilli, C. L., & Rawlings, S.2004, [New Astronomy Reviews]{}, 48, 979 Ekers, R. 2012, arXiv:1212.3497 Ekers, R. 2012, [ApJ]{} [^1]: A footnote may follow.
--- abstract: | [**Sentiment analysis**]{} seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as “thumbs up” or “thumbs down”. To determine this [**sentiment polarity**]{}, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding [*minimum cuts in graphs*]{}; this greatly facilitates incorporation of cross-sentence contextual constraints. [**Publication info:**]{} [*Proceedings of the ACL, 2004*]{}. author: - Bo Pang - | Lillian Lee\ Department of Computer Science\ Cornell University\ Ithaca, NY 14853-7501\ {pabo,llee}@cs.cornell.edu title: 'A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts' --- Acknowledgments {#acknowledgments .unnumbered} ===============
--- abstract: | We present a broadband imaging and spectral study of the radio bright supernova remnant (SNR) 3C 397 with , , & . A bright X-ray spot seen in the HRI image hints at the presence of a pulsar-powered component, and gives this SNR a composite X-ray morphology. Combined  &  imaging show that the remnant is highly asymmetric, with its X-ray emission peaking at the western lobe. The hard band images obtained with the  Gas Imaging Spectrometer show that much of the hard X-ray emission arises from the western lobe, associated with the SNR shell; with little hard X-ray emission associated with the central hot spot. The spectrum of 3C 397 is heavily absorbed, and dominated by thermal emission with emission lines evident from Mg, Si, S, Ar and Fe. Single-component models fail to describe the X-ray spectrum, and at least two components are required: a soft component characterized by a low temperature and a large ionization time-scale, and a hard component required to account for the Fe-K emission line and characterized by a much lower ionization time-scale. We use a set of non-equilibrium ionization (NEI) models (Borkowski  in preparation), and find that the fitted parameters are robust. The temperatures from the soft and hard components are $\sim$ 0.2 keV and $\sim$ 1.6 keV respectively. The corresponding ionization time-scales $n_0 t$ ($n_0$ being the pre-shock hydrogen density) are $\sim$ 6 $\times $10$^{12}$ cm$^{-3}$ s and $\sim$ 6 $\times$ 10$^{10}$ cm$^{-3}$ s, respectively. The large $n_0 t$ of the soft component suggests it is approaching ionization equilibrium; thus it can be fit equally well with a collisional equilibrium ionization model. The spectrum obtained with the Proportional Counter Array (PCA) of  is contaminated by emission from the Galactic ridge, with only $\sim$ 15% of the count rate originating from 3C 397 in the 5–15 keV range. The PCA spectrum allowed us to confirm the thermal nature of the hard X-ray emission. 
A third, pulsar-driven component is possible, but the contamination of the source signal by the Galactic ridge did not allow us to determine its parameters, or find pulsations from any hidden pulsar. We discuss the X-ray spectrum in the light of two scenarios: a young ejecta-dominated remnant of a core-collapse SN, and a middle-aged SNR expanding in a dense ISM. In the first scenario, the hot component arises from the SNR shell, and the soft component from an ejecta-dominated component. 3C 397 would be a young SNR (a few thousand years old), but intermediate in dynamical age between the young historical shells (like Tycho or Kepler), and those that are well into the Sedov phase of evolution (like Vela). In the second scenario, the soft component represents the blast wave propagating in a dense medium, and the hard component is associated with hot gas encountering a fast shock, or arising from thermal conduction. In this latter scenario, the SNR would be $\sim$ twice as old, and transitioning into the radiative phase. The current picture we present in this paper is only marginally consistent with this second scenario, but it cannot be excluded. A spatially resolved spectroscopic study is needed to resolve the soft and hard components and differentiate between the two scenarios. Data from the new generation of X-ray telescopes will also address the nature of the mysterious central (radio-quiet) X-ray spot. author: - | S. Safi-Harb, R. Petre, K. A. Arnaud,\ J. W. Keohane,\ K. J. Borkowski, K. K. Dyer, S. P. Reynolds,\ & J. P. Hughes title: '**A broadband X-ray study of the supernova remnant 3C 397**' --- Introduction ============ In the standard scenario of a core collapse explosion of a massive star, the bulk of the stellar envelope is ejected outward at a high velocity, shocking the surrounding medium, and forming a shell of diffuse emission, which is ultimately observed as a supernova remnant (SNR) shell.
The central core collapses to form a neutron star, which may be subsequently observed as a pulsar. SNRs are classified according to their morphology as shells, plerions, or composites. While the shells have a shell-like structure, and represent the majority of SNRs (e.g. the Cygnus loop), the plerions show a centrally bright morphology in radio and X-rays (like the Crab), with no evidence of emission from a shell. The composites have both a shell plus a centrally bright component. The X-ray emission from the shell is in most cases thermal, and results from the shocked swept-up interstellar medium (ISM), with a probable contribution from reverse-shocked ejecta in young remnants. The central X-ray emission in the plerionic composites is non-thermal, and results from synchrotron radiation from highly relativistic particles injected by the pulsar. As the pulsar wind encounters the surroundings, it gets shocked and forms the synchrotron nebula, seen as a plerion. In the thermal composites, the central emission is thermal, and arises mostly from the swept-up ISM. Rho & Petre (1998) refer to this class as ‘mixed morphology’ SNRs, to distinguish them from the plerionic composites. 3C 397 (G41.1-0.3) is one of the brightest Galactic radio SNRs, and its classification remains ambiguous. In the radio, it is classified as a shell-type SNR, based on its steep spectral index $\alpha$ = 0.48 and shell-like morphology. At 1 GHz, it has a flux density of 22 Jy (Green 1998), ranking as the 5th brightest among $\sim$ 100 remnants. The distance to G41.1-0.3 has been estimated as greater than 6.4 kpc on the basis of neutral hydrogen absorption measurements (Caswell et al. 1975). No absorption is seen at negative velocities, locating the remnant closer than 12.8 kpc. An HII region lying $\sim$ 7$'$ west of the SNR is likely a foreground object, and was located between 3.6 kpc and 9.3 kpc (Cerosimo & Magnani 1990).
At a distance of 10 kpc, the linear size of the radio shell would be 7 $\times$ 13 pc. High-resolution radio imaging of G41.1-0.3 (Becker, Markert, & Donahue 1985; Anderson & Rudnick 1993) indicates that the remnant brightens towards the Galactic plane, and is highly asymmetric. It has the appearance of a shell edge-brightened in parts, and lacks the symmetry seen in the young historical SNRs, such as Cas A and Tycho. 3C 397 is slightly polarized, with an overall polarized fraction of only 1.5%. Kassim (1989) derives an integrated spectral index $\alpha$ = 0.4, with a turnover at a frequency less than 100 MHz. The discrepancy between the spectral indices derived by Kassim and Green is most likely due to uncertainties in measuring the total flux density of 3C 397 at centimeter wavelengths, and is attributed to confusion with the nearby HII region and the Galactic background. Anderson & Rudnick (1993) investigate the variations of the spectral index across the remnant, and find variations of the order of $\delta$$\alpha$ $\sim$ 0.2 ($\alpha$ $\sim$ 0.5–0.7). The variations do not coincide with variations in the total intensity. They suggest that interactions between the expanding SNR and inhomogeneities in the surrounding medium play a major role in determining the spatial variations of the index across the remnant. Dyer & Reynolds (hereafter DR 99), while finding a similar magnitude of spectral index variations, did not confirm Anderson & Rudnick’s detailed spatial results, suggesting that the variations are due to image reconstruction problems or other difficulties. 3C 397 has no optical counterpart, and is not observed in the UV, probably because it lies in the Galactic plane. An IRAS survey of Galactic SNRs (Saken, Fesen & Shull 1992) did not yield a positive identification of the SNR in the far infrared.
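The angular-to-linear size conversion behind the quoted shell size is simple small-angle geometry; a minimal sketch, assuming the 10 kpc distance used above (the $\sim$ 2.5$'$ $\times$ 4.5$'$ angular extent is the value implied by the 7 $\times$ 13 pc figure):

```python
import math

PC_PER_KPC = 1000.0

def linear_size_pc(theta_arcmin, distance_kpc):
    """Linear size (pc) subtended by a small angle theta_arcmin at a distance."""
    theta_rad = math.radians(theta_arcmin / 60.0)
    return theta_rad * distance_kpc * PC_PER_KPC

# At 10 kpc, a ~2.5' x 4.5' shell corresponds to ~7 x 13 pc:
width = linear_size_pc(2.5, 10.0)
length = linear_size_pc(4.5, 10.0)
```

Because the size scales linearly with distance, the 6.4–12.8 kpc range allowed by the HI absorption measurements brackets the true dimensions to within a factor of two.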
In X-rays, 3C 397 was first detected with the Einstein Imaging Proportional Counter (IPC) and High-Resolution Imager (HRI), showing two central regions of enhancement (Becker et al. 1985), neither of which correlates with any bright radio feature. For a thermal model, the IPC data imply an electron temperature $kT$ $\leq$ 0.25 keV and N$_H$ $\geq$ 5 $\times$ 10$^{22}$ cm$^{-2}$, implying an optical extinction $A_v$ $\geq$ 22.5 (Gorenstein 1975). The quality of the data was however poor, and clearly other models were not ruled out. ROSAT observations of 3C 397 with the PSPC (Rho 1995, Rho & Petre 1998, DR 99) reveal 2.5$'$ $\times$ 4.5$'$ diffuse emission, with central emission and an enhancement along the western edge. Spectrally, thermal models are favored over non-thermal models, which yield a very steep power law index (Rho 1995). The spectrum is generally described by a thermal plasma with $kT$ $\sim$ 1.7 keV. However, the fit is poor, probably due to a combination of abundance effects, nonequilibrium ionization, and the unrealistic assumption of a single temperature. 3C 397 has similar characteristics to the ‘mixed-morphology’ SNRs, in that it has a centrally bright X-ray morphology characterized by a thermal X-ray spectrum; however, the quality of these data and the narrow bandpass of ROSAT do not allow for an accurate determination of its X-ray emission mechanism. This left its classification as a ‘mixed-morphology’ candidate highly uncertain (Rho & Petre 1998). Combined ROSAT HRI and high resolution radio images of 3C 397 (DR 99) reveal a more complicated morphology. The SNR appears brightest along the western edge of the shell, in both the radio and X-ray images. Moreover, a bright spot was found with the ROSAT HRI, at the center of the SNR shell, but not correlated with any radio enhancement (Figure 12 in DR 99). No pulsations are found to be associated with the X-ray hot spot. In this paper, we present a broadband X-ray study of 3C 397 with the ASCA, ROSAT and RXTE satellites.
While ROSAT has the highest spatial resolution, ASCA has the advantage over ROSAT in its broad bandpass and higher spectral resolution, which allowed us to detect strong emission lines typical of those seen in the spectra of young SNRs. RXTE complements the other observations with its higher energy coverage, which allows us to search for a hard non-thermal component. A preliminary analysis of the ASCA GIS data was performed by Keohane (1998) and Reynolds & Keohane (1999), in order to obtain an upper limit on a possible nonthermal component resulting from the extension of the radio spectrum including an exponential cutoff in the electron distribution. The authors found that the radio spectrum had to begin rolling off around $3 \times 10^{16}$ Hz in order to avoid exceeding the continuum around 1 keV. Though they did not perform extensive self-consistent spectral fitting, they found that a very hard component was also necessary to explain the continuum beyond a few keV. The time resolution of RXTE ($\sim$ 1 $\mu$s) allowed us to search for pulsations down to the millisecond range. The analysis of the data was however complicated by the contamination of the source spectrum with the emission from the Galactic ridge. The RXTE data have allowed us to: 1) confirm the presence of the hard component required to fit the ASCA data and better constrain its parameters; 2) find evidence of a third weak component, whose parameters were poorly determined due to the contamination by the Galactic ridge; and 3) set an upper limit on the flux from a possibly hidden compact source. The paper is organized as follows: Section 2 summarizes the observations. In Sections 3 & 4, we present the spatial and spectral results with ASCA and ROSAT. In Section 5, we present the RXTE results. In Section 6, we summarize the results from the timing analysis. We discuss the implications of our results in Section 7. Finally, we summarize our conclusions.
Observations ============ ASCA ---- 3C 397 was observed using ASCA (Tanaka, Inoue, & Holt 1994) on 1995 July 4. We extracted the data from the HEASARC public database, and present the observations acquired with the Gas Imaging Spectrometer (GIS), and the Solid State Imaging Spectrometer (SIS). The detectors are sensitive to X-rays in the 0.4–10 keV range, with a spectral resolution at 6 keV of 2% for the SIS and 8% for the GIS ($\sim$$E^{-1/2}$). The point spread function of the GIS alone is a Gaussian with a full width at half max (FWHM) of 30$''$ (at 6 keV, $\sim$$E^{-1/2}$). The intrinsic spatial broadening of the SIS detectors is negligible compared to that of the X-ray telescope (XRT). Therefore the spatial resolution of the detectors is limited by the point spread function of the X-ray telescope, which has a relatively sharp core (FWHM of 50$''$), but broad wings (50% encircled radius of 1.5$'$). The data were screened using the standard process. The pointing is at $\alpha$ = 19$^h$ 07$^m$ 43$^s$.20, $\delta$ = +07$^\circ$ 13$'$ 59$''$ (J2000). The observations were performed using the standard time resolution: 0.5s for medium bit rate, and 62.5 millisecond for the high-bit rate. The SIS data were acquired in 1-CCD mode, and read out every 4s. For the timing analysis, we use the GIS high-bit rate data. For the spectral analysis, we use both the SIS and GIS data in high-bit rate mode. We note that Chen et al. (1999) have reported the analysis of the same ASCA SIS data. In their paper, the authors use a blank sky field for background subtraction. In our paper, we subtract the background from the same field. This method is more appropriate since 3C 397 lies in the Galactic ridge, and removing any contamination from the ridge is necessary before drawing any conclusion on the source spectrum. Furthermore, in addition to the SIS, we analyze the GIS data, which are more appropriate than the SIS for studying the hard component.
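The $\sim$$E^{-1/2}$ scaling of the detector responses quoted above can be evaluated directly; a small sketch anchored at the 6 keV values (the 1.5 keV evaluation point is illustrative):

```python
import math

def fractional_resolution(E_keV, res_at_6keV):
    """Fractional energy resolution scaling as E^(-1/2), anchored at 6 keV."""
    return res_at_6keV * math.sqrt(6.0 / E_keV)

# GIS: 8% at 6 keV degrades to 16% at 1.5 keV under this scaling.
gis_at_1p5 = fractional_resolution(1.5, 0.08)
```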
ROSAT ----- In order to better understand the origin of the X-ray emission, we compare the ASCA images with the high-spatial resolution images obtained with ROSAT, sensitive in the $\sim$ 0.1–2.0 keV energy range. 3C 397 was observed with the High-Resolution Imager (HRI) on several occasions in 1994, and with the Position Sensitive Proportional Counter (PSPC) on 1992 October 28 for $\sim$ 4 ksec. The ROSAT data have been analysed and presented elsewhere (Rho 1995, DR 99). To generate the ROSAT images, we extracted the data from the HEASARC public database. For the HRI, we used the longest exposure (52.3 ksec), performed on 1994 October 14. The ROSAT pointing is at $\alpha$ = 19$^h$ 07$^m$ 33$^s$.60, $\delta$ = +07$^\circ$ 08$'$ 24$''$ (J2000). RXTE ---- 3C 397 was observed on six occasions between 1997 December 3 and 1997 December 8 for an effective exposure of 58.4 ks. The 1$^\circ$ FWHM field of view (FOV) of RXTE is pointed at $\alpha$ = 19$^h$ 07$^m$ 34$^s$.99, $\delta$ = +07$^\circ$ 07$'$ 14$''$.9 (J2000). The Proportional Counter Array (PCA) consists of 5 collimated Xenon proportional counter detectors with a total area of 6,500 cm$^2$, an effective energy range of 2–60 keV, and an energy resolution of 18% at 6 keV (Jahoda et al. 1996). The HEXTE (Gruber et al. 1996) instrument consists of two clusters of collimated NaI/CsI phoswich detectors with an effective area of $\sim$ 800 cm$^2$ and an effective energy range of 15–250 keV. In this paper, we report the observations with the PCA. The HEXTE count rates are background dominated, and were not included. For the spectral analysis, we use the data in the standard-2 mode, which provide spectral information. For the timing analysis, we use the Good Xenon modes which provide the highest time resolution ($\sim$1 $\mu$s). Spatial analysis ================ The ASCA pointing is $\sim$ 5$'$ offset from the central “hot spot” detected with the ROSAT HRI. We generate images of 3C 397 corrected for exposure and vignetting.
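In essence, the exposure correction divides the raw counts image by a map of effective exposure time per sky pixel, masking unexposed pixels; a minimal numpy sketch with toy arrays (the values are illustrative, not ASCA data):

```python
import numpy as np

def exposure_correct(counts, exposure):
    """Divide a counts image by an exposure map, zeroing unexposed pixels."""
    rate = np.zeros_like(counts, dtype=float)
    good = exposure > 0
    rate[good] = counts[good] / exposure[good]  # counts s^-1 per pixel
    return rate

counts = np.array([[10.0, 0.0], [5.0, 8.0]])
exposure = np.array([[50.0, 0.0], [25.0, 40.0]])  # seconds per sky pixel
rate = exposure_correct(counts, exposure)
```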
We use the routine $\it{ascaexpo}$, which calculates the net exposure time per sky pixel (http://heasarc.gsfc.nasa.gov/docs/asca/abc/). The total time seen by each sky pixel on the detector is computed using an instrument map (generated with $\it{ascaeffmap}$) and the reconstructed aspect. The output exposure map is subsequently used to normalize the sky image. In Figure 1, we show the generated images in the soft (0.5–4.0 keV) and hard (4–10 keV) energy bands of the GIS. The effective exposure is 50.7 ksec (both GIS detectors), and the images are smoothed with a Gaussian with $\sigma$ = 45$''$. At the harder energies, the emission shows an elongation nearly perpendicular to the Galactic plane. In Figure 2, we compare the ROSAT HRI image (left panel) and the hard band GIS image (right panel). The brightest features in the image, seen at $\sim$ 1$'$–2$'$ east and west of the central hot spot, correlate with enhancements in the radio shell (DR 99). It is clear that these small scale features seen with the HRI cannot be resolved by the GIS. However, the overall morphology of the GIS image shows an elongation along the axis joining the hot spot with the bright radio edges. The hard emission peaks at the western lobe, and is not centrally peaked at the HRI hot spot (denoted by a cross), as would be expected from a plerionic composite. In Figure 3, we overlay contours from the SIS hard band on the ROSAT PSPC image. The SIS contours are correlated with the PSPC intensity map, and again suggest that the hard emission peaks at the western lobe, with some fainter emission associated with the central spot and the eastern lobe. While the GIS is more sensitive than the SIS to the hard X-ray band ($E$ $\geq$ 4 keV), the SIS has a higher efficiency at the softer energies. We subsequently examine the softness ratio, defined as $\frac{0.5-2\ \mathrm{keV}}{2-4\ \mathrm{keV}}$, with the SIS.
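A softness-ratio map of this kind is the pixel-by-pixel ratio of the two band images; a sketch with toy band images (values are illustrative):

```python
import numpy as np

def softness_ratio(soft_band, hard_band):
    """Ratio of the 0.5-2 keV to the 2-4 keV image, NaN where the hard band is empty."""
    ratio = np.full(soft_band.shape, np.nan)
    good = hard_band > 0
    ratio[good] = soft_band[good] / hard_band[good]
    return ratio

soft = np.array([[4.0, 2.0], [6.0, 0.0]])
hard = np.array([[2.0, 0.0], [3.0, 1.0]])
r = softness_ratio(soft, hard)  # large values flag soft-emission enhancements
```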
In Figure 4, we show the resulting image (left panel), with the total soft band (0.5–4 keV) image obtained with the SIS (right panel). The HRI contours are overlaid on both images, to show that there is an enhancement of soft emission from the central region. Spectral analysis ================= To study the X-ray spectrum, we have extracted events from a circular region of radius $\sim$ 7$'$ from the GIS field, encompassing the entire SNR. For the SIS, which has a smaller FOV than the GIS, 3C 397 fills a large fraction of the chip, and extends to the edges of the CCD along the direction perpendicular to the Galactic plane. We have extracted the source events from a circular region of radius $\sim$ 4.2$'$ to avoid the CCD chip boundaries. Since 3C 397 lies $\sim$ 0.3$^\circ$ below the Galactic plane, and only 41$^\circ$ in longitude from the Galactic center, the source spectrum is expected to be contaminated by emission from the Galactic ridge. The large FOV of the GIS allowed us to extract a background spectrum from the same field. For the SIS, it was also possible to extract a background (of radius $\leq$ 1$'$) from the same chip, since 3C 397 does not fill the FOV along the direction parallel to the Galactic plane. This method of background subtraction has the advantage of providing the most accurate model of any spatial or temporal contamination which would affect the spectral analysis. While the extracted SIS spectrum contains most of the emission from the SNR, it is possible that we are missing part of the source flux. This was taken into account by introducing a relative normalization between the SIS and the GIS spectra. Throughout the paper, we perform the flux estimates from the different components of the SNR using the GIS. The background subtracted count rates in the 0.6–9 keV energy range are 0.524 $\pm$ 0.005 and 0.613 $\pm$ 0.005 counts s$^{-1}$ from GIS2 and GIS3 respectively. The corresponding SIS0 and SIS1 count rates are 0.768 $\pm$ 0.007 and 0.580 $\pm$ 0.007 counts s$^{-1}$ respectively.
In the following, we present our spectral results for both the SIS and GIS data. We fit the SIS data in the 0.6–9 keV range, and the GIS over the 0.8–9 keV band. We disregard energies above 9 keV due to the poor signal-to-noise ratio. We combine the SIS and GIS data to show a joint fit, to which we subsequently add the RXTE PCA data. Single-component models ----------------------- The SIS and GIS spectra are clearly dominated by emission lines, with strong emission from Mg, Si, S, and Ar; the most prominent feature is the Fe-K line. We have first attempted to fit the spectra with Raymond-Smith (RS; Raymond & Smith 1977) and MEKAL (Mewe, Gronenschild, & van den Oord 1985; Liedahl et al. 1990) models, which are appropriate for modeling plasma in collisional equilibrium ionization. A single-component collisional equilibrium ionization model with solar abundances does not yield an acceptable fit (reduced chi-squared $\chi_{\nu}^2$ = 6.7, $\nu$ = 735, $\nu$ being the number of degrees of freedom). We subsequently used Sedov models (Hamilton, Sarazin, & Chevalier 1983, hereafter HSC 1983; Borkowski et al., in preparation) which follow the time-dependent ionization of the plasma in a supernova remnant evolving according to Sedov self-similar dynamics. These models are especially important for describing the emission from SNRs whose age is smaller than the time required to reach ionization equilibrium. They are a subclass of non-equilibrium ionization (NEI) models, and include the range of temperatures found in a Sedov remnant. They can therefore account for the hotter X-ray emission expected to originate from the inner parts of such SNRs. The modifications introduced by Borkowski et al. (in preparation) include improved atomic data (in particular, Fe L-shell line data are based on theoretical calculations by Liedahl, Osterheld, & Goldstein 1995) and the possibility of describing plasmas without electron-ion equipartition, including an incomplete heating of electrons at a blast wave.
The models are characterized by three parameters: the shock temperature $T_s$, the post-shock electron temperature $T_e$ ($\le T_s$), and $\eta$ = $n_0^2 E$, which characterizes the rate at which the plasma relaxes to ionization equilibrium ($n_0$ is the hydrogen number density in the unshocked ambient medium, and $E$ is the explosion energy). An equivalent parameter to $\eta$ is the ionization time-scale $n_0 t$ = 1.24 $\times$ 10$^{11}$ $\eta_{51}^{1/3}$ $T_{s,7}^{-5/6}$ (cm$^{-3}$ s), where $\eta_{51}$ is $\eta$ in units of 10$^{51}$ erg cm$^{-6}$, and $T_{s,7}$ is the shock temperature in units of 10$^7$ K. We find that the Sedov model provides a better fit, but only accounts for the X-ray emission up to about 4 keV. Even a Sedov fit with non-equipartition ($T_e$ $\neq$ $T_i$; where $T_e$ and $T_i$ represent the electron and ion temperatures respectively) does not provide a satisfactory fit to the hard component. In Figure 5, we show the single-component Sedov fit, characterized by an interstellar absorption $N_H$ = 2.75 $\times$ 10$^{22}$ cm$^{-2}$, a temperature $kT_s$ = 0.15 keV, and an ionization parameter of $n_0t$ = 1.86 $\times$ 10$^{12}$ cm$^{-3}$ s ($\chi^2_{\nu}$ = 2.89, $\nu$ = 734). Varying the metal abundances within the collisional equilibrium or the Sedov models improves the fits. However, the models still do not account for the hard X-ray emission (above 4 keV), and the fits are unacceptable ($\chi^2_{\nu}$ $\geq$ 2.6). Two-component models -------------------- While the single-component models account for the X-ray emission at energies below about 4 keV, they fail to fit the higher energy component (Figure 5). This result was also found by Chen et al. (1999). We note that even though the Sedov model does include higher-temperature gas from the remnant interior, this model does not account for the hard component.
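For reference, the $\eta$-to-$n_0t$ conversion given above is easy to evaluate; a small helper implementing that scaling (input values below are illustrative):

```python
def ionization_timescale(eta_51, T_s7):
    """n_0*t in cm^-3 s, from eta = n_0^2 E in units of 1e51 erg cm^-6
    and the shock temperature in units of 1e7 K (HSC 1983 scaling)."""
    return 1.24e11 * eta_51 ** (1.0 / 3.0) * T_s7 ** (-5.0 / 6.0)

# For eta_51 = 1 and T_s = 1e7 K the prefactor is recovered directly:
n0t = ionization_timescale(1.0, 1.0)
```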
In particular, the most prominent emission line near 6.55 keV is not accounted for, indicating that the Fe-K line region cannot be explained by the same component responsible for the Fe-L emission, and must be characterized by different ionization parameter values. This result was also found in the young ejecta-dominated SNRs such as Tycho (Hwang, Hughes, & Petre 1998), Cas A (Borkowski et al. 1996) and Kepler (Tsunemi, Kinugasa, & Ohno 1996). In order to characterize the hard component, we first fit the data in the 4–9 keV range with a thermal bremsstrahlung (TB) model plus a Gaussian to account for the Fe-K emission line. The centroid of the Fe-K line is at $E$ = 6.55 keV (6.51–6.60, 3$\sigma$), and the TB temperature is $kT_h$ = 2.5 keV (1.6–4.2, 3$\sigma$). For a collisional equilibrium ionization model, such as Raymond-Smith, the centroid of the strongest Fe-K lines should be $\sim$ 6.7 keV (He-like) and $\sim$ 6.95 keV (H-like). The low fitted centroid energy indicates that the hard component has not reached ionization equilibrium (Borkowski & Szymkowiak 1997), and should be characterized by a NEI model. We use a number of NEI models (Borkowski et al., in preparation), which are now released in XSPEC 11: - [PSHOCK, which comprises a superposition of components of different ionization ages appropriate for a plane-parallel shock. This model is characterized by a constant electron temperature, $T_e$, and the shock ionization age, $n_0 t$ (where $n_0$ is the pre-shock density, and $t$ is the age of the shock; the post-shock density is constant).]{} - [NEI, a constant-temperature, single-ionization time-scale NEI model.]{} - [SEDOV, a NEI model based on the Sedov dynamics, which includes a range of temperatures, as described above. ]{} A proper definition of ionization time-scale is the product of postshock electron density $n_e$ and age $t$, because the plasma ionization state depends on $\int n_e dt$.
This is the parameter which enters any NEI model, including the models just mentioned. But the quantity of most interest here is $n_0t$, which is equal to $n_et/4.8$ for cosmic abundance plasma and the strong shock Rankine-Hugoniot jump conditions; $n_0$ here includes only hydrogen. For convenience, we refer to both $n_et$ and $n_0t$ as the ionization time-scale throughout this work. In the 4–9 keV energy range, the ASCA data are fitted equally well with the NEI and PSHOCK models with the solar Fe abundance. With the NEI model, we obtain $kT_e$ = 2.45 (1.8–4.2) keV, $n_0t$ = 3.1 (1.5–8.3) $\times$10$^{10}$ cm$^{-3}$ s, and $\chi_{\nu}^2$ = 1.0 ($\nu$=145). As expected, the electron temperature in this model is equal to the TB temperature, and the plasma is underionized. We then fit the entire energy range (0.6–9 keV) with various two-component models, using the current NEI models mentioned above, and we show the results in Table 1. In fitting the data, we added a Gaussian near 3.1 keV to account for the emission line from Argon, since current NEI models do not include emission from this element. Since the soft component is characterized by a relatively long ionization time-scale ($n_0t$ $\sim$ 10$^{12}$ cm$^{-3}$ s, previous section), we can represent it using a collisional equilibrium ionization model. We also tried representing the soft component with NEI models, but because these fits are significantly worse (with a reduced $\chi^2 > 2$) we do not include them in Table 1. The harder component is characterized by a lower ionization time-scale, and it is necessary to fit it with a NEI model. From Table 1, we find that independently of the NEI model used, the fitted temperatures, ionization time-scales, and emission measures ($EM$) are in reasonable agreement.
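The bookkeeping between the two ionization time-scale conventions is a single factor; a sketch (the example value is illustrative):

```python
def n0t_from_net(net):
    """Convert the ionization time-scale n_e*t (postshock electron density
    times age) to n_0*t, using n_e = 4.8 n_0 for a cosmic-abundance plasma
    behind a strong (Rankine-Hugoniot) shock; n_0 counts hydrogen only."""
    return net / 4.8

# e.g. n_e*t = 2.88e11 cm^-3 s corresponds to n_0*t = 6.0e10 cm^-3 s
n0t = n0t_from_net(2.88e11)
```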
The best-fit two-component NEI model yields $N_H$ = 3.21 $\times$ 10$^{22}$ cm$^{-2}$, $kT_l$ = 0.19 keV, $kT_h$ = 1.52 keV, $n_0t_l$ = 5.6 $\times$ 10$^{12}$ cm$^{-3}$ s, and $n_0t_h$ = 6.0 $\times$ 10$^{10}$ cm$^{-3}$ s; where $l$ and $h$ refer to the low-temperature and high-temperature components respectively. The fit yields a reduced $\chi^2$ of 1.37 (for 729 degrees of freedom). In Figure 6, we show the ASCA data fitted with the corresponding model. We note that when fitting the hard component with a $\it{SEDOV}$ model (with the soft component fitted with an equilibrium ionization model), we use both equipartition ($T_e$=$T_i$) and non-equipartition ($T_e$$\neq$$T_i$) models. We find that the non-equipartition model improves the fit, and the corresponding parameters are: $kT_s$=1.73 keV, $kT_e$=0.86 keV, $n_0t$=9.6$\times$10$^{10}$ cm$^{-3}$ s. In Figure 7, we show the confidence levels for the electron temperature, $T_e$, versus the shock temperature, $T_s$. For the shock speeds and ionization timescales derived for the hot component, Coulomb heating is effective, and the mean electron temperature in the shocked gas is much larger than the postshock electron temperature $T_e$, and equal to about 1.5 keV. The hot component temperature in all two-component fits is lower than the temperature of 2.45 keV derived from fitting the hard (4–9 keV) component independently (using the NEI model with solar abundances). In addition, two-component models always produce too few counts at high energies. We believe that this is caused by the presence of multi-temperature plasma in 3C 397, with temperatures in the range 0.17 keV – 2.5 keV, a likely possibility in view of the complex morphology of 3C 397. Because two-component fits are apparently too simple to describe the ASCA spectra, we attempted to fit a three-component model.
This has not resulted in a better fit, presumably because the description of multi-temperature plasma in terms of just 3 components might still be grossly inadequate, while giving us too many parameters to be reliably determined from the spatially-integrated X-ray spectrum alone. More complex multi-component models might give a better fit, but the problem with a large number of parameters remains, so that we did not pursue multi-component fitting beyond a 3-component model. Abundances ---------- Varying the metal abundances improves the spectral fitting, indicating that at least part of the X-ray spectrum may be associated with an ejecta component. Using the best-fit two-component model, we froze the abundances of the hard component at their solar values, and varied the abundances of the elements producing strong lines in the soft component. We find that the absolute values of the abundances of the individual elements are highly uncertain, while their relative values are nearly the same. Therefore, we indicate their ratios relative to Si, relative to solar (given by Anders & Grevesse 1989). Using a variable-abundance model for the soft component, we allow the abundances of O, Ne, Mg, Si, S, Fe and Ni to vary, with Fe and Ni tied together. We find that the fit improves by a $\Delta\chi^2$ = 324, and yields $\frac{O}{Si}$=4.0 (3.0–5.8), $\frac{Ne}{Si}$=5.8 (2.6–9.7), $\frac{Mg}{Si}$=1.3 (0.9–1.9), $\frac{S}{Si}$=22 (17–28), $\frac{Fe}{Si}$=1.7 (0–4.3) (2$\sigma$, $\chi^2_{\nu}$=1.16, $\nu$=728). The apparent high S abundance might be an artifact of the models used, because the S line is in the energy range where X-ray spectra from the low- and high-temperature components overlap. We also investigated the abundances of the hard component, by fitting the soft component with solar abundances, and allowing the abundances of the hard component to vary. We fix H, He, C, N, and O to solar; we tie Ni to Fe; and allow Mg, Si, S, and Fe to vary.
We find that the fit improves by $\Delta\chi^2$ = 280, and yields $kT_h$ = 1.39 keV, Mg=2.5, Si=0, S=1.33, Fe=Ni=1.49. While varying the abundances in this way does improve the fit, the inference of strong Mg and Fe, but no Si, suggests that other possibilities for explaining the less-than-ideal fit, such as the presence of multi-temperature plasma, should be examined. We already know that the hard component temperature is lower in the two-component fits than from fits to high energy ($> 4$ keV) data alone. An underestimate of the temperature of the hard component would underestimate the continuum, which would in turn artificially boost the abundances from this component. We also tied the abundances of the soft and hard components and allowed them to vary. We used the same models as above to represent the soft and hard components, respectively. We fix H, He, C, and N to solar; we tie Ni to Fe; and allow O, Ne, Mg, Si, S, and Fe to vary. The fit yields a $\chi^2_{\nu}$= 1.16 ($\nu$=728) with the following abundance ratios: $\frac{O}{Si}$ = 3.2 (2.6–4.1), $\frac{Ne}{Si}$ = 2.3 (0.05–3.1), $\frac{Mg}{Si}$ = 1.2 (0.8–1.4), $\frac{S}{Si}$ = 2.9 (2.5–3.3), $\frac{Fe}{Si}$ = 1.8 (1.4–2.3); the ranges are at the 90% confidence level. In view of a possible presence of a low-temperature ejecta component in 3C 397, one might inquire about the type of the supernova (SN) progenitor by examining in more detail the abundances determined from fitting the X-ray spectrum. Numerical models for the nucleosynthetic yield as a function of the progenitor’s mass have been calculated by Tsujimoto et al. (1995) and many others. The models predict approximately solar abundances of O and Ne with respect to Si for core-collapse SNe. For type Ia explosions, the calculations of Nomoto et al. (1984) indicate negligible O, Ne, and Mg abundances (with respect to Si), and a large Fe to Si ratio. The high interstellar absorption towards 3C 397 did not allow a direct detection of the O and Ne lines in the $\sim$ 0.7–1 keV range.
However, since these elements also provide recombination continuum emission, a large O (and Ne) to Si ratio was shown to improve the fit to the soft component. Since the O, Ne, and Fe abundances are a good indicator for the type of the SN explosion, we have tested their ratios relative to Si (relative to solar) by: - [ Allowing O, Ne, Mg, and Si to vary independently, using the two-component model. The $\chi^2$ value decreases from 1,053 to 1,003 ($\nu$=729) and yields the following ratios: $\frac{O}{Si}$ = 2.8, $\frac{Ne}{Si}$ = 2.8, $\frac{Mg}{Si}$ = 2.2.]{} - [Setting O to zero, a value consistent with a type Ia yield, keeping the Fe abundance frozen to solar. The fit does not improve ($\chi^2$=1,078, $\nu$=729), and yields $\frac{Ne}{Si}$ = 1, $\frac{Mg}{Si}$ = 2.]{} - [Setting O to zero and allowing Fe to vary. The fit requires no Fe ($\frac{Fe}{Si}\leq$1.3, 2$\sigma$), with $\frac{Ne}{Si}$ = 1.5, $\frac{Mg}{Si}$ = 1.1; and a $\chi^2$=1008 ($\nu$=728).]{} - [Forcing a large Fe (Fe $\sim$ 100) as expected from a type Ia yield, and allowing O, Ne, Mg, and Si to vary. The fit gives a $\chi^2$ = 1058 ($\nu$=729), and necessitates very large O, Ne, and Mg abundances relative to Si.]{} - [Finally, allowing O, Ne, Mg, Si, S, and Fe to vary, and tying Mg, Si, and S, we find $\frac{O}{Si}$ = 1.95, $\frac{Ne}{Si}$ = 1.67, and $\frac{Fe}{Si}$ = 0 ($\le$0.6, 2$\sigma$); $\chi^2$ = 1025, $\nu$=728. ]{} We conclude that the abundance ratios with respect to Si are certainly inconsistent with a type Ia yield, but they are consistent with an explosion of a massive progenitor. A large S/Si ratio, $\sim$ 4.4, obtained by varying the S abundance, is inconsistent with both SN types. We already mentioned that the large S abundance might be an artifact of the models used.
Although the inferred overabundances of heavy elements are substantial (larger than in the well-known SNR Cas A), the evidence for enrichment of heavy elements is of a circumstantial nature, and a more definite conclusion about the SN progenitor will probably require the kind of spatially resolved spectral data that the new generation of X-ray telescopes will provide.  results ======== Background ---------- The  PCA instrumental background consists of internal background as well as the background due to cosmic ray flux and charged particle events. We use the latest background model developed for the analysis of faint sources with the PCA. This model (L7/240) accounts for activation in the PCA (http://lheawww.gsfc.nasa.gov/$\sim$stark/pca/pcabackest.html). In Table 2, we list the background-subtracted count rates in the 2.5–20 keV range for the various observation intervals. We use the XTE/PCA internal background estimator script $\it{pcabackest \ v2.0c}$ in order to estimate the PCA background. We disregard energies below 5 keV, in order to avoid the instrumental Xenon-L edge seen in the 4.5–5 keV range. To avoid uncertainties in the background subtraction at the higher energies and to maximize the signal-to-noise ratio, we analyze the PCA data up to 15 keV only. In addition to the instrumental background, we have to account for the emission from the Galactic ridge. Following Valinia & Marshall (1998, hereafter VM 98), we approximate the ridge emission with a two-component model consisting of a power law with a photon index, $\Gamma_{GR}$, plus a Raymond-Smith thermal plasma with a temperature, kT$_{GR}$. We allow these parameters to vary within the range determined by VM 98. In order to determine the normalization, we examine the scans of the Galactic ridge at a latitude of $-0.25^{\circ}$, and within longitudes $30^{\circ}$ and $50^{\circ}$ (after removing the bright sources). 
We also examine the background region selected from the  GIS field of view, and fit its spectrum combined with the  spectrum using the two-component model described above. The corresponding flux is subsequently used in modeling the overall spectrum of 3C 397. We find that only $\sim$15% of the total PCA count rate originates from 3C 397 in the 5–15 keV range. The flux is dominated by the emission from the ridge, since  has a large FOV and lacks the spatial resolution needed to resolve the emission from 3C 397. 3C 397 ------ In fitting 3C 397, we freeze the power law index and the RS temperature of the Galactic ridge, and allow its normalization to span the 3$\sigma$ range determined with the method described above. The spectral fitting with the PCA is insensitive to the soft component (which dominates up to $\sim$ 2 keV); we therefore represent it by the  model. For the hard component, we use the model and find that its parameters are consistent with the  fit. We find that a broad Gaussian line is needed to account for the Fe-line feature seen in the PCA spectrum. It is possible that this line is associated with the background in the field of 3C 397. The lack of spatial resolution, and the uncertainties in the ridge model, leave its origin uncertain. The model describing the hard component of 3C 397 yields $kT_s$ = 1.50 (1.46–1.54, 3$\sigma$) keV, and $n_0t$ = 7.1 (5.3–9.6)$\times$10$^{10}$ (cm$^{-3}$ s). The corresponding observed flux from 3C 397 is $F_x$(5–15 keV) = 3.22 $\times$ 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$, which corresponds to a luminosity $L_x$(5–15 keV) = 4.0 $\times$ 10$^{34}$ $D_{10}^2$ erg s$^{-1}$. The fit yields a $\chi_{\nu}^2$=1.68 ($\nu$=769). In Table 3, we summarize the results of this fit, and in Figure 8 we show the corresponding fit to the combined SIS, GIS, &  spectra, using the +model in the 0.6–15 keV band, and allowing the relative normalization to be a free parameter. 
We have further tested for the parameters of the hard component, independently of the soft component, by fitting the  hard band only (4–9 keV) and the PCA spectrum (5–15 keV band), using a  model. A broad Gaussian line was again needed to account for the Fe-line seen in the PCA spectrum. The model describing the hard emission from 3C 397 yields $kT_s$ = 2.3 (2.1–3.1) keV, and $n_0t$ = 3.1 (1.5–6.3) $\times$10$^{10}$ (cm$^{-3}$ s), with a $\chi^2_{\nu}$=1.08 ($\nu$=170). These parameters are consistent with the fit to the  hard (4–9 keV) band described previously, and are better constrained. Non-thermal emission? --------------------- The emission from 3C 397 is more complicated than can be simply described by a two-component thermal model, partly because the emission from the central region appears at both energy bands. This might be an indication of the presence of an additional unresolved component, possibly non-thermal, hinting at the presence of a plerion. In addition, the GIS image in the hard energy band (4–9 keV) shows that the overall morphology of the remnant follows the enhancements seen in the radio and HRI images (see Figure 2), indicating that part of the flux could be non-thermal and associated with highly energetic electrons accelerated at the SN shock. While the Fe-K emission line at 6.55 keV indicates that the hard component is dominated by thermal X-ray emission, a non-thermal component might be hidden underneath. To determine an upper limit on this component, we fit the hard band (4–9 keV) with a power law model, plus a Gaussian line to account for Fe. The fit yields a photon index, $\Gamma$ = 3.4 (2.5–4.5, 3$\sigma$) and a reduced $\chi^2_{\nu}$=1.0 ($\nu$=143). We also fitted the entire 0.6–9 keV range of the  data with a two-component model:  to account for the soft component, and a power law or a synchrotron model, , to represent the hard component. A Gaussian line was also added near 6.55 keV to account for Fe-K emission. 
The synchrotron model, , is more appropriate for describing synchrotron X-ray emission from SNRs than a power law. It models the particle spectrum cutting off exponentially, and is parameterized by the radio spectral index and flux density, and a characteristic roll-off frequency, $\nu_{roll}$ (Reynolds 1998; Reynolds and Keohane 1999). This gives the sharpest plausible roll-off in a synchrotron spectrum, while the power-law model has no roll-off at all. A true synchrotron description should lie between these extremes. In Table 4, we summarize the parameters of these fits. The +  law and + models yield poorer fits than the +model ($\chi^2_{\nu}$ = 1.75–1.95), as they do not account for the line emission from Sulfur or Iron. We subsequently added the PCA spectrum to test for non-thermal emission in the 5–15 keV band. Fitting the  and the PCA data in the 0.6–15 keV with a +  law model, plus Gaussian lines to account for the emission from Argon and Fe-K, yields a power law index $\Gamma$=4.23 (4.15–4.31, 3$\sigma$), and a reduced $\chi^2_{\nu}$=1.79 ($\nu$=763). This model is again worse than the +model (with solar abundances), implying that the data favor the thermal model. Adding a power law component to the +model (shown in Table 3) improves the fit (an F-test yields a probability of 5 $\times$ 10$^{-4}$). The ++ law model yields $kT_s$=1.44 keV and $n_0t$=8.6$\times$10$^{10}$ (cm$^{-3}$ s) for the hard component. The power law component is characterized by a photon index $\Gamma$ $\sim$ 1.5, and a flux, $F_x$ (5–15 keV) $\sim$ 1.2 $\times$10$^{-12}$ erg cm$^{-2}$ s$^{-1}$. However, the complexity of the model used and the high contamination by the Galactic ridge leave its parameters highly uncertain. 
We note that fits to the  hard band (4–9 keV) plus the PCA spectrum (5–15 keV) with a  model (as described in the previous section) do not require an additional power law, as the derived shock temperature $kT_s$=2.3 (2.1–3.1) keV is higher than the temperature derived above ($kT_s$=1.44 keV). In summary, the  data favor a thermal model for the hard component, and do not allow us to constrain the parameters of an additional power law component, due to the complexity of the model used, the large number of fitted parameters, and the uncertainties in modeling the background. Timing results ============== To search for pulsations in the  data, we extract the source events using the GIS detectors in the high-bit rate mode, with the standard time resolution of 62.5 milliseconds. We perform power spectral density (PSD) analysis on the barycenter-corrected photon arrival times and search for pulsations in the 0.01–8 Hz frequency range. The upper frequency is dictated by the time resolution of the data. No pulsations were found at a significant level ($\geq$ 3$\sigma$). We compute PSDs in two energy bands: soft (0.5–2.4 keV) and hard (2.5–10 keV). We bin the data into 0.5 s bins, and perform a long FFT on the background-subtracted, binned light curves. This allows us to search for pulsations up to 1 Hz. For the higher frequency range (up to 8 Hz), we use the 62.5 millisecond time resolution and perform averaged FFTs of 512 s length each. We find some interesting peaks, but none constitutes a detection at high confidence. We subsequently fold the data at the peaks determined from the PSDs using a Z$_n^2$ test. At the soft energies, no pulsations were found at a level $\geq$ 3$\sigma$, and the pulsed fraction is $\leq$ 7% in the 0.01–8 Hz range. In the harder band (2.5–10 keV), no pulsations were found with a confidence level $\geq$ 1$\sigma$, and the pulsed fraction is $\leq$ 12%. We use the  data to search for higher frequency pulsations. 
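The frequency limits quoted in this section follow directly from the Nyquist criterion, $f_{\rm Ny} = 1/(2\Delta t)$, applied to the time resolutions stated in the text; a minimal check:

```python
# Nyquist frequency for evenly sampled data: f_Ny = 1 / (2 * dt)
def nyquist_hz(dt_seconds):
    return 1.0 / (2.0 * dt_seconds)

print(nyquist_hz(0.0625))  # 62.5 ms GIS resolution -> 8.0 Hz
print(nyquist_hz(0.5))     # 0.5 s binned light curves -> 1.0 Hz
```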
The PCA data observed with Good Xenon modes provide a $\mu$s time resolution. We select the events in the 5–20 keV energy range and from the top layer in order to maximize the signal-to-noise ratio. We apply the barycentric correction, and compute averaged FFTs to search for any coherent pulsations up to 128 Hz. No pulsations were found. The upper limit on the pulsed fraction is $\sim$ 15% in the 5–20 keV range (where we have estimated that $\leq$ 10% of the source count rate originates from a plerion). Discussion ========== The diffuse emission -------------------- In the following, we discuss the origin of the diffuse X-ray emission in the light of the interaction between the SNR material and the surroundings. In the standard picture of the X-ray emission from young SNRs, the soft component arises from shocked ejecta and the hard component is usually attributed to the blast wave. For this remnant, we find that high metal abundances improve the spectral fitting, suggesting that at least part of its X-ray spectrum may be associated with the ejecta. While the absolute values of the abundances are model dependent, we find that the hard component (4–9 keV) could be fitted with a NEI model () with solar abundances, and does not require large metal abundances. It is therefore reasonable to assume that the hard component is associated with the blast wave, and the soft component with the ejecta, most likely of a core-collapse SN as inferred from the observed abundance pattern. It is also possible that the hard component is not due to material shocked by the blast wave, but results from the shock entering very low-density regions, and that we should interpret the soft component as a Sedov blast wave. We discuss both these possibilities below. 
### Young, Ejecta-Dominated Remnant of a Core-Collapse SN If 3C 397 is an ejecta-dominated SNR and the hard component associated with the blast wave, then we may estimate the parameters of the SN explosion using the Sedov model (Table 1). We use a distance of 10 kpc to 3C 397, and estimate the physical parameters in units of $D_{10}$. From the equation in HSC 83 below Equation (10), we can convert the measured emission measure ($EM$) into an upstream density $n_0$. From the  HRI image (DR 99), the mean angular radius is about $1\farcm8$, implying $r_s = 5.3 D_{10}$ pc. From Table 1 (+ non-equipartition fit, last row), we determine $\int n_e n_H dV = 10^{14} (4 \pi D^2) (EM) = 8.4 \times 10^{58} D_{10}^2$ cm$^{-3}$; then from HSC 83: $n_0 (EM) = 5.64 \times 10^{-29} r_s({\rm pc})^{-3/2} \left( \int {n_e n_H dV} \right)^{1/2} = 1.33 D_{10}^{-1/2} \ {\rm cm}^{-3}.$ This implies a swept-up mass of 29 $M_\odot$ (assuming a mean mass per particle $\mu$ of 1.4), consistent with a massive progenitor and an evolutionary stage between ejecta-dominated and Sedov. The parameters of the SN derived using the Sedov non-equipartition fit to the hard component are summarized in Table 5. The Sedov spectral fit actually overdetermines the SNR parameters, since from the observed shock temperature and ionization time-scale alone we can find the remnant age and upstream density. Equations (4a – 4f) in HSC 83, with our measured values of $T_s = 2.01 \times 10^7$ K and $n_0 t = 9.6 \times 10^{10}$ cm$^{-3}$ s, give $v_s = 1,190$ km s$^{-1}$, $E_0 = 0.83 \times 10^{51} D_{10}^2$ erg, $t = 1,750 D_{10}$ yr, and $n_0 ({\rm Sedov}) = 1.73 D_{10}^{-1}$ cm$^{-3}$. This value of $n_0$ is, within the various uncertainties, consistent with that determined from the measured $EM$. 
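The upstream density and swept-up mass quoted above follow from simple scalings; a short script (assuming $D$ = 10 kpc and a mean mass of 1.4 $m_H$ per hydrogen atom, as in the text) reproduces them:

```python
import math

PC = 3.086e18      # cm per parsec
M_SUN = 1.989e33   # g
M_H = 1.6726e-24   # g, hydrogen mass

r_s = 5.3          # shock radius in pc (HRI angular radius of 1.8' at D = 10 kpc)
em_int = 8.4e58    # int n_e n_H dV in cm^-3, from the Sedov fit normalization

# HSC 83 scaling: n_0 = 5.64e-29 * r_s(pc)^(-3/2) * (int n_e n_H dV)^(1/2)
n0 = 5.64e-29 * r_s**-1.5 * math.sqrt(em_int)

# Swept-up mass of a uniform sphere, with a mean mass of 1.4 m_H per H atom
volume = 4.0 / 3.0 * math.pi * (r_s * PC)**3
m_swept = 1.4 * M_H * n0 * volume / M_SUN

print(f"n_0 ~ {n0:.2f} cm^-3")           # ~1.33 cm^-3, as quoted
print(f"M_swept ~ {m_swept:.0f} M_sun")  # ~29 M_sun, as quoted
```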
The swept-up mass from $n_0 ({\rm Sedov})$ is 40 $M_{\odot}$; in either case, we find a large mass, indicating a massive progenitor, since if the ejected mass were only 1.4 $M_\odot$, the remnant should show little or no evidence of ejecta by this time. In general, the morphology of the remnant, its location in the Galactic plane, and the suggestion that the soft component dominating the X-ray spectrum is due to ejecta, favor a core-collapse explosion. We note that using the fit to the hard component only (4–9 keV), the model yields a higher temperature $kT_s$=2.45 keV (1.8–4.2 keV), and a slightly lower ionization time-scale $\tau$=$n_0t$= 3.1 (1.5–8.3) $\times$10$^{10}$ cm$^{-3}$ s. The observed emission measure of $EM$=0.0224 corresponds to $\int n_e n_H dV = 2.7 \times 10^{58}$ cm$^{-3}$. If the emission volume $V = f V_{\rm tot}$, with $f$ the filling factor, we find an upstream density $n_0 = 0.27 f^{-1/2}$ cm$^{-3}$, implying a shock age $t = 4,100 \ (1,600 - 9,500) \ f^{1/2}$ yr. For a reasonable filling factor of 0.25, these estimates are quite comparable to those from the Sedov fits above and strengthen our confidence in them. We have thus accounted satisfactorily for the gross remnant properties using only the high-temperature component of the X-ray emission. What, then, of the low-temperature component (Table 1) with an emission measure larger by a factor of about 600? If that component represents shocked ejecta, its mass cannot greatly exceed that of shocked ISM. The only way the emission measure can be greatly increased is if that material is very highly concentrated in small regions, since for a given total mass $M_{\rm ej}$, $\int n_e n_H dV \propto M_{\rm ej}^2 f_{\rm ej}^{-1}$ with $f_{\rm ej}$ the ejecta filling factor. Our fits then suggest that the ejected material, which is unlikely to comprise more than half the swept-up mass of order 30 $M_\odot$, is concentrated in very small regions. 
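The conversion from the fitted emission measure to an upstream density can be checked numerically. The sketch below is one plausible reconstruction of the quoted numbers: the compression factor of 4 and the ratio $n_e \approx 1.2\, n_H$ are standard strong-shock, cosmic-abundance assumptions that the text does not state explicitly.

```python
import math

D_CM = 3.086e22    # 10 kpc in cm
PC = 3.086e18      # cm per parsec

em_norm = 0.0224   # XSPEC-style norm: EM = 1e-14 * int n_e n_H dV / (4 pi D^2)
em_int = em_norm * 1e14 * 4.0 * math.pi * D_CM**2
print(f"int n_e n_H dV ~ {em_int:.2e} cm^-3")   # ~2.7e58, as quoted

# Upstream density for an emission volume V = f * V_tot, with V_tot a sphere
# of radius 5.3 pc.  Assumed (standard, not stated explicitly in the text):
# strong-shock compression of 4 and n_e ~ 1.2 n_H in the shocked gas, so
# n_e * n_H = (4.8 n_0) * (4 n_0) = 19.2 n_0^2.
v_tot = 4.0 / 3.0 * math.pi * (5.3 * PC)**3
f = 1.0  # filling factor; n_0 scales as f**-0.5
n0 = math.sqrt(em_int / (19.2 * v_tot * f))
print(f"n_0 ~ {n0:.3f} * f^-1/2 cm^-3")         # consistent with the quoted 0.27
```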
The  HRI image (Figure 12, DR 99), sensitive to the energy range from 0.4 to 2 keV, should illustrate the spatial location of the soft component material. That material appears to be distributed more or less like the radio emission, largely concentrated near the edges (especially the western edge), with a small, bright region in the interior. However, if that material is concentrated into knots, the image might resemble what is observed. A good example of such clumpy ejecta is provided by optically-emitting O-rich knots in Cas A, which are the most dense, undecelerated ejecta fragments plowing through the ambient circumstellar medium. Because of their high velocities, pressures in these knots are much higher than in the bulk of the shocked ejecta. We envision a similar situation in 3C 397, where the soft X-rays with a large emission measure are produced by dense, fast-moving ejecta clumps, while emission from the large-scale reverse shock should be harder and much fainter. The ram-pressure compressed clumps of ejecta must be at pressures at least an order of magnitude higher than the pressure of the ambient, much more tenuous X-ray emitting gas, because of the factor of 600 larger $EM$ for the soft X-ray component (we expect at most a factor of $\sim 50-100$ larger $EM$ for the X-ray emission from a large-scale reverse shock). Future high spatial resolution observations with  and should provide enough information to confirm or refute this picture. ### A Medium-Aged SNR in a Dense ISM The soft X-ray component may alternatively be identified with the blast wave. While we obtained poor fits with the model, this component is well fit by the  model with temperature $kT_l = 0.175$ keV (Table 1). If this temperature is identified with the post-shock temperature, we obtain 375 km s$^{-1}$ for the blast wave velocity. 
The preshock density $n_0$ may be estimated by noting (for comparable filling factors for the two components) that $n_0 \propto EM^{1/2}$ and that the emission measure ratio between the low- and high-temperature component is equal to $\sim 600$. This gives $n_0(EM) = 33 D_{10}^{-1/2}$ cm$^{-3}$, or a mean postshock electron density of about 160 cm$^{-3}$, implying that the SN progenitor exploded in a particularly dense environment. When combined with the remnant’s angular size, we obtain the total swept-up mass $M_s = 570 D_{10}^{5/2}$ $M_\odot$. Assuming that the remnant’s dynamics can be well described by Sedov dynamics, we estimate the total SN kinetic energy $E$ at $1.2 \times 10^{51} D_{10}^{5/2}$ ergs, parameter $\eta = n_0^2E = 1.1 \times 10^{54} D_{10}^{3/2}$ ergs cm$^{-6}$, ionization timescale $n_0t = 5 \times 10^{12} D_{10}^{3/2}$ cm$^{-3}$ s, and the SNR age $t = 5300 D_{10}^2$ yr. At this stage of its evolution, the remnant may be at the transition from the Sedov stage to the radiative stage, because this transition should occur at $t_{tr} = 2.9 \times 10^4 E_{51}^{4/17} n_0^{-9/17}~{\rm yr} = 5000$ yr, an age equal to the estimated SNR age. The high-temperature component in this picture of a middle-aged SNR must come from the hot interior of 3C 397, occupied by gas shocked by a high-velocity shock earlier in the evolution of the remnant. It is likely that this gas completely fills the remnant’s interior, i.e., its volume filling fraction is equal to $\sim 0.75$. With this filling fraction, we deduce that its electron density $n_e$ is approximately equal to 2.5 cm$^{-3}$. Because the ionization timescale of this hot component is $n_0t = 3.9 \times 10^{10}$ cm$^{-3}$ s (or $n_et = 1.9 \times 10^{11}$ cm$^{-3}$ s) in the model (Table 1), the hot gas was shocked about $n_et/n_e \sim 2500$ yr ago, certainly a reasonable timescale for a 5000 yr old remnant. 
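The scalings in this paragraph are easy to verify. The script below takes the quoted values at face value; the postshock electron density again assumes (as is standard, but not stated in the text) a strong-shock compression of 4 and $n_e \approx 1.2\, n_H$:

```python
import math

# Preshock density scaled from the Sedov-fit value by the EM ratio of ~600
n0_sedov = 1.33          # cm^-3, from the high-temperature (Sedov) component
n0 = n0_sedov * math.sqrt(600.0)
print(f"n_0(EM) ~ {n0:.0f} cm^-3")   # ~33, as quoted

# Mean postshock electron density, assuming compression 4 and n_e ~ 1.2 n_H
n_e = 4.8 * n0
print(f"n_e ~ {n_e:.0f} cm^-3")      # ~156, close to the quoted ~160

# Sedov-to-radiative transition: t_tr = 2.9e4 * E51^(4/17) * n0^(-9/17) yr
E51 = 1.2
t_tr = 2.9e4 * E51**(4.0 / 17.0) * n0**(-9.0 / 17.0)
print(f"t_tr ~ {t_tr:.0f} yr")       # ~4800 yr, i.e. ~5000 yr as quoted
```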
The mass of the hot gas is equal to 30 $M_\odot$, and its pressure appears to be 6–8 times lower than the pressure in the low-temperature component. This pressure is lower than expected, since a Sedov model predicts a contrast of only a factor of $\sim$3. It is possible that the volume filling fraction of the high-temperature component is a factor of 2–3 lower than we assumed, and then its pressure would be higher than derived above. If this is the case, then most of the SNR volume would have to be occupied by a very tenuous hot gas with a negligible emission measure. The origin of the high-temperature component is not clear in the framework of the middle-aged SNR. Its presence may indicate significant departures from a uniform ambient medium, because Sedov models cannot produce such a strong high-temperature component. Because the mass of the hot interior gas is nearly 20 times smaller than the mass of the swept-up shell, this material was shocked early in the evolution of the remnant, and originated relatively close to the SN progenitor. We expect the SN progenitor to be a massive star, which is likely to be found near the place of its birth and associated with dense ISM. But massive SN progenitors modify the distribution of the ambient medium in their vicinity, blowing stellar winds and creating dense gaseous shells. We would then naturally expect deviations from the Sedov dynamics early in the evolution of the remnant. Perhaps the high-temperature component is a relic of earlier stages of the SNR evolution, when the blast wave encountered an ambient medium strongly modified by the SN progenitor. Another possibility is that the overall dynamics of 3C 397 are poorly described by the Sedov solution. We have already concluded that radiative cooling is likely to be important in the 3C 397 shell, which could decrease the shell temperature and enhance soft X-ray emission. 
The ambient ISM may also be clumpy, a likely possibility in view of the generally inhomogeneous nature of the dense ISM, which could also affect the remnant’s dynamics and its X-ray emission. Finally, neglected physical processes such as electron thermal conduction may have caused significant departures from the Sedov solution. For example, Cox  (1999) estimate that the central density at the time of transition from the Sedov stage to the radiative stage is approximately 10 times lower than the preshock density in SNR models with thermal conduction. Because the density ratio between the preshock gas and the high-temperature component in 3C 397 is also of this order, we attempted to fit  spectra with the thermal conduction models kindly provided to us by Randall Smith (these models were used by Shelton  1999 to model SNR W44). While the resulting fits have not produced better results than our one-component fit with the [*SEDOV*]{} model, models with thermal conduction are still a viable alternative because the existing set of models was designed for SNRs with much lower preshock densities, such as W44. A more serious problem might be the lack of evidence for elemental enrichment in the hot component, as we would expect a moderate enhancement of Fe in the remnant’s interior. A detailed study of the 3C 397 dynamics, clearly outside the scope of our present work, is required in order to understand the nature of the high-temperature component in the framework of a middle-aged SNR. A hard non-thermal tail? ------------------------ In the following, we discuss the possibility that the hard component is synchrotron emission from highly relativistic particles accelerated at the SNR shock. We show that a power-law description results in reasonable parameter values, while a somewhat better-motivated cutoff synchrotron model (Reynolds 1998) describes the data as well, giving results consistent with Reynolds and Keohane (1999), which involved highly simplified spectral fits. 
For power-law models, the index, $\Gamma$, is highly dependent on fitting the soft component, being steep ($\Gamma$ $\sim$ 3.4) when the soft component is fitted with a  or  model, and harder for a Sedov fit ($\Gamma$ $\sim$ 2). In the following, we estimate the equipartition magnetic field and the non-thermal energies, whose values are not too sensitive to the power law photon index. Using the power law fit parameters to the hard component only (4–9 keV), the photon index $\Gamma$ is 3.4, and the flux $F_x$ (4–9 keV) is 3.8 $\times$ 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$, corresponding to a luminosity $L_x$ (4–9 keV) of 4.5 $\times$ 10$^{34}$ $D_{10}^2$ erg s$^{-1}$ (unabsorbed). We use the  hard band only since the PCA spectral fitting is highly dependent on the emission from the Galactic ridge. Assuming equipartition between energy in the relativistic electrons and magnetic fields, we estimate a magnetic field $B$ $\sim$ 1.2$\times$10$^{-5}$ ($f_{s} \theta_{1.8}^3 D_{10}) ^{-2/7}$ G, where $f_{s}$ is the fraction of the flux that is synchrotron radiation. The total energy in the electron distribution out to X-ray emitting energies is $U_e$ $\sim$ 0.7 $\times$ 10$^{45}$ $B_{-5}^2$ erg; $B_{-5}$ being the magnetic field in units of 10$^{-5}$ G. This represents a small fraction of the SN explosion energy. The synchrotron lifetime of an electron emitting $\sim$ 9 keV X-rays is $\tau_{1/2}$ $\sim$ 650 $B_{-5}^{-3/2}$ years. The typical electron energies producing the synchrotron photons of energy $E_{\gamma}$ can be estimated to be $E_e \ \sim \ 150 \ B_{-5}^{-1/2} \left(\frac{E_{\gamma}} {9 \ {\rm keV}}\right)^{1/2}$ TeV. A power-law spectrum is not expected to arise naturally in the high-energy part of the electron spectrum; rather, one expects a slow rolloff from the low-frequency synchrotron power-law. 
The simplest description of this rolloff is the synchrotron spectrum from an exponentially cut off power-law electron distribution $N(E) = K E^{-s} \exp(-E/E_{max})$, the “cutoff” synchrotron model (Reynolds 1998), called $SRCUT$ in XSPEC. In $SRCUT$, the fitted roll-off frequency, $\nu_{roll}$, is related to the cutoff electron energy, $E_{max}$, via the relation: $\nu_{roll}$ $\sim$ 0.5 $\times$ 10$^{16}$ $\frac{B}{10^{-5}G}$ $\left(\frac{E_{max}}{10 \ TeV}\right)^2$ Hz. Using the fitted value of $\nu_{roll}$ = 2.89 $\times$ 10$^{16}$ Hz, and an equipartition magnetic field of 10 $\mu$G, we estimate a cutoff electron energy of 24 $B_{-5}^{-1/2}$ TeV. This value is in agreement with the upper limit derived by Keohane (1998) and Reynolds and Keohane (1999). A synchrotron explanation for the hard component results in reasonable parameters whether it is described by a power law or by the cutoff model; better observations will be needed to see if such a component is demanded by the data. A hidden plerion? ----------------- The presence of a pulsar-powered component (plerion) is suggested by the HRI image showing the hot spot at the center of the remnant. We subsequently estimate the intrinsic parameters of a hidden pulsar, taking the hard component luminosity as an upper limit on a plerionic contribution. We assume a Crab-like plerion, and use the empirical formula derived by Seward & Wang (1988), $\log L_x$ (erg s$^{-1}$) = 1.39 $\log \dot{E} - 16.6$; where $L_x$ represents the X-ray luminosity of the plerion in the 0.2–4 keV band. The power law model for the hard component implies an observed flux $F_x$ (0.2–4 keV) = 6.2 $\times$ 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$, which translates to a luminosity $L_x$ (0.2–4 keV) = 4.7 $\times$ 10$^{36}$ erg s$^{-1}$ (after correction for absorption). 
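The inversions used in this and the preceding subsection are easily reproduced; a short script (taking the fitted $\nu_{roll}$, the 0.2–4 keV luminosity, and the quoted scalings at face value) gives:

```python
import math

# SRCUT: nu_roll ~ 0.5e16 * (B / 10 uG) * (E_max / 10 TeV)^2 Hz, inverted
nu_roll = 2.89e16   # Hz, fitted roll-off frequency
b5 = 1.0            # magnetic field in units of 10 uG (equipartition value)
e_max = 10.0 * math.sqrt(nu_roll / (0.5e16 * b5))
print(f"E_max ~ {e_max:.0f} TeV")    # ~24 TeV, as quoted

# Seward & Wang (1988): log L_x = 1.39 log Edot - 16.6, inverted for Edot
L_x = 4.7e36        # erg/s, 0.2-4 keV plerion luminosity (upper limit)
edot = 10.0**((math.log10(L_x) + 16.6) / 1.39)
print(f"Edot <~ {edot:.1e} erg/s")   # ~2e38 erg/s, as quoted

# Minimum-period scaling quoted in the text: P >= 0.08 (t3 * Edot38)^(-1/2) s
def p_min(t3, edot38):
    return 0.08 * (t3 * edot38)**-0.5

print(f"P >= {p_min(1.0, 0.4):.3f} s")  # 10% case (Edot38 = 0.4): ~0.126 s
```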
The Seward & Wang relation then implies a spin-down luminosity $\dot{E}$ $\leq$ 2 $\times$ 10$^{38}$ erg s$^{-1}$, and a period $P$ $\geq$ 0.08 ($t_3$ $\dot{E}_{38}$)$^{-1/2}$ s; where $\dot{E}_{38}$ is the spin-down luminosity in units of 10$^{38}$ erg s$^{-1}$, and $t_3$ is the pulsar’s age in units of 10$^3$ years. We stress that these parameters are derived assuming that the hard component arises entirely from a plerion. These are extreme limits since the hard band GIS image indicates that the hard X-ray emission peaks at the western lobe, with some emission from the central spot (Figure 2). Assuming that only $\sim$ 10% of the hard component arises from the central hot spot, then the spin-down luminosity would be $\leq$ 4$\times$10$^{37}$ erg s$^{-1}$, and the period $P$ $\geq$ 0.12 $t_3^{-1/2}$ s. A synchrotron component in the X-ray spectrum of 3C 397 could also be associated with a central engine injecting relativistic particles, which, as they encounter a strong shock, would produce the high-energy non-thermal tail (up to $\geq$ 9 keV). Such a process is known to occur in the SNR W50 powered by the jet source SS433 (Safi-Harb & Petre 1999 and references therein). The axial ratio of 2:1 in 3C 397 is similar to that in W50, in which the bloating results from the interaction between the jets in SS433 and the SNR shell. Summary and Conclusions ======================== We have presented , ,  and  observations of 3C 397. The  high-resolution HRI image shows a central hot spot, possibly associated with a compact object whose nature remains a mystery, as neither X-ray pulsations nor a radio counterpart has been found. The  and  images show that the remnant is highly asymmetric, having a double-lobed morphology similar to the radio shell. The hard band image obtained with the  GIS overlayed on the  HRI image shows that the hard emission peaks at the western lobe, with little hard X-ray emission originating from the central spot. 
The spectrum is heavily absorbed, and dominated by thermal emission with emission lines evident from Mg, Si, S, Ar and Fe. Single-component models fail to fit the  spectra (0.6–9 keV). Even a model (a NEI model including a range of temperatures) does not account for the emission above $\sim$ 4 keV. Two components, at least, are required to fit the data: a soft component, characterized by a large ionization time-scale, and a hard component, required to account for the Fe-K emission line and characterized by a much lower ionization time-scale. We use a set of NEI models, and find that the fitted parameters are robust. The temperatures from the soft and hard component are $\sim$ 0.2 keV and $\sim$ 1.6 keV respectively. The corresponding ionization time-scales $n_0 t$ are $\sim$ 6 $\times$ 10$^{12}$ cm$^{-3}$ s and $\sim$ 6 $\times$ 10$^{10}$ cm$^{-3}$ s respectively. The large $n_0 t$ of the soft component indicates that it is approaching ionization equilibrium, and it can be fitted equally well with a collisional equilibrium ionization model. The 5–15 keV PCA spectrum, though contaminated by the emission from the Galactic ridge, allowed us to confirm the thermal nature of the hard X-ray emission. Fitting the hard component (5–15 keV band) yields, however, a higher shock temperature ($kT_s$ $\sim$ 2.3 keV) than the one derived from fitting the entire band with two-component models (Table 1). A third, pulsar-driven component is possible, but the contamination of the source signal by the Galactic ridge did not allow us to determine its parameters, or find pulsations from any hidden pulsar. We discuss the two-component model in the light of two scenarios: a young ejecta-dominated remnant of a core-collapse SN, and a medium-aged SNR in a dense ISM.\ In the first scenario, the hot component would arise from the blast wave and the soft component from the ejecta. 
The derived age (a few thousand years) and the presence of a central X-ray source make 3C 397 similar to the young SNRs G11.2–0.3 (Vasisht  1996), Kes 73 (Gotthelf & Vasisht 1997), and RCW 103 (Petre & Gotthelf 1998). G11.2–0.3 harbors a hard X-ray plerion powered by a fast millisecond pulsar (Torii  1997). Kes 73 and RCW 103 harbor radio-quiet X-ray sources: an anomalous X-ray pulsar in Kes 73 (Vasisht & Gotthelf 1997), and a low-mass X-ray binary candidate in RCW 103 (Garmire  2000, IAU Circ 7350). If a central neutron star exists in 3C 397, as suggested by the  HRI image, then it must be radio quiet. The absence of both a radio spot and X-ray pulsations could then be attributed to a cooling neutron star, an anomalous or binary X-ray pulsar, or a weak X-ray plerion buried underneath the SNR.   did not allow an accurate measurement of its spectrum due to the heavy interstellar absorption towards 3C 397, and the narrow energy band of . Future  observations will unveil its nature. We note that recently,  observations of the Cas A SNR revealed a central radio-quiet X-ray source (Pavlov  2000), and that a new 424 ms (radio-quiet) X-ray pulsar has been discovered with  observations of the SNR PKS 1209-52 (Zavlin  2000). The hybrid X-ray morphology of 3C 397, with both shell and central emission, combined with its age, makes it a unique SNR, perhaps a transition object from a shell (like the historical SNRs) into a composite that is well into the Sedov phase of evolution (like Vela). In the second scenario (a middle-aged SNR), the soft component would represent the emission from the SNR expanding in a dense medium. The hard component would arise from the hot interior shocked by a fast shock earlier in the evolution of the remnant. Alternatively, a Sedov model invoking thermal conduction would modify the Sedov dynamics, and produce a hot inner component (as was proposed for the SNR W44; Cox  1999). 
In this scenario, the SNR would be entering its radiative phase, and would emit in the infrared. A 1 ks exposure obtained with the Two Micron All Sky Survey (2MASS) infrared telescope at IPAC reveals, in the J band, faint diffuse emission in the northern part of the SNR (J. Rho, private communication). The extent of the diffuse emission to the west is, however, confused with bright stars in the field. No emission was detected from 3C 397 in the far-infrared. Furthermore, HI and CO observations should reveal the presence of a radiative shell and its interaction with a dense medium (as found for W44, IC 443, and 3C 391: Chevalier 1999). To our knowledge, no evidence of an HI shell or CO emission is found to be associated with 3C 397. The picture we present here is therefore only marginally consistent with this scenario, but it cannot be excluded. Differentiating between the two scenarios requires spatially resolved spectroscopy as well as more detailed modeling (which is outside the scope of this paper). In particular, high spatial resolution X-ray data combined with broadband energy coverage are required to resolve the central component from the outer shell, and unveil the nature of the mysterious X-ray spot. This could be achieved with  and . Furthermore, future gamma-ray observations with high spatial resolution (such as $\it{GLAST}$) will help look for a high-energy tail and test models with a radiative shell. Since 3C 397 is heavily absorbed, infrared observations will be crucial to trace the presence of radiative shocks. In particular, searching for \[OI\] 63$\mu$m line emission will be a powerful probe to test for the radiative model. Finally, observations with millimeter telescopes will enable us to get a better estimate of the kinematic distance to 3C 397, and to measure the density profile of the medium into which it is propagating. 
We have observed 3C 397 with the  telescope (Chile), and the data are in the process of being analyzed (Durouchoux , in preparation). [**Acknowledgments**]{}\ We gratefully acknowledge useful discussions with A. Valinia on the X-ray emission from the Galactic ridge, and thank her for providing us with the  scans of the ridge. We particularly thank E. Gotthelf for his help in using the ftool [$\it ascaexpo$]{}, and U. Hwang & G. Allen for scientific discussions. We are grateful to R. Smith for providing us with his thermal conduction code, and to J. Rho for her input on the 2MASS images, prior to their publication. We thank the referee, Don Cox, for his careful reading and invaluable comments and suggestions.\ This research made use of data obtained with the High Energy Astrophysics Science Archive Research Center (HEASARC) Online Service and NASA's Astrophysics Data System Abstract Service (ADS), provided by the NASA/Goddard Space Flight Center. S.S.H. acknowledges support from the National Research Council. 
Model$^a$                    Soft                                Hard                                           $\chi^2_{\nu}$ ($\nu$)
---------------------------  ----------------------------------  ---------------------------------------------  ------------------------
 +                           $kT_l$ = 0.19 (0.185–0.2) keV       $kT_h$ = 1.52 (1.4–1.62) keV                   1.37 (729)
$n_0 t$$^b$ (cm$^{-3}$ s)    $\tau_l$ = 5.5 $\times$ 10$^{12}$   $\tau_h$ = 5.9 (4.1–9.8) $\times$10$^{10}$     –
$EM$$^c$                     31.5 (24–42)                        0.053 (0.048–0.057)                            –
$L_x$$^d$                    4.8$\times$10$^{38}$                2.65$\times$10$^{36}$                          –
 +                           $kT_l$ = 0.175 (0.17–0.18) keV      $kT_h$ = 1.5 (1.34–1.60) keV                   1.57 (730)
$n_0 t$ (cm$^{-3}$ s)        –                                   $\tau_h$ = 3.9 (3.1–5.5) $\times$10$^{10}$     –
$EM$                         43.5 (33–57)                        0.057 (0.050–0.068)                            –
$L_x$                        5.5$\times$10$^{38}$                1.58$\times$10$^{36}$                          –
 +                           $kT_l$ = 0.175 (0.172–0.18) keV     $kT_h$ = 1.5 (1.39–1.61) keV                   1.6 (730)
$n_0 t$ (cm$^{-3}$ s)        –                                   $\tau_h$ = 8.5 (5.7–16) $\times$10$^{10}$      –
$EM$                         43.5 (33–56)$^b$                    0.057 (0.048–0.068)                            –
$L_x$                        5.5$\times$10$^{38}$                2.2$\times$10$^{36}$                           –
 + ($T_e$ = $T_i$)           $kT_l$ = 0.174 (0.17–0.18) keV      $kT_h$ = 1.14 (1.0–1.29) keV                   1.66 (730)
$n_0 t$ (cm$^{-3}$ s)        –                                   $\tau_h$ = 1.1 (0.8–2.0) $\times$10$^{11}$     –
$EM$                         43.5 (30–56)                        0.063 (0.053–0.083)                            –
$L_x$                        5.5$\times$10$^{38}$                1.21$\times$10$^{37}$                          –
 + ($T_e$ $\neq$ $T_i$)      $kT_l$ = 0.177 keV                  $kT_s$ = 1.735 (1.65–1.9) keV                  1.65 (730)
                                                                 $kT_e$ = 0.87 (0.8–0.9) keV                    –
$n_0 t$ (cm$^{-3}$ s)        –                                   $\tau_h$ = 1.17 (0.8–1.85) $\times$10$^{11}$   –
$EM$                         39                                  0.065 (0.058–0.068)                            –
$L_x$                        4.9$\times$10$^{38}$                5.9$\times$10$^{36}$                           –
 + ($T_e$ $\neq$ $T_i$)      $kT_l$ = 0.187 keV                  $kT_s$ = 1.73 (1.54–1.91) keV                  1.39 (729)
                                                                 $kT_e$ = 0.86 (0.79–0.93) keV                  –
$n_0 t$ (cm$^{-3}$ s)        $\tau_l$ = 1$\times$10$^{13}$       $\tau_h$ = 9.6 (7.4–11.8) $\times$10$^{10}$    –
$EM$                         39                                  0.070 (0.066–0.076)                            –
$L_x$                        4.8$\times$10$^{38}$                9.8$\times$10$^{36}$                           –

: Two-component model fits to the SIS and GIS spectra. $^a$ The NEI models used are (constant-temperature single-ionization timescale NEI model), (plane-parallel NEI shock model), and the models (Borkowski , in preparation). 
The subscripts $l$ and $h$ refer to the low-energy and high-energy components respectively.\
$^b$ Ionization time-scale; $n_0$ is the preshock hydrogen density.\
$^c$ The emission measure in units of $\frac{10^{-14}}{4\pi D^2}$ $\int (n_e n_H dV)$ (cm$^{-5}$).\
$^d$ The X-ray luminosity in the 0.5–9 keV range (at a distance of 10 kpc).\

Observation Number   Date         Time ($\times$10$^4$ s)   PCA$_{L7}$ ()$^a$
-------------------- ------------ ------------------------- -------------------
01-00                1997-12-03   1.034                     12.90 $\pm$ 0.07
01-01                1997-12-04   1.216                     13.74 $\pm$ 0.06
01-02                1997-12-06   0.488                     14.97 $\pm$ 0.10
01-03                1997-12-07   1.240                     13.27 $\pm$ 0.06
01-04                1997-12-08   0.952                     12.70 $\pm$ 0.07
01-05                1997-12-08   0.910                     12.85 $\pm$ 0.07

: PCA observation segments of 3C 397. $^a$ Background-subtracted count rates in the 5–15 keV range, using the L7/240 background model. The uncertainties reflect the statistical errors only, and do not include any systematic errors associated with the background subtraction.\

Model                   Soft                                      Hard
----------------------- ----------------------------------------- -----------------------------------------------------
 +                      $kT_l$ = 0.175                            $kT_h$ = (1.46–1.54) keV
Ionization time-scale   –                                         $n_0t$ = (5.3–9.6) $\times$ 10$^{10}$ (cm$^{-3}$ s)
EM$^a$                  43                                        0.058 (0.054–0.061)
Galactic ridge          $kT_{RS}^a$=3.2 keV                       $\Gamma_{power \ law}^a$=1.7
                        $Norm_{RS}^b$ = 1.315$\times$10$^{-2}$    $Norm_{power \ law}^c$ = 7.227 $\times$ 10$^{-3}$

: The  and PCA data fitted in the 0.6–15 keV range. The model consists of a component to account for the soft component, and a component to model the hard emission. 
The Galactic ridge was fitted with the two-component model of VM 98.\
$^a$ In units of $\frac{10^{-14}}{4\pi D^2}$ $\int (n_e n_H dV)$ (cm$^{-5}$)\
$^b$ Frozen\
$^c$ In units of ph cm$^{-2}$ s$^{-1}$ keV$^{-1}$ at 1 keV\

$kT_l$ ()   Non-thermal Model                          $\chi^2_{\nu}$($\nu$)
----------- ------------------------------------------ -----------------------
0.18 keV    $\Gamma^a$=4.3                             1.75 (729)
0.21 keV    $\nu_{roll}^b$=2.9$\times$10$^{16}$ (Hz)   1.95 (730)

: The  data fitted with a model (soft) and a nonthermal component (hard) – a power law or . A Gaussian line was added near 6.55 keV to account for the emission from Fe-K. $^a$ Power-law photon index\
$^b$ Roll-off frequency using the model\

---------------------------------------------  --------------------------------
Shock temperature, $kT_s$ (keV)                1.54–1.91
Ionization time-scale, $n_0t$ (cm$^{-3}$ s)    (1.5–2.4) $\times$10$^{10}$
Shock velocity, $v_s$ (km s$^{-1}$)            1,140–1,270
Age, $t$ (yrs)                                 1,800–2,800
Ambient density, $n_0$                         1.29–1.38
Explosion energy, $E_0$ (ergs)                 $(0.62 - 1.0) \times$10$^{51}$
---------------------------------------------  --------------------------------

: The parameters of the SN explosion derived from using the Sedov non-equipartition ($T_e$ $\neq$ $T_i$) fit to the hard component. The parameters were derived starting with the fitted $EM$ value.
--- abstract: 'This paper presents a new Proportional-Integral-Derivative-Accelerated (PIDA) controller with a derivative filter to improve quadcopter flight stability in a noisy environment. The mathematical model is derived to obtain an accurate, high-fidelity model, addressing the problems of non-linearity, uncertainty, and coupling. These uncertainties and measurement noises cause instability in flight and automatic hovering. The proposed controller, associated with a heuristic Genetic Filter (GF), addresses these challenges. The tuning of the proposed PIDA controller, subject to the control objective, is performed by the Stochastic Dual Simplex Algorithm (SDSA). GF is applied to the PIDA control to estimate the observed states and parameters of the quadcopter in both attitude and altitude. The simulation results show that the proposed control associated with GF has a strong ability to track the desired point in the presence of disturbances.' author: - 'Seid Miad Zandavi, Vera Chung, Ali Anaissi [^1]' bibliography: - 'Refs.bib' title: 'PIDA: Smooth and Stable Flight Using Stochastic Dual Simplex Algorithm and Genetic Filter' --- Drone, Control, PIDA, SDSA, Genetic Filter. Introduction ============ Autonomous flight is an exciting area of research in self-driving/autonomous systems, as many engineering applications require it. Recently, unmanned aerial vehicles (UAVs) have gained the attention of many researchers working in different applications, such as search and rescue, delivery, and crowdsourcing [@kim2019adaptive; @koh2012dawn]. UAVs, or drones, have been developed in many areas, including robotics, control, path planning, and communication [@phung2017enhanced; @rajappa2016adaptive; @derafa2012super]. The current attention to increasing the usability of drones in many commercial and civil applications inspires researchers to make this dynamic system more controllable. 
In particular, quadcopters are popular drones due to their vertical take-off and landing capability and their simple, stable structures. However, their unstable dynamics, non-linearity, and cross-coupling make the quadcopter an interesting underactuated system. Generally, a quadcopter has six degrees of freedom, while only four rotors are available to control all directions. This causes cross-coupling between rotational and translational motions. Therefore, the nonlinear dynamics need to be managed by the controller. In the past, classical control techniques were applied to address autonomous flight. The main issue is to formulate an accurate model describing the dynamic system. This means that any changes and modifications in the system, such as uncertainties in both the model and the environment, affect the performance of the controller, making it necessary to update the controller parameters. Thus, when the system's dynamics and its operation through the environment change, its control parameters and features require re-tuning. Various control algorithms have been developed to manage the non-linearity of the quadrotor. For example, command-filtered Proportional-Derivative (PD) or Proportional-Integral-Derivative (PID) control [@zuo2010trajectory], integral predictive control [@raffo2010integral], and optimal control [@ritz2011quadrocopter; @zandavi2018multidisciplinary] have been applied. The Sliding Mode Control (SMC) is another common control algorithm that is used to improve performance in terms of stability under the influence of modeling errors and external disturbances [@derafa2012super; @xu2006sliding; @besnard2007control]. Note that the chattering effect in the SMC arises in the steady state, where it excites unmodeled frequencies of the system dynamics. Of these controllers, PID is preferred due to its simplicity and ability to adapt to unknown changes. 
These fast and simple features make the PID control strategy efficient and versatile in robotics, although it can cause wide overshoot and a large settling time [@ang2005pid]. Initially, the parameters of a PID controller, called gains, were set by expert knowledge following certain rules, such as investigating step responses, Bode plots and Nyquist diagrams. However, when the complex environment changes and affects the dynamics of the quadcopter, the PID parameters must be re-tuned, and it is essential to consider uncertainties in formulating the model. In this regard, uncertainties and stochastic processes in the dynamic system can be modeled as colored noise and white noise. Thus, the derivative term is able to cope with the effect of disturbances. In unstable systems, derivative action plays a significant role in improving control-loop performance. Mathematically, the derivative terms in a PID controller provide additional corrective action when the error (i.e., the deviation from the desired response) fluctuates rapidly. Thus, an additional derivative term (i.e., an additional zero) can reduce the size of the overshoot [@jung1996analytic]. This can improve controllability. Additionally, this derivative term supports a better response in terms of speed and smoothness, keeping both overshoot and settling time within acceptable bounds. In addition to the control, integrated estimation of states and parameters plays an important role in improving the performance of the quadcopter in the presence of uncertainties and measurement noise. Two different categories of filters, classical and heuristic [@zandavi2019state], have been used to address the state estimation problem. 
For example, the Kalman Filter (KF) [@kalman1960new], Extended Kalman Filter (EKF) [@jazwinski2007stochastic], and Unscented Kalman Filter (UKF) [@julier1997new] are classical filters, while the Particle Filter (PF) [@carpenter1999improved], Simplex Filter (SF) [@nobahari2016simplex] and Genetic Filter (GF) [@zandavi2019state] are heuristic filters [@zandavi2019state]. Heuristic filters work based on a point-mass (or particle) representation of the probability densities [@arulampalam2002tutorial]. Unlike the UKF, the PF represents the required posterior Probability Density Function (PDF) by a set of random samples instead of deterministic ones. It also uses a resampling process to reduce the degeneracy of particles. The standard resampling process copies the important particles and discards insignificant ones based on their fitness. This strategy suffers from a gradual loss of diversity among the particles, known as sample impoverishment. Researchers have proposed different resampling strategies such as binary search [@gordon1993novel], systematic resampling [@smith2013sequential] and residual resampling [@arulampalam2002tutorial]. Some heuristic optimization algorithms have also been incorporated into the PF to improve its performance. For example, the SF [@nobahari2016simplex] utilizes the Nelder-Mead simplex approach for state estimation, and the GF [@zandavi2019state] utilizes a genetic algorithm scheme and its operators to estimate the states of dynamic systems. In this paper, a new accelerated PID controller with a derivative filter, associated with GF, is proposed to make an unstable quadcopter track the desired reference with proper stability. GF is utilized to estimate the height and vertical velocity of the modeled dynamic system (i.e., the quadcopter) while hovering. Consequently, the mathematical model of the dynamic system is provided, considering non-linearity, instability, cross-coupling among different modes (i.e., pitch, roll, and yaw), and the uncertain environment. 
The controller parameters are tuned using the Stochastic Dual Simplex Algorithm (SDSA) [@ZandaviSDSA2019], which improves the trade-off between exploration and exploitation to achieve better optimal parameters for the proposed controller. This paper is organized as follows. Section \[Control\_sec2\] describes the mathematical model of the dynamic system. The proposed controller is introduced in Section \[Control\_sec3\]. Stability analysis is presented in Section \[Control\_sec4\]. Optimization and the heuristic filter are explored in Section \[OptiHeuFil\]. Numerical results and discussion are given in Section \[Control\_sec5\]. Finally, the paper ends with the conclusion in Section \[Control\_sec6\]. Dynamic Model {#Control_sec2} ============= The mathematical model of a system can be used as the first step to study its performance. In this regard, the quadcopter studied in this paper is modeled in Fig. \[fig\_Control1\], considering an earth-centered inertial (ECI) frame and a body frame. Thus, $X_E = [x_E , y_E , z_E]^T$ and $X_B = [x_B , y_B , z_B]^T$ are defined in the inertial frame and body frame, respectively, and the transformation between them yields an accurate dynamic model. The attitude of the quadcopter is formulated using the Euler angles roll, pitch, and yaw, which represent rotations about the x-axis, y-axis and z-axis, respectively. Thus, the Euler angles are $\Theta = [\phi, \theta, \psi]^T$, and the vector of Euler-angle rates is $\dot{\Theta} = [\dot{\phi}, \dot{\theta}, \dot{\psi}]^T$. The angular velocity in the body frame ($\omega = [p, q, r]^T$) is then formulated as follows: $$\label{Control_eq1} \omega = \left[ {\begin{array}{ccc} 1 & 0 & -\sin(\theta) \\ 0 & \cos(\phi) & \cos(\theta) \sin(\phi) \\ 0 & -\sin(\phi) & \cos(\theta) \cos(\phi) \end{array}} \right] \cdot \dot{\Theta}$$ The total torque is produced by three contributions: thrust forces ($\tau$), body gyroscopic torque (${\tau}_b$) and aerodynamic friction (${\tau}_a$). 
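As an illustration of Eq (\[Control\_eq1\]), the map from Euler-angle rates to body rates can be sketched in a few lines (a sketch only; the function name and the NumPy dependency are ours, not part of the paper):

```python
import numpy as np

def euler_rates_to_body_rates(phi, theta, euler_rates):
    """Map Euler-angle rates [phi_dot, theta_dot, psi_dot] to the
    body angular velocity [p, q, r] using the matrix of Eq (1)."""
    W = np.array([
        [1.0, 0.0,          -np.sin(theta)],
        [0.0, np.cos(phi),   np.cos(theta) * np.sin(phi)],
        [0.0, -np.sin(phi),  np.cos(theta) * np.cos(phi)],
    ])
    return W @ np.asarray(euler_rates, dtype=float)
```

Near hover (small $\phi$ and $\theta$) the matrix reduces to the identity, so body rates and Euler rates coincide.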
In addition, each component of the torque vector ($\tau = [{\tau}_{\phi},{\tau}_{\theta},{\tau}_{\psi}]^T$), corresponding to a rotation about the roll, pitch, and yaw axes, can be determined by Eqs (\[Control\_eq2\])–(\[Control\_eq4\]): $$\label{Control_eq2} {\tau}_{\phi} = l (F_2 - F_4)$$ $$\label{Control_eq3} {\tau}_{\theta} = l (F_3 - F_1)$$ $$\label{Control_eq4} {\tau}_{\psi} = c (F_2 - F_1 + F_4 - F_3)$$ where *l* is the distance between the center of a motor and the center of mass, and *c* is the force-to-torque coefficient. Assuming the quadcopter is a rigid body with symmetrical dynamics, the torque can be calculated by the following equation: $$\label{Control_eq5} {\tau} = I \dot{\omega} + \Omega (I \omega)$$ where $I$ is the inertia matrix and $\Omega$ is the skew-symmetric matrix of the angular velocity: $$\label{Control_eq6} {\Omega} = \left[ {\begin{array}{ccc} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{array}} \right]$$ In this system, the main control inputs are correlated to the torque ($\tau = [{\tau}_{\phi},{\tau}_{\theta},{\tau}_{\psi}]^T$) caused by thrust forces, body gyroscopic effects, propeller gyroscopic effects and aerodynamic friction. Gyroscopic effects and aerodynamic friction are considered external disturbances for the control. Thus, the control inputs are determined as in Eq (\[Control\_eq7\]). 
$$\label{Control_eq7} \left[ {\begin{array}{c} u_{\phi} \\ u_{\theta} \\ u_{\psi} \\ u_{T} \end{array}} \right] = \left[ {\begin{array}{c} \tau_{\phi} \\ \tau_{\theta} \\ \tau_{\psi} \\ \tau_{T} \end{array}} \right] = \left[ {\begin{array}{cccc} 0 & l & 0 & -l \\ -l & 0 & l & 0 \\ -c & c & -c & c \\ 1 & 1 & 1 & 1 \end{array}} \right] \left[ {\begin{array}{c} F_1 \\ F_2 \\ F_3 \\ F_4 \end{array}} \right]$$ where $\tau_{T}$ is the lift force and $u_T$ corresponds to the total thrust acting on the four propellers, while $u_{\phi}$, $u_{\theta}$ and $u_{\psi}$ represent the roll, pitch, and yaw inputs, respectively. The drone's altitude can be controlled by the lift force ($u_T$), which, in hover, is equal to the quadcopter weight. The dynamic equations of the quadcopter are formulated based on the Newton-Euler method [@zipfel2007modeling]. The six-degree-of-freedom (6-DOF) motion equations are stated by Eqs (\[Control\_eq8\])–(\[Control\_eq13\]). $$\label{Control_eq8} \dot{u} = rv -qw -g \sin(\theta)$$ $$\label{Control_eq9} \dot{v} = pw -ru +g \sin(\phi) \cos(\theta)$$ $$\label{Control_eq10} \dot{w} = qu -pv + g \cos(\theta) \cos(\phi) - \frac{1}{m} u_T$$ $$\label{Control_eq11} \dot{p} = \frac{1}{I_{xx}} \left[ (I_{yy}-I_{zz})qr + u_{\phi} + d_{\phi} \right]$$ $$\label{Control_eq12} \dot{q} = \frac{1}{I_{yy}} \left[ (I_{zz}-I_{xx})pr + u_{\theta} + d_{\theta} \right]$$ $$\label{Control_eq13} \dot{r} = \frac{1}{I_{zz}} \left[ (I_{xx}-I_{yy})pq + u_{\psi} + d_{\psi} \right]$$ where $d = [d_{\phi},d_{\theta},d_{\psi}]^T$ is the angular acceleration disturbance corresponding to the propeller angular speeds; these acceleration disturbances are modeled by Eq (\[Control\_eq14\]). $$\label{Control_eq14} d = \left[ {\begin{array}{c} + qI_{m} \Omega_{r} \\ -pI_{m} \Omega_{r} \\ 0 \end{array}} \right]$$ where $\Omega_r = \sum_{i=1}^{4} (-1)^{i} \Omega_i $ is the overall residual propeller angular speed, and $\Omega_i$ is the angular velocity of each rotor. 
$I_{m}$ is the rotor moment of inertia around the axis of rotation. Hence, the dynamic equations of the system can be summarized as follows: $$\begin{aligned} \label{Control_eq15} \begin{split} \dot{x}(t) = A(x) + B(x)u(t) + d \\ y(t) = C(x) + D(x)u(t) \end{split}\end{aligned}$$ where $x = [\phi, \theta, \psi, p, q, r, w]^T$ and $y = [y_1,y_2,y_3,y_4]^T$ are the states and measurable outputs, respectively. $u = [u_1,u_2,u_3,u_4]^T$ is the control and $d$ is the disturbance. $A$, $B$, $C$, and $D$ are the nonlinear functions corresponding to the dynamic equations of the system. The control design aims to minimize the error in tracking the desired command (see Eq (\[Control\_eq16\])). $$\label{Control_eq16} \lim_{t \to \infty}{\|{e(t)}\|} = \varepsilon$$ where $e(t) = r(t)-y(t)$ is the difference between the reference inputs and the system's measurable outputs, and $\varepsilon$ is a small positive value. Proposed PIDA Controller {#Control_sec3} ======================== The PID control is applied to many engineering applications because of its simplicity. Note that PID cannot function effectively when wide overshoot and considerable settling time occur in the system. A modified PID controller can address this issue by adding a zero; the result is known as PID-acceleration (PIDA) control. It is employed to achieve a faster and smoother response for a higher-order system and keeps both overshoot and settling time within acceptable limits. The proposed linear control can also control the nonlinear system. In this approach, the dynamic airframe is linearized about the equilibrium point. The linearization of the model is given by Eq (\[Control\_eq17\]). $$\label{Control_eq17} \Delta \dot{X} = J_X \Delta X + J_U \Delta U$$ where $J_X$ and $J_U$ are the Jacobians of the nonlinear model about the equilibrium point ($X_{eq} = [\phi_0,\theta_0,\psi_0,p_0,q_0,r_0,w_0]^T$). Note that the equilibrium point can be calculated by solving $\dot{X} = AX = 0$. 
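Since the equilibrium is obtained from $\dot{X} = AX = 0$, any equilibrium candidate lies in the null space of $A$. One way to compute that null space numerically is via the SVD (a sketch; the helper name and the NumPy-based approach are ours, not the paper's):

```python
import numpy as np

def equilibrium_candidates(A, tol=1e-10):
    """Return an orthonormal basis of the null space of A: every
    column N[:, j] satisfies A @ N[:, j] = 0, so any linear
    combination of the columns is an equilibrium of x_dot = A x."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # empty (n x 0) basis when det(A) != 0
```

When $A$ is invertible the basis is empty and the origin is the only equilibrium, which matches the null-space remark in the text.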
Any solution in the null space of $A$ can be an equilibrium point if $\det(A)$ is equal to zero. In this regard, a multi-input, multi-output (MIMO) control system is designed to follow the desired command in the altitude and attitude channels. A MIMO tracking controller can not only stabilize the system, but also make it follow a reference input. Thus, the linear system is given as follows: $$\label{Control_eq18} {\begin{array}{c} \dot{X} = AX + BU + D_d \\ Y = CX \end{array}}$$ where $Y$ is the outputs that follow the reference inputs and $D_d = [0, 0, 0, d^T, 0]^T$ is the angular disturbance. In this approach, the integral state is defined as follows: $$\label{Control_eq19} \dot{X}_N = R-Y = R-CX$$ According to Eq (\[Control\_eq19\]), the new state space of the system is formulated in Eq (\[Control\_eq22\]). The system can follow the reference inputs if the designed controller proves the stability of the system. $$\label{Control_eq22} {\begin{array}{c} \left[ {\begin{array}{c} \dot{X} \\ \dot{X}_N \end{array}}\right]= \left[ {\begin{array}{cc} A & 0 \\ -C & 0 \end{array}}\right] \left[ {\begin{array}{c} {X} \\ {X}_N \end{array}}\right]+\left[ {\begin{array}{c} B \\ \Phi \end{array}}\right]U + \left[ {\begin{array}{c} \Phi \\ I \end{array}}\right]R \\+\left[ {\begin{array}{c} I \\ \Phi \end{array}}\right]D_d \\ Y = \left[{\begin{array}{cc} C & 0 \end{array}}\right] \left[{\begin{array}{c} X \\ X_N \end{array}}\right] \end{array}}$$ where $\Phi$ is a zero matrix. Considering the acceleration disturbance in the system, the general form of the proposed controller in the time domain is given in Eq (\[Control\_eq23\]). $$\label{Control_eq23} u(t) = k_p e(t) + k_i \int{e(t) dt}+ k_d \dot{e}(t) + k_a \ddot{e}(t)$$ where $k_p$, $k_i$, $k_d$ and $k_a$ are the gains of the proposed controller. 
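A minimal discrete-time sketch of the control law of Eq (\[Control\_eq23\]) is shown below. The backward-Euler discretization, the class interface, and the choice of running both derivative terms through a first-order filter are our own illustrative assumptions, not the paper's implementation:

```python
class PIDA:
    """Discrete PIDA controller, u = kp*e + ki*int(e) + kd*e' + ka*e'',
    with both derivatives passed through a first-order low-pass filter
    of time constant Tf (a sketch; backward-Euler discretization)."""

    def __init__(self, kp, ki, kd, ka, Tf, dt):
        self.kp, self.ki, self.kd, self.ka = kp, ki, kd, ka
        self.Tf, self.dt = Tf, dt
        self.integral = 0.0
        self.prev_e = 0.0
        self.d_f = 0.0       # filtered first derivative of the error
        self.prev_d_f = 0.0
        self.dd_f = 0.0      # filtered second derivative of the error

    def update(self, e):
        dt = self.dt
        a = dt / (self.Tf + dt)               # filter coefficient
        self.integral += e * dt
        raw_d = (e - self.prev_e) / dt
        self.d_f += a * (raw_d - self.d_f)    # low-pass the derivative
        raw_dd = (self.d_f - self.prev_d_f) / dt
        self.dd_f += a * (raw_dd - self.dd_f)
        self.prev_e, self.prev_d_f = e, self.d_f
        return (self.kp * e + self.ki * self.integral
                + self.kd * self.d_f + self.ka * self.dd_f)
```

With $k_d = k_a = 0$ the controller reduces to a PI loop, which makes the sketch easy to check by hand.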
Then, the MIMO controller is generated by $$\label{Control_eq24} U(s) = \left[ k_p + \frac{k_i}{s} + k_d s + k_a s^2 \right] E(s)$$ As seen in Eq (\[Control\_eq24\]), the pure derivative term amplifies high-frequency noise and can degrade the performance of the whole system in a noisy environment. The addition of a derivative filter is proposed to address this issue. Thus, the proposed control is modeled as follows: $$\label{Control_eq25} U(s) = \left[ k_p + \frac{k_i}{s} + k_d \times s L(s) + k_a \times s L(s) \times s L(s) \right] E(s)$$ where $L(s)$ is the optimal derivative filter, which is formulated as follows: $$\label{Control_eq20} L(s) = \frac{N/T}{s + N/T}$$ where $N$ and $T$ are the order of the filter and the time constant, respectively. Based on Eq (\[Control\_eq20\]), the transfer function of the optimal derivative filter can be simplified as follows: $$\label{Control_eq21} L(s) = \frac{1}{1 + T_f s}$$ where $T_f = T/N$ is the time constant of the optimal derivative filter. Hence, the controller and filter parameters can be found by SDSA to minimize the objective function given by Eq (\[Control\_eq222\]). $$\label{Control_eq222} f_{obj} = (M_{os}-M_s)^2 + (t_{ds} - t_s)^2$$ where $M_{os}$ is the desired maximum overshoot, which is set to $5$ percent, and $t_{ds}$ is the desired settling time for the system, set to $2~sec$. $M_s$ and $t_s$ are the overshoot and settling time for each candidate controller. The stability analysis of the system (Eq (\[Control\_eq15\])) is introduced before the simulation results are presented. Stability Analysis of the Proposed PIDA {#Control_sec4} ======================================= In this section, the stability of the system under the proposed controller is investigated. The following definitions are needed. \[def1\] A system is “asymptotically stable” around its equilibrium point if it meets the following conditions: 1. 
Given any $\epsilon > 0$, $\exists \delta_{1} > 0$ such that if $\|x(t_0)\| < \delta_1$, then $\|x(t)\| < \epsilon$, $\forall t > t_0$ 2. $\exists \delta_{2} > 0$ such that if $\|x(t_0)\| < \delta_{2}$, then $x(t) \to 0$ as $t \to \infty$ $[V(x) = x^T P x, \quad x \in \mathbb{R}^n]$ is a positive definite function if and only if all the eigenvalues of $P$ are positive. *Proof.* Since $P$ is symmetric, it can be diagonalized by an orthogonal matrix, so $P=U^T D U$ with $U^T U = I$ and $D$ diagonal. Then, if $y = Ux$, $$\begin{aligned} \begin{split} V(x) &= x^T P x \\ &= x^T U^T D U x \\ &= y^T D y \\ &= \sum {\lambda}_i |{y_i}|^2 \end{split}\end{aligned}$$ Thus, $$V(x) > 0 \quad \forall x \neq 0 \iff \lambda_i > 0, \quad \forall i$$ \[def2\] A matrix $P$ is positive definite if it satisfies $x^T P x > 0 \quad \forall x \neq 0$. Therefore, any positive definite matrix satisfies the inequality in Eq (\[Control\_eq27\]). $$\label{Control_eq27} \lambda_{min}(P) \|x\|^2 \leq V(x) \leq \lambda_{max}(P) \|x\|^2$$ \[def3\] A positive definite function $V$ is a candidate Lyapunov function if its derivative $\dot{V}$ exists and is negative semi-definite. \[theorem1\] If a candidate Lyapunov function (i.e., $V(x) = x^T P x, \quad P>0$) exists for the dynamic system, the equilibrium point is stable. According to Theorem \[theorem1\] and the dynamic system defined in Eq (\[Control\_eq15\]), the derivative of the Lyapunov function along the system trajectories is as follows: $$\begin{aligned} \begin{split} \dot{V}(x) & = \dot{x}^TPx + x^TP\dot{x}\\ & = x^T A^T P x + x^T P A x \\ & = x^T(A^T P + P A)x \\ & = -x^T Q x \end{split}\end{aligned}$$ where the new notation (see Eq (\[Control\_eq29\])) is introduced to simplify the calculation; note that $Q$ is a symmetric matrix. According to Definition \[def3\], $V$ is a Lyapunov function if $Q$ is positive definite (i.e., $Q>0$). 
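In practice, the stability test implied by Theorem \[theorem1\] can be run numerically: pick $Q = I$, solve the Lyapunov equation for $P$, and check that $P$ is positive definite. A sketch using only NumPy follows; the vectorization via Kronecker products and the function name are our own choices, not part of the paper:

```python
import numpy as np

def lyapunov_stable(A, tol=1e-9):
    """Solve A^T P + P A = -I by vectorization,
    (I (x) A^T + A^T (x) I) vec(P) = -vec(I),
    then report whether P is positive definite."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    p = np.linalg.solve(M, -I.flatten(order="F"))  # column-major vec
    P = p.reshape((n, n), order="F")
    P = (P + P.T) / 2  # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(P) > tol))
```

A Hurwitz $A$ (all eigenvalues in the left half-plane) yields a positive definite $P$, while a matrix with a right-half-plane eigenvalue does not.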
Thus, there is a stable equilibrium point, which shows the stability of the system around the equilibrium (see Theorem \[theorem1\]). $$\label{Control_eq29} A^T P + P A = -Q$$ The relationship between $Q$ and $P$ shows that the solution of Eq (\[Control\_eq29\]), called a Lyapunov equation, proves the stability of the system: pick any $Q > 0$ and check whether the solution $P$ is positive definite. There is a unique positive definite solution if all the eigenvalues of $A$ are in the left half-plane. A noisy environment can move eigenvalues toward the right half-plane, so the system dynamics can intensify instability. This issue is aggravated by the cross-coupling among different modes, such as roll, pitch, and yaw rate, caused by the four rotors. Thus, the derivative term of the proposed controller plays an essential role in maintaining stability. The numerical results show that, with the proposed controller and with uncertainties in the environment, all eigenvalues of the quadcopter are in the left half-plane, which proves that the dynamic system is stable under uncertainties. Optimization and Heuristic filter {#OptiHeuFil} ================================= In this section, the Stochastic Dual Simplex Algorithm (SDSA) and the Genetic Filter (GF) are described. First, the general setting of the optimization algorithm (i.e., SDSA) is presented. Then, GF, the state estimation module in the proposed controller, is introduced. Stochastic Dual Simplex Algorithm {#sec5} --------------------------------- The heuristic optimization algorithm, named the Stochastic Dual Simplex Algorithm (SDSA), is used to find the best-tuned parameters of the proposed controller. SDSA is a new variant of the Nelder-Mead simplex algorithm [@rao2009engineering], executing three operators: reflection, expansion, and contraction. These operators reshape the dual simplex and move it toward the maximum-likelihood regions of the search space. 
Each simplex follows the standard simplex rules, with the transformed vertices of the general simplex approach formulated as in Eqs (\[eq30\])–(\[eq32\]). $$\label{eq30} \textbf{x}_r = (1+\alpha)\bar{\textbf{x}}_0 - \alpha \textbf{x}_h , \quad \alpha > 0$$ $$\label{eq31} \textbf{x}_e = \gamma \textbf{x}_r + (1-\gamma)\bar{\textbf{x}}_0 , \quad \gamma > 1$$ $$\label{eq32} \textbf{x}_c = \beta \textbf{x}_h + (1-\beta)\bar{\textbf{x}}_0 , \quad 0 \leq \beta \leq 1$$ where $\alpha$, $\gamma$ and $\beta$ are the reflection, expansion and contraction coefficients, respectively. During these transformations, $\bar{\textbf{x}}_0$ is the centroid of all vertices excluding the worst point ($\textbf{x}_h$). In addition to the movement of the dual simplex, a new definition of reflection points is applied to improve diversity and decrease the probability of being trapped in a local minimum. Therefore, during the *i*-th iteration, the worst vertices of the simplexes in the search space are replaced along normally distributed directions, as modeled in Eq (\[eq33\]). $$\label{eq33} \overset{*}{\textbf{x}}_{h_s}^{(i)} = \textbf{x}_{h_s}^{(i)} + g^{(i)} {\bar{\textbf{x}}_0}^{(i)}$$ where $\overset{*}{\textbf{x}}_{h_s}^{(i)}$ is the new reflected point computed from the worst point of each simplex (${\textbf{x}}_{h_s}^{(i)}$), and $g^{(i)}$ is the normally distributed sample in the *i*-th iteration and *s*-th simplex. The centroid of all simplexes and the probability density function of the normally distributed simplexes are then expressed in Eq (\[eq34\]) and Eq (\[eq35\]). $$\label{eq34} \bar{\textbf{x}}_0^{(i)} = \sum_{s=1}^{n_s} {\bar{\textbf{x}}_{0_s}^{(i)}}$$ $$\label{eq35} g(\textbf{x}_h|\Sigma) = \frac{1}{\sqrt{2\pi|\Sigma|}}.exp({-\frac{(\textbf{x}_h-\bar{\textbf{x}}_0)^T{\Sigma}^{-1}(\textbf{x}_h-\bar{\textbf{x}}_0)}{2}})$$ where $n_s$ and $\Sigma$ are the number of simplexes and the covariance matrix of the simplexes, respectively. 
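The three operators of Eqs (\[eq30\])–(\[eq32\]) translate directly into code (a sketch; the function names are ours):

```python
import numpy as np

def reflect(x_bar, x_h, alpha):
    """Eq (30): reflect the worst point x_h through the centroid x_bar."""
    return (1 + alpha) * x_bar - alpha * x_h

def expand(x_r, x_bar, gamma):
    """Eq (31): push the reflected point x_r further from the centroid."""
    return gamma * x_r + (1 - gamma) * x_bar

def contract(x_h, x_bar, beta):
    """Eq (32): pull the worst point x_h toward the centroid."""
    return beta * x_h + (1 - beta) * x_bar
```

For example, with $\bar{\textbf{x}}_0 = 0$, $\textbf{x}_h = 1$ and $\alpha = 1$, reflection gives $\textbf{x}_r = -1$, i.e., the worst point is mirrored through the centroid.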
Reflection reflects the worst point, called the high point, over the centroid $\bar{\textbf{x}}_0$. If the reflected point is better than all other points, the expansion operation expands the simplex further in the reflection direction. If the reflection output is at least better than the worst point, the algorithm carries out the reflection operation again with the new worst point [@ZandaviSDSA2019; @rao2009engineering]. Contraction is another operation, which contracts the simplex when the worst point has the same value as the reflected point. The SDSA pseudocode is presented in Algorithm \[SDSA\], and the tuned parameters of SDSA, chosen based on [@ZandaviSDSA2019], are listed in Table \[table1\].

**Initialization**
$\quad \textit{set parameters}$
$\quad \textbf{x}_0 \gets \textit{random}$
**Repeat**
$\quad \textbf{x}_h \gets \textbf{x}_{worst}$
$\quad$ apply reflection, expansion and contraction: $\textbf{x}_h \gets \overset{*}{\textbf{x}}_h$
**Until** stop condition satisfied.

  **Parameters**         **Value**
  ---------------------- -----------
  ${a}_{max}$            $10.5907$
  ${\alpha}_{max}$       $9.7323$
  ${\gamma}_{max}$       $9.9185$
  ${\beta}_{max}$        $0.4679$
  ${\textit{i}}_{max}$   $979$

  : Tuned parameters of SDSA[]{data-label="table1"}

Genetic Filter
--------------

In the Genetic Filter (GF), the problem is to estimate the states of a discrete nonlinear dynamic system in a continuous search space. The model is as follows: $$\textbf{x}_{k} = \textbf{f}_{k}(\textbf{x}_{k-1},\textbf{w}_{k-1})$$ where $k$ is the time step, $\textbf{f}_{k}$ is the system model, $\textbf{x}_{k-1}$ is the state vector and $\textbf{w}_{k-1}$ is the process noise corresponding to system uncertainties. Also, the measurement model is considered as $$\textbf{z}_{k} = \textbf{h}_{k}(\textbf{x}_{k},\textbf{v}_{k})$$ where $\textbf{h}_{k}$ is the measurement model and $\textbf{v}_{k}$ is the measurement noise.
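To make the estimation problem concrete, the following toy sketch applies a GA-style filter to a scalar instance of these models. The linear $\textbf{f}_k$ and $\textbf{h}_k$ below are assumptions for illustration, crossover is omitted for brevity, and this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def gf_step(z, f, h, pop_size=50, generations=30, sigma=0.1):
    """One measurement update of a GA-style state estimator (sketch).
    z: measurement, f: system model, h: measurement model."""
    pop = rng.normal(0.0, 1.0, pop_size)   # outer loop: fresh population
    pop = f(pop)                           # propagate individuals
    for _ in range(generations):           # inner loop: GA refinement
        cost = np.abs(h(pop) - z)          # cost of each individual
        parents = pop[np.argsort(cost)][: pop_size // 2]  # selection
        children = parents + rng.normal(0.0, sigma, parents.size)  # mutation
        pop = np.concatenate([parents, children])
    return pop.mean()                      # estimate = mean of last generation

# Toy scalar system: x_k = 0.9 x_{k-1}, z_k = x_k + v (assumed models).
estimate = gf_step(z=1.0, f=lambda x: 0.9 * x, h=lambda x: x)
```

The population mean converges toward the state consistent with the measurement, which is the essential mechanism of the GF described next.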
Therefore, GF is introduced as a tool for state estimation of nonlinear systems, based on the Genetic Algorithm (GA). GF has two loops. The outer loop generates an initial population belonging to the first generation every time a new measurement arrives. The inner loop iterates to find the best estimate of the current states corresponding to the entered measurement. To do this, the inner loop first propagates the individuals. Then, for each individual, the corresponding output is calculated based on the measurement model. The calculated outputs are compared with the real measurement, and each individual is assigned a cost. The inner loop uses genetic operations such as selection, mutation, and crossover to select new parents, let the fittest individuals survive, and generate a new population toward the maximum-likelihood regions of the state space; it terminates when the maximum number of iterations (generations) ($\textit{i}_{max}$) is reached. Finally, the average of the individuals of the last generation is calculated and passed as the state estimate. Algorithm \[GF\] presents the pseudocode of GF.

**Initialization**
$\quad \textit{set parameters}$
**Repeat**
$\quad \textbf{x}_0 \gets \textit{randomize}$
**Until** $i \geq i_{max}$
**Until** measurement is stopped

Numerical Results {#Control_sec5}
=================

The numerical simulation is implemented to evaluate the performance of the proposed controller. The model quadcopter was simulated in MATLAB R2016b in a Simulink environment on Windows 10 with an Intel(R) Core(TM) i7-6700 CPU @ 3.4 GHz. The quadcopter parameters are listed in Table \[table\_Control1\].
  **Parameter**   **Description**                       **value**   **Unit**
  --------------- ------------------------------------- ----------- ----------
  $m$             Mass                                  $0.8$       $kg$
  $l$             Arm length                            $0.2$       $m$
  $g$             Gravity acceleration                  $9.81$      $m/s^2$
  $c$             Force to torque coefficient           $3.00e-5$   $kg~m^2$
  $I_{xx}$        Body moment of inertia along x-axis   $2.28e-2$   $kg~m^2$
  $I_{yy}$        Body moment of inertia along y-axis   $3.10e-2$   $kg~m^2$
  $I_{zz}$        Body moment of inertia along z-axis   $4.40e-2$   $kg~m^2$
  $I_{m}$         Motor moment of inertia               $8.30e-5$   $kg~m^2$

  : Quadcopter Model Parameters[]{data-label="table_Control1"}

To begin the simulation and tune the parameters of the proposed controller, the initial state of the quadcopter is set at an altitude of $50~m$, with attitude and velocity in all directions equal to zero. A disturbance, modeled as white noise (mean value ($\mu$) of zero and standard deviation ($\sigma$) of one), is applied to the quadcopter at time $1~sec$ in the roll channel. This disturbance destabilizes the system and moves the eigenvalues of $A$ into the right half-plane. Additionally, the quadrotor is highly sensitive to the noisy environment because of instability and cross-coupling. In this regard, PIDA with a derivative filter, which obviates the noise from the measurement inputs, is designed to respond to this issue and keep the flight stable. An additional issue is the tracking of desired inputs, defined as commands to the quadcopter, which can be addressed by a MIMO controller (i.e., four inputs and four outputs). The proposed controller can be set by four gains and the time constant for each mode/channel. Figure \[fig2\_PQR\_init\] and Fig. \[fig2\_PTS\_init\] show the attitude of the modeled quadcopter in the initial state without the noisy environment. As expected, the initial state is stable at zero.
As seen in Figs. \[fig2\_PQR\_dis\] and \[fig2\_PTS\_dis\], the disturbance applied at time $1~sec$ in the roll channel renders the system unstable; the quadrotor is thus highly sensitive in the noisy environment. To tune the parameters of PIDA, complex commands that couple the different modes of the modeled quadcopter are used to evaluate the performance of the designed controller. New command angles are provided by a step function with a $2~sec$ delay time in the simulation environment, where $\phi = -5^{\circ}$, $\theta = 10 ^{\circ}$, $\psi = 30 ^{\circ}$, and with the altitude starting from $50~m$ and settling at $20~m$. Note that noisy measurements, modeled as white noise, have been considered for this simulation. The parameters of the controllers are tuned using SDSA [@ZandaviSDSA2019], and the convergence graph is shown in Fig. \[fig1&1\]. The SDSA is applied to the objective function introduced in Eq (\[Control\_eq222\]). Table \[table\_Control2\] presents the best-fit set of parameters for the different modes/channels within the noisy environment.

  ------- ---------- ----------- ----------- -----------
          Roll       Pitch       Yaw         Altitude
  $k_i$   $0.1436$   $3.6869$    $0.0437$    $1.00$
  $k_d$   $6.5097$   $21.2743$   $29.9872$   $11.4676$
  $k_a$   $0.5772$   $0.3429$    $23.5238$   $7.5114$
  $T_f$   $0.0437$   $0.0331$    $0.0117$    $0.3752$
  ------- ---------- ----------- ----------- -----------

  : Controller Parameters for Altitude and Attitude[]{data-label="table_Control2"}

The delay time causes missing measurements in the noisy environment, so a robust heuristic filter is required.
GF, as a robust filter for the dynamic system, plays an important role in keeping the flight stable through accurate estimation and noise removal. Figures \[fig\_Control2\]–\[fig\_Control4\] show that the presented controller with GF can adequately respond to and track the reference commands in the noisy environment. Having tuned PIDA under environmental uncertainties, a particular spiral trajectory is introduced to evaluate the performance of the proposed controller in the noisy environment. The evaluated trajectory is modeled in Eq (\[Eq:sprialTraj\]). $$\label{Eq:sprialTraj} {\begin{array}{c} x = 2\sin(3 \omega t) + 2 \cos(\omega t)\\ y = 2 \sin(\omega t) + 2 \cos(3 \omega t)\\ z = 0.3 t \end{array}}$$ where $x$, $y$, and $z$ define the reference trajectory, $\omega$ is the frequency parameter of the spiral trajectory, set to $\omega = 1/2\pi$, and $t$ is the flight time between $0~sec$ and $60~sec$. Figure \[fig:2dTrajectory\] demonstrates the drone trajectory for both PIDA and PIDA-GF. It shows that PIDA associated with GF can boost the performance of the proposed controller in the complex dynamics; thus, the proposed controller responds well along the spiral trajectory. Accordingly, the drone movement in cooperation with PIDA-GF can guarantee a smooth and stable flight while the drone is flying in the noisy environment. Figure \[fig:PTSSpiral\] shows the drone responses along the reference path. As seen in Fig. \[fig:PTSSpiral\], the PIDA responses fluctuate around the points where the drone turns in the spiral trajectory, due to the noise in the environment. GF is able to reduce the effect of this noise; thereby, a smooth flight is performed by PIDA-GF. Consequently, these figures and the reference trajectory demonstrate that the proposed controller associated with GF is powerful enough to control the spiral path of the drone.
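Eq (\[Eq:sprialTraj\]) can be generated directly; the snippet below reproduces the reference path used in the simulation (the sampling step is chosen here for illustration only):

```python
import numpy as np

omega = 1 / (2 * np.pi)          # spiral frequency parameter from the paper
t = np.linspace(0.0, 60.0, 601)  # flight time 0..60 s (illustrative sampling)

# Reference spiral trajectory of Eq (Eq:sprialTraj)
x = 2 * np.sin(3 * omega * t) + 2 * np.cos(omega * t)
y = 2 * np.sin(omega * t) + 2 * np.cos(3 * omega * t)
z = 0.3 * t
```

At $t = 0$ the path starts at $(2, 2, 0)$ and climbs linearly to $z = 18~m$ at $t = 60~sec$ while the $x$-$y$ components trace the spiral's lobes.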
As the simulation results demonstrate, not only is the quadcopter capable of stable flight, but the proposed controller associated with GF also provides a smooth flight. This smoothness and handling are achieved thanks to the robust filter, GF, acting during the dynamic movement. As shown, PIDA provides a fast and stable flight, but applying GF for integrated estimation of the observed states and parameters in both attitude and altitude further boosts the performance of the quadcopter.

Conclusion {#Control_sec6}
==========

This paper has proposed a new Proportional-Integral-Derivative-Accelerated (PIDA) controller with a derivative filter to improve flight stability for a quadcopter in a noisy environment. The mathematical model, considering non-linearity, uncertainties, and coupling, was derived from an accurate model with a high level of fidelity. As critical features of the proposed controller in the indoor environment, overshoot and settling time were limited during the operation of the dynamic system (drone). The noisy environment causes the poles to move to the right half-plane; thus, the system dynamics intensify instability. This issue raises the cross-coupling among different modes, such as roll, pitch, and yaw, which are generated by the four rotors. The proposed controller and the heuristic Genetic Filter (GF) addressed these challenges. Moreover, the derivative term of the proposed controller assists the dynamic system in recovering its stability. The controller gains were optimized before performing the mission; the tuning of the proposed controller was performed by the Stochastic Dual Simplex Algorithm (SDSA). The simulation results show that the proposed PIDA controller associated with GF is capable of outstanding performance in tracking the desired point despite disturbances. [^1]: Seid M.
Zandavi, Vera Chung and Ali Anaissi are with the School of Computer Science, The University of Sydney, Sydney NSW 2006 Australia e-mails: {miad.zandavi, vera.chung, ali.anaissi}@sydney.edu.au.
--- abstract: 'Concentrated solutions of monoclonal antibodies have attracted considerable attention due to their importance in pharmaceutical formulations, yet their tendency to aggregate and the resulting high solution viscosity have posed considerable problems. It remains a very difficult task to understand and predict the phase behavior and stability of such solutions. Here we present a systematic study of the concentration dependence of the structural and dynamic properties of monoclonal antibodies using a combination of different scattering methods and microrheological experiments. To interpret these data, we use a colloid-inspired approach based on a simple patchy model, which explicitly takes into account the anisotropic shape and the charge distribution of the molecules. Combining theory, simulations and experiments, we are able to disentangle self-assembly and intermolecular interactions and to quantitatively describe the concentration dependence of structural and dynamic quantities such as the osmotic compressibility, the collective diffusion coefficient and the zero shear viscosity over the entire range of investigated concentrations. This simple patchy model not only allows us to consistently describe the thermodynamic and dynamic behavior of mAb solutions, but also provides a robust estimate of the attraction between their binding sites. It will thus be an ideal starting point for future work on antibody formulations, as it provides a quantitative assessment of the effects of additional excipients or chemical modifications on antibody interactions, and a prediction of their effect on solution viscosity.'
author: - 'Nicholas Skar-Gislinge' - Michela Ronti - Tommy Garting - Christian Rischel - Peter Schurtenberger - Emanuela Zaccarelli - Anna Stradner bibliography: - 'pnas-sample.bib' title: 'A colloid approach to self-assembling antibodies' --- Immunoglobulin gamma (IgG) constitutes the major antibody isotype found in serum and takes part in the immune response following an infection to the body. IgGs contain three structured domains: two antigen binding domains (FAB) and one so-called constant domain (FC) arranged in a Y shape via a flexible hinge region. The specific details of such a hinge region further classify the IgGs into four subclasses: IgG1, IgG2, IgG3 and IgG4. In the biopharmaceutical industry, monoclonal antibodies (mAb) based on IgGs are a major platform for potential drug candidates, with more than 20 mAb based drugs available on the market and more in development [@Nelson2010; @Reichert2012]. The popularity of these macromolecules is due to a large flexibility in molecular recognition thanks to the variable portions of the FAB, a long half-life time in the body, and the possibility of humanization minimizing the risk of immunogenicity. In order for mAbs to become a successful pharmaceutical product, not only a biological effect but also a high chemical and formulation stability of the solutions is required. Generally, for mAb based drugs, a high concentration formulation of the order of 100 g/L or more is desirable [@Narasimhan2012; @Shire2009]. However, in many cases mAb solutions at these concentrations exhibit dramatically altered flow properties, resulting in serious challenges during production and when administering the drug. The flow properties of protein solutions are primarily determined by the shape of the proteins and their mutual interactions. As the concentration increases, protein-protein interactions become increasingly significant. 
Despite the extensive experimental and theoretical work devoted to protein crowding and its effects on the resulting stability and flow properties at high protein concentration, our ability to predict, for example, the concentration dependence of the zero shear viscosity $\eta_0$ and the location of an arrest or glass transition is still limited [@Neergaard2013; @Buck2014; @Godfrin2016; @Grimaldo2014; @Yearley2014; @Ando2010; @Bucciarelli2015; @Bucciarelli2016; @Foffi2014; @Cardinaux2011]. For antibody solutions this is a particularly difficult problem, as attractive interactions often lead to reversible self-association between the antibody molecules [@Chari2009; @Kanai2008; @Yadav2010; @Yearley2014; @Godfrin2016], making the change in solution flow properties highly sensitive to the protein concentration [@Lilyestrom2013; @Scherer2010; @Connolly2012; @Schmit2014]. A number of studies have made attempts to characterize cluster formation in mAb solutions, and to interpret antibody solution properties through analogies with colloids or polymers. In particular, scattering techniques were used to investigate protein interactions and self-association in antibody formulations [@Yadav2012; @Yearley2013; @Yearley2014; @Saito2012; @Scherer2013; @Castellanos2014; @Godfrin2016; @Corbett2017]. While investigations of various mAb formulations have frequently addressed mAb self-association and its effect on flow properties, we are far from having any predictive understanding and a generally accepted methodology and/or theoretical framework to detect antibody association and model mAb interactions quantitatively. A particular difficulty here is that while the non-spherical shape and internal flexibility have sometimes been addressed, interactions between proteins are frequently treated based on spherical approximations, and in particular the enormous effect that specific, directional interactions can have is generally not considered.
Here we present an investigation of the solution behavior of a monoclonal antibody varying the concentration, where we combine scattering methods and viscosity measurements with theoretical calculations and Monte Carlo (MC) simulations. We explicitly consider in our model the anisotropy of both the shape and the interactions of the antibody molecules. To this aim we focus on Y-shaped molecules interacting within a simple patchy model that is built from calculations of the electrostatic properties of the considered mAbs. The simplicity of the model allows for analytical treatment through Wertheim theory [@Wertheim1984], yielding all thermodynamic properties of the solution and in particular the compressibility that can be directly compared to the experimentally determined osmotic compressibility or apparent molecular weight. In addition, we calculate the size distribution of mAb clusters using the Hyperbranched Polymer Theory (HPT) [@Rubinstein2003], without introducing any additional free parameters. Finally, we use MC simulations to verify the results predicted theoretically. With the explicit cluster size distribution obtained by HPT at all concentrations investigated, and assuming that the dynamic solution properties (such as the apparent hydrodynamic radius $R_{h,app}$ or the relative viscosity $\eta_r = \eta_0 / \eta_s$, where $\eta_0$ is the zero shear viscosity and $\eta_s$ is the solvent viscosity) are primarily determined by excluded volume effects, we are able to make an additional coarse-graining step in which we model the mAb clusters as effective hard (HS) or sticky (or adhesive) hard (SHS) spheres, for which quantitative relationships for the concentration dependence of $R_{h,app}$ and $\eta_r $ exist. We find that the measured data are indeed well reproduced by this model, confirming that excluded volume interactions between the assembled clusters are at the origin of the strong increase of $\eta_r $ with increasing concentration. 
Hence, our simple model is capable of quantitatively predicting the measured concentration-dependence of the viscosity, solely based on static and dynamic light scattering experiments. Our results can be easily generalized to different types of mAbs, salt concentrations and temperature and may provide a crucial step for a proper description of self-association and dynamics of monoclonal antibodies.

Experimental Results {#experimental-results .unnumbered}
====================

We have characterized the solution behavior of a monoclonal antibody (mAb) as described in *Materials and Methods*. The results from these experiments are summarized in Fig. \[fig:exp-results\]. The static light scattering (SLS) data in Fig. \[fig:exp-results\]A show that the apparent molecular weight $M_{w,app}$ initially increases with concentration $C$ from the known value of the molecular weight of the mAb monomer, i.e. $M_1 = 147000$ g/mol, goes through a maximum at a concentration of around $C \approx 30$ mg/ml, and then strongly decreases at higher concentrations. A similar trend can also be seen for the apparent hydrodynamic radius $R_{h,app}$, reported in Fig. \[fig:exp-results\]B, which is obtained by dynamic light scattering (DLS). We find that $R_{h,app}$ also initially increases from the monomer value of $R_{h,app} \approx 6$ nm, reaches a maximum at $C \approx 150$ mg/ml, and finally decreases at higher values of $C$. In contrast, the relative viscosity $\eta_r$, shown in Fig. \[fig:exp-results\]C, monotonically increases with concentration and appears to diverge for $C \approx 200 - 300$ mg/ml. Qualitatively, the concentration dependence of the three key quantities $M_{w,app}$, $R_{h,app}$ and $\eta_r$ is in agreement with a behavior where the mAbs self-assemble into aggregates with increasing concentration.
While this is visible in the SLS and DLS data at low concentrations, the influence of excluded volume effects on the scattering data becomes more prominent at higher concentrations and results in a decrease of the measured values for $M_{w,app}$ and $R_{h,app}$. At the same time, these increasing interaction effects also result in a corresponding increase of the zero shear viscosity of the mAb solution. While it is straightforward to qualitatively assess the existence of aggregation and intermolecular interactions, a quantitative interpretation of the experimental data would require knowledge of both the molecular weight distribution of the resulting aggregates as well as the interaction potential between antibodies. This situation is similar to the difficulties encountered when trying to analyze scattering and rheology data of surfactant molecules forming large polymer-like micelles [@Schurtenberger1989; @Schurtenberger1996]. Crucially, a qualitative comparison between the behavior normally encountered for polymer-like micelles and the data shown in Fig. \[fig:exp-results\] shows significant differences. Indeed, for polymer-like micelles the maxima in $M_{w,app}$ and $R_{h,app}$ are directly linked to the overlap concentration $C^*$ that marks the transition from a dilute to a semi-dilute concentration regime, and thus occur at approximately the same value. For the mAb data shown in Fig. \[fig:exp-results\], however, there exists a large difference between the concentrations related to the maxima in $M_{w,app}$ and $R_{h,app}$, respectively. This clearly indicates that a simple application of polymer models, such as the wormlike chain model previously used successfully to describe, for example, SLS and DLS data for antigen-mAb complexes [@Murphy1988], does not work.
We thus instead exploit analogies to patchy colloids in order to design a coarse-grained model for our system and investigate whether we can obtain with this approach a quantitative analysis of the experimental data.

![[**Experimental results for the concentration dependence of the mAb solutions.**]{} A) Apparent molecular weight $M_{w,app}$ *vs.* weight concentration $C$ as determined by static light scattering. B) Apparent hydrodynamic radius $R_{h,app}$ *vs.* weight concentration $C$ from dynamic light scattering. C) Relative viscosity $\eta_r $ *vs.* weight concentration $C$ measured by DLS-based microrheology.[]{data-label="fig:exp-results"}](figures/fig1.pdf){width="0.95\linewidth"}

![image](figures/fig2.pdf){width="1\linewidth"}

Comparing theory and experimental results {#comparing-theory-and-experimental-results .unnumbered}
=========================================

Model: Antibodies as patchy particles {#model-antibodies-as-patchy-particles .unnumbered}
-------------------------------------

We model mAbs as patchy colloids and use a theoretical approach that has previously been applied successfully to such particles, in order to calculate their structural properties as a function of concentration. Patchy models are coarse-grained models that condense complex anisotropic interactions, often of electrostatic origin, into simple site-site aggregation; they have been applied in the past to several protein solutions[@fusco2014characterizing; @roosen2014ion; @li2015charge; @quinn2015fluorescent; @mcmanus2016physics; @cai2017eye; @cai2018proof] and other complex systems, including colloidal clays[@RuzickaNatMat] and DNA-based nanoconstructs[@biffi2015equilibrium; @bomboi2016re]. In order to build a meaningful model it is crucial to identify the key ingredients controlling the intermolecular interactions.
A previous study of this antibody has shown that the viscosity is sensitive to the salt concentration, pointing towards electrostatic interactions as a main component of the intermolecular interactions[@Neergaard2013]. Therefore, we first carry out a study of the electrostatic isosurface of a single antibody molecule in the considered buffer solution, as described in *Materials and Methods*, in order to locate the active spots on the molecule surface that are involved in particle-particle aggregation. The resulting charge distribution is illustrated in Fig. \[fig:model\]A, which clearly shows that the considered mAbs have an overall positively charged surface on the two arms (FAB domains) and a largely negative charge on the tail (FC domain). This suggests that the main driving mechanism for mAb aggregation has to be an attractive arm-to-tail interaction. To take this result into account, we thus consider Y-shaped particles formed by six spheres of diameter $\sigma$ and decorated with three patches, one of type $A$ on the tail and two of type $B$ on the arms, as illustrated in Fig. \[fig:model\]B. Interactions between $AB$ patches are attractive and modeled with a square-well potential, while $AA$ and $BB$ interactions are not considered. To predict the behavior of our patchy model, which we call the YAB model, we use a thermodynamic perturbation theory, introduced by Wertheim roughly 30 years ago, which describes associating molecules under the hypothesis that each sticky site on a particle cannot bind simultaneously to two or more sites on another particle [@Wertheim1984]. The Helmholtz free energy and the thermodynamic properties of the system, including for example the energy per particle, the specific heat at constant volume and the isothermal compressibility, can thus be predicted from the dependence of the bonding probability $p$ on the temperature $T$ and the number density $\rho$, as explained in more detail in *Materials and Methods*.
We complement this approach with Monte Carlo simulations of the YAB model in order to validate the theoretical results. In addition, the YAB model belongs to the class of hyperbranched polymers[@Rubinstein2003], for which it is possible to calculate the equilibrium cluster size distribution solely from the knowledge of the bonding probability $p$ (see *Materials and Methods*). As this parameter is directly an outcome of Wertheim theory, the YAB model is amenable to a full analytical treatment, allowing one to obtain simultaneously the thermodynamic and the connectivity properties of the solutions, to be directly compared with the experimental results.

Comparison between theory and MC simulations {#comparison-between-theory-and-mc-simulations .unnumbered}
--------------------------------------------

![image](figures/fig3.pdf){width="0.9\linewidth"}

The mAbs, modeled as patchy Y-shaped colloids, self-associate into clusters with increasing concentration through reversible $AB$ bonds, as a result of the attraction between $A$ and $B$ patches. Their assembly can be monitored by focusing on the variation of the bonding probability $p$ and the distribution $n(s)$ of clusters of size $s$ as a function of the two parameters controlling the assembly: the attractive strength $k_B T / \epsilon$, where $\epsilon$ is the well depth of the square-well attraction between $A$ and $B$ patches (see *Materials and Methods*), $T$ is the temperature and $k_B$ is the Boltzmann constant, and the mAb concentration $C$. We report in Fig. \[fig:snp-cl\] some representative results comparing theory and simulations, including the bond probability and the cluster size distributions for different concentrations and attraction strengths. In all cases, we find that there is quantitative agreement between theory and simulations for both thermodynamics and cluster observables.
Thus, we can confidently use the results of the theoretical approach in order to compare with experimental results.

Structural Properties {#structural-properties .unnumbered}
---------------------

In order to analyze the measured $M_{w,app}$, we calculate the isothermal compressibility $\kappa_T=-1/V (\partial V/\partial P)_T$ for our YAB model, since $\kappa_T$ is related to $S(0)$, the static structure factor at $q = 0$, as $$\label{S0} S(0)=\rho k_B T\kappa_T,$$ which in turn is related to the experimentally determined apparent weight average molar mass by $M_{w,app} = M_{1}S(0)$, where $M_1$ is the molar mass of a monomer. In a solution where antibodies self-assemble into larger clusters described via Wertheim theory, static light scattering thus provides an apparent weight average aggregation number $N_{app}$ given simply by $$\label{naggapp} N_{app} = S(0),$$ where $N_{app} = M_{w,app}/M_1$ is the apparent aggregation number. When trying to understand self-assembly in mAb solutions, we need to be able to account for both the average aggregation number, $N_{agg}$, as well as the resulting interaction effects between the antibody clusters, given by $S(0)$. Using Wertheim theory, we can calculate the free energy and differentiate it twice in order to get $\kappa_T$. As described in more detail in *Materials and Methods*, the free energy is the sum of a hard-sphere reference term plus a bonding term. The reference HS term is the Carnahan-Starling (CS) free energy of an equivalent HS system. Since mAbs are not spherical, we cannot directly use the actual volume fraction given by the number density of mAbs and the volume of a monomer, but we rather need to determine an equivalent hard sphere diameter $\sigma_{HS}$ of the Y-molecule. We thus calculate $\kappa_T$ for different values of $\sigma_{HS}$ and $k_B T / \epsilon$ and compare it to the measured data.
![[**Comparison of SLS data with patchy model predictions.**]{} Experimental $N_{app}$ compared with YAB model results: the best agreement, particularly for the high concentration data, is obtained for an equivalent hard sphere diameter $\sigma_{HS}=2.90\sigma\sim4.2$nm and $\epsilon/k_BT=12.27$. []{data-label="fig:S0"}](figures/fig4.pdf){width="0.9\linewidth"}

![image](figures/fig5.png){width="0.7\linewidth"}

By fitting the theoretical results to the experiments as described in *Materials and Methods*, we determine the two unknown parameters: the strength of the $AB$ interaction and the equivalent HS diameter. Fig. \[fig:S0\] compares $N_{app}$ for the YAB model to the SLS data, and we find that the best fit of the data, in particular correctly describing the high-concentration behavior that is most relevant for the viscosity discussed later, is obtained with an effective hard sphere diameter of $\sigma_{HS} = 2.9\sigma$ and a strength of the AB patch-patch attraction given by $\epsilon \simeq 12.3 k_BT$. Note that the estimated value of $\sigma_{HS}$ is considerably smaller than the geometric diameter of the Y molecule, thus accounting for the penetrability of the Y-shaped antibodies. When converted into real units, an effective HS radius of $4.2~nm$ is found, which also compares well with the measured radius of gyration of the antibody molecule, $R_g \approx 4.7~nm$.

Dynamic Properties {#dynamic-properties .unnumbered}
------------------

Having analyzed the SLS data using Wertheim theory, we now have a prediction for the effect of concentration on the self-assembling behavior of mAbs, and we can thus calculate the cluster size distributions at all concentrations thanks to HPT. Next we make an attempt to test the consistency of these results with the data obtained using DLS for the same samples shown in Fig. \[fig:exp-results\]B.
Unfortunately, this is much less straightforward than the analysis of the SLS data and requires an additional coarse graining step, illustrated in Fig. \[fig:strategy\]. The main problem here is that we currently lack a theoretical model that would allow us to calculate the effective or apparent hydrodynamic radius of concentrated solutions of polydisperse antibody clusters. We thus propose an approach in which we use the self-assembled clusters of the patchy model and treat them as new interacting objects. Their dominant interaction is of course excluded-volume and, hence, we consider them as effective polydisperse hard spheres, each with its own radius resulting from its size in terms of monomers. To go one more step, we also consider them as sticky hard spheres. Within this approach we first calculate the $z$-average [@PeterLectures] hydrodynamic radius $R_{h,z}$ of the mAb solutions using the cluster size distributions obtained theoretically. Next we model the solutions at each concentration as dispersions of colloids with a size given by $R_{h,z}$ and an effective hard sphere volume fraction $\phi_{HS}$. The influence of interparticle interactions on the resulting collective diffusion coefficient, or $R_{h,app}$, is calculated by treating the spheres either as hard or sticky hard spheres, for which accurate expressions exist. First, we need to determine the hydrodynamic radius $R_h$ of mAb clusters of a given size $N_{agg}$. Clusters of mAbs of a given size $N_{agg}$ were generated randomly, where the clusters also have to satisfy the criterion of self-avoidance and where each monomer in a cluster is allowed to have a maximum of 3 connections, i.e. reflecting the YAB structure imposed in Wertheim theory and HPT. For each individual cluster its hydrodynamic radius was then calculated using the program Hydropro[@Ortega2011], and average values were calculated from 100 individual clusters. 
This resulted in a data set of $R_h$ *vs* $N_{agg}$ that was well reproduced by the phenomenological relationship $R_h = 3.69 + 2.04 \times N_{agg} - 0.069 \times N_{agg}^2$, where $R_h$ is given in $nm$. With this relationship and assuming hard sphere-like interactions between the different clusters, we can now calculate the concentration dependence of both $N_{app}$ and $R_{h,app}$. The expression for the measured apparent molecular mass in this coarse grained model is $M_{w,app} = M_{w}S^{eff}(0)$, where $M_{w}$ is the weight average molar mass of the clusters. Note that the static structure factor $S^{eff}$ introduced here has a different definition than $S(0)$ introduced in Eq. \[S0\], and $S^{eff} = S(0)/N_{agg}$ now corresponds to the effective structure factor of a solution of polydisperse spheres, reflecting the fact that the mAb clusters and not the individual antibodies are the new interacting objects. The apparent weight average aggregation number $N_{app}$ is then given by[@PeterLectures] $$\label{naggapp_col} N_{app} = N_{agg} S^{eff}(0).$$ The only adjustable parameter introduced by this step is the conversion of the weight concentration into the effective hard sphere volume fraction $\phi_{HS}$ of the clusters. For hard spheres, we can exploit the Carnahan-Starling expression for the low wavevector limit of the static structure factor, $$\label{C-S} S_{CS}(0) = \frac{(1 - \phi_{HS})^4}{(1 + 2 \phi_{HS})^2 + \phi_{HS}^3 (\phi_{HS} - 4)},$$ as well as the weight average aggregation number $N_{agg}$, obtained with Wertheim theory and HPT, in order to calculate $N_{app}$ using Eq. \[naggapp\_col\]. In doing these calculations we fix the effective diameter $\sigma_{HS} = 2.9 \sigma$ of each antibody molecule (Fig. \[fig:S0\]). 
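A minimal numerical sketch of this coarse-grained prediction, combining the phenomenological $R_h(N_{agg})$ fit quoted above with Eqs. \[C-S\] and \[naggapp\_col\] (illustrative only, not the actual fitting code):

```python
# Coarse-grained SLS prediction: Hydropro-derived R_h(N_agg) fit and the
# Carnahan-Starling low-q structure factor of the effective cluster spheres.

def hydrodynamic_radius(n_agg):
    """Phenomenological fit quoted in the text; R_h in nm."""
    return 3.69 + 2.04 * n_agg - 0.069 * n_agg**2

def s_cs_zero(phi_hs):
    """Carnahan-Starling low-q structure factor, Eq. (C-S)."""
    return (1.0 - phi_hs)**4 / ((1.0 + 2.0 * phi_hs)**2
                                + phi_hs**3 * (phi_hs - 4.0))

def n_app(n_agg, phi_hs):
    """Apparent aggregation number, Eq. (naggapp_col): N_app = N_agg * S_eff(0)."""
    return n_agg * s_cs_zero(phi_hs)

# In the dilute limit S_CS(0) -> 1, so N_app -> N_agg.
assert abs(s_cs_zero(0.0) - 1.0) < 1e-12
# A monomer has R_h = 3.69 + 2.04 - 0.069 = 5.661 nm by this fit.
assert abs(hydrodynamic_radius(1) - 5.661) < 1e-9
```

At finite $\phi_{HS}$ the hard-sphere repulsion suppresses $S^{eff}(0)$ below one, so $N_{app}$ underestimates $N_{agg}$, exactly the competition between self-assembly and excluded volume discussed in the text.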
![[**Comparison between experimental and theoretical results for the concentration dependence of static and dynamic properties of the mAb solutions.**]{} Blue symbols are experimental data, while solid lines are the theoretical data for the hard sphere (orange line) and the sticky hard sphere (green line) models, respectively. The fit parameters are reported in Table \[tab:fitresults\]. A: Apparent aggregation number $N_{app}$ versus weight concentration as determined by SLS; B: Apparent hydrodynamic radius $R_{h,app}$ versus weight concentration from dynamic light scattering; C: Reduced viscosity $\eta_r$ versus weight concentration measured by DLS-based microrheology.[]{data-label="fig:final"}](figures/fig6.pdf){width="0.9\linewidth"} The effective cluster HS volume fraction is calculated taking into account that the excluded volume contribution of an antibody in a cluster is equal to that of a sphere with a radius equal to the antibody radius of gyration, and also that clusters are fractal, giving $$\label{phiHS} \phi_{HS} = \left(\frac{2 R_g}{\sigma_{HS}}\right)^3\!\! \phi \ N_{agg}^{(3 - d_F)/d_F} =1.41\phi \ N_{agg}^{(3 - d_F)/d_F},$$ where $d_F = 2.5$ is the fractal dimension of the clusters and $\phi$ is the nominal antibody volume fraction ($\phi=(\pi/6)\rho d^3$) based on the geometric diameter $d$ of the molecule. Thus, in the coarse grained model we have an effective hard sphere volume fraction that is $\approx40$% higher than for the individual mAbs in the Wertheim analysis, which does not seem unrealistic because clusters cannot overlap as much as individual antibodies do. The resulting comparison of the model calculations with experiments provides a very good description of the data, as shown in Fig. \[fig:final\] A.
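Eq. \[phiHS\] can be sketched as follows; the prefactor 1.41 and $d_F = 2.5$ are the values quoted in the text, while the volume fractions used below are placeholders:

```python
# Sketch of Eq. (phiHS): effective volume fraction of fractal clusters.
# The prefactor (2 R_g / sigma_HS)^3 evaluates to ~1.41 for the quoted values.

def phi_hs_cluster(phi, n_agg, d_f=2.5, prefactor=1.41):
    """phi_HS = prefactor * phi * N_agg**((3 - d_f)/d_f)."""
    return prefactor * phi * n_agg ** ((3.0 - d_f) / d_f)

# For monomers (N_agg = 1) only the bare ~41% increase remains.
assert abs(phi_hs_cluster(0.10, 1) - 0.141) < 1e-12
# Fractal clusters occupy disproportionately more effective volume:
# for N_agg = 32 the scaling factor is 32**0.2 = 2, doubling phi_HS.
assert abs(phi_hs_cluster(0.10, 32) - 0.282) < 1e-9
```

The $N_{agg}^{(3-d_F)/d_F}$ factor is what makes the effective crowding grow faster than the nominal concentration once clusters form.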
In order to calculate $R_{h,app}$ we use the corresponding virial expression for the short time collective diffusion coefficient, which results in $$\label{DHS} R_{h,app} = R_{h} / (1 + k_D \phi_{HS}),$$ where $k_D = 1.45$ for hard spheres [@Banchio2008]. Note that here we use the $z$-average aggregation number in order to calculate $R_{h}$. The agreement for $R_{h,app}$ with the results from the simple hard sphere model is quite good (Fig. \[fig:final\]B), except for the highest values of $C$, where we expect Eq. \[DHS\] to fail and would instead need to include higher order terms. We also find that the apparent hydrodynamic radius obtained in DLS experiments is very sensitive to the interparticle interactions, and we can thus also look at a somewhat refined interaction model, where we also include the possibility of an additional weak attraction between different clusters. Here we use the so-called adhesive or sticky hard sphere model [@Piazza1998; @Cichocki1990], where we include an additional weak short-range attractive potential that could be due to the unbound attractive patches of the mAbs at the exterior of the clusters. In this model, Eqs. \[C-S\] and  \[DHS\] then become $$\label{S(0)-stsph} S_{SHS}(0) = \frac{(1 - \phi_{HS})^4}{(1 + 2 \phi_{HS} - \lambda \phi_{HS})^2},$$ and $$\label{DStSph} R_{h,app} = R_{h} / (1 + (1.45 - 1.125/\tau) \phi_{HS}),$$ where $\tau$ is the stickiness parameter that is inversely proportional to the strength of the attractive interaction and $\lambda$ is given by $$\label{lambda} \lambda = 6 (1 - \tau + \tau / \phi) \biggl(1-\sqrt{1 - \frac{1 + 2 / \phi}{6 (1 - \tau + \tau / \phi)^2}}\biggr).$$ The corresponding theoretical curves when $\tau$ is used as an additional fit parameter to the SLS and DLS data are also shown in Fig. \[fig:final\]A and B. 
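The sticky-hard-sphere expressions can be sketched as below; we take the $\phi$ appearing in Eq. \[lambda\] to be the effective volume fraction $\phi_{HS}$, and the numerical values are placeholders rather than fit results:

```python
import math

# Sketch of Eqs. (S(0)-stsph), (DStSph) and (lambda). tau is the stickiness
# parameter; the hard-sphere limit is recovered as tau -> infinity
# (lambda -> 0 and k_D -> 1.45).

def lam(tau, phi):
    """Eq. (lambda)."""
    a = 6.0 * (1.0 - tau + tau / phi)
    return a * (1.0 - math.sqrt(1.0 - (1.0 + 2.0 / phi)
                                / (6.0 * (1.0 - tau + tau / phi) ** 2)))

def s_shs_zero(phi, tau):
    """Eq. (S(0)-stsph): low-q structure factor of sticky hard spheres."""
    return (1.0 - phi)**4 / (1.0 + 2.0 * phi - lam(tau, phi) * phi)**2

def r_h_app_shs(r_h, phi, tau):
    """Eq. (DStSph): apparent radius with k_D = 1.45 - 1.125/tau."""
    return r_h / (1.0 + (1.45 - 1.125 / tau) * phi)

# Attraction (smaller tau) enhances S(0) and the apparent radius relative
# to the pure hard-sphere (tau -> infinity) result.
assert s_shs_zero(0.2, 2.5) > s_shs_zero(0.2, 1e6)
assert r_h_app_shs(5.0, 0.2, 2.5) > r_h_app_shs(5.0, 0.2, 1e6)
```

With $\tau \sim 2.5$, as found below, the corrections relative to pure hard spheres are small, consistent with a very weak residual inter-cluster attraction.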
In particular, a better description of the apparent hydrodynamic radius is obtained within the SHS model with $\tau\sim 2.5$, corresponding to a very weak additional attraction between the mAb clusters. While the approximations made in our coarse grained strategy may be too severe to say much about the exact nature of the effective interaction potential between the mAb clusters in solution, the experimental data are very well reproduced by our simple model. This indicates that the two chosen models, a pure hard sphere and an adhesive hard sphere interaction with moderate stickiness, likely bracket the true behavior of the self-assembling antibody investigated in this study. Finally, as an ultimate test, we calculate the concentration dependence of the relative viscosity $\eta_r$. We use the expression for $\eta_r$ developed by Mooney, which is often and successfully applied for mono- and polydisperse hard sphere colloidal suspensions [@Mooney1951]: $$\label{Mooney} \eta_r = e^{\frac{A\phi_{HS}}{(1 - \phi_{HS}/\phi_g)}}.$$ Here $A$ is a constant, which for hard spheres is 2.5, and $\phi_g$ is the maximum packing fraction, which depends on the polydispersity of the system. In order to estimate it, we have evaluated the polydispersity of our antibody clusters as a function of concentration and find that at the highest studied concentration it reaches about 45%. For such polydisperse hard spheres, the maximum packing fraction is $\approx 0.71$[@Farr2009]. Using this value, we then should directly obtain the concentration dependence of $\eta_r$ from the previously determined relationship between $C$ and $\phi_{HS}$ without any free parameter. The resulting comparison between the measured and calculated values of $\eta_r$ is shown in the bottom panel of Fig. \[fig:final\], and the agreement is indeed quite remarkable given the lack of any free parameter. 
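Eq. \[Mooney\] is simple enough to sketch directly, with $A = 2.5$ and $\phi_g \approx 0.71$ as quoted in the text (the trial volume fractions are illustrative):

```python
import math

# Sketch of Eq. (Mooney): relative viscosity of (poly)disperse hard spheres.

def mooney_viscosity(phi_hs, a=2.5, phi_g=0.71):
    """eta_r = exp(A * phi_HS / (1 - phi_HS / phi_g))."""
    return math.exp(a * phi_hs / (1.0 - phi_hs / phi_g))

# Dilute limit recovers Einstein's result eta_r ~ 1 + 2.5 phi.
phi = 1e-4
assert abs(mooney_viscosity(phi) - (1.0 + 2.5 * phi)) < 1e-6
# The viscosity grows dramatically as phi_HS approaches phi_g.
assert mooney_viscosity(0.6) > 100.0
```

Because $\phi_{HS}(C)$ is already fixed by the light scattering analysis, evaluating this expression along the measured concentrations involves no remaining free parameter, which is the point emphasized above.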
This clearly indicates that it is the excluded volume interactions between the self-associating clusters that are at the origin of the strong increase of the zero shear viscosity with increasing concentration, and our simple model is capable of quantitatively predicting the measured $C$-dependence based on static and dynamic light scattering experiments. Discussion and Conclusions {#discussion-and-conclusions .unnumbered} ========================== The self-assembly of monoclonal antibodies and its effect on solution properties such as the viscosity are important factors in determining our ability to develop high concentration formulations. However, there has been a lack of decisive experimental and theoretical approaches to obtain a quantitative and predictive understanding of antibody solutions. A recent theoretical study has proposed a patchy model for antibody molecules[@kastelic2017controlling], in which different types of patch-patch attractions were considered, resulting in a large number of parameters to be adjusted to describe different experimental conditions. On the other hand, in the present work we define the simplest model based on electrostatic calculations for the specific type of immunoglobulin also studied experimentally within the same buffer and salt conditions. This very simple model is analytically solvable by well-established theories, in particular the combination of the Wertheim theory with hyperbranched polymer theory to predict the aggregation properties of the mAb solutions. We have also shown that both thermodynamic properties and cluster distributions are in quantitative agreement with MC simulations of the model, so that the theoretical predictions can be directly compared with experiments without suffering from numerical uncertainty. From the mAb self-assembly process built by the patchy interactions, we then employ a second coarse-graining step in which we consider our antibody clusters as the elementary units.
We thus use the most basic description, in which these clusters interact essentially as hard spheres or sticky hard spheres with very moderate attraction, and apply available phenomenological descriptions to predict the dynamic properties of the system. This treatment essentially does not depend on any free parameter and is able to reproduce all measured data from SLS, DLS and microrheology. This simple model, based on very fundamental assumptions, thus provides an elegant way to consistently describe the thermodynamic and dynamical behavior of mAb solutions. The patchy model that we have established also provides a robust estimate of the attraction between patchy binding sites through Wertheim theory, and thus will be an ideal starting point to investigate and quantitatively assess the effects of additional excipients or chemical modifications on the antibody interaction. Such information is vital for an advanced formulation strategy and attempts to predict antibody stability and the resulting viscosity from molecular information. Moreover, the combination of static scattering data and Wertheim/HPT to determine the interaction strength and the cluster size distribution as a function of concentration, and the subsequent test using DLS and (micro)rheology measurements without additional free parameters other than a rescaling of the volume fraction, allows us to critically test models for the type of interactions responsible for the self-association of a given mAb into clusters. Materials and Methods ===================== Sample preparation {#sample-preparation .unnumbered} ------------------ The mAb used in this study was a humanized IgG4 against trinitrophenyl, which was previously found to exhibit an increased viscosity at high concentrations [@Neergaard2013] (where it was labeled *mab-C*).
It was manufactured by Novo Nordisk A/S and purified using Protein A chromatography, and subsequently concentrated to 100 mg/ml and buffer exchanged into a 10 mM Histidine buffer with 10 mM NaCl at pH 6.5. For measurements, the sample was diluted and buffer exchanged to a 20mM Histidine pH 6.5 buffer containing 10 mM NaCl and subsequently concentrated using a 100 kDa cutoff spin filter (Amicon Inc.). The concentrated sample was used as a stock solution for preparing the less concentrated ones. The concentration of each sample was determined by a series of dilutions followed by measurement of the absorption at 280nm using an extinction coefficient of $e^{280nm}_{1\%,1cm} = 2.234$. In order to assess the uncertainty of the concentration determination, the dilution series was done in triplicate. Light Scattering {#light-scattering .unnumbered} ---------------- The dynamic and static light scattering experiments were performed using a 3D-LS Spectrometer (LS Instruments AG, Switzerland) with a 632nm laser, recording DLS and SLS data simultaneously. The measurements were conducted at $90^{\circ}$ scattering angle. Before measurement, the samples were transferred to pre-cleaned 5mm NMR tubes and centrifuged at 3000 g and 25 $^{\circ}$C for 15 min, to remove any large particles and to equilibrate temperature. Directly after centrifugation, the samples were placed in the temperature equilibrated sample vat and the measurement was started after 5 minutes to allow for thermal equilibration. Additional low concentration SLS measurements were done using a HELIOS DAWN multi-angle light scattering instrument (Wyatt Technology Corporation, CA, USA), connected to a concentration gradient pump. Both instruments were calibrated to absolute scale using a secondary standard, allowing for direct comparison of the two data sets. Microrheology {#microrheology .unnumbered} ------------- The zero shear viscosity $\eta_0$ was obtained using DLS-based tracer microrheology.
Sterically stabilized (pegylated) latex particles were mixed with protein samples to a concentration of 0.01 $\% $v/v using vortexing and transferred to 5 mm NMR tubes. The sterically stabilized particles were prepared by cross-linking 0.75 kDa amine-PEG (poly-ethylene glycol) (Rapp Polymere, 12750-2) to carboxylate stabilized polystyrene (PS) particles (ThermoFischer Scientific, C37483) with a diameter of 1.0 $\mu$m using EDC (N-(3-Dimethylaminopropyl)-N’-ethylcarbodiimide) (Sigma Aldrich, 39391) as described in detail in [@Garting2018]. DLS measurements were performed on a 3D-LS Spectrometer (LS Instruments AG, Switzerland) at a scattering angle of 46-50$^\circ$ to stay away from the particle form factor minima and thus to maximise the scattering contribution from the tracer particles with respect to the protein scattering. Measurements were made using modulated 3D cross correlation DLS [@Block2010] to suppress all contributions from multiple scattering that occur in the attempt to achieve conditions where the total scattering intensity is dominated by the contribution from the tracer particles. Samples were either prepared individually or diluted from more concentrated samples using a particle dispersion with the same particle concentration as in the sample as the diluent. The diffusion coefficient $D$ of the particles was then extracted from the intensity autocorrelation function using a 1st order cumulant analysis of the relevant decay. This diffusion coefficient is compared to that of particles in a protein-free buffer and the relative viscosity is extracted from the relationship between diffusion coefficient and viscosity in the Stokes-Einstein equation given by $D = k_BT / 6 \pi \eta_0 R_h$, where $R_h$ is the known hydrodynamic radius of the tracer particles[@Garting2018; @Furst2017]. 
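The extraction of the relative viscosity from the tracer diffusion data can be sketched as follows; the buffer viscosity and tracer size below are representative values, not measured numbers:

```python
import math

# Sketch of the tracer-microrheology analysis: the relative viscosity follows
# from the ratio of the tracer diffusion coefficients in buffer and in the
# protein solution via Stokes-Einstein, D = k_B T / (6 pi eta_0 R_h).
K_B = 1.380649e-23  # J/K

def stokes_einstein_d(eta, r_h, T=298.15):
    """Diffusion coefficient in m^2/s for viscosity eta (Pa s) and radius r_h (m)."""
    return K_B * T / (6.0 * math.pi * eta * r_h)

def relative_viscosity(d_buffer, d_sample):
    """eta_r = eta_sample / eta_buffer = D_buffer / D_sample (same T and R_h)."""
    return d_buffer / d_sample

# A 1.0 um tracer (R_h = 0.5 um) in a water-like buffer (eta ~ 0.89 mPa s):
d0 = stokes_einstein_d(8.9e-4, 0.5e-6)
# If the same tracer diffuses 10x more slowly in the mAb solution, eta_r = 10.
assert abs(relative_viscosity(d0, d0 / 10.0) - 10.0) < 1e-9
```

Since temperature and tracer radius are identical in buffer and sample, they cancel in the ratio, which is why only the two measured diffusion coefficients enter.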
Isosurface calculations {#isosurface-calculations .unnumbered} ----------------------- The FAB domains were built using the antibody modeler tool in the Molecular Operating Environment (Chemical Computing Group Inc, Canada) computer program[@MOE2011], whereas the FC domain was taken from a crystallographic structure with a similar FC domain found in the Protein Data Bank (PDB ID: 4B53). The electrostatic calculations were done in a two-step process, using pdb2pqr [@Dolinsky2007] and the Adaptive Poisson-Boltzmann Solver[@Jurrus2018] (APBS) PyMOL plugin. The pdb2pqr server is hosted by the National Biomedical Computation Resource at http://nbcr-222.ucsd.edu/pdb2pqr\_2.1.1/, and was used to calculate the protonation state of the FAB and FC domains at pH $6.5$ taking the local structure around the titratable residues into account. The prepared structures were then used by the APBS plugin to calculate an electrostatic map of the protein. APBS was run using the default parameters, with the addition of Na+ and Cl- ions corresponding to a salt concentration of 10mM. YAB Patchy model and MC simulations {#yab-patchy-model-and-mc-simulations .unnumbered} ----------------------------------- The antibody molecule is represented as a symmetric $Y$-shaped particle, constructed from six hard spheres of diameter $\sigma$, as illustrated in Fig. \[fig:model\]B. Each mAb is decorated by 3 patches, one of type $A$ on the tail and two of type $B$ on the arms. Only $AB$ interactions are taken into account based on the charge distribution on the surface of the mAb molecule in the studied buffer conditions, and are modeled as an attractive square well (SW) potential of range $\delta = 0.1197\sigma$, which guarantees that each patch is engaged in at most one bond. For this model the geometric diameter $d$ of a single mAb molecule is that of the circle tangent to the external spheres: $d = \frac{9 + 2 \sqrt{3}}{3} \sigma$.
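As a consistency check (a sketch, not the authors' code), the geometric-diameter formula above, together with the measured hydrodynamic radius $d/2 \approx 6$ nm and a molar mass of 150 kDa quoted in the next subsection, indeed reproduces $\sigma \approx 2.89$ nm and the conversion 1 mg/ml $\approx 9.69 \times 10^{-5}/\sigma^3$:

```python
import math

# Unit-conversion check: sigma from d = (9 + 2 sqrt(3))/3 * sigma with
# d = 12 nm, and the number density per sigma^3 at 1 mg/ml for 150 kDa.
N_A = 6.02214076e23  # Avogadro's number, 1/mol

sigma = 12.0 / ((9.0 + 2.0 * math.sqrt(3.0)) / 3.0)  # nm
rho_si = 1.0 / 150000.0 * 1e3 * N_A                  # 1 mg/ml -> particles/m^3
rho_sim = rho_si * (sigma * 1e-9) ** 3               # particles per sigma^3

assert abs(sigma - 2.89) < 0.01
assert abs(rho_sim - 9.6938e-5) < 5e-7
```

The small residual difference from the quoted $9.6938 \times 10^{-5}$ reflects rounding of $\sigma$ and the molar mass.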
We perform standard MC simulations of $N=1000$ YAB particles at different number densities $\rho=N/V$, where $V$ is the volume of the cubic simulation box. The unit of length is $\sigma$. To compare the experimental value of $C$ with simulations and theory, we consider the geometric radius $d/2$ of the Y-colloid to be equal to the hydrodynamic radius measured for a single mAb molecule, that is $\approx 6$nm. With this choice we have that $\sigma \approx 2.89$nm and, considering that the mass of a molecule is 150 kDa, an experimental concentration of 1 mg/ml corresponds to $9.6938 \times 10^{-5}/\sigma^3$ in simulation units. Theory {#theory .unnumbered} ------ In Wertheim theory[@Wertheim1984; @Tavares2010], the free energy $F$ of a system of $N$ particles in a volume $V$, with number density $\rho=N/V$, is calculated as the sum of a hard sphere reference term plus a bonding term. The bonding free energy $F_b$ per particle of the YAB model is $$\label{Fb} \beta \frac{F_b}{N} = 2 \ln X_B + \ln X_A - X_B - \frac{X_A}{2} + \frac{3}{2}$$ where $X_A$ and $X_B$ are the fractions of non-bonded patches of species $A$ and $B$, respectively[@Jackson1988], and $\beta = \frac{1}{k_BT}$. For the $YAB$ model they are: $$X_A = \frac{1}{1+2\rho \Delta X_B}; \ \ \ X_B = \frac{1}{1+\rho \Delta X_A}, \label{eq:X}$$ with $\Delta = v_B [e^{\beta \epsilon_0} - 1] \frac{1 - A\eta - B\eta ^2}{(1 - \eta)^3}$, $v_B = \pi \delta^4 \frac{15 \sigma + 4 \delta}{30 \sigma^2}$, $A = \frac{5}{2} \frac{3 + 8 \delta/\sigma + 3(\delta/\sigma)^2}{15 + 4\delta/\sigma}$, $B = \frac{3}{2} \frac{12 \delta/\sigma + 5 (\delta/\sigma)^2}{15 + 4\delta/\sigma}$, $\eta = \frac{\pi}{6} \rho \sigma^3$[@Bianchi2006; @Bianchi2008]. The reference HS system must be chosen according to the nature of the molecule. For non-spherical molecules, the HS reference system effective diameter is not known and needs to correctly account for the excluded volume of the particles. This is established from the comparison to experiments.
Once this is known, experimentally accessible quantities such as the osmotic compressibility of the system can be directly calculated from the expression of $F$. From Eq. \[eq:X\] we can calculate the expressions for the fractions of non-bonded patches in terms of density and temperature, as $$\begin{aligned} X_A &=& \frac{2}{1+\rho\Delta+\sqrt{\rho^2\Delta^2+6\rho\Delta+1}};\nonumber\\ X_B &=&\frac{\rho\Delta-1+\sqrt{\rho^2\Delta^2+6\rho\Delta+1}}{4\rho\Delta}. \label{eq:Xtrue}\end{aligned}$$ Instead of using these two variables, it is more convenient to refer to the so-called bond probability $p$, defined as $$\label{p} p \equiv p_B = 1 - X_B = \frac{p_A}{2} = \frac{1 - X_A}{2}.$$ While Wertheim theory directly provides overall quantities such as the compressibility or the fraction of bonded $A$ and $B$ groups as a function of $p$, it does not yield the resulting cluster size distribution that would be needed for a comparison with other experimental quantities. We thus apply hyperbranched polymer theory (HPT) exploiting the fact that the YAB molecule is of the kind $AB_{f-1}$ in HPT language with $f=3$[@Rubinstein2003]. We consider $p$ to be the fraction of bonded $B$ groups (i.e. the bond probability of Wertheim theory defined above) and $(f-1)p$ the fraction of bonded $A$ groups. There is one non-bonded $A$ group for each cluster; therefore, the average number of monomers per cluster is the reciprocal of the fraction of unreacted $A$ groups. The only input then needed to evaluate the cluster size distribution $n(s)$ is the bond probability $p$, which we get from Wertheim theory. In the YAB model, calling $p$ ($2p$) the fraction of bonded $B$ ($A$) patchy sites, the cluster size distribution in the framework of hyperbranched polymer theory is finally given by $$\label{ns} n (s) = \frac{(2s)!}{s!(s+1)!} p^{s-1}(1-p)^{s+1},$$ where $n(s)$ is the probability of finding clusters of size $s$ for a system with bond probability $p$.
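The closed-form expressions above lend themselves to a compact numerical sketch (illustrative values of $\rho\Delta$ and $p$, not fitted ones):

```python
import math

# Sketch of Eqs. (eq:Xtrue) and (ns): non-bonded patch fractions from Wertheim
# theory and the HPT cluster size distribution. The bond strength and density
# enter only through the dimensionless product rho * Delta.

def patch_fractions(rho_delta):
    """Closed-form X_A, X_B of Eq. (eq:Xtrue) as functions of rho*Delta."""
    root = math.sqrt(rho_delta**2 + 6.0 * rho_delta + 1.0)
    x_a = 2.0 / (1.0 + rho_delta + root)
    x_b = (rho_delta - 1.0 + root) / (4.0 * rho_delta)
    return x_a, x_b

def cluster_distribution(p, s):
    """Eq. (ns): n(s) = (2s)!/(s!(s+1)!) * p^(s-1) * (1-p)^(s+1)."""
    catalan = math.factorial(2 * s) // (math.factorial(s) * math.factorial(s + 1))
    return catalan * p ** (s - 1) * (1.0 - p) ** (s + 1)

# The closed forms satisfy the implicit mass-action relations of Eq. (eq:X):
x_a, x_b = patch_fractions(2.0)
assert abs(x_a - 1.0 / (1.0 + 2.0 * 2.0 * x_b)) < 1e-12
assert abs(x_b - 1.0 / (1.0 + 2.0 * x_a)) < 1e-12
# Without bonding only monomers remain; with bonding, n(s) decays with s.
assert abs(cluster_distribution(0.0, 1) - 1.0) < 1e-12
assert cluster_distribution(0.3, 2) > cluster_distribution(0.3, 5)
```

The combinatorial prefactor $(2s)!/(s!(s+1)!)$ is the Catalan number counting the distinct $AB_2$ tree topologies of an $s$-cluster.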
From the calculated cluster size distributions we then compute the weight average, the $z$-average and the polydispersity of the clusters for each concentration. Finally, in order to evaluate $S(0)$ as a function of concentration, we simply take a double derivative of the free energy in order to get the compressibility of the system[@Tavares2009]. Model fitting {#model-fitting .unnumbered} ------------- All data are fitted using the orthogonal distance regression procedure[@boggs1990; @boggs1992], which includes the experimental errors in both the x and y directions. The full set of fit parameters is given in Table \[tab:fitresults\]. For the hard sphere model fits shown in Fig. \[fig:final\] two parameters are fitted: the bonding energy and a scaling factor of the SLS data. The scaling factor is introduced to correct for any errors in the calibration linking the scattering intensity to the apparent molecular mass, and should be close to one. For the sticky hard sphere model three parameters are fitted: the bonding energy, the stickiness parameter and a scaling factor for the SLS data. For the fits using the compressibility from Wertheim theory (figure \[fig:S0\]), the first fit uses three free parameters: the bonding energy, a scaling factor of the SLS data and an effective hard sphere scaling factor. In the second fit the effective hard sphere scaling factor is fixed to $\sigma_{HS} = 2.9\sigma$ and only the bonding energy and scaling factor are fitted.

| Model fit | $\epsilon/k_BT$ | $\sigma_{HS}/\sigma$ | $\tau$ | SLS scale |
|---|---|---|---|---|
| Compressibility full fit | $12.04\pm 0.04$ | $2.70 \pm 0.03$ | - | $0.994 \pm 0.002$ |
| Compressibility | $12.27 \pm 0.03$ | $2.9^*$ | - | $0.984 \pm 0.003$ |
| HS Model | $12.33 \pm 0.03$ | $2.9^*$ | - | $0.992 \pm 0.008$ |
| Sticky HS Model | $12.24 \pm 0.03$ | $2.9^*$ | $2.45 \pm 0.45$ | $0.993 \pm 0.006$ |

: **Table of the obtained fit parameters**. The $^*$ indicates a fixed parameter.[]{data-label="tab:fitresults"}

Generation of Antibody Clusters {#generation-of-antibody-clusters .unnumbered} ------------------------------- The antibody clusters were generated using a molecular model of the antibody constructed from low concentration SAXS data \[Article in preparation\] using the SAXS modeling software BUNCH[@Petoukhov2005]. The FC and FAB domains (generated as described in the *isosurface calculations* section) were represented as rigid bodies, linked together with a flexible linker of dummy residues. The linker was further constrained by linking the dummy residues that represent the cysteine residues together to simulate the cysteine bridges in the hinge region of an IgG4. To generate a self-associated antibody cluster containing N antibodies, the following procedure was used:

1. An initial antibody was placed with its center of mass (CM) at the origin and oriented randomly.

2. A new antibody is placed at a distance of 12.5 nm in a random direction from the initial antibody CM and oriented randomly, and a connection between them is recorded.

3. From the already placed antibodies one is selected at random if it has less than 3 connections.

4. A new antibody is now placed at a distance of 12.5 nm in a random direction from the selected antibody, oriented randomly. The distance between the CM of the newly placed antibody and all other placed antibodies is calculated, and if it is 12.5 nm or more the connection is recorded. If not, step 4 is repeated.

5. Steps 3 and 4 are repeated until N antibodies have been placed.

For each association number, N, 100 clusters were produced using the method above.
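The placement procedure above can be sketched in simplified form, reducing each antibody to its center of mass (the full molecular model and orientations are omitted); the 12.5 nm bond distance, the self-avoidance criterion and the 3-connection maximum follow steps 1-5:

```python
import math
import random

# Simplified sketch of the cluster-generation procedure: each new monomer is
# placed 12.5 nm from a randomly chosen monomer with fewer than 3 connections,
# and the placement is rejected if it comes closer than 12.5 nm to any other
# already placed monomer (self-avoidance).

BOND = 12.5  # nm

def random_direction(rng):
    """Uniform random unit vector on the sphere."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def grow_cluster(n, seed=0):
    rng = random.Random(seed)
    positions = [(0.0, 0.0, 0.0)]  # step 1: first antibody CM at the origin
    connections = [0]
    while len(positions) < n:
        # step 3: pick a random monomer with fewer than 3 connections
        open_sites = [i for i, c in enumerate(connections) if c < 3]
        i = rng.choice(open_sites)
        dx, dy, dz = random_direction(rng)
        x0, y0, z0 = positions[i]
        new = (x0 + BOND * dx, y0 + BOND * dy, z0 + BOND * dz)
        # step 4: accept only if >= BOND from every other placed antibody
        if all(math.dist(new, q) >= BOND - 1e-9
               for j, q in enumerate(positions) if j != i):
            positions.append(new)
            connections[i] += 1
            connections.append(1)
    return positions, connections

pos, conn = grow_cluster(10, seed=1)
assert len(pos) == 10 and max(conn) <= 3
# every pairwise distance respects the self-avoidance criterion
assert all(math.dist(pos[a], pos[b]) >= BOND - 1e-6
           for a in range(10) for b in range(a + 1, 10))
```

In the actual workflow each realization would then be passed to Hydropro to obtain its hydrodynamic radius, and 100 realizations per $N$ are averaged as described above.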
Acknowledgments {#acknowledgments .unnumbered} =============== This work was financed by the Swedish Research Council (VR; Grant No. 2016-03301), the Faculty of Science at Lund University, the Knut and Alice Wallenberg Foundation (project grant KAW 2014.0052), the European Research Council (ERC-339678-COMPASS and ERC-681597-MIMIC) and the European Union (MSCA-ITN COLLDENSE, grant agreement No. 642774).
--- abstract: 'In this paper, we study the possible second order Lax operators for all the possible (1+1)-dimensional models with Schwarz variants and some special types of high dimensional models. It is shown that every (1+1)-dimensional model and some special types of high dimensional models which possess Schwarz variants may have a second order Lax pair. The explicit Lax pairs for the (1+1)-dimensional Korteweg de Vries equation, Harry Dym equation, Boussinesq equation, Caudry-Dodd-Gibbon-Sawada-Kortera equation, Kaup-Kupershmidt equation, Riccati equation, (2+1)-dimensional breaking soliton equation and a generalized (2+1)-dimensional fifth order equation are given.' author: - | Sen-yue Lou$^{1,2,3}$[^1], Xiao-yan Tang$^{2,3}$, Qing-Ping Liu$^{1,4,3}$ and T. Fukuyama$^{5,6}$\ $^{1}$CCAST (World Laboratory), PO Box 8730, Beijing 100080, P. R. China\ $^{2}$Physics Department of Shanghai Jiao Tong University, Shanghai 200030, P. R. China\ $^{3}$Abdus Salam International Centre for Theoretical Physics, Trieste, Italy\ $^{4}$Beijing Graduate School, China University of Mining and Technology,\ Beijing 100083, P. R. China\ $^5$Department of Physics, Ritsumeikan University, Kusatsu, Shiga 525-8577, Japan\ $^6$Department of Physics, University of Maryland, College Park, MD20742, U.S.A. title: '**Second order Lax pairs of nonlinear partial differential equations with Schwarz variants**' --- Introduction ============ In the study of a nonlinear mathematical physics system, if one can show that the nonlinear system is the consistency condition of a pair of linear problems, then some types of specially important exact solutions of the nonlinear system can be obtained by means of this pair of linear problems. The pair of linear problems is called a Lax pair of the original nonlinear system, and the nonlinear system is called Lax integrable or IST (inverse scattering transformation) integrable.
Usually, a Lax integrable model may also have many other interesting properties, such as the existence of infinitely many conservation laws and infinitely many symmetries, multi-soliton solutions, a bilinear form, Schwarz variants, multi-Hamiltonian structures, the Painlevé property, etc. In recent studies, we found that the existence of Schwarz variants may play an important role. Actually, to our knowledge, almost all the known IST integrable (1+1)- and (2+1)-dimensional models possess Schwarz invariant forms which are invariant under the Möbius transformation (conformal invariance)$\cite{Weiss, Nucci}$. The conformal invariance of the well known Schwarz Korteweg de-Vries (SKdV) equation is related to the infinitely many symmetries of the usual KdV equation$\cite{SchKdV}$. The conformal invariant related flow equation of the SKdV is linked with some types of (1+1)-dimensional and (2+1)-dimensional sinh-Gordon (ShG) equations and Mikhailov-Dodd-Bullough (MDB) equations$\cite{Riccati}$. It is also known that by means of the Schwarz forms of many known integrable models, one can also find many other integrable properties such as Bäcklund transformations and Lax pairs$\cite{Weiss}$. In $\cite{Lou}$, one of the present authors (Lou) proposed that starting from a conformal invariant form may be one of the best ways to find integrable models, especially in high dimensions. Some types of quite general Schwarz equations are proved to be Painlevé integrable. In $\cite{PRL}$, Conte's conformal invariant Painlevé analysis$\cite{Conte}$ is extended to obtain high dimensional Painlevé integrable Schwarz equations systematically. And some types of physically important high dimensional nonintegrable models can be solved approximately via some high dimensional Painlevé integrable Schwarz equations$\cite{Ruan}$. Now an important question is: what kind of Schwarz equations are related to Lax integrable models?
To answer this question in full generality in arbitrary dimensions is still quite difficult, so in this paper we restrict our main interest to (1+1)-dimensional models. In the next section, we prove that for any (1+1)-dimensional Schwarz model there may be a second order Lax pair linked with it. In section 3, we list various concrete physically significant examples. In section 4, we discuss some special extensions in higher dimensions. The last section is a short summary and discussion. A second order (1+1)-dimensional Lax pair linked with an arbitrary Schwarz form =============================================================================== In (1+1) dimensions, the only known independent conformal invariants are $$\begin{aligned} &&p_1\equiv \frac{\phi_t}{\phi_x},\\ &&p_2\equiv \{\phi; \ x\}\equiv \frac{\phi_{xxx}}{\phi_x}-\frac32\frac{\phi_{xx}^2}{\phi_x^2},\\ &&p_3\equiv \{\phi;\ t\}\equiv \frac{\phi_{ttt}}{\phi_t}-\frac32\frac{\phi_{tt}^2}{\phi_t^2},\\ &&p_4\equiv \{\phi;x;t\}\equiv \frac{\phi_{xxt}}{\phi_t}-\frac{\phi_{xx}\phi_{xt}}{\phi_x\phi_t}-\frac12\frac{\phi_{xt}^2}{\phi_t^2},\\ &&p_5\equiv \{\phi;t;x\}\equiv \frac{\phi_{xtt}}{\phi_x}-\frac{\phi_{tt}\phi_{xt}}{\phi_x\phi_t}-\frac12\frac{\phi_{xt}^2}{\phi_x^2},\end{aligned}$$ where $\phi$ is a function of $\{x,\ t\}$, the subscripts denote usual derivatives, while $\{\phi;\ x\}$ is the Schwarz derivative. As in $\cite{Lou,PRL,Ruan}$, we say a quantity is a conformal invariant if it is invariant under the Möbius transformation $$\begin{aligned} \phi\rightarrow \frac{a\phi+b}{c\phi+d},\ ad\neq bc.\end{aligned}$$ From (1)–(5), we know that the general (1+1)-dimensional conformal invariant Schwarz equation has the form $$\begin{aligned} F(x,t,p_i,p_{ix},p_{it},p_{ixx},... (i=1,...,5) )\equiv F(p_1,\ p_2,\ p_3,\ p_4,\ p_5)=0,\end{aligned}$$ where $F$ may be an arbitrary function of $x,\ t,\ p_i$ and any order of derivatives and even integrations of $p_i$ with respect to $x$ and $t$.
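As a quick numerical sanity check (not part of the original derivation), the invariance of the Schwarz derivative $p_2$ under the Möbius transformation (6) can be verified by finite differences for a concrete $\phi$:

```python
import math

# Finite-difference check that p_2 = {phi; x} = phi_xxx/phi_x - (3/2)(phi_xx/phi_x)^2
# is invariant under phi -> (a phi + b)/(c phi + d) with ad != bc.

def schwarzian(f, x, h=1e-3):
    """Central-difference approximation of the Schwarz derivative of f at x."""
    f1 = (f(x + h) - f(x - h)) / (2 * h)
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    f3 = (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)
    return f3 / f1 - 1.5 * (f2 / f1)**2

phi = math.exp                    # any smooth function with phi_x != 0
a, b, c, d = 2.0, 1.0, 1.0, 3.0   # ad - bc = 5 != 0
g = lambda x: (a * phi(x) + b) / (c * phi(x) + d)

x0 = 0.3
# {e^x; x} = 1 - 3/2 = -1/2 exactly; the Mobius image must agree.
assert abs(schwarzian(phi, x0) + 0.5) < 1e-4
assert abs(schwarzian(g, x0) - schwarzian(phi, x0)) < 1e-4
```

The same check applies pointwise to $p_3$, $p_4$ and $p_5$, since each is built from the same Möbius-covariant building blocks.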
According to the idea of $\cite{Lou}$, (7) (or many cases of (7)) may be integrable. If $F$ of (7) is a polynomial function of $p_i$ and the derivatives of $p_i$, then one may prove its Painlevé integrability by using the method of $\cite{Lou,PRL}$. However, for a general function $F$ in (7), it is difficult to prove its Painlevé integrability. Fortunately, we can find its relevant variant forms with a Lax pair. To realize this idea, we consider the following second order Lax pair: $$\begin{aligned} &&\psi_{xx}=u\psi_x+v\psi,\\ &&\psi_t=u_1\psi_x+v_1\psi,\end{aligned}$$ where $u,\ u_1,\ v,$ and $v_1$ are undetermined functions. To link the Lax pair (8) and (9) with the Schwarz equation (7), we suppose that $\psi_1$ and $\psi_2$ are two solutions of (8) and (9), and that $\phi$ of (7) is linked to $\psi_1$ and $\psi_2$ by $$\begin{aligned} \phi=\frac{\psi_1}{\psi_2}.\end{aligned}$$ Now, by substituting (10) together with (8) and (9) into (7) directly, we know that if the functions $u,\ v$ and $u_1$ are linked by $$\begin{aligned} F(P_1,\ P_2,\ P_3,\ P_4,\ P_5)=0\end{aligned}$$ with $$\begin{aligned} &&P_1=u_1,\ P_2=u_x-\frac12u^2-2v,\\ &&P_3=P_2u_1^2+u_1u_{1xx}-\frac12u_{1x}^2 +u_{1xt}+u_1^{-1}(u_{1tt}-u_{1x}u_{1t})-\frac32u_1^{-2}u_{1t}^2,\\ &&P_4=P_2+u_{1xx}u_1^{-1}-\frac12u_1^{-2}u_{1x}^2,\\ &&P_5=u_1^2P_4+u_{1xt}-u_1^{-1}u_{1t}u_{1x},\end{aligned}$$ then the corresponding nonlinear equation system for the fields $u,\ v,\ u_1$ and $v_1$ has a Lax pair (8) and (9), while the fields $u,\ v,\ u_1$ and $v_1$ are linked to the field $\phi$ by the non-auto-Bäcklund transformation $$\begin{aligned} p_i=P_i, (i=1,\ 2,\ ...,\ 5).\end{aligned}$$ Finally, finding the evolution equation system is straightforward: one calculates the compatibility condition of (8) and (9), $$\begin{aligned} \psi_{xxt}=\psi_{txx}.\end{aligned}$$ The result reads $$\begin{aligned} v_t=v_{1xx}+2vu_{1x}+u_1v_x-uv_{1x},\end{aligned}$$ and $$\begin{aligned} u_t=u_{1xx}+2v_{1x}+(uu_1)_x\end{aligned}$$ in addition to the
constraint (11). In Eqs. (11), (18) and (19), one of the four functions $u,\ u_1,\ v,$ and $v_1$ still remains free. For simplicity, one can simply take $$\begin{aligned} u=0,\ v_1=-\frac12u_{1x}.\end{aligned}$$ Under the simplification (20), the final evolution equation related to the Schwarz form (7) reads $$\begin{aligned} v_t=-\frac12u_{1xxx}+2vu_{1x}+u_1v_x\end{aligned}$$ with (11) for $u=0$, while the Lax pair is simplified to $$\begin{aligned} &&L\psi\equiv (\partial_x^2-v)\psi=0,\\ &&\psi_t=M\psi\equiv (u_1\partial_x-\frac12u_{1x})\psi.\end{aligned}$$ It should be emphasized again that the Lax operator given in (22) is only a second order operator. To see the results more concretely, we discuss some special physically significant models in the following section. Special examples ================ By suitable selections of $F\equiv F(p_1,\ p_2,\ p_3,\ p_4,\ p_5)$ in (7), we may obtain various interesting examples according to the general theory of the last section. For the KdV equation, its Schwarz variant has the simple form $$\begin{aligned} F_{KdV}(p_i)=p_1+p_2=0.\end{aligned}$$ According to the formula (11) with $u=0$, we know that the relation between the functions $v$ and $u_1$ is simply given by $$\begin{aligned} v=\frac12 u_1.\end{aligned}$$ Substituting (25) into (22) and (23), we re-obtain the well known Lax pair $$\begin{aligned} && \psi_{xx}-\frac12u_1\psi=0,\\ && \psi_t=u_1\psi_x-\frac12u_{1x}\psi\end{aligned}$$ for the KdV equation $$\begin{aligned} u_{1t}=3u_1u_{1x}-u_{1xxx}.\end{aligned}$$ For the HD equation, the Schwarz form reads $$\begin{aligned} F_{HD}(p_i)=p_1^2-\frac2{p_2}=0\end{aligned}$$ which leads to the relation between the functions $v$ and $u_1$ given by $$\begin{aligned} v=\frac1{u_1^2}.\end{aligned}$$ From (22), (23) and (30), one can obtain the known Lax pair $$\begin{aligned} && \psi_{xx}-\frac1{u_1^2}\psi=0,\\ && \psi_t=u_1\psi_x-\frac12u_{1x}\psi\end{aligned}$$ for the HD equation $$\begin{aligned}
u_{1t}=\frac14u_1^3u_{1xxx}.\end{aligned}$$ For the modified Boussinesq (MBQ) equation (and the Boussinesq equation), the Schwarz form has the form $$\begin{aligned} F_{MBQ}(p_i)=p_{2x}+3p_1p_{1x}+3p_{1t}=0.\end{aligned}$$ Using (34) and (11), we have $$\begin{aligned} v=\frac34u_1^2+\frac32\int u_{1t}{\rm dx}.\end{aligned}$$ Substituting (35) into (22) and (23), we get a Lax pair $$\begin{aligned} && \psi_{xx}-\left(\frac34u_1^2+\frac32\int u_{1t}{\rm dx}\right)\psi=0,\\ && \psi_t=u_1\psi_x-\frac12u_{1x}\psi.\end{aligned}$$ The related compatibility condition of (36) and (37) reads $$\begin{aligned} 3u_1^2u_{1x}+3u_{1x}\int u_{1t}{\rm dx}-\frac12 u_{1xxx}-\frac32 \int u_{1tt} {\rm dx} =0.\end{aligned}$$ Eq. (38) is called the modified Boussinesq equation because it is linked with the known Boussinesq equation $$\begin{aligned} u_{tt}+\left(3u^2+\frac13u_{xx}\right)_{xx}=0\end{aligned}$$ by the Miura transformation $$\begin{aligned} u=\frac13(\pm u_{1x}-u_1^2-\int u_{1t} {\rm dx}).\end{aligned}$$ The generalized fifth order Schwarz KdV equation has the form $$\begin{aligned} F_{FOKdV}(p_i)=p_1-a_1p_{2xx}-a_2p_2^2=0,\end{aligned}$$ where $a_1$ and $a_2$ are arbitrary constants. Using (41) and (11), we have $$\begin{aligned} u_1=-2a_1v_{xx}+4a_2v^2.\end{aligned}$$ Substituting (42) into (22) and (23), we get $$\begin{aligned} && \psi_{xx}-v\psi=0,\\ && \psi_t=(a_1v_{xxx}-4a_2vv_x)\psi-2(a_1v_{xx}-2a_2v^2)\psi_x.\end{aligned}$$ The related compatibility condition of (43) and (44) is the generalized FOKdV equation $$\begin{aligned} v_t-a_1v_{xxxxx}+4(a_1+a_2)vv_{xxx}+2(a_1+6a_2)v_xv_{xx}-20a_2v^2v_x=0.\end{aligned}$$ Some well known fifth order integrable partial differential equations are just special cases of (45).
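As a sanity check (ours, assuming SymPy is available), one can verify symbolically that substituting (42) into the general evolution equation (21) does reproduce the generalized FOKdV equation (45):

```python
import sympy as sp

x, t, a1, a2 = sp.symbols('x t a1 a2')
v = sp.Function('v')(x, t)

u1 = -2*a1*v.diff(x, 2) + 4*a2*v**2          # eq. (42), i.e. (11) for u = 0

# general evolution (21): v_t = -u1_xxx/2 + 2 v u1_x + u1 v_x
vt = -sp.Rational(1, 2)*u1.diff(x, 3) + 2*v*u1.diff(x) + u1*v.diff(x)

# eq. (45) solved for v_t
rhs45 = (a1*v.diff(x, 5) - 4*(a1 + a2)*v*v.diff(x, 3)
         - 2*(a1 + 6*a2)*v.diff(x)*v.diff(x, 2) + 20*a2*v**2*v.diff(x))

print(sp.expand(vt - rhs45))  # 0
```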
The usual FOKdV equation is related to (45) for $$\begin{aligned} a_1=1,\ a_2=\frac32.\end{aligned}$$ The Caudrey-Dodd-Gibbon-Sawada-Kotera equation is related to (45) for $$\begin{aligned} a_1=1,\ a_2=\frac14\end{aligned}$$ while the parameters $a_1$ and $a_2$ for the Kaup-Kupershmidt equation read $$\begin{aligned} a_1=1,\ a_2=4.\end{aligned}$$ The generalized seventh order Schwarz KdV equation has the form $$\begin{aligned} F_{SOKdV}(p_i)=p_1-p_{2xxxx}-\alpha p_2p_{2xx} -\beta p_{2x}^2-\lambda p_2^3 =0,\end{aligned}$$ where $\alpha$, $\beta$ and $\lambda$ are arbitrary constants. Using (49) and (11), we have $$\begin{aligned} u_1=-2v_{xxxx}+4\alpha vv_{xx} +4\beta v_x^2-8\lambda v^3.\end{aligned}$$ Substituting (50) into (22) and (23), we get $$\begin{aligned} && \psi_{xx}-v\psi=0,\\ && \psi_t=(v_{xxxxx}-2(\alpha+2\beta) v_xv_{xx} -2\alpha vv_{xxx} +12\lambda v^2v_x)\psi\nonumber\\ &&\qquad +(-2v_{xxxx}+4\alpha vv_{xx}+4\beta v_x^2 -8\lambda v^3)\psi_x.\end{aligned}$$ The related compatibility condition of (51) and (52) is the generalized SOKdV equation $$\begin{aligned} &&v_t-v_{xxxxxxx}+2(\alpha+2)vv_{xxxxx}+2(1+2\beta+3\alpha)v_xv_{xxxx} -(16\beta +12\alpha +72\lambda)vv_xv_{xx} \nonumber\\ &&\qquad +(8\alpha v_{xx}-8\alpha v^2+4\beta v_{xx}-12\lambda v^2)v_{xxx} -(24\lambda+4\beta) v_x^3+56\lambda v^3v_x=0.\end{aligned}$$ The usual SOKdV equation is related to (53) for $$\begin{aligned} \alpha=5,\ \beta=\frac52,\ \lambda=\frac52.\end{aligned}$$ The seventh order CDGSK equation corresponds to $$\begin{aligned} \alpha=12,\ \beta=6,\ \lambda=\frac{32}3.\end{aligned}$$ The parameters of the seventh order KK equation can be read from $$\begin{aligned} \alpha=\frac32,\ \beta=\frac34,\ \lambda=\frac16.\end{aligned}$$ If the Schwarz form (7) is simply taken as $$\begin{aligned} F_{SKdV}(p_i)\equiv p_5=0,\end{aligned}$$ then we have $$\begin{aligned} v=\frac12u_1^{-1}u_{1xx}+\frac14u_{1}^{-2}(2u_{1xt}-u_{1x}^2)-\frac12u_1^{-3}u_{1t}u_{1x}.\end{aligned}$$ The evolution
equation of $u_1$ reads $$\begin{aligned} 3u_1u_{1t}u_{1xt}-u_1^2u_{1xtt}+u_1u_{1x}u_{1tt}-3u_{1x}u_{1t}^2=0,\end{aligned}$$ while the related Lax pair reads $$\begin{aligned} && \psi_{xx}-(\frac12u_1^{-1}u_{1xx}+\frac14u_{1}^{-2}(2u_{1xt}-u_{1x}^2)-\frac12u_1^{-3}u_{1t}u_{1x})\psi=0,\\ && \psi_t=(-\frac12u_{1x}+\lambda_1)\psi+u_1\psi_x.\end{aligned}$$ Actually (59) is equivalent to a trivial linearizable Riccati equation $$\begin{aligned} w_t=w^2+f_1(x)\end{aligned}$$ under the transformation $$\begin{aligned} u_1=\exp \left(2\int w{\rm dt}\right)\end{aligned}$$ where $f_1(x)$ is an arbitrary function of $x$. It is worth mentioning again that the well known (1+1)-dimensional ShG model and MDB model are just non-invertible Miura type deformations of the Riccati equation$\cite{Riccati}$. Special extensions in higher dimensions ======================================= From section 2, we know that the key procedure for finding a Lax pair from the general conformal invariant form (7) is to find a suitable Lax form ansatz (like (8) and (9)) and a suitable relation ansatz (like (10)) between the field of the Schwarz form and the spectral function such that all the conformal invariants ($p_i$) become spectral-function-independent variables ($P_i$). To extend this idea to higher dimensions is not at all easy. We hope to solve this problem in future studies. In this section we present some special extensions in higher dimensions with the same Lax pair forms (8) and (9). If all the fields are functions not only of $\{x,\ t\}$ but also of $\{y,\ z,\ ...\}$, then all the formal theory is still valid if the independent conformal invariants of (11) are still restricted to $p_i,\ i=1,...,5$, while the function $F$ of (11) may also include some derivatives and integrations of $p_i$ with respect to the other space variables $y,\ z,\ ...$ etc.
Here we list only two special examples: The concept of breaking soliton equations was first developed in $\cite{BS1}$ and $\cite{BS2}$ by extending the usual constant spectral problem to a non-constant spectral problem. Various interesting properties of the breaking soliton equations have been revealed by many authors. For instance, infinitely many symmetries of some breaking soliton equations are given in $\cite{LiYS, LOUSym}$. In $\cite{strong}$, it is pointed out that every (1+1)-dimensional integrable model can be extended to some higher dimensional breaking soliton equations with the help of its strong symmetries. Yu and Toda $\cite{Yu}$ gave the Schwarz form of the (2+1)-dimensional KdV type breaking soliton equation $$\begin{aligned} F_{2dSKdV}\equiv p_1+\int p_{2y} {\rm dx}=0.\end{aligned}$$ From (11) and (64), we have $$\begin{aligned} u_1=2\int v_y {\rm dx}.\end{aligned}$$ Substituting (65) into (22) and (23), we obtain a Lax pair $$\begin{aligned} && \psi_{xx}=v\psi,\\ && \psi_t=2\int v_y {\rm dx}\psi_x -(v_y-\lambda_1)\psi.\end{aligned}$$ for the (2+1)-dimensional KdV type breaking soliton equation $$\begin{aligned} v_t=-v_{xxy}+4vv_y+2v_x\int v_y {\rm dx}\equiv \Phi v_y,\end{aligned}$$ where $\Phi$ is just the strong symmetry of the (1+1)-dimensional KdV equation. If we make the replacement $$\begin{aligned} p_2\rightarrow \int p_{2y} {\rm dx}\end{aligned}$$ for some of the $p_2$ in all the examples of the last section, then we can obtain some special types of their (2+1)-dimensional extensions. Example 8 is just obtained from the (1+1)-dimensional KdV equation by using the replacement (69).
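This identification can also be checked symbolically. In the following sketch (our addition; it assumes SymPy and introduces an auxiliary function $W$ with $W_x=v_y$ to stand for $\int v_y\,{\rm dx}$), the compatibility condition of (66) and (67) is reduced to the breaking soliton equation (68):

```python
import sympy as sp

x, y, t, lam = sp.symbols('x y t lambda_1')
psi = sp.Function('psi')(x, y, t)
v = sp.Function('v')(x, y, t)
W = sp.Function('W')(x, y, t)     # W stands for int v_y dx, i.e. W_x = v_y

psi_t = 2*W*psi.diff(x) - (v.diff(y) - lam)*psi     # eq. (67)

# compatibility of psi_xx = v*psi (eq. (66)) with eq. (67)
expr = sp.expand((v*psi).diff(t) - psi_t.diff(x, 2))
expr = expr.subs(psi.diff(x, 3), (v*psi).diff(x))
expr = expr.subs(psi.diff(x, 2), v*psi)
expr = expr.subs(psi.diff(t), psi_t)
expr = expr.subs(W.diff(x, 2), v.diff(x, y)).subs(W.diff(x), v.diff(y))

# the residual is psi times eq. (68): v_t + v_xxy - 4 v v_y - 2 v_x W = 0
target = psi*(v.diff(t) + v.diff(x, x, y) - 4*v*v.diff(y) - 2*v.diff(x)*W)
print(sp.simplify(expr - target))  # 0
```

Note that the constant $\lambda_1$ drops out of the compatibility condition.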
A generalization of the fifth order Schwarz equation (41) reads ($b_1+b_2=a_1,\ c_1+c_2+c_3=a_2$), $$\begin{aligned} p_1=b_1p_{2xy}+b_2p_{2xx}+c_1\left(\int p_{2y} {\rm dx} \right)^2+c_2p_2\int p_{2y} {\rm dx}+c_3 p_{2}^2.\end{aligned}$$ From (11) and (70) we know $$\begin{aligned} u_1=-2b_1v_{xy}-2b_2v_{xx}+4c_1\left(\int v_{y} {\rm dx} \right)^2+4c_2v\left(\int v_{y} {\rm dx} \right)+4c_3v^2\end{aligned}$$ and the related Lax pair becomes $$\begin{aligned} && \psi_{xx}=v\psi,\\ && \psi_t=(4c_1(\int v_{y} {\rm dx} )^2+4c_2v(\int v_{y} {\rm dx} )+4c_3v^2-2b_1v_{xy}-2b_2v_{xx})\psi_x\nonumber\\ &&\qquad +(\lambda_1-2v_x(2c_3v+c_2\int v_y{\rm dx}) -2v_y(2c_1\int v_{y} {\rm dx}+c_2v)+b_1v_{xxy}+b_2v_{xxx})\psi,\end{aligned}$$ while the corresponding evolution for the field $v$ is $$\begin{aligned} &&v_t=b_1v_{xxxxy}+b_2v_{xxxxx} +2v_x(4c_2v^2-6c_1v_{xy}-3c_2v_{xx}+8c_1v\int v_y {\rm dx})\nonumber\\ && \qquad-2(c_2v+2b_1v+2c_1\int v_y {\rm dx})v_{xxy}-2(2c_3v+2b_2v+c_2\int v_y {\rm dx})v_{xxx}\nonumber \\ &&\qquad +2(10c_3v^2+2c_1(\int v_y {\rm dx})^2+6c_2v\int v_y {\rm dx}-(b_2+6c_3)v_{xx}-(b_1+3c_2)v_{xy})v_x.\end{aligned}$$ It is obvious that when $y=x$ and/or $v_y=0$, the (2+1)-dimensional fifth order equation (74) reduces back to the (1+1)-dimensional FOKdV equation (45). On spectral parameters ====================== In the last two sections, we have omitted the spectral parameter(s). In order to add the possible spectral parameter(s) to the Lax pairs, we may use the symmetry transformations of the original nonlinear models. In some cases, to find a symmetry transformation such that a nontrivial parameter can be included in the Lax pair (8) and (9) is quite easy.
For instance, it is well known that the KdV equation (28) is invariant under the Galileo transformation $$\begin{aligned} u_1\rightarrow u_1(x+3\lambda t,\ t)+\lambda \equiv u_1(x',\ t)+\lambda .\end{aligned}$$ Substituting (75) into (26) and (27) yields the usual Lax pair of the KdV equation with spectral parameter $\lambda$: $$\begin{aligned} && \psi_{xx}-\frac12(u_1+\lambda)\psi=0,\\ && \psi_t=(u_1-2\lambda)\psi_x-\frac12u_{1x}\psi,\end{aligned}$$ where $x'$ has been rewritten as $x$. However, for some other models, adding the parameters to (8) and (9) is quite difficult. In other words, the spectral parameters may be included in (8) and (9) in very complicated way(s). For instance, for the CDGSK equation ((45) with (47)), we failed to include a nontrivial spectral parameter by using its point Lie symmetries. Nevertheless, if we use the higher order symmetries and/or nonlocal symmetries of the model, we can include some nontrivial parameters in (43) and (44) with (47). For instance, for the CDGSK equation, if $\psi_1$ is a special solution of (43) and (44) with (47), one can prove that $$\begin{aligned} u'=u-6\frac{\lambda(\lambda\psi_1^2-\lambda \psi_{1x} p-6\psi_{1x})}{(\lambda p+6)^2}\end{aligned}$$ with $$\begin{aligned} p_x=\psi_1\end{aligned}$$ is also a solution of the CDGSK equation.
By substituting (78) into (43) and (44), we obtain a second order Lax pair ($P=6+\lambda p$) $$\begin{aligned} && \psi_{xx}=-\left(u-6\frac{\lambda(\lambda\psi_1^2-\lambda \psi_{1x} p-6\psi_{1x})}{(\lambda p+6)^2}\right)\psi,\\ &&\psi_t=\left(\frac{6\lambda(u_x\psi_1)_x}{P}-12\frac{\psi_1\lambda^2(3u\psi_{1x} +2u_x\psi_1)}{P^2}+72\lambda^3\psi_1\frac{u\psi_1^2-\psi_{1x}^2}{P^3}\right.\nonumber\\ &&\left.\qquad -u_{xxx}-ww_x-36\lambda^4\psi_1^3\frac{2\lambda\psi_1^2-5\psi_{1x}P}{P^5} \right)\psi\nonumber\\ &&\qquad +\left(36\lambda^2u\frac{\psi_1^2}{P^2}+2u_{xx}+u^2 -12\lambda\psi_{1x}\frac{u_x}P-36\lambda^4\frac{\psi_1^4}{P^4} +72\lambda^3\psi_1^2\frac{\psi_{1x}}{P^3} \right)\psi_x\end{aligned}$$ for the CDGSK model with $\psi_1$ being a solution of (43) and (44). Summary and discussions ======================= In summary, every (1+1)-dimensional equation which has a Schwarz variant may possess a second order Lax pair. In this paper, we prove this conclusion when the Schwarz form is an arbitrary function of the five conformal invariants and their derivatives and integrations of any order. Usually, the Lax operators for various integrable models (except for the KdV hierarchy) are taken as higher order operators. Though the order of the Lax pair operators for some models has been lowered here, the spectral parameter has disappeared. In order to recover some types of nontrivial spectral parameters, we have to use the symmetries of the original nonlinear equations, and the spectral parameter(s) then appear in the second order Lax operator in some complicated ways. How to obtain other integrability properties from the Lax pairs listed here, for general or special models, is worthy of further study, though the spectral parameters have not yet been included explicitly. One may obtain many interesting properties of some special models from the Lax pairs without spectral parameters$^{\cite{sym, Fukuyama}}$.
For instance, infinitely many nonlocal symmetries of the KdV equation, HD equation, CDGSK equation and the KK equation can be obtained from the spectral parameter independent Lax pairs$^{\cite{sym}}$. The conclusion for the general (1+1)-dimensional Schwarz equations can also be extended to some special types of (2+1)-dimensional models, like the breaking soliton equations. However, how to extend the method and the conclusions to general (2+1)-dimensions or even higher dimensions is still open. The work was supported by the Outstanding Youth Foundation and the National Natural Science Foundation of China (Grant No. 19925522), the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 2000024832) and the Natural Science Foundation of Zhejiang Province, China. The authors are indebted to Prof. G-x Huang and Drs. S-l Zhang, C-l Chen and B. Wu for helpful discussions. [99]{} J. Weiss, M. Tabor, and G. Carnevale, J. Math. Phys. 24 (1983) 522. M. C. Nucci, J. Phys. A: Math. Gen. 22 (1989) 2897. S-y Lou, J. Phys. A: Math. Gen. 30 (1997) 4803. S-y Lou, J. Phys. A: Math. Gen. 30 (1997) 7259; S-y Lou, J. Yu and X-y Tang, Z. Naturforsch. A 55 (2000) 867. S-y Lou, J. Math. Phys. 39 (1998) 2112; S-y Lou, Sci. China A 34 (1997) 1317. S-y Lou, Phys. Rev. Lett. 80 (1998) 5027; S-y Lou and J-j Xu, J. Math. Phys. 39 (1998) 5364. R. Conte, Phys. Lett. A 140 (1989) 383. H-y Ruan, S-y Lou and Y-x Chen, J. Phys. A: Math. Gen. 32 (1999) 2719. O. I. Bogoyavlenskii, Usp. Mat. Nauk. 45 (1990) 17; Izv. Akad. Nauk. SSSR Ser. Mat. 53 (1989) 234; 907. F. Calogero, Nuovo Cimento B 32 (1976) 201. Y-s Li and Y-j Zhang, J. Phys. A: Math. Gen. 26 (1993) 7487. S-y Lou, J. Phys. A: Math. Gen. 28 (1995) 5493. S-y Lou, Commun. Theor. Phys. 28 (1997) 41. K. Toda and S. J. Yu, J. Math. Phys. 41 (2000) 4747-4751. S-y Lou, J. Math. Phys. 35 (1994) 2336; 2390; Phys. Scripta 54 (1996) 428. T. Fukuyama, K. Kamimura and K.
Toda, Preprint (2001) (nlin.SI/0108043). [^1]: Email: [email protected]
--- abstract: | It is well known and not difficult to prove that if $C\subseteq{\mathbb{Z}}$ has positive upper Banach density, the set of differences $C-C$ is syndetic, i.e. the length of gaps is uniformly bounded. More surprisingly, Renling Jin showed that whenever $A$ and $B$ have positive upper Banach density, then $A-B$ is *piecewise syndetic*. Jin’s result follows trivially from the first statement provided that $B$ has large intersection with a shifted copy $A-n$ of $A$. Of course this will not happen in general if we consider shifts by integers, but the idea can be put to work if we allow “shifts by ultrafilters”. As a consequence we obtain Jin’s Theorem. address: | Fakultät für Mathematik, Universität Wien\ Nordbergstraße 15\ 1090 Wien, Austria author: - Mathias Beiglböck title: 'An ultrafilter approach to Jin’s Theorem' --- The *upper Banach density* of $C\subseteq {\mathbb{Z}}$ is given by $d^*(C):= \overline\lim_{m-n\to \infty} \frac1{m-n+1}|C\cap \{n,\ldots, m\}|.$ A set $S\subseteq {\mathbb{Z}}$ is syndetic if the gaps of $S$ are of uniformly bounded length, i.e. if there exists some $k>0$ such that $S-\{-k, \ldots, k\}={\mathbb{Z}}$. A set $P\subseteq {\mathbb{Z}}$ is piecewise syndetic if it is syndetic on large pieces, i.e. if there exists some $k\geq 0$ such that $P-\{-k, \ldots, k\}$ contains arbitrarily long intervals of integers. It was first noted by Følner ([@Foln54a; @Foln54b]) that $C-C=\{c_1-c_2:c_1, c_2\in C\}$ is syndetic provided that $d^*(C)>0$. (To see this, pick a subset $\{i_1, \ldots, i_m\}\subseteq {\mathbb{Z}}$ which is *maximal* subject to the condition that $C-i_1,\ldots, C-i_m$ are mutually disjoint. This is possible since disjointness implies $d^*(C-i_1\cup\ldots\cup C-i_m)=m \cdot d^*(C)$, therefore $m$ is at most $1/d^*(C)$. But then maximality implies that for each $n\in {\mathbb{Z}}$ there is some $i_k$, $k\in \{1,\ldots, m\}$ such that $(C-n)\cap(C-i_k)\neq \emptyset$, resp. $n\in (C-C)+i_k$.
Thus $\bigcup_{k=1}^m (C-C) +i_k={\mathbb{Z}}$.) Simple counterexamples yield that the analogous statement fails when two different sets are considered, but Renling Jin discovered the following interesting result. \[IntegerJin\] Let $A,B\subseteq {\mathbb{Z}}, d^*(A), d^*(B)>0$. Then $A-B$ is piecewise syndetic. In Section 1 we reprove Jin’s Theorem by reducing it to the $C-C$ case. Subsequently we discuss some modifications of our argument which allow us to recover the refinements resp. generalizations of Jin’s result found in [@JiKe03; @BeFW06; @Jin08; @BeBF09]. 1    Ultrafilter proof of Jin’s Theorem {#ultrafilter-proof-of-jins-theorem .unnumbered} ======================================= As indicated in the abstract, we aim to show that one can shift a given large set $A\subseteq {\mathbb{Z}}$ by an ultrafilter so that it will have large intersection with another, previously specified set. We motivate the definition of this ultrafilter-shift by means of analogy: For $n\in{\mathbb{Z}}$ denote by $e(n)$ the principal ultrafilter on ${\mathbb{Z}}$ which corresponds to $n$ and notice that $$A-n=\{k\in {\mathbb{Z}}: n\in A-k \}= \{k\in {\mathbb{Z}}: A-k \in e(n) \}.$$ Given an ultrafilter $p$ on ${\mathbb{Z}}$, we thus define $\{k\in {\mathbb{Z}}: A-k \in p\}$ as the official meaning of “$A-p$”. \[ShiftT\] Let $A,B\subseteq {\mathbb{Z}}$. Then there exists an ultrafilter $p$ on $ {\mathbb{Z}}$ such that $$d^*\Big((A-p) \cap B \Big)\ =\ d^*\Big(\{k:A-k\in p\}\cap B\Big)\ \geq\ d^*(A)\cdot d^*(B).$$ The proof of Lemma \[ShiftT\] requires some preliminaries. For a fixed set $A\subseteq {\mathbb{Z}}$ we can choose an invariant mean, i.e. a shift invariant finitely additive probability measure $\mu$ on $({\mathbb{Z}}, {\mathcal{P}}({\mathbb{Z}}))$ such that $d^*(A)=\mu(A)$. This is well known among aficionados (see for instance [@Berg06 Theorem 5.8]) and not difficult to prove.
First pick a sequence of finite intervals $I_n\subseteq {\mathbb{Z}}$ such that $|I_n|\to \infty$ and $d^*(A)= \lim_{n\to \infty} \mu_n(A)$ where $\mu_n(B):= \frac1{|I_n|}|B\cap I_n|$ for $B\in {\mathcal{P}}({\mathbb{Z}})$. Then let $\mu$ be a cluster point of the set $\{\mu_n: n\in{\mathbb{N}}\} $ in the (compact) product topology of $[0,1]^{{\mathcal{P}}({\mathbb{Z}})}$. We consider the Stone-Čech compactification $\beta {\mathbb{Z}}$ of the discrete space ${\mathbb{Z}}$. For our purpose it is convenient to view $\beta {\mathbb{Z}}$ as the set of all ultrafilters on ${\mathbb{Z}}$. By identifying integers with principal ultrafilters, ${\mathbb{Z}}$ is naturally embedded in $\beta {\mathbb{Z}}$. A clopen basis for the topology is given by the sets $\overline C:= \{p\in\beta {\mathbb{Z}}: C\in p\}$, where $C\subseteq {\mathbb{Z}}$. A mean $\mu$ on ${\mathbb{Z}}$ gives rise to a positive linear functional $\Lambda$ on the space $B({\mathbb{Z}})$ of bounded functions on ${\mathbb{Z}}$. Making the identification $B({\mathbb{Z}})\cong C(\beta{\mathbb{Z}})$ we find that, by the Riesz representation Theorem, there exists a regular Borel probability measure $\tilde\mu$ on $\beta {\mathbb{Z}}$ which corresponds to the mean $\mu$ in the sense that $\mu(A)=\tilde \mu\big(\overline A\big)$ for all $A\subseteq {\mathbb{Z}}$. (This procedure is carried out in detail for instance in [@Pate88 p 11].) Pick a sequence of intervals $I_n\subseteq {\mathbb{Z}},|I_n|\uparrow \infty $ such that $d^*(B)=\lim_{n}\frac {|I_n\cap B|}{|I_n|}$. Pick an invariant mean $\mu$ such that $\mu (A)=d^*(A)$.
Define $f_n:\beta {\mathbb{Z}}\to [0,1]$ by $$f_n(p):=\frac1{|I_n|} \sum_{k\in I_n\cap B} {\mathbbm{1}}_{\overline {A-k}}(p)=\frac {|I_n\cap B \cap \{k:A-k\in p\}|}{|I_n|}$$ and set $f(p):= \overline \lim_n f_n(p)\leq d^*(B\cap \{k:A-k\in p\}).$ By Fatou’s Lemma$$\int f\, d\tilde \mu\geq \overline{\lim_{n\to \infty}} \int \frac1{|I_n|} \sum_{k\in I_n\cap B} {\mathbbm{1}}_{\overline {A-k}}\, d\tilde \mu= \overline{\lim_{n\to \infty}} \frac1{|I_n|} \sum_{k\in I_n\cap B} \mu( {A-k}) = d^*(A) \cdot d^*(B),$$ thus there exists $p\in \beta {\mathbb{Z}}$ such that $d^*(A)\cdot d^*(B)\leq f(p).$ The above application of Fatou’s Lemma is inspired by the proof of [@Berg85 Theorem 1.1]. Assume that $d^*(A), d^*(B)>0$. According to Lemma \[ShiftT\], pick an ultrafilter $p$ such that $C:=(A-p)\cap B$ has positive upper Banach density. Then $S:=(A-p)-B\supseteq C-C$ is syndetic. Also $s\in (A-p)-B \ \Longrightarrow \ A-B-s\in p$. Thus for each finite set $\{s_1, \ldots, s_n\}\subseteq (A-p)-B$, we have $\bigcap_{i=1}^n A-B- s_i\in p$. In particular this intersection is non-empty, hence there exists $t\in {\mathbb{Z}}$ such that $t+\{s_1, \ldots, s_n\}\subseteq A-B$. Summing up, we find that $A-B$ is piecewise syndetic since it contains shifted copies of all finite subsets of the syndetic set $(A-p)-B$. 2   Jin’s Theorem in countable amenable (semi-) groups {#jins-theorem-in-countable-amenable-semi--groups .unnumbered} ====================================================== Following Jin’s original work, it was shown in [@JiKe03] that Theorem \[IntegerJin\] is valid in a certain class of abelian groups (including in particular ${\mathbb{Z}}^d$) and in [@Jin08] that it holds in $\oplus_{i=1}^\infty {\mathbb{Z}}$. Answering a question posed in [@JiKe03] it is proved in [@BeBF09 Theorem 2] that Jin’s theorem extends to all countable groups in which the notion of upper Banach density can naturally be formulated, that is, to all countable amenable groups. 
There is overwhelming evidence (see in particular [@HiSt98]) that whenever ultrafilters can be used to prove a certain combinatorial statement, then this ultrafilter proof will automatically work in a quite abstract setup. In this spirit the approach of Section 1 effortlessly yields the just mentioned strengthenings of Jin’s Theorem; in fact it is not even necessary to restrict the setting to groups. If a semigroup $(S,\cdot)$ admits left- and right Følner sequences[^1], the notions $d^*_L,d^*_R$ of left- resp. right upper Banach density are defined analogously to upper Banach density in ${\mathbb{Z}}$, but with left- resp. right Følner sequences taking the role of intervals $\{n,\ldots, m\}$. Arguing precisely as in Section 1 we then get that for all $A,B\subseteq S$ there is an ultrafilter $p$ on $S$ such that $d_L^*\Big(B\cap (p{^{-1}}A)\Big)\ =\ d_L^*\Big(B\cap \{s:As{^{-1}}\in p\}\Big)\ \geq\ d_R^*(A)\cdot d_L^*(B).$ Consequently we have: \[SemigroupJin\] Let $(S,\cdot)$ be a semigroup which admits left- and right Følner sequences and let $A,B\subseteq S$, $d_R^*(A), d_L^*(B)>0$. Then $AB{^{-1}}=\{s\in S:\exists b\in B\ sb\in A\}$ is (right) piecewise syndetic. A subset $P$ of a semigroup $(S,\cdot)$ is *(right) piecewise syndetic* if there exists a finite set $K\subseteq S$ such that for each finite set $F\subseteq S$ there is some $t\in S$ such that $tF\subseteq PK{^{-1}}$. If $(S,\cdot) $ is an amenable group, Theorem \[SemigroupJin\] is equivalent to [@BeBF09 Theorem 2] which asserts that in this setup $AB$ is (right) piecewise syndetic provided that $d^*_R(A),d^*_R(B)>0$. 3   Connections with Bohr sets {#connections-with-bohr-sets .unnumbered} ============================== So far, our results about the structure of $A-B$ originated from the fact that for $d^*(C)>0$ the set $C-C$ is syndetic.
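As a purely numerical illustration of this fact (our addition, not part of any proof; plain Python), take $C=\{n: n \bmod 5 \in\{0,2\}\}$, so that $d^*(C)=2/5>0$; its difference set is $\{n: n\bmod 5\in\{0,2,3\}\}$ and hence has gaps of length at most $2$:

```python
# C has upper Banach density 2/5; C - C should therefore be syndetic.
N = 200
C = [n for n in range(-N, N) if n % 5 in (0, 2)]
diffs = sorted({c1 - c2 for c1 in C for c2 in C if abs(c1 - c2) <= N // 2})
gaps = [b - a for a, b in zip(diffs, diffs[1:])]
print(max(gaps))  # 2
```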
Følner ([@Foln54a; @Foln54b]) proved a much stronger assertion, namely that $C-C$ is “almost” a *Bohr$_0$ set.* Bohr$_0$ sets are the neighborhoods of $0$ in the Bohr topology. (Equivalently, $U\subseteq {\mathbb{Z}}$ is a [Bohr$_0$ set]{} iff there exist $\alpha_1, \ldots, \alpha_n\in {\mathbb{T}}= {\mathbb{R}}/{\mathbb{Z}}$ and ${\varepsilon}>0$ such that $\{k\in {\mathbb{Z}}: \|k\alpha_1\|, \ldots,\|k\alpha_n\|< {\varepsilon}\}\subseteq U$.) A *Bohr set* is a translate of a Bohr$_0$ set. Every Bohr set is syndetic, but the converse fails badly. Følner showed that if $d^*(C)>0$, then there exist a Bohr$_0$ set $U$ and a set $N\subseteq {\mathbb{Z}}$ with $d^*(N)=0$ such that $C-C\supseteq U\setminus N. $ In analogy to piecewise syndetic sets, Bergelson, Furstenberg and Weiss introduced piecewise Bohr sets. A set $P$ is piecewise Bohr if it is Bohr on arbitrarily large intervals, that is, if there exist a Bohr set $U$ and a sequence of intervals $I_n\subseteq {\mathbb{Z}}, |I_n|\uparrow \infty$ such that $P\supseteq U\cap \bigcup_{n=1}^\infty I_n$. By Følner’s Theorem, $d^*(C)>0$ trivially implies that $C-C$ is piecewise Bohr. We reprove the following refinement of Jin’s Theorem obtained in [@BeFW06]. \[BohrJin\] Let $A,B\subseteq {\mathbb{Z}}, d^*(A), d^*(B)>0$. Then $A-B$ is piecewise Bohr. As in the case of piecewise syndetic sets one readily shows that if a set $P$ contains a translate of every finite subset of a piecewise Bohr set, then $P$ is piecewise Bohr itself. In Section 1 we have seen that if $d^*(A), d^*(B)>0$, then $A-B$ contains shifts of all finite pieces of a set $C-C, d^*(C)>0$. By Følner’s Theorem the latter set is piecewise Bohr, hence $A-B$ is piecewise Bohr as well. In the spirit of Section 2 it is possible to proceed along these lines in an abstract countable amenable group. This then leads to the amenable analog of Theorem \[BohrJin\] derived in [@BeBF09 Theorem 3].
It is natural to ask whether one can make assertions about the combinatorial richness of the set $A-B$ beyond the fact that it is piecewise Bohr. In a certain sense this is not possible. For every piecewise Bohr set $P\subseteq {\mathbb{Z}}$, there exist sets $A,B\subseteq {\mathbb{Z}}$ with $d^*(A), d^*(B)>0$ such that $A-B\subseteq P$ ([@BeBF09 Theorem 4]). [BFW06]{} Mathias Beiglb[ö]{}ck, Vitaly Bergelson, and Alexander Fish. Sumset phenomenon in countable amenable groups. Vitaly Bergelson. Sets of recurrence of [${\bf Z}\sp m$]{}-actions and properties of sets of differences in [${\bf Z}\sp m$]{}. , 31(2):295–304, 1985. Vitaly Bergelson. Combinatorial and [D]{}iophantine applications of ergodic theory. In [*Handbook of dynamical systems. [V]{}ol. 1[B]{}*]{}, pages 745–869. Elsevier B. V., Amsterdam, 2006. Appendix A by A. Leibman and Appendix B by Anthony Quas and M[á]{}t[é]{} Wierdl. Vitaly Bergelson, Hillel Furstenberg, and Benjamin Weiss. Piecewise-[B]{}ohr sets of integers and combinatorial number theory. In [*Topics in discrete mathematics*]{}, volume 26 of [*Algorithms Combin.*]{}, pages 13–37. Springer, Berlin, 2006. Erling F[ø]{}lner. Generalization of a theorem of [B]{}ogolioùboff to topological abelian groups. [W]{}ith an appendix on [B]{}anach mean values in non-abelian groups. , 2:5–18, 1954. Erling F[ø]{}lner. Note on a generalization of a theorem of [B]{}ogolioùboff. , 2:224–226, 1954. Neil Hindman and Dona Strauss. , volume 27 of [*de Gruyter Expositions in Mathematics*]{}. Walter de Gruyter & Co., Berlin, 1998. Theory and applications. Renling Jin. The sumset phenomenon. , 130(3):855–861, 2002. Renling Jin. , 2008. Renling Jin and H. Jerome Keisler. Abelian groups with layered tiles and the sumset phenomenon. , 355(1):79–97, 2003. Alan L. T. Paterson. , volume 29 of [*Mathematical Surveys and Monographs*]{}. American Mathematical Society, Providence, RI, 1988.
[^1]: A sequence $(F_n)_{n\in{\mathbb{N}}}$ of finite sets in a semigroup $S$ is a left/right Følner sequence if $\lim_{n\to \infty} |(sF_n)\Delta F_n|/|F_n|=0$ resp. $\lim_{n\to \infty} |(F_ns)\Delta F_n|/|F_n|=0$ for all $s\in S$. The existence of Følner sequences is sometimes used to define amenability of a countable group/semigroup. We note that all abelian semigroups and all solvable groups fall in this class. See for instance [@Pate88].
--- abstract: 'We present a theoretical model of split-gate quantum wires that are fabricated from GaAs-AlGaAs heterostructures. The model is built on the physical properties of donors and of semiconductor surfaces, and considerations of equilibrium in such systems. Based on the features of this model, we have studied different ionization regimes of quantum wires, provided a method to evaluate the shallow donor density, and calculated the depletion and pinchoff voltages of quantum wires both before and after illumination. A real split-gate quantum wire has been taken as an example for the calculations, and the results calculated for it agree well with experimental measurements. This paper provides an analytic approach for obtaining much useful information about quantum wires, as well as a general theoretical tool for other gated nanostructure systems.' author: - Yinlong Sun and George Kirczenow - 'Andrew S. Sachrajda and Yan Feng' --- An Electrostatic Model of Split-Gate Quantum Wires Department of Physics, Simon Fraser University\ Burnaby, British Columbia, Canada V5A 1S6 Institute of Microstructural Sciences, National Research Council\ Ottawa, Ontario, Canada K1A 0R6 Introduction {#sec:introduction} ============ Modern material-growing techniques such as molecular beam epitaxy and organo-metallic chemical vapour deposition make it possible to fabricate extremely clean semiconductor heterostructures. [@Chang] In a modulation-doped [@Dingle] GaAs-${\rm Al_{x}Ga_{1-x}As}$ heterostructure, a two-dimensional electron gas (2DEG) is present at the interface of the ${\rm Al_{x}Ga_{1-x}As}$ and GaAs layers. [@Harris] This 2DEG can be further confined laterally by various confining techniques such as electron-beam lithography [@Wheeler], ion-beam exposure [@Scherer; @Hirayama; @Wieck], or etching [@Skocpol; @Kirtley; @van; @Houten], forming a quasi-one-dimensional system usually called a quantum wire.
At present, one widely used confinement method is the split-gate technique [@Thornton; @Zhang]. In a split-gate quantum wire, when a sufficiently negative voltage is applied to the metallic gates, electrons are completely depleted from under the gates, leaving a central channel of electrons undepleted. By making the gate voltage more negative, the density of electrons in the channel is decreased continuously until the channel pinches off. Such quantum wires display unique and fascinating properties which have stimulated many theoretical and experimental studies of their physics. [@Ulloa] Because of the sophisticated gating technique and the flexibility of changing the density of electrons by varying the gate voltage, split-gate quantum wires hold great potential for practical applications [@Sheard; @Khuraba]. Considerable progress has been made in developing an understanding of the electronic structure of quantum wires theoretically, based on the results of computer simulations and analytic work. [@Lai; @Laux86; @Laux88; @Davies; @Nixon; @Nakamura; @Ravaioli; @Sun93] However, for many systems, particularly those with an exposed semiconductor surface between the split metallic gates, the current understanding is not complete. For example, it has not been possible to predict accurately the pinchoff voltage of a quantum wire, given the knowledge of the geometric and doping parameters and the history of a given sample. The pinchoff voltage is the gate voltage at which conduction through the wire ceases. It is a quantity of considerable practical importance for these devices. In this paper, we present a study that addresses such issues. In Section \[sec:model description\] we describe the basic physical features of gated quantum wires that are included in our model.
In Sections \[sec:regimes\] and \[sec:determination\] we point out that three qualitatively different ionization regimes can exist in the doped layer that supplies electrons to the quantum wire, and show how the ionization regime that a particular sample is in can be identified. We also show how the shallow donor density in the doped layer may be calculated. In Section \[sec:depletion\] we describe the calculation within our model of the depletion voltage for the electron gas under the gates. In Section \[sec:pinchoff\] we calculate the pinchoff voltage of the quantum wire. This calculation uses a Green’s function method, which is an extension of the previous theoretical work of Davies [@Davies] but treats the effects of the charges at the exposed semiconductor surface more accurately. In Section \[sec:discussion\], we take a well-characterized real sample as an example for calculations, and find good quantitative agreement between the calculated and measured depletion and pinchoff voltages both before and after illumination. Model and Formalism {#sec:model} =================== We consider an infinitely long split-gate quantum wire whose cross section is shown in FIG. \[fig:crossection\]. The layers from the top are the GaAs cap, the Si-doped ${\rm Al_{x}Ga_{1-x}As}$, the undoped ${\rm Al_{x}Ga_{1-x}As}$ spacer, and the GaAs channel; their thicknesses are $l_{c}$, $l_{d}$, $l_{s}$, and $l_{ch}$, respectively. On top of the GaAs cap are two metallic gates with a spatial separation $2w$. The coordinate frame is chosen in such a way that the exposed surface of the GaAs cap is the $z=0$ plane, and the lateral direction is along the x-axis. In such quantum wires, electrons donated by the Si donors in the doped ${\rm Al_{x}Ga_{1-x}As}$ layer transfer to the $z=0$ plane to fill the surface or interface states, and to the $z=L=l_{c}+l_{d}+l_{s}$ plane to form the 2DEG.
This transfer of electrons leaves a positive spatial charge in the doped layer and thus causes the conduction band to bend within the heterostructure. One possible case of the band bending is shown in FIG. \[fig:regime C\]. The curve depicts the bottom of the conduction band along the z-axis. Within the cap and the spacer layers, the curve is linear because there is no spatial charge in these layers. In the doped layer, however, the bottom of the conduction band is curved because of the presence of the spatial charge. The curve is parabolic if the spatial charge density is uniform. $E_{off}$ is the conduction band offset which occurs at the two interfaces between the GaAs and ${\rm Al_{x}Ga_{1-x}As}$ layers. The whole system shown is in equilibrium, that is, the system has a uniform Fermi energy. (This may change when a voltage is applied between the gates and 2DEG, as is discussed below.) Note that, in the situation shown in FIG. \[fig:regime C\], there is an unionized region in the doped layer where the conduction band is flat because donors are not ionized there. A detailed discussion of the features of our model now follows. Model Description {#sec:model description} ----------------- Our theoretical model of split-gate quantum wires has four key features. [**Feature 1) The Si donors are uniformly distributed in the ${\bf Al_{x}Ga_{1-x}As}$ doped layer and divided into two categories: the shallow levels and the deep levels. We assume that electrons are donated only by the ionized shallow donors whose bound energy levels are above the Fermi level. Deep donors can be ionized by illumination.**]{} It is well-known that the electronic state associated with a shallow donor in ${\rm Al_{x}Ga_{1-x}As}$ has the hydrogenic form and can be handled with the effective mass theory [@Kohn].
Neglecting central cell effects, the binding energy of a shallow donor is $ E_{s} = m^{*} e^{4}/2(4\pi \varepsilon \varepsilon_{0} \hbar )^{2}= m^{*} / \varepsilon^{2} {\rm (Ryd)} $, where $m^{*}$ is the effective mass of the electron and $\varepsilon$ is the dielectric constant. In ${\rm Al_{x}Ga_{1-x}As}$, the $\Gamma$ valley of the conduction band is the lowest one when $x<0.45$. In quantum wires, $x$ is usually in this regime. At the minimum point of the $\Gamma$ valley, $m^{*}=0.067\ m_{e}$ and $E_{s} \approx 6$ meV correspondingly. Such a binding energy of shallow donors has been verified by various measurements [@Lifshitz; @Ishikawa]. Because $E_{s}$ is much less than other relevant parameters such as the Schottky barrier and the conduction band offset, we consider $E_{s}$ to be negligibly small. In the doped ${\rm Al_{x}Ga_{1-x}As}$ layer, when $x>0.2$, the ground state of a Si donor is the deep level instead of the shallow level. [@Chand] It is now generally accepted that the deep level is associated with a local lattice distortion which is usually called a DX center [@Lang92; @Malloy]. During illumination, a deep donor may absorb a photon and thus ionize. At low temperatures, however, a shallow donor cannot change into a deep donor spontaneously because of the energy barrier associated with the lattice distortion. This argument is supported by many studies such as persistent-photoconductivity experiments [@Lang77]. Accordingly, we have $$N_{total} = N_{s} +N_{d}, \label{donor sum}$$ where $N_{total}$, $N_{s}$, and $N_{d}$ are the total, shallow, and deep donor concentrations, respectively. For a quantum wire, $N_{total}$ can be obtained from the fabrication parameters but $N_{s}$ and $N_{d}$ are undetermined experimentally. This has made it difficult to analyze quantum wires theoretically because $N_{s}$ determines the number of donated electrons and the spatial charge density. However, we will describe a method to calculate $N_{s}$ within our model.
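As a quick consistency check, the hydrogenic estimate $E_{s} = (m^{*}/m_{e})/\varepsilon^{2}$ Ryd quoted above can be evaluated directly. The sketch below (Python) uses $m^{*}=0.067\ m_{e}$ from the text together with an assumed GaAs-like dielectric constant $\varepsilon \approx 12.9$, which the text does not quote explicitly:

```python
# Hydrogenic shallow-donor binding energy, E_s = (m*/m_e) / eps^2 Ryd,
# with central-cell effects neglected (the expression in the text).
RYDBERG_MEV = 13606.0     # hydrogen Rydberg in meV

def shallow_donor_binding_energy(m_star_ratio, eps_r):
    """E_s in meV for effective mass m*/m_e and dielectric constant eps_r."""
    return m_star_ratio / eps_r**2 * RYDBERG_MEV

# m* = 0.067 m_e from the text; eps_r = 12.9 is an assumed GaAs-like value.
E_s = shallow_donor_binding_energy(0.067, 12.9)
print(f"E_s ~ {E_s:.1f} meV")
```

With these inputs the estimate lands near the $E_{s} \approx 6$ meV value cited from experiment.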
[**Feature 2) The Schottky barrier between the metallic gates and the GaAs cap is determined by the type of the gate metal and the type of GaAs interface, and is independent of the gate voltage. The surface states of the exposed GaAs surface are pinned at a single energy level within the forbidden band gap of GaAs. The surface states are localized and surface electrons have a low mobility.**]{} The Schottky barrier of a metal-semiconductor contact refers to the energy difference between the conduction band minima of the semiconductor at the interface and the Fermi level of electrons in the metal. It is generally believed [@Monch] that Schottky barriers are associated with the metal-induced gap states which depend only on the type of the contact metal and the type of the semiconductor interface. This means that, in quantum wires, the Schottky barrier between the gates and the GaAs cap is independent of the gate voltage. For (100) and (110) interfaces of GaAs, the Schottky barriers for many metals have been measured [@Waldrop; @Newman; @McLean]. The surface states of GaAs are associated with the dangling bonds at the exposed surface. The physics of surface states is complicated and there has been no generally accepted model yet. [@Monch] However, experiments show that the surface states of GaAs are pinned at a single energy value within the forbidden band gap as long as the surface is covered by a fraction of an adatom monolayer. [@Spicer] For example, the surface states of the n-type GaAs (100) surface are pinned at about 0.8 eV below the bulk conduction band minima. [@Chiang] Some calculations [@Potz; @Beres] also show that the surface states are very localized, which means that the surface electrons have a very low mobility. The Schottky barrier of the exposed surface refers to the energy difference between the conduction band minima and the pinned surface level (see FIG. \[fig:regime C\]). 
In the following discussion, we use $\Phi_{sb}$ for the Schottky barrier of the exposed surface and $\Phi_{sb}'$ for the Schottky barrier of the metal-GaAs contact. [**Feature 3) The energy barrier due to the spacer layer is small. Therefore we assume that the electrons on either side of it are always in equilibrium with each other. The energy barrier that separates the surface electrons is so high that tunneling of electrons through it can be neglected. Therefore we assume that the total number of surface electrons is conserved when the gate voltage varies.**]{} This feature can be justified by noting that the tunneling current of electrons through an energy barrier is proportional to the tunneling probability of an electron through the barrier. In the WKB approximation, the tunneling probability of an electron at the Fermi level is $$T=\exp [-2 \int_{\rm (barrier)} dz \sqrt{2m^{*}(E_{c}(z)-E_{F}) \over \hbar^{2}}],$$ where $E_{c}(z)$ is the conduction band minimum (refer to FIG. \[fig:regime C\]). Because the energy barrier due to the spacer in a typical quantum wire is small ($l_{s}=20$ nm and $E_{off}=0.2$ eV typically), the corresponding tunneling current is so large that it keeps electrons on both sides in equilibrium no matter how the gate voltage changes. On the other hand, the barrier at the exposed surface is very high ($l_{c}=10$ nm, $l_{d}=40$ nm, and $\Phi_{sb}=0.8$ eV typically), therefore the corresponding tunneling current is so small that the surface electrons are isolated and the total number of the surface electrons is conserved even when the gate voltage changes. [**Feature 4) We assume that, after a quantum wire has been fabricated and no gate voltage is applied, the surface electrons share the same Fermi energy with the 2DEG.
We assume that this equilibrium also holds after the quantum wire undergoes illumination at the zero gate voltage.**]{} This assumption is based on the consideration that the high-temperature ($T\sim$ 500 K) fabrication process provides the conditions necessary for the whole system to reach equilibrium. That is, the surface electrons share the same Fermi energy with the rest of the system. After the quantum wire is illuminated, the surface electrons do not necessarily stay in equilibrium with the others. However, by assuming the equilibrium of the whole system after illumination, we have a starting point for calculating the effect that illumination has on the quantum wires. Moreover, we speculate that the real situation of quantum wires after an illumination by photons with energies larger than $\Phi_{sb}$ is not too far from an equilibrium state, and therefore the evaluated results should provide useful information. Based on the four features presented above, we are able to set up the electrostatic formalism for any quantum wire system and make predictions. However, we first need to determine the shallow donor density $N_{s}$, which is not directly known from the sample fabrication conditions or from experimental measurements. We find that $N_{s}$ can be determined from $n_{0}$, the 2DEG density at the zero gate voltage, which can be obtained by extrapolating the densities measured from edge state backscattering experiments [@Haug; @Washburn] to zero gate voltage. However, the relation between $N_{s}$ and $n_{0}$ depends on the ionization regime of shallow donors in the doped ${\rm Al_{x}Ga_{1-x}As}$ layer. Therefore, we need to analyze the ionization regimes of the doped layer at zero gate voltage.
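Before doing so, the barrier contrast invoked in Feature 3 can be made quantitative with the WKB expression above. The sketch below approximates both barriers as rectangular, using the ‘typical’ numbers quoted in the text ($E_{off}=0.2$ eV over the 20 nm spacer; $\Phi_{sb}=0.8$ eV over the $l_{c}+l_{d}=50$ nm surface barrier); the actual band bending is neglected, so these are order-of-magnitude estimates only, not the model’s prediction:

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # 1 eV in joules

def wkb_transmission(height_eV, width_m, m_star_ratio=0.067):
    """WKB tunneling probability through a rectangular barrier of the given
    height (above E_F, in eV) and thickness: T = exp(-2 * kappa * width)."""
    kappa = math.sqrt(2.0 * m_star_ratio * M_E * height_eV * EV) / HBAR
    return math.exp(-2.0 * kappa * width_m)

# 'Typical' numbers from the text, treated as rectangular barriers:
T_spacer = wkb_transmission(0.2, 20e-9)    # spacer: E_off = 0.2 eV, 20 nm
T_surface = wkb_transmission(0.8, 50e-9)   # surface: Phi_sb = 0.8 eV, 50 nm
print(f"T_spacer  ~ {T_spacer:.1e}")
print(f"T_surface ~ {T_surface:.1e}")
```

The spacer transmission exceeds the surface one by tens of orders of magnitude, which is the quantitative content of Feature 3: the 2DEG equilibrates across the spacer while the surface electron number is effectively conserved.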
Ionization Regimes {#sec:regimes} ------------------ The ionization regimes here refer to the spatial arrangement of the ionized shallow donors in the doped layer, and to the way that the donated electrons are distributed between the 2DEG and the surface states, at zero gate voltage. For the quantum wire shown in FIG. \[fig:crossection\], there are three ionization regimes. In ionization regime A, the band bending is shown in FIG. \[fig:regimes AB\]a. In this regime, all of the shallow donors in the doped layer are ionized and no 2DEG is present. Because the bottom of the conduction band in the GaAs channel layer is higher than the surface (interface) levels, all donated electrons transfer to the $z=0$ plane to fill the surface (interface) states. The electrons accumulated at the $z=0$ plane in effect form a ‘capacitor’ with the positively ionized donors in the doped layer. Thus the conduction band in the GaAs channel layer is not affected by the transfer of electrons and remains flat. The ionization of the doped layer falls into this regime when the shallow donor density is very low. In ionization regime B, the band bending is shown in FIG. \[fig:regimes AB\]b. In this regime, all the shallow donors in the doped layer are ionized and a 2DEG is formed at the $z=L$ plane. Note that the curved conduction band within the doped layer has a minimum point M which divides the whole doped layer into two parts, with thicknesses $l_{1}$ and $l_{2}$, respectively. Because the electric field at M is zero, one may consider all of the donated electrons from the region to the left of M to transfer to the $z=0$ plane and thus form a ‘capacitor’, while all the donated electrons to the right of M transfer to the $z=L$ plane to form another ‘capacitor’. These two capacitors have no interaction with each other because each screens itself completely. Such a consideration enables us to discuss each capacitor separately.
Ionization regime C is the most complicated and its band bending structure is shown in FIG. \[fig:regime C\]. Regime C differs from regime B by the presence of an [*unionized region*]{} in the doped layer. In the unionized region, the bound levels of shallow donors are not above the Fermi level, thus the donors in this region are not ionized and the electrons remain bound to them. Correspondingly, the whole doped layer is divided into three parts. The left hand one forms one ‘capacitor’ with the surface (interface) electrons, the right hand one forms another ‘capacitor’ with the 2DEG, and the central one is charge neutral with its conduction band being flat. The ionization falls into regime C when the shallow donor density is very high. For the quantum wire shown in FIG. \[fig:crossection\] with fixed geometric dimensions, as the shallow donor density increases, the ionization of the doped layer progresses from regime A to B to C. In studying quantum wires, however, we are only interested in the ionization regimes B and C when the 2DEG is present. Usually the ionization is in regime B before illumination and in regime C after sufficient illumination, because the shallow donor density is increased by illumination. To identify the ionization regime of a particular quantum wire at the zero gate voltage, we need to calculate the critical characteristic parameters. Notice that since the exposed surface Schottky barrier $\Phi_{sb}$ and the metal-GaAs contact Schottky barrier $\Phi_{sb}'$ may be quite different, we have to calculate the critical parameters under the gates and under the exposed surface separately. For the doped layer under the exposed surface, let $N_{\alpha}$ be the critical shallow donor density that divides regimes A and B, and $N_{\beta}$ be the one that divides regimes B and C. Under the gates, let $N_{\alpha}'$ and $N_{\beta}'$ be the corresponding critical parameters. Now let us calculate $N_{\alpha}$ and $N_{\beta}$.
When $N_{s} = N_{\alpha}$, the bottom of the conduction band in the GaAs channel layer (the region denoted ‘flat’ in FIG. \[fig:regimes AB\]a) lines up with the system’s Fermi level, and with the energy level of the surface states. Therefore, $${e^2 N_{\alpha} \over \varepsilon \varepsilon_{0} } ( l_{c}l_{d} + { l_{d}^2 \over 2} ) = \Phi_{sb}, \label{eq:Nalpha}$$ in which the left side gives the total band bending in the cap and the doped layers. The two band offsets at $z=l_{c}$ and $z=L$ cancel each other. (Here, as well as in the following discussion, we take the ‘capacitor’ to be large and neglect its edge effects.) When $N_{s}=N_{\beta}$, the minimum point M in FIG. \[fig:regimes AB\]b just touches the x-axis. That is, the bottom of the conduction band at M is at the system’s Fermi level. Therefore we have $$\begin{aligned} {e^2 N_{\beta} \over \varepsilon \varepsilon_{0} } ( l_{c}l_{1} + { l_{1}^2 \over 2} ) & = & \Phi_{sb} + E_{off}, \label{eq:Nbeta-1} \\ {e^2 N_{\beta} \over \varepsilon \varepsilon_{0} } ( l_{s}l_{2} + { l_{2}^2 \over 2} ) & = & E_{off} - E_{z0}, \label{eq:Nbeta-2} \\ l_{1} + l_{2} & = & l_{d}, \label{eq:Nbeta-3}\end{aligned}$$ where equations \[eq:Nbeta-1\] and \[eq:Nbeta-2\] come from the fact that the bottom of the conduction band at point M is equal to the Fermi energy of surface electrons and of the 2DEG. Note that, in equation \[eq:Nbeta-2\], $E_{z0}$ is the energy difference between the 2DEG Fermi energy and the bottom of the conduction band at $z=L$, as shown in FIG. \[fig:regimes AB\]b. Typically $E_{z0}\sim 0.04$ eV but $E_{z0}$ vanishes when electrons are nearly depleted. [@Harris] Because $E_{z0}$ is comparable to $E_{off}$, we include its effect in equation \[eq:Nbeta-2\]. ($E_{z0}$ does not appear in equation \[eq:Nalpha\], because electrons are depleted in that situation.)
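Equation \[eq:Nalpha\] can be solved for $N_{\alpha}$ in closed form, and equations \[eq:Nbeta-1\]–\[eq:Nbeta-3\] reduce to a one-dimensional root-finding problem in $N_{\beta}$ after inverting the two quadratic band-bending relations for $l_{1}$ and $l_{2}$. The Python sketch below illustrates this with assumed GaAs-like parameter values ($\varepsilon=12.9$, $\Phi_{sb}=0.8$ eV, $E_{off}=0.2$ eV, $E_{z0}=0.04$ eV) and layer thicknesses close to those of the sample discussed in Section \[sec:discussion\]; none of these numbers are taken from the paper’s tables:

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C

# Illustrative, GaAs-like parameter values (assumed, not from the paper's
# table): dielectric constant, barriers/offsets in eV, lengths in m.
eps_r = 12.9
phi_sb = 0.8              # exposed-surface Schottky barrier
e_off = 0.2               # conduction band offset
e_z0 = 0.04               # 2DEG Fermi energy above the band bottom at z = L
l_c, l_d, l_s = 16.2e-9, 36e-9, 16e-9

# Regime A/B boundary, equation [eq:Nalpha], solved in closed form:
n_alpha = eps_r * EPS0 * phi_sb / (E * (l_c * l_d + 0.5 * l_d**2))

def l_from_bending(n_s, l_front, drop_eV):
    """Invert (e^2 N / eps eps0)(l_front*l + l^2/2) = drop_eV for l."""
    c = eps_r * EPS0 * drop_eV / (E * n_s)
    return -l_front + math.sqrt(l_front**2 + 2.0 * c)

def mismatch(n_s):
    """l_1 + l_2 - l_d; its root in n_s satisfies eqs [eq:Nbeta-1..3]."""
    l1 = l_from_bending(n_s, l_c, phi_sb + e_off)
    l2 = l_from_bending(n_s, l_s, e_off - e_z0)
    return l1 + l2 - l_d

lo, hi = 1e23, 5e24       # bracket in m^-3; mismatch decreases with n_s
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mismatch(mid) > 0 else (lo, mid)
n_beta = 0.5 * (lo + hi)
n_2deg_beta = n_beta * l_from_bending(n_beta, l_s, e_off - e_z0)  # [eq:nbeta]

print(f"N_alpha ~ {n_alpha * 1e-24:.2f} x 10^18 cm^-3")
print(f"N_beta  ~ {n_beta * 1e-24:.2f} x 10^18 cm^-3")
print(f"n_beta  ~ {n_2deg_beta * 1e-15:.2f} x 10^11 cm^-2")
```

With these assumed inputs the bisection converges to critical densities of order $10^{17}$–$10^{18}$ cm$^{-3}$, the same scale as the values reported for the real sample in Section \[sec:discussion\].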
When $N_{s}=N_{\beta}$, the corresponding density of surface electrons is $$n_{\beta}^{sur} = N_{\beta} l_{1}, \label{eq:nsurbeta}$$ and the density of the 2DEG is $$n_{\beta} = N_{\beta} l_{2}, \label{eq:nbeta}$$ where $N_{\beta}$, $l_{1}$, and $l_{2}$ are obtained by solving equations \[eq:Nbeta-1\], \[eq:Nbeta-2\], and \[eq:Nbeta-3\]. For the doped layer under the gates, the critical parameter values $N_{\alpha}'$ and $N_{\beta}'$ can be calculated similarly except that the surface states are replaced by the interface states and $\Phi_{sb}$ by $\Phi_{sb}'$. We can identify the ionization regime of a particular quantum wire by comparing its actual shallow donor density $N_{s}$ to its calculated critical values $N_{\alpha}$ and $N_{\beta}$. However, $N_{s}$, being a part of $N_{total}$, is usually not known directly. On the other hand, the 2DEG density $n_{0}$ at zero gate voltage can readily be determined experimentally. Therefore, it is more convenient to work in terms of the comparison between $n_{\beta}$ and $n_{0}$. The conditions for different ionization regimes of the doped layer [*under the exposed surface*]{} are listed in TABLE \[tab:ionizations\]. (The conditions for the ionization regimes of the doped layer [*under the gates*]{} are obtained by replacing $N_{\alpha}$, $N_{\beta}$, and $n_{\beta}$ in TABLE \[tab:ionizations\] by primed quantities.) Determination of $N_{s}$ {#sec:determination} ------------------------ Now we evaluate $N_{s}$ from the measured 2DEG density $n_{0}$ at zero gate voltage. 
For a quantum wire in ionization regime B, $N_{s}$ is obtained by solving the following equations $$\begin{aligned} {e^2 N_{s} \over \varepsilon \varepsilon_{0} } ( l_{c}l_{1} + { l_{1}^2 \over 2} ) - {e^2 N_{s} \over \varepsilon \varepsilon_{0} } ( l_{s}l_{2} + { l_{2}^2 \over 2} ) & = & \Phi_{sb} + E_{z0}, \label{eq:rhos-1} \\ l_{1} + l_{2} & = & l_{d}, \label{eq:rhos-2} \\ N_{s} l_{2} & = & n_{0}, \label{eq:rhos-3}\end{aligned}$$ where $l_{1}$ and $l_{2}$ have been shown in FIG. \[fig:regimes AB\]b. Equation \[eq:rhos-1\] comes from the condition that the surface energy level is equal to that of the 2DEG. If the ionization of the quantum wire is in regime C (see FIG. \[fig:regime C\]), then $N_{s}$ should be calculated from $$\begin{aligned} {e^2 N_{s} \over \varepsilon \varepsilon_{0} } [l_{s}l_{2} + { l_{2}^2 \over 2} ] & = & E_{off} - E_{z0}, \label{eq:rhos-4} \\ N_{s} l_{2} & = & n_{0}, \label{eq:rhos-5}\end{aligned}$$ where equation \[eq:rhos-4\] comes from the Fermi level of the 2DEG being equal to the bound level of shallow donors in the unionized region. Depletion Voltage {#sec:depletion} ----------------- In a quantum wire, a 2DEG is usually present at the $z=L$ plane before any gate voltage is applied. When a negative gate voltage is applied to the gates, the density of the 2DEG is decreased. The depletion voltage $-V_{dep}$ is the gate voltage at which electrons of the 2DEG are completely depleted from under the gates. The depletion voltage is an important parameter because it characterizes the transition of the system of electrons at the $z=L$ plane from two-dimensional to quasi-one-dimensional. The gate voltage actually measures the energy difference between the Fermi level in the gates and the Fermi level in the GaAs at $z=L$. 
Therefore, (noting that $E_{z0}=0$ at depletion), the depletion voltage is given by $$eV_{dep} = {e^2 N_{s} \over \varepsilon \varepsilon_{0} } ( l_{c}l_{d} + { l_{d}^2 \over 2} ) - \Phi_{sb}', \label{eq:Vdep}$$ where the first term on the right side gives the total band bending in the cap and the doped layers, and $\Phi_{sb}'$ is the Schottky barrier of the gate-GaAs contact. Note that $N_{s}$ should be determined from the measured 2DEG density $n_{0}$ according to the ionization regime at the zero gate voltage. From equation \[eq:Vdep\], the depletion voltage should be independent of illumination because $N_{s}$ of the regions in the doped layer that are under the gates is not affected by illumination. Pinchoff Voltage {#sec:pinchoff} ---------------- The pinchoff voltage $-V_{pinch}$ is the gate voltage at which electrons are just completely depleted from the $z=L$ plane in the quantum wire. Therefore, it measures the energy difference between the Fermi level of electrons in the gates and the bottom of the conduction band at the central point $(x=0,z=L)$ of the electron channel. The calculation of the pinchoff voltage is much more complicated than that of the depletion voltage, because it involves the electrostatic potential difference between the point $(x=0,z=L)$ and the gates, and depends in an essential way on the fringing fields of the capacitors discussed in Section \[sec:regimes\]. The pinchoff voltage is affected by illumination because an illumination increases the shallow donor density under the exposed semiconductor surface and thus changes the charge distribution. For the purpose of the calculation below, let the electrostatic potential just inside the semiconductor adjacent to the gates be zero and $-\varphi (x,z)$ be the potential function (noting that the system is y-independent).
Then the pinchoff voltage is given by $$e V_{pinch}= e \varphi (0,L) - \Phi_{sb}', \label{eq:Vpinch}$$ where $e \varphi (0,L)$ is the potential energy at pinchoff of an electron at point $(x=0,z=L)$, and $\Phi_{sb}'$ is the gate-GaAs contact Schottky barrier. The calculation of $-\varphi (0,L)$ can be done by using the Green’s function method with the Dirichlet boundary condition. The general expression of the potential function for $z \geq 0$ contains two terms that correspond to the contributions from the spatial charge and from the boundary, respectively [@Jackson] $$\begin{aligned} -\varphi(x,z) & = & -\varphi_{1}(x,z) - \varphi_{2}(x,z) \\ & = & {1 \over 4 \pi \varepsilon \varepsilon_{0} } \int\!\!\int\!\!\int d^{3}r' \rho ({\bf r'}) G({\bf r},{\bf r'}) - {1 \over 4 \pi} \int\!\!\int_{(z'=0)} \! dx' dy' \varphi({\bf r'}) {\partial \over \partial z'} G({\bf r},{\bf r'}), \label{eq:varphi}\end{aligned}$$ where ${\bf r}=(x,y,z)$, ${\bf r'}=(x',y',z')$, $\rho ({\bf r'})$ is the spatial charge density, and $G({\bf r},{\bf r'})$ is the Green’s function, which is given by $$\begin{aligned} \nabla'^{2} G({\bf r},{\bf r'}) & = & -4\pi \delta ({\bf r}-{\bf r'}), \\ G({\bf r},{\bf r'})\mid_{z'=0} & = & 0.\end{aligned}$$ Using the image method, the solution of the Green’s function is $$\begin{aligned} G({\bf r},{\bf r'}) = {1 \over [ (x'-x)^2 + (y'-y)^2 + (z'-z)^2 ]^{1/2} } \nonumber \\ - {1 \over [ (x'-x)^2 + (y'-y)^2 + (z'+z)^2 ]^{1/2} }. \label{eq:G-expression}\end{aligned}$$ At an arbitrary gate voltage prior to the pinchoff voltage, there are electrons present at the $z=L$ plane and there may exist an unionized region in the doped layer as shown in FIG. \[fig:regime C\]. Therefore, the spatial charge density $\rho ({\bf r'})$ is not known analytically, and $-\varphi(x,z)$ can only be calculated numerically. At the pinchoff voltage, however, no electrons are present at the $z=L$ plane and the shallow donors everywhere in the doped layer must be ionized.
(If there were an unionized region, there would have to be electrons present at the $z=L$ plane because the bottom of the conduction band in the GaAs channel layer is lower than that in the doped layer by $E_{off}$.) Because of this, it is possible to calculate $-\varphi(x,z)$ analytically at the pinchoff voltage. Because the shallow donors are all ionized, before illumination, the spatial charge density can be expressed as $$\rho ({\bf r}) = \left\{ \begin{array}{ll} eN_{s}, & \mbox{ if $l_{c} \leq z \leq l_{c}+l_{d}$} \\ 0, & \mbox{ otherwise} \end{array} \right.$$ The contribution from the spatial charge, the first term on the right side of equation \[eq:varphi\], can thus be calculated easily and the result is $$-\varphi_{1}(x,z) = { eN_{s} \over \varepsilon \varepsilon_{0} }\times \left\{ \begin{array}{ll} l_{d} z, & \mbox{ if $z<l_{c}$} \\ -{1 \over 2} (z-l_{c}-l_{d})^{2} + l_{c}l_{d} + {1 \over 2}l_{d}^{2}, & \mbox{ if $l_{c} \leq z \leq l_{c}+l_{d}$ } \\ l_{c}l_{d} + {1 \over 2}l_{d}^{2}, & \mbox{ if $z>l_{c}+l_{d}$} \end{array} \right.$$ which is independent of $x$. For the central point $(x=0,z=L)$, $$\varphi_{1}(0,L) = {eN_{s} \over \varepsilon \varepsilon_{0} } ( l_{c}l_{d} + {l_{d}^{2} \over 2} ), \label{eq:varphi1-1}$$ which gives the total band bending in the cap and the doped layers. (We have obtained this result previously in Section \[sec:regimes\] by using the ‘capacitor’ argument.) After an illumination, the spatial shallow donor density has been increased in the doped layer under the exposed surface (Feature 1). As an approximation, we can take the spatial charge density as $$\rho ({\bf r}) = \left\{ \begin{array}{ll} eN_{sl}, & \mbox{ if $|x| \leq w$ and $l_{c} \leq z \leq l_{c}+l_{d}$} \\ eN_{s}, & \mbox{ if $|x| > w$ and $l_{c} \leq z \leq l_{c}+l_{d}$} \\ 0, & \mbox{ otherwise} \end{array} \right. \label{eq:Nsl}$$ in which $N_{sl}>N_{s}$ because the shallow donor density has been increased under the exposed surface.
$N_{sl}$ can be determined in the same way as $N_{s}$ from the 2DEG density after illumination at zero gate voltage. After performing the integration, the potential due to the spatial charge after illumination can be expressed as $$\varphi_{1l}(0,L) = \varphi_{1}(0,L) [1-{N_{sl}-N_{s} \over N_{sl}} (\alpha_{1}+\alpha_{2})],$$ where $\varphi_{1}(0,L)$ is given by equation \[eq:varphi1-1\], and $\alpha_{1}=L/ \pi w$ and $\alpha_{2}=-L^{3}/ 3 \pi w^{3}$. Now let us calculate the boundary contribution, the second term in equation \[eq:varphi\]. For split-gate quantum wires, we have a technical problem with the value of the potential at the boundary ($z=0$). Although we know the boundary potential near the gates (which has been chosen to be zero here), we do not know exactly how the boundary potential is distributed on the exposed surface. Strictly speaking, the potential distribution on the exposed surface depends on the detailed properties of the surface states. But the physics of surface states is very complicated and a calculation including the full details of the surface states is not feasible. However, in studying quantum wires, we find that it is sufficient to make some simple assumptions based on the properties of the surface states which have been described in Features 2 and 3. Considering the symmetry of quantum wires, the potential function at the exposed surface can be expanded as $$-\varphi(x,0) = \sum_{k=0}^{\infty}a_{k}x^{2k},\ |x| \leq w, \label{eq:expansion}$$ where $\{ a_{k} \}$ are constant coefficients. We find it necessary to keep the first two terms in the expansion \[eq:expansion\]. Such a treatment makes it possible to ensure that the surface potential is continuous at $x=\pm w$. This yields $$-\varphi(x,0) = \left \{ \begin{array}{ll} V_{0} (1 - x^{2}/w^{2}), & \mbox{ if $|x| \leq w$} \\ 0, & \mbox{ if $|x|>w$} \end{array} \right. \label{eq:surface potential}$$ where $V_{0}$ is a constant and will be determined later.
Substituting equation \[eq:surface potential\] into the second term in equation \[eq:varphi\], the boundary contribution is $$-\varphi_{2}(x,z) = {V_{0} \over \pi w^{2}} [(w^{2}+z^{2}-x^{2}) \theta(x,z) + xz \ln {(w+x)^{2} + z^{2} \over (w-x)^{2} + z^{2}} -2wz], \label{eq:phi3-ex2}$$ where $$\theta(x,z) = \arctan {w-x \over z}+\arctan {w+x \over z},$$ which is just the angle that is subtended by the exposed surface at the point $(x,z)$. Therefore, $$e\varphi_{2} (0,L) ={2e V_{0} \over \pi} [(1 +{L^{2} \over w^{2}}) \arctan{w \over L} - {L \over w}]. \label{eq:phi2}$$ The boundary contribution $-\varphi_{2}(x,z)$ actually describes the potential that laterally confines the electrons at the $z=L$ plane. To help visualize this confining potential, we plot $e \varphi_{2}(x,z)$ in the $z>0$ half space in FIG. \[fig:3d potential\]. The intersection of the plot with the $z=L$ plane just gives the confining potential well profile. The larger $L$ is, the shallower the potential well becomes. Now $V_{0}$ can be determined by the conservation of the total number of surface electrons (Feature 3). That is $$\int_{-w}^{w} n^{sur}(x) dx = 2 e w N_{s} l_{1}, \label{eq:conservation}$$ in which the right side expresses the linear charge density of the exposed surface at zero gate voltage, and the left side is the linear charge density at the pinchoff voltage. The areal density of surface electrons is evaluated from the calculated $-\varphi (x,z)$, to yield $$e n^{sur}(x) = {2 \varepsilon \varepsilon_{0} V_{0} w \over \pi w^{2}} [x \ln {(w+x)^{2} \over (w-x)^{2}} - 4 w] - e N_{s} l_{d}. \label{eq:nsur}$$ Finally, we discuss briefly the relationship between the work presented in this section and the earlier work of Davies [@Davies] who was the first to study the boundary contribution to the potential of a quantum wire using the Green’s function method. Davies considered only the leading term in the expansion \[eq:expansion\] of the surface potential.
However, for our purposes this approximation is not adequate since it yields a discontinuous potential along the surface instead of equation \[eq:surface potential\], and as a consequence, a surface charge density for which the integral in equation \[eq:conservation\] diverges. By retaining also the second term of the expansion \[eq:expansion\], we obtain a continuous surface potential and a finite integrated surface charge density \[eq:nsur\]. This enables us to use the conservation of the surface charge at the exposed surface to evaluate the parameter $V_{0}$. Discussion of a Real Sample {#sec:discussion} =========================== Now let us take a real split-gate quantum wire as an example for calculating the depletion and pinchoff voltages using the present theory. The sample quantum wire we consider has the typical structure displayed in FIG. \[fig:crossection\]. Grown with MBE on a semi-insulating GaAs substrate, its layers in sequence are a 65 nm GaAs buffer, 30 periods of GaAs/AlAs superlattice, 900 nm GaAs channel layer, 1.5 nm AlAs and 16 nm undoped ${\rm Al_{0.33}Ga_{0.67}As}$ layers as the spacer, 40 nm Si-doped ${\rm Al_{0.33}Ga_{0.67}As}$ layer with donor concentration of ${\rm1.1 \times 10^{18}\ cm^{-3}}$, and 18 nm GaAs cap layer with normal surface (100). On top of the GaAs cap, two separated gate bars of titanium are applied using electron beam lithography. The gate bars have a spatial separation of 200 nm and width of 200 nm. Analysis after growth shows that the undoped ${\rm Al_{0.33}Ga_{0.67}As}$ layer of the spacer is 14.5 nm instead of the expected value 16 nm. This suggests that all of the actual thicknesses should be reduced by 10% from their expected values. Correspondingly, the concentration of the Si donors in the doped ${\rm Al_{0.33}Ga_{0.67}As}$ layer should be increased by 10% so as to keep the nominal total number of donors. The parameter values that we use in our calculations are listed in TABLE \[tab:parameters\]. 
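For a sample in ionization regime B before illumination, equations \[eq:rhos-1\]–\[eq:rhos-3\] and \[eq:Vdep\] chain into a short numerical calculation of $N_{s}$ and the depletion voltage. The Python sketch below illustrates the procedure with assumed values ($\varepsilon=12.9$, $\Phi_{sb}=0.8$ eV, $E_{z0}=0.04$ eV, an assumed Ti-gate barrier $\Phi_{sb}' \approx 0.83$ eV, the 10%-reduced thicknesses described above, and the measured pre-illumination density $n_{0}=3.40\times 10^{11}$ cm$^{-2}$); the paper’s own inputs are those of TABLE \[tab:parameters\], which may differ:

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
E = 1.602176634e-19       # elementary charge, C

# Assumed, illustrative parameters (not the paper's TABLE values):
eps_r = 12.9              # dielectric constant
phi_sb = 0.8              # eV, exposed-surface Schottky barrier
phi_sb_gate = 0.83        # eV, assumed Ti/GaAs gate Schottky barrier
e_z0 = 0.04               # eV, 2DEG Fermi energy above the band bottom
l_c, l_d, l_s = 16.2e-9, 36e-9, 16e-9   # m, 10%-reduced thicknesses
n_0 = 3.40e15             # m^-2, measured 2DEG density before illumination

def regime_b_residual(n_s):
    """Left minus right side of eq. [eq:rhos-1], with l_2 = n_0/N_s and
    l_1 = l_d - l_2 substituted from eqs. [eq:rhos-2] and [eq:rhos-3]."""
    l2 = n_0 / n_s
    l1 = l_d - l2
    pref = E * n_s / (eps_r * EPS0)   # volts per m^2 of geometric factor
    lhs = pref * ((l_c * l1 + 0.5 * l1**2) - (l_s * l2 + 0.5 * l2**2))
    return lhs - (phi_sb + e_z0)

# Bisection for N_s: the residual increases monotonically over this bracket.
lo, hi = 2e23, 3e24
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if regime_b_residual(mid) < 0 else (lo, mid)
n_s = 0.5 * (lo + hi)

# Depletion voltage, eq. [eq:Vdep], with E_z0 = 0 at depletion:
v_dep = E * n_s * (l_c * l_d + 0.5 * l_d**2) / (eps_r * EPS0) - phi_sb_gate

print(f"N_s   ~ {n_s * 1e-24:.2f} x 10^18 cm^-3")
print(f"V_dep ~ -{v_dep:.2f} V")
```

With these assumed inputs the result lands near $-0.3$ V, the same scale as the measured depletion voltages quoted below.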
According to equations \[eq:Nalpha\], \[eq:Nbeta-1\], \[eq:Nbeta-2\], \[eq:Nbeta-3\], and \[eq:nbeta\], the calculated critical values that separate the ionization regimes of this sample are $N_{\alpha}=0.45 {\rm \times10^{18}\ cm^{-3}}$, $N_{\beta}=0.80 {\rm \times10^{18}\ cm^{-3}}$, and $n_{\beta}=6.03 {\rm \times 10^{11}\ cm^{-2}}$. Now let us consider three situations of the quantum wire: before illumination, after one illumination, and after many illuminations (i.e. after saturation with a red light emitting diode). Corresponding to these three situations, the measured densities of the 2DEG at zero gate voltage are ${\rm 3.40 \times10^{11}\ cm^{-2}}$, ${\rm 5.49 \times10^{11}\ cm^{-2}}$, and ${\rm 6.25 \times10^{11}\ cm^{-2}}$, respectively. Comparing these measured values to the calculated critical value $n_{\beta}=6.03 {\rm \times 10^{11}\ cm^{-2}}$ and referring to TABLE \[tab:parameters\], the ionization regimes of this quantum wire before illumination, after one illumination, and after many illuminations are in B, B, and C, respectively. The corresponding shallow donor densities and the depletion voltage and pinchoff voltages can therefore be calculated based on the formalism in Section \[sec:model\]. The calculated results are presented in TABLE \[tab:results\]. Experimentally, the depletion and pinchoff voltages can be known from the measured longitudinal (y-direction) resistances against the gate voltage. [@van_Wees; @Wharam] The measured resistance curves of the sample quantum wire are displayed in FIG. \[fig:resistances\]. Curves a, b, and c correspond to the resistances varying with the gate voltage before illumination, after one illumination, and after many illuminations, respectively. The depletion voltages for curves a, b, and c are $-$0.33 V, $-$0.35 V and $-$0.37 V, respectively, which agree very well with the calculated results $-$0.33 V. 
(The increasingly negative depletion voltages upon illumination can be explained by the fact that the gate bars are very narrow and therefore some illuminating photons may penetrate into the regions under the gates and excite the deep donors there.) The pinchoff voltages read from curves a, b, and c are about $-$0.55 V, $-$0.86 V, and $-$1.33 V, respectively, which are fairly close to their corresponding calculated results $-$0.53 V, $-$0.80 V, and $-$1.43 V. In conclusion, this paper presents an electrostatic model of split-gate quantum wires and sets up a general formalism that is applicable both before and after illumination. For any split-gate quantum wire, given its geometric parameters and its measured 2DEG density at zero gate voltage, additional information such as its ionization state, shallow donor density, depletion voltage, and pinchoff voltage can be calculated based on the model. While contributing to our understanding of the electrostatic characteristics of quantum wires, this model suggests a potential approach for studying the electrodynamic and time-dependent processes in quantum wires. The theory of this paper should also provide a tool for studying other gated nanostructures such as multiple constrictions [@Smith; @Hwang; @Schmit; @Simpson] and quantum dots [@Reed; @Kouvenhoven; @Lorke]. We would like to acknowledge helpful discussions with C. J. B. Ford, MBE material grown by P. T. Coleridge and fabrication assistance from P. Chow-Chong, M. Davies, P. Marshall, R. P. Taylor, and R. Barber. This work was supported by the Natural Sciences and Engineering Research Council of Canada and the Centre for Systems Science at Simon Fraser University. L. L. Chang and B. C. Giessen, in [*Synthetic Modulated Structures*]{} (Academic, Orlando, 1985); J. D. Grange, “The Growth of the MBE III-V Compounds and Alloys”, in [*The Technology of Molecular Beam Epitaxy*]{}, edited by E. H. C. Parker (Plenum Press, New York, 1992). R. Dingle et al., Appl. Phys. Lett.
[**7**]{}, 665 (1978). For a review, see J. Harris, J. A. Pals, and R. Woltjer, Rep. Prog. Phys. [**52**]{}, 1217 (1989). R. G. Wheeler, K. K. Choi, A. Goel, R. Wisnieff, and D. E. Prober, Phys. Rev. Lett. [**49**]{}, 1674 (1982). A. Scherer, M. L. Roukes, H. G. Craighead, R. M. Ruthen, E. D. Beebe, and J. P. Harbison, Appl. Phys. Lett. [**51**]{}, 2133 (1987). Y. Hirayama, S. Tarucha, Y. Suzuki, and H. Okamoto, Phys. Rev. [**37**]{}, 2774 (1988). A. D. Wieck and K. Ploog, Appl. Phys. Lett. [**56**]{}, 928 (1990). W. J. Skocpol, L. D. Jackel, E. L. Hu, R. E. Howard, and L. A. Fetter, Phys. Rev. Lett. [**49**]{}, 951 (1982). J. P. Kirtley et. al., Phys. Rev. [**B34**]{}, 5414 (1986). H. van Houten, B. J. van wees, M. G. J. Heijman, and J. P. André, Appl. Phys. Lett. [**49**]{}, 1781 (1986). T. J. Thornton, M. Pepper, H. Ahmed, D. Andrews, and G. J. Davies, Phys. Rev. Lett. [**56**]{}, 1198 (1986). H. Z. Zhang, H. P. Wei, D. C. Tsui, and G. Weimann, Phys. Rev. [**B34**]{}, 5635 (1986). For a recent review see S. E. Ulloa, A. MacKinnon, E. Castaño, and G. Kirczenow, “From Ballistic Transport to Localization”, in [*Handbook of Semiconductors*]{} Vol. I, edited by P. T. Landsberg (North-Holland, Amsterdam, 1992). F. W. Sheard and L. Eaves, Nature [**333**]{}, 600 (1988). A. Khurana, Physics Today [**41**]{}, 21 (1988). W. Y. Lai and S. Das Sarma, Phys. Rev. [**B33**]{}, 8874 (1986). S. E. Laux and F. Stern, Appl. Phy. Lett. [**49**]{}, 91 (1986). S. E. Laux, D. J. Franck, and F. Stern, Surf. Sci. [**196**]{}, 101 (1988). J. H. Davies, Semicond. Sci. Technol. [**3**]{}, 995 (1988). J. A. Nixon and J. H. Davies, Phys. Rev. [**B 41**]{}, 7929 (1990). A. Nakamura and A. Okiji, J. Phys. Soc. Jpn. [**60**]{}, 1873 (1991). U. Ravaioli, T. Kerkhoven, M. Raschke, and A. T. Galick, Superlatt. Microstruc. [**11**]{}, 343 (1992). Y. Sun and G. Kirczenow, Phys. Rev. [**B47**]{}, 4413 (1993); Y. Sun and G. Kirczenow, Phys. Rev. Lett. [**72**]{}, 2450 (1994) W. Kohn, Solid State Phys. 
[**5**]{} 257 (1957). N. Lifshitz, A. Jayaraman, and R. A. Logan, Phys. Rev. [**B 21**]{}, 670 (1980). T. Ishikawa, J. Saito, S. Sasa, and S. Hiyamizu, Jpn. J. Appl. Phys. [**21**]{}, L675 (1982). N. Chand, T. Henderson, J. Klem, W. T. Masselink, R. Fischer, Y. C. Chang, and H. Morkoc, Phys. Rev. [**B 80**]{}, 4431 (1984). D. V. Lang, “DX Centers in III-V Alloys”, in [*Deep Centers in Semiconductors*]{}, edited by S. T. Pantelides (Gordon and Breach Science Publishers, Switzerland, 1992). K. J. Malloy and K. Khchaturyan, “DX and Related Defects in Semiconductors”, [*Semiconductors and Semimetals*]{} Vol. 38, edited by E. R. Weber (Academic Press, San Diego, 1993). D. V. Lang and R. A. Logan, Phys. Rev. Lett. [**39**]{}, 635 (1977). See, for example, W. Mönch, [*Semiconductor Surfaces and Interfaces*]{} (Springer-Verlag, Berlin, 1993). J. R. Waldrop, J. Vac. Sci. Technol. [**B 2**]{}, 445 (1984). N. Newman, W. E. Spicer, T. Kendelewicz, and I. Lindau, J. Vac. Sci. Technol. [**B 4**]{}, 931 (1986). A. B. McLean, D. A. Evans, and R. H. Williams, Semicond. Sci. Technol. [**2**]{}, 547 (1986); A. B. McLean and R. H. Williams, J. Phys. C [**21**]{}, 783 (1988). For a review see W. E. Spicer, I. Lindau, P. Skeath, and C. Y. Su, J. Vac. Sci. Technol. [**17**]{}, 1019 (1980). T.-C. Chiang, R. Ludeke, M. Aono, G. Landgren, F. J. Himpsel, D. E. Eastman, [**B 27**]{}, 4770 (1983). W. Pötz and D. K. Ferry, Phys. Rev. [**B 31**]{}, 968 (1985). R. P. Beres, R. E. Allen, and J. D. Dow, Solid State Commun. [**45**]{}, 13 (1983). C. B. Duke, “Tunneling in Solids”, in [*Solid State Physics*]{}, Supplement Vol. 10, edited by F. Seitz, D. Turnbull, and H. Ehrenreich (Academic Press, New York, 1969). G. Garcia-Calderón, “Tunneling in Semiconductor Resonant Structures”, in [*Physics of Low-Dimensional Semiconductor Structures*]{}, edited by P. Butcher, N. H. March, and M. P. Tosi (Plenum Press, New York, 1993). R. J. Haug, A. H. MacDonald, P. Streda, and K. von Klitzing, Phys. Rev. Lett.
[**61**]{}, 2797 (1988). S. Washburn, A. B. Fowler, H. Schmid, and D. Kern, Phys. Rev. Lett. [**61**]{}, 2801 (1988). Experimentally, the bulk density of the 2DEG can be determined by Shubnikov de Haas measurements. See for example J. D. Jackson, [*Classical Electrodynamics*]{}, 2nd Edition (John Wiley & Sons, New York, 1975), Section 1.10. Keeping more terms in the expansion will modify the calculated distribution of the surface charge density. However, such a change (for fixed total surface charge) has no significant effect on the potential at the $z=L$ plane because this plane is far away from the surface. H. Okumura, S. Misawa, S. Yoshida, and S. Gonda, Appl. Phys. Lett. [**46**]{}, 377 (1985). B. J. van Wees, H. van Houten, C. W. J. Beenakker, J. G. Williamson, L. P. Kouwenhoven, D. van der Marel, and C. T. Foxon, Phys. Rev. Lett. [**60**]{}, 848 (1988). D. A. Wharam, T. J. Thornton, R. Newbury, M. Pepper, H. Ahmed, J. E. F. Frost, D. G. Hasko, D. C. Peacock, D. A. Ritchie, and G. A. C. Jones, J. Phys. C: Solid State Phys. [**21**]{}, L209 (1988). C. G. Smith, M. Pepper, R. Newbury, H. Ahmed, D. G. Hasko, D. C. Peacock, J. E. F. Frost, D. A. Ritchie, G. A. C. Jones, and G. Hill, J. Phys.: Condens. Matter [**1**]{}, 6763 (1989). S. W. Hwang, J. A. Simmons, D. C. Tsui, and M. Shayegan, Phys. Rev. [**44**]{}, 13497 (1991). P. E. Schmit, M. Okada, K. Kosemura, and N. Yokoyama, Jpn. J. Appl. Phys. [**30**]{}, L1921 (1991). P. J. Simpson, D. R. Mace, C. J. B. Ford, I. Zailer, M. Pepper, D. A. Ritchie, J. E. F. Frost, G. A. C. Jones, Appl. Phys. Lett. [**63**]{}, 3191 (1993). M. A. Reed et al., Phys. Rev. Lett. [**60**]{}, 535 (1988). L. P. Kouwenhoven et al., Phys. Rev. Lett. [**65**]{}, 535 (1990). A. Lorke, J. P. Kotthaus, and K. Ploog, Phys. Rev. Lett. [**64**]{}, 2559 (1990).
  ------------------- ---------------------------------- -------------------------
  Ionization regime   By shallow donor density $N_{s}$   By 2DEG density $n_{0}$
  A                   $N_{s} < N_{\alpha}$               $n_{0} = 0$
  B                   $N_{\alpha} < N_{s} < N_{\beta}$   $0<n_{0} < n_{\beta}$
  C                   $N_{s} > N_{\beta}$                $n_{0} > n_{\beta}$
  ------------------- ---------------------------------- -------------------------

  : Criteria for identifying different ionization regimes of the doped layer under the exposed surface in quantum wires at zero gate voltage.

\[tab:ionizations\]

  -------------------------------------------------- --------------- ------- ---------
  Description                                        Notation        Value   Unit
  gate separation                                    $2w$            200     nm
  GaAs cap layer                                     $l_{c}$         16.2    nm
  doped ${\rm Al_{0.33}Ga_{0.67}As}$ layer           $l_{d}$         36      nm
  ${\rm Al_{0.33}Ga_{0.67}As}$ and AlAs spacer       $l_{s}$         15.75   nm
  effective mass                                     $m^{*}$         0.067   $m_{e}$
  dielectric constant of GaAs                        $\varepsilon$   12.5
  Schottky barrier of GaAs surface (100) [@Chiang]   $\Phi_{sb}$     0.80    eV
  Schottky barrier of Ti-GaAs contact [@Waldrop]     $\Phi_{sb}'$    0.83    eV
  band offset [@Okumura]                             $E_{off}$       0.2     eV
  z-direction energy interval [@Harris]              $E_{z0}$        0.04    eV
  -------------------------------------------------- --------------- ------- ---------

  : Parameters used in calculations for the real sample of quantum wire

\[tab:parameters\]

  ------------------------------- ------------- ---------------- ----------------- --------------------------
  Parameter                       Before ill.   After one ill.   After many ill.   Unit
  Ionization regime               B             B                C
  measured 2DEG density $n_{0}$   3.40          5.49             6.25              ${\rm 10^{11}\ cm^{-2}}$
  shallow donor density           0.65          0.77             1.02              ${\rm 10^{18}\ cm^{-3}}$
  $l_{1}$                         30.89         28.87            24.03             nm
  $l_{2}$                         5.11          7.13             6.11              nm
  $l_{3}$                         0             0                5.86              nm
  calculated $-V_{dep}$           $-$0.33       $-$0.33          $-$0.33           V
  calculated $-V_{pinch}$         $-$0.53       $-$0.80          $-$1.43           V
  ------------------------------- ------------- ---------------- ----------------- --------------------------

  : Calculated results of the real quantum wire

\[tab:results\]
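The criteria of TABLE \[tab:ionizations\] translate directly into a small classifier; a sketch (using the critical density $n_{\beta}=6.03\times 10^{11}\ {\rm cm^{-2}}$ computed for this sample) that reproduces the regime assignments quoted in the text:

```python
def ionization_regime(n0, n_beta=6.03e11):
    """Classify the ionization regime from the 2DEG density n0 (cm^-2)
    at zero gate voltage, following TABLE [tab:ionizations]."""
    if n0 == 0:
        return "A"                      # no 2DEG at zero gate voltage
    return "B" if n0 < n_beta else "C"

# Measured densities: before, after one, and after many illuminations.
regimes = [ionization_regime(n) for n in (3.40e11, 5.49e11, 6.25e11)]
```

This returns B, B, and C for the three measured densities, in agreement with the assignments made in the discussion of the real sample.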
--- abstract: 'In this note we show that the standard (RS) perturbation method gives the same result as the hypervirial perturbative method (HPM), for an approximate analytic expression for the energy eigenvalues of the bounded quartic oscillator. This connection between the HPM and the RS method went unnoticed for a long time, apparently because it was not obvious that the resulting polygamma sums to be evaluated in the RS method could, in fact, be expressed in closed form.' author: - 'Kunle Adegoke[^1]' - Adenike Olatinwo - Gbenga Olunloyo title: 'An alternative derivation of the Fernández-Castro analytic approximate expression for the eigenvalues of the bounded quartic oscillator [^2] ' --- Introduction ============ Interest in the [*bounded*]{} quartic oscillator started with the pioneering work of Barakat and Rosner [@barakat], who employed a power series method to obtain numerical values of the eigenvalues through an iteration scheme. Researchers have since continued to investigate the bounded quartic oscillator and related systems, using various techniques (see references [@fernandez; @navarro; @chauduri; @alhendi] and the references in them). The bounded quartic oscillator is described by the Hamiltonian $$H=-\frac{\hbar^2}{2m}\frac {\rm d^2}{{\rm d}x^2}+\lambda x^4,\quad -a\le x\le a\,,$$ where $m$ is the mass of the oscillator and $\lambda>0$ is the coupling constant. The Hamiltonian $H$ lives in a Hilbert space $\mathcal H$ with inner product between any two functions $f(x)$ and $g(x)$ in $\mathcal H$ defined by $\left(f(x),g(x)\right)=\int_{-a}^a{f(x)g(x)\,{\rm d}x}$, where the functions $f(x)$ and $g(x)$ and indeed all vectors of $\mathcal H$ are required to vanish at the boundary $x=\pm a$.
About three and a half decades ago, using their hypervirial perturbative method (HPM), Fernández and Castro [@fernandez] derived the following expression (their equation (22) in our notation) for the eigenvalues of the bounded quartic oscillator: $$\label{equ.dzka6e0} \begin{split} E_r &\approx \frac{{\pi^2 \hbar^2 }}{{8ma^2 }}(r + 1)^2 + \frac{{\lambda a^4 }}{5}\left[ {1 - \frac{{20}}{{\pi^2 (r + 1)^2 }} + \frac{{120}}{{\pi^4 (r + 1)^4 }}} \right]\\ &\qquad + \frac{{32m\lambda ^2 a^{10} }}{{\hbar ^2 }}\left[ {\frac{1}{{225\pi^2 (r + 1)^2 }} - \frac{{37}}{{35\pi^4 (r + 1)^4 }}} \right.\\ &\qquad\qquad\qquad\qquad\left.{ + \frac{{314}}{{5\pi^6 (r + 1)^6 }}- \frac{{1404}}{{\pi^8 (r + 1)^8 }} + \frac{{8712}}{{\pi^{10} (r + 1)^{10} }}}\right]\,, \end{split}$$ for quantum numbers $r=0,1,2,\ldots$ In this paper we show that the standard (RS) perturbation theory with $\lambda$ as the perturbation parameter gives the same result for $E_r$ as given in . As a matter of fact, we stumbled upon the work of Fernández and Castro only after we had obtained our result for $E_r$. The Computer Algebra System Waterloo Maple came to our aid in simplifying the resulting perturbation sums and finding their closed form. Basis functions and the matrix elements of $H$ ============================================== Since $H(-x)=H(x)$, the eigenstates of $H(x)$ have definite parity. For $r=0,1,2,\ldots$, the complete orthonormal functions $\{\varphi_r(x)\}$, where $$\begin{split} \varphi _{2r} (x) = \sqrt {\frac{1}{a}} \cos \left( {\frac{{(2r + 1)\pi x}}{{2a}}} \right),&\quad\varphi _{2r + 1} (x) = \sqrt {\frac{1}{a}} \sin \left( {\frac{{(r + 1)\pi x}}{a}} \right)\,, \end{split}$$ constitute a suitable set of basis functions in $\mathcal H$ for a matrix representation of the bounded quartic oscillator Hamiltonian $H$, since they also satisfy the boundary conditions $\varphi_r(\pm a)=0$.
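The Fernández–Castro expression above is straightforward to evaluate numerically; a minimal sketch (units with $\hbar=m=a=1$ are assumed here purely for illustration):

```python
import numpy as np

def E_fc(r, lam, hbar=1.0, m=1.0, a=1.0):
    """Fernández-Castro second-order-in-lambda approximation for level r."""
    n = r + 1
    E0 = np.pi**2 * hbar**2 / (8 * m * a**2) * n**2
    E1 = (lam * a**4 / 5) * (1 - 20/(np.pi**2 * n**2) + 120/(np.pi**4 * n**4))
    E2 = (32 * m * lam**2 * a**10 / hbar**2) * (
        1/(225 * np.pi**2 * n**2) - 37/(35 * np.pi**4 * n**4)
        + 314/(5 * np.pi**6 * n**6) - 1404/(np.pi**8 * n**8)
        + 8712/(np.pi**10 * n**10))
    return E0 + E1 + E2
```

At $\lambda=0$ the expression reduces to the particle-in-a-box spectrum $\pi^2\hbar^2(r+1)^2/(8ma^2)$, and for small $\lambda>0$ the quartic well raises every level.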
The identities $$\begin{split} \left({\cos \alpha x,\cos \beta x}\right) = (1 - \delta _{\alpha \beta } )\left( {\frac{{\sin ((\alpha - \beta )a)}}{{\alpha - \beta + \delta _{\alpha \beta } }} + \frac{{\sin ((\alpha + \beta )a)}}{{\alpha + \beta }}} \right)&\\ + \delta _{\alpha \beta } \left( {a + \frac{{\sin (2\alpha a)}}{{2\alpha }}} \right)\qquad& \end{split}$$ and $$\begin{split} \left({\sin \alpha x,\sin \beta x}\right) = (1 - \delta _{\alpha \beta } )\left( {\frac{{\sin ((\alpha - \beta )a)}}{{\alpha - \beta + \delta _{\alpha \beta } }} -\frac{{\sin ((\alpha + \beta )a)}}{{\alpha + \beta }}} \right)&\\ + \delta _{\alpha \beta } \left( {a - \frac{{\sin (2\alpha a)}}{{2\alpha }}} \right)\qquad&\,, \end{split}$$ for $\alpha,\beta>0$ and the repeated application of the Leibniz rule for differentiating an integral allow us to calculate the matrix elements of $H$ as $$\label{equ.jevxxpj} \begin{split} H_{rs} &= \delta _{rs} \left[ {\frac{{\pi ^2 \hbar ^2 }}{{8ma^2 }}(r + 1)^2 + \frac{{\lambda a^4 }}{5}\left( {1 - \frac{{20}}{{\pi ^2 (r + 1)^2 }} + \frac{{120}}{{\pi ^4 (r + 1)^4 }}} \right)} \right]\\ &\quad + (1 - \delta _{rs} )\left[ {( - 1)^{(r + s)/2} \frac{{16\lambda a^4 }}{{\pi ^2 }}\left( {\frac{1}{{(r - s + \delta _{rs} )^2 }} - \frac{1}{{(r + s + 2)^2 }}} \right)} \right.\\ &\qquad\qquad\qquad\left. { - ( - 1)^{(r + s)/2} \frac{{384\lambda a^4 }}{{\pi ^4 }}\left( {\frac{1}{{(r - s + \delta _{rs} )^4 }} - \frac{1}{{(r + s + 2)^4 }}} \right)} \right]\,, \end{split}$$ provided that $r$ and $s$ have the same parity, and $H_{rs}=0$ otherwise. The matrix elements $H_{rs}$ facilitate the direct diagonalization of the bounded quartic oscillator. The energy eigenvalues can be made arbitrarily accurate by increasing the dimension of the Hamiltonian matrix used; the eigenvalues obtained can therefore be considered exact. We are, however, not concerned here with exact diagonalization, but we need the matrix elements $H_{rs}$ for our perturbation calculations.
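The matrix elements lend themselves to a quick sanity check by direct diagonalization of a truncated matrix; a sketch of equation \[equ.jevxxpj\] in units $\hbar=m=a=1$ (assumed; the truncation dimension `N` is likewise an illustrative choice):

```python
import numpy as np

def H_matrix(lam, N, hbar=1.0, m=1.0, a=1.0):
    """Truncated matrix of H in the particle-in-a-box basis, eq. [equ.jevxxpj]."""
    H = np.zeros((N, N))
    for r in range(N):
        # diagonal part: free spectrum plus the diagonal of the perturbation
        H[r, r] = (np.pi**2 * hbar**2 / (8*m*a**2)) * (r + 1)**2 \
            + (lam * a**4 / 5) * (1 - 20/(np.pi**2*(r+1)**2)
                                    + 120/(np.pi**4*(r+1)**4))
        for s in range(r + 2, N, 2):        # only equal-parity elements survive
            sgn = (-1.0)**((r + s)//2)
            H[r, s] = H[s, r] = \
                sgn*(16*lam*a**4/np.pi**2)*(1/(r-s)**2 - 1/(r+s+2)**2) \
              - sgn*(384*lam*a**4/np.pi**4)*(1/(r-s)**4 - 1/(r+s+2)**4)
    return H

levels = np.linalg.eigvalsh(H_matrix(lam=0.01, N=60))
```

At $\lambda=0$ this recovers the exact box spectrum, and for small $\lambda$ the ground state agrees with low-order perturbation theory, consistent with the claim that the eigenvalues become arbitrarily accurate as the matrix dimension grows.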
RS derivation of the approximate analytic expression for the energy eigenvalues =============================================================================== For $\lambda$ sufficiently small (see [@navarro] for a rigorous discussion of the convergence criteria), the oscillator potential $V(x)=\lambda x^4$ may be treated as a perturbation of the unperturbed Hamiltonian $T(x)=-(\hbar^2/2m)\,{\rm d}^2/{\rm d}x^2$ (the free particle in a box Hamiltonian). In the standard perturbation theory for states, the approximate energy eigenvalues of $H$, to second order in $\lambda$, are to be calculated from $E_r\approx E_r^{(0)}+E_r^{(1)}+E_r^{(2)}$. We have immediately that $$\label{equ.zdjzs3r} E_r^{(0)} = T_{rr} = \frac{{\pi ^2 \hbar ^2 }}{{8ma^2 }}(r + 1)^2=\varepsilon(r+1)^2$$ and $$\label{equ.g606140} E_r^{(1)} = V_{rr} = \frac{{\lambda a^4 }}{5}\left( {1 - \frac{{20}}{{\pi ^2 (r + 1)^2 }} + \frac{{120}}{{\pi ^4 (r + 1)^4 }}} \right)\,,$$ where $ \varepsilon=\pi ^2 \hbar ^2 /{8ma^2 }\,.$ The second order correction to the energy of the bounded quartic oscillator, $ E_r^{(2)}$, is given by $$\label{equ.kzjql8y} E_r^{(2)} = \sum_{\scriptstyle s = 0 \hfill \atop \scriptstyle s \ne r \hfill}^\infty {\frac{{V_{rs} V_{sr} }}{{\varepsilon _{rs} }}} = \sum_{s = 0}^{r - 1} {\frac{{V_{rs} V_{sr} }}{{\varepsilon _{rs} }}} + \sum_{s = r + 1}^\infty {\frac{{V_{rs} V_{sr} }}{{\varepsilon _{rs} }}}\,,$$ where $$\label{equ.izptbkw} \varepsilon _{rs} = \varepsilon _r - \varepsilon _s = \varepsilon (r + s + 2)(r - s)\,,$$ so that $$\label{equ.cxdserf} \frac{1}{{\varepsilon _{rs} }} = \frac{1}{{2\varepsilon (r + 1)}}\left[ {\frac{1}{{r - s}} + \frac{1}{{r + s + 2}}} \right]\,.$$ Since $V$ is a real symmetric matrix, is simply $$\label{equ.avhr7jt} E_r^{(2)} = \sum_{s = 0}^{r - 1} {\frac{{V_{rs}^2 }}{{\varepsilon _{rs} }}} + \sum_{s = r + 1}^\infty {\frac{{V_{rs}^2 }}{{\varepsilon _{rs} }}}\,.$$ We note that the matrix elements $V_{rs}$ occurring in  are necessarily off-diagonal (since $s\ne r$).
Furthermore the only surviving elements $V_{rs}$, according to , are those for which $r$ and $s$ are both odd or both even. It therefore follows from  that $$\label{equ.hnip2m6} \begin{split} V_{rs} &={( - 1)^{(r + s)/2} c_1\left( {\frac{1}{{(r - s )^2 }} - \frac{1}{{(r + s + 2)^2 }}} \right)} \\ &\qquad\qquad { - ( - 1)^{(r + s)/2} c_2\left( {\frac{1}{{(r - s )^4 }} - \frac{1}{{(r + s + 2)^4 }}} \right)}\,, \end{split}$$ where $$c_1=\frac{{16\lambda a^4 }}{{\pi ^2 }}=\frac{\pi^2}{24}c_2\,.$$ Taking  into account, the summand in  is therefore $$\label{equ.ifto5d5} \begin{split} \frac{{V_{rs} ^2 }}{{\varepsilon _{rs} }} &= \frac{1}{{2\varepsilon (r + 1)}}\left( {\frac{1}{{r - s}} + \frac{1}{{r + s + 2}}} \right)\\ &\quad \times \left( {\frac{1}{{(r - s)^2 }} - \frac{1}{{(r + s + 2)^2 }}} \right)^2\\ &\qquad \times \left( {c_1 - c_2 \left[ {\frac{1}{{(r - s)^2 }} + \frac{1}{{(r + s + 2)^2 }}} \right]} \right)^2\,. \end{split}$$ The sum in  is easier to evaluate if the energy eigenvalues are grouped by parity: $$E_{2r}^{(2)} = \sum_{s = 0}^{2r - 1} {\frac{{V_{2r,s}^2 }}{{\varepsilon _{2r,s} }}} + \sum_{s = 2r + 1}^\infty {\frac{{V_{2r,s}^2 }}{{\varepsilon _{2r,s} }}}$$ and $$E_{2r+1}^{(2)} = \sum_{s = 0}^{2r} {\frac{{V_{2r+1,s}^2 }}{{\varepsilon _{2r+1,s} }}} + \sum_{s = 2r + 2}^\infty {\frac{{V_{2r+1,s}^2 }}{{\varepsilon _{2r+1,s} }}}\,,$$ for quantum number $r=0,1,2,\ldots$ Using the summation identity (equation 2.6 of [@gould]) $$\sum_{k = q}^n {f_k } = \sum_{k = \left\lfloor {(q + 1)/2} \right\rfloor }^{\left\lfloor {n/2} \right\rfloor } {f_{2k} } + \sum_{k = \left\lfloor {(q + 2)/2} \right\rfloor }^{\left\lfloor {(n + 1)/2} \right\rfloor } {f_{2k-1} },\quad n\ge q+1\,,$$ where $\lfloor p\rfloor$ denotes the floor of $p$, that is, the greatest integer less than or equal to $p$, the above sums can be expressed as $$\label{equ.hxqejv7} E_{2r}^{(2)} = \sum_{s = 0}^{r - 1} {\frac{{V_{2r,2s}^2 }}{{\varepsilon _{2r,2s} }}} + \sum_{s = r + 1}^\infty {\frac{{V_{2r,2s}^2 
}}{{\varepsilon _{2r,2s} }}}$$ and $$\label{equ.i2rivoe} E_{2r + 1}^{(2)} = \sum_{s = 0}^{r - 1} {\frac{{V_{2r + 1,2s + 1}^2 }}{{\varepsilon _{2r + 1,2s + 1} }}} + \sum_{s = r + 1}^\infty {\frac{{V_{2r + 1,2s + 1}^2 }}{{\varepsilon _{2r + 1,2s + 1} }}}\,.$$ Maple is able to evaluate the sums in  and , with the appropriate summand in each case obtained from , and we have (see the Maple code in the appendix) $$\begin{split} E_{2r}^{(2)} &= \frac{{ma^{10} \lambda ^2 }}{{\hbar ^2 }}\left[ {\frac{{32}}{{225\pi ^2 (2r + 1)^2 }} - \frac{{1184}}{{35\pi ^4 (2r + 1)^4 }}} \right.\\ &\qquad\left. { + \frac{{10048}}{{5\pi ^6 (2r + 1)^6 }} - \frac{{44928}}{{\pi ^8 (2r + 1)^8 }} + \frac{{278784}}{{\pi ^{10} (2r + 1)^{10} }}} \right] \end{split}$$ and $$\begin{split} E_{2r + 1}^{(2)} &= \frac{{ma^{10} \lambda ^2 }}{{\hbar ^2 }}\left[ {\frac{8}{{225\pi ^2 (r + 1)^2 }} - \frac{{74}}{{35\pi ^4 (r + 1)^4 }}} \right.\\ &\qquad\left. { + \frac{{157}}{{5\pi ^6 (r + 1)^6 }} - \frac{{351}}{{2\pi ^8 (r + 1)^8 }} + \frac{{1089}}{{4\pi ^{10} (r + 1)^{10} }}} \right]\\ &\\ &= \frac{{ma^{10} \lambda ^2 }}{{\hbar ^2 }}\left[ {\frac{{32}}{{225\pi ^2 (2r + 2)^2 }} - \frac{{1184}}{{35\pi ^4 (2r + 2)^4 }}} \right.\\ &\qquad\left. { + \frac{{10048}}{{5\pi ^6 (2r + 2)^6 }} - \frac{{44928}}{{\pi ^8 (2r + 2)^8 }} + \frac{{278784}}{{\pi ^{10} (2r + 2)^{10} }}} \right]\,, \end{split}$$ from which it follows that $$\label{equ.aabmsh0} \begin{split} E_r^{(2)} &= \frac{{ma^{10} \lambda ^2 }}{{\hbar ^2 }}\left[ {\frac{{32}}{{225\pi ^2 (r + 1)^2 }} - \frac{{1184}}{{35\pi ^4 (r + 1)^4 }}} \right.\\ &\qquad\left. { + \frac{{10048}}{{5\pi ^6 (r + 1)^6 }} - \frac{{44928}}{{\pi ^8 (r + 1)^8 }} + \frac{{278784}}{{\pi ^{10} (r + 1)^{10} }}} \right]\,. 
\end{split}$$ Adding , and , we finally obtain $$\begin{split} E_r &\approx \frac{{\pi^2 \hbar^2 }}{{8ma^2 }}(r + 1)^2 + \frac{{\lambda a^4 }}{5}\left[ {1 - \frac{{20}}{{\pi^2 (r + 1)^2 }} + \frac{{120}}{{\pi^4 (r + 1)^4 }}} \right]\\ &\qquad + \frac{{32m\lambda ^2 a^{10} }}{{\hbar ^2 }}\left[ {\frac{1}{{225\pi^2 (r + 1)^2 }} - \frac{{37}}{{35\pi^4 (r + 1)^4 }}} \right.\\ &\qquad\qquad\qquad\qquad\left.{ + \frac{{314}}{{5\pi^6 (r + 1)^6 }}- \frac{{1404}}{{\pi^8 (r + 1)^8 }} + \frac{{8712}}{{\pi^{10} (r + 1)^{10} }}}\right]\,, \end{split}$$ as an approximate expression for the eigenvalues of the bounded quartic oscillator. Summary and conclusion ====================== Using the Rayleigh-Schrödinger perturbation theory and with the aid of a summation identity and the Computer Algebra System Maple, we have derived an approximate expression for the energy eigenvalues of the bounded quartic oscillator. This is the same expression that was obtained much earlier in reference [@fernandez] through a more complicated approach. Similar results to ours are also contained in reference [@navarro] where exact diagonalization was done and perturbative series up to the third order in $\lambda$ were also developed for the energy levels. However, the RS sums were determined numerically in that paper as, apparently, closed form could not be found for them; and furthermore only results for the ten lowest eigenvalues were computed. 
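The Maple-derived closed form for $E_r^{(2)}$ can also be verified by brute-force numerical summation of the perturbation series; a sketch for the ground state in units $\hbar=m=a=\lambda=1$ (assumed for illustration):

```python
import numpy as np

eps = np.pi**2 / 8                 # epsilon = pi^2 hbar^2 / (8 m a^2)
c1 = 16 / np.pi**2                 # c1 = 16 lam a^4 / pi^2
c2 = 24 * c1 / np.pi**2            # c2 = 384 lam a^4 / pi^4

def V(r, s):
    # Off-diagonal matrix element of the perturbation, eq. [equ.hnip2m6].
    sgn = (-1.0)**((r + s)//2)
    return sgn*c1*(1/(r-s)**2 - 1/(r+s+2)**2) \
         - sgn*c2*(1/(r-s)**4 - 1/(r+s+2)**4)

# Ground state (r = 0, even sector): sum over even intermediate states 2s,
# with epsilon_{rs} = eps*(r+s+2)*(r-s).
E2_num = sum(V(0, 2*s)**2 / (eps*(2*s + 2)*(0 - 2*s)) for s in range(1, 4000))

p = np.pi                          # closed form E_{2r}^{(2)} at r = 0
E2_closed = (32/(225*p**2) - 1184/(35*p**4) + 10048/(5*p**6)
             - 44928/p**8 + 278784/p**10)
```

The direct sum converges rapidly (the terms decay like $s^{-6}$) to the negative closed-form value, roughly $-6.1\times10^{-4}$ in these units.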
Appendix {#appendix .unnumbered}
========

Maple code to evaluate $E_r^{(2)}$
==================================

    > V:=(r,s)->(-1)^(r+s)*c1*(1/(r-s)^2-1/(r+s+2)^2)
        -(-1)^(r+s)*c2*(1/(r-s)^4-1/(r+s+2)^4):
    > epsilon:=(r,s)->epsilon*(r+s+2)*(r-s):
    > simplify(expand(eval(V(r,s)^2/epsilon(r,s),[r=2*r,s=2*s])))
        assuming r,posint,s,posint,c1>0,c2>0:
    > summand:=convert(%,parfrac,s):
    > S1:=sum(summand,s=0..r-1) assuming r,posint,s,posint,c1>0,c2>0:
    > S2:=sum(summand,s=r+1..infinity) assuming r,posint,s,posint,c1>0,c2>0:
    > ssum:=simplify(S1+S2):
    > fsum:=expand(eval(ssum,[epsilon=Pi^2*h^2/8/m/a^2,
        c1=16*lambda*a^4/Pi^2,c2=16*24*lambda*a^4/Pi^4])):
    > arranged:=collect(fsum,Pi):
    > terms:=[seq(factor(op(i,arranged)),i=1..nops(arranged))]:
    > add(L,L=terms);

[$${\frac{{32ma^{10} \lambda ^2 }}{{225\pi ^2 (2r + 1)^2 \hbar ^2 }} - \frac{{1184ma^{10} \lambda ^2 }}{{35\pi ^4 (2r + 1)^4 \hbar ^2 }} + \frac{{10048ma^{10} \lambda ^2 }}{{5\pi ^6 (2r + 1)^6 \hbar ^2 }} - \frac{{44928ma^{10} \lambda ^2 }}{{\pi ^8 (2r + 1)^8 \hbar ^2 }} + \frac{{278784ma^{10} \lambda ^2 }}{{\pi ^{10} (2r + 1)^{10} \hbar ^2 }}}$$ ]{}

[99]{}

R. BARAKAT AND R. ROSNER (1981), The bounded quartic oscillator, 83A (4):149–150.

F. M. FERNÁNDEZ and E. A. CASTRO (1982), An analytic approximate expression for the eigenvalues of the bounded quartic oscillator, 88A (1):4–6.

V. C. AGUILERA-NAVARRO, J. F. GOMES and A. H. ZIMERMAN (1983), On the quantum quartic oscillator in a box, 13 (4):664–672.

R. N. CHAUDHURI AND B. MUKHERJEE (1983), The eigenvalues of the bounded $\lambda x^{2m}$ oscillators, 16:3193–3196.

H. A. ALHENDI AND E. I. LASHIN (2005), Spectrum of one-dimensional anharmonic oscillators, 83:541–550.

H. W. GOULD (2011), Table for Fundamentals of Series: Part I: Basic properties of series and products, .

[^1]: Corresponding author: [email protected]

[^2]: PACS:03.65.Ge
--- abstract: 'We present a semi-Lagrangian scheme for the approximation of a class of Hamilton-Jacobi-Bellman equations on networks. The scheme is explicit and stable under some technical conditions. We prove a convergence theorem and some error estimates. Additionally, the theoretical results are validated by numerical tests. Finally, we apply the scheme to simulate traffic flow models.' author: - Elisabetta Carlini - Adriano Festa - Nicolas Forcadel bibliography: - 'hjnetworks.bib' date: 'Received: date / Accepted: date' title: 'A semi-Lagrangian scheme for Hamilton-Jacobi equations on networks with application to traffic flow models. ' --- Introduction ============ Attention to the study of linear and nonlinear partial differential equations on networks has risen consistently in recent decades, motivated by the extensive use of systems like roads, pipelines, and electronic and information networks. In particular, extensive literature has been developed for vehicular traffic systems modeled through conservation laws. Existence results can be found in [@garavello2006traffic], and some partial uniqueness results (for a limited number of intersecting roads) in [@garavello2007conservation; @andreianov2011theory]. Nonetheless, the lack of uniqueness at the junction point obliges one to add additional conditions that may be ambiguous or difficult to derive. More recently, other kinds of macroscopic models have appeared. These models rely on the Moskowitz function and lead to a Hamilton-Jacobi equation (see [@newell]). The theory of Hamilton-Jacobi (HJ) equations on networks is very recent.
It is difficult to extend the classic framework to the network context because these equations do not, in general, have regular solutions, and the notion of weak solution (*viscosity solution*) must be adapted to preserve some properties at the junction points. An additional difficulty comes from possible discontinuities in the data of the problem. Some theoretical results are contained in the early works [@achdou2013hamilton; @camilli2018flame; @imbert2013flux; @imbert2013hamilton; @schieborn2013viscosity] where, using some appropriate definitions of weak solutions, the authors prove the well-posedness of the problem. We also refer to the works [@barles2016flux; @LionsSouganidis] for simplified proofs of uniqueness. Concerning numerical schemes for this kind of equation, there are very few theoretical results. Let us mention the finite difference scheme proposed in [@camilli2018flame; @costeseque2015convergent] and the paper [@imbert2015error], in which the authors prove some error estimates for this scheme. In this paper, we adopt the notion of solution introduced in [@imbert2013flux], which has some good advantages in terms of generality, and we introduce a new numerical scheme for HJ equations on a network. For the sake of simplicity, we consider a simplified network (*a junction*), but the result can be extended to a more general class of problems, including more complex structures. We propose a semi-Lagrangian (SL) scheme by discretizing the *dynamic programming principle* presented in [@imbert2013flux]. This scheme generalizes the one introduced in [@camilli2013approximation], and it enables discrete characteristics to cross the junctions. This property makes the scheme absolutely stable, allowing large time steps, and it is the main advantage compared to finite difference and finite element schemes. We prove consistency and monotonicity, which imply the convergence of the scheme.
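To fix ideas, the dynamic-programming discretization behind a semi-Lagrangian scheme can be sketched on a single line for the model problem $u_t+|u_x|=0$ (this is a toy one-dimensional illustration, not the network scheme of this paper; the exact viscosity solution is $u(t,x)=\min_{|y-x|\le t}u_0(y)$):

```python
import numpy as np

# One SL step: u^{n+1}(x_j) = min over [x_j - dt, x_j + dt] of I[u^n],
# where I[u^n] is the piecewise-linear interpolant of the grid values
# (a direct discretization of the dynamic programming principle).
x = np.linspace(-2.0, 2.0, 401)
dt, T = 0.05, 1.0
u = np.abs(x)                                  # initial datum u0(x) = |x|

for n in range(int(round(T / dt))):
    unew = np.empty_like(u)
    for j, xj in enumerate(x):
        lo, hi = xj - dt, xj + dt
        # candidates: grid nodes inside the interval + interpolated endpoints
        cands = np.concatenate((u[(x > lo) & (x < hi)],
                                np.interp([lo, hi], x, u)))
        unew[j] = cands.min()
    u = unew

exact = np.maximum(np.abs(x) - T, 0.0)         # viscosity solution at t = T
```

Note that the time step `dt` is several grid spacings wide: the stability of the step does not rely on a CFL restriction, which is the "large time steps" feature mentioned above.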
We also derive error estimates for the numerical solution in two different cases: for state-independent Hamiltonians, where controls are constant along the arcs, and in a more general scenario. In the simplified case, we obtain a first-order convergence estimate. In the second case, the key result is a consistency estimate that leads to an *a priori* error estimate. The proof is obtained by combining some techniques derived from the papers on regional optimal control problems [@barles2013bellman; @barles2016flux]. #### [**Structure of the paper:**]{} in Section \[Sect:basics\] we recall some basic notions for junction networks, the definition of *flux-limited viscosity solutions* and the relation with optimal control problems on networks. In Section \[Sect:scheme\], we derive the SL scheme and prove its basic properties: consistency, monotonicity, and convergence. In Section \[sect:bounds\], we present the main result (Theorem \[teo:bounds2\]) concerning the error estimate. In Section \[Sect:traffic\], we discuss the connection between HJ equations and traffic flow models. Finally, in Section \[Sect:tests\], we show through numerical simulations the efficiency and the accuracy of our new method. Hamilton-Jacobi equations on networks {#Sect:basics} ===================================== A network is a domain composed of a finite number of nodes connected by a finite number of edges. To simplify the description of such a system, we focus on the case of a *junction*, which is a network composed of one node and a finite number of edges. We follow [@imbert2013flux] and the notations therein to describe the problem.\ Given a positive number $N$, a junction $J$ is a network of $N$ half lines $J_i:=\{k\,e_i, k\in{{\mathbb{R}}}^+\}$ (where each line is isometric to $[0,+\infty)$ and $e_i$ is a unitary vector centered in $0$) connected in a *junction point* that we conventionally place at the origin.
We then have $$J:=\bigcup_{i=1,...,N}J_i, \quad J_i\cap J_j =\{0\}, \quad\forall i\neq j,\quad i,j\in\{1,...,N\}.$$ ![Junction with $N=5$ edges.[]{data-label="figJun"}](junction.pdf){height="4.5cm"} We consider the geodesic distance function on $J$ given by $$d(x,y)=\left\{\begin{array}{lll} |x-y|, & & \hbox{if $x,y\in J_i$ for one $i\in\{1,...,N\}$},\\ |x|+|y|, & & \hbox{otherwise.} \end{array} \right.$$ For a real-valued function $u$ defined on $J$, $\partial_i u(x)$ denotes the (spatial) derivative of $u$ at $x \in J_i$ and the gradient of $u$ is defined as: $$u_x:=\left\{ \begin{array}{lll} \partial_i u(x) & & \hbox{if }x\in J_i\setminus \{0\},\\ \left(\partial_1 u(x),\partial_2 u(x),...,\partial_N u(x)\right) & &\hbox{if }x=0. \end{array} \right.$$ We can now describe our problem. Consider the following evolutive HJ equation on the network $J$ $$\label{eq:hjnet} \left\{ \begin{array}{ll} \partial_t u(t,x)+H_i(x,u_x(t,x))=0 & \hbox{ in } (0,T) \times J_i\setminus \{0\},\\ \partial_t u(t,x)+F_A(u_x(t,x))=0 & \hbox{ in } (0,T) \times \{0\}, \end{array} \right.$$ with the initial condition $$\label{eq:initial} u(0,x)=u_0(x)\quad \hbox{ for }x\in J,$$ where $u_0(x)$ is globally Lipschitz continuous on $J$. We suppose that standard assumptions on the Hamiltonian $H$ (cf. i.e. [@BardiCapuz@incollection]) hold: - (Regularity) for all $L>0$ there exists a modulus of continuity $\omega_L$ such that for all $|p|,|q|\leq L$ and $x\in J_i$ $$|H_i(x,p)-H_i(x,q)|\leq \omega_L(|p-q|);$$ in addition, $H(\cdot,p)$ is Lipschitz continuous w.r.t. the space variable. - (Uniform coercivity) $H_i(x,p)\rightarrow +\infty$ for $|p|\rightarrow +\infty$ uniformly for every $x\in J_i\cup \{+\infty\}$, $i=1,...,N$; - (Convexity) $\{H_i(x,\cdot) \leq \lambda \}$ is convex for every choice of $\lambda\in{{\mathbb{R}}}$ and a fixed $x\in J$. 
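The geodesic distance above admits a direct implementation once a point of $J$ is represented by its branch index and its coordinate along that half line; a sketch (the pair representation is an assumed convention for illustration):

```python
def junction_distance(p, q):
    """Geodesic distance on J; a point is (branch index i, coordinate k >= 0),
    with k = 0 meaning the junction point regardless of the branch label."""
    (i, kx), (j, ky) = p, q
    if i == j or kx == 0.0 or ky == 0.0:    # same half line, or at the node
        return abs(kx - ky)
    return kx + ky                          # shortest path passes through 0
```

The two cases mirror the definition of $d(x,y)$: points on the same edge are compared directly, while points on different edges are joined through the junction point.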
By the convexity and coercivity hypotheses, there exists $\hat p_i$ such that the Hamiltonian $H_i$ is non-increasing in $(-\infty,\hat p_i]$ and non-decreasing in $[\hat p_i,\infty)$. We introduce the (respectively) non-increasing and non-decreasing functions $$H^-_i(x,p):=\left\{ \begin{array}{ll} H_i(x,p)& \hbox{for }p\leq \hat p_i \\ H_i(x,\hat p_i)& \hbox{for }p > \hat p_i \end{array} \right. \hbox{ and } H^+_i(x,p):=\left\{ \begin{array}{ll} H_i(x,p)& \hbox{for }p\geq \hat p_i \\ H_i(x,\hat p_i)& \hbox{for }p < \hat p_i. \end{array} \right.$$ Given a parameter $A\in {{\mathbb{R}}}\cup \{-\infty\}$ (flux limiter), we define the operator $F_A:{{\mathbb{R}}}^N\rightarrow {{\mathbb{R}}}$ on the junction point as $$F_A(p):= \max\left(A,\max_{i=1,...,N}H^-_i(0,p_i)\right).$$ In order to introduce the notion of viscosity solution, we first define the class of test functions. For $T>0$, set $J_T=(0,T)\times J$. We define the class of test functions on $J_T$ and on $J$ as $$C^k(J_T)= \{\varphi \in C(J_T),\, \forall i=1,\dots,N,\, \varphi \in C^k((0,T)\times J_i)\},$$ $$C^k(J)=\{\varphi \in C(J), \, \forall i=1,\dots,N,\, \varphi \in C^k(J_i)\}.$$ We also recall the definition of upper and lower semi-continuous envelopes $u^*$ and $u_*$ of a (locally bounded) function $u$ defined on $[0, T ) \times J$, $$u^*(t, x) = \limsup_{(s,y)\rightarrow(t,x)} u(s, y) \quad \hbox{ and }\quad u_*(t, x) = \liminf _{(s,y)\rightarrow(t,x)} u(s, y).$$ We say that a test function $\varphi$ touches a function $u$ from below (respectively from above) at $(t, x)$ if $u-\varphi$ reaches a minimum (respectively maximum) at $(t, x)$ in a neighborhood of it. Assume that the Hamiltonians satisfy ([**H1**]{})-([**H3**]{}) and let $u :[0, T ) \times J \rightarrow {{\mathbb{R}}}$. - We say that $u$ is a flux-limited sub-solution (resp. flux-limited super-solution) of in $(0, T ) \times J$ if for every test function $\varphi\in C^1(J_T)$ touching $u^*$ from above (resp.
$u_*$ from below) at $(t_0, x_0)\in J_T$, we have $$\begin{array}{lll} \varphi_t(t_0 , x_0) + H_i (x_0,\varphi_x(t_0 , x_0)) \leq 0 & \quad \hbox{ (resp. $\geq 0$) } & \hbox{ if $x_0\in J_i$},\\ \varphi_t(t_0 , x_0) + F_A (\varphi_x(t_0 , x_0)) \leq 0 & \quad \hbox{ (resp. $\geq 0$)} \quad & \hbox{ if $x_0=0$}. \end{array}$$ - We say that $u$ is a flux-limited sub-solution (resp. flux-limited super-solution) of on $[0, T ) \times J$ if additionally $$u^*(0, x) \leq u_0 (x) \quad {\hbox{(resp. }} u_*(0, x) \geq u_0 (x)) {\hbox{ for all}} \; x \in J.$$ - We say that $u$ is a flux-limited solution if $u$ is both a flux-limited sub-solution and a flux-limited super-solution. Thanks to the work of Imbert and Monneau [@imbert2013flux], we have the following result which gives an equivalent definition of viscosity solutions for . We use this equivalent definition in particular in the definition of the consistency in Section \[Sect:scheme\]. \[th:1\] Let $\bar H^0 = \max_j \min_pH_j(p)$ and consider $A \in [\bar H^0, +\infty)$. Given solutions $p_{i}^{A} \in \mathbb{R} $ of $$\label{definitionEquivalenteDefDesP} H_i( p_i^{A} )= H_i^{+}( p_i^{A} )=A,$$ let us fix any time-independent test function $\phi^0(x)$ satisfying, for $i=1,\dots, N$, $$\begin{aligned} \nonumber \partial_{i}\phi^0 (0)= p_{i}^{A}. \end{aligned}$$ Given a function $u:(0,T)\times J\rightarrow \mathbb{R}$, the following properties hold true. i\) If $u$ is an upper semi-continuous sub-solution of with $A=\bar H^0$, for $x\neq 0$, satisfying $$\begin{aligned} \label{conditionDefEquivalentePBLimite} u(t,0)=\limsup_{(s,y)\rightarrow (t,0),\ y\in J_i\setminus\{0\}}u(s,y),\end{aligned}$$ then $u$ is a $\bar H^0$-flux limited sub-solution.
ii\) Given ${A}>\bar H^0$ and $t_0\in(0,T)$, if $u$ is an upper semi-continuous sub-solution of for $x\neq 0$, satisfying , and if for any test function $\varphi$ touching $u$ from above at $(t_0,0)$ with $$\begin{aligned} \label{definitionEquivalenteDefDeFonctionTestGlobale} \varphi(t,x)= \psi(t) + \phi^0(x), \end{aligned}$$ for some $\psi \in C^2\left((0,+\infty)\right)$, we have $$\begin{aligned} \nonumber \varphi_t + F_{A} \left( \varphi_x \right) \leq 0 \quad \mbox{at }(t_0,0), \end{aligned}$$ then $u$ is a ${A}$-flux limited sub-solution at $(t_0,0)$. iii\) Given $t_0\in(0,T)$, if $u$ is a lower semi-continuous super-solution of for $x\neq 0$ and if for any test function $\varphi$ satisfying (\[definitionEquivalenteDefDeFonctionTestGlobale\]) touching $u$ from below at $(t_0,0)$ we have $$\begin{aligned} \nonumber \varphi_t + F_{A} \left( \varphi_x \right) \geq 0 \quad \mbox{at }(t_0,0), \end{aligned}$$ then $u$ is a ${A}$-flux limited super-solution at $(t_0,0)$. \[thDefinitionEquivalenteSurETSousSolutionsJunction\] Optimal control interpretation and dynamic programming principles ----------------------------------------------------------------- We describe a natural application of equations to a finite-horizon optimal control problem on the network $J$.
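The Hamiltonians of the control interpretation below are Legendre transforms of the running costs, $H_i(x,p)=\sup_{\alpha}\{\alpha\, p - L_i(x,\alpha)\}$ (cf. Theorem \[hjcont\]). A minimal numerical sketch of this transform, assuming an illustrative quadratic cost $L(\alpha)=\alpha^2/2+2$ (for which $H(p)=p^2/2-2$ analytically); the control grid bounds are also assumptions, harmless here because the cost is coercive:

```python
import numpy as np

# Discrete Legendre transform H(p) = sup_a (a*p - L(a)) over a control grid.
# By coercivity of L, restricting to a bounded grid is harmless for moderate p.
def legendre(L, p, a_max=10.0, n=200001):
    a = np.linspace(-a_max, a_max, n)
    return float(np.max(a * p - L(a)))

L = lambda a: 0.5 * a**2 + 2.0   # illustrative running cost
# For this L, H(p) = p^2/2 - 2 analytically, e.g. H(1.0) = -1.5.
```

The bounded grid mirrors the argument of Prop. \[Hproperty\] below: for a coercive cost, the supremum is attained on a compact control interval.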
We recall several results contained in [@imbert2013flux] that are useful in the next sections.\ Let us define the set of admissible dynamics on the network $J$ connecting the point $(s,y)$ to $(t,x)$ as $$\Gamma_{s,y}^{t,x}:=\left\{ \begin{array}{ll} (X(\cdot),\alpha(\cdot))\in {\rm{Lip}}([s,t];J)\times L^\infty ([s,t];{{\mathbb{R}}}^{N+1})\\ \dot X(\tau)=\alpha(\tau), \quad \tau\in[s,t]\\ X(s)=y, \quad X(t)=x \end{array}\right\}.$$ We denote by $(\alpha_0,\alpha_1,...,\alpha_N)$ the $N+1$ components of the control function $\alpha:(0,T)\rightarrow{{\mathbb{R}}}^{N+1}$, where $\alpha_i(t)$ is the control function defined on the branch $J_i$ for $i=1,...,N$ and $\alpha_0(t)$ is the control function defined on the junction point.\ We define a *cost function* $$L(x,\alpha):= \left\{ \begin{array}{ll} L_i(x,\alpha_i) & \hbox{if }x\in J_i,\\ L_0(\alpha_0) & \hbox{if }x=0, \end{array}\right.$$ where, for $i=1,\dots,N$, we assume the following: - $L_i:{{\mathbb{R}}}^+\times{{\mathbb{R}}}\rightarrow {{\mathbb{R}}}$ are *strictly convex* (w.r.t. the second argument) and uniformly Lipschitz continuous functions, - $L_i$ are *strongly coercive* w.r.t. the control argument uniformly in $x$ ($L_i(x,\alpha_i)/|\alpha_i|\rightarrow +\infty$ for $|\alpha_i|\rightarrow+\infty$ uniformly in $x\in {{\mathbb{R}}}^+$). In addition, $L_0:{{\mathbb{R}}}\rightarrow {{\mathbb{R}}}$ is defined as $$L_0(\alpha_0):=\left\{ \begin{array}{ll} \bar L_0 &\quad \hbox{if }\alpha_0=0,\\ +\infty &\quad \hbox{otherwise,} \end{array}\right.$$ for a given $\bar L_0\in {{\mathbb{R}}}$. We define the [*value function*]{} of the optimal control problem as $$\label{eq:value} u(t,x)=\inf_{y\in J} \inf_{(X(\cdot),\alpha(\cdot))\in \Gamma^{t,x}_{0,y}}\left\{u_0(X(0))+\int_0^t L(X(\tau),\alpha(\tau))d\tau\right\}.$$ It has been proved in [@imbert2013flux] that the following *dynamic programming principle* (DPP) holds.
\[hjdpp\] For all $x\in J$, $t\in (0,T]$, $s\in [0,t)$, the value function $u$ defined in satisfies $$\label{eq:dyn} u(t,x)=\inf_{y\in J} \inf_{(X(\cdot),\alpha(\cdot))\in \Gamma^{t,x}_{s,y}}\left\{u(s,X(s))+\int_s^t L(X(\tau),\alpha(\tau))d\tau\right\}.$$ A direct approximation of the DPP is the basis for the scheme which we describe in the next section.\ The following theorem characterizes the value function as the solution of an HJ equation (for the proof see [@imbert2013flux]). \[hjcont\] The value function $u$ defined in is the unique viscosity solution of with $$\label{hamilt} H_i(x,p):= \sup_{\alpha_i\in {{\mathbb{R}}}} \left\{ \alpha_i \,p - L_i(x,\alpha_i)\right\}, \hbox{ and } A=- \bar L_0.$$ \[Hproperty\] Under assumptions [(**[A1]{})**]{}-[(**[A2]{})**]{} the following assertions hold true: 1. for every $x\in{{\mathbb{R}}}^+\cup \{+\infty\}$, the set $\arg\sup_{\alpha_i\in {{\mathbb{R}}}} \{\alpha_i p -L_i(x,\alpha_i)\}$ is bounded; 2. the non-increasing part of $H_i(x,p)$ with respect to $p_i$ is given by $$H^-_i(x,p_i)=\sup_{\alpha_i\leq 0}\{\alpha_i p_i -L_i(x,\alpha_i)\}.$$ Furthermore, the Hamiltonian satisfies properties $({\bf{H1}})-({\bf{H3}})$. From assumptions [(**[A1]{})**]{}-[(**[A2]{})**]{}, $\alpha_i p-L_i(x,\alpha_i)$ is a continuous, (negatively) coercive function of $\alpha_i$; therefore there exists a compact interval $[-\mu,\mu]$, $\mu\in{{\mathbb{R}}}$, such that $$\sup_{\alpha_i\in {{\mathbb{R}}}} \left\{ \alpha_i \,p - L_i(x,\alpha_i)\right\}=\sup_{\alpha_i\in [-\mu,\mu]} \left\{ \alpha_i \,p - L_i(x,\alpha_i)\right\}.$$ Then, $i)$ holds. Assertion $ii)$ follows from Lemma 6.2 in [@imbert2013flux].\ From $i)$, we have $$H_i(x,p)-H_i(x,q)\leq |\bar \alpha| \,|p-q|$$ where $\bar \alpha$ is a maximizer in the definition of $H_i(x,p)$.
Exchanging the roles of $p$ and $q$, we get ([**[H1]{}**]{}).\ Taking $\alpha=1$ in we have $$H_i(x,p)\geq p-L_i(x,1).$$ The same argument for $\alpha=-1$ gives $H_i(x,p)\rightarrow +\infty$ for $|p|\rightarrow +\infty$; then ([**[H2]{}**]{}) holds. Finally, ([**[H3]{}**]{}) holds since $H_i$ is a supremum of affine functions of $p$, hence convex. Numerical resolution: a semi-Lagrangian scheme {#Sect:scheme} ============================================== Let us introduce a uniform discretization of the network $(0,T)\times J$. The choice of a uniform discretization is not restrictive, and the scheme can be easily extended to non-uniform grids. Given $\Delta t$ and $\Delta x$ in ${{\mathbb{R}}}^+$, we define $\Delta=(\Delta x, \Delta t)$, $N_T=\floor{ T/\Delta t}$ (where $\floor{\cdot}$ denotes the floor operator) and $$\mathcal G^{\Delta} :=\{t_n: n=0,\dots, N_T \}\times J^{\Delta x}$$ where $$J^{\Delta x}:=\bigcup_{i=1,...,N} J_i^{\Delta x}, \quad J_i^{\Delta x}=\{k\Delta x\,e_i:k\in\NN\}.$$ We set $t_{n}=n\Delta t$ for $n=0,\dots, N_T$ and we derive a discrete version of the dynamic programming principle defined on the grid $ \mathcal G^{\Delta} $. To do so, as usual in first-order SL schemes, we discretize the trajectories in $\Gamma^{t_{n+1},x}_{t_{n},y}$ by one step of the Euler scheme. For $i\in \{1,\dots,N\}$, let $x\in J_i$ and let $\alpha \in {{\mathbb{R}}}^{N+1}$ be such that $\alpha_i \Delta t\leq |x|$; then the approximate trajectory satisfies $x\simeq y+\alpha_i\Delta t\, e_i$. In this case, the discrete backward trajectory $x-\Delta t \alpha_i e_i$ remains on $J_i$, and, by also applying a rectangle quadrature formula, the discrete version of at the point $(t_{n+1},x)$ is $$u(t_{n+1},x)\simeq u(t_n,x-\alpha_i\Delta t e_i)+\Delta t L_i(x,\alpha_i).$$ In the opposite case $\alpha_i \Delta t> |x|$, the discrete trajectory passes through the junction.
Denoting by $ s_0\in [0,\Delta t-\frac{|x|}{\alpha_i}]$ the time spent by the trajectory at the junction point, by $J_j$ the arc from which the trajectory comes and by $\hat t:=\left(\Delta t- s_0-\frac{|x|}{\alpha_i}\right)$ the time spent by the trajectory on the arc $J_j$, the approximation of at the point $(t_{n+1},x)$ becomes $$u(t_{n+1},x)\simeq u\left(t_n,-\alpha_j \hat t e_j\right) +\hat t\, L_j(0,\alpha_j)+ s_0 L_0(\alpha_0)+\frac{|x|}{\alpha_i}L_i(x,\alpha_i).$$ We call $B(J^{\Delta x})$ and $B(\G^{\Delta })$ the spaces of bounded functions defined respectively on $J^{\Delta x}$ and on $\G^{\Delta }$. Since the feet of the discrete trajectories are, in general, not grid nodes, we approximate the values of the solution there by a piecewise linear Lagrange interpolation $\mathbb I[\hat u](z)$, where $u\in B(J^{\Delta x})$ and $z\in J$. The basic properties of the interpolation operator are summarized in the following lemma (for the proof see for instance [@QSS07]). \[inter\] Given the *piecewise linear interpolation* operator $\I:B(J^{\Delta x})\times J\rightarrow {{\mathbb{R}}}$ and a function $\varphi \in C(J)$, we denote by $\hat{\varphi}$ the collection of values $\{\varphi(x_k)\}_{x_k\in J^{\Delta x}}$.
We have the following properties: - (Monotonicity) If $\psi \in C(J)$ is such that $\psi(x)\leq \varphi(x)$ for every $x\in J^{\Delta x}$ then $$\I[\hat \psi](x)\leq \I[\hat \varphi](x), \quad \hbox{for all }x\in J.$$ - (Polynomial base) There exists a set $\{\phi_i\}_{i\in I}$ (where $I$ is the index set of the elements of $J^{\Delta x}$) of Lagrange basis functions [@QSS07], such that $\phi_i(x_k)=\delta_{i,k}$ where $\delta$ is the Kronecker symbol, and $$\I[\hat \varphi](x)= \sum_{i\in I} \varphi(x_i)\phi_i(x), \quad \hbox{for all }x\in J.$$ - (Error estimate) If $\varphi \in W^{s,\infty}(J)$ with $s=1,2$, then there exists a constant $C>0$ such that $$|\I[\hat \varphi](x)-\varphi(x)|\leq C \Delta x^s,\quad \hbox{for all } x \in J.$$ - (Error estimate, smooth case) If $\varphi \in C^2(J)$ then $$\label{interperr} |\I[\hat \varphi](x)-\varphi(x)|\leq \frac{1}{2}\sup_{\xi\in [a,b]}|\varphi_{xx}(\xi)|\;\left|\prod_{i=1}^2 (x-x_i)\right|,$$ for all $x \in [x_1=a,x_2=b]\subset J$. We finally define a fully discrete numerical operator $S:B(J^{\Delta x})\times J^{\Delta x}\to {{\mathbb{R}}}$ as, if $x\in J_i$ $$\label{eq:slscheme2} S[\hat v](x):=\min \left\{ \begin{array}{ll} \inf\limits_{\alpha,\alpha_i<\frac{|x|}{\Delta t}}\mathbb I[\hat v](x-\alpha_i\Delta t e_i)+\Delta t L_i(x,\alpha_i),\\ \inf\limits_{\alpha,\alpha_i\geq \frac{|x|}{\Delta t}}\inf\limits_{ s_0\in[0,\Delta t-\frac{|x|}{\alpha_i}]} \min\limits_{j,\alpha_j\leq 0}\left\{\mathbb I[\hat v]\left(-\left(\Delta t- s_0-\frac{|x|}{\alpha_i}\right)\alpha_j e_j\right)\right.\\ \left.\phantom{tu} +\left(\Delta t- s_0-\frac{|x|}{\alpha_i}\right)L_j(0,\alpha_j)+ s_0 L_0(\alpha_0)+\frac{|x|}{\alpha_i}L_i(x,\alpha_i)\right\}, \end{array}\right.$$ and, if $x=0$ $$\begin{gathered} S[\hat v](x):= \inf\limits_{\alpha,j,\alpha_j\leq 0}\inf\limits_{ s_0\in[0,\Delta t]} \left\{\mathbb I[\hat v]\left(-\left(\Delta t- s_0\right)\alpha_j e_j\right)\right.
\\ \left.+\left(\Delta t- s_0\right)L_j(0,\alpha_j)+ s_0 L_0(\alpha_0)\right\}.\end{gathered}$$ We define recursively the discrete solution $w\in B(\G^{\Delta })$ as $$\label{eq:slscheme} w(t_{n+1},x)=S[\hat w^n](x),\quad n=0,\dots,N_T-1,\; x\in J^{\Delta x}$$ where $w^n:=\{w(t_n,x)\}_{x\in J^{\Delta x}}$ for $n=0,\dots,N_T-1$ and $w^0=\{u_0(x)\}_{x\in J^{\Delta x}}$. Next, we prove some basic properties satisfied by the scheme , assuming that assumptions ([**[A1]{}**]{})-([**[A2]{}**]{}) hold. \[monotonicity\] The numerical scheme is monotone, i.e. given two discrete functions $v_1, v_2\in B(J^{\Delta x})$ such that $v_1\leq v_2$ we have $$S[\hat v_1](x)\leq S[\hat v_2](x), \quad \forall x\in J^{\Delta x}.$$ Let us fix $x\in J^{\Delta x}_i$. We assume that the optimal trajectory relative to $v_1$ passes through the junction while the one relative to $v_2$ does not; the other cases are easier and can be treated in a similar way. Let us call $\hat \alpha_i$ the optimal control relative to $v_2$. The optimal controls are bounded since Prop. \[Hproperty\] holds. Since $\hat\alpha_i$ is an admissible (in general suboptimal) control for $S[\hat v_1](x)$, and since the interpolation operator is monotone (Lemma \[inter\]), we have $$\begin{split} S[\hat v_1](x)\leq \mathbb I[\hat v_1](x-\hat\alpha_i\Delta t e_i)+\Delta t L_i(x,\hat\alpha_i) \leq \mathbb I[\hat v_2](x-\hat\alpha_i\Delta t e_i)+\Delta t L_i(x,\hat\alpha_i) = S[\hat v_2](x). \end{split}$$ \[lipw\] Let $w(t_n,x)$ be a solution of .
If $u_0$ is uniformly Lipschitz continuous, then for $x,y\in J^{{{\Delta x}}}$ there exists a $C>0$ such that $$\left|w(t_n,x)-w(t_n,y)\right|\leq C\, (\Delta t+ d(x,y)), \quad n=0,\dots,N_T.$$ In this proof, we denote by $C$ a universal constant that depends only on $L_i$ and that may change from line to line, and by $L_f$ the Lipschitz constant of a generic function $f$.\ Let us first assume that $x,y\in J_i\cap J^{\Delta x} $. This is not restrictive since, if $x\in J_j\cap J^{\Delta x}, y\in J_i \cap J^{\Delta x}$ with $j\neq i$, we reduce to comparisons between points belonging to the same arc by writing $$\left|w(t_{n},x)- w(t_{n},y)\right|\leq \left|w(t_{n},x)- w(t_{n},0)\right| + \left|w(t_{n},0)- w(t_{n},y)\right|.$$ We call $\bar \alpha_i$ the optimal control of $S[w^{n-1}](y)$ associated with the arc $J_i$. We consider three different cases: 1. $\bar \alpha_i< |y|/{{\Delta t}}$ with $y\ne 0$. In this case, we consider $\alpha_i$ such that $$x-\Delta t \alpha_i e_i=y-\Delta t\bar \a_i e_i.$$ This means in particular that $$\label{eq:100} |\a_i-\bar \a_i|=\frac {|x-y|}{\Delta t}.$$ Using the suboptimal control $\a_i$ for $S[\hat w^{n-1}](x)$ yields $$\begin{gathered} w(t_{n},x)-w(t_{n},y) \leq \I [\hat w^{n-1}]\left(x-\alpha_i \Delta t e_i\right) + \Delta t L_i(x,\alpha_i)\\ - \I [\hat w^{n-1}]\left(y-\bar \alpha_i \Delta t e_i\right) -\Delta t L_i(y,\bar \alpha_i) \le \Delta t L_{L_i}|\a_i-\bar \a_i|\le L_{L_i}|x-y|. \end{gathered}$$ 2. $0<\frac {|y|}{\Delta t}\le \bar \a_i$. This means in particular that the discrete trajectory starting from $y$ passes through the junction. We denote by $(\bar \a_i,\bar s_0,\bar \a_0,\bar j,\bar \a_{\bar j})$ the optimal control associated with $S[\hat w^{n-1}](y)$. We distinguish two sub-cases: - $x=0$.
In this case, we choose the suboptimal control $(\bar s_0+\frac {|y|}{\bar \alpha_i},\bar \a_0,\bar j,\bar \a_{\bar j})$ (if $\bar \a_0\ne 0$, we replace it by $0$ in order to stay in the origin) and get $$\begin{aligned} \label{eq:101} &w(t_{n},x)-w(t_{n},y) \nonumber\\ \le &\I [\hat w^{n-1}]\left(-\left({{\Delta t}}-\bar s_0-\frac {|y|}{\bar \alpha_i}\right)\bar \alpha_{\bar j} e_{\bar j}\right) +\left({{\Delta t}}-\bar s_0-\frac {|y|}{\bar \alpha_i}\right)L_{\bar j}(0,\bar \alpha_{\bar j})\nonumber\\ & +\left(\bar s_0+\frac {|y|}{\bar \alpha_i}\right) L_0(\bar \alpha_0) -\I [\hat w^{n-1}]\left(-\left(\Delta t-\bar s_0-\frac {|y|}{\bar \a_i}\right)\bar \a_{\bar j} e_{\bar j}\right)\nonumber\\ &-\left(\Delta t-\bar s_0-\frac {|y|}{\bar \alpha_i}\right)L_{\bar j}(0,\bar \a_{\bar j})\ -\bar s_0 L_0(\bar \alpha_0)-\frac{|y|}{\bar \a_i}L_i(y,\bar \a_i)\nonumber\\ \le & \frac {|y|}{\bar \alpha_i}\left(L_0(\bar \a_0)-L_i(y,\bar \a_i)\right). \end{aligned}$$ If $\bar \a_i\ge 1$, using that $L_i$ is Lipschitz continuous, we get that there exists a constant $C$ (depending only on $L_i(y,0)$ and the Lipschitz constant of $L_i$) such that $$\frac{|L_i(y,\bar \a_i)|}{|\bar \a_i|}\le C.$$ Injecting the estimate above in and using that $L_0(0)$ is bounded, we get $$w(t_{n},x)-w(t_{n},y)\le C|y|=C d(x,y).$$ If $\bar \a_i\le 1$, since $L_0(0)$ and $L_i(\bar \a_i)$ are bounded, we get that there exists a constant $C$ such that $$w(t_{n},x)-w(t_{n},y)\le C\frac {|y|}{\bar \alpha_i}\le C \Delta t.$$ We finally get that in all the cases, $$w(t_{n},x)-w(t_{n},y)\le C \left(\Delta t+d(x,y)\right).$$ - $|x|>0$. In this case, we choose $\a_i$ such that $\frac{|x|}{\a_i}=\frac{|y|}{\bar \a_i}$. 
This implies in particular that $$x-\frac {|x|}{\a_i} \a_i e_i=y-\frac {|x|}{\a_i} \bar \a_ie_i$$ and so $$\frac {|x|}{\a_i} |\a_i-\bar \a_i|=|x-y|=d(x,y).$$ Using the suboptimal control $(\a_i,\bar s_0,\bar \a_0,\bar j,\bar \a_{\bar j})$ for $S[w^{n-1}](x)$, we get $$\begin{gathered} w(t_{n},x)-w(t_{n},y) \nonumber\\ \le \I [\hat w^{n-1}]\left(-\left({{\Delta t}}-\bar s_0-\frac {|x|}{\alpha_i}\right)\bar \alpha_{\bar j} e_{\bar j}\right) +\left({{\Delta t}}-\bar s_0-\frac {|x|}{\alpha_i}\right)L_{\bar j}(0,\bar \alpha_{\bar j})\\ +\bar s_0 L_0(\bar \alpha_0) +\frac {|x|}{\a_i}L_i(x,\a_i) -\I [\hat w^{n-1}]\left(-\left(\Delta t-\bar s_0-\frac {|y|}{\bar \a_i}\right)\bar \a_{\bar j} e_{\bar j}\right)\nonumber\\ -\left(\Delta t-\bar s_0-\frac {|y|}{\bar \alpha_i}\right)L_{\bar j}(0,\bar \a_{\bar j})-\bar s_0 L_0(\bar \alpha_0)-\frac{|y|}{\bar \a_i}L_i(y,\bar \a_i)\nonumber\\ \le L_{L_i}\frac {|x|}{\alpha_i}|\a_i-\bar \a_i| \le L_{L_i} d(x,y). \end{gathered}$$ 3. $y=0$. We denote by $(\bar s_0,\bar \a_0,\bar j,\bar \a_{\bar j})$ the optimal control associated with the operator $S[w^{n-1}](y)$. We distinguish two sub-cases again: - $\bar s_0=\Delta t$. We choose $\a_i\ge \max(1,\frac {|x|}{\Delta t})$ and the suboptimal control $(\a_i,\bar s_0-\frac {|x|}{\a_i},\bar \a_0)$ for $S[w^{n-1}](x)$. We then get $$\begin{gathered} w(t_{n},x)-w(t_{n},y) \nonumber\\ \le \I [\hat w^{n-1}]\left(0\right) +\left(\bar s_0-\frac{|x|}{\a_i}\right) L_0(\bar \alpha_0)+\frac {|x|}{\a_i}L_i(x,\a_i) -\I [\hat w^{n-1}]\left(0\right)\\-\bar s_0 L_0(\bar \alpha_0) \le \frac {|x|}{\a_i}\left(L_i(x,\a_i)-L_0(\bar \a_0) \right). \end{gathered}$$ Using that $L_i$ is Lipschitz continuous, we get that there exists a constant $C$ (depending only on $L_i(\cdot,0), L_0(0)$ and on the Lipschitz constant of $L_i$) such that $$\frac{|L_i(x,\a_i)|+|L_0(\bar \a_0)|}{\a_i}\le C.$$ This implies that $$w(t_{n},x)-w(t_{n},y)\le C|x|=Cd(x,y).$$ - $\bar s_0<\Delta t$.
We choose $\a_i\ge \max(1,|\bar \a_{\bar j}|)$ such that $$\label{eq:102} \frac {|x|}{\a_i}\le \frac{\Delta t-\bar s_0} 2\quad{\rm and}\quad \frac{\Delta t-\bar s_0}{\Delta t-\bar s_0-\frac{|x|}{\a_i}}|\bar \a_{\bar j}|\le \a_i.$$ We also set $$\a_{\bar j}=\frac{\Delta t-\bar s_0}{\Delta t-\bar s_0-\frac{|x|}{\a_i}}\bar \a_{\bar j},$$ which satisfies in particular $\a_i\ge |\a_{\bar j}|.$ Taking the suboptimal control $(\a_i,\bar s_0,\bar \a_0,\bar j,\a_{\bar j})$ for $S[w^{n-1}](x)$, we get $$\begin{gathered} w(t_{n},x)-w(t_{n},y) \le \I [\hat w^{n-1}]\left(-\left({{\Delta t}}-\bar s_0-\frac {|x|}{\alpha_i}\right) \alpha_{\bar j} e_{\bar j}\right)\\ +\left({{\Delta t}}-\bar s_0-\frac {|x|}{\alpha_i}\right)L_{\bar j}( 0,\alpha_{\bar j}) +\bar s_0 L_0(\bar \alpha_0) +\frac {|x|}{\a_i}L_i(x,\a_i) \\ -\I [\hat w^{n-1}]\left(-\left(\Delta t-\bar s_0\right)\bar \a_{\bar j} e_{\bar j}\right)-\left(\Delta t-\bar s_0\right)L_{\bar j}(0,\bar \a_{\bar j})-\bar s_0 L_0(\bar \alpha_0)\\ \le \frac {|x|}{\alpha_i}\left(L_i(x,\a_i)-L_{\bar j}(0,\a_{\bar j}) \right) + (\Delta t-\bar s_0)\left(L_{\bar j}(0,\a_{\bar j})-L_{\bar j}(0,\bar \a_{\bar j})\right)\\ \le\frac {|x|}{\alpha_i}\left(L_i(x,\a_i)-L_{\bar j}(0,\a_{\bar j}) \right) + (\Delta t-\bar s_0)L_{L_{\bar j}}|\a_{\bar j}- \bar \a_{\bar j}|.\label{eq:103} \end{gathered}$$ Using that $\alpha_i\ge 1$, we get that $\frac {|L_i(x,\a_i)|}{\a_i}\le C$.
In the same way (using that $\a_i\ge |\a_{\bar j}|$) $$\frac{|L_{\bar j}(0,\a_{\bar j})|}{\a_i}\le \frac 1{\a_i}\left(L_{\bar j}(0,0)+L_{L_{\bar j}} |\a_{\bar j}|\right)\le L_{\bar j}(0,0)+ L_{L_{\bar j}} \frac{|\a_{\bar j}|}{\a_i}\le C.$$ Finally, using the definition of $\a_{\bar j}$, we get $$(\Delta t-\bar s_0) |\a_{\bar j}- \bar \a_{\bar j}|=|x|\frac {|\a_{\bar j}|}{\a_i}\le |x|.$$ Injecting these estimates in , we arrive at $$w(t_{n},x)-w(t_{n},y) \le C|x|=Cd(x,y).$$ \[stability\] Let $w(t_n,x)$ be a solution of , then there is a positive constant $K$ such that for any $(t_n,x)\in \mathcal{G}^{\Delta }$ $$|w(t_n,x)-u_0(x)|\leq K t_n .$$ Let $K\in {{\mathbb{R}}}$ be such that $$K\geq \max \left\{ \sup_{x\in J^{\Delta x}}\frac{[S[\hat u^0](x)-u_0(x)]^+}{{{\Delta t}}},\sup_{x\in J^{\Delta x}}\frac{[u_0(x)-S[\hat u^0](x)]^+}{{{\Delta t}}}\right\},$$ where $u^0:=\{u_0(x)\}_{x\in J^{\Delta x}}$. The discrete function $\overline{u}(x,t_n):=u_0(x)+K t_n$ is a discrete super-solution, i.e. $\overline{u}(x,t_{n+1})\geq S[\hat {\overline{u}}(\cdot,t_n)](x)$ for all $(x,t_n)\in \mathcal{G}^{\Delta }$. In fact, since for all $(x,t_n)\in \mathcal{G}^{\Delta }$ $$K{{\Delta t}}+K t_n\geq \sup_{x\in J^{\Delta x}} [S[\hat u^0](x)-u_0(x)]^++K t_n\geq S[\hat u^0](x)-u_0(x)+K t_n,$$ we have $$\overline{u}(x,t_{n+1})=u_0(x)+K t_{n+1}\geq u_0(x)+S[\hat u^0](x)-u_0(x)+K t_n= S[\hat {\overline{u}}(\cdot,t_n)](x).$$ The discrete function $\underline{u}(x,t_n):=u_0(x)-K t_n$ is a discrete sub-solution, i.e.
$$\underline{u}(x,t_{n+1})\leq S[\hat {\underline{u}}(\cdot,t_n)](x), \hbox{for all }(x,t_n)\in \mathcal{G}^{\Delta }.$$ In fact, since for all $(x,t_n)\in \mathcal{G}^{\Delta }$ $$K{{\Delta t}}+K t_n\geq \sup_{x\in J^{\Delta x}} [u_0(x)-S[\hat u^0](x)]^++K t_n\geq u_0(x)-S[\hat u^0](x)+K t_n,$$ we have $$\underline{u}(x,t_{n+1})=u_0(x)-K t_{n+1}\leq u_0(x)-(u_0(x)-S[\hat u^0](x)+K t_n)= S[\hat {\underline{u}}(\cdot,t_n)](x).$$ By monotonicity, we have that $\underline{u}(x,t_n)\leq w(x,t_n)\leq \overline{u}(x,t_n)$ for any $(x,t_n)\in \mathcal{G}^{\Delta }$ and this implies the conclusion. By Prop. \[stability\], the solution $w$ of is bounded, and then the discrete problem is well posed. We observe also that the same argument of Proposition \[Hproperty\] (based on **(A2)**) can be used to prove that the control $\alpha$ in is bounded. We call $$\label{CFL} \mu=\sup\limits_{(x,t)\in J\times(0,T]}\max\limits_{i=1,...,n}|\alpha_i^*|,$$ the maximal absolute value of the optimal control. \[consisrates\] Given $\Delta t>0$ and $\Delta x>0$, let us assume $$\label{CFLg} {\mu}\frac{ \Delta t}{\Delta x}\le 1$$ (with $\mu$ as in ). Then, for any $\varphi \in C^2(J)$, the following consistency error estimates hold for the scheme : $$\begin{aligned} &i)\ \forall x\in J_i^{{{\Delta x}}}\setminus\{0\}, \quad \left|\frac{\varphi(x) - S[\hat \varphi](x)}{{{\Delta t}}}-H_i(\varphi_x(x))\right|\leq K\|\varphi_{xx}\|_\infty\left(\frac{{{\Delta x}}^2}{{{\Delta t}}}+{{\Delta t}}\right),\label{cons31}\\ &ii)\ \hbox{for } x=0, \quad \left|\frac{\varphi(0) - S[\hat \varphi](0)}{{{\Delta t}}}- F_{A}(\varphi_{x}(0))\right|\leq K\|\varphi_{xx}\|_\infty\left(\frac{{{\Delta x}}^2}{{{\Delta t}}} +{{\Delta t}}\right),\label{cons3}\end{aligned}$$ where $K$ is a positive constant. $i)$ Let $x\in J_i$.
We remark that the condition implies in particular that the scheme reads $$\begin{aligned} S[\hat \varphi](x)=& \inf_{\alpha_i< \frac{|x|}{{{\Delta t}}} }(\I[\hat \varphi](x-{{\Delta t}}\alpha_i e_i )+{{\Delta t}}L_i(x,\alpha_i) )\\ =& \inf_{\alpha_i\in {{\mathbb{R}}}}(\I[\hat \varphi](x-{{\Delta t}}\alpha_i e_i )+{{\Delta t}}L_i(x,\alpha_i) ).\end{aligned}$$ By and by a Taylor expansion we have $$\I[\hat \varphi](x-{{\Delta t}}\alpha_i e_i )=\varphi(x)-{{\Delta t}}\alpha_i \partial_i \varphi(x)+\mathcal O({{\Delta t}}^2+\Delta x^2),$$ then $$\begin{aligned} \frac{\varphi(x) - S[\hat \varphi](x)}{{{\Delta t}}}=&-\inf_{\alpha_i\in {{\mathbb{R}}}}\left(-\alpha_i \partial_i \varphi(x)+ L_i(x,\alpha_i) \right)+\mathcal O \left({{\Delta t}}+\frac{{{\Delta x}}^2}{{{\Delta t}}}\right)\\ =&\sup_{\alpha_i\in {{\mathbb{R}}}}\left(\alpha_i \partial_i \varphi(x)- L_i(x,\alpha_i) \right)+\mathcal O\left({{\Delta t}}+\frac{{{\Delta x}}^2}{{{\Delta t}}}\right)\\ =&H_i( \varphi_x(x))+\mathcal O\left(\frac{{{\Delta x}}^2}{{{\Delta t}}}+{{\Delta t}}\right).\end{aligned}$$ $ii)$ Let $x=0$. In this case $$S[\hat \varphi](0)= \inf_{s_0 \in [0,{{\Delta t}}] }\min_{j, \alpha_j\leq 0 } (\I[\hat \varphi](-({{\Delta t}}- s_0) \alpha_j e_j )+({{\Delta t}}- s_0) L_j(0,\alpha_j) + s_0 L_0(\alpha_0)).$$ Let us define $K_{{{\Delta t}}}:=\frac{s_0}{{{\Delta t}}}$; since $s_0\in [0,{{\Delta t}}]$, we have $K_{{{\Delta t}}}\in[0,1]$. Again by Taylor expansion, by Prop.
\[Hproperty\] and by , we have $$\begin{aligned} &\frac{\varphi(0) - S[\hat \varphi](0)}{{{\Delta t}}} +\mathcal O \left({{\Delta t}}+\frac{{{\Delta x}}^2}{{{\Delta t}}}\right)\\ =&- \inf_{{{K_{{{\Delta t}}} \in [0,1]}} }\min_{j, \alpha_j\leq 0 }\left(-(1-K_{{{\Delta t}}})\alpha_j \partial_j \varphi(0)+(1-K_{{{\Delta t}}}) L_j(0,\alpha_j)+K_{{{\Delta t}}}L_0(\alpha_0) \right) \\ =&- \inf_{K_{{{\Delta t}}} \in [0,1] }\left[(1-K_{{{\Delta t}}})\underset{ j ,\alpha_j\leq 0}\min\left(-\alpha_j \partial_j \varphi(0)+ L_j(0,\alpha_j)\right)+K_{{{\Delta t}}}\min_{\alpha_0}\left( L_0(\alpha_0) \right)\right]\\ =&\sup_{K_{{{\Delta t}}} \in [0,1] }\left[(1-K_{{{\Delta t}}})\underset{ j ,\alpha_j\leq 0}\max\left(\alpha_j \partial_j \varphi(0)- L_j(0,\alpha_j)\right)+K_{{{\Delta t}}}\max_{\alpha_0}\left(- L_0(\alpha_0) \right)\right] \\ =&\sup_{K_{{{\Delta t}}} \in [0,1] }\left\{(1-K_{{{\Delta t}}})\,\underset{ j }\max{\, H_j^-(\partial_j \varphi(0))}+K_{{{\Delta t}}}A \right\} \\ =&\max\left(\underset{ j }\max\, {H_j^-(\partial_j \varphi(0))},A\right)=F_{A}(\varphi_{x}(0)).\end{aligned}$$ This ends the proof of the proposition. The case that we study behaves differently from classic SL schemes, for which the consistency error estimate is not subject to a CFL-like condition. This difference is due to the presence of discontinuities of the Hamiltonian at the junction point. We provide a counterexample clarifying the scenario. Let us consider a simple junction $J:=J_1\cup J_2=(-\infty,0]\cup [0,+\infty)$, provided with the Hamiltonians $$H_1(p)=\frac{|p|^2}{2}-1=\max_{\alpha_1}\left(\alpha_1\cdot p-\frac{\alpha_1^2}{2}-1\right),\; H_2(p)=\max_{\alpha_2}\left(\alpha_2\cdot p-\frac{\alpha^2_2}{2}-2\right),$$ and the flux limiter $A=-1=\max_i\min_p H_i(p)$ (small enough to play no role). We check the consistency at $x=\Delta x$ for the smooth function $\varphi(x)=1+x$.
We can check that the scheme reads, if $\alpha_2\leq\Delta x/\Delta t$ $$\label{sc1} S[\varphi](\Delta x)=\inf_{\alpha_2}\left(1+(\Delta x-\alpha_2 \Delta t)+{{\Delta t}}\left(\frac{\alpha_2^2}{2}+2\right)\right)=1+{{\Delta x}}+\frac{3}{2}{{\Delta t}}$$ (the minimum is reached for $\alpha_2=1$) and if $\alpha_2>\Delta x/\Delta t$ $$\begin{gathered} \label{sc2} S[\varphi](\Delta x)=\\ \inf_{\alpha_2}\inf_{\alpha_1}\left[1+\left({{\Delta t}}-\frac{{{\Delta x}}}{\alpha_2}\right)\alpha_1+\left({{\Delta t}}-\frac{{{\Delta x}}}{\alpha_2}\right)\left(\frac{\alpha_1^2}{2}+1\right)+\frac{{{\Delta x}}}{\alpha_2}\left(\frac{\alpha_2^2}{2}+2\right)\right]\\ =1+\sqrt{3}\Delta x+\frac{\Delta t}{2}, \end{gathered}$$ where the minimum corresponds to $\alpha_1=-1$ and $\alpha_2=\sqrt{3}$. The minimum between the two options depends on the rate ${{\Delta t}}/{{\Delta x}}$: simply comparing the two outputs, we notice that for $\frac{{{\Delta t}}}{{{\Delta x}}}\leq \sqrt{3}-1$ the scheme assumes the form , while if $\frac{{{\Delta t}}}{{{\Delta x}}}> \sqrt{3}-1$, . We notice that is consistent with the equation since $$\begin{gathered} \frac{\varphi({{\Delta x}})-S[\varphi](\Delta x)}{{{\Delta t}}}-H_2(\varphi_x(\Delta x))=\\ \frac{1+\Delta x-(1+{{\Delta x}}+3/2{{\Delta t}})}{{{\Delta t}}}-\left(\frac{|\varphi_x(\Delta x)|^2}{2}-2\right)=0\end{gathered}$$ while instead for $$\begin{gathered} \frac{\varphi({{\Delta x}})-S[\varphi](\Delta x)}{{{\Delta t}}}-H_2(\varphi_x(\Delta x))=\\ \frac{1+\Delta x-(1+\sqrt{3}{{\Delta x}}+1/2{{\Delta t}})}{{{\Delta t}}}-\left(\frac{|\varphi_x(\Delta x)|^2}{2}-2\right)=1-\left(\sqrt{3}-1\right)\frac{{{\Delta x}}}{{{\Delta t}}}.\end{gathered}$$ The latter means that if $\frac{{{\Delta t}}}{{{\Delta x}}}> \sqrt{3}-1$, the consistency error of the scheme does not vanish.
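The two branch values displayed above can be double-checked by brute-force minimization of the displayed objectives. The following sketch fixes the waiting time $s_0=0$ (which is optimal here, since waiting costs $L_0(0)=-A=1$ per unit time, more than the optimal running cost on $J_1$); the control grid bounds are assumptions:

```python
import math
import numpy as np

# Brute-force check of the two branch values of the counterexample:
# interpolated values 1 + (signed coordinate), L_1(a) = a^2/2 + 1,
# L_2(a) = a^2/2 + 2, evaluated at x = dx on J_2, with s_0 = 0.
def branch_no_crossing(dx, dt):          # alpha_2 <= dx/dt: stay on J_2
    a2 = np.linspace(-10.0, dx / dt, 200001)
    return float(np.min(1 + dx - a2 * dt + dt * (a2**2 / 2 + 2)))

def branch_crossing(dx, dt):             # alpha_2 > dx/dt: cross to J_1
    a2 = np.linspace(dx / dt + 1e-9, 4.0, 2001)[:, None]
    a1 = np.linspace(-3.0, 0.0, 1501)[None, :]
    t1 = dt - dx / a2                    # time spent on J_1
    val = 1 + t1 * a1 + t1 * (a1**2 / 2 + 1) + (dx / a2) * (a2**2 / 2 + 2)
    return float(val.min())

dx, dt = 1e-2, 7.5e-3                    # dt/dx = 0.75 > sqrt(3) - 1
# branch_no_crossing -> 1 + dx + 1.5*dt,  branch_crossing -> 1 + sqrt(3)*dx + dt/2
```

With $\Delta t/\Delta x=0.75>\sqrt{3}-1\approx 0.732$, the crossing value $1+\sqrt{3}\Delta x+\Delta t/2$ is the smaller of the two, in agreement with the comparison of the two regimes above.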
It is worth underlining that consistency (without a consistency error estimate) holds *without assuming* , and consequently the scheme is convergent without any CFL condition, as we show at the end of this section. \[def.cons\] Let $x\in J$ and $(\Delta x_m ,{{\Delta t}}_m)\to0$ as $m\to \infty$. Let $y_m\in J^{\Delta x_m}$ be a sequence of grid points such that $y_m\to x$ as $m\to \infty$. The scheme $S_{{{\Delta t}}}$ is said to be [**[consistent]{}**]{} with if the following properties hold: - If $x\in J_i$, for every test function $\varphi \in C^2(J)$, we have $$\label{cons} \frac{\varphi(y_m)-S[\hat \varphi](y_m)}{{{\Delta t}}_m}\longrightarrow H_i(\varphi_x(x))\quad \hbox{as } m\to\infty.$$ - If $x=0$, for every test function $\varphi \in C^2(J)$ such that $\partial_i \varphi(0)=p_i^A$ for $i=1,\dots,N$, where $p_i^A\in {{\mathbb{R}}}$ are such that $H_i(p_i^A)=H^+_i(p_i^A)=A$ and $H_i^+(p):=\sup _{\alpha_i\geq 0}(\alpha_i \cdot p-L_i(\alpha_i))$, we have $$\label{cons2} \frac{\varphi(y_m)-S[\hat \varphi](y_m)}{{{\Delta t}}_m}\longrightarrow F_{A}(\varphi_{x}(x))\quad \hbox{as } m\to\infty.$$ \[consis\] Assume that $\Delta x^2/{{\Delta t}}\to 0$. Then, the scheme is consistent according to Definition \[def.cons\]. Let us consider a sequence $y_m$ such that $y_m\to x$ as $\Delta_m\rightarrow (0,0)$. For notational convenience we drop the index $m$ of the sequence of grid points. In case the limit point $x$ is not on the junction, since $x$ is fixed for every sequence $({{\Delta x}}_m,{{\Delta t}}_m)\rightarrow (0,0)$, $y$ eventually satisfies $|y|>\mu\Delta t_m$ *independently* of the rate ${{\Delta t}}_m/{{\Delta x}}_m$. Then, the consistency follows as in **Case 1** in the proof of Prop. \[consisrates\] (without the condition $\Delta t/\Delta x\leq 1/\mu$). The situation is more complex when the limit point $x$ is $0$. If $y\equiv0$, this case is equivalent to **Case 2** in the proof of Prop. \[consisrates\]. If $y$ is such that $y\to 0 $ and $y \neq 0$, up to a subsequence, we can assume that $y\in J_i$, for some $i$ independent of $m$. In that case, the optimal trajectory can cross the junction in one time step.
Let $\varphi \in C^2(J)$ be such that $\partial_i \varphi(0)=p_i^A$ for $i=1,\dots,N$ and let us define the two quantities: $$\begin{aligned} \mathcal I_1:=&\inf_{\alpha_i< \frac{|y|}{{{\Delta t}}} }(\I[\hat \varphi](y-{{\Delta t}}\alpha_i e_i )+{{\Delta t}}L_i(y,\alpha_i) ),\\ \mathcal I_2:= &\inf\limits_{\alpha,\alpha_i\geq \frac{|y|}{\Delta t}}\inf\limits_{ s_0\in[0,\Delta t-\frac{|y|}{\alpha_i}]} \min\limits_{j,\alpha_j\leq 0}\left\{ \I[\hat \varphi]\left(-\left(\Delta t- s_0-\frac{|y|}{\alpha_i}\right)\alpha_j e_j\right)\right.\\ &\left.\hspace{1cm}+\left(\Delta t- s_0-\frac{|y|}{\alpha_i}\right)L_j(\alpha_j)+ s_0 L_0(\alpha_0)+\frac{|y|}{\alpha_i}L_i(\alpha_i)\right\}. \end{aligned}$$ We remark that $S[\varphi](y)=\min(\mathcal I_1,\mathcal I_2).$ We begin with the term $\mathcal I_1$. Using and a Taylor expansion, we get $$\begin{aligned} \label{eq:est-I1-1} \mathcal I_1=&\inf_{\alpha_i\le \frac {|y|}{{{\Delta t}}}}\left\{\varphi(y)-\a_i{{\Delta t}}\partial _i\varphi(y)+{{\Delta t}}L_i(y,\a_i)\right\} +\mathcal O({{\Delta x}}^2+{{\Delta t}}^2)\nonumber\\ =&\varphi(y)-{{\Delta t}}\sup _{\alpha_i\le \frac {|y|}{{{\Delta t}}}}\left\{\a_i \partial _i\varphi(y)- L_i(y,\a_i)\right\} +\mathcal O({{\Delta x}}^2+{{\Delta t}}^2)\end{aligned}$$ Using that $$\begin{aligned} \sup _{\alpha_i\le \frac {|y|}{{{\Delta t}}}}\left\{\a_i \partial _i\varphi(y)- L_i(y,\a_i)\right\} \le &\sup _{\alpha_i\in {{\mathbb{R}}}}\left\{\a_i \partial _i\varphi(y)- L_i(y,\a_i)\right\} \\ =& H_i(y,\partial_i\varphi(y))=A+o(1),\end{aligned}$$ we deduce that $$\label{eq:est-I1-2} \mathcal I_1\ge \varphi(y) - {{\Delta t}}\,A +{{\Delta t}}\, o(1) +\mathcal O({{\Delta x}}^2+{{\Delta t}}^2).$$
For the term $\mathcal I_2$, with , adding into the argument of $\varphi$ the term $y-\frac{|y|}{\alpha_i}\alpha_i e_i=0$ and using the Taylor expansion twice we obtain $$\begin{gathered} \I[\varphi]\left(-\left(\Delta t- s_0-\frac{|y|}{\alpha_i}\right)\alpha_j e_j\right)=\\ \varphi(y)-\frac{|y|}{\alpha_i}\alpha_i \partial_i\varphi(y)-\left(\Delta t- s_0-\frac{|y|}{\alpha_i}\right)\alpha_j \partial_j\varphi(0)+\mathcal O(\Delta t^2+{{\Delta x}}^2).\end{gathered}$$ This implies $$\begin{aligned} &\mathcal I_2+\mathcal O({{\Delta t}}^2+{{\Delta x}}^2)\\ =&\inf_{\alpha_i\geq \frac{|y|}{\Delta t}} \inf_{s_0 \in [0,{{\Delta t}}-\frac{|y|}{\a_i}] }\bigg\{\min_{j}\min_{\alpha_j\leq 0 }\left\{-\left(\Delta t- s_0-\frac{|y|}{\alpha_i}\right)(\alpha_j \partial_j \varphi(0) - L_j(0,\alpha_j))\right\}\\ &\quad \hspace{2cm}+\varphi(y)-\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i))+s_0L_0(\alpha_0) \bigg\}\\ &=\varphi(y)+\inf_{\alpha_i\geq \frac{|y|}{\Delta t}} \inf_{s_0 \in [0,{{\Delta t}}-\frac{|y|}{\a_i}] }\bigg\{-\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i))+s_0L_0(\alpha_0)\\ &\quad\hspace{2cm}-\left(\Delta t- s_0-\frac{|y|}{\alpha_i}\right)\max_{j}\max_{\alpha_j\leq 0 }\left\{(\alpha_j \partial_j \varphi(0) - L_j(0,\alpha_j))\right\}\bigg\}\\ &=\varphi(y)+\inf_{\alpha_i\geq \frac{|y|}{\Delta t}} \inf_{s_0 \in [0,{{\Delta t}}-\frac{|y|}{\a_i}] }\bigg\{-\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i))+s_0L_0(\alpha_0)\\ &\quad\hspace{2cm}-\left(\Delta t- s_0-\frac{|y|}{\alpha_i}\right) \max_j H_j^-(0,\partial _j\varphi (0))\bigg\}\end{aligned}$$ Using $\max_j H_j^-(0,\partial _j\varphi (0))=\max_j \min_p H_j(p)=\bar H^0$, and $L_0(\alpha_0)=-A$, we deduce that (we use $\bar H^0\le A$) $$\begin{aligned} \label{eq:est-I2-1} &\mathcal I_2+\mathcal O({{\Delta t}}^2+{{\Delta x}}^2)\nonumber\\ &=\varphi(y)+\inf_{\alpha_i\geq \frac{|y|}{\Delta t}}\bigg\{-\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)) 
\nonumber\\ &\hspace{3cm}+\inf_{s_0 \in [0,{{\Delta t}}-\frac{|y|}{\a_i}] }\left\{s_0(\bar H^0-A) -\left({{\Delta t}}-\frac{|y|}{\a_i}\right)\bar H^0\right\}\bigg\}\nonumber\\ &=\varphi(y)+\inf_{\alpha_i\geq \frac{|y|}{\Delta t}}\bigg\{-\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)) -\left({{\Delta t}}-\frac{|y|}{\a_i}\right)A\bigg\}\\\nonumber &=-{{\Delta t}}\, A+\varphi(y)-\sup_{\alpha_i\geq \frac{|y|}{\Delta t}}\bigg\{\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)-A) \bigg\}\nonumber\end{aligned}$$ Using $\frac {|y|}{\a_i}\le {{\Delta t}}$ in the last sup, and $\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)-A\le o(1)$, we get $$\label{eq:est-I2-2} \mathcal I_2+\mathcal O({{\Delta t}}^2+{{\Delta x}}^2)\ge -{{\Delta t}}\, A+\varphi(y) +{{\Delta t}}\, o(1).$$ Using and , we finally get $$\label{eq:estI} S[\varphi](y)=\min(\mathcal I_1,\mathcal I_2)\ge -{{\Delta t}}\,A+\varphi(y) +{{\Delta t}}\,o(1)+\mathcal O({{\Delta t}}^2+{{\Delta x}}^2).$$ We now want to show that this inequality is in fact an equality. We denote by $\bar \a_i$ the solution of $$\sup_{\a_i\in {{\mathbb{R}}}}\{\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)\},$$ and we distinguish two cases. Firstly, we consider the case $\bar \a_i\le \frac {|y|}{{{\Delta t}}}$. This implies in particular that $$\begin{aligned} \sup_{\a_i\le \frac {|y|}{{{\Delta t}}}}\{\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)\}=&\sup_{\a_i\in {{\mathbb{R}}}}\{\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)\}\\ =&H_i(y,\partial_i \varphi (y))=A+o(1).\end{aligned}$$ Using , we deduce that $$\mathcal I_1= -{{\Delta t}}A+\varphi(y) +{{\Delta t}}o(1)+\mathcal O({{\Delta t}}^2+{{\Delta x}}^2)$$ and so the inequality is in fact an equality. We now consider the case $\bar \a_i\ge \frac {|y|}{{{\Delta t}}}$.
We define $$\bar {\mathcal I}_2:=\sup_{\alpha_i\geq \frac{|y|}{\Delta t}}\bigg\{\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)-A) \bigg\}.$$ Using that $0\le\frac{|y|}{\a_i}\le{{\Delta t}}$ and that $\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i)-A\le o(1)$, we get $$\begin{aligned} {{\Delta t}}\, o(1)\ge \bar{\mathcal I}_2 \ge &{{\Delta t}}\left\{\sup_{\alpha_i\geq \frac{|y|}{\Delta t}}\left\{\frac {|y|}{\a_i}(\alpha_i \partial_i \varphi(y) - L_i(y,\alpha_i))\right\}-A\right\}\\ =&{{\Delta t}}(H_i(y,\partial _i\varphi(y))-A)={{\Delta t}}\,o(1).\end{aligned}$$ This implies again that the inequality is in fact an equality, which ends the proof. Assume that $\frac{{{\Delta x}}^2}{{{\Delta t}}}\to 0$ and let $T>0$ and $u_0$ be a Lipschitz continuous function on $J$. Then the numerical solution $w(t,x)$ converges locally uniformly as $\Delta\to (0,0)$ to the unique (weak) viscosity solution $u(t,x)$ of on any compact set $\mathcal K$ contained in the domain $(0,T)\times J$, i.e. $$\limsup_{\Delta x,\Delta t\rightarrow 0} \sup_{(t,x)\in \mathcal K \cap \G^{\Delta }}|w(t,x)-u(t,x)|=0.$$ Since the scheme is consistent (for a subsequence verifying ${{\Delta x}}^2/{{\Delta t}}\to 0$), monotone, and stable, we can follow [@BS91; @costeseque2015convergent; @imbert2013flux] and obtain the result. Note that the choice of the test functions in the definition of the consistency at the junction is consistent with Theorem \[th:1\] ii). Convergence estimates {#sect:bounds} ===================== In this section, we introduce the main result of the paper. We formulate the result in two settings: first for special Hamiltonians, and then in a more general scenario. Space independent Hamiltonians. ------------------------------- We suppose: - **(A3)** the Lagrangians satisfy $L_i(x,\alpha_i)\equiv L_i(y,\alpha_i)$ for every choice of $x,y\in J_i$. We observe that, as a consequence of **(A3)**, the optimal control $\bar \alpha_i$ is constant along each arc.
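Away from the junction, one time step of a semi-Lagrangian update of the type considered above can be sketched as follows (a minimal illustration of ours with P1 interpolation and the quadratic Lagrangian $L(\alpha)=\alpha^2/2$; function names and discretization choices are hypothetical, and this is not the full junction scheme):

```python
import bisect

def p1_interp(xs, vals, x):
    """Monotone piecewise-linear (P1) interpolation of vals on the grid xs."""
    if x <= xs[0]:
        return vals[0]
    if x >= xs[-1]:
        return vals[-1]
    j = bisect.bisect_right(xs, x) - 1
    t = (x - xs[j]) / (xs[j + 1] - xs[j])
    return (1 - t) * vals[j] + t * vals[j + 1]

def sl_step(xs, w, dt, L, alphas):
    """One semi-Lagrangian step on a single arc:
       S[w](x) = min over a of ( I[w](x - dt*a) + dt*L(a) )."""
    return [min(p1_interp(xs, w, x - dt * a) + dt * L(a) for a in alphas)
            for x in xs]

# sanity check: for a constant datum the minimum is attained at a = 0,
# so one step leaves the solution unchanged
xs = [i * 0.1 for i in range(-10, 11)]
w0 = [1.0] * len(xs)
w1 = sl_step(xs, w0, 0.01, lambda a: a * a / 2, [k * 0.1 for k in range(-20, 21)])
assert all(abs(v - 1.0) < 1e-12 for v in w1)
```

Note the absence of any CFL restriction linking `dt` to the grid spacing: the foot of the characteristic `x - dt*a` is evaluated by interpolation, consistently with the convergence discussion above.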
\[teo:bounds1\] Let **(A1)**-**(A3)** be verified. Let $u$ be a viscosity solution of , and $w$ a solution of the scheme . Then, there exists a positive constant $C$ depending only on the Lipschitz constant of $u$ such that $$\sup_{(t,x )\in \mathcal G^{\Delta }}|u(t,x)-w(t,x)| \leq CT \Delta x. $$ The proof is by induction: we assume that for $n\ge 1$ $$\label{eq:rec} |w^{n-1}(x)-u(t_{n-1},x)|\le (n-1)C\Delta x\quad \forall x\in J^{\Delta x}.$$ Note that this is clearly satisfied for $n=1$. We then want to show that $$|w^{n}(x)-u(t_n,x)|\le nC\Delta x\quad \forall x\in J^{\Delta x}.$$ From Proposition \[hjdpp\], we know that $$\begin{gathered} \label{eq:dppu} u(t_n,x):=\\ \inf_{y\in J}\inf_{(X(.),\alpha(.))\in \Gamma_{t_{n-1},y}^{t_n,x}}\left\{u(t_{n-1}, y)+\int\limits_{t_{n-1}}^{t_n} L(X(\tau),\alpha(\tau))d\tau\right\}. \end{gathered}$$ We call $\bar\alpha=(\bar\alpha_0,\bar\alpha_1,...,\bar\alpha_N)$ and $\bar s_0$ the optimal arguments of $S[w^{n-1}](x)$ and we treat only the case where $x\in J^i\backslash\{0\}$ and $|x|/\bar\alpha_i<{{\Delta t}}$ (this corresponds to the more difficult case in which the optimal trajectory crosses the junction). We also denote by $\bar X(t)$ (with $t\in[t_{n-1},t_{n}]$) the trajectory obtained applying the control $\bar\alpha$. Clearly such a trajectory belongs to $\Gamma_{t_{n-1},\bar X(t_{n-1})}^{t_n,x}$ with $\bar X(t_{n-1})=\left(\Delta t-s_0-\frac{|x|}{\bar \alpha_i}\right)e_j$ and $$\left\{ \begin{array}{ll} \bar X(t)\in J_j & \text{for } t\in[t_{n-1},\bar t_1],\\ \bar X(t)=0 & \text{for } t\in[\bar t_1,\bar t_2],\\ \bar X(t)\in J_i & \text{for } t\in[\bar t_2,t_n], \end{array}\right.$$ where $\bar t_1:=t_{n-1}+\Delta t-\bar s_0-\frac{|x|}{\bar \alpha_i}$ and $\bar t_2:=\bar t_1+\bar s_0$ are the times at which the trajectory reaches and leaves the junction. Indeed, we can exclude that an optimal trajectory passes through another arc or touches the junction point multiple times, thanks to the convexity of the functions $L$. In fact, in such a case, it would be necessary for an optimal trajectory to pass twice through the same point, i.e. $X(\tilde t_1)=\tilde x$ and $X(\tilde t_2)=\tilde x$, with $X(t)\ne\tilde x$ for $t\in (\tilde t_1, \tilde t_2)$.
This means, since $\dot X(t) = \bar \alpha(t)$, that $$\int_{\tilde t_1}^{\tilde t_2}\bar \alpha (\tau) d\tau=X(\tilde t_2)-X(\tilde t_1)=0.$$ Then, the average control on $[\tilde t_1, \tilde t_2]$ is zero. Using the strict convexity and Jensen’s inequality, we find that the optimal control $\bar \alpha$ should be zero. This contradicts the definition of $X$. We can now build a discrete control and an associated trajectory $(\hat \alpha,\hat X)$ for $S[w^{n-1}](x)$ such that $$\hat \alpha_i=\frac{|x|}{t_{n} - \bar t_2}=\frac{1}{t_{n}-\bar t_2}\int^{t_{n}}_{\bar t_2}\bar \alpha_i(\tau) d\tau, \quad \hat s_0=\bar t_2-\bar t_1,$$ $$\hat\alpha_j=\frac{1}{\bar t_1-t_{n-1}}\int_{t_{n-1}}^{\bar t_1}\bar \alpha_j(\tau) d\tau.$$ Then, by construction, $\hat X(t_{n-1})=\bar X(t_{n-1})$ and $$\begin{aligned} &S[w^{n-1}](x)-u(t_n,x)\\ =&S[w^{n-1}](x)- u(t_{n-1}, \bar X(t_{n-1}))-\int_{t_{n-1}}^{t_n} L(\bar X(\tau),\bar \alpha(\tau))d\tau\\ \leq &\I[w^{n-1}](\hat X(t_{n-1}))-u(t_{n-1},\bar X(t_{n-1}))\\ &+\left(\left(t_n- \bar t_2\right)L_i(\hat\alpha_i)-\int_{\bar t_2}^{t_n} L(\bar X(\tau),\bar\alpha(\tau))d\tau\right)\\ & +\left((\bar t_2-\bar t_1) L_0-\int_{\bar t_1}^{\bar t_2} L(\bar X(\tau),\bar\alpha(\tau))d\tau\right)\\ & +\left((\bar t_1-t_{n-1}) L_j(\hat\alpha_j)-\int_{t_{n-1}}^{\bar t_1} L(\bar X(\tau),\bar\alpha(\tau))d\tau\right).\end{aligned}$$ Using Jensen’s inequality, knowing that the $L$-functions are convex, we get $$\begin{aligned} &(\bar t_1-t_{n-1}) L_j(\hat\alpha_j)-\int_{t_{n-1}}^{\bar t_1} L(\bar X(\tau),\bar\alpha(\tau))d\tau\\ =&(\bar t_1-t_{n-1}) L_j\left(\frac{1}{\bar t_1-t_{n-1}}\int_{t_{n-1}}^{\bar t_1}\bar\alpha_j(\tau)d\tau\right)-\int_{t_{n-1}}^{\bar t_1} L_j(\bar \alpha_j(\tau))d\tau\\ \leq& \int_{t_{n-1}}^{\bar t_1} L_j(\bar \alpha_j(\tau))d\tau-\int_{t_{n-1}}^{\bar t_1} L_j(\bar \alpha_j(\tau))d\tau=0.\end{aligned}$$ The other two cost terms can be treated in a similar way.
Finally, we observe that $$\begin{gathered} \I[w^{n-1}](\hat X(t_{n-1}))-u(t_{n-1},\bar X(t_{n-1}))\le \I[w^{n-1}](\hat X(t_{n-1}))\\ -\I[u(t_{n-1}, \cdot)+(n-1)C{{\Delta x}}](\bar X(t_{n-1}))+ \I[u(t_{n-1}, \cdot)](\bar X(t_{n-1}))\\+(n-1)C{{\Delta x}}-u(t_{n-1},\bar X(t_{n-1}))\le nC{{\Delta x}}\end{gathered}$$ where we have used Lemma \[inter\] (i) together with to control the first term and Lemma \[inter\] (iii) for the last one. This implies that $$w^n(x)-u(t_n,x)\le nC\Delta x$$ and concludes the proof. Space dependent Hamiltonians ---------------------------- We prove an error estimate for stable schemes for which a consistency error estimate holds. \[teo:bounds2\] Consider $u$ a viscosity solution of , and $w$ a solution of a scheme for which Lemma \[monotonicity\] (monotonicity), Prop. \[stability\] (stability) and a result analogous to Prop. \[consisrates\] (consistency error estimate) hold. Then there exists a positive constant $C$, independent of $\Delta t$ and $\Delta x$, such that $$\label{esterr} \sup_{(t,x )\in \mathcal G^{\Delta }}|u(t,x)-w(t,x)| \leq C T \left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\sqrt{\Delta t}}+\sqrt{\Delta t}\right) + \sup_{x\in J^{{{\Delta x}}}}|u_0(x)-w(0,x)|;$$ where ${{\mathbb E}}(\Delta t,\Delta x)$ denotes the consistency error of the scheme. In the specific case of the scheme , if we assume moreover , we have $$\sup_{(t,x )\in \mathcal G^{\Delta }}|u(t,x)-w(t,x)| \leq C \left( \sqrt{\Delta t}+\frac{\Delta x^2}{\sqrt{\Delta t^3}}\right).$$ As is standard in this kind of proof, we only prove that $$\label{esterr1} u(t,x)-w(t,x) \leq C \left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\sqrt{\Delta t}}+\sqrt{\Delta t}\right)+ \sup_{x\in J^{{{\Delta x}}}}|u_0(x)-w(0,x)|\quad \hbox{in } \mathcal G^{\Delta },$$ since the reverse inequality is obtained with small modifications. Assume that $T\le 1$ (the case $T\ge 1$ is obtained by induction). For $i\in \{1,\dots,N\}$ and $j\in \mathbb N$, we set $x_j^i=j\Delta x e_i$.
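As a side computation (ours, not carried out in the source), the scheme-specific bound above suggests a natural coupling of the steps: minimizing $g(\Delta t)=\sqrt{\Delta t}+\Delta x^2\,\Delta t^{-3/2}$ over $\Delta t$ gives

```latex
g'(\Delta t)=\tfrac12\,\Delta t^{-1/2}-\tfrac32\,\Delta x^{2}\,\Delta t^{-5/2}=0
\iff \Delta t^{2}=3\,\Delta x^{2}
\iff \Delta t=\sqrt{3}\,\Delta x,
\qquad\text{yielding}\qquad
\sup_{(t,x)\in\mathcal G^{\Delta}}|u(t,x)-w(t,x)|\le C\,\sqrt{\Delta x}.
```

So any linear coupling $\Delta t\propto\Delta x$ balances the two error terms and produces an overall rate of order $\sqrt{\Delta x}$.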
We then define the extension of $w$ to the continuous space by $$w_\#(t_n,x)=\I[\hat w(t_n,\cdot)](x).$$ Firstly, we assume that $$u_0(x_j^i)\ge w_\#(0,x_j^i)\quad{\rm for\; all\;}i\in\{0,\dots,N\}{\rm \; and\;}j\in \mathbb N.$$ We define $$0\leq\mu_0:=\sup_{x\in J}\{|u_0(x)- w_\#(0,x)|\},$$ and we assume that $\mu_0\le K$. For every $\beta, \eta \in (0,1)$ and $\sigma>0$, we define, for $(t,s,x)\in[0,T)\times \{t_n: n=0,\dots, N_T \}\times J$, the auxiliary function $$\psi(t,s,x):=u(t,x)-w_\#(s,x)- \frac{(t-s)^2}{2\eta}-\beta |x|^2-\sigma t.$$ Using Proposition \[stability\] and the inequality $|u(x,t)-u_0(x)| \leq C_T$ (which holds for the continuous solution, see Theorem 2.14 in [@imbert2013flux]), we deduce that $\psi(t,s,x)\to -\infty$ as $|x|\to +\infty$ and then the function $\psi$ achieves its maximum at some point $({{\overline t}}_\beta,{{\overline s}}_\beta,{{\overline x}}_\beta)$. In particular, we have $$\psi({{\overline t}}_\beta,{{\overline s}}_\beta,{{\overline x}}_\beta)\ge \psi(0,0,0)=u_0(0)-w_\#(0,0)\ge 0.$$ We denote by $K$ several positive constants depending only on the Lipschitz constants of $u$.
[**Case 1:**]{} ${{\overline x}}_\beta \in J_i\setminus \{0\}$. In this case, we duplicate the space variable by considering, for $\e\in(0,1)$, $$\begin{aligned} \psi_1(t,s,x,y)=&u(t,x)-w_\#(s,y)- \frac{(t-s)^2}{2\eta}-\frac{d(x,y)^2}{2\e}-\frac \beta 2( |x|^2+|y|^2) - \sigma t\\ &-\frac \beta 2|x-{{\overline x}}_\beta|^2-\frac\beta 2 | y-\bar x_\beta|^2-\frac \beta 2|t-{{\overline t}}_\beta|^2-\frac \beta 2|s-{{\overline s}}_\beta|^2,\\ &\hbox{ for }(t,s,x,y)\in[0,T)\times \{t_n: n=0,\dots, N_T \}\times J\times J.\end{aligned}$$ Using Proposition \[stability\] again, the inequality $|u(x,t)-u_0(x)| \leq C_T$, and the fact that $u_0$ is Lipschitz continuous, we deduce that $\psi_1(t,s,x,y)\to -\infty$ as $|x|,|y|\to +\infty$ and then the function $\psi_1$ achieves its maximum at some point $({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})$, i.e. $$\psi_1({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})\geq \psi_1(t,s,x,y)\quad \hbox{ for all } (t,x),(s,y)\in[0,T)\times J.$$ It is also easy to show that $({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})\to({{\overline t}}_\beta,{{\overline s}}_\beta,{{\overline x}}_\beta,{{\overline x}}_\beta)$ as $\e$ goes to zero and so ${{\overline x}},{{\overline y}}\in J_i\setminus\{0\}$, for $\eps$ small enough. [**Step 1. (Basic estimates).** ]{} The maximum point of $\psi_1$ satisfies the following estimates: $$\label{stima2} d({{\overline x}},{{\overline y}})\le K\varepsilon, \qquad |{{\overline t}}-{{\overline s}}|\le K\eta.$$
$$\label{stima1} \beta\left(|{{\overline x}}|^2+|{{\overline y}}|^2\right)\le K, \qquad \beta \left(|{{\overline x}}-{{\overline x}}_\beta|^2+|{{\overline y}}-{{\overline x}}_\beta|^2+|{{\overline t}}-{{\overline t}}_\beta|^2+|{{\overline s}}-{{\overline s}}_\beta|^2\right)\le K.$$ From $$\psi_1({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})\ge \psi_1({{\overline t}}_\beta,{{\overline s}}_\beta,{{\overline x}}_\beta,{{\overline x}}_\beta) {=} \psi({{\overline t}}_\beta,{{\overline s}}_\beta,{{\overline x}}_\beta)\ge 0,$$ we get (using $0\geq -(\bar t-\bar s)^2/2\eta-d(\bar x,\bar y)^2/2\varepsilon-\sigma \bar t$) $$\begin{aligned} \label{eq:105} &\frac \beta 2 (|\bar x|^2+|\bar y|^2)+\frac \beta 2 \left(|\bar x-\bar x_\beta|^2+|\bar y-\bar x_\beta|^2+|\bar t-\bar t_\beta|^2+|\bar s-\bar s_\beta|^2\right)\nonumber\\ \le& u(\bar t,\bar x)-w_\#(\bar s,\bar y)\le u_0(\bar x)-w_\#(0,\bar y)+K\bar t+K\bar s\le K(1+|\bar x|+|\bar y|)\end{aligned}$$ where we have used Proposition \[stability\] (extended to all the points of $J$ thanks to the monotonicity of the interpolation operator, Lemma \[inter\]) and [@imbert2013flux Theorem 2.14] for the second inequality and the fact that $T\le 1$ for the last one. Using Young’s inequality (i.e.
the fact that $|\bar x|\leq 1/\beta + \beta/4 |\bar x|^2$ since $(\beta/2|\bar x|-1)^2\geq 0$) implies in particular that $$\frac \beta 2 (|\bar x|^2+|\bar y|^2)\le K\left(1+\frac 2 \beta +\frac \beta 4 (|\bar x|^2+|\bar y|^2)\right).$$ Multiplying by $\beta$ and using the fact that $\beta \le 1$, we finally deduce that $$\beta|\bar x|,\; \beta |\bar y|\le K.$$ Using again, the equation above implies that $$\beta \left(|\bar x-\bar x_\beta|^2+|\bar y-\bar x_\beta|^2+|\bar t-\bar t_\beta|^2+|\bar s-\bar s_\beta|^2\right)\le K\left(1+\frac 1 \beta\right)$$ and so $$\beta \left(|\bar x-\bar x_\beta|+|\bar y-\bar x_\beta|+|\bar t-\bar t_\beta|+|\bar s-\bar s_\beta|\right)\le K.$$ From $\psi_1({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})\geq \psi_1({{\overline t}},{{\overline s}},{{\overline y}},{{\overline y}})$ we get $$\begin{gathered} \label{stima5b} \frac{d({{\overline x}},{{\overline y}})^2}{2\varepsilon}\leq u({{\overline t}},{{\overline x}})-u({{\overline t}},{{\overline y}})+\frac \beta 2(|\bar y|^2- |{{\overline x}}|^2)+\frac \beta 2(|\bar y- \bar x_\beta|^2-|\bar x-\bar x_\beta|^2)\\ \leq K d(\bar x,\bar y)+\frac \beta 2 (|\bar x|+|\bar y|)d(\bar x,\bar y) + \frac \beta 2(|\bar x-\bar x_\beta|+|\bar y-\bar x_\beta|)d(\bar x,\bar y) \le K d(\bar x,\bar y)\end{gathered}$$ which implies the first estimate of . The second bound in is deduced from $\psi_1({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})\geq \psi_1({{\overline s}},{{\overline s}},{{\overline x}},{{\overline y}})$ in the same way. If we include the estimate $$u(\bar t,\bar x)-w_\#(\bar s,\bar y)\le u_0(\bar x)+K\bar t-w_\#(0,\bar y){+ K\bar s}\le K(\mu_0+d(\bar x,\bar y)+{1})\le {K}$$ in the first part of , we finally deduce . [**[Step 2.]{} (Viscosity inequalities).** ]{} We claim that for $\sigma$ large enough, the supremum of $\psi_1$ is achieved for ${{\overline t}}=0$ or ${{\overline s}}=0$. We prove the assertion by contradiction.
Suppose ${{\overline t}}>0$ and ${{\overline s}}>0$. Using the fact that $(t,x)\to\psi_1(t,\bar s, x,\bar y)$ has a maximum in $(\bar x,\bar t)$ and that $u$ is a sub solution, we get $$\label{sub1} \frac{\bar t-\bar s}{\eta}+\sigma+\beta(\bar t-\bar t_\beta)+ H_i\left(\frac{d(\bar x,\bar y)}{\varepsilon}+\beta |\bar x|+\beta(|\bar x-\bar x_\beta|)\right)\leq 0.$$ Since $\bar s>0$ we know that $\psi_1(\bar t, \bar s, \bar x, \bar y)\geq \psi_1(\bar t, \bar s-\Delta t, \bar x, y)$ for a generic $y$ and, by defining $\varphi(s,y)=-\left(\frac{(\bar t-s)^2}{2\eta}+\frac{d(\bar x,y)^2}{2\varepsilon}+\frac{\beta}{2}|y|^2+\frac{\beta}{2}|y-\bar x_\beta|^2+\frac{\beta}{2}|s-\bar s_\beta|^2\right)$, it implies that for a generic $y$ we have $$w_\#(\bar s,\bar y)-\varphi(\bar s,\bar y)\leq w_\#(\bar s-\Delta t, y)-\varphi(\bar s-\Delta t, y).$$ In particular, we have that for any $ z \in J^{{{\Delta x}}}$ $$w_\#(\bar s,\bar y)-\varphi(\bar s,\bar y)\leq w(\bar s-\Delta t, z)-\varphi(\bar s-\Delta t, z).$$ By the monotonicity of the scheme and the fact that the scheme commutes with constants, i.e. $S[ \hat \varphi+C](z)=S[ \hat \varphi](z)+C$ for any constant $C$, choosing $C=w_\#(\bar s,\bar y)-\varphi(\bar s,\bar y)$ we get for any $ z \in J^{{{\Delta x}}}$ $$w(\bar s,z)=S[ \hat w(\bar s-\Delta t)]( z)\geq S[\hat \varphi(\bar s-\Delta t)](z) +C.$$ By the monotonicity of the interpolation operator, this implies $$w_\#(\bar s,\bar y)=\I[\hat w(\bar s,\cdot)](\bar y)\geq \I[S[\hat \varphi(\bar s-\Delta t)](\cdot)](\bar y) +w_\#(\bar s,\bar y)-\varphi(\bar s,\bar y).$$ Cancelling $w_\#(\bar s,\bar y)$, we get $$-\sum_i \phi_i (\bar y) S[ \hat \varphi(\bar s-\Delta t)](y_i) =-\I[S[\hat \varphi(\bar s-\Delta t)](\cdot)](\bar y) \geq - \varphi(\bar s,\bar y) ,$$ where $\phi_i$ are the basis functions of the interpolation operator (cf. Lemma \[inter\]).
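The two structural properties invoked here, monotonicity and commutation with constants, can be illustrated on a toy SL-type operator (a sketch of ours on a periodic grid; it is not the paper's junction scheme, and all names are hypothetical):

```python
def toy_sl_op(w, dt, costs):
    """Toy monotone operator on a periodic grid:
       S[w]_i = min over shifts k of ( w_{i-k} + dt * L_k ),
       where the shift k plays the role of the foot of the characteristic."""
    n = len(w)
    return [min(w[(i - k) % n] + dt * L for k, L in costs) for i in range(n)]

costs = [(-1, 0.5), (0, 0.0), (1, 0.5)]   # (shift, running cost L_k)
w = [0.0, 1.0, 4.0, 2.0]
v = [x + 0.3 for x in w]                  # w <= v pointwise
dt = 0.1

Sw = toy_sl_op(w, dt, costs)
# commutation with constants: S[w + c] = S[w] + c
Swc = toy_sl_op([x + 5.0 for x in w], dt, costs)
assert all(abs(a - (b + 5.0)) < 1e-12 for a, b in zip(Swc, Sw))
# monotonicity: w <= v implies S[w] <= S[v]
Sv = toy_sl_op(v, dt, costs)
assert all(a <= b + 1e-12 for a, b in zip(Sw, Sv))
```

Both properties follow because each candidate value $w_{i-k} + \Delta t\, L_k$ is nondecreasing in $w$ and shifts by exactly $c$ when $w$ does, and the pointwise minimum preserves these features.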
Adding and subtracting $\I[\hat \varphi(\bar s,\cdot)](\bar y)-\I[\hat \varphi(\bar s- \Delta t,\cdot)](\bar y)$ and dividing by $\Delta t$, we get $$\begin{gathered} \sum_i \phi_i (\bar y) \left(\frac{\varphi(\bar s-\Delta t, y_i)-S[ \hat \varphi(\bar s-\Delta t)]( y_i) }{\Delta t}+\frac{\varphi(\bar s, y_i)-\varphi(\bar s-\Delta t, y_i)}{\Delta t} \right) \\ \geq \mathcal{O}\left(\frac{(\Delta x) ^2}{\eps}\right),\end{gathered}$$ where we have used the fact that $\varphi_{xx}=\mathcal O(\frac 1\e)$ together with Lemma \[inter\]. We observe that $\frac{\varphi(\bar s, y_i)-\varphi(\bar s-\Delta t,y_i)}{\Delta t}=\varphi_s(\bar s, y_i)+\mathcal O(\Delta t/\eta)$; then, using the consistency result – Prop \[consisrates\], we arrive at $$\begin{gathered} \sum \phi_i (\bar y) \left(-\varphi_s(\bar s, y_i)+H_i(\varphi_x(\bar s-\Delta t, y_i))\right)\\ \geq \mathcal O\left(\frac{\Delta t}{\eta}+\frac{(\Delta x) ^2}{\eps} \right)+\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}.\end{gathered}$$ By the regularity of $\varphi$ and $H$ (Lipschitz continuous) and the interpolation error for Lipschitz functions (see Lemma \[inter\]), there exists a positive constant $K$ such that $$\label{sub2} \varphi_s(\bar s, \bar y)+H_i(\varphi_x(\bar s-\Delta t, \bar y))\geq - K\left(\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\varepsilon}\right)+\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}.$$ We subtract from and, writing $\varphi$ explicitly, obtain $$\begin{gathered} \sigma +\beta(\bar s-\bar s_\beta)+\beta(\bar t-\bar t_\beta)+ H_i\left(\frac{d(\bar x,\bar y)}{\varepsilon}+\beta| \bar x|+\beta(|\bar x-\bar x_\beta|)\right)\\ -H_i\left(\frac{d(\bar x,\bar y)}{\varepsilon}-\beta |\bar y|-\beta(|\bar y-\bar x_\beta|)\right)\leq K\left(\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\varepsilon}\right)+\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}.\end{gathered}$$ Then, using that $H_i$ is Lipschitz continuous and the basic estimates of Step 1, we arrive at $$\label{eq:3} \sigma < K\sqrt{\beta}+
K\left(\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\varepsilon} \right)+\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}=:\sigma^*.$$ Therefore, for $\sigma\ge\sigma^*$, at least one of $\bar t$ and $\bar s$ is equal to zero. [**[Step 3.]{} (Conclusion).** ]{} If $\overline t=0$ we have $$\begin{aligned} \psi_1(0,\bar s,\bar x,\bar y)\leq &u_0(\overline x)-w_\#(\overline s, \overline y)\leq u_0(\overline x)-u_0(\overline y)+C\overline s +\mu_0\\ \leq &K\e+K\eta+\mu_0.\end{aligned}$$ A similar argument applies if $\bar s=0$. Taking $\sigma=\sigma^*$, we get $$\begin{aligned} &u(t,x)-w_{\#}(t,x)-\frac\beta2\left(|x|^2+|y|^2+|x-\bar x_\beta|^2+|y-\bar x_\beta|^2+|t-\bar t_\beta|^2+|s-\bar s_\beta|^2\right)\\ &-\left(K\sqrt{\beta}+ K\left(\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\varepsilon} \right)+\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}\right)T\\ \le&K\e+K\eta+\mu_0.\end{aligned}$$ Sending $\beta\to0$ and choosing $\e=\eta=\sqrt {{{\Delta t}}}$, we get the desired estimate.
[**Case 2:** ]{} ${{\overline x}}_\beta =0$. First, we observe that, assuming $$\label{hcont} \sigma > K\sqrt{\beta}+ K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\varepsilon}+\frac{\Delta x}{\eps}\right)$$ (which is compatible with $\sigma>\sigma^*$), there exists a $\bar A\in \mathbb{R}$ such that $$\label{Ahyp} \frac{\bar s_\beta-\bar t_\beta}{\eta} -K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\varepsilon}\right)+K\sqrt{\beta}>\bar A > \frac{\bar s_\beta-\bar t_\beta}{\eta} -\sigma+K\sqrt{\beta}.$$ Using the fact that $(t,x)\to\psi(t,\bar s_\beta, x)$ has a maximum in $(\bar t_\beta,\bar x_\beta)$ and that $u$ is a sub solution, we get $$\label{sub3} \frac{\bar t_\beta-\bar s_\beta}{\eta} +\sigma+ F_A\left(\partial_x\varphi(\bar t_\beta,0)\right)\leq 0,$$ with $\varphi(t,x)=w_\#(\bar s_\beta,x)+\frac{(t-\bar s_\beta)^2}{2\eta}+\beta |x|^2+\sigma t$ and from and , $$\label{ght} \bar A> F_A\left(\partial_x\varphi(\bar t_\beta , 0)\right).$$ We use the definition of $F_A$ and the coercivity of the Hamiltonians to obtain the existence of values $\lambda_i$ such that $$\label{Hbound2} H_i(\lambda_i)=H_i^+(\lambda_i)=\bar A$$ (cf. Fig. \[figH\]) that will be useful in the remainder of the proof. ![An example of $H^+_i$ functions.[]{data-label="figH"}](H-fun.pdf){height="6.5cm"} We now identify the right test function for this case. We duplicate the space variable differently from Case 1. We consider, for $\e\in(0,1)$, $$\begin{aligned} \psi_2(t,s,x,y)&=u(t,x)-w_\#(s,y)- \frac{(t-s)^2}{2\eta}-\frac{d(x,y)^2}{2\e}- \frac \beta 2( |x|^2+|y|^2)\\ & - \sigma t-\left(h(x)+h(y)\right)-\frac \beta 2(t-{{\overline t}}_\beta)^2-\frac \beta 2(s-{{\overline s}}_\beta)^2,\\ &\hbox{ for }(t,s,x,y)\in[0,T)\times \{t_n: n=0,\dots, N_T \}\times J\times J.\end{aligned}$$ where $h(x)=\lambda_i x$ if $x\in J_i$ and the $\lambda_i$ are defined in . We denote by $({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})$ the maximum point of $\psi_2$ (we keep the same notation as in the previous case, but the points are possibly different).
We remark that $({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})\to({{\overline t}}_\beta,{{\overline s}}_\beta,{{\overline x}}_\beta,{{\overline x}}_\beta)$ as $\e\to 0$. [**[Step 2.]{} (Viscosity inequalities).** ]{} We claim that for $\sigma$ large enough, the supremum of $\psi_2$ is achieved for ${{\overline t}}=0$ or ${{\overline s}}=0$. We prove the assertion by contradiction. Suppose ${{\overline t}}>0$ and ${{\overline s}}>0$. We can have different scenarios: if ${{\overline x}}$ and ${{\overline y}}$ belong to the same arc (junction point excluded), the case is included in Case 1. If instead $\bar x\in J_i\setminus\{0\}$, $\bar y\in J_j$ ($\bar x$ and $\bar y$ belong to different arcs), we can repeat the same argument to obtain with the test function $\psi_2$. We have: $$\frac{\bar t- \bar s}{\eta}+\beta(\bar t-\bar t_\beta)+\sigma + H_i\left(\frac{d(\bar x,\bar y)}{\varepsilon}+2\beta |\bar x|+\lambda_i\right)\leq 0.$$ Observing that the argument inside the Hamiltonian is bigger than $\lambda_i$, we use and arrive at $$0\geq\frac{\bar t- \bar s}{\eta}+\beta(\bar t-\bar t_\beta)+\sigma + H_i^+(\lambda_i)= \frac{\bar t_\beta- \bar s_\beta}{\eta}+\sigma + \bar A +K\sqrt{\beta},$$ which contradicts . Hence this case cannot occur. We pass to the last case: $\bar x=0$, $\bar y\in J_i\setminus\{0\}$. First of all, we notice that the *basic estimates* - are still valid for the maximum point $({{\overline t}},{{\overline s}},{{\overline x}},{{\overline y}})$ of $\psi_2$, since the added terms $h(x)$, $h(y)$ are easily included in the other linear elements of the estimates. In this case, the difficulty comes from comparing two Hamiltonians evaluated, respectively, at the junction point and on one arc.
Using the subsolution property with the test function $\psi_2$, we obtain a first inequality: $$\label{eq1} \frac{\bar t-\bar s}{\eta}+\beta(\bar t-\bar t_\beta)+\sigma + F_A\left(\frac{-|\bar y|}{\varepsilon}+\lambda_i\right)\leq 0,$$ where $$F_A\left(\frac{-|\bar y|}{\varepsilon}+\lambda_i\right)=\max\left(A,\, \max_j\left(H_j^-\left(\frac{-|\bar y|}{\varepsilon}+\lambda_i\right)\right)\right).$$ From the definition of $F_A$, we also have $$\label{eq2} \frac{\bar t-\bar s}{\eta}+\beta(\bar t-\bar t_\beta)+\sigma + H^-_i\left(\frac{-|\bar y|}{\varepsilon}+\lambda_i\right)\leq 0.$$ Since $\bar y\in J_i\setminus\{0\}$, with the same argument used to obtain (but for the test function $\psi_2$) and using the consistency result, we have $$\frac{\bar t-\bar s}{\eta}+\beta(\bar s-\bar s_\beta)+H_i\left(\frac{-|\bar y|}{\varepsilon}-2\beta \bar y+\lambda_i\right)\geq K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\eps}\right).$$ Now, recalling that $H_i^+(\lambda_i)=\bar A$, $$\begin{aligned} &\frac{\bar t-\bar s}{\eta}+\beta(\bar s-\bar s_\beta) +H^+_i\left(\frac{-|\bar y|}{\varepsilon}-2\beta \bar y+\lambda_i\right)-K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\eps}\right)\\ \leq& \frac{\bar t-\bar s}{\eta}+\beta(\bar s-\bar s_\beta)+H^+_i\left(\lambda_i\right)-K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\eps}\right)\\ \leq&\frac{\bar t_\beta-\bar s_\beta}{\eta}+K\sqrt{\beta}+\bar A-K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\eps}\right)\\ &<0 \end{aligned}$$ for $\e$ small enough, where we used $\beta(\bar s-\bar s_\beta)\leq K\sqrt \beta$ (basic estimates).
We can therefore claim that $$\label{eq3} \frac{\bar t-\bar s}{\eta}+\beta(\bar s-\bar s_\beta)+H_i^-\left(\frac{-|\bar y|}{\varepsilon}-2\beta \bar y+\lambda_i\right)\geq K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\eta}+\frac{(\Delta x)^2}{\eps}\right).$$ We can finally subtract from , obtaining the desired estimate on $\sigma$: $$\sigma \leq K\sqrt{\beta}+ K\left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\e}+\frac{\Delta t}{\varepsilon}+\frac{\Delta x}{\eps}\right)=:\sigma^*.$$ In this case we obtain a contradiction with . Then, assuming $\sigma>\sigma^*$, at least one of ${{\overline t}}$ and ${{\overline s}}$ is equal to zero. [**Step 3. (Conclusion).**]{} We obtain the same estimate as in Case 1. It just remains to prove the general case (for which we do not assume that $u_0(x)\ge w_\#(0,x),\forall x\in J^{{\Delta x}}$). Remarking that $\bar u=u+\mu_1$, with $\mu_1=\sup_{x\in J^{{\Delta x}}}(w_\#(0,x)-u_0(x))$, is a solution of the same equation as $u$ but satisfying $\bar u(0,x)\ge w_\#(0,x), \forall x\in J^{{\Delta x}}$, we deduce that $\bar u$ satisfies $$\begin{aligned} &\sup_{(t,x )\in \mathcal G^{\Delta }}(u(t,x)+\mu_1-w(t,x))\\ \leq &C \left(\frac{{{\mathbb E}}(\Delta t,\Delta x)}{\sqrt{\Delta t}}+\sqrt{\Delta t}\right) + \sup_{x\in J^{{{\Delta x}}}}|u_0(x)+\mu_1-w(0,x)|,\end{aligned}$$ which implies and ends the proof of the Theorem. Application to traffic flows models {#Sect:traffic} =================================== Traffic flow models aim to understand the motion of agents in structures such as highways or roads and to develop optimal networks avoiding congestions. Typically in these models, differently from *crowd motion models* [@cristiani2011multiscale], the dynamics is described on a finite number of one-dimensional pathways (e.g., travel lanes) connected by junction points. In a microscopic model, the behavior of every single agent is considered, whereas a *macroscopic* model considers the density of the agents.
The relation between the microscopic and macroscopic scales is still an active subject of research. In the literature, most of the macroscopic models describe the evolution of the density of cars $\rho:J\times [0,T]\rightarrow [0,\rho_{\max}]$ through a system of conservation laws, see for instance [@garavello2016models; @camilli2016discrete]. Recently, a different framework based on HJ equations has been proposed. The two models are connected by duality properties. Following [@imbert2013hamilton], we introduce a *cumulative density function* $$\label{cum} u(x,t)=g(t)+\frac{1}{\gamma_i}\int_0^x \rho(y,t)dy, \quad \hbox{ for }x\in J_i$$ where $\gamma_i$ are constant parameters modeling the incoming/outgoing fluxes for each arc $J_i$ at the junction points. The function $g(t)$ has to be determined with respect to the conservation of the total density and the initial conditions. In the same work the authors show that, if $\rho$ solves a conservation law with flux $f(\rho)$, then $u(x,t)$ defined in is formally the solution of with the Hamiltonian $$H_i(p)=\left\{ \begin{array}{ll} -\frac{1}{\gamma_i}f(\gamma_i p) & \hbox{ for $p\geq 0$ ('incoming' edges)},\\ -\frac{1}{\gamma_i}f(-\gamma_i p) & \hbox{ for $p<0$ ('outgoing' ones) }. \end{array}\right.$$ In [@forcadel2015homogenization], a similar macroscopic model of the form is derived from a microscopic model by a homogenization procedure. The microscopic model is based on a system of ordinary differential equations describing the flow of each single agent. The $f$-functions are often called *fundamental diagrams* (cf. [@treiber2013traffic] for various examples); they establish a relation between speed and density of the agents. Network framework and boundary conditions ----------------------------------------- In realistic situations, traffic flows are defined on finite networks.
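For concreteness, the flux-to-Hamiltonian map above can be sketched with the classical Greenshields fundamental diagram $f(\rho)=v_{\max}\rho(1-\rho/\rho_{\max})$ (an illustrative choice of ours; the paper does not fix a specific $f$, and all names here are hypothetical):

```python
def greenshields(rho, rho_max=1.0, v_max=1.0):
    # Greenshields fundamental diagram: flux as a function of density
    return v_max * rho * (1.0 - rho / rho_max)

def hamiltonian(p, gamma):
    # H_i(p) = -f(gamma*p)/gamma  for p >= 0 ('incoming' edges),
    #          -f(-gamma*p)/gamma for p <  0 ('outgoing' ones)
    if p >= 0:
        return -greenshields(gamma * p) / gamma
    return -greenshields(-gamma * p) / gamma

# the resulting Hamiltonian is even, vanishes at p = 0, and attains its
# minimum -f(rho_max/2)/gamma where the flux is maximal
gamma = 2.0
assert hamiltonian(0.0, gamma) == 0.0
assert abs(hamiltonian(0.25, gamma) - hamiltonian(-0.25, gamma)) < 1e-12
assert abs(hamiltonian(0.25, gamma) + 0.125) < 1e-12   # -f(1/2)/2 = -0.125
```

With a concave flux such as this one, the resulting $H_i$ is convex and coercive, matching the standing assumptions on the Hamiltonians.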
We briefly extend our framework to the case of networks: we call $\N$ a network composed of $N$ edges $E_i$, each isometric to a real interval $[a_i,b_i]$ or, in the unbounded case, to $[a_i,\infty)$, and a collection of $M$ points $V_i$, called nodes, such that $$\N:=\bigcup_{i=1,...,N}E_i,\quad \V:=\{V_i, \;i=1,...,M\}$$ and for any $ i,j \in\{ 1,...,N\},\,i\neq j,$ if $E_i\cap E_j\neq \emptyset$, then $E_i\cap E_j=V_l$ with $ V_l \in \V$. Clearly, a junction can be represented by a network $\N$. Some details explaining how to extend the results obtained on a junction to a general network are contained in [@imbert2013flux Appendix B]; intuitively, since the complexity of the dynamics can be described locally on one node of the network, the presence of multiple nodes does not drastically change the study. Some conditions on the in-flow and out-flow boundaries need to be defined. We call $\B\subset \V$ the set that contains the *boundary nodes* of the network. We also assume that these nodes are simply connected, i.e. we assume that $$\hbox{if } V_i\in \B, \hbox{ there exists one and only one }j\in\{1,...,N\} \hbox{ s.t. }V_i\cap E_j\neq \emptyset.$$ Hence, we define two possible types of boundary conditions on $\B$. Let $\B_N,\,\B_D$ be two sets such that $\B=\B_N\cup\B_D$, $\B_N\cap\B_D=\emptyset$, and let us consider $$\left\{ \begin{array}{ll}\label{BC} u_x(t,x)=f_N(t), & \hbox{if }x\in\B_N, \\ u(t,x)=f_B(t), & \hbox{if }x\in\B_D. \end{array}\right.$$ The first equation corresponds to a Neumann-like boundary condition and models traffic fluxes entering the network. The second one is a Dirichlet-like condition and represents the cost to enter from the inflow nodes.\ We consider the following problem, strictly related to : $$\label{eq:hjnet2} \left\{ \begin{array}{ll} \partial_t u(t,x)+H_i(u_x(t,x))=0 & \hbox{ in } (0,T) \times E_i\setminus \V,\\ \partial_t u(t,x)+F_A(u_x(t,x))=0 & \hbox{ in } (0,T) \times \V\setminus\B, \end{array} \right.$$ provided with .
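The bookkeeping above can be sketched in a few lines: edges as pairs of node labels, and boundary nodes identified as the simply connected ones, i.e. the nodes touching exactly one edge. The toy three-edge geometry and all names below are our own choices.

```python
# Minimal sketch of the network structure: a junction V2 with three
# incident edges, so V1, V3, V4 are boundary nodes.
edges = {
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V2", "V4"),
}

def node_degrees(edges):
    """Number of incident edges for each node."""
    deg = {}
    for a, b in edges.values():
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg

def boundary_nodes(edges):
    """Simply connected nodes: they intersect one and only one edge."""
    return {v for v, d in node_degrees(edges).items() if d == 1}

print(sorted(boundary_nodes(edges)))   # -> ['V1', 'V3', 'V4']
```

On such a set one would then mark each boundary node as Neumann-type ($\B_N$) or Dirichlet-type ($\B_D$) before imposing the conditions above.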
The theory discussed for a junction can also be extended to -. Some hints can be found in [@imbert2013flux]. Numerical tests {#Sect:tests} =============== In this section, we present some tests showing the convergence and the efficiency of the scheme. First, we consider two simple tests to verify numerically the convergence error estimate proved in Section \[sect:bounds\]. Then, we focus on a complex case coming from traffic flows in a more realistic scenario. #### [**Test 1**]{} We consider a basic network composed of two edges connecting the nodes $(-1,0)$ and $(1,0)$, with a junction at $(0,0)$. This case can be seen as a 1D problem in $\Omega=\Omega_1\cup\Omega_2=[-1,0]\cup[0,1]=[-1,1]$ with a discontinuity of the Hamiltonian at the origin.\ We consider the following Hamiltonian on $\Omega$: $$\label{Htest1} H(x,p)=\left\{ \begin{array}{lll} \frac{p^2}{2}-\frac{1}{2}, & &x\in \Omega_1\\ \frac{p^2}{2}-1, & &x\in \Omega_2.\\ \end{array}\right.$$ This example has also been used as a benchmark in [@imbert2015error]. Using the Legendre transform, we rewrite as $$H(x,p)=\left\{ \begin{array}{lll} \displaystyle\max_{\alpha_1\in{{\mathbb{R}}}}\left(\alpha_1 p-\frac{\alpha_1^2}{2}\right)-\frac{1}{2}, & &x\in \Omega_1\\ \displaystyle\max_{\alpha_2\in{{\mathbb{R}}}}\left(\alpha_2 p-\frac{\alpha_2^2}{2}\right)-1, & &x\in \Omega_2. \end{array}\right.$$ We choose the initial condition $$u_0(x)=\sin(\pi|x|),$$ and we impose Dirichlet boundary conditions $u(t,-1)=u(t,1)=0$. ![Initial solution and numerical solution at time $t=0.2$ (above) with various choices of the parameter $A$. Solution (stationary) at time $t=2$ (below) with various choices of the parameter $A$.[]{data-label="figA"}](figA.pdf "fig:"){height="5.5cm"} ![Initial solution and numerical solution at time $t=0.2$ (above) with various choices of the parameter $A$. Solution (stationary) at time $t=2$ (below) with various choices of the parameter $A$.[]{data-label="figA"}](figA2.pdf "fig:"){height="5.5cm"} In Fig.
\[figA\], we show the numerical solution at times $t=0.2$ and $t=2$ for different choices of the parameter $A$ at the junction point $x=(0,0)$. We can observe how the asymmetry of the Hamiltonian with respect to the origin induces an asymmetric behavior of the solution. We can also observe how the choice of the parameter $A$ influences the value function of the problem *globally*. In fact, when $A=0$ the optimal control at $x=0$ is simply $\alpha_0=0$, which corresponds to a zero cost, and since $u_0(0)=0$, the solution satisfies $u(t, 0)=0$ for each $t \in [0, T].$ In the case $A<0$ the situation is different: the control $ \alpha_0=0$ *does not correspond* to a null cost, and a trajectory that remains at the junction point entails a cost. Furthermore, we observe that for values of $|A|$ sufficiently large, the stationary solution no longer changes. This is due to the fact that remaining at the junction point is no longer a convenient choice. In the cases $A=0$ and $A=-0.2$, we show the convergence rates. In the absence of an analytic exact solution, we compare the approximate solution $w(T,x)$ with an approximation $u(T,x)$ obtained on a very fine grid with $\Delta x=10^{-4}$ and $\Delta t=\Delta x$. We evaluate the error with respect to the uniform discrete norm defined as follows: $$\label{err} E_\infty^{\Delta}:=\underset{x \in J^{\Delta x}}{\max}(|w(T,x)- u(T,x)|).$$ ![Graphic of $E_\infty^{\Delta}$ ($\circ$) with respect to the space step, together with the line $K \Delta x$. Left: $A=0$, with $K=4.5$; right: $A=-0.2$, with $K=2$.[]{data-label="figerr2"}](error2.pdf "fig:"){height="5cm"} ![Graphic of $E_\infty^{\Delta}$ ($\circ$) with respect to the space step, together with the line $K \Delta x$. Left: $A=0$, with $K=4.5$; right: $A=-0.2$, with $K=2$.[]{data-label="figerr2"}](error3.pdf "fig:"){height="5cm"} We show the results in Figure \[figerr2\] for $T=0.2$ and $\Delta t=2.5\Delta x$.
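The empirical convergence order can be extracted by fitting the slope of $\log E_\infty^{\Delta}$ against $\log \Delta x$. The sketch below uses synthetic error values generated exactly as $E = K\Delta x$ with $K=4.5$, standing in for the measured errors of the actual runs.

```python
import numpy as np

# Synthetic errors with exactly linear decay; in practice these would be
# the measured E_infty values for a sequence of space steps.
dx = np.array([0.1, 0.05, 0.025, 0.0125])
E = 4.5 * dx

# Least-squares fit of log(E) = order*log(dx) + log(K).
order, log_K = np.polyfit(np.log(dx), np.log(E), 1)
# order is 1 for these data, and np.exp(log_K) recovers K = 4.5
```

For genuinely first-order data the fitted slope is close to $1$ and $e^{\log K}$ gives the constant in front of $\Delta x$.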
In the case $A=0$ we observe a linear decay of the $E^{\Delta}_\infty$ error; in particular, the errors fit a linear regression curve of ratio $K_1=4.5$. We observe the same convergence order in the case $A=-0.2$, although with a smaller ratio, around $K_1=2$. We underline that, by Theorem \[teo:bounds1\], we expect a convergence of order $1$ *independently of the choice of ${{\Delta t}}$*. This is confirmed by the test. #### [**Test 2**]{} We now consider a simple junction network composed of three edges connecting the nodes $(0,1)$, $(-1,-1)$, $(1,-1)$, with the junction point placed at $(0,0)$. ![Projection on the state coordinate plane of the Initial Condition (top left), numerical solution at time $t=0.4$ (top right), $t=0.8$ (bottom left) and $t=1.25$ (bottom right).[]{data-label="fig1"}](t1.pdf "fig:"){height="5cm"} ![Projection on the state coordinate plane of the Initial Condition (top left), numerical solution at time $t=0.4$ (top right), $t=0.8$ (bottom left) and $t=1.25$ (bottom right).[]{data-label="fig1"}](t2.pdf "fig:"){height="5cm"} ![Projection on the state coordinate plane of the Initial Condition (top left), numerical solution at time $t=0.4$ (top right), $t=0.8$ (bottom left) and $t=1.25$ (bottom right).[]{data-label="fig1"}](colorbar.pdf "fig:"){height="5cm"}\ ![Projection on the state coordinate plane of the Initial Condition (top left), numerical solution at time $t=0.4$ (top right), $t=0.8$ (bottom left) and $t=1.25$ (bottom right).[]{data-label="fig1"}](t3.pdf "fig:"){height="5.2cm"} ![Projection on the state coordinate plane of the Initial Condition (top left), numerical solution at time $t=0.4$ (top right), $t=0.8$ (bottom left) and $t=1.25$ (bottom right).[]{data-label="fig1"}](t4.pdf "fig:"){height="5.2cm"} ![Projection on the state coordinate plane of the Initial Condition (top left), numerical solution at time $t=0.4$ (top right), $t=0.8$ (bottom left) and $t=1.25$ (bottom
right).[]{data-label="fig1"}](colorbar.pdf "fig:"){height="5.2cm"} We denote by $J_1$ the edge connecting $(0,1)$ to $(0,0)$ and by $J_2$, $J_3$ the edges connecting $(0,0)$ to $(1,-1)$ and $(-1,-1)$, respectively. The cost functions $L_i$, $i=0, 1,2,3$, are defined as follows: $$L_i(\alpha_i)=\left\{ \begin{array}{ll} \frac{\alpha_i^2}{2}+1 &\hbox{ if }i=1,3,\\ \frac{\alpha_i^2}{2}+2 &\hbox{ if }i=2,0. \end{array} \right.$$ We impose Dirichlet boundary conditions on the boundary nodes: $$u(t,x)=\left\{ \begin{array}{ll} 0 &\hbox{ if }x\in\{(-1,-1),(1,-1)\},\\ \sqrt{2}+1 &\hbox{ if }x=(0,1). \end{array} \right.$$ The initial value $u_0$ is chosen as the restriction of $1+x_2$ to $J$, where we write $x=(x_1, x_2)$. In Figure \[fig1\], we show the color map of the initial condition and of the numerical solution at times $t=0.4,0.8,1.25$, projected on the state coordinate plane. We can observe that the initial datum $u_0$ (Fig. \[fig1\], top left) quickly evolves to the stationary solution (Fig. \[fig1\], bottom right), which represents a weighted distance from the boundary points, with exit costs equal to the Dirichlet boundary conditions. We compare the approximate solution at $T=2$ with the exact solution of the corresponding stationary problem; this makes sense since the approximate solution has already reached the steady state at time $T=2$. The exact steady-state solution is $$\label{eqex} u(x)=\left\{ \begin{array}{ll} \sqrt{2}+x_2, &\hbox{ if }x \in J_1,\\ \min\left(2\sqrt{(x_1-1)^2+(x_2+1)^2},\sqrt{2}+2\sqrt{x_1^2+x_2^2}\right), &\hbox{ if }x\in J_2,\\ \sqrt{(x_1+1)^2+(x_2+1)^2}, &\hbox{ if }x\in J_3. \end{array} \right.$$ In Figure \[fig2\], we show the behavior of the error for various values of $\Delta$, fixing the ratio between the space and the time step as $\Delta t=2.5\Delta x$. We underline that this is possible thanks to the stability of SL methods for large time steps (i.e.
the classical hyperbolic CFL condition [@falcone2014semi] need not be satisfied). As in the first test, we observe a linear decay of the $E^{\Delta}_\infty$ error. ![Graphic of $E_\infty^{\Delta}$ ($\circ$) with respect to the space step, together with the line $K \Delta x$ with $K=6.5$.[]{data-label="fig2"}](error1.pdf){height="6cm"} #### [**Test 3**]{} We conclude this section with a more realistic test in which multiple edges compose a complex traffic network. We consider the main road network of the city of Rouen (Figure \[figR\], above) and, after some simplifications, we arrive at the network represented in Figure \[figR\], below. Here the edges drawn with a continuous blue line are large-capacity roads, and those drawn with a dashed red line are smaller roads. The network is contained in the planar set $[0,1200]\times[0,2100]\subset {{\mathbb{R}}}^2$, which corresponds to the pixels of the reference map from which we extracted the network. For practical purposes, we rescale it to the domain $[0,1]^2$. ![Map of the city of Rouen and simplified network modeling the structure of the road network. In blue/solid line the bigger roads, in red/dashed line the smaller roads. The map $\copyright$ OpenStreetMap contributors. []{data-label="figR"}](rouen.png "fig:"){height="5.6cm"}\ ![Map of the city of Rouen and simplified network modeling the structure of the road network. In blue/solid line the bigger roads, in red/dashed line the smaller roads. The map $\copyright$ OpenStreetMap contributors. []{data-label="figR"}](rouenNet.pdf "fig:"){height="6cm"} Using the traffic flow interpretation of an HJ equation, as sketched in Section \[Sect:traffic\], we choose the initial datum $ u(x,0)=v(x)$ where $v(x)$ is the solution of the following stationary HJ equation $$\left\{\begin{array}{ll} |v_x(x)|=0.7-\frac{(x-(0.5,0.5))^2}{2}, \quad & x\in J\\ v(x)=0, \quad & x\in \B.
\end{array}\right.$$ This case can be viewed as a special case of the stationary state of , for which the analysis is simpler since the Hamiltonian is continuous (cf. [@camilli2013approximation] for a detailed presentation). This choice of the initial datum models a higher concentration of vehicles near the city center (the center of the domain). We are interested in the evolution of the *density* of vehicles $\rho$. We could derive it using $\eqref{cum}$; however, this procedure may be nontrivial on the junctions (cf. [@costeseque2015convergent; @imbert2013hamilton]) and is still a point that deserves further investigation. Instead, we adopt a heuristic numerical procedure, using the relation $\rho(x,t)=-u_x(x,t)$ along every edge and defining $$\rho(x,t)=-\sum_i \min(\partial_i u(x,t),0), \quad \hbox{if } x\in \V,$$ where the spatial derivatives are approximated through standard finite differences. The numerical test that we present confirms that this procedure provides reasonable results. We study the network in the case of an evacuation. We impose null Dirichlet boundary conditions at the exits of the network, corresponding to the red squares of Figure \[figR\] (below). We adopt the simple Hamiltonian $$H_i(p)=\left\{ \begin{array}{ll} \frac{1}{\lambda_i}p^2-p &\hbox{ if $p\geq 0$ },\\ \frac{1}{\lambda_i}p^2+p &\hbox{ if $p<0$,} \end{array} \right.$$ where $\lambda_i$ is the capacity of the arc $i$, namely $\lambda_i=4/5$ on the red edges and $\lambda_i=1$ elsewhere.
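Two small sketches of the ingredients just introduced: the heuristic vertex-density reconstruction, and a check that this Hamiltonian matches LWR-type fluxes (we assume $\gamma_i=1$; all function names, grids and sample values are our own choices for illustration).

```python
# The Hamiltonian above in compact form: -p for p >= 0 and +p for p < 0
# is exactly -|p|, so H_i(p) = p^2/lambda_i - |p|.
def H(p, lam):
    return p * p / lam - abs(p)

# For p >= 0, H with lam = 1 equals -f(p) for the LWR flux f(r) = (1-r)r,
# and lam = 4/5 corresponds to the reduced-capacity flux f(r) = (1-5/4 r)r.
for p in [0.0, 0.2, 0.5, 0.9]:
    assert abs(H(p, 1.0) + (1.0 - p) * p) < 1e-12
    assert abs(H(p, 0.8) + (1.0 - 1.25 * p) * p) < 1e-12

def vertex_density(u_edges, dx):
    """Heuristic rho(V) = -sum_i min(d_i u, 0): one-sided differences of u
    along each edge, with samples ordered away from the vertex."""
    d = [(u[1] - u[0]) / dx for u in u_edges]
    return -sum(min(di, 0.0) for di in d)

# three edges meeting at one vertex, with slopes about -2, +1 and -0.5;
# only the negative (outgoing-mass) slopes contribute
u_edges = [[1.0, 0.8], [1.0, 1.1], [1.0, 0.95]]
rho_v = vertex_density(u_edges, 0.1)
```

In the actual scheme the same one-sided differences would be taken on the grid $J^{\Delta x}$, edge by edge, at every vertex of the network.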
![Evolution of the approximation of the density of vehicles at time $t=0,0.5, 1, 1.5$.[]{data-label="figR2"}](RouenG0.pdf "fig:"){width="5.8cm" height="4cm"} ![Evolution of the approximation of the density of vehicles at time $t=0,0.5, 1, 1.5$.[]{data-label="figR2"}](RouenG05.pdf "fig:"){width="5.8cm" height="4cm"} ![Evolution of the approximation of the density of vehicles at time $t=0,0.5, 1, 1.5$.[]{data-label="figR2"}](RouenG1.pdf "fig:"){width="5.8cm" height="4cm"} ![Evolution of the approximation of the density of vehicles at time $t=0,0.5, 1, 1.5$.[]{data-label="figR2"}](RouenG15.pdf "fig:"){width="5.8cm" height="4cm"} We observe that it is possible to obtain these Hamiltonians following [@imbert2013hamilton] for the classic LWR model (cf. [@garavello2016models]), with flux $(1-\rho)\rho$ on the blue arcs and $(1-5/4 \rho)\rho$ on the red ones, for the choice $A=-0.4$. The red edges thus have a reduced capacity compared to the blue ones. We uniformly discretize the arcs with step ${{\Delta x}}=0.01$ and sample the time with ${{\Delta t}}=0.05$. Thanks to the stability properties of the SL scheme, no CFL condition is needed and we can adopt large time steps, which is fundamental to approximate the behavior of the system over long time horizons. In Figure \[figR2\], we show the initial distribution of the density and its evolution at various times. We observe the following. First, the density, starting from a smooth configuration (the initial datum is, by construction, $\rho(0,x)=0.7-0.5(x-(0.5,0.5))^2$), rapidly concentrates, reaching some areas of maximal density. These congested areas typically appear before a junction point, an intrinsic feature already observed in the traffic flow literature. This phenomenon becomes even more evident in the case of merging bifurcations where the outgoing roads have a reduced capacity.
This is the case for the junction near the point $(0.5, 0.82)$, where some congested areas form and take longer to disappear. Acknowledgments =============== [The first author was supported by the INdAM GNCS project “Metodi numerici per equazioni iperboliche e cinetiche e applicazioni” and by the PGMO project VarPDEMFG. The second and third authors were partially supported by the European Union through the European Regional Development Fund (ERDF, HN0002137) and by the Normandie Regional Council (via the M2NUM project).]{}